
User Guide

You’ll need a PoolManager instance to make requests. This object handles all of the details of connection pooling and thread safety so that you don’t have to:

```python
import urllib3

# Creating a PoolManager instance for sending requests.
http = urllib3.PoolManager()

# Sending a GET request and getting back response as HTTPResponse object.
resp = http.request("GET", "https://httpbin.org/robots.txt")

# Print the returned data.
print(resp.data)
# b"User-agent: *\nDisallow: /deny\n"
```

request() returns an HTTPResponse object. The Response Content section explains how to handle various responses. You can use request() to make requests using any HTTP verb:

```python
import urllib3

http = urllib3.PoolManager()
resp = http.request(
    "POST",
    "https://httpbin.org/post",
    fields={"hello": "world"}  # Add custom form fields
)
print(resp.data)
# b'{..., "form": {"hello": "world"}, ...}'
```

The Request Data section covers sending other kinds of request data, including JSON, files, and binary data.

Note For quick scripts and experiments you can also use the top-level urllib3.request(). It uses a module-global PoolManager instance, so its side effects could be shared across any dependencies relying on it. To avoid side effects, create a new PoolManager instance and use it instead. In addition, the method does not accept the low-level **urlopen_kw keyword arguments. System CA certificates are loaded by default.
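For instance, here is a minimal sketch of creating a dedicated PoolManager so that configuration and connection pools stay private to your code rather than being shared through the module-global instance:

```python
import urllib3

# A dedicated PoolManager isolates connection pooling and configuration
# from anything else that uses the module-global urllib3.request().
http = urllib3.PoolManager()

# It exposes the same request() interface as the top-level helper:
# resp = http.request("GET", "https://httpbin.org/robots.txt")
```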

Response Content

```python
import urllib3

# Making the request (the request function returns an HTTPResponse object).
resp = urllib3.request("GET", "https://httpbin.org/ip")

print(resp.status)
# 200
print(resp.data)
# b'{\n  "origin": "..."\n}\n'
print(resp.headers)
# HTTPHeaderDict({...})
```

JSON Content

```python
import urllib3

resp = urllib3.request("GET", "https://httpbin.org/ip")

print(resp.json())
# {'origin': '...'}
```

Alternatively, custom JSON libraries such as orjson can be used to encode the request body and to decode and deserialize the data attribute of the response:

```python
import orjson
import urllib3

encoded_data = orjson.dumps({"attribute": "value"})

resp = urllib3.request(
    method="POST",
    url="http://httpbin.org/post",
    body=encoded_data
)
print(orjson.loads(resp.data)["json"])
# {'attribute': 'value'}
```

Binary Content

```python
import urllib3

resp = urllib3.request("GET", "https://httpbin.org/bytes/8")

print(resp.data)
# b"\xaa\xa5H?\x95\xe9\x9b\x11"
```

Using io Wrappers with Response Content

Sometimes you want to use io.TextIOWrapper or similar objects like a CSV reader directly with HTTPResponse data. Making these two interfaces play nicely together requires setting the auto_close attribute to False. By default, HTTP responses are closed after reading all bytes; this disables that behavior:

```python
import io
import urllib3

resp = urllib3.request("GET", "https://example.com", preload_content=False)
resp.auto_close = False

for line in io.TextIOWrapper(resp):
    print(line)
```
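The prose above mentions CSV readers; as an illustration of the same wrapping pattern, here is a self-contained sketch that substitutes an in-memory io.BytesIO buffer for the streamed HTTPResponse (the CSV contents are made up):

```python
import csv
import io

# io.BytesIO stands in for an HTTPResponse opened with preload_content=False.
buffer = io.BytesIO(b"name,age\nalice,30\nbob,25\n")

# Wrap the byte stream in a text interface and feed it to csv.reader.
rows = list(csv.reader(io.TextIOWrapper(buffer, encoding="utf-8")))
print(rows)
# [['name', 'age'], ['alice', '30'], ['bob', '25']]
```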

Request Data

Headers

```python
import urllib3

resp = urllib3.request(
    "GET",
    "https://httpbin.org/headers",
    headers={"X-Something": "value"}
)
print(resp.json()["headers"])
# {'X-Something': 'value', ...}
```
```python
import urllib3

# Create an HTTPHeaderDict and add headers.
headers = urllib3.HTTPHeaderDict()
headers.add("Accept", "application/json")
headers.add("Accept", "text/plain")

# Make the request using the headers.
resp = urllib3.request(
    "GET",
    "https://httpbin.org/headers",
    headers=headers
)
print(resp.json()["headers"])
# {'Accept': 'application/json, text/plain', ...}
```

Cookies

Cookies are specified using the Cookie header with a string containing ;-delimited key-value pairs:

```python
import urllib3

resp = urllib3.request(
    "GET",
    "https://httpbin.org/cookies",
    headers={"Cookie": "session=f3efe9db"}
)
print(resp.json())
# {'cookies': {'session': 'f3efe9db'}}
```
```python
import urllib3

resp = urllib3.request(
    "GET",
    "https://httpbin.org/cookies/set/session/f3efe9db",
    redirect=False
)
print(resp.headers["Set-Cookie"])
# session=f3efe9db; Path=/
```
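Because the Cookie header is just a ;-delimited string, you can also build it from a dictionary with plain string joining; a small local sketch (the cookie names and values are illustrative):

```python
# Hypothetical cookie values for illustration.
cookies = {"session": "f3efe9db", "theme": "dark"}

# Join key-value pairs with "; " as the Cookie header format requires.
cookie_header = "; ".join(f"{name}={value}" for name, value in cookies.items())
print(cookie_header)
# session=f3efe9db; theme=dark
```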

Query Parameters

For GET , HEAD , and DELETE requests, you can simply pass the arguments as a dictionary in the fields argument to request() :

```python
import urllib3

resp = urllib3.request(
    "GET",
    "https://httpbin.org/get",
    fields={"arg": "value"}
)
print(resp.json()["args"])
# {'arg': 'value'}
```
For POST and PUT requests, you need to manually encode query parameters into the URL:

```python
from urllib.parse import urlencode

import urllib3

# Encode the args into URL grammar.
encoded_args = urlencode({"arg": "value"})

# Create a URL with args encoded.
url = "https://httpbin.org/post?" + encoded_args

resp = urllib3.request("POST", url)
print(resp.json()["args"])
# {'arg': 'value'}
```

Form Data

For PUT and POST requests, urllib3 will automatically form-encode the dictionary in the fields argument provided to request() :

```python
import urllib3

resp = urllib3.request(
    "POST",
    "https://httpbin.org/post",
    fields={"field": "value"}
)
print(resp.json()["form"])
# {'field': 'value'}
```

JSON

To send JSON in the body of a request, provide the data in the json argument to request() and urllib3 will automatically encode the data using the json module with UTF-8 encoding. In addition, when json is provided, the Content-Type header is set to application/json if not specified otherwise.

```python
import urllib3

resp = urllib3.request(
    "POST",
    "https://httpbin.org/post",
    json={"attribute": "value"},
    headers={"Content-Type": "application/json"}
)
print(resp.json())
# {'headers': {'Content-Type': 'application/json', ...},
#  'data': '{"attribute": "value"}', 'json': {'attribute': 'value'}, ...}
```

Files & Binary Data

For uploading files using multipart/form-data encoding you can use the same approach as Form Data and specify the file field as a tuple of (file_name, file_data) :

```python
import urllib3

# Reading the text file from local storage.
with open("example.txt") as fp:
    file_data = fp.read()

# Sending the request.
resp = urllib3.request(
    "POST",
    "https://httpbin.org/post",
    fields={"filefield": ("example.txt", file_data)}
)
print(resp.json()["files"])
# {'filefield': '...'}
```

While specifying the filename is not strictly required, it’s recommended in order to match browser behavior. You can also pass a third item in the tuple to specify the file’s MIME type explicitly:

```python
resp = urllib3.request(
    "POST",
    "https://httpbin.org/post",
    fields={"filefield": ("example.txt", file_data, "text/plain")}
)
```

For sending raw binary data simply specify the body argument. It’s also recommended to set the Content-Type header:

```python
import urllib3

with open("/home/samad/example.jpg", "rb") as fp:
    binary_data = fp.read()

resp = urllib3.request(
    "POST",
    "https://httpbin.org/post",
    body=binary_data,
    headers={"Content-Type": "image/jpeg"}
)
print(resp.json()["data"])
# data:application/octet-stream;base64,...
```

Certificate Verification

Note New in version 1.25: HTTPS connections are now verified by default (cert_reqs = "CERT_REQUIRED").

While you can disable certificate verification by setting cert_reqs = "CERT_NONE", it is highly recommended to leave it on. Unless otherwise specified, urllib3 will try to load the default system certificate stores. The most reliable cross-platform method is to use the certifi package, which provides Mozilla's root certificate bundle:

```shell
$ python -m pip install certifi
```

Once you have certificates, you can create a PoolManager that verifies certificates when making requests:

```python
import certifi
import urllib3

http = urllib3.PoolManager(
    cert_reqs="CERT_REQUIRED",
    ca_certs=certifi.where()
)
```

The PoolManager will automatically handle certificate verification and will raise SSLError if verification fails:

```python
import certifi
import urllib3

http = urllib3.PoolManager(
    cert_reqs="CERT_REQUIRED",
    ca_certs=certifi.where()
)

http.request("GET", "https://httpbin.org/")
# (No exception)

http.request("GET", "https://expired.badssl.com")
# urllib3.exceptions.SSLError ...
```

Note You can use OS-provided certificates if desired. Just specify the full path to the certificate bundle as the ca_certs argument instead of certifi.where() . For example, most Linux systems store the certificates at /etc/ssl/certs/ca-certificates.crt . Other operating systems can be difficult.
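For example, a sketch using the Linux bundle path mentioned above (the exact path varies by distribution, and the bundle is only read when a request is actually made):

```python
import urllib3

# Point ca_certs at the OS-provided bundle instead of certifi.where().
http = urllib3.PoolManager(
    cert_reqs="CERT_REQUIRED",
    ca_certs="/etc/ssl/certs/ca-certificates.crt"
)
```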

Using Timeouts

Timeouts allow you to control how long (in seconds) requests are allowed to run before being aborted. In simple cases, you can specify a timeout as a float to request() :

```python
import urllib3

resp = urllib3.request(
    "GET",
    "https://httpbin.org/delay/3",
    timeout=4.0
)
print(type(resp))
# <class 'urllib3.response.HTTPResponse'>

# This request will take more time to process than the timeout.
urllib3.request(
    "GET",
    "https://httpbin.org/delay/3",
    timeout=2.5
)
# MaxRetryError caused by ReadTimeoutError
```

For more granular control you can use a Timeout instance which lets you specify separate connect and read timeouts:

```python
import urllib3

resp = urllib3.request(
    "GET",
    "https://httpbin.org/delay/3",
    timeout=urllib3.Timeout(connect=1.0)
)
print(type(resp))
# <class 'urllib3.response.HTTPResponse'>

urllib3.request(
    "GET",
    "https://httpbin.org/delay/3",
    timeout=urllib3.Timeout(connect=1.0, read=2.0)
)
# MaxRetryError caused by ReadTimeoutError
```
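A Timeout instance simply stores the per-phase values, which you can inspect locally without making a request:

```python
import urllib3

t = urllib3.Timeout(connect=1.0, read=2.0)

# The connect and read phases are tracked separately.
print(t.connect_timeout)
# 1.0
print(t.read_timeout)
# 2.0
```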

If you want all requests to be subject to the same timeout, you can specify the timeout at the PoolManager level:

```python
import urllib3

http = urllib3.PoolManager(timeout=3.0)

http = urllib3.PoolManager(
    timeout=urllib3.Timeout(connect=1.0, read=2.0)
)
```

Retrying Requests

urllib3 can automatically retry idempotent requests. This same mechanism also handles redirects. You can control the retries using the retries parameter to request(). By default, urllib3 will retry requests 3 times and follow up to 3 redirects. To change the number of retries, just specify an integer:

```python
import urllib3

urllib3.request("GET", "https://httpbin.org/ip", retries=10)
```
To disable all retry and redirect logic, specify retries=False:

```python
import urllib3

urllib3.request(
    "GET",
    "https://nxdomain.example.com",
    retries=False
)
# NewConnectionError

resp = urllib3.request(
    "GET",
    "https://httpbin.org/redirect/1",
    retries=False
)
print(resp.status)
# 302
```
To disable redirects but keep the retrying logic, specify redirect=False:

```python
resp = urllib3.request(
    "GET",
    "https://httpbin.org/redirect/1",
    redirect=False
)
print(resp.status)
# 302
```

For more granular control you can use a Retry instance. This class allows you far greater control of how requests are retried. For example, to do a total of 3 retries, but limit to only 2 redirects:

```python
urllib3.request(
    "GET",
    "https://httpbin.org/redirect/3",
    retries=urllib3.Retry(3, redirect=2)
)
# MaxRetryError
```
To get the response back once the redirect limit is reached, instead of raising an exception, set raise_on_redirect=False:

```python
resp = urllib3.request(
    "GET",
    "https://httpbin.org/redirect/3",
    retries=urllib3.Retry(redirect=2, raise_on_redirect=False)
)
print(resp.status)
# 302
```

If you want all requests to be subject to the same retry policy, you can specify the retry at the PoolManager level:

```python
import urllib3

http = urllib3.PoolManager(retries=False)

http = urllib3.PoolManager(
    retries=urllib3.Retry(5, redirect=2)
)
```

Errors & Exceptions

```python
import urllib3

try:
    urllib3.request("GET", "https://nx.example.com", retries=False)
except urllib3.exceptions.NewConnectionError:
    print("Connection failed.")
# Connection failed.
```

Logging

If you are using the standard library logging module, urllib3 will emit several logs. In some cases this can be undesirable. You can use the standard logger interface to change the log level for urllib3's logger:

```python
import logging

logging.getLogger("urllib3").setLevel(logging.WARNING)
```
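Conversely, while debugging you can turn urllib3's logging up rather than down; this sketch uses logging.basicConfig as one way to attach a handler:

```python
import logging

# Attach a handler and lower the threshold so DEBUG records are emitted.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("urllib3").setLevel(logging.DEBUG)

# Subsequent urllib3 requests will now log connection details.
```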
