Requests-HTML: HTML Parsing for Humans (writing Python 3)!
Contents
- Requests-HTML: HTML Parsing for Humans (writing Python 3)!
- Installation
- Tutorial & Usage
- JavaScript Support
- Using without Requests
- Main Classes
- Utility Functions
- HTML Sessions
This library intends to make parsing HTML (e.g. scraping the web) as simple and intuitive as possible.
When using this library you automatically get:
- Full JavaScript support!
- CSS Selectors (a.k.a jQuery-style, thanks to PyQuery).
- XPath Selectors, for the faint of heart.
- Mocked user-agent (like a real web browser).
- Automatic following of redirects.
- Connection-pooling and cookie persistence.
- The Requests experience you know and love, with magical parsing abilities.
- Async Support
Installation
$ pipenv install requests-html ✨🍰✨
Only Python 3.6 is supported.
Tutorial & Usage
Make a GET request to python.org, using Requests:
>>> from requests_html import HTMLSession
>>> session = HTMLSession()
>>> r = session.get('https://python.org/')
Or try our async session:
>>> from requests_html import AsyncHTMLSession
>>> asession = AsyncHTMLSession()
>>> r = await asession.get('https://python.org/')
Async is especially fun when fetching several sites at the same time:
>>> from requests_html import AsyncHTMLSession
>>> asession = AsyncHTMLSession()
>>> async def get_pythonorg():
...     r = await asession.get('https://python.org/')
>>> async def get_reddit():
...     r = await asession.get('https://reddit.com/')
>>> async def get_google():
...     r = await asession.get('https://google.com/')
>>> asession.run(get_pythonorg, get_reddit, get_google)
Grab a list of all links on the page, as-is (anchors excluded):
>>> r.html.links
{'//docs.python.org/3/tutorial/', '/about/apps/', '//github.com/python/pythondotorg/issues', '/accounts/login/', '/dev/peps/', '/about/legal/', '//docs.python.org/3/tutorial/introduction.html#lists', '/download/alternatives', '//feedproxy.google.com/~r/PythonInsider/~3/kihd2DW98YY/python-370a4-is-available-for-testing.html', '/download/other/', '/downloads/windows/', '//mail.python.org/mailman/listinfo/python-dev', '/doc/av', '//devguide.python.org/', '/about/success/#engineering', '//wiki.python.org/moin/PythonEventsCalendar#Submitting_an_Event', '//www.openstack.org', '/about/gettingstarted/', '//feedproxy.google.com/~r/PythonInsider/~3/AMoBel8b8Mc/python-3.html', '/success-stories/industrial-light-magic-runs-python/', '//docs.python.org/3/tutorial/introduction.html#using-python-as-a-calculator', '/', '//pyfound.blogspot.com/', '/events/python-events/past/', '/downloads/release/python-2714/', '//wiki.python.org/moin/PythonBooks', '//plus.google.com/+Python', '//wiki.python.org/moin/', '//status.python.org/', '/community/workshops/', '/community/lists/', '//buildbot.net/', '/community/awards', '//twitter.com/ThePSF', '//docs.python.org/3/license.html', '/psf/donations/', '//wiki.python.org/moin/Languages', '/dev/', '/events/python-user-group/', '//wiki.qt.io/PySide', '/community/sigs/', '//wiki.gnome.org/Projects/PyGObject', '//www.ansible.com', '//www.saltstack.com', '//planetpython.org/', '/events/python-events', '/about/help/', '/events/python-user-group/past/', '/about/success/', '/psf-landing/', '/about/apps', '/about/', '//www.wxpython.org/', '/events/python-user-group/665/', '//www.python.org/psf/codeofconduct/', '/dev/peps/peps.rss', '/downloads/source/', '/psf/sponsorship/sponsors/', '//bottlepy.org', '//roundup.sourceforge.net/', '//pandas.pydata.org/', '//brochure.getpython.info/', '//bugs.python.org/', '/community/merchandise/', '//tornadoweb.org', '/events/python-user-group/650/', '//flask.pocoo.org/', 
'/downloads/release/python-364/', '/events/python-user-group/660/', '/events/python-user-group/638/', '/psf/', '/doc/', '//blog.python.org', '/events/python-events/604/', '/about/success/#government', '//python.org/dev/peps/', '//docs.python.org', '//feedproxy.google.com/~r/PythonInsider/~3/zVC80sq9s00/python-364-is-now-available.html', '/users/membership/', '/about/success/#arts', '//wiki.python.org/moin/Python2orPython3', '/downloads/', '/jobs/', '//trac.edgewall.org/', '//feedproxy.google.com/~r/PythonInsider/~3/wh73_1A-N7Q/python-355rc1-and-python-348rc1-are-now.html', '/privacy/', '//pypi.python.org/', '//www.riverbankcomputing.co.uk/software/pyqt/intro', '//www.scipy.org', '/community/forums/', '/about/success/#scientific', '/about/success/#software-development', '/shell/', '/accounts/signup/', '//www.facebook.com/pythonlang?fref=ts', '/community/', '//kivy.org/', '/about/quotes/', '//www.web2py.com/', '/community/logos/', '/community/diversity/', '/events/calendars/', '//wiki.python.org/moin/BeginnersGuide', '/success-stories/', '/doc/essays/', '/dev/core-mentorship/', '//ipython.org', '/events/', '//docs.python.org/3/tutorial/controlflow.html', '/about/success/#education', '/blogs/', '/community/irc/', '//pycon.blogspot.com/', '//jobs.python.org', '//www.pylonsproject.org/', '//www.djangoproject.com/', '/downloads/mac-osx/', '/about/success/#business', '//feedproxy.google.com/~r/PythonInsider/~3/x_c9D0S-4C4/python-370b1-is-now-available-for.html', '//wiki.python.org/moin/TkInter', '//docs.python.org/faq/', '//docs.python.org/3/tutorial/controlflow.html#defining-functions'}
Grab a list of all links on the page, in absolute form (anchors excluded):
>>> r.html.absolute_links
{'https://github.com/python/pythondotorg/issues', 'https://docs.python.org/3/tutorial/', 'https://www.python.org/about/success/', 'https://feedproxy.google.com/~r/PythonInsider/~3/kihd2DW98YY/python-370a4-is-available-for-testing.html', 'https://www.python.org/dev/peps/', 'https://mail.python.org/mailman/listinfo/python-dev', 'https://www.python.org/doc/', 'https://www.python.org/', 'https://www.python.org/about/', 'https://www.python.org/events/python-events/past/', 'https://devguide.python.org/', 'https://wiki.python.org/moin/PythonEventsCalendar#Submitting_an_Event', 'https://www.openstack.org', 'https://feedproxy.google.com/~r/PythonInsider/~3/AMoBel8b8Mc/python-3.html', 'https://docs.python.org/3/tutorial/introduction.html#lists', 'https://docs.python.org/3/tutorial/introduction.html#using-python-as-a-calculator', 'https://pyfound.blogspot.com/', 'https://wiki.python.org/moin/PythonBooks', 'https://plus.google.com/+Python', 'https://wiki.python.org/moin/', 'https://www.python.org/events/python-events', 'https://status.python.org/', 'https://www.python.org/about/apps', 'https://www.python.org/downloads/release/python-2714/', 'https://www.python.org/psf/donations/', 'https://buildbot.net/', 'https://twitter.com/ThePSF', 'https://docs.python.org/3/license.html', 'https://wiki.python.org/moin/Languages', 'https://docs.python.org/faq/', 'https://jobs.python.org', 'https://www.python.org/about/success/#software-development', 'https://www.python.org/about/success/#education', 'https://www.python.org/community/logos/', 'https://www.python.org/doc/av', 'https://wiki.qt.io/PySide', 'https://www.python.org/events/python-user-group/660/', 'https://wiki.gnome.org/Projects/PyGObject', 'https://www.ansible.com', 'https://www.saltstack.com', 'https://www.python.org/dev/peps/peps.rss', 'https://planetpython.org/', 'https://www.python.org/events/python-user-group/past/', 'https://docs.python.org/3/tutorial/controlflow.html#defining-functions', 'https://www.python.org/community/diversity/', 'https://docs.python.org/3/tutorial/controlflow.html', 'https://www.python.org/community/awards', 'https://www.python.org/events/python-user-group/638/', 'https://www.python.org/about/legal/', 'https://www.python.org/dev/', 
'https://www.python.org/download/alternatives', 'https://www.python.org/downloads/', 'https://www.python.org/community/lists/', 'https://www.wxpython.org/', 'https://www.python.org/about/success/#government', 'https://www.python.org/psf/', 'https://www.python.org/psf/codeofconduct/', 'https://bottlepy.org', 'https://roundup.sourceforge.net/', 'https://pandas.pydata.org/', 'https://brochure.getpython.info/', 'https://www.python.org/downloads/source/', 'https://bugs.python.org/', 'https://www.python.org/downloads/mac-osx/', 'https://www.python.org/about/help/', 'https://tornadoweb.org', 'https://flask.pocoo.org/', 'https://www.python.org/users/membership/', 'https://blog.python.org', 'https://www.python.org/privacy/', 'https://www.python.org/about/gettingstarted/', 'https://python.org/dev/peps/', 'https://www.python.org/about/apps/', 'https://docs.python.org', 'https://www.python.org/success-stories/', 'https://www.python.org/community/forums/', 'https://feedproxy.google.com/~r/PythonInsider/~3/zVC80sq9s00/python-364-is-now-available.html', 'https://www.python.org/community/merchandise/', 'https://www.python.org/about/success/#arts', 'https://wiki.python.org/moin/Python2orPython3', 'https://trac.edgewall.org/', 'https://feedproxy.google.com/~r/PythonInsider/~3/wh73_1A-N7Q/python-355rc1-and-python-348rc1-are-now.html', 'https://pypi.python.org/', 'https://www.python.org/events/python-user-group/650/', 'https://www.riverbankcomputing.co.uk/software/pyqt/intro', 'https://www.python.org/about/quotes/', 'https://www.python.org/downloads/windows/', 'https://www.python.org/events/calendars/', 'https://www.scipy.org', 'https://www.python.org/community/workshops/', 'https://www.python.org/blogs/', 'https://www.python.org/accounts/signup/', 'https://www.python.org/events/', 'https://kivy.org/', 'https://www.facebook.com/pythonlang?fref=ts', 'https://www.web2py.com/', 'https://www.python.org/psf/sponsorship/sponsors/', 'https://www.python.org/community/', 'https://www.python.org/download/other/', 'https://www.python.org/psf-landing/', 'https://www.python.org/events/python-user-group/665/', 'https://wiki.python.org/moin/BeginnersGuide', 'https://www.python.org/accounts/login/', 'https://www.python.org/downloads/release/python-364/', 'https://www.python.org/dev/core-mentorship/', 
'https://www.python.org/about/success/#business', 'https://www.python.org/community/sigs/', 'https://www.python.org/events/python-user-group/', 'https://ipython.org', 'https://www.python.org/shell/', 'https://www.python.org/community/irc/', 'https://www.python.org/about/success/#engineering', 'https://www.pylonsproject.org/', 'https://pycon.blogspot.com/', 'https://www.python.org/about/success/#scientific', 'https://www.python.org/doc/essays/', 'https://www.djangoproject.com/', 'https://www.python.org/success-stories/industrial-light-magic-runs-python/', 'https://feedproxy.google.com/~r/PythonInsider/~3/x_c9D0S-4C4/python-370b1-is-now-available-for.html', 'https://wiki.python.org/moin/TkInter', 'https://www.python.org/jobs/', 'https://www.python.org/events/python-events/604/'}
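The difference between the two properties above is plain URL resolution: links yields hrefs exactly as they appear in the markup, while absolute_links resolves each one against the page URL. A minimal sketch of that resolution using only the standard library (absolutize is a hypothetical helper, not part of requests-html):

```python
from urllib.parse import urljoin

def absolutize(base_url, hrefs):
    """Resolve relative and protocol-relative hrefs against the page URL."""
    return {urljoin(base_url, href) for href in hrefs}

# A few sample hrefs in the shapes seen above: site-relative and protocol-relative.
links = {'/about/apps/', '//docs.python.org/3/tutorial/', '/downloads/'}
print(absolutize('https://www.python.org/', links))
```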
Select an Element with a CSS Selector:

>>> about = r.html.find('#about', first=True)
Grab an Element's text contents:

>>> print(about.text)
About
Applications
Quotes
Getting Started
Help
Python Brochure
Introspect an Element's attributes:

>>> about.attrs
{'id': 'about', 'class': ('tier-1', 'element-1'), 'aria-haspopup': 'true'}
Render out an Element's HTML:

>>> about.html
'<li aria-haspopup="true" class="tier-1 element-1" id="about">\n<a href="/about/" title="">About</a>\n<ul>\n<li><a href="/about/apps/" title="">Applications</a></li>\n<li><a href="/about/quotes/" title="">Quotes</a></li>\n<li><a href="/about/gettingstarted/" title="">Getting Started</a></li>\n<li><a href="/about/help/" title="">Help</a></li>\n<li><a href="//brochure.getpython.info/" title="">Python Brochure</a></li>\n</ul>\n</li>'
Grab an Element's root tag name:

>>> about.tag
'li'

Show the line number that an Element's root tag is located in:

>>> about.lineno
Select a list of Elements within an Element:

>>> about.find('a')
[<Element 'a' href='/about/' title=''>, <Element 'a' href='/about/apps/' title=''>, <Element 'a' href='/about/quotes/' title=''>, <Element 'a' href='/about/gettingstarted/' title=''>, <Element 'a' href='/about/help/' title=''>, <Element 'a' href='//brochure.getpython.info/' title=''>]
Search for links within an element:

>>> about.absolute_links
{'https://brochure.getpython.info/', 'https://www.python.org/about/gettingstarted/', 'https://www.python.org/about/', 'https://www.python.org/about/quotes/', 'https://www.python.org/about/help/', 'https://www.python.org/about/apps/'}
Search for text on the page:

>>> r.html.search('Python is a {} language')[0]
'programming'
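search() delegates to the parse library, which treats {} and {name} as capture groups. A rough stdlib sketch of the idea with re (an illustration of the mechanism, not the actual implementation):

```python
import re

def template_search(template, text):
    """Sketch of parse-style searching: {} and {name} become lazy capture groups."""
    pattern = re.escape(template)
    pattern = re.sub(r'\\{(\w+)\\}', r'(?P<\1>.+?)', pattern)  # {name} placeholders
    pattern = pattern.replace(re.escape('{}'), '(.+?)')        # anonymous {} placeholders
    return re.search(pattern, text)

m = template_search('Python is a {} language', 'Python is a programming language for everyone.')
print(m.group(1))  # programming
```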
More complex CSS Selector example (copied from Chrome dev tools):

>>> r = session.get('https://github.com/')
>>> sel = 'body > div.application-main > div.jumbotron.jumbotron-codelines > div > div > div.col-md-7.text-center.text-md-left > p'
>>> print(r.html.find(sel, first=True).text)
GitHub is a development platform inspired by the way you work. From open source to business, you can host and review code, manage projects, and build software alongside millions of other developers.
XPath is also supported:

>>> r.html.xpath('a')
[<Element 'a' class=('btn',) href='https://help.github.com/articles/supported-browsers'>]
You can also select only elements containing certain text:

>>> r = session.get('http://python-requests.org/')
>>> r.html.find('a', containing='kenneth')
[<Element 'a' ...>, <Element 'a' ...>, <Element 'a' ...>, <Element 'a' ...>]
JavaScript Support
Let’s grab some text that’s rendered by JavaScript:

>>> r = session.get('http://python-requests.org/')
>>> r.html.render()
>>> r.html.search('Python 2 will retire in only {months} months!')['months']
'25'
Or you can do this async also:

>>> r = await asession.get('http://python-requests.org/')
>>> await r.html.arender()
>>> r.html.search('Python 2 will retire in only {months} months!')['months']
'25'
Note, the first time you ever run the render() method, it will download Chromium into your home directory (e.g. ~/.pyppeteer/). This only happens once. You may also need to install a few Linux packages to get pyppeteer working.
Using without Requests
You can also use this library without Requests:

>>> from requests_html import HTML
>>> doc = """<a href='https://httpbin.org'>"""
>>> html = HTML(html=doc)
>>> html.links
{'https://httpbin.org'}

For using arender() just pass async_=True to HTML.

# Using the `script` defined in the render() section below:
>>> html = HTML(html=doc, async_=True)
>>> val = await html.arender(script=script, reload=False)
>>> print(val)
{'width': 800, 'height': 600, 'deviceScaleFactor': 1}
Main Classes

These classes are the main interface to requests-html:
class requests_html.HTML(*, session: Union[HTMLSession, AsyncHTMLSession] = None, url: str = 'https://example.org/', html: Union[str, bytes], default_encoding: str = 'utf-8', async_: bool = False)
An HTML document, ready for parsing.

absolute_links
All found links on the page, in absolute form.

arender(retries: int = 8, script: str = None, wait: float = 0.2, scrolldown=False, sleep: int = 0, reload: bool = True, timeout: Union[float, int] = 8.0, keep_page: bool = False, cookies: list = [{}], send_cookies_session: bool = False)
Async version of render(). Takes the same parameters.

base_url
The base URL for the page. Supports the <base> tag.

encoding
The encoding string to be used, extracted from the HTML and HTMLResponse headers.
find(selector: str = '*', *, containing: Union[str, List[str]] = None, clean: bool = False, first: bool = False, _encoding: str = None) → Union[List[requests_html.Element], requests_html.Element]
Given a CSS Selector, returns a list of Element objects or a single one.

Example CSS Selectors:
- a
- a.someClass
- a#someID
- a[target=_blank]

See W3Schools' CSS Selectors Reference for more details.
If first is True, only returns the first Element found.
full_text
The full text content (including links) of the Element or HTML.

html
Unicode representation of the HTML content.

links
All found links on the page, in as-is form.

lxml
lxml representation of the Element or HTML.
next(fetch: bool = False, next_symbol: List[str] = ['next', 'more', 'older']) → Union[requests_html.HTML, List[str]]
Attempts to find the next page, if there is one. If fetch is True, returns the HTML object of the next page. If fetch is False (the default), simply returns the next URL.
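The next_symbol parameter hints at how the lookup works: candidate links are matched against those words. A simplified, hypothetical sketch of that heuristic (the real method also inspects the markup more carefully and can fetch the page):

```python
def find_next_url(candidates, next_symbol=('next', 'more', 'older')):
    """Return the href of the first link whose text mentions a next-page word."""
    for text, href in candidates:
        if any(symbol in text.lower() for symbol in next_symbol):
            return href
    return None

candidates = [('Home', '/'), ('Older posts', '/page/2/'), ('About', '/about/')]
print(find_next_url(candidates))  # /page/2/
```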
pq
PyQuery representation of the Element or HTML.

raw_html
Bytes representation of the HTML content.
render(retries: int = 8, script: str = None, wait: float = 0.2, scrolldown=False, sleep: int = 0, reload: bool = True, timeout: Union[float, int] = 8.0, keep_page: bool = False, cookies: list = [{}], send_cookies_session: bool = False)
Reloads the response in Chromium, and replaces the HTML content with an updated version, with JavaScript executed.

If scrolldown is specified, the page will scroll down the specified number of times, after sleeping the specified amount of time (e.g. scrolldown=10, sleep=1).
If just sleep is provided, the rendering will wait n seconds before returning.
If script is specified, it will execute the provided JavaScript at runtime. Example:

script = """
    () => {
        return {
            width: document.documentElement.clientWidth,
            height: document.documentElement.clientHeight,
            deviceScaleFactor: window.devicePixelRatio,
        }
    }
"""

Returns the return value of the executed script, if any is provided:

>>> r.html.render(script=script)
{'width': 800, 'height': 600, 'deviceScaleFactor': 1}

Warning: the first time you run this method, it will download Chromium into your home directory (~/.pyppeteer).
search(template: str) → parse.Result
Search the Element for the given Parse template.
Parameters: template – The Parse template to use.

search_all(template: str) → Union[List[parse.Result], parse.Result]
Search the Element (multiple times) for the given Parse template.
Parameters: template – The Parse template to use.

text
The text content of the Element or HTML.
xpath(selector: str, *, clean: bool = False, first: bool = False, _encoding: str = None) → Union[List[str], List[requests_html.Element], str, requests_html.Element]
Given an XPath selector, returns a list of Element objects or a single one.

If a sub-selector is specified (e.g. //a/@href), a simple list of results is returned.
See W3Schools' XPath Examples for more details.
If first is True, only returns the first Element found.
class requests_html.Element(*, element, url: str, default_encoding: str = None)
An element of HTML.

absolute_links
All found links on the page, in absolute form.

attrs
Returns a dictionary of the attributes of the Element.

base_url
The base URL for the page. Supports the <base> tag.

encoding
The encoding string to be used, extracted from the HTML and HTMLResponse headers.
find(selector: str = '*', *, containing: Union[str, List[str]] = None, clean: bool = False, first: bool = False, _encoding: str = None) → Union[List[requests_html.Element], requests_html.Element]
Given a CSS Selector, returns a list of Element objects or a single one.

Example CSS Selectors:
- a
- a.someClass
- a#someID
- a[target=_blank]

See W3Schools' CSS Selectors Reference for more details.
If first is True, only returns the first Element found.
full_text
The full text content (including links) of the Element or HTML.

html
Unicode representation of the HTML content.

links
All found links on the page, in as-is form.

lxml
lxml representation of the Element or HTML.

pq
PyQuery representation of the Element or HTML.

raw_html
Bytes representation of the HTML content.
search(template: str) → parse.Result
Search the Element for the given Parse template.
Parameters: template – The Parse template to use.

search_all(template: str) → Union[List[parse.Result], parse.Result]
Search the Element (multiple times) for the given Parse template.
Parameters: template – The Parse template to use.

text
The text content of the Element or HTML.
xpath(selector: str, *, clean: bool = False, first: bool = False, _encoding: str = None) → Union[List[str], List[requests_html.Element], str, requests_html.Element]
Given an XPath selector, returns a list of Element objects or a single one.

If a sub-selector is specified (e.g. //a/@href), a simple list of results is returned.
See W3Schools' XPath Examples for more details.
If first is True, only returns the first Element found.
Utility Functions

requests_html.user_agent(style=None) → str
Returns an apparently legit user-agent string; pass style to request one of a specific style. Defaults to a Chrome-style User-Agent.
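A sketch of what such a helper might look like; the UA string and the lookup table below are illustrative assumptions, not the data the library actually ships:

```python
# Hypothetical style -> User-Agent table; requests-html bundles its own strings.
USER_AGENTS = {
    'chrome': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
               '(KHTML, like Gecko) Chrome/63.0.3239.108 Safari/537.36'),
}

def user_agent(style=None):
    """Return a browser-like User-Agent string, Chrome-style by default."""
    return USER_AGENTS[style or 'chrome']

print(user_agent())
```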
HTML Sessions

These sessions are for making HTTP requests:

class requests_html.HTMLSession(**kwargs)

close()
If a browser was created, close it first.
delete(url, **kwargs)
Sends a DELETE request. Returns a requests.Response object.

get(url, **kwargs)
Sends a GET request. Returns a requests.Response object.

get_adapter(url)
Returns the appropriate connection adapter (requests.adapters.BaseAdapter) for the given URL.

get_redirect_target(resp)
Receives a Response. Returns a redirect URI or None.

head(url, **kwargs)
Sends a HEAD request. Returns a requests.Response object.

merge_environment_settings(url, proxies, stream, verify, cert)
Check the environment and merge it with some settings. Returns a dict.

mount(prefix, adapter)
Registers a connection adapter to a prefix.
Adapters are sorted in descending order by prefix length.
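The note about prefix ordering is what makes get_adapter deterministic: the longest mounted prefix that matches the URL wins. A standalone sketch of that lookup (the adapter values here are stand-in strings, not real adapter objects):

```python
adapters = {}

def mount(prefix, adapter):
    """Register an adapter for a URL prefix."""
    adapters[prefix] = adapter

def get_adapter(url):
    """Pick the adapter with the longest matching prefix, as requests does."""
    for prefix in sorted(adapters, key=len, reverse=True):
        if url.lower().startswith(prefix.lower()):
            return adapters[prefix]
    raise ValueError('No adapter mounted for %r' % url)

mount('https://', 'generic-adapter')
mount('https://api.example.com', 'api-adapter')
print(get_adapter('https://api.example.com/v1'))  # api-adapter
print(get_adapter('https://python.org/'))         # generic-adapter
```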
options(url, **kwargs)
Sends an OPTIONS request. Returns a requests.Response object.

patch(url, data=None, **kwargs)
Sends a PATCH request. Returns a requests.Response object.

post(url, data=None, json=None, **kwargs)
Sends a POST request. Returns a requests.Response object.

prepare_request(request)
Constructs a requests.PreparedRequest for transmission and returns it. The PreparedRequest has settings merged from the Request instance and those of the Session.
Parameters: request – Request instance to prepare with this session's settings.

put(url, data=None, **kwargs)
Sends a PUT request. Returns a requests.Response object.

rebuild_auth(prepared_request, response)
When being redirected we may want to strip authentication from the request to avoid leaking credentials. This method intelligently removes and reapplies authentication where possible to avoid credential loss.

rebuild_method(prepared_request, response)
When being redirected we may want to change the method of the request based on certain specs or browser behavior.

rebuild_proxies(prepared_request, proxies)
This method re-evaluates the proxy configuration by considering the environment variables. If we are redirected to a URL covered by NO_PROXY, we strip the proxy configuration. Otherwise, we set missing proxy keys for this URL (in case they were stripped by a previous redirect).
This method also replaces the Proxy-Authorization header where necessary. Returns a dict.

request(method, url, params=None, data=None, headers=None, cookies=None, files=None, auth=None, timeout=None, allow_redirects=True, proxies=None, hooks=None, stream=None, verify=None, cert=None, json=None)
Constructs a Request, prepares it and sends it. Returns a requests.Response object.

resolve_redirects(resp, req, stream=False, timeout=None, verify=True, cert=None, proxies=None, yield_requests=False, **adapter_kwargs)
Receives a Response. Returns a generator of Responses or Requests.

response_hook(response, **kwargs) → requests_html.HTMLResponse
Change the response encoding and replace it by an HTMLResponse.

send(request, **kwargs)
Send a given PreparedRequest. Returns a requests.Response object.

should_strip_auth(old_url, new_url)
Decide whether the Authorization header should be removed when redirecting.
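That decision boils down to comparing the old and new locations; a simplified sketch that only checks the hostname (requests' real rule also considers scheme and port changes):

```python
from urllib.parse import urlparse

def should_strip_auth(old_url, new_url):
    """Strip the Authorization header when a redirect leaves the original host."""
    return urlparse(old_url).hostname != urlparse(new_url).hostname

print(should_strip_auth('https://example.com/a', 'https://other.com/b'))    # True
print(should_strip_auth('https://example.com/a', 'https://example.com/c'))  # False
```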
class requests_html.AsyncHTMLSession(loop=None, workers=None, mock_browser: bool = True, *args, **kwargs)
An async consumable session.

close()
If a browser was created, close it first.
delete(url, **kwargs)
Sends a DELETE request. Returns a requests.Response object.

get(url, **kwargs)
Sends a GET request. Returns a requests.Response object.

get_adapter(url)
Returns the appropriate connection adapter (requests.adapters.BaseAdapter) for the given URL.

get_redirect_target(resp)
Receives a Response. Returns a redirect URI or None.

head(url, **kwargs)
Sends a HEAD request. Returns a requests.Response object.

merge_environment_settings(url, proxies, stream, verify, cert)
Check the environment and merge it with some settings. Returns a dict.

mount(prefix, adapter)
Registers a connection adapter to a prefix.
Adapters are sorted in descending order by prefix length.
options(url, **kwargs)
Sends an OPTIONS request. Returns a requests.Response object.

patch(url, data=None, **kwargs)
Sends a PATCH request. Returns a requests.Response object.

post(url, data=None, json=None, **kwargs)
Sends a POST request. Returns a requests.Response object.

prepare_request(request)
Constructs a requests.PreparedRequest for transmission and returns it. The PreparedRequest has settings merged from the Request instance and those of the Session.
Parameters: request – Request instance to prepare with this session's settings.

put(url, data=None, **kwargs)
Sends a PUT request. Returns a requests.Response object.

rebuild_auth(prepared_request, response)
When being redirected we may want to strip authentication from the request to avoid leaking credentials. This method intelligently removes and reapplies authentication where possible to avoid credential loss.

rebuild_method(prepared_request, response)
When being redirected we may want to change the method of the request based on certain specs or browser behavior.

rebuild_proxies(prepared_request, proxies)
This method re-evaluates the proxy configuration by considering the environment variables. If we are redirected to a URL covered by NO_PROXY, we strip the proxy configuration. Otherwise, we set missing proxy keys for this URL (in case they were stripped by a previous redirect).
This method also replaces the Proxy-Authorization header where necessary. Returns a dict.
request(*args, **kwargs)
Partials the original request function and runs it in a thread.

resolve_redirects(resp, req, stream=False, timeout=None, verify=True, cert=None, proxies=None, yield_requests=False, **adapter_kwargs)
Receives a Response. Returns a generator of Responses or Requests.

response_hook(response, **kwargs) → requests_html.HTMLResponse
Change the response encoding and replace it by an HTMLResponse.
run(*coros)
Pass in all the coroutines you want to run; each one is wrapped in a task, run, and awaited. Returns a list of results in the same order the coroutines were passed in.
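That ordering guarantee is the same one asyncio.gather provides. A stdlib sketch of run()'s shape, with coroutine functions passed in just like session.run(get_pythonorg, get_reddit, get_google) above (fetch is a stand-in for an HTTP request):

```python
import asyncio

async def fetch(name, delay):
    # Stand-in for an HTTP request: finish after `delay` seconds.
    await asyncio.sleep(delay)
    return name

def run(*coros):
    """Wrap each coroutine function in a task; results come back in input order."""
    async def _gather():
        tasks = [asyncio.ensure_future(coro()) for coro in coros]
        return await asyncio.gather(*tasks)
    return asyncio.run(_gather())

# 'reddit' finishes first, but results keep the order the functions were passed in.
print(run(lambda: fetch('pythonorg', 0.02), lambda: fetch('reddit', 0.01)))
```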
send(request, **kwargs)
Send a given PreparedRequest. Returns a requests.Response object.

should_strip_auth(old_url, new_url)
Decide whether the Authorization header should be removed when redirecting.