Diffstat (limited to 'docs')
-rw-r--r--   docs/conf.py               9
-rw-r--r--   docs/index.rst             3
-rw-r--r--   docs/user/advanced.rst    78
-rw-r--r--   docs/user/quickstart.rst  15
4 files changed, 60 insertions, 45 deletions
diff --git a/docs/conf.py b/docs/conf.py
index f7dcafc1..4521eed4 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -27,7 +27,10 @@ from requests import __version__
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = ['sphinx.ext.autodoc']
+extensions = [
+ 'sphinx.ext.autodoc',
+ 'sphinx.ext.intersphinx',
+]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -43,7 +46,7 @@ master_doc = 'index'
# General information about the project.
project = u'Requests'
-copyright = u'2013. A <a href="http://kennethreitz.com/pages/open-projects.html">Kenneth Reitz</a> Project'
+copyright = u'2014. A <a href="http://kennethreitz.com/pages/open-projects.html">Kenneth Reitz</a> Project'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
@@ -241,3 +244,5 @@ texinfo_appendices = []
sys.path.append(os.path.abspath('_themes'))
html_theme_path = ['_themes']
html_theme = 'kr'
+
+intersphinx_mapping = {'urllib3': ('http://urllib3.readthedocs.org/en/latest', None)}
diff --git a/docs/index.rst b/docs/index.rst
index 0716c65c..4b0ecfd4 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -38,7 +38,7 @@ Requests takes all of the work out of Python HTTP/1.1 — making your integrati
Testimonials
------------
-Her Majesty's Government, Amazon, Google, Twilio, Runscope, Mozilla, Heroku, PayPal, NPR, Obama for America, Transifex, Native Instruments, The Washington Post, Twitter, SoundCloud, Kippt, Readability, and Federal US Institutions use Requests internally. It has been downloaded over 5,000,000 times from PyPI.
+Her Majesty's Government, Amazon, Google, Twilio, Runscope, Mozilla, Heroku, PayPal, NPR, Obama for America, Transifex, Native Instruments, The Washington Post, Twitter, SoundCloud, Kippt, Readability, and Federal US Institutions use Requests internally. It has been downloaded over 8,000,000 times from PyPI.
**Armin Ronacher**
Requests is the perfect example how beautiful an API can be with the
@@ -130,6 +130,5 @@ you.
:maxdepth: 1
dev/philosophy
- dev/internals
dev/todo
dev/authors
diff --git a/docs/user/advanced.rst b/docs/user/advanced.rst
index 74f59af8..6a78dcf1 100644
--- a/docs/user/advanced.rst
+++ b/docs/user/advanced.rst
@@ -109,7 +109,7 @@ request. The simple recipe for this is the following::
print(resp.status_code)
Since you are not doing anything special with the ``Request`` object, you
-prepare it immediately and modified the ``PreparedRequest`` object. You then
+prepare it immediately and modify the ``PreparedRequest`` object. You then
send that with the other parameters you would have sent to ``requests.*`` or
``Session.*``.
@@ -118,8 +118,9 @@ However, the above code will lose some of the advantages of having a Requests
:class:`Session <requests.Session>`-level state such as cookies will
not get applied to your request. To get a
:class:`PreparedRequest <requests.models.PreparedRequest>` with that state
-applied, replace the call to ``Request.prepare()`` with a call to
-``Session.prepare_request()``, like this::
+applied, replace the call to :meth:`Request.prepare()
+<requests.Request.prepare>` with a call to
+:meth:`Session.prepare_request() <requests.Session.prepare_request>`, like this::
from requests import Request, Session
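For reference, a condensed sketch of the session-backed flow this hunk points readers toward; the URL and payload below are placeholders, not part of this changeset::

    from requests import Request, Session

    s = Session()
    req = Request('POST', 'http://httpbin.org/post', data={'key': 'value'})

    # prepare_request() merges Session-level state (cookies, headers, auth)
    # into the PreparedRequest, unlike a bare Request.prepare().
    prepped = s.prepare_request(req)

    # Tweak the prepared request here if needed, then send it.
    resp = s.send(prepped, timeout=5)
    print(resp.status_code)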
@@ -182,7 +183,10 @@ If you specify a wrong path or an invalid cert::
Body Content Workflow
---------------------
-By default, when you make a request, the body of the response is downloaded immediately. You can override this behavior and defer downloading the response body until you access the :class:`Response.content` attribute with the ``stream`` parameter::
+By default, when you make a request, the body of the response is downloaded
+immediately. You can override this behavior and defer downloading the response
+body until you access the :attr:`Response.content <requests.Response.content>`
+attribute with the ``stream`` parameter::
tarball_url = 'https://github.com/kennethreitz/requests/tarball/master'
r = requests.get(tarball_url, stream=True)
@@ -193,7 +197,7 @@ At this point only the response headers have been downloaded and the connection
content = r.content
...
-You can further control the workflow by use of the :class:`Response.iter_content` and :class:`Response.iter_lines` methods. Alternatively, you can read the undecoded body from the underlying urllib3 :class:`urllib3.HTTPResponse` at :class:`Response.raw`.
+You can further control the workflow by use of the :meth:`Response.iter_content <requests.Response.iter_content>` and :meth:`Response.iter_lines <requests.Response.iter_lines>` methods. Alternatively, you can read the undecoded body from the underlying urllib3 :class:`urllib3.HTTPResponse <urllib3.response.HTTPResponse>` at :attr:`Response.raw <requests.Response.raw>`.
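As a short illustration of the deferred-download workflow described above (the size check is only a placeholder condition)::

    import requests

    tarball_url = 'https://github.com/kennethreitz/requests/tarball/master'
    r = requests.get(tarball_url, stream=True)

    # Only the headers have been read; the connection is still open.
    if int(r.headers.get('content-length', 0)) < 50 * 1024 * 1024:
        content = r.content   # accessing .content downloads the full body
    else:
        r.close()             # nothing downloaded; release the connection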
Keep-Alive
@@ -300,14 +304,16 @@ Then, we can make a request using our Pizza Auth::
>>> requests.get('http://pizzabin.org/admin', auth=PizzaAuth('kenneth'))
<Response [200]>
-.. _streaming-requests
+.. _streaming-requests:
Streaming Requests
------------------
-With ``requests.Response.iter_lines()`` you can easily iterate over streaming
-APIs such as the `Twitter Streaming API <https://dev.twitter.com/docs/streaming-api>`_.
-Simply set ``stream`` to ``True`` and iterate over the response with ``iter_lines()``::
+With :meth:`requests.Response.iter_lines` you can easily
+iterate over streaming APIs such as the `Twitter Streaming
+API <https://dev.twitter.com/docs/streaming-api>`_. Simply
+set ``stream`` to ``True`` and iterate over the response with
+:meth:`~requests.Response.iter_lines`::
import json
import requests
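The documentation's own example continues beyond this hunk; an abridged sketch of the same pattern, with a placeholder streaming endpoint, looks like this::

    import json
    import requests

    # Any endpoint that streams newline-delimited JSON will do.
    r = requests.get('http://httpbin.org/stream/20', stream=True)

    for line in r.iter_lines():
        # filter out keep-alive new lines
        if line:
            print(json.loads(line))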
@@ -366,20 +372,20 @@ unusual to those not familiar with the relevant specification.
Encodings
^^^^^^^^^
-When you receive a response, Requests makes a guess at the encoding to use for
-decoding the response when you call the ``Response.text`` method. Requests
-will first check for an encoding in the HTTP header, and if none is present,
-will use `charade <http://pypi.python.org/pypi/charade>`_ to attempt to guess
-the encoding.
-
-The only time Requests will not do this is if no explicit charset is present
-in the HTTP headers **and** the ``Content-Type`` header contains ``text``. In
-this situation,
-`RFC 2616 <http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.7.1>`_
-specifies that the default charset must be ``ISO-8859-1``. Requests follows
-the specification in this case. If you require a different encoding, you can
-manually set the ``Response.encoding`` property, or use the raw
-``Response.content``.
+When you receive a response, Requests makes a guess at the encoding to
+use for decoding the response when you access the :attr:`Response.text
+<requests.Response.text>` attribute. Requests will first check for an
+encoding in the HTTP header, and if none is present, will use `chardet
+<http://pypi.python.org/pypi/chardet>`_ to attempt to guess the encoding.
+
+The only time Requests will not do this is if no explicit charset
+is present in the HTTP headers **and** the ``Content-Type``
+header contains ``text``. In this situation, `RFC 2616
+<http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.7.1>`_ specifies
+that the default charset must be ``ISO-8859-1``. Requests follows the
+specification in this case. If you require a different encoding, you can
+manually set the :attr:`Response.encoding <requests.Response.encoding>`
+property, or use the raw :attr:`Response.content <requests.Response.content>`.
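A brief sketch of the behaviour this rewritten paragraph describes (the URL is a placeholder)::

    import requests

    r = requests.get('http://httpbin.org/encoding/utf8')

    # The encoding Requests guessed or read from the headers.
    print(r.encoding)

    # Override it if you know better, then decode with it.
    r.encoding = 'utf-8'
    text = r.text      # decoded using the encoding set above
    raw = r.content    # undecoded bytes, unaffected by r.encoding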
HTTP Verbs
----------
@@ -406,8 +412,8 @@ out what type of content it is. Do this like so::
...
application/json; charset=utf-8
-So, GitHub returns JSON. That's great, we can use the ``r.json`` method to
-parse it into Python objects.
+So, GitHub returns JSON. That's great, we can use the :meth:`r.json
+<requests.Response.json>` method to parse it into Python objects.
::
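In outline, the pattern being described here, with a placeholder GitHub endpoint, amounts to::

    import requests

    r = requests.get('https://api.github.com/repos/kennethreitz/requests')

    # r.json() decodes the JSON body into native Python objects.
    data = r.json()
    print(data['full_name'])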
@@ -583,11 +589,11 @@ reason this was done was to implement Transport Adapters, originally
methods for an HTTP service. In particular, they allow you to apply per-service
configuration.
-Requests ships with a single Transport Adapter, the
-:class:`HTTPAdapter <requests.adapters.HTTPAdapter>`. This adapter provides the
-default Requests interaction with HTTP and HTTPS using the powerful `urllib3`_
-library. Whenever a Requests :class:`Session <Session>` is initialized, one of
-these is attached to the :class:`Session <Session>` object for HTTP, and one
+Requests ships with a single Transport Adapter, the :class:`HTTPAdapter
+<requests.adapters.HTTPAdapter>`. This adapter provides the default Requests
+interaction with HTTP and HTTPS using the powerful `urllib3`_ library. Whenever
+a Requests :class:`Session <requests.Session>` is initialized, one of these is
+attached to the :class:`Session <requests.Session>` object for HTTP, and one
for HTTPS.
Requests enables users to create and use their own Transport Adapters that
@@ -605,7 +611,7 @@ prefix. Once mounted, any HTTP request made using that session whose URL starts
with the given prefix will use the given Transport Adapter.
Implementing a Transport Adapter is beyond the scope of this documentation, but
-a good start would be to subclass the ``requests.adapters.BaseAdapter`` class.
+a good start would be to subclass the :class:`requests.adapters.BaseAdapter` class.
.. _`described here`: http://kennethreitz.org/exposures/the-future-of-python-http
.. _`urllib3`: https://github.com/shazow/urllib3
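An illustrative skeleton of the subclass-and-mount pattern these paragraphs reference; ``NullAdapter`` is a hypothetical stub, not an adapter shipped with Requests::

    import requests
    from requests.adapters import BaseAdapter

    class NullAdapter(BaseAdapter):
        """Hypothetical adapter that answers every request with a canned response."""

        def send(self, request, **kwargs):
            response = requests.Response()
            response.status_code = 200
            response.url = request.url
            response.request = request
            return response

        def close(self):
            pass

    s = requests.Session()
    # Any request to a URL starting with this prefix uses NullAdapter.
    s.mount('http://internal.example/', NullAdapter())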
@@ -614,11 +620,11 @@ Blocking Or Non-Blocking?
-------------------------
With the default Transport Adapter in place, Requests does not provide any kind
-of non-blocking IO. The ``Response.content`` property will block until the
-entire response has been downloaded. If you require more granularity, the
-streaming features of the library (see :ref:`streaming-requests`) allow you to
-retrieve smaller quantities of the response at a time. However, these calls
-will still block.
+of non-blocking IO. The :attr:`Response.content <requests.Response.content>`
+property will block until the entire response has been downloaded. If
+you require more granularity, the streaming features of the library (see
+:ref:`streaming-requests`) allow you to retrieve smaller quantities of the
+response at a time. However, these calls will still block.
If you are concerned about the use of blocking IO, there are lots of projects
out there that combine Requests with one of Python's asynchronicity frameworks.
diff --git a/docs/user/quickstart.rst b/docs/user/quickstart.rst
index cd1a5ea4..78b2cc65 100644
--- a/docs/user/quickstart.rst
+++ b/docs/user/quickstart.rst
@@ -99,7 +99,12 @@ using, and change it, using the ``r.encoding`` property::
>>> r.encoding = 'ISO-8859-1'
If you change the encoding, Requests will use the new value of ``r.encoding``
-whenever you call ``r.text``.
+whenever you call ``r.text``. You might want to do this in any situation where
+you can apply special logic to work out what the encoding of the content will
+be. For example, HTML and XML have the ability to specify their encoding in
+their body. In situations like this, you should use ``r.content`` to find the
+encoding, and then set ``r.encoding``. This will let you use ``r.text`` with
+the correct encoding.
Requests will also use custom encodings in the event that you need them. If
you have created your own encoding and registered it with the ``codecs``
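A small sketch of the workflow the added paragraph suggests, assuming an HTML page that declares its charset in a ``<meta>`` tag (the URL is a placeholder)::

    import re
    import requests

    r = requests.get('http://example.com/')

    # Look for a charset declaration in the body itself,
    # e.g. <meta charset="utf-8"> in HTML.
    match = re.search(br'<meta\s+charset=["\']?([\w-]+)', r.content, re.I)
    if match:
        r.encoding = match.group(1).decode('ascii')

    text = r.text  # now decoded with the encoding found in the body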
@@ -152,16 +157,16 @@ server, you can access ``r.raw``. If you want to do this, make sure you set
>>> r.raw.read(10)
'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03'
-In general, however, you should use a pattern like this to save what is being
+In general, however, you should use a pattern like this to save what is being
streamed to a file::
with open(filename, 'wb') as fd:
for chunk in r.iter_content(chunk_size):
fd.write(chunk)
-Using ``Response.iter_content`` will handle a lot of what you would otherwise
-have to handle when using ``Response.raw`` directly. When streaming a
-download, the above is the preferred and recommended way to retrieve the
+Using ``Response.iter_content`` will handle a lot of what you would otherwise
+have to handle when using ``Response.raw`` directly. When streaming a
+download, the above is the preferred and recommended way to retrieve the
content.
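Put together, the streaming-download pattern recommended here looks roughly like this; the URL, filename, and chunk size are placeholders::

    import requests

    url = 'https://github.com/kennethreitz/requests/tarball/master'
    filename = 'requests-master.tar.gz'

    r = requests.get(url, stream=True)
    with open(filename, 'wb') as fd:
        # iter_content takes care of much of what reading Response.raw
        # directly would leave to you (e.g. decoding the transfer).
        for chunk in r.iter_content(chunk_size=128):
            fd.write(chunk)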