author     Jeff Widman <jeff@jeffwidman.com>          2017-04-09 22:00:24 -0700
committer  GitHub <noreply@github.com>                2017-04-09 22:00:24 -0700
commit     8ed5592d5af2cc52586b9b39833d01382b2cc96b (patch)
tree       47969a5371b29306b67741db5102619461d9e5d3
parent     7c24135eaf1db95c50c5d340cd15cbfc2674c927 (diff)
download   kafka-python-update-rtd-links.tar.gz
Update RTD links with https / .io  (update-rtd-links)
-rw-r--r--   README.rst   12
1 file changed, 6 insertions(+), 6 deletions(-)
@@ -2,7 +2,7 @@ Kafka Python client
 ------------------------
 
 .. image:: https://img.shields.io/badge/kafka-0.10%2C%200.9%2C%200.8.2%2C%200.8.1%2C%200.8-brightgreen.svg
-    :target: https://kafka-python.readthedocs.org/compatibility.html
+    :target: https://kafka-python.readthedocs.io/compatibility.html
 .. image:: https://img.shields.io/pypi/pyversions/kafka-python.svg
     :target: https://pypi.python.org/pypi/kafka-python
 .. image:: https://coveralls.io/repos/dpkp/kafka-python/badge.svg?branch=master&service=github
@@ -26,7 +26,7 @@ check code (perhaps using zookeeper or consul). For older brokers, you can
 achieve something similar by manually assigning different partitions to each
 consumer instance with config management tools like chef, ansible, etc. This
 approach will work fine, though it does not support rebalancing on failures.
-See <http://kafka-python.readthedocs.org/en/master/compatibility.html>
+See <https://kafka-python.readthedocs.io/en/master/compatibility.html>
 for more details.
 
 Please note that the master branch may contain unreleased features. For release
@@ -41,7 +41,7 @@ KafkaConsumer is a high-level message consumer, intended to operate as similarly
 as possible to the official java client. Full support for coordinated
 consumer groups requires use of kafka brokers that support the Group APIs: kafka v0.9+.
 
-See <http://kafka-python.readthedocs.org/en/master/apidoc/KafkaConsumer.html>
+See <https://kafka-python.readthedocs.io/en/master/apidoc/KafkaConsumer.html>
 for API and configuration details.
 
 The consumer iterator returns ConsumerRecords, which are simple namedtuples
@@ -76,7 +76,7 @@ KafkaProducer
 KafkaProducer is a high-level, asynchronous message producer. The class is
 intended to operate as similarly as possible to the official java client.
 
-See <http://kafka-python.readthedocs.org/en/master/apidoc/KafkaProducer.html>
+See <https://kafka-python.readthedocs.io/en/master/apidoc/KafkaProducer.html>
 for more details.
 
 >>> from kafka import KafkaProducer
@@ -116,7 +116,7 @@ Compression
 kafka-python supports gzip compression/decompression natively. To produce or consume lz4
 compressed messages, you should install python-lz4 (pip install lz4).
 To enable snappy compression/decompression install python-snappy (also requires snappy library).
-See <http://kafka-python.readthedocs.org/en/master/install.html#optional-snappy-install>
+See <https://kafka-python.readthedocs.io/en/master/install.html#optional-snappy-install>
 for more information.
 
 Protocol
@@ -135,4 +135,4 @@ Low-level
 
 Legacy support is maintained for low-level consumer and producer classes,
 SimpleConsumer and SimpleProducer. See
-<http://kafka-python.readthedocs.io/en/master/simple.html?highlight=SimpleProducer> for API details.
+<https://kafka-python.readthedocs.io/en/master/simple.html?highlight=SimpleProducer> for API details.
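
The change itself is a mechanical rewrite of the readthedocs links in README.rst (http to https, .org to .io). A minimal sketch of how such a bulk update could be scripted in Python is shown below; the regular expression and in-place edit are illustrative assumptions, not how this commit was actually produced:

    import re
    from pathlib import Path

    # Rewrite legacy kafka-python readthedocs links (http and/or .org)
    # to the canonical https://kafka-python.readthedocs.io form.
    readme = Path("README.rst")
    text = readme.read_text()
    text = re.sub(
        r"https?://kafka-python\.readthedocs\.(?:org|io)",
        "https://kafka-python.readthedocs.io",
        text,
    )
    readme.write_text(text)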