kafka-python
############

.. image:: https://img.shields.io/badge/kafka-0.9%2C%200.8.2%2C%200.8.1%2C%200.8-brightgreen.svg
    :target: https://kafka-python.readthedocs.org/compatibility.html
.. image:: https://img.shields.io/pypi/pyversions/kafka-python.svg
    :target: https://pypi.python.org/pypi/kafka-python
.. image:: https://coveralls.io/repos/dpkp/kafka-python/badge.svg?branch=master&service=github
    :target: https://coveralls.io/github/dpkp/kafka-python?branch=master
.. image:: https://travis-ci.org/dpkp/kafka-python.svg?branch=master
    :target: https://travis-ci.org/dpkp/kafka-python
.. image:: https://img.shields.io/badge/license-Apache%202-blue.svg
    :target: https://github.com/dpkp/kafka-python/blob/master/LICENSE

>>> pip install kafka-python

kafka-python is a client for the Apache Kafka distributed stream processing
system. It is designed to function much like the official java client, with a
sprinkling of pythonic interfaces (e.g., iterators).

KafkaConsumer
*************

>>> from kafka import KafkaConsumer
>>> consumer = KafkaConsumer('my_favorite_topic')
>>> for msg in consumer:
...     print(msg)

:class:`~kafka.consumer.KafkaConsumer` is a full-featured,
high-level message consumer class that is similar in design and function to the
new 0.9 java consumer. Most configuration parameters defined by the official
java client are supported as optional kwargs, with generally similar behavior.
Gzip and Snappy compressed messages are supported transparently.

In addition to the standard
:meth:`~kafka.consumer.KafkaConsumer.poll` interface (which returns
micro-batches of messages, grouped by topic-partition), kafka-python supports
single-message iteration, yielding :class:`~kafka.consumer.ConsumerRecord`
namedtuples, which include the topic, partition, offset, key, and value of each
message.
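
As an illustration, a minimal sketch of both access patterns; the topic name is
a placeholder, and the ``poll()`` keyword argument shown (``timeout_ms``) is an
assumption rather than something documented on this page:

>>> from kafka import KafkaConsumer
>>> consumer = KafkaConsumer('my_favorite_topic')
>>> # poll() returns micro-batches: {TopicPartition: [ConsumerRecord, ...]}
>>> batch = consumer.poll(timeout_ms=500)
>>> for tp, records in batch.items():
...     for record in records:
...         print(record.topic, record.partition, record.offset,
...               record.key, record.value)
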
By default, :class:`~kafka.consumer.KafkaConsumer` will attempt to auto-commit
message offsets every 5 seconds. When used with 0.9 kafka brokers,
:class:`~kafka.consumer.KafkaConsumer` will dynamically assign partitions using
the kafka GroupCoordinator APIs and a
:class:`~kafka.coordinator.assignors.roundrobin.RoundRobinPartitionAssignor`
partitioning strategy, enabling relatively straightforward parallel consumption
patterns. See :doc:`usage` for examples.
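
For example, a sketch of a group-managed consumer with explicit auto-commit
settings; the broker address, group name, and the specific kwargs shown
(``group_id``, ``enable_auto_commit``, ``auto_commit_interval_ms``) are
assumptions derived from the java client's configuration names:

>>> from kafka import KafkaConsumer
>>> # group_id enables dynamic partition assignment via the GroupCoordinator
>>> # APIs (0.9 brokers); offsets are auto-committed every 5 seconds by default
>>> consumer = KafkaConsumer('my_favorite_topic',
...                          bootstrap_servers='localhost:9092',
...                          group_id='my_consumer_group',
...                          enable_auto_commit=True,
...                          auto_commit_interval_ms=5000)
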

KafkaProducer
*************

TBD


Protocol
********

A secondary goal of kafka-python is to provide an easy-to-use protocol layer
for interacting with kafka brokers via the python repl. This is useful for
testing, probing, and general experimentation. The protocol support is
leveraged to enable a :meth:`~kafka.KafkaClient.check_version()`
method that probes a kafka broker and
attempts to identify which version it is running (0.8.0 to 0.9).
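
A short sketch of that probe follows; the constructor argument and the example
return value are assumptions for illustration only:

>>> from kafka import KafkaClient
>>> client = KafkaClient(bootstrap_servers='localhost:9092')
>>> client.check_version()   # returns a version tuple, e.g. (0, 9)
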

Low-level
*********

Legacy support is maintained for low-level consumer and producer classes,
SimpleConsumer and SimpleProducer.
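
For reference, a rough sketch of the legacy interfaces; the client class name,
constructor signatures, and ``send_messages()`` call here are assumptions and
may differ between releases:

>>> from kafka import SimpleClient, SimpleProducer, SimpleConsumer
>>> client = SimpleClient('localhost:9092')
>>> producer = SimpleProducer(client)
>>> producer.send_messages('my_favorite_topic', b'some message bytes')
>>> consumer = SimpleConsumer(client, 'my_group', 'my_favorite_topic')
>>> for msg in consumer:
...     print(msg)
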

.. toctree::
    :hidden:
    :maxdepth: 2

    Usage Overview <usage>
    API </apidoc/modules>
    install
    tests
    compatibility
    support
    license