Commit message

Metadata Refactor
* add MetadataRequest and MetadataResponse namedtuples (sketched below)
* add TopicMetadata namedtuple
* add error codes to Topic and Partition Metadata
* add KafkaClient.send_metadata_request() method
* KafkaProtocol.decode_metadata_response changed to return a
  MetadataResponse object so that it is consistent with the server API:
  [broker_list, topic_list]
* raise server exceptions in load_metadata_for_topics(*topics)
  unless topics is null (full refresh)
* replace non-standard exceptions (LeaderUnavailable,
  PartitionUnavailable) with standard server exceptions
  (LeaderNotAvailableError, UnknownTopicOrPartitionError)
Conflicts:
kafka/client.py
test/test_client.py
test/test_producer_integration.py
test/test_protocol.py
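A rough sketch of the shapes this refactor introduces; the field names here are illustrative assumptions, not the library's exact definitions:

    from collections import namedtuple

    MetadataResponse = namedtuple('MetadataResponse', ['brokers', 'topics'])
    TopicMetadata = namedtuple('TopicMetadata', ['topic', 'error', 'partitions'])

    # Mirrors the server response shape: [broker_list, topic_list]
    response = MetadataResponse(brokers=[],
                                topics=[TopicMetadata('my-topic', 0, [])])
    for topic_metadata in response.topics:
        if topic_metadata.error != 0:  # per-topic error code from the server
            print('topic %s failed with error code %d'
                  % (topic_metadata.topic, topic_metadata.error))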
- use helper methods, not direct access
- add get_partition_ids_for_topic
- check for topic and partition errors during load_metadata_for_topics
- raise LeaderNotAvailableError when the topic is being auto-created,
  or UnknownTopicOrPartitionError if auto-creation is off
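A hedged usage sketch of the helper-method style above; the exception names come from the commit, while the client setup is assumed:

    from kafka import KafkaClient
    from kafka.common import (LeaderNotAvailableError,
                              UnknownTopicOrPartitionError)

    client = KafkaClient('localhost:9092')  # connection details assumed
    try:
        client.load_metadata_for_topics('my-topic')
        partition_ids = client.get_partition_ids_for_topic('my-topic')
    except LeaderNotAvailableError:
        pass  # topic is being auto-created; retry after a short wait
    except UnknownTopicOrPartitionError:
        raise  # auto-creation is off and the topic does not exist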
Python 3 Support
Conflicts:
kafka/producer.py
test/test_client.py
test/test_client_integration.py
test/test_codec.py
test/test_consumer.py
test/test_consumer_integration.py
test/test_failover_integration.py
test/test_producer.py
test/test_producer_integration.py
test/test_protocol.py
test/test_util.py
python3

retry failed messages

subclass); document
Adds ensure_topic_exists to KafkaClient, redirects test case to use
that. Fixes #113 and fixes #150.
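Illustrative use of the new method; the client setup and blocking behaviour are assumptions based on the description:

    from kafka import KafkaClient

    client = KafkaClient('localhost:9092')
    # Waits until the topic shows up in cluster metadata instead of
    # racing against broker-side auto-creation.
    client.ensure_topic_exists('my-topic')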
Add function kafka.protocol.create_message_set() that takes a list of
payloads and a codec and returns a message set with the desired encoding.
Introduce kafka.common.UnsupportedCodecError, raised if an unknown codec
is specified.
Include a test for the new function.
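A sketch of how the new function might be called; the codec constants are assumed to live in kafka.protocol, and the exact signature may differ:

    from kafka.common import UnsupportedCodecError
    from kafka.protocol import create_message_set, CODEC_GZIP

    message_set = create_message_set([b'payload-1', b'payload-2'], CODEC_GZIP)

    try:
        create_message_set([b'payload'], 0x7F)  # not a known codec id
    except UnsupportedCodecError:
        pass  # raised for unknown codecs, per this change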
Conflicts:
servers/0.8.0/kafka-src
test/test_unit.py
Adds a codec parameter to Producer.__init__ that lets the user choose
a compression codec to use for all messages sent by it.
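Assumed usage of the new parameter; the constructor details may differ by version:

    from kafka import KafkaClient, SimpleProducer
    from kafka.protocol import CODEC_GZIP

    client = KafkaClient('localhost:9092')
    producer = SimpleProducer(client, codec=CODEC_GZIP)  # gzip every message set
    producer.send_messages('my-topic', b'will be sent compressed')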
of the initial partition messages are published to

the sorted list of partitions rather than completely randomizing the
initial ordering before round-robin cycling the partitions

of partitions to prevent the first message from always being published
to partition 0.
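Taken together, the entries above describe round-robin cycling that starts at a random offset into the sorted partition list. A self-contained sketch of that idea; the names are illustrative, not the library's:

    import random
    from itertools import cycle

    class RandomStartRoundRobin(object):
        def __init__(self, partitions):
            partitions = sorted(partitions)             # keep a stable order
            start = random.randrange(len(partitions))   # random entry point
            self._cycle = cycle(partitions[start:] + partitions[:start])

        def partition(self, key=None):
            return next(self._cycle)

    p = RandomStartRoundRobin([0, 1, 2, 3])
    print([p.partition() for _ in range(6)])  # e.g. [2, 3, 0, 1, 2, 3]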
This allows a single producer to be used to send to multiple topics.
See https://github.com/mumrah/kafka-python/issues/110
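With this change the topic is supplied per send call rather than fixed at construction, roughly:

    from kafka import KafkaClient, SimpleProducer

    client = KafkaClient('localhost:9092')
    producer = SimpleProducer(client)           # no topic bound here
    producer.send_messages('topic-a', b'message for a')
    producer.send_messages('topic-b', b'message for b')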
mahendra-repr
Conflicts:
kafka/client.py
kafka/consumer.py
Various changes/fixes, including:
* Allow customizing socket timeouts
* Read the correct number of bytes from Kafka
* Guarantee reading the expected number of bytes from the socket every time
* Remove bufsize from client and conn
* SimpleConsumer flow changes
* Fix some error handling
* Add optional upper limit to consumer fetch buffer size
* Add and fix unit and integration tests
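The byte-counting guarantee above matters because socket.recv() may legitimately return fewer bytes than requested; a minimal sketch of the fix, with an illustrative helper name:

    import socket

    def recv_exactly(sock, num_bytes):
        """Read exactly num_bytes from the socket, looping as needed."""
        chunks = []
        remaining = num_bytes
        while remaining > 0:
            chunk = sock.recv(remaining)
            if not chunk:  # peer closed the connection mid-read
                raise socket.error('connection closed with %d bytes left'
                                   % remaining)
            chunks.append(chunk)
            remaining -= len(chunk)
        return b''.join(chunks)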
Also, log.exception() is unhelpfully noisy. Use log.error() with some error details in the message instead.
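For example, a sketch of the style rather than the actual call sites:

    import logging
    log = logging.getLogger('kafka')

    try:
        raise IOError('connection refused')
    except IOError as e:
        # log.exception(...) would dump a full traceback every time;
        # a one-line error with details is quieter and still useful.
        log.error('Unable to send message to broker: %s', e)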
Enable absolute imports for modules using Queue.
When running on Linux with code on a case-insensitive file system,
imports of the `Queue` module fail because Python resolves the
wrong file (it tries a relative import of `queue.py` in the
kafka directory). This change forces absolute imports via PEP 328.
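The fix is a two-line pattern at the top of the affected modules (Python 2-era code, shown as a sketch):

    from __future__ import absolute_import  # PEP 328

    # Now guaranteed to be the stdlib module, never kafka/queue.py
    from Queue import Empty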
Conflicts:
kafka/producer.py
failed_messages
- add integration tests for sync producer
- add integration tests for async producer w. leadership election
- use log.exception
As per the multiprocessing module's documentation, the objects
passed to the Process() class must be picklable on Windows,
so the async producer did not work there.
To fix this, code that uses multiprocessing has to follow certain
rules:
* The target= function should not be a member function
* We cannot pass objects like socket() to multiprocessing
This ticket fixes these issues. For KafkaClient and KafkaConnection
objects, we make copies of the object and reinit() them inside the
child processes.
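A sketch of that pattern: a module-level target function and a copied object that reconnects via reinit() inside the child. The stub class stands in for KafkaClient/KafkaConnection:

    import copy
    from multiprocessing import Process

    def _send_upstream(client):          # module-level, so it pickles
        client.reinit()                  # rebuild sockets in the child
        # ... drain a queue and send messages ...

    class ClientStub(object):            # stand-in for KafkaClient
        def reinit(self):
            print('reconnecting inside child process')

    if __name__ == '__main__':
        client = ClientStub()
        proc = Process(target=_send_upstream, args=(copy.copy(client),))
        proc.start()
        proc.join()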
Also improve the logic for stopping the async Processor instance.
Ensure that unsent messages are sent before it is stopped.
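One way to get that drain-then-stop behaviour is a sentinel queued after the pending messages (illustrative, not necessarily the Processor's exact mechanism):

    from multiprocessing import Process, Queue

    def worker(q):
        while True:
            msg = q.get()
            if msg is None:        # stop marker
                break
            print('sending', msg)  # stand-in for the real send

    if __name__ == '__main__':
        q = Queue()
        proc = Process(target=worker, args=(q,))
        proc.start()
        for i in range(3):
            q.put('message-%d' % i)
        q.put(None)  # queued last, so pending messages go out first
        proc.join()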
Also, ensure that the 'no-acks' case works fine.
In conn.send(), do not wait for the response; wait for it only in
conn.recv(). This behaviour is fine now since the connection is not
shared among consumer threads etc.
Add support for two options in the producer: req_acks and ack_timeout.
The acks, if any, are passed to the caller directly.
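Assumed usage; the option names come from the commit, while the values and constructor shape are illustrative:

    from kafka import KafkaClient, SimpleProducer

    client = KafkaClient('localhost:9092')
    # req_acks: how many broker acknowledgements to require per request
    # ack_timeout: how long the broker may wait for those acks
    producer = SimpleProducer(client, req_acks=1, ack_timeout=1000)
    resp = producer.send_messages('my-topic', b'payload')
    print(resp)  # any acks come back to the caller directly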
Conflicts:
kafka/producer.py
* Ensure that the round-robin partitioner works fine
* _load_metadata_for_topics() would cause duplicate and stale entries in
  self.topic_partitions. Fix this.
Provides support for two partitioners
* Round robin
* Hashed (default as per kafka clients)
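The hashed strategy maps a given key to the same partition every time, roughly:

    def hashed_partition(key, partitions):
        # Real clients use a stable hash; Python's built-in hash() is
        # shown only for illustration (it varies across interpreter runs).
        return partitions[hash(key) % len(partitions)]

    print(hashed_partition(b'user-42', [0, 1, 2]))  # always the same slot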
The Java/Scala Kafka client supports a mechanism for sending
messages asynchronously by using a queue and a thread:
messages are put on the queue and the worker thread keeps sending
them to the broker.
This ticket implements this feature in Python.
We use multiprocessing instead of threads to send the messages.
consumer.py and conn.py will be done later after pending merges
Marking some stuff as not compatible for 0.8 (will be added in 0.8.1)