Metadata Refactor
* add MetadataRequest and MetadataResponse namedtuples
* add TopicMetadata namedtuple
* add error codes to Topic and Partition Metadata
* add KafkaClient.send_metadata_request() method
* KafkaProtocol.decode_metadata_response changed to return a
MetadataResponse object so that it is consistent with server api:
[broker_list, topic_list]
* raise server exceptions in load_metadata_for_topics(*topics)
unless topics is null (full refresh)
* Replace non-standard exceptions (LeaderUnavailable,
PartitionUnavailable) with server standard exceptions
(LeaderNotAvailableError, UnknownTopicOrPartitionError)
Conflicts:
kafka/client.py
test/test_client.py
test/test_producer_integration.py
test/test_protocol.py
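
The namedtuple layout this refactor describes looks roughly like the
sketch below. The field names are assumptions inferred from the bullets
above, not the library's published definitions:

    from collections import namedtuple

    # Hypothetical field layout; the real kafka-python definitions may differ.
    MetadataRequest = namedtuple('MetadataRequest', ['topics'])
    # Mirrors the server api: [broker_list, topic_list]
    MetadataResponse = namedtuple('MetadataResponse', ['brokers', 'topics'])
    # Topic and partition metadata now carry an error code, so the client
    # can raise server-standard exceptions per topic or partition.
    TopicMetadata = namedtuple('TopicMetadata',
                               ['topic', 'error', 'partitions'])
    PartitionMetadata = namedtuple('PartitionMetadata',
                                   ['topic', 'partition', 'leader',
                                    'replicas', 'isr', 'error'])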
- drop custom PartitionUnavailable exception
- raise UnknownTopicOrPartitionError or LeaderNotAvailableError
- add tests for the raised exceptions
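
A minimal sketch of what swapping the custom exceptions for
server-standard ones looks like; the numeric codes follow the Kafka
protocol error codes, and the class layout here is an assumption:

    # Sketch: server errors keyed by the protocol's numeric error codes.
    class BrokerResponseError(Exception):
        errno = -1

    class UnknownTopicOrPartitionError(BrokerResponseError):
        errno = 3  # topic or partition does not exist on this broker

    class LeaderNotAvailableError(BrokerResponseError):
        errno = 5  # leader election (e.g. topic auto-creation) in progress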
server is not auto-creating
- use helper methods not direct access
- add get_partition_ids_for_topic
- check for topic and partition errors during load_metadata_for_topics
- raise LeaderNotAvailableError when topic is being auto-created
  or UnknownTopicOrPartitionError if auto-creation is off
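
A rough sketch of the per-topic check described above, using the
hypothetical error-code classes from the earlier sketch and a made-up
helper name:

    # Hypothetical: raise the matching server exception for a topic's
    # metadata error code during load_metadata_for_topics().
    def check_topic_error(topic_metadata):
        if topic_metadata.error == LeaderNotAvailableError.errno:
            # The broker reports this while auto-creating the topic;
            # callers can retry after the leader election finishes.
            raise LeaderNotAvailableError(topic_metadata.topic)
        if topic_metadata.error == UnknownTopicOrPartitionError.errno:
            # Auto-creation is off, so the topic simply does not exist.
            raise UnknownTopicOrPartitionError(topic_metadata.topic)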
- add MetadataRequest and MetadataResponse namedtuples
- add TopicMetadata namedtuple
- add error codes to Topic and Partition Metadata
- add KafkaClient.send_metadata_request() method
- KafkaProtocol.decode_metadata_response changed
to return a MetadataResponse object
so that it is consistent with server api: [broker_list, topic_list]
Python 3 Support
Conflicts:
kafka/producer.py
test/test_client.py
test/test_client_integration.py
test/test_codec.py
test/test_consumer.py
test/test_consumer_integration.py
test/test_failover_integration.py
test/test_producer.py
test/test_producer_integration.py
test/test_protocol.py
test/test_util.py
python3
Warn users about async producer
Refactor producer failover tests (5x speedup)
Skip async producer failover test for now, because it is broken
retry failed messages
fix consumer retry logic (fixes #135)
Conflicts:
kafka/consumer.py
Fixes a bug in the following condition:
* Starting buffer size is 1024, max buffer size is 2048, both set at the instance level
* Fetch from p0 and p1, and receive a response
* p0 has more than 1024 bytes, so the consumer doubles the buffer size to 2048 and marks p0 for retry
* p1 has more than 1024 bytes, so the consumer tries to double the buffer size, but sees that it is
  already at the max and raises ConsumerFetchSizeTooSmall
The fix changes the logic to the following:
* Starting buffer size is 1024, set per partition; max buffer size is 2048, set at the instance level
* Fetch from p0 and p1, and receive a response
* p0 has more than 1024 bytes, so the consumer doubles p0's buffer size to 2048 and marks p0 for retry
* p1 has more than 1024 bytes, so the consumer doubles p1's buffer size to 2048 and marks p1 for retry
* The consumer sees that there are partitions to retry and repeats the parsing loop
* p0 sends all its bytes this time, and the consumer yields its messages
* p1 sends all its bytes this time, and the consumer yields its messages
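
A condensed sketch of the fixed loop with made-up names; the point is
that buffer sizes are tracked per partition, so one oversized partition
no longer exhausts a shared limit:

    class ConsumerFetchSizeTooSmall(Exception):
        # Stands in for the library's exception of the same name.
        pass

    def fetch_all(fetch_fn, partitions, buffer_size=1024, max_buffer_size=2048):
        # fetch_fn is a hypothetical callable: given partitions and their
        # buffer sizes, it returns {partition: (complete, messages)}.
        sizes = dict.fromkeys(partitions, buffer_size)
        retry = set(partitions)
        while retry:
            responses = fetch_fn(retry, sizes)
            retry = set()
            for partition, (complete, messages) in responses.items():
                if complete:
                    for message in messages:
                        yield message
                elif sizes[partition] < max_buffer_size:
                    # Double only this partition's buffer and retry it alone.
                    sizes[partition] = min(sizes[partition] * 2, max_buffer_size)
                    retry.add(partition)
                else:
                    raise ConsumerFetchSizeTooSmall(partition)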
Use PyLint for static error checking
only; fixup errors; skip kafka/queue.py
subclass); document
The `Consumer` class fetches the last known offsets in `__init__` if
`auto_commit` is enabled, but it would be nice to expose this behavior
for consumers that aren't using auto_commit. This doesn't change
existing behavior, just exposes the ability to easily fetch and set the
last known offsets. Once #162 or something similar lands this may no
longer be necessary, but it looks like that might take a while to make
it through.
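
Assuming the exposed method is named fetch_last_known_offsets (a guess
based on the behavior described above), usage without auto_commit might
look like:

    from kafka import KafkaClient, SimpleConsumer

    client = KafkaClient('localhost:9092')
    consumer = SimpleConsumer(client, 'my-group', 'my-topic',
                              auto_commit=False)

    # __init__ skipped the offset fetch because auto_commit is off,
    # so pull the last committed offsets explicitly.
    consumer.fetch_last_known_offsets()
    for message in consumer:
        print(message)
    consumer.commit()  # commit manually when done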
Improve KafkaConnection with more tests
try block in conn.recv
return val check in KafkaConnection.send()
close()
connection fails
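
Taken together, the fixes above amount to hardening send(), recv(), and
close(). A self-contained sketch of the pattern (not the library's
actual code):

    import socket

    class ConnectionError(Exception):
        # Stands in for kafka-python's own ConnectionError.
        pass

    class KafkaConnection(object):
        def __init__(self, host, port, timeout=10):
            self.host, self.port = host, port
            self._sock = socket.create_connection((host, port), timeout)

        def send(self, payload):
            try:
                # sendall() raises on failure rather than returning a
                # count, so wrap it instead of checking a return value.
                self._sock.sendall(payload)
            except socket.error:
                self.close()
                raise ConnectionError('unable to send to %s:%d'
                                      % (self.host, self.port))

        def recv(self, num_bytes):
            try:
                data = self._sock.recv(num_bytes)
            except socket.error:
                self.close()
                raise ConnectionError('unable to recv from %s:%d'
                                      % (self.host, self.port))
            if not data:
                # An empty read means the peer closed the connection.
                self.close()
                raise ConnectionError('socket closed by %s:%d'
                                      % (self.host, self.port))
            return data

        def close(self):
            # Close once and drop the socket so reconnecting is explicit.
            if self._sock is not None:
                self._sock.close()
                self._sock = None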
Better type errors
It will still die, just as before, but it now includes a *helpful* error message
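
Illustratively, the pattern is to fail fast with an error that names
the offending value (the function and message here are made up):

    def _ensure_bytes(message):
        # Same failure as before, but the error now says what was wrong.
        if not isinstance(message, bytes):
            raise TypeError('messages must be bytes, got %r (type %s)'
                            % (message, type(message).__name__))
        return message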
Add KafkaTimeoutError and fix client.ensure_topic_exists
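
A sketch of the fixed behavior, assuming the client's existing
has_metadata_for_topic / load_metadata_for_topics methods: poll for the
topic's metadata and raise KafkaTimeoutError instead of looping forever:

    import time

    class KafkaTimeoutError(Exception):
        pass

    def ensure_topic_exists(client, topic, timeout=30):
        start = time.time()
        while not client.has_metadata_for_topic(topic):
            if time.time() - start > timeout:
                raise KafkaTimeoutError('topic %s not created within %ds'
                                        % (topic, timeout))
            # The metadata request itself triggers auto-creation on
            # brokers configured to allow it.
            client.load_metadata_for_topics(topic)
            time.sleep(0.5)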
Tags applied to master will now be automatically deployed on PyPI
Handle New Topic Creation
Adds ensure_topic_exists to KafkaClient and redirects the test case to
use it. Fixes #113 and fixes #150.
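
A short usage sketch (topic name and broker address are placeholders):

    from kafka import KafkaClient, SimpleProducer

    client = KafkaClient('localhost:9092')
    # Create the topic up front so the first produce request does not
    # race broker-side auto-creation.
    client.ensure_topic_exists('test-topic')
    producer = SimpleProducer(client)
    producer.send_messages('test-topic', b'hello')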