Commit message | Author | Age | Files | Lines
Initial support for consumer coordinator
Support added for ConsumerMetadataRequest and ConsumerMetadataResponse
Added consumer-aware request routine for supporting the consumer coordinator
Added separate client method for fetching Kafka-committed offsets from the coordinator
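The routing idea behind the consumer-aware request can be sketched as follows. This is a minimal stand-in, not the actual kafka-python code: the `CoordinatorRouter` class and its method names are hypothetical, the namedtuple field names are illustrative, and the broker lookup and wire round trip are stubbed.

```python
from collections import namedtuple

# Hypothetical shapes mirroring the request/response pair the commit adds;
# the real kafka-python definitions may use different field names.
ConsumerMetadataRequest = namedtuple("ConsumerMetadataRequest", ["group"])
ConsumerMetadataResponse = namedtuple(
    "ConsumerMetadataResponse", ["error", "node_id", "host", "port"])


class CoordinatorRouter:
    """Routes consumer-group requests to the group's coordinator broker.

    The lookup is backed by a static table here; a real client would send
    a ConsumerMetadataRequest over the wire and cache the response.
    """

    def __init__(self, coordinators):
        # group name -> ConsumerMetadataResponse
        self._coordinators = coordinators

    def coordinator_for_group(self, group):
        resp = self._coordinators.get(group)
        if resp is None or resp.error != 0:
            raise KeyError("no coordinator known for group %r" % group)
        return (resp.host, resp.port)

    def fetch_committed_offsets(self, group, topic_partitions):
        # The consumer-aware part: this request always goes to the group's
        # coordinator, never to an arbitrary broker from the metadata cache.
        host, port = self.coordinator_for_group(group)
        # ... an OffsetFetchRequest would be sent to (host, port) here ...
        return {tp: 0 for tp in topic_partitions}  # stubbed offsets
```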
Metadata Refactor
* add MetadataRequest and MetadataResponse namedtuples
* add TopicMetadata namedtuple
* add error codes to Topic and Partition Metadata
* add KafkaClient.send_metadata_request() method
* KafkaProtocol.decode_metadata_response changed to return a
MetadataResponse object so that it is consistent with server api:
[broker_list, topic_list]
* raise server exceptions in load_metadata_for_topics(*topics)
unless topics is null (full refresh)
* Replace non-standard exceptions (LeaderUnavailable,
PartitionUnavailable) with server standard exceptions
(LeaderNotAvailableError, UnknownTopicOrPartitionError)
Conflicts:
kafka/client.py
test/test_client.py
test/test_producer_integration.py
test/test_protocol.py
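The namedtuple layout and the error-raising rule described above can be sketched like this. The field names are illustrative assumptions (the real kafka-python definitions may differ), and `check_metadata` is a hypothetical helper standing in for the error handling inside `load_metadata_for_topics`.

```python
from collections import namedtuple

# Illustrative shapes only; real field names may differ.
BrokerMetadata = namedtuple("BrokerMetadata", ["nodeId", "host", "port"])
PartitionMetadata = namedtuple(
    "PartitionMetadata",
    ["topic", "partition", "leader", "replicas", "isr", "error"])
TopicMetadata = namedtuple("TopicMetadata", ["topic", "error", "partitions"])
# Mirrors the server api: [broker_list, topic_list]
MetadataResponse = namedtuple("MetadataResponse", ["brokers", "topics"])


def check_metadata(response, topics):
    """Raise when a requested topic carries a server error code (0 == no
    error) -- unless `topics` is empty, i.e. a full refresh, in which case
    per-topic errors are tolerated."""
    if not topics:
        return
    for t in response.topics:
        if t.topic in topics and t.error != 0:
            raise RuntimeError(
                "metadata error %d for topic %r" % (t.error, t.topic))
```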
Add function kafka.protocol.create_message_set() that takes a list of
payloads and a codec and returns a message set with the desired encoding.
Introduce kafka.common.UnsupportedCodecError, raised if an unknown codec
is specified.
Include a test for the new function.
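The shape of the new function can be sketched as below. This is a simplified stand-in for `kafka.protocol.create_message_set`, assuming only the signature described above: real Kafka message sets also carry offsets, CRCs, and attribute bytes, which are omitted here, and the codec constants are illustrative.

```python
import gzip

CODEC_NONE = None
CODEC_GZIP = "gzip"


class UnsupportedCodecError(Exception):
    """Raised when an unknown codec is requested (mirrors kafka.common)."""


def create_message_set(payloads, codec=CODEC_NONE):
    """Join payloads into one blob, encoded with the requested codec.

    Simplified sketch: real message sets wrap each payload in a Message
    with offset, CRC and attribute bytes before optional compression.
    """
    body = b"".join(payloads)
    if codec is CODEC_NONE:
        return body
    if codec == CODEC_GZIP:
        return gzip.compress(body)
    raise UnsupportedCodecError("unknown codec: %r" % (codec,))
```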
Conflicts:
servers/0.8.0/kafka-src
test/test_unit.py
Adds a codec parameter to Producer.__init__ that lets the user choose
a compression codec to use for all messages sent by it.
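A sketch of the constructor-level codec choice, under stated assumptions: this `Producer` class, its `ALLOWED_CODECS` list, and the dict returned by `send_messages` are all illustrative, not kafka-python's actual API — the point is only that the codec is validated once in `__init__` and then applied to every batch.

```python
class Producer:
    """Producer whose constructor fixes the codec for all messages it sends.

    `send_messages` returns a summary dict instead of doing network I/O,
    to keep the sketch self-contained.
    """

    ALLOWED_CODECS = (None, "gzip", "snappy")  # illustrative set

    def __init__(self, client, codec=None):
        if codec not in self.ALLOWED_CODECS:
            raise ValueError("unsupported codec: %r" % (codec,))
        self.client = client
        self.codec = codec

    def send_messages(self, topic, partition, *payloads):
        # Every batch this instance produces is encoded with self.codec;
        # a real producer would build a message set and send it here.
        return {"topic": topic, "partition": partition,
                "codec": self.codec, "count": len(payloads)}
```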
Working on intermittent failures in test_encode_fetch_request and test_encode_produce_request
Changes for aligning code with offset fetch and commit APIs (Kafka 0.8.1)
|
| | |
|
| | |
|
| |
| |
| |
| | |
Will remove once any error handling issues are resolved.
Conflicts:
kafka/util.py
Also move the exceptions to common instead of util
Related to #42
Adds a new ConsumerFetchSizeTooSmall exception, raised when
`_decode_message_set_iter` gets a BufferUnderflowError before it has
yielded a single message.
In that event, SimpleConsumer increases the fetch size by a factor of 1.5
and continues the fetching loop _without_ advancing the offset (it simply
retries the request with a larger fetch size).
Once the fetch size has grown, it stays at the larger value for as long as
SimpleConsumer fetches from that partition.
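The retry loop described above can be sketched as follows. `fetch_all` and `fetch_fn` are hypothetical names (the real logic lives inside SimpleConsumer's fetch loop); the sketch keeps only the essential behaviour: same offset, 1.5x larger buffer, and the grown size is kept afterwards.

```python
class ConsumerFetchSizeTooSmall(Exception):
    """Raised when a response is too small to decode even one message."""


def fetch_all(fetch_fn, offset, fetch_size=1024):
    """Retry a fetch with a 1.5x larger buffer until at least one message
    decodes, without advancing the offset.

    `fetch_fn(offset, fetch_size)` stands in for the broker round trip:
    it returns a list of messages or raises ConsumerFetchSizeTooSmall.
    Returns (messages, fetch_size) so the caller keeps the grown size
    for subsequent fetches from the same partition.
    """
    while True:
        try:
            messages = fetch_fn(offset, fetch_size)
        except ConsumerFetchSizeTooSmall:
            # Same offset, bigger buffer: a pure retry.
            fetch_size = int(fetch_size * 1.5)
            continue
        return messages, fetch_size
```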
consumer.py and conn.py will be done later after pending merges
Marking some stuff as not compatible for 0.8 (will be added in 0.8.1)