Add function kafka.protocol.create_message_set() that takes a list of
payloads and a codec and returns a message set with the desired encoding.
Introduce kafka.common.UnsupportedCodecError, raised if an unknown codec
is specified.
Include a test for the new function.
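The change above can be sketched roughly as follows. This is an illustrative stand-in, not the real `kafka.protocol` implementation: the codec constants, the gzip-only handling, and the return format are assumptions based on the commit message; the actual function produces Kafka wire-format messages and also supports snappy.

```python
# Illustrative sketch of a create_message_set-style helper.
# CODEC_* values and the list-of-bytes return shape are assumptions.
import gzip

CODEC_NONE = 0
CODEC_GZIP = 1


class UnsupportedCodecError(Exception):
    """Raised when an unknown codec is requested."""


def create_message_set(payloads, codec=CODEC_NONE):
    """Encode a list of byte payloads with the chosen codec."""
    if codec == CODEC_NONE:
        # No compression: one message per payload, unchanged.
        return list(payloads)
    if codec == CODEC_GZIP:
        # Compressed codecs wrap the concatenated payloads in a single message.
        return [gzip.compress(b"".join(payloads))]
    raise UnsupportedCodecError("unknown codec: %r" % codec)
```

An unknown codec value fails fast with `UnsupportedCodecError`, matching the error introduced in this commit.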

Conflicts:
	servers/0.8.0/kafka-src
	test/test_unit.py

Adds a codec parameter to Producer.__init__ that lets the user choose
a compression codec to use for all messages sent by it.
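A minimal sketch of how such a constructor parameter might look. The signature and codec constants are assumptions inferred only from the commit message; the real Producer additionally handles partitioning, batching, and sending.

```python
# Hypothetical sketch: validate the codec once at construction so every
# message sent by this producer uses it. Constants are assumed values.
CODEC_NONE = 0
CODEC_GZIP = 1


class UnsupportedCodecError(Exception):
    pass


class Producer:
    def __init__(self, client, codec=CODEC_NONE):
        # Reject unknown codecs up front rather than on every send.
        if codec not in (CODEC_NONE, CODEC_GZIP):
            raise UnsupportedCodecError("unsupported codec: %r" % codec)
        self.client = client
        self.codec = codec
```

Validating in `__init__` means a misconfigured producer fails immediately rather than on its first send.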
Working on intermittent failures in test_encode_fetch_request and test_encode_produce_request
Changes for aligning code with offset fetch and commit APIs (Kafka 0.8.1)
Will remove once any error handling issues are resolved.

Conflicts:
	kafka/util.py

Also move the exceptions to common instead of util
Related to #42
Adds a new ConsumerFetchSizeTooSmall exception, raised when
`_decode_message_set_iter` gets a BufferUnderflowError before it has
yielded any message.
In that event, SimpleConsumer increases the fetch size by 1.5x and
continues the fetching loop _without_ advancing the offset (essentially
retrying the request with a larger fetch size).
Once the consumer fetch size has been increased, it remains increased
while SimpleConsumer fetches from that partition.
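The retry policy described above can be sketched as follows. `fetch_with_growth` and its `fetch` callback are hypothetical stand-ins for SimpleConsumer internals; only the 1.5x growth and the same-offset retry come from the commit message.

```python
# Sketch of the fetch-size growth loop: on ConsumerFetchSizeTooSmall,
# grow the fetch size by 1.5x and retry the SAME offset.
class ConsumerFetchSizeTooSmall(Exception):
    pass


def fetch_with_growth(fetch, offset, fetch_size):
    """Retry fetch(offset, fetch_size), growing fetch_size until a message fits.

    Returns (messages, fetch_size) so the enlarged size persists for later
    fetches from the same partition, as described above.
    """
    while True:
        try:
            return fetch(offset, fetch_size), fetch_size
        except ConsumerFetchSizeTooSmall:
            # Message did not fit: enlarge the request and retry without
            # advancing the offset.
            fetch_size = int(fetch_size * 1.5)
```

Returning the final `fetch_size` lets the caller keep the larger value, mirroring how the increased size sticks for that partition.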
consumer.py and conn.py will be done later after pending merges
Marking some features as not compatible with 0.8 (will be added in 0.8.1)