| Commit message | Author | Age | Files | Lines |
Add 'codec' parameter to Producer
Add function kafka.protocol.create_message_set() that takes a list of
payloads and a codec and returns a message set with the desired encoding.
Introduce kafka.common.UnsupportedCodecError, raised if an unknown codec
is specified.
Include a test for the new function.
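The behavior described above can be sketched as a minimal toy, assuming hypothetical codec constants; this is an illustrative reconstruction of the described interface, not the actual kafka.protocol implementation:

```python
import gzip

# Illustrative codec constants (assumed, not the real kafka-python values)
CODEC_NONE, CODEC_GZIP = 0x00, 0x01


class UnsupportedCodecError(Exception):
    """Raised when an unknown codec is requested, per the commit message."""


def create_message_set(payloads, codec=CODEC_NONE):
    """Toy sketch: encode a list of payloads with the requested codec.

    Returns a list of encoded messages; raises UnsupportedCodecError
    for any codec it does not recognize.
    """
    if codec == CODEC_NONE:
        return [bytes(p) for p in payloads]
    elif codec == CODEC_GZIP:
        # one compressed blob wrapping the concatenated payloads
        return [gzip.compress(b''.join(payloads))]
    raise UnsupportedCodecError('codec %r not supported' % codec)
```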
Conflicts:
servers/0.8.0/kafka-src
test/test_unit.py
Adds a codec parameter to Producer.__init__ that lets the user choose
a compression codec to use for all messages sent by it.
kafka/client.py contained duplicate copies of the same refactor; merged them.
Moved test/test_integration.py changes into test/test_producer_integration.
Conflicts:
kafka/client.py
servers/0.8.0/kafka-src
test/test_integration.py
SimpleProducer randomization of the initial round-robin ordering of
partitions, to prevent the first message from always being published to
partition 0. The initial partition messages are published to is now a
random offset into the sorted list of partitions, rather than completely
randomizing the initial ordering before round-robin cycling the partitions.
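The partitioner change described above can be sketched as follows; this is an illustrative reconstruction under the stated behavior (random starting offset into the sorted partition list), not the actual kafka-python partitioner code:

```python
import random
from itertools import cycle, islice


def partition_cycle(partitions, randomize_start=True):
    """Round-robin over sorted partitions, optionally starting at a
    random offset so the first message does not always hit partition 0."""
    parts = sorted(partitions)
    if randomize_start:
        start = random.randrange(len(parts))
        # rotate the sorted list; relative order is preserved,
        # only the starting point changes
        parts = parts[start:] + parts[:start]
    return cycle(parts)


cycler = partition_cycle([0, 1, 2, 3])
first_four = list(islice(cycler, 4))
```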
in-memory logging. Address code review concerns
Bump version number to 0.9.1
Update readme to show supported Kafka/Python versions
Validate arguments in consumer.py, add initial consumer unit test
Make service kill() child processes when startup fails
Add tests for util.py, fix Python 2.6 specific bug.
integration tests, make skipped integration also skip setupClass, implement rudimentary offset support in consumer.py
working on intermittent failures in test_encode_fetch_request and test_encode_produce_request
conn.py performance improvements, make examples work, add another example
Conflicts:
example.py
TopicAndPartition fix when partition has no leader (leader = -1)
Conflicts:
test/test_unit.py
clarity
Support for multiple hosts on KafkaClient bootstrap (improves on #70)
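Multi-host bootstrap implies parsing a host list into (host, port) pairs; a minimal sketch of that parsing, with an illustrative function name and default port rather than the exact kafka-python API:

```python
def collect_hosts(hosts, default_port=9092):
    """Accept 'host1:1234,host2' (or a list of such strings) and
    return a list of (host, port) tuples, filling in default_port
    for entries that omit a port."""
    if isinstance(hosts, str):
        hosts = hosts.split(',')
    result = []
    for item in hosts:
        host, _, port = item.strip().partition(':')
        result.append((host, int(port) if port else default_port))
    return result
```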
Conflicts:
kafka/client.py
kafka/conn.py
setup.py
test/test_integration.py
test/test_unit.py
| | | |
Fixes mumrah/kafka-python#126
TL;DR
=====
This makes it possible to read and write snappy compressed streams that
are compatible with the java and scala kafka clients (the xerial
blocking format))
Xerial Details
==============
Kafka supports transparent compression of data (both in transit and at
rest) of messages, one of the allowable compression algorithms is
Google's snappy, an algorithm which has excellent performance at the
cost of efficiency.
The specific implementation of snappy used in kafka is the xerial-snappy
implementation, this is a readily available java library for snappy.
As part of this implementation, there is a specialised blocking format
that is somewhat none standard in the snappy world.
Xerial Format
-------------
The blocking mode of the xerial snappy library is fairly simple, using a
magic header to identify itself and then a size + block scheme, unless
otherwise noted all items in xerials blocking format are assumed to be
big-endian.
A block size (```xerial_blocksize``` in implementation) controls how
frequent the blocking occurs 32k is the default in the xerial library,
this blocking controls the size of the uncompressed chunks that will be
fed to snappy to be compressed.
The format winds up being
| Header | Block1 len | Block1 data | Blockn len | Blockn data |
| ----------- | ---------- | ------------ | ---------- | ------------ |
| 16 bytes | BE int32 | snappy bytes | BE int32 | snappy bytes |
It is important to not that the blocksize is the amount of uncompressed
data presented to snappy at each block, whereas the blocklen is the
number of bytes that will be present in the stream, that is the
length will always be <= blocksize.
Xerial blocking header
----------------------
Marker | Magic String | Null / Pad | Version | Compat
------ | ------------ | ---------- | -------- | --------
byte | c-string | byte | int32 | int32
------ | ------------ | ---------- | -------- | --------
-126 | 'SNAPPY' | \0 | variable | variable
The pad appears to be to ensure that SNAPPY is a valid cstring, and to
align the header on a word boundary.
The version is the version of this format as written by xerial, in the
wild this is currently 1 as such we only support v1.
Compat is there to claim the minimum supported version that can read a
xerial block stream, presently in the wild this is 1.
Implementation specific details
===============================
The implementation presented here follows the Xerial implementation as
of its v1 blocking format, no attempts are made to check for future
versions. Since none-xerial aware clients might have persisted snappy
compressed messages to kafka brokers we allow clients to turn on xerial
compatibility for message sending, and perform header sniffing to detect
xerial vs plain snappy payloads.
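The header layout and framing described above can be sketched as follows. This is an illustrative reconstruction from the commit message, not the actual kafka.codec implementation, and `compress` stands in for a real snappy compressor:

```python
import struct

# Xerial v1 header per the table above: marker byte -126, the c-string
# "SNAPPY" (the trailing NUL acts as the pad), then big-endian int32
# version and compat fields -- 16 bytes in total.
XERIAL_V1_HEADER = struct.pack('!b6sBii', -126, b'SNAPPY', 0, 1, 1)


def has_xerial_header(payload):
    """Sniff whether a payload starts with the xerial magic header."""
    return payload[:16] == XERIAL_V1_HEADER


def xerial_frame(data, compress, blocksize=32 * 1024):
    """Frame data xerial-style: header, then (BE int32 length, block)
    pairs, where each block is `compress` applied to at most blocksize
    bytes of uncompressed input -- so blocklen <= blocksize holds for
    any compressor that does not expand its input."""
    out = [XERIAL_V1_HEADER]
    for i in range(0, len(data), blocksize):
        block = compress(data[i:i + blocksize])
        out.append(struct.pack('!i', len(block)))
        out.append(block)
    return b''.join(out)
```

Header sniffing with `has_xerial_header` is what lets a reader distinguish xerial-framed payloads from plain snappy ones.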
Make producers take a topic argument at send rather than init time -- fixes Issue #110, but breaks backwards compatibility with previous Producer interface.
This allows a single producer to be used to send to multiple topics.
See https://github.com/mumrah/kafka-python/issues/110
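The interface change can be sketched as a toy before/after; the class and method names are illustrative stand-ins, not the exact kafka-python API:

```python
class SimpleProducer:
    """Toy sketch: topic is no longer fixed at construction time."""

    def __init__(self, client):
        self.client = client  # no topic bound here anymore

    def send_messages(self, topic, *messages):
        # topic is chosen per call, so one producer instance
        # can publish to many different topics
        return (topic, messages)


producer = SimpleProducer(client=None)
r1 = producer.send_messages('events', b'a')
r2 = producer.send_messages('logs', b'b')
```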