Commit log

Conflicts:
	example.py

Conflicts:
	test/test_unit.py

Conflicts:
	kafka/client.py
	kafka/conn.py
	setup.py
	test/test_integration.py
	test/test_unit.py

clarity

Increase default connection timeout

mahendra-repr
Conflicts:
	kafka/client.py
	kafka/consumer.py

Both errors are handled the same way when raised and caught, so this
makes sense.

This differentiates between errors that occur when sending the request
and those that occur when receiving the response, and adds
BufferUnderflowError handling.
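
A minimal sketch of what that separation can look like; the class and
wire details are illustrative, not the library's actual connection code:

```python
import socket

class ConnectionError(Exception):
    pass

class BufferUnderflowError(Exception):
    pass

class ConnSketch(object):
    def __init__(self, host, port, timeout=10):
        self._sock = socket.create_connection((host, port), timeout)

    def send(self, payload):
        # Failures here are request-send errors
        try:
            self._sock.sendall(payload)
        except socket.error as e:
            raise ConnectionError("error sending request: %s" % e)

    def recv(self, size):
        # Failures here are response-receive errors; a closed socket
        # before `size` bytes arrive is a buffer underflow
        chunks, received = [], 0
        while received < size:
            try:
                chunk = self._sock.recv(size - received)
            except socket.error as e:
                raise ConnectionError("error receiving response: %s" % e)
            if not chunk:
                raise BufferUnderflowError(
                    "expected %d bytes, got %d" % (size, received))
            chunks.append(chunk)
            received += len(chunk)
        return b"".join(chunks)
```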

* Remove bufsize from client and conn, since they're not actually enforced
Notes:
This commit changes behavior a bit by raising a BufferUnderflowError when
no data is received for the message size, rather than a ConnectionError.
Since bufsize on the socket is not actually enforced, but it is used by
the consumer when creating requests, it has been moved there until a
better solution is implemented.
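
A sketch of where the buffer size ends up living under this change; the
class and field names here are hypothetical:

```python
DEFAULT_BUFFER_SIZE = 4096

class ConsumerSketch(object):
    def __init__(self, client, topic, partition,
                 buffer_size=DEFAULT_BUFFER_SIZE):
        self.client = client
        self.topic = topic
        self.partition = partition
        # The consumer, not the connection, owns the buffer size: it
        # only ever mattered as a cap on fetch requests.
        self.buffer_size = buffer_size

    def build_fetch_request(self, offset):
        return {
            "topic": self.topic,
            "partition": self.partition,
            "offset": offset,
            "max_bytes": self.buffer_size,
        }
```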

Previously, if you tried to consume a message with a timeout greater than
10 seconds but received no data within those 10 seconds, a socket.timeout
exception was raised. This allows a higher socket timeout to be set, or
even None for no timeout.
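
A sketch of the configurable timeout, with hypothetical names;
socket.settimeout(None) puts the socket in blocking mode:

```python
import socket

def open_connection(host, port, timeout=None):
    # timeout=None blocks indefinitely; a number makes reads raise
    # socket.timeout after that many seconds of silence
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    sock.connect((host, port))
    return sock
```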

Conflicts:
	kafka/producer.py

failed_messages
- add integration tests for sync producer
- add integration tests for async producer w. leadership election
- use log.exception

As per the multiprocessing module's documentation, the objects passed to
the Process() class must be picklable on Windows, so the async producer
did not work on Windows. To fix this, code that uses multiprocessing has
to follow certain rules:
* The target= function should not be a member function
* We cannot pass objects like socket() to multiprocessing
This ticket fixes these issues. For KafkaClient and KafkaConnection
objects, we make copies of the object and reinit() them inside the
child processes.
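
A minimal sketch of that pattern; the class is illustrative, not the
actual KafkaConnection:

```python
import copy
import socket
from multiprocessing import Process

class ConnSketch(object):
    def __init__(self, host, port):
        self.host, self.port = host, port
        self._sock = None

    def reinit(self):
        # (Re)open the socket; called again inside the child process
        self._sock = socket.create_connection((self.host, self.port))

    def copy(self):
        # A socket-less copy is picklable and can cross the Process
        # boundary; the child must call reinit() before using it
        c = copy.copy(self)
        c._sock = None
        return c

def _produce(conn, messages):
    # Module-level target: bound methods are not picklable on Windows
    conn.reinit()
    for m in messages:
        conn._sock.sendall(m)

def start_producer(conn, messages):
    p = Process(target=_produce, args=(conn.copy(), messages))
    p.start()
    return p
```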

Conflicts:
	kafka/util.py

Conflicts:
	kafka/__init__.py
	kafka/consumer.py
	test/test_integration.py

The implementation is done using simple options to the Kafka Fetch
Request. Also, in the SimpleConsumer iterator, update the offset before
the message is yielded, so that consumer state is not lost in certain
cases. E.g., the message is yielded and consumed by the caller, but the
caller never comes back into the generator; the message would then be
consumed without the status being updated in the consumer.
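
A runnable sketch of that ordering, with a plain list standing in for the
fetched messages:

```python
class IteratorSketch(object):
    def __init__(self, messages):
        self.messages = messages
        self.offset = 0

    def __iter__(self):
        while self.offset < len(self.messages):
            msg = self.messages[self.offset]
            # Advance the offset *before* yielding: if the caller
            # consumes this message and never resumes the generator,
            # the stored offset still reflects the delivery.
            self.offset += 1
            yield msg

consumer = IteratorSketch(["a", "b", "c"])
next(iter(consumer))         # caller takes one message and walks away
assert consumer.offset == 1  # state was updated before the yield
```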

Also, ensure that the 'no-acks' case works fine. In conn.send(), do not
wait for the response; wait for it only in conn.recv(). This behaviour is
fine for now since the connection is not shared among consumer threads,
etc.
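
A sketch of that split, reusing the hypothetical connection from the
earlier sketch:

```python
import struct

def send_produce_request(conn, payload, acks=0):
    # send() only writes the request; it never blocks on a reply
    conn.send(payload)
    if acks == 0:
        # With acks=0 the broker sends no response, so don't recv()
        return None
    # Otherwise read the reply on demand: e.g. a 4-byte size prefix,
    # then that many bytes of response body
    (size,) = struct.unpack(">i", conn.recv(4))
    return conn.recv(size)
```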

* Ensure that the round-robin partitioner works fine
* _load_metadata_for_topics() would cause duplicate and stale entries in
self.topic_partitions; fix this
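
A minimal round-robin partitioner sketch (illustrative API, not the
library's exact signature):

```python
from itertools import cycle

class RoundRobinSketch(object):
    def __init__(self, partitions):
        # Hand out partitions in a fixed rotation, ignoring the key
        self._iter = cycle(partitions)

    def partition(self, key=None):
        return next(self._iter)

p = RoundRobinSketch([0, 1, 2])
assert [p.partition() for _ in range(5)] == [0, 1, 2, 0, 1]
```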

* When you initiate a producer with a non-existent queue, the queue is
created. However, this partition info is not reflected in KafkaClient()
immediately, so we wait for a second and try loading it again. Without
this fix, calling producer.send_messages() right after creating a new
queue makes the library throw a StopIteration exception.
* In SimpleConsumer(), the defaults are not as mentioned in the comments.
Fix this (or do we change the documentation?)
* There was a problem with the way the consumer iterator worked.
E.g., assume that there are 10 messages in the queue/topic and you
iterate over it as:
    for msg in consumer:
        print(msg)
At the end of this, the saved 'offset' is 10, so if you run the loop
again, the last message (10) is repeated. This is fixed by adjusting the
offset counter before fetching the message.
* Avoid some code repetition in consumer.commit()
* Fix a bug in the send_offset_commit_request() invocation in consumer.py
* Fix missing imports
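
A sketch of the retry described in the first bullet; the method and
attribute names come from the commit text, but the signature is assumed:

```python
import time

def wait_for_partitions(client, topic, retries=3, delay=1.0):
    # Producing to a non-existent topic auto-creates it, but the new
    # partition info may not be visible in the client immediately, so
    # reload metadata after a short wait instead of letting iteration
    # over zero partitions raise StopIteration.
    for _ in range(retries):
        client._load_metadata_for_topics(topic)
        partitions = client.topic_partitions.get(topic)
        if partitions:
            return partitions
        time.sleep(delay)
    raise RuntimeError("no partitions for %r after %d tries"
                       % (topic, retries))
```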

Fix a broken test (100k was too much to send in one batch)

Mark some features as not compatible with 0.8 (they will be added in 0.8.1)

Creates a producer process and one consumer process per partition. Uses
`multiprocessing.Queue` for communication between the parent process and
the producer/consumers.

```python
from kafka import KafkaClient, KafkaQueue  # assuming the top-level package re-exports these

kafka = KafkaClient("localhost", 9092)
q = KafkaQueue(kafka, client="test-queue", partitions=[0, 1])
q.put("test")
q.get()
q.close()
kafka.close()
```
Ref #8

Fixes #2