path: root/kafka
Commit message | Author | Age | Files | Lines
* Add more repr | Mark Roberts | 2014-02-24 | 1 | -1/+4
|
* Add SimpleProducer repr | Mark Roberts | 2014-02-24 | 1 | -0/+3
|
* Add consumer repr | Mark Roberts | 2014-02-24 | 1 | -0/+8
|
* Add repr to client | Mark Roberts | 2014-02-24 | 1 | -0/+3
|
* Correct __version__ in init | Mark Roberts | 2014-02-24 | 1 | -1/+1
|
* Ensure that multiprocess consumer works in windows | Mahendra M | 2013-10-08 | 1 | -53/+63
|
* Merge branch 'master' into prod-windows | Mahendra M | 2013-10-08 | 4 | -19/+49
|\      Conflicts:
|           kafka/producer.py
| * make changes to be more fault tolerant: clean up connections, brokers, failed_messages | Jim Lim | 2013-10-04 | 4 | -19/+49
| |     - add integration tests for sync producer
| |     - add integration tests for async producer w. leadership election
| |     - use log.exception
* | Ensure that async producer works in windows. Fixes #46 | Mahendra M | 2013-10-07 | 3 | -51/+87
|/      As per the multiprocessing module's documentation, the objects passed to
|       the Process() class must be pickle-able on Windows, so the async producer
|       did not work there. To fix this, code which uses multiprocessing has to
|       follow certain rules:
|       * The target=func should not be a member function
|       * We cannot pass objects like socket() to multiprocessing
|       This ticket fixes these issues. For KafkaClient and KafkaConnection
|       objects, we make copies of the object and reinit() them inside the child
|       processes.
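A minimal sketch, not the library's actual code, of the pickling-safe pattern described above: a module-level target function, no live sockets handed to Process(), and reinit() of a copied connection inside the child. All names below are illustrative.

    import copy
    import socket
    from multiprocessing import Process, Queue

    class Connection(object):
        """Stand-in for KafkaConnection: wraps a socket that cannot be pickled."""
        def __init__(self, host, port):
            self.host = host
            self.port = port
            self._sock = None

        def reinit(self):
            # (Re)open the socket; the child calls this on its own copy.
            self._sock = socket.create_connection((self.host, self.port))

        def __getstate__(self):
            # Drop the un-picklable socket whenever the object is copied or pickled.
            state = self.__dict__.copy()
            state['_sock'] = None
            return state

    def _producer_loop(conn, queue):
        # Module-level target (not a bound method), as Windows requires.
        conn.reinit()
        while True:
            msg = queue.get()
            if msg is None:
                break
            conn._sock.sendall(msg)

    def start_async_producer(host, port):
        conn = Connection(host, port)
        conn.reinit()                 # the parent keeps its own live socket
        queue = Queue()
        # Pass a copy so the parent's live socket is never pickled.
        proc = Process(target=_producer_loop, args=(copy.copy(conn), queue))
        proc.daemon = True
        proc.start()
        return conn, proc, queue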
* Test fixes after flake8 run | mrtheb | 2013-10-03 | 1 | -1/+2
|
* flake8 pass (pep8 and pyflakes) | mrtheb | 2013-10-03 | 6 | -80/+97
|
* remove unused exception instance variables | Vetoshkin Nikita | 2013-10-03 | 2 | -2/+2
|
* use NotImplementedError instead of NotImplemented | Vetoshkin Nikita | 2013-10-03 | 1 | -1/+1
|
* don't forget to call superclass __init__ | Vetoshkin Nikita | 2013-10-03 | 2 | -1/+3
|
* style: fix camelCase variable names again | Vetoshkin Nikita | 2013-10-03 | 1 | -2/+2
|
* style: remove extra brackets one more time | Vetoshkin Nikita | 2013-10-03 | 1 | -5/+5
|
* style: remove extra brackets | Vetoshkin Nikita | 2013-10-03 | 1 | -1/+1
|
* style: fix camelCase variable names once more | Vetoshkin Nikita | 2013-10-03 | 1 | -3/+3
|
* style: fix camelCase variable names | Vetoshkin Nikita | 2013-10-03 | 4 | -30/+29
|       Conflicts:
|           kafka/util.py
* style: use triple quotes for docstrings | Vetoshkin Nikita | 2013-10-03 | 2 | -4/+12
|
* style: fix whitespaces | Vetoshkin Nikita | 2013-10-03 | 3 | -1/+7
|
* Cherry-pick mrtheb/kafka-python 2b016b69 | mrtheb | 2013-10-03 | 2 | -4/+8
|       Set FetchRequest MaxBytes value to bufsize instead of fetchsize (=MinBytes)
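For context: MaxBytes caps how much data a single fetch may return for the partition, while MinBytes only sets how much the broker waits to accumulate before replying, so the two should not share one value. The request shape below is an assumption about this era of the library, shown only to illustrate the distinction.

    from collections import namedtuple

    # Assumed request shape for this era of the library; field names are illustrative.
    FetchRequest = namedtuple("FetchRequest", ["topic", "partition", "offset", "max_bytes"])

    bufsize = 4096     # ceiling on the bytes returned for this partition (MaxBytes)
    fetchsize = 1024   # minimum the broker waits to collect before replying (MinBytes)

    # The fix described above: size the response by bufsize, not by fetchsize.
    request = FetchRequest("my-topic", 0, 0, bufsize)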
* import bufferunderflow error | Jim Lim | 2013-09-27 | 1 | -0/+2
|
* Fix #44 Add missing exception class (v0.8.0) | David Arthur | 2013-09-24 | 5 | -17/+23
|       Also move the exceptions to common instead of util
* allow a client id to be passed to the client | Jim Lim | 2013-09-24 | 1 | -4/+5
|
* Auto-adjusting consumer fetch size | David Arthur | 2013-09-09 | 3 | -21/+37
|       Related to #42.
|
|       Adds new ConsumerFetchSizeTooSmall exception that is thrown when
|       `_decode_message_set_iter` gets a BufferUnderflowError but has not yet
|       yielded a message.
|
|       In this event, SimpleConsumer will increase the fetch size by 1.5 and
|       continue the fetching loop while _not_ increasing the offset (basically
|       just retries the request with a larger fetch size).
|
|       Once the consumer fetch size has been increased, it will remain
|       increased while SimpleConsumer fetches from that partition.
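A rough sketch of the retry behaviour described above (grow the fetch size by 1.5x and retry the same offset); the fetch helper and its signature are hypothetical.

    class ConsumerFetchSizeTooSmall(Exception):
        """Raised when a fetch response is too small to hold one whole message."""

    def fetch_at(fetch_fn, offset, fetch_size, max_fetch_size=None):
        # fetch_fn(offset, size) is a hypothetical helper that raises
        # ConsumerFetchSizeTooSmall when no complete message fits in `size`.
        while True:
            try:
                return fetch_fn(offset, fetch_size), fetch_size
            except ConsumerFetchSizeTooSmall:
                bigger = int(fetch_size * 1.5)
                if max_fetch_size and bigger > max_fetch_size:
                    raise
                # Retry the same offset; the caller keeps the larger size afterwards.
                fetch_size = bigger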
* Fixed #42, make fetch size configurable | David Arthur | 2013-09-08 | 1 | -4/+7
|       Was hard coded to 1024 bytes, which meant that larger messages were
|       unconsumable since they would always get split, causing the consumer to
|       stop.
|
|       It would probably be best to automatically retry truncated messages with
|       a larger request size so you don't have to know your max message size
|       ahead of time.
* Merge branch 'issue-35' | David Arthur | 2013-07-26 | 4 | -94/+445
|\      Conflicts:
|           kafka/__init__.py
|           kafka/consumer.py
|           test/test_integration.py
| * Fix minor bug in offset management | Mahendra M | 2013-07-01 | 1 | -1/+4
| |     In the current patch get_messages(count=1) would return zero messages
| |     the first time it is invoked after a consumer was initialized.
| * Add more cleanup in consumer.stop() | Mahendra M | 2013-06-28 | 1 | -5/+7
| |
| * Fix cases of single partition | Mahendra M | 2013-06-28 | 1 | -2/+3
| |
| * Add TODO comments | Mahendra M | 2013-06-27 | 1 | -0/+2
| |
| * Re-init the sockets in the new process | Mahendra M | 2013-06-27 | 3 | -1/+16
| |
| * Fix a bug in seek. | Mahendra M | 2013-06-27 | 1 | -0/+6
| |     This was hidden because of another bug in offset management.
| * Merge branch 'master' into partition | Mahendra M | 2013-06-25 | 2 | -26/+40
| |\    Conflicts:
| |         kafka/consumer.py
| * | Got MultiProcessConsumer working | Mahendra M | 2013-06-25 | 1 | -10/+20
| | |   Other changes:
| | |   * Put a message size restriction on the shared queue, to prevent
| | |     message overload
| | |   * Wait for a while after each process is started (in constructor)
| | |   * Wait for a while in each child if the consumer does not return any
| | |     messages, just to be nice to the CPU
| | |   * Control the start event more granularly; this prevents infinite loops
| | |     if the control does not return to the generator. For eg:
| | |       for msg in consumer:
| | |           assert False
| | |   * Update message status before yield
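A simplified sketch of the coordination described above (bounded shared queue, a start event, and short sleeps to be nice to the CPU); the names are illustrative rather than the consumer's real internals.

    import time
    from multiprocessing import Event, Process, Queue

    def _fetch_loop(queue, start, fetch_fn):
        # Child process: fetch only while the parent has signalled `start`,
        # and back off briefly when nothing was returned.
        while True:
            start.wait()
            messages = fetch_fn()          # hypothetical fetch helper
            if not messages:
                time.sleep(0.1)            # be nice to the CPU
                continue
            for message in messages:
                queue.put(message)         # blocks once the bounded queue is full

    def start_workers(fetch_fn, num_procs=2, queue_maxsize=1024):
        queue = Queue(maxsize=queue_maxsize)   # size restriction prevents overload
        start = Event()
        procs = []
        for _ in range(num_procs):
            proc = Process(target=_fetch_loop, args=(queue, start, fetch_fn))
            proc.daemon = True
            proc.start()
            time.sleep(0.1)                    # wait a while after each start
            procs.append(proc)
        start.set()                            # granular control over when children run
        return queue, procs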
| * | Added some comments about message state | Mahendra M | 2013-06-25 | 1 | -0/+7
| | |
| * | Implement blocking get_messages for SimpleConsumer | Mahendra M | 2013-06-25 | 2 | -16/+87
| | |   The implementation is done by using simple options to the Kafka Fetch
| | |   Request.
| | |
| | |   Also, in the SimpleConsumer iterator, update the offset before the
| | |   message is yielded. This is so that the consumer state is not lost in
| | |   certain cases. For eg: the message is yielded and consumed by the
| | |   caller, but the caller does not come back into the generator again. The
| | |   message will be consumed but the status is not updated in the consumer.
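A toy illustration of the ordering described above: advance the stored offset before yielding, so a caller that never re-enters the generator cannot receive the same message twice. The helper and its signature are hypothetical.

    def iter_messages(fetch_fn, offsets, partition):
        # fetch_fn(partition, offset) is a hypothetical helper returning
        # (next_offset, message) pairs starting at `offset`.
        while True:
            for next_offset, message in fetch_fn(partition, offsets[partition]):
                # Update the consumer state *before* handing the message out.
                offsets[partition] = next_offset
                yield message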
| * | Added the modules in __init__.py | Mahendra M | 2013-06-25 | 1 | -2/+3
| | |
| * | Added more documentation and clean up duplicate code | Mahendra M | 2013-06-25 | 1 | -86/+76
| | |
| * | Minor bug fix | Mahendra M | 2013-06-24 | 1 | -2/+3
| | |
| * | Add support for multi-process consumer | Mahendra M | 2013-06-24 | 1 | -48/+286
| | |
| * | Merge branch 'master' into partition | Mahendra M | 2013-06-20 | 3 | -0/+93
| |\ \
| * | | Add support to consume messages from specific partitions | Mahendra M | 2013-06-12 | 1 | -7/+11
| | | |     Currently the kafka SimpleConsumer consumes messages from all
| | | |     partitions. This commit will ensure that data is consumed only from
| | | |     partitions specified during init.
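A hedged usage sketch of what this adds; the constructor signatures, in particular the partitions keyword and the KafkaClient arguments, are assumed from this era of kafka-python and may differ.

    from kafka.client import KafkaClient
    from kafka.consumer import SimpleConsumer

    # Assumed constructor arguments for this era of the library.
    client = KafkaClient("localhost", 9092)

    # Read only partitions 0 and 2 of "my-topic" instead of all partitions.
    consumer = SimpleConsumer(client, "my-group", "my-topic", partitions=[0, 2])

    for message in consumer:
        print(message)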
* | | | Merge pull request #33 from mahendra/asyncproducer | David Arthur | 2013-07-11 | 4 | -26/+202
|\ \ \ \    Support for async producer
| | | | |   Merged locally, tests pass, +1
| * | | | Test cases for new producer | Mahendra M | 2013-06-27 | 1 | -1/+1
| | | | |
| * | | | Optimize sending of batch messages | Mahendra M | 2013-06-26 | 1 | -6/+9
| | | | |
| * | | | Got batched mode to work properly | Mahendra M | 2013-06-26 | 1 | -59/+48
| | | | |
| * | | | Update README with examples | Mahendra M | 2013-06-26 | 1 | -1/+1
| | | | |
| * | | | Add support for batched message send | Mahendra M | 2013-06-26 | 1 | -9/+100
| | | | |   Also improve the logic for stopping the async Processor instance.
| | | | |   Ensure that unsent messages are sent before it is stopped.
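An illustrative sketch of the stop-and-drain behaviour described above (flush whatever is still queued before the background sender exits); it uses a thread and hypothetical names rather than the producer's real multiprocessing internals.

    import queue
    import threading

    STOP = object()  # sentinel telling the sender loop to shut down

    class AsyncBatchSender(object):
        def __init__(self, send_fn, batch_size=20):
            self._queue = queue.Queue()
            self._send_fn = send_fn
            self._batch_size = batch_size
            self._thread = threading.Thread(target=self._run)
            self._thread.daemon = True
            self._thread.start()

        def send(self, msg):
            self._queue.put(msg)

        def stop(self):
            # Enqueue the sentinel and wait; everything queued before the
            # sentinel is still sent, so no pending message is dropped.
            self._queue.put(STOP)
            self._thread.join()

        def _run(self):
            batch = []
            while True:
                msg = self._queue.get()
                if msg is STOP:
                    break
                batch.append(msg)
                if len(batch) >= self._batch_size:
                    self._send_fn(batch)
                    batch = []
            if batch:                 # flush the final partial batch on stop
                self._send_fn(batch)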