path: root/kafka
Commit log (message, author, date, files changed, lines -removed/+added)
...
* Merge branch 'master' into lazythread
  Mahendra M, 2013-06-25 (3 files, -0/+93)

* Fix bugs and testing
  Mahendra M, 2013-06-13 (3 files, -5/+14)

  * Ensure that the round-robin partitioner works fine
  * _load_metadata_for_topics() would cause duplicate and stale entries
    in self.topic_partitions. Fix this.
* Implement support for keyed producer
  Mahendra M, 2013-06-13 (2 files, -0/+84)

  Provides support for two partitioners:
  * Round robin
  * Hashed (the default, as in the standard Kafka clients)
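The two partitioning strategies named above can be sketched as follows. This is an illustrative sketch, not the library's actual code, and the class names are assumptions:

```python
from itertools import cycle

class RoundRobinPartitioner:
    """Cycles through the partitions in order, ignoring the key."""
    def __init__(self, partitions):
        self._cycle = cycle(partitions)

    def partition(self, key):
        return next(self._cycle)

class HashedPartitioner:
    """Maps a key deterministically to a partition, so messages with
    the same key always land on the same partition."""
    def __init__(self, partitions):
        self._partitions = list(partitions)

    def partition(self, key):
        return self._partitions[hash(key) % len(self._partitions)]
```

The hashed variant is what keyed producers typically default to, since key-to-partition stability is what makes per-key ordering possible.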
* Fix an issue with thread argument
  Mahendra M, 2013-06-12 (1 file, -1/+1)

* Optimize auto-commit thread
  Mahendra M, 2013-06-12 (2 files, -33/+41)

  The previous commit optimized the commit thread so that the timer
  started only when there were messages to be consumed. This commit goes
  a step further and ensures the following:

  * Only one timer thread is created
  * The main app does not block on exit, waiting for the timer thread to
    finish

  This is achieved by having a single thread that blocks on an event and
  repeatedly calls a function. We use events instead of time.sleep() to
  prevent the Python interpreter from waking every 50 ms to check
  whether the timer has expired (logic copied from threading.Timer).
* Spawn the commit thread only if necessary
  Mahendra M, 2013-06-11 (2 files, -2/+8)

  If no messages are being consumed, the timer keeps creating new
  threads at the specified interval, which may not be necessary. We can
  control this behaviour so that the timer thread is started only when a
  message is consumed.

* Merge pull request #29 from mahendra/threading
  David Arthur, 2013-06-10 (1 file, -1/+2)

  Fix auto-commit issues with multi-threading

* Fix auto-commit issues with multi-threading
  Mahendra M, 2013-06-03 (1 file, -1/+2)

* Split fixtures out to a separate file
  Ivan Pouzyrevsky, 2013-06-07 (1 file, -3/+3)

* Beautify codec.py
  Ivan Pouzyrevsky, 2013-06-07 (1 file, -24/+21)

* Refactor and update integration tests
  Ivan Pouzyrevsky, 2013-06-07 (2 files, -0/+12)

* Finish making the remaining files PEP 8 ready
  Mahendra M, 2013-06-04 (2 files, -94/+168)

* PEP8-ify most of the files
  Mahendra M, 2013-05-29 (7 files, -128/+309)

  consumer.py and conn.py will be done later, after pending merges.

* Removing the bit about offsets
  David Arthur, 2013-05-29 (1 file, -5/+0)
* Minor bug fixes
  Mahendra M, 2013-05-29 (2 files, -12/+35)

  * When you initiate a producer with a non-existent queue, the queue is
    created. However, this partition info is not reflected in
    KafkaClient() immediately, so we wait for a second and try loading
    it again. Without this fix, calling producer.send_messages() right
    after creating a new queue makes the library throw a StopIteration
    exception.
  * In SimpleConsumer(), the defaults are not as mentioned in the
    comments. Fix this (or do we change the documentation?)
  * There was a problem with the way the consumer iterator worked. For
    example, assume that there were 10 messages in the queue/topic and
    you iterate over it as

        for msg in consumer:
            print(msg)

    At the end of this, the saved 'offset' is 10. So, if you run the
    above loop again, the last message is repeated. This can be fixed by
    adjusting the offset counter before fetching the message.
  * Avoid some code repetition in consumer.commit()
  * Fix a bug in the send_offset_commit_request() invocation in
    consumer.py
  * Fix missing imports
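The iterator fix in the third bullet can be shown with a toy consumer (a hypothetical stand-in class; an in-memory list plays the role of the fetched messages). Advancing the offset before handing the message back keeps the counter equal to the number of messages actually consumed, so a second pass replays nothing:

```python
class ToyConsumer:
    """Toy model of the consumer iterator offset fix."""
    def __init__(self, messages):
        self._messages = list(messages)  # stand-in for fetched data
        self.offset = 0

    def __iter__(self):
        while self.offset < len(self._messages):
            msg = self._messages[self.offset]
            self.offset += 1  # adjust the counter BEFORE yielding
            yield msg
```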
* Merge branch 'issue-22'
  David Arthur, 2013-05-28 (1 file, -0/+24)

  Conflicts:
      kafka/consumer.py

* Closes #22
  David Arthur, 2013-05-28 (1 file, -20/+0)

  Removed the get_messages API, added a test for get_pending.

* Missed a doc string
  Mahendra M, 2013-05-27 (1 file, -0/+2)

* New API for getting a specified set of messages
  Mahendra M, 2013-05-27 (1 file, -0/+18)

  This will be easier to use in cases where we have to get only a
  specified set of messages. The API uses the __iter__ API internally,
  but maintains the state needed to give back only the required set of
  messages. The API is: get_messages(count=1)
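One way such a get_messages(count=1) can be layered on top of __iter__ is with itertools.islice, which consumes only the requested number of items and leaves the iterator positioned for the next call. A sketch under assumed names, not the library's actual code:

```python
from itertools import islice

class ConsumerIterSketch:
    """get_messages(count) built on the consumer's own iterator;
    successive calls continue where the previous one stopped."""
    def __init__(self, messages):
        self._it = iter(messages)

    def __iter__(self):
        return self._it

    def get_messages(self, count=1):
        # islice stops after `count` items (or earlier if exhausted)
        # without discarding the rest of the stream.
        return list(islice(self._it, count))
```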
* New API for checking pending message count
  Mahendra M, 2013-05-27 (1 file, -0/+24)

* Adding a debug statement
  David Arthur, 2013-05-28 (1 file, -1/+2)

* Auto-commit timer is not periodic
  Mahendra M, 2013-05-28 (2 files, -13/+23)

  The auto-commit timer is one-shot: after the first commit, it does not
  fire again. This ticket fixes the issue. Also, some duplicate code in
  util.ReentrantTimer() was cleaned up.
* Fixes #14
  David Arthur, 2013-04-11 (1 file, -1/+1)

* A few fixes for offset APIs in 0.8.1
  David Arthur, 2013-04-02 (2 files, -3/+1)

* Update kafka-src to latest trunk, enable 0.8.1 features
  David Arthur, 2013-04-02 (2 files, -3/+0)

* Update kafka-src to latest 0.8
  David Arthur, 2013-04-02 (2 files, -2/+3)

  Fix a broken test (100k messages was too much to send in one batch).

* Bring acks and timeout down to the client
  David Arthur, 2013-04-02 (2 files, -5/+5)
* Refactoring a bit, cleanup for 0.8
  David Arthur, 2013-04-02 (5 files, -154/+193)

  Marking some stuff as not compatible with 0.8 (will be added in 0.8.1).

* Big code re-org
  David Arthur, 2013-04-02 (9 files, -646/+822)

* Some work on a simple consumer
  David Arthur, 2013-04-02 (1 file, -4/+32)

* Started on a simple producer and consumer
  David Arthur, 2013-04-02 (2 files, -6/+58)

* Removing __main__ stuff from client.py
  David Arthur, 2013-04-02 (1 file, -66/+0)

* Integration tests passing
  David Arthur, 2013-04-02 (4 files, -1374/+744)
* Protocol and low-level client done, adding tests
  David Arthur, 2013-04-02 (2 files, -78/+118)

* Fix a bunch of bugs
  David Arthur, 2013-04-02 (2 files, -65/+138)

* First pass of cleanup/refactoring
  David Arthur, 2013-04-02 (3 files, -225/+487)

  Also added a bunch of docstrings.

* Starting work on 0.8 compat
  David Arthur, 2013-04-02 (2 files, -0/+562)

* Adding client_fetch_size to queue interface
  David Arthur, 2013-04-02 (1 file, -3/+34)

  Also more docs.

* Add some docs and KafkaQueue config
  David Arthur, 2012-11-19 (1 file, -19/+47)

  Ref #8
* Add a Queue-like producer/consumer
  David Arthur, 2012-11-19 (2 files, -0/+122)

  Creates a producer process and one consumer process per partition.
  Uses `multiprocessing.Queue` for communication between the parent
  process and the producer/consumers.

  ```python
  kafka = KafkaClient("localhost", 9092)
  q = KafkaQueue(kafka, client="test-queue", partitions=[0, 1])
  q.put("test")
  q.get()
  q.close()
  kafka.close()
  ```

  Ref #8

* Add Snappy support (tag: 0.1-alpha)
  David Arthur, 2012-11-16 (3 files, -1/+41)

  Fixes #2

* Clean up imports in client, fixed #5
  David Arthur, 2012-10-30 (1 file, -3/+0)

* Replace socket.send with socket.sendall, fixes #6
  David Arthur, 2012-10-30 (1 file, -5/+5)
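The send/sendall swap matters because socket.send may write only part of the buffer, returning how many bytes were actually sent, while socket.sendall retries until everything is written. A sketch of the loop sendall performs internally (the helper name is illustrative):

```python
import socket

def send_all(sock, data):
    """Keep calling send() until every byte of `data` is written,
    mirroring what socket.sendall does for you."""
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise ConnectionError("socket connection broken")
        total += sent
```

A bare sock.send(data) that silently drops the unsent tail corrupts a length-prefixed wire protocol like Kafka's, which is why the fix touched every send call.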
* Error handling fix
  Ben Frederickson, 2012-10-25 (1 file, -1/+1)

* Isn't it nice when tests actually find bugs
  David Arthur, 2012-10-02 (1 file, -4/+3)

* Packaging improvements
  David Arthur, 2012-10-02 (1 file, -1/+11)

  Can now:

  ```python
  import kafka
  kafka.KafkaClient("localhost", 9092)
  ```

  or

  ```python
  from kafka.client import KafkaClient
  KafkaClient("localhost", 9092)
  ```

  or

  ```python
  import kafka.client
  kafka.client.KafkaClient("localhost", 9092)
  ```
* Renaming kafka.py to client.py
  David Arthur, 2012-10-02 (1 file, -0/+0)

* Moved codec stuff into its own module
  David Arthur, 2012-10-02 (3 files, -22/+26)

  Snappy will go there when I get around to it.

* Start work on packaging, issue #3
  David Arthur, 2012-10-02 (2 files, -0/+630)