Commit log (most recent first). Each entry: Commit message [branch/tag] (Author, Age, Files changed, Lines -deleted/+added)
* cleanup java install script  [add-python-37-support]  (Dana Powers, 2019-03-12, 1 file, -20/+1)
* Maybe pypy2.7-6.0?  (Dana Powers, 2019-03-12, 1 file, -2/+2)
* replace java entry in PATH  (Dana Powers, 2019-03-12, 1 file, -1/+1)
* Maybe fixup PATH?  (Dana Powers, 2019-03-12, 1 file, -0/+9)
* Fix pypy travis install for xenial  (Dana Powers, 2019-03-12, 1 file, -2/+2)
* Remove update-alternatives --config java; fix JAVA_HOME  (Dana Powers, 2019-03-12, 1 file, -4/+3)
* list available java alternatives b/c this isn't working...  (Dana Powers, 2019-03-12, 1 file, -0/+5)
* Try install openjdk-8-jdk via travis.yml apt packages  (Dana Powers, 2019-03-12, 2 files, -3/+13)
* Use xenial  (Dana Powers, 2019-03-12, 1 file, -0/+2)
* Try travis_java_install script from pywrangler  (Dana Powers, 2019-03-12, 2 files, -0/+23)
* Revert TRAVIS_PYTHON_VERSION change  (Dana Powers, 2019-03-12, 1 file, -1/+1)
* Perhaps no xenial?  (Dana Powers, 2019-03-12, 1 file, -3/+0)
* Try openjdk8?  (Dana Powers, 2019-03-12, 1 file, -3/+1)
* xenial needs explicit 3rd party apt source  (Dana Powers, 2019-03-12, 1 file, -0/+2)
* setting jdk on non-java project does not work... also drop sudo and improve tox variable handling  (Dana Powers, 2019-03-12, 1 file, -6/+3)
* Pin travis tests to openjdk8  (Dana Powers, 2019-03-12, 1 file, -2/+3)
* Use xenial dist for travis builds  (Dana Powers, 2019-03-12, 1 file, -13/+4)
* Add python 3.7 support  (Jeff Widman, 2019-03-12, 5 files, -7/+21)

    Add Python 3.7 to the tests. Note that Travis requires a workaround
    for now. Document 3.7 support on PyPI.

* Synchronize puts to KafkaConsumer protocol buffer during async sends  (Dana Powers, 2019-03-12, 2 files, -25/+60)
* Do network connections and writes in KafkaClient.poll() (#1729)  (Dana Powers, 2019-03-08, 6 files, -57/+84)

    * Add BrokerConnection.send_pending_requests to support async network sends
    * Send network requests during KafkaClient.poll() rather than in KafkaClient.send()
    * Don't acquire lock during KafkaClient.send if node is connected / ready
    * Move all network connection IO into KafkaClient.poll() (see the sketch below)

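    A schematic sketch of the queue-then-flush pattern these bullets describe
    (hypothetical class and method bodies, not the actual kafka-python code):
    send() only enqueues a request on the connection, and all socket writes
    happen from poll() via send_pending_requests().

```
import collections

class BrokerConnectionSketch:
    def __init__(self):
        self._pending = collections.deque()

    def send(self, request):
        # Enqueue only -- no socket IO here, so callers never block on the network.
        self._pending.append(request)

    def send_pending_requests(self):
        # Called from the client's poll() loop; this is where bytes would hit the socket.
        while self._pending:
            self._write_to_socket(self._pending.popleft())

    def _write_to_socket(self, request):
        print('writing', request)  # placeholder for a real socket write

class KafkaClientSketch:
    def __init__(self, conns):
        self._conns = conns

    def send(self, conn, request):
        # No lock, no IO: just hand the request to the connection's queue.
        conn.send(request)

    def poll(self):
        # All connection setup and network writes are driven from here.
        for conn in self._conns:
            conn.send_pending_requests()

conn = BrokerConnectionSketch()
client = KafkaClientSketch([conn])
client.send(conn, b'metadata-request')
client.poll()  # -> writing b'metadata-request'
```
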
* Do not require client lock for read-only operations (#1730)  (Dana Powers, 2019-03-06, 1 file, -50/+50)

    In an effort to reduce the surface area of lock coordination, and thereby
    hopefully reduce lock contention, I think we can remove locking from the
    read-only KafkaClient methods: connected, is_disconnected,
    in_flight_request_count, and least_loaded_node. Given that the read data
    could change after the lock is released but before the caller uses it,
    the value of acquiring a lock here does not seem high to me.

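    A minimal sketch of that reasoning (simplified shapes, not the real
    KafkaClient): holding a lock in a read-only accessor buys little, because
    the value can go stale the instant the lock is released anyway.

```
import threading

class ClientSketch:
    def __init__(self):
        self._lock = threading.Lock()
        self._conns = {}  # node_id -> conn; mutated elsewhere under self._lock

    def connected_with_lock(self, node_id):
        # Before: even pure reads were serialized on the client lock.
        with self._lock:
            conn = self._conns.get(node_id)
            return conn is not None and conn.connected()

    def connected(self, node_id):
        # After: lock-free read. The result may be stale by the time the
        # caller acts on it -- but that was equally true with the lock.
        conn = self._conns.get(node_id)
        return conn is not None and conn.connected()
```
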
* Use test.fixtures.version not test.conftest.version to avoid warnings (#1731)  (Dana Powers, 2019-03-06, 4 files, -8/+4)
* Make NotEnoughReplicasError/NotEnoughReplicasAfterAppendError retriable (#1722)  (le-linh, 2019-03-03, 1 file, -0/+2)
* Drop dependency on sphinxcontrib-napoleon  (Stanislav Levin, 2019-02-27, 1 file, -1/+0)

    Since 1.3b1 (released Oct 10, 2014), Sphinx has supported NumPy and
    Google style docstrings via the sphinx.ext.napoleon extension. That
    extension is already in use here, but the sphinxcontrib-napoleon
    requirement is still present.

    Signed-off-by: Stanislav Levin <slev@altlinux.org>

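    For context, the switch amounts to a conf.py entry like the following
    (a hypothetical fragment, not this repo's exact file): the built-in
    extension replaces the separate sphinxcontrib-napoleon package.

```
# docs/conf.py (hypothetical fragment)
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.napoleon',  # bundled with Sphinx >= 1.3b1; no sphinxcontrib-napoleon needed
]
```
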
* Fix test_legacy_correct_metadata_response on x86 arch (#1718)  (Stanislav Levin, 2019-02-21, 1 file, -1/+1)

    The problem is that the type of the required operation result is "long":

```
>>> type(278251978 & 0xffffffff)
<type 'long'>
```

    However, by default the "format" method uses __format__():

```
>>> (278251978 & 0xffffffff).__format__('')
'278251978'
```

    So, let's compare things using the same engine:

```
>>> "{!r}".format(278251978 & 0xffffffff)
'278251978L'
```

    Fixes: https://github.com/dpkp/kafka-python/issues/1717
    Signed-off-by: Stanislav Levin <slev@altlinux.org>

* Remove unused import  (Jeff Widman, 2019-01-28, 1 file, -1/+0)
* Improve KafkaConsumer join group / only enable Heartbeat Thread during stable group (#1695)  (Dana Powers, 2019-01-15, 1 file, -11/+23)
* Travis CI: 'sudo' tag is now deprecated in Travis (#1698)  (cclauss, 2019-01-13, 1 file, -2/+0)
* Remove unused `skip_double_compressed_messages`  (Jeff Widman, 2019-01-13, 2 files, -16/+0)

    This `skip_double_compressed_messages` flag was added in
    https://github.com/dpkp/kafka-python/pull/755 in order to fix
    https://github.com/dpkp/kafka-python/issues/718.

    However, grep'ing through the code, it looks like this is no longer used
    anywhere and doesn't do anything, so removing it.

* Timeout all unconnected conns (incl SSL) after request_timeout_ms  (Dana Powers, 2019-01-13, 1 file, -6/+8)
* Fix `AttributeError` caused by `getattr()`  (Jeff Widman, 2019-01-07, 1 file, -1/+2)

    `getattr(object, 'x', object.y)` will evaluate the default argument
    `object.y` regardless of whether `'x'` exists. For details see:
    https://stackoverflow.com/q/31443989/770425

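    A small illustration of the pitfall (hypothetical names, not the patched
    kafka-python code): Python evaluates the third argument to getattr()
    before the attribute lookup happens.

```
class Obj:
    @property
    def y(self):
        raise AttributeError('y not available yet')

obj = Obj()

# Broken: obj.y is evaluated eagerly, so this raises AttributeError
# even though a default was supplied for the missing 'x':
#   value = getattr(obj, 'x', obj.y)

# Safe: only compute the fallback when 'x' is really missing.
_missing = object()
value = getattr(obj, 'x', _missing)
if value is _missing:
    value = 'fallback'  # evaluated lazily, only on this branch
print(value)  # -> 'fallback'
```
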
* Use Popen.communicate() instead of Popen.wait()  (Brian Sang, 2019-01-05, 1 file, -9/+10)

    Popen objects may deadlock when using stdout=PIPE or stderr=PIPE with
    Popen.wait(). Using Popen.communicate() avoids the issue.

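    A sketch of the scenario and the fix (illustrative child process, not the
    fixture code this commit touches): with wait(), a child that fills its
    pipe buffer blocks forever because nobody is reading; communicate()
    drains the pipes while waiting.

```
import subprocess
import sys

proc = subprocess.Popen(
    [sys.executable, '-c', "print('x' * 1000000)"],  # enough to fill an OS pipe buffer
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)

# Risky: calling proc.wait() here can deadlock once the pipe buffer fills.
stdout, stderr = proc.communicate()  # drains both pipes, then reaps the child
print(len(stdout), proc.returncode)  # e.g. 1000001 0
```
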
* Fix SSL connection testing in Python 3.7  (Ben Weir, 2019-01-03, 1 file, -0/+7)
* Fix response error checking in KafkaAdminClient send_to_controller  (Dana Powers, 2019-01-03, 2 files, -5/+15)

    Previously we weren't accounting for when the response tuple also has an
    `error_message` value. Note that in Java, the error fieldname is
    inconsistent:
    - `CreateTopicsResponse` / `CreatePartitionsResponse` uses `topic_errors`
    - `DeleteTopicsResponse` uses `topic_error_codes`
    So this updates the `CreateTopicsResponse` classes to match. The fix is a
    little brittle, but should suffice for now.

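    A hedged sketch of what checking such responses can look like, given the
    field-name inconsistency described above (a generic helper written for
    illustration, not the actual send_to_controller code):

```
def first_topic_error(response):
    # Different admin responses name their per-topic error list differently.
    entries = (getattr(response, 'topic_errors', None)
               or getattr(response, 'topic_error_codes', None)
               or [])
    for entry in entries:
        topic, error_code = entry[0], entry[1]
        # Newer response versions append an error_message field.
        error_message = entry[2] if len(entry) > 2 else None
        if error_code != 0:
            return topic, error_code, error_message
    return None
```
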
* #1681 add copy() in metrics() to avoid thread safety issues (#1682)  (Tosi Émeric, 2018-12-27, 2 files, -4/+4)
* Bugfix: Types need identity comparison  (Jeff Widman, 2018-12-13, 1 file, -1/+1)

    `isinstance()` won't work here, as the types require identity comparison.

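    A minimal illustration of the distinction (hypothetical classes,
    unrelated to the patched code): isinstance() accepts subclasses, while
    `type(x) is T` is an identity check against the exact type.

```
class Base:
    pass

class Child(Base):
    pass

obj = Child()

print(isinstance(obj, Base))  # True  -- accepts subclasses
print(type(obj) is Base)      # False -- identity: exact type only
print(type(obj) is Child)     # True
```
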
* Bump version for development  (Dana Powers, 2018-11-20, 1 file, -1/+1)
* Release 1.4.4  [1.4.4]  (Dana Powers, 2018-11-20, 3 files, -5/+42)
* Cleanup formatting, no functional changes  (Jeff Widman, 2018-11-20, 1 file, -23/+23)

    Cleanup the formatting; remove parens, extraneous spaces, etc.

* Rename KafkaAdmin to KafkaAdminClient  (Jeff Widman, 2018-11-20, 6 files, -26/+26)
* Update kafka broker compatibility docs  (Dana Powers, 2018-11-20, 3 files, -6/+9)
* Bump travis test for 1.x brokers to 1.1.1  (Dana Powers, 2018-11-20, 1 file, -1/+1)
* Add test resources for kafka versions 1.0.2 -> 2.0.1  (Dana Powers, 2018-11-20, 16 files, -1/+941)
* Break KafkaClient poll if closed  (Dana Powers, 2018-11-20, 1 file, -0/+2)
* Add protocols for {Describe,Create,Delete} Acls  (Ulrik Johansson, 2018-11-19, 1 file, -0/+185)
* Bugfix: Always set this_groups_coordinator_id  (Jeff Widman, 2018-11-19, 1 file, -1/+3)
* Various docstring / pep8 / code hygiene cleanups  (Jeff Widman, 2018-11-18, 1 file, -71/+86)
* Fix describe_groups  (Jeff Widman, 2018-11-18, 1 file, -13/+50)

    This was completely broken previously because it didn't look up the group
    coordinator of the consumer group. Also added basic error
    handling/raising.

    Note: I added the `group_coordinator_id` as an optional kwarg. As best I
    can tell, the Java client doesn't include this and instead looks it up
    every time. However, adding it allows the caller the flexibility to
    bypass the network round trip of the lookup if for some reason they
    already know the `group_coordinator_id`. (Usage sketch below.)

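    A hedged usage sketch of that kwarg. The public method name
    `describe_consumer_groups` and the `group_coordinator_id` kwarg follow
    this commit's description as exposed in later kafka-python releases;
    the broker address and node id are invented.

```
from kafka import KafkaAdminClient

admin = KafkaAdminClient(bootstrap_servers='localhost:9092')

# Default: the client first looks up the group's coordinator
# (one extra network round trip).
groups = admin.describe_consumer_groups(['my-group'])

# If the coordinator's node id is already known, skip the lookup:
groups = admin.describe_consumer_groups(['my-group'], group_coordinator_id=3)
```
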
* Set a clear default value for `validate_only`/`include_synonyms`  (Jeff Widman, 2018-11-18, 1 file, -8/+8)

    Previously each kwarg defaulted to `None` but was then sent as `False`;
    making the default explicitly `False` reduces ambiguity.

* Fix list_consumer_groups() to query all brokers  (Jeff Widman, 2018-11-18, 1 file, -5/+39)

    Previously, this only queried the controller. In actuality, the Kafka
    protocol requires that the client query all brokers in order to get the
    full list of consumer groups.

    Note: the Java code (as best I can tell) doesn't allow limiting this to
    specific brokers. And on the surface, this makes sense... you typically
    don't care about specific brokers.

    However, the inverse is true... consumer groups care about knowing their
    group coordinator so they don't have to repeatedly query to find it. In
    fact, a Kafka broker will only return the groups that it's a coordinator
    for. While this is an implementation detail that is not guaranteed by the
    upstream broker code, and technically should not be relied upon, I think
    it very unlikely to change.

    So monitoring scripts that fetch the offsets or describe the consumer
    groups of all groups in the cluster can simply issue one call per broker
    to identify all the coordinators, rather than having to issue one call
    per consumer group. For an ad-hoc script this doesn't matter, but for a
    monitoring script that runs every couple of minutes, this can be a big
    deal. I know in the situations where I will use this, this matters more
    to me than the risk of the interface unexpectedly breaking.
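    A hedged usage sketch of the behavior described above (the `broker_ids`
    kwarg follows this commit's implementation as exposed in later
    kafka-python releases; the address and broker id are invented):

```
from kafka import KafkaAdminClient

admin = KafkaAdminClient(bootstrap_servers='localhost:9092')

# Fixed behavior: queries every broker, returning the cluster-wide group list.
all_groups = admin.list_consumer_groups()

# Per-broker queries let a monitoring script map each group to its
# coordinator, since a broker only reports groups it coordinates.
groups_on_broker_1 = admin.list_consumer_groups(broker_ids=[1])
```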