path: root/kafka
Commit message | Author | Date | Files | Lines
* Patch Release 2.0.1 (tag: 2.0.1) | Dana Powers | 2020-02-19 | 1 | -1/+1
|
* KAFKA-8962: Use least_loaded_node() for describe_topics() | Jeff Widman | 2020-02-16 | 1 | -15/+7
|   In KAFKA-8962 the `AdminClient.describe_topics()` call was changed from using the
|   controller to using the `least_loaded_node()`:
|
|   https://github.com/apache/kafka/commit/317089663cc7ff4fdfcba6ee434f455e8ae13acd#diff-6869b8fccf6b098cbcb0676e8ceb26a7R1540
|
|   As a result, no metadata request/response processing needs to happen through the
|   controller, so it's safe to remove the custom error-checking. Besides, I don't think
|   this error-checking even added any value because AFAIK no metadata response would
|   return a `NotControllerError` because the recipient broker wouldn't realize the
|   metadata request was intended for only the controller.
|
|   Originally our admin client was implemented using the least-loaded-node, then later
|   updated to the controller. So updating it back to least-loaded node is a simple case
|   of reverting the associated commits.
|
|   This reverts commit 7195f0369c7dbe25aea2c3fed78d2b4f772d775b.
|   This reverts commit 6e2978edee9a06e9dbe60afcac226b27b83cbc74.
|   This reverts commit f92889af79db08ef26d89cb18bd48c7dd5080010.
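A minimal sketch of the call described above, using only the public KafkaAdminClient API (the bootstrap address and topic name are placeholders):

    from kafka.admin import KafkaAdminClient

    admin = KafkaAdminClient(bootstrap_servers='localhost:9092')
    # After this change, describe_topics() sends its metadata request to the
    # least loaded node instead of the controller; omitting the argument
    # describes all topics.
    topic_metadata = admin.describe_topics(['example-topic'])
    admin.close()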
* Fix topic error parsing in MetadataResponse | Jeff Tribble | 2020-02-15 | 1 | -6/+11
|
* Bump version for development of next release | Dana Powers | 2020-02-10 | 1 | -1/+1
|
* Release 2.0.0 (tag: 2.0.0) | Dana Powers | 2020-02-10 | 1 | -1/+1
|
* _send_request_to_controller returns a raw result, not a future | Tyler Lubeck | 2020-02-06 | 1 | -6/+6
|
* Use the controller for topic metadata requests | Tyler Lubeck | 2020-02-06 | 1 | -6/+9
|   Closes #1994
* Implement list_topics, describe_topics, and describe_cluster | Tyler Lubeck | 2020-02-06 | 1 | -6/+40
|
* Implement methods to convert a Struct object to a pythonic object (#1951) | Tyler Lubeck | 2020-02-06 | 2 | -1/+37
|   Implement methods to convert a Struct object to a pythonic object
* Remove unused import | Jeff Widman | 2020-02-05 | 1 | -1/+0
|   Forgot to remove this in https://github.com/dpkp/kafka-python/pull/1925 /
|   ca2d76304bfe3900f995e6f0e4377b2ef654997e
* Remove some dead code | Jeff Widman | 2020-02-05 | 3 | -196/+0
|
* Fix slots usage and use more slots | Carson Ip | 2020-02-05 | 4 | -0/+26
|   Use empty slots for ABC classes, otherwise classes which inherit from them will
|   still have __dict__. Also use __slots__ for more classes.
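A small generic illustration of the __slots__ point made above (not taken from the kafka-python source):

    import abc

    class Base(abc.ABC):
        __slots__ = ()          # empty slots on the ABC so it contributes no __dict__

        @abc.abstractmethod
        def run(self):
            ...

    class Child(Base):
        __slots__ = ('name',)   # effective only because every base also defines __slots__

        def __init__(self, name):
            self.name = name

        def run(self):
            return self.name

    c = Child('demo')
    # c.__dict__ raises AttributeError: instances carry only the declared slots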
* Do not log topic-specific errors in full metadata fetch (#1980) | Dana Powers | 2019-12-29 | 1 | -0/+4
|
* Optionally return OffsetAndMetadata from consumer.committed(tp) (#1979) | Dana Powers | 2019-12-29 | 4 | -9/+16
|
* Do not block on sender thread join after timeout in producer.close() (#1974) | Dana Powers | 2019-12-29 | 1 | -5/+1
|
* Raise AssertionError if consumer closed in poll() (#1978) | Dana Powers | 2019-12-29 | 1 | -0/+3
|
* Reset conn configs on exception in conn.check_version() (#1977) | Dana Powers | 2019-12-29 | 1 | -2/+7
|
* Log retriable coordinator NodeNotReady, TooManyInFlightRequests as debug not error (#1975) | Dana Powers | 2019-12-29 | 1 | -2/+5
|
* Implement __eq__ and __hash__ for ACL objects (#1955) | Tyler Lubeck | 2019-12-29 | 1 | -1/+33
|
* Fixes KafkaAdminClient returning `IncompatibleBrokerVersion` when passing an `api_version` (#1953) | Ian Bucad | 2019-12-29 | 1 | -0/+1
|
* Fix typo | Dana Powers | 2019-12-29 | 1 | -1/+1
|
* Admin protocol updates (#1948) | Tyler Lubeck | 2019-12-29 | 2 | -30/+266
|
* Style updates to scram sasl support | Dana Powers | 2019-12-29 | 2 | -78/+87
|
* Enable SCRAM-SHA-256 and SCRAM-SHA-512 for sasl (#1918) | Swen Wenzel | 2019-12-29 | 5 | -35/+157
|
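As a usage note for the SCRAM support above, a minimal consumer configuration sketch (broker address, credentials, and topic are placeholders):

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        'example-topic',
        bootstrap_servers='broker.example.com:9093',
        security_protocol='SASL_SSL',
        sasl_mechanism='SCRAM-SHA-256',        # or 'SCRAM-SHA-512'
        sasl_plain_username='example-user',
        sasl_plain_password='example-password',
    )
    for record in consumer:
        print(record.topic, record.partition, record.offset)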
* Improve docs for reconnect_backoff_max_ms (#1976) | Dana Powers | 2019-12-28 | 5 | -25/+30
|
* Fix simple typo: managementment -> management | Tim Gates | 2019-12-08 | 1 | -1/+1
|   Closes #1965
* Fix typos | Carson Ip | 2019-11-08 | 7 | -9/+9
|
* Remove deprecated `ConnectionError` (#1816) | Jeff Widman | 2019-10-11 | 1 | -4/+0
|   This has been deprecated for a bit in favor of `KafkaConnectionError` because it
|   conflicts with Python's built-in `ConnectionError`. Time to remove it as part of
|   cleaning up our old deprecated code.
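For client code that still imports the removed alias, only the import needs to change; a minimal sketch, assuming a broad KafkaError handler is already in place:

    from kafka.errors import KafkaConnectionError, KafkaError

    def log_failure(exc):
        # KafkaConnectionError subclasses KafkaError, so existing broad handlers
        # keep working; only references to the removed ConnectionError alias
        # need updating.
        if isinstance(exc, KafkaConnectionError):
            print('broker connection failed:', exc)
        elif isinstance(exc, KafkaError):
            print('other client error:', exc)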
* Remove SimpleClient, Producer, Consumer, Unittest (#1196) | Jeff Widman | 2019-10-11 | 23 | -3408/+79
|   In the 2.0 release, we're removing:
|
|   * `SimpleClient`
|   * `SimpleConsumer`
|   * `SimpleProducer`
|   * Old partitioners used by `SimpleProducer`; these are superseded by the `DefaultPartitioner`
|
|   These have been deprecated for several years in favor of `KafkaClient` /
|   `KafkaConsumer` / `KafkaProducer`. Since 2.0 allows breaking changes, we are
|   removing the deprecated classes.
|
|   Additionally, since the only usage of `unittest` was in tests for these old
|   Simple* clients, this also drops `unittest` from the library. All tests now run
|   under `pytest`.
* Fix describe config for multi-broker clusters (#1869) | Jeppe Andersen | 2019-10-11 | 1 | -14/+56
|   Fix describe config for multi-broker clusters
|
|   Currently all describe config requests are sent to "least loaded node". Requests
|   for broker configs must, however, be sent to the specific broker, otherwise an
|   error is returned. Only topic requests can be handled by any node. This changes
|   the logic to send all describe config requests to the specific broker.
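A sketch of the two request shapes the fix distinguishes, assuming the ConfigResource / ConfigResourceType helpers from kafka.admin (broker id, topic name, and bootstrap address are placeholders):

    from kafka.admin import KafkaAdminClient, ConfigResource, ConfigResourceType

    admin = KafkaAdminClient(bootstrap_servers='localhost:9092')

    # Broker configs must be answered by that specific broker...
    broker_configs = admin.describe_configs(
        [ConfigResource(ConfigResourceType.BROKER, '1')])

    # ...while topic configs can be answered by any broker in the cluster.
    topic_configs = admin.describe_configs(
        [ConfigResource(ConfigResourceType.TOPIC, 'example-topic')])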
* Update docstring to match conn.py's (#1921) | David Bouchare | 2019-10-03 | 1 | -1/+2
|
* Release 1.4.7 (#1916) (tag: 1.4.7) | Dana Powers | 2019-09-30 | 1 | -1/+1
|
* Follow up to PR 1782 -- fix tests (#1914) | Dana Powers | 2019-09-30 | 1 | -1/+2
|
* Improve/refactor bootstrap_connected | Dana Powers | 2019-09-30 | 4 | -14/+22
|
* Added a function to determine if bootstrap is successfully connected (#1876) | PandllCom | 2019-09-30 | 2 | -7/+20
|
* Issue #1780 - Consumer hang indefinitely in fetcher._retrieve_offsets() due to topic deletion while rebalancing (#1782) | Commander Dishwasher | 2019-09-30 | 2 | -8/+26
|
* Change coordinator lock acquisition order (#1821) | Dana Powers | 2019-09-29 | 2 | -43/+39
|
* Send socket data via non-blocking IO with send buffer (#1912) | Dana Powers | 2019-09-29 | 3 | -12/+105
|
* Do not use wakeup when sending fetch requests from consumer (#1911) | Dana Powers | 2019-09-29 | 1 | -1/+1
|
* Rely on socket selector to detect completed connection attempts (#1909) | Dana Powers | 2019-09-28 | 3 | -9/+13
|
* Wrap consumer.poll() for KafkaConsumer iteration (#1902) | Dana Powers | 2019-09-28 | 3 | -11/+74
|
* Fix Admin Client api version checking; only test ACL integration on 0.11+ | Dana Powers | 2019-09-28 | 1 | -4/+10
|
* Add ACL api to KafkaAdminClient (#1833) | Ulrik Johansson | 2019-09-28 | 4 | -9/+488
|
* Improve connection lock handling; always use context manager (#1895) | Dana Powers | 2019-09-03 | 1 | -126/+151
|
* Reduce internal client poll timeout for consumer iterator interface (#1824) | Dana Powers | 2019-08-16 | 1 | -3/+1
|   More attempts to address heartbeat timing issues in consumers, especially with
|   the iterator interface. Here we can reduce the `client.poll` timeout to at most
|   the retry backoff (typically 100ms) so that the consumer iterator interface
|   doesn't block for longer than the heartbeat timeout.
* Update conn.py | Cameron Boulton | 2019-08-16 | 1 | -0/+3
|
* Break FindCoordinator into request/response methods | Jeff Widman | 2019-07-31 | 1 | -32/+48
|   This splits the `_find_coordinator_id()` method (which is blocking) into request
|   generation / response parsing methods.
|
|   The public API does not change. However, this allows power users who are willing
|   to deal with risk of private methods changing under their feet to decouple
|   generating the message futures from processing their responses. In other words,
|   you can use these to fire a bunch of requests at once and delay processing the
|   responses until all requests are fired. This is modeled on the work done in #1845.
|
|   Additionally, I removed the code that tried to leverage the error checking from
|   `cluster.add_group_coordinator()`. That code had changed in #1822, removing most
|   of the error checking... so it no longer adds any value, but instead merely
|   increases complexity and coupling.
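A rough sketch of the fire-then-process pattern the message describes. The underscore-prefixed names below are internal and given here as illustrative guesses at the split, so treat this as pseudocode rather than a stable API:

    from kafka.admin import KafkaAdminClient

    admin = KafkaAdminClient(bootstrap_servers='localhost:9092')
    group_ids = ['group-a', 'group-b', 'group-c']

    # Fire every FindCoordinator request first (hypothetical request-generation method)...
    futures = [admin._find_coordinator_id_send_request(g) for g in group_ids]

    # ...then block once and parse all responses (hypothetical helpers).
    admin._wait_for_futures(futures)
    coordinators = {
        g: admin._find_coordinator_id_process_response(f.value)
        for g, f in zip(group_ids, futures)
    }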
* Fix minor typo (#1865) | Carson Ip | 2019-07-14 | 2 | -2/+2
|
* Update link to upstream Kafka docs | Jeff Widman | 2019-07-11 | 1 | -1/+1
|   the new consumer is now the standard consumer, so they dropped the `new_` from
|   the anchor
* Add the `sasl_kerberos_domain_name` arg to `KafkaAdminClient` | Jeff Widman | 2019-06-28 | 1 | -0/+3
|   Previously the `sasl_kerberos_domain_name` was missing from the Admin client.
|   It is already present in the Consumer/Producer, and in all three cases gets
|   transparently passed down to the client.
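A minimal sketch of passing the new argument (hostnames are placeholders, and a working Kerberos/GSSAPI environment is assumed):

    from kafka.admin import KafkaAdminClient

    admin = KafkaAdminClient(
        bootstrap_servers='broker.example.com:9092',
        security_protocol='SASL_PLAINTEXT',
        sasl_mechanism='GSSAPI',
        sasl_kerberos_service_name='kafka',
        sasl_kerberos_domain_name='broker.example.com',  # previously missing from the admin client
    )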