path: root/kafka
Commit log (each entry: message [author, date, files changed, lines -removed/+added]):
* Return float(inf) seconds instead of sys.maxsize int in coordinator time to next poll() (ref: consumer_iterator_with_poll) [Dana Powers, 2019-09-28, 1 file, -1/+5]
* Add internal update_offsets param to consumer poll(); default to new iterator [Dana Powers, 2019-09-28, 1 file, -7/+13]
* Wrap consumer.poll() for KafkaConsumer iteration [Dana Powers, 2019-09-28, 2 files, -7/+60]
* Add ACL api to KafkaAdminClient (#1833) [Ulrik Johansson, 2019-09-28, 4 files, -9/+488]
* Improve connection lock handling; always use context manager (#1895) [Dana Powers, 2019-09-03, 1 file, -126/+151]
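The entry above standardizes on `with lock:` instead of manual acquire/release. A minimal sketch of why that matters, using a hypothetical `Connection` class (not kafka-python's actual `BrokerConnection`): if an exception escapes between a manual `acquire()` and `release()`, the lock leaks and every later caller deadlocks; the context manager always releases.

```python
import threading

class Connection:
    """Toy connection illustrating lock handling via context manager.
    Hypothetical class, purely for illustration."""

    def __init__(self):
        self._lock = threading.Lock()
        self.sent = []

    def send_unsafe(self, payload):
        # Manual handling: if encode() raises, release() is never reached
        # and the lock stays held forever.
        self._lock.acquire()
        data = payload.encode("utf-8")
        self.sent.append(data)
        self._lock.release()

    def send(self, payload):
        # Context manager releases the lock even when an exception escapes.
        with self._lock:
            self.sent.append(payload.encode("utf-8"))

conn = Connection()
try:
    conn.send_unsafe(b"oops")   # bytes has no .encode -> AttributeError
except AttributeError:
    pass
leaked = conn._lock.locked()    # True: the manual release was skipped
conn._lock.release()            # cleanup just so the demo can continue

try:
    conn.send(b"oops")          # same failure, but the lock is released
except AttributeError:
    pass
released = not conn._lock.locked()
conn.send("ok")
```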
* Reduce internal client poll timeout for consumer iterator interface (#1824) [Dana Powers, 2019-08-16, 1 file, -3/+1]
  More attempts to address heartbeat timing issues in consumers, especially with the iterator interface. Here we can reduce the `client.poll` timeout to at most the retry backoff (typically 100ms) so that the consumer iterator interface doesn't block for longer than the heartbeat timeout.
* Update conn.py [Cameron Boulton, 2019-08-16, 1 file, -0/+3]
* Break FindCoordinator into request/response methods [Jeff Widman, 2019-07-31, 1 file, -32/+48]
  This splits the `_find_coordinator_id()` method (which is blocking) into request generation / response parsing methods. The public API does not change. However, this allows power users who are willing to deal with the risk of private methods changing under their feet to decouple generating the message futures from processing their responses. In other words, you can use these to fire a bunch of requests at once and delay processing the responses until all requests are fired. This is modeled on the work done in #1845.
  Additionally, I removed the code that tried to leverage the error checking from `cluster.add_group_coordinator()`. That code had changed in #1822, removing most of the error checking, so it no longer adds any value but instead merely increases complexity and coupling.
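The request/response split described above can be sketched with `concurrent.futures`. The names below (`_find_coordinator_request`, `_find_coordinator_response`, the broker table) are illustrative stand-ins, not kafka-python's actual private API; the point is firing all requests first, then processing responses in a second pass.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical lookup table standing in for real broker responses.
_GROUP_COORDINATORS = {"billing": 1, "audit": 2}

def _find_coordinator_request(executor, group_id):
    # Request-generation half: returns a future immediately, never blocks.
    return executor.submit(lambda: {"group": group_id,
                                    "node_id": _GROUP_COORDINATORS[group_id]})

def _find_coordinator_response(future):
    # Response-parsing half: blocks only when the caller wants the result.
    response = future.result()
    return response["node_id"]

with ThreadPoolExecutor(max_workers=4) as executor:
    # Fire all requests up front...
    futures = {g: _find_coordinator_request(executor, g)
               for g in ("billing", "audit")}
    # ...then process the responses once everything is in flight.
    coordinators = {g: _find_coordinator_response(f)
                    for g, f in futures.items()}
```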
* Fix minor typo (#1865) [Carson Ip, 2019-07-14, 2 files, -2/+2]
* Update link to upstream Kafka docs [Jeff Widman, 2019-07-11, 1 file, -1/+1]
  The new consumer is now the standard consumer, so they dropped the `new_` from the anchor.
* Add the `sasl_kerberos_domain_name` arg to `KafkaAdminClient` [Jeff Widman, 2019-06-28, 1 file, -0/+3]
  Previously the `sasl_kerberos_domain_name` arg was missing from the Admin client. It is already present in the Consumer/Producer, and in all three cases it gets transparently passed down to the client.
* Update KafkaAdminClient Docs [Rob Cardy, 2019-06-21, 1 file, -1/+2]
  Updated to include SASL_PLAINTEXT and SASL_SSL as options for security_protocol.
* Allow the coordinator to auto-commit for all api_versions [Jay Chan, 2019-06-20, 1 file, -1/+1]
* Break consumer operations into request / response methods (#1845) [Jeff Widman, 2019-06-19, 1 file, -94/+155]
  This breaks some of the consumer operations into request generation / response parsing methods. The public API does not change. However, this allows power users who are willing to deal with the risk of private methods changing under their feet to decouple generating the message futures from processing their responses. In other words, you can use these to fire a bunch of requests at once and delay processing the responses until all requests are fired.
* Use dedicated connection for group coordinator (#1822) [Dana Powers, 2019-06-19, 2 files, -25/+17]
  This changes the coordinator_id to be a unique string, e.g. `coordinator-1`, so that it will get a dedicated connection. This won't eliminate lock contention because the client lock applies to all connections, but it should improve in-flight-request contention.
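The trick above can be sketched with a toy connection pool (hypothetical shapes, not kafka-python's internals): keying the coordinator's connection by a unique string id instead of the broker's numeric node id yields a separate connection to the same physical broker, so coordinator requests no longer queue behind fetch requests in flight.

```python
class ConnectionPool:
    """Toy pool: one connection object per distinct node-id key."""

    def __init__(self):
        self._conns = {}

    def get(self, node_id):
        if node_id not in self._conns:
            self._conns[node_id] = object()  # stand-in for a real connection
        return self._conns[node_id]

pool = ConnectionPool()
fetch_conn = pool.get(1)                # regular traffic to broker 1
coord_conn = pool.get("coordinator-1")  # group coordinator on the same broker

# Same broker, separate connections: coordinator requests get their own
# in-flight-request queue.
```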
* Delay converting to seconds [Jeff Widman, 2019-05-30, 1 file, -2/+2]
  Delaying the conversion to seconds makes the code intent more clear.
* Reduce client poll timeout when no IFRs (in-flight requests) [Dana Powers, 2019-05-29, 1 file, -0/+3]
* Catch TimeoutError in BrokerConnection send/recv (#1820) [Dana Powers, 2019-05-29, 1 file, -6/+7]
* Remove unused/weird comment line (#1813) [Jeff Widman, 2019-05-28, 1 file, -1/+0]
* Update docs for api_version_auto_timeout_ms (#1812) [Jeff Widman, 2019-05-24, 2 files, -2/+2]
  The docs for `api_version_auto_timeout_ms` mention setting `api_version='auto'`, but that value has been deprecated for years in favor of `api_version=None`. This updates the docs for now; support for `'auto'` will be removed in the next major version bump.
* Fix typo in _fetch_all_topic_metadata function (#1809) [Brian Sang, 2019-05-23, 1 file, -1/+1]
* Make partitions_for_topic a read-through cache (#1781) [Brian Sang, 2019-05-22, 1 file, -8/+25]
  If the cluster metadata object has no info about the topic, issue a blocking metadata call to fetch it.
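The read-through pattern above, in miniature (hypothetical class and `fetch_fn`, not the library's actual code): serve from the cache when the topic is known, otherwise fall through to a blocking fetch and remember the result.

```python
class PartitionCache:
    """Toy read-through cache over a blocking metadata fetch."""

    def __init__(self, fetch_fn):
        self._cache = {}
        self._fetch = fetch_fn   # stand-in for a blocking metadata call
        self.misses = 0

    def partitions_for_topic(self, topic):
        if topic not in self._cache:
            self.misses += 1
            self._cache[topic] = self._fetch(topic)  # read-through on miss
        return self._cache[topic]

cache = PartitionCache(lambda topic: {0, 1, 2})
first = cache.partitions_for_topic("events")   # miss: fetches and caches
second = cache.partitions_for_topic("events")  # hit: served from cache
```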
* Remove unused imports (#1808) [Jeff Widman, 2019-05-22, 1 file, -2/+2]
* Use futures to parallelize calls to _send_request_to_node() (#1807) [Lou-Cipher, 2019-05-21, 1 file, -34/+75]
  Use `futures` to parallelize calls to `_send_request_to_node()`. This allows queries that need to go to multiple brokers to be run in parallel.
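A sketch of that parallelization idea: submit one task per broker and collect all responses, instead of sending serially. `_send_request_to_node` here is a stand-in that sleeps to simulate a network round trip; it is not the library's actual method.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def _send_request_to_node(node_id):
    # Stand-in for a real broker request: ~50ms of simulated latency.
    time.sleep(0.05)
    return (node_id, "ok")

node_ids = [0, 1, 2, 3]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=len(node_ids)) as pool:
    # All four "requests" run concurrently; map preserves input order.
    responses = dict(pool.map(_send_request_to_node, node_ids))
elapsed = time.monotonic() - start
# Roughly one round trip of wall time rather than four back to back.
```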
* Update link to kafka docs [Jeff Widman, 2019-05-17, 1 file, -1/+1]
  Now that the old zookeeper consumer has been completely deprecated/removed, these are no longer the "new consumer configs" but rather simply the "consumer configs".
* A little python cleanup (#1805) [Jeff Widman, 2019-05-17, 1 file, -4/+2]
  1. Remove unused variable: `partitions_for_topic`
  2. No need to cast to list, as `sorted()` already returns a list
  3. Using `enumerate()` is cleaner than `range(len())` and handles assigning `member`
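Item 3 above in a nutshell, on toy data (not the library's assignment logic): `enumerate()` hands the loop both the index and the element, avoiding the `range(len(...))` indexing dance while producing the same result.

```python
members = ["alice", "bob", "carol"]
partitions = [10, 11, 12]

# range(len()) style: index-only loop, element fetched by subscript.
assignment_old = {}
for i in range(len(partitions)):
    member = members[i % len(members)]
    assignment_old.setdefault(member, []).append(partitions[i])

# enumerate() style: the loop header hands us index and element together.
assignment_new = {}
for i, partition in enumerate(partitions):
    member = members[i % len(members)]
    assignment_new.setdefault(member, []).append(partition)
```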
* Bump version for development of next release [Dana Powers, 2019-04-03, 1 file, -1/+1]
* Release 1.4.6 (tag: 1.4.6) [Dana Powers, 2019-04-02, 1 file, -1/+1]
* Do not call state_change_callback with lock (#1775) [Dana Powers, 2019-04-02, 2 files, -21/+29]
* Additional BrokerConnection locks to synchronize protocol/IFR state (#1768) [Dana Powers, 2019-04-02, 1 file, -61/+85]
* Return connection state explicitly after close in connect() (#1778) [Dana Powers, 2019-04-02, 1 file, -1/+3]
* Reset reconnect backoff on SSL connection (#1777) [Dana Powers, 2019-04-02, 1 file, -0/+1]
* Fix possible AttributeError during conn._close_socket (#1776) [Dana Powers, 2019-04-01, 1 file, -1/+1]
* Don't treat popped conn.close() as failure in state change callback (#1773) [Dana Powers, 2019-04-01, 1 file, -3/+10]
* Avoid race condition on client._conns in send() (#1772) [Dana Powers, 2019-03-31, 1 file, -2/+3]
  There was a very small possibility that between checking `self._can_send_request(node_id)` and grabbing the connection object via `self._conns[node_id]`, the connection could get closed / recycled / removed from _conns and cause a KeyError. This PR should prevent such a KeyError. In the case where the connection is disconnected by the time we call send(), we should expect conn.send() simply to fail the request.
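The race described above, and the shape of the fix, can be sketched as follows. The `Client` class here is a hypothetical stand-in, not kafka-python's actual code: the key idea is using `dict.get()` and failing the request when the connection is gone, rather than subscripting and risking a `KeyError`.

```python
class Client:
    """Toy client: send() must tolerate a connection vanishing from
    _conns between the readiness check and the lookup."""

    def __init__(self):
        self._conns = {"node-1": "conn"}

    def send(self, node_id):
        # .get() never raises KeyError if the node was closed/recycled;
        # a missing connection just fails the request.
        conn = self._conns.get(node_id)
        if conn is None:
            return ("failed", "NodeNotReady")
        return ("sent", conn)

client = Client()
ok = client.send("node-1")
del client._conns["node-1"]     # simulate the close/recycle race
failed = client.send("node-1")  # fails cleanly instead of raising
```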
* Lock client.check_version (#1771) [Dana Powers, 2019-03-31, 1 file, -0/+5]
* Don't wakeup during maybe_refresh_metadata -- it is only called by poll() (#1769) [Dana Powers, 2019-03-30, 1 file, -4/+4]
* Revert 703f0659 / fix 0.8.2 protocol quick detection (#1763) [Dana Powers, 2019-03-27, 2 files, -6/+9]
* Send pending requests before waiting for responses (#1762) [Dana Powers, 2019-03-27, 1 file, -2/+4]
* Don't do client wakeup when sending from sender thread (#1761) [Dana Powers, 2019-03-24, 2 files, -6/+10]
* Update sasl configuration docstrings [Dana Powers, 2019-03-23, 5 files, -24/+24]
* Support SASL OAuthBearer Authentication (#1750) [Phong Pham, 2019-03-22, 7 files, -6/+117]
* Maintain shadow cluster metadata for bootstrapping (#1753) [Dana Powers, 2019-03-21, 2 files, -35/+21]
* Allow configuration of SSL Ciphers (#1755) [Dana Powers, 2019-03-21, 4 files, -1/+28]
* Wrap SSL sockets after connecting (#1754) [Dana Powers, 2019-03-21, 1 file, -19/+11]
* Fix race condition in protocol.send_bytes (#1752) [Filip Stefanak, 2019-03-21, 1 file, -1/+2]
* Bump version for development [Dana Powers, 2019-03-14, 1 file, -1/+1]
* Release 1.4.5 (tag: 1.4.5) [Dana Powers, 2019-03-14, 1 file, -1/+1]
* Error if connections_max_idle_ms not larger than request_timeout_ms (#1688) [Jeff Widman, 2019-03-14, 1 file, -3/+7]
* Retry bootstrapping after backoff when necessary (#1736) [Dana Powers, 2019-03-14, 2 files, -84/+89]
  The current client attempts to bootstrap once during initialization, but if it fails there is no second attempt and the client will be inoperable. This can happen, for example, if an entire cluster is down at the time a long-running client starts execution.
  This commit attempts to fix that by removing the synchronous bootstrapping from `KafkaClient` init and instead merging bootstrap metadata with the cluster metadata. The Java client uses a similar approach. This allows us to continue falling back to bootstrap data when necessary throughout the life of a long-running consumer or producer.
  Fixes #1670.
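The merge-bootstrap-into-metadata idea can be sketched like this. The `ClusterMetadata` class below is a hypothetical simplification, not kafka-python's actual implementation: bootstrap brokers live alongside learned brokers from the start, so the client can always fall back to them and retry bootstrap later instead of bootstrapping exactly once in `__init__`.

```python
class ClusterMetadata:
    """Toy metadata object that keeps bootstrap brokers as a fallback."""

    def __init__(self, bootstrap_servers):
        # Bootstrap nodes are part of the metadata from construction time.
        self._bootstrap_brokers = {f"bootstrap-{i}": addr
                                   for i, addr in enumerate(bootstrap_servers)}
        self._brokers = {}

    def update(self, brokers):
        # A metadata response replaces the learned-broker view.
        self._brokers = dict(brokers)

    def brokers(self):
        # Fall back to bootstrap data whenever no live brokers are known.
        return self._brokers or self._bootstrap_brokers

cluster = ClusterMetadata(["host1:9092"])
before = dict(cluster.brokers())       # only bootstrap data available yet
cluster.update({1: "broker1:9092"})
after = dict(cluster.brokers())        # learned metadata wins
cluster.update({})                     # e.g. the whole cluster went away
fallback = dict(cluster.brokers())     # back to bootstrap brokers, retryable
```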