next poll()
More attempts to address heartbeat timing issues in consumers, especially with the iterator interface. Here we can reduce the `client.poll` timeout to at most the retry backoff (typically 100ms) so that the consumer iterator interface doesn't block for longer than the heartbeat timeout.
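A minimal sketch of the clamping idea, assuming a kafka-python-style `retry_backoff_ms` setting; the helper name and config value are illustrative, not the real consumer internals:

```python
# Sketch only: clamp the poll timeout used by the iterator loop to the retry
# backoff, so a single client.poll() call can never outlast heartbeat duties.

def iterator_poll_timeout_ms(requested_timeout_ms, retry_backoff_ms=100):
    """Timeout to hand to client.poll() from the consumer iterator loop."""
    # Block for at most the retry backoff (typically 100ms), even if the
    # caller asked to wait much longer for records.
    return min(requested_timeout_ms, retry_backoff_ms)


print(iterator_poll_timeout_ms(30000))   # -> 100
print(iterator_poll_timeout_ms(50))      # -> 50
```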
This splits the `_find_coordinator_id()` method (which is blocking) into request generation / response parsing methods.

The public API does not change. However, this allows power users who are willing to deal with the risk of private methods changing under their feet to decouple generating the message futures from processing their responses. In other words, you can use these to fire a bunch of requests at once and delay processing the responses until all of the requests have been fired. This is modeled on the work done in #1845.

Additionally, I removed the code that tried to leverage the error checking from `cluster.add_group_coordinator()`. That code changed in #1822, which removed most of the error checking, so it no longer adds any value and merely increases complexity and coupling.
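A rough illustration of the fire-then-process pattern this enables. The private helper names below are assumptions based on this description; they are not public API and may change without notice:

```python
# Sketch: look up coordinators for many groups by sending every FindCoordinator
# request first, then waiting for and parsing the responses together.
# _find_coordinator_id_send_request / _find_coordinator_id_process_response are
# assumed names for the split described above.

def find_coordinator_ids(admin, group_ids):
    # Phase 1: generate and send every request, collecting the futures.
    futures = {
        group_id: admin._find_coordinator_id_send_request(group_id)
        for group_id in group_ids
    }
    # Phase 2: wait for all responses, then parse them.
    for f in futures.values():
        while not f.is_done:
            admin._client.poll(future=f)
    return {
        group_id: admin._find_coordinator_id_process_response(f.value)
        for group_id, f in futures.items()
    }
```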
The new consumer is now the standard consumer, so the `new_` prefix was dropped from the anchor.
Previously, the `sasl_kerberos_domain_name` config was missing from the Admin client. It is already present in the Consumer and Producer, and in all three cases it is passed transparently down to the client.
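For example, after this change the Admin client accepts the same Kerberos settings as the Consumer/Producer. Host names and the domain below are placeholders:

```python
from kafka.admin import KafkaAdminClient

# sasl_kerberos_domain_name is now accepted here and forwarded to the
# underlying client, as it already was for KafkaConsumer and KafkaProducer.
admin = KafkaAdminClient(
    bootstrap_servers='broker1.example.com:9092',
    security_protocol='SASL_PLAINTEXT',
    sasl_mechanism='GSSAPI',
    sasl_kerberos_service_name='kafka',
    sasl_kerberos_domain_name='kerberos.example.com',
)
```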
Updated to include `SASL_PLAINTEXT` and `SASL_SSL` as options for `security_protocol`.
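For example, a consumer using SASL over TLS might be configured like this (credentials and hosts are placeholders):

```python
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    'my-topic',
    bootstrap_servers='broker1.example.com:9093',
    security_protocol='SASL_SSL',     # or 'SASL_PLAINTEXT' without TLS
    sasl_mechanism='PLAIN',
    sasl_plain_username='alice',
    sasl_plain_password='secret',
)
```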
This breaks some of the consumer operations into request generation / response parsing methods.

The public API does not change. However, this allows power users who are willing to deal with the risk of private methods changing under their feet to decouple generating the message futures from processing their responses. In other words, you can use these to fire a bunch of requests at once and delay processing the responses until all of the requests have been fired.
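The decomposition looks roughly like the sketch below. The function names and the ListGroups example are illustrative only; the real private methods may be shaped differently:

```python
from kafka.protocol.admin import ListGroupsRequest

# Illustrative shape of one split operation: the send half only builds the
# request and returns a future, the process half only parses a response.

def _list_groups_send_request(client, broker_id):
    """Send a ListGroups request to one broker; return the response future."""
    return client.send(broker_id, ListGroupsRequest[0]())

def _list_groups_process_response(response):
    """Turn a ListGroups response into (group_id, protocol_type) tuples."""
    return list(response.groups)
```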
This changes the `coordinator_id` to be a unique string, e.g., `coordinator-1`, so that it will get a dedicated connection. This won't eliminate lock contention because the client lock applies to all connections, but it should improve in-flight-request contention.
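A small sketch of the naming scheme described above; the helper is illustrative, not the real coordinator code:

```python
# Illustrative only: give the group coordinator its own connection id instead
# of reusing the broker's plain node id, so it gets a dedicated connection.
def coordinator_connection_id(coordinator_node_id):
    return 'coordinator-{}'.format(coordinator_node_id)

print(coordinator_connection_id(1))   # -> 'coordinator-1'
```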
Delaying the conversion to seconds makes the intent of the code clearer.
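A before/after illustration of the idea; the variable names are made up:

```python
import time

request_timeout_ms = 305000   # config values stay in milliseconds

# before: convert up front and carry an ambiguously named value around
# request_timeout = request_timeout_ms / 1000.0

# after: keep milliseconds until a seconds-based API actually needs them
deadline = time.time() + request_timeout_ms / 1000.0
```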
The docs for `api_version_auto_timeout_ms` mention setting `api_version='auto'`, but that value has been deprecated for years in favor of `api_version=None`.

This updates the docs for now; support for `'auto'` will be removed in the next major version bump.
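For example:

```python
from kafka import KafkaConsumer

# api_version=None (the default) tells the client to probe the broker version;
# the probe is bounded by api_version_auto_timeout_ms. 'auto' is the deprecated
# spelling of the same behaviour.
consumer = KafkaConsumer(
    'my-topic',
    bootstrap_servers='broker1.example.com:9092',
    api_version=None,
    api_version_auto_timeout_ms=2000,
)
```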
If the cluster metadata object has no info about the topic, then issue a blocking metadata call to fetch it.
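A rough sketch of the fallback, assuming access to the wrapped `KafkaClient` instance; the real method and its error handling will differ:

```python
def partitions_for(client, topic, timeout_ms=5000):
    """Return the partition set for `topic`, refreshing metadata if unknown."""
    partitions = client.cluster.partitions_for_topic(topic)
    if partitions is None:
        # Unknown topic: start tracking it and block on the resulting
        # metadata update before giving up.
        future = client.add_topic(topic)
        client.poll(future=future, timeout_ms=timeout_ms)
        partitions = client.cluster.partitions_for_topic(topic)
    return partitions
```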
Use `futures` to parallelize calls to `_send_request_to_node()`. This allows queries that need to go to multiple brokers to run in parallel.
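Schematically, using the `_send_request_to_node()` helper named above; the waiting step is simplified and the surrounding names are illustrative:

```python
# Sketch: fan a request out to several brokers, then wait on all the futures
# together so the round trips overlap instead of running back to back.
def fan_out(admin, broker_ids, make_request):
    futures = {
        broker_id: admin._send_request_to_node(broker_id, make_request())
        for broker_id in broker_ids
    }
    for f in futures.values():
        while not f.is_done:
            admin._client.poll(future=f)
    return {broker_id: f.value for broker_id, f in futures.items()}
```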
Now that the old ZooKeeper consumer has been completely deprecated/removed, these are no longer the "new consumer configs" but rather simply the "consumer configs".
1. Remove the unused variable `partitions_for_topic`.
2. No need to cast to a list, as `sorted()` already returns a list.
3. Using `enumerate()` is cleaner than `range(len())` and handles assigning `member` (see the sketch after this list).
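A toy before/after for the `enumerate()` point; the data and assignment scheme are made up for illustration:

```python
members = ['consumer-b', 'consumer-a', 'consumer-c']
partitions = list(range(6))

# before (roughly): index arithmetic plus a redundant list() around sorted()
# for i in range(len(list(sorted(members)))):
#     member = sorted(members)[i]
#     ...

# after: sorted() already returns a list, and enumerate() yields the index
# and the member together
assignment = {}
for i, member in enumerate(sorted(members)):
    assignment[member] = partitions[i::len(members)]

print(assignment)
# {'consumer-a': [0, 3], 'consumer-b': [1, 4], 'consumer-c': [2, 5]}
```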
There was a very small possibility that, between checking `self._can_send_request(node_id)` and grabbing the connection object via `self._conns[node_id]`, the connection could get closed / recycled / removed from `_conns` and cause a KeyError. This PR should prevent such a KeyError. In the case where the connection is disconnected by the time we call `send()`, we should expect `conn.send()` simply to fail the request.
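A self-contained illustration of the race and the fix; the connection class and lookups are stand-ins, not the real client internals:

```python
class DummyConn:
    def connected(self):
        return True
    def send(self, request):
        return 'future-for-{}'.format(request)

conns = {'node-1': DummyConn()}

def send_racy(node_id, request):
    # Racy: the conn can be removed between the membership check and the
    # second lookup, raising KeyError.
    if node_id in conns and conns[node_id].connected():
        return conns[node_id].send(request)

def send_fixed(node_id, request):
    # Fixed: look the connection up once; if it vanished, report "not ready";
    # if it disconnected right after the lookup, conn.send() just fails the
    # request instead of raising.
    conn = conns.get(node_id)
    if conn is None or not conn.connected():
        return None   # caller treats this as node-not-ready
    return conn.send(request)

print(send_fixed('node-1', 'MetadataRequest'))
```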
The current client attempts to bootstrap once during initialization, but if it fails there is no second attempt and the client will be inoperable. This can happen, for example, if an entire cluster is down at the time a long-running client starts execution.
This commit attempts to fix this by removing the synchronous bootstrapping from `KafkaClient` init, and instead merges bootstrap metadata with the cluster metadata. The Java client uses a similar approach. This allows us to continue falling back to bootstrap data when necessary throughout the life of a long-running consumer or producer.
Fix #1670
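Conceptually, the fallback looks something like this sketch; it is a toy model, not the real `ClusterMetadata` code:

```python
class ClusterState:
    """Toy model of metadata that never forgets its bootstrap brokers."""

    def __init__(self, bootstrap_brokers):
        self._bootstrap_brokers = list(bootstrap_brokers)  # kept for the client's lifetime
        self._brokers = []                                 # learned from metadata responses

    def update_metadata(self, brokers):
        self._brokers = list(brokers)

    def brokers(self):
        # Whenever no live brokers are known (e.g. the whole cluster was down
        # at startup), fall back to the bootstrap list and try again later.
        return self._brokers or self._bootstrap_brokers


cluster = ClusterState(['broker1.example.com:9092', 'broker2.example.com:9092'])
print(cluster.brokers())                               # bootstrap fallback
cluster.update_metadata(['broker3.example.com:9092'])
print(cluster.brokers())                               # real metadata once available
```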