| Commit message | Author | Age | Files | Lines |
Co-authored-by: will.k <will.k@kakaocorp.com>
- Add support for new request and response headers, supporting flexible
versions / tagged fields
- Add List / Alter partition reassignments APIs
- Add support for varints
- Add support for compact collections (byte array, string, array)
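For context on the varint support above: Kafka's wire format uses variable-length integers in the Protocol Buffers style, with a zigzag mapping for signed values. The following is a minimal sketch of a signed (zigzag) varint codec, not kafka-python's actual implementation:

```python
def encode_varint(value: int) -> bytes:
    """Zigzag-map a signed int, then emit 7 bits per byte,
    with the high bit set on every byte except the last."""
    value = (value << 1) ^ (value >> 63)  # zigzag: 0,-1,1,-2 -> 0,1,2,3
    out = bytearray()
    while value & ~0x7F:
        out.append((value & 0x7F) | 0x80)  # continuation bit
        value >>= 7
    out.append(value)
    return bytes(out)


def decode_varint(data: bytes) -> int:
    """Reassemble the 7-bit groups, then undo the zigzag mapping."""
    result = shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            break
    return (result >> 1) ^ -(result & 1)  # undo zigzag
```

Small values near zero (positive or negative) encode to a single byte, which is why the protocol favors this scheme for lengths and tagged-field counts.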
* Hotfix: TypeError: object of type 'dict_itemiterator' has no len()
* Avoid looping over items 2x
Co-authored-by: Grabowski <chris@crawlinski.com>
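For context on the hotfix above: `dict.items()` returns a view, which supports `len()`, but `iter(d.items())` returns a one-shot iterator, which does not. A minimal reproduction (illustrative only, not the project's code):

```python
d = {"a": 1, "b": 2}
assert len(d.items()) == 2  # a view supports len()

it = iter(d.items())        # a dict_itemiterator does not
try:
    len(it)
except TypeError as e:
    print(e)  # object of type 'dict_itemiterator' has no len()

# Fix: materialize the items once, then reuse the list --
# which also avoids looping over the items twice.
items = list(d.items())
assert len(items) == 2
```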
* Co-authored-by: Andrew Brown <andrew.brown@shopify.com>
* Co-authored-by: Aaron Brady <aaron.brady@shopify.com>
(#2158)
Previously, if the `unpacked` list was empty, the call to `unpacked[-1]` would throw an `IndexError: list index out of range`
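A guarded version of that access might look like the sketch below; `last_or_none` is a hypothetical helper name, not the function from the PR:

```python
def last_or_none(unpacked):
    """Return the final element, or None for an empty list,
    instead of letting `unpacked[-1]` raise IndexError."""
    return unpacked[-1] if unpacked else None
```

With this guard, `last_or_none([])` returns `None` where the old code raised `IndexError: list index out of range`.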
Fix a deprecation warning in the newest version.
Previously there were two methods:
* `_find_coordinator_id()`
* `_find_many_coordinator_ids()`
They did basically the same thing internally, and we need the plural in two places but the singular in only one place. So merge them, and change the function signature to take a list of `group_ids` and return a dict of `group_id: coordinator_id` pairs.
As a result, the `describe_groups()` command should scale better, because `_find_coordinator_ids()` issues all of its requests async instead of blocking sequentially, as `describe_groups()` used to do.
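The merged lookup described above could be sketched as follows; `lookup` stands in for the per-group FindCoordinator round trip, and the thread pool is only an illustration of issuing the requests concurrently rather than kafka-python's actual future plumbing:

```python
from concurrent.futures import ThreadPoolExecutor


def find_coordinator_ids(group_ids, lookup):
    """Resolve the coordinator for many consumer groups at once.

    Returns a dict of group_id -> coordinator_id.  All lookups are
    submitted before any result is awaited, so the round trips overlap
    instead of blocking one at a time.
    """
    with ThreadPoolExecutor() as pool:
        futures = {gid: pool.submit(lookup, gid) for gid in group_ids}
        return {gid: fut.result() for gid, fut in futures.items()}
```

A caller needing a single coordinator simply passes a one-element list, which is why the singular method becomes redundant.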
Small cleanup leftover from https://github.com/dpkp/kafka-python/pull/2035
* Add consumergroup related errors
* Add DeleteGroups to protocol.admin
* Implement delete_groups feature on KafkaAdminClient
Fix initialization order in KafkaClient
Currently there's no way to pass a timeout to `check_version` when it is called from admin.
Add namedtuples for the DescribeConsumerGroup response; add serialization of MemberData and MemberAssignment for the response
Co-authored-by: Apurva Telang <atelang@paypal.com>
Co-authored-by: Jeff Widman <jeff@jeffwidman.com>
Co-authored-by: MostafaElmenabawy <momenabawy@synapse-analytics.io>
* Add logic for inferring newer broker versions
- New Fetch / ListOffsets request / response objects
- Add new test cases that infer the version from the objects mentioned above
- Add a unit test to check the inferred version against whatever resides in KAFKA_VERSION
- Add new kafka broker versions to the integration list
- Add more kafka broker versions to the Travis task list
- Add support for the broker version 2.5 id
* Implement PR change requests: fewer versions for Travis testing, remove unused older versions from the inference code, remove one minor version from the known server list
Do not use the newly created ACL requests / responses in the allowed version lists, because enabling flexible versions in Kafka actually requires a serialization protocol header update
Revert the admin client file change
This is in preparation for adding `zstd` support.
This was previously supported but wasn't documented.
Fix #1990.
In KAFKA-8962 the `AdminClient.describe_topics()` call was changed from
using the controller to using the `least_loaded_node()`:
https://github.com/apache/kafka/commit/317089663cc7ff4fdfcba6ee434f455e8ae13acd#diff-6869b8fccf6b098cbcb0676e8ceb26a7R1540
As a result, no metadata request/response processing needs to happen
through the controller, so it's safe to remove the custom
error-checking. Besides, I don't think this error-checking even added
any value because AFAIK no metadata response would return a
`NotControllerError` because the recipient broker wouldn't realize the
metadata request was intended for only the controller.
Originally our admin client was implemented using the least-loaded-node,
then later updated to the controller. So updating it back to
least-loaded node is a simple case of reverting the associated commits.
This reverts commit 7195f0369c7dbe25aea2c3fed78d2b4f772d775b.
This reverts commit 6e2978edee9a06e9dbe60afcac226b27b83cbc74.
This reverts commit f92889af79db08ef26d89cb18bd48c7dd5080010.
Closes #1994
Implement methods to convert a Struct object to a pythonic object
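A generic version of that conversion might look like the sketch below, assuming each Struct exposes its field names through a `SCHEMA` attribute; the toy `Schema`/`Struct` classes here are illustrative stand-ins, not kafka-python's:

```python
class Schema:
    """Toy stand-in: just records the field names."""
    def __init__(self, *names):
        self.names = list(names)


class Struct:
    """Toy base class for protocol structs."""
    SCHEMA = Schema()


def to_object(value):
    """Recursively turn Struct-like objects into plain dicts/lists."""
    if isinstance(value, Struct):
        return {name: to_object(getattr(value, name))
                for name in value.SCHEMA.names}
    if isinstance(value, (list, tuple)):
        return [to_object(item) for item in value]
    return value


# Example: a two-field struct converts to a plain dict
class Member(Struct):
    SCHEMA = Schema("member_id", "topics")

    def __init__(self, member_id, topics):
        self.member_id = member_id
        self.topics = topics
```

Walking the schema rather than `__dict__` keeps the output limited to protocol fields, and the recursion handles nested structs and arrays uniformly.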
Forgot to remove this in https://github.com/dpkp/kafka-python/pull/1925
/ ca2d76304bfe3900f995e6f0e4377b2ef654997e
Use empty `__slots__` for ABC classes; otherwise classes which inherit from them will still have a `__dict__`. Also use `__slots__` for more classes.
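The point can be illustrated with a toy hierarchy (class names made up for this sketch): unless every class in the chain declares `__slots__`, including an empty one on the abstract base, instances still get a `__dict__` and the memory savings are lost.

```python
from abc import ABC, abstractmethod


class Serializer(ABC):
    __slots__ = ()  # empty: the ABC adds no storage and no __dict__

    @abstractmethod
    def serialize(self, value): ...


class NoopSerializer(Serializer):
    __slots__ = ()

    def serialize(self, value):
        return value


s = NoopSerializer()
assert not hasattr(s, "__dict__")  # slots all the way down
```

If `Serializer` omitted its empty `__slots__`, the assertion would fail: `NoopSerializer` instances would silently carry a per-instance `__dict__` despite their own `__slots__` declaration.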