| Commit message | Author | Age | Files | Lines |
|
- Add support for new request and response headers, supporting flexible versions / tagged fields
- Add List / Alter partition reassignments APIs
- Add support for varints (see the sketch after this list)
- Add support for compact collections (byte array, string, array)
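As context for the varint item above, here is a minimal sketch of unsigned varint encoding and decoding (7 data bits per byte, high bit as the continuation flag) as used by compact/flexible protocol types. This is an illustration only, not kafka-python's actual implementation:
```
def encode_varint(value):
    """Encode a non-negative int as an unsigned varint: 7 data bits per byte, high bit = 'more follows'."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(buf, pos=0):
    """Decode an unsigned varint from buf starting at pos; returns (value, next_pos)."""
    value, shift = 0, 0
    while True:
        byte = buf[pos]
        pos += 1
        value |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return value, pos
        shift += 7

assert decode_varint(encode_varint(300)) == (300, 2)
```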
|
* Add consumer-group-related errors
* Add DeleteGroups to protocol.admin
* Implement the delete_groups feature on KafkaAdminClient (usage sketch below)
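A rough usage sketch for the new admin call, assuming a local broker at localhost:9092 and that `delete_groups` takes a list of consumer group ids and returns per-group results:
```
from kafka.admin import KafkaAdminClient

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

# Groups must be empty (no active members) before the broker will delete them.
results = admin.delete_groups(["stale-group-1", "stale-group-2"])
print(results)  # per-group results reported by the broker

admin.close()
```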
|
Add namedtuples for the DescribeConsumerGroup response; add serialization of MemberData and MemberAssignment for the response
Co-authored-by: Apurva Telang <atelang@paypal.com>
Co-authored-by: Jeff Widman <jeff@jeffwidman.com>
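A small illustration of the namedtuple approach; the field sets below are hypothetical and only meant to show the shape of a structured DescribeGroups member entry, not the exact names used in the change:
```
from collections import namedtuple

# Hypothetical shapes for a decoded group member (field names are illustrative)
MemberAssignment = namedtuple("MemberAssignment", ["version", "assignment", "user_data"])
MemberData = namedtuple("MemberData", ["member_id", "client_id", "client_host", "member_assignment"])

assignment = MemberAssignment(version=0, assignment={"my-topic": [0, 1, 2]}, user_data=b"")
member = MemberData("member-1", "client-1", "/10.0.0.5", assignment)
print(member.member_assignment.assignment)
```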
|
* Add logic for inferring newer broker versions (see the sketch after this list)
  - New Fetch / ListOffsets request / response objects
  - Add new test cases to the inference code based on those objects
  - Add a unit test checking the inferred version against the value in KAFKA_VERSION
  - Add new kafka broker versions to the integration list
  - Add more kafka broker versions to the travis task list
  - Add support for the broker version 2.5 id
* Implement PR change requests: fewer versions for travis testing, remove unused older versions from the inference code, remove one minor version from the known server list
  Do not use the newly created ACL request / response objects in the allowed version lists, because enabling flexible versions in kafka actually requires a protocol header serialization update
  Revert the admin client file change
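A simplified sketch of the inference idea: probe which request versions the broker accepts and map that back to a release. The version numbers in the table below are placeholders, not the client's real mapping:
```
# Newest-first table: broker release -> minimum API versions it must support.
# (Numbers here are illustrative only.)
PROBES = [
    ("2.5", {"Fetch": 11, "ListOffsets": 5}),
    ("2.1", {"Fetch": 10, "ListOffsets": 4}),
    ("1.0", {"Fetch": 6, "ListOffsets": 2}),
]

def infer_broker_version(supported_max_versions):
    """supported_max_versions: dict of API name -> max version from ApiVersionsResponse."""
    for release, required in PROBES:
        if all(supported_max_versions.get(api, -1) >= v for api, v in required.items()):
            return release
    return "unknown"

print(infer_broker_version({"Fetch": 11, "ListOffsets": 5}))  # -> "2.5"
```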
|
Implement methods to convert a Struct object to a pythonic object
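A rough sketch of what such a conversion can look like, assuming a Struct exposes its schema field names via `SCHEMA.names` and per-field values via `get_item()` (an assumption about the internal API, not a verbatim copy of the new methods):
```
def to_object(value):
    """Recursively convert a protocol Struct (or nested lists of them) into dicts/lists."""
    if hasattr(value, "SCHEMA") and hasattr(value, "get_item"):  # assumed Struct shape
        return {name: to_object(value.get_item(name)) for name in value.SCHEMA.names}
    if isinstance(value, (list, tuple)):
        return [to_object(item) for item in value]
    return value  # ints, strings, bytes, None pass through unchanged
```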
|
Identity is not the same thing as equality in Python, so use ==/!= to compare str, bytes, and int literals. In Python >= 3.8, these instances will emit a __SyntaxWarning__, so it is best to fix them now. https://docs.python.org/3.8/whatsnew/3.8.html#porting-to-python-3-8
% __python__
```
>>> consumer = "cons"
>>> consumer += "umer"
>>> consumer == "consumer"
True
>>> consumer is "consumer"
False
>>> 0 == 0.0
True
>>> 0 is 0.0
False
```
|
In the 2.0 release, we're removing:
* `SimpleClient`
* `SimpleConsumer`
* `SimpleProducer`
* Old partitioners used by `SimpleProducer`; these are superseded by the `DefaultPartitioner`
These have been deprecated for several years in favor of `KafkaClient`
/ `KafkaConsumer` / `KafkaProducer`.
Since 2.0 allows breaking changes, we are removing the deprecated
classes.
Additionally, since the only usage of `unittest` was in tests for these
old Simple* clients, this also drops `unittest` from the library. All
tests now run under `pytest`.
|
* Fix describe config for multi-broker clusters
Currently all describe config requests are sent to the "least loaded node". Requests for broker configs, however, must be sent to the specific broker, otherwise an error is returned. Only topic requests can be handled by any node.
This changes the logic to send describe config requests to the specific broker.
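An illustrative sketch of the routing rule (not the actual patch): topic resources can go to the least-loaded node, while broker resources must target the broker named in the resource:
```
from kafka.admin import ConfigResourceType

def node_for_config_request(resource, least_loaded_node_id):
    """Pick the node a DescribeConfigs request for `resource` should be sent to."""
    if resource.resource_type == ConfigResourceType.BROKER:
        # Broker configs are only served by that broker; its id is the resource name.
        return int(resource.name)
    return least_loaded_node_id  # topic configs can be answered by any broker
```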
|
to topic deletion while rebalancing (#1782)
|
This makes it so the only remaining use of `unittest` is in the old
tests of the deprecated `Simple*` clients. All `KafkaConsumer` tests are
migrated to `pytest`.
I also had to bump up the test iterations on one of the tests; I think there was a race condition there that was more commonly hit under pytest. I plan to clean that up in a follow-up PR. See https://github.com/dpkp/kafka-python/pull/1886#discussion_r316860737 for details.
|
Now that we are using `pytest`, there is no need for a custom decorator
because we can use `pytest.mark.skipif()`.
This makes the code significantly simpler. In particular, dropping the
custom `@kafka_versions()` decorator is necessary because it uses
`functools.wraps()`, which doesn't play nice with `pytest` fixtures:
- https://github.com/pytest-dev/pytest/issues/677
- https://stackoverflow.com/a/19614807/770425
So this is a prerequisite to migrating some of those tests to pytest fixtures.
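A sketch of the resulting pattern, assuming the broker version under test is exposed through a `KAFKA_VERSION` environment variable (the helper name here is illustrative):
```
import os
import pytest

def env_kafka_version():
    """Parse KAFKA_VERSION from the environment into a comparable tuple, e.g. (2, 4, 0)."""
    return tuple(int(part) for part in os.environ.get("KAFKA_VERSION", "0.0.0").split("."))

@pytest.mark.skipif(env_kafka_version() < (0, 10, 1), reason="requires a broker >= 0.10.1")
def test_something_broker_dependent():
    ...
```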
|
Remove unused import, whitespace, etc. No functional changes, just
cleaning it up so the diffs of later changes are cleaner.
|
socket.SOCK_STREAM is platform-specific and on some platforms (most notably Linux on MIPS) does not equal 1, so it's better to use the constant where appropriate.
This change fixes the tests on my MIPS32 LE machine.
Signed-off-by: Ivan A. Melnikov <iv@altlinux.org>
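For illustration, the portable pattern looks like this; comparing a socket's type against the literal 1 is what breaks on platforms where SOCK_STREAM has a different value:
```
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Portable: compare against the named constant, never against the literal 1.
assert sock.type == socket.SOCK_STREAM
sock.close()
```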
|
The docs for `api_version_auto_timeout_ms` mention setting
`api_version='auto'` but that value has been deprecated for years in
favor of `api_version=None`.
Updating the docs for now; support for `'auto'` will be removed in the next major version bump.
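For reference, the documented pattern now looks like this (the broker address is a placeholder); `api_version=None` asks the client to probe the broker, bounded by `api_version_auto_timeout_ms`:
```
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "my-topic",
    bootstrap_servers="localhost:9092",
    api_version=None,                  # probe the broker instead of the deprecated 'auto'
    api_version_auto_timeout_ms=5000,  # upper bound on the version probe
)
```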
|
This doesn't fully implement SSL fixtures, but as a first step it should help with automatically generating required certificates / keystores / etc. My hope is that this helps generate more community support for SSL testing!
|
The current client attempts to bootstrap once during initialization, but if it fails there is no second attempt and the client will be inoperable. This can happen, for example, if an entire cluster is down at the time a long-running client starts execution.
This commit attempts to fix this by removing the synchronous bootstrapping from `KafkaClient` init, and instead merges bootstrap metadata with the cluster metadata. The Java client uses a similar approach. This allows us to continue falling back to bootstrap data when necessary throughout the life of a long-running consumer or producer.
Fix #1670
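A hypothetical sketch of the fallback idea described above; the class and attribute names are invented for illustration and are not kafka-python's internals:
```
class ClusterState:
    """Illustration only: keep bootstrap brokers around and fall back to them."""

    def __init__(self, bootstrap_brokers):
        self._bootstrap_brokers = list(bootstrap_brokers)  # e.g. parsed from bootstrap_servers
        self._brokers = {}  # node_id -> broker, filled in by metadata responses

    def update_metadata(self, brokers):
        # Merge fresh metadata; bootstrap entries are only used when this is empty.
        self._brokers = {broker.nodeId: broker for broker in brokers}

    def brokers(self):
        # Fall back to bootstrap brokers whenever there is no live metadata, so a
        # client that started while the cluster was down can still recover later.
        return list(self._brokers.values()) or list(self._bootstrap_brokers)
```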
|
Small change to avoid doing DNS resolution when running local connection tests. This fixture always returns a broker on localhost:9092, so DNS lookups don't make sense here.
|
* Add BrokerConnection.send_pending_requests to support async network sends (see the sketch after this list)
* Send network requests during KafkaClient.poll() rather than in KafkaClient.send()
* Don't acquire lock during KafkaClient.send if node is connected / ready
* Move all network connection IO into KafkaClient.poll()
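A minimal sketch of the "queue now, write during poll()" pattern the list above describes; the class and method bodies are illustrative only, not the real connection code:
```
class Connection:
    def __init__(self):
        self._pending = []  # requests queued by send(), flushed later

    def send(self, request):
        self._pending.append(request)  # no socket I/O happens here

    def send_pending_requests(self):
        while self._pending:
            request = self._pending.pop(0)
            # ... serialize `request` and write the bytes to the socket here ...

class Client:
    def poll(self, connections):
        for conn in connections:
            conn.send_pending_requests()  # all network writes happen inside poll()
```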
|
The problem is that the result of the operation in question has type `long`:
```
>>> type(278251978 & 0xffffffff)
<type 'long'>
```
However, by default the "format" method uses __format__():
```
>>> (278251978 & 0xffffffff).__format__('')
'278251978'
```
So, let's compare things using the same engine:
```
>>> "{!r}".format(278251978 & 0xffffffff)
'278251978L'
```
Fixes: https://github.com/dpkp/kafka-python/issues/1717
Signed-off-by: Stanislav Levin <slev@altlinux.org>
|