In the 2.0 release, we're removing:
* `SimpleClient`
* `SimpleConsumer`
* `SimpleProducer`
* Old partitioners used by `SimpleProducer`; these are superseded by
  the `DefaultPartitioner`
These have been deprecated for several years in favor of `KafkaClient`
/ `KafkaConsumer` / `KafkaProducer`.
Since 2.0 allows breaking changes, we are removing the deprecated
classes.
Additionally, since the only usage of `unittest` was in tests for these
old Simple* clients, this also drops `unittest` from the library. All
tests now run under `pytest`.
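For anyone still on the removed classes, here is a minimal sketch of the replacement `KafkaProducer` / `KafkaConsumer` APIs (the broker address, topic, and group id below are illustrative):
```
from kafka import KafkaConsumer, KafkaProducer

# Producer: send() returns a future; flush() blocks until delivery.
producer = KafkaProducer(bootstrap_servers='localhost:9092')
producer.send('my-topic', b'raw bytes')
producer.flush()

# Consumer: iterate to receive messages.
consumer = KafkaConsumer('my-topic',
                         bootstrap_servers='localhost:9092',
                         group_id='my-group',
                         auto_offset_reset='earliest')
for msg in consumer:
    print(msg.topic, msg.partition, msg.offset, msg.value)
```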
|
* Fix describe config for multi-broker clusters
Currently, all describe config requests are sent to the "least loaded node". Requests for broker configs, however, must be sent to the specific broker, otherwise an error is returned. Only topic requests can be handled by any node.
This changes the logic to send all describe config requests to the specific broker.
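Illustratively, through the public `KafkaAdminClient` (resource names and broker id below are made up), broker-config requests now target the broker being described, while topic-config requests can still be served by any node:
```
from kafka.admin import KafkaAdminClient, ConfigResource, ConfigResourceType

admin = KafkaAdminClient(bootstrap_servers='localhost:9092')

# Topic configs: any node can answer.
admin.describe_configs([ConfigResource(ConfigResourceType.TOPIC, 'my-topic')])

# Broker configs: the request must go to broker id 1 itself,
# which is what the routing change above ensures.
admin.describe_configs([ConfigResource(ConfigResourceType.BROKER, '1')])
```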
|
to topic deletion while rebalancing (#1782)
|
This makes it so the only remaining use of `unittest` is in the old
tests of the deprecated `Simple*` clients. All `KafkaConsumer` tests are
migrated to `pytest`.
I also had to bump up the test iterations on one of the tests; I think there was a race condition there that was hit more often under pytest. I plan to clean that up in a follow-up PR. See https://github.com/dpkp/kafka-python/pull/1886#discussion_r316860737 for details.
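The shape of the change, roughly (fixture and test names here are illustrative, not the actual migrated tests): a `unittest.TestCase` method with `self.assertEqual(...)` becomes a plain module-level function with a bare `assert`:
```
# Before (unittest style):
#   class TestConsumerIntegration(KafkaIntegrationTestCase):
#       def test_subscription(self):
#           self.assertEqual(self.consumer.subscription(), {self.topic})
#
# After (pytest style):
def test_subscription(kafka_consumer, topic):   # fixture names are illustrative
    assert kafka_consumer.subscription() == {topic}
```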
|
Now that we are using `pytest`, there is no need for a custom decorator
because we can use `pytest.mark.skipif()`.
This makes the code significantly simpler. In particular, dropping the
custom `@kafka_versions()` decorator is necessary because it uses
`functools.wraps()`, which doesn't play nicely with `pytest` fixtures:
- https://github.com/pytest-dev/pytest/issues/677
- https://stackoverflow.com/a/19614807/770425
So this is a prerequisite to migrating some of those tests to pytest
fixtures.
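For illustration, a version-gated test now looks something like this (the `env_kafka_version()` helper below is a stand-in for however the suite derives the broker version from the environment):
```
import os
import pytest

def env_kafka_version():
    # Stand-in helper: broker version under test, parsed from the environment.
    return tuple(int(x) for x in os.environ.get('KAFKA_VERSION', '0').split('.'))

@pytest.mark.skipif(env_kafka_version() < (0, 11, 0),
                    reason='requires broker >= 0.11.0')
def test_record_headers():
    assert True  # real assertions would go here
```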
|
Remove unused import, whitespace, etc. No functional changes, just
cleaning it up so the diffs of later changes are cleaner.
|
The value of socket.SOCK_STREAM is platform-specific and on some
platforms (most notably Linux on MIPS) does not equal 1, so it's
better to use the constant where appropriate.
This change fixes the tests on my MIPS32 LE machine.
Signed-off-by: Ivan A. Melnikov <iv@altlinux.org>
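A sketch of the idea (the tuple below mimics a getaddrinfo()-style result; it is illustrative, not copied from the tests):
```
import socket

# Build expectations from the constant rather than assuming SOCK_STREAM == 1.
expected = [(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP,
             '', ('127.0.0.1', 9092))]
# Hard-coding 1 here would fail on platforms where socket.SOCK_STREAM is not 1.
```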
|
The docs for `api_version_auto_timeout_ms` mention setting
`api_version='auto'` but that value has been deprecated for years in
favor of `api_version=None`.
Updating the docs for now; support for `'auto'` will be removed in the
next major version bump.
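For reference, the recommended spelling is now (broker address and timeout are illustrative):
```
from kafka import KafkaConsumer

# api_version=None (the default) makes the client probe the broker version;
# api_version_auto_timeout_ms bounds how long that probe may take.
consumer = KafkaConsumer(bootstrap_servers='localhost:9092',
                         api_version=None,
                         api_version_auto_timeout_ms=5000)
```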
|
This doesn't fully implement SSL fixtures, but as a first step it should help with automatically generating required certificates / keystores / etc. My hope is that this helps generate more community support for SSL testing!
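Once the fixture can generate the certificates, a client under test would connect along these lines (paths and port are illustrative):
```
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9093',
                         security_protocol='SSL',
                         ssl_cafile='/tmp/kafka-ssl/ca-cert.pem',
                         ssl_certfile='/tmp/kafka-ssl/client-cert.pem',
                         ssl_keyfile='/tmp/kafka-ssl/client-key.pem')
```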
|
The current client attempts to bootstrap once during initialization, but if it fails there is no second attempt and the client will be inoperable. This can happen, for example, if an entire cluster is down at the time a long-running client starts execution.
This commit attempts to fix this by removing the synchronous bootstrapping from `KafkaClient` init and instead merging bootstrap metadata with the cluster metadata. The Java client uses a similar approach. This allows us to continue falling back to bootstrap data when necessary throughout the life of a long-running consumer or producer.
Fix #1670
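A conceptual sketch of the fallback behaviour (not the actual implementation):
```
def candidate_brokers(cluster_brokers, bootstrap_brokers):
    """Brokers to try next: live cluster metadata if we have any, else bootstrap."""
    return list(cluster_brokers) or list(bootstrap_brokers)

# A client whose cluster was down at startup is no longer stuck forever:
assert candidate_brokers([], ['kafka1:9092', 'kafka2:9092']) == ['kafka1:9092', 'kafka2:9092']
```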
|
Small change to avoid doing DNS resolution when running local connection tests. This fixture always returns a broker on localhost:9092, so DNS lookups don't make sense here.
|
* Add BrokerConnection.send_pending_requests to support async network sends
* Send network requests during KafkaClient.poll() rather than in KafkaClient.send()
* Don't acquire the lock during KafkaClient.send if the node is connected / ready
* Move all network connection IO into KafkaClient.poll()
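A rough sketch of the queue-then-flush pattern described above (illustrative only, not the real `BrokerConnection`):
```
class ConnectionSketch(object):
    def __init__(self, sock):
        self._sock = sock
        self._pending = []            # requests queued by send(), no I/O yet

    def send(self, request_bytes):
        # send() just queues; no lock or socket write needed here
        self._pending.append(request_bytes)

    def send_pending_requests(self):
        # called from KafkaClient.poll(): all network I/O happens here
        while self._pending:
            self._sock.sendall(self._pending.pop(0))
```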
|
The problem is that the result of the required operation is of type
"long":
```
>>> type(278251978 & 0xffffffff)
<type 'long'>
```
However, by default the "format" method uses __format__():
```
>>> (278251978 & 0xffffffff).__format__('')
'278251978'
```
So, let's compare things using the same engine:
```
>>> "{!r}".format(278251978 & 0xffffffff)
'278251978L'
```
Fixes: https://github.com/dpkp/kafka-python/issues/1717
Signed-off-by: Stanislav Levin <slev@altlinux.org>
|
Popen objects may deadlock when using stdout=PIPE or stderr=PIPE
with Popen.wait(). Using Popen.communicate() avoids the issue.
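The safe pattern, for reference (the command is illustrative):
```
import subprocess

proc = subprocess.Popen(['kafka-topics.sh', '--list'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# communicate() drains stdout/stderr while waiting, so the child can never
# block on a full pipe the way it can with wait() + PIPE.
stdout, stderr = proc.communicate()
print(proc.returncode)
```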
|
Clean up the formatting: remove parens, extraneous spaces, etc.
|
Fix #1633
|
`random_string` now comes from `test.fixtures` and was being
transparently imported via `test.testutil`, so this bypasses the
pointless indirect import.
Similarly, `kafka_version` was transparently imported by `test.testutil`
from `test.fixtures`.
Also removed `random_port()` in `test.testutil` because it's unused, having been
replaced by the one in `test.fixtures`.
This is part of the pytest migration that was started back in
a1869c4be5f47b4f6433610249aaf29af4ec95e5.
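In other words, test code now imports these helpers from their defining module (a representative example):
```
# Before: indirect import through test.testutil
# from test.testutil import random_string, kafka_version

# After: import from the module that actually defines them
from test.fixtures import random_string, kafka_version
```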
|
This is no longer used anywhere in the codebase
|
Requires cluster version > 0.10.0.0, and uses the new wire protocol
classes to do many things via the broker connection that previously
needed to be done directly in ZooKeeper.
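For a sense of the approach, here is the same kind of operation done over the broker wire protocol via the public admin client (the fixture itself may use the lower-level protocol request classes directly; names below are illustrative):
```
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers='localhost:9092')
# Topic creation over the Kafka protocol, with no direct ZooKeeper access.
admin.create_topics([NewTopic(name='my-topic',
                              num_partitions=3,
                              replication_factor=1)])
```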
|
Use vendored `six`, and also `six.moves.range` rather than `xrange`
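Concretely, imports now look like this (the vendored copy lives under `kafka.vendor`):
```
from kafka.vendor import six
from kafka.vendor.six.moves import range   # instead of the Python 2 builtin xrange
```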
|
This finishes the split from `kafka.common` to `kafka.errors`/`kafka.structs`.
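After the split, imports come from the dedicated modules, e.g.:
```
from kafka.errors import KafkaError, KafkaTimeoutError
from kafka.structs import TopicPartition, OffsetAndMetadata
```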
|
In Python 3, `ConnectionError` is a native exception, so rename our
custom one to `KafkaConnectionError` to prevent accidentally
shadowing the native one.
Note that there are still valid uses of `ConnectionError` in this code;
they already expect a native Python 3 `ConnectionError` and already
handle the Python 2 compatibility issues.
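Code catching the library's connection errors should now use the renamed class, for example:
```
from kafka.errors import KafkaConnectionError

def is_kafka_conn_error(exc):
    # kafka-python's own error; the Python 3 builtin ConnectionError is untouched
    return isinstance(exc, KafkaConnectionError)
```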