| Commit message | Author | Age | Files | Lines |
| |
https://github.com/eventlet/eventlet/issues/791
|
| |
fixes https://github.com/eventlet/eventlet/issues/763
|
| |
fixes https://github.com/eventlet/eventlet/issues/790
|
| |
Sometimes the server pool fills up; all max_size greenthreads are
already in use with other connections. One more connection gets
accept()ed, but then has to wait for a slot to open up to actually be
handled.
This works out fine if your clients tend to make a single request then
close the connection upon receiving a response. It works out OK-ish when
clients are continually pipelining requests; the new connection still
has to wait, but at least there's plenty of work getting processed --
it's defensible. It can work out pretty terribly if clients tend to hold
on to connections "just in case" -- we're ignoring fresh work from a new
client just so we can be ready-to-go if an existing connection wakes up.
There are a couple of existing tunings we can use, but each can have
downsides:
- Increasing max_size is nice for dealing with idle connections, but can
cause hub contention and high latency variance when all those
connections are actually busy.
- socket_timeout can be used to limit the idle socket time, but it
*also* impacts send/recv operations while processing a request, which
may not be desirable.
- keepalive can be set to False, disabling request pipelining entirely.
Change the keepalive option of wsgi.server so it can also be given as the
timeout to use while waiting for a new request, separate from
socket_timeout. By default, socket_timeout continues to be used.
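A minimal usage sketch, assuming the keepalive-as-timeout form described
above (address, app, and values are illustrative):
```
import eventlet
from eventlet import wsgi

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok\n']

# Idle keep-alive connections get at most 5 seconds to send their next
# request; send/recv during request handling still uses socket_timeout.
wsgi.server(eventlet.listen(('127.0.0.1', 8080)), app,
            keepalive=5.0, socket_timeout=60)
```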
|
| |
Relevant traceback:
```
File "dns/resolver.py", line 858, in query
if qname.is_absolute():
AttributeError: 'bytes' object has no attribute 'is_absolute'
```
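A hedged sketch of the kind of normalization that avoids this (helper name
and decoding choice are illustrative, not the actual patch):
```
import dns.name

def _as_dns_name(qname):
    # dnspython's resolver expects text or a dns.name.Name; callers may
    # hand greendns a bytes hostname, so decode before querying.
    if isinstance(qname, bytes):
        qname = qname.decode('ascii')
    return dns.name.from_text(qname)

name = _as_dns_name(b'example.com')
assert name.is_absolute()  # from_text() makes the name absolute by default
```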
Fixes https://github.com/eventlet/eventlet/issues/599
Co-authored-by: Sergey Shepelev <temotor@gmail.com>
Co-authored-by: Tim Burke <tim.burke@gmail.com>
|
| |
non-essential CI against 2.7 to 3.6
fixes https://github.com/eventlet/eventlet/issues/740
related to https://github.com/eventlet/eventlet/pull/715
|
| |
https://github.com/eventlet/eventlet/pull/657
|
| |
https://github.com/eventlet/eventlet/issues/785
Signed-off-by: Sergey Shepelev <temotor@gmail.com>
|
| |
- GitHub Actions ubuntu-latest switched to 22.04, where python3 is >= 3.7
- tooling: the pep8 tool was renamed to pycodestyle and upgraded to 2.1; fixed the two-empty-lines-after-class/def violations it flagged
- upgraded the common GitHub Actions to v3: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/
|
| |
SSLError
Back in Python 3.2, ssl.SSLError used to be a subclass of socket.error (see
https://docs.python.org/3/library/ssl.html#exceptions), so timeouts on
monkeypatched ssl sockets would be properly caught by socket.timeout
exception handlers in applications. However, since Python 3.3 ssl.SSLError
is a subclass of OSError, which signifies a different (typically fatal)
type of error that is usually not handled gracefully by applications.
By changing the timeout exception back to socket.timeout, libraries such
as pymysql and redis will again properly support TLS connections in
monkeypatched applications.
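Illustrative of the handler pattern this restores (host and timeouts are
arbitrary):
```
import eventlet
eventlet.monkey_patch()

import socket
import ssl

ctx = ssl.create_default_context()
conn = ctx.wrap_socket(socket.create_connection(('example.com', 443), timeout=5),
                       server_hostname='example.com')
conn.settimeout(0.01)
try:
    conn.recv(1)  # no request was sent, so nothing arrives before the timeout
except socket.timeout:
    print('timed out')  # with the fix this catches green SSL timeouts again
finally:
    conn.close()
```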
|
| |
fixes https://github.com/eventlet/eventlet/issues/457
|
| |
tox>=4.0.0 changed passenv from a space-separated list to a
comma-separated list. tox>=4.0.6 made it a hard error to include
spaces, complaining
pass_env values cannot contain whitespace, use comma to have
multiple values in a single line
Switch to using multiple lines for the multiple variables to be
compatible with both tox3 and tox4.
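A sketch of the compatible form (the variable names are illustrative):
```
[testenv]
# one value per line parses under both tox 3 and tox 4
passenv =
    CI
    EVENTLET_*
```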
|
| |
dnspython added more type annotations in 2.3.0 -- because of how we
reimport the package piece by piece, though, not all of the types would
be available during reimport, leading to AttributeErrors like
module 'dns.rdtypes' has no attribute 'ANY'
Now, do all of our rdtypes special-handling *first*, before reimporting
any other submodules.
Addresses #781
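Roughly the ordering principle (a sketch, not the actual reimport code):
```
# Make sure the rdtypes subpackages are imported before anything else pulls
# in submodules whose annotations reference them, so attributes such as
# dns.rdtypes.ANY exist by the time they are looked up.
import dns.rdtypes
import dns.rdtypes.ANY
import dns.rdtypes.IN
# ...only then reimport the rest, e.g.
import dns.resolver
```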
|
| |
Way back in py26, snake_case alternatives were added for the old
camelCase APIs. py310 started emitting DeprecationWarnings about them;
presumably they'll look to remove the old APIs eventually. See
https://github.com/python/cpython/commit/9825bdfb
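An illustrative subset of the renamed APIs:
```
import threading

t = threading.current_thread()   # was: threading.currentThread()
print(t.name)                    # was: t.getName()
print(threading.active_count())  # was: threading.activeCount()
```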
|
| |
https://github.com/eventlet/eventlet/issues/757
|
| |
File or stream is not writable
https://github.com/eventlet/eventlet/pull/758
https://github.com/eventlet/eventlet/issues/757
Co-authored-by: Sergey Shepelev <temotor@gmail.com>
|
| |
Remove basepython declarations from tox.ini where they match what tox
already infers from the environment name. This is a no-functional-change
cleanup for existing testenvs, and it makes it possible to run tests on
newer Python targets, e.g. py310-*.
|
| |
Or at least, reduce the likelihood of it.
We've seen deadlocks when GC runs while in the process of releasing an
RLock; it results in a stack that looks like
File ".../logging/__init__.py", line 232, in _releaseLock
_lock.release()
File ".../threading.py", line 189, in release
self._block.release()
File ".../eventlet/lock.py", line 24, in release
return super(Lock, self).release(blocking=blocking)
File ".../logging/__init__.py", line 831, in _removeHandlerRef
acquire()
File ".../logging/__init__.py", line 225, in _acquireLock
_lock.acquire()
That is, we try to release the lock, but in between clearing the RLock
ownership information and releasing the underlying Lock, GC runs and
invokes a weakref callback, which in turn tries to acquire the RLock.
Since the ownership information has already been cleared, the lock is no
longer treated as re-entrant and everything seizes up.
This seems to have become more of a problem since we separated Lock and
Semaphore; apparently the extra stack frame makes it much more likely
that GC can sneak in during that critical moment. So, inline the release
inside of Lock rather than punting to Semaphore; the implementation is
simple anyway, and hasn't changed for at least 12 years (since Semaphore
was pulled out to its own module).
Closes #742
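A schematic of the failure window (not eventlet's implementation; the class
here is a toy re-entrant lock):
```
from threading import Lock, get_ident

class ToyRLock:
    def __init__(self):
        self._block = Lock()
        self._owner = None
        self._count = 0

    def acquire(self):
        me = get_ident()
        if self._owner == me:
            self._count += 1
            return True
        self._block.acquire()
        self._owner = me
        self._count = 1
        return True

    def release(self):
        self._count -= 1
        if self._count == 0:
            self._owner = None
            # A GC-triggered weakref callback that calls acquire() right here
            # sees no owner, blocks on self._block, and the release below
            # never runs: deadlock.  Inlining this step (no extra
            # Semaphore.release frame) shrinks that window.
            self._block.release()
```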
|
| |
Creating a DNS resolver on import results in a failure in environments
where DNS is not available (containers, service ramdisks, etc).
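A sketch of deferring construction to first use (names illustrative):
```
_resolver = None

def _get_resolver():
    global _resolver
    if _resolver is None:
        import dns.resolver
        # Reading resolv.conf happens here, at first lookup, instead of at
        # import time -- so merely importing the module can't fail in
        # DNS-less environments.
        _resolver = dns.resolver.Resolver()
    return _resolver
```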
|
| |
https://github.com/eventlet/eventlet/issues/697
https://github.com/eventlet/eventlet/pull/721
Co-authored-by: Sergey Shepelev <temotor@gmail.com>
Co-authored-by: Tim Burke <tim.burke@gmail.com>
|
| |
Setuptools v54.1.0 introduces a warning that the use of dash-separated
options in 'setup.cfg' will not be supported in a future version [1].
Get ahead of the issue by replacing the dashes with underscores. Without
this, we see 'UserWarning' messages like the following on new enough
versions of setuptools:
UserWarning: Usage of dash-separated 'description-file' will not be
supported in future versions. Please use the underscore name
'description_file' instead
[1] https://github.com/pypa/setuptools/commit/a2e9ae4cb
Signed-off-by: Arthur Zamarin <arthurzam@gentoo.org>
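The shape of the change in setup.cfg (the key/value shown are illustrative):
```
[metadata]
# was: description-file = README.rst
description_file = README.rst
```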
|
| |
Everything below is specific to changes in Python 3.10.
https://github.com/eventlet/eventlet/pull/715
- Only wrap socket.timeout on Python < 3.10
socket.timeout is TimeoutError, which our is_timeout() helper already recognizes (see the sketch after this list).
fixes https://github.com/eventlet/eventlet/issues/687
- Working greenio._open
_pyio.open is now a staticmethod, so we've got to go down to
_pyio.open.__wrapped__ to get to the python function object.
- Test using eventlet.is_timeout rather than requiring an is_timeout attribute on errors.
TimeoutErrors (which are covered by is_timeout) can't necessarily have attributes added to them.
- Fix backdoor tests
Skip build info line at interpreter startup. Also, start printing the banner as we read it to aid in future debugging.
- Tolerate __builtins__ being a dict (rather than module) in is_timeout
(@tipabu) still not sure how this happens, but somehow it does in socket_test.test_error_is_timeout.
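For example (guarded, since the alias only exists on 3.10+):
```
import socket
import sys

import eventlet

if sys.version_info >= (3, 10):
    # socket.timeout is just an alias of TimeoutError now, so there is
    # nothing extra to wrap; is_timeout() recognizes it either way.
    assert socket.timeout is TimeoutError
    assert eventlet.is_timeout(socket.timeout())
```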
|
| |
https://github.com/eventlet/eventlet/pull/727
Co-authored-by: Sergey Shepelev <temotor@gmail.com>
|
| |
Expect: 100-continue is a funny beast -- the client sends it to indicate
that it's willing to wait for an early error, but
- the client has no guarantee that the server supports 100 Continue,
- the server gets no indication of how long the client's willing to wait
for the go/no-go response, and
- even if it did, the server has no way of knowing that the response it
*emitted* within that time was actually *received* within that time
- so the client may have started sending the body regardless of what the
server's done.
As a result, the server only has two options when it *does not* send the
100 Continue response:
- close the socket
- read and discard the request body
Previously, we did neither of these things; as a result, a request body
could be interpreted as a new request. Now, close out the connection,
including sending a `Connection: close` header when practical.
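A schematic of those two options (not eventlet's code; names and limits are
illustrative):
```
def decline_expect_continue(sock, content_length, drain_limit=64 * 1024):
    if 0 <= content_length <= drain_limit:
        # Option 1: read and discard the announced body so its bytes are
        # not misparsed as the start of the next request.
        remaining = content_length
        while remaining:
            chunk = sock.recv(min(remaining, 8192))
            if not chunk:
                break
            remaining -= len(chunk)
    else:
        # Option 2: the response carried "Connection: close"; give up on
        # keep-alive and drop the connection rather than drain a large or
        # unbounded body.
        sock.close()
```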
|
| |
Compatibility with dnspython v2:
- `_compute_expiration` was replaced by `_compute_times`
- `dns.query.{tcp,udp}` take new arguments
Main issue for tracking: https://github.com/eventlet/eventlet/issues/619
This patch's discussion: https://github.com/eventlet/eventlet/pull/722
This patch deprecates the dnspython<2 pin: https://github.com/eventlet/eventlet/issues/629
Co-authored-by: John Vandenberg <jayvdb@gmail.com>
Co-authored-by: Rodolfo Alonso Hernandez <ralonsoh@redhat.com>
|
| |
This disables compression for the control frames sent by the websocket.
As per RFC 7692, Section 6.1:
An endpoint MUST NOT set the "Per-Message Compressed" bit of control
frames and non-first fragments of a data message.
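A schematic of the framing rule (not eventlet's frame writer):
```
def first_byte(opcode, fin=True, compressed=False):
    # RSV1 (0x40) is the "Per-Message Compressed" bit.  It may only be set
    # on the first frame of a compressed data message -- never on control
    # frames (opcode >= 0x8) or continuation frames (opcode 0x0).
    is_control = opcode >= 0x8
    is_continuation = opcode == 0x0
    rsv1 = 0x40 if compressed and not (is_control or is_continuation) else 0
    return (0x80 if fin else 0x00) | rsv1 | opcode
```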
|
| |
There are small typos in:
- eventlet/zipkin/client.py
- tests/hub_test.py
- tests/wsgi_test.py
Fixes:
- Should read `propagating` rather than `propogating`.
- Should read `implementation` rather than `implementaiton`.
- Should read `address` rather than `addoress`.
Closes #710
|
| |
server_hostname argument
https://github.com/eventlet/eventlet/issues/567
https://github.com/eventlet/eventlet/pull/575
|
| |
This fixes a memory-exhaustion DoS attack vector.
References: GHSA-9p9m-jm8w-94p2
https://github.com/eventlet/eventlet/security/advisories/GHSA-9p9m-jm8w-94p2
|
| |
https://github.com/eventlet/eventlet/issues/543
|
| |
Error: `EOF occurred in violation of protocol (_ssl.c:2570)` in some HTTPS `Connection: close` scenarios.
This is a result of suppress_ragged_eofs defaulting to True in SSLSocket, but defaulting to None in GreenSSLSocket when monkey_patched. This only occurs on Python 3.7+.
https://github.com/eventlet/eventlet/pull/695
|
| |
https://github.com/eventlet/eventlet/issues/696
Co-authored-by: Skyline124 <gregoire2011dumas@gmail.com>
|
| |
The imp module is deprecated in favour of importlib, but importlib doesn't
expose acquire_lock/release_lock/lock_held. Use the internal _imp module
instead.
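For reference, the lock helpers as exposed by _imp (same spelling the old
imp module had):
```
import _imp

_imp.acquire_lock()
try:
    assert _imp.lock_held()
finally:
    _imp.release_lock()
```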
|