https://github.com/eventlet/eventlet/issues/791
fixes https://github.com/eventlet/eventlet/issues/763
Sometimes the server pool fills up; all max_size greenthreads are
already in use with other connections. One more connection gets
accept()ed, but then has to wait for a slot to open up to actually be
handled.
This works out fine if your clients tend to make a single request then
close the connection upon receiving a response. It works out OK-ish when
clients are continually pipelining requests; the new connection still
has to wait, but at least there's plenty of work getting processed --
it's defensible. It can work out pretty terribly if clients tend to hold
on to connections "just in case" -- we're ignoring fresh work from a new
client just so we can be ready-to-go if an existing connection wakes up.
There are a couple of existing tunings we can use, but each can have
downsides:
- Increasing max_size is nice for dealing with idle connections, but can
cause hub contention and high latency variance when all those
connections are actually busy.
- socket_timeout can be used to limit the idle socket time, but it
*also* impacts send/recv operations while processing a request, which
may not be desirable.
- keepalive can be set to False, disabling request pipelining entirely.
Extend the keepalive option to wsgi.server so it can also be the timeout
to use while waiting for a new request, separate from socket_timeout. By
default, socket_timeout continues to be used.
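A sketch of how the new knob might be used, assuming keepalive now also accepts a number of seconds as described above (the handler and port here are illustrative, not from the patch):

```python
def app(environ, start_response):
    # Trivial WSGI app used only to show the server call below.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok\n']

def serve():
    # Imported lazily so this sketch stays importable without eventlet.
    import eventlet
    from eventlet import wsgi
    wsgi.server(
        eventlet.listen(('127.0.0.1', 8080)), app,
        keepalive=30,      # idle wait for the *next* request (seconds)
        socket_timeout=5,  # still bounds send/recv during a request
    )
```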
Relevant traceback:
```
File "dns/resolver.py", line 858, in query
if qname.is_absolute():
AttributeError: 'bytes' object has no attribute 'is_absolute'
```
Fixes https://github.com/eventlet/eventlet/issues/599
Co-authored-by: Sergey Shepelev <temotor@gmail.com>
Co-authored-by: Tim Burke <tim.burke@gmail.com>
https://github.com/eventlet/eventlet/pull/657
- GitHub Actions ubuntu-latest switched to 22.04 with python3 >= 3.7
- tooling: pep8 was renamed; upgraded to pycodestyle 2.1 and fixed the required two empty lines after class/def
- upgraded common GitHub Actions to v3: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/
https://github.com/eventlet/eventlet/issues/757
File or stream is not writable
https://github.com/eventlet/eventlet/pull/758
https://github.com/eventlet/eventlet/issues/757
Co-authored-by: Sergey Shepelev <temotor@gmail.com>
Creating a DNS resolver on import results in a failure in environments
where DNS is not available (containers, service ramdisks, etc).
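The usual remedy is to build the resolver lazily, on first use, instead of at import time. A minimal sketch of that pattern (names are illustrative, not eventlet's; `_make_resolver` stands in for something like `dns.resolver.Resolver()`):

```python
_resolver = None

def get_resolver():
    # Create the resolver on first use, not at import, so merely
    # importing the module succeeds where DNS is unavailable.
    global _resolver
    if _resolver is None:
        _resolver = _make_resolver()
    return _resolver

def _make_resolver():
    # Placeholder; no DNS is touched at import time.
    return object()
```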
https://github.com/eventlet/eventlet/issues/697
https://github.com/eventlet/eventlet/pull/721
Co-authored-by: Sergey Shepelev <temotor@gmail.com>
Co-authored-by: Tim Burke <tim.burke@gmail.com>
Everything below is specific to changes in Python 3.10.
https://github.com/eventlet/eventlet/pull/715
- Only wrap socket.timeout on Python < 3.10
socket.timeout is TimeoutError, which our is_timeout() helper func already knows.
fixes https://github.com/eventlet/eventlet/issues/687
- Working greenio._open
_pyio.open is now a staticmethod, so we've got to go down to
_pyio.open.__wrapped__ to get to the python function object.
- Test using eventlet.is_timeout rather than requiring an is_timeout attribute on errors.
TimeoutErrors (which are covered by is_timeout) can't necessarily have attributes added to them.
- Fix backdoor tests
Skip build info line at interpreter startup. Also, start printing the banner as we read it to aid in future debugging.
- Tolerate __builtins__ being a dict (rather than module) in is_timeout
(@tipabu) still not sure how this happens, but somehow it does in socket_test.test_error_is_timeout.
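The first bullet rests on a 3.10 change that is easy to check: socket.timeout became an alias of the builtin TimeoutError. A simplified stand-in for the helper (not eventlet's real `is_timeout()`):

```python
# On Python >= 3.10, socket.timeout *is* TimeoutError, so the isinstance
# check below covers socket timeouts with no special wrapping; the
# attribute check covers errors explicitly flagged as timeouts.
def is_timeout(exc):
    return isinstance(exc, TimeoutError) or bool(getattr(exc, 'is_timeout', False))
```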
https://github.com/eventlet/eventlet/pull/727
Co-authored-by: Sergey Shepelev <temotor@gmail.com>
Expect: 100-continue is a funny beast -- the client sends it to indicate
that it's willing to wait for an early error, but
- the client has no guarantee that the server supports 100 Continue,
- the server gets no indication of how long the client's willing to wait
for the go/no-go response, and
- even if it did, the server has no way of knowing that the response it
*emitted* within that time was actually *received* within that time
- so the client may have started sending the body regardless of what the
server's done.
As a result, the server only has two options when it *does not* send the
100 Continue response:
- close the socket
- read and discard the request body
Previously, we did neither of these things; as a result, a request body
could be interpreted as a new request. Now, close out the connection,
including sending a `Connection: close` header when practical.
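A minimal sketch of the chosen behavior; the status line and names are illustrative (eventlet's server does this internally, not via an application):

```python
def respond_without_continue(start_response, body=b'denied\n'):
    # If "100 Continue" is not sent, the request framing can no longer
    # be trusted (the client may already be sending a body), so
    # advertise that the connection will be closed.
    start_response('417 Expectation Failed',
                   [('Connection', 'close'),
                    ('Content-Length', str(len(body)))])
    return [body]
```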
Compatibility with dnspython v2:
- `_compute_expiration` was replaced by `_compute_times`
- `dns.query.{tcp,udp}` take new arguments
Main issue for tracking: https://github.com/eventlet/eventlet/issues/619
This patch discussion: https://github.com/eventlet/eventlet/pull/722
This patch deprecates the dnspython<2 pin: https://github.com/eventlet/eventlet/issues/629
Co-authored-by: John Vandenberg <jayvdb@gmail.com>
Co-authored-by: Rodolfo Alonso Hernandez <ralonsoh@redhat.com>
There are small typos in:
- eventlet/zipkin/client.py
- tests/hub_test.py
- tests/wsgi_test.py
Fixes:
- Should read `propagating` rather than `propogating`.
- Should read `implementation` rather than `implementaiton`.
- Should read `address` rather than `addoress`.
Closes #710
server_hostname argument
https://github.com/eventlet/eventlet/issues/567
https://github.com/eventlet/eventlet/pull/575
This fixes a memory-exhaustion DoS attack vector.
References: GHSA-9p9m-jm8w-94p2
https://github.com/eventlet/eventlet/security/advisories/GHSA-9p9m-jm8w-94p2
https://github.com/eventlet/eventlet/issues/543
https://github.com/eventlet/eventlet/issues/696
Co-authored-by: Skyline124 <gregoire2011dumas@gmail.com>
https://github.com/eventlet/eventlet/issues/683
https://github.com/eventlet/eventlet/issues/671
https://github.com/pyca/pyopenssl/pull/913
CPython expects to be able to call such a method on RLocks, Conditions,
and Events in threading; since we may monkey-patch threading to use
Semaphores as locks, they need the method, too.
Addresses #646
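The method CPython's threading module calls after fork on Python 3.9+ is `_at_fork_reinit()`. A toy lock exposing it (illustrative only; not eventlet's Semaphore implementation):

```python
import threading

class GreenLock:
    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self, blocking=True):
        return self._lock.acquire(blocking)

    def release(self):
        self._lock.release()

    def _at_fork_reinit(self):
        # Discard any state inherited from the parent process so the
        # child starts with an unlocked lock.
        self._lock = threading.Lock()
```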
Only the value for the current platform should be considered
valid here, so this check uses the constant from the `errno`
module as expected output, instead of hardcoded ints.
Also, this fixes build on MIPS, where ECONNREFUSED is defined
as 146.
Signed-off-by: Ivan A. Melnikov <iv@altlinux.org>
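The same principle in a couple of lines (an illustrative helper, not the repository's test code): compare against the platform constant, never a literal number.

```python
import errno

# ECONNREFUSED is 111 on x86 Linux but 146 on MIPS; errno.ECONNREFUSED
# is always right for the interpreter's own platform.
def is_conn_refused(exc):
    return isinstance(exc, OSError) and exc.errno == errno.ECONNREFUSED
```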
gone (#545)
https://github.com/eventlet/eventlet/issues/541
https://github.com/eventlet/eventlet/pull/545
https://docs.python.org/release/3.0/whatsnew/3.0.html#builtins
https://github.com/eventlet/eventlet/issues/651
When in hubs.trampoline(fd, ...), a greenthread registers itself as a
listener for fd, switches to the hub, and then calls
hub.remove(listener) to deregister itself. hub.remove(listener)
removes the primary listener. If the greenthread awoke because its fd
became ready, then it is the primary listener, and everything is
fine. However, if the greenthread was a secondary listener and awoke
because a Timeout fired then it would remove the primary and promote a
random secondary to primary.
This commit makes hub.remove(listener) check to make sure listener is
the primary, and if it's not, remove the listener from the
secondaries.
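A toy model of the fix, assuming one primary listener per fd plus a list of secondaries as the message describes (not eventlet's actual hub code):

```python
class ToyHub:
    def __init__(self):
        self.primary = {}      # fd -> listener
        self.secondaries = {}  # fd -> [listener, ...]

    def remove(self, fd, listener):
        if self.primary.get(fd) is listener:
            del self.primary[fd]
            rest = self.secondaries.get(fd)
            if rest:
                # The fd is still watched: promote a secondary.
                self.primary[fd] = rest.pop(0)
        else:
            # The fix: a secondary (e.g. woken by a Timeout) removes
            # only itself and leaves the primary in place.
            if listener in self.secondaries.get(fd, []):
                self.secondaries[fd].remove(listener)
```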
Previously, if we patched threading then forked (or, in some cases, used
the subprocess module), Python would log an ignored exception like
Exception ignored in: <function _after_fork at 0x7f16493489d8>
Traceback (most recent call last):
File "/usr/lib/python3.7/threading.py", line 1335, in _after_fork
assert len(_active) == 1
AssertionError:
This comes down to threading in Python 3.7+ having an import side-effect
of registering an at-fork callback. When we re-import threading to patch
it, the old (but still registered) callback still points to the old
thread-tracking dict, rather than the new dict that's actually doing the
tracking.
Now, register our own at_fork hook that will fix up the dict reference
before threading's _after_fork runs and put it back afterwards.
Closes #592
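The mechanism relies on `os.register_at_fork()` (Python 3.7+, POSIX only). A minimal illustration of that API, not eventlet's actual hook:

```python
import os

events = []

# 'before' callbacks run in the parent just before fork(); the two
# 'after' callbacks run in the child and the parent respectively.
os.register_at_fork(
    before=lambda: events.append('before'),
    after_in_child=lambda: events.append('child'),
    after_in_parent=lambda: events.append('parent'),
)
```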
The previous behavior of ignoring DeprecationWarning is now the default in py2.7
Previously, when a client quickly disconnected (causing a socket.error
before the SocketConsole greenlet had a chance to switch), it would
break us out of our accept loop, permanently closing the backdoor.
Now, it will just break us out of the interactive session, leaving the
server ready to accept another backdoor client.
Fixes #570
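The shape of the fix, in miniature (illustrative, not the backdoor module's code): catch per-connection errors inside the loop rather than around it, so a client that vanishes early doesn't end the accept loop.

```python
def accept_loop(server, handle):
    while True:
        try:
            conn, addr = server.accept()
        except OSError:
            break  # the listening socket itself failed; give up
        try:
            handle(conn)
        except OSError:
            conn.close()  # client disconnected early; keep serving
```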
str.capitalize() and str.upper() respect unicode capitalization rules on
py3, while py2 just translates a-z to A-Z.
At best, this may cause confusion and unexpected behaviors, such as
when '\xdf' (a Latin1-encoded ß) becomes 'SS'; at worst, this causes
UnicodeEncodeErrors and the server fails to reply, such as when '\xff'
(a Latin1-encoded ÿ) becomes '\u0178' which does not map back into
Latin1.
Now, convert everything to bytes before capitalizing so just a-z and A-Z
are affected on both py2 and py3.
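The py3 casing behavior described above, and the bytes-based fix, in three lines:

```python
eszett_upper = '\xdf'.upper()       # Latin-1 sharp s upper-cases to *two* characters
y_upper = '\xff'.upper()            # Latin-1 y-with-diaeresis maps outside Latin-1
header = b'content-length'.title()  # bytes methods only touch a-z/A-Z
```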
For SSL sockets created using the SSLContext class under Python >= 3.7,
eventlet incorrectly passes the context as '_context' to the top
level wrap_socket function in the native ssl module.
This causes:
wrap_socket() got an unexpected keyword argument '_context'
as the context cannot be passed this way.
If a context is provided, use the underlying sslsocket_class to
wrap the socket, mirroring the implementation of the wrap_socket
method in the native SSLContext class.
Fixes issue #526
Co-authored-by: Tim Burke <tim.burke@gmail.com>
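A sketch of the approach: wrap via the context's `sslsocket_class`, exactly as `SSLContext.wrap_socket` itself does. Simplified; eventlet passes its green socket and extra state here.

```python
import socket
import ssl

def wrap_with_context(sock, context, server_side=False,
                      server_hostname=None):
    # sslsocket_class (Python 3.7+) defaults to ssl.SSLSocket; using its
    # _create() classmethod mirrors SSLContext.wrap_socket() without
    # ever touching the module-level wrap_socket() function.
    return context.sslsocket_class._create(
        sock=sock,
        server_side=server_side,
        server_hostname=server_hostname,
        context=context,
    )
```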
Increase the timeout used for test_isolate_from_socket_default_timeout
from 1 second to 5 seconds. Otherwise, the test can't succeed
on hardware where Python runs slower. In particular, on our SPARC box
importing greenlet modules takes almost 2 seconds, so the test program
does not even start properly.
Fixes #614
Fix test_set_nonblocking() to account for the alternative possible
outcome that enabling non-blocking mode can set both O_NONBLOCK
and O_NDELAY as it does on SPARC. Note that O_NDELAY may be a superset
of O_NONBLOCK, so we can't just filter it out of new_flags.
Fix TestGreenSocket.test_skip_nonblocking() to unset both O_NONBLOCK
and O_NDELAY. This is necessary to fix tests on SPARC where both flags
are used simultaneously, and unsetting one is ineffective (flags remain
the same). This should not affect other platforms where O_NDELAY
is an alias for O_NONBLOCK.
Fix TestGreenSocket.test_skip_nonblocking() to call F_GETFL again
to get the flags for the socket. Previously, the code wrongly assumed
F_SETFL would return the flags, while it always returns 0 (see fcntl(2)).
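The fcntl(2) behavior in question, demonstrated directly (POSIX only; a standalone sketch, not the test's code):

```python
import fcntl
import os
import socket

s = socket.socket()
flags = fcntl.fcntl(s.fileno(), fcntl.F_GETFL)
# F_SETFL returns 0 on success, *not* the new flags...
ret = fcntl.fcntl(s.fileno(), fcntl.F_SETFL, flags | os.O_NONBLOCK)
# ...so the flags must be re-read with F_GETFL to observe the result
# (on SPARC, O_NDELAY may come back set alongside O_NONBLOCK).
new_flags = fcntl.fcntl(s.fileno(), fcntl.F_GETFL)
```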
Previously, a bunch of tests that just call `tests.run_isolated(...)`
(such as those at the end of patcher_test.py) might time out but not
actually show any errors.
Python 3.7 and later implement queue.SimpleQueue in C, causing a
deadlock when using ThreadPoolExecutor with eventlet.
To avoid this deadlock we now replace the C implementation with the
Python implementation on monkey_patch for Python versions 3.7 and
higher.
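The substitution in miniature: `queue._PySimpleQueue` is the pure-Python implementation that `SimpleQueue` falls back to when the C accelerator is absent. (A sketch of the idea; eventlet performs the swap during monkey_patch.)

```python
import queue

# Use the pure-Python implementation, which can cooperate with green
# threads, instead of the C one, which blocks at the C level.
PySimpleQueue = queue._PySimpleQueue

q = PySimpleQueue()
q.put('item')
```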
fixes #534
pathlib._NormalAccessor wraps `open` in `staticmethod` for py < 3.7 but
not in 3.7. That means `Path.open` calls `green.os.open` with `file`
being a pathlib._NormalAccessor object, and the other arguments shifted.
Fortunately pathlib doesn't use the `dir_fd` argument, so we have space
in the parameter list. We use some heuristics to detect this and adjust
the parameters (without importing pathlib).
Fixes https://github.com/eventlet/eventlet/issues/580
Some applications may need to perform some long-running operation during
a client-request cycle. To keep the client from timing out while waiting
for the response, the application issues a status pro tempore, dribbles
out whitespace (or some other filler) periodically, and expects the
client to parse the final response to confirm success or failure.
Previously, if the application was *too* eager and sent data before ever
reading from the request body, we would write headers to the client,
send that initial data, but then *still send the 100 Continue* when the
application finally read the request. Since this would occur on a chunk
boundary, the client cannot parse the size of the next chunk, and
everything goes off the rails.
Now, only be willing to send the 100 Continue response if we have not
sent headers to the client.
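The guard described above, in miniature (names are illustrative, not eventlet's internals):

```python
def maybe_send_continue(headers_sent, wfile):
    # Only emit "100 Continue" while no response headers have gone out;
    # afterwards it would land mid-stream and corrupt chunk framing.
    if headers_sent:
        return False
    wfile.write(b'HTTP/1.1 100 Continue\r\n\r\n')
    return True
```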
We already 400 missing and non-integer Content-Lengths, and Input almost
certainly wasn't intended to handle negative lengths.
Be sure to close the connection, too -- we have no reason to think that
the client's request framing is still good.
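Validation in the spirit of the change (a sketch, not eventlet's code): missing, non-integer, and now negative values are all rejected, and the caller should also close the connection on failure.

```python
def parse_content_length(value):
    try:
        length = int(value)
    except (TypeError, ValueError):
        # Missing (None) or non-integer header value.
        raise ValueError('400 Bad Request: unreadable Content-Length')
    if length < 0:
        raise ValueError('400 Bad Request: negative Content-Length')
    return length
```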
* #53: Make a GreenPile with no spawn()s an empty sequence.
Remove the GreenPile.used flag. This was probably originally intended to
address the potential problem of GreenPool.starmap() returning a (still empty)
GreenPile, a problem that has since been addressed by using GreenMap which
requires an explicit marker from the producer. Currently the only effect of
GreenPile.used is to make GreenPile hang when used with an empty sequence.
Test the empty GreenPile case.
Even though GreenMap is probably not intended for general use, make it
slightly more consumer-friendly by adding a done_spawning() method instead of
requiring the consumer to spawn(return_stop_iteration). GreenPool._do_map()
now calls done_spawning(). Remove return_stop_iteration().
Since done_spawning() merely spawns a function that returns StopIteration, any
existing consumer that explicitly does the same will still work. However, this
is potentially a breaking change if any consumers specifically reference
eventlet.greenpool.return_stop_iteration() for that or any other purpose.
Refactor GreenPile.next(), breaking out the bookkeeping detail to new _next()
method. Make subclass GreenMap call base-class _next(), eliminating the need
to replicate that bookkeeping detail.
* Issue #535: use Python 2 compatible syntax for keyword-only args.
* Validate that encode_chunked is the *only* keyword argument passed.
* Increase Travis slop factor for ZMQ CPU usage.
The comment in check_idle_cpu_usage() reads:
# This check is reliably unreliable on Travis, presumably because of CPU
# resources being quite restricted by the build environment. The workaround
# is to apply an arbitrary factor that should be enough to make it work nicely.
Empirically -- it's not. Over the last few months there have been many Travis
"failures" that boil down to this one spurious error. Increase from a slop
factor of 1.2 to 5. If that's still unreliable, may bypass this test entirely
on Travis.
Previously, we pretended the input wasn't chunked and hoped for the best. On
py2, this would give the caller the raw, chunk-encoded data; for some reason,
on py3, this would hang.
Now, readlines() will behave as expected.
Previously, we would compare the last item of a byte string
with a newline in a native string. On Python 3, getting a
single item from a byte string gives you an integer (which
will not be equal to any string), so readline would return
the entire request body.
While we're at it, fix the return type when the caller requests
that zero bytes be read.
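The pitfall in two lines:

```python
line = b'GET / HTTP/1.1\n'
last = line[-1]   # indexing bytes yields an int (10), never equal to '\n'
tail = line[-1:]  # slicing yields bytes, which compares as expected
```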
For more context, see #467 and #497.
On py3, urllib.parse.unquote() defaults to decoding via UTF-8 and
replacing invalid UTF-8 sequences with "\N{REPLACEMENT CHARACTER}".
This causes a few problems:
- Since WSGI requires that bytes be decoded as Latin-1 on py3, we
have to do an extra re-encode/decode cycle in encode_dance().
- Applications written for Latin-1 are broken, as there are valid
Latin-1 sequences that are mangled because of the replacement.
- Applications written for UTF-8 cannot differentiate between a
replacement character that was intentionally sent by the client
versus an invalid byte sequence.
Fortunately, unquote() allows us to specify the encoding that should
be used. By specifying Latin-1, we can drop encode_dance() entirely
and preserve as much information from the wire as we can.
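The difference is easy to see: `%E9` is a valid Latin-1 byte (é) but an invalid standalone UTF-8 sequence, so the default decoding mangles it.

```python
from urllib.parse import unquote

latin1 = unquote('/caf%E9', encoding='latin-1')  # every byte value preserved
utf8 = unquote('/caf%E9')                        # default: UTF-8 with replacement
```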
* [bug] reimport submodule as well in patcher.inject
* [dev] add unit-test
* [dev] move unit test to isolated tests
* improve unit test