path: root/tests
Commit message | Author | Age | Files | Lines
* tests: getaddrinfo(host, 0) is not supported on OpenIndiana platform (Sergey Shepelev, 2023-03-27; 1 file, -7/+7) [HEAD, master]
    https://github.com/eventlet/eventlet/issues/791
* dep: greenlet>=1.0, removing unused clear_sys_exc_info stub (Sergey Shepelev, 2023-03-27; 2 files, -2/+0)
    fixes https://github.com/eventlet/eventlet/issues/763
* wsgi: Allow keepalive option to be a timeout (Tim Burke, 2023-02-22; 1 file, -0/+51)
    Sometimes the server pool fills up; all max_size greenthreads are
    already in use with other connections. One more connection gets
    accept()ed, but then has to wait for a slot to open up to actually be
    handled.

    This works out fine if your clients tend to make a single request then
    close the connection upon receiving a response. It works out OK-ish
    when clients are continually pipelining requests; the new connection
    still has to wait, but at least there's plenty of work getting
    processed -- it's defensible. It can work out pretty terribly if
    clients tend to hold on to connections "just in case" -- we're ignoring
    fresh work from a new client just so we can be ready-to-go if an
    existing connection wakes up.

    There are a couple existing tunings we can use, but they can each have
    downsides:

    - Increasing max_size is nice for dealing with idle connections, but
      can cause hub contention and high latency variance when all those
      connections are actually busy.
    - socket_timeout can be used to limit the idle socket time, but it
      *also* impacts send/recv operations while processing a request,
      which may not be desirable.
    - keepalive can be set to False, disabling request pipelining entirely.

    Change the keepalive option to wsgi.server so it can be the timeout to
    use while waiting for a new request, separate from socket_timeout. By
    default, socket_timeout continues to be used.
* dns: support host as bytes in getaddrinfo, resolve (Gorka Eguileor, 2023-02-02; 1 file, -0/+24)
    Relevant traceback:
    ```
    File "dns/resolver.py", line 858, in query
        if qname.is_absolute():
    AttributeError: 'bytes' object has no attribute 'is_absolute'
    ```

    Fixes https://github.com/eventlet/eventlet/issues/599

    Co-authored-by: Sergey Shepelev <temotor@gmail.com>
    Co-authored-by: Tim Burke <tim.burke@gmail.com>
* hubs: drop pyevent hub (Sergey Shepelev, 2023-01-22; 15 files, -106/+16)
    https://github.com/eventlet/eventlet/pull/657
* chore: CI upgrades, pycodestyle fix 2 empty lines after class/def (Sergey Shepelev, 2023-01-18; 13 files, -0/+16)
    - github actions ubuntu-latest switched to 22.04 with python3>=3.7
    - tool: pep8 was renamed and upgraded to pycodestyle 2.1; fixed 2 empty
      lines after class/def
    - common github actions upgrade to v3
      https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/
* Fix WSGI test (Tim Burke, 2023-01-18; 1 file, -1/+7)
* test: GreenPipe with mode w+, a+ (wuming0, 2022-07-08; 1 file, -0/+20)
    https://github.com/eventlet/eventlet/issues/757
* greenio: GreenPipe/fdopen() with 'a' in mode raised io.UnsupportedOperation: File or stream is not writable (wuming0, 2022-06-27; 2 files, -0/+45)
    https://github.com/eventlet/eventlet/pull/758
    https://github.com/eventlet/eventlet/issues/757

    Co-authored-by: Sergey Shepelev <temotor@gmail.com>
* Create a DNS resolver lazily rather than on import (fixes #736) (Dmitry Tantsur, 2021-11-16; 1 file, -0/+3)
    Creating a DNS resolver on import results in a failure in environments
    where DNS is not available (containers, service ramdisks, etc).
* green.thread: unlocked Lock().release() should raise exception, returned True (Michael Wright, 2021-10-28; 2 files, -21/+24)
    https://github.com/eventlet/eventlet/issues/697
    https://github.com/eventlet/eventlet/pull/721

    Co-authored-by: Sergey Shepelev <temotor@gmail.com>
    Co-authored-by: Tim Burke <tim.burke@gmail.com>
* Python 3.10 partial support (Tim Burke, 2021-10-08; 2 files, -2/+5)
    Everything below is specific to changes in Python 3.10.
    https://github.com/eventlet/eventlet/pull/715

    - Only wrap socket.timeout on Python < 3.10: socket.timeout is
      TimeoutError, which our is_timeout() helper func already knows.
      fixes https://github.com/eventlet/eventlet/issues/687
    - Working greenio._open: _pyio.open is now a staticmethod, so we've got
      to go down to _pyio.open.__wrapped__ to get to the python function
      object.
    - Test using eventlet.is_timeout rather than requiring an is_timeout
      attribute on errors. TimeoutErrors (which are covered by is_timeout)
      can't necessarily have attributes added to them.
    - Fix backdoor tests: skip build info line at interpreter startup.
      Also, start printing the banner as we read it to aid in future
      debugging.
    - Tolerate __builtins__ being a dict (rather than module) in is_timeout
      (@tipabu): still not sure how this happens, but somehow it does in
      socket_test.test_error_is_timeout.
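The socket.timeout change can be verified against the stdlib alone; this sketch shows the alias that makes wrapping socket.timeout separately redundant on 3.10+:

```python
import socket
import sys

# Since Python 3.10, socket.timeout is merely an alias of the builtin
# TimeoutError, so `except TimeoutError` already catches socket timeouts.
is_alias = socket.timeout is TimeoutError

try:
    raise socket.timeout('timed out')
except OSError as exc:
    # socket.timeout subclasses OSError on all supported versions; whether
    # it is *also* a TimeoutError depends on the interpreter version.
    caught_as_timeout_error = isinstance(exc, TimeoutError)
```

On Python < 3.10, `is_alias` is False and catching `TimeoutError` would miss socket timeouts, which is why the wrapping is still needed there.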
* ssl: GreenSSLContext minimum_version and maximum_version setters (Bob Haddleton, 2021-09-29; 2 files, -0/+16)
    https://github.com/eventlet/eventlet/pull/727

    Co-authored-by: Sergey Shepelev <temotor@gmail.com>
* wsgi: Don't break HTTP framing during 100-continue handling (Tim Burke, 2021-09-13; 1 file, -0/+78)
    Expect: 100-continue is a funny beast -- the client sends it to
    indicate that it's willing to wait for an early error, but

    - the client has no guarantee that the server supports 100 Continue,
    - the server gets no indication of how long the client's willing to
      wait for the go/no-go response, and
    - even if it did, the server has no way of knowing that the response it
      *emitted* within that time was actually *received* within that time

    so the client may have started sending the body regardless of what the
    server's done. As a result, the server only has two options when it
    *does not* send the 100 Continue response:

    - close the socket
    - read and discard the request body

    Previously, we did neither of these things; as a result, a request body
    could be interpreted as a new request. Now, close out the connection,
    including sending a `Connection: close` header when practical.
* greendns: compatibility with dnspython v2 (Felix Yan, 2021-09-01; 2 files, -2/+3)
    Compatibility with dnspython v2:
    - `_compute_expiration` was replaced by `_compute_times`
    - `dns.query.{tcp,udp}` take new arguments

    Main issue for tracking: https://github.com/eventlet/eventlet/issues/619
    This patch discussion: https://github.com/eventlet/eventlet/pull/722
    This patch deprecates dnspython<2 pin:
    https://github.com/eventlet/eventlet/issues/629

    Co-authored-by: John Vandenberg <jayvdb@gmail.com>
    Co-authored-by: Rodolfo Alonso Hernandez <ralonsoh@redhat.com>
* docs: fix a few simple typos (Tim Gates, 2021-08-16; 2 files, -2/+2)
    There are small typos in:
    - eventlet/zipkin/client.py
    - tests/hub_test.py
    - tests/wsgi_test.py

    Fixes:
    - Should read `propagating` rather than `propogating`.
    - Should read `implementation` rather than `implementaiton`.
    - Should read `address` rather than `addoress`.

    Closes #710
* ssl: py3.6 using client certificates raised ValueError: check_hostname needs server_hostname argument (Paul Lockaby, 2021-05-11; 1 file, -0/+24)
    https://github.com/eventlet/eventlet/issues/567
    https://github.com/eventlet/eventlet/pull/575
* tests: extend default mysqldb test timeout to 5s (Sergey Shepelev, 2021-05-11; 1 file, -0/+2)
* replace Travis with Github (actions) CI (Sergey Shepelev, 2021-05-10; 1 file, -2/+2)
* websocket: Limit maximum uncompressed frame length to 8MiB (Onno Kortmann, 2021-05-05; 1 file, -1/+58)
    This fixes a memory exhaustion DOS attack vector.

    References:
    GHSA-9p9m-jm8w-94p2
    https://github.com/eventlet/eventlet/security/advisories/GHSA-9p9m-jm8w-94p2
* wsgi: websocket ALREADY_HANDLED flag on corolocal (Choi Geonu, 2021-05-02; 2 files, -3/+37)
    https://github.com/eventlet/eventlet/issues/543
* greenio: socket.connect_ex returned None instead of 0 on success (Sergey Shepelev, 2021-03-25; 1 file, -0/+8) [696-connect_ex]
    https://github.com/eventlet/eventlet/issues/696

    Co-authored-by: Skyline124 <gregoire2011dumas@gmail.com>
* patcher: built-in open() did not accept kwargs (Sergey Shepelev, 2021-01-29; 2 files, -0/+14) [683-open-kwargs]
    https://github.com/eventlet/eventlet/issues/683
* pyopenssl tsafe module was deprecated and removed in v20.0.0 (Sergey Shepelev, 2020-12-13; 1 file, -1/+0)
    https://github.com/eventlet/eventlet/issues/671
    https://github.com/pyca/pyopenssl/pull/913
* py39: Add _at_fork_reinit method to Semaphores (Tim Burke, 2020-11-03; 1 file, -0/+21)
    CPython expects to be able to call such a method on RLocks, Conditions,
    and Events in threading; since we may monkey-patch threading to use
    Semaphores as locks, they need the method, too.

    Addresses #646
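A rough idea of what such a shim looks like -- a hypothetical toy class, not eventlet's actual Semaphore; `_at_fork_reinit` is the hook CPython 3.9+ threading calls on tracked locks after fork:

```python
import threading


class ForkSafeSemaphore:
    """Toy counting semaphore exposing the _at_fork_reinit hook that
    CPython 3.9+ threading invokes on locks it tracks across fork."""

    def __init__(self, value=1):
        self.counter = value
        self._cond = threading.Condition(threading.Lock())

    def _at_fork_reinit(self):
        # The child must not inherit a condition/lock that some thread in
        # the parent may have been holding at fork time: make a fresh one.
        self._cond = threading.Condition(threading.Lock())

    def acquire(self):
        with self._cond:
            while self.counter <= 0:
                self._cond.wait()
            self.counter -= 1

    def release(self):
        with self._cond:
            self.counter += 1
            self._cond.notify()


sem = ForkSafeSemaphore()
sem.acquire()
old_cond = sem._cond
sem._at_fork_reinit()  # what threading's after-fork cleanup would invoke
reinitialized = sem._cond is not old_cond
```

Without the method, threading's post-fork cleanup raises AttributeError when Semaphores are standing in for its locks.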
* tests: Improve ECONNREFUSED checks (Ivan A. Melnikov, 2020-10-22; 1 file, -4/+5)
    Only the value for the current platform should be considered valid
    here, so this check uses the constant from the `errno` module as
    expected output, instead of hardcoded ints.

    Also, this fixes the build on MIPS, where ECONNREFUSED is defined
    as 146.

    Signed-off-by: Ivan A. Melnikov <iv@altlinux.org>
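The portable pattern the fix adopts, sketched with the stdlib only:

```python
import errno
import socket

# Try to connect where (almost certainly) nothing is listening; the
# OS-specific error number comes back as connect_ex's return value.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
result = sock.connect_ex(('127.0.0.1', 1))
sock.close()

# Wrong: comparing against a hardcoded int (111 on Linux, 146 on
# MIPS/SysV-style platforms). Right: compare against errno's constant,
# which always carries the current platform's value.
refused = (result == errno.ECONNREFUSED)
```

Hardcoding 111 passes on Linux/x86 and silently fails anywhere the kernel numbers its errors differently.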
* patcher: monkey_patch(builtins=True) failed on py3 because `file` class is gone (#545) (秋葉, 2020-10-22; 2 files, -0/+18)
    https://github.com/eventlet/eventlet/issues/541
    https://github.com/eventlet/eventlet/pull/545
    https://docs.python.org/release/3.0/whatsnew/3.0.html#builtins
* ssl: context wrapped listener failed to supply _context in accept() (Sergey Shepelev, 2020-10-20; 1 file, -0/+46)
    https://github.com/eventlet/eventlet/issues/651
* Always remove the right listener from the hub (Samuel Merritt, 2020-09-23; 1 file, -1/+46)
    When in hubs.trampoline(fd, ...), a greenthread registers itself as a
    listener for fd, switches to the hub, and then calls
    hub.remove(listener) to deregister itself.

    hub.remove(listener) removes the primary listener. If the greenthread
    awoke because its fd became ready, then it is the primary listener, and
    everything is fine. However, if the greenthread was a secondary
    listener and awoke because a Timeout fired, then it would remove the
    primary and promote a random secondary to primary.

    This commit makes hub.remove(listener) check to make sure listener is
    the primary, and if it's not, remove the listener from the secondaries.
* Clean up threading book-keeping at fork when monkey-patched (Tim Burke, 2020-08-28; 2 files, -0/+63)
    Previously, if we patched threading then forked (or, in some cases,
    used the subprocess module), Python would log an ignored exception like

        Exception ignored in: <function _after_fork at 0x7f16493489d8>
        Traceback (most recent call last):
          File "/usr/lib/python3.7/threading.py", line 1335, in _after_fork
            assert len(_active) == 1
        AssertionError:

    This comes down to threading in Python 3.7+ having an import
    side-effect of registering an at-fork callback. When we re-import
    threading to patch it, the old (but still registered) callback still
    points to the old thread-tracking dict, rather than the new dict that's
    actually doing the tracking.

    Now, register our own at_fork hook that will fix up the dict reference
    before threading's _at_fork runs and put it back afterwards.

    Closes #592
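The mechanism used here, `os.register_at_fork` (Python 3.7+, Unix-only), can be sketched in isolation -- the hooks below are illustrative, not eventlet's actual internals:

```python
import os

events = []

# Callbacks registered here fire around every subsequent os.fork():
# `before` in the parent just prior to forking, the other two just after,
# in the parent and the child respectively.
os.register_at_fork(
    before=lambda: events.append('before'),
    after_in_parent=lambda: events.append('after_in_parent'),
    after_in_child=lambda: events.append('after_in_child'),
)

pid = os.fork()
if pid == 0:
    # Child inherited ['before'], then its own hook appended.
    os._exit(0 if events == ['before', 'after_in_child'] else 1)

_, status = os.waitpid(pid, 0)
child_ok = (os.waitstatus_to_exitcode(status) == 0)
parent_ok = (events == ['before', 'after_in_parent'])
```

Hooks run in registration order relative to threading's own callback, which is what lets a later-registered hook patch the tracking dict around it.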
* tests checking output were broken by Python 2 end-of-support warning (Sergey Shepelev, 2020-08-19; 1 file, -2/+2) [py27-warning]
    The previous behavior of ignoring DeprecationWarning is now the
    default in py2.7
* backdoor: handle disconnects better (Tim Burke, 2020-07-31; 1 file, -0/+16)
    Previously, when a client quickly disconnected (causing a socket.error
    before the SocketConsole greenlet had a chance to switch), it would
    break us out of our accept loop, permanently closing the backdoor.
    Now, it will just break us out of the interactive session, leaving the
    server ready to accept another backdoor client.

    Fixes #570
* wsgi: Fix header capitalization on py3 (Tim Burke, 2020-07-02; 1 file, -1/+21)
    str.capitalize() and str.upper() respect unicode capitalization rules
    on py3, while py2 just translates a-z to A-Z. At best, this may cause
    confusion and unexpected behaviors, such as when '\xdf' (a
    Latin1-encoded ß) becomes 'SS'; at worst, this causes
    UnicodeEncodeErrors and the server fails to reply, such as when '\xff'
    (a Latin1-encoded ÿ) becomes '\u0178' which does not map back into
    Latin1.

    Now, convert everything to bytes before capitalizing so just a-z and
    A-Z are affected on both py2 and py3.
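The difference is easy to demonstrate with stdlib Python 3 alone (an illustration of the failure mode, not eventlet's actual header code):

```python
# str methods apply full Unicode case rules on py3...
assert '\xdf'.upper() == 'SS'            # ß -> SS: the length even changes
assert '\xff'.capitalize() == '\u0178'   # ÿ -> Ÿ, outside Latin-1 entirely

try:
    '\u0178'.encode('latin-1')           # what the server must emit
    roundtrips = True
except UnicodeEncodeError:
    roundtrips = False                   # -> no reply to the client

# ...while bytes methods only touch a-z/A-Z, keeping headers Latin-1-safe.
safe = b'x-\xdf-header'.capitalize()     # the 0xdf byte is left alone
```

Capitalizing at the bytes level sidesteps both the mangling and the encode failure.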
* Fix compatibility with SSLContext usage >= Python 3.7 (James Page, 2020-07-02; 1 file, -0/+22)
    For SSL sockets created using the SSLContext class under Python >= 3.7,
    eventlet incorrectly passes the context as '_context' to the top level
    wrap_socket function in the native ssl module. This causes:

        wrap_socket() got an unexpected keyword argument '_context'

    as the context cannot be passed this way.

    If a context is provided, use the underlying sslsocket_class to wrap
    the socket, mirroring the implementation of the wrap_socket method in
    the native SSLContext class.

    Fixes issue #526

    Co-authored-by: Tim Burke <tim.burke@gmail.com>
* tests: Increase timeout for test_isolate_from_socket_default_timeout (Michał Górny, 2020-07-01; 1 file, -1/+1)
    Increase the timeout used for test_isolate_from_socket_default_timeout
    from 1 second to 5 seconds. Otherwise, the test can't succeed on
    hardware where Python runs slower. In particular, on our SPARC box
    importing greenlet modules takes almost 2 seconds, so the test program
    does not even start properly.

    Fixes #614
* tests: Assume that nonblocking mode might set O_NDELAY to fix SPARC (Michał Górny, 2020-07-01; 1 file, -1/+4)
    Fix test_set_nonblocking() to account for the alternative possible
    outcome that enabling non-blocking mode can set both O_NONBLOCK and
    O_NDELAY, as it does on SPARC. Note that O_NDELAY may be a superset of
    O_NONBLOCK, so we can't just filter it out of new_flags.
* tests: Unset O_NONBLOCK|O_NDELAY to fix SPARC (Michał Górny, 2020-07-01; 1 file, -3/+5)
    Fix TestGreenSocket.test_skip_nonblocking() to unset both O_NONBLOCK
    and O_NDELAY. This is necessary to fix tests on SPARC, where both flags
    are used simultaneously and unsetting only one is ineffective (flags
    remain the same). This should not affect other platforms, where
    O_NDELAY is an alias for O_NONBLOCK.
* tests: F_SETFL does not return flags, use F_GETFL again (Michał Górny, 2020-07-01; 1 file, -1/+2)
    Fix TestGreenSocket.test_skip_nonblocking() to call F_GETFL again to
    get the flags for the socket. Previously, the code wrongly assumed
    F_SETFL would return the flags, while it always returns 0 (see
    fcntl(2)).
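The fcntl behavior these three fixes hinge on, sketched for a Unix platform:

```python
import fcntl
import os
import socket

sock = socket.socket()
flags = fcntl.fcntl(sock.fileno(), fcntl.F_GETFL)

# F_SETFL returns 0 on success (see fcntl(2)) -- it does NOT echo back
# the new flag word, so a follow-up F_GETFL is required to observe it.
ret = fcntl.fcntl(sock.fileno(), fcntl.F_SETFL, flags | os.O_NONBLOCK)
new_flags = fcntl.fcntl(sock.fileno(), fcntl.F_GETFL)

# On SPARC, O_NDELAY may be set alongside O_NONBLOCK; clearing both is
# the portable way back to blocking mode (on Linux the two are equal).
fcntl.fcntl(sock.fileno(), fcntl.F_SETFL,
            new_flags & ~(os.O_NONBLOCK | os.O_NDELAY))
cleared = fcntl.fcntl(sock.fileno(), fcntl.F_GETFL)
sock.close()
```

Checking `new_flags` rather than `ret`, and masking both flags, is exactly what makes the tests pass on SPARC without changing behavior elsewhere.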
* tests: Fail on timeout when expect_pass=True (#612) (Tim Burke, 2020-05-15; 1 file, -0/+3)
    Previously, a bunch of tests that just call `tests.run_isolated(...)`
    (such as those at the end of patcher_test.py) might time out but not
    actually show any errors.
* Fix #508: Py37 Deadlock ThreadPoolExecutor (#598) (Gorka Eguileor, 2020-05-15; 2 files, -0/+22)
    Python 3.7 and later implement queue.SimpleQueue in C, causing a
    deadlock when using ThreadPoolExecutor with eventlet. To avoid this
    deadlock, we now replace the C implementation with the Python
    implementation on monkey_patch for Python versions 3.7 and higher.
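The stdlib keeps the pure-Python implementation around as `queue._PySimpleQueue`, so the swap can be illustrated without eventlet at all (a sketch of the idea, not eventlet's exact patching code):

```python
import queue

# On CPython 3.7+, queue.SimpleQueue is normally the C implementation
# (_queue.SimpleQueue), whose internal lock green threads cannot yield on.
c_impl = queue.SimpleQueue

# The pure-Python fallback is API-compatible and uses threading locks,
# which monkey-patching can replace with green-friendly ones.
queue.SimpleQueue = queue._PySimpleQueue

q = queue.SimpleQueue()
q.put('work')
item = q.get()

queue.SimpleQueue = c_impl  # restore the original for this process
```

ThreadPoolExecutor picks up whatever `queue.SimpleQueue` names at call time, which is why replacing it during monkey_patch is sufficient.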
* workaround for pathlib on py 3.7 (David Szotten, 2019-08-20; 1 file, -0/+11)
    fixes #534

    pathlib._NormalAccessor wraps `open` in `staticmethod` for py < 3.7 but
    not 3.7. That means that `Path.open` calls `green.os.open` with `file`
    being a pathlib._NormalAccessor object, and the other arguments
    shifted. Fortunately pathlib doesn't use the `dir_fd` argument, so we
    have space in the parameter list. We use some heuristics to detect this
    and adjust the parameters (without importing pathlib).
* Stop using deprecated cgi.parse_qs() to support Python 3.8 (Miro Hrončok, 2019-07-10; 1 file, -1/+2)
    Fixes https://github.com/eventlet/eventlet/issues/580
* wsgi: Only send 100 Continue response if no response has been sent yet (#557) (Tim Burke, 2019-03-21; 1 file, -0/+61)
    Some applications may need to perform some long-running operation
    during a client-request cycle. To keep the client from timing out while
    waiting for the response, the application issues a status pro tempore,
    dribbles out whitespace (or some other filler) periodically, and
    expects the client to parse the final response to confirm success or
    failure.

    Previously, if the application was *too* eager and sent data before
    ever reading from the request body, we would write headers to the
    client, send that initial data, but then *still send the 100 Continue*
    when the application finally read the request. Since this would occur
    on a chunk boundary, the client cannot parse the size of the next
    chunk, and everything goes off the rails.

    Now, only be willing to send the 100 Continue response if we have not
    sent headers to the client.
* wsgi: Return 400 on negative Content-Length request headers (#537) (Tim Burke, 2019-03-04; 1 file, -0/+7)
    We already 400 missing and non-integer Content-Lengths, and Input
    almost certainly wasn't intended to handle negative lengths. Be sure to
    close the connection, too -- we have no reason to think that the
    client's request framing is still good.
* #53: Make a GreenPile with no spawn()s an empty sequence. (#555) (nat-goodspeed, 2019-03-04; 1 file, -0/+6)
    Remove the GreenPile.used flag. This was probably originally intended
    to address the potential problem of GreenPool.starmap() returning a
    (still empty) GreenPile, a problem that has since been addressed by
    using GreenMap, which requires an explicit marker from the producer.
    Currently the only effect of GreenPile.used is to make GreenPile hang
    when used with an empty sequence.

    Test the empty GreenPile case.

    Even though GreenMap is probably not intended for general use, make it
    slightly more consumer-friendly by adding a done_spawning() method
    instead of requiring the consumer to spawn(return_stop_iteration).
    GreenPool._do_map() now calls done_spawning().

    Remove return_stop_iteration(). Since done_spawning() merely spawns a
    function that returns StopIteration, any existing consumer that
    explicitly does the same will still work. However, this is potentially
    a breaking change if any consumers specifically reference
    eventlet.greenpool.return_stop_iteration() for that or any other
    purpose.

    Refactor GreenPile.next(), breaking out the bookkeeping detail to a
    new _next() method. Make subclass GreenMap call base-class _next(),
    eliminating the need to replicate that bookkeeping detail.
* Increase Travis slop factor for ZMQ CPU usage. (#542) (nat-goodspeed, 2019-03-04; 1 file, -1/+1)
    * Issue #535: use Python 2 compatible syntax for keyword-only args.
    * Validate that encode_chunked is the *only* keyword argument passed.
    * Increase Travis slop factor for ZMQ CPU usage.

    The comment in check_idle_cpu_usage() reads:

        # This check is reliably unreliable on Travis, presumably because
        # of CPU resources being quite restricted by the build environment.
        # The workaround is to apply an arbitrary factor that should be
        # enough to make it work nicely.

    Empirically -- it's not. Over the last few months there have been many
    Travis "failures" that boil down to this one spurious error. Increase
    from a slop factor of 1.2 to 5. If that's still unreliable, we may
    bypass this test entirely on Travis.
* wsgi: fix Input.readlines when dealing with chunked input (Tim Burke, 2019-02-28; 1 file, -0/+13)
    Previously, we pretended the input wasn't chunked and hoped for the
    best. On py2, this would give the caller the raw, chunk-encoded data;
    for some reason, on py3, this would hang. Now, readlines() will behave
    as expected.
* wsgi: fix Input.readline on Python 3 (Tim Burke, 2019-02-28; 1 file, -0/+13)
    Previously, we would compare the last item of a byte string with a
    newline in a native string. On Python 3, getting a single item from a
    byte string gives you an integer (which will not be equal to any
    string), so readline would return the entire request body.

    While we're at it, fix the return type when the caller requests that
    zero bytes be read.
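The py2/py3 difference behind the bug, shown with the stdlib:

```python
line = b'GET / HTTP/1.1\n'

# On Python 3, indexing bytes yields an int, so comparing it with a str
# newline is always False -- readline never saw a line ending.
assert line[-1] == 10      # ord('\n'): an int, not a one-byte string
assert line[-1] != '\n'    # int vs str: never equal, silently

# Correct py3-friendly checks compare bytes with bytes:
ends_with_newline = line.endswith(b'\n')
# ...or slice, since slicing bytes yields bytes rather than an int:
last = line[-1:]
```

On py2 both `line[-1]` and `'\n'` are one-character strings, which is why the original comparison ever worked.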
* wsgi: Stop replacing invalid UTF-8 on py3 (Tim Burke, 2019-02-28; 1 file, -8/+15)
    For more context, see #467 and #497.

    On py3, urllib.parse.unquote() defaults to decoding via UTF-8 and
    replacing invalid UTF-8 sequences with "\N{REPLACEMENT CHARACTER}".
    This causes a few problems:

    - Since WSGI requires that bytes be decoded as Latin-1 on py3, we have
      to do an extra re-encode/decode cycle in encode_dance().
    - Applications written for Latin-1 are broken, as there are valid
      Latin-1 sequences that are mangled because of the replacement.
    - Applications written for UTF-8 cannot differentiate between a
      replacement character that was intentionally sent by the client
      versus an invalid byte sequence.

    Fortunately, unquote() allows us to specify the encoding that should
    be used. By specifying Latin-1, we can drop encode_dance() entirely
    and preserve as much information from the wire as we can.
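The stdlib behavior the fix leans on:

```python
from urllib.parse import unquote

# Default UTF-8 decoding replaces the invalid byte 0xff with U+FFFD,
# destroying the original value:
assert unquote('%ff') == '\ufffd'

# Decoding as Latin-1 (the encoding WSGI mandates for environ strings)
# maps every byte 0x00-0xff to the same code point, losing nothing:
path = unquote('/caf%e9', encoding='latin-1')
assert path == '/caf\xe9'
assert path.encode('latin-1') == b'/caf\xe9'  # round-trips losslessly
```

Because Latin-1 is a bijection between bytes and U+0000..U+00FF, the application can always recover the exact wire bytes and re-decode them however it likes.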
* [bug] reimport submodule as well in patcher.inject (#540) (Junyi, 2019-01-23; 5 files, -0/+31)
    * [bug] reimport submodule as well in patcher.inject
    * [dev] add unit-test
    * [dev] move unit test to isolated tests
    * improve unit test