path: root/tests
Commit message (author, date, files changed, -deleted/+added lines)
* Rename "tests" directory to be "test" like in the swift repo (Tim Burke, 2019-11-06, 13 files, -12224/+0)
  In addition to being less confusing for devs, this lets us actually run tempauth tests in swiftclient dsvm jobs. The job definition (over in the swift repo) specifies test/sample.conf, which does not exist in this repo. As a result, those tests would skip with "SKIPPING FUNCTIONAL TESTS DUE TO NO CONFIG".
  Change-Id: I558dbf9a657d442e6e19468e543bbec855129eeb
* Merge "Fix up requests so we can send non-RFC-compliant headers on py3" (Zuul, 2019-08-02, 1 file, -0/+17)
* Fix up requests so we can send non-RFC-compliant headers on py3 (Tim Burke, 2019-07-25, 1 file, -0/+17)
  Change-Id: I3dac826c1f208569c5f40431f59a2045e5744415
* Delete/overwrite symlinks better (Tim Burke, 2019-08-01, 2 files, -7/+37)
  Previously, when deleting a symlink that points to an xLO, we'd clean up the xLO's segments then delete the symlink, leaving the xLO itself busted. Similar trouble would come from overwriting a symlink pointing to an xLO. Check for a Content-Location in the HEAD response and leave such segments.
  Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
  Change-Id: I45b210cf380a68bd88187c91fa2d63a8b2bb709b
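  A minimal sketch of the guard described above, assuming a plain dict of HEAD response headers; the helper name is illustrative, not the actual swiftclient internals:

    def should_delete_segments(head_resp_headers):
        # If the HEAD response carries a Content-Location, the response was
        # served through a symlink, so the segments belong to the link target
        # and must be left alone.
        return 'content-location' not in {k.lower() for k in head_resp_headers}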
* Merge "Support pdb in tests better" (Zuul, 2019-07-10, 1 file, -2/+14)
* Support pdb in tests better (Clay Gerrard, 2017-06-13, 1 file, -2/+14)
  Not really "better" so much as "at all". Capturing stderr *everywhere* is probably brilliant, but it's absolutely not strictly necessary for every MockHttpTest TestCase, and it comes with the annoying overhead that trying to get into a debugger causes tests to hang inexplicably, and you can't even do debug prints in tests. Now, if you add SWIFTCLIENT_DEBUG=1 to your nose -vsx command you can not only jump into the debugger, but if you're "in the know" you can even get some stderr print debugging going. If you're not "in the know" and you try to pdb.set_trace(), the tests will blow up for you, because we monkeypatch pdb when not in SWIFTCLIENT_DEBUG mode. You're welcome.
  Change-Id: I21298bfd39fe386b5ea19e3a6f4408d8a0459c92
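  A rough sketch of the monkeypatching idea described here (names and message assumed, not the actual test-suite code):

    import os
    import pdb

    if not os.environ.get('SWIFTCLIENT_DEBUG'):
        # With stderr captured, a real breakpoint would just hang the test
        # run, so blow up loudly instead.
        def _no_pdb(*args, **kwargs):
            raise RuntimeError('set SWIFTCLIENT_DEBUG=1 to use pdb in tests')
        pdb.set_trace = _no_pdb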
* Optionally display listings in raw json (Clay Gerrard, 2019-07-09, 2 files, -0/+59)
  Symlinks have recently added some new keys to container listings. It's very convenient to be able to see and reason about the extra information in container listings. Allowing raw json output is similar to what the client already does for the info command, and it's forward compatible with any listing enhancements added by future middleware development.
  Change-Id: I88fb38529342ac4e4198aeccd2f10c69c7396704
* Merge "Clean up warnings from newer flake8" (Zuul, 2019-06-28, 1 file, -2/+2)
* Clean up warnings from newer flake8 (Tim Burke, 2019-06-27, 1 file, -2/+2)
  Change-Id: I18a6327b3acdd4db5ae80097080c043f7c20c353
* Fix SLO re-upload (Tim Burke, 2019-06-27, 1 file, -2/+42)
  Previously, if you uploaded a file as an SLO then re-uploaded it with the same segment size and mtime, the second upload would go delete the segments it just (re)uploaded. This was due to us tracking old_slo_manifest_paths and new_slo_manifest_paths in different formats; one would have a leading slash while the other would not. Now, normalize to the stripped-slash version so we stop deleting segments we just uploaded.
  Change-Id: Ibcbed3df4febe81cdf13855656e2daaca8d521b4
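  The fix boils down to comparing manifest paths in one canonical form; a hedged illustration (helper name assumed):

    def normalize_manifest_path(path):
        # Track old and new SLO manifest paths without a leading slash so
        # re-uploads of the same object compare equal.
        return path.lstrip('/')

    assert (normalize_manifest_path('/c/o') ==
            normalize_manifest_path('c/o') == 'c/o')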
* Isolate docs requirements (Tim Burke, 2019-06-27, 1 file, -14/+28)
  ...since modern sphinx won't install on py27. While we're at it, clean up some warnings and treat warnings as errors. Also, fix up how we parse test configs so we can run func tests.
  Related-Change: Id3c2ed87230c5918c18e2c01d086df8157f036b1
  Change-Id: I3718f69610545b0dbcb0a2ab45b400da3a45682c
* Update hacking version (ZhijunWei, 2019-01-03, 1 file, -6/+6)
  1. Update hacking version to latest.
  2. Fix pep8 failures.
  Change-Id: Ifc3bfeff4038c93d8c8cf2c9d7814c3003e73504
* Add delimiter to get_account(). (Timur Alperovich, 2018-11-30, 2 files, -0/+24)
  Exposes the delimiter parameter, which the Swift API supports for container listings.
  Change-Id: Id8dfce01a9b64de9d1222aab9a4a682ce9e0f2b7
* Stop leaking quite so many connections (Tim Burke, 2018-11-09, 3 files, -3/+17)
  While investigating the failures when you move func tests to py3, I noticed a whole bunch of "ResourceWarning: unclosed <socket.socket ...>" noise. This should fix it. While we're at it, make get_capabilities less stupid.
  Change-Id: I3913e9334090b04a78143e0b70f621aad30fc642
  Related-Change: I86d24104033b490a35178fc504d88c1e4a566628
* Stop lazy importing keystoneclient (Tim Burke, 2018-09-07, 3 files, -59/+38)
  There were two basic problems:
  - We'd try to import on every attempt at getting auth, even when we already know keystoneclient isn't available.
  - Sometimes devs would hit some crazy import race involving (some combination of?) greenthreads and OS threads.
  So let's just try the imports *once*, at import time, and have None sentinels if it fails. Try both versions separately to decouple failures; this should let us support a wider range of keystoneclient versions.
  Change-Id: I2367310aac74f1b7c5ea0cb1a822a491e4ba8e68
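  A sketch of the import-once pattern this message describes, with None sentinels and each keystoneclient version tried separately (the module paths are the standard keystoneclient ones; the sentinel names are assumptions):

    try:
        from keystoneclient.v2_0 import client as ksclient_v2
    except ImportError:
        ksclient_v2 = None  # checked later instead of re-importing per auth attempt
    try:
        from keystoneclient.v3 import client as ksclient_v3
    except ImportError:
        ksclient_v3 = None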
* Merge "Back out some version bumps" (Zuul, 2018-07-24, 1 file, -9/+26)
* Back out some version bumps (Tim Burke, 2018-07-11, 1 file, -9/+26)
  I'm giving up on trying to back out all of the test-requirements up-revs, but let's try to stay compatible with old requests/six. As part of that, only disable some requests warnings on new-enough requests. Note that we should now be compatible with distro packages back to Ubuntu 16.04 and CentOS 6. Our six is still too new for Trusty, but hey, there's less than a year left on that anyway, right?
  Change-Id: Iccb23638393616f9ec3da660dd5e39ea4ea94220
  Related-Change: I2a8f465c8b08370517cbec857933b08fca94ca38
* Properly handle unicode headers. (Timur Alperovich, 2018-07-23, 1 file, -0/+61)
  Fix unicode handling in Python 3 and Python 2. There are currently two failure modes. In Python 2, swiftclient fails to log in debug mode if the account name has a non-ASCII character. This is because the account name will appear in the storage URL, which we attempt to pass to the logger as a byte string (whereas it should be a unicode string). This patch changes the behavior to convert the path strings into unicode by calling the parse_header_string() function. The second failure mode is with Python 3, where http_lib returns headers that are latin-1 encoded, but swiftclient expects UTF-8. The patch automatically converts headers from latin-1 (iso-8859-1) to UTF-8, so that we can properly handle non-ASCII headers in responses.
  Change-Id: Ifa7f3d5af71bde8127129f1f8603772d80d063c1
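  An illustrative transcoding helper in the spirit of this change (not the actual parse_header_string implementation):

    def reencode_header(value):
        # Python 3's http.client decodes header bytes as latin-1; if the
        # server actually sent UTF-8, transcode it back.
        if isinstance(value, bytes):  # Python 2 style byte strings
            try:
                return value.decode('utf-8')
            except UnicodeDecodeError:
                return value.decode('latin-1')
        try:
            return value.encode('latin-1').decode('utf-8')
        except (UnicodeEncodeError, UnicodeDecodeError):
            return value  # already sensible text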
* Merge "Stop mutating header dicts" (Zuul, 2018-07-17, 1 file, -2/+13)
* Stop mutating header dicts (Tim Burke, 2017-08-25, 1 file, -2/+13)
  Change-Id: Ia1638c216eff9db6fbe416bc0570c27cfdcfe730
* Add ability to generate a temporary URL with an IP range restriction (mmcardle, 2018-07-10, 2 files, -5/+73)
  Change-Id: I4734599886e4f4a563162390d0ff3bb1ef639db4
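  A hedged sketch of how such a URL might be signed; the exact HMAC body layout (an "ip=<range>" line prepended to the usual method/expires/path triple), the sha256 digest, and the temp_url_ip_range query parameter are assumptions here, not confirmed by this log:

    import hmac
    from hashlib import sha256
    from time import time

    def ip_restricted_temp_url(path, key, ip_range, seconds=3600, method='GET'):
        expires = int(time() + seconds)
        hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
        sig = hmac.new(key.encode(), hmac_body.encode(), sha256).hexdigest()
        return '%s?temp_url_sig=%s&temp_url_expires=%d&temp_url_ip_range=%s' % (
            path, sig, expires, ip_range)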
* Merge "Add option for user to enter password" (Zuul, 2018-06-30, 1 file, -3/+54)
* Add option for user to enter password (Alistair Coles, 2018-06-11, 1 file, -3/+54)
  Add the --prompt option for the CLI which will cause the user to be prompted to enter a password. Any password otherwise specified by --key, --os-password or an environment variable will be ignored. The swift client will exit with a warning if the password cannot be entered without its value being echoed.
  Closes-Bug: #1357562
  Change-Id: I513647eed460007617f129691069c6fb1bfe62d7
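  The "exit if the password would be echoed" behavior can be built on the standard library's getpass warning; a small sketch (function name assumed):

    import getpass
    import sys
    import warnings

    def prompt_for_password():
        with warnings.catch_warnings():
            # getpass warns (rather than failing) when it cannot suppress
            # echo; turn that warning into a hard error.
            warnings.filterwarnings('error', category=getpass.GetPassWarning)
            try:
                return getpass.getpass('Password: ')
            except getpass.GetPassWarning:
                sys.exit('Input would be echoed; refusing to prompt for a password')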
* Merge "Make OS_AUTH_URL work in DevStack by default" (Zuul, 2018-06-29, 1 file, -0/+47)
* Make OS_AUTH_URL work in DevStack by default (Clay Gerrard, 2018-06-20, 1 file, -0/+47)
  An earlier change added support for versionless authurls, but the heuristic to detect them didn't work for some configurations I've encountered. Now we use a little bit tighter pattern matching and support auth_url values with more than one path component.
  Change-Id: I5a99c7b4e957ee7c8a5b5470477db49ab2ddba4b
  Related-Change-Id: If7ecb67776cb77828f93ad8278cc5040015216b7
* Remove some pointless code (Tim Burke, 2018-06-22, 1 file, -6/+0)
  Change-Id: I3163834c330c5ea44c1096e83127588c88f0d761
* Add force auth retry mode in swiftclient (Kota Tsuyuzaki, 2018-03-13, 1 file, -0/+67)
  This patch adds an option to force a get_auth call while retrying an operation, even when the errors are something other than 401 Unauthorized.
  Why we need this: with certain third-party proxies/load balancers sitting between the client and the swift-proxy server, python-swiftclient requests can never succeed, and I think that is a fairly common deployment. Taking nginx as a concrete case, nginx may close the socket from the client when the response code from Swift is not in the 2xx series. By default nginx will wait a while (30s) for the client's remaining buffers [1], but once that time has passed it closes the socket immediately. Unfortunately, if python-swiftclient is still sending data into the socket, it gets a socket error (EPIPE, BrokenPipe). From the swiftclient perspective this is absolutely not an auth error, so the current python-swiftclient continues to retry without re-auth. However, if the root cause is really a 401 (i.e. nginx got 401 Unauthorized from the swift-proxy because of token expiration), swiftclient will loop 401 -> EPIPE -> 401... until it consumes the max retry count. In particular, a short token TTL combined with a multipart object upload with large segments can never succeed:
  Connection model: python-swiftclient -> nginx -> swift-proxy -> swift-backend
  Case: try to create an SLO with large segments while the auth token expires after 1 hour.
  1. The client creates a connection to nginx, with a successful response from swift-proxy and its auth.
  2. The client continues to PUT large segment objects (e.g. 1~5GB each, 20~30GB total, i.e. 20~30 segments).
  3. After some of the segments are uploaded, the hour passes, but the client is still trying to send the remaining segment objects.
  4. nginx gets a 401 from swift-proxy for a request and waits for the client to close the connection, but the timeout passes because python-swiftclient is still sending data into the socket before reading the 401 response.
  5. The client gets a socket error because nginx closed the connection while the buffer was being sent.
  6. The client retries with a new connection to nginx, without re-auth... <loop 4-6>
  7. Finally, python-swiftclient fails with a socket error (Broken Pipe).
  From an operational perspective, setting a longer lingering-close timeout would be an option, but it is not a complete solution because other proxies/LBs may not support such options. If we actually did THE RIGHT THING in python-swiftclient, we would send an Expect: 100-continue header and handle the first response to re-auth correctly. HOWEVER, the httplib and requests modules used by python-swiftclient don't support the 100-continue header [2], and the thread proposing a fix [3] is not very active. We also know that the reason we depend on that library was to fix a security issue in older python-swiftclient [4], so we should touch it very carefully.
  In reality, as a hot fix, this patch tries to mitigate the unfortunate situation described above WITHOUT the 100-continue fix: users can simply force a re-auth when any error occurs during the retries, which can be accepted upstream.
  1: http://nginx.org/en/docs/http/ngx_http_core_module.html#lingering_close
  2: https://github.com/requests/requests/issues/713
  3: https://bugs.python.org/issue1346874
  4: https://review.openstack.org/#/c/69187/
  Change-Id: I3470b56e3f9cf9cdb8c2fc2a94b2c551927a3440
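  A simplified retry loop illustrating the option's effect; the names (force_auth_retry, the http_status attribute) are assumptions for the sketch, not the exact swiftclient internals:

    def retry(func, conn, retries=5, force_auth_retry=False):
        for attempt in range(retries + 1):
            try:
                return func(conn)
            except Exception as err:
                unauthorized = getattr(err, 'http_status', None) == 401
                if unauthorized or force_auth_retry:
                    # Discard cached auth so the next attempt calls get_auth()
                    # again, even if the visible error was only a broken pipe.
                    conn.url = conn.token = None
                if attempt == retries:
                    raise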
* Add a query_string option to head_object(). (Timur Alperovich, 2018-03-05, 1 file, -2/+13)
  Submitting a path parameter with a HEAD request on an object can be useful if one is trying to find out information about an SLO/DLO without retrieving the manifest.
  Change-Id: I39efd098e72bd31de271ac51d4d75381929c9638
* Allow for object uploads > 5GB from stdin. (Timur Alperovich, 2018-01-18, 2 files, -0/+235)
  When uploading from standard input, swiftclient should turn the upload into an SLO in the case of large objects. This patch picks the threshold as 10MB (and uses that as the default segment size). The consumers can also supply the --segment-size option to alter that threshold and the SLO segment size. The patch does buffer one segment in memory (which is why the 10MB default was chosen). (Test is updated.)
  Change-Id: Ib13e0b687bc85930c29fe9f151cf96bc53b2e594
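  The buffering strategy amounts to reading stdin one segment at a time; a minimal sketch (constant and function names assumed):

    import sys

    DEFAULT_STDIN_SEGMENT = 10 * 1024 * 1024  # 10MB threshold and segment size

    def read_stdin_segments(segment_size=DEFAULT_STDIN_SEGMENT):
        # Hold at most one segment in memory; a single short chunk can go up
        # as a plain object, multiple chunks become SLO segments.
        while True:
            chunk = sys.stdin.buffer.read(segment_size)
            if not chunk:
                break
            yield chunk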
* Merge "Allow --meta on upload" (Zuul, 2017-12-08, 2 files, -28/+13)
* Allow --meta on upload (Tim Burke, 2017-07-06, 2 files, -28/+13)
  Previously, the --meta option was only allowed on post or copy subcommands.
  Change-Id: I87bf0338c34b5e89aa946505bee68dbeb37d784c
  Closes-Bug: #1616238
* Merge "Add support for versionless endpoints" (Jenkins, 2017-08-29, 1 file, -0/+20)
* Add support for versionless endpoints (Christian Schwede, 2017-06-13, 1 file, -0/+20)
  Newer deployments are using versionless Keystone endpoints, and most OpenStack clients already support this. This patch enables this for Swift: if an auth_url without any path component is found, it assumes a versionless endpoint will be used. In this case the v3 suffix will be appended to the path if no auth_version is set, and v2.0 is appended if auth_version requires v2.
  Closes-Bug: 1554885
  Related-Bug: 1691106
  Change-Id: If7ecb67776cb77828f93ad8278cc5040015216b7
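  A simplified version of the heuristic described here, assuming modern Python and treating any leading path component that looks like a version as already versioned:

    from urllib.parse import urlparse, urlunparse

    def normalize_auth_url(auth_url, auth_version=None):
        parsed = urlparse(auth_url)
        path = parsed.path.rstrip('/')
        has_version = any(p.startswith('v') for p in path.split('/') if p)
        if not has_version:
            # Versionless endpoint: default to v3, or v2.0 when asked for v2.
            suffix = '/v2.0' if str(auth_version) in ('2', '2.0') else '/v3'
            parsed = parsed._replace(path=path + suffix)
        return urlunparse(parsed)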
* Allow for uploads from standard input. (Timur Alperovich, 2017-07-26, 1 file, -1/+40)
  If "-" is passed in for the source, python-swiftclient will upload the object by reading the contents of the standard input. The object name option must be set as well, and this cannot be used in conjunction with other files. This approach stores the entire contents as one object. A follow-on patch will change this behavior to upload from standard input as an SLO, unless the segment size is larger than the content size.
  Change-Id: I1a8be6377de06f702e0f336a5a593408ed49be02
* Buffer reads from disk (Tim Burke, 2017-07-11, 2 files, -10/+10)
  Otherwise, Python defaults to 8k reads which seems kinda terrible.
  Change-Id: I3160626e947083af487fd1c3cb0aa6a62646527b
  Closes-Bug: #1671621
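  The change is essentially "read in bigger chunks"; a hedged example of the pattern (the chunk size here is a guess, not necessarily the value the patch picked):

    DISK_BUFFER = 65536  # 64KB instead of Python's 8KB default

    def iter_file_chunks(path, chunk_size=DISK_BUFFER):
        # Buffered reads cut per-read syscall overhead on large uploads.
        with open(path, 'rb', buffering=chunk_size) as fp:
            while True:
                chunk = fp.read(chunk_size)
                if not chunk:
                    return
                yield chunk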
* Option to ignore mtime metadata entry. (Christopher Bartz, 2017-07-06, 1 file, -0/+46)
  Currently, the swiftclient upload command passes a custom metadata header for each object (called object-meta-mtime), whose value is the current UNIX timestamp. When downloading such an object with the swiftclient, the mtime header is parsed and passed as the atime and mtime for the newly created file. There are use-cases where this is not desired, for example when using tmp or scratch directories in which files older than a specific date are deleted. This commit provides a boolean option for ignoring the mtime header.
  Change-Id: If60b389aa910c6f1969b999b5d3b6d0940375686
* Merge "Skip checksum validation on partial downloads" (Jenkins, 2017-06-22, 1 file, -0/+10)
* Skip checksum validation on partial downloads (Tim Burke, 2017-04-21, 1 file, -0/+10)
  If we get back some partial content, we can't validate the MD5. That's OK.
  Change-Id: Ic1d65272190af0d3d982f3cd06833cac5c791a1e
  Closes-Bug: 1642021
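  In other words, a 206 body can never match the whole-object ETag, so only full downloads are verified. A sketch assuming lower-cased header keys:

    from hashlib import md5

    def download_checksum_ok(status, headers, body):
        if status == 206 or 'content-range' in headers:
            return True  # partial content: nothing meaningful to verify
        etag = headers.get('etag', '').strip('"')
        return md5(body).hexdigest() == etag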
* Merge "Tolerate RFC-compliant ETags" (Jenkins, 2017-06-22, 1 file, -15/+23)
* Tolerate RFC-compliant ETags (Tim Burke, 2017-04-21, 1 file, -15/+23)
  Since time immemorial, Swift has returned unquoted ETags for plain-old Swift objects -- I hear tell that we once tried to change this, but quickly backed it out when some clients broke. However, some proxies (such as nginx) apparently may force the ETag to adhere to the RFC, which states [1]:
    An entity-tag consists of an opaque *quoted* string
  (emphasis mine). See the related bug for an instance of this happening. Since we can still get the original ETag easily, we should tolerate the more-compliant format.
  [1] https://tools.ietf.org/html/rfc2616.html#section-3.11 or, if you prefer the new ones, https://tools.ietf.org/html/rfc7232#section-2.3
  Change-Id: I7cfacab3f250a9443af4b67111ef8088d37d9171
  Closes-Bug: 1681529
  Related-Bug: 1678976
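  Tolerating both forms is just a matter of stripping the optional quotes before comparing; for example:

    def normalize_etag(etag):
        # Accept Swift's historical bare MD5 as well as RFC-style "quoted" ETags.
        if etag and etag.startswith('"') and etag.endswith('"'):
            return etag[1:-1]
        return etag

    assert (normalize_etag('"d41d8cd98f00b204e9800998ecf8427e"') ==
            normalize_etag('d41d8cd98f00b204e9800998ecf8427e'))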
* Merge "Stop sending X-Static-Large-Object headers" (Jenkins, 2017-06-14, 1 file, -1/+0)
* Stop sending X-Static-Large-Object headers (Tim Burke, 2017-04-10, 1 file, -1/+0)
  If we were to include this in a normal PUT, it would 400, but only if slo is actually in the pipeline. If it's *not*, we'll create a normal Swift object and the header sticks.
  - This is really confusing for users; see the related bug.
  - If slo is later enabled in the cluster, Swift starts responding 500 with a KeyError because the client and on-disk formats don't match!
  Change-Id: I1d80c76af02f2ca847123349224ddc36d2a6996b
  Related-Change: I986c1656658f874172860469624118cc63bff9bc
  Related-Bug: #1680083
* Merge "Do not set Content-Type to '' with new requests." (Jenkins, 2017-06-13, 1 file, -2/+17)
* Do not set Content-Type to '' with new requests. (Timur Alperovich, 2017-06-13, 1 file, -2/+17)
  Previously, python-swiftclient worked around a requests issue where Content-Type could be set to application/x-www-form-urlencoded when using python3. This issue has been resolved and a fix released in requests 2.4 (fixed in subsequent releases as well). The patch makes the workaround conditional on the requests version, so that with sufficiently new requests libraries, the Content-Type is not set. For reference, requests 2.4 was released August 29th, 2014. The specific issue filed in the requests tracker is: https://github.com/requests/requests/issues/2071.
  Related-Change: I035f8b4b9c9ccdc79820b907770a48f86d0343b4
  Closes-Bug: #1433767
  Change-Id: Ieb2243d2ff5326920a27ce8c3c6f0f5c396701ed
* Merge "Fix MockHttpResponse to be more like the Real" (Jenkins, 2017-06-12, 1 file, -44/+31)
* Fix MockHttpResponse to be more like the Real (Clay Gerrard, 2017-03-08, 1 file, -44/+31)
  This change pulls out that relatively new [1] little string-to-pull-at in the MockHttpResponse that I think is sorta ugly, and replaces it with the correct behavior that's representative of the Real for which it's standing in (which is sadly our wrapper to make a requests response feel like a httplib.HTTPResponse). It's not clear (to me) what history allowed this difference in behavior between the Real and the Fake to persist - it seems to have always been this way [2]. I also reworded a relatively new test [1] to cover more code and make assertions on the desired behavior of the client instead of "just" the http_log method. FWIW, I don't think there was necessarily anything wrong with the scope of the new test [1] - and it certainly makes sense to see new tests copy nearby existing tests. But I subjectively think this smaller test is more demonstrative of the desired behavior.
  1. Related-Change-Id: I6d7ccbf4ef9b46e890ecec58842c5cdd2804c7a9
  2. Related-Change-Id: If07af46cb377f3f3d70f6c4284037241d360a8b7
  Change-Id: Ib99a029c1bd1ea1efa8060fe8a11cb01deea41c6
* Merge "ISO 8601 timestamps for tempurl" (Jenkins, 2017-05-18, 2 files, -16/+151)
* ISO 8601 timestamps for tempurl (Christopher Bartz, 2017-03-29, 2 files, -16/+151)
  Client-side implementation for ISO 8601 timestamp support of the tempurl middleware. Please see https://review.openstack.org/#/c/422679/
  Change-Id: I76da28b48948475ec1bae5258e0b39a316553fb7
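  A hedged sketch of accepting either a relative number of seconds or an ISO 8601 timestamp for the expiry; the accepted formats and the function name are assumptions, not the exact swiftclient behavior:

    import calendar
    import time

    ISO_FORMATS = ('%Y-%m-%dT%H:%M:%SZ', '%Y-%m-%dT%H:%M:%S', '%Y-%m-%d')

    def expires_to_epoch(value):
        try:
            return int(time.time()) + int(value)   # plain number: relative seconds
        except ValueError:
            pass
        for fmt in ISO_FORMATS:
            try:
                return calendar.timegm(time.strptime(value, fmt))  # absolute UTC time
            except ValueError:
                continue
        raise ValueError('unrecognized expiry: %r' % (value,))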
* respect bulk delete page size and fix logic error (John Dickinson, 2017-04-20, 2 files, -20/+126)
  Previously, using SwiftService to delete "many" objects would use bulk delete if available, but it would not respect the bulk delete page size. If the number of objects to delete exceeded the bulk delete page size, SwiftService would ignore the error and nothing would be deleted.
  This patch changes _should_bulk_delete() to be _bulk_delete_page_size(); instead of returning a simple True/False, it returns the page size for the bulk deleter, or 1 if objects should be deleted one at a time. Delete SDK calls are then spread across multiple bulk DELETEs if the requested number of objects to delete exceeds the returned page size.
  Fixed the logic in _should_bulk_delete() so that if the object list is exactly 2x the thread count, it will not bulk delete. This is the natural conclusion following the logic that existed previously: if the delete request can be satisfied by every worker thread doing one or two tasks, don't bulk delete. But if it requires a worker thread to do three or more tasks, do a bulk delete instead. Previously, the logic would mean that if every worker thread did exactly two tasks, it would bulk delete. This patch changes a "<" to a "<=".
  Closes-Bug: 1679851
  Change-Id: I3c18f89bac1170dc62187114ef06dbe721afcc2e
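  The resulting decision logic, roughly (names and defaults assumed for illustration):

    def bulk_delete_page_size(objects, threads=10, server_page_size=10000,
                              bulk_delete_supported=True):
        # Delete one at a time when every worker thread would have at most
        # two tasks; otherwise spread the work across bulk DELETE pages.
        if not bulk_delete_supported or len(objects) <= 2 * threads:
            return 1
        return server_page_size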
* Close file handle after upload job (Kazufumi Noto, 2017-03-16, 1 file, -47/+72)
  The file opened for upload was not closed. This fix prevents a possible file handle leak.
  Closes-Bug: #1559079
  Change-Id: Ibc58667789e8f54c74ae2bbd32717a45f7b30550