| Commit message | Author | Age | Files | Lines |
Convert several methods to the vectorcall convention. The conversions give a performance improvement, see #20790 (comment)
Some notes:
* For vdot, METH_KEYWORDS was removed, as the C vdot method was positional-only.
* add_docstring is converted with an additional check. It was previously parsed via `PyArg_ParseTuple(args, "OO!:add_docstring", &obj, &PyUnicode_Type, &str)`, but `npy_parse_arguments` has no support for the `!` format specifier.
* CI complained about coverage of _get_ndarray_c_version. A test was added, but only to provide coverage.
* In function_base.py, a redundant check in `def place` was removed.
Co-authored-by: Sebastian Berg <sebastian@sipsolutions.net>
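The extra check can be illustrated in Python terms (a hypothetical sketch, not the actual C code): `PyArg_ParseTuple`'s `O!` format enforced `PyUnicode_Type`, so the converted parser has to verify the type itself:

```python
def add_docstring(obj, docstring):
    # Hypothetical Python equivalent of the added C check: the old
    # "O!" format guaranteed that `docstring` is a unicode object, so
    # the npy_parse_arguments version re-checks the type explicitly.
    if not isinstance(docstring, str):
        raise TypeError("argument docstring of add_docstring should be a str")
    obj.__doc__ = docstring  # stand-in for the real docstring attachment
```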
This wraps `trapz` into a proper Python function and then copies all
attributes expected on a Python function over from the "fake" version
to the real one.
This allows SciPy to treat `trapz` as a Python function when creating
its own version.
DEP: Finalize the non-sequence stacking deprecation
The `__array_function__` API currently exhausts iterators, so we cannot
reasonably accept arbitrary iterables. Checking for `__getitem__` is
presumably enough to reject those (and is what the deprecation used).
Future changes could allow this again, although it is not a useful API
anyway, since we have to materialize the iterable in any case.
This moves dispatching for `__array_function__` into a C-wrapper. This
helps speed for multiple reasons:
* Avoids one additional dispatching function call to C
* Avoids the use of `*args, **kwargs` which is slower.
* For simple NumPy calls we can stay in the faster "vectorcall" world
This speeds up things generally a little, but can speed things up a lot
when keyword arguments are used on lightweight functions, for example::
np.can_cast(arr, dtype, casting="same_kind")
is more than twice as fast with this.
There is one alternative in principle to get the best speed: we could
inline the "relevant argument"/dispatcher extraction. That changes
behavior in an acceptable but larger way (it passes default arguments).
Unless the C entry point turns out to be unwanted, this should be a
decent step in the right direction even if we eventually do that as well.
Closes gh-20790
Closes gh-18547 (although not quite sure why)
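The `*args, **kwargs` cost can be illustrated with a toy pure-Python benchmark (an illustration only, not NumPy's actual machinery): a forwarding wrapper repacks arguments on every call, which is the kind of overhead the C-level vectorcall path avoids:

```python
import timeit

def can_cast(from_, to, casting="safe"):
    # Stand-in for the fast C function being dispatched to.
    return True

def dispatch_wrapper(*args, **kwargs):
    # Old-style dispatch: arguments are packed into a tuple and a dict,
    # then unpacked again, on every single call.
    return can_cast(*args, **kwargs)

n = 100_000
direct = timeit.timeit(lambda: can_cast(1, 2, casting="same_kind"), number=n)
wrapped = timeit.timeit(lambda: dispatch_wrapper(1, 2, casting="same_kind"), number=n)
# `wrapped` is typically measurably slower than `direct`.
```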
* DOC: #22266 Add examples for tri[lu]_indices_from()
* DOC: see also for tri[lu]_indices_from()
* DOC: Fix triu_indices_from example and minor updates.
* incides -> indices
* Update wording surrounding .
Co-authored-by: Ross Barnowski <rossbar@berkeley.edu>
This pull request speeds up numpy.load. Since _filter_header is quite a bottleneck, we only run it if we must. Users will get a warning if they have a legacy NumPy file so that they can re-save it for faster loading.
Main discussion and benchmarks see #22898
Co-authored-by: Sebastian Berg <sebastian@sipsolutions.net>
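The fast-path idea can be sketched in pure Python (hypothetical helper names; NumPy's real code lives in `numpy.lib.format`): try to parse the header directly, and only run the expensive legacy filtering when that fails:

```python
import ast
import warnings

def _filter_header(header):
    # Stand-in for the expensive legacy cleanup: Python 2 wrote long
    # integers with an "L" suffix that Python 3 cannot parse.
    return header.replace("L", "")

def read_header(header):
    try:
        # Fast path: headers written by modern NumPy parse directly.
        return ast.literal_eval(header)
    except SyntaxError:
        # Slow path: legacy file; warn so users can re-save it.
        warnings.warn("legacy-format header detected; re-save the file "
                      "for faster loading")
        return ast.literal_eval(_filter_header(header))
```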
The number "changed" is weird if the user fixed it, so give a different
message in that case.
whitespace (#22906)
Fix issue with `delimiter=None` and quote character not working properly (not using whitespace delimiter mode).
Closes gh-22899
* DOC: Add a note to the documentation of rot90
The note indicates that rotation is counterclockwise with the default arguments.
Co-authored-by: Ross Barnowski <rossbar@berkeley.edu>
* TST: Mixed integer types for in1d
* BUG: Fix mixed dtype overflows for in1d (#22877)
* BUG: Type conversion for integer overflow check
* MAINT: Fix linting issues in in1d
* MAINT: ar1 overflow check only for non-empty array
* MAINT: Expand bounds of overflow check
* TST: Fix integer overflow in mixed boolean test
* TST: Include test for overflow on mixed dtypes
* MAINT: Less conservative overflow checks
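The overflow being guarded against is easy to demonstrate (an illustration of the failure mode, not the actual fix):

```python
import numpy as np

# in1d's fast path builds a lookup table sized by ar2.max() - ar2.min().
# With small integer dtypes that subtraction can wrap around:
ar2 = np.array([-128, 127], dtype=np.int8)

with np.errstate(over="ignore"):
    wrapped = ar2[1] - ar2[0]        # int8 arithmetic: 255 wraps to -1

safe = int(ar2[1]) - int(ar2[0])     # Python ints are exact: 255
```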
TST: Ignore nan-warnings in randomized nanfunction `out=` tests
The tests randomize the nan pattern and thus can run into these
(additional) warnings, so ignore them.
(Could also fix the random seed, but this should do)
Closes gh-22835
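A sketch of the suppression pattern (illustrative, using `warnings` directly rather than pytest's machinery):

```python
import warnings
import numpy as np

arr = np.full((2, 3), np.nan)   # every column is an all-NaN slice
out = np.empty(3)

with warnings.catch_warnings():
    # Randomized nan patterns can produce all-NaN slices, which emit
    # "All-NaN slice encountered" RuntimeWarnings; ignore them here.
    warnings.simplefilter("ignore", RuntimeWarning)
    np.nanmax(arr, axis=0, out=out)
```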
If a row ends in a delimiter, `add_fields` can be called twice without
any field actually being parsed. This causes issues with the field
buffer setup.
closes gh-22833
DOC: Fix legend placement in `numpy.percentile()` docs
A plot is meant to demonstrate the different methods of estimating the
percentile that `numpy.percentile()` supports, but previously the legend
covered a large fraction of it. Now the legend is drawn next to the
plot.
There should be more tests for this, but this now passes.
The examples in the trapz documentation go straight from integrating random arrays to parametric curves. It's worth pointing out that one can integrate something they'd see in Calculus 1 and get the answer they'd expect.
Also add some more guidance text to the existing examples (and style fixes).
Co-authored-by: Sebastian Berg <sebastian@sipsolutions.net>
Co-authored-by: Melissa Weber Mendonça <melissawm@gmail.com>
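For example, integrating y = x² over [0, 1] gives the familiar 1/3 (using whichever name the installed NumPy provides, since `trapz` was renamed `trapezoid` in NumPy 2.0):

```python
import numpy as np

# np.trapz became np.trapezoid in NumPy 2.0; use whichever exists.
trapezoid = getattr(np, "trapezoid", None) or getattr(np, "trapz")

x = np.linspace(0, 1, 1001)
result = trapezoid(x ** 2, x)   # the exact integral of x^2 on [0, 1] is 1/3
```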
Since percentile is more or less identical to quantile, I also made it
throw an error if it receives a complex input. I also made nanquantile
and nanpercentile throw errors as well.
* Made the changes recommended by seberg
* Fixed a test for PR 22703
* Fixed tests for quantile
* Shortened some more lines
* Fixup more lines
Co-authored-by: Sebastian Berg <sebastianb@nvidia.com>
np.pad with mode="wrap" returns an unexpected result: the original data is not strictly looped in the padding. This may happen on some occasions when padding widths in the same dimension are unbalanced (see the added test case in test_arraypad.py and the related issue). The reason is that pad makes iterative calls to _set_wrap_both() in this situation, yet the period for padding is not correctly computed in each iteration.
The bug is fixed by guaranteeing that the period is always a multiple of the original data size, and also the possible maximum, for computational efficiency.
Closes #22464
Co-authored-by: Lars Grüter <lagru+github@mailbox.org>
`numpy.percentile()`, and `numpy.quantile()`.
Closes #22714, #22544.
MAINT: Move set_module from numpy.core to numpy._utils
* BUG: Histogramdd breaks on big arrays in Windows
Resolved the issue by changing int to np.intp in numpy/numpy/lib/histograms.py
* Removed the binary files
* Update test_histograms.py
* BUILD: update OpenBLAS to 0.3.21 and clean up openblas download test
* set LDFLAGS on windows64 like the openblaslib build does
* use rtools compilers on windows when building wheels
* fix typos
* add rtools gfortran to PATH
* use the openblas dll from the zip archive without rewrapping
* typos
* copy dll import library for 64-bit interfaces
* revert many of the changes to azure-steps-windows.yaml, copy openblas better in wheels
* fix wildcard copy
* test OpenBLAS build worked with threadpoolctl
* typos
* install threadpoolctl where needed, use for loop to recursively copy
* update macos OpenBLAS suffixes for newer gfortran hashes
* use libgfortran5.dylib on macos
* fix scripts
* re-use gfortran install from MacPython/gfortran-install on macos
* use pre-release version of delocate
* fixes for wheel builds/tests
* add debugging cruft for pypy+win, macos wheels
* add DYLD_LIBRARY_PATH on macosx-x86_64
* use 32-bit openblas interfaces for ppc64le tests
* skip large_archive test that sometimes segfaults on PyPy+windows
Some parameters like pad_width or stat_length claimed to expect tuples-of-tuples as
input, but in practice they also work with single tuples. The parameter descriptions of the
relevant parameters are updated in the docstring to reflect this implicit tuple wrapping
behavior.
Co-authored-by: 渡邉 美希 <miki.watanabe@watanabenoMacBook-Pro.local>
ENH, CI: Add Emscripten to CI
Xref https://github.com/numpy/numpy/pull/22456
Xref https://github.com/numpy/numpy/pull/21468
DOC: How to partition domains
Also add links to this document from the functions' docstrings.
That is, once the NEP 50 transition happens
Rather, use `result_type` instead. There are some exceedingly small
theoretical changes, since `result_type` currently uses value-inspection
logic.
`find_common_type` did not, because it pre-dates the value inspection
logic.
(I.e., in theory this switches to value-based promotion, only for NEP 50
to partially undo that again; more changes are coming there.)
The only place where it is fathomable to matter is if someone is using
`np.c_[uint8_arr, -1]` to append 255 to an unsigned integer array.
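A sketch of the replacement (using the public `np.result_type` API):

```python
import numpy as np

# result_type computes a common dtype across dtypes, scalars and arrays,
# and replaces the removed find_common_type for this purpose.
common = np.result_type(np.uint8, np.float32)
```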
Not new things, but in touched lines...
In some cases the replacement is clearly not what is intended; in those
(where setup was called explicitly), I mostly renamed `setup` to
`_setup`.
The `test_ccompile_opt` case is a bit confusing, so it is left as-is for
now (this will probably fail).
Add date to deprecation in comment
Co-authored-by: Matti Picus <matti.picus@gmail.com>
Before:
>>> field = unstructured_to_structured(np.zeros((20, 2)), dtype=[('x', float), ('y', float)]) # failed
>>> field = unstructured_to_structured(np.zeros((20, 2)), dtype=np.dtype([('x', float), ('y', float)])) # success
After:
>>> field = unstructured_to_structured(np.zeros((20, 2)), dtype=[('x', float), ('y', float)]) # success
>>> field = unstructured_to_structured(np.zeros((20, 2)), dtype=np.dtype([('x', float), ('y', float)])) # success
Closes gh-22428
MAINT: Ensure graceful handling of large header sizes