| Commit message (Collapse) | Author | Age | Files | Lines |
This reverts commit 6b5cd92675139511b4b24ddfe822e96b03700edb.
The shim has been deprecated since 2019; the proper place to import
utils functions is directly from numpy.testing.
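The direct import the message refers to looks like this (a minimal sketch; `assert_array_equal` is one of the utilities involved):

```python
# Import testing utilities directly from numpy.testing,
# not from the deprecated numpy.testing.utils shim.
from numpy.testing import assert_array_equal

assert_array_equal([1, 2, 3], [1, 2, 3])  # passes silently
```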
ENH: Move identity to the ArrayMethod to allow customization
It should only be used by the legacy method, so also reflect that
in the field name.
Co-authored-by: Matti Picus <matti.picus@gmail.com>
As requested by Matti in review.
It's a new version (it changed ;)), plus I took the liberty to move
things around a bit ABI-wise.
Need to look into whether to cut out the dynamic discovery of
reorderability though.
Also add it to the wrapped array-method (ufunc) implementation
so that a Unit dtype can reasonably use an identity for reductions.
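The "identity" here is the value an empty reduction falls back to; a quick illustration with the built-in ufuncs (plain NumPy, no custom dtype assumed):

```python
import numpy as np

# An empty reduction returns the ufunc's identity value.
empty = np.array([], dtype=np.float64)
print(np.add.reduce(empty))       # identity of addition: 0.0
print(np.multiply.reduce(empty))  # identity of multiplication: 1.0
```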
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This also fixes/changes that the identity value is defined by the
reduce dtype and not by the result dtype. That does not make a
difference in any sane use-case, but there is a theoretical change:
    arr = np.array([])
    out = np.array(None)  # object array
    np.add.reduce(arr, out=out, dtype=np.float64)
Where the output result was previously an integer 0, due to the
output being of `object` dtype, but now it is the correct float
due to the _operation_ being done in `float64`, so that the output
should be `np.zeros((), dtype=np.float64).astype(object)`.
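The snippet above can be run directly (the exact value stored in `out` depends on the NumPy version, per the change described):

```python
import numpy as np

arr = np.array([])
out = np.array(None)  # 0-d object array
np.add.reduce(arr, out=out, dtype=np.float64)
# With this change the stored identity comes from the float64
# operation dtype (0.0), not from the object output dtype.
print(out.item())
```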
[ci skip]
API: Allow SciPy to get away with assuming `trapz` is a Python function
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This wraps `trapz` into a proper python function and then copies all
attributes expected on a Python function over from the "fake" version
to the real one.
This allows SciPy to pretend `trapz` is a Python function to create
their own version.
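The general technique can be sketched with `functools.wraps` (a hypothetical helper, not NumPy's actual implementation):

```python
import functools
import inspect

def as_python_function(func):
    """Wrap an arbitrary callable in a genuine Python function and
    copy over the attributes (__name__, __doc__, ...) tools expect."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

# `len` is a C builtin; the wrapper is a real Python function.
wrapped = as_python_function(len)
print(inspect.isfunction(len), inspect.isfunction(wrapped))  # False True
```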
pyinstaller should pack it to allow running the tests, but doesn't
pack the tests themselves and thus doesn't find the `import` statements
that use `numpy.core._multiarray_tests`.
This makes sure that pyinstaller will ship `numpy.core._multiarray_tests`
in any case.
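A PyInstaller hook that force-includes a module looks roughly like this (a sketch of the mechanism; the actual hook file in the repo may differ):

```python
# hook-numpy.py (PyInstaller hook sketch)
# Force-include the test helper extension module, since PyInstaller's
# import analysis never sees the imports inside the unpacked tests.
hiddenimports = ["numpy.core._multiarray_tests"]
```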
MAINT: Remove all nose testing support.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
NumPy switched to using pytest in 2018 and nose has been unmaintained
for many years. We have kept NumPy's nose support to avoid breaking
downstream projects who might have been using it and not yet switched to
pytest or some other testing framework. With the arrival of Python 3.12,
unpatched nose will raise an error. It is time to move on.
Decorators removed
- raises
- slow
- setastest
- skipif
- knownfailif
- deprecated
- parametrize
- _needs_refcount
These are not to be confused with pytest versions with similar names,
e.g., pytest.mark.slow, pytest.mark.skipif, pytest.mark.parametrize.
Functions removed
- Tester
- import_nose
- run_module_suite
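For downstream projects migrating off the removed decorators, the pytest counterparts mentioned above look like this (a sketch; the marker names are standard pytest):

```python
import pytest

@pytest.mark.slow          # replaces the nose-based `slow` decorator
def test_heavy():
    assert sum(range(10)) == 45

@pytest.mark.skipif(True, reason="example")  # replaces `skipif`
def test_skipped():
    assert False  # never runs; the marker skips it

@pytest.mark.parametrize("x", [1, 2, 3])     # replaces `parametrize`
def test_param(x):
    assert x > 0
```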
|
| | |
|
| |
| |
| |
| |
| | |
Deprecate `np.finfo(None)`. It may be that we should more generally deprecate `np.dtype(None)`, but this is a start, and this case is particularly odd.
Closes gh-14684
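The supported usage passes a concrete floating dtype (standard NumPy API):

```python
import numpy as np

# Pass an explicit floating dtype; np.finfo(None) is deprecated.
info = np.finfo(np.float64)
print(info.eps, info.max)
```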
BUG: Implement `ArrayFunctionDispatcher.__get__`
While functions should not normally need this, Python functions do
provide it (C functions do not, but we are a fatter object anyway).
By implementing `__get__` we also ensure that `inspect.isroutine()`
passes, and by that we ensure that Sphinx considers these a
`py:function:` role.
Closes gh-23032
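The Python-level analogue of the fix is implementing the descriptor protocol on a callable object so it binds like a plain function (a sketch with hypothetical names, not the C implementation):

```python
import types

class Dispatcher:
    """Callable wrapper that supports __get__, so it can be stored
    on a class and still bind `self` like a normal Python function."""
    def __init__(self, func):
        self._func = func

    def __call__(self, *args, **kwargs):
        return self._func(*args, **kwargs)

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self                     # accessed on the class itself
        return types.MethodType(self, obj)  # bound to the instance

class Point:
    scaled = Dispatcher(lambda self, k: self.x * k)
    def __init__(self, x):
        self.x = x

print(Point(3).scaled(2))  # → 6
```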
Port the relevant core `diff` code to `np.ma`, adapt the docstring examples, and add tests.
Closes gh-22465
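With the port in place, `np.ma.diff` respects the mask (standard `numpy.ma` API):

```python
import numpy as np

a = np.ma.array([1, 2, 4, 7, 0], mask=[0, 0, 0, 0, 1])
d = np.ma.diff(a)
print(d)  # differences; entries touching masked values stay masked
```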
DEP: Finalize the non-sequence stacking deprecation
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The `__array_function__` API currently will exhaust iterators so we
cannot accept sequences reasonably. Checking for `__getitem__` is presumably
enough to reject that (and was what the deprecation used).
Future changes could allow this again, although it is not a useful API
anyway, since we have to materialize the iterable in any case.
API: Fix cython exception handling for exported extern C functions
|
| | |
| | |
| | |
| | |
| | |
| | | |
The incref/decref functions shouldn't be able to fail (especially the
decref). But right now they can; this will be fixed when we redo
clearing (see gh-22924).
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This hopefully fixes them for all the functions currently in the `.pyd`
files. A surprising number of them look like scary things I wouldn't
mind just deleting :).
Anyway, I had looked at some Cython code today and remembered this.
Closes gh-19291
ENH: Improve array function overhead by using vectorcall
This makes these functions much faster when used with keyword arguments
now that the array-function dispatching does not rely on `*args, **kwargs`
anymore.
The refactor to use vectorcall/fastcall is obviously much better
if we don't have to go back and forth; for concatenate we get:
    arr = np.random.random(20)
    %timeit np.concatenate((arr, arr), axis=0)
going from ~1.2µs to just below 1µs and all the way down to ~850ns
(it fluctuates quite a lot, down to 822 even). ~40% speedup in total,
which is not too shabby.
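The benchmark quoted above can be reproduced outside IPython with `timeit` (absolute numbers will vary by machine and NumPy version):

```python
import timeit
import numpy as np

arr = np.random.random(20)
n = 100_000
t = timeit.timeit(lambda: np.concatenate((arr, arr), axis=0), number=n)
print(f"{t / n * 1e9:.0f} ns per concatenate call")
```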
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This moves dispatching for `__array_function__` into a C-wrapper. This
helps speed for multiple reasons:
* Avoids one additional dispatching function call to C
* Avoids the use of `*args, **kwargs` which is slower.
* For simple NumPy calls we can stay in the faster "vectorcall" world
This speeds up things generally a little, but can speed things up a lot
when keyword arguments are used on lightweight functions, for example::
np.can_cast(arr, dtype, casting="same_kind")
is more than twice as fast with this.
There is one alternative in principle to get best speed: We could inline
the "relevant argument"/dispatcher extraction. That changes behavior in
an acceptable but larger way (passes default arguments).
Unless the C-entry point seems unwanted, this should be a decent step
in the right direction even if we want to do that eventually, though.
Closes gh-20790
Closes gh-18547 (although not quite sure why)
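The keyword-argument call cited above looks like this (standard NumPy API; dtypes are passed rather than arrays, since array arguments to `can_cast` are deprecated in recent NumPy):

```python
import numpy as np

# A lightweight function called with a keyword argument: exactly the
# pattern the vectorcall dispatch path speeds up.
print(np.can_cast(np.int64, np.float64, casting="same_kind"))  # → True
```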
WG14 N2350 made it very clear that having type definitions within
`offsetof` is undefined behavior [1]. This patch changes the
implementation of the `_ALIGN` macro to use the builtin `_Alignof`
to avoid undefined behavior when using `-std=c11` or newer.
clang 16+ has started to flag this [2].
Fixes the build when using `-std >= gnu11` with clang 16+.
Older compilers (gcc < 4.9 or clang < 8) have a buggy `_Alignof` even
though they may support C11, so exclude those compilers too.
[1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2350.htm
[2] https://reviews.llvm.org/D133574
Signed-off-by: Khem Raj <raj.khem@gmail.com>
* Apply suggestions from code review
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Co-authored-by: Sebastian Berg <sebastian@sipsolutions.net>
* DOC: #22266 Add examples for tri[lu]_indices_from()
* DOC: see also for tri[lu]_indices_from()
* DOC: Fix triu_indices_from example and minor updates.
* incides -> indices
* Update wording surrounding .
Co-authored-by: Ross Barnowski <rossbar@berkeley.edu>
BLD: Try building wheels with cibuildwheel 2.12.0
Also clean up some unneeded/commented-out stuff.