| |
Signed-off-by: Matti Picus <matti.picus@gmail.com>
|
| |
Forcing the output dtype does not work here, since it can actually
be integral (not that this is usually a good idea).
In practice, the change we are making here forces 64-bit (or
32-bit, depending on the platform) precision for the calculation.
This means that the change can only ever mildly increase precision
compared to the current situation.
|
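A small sketch of the effect described above: for float16 input, np.mean computes its intermediate at higher precision and casts back, while the `dtype` argument can force a wider result explicitly.

```python
import numpy as np

a = np.full(10000, 0.1, dtype=np.float16)

# The float16 mean is computed with a wider intermediate, then cast back,
# so it does not drift the way a naive float16 accumulation would.
m16 = a.mean()
m64 = a.mean(dtype=np.float64)   # force a float64 calculation and result

print(m16.dtype, m64.dtype)      # float16 float64
```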
| |
Small micro-optimization: using Python integers is generally a bit faster in the loop.
The effect is small, so this could be undone if anyone decides the other way is cleaner.
|
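A minimal illustration of the pattern (the names here are hypothetical, not the ones in the commit): converting a NumPy integer scalar to a Python int once, before a hot loop, avoids repeated NumPy-scalar overhead inside the loop body.

```python
import numpy as np

n = np.intp(1000)   # a NumPy integer scalar
n_py = int(n)       # convert once; Python ints are cheaper in tight loops

total = 0
for i in range(n_py):
    total += i
print(total)  # 499500
```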
| |
The array creation functions have the most to gain:
1. np.asarray is 4 times faster and commonly used.
2. Other functions are wrapped using __array_function__ in Python,
making them more difficult to move.
This commit (unfortunately) has to do a few things:
* Modify __array_function__ C-side dispatching to accommodate
the fastcall argument convention.
* Move asarray, etc. to C after removing all "fast paths" from
np.array (simplifying the code)
* Fix up imports, since asarray was imported directly in a few places
* Replace some places where `np.array` was probably used for speed
instead of np.asarray or similar (or by accident in one or two places)
|
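As a reminder of why np.asarray sits on the hot path: it passes existing ndarrays through without copying, while np.array copies by default. A quick sketch:

```python
import numpy as np

lst = [1, 2, 3]
a = np.asarray(lst)    # converts the list to an ndarray
b = np.asarray(a)      # already an ndarray: returned as-is, no copy
c = np.array(a)        # np.array copies by default

print(b is a)  # True
print(c is a)  # False
```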
| |
* Fixed keyword bug
* Added test case
* Reverted to original notation
* Added tests for var and std
Closes gh-18552
|
| |
This removes a 20%-30% overhead, and thus the largest chunk of the
slowdown incurred by adding the `where` argument. Most other places
have fast paths for `where=True`; this one should have one as well.
The additional argument does slow down the function versions a bit
more than this, but that is probably to be expected (they have to
build a new argument dict; at some point we might want to move this
to C, but that seems much more worthwhile with FASTCALL logic).
|
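The fast-path idea can be sketched in Python (count_items here is a hypothetical stand-in for _count_reduce_items, not NumPy's actual code):

```python
import numpy as np

def count_items(arr, where=True):
    # Fast path: `where is True` means no mask, so the count is just the size.
    if where is True:
        return arr.size
    # Slow path: broadcast the mask against the array and count True entries.
    return int(np.broadcast_to(where, arr.shape).sum())

a = np.ones((3, 4))
print(count_items(a))                                    # 12
print(count_items(a, where=[True, False, True, False]))  # 6
```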
| |
Harmonize the signatures of np.mean, np.var, np.std, np.any, np.all,
and their respective ndarray methods with np.sum by adding a where
argument, see gh-15818.
|
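With the harmonized signatures, `where` works uniformly across these reductions, for example:

```python
import numpy as np

a = np.array([1.0, -2.0, 3.0])
mask = [True, False, True]

print(np.all(a > 0, where=mask))  # True: the masked-out negative is ignored
print(np.var(a, where=mask))      # 1.0: variance of [1.0, 3.0]
print(np.mean(a, where=mask))     # 2.0
```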
| |
Catch IndexError in _count_reduce_items used in np.mean and np.var for
an illegal axis and reraise it as AxisError, see gh-15817.
|
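The user-visible effect: an out-of-range axis now raises AxisError. (The import fallback below is an assumption to cover both old and new NumPy namespaces, where AxisError moved to numpy.exceptions.)

```python
import numpy as np
try:
    from numpy.exceptions import AxisError   # NumPy >= 1.25
except ImportError:                          # older NumPy
    from numpy import AxisError

a = np.ones((2, 3))
try:
    np.mean(a, axis=2)                       # only axes 0 and 1 exist
except AxisError as exc:
    print("caught AxisError:", exc)
```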
| |
Removes unnecessary code introduced in #15696.
Non-native byte orders were explicitly added to the fast-path
check in _var for complex numbers. However, the non-native
path is unreachable due to coercion in upstream ufuncs.
|
| |
var currently has a conditional that results in conjugate being
called when computing the variance of complex inputs.
That leg of the computation is slow; this PR avoids it
for complex inputs via a type check.
Closes #15684
|
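A sketch of the equivalence being exploited: for complex data the variance is the mean of |d|², which can be computed from the real and imaginary parts directly instead of via a conjugate multiply.

```python
import numpy as np

z = np.array([1 + 2j, 3 - 1j])
d = z - z.mean()

# Variance of complex data is the mean of |d|^2; computing it as
# d.real**2 + d.imag**2 sidesteps the slower d * d.conj() multiply.
var_fast = np.mean(d.real**2 + d.imag**2)

print(np.isclose(var_fast, np.var(z)))  # True
```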
| |
As numpy is Python 3 only, these import statements are now unnecessary
and don't alter runtime behavior.
|
| |
As a side-effect, this makes it support kwargs.
|
| |
This includes:
* The addition of 3-input PyObject inner loop
* The removal of `->f->fastclip` for builtin types, which now use ufuncs instead
* A deprecation in `PyArray_Clip` for third-party types that still use `->f->fastclip`
* A deprecation of the unusual casting behavior of `clip`
* A deprecation of the broken `nan`-behavior of `clip`, which was previously dependent on dimensionality and byte-order.
|
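With clip implemented as a 3-input ufunc, its bounds broadcast like any other ufunc operands:

```python
import numpy as np

a = np.array([-2, -1, 0, 3, 7])

print(np.clip(a, 0, 5))                # [0 0 0 3 5]
# The bounds broadcast element-wise, like other ufunc inputs:
print(np.clip(a, [0, 0, 1, 1, 1], 5))  # [0 0 1 3 5]
```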
| |
This is a direct move, with some tweaks to imports.
It breaks a cyclic import between `core.numeric` and `core.fromnumeric`.
It doesn't affect the value of `np.core.numeric.__all__`, which keeps code doing `from numpy.core.numeric import *` working.
|
| |
Previously, an object array which contained complex numbers would
give complex output, as it was simply multiplied with itself instead
of with its conjugate.
|
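The fix can be seen with a small object array of complex numbers; multiplying deviations by their conjugate yields a real-valued variance:

```python
import numpy as np

z = np.array([1 + 1j, 1 - 1j], dtype=object)

# Deviations from the mean are [1j, -1j]; conjugate multiplication gives
# |1j|^2 = 1, whereas naive squaring (1j)**2 would give -1.
v = np.var(z)
print(v)  # equals 1 (naive squaring would have produced -1)
```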
| |
In this implementation, if the ufunc does not have an identity, it needs
an initial value to be supplied.
|
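For example, np.minimum has no identity, so reducing an empty array requires an explicit initial value:

```python
import numpy as np

a = np.array([2.0, 3.0, 4.0])
print(np.minimum.reduce(a))  # 2.0

# An empty reduction has nothing to start from, so `initial` is required:
print(np.minimum.reduce(np.array([]), initial=np.inf))  # inf

try:
    np.minimum.reduce(np.array([]))  # no identity, no initial -> error
except ValueError as exc:
    print("no identity:", exc)
```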
| |
The Liskov substitution principle suggests it should.
|
| |
I'd like to implement NEP-18 in a multi-step process:
1. (This PR) Pure Python implementation of `__array_function__` machinery,
based on the prototype implementation from NEP-18.
2. Rewrite this machinery in C as needed to improve performance.
3. Implement overrides on NumPy functions.
Steps 2 and 3 should be able to happen in parallel (and with other people
contributing!).
This PR still needs more tests, especially for `ndarray.__array_function__`.
|
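The protocol from NEP 18 can be sketched with a toy duck-array type (MyArray is illustrative, not part of NumPy): NumPy functions called on it dispatch through its `__array_function__` hook.

```python
import numpy as np

class MyArray:
    """Toy duck array implementing the NEP 18 protocol (illustrative only)."""
    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_function__(self, func, types, args, kwargs):
        # NumPy calls this hook instead of operating on MyArray directly.
        if func is np.sum:
            return MyArray(np.sum(self.data, *args[1:], **kwargs))
        return NotImplemented

m = MyArray([1, 2, 3])
out = np.sum(m)          # dispatches through MyArray.__array_function__
print(int(out.data))     # 6
```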
| |
output dtype that stay in float16.
|
| |
PyArg_ParseTupleAndKeywords is pretty slow for keywords, as it needs to
create Python strings first. Using positional arguments avoids this and
gains 15-20% performance for small reductions.
|
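The difference is only in argument parsing, not in results; for a reduction both spellings are equivalent, with the positional form skipping keyword handling:

```python
import numpy as np

a = np.ones(4)

s_pos = np.add.reduce(a, 0)       # positional axis: no keyword parsing
s_kw = np.add.reduce(a, axis=0)   # keyword axis: same result

print(s_pos, s_kw)  # 4.0 4.0
```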
| |
This takes care to preserve the object type for scalar returns and
fixes the error that resulted when the scalar did not have a dtype
attribute.
Closes #4063.
|
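A case this fix covers: reducing an object array of Fractions, where the scalar result has no dtype attribute but should keep its Python type.

```python
import numpy as np
from fractions import Fraction

a = np.array([Fraction(1, 3), Fraction(2, 3)], dtype=object)
m = np.mean(a)

# The scalar result keeps its Python type even though Fraction
# has no dtype attribute.
print(m, type(m).__name__)  # 1/2 Fraction
```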
| |
Now is as good a time as any, with open PRs at a low.
|
| |
Currently the results may be infinite or negative. Instead, raise a
ValueError in this case.
|
| |
The return type could differ depending on whether or not the value
was a scalar.
|
| |
Use issubclass instead of issubdtype.
Add some blank lines.
Remove trailing whitespace.
Remove unneeded float casts, since true_divide is the default.
Clean up the documentation a bit.
|
| |
New files lib/nanfunctions.py and lib/tests/test_nanfunctions.py are
added, and both the previous and new nan functions and tests are moved
into them.
The existing nan functions moved from lib/function_base are:
nansum, nanmin, nanmax, nanargmin, nanargmax
The new nan functions moved from core/numeric are:
nanmean, nanvar, nanstd
|
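The functions gathered into lib/nanfunctions.py all share the same contract: NaNs are excluded from the reduction. For instance:

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])

print(np.nansum(a))   # 4.0
print(np.nanmean(a))  # 2.0
print(np.nanmax(a))   # 3.0
```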
| |
Add `print_function` to all `from __future__ import ...` statements
and use the python3 print function syntax everywhere.
Closes #3078.
|
| |
The new import `absolute_import` is added to the `from __future__ import`
statement, and the 2to3 `import` fixer is run to make the imports
compatible. There are several things that need to be dealt with to make
this work.
1) Files meant to be run as scripts run in a different environment than
files imported as part of a package, and so changes to those files need
to be skipped. The affected script files are:
* all setup.py files
* numpy/core/code_generators/generate_umath.py
* numpy/core/code_generators/generate_numpy_api.py
* numpy/core/code_generators/generate_ufunc_api.py
2) Some imported modules are not available as they are created during
the build process and consequently 2to3 is unable to handle them
correctly. Files that import those modules need a bit of extra work.
The affected files are:
* core/__init__.py,
* core/numeric.py,
* core/_internal.py,
* core/arrayprint.py,
* core/fromnumeric.py,
* numpy/__init__.py,
* lib/npyio.py,
* lib/function_base.py,
* fft/fftpack.py,
* random/__init__.py
Closes #3172
|
| |
In Python 3 range is an iterator and `xrange` has been removed. This has
two consequences for code:
1) Where a list is needed, `list(range(...))` must be used.
2) `xrange` must be replaced by `range`.
Both of these changes also work in Python 2, and this patch makes both.
Three of the fixed places do not need it, but I left them in
so that the result would be `xrange` clean.
Closes #3092
|
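The two cases translate mechanically; a sketch of both, valid on Python 3:

```python
# Case 2: plain iteration needs only `range` (formerly `xrange`)
total = 0
for i in range(5):
    total += i

# Case 1: materialize explicitly where an actual list is required
as_list = list(range(5))

print(total, as_list)  # 10 [0, 1, 2, 3, 4]
```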
| |
This should be harmless, as we already are division clean. However,
placement of this import takes some care. In the future a script
can be used to append new features without worry, at least until
such time as it exceeds a single line. Having that ability will
make it easier to deal with absolute imports and printing updates.
|
| |
The original masked-NA-NEP branch contained a large number of changes
in addition to the core NA support. For example:
- ufunc.__call__ support for where= argument
- nditer support for arbitrary masks (in support of where=)
- ufunc.reduce support for simultaneous reduction over multiple axes
- a new "array assignment API"
- ndarray.diagonal() returning a view in all cases
- bug-fixes in __array_priority__ handling
- datetime test changes
etc. There's no consensus yet on what should be done with the
maskna-related part of this branch, but the rest is generally useful
and uncontroversial, so the goal of this branch is to identify exactly
which code changes are involved in maskna support.
The basic strategy used to create this patch was:
- Remove the new masking-related fields from ndarray, so no arrays
are masked
- Go through and remove all the code that this makes
dead/inaccessible/irrelevant, in a largely mechanical fashion. So
for example, if I saw 'if (PyArray_HASMASK(a)) { ... }' then that
whole block was obviously just dead code if no arrays have masks,
and I removed it. Likewise for function arguments like skipna that
are useless if there aren't any NAs to skip.
This changed the signature of a number of functions that were newly
exposed in the numpy public API. I've removed all such functions from
the public API, since releasing them with the NA-less signature in 1.7
would create pointless compatibility hassles later if and when we add
back the NA-related functionality. Most such functions are removed by
this commit; the exception is PyArray_ReduceWrapper, which requires
more extensive surgery, and will be handled in followup commits.
I also removed the new ndarray.setasflat method. Reason: a comment
noted that the only reason this was added was to allow easier testing
of one branch of PyArray_CopyAsFlat. That branch is now the main
branch, so that isn't an issue. Nonetheless this function is arguably
useful, so perhaps it should have remained, but I judged that since
numpy's API is already hairier than we would like, it's not a good
idea to add extra hair "just in case". (Also AFAICT the test for this
method in test_maskna was actually incorrect, as noted here:
https://github.com/njsmith/numpyNEP/blob/master/numpyNEP.py
so I'm not confident that it ever worked in master, though I haven't
had a chance to follow-up on this.)
I also removed numpy.count_reduce_items, since without skipna it
became trivial.
I believe that these are the only exceptions to the "remove dead code"
strategy.
|
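One of the generally useful features retained from the branch, where= support in ufunc calls, looks like this:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
out = np.zeros(3)

# Elements where the mask is False are left untouched in `out`.
np.add(a, 10, out=out, where=[True, False, True])
print(out)  # [11.  0. 13.]
```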