order of the powers (either decreasing or increasing).

This preserves the complex (and higher precision float or
object) type of the input array, so that the complex
covariance and correlation coefficients can be calculated.
It also fixes the behaviour of empty arrays: these will
now result in either a 0x0 result or an NxN result filled
with NaNs.
A warning is now issued when ddof is too large and the factor
is set to 0, so that in this case the result is always NaN or
infinity/negative infinity and never a negative number.
Closes gh-597 and gh-2680
Closes gh-3882 (original pull request)

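As an illustrative sketch of the behaviour described above (assumed usage,
not code from the commit):

```
import numpy as np

z = np.array([1 + 2j, 2 + 1j, 3 - 1j])
np.cov(z)                  # complex input stays complex: a complex variance

np.cov(np.empty((0, 0)))   # empty input -> empty 0x0 result
                           # (a degrees-of-freedom warning may accompany it)

# ddof too large: a RuntimeWarning is issued and the normalization factor
# is set to 0, so the result is NaN or +/-inf, never a negative number.
np.cov(z, ddof=5)
```
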
A couple of micro optimizations

move slow test_memmap_roundtrip to slow tests
decrease excessively large array size used in the np.sin(x) computation
in TestInterp.test_if_len_x_is_small; the code has no special path for
such large size differences

BUG: ensure percentile has same output structure as in 1.8

percentile returned scalars and lists of arrays in 1.8.
Adapt the new percentile to return scalars and arrays with the q
dimension first, for compatibility.

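A usage sketch of the restored output structure (assumed from the
description above):

```
import numpy as np

a = np.arange(12).reshape(3, 4)

np.percentile(a, 50)                       # scalar q -> scalar result
np.percentile(a, [25, 50, 75])             # sequence q -> array of results

# With an axis argument, the q dimension comes first:
np.percentile(a, [25, 75], axis=0).shape   # (2, 4)
```
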
Deal with subclasses of ndarray, like pandas.Series and matrix.
Subclasses may not define the new keyword keepdims or deal
gracefully with ufuncs in all their forms. This is solved by
throwing the problem onto the np.sum, np.any, etc. functions
that already have the ugly hacks needed to deal with it.
Settle handling of all-NaN slices:
    nanmax, nanmin -- raise a warning, return NaN for the slice.
    nanargmax, nanargmin -- raise ValueError.
    nansum -- return 0 for the slice.
    nanmean, nanvar, nanstd -- raise a warning, return NaN for the slice.
Make the NaN functions work for scalar arguments.
This may seem silly, but it removes a check for special cases.
Update tests:
    Deal with the new all-NaN handling.
    Test with the matrix class as an example of a subclass without keepdims.
    Test with scalar arguments.
    Fix the nanvar issue reported in #3860.
Closes #3860 #3850

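The settled all-NaN behaviour, as a runnable sketch:

```
import numpy as np

allnan = np.array([np.nan, np.nan])

np.nansum(allnan)      # 0.0 for an all-NaN slice
np.nanmax(allnan)      # NaN, plus a RuntimeWarning
np.nanmean(allnan)     # NaN, plus a RuntimeWarning

try:
    np.nanargmax(allnan)
except ValueError:
    pass               # nanargmax/nanargmin raise ValueError on all-NaN
```
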
closes gh-3846

This covers those locations that either import or build numarray
or numeric.

* added note that `overwrite_input` has no effect when `a` is not
  an array in the percentile function.
* added unit test to verify that no error is raised when `a` is not
  an array and `overwrite_input` is True.

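For illustration (a sketch, not the commit's test): with a non-array input
the flag is simply ignored:

```
import numpy as np

data = [3, 1, 2]    # plain list, not an ndarray
np.percentile(data, 50, overwrite_input=True)   # no error; flag has no effect
data                # the list itself is left untouched
```
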
The percentile function was enhanced by adding limit and interpolation
parameters to give it functionality similar to SciPy's
stats.scoreatpercentile function. In addition, the function was
vectorized along q and rewritten to use the partition method for
better performance.

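A usage sketch of the interpolation parameter (its spelling at the time;
newer numpy renamed it to `method`):

```
import numpy as np

a = np.array([1, 2, 3, 4])

# The 40th percentile falls between a[1] and a[2]:
np.percentile(a, 40)                            # 'linear' default: 2.2
np.percentile(a, 40, interpolation='lower')     # 2
np.percentile(a, 40, interpolation='nearest')   # 2
```
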
The zerosize_ok flag to nditer was missing, so that it did not
allow 0-sized iteration.
Closes gh-3714

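A minimal sketch of the flag in question:

```
import numpy as np

empty = np.empty((0, 3))

# Without 'zerosize_ok' the constructor rejects 0-sized operands;
# with it, the loop below simply never executes.
for _ in np.nditer(empty, flags=['zerosize_ok']):
    pass
```
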
* numpy.gradient has been enhanced to use a second order accurate
  one-sided finite difference stencil at boundary elements of the
  array. Second order accurate central differences are still used for
  the interior elements. The result is a fully second order accurate
  approximation of the gradient over the full domain.
* The one-sided stencil uses 3 elements, each with a different weight.
  A forward difference is used for the first element,
      dy/dx ~ -(3.0*y[0] - 4.0*y[1] + y[2]) / (2.0*dx)
  and a backward difference is used for the last element,
      dy/dx ~ (3.0*y[-1] - 4.0*y[-2] + y[-3]) / (2.0*dx)
  (a numerical check of these stencils follows after this list).
* Because the datetime64 datatype cannot be multiplied, a view of
  datetime64 arrays is taken and cast to int64. The gradient algorithm
  is then applied to the view rather than the input array.
* Previously no dimension checks were performed on the input array.
  Now, if the array size along the differentiation axis is less than 2,
  a ValueError is raised which explains that more elements are needed.
  If the size is exactly two, the function falls back to using a 2 point
  stencil (the old behaviour). If the size is 3 or above, the higher
  accuracy methods are used.
* A new test has been added which validates the higher accuracy. Old
  tests have been updated to pass. Note, this should be expected
  because the boundary elements now return different (more accurate)
  values.

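As a numerical check of the stencils above (a sketch, not the commit's
test): a second order accurate scheme differentiates a quadratic exactly,
endpoints included:

```
import numpy as np

x = np.linspace(0.0, 1.0, 6)
dx = x[1] - x[0]
y = x ** 2                    # exact derivative is 2*x

g = np.gradient(y, dx)

# The one-sided boundary stencils keep the endpoints second order
# accurate, so the quadratic is reproduced exactly everywhere.
assert np.allclose(g, 2 * x)
```
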
Run autopep8 over the test files in numpy/lib/tests and make fixes
to the result.
Also remove the Python 2.5 workaround.

The test_fancy test in numpy/lib/tests/test_function_base.py failed in
release because a DeprecationWarning was no longer raised; it had
become a warning.

Make this happen and remove test parts dependent on numpy version
< 1.9. Fixes test failures in numpy after the 1.8 branch.

Run the 2to3 ws_comma fixer on *.py files. Some lines are now too long
and will need to be broken at some point. OTOH, some lines were already
too long and need to be broken at some point. Now seems as good a time
as any to do this with open PRs at a minimum.

Now is as good a time as any, with open PRs at a low.

Fix typos and clarify some explanations. Document the changes in the return
values of nanargmin and nanargmax for all-NaN slices in the 1.8.0 release
notes.

New files lib/nanfunctions.py and lib/tests/test_nanfunctions.py are
added, and both the previous and new nan functions and tests are moved
into them.
The existing nan functions moved from lib/function_base are:
    nansum, nanmin, nanmax, nanargmin, nanargmax
The new nan functions moved from core/numeric are:
    nanmean, nanvar, nanstd

Partitioning is sufficient to obtain the median and is much faster.
In the case of overwrite_input=True, the resulting array will no
longer be fully sorted.

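Why partitioning suffices, as a minimal sketch (assuming 1-D data of odd
length):

```
import numpy as np

a = np.array([9, 1, 7, 3, 5])
k = a.size // 2

# partition places the k-th smallest element at index k, with smaller
# elements (in any order) before it and larger ones after; no full sort.
p = np.partition(a, k)
assert p[k] == np.median(a)
```
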
Also add test for IndexError exception when axis is out of bounds.

WarningManager was a workaround for the lack of the with statement
in Python versions < 2.6. As those versions are no longer supported,
it can be removed.
Deprecation notes are added to WarningManager and WarningMessage but,
to avoid a cascade of messages in third party apps, no warnings are
raised at this time; that can be done later.
Closes #3519.

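For reference, the with-based idiom that replaces WarningManager (standard
library usage, not the commit's code):

```
import warnings

with warnings.catch_warnings():
    warnings.simplefilter('ignore', DeprecationWarning)
    ...  # warnings raised here are filtered only inside this block
```
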
Now that Python < 2.6 is no longer supported, we can use the errstate
context manager in places where constructs like

```
old = seterr(invalid='ignore')
try:
    blah
finally:
    seterr(**old)
```

were used.

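The equivalent errstate form, for contrast (standard numpy usage):

```
import numpy as np

with np.errstate(invalid='ignore'):
    np.sqrt(np.array([-1.0]))   # no invalid-value warning inside the block
```
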
This ensures that when the loaded file is closed it also closes the
file descriptor, avoiding a ResourceWarning in Python 3.
Closes #3457.

Correct the implementation of the npv function, its documentation, and
the mirr function that depends on it. The test_financial.py file is
also corrected to take those modifications into account.
The npv function's behavior was contrary to what the documentation
stated, as it summed indexes 1 to M instead of 0 to M-1. The mirr
function used a corrective factor to get the correct result in spite
of that error, so that factor is removed.
Closes #649

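A worked check of the documented summation (a sketch; np.npv existed in
numpy at the time and has since moved to the separate numpy-financial
package):

```
import numpy as np

rate = 0.05
values = np.array([-1000.0, 300.0, 300.0, 300.0, 300.0])

# Documented behaviour: sum over t = 0 .. M-1 of values[t] / (1 + rate)**t,
# i.e. the initial cash flow at t = 0 is not discounted.
expected = sum(v / (1 + rate) ** t for t, v in enumerate(values))
assert np.isclose(np.npv(rate, values), expected)
```
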
memmap needs to call may_share_memory in __array_finalize__ to
determine whether it can drop the references on copies.
The Python version of may_share_memory caused significant slowdowns
when slicing these maps.
closes gh-3364

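For reference, the function in question (a quick usage sketch):

```
import numpy as np

a = np.arange(10)
b = a[2:5]                        # a view into a's buffer

np.may_share_memory(a, b)         # True
np.may_share_memory(a, a.copy())  # False
```
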
New padding method which scales much better with dimensionality.
This new implementation is fully vectorized, builds each abstracted
n-dimensional padding block in a single step, and takes advantage
of separability. The API is completely preserved, and the old
algorithm is used if a vector function is passed as `mode`.
The new algorithm is faster for all tested combinations of inputs,
and scales much better with dimensionality. Execution time reductions
range from ~25% for small rank 1 arrays to >99% for rank 4+ arrays.

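A sketch of the preserved np.pad API (assumed examples, not the commit's
benchmarks):

```
import numpy as np

a = np.arange(6).reshape(2, 3)

np.pad(a, 1, mode='constant')    # one layer of zeros on every side
np.pad(a, (1, 2), mode='edge')   # 1 layer before, 2 after, repeating edges
np.pad(a, 2, mode='reflect')     # reflection padding
```
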
BUG: np.insert must copy index array

Otherwise it would make in-place changes to it. Fixes gh-3279.

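A sketch of the fixed behaviour:

```
import numpy as np

a = np.arange(5)
idx = np.array([1, 3])

np.insert(a, idx, 99)   # array([ 0, 99,  1,  2, 99,  3,  4])
idx                     # still array([1, 3]): the index array is not mutated
```
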
The unicode fixer strips the u from u'hi' and converts the unicode type
to str. The first won't work for Python 2, so instead we replace the u
prefix with the sixu function borrowed from the six compatibility
package. That function calls the unicode constructor with the
'unicode_escape' encoder so that the many tests using escaped unicode
characters like u'\u0900' will be handled correctly. That makes the
sixu function a bit different from the asunicode function currently in
numpy.compat and also provides a target that can be converted back to
the u prefix when support for Python 3.2 is dropped. (Python 3.3
reintroduced the u prefix for compatibility.)
The unicode fixer also replaces 'unicode' with 'str', as 'unicode' is
no longer a builtin in Python 3. For code compatibility, 'unicode' is
defined as either 'str' or 'unicode' in numpy.compat so that checks like

    if isinstance(x, unicode):
        ...

will work properly for all Python versions.
Closes #3089.

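A minimal sketch of sixu as described above (the real helper lives in
numpy.compat; this reconstruction is illustrative only):

```
import sys

if sys.version_info[0] >= 3:
    def sixu(s):
        # Python 3 strings are already unicode; nothing to do.
        return s
else:
    def sixu(s):
        # Interpret escapes so tests using u'\u0900'-style literals work.
        return unicode(s, 'unicode_escape')
```
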
Various functions have been moved around in the stdlib for Python 3;
this fixes things up so that the code is valid in both Python 2 and
Python 3.
Note: monkey patching the stdlib urlopen for testing looks a bit hokey
to me, but I don't see an easier, more reliable way to do the test.
Closes #3090.

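The usual compatibility idiom for such stdlib moves (a generic sketch,
not the commit's exact code):

```
try:
    from urllib.request import urlopen   # Python 3 location
except ImportError:
    from urllib2 import urlopen          # Python 2 location
```
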
In Python 3, zip returns an iterator instead of a list. Consequently,
in places where an iterator won't do, it must be enclosed in list(...).
Lists instead of iterators are also used in array constructors, as
using iterators there usually results in an object array containing
an iterator object.
Closes #3094

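A sketch of the difference:

```
import numpy as np

list(zip([1, 2], [3, 4]))             # materialize when a list is needed

# Passing the iterator straight to np.array would produce a 0-d object
# array wrapping the zip object; a list gives the intended 2x2 array:
np.array(list(zip([1, 2], [3, 4])))   # array([[1, 3], [2, 4]])
```
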
The numliterals fixer replaces old style octal numbers like '01' with
'0o1' and removes the 'L' suffix.
Octal values were previously mistakenly specified in some dates; those
uses have been corrected by removing the leading zeros.
Simply removing the 'L' suffix should not be a problem, but in some
testing code it looks necessary, so in those places the Python long
constructor is used instead.
The 'long' type is no longer defined in Python 3. Because we need to
have it defined for Python 2, it is added to numpy/compat/np3k.py where
it is defined as 'int' for Python 3 and 'long' for Python 2. The `long`
fixer then needs to be skipped so that it doesn't undo the good work.
Closes #3074, #3067.

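For illustration (a generic sketch of the fixer's effect, not the
commit's code):

```
import sys

x = 0o17    # Python 3 spelling of the old octal literal 017 (no 'L' suffix)

# The 'long' compatibility alias described above:
if sys.version_info[0] >= 3:
    long = int        # arbitrary precision int covers Python 2's long
value = long(10)
```
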