Commit message log
Replace slow exec with a direct __import__.
Improves `import numpy` speed by about 10%.
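For a feel of the difference, a tiny illustrative micro-benchmark (not the actual NumPy code; `math` stands in for the modules being imported): exec must compile its source string on every call, while a direct __import__ goes straight to the cached module.
```
import timeit

# exec-based dynamic import: compiles the statement string each call
t_exec = timeit.timeit("exec('import math')", number=10000)

# direct __import__ call: no compilation step
t_direct = timeit.timeit("__import__('math')", number=10000)

print(t_exec, t_direct)  # __import__ is consistently faster
```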
This fix correctly calculates the number of BUFFER_SIZE chunks to read.
It takes into account dtype sizes that can be larger than BUFFER_SIZE,
such as a long string.
See #4027
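A hedged sketch of the corrected chunk arithmetic (the names and constant are illustrative, not NumPy's actual I/O internals):
```
# illustrative only -- not NumPy's real I/O code
BUFFER_SIZE = 8192

def chunks_to_read(n_items, itemsize):
    # items per chunk; at least 1 so itemsizes larger than the
    # buffer (e.g. a long string dtype) still make progress
    per_chunk = max(BUFFER_SIZE // itemsize, 1)
    # ceiling division: chunks needed to cover all items
    return -(-n_items // per_chunk)

print(chunks_to_read(1000, 8))      # small dtype: one chunk suffices
print(chunks_to_read(10, 100000))   # itemsize > BUFFER_SIZE: one item per chunk
```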
and tests/test_twodim_base.py. Also make a couple more PEP8 tweaks.
order of the powers (either decreasing or increasing).
This preserves the complex (and higher precision float or
object) type of the input array, so that the complex
covariance and correlation coefficients can be calculated.
It also fixes the behaviour of empty arrays. These will
now either result in a 0x0 result, or a NxN result filled
with NaNs.
A warning is now issued when ddof is too large; the factor is
set to 0 so that in this case the result is always NaN or
infinity/negative infinity and never a negative number.
Closes gh-597 and gh-2680
Closes gh-3882 (original pull request)
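A quick check of the preserved complex type:
```
import numpy as np

x = np.array([1 + 2j, 3 - 1j, 2 + 0j])
y = np.array([0 + 1j, 1 + 1j, 2 - 2j])

# both results keep the complex dtype instead of dropping to the real part
print(np.cov(x, y).dtype)       # complex128
print(np.corrcoef(x, y).dtype)  # complex128
```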
A couple of micro-optimizations
Move slow test_memmap_roundtrip to the slow tests.
Decrease the excessively large array size used in the np.sin(x) computation
in TestInterp.test_if_len_x_is_small; the code has no special path for
such large size differences.
BUG: ensure percentile has same output structure as in 1.8
percentile returned scalars and lists of arrays in 1.8.
Adapt the new percentile to return scalars and arrays with the q dimension
first, for compatibility.
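The restored structure in a short example:
```
import numpy as np

a = np.arange(12).reshape(3, 4)

print(np.percentile(a, 50))                      # scalar q -> scalar result
print(np.percentile(a, [25, 75], axis=0).shape)  # (2, 4): the q dimension comes first
```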
Deal with subclasses of ndarray, like pandas.Series and matrix.
Subclasses may not define the new keyword keepdims or deal
gracefully with ufuncs in all their forms. This is solved by
throwing the problem onto the np.sum, np.any, etc. functions,
which have ugly hacks to deal with the problem.
Settle handling of all-NaN slices:
nanmax, nanmin -- raise a warning, return NaN for the slice.
nanargmax, nanargmin -- raise a ValueError.
nansum -- return 0 for the slice.
nanmean, nanvar, nanstd -- raise a warning, return NaN for the slice.
Make the NaN functions work for scalar arguments.
This may seem silly, but it removes a check for special cases.
Update tests:
* deal with the new all-NaN handling,
* test with the matrix class as an example of a subclass without keepdims,
* test with scalar arguments.
Fix the nanvar issue reported in #3860.
Closes #3860 #3850
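The all-NaN conventions in a quick example:
```
import numpy as np
import warnings

a = np.array([[1.0, np.nan], [np.nan, np.nan]])

with warnings.catch_warnings():
    warnings.simplefilter("ignore", RuntimeWarning)
    print(np.nanmax(a, axis=1))  # [ 1. nan] -- warning + NaN for the all-NaN row
print(np.nansum(a, axis=1))      # [ 1.  0.] -- 0 for the all-NaN row

try:
    np.nanargmax(a, axis=1)
except ValueError as exc:
    print("nanargmax:", exc)     # ValueError for the all-NaN row
```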
Closes gh-3846
This covers those locations that either import or build numarray
or numeric.
* added a note that `overwrite_input` has no effect when `a` is not
an array in the percentile function.
* added a unit test to verify that no error is raised when `a` is not
an array and `overwrite_input` is True.
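The documented behaviour, illustrated:
```
import numpy as np

data = [3, 1, 2]  # a plain list, not an ndarray

# overwrite_input has no effect: the list is converted to an array first
print(np.percentile(data, 50, overwrite_input=True))  # 2.0
print(data)                                           # [3, 1, 2] -- unchanged
```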
The percentile function was enhanced by adding limit and interpolation
parameters to give it similar functionality to SciPy's stats.scoreatpercentile
function. In addition, the function was vectorized along q and rewritten to
use the partition method for better performance.
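For example, using the interpolation keyword (renamed to method in later NumPy releases) and a vectorized q:
```
import numpy as np

a = np.array([1, 2, 3, 4])

print(np.percentile(a, [25, 50, 75]))                # q is vectorized: [1.75 2.5 3.25]
print(np.percentile(a, 40, interpolation='lower'))   # 2
print(np.percentile(a, 40, interpolation='higher'))  # 3
print(np.percentile(a, 40))                          # 2.2 ('linear', the default)
```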
The zerosize_ok flag to nditer was missing, so it did not
allow 0-sized iteration.
Closes gh-3714
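The flag in action:
```
import numpy as np

a = np.empty((0, 3))

# without 'zerosize_ok' in flags, nditer rejects zero-sized operands
it = np.nditer(a, flags=['zerosize_ok'])
print(it.itersize)  # 0 -- the loop body simply never runs
```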
* numpy.gradient has been enhanced to use a second order accurate
one-sided finite difference stencil at boundary elements of the
array. Second order accurate central differences are still used for
the interior elements. The result is a fully second order accurate
approximation of the gradient over the full domain.
* The one-sided stencil uses 3 elements, each with a different weight. A
forward difference is used for the first element,
dy/dx ~ -(3.0*y[0] - 4.0*y[1] + y[2]) / (2.0*dx)
and a backward difference is used for the last element,
dy/dx ~ (3.0*y[-1] - 4.0*y[-2] + y[-3]) / (2.0*dx)
* Because the datetime64 datatype cannot be multiplied, a view is taken
of datetime64 arrays and cast to int64. The gradient algorithm is
then applied to the view rather than the input array.
* Previously no dimension checks were performed on the input array. Now,
if the array size along the differentiation axis is less than 2, a
ValueError is raised which explains that more elements are needed. If
the size is exactly two, the function falls back to using a 2 point
stencil (the old behaviour). If the size is 3 or above, the
higher accuracy methods are used.
* A new test has been added which validates the higher accuracy. Old
tests have been updated to pass. Note, this should be expected
because the boundary elements now return different (more accurate)
values.
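A quick accuracy check (note: later NumPy releases made the boundary behaviour opt-in via edge_order=2): a second order scheme differentiates a quadratic exactly, boundaries included.
```
import numpy as np

x = np.linspace(0.0, 1.0, 5)
y = x**2                  # exact derivative is 2*x
dx = x[1] - x[0]

# exact at the boundaries as well: [0.  0.5 1.  1.5 2. ]
print(np.gradient(y, dx, edge_order=2))
```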
Run autopep8 over the test files in numpy/lib/tests and make fixes
to the result.
Also remove a Python 2.5 workaround.
The test_fancy test in numpy/lib/tests/test_function_base.py failed in
release because a DeprecationWarning was no longer raised as an error;
it had become a plain warning.
Make this happen and remove the test parts dependent on numpy versions
< 1.9. Fixes test failures in numpy after the 1.8 branch.
Run the 2to3 ws_comma fixer on *.py files. Some lines are now too long
and will need to be broken at some point. OTOH, some lines were already
too long and need to be broken at some point. Now seems as good a time
as any to do this, while open PRs are at a minimum.
Now is as good a time as any, with open PRs at a low.
Fix typos and clarify some explanations. Document the changes in the return
values of nanargmin and nanargmax for all-NaN slices in the 1.8.0 release
notes.
New files lib/nanfunctions.py and lib/tests/test_nanfunctions.py are
added and both the previous and new nan functions and tests are moved
into them.
The existing nan functions moved from lib/function_base are:
nansum, nanmin, nanmax, nanargmin, nanargmax
The added nan functions moved from core/numeric are:
nanmean, nanvar, nanstd
Partitioning is sufficient to obtain the median and is much faster.
In the case of overwrite_input=True, the resulting array is no longer
fully sorted.
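The idea in miniature: np.partition places the k-th element in its final sorted position, which is all the median needs.
```
import numpy as np

a = np.array([5.0, 3.0, 1.0, 4.0, 2.0])
k = a.size // 2

p = np.partition(a, k)  # only element k is guaranteed in sorted position
print(p[k])             # 3.0 -- the median, without a full sort
```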
Also add a test for the IndexError raised when axis is out of bounds.
WarningManager was a workaround for the lack of the with statement
in Python versions < 2.6. As those versions are no longer supported,
it can be removed.
Deprecation notes are added to WarningManager and WarningMessage, but
to avoid a cascade of messages in third party apps, no warnings are
raised at this time; that can be done later.
Closes #3519.
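The standard library replacement that made WarningManager redundant:
```
import warnings

# warnings.catch_warnings has been usable as a context manager since Python 2.6
with warnings.catch_warnings():
    warnings.simplefilter("error", RuntimeWarning)
    pass  # code that must not silently emit RuntimeWarnings goes here
```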
Now that Python < 2.6 is no longer supported, we can use the errstate
context manager in places where constructs like
```
old = seterr(invalid='ignore')
try:
    blah
finally:
    seterr(**old)
```
were used.
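The equivalent with the context manager:
```
import numpy as np

# scoped error handling; the previous settings are restored on exit
with np.errstate(invalid='ignore'):
    result = np.sqrt(np.array([-1.0, 4.0]))  # no RuntimeWarning for the NaN

print(result)  # [nan  2.]
```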
This ensures that when the loaded file is closed it also closes the
file descriptor, avoiding a resource warning in Python 3.
Closes #3457.
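A usage sketch of the close that now also releases the descriptor:
```
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), 'data.npz')
np.savez(path, a=np.arange(3))

npz = np.load(path)
print(npz['a'])  # [0 1 2]
npz.close()      # also closes the underlying file descriptor
```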
Correct the implementation of the npv function, its documentation, and
the mirr function that depends on it. test_financial.py is also
corrected to take those modifications into account.
The npv function's behavior was contrary to what the documentation stated,
as it summed indexes 1 to M instead of 0 to M-1. The mirr function used
a corrective factor to get the correct result in spite of that error, so
that factor is removed.
Closes #649
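A minimal check of the corrected convention (npv later moved to the separate numpy-financial package): values[i] is discounted by (1 + rate)**i, i.e. indexes 0 to M-1.
```
import numpy as np

rate = 0.08
values = np.array([-1000.0, 300.0, 400.0, 500.0])

# explicit sum over indexes 0..M-1
npv = (values / (1.0 + rate) ** np.arange(values.size)).sum()
print(npv)  # ~17.63
```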