| Commit message | Author | Age | Files | Lines |
|
This preserves the complex (and higher precision float or
object) type of the input array, so that the complex
covariance and correlation coefficients can be calculated.
It also fixes the behaviour of empty arrays. These will
now result in either a 0x0 result, or an NxN result filled
with NaNs.
A warning is now issued when ddof is too large; the normalization
factor is then set to 0, so that in this case the result is always NaN or
infinity/negative infinity and never a negative number.
Closes gh-597 and gh-2680
Closes gh-3882 (original pull request)
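
For illustration only, a minimal sketch of the behaviour described above,
assuming NumPy 1.9-era `np.cov`; the array values are made up:

    import warnings
    import numpy as np

    # Complex input is no longer cast to float, so complex covariances work.
    z = np.array([1 + 2j, 2 - 1j, 3 + 0.5j])
    w = np.array([0 + 1j, 1 + 1j, 2 - 2j])
    print(np.cov(z, w).dtype)                    # complex128

    with warnings.catch_warnings():
        warnings.simplefilter("ignore")          # degenerate inputs warn
        print(np.cov(np.empty((0, 3))).shape)    # (0, 0): no variables at all
        print(np.cov(np.empty((2, 0))))          # 2x2 result filled with NaN
        # ddof larger than the number of observations: the normalization
        # factor is clipped to 0, so the result is inf/NaN, never negative.
        print(np.cov([1.0, 2.0, 3.0], ddof=5))   # inf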
|
Fix `ResourceWarning: unclosed file` on Python 3
|
A couple micro optimizations
|
move slow test_memmap_roundtrip to slow tests
decrease the excessively large array size used in the np.sin(x) computation in
TestInterp.test_if_len_x_is_small; the code has no special path for such
large size differences.
|
MAINT: accept NULL in NpyIter_Deallocate and remove redundant NULL checks
|
Deallocation should just do nothing if provided a NULL pointer; nditer
deletion broke this convention.
Removed many redundant NULL checks for various deallocation functions
used in numpy; they all end up in standard C free or PyMem_Free, which
are both NULL safe.
|
STY: pep8 for npyio
|
fixing one typo in npyio.py
|
Two slight style modifications in npyio, regarding line length.
|
Various pep8 fixes for npyio.py
Also reorganized the imports, and removed the unnecessary (I hope)
`_string_like = _is_string_like` statement.
|
Fixes gh-2561
|
BUG: ensure percentile has same output structure as in 1.8
|
percentile returned scalars and lists of arrays in 1.8;
adapt the new percentile to return scalars, and arrays with the q dimension
first, for compatibility.
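
For illustration, a small sketch of the compatible output structure
(values here are only an example, behaviour as described above):

    import numpy as np

    a = np.arange(12).reshape(3, 4)

    # A scalar q still gives a scalar, as in 1.8.
    print(np.percentile(a, 50))                       # 5.5

    # A sequence of q values gives an array with the q dimension first.
    print(np.percentile(a, [25, 50, 75]).shape)       # (3,)
    print(np.percentile(a, [25, 50], axis=0).shape)   # (2, 4)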
|
Deal with subclasses of ndarray, like pandas.Series and matrix.
Subclasses may not define the new keyword keepdims or deal
gracefully with ufuncs in all their forms. This is solved by
throwing the problem onto the np.sum, np.any, etc. functions
that have ugly hacks to deal with the problem.
Settle handling of all-nan slices.
nanmax, nanmin -- Raise warning, return NaN for slice.
nanargmax, nanargmin -- Raise ValueError.
nansum -- Return 0 for slice.
nanmean, nanvar, nanstd -- Raise warning, return NaN for slice.
Make NaN functions work for scalar arguments.
This may seem silly, but it removes a check for special cases.
Update tests
Deal with new all-nan handling.
Test with matrix class as example of subclass without keepdims.
Test with scalar arguments.
Fix nanvar issue reported in #3860.
Closes #3860 #3850
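
A short sketch of the all-NaN handling summarised above (NumPy >= 1.9
behaviour; the array values are made up):

    import warnings
    import numpy as np

    a = np.array([[np.nan, np.nan],
                  [1.0,    np.nan]])

    with warnings.catch_warnings():
        warnings.simplefilter("ignore")      # all-NaN slices raise RuntimeWarning
        print(np.nanmax(a, axis=1))          # [nan  1.]  NaN for the all-NaN row
        print(np.nanmean(a, axis=1))         # [nan  1.]
    print(np.nansum(a, axis=1))              # [0.  1.]   all-NaN sum is 0

    try:
        np.nanargmax(a, axis=1)              # all-NaN slice -> ValueError
    except ValueError as e:
        print("ValueError:", e)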
|
closes gh-3846
|
Remove numarray and oldnumeric
|
The numarray info function is called by lib.utils.info. Rename it
to _info and copy it into lib/utils.py. Some modifications are made,
as it only needs to support numpy.
|
This covers those locations that either import or build numarray
or numeric.
|
STY: make function_base.py pep8 compatible
|
Continuing the pep8 effort, adds a newline after each `Error(`
and tries to wrap correctly.
|
This makes function_base.py almost pep8 compatible.
Also removes the Set import, which has been unneeded since Python 2.4,
and organises the import statements.
|
DOC: Fixes in the npyio documentation
|
Fixes the "see also" section of savetxt, which
described savez as compressing (closes #587 ). Also
replaces all occurences of .npy and .npz to use double backticks.
Some had, some did not, and some had " symbols.
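
For reference, a tiny sketch of the distinction the docs now make
(file names here are placeholders): np.savez writes an uncompressed
`.npz` archive, while np.savez_compressed writes a compressed one.

    import numpy as np

    a = np.zeros((100, 100))
    np.savez("plain.npz", a=a)                 # uncompressed .npz archive
    np.savez_compressed("small.npz", a=a)      # compressed .npz archive
    print(np.load("small.npz")["a"].shape)     # (100, 100)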
|
ENH: percentile function with additional parameters and vectorization
|
* added note that `overwrite_input` has no effect when `a` is not
an array in the percentile function.
* added unit test to verify that no error is raised when `a` is not
an array and `overwrite_input` is True.
|
The percentile function was enhanced by adding limit and interpolation
parameters to give it similar functionality to SciPy's stats.scoreatpercentile
function. In addition, the function was vectorized along q and rewritten to
use the partition method for better performance.
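
A hedged sketch of the vectorized q and the `interpolation` keyword only
(the keyword name is from the 1.9-era API and was renamed to `method` in
much later releases; data values are made up):

    import numpy as np

    a = np.array([1.0, 2.0, 3.0, 4.0])

    # q can now be a sequence; the result has the q dimension first.
    print(np.percentile(a, [25, 50, 75]))                       # [1.75  2.5  3.25]
    print(np.percentile(a, [25, 75], interpolation="lower"))    # [1.  3.]
    print(np.percentile(a, [25, 75], interpolation="nearest"))  # [2.  3.]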
|
Simply state that NumPy versions < 1.9 returned nan instead of zero
for the sum of empty slices.
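
For example (behaviour of 1.9 and later, as the docs now state):

    import numpy as np

    print(np.nansum([]))                          # 0.0 (was nan before 1.9)
    print(np.nansum(np.empty((0, 3)), axis=0))    # [0. 0. 0.]: empty slices sum to 0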
|
The zerosize_ok flag to nditer was missing, so that it did not
allow for 0-sized iteration.
Closes gh-3714
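
A minimal sketch of what the flag does, shown at the Python level with
np.nditer (the C-level iterator takes the equivalent NPY_ITER_ZEROSIZE_OK
flag; the array here is just an example):

    import numpy as np

    a = np.empty((0, 3))

    # Without the flag, constructing an iterator over a 0-sized array fails.
    try:
        np.nditer(a)
    except ValueError as e:
        print("ValueError:", e)

    # With zerosize_ok the iterator is created and simply yields nothing.
    it = np.nditer(a, flags=["zerosize_ok"])
    print(sum(1 for _ in it))        # 0 iterations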
|
* numpy.gradient has been enhanced to use a second order accurate
one-sided finite difference stencil at boundary elements of the
array. Second order accurate central differences are still used for
the interior elements. The result is a fully second order accurate
approximation of the gradient over the full domain.
* The one-sided stencil uses 3 elements each with a different weight. A
forward difference is used for the first element,
dy/dx ~ -(3.0*y[0] - 4.0*y[1] + y[2]) / (2.0*dx)
and backwards difference is used for the last element,
dy/dx ~ (3.0*y[-1] - 4.0*y[-2] + y[-3]) / (2.0*dx)
* Because the datetime64 datatype cannot be multiplied, a view is taken
of datetime64 arrays and cast to int64. The gradient algorithm is
then applied to the view rather than the input array.
* Previously no dimension checks were performed on the input array. Now,
if the array size along the differentiation axis is less than 2, a
ValueError is raised which explains that more elements are needed. If
the size is exactly two, the function falls back to using a 2 point
stencil (the old behaviour). If the size is 3 or above, the
higher accuracy methods are used.
* A new test has been added which validates the higher accuracy. Old
tests have been updated to pass. Note, this should be expected
because the boundary elements now return different (more accurate)
values.
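
As an illustration (values chosen here, not taken from the commit): for a
quadratic, the second order one-sided stencils reproduce the exact
derivative at the boundaries, which the old 2-point stencil did not.

    import numpy as np

    x = np.linspace(0.0, 1.0, 5)
    dx = x[1] - x[0]
    y = x ** 2                       # exact derivative is 2*x

    g = np.gradient(y, dx)
    print(g)                         # [0.  0.5  1.  1.5  2.] -- exact at both ends

    # The boundary values come from the 3-point one-sided stencils, e.g.
    left = -(3.0 * y[0] - 4.0 * y[1] + y[2]) / (2.0 * dx)
    print(left)                      # 0.0, matching g[0]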
|
BUG: Set __hash__ = None for non-hashable classes.
|
Because neither poly1d nor the Polynomial package polynomial classes are
immutable, and hence not reliably hashable, they should signal that by
setting __hash__ = None. This also fixes the warning
Overriding __eq__ blocks inheritance of __hash__ in 3.x
that is given when the command `python2.7 -3 -c "import numpy"` is run.
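
A small sketch of the pattern with a hypothetical mutable class (not the
numpy code itself):

    class MutablePoly(object):
        """A mutable container that defines equality but is not hashable."""

        def __init__(self, coeffs):
            self.coeffs = list(coeffs)

        def __eq__(self, other):
            return isinstance(other, MutablePoly) and self.coeffs == other.coeffs

        # Mutable objects with value-based __eq__ should not be hashable;
        # setting __hash__ = None makes hash() raise TypeError and silences
        # the -3 warning mentioned above.
        __hash__ = None


    p = MutablePoly([1, 2, 3])
    try:
        hash(p)
    except TypeError as e:
        print("TypeError:", e)       # p is not hashable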
|
Run autopep8 over the test files in numpy/lib/test and make fixes
to the result.
Also remove Python5 workaround.
|
The test_fancy in numpy/lib/tests/test_function_base.py failed in
release because a DeprecationWarning was no longer raised; it had
become a warning.
|
The notes were wrong about the order of the index for p.
|
Make this happen and remove test parts dependent on numpy version
< 1.9. Fixes test failures in numpy after 1.8 branch.
|
Run the 2to3 ws_comma fixer on *.py files. Some lines are now too long
and will need to be broken at some point. OTOH, some lines were already
too long and need to be broken at some point. Now seems as good a time
as any to do this with open PRs at a minimum.
|
Now is as good a time as any with open PRs at a low.