| Commit message | Author | Age | Files | Lines |
BUG: fix nanmedian on arrays containing inf
There are two issues:
The first is that a masked divide involving an infinite value produces a
masked value, which means np.ma.mean cannot be used to compute the
median, even though division of infinity is well defined. This behaviour
seems wrong to me, but it also looks intentional, so changing it is not
appropriate for a bugfix release.
The second issue is that the ordering of a sorted masked array is
undefined for equal values: the sort treats infinity as the largest
floating point value, which is not correct with respect to sorting,
where nan is considered larger. This is fixed by changing the
minimum_fill_value to nan for float data in the masked sorts.
Closes gh-5138
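A minimal sketch of the fixed behaviour: with nan ignored, inf must sort as the largest value so the median comes out right.

```python
import numpy as np

a = np.array([np.inf, np.nan, 1.0, 2.0])
# nan is ignored; inf sorts as the largest value, so the
# median of [1.0, 2.0, inf] is 2.0
m = np.nanmedian(a)
```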
* The previously expected behavior was that the gradient is computed using
central differences in the interior and first differences at the boundaries.
* gradient was updated in v1.9.0 so that second-order accurate calculations are
done at the boundaries, but this breaks expected behavior of old code, so an
`edge_order` keyword (default: 1) is added to specify whether first- or
second-order calculations should be used at the boundaries.
* Since the second argument is `*varargs`, `**kwargs` is used to maintain
compatibility with old code and with Python 2.6 & 2.7; any kwargs other than
`edge_order` raise an error listing the offending kwargs.
* Tests and documentation updated to reflect this change.
* Added `.. versionadded:: 1.9.1` to the documentation of the new optional
`edge_order` kwarg, and specified the supported `edge_order` values.
* Clarified documentation for `varargs`.
* Made indentation in the docstring consistent with other docstrings.
* Corrected examples.
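The difference between the two boundary schemes can be sketched on a quadratic, where the second-order boundary formula is exact:

```python
import numpy as np

x = np.arange(5.0)
y = x ** 2
g1 = np.gradient(y, edge_order=1)  # first differences at the boundaries
g2 = np.gradient(y, edge_order=2)  # second-order accurate at the boundaries
# g2 reproduces the exact derivative 2*x everywhere; g1 is off at the edges
```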
Fix npz header incompatibility
BUG: Make numpy import when run with Python flag '-OO'
This consists of checking for a docstring equal to None and skipping two
tests that require docstrings.
Closes #5148.
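The check described above can be sketched as follows (the helper name is made up, not from the commit):

```python
def has_docstring(obj):
    # Under `python -OO`, docstrings are stripped and __doc__ is None,
    # so code that manipulates docstrings must guard against that.
    return getattr(obj, "__doc__", None) is not None

def documented():
    """I have a docstring."""

undocumented = lambda: None  # lambdas have no docstring
```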
ENH: add subok flag to stride_tricks (and thus broadcast_arrays)
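A small sketch of what the new flag enables: with `subok=True`, `broadcast_arrays` preserves ndarray subclasses instead of returning plain ndarrays (the subclass here is a trivial example).

```python
import numpy as np

class MyArray(np.ndarray):
    pass

a = np.arange(3).view(MyArray)
b = np.ones((2, 3))
# subok=True keeps the MyArray subclass in the broadcast result
x, y = np.broadcast_arrays(a, b, subok=True)
```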
Closes #5096. Casts integer arrays to np.double, to prevent
integer overflow. Object arrays are left unchanged, to allow
use of arbitrary precision objects.
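The casting described above can be seen with np.gradient (which applies it; shown here as an illustration, assuming the excerpt refers to that function): integer input is promoted so that differences of adjacent values cannot overflow.

```python
import numpy as np

# adjacent int8 differences could overflow in int8 arithmetic;
# the result is computed and returned as double instead
g = np.gradient(np.array([1, 2, 4], dtype=np.int8))
```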
The call to `empty_like` was trying to use the `chararray` subclass,
which doesn't support the `np.intp` dtype.
BUG: Fix np.insert for inserting a single item into a structured array
Note that there are some object array special cases because of allowing
multiple inserts. `np.array(..., dtype=object)` is not always clear.
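The fixed case can be sketched like this (field names are illustrative): inserting one record into a structured array now works.

```python
import numpy as np

a = np.array([(1, 2.0)], dtype=[('x', int), ('y', float)])
# insert a single structured item at position 0
item = np.array((9, 9.0), dtype=a.dtype)
b = np.insert(a, 0, item)
```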
BUG: fix genfromtxt check of converters when using usecols
Fixes an issue reported by Adrian Altenhoff where user-supplied
converters in genfromtxt were not tested with the right first_values
when usecols was also specified.
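A sketch of the fixed behaviour (the data and converter are made up): converter keys refer to the original file column indices, even when usecols selects a subset.

```python
import numpy as np
from io import StringIO

def scaled(field):
    # genfromtxt may pass bytes or str depending on the encoding setting
    if isinstance(field, bytes):
        field = field.decode()
    return float(field) * 10

data = StringIO("name,count,score\nalice,1,2.5\nbob,3,4.5")
# usecols selects file columns 1 and 2; the converter is keyed by the
# original column index (1), not the position within usecols
arr = np.genfromtxt(data, delimiter=",", skip_header=1,
                    usecols=(1, 2), converters={1: scaled})
```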
Use tempdir for large file
Nobody knows if it supports sparse files, so just skip it.
NamedTemporaryFile files can't be reopened on Windows.
BUG: don't overwrite input percentile arrays
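A minimal sketch of the fixed behaviour: the array of percentiles passed in must come back unchanged.

```python
import numpy as np

a = np.arange(10.0)
q = np.array([25.0, 50.0, 75.0])
res = np.percentile(a, q)
# q must not be modified by the call
```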
Charris pep8 numpy lib
The possibly controversial part of this is making the nested array
value lists PEP8 compliant, as there is something to be said for
aligning the values for clarity. In the end, it seemed like the easiest
thing to do was to make them PEP8 compliant. The eye can get used to that.
Replaces the current method of zeroing items, from multiplication to
using `np.where`.
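The reason for the switch can be sketched in a few lines: multiplying by a zero mask does not clear nan (since `nan * 0` is nan), while `np.where` really replaces the value.

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])
mask = np.array([False, True, False])

# multiplication keeps the nan: nan * 0 == nan
multiplied = a * ~mask

# np.where genuinely substitutes the zero
zeroed = np.where(mask, 0.0, a)
```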
FIX isfileobj accepts write-mode files under PY3
The following inequality causes wrong counting at the edges and can be
avoided by making the edge array the same type as the input data:

    In [1]: np.around(np.float64(6010.36962890625), 5)
    Out[1]: 6010.3696300000001
    In [2]: np.around(np.float32(6010.36962890625), 5)
    Out[2]: 6010.3701

Closes gh-4799
When `x` has more than one element the condlist `[True, False]`
is being made equivalent to `[[True, False]]`, which is correct.
However, when `x` is zero dimensional the expected condlist is
`[[True], [False]]`: this commit addresses the issue. Besides,
the documentation stated that there could be undefined values
but actually these are 0 by default: using `nan` would be desirable,
but for the moment the docs were corrected. Closes #331.
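The zero-dimensional case and the documented default can be sketched together (the condition and value lists are made up):

```python
import numpy as np

# zero-dimensional input: two scalar conditions select between two values
picked = np.piecewise(np.float64(0.5), [True, False], [10, 20])

# where no condition is true, the output defaults to 0 (not undefined)
default = np.piecewise(np.float64(0.5), [False, False], [10, 20])
```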
np.unique produces wrong results when passed a list of tuples and
no keyword arguments, as it fails to recognize it as a multidim
array, but handles it as a 1D array of objects. The only way around
this seems to be to completely eliminate the fast path for non-array
inputs using `set`.
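With the fast path removed, a list of tuples is converted to a 2-D array and flattened, as a minimal sketch shows:

```python
import numpy as np

# a list of tuples becomes a (3, 2) int array, which unique flattens,
# rather than being mishandled as a 1-D array of objects
u = np.unique([(1, 2), (1, 2), (3, 4)])
```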
The version check was not valid for Python 3, though the whole logic can
be removed with a finally clause.
This requires that the savez tests clean up the NpzFile results, which
still hold an open file descriptor.
ENH: rewrite ma.median to improve poor performance for multiple dimensions
The new format only increases the header length field to 4 bytes. This
allows storing structured arrays with a large number of named columns.
The dtype serialization for these can exceed the 2-byte header length
field required by the 1.0 format.
The generic functions automatically use the 2.0 format if the data to
be stored requires it. To avoid unintentional incompatibilities, a
UserWarning is emitted when this happens.
If the 2.0 format is not required, the more compatible 1.0 format is used.
Closes gh-4690
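The automatic selection can be sketched like this (field names are made up; assumes the 65535-byte header limit of the 1.0 format described above): a dtype with thousands of named fields forces the 2.0 header, visible in the version bytes of the magic string.

```python
import io
import warnings
import numpy as np

# a structured dtype whose serialized description exceeds the 2-byte
# header length limit of the 1.0 format
dt = np.dtype([("field_%05d" % i, np.int8) for i in range(4000)])
arr = np.zeros(1, dtype=dt)

buf = io.BytesIO()
with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # the UserWarning about format 2.0
    np.save(buf, arr)

# bytes 6 and 7 after the b'\x93NUMPY' magic hold the format version
major, minor = buf.getvalue()[6], buf.getvalue()[7]
```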
histogramdd rounds to decimal=6, so the random numbers may not be
outliers if they are below 1. + 1e-6
Also some PEP-8 fixes and test improvements
Implemented a nanpercentile and associated tests
as an extension of np.percentile to complement the
other nanfunctions.
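A minimal sketch of the extension: nan entries are ignored rather than propagated.

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])
# nan is ignored: the 50th percentile of [1.0, 3.0]
p = np.nanpercentile(a, 50)
```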
The class is in numpy/lib/_version.py and can be used to compare
numpy versions when the version goes to double digits.
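The double-digit problem the class solves can be sketched in two comparisons: lexicographic string comparison gets '1.10' wrong, NumpyVersion does not.

```python
from numpy.lib import NumpyVersion

# plain string comparison orders versions lexicographically,
# so '1.10.0' sorts before '1.9.0'
string_result = '1.10.0' > '1.9.0'

# NumpyVersion compares the components numerically
version_result = NumpyVersion('1.10.0') > NumpyVersion('1.9.0')
```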
Implemented a nanmedian and associated tests as an
extension of np.median to complement the other
nanfunctions.
Added negative values to the unit tests.
Cleaned up documentation of nanmedian.
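A minimal sketch, including a negative value as in the tests mentioned above:

```python
import numpy as np

a = np.array([-1.0, np.nan, 1.0, 3.0])
# nan is ignored: the median of [-1.0, 1.0, 3.0]
m = np.nanmedian(a)
```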
This PR adds a new keyword argument to `np.unique` that returns the
number of times each unique item comes up in the array. This allows
replacing a typical numpy construct:

    unq, _ = np.unique(a, return_inverse=True)
    unq_counts = np.bincount(_)

with a single line of code:

    unq, unq_counts = np.unique(a, return_counts=True)

As a plus, it runs faster, because it does not need the extra
operations required to produce `unique_inverse`.
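The one-liner above in action, on a small made-up array:

```python
import numpy as np

a = np.array([1, 1, 2, 3, 3, 3])
# counts line up with the sorted unique values
unq, unq_counts = np.unique(a, return_counts=True)
```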
Resolves #2591. Adds more explicit error handling in line parsing loop.
MAINT (API?): organize npyio.recfromcsv defaults