| Commit message | Author | Age | Files | Lines |
MAINT: refactor packbits/unpackbits
Pushes the GIL release one loop outward. First test for these functions (!).
Incorporates suggestions by @jaimefrio and @charris.
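For context, a minimal round trip through the two refactored functions (the particular bit pattern is just an illustration, not taken from the new test):

    import numpy as np

    bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
    packed = np.packbits(bits)        # eight bits packed into one byte: array([178], dtype=uint8)
    restored = np.unpackbits(packed)  # back to the original bit array
    assert np.array_equal(bits, restored)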
numpy.lib._iotools.StringConverter.upgrade should have a return value
Update to average calculation
closes gh-5202
BUG: copy inherited masks in MaskedArray.__array_finalize__
Previously, operations which created a new masked array from an old
masked array -- e.g., np.empty_like -- would tend to result in the new
and old arrays sharing the same .mask attribute. This leads to
horrible brokenness in which writes to one array affect the other. In
particular this was responsible for part of the brokenness that
@jenshnielsen reported in gh-5184 in which np.gradient on masked
arrays would modify the original array's mask.
This fixes the worst part of the issues addressed in gh-3404, though
there's still an argument that we ought to deprecate the mask-copying
behaviour entirely so that empty_like returns an array with an empty
mask. That can wait until we find someone who cares though.
I also applied a small speedup to np.gradient (avoiding one copy);
previously this inefficiency was masking (heh) some of the problems
with masked arrays, so removing it is both an optimization and makes
it easier to test that things are working now.
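A small sketch of the failure mode described above; before this fix the new array's mask could be the very same object as the old one's, so the write below would also flip the original mask:

    import numpy as np
    import numpy.ma as ma

    a = ma.masked_array([1.0, 2.0, 3.0], mask=[False, True, False])
    b = np.empty_like(a)      # inherits a mask from `a` via __array_finalize__

    b.mask[0] = True          # with a shared mask this used to modify a.mask too;
                              # with the copy in __array_finalize__, `a` is unaffected
    assert not a.mask[0]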
BUG: fix nanmedian on arrays containing inf
There are two issues:
The first is that a masked divide involving an infinite value yields a masked value, even though division of infinity is well defined, which means one can't use np.ma.mean to compute the median.
This behaviour seems wrong to me, but it also looks intentional, so changing it is not appropriate for a bugfix release.
The second issue is that the ordering of the sorted masked array is undefined for equal values; the sort treats infinity as the largest floating point value, which is not correct with respect to sorting, where nan is considered larger. This is fixed by changing the minimum_fill_value to nan for float data in the masked sorts.
Closes gh-5138
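For illustration, a case that exercises the fixed path (values chosen for the example only): infinities are ordered as ordinary extreme values and NaNs are ignored.

    import numpy as np

    a = np.array([np.inf, 1.0, 2.0, np.nan, -1.0])
    print(np.nanmedian(a))    # 1.5 -- the median of [-1, 1, 2, inf], ignoring the NaN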
* Previous expected behavior was that the gradient is computed using central
differences in the interior and first differences at the boundaries.
* gradient was updated in v1.9.0 so that second-order accurate calculations are
done at the boundaries, but this breaks expected behavior with old code, so an
`edge_order` keyword (default: 1) is added to specify whether first- or second-order
calculations at the boundaries should be used (see the sketch after this list).
* Since the second argument is *varargs, in order to maintain compatibility
with old code and compatibility with python 2.6 & 2.7, **kwargs is used, and
all kwargs that are not `edge_order` raise an error, listing the offending
kwargs.
* Tests and documentation updated to reflect this change.
* Added `.. versionadded:: 1.9.1` to the new optional kwarg `edge_order`
documentation, and specified supported `edge_order` kwarg values.
* Clarified documentation for `varargs`.
* Made indentation in docstring consistent with other docstring styles.
* Examples corrected
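A short sketch of the new keyword in use (sample values are illustrative only):

    import numpy as np

    x = np.array([0.0, 1.0, 4.0, 9.0, 16.0])   # f(t) = t**2 sampled at t = 0, 1, 2, 3, 4

    g1 = np.gradient(x)                 # edge_order=1 (default): first differences at the ends
    g2 = np.gradient(x, edge_order=2)   # second-order accurate one-sided differences at the ends

    # g1 and g2 agree in the interior; only the two boundary values differ.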
Fix npz header incompatibility
BUG: Make numpy import when run with Python flag '-OO'
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This consists of checking for a docstring equal to None and skipping two
tests that require docstrings.
Closes #5148.
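The guard amounts to something like the following (a sketch, not the actual numpy code; `add_doc` is a hypothetical helper):

    def add_doc(func, extra):
        # Under `python -OO` docstrings are stripped and __doc__ is None,
        # so skip instead of trying to concatenate onto None.
        if func.__doc__ is None:
            return
        func.__doc__ = func.__doc__ + extra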
ENH: add subok flag to stride_tricks (and thus broadcast_arrays)
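A possible use of the new flag, assuming a masked-array input (illustrative only):

    import numpy as np
    import numpy.ma as ma

    a = ma.masked_array([1, 2, 3], mask=[False, True, False])
    b = np.zeros((2, 3))

    # With subok=True the MaskedArray subclass survives broadcasting;
    # with the default subok=False plain ndarrays are returned.
    x, y = np.broadcast_arrays(a, b, subok=True)
    print(type(x).__name__)   # MaskedArray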
Closes #5096. Casts integer arrays to np.double, to prevent
integer overflow. Object arrays are left unchanged, to allow
use of arbitrary precision objects.
The call to `empty_like` was trying to use the `chararray` subclass,
which doesn't support the `np.intp` dtype.
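A sketch of the kind of workaround this implies (a hypothetical stand-alone example, not the actual patched call): request a plain ndarray so the subclass' dtype restriction does not apply.

    import numpy as np

    a = np.char.array(['abc', 'de', 'f'])

    # chararray only supports string dtypes, so ask for a base-class array
    # when an integer scratch buffer is needed.
    idx = np.empty_like(a, dtype=np.intp, subok=False)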
BUG: Fix np.insert for inserting a single item into a structured array
Note that there are some object array special cases because multiple inserts
are allowed; what `np.array(..., dtype=object)` produces is not always clear.
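For example (values are illustrative), inserting one record into a structured array now behaves as expected:

    import numpy as np

    a = np.array([(1, 2.0), (3, 4.0)], dtype=[('x', 'i4'), ('y', 'f8')])
    b = np.insert(a, 1, (9, 9.0))   # insert a single structured item at index 1
    print(b['x'])                   # [1 9 3]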
BUG: fix genfromtxt check of converters when using usecols
Fixes an issue reported by Adrian Altenhoff where user-supplied
converters in genfromtxt were not tested against the right first_values
when usecols was also specified.
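An example of the affected pattern (the data and converter are made up for illustration): the converter is keyed by the original column index, and with the fix it is validated against sample values from that same column.

    import numpy as np
    from io import StringIO

    data = StringIO("1, 2.5, 30\n4, 5.5, 60\n")
    out = np.genfromtxt(data, delimiter=",", usecols=(0, 2),
                        converters={2: lambda s: float(s) / 10.0})
    print(out)   # [[1. 3.]
                 #  [4. 6.]]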
Use tempdir for large file
Nobody knows whether the filesystem supports sparse files, so just skip it.
NamedTemporaryFile files can't be reopened on Windows.
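The general pattern, sketched with the standard tempfile module: write into a temporary directory and reopen by path, which also works on Windows where a NamedTemporaryFile handle cannot be reopened by name while still open.

    import os
    import shutil
    import tempfile

    tmpdir = tempfile.mkdtemp()
    try:
        path = os.path.join(tmpdir, "large.npy")
        with open(path, "wb") as f:
            f.write(b"\0" * 1024)
        with open(path, "rb") as f:   # reopening by path works everywhere
            data = f.read()
    finally:
        shutil.rmtree(tmpdir)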
BUG: don't overwrite input percentile arrays
Charris pep8 numpy lib
The possibly controversial part of this is making the nested array
value lists PEP8 compliant, as there is something to be said for aligning
the values for clarity. In the end, it seemed like the easiest thing
to do was to make them PEP8 compliant. The eye can get used to that.
Replaces the current method of zeroing items, switching from
multiplication to `np.where`.
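The difference matters when the discarded entries hold NaN or inf (a made-up illustration):

    import numpy as np

    x = np.array([1.0, np.nan, 3.0])
    keep = np.array([True, False, True])

    y_mul = x * keep                 # nan * 0 is still nan
    y_where = np.where(keep, x, 0)   # the dropped slot is a clean 0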
FIX isfileobj accepts write-mode files under PY3
The following inequality causes wrong counting at the edges; it can be avoided by
making the edge array the same type as the input data.
In [1]: np.around(np.float64(6010.36962890625), 5)
Out[1]: 6010.3696300000001
In [2]: np.around(np.float32(6010.36962890625), 5)
Out[2]: 6010.3701
Closes gh-4799
When `x` has more than one element the condlist `[True, False]`
is made equivalent to `[[True, False]]`, which is correct.
However, when `x` is zero-dimensional the expected condlist is
`[[True], [False]]`; this commit addresses the issue. In addition,
the documentation stated that there could be undefined values,
but these are actually 0 by default: using `nan` would be desirable,
but for the moment only the docs were corrected. Closes #331.
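For example, with a zero-dimensional input the scalar conditions are now treated as one condition per function (illustrative values):

    import numpy as np

    x = np.float64(3.0)
    y = np.piecewise(x, [x < 0, x >= 0], [-1.0, 1.0])
    print(y)   # 1.0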
np.unique produces wrong results when passed a list of tuples and
no keyword arguments, as it fails to recognize it as a multidimensional
array and instead handles it as a 1-D array of objects. The only way around
this seems to be to completely eliminate the fast path for non-array
inputs using `set`.
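An illustration of the corrected coercion: the list of tuples is now treated as a 2-D numeric array rather than a 1-D object array.

    import numpy as np

    pairs = [(1, 2), (1, 2), (3, 4)]
    print(np.unique(pairs))   # [1 2 3 4] -- unique of the flattened (3, 2) array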
The version check was not valid for Python 3, and the whole logic can
be removed with a finally clause.
This requires the savez tests to clean up the NpzFile results,
which still hold an open file descriptor.
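The cleanup pattern the tests now follow looks roughly like this (sketch):

    import numpy as np

    np.savez("tmp_arrays.npz", a=np.arange(3))
    npz = np.load("tmp_arrays.npz")
    try:
        a = npz["a"]
    finally:
        npz.close()   # NpzFile keeps the underlying file open until closed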
ENH: rewrite ma.median to improve poor performance for multiple dimensions
The new format only increases the header length field to 4 bytes, which
allows storing structured arrays with a large number of named columns.
The dtype serialization for these can exceed the 2-byte header length
field required by the 1.0 format.
The generic functions automatically use the 2.0 format if the data to be
stored requires it. To avoid unintentional incompatibilities, a
UserWarning is emitted when this happens.
If the 2.0 format is not required, the more compatible 1.0 format is used.
Closes gh-4690
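Writing the 2.0 format explicitly is also possible through numpy.lib.format (the wide structured dtype below is only an illustration):

    import numpy as np
    from numpy.lib import format as npformat

    a = np.zeros(1, dtype=[('f%d' % i, 'f8') for i in range(500)])

    with open("big_header.npy", "wb") as f:
        # version=(2, 0) selects the 4-byte header length field;
        # np.save picks the version automatically and warns on fallback to 2.0.
        npformat.write_array(f, a, version=(2, 0))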
histogramdd rounds to decimal=6, so the random numbers may not be
outliers if they are below 1. + 1e6