Fix npz header incompatibility
|
BUG: Make numpy import when run with Python flag '-OO'
|
This consists of checking for a docstring equal to None and skipping two
tests that require docstrings.
Closes #5148.
|
ENH: add subok flag to stride_tricks (and thus broadcast_arrays)
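A minimal usage sketch of the new flag (array values are made up): with subok=True, broadcast_arrays preserves array subclasses such as masked arrays instead of returning plain ndarrays.
    import numpy as np

    x = np.arange(3)
    m = np.ma.masked_array([0, 1, 2], mask=[False, True, False])
    bx, bm = np.broadcast_arrays(x, m, subok=True)
    # bm is still an np.ma.MaskedArray rather than a plain ndarray
    print(type(bm))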
|
Closes #5096. Casts integer arrays to np.double, to prevent
integer overflow. Object arrays are left unchanged, to allow
use of arbitrary precision objects.
|
The call to `empty_like` was trying to use the `chararray` subclass,
which doesn't support the `np.intp` dtype.
|
BUG: Fix np.insert for inserting a single item into a structured array
|
Note that there are some object-array special cases because multiple inserts
are allowed, and the behaviour of `np.array(..., dtype=object)` is not always clear.
|
BUG: fix genfromtxt check of converters when using usecols
|
Fixes an issue reported by Adrian Altenhoff where user-supplied converters
in genfromtxt were not tested with the right first_values when usecols
was also specified.
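A minimal sketch of the scenario described above (the sample data and converter are made up): a converter keyed by an original column index should now be checked against the correct column even when usecols is given.
    import numpy as np
    from io import StringIO

    txt = StringIO("1,10,100\n2,20,200")
    # Converter keys refer to the original column indices, here column 2,
    # while usecols selects columns 0 and 2.
    out = np.genfromtxt(txt, delimiter=",", usecols=(0, 2),
                        converters={2: lambda s: float(s) / 100.0})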
|
Use tempdir for large file
|
Nobody knows whether the filesystem supports sparse files, so just skip it.
|
Files created with NamedTemporaryFile can't be reopened on Windows.
|
BUG: don't overwrite input percentile arrays
|
Charris pep8 numpy lib
|
The possibly controversial part of this is making the nested array
value lists PEP8 compliant, as there is something to be said for aligning
the values for clarity. In the end, it seemed like the easiest thing
to do was to make them PEP8 compliant. The eye can get used to that.
|
Replaces the current method of zeroing items, switching from multiplication
to `np.where`.
|
FIX isfileobj accepts write-mode files under PY3
|
The following inequality causes wrong counting at the edges and can be
avoided by making the edge array the same type as the input data:
    In [1]: np.around(np.float64(6010.36962890625), 5)
    Out[1]: 6010.3696300000001
    In [2]: np.around(np.float32(6010.36962890625), 5)
    Out[2]: 6010.3701
Closes gh-4799
|
When `x` has more than one element, the condlist `[True, False]`
is made equivalent to `[[True, False]]`, which is correct.
However, when `x` is zero-dimensional the expected condlist is
`[[True], [False]]`; this commit addresses the issue. In addition,
the documentation stated that there could be undefined values,
but these are actually 0 by default: using `nan` would be desirable,
but for the moment the docs were corrected. Closes #331.
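For reference, a small hedged example of the condlist semantics discussed above (the input values are made up):
    import numpy as np

    x = np.linspace(-2, 2, 5)
    # Two conditions, one value each: -1 where x < 0, +1 where x >= 0.
    y = np.piecewise(x, [x < 0, x >= 0], [-1, 1])
    # y -> array([-1., -1.,  1.,  1.,  1.])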
|
np.unique produces wrong results when passed a list of tuples and
no keyword arguments: it fails to recognize it as a multidimensional
array and instead handles it as a 1-D array of objects. The only way
around this seems to be to completely eliminate the `set`-based fast
path for non-array inputs.
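A short illustration of the behaviour described above (values are made up): once the input is recognized as a 2-D array, np.unique returns the sorted unique scalar elements rather than unique tuples of objects.
    import numpy as np

    # Treated as a 2-D array of shape (3, 2), not a 1-D array of tuple objects.
    np.unique([(1, 2), (1, 2), (2, 3)])
    # -> array([1, 2, 3])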
|
The version check was not valid for Python 3, though the whole logic can
be removed with a finally clause.
This requires that the savez tests clean up the NpzFile results,
which still hold an open file descriptor.
|
ENH: rewrite ma.median to improve poor performance for multiple dimensions
|
The new format only increases the header length field to 4 bytes, which
allows storing structured arrays with a large number of named columns.
The dtype serialization for these can exceed the 2-byte header length
field required by the 1.0 format.
The generic functions automatically use the 2.0 format if the data to be
stored requires it. To avoid unintentional incompatibilities, a
UserWarning is emitted when this happens.
If the format is not required, the more compatible 1.0 format is used.
Closes gh-4690
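A hedged sketch of requesting the 2.0 header explicitly through np.lib.format (the array and buffer here are illustrative; np.save/np.savez pick the version automatically as described above):
    import io
    import numpy as np

    buf = io.BytesIO()
    # Force the 2.0 header, which uses a 4-byte header length field.
    np.lib.format.write_array(buf, np.arange(10), version=(2, 0))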
|
histogramdd rounds by decimal=6, so the random numbers may not be
outliers if they are below 1. + 1e6
|
Also some PEP-8 fixes and test improvements
|
Implemented a nanpercentile and associated tests
as an extension of np.percentile to complement the
other nanfunctions.
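A minimal usage sketch (array values are made up), ignoring NaNs while computing the percentile:
    import numpy as np

    a = np.array([[10.0, np.nan, 4.0],
                  [3.0, 2.0, 1.0]])
    np.nanpercentile(a, 50)          # -> 3.0 (median of the non-NaN values)
    np.nanpercentile(a, 50, axis=0)  # -> array([6.5, 2. , 2.5])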
|
The class is in numpy/lib/_version.py and can be used to compare
numpy versions when the version goes to double digits.
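A minimal usage sketch, assuming the class is imported from numpy.lib as documented:
    import numpy as np
    from numpy.lib import NumpyVersion

    # Plain string comparisons such as '1.9.0' < '1.10.0' fail lexically;
    # NumpyVersion compares the version components numerically.
    if NumpyVersion(np.__version__) < '1.10.0':
        print("running on a pre-1.10 NumPy")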
|
Implemented a nanmedian and associated tests as an
extension of np.median to complement the other
nanfunctions.
Added negative values to the unit tests.
Cleaned up documentation of nanmedian.
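A minimal usage sketch (array values are made up):
    import numpy as np

    a = np.array([[10.0, 7.0, 4.0],
                  [3.0, 2.0, np.nan]])
    np.nanmedian(a)          # -> 4.0
    np.nanmedian(a, axis=0)  # -> array([6.5, 4.5, 4. ])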
|
This PR adds a new keyword argument to `np.unique` that returns the
number of times each unique item comes up in the array. This allows
replacing a typical numpy construct:
    unq, _ = np.unique(a, return_inverse=True)
    unq_counts = np.bincount(_)
with a single line of code:
    unq, unq_counts = np.unique(a, return_counts=True)
As a plus, it runs faster, because it does not need the extra
operations required to produce `unique_inverse`.
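A concrete example of the one-liner above (input values are made up):
    import numpy as np

    a = np.array([1, 1, 2, 3, 3, 3])
    unq, unq_counts = np.unique(a, return_counts=True)
    # unq        -> array([1, 2, 3])
    # unq_counts -> array([2, 1, 3])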
|
Resolves #2591. Adds more explicit error handling in the line-parsing loop.
|
MAINT (API?): organize npyio.recfromcsv defaults
|
BUG: fix some errors raised when minlength is incorrect in np.bincount
|
ENH: speed-up of triangular matrix functions
|
* `np.tri` now produces fewer intermediate arrays. Runs about 40% faster for
general dtypes, up to 3x faster for boolean arrays.
* `np.tril` now does smarter type conversions (thanks Julian!), and together
with the improvements in `np.tri` now runs about 30% faster. `np.triu`
runs almost 2x faster than before, but still runs 20% slower than
`np.tril`, which is an improvement over the 50% difference before.
* `np.triu_indices` and `np.tril_indices` no longer call `np.mask_indices`;
instead they call `np.where` directly on a boolean array created with
`np.tri`. They now run roughly 2x faster.
* Removed the constraint that the array be square in calls to
`np.triu_indices`, `np.tril_indices`, `np.triu_indices_from` and
`np.tril_indices_from` (see the sketch below).
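A hedged sketch of the relaxed square constraint (shapes and values are made up): the index helpers can now describe rectangular arrays as well.
    import numpy as np

    a = np.arange(12).reshape(3, 4)
    iu = np.triu_indices_from(a)   # no longer requires a square array
    upper = a[iu]                  # upper-triangular elements of the 3x4 array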
|