ENH: apply_along_axis accepts named arguments
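A minimal sketch of the new behavior, assuming extra positional and keyword arguments are simply forwarded to the applied function (the example data is illustrative):

```python
import numpy as np

# `reverse=True` is a named argument forwarded to `sorted`,
# which is applied independently to each row (axis 1).
a = np.array([[3, 1, 2],
              [6, 5, 4]])
out = np.apply_along_axis(sorted, 1, a, reverse=True)
```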
When `x` has more than one element, the condlist `[True, False]`
is made equivalent to `[[True, False]]`, which is correct.
However, when `x` is zero-dimensional the expected condlist is
`[[True], [False]]`: this commit addresses that case. In addition,
the documentation stated that there could be undefined values,
but in fact these default to 0: using `nan` would be desirable,
but for the moment the docs were corrected to match. Closes #331.
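A short sketch of the fixed zero-dimensional case, assuming current `np.piecewise` semantics (the values are illustrative):

```python
import numpy as np

# For scalar (0-d) input, a flat condlist selects one value per
# condition; before the fix this case was mishandled.
x = 0.5
out = np.piecewise(x, [x < 0, x >= 0], [-1, 1])
```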
np.unique produces wrong results when passed a list of tuples and
no keyword arguments, as it fails to recognize it as a multidimensional
array and instead handles it as a 1-D array of objects. The only way
around this seems to be to completely eliminate the `set`-based fast
path for non-array inputs.
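An illustrative example of the corrected behavior: with the fast path removed, the list of tuples goes through `np.asarray` first, so the unique values of the flattened array are returned.

```python
import numpy as np

# A list of tuples becomes a (3, 2) integer array, not a 1-D
# array of opaque tuple objects.
vals = np.unique([(1, 2), (1, 2), (3, 4)])
```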
The version check was not valid for Python 3, though the whole logic
can be removed with a `finally` clause.
This requires that the savez tests clean up the NpzFile results,
which otherwise still hold an open file descriptor.
ENH: rewrite ma.median to improve poor performance for multiple dimensions
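For context, a minimal `ma.median` example (illustrative, not from the commit): masked entries are excluded from the median.

```python
import numpy as np

# The masked 99 is ignored; the median is computed over [1, 2].
a = np.ma.masked_array([1, 2, 99], mask=[False, False, True])
med = np.ma.median(a)
```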
ENH: add storage format 2.0 with 4 byte header size
The new format only increases the header length field to 4 bytes,
which allows storing structured arrays with a large number of named
columns. The dtype serialization for these can exceed the 2-byte
header length field required by the 1.0 format.
The generic functions automatically use the 2.0 format if the data
to be stored requires it. To avoid unintentional incompatibilities,
a UserWarning is emitted when this happens.
If the 2.0 format is not required, the more compatible 1.0 format is
used.
Closes gh-4690
[ci skip]
histogramdd rounds by decimal=6, so the random numbers may not be
outliers if they are below 1. + 1e-6
BUG: nanpercentile 0-d with output given.
Also some PEP-8 fixes and test improvements
STY: Use `.astype`'s `copy` kwarg in `np.tri`
Replace an explicit type check with setting `copy=False` in call to `astype`.
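A sketch of the pattern this commit uses (example values are illustrative): `astype` with `copy=False` returns the input unchanged when no conversion is needed, which replaces the explicit type check.

```python
import numpy as np

a = np.arange(3)
b = a.astype(a.dtype, copy=False)     # no conversion needed -> same object
c = a.astype(np.float64, copy=False)  # conversion needed -> new array
```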
newline and delimiter can be strings, not only single characters
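For illustration (values assumed), a multi-character delimiter and newline with `savetxt`:

```python
import io
import numpy as np

buf = io.StringIO()
# delimiter and newline are multi-character strings, not single chars.
np.savetxt(buf, [[1, 2], [3, 4]], fmt='%d', delimiter=', ', newline='\r\n')
text = buf.getvalue()
```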
DOC: Docstring fix for `savetxt` (minor change)
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | | |
Implemented a nanpercentile and associated tests
as an extension of np.percentile to complement the
other nanfunctions.
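A minimal usage example for reference (data is illustrative):

```python
import numpy as np

# Unlike np.percentile, NaNs are ignored rather than propagated.
a = np.array([1.0, np.nan, 3.0])
p = np.nanpercentile(a, 50)
```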
The class is in numpy/lib/_version.py and can be used to compare
numpy versions when the version goes to double digits.
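A short illustration of why the class is needed (version strings assumed): plain string comparison sorts double-digit components incorrectly, while NumpyVersion compares them numerically.

```python
from numpy.lib import NumpyVersion

string_order = '1.10.0' < '1.9.0'                 # lexicographic: wrongly True
version_order = NumpyVersion('1.10.0') > '1.9.0'  # numeric: correctly True
```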
ENH: added functionality nanmedian to numpy
Implemented a nanmedian and associated tests as an
extension of np.median to complement the other
nanfunctions.
Added negative values to the unit tests.
Cleaned up documentation of nanmedian.
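A minimal usage example for reference (data is illustrative):

```python
import numpy as np

a = np.array([[10.0, np.nan, 4.0],
              [3.0, 2.0, 1.0]])
overall = np.nanmedian(a)          # NaN ignored in the flattened array
per_row = np.nanmedian(a, axis=1)  # NaN ignored per row
```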
Makes the identity check for `a = np.array([np.nan], dtype=object)`
(`a == a`, etc.) give a DeprecationWarning/FutureWarning instead of
just changing the behavior.
Also fixes some smaller things.
This means that, for example, broadcasting errors get raised.
The array_equiv function is changed to explicitly test
whether broadcasting is possible. It may be nice to do this
test differently, but I am not sure that is possible.
Create a FutureWarning for comparisons to None, which
should eventually result in real elementwise (object) comparisons.
Slightly adapted a wrong test.
Poly changes: some changes in the poly code were necessary;
one is probably a bug fix, the other needs to be
thought over, since the len check may not be perfect and
makes it more likely to raise an error.
Closes gh-3759 and gh-1608
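A small illustration of the array_equiv part (example shapes assumed): broadcastability is now tested explicitly.

```python
import numpy as np

same = np.array_equiv([1, 2], [[1, 2], [1, 2]])  # broadcastable and equal
diff = np.array_equiv([1, 2], [1, 2, 3])         # shapes cannot broadcast
```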
Remove misleading note about equivalency between column_stack and
np.vstack(tup).T.
Fixes #3488
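To see why the note was misleading (example values assumed): column_stack turns 1-D inputs into columns but leaves 2-D inputs intact, so the equivalence with `np.vstack(tup).T` only holds for all-1-D inputs.

```python
import numpy as np

cols = np.column_stack(([1, 2], [3, 4]))             # 1-D inputs as columns
mixed = np.column_stack(([[1, 2], [3, 4]], [5, 6]))  # 2-D input kept as-is
```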
This PR adds a new keyword argument to `np.unique` that returns the
number of times each unique item comes up in the array. This allows
replacing a typical numpy construct:

    unq, _ = np.unique(a, return_inverse=True)
    unq_counts = np.bincount(_)

with a single line of code:

    unq, unq_counts = np.unique(a, return_counts=True)

As a plus, it runs faster, because it does not need the extra
operations required to produce `unique_inverse`.
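A concrete example of the new keyword (data is illustrative):

```python
import numpy as np

a = [1, 1, 2, 3, 3, 3]
# Each count corresponds to the unique value at the same position.
unq, unq_counts = np.unique(a, return_counts=True)
```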
Resolves #2591. Adds more explicit error handling in line parsing loop.
Speeds calculation up by ~3x for 100x100 matrices, and by ~45x for
1000x1000
MAINT (API?): organize npyio.recfromcsv defaults
Removed two irrelevant comments about code history.
P.S. my first try with Github's online editor.
Added a note to recfromcsv about the `dtype` keyword,
as suggested by @hpaulj. Also added a note to the release notes,
about the change in the `update` keyword, as suggested by @charris.
Organizes the default kwargs in recfromcsv. Changes two undocumented
kwarg behaviors:
* previously, if a user set `names=None`, it was ignored and replaced
with `names=True`
* the `dtype` kwarg was ignored. If `update` was given, it was used as
`dtype`, and if not, None was used. We can retain the `update` behavior
by using `kwargs.setdefault("dtype", kwargs.get('update', None))`.
Closes #311.
reduces buffer copy and comparison overhead for boolean outer product
BUG: fix some errors raised when minlength is incorrect in np.bincount
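A sketch of the corrected behavior; the exact exception type is an assumption based on current numpy, so the example accepts either candidate:

```python
import numpy as np

# A negative minlength is rejected with a clean Python exception
# instead of an obscure failure.
try:
    np.bincount([1, 2, 2], minlength=-1)
    raised = False
except (ValueError, TypeError):
    raised = True
```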
ENH: speed-up of triangular matrix functions
* `np.tri` now produces fewer intermediate arrays. It runs about 40% faster
for general dtypes, and up to 3x faster for boolean arrays.
* `np.tril` now does smarter type conversions (thanks Julian!), and together
with the improvements in `np.tri` now runs about 30% faster. `np.triu`
runs almost 2x faster than before, but still runs 20% slower than
`np.tril`, which is an improvement over the 50% difference before.
* `np.triu_indices` and `np.tril_indices` no longer call `np.mask_indices`;
instead they call `np.where` directly on a boolean array created with
`np.tri`. They now run roughly 2x faster.
* Removed the constraint that the array be square in calls to
`np.triu_indices`, `np.tril_indices`, `np.triu_indices_from` and
`np.tril_indices_from`.
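With the squareness constraint removed, non-square shapes can be indexed via the `m` keyword (example shape assumed):

```python
import numpy as np

# Lower-triangle indices of a 2x3 (non-square) array.
rows, cols = np.tril_indices(2, m=3)
```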
ENH: Speed improvements and deprecations for np.select
The idea for this (and some of the code) originally comes from
Graeme B Bell (gh-3537).
Choose is not as fast and is pretty limited, so an iterative
copyto is used instead.
Closes gh-3259, gh-3537, gh-3551, and gh-3254
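A minimal usage example of np.select for reference (data is illustrative):

```python
import numpy as np

x = np.arange(6)
# The first matching condition wins; unmatched positions get `default`.
out = np.select([x < 3, x > 3], [x, x**2], default=0)
```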
Matlab uses `conv` for both convolution and polynomial multiplication. Clarifying that numpy has functions for each.
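To illustrate the distinction the docstring now draws (coefficients assumed): numpy provides a separate function for each operation, even though the arithmetic coincides.

```python
import numpy as np

sig = np.convolve([1, 2], [1, 1])   # discrete convolution
poly = np.polymul([1, 2], [1, 1])   # (x + 2)(x + 1) = x^2 + 3x + 2
```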