| Commit message (Collapse) | Author | Age | Files | Lines |
Some of those problems look like potential coding errors. In those
cases a FIXME comment was added and the offending code, usually an
unused variable, was commented out.
The possibly controversial part of this is making the nested array
value lists PEP8 compliant, as there is something to be said for
aligning the values for clarity. In the end, it seemed easiest to
make them PEP8 compliant; the eye can get used to that.
Replaces the current method of zeroing items, from multiplication to
using `np.where`.
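A minimal sketch of the change described above (the array and mask here are illustrative, not from the commit):

```python
import numpy as np

a = np.array([1.0, -2.0, 3.0, -4.0])
mask = a < 0  # items to zero out

# Old approach: zero items by multiplying with the negated mask.
zeroed_mul = a * ~mask

# New approach: select with np.where instead of multiplying.
zeroed = np.where(mask, 0.0, a)
zeroed.tolist()  # [1.0, 0.0, 3.0, 0.0]
```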
MAINT: start 1.10-devel.
There has been a warning of this change since numpy 1.7. numpy 1.10
is a good time to do it. The nanvar function needed a fix after the
change, and the tests and documentation are updated.
FIX isfileobj accepts write-mode files under PY3
The following inequality causes wrong counting at the edges and can be
avoided by making the edge array the same type as the input data:

    In [1]: np.around(np.float64(6010.36962890625), 5)
    Out[1]: 6010.3696300000001
    In [2]: np.around(np.float32(6010.36962890625), 5)
    Out[2]: 6010.3701

Closes gh-4799
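The rounding discrepancy above can be reproduced directly; a small sketch of the behaviour (not code from the commit itself):

```python
import numpy as np

x = 6010.36962890625
r64 = np.around(np.float64(x), 5)
r32 = np.around(np.float32(x), 5)

# float32 rounding lands noticeably further from x than float64
# rounding does, which is why the bin edges must use the same
# dtype as the input data.
err64 = abs(float(r64) - x)
err32 = abs(float(r32) - x)
```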
closes gh-312
ENH: apply_along_axis accepts named arguments
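A minimal sketch of passing a named argument through `apply_along_axis` (the `scale_row` function and `factor` name are illustrative):

```python
import numpy as np

def scale_row(row, factor=1):
    # func1d receives each 1-D slice plus any extra args/kwargs
    return row * factor

a = np.ones((2, 3))
out = np.apply_along_axis(scale_row, 1, a, factor=2)
out.tolist()  # [[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]
```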
When `x` has more than one element, the condlist `[True, False]`
is made equivalent to `[[True, False]]`, which is correct.
However, when `x` is zero dimensional the expected condlist is
`[[True], [False]]`: this commit addresses the issue. Additionally,
the documentation stated that there could be undefined values,
but these are actually 0 by default: using `nan` would be preferable,
but for the moment the docs were corrected. Closes #331.
np.unique produces wrong results when passed a list of tuples and
no keyword arguments: it fails to recognize it as a multidimensional
array and instead handles it as a 1D array of objects. The only way
around this seems to be to completely eliminate the fast path for
non-array inputs using `set`.
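With the fast path removed, a list of tuples goes through normal array conversion; a minimal sketch:

```python
import numpy as np

# A list of tuples converts to a 2-D array, so unique flattens it
# instead of treating each tuple as a single object.
vals = np.unique([(1, 2), (1, 2), (2, 3)])
vals.tolist()  # [1, 2, 3]
```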
The version check was not valid for Python 3, though the whole logic
can be removed with a finally clause.
This requires that the savez tests clean up the NpzFile results,
which still hold an open file descriptor.
ENH: rewrite ma.median to improve poor performance for multiple dimensions
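For reference, `ma.median` computes the median over unmasked values only; a small usage sketch (the data is illustrative):

```python
import numpy as np

data = np.ma.masked_array([1.0, 2.0, 3.0, 99.0], mask=[0, 0, 0, 1])
# The masked 99.0 is ignored, so the median is taken over [1, 2, 3].
med = np.ma.median(data)
float(med)  # 2.0
```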
ENH: add storage format 2.0 with 4 byte header size
The new format only increases the header length field to 4 bytes,
which allows storing structured arrays with a large number of named
columns. The dtype serialization for these can exceed the 2 byte
header length field required by the 1.0 format.
The generic functions automatically use the 2.0 format if the data
to be stored requires it. To avoid unintentional incompatibilities a
UserWarning is emitted when this happens.
If the 2.0 format is not required, the more compatible 1.0 format is used.
Closes gh-4690
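A sketch of requesting the 2.0 format explicitly via `numpy.lib.format` (a small array is used here only for illustration; in practice the 2.0 format matters when the dtype header exceeds the 1.0 limit):

```python
import io
import numpy as np
from numpy.lib import format as npformat

arr = np.arange(6).reshape(2, 3)
buf = io.BytesIO()
# Force the 2.0 format (4-byte header length field).
npformat.write_array(buf, arr, version=(2, 0))
buf.seek(0)
out = npformat.read_array(buf)
np.array_equal(arr, out)  # True
```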
[ci skip]
histogramdd rounds with decimal=6, so the random numbers may not be
outliers if they are below 1. + 1e6
BUG: nanpercentile 0-d with output given.
Also some PEP-8 fixes and test improvements
STY: Use `.astype`'s `copy` kwarg in `np.tri`
Replace an explicit type check with setting `copy=False` in call to `astype`.
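The `copy` kwarg makes `astype` return the input itself when no conversion is needed; a small sketch:

```python
import numpy as np

a = np.arange(3)
# No conversion needed: astype returns the same object.
same = a.astype(a.dtype, copy=False)
# Conversion needed: astype returns a new array regardless.
conv = a.astype(np.float64, copy=False)
(same is a, conv is a)  # (True, False)
```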
newline and delimiter can be strings not only single characters
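For example, a multi-character delimiter works; a sketch using an in-memory buffer:

```python
import io
import numpy as np

buf = io.StringIO()
# delimiter is a two-character string, not a single character.
np.savetxt(buf, np.array([[1, 2], [3, 4]]), fmt='%d', delimiter=', ')
buf.getvalue()  # '1, 2\n3, 4\n'
```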
DOC: Docstring fix for `savetxt` (minor change)
Implemented a nanpercentile and associated tests
as an extension of np.percentile to complement the
other nanfunctions.
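A minimal usage sketch of the new function:

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])
# NaNs are ignored, so the 50th percentile is the median of [1, 3].
p = np.nanpercentile(a, 50)
float(p)  # 2.0
```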
The class is in numpy/lib/_version.py and can be used to compare
numpy versions when the version goes to double digits.
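Plain string comparison breaks once a version component reaches double digits; the class compares numerically. A minimal sketch:

```python
from numpy.lib import NumpyVersion

# Lexicographic string comparison gets this wrong:
wrong = '1.10.0' > '1.9.0'           # False
# NumpyVersion compares version components numerically:
right = NumpyVersion('1.10.0') > '1.9.0'   # True
```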
ENH: added functionality nanmedian to numpy
Implemented a nanmedian and associated tests as an
extension of np.median to complement the other
nanfunctions.
Added negative values to the unit tests.
Cleaned up the documentation of nanmedian.
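A minimal usage sketch of the new function:

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0, 4.0])
# The NaN is ignored, so this is the median of [1, 3, 4].
m = np.nanmedian(a)
float(m)  # 3.0
```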
For `a = np.array([np.nan], dtype=object)`, makes identity-based
comparisons such as `a == a` emit a deprecation/future warning
instead of just changing the behavior.
Also fixes some smaller things.
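For context, in current NumPy the comparison is elementwise, so NaN compares unequal to itself even in an object array; a small sketch of the behaviour being deprecated toward:

```python
import numpy as np

a = np.array([np.nan], dtype=object)
# Elementwise Python `==` is used for object arrays; nan != nan.
(a == a).tolist()  # [False]
```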
This means that, for example, broadcasting errors get raised.
The array_equiv function is changed to explicitly test
whether broadcasting is possible. It may be nicer to do this
test differently, but I am not sure if that is possible.
Creates a FutureWarning for comparisons to None, which
should eventually result in a real elementwise (object) comparison.
Slightly adapted a wrong test.
Poly changes: some changes in the poly code were necessary;
one is probably a bug fix, the other needs to be
thought over, since the len check may not be perfect and
is more likely to raise an error.
Closes gh-3759 and gh-1608
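The eventual elementwise behaviour for None comparisons, now the default in modern NumPy, looks like this small sketch:

```python
import numpy as np

a = np.array([1, 2, 3])
# Elementwise comparison: no element equals None.
(a == None).tolist()  # [False, False, False]
```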
Remove misleading note about equivalency between column_stack and
np.vstack(tup).T.
Fixes #3488
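A small sketch of why the note was misleading for 2-D inputs:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
cs = np.column_stack((a, a))   # stacks the columns side by side
vt = np.vstack((a, a)).T       # transposes the row-stacked result
cs.tolist()  # [[1, 2, 1, 2], [3, 4, 3, 4]]
vt.tolist()  # [[1, 3, 1, 3], [2, 4, 2, 4]]
```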
This PR adds a new keyword argument to `np.unique` that returns the
number of times each unique item comes up in the array. This allows
replacing a typical numpy construct:

    unq, _ = np.unique(a, return_inverse=True)
    unq_counts = np.bincount(_)

with a single line of code:

    unq, unq_counts = np.unique(a, return_counts=True)

As a plus, it runs faster, because it does not need the extra
operations required to produce `unique_inverse`.
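A runnable sketch of the one-liner described above:

```python
import numpy as np

a = np.array([1, 1, 2, 3, 3, 3])
unq, unq_counts = np.unique(a, return_counts=True)
unq.tolist()         # [1, 2, 3]
unq_counts.tolist()  # [2, 1, 3]
```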
Resolves #2591. Adds more explicit error handling in the line parsing loop.
Speeds calculation up by ~3x for 100x100 matrices, and by ~45x for
1000x1000
MAINT (API?): organize npyio.recfromcsv defaults
Removed two irrelevant comments about code history.
P.S. My first try with GitHub's online editor.
Added a note to recfromcsv about the `dtype` keyword,
as suggested by @hpaulj. Also added a note to the release notes,
about the change in the `update` keyword, as suggested by @charris.