* Run test_large_zip in a child process

types (#15816)
Cleanup from the dropping of Python 2.

This expires a deprecation from 1.8.
The corresponding deprecation in `np.insert` has less clear semantics, so it has been left to a future patch.
Co-authored-by: Sebastian Berg <sebastian@sipsolutions.net>
Co-authored-by: Warren Weckesser <warren.weckesser@gmail.com>

* DEP: Make `np.insert` and `np.delete` on 0d arrays with an axis an error
Before this change, the following code worked:
```
>>> some_0d = np.array(1)
>>> np.insert(some_0d, "some nonsense", 10, axis=0)
array(10)
>>> np.insert(some_0d, "some nonsense", 42, axis="some nonsense")
array(42)
```
Now these raise AxisError and TypeError, respectively.
`delete` is exactly the same.
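A rough sketch of the behaviour after this change (error names as described above; exact messages depend on the NumPy version):
```python
>>> some_0d = np.array(1)
>>> np.insert(some_0d, 0, 10, axis=0)
AxisError
>>> np.insert(some_0d, 0, 10, axis="some nonsense")
TypeError
```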

* TST: Remove code that is not supposed to warn out of warning assertion

* DEP: Make np.delete on out-of-bounds indices an error
Note that this only affects lists of indices.
```python
>>> a = np.arange(3)
```
Before:
```python
>>> np.delete(a, 100)
IndexError
>>> np.delete(a, [100])
DeprecationWarning
array([0, 1, 2])
>>> np.delete(a, -1)
array([0, 1])
>>> np.delete(a, [-1])
FutureWarning
array([0, 1, 2])
```
After:
```python
>>> np.delete(a, 100)
IndexError
>>> np.delete(a, [100])
IndexError
>>> np.delete(a, -1)
array([0, 1])
>>> np.delete(a, [-1])
array([0, 1])
```

* DEP: Forbid passing non-integral index arrays to `insert` and `delete`
This expires a deprecation warning from back in 1.9.
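A sketch of the resulting behaviour, assuming a NumPy version in which this deprecation has expired:
```python
>>> a = np.arange(3)
>>> np.delete(a, [1.0])   # float index array: now an error instead of a warning
IndexError
>>> np.delete(a, [1])     # integer indices keep working
array([0, 2])
```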

* BUG, TST: fix f2py for PyPy, skip one test for PyPy, xfail tests for s390x

Tweak a few lines so that arrays with an axis of length 0 don't break the np.unique code.
Closes gh-15559.
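A minimal sketch of the case this fixes (shapes are illustrative):
```python
>>> np.unique(np.empty((0, 3)), axis=0)   # previously broke; now returns an empty result
array([], shape=(0, 3), dtype=float64)
```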

This code is from GitHub user huonw, from this PR: https://github.com/numpy/numpy/pull/15565

This is largely a re-submission of the original change proposed in #6509. Discussion was hosted in multiple forums, including #3474, the NumPy mailing list circa October 2015, and the 2020-02-26 NumPy triage meeting.
This PR closes #3474 and #15570.

* DEP: Do not allow "abstract" dtype conversion/creation
These dtypes do not really make sense as instances. We can (somewhat) reasonably define np.dtype(np.int64) as the default (machine endianness) int64. (Arguably, it is unclear that `np.array(arr_of_>f8, dtype="f8")` should return arr_of_<f8, but that would be very noisy!)
However, `np.integer` as an equivalent to long is not well defined.
Similarly, `dtype=Decimal` may be a neat way to spell `dtype=object` when you intend to put Decimal objects into the array. But it is misleading, since there is no special meaning attached to it at this time.
The biggest issue is that `arr.astype(np.floating)` looks like it will let float32 or float128 pass, but it will force a float64 output! Arguably, downcasting is a bug in this case.
A related issue is `np.dtype("S")` and especially "S0". The dtype "S" does make sense for most or all places where `dtype=...` can be passed. However, it is conceptually different from other dtypes, since it will not end up being attached to the array (unlike "S2", which would be). The dtype "S" really means the type number/DType class of String, and not a specific dtype instance.
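A rough illustration of the distinction (behaviour sketched for a NumPy version where this deprecation is active; later releases may raise outright):
```python
>>> np.dtype(np.int64)       # concrete scalar type: still fine
dtype('int64')
>>> np.dtype(np.floating)    # abstract type: deprecated, still resolves to float64
DeprecationWarning
dtype('float64')
>>> np.ones(3, dtype=np.float32).astype(np.floating).dtype   # forces float64
DeprecationWarning
dtype('float64')
```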

The bug occurs since numpy 1.16; before that, an empty descr corresponded to `np.dtype([])`. This fixes the problem by following numpy 1.15's behavior.
Closes gh-15396.

This implements NEP 34.
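For context, NEP 34 deprecates automatic object-dtype inference for ragged nested sequences; a sketch of the behaviour (warning vs. error depends on the NumPy version):
```python
>>> np.array([[1, 2], [1, 2, 3]])                 # ragged input without dtype=object
VisibleDeprecationWarning
>>> np.array([[1, 2], [1, 2, 3]], dtype=object)   # explicit opt-in still works
array([list([1, 2]), list([1, 2, 3])], dtype=object)
```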

Now that Python 2.7 is gone, there is no need to pop manually from kwarg dictionaries.
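A sketch of the kind of simplification this enables (hypothetical function, not taken from the commit); Python 3 keyword-only arguments replace the manual `kwargs.pop()` pattern:
```python
# Python 2-era pattern: accept **kwargs and pop values by hand
def histogram_old(data, **kwargs):
    bins = kwargs.pop("bins", 10)
    if kwargs:
        raise TypeError("unexpected keyword arguments: %r" % sorted(kwargs))
    return data, bins

# Python 3: declare keyword-only arguments directly
def histogram_new(data, *, bins=10):
    return data, bins
```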

Inheriting from object was necessary for Python 2 compatibility to use new-style classes. In Python 3, this is unnecessary, as there are no old-style classes.
Dropping the explicit `object` base is more idiomatic Python.
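For example (a trivial sketch with a made-up class name):
```python
# Python 2 needed the explicit base to get a new-style class
class GridData(object):
    pass

# Python 3: every class is new-style, so the base is redundant
class GridData:
    pass
```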

As numpy is Python 3 only, these import statements are now unnecessary and don't alter runtime behavior.
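The statements in question are presumably the Python 2 compatibility imports of this form (a sketch; the exact list varied per file):
```python
# No-ops on Python 3, so they can simply be deleted
from __future__ import absolute_import, division, print_function
```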

In numpy.gradient, convert integer array inputs to float64 to avoid unwanted modular arithmetic.
Closes gh-15207.
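A sketch of the wraparound being avoided: with uint8 input, a difference such as 10 - 200 used to wrap to 66 instead of -190 (values chosen for illustration, not taken from the issue):
```python
>>> x = np.array([10, 200, 10], dtype=np.uint8)
>>> np.gradient(x)    # computed in float64 after the fix
array([ 190.,    0., -190.])
```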

* Remove a few unused imports in several files.

This reverts commit c088383cb290ca064d456e89d79177a0e234cb8d and uses the same-kind casting rule for the additional keyword arguments ``to_end`` and ``to_begin``. This results in slightly more lenient behaviour for integers (which can now have overflows that are hidden), but fixes an issue with the handling of NaN.
Generally, this behaviour seems more consistent with what NumPy does elsewhere. A similar overflow issue exists in many other places and should be solved by integer-overflow warning machinery when the actual cast takes place.
Closes gh-13103.

Fix wrong multiplier for /proc/meminfo, and do style cleanups.

* MAINT: Only copy input array in _replace_nan() if there are nans to replace
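A minimal sketch of the idea, not NumPy's actual implementation: only pay for a copy when there is something to replace.
```python
import numpy as np

def replace_nan_sketch(a, val):
    """Return (array, mask) with NaNs replaced by `val`, copying only if needed."""
    a = np.asanyarray(a)
    if a.dtype.kind not in "fc":     # integer/bool/etc. arrays cannot hold NaN
        return a, None
    mask = np.isnan(a)
    if not mask.any():               # nothing to replace: skip the copy
        return a, mask
    a = np.array(a, copy=True)       # copy only when NaNs are actually present
    np.copyto(a, val, where=mask)
    return a, mask
```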

* DEP: issue deprecation warning when creating ragged array (NEP 34)

This PR allows the axis keyword in expand_dims to be a tuple of ints. Previously, axis could only be an int.
This issue was previously discussed in gh-12290, and the changes are based on gh-12290 (comment).
This PR also removes the deprecation added in v1.13 (2017-05-17), under which axis could previously be outside the range (-a.ndim - 1) <= axis <= a.ndim. Such an axis value will now raise an AxisError. Please let me know if it's too soon to remove this deprecation (I could not find any dev docs stating the length of the NumPy deprecation cycle).
Closes gh-12290.
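For example, with the new tuple form (a sketch assuming a NumPy version containing this change):
```python
>>> a = np.ones((2, 3))
>>> np.expand_dims(a, axis=(0, 2)).shape   # new length-1 axes at positions 0 and 2
(1, 2, 1, 3)
>>> np.expand_dims(a, axis=5)              # out-of-range axis is now an error
AxisError
```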

Address gh-14142 for the 1.18 release: warn when saving a dtype with metadata that cannot be loaded.
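A sketch of the situation this targets (the warning class and message are version dependent):
```python
>>> dt = np.dtype(np.float64, metadata={"unit": "m"})   # dtype carrying metadata
>>> np.save("data.npy", np.zeros(3, dtype=dt))          # expected to warn about the metadata
UserWarning
```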

Relates to gh-6103

Fraction.__float__ gives a DeprecationWarning if the division results in a non-builtin float.
This was never intended as part of the test anyway.

As per NEP-32, the financial functions are deprecated.
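For example (a sketch; per NEP 32 the functions later moved to the separate numpy_financial package):
```python
>>> np.fv(0.05, 10, -100, -100)      # future value: deprecated in NumPy
DeprecationWarning
>>> import numpy_financial as npf
>>> npf.fv(0.05, 10, -100, -100)     # the maintained replacement
1420.67...
```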

An input such as
    np.histogram(np.array([-2, 0, 127], dtype=np.int8), bins="auto")
would raise the exception
    ValueError: Number of samples, -1, must be non-negative.
The problem was that the peak-to-peak value of the input array was computed with the `ptp` method, which returned negative values for signed integer arrays when the actual peak-to-peak value exceeded the maximum value of the array's signed data type.
The fix is to use a peak-to-peak function that returns an unsigned value for signed integer arrays.
Closes gh-14379.
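To make the overflow concrete (a sketch of the arithmetic involved):
```python
>>> a = np.array([-2, 0, 127], dtype=np.int8)
>>> a.ptp()                       # 127 - (-2) = 129, which wraps around in int8
-127
>>> np.ptp(a.astype(np.int64))    # the true peak-to-peak value
129
```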

* MAINT: Avoid BytesWarning in PyArray_DescrConverter()

A BytesWarning can be emitted when bytes and strings are mismatched. Catching BytesWarning ensures a better boundary between the str and bytes types. The test suite is now run with the -b flag to emit this warning.
Fixes #9308
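For reference, a sketch of what the -b flag does (standard CPython behaviour, not specific to NumPy):
```python
# Only emitted when the interpreter is started with -b (use -bb to make it an error):
#   $ python -b
>>> 'a' == b'a'    # str/bytes comparison triggers the warning under -b
BytesWarning
False
```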

* DEP: remove deprecated select behaviour