| Commit message | Author | Age | Files | Lines |
This is largely a re-submission of the original change proposed in #6509. Discussion took place in several forums, including #3474, the numpy mailing list (circa October 2015), and the 2020-02-26 NumPy Triage meeting.
This PR closes #3474 and #15570.
Removes unnecessary code introduced in #15696.
Non-native byte orders were explicitly added to the fast-path
check in _var for complex numbers. However, the non-native
path is unreachable due to coercion in upstream ufuncs.
MAINT: Add a fast path to var for complex input
var currently has a conditional that causes conjugate to be
called when computing the variance of complex inputs.
That branch of the computation is slow. This PR adds a type check
so that complex inputs skip it.
Closes #15684
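A minimal sketch of the idea (illustrative only, not the actual `_methods._var` code; the helper name is made up): for complex input, the squared magnitude can be formed from the real and imaginary parts directly, skipping the `conjugate` multiply.

```python
import numpy as np

def _abs_squared(x):
    # Hypothetical helper illustrating the fast path added to _var.
    if np.issubdtype(x.dtype, np.complexfloating):
        # Complex fast path: |x|**2 = re(x)**2 + im(x)**2,
        # avoiding the slower multiply(x, conjugate(x)).real route.
        return x.real**2 + x.imag**2
    # Real (and integer) input: a plain elementwise square suffices.
    return x * x
```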
MAINT: Fixing typos in f2py comments and code.
* MAINT: use list-based APIs to call subprocesses
* TST, MAINT: add a test for mingw32ccompiler.build_import, clean up lib2def
Co-authored-by: Matti Picus <matti.picus@gmail.com>
Co-authored-by: Eric Wieser <wieser.eric@gmail.com>
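For context, the general pattern adopted in the first item above (a generic illustration, not the actual numpy.distutils code): passing the command as an argument list avoids shell parsing and quoting issues.

```python
import subprocess
import sys

# Shell-string form is parsed by the shell and breaks on spaces/quoting:
#     subprocess.check_output("python --version", shell=True)
# List-based form passes each argument directly to the new process:
out = subprocess.check_output([sys.executable, "--version"])
print(out.decode().strip())
```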
DEP: Do not allow "abstract" dtype conversion/creation
Co-Authored-By: Matti Picus <matti.picus@gmail.com>
Add a comment that we can think about only allowing it for `dtype=...`
though...
These dtypes do not really make sense as instances. We can (somewhat)
reasonably define np.dtype(np.int64) as the default (machine endianness)
int64. (Arguably, it is unclear that `np.array(arr_of_>f8, dtype="f")`
should return arr_of_<f8, but that would be very noisy!)
However, `np.integer`, as an equivalent to long, is not well defined.
Similarly, `dtype=Decimal` may be a neat way to spell `dtype=object` when you
intend to put Decimal objects into the array. But it is misleading,
since there is no special meaning to it at this time.
The biggest issue is that `arr.astype(np.floating)` looks
like it will let float32 or float128 pass, but it will force a
float64 output! Arguably, downcasting is a bug in this case.
A related issue is `np.dtype("S")` and especially "S0". The dtype "S"
does make sense for most or all places where `dtype=...` can be
passed. However, it is conceptually different from other dtypes, since
it will not end up being attached to the array (unlike "S2", which
would be). The dtype "S" really means the type number/DType class
of String, and not a specific dtype instance.
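A short illustration of the ambiguity described above (behaviour around the time of this change; the exact warning or error now raised depends on the NumPy version):

```python
import numpy as np

x = np.ones(3, dtype=np.longdouble)

# np.floating looks like it should let any float width through, but
# converting with it forces a float64 result, silently downcasting:
y = x.astype(np.floating)
print(y.dtype)                # float64

# Similarly, np.dtype(np.integer) resolves to the default integer
# (the platform's "long"), not to a well-defined abstract type:
print(np.dtype(np.integer))   # e.g. int64 on a 64-bit Linux system
```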
BUG: Fixing result of np quantile edge case
Signed-off-by: Changqing Li <changqing.li@windriver.com>
* DOC: Improve docs for np.finfo.
Replace inaccurate statements about eps and epsneg attrs with
correct statements and examples.
Added np.spacing and np.nextafter to See Also.
Closes #6940.
* Removed LaTeX math from finfo docstring.
* MAINT: Add periods at end of some sentences.
Co-authored-by: Charles Harris <charlesr.harris@gmail.com>
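For reference, the corrected meaning of ``eps`` and ``epsneg`` can be checked directly (shown for float64):

```python
import numpy as np

fi = np.finfo(np.float64)

# eps is the gap between 1.0 and the next larger representable float;
# epsneg is the gap between 1.0 and the next smaller representable float.
assert fi.eps == np.spacing(1.0) == 2.0**-52
assert fi.epsneg == 1.0 - np.nextafter(1.0, 0.0) == 2.0**-53
```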
Reference multiple relevant discussions on GH
Clarifies a FIXME comment in numpy/__init__.py by referencing
the relevant discussion in the issue tracker.
Closes #15668.
Updated the description to consider all array elements.
Updated the examples to use a multi-element array, showing that a single element that is not close enough prevents the whole array from being considered real.
Closes #15626
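An illustration of the behaviour the updated examples describe (mirroring the kind of example used in the docstring; the exact values there may differ):

```python
import numpy as np

# With tol=1000 the threshold is 1000 * eps(float64), roughly 2.2e-13.
# Every imaginary part is below it, so a real array comes back:
print(np.real_if_close([2.1 + 4e-14j, 5.2 + 3e-15j], tol=1000))
# -> [2.1  5.2]

# One element (4e-13j) exceeds the threshold, so the whole array,
# including the "close" elements, stays complex:
print(np.real_if_close([2.1 + 4e-13j, 5.2 + 3e-15j], tol=1000))
# -> [2.1+4.e-13j  5.2+3.e-15j]
```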
TST: check exception details in refguide_check.py
xref GH-12548
DOC: Remove duplicated code in true_divide docstring
I believe this line comes from an older version of the docstring which was importing `__future__`.
The value for platform.machine() was reported from Debian in 2012, but current
Debian systems no longer report this value. Switch the test to the new arm_softfloat
flag, which reflects the actual underlying cause of the test failure.
Closes gh-15562
BUG: Use ``__array__`` during dimension discovery
``__array__`` was previously not used during dimension discovery,
while being used during dtype discovery (if the dtype is not given),
as well as during filling of the resulting array.
This led to inconsistencies with respect to array likes
that have a shape including a 0 (typically as the first dimension).
Thus a shape of ``(0, 1, 1)`` would be found as ``(0,)``, because
a nested list/sequence cannot represent empty shapes except in 1-D.
This uses the `_array_from_array_like` function, which means that
some coercions may be a tiny bit slower, in exchange for removing
a lot of complex code.
(This also reverts commit d0d250a3c9d7d90e75701c32d7d435640e6b02eb,
or the related change.)
This is a continuation of work by Sergei Lebedev in gh-13663,
which had to be reverted due to problems with Pandas and the
general inconsistency. This version may not resolve all issues
with pandas, but it does resolve the inconsistency.
Closes gh-13958
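A minimal reproducer of the inconsistency (the class name here is made up):

```python
import numpy as np

class EmptyArrayLike:
    # Exposes only __array__; the underlying data has an empty
    # leading dimension.
    def __array__(self, dtype=None):
        return np.empty((0, 1, 1))

arr = np.array(EmptyArrayLike())
# Previously dimension discovery ignored __array__ and fell back to
# sequence nesting, which cannot express empty shapes beyond 1-D, so
# this produced shape (0,). With __array__ consulted during discovery
# the result has the expected shape (0, 1, 1).
print(arr.shape)
```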
BUG: Fix bug in AVX-512F np.maximum and np.minimum
abs_ptrdiff(args[1], args[0]) >= (vsize) does not accommodate strides,
especially when the strides are negative.
np.minimum
Fixes bug in np.maximum.accumulate and np.minimum.accumulate
See https://github.com/numpy/numpy/issues/15597
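For reference, what the accumulate variants compute (the operations whose results were affected by the bug tracked in the issue above):

```python
import numpy as np

a = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
# Running maximum/minimum along the array:
print(np.maximum.accumulate(a))   # [1. 3. 3. 5. 5.]
print(np.minimum.accumulate(a))   # [1. 1. 1. 1. 1.]
```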
DOC: numpy.clip is equivalent to minimum(..., maximum(...))
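The stated equivalence can be checked directly:

```python
import numpy as np

a = np.arange(10)
assert np.array_equal(np.clip(a, 2, 7),
                      np.minimum(7, np.maximum(a, 2)))
```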
There was an identical statement in both branches of
a conditional.
Moved the statement out of the conditional to eliminate one
repetitious line of code.
BUG: Remove check requiring natural alignment of float/double to run AVX code
On a 32-bit x86 system, doubles need not be naturally aligned to an 8-byte
boundary (see the -malign-double section of
https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html). Having this check
meant different code paths (AVX vs. scalar) were run depending on the
alignment of the data, which led to different results and tests failing
intermittently. The AVX code uses unaligned loads, so this check is
unnecessary to begin with.
NaNs
sine and cosine of large numbers are always computed via the standard C
library and the results vary depending on the platform.
TST: Accuracy test float32 sin/cos/exp/log for AVX platforms
BUG, MAINT: Stop using the error-prone deprecated Py_UNICODE apis
This eliminates the need for special casing in `np.generic.__reduce__`