* DOC: add examples for random generator exponential function (Issue #22270)
* DOC: fix doc test for random exponential generator example (Issue #22270)
* DOC: fix formatting on np.random.exponential example (Issue: #22270)
* DOC: fix test and problem context on np.random.exponential example (Issue: #22270)
* DOC: use may vary instead of will vary for exponential example (Issue: #22270)
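For context, a minimal sketch of the kind of example these commits add for the exponential generator (the exact wording and values in the docstring may differ; scale and size here are illustrative assumptions):

```python
import numpy as np

# Draw samples from an exponential distribution with mean `scale`.
rng = np.random.default_rng()
samples = rng.exponential(scale=3.0, size=10_000)

samples[:3]       # may vary
samples.mean()    # close to 3.0 for a large sample (may vary)
```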
Examples in the documentation for trapz go straight from integrating random arrays to parametric curves. I think it's worth pointing out that one can integrate something they'd see in Calculus 1 and get the answer they'd expect.
Also add some more guidance text to the existing examples (and style fixes).
Co-authored-by: Sebastian Berg <sebastian@sipsolutions.net>
Co-authored-by: Melissa Weber Mendonça <melissawm@gmail.com>
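As an illustration of the idea (not the literal docstring text), a Calculus-1 style check with `np.trapz`, assuming a fine grid so the trapezoidal approximation lands near the exact answer:

```python
import numpy as np

# Integrate y = x**2 from 0 to 3; the exact answer is 27/3 = 9.
x = np.linspace(0, 3, 1001)
y = x**2
np.trapz(y, x)    # approximately 9.0 (trapezoidal rule, so not exactly 9)
```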
Since percentile is more or less identical to quantile, I also made it raise an error if it receives complex input. nanquantile and nanpercentile now raise errors on complex input as well.
* Made the changes recommended by seberg
* Fixed a test for PR 22703
* Fixed tests for quantile
* Shortened some more lines
* Fixup more lines
Co-authored-by: Sebastian Berg <sebastianb@nvidia.com>
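A rough sketch of the resulting behaviour (the exact exception type and message are assumptions; the point is that complex input is now rejected):

```python
import numpy as np

a = np.array([1 + 1j, 2 + 2j, 3 + 3j])

# After this change, quantile/percentile (and their nan* variants) reject
# complex input instead of returning a meaningless "ordered" statistic.
try:
    np.quantile(a, 0.5)
except TypeError as exc:
    print(exc)
```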
np.pad with mode="wrap" could return an unexpected result in which the original data was not strictly tiled in the padding. This may happen on some occasions when the padding widths in the same dimension are unbalanced (see the added test case in test_arraypad.py and the related issue). The reason is that the pad function makes iterative calls to _set_wrap_both() in that situation, yet the period for padding was not correctly computed in each iteration.
The bug is fixed by guaranteeing that the period is always a multiple of the original data size, and also the largest possible value, for computational efficiency.
Closes #22464
Co-authored-by: Lars Grüter <lagru+github@mailbox.org>
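A small sketch of the fixed behaviour with unbalanced widths (values chosen for illustration); with strict periodic wrapping the original block [1, 2, 3] should simply repeat:

```python
import numpy as np

a = np.array([1, 2, 3])
# Unbalanced widths: 1 element on the left, 5 on the right.
np.pad(a, (1, 5), mode="wrap")
# Expected strict tiling: [3, 1, 2, 3, 1, 2, 3, 1, 2]
```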
The deepcopy cleanup on error did not clean up everything in all
code paths. Our tests do exercise the path, but leak checking
is necessary to see the issue.
Making this a single PR, since it is a somewhat larger change.
BUG, SIMD: Fix rounding large numbers on SSE2
Before SSE4.1, there were no native instructions for rounding
operations on double precision. We usually emulate them by assuming
that the MXCSR register is set to round-to-nearest, adding a
large number `2^52` to `X` and then subtracting it back to
eliminate any excess precision, as long as `|X|` is less than `2^52`;
otherwise `X` is returned unchanged.
The current emulated intrinsics `npyv_[rint, floor, ceil, trunc]_f64`
were not checking whether `|x|` is equal to or larger than `2^52`, which leads
to losing accuracy on large numbers.
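The emulation itself lives in C SIMD intrinsics; as a rough Python model of the trick and of the missing `|x| >= 2^52` guard (function name and values are illustrative, not NumPy internals):

```python
import numpy as np

def emulated_rint(x):
    """Model of the pre-SSE4.1 rint emulation for float64."""
    two52 = np.float64(2.0) ** 52
    # Guard that was missing: beyond 2**52 every float64 is already an
    # integer, and the add/subtract trick would lose accuracy.
    if abs(x) >= two52:
        return x
    # Adding and subtracting 2**52 forces rounding to the nearest integer
    # (assuming the FPU is in its default round-to-nearest mode).
    return (x + two52) - two52

emulated_rint(4.3)    # 4.0
emulated_rint(1e18)   # 1e18, returned unchanged by the guard
```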
* Add reshape differences to the array API compat document
* Add an item to the array API compat document about reverse broadcasting
* Make some wording easier to read
API: Hide exceptions from the main namespace
I wasn't sure whether we should already start deprecating the exceptions,
so I opted to only hide them from `__dir__()` while
still having them in `__all__` and available.
This also changes their module to `numpy.exceptions`, which matters
because that is how they will be pickled (it would not be possible
to unpickle such an exception in an older NumPy version).
Due to pickling, we could put off changing the module.
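A minimal sketch of what this looks like from Python, using `AxisError` as a representative exception (the `dir()`/`__all__` behaviour shown reflects the state described in this commit and may differ in later releases):

```python
import numpy as np
from numpy.exceptions import AxisError   # new canonical home

AxisError.__module__          # 'numpy.exceptions' -- also how it pickles now
'AxisError' in np.__all__     # still exported for compatibility
'AxisError' in dir(np)        # hidden from __dir__() per this change
```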
DOC: Improve description of the dtype parameter in np.array docstring
BUG: `keepdims=True` is ignored if `out` is not `None` in `numpy.median`.
|
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | | |
The same fix applies to `numpy.percentile()` and `numpy.quantile()`.
Closes #22714, #22544.
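A short sketch of the fixed behaviour (shapes are illustrative):

```python
import numpy as np

a = np.arange(12.0).reshape(3, 4)
out = np.empty((1, 4))

# With the fix, keepdims=True is honoured even when `out` is supplied,
# so the reduced axis is kept with length 1 and `out` must match that shape.
np.median(a, axis=0, out=out, keepdims=True).shape   # (1, 4)
```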
BENCH: Update MaskedArray Benchmarks
ENH: Add SIMD versions of negative
|
| | | | | | | | | | |
|
| | | | | | | | | | |
|
| | | | | | | | | | |
|
| | | | | | | | | | |
|
| | | | | | | | | | |
|
| | | | | | | | | | |
|
| | | | | | | | | | |
|
| | | | | | | | | | |
|
| | | | | | | | | |
| | | | | | | | | |
| | | | | | | | | |
| | | | | | | | | | |
NumPy already had SSE2 versions of `negative`. The changes here convert that to universal intrinsics so other architectures can benefit. Previously there was no unroll and SIMD was only used in contiguous cases. We now unroll 4x/2x depending on whether the destination is contiguous. x86 doesn't perform as well for non-contiguous cases here, so we keep the previous implementation / fall back to scalar there. Additionally, SIMD versions have been added for ints.
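The change is internal to the loops; from Python the call is unchanged. A trivial usage sketch, mainly to show the contiguous float/int cases that now take the SIMD path versus a strided case that does not on x86 (array sizes are arbitrary):

```python
import numpy as np

f = np.linspace(-1.0, 1.0, 1_000_000)         # contiguous float: SIMD path
i = np.arange(1_000_000, dtype=np.int32)       # contiguous int: new SIMD path

np.negative(f)
np.negative(i)
np.negative(f[::2])    # non-contiguous on x86: previous / scalar implementation
```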
MAINT: check if PyArrayDTypeMeta_Spec->casts is set
ENH,DEP: Add DTypePromotionError and finalize the == and != FutureWarning/Deprecation
Co-authored-by: Marten van Kerkwijk <mhvk@astro.utoronto.ca>
Python doesn't even crash when you get it wrong, so this is unnecessary.
just silences a compiler warning.
This allows correctly catching the error; also adjust things so that the NoLoop
error is used for the "binary no loop" error.
For now, I think I may want to keep the casting error distinct.
This finalizes the deprecation warning. There are a couple of corner
cases that we don't get as nicely as we could. E.g. we (for now?)
manually create the all-false or all-true result array, with somewhat
clunky subclass support (subclasses can always override `==` and
`!=` though).
We also keep some (existing) behavior for 0-D objects where we
just return `NotImplemented`, which means the result should end
up as Python booleans (which is probably just as well).
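A sketch of the finalized behaviour for dtypes that cannot be promoted (the results shown follow what the commit describes; subclasses and 0-D objects may behave differently, as noted above):

```python
import numpy as np

a = np.array(["spam", "eggs"])
b = np.array([1, 2])

# No common dtype exists, so instead of a FutureWarning and a scalar result,
# element-wise comparison now yields all-False / all-True arrays.
a == b    # array([False, False])
a != b    # array([ True,  True])
```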
If promotion fails internally, this effectively means that there is
no loop available for the given ufunc, so chain that error.
We only do this for `InvalidPromotion` since we assume other errors
may well be critical.
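Roughly, from the user's side (a sketch; the exact exception classes and whether the cause is attached depend on the NumPy version — `InvalidPromotion` was the working name for what became `DTypePromotionError`):

```python
import numpy as np

try:
    # No loop exists for string + integer inputs, and promotion fails first.
    np.add(np.array(["a"]), np.array([1]))
except TypeError as exc:          # UFuncTypeError is a TypeError subclass
    print(type(exc).__name__)
    print(type(exc.__cause__).__name__ if exc.__cause__ else "no chained cause")
```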
|
| | | |_|_|_|_|/ / /
| | |/| | | | | | | |
|
| | |/ / / / / / /
| |/| | | | | | | |
|
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | | |
This PR closes gh-17046.
The problem was that when passing mask=None, the array creation took seconds, compared to the microseconds needed when passing mask=False.
Using `mask=None` is a bit dubious, since it has a different meaning from the default `mask=nomask`, but the speed trap is so hard to find that it seems pragmatic to support it. OTOH, it would also seem fine to deprecate the whole path (or maybe see if the path can be sped up so that the speed difference isn't forbidding enough to bother).
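A sketch of the two spellings the commit talks about (the array size is arbitrary; the point is that `mask=None` used to be dramatically slower than `mask=False` on large inputs):

```python
import numpy as np

data = np.zeros(1_000_000)

# Previously this took seconds; with the change it is comparable to the
# mask=False / default nomask path.
a = np.ma.MaskedArray(data, mask=None)
b = np.ma.MaskedArray(data, mask=False)
```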