This is required to support Python 3.10.
ENH: Implement the NumPy C SIMD vectorization interface
Co-authored-by: Matti Picus <matti.picus@gmail.com>
implement the same intrinsics as X86 for Power/VSX little-endian mode
implement the same intrinsics as X86 for NEON
implement the following intrinsics for the X86 extensions (a short usage sketch follows this list):
- load, store
- zero, setall, set, select, reinterpret
- boolean conversions
- arithmetic (add, sub, mul, div) and saturated arithmetic (adds, subs)
- logical
- comparison
- left and right shifting
- combine, zip
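
To make the list above concrete, here is a minimal, hedged sketch in the npyv_<op>_<type> naming style these commits describe; the function names, the NPY_INLINE usage, and the "simd/simd.h" include path are illustrative assumptions, not taken from the commit:

    #include "simd/simd.h"  /* assumed entry header for the NPYV layer */

    #if NPY_SIMD
    /* comparison + select: clamp negative lanes to zero */
    static NPY_INLINE npyv_s32
    clamp_nonneg(npyv_s32 v)
    {
        npyv_s32 zero = npyv_zero_s32();          /* zero/setall family  */
        npyv_b32 pos  = npyv_cmpgt_s32(v, zero);  /* comparison -> bool  */
        return npyv_select_s32(pos, v, zero);     /* select by bool mask */
    }

    /* load/store + left shift: double each lane of one full vector */
    static NPY_INLINE void
    double_lanes(npy_int32 *dst, const npy_int32 *src)
    {
        npyv_s32 v = npyv_load_s32(src);          /* unaligned load      */
        npyv_store_s32(dst, npyv_shli_s32(v, 1)); /* shift left by imm.  */
    }
    #endif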
"NPYV" or universal intrinsics, as NEP-38 defines them, are types and functions
intended to simplify vectorization of code across different platforms.
This patch initializes NPYV for the SIMD extensions SSE, AVX2, AVX512,
VSX and NEON on top of the C definitions provided by the newly
generated header '_cpu_dispatch.h', which is included by 'cpu_dispatch.h'.
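
As a hedged illustration of how code can sit on top of these definitions, the sketch below assumes the common pattern of a vector main loop guarded by NPY_SIMD plus a scalar tail; add_f32 and its arguments are invented for the example:

    #include "simd/simd.h"  /* assumed entry header for the NPYV layer */

    static void
    add_f32(npy_float *dst, const npy_float *a, const npy_float *b,
            npy_intp len)
    {
        npy_intp i = 0;
    #if NPY_SIMD  /* 0 without SIMD support, else the vector width in bits */
        const int vstep = npyv_nlanes_f32;          /* lanes per register */
        for (; i + vstep <= len; i += vstep) {
            npyv_f32 va = npyv_load_f32(a + i);
            npyv_f32 vb = npyv_load_f32(b + i);
            npyv_store_f32(dst + i, npyv_add_f32(va, vb));
        }
    #endif
        for (; i < len; i++) {  /* scalar tail; also the fallback path */
            dst[i] = a[i] + b[i];
        }
    }

The same loop would compile to SSE, AVX2/AVX512, VSX, or NEON instructions depending on which extension the dispatcher enabled.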
DOC: fixes to capitalization and header lines
ENH: Rewrite of array-coercion to support new dtypes
astype is similar to array coercion, and is also compared in
many of the tests below.
As asked for by Hameer, the comment was outdated, so I had to change
it completely instead of fixing one word.
The rational test still fails, but the xfail message was wrong
or confusing.
string is too short
This retains (and slightly expands) the old behaviour. It is inconsistent,
and I think we should fix that inconsistency one way or the other
(I honestly do not care *which* way).
Co-authored-by: Anirudh Subramanian <anirudh2290@apache.org>
Also add some more documentation for
PyArray_DiscoverDTypeAndShape_Recursive
Unfortunately one test had to be adapted (I missed the difference
between timedelta64 and datetime64), and one test which I thought
failed only for one reason also failed for another reason which
is not fully gone in master.
Could also just delete this path entirely, but I thought to keep it
around for now.
This is similar to the ragged deprecation, and proves that
gh-15611 was already fixed previously.
This function is not used anymore, so it can be deleted. In all
cases where we would want to use it, we first figure out the
parameters and thus might as well use the AssignFromCache function.
Hopefully this approach is acceptable for PyPy...
There may be one more leak to be tracked down here, and I intend
to check/write a few simple benchmarks, just because...
If the user already set the descriptor, within the recursive call
`*out_descr` is simply set correctly ahead of time.
Swapping promotion should not make a difference, but it is a bit
closer to what happened before, which should smooth over one
failure in the pandas test suite.
The swapping probably only affects the promotion of datetime+float,
where we have a PR open to disable it in all cases.
It seems strange that this did not error in any tests; I should
make sure I have at least one that hits this?!
The code had both mixed up, leading to a reference count leak
when ragged arrays were involved.
Further, some fixups for the (semi-exposed) Python-side API.
This was also missing before (I think there was even an unfinished
PR), but my changes and new asserts flushed it out...
For object dtypes, when the dimension limit is reached, we should prefer
to store the original object, even if we have a converted array available.
While the assert is correct during array coercion, in general
our parametric instances *can* actually be attached to ndarrays
and as such be used as a forced dtype in certain contexts...
I was hunting for real bugs, and found these instead... The
real bug was in the tests :/
That is probably actually much better. I first thought that might push
annoying logic down into many places, but it's actually only one place...
This required an additional hack unfortunately...
They can guess the dtype unit for almost anything that looks like
a string (including numpy strings). So they support looking
at every array element very well (in order to find the correct
dtype).
We thus need two hacks to support them:
1. Ensure that any string-like object is directly handled by
the datetime (using is_known_scalar_type)
2. Have a string/bytes->datetime special case in the dtype-from-array
discovery path, which should only special-case `object`
arrays (at least I would prefer that).
This fails two tests (but behaviour is better now). The string->datetime
unit discovery for string arrays is hard-coded now and IMO should be
deprecated at some point; if necessary, we can maybe add a specific
function for it?
Unless we really want to make this pattern a first-class citizen (and
not special-case object). But I am against that until someone finds a
better use-case...
This simplifies find_scalar_descriptor (and is actually what the NEP
describes right now).
Surprisingly, this actually is quite a large overhead (20-30%
probably, since it is 10+% on a timing with many other overheads).