-rw-r--r--  .github/workflows/wheels.yml                        |  1
-rw-r--r--  doc/neps/nep-0022-ndarray-duck-typing-overview.rst  |  2
-rw-r--r--  doc/neps/nep-0023-backwards-compatibility.rst       |  2
-rw-r--r--  doc/neps/nep-0041-improved-dtype-support.rst        |  2
-rw-r--r--  doc/neps/nep-0042-new-dtypes.rst                    |  2
-rw-r--r--  doc/neps/nep-0043-extensible-ufuncs.rst             | 10
-rw-r--r--  numpy/array_api/_set_functions.py                   | 20
-rw-r--r--  numpy/array_api/tests/test_set_functions.py         | 19
8 files changed, 44 insertions, 14 deletions
diff --git a/.github/workflows/wheels.yml b/.github/workflows/wheels.yml
index b17b478ab..6a05c5830 100644
--- a/.github/workflows/wheels.yml
+++ b/.github/workflows/wheels.yml
@@ -123,4 +123,5 @@ jobs:
- uses: actions/upload-artifact@v2
with:
+ name: ${{ matrix.python }}-${{ startsWith(matrix.buildplat[1], 'macosx') && 'macosx' || matrix.buildplat[1] }}
path: ./wheelhouse/*.whl
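The added `name:` line uses the GitHub Actions `&& / ||` expression idiom as a conditional: `cond && a || b` evaluates to `a` when the condition holds (and `a` is truthy), otherwise `b`. A minimal Python sketch of the same logic, using a hypothetical helper name (`artifact_platform` is not part of the workflow):

```python
def artifact_platform(buildplat_tag: str) -> str:
    """Mirror the workflow expression:
    startsWith(matrix.buildplat[1], 'macosx') && 'macosx' || matrix.buildplat[1]

    All macOS build tags collapse to a single 'macosx' artifact name;
    every other platform tag is used as-is.
    """
    return "macosx" if buildplat_tag.startswith("macosx") else buildplat_tag

# All macOS variants share one artifact name; others keep theirs.
assert artifact_platform("macosx-x86_64") == "macosx"
assert artifact_platform("win_amd64") == "win_amd64"
```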
diff --git a/doc/neps/nep-0022-ndarray-duck-typing-overview.rst b/doc/neps/nep-0022-ndarray-duck-typing-overview.rst
index 47b81d9e7..8f3e09995 100644
--- a/doc/neps/nep-0022-ndarray-duck-typing-overview.rst
+++ b/doc/neps/nep-0022-ndarray-duck-typing-overview.rst
@@ -97,7 +97,7 @@ partial duck arrays. We've been guilty of this ourself.
At this point though, we think the best general strategy is to focus
our efforts primarily on supporting full duck arrays, and only worry
-about partial duck arrays as much as we need to to make sure we don't
+about partial duck arrays as much as we need to make sure we don't
accidentally rule them out for no reason.
Why focus on full duck arrays? Several reasons:
diff --git a/doc/neps/nep-0023-backwards-compatibility.rst b/doc/neps/nep-0023-backwards-compatibility.rst
index 8b6f4cd11..a056e7074 100644
--- a/doc/neps/nep-0023-backwards-compatibility.rst
+++ b/doc/neps/nep-0023-backwards-compatibility.rst
@@ -327,7 +327,7 @@ Discussion
- `Mailing list discussion on the first version of this NEP in 2018 <https://mail.python.org/pipermail/numpy-discussion/2018-July/078432.html>`__
- `Mailing list discussion on the Dec 2020 update of this NEP <https://mail.python.org/pipermail/numpy-discussion/2020-December/081358.html>`__
-- `PR with review comments on the the Dec 2020 update of this NEP <https://github.com/numpy/numpy/pull/18097>`__
+- `PR with review comments on the Dec 2020 update of this NEP <https://github.com/numpy/numpy/pull/18097>`__
References and Footnotes
diff --git a/doc/neps/nep-0041-improved-dtype-support.rst b/doc/neps/nep-0041-improved-dtype-support.rst
index 9f2e14640..fcde39780 100644
--- a/doc/neps/nep-0041-improved-dtype-support.rst
+++ b/doc/neps/nep-0041-improved-dtype-support.rst
@@ -74,7 +74,7 @@ cannot describe casting for such parametric datatypes implemented outside of Num
This additional functionality for supporting parametric datatypes introduces
increased complexity within NumPy itself,
and furthermore is not available to external user-defined datatypes.
-In general the concerns of different datatypes are not well well-encapsulated.
+In general the concerns of different datatypes are not well-encapsulated.
This burden is exacerbated by the exposure of internal C structures,
limiting the addition of new fields
(for example to support new sorting methods [new_sort]_).
diff --git a/doc/neps/nep-0042-new-dtypes.rst b/doc/neps/nep-0042-new-dtypes.rst
index c29172a28..99c0a4013 100644
--- a/doc/neps/nep-0042-new-dtypes.rst
+++ b/doc/neps/nep-0042-new-dtypes.rst
@@ -302,7 +302,7 @@ user-defined DType::
class UserDtype(dtype): ...
one can do ``np.ndarray[UserDtype]``, keeping annotations concise in
-that case without introducing boilerplate in NumPy itself. For a user
+that case without introducing boilerplate in NumPy itself. For a
user-defined scalar type::
class UserScalar(generic): ...
diff --git a/doc/neps/nep-0043-extensible-ufuncs.rst b/doc/neps/nep-0043-extensible-ufuncs.rst
index 3312eb12c..abc19ccf3 100644
--- a/doc/neps/nep-0043-extensible-ufuncs.rst
+++ b/doc/neps/nep-0043-extensible-ufuncs.rst
@@ -804,7 +804,7 @@ the inner-loop operates on.
This is necessary information for parametric dtypes since for example comparing
two strings requires knowing the length of both strings.
The ``Context`` can also hold potentially useful information such as the
-the original ``ufunc``, which can be helpful when reporting errors.
+original ``ufunc``, which can be helpful when reporting errors.
In principle passing in Context is not necessary, as all information could be
included in ``innerloop_data`` and set up in the ``get_loop`` function.
@@ -948,7 +948,7 @@ This wrapped ``ArrayMethod`` will have two additional methods:
convert this to ``float64 + float64``.
* ``wrap_outputs(Tuple[DType]: input_descr) -> Tuple[DType]`` replacing the
- resolved descriptors with with the desired actual loop descriptors.
+ resolved descriptors with the desired actual loop descriptors.
The original ``resolve_descriptors`` function will be called between these
two calls, so that the output descriptors may not be set in the first call.
In the above example it will use the ``float64`` as returned (which might
@@ -987,8 +987,8 @@ A different use-case is that of a ``Unit(float64, "m")`` DType, where
the numerical type is part of the DType parameter.
This approach is possible, but will require a custom ``ArrayMethod``
which wraps existing loops.
-It must also always require require two steps of dispatching
-(one to the ``Unit`` DType and a second one for the numerical type).
+It must also always require two steps of dispatching (one to the ``Unit``
+DType and a second one for the numerical type).
Furthermore, the efficient implementation will require the ability to
fetch and reuse the inner-loop function from another ``ArrayMethod``.
@@ -1296,7 +1296,7 @@ of the current ufunc machinery (as well as casting).
The implementation unfortunately will require large maintenance of the
UFunc machinery, since both the actual UFunc loop calls, as well as the
-the initial dispatching steps have to be modified.
+initial dispatching steps have to be modified.
In general, the correct ``ArrayMethod``, also those returned by a promoter,
will be cached (or stored) inside a hashtable for efficient lookup.
diff --git a/numpy/array_api/_set_functions.py b/numpy/array_api/_set_functions.py
index 05ee7e555..db9370f84 100644
--- a/numpy/array_api/_set_functions.py
+++ b/numpy/array_api/_set_functions.py
@@ -41,14 +41,21 @@ def unique_all(x: Array, /) -> UniqueAllResult:
See its docstring for more information.
"""
- res = np.unique(
+ values, indices, inverse_indices, counts = np.unique(
x._array,
return_counts=True,
return_index=True,
return_inverse=True,
)
-
- return UniqueAllResult(*[Array._new(i) for i in res])
+ # np.unique() flattens inverse indices, but they need to share x's shape
+ # See https://github.com/numpy/numpy/issues/20638
+ inverse_indices = inverse_indices.reshape(x.shape)
+ return UniqueAllResult(
+ Array._new(values),
+ Array._new(indices),
+ Array._new(inverse_indices),
+ Array._new(counts),
+ )
def unique_counts(x: Array, /) -> UniqueCountsResult:
@@ -68,13 +75,16 @@ def unique_inverse(x: Array, /) -> UniqueInverseResult:
See its docstring for more information.
"""
- res = np.unique(
+ values, inverse_indices = np.unique(
x._array,
return_counts=False,
return_index=False,
return_inverse=True,
)
- return UniqueInverseResult(*[Array._new(i) for i in res])
+ # np.unique() flattens inverse indices, but they need to share x's shape
+ # See https://github.com/numpy/numpy/issues/20638
+ inverse_indices = inverse_indices.reshape(x.shape)
+ return UniqueInverseResult(Array._new(values), Array._new(inverse_indices))
def unique_values(x: Array, /) -> Array:
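A minimal sketch of the behavior this patch works around: at the time of this change, `np.unique(..., return_inverse=True)` returns the inverse indices flattened even for multi-dimensional input (see https://github.com/numpy/numpy/issues/20638), while the array API requires them to share the input's shape.

```python
import numpy as np

x = np.array([[1, 2], [2, 1]])
values, inverse = np.unique(x, return_inverse=True)

# np.unique flattens the inverse indices; reshaping to x.shape
# (as the patch does) makes them usable to reconstruct x directly.
inverse = inverse.reshape(x.shape)
assert inverse.shape == x.shape
assert (values[inverse] == x).all()
```

The `reshape` is safe in either case because the inverse indices always contain exactly one entry per element of `x`.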
diff --git a/numpy/array_api/tests/test_set_functions.py b/numpy/array_api/tests/test_set_functions.py
new file mode 100644
index 000000000..b8eb65d43
--- /dev/null
+++ b/numpy/array_api/tests/test_set_functions.py
@@ -0,0 +1,19 @@
+import pytest
+from hypothesis import given
+from hypothesis.extra.array_api import make_strategies_namespace
+
+from numpy import array_api as xp
+
+xps = make_strategies_namespace(xp)
+
+
+@pytest.mark.parametrize("func", [xp.unique_all, xp.unique_inverse])
+@given(xps.arrays(dtype=xps.scalar_dtypes(), shape=xps.array_shapes()))
+def test_inverse_indices_shape(func, x):
+ """
+ Inverse indices share shape of input array
+
+ See https://github.com/numpy/numpy/issues/20638
+ """
+ out = func(x)
+ assert out.inverse_indices.shape == x.shape