-rw-r--r--  doc/changelog/1.18.1-changelog.rst            33
-rw-r--r--  doc/neps/nep-0037-array-module.rst           550
-rw-r--r--  doc/source/reference/ufuncs.rst                4
-rw-r--r--  doc/source/release.rst                         1
-rw-r--r--  doc/source/release/1.18.1-notes.rst           52
-rw-r--r--  numpy/_pytesttester.py                         2
-rw-r--r--  numpy/core/src/common/ucsnarrow.c             19
-rw-r--r--  numpy/core/src/multiarray/arrayobject.c       21
-rw-r--r--  numpy/core/src/multiarray/descriptor.c       126
-rw-r--r--  numpy/core/src/multiarray/methods.c           68
-rw-r--r--  numpy/core/src/multiarray/number.c            34
-rw-r--r--  numpy/core/src/multiarray/scalartypes.c.src   28
-rw-r--r--  numpy/core/src/multiarray/strfuncs.c          31
-rw-r--r--  numpy/core/src/multiarray/strfuncs.h           5
-rw-r--r--  numpy/core/src/umath/scalarmath.c.src         47
-rw-r--r--  numpy/core/tests/test_indexing.py             34
-rw-r--r--  numpy/testing/_private/nosetester.py           2
-rw-r--r--  pytest.ini                                     2
18 files changed, 719 insertions, 340 deletions
diff --git a/doc/changelog/1.18.1-changelog.rst b/doc/changelog/1.18.1-changelog.rst
new file mode 100644
index 000000000..d3df29198
--- /dev/null
+++ b/doc/changelog/1.18.1-changelog.rst
@@ -0,0 +1,33 @@
+
+Contributors
+============
+
+A total of 7 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Charles Harris
+* Matti Picus
+* Maxwell Aladago
+* Pauli Virtanen
+* Ralf Gommers
+* Tyler Reddy
+* Warren Weckesser
+
+Pull requests merged
+====================
+
+A total of 13 pull requests were merged for this release.
+
+* `#15158 <https://github.com/numpy/numpy/pull/15158>`__: MAINT: Update pavement.py for towncrier.
+* `#15159 <https://github.com/numpy/numpy/pull/15159>`__: DOC: add moved modules to 1.18 release note
+* `#15161 <https://github.com/numpy/numpy/pull/15161>`__: MAINT, DOC: Minor backports and updates for 1.18.x
+* `#15176 <https://github.com/numpy/numpy/pull/15176>`__: TST: Add assert_array_equal test for big integer arrays
+* `#15184 <https://github.com/numpy/numpy/pull/15184>`__: BUG: use tmp dir and check version for cython test (#15170)
+* `#15220 <https://github.com/numpy/numpy/pull/15220>`__: BUG: distutils: fix msvc+gfortran openblas handling corner case
+* `#15221 <https://github.com/numpy/numpy/pull/15221>`__: BUG: remove -std=c99 for c++ compilation (#15194)
+* `#15222 <https://github.com/numpy/numpy/pull/15222>`__: MAINT: unskip test on win32
+* `#15223 <https://github.com/numpy/numpy/pull/15223>`__: TST: add BLAS ILP64 run in Travis & Azure
+* `#15245 <https://github.com/numpy/numpy/pull/15245>`__: MAINT: only add --std=c99 where needed
+* `#15246 <https://github.com/numpy/numpy/pull/15246>`__: BUG: lib: Fix handling of integer arrays by gradient.
+* `#15247 <https://github.com/numpy/numpy/pull/15247>`__: MAINT: Do not use private Python function in testing
+* `#15250 <https://github.com/numpy/numpy/pull/15250>`__: REL: Prepare for the NumPy 1.18.1 release.
diff --git a/doc/neps/nep-0037-array-module.rst b/doc/neps/nep-0037-array-module.rst
new file mode 100644
index 000000000..387356490
--- /dev/null
+++ b/doc/neps/nep-0037-array-module.rst
@@ -0,0 +1,550 @@
+===================================================
+NEP 37 — A dispatch protocol for NumPy-like modules
+===================================================
+
+:Author: Stephan Hoyer <shoyer@google.com>
+:Author: Hameer Abbasi
+:Author: Sebastian Berg
+:Status: Draft
+:Type: Standards Track
+:Created: 2019-12-29
+
+Abstract
+--------
+
+NEP-18's ``__array_function__`` has been a mixed success. Some projects (e.g.,
+dask, CuPy, xarray, sparse, Pint) have enthusiastically adopted it. Others
+(e.g., PyTorch, JAX, SciPy) have been more reluctant. Here we propose a new
+protocol, ``__array_module__``, that we expect could eventually subsume most
+use-cases for ``__array_function__``. The protocol requires explicit adoption
+by both users and library authors, which ensures backwards compatibility, and
+is also significantly simpler than ``__array_function__``, both of which we
+expect will make it easier to adopt.
+
+Why ``__array_function__`` hasn't been enough
+---------------------------------------------
+
+There are two broad ways in which NEP-18 has fallen short of its goals:
+
+1. **Maintainability concerns**. ``__array_function__`` has significant
+ implications for libraries that use it:
+
+ - Projects like `PyTorch
+ <https://github.com/pytorch/pytorch/issues/22402>`_, `JAX
+ <https://github.com/google/jax/issues/1565>`_ and even `scipy.sparse
+ <https://github.com/scipy/scipy/issues/10362>`_ have been reluctant to
+     implement ``__array_function__`` in part because they are concerned about
+ **breaking existing code**: users expect NumPy functions like
+ ``np.concatenate`` to return NumPy arrays. This is a fundamental
+ limitation of the ``__array_function__`` design, which we chose to allow
+ overriding the existing ``numpy`` namespace.
+ - ``__array_function__`` currently requires an "all or nothing" approach to
+ implementing NumPy's API. There is no good pathway for **incremental
+ adoption**, which is particularly problematic for established projects
+ for which adopting ``__array_function__`` would result in breaking
+ changes.
+ - It is no longer possible to use **aliases to NumPy functions** within
+ modules that support overrides. For example, both CuPy and JAX set
+ ``result_type = np.result_type``.
+ - Implementing **fall-back mechanisms** for unimplemented NumPy functions
+ by using NumPy's implementation is hard to get right (but see the
+ `version from dask <https://github.com/dask/dask/pull/5043>`_), because
+ ``__array_function__`` does not present a consistent interface.
+ Converting all arguments of array type requires recursing into generic
+ arguments of the form ``*args, **kwargs``.
+
+2. **Limitations on what can be overridden.** ``__array_function__`` has some
+ important gaps, most notably array creation and coercion functions:
+
+ - **Array creation** routines (e.g., ``np.arange`` and those in
+ ``np.random``) need some other mechanism for indicating what type of
+ arrays to create. `NEP 36 <https://github.com/numpy/numpy/pull/14715>`_
+ proposed adding optional ``like=`` arguments to functions without
+ existing array arguments. However, we still lack any mechanism to
+ override methods on objects, such as those needed by
+ ``np.random.RandomState``.
+ - **Array conversion** can't reuse the existing coercion functions like
+ ``np.asarray``, because ``np.asarray`` sometimes means "convert to an
+exact ``np.ndarray``" and other times means "convert to something *like*
+ a NumPy array." This led to the `NEP 30
+ <https://numpy.org/neps/nep-0030-duck-array-protocol.html>`_ proposal for
+ a separate ``np.duckarray`` function, but this still does not resolve how
+ to cast one duck array into a type matching another duck array.
+
+``get_array_module`` and the ``__array_module__`` protocol
+----------------------------------------------------------
+
+We propose a new user-facing mechanism for dispatching to a duck-array
+implementation, ``numpy.get_array_module``. ``get_array_module`` performs the
+same type resolution as ``__array_function__`` and returns a module with an API
+promised to match the standard interface of ``numpy`` that can implement
+operations on all provided array types.
+
+The protocol itself is both simpler and more powerful than
+``__array_function__``, because it doesn't need to worry about actually
+implementing functions. We believe it resolves most of the maintainability and
+functionality limitations of ``__array_function__``.
+
+The new protocol is opt-in, explicit, and offers local control; see
+:ref:`appendix-design-choices` for discussion on the importance of these design
+features.
+
+The array module contract
+=========================
+
+Modules returned by ``get_array_module``/``__array_module__`` should make a
+best effort to implement NumPy's core functionality on new array type(s).
+Unimplemented functionality should simply be omitted (e.g., accessing an
+unimplemented function should raise ``AttributeError``). In the future, we
+anticipate codifying a protocol for requesting restricted subsets of ``numpy``;
+see :ref:`requesting-restricted-subsets` for more details.
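+
+As a minimal sketch of what this contract implies for callers, optional
+functionality can be probed with ``getattr`` (``einsum`` here is just a
+hypothetical omitted function, not part of this proposal):
+
+.. code:: python
+
+    # ``module`` is whatever ``__array_module__`` returned. Plain attribute
+    # access on an omitted function raises AttributeError, so callers can
+    # degrade gracefully by probing for it instead:
+    einsum = getattr(module, 'einsum', None)
+    if einsum is None:
+        raise NotImplementedError("this array module does not provide einsum")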
+
+How to use ``get_array_module``
+===============================
+
+Code that wants to support generic duck arrays should explicitly call
+``get_array_module`` to determine an appropriate array module from which to
+call functions, rather than using the ``numpy`` namespace directly. For
+example:
+
+.. code:: python
+
+ # calls the appropriate version of np.something for x and y
+ module = np.get_array_module(x, y)
+ module.something(x, y)
+
+Both array creation and array conversion are supported, because dispatching is
+handled by ``get_array_module`` rather than via the types of function
+arguments. For example, to use random number generation functions or methods,
+we can simply pull out the appropriate submodule:
+
+.. code:: python
+
+ def duckarray_add_random(array):
+ module = np.get_array_module(array)
+ noise = module.random.randn(*array.shape)
+ return array + noise
+
+We can also write the duck-array ``stack`` function from `NEP 30
+<https://numpy.org/neps/nep-0030-duck-array-protocol.html>`_, without the need
+for a new ``np.duckarray`` function:
+
+.. code:: python
+
+ def duckarray_stack(arrays):
+ module = np.get_array_module(*arrays)
+ arrays = [module.asarray(arr) for arr in arrays]
+ shapes = {arr.shape for arr in arrays}
+ if len(shapes) != 1:
+ raise ValueError('all input arrays must have the same shape')
+ expanded_arrays = [arr[module.newaxis, ...] for arr in arrays]
+ return module.concatenate(expanded_arrays, axis=0)
+
+By default, ``get_array_module`` will return the ``numpy`` module if no
+arguments are arrays. This fall-back can be explicitly controlled by providing
+the ``module`` keyword-only argument. It is also possible to indicate that an
+exception should be raised instead of returning a default array module by
+setting ``module=None``.
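+
+A small sketch of the fall-back behavior described above (assuming the
+proposed ``np.get_array_module`` is available):
+
+.. code:: python
+
+    import numpy as np
+
+    # only plain NumPy arrays are involved, so the ``numpy`` module itself
+    # is returned
+    module = np.get_array_module(np.arange(3))
+    assert module is np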
+
+How to implement ``__array_module__``
+=====================================
+
+Libraries implementing a duck array type that want to support
+``get_array_module`` need to implement the corresponding protocol,
+``__array_module__``. This new protocol is based on Python's dispatch protocol
+for arithmetic, and is essentially a simpler version of ``__array_function__``.
+
+Only one argument is passed into ``__array_module__``: a Python collection of
+the unique array types passed into ``get_array_module``, i.e., the types of all
+arguments that have an ``__array_module__`` attribute.
+
+The special method should either return a namespace with an API matching
+``numpy``, or ``NotImplemented``, indicating that it does not know how to
+handle the operation:
+
+.. code:: python
+
+ class MyArray:
+ def __array_module__(self, types):
+ if not all(issubclass(t, MyArray) for t in types):
+ return NotImplemented
+ return my_array_module
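+
+A quick usage sketch of the above (``my_array_module`` being whatever module
+the library chooses to expose):
+
+.. code:: python
+
+    x = MyArray()
+    module = np.get_array_module(x)
+    assert module is my_array_module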
+
+Returning custom objects from ``__array_module__``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``my_array_module`` will typically, but need not always, be a Python module.
+Returning custom objects (e.g., with functions implemented via
+``__getattr__``) may be useful for some advanced use cases.
+
+For example, custom objects could allow for partial implementations of duck
+array modules that fall back to NumPy (although this is not recommended in
+general because such fall-back behavior can be error prone):
+
+.. code:: python
+
+ class MyArray:
+ def __array_module__(self, types):
+ if all(issubclass(t, MyArray) for t in types):
+ return ArrayModule()
+ else:
+ return NotImplemented
+
+ class ArrayModule:
+ def __getattr__(self, name):
+ import base_module
+ return getattr(base_module, name, getattr(numpy, name))
+
+Subclassing from ``numpy.ndarray``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+All of the same guidance about well-defined type casting hierarchies from
+NEP-18 still applies. ``numpy.ndarray`` itself contains a matching
+implementation of ``__array_module__``, which is convenient for subclasses:
+
+.. code:: python
+
+ class ndarray:
+ def __array_module__(self, types):
+ if all(issubclass(t, ndarray) for t in types):
+ return numpy
+ else:
+ return NotImplemented
+
+NumPy's internal machinery
+==========================
+
+The type resolution rules of ``get_array_module`` follow the same model as
+Python and NumPy's existing dispatch protocols: subclasses are called before
+super-classes, and otherwise left to right. ``__array_module__`` is guaranteed
+to be called only a single time on each unique type.
+
+The actual implementation of ``get_array_module`` will be in C, but should be
+equivalent to this Python code:
+
+.. code:: python
+
+ def get_array_module(*arrays, default=numpy):
+ implementing_arrays, types = _implementing_arrays_and_types(arrays)
+ if not implementing_arrays and default is not None:
+ return default
+ for array in implementing_arrays:
+ module = array.__array_module__(types)
+ if module is not NotImplemented:
+ return module
+ raise TypeError("no common array module found")
+
+ def _implementing_arrays_and_types(relevant_arrays):
+ types = []
+ implementing_arrays = []
+ for array in relevant_arrays:
+ t = type(array)
+ if t not in types and hasattr(t, '__array_module__'):
+ types.append(t)
+ # Subclasses before superclasses, otherwise left to right
+ index = len(implementing_arrays)
+ for i, old_array in enumerate(implementing_arrays):
+ if issubclass(t, type(old_array)):
+ index = i
+ break
+ implementing_arrays.insert(index, array)
+ return implementing_arrays, types
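+
+As a hedged illustration of this resolution order (assuming both the proposed
+``np.get_array_module`` and the matching ``ndarray.__array_module__`` described
+above; ``MySubclass`` is purely hypothetical), a subclass is consulted before a
+plain ``ndarray`` even when the subclass instance appears later in the call:
+
+.. code:: python
+
+    class MySubclass(np.ndarray):
+        def __array_module__(self, types):
+            # consulted before numpy.ndarray.__array_module__, even though
+            # the subclass instance is the second argument below
+            return super().__array_module__(types)
+
+    a = np.arange(3)
+    b = np.arange(3).view(MySubclass)
+    assert np.get_array_module(a, b) is np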
+
+Relationship with ``__array_ufunc__`` and ``__array_function__``
+----------------------------------------------------------------
+
+These older protocols have distinct use-cases and should remain
+===============================================================
+
+``__array_module__`` is intended to resolve limitations of
+``__array_function__``, so it is natural to consider whether it could entirely
+replace ``__array_function__``. This would offer dual benefits: (1) simplifying
+the user-story about how to override NumPy and (2) removing the slowdown
+associated with checking for dispatch when calling every NumPy function.
+
+However, ``__array_module__`` and ``__array_function__`` are quite different
+from a user perspective: the former requires explicit calls to
+``get_array_module``, rather than simply reusing the original ``numpy``
+functions. This is probably fine
+for *libraries* that rely on duck-arrays, but may be frustratingly verbose for
+interactive use.
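+
+For illustration, here is the same operation written both ways (a sketch only;
+per the array module contract, ``module.concatenate`` is assumed to match
+``np.concatenate``):
+
+.. code:: python
+
+    # implicit dispatch via __array_function__: call NumPy directly
+    result = np.concatenate(arrays)
+
+    # explicit dispatch via __array_module__: look up the module first
+    module = np.get_array_module(*arrays)
+    result = module.concatenate(arrays)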
+
+Some of the dispatching use-cases for ``__array_ufunc__`` are also solved by
+``__array_module__``, but not all of them. For example, it is still useful to
+be able to define non-NumPy ufuncs (e.g., from Numba or SciPy) in a generic way
+on non-NumPy arrays (e.g., with dask.array).
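+
+For instance (a sketch, assuming Dask's existing ``__array_ufunc__`` support
+and the fact that ``scipy.special.erf`` is a ufunc), a SciPy ufunc can be
+applied to a dask array even though ``scipy.special`` knows nothing about Dask:
+
+.. code:: python
+
+    import dask.array as da
+    from scipy import special
+
+    x = da.random.random((1000,), chunks=100)
+    y = special.erf(x)  # dispatched blockwise via __array_ufunc__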
+
+Given their existing adoption and distinct use cases, we don't think it makes
+sense to remove or deprecate ``__array_function__`` and ``__array_ufunc__`` at
+this time.
+
+Mixin classes to implement ``__array_function__`` and ``__array_ufunc__``
+=========================================================================
+
+Despite the user-facing differences, ``__array_module__`` and a module
+implementing NumPy's API still provide sufficient functionality to
+implement dispatching with the existing duck array protocols.
+
+For example, the following mixin classes would provide sensible defaults for
+these special methods in terms of ``get_array_module`` and
+``__array_module__``:
+
+.. code:: python
+
+ class ArrayUfuncFromModuleMixin:
+
+ def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
+ arrays = inputs + kwargs.get('out', ())
+ try:
+ array_module = np.get_array_module(*arrays)
+ except TypeError:
+ return NotImplemented
+
+ try:
+ # Note this may have false positive matches, if ufunc.__name__
+ # matches the name of a ufunc defined by NumPy. Unfortunately
+ # there is no way to determine in which module a ufunc was
+ # defined.
+ new_ufunc = getattr(array_module, ufunc.__name__)
+ except AttributeError:
+ return NotImplemented
+
+ try:
+ callable = getattr(new_ufunc, method)
+ except AttributeError:
+ return NotImplemented
+
+ return callable(*inputs, **kwargs)
+
+ class ArrayFunctionFromModuleMixin:
+
+ def __array_function__(self, func, types, args, kwargs):
+ array_module = self.__array_module__(types)
+ if array_module is NotImplemented:
+ return NotImplemented
+
+ # Traverse submodules to find the appropriate function
+ modules = func.__module__.split('.')
+ assert modules[0] == 'numpy'
+            module = array_module
+            for submodule in modules[1:]:
+                module = getattr(module, submodule, None)
+ new_func = getattr(module, func.__name__, None)
+ if new_func is None:
+ return NotImplemented
+
+ return new_func(*args, **kwargs)
+
+To make it easier to write duck arrays, we could also add these mixin classes
+into ``numpy.lib.mixins`` (but the examples above may suffice).
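+
+A hedged sketch of how a duck array library might combine these mixins with its
+own ``__array_module__`` (the names ``MyDuckArray`` and ``my_array_module`` are
+illustrative only):
+
+.. code:: python
+
+    class MyDuckArray(ArrayUfuncFromModuleMixin, ArrayFunctionFromModuleMixin):
+        def __array_module__(self, types):
+            if not all(issubclass(t, MyDuckArray) for t in types):
+                return NotImplemented
+            return my_array_module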
+
+Alternatives considered
+-----------------------
+
+Naming
+======
+
+We like the name ``__array_module__`` because it mirrors the existing
+``__array_function__`` and ``__array_ufunc__`` protocols. Another reasonable
+choice could be ``__array_namespace__``.
+
+It is less clear what the NumPy function that calls this protocol should be
+called (``get_array_module`` in this proposal). Some possible alternatives:
+``array_module``, ``common_array_module``, ``resolve_array_module``,
+``get_namespace``, ``get_numpy``, ``get_numpylike_module``,
+``get_duck_array_module``.
+
+.. _requesting-restricted-subsets:
+
+Requesting restricted subsets of NumPy's API
+============================================
+
+Over time, NumPy has accumulated a very large API surface, with over 600
+attributes in the top level ``numpy`` module alone. It is unlikely that any
+duck array library could or would want to implement all of these functions and
+classes, because the frequently used subset of NumPy is much smaller.
+
+We think it would be a useful exercise to define "minimal" subset(s) of NumPy's
+API, omitting rarely used or non-recommended functionality. For example,
+minimal NumPy might include ``stack``, but not the other stacking functions
+``column_stack``, ``dstack``, ``hstack`` and ``vstack``. This could clearly
+indicate to duck array authors and users which functionality is core and which
+functionality they can skip.
+
+Support for requesting a restricted subset of NumPy's API would be a natural
+feature to include in ``get_array_module`` and ``__array_module__``, e.g.,
+
+.. code:: python
+
+ # array_module is only guaranteed to contain "minimal" NumPy
+ array_module = np.get_array_module(*arrays, request='minimal')
+
+To facilitate testing with NumPy and use with any valid duck array library,
+NumPy itself would return restricted versions of the ``numpy`` module when
+``get_array_module`` is called only on NumPy arrays. Omitted functions would
+simply not exist.
+
+Unfortunately, we have not yet figured out what these restricted subsets should
+be, so it doesn't make sense to do this yet. When/if we do, we could either add
+new keyword arguments to ``get_array_module`` or add new top level functions,
+e.g., ``get_minimal_array_module``. We would also need to add either a new
+protocol patterned off of ``__array_module__`` (e.g.,
+``__array_module_minimal__``), or could add an optional second argument to
+``__array_module__`` (catching errors with ``try``/``except``).
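+
+A rough sketch of the ``try``/``except`` variant (purely illustrative; neither
+the ``request`` argument nor this helper is part of the current proposal):
+
+.. code:: python
+
+    def _call_array_module(array, types, request):
+        try:
+            # newer implementations would accept the optional second argument
+            return array.__array_module__(types, request)
+        except TypeError:
+            # older implementations only accept ``types``
+            return array.__array_module__(types)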
+
+A new namespace for implicit dispatch
+=====================================
+
+Instead of supporting overrides in the main ``numpy`` namespace with
+``__array_function__``, we could create a new opt-in namespace, e.g.,
+``numpy.api``, with versions of NumPy functions that support dispatching. These
+overrides would need new opt-in protocols, e.g., ``__array_function_api__``
+patterned off of ``__array_function__``.
+
+This would resolve the biggest limitations of ``__array_function__`` by being
+opt-in and would also allow for unambiguously overriding functions like
+``asarray``, because ``np.api.asarray`` would always mean "convert an
+array-like object." But it wouldn't solve all the dispatching needs met by
+``__array_module__``, and would leave us with supporting a considerably more
+complex protocol both for array users and implementors.
+
+We could potentially implement such a new namespace *via* the
+``__array_module__`` protocol. Certainly some users would find this convenient,
+because it is slightly less boilerplate. But this would leave users with a
+confusing choice: when should they use ``get_array_module`` vs.
+``np.api.something``? Also, we would have to add and maintain a whole new module,
+which is considerably more expensive than merely adding a function.
+
+Dispatching on both types and arrays instead of only types
+==========================================================
+
+Instead of supporting dispatch only via unique array types, we could also
+support dispatch via array objects, e.g., by passing an ``arrays`` argument as
+part of the ``__array_module__`` protocol. This could potentially be useful for
+dispatch for arrays with metadata, such provided by Dask and Pint, but would
+impose costs in terms of type safety and complexity.
+
+For example, a library that supports arrays on both CPUs and GPUs might decide
+on which device to create new arrays from functions like ``ones`` based on
+input arguments:
+
+.. code:: python
+
+ class Array:
+ def __array_module__(self, types, arrays):
+            useful_arrays = tuple(a for a in arrays if isinstance(a, Array))
+ if not useful_arrays:
+ return NotImplemented
+ prefer_gpu = any(a.prefer_gpu for a in useful_arrays)
+ return ArrayModule(prefer_gpu)
+
+ class ArrayModule:
+ def __init__(self, prefer_gpu):
+ self.prefer_gpu = prefer_gpu
+
+ def __getattr__(self, name):
+ import base_module
+ base_func = getattr(base_module, name)
+ return functools.partial(base_func, prefer_gpu=self.prefer_gpu)
+
+This might be useful, but it's not clear if we really need it. Pint seems to
+get along OK without any explicit array creation routines (favoring
+multiplication by units, e.g., ``np.ones(5) * ureg.m``), and for the most part
+Dask is also OK with existing ``__array_function__`` style overrides (e.g.,
+favoring ``np.ones_like`` over ``np.ones``). Choosing whether to place an array
+on the CPU or GPU could be solved by `making array creation lazy
+<https://github.com/google/jax/pull/1668>`_.
+
+.. _appendix-design-choices:
+
+Appendix: design choices for API overrides
+------------------------------------------
+
+There is a large range of possible design choices for overriding NumPy's API.
+Here we discuss three major axes of the design decision that guided our design
+for ``__array_module__``.
+
+Opt-in vs. opt-out for users
+============================
+
+The ``__array_ufunc__`` and ``__array_function__`` protocols provide a
+mechanism for overriding NumPy functions *within NumPy's existing namespace*.
+This means that users need to explicitly opt-out if they do not want any
+overridden behavior, e.g., by casting arrays with ``np.asarray()``.
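+
+For example (a sketch of the opt-out described above), a library that wants to
+guarantee plain NumPy semantics can simply coerce its inputs:
+
+.. code:: python
+
+    def library_function(x):
+        x = np.asarray(x)  # opt out: always work on a plain numpy.ndarray
+        return x.sum()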
+
+In theory, this approach lowers the barrier for adopting these protocols in
+user code and libraries, because code that uses the standard NumPy namespace is
+automatically compatible. But in practice, this hasn't worked out. For example,
+most well-maintained libraries that use NumPy follow the best practice of
+casting all inputs with ``np.asarray()``, which they would have to explicitly
+relax to use ``__array_function__``. Our experience has been that making a
+library compatible with a new duck array type typically requires at least a
+small amount of work to accommodate differences in the data model and operations
+that can be implemented efficiently.
+
+These opt-out approaches also considerably complicate backwards compatibility
+for libraries that adopt these protocols, because by opting in as a library
+they also opt in their users, whether they expect it or not. For winning over
+libraries that have been unable to adopt ``__array_function__``, an opt-in
+approach seems like a must.
+
+Explicit vs. implicit choice of implementation
+==============================================
+
+Both ``__array_ufunc__`` and ``__array_function__`` have implicit control over
+dispatching: the dispatched functions are determined via the appropriate
+protocols in every function call. This generalizes well to handling many
+different types of objects, as evidenced by its use for implementing arithmetic
+operators in Python, but it has two downsides:
+
+1. *Speed*: it imposes additional overhead in every function call, because each
+ function call needs to inspect each of its arguments for overrides. This is
+ why arithmetic on builtin Python numbers is slow.
+2. *Readability*: it is no longer immediately evident to readers of code what
+ happens when a function is called, because the function's implementation
+ could be overridden by any of its arguments.
+
+In contrast, importing a new library (e.g., ``import dask.array as da``) with
+an API matching NumPy is entirely explicit. There is no overhead from dispatch
+or ambiguity about which implementation is being used.
+
+Explicit and implicit choice of implementations are not mutually exclusive
+options. Indeed, most implementations of NumPy API overrides via
+``__array_function__`` that we are familiar with (namely, dask, CuPy and
+sparse, but not Pint) also include an explicit way to use their version of
+NumPy's API by importing a module directly (``dask.array``, ``cupy`` or
+``sparse``, respectively).
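+
+For example (a sketch using Dask's public API), the explicit spelling is just
+an ordinary import:
+
+.. code:: python
+
+    import dask.array as da
+
+    x = da.ones((1000, 1000), chunks=(100, 100))  # unambiguously a dask array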
+
+Local vs. non-local vs. global control
+======================================
+
+The final design axis is how users control the choice of API:
+
+- **Local control**, as exemplified by multiple dispatch and Python protocols for
+ arithmetic, determines which implementation to use either by checking types
+ or calling methods on the direct arguments of a function.
+- **Non-local control** such as `np.errstate
+ <https://docs.scipy.org/doc/numpy/reference/generated/numpy.errstate.html>`_
+  overrides behavior with global state via function decorators or
+ context-managers. Control is determined hierarchically, via the inner-most
+ context.
+- **Global control** provides a mechanism for users to set default behavior,
+ either via function calls or configuration files. For example, matplotlib
+ allows setting a global choice of plotting backend.
+
+Local control is generally considered a best practice for API design, because
+control flow is entirely explicit, which makes it the easiest to understand.
+Non-local and global control are occasionally used, but generally either due to
+ignorance or a lack of better alternatives.
+
+In the case of duck typing for NumPy's public API, we think non-local or global
+control would be mistakes, mostly because they **don't compose well**. If one
+library sets/needs one set of overrides and then internally calls a routine
+that expects another set of overrides, the resulting behavior may be very
+surprising. Higher order functions are especially problematic, because the
+context in which functions are evaluated may not be the context in which they
+are defined.
+
+One class of override use cases where we think non-local and global control are
+appropriate is for choosing a backend system that is guaranteed to have an
+entirely consistent interface, such as a faster alternative implementation of
+``numpy.fft`` on NumPy arrays. However, these are out of scope for the current
+proposal, which is focused on duck arrays.
diff --git a/doc/source/reference/ufuncs.rst b/doc/source/reference/ufuncs.rst
index 361cf11b9..20c89e0b3 100644
--- a/doc/source/reference/ufuncs.rst
+++ b/doc/source/reference/ufuncs.rst
@@ -569,6 +569,7 @@ Math operations
add
subtract
multiply
+ matmul
divide
logaddexp
logaddexp2
@@ -577,6 +578,7 @@ Math operations
negative
positive
power
+ float_power
remainder
mod
fmod
@@ -635,6 +637,8 @@ The ratio of degrees to radians is :math:`180^{\circ}/\pi.`
arcsinh
arccosh
arctanh
+ degrees
+ radians
deg2rad
rad2deg
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 9679ec6c8..7d12bae41 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -6,6 +6,7 @@ Release Notes
:maxdepth: 3
1.19.0 <release/1.19.0-notes>
+ 1.18.1 <release/1.18.1-notes>
1.18.0 <release/1.18.0-notes>
1.17.5 <release/1.17.5-notes>
1.17.4 <release/1.17.4-notes>
diff --git a/doc/source/release/1.18.1-notes.rst b/doc/source/release/1.18.1-notes.rst
new file mode 100644
index 000000000..8bc502ecb
--- /dev/null
+++ b/doc/source/release/1.18.1-notes.rst
@@ -0,0 +1,52 @@
+.. currentmodule:: numpy
+
+==========================
+NumPy 1.18.1 Release Notes
+==========================
+
+This release contains fixes for bugs reported against NumPy 1.18.0. Two bugs
+in particular that caused widespread problems downstream were:
+
+- The cython random extension test was not using a temporary directory for
+ building, resulting in a permission violation. Fixed.
+
+- Numpy distutils was appending `-std=c99` to all C compiler runs, leading to
+ changed behavior and compile problems downstream. That flag is now only
+ applied when building numpy C code.
+
+The Python versions supported in this release are 3.5-3.8. Downstream
+developers should use Cython >= 0.29.14 for Python 3.8 support and OpenBLAS >=
+3.7 to avoid errors on the Skylake architecture.
+
+Contributors
+============
+
+A total of 7 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Charles Harris
+* Matti Picus
+* Maxwell Aladago
+* Pauli Virtanen
+* Ralf Gommers
+* Tyler Reddy
+* Warren Weckesser
+
+Pull requests merged
+====================
+
+A total of 13 pull requests were merged for this release.
+
+* `#15158 <https://github.com/numpy/numpy/pull/15158>`__: MAINT: Update pavement.py for towncrier.
+* `#15159 <https://github.com/numpy/numpy/pull/15159>`__: DOC: add moved modules to 1.18 release note
+* `#15161 <https://github.com/numpy/numpy/pull/15161>`__: MAINT, DOC: Minor backports and updates for 1.18.x
+* `#15176 <https://github.com/numpy/numpy/pull/15176>`__: TST: Add assert_array_equal test for big integer arrays
+* `#15184 <https://github.com/numpy/numpy/pull/15184>`__: BUG: use tmp dir and check version for cython test (#15170)
+* `#15220 <https://github.com/numpy/numpy/pull/15220>`__: BUG: distutils: fix msvc+gfortran openblas handling corner case
+* `#15221 <https://github.com/numpy/numpy/pull/15221>`__: BUG: remove -std=c99 for c++ compilation (#15194)
+* `#15222 <https://github.com/numpy/numpy/pull/15222>`__: MAINT: unskip test on win32
+* `#15223 <https://github.com/numpy/numpy/pull/15223>`__: TST: add BLAS ILP64 run in Travis & Azure
+* `#15245 <https://github.com/numpy/numpy/pull/15245>`__: MAINT: only add --std=c99 where needed
+* `#15246 <https://github.com/numpy/numpy/pull/15246>`__: BUG: lib: Fix handling of integer arrays by gradient.
+* `#15247 <https://github.com/numpy/numpy/pull/15247>`__: MAINT: Do not use private Python function in testing
+* `#15250 <https://github.com/numpy/numpy/pull/15250>`__: REL: Prepare for the NumPy 1.18.1 release.
diff --git a/numpy/_pytesttester.py b/numpy/_pytesttester.py
index 56eb3ac67..8b6e3217e 100644
--- a/numpy/_pytesttester.py
+++ b/numpy/_pytesttester.py
@@ -164,8 +164,6 @@ class PytestTester:
# Ignore python2.7 -3 warnings
pytest_args += [
- r"-W ignore:in 3\.x, __setslice__:DeprecationWarning",
- r"-W ignore:in 3\.x, __getslice__:DeprecationWarning",
r"-W ignore:buffer\(\) not supported in 3\.x:DeprecationWarning",
r"-W ignore:CObject type is not supported in 3\.x:DeprecationWarning",
r"-W ignore:comparing unequal types not supported in 3\.x:DeprecationWarning",
diff --git a/numpy/core/src/common/ucsnarrow.c b/numpy/core/src/common/ucsnarrow.c
index 125235381..946a72257 100644
--- a/numpy/core/src/common/ucsnarrow.c
+++ b/numpy/core/src/common/ucsnarrow.c
@@ -107,12 +107,11 @@ PyUCS2Buffer_AsUCS4(Py_UNICODE const *ucs2, npy_ucs4 *ucs4, int ucs2len, int ucs
* new_reference: PyUnicodeObject
*/
NPY_NO_EXPORT PyUnicodeObject *
-PyUnicode_FromUCS4(char const *src, Py_ssize_t size, int swap, int align)
+PyUnicode_FromUCS4(char const *src_char, Py_ssize_t size, int swap, int align)
{
Py_ssize_t ucs4len = size / sizeof(npy_ucs4);
- /* FIXME: This is safe, but better to rewrite to not cast away const */
- npy_ucs4 *buf = (npy_ucs4 *)(char *)src;
- int alloc = 0;
+ npy_ucs4 const *src = (npy_ucs4 const *)src_char;
+ npy_ucs4 *buf = NULL;
PyUnicodeObject *ret;
/* swap and align if needed */
@@ -122,22 +121,22 @@ PyUnicode_FromUCS4(char const *src, Py_ssize_t size, int swap, int align)
PyErr_NoMemory();
goto fail;
}
- alloc = 1;
memcpy(buf, src, size);
if (swap) {
byte_swap_vector(buf, ucs4len, sizeof(npy_ucs4));
}
+ src = buf;
}
/* trim trailing zeros */
- while (ucs4len > 0 && buf[ucs4len - 1] == 0) {
+ while (ucs4len > 0 && src[ucs4len - 1] == 0) {
ucs4len--;
}
/* produce PyUnicode object */
#ifdef Py_UNICODE_WIDE
{
- ret = (PyUnicodeObject *)PyUnicode_FromUnicode((Py_UNICODE const*)buf,
+ ret = (PyUnicodeObject *)PyUnicode_FromUnicode((Py_UNICODE const*)src,
(Py_ssize_t) ucs4len);
if (ret == NULL) {
goto fail;
@@ -153,7 +152,7 @@ PyUnicode_FromUCS4(char const *src, Py_ssize_t size, int swap, int align)
PyErr_NoMemory();
goto fail;
}
- ucs2len = PyUCS2Buffer_FromUCS4(tmp, buf, ucs4len);
+ ucs2len = PyUCS2Buffer_FromUCS4(tmp, src, ucs4len);
ret = (PyUnicodeObject *)PyUnicode_FromUnicode(tmp, (Py_ssize_t) ucs2len);
free(tmp);
if (ret == NULL) {
@@ -162,13 +161,13 @@ PyUnicode_FromUCS4(char const *src, Py_ssize_t size, int swap, int align)
}
#endif
- if (alloc) {
+ if (buf) {
free(buf);
}
return ret;
fail:
- if (alloc) {
+ if (buf) {
free(buf);
}
return NULL;
diff --git a/numpy/core/src/multiarray/arrayobject.c b/numpy/core/src/multiarray/arrayobject.c
index 5a7f85b1a..82eda3464 100644
--- a/numpy/core/src/multiarray/arrayobject.c
+++ b/numpy/core/src/multiarray/arrayobject.c
@@ -706,35 +706,37 @@ static int
_myunincmp(npy_ucs4 const *s1, npy_ucs4 const *s2, int len1, int len2)
{
npy_ucs4 const *sptr;
- /* FIXME: Casting away const makes the below easier to write, but should
- * still be safe.
- */
- npy_ucs4 *s1t = (npy_ucs4 *)s1, *s2t = (npy_ucs4 *)s2;
+ npy_ucs4 *s1t = NULL;
+ npy_ucs4 *s2t = NULL;
int val;
npy_intp size;
int diff;
+ /* Replace `s1` and `s2` with aligned copies if needed */
if ((npy_intp)s1 % sizeof(npy_ucs4) != 0) {
size = len1*sizeof(npy_ucs4);
s1t = malloc(size);
memcpy(s1t, s1, size);
+ s1 = s1t;
}
if ((npy_intp)s2 % sizeof(npy_ucs4) != 0) {
size = len2*sizeof(npy_ucs4);
s2t = malloc(size);
memcpy(s2t, s2, size);
+        s2 = s2t;
}
- val = PyArray_CompareUCS4(s1t, s2t, PyArray_MIN(len1,len2));
+
+ val = PyArray_CompareUCS4(s1, s2, PyArray_MIN(len1,len2));
if ((val != 0) || (len1 == len2)) {
goto finish;
}
if (len2 > len1) {
- sptr = s2t+len1;
+ sptr = s2+len1;
val = -1;
diff = len2-len1;
}
else {
- sptr = s1t+len2;
+ sptr = s1+len2;
val = 1;
diff=len1-len2;
}
@@ -747,10 +749,11 @@ _myunincmp(npy_ucs4 const *s1, npy_ucs4 const *s2, int len1, int len2)
val = 0;
finish:
- if (s1t != s1) {
+ /* Cleanup the aligned copies */
+ if (s1t) {
free(s1t);
}
- if (s2t != s2) {
+ if (s2t) {
free(s2t);
}
return val;
diff --git a/numpy/core/src/multiarray/descriptor.c b/numpy/core/src/multiarray/descriptor.c
index 7a94929dd..36b749467 100644
--- a/numpy/core/src/multiarray/descriptor.c
+++ b/numpy/core/src/multiarray/descriptor.c
@@ -43,6 +43,24 @@ static PyArray_Descr *
_use_inherit(PyArray_Descr *type, PyObject *newobj, int *errflag);
static PyArray_Descr *
+_arraydescr_run_converter(PyObject *arg, int align)
+{
+ PyArray_Descr *type = NULL;
+ if (align) {
+ if (PyArray_DescrAlignConverter(arg, &type) == NPY_FAIL) {
+ return NULL;
+ }
+ }
+ else {
+ if (PyArray_DescrConverter(arg, &type) == NPY_FAIL) {
+ return NULL;
+ }
+ }
+ assert(type != NULL);
+ return type;
+}
+
+static PyArray_Descr *
_arraydescr_from_ctypes_type(PyTypeObject *type)
{
PyObject *_numpy_dtype_ctypes;
@@ -232,15 +250,9 @@ _convert_from_tuple(PyObject *obj, int align)
if (PyTuple_GET_SIZE(obj) != 2) {
return NULL;
}
- if (align) {
- if (!PyArray_DescrAlignConverter(PyTuple_GET_ITEM(obj, 0), &type)) {
- return NULL;
- }
- }
- else {
- if (!PyArray_DescrConverter(PyTuple_GET_ITEM(obj, 0), &type)) {
- return NULL;
- }
+ type = _arraydescr_run_converter(PyTuple_GET_ITEM(obj, 0), align);
+ if (type == NULL) {
+ return NULL;
}
val = PyTuple_GET_ITEM(obj,1);
/* try to interpret next item as a type */
@@ -411,7 +423,6 @@ static PyArray_Descr *
_convert_from_array_descr(PyObject *obj, int align)
{
int n, i, totalsize;
- int ret;
PyObject *fields, *item, *newobj;
PyObject *name, *tup, *title;
PyObject *nameslist;
@@ -491,31 +502,22 @@ _convert_from_array_descr(PyObject *obj, int align)
/* Process rest */
if (PyTuple_GET_SIZE(item) == 2) {
- if (align) {
- ret = PyArray_DescrAlignConverter(PyTuple_GET_ITEM(item, 1),
- &conv);
- }
- else {
- ret = PyArray_DescrConverter(PyTuple_GET_ITEM(item, 1), &conv);
+ conv = _arraydescr_run_converter(PyTuple_GET_ITEM(item, 1), align);
+ if (conv == NULL) {
+ goto fail;
}
}
else if (PyTuple_GET_SIZE(item) == 3) {
newobj = PyTuple_GetSlice(item, 1, 3);
- if (align) {
- ret = PyArray_DescrAlignConverter(newobj, &conv);
- }
- else {
- ret = PyArray_DescrConverter(newobj, &conv);
- }
+ conv = _arraydescr_run_converter(newobj, align);
Py_DECREF(newobj);
+ if (conv == NULL) {
+ goto fail;
+ }
}
else {
goto fail;
}
- if (ret == NPY_FAIL) {
- goto fail;
- }
-
if ((PyDict_GetItem(fields, name) != NULL)
|| (title
&& PyBaseString_Check(title)
@@ -616,7 +618,6 @@ _convert_from_list(PyObject *obj, int align)
PyArray_Descr *new;
PyObject *key, *tup;
PyObject *nameslist = NULL;
- int ret;
int maxalign = 0;
/* Types with fields need the Python C API for field access */
char dtypeflags = NPY_NEEDS_PYAPI;
@@ -643,13 +644,8 @@ _convert_from_list(PyObject *obj, int align)
for (i = 0; i < n; i++) {
tup = PyTuple_New(2);
key = PyUString_FromFormat("f%d", i);
- if (align) {
- ret = PyArray_DescrAlignConverter(PyList_GET_ITEM(obj, i), &conv);
- }
- else {
- ret = PyArray_DescrConverter(PyList_GET_ITEM(obj, i), &conv);
- }
- if (ret == NPY_FAIL) {
+ conv = _arraydescr_run_converter(PyList_GET_ITEM(obj, i), align);
+ if (conv == NULL) {
Py_DECREF(tup);
Py_DECREF(key);
goto fail;
@@ -1091,7 +1087,7 @@ _convert_from_dict(PyObject *obj, int align)
totalsize = 0;
for (i = 0; i < n; i++) {
PyObject *tup, *descr, *ind, *title, *name, *off;
- int len, ret, _align = 1;
+ int len, _align = 1;
PyArray_Descr *newdescr;
/* Build item to insert (descr, offset, [title])*/
@@ -1115,14 +1111,9 @@ _convert_from_dict(PyObject *obj, int align)
Py_DECREF(ind);
goto fail;
}
- if (align) {
- ret = PyArray_DescrAlignConverter(descr, &newdescr);
- }
- else {
- ret = PyArray_DescrConverter(descr, &newdescr);
- }
+ newdescr = _arraydescr_run_converter(descr, align);
Py_DECREF(descr);
- if (ret == NPY_FAIL) {
+ if (newdescr == NULL) {
Py_DECREF(tup);
Py_DECREF(ind);
goto fail;
@@ -1168,7 +1159,9 @@ _convert_from_dict(PyObject *obj, int align)
"not divisible by the field alignment %d "
"with align=True",
offset, newdescr->alignment);
- ret = NPY_FAIL;
+ Py_DECREF(ind);
+ Py_DECREF(tup);
+ goto fail;
}
else if (offset + newdescr->elsize > totalsize) {
totalsize = offset + newdescr->elsize;
@@ -1181,11 +1174,6 @@ _convert_from_dict(PyObject *obj, int align)
PyTuple_SET_ITEM(tup, 1, PyInt_FromLong(totalsize));
totalsize += newdescr->elsize;
}
- if (ret == NPY_FAIL) {
- Py_DECREF(ind);
- Py_DECREF(tup);
- goto fail;
- }
if (len == 3) {
PyTuple_SET_ITEM(tup, 2, title);
}
@@ -1223,9 +1211,6 @@ _convert_from_dict(PyObject *obj, int align)
}
}
Py_DECREF(tup);
- if (ret == NPY_FAIL) {
- goto fail;
- }
dtypeflags |= (newdescr->flags & NPY_FROM_FIELDS);
}
@@ -2281,12 +2266,8 @@ arraydescr_new(PyTypeObject *NPY_UNUSED(subtype),
return NULL;
}
- if (align) {
- if (!PyArray_DescrAlignConverter(odescr, &conv)) {
- return NULL;
- }
- }
- else if (!PyArray_DescrConverter(odescr, &conv)) {
+ conv = _arraydescr_run_converter(odescr, align);
+ if (conv == NULL) {
return NULL;
}
@@ -2969,32 +2950,13 @@ PyArray_DescrAlignConverter(PyObject *obj, PyArray_Descr **at)
NPY_NO_EXPORT int
PyArray_DescrAlignConverter2(PyObject *obj, PyArray_Descr **at)
{
- if (PyDict_Check(obj) || PyDictProxy_Check(obj)) {
- *at = _convert_from_dict(obj, 1);
- }
- else if (PyBytes_Check(obj)) {
- *at = _convert_from_commastring(obj, 1);
- }
- else if (PyUnicode_Check(obj)) {
- PyObject *tmp;
- tmp = PyUnicode_AsASCIIString(obj);
- *at = _convert_from_commastring(tmp, 1);
- Py_DECREF(tmp);
- }
- else if (PyList_Check(obj)) {
- *at = _convert_from_array_descr(obj, 1);
+ if (obj == Py_None) {
+ *at = NULL;
+ return NPY_SUCCEED;
}
else {
- return PyArray_DescrConverter2(obj, at);
- }
- if (*at == NULL) {
- if (!PyErr_Occurred()) {
- PyErr_SetString(PyExc_ValueError,
- "data-type-descriptor not understood");
- }
- return NPY_FAIL;
+ return PyArray_DescrAlignConverter(obj, at);
}
- return NPY_SUCCEED;
}
@@ -3293,10 +3255,6 @@ static PyNumberMethods descr_as_number = {
(binaryfunc)0, /* nb_add */
(binaryfunc)0, /* nb_subtract */
(binaryfunc)0, /* nb_multiply */
- #if defined(NPY_PY3K)
- #else
- (binaryfunc)0, /* nb_divide */
- #endif
(binaryfunc)0, /* nb_remainder */
(binaryfunc)0, /* nb_divmod */
(ternaryfunc)0, /* nb_power */
diff --git a/numpy/core/src/multiarray/methods.c b/numpy/core/src/multiarray/methods.c
index ae26bbd4a..55c74eeb2 100644
--- a/numpy/core/src/multiarray/methods.c
+++ b/numpy/core/src/multiarray/methods.c
@@ -2641,51 +2641,6 @@ array_complex(PyArrayObject *self, PyObject *NPY_UNUSED(args))
return c;
}
-#ifndef NPY_PY3K
-
-static PyObject *
-array_getslice(PyArrayObject *self, PyObject *args)
-{
- PyObject *start, *stop, *slice, *result;
- if (!PyArg_ParseTuple(args, "OO:__getslice__", &start, &stop)) {
- return NULL;
- }
-
- slice = PySlice_New(start, stop, NULL);
- if (slice == NULL) {
- return NULL;
- }
-
- /* Deliberately delegate to subclasses */
- result = PyObject_GetItem((PyObject *)self, slice);
- Py_DECREF(slice);
- return result;
-}
-
-static PyObject *
-array_setslice(PyArrayObject *self, PyObject *args)
-{
- PyObject *start, *stop, *value, *slice;
- if (!PyArg_ParseTuple(args, "OOO:__setslice__", &start, &stop, &value)) {
- return NULL;
- }
-
- slice = PySlice_New(start, stop, NULL);
- if (slice == NULL) {
- return NULL;
- }
-
- /* Deliberately delegate to subclasses */
- if (PyObject_SetItem((PyObject *)self, slice, value) < 0) {
- Py_DECREF(slice);
- return NULL;
- }
- Py_DECREF(slice);
- Py_RETURN_NONE;
-}
-
-#endif
-
NPY_NO_EXPORT PyMethodDef array_methods[] = {
/* for subtypes */
@@ -2705,12 +2660,6 @@ NPY_NO_EXPORT PyMethodDef array_methods[] = {
(PyCFunction)array_function,
METH_VARARGS | METH_KEYWORDS, NULL},
-#ifndef NPY_PY3K
- {"__unicode__",
- (PyCFunction)array_unicode,
- METH_NOARGS, NULL},
-#endif
-
/* for the sys module */
{"__sizeof__",
(PyCFunction) array_sizeof,
@@ -2749,23 +2698,6 @@ NPY_NO_EXPORT PyMethodDef array_methods[] = {
(PyCFunction) array_format,
METH_VARARGS, NULL},
-#ifndef NPY_PY3K
- /*
- * While we could put these in `tp_sequence`, its' easier to define them
- * in terms of PyObject* arguments.
- *
- * We must provide these for compatibility with code that calls them
- * directly. They are already deprecated at a language level in python 2.7,
- * but are removed outright in python 3.
- */
- {"__getslice__",
- (PyCFunction) array_getslice,
- METH_VARARGS, NULL},
- {"__setslice__",
- (PyCFunction) array_setslice,
- METH_VARARGS, NULL},
-#endif
-
/* Original and Extended methods added 2005 */
{"all",
(PyCFunction)array_all,
diff --git a/numpy/core/src/multiarray/number.c b/numpy/core/src/multiarray/number.c
index dabc866ff..0091dcd74 100644
--- a/numpy/core/src/multiarray/number.c
+++ b/numpy/core/src/multiarray/number.c
@@ -32,10 +32,6 @@ static PyObject *
array_inplace_subtract(PyArrayObject *m1, PyObject *m2);
static PyObject *
array_inplace_multiply(PyArrayObject *m1, PyObject *m2);
-#if !defined(NPY_PY3K)
-static PyObject *
-array_inplace_divide(PyArrayObject *m1, PyObject *m2);
-#endif
static PyObject *
array_inplace_true_divide(PyArrayObject *m1, PyObject *m2);
static PyObject *
@@ -353,20 +349,6 @@ array_multiply(PyArrayObject *m1, PyObject *m2)
return PyArray_GenericBinaryFunction(m1, m2, n_ops.multiply);
}
-#if !defined(NPY_PY3K)
-static PyObject *
-array_divide(PyArrayObject *m1, PyObject *m2)
-{
- PyObject *res;
-
- BINOP_GIVE_UP_IF_NEEDED(m1, m2, nb_divide, array_divide);
- if (try_binary_elide(m1, m2, &array_inplace_divide, &res, 0)) {
- return res;
- }
- return PyArray_GenericBinaryFunction(m1, m2, n_ops.divide);
-}
-#endif
-
static PyObject *
array_remainder(PyArrayObject *m1, PyObject *m2)
{
@@ -728,16 +710,6 @@ array_inplace_multiply(PyArrayObject *m1, PyObject *m2)
return PyArray_GenericInplaceBinaryFunction(m1, m2, n_ops.multiply);
}
-#if !defined(NPY_PY3K)
-static PyObject *
-array_inplace_divide(PyArrayObject *m1, PyObject *m2)
-{
- INPLACE_GIVE_UP_IF_NEEDED(
- m1, m2, nb_inplace_divide, array_inplace_divide);
- return PyArray_GenericInplaceBinaryFunction(m1, m2, n_ops.divide);
-}
-#endif
-
static PyObject *
array_inplace_remainder(PyArrayObject *m1, PyObject *m2)
{
@@ -1008,9 +980,6 @@ NPY_NO_EXPORT PyNumberMethods array_as_number = {
(binaryfunc)array_add, /*nb_add*/
(binaryfunc)array_subtract, /*nb_subtract*/
(binaryfunc)array_multiply, /*nb_multiply*/
-#if !defined(NPY_PY3K)
- (binaryfunc)array_divide, /*nb_divide*/
-#endif
(binaryfunc)array_remainder, /*nb_remainder*/
(binaryfunc)array_divmod, /*nb_divmod*/
(ternaryfunc)array_power, /*nb_power*/
@@ -1046,9 +1015,6 @@ NPY_NO_EXPORT PyNumberMethods array_as_number = {
(binaryfunc)array_inplace_add, /*nb_inplace_add*/
(binaryfunc)array_inplace_subtract, /*nb_inplace_subtract*/
(binaryfunc)array_inplace_multiply, /*nb_inplace_multiply*/
-#if !defined(NPY_PY3K)
- (binaryfunc)array_inplace_divide, /*nb_inplace_divide*/
-#endif
(binaryfunc)array_inplace_remainder, /*nb_inplace_remainder*/
(ternaryfunc)array_inplace_power, /*nb_inplace_power*/
(binaryfunc)array_inplace_left_shift, /*nb_inplace_lshift*/
diff --git a/numpy/core/src/multiarray/scalartypes.c.src b/numpy/core/src/multiarray/scalartypes.c.src
index 724b53c18..816f76eab 100644
--- a/numpy/core/src/multiarray/scalartypes.c.src
+++ b/numpy/core/src/multiarray/scalartypes.c.src
@@ -226,20 +226,6 @@ gentype_@name@(PyObject *m1, PyObject *m2)
/**end repeat**/
-#if !defined(NPY_PY3K)
-/**begin repeat
- *
- * #name = divide#
- */
-static PyObject *
-gentype_@name@(PyObject *m1, PyObject *m2)
-{
- BINOP_GIVE_UP_IF_NEEDED(m1, m2, nb_@name@, gentype_@name@);
- return PyArray_Type.tp_as_number->nb_@name@(m1, m2);
-}
-/**end repeat**/
-#endif
-
/* Get a nested slot, or NULL if absent */
#define GET_NESTED_SLOT(type, group, slot) \
((type)->group == NULL ? NULL : (type)->group->slot)
@@ -1104,9 +1090,6 @@ static PyNumberMethods gentype_as_number = {
(binaryfunc)gentype_add, /*nb_add*/
(binaryfunc)gentype_subtract, /*nb_subtract*/
(binaryfunc)gentype_multiply, /*nb_multiply*/
-#if !defined(NPY_PY3K)
- (binaryfunc)gentype_divide, /*nb_divide*/
-#endif
(binaryfunc)gentype_remainder, /*nb_remainder*/
(binaryfunc)gentype_divmod, /*nb_divmod*/
(ternaryfunc)gentype_power, /*nb_power*/
@@ -1137,9 +1120,6 @@ static PyNumberMethods gentype_as_number = {
0, /*inplace_add*/
0, /*inplace_subtract*/
0, /*inplace_multiply*/
-#if !defined(NPY_PY3K)
- 0, /*inplace_divide*/
-#endif
0, /*inplace_remainder*/
0, /*inplace_power*/
0, /*inplace_lshift*/
@@ -3034,10 +3014,6 @@ NPY_NO_EXPORT PyNumberMethods bool_arrtype_as_number = {
0, /* nb_add */
0, /* nb_subtract */
0, /* nb_multiply */
-#if defined(NPY_PY3K)
-#else
- 0, /* nb_divide */
-#endif
0, /* nb_remainder */
0, /* nb_divmod */
0, /* nb_power */
@@ -3071,10 +3047,6 @@ NPY_NO_EXPORT PyNumberMethods bool_arrtype_as_number = {
0, /* nb_inplace_add */
0, /* nb_inplace_subtract */
0, /* nb_inplace_multiply */
-#if defined(NPY_PY3K)
-#else
- 0, /* nb_inplace_divide */
-#endif
0, /* nb_inplace_remainder */
0, /* nb_inplace_power */
0, /* nb_inplace_lshift */
diff --git a/numpy/core/src/multiarray/strfuncs.c b/numpy/core/src/multiarray/strfuncs.c
index 33f3a6543..b570aec08 100644
--- a/numpy/core/src/multiarray/strfuncs.c
+++ b/numpy/core/src/multiarray/strfuncs.c
@@ -226,34 +226,3 @@ array_format(PyArrayObject *self, PyObject *args)
}
}
-#ifndef NPY_PY3K
-
-NPY_NO_EXPORT PyObject *
-array_unicode(PyArrayObject *self)
-{
- PyObject *uni;
-
- if (PyArray_NDIM(self) == 0) {
- PyObject *item = PyArray_ToScalar(PyArray_DATA(self), self);
- if (item == NULL){
- return NULL;
- }
-
- /* defer to invoking `unicode` on the scalar */
- uni = PyObject_CallFunctionObjArgs(
- (PyObject *)&PyUnicode_Type, item, NULL);
- Py_DECREF(item);
- }
- else {
- /* Do what unicode(self) would normally do */
- PyObject *str = PyObject_Str((PyObject *)self);
- if (str == NULL){
- return NULL;
- }
- uni = PyUnicode_FromObject(str);
- Py_DECREF(str);
- }
- return uni;
-}
-
-#endif
diff --git a/numpy/core/src/multiarray/strfuncs.h b/numpy/core/src/multiarray/strfuncs.h
index 7e869d926..5dd661a20 100644
--- a/numpy/core/src/multiarray/strfuncs.h
+++ b/numpy/core/src/multiarray/strfuncs.h
@@ -13,9 +13,4 @@ array_str(PyArrayObject *self);
NPY_NO_EXPORT PyObject *
array_format(PyArrayObject *self, PyObject *args);
-#ifndef NPY_PY3K
- NPY_NO_EXPORT PyObject *
- array_unicode(PyArrayObject *self);
-#endif
-
#endif
diff --git a/numpy/core/src/umath/scalarmath.c.src b/numpy/core/src/umath/scalarmath.c.src
index d5d8d659b..7f115974d 100644
--- a/numpy/core/src/umath/scalarmath.c.src
+++ b/numpy/core/src/umath/scalarmath.c.src
@@ -744,56 +744,50 @@ _@name@_convert2_to_ctypes(PyObject *a, @type@ *arg1,
/**end repeat**/
-#if defined(NPY_PY3K)
-#define CODEGEN_SKIP_divide_FLAG
-#endif
-
/**begin repeat
*
* #name = (byte, ubyte, short, ushort, int, uint,
- * long, ulong, longlong, ulonglong)*13,
+ * long, ulong, longlong, ulonglong)*12,
* (half, float, double, longdouble,
- * cfloat, cdouble, clongdouble)*6,
+ * cfloat, cdouble, clongdouble)*5,
* (half, float, double, longdouble)*2#
* #Name = (Byte, UByte, Short, UShort, Int, UInt,
- * Long, ULong,LongLong,ULongLong)*13,
+ * Long, ULong,LongLong,ULongLong)*12,
* (Half, Float, Double, LongDouble,
- * CFloat, CDouble, CLongDouble)*6,
+ * CFloat, CDouble, CLongDouble)*5,
* (Half, Float, Double, LongDouble)*2#
* #type = (npy_byte, npy_ubyte, npy_short, npy_ushort, npy_int, npy_uint,
- * npy_long, npy_ulong, npy_longlong, npy_ulonglong)*13,
+ * npy_long, npy_ulong, npy_longlong, npy_ulonglong)*12,
* (npy_half, npy_float, npy_double, npy_longdouble,
- * npy_cfloat, npy_cdouble, npy_clongdouble)*6,
+ * npy_cfloat, npy_cdouble, npy_clongdouble)*5,
* (npy_half, npy_float, npy_double, npy_longdouble)*2#
*
- * #oper = add*10, subtract*10, multiply*10, divide*10, remainder*10,
+ * #oper = add*10, subtract*10, multiply*10, remainder*10,
* divmod*10, floor_divide*10, lshift*10, rshift*10, and*10,
* or*10, xor*10, true_divide*10,
- * add*7, subtract*7, multiply*7, divide*7, floor_divide*7, true_divide*7,
+ * add*7, subtract*7, multiply*7, floor_divide*7, true_divide*7,
* divmod*4, remainder*4#
*
- * #fperr = 1*70,0*50,1*10,
- * 1*42,
+ * #fperr = 1*60,0*50,1*10,
+ * 1*35,
* 1*8#
- * #twoout = 0*50,1*10,0*70,
- * 0*42,
+ * #twoout = 0*40,1*10,0*70,
+ * 0*35,
* 1*4,0*4#
* #otype = (npy_byte, npy_ubyte, npy_short, npy_ushort, npy_int, npy_uint,
- * npy_long, npy_ulong, npy_longlong, npy_ulonglong)*12,
+ * npy_long, npy_ulong, npy_longlong, npy_ulonglong)*11,
* npy_float*4, npy_double*6,
* (npy_half, npy_float, npy_double, npy_longdouble,
- * npy_cfloat, npy_cdouble, npy_clongdouble)*6,
+ * npy_cfloat, npy_cdouble, npy_clongdouble)*5,
* (npy_half, npy_float, npy_double, npy_longdouble)*2#
* #OName = (Byte, UByte, Short, UShort, Int, UInt,
- * Long, ULong, LongLong, ULongLong)*12,
+ * Long, ULong, LongLong, ULongLong)*11,
* Float*4, Double*6,
* (Half, Float, Double, LongDouble,
- * CFloat, CDouble, CLongDouble)*6,
+ * CFloat, CDouble, CLongDouble)*5,
* (Half, Float, Double, LongDouble)*2#
*/
-#if !defined(CODEGEN_SKIP_@oper@_FLAG)
-
static PyObject *
@name@_@oper@(PyObject *a, PyObject *b)
{
@@ -904,12 +898,9 @@ static PyObject *
#endif
return ret;
}
-#endif
/**end repeat**/
-#undef CODEGEN_SKIP_divide_FLAG
-
#define _IS_ZERO(x) (x == 0)
/**begin repeat
@@ -1597,9 +1588,6 @@ static PyNumberMethods @name@_as_number = {
(binaryfunc)@name@_add, /*nb_add*/
(binaryfunc)@name@_subtract, /*nb_subtract*/
(binaryfunc)@name@_multiply, /*nb_multiply*/
-#if !defined(NPY_PY3K)
- (binaryfunc)@name@_divide, /*nb_divide*/
-#endif
(binaryfunc)@name@_remainder, /*nb_remainder*/
(binaryfunc)@name@_divmod, /*nb_divmod*/
(ternaryfunc)@name@_power, /*nb_power*/
@@ -1634,9 +1622,6 @@ static PyNumberMethods @name@_as_number = {
0, /*inplace_add*/
0, /*inplace_subtract*/
0, /*inplace_multiply*/
-#if !defined(NPY_PY3K)
- 0, /*inplace_divide*/
-#endif
0, /*inplace_remainder*/
0, /*inplace_power*/
0, /*inplace_lshift*/
diff --git a/numpy/core/tests/test_indexing.py b/numpy/core/tests/test_indexing.py
index 56bcf0177..237e381a7 100644
--- a/numpy/core/tests/test_indexing.py
+++ b/numpy/core/tests/test_indexing.py
@@ -648,40 +648,6 @@ class TestSubclasses:
assert_array_equal(new_s.finalize_status, new_s)
assert_array_equal(new_s.old, s)
- @pytest.mark.skipif(not HAS_REFCOUNT, reason="Python lacks refcounts")
- def test_slice_decref_getsetslice(self):
- # See gh-10066, a temporary slice object should be discarted.
- # This test is only really interesting on Python 2 since
- # it goes through `__set/getslice__` here and can probably be
- # removed. Use 0:7 to make sure it is never None:7.
- class KeepIndexObject(np.ndarray):
- def __getitem__(self, indx):
- self.indx = indx
- if indx == slice(0, 7):
- raise ValueError
-
- def __setitem__(self, indx, val):
- self.indx = indx
- if indx == slice(0, 4):
- raise ValueError
-
- k = np.array([1]).view(KeepIndexObject)
- k[0:5]
- assert_equal(k.indx, slice(0, 5))
- assert_equal(sys.getrefcount(k.indx), 2)
- with assert_raises(ValueError):
- k[0:7]
- assert_equal(k.indx, slice(0, 7))
- assert_equal(sys.getrefcount(k.indx), 2)
-
- k[0:3] = 6
- assert_equal(k.indx, slice(0, 3))
- assert_equal(sys.getrefcount(k.indx), 2)
- with assert_raises(ValueError):
- k[0:4] = 2
- assert_equal(k.indx, slice(0, 4))
- assert_equal(sys.getrefcount(k.indx), 2)
-
class TestFancyIndexingCast:
def test_boolean_index_cast_assign(self):
diff --git a/numpy/testing/_private/nosetester.py b/numpy/testing/_private/nosetester.py
index 73f5b3d35..4ca5267ce 100644
--- a/numpy/testing/_private/nosetester.py
+++ b/numpy/testing/_private/nosetester.py
@@ -454,8 +454,6 @@ class NoseTester:
# This is very specific, so using the fragile module filter
# is fine
import threading
- sup.filter(DeprecationWarning, message=r"in 3\.x, __setslice__")
- sup.filter(DeprecationWarning, message=r"in 3\.x, __getslice__")
sup.filter(DeprecationWarning, message=r"buffer\(\) not supported in 3\.x")
sup.filter(DeprecationWarning, message=r"CObject type is not supported in 3\.x")
sup.filter(DeprecationWarning, message=r"comparing unequal types not supported in 3\.x")
diff --git a/pytest.ini b/pytest.ini
index 045406f68..141c2f6ef 100644
--- a/pytest.ini
+++ b/pytest.ini
@@ -14,8 +14,6 @@ filterwarnings =
# Matrix PendingDeprecationWarning.
ignore:the matrix subclass is not
# Ignore python2.7 -3 warnings
- ignore:in 3\.x, __setslice__:DeprecationWarning
- ignore:in 3\.x, __getslice__:DeprecationWarning
ignore:buffer\(\) not supported in 3\.x:DeprecationWarning
ignore:CObject type is not supported in 3\.x:DeprecationWarning
ignore:comparing unequal types not supported in 3\.x:DeprecationWarning