Diffstat (limited to 'doc/source/reference')
-rw-r--r--  doc/source/reference/arrays.classes.rst                       |   8
-rw-r--r--  doc/source/reference/arrays.datetime.rst                      |   2
-rw-r--r--  doc/source/reference/arrays.dtypes.rst                        |  53
-rw-r--r--  doc/source/reference/arrays.indexing.rst                      |  21
-rw-r--r--  doc/source/reference/arrays.interface.rst                     | 113
-rw-r--r--  doc/source/reference/arrays.ndarray.rst                       |  20
-rw-r--r--  doc/source/reference/arrays.nditer.cython.rst                 |   2
-rw-r--r--  doc/source/reference/arrays.scalars.rst                       | 374
-rw-r--r--  doc/source/reference/c-api/array.rst                          | 412
-rw-r--r--  doc/source/reference/c-api/config.rst                         |  79
-rw-r--r--  doc/source/reference/c-api/coremath.rst                       |  46
-rw-r--r--  doc/source/reference/c-api/deprecations.rst                   |   4
-rw-r--r--  doc/source/reference/c-api/dtype.rst                          |  45
-rw-r--r--  doc/source/reference/c-api/iterator.rst                       |  60
-rw-r--r--  doc/source/reference/c-api/types-and-structures.rst           | 586
-rw-r--r--  doc/source/reference/c-api/ufunc.rst                          | 169
-rw-r--r--  doc/source/reference/global_state.rst                         |  15
-rw-r--r--  doc/source/reference/internals.code-explanations.rst          |   7
-rw-r--r--  doc/source/reference/internals.rst                            | 158
-rw-r--r--  doc/source/reference/maskedarray.baseclass.rst                |   4
-rw-r--r--  doc/source/reference/maskedarray.generic.rst                  |  18
-rw-r--r--  doc/source/reference/random/c-api.rst                         |   3
-rw-r--r--  doc/source/reference/random/generator.rst                     |  94
-rw-r--r--  doc/source/reference/random/legacy.rst                        |   2
-rw-r--r--  doc/source/reference/routines.array-manipulation.rst          |   1
-rw-r--r--  doc/source/reference/routines.char.rst                        |   2
-rw-r--r--  doc/source/reference/routines.ctypeslib.rst                   |   1
-rw-r--r--  doc/source/reference/routines.financial.rst                   |  21
-rw-r--r--  doc/source/reference/routines.indexing.rst                    |   1
-rw-r--r--  doc/source/reference/routines.io.rst                          |   2
-rw-r--r--  doc/source/reference/routines.ma.rst                          |   5
-rw-r--r--  doc/source/reference/routines.other.rst                       |   1
-rw-r--r--  doc/source/reference/routines.rst                             |   1
-rw-r--r--  doc/source/reference/routines.set.rst                         |   5
-rw-r--r--  doc/source/reference/simd/simd-optimizations-tables-diff.inc  |  37
-rw-r--r--  doc/source/reference/simd/simd-optimizations-tables.inc       | 165
-rw-r--r--  doc/source/reference/simd/simd-optimizations.py               | 236
-rw-r--r--  doc/source/reference/simd/simd-optimizations.rst              |  42
-rw-r--r--  doc/source/reference/ufuncs.rst                               |  22
39 files changed, 1723 insertions, 1114 deletions
diff --git a/doc/source/reference/arrays.classes.rst b/doc/source/reference/arrays.classes.rst
index c5563bddd..3a4ed2168 100644
--- a/doc/source/reference/arrays.classes.rst
+++ b/doc/source/reference/arrays.classes.rst
@@ -480,16 +480,16 @@ Character arrays (:mod:`numpy.char`)
The `chararray` class exists for backwards compatibility with
Numarray; it is not recommended for new development. Starting from numpy
1.4, if one needs arrays of strings, it is recommended to use arrays of
- `dtype` `object_`, `string_` or `unicode_`, and use the free functions
+ `dtype` `object_`, `bytes_` or `str_`, and use the free functions
in the `numpy.char` module for fast vectorized string operations.
-These are enhanced arrays of either :class:`string_` type or
-:class:`unicode_` type. These arrays inherit from the
+These are enhanced arrays of either :class:`str_` type or
+:class:`bytes_` type. These arrays inherit from the
:class:`ndarray`, but specially-define the operations ``+``, ``*``,
and ``%`` on a (broadcasting) element-by-element basis. These
operations are not available on the standard :class:`ndarray` of
character type. In addition, the :class:`chararray` has all of the
-standard :class:`string <str>` (and :class:`unicode`) methods,
+standard :class:`str` (and :class:`bytes`) methods,
executing them on an element-by-element basis. Perhaps the easiest
way to create a chararray is to use :meth:`self.view(chararray)
<ndarray.view>` where *self* is an ndarray of str or unicode
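As a minimal sketch of the recommended approach, the free functions in
:mod:`numpy.char` operate element-by-element on ordinary arrays of
:class:`str_` or :class:`bytes_` dtype::

    >>> import numpy as np
    >>> names = np.array(['alice', 'bob'], dtype=np.str_)
    >>> np.char.upper(names)
    array(['ALICE', 'BOB'], dtype='<U5')
    >>> np.char.add(names, '!')
    array(['alice!', 'bob!'], dtype='<U6')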
diff --git a/doc/source/reference/arrays.datetime.rst b/doc/source/reference/arrays.datetime.rst
index 9ce77424a..c5947620e 100644
--- a/doc/source/reference/arrays.datetime.rst
+++ b/doc/source/reference/arrays.datetime.rst
@@ -218,7 +218,7 @@ And here are the time units:
m minute +/- 1.7e13 years [1.7e13 BC, 1.7e13 AD]
s second +/- 2.9e11 years [2.9e11 BC, 2.9e11 AD]
ms millisecond +/- 2.9e8 years [ 2.9e8 BC, 2.9e8 AD]
- us microsecond +/- 2.9e5 years [290301 BC, 294241 AD]
+us / μs microsecond +/- 2.9e5 years [290301 BC, 294241 AD]
ns nanosecond +/- 292 years [ 1678 AD, 2262 AD]
ps picosecond +/- 106 days [ 1969 AD, 1970 AD]
fs femtosecond +/- 2.6 hours [ 1969 AD, 1970 AD]
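As a quick sketch of the microsecond row above (shown with the ASCII ``us``
spelling only)::

    >>> import numpy as np
    >>> np.dtype('datetime64[us]')
    dtype('<M8[us]')
    >>> np.datetime64('2005-02-25T03:30', 'us')
    numpy.datetime64('2005-02-25T03:30:00.000000')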
diff --git a/doc/source/reference/arrays.dtypes.rst b/doc/source/reference/arrays.dtypes.rst
index 8afbaeacc..b5ffa1a8b 100644
--- a/doc/source/reference/arrays.dtypes.rst
+++ b/doc/source/reference/arrays.dtypes.rst
@@ -122,14 +122,12 @@ constructor:
What can be converted to a data-type object is described below:
:class:`dtype` object
-
.. index::
triple: dtype; construction; from dtype
Used as-is.
None
-
.. index::
triple: dtype; construction; from None
@@ -139,7 +137,6 @@ None
triple: dtype; construction; from type
Array-scalar types
-
The 24 built-in :ref:`array scalar type objects
<arrays.scalars.built-in>` all convert to an associated data-type object.
This is true for their sub-classes as well.
@@ -155,15 +152,6 @@ Array-scalar types
>>> dt = np.dtype(np.complex128) # 128-bit complex floating-point number
Generic types
-
- .. deprecated NumPy 1.19::
-
- The use of generic types is deprecated. This is because it can be
- unexpected in a context such as ``arr.astype(dtype=np.floating)``.
- ``arr.astype(dtype=np.floating)`` which casts an array of ``float32``
- to an array of ``float64``, even though ``float32`` is a subdtype of
- ``np.floating``.
-
The generic hierarchical type objects convert to corresponding
type objects according to the associations:
@@ -176,8 +164,16 @@ Generic types
:class:`generic`, :class:`flexible` :class:`void`
===================================================== ===============
-Built-in Python types
+ .. deprecated:: 1.19
+
+ This conversion of generic scalar types is deprecated.
+ This is because it can be unexpected in a context such as
+ ``arr.astype(dtype=np.floating)``, which casts an array of ``float32``
+ to an array of ``float64``, even though ``float32`` is a subdtype of
+ ``np.floating``.
+
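A minimal sketch of the surprising cast described in the deprecation note
(it may also emit a ``DeprecationWarning`` on NumPy >= 1.19)::

    >>> import numpy as np
    >>> arr = np.arange(3, dtype=np.float32)
    >>> arr.astype(np.floating).dtype   # upcasts, even though float32 is a subtype
    dtype('float64')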
+Built-in Python types
Several python types are equivalent to a corresponding
array scalar when used to generate a :class:`dtype` object:
@@ -209,7 +205,6 @@ Built-in Python types
that such types may map to a specific (new) dtype in the future.
Types with ``.dtype``
-
Any type object with a ``dtype`` attribute: The attribute will be
accessed and used directly. The attribute must return something
that is convertible into a dtype object.
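As a sketch, any (hypothetical) class carrying a ``dtype`` attribute can be
passed to the constructor directly::

    >>> import numpy as np
    >>> class HasDtype:                  # hypothetical example class
    ...     dtype = np.dtype(np.float64)
    ...
    >>> np.dtype(HasDtype)
    dtype('float64')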
@@ -223,7 +218,6 @@ prepended with ``'>'`` (:term:`big-endian`), ``'<'``
specify the byte order.
One-character strings
-
Each built-in data-type has a character code
(the updated Numeric typecodes), that uniquely identifies it.
@@ -235,7 +229,6 @@ One-character strings
>>> dt = np.dtype('d') # double-precision floating-point number
Array-protocol type strings (see :ref:`arrays.interface`)
-
The first character specifies the kind of data and the remaining
characters specify the number of bytes per item, except for Unicode,
where it is interpreted as the number of characters. The item size
@@ -271,14 +264,12 @@ Array-protocol type strings (see :ref:`arrays.interface`)
.. admonition:: Note on string types
For backward compatibility with Python 2 the ``S`` and ``a`` typestrings
- remain zero-terminated bytes and ``np.string_`` continues to map to
- ``np.bytes_``.
- To use actual strings in Python 3 use ``U`` or ``np.unicode_``.
+ remain zero-terminated bytes and `numpy.string_` continues to alias
+ `numpy.bytes_`. To use actual strings in Python 3 use ``U`` or `numpy.str_`.
For signed bytes that do not need zero-termination ``b`` or ``i1`` can be
used.
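A brief sketch of the distinction described in the note::

    >>> import numpy as np
    >>> np.array(['abc'], dtype='S3')   # zero-terminated bytes
    array([b'abc'], dtype='|S3')
    >>> np.array(['abc'], dtype='U3')   # actual (unicode) strings
    array(['abc'], dtype='<U3')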
String with comma-separated fields
-
A short-hand notation for specifying the format of a structured data type is
a comma-separated string of basic formats.
@@ -310,7 +301,6 @@ String with comma-separated fields
>>> dt = np.dtype("a3, 3u8, (3,4)a10")
Type strings
-
Any string in :obj:`numpy.sctypeDict`.keys():
.. admonition:: Example
@@ -322,7 +312,6 @@ Type strings
triple: dtype; construction; from tuple
``(flexible_dtype, itemsize)``
-
The first argument must be an object that is converted to a
zero-sized flexible data-type object, the second argument is
an integer providing the desired itemsize.
@@ -333,7 +322,6 @@ Type strings
>>> dt = np.dtype(('U', 10)) # 10-character unicode string
``(fixed_dtype, shape)``
-
.. index::
pair: dtype; sub-array
@@ -354,10 +342,9 @@ Type strings
triple: dtype; construction; from list
``[(field_name, field_dtype, field_shape), ...]``
-
*obj* should be a list of fields where each field is described by a
tuple of length 2 or 3. (Equivalent to the ``descr`` item in the
- :obj:`__array_interface__` attribute.)
+ :obj:`~object.__array_interface__` attribute.)
The first element, *field_name*, is the field name (if this is
``''`` then a standard field name, ``'f#'``, is assigned). The
@@ -394,7 +381,6 @@ Type strings
triple: dtype; construction; from dict
``{'names': ..., 'formats': ..., 'offsets': ..., 'titles': ..., 'itemsize': ...}``
-
This style has two required and three optional keys. The *names*
and *formats* keys are required. Their respective values are
equal-length lists with the field names and the field formats.
@@ -405,9 +391,9 @@ Type strings
their values must each be lists of the same length as the *names*
and *formats* lists. The *offsets* value is a list of byte offsets
(limited to `ctypes.c_int`) for each field, while the *titles* value is a
- list of titles for each field (None can be used if no title is
- desired for that field). The *titles* can be any :class:`string`
- or :class:`unicode` object and will add another entry to the
+ list of titles for each field (``None`` can be used if no title is
+  desired for that field). The *titles* can be any object, but when it is a
+  :class:`str` object it will add another entry to the
fields dictionary keyed by the title and referencing the same
field tuple which will contain the title as an additional tuple
member.
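A minimal sketch of this keyword form, including per-field titles::

    >>> import numpy as np
    >>> dt = np.dtype({'names': ['r', 'g'],
    ...                'formats': [np.uint8, np.uint8],
    ...                'offsets': [0, 1],
    ...                'titles': ['red', 'green']})
    >>> dt.fields['red'] == dt.fields['r']   # title keys alias the field keys
    True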
@@ -436,7 +422,6 @@ Type strings
``{'field1': ..., 'field2': ..., ...}``
-
This usage is discouraged, because it is ambiguous with the
other dict-based construction method. If you have a field
called 'names' and a field called 'formats' there will be
@@ -458,7 +443,6 @@ Type strings
... 'col3': (int, 14)})
``(base_dtype, new_dtype)``
-
In NumPy 1.7 and later, this form allows `base_dtype` to be interpreted as
a structured dtype. Arrays created with this dtype will have underlying
dtype `base_dtype` but will have fields and flags taken from `new_dtype`.
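A minimal sketch of this form, viewing a 32-bit integer as two 16-bit fields::

    >>> import numpy as np
    >>> dt = np.dtype((np.int32, {'real': (np.int16, 0), 'imag': (np.int16, 2)}))
    >>> dt.itemsize                 # itemsize comes from the base dtype
    4
    >>> dt.names                    # fields come from the new dtype
    ('real', 'imag')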
@@ -553,6 +537,13 @@ Attributes providing additional information:
dtype.alignment
dtype.base
+Metadata attached by the user:
+
+.. autosummary::
+ :toctree: generated/
+
+ dtype.metadata
+
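A minimal sketch of attaching and reading such metadata::

    >>> import numpy as np
    >>> dt = np.dtype(np.float64, metadata={'units': 'V'})
    >>> dt.metadata
    mappingproxy({'units': 'V'})
    >>> np.dtype(np.float64).metadata is None   # no metadata by default
    True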
Methods
-------
diff --git a/doc/source/reference/arrays.indexing.rst b/doc/source/reference/arrays.indexing.rst
index 56b99f272..9f82875ea 100644
--- a/doc/source/reference/arrays.indexing.rst
+++ b/doc/source/reference/arrays.indexing.rst
@@ -34,7 +34,7 @@ Basic Slicing and Indexing
Basic slicing extends Python's basic concept of slicing to N
dimensions. Basic slicing occurs when *obj* is a :class:`slice` object
(constructed by ``start:stop:step`` notation inside of brackets), an
-integer, or a tuple of slice objects and integers. :const:`Ellipsis`
+integer, or a tuple of slice objects and integers. :py:data:`Ellipsis`
and :const:`newaxis` objects can be interspersed with these as
well.
@@ -43,7 +43,7 @@ well.
In order to remain backward compatible with a common usage in
Numeric, basic slicing is also initiated if the selection object is
any non-ndarray and non-tuple sequence (such as a :class:`list`) containing
- :class:`slice` objects, the :const:`Ellipsis` object, or the :const:`newaxis`
+ :class:`slice` objects, the :py:data:`Ellipsis` object, or the :const:`newaxis`
object, but not for integer arrays or other embedded sequences.
.. index::
@@ -129,7 +129,7 @@ concepts to remember include:
[5],
[6]]])
-- :const:`Ellipsis` expands to the number of ``:`` objects needed for the
+- :py:data:`Ellipsis` expands to the number of ``:`` objects needed for the
selection tuple to index all dimensions. In most cases, this means that
length of the expanded selection tuple is ``x.ndim``. There may only be a
single ellipsis present.
@@ -198,6 +198,7 @@ concepts to remember include:
create an axis of length one. :const:`newaxis` is an alias for
'None', and 'None' can be used in place of this with the same result.
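A quick sketch showing that the two spellings are interchangeable::

    >>> import numpy as np
    >>> x = np.arange(3)
    >>> x[:, np.newaxis].shape
    (3, 1)
    >>> x[:, None].shape
    (3, 1)
    >>> np.newaxis is None
    True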
+.. _advanced-indexing:
Advanced Indexing
-----------------
@@ -304,6 +305,8 @@ understood with an example.
most important thing to remember about indexing with multiple advanced
indexes.
+.. _combining-advanced-and-basic-indexing:
+
Combining advanced and basic indexing
"""""""""""""""""""""""""""""""""""""
@@ -330,7 +333,7 @@ the subspace defined by the basic indexing (excluding integers) and the
subspace from the advanced indexing part. Two cases of index combination
need to be distinguished:
-* The advanced indexes are separated by a slice, :const:`Ellipsis` or :const:`newaxis`.
+* The advanced indexes are separated by a slice, :py:data:`Ellipsis` or :const:`newaxis`.
For example ``x[arr1, :, arr2]``.
* The advanced indexes are all next to each other.
For example ``x[..., arr1, arr2, :]`` but *not* ``x[arr1, :, 1]``
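A minimal sketch of the resulting shapes in the two cases, for a
three-dimensional array and a ``(2, 3, 4)`` index array::

    >>> import numpy as np
    >>> x = np.zeros((10, 20, 30))
    >>> ind = np.zeros((2, 3, 4), dtype=np.intp)
    >>> x[ind, :, ind].shape   # separated by a slice: broadcast dims come first
    (2, 3, 4, 20)
    >>> x[:, ind, ind].shape   # adjacent: broadcast dims replace the indexed dims in place
    (10, 2, 3, 4)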
@@ -377,15 +380,15 @@ type, such as may be returned from comparison operators. A single
boolean index array is practically identical to ``x[obj.nonzero()]`` where,
as described above, :meth:`obj.nonzero() <ndarray.nonzero>` returns a
tuple (of length :attr:`obj.ndim <ndarray.ndim>`) of integer index
-arrays showing the :const:`True` elements of *obj*. However, it is
+arrays showing the :py:data:`True` elements of *obj*. However, it is
faster when ``obj.shape == x.shape``.
If ``obj.ndim == x.ndim``, ``x[obj]`` returns a 1-dimensional array
-filled with the elements of *x* corresponding to the :const:`True`
+filled with the elements of *x* corresponding to the :py:data:`True`
values of *obj*. The search order will be :term:`row-major`,
-C-style. If *obj* has :const:`True` values at entries that are outside
+C-style. If *obj* has :py:data:`True` values at entries that are outside
of the bounds of *x*, then an index error will be raised. If *obj* is
-smaller than *x* it is identical to filling it with :const:`False`.
+smaller than *x* it is identical to filling it with :py:data:`False`.
.. admonition:: Example
@@ -450,7 +453,7 @@ also supports boolean arrays and will work without any surprises.
array([[ 3, 5],
[ 9, 11]])
- Without the ``np.ix_`` call or only the diagonal elements would be
+ Without the ``np.ix_`` call, only the diagonal elements would be
selected.
Or without ``np.ix_`` (compare the integer array examples):
diff --git a/doc/source/reference/arrays.interface.rst b/doc/source/reference/arrays.interface.rst
index 4e95535c0..6a8c5f9c4 100644
--- a/doc/source/reference/arrays.interface.rst
+++ b/doc/source/reference/arrays.interface.rst
@@ -49,9 +49,9 @@ Python side
===========
This approach to the interface consists of the object having an
-:data:`__array_interface__` attribute.
+:data:`~object.__array_interface__` attribute.
-.. data:: __array_interface__
+.. data:: object.__array_interface__
A dictionary of items (3 required and 5 optional). The optional
keys in the dictionary have implied defaults if they are not
@@ -60,17 +60,15 @@ This approach to the interface consists of the object having an
The keys are:
**shape** (required)
-
Tuple whose elements are the array size in each dimension. Each
- entry is an integer (a Python int or long). Note that these
- integers could be larger than the platform "int" or "long"
- could hold (a Python int is a C long). It is up to the code
+ entry is an integer (a Python :py:class:`int`). Note that these
+ integers could be larger than the platform ``int`` or ``long``
+ could hold (a Python :py:class:`int` is a C ``long``). It is up to the code
using this attribute to handle this appropriately; either by
raising an error when overflow is possible, or by using
- :c:data:`Py_LONG_LONG` as the C type for the shapes.
+ ``long long`` as the C type for the shapes.
**typestr** (required)
-
A string providing the basic type of the homogeneous array. The
basic string format consists of 3 parts: a character describing
the byteorder of the data (``<``: little-endian, ``>``:
@@ -97,7 +95,6 @@ This approach to the interface consists of the object having an
===== ================================================================
**descr** (optional)
-
A list of tuples providing a more detailed description of the
memory layout for each item in the homogeneous array. Each
tuple in the list has two or three elements. Normally, this
@@ -127,7 +124,6 @@ This approach to the interface consists of the object having an
**Default**: ``[('', typestr)]``
**data** (optional)
-
A 2-tuple whose first argument is an integer (a long integer
if necessary) that points to the data-area storing the array
contents. This pointer must point to the first element of
@@ -136,7 +132,7 @@ This approach to the interface consists of the object having an
means the data area is read-only).
This attribute can also be an object exposing the
- :c:func:`buffer interface <PyObject_AsCharBuffer>` which
+ :ref:`buffer interface <bufferobjects>` which
will be used to share the data. If this key is not present (or
returns None), then memory sharing will be done
through the buffer interface of the object itself. In this
@@ -148,25 +144,23 @@ This approach to the interface consists of the object having an
**Default**: None
**strides** (optional)
-
- Either None to indicate a C-style contiguous array or
+ Either ``None`` to indicate a C-style contiguous array or
a Tuple of strides which provides the number of bytes needed
to jump to the next array element in the corresponding
dimension. Each entry must be an integer (a Python
- :const:`int` or :const:`long`). As with shape, the values may
- be larger than can be represented by a C "int" or "long"; the
+ :py:class:`int`). As with shape, the values may
+ be larger than can be represented by a C ``int`` or ``long``; the
calling code should handle this appropriately, either by
- raising an error, or by using :c:type:`Py_LONG_LONG` in C. The
- default is None which implies a C-style contiguous
- memory buffer. In this model, the last dimension of the array
+ raising an error, or by using ``long long`` in C. The
+ default is ``None`` which implies a C-style contiguous
+ memory buffer. In this model, the last dimension of the array
varies the fastest. For example, the default strides tuple
for an object whose array entries are 8 bytes long and whose
- shape is (10,20,30) would be (4800, 240, 8)
+ shape is ``(10, 20, 30)`` would be ``(4800, 240, 8)``
- **Default**: None (C-style contiguous)
+ **Default**: ``None`` (C-style contiguous)
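As a quick sketch, the default strides described above can be read off a
C-contiguous array directly::

    >>> import numpy as np
    >>> np.zeros((10, 20, 30), dtype=np.float64).strides
    (4800, 240, 8)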
**mask** (optional)
-
None or an object exposing the array interface. All
elements of the mask array should be interpreted only as true
or not true indicating which elements of this array are valid.
@@ -177,15 +171,13 @@ This approach to the interface consists of the object having an
**Default**: None (All array values are valid)
**offset** (optional)
-
An integer offset into the array data region. This can only be
- used when data is None or returns a :class:`buffer`
+ used when data is ``None`` or returns a :class:`buffer`
object.
**Default**: 0.
**version** (required)
-
An integer showing the version of the interface (i.e. 3 for
this version). Be careful not to use this to invalidate
objects exposing future versions of the interface.
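A minimal sketch of an object exposing the Python-side interface (the
``Wrapper`` class here is purely illustrative)::

    >>> import numpy as np
    >>> class Wrapper:
    ...     """Share another array's memory via __array_interface__."""
    ...     def __init__(self, arr):
    ...         self._arr = arr                          # keep the data owner alive
    ...         self.__array_interface__ = {
    ...             'shape': arr.shape,
    ...             'typestr': arr.dtype.str,
    ...             'data': (arr.ctypes.data, False),    # (pointer, read-only flag)
    ...             'strides': None,                     # C-contiguous
    ...             'version': 3,
    ...         }
    ...
    >>> a = np.arange(6.0).reshape(2, 3)
    >>> b = np.asarray(Wrapper(a))                       # shares a's memory
    >>> b[0, 0] = 99.0
    >>> a[0, 0]
    99.0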
@@ -197,11 +189,11 @@ C-struct access
This approach to the array interface allows for faster access to an
array using only one attribute lookup and a well-defined C-structure.
-.. c:var:: __array_struct__
+.. data:: object.__array_struct__
- A :c:type: `PyCObject` whose :c:data:`voidptr` member contains a
+ A :c:type:`PyCapsule` whose ``pointer`` member contains a
pointer to a filled :c:type:`PyArrayInterface` structure. Memory
- for the structure is dynamically created and the :c:type:`PyCObject`
+ for the structure is dynamically created and the :c:type:`PyCapsule`
is also created with an appropriate destructor so the retriever of
this attribute simply has to apply :c:func:`Py_DECREF()` to the
object returned by this attribute when it is finished. Also,
@@ -211,7 +203,7 @@ array using only one attribute lookup and a well-defined C-structure.
must also not reallocate their memory if other objects are
referencing them.
-The PyArrayInterface structure is defined in ``numpy/ndarrayobject.h``
+The :c:type:`PyArrayInterface` structure is defined in ``numpy/ndarrayobject.h``
as::
typedef struct {
@@ -231,29 +223,32 @@ as::
The flags member may consist of 5 bits showing how the data should be
interpreted and one bit showing how the Interface should be
-interpreted. The data-bits are :const:`CONTIGUOUS` (0x1),
-:const:`FORTRAN` (0x2), :const:`ALIGNED` (0x100), :const:`NOTSWAPPED`
-(0x200), and :const:`WRITEABLE` (0x400). A final flag
-:const:`ARR_HAS_DESCR` (0x800) indicates whether or not this structure
+interpreted. The data-bits are :c:macro:`NPY_ARRAY_C_CONTIGUOUS` (0x1),
+:c:macro:`NPY_ARRAY_F_CONTIGUOUS` (0x2), :c:macro:`NPY_ARRAY_ALIGNED` (0x100),
+:c:macro:`NPY_ARRAY_NOTSWAPPED` (0x200), and :c:macro:`NPY_ARRAY_WRITEABLE` (0x400). A final flag
+:c:macro:`NPY_ARR_HAS_DESCR` (0x800) indicates whether or not this structure
has the arrdescr field. The field should not be accessed unless this
flag is present.
+ .. c:macro:: NPY_ARR_HAS_DESCR
+
.. admonition:: New since June 16, 2006:
- In the past most implementations used the "desc" member of the
- :c:type:`PyCObject` itself (do not confuse this with the "descr" member of
+ In the past most implementations used the ``desc`` member of the ``PyCObject``
+ (now :c:type:`PyCapsule`) itself (do not confuse this with the "descr" member of
the :c:type:`PyArrayInterface` structure above --- they are two separate
things) to hold the pointer to the object exposing the interface.
- This is now an explicit part of the interface. Be sure to own a
- reference to the object when the :c:type:`PyCObject` is created using
- :c:type:`PyCObject_FromVoidPtrAndDesc`.
+ This is now an explicit part of the interface. Be sure to take a
+ reference to the object and call :c:func:`PyCapsule_SetContext` before
+ returning the :c:type:`PyCapsule`, and configure a destructor to decref this
+ reference.
Type description examples
=========================
For clarity it is useful to provide some examples of the type
-description and corresponding :data:`__array_interface__` 'descr'
+description and corresponding :data:`~object.__array_interface__` 'descr'
entries. Thanks to Scott Gilbert for these examples:
In every case, the 'descr' key is optional, but of course provides
@@ -315,25 +310,39 @@ largely aesthetic. In particular:
1. The PyArrayInterface structure had no descr member at the end
(and therefore no flag ARR_HAS_DESCR)
-2. The desc member of the PyCObject returned from __array_struct__ was
+2. The ``context`` member of the :c:type:`PyCapsule` (formerly the ``desc``
+ member of the ``PyCObject``) returned from ``__array_struct__`` was
not specified. Usually, it was the object exposing the array (so
that a reference to it could be kept and destroyed when the
- C-object was destroyed). Now it must be a tuple whose first
- element is a string with "PyArrayInterface Version #" and whose
- second element is the object exposing the array.
+ C-object was destroyed). It is now an explicit requirement that this field
+ be used in some way to hold a reference to the owning object.
+
+ .. note::
+
+ Until August 2020, this said:
+
+ Now it must be a tuple whose first element is a string with
+ "PyArrayInterface Version #" and whose second element is the object
+ exposing the array.
+
+ This design was retracted almost immediately after it was proposed, in
+ <https://mail.python.org/pipermail/numpy-discussion/2006-June/020995.html>.
+ Despite 14 years of documentation to the contrary, at no point was it
+ valid to assume that ``__array_interface__`` capsules held this tuple
+ content.
-3. The tuple returned from __array_interface__['data'] used to be a
+3. The tuple returned from ``__array_interface__['data']`` used to be a
hex-string (now it is an integer or a long integer).
-4. There was no __array_interface__ attribute instead all of the keys
- (except for version) in the __array_interface__ dictionary were
+4. There was no ``__array_interface__`` attribute instead all of the keys
+ (except for version) in the ``__array_interface__`` dictionary were
their own attribute: Thus to obtain the Python-side information you
had to access separately the attributes:
- * __array_data__
- * __array_shape__
- * __array_strides__
- * __array_typestr__
- * __array_descr__
- * __array_offset__
- * __array_mask__
+ * ``__array_data__``
+ * ``__array_shape__``
+ * ``__array_strides__``
+ * ``__array_typestr__``
+ * ``__array_descr__``
+ * ``__array_offset__``
+ * ``__array_mask__``
diff --git a/doc/source/reference/arrays.ndarray.rst b/doc/source/reference/arrays.ndarray.rst
index 689240c7d..191367058 100644
--- a/doc/source/reference/arrays.ndarray.rst
+++ b/doc/source/reference/arrays.ndarray.rst
@@ -1,11 +1,11 @@
+.. currentmodule:: numpy
+
.. _arrays.ndarray:
******************************************
The N-dimensional array (:class:`ndarray`)
******************************************
-.. currentmodule:: numpy
-
An :class:`ndarray` is a (usually fixed-size) multidimensional
container of items of the same type and size. The number of dimensions
and items in an array is defined by its :attr:`shape <ndarray.shape>`,
@@ -259,10 +259,10 @@ Array interface
.. seealso:: :ref:`arrays.interface`.
-========================== ===================================
-:obj:`__array_interface__` Python-side of the array interface
-:obj:`__array_struct__` C-side of the array interface
-========================== ===================================
+================================== ===================================
+:obj:`~object.__array_interface__` Python-side of the array interface
+:obj:`~object.__array_struct__` C-side of the array interface
+================================== ===================================
:mod:`ctypes` foreign function interface
----------------------------------------
@@ -469,7 +469,7 @@ Comparison operators:
ndarray.__eq__
ndarray.__ne__
-Truth value of an array (:func:`bool()`):
+Truth value of an array (:class:`bool() <bool>`):
.. autosummary::
:toctree: generated/
@@ -604,9 +604,9 @@ Container customization: (see :ref:`Indexing <arrays.indexing>`)
ndarray.__setitem__
ndarray.__contains__
-Conversion; the operations :func:`int()`, :func:`float()` and
-:func:`complex()`.
-. They work only on arrays that have one element in them
+Conversion; the operations :class:`int() <int>`,
+:class:`float() <float>` and :class:`complex() <complex>`.
+They work only on arrays that have one element in them
and return the appropriate scalar.
.. autosummary::
diff --git a/doc/source/reference/arrays.nditer.cython.rst b/doc/source/reference/arrays.nditer.cython.rst
index 2cc7763ed..43aad9927 100644
--- a/doc/source/reference/arrays.nditer.cython.rst
+++ b/doc/source/reference/arrays.nditer.cython.rst
@@ -5,7 +5,7 @@ Those who want really good performance out of their low level operations
should strongly consider directly using the iteration API provided
in C, but for those who are not comfortable with C or C++, Cython
is a good middle ground with reasonable performance tradeoffs. For
-the :class:`nditer` object, this means letting the iterator take care
+the :class:`~numpy.nditer` object, this means letting the iterator take care
of broadcasting, dtype conversion, and buffering, while giving the inner
loop to Cython.
diff --git a/doc/source/reference/arrays.scalars.rst b/doc/source/reference/arrays.scalars.rst
index d27d61e2c..4b5da2e13 100644
--- a/doc/source/reference/arrays.scalars.rst
+++ b/doc/source/reference/arrays.scalars.rst
@@ -24,14 +24,14 @@ mixing scalar and array operations.
Array scalars live in a hierarchy (see the Figure below) of data
types. They can be detected using the hierarchy: For example,
-``isinstance(val, np.generic)`` will return :const:`True` if *val* is
+``isinstance(val, np.generic)`` will return :py:data:`True` if *val* is
an array scalar object. Alternatively, what kind of array scalar is
present can be determined using other members of the data type
hierarchy. Thus, for example ``isinstance(val, np.complexfloating)``
-will return :const:`True` if *val* is a complex valued type, while
-:const:`isinstance(val, np.flexible)` will return true if *val* is one
-of the flexible itemsize array types (:class:`string`,
-:class:`unicode`, :class:`void`).
+will return :py:data:`True` if *val* is a complex valued type, while
+``isinstance(val, np.flexible)`` will return true if *val* is one
+of the flexible itemsize array types (:class:`str_`,
+:class:`bytes_`, :class:`void`).
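A brief sketch of detection using the hierarchy::

    >>> import numpy as np
    >>> isinstance(np.float64(1.0), np.generic)
    True
    >>> isinstance(np.complex128(1j), np.complexfloating)
    True
    >>> isinstance(np.str_('abc'), np.flexible)
    True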
.. figure:: figures/dtype-hierarchy.png
@@ -41,6 +41,13 @@ of the flexible itemsize array types (:class:`string`,
pointer for the platform. All the number types can be obtained
using bit-width names as well.
+
+.. TODO - use something like this instead of the diagram above, as it generates
+ links to the classes and is a vector graphic. Unfortunately it looks worse
+ and the html <map> element providing the linked regions is misaligned.
+
+ .. inheritance-diagram:: byte short intc int_ longlong ubyte ushort uintc uint ulonglong half single double longdouble csingle cdouble clongdouble bool_ datetime64 timedelta64 object_ bytes_ str_ void
+
.. [#] However, array scalars are immutable, so none of the array
scalar attributes are settable.
@@ -51,129 +58,148 @@ of the flexible itemsize array types (:class:`string`,
Built-in scalar types
=====================
-The built-in scalar types are shown below. Along with their (mostly)
-C-derived names, the integer, float, and complex data-types are also
-available using a bit-width convention so that an array of the right
-size can always be ensured (e.g. :class:`int8`, :class:`float64`,
-:class:`complex128`). Two aliases (:class:`intp` and :class:`uintp`)
-pointing to the integer type that is sufficiently large to hold a C pointer
-are also provided. The C-like names are associated with character codes,
-which are shown in the table. Use of the character codes, however,
+The built-in scalar types are shown below. The C-like names are associated with character codes,
+which are shown in their descriptions. Use of the character codes, however,
is discouraged.
Some of the scalar types are essentially equivalent to fundamental
Python types and therefore inherit from them as well as from the
generic array scalar type:
-==================== ================================
-Array scalar type Related Python type
-==================== ================================
-:class:`int_` :class:`IntType` (Python 2 only)
-:class:`float_` :class:`FloatType`
-:class:`complex_` :class:`ComplexType`
-:class:`bytes_` :class:`BytesType`
-:class:`unicode_` :class:`UnicodeType`
-==================== ================================
+==================== =========================== =============
+Array scalar type Related Python type Inherits?
+==================== =========================== =============
+:class:`int_` :class:`int` Python 2 only
+:class:`float_` :class:`float` yes
+:class:`complex_` :class:`complex` yes
+:class:`bytes_` :class:`bytes` yes
+:class:`str_` :class:`str` yes
+:class:`bool_` :class:`bool` no
+:class:`datetime64` :class:`datetime.datetime` no
+:class:`timedelta64` :class:`datetime.timedelta` no
+==================== =========================== =============
The :class:`bool_` data type is very similar to the Python
-:class:`BooleanType` but does not inherit from it because Python's
-:class:`BooleanType` does not allow itself to be inherited from, and
+:class:`bool` but does not inherit from it because Python's
+:class:`bool` does not allow itself to be inherited from, and
on the C-level the size of the actual bool data is not the same as a
Python Boolean scalar.
.. warning::
- The :class:`bool_` type is not a subclass of the :class:`int_` type
- (the :class:`bool_` is not even a number type). This is different
- than Python's default implementation of :class:`bool` as a
- sub-class of int.
-
-.. warning::
-
The :class:`int_` type does **not** inherit from the
:class:`int` built-in under Python 3, because type :class:`int` is no
longer a fixed-width integer type.
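As a quick sketch of the inheritance relationships noted above::

    >>> import numpy as np
    >>> issubclass(np.float64, float), issubclass(np.complex128, complex)
    (True, True)
    >>> issubclass(np.bool_, bool), issubclass(np.int_, int)   # Python 3
    (False, False)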
.. tip:: The default data type in NumPy is :class:`float_`.
-In the tables below, ``platform?`` means that the type may not be
-available on all platforms. Compatibility with different C or Python
-types is indicated: two types are compatible if their data is of the
-same size and interpreted in the same way.
-
-Booleans:
-
-=================== ============================= ===============
-Type Remarks Character code
-=================== ============================= ===============
-:class:`bool_` compatible: Python bool ``'?'``
-:class:`bool8` 8 bits
-=================== ============================= ===============
-
-Integers:
-
-=================== ============================= ===============
-:class:`byte` compatible: C char ``'b'``
-:class:`short` compatible: C short ``'h'``
-:class:`intc` compatible: C int ``'i'``
-:class:`int_` compatible: Python int ``'l'``
-:class:`longlong` compatible: C long long ``'q'``
-:class:`intp` large enough to fit a pointer ``'p'``
-:class:`int8` 8 bits
-:class:`int16` 16 bits
-:class:`int32` 32 bits
-:class:`int64` 64 bits
-=================== ============================= ===============
-
-Unsigned integers:
-
-=================== ============================= ===============
-:class:`ubyte` compatible: C unsigned char ``'B'``
-:class:`ushort` compatible: C unsigned short ``'H'``
-:class:`uintc` compatible: C unsigned int ``'I'``
-:class:`uint` compatible: Python int ``'L'``
-:class:`ulonglong` compatible: C long long ``'Q'``
-:class:`uintp` large enough to fit a pointer ``'P'``
-:class:`uint8` 8 bits
-:class:`uint16` 16 bits
-:class:`uint32` 32 bits
-:class:`uint64` 64 bits
-=================== ============================= ===============
-
-Floating-point numbers:
-
-=================== ============================= ===============
-:class:`half` ``'e'``
-:class:`single` compatible: C float ``'f'``
-:class:`double` compatible: C double
-:class:`float_` compatible: Python float ``'d'``
-:class:`longfloat` compatible: C long float ``'g'``
-:class:`float16` 16 bits
-:class:`float32` 32 bits
-:class:`float64` 64 bits
-:class:`float96` 96 bits, platform?
-:class:`float128` 128 bits, platform?
-=================== ============================= ===============
-
-Complex floating-point numbers:
-
-=================== ============================= ===============
-:class:`csingle` ``'F'``
-:class:`complex_` compatible: Python complex ``'D'``
-:class:`clongfloat` ``'G'``
-:class:`complex64` two 32-bit floats
-:class:`complex128` two 64-bit floats
-:class:`complex192` two 96-bit floats,
- platform?
-:class:`complex256` two 128-bit floats,
- platform?
-=================== ============================= ===============
-
-Any Python object:
-
-=================== ============================= ===============
-:class:`object_` any Python object ``'O'``
-=================== ============================= ===============
+.. autoclass:: numpy.generic
+ :exclude-members:
+
+.. autoclass:: numpy.number
+ :exclude-members:
+
+Integer types
+~~~~~~~~~~~~~
+
+.. autoclass:: numpy.integer
+ :exclude-members:
+
+Signed integer types
+++++++++++++++++++++
+
+.. autoclass:: numpy.signedinteger
+ :exclude-members:
+
+.. autoclass:: numpy.byte
+ :exclude-members:
+
+.. autoclass:: numpy.short
+ :exclude-members:
+
+.. autoclass:: numpy.intc
+ :exclude-members:
+
+.. autoclass:: numpy.int_
+ :exclude-members:
+
+.. autoclass:: numpy.longlong
+ :exclude-members:
+
+Unsigned integer types
+++++++++++++++++++++++
+
+.. autoclass:: numpy.unsignedinteger
+ :exclude-members:
+
+.. autoclass:: numpy.ubyte
+ :exclude-members:
+
+.. autoclass:: numpy.ushort
+ :exclude-members:
+
+.. autoclass:: numpy.uintc
+ :exclude-members:
+
+.. autoclass:: numpy.uint
+ :exclude-members:
+
+.. autoclass:: numpy.ulonglong
+ :exclude-members:
+
+Inexact types
+~~~~~~~~~~~~~
+
+.. autoclass:: numpy.inexact
+ :exclude-members:
+
+Floating-point types
+++++++++++++++++++++
+
+.. autoclass:: numpy.floating
+ :exclude-members:
+
+.. autoclass:: numpy.half
+ :exclude-members:
+
+.. autoclass:: numpy.single
+ :exclude-members:
+
+.. autoclass:: numpy.double
+ :exclude-members:
+
+.. autoclass:: numpy.longdouble
+ :exclude-members:
+
+Complex floating-point types
+++++++++++++++++++++++++++++
+
+.. autoclass:: numpy.complexfloating
+ :exclude-members:
+
+.. autoclass:: numpy.csingle
+ :exclude-members:
+
+.. autoclass:: numpy.cdouble
+ :exclude-members:
+
+.. autoclass:: numpy.clongdouble
+ :exclude-members:
+
+Other types
+~~~~~~~~~~~
+
+.. autoclass:: numpy.bool_
+ :exclude-members:
+
+.. autoclass:: numpy.datetime64
+ :exclude-members:
+
+.. autoclass:: numpy.timedelta64
+ :exclude-members:
+
+.. autoclass:: numpy.object_
+ :exclude-members:
.. note::
@@ -195,11 +221,17 @@ size and the data they describe can be of different length in different
arrays. (In the character codes ``#`` is an integer denoting how many
elements the data type consists of.)
-=================== ============================== ========
-:class:`bytes_` compatible: Python bytes ``'S#'``
-:class:`unicode_` compatible: Python unicode/str ``'U#'``
-:class:`void` ``'V#'``
-=================== ============================== ========
+.. autoclass:: numpy.flexible
+ :exclude-members:
+
+.. autoclass:: numpy.bytes_
+ :exclude-members:
+
+.. autoclass:: numpy.str_
+ :exclude-members:
+
+.. autoclass:: numpy.void
+ :exclude-members:
.. warning::
@@ -214,12 +246,123 @@ elements the data type consists of.)
convention more consistent with other Python modules such as the
:mod:`struct` module.
+Sized aliases
+~~~~~~~~~~~~~
+
+Along with their (mostly)
+C-derived names, the integer, float, and complex data-types are also
+available using a bit-width convention so that an array of the right
+size can always be ensured. Two aliases (:class:`numpy.intp` and :class:`numpy.uintp`)
+pointing to the integer type that is sufficiently large to hold a C pointer
+are also provided.
+
+.. note that these are documented with ..attribute because that is what
+ autoclass does for aliases under the hood.
+
+.. autoclass:: numpy.bool8
+
+.. attribute:: int8
+ int16
+ int32
+ int64
+
+ Aliases for the signed integer types (one of `numpy.byte`, `numpy.short`,
+ `numpy.intc`, `numpy.int_` and `numpy.longlong`) with the specified number
+ of bits.
+
+ Compatible with the C99 ``int8_t``, ``int16_t``, ``int32_t``, and
+ ``int64_t``, respectively.
+
+.. attribute:: uint8
+ uint16
+ uint32
+ uint64
+
+   Aliases for the unsigned integer types (one of `numpy.ubyte`, `numpy.ushort`,
+   `numpy.uintc`, `numpy.uint` and `numpy.ulonglong`) with the specified number
+ of bits.
+
+ Compatible with the C99 ``uint8_t``, ``uint16_t``, ``uint32_t``, and
+ ``uint64_t``, respectively.
+
+.. attribute:: intp
+
+ Alias for the signed integer type (one of `numpy.byte`, `numpy.short`,
+   `numpy.intc`, `numpy.int_` and `numpy.longlong`) that is the same size as a
+ pointer.
+
+ Compatible with the C ``intptr_t``.
+
+ :Character code: ``'p'``
+
+.. attribute:: uintp
+
+   Alias for the unsigned integer type (one of `numpy.ubyte`, `numpy.ushort`,
+   `numpy.uintc`, `numpy.uint` and `numpy.ulonglong`) that is the same size as a
+ pointer.
+
+ Compatible with the C ``uintptr_t``.
+
+ :Character code: ``'P'``
+
+.. autoclass:: numpy.float16
+
+.. autoclass:: numpy.float32
+
+.. autoclass:: numpy.float64
+
+.. attribute:: float96
+ float128
+
+ Alias for `numpy.longdouble`, named after its size in bits.
+ The existence of these aliases depends on the platform.
+
+.. autoclass:: numpy.complex64
+
+.. autoclass:: numpy.complex128
+
+.. attribute:: complex192
+ complex256
+
+ Alias for `numpy.clongdouble`, named after its size in bits.
+   The existence of these aliases depends on the platform.
+
+Other aliases
+~~~~~~~~~~~~~
+
+The first two of these are conveniences which resemble the names of the
+builtin types, in the same style as `bool_`, `int_`, `str_`, `bytes_`, and
+`object_`:
+
+.. autoclass:: numpy.float_
+
+.. autoclass:: numpy.complex_
+
+Some other aliases use alternate naming conventions for extended-precision floats and
+complex numbers:
+
+.. autoclass:: numpy.longfloat
+
+.. autoclass:: numpy.singlecomplex
+
+.. autoclass:: numpy.cfloat
+
+.. autoclass:: numpy.longcomplex
+
+.. autoclass:: numpy.clongfloat
+
+The following aliases originate from Python 2, and it is recommended that they
+not be used in new code.
+
+.. autoclass:: numpy.string_
+
+.. autoclass:: numpy.unicode_
Attributes
==========
The array scalar objects have an :obj:`array priority
-<__array_priority__>` of :c:data:`NPY_SCALAR_PRIORITY`
+<class.__array_priority__>` of :c:data:`NPY_SCALAR_PRIORITY`
(-1,000,000.0). They also do not (yet) have a :attr:`ctypes <ndarray.ctypes>`
attribute. Otherwise, they share the same attributes as arrays:
@@ -273,7 +416,6 @@ The exceptions to the above rules are given below:
.. autosummary::
:toctree: generated/
- generic
generic.__array__
generic.__array_wrap__
generic.squeeze
diff --git a/doc/source/reference/c-api/array.rst b/doc/source/reference/c-api/array.rst
index 10c1704c2..3aa541b79 100644
--- a/doc/source/reference/c-api/array.rst
+++ b/doc/source/reference/c-api/array.rst
@@ -24,7 +24,7 @@ These macros access the :c:type:`PyArrayObject` structure members and are
defined in ``ndarraytypes.h``. The input argument, *arr*, can be any
:c:type:`PyObject *<PyObject>` that is directly interpretable as a
:c:type:`PyArrayObject *` (any instance of the :c:data:`PyArray_Type`
-and itssub-types).
+and its sub-types).
.. c:function:: int PyArray_NDIM(PyArrayObject *arr)
@@ -326,7 +326,7 @@ From scratch
Create a new array with the provided data-type descriptor, *descr*,
of the shape determined by *nd* and *dims*.
-.. c:function:: PyArray_FILLWBYTE(PyObject* obj, int val)
+.. c:function:: void PyArray_FILLWBYTE(PyObject* obj, int val)
Fill the array pointed to by *obj* ---which must be a (subclass
of) ndarray---with the contents of *val* (evaluated as a byte).
@@ -428,44 +428,44 @@ From other objects
have :c:data:`NPY_ARRAY_DEFAULT` as its flags member. The *context*
argument is unused.
- .. c:var:: NPY_ARRAY_C_CONTIGUOUS
+ .. c:macro:: NPY_ARRAY_C_CONTIGUOUS
Make sure the returned array is C-style contiguous
- .. c:var:: NPY_ARRAY_F_CONTIGUOUS
+ .. c:macro:: NPY_ARRAY_F_CONTIGUOUS
Make sure the returned array is Fortran-style contiguous.
- .. c:var:: NPY_ARRAY_ALIGNED
+ .. c:macro:: NPY_ARRAY_ALIGNED
Make sure the returned array is aligned on proper boundaries for its
data type. An aligned array has the data pointer and every strides
factor as a multiple of the alignment factor for the data-type-
descriptor.
- .. c:var:: NPY_ARRAY_WRITEABLE
+ .. c:macro:: NPY_ARRAY_WRITEABLE
Make sure the returned array can be written to.
- .. c:var:: NPY_ARRAY_ENSURECOPY
+ .. c:macro:: NPY_ARRAY_ENSURECOPY
Make sure a copy is made of *op*. If this flag is not
present, data is not copied if it can be avoided.
- .. c:var:: NPY_ARRAY_ENSUREARRAY
+ .. c:macro:: NPY_ARRAY_ENSUREARRAY
Make sure the result is a base-class ndarray. By
default, if *op* is an instance of a subclass of
ndarray, an instance of that same subclass is returned. If
this flag is set, an ndarray object will be returned instead.
- .. c:var:: NPY_ARRAY_FORCECAST
+ .. c:macro:: NPY_ARRAY_FORCECAST
Force a cast to the output type even if it cannot be done
safely. Without this flag, a data cast will occur only if it
can be done safely, otherwise an error is raised.
- .. c:var:: NPY_ARRAY_WRITEBACKIFCOPY
+ .. c:macro:: NPY_ARRAY_WRITEBACKIFCOPY
If *op* is already an array, but does not satisfy the
requirements, then a copy is made (which will satisfy the
@@ -478,67 +478,67 @@ From other objects
will be made writeable again. If *op* is not writeable to begin
with, or if it is not already an array, then an error is raised.
- .. c:var:: NPY_ARRAY_UPDATEIFCOPY
+ .. c:macro:: NPY_ARRAY_UPDATEIFCOPY
Deprecated. Use :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`, which is similar.
This flag "automatically" copies the data back when the returned
array is deallocated, which is not supported in all python
implementations.
- .. c:var:: NPY_ARRAY_BEHAVED
+ .. c:macro:: NPY_ARRAY_BEHAVED
:c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEABLE`
- .. c:var:: NPY_ARRAY_CARRAY
+ .. c:macro:: NPY_ARRAY_CARRAY
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_BEHAVED`
- .. c:var:: NPY_ARRAY_CARRAY_RO
+ .. c:macro:: NPY_ARRAY_CARRAY_RO
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
- .. c:var:: NPY_ARRAY_FARRAY
+ .. c:macro:: NPY_ARRAY_FARRAY
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_BEHAVED`
- .. c:var:: NPY_ARRAY_FARRAY_RO
+ .. c:macro:: NPY_ARRAY_FARRAY_RO
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
- .. c:var:: NPY_ARRAY_DEFAULT
+ .. c:macro:: NPY_ARRAY_DEFAULT
:c:data:`NPY_ARRAY_CARRAY`
- .. c:var:: NPY_ARRAY_IN_ARRAY
+ .. c:macro:: NPY_ARRAY_IN_ARRAY
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
- .. c:var:: NPY_ARRAY_IN_FARRAY
+ .. c:macro:: NPY_ARRAY_IN_FARRAY
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
- .. c:var:: NPY_OUT_ARRAY
+ .. c:macro:: NPY_OUT_ARRAY
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
:c:data:`NPY_ARRAY_ALIGNED`
- .. c:var:: NPY_ARRAY_OUT_ARRAY
+ .. c:macro:: NPY_ARRAY_OUT_ARRAY
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED` \|
:c:data:`NPY_ARRAY_WRITEABLE`
- .. c:var:: NPY_ARRAY_OUT_FARRAY
+ .. c:macro:: NPY_ARRAY_OUT_FARRAY
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
:c:data:`NPY_ARRAY_ALIGNED`
- .. c:var:: NPY_ARRAY_INOUT_ARRAY
+ .. c:macro:: NPY_ARRAY_INOUT_ARRAY
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
:c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` \|
:c:data:`NPY_ARRAY_UPDATEIFCOPY`
- .. c:var:: NPY_ARRAY_INOUT_FARRAY
+ .. c:macro:: NPY_ARRAY_INOUT_FARRAY
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
:c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` \|
@@ -574,7 +574,7 @@ From other objects
did not have the _ARRAY_ macro namespace in them. That form
of the constant names is deprecated in 1.7.
-.. c:var:: NPY_ARRAY_NOTSWAPPED
+.. c:macro:: NPY_ARRAY_NOTSWAPPED
Make sure the returned array has a data-type descriptor that is in
machine byte-order, over-riding any specification in the *dtype*
@@ -585,11 +585,11 @@ From other objects
not in machine byte- order), then a new data-type descriptor is
created and used with its byte-order field set to native.
-.. c:var:: NPY_ARRAY_BEHAVED_NS
+.. c:macro:: NPY_ARRAY_BEHAVED_NS
:c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEABLE` \| :c:data:`NPY_ARRAY_NOTSWAPPED`
-.. c:var:: NPY_ARRAY_ELEMENTSTRIDES
+.. c:macro:: NPY_ARRAY_ELEMENTSTRIDES
Make sure the returned array has strides that are multiples of the
element size.
@@ -604,14 +604,14 @@ From other objects
.. c:function:: PyObject* PyArray_FromStructInterface(PyObject* op)
Returns an ndarray object from a Python object that exposes the
- :obj:`__array_struct__` attribute and follows the array interface
+ :obj:`~object.__array_struct__` attribute and follows the array interface
protocol. If the object does not contain this attribute then a
borrowed reference to :c:data:`Py_NotImplemented` is returned.
.. c:function:: PyObject* PyArray_FromInterface(PyObject* op)
Returns an ndarray object from a Python object that exposes the
- :obj:`__array_interface__` attribute following the array interface
+ :obj:`~object.__array_interface__` attribute following the array interface
protocol. If the object does not contain this attribute then a
borrowed reference to :c:data:`Py_NotImplemented` is returned.
@@ -790,17 +790,17 @@ Dealing with types
General check of Python Type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-.. c:function:: PyArray_Check(PyObject *op)
+.. c:function:: int PyArray_Check(PyObject *op)
Evaluates true if *op* is a Python object whose type is a sub-type
of :c:data:`PyArray_Type`.
-.. c:function:: PyArray_CheckExact(PyObject *op)
+.. c:function:: int PyArray_CheckExact(PyObject *op)
Evaluates true if *op* is a Python object with type
:c:data:`PyArray_Type`.
-.. c:function:: PyArray_HasArrayInterface(PyObject *op, PyObject *out)
+.. c:function:: int PyArray_HasArrayInterface(PyObject *op, PyObject *out)
If ``op`` implements any part of the array interface, then ``out``
will contain a new reference to the newly created ndarray using
@@ -808,7 +808,8 @@ General check of Python Type
conversion occurs. Otherwise, out will contain a borrowed
reference to :c:data:`Py_NotImplemented` and no error condition is set.
-.. c:function:: PyArray_HasArrayInterfaceType(op, dtype, context, out)
+.. c:function:: int PyArray_HasArrayInterfaceType(\
+ PyObject *op, PyArray_Descr *dtype, PyObject *context, PyObject *out)
If ``op`` implements any part of the array interface, then ``out``
will contain a new reference to the newly created ndarray using
@@ -819,38 +820,38 @@ General check of Python Type
that looks for the :obj:`~numpy.class.__array__` attribute. `context` is
unused.
-.. c:function:: PyArray_IsZeroDim(op)
+.. c:function:: int PyArray_IsZeroDim(PyObject *op)
Evaluates true if *op* is an instance of (a subclass of)
:c:data:`PyArray_Type` and has 0 dimensions.
.. c:function:: PyArray_IsScalar(op, cls)
- Evaluates true if *op* is an instance of :c:data:`Py{cls}ArrType_Type`.
+ Evaluates true if *op* is an instance of ``Py{cls}ArrType_Type``.
-.. c:function:: PyArray_CheckScalar(op)
+.. c:function:: int PyArray_CheckScalar(PyObject *op)
Evaluates true if *op* is either an array scalar (an instance of a
sub-type of :c:data:`PyGenericArr_Type` ), or an instance of (a
sub-class of) :c:data:`PyArray_Type` whose dimensionality is 0.
-.. c:function:: PyArray_IsPythonNumber(op)
+.. c:function:: int PyArray_IsPythonNumber(PyObject *op)
Evaluates true if *op* is an instance of a builtin numeric type (int,
float, complex, long, bool)
-.. c:function:: PyArray_IsPythonScalar(op)
+.. c:function:: int PyArray_IsPythonScalar(PyObject *op)
Evaluates true if *op* is a builtin Python scalar object (int,
float, complex, bytes, str, long, bool).
-.. c:function:: PyArray_IsAnyScalar(op)
+.. c:function:: int PyArray_IsAnyScalar(PyObject *op)
Evaluates true if *op* is either a Python scalar object (see
:c:func:`PyArray_IsPythonScalar`) or an array scalar (an instance of a sub-
type of :c:data:`PyGenericArr_Type` ).
-.. c:function:: PyArray_CheckAnyScalar(op)
+.. c:function:: int PyArray_CheckAnyScalar(PyObject *op)
Evaluates true if *op* is a Python scalar object (see
:c:func:`PyArray_IsPythonScalar`), an array scalar (an instance of a
@@ -866,82 +867,82 @@ enumerated array data type. For the array type checking macros the
argument must be a :c:type:`PyObject *<PyObject>` that can be directly interpreted as a
:c:type:`PyArrayObject *`.
-.. c:function:: PyTypeNum_ISUNSIGNED(int num)
+.. c:function:: int PyTypeNum_ISUNSIGNED(int num)
-.. c:function:: PyDataType_ISUNSIGNED(PyArray_Descr *descr)
+.. c:function:: int PyDataType_ISUNSIGNED(PyArray_Descr *descr)
-.. c:function:: PyArray_ISUNSIGNED(PyArrayObject *obj)
+.. c:function:: int PyArray_ISUNSIGNED(PyArrayObject *obj)
Type represents an unsigned integer.
-.. c:function:: PyTypeNum_ISSIGNED(int num)
+.. c:function:: int PyTypeNum_ISSIGNED(int num)
-.. c:function:: PyDataType_ISSIGNED(PyArray_Descr *descr)
+.. c:function:: int PyDataType_ISSIGNED(PyArray_Descr *descr)
-.. c:function:: PyArray_ISSIGNED(PyArrayObject *obj)
+.. c:function:: int PyArray_ISSIGNED(PyArrayObject *obj)
Type represents a signed integer.
-.. c:function:: PyTypeNum_ISINTEGER(int num)
+.. c:function:: int PyTypeNum_ISINTEGER(int num)
-.. c:function:: PyDataType_ISINTEGER(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISINTEGER(PyArray_Descr* descr)
-.. c:function:: PyArray_ISINTEGER(PyArrayObject *obj)
+.. c:function:: int PyArray_ISINTEGER(PyArrayObject *obj)
Type represents any integer.
-.. c:function:: PyTypeNum_ISFLOAT(int num)
+.. c:function:: int PyTypeNum_ISFLOAT(int num)
-.. c:function:: PyDataType_ISFLOAT(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISFLOAT(PyArray_Descr* descr)
-.. c:function:: PyArray_ISFLOAT(PyArrayObject *obj)
+.. c:function:: int PyArray_ISFLOAT(PyArrayObject *obj)
Type represents any floating point number.
-.. c:function:: PyTypeNum_ISCOMPLEX(int num)
+.. c:function:: int PyTypeNum_ISCOMPLEX(int num)
-.. c:function:: PyDataType_ISCOMPLEX(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISCOMPLEX(PyArray_Descr* descr)
-.. c:function:: PyArray_ISCOMPLEX(PyArrayObject *obj)
+.. c:function:: int PyArray_ISCOMPLEX(PyArrayObject *obj)
Type represents any complex floating point number.
-.. c:function:: PyTypeNum_ISNUMBER(int num)
+.. c:function:: int PyTypeNum_ISNUMBER(int num)
-.. c:function:: PyDataType_ISNUMBER(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISNUMBER(PyArray_Descr* descr)
-.. c:function:: PyArray_ISNUMBER(PyArrayObject *obj)
+.. c:function:: int PyArray_ISNUMBER(PyArrayObject *obj)
Type represents any integer, floating point, or complex floating point
number.
-.. c:function:: PyTypeNum_ISSTRING(int num)
+.. c:function:: int PyTypeNum_ISSTRING(int num)
-.. c:function:: PyDataType_ISSTRING(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISSTRING(PyArray_Descr* descr)
-.. c:function:: PyArray_ISSTRING(PyArrayObject *obj)
+.. c:function:: int PyArray_ISSTRING(PyArrayObject *obj)
Type represents a string data type.
-.. c:function:: PyTypeNum_ISPYTHON(int num)
+.. c:function:: int PyTypeNum_ISPYTHON(int num)
-.. c:function:: PyDataType_ISPYTHON(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISPYTHON(PyArray_Descr* descr)
-.. c:function:: PyArray_ISPYTHON(PyArrayObject *obj)
+.. c:function:: int PyArray_ISPYTHON(PyArrayObject *obj)
Type represents an enumerated type corresponding to one of the
standard Python scalar (bool, int, float, or complex).
-.. c:function:: PyTypeNum_ISFLEXIBLE(int num)
+.. c:function:: int PyTypeNum_ISFLEXIBLE(int num)
-.. c:function:: PyDataType_ISFLEXIBLE(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISFLEXIBLE(PyArray_Descr* descr)
-.. c:function:: PyArray_ISFLEXIBLE(PyArrayObject *obj)
+.. c:function:: int PyArray_ISFLEXIBLE(PyArrayObject *obj)
Type represents one of the flexible array types ( :c:data:`NPY_STRING`,
:c:data:`NPY_UNICODE`, or :c:data:`NPY_VOID` ).
-.. c:function:: PyDataType_ISUNSIZED(PyArray_Descr* descr):
+.. c:function:: int PyDataType_ISUNSIZED(PyArray_Descr* descr)
Type has no size information attached, and can be resized. Should only be
called on flexible dtypes. Types that are attached to an array will always
@@ -951,55 +952,55 @@ argument must be a :c:type:`PyObject *<PyObject>` that can be directly interpret
For structured datatypes with no fields this function now returns False.
-.. c:function:: PyTypeNum_ISUSERDEF(int num)
+.. c:function:: int PyTypeNum_ISUSERDEF(int num)
-.. c:function:: PyDataType_ISUSERDEF(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISUSERDEF(PyArray_Descr* descr)
-.. c:function:: PyArray_ISUSERDEF(PyArrayObject *obj)
+.. c:function:: int PyArray_ISUSERDEF(PyArrayObject *obj)
Type represents a user-defined type.
-.. c:function:: PyTypeNum_ISEXTENDED(int num)
+.. c:function:: int PyTypeNum_ISEXTENDED(int num)
-.. c:function:: PyDataType_ISEXTENDED(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISEXTENDED(PyArray_Descr* descr)
-.. c:function:: PyArray_ISEXTENDED(PyArrayObject *obj)
+.. c:function:: int PyArray_ISEXTENDED(PyArrayObject *obj)
Type is either flexible or user-defined.
-.. c:function:: PyTypeNum_ISOBJECT(int num)
+.. c:function:: int PyTypeNum_ISOBJECT(int num)
-.. c:function:: PyDataType_ISOBJECT(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISOBJECT(PyArray_Descr* descr)
-.. c:function:: PyArray_ISOBJECT(PyArrayObject *obj)
+.. c:function:: int PyArray_ISOBJECT(PyArrayObject *obj)
Type represents object data type.
-.. c:function:: PyTypeNum_ISBOOL(int num)
+.. c:function:: int PyTypeNum_ISBOOL(int num)
-.. c:function:: PyDataType_ISBOOL(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISBOOL(PyArray_Descr* descr)
-.. c:function:: PyArray_ISBOOL(PyArrayObject *obj)
+.. c:function:: int PyArray_ISBOOL(PyArrayObject *obj)
Type represents Boolean data type.
-.. c:function:: PyDataType_HASFIELDS(PyArray_Descr* descr)
+.. c:function:: int PyDataType_HASFIELDS(PyArray_Descr* descr)
-.. c:function:: PyArray_HASFIELDS(PyArrayObject *obj)
+.. c:function:: int PyArray_HASFIELDS(PyArrayObject *obj)
Type has fields associated with it.
-.. c:function:: PyArray_ISNOTSWAPPED(m)
+.. c:function:: int PyArray_ISNOTSWAPPED(PyArrayObject *m)
Evaluates true if the data area of the ndarray *m* is in machine
byte-order according to the array's data-type descriptor.
-.. c:function:: PyArray_ISBYTESWAPPED(m)
+.. c:function:: int PyArray_ISBYTESWAPPED(PyArrayObject *m)
Evaluates true if the data area of the ndarray *m* is **not** in
machine byte-order according to the array's data-type descriptor.
-.. c:function:: Bool PyArray_EquivTypes( \
+.. c:function:: npy_bool PyArray_EquivTypes( \
PyArray_Descr* type1, PyArray_Descr* type2)
Return :c:data:`NPY_TRUE` if *type1* and *type2* actually represent
@@ -1008,18 +1009,18 @@ argument must be a :c:type:`PyObject *<PyObject>` that can be directly interpret
:c:data:`NPY_LONG` and :c:data:`NPY_INT` are equivalent. Otherwise
return :c:data:`NPY_FALSE`.
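As a quick illustration (not part of this patch), a minimal sketch of using ``PyArray_EquivTypes`` on two built-in descriptors; ``PyArray_DescrFromType`` returns new references that must be released:

.. code-block:: c

   PyArray_Descr *d1 = PyArray_DescrFromType(NPY_LONG);
   PyArray_Descr *d2 = PyArray_DescrFromType(NPY_INT);
   if (PyArray_EquivTypes(d1, d2)) {
       /* on this platform the two layouts are interchangeable */
   }
   Py_DECREF(d1);
   Py_DECREF(d2);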
-.. c:function:: Bool PyArray_EquivArrTypes( \
+.. c:function:: npy_bool PyArray_EquivArrTypes( \
PyArrayObject* a1, PyArrayObject * a2)
Return :c:data:`NPY_TRUE` if *a1* and *a2* are arrays with equivalent
types for this platform.
-.. c:function:: Bool PyArray_EquivTypenums(int typenum1, int typenum2)
+.. c:function:: npy_bool PyArray_EquivTypenums(int typenum1, int typenum2)
Special case of :c:func:`PyArray_EquivTypes` (...) that does not accept
flexible data types but may be easier to call.
-.. c:function:: int PyArray_EquivByteorders({byteorder} b1, {byteorder} b2)
+.. c:function:: int PyArray_EquivByteorders(int b1, int b2)
True if byteorder characters ( :c:data:`NPY_LITTLE`,
:c:data:`NPY_BIG`, :c:data:`NPY_NATIVE`, :c:data:`NPY_IGNORE` ) are
@@ -1142,8 +1143,8 @@ Converting data types
storing the max value of the input types converted to a string or unicode.
.. c:function:: PyArray_Descr* PyArray_ResultType( \
- npy_intp narrs, PyArrayObject**arrs, npy_intp ndtypes, \
- PyArray_Descr**dtypes)
+ npy_intp narrs, PyArrayObject **arrs, npy_intp ndtypes, \
+ PyArray_Descr **dtypes)
.. versionadded:: 1.6
@@ -1334,7 +1335,7 @@ Special functions for NPY_OBJECT
locations in the structure with object data-types. No checking is
performed but *arr* must be of data-type :c:type:`NPY_OBJECT` and be
single-segment and uninitialized (no previous objects in
- position). Use :c:func:`PyArray_DECREF` (*arr*) if you need to
+ position). Use :c:func:`PyArray_XDECREF` (*arr*) if you need to
decrement all the items in the object array prior to calling this
function.
@@ -1343,7 +1344,7 @@ Special functions for NPY_OBJECT
Precondition: ``arr`` is a copy of ``base`` (though possibly with different
strides, ordering, etc.) Set the UPDATEIFCOPY flag and ``arr->base`` so
that when ``arr`` is destructed, it will copy any changes back to ``base``.
- DEPRECATED, use :c:func:`PyArray_SetWritebackIfCopyBase``.
+ DEPRECATED, use :c:func:`PyArray_SetWritebackIfCopyBase`.
Returns 0 for success, -1 for failure.
@@ -1353,7 +1354,7 @@ Special functions for NPY_OBJECT
strides, ordering, etc.) Sets the :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` flag
and ``arr->base``, and set ``base`` to READONLY. Call
:c:func:`PyArray_ResolveWritebackIfCopy` before calling
- `Py_DECREF`` in order copy any changes back to ``base`` and
+   `Py_DECREF` in order to copy any changes back to ``base`` and
reset the READONLY flag.
Returns 0 for success, -1 for failure.
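For orientation, a hedged sketch of the intended call sequence, assuming ``arr`` is a well-behaved working copy of ``base`` created by the caller:

.. code-block:: c

   /* register base so that changes to arr can be written back later */
   if (PyArray_SetWritebackIfCopyBase(arr, base) < 0) {
       /* handle error */
   }
   /* ... write into the data of arr ... */
   if (PyArray_ResolveWritebackIfCopy(arr) < 0) {
       /* handle error */
   }
   Py_DECREF(arr);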
@@ -1397,12 +1398,12 @@ In versions 1.6 and earlier of NumPy, the following flags
did not have the _ARRAY_ macro namespace in them. That form
of the constant names is deprecated in 1.7.
-.. c:var:: NPY_ARRAY_C_CONTIGUOUS
+.. c:macro:: NPY_ARRAY_C_CONTIGUOUS
The data area is in C-style contiguous order (last index varies the
fastest).
-.. c:var:: NPY_ARRAY_F_CONTIGUOUS
+.. c:macro:: NPY_ARRAY_F_CONTIGUOUS
The data area is in Fortran-style contiguous order (first index varies
the fastest).
@@ -1423,22 +1424,22 @@ of the constant names is deprecated in 1.7.
.. seealso:: :ref:`Internal memory layout of an ndarray <arrays.ndarray>`
-.. c:var:: NPY_ARRAY_OWNDATA
+.. c:macro:: NPY_ARRAY_OWNDATA
The data area is owned by this array.
-.. c:var:: NPY_ARRAY_ALIGNED
+.. c:macro:: NPY_ARRAY_ALIGNED
The data area and all array elements are aligned appropriately.
-.. c:var:: NPY_ARRAY_WRITEABLE
+.. c:macro:: NPY_ARRAY_WRITEABLE
The data area can be written to.
Notice that the above 3 flags are defined so that a new, well-
behaved array has these flags defined as true.
-.. c:var:: NPY_ARRAY_WRITEBACKIFCOPY
+.. c:macro:: NPY_ARRAY_WRITEBACKIFCOPY
The data area represents a (well-behaved) copy whose information
should be transferred back to the original when
@@ -1457,7 +1458,7 @@ of the constant names is deprecated in 1.7.
would have returned an error because :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`
would not have been possible.
-.. c:var:: NPY_ARRAY_UPDATEIFCOPY
+.. c:macro:: NPY_ARRAY_UPDATEIFCOPY
A deprecated version of :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` which
depends upon ``dealloc`` to trigger the writeback. For backwards
@@ -1474,31 +1475,31 @@ for ``flags`` which can be any of :c:data:`NPY_ARRAY_C_CONTIGUOUS`,
Combinations of array flags
^^^^^^^^^^^^^^^^^^^^^^^^^^^
-.. c:var:: NPY_ARRAY_BEHAVED
+.. c:macro:: NPY_ARRAY_BEHAVED
:c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEABLE`
-.. c:var:: NPY_ARRAY_CARRAY
+.. c:macro:: NPY_ARRAY_CARRAY
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_BEHAVED`
-.. c:var:: NPY_ARRAY_CARRAY_RO
+.. c:macro:: NPY_ARRAY_CARRAY_RO
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
-.. c:var:: NPY_ARRAY_FARRAY
+.. c:macro:: NPY_ARRAY_FARRAY
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_BEHAVED`
-.. c:var:: NPY_ARRAY_FARRAY_RO
+.. c:macro:: NPY_ARRAY_FARRAY_RO
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
-.. c:var:: NPY_ARRAY_DEFAULT
+.. c:macro:: NPY_ARRAY_DEFAULT
:c:data:`NPY_ARRAY_CARRAY`
-.. c:var:: NPY_ARRAY_UPDATE_ALL
+.. c:macro:: NPY_ARRAY_UPDATE_ALL
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
@@ -1509,28 +1510,19 @@ Flag-like constants
These constants are used in :c:func:`PyArray_FromAny` (and its macro forms) to
specify desired properties of the new array.
-.. c:var:: NPY_ARRAY_FORCECAST
+.. c:macro:: NPY_ARRAY_FORCECAST
Cast to the desired type, even if it can't be done without losing
information.
-.. c:var:: NPY_ARRAY_ENSURECOPY
+.. c:macro:: NPY_ARRAY_ENSURECOPY
Make sure the resulting array is a copy of the original.
-.. c:var:: NPY_ARRAY_ENSUREARRAY
+.. c:macro:: NPY_ARRAY_ENSUREARRAY
Make sure the resulting object is an actual ndarray, and not a sub-class.
-.. c:var:: NPY_ARRAY_NOTSWAPPED
-
- Only used in :c:func:`PyArray_CheckFromAny` to over-ride the byteorder
- of the data-type object passed in.
-
-.. c:var:: NPY_ARRAY_BEHAVED_NS
-
- :c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEABLE` \| :c:data:`NPY_ARRAY_NOTSWAPPED`
-
Flag checking
^^^^^^^^^^^^^
@@ -1538,7 +1530,7 @@ Flag checking
For all of these macros *arr* must be an instance of a (subclass of)
:c:data:`PyArray_Type`.
-.. c:function:: PyArray_CHKFLAGS(PyObject *arr, flags)
+.. c:function:: int PyArray_CHKFLAGS(PyObject *arr, int flags)
The first parameter, arr, must be an ndarray or subclass. The
parameter, *flags*, should be an integer consisting of bitwise
@@ -1548,60 +1540,60 @@ For all of these macros *arr* must be an instance of a (subclass of)
:c:data:`NPY_ARRAY_WRITEABLE`, :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`,
:c:data:`NPY_ARRAY_UPDATEIFCOPY`.
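For example, a small sketch (illustrative only) that rejects inputs which are not C-contiguous, aligned, and writeable; ``arr`` is assumed to be an ndarray received from the caller:

.. code-block:: c

   if (!PyArray_CHKFLAGS(arr, NPY_ARRAY_C_CONTIGUOUS |
                              NPY_ARRAY_ALIGNED |
                              NPY_ARRAY_WRITEABLE)) {
       PyErr_SetString(PyExc_ValueError,
                       "expected a writeable, aligned, C-contiguous array");
       return NULL;
   }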
-.. c:function:: PyArray_IS_C_CONTIGUOUS(PyObject *arr)
+.. c:function:: int PyArray_IS_C_CONTIGUOUS(PyObject *arr)
Evaluates true if *arr* is C-style contiguous.
-.. c:function:: PyArray_IS_F_CONTIGUOUS(PyObject *arr)
+.. c:function:: int PyArray_IS_F_CONTIGUOUS(PyObject *arr)
Evaluates true if *arr* is Fortran-style contiguous.
-.. c:function:: PyArray_ISFORTRAN(PyObject *arr)
+.. c:function:: int PyArray_ISFORTRAN(PyObject *arr)
Evaluates true if *arr* is Fortran-style contiguous and *not*
C-style contiguous. :c:func:`PyArray_IS_F_CONTIGUOUS`
is the correct way to test for Fortran-style contiguity.
-.. c:function:: PyArray_ISWRITEABLE(PyObject *arr)
+.. c:function:: int PyArray_ISWRITEABLE(PyObject *arr)
    Evaluates true if the data area of *arr* can be written to.
-.. c:function:: PyArray_ISALIGNED(PyObject *arr)
+.. c:function:: int PyArray_ISALIGNED(PyObject *arr)
Evaluates true if the data area of *arr* is properly aligned on
the machine.
-.. c:function:: PyArray_ISBEHAVED(PyObject *arr)
+.. c:function:: int PyArray_ISBEHAVED(PyObject *arr)
Evaluates true if the data area of *arr* is aligned and writeable
and in machine byte-order according to its descriptor.
-.. c:function:: PyArray_ISBEHAVED_RO(PyObject *arr)
+.. c:function:: int PyArray_ISBEHAVED_RO(PyObject *arr)
Evaluates true if the data area of *arr* is aligned and in machine
byte-order.
-.. c:function:: PyArray_ISCARRAY(PyObject *arr)
+.. c:function:: int PyArray_ISCARRAY(PyObject *arr)
Evaluates true if the data area of *arr* is C-style contiguous,
and :c:func:`PyArray_ISBEHAVED` (*arr*) is true.
-.. c:function:: PyArray_ISFARRAY(PyObject *arr)
+.. c:function:: int PyArray_ISFARRAY(PyObject *arr)
Evaluates true if the data area of *arr* is Fortran-style
contiguous and :c:func:`PyArray_ISBEHAVED` (*arr*) is true.
-.. c:function:: PyArray_ISCARRAY_RO(PyObject *arr)
+.. c:function:: int PyArray_ISCARRAY_RO(PyObject *arr)
Evaluates true if the data area of *arr* is C-style contiguous,
aligned, and in machine byte-order.
-.. c:function:: PyArray_ISFARRAY_RO(PyObject *arr)
+.. c:function:: int PyArray_ISFARRAY_RO(PyObject *arr)
Evaluates true if the data area of *arr* is Fortran-style
    contiguous, aligned, and in machine byte-order.
-.. c:function:: PyArray_ISONESEGMENT(PyObject *arr)
+.. c:function:: int PyArray_ISONESEGMENT(PyObject *arr)
Evaluates true if the data area of *arr* consists of a single
(C-style or Fortran-style) contiguous segment.
@@ -1659,7 +1651,7 @@ Conversion
destination must be an integer multiple of the number of elements
in *val*.
-.. c:function:: PyObject* PyArray_Byteswap(PyArrayObject* self, Bool inplace)
+.. c:function:: PyObject* PyArray_Byteswap(PyArrayObject* self, npy_bool inplace)
Equivalent to :meth:`ndarray.byteswap<numpy.ndarray.byteswap>` (*self*, *inplace*). Return an array
whose data area is byteswapped. If *inplace* is non-zero, then do
@@ -1876,16 +1868,16 @@ Item selection and manipulation
created. The *clipmode* argument determines behavior for when
entries in *self* are not between 0 and len(*op*).
- .. c:var:: NPY_RAISE
+ .. c:macro:: NPY_RAISE
raise a ValueError;
- .. c:var:: NPY_WRAP
+ .. c:macro:: NPY_WRAP
wrap values < 0 by adding len(*op*) and values >=len(*op*)
by subtracting len(*op*) until they are in range;
- .. c:var:: NPY_CLIP
+ .. c:macro:: NPY_CLIP
all values are clipped to the region [0, len(*op*) ).
@@ -2263,7 +2255,7 @@ Array Functions
See the :func:`~numpy.einsum` function for more details.
-.. c:function:: PyObject* PyArray_CopyAndTranspose(PyObject \* op)
+.. c:function:: PyObject* PyArray_CopyAndTranspose(PyObject * op)
A specialized copy and transpose function that works only for 2-d
arrays. The returned array is a transposed copy of *op*.
@@ -2318,7 +2310,7 @@ Array Functions
Other functions
^^^^^^^^^^^^^^^
-.. c:function:: Bool PyArray_CheckStrides( \
+.. c:function:: npy_bool PyArray_CheckStrides( \
int elsize, int nd, npy_intp numbytes, npy_intp const* dims, \
npy_intp const* newstrides)
@@ -2361,7 +2353,9 @@ it is possible to do this.
Defining an :c:type:`NpyAuxData` is similar to defining a class in C++,
but the object semantics have to be tracked manually since the API is in C.
Here's an example for a function which doubles up an element using
-an element copier function as a primitive.::
+an element copier function as a primitive.
+
+.. code-block:: c
typedef struct {
NpyAuxData base;
@@ -2425,12 +2419,12 @@ an element copier function as a primitive.::
functions should never set the Python exception on error, because
they may be called from a multi-threaded context.
-.. c:function:: NPY_AUXDATA_FREE(auxdata)
+.. c:function:: void NPY_AUXDATA_FREE(NpyAuxData *auxdata)
    A macro which calls the auxdata's free function appropriately;
    it does nothing if auxdata is NULL.
-.. c:function:: NPY_AUXDATA_CLONE(auxdata)
+.. c:function:: NpyAuxData *NPY_AUXDATA_CLONE(NpyAuxData *auxdata)
A macro which calls the auxdata's clone function appropriately,
returning a deep copy of the auxiliary data.
@@ -2453,7 +2447,7 @@ this useful approach to looping over an array.
it easy to loop over an N-dimensional non-contiguous array in
C-style contiguous fashion.
-.. c:function:: PyObject* PyArray_IterAllButAxis(PyObject* arr, int \*axis)
+.. c:function:: PyObject* PyArray_IterAllButAxis(PyObject* arr, int* axis)
Return an array iterator that will iterate over all axes but the
one provided in *\*axis*. The returned iterator cannot be used
@@ -2497,7 +2491,7 @@ this useful approach to looping over an array.
*destination*, which must have size at least *iterator*
->nd_m1+1.
-.. c:function:: PyArray_ITER_GOTO1D(PyObject* iterator, npy_intp index)
+.. c:function:: void PyArray_ITER_GOTO1D(PyObject* iterator, npy_intp index)
Set the *iterator* index and dataptr to the location in the array
indicated by the integer *index* which points to an element in the
@@ -2818,11 +2812,21 @@ Data-type descriptors
Create a new data-type object with the byteorder set according to
*newendian*. All referenced data-type objects (in subdescr and
fields members of the data-type object) are also changed
- (recursively). If a byteorder of :c:data:`NPY_IGNORE` is encountered it
+ (recursively).
+
+ The value of *newendian* is one of these macros:
+
+ .. c:macro:: NPY_IGNORE
+ NPY_SWAP
+ NPY_NATIVE
+ NPY_LITTLE
+ NPY_BIG
+
+ If a byteorder of :c:data:`NPY_IGNORE` is encountered it
is left alone. If newendian is :c:data:`NPY_SWAP`, then all byte-orders
are swapped. Other valid newendian values are :c:data:`NPY_NATIVE`,
- :c:data:`NPY_LITTLE`, and :c:data:`NPY_BIG` which all cause the returned
- data-typed descriptor (and all it's
+ :c:data:`NPY_LITTLE`, and :c:data:`NPY_BIG` which all cause
+    the returned data-type descriptor (and all its
referenced data-type descriptors) to have the corresponding byte-
order.
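A short sketch (illustrative only) that derives a big-endian variant of a native descriptor:

.. code-block:: c

   PyArray_Descr *native = PyArray_DescrFromType(NPY_INT32);
   PyArray_Descr *big = PyArray_DescrNewByteorder(native, NPY_BIG);
   Py_DECREF(native);
   if (big == NULL) {
       /* handle error */
   }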
@@ -2956,11 +2960,11 @@ to.
already a buffer object pointing to another object). If you need
to hold on to the memory be sure to INCREF the base member. The
chunk of memory is pointed to by *buf* ->ptr member and has length
- *buf* ->len. The flags member of *buf* is :c:data:`NPY_BEHAVED_RO` with
- the :c:data:`NPY_ARRAY_WRITEABLE` flag set if *obj* has a writeable buffer
- interface.
+ *buf* ->len. The flags member of *buf* is :c:data:`NPY_ARRAY_ALIGNED`
+ with the :c:data:`NPY_ARRAY_WRITEABLE` flag set if *obj* has
+ a writeable buffer interface.
-.. c:function:: int PyArray_AxisConverter(PyObject \* obj, int* axis)
+.. c:function:: int PyArray_AxisConverter(PyObject* obj, int* axis)
Convert a Python object, *obj*, representing an axis argument to
the proper value for passing to the functions that take an integer
@@ -2968,7 +2972,7 @@ to.
:c:data:`NPY_MAXDIMS` which is interpreted correctly by the C-API
functions that take axis arguments.
-.. c:function:: int PyArray_BoolConverter(PyObject* obj, Bool* value)
+.. c:function:: int PyArray_BoolConverter(PyObject* obj, npy_bool* value)
Convert any Python object, *obj*, to :c:data:`NPY_TRUE` or
:c:data:`NPY_FALSE`, and place the result in *value*.
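Converter functions such as this one are meant for the ``"O&"`` format of :c:func:`PyArg_ParseTuple`. A minimal sketch, where ``set_flag`` is a hypothetical module function:

.. code-block:: c

   static PyObject *
   set_flag(PyObject *self, PyObject *args)
   {
       npy_bool value;
       if (!PyArg_ParseTuple(args, "O&", PyArray_BoolConverter, &value)) {
           return NULL;
       }
       /* ... act on value ... */
       Py_RETURN_NONE;
   }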
@@ -3120,19 +3124,19 @@ the C-API is needed then some additional steps must be taken.
Internally, these #defines work as follows:
* If neither is defined, the C-API is declared to be
- :c:type:`static void**`, so it is only visible within the
+ ``static void**``, so it is only visible within the
compilation unit that #includes numpy/arrayobject.h.
* If :c:macro:`PY_ARRAY_UNIQUE_SYMBOL` is #defined, but
:c:macro:`NO_IMPORT_ARRAY` is not, the C-API is declared to
- be :c:type:`void**`, so that it will also be visible to other
+ be ``void**``, so that it will also be visible to other
compilation units.
* If :c:macro:`NO_IMPORT_ARRAY` is #defined, regardless of
whether :c:macro:`PY_ARRAY_UNIQUE_SYMBOL` is, the C-API is
- declared to be :c:type:`extern void**`, so it is expected to
+ declared to be ``extern void**``, so it is expected to
be defined in another compilation unit.
* Whenever :c:macro:`PY_ARRAY_UNIQUE_SYMBOL` is #defined, it
also changes the name of the variable holding the C-API, which
- defaults to :c:data:`PyArray_API`, to whatever the macro is
+ defaults to ``PyArray_API``, to whatever the macro is
#defined to.
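A typical two-file layout looks roughly like the sketch below; ``mymodule_ARRAY_API`` is an arbitrary, project-chosen name, and ``import_array()`` must still be called once during module initialization:

.. code-block:: c

   /* module.c -- the only file that calls import_array() */
   #define PY_ARRAY_UNIQUE_SYMBOL mymodule_ARRAY_API
   #include <numpy/arrayobject.h>

   /* helpers.c -- every other file sharing the same C-API table */
   #define PY_ARRAY_UNIQUE_SYMBOL mymodule_ARRAY_API
   #define NO_IMPORT_ARRAY
   #include <numpy/arrayobject.h>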
Checking the API Version
@@ -3147,21 +3151,31 @@ calling the function). That's why several functions are provided to check for
numpy versions. The macros :c:data:`NPY_VERSION` and
:c:data:`NPY_FEATURE_VERSION` correspond to the numpy version used to build the
extension, whereas the versions returned by the functions
-PyArray_GetNDArrayCVersion and PyArray_GetNDArrayCFeatureVersion corresponds to
-the runtime numpy's version.
+:c:func:`PyArray_GetNDArrayCVersion` and :c:func:`PyArray_GetNDArrayCFeatureVersion`
+correspond to the numpy version at runtime.
The rules for ABI and API compatibilities can be summarized as follows:
- * Whenever :c:data:`NPY_VERSION` != PyArray_GetNDArrayCVersion, the
+ * Whenever :c:data:`NPY_VERSION` != ``PyArray_GetNDArrayCVersion()``, the
extension has to be recompiled (ABI incompatibility).
- * :c:data:`NPY_VERSION` == PyArray_GetNDArrayCVersion and
- :c:data:`NPY_FEATURE_VERSION` <= PyArray_GetNDArrayCFeatureVersion means
+ * :c:data:`NPY_VERSION` == ``PyArray_GetNDArrayCVersion()`` and
+ :c:data:`NPY_FEATURE_VERSION` <= ``PyArray_GetNDArrayCFeatureVersion()`` means
backward compatible changes.
ABI incompatibility is automatically detected in every numpy's version. API
incompatibility detection was added in numpy 1.4.0. If you want to supported
many different numpy versions with one extension binary, you have to build your
-extension with the lowest NPY_FEATURE_VERSION as possible.
+extension with the lowest :c:data:`NPY_FEATURE_VERSION` as possible.
+
+.. c:macro:: NPY_VERSION
+
+ The current version of the ndarray object (check to see if this
+ variable is defined to guarantee the ``numpy/arrayobject.h`` header is
+ being used).
+
+.. c:macro:: NPY_FEATURE_VERSION
+
+ The current version of the C-API.
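``import_array()`` already performs such a check, but an explicit sketch of the two rules above, placed in a module init function, could look like this (illustrative only):

.. code-block:: c

   if (NPY_VERSION != PyArray_GetNDArrayCVersion()) {
       PyErr_SetString(PyExc_ImportError,
                       "ABI version mismatch: recompile against this NumPy");
       return NULL;
   }
   if (NPY_FEATURE_VERSION > PyArray_GetNDArrayCFeatureVersion()) {
       PyErr_SetString(PyExc_ImportError,
                       "the running NumPy is older than the build-time API");
       return NULL;
   }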
.. c:function:: unsigned int PyArray_GetNDArrayCVersion(void)
@@ -3242,7 +3256,7 @@ Memory management
.. c:function:: char* PyDataMem_NEW(size_t nbytes)
-.. c:function:: PyDataMem_FREE(char* ptr)
+.. c:function:: void PyDataMem_FREE(char* ptr)
.. c:function:: char* PyDataMem_RENEW(void * ptr, size_t newbytes)
@@ -3251,7 +3265,7 @@ Memory management
.. c:function:: npy_intp* PyDimMem_NEW(int nd)
-.. c:function:: PyDimMem_FREE(char* ptr)
+.. c:function:: void PyDimMem_FREE(char* ptr)
.. c:function:: npy_intp* PyDimMem_RENEW(void* ptr, size_t newnd)
@@ -3259,7 +3273,7 @@ Memory management
.. c:function:: void* PyArray_malloc(size_t nbytes)
-.. c:function:: PyArray_free(void* ptr)
+.. c:function:: void PyArray_free(void* ptr)
.. c:function:: void* PyArray_realloc(npy_intp* ptr, size_t nbytes)
@@ -3268,6 +3282,8 @@ Memory management
:c:data:`NPY_USE_PYMEM` is 0, if :c:data:`NPY_USE_PYMEM` is 1, then
the Python memory allocator is used.
+ .. c:macro:: NPY_USE_PYMEM
+
.. c:function:: int PyArray_ResolveWritebackIfCopy(PyArrayObject* obj)
If ``obj.flags`` has :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` or (deprecated)
@@ -3298,9 +3314,13 @@ be accomplished using two groups of macros. Typically, if one macro in
a group is used in a code block, all of them must be used in the same
code block. Currently, :c:data:`NPY_ALLOW_THREADS` is defined to the
python-defined :c:data:`WITH_THREADS` constant unless the environment
-variable :c:data:`NPY_NOSMP` is set in which case
+variable ``NPY_NOSMP`` is set, in which case
:c:data:`NPY_ALLOW_THREADS` is defined to be 0.
+.. c:macro:: NPY_ALLOW_THREADS
+
+.. c:macro:: WITH_THREADS
+
Group 1
"""""""
@@ -3337,18 +3357,18 @@ Group 1
interpreter. This macro acquires the GIL and restores the
Python state from the saved variable.
- .. c:function:: NPY_BEGIN_THREADS_DESCR(PyArray_Descr *dtype)
+ .. c:function:: void NPY_BEGIN_THREADS_DESCR(PyArray_Descr *dtype)
Useful to release the GIL only if *dtype* does not contain
arbitrary Python objects which may need the Python interpreter
during execution of the loop.
- .. c:function:: NPY_END_THREADS_DESCR(PyArray_Descr *dtype)
+ .. c:function:: void NPY_END_THREADS_DESCR(PyArray_Descr *dtype)
Useful to regain the GIL in situations where it was released
using the BEGIN form of this macro.
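A rough sketch of the Group 1 macros around a loop, assuming ``dtype`` is the :c:type:`PyArray_Descr` of the data being processed (all of the macros must appear in the same code block):

.. code-block:: c

   NPY_BEGIN_THREADS_DEF;              /* declares the saved-state variable */

   NPY_BEGIN_THREADS_DESCR(dtype);     /* release the GIL unless dtype holds objects */
   /* ... pure C loop over the array data, no Python API calls ... */
   NPY_END_THREADS_DESCR(dtype);       /* re-acquire the GIL */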
- .. c:function:: NPY_BEGIN_THREADS_THRESHOLDED(int loop_size)
+ .. c:function:: void NPY_BEGIN_THREADS_THRESHOLDED(int loop_size)
Useful to release the GIL only if *loop_size* exceeds a
minimum threshold, currently set to 500. Should be matched
@@ -3388,15 +3408,15 @@ Group 2
Priority
^^^^^^^^
-.. c:var:: NPY_PRIORITY
+.. c:macro:: NPY_PRIORITY
Default priority for arrays.
-.. c:var:: NPY_SUBTYPE_PRIORITY
+.. c:macro:: NPY_SUBTYPE_PRIORITY
Default subtype priority.
-.. c:var:: NPY_SCALAR_PRIORITY
+.. c:macro:: NPY_SCALAR_PRIORITY
Default scalar priority (very small)
@@ -3411,15 +3431,15 @@ Priority
Default buffers
^^^^^^^^^^^^^^^
-.. c:var:: NPY_BUFSIZE
+.. c:macro:: NPY_BUFSIZE
Default size of the user-settable internal buffers.
-.. c:var:: NPY_MIN_BUFSIZE
+.. c:macro:: NPY_MIN_BUFSIZE
Smallest size of user-settable internal buffers.
-.. c:var:: NPY_MAX_BUFSIZE
+.. c:macro:: NPY_MAX_BUFSIZE
Largest size allowed for the user-settable buffers.
@@ -3427,38 +3447,32 @@ Default buffers
Other constants
^^^^^^^^^^^^^^^
-.. c:var:: NPY_NUM_FLOATTYPE
+.. c:macro:: NPY_NUM_FLOATTYPE
The number of floating-point types
-.. c:var:: NPY_MAXDIMS
+.. c:macro:: NPY_MAXDIMS
The maximum number of dimensions allowed in arrays.
-.. c:var:: NPY_MAXARGS
+.. c:macro:: NPY_MAXARGS
The maximum number of array arguments that can be used in functions.
-.. c:var:: NPY_VERSION
-
- The current version of the ndarray object (check to see if this
- variable is defined to guarantee the numpy/arrayobject.h header is
- being used).
-
-.. c:var:: NPY_FALSE
+.. c:macro:: NPY_FALSE
Defined as 0 for use with Bool.
-.. c:var:: NPY_TRUE
+.. c:macro:: NPY_TRUE
Defined as 1 for use with Bool.
-.. c:var:: NPY_FAIL
+.. c:macro:: NPY_FAIL
The return value of failed converter functions which are called using
the "O&" syntax in :c:func:`PyArg_ParseTuple`-like functions.
-.. c:var:: NPY_SUCCEED
+.. c:macro:: NPY_SUCCEED
The return value of successful converter functions which are called
using the "O&" syntax in :c:func:`PyArg_ParseTuple`-like functions.
@@ -3467,7 +3481,7 @@ Other constants
Miscellaneous Macros
^^^^^^^^^^^^^^^^^^^^
-.. c:function:: PyArray_SAMESHAPE(PyArrayObject *a1, PyArrayObject *a2)
+.. c:function:: int PyArray_SAMESHAPE(PyArrayObject *a1, PyArrayObject *a2)
Evaluates as True if arrays *a1* and *a2* have the same shape.
@@ -3502,11 +3516,11 @@ Miscellaneous Macros
of the ordering which is lexicographic: comparing the real parts
first and then the complex parts if the real parts are equal.
-.. c:function:: PyArray_REFCOUNT(PyObject* op)
+.. c:function:: npy_intp PyArray_REFCOUNT(PyObject* op)
Returns the reference count of any Python object.
-.. c:function:: PyArray_DiscardWritebackIfCopy(PyObject* obj)
+.. c:function:: void PyArray_DiscardWritebackIfCopy(PyObject* obj)
If ``obj.flags`` has :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` or (deprecated)
:c:data:`NPY_ARRAY_UPDATEIFCOPY`, this function clears the flags, `DECREF` s
@@ -3517,7 +3531,7 @@ Miscellaneous Macros
error when you are finished with ``obj``, just before ``Py_DECREF(obj)``.
It may be called multiple times, or with ``NULL`` input.
-.. c:function:: PyArray_XDECREF_ERR(PyObject* obj)
+.. c:function:: void PyArray_XDECREF_ERR(PyObject* obj)
Deprecated in 1.14, use :c:func:`PyArray_DiscardWritebackIfCopy`
followed by ``Py_XDECREF``
@@ -3623,6 +3637,22 @@ Enumerated Types
Wraps an index to the valid range if it is out of bounds.
+.. c:type:: NPY_SEARCHSIDE
+
+ A variable type indicating whether the index returned should be that of
+ the first suitable location (if :c:data:`NPY_SEARCHLEFT`) or of the last
+ (if :c:data:`NPY_SEARCHRIGHT`).
+
+ .. c:var:: NPY_SEARCHLEFT
+
+ .. c:var:: NPY_SEARCHRIGHT
+
+.. c:type:: NPY_SELECTKIND
+
+ A variable type indicating the selection algorithm being used.
+
+ .. c:var:: NPY_INTROSELECT
+
.. c:type:: NPY_CASTING
.. versionadded:: 1.6
diff --git a/doc/source/reference/c-api/config.rst b/doc/source/reference/c-api/config.rst
index 05e6fe44d..87130699b 100644
--- a/doc/source/reference/c-api/config.rst
+++ b/doc/source/reference/c-api/config.rst
@@ -19,59 +19,62 @@ avoid namespace pollution.
Data type sizes
---------------
-The :c:data:`NPY_SIZEOF_{CTYPE}` constants are defined so that sizeof
+The ``NPY_SIZEOF_{CTYPE}`` constants are defined so that sizeof
information is available to the pre-processor.
-.. c:var:: NPY_SIZEOF_SHORT
+.. c:macro:: NPY_SIZEOF_SHORT
sizeof(short)
-.. c:var:: NPY_SIZEOF_INT
+.. c:macro:: NPY_SIZEOF_INT
sizeof(int)
-.. c:var:: NPY_SIZEOF_LONG
+.. c:macro:: NPY_SIZEOF_LONG
sizeof(long)
-.. c:var:: NPY_SIZEOF_LONGLONG
+.. c:macro:: NPY_SIZEOF_LONGLONG
sizeof(longlong) where longlong is defined appropriately on the
platform.
-.. c:var:: NPY_SIZEOF_PY_LONG_LONG
+.. c:macro:: NPY_SIZEOF_PY_LONG_LONG
-.. c:var:: NPY_SIZEOF_FLOAT
+.. c:macro:: NPY_SIZEOF_FLOAT
sizeof(float)
-.. c:var:: NPY_SIZEOF_DOUBLE
+.. c:macro:: NPY_SIZEOF_DOUBLE
sizeof(double)
-.. c:var:: NPY_SIZEOF_LONG_DOUBLE
+.. c:macro:: NPY_SIZEOF_LONG_DOUBLE
- sizeof(longdouble) (A macro defines **NPY_SIZEOF_LONGDOUBLE** as well.)
+.. c:macro:: NPY_SIZEOF_LONGDOUBLE
-.. c:var:: NPY_SIZEOF_PY_INTPTR_T
+ sizeof(longdouble)
- Size of a pointer on this platform (sizeof(void \*)) (A macro defines
- NPY_SIZEOF_INTP as well.)
+.. c:macro:: NPY_SIZEOF_PY_INTPTR_T
+
+.. c:macro:: NPY_SIZEOF_INTP
+
+ Size of a pointer on this platform (sizeof(void \*))
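Because these are preprocessor constants, they can drive conditional compilation; a small sketch:

.. code-block:: c

   #if NPY_SIZEOF_LONGDOUBLE > NPY_SIZEOF_DOUBLE
   /* long double really is wider than double on this platform */
   #endif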
Platform information
--------------------
-.. c:var:: NPY_CPU_X86
-.. c:var:: NPY_CPU_AMD64
-.. c:var:: NPY_CPU_IA64
-.. c:var:: NPY_CPU_PPC
-.. c:var:: NPY_CPU_PPC64
-.. c:var:: NPY_CPU_SPARC
-.. c:var:: NPY_CPU_SPARC64
-.. c:var:: NPY_CPU_S390
-.. c:var:: NPY_CPU_PARISC
+.. c:macro:: NPY_CPU_X86
+.. c:macro:: NPY_CPU_AMD64
+.. c:macro:: NPY_CPU_IA64
+.. c:macro:: NPY_CPU_PPC
+.. c:macro:: NPY_CPU_PPC64
+.. c:macro:: NPY_CPU_SPARC
+.. c:macro:: NPY_CPU_SPARC64
+.. c:macro:: NPY_CPU_S390
+.. c:macro:: NPY_CPU_PARISC
.. versionadded:: 1.3.0
@@ -80,11 +83,11 @@ Platform information
Defined in ``numpy/npy_cpu.h``
-.. c:var:: NPY_LITTLE_ENDIAN
+.. c:macro:: NPY_LITTLE_ENDIAN
-.. c:var:: NPY_BIG_ENDIAN
+.. c:macro:: NPY_BIG_ENDIAN
-.. c:var:: NPY_BYTE_ORDER
+.. c:macro:: NPY_BYTE_ORDER
.. versionadded:: 1.3.0
@@ -94,7 +97,7 @@ Platform information
Defined in ``numpy/npy_endian.h``.
-.. c:function:: PyArray_GetEndianness()
+.. c:function:: int PyArray_GetEndianness()
.. versionadded:: 1.3.0
@@ -102,21 +105,27 @@ Platform information
One of :c:data:`NPY_CPU_BIG`, :c:data:`NPY_CPU_LITTLE`,
or :c:data:`NPY_CPU_UNKNOWN_ENDIAN`.
+ .. c:macro:: NPY_CPU_BIG
+
+ .. c:macro:: NPY_CPU_LITTLE
+
+ .. c:macro:: NPY_CPU_UNKNOWN_ENDIAN
+
Compiler directives
-------------------
-.. c:var:: NPY_LIKELY
-.. c:var:: NPY_UNLIKELY
-.. c:var:: NPY_UNUSED
+.. c:macro:: NPY_LIKELY
+.. c:macro:: NPY_UNLIKELY
+.. c:macro:: NPY_UNUSED
Interrupt Handling
------------------
-.. c:var:: NPY_INTERRUPT_H
-.. c:var:: NPY_SIGSETJMP
-.. c:var:: NPY_SIGLONGJMP
-.. c:var:: NPY_SIGJMP_BUF
-.. c:var:: NPY_SIGINT_ON
-.. c:var:: NPY_SIGINT_OFF
+.. c:macro:: NPY_INTERRUPT_H
+.. c:macro:: NPY_SIGSETJMP
+.. c:macro:: NPY_SIGLONGJMP
+.. c:macro:: NPY_SIGJMP_BUF
+.. c:macro:: NPY_SIGINT_ON
+.. c:macro:: NPY_SIGINT_OFF
diff --git a/doc/source/reference/c-api/coremath.rst b/doc/source/reference/c-api/coremath.rst
index 0c46475cf..338c584a1 100644
--- a/doc/source/reference/c-api/coremath.rst
+++ b/doc/source/reference/c-api/coremath.rst
@@ -24,23 +24,23 @@ in doubt.
Floating point classification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. c:var:: NPY_NAN
+.. c:macro:: NPY_NAN
This macro is defined to a NaN (Not a Number), and is guaranteed to have
     the signbit unset ('positive' NaN). The corresponding single and extended
     precision macros are available with the suffixes F and L.
-.. c:var:: NPY_INFINITY
+.. c:macro:: NPY_INFINITY
     This macro is defined to a positive inf. The corresponding single and
     extended precision macros are available with the suffixes F and L.
-.. c:var:: NPY_PZERO
+.. c:macro:: NPY_PZERO
     This macro is defined to positive zero. The corresponding single and
     extended precision macros are available with the suffixes F and L.
-.. c:var:: NPY_NZERO
+.. c:macro:: NPY_NZERO
     This macro is defined to negative zero (that is, with the sign bit set). The
     corresponding single and extended precision macros are available with the
@@ -84,47 +84,47 @@ The following math constants are available in ``npy_math.h``. Single
and extended precision are also available by adding the ``f`` and
``l`` suffixes respectively.
-.. c:var:: NPY_E
+.. c:macro:: NPY_E
Base of natural logarithm (:math:`e`)
-.. c:var:: NPY_LOG2E
+.. c:macro:: NPY_LOG2E
     Logarithm to base 2 of :math:`e` (:math:`\frac{\ln(e)}{\ln(2)}`)
-.. c:var:: NPY_LOG10E
+.. c:macro:: NPY_LOG10E
     Logarithm to base 10 of :math:`e` (:math:`\frac{\ln(e)}{\ln(10)}`)
-.. c:var:: NPY_LOGE2
+.. c:macro:: NPY_LOGE2
Natural logarithm of 2 (:math:`\ln(2)`)
-.. c:var:: NPY_LOGE10
+.. c:macro:: NPY_LOGE10
Natural logarithm of 10 (:math:`\ln(10)`)
-.. c:var:: NPY_PI
+.. c:macro:: NPY_PI
Pi (:math:`\pi`)
-.. c:var:: NPY_PI_2
+.. c:macro:: NPY_PI_2
Pi divided by 2 (:math:`\frac{\pi}{2}`)
-.. c:var:: NPY_PI_4
+.. c:macro:: NPY_PI_4
Pi divided by 4 (:math:`\frac{\pi}{4}`)
-.. c:var:: NPY_1_PI
+.. c:macro:: NPY_1_PI
Reciprocal of pi (:math:`\frac{1}{\pi}`)
-.. c:var:: NPY_2_PI
+.. c:macro:: NPY_2_PI
Two times the reciprocal of pi (:math:`\frac{2}{\pi}`)
-.. c:var:: NPY_EULER
+.. c:macro:: NPY_EULER
The Euler constant
:math:`\lim_{n\rightarrow\infty}({\sum_{k=1}^n{\frac{1}{k}}-\ln n})`
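These constants are ordinary floating-point literals from ``npy_math.h``; a tiny sketch:

.. code-block:: c

   #include <numpy/npy_math.h>

   double r = 2.0;
   double circumference = 2.0 * NPY_PI * r;
   double log2_of_r = NPY_LOG2E * npy_log(r);   /* log2(x) = log(x) * log2(e) */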
@@ -308,35 +308,35 @@ __ https://en.wikipedia.org/wiki/Half-precision_floating-point_format
__ https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_half_float_pixel.txt
__ https://www.openexr.com/about.html
-.. c:var:: NPY_HALF_ZERO
+.. c:macro:: NPY_HALF_ZERO
This macro is defined to positive zero.
-.. c:var:: NPY_HALF_PZERO
+.. c:macro:: NPY_HALF_PZERO
This macro is defined to positive zero.
-.. c:var:: NPY_HALF_NZERO
+.. c:macro:: NPY_HALF_NZERO
This macro is defined to negative zero.
-.. c:var:: NPY_HALF_ONE
+.. c:macro:: NPY_HALF_ONE
This macro is defined to 1.0.
-.. c:var:: NPY_HALF_NEGONE
+.. c:macro:: NPY_HALF_NEGONE
This macro is defined to -1.0.
-.. c:var:: NPY_HALF_PINF
+.. c:macro:: NPY_HALF_PINF
This macro is defined to +inf.
-.. c:var:: NPY_HALF_NINF
+.. c:macro:: NPY_HALF_NINF
This macro is defined to -inf.
-.. c:var:: NPY_HALF_NAN
+.. c:macro:: NPY_HALF_NAN
This macro is defined to a NaN value, guaranteed to have its sign bit unset.
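The half-precision helpers are declared in ``numpy/halffloat.h``; a hedged sketch of round-tripping a value and classifying the NaN constant:

.. code-block:: c

   #include <numpy/halffloat.h>

   npy_half h = npy_float_to_half(1.5f);
   float back = npy_half_to_float(h);              /* 1.5f again */
   int nan_is_nan = npy_half_isnan(NPY_HALF_NAN);  /* non-zero */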
diff --git a/doc/source/reference/c-api/deprecations.rst b/doc/source/reference/c-api/deprecations.rst
index a382017a2..5b1abc6f2 100644
--- a/doc/source/reference/c-api/deprecations.rst
+++ b/doc/source/reference/c-api/deprecations.rst
@@ -48,7 +48,9 @@ warnings).
To use the NPY_NO_DEPRECATED_API mechanism, you need to #define it to
the target API version of NumPy before #including any NumPy headers.
-If you want to confirm that your code is clean against 1.7, use::
+If you want to confirm that your code is clean against 1.7, use:
+
+.. code-block:: c
#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
diff --git a/doc/source/reference/c-api/dtype.rst b/doc/source/reference/c-api/dtype.rst
index 72e908861..a1a53cdb6 100644
--- a/doc/source/reference/c-api/dtype.rst
+++ b/doc/source/reference/c-api/dtype.rst
@@ -30,7 +30,7 @@ Enumerated Types
There is a list of enumerated types defined providing the basic 24
data types plus some useful generic names. Whenever the code requires
a type number, one of these enumerated types is requested. The types
-are all called :c:data:`NPY_{NAME}`:
+are all called ``NPY_{NAME}``:
.. c:var:: NPY_BOOL
@@ -183,23 +183,23 @@ Some useful aliases of the above types are
Other useful related constants are
-.. c:var:: NPY_NTYPES
+.. c:macro:: NPY_NTYPES
The total number of built-in NumPy types. The enumeration covers
the range from 0 to NPY_NTYPES-1.
-.. c:var:: NPY_NOTYPE
+.. c:macro:: NPY_NOTYPE
A signal value guaranteed not to be a valid type enumeration number.
-.. c:var:: NPY_USERDEF
+.. c:macro:: NPY_USERDEF
The start of type numbers used for Custom Data types.
The various character codes indicating certain types are also part of
an enumerated list. References to type characters (should they be
needed at all) should always use these enumerations. The form of them
-is :c:data:`NPY_{NAME}LTR` where ``{NAME}`` can be
+is ``NPY_{NAME}LTR`` where ``{NAME}`` can be
**BOOL**, **BYTE**, **UBYTE**, **SHORT**, **USHORT**, **INT**,
**UINT**, **LONG**, **ULONG**, **LONGLONG**, **ULONGLONG**,
@@ -221,24 +221,17 @@ Defines
Max and min values for integers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-.. c:var:: NPY_MAX_INT{bits}
-
-.. c:var:: NPY_MAX_UINT{bits}
-
-.. c:var:: NPY_MIN_INT{bits}
-
+``NPY_MAX_INT{bits}``, ``NPY_MAX_UINT{bits}``, ``NPY_MIN_INT{bits}``
These are defined for ``{bits}`` = 8, 16, 32, 64, 128, and 256 and provide
the maximum (minimum) value of the corresponding (unsigned) integer
type. Note: the actual integer type may not be available on all
    platforms (e.g. 128-bit and 256-bit integers are rare).
-.. c:var:: NPY_MIN_{type}
-
+``NPY_MIN_{type}``
This is defined for ``{type}`` = **BYTE**, **SHORT**, **INT**,
**LONG**, **LONGLONG**, **INTP**
-.. c:var:: NPY_MAX_{type}
-
+``NPY_MAX_{type}``
    This is defined for ``{type}`` = **BYTE**, **UBYTE**,
**SHORT**, **USHORT**, **INT**, **UINT**, **LONG**, **ULONG**,
**LONGLONG**, **ULONGLONG**, **INTP**, **UINTP**
@@ -247,8 +240,8 @@ Max and min values for integers
Number of bits in data types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-All :c:data:`NPY_SIZEOF_{CTYPE}` constants have corresponding
-:c:data:`NPY_BITSOF_{CTYPE}` constants defined. The :c:data:`NPY_BITSOF_{CTYPE}`
+All ``NPY_SIZEOF_{CTYPE}`` constants have corresponding
+``NPY_BITSOF_{CTYPE}`` constants defined. The ``NPY_BITSOF_{CTYPE}``
constants provide the number of bits in the data type. Specifically,
the available ``{CTYPE}s`` are
@@ -263,7 +256,7 @@ All of the numeric data types (integer, floating point, and complex)
have constants that are defined to be a specific enumerated type
number. Exactly which enumerated type a bit-width type refers to is
platform dependent. In particular, the constants available are
-:c:data:`PyArray_{NAME}{BITS}` where ``{NAME}`` is **INT**, **UINT**,
+``PyArray_{NAME}{BITS}`` where ``{NAME}`` is **INT**, **UINT**,
**FLOAT**, **COMPLEX** and ``{BITS}`` can be 8, 16, 32, 64, 80, 96, 128,
160, 192, 256, and 512. Obviously not all bit-widths are available on
all platforms for all the kinds of numeric types. Commonly 8-, 16-,
@@ -397,8 +390,8 @@ There are also typedefs for signed integers, unsigned integers,
floating point, and complex floating point types of specific bit-
widths. The available type names are
- :c:type:`npy_int{bits}`, :c:type:`npy_uint{bits}`, :c:type:`npy_float{bits}`,
- and :c:type:`npy_complex{bits}`
+ ``npy_int{bits}``, ``npy_uint{bits}``, ``npy_float{bits}``,
+ and ``npy_complex{bits}``
where ``{bits}`` is the number of bits in the type and can be **8**,
**16**, **32**, **64**, 128, and 256 for integer types; 16, **32**
@@ -414,6 +407,12 @@ Printf Formatting
For help in printing, the following strings are defined as the correct
format specifier in printf and related commands.
- :c:data:`NPY_LONGLONG_FMT`, :c:data:`NPY_ULONGLONG_FMT`,
- :c:data:`NPY_INTP_FMT`, :c:data:`NPY_UINTP_FMT`,
- :c:data:`NPY_LONGDOUBLE_FMT`
+.. c:macro:: NPY_LONGLONG_FMT
+
+.. c:macro:: NPY_ULONGLONG_FMT
+
+.. c:macro:: NPY_INTP_FMT
+
+.. c:macro:: NPY_UINTP_FMT
+
+.. c:macro:: NPY_LONGDOUBLE_FMT
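These macros expand to the conversion letters only, so they are concatenated after a literal ``"%"``. A short sketch, assuming ``arr`` is a ``PyArrayObject *``:

.. code-block:: c

   #include <stdio.h>

   npy_intp n = PyArray_SIZE(arr);
   printf("array holds %" NPY_INTP_FMT " elements\n", n);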
diff --git a/doc/source/reference/c-api/iterator.rst b/doc/source/reference/c-api/iterator.rst
index b77d029cc..ae96bb3fb 100644
--- a/doc/source/reference/c-api/iterator.rst
+++ b/doc/source/reference/c-api/iterator.rst
@@ -313,17 +313,17 @@ Construction and Destruction
Flags that may be passed in ``flags``, applying to the whole
iterator, are:
- .. c:var:: NPY_ITER_C_INDEX
+ .. c:macro:: NPY_ITER_C_INDEX
Causes the iterator to track a raveled flat index matching C
order. This option cannot be used with :c:data:`NPY_ITER_F_INDEX`.
- .. c:var:: NPY_ITER_F_INDEX
+ .. c:macro:: NPY_ITER_F_INDEX
Causes the iterator to track a raveled flat index matching Fortran
order. This option cannot be used with :c:data:`NPY_ITER_C_INDEX`.
- .. c:var:: NPY_ITER_MULTI_INDEX
+ .. c:macro:: NPY_ITER_MULTI_INDEX
Causes the iterator to track a multi-index.
This prevents the iterator from coalescing axes to
@@ -336,7 +336,7 @@ Construction and Destruction
However, it is possible to remove axes again and use the iterator
normally if the size is small enough after removal.
- .. c:var:: NPY_ITER_EXTERNAL_LOOP
+ .. c:macro:: NPY_ITER_EXTERNAL_LOOP
Causes the iterator to skip iteration of the innermost
loop, requiring the user of the iterator to handle it.
@@ -344,7 +344,7 @@ Construction and Destruction
This flag is incompatible with :c:data:`NPY_ITER_C_INDEX`,
:c:data:`NPY_ITER_F_INDEX`, and :c:data:`NPY_ITER_MULTI_INDEX`.
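With this flag the caller drives the innermost loop itself. A condensed sketch (error checks omitted), assuming ``iter`` was constructed with ``NPY_ITER_EXTERNAL_LOOP``:

.. code-block:: c

   NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL);
   char **dataptr = NpyIter_GetDataPtrArray(iter);
   npy_intp *strideptr = NpyIter_GetInnerStrideArray(iter);
   npy_intp *sizeptr = NpyIter_GetInnerLoopSizePtr(iter);

   do {
       char *data = *dataptr;
       npy_intp stride = *strideptr, count = *sizeptr;
       while (count--) {
           /* process the element at `data` */
           data += stride;
       }
   } while (iternext(iter));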
- .. c:var:: NPY_ITER_DONT_NEGATE_STRIDES
+ .. c:macro:: NPY_ITER_DONT_NEGATE_STRIDES
This only affects the iterator when :c:type:`NPY_KEEPORDER` is
specified for the order parameter. By default with
@@ -355,7 +355,7 @@ Construction and Destruction
but don't want an axis reversed. This is the behavior of
``numpy.ravel(a, order='K')``, for instance.
- .. c:var:: NPY_ITER_COMMON_DTYPE
+ .. c:macro:: NPY_ITER_COMMON_DTYPE
Causes the iterator to convert all the operands to a common
data type, calculated based on the ufunc type promotion rules.
@@ -364,7 +364,7 @@ Construction and Destruction
If the common data type is known ahead of time, don't use this
flag. Instead, set the requested dtype for all the operands.
- .. c:var:: NPY_ITER_REFS_OK
+ .. c:macro:: NPY_ITER_REFS_OK
Indicates that arrays with reference types (object
arrays or structured arrays containing an object type)
@@ -373,7 +373,7 @@ Construction and Destruction
:c:func:`NpyIter_IterationNeedsAPI(iter)` is true, in which case
it may not release the GIL during iteration.
- .. c:var:: NPY_ITER_ZEROSIZE_OK
+ .. c:macro:: NPY_ITER_ZEROSIZE_OK
Indicates that arrays with a size of zero should be permitted.
Since the typical iteration loop does not naturally work with
@@ -381,7 +381,7 @@ Construction and Destruction
than zero before entering the iteration loop.
Currently only the operands are checked, not a forced shape.
- .. c:var:: NPY_ITER_REDUCE_OK
+ .. c:macro:: NPY_ITER_REDUCE_OK
Permits writeable operands with a dimension with zero
stride and size greater than one. Note that such operands
@@ -400,7 +400,7 @@ Construction and Destruction
after initializing the allocated operand to prepare the
buffers.
- .. c:var:: NPY_ITER_RANGED
+ .. c:macro:: NPY_ITER_RANGED
Enables support for iteration of sub-ranges of the full
``iterindex`` range ``[0, NpyIter_IterSize(iter))``. Use
@@ -414,7 +414,7 @@ Construction and Destruction
would require special handling, effectively making it more
like the buffered version.
- .. c:var:: NPY_ITER_BUFFERED
+ .. c:macro:: NPY_ITER_BUFFERED
Causes the iterator to store buffering data, and use buffering
to satisfy data type, alignment, and byte-order requirements.
@@ -441,7 +441,7 @@ Construction and Destruction
the inner loops may become smaller depending
on the structure of the reduction.
- .. c:var:: NPY_ITER_GROWINNER
+ .. c:macro:: NPY_ITER_GROWINNER
When buffering is enabled, this allows the size of the inner
loop to grow when buffering isn't necessary. This option
@@ -449,7 +449,7 @@ Construction and Destruction
data, rather than anything with small cache-friendly arrays
of temporary values for each inner loop.
- .. c:var:: NPY_ITER_DELAY_BUFALLOC
+ .. c:macro:: NPY_ITER_DELAY_BUFALLOC
When buffering is enabled, this delays allocation of the
buffers until :c:func:`NpyIter_Reset` or another reset function is
@@ -465,7 +465,7 @@ Construction and Destruction
Then, call :c:func:`NpyIter_Reset` to allocate and fill the buffers
with their initial values.
- .. c:var:: NPY_ITER_COPY_IF_OVERLAP
+ .. c:macro:: NPY_ITER_COPY_IF_OVERLAP
If any write operand has overlap with any read operand, eliminate all
overlap by making temporary copies (enabling UPDATEIFCOPY for write
@@ -484,9 +484,9 @@ Construction and Destruction
Flags that may be passed in ``op_flags[i]``, where ``0 <= i < nop``:
- .. c:var:: NPY_ITER_READWRITE
- .. c:var:: NPY_ITER_READONLY
- .. c:var:: NPY_ITER_WRITEONLY
+ .. c:macro:: NPY_ITER_READWRITE
+ .. c:macro:: NPY_ITER_READONLY
+ .. c:macro:: NPY_ITER_WRITEONLY
Indicate how the user of the iterator will read or write
to ``op[i]``. Exactly one of these flags must be specified
@@ -495,13 +495,13 @@ Construction and Destruction
semantics. The data will be written back to the original array
when ``NpyIter_Deallocate`` is called.
- .. c:var:: NPY_ITER_COPY
+ .. c:macro:: NPY_ITER_COPY
Allow a copy of ``op[i]`` to be made if it does not
meet the data type or alignment requirements as specified
by the constructor flags and parameters.
- .. c:var:: NPY_ITER_UPDATEIFCOPY
+ .. c:macro:: NPY_ITER_UPDATEIFCOPY
Triggers :c:data:`NPY_ITER_COPY`, and when an array operand
is flagged for writing and is copied, causes the data
@@ -513,9 +513,9 @@ Construction and Destruction
to back to ``op[i]`` on calling ``NpyIter_Deallocate``, instead of
doing the unnecessary copy operation.
- .. c:var:: NPY_ITER_NBO
- .. c:var:: NPY_ITER_ALIGNED
- .. c:var:: NPY_ITER_CONTIG
+ .. c:macro:: NPY_ITER_NBO
+ .. c:macro:: NPY_ITER_ALIGNED
+ .. c:macro:: NPY_ITER_CONTIG
Causes the iterator to provide data for ``op[i]``
that is in native byte order, aligned according to
@@ -534,7 +534,7 @@ Construction and Destruction
the NBO flag overrides it and the requested data type is
converted to be in native byte order.
- .. c:var:: NPY_ITER_ALLOCATE
+ .. c:macro:: NPY_ITER_ALLOCATE
This is for output arrays, and requires that the flag
:c:data:`NPY_ITER_WRITEONLY` or :c:data:`NPY_ITER_READWRITE`
@@ -557,7 +557,7 @@ Construction and Destruction
getting the i-th object in the returned C array. The caller
must call Py_INCREF on it to claim a reference to the array.
- .. c:var:: NPY_ITER_NO_SUBTYPE
+ .. c:macro:: NPY_ITER_NO_SUBTYPE
For use with :c:data:`NPY_ITER_ALLOCATE`, this flag disables
allocating an array subtype for the output, forcing
@@ -566,12 +566,12 @@ Construction and Destruction
TODO: Maybe it would be better to introduce a function
``NpyIter_GetWrappedOutput`` and remove this flag?
- .. c:var:: NPY_ITER_NO_BROADCAST
+ .. c:macro:: NPY_ITER_NO_BROADCAST
Ensures that the input or output matches the iteration
dimensions exactly.
- .. c:var:: NPY_ITER_ARRAYMASK
+ .. c:macro:: NPY_ITER_ARRAYMASK
.. versionadded:: 1.7
@@ -595,7 +595,7 @@ Construction and Destruction
modified. This is useful when the mask should be a combination
of input masks.
- .. c:var:: NPY_ITER_WRITEMASKED
+ .. c:macro:: NPY_ITER_WRITEMASKED
.. versionadded:: 1.7
@@ -613,7 +613,7 @@ Construction and Destruction
returns true from the corresponding element in the ARRAYMASK
operand.
- .. c:var:: NPY_ITER_OVERLAP_ASSUME_ELEMENTWISE
+ .. c:macro:: NPY_ITER_OVERLAP_ASSUME_ELEMENTWISE
In memory overlap checks, assume that operands with
``NPY_ITER_OVERLAP_ASSUME_ELEMENTWISE`` enabled are accessed only
@@ -707,7 +707,7 @@ Construction and Destruction
:c:func:`NpyIter_Deallocate` must be called for each copy.
-.. c:function:: int NpyIter_RemoveAxis(NpyIter* iter, int axis)``
+.. c:function:: int NpyIter_RemoveAxis(NpyIter* iter, int axis)
Removes an axis from iteration. This requires that
:c:data:`NPY_ITER_MULTI_INDEX` was set for iterator creation, and does
@@ -1264,7 +1264,7 @@ functions provide that information.
NPY_MAX_INTP is placed in the stride.
Once the iterator is prepared for iteration (after a reset if
- :c:data:`NPY_DELAY_BUFALLOC` was used), call this to get the strides
+ :c:data:`NPY_ITER_DELAY_BUFALLOC` was used), call this to get the strides
which may be used to select a fast inner loop function. For example,
if the stride is 0, that means the inner loop can always load its
value into a variable once, then use the variable throughout the loop,
diff --git a/doc/source/reference/c-api/types-and-structures.rst b/doc/source/reference/c-api/types-and-structures.rst
index 60d8e420b..763f985a6 100644
--- a/doc/source/reference/c-api/types-and-structures.rst
+++ b/doc/source/reference/c-api/types-and-structures.rst
@@ -26,7 +26,7 @@ By constructing a new Python type you make available a new object for
Python. The ndarray object is an example of a new type defined in C.
New types are defined in C by two basic steps:
-1. creating a C-structure (usually named :c:type:`Py{Name}Object`) that is
+1. creating a C-structure (usually named ``Py{Name}Object``) that is
binary- compatible with the :c:type:`PyObject` structure itself but holds
the additional information needed for that particular object;
@@ -69,6 +69,7 @@ PyArray_Type and PyArrayObject
typeobject.
.. c:type:: PyArrayObject
+ NPY_AO
The :c:type:`PyArrayObject` C-structure contains all of the required
information for an array. All instances of an ndarray (and its
@@ -77,7 +78,9 @@ PyArray_Type and PyArrayObject
provided macros. If you need a shorter name, then you can make use
of :c:type:`NPY_AO` (deprecated) which is defined to be equivalent to
:c:type:`PyArrayObject`. Direct access to the struct fields are
- deprecated. Use the `PyArray_*(arr)` form instead.
+ deprecated. Use the ``PyArray_*(arr)`` form instead.
+ As of NumPy 1.20, the size of this struct is not considered part of
+ the NumPy ABI (see note at the end of the member list).
.. code-block:: c
@@ -91,86 +94,108 @@ PyArray_Type and PyArrayObject
PyArray_Descr *descr;
int flags;
PyObject *weakreflist;
+          /* version dependent private members */
} PyArrayObject;
-.. c:macro:: PyArrayObject.PyObject_HEAD
+ .. c:macro:: PyObject_HEAD
- This is needed by all Python objects. It consists of (at least)
- a reference count member ( ``ob_refcnt`` ) and a pointer to the
- typeobject ( ``ob_type`` ). (Other elements may also be present
- if Python was compiled with special options see
- Include/object.h in the Python source tree for more
- information). The ob_type member points to a Python type
- object.
+ This is needed by all Python objects. It consists of (at least)
+ a reference count member ( ``ob_refcnt`` ) and a pointer to the
+ typeobject ( ``ob_type`` ). (Other elements may also be present
+ if Python was compiled with special options see
+ Include/object.h in the Python source tree for more
+ information). The ob_type member points to a Python type
+ object.
-.. c:member:: char *PyArrayObject.data
+ .. c:member:: char *data
- Accessible via :c:data:`PyArray_DATA`, this data member is a
- pointer to the first element of the array. This pointer can
- (and normally should) be recast to the data type of the array.
+ Accessible via :c:data:`PyArray_DATA`, this data member is a
+ pointer to the first element of the array. This pointer can
+ (and normally should) be recast to the data type of the array.
-.. c:member:: int PyArrayObject.nd
+ .. c:member:: int nd
- An integer providing the number of dimensions for this
- array. When nd is 0, the array is sometimes called a rank-0
- array. Such arrays have undefined dimensions and strides and
- cannot be accessed. Macro :c:data:`PyArray_NDIM` defined in
- ``ndarraytypes.h`` points to this data member. :c:data:`NPY_MAXDIMS`
- is the largest number of dimensions for any array.
+ An integer providing the number of dimensions for this
+ array. When nd is 0, the array is sometimes called a rank-0
+ array. Such arrays have undefined dimensions and strides and
+ cannot be accessed. Macro :c:data:`PyArray_NDIM` defined in
+ ``ndarraytypes.h`` points to this data member. :c:data:`NPY_MAXDIMS`
+ is the largest number of dimensions for any array.
-.. c:member:: npy_intp PyArrayObject.dimensions
+ .. c:member:: npy_intp dimensions
- An array of integers providing the shape in each dimension as
- long as nd :math:`\geq` 1. The integer is always large enough
- to hold a pointer on the platform, so the dimension size is
- only limited by memory. :c:data:`PyArray_DIMS` is the macro
- associated with this data member.
+ An array of integers providing the shape in each dimension as
+ long as nd :math:`\geq` 1. The integer is always large enough
+ to hold a pointer on the platform, so the dimension size is
+ only limited by memory. :c:data:`PyArray_DIMS` is the macro
+ associated with this data member.
-.. c:member:: npy_intp *PyArrayObject.strides
+ .. c:member:: npy_intp *strides
- An array of integers providing for each dimension the number of
- bytes that must be skipped to get to the next element in that
- dimension. Associated with macro :c:data:`PyArray_STRIDES`.
+ An array of integers providing for each dimension the number of
+ bytes that must be skipped to get to the next element in that
+ dimension. Associated with macro :c:data:`PyArray_STRIDES`.
-.. c:member:: PyObject *PyArrayObject.base
+ .. c:member:: PyObject *base
- Pointed to by :c:data:`PyArray_BASE`, this member is used to hold a
- pointer to another Python object that is related to this array.
- There are two use cases:
+ Pointed to by :c:data:`PyArray_BASE`, this member is used to hold a
+ pointer to another Python object that is related to this array.
+ There are two use cases:
- - If this array does not own its own memory, then base points to the
- Python object that owns it (perhaps another array object)
- - If this array has the (deprecated) :c:data:`NPY_ARRAY_UPDATEIFCOPY` or
- :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` flag set, then this array is a working
- copy of a "misbehaved" array.
+ - If this array does not own its own memory, then base points to the
+ Python object that owns it (perhaps another array object)
+ - If this array has the (deprecated) :c:data:`NPY_ARRAY_UPDATEIFCOPY` or
+ :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` flag set, then this array is a working
+ copy of a "misbehaved" array.
- When ``PyArray_ResolveWritebackIfCopy`` is called, the array pointed to
- by base will be updated with the contents of this array.
+ When ``PyArray_ResolveWritebackIfCopy`` is called, the array pointed to
+ by base will be updated with the contents of this array.
-.. c:member:: PyArray_Descr *PyArrayObject.descr
+ .. c:member:: PyArray_Descr *descr
- A pointer to a data-type descriptor object (see below). The
- data-type descriptor object is an instance of a new built-in
- type which allows a generic description of memory. There is a
- descriptor structure for each data type supported. This
- descriptor structure contains useful information about the type
- as well as a pointer to a table of function pointers to
- implement specific functionality. As the name suggests, it is
- associated with the macro :c:data:`PyArray_DESCR`.
+ A pointer to a data-type descriptor object (see below). The
+ data-type descriptor object is an instance of a new built-in
+ type which allows a generic description of memory. There is a
+ descriptor structure for each data type supported. This
+ descriptor structure contains useful information about the type
+ as well as a pointer to a table of function pointers to
+ implement specific functionality. As the name suggests, it is
+ associated with the macro :c:data:`PyArray_DESCR`.
-.. c:member:: int PyArrayObject.flags
+ .. c:member:: int flags
- Pointed to by the macro :c:data:`PyArray_FLAGS`, this data member represents
- the flags indicating how the memory pointed to by data is to be
- interpreted. Possible flags are :c:data:`NPY_ARRAY_C_CONTIGUOUS`,
- :c:data:`NPY_ARRAY_F_CONTIGUOUS`, :c:data:`NPY_ARRAY_OWNDATA`,
- :c:data:`NPY_ARRAY_ALIGNED`, :c:data:`NPY_ARRAY_WRITEABLE`,
- :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`, and :c:data:`NPY_ARRAY_UPDATEIFCOPY`.
+ Pointed to by the macro :c:data:`PyArray_FLAGS`, this data member represents
+ the flags indicating how the memory pointed to by data is to be
+ interpreted. Possible flags are :c:data:`NPY_ARRAY_C_CONTIGUOUS`,
+ :c:data:`NPY_ARRAY_F_CONTIGUOUS`, :c:data:`NPY_ARRAY_OWNDATA`,
+ :c:data:`NPY_ARRAY_ALIGNED`, :c:data:`NPY_ARRAY_WRITEABLE`,
+ :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`, and :c:data:`NPY_ARRAY_UPDATEIFCOPY`.
-.. c:member:: PyObject *PyArrayObject.weakreflist
+ .. c:member:: PyObject *weakreflist
- This member allows array objects to have weak references (using the
- weakref module).
+ This member allows array objects to have weak references (using the
+ weakref module).
+
+ .. note::
+
+        Further members are considered private and version dependent. If the size
+ of the struct is important for your code, special care must be taken.
+ A possible use-case when this is relevant is subclassing in C.
+ If your code relies on ``sizeof(PyArrayObject)`` to be constant,
+ you must add the following check at import time:
+
+ .. code-block:: c
+
+ if (sizeof(PyArrayObject) < PyArray_Type.tp_basicsize) {
+ PyErr_SetString(PyExc_ImportError,
+ "Binary incompatibility with NumPy, must recompile/update X.");
+ return NULL;
+ }
+
+ To ensure that your code does not have to be compiled for a specific
+ NumPy version, you may add a constant, leaving room for changes in NumPy.
+ A solution guaranteed to be compatible with any future NumPy version
+        requires the use of a runtime-calculated offset and allocation size.
PyArrayDescr_Type and PyArray_Descr
@@ -226,197 +251,196 @@ PyArrayDescr_Type and PyArray_Descr
npy_hash_t hash;
} PyArray_Descr;
-.. c:member:: PyTypeObject *PyArray_Descr.typeobj
-
- Pointer to a typeobject that is the corresponding Python type for
- the elements of this array. For the builtin types, this points to
- the corresponding array scalar. For user-defined types, this
- should point to a user-defined typeobject. This typeobject can
- either inherit from array scalars or not. If it does not inherit
- from array scalars, then the :c:data:`NPY_USE_GETITEM` and
- :c:data:`NPY_USE_SETITEM` flags should be set in the ``flags`` member.
+ .. c:member:: PyTypeObject *typeobj
-.. c:member:: char PyArray_Descr.kind
+ Pointer to a typeobject that is the corresponding Python type for
+ the elements of this array. For the builtin types, this points to
+ the corresponding array scalar. For user-defined types, this
+ should point to a user-defined typeobject. This typeobject can
+ either inherit from array scalars or not. If it does not inherit
+ from array scalars, then the :c:data:`NPY_USE_GETITEM` and
+ :c:data:`NPY_USE_SETITEM` flags should be set in the ``flags`` member.
- A character code indicating the kind of array (using the array
- interface typestring notation). A 'b' represents Boolean, a 'i'
- represents signed integer, a 'u' represents unsigned integer, 'f'
- represents floating point, 'c' represents complex floating point, 'S'
- represents 8-bit zero-terminated bytes, 'U' represents 32-bit/character
- unicode string, and 'V' represents arbitrary.
+ .. c:member:: char kind
-.. c:member:: char PyArray_Descr.type
+ A character code indicating the kind of array (using the array
+ interface typestring notation). A 'b' represents Boolean, a 'i'
+ represents signed integer, a 'u' represents unsigned integer, 'f'
+ represents floating point, 'c' represents complex floating point, 'S'
+ represents 8-bit zero-terminated bytes, 'U' represents 32-bit/character
+ unicode string, and 'V' represents arbitrary.
- A traditional character code indicating the data type.
+ .. c:member:: char type
-.. c:member:: char PyArray_Descr.byteorder
+ A traditional character code indicating the data type.
- A character indicating the byte-order: '>' (big-endian), '<' (little-
- endian), '=' (native), '\|' (irrelevant, ignore). All builtin data-
- types have byteorder '='.
+ .. c:member:: char byteorder
-.. c:member:: char PyArray_Descr.flags
+ A character indicating the byte-order: '>' (big-endian), '<' (little-
+ endian), '=' (native), '\|' (irrelevant, ignore). All builtin data-
+ types have byteorder '='.
- A data-type bit-flag that determines if the data-type exhibits object-
- array like behavior. Each bit in this member is a flag which are named
- as:
+ .. c:member:: char flags
- .. c:var:: NPY_ITEM_REFCOUNT
+ A data-type bit-flag that determines if the data-type exhibits object-
+ array-like behavior. Each bit in this member is a flag; the flags are
+ named as follows:
- Indicates that items of this data-type must be reference
- counted (using :c:func:`Py_INCREF` and :c:func:`Py_DECREF` ).
+ .. c:macro:: NPY_ITEM_REFCOUNT
- .. c:var:: NPY_ITEM_HASOBJECT
+ Indicates that items of this data-type must be reference
+ counted (using :c:func:`Py_INCREF` and :c:func:`Py_DECREF` ).
- Same as :c:data:`NPY_ITEM_REFCOUNT`.
+ .. c:macro:: NPY_ITEM_HASOBJECT
- .. c:var:: NPY_LIST_PICKLE
+ Same as :c:data:`NPY_ITEM_REFCOUNT`.
- Indicates arrays of this data-type must be converted to a list
- before pickling.
+ .. c:macro:: NPY_LIST_PICKLE
- .. c:var:: NPY_ITEM_IS_POINTER
+ Indicates arrays of this data-type must be converted to a list
+ before pickling.
- Indicates the item is a pointer to some other data-type
+ .. c:macro:: NPY_ITEM_IS_POINTER
- .. c:var:: NPY_NEEDS_INIT
+ Indicates the item is a pointer to some other data-type
- Indicates memory for this data-type must be initialized (set
- to 0) on creation.
+ .. c:macro:: NPY_NEEDS_INIT
- .. c:var:: NPY_NEEDS_PYAPI
+ Indicates memory for this data-type must be initialized (set
+ to 0) on creation.
- Indicates this data-type requires the Python C-API during
- access (so don't give up the GIL if array access is going to
- be needed).
+ .. c:macro:: NPY_NEEDS_PYAPI
- .. c:var:: NPY_USE_GETITEM
+ Indicates this data-type requires the Python C-API during
+ access (so don't give up the GIL if array access is going to
+ be needed).
- On array access use the ``f->getitem`` function pointer
- instead of the standard conversion to an array scalar. Must
- use if you don't define an array scalar to go along with
- the data-type.
+ .. c:macro:: NPY_USE_GETITEM
- .. c:var:: NPY_USE_SETITEM
+ On array access use the ``f->getitem`` function pointer
+ instead of the standard conversion to an array scalar. Must
+ use if you don't define an array scalar to go along with
+ the data-type.
- When creating a 0-d array from an array scalar use
- ``f->setitem`` instead of the standard copy from an array
- scalar. Must use if you don't define an array scalar to go
- along with the data-type.
+ .. c:macro:: NPY_USE_SETITEM
- .. c:var:: NPY_FROM_FIELDS
+ When creating a 0-d array from an array scalar use
+ ``f->setitem`` instead of the standard copy from an array
+ scalar. Must use if you don't define an array scalar to go
+ along with the data-type.
- The bits that are inherited for the parent data-type if these
- bits are set in any field of the data-type. Currently (
- :c:data:`NPY_NEEDS_INIT` \| :c:data:`NPY_LIST_PICKLE` \|
- :c:data:`NPY_ITEM_REFCOUNT` \| :c:data:`NPY_NEEDS_PYAPI` ).
+ .. c:macro:: NPY_FROM_FIELDS
- .. c:var:: NPY_OBJECT_DTYPE_FLAGS
+ The bits that are inherited for the parent data-type if these
+ bits are set in any field of the data-type. Currently (
+ :c:data:`NPY_NEEDS_INIT` \| :c:data:`NPY_LIST_PICKLE` \|
+ :c:data:`NPY_ITEM_REFCOUNT` \| :c:data:`NPY_NEEDS_PYAPI` ).
- Bits set for the object data-type: ( :c:data:`NPY_LIST_PICKLE`
- \| :c:data:`NPY_USE_GETITEM` \| :c:data:`NPY_ITEM_IS_POINTER` \|
- :c:data:`NPY_REFCOUNT` \| :c:data:`NPY_NEEDS_INIT` \|
- :c:data:`NPY_NEEDS_PYAPI`).
+ .. c:macro:: NPY_OBJECT_DTYPE_FLAGS
- .. c:function:: PyDataType_FLAGCHK(PyArray_Descr *dtype, int flags)
+ Bits set for the object data-type: ( :c:data:`NPY_LIST_PICKLE`
+ \| :c:data:`NPY_USE_GETITEM` \| :c:data:`NPY_ITEM_IS_POINTER` \|
+ :c:data:`NPY_ITEM_REFCOUNT` \| :c:data:`NPY_NEEDS_INIT` \|
+ :c:data:`NPY_NEEDS_PYAPI`).
- Return true if all the given flags are set for the data-type
- object.
+ .. c:function:: int PyDataType_FLAGCHK(PyArray_Descr *dtype, int flags)
- .. c:function:: PyDataType_REFCHK(PyArray_Descr *dtype)
+ Return true if all the given flags are set for the data-type
+ object.
- Equivalent to :c:func:`PyDataType_FLAGCHK` (*dtype*,
- :c:data:`NPY_ITEM_REFCOUNT`).
+ .. c:function:: int PyDataType_REFCHK(PyArray_Descr *dtype)
-.. c:member:: int PyArray_Descr.type_num
+ Equivalent to :c:func:`PyDataType_FLAGCHK` (*dtype*,
+ :c:data:`NPY_ITEM_REFCOUNT`).
- A number that uniquely identifies the data type. For new data-types,
- this number is assigned when the data-type is registered.
+ .. c:member:: int type_num
-.. c:member:: int PyArray_Descr.elsize
+ A number that uniquely identifies the data type. For new data-types,
+ this number is assigned when the data-type is registered.
- For data types that are always the same size (such as long), this
- holds the size of the data type. For flexible data types where
- different arrays can have a different elementsize, this should be
- 0.
+ .. c:member:: int elsize
-.. c:member:: int PyArray_Descr.alignment
+ For data types that are always the same size (such as long), this
+ holds the size of the data type. For flexible data types where
+ different arrays can have a different elementsize, this should be
+ 0.
- A number providing alignment information for this data type.
- Specifically, it shows how far from the start of a 2-element
- structure (whose first element is a ``char`` ), the compiler
- places an item of this type: ``offsetof(struct {char c; type v;},
- v)``
+ .. c:member:: int alignment
-.. c:member:: PyArray_ArrayDescr *PyArray_Descr.subarray
+ A number providing alignment information for this data type.
+ Specifically, it shows how far from the start of a 2-element
+ structure (whose first element is a ``char`` ), the compiler
+ places an item of this type: ``offsetof(struct {char c; type v;},
+ v)``
- If this is non- ``NULL``, then this data-type descriptor is a
- C-style contiguous array of another data-type descriptor. In
- other-words, each element that this descriptor describes is
- actually an array of some other base descriptor. This is most
- useful as the data-type descriptor for a field in another
- data-type descriptor. The fields member should be ``NULL`` if this
- is non- ``NULL`` (the fields member of the base descriptor can be
- non- ``NULL`` however). The :c:type:`PyArray_ArrayDescr` structure is
- defined using
+ .. c:member:: PyArray_ArrayDescr *subarray
- .. code-block:: c
+ If this is non- ``NULL``, then this data-type descriptor is a
+ C-style contiguous array of another data-type descriptor. In
+ other words, each element that this descriptor describes is
+ actually an array of some other base descriptor. This is most
+ useful as the data-type descriptor for a field in another
+ data-type descriptor. The fields member should be ``NULL`` if this
+ is non- ``NULL`` (the fields member of the base descriptor can be
+ non- ``NULL`` however).
- typedef struct {
- PyArray_Descr *base;
- PyObject *shape;
- } PyArray_ArrayDescr;
+ .. c:type:: PyArray_ArrayDescr
- The elements of this structure are:
+ .. code-block:: c
- .. c:member:: PyArray_Descr *PyArray_ArrayDescr.base
+ typedef struct {
+ PyArray_Descr *base;
+ PyObject *shape;
+ } PyArray_ArrayDescr;
- The data-type-descriptor object of the base-type.
+ .. c:member:: PyArray_Descr *base
- .. c:member:: PyObject *PyArray_ArrayDescr.shape
+ The data-type-descriptor object of the base-type.
- The shape (always C-style contiguous) of the sub-array as a Python
- tuple.
+ .. c:member:: PyObject *shape
+ The shape (always C-style contiguous) of the sub-array as a Python
+ tuple.
-.. c:member:: PyObject *PyArray_Descr.fields
+ .. c:member:: PyObject *fields
- If this is non-NULL, then this data-type-descriptor has fields
- described by a Python dictionary whose keys are names (and also
- titles if given) and whose values are tuples that describe the
- fields. Recall that a data-type-descriptor always describes a
- fixed-length set of bytes. A field is a named sub-region of that
- total, fixed-length collection. A field is described by a tuple
- composed of another data- type-descriptor and a byte
- offset. Optionally, the tuple may contain a title which is
- normally a Python string. These tuples are placed in this
- dictionary keyed by name (and also title if given).
+ If this is non-NULL, then this data-type-descriptor has fields
+ described by a Python dictionary whose keys are names (and also
+ titles if given) and whose values are tuples that describe the
+ fields. Recall that a data-type-descriptor always describes a
+ fixed-length set of bytes. A field is a named sub-region of that
+ total, fixed-length collection. A field is described by a tuple
+ composed of another data-type-descriptor and a byte
+ offset. Optionally, the tuple may contain a title which is
+ normally a Python string. These tuples are placed in this
+ dictionary keyed by name (and also title if given).
-.. c:member:: PyObject *PyArray_Descr.names
+ .. c:member:: PyObject *names
- An ordered tuple of field names. It is NULL if no field is
- defined.
+ An ordered tuple of field names. It is NULL if no field is
+ defined.
-.. c:member:: PyArray_ArrFuncs *PyArray_Descr.f
+ .. c:member:: PyArray_ArrFuncs *f
- A pointer to a structure containing functions that the type needs
- to implement internal features. These functions are not the same
- thing as the universal functions (ufuncs) described later. Their
- signatures can vary arbitrarily.
+ A pointer to a structure containing functions that the type needs
+ to implement internal features. These functions are not the same
+ thing as the universal functions (ufuncs) described later. Their
+ signatures can vary arbitrarily.
-.. c:member:: PyObject *PyArray_Descr.metadata
+ .. c:member:: PyObject *metadata
- Metadata about this dtype.
+ Metadata about this dtype.
-.. c:member:: NpyAuxData *PyArray_Descr.c_metadata
+ .. c:member:: NpyAuxData *c_metadata
- Metadata specific to the C implementation
- of the particular dtype. Added for NumPy 1.7.0.
+ Metadata specific to the C implementation
+ of the particular dtype. Added for NumPy 1.7.0.
-.. c:member:: Npy_hash_t *PyArray_Descr.hash
+ .. c:type:: npy_hash_t
+ .. c:member:: npy_hash_t *hash
- Currently unused. Reserved for future use in caching
- hash values.
+ Currently unused. Reserved for future use in caching
+ hash values.
.. c:type:: PyArray_ArrFuncs
@@ -568,7 +592,7 @@ PyArrayDescr_Type and PyArray_Descr
This function should be called without holding the Python GIL, and
has to grab it for error reporting.
- .. c:member:: Bool nonzero(void* data, void* arr)
+ .. c:member:: npy_bool nonzero(void* data, void* arr)
A pointer to a function that returns TRUE if the item of
``arr`` pointed to by ``data`` is nonzero. This function can
@@ -612,7 +636,8 @@ PyArrayDescr_Type and PyArray_Descr
Either ``NULL`` or a dictionary containing low-level casting
functions for user- defined data-types. Each function is
- wrapped in a :c:type:`PyCObject *` and keyed by the data-type number.
+ wrapped in a :c:type:`PyCapsule *<PyCapsule>` and keyed by
+ the data-type number.
.. c:member:: NPY_SCALARKIND scalarkind(PyArrayObject* arr)
@@ -791,35 +816,37 @@ PyUFunc_Type and PyUFuncObject
npy_uint32 *iter_flags;
/* new in API version 0x0000000D */
npy_intp *core_dim_sizes;
- npy_intp *core_dim_flags;
-
+ npy_uint32 *core_dim_flags;
+ PyObject *identity_value;
} PyUFuncObject;
- .. c:macro: PyUFuncObject.PyObject_HEAD
+ .. c:macro: PyObject_HEAD
required for all Python objects.
- .. c:member:: int PyUFuncObject.nin
+ .. c:member:: int nin
The number of input arguments.
- .. c:member:: int PyUFuncObject.nout
+ .. c:member:: int nout
The number of output arguments.
- .. c:member:: int PyUFuncObject.nargs
+ .. c:member:: int nargs
The total number of arguments (*nin* + *nout*). This must be
less than :c:data:`NPY_MAXARGS`.
- .. c:member:: int PyUFuncObject.identity
+ .. c:member:: int identity
Either :c:data:`PyUFunc_One`, :c:data:`PyUFunc_Zero`,
- :c:data:`PyUFunc_None` or :c:data:`PyUFunc_AllOnes` to indicate
+ :c:data:`PyUFunc_MinusOne`, :c:data:`PyUFunc_None`,
+ :c:data:`PyUFunc_ReorderableNone`, or
+ :c:data:`PyUFunc_IdentityValue` to indicate
the identity for this operation. It is only used for a
reduce-like call on an empty array.
- .. c:member:: void PyUFuncObject.functions( \
+ .. c:member:: void functions( \
char** args, npy_intp* dims, npy_intp* steps, void* extradata)
An array of function pointers --- one for each data type
@@ -837,7 +864,7 @@ PyUFunc_Type and PyUFuncObject
passed in as *extradata*. The size of this function pointer
array is ntypes.
- .. c:member:: void **PyUFuncObject.data
+ .. c:member:: void **data
Extra data to be passed to the 1-d vector loops or ``NULL`` if
no extra-data is needed. This C-array must be the same size (
@@ -846,22 +873,22 @@ PyUFunc_Type and PyUFuncObject
just 1-d vector loops that make use of this extra data to
receive a pointer to the actual function to call.
- .. c:member:: int PyUFuncObject.ntypes
+ .. c:member:: int ntypes
The number of supported data types for the ufunc. This number
specifies how many different 1-d loops (of the builtin data
types) are available.
- .. c:member:: int PyUFuncObject.reserved1
+ .. c:member:: int reserved1
Unused.
- .. c:member:: char *PyUFuncObject.name
+ .. c:member:: char *name
A string name for the ufunc. This is used dynamically to build
the __doc\__ attribute of ufuncs.
- .. c:member:: char *PyUFuncObject.types
+ .. c:member:: char *types
An array of :math:`nargs \times ntypes` 8-bit type_numbers
which contains the type signature for the function for each of
@@ -871,24 +898,24 @@ PyUFunc_Type and PyUFuncObject
vector loop. These type numbers do not have to be the same type
and mixed-type ufuncs are supported.
- .. c:member:: char *PyUFuncObject.doc
+ .. c:member:: char *doc
Documentation for the ufunc. Should not contain the function
signature as this is generated dynamically when __doc\__ is
retrieved.
- .. c:member:: void *PyUFuncObject.ptr
+ .. c:member:: void *ptr
Any dynamically allocated memory. Currently, this is used for
dynamic ufuncs created from a python function to store room for
the types, data, and name members.
- .. c:member:: PyObject *PyUFuncObject.obj
+ .. c:member:: PyObject *obj
For ufuncs dynamically created from python functions, this member
holds a reference to the underlying Python function.
- .. c:member:: PyObject *PyUFuncObject.userloops
+ .. c:member:: PyObject *userloops
A dictionary of user-defined 1-d vector loops (stored as CObject
ptrs) for user-defined types. A loop may be registered by the
@@ -896,74 +923,85 @@ PyUFunc_Type and PyUFuncObject
User defined type numbers are always larger than
:c:data:`NPY_USERDEF`.
- .. c:member:: int PyUFuncObject.core_enabled
+ .. c:member:: int core_enabled
0 for scalar ufuncs; 1 for generalized ufuncs
- .. c:member:: int PyUFuncObject.core_num_dim_ix
+ .. c:member:: int core_num_dim_ix
Number of distinct core dimension names in the signature
- .. c:member:: int *PyUFuncObject.core_num_dims
+ .. c:member:: int *core_num_dims
Number of core dimensions of each argument
- .. c:member:: int *PyUFuncObject.core_dim_ixs
+ .. c:member:: int *core_dim_ixs
Dimension indices in a flattened form; indices of argument ``k`` are
stored in ``core_dim_ixs[core_offsets[k] : core_offsets[k] +
core_numdims[k]]``
- .. c:member:: int *PyUFuncObject.core_offsets
+ .. c:member:: int *core_offsets
Position of 1st core dimension of each argument in ``core_dim_ixs``,
equivalent to cumsum(``core_num_dims``)
- .. c:member:: char *PyUFuncObject.core_signature
+ .. c:member:: char *core_signature
Core signature string
- .. c:member:: PyUFunc_TypeResolutionFunc *PyUFuncObject.type_resolver
+ .. c:member:: PyUFunc_TypeResolutionFunc *type_resolver
A function which resolves the types and fills an array with the dtypes
for the inputs and outputs
- .. c:member:: PyUFunc_LegacyInnerLoopSelectionFunc *PyUFuncObject.legacy_inner_loop_selector
+ .. c:member:: PyUFunc_LegacyInnerLoopSelectionFunc *legacy_inner_loop_selector
A function which returns an inner loop. The ``legacy`` in the name arises
because for NumPy 1.6 a better variant had been planned. This variant
has not yet come about.
- .. c:member:: void *PyUFuncObject.reserved2
+ .. c:member:: void *reserved2
For a possible future loop selector with a different signature.
- .. c:member:: PyUFunc_MaskedInnerLoopSelectionFunc *PyUFuncObject.masked_inner_loop_selector
+ .. c:member:: PyUFunc_MaskedInnerLoopSelectionFunc *masked_inner_loop_selector
Function which returns a masked inner loop for the ufunc
- .. c:member:: npy_uint32 PyUFuncObject.op_flags
+ .. c:member:: npy_uint32 op_flags
Override the default operand flags for each ufunc operand.
- .. c:member:: npy_uint32 PyUFuncObject.iter_flags
+ .. c:member:: npy_uint32 iter_flags
Override the default nditer flags for the ufunc.
Added in API version 0x0000000D
- .. c:member:: npy_intp *PyUFuncObject.core_dim_sizes
+ .. c:member:: npy_intp *core_dim_sizes
For each distinct core dimension, the possible
- :ref:`frozen <frozen>` size if :c:data:`UFUNC_CORE_DIM_SIZE_INFERRED` is 0
+ :ref:`frozen <frozen>` size if
+ :c:data:`UFUNC_CORE_DIM_SIZE_INFERRED` is ``0``
- .. c:member:: npy_uint32 *PyUFuncObject.core_dim_flags
+ .. c:member:: npy_uint32 *core_dim_flags
For each distinct core dimension, a set of ``UFUNC_CORE_DIM*`` flags
- - :c:data:`UFUNC_CORE_DIM_CAN_IGNORE` if the dim name ends in ``?``
- - :c:data:`UFUNC_CORE_DIM_SIZE_INFERRED` if the dim size will be
- determined from the operands and not from a :ref:`frozen <frozen>` signature
+ .. c:macro:: UFUNC_CORE_DIM_CAN_IGNORE
+
+ if the dim name ends in ``?``
+
+ .. c:macro:: UFUNC_CORE_DIM_SIZE_INFERRED
+
+ if the dim size will be determined from the operands
+ and not from a :ref:`frozen <frozen>` signature
+
+ .. c:member:: PyObject *identity_value
+
+ Identity for reduction, when :c:member:`PyUFuncObject.identity`
+ is equal to :c:data:`PyUFunc_IdentityValue`.
PyArrayIter_Type and PyArrayIterObject
--------------------------------------
@@ -1009,57 +1047,57 @@ PyArrayIter_Type and PyArrayIterObject
npy_intp factors[NPY_MAXDIMS];
PyArrayObject *ao;
char *dataptr;
- Bool contiguous;
+ npy_bool contiguous;
} PyArrayIterObject;
- .. c:member:: int PyArrayIterObject.nd_m1
+ .. c:member:: int nd_m1
:math:`N-1` where :math:`N` is the number of dimensions in the
underlying array.
- .. c:member:: npy_intp PyArrayIterObject.index
+ .. c:member:: npy_intp index
The current 1-d index into the array.
- .. c:member:: npy_intp PyArrayIterObject.size
+ .. c:member:: npy_intp size
The total size of the underlying array.
- .. c:member:: npy_intp *PyArrayIterObject.coordinates
+ .. c:member:: npy_intp *coordinates
An :math:`N` -dimensional index into the array.
- .. c:member:: npy_intp *PyArrayIterObject.dims_m1
+ .. c:member:: npy_intp *dims_m1
The size of the array minus 1 in each dimension.
- .. c:member:: npy_intp *PyArrayIterObject.strides
+ .. c:member:: npy_intp *strides
The strides of the array. How many bytes needed to jump to the next
element in each dimension.
- .. c:member:: npy_intp *PyArrayIterObject.backstrides
+ .. c:member:: npy_intp *backstrides
How many bytes needed to jump from the end of a dimension back
to its beginning. Note that ``backstrides[k] == strides[k] *
dims_m1[k]``, but it is stored here as an optimization.
- .. c:member:: npy_intp *PyArrayIterObject.factors
+ .. c:member:: npy_intp *factors
This array is used in computing an N-d index from a 1-d index. It
contains needed products of the dimensions.
- .. c:member:: PyArrayObject *PyArrayIterObject.ao
+ .. c:member:: PyArrayObject *ao
A pointer to the underlying ndarray this iterator was created to
represent.
- .. c:member:: char *PyArrayIterObject.dataptr
+ .. c:member:: char *dataptr
This member points to an element in the ndarray indicated by the
index.
- .. c:member:: Bool PyArrayIterObject.contiguous
+ .. c:member:: npy_bool contiguous
This flag is true if the underlying array is
:c:data:`NPY_ARRAY_C_CONTIGUOUS`. It is used to simplify
@@ -1106,32 +1144,32 @@ PyArrayMultiIter_Type and PyArrayMultiIterObject
PyArrayIterObject *iters[NPY_MAXDIMS];
} PyArrayMultiIterObject;
- .. c:macro: PyArrayMultiIterObject.PyObject_HEAD
+ .. c:macro: PyObject_HEAD
Needed at the start of every Python object (holds reference count
and type identification).
- .. c:member:: int PyArrayMultiIterObject.numiter
+ .. c:member:: int numiter
The number of arrays that need to be broadcast to the same shape.
- .. c:member:: npy_intp PyArrayMultiIterObject.size
+ .. c:member:: npy_intp size
The total broadcasted size.
- .. c:member:: npy_intp PyArrayMultiIterObject.index
+ .. c:member:: npy_intp index
The current (1-d) index into the broadcasted result.
- .. c:member:: int PyArrayMultiIterObject.nd
+ .. c:member:: int nd
The number of dimensions in the broadcasted result.
- .. c:member:: npy_intp *PyArrayMultiIterObject.dimensions
+ .. c:member:: npy_intp *dimensions
The shape of the broadcasted result (only ``nd`` slots are used).
- .. c:member:: PyArrayIterObject **PyArrayMultiIterObject.iters
+ .. c:member:: PyArrayIterObject **iters
An array of iterator objects that holds the iterators for the
arrays to be broadcast together. On return, the iterators are
@@ -1204,7 +1242,7 @@ ScalarArrayTypes
There is a Python type for each of the different built-in data types
that can be present in the array Most of these are simple wrappers
around the corresponding data type in C. The C-names for these types
-are :c:data:`Py{TYPE}ArrType_Type` where ``{TYPE}`` can be
+are ``Py{TYPE}ArrType_Type`` where ``{TYPE}`` can be
**Bool**, **Byte**, **Short**, **Int**, **Long**, **LongLong**,
**UByte**, **UShort**, **UInt**, **ULong**, **ULongLong**,
@@ -1213,8 +1251,8 @@ are :c:data:`Py{TYPE}ArrType_Type` where ``{TYPE}`` can be
**Object**.
These type names are part of the C-API and can therefore be created in
-extension C-code. There is also a :c:data:`PyIntpArrType_Type` and a
-:c:data:`PyUIntpArrType_Type` that are simple substitutes for one of the
+extension C-code. There is also a ``PyIntpArrType_Type`` and a
+``PyUIntpArrType_Type`` that are simple substitutes for one of the
integer types that can hold a pointer on the platform. The structure
of these scalar objects is not exposed to C-code. The function
:c:func:`PyArray_ScalarAsCtype` (..) can be used to extract the C-type
@@ -1249,12 +1287,12 @@ PyArray_Dims
The members of this structure are
- .. c:member:: npy_intp *PyArray_Dims.ptr
+ .. c:member:: npy_intp *ptr
A pointer to a list of (:c:type:`npy_intp`) integers which
usually represent array shape or array strides.
- .. c:member:: int PyArray_Dims.len
+ .. c:member:: int len
The length of the list of integers. It is assumed safe to
access *ptr* [0] to *ptr* [len-1].
@@ -1283,26 +1321,26 @@ PyArray_Chunk
The members are
- .. c:macro: PyArray_Chunk.PyObject_HEAD
+ .. c:macro: PyObject_HEAD
Necessary for all Python objects. Included here so that the
:c:type:`PyArray_Chunk` structure matches that of the buffer object
(at least to the len member).
- .. c:member:: PyObject *PyArray_Chunk.base
+ .. c:member:: PyObject *base
The Python object this chunk of memory comes from. Needed so that
memory can be accounted for properly.
- .. c:member:: void *PyArray_Chunk.ptr
+ .. c:member:: void *ptr
A pointer to the start of the single-segment chunk of memory.
- .. c:member:: npy_intp PyArray_Chunk.len
+ .. c:member:: npy_intp len
The length of the segment in bytes.
- .. c:member:: int PyArray_Chunk.flags
+ .. c:member:: int flags
Any data flags (*e.g.* :c:data:`NPY_ARRAY_WRITEABLE` ) that should
be used to interpret the memory.
@@ -1317,13 +1355,13 @@ PyArrayInterface
The :c:type:`PyArrayInterface` structure is defined so that NumPy and
other extension modules can use the rapid array interface
- protocol. The :obj:`__array_struct__` method of an object that
+ protocol. The :obj:`~object.__array_struct__` method of an object that
supports the rapid array interface protocol should return a
- :c:type:`PyCObject` that contains a pointer to a :c:type:`PyArrayInterface`
+ :c:type:`PyCapsule` that contains a pointer to a :c:type:`PyArrayInterface`
structure with the relevant details of the array. After the new
array is created, the attribute should be ``DECREF``'d which will
free the :c:type:`PyArrayInterface` structure. Remember to ``INCREF`` the
- object (whose :obj:`__array_struct__` attribute was retrieved) and
+ object (whose :obj:`~object.__array_struct__` attribute was retrieved) and
point the base member of the new :c:type:`PyArrayObject` to this same
object. In this way the memory for the array will be managed
correctly.
@@ -1342,15 +1380,15 @@ PyArrayInterface
PyObject *descr;
} PyArrayInterface;
- .. c:member:: int PyArrayInterface.two
+ .. c:member:: int two
the integer 2 as a sanity check.
- .. c:member:: int PyArrayInterface.nd
+ .. c:member:: int nd
the number of dimensions in the array.
- .. c:member:: char PyArrayInterface.typekind
+ .. c:member:: char typekind
A character indicating what kind of array is present according to the
typestring convention with 't' -> bitfield, 'b' -> Boolean, 'i' ->
@@ -1358,11 +1396,11 @@ PyArrayInterface
complex floating point, 'O' -> object, 'S' -> (byte-)string, 'U' ->
unicode, 'V' -> void.
- .. c:member:: int PyArrayInterface.itemsize
+ .. c:member:: int itemsize
The number of bytes each item in the array requires.
- .. c:member:: int PyArrayInterface.flags
+ .. c:member:: int flags
Any of the bits :c:data:`NPY_ARRAY_C_CONTIGUOUS` (1),
:c:data:`NPY_ARRAY_F_CONTIGUOUS` (2), :c:data:`NPY_ARRAY_ALIGNED` (0x100),
@@ -1376,26 +1414,26 @@ PyArrayInterface
structure is present (it will be ignored by objects consuming
version 2 of the array interface).
- .. c:member:: npy_intp *PyArrayInterface.shape
+ .. c:member:: npy_intp *shape
An array containing the size of the array in each dimension.
- .. c:member:: npy_intp *PyArrayInterface.strides
+ .. c:member:: npy_intp *strides
An array containing the number of bytes to jump to get to the next
element in each dimension.
- .. c:member:: void *PyArrayInterface.data
+ .. c:member:: void *data
A pointer *to* the first element of the array.
- .. c:member:: PyObject *PyArrayInterface.descr
+ .. c:member:: PyObject *descr
A Python object describing the data-type in more detail (same
- as the *descr* key in :obj:`__array_interface__`). This can be
+ as the *descr* key in :obj:`~object.__array_interface__`). This can be
``NULL`` if *typekind* and *itemsize* provide enough
information. This field is also ignored unless
- :c:data:`ARR_HAS_DESCR` flag is on in *flags*.
+ :c:data:`NPY_ARR_HAS_DESCR` flag is on in *flags*.
Internally used structures
@@ -1433,7 +1471,7 @@ for completeness and assistance in understanding the code.
Advanced indexing is handled with this Python type. It is simply a
loose wrapper around the C-structure containing the variables
needed for advanced array indexing. The associated C-structure,
- :c:type:`PyArrayMapIterObject`, is useful if you are trying to
+ ``PyArrayMapIterObject``, is useful if you are trying to
understand the advanced-index mapping code. It is defined in the
``arrayobject.h`` header. This type is not exposed to Python and
could be replaced with a C-structure. As a Python type it takes
diff --git a/doc/source/reference/c-api/ufunc.rst b/doc/source/reference/c-api/ufunc.rst
index 16ddde58c..9eb70c3fb 100644
--- a/doc/source/reference/c-api/ufunc.rst
+++ b/doc/source/reference/c-api/ufunc.rst
@@ -10,28 +10,52 @@ UFunc API
Constants
---------
-.. c:var:: UFUNC_ERR_{HANDLER}
+``UFUNC_ERR_{HANDLER}``
+ .. c:macro:: UFUNC_ERR_IGNORE
- ``{HANDLER}`` can be **IGNORE**, **WARN**, **RAISE**, or **CALL**
+ .. c:macro:: UFUNC_ERR_WARN
-.. c:var:: UFUNC_{THING}_{ERR}
+ .. c:macro:: UFUNC_ERR_RAISE
- ``{THING}`` can be **MASK**, **SHIFT**, or **FPE**, and ``{ERR}`` can
- be **DIVIDEBYZERO**, **OVERFLOW**, **UNDERFLOW**, and **INVALID**.
+ .. c:macro:: UFUNC_ERR_CALL
-.. c:var:: PyUFunc_{VALUE}
+``UFUNC_{THING}_{ERR}``
+ .. c:macro:: UFUNC_MASK_DIVIDEBYZERO
- .. c:var:: PyUFunc_One
+ .. c:macro:: UFUNC_MASK_OVERFLOW
- .. c:var:: PyUFunc_Zero
+ .. c:macro:: UFUNC_MASK_UNDERFLOW
- .. c:var:: PyUFunc_MinusOne
+ .. c:macro:: UFUNC_MASK_INVALID
- .. c:var:: PyUFunc_ReorderableNone
+ .. c:macro:: UFUNC_SHIFT_DIVIDEBYZERO
- .. c:var:: PyUFunc_None
+ .. c:macro:: UFUNC_SHIFT_OVERFLOW
- .. c:var:: PyUFunc_IdentityValue
+ .. c:macro:: UFUNC_SHIFT_UNDERFLOW
+
+ .. c:macro:: UFUNC_SHIFT_INVALID
+
+ .. c:macro:: UFUNC_FPE_DIVIDEBYZERO
+
+ .. c:macro:: UFUNC_FPE_OVERFLOW
+
+ .. c:macro:: UFUNC_FPE_UNDERFLOW
+
+ .. c:macro:: UFUNC_FPE_INVALID
+
+``PyUFunc_{VALUE}``
+ .. c:macro:: PyUFunc_One
+
+ .. c:macro:: PyUFunc_Zero
+
+ .. c:macro:: PyUFunc_MinusOne
+
+ .. c:macro:: PyUFunc_ReorderableNone
+
+ .. c:macro:: PyUFunc_None
+
+ .. c:macro:: PyUFunc_IdentityValue
Macros
@@ -50,6 +74,66 @@ Macros
was released (because loop->obj was not true).
+Types
+-----
+
+.. c:type:: PyUFuncGenericFunction
+
+ pointers to functions that actually implement the underlying
+ (element-by-element) function :math:`N` times with the following
+ signature:
+
+ .. c:function:: void loopfunc(\
+ char** args, npy_intp const *dimensions, npy_intp const *steps, void* data)
+
+ *args*
+
+ An array of pointers to the actual data for the input and output
+ arrays. The input arguments are given first followed by the output
+ arguments.
+
+ *dimensions*
+
+ A pointer to the size of the dimension over which this function is
+ looping.
+
+ *steps*
+
+ A pointer to the number of bytes to jump to get to the
+ next element in this dimension for each of the input and
+ output arguments.
+
+ *data*
+
+ Arbitrary data (extra arguments, function names, *etc.* )
+ that can be stored with the ufunc and will be passed in
+ when it is called.
+
+ This is an example of a func specialized for addition of doubles
+ returning doubles.
+
+ .. code-block:: c
+
+ static void
+ double_add(char **args,
+ npy_intp const *dimensions,
+ npy_intp const *steps,
+ void *extra)
+ {
+ npy_intp i;
+ npy_intp is1 = steps[0], is2 = steps[1];
+ npy_intp os = steps[2], n = dimensions[0];
+ char *i1 = args[0], *i2 = args[1], *op = args[2];
+ for (i = 0; i < n; i++) {
+ *((double *)op) = *((double *)i1) +
+ *((double *)i2);
+ i1 += is1;
+ i2 += is2;
+ op += os;
+ }
+ }
+
+
Functions
---------
@@ -71,60 +155,7 @@ Functions
:param func:
Must to an array of length *ntypes* containing
- :c:type:`PyUFuncGenericFunction` items. These items are pointers to
- functions that actually implement the underlying
- (element-by-element) function :math:`N` times with the following
- signature:
-
- .. c:function:: void loopfunc(
- char** args, npy_intp const *dimensions, npy_intp const *steps, void* data)
-
- *args*
-
- An array of pointers to the actual data for the input and output
- arrays. The input arguments are given first followed by the output
- arguments.
-
- *dimensions*
-
- A pointer to the size of the dimension over which this function is
- looping.
-
- *steps*
-
- A pointer to the number of bytes to jump to get to the
- next element in this dimension for each of the input and
- output arguments.
-
- *data*
-
- Arbitrary data (extra arguments, function names, *etc.* )
- that can be stored with the ufunc and will be passed in
- when it is called.
-
- This is an example of a func specialized for addition of doubles
- returning doubles.
-
- .. code-block:: c
-
- static void
- double_add(char **args,
- npy_intp const *dimensions,
- npy_intp const *steps,
- void *extra)
- {
- npy_intp i;
- npy_intp is1 = steps[0], is2 = steps[1];
- npy_intp os = steps[2], n = dimensions[0];
- char *i1 = args[0], *i2 = args[1], *op = args[2];
- for (i = 0; i < n; i++) {
- *((double *)op) = *((double *)i1) +
- *((double *)i2);
- i1 += is1;
- i2 += is2;
- op += os;
- }
- }
+ :c:type:`PyUFuncGenericFunction` items.
:param data:
Should be ``NULL`` or a pointer to an array of size *ntypes*
@@ -269,7 +300,7 @@ Functions
.. c:function:: int PyUFunc_checkfperr(int errmask, PyObject* errobj)
A simple interface to the IEEE error-flag checking support. The
- *errmask* argument is a mask of :c:data:`UFUNC_MASK_{ERR}` bitmasks
+ *errmask* argument is a mask of ``UFUNC_MASK_{ERR}`` bitmasks
indicating which errors to check for (and how to check for
them). The *errobj* must be a Python tuple with two elements: a
string containing the name which will be used in any communication
@@ -459,9 +490,9 @@ structure.
Importing the API
-----------------
-.. c:var:: PY_UFUNC_UNIQUE_SYMBOL
+.. c:macro:: PY_UFUNC_UNIQUE_SYMBOL
-.. c:var:: NO_IMPORT_UFUNC
+.. c:macro:: NO_IMPORT_UFUNC
.. c:function:: void import_ufunc(void)
diff --git a/doc/source/reference/global_state.rst b/doc/source/reference/global_state.rst
index 7bf9310e8..b59467210 100644
--- a/doc/source/reference/global_state.rst
+++ b/doc/source/reference/global_state.rst
@@ -83,3 +83,18 @@ in C which iterates through arrays that may or may not be
contiguous in memory.
Most users will have no reason to change these; for details
see the :ref:`memory layout <memory-layout>` documentation.
+
+Using the new casting implementation
+------------------------------------
+
+Within NumPy 1.20 it is possible to enable the new experimental casting
+implementation for testing purposes. To do this set::
+
+ NPY_USE_NEW_CASTINGIMPL=1
+
+Setting the flag is only useful to aid with NumPy development and to ensure the
+new version is bug free; it should be avoided for production code.
+It is a helpful test for projects that either create custom datatypes or
+use, for example, complicated structured dtypes. The flag is expected to be
+removed in 1.21, with the new version then always in use.
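+
+A minimal sketch of enabling the flag from within Python (assuming, as for
+other import-time environment variables, that it is set before NumPy is first
+imported)::
+
+    import os
+    os.environ["NPY_USE_NEW_CASTINGIMPL"] = "1"  # must be set before importing NumPy
+    import numpy as np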
+
diff --git a/doc/source/reference/internals.code-explanations.rst b/doc/source/reference/internals.code-explanations.rst
index 65553e07e..e8e428f2e 100644
--- a/doc/source/reference/internals.code-explanations.rst
+++ b/doc/source/reference/internals.code-explanations.rst
@@ -147,7 +147,8 @@ an iterator for each of the arrays being broadcast.
The :c:func:`PyArray_Broadcast` function takes the iterators that have already
been defined and uses them to determine the broadcast shape in each
dimension (to create the iterators at the same time that broadcasting
-occurs then use the :c:func:`PyMultiIter_New` function). Then, the iterators are
+occurs then use the :c:func:`PyArray_MultiIterNew` function).
+Then, the iterators are
adjusted so that each iterator thinks it is iterating over an array
with the broadcast size. This is done by adjusting the iterators
number of dimensions, and the shape in each dimension. This works
@@ -162,7 +163,7 @@ for the extended dimensions. It is done in exactly the same way in
NumPy. The big difference is that now the array of strides is kept
track of in a :c:type:`PyArrayIterObject`, the iterators involved in a
broadcast result are kept track of in a :c:type:`PyArrayMultiIterObject`,
-and the :c:func:`PyArray_BroadCast` call implements the broad-casting rules.
+and the :c:func:`PyArray_Broadcast` call implements the broad-casting rules.
Array Scalars
@@ -368,7 +369,7 @@ The output arguments (if any) are then processed and any missing
return arrays are constructed. If any provided output array doesn't
have the correct type (or is mis-aligned) and is smaller than the
buffer size, then a new output array is constructed with the special
-:c:data:`WRITEBACKIFCOPY` flag set. At the end of the function,
+:c:data:`NPY_ARRAY_WRITEBACKIFCOPY` flag set. At the end of the function,
:c:func:`PyArray_ResolveWritebackIfCopy` is called so that
its contents will be copied back into the output array.
Iterators for the output arguments are then processed.
diff --git a/doc/source/reference/internals.rst b/doc/source/reference/internals.rst
index aacfabcd3..ed8042c08 100644
--- a/doc/source/reference/internals.rst
+++ b/doc/source/reference/internals.rst
@@ -9,4 +9,160 @@ NumPy internals
internals.code-explanations
alignment
-.. automodule:: numpy.doc.internals
+Internal organization of numpy arrays
+=====================================
+
+It helps to understand a bit about how numpy arrays are handled under the covers in order to understand numpy better. This section will not go into great detail; those wishing to understand the full details are referred to Travis Oliphant's book "Guide to NumPy".
+
+NumPy arrays consist of two major components, the raw array data (from now on,
+referred to as the data buffer), and the information about the raw array data.
+The data buffer is typically what people think of as arrays in C or Fortran,
+a contiguous (and fixed) block of memory containing fixed sized data items.
+NumPy also contains a significant set of data that describes how to interpret
+the data in the data buffer. This extra information contains (among other things):
+
+ 1) The basic data element's size in bytes
+ 2) The start of the data within the data buffer (an offset relative to the
+ beginning of the data buffer).
+ 3) The number of dimensions and the size of each dimension
+ 4) The separation between elements for each dimension (the 'stride'). This
+ does not have to be a multiple of the element size
+ 5) The byte order of the data (which may not be the native byte order)
+ 6) Whether the buffer is read-only
+ 7) Information (via the dtype object) about the interpretation of the basic
+ data element. The basic data element may be as simple as an int or a float,
+ or it may be a compound object (e.g., struct-like), a fixed character field,
+ or Python object pointers.
+ 8) Whether the array is to be interpreted as C-order or Fortran-order.
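+
+Much of this extra information is visible from Python as ordinary ndarray
+attributes; a small illustration (not an exhaustive mapping):
+
+ >>> a = np.arange(12, dtype=np.int32).reshape(3, 4)
+ >>> a.itemsize           # size of one data element in bytes
+ 4
+ >>> a.shape, a.strides   # dimensions, and separation between elements in bytes
+ ((3, 4), (16, 4))
+ >>> a.dtype.byteorder    # '=' means native byte order
+ '='
+ >>> a.flags.writeable    # whether the buffer may be modified
+ True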
+
+This arrangement allows for very flexible use of arrays. One thing that it
+allows is simple changes of the metadata to change the interpretation of the
+array buffer. Changing the byteorder of the array is a simple change involving
+no rearrangement of the data. The shape of the array can be changed very easily
+without changing anything in the data buffer or any data copying at all.
+
+Among other things that this makes possible, one can create a new array
+metadata object that uses the same data buffer to create a new view of that
+data buffer with a different interpretation of the buffer (e.g., different
+shape, offset, byte order, strides, etc.) but that shares the same data bytes.
+Many operations in numpy do just this, such as slicing. Other operations, such
+as transpose, don't move data elements around in the array, but rather change
+the information about the shape and strides so that the indexing of the array
+changes, but the data in the buffer doesn't move.
+
+Typically these new arrays, which reuse the original data buffer but carry
+their own metadata, are new 'views' into the data buffer. There is a different
+ndarray object, but it uses the same data buffer. This is why it is necessary
+to force copies through use of the .copy() method if one really wants to make
+a new and independent copy of the data buffer.
+
+New views into arrays mean the object reference counts for the data buffer
+increase. Simply doing away with the original array object will not remove the
+data buffer if other views of it still exist.
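+
+A brief sketch of the view/copy distinction described above:
+
+ >>> a = np.arange(6)
+ >>> v = a.reshape(2, 3)   # new metadata, same data buffer: a view
+ >>> v.base is a
+ True
+ >>> np.shares_memory(a, v)
+ True
+ >>> c = a.copy()          # a new, independent data buffer
+ >>> np.shares_memory(a, c)
+ False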
+
+Multidimensional Array Indexing Order Issues
+============================================
+
+What is the right way to index
+multi-dimensional arrays? Before you jump to conclusions about the one
+true way to index multi-dimensional arrays, it pays to understand why this is
+a confusing issue. This section will try to explain in detail how numpy
+indexing works and why we adopt the convention we do for images, and when it
+may be appropriate to adopt other conventions.
+
+The first thing to understand is
+that there are two conflicting conventions for indexing 2-dimensional arrays.
+Matrix notation uses the first index to indicate which row is being selected and
+the second index to indicate which column is selected. This is opposite the
+geometrically oriented convention for images, where people generally think the
+first index represents x position (i.e., column) and the second represents y
+position (i.e., row). This alone is the source of much confusion;
+matrix-oriented users and image-oriented users expect two different things with
+regard to indexing.
+
+The second issue to understand is how indices correspond
+to the order the array is stored in memory. In Fortran the first index is the
+most rapidly varying index when moving through the elements of a two
+dimensional array as it is stored in memory. If you adopt the matrix
+convention for indexing, then this means the matrix is stored one column at a
+time (since the first index moves to the next row as it changes). Thus Fortran
+is considered a Column-major language. C has just the opposite convention. In
+C, the last index changes most rapidly as one moves through the array as
+stored in memory. Thus C is a Row-major language. The matrix is stored by
+rows. Note that in both cases it presumes that the matrix convention for
+indexing is being used, i.e., for both Fortran and C, the first index is the
+row. Note this convention implies that the indexing convention is invariant
+and that the data order changes to keep that so.
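+
+A quick way to see the two storage conventions from Python is to compare
+strides; the byte counts below assume an 8-byte integer element:
+
+ >>> c = np.zeros((3, 5), dtype=np.int64, order='C')
+ >>> f = np.zeros((3, 5), dtype=np.int64, order='F')
+ >>> c.strides   # last index varies fastest: row-major storage
+ (40, 8)
+ >>> f.strides   # first index varies fastest: column-major storage
+ (8, 24)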
+
+But that's not the only way
+to look at it. Suppose one has large two-dimensional arrays (images or
+matrices) stored in data files. Suppose the data are stored by rows rather than
+by columns. If we are to preserve our index convention (whether matrix or
+image) that means that depending on the language we use, we may be forced to
+reorder the data if it is read into memory to preserve our indexing
+convention. For example if we read row-ordered data into memory without
+reordering, it will match the matrix indexing convention for C, but not for
+Fortran. Conversely, it will match the image indexing convention for Fortran,
+but not for C. For C, if one is using data stored in row order, and one wants
+to preserve the image index convention, the data must be reordered when
+reading into memory.
+
+In the end, what you do for Fortran or C depends on
+which is more important: not reordering data or preserving the indexing
+convention. For large images, reordering data is potentially expensive, and
+often the indexing convention is inverted to avoid that.
+
+The situation with
+numpy makes this issue yet more complicated. The internal machinery of numpy
+arrays is flexible enough to accept any ordering of indices. One can simply
+reorder indices by manipulating the internal stride information for arrays
+without reordering the data at all. NumPy will know how to map the new index
+order to the data without moving the data.
+
+So if this is true, why not choose
+the index order that matches what you most expect? In particular, why not define
+row-ordered images to use the image convention? (This is sometimes referred
+to as the Fortran convention vs the C convention, thus the 'C' and 'FORTRAN'
+order options for array ordering in numpy.) The drawback of doing this is
+potential performance penalties. It's common to access the data sequentially,
+either implicitly in array operations or explicitly by looping over rows of an
+image. When that is done, the data will be accessed in non-optimal order.
+As the first index is incremented, what is actually happening is that elements
+spaced far apart in memory are being sequentially accessed, with usually poor
+memory access speeds. Consider, for example, a two dimensional image 'im'
+defined so that im[0, 10] represents the value at x=0, y=10. To be consistent
+with usual Python behavior, im[0] would then represent a column at x=0. Yet
+that data would be spread over the whole array since the data are stored in
+row order. Despite the flexibility of numpy's indexing, it can't really paper
+over the fact that basic operations are rendered inefficient because of data
+order, or that getting contiguous subarrays is still awkward (e.g., im[:,0]
+for the first row, vs im[0]). Thus one can't use an idiom such as
+"for row in im"; "for col in im" does work, but doesn't yield contiguous
+column data.
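+
+A short sketch of this awkwardness, modelling the scenario with a
+Fortran-ordered array so that im[x, y] indexing leaves rows contiguous:
+
+ >>> im = np.zeros((4, 3), order='F')   # image convention: im[x, y]
+ >>> im[:, 0].flags['C_CONTIGUOUS']     # a row (fixed y) is contiguous
+ True
+ >>> im[0].flags['C_CONTIGUOUS']        # the "column" at x=0 is spread through memory
+ False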
+
+As it turns out, numpy is
+smart enough when dealing with ufuncs to determine which index is the most
+rapidly varying one in memory and to use that for the innermost loop. Thus for
+ufuncs there is no large intrinsic advantage to either approach in most cases.
+On the other hand, use of .flat with a Fortran-ordered array will lead to
+non-optimal memory access as adjacent elements in the flattened array (iterator,
+actually) are not contiguous in memory.
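+
+For instance (a small sketch), ``.flat`` always traverses in C (last index
+fastest) order, so for a Fortran-ordered array consecutive items of the
+iterator are not adjacent in memory:
+
+ >>> f = np.asfortranarray([[1, 2, 3], [4, 5, 6]])
+ >>> list(f.flat)            # logical, C-order traversal
+ [1, 2, 3, 4, 5, 6]
+ >>> f.ravel(order='K')      # the order the elements actually sit in memory
+ array([1, 4, 2, 5, 3, 6])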
+
+Indeed, the fact is that Python
+indexing on lists and other sequences naturally leads to an outside-to-inside
+ordering (the first index gets the largest grouping, the next the next largest,
+and the last gets the smallest element). Since image data are normally stored
+by rows, this corresponds to position within rows being the last item indexed.
+
+If you do want to use Fortran ordering, realize that
+there are two approaches to consider: 1) accept that the first index is just not
+the most rapidly changing in memory and have all your I/O routines reorder
+your data when going from memory to disk or vice versa, or 2) use numpy's
+mechanism for mapping the first index to the most rapidly varying data. We
+recommend the former if possible. The disadvantage of the latter is that many
+of numpy's functions will yield arrays without Fortran ordering unless you are
+careful to use the 'order' keyword. Doing this would be highly inconvenient.
+
+Otherwise we recommend simply learning to reverse the usual order of indices
+when accessing elements of an array. Granted, it goes against the grain, but
+it is more in line with Python semantics and the natural order of the data.
+
+
diff --git a/doc/source/reference/maskedarray.baseclass.rst b/doc/source/reference/maskedarray.baseclass.rst
index 5c1bdda23..5a0f99651 100644
--- a/doc/source/reference/maskedarray.baseclass.rst
+++ b/doc/source/reference/maskedarray.baseclass.rst
@@ -242,8 +242,8 @@ Comparison operators:
MaskedArray.__eq__
MaskedArray.__ne__
-Truth value of an array (:func:`bool()`):
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Truth value of an array (:class:`bool() <bool>`):
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
:toctree: generated/
diff --git a/doc/source/reference/maskedarray.generic.rst b/doc/source/reference/maskedarray.generic.rst
index 41c3ee564..d3849c50d 100644
--- a/doc/source/reference/maskedarray.generic.rst
+++ b/doc/source/reference/maskedarray.generic.rst
@@ -177,8 +177,8 @@ attribute. We must keep in mind that a ``True`` entry in the mask indicates an
*invalid* data.
Another possibility is to use the :func:`getmask` and :func:`getmaskarray`
-functions. :func:`getmask(x)` outputs the mask of ``x`` if ``x`` is a masked
-array, and the special value :data:`nomask` otherwise. :func:`getmaskarray(x)`
+functions. ``getmask(x)`` outputs the mask of ``x`` if ``x`` is a masked
+array, and the special value :data:`nomask` otherwise. ``getmaskarray(x)``
outputs the mask of ``x`` if ``x`` is a masked array. If ``x`` has no invalid
entry or is not a masked array, the function outputs a boolean array of
``False`` with as many elements as ``x``.
@@ -296,11 +296,11 @@ new valid values to them::
.. note::
Unmasking an entry by direct assignment will silently fail if the masked
- array has a *hard* mask, as shown by the :attr:`hardmask` attribute. This
- feature was introduced to prevent overwriting the mask. To force the
- unmasking of an entry where the array has a hard mask, the mask must first
- to be softened using the :meth:`soften_mask` method before the allocation.
- It can be re-hardened with :meth:`harden_mask`::
+ array has a *hard* mask, as shown by the :attr:`~MaskedArray.hardmask`
+ attribute. This feature was introduced to prevent overwriting the mask.
+ To force the unmasking of an entry where the array has a hard mask,
+ the mask must first be softened using the :meth:`soften_mask` method
+ before the assignment. It can be re-hardened with :meth:`harden_mask`::
>>> x = ma.array([1, 2, 3], mask=[0, 0, 1], hard_mask=True)
>>> x
@@ -406,8 +406,8 @@ Operations on masked arrays
Arithmetic and comparison operations are supported by masked arrays.
As much as possible, invalid entries of a masked array are not processed,
-meaning that the corresponding :attr:`data` entries *should* be the same
-before and after the operation.
+meaning that the corresponding :attr:`~MaskedArray.data` entries
+*should* be the same before and after the operation.
.. warning::
We need to stress that this behavior may not be systematic, that masked
diff --git a/doc/source/reference/random/c-api.rst b/doc/source/reference/random/c-api.rst
index 63b0fdc2b..a79da7a49 100644
--- a/doc/source/reference/random/c-api.rst
+++ b/doc/source/reference/random/c-api.rst
@@ -181,6 +181,5 @@ Generate a single integer
Generate random uint64 numbers in closed interval [off, off + rng].
-.. c:function:: npy_uint64 random_bounded_uint64(bitgen_t *bitgen_state, npy_uint64 off, npy_uint64 rng, npy_uint64 mask, bint use_masked)
-
+.. c:function:: npy_uint64 random_bounded_uint64(bitgen_t *bitgen_state, npy_uint64 off, npy_uint64 rng, npy_uint64 mask, bool use_masked)
diff --git a/doc/source/reference/random/generator.rst b/doc/source/reference/random/generator.rst
index a2cbb493a..8706e1de2 100644
--- a/doc/source/reference/random/generator.rst
+++ b/doc/source/reference/random/generator.rst
@@ -36,11 +36,105 @@ Simple random data
Permutations
============
+The methods for randomly permuting a sequence are
+
.. autosummary::
:toctree: generated/
~numpy.random.Generator.shuffle
~numpy.random.Generator.permutation
+ ~numpy.random.Generator.permuted
+
+The following table summarizes the behaviors of the methods.
+
++--------------+-------------------+------------------+
+| method | copy/in-place | axis handling |
++==============+===================+==================+
+| shuffle | in-place | as if 1d |
++--------------+-------------------+------------------+
+| permutation | copy | as if 1d |
++--------------+-------------------+------------------+
+| permuted | either (use 'out' | axis independent |
+| | for in-place) | |
++--------------+-------------------+------------------+
+
+The following subsections provide more details about the differences.
+
+In-place vs. copy
+~~~~~~~~~~~~~~~~~
+The main difference between `Generator.shuffle` and `Generator.permutation`
+is that `Generator.shuffle` operates in-place, while `Generator.permutation`
+returns a copy.
+
+By default, `Generator.permuted` returns a copy. To operate in-place with
+`Generator.permuted`, pass the same array as the first argument *and* as
+the value of the ``out`` parameter. For example,
+
+ >>> rg = np.random.default_rng()
+ >>> x = np.arange(0, 15).reshape(3, 5)
+ >>> x
+ array([[ 0, 1, 2, 3, 4],
+ [ 5, 6, 7, 8, 9],
+ [10, 11, 12, 13, 14]])
+ >>> y = rg.permuted(x, axis=1, out=x)
+ >>> x
+ array([[ 1, 0, 2, 4, 3], # random
+ [ 6, 7, 8, 9, 5],
+ [10, 14, 11, 13, 12]])
+
+Note that when ``out`` is given, the return value is ``out``:
+
+ >>> y is x
+ True
+
+Handling the ``axis`` parameter
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+An important distinction for these methods is how they handle the ``axis``
+parameter. Both `Generator.shuffle` and `Generator.permutation` treat the
+input as a one-dimensional sequence, and the ``axis`` parameter determines
+which dimension of the input array to use as the sequence. In the case of a
+two-dimensional array, ``axis=0`` will, in effect, rearrange the rows of the
+array, and ``axis=1`` will rearrange the columns. For example
+
+ >>> rg = np.random.default_rng()
+ >>> x = np.arange(0, 15).reshape(3, 5)
+ >>> x
+ array([[ 0, 1, 2, 3, 4],
+ [ 5, 6, 7, 8, 9],
+ [10, 11, 12, 13, 14]])
+ >>> rg.permutation(x, axis=1)
+ array([[ 1, 3, 2, 0, 4], # random
+ [ 6, 8, 7, 5, 9],
+ [11, 13, 12, 10, 14]])
+
+Note that the columns have been rearranged "in bulk": the values within
+each column have not changed.
+
+The method `Generator.permuted` treats the ``axis`` parameter similarly to
+how `numpy.sort` treats it. Each slice along the given axis is shuffled
+independently of the others. Compare the following example of the use of
+`Generator.permuted` to the above example of `Generator.permutation`:
+
+ >>> rg.permuted(x, axis=1)
+ array([[ 1, 0, 2, 4, 3], # random
+ [ 5, 7, 6, 9, 8],
+ [10, 14, 12, 13, 11]])
+
+In this example, the values within each row (i.e. the values along
+``axis=1``) have been shuffled independently. This is not a "bulk"
+shuffle of the columns.
+
+Shuffling non-NumPy sequences
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+`Generator.shuffle` works on non-NumPy sequences. That is, if it is given
+a sequence that is not a NumPy array, it shuffles that sequence in-place.
+For example,
+
+ >>> rg = np.random.default_rng()
+ >>> a = ['A', 'B', 'C', 'D', 'E']
+ >>> rg.shuffle(a) # shuffle the list in-place
+ >>> a
+ ['B', 'D', 'A', 'E', 'C'] # random
Distributions
=============
diff --git a/doc/source/reference/random/legacy.rst b/doc/source/reference/random/legacy.rst
index 91b91dac8..6cf4775b8 100644
--- a/doc/source/reference/random/legacy.rst
+++ b/doc/source/reference/random/legacy.rst
@@ -133,7 +133,7 @@ Many of the RandomState methods above are exported as functions in
- It uses a `RandomState` rather than the more modern `Generator`.
For backward compatible legacy reasons, we cannot change this. See
-`random-quick-start`.
+:ref:`random-quick-start`.
.. autosummary::
:toctree: generated/
diff --git a/doc/source/reference/routines.array-manipulation.rst b/doc/source/reference/routines.array-manipulation.rst
index 8d13a1800..1c96495d9 100644
--- a/doc/source/reference/routines.array-manipulation.rst
+++ b/doc/source/reference/routines.array-manipulation.rst
@@ -74,6 +74,7 @@ Joining arrays
hstack
dstack
column_stack
+ row_stack
Splitting arrays
================
diff --git a/doc/source/reference/routines.char.rst b/doc/source/reference/routines.char.rst
index ed8393855..90df14125 100644
--- a/doc/source/reference/routines.char.rst
+++ b/doc/source/reference/routines.char.rst
@@ -6,7 +6,7 @@ String operations
.. module:: numpy.char
The `numpy.char` module provides a set of vectorized string
-operations for arrays of type `numpy.string_` or `numpy.unicode_`.
+operations for arrays of type `numpy.str_` or `numpy.bytes_`.
All of them are based on the string methods in the Python standard library.
String operations
diff --git a/doc/source/reference/routines.ctypeslib.rst b/doc/source/reference/routines.ctypeslib.rst
index 562638e9c..3a059f5d9 100644
--- a/doc/source/reference/routines.ctypeslib.rst
+++ b/doc/source/reference/routines.ctypeslib.rst
@@ -9,6 +9,5 @@ C-Types Foreign Function Interface (:mod:`numpy.ctypeslib`)
.. autofunction:: as_array
.. autofunction:: as_ctypes
.. autofunction:: as_ctypes_type
-.. autofunction:: ctypes_load_library
.. autofunction:: load_library
.. autofunction:: ndpointer
diff --git a/doc/source/reference/routines.financial.rst b/doc/source/reference/routines.financial.rst
deleted file mode 100644
index 5f426d7ab..000000000
--- a/doc/source/reference/routines.financial.rst
+++ /dev/null
@@ -1,21 +0,0 @@
-Financial functions
-*******************
-
-.. currentmodule:: numpy
-
-Simple financial functions
---------------------------
-
-.. autosummary::
- :toctree: generated/
-
- fv
- pv
- npv
- pmt
- ppmt
- ipmt
- irr
- mirr
- nper
- rate
diff --git a/doc/source/reference/routines.indexing.rst b/doc/source/reference/routines.indexing.rst
index aeec1a1bb..eebbf4989 100644
--- a/doc/source/reference/routines.indexing.rst
+++ b/doc/source/reference/routines.indexing.rst
@@ -42,6 +42,7 @@ Indexing-like operations
diag
diagonal
select
+ lib.stride_tricks.sliding_window_view
lib.stride_tricks.as_strided
Inserting data into arrays
diff --git a/doc/source/reference/routines.io.rst b/doc/source/reference/routines.io.rst
index 2e119af9a..3052ee1fb 100644
--- a/doc/source/reference/routines.io.rst
+++ b/doc/source/reference/routines.io.rst
@@ -88,4 +88,4 @@ Binary Format Description
.. autosummary::
:toctree: generated/
- lib.format
+ lib.format
diff --git a/doc/source/reference/routines.ma.rst b/doc/source/reference/routines.ma.rst
index 97859ac67..d961cbf02 100644
--- a/doc/source/reference/routines.ma.rst
+++ b/doc/source/reference/routines.ma.rst
@@ -67,6 +67,9 @@ Inspecting the array
ma.size
ma.is_masked
ma.is_mask
+ ma.isMaskedArray
+ ma.isMA
+ ma.isarray
ma.MaskedArray.all
@@ -272,7 +275,7 @@ Filling a masked array
ma.common_fill_value
ma.default_fill_value
ma.maximum_fill_value
- ma.maximum_fill_value
+ ma.minimum_fill_value
ma.set_fill_value
ma.MaskedArray.get_fill_value
diff --git a/doc/source/reference/routines.other.rst b/doc/source/reference/routines.other.rst
index def5b3e3c..aefd680bb 100644
--- a/doc/source/reference/routines.other.rst
+++ b/doc/source/reference/routines.other.rst
@@ -47,6 +47,7 @@ Utility
show_config
deprecate
deprecate_with_doc
+ broadcast_shapes
Matlab-like Functions
---------------------
diff --git a/doc/source/reference/routines.rst b/doc/source/reference/routines.rst
index 7a9b97d77..5d6a823b7 100644
--- a/doc/source/reference/routines.rst
+++ b/doc/source/reference/routines.rst
@@ -28,7 +28,6 @@ indentation.
routines.emath
routines.err
routines.fft
- routines.financial
routines.functional
routines.help
routines.indexing
diff --git a/doc/source/reference/routines.set.rst b/doc/source/reference/routines.set.rst
index b12d3d5f5..149c33a8b 100644
--- a/doc/source/reference/routines.set.rst
+++ b/doc/source/reference/routines.set.rst
@@ -3,6 +3,11 @@ Set routines
.. currentmodule:: numpy
+.. autosummary::
+ :toctree: generated/
+
+ lib.arraysetops
+
Making proper sets
------------------
.. autosummary::
diff --git a/doc/source/reference/simd/simd-optimizations-tables-diff.inc b/doc/source/reference/simd/simd-optimizations-tables-diff.inc
new file mode 100644
index 000000000..41fa96703
--- /dev/null
+++ b/doc/source/reference/simd/simd-optimizations-tables-diff.inc
@@ -0,0 +1,37 @@
+.. generated via source/reference/simd/simd-optimizations.py
+
+x86::Intel Compiler - CPU feature names
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. table::
+ :align: left
+
+ =========== ==================================================================================================================
+ Name Implies
+ =========== ==================================================================================================================
+ ``FMA3`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` **AVX2**
+ ``AVX2`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` **FMA3**
+ ``AVX512F`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` **AVX512CD**
+ =========== ==================================================================================================================
+
+.. note::
+ The following features aren't supported by x86::Intel Compiler:
+ **XOP FMA4**
+
+x86::Microsoft Visual C/C++ - CPU feature names
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. table::
+ :align: left
+
+ ============ =================================================================================================================================
+ Name Implies
+ ============ =================================================================================================================================
+ ``FMA3`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` **AVX2**
+ ``AVX2`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` **FMA3**
+ ``AVX512F`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` **AVX512CD** **AVX512_SKX**
+ ``AVX512CD`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` **AVX512_SKX**
+ ============ =================================================================================================================================
+
+.. note::
+ The following features aren't supported by x86::Microsoft Visual C/C++:
+ **AVX512_KNL AVX512_KNM**
+
diff --git a/doc/source/reference/simd/simd-optimizations-tables.inc b/doc/source/reference/simd/simd-optimizations-tables.inc
index d5b82ee0c..f038a91e1 100644
--- a/doc/source/reference/simd/simd-optimizations-tables.inc
+++ b/doc/source/reference/simd/simd-optimizations-tables.inc
@@ -1,110 +1,103 @@
.. generated via source/reference/simd/simd-optimizations.py
-``X86`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
+x86 - CPU feature names
+~~~~~~~~~~~~~~~~~~~~~~~
.. table::
:align: left
- ======== =================================================================================================================
- Name Implies
- ======== =================================================================================================================
- SSE ``SSE`` ``SSE2``
- SSE2 ``SSE`` ``SSE2``
- SSE3 ``SSE`` ``SSE2``
- SSSE3 ``SSE`` ``SSE2`` ``SSE3``
- SSE41 ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3``
- POPCNT ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41``
- SSE42 ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT``
- AVX ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42``
- XOP ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX``
- FMA4 ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX``
- F16C ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX``
- FMA3 ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C``
- AVX2 ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C``
- AVX512F ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2``
- AVX512CD ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F``
- ======== =================================================================================================================
-
-``X86`` - Group names
-~~~~~~~~~~~~~~~~~~~~~
-
+ ============ =================================================================================================================
+ Name Implies
+ ============ =================================================================================================================
+ ``SSE`` ``SSE2``
+ ``SSE2`` ``SSE``
+ ``SSE3`` ``SSE`` ``SSE2``
+ ``SSSE3`` ``SSE`` ``SSE2`` ``SSE3``
+ ``SSE41`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3``
+ ``POPCNT`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41``
+ ``SSE42`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT``
+ ``AVX`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42``
+ ``XOP`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX``
+ ``FMA4`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX``
+ ``F16C`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX``
+ ``FMA3`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C``
+ ``AVX2`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C``
+ ``AVX512F`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2``
+ ``AVX512CD`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F``
+ ============ =================================================================================================================
+
+x86 - Group names
+~~~~~~~~~~~~~~~~~
.. table::
:align: left
- ========== ===================================================== ===========================================================================================================================================================================
- Name Gather Implies
- ========== ===================================================== ===========================================================================================================================================================================
- AVX512_KNL ``AVX512ER`` ``AVX512PF`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD``
- AVX512_KNM ``AVX5124FMAPS`` ``AVX5124VNNIW`` ``AVX512VPOPCNTDQ`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_KNL``
- AVX512_SKX ``AVX512VL`` ``AVX512BW`` ``AVX512DQ`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD``
- AVX512_CLX ``AVX512VNNI`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_SKX``
- AVX512_CNL ``AVX512IFMA`` ``AVX512VBMI`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_SKX``
- AVX512_ICL ``AVX512VBMI2`` ``AVX512BITALG`` ``AVX512VPOPCNTDQ`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_SKX`` ``AVX512_CLX`` ``AVX512_CNL``
- ========== ===================================================== ===========================================================================================================================================================================
-
-``IBM/POWER`` ``big-endian`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
+ ============== ===================================================== ===========================================================================================================================================================================
+ Name Gather Implies
+ ============== ===================================================== ===========================================================================================================================================================================
+ ``AVX512_KNL`` ``AVX512ER`` ``AVX512PF`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD``
+ ``AVX512_KNM`` ``AVX5124FMAPS`` ``AVX5124VNNIW`` ``AVX512VPOPCNTDQ`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_KNL``
+ ``AVX512_SKX`` ``AVX512VL`` ``AVX512BW`` ``AVX512DQ`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD``
+ ``AVX512_CLX`` ``AVX512VNNI`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_SKX``
+ ``AVX512_CNL`` ``AVX512IFMA`` ``AVX512VBMI`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_SKX``
+ ``AVX512_ICL`` ``AVX512VBMI2`` ``AVX512BITALG`` ``AVX512VPOPCNTDQ`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_SKX`` ``AVX512_CLX`` ``AVX512_CNL``
+ ============== ===================================================== ===========================================================================================================================================================================
+
+IBM/POWER big-endian - CPU feature names
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. table::
:align: left
- ==== ================
- Name Implies
- ==== ================
- VSX
- VSX2 ``VSX``
- VSX3 ``VSX`` ``VSX2``
- ==== ================
-
-``IBM/POWER`` ``little-endian mode`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ======== ================
+ Name Implies
+ ======== ================
+ ``VSX``
+ ``VSX2`` ``VSX``
+ ``VSX3`` ``VSX`` ``VSX2``
+ ======== ================
+
+IBM/POWER little-endian - CPU feature names
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. table::
:align: left
- ==== ================
- Name Implies
- ==== ================
- VSX ``VSX`` ``VSX2``
- VSX2 ``VSX`` ``VSX2``
- VSX3 ``VSX`` ``VSX2``
- ==== ================
+ ======== ================
+ Name Implies
+ ======== ================
+ ``VSX`` ``VSX2``
+ ``VSX2`` ``VSX``
+ ``VSX3`` ``VSX`` ``VSX2``
+ ======== ================
-``ARMHF`` - CPU feature names
+
+ARMv7/A32 - CPU feature names
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
.. table::
:align: left
- ========== ===========================================================
- Name Implies
- ========== ===========================================================
- NEON
- NEON_FP16 ``NEON``
- NEON_VFPV4 ``NEON`` ``NEON_FP16``
- ASIMD ``NEON`` ``NEON_FP16`` ``NEON_VFPV4``
- ASIMDHP ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- ASIMDDP ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- ASIMDFHM ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD`` ``ASIMDHP``
- ========== ===========================================================
-
-``ARM64`` ``AARCH64`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
+ ============== ===========================================================
+ Name Implies
+ ============== ===========================================================
+ ``NEON``
+ ``NEON_FP16`` ``NEON``
+ ``NEON_VFPV4`` ``NEON`` ``NEON_FP16``
+ ``ASIMD`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4``
+ ``ASIMDHP`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
+ ``ASIMDDP`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
+ ``ASIMDFHM`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD`` ``ASIMDHP``
+ ============== ===========================================================
+
+ARMv8/A64 - CPU feature names
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. table::
:align: left
- ========== ===========================================================
- Name Implies
- ========== ===========================================================
- NEON ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- NEON_FP16 ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- NEON_VFPV4 ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- ASIMD ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- ASIMDHP ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- ASIMDDP ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- ASIMDFHM ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD`` ``ASIMDHP``
- ========== ===========================================================
+ ============== ===========================================================
+ Name Implies
+ ============== ===========================================================
+ ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
+ ``NEON_FP16`` ``NEON`` ``NEON_VFPV4`` ``ASIMD``
+ ``NEON_VFPV4`` ``NEON`` ``NEON_FP16`` ``ASIMD``
+ ``ASIMD`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4``
+ ``ASIMDHP`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
+ ``ASIMDDP`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
+ ``ASIMDFHM`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD`` ``ASIMDHP``
+ ============== ===========================================================
- \ No newline at end of file
diff --git a/doc/source/reference/simd/simd-optimizations.py b/doc/source/reference/simd/simd-optimizations.py
index 628356163..5d6da50e3 100644
--- a/doc/source/reference/simd/simd-optimizations.py
+++ b/doc/source/reference/simd/simd-optimizations.py
@@ -3,10 +3,14 @@ Generate CPU features tables from CCompilerOpt
"""
from os import sys, path
gen_path = path.dirname(path.realpath(__file__))
+#sys.path.append(path.abspath(path.join(gen_path, *([".."]*4), "numpy", "distutils")))
+#from ccompiler_opt import CCompilerOpt
from numpy.distutils.ccompiler_opt import CCompilerOpt
class FakeCCompilerOpt(CCompilerOpt):
fake_info = ""
+    # disable caching, it is not needed here
+ conf_nocache = True
def __init__(self, *args, **kwargs):
no_cc = None
CCompilerOpt.__init__(self, no_cc, **kwargs)
@@ -23,40 +27,49 @@ class FakeCCompilerOpt(CCompilerOpt):
return True
def gen_features_table(self, features, ignore_groups=True,
- field_names=["Name", "Implies"], **kwargs):
+ field_names=["Name", "Implies"],
+ fstyle=None, fstyle_implies=None, **kwargs):
rows = []
- for f in features:
+ if fstyle is None:
+ fstyle = lambda ft: f'``{ft}``'
+ if fstyle_implies is None:
+ fstyle_implies = lambda origin, ft: fstyle(ft)
+ for f in self.feature_sorted(features):
is_group = "group" in self.feature_supported.get(f, {})
if ignore_groups and is_group:
continue
implies = self.feature_sorted(self.feature_implies(f))
- implies = ' '.join(['``%s``' % i for i in implies])
- rows.append([f, implies])
- return self.gen_rst_table(field_names, rows, **kwargs)
+ implies = ' '.join([fstyle_implies(f, i) for i in implies])
+ rows.append([fstyle(f), implies])
+ if rows:
+ return self.gen_rst_table(field_names, rows, **kwargs)
def gen_gfeatures_table(self, features,
field_names=["Name", "Gather", "Implies"],
- **kwargs):
+ fstyle=None, fstyle_implies=None, **kwargs):
rows = []
- for f in features:
+ if fstyle is None:
+ fstyle = lambda ft: f'``{ft}``'
+ if fstyle_implies is None:
+ fstyle_implies = lambda origin, ft: fstyle(ft)
+ for f in self.feature_sorted(features):
gather = self.feature_supported.get(f, {}).get("group", None)
if not gather:
continue
implies = self.feature_sorted(self.feature_implies(f))
- implies = ' '.join(['``%s``' % i for i in implies])
- gather = ' '.join(['``%s``' % i for i in gather])
- rows.append([f, gather, implies])
- return self.gen_rst_table(field_names, rows, **kwargs)
+ implies = ' '.join([fstyle_implies(f, i) for i in implies])
+ gather = ' '.join([fstyle_implies(f, i) for i in gather])
+ rows.append([fstyle(f), gather, implies])
+ if rows:
+ return self.gen_rst_table(field_names, rows, **kwargs)
-
- def gen_rst_table(self, field_names, rows, margin_left=2):
+ def gen_rst_table(self, field_names, rows, tab_size=4):
assert(not rows or len(field_names) == len(rows[0]))
rows.append(field_names)
fld_len = len(field_names)
cls_len = [max(len(c[i]) for c in rows) for i in range(fld_len)]
del rows[-1]
- padding = 0
- cformat = ' '.join('{:<%d}' % (i+padding) for i in cls_len)
+ cformat = ' '.join('{:<%d}' % i for i in cls_len)
border = cformat.format(*['='*i for i in cls_len])
rows = [cformat.format(*row) for row in rows]
@@ -65,102 +78,113 @@ class FakeCCompilerOpt(CCompilerOpt):
# footer
rows += [border]
# add left margin
- rows = [(' ' * margin_left) + r for r in rows]
+ rows = [(' ' * tab_size) + r for r in rows]
return '\n'.join(rows)
-if __name__ == '__main__':
- margin_left = 4*1
- ############### x86 ###############
- FakeCCompilerOpt.fake_info = "x86_64 gcc"
- x64_gcc = FakeCCompilerOpt(cpu_baseline="max")
- x86_tables = """\
-``X86`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. table::
- :align: left
-
-{x86_features}
-
-``X86`` - Group names
-~~~~~~~~~~~~~~~~~~~~~
-
-.. table::
- :align: left
-
-{x86_gfeatures}
-
-""".format(
- x86_features = x64_gcc.gen_features_table(
- x64_gcc.cpu_baseline_names(), margin_left=margin_left
- ),
- x86_gfeatures = x64_gcc.gen_gfeatures_table(
- x64_gcc.cpu_baseline_names(), margin_left=margin_left
+def features_table_sections(name, ftable=None, gtable=None, tab_size=4):
+ tab = ' '*tab_size
+ content = ''
+ if ftable:
+ title = f"{name} - CPU feature names"
+ content = (
+ f"{title}\n{'~'*len(title)}"
+ f"\n.. table::\n{tab}:align: left\n\n"
+ f"{ftable}\n\n"
)
- )
- ############### Power ###############
- FakeCCompilerOpt.fake_info = "ppc64 gcc"
- ppc64_gcc = FakeCCompilerOpt(cpu_baseline="max")
- FakeCCompilerOpt.fake_info = "ppc64le gcc"
- ppc64le_gcc = FakeCCompilerOpt(cpu_baseline="max")
- ppc64_tables = """\
-``IBM/POWER`` ``big-endian`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. table::
- :align: left
-
-{ppc64_features}
-
-``IBM/POWER`` ``little-endian mode`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. table::
- :align: left
-
-{ppc64le_features}
-
-""".format(
- ppc64_features = ppc64_gcc.gen_features_table(
- ppc64_gcc.cpu_baseline_names(), margin_left=margin_left
- ),
- ppc64le_features = ppc64le_gcc.gen_features_table(
- ppc64le_gcc.cpu_baseline_names(), margin_left=margin_left
+ if gtable:
+ title = f"{name} - Group names"
+ content += (
+ f"{title}\n{'~'*len(title)}"
+ f"\n.. table::\n{tab}:align: left\n\n"
+ f"{gtable}\n\n"
)
+ return content
+
+def features_table(arch, cc="gcc", pretty_name=None, **kwargs):
+ FakeCCompilerOpt.fake_info = arch + cc
+ ccopt = FakeCCompilerOpt(cpu_baseline="max")
+ features = ccopt.cpu_baseline_names()
+ ftable = ccopt.gen_features_table(features, **kwargs)
+ gtable = ccopt.gen_gfeatures_table(features, **kwargs)
+
+ if not pretty_name:
+ pretty_name = arch + '/' + cc
+ return features_table_sections(pretty_name, ftable, gtable, **kwargs)
+
+def features_table_diff(arch, cc, cc_vs="gcc", pretty_name=None, **kwargs):
+ FakeCCompilerOpt.fake_info = arch + cc
+ ccopt = FakeCCompilerOpt(cpu_baseline="max")
+ fnames = ccopt.cpu_baseline_names()
+ features = {f:ccopt.feature_implies(f) for f in fnames}
+
+ FakeCCompilerOpt.fake_info = arch + cc_vs
+ ccopt_vs = FakeCCompilerOpt(cpu_baseline="max")
+ fnames_vs = ccopt_vs.cpu_baseline_names()
+ features_vs = {f:ccopt_vs.feature_implies(f) for f in fnames_vs}
+
+ common = set(fnames).intersection(fnames_vs)
+ extra_avl = set(fnames).difference(fnames_vs)
+ not_avl = set(fnames_vs).difference(fnames)
+ diff_impl_f = {f:features[f].difference(features_vs[f]) for f in common}
+ diff_impl = {k for k, v in diff_impl_f.items() if v}
+
+ fbold = lambda ft: f'**{ft}**' if ft in extra_avl else f'``{ft}``'
+ fbold_implies = lambda origin, ft: (
+ f'**{ft}**' if ft in diff_impl_f.get(origin, {}) else f'``{ft}``'
)
- ############### Arm ###############
- FakeCCompilerOpt.fake_info = "armhf gcc"
- armhf_gcc = FakeCCompilerOpt(cpu_baseline="max")
- FakeCCompilerOpt.fake_info = "aarch64 gcc"
- aarch64_gcc = FakeCCompilerOpt(cpu_baseline="max")
- arm_tables = """\
-``ARMHF`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. table::
- :align: left
-
-{armhf_features}
-
-``ARM64`` ``AARCH64`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. table::
- :align: left
-
-{aarch64_features}
-
- """.format(
- armhf_features = armhf_gcc.gen_features_table(
- armhf_gcc.cpu_baseline_names(), margin_left=margin_left
- ),
- aarch64_features = aarch64_gcc.gen_features_table(
- aarch64_gcc.cpu_baseline_names(), margin_left=margin_left
- )
+ diff_all = diff_impl.union(extra_avl)
+ ftable = ccopt.gen_features_table(
+ diff_all, fstyle=fbold, fstyle_implies=fbold_implies, **kwargs
+ )
+ gtable = ccopt.gen_gfeatures_table(
+ diff_all, fstyle=fbold, fstyle_implies=fbold_implies, **kwargs
)
- # TODO: diff the difference among all supported compilers
+ if not pretty_name:
+ pretty_name = arch + '/' + cc
+ content = features_table_sections(pretty_name, ftable, gtable, **kwargs)
+
+ if not_avl:
+ not_avl = ccopt_vs.feature_sorted(not_avl)
+ not_avl = ' '.join(not_avl)
+ content += (
+ ".. note::\n"
+ f" The following features aren't supported by {pretty_name}:\n"
+ f" **{not_avl}**\n\n"
+ )
+ return content
+
+if __name__ == '__main__':
+ pretty_names = {
+ "PPC64": "IBM/POWER big-endian",
+ "PPC64LE": "IBM/POWER little-endian",
+ "ARMHF": "ARMv7/A32",
+ "AARCH64": "ARMv8/A64",
+ "ICC": "Intel Compiler",
+ # "ICCW": "Intel Compiler msvc-like",
+ "MSVC": "Microsoft Visual C/C++"
+ }
with open(path.join(gen_path, 'simd-optimizations-tables.inc'), 'wt') as fd:
fd.write(f'.. generated via {__file__}\n\n')
- fd.write(x86_tables)
- fd.write(ppc64_tables)
- fd.write(arm_tables)
+ for arch in (
+ ("x86", "PPC64", "PPC64LE", "ARMHF", "AARCH64")
+ ):
+ pretty_name = pretty_names.get(arch, arch)
+ table = features_table(arch=arch, pretty_name=pretty_name)
+ assert(table)
+ fd.write(table)
+
+ with open(path.join(gen_path, 'simd-optimizations-tables-diff.inc'), 'wt') as fd:
+ fd.write(f'.. generated via {__file__}\n\n')
+ for arch, cc_names in (
+ ("x86", ("clang", "ICC", "MSVC")),
+ ("PPC64", ("clang",)),
+ ("PPC64LE", ("clang",)),
+ ("ARMHF", ("clang",)),
+ ("AARCH64", ("clang",))
+ ):
+ arch_pname = pretty_names.get(arch, arch)
+ for cc in cc_names:
+ pretty_name = f"{arch_pname}::{pretty_names.get(cc, cc)}"
+ table = features_table_diff(arch=arch, cc=cc, pretty_name=pretty_name)
+ if table:
+ fd.write(table)
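For reference, the reworked generator builds each reST table by sizing every column to its widest cell and indenting the block so it nests under a ``.. table::`` directive. A minimal standalone sketch of the same idea, independent of ``CCompilerOpt`` (the helper name ``rst_table`` is illustrative only):

.. code:: python

    def rst_table(field_names, rows, tab_size=4):
        # Column widths come from the widest cell in each column,
        # header row included.
        all_rows = rows + [field_names]
        widths = [max(len(row[i]) for row in all_rows)
                  for i in range(len(field_names))]
        fmt = ' '.join('{:<%d}' % w for w in widths)
        border = fmt.format(*('=' * w for w in widths))
        lines = [border, fmt.format(*field_names), border]
        lines += [fmt.format(*row) for row in rows]
        lines.append(border)
        # Indent so the table nests under the ``.. table::`` directive.
        return '\n'.join(' ' * tab_size + line for line in lines)

    print(rst_table(["Name", "Implies"],
                    [["``SSE``", "``SSE2``"], ["``SSE2``", "``SSE``"]]))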
diff --git a/doc/source/reference/simd/simd-optimizations.rst b/doc/source/reference/simd/simd-optimizations.rst
index eb7eb2a83..59a4892b2 100644
--- a/doc/source/reference/simd/simd-optimizations.rst
+++ b/doc/source/reference/simd/simd-optimizations.rst
@@ -29,8 +29,8 @@ Build options for compilation
safely run on a wide range of platforms within the processor family.
- ``--cpu-dispatch``: dispatched set of additional optimizations.
- The default value for ``x86`` is ``max -xop -fma4`` which enables all CPU
- features, except for AMD legacy features.
+  The default value is ``max -xop -fma4``, which enables all CPU
+  features except for the AMD legacy features (in the case of x86).
The command arguments are available in ``build``, ``build_clib``, and
``build_ext``.
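To make these options concrete, here is a sketch of driving the documented build flags from Python, equivalent to running ``python setup.py build --cpu-baseline="sse42" --cpu-dispatch="max -xop -fma4"`` from a NumPy source checkout; the chosen feature names are only examples.

.. code:: python

    # Assumes the current working directory is a NumPy source tree;
    # the feature names below are illustrative, not a recommendation.
    import subprocess
    import sys

    subprocess.run(
        [sys.executable, "setup.py", "build",
         "--cpu-baseline=sse42",
         "--cpu-dispatch=max -xop -fma4"],
        check=True,
    )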
@@ -38,13 +38,24 @@ if ``build_clib`` or ``build_ext`` are not specified by the user, the arguments
``build`` will be used instead, which also holds the default values.
Optimization names can be CPU features or groups of features that gather
-several features or special options to perform a series of procedures.
+several features or :ref:`special options <special-options>` to perform a series of procedures.
The following tables show the current supported optimizations sorted from the lowest to the highest interest.
.. include:: simd-optimizations-tables.inc
+----
+
+.. _tables-diff:
+
+While the above tables are based on the GCC compiler, the following tables show how
+the other supported compilers differ from it:
+
+.. include:: simd-optimizations-tables-diff.inc
+
+.. _special-options:
+
Special options
~~~~~~~~~~~~~~~
@@ -80,7 +91,7 @@ NOTES
- The order of the requested optimizations doesn't matter.
-- Either commas or spaces can be used as a separator, e.g. ``--cpu-dispatch``\ =
+- Either commas or spaces can be used as a separator, e.g. ``--cpu-dispatch``\ =
"avx2 avx512f" or ``--cpu-dispatch``\ = "avx2, avx512f" both work, but the
arguments must be enclosed in quotes.
@@ -114,6 +125,25 @@ NOTES
Special cases
~~~~~~~~~~~~~
+**Interrelated CPU features**: Some exceptional conditions force us to link certain features together when it comes to certain compilers or architectures, making it impossible to build them separately.
+These conditions fall into two categories:
+
+- **Architectural compatibility**: The need to align certain CPU features that are assured
+ to be supported by successive generations of the same architecture, for example:
+
+  - On ppc64le, `VSX (ISA 2.06)` and `VSX2 (ISA 2.07)` imply one another, since the
+    first generation that supports little-endian mode is Power-8 `(ISA 2.07)`.
+  - On AArch64, `NEON`, `FP16`, `VFPV4` and `ASIMD` imply one another, since they are part of
+    the hardware baseline.
+
+- **Compilation compatibility**: Not all **C/C++** compilers provide independent support for all CPU
+  features. For example, **Intel**'s compiler doesn't provide separate flags for `AVX2` and `FMA3`;
+  that makes sense, since all Intel CPUs that come with `AVX2` also support `FMA3` and vice versa,
+  but this approach is incompatible with other **x86** CPUs from **AMD** or **VIA**.
+ Therefore, there are differences in the depiction of CPU features between the C/C++ compilers,
+ as shown in the :ref:`tables above <tables-diff>`.
+
+
Behaviors and Errors
~~~~~~~~~~~~~~~~~~~~
@@ -224,7 +254,7 @@ Definitely, yes. But the :ref:`dispatch-able sources <dispatchable-sources>` are
treated differently.
What if the user specifies certain **baseline features** during the
-build but at runtime the machine doesn't support even these
+build but at runtime the machine doesn't support even these
features? Will the compiled code be called via one of these definitions, or
maybe the compiler itself auto-generated/vectorized certain piece of code
based on the provided command line compiler flags?
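One way to check this empirically is to inspect the attributes exported by the compiled core module, which record the baseline set, the dispatched set, and what the running CPU reports. This is only a sketch: ``numpy.core._multiarray_umath`` is an internal module, so these attribute names are not public API and may move or change between releases.

.. code:: python

    from numpy.core._multiarray_umath import (
        __cpu_baseline__,   # features compiled in unconditionally
        __cpu_dispatch__,   # features compiled as dispatched paths
        __cpu_features__,   # mapping: feature name -> supported by this CPU
    )

    print("baseline :", __cpu_baseline__)
    print("dispatch :", __cpu_dispatch__)
    print("usable   :", [f for f in __cpu_dispatch__ if __cpu_features__[f]])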
@@ -304,7 +334,7 @@ through ``--cpu-dispatch``, but it can also represent other options such as:
.. code:: c
- /*
+ /*
* this definition is used by NumPy utilities as suffixes for the
* exported symbols
*/
diff --git a/doc/source/reference/ufuncs.rst b/doc/source/reference/ufuncs.rst
index 8f506dd8b..06fbe28dd 100644
--- a/doc/source/reference/ufuncs.rst
+++ b/doc/source/reference/ufuncs.rst
@@ -1,5 +1,7 @@
.. sectionauthor:: adapted from "Guide to NumPy" by Travis E. Oliphant
+.. currentmodule:: numpy
+
.. _ufuncs:
************************************
@@ -8,8 +10,6 @@ Universal functions (:class:`ufunc`)
.. note: XXX: section might need to be made more reference-guideish...
-.. currentmodule:: numpy
-
.. index: ufunc, universal function, arithmetic, operation
A universal function (or :term:`ufunc` for short) is a function that
@@ -298,6 +298,11 @@ them by defining certain special methods. For details, see
:class:`ufunc`
==============
+.. autosummary::
+ :toctree: generated/
+
+ numpy.ufunc
+
.. _ufuncs.kwargs:
Optional keyword arguments
@@ -335,6 +340,19 @@ advanced usage and will not typically be used.
Note that outputs not explicitly filled are left with their
uninitialized values.
+ .. versionadded:: 1.13
+
+ Operations where ufunc input and output operands have memory overlap are
+ defined to be the same as for equivalent operations where there
+ is no memory overlap. Operations affected make temporary copies
+ as needed to eliminate data dependency. As detecting these cases
+ is computationally expensive, a heuristic is used, which may in rare
+ cases result in needless temporary copies. For operations where the
+ data dependency is simple enough for the heuristic to analyze,
+ temporary copies will not be made even if the arrays overlap, if it
+  can be deduced that copies are not necessary.  As an example,
+ ``np.add(a, b, out=a)`` will not involve copies.
+
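A minimal demonstration of the in-place case mentioned above (requires only NumPy; the values are arbitrary):

.. code:: python

    import numpy as np

    # The output operand aliases the first input.  By the overlap rules
    # above the result matches the non-overlapping case, and this data
    # dependency is simple enough that no temporary copy is made.
    a = np.arange(5.0)
    b = np.ones(5)
    np.add(a, b, out=a)
    print(a)  # [1. 2. 3. 4. 5.]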
*where*
.. versionadded:: 1.7