path: root/doc/source/reference/c-api
authorKriti Singh <kritisingh1.ks@gmail.com>2019-07-22 21:47:39 +0530
committerSebastian Berg <sebastian@sipsolutions.net>2019-07-22 09:17:39 -0700
commitab87388a76c0afca4eb1159ab0ed232d502a8378 (patch)
treee686041b1cc4d10815a2ade2bf7f4f090815e6a8 /doc/source/reference/c-api
parent49fbbbff78034bc1c95c11c884b0233fb10b5955 (diff)
downloadnumpy-ab87388a76c0afca4eb1159ab0ed232d502a8378.tar.gz
DOC: Array API : Directory restructure and code cleanup (#14010)
* Minor improvements in Array API docs * Directory restructure
Diffstat (limited to 'doc/source/reference/c-api')
-rw-r--r--doc/source/reference/c-api/array.rst3679
-rw-r--r--doc/source/reference/c-api/config.rst122
-rw-r--r--doc/source/reference/c-api/coremath.rst453
-rw-r--r--doc/source/reference/c-api/deprecations.rst58
-rw-r--r--doc/source/reference/c-api/dtype.rst419
-rw-r--r--doc/source/reference/c-api/generalized-ufuncs.rst216
-rw-r--r--doc/source/reference/c-api/index.rst51
-rw-r--r--doc/source/reference/c-api/iterator.rst1322
-rw-r--r--doc/source/reference/c-api/types-and-structures.rst1422
-rw-r--r--doc/source/reference/c-api/ufunc.rst485
10 files changed, 8227 insertions, 0 deletions
diff --git a/doc/source/reference/c-api/array.rst b/doc/source/reference/c-api/array.rst
new file mode 100644
index 000000000..1d9d31b83
--- /dev/null
+++ b/doc/source/reference/c-api/array.rst
@@ -0,0 +1,3679 @@
+Array API
+=========
+
+.. sectionauthor:: Travis E. Oliphant
+
+| The test of a first-rate intelligence is the ability to hold two
+| opposed ideas in the mind at the same time, and still retain the
+| ability to function.
+| --- *F. Scott Fitzgerald*
+
+| For a successful technology, reality must take precedence over public
+| relations, for Nature cannot be fooled.
+| --- *Richard P. Feynman*
+
+.. index::
+ pair: ndarray; C-API
+ pair: C-API; array
+
+
+Array structure and data access
+-------------------------------
+
+These macros access the :c:type:`PyArrayObject` structure members and are
+defined in ``ndarraytypes.h``. The input argument, *arr*, can be any
+:c:type:`PyObject *<PyObject>` that is directly interpretable as a
+:c:type:`PyArrayObject *` (any instance of the :c:data:`PyArray_Type`
+and its sub-types).
+
+.. c:function:: int PyArray_NDIM(PyArrayObject *arr)
+
+ The number of dimensions in the array.
+
+.. c:function:: int PyArray_FLAGS(PyArrayObject* arr)
+
+ Returns an integer representing the :ref:`array-flags<array-flags>`.
+
+.. c:function:: int PyArray_TYPE(PyArrayObject* arr)
+
+ Return the (builtin) typenumber for the elements of this array.
+
+.. c:function:: int PyArray_SETITEM( \
+ PyArrayObject* arr, void* itemptr, PyObject* obj)
+
+ Convert *obj* and place it in the ndarray, *arr*, at the location
+ pointed to by *itemptr*. Return -1 if an error occurs or 0 on
+ success.
+
+.. c:function:: void PyArray_ENABLEFLAGS(PyArrayObject* arr, int flags)
+
+ .. versionadded:: 1.7
+
+ Enables the specified array flags. This function does no validation,
+ and assumes that you know what you're doing.
+
+.. c:function:: void PyArray_CLEARFLAGS(PyArrayObject* arr, int flags)
+
+ .. versionadded:: 1.7
+
+ Clears the specified array flags. This function does no validation,
+ and assumes that you know what you're doing.
+
+.. c:function:: void *PyArray_DATA(PyArrayObject *arr)
+
+.. c:function:: char *PyArray_BYTES(PyArrayObject *arr)
+
+ These two macros are similar and obtain the pointer to the
+ data-buffer for the array. The first macro can (and should) be
+ assigned to a pointer of a particular type, while the second is for
+ generic byte-oriented processing. If you have not guaranteed a
+ contiguous and/or aligned
+ array then be sure you understand how to access the data in the
+ array to avoid memory and/or alignment problems.
+
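+ For example, a short sketch that inspects an arbitrary array with these
+ macros (``arr`` is assumed to be a valid :c:type:`PyArrayObject *`
+ and ``<stdio.h>`` to be included):
+
+ .. code-block:: c
+
+     /* Print the dimensionality, type number and shape of an array. */
+     int i;
+     int ndim = PyArray_NDIM(arr);
+     npy_intp *dims = PyArray_DIMS(arr);
+
+     printf("ndim = %d, type number = %d\n", ndim, PyArray_TYPE(arr));
+     for (i = 0; i < ndim; i++) {
+         printf("  dims[%d] = %" NPY_INTP_FMT "\n", i, dims[i]);
+     }
+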
+.. c:function:: npy_intp *PyArray_DIMS(PyArrayObject *arr)
+
+ Returns a pointer to the dimensions/shape of the array. The
+ number of elements matches the number of dimensions
+ of the array. Can return ``NULL`` for 0-dimensional arrays.
+
+.. c:function:: npy_intp *PyArray_SHAPE(PyArrayObject *arr)
+
+ .. versionadded:: 1.7
+
+ A synonym for :c:func:`PyArray_DIMS`, named to be consistent with the
+ `shape <numpy.ndarray.shape>` usage within Python.
+
+.. c:function:: npy_intp *PyArray_STRIDES(PyArrayObject* arr)
+
+ Returns a pointer to the strides of the array. The
+ number of elements matches the number of dimensions
+ of the array.
+
+.. c:function:: npy_intp PyArray_DIM(PyArrayObject* arr, int n)
+
+ Return the shape in the *n* :math:`^{\textrm{th}}` dimension.
+
+.. c:function:: npy_intp PyArray_STRIDE(PyArrayObject* arr, int n)
+
+ Return the stride in the *n* :math:`^{\textrm{th}}` dimension.
+
+.. c:function:: npy_intp PyArray_ITEMSIZE(PyArrayObject* arr)
+
+ Return the itemsize for the elements of this array.
+
+ Note that, in the old API that was deprecated in version 1.7, this function
+ had the return type ``int``.
+
+.. c:function:: npy_intp PyArray_SIZE(PyArrayObject* arr)
+
+ Returns the total size (in number of elements) of the array.
+
+.. c:function:: npy_intp PyArray_Size(PyArrayObject* obj)
+
+ Returns 0 if *obj* is not a sub-class of ndarray. Otherwise,
+ returns the total number of elements in the array. Safer version
+ of :c:func:`PyArray_SIZE` (*obj*).
+
+.. c:function:: npy_intp PyArray_NBYTES(PyArrayObject* arr)
+
+ Returns the total number of bytes consumed by the array.
+
+.. c:function:: PyObject *PyArray_BASE(PyArrayObject* arr)
+
+ This returns the base object of the array. In most cases, this
+ means the object which owns the memory the array is pointing at.
+
+ If you are constructing an array using the C API, and specifying
+ your own memory, you should use the function :c:func:`PyArray_SetBaseObject`
+ to set the base to an object which owns the memory.
+
+ If the (deprecated) :c:data:`NPY_ARRAY_UPDATEIFCOPY` or the
+ :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` flags are set, it has a different
+ meaning, namely base is the array into which the current array will
+ be copied upon copy resolution. This overloading of the base property
+ for two functions is likely to change in a future version of NumPy.
+
+.. c:function:: PyArray_Descr *PyArray_DESCR(PyArrayObject* arr)
+
+ Returns a borrowed reference to the dtype property of the array.
+
+.. c:function:: PyArray_Descr *PyArray_DTYPE(PyArrayObject* arr)
+
+ .. versionadded:: 1.7
+
+ A synonym for PyArray_DESCR, named to be consistent with the
+ 'dtype' usage within Python.
+
+.. c:function:: PyObject *PyArray_GETITEM(PyArrayObject* arr, void* itemptr)
+
+ Get a Python object of a builtin type from the ndarray, *arr*,
+ at the location pointed to by itemptr. Return ``NULL`` on failure.
+
+ `numpy.ndarray.item` is identical to PyArray_GETITEM.
+
+
+Data access
+^^^^^^^^^^^
+
+These functions and macros provide easy access to elements of the
+ndarray from C. These work for all arrays. You may need to take care
+when accessing the data in the array, however, if it is not in machine
+byte-order, misaligned, or not writeable. In other words, be sure to
+respect the state of the flags unless you know what you are doing, or
+have previously guaranteed an array that is writeable, aligned, and in
+machine byte-order using :c:func:`PyArray_FromAny`. If you wish to handle all
+types of arrays, the copyswap function for each type is useful for
+handling misbehaved arrays. Some platforms (e.g. Solaris) do not like
+misaligned data and will crash if you de-reference a misaligned
+pointer. Other platforms (e.g. x86 Linux) will just work more slowly
+with misaligned data.
+
+.. c:function:: void* PyArray_GetPtr(PyArrayObject* aobj, npy_intp* ind)
+
+ Return a pointer to the data of the ndarray, *aobj*, at the
+ N-dimensional index given by the c-array, *ind*, (which must be
+ at least *aobj* ->nd in size). You may want to typecast the
+ returned pointer to the data type of the ndarray.
+
+.. c:function:: void* PyArray_GETPTR1(PyArrayObject* obj, npy_intp i)
+
+.. c:function:: void* PyArray_GETPTR2( \
+ PyArrayObject* obj, npy_intp i, npy_intp j)
+
+.. c:function:: void* PyArray_GETPTR3( \
+ PyArrayObject* obj, npy_intp i, npy_intp j, npy_intp k)
+
+.. c:function:: void* PyArray_GETPTR4( \
+ PyArrayObject* obj, npy_intp i, npy_intp j, npy_intp k, npy_intp l)
+
+ Quick, inline access to the element at the given coordinates in
+ the ndarray, *obj*, which must have respectively 1, 2, 3, or 4
+ dimensions (this is not checked). The corresponding *i*, *j*,
+ *k*, and *l* coordinates can be any integer but will be
+ interpreted as ``npy_intp``. You may want to typecast the
+ returned pointer to the data type of the ndarray.
+
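+ For example, a minimal sketch that sums the elements of an array with
+ :c:func:`PyArray_GETPTR2` (it is assumed, and not checked, that ``arr``
+ is a 2-dimensional array of type :c:data:`NPY_DOUBLE`):
+
+ .. code-block:: c
+
+     /* GETPTR2 follows the strides, so this also works for
+        non-contiguous arrays. */
+     npy_intp i, j;
+     double total = 0.0;
+
+     for (i = 0; i < PyArray_DIM(arr, 0); i++) {
+         for (j = 0; j < PyArray_DIM(arr, 1); j++) {
+             total += *(double *)PyArray_GETPTR2(arr, i, j);
+         }
+     }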
+
+Creating arrays
+---------------
+
+
+From scratch
+^^^^^^^^^^^^
+
+.. c:function:: PyObject* PyArray_NewFromDescr( \
+ PyTypeObject* subtype, PyArray_Descr* descr, int nd, npy_intp const* dims, \
+ npy_intp const* strides, void* data, int flags, PyObject* obj)
+
+ This function steals a reference to *descr*. The easiest way to get one
+ is using :c:func:`PyArray_DescrFromType`.
+
+ This is the main array creation function. Most new arrays are
+ created with this flexible function.
+
+ The returned object is an object of Python-type *subtype*, which
+ must be a subtype of :c:data:`PyArray_Type`. The array has *nd*
+ dimensions, described by *dims*. The data-type descriptor of the
+ new array is *descr*.
+
+ If *subtype* is of an array subclass instead of the base
+ :c:data:`&PyArray_Type<PyArray_Type>`, then *obj* is the object to pass to
+ the :obj:`~numpy.class.__array_finalize__` method of the subclass.
+
+ If *data* is ``NULL``, then new uninitialized memory will be allocated and
+ *flags* can be non-zero to indicate a Fortran-style contiguous array. Use
+ :c:func:`PyArray_FILLWBYTE` to initialize the memory.
+
+ If *data* is not ``NULL``, then it is assumed to point to the memory
+ to be used for the array and the *flags* argument is used as the
+ new flags for the array (except the state of :c:data:`NPY_OWNDATA`,
+ :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` and :c:data:`NPY_ARRAY_UPDATEIFCOPY`
+ flags of the new array will be reset).
+
+ In addition, if *data* is non-NULL, then *strides* can
+ also be provided. If *strides* is ``NULL``, then the array strides
+ are computed as C-style contiguous (default) or Fortran-style
+ contiguous (*flags* is nonzero for *data* = ``NULL`` or *flags* &
+ :c:data:`NPY_ARRAY_F_CONTIGUOUS` is nonzero for non-NULL *data*). Any
+ provided *dims* and *strides* are copied into newly allocated
+ dimension and strides arrays for the new array object.
+
+ :c:func:`PyArray_CheckStrides` can help verify non- ``NULL`` stride
+ information.
+
+ If ``data`` is provided, it must stay alive for the life of the array. One
+ way to manage this is through :c:func:`PyArray_SetBaseObject`.
+
+.. c:function:: PyObject* PyArray_NewLikeArray( \
+ PyArrayObject* prototype, NPY_ORDER order, PyArray_Descr* descr, \
+ int subok)
+
+ .. versionadded:: 1.6
+
+ This function steals a reference to *descr* if it is not NULL.
+
+ This array creation routine allows for the convenient creation of
+ a new array matching an existing array's shapes and memory layout,
+ possibly changing the layout and/or data type.
+
+ When *order* is :c:data:`NPY_ANYORDER`, the result order is
+ :c:data:`NPY_FORTRANORDER` if *prototype* is a fortran array,
+ :c:data:`NPY_CORDER` otherwise. When *order* is
+ :c:data:`NPY_KEEPORDER`, the result order matches that of *prototype*, even
+ when the axes of *prototype* aren't in C or Fortran order.
+
+ If *descr* is NULL, the data type of *prototype* is used.
+
+ If *subok* is 1, the newly created array will use the sub-type of
+ *prototype* to create the new array, otherwise it will create a
+ base-class array.
+
+.. c:function:: PyObject* PyArray_New( \
+ PyTypeObject* subtype, int nd, npy_intp const* dims, int type_num, \
+ npy_intp const* strides, void* data, int itemsize, int flags, \
+ PyObject* obj)
+
+ This is similar to :c:func:`PyArray_NewFromDescr` (...) except you
+ specify the data-type descriptor with *type_num* and *itemsize*,
+ where *type_num* corresponds to a builtin (or user-defined)
+ type. If the type always has the same number of bytes, then
+ itemsize is ignored. Otherwise, itemsize specifies the particular
+ size of this array.
+
+
+
+.. warning::
+
+ If data is passed to :c:func:`PyArray_NewFromDescr` or :c:func:`PyArray_New`,
+ this memory must not be deallocated until the new array is
+ deleted. If this data came from another Python object, this can
+ be accomplished using :c:func:`Py_INCREF` on that object and setting the
+ base member of the new array to point to that object. If strides
+ are passed in they must be consistent with the dimensions, the
+ itemsize, and the data of the array.
+
+.. c:function:: PyObject* PyArray_SimpleNew(int nd, npy_intp const* dims, int typenum)
+
+ Create a new uninitialized array of type, *typenum*, whose size in
+ each of *nd* dimensions is given by the integer array, *dims*. The memory
+ for the array is uninitialized (unless typenum is :c:data:`NPY_OBJECT`
+ in which case each element in the array is set to NULL). The
+ *typenum* argument allows specification of any of the builtin
+ data-types such as :c:data:`NPY_FLOAT` or :c:data:`NPY_LONG`. The
+ memory for the array can be set to zero if desired using
+ :c:func:`PyArray_FILLWBYTE` (return_object, 0). This function cannot be
+ used to create a flexible-type array (no itemsize given).
+
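+ For example, a small sketch that creates a zero-filled 3-by-4 array of
+ doubles (assuming the enclosing function returns a :c:type:`PyObject *<PyObject>`):
+
+ .. code-block:: c
+
+     npy_intp dims[2] = {3, 4};
+     PyObject *op = PyArray_SimpleNew(2, dims, NPY_DOUBLE);
+
+     if (op == NULL) {
+         return NULL;
+     }
+     /* The memory is uninitialized; zero it explicitly. */
+     PyArray_FILLWBYTE(op, 0);
+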
+.. c:function:: PyObject* PyArray_SimpleNewFromData( \
+ int nd, npy_intp const* dims, int typenum, void* data)
+
+ Create an array wrapper around *data* pointed to by the given
+ pointer. The array flags will have a default that the data area is
+ well-behaved and C-style contiguous. The shape of the array is
+ given by the *dims* c-array of length *nd*. The data-type of the
+ array is indicated by *typenum*. If data comes from another
+ reference-counted Python object, the reference count on this object
+ should be increased after the pointer is passed in, and the base member
+ of the returned ndarray should point to the Python object that owns
+ the data. This will ensure that the provided memory is not
+ freed while the returned array is in existence. To free memory as soon
+ as the ndarray is deallocated, set the OWNDATA flag on the returned ndarray.
+
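+ For example, a sketch that wraps a buffer allocated with NumPy's
+ allocator (:c:func:`PyDataMem_NEW`) and hands ownership to the new array
+ by enabling :c:data:`NPY_ARRAY_OWNDATA`; the OWNDATA flag should only be
+ set on memory the array is actually allowed to free:
+
+ .. code-block:: c
+
+     npy_intp dims[1] = {100};
+     double *buf = (double *)PyDataMem_NEW(100 * sizeof(double));
+     PyObject *op;
+
+     if (buf == NULL) {
+         return PyErr_NoMemory();
+     }
+     op = PyArray_SimpleNewFromData(1, dims, NPY_DOUBLE, buf);
+     if (op == NULL) {
+         PyDataMem_FREE(buf);
+         return NULL;
+     }
+     /* The array now owns buf and frees it when it is deallocated. */
+     PyArray_ENABLEFLAGS((PyArrayObject *)op, NPY_ARRAY_OWNDATA);
+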
+.. c:function:: PyObject* PyArray_SimpleNewFromDescr( \
+ int nd, npy_intp const* dims, PyArray_Descr* descr)
+
+ This function steals a reference to *descr*.
+
+ Create a new array with the provided data-type descriptor, *descr*,
+ of the shape determined by *nd* and *dims*.
+
+.. c:function:: PyArray_FILLWBYTE(PyObject* obj, int val)
+
+ Fill the array pointed to by *obj* ---which must be a (subclass
+ of) ndarray---with the contents of *val* (evaluated as a byte).
+ This macro calls memset, so obj must be contiguous.
+
+.. c:function:: PyObject* PyArray_Zeros( \
+ int nd, npy_intp const* dims, PyArray_Descr* dtype, int fortran)
+
+ Construct a new *nd* -dimensional array with shape given by *dims*
+ and data type given by *dtype*. If *fortran* is non-zero, then a
+ Fortran-order array is created, otherwise a C-order array is
+ created. Fill the memory with zeros (or the 0 object if *dtype*
+ corresponds to :c:type:`NPY_OBJECT` ).
+
+.. c:function:: PyObject* PyArray_ZEROS( \
+ int nd, npy_intp const* dims, int type_num, int fortran)
+
+ Macro form of :c:func:`PyArray_Zeros` which takes a type-number instead
+ of a data-type object.
+
+.. c:function:: PyObject* PyArray_Empty( \
+ int nd, npy_intp const* dims, PyArray_Descr* dtype, int fortran)
+
+ Construct a new *nd* -dimensional array with shape given by *dims*
+ and data type given by *dtype*. If *fortran* is non-zero, then a
+ Fortran-order array is created, otherwise a C-order array is
+ created. The array is uninitialized unless the data type
+ corresponds to :c:type:`NPY_OBJECT` in which case the array is
+ filled with :c:data:`Py_None`.
+
+.. c:function:: PyObject* PyArray_EMPTY( \
+ int nd, npy_intp const* dims, int typenum, int fortran)
+
+ Macro form of :c:func:`PyArray_Empty` which takes a type-number,
+ *typenum*, instead of a data-type object.
+
+.. c:function:: PyObject* PyArray_Arange( \
+ double start, double stop, double step, int typenum)
+
+ Construct a new 1-dimensional array of data-type, *typenum*, that
+ ranges from *start* to *stop* (exclusive) in increments of *step*.
+ Equivalent to **arange** (*start*, *stop*, *step*, dtype).
+
+.. c:function:: PyObject* PyArray_ArangeObj( \
+ PyObject* start, PyObject* stop, PyObject* step, PyArray_Descr* descr)
+
+ Construct a new 1-dimensional array of data-type determined by
+ ``descr``, that ranges from ``start`` to ``stop`` (exclusive) in
+ increments of ``step``. Equivalent to arange( ``start``,
+ ``stop``, ``step``, ``typenum`` ).
+
+.. c:function:: int PyArray_SetBaseObject(PyArrayObject* arr, PyObject* obj)
+
+ .. versionadded:: 1.7
+
+ This function **steals a reference** to ``obj`` and sets it as the
+ base property of ``arr``.
+
+ If you construct an array by passing in your own memory buffer as
+ a parameter, you need to set the array's `base` property to ensure
+ the lifetime of the memory buffer is appropriate.
+
+ The return value is 0 on success, -1 on failure.
+
+ If the object provided is an array, this function traverses the
+ chain of `base` pointers so that each array points to the owner
+ of the memory directly. Once the base is set, it may not be changed
+ to another value.
+
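+ For example, a sketch of tying an array to the Python object that owns
+ its memory; ``owner`` is a hypothetical object whose buffer ``arr`` was
+ built around (e.g. via :c:func:`PyArray_SimpleNewFromData`):
+
+ .. code-block:: c
+
+     /* PyArray_SetBaseObject steals this reference to owner. */
+     Py_INCREF(owner);
+     if (PyArray_SetBaseObject((PyArrayObject *)arr, owner) < 0) {
+         Py_DECREF(arr);
+         return NULL;
+     }
+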
+From other objects
+^^^^^^^^^^^^^^^^^^
+
+.. c:function:: PyObject* PyArray_FromAny( \
+ PyObject* op, PyArray_Descr* dtype, int min_depth, int max_depth, \
+ int requirements, PyObject* context)
+
+ This is the main function used to obtain an array from any nested
+ sequence, or object that exposes the array interface, *op*. The
+ parameters allow specification of the required *dtype*, the
+ minimum (*min_depth*) and maximum (*max_depth*) number of
+ dimensions acceptable, and other *requirements* for the array. This
+ function **steals a reference** to the dtype argument, which needs
+ to be a :c:type:`PyArray_Descr` structure
+ indicating the desired data-type (including required
+ byteorder). The *dtype* argument may be ``NULL``, indicating that any
+ data-type (and byteorder) is acceptable. Unless
+ :c:data:`NPY_ARRAY_FORCECAST` is present in ``flags``,
+ this call will generate an error if the data
+ type cannot be safely obtained from the object. If you want to use
+ ``NULL`` for the *dtype* and ensure the array is not swapped then
+ use :c:func:`PyArray_CheckFromAny`. A value of 0 for either of the
+ depth parameters causes the parameter to be ignored. Any of the
+ following array flags can be added (*e.g.* using \|) to get the
+ *requirements* argument. If your code can handle general (*e.g.*
+ strided, byte-swapped, or unaligned arrays) then *requirements*
+ may be 0. Also, if *op* is not already an array (or does not
+ expose the array interface), then a new array will be created (and
+ filled from *op* using the sequence protocol). The new array will
+ have :c:data:`NPY_ARRAY_DEFAULT` as its flags member. The *context* argument
+ is passed to the :obj:`~numpy.class.__array__` method of *op* and is only used if
+ the array is constructed that way. Almost always this
+ parameter is ``NULL``.
+
+ .. c:var:: NPY_ARRAY_C_CONTIGUOUS
+
+ Make sure the returned array is C-style contiguous
+
+ .. c:var:: NPY_ARRAY_F_CONTIGUOUS
+
+ Make sure the returned array is Fortran-style contiguous.
+
+ .. c:var:: NPY_ARRAY_ALIGNED
+
+ Make sure the returned array is aligned on proper boundaries for its
+ data type. An aligned array has the data pointer and every strides
+ factor as a multiple of the alignment factor for the data-type-
+ descriptor.
+
+ .. c:var:: NPY_ARRAY_WRITEABLE
+
+ Make sure the returned array can be written to.
+
+ .. c:var:: NPY_ARRAY_ENSURECOPY
+
+ Make sure a copy is made of *op*. If this flag is not
+ present, data is not copied if it can be avoided.
+
+ .. c:var:: NPY_ARRAY_ENSUREARRAY
+
+ Make sure the result is a base-class ndarray. By
+ default, if *op* is an instance of a subclass of
+ ndarray, an instance of that same subclass is returned. If
+ this flag is set, an ndarray object will be returned instead.
+
+ .. c:var:: NPY_ARRAY_FORCECAST
+
+ Force a cast to the output type even if it cannot be done
+ safely. Without this flag, a data cast will occur only if it
+ can be done safely, otherwise an error is raised.
+
+ .. c:var:: NPY_ARRAY_WRITEBACKIFCOPY
+
+ If *op* is already an array, but does not satisfy the
+ requirements, then a copy is made (which will satisfy the
+ requirements). If this flag is present and a copy (of an object
+ that is already an array) must be made, then the corresponding
+ :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` flag is set in the returned
+ copy and *op* is made to be read-only. You must be sure to call
+ :c:func:`PyArray_ResolveWritebackIfCopy` to copy the contents
+ back into *op*; at that point the *op* array will
+ be made writeable again. If *op* is not writeable to begin
+ with, or if it is not already an array, then an error is raised.
+
+ .. c:var:: NPY_ARRAY_UPDATEIFCOPY
+
+ Deprecated. Use :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`, which is similar.
+ This flag "automatically" copies the data back when the returned
+ array is deallocated, which is not supported in all python
+ implementations.
+
+ .. c:var:: NPY_ARRAY_BEHAVED
+
+ :c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEABLE`
+
+ .. c:var:: NPY_ARRAY_CARRAY
+
+ :c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_BEHAVED`
+
+ .. c:var:: NPY_ARRAY_CARRAY_RO
+
+ :c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
+
+ .. c:var:: NPY_ARRAY_FARRAY
+
+ :c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_BEHAVED`
+
+ .. c:var:: NPY_ARRAY_FARRAY_RO
+
+ :c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
+
+ .. c:var:: NPY_ARRAY_DEFAULT
+
+ :c:data:`NPY_ARRAY_CARRAY`
+
+ .. c:var:: NPY_ARRAY_IN_ARRAY
+
+ :c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
+
+ .. c:var:: NPY_ARRAY_IN_FARRAY
+
+ :c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
+
+ .. c:var:: NPY_OUT_ARRAY
+
+ :c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
+ :c:data:`NPY_ARRAY_ALIGNED`
+
+ .. c:var:: NPY_ARRAY_OUT_ARRAY
+
+ :c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED` \|
+ :c:data:`NPY_ARRAY_WRITEABLE`
+
+ .. c:var:: NPY_ARRAY_OUT_FARRAY
+
+ :c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
+ :c:data:`NPY_ARRAY_ALIGNED`
+
+ .. c:var:: NPY_ARRAY_INOUT_ARRAY
+
+ :c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
+ :c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` \|
+ :c:data:`NPY_ARRAY_UPDATEIFCOPY`
+
+ .. c:var:: NPY_ARRAY_INOUT_FARRAY
+
+ :c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
+ :c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` \|
+ :c:data:`NPY_ARRAY_UPDATEIFCOPY`
+
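+ For example, a sketch that converts an arbitrary object ``obj`` into an
+ aligned, C-contiguous array of doubles using :c:func:`PyArray_FromAny`
+ with the requirement flags above:
+
+ .. code-block:: c
+
+     PyArray_Descr *descr = PyArray_DescrFromType(NPY_DOUBLE);
+     PyObject *arr;
+
+     if (descr == NULL) {
+         return NULL;
+     }
+     /* The reference to descr is stolen by PyArray_FromAny. */
+     arr = PyArray_FromAny(obj, descr, 0, 0, NPY_ARRAY_IN_ARRAY, NULL);
+     if (arr == NULL) {
+         return NULL;
+     }
+     /* ... use PyArray_DATA(arr), PyArray_DIMS(arr), ... */
+     Py_DECREF(arr);
+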
+.. c:function:: int PyArray_GetArrayParamsFromObject( \
+ PyObject* op, PyArray_Descr* requested_dtype, npy_bool writeable, \
+ PyArray_Descr** out_dtype, int* out_ndim, npy_intp* out_dims, \
+ PyArrayObject** out_arr, PyObject* context)
+
+ .. versionadded:: 1.6
+
+ Retrieves the array parameters for viewing/converting an arbitrary
+ PyObject* to a NumPy array. This allows the "innate type and shape"
+ of a Python list-of-lists to be discovered without
+ actually converting to an array. PyArray_FromAny calls this function
+ to analyze its input.
+
+ In some cases, such as structured arrays and the :obj:`~numpy.class.__array__` interface,
+ a data type needs to be used to make sense of the object. When
+ this is needed, provide a Descr for 'requested_dtype', otherwise
+ provide NULL. This reference is not stolen. Also, if the requested
+ dtype doesn't modify the interpretation of the input, out_dtype will
+ still get the "innate" dtype of the object, not the dtype passed
+ in 'requested_dtype'.
+
+ If writing to the value in 'op' is desired, set the boolean
+ 'writeable' to 1. This raises an error when 'op' is a scalar, list
+ of lists, or other non-writeable 'op'. This differs from passing
+ :c:data:`NPY_ARRAY_WRITEABLE` to PyArray_FromAny, where the writeable array may
+ be a copy of the input.
+
+ On success (a return value of 0), either out_arr
+ is filled with a non-NULL PyArrayObject and
+ the rest of the parameters are untouched, or out_arr is
+ filled with NULL, and the rest of the parameters are filled in.
+
+ Typical usage:
+
+ .. code-block:: c
+
+ PyArrayObject *arr = NULL;
+ PyArray_Descr *dtype = NULL;
+ int ndim = 0;
+ npy_intp dims[NPY_MAXDIMS];
+
+ if (PyArray_GetArrayParamsFromObject(op, NULL, 1, &dtype,
+ &ndim, dims, &arr, NULL) < 0) {
+ return NULL;
+ }
+ if (arr == NULL) {
+ /*
+ ... validate/change dtype, validate flags, ndim, etc ...
+ Could make custom strides here too */
+ arr = PyArray_NewFromDescr(&PyArray_Type, dtype, ndim,
+ dims, NULL,
+ fortran ? NPY_ARRAY_F_CONTIGUOUS : 0,
+ NULL);
+ if (arr == NULL) {
+ return NULL;
+ }
+ if (PyArray_CopyObject(arr, op) < 0) {
+ Py_DECREF(arr);
+ return NULL;
+ }
+ }
+ else {
+ /*
+ ... in this case the other parameters weren't filled, just
+ validate and possibly copy arr itself ...
+ */
+ }
+ /*
+ ... use arr ...
+ */
+
+.. c:function:: PyObject* PyArray_CheckFromAny( \
+ PyObject* op, PyArray_Descr* dtype, int min_depth, int max_depth, \
+ int requirements, PyObject* context)
+
+ Nearly identical to :c:func:`PyArray_FromAny` (...) except
+ *requirements* can contain :c:data:`NPY_ARRAY_NOTSWAPPED` (over-riding the
+ specification in *dtype*) and :c:data:`NPY_ARRAY_ELEMENTSTRIDES` which
+ indicates that the array should be aligned in the sense that the
+ strides are multiples of the element size.
+
+ In versions 1.6 and earlier of NumPy, the following flags
+ did not have the _ARRAY_ macro namespace in them. That form
+ of the constant names is deprecated in 1.7.
+
+.. c:var:: NPY_ARRAY_NOTSWAPPED
+
+ Make sure the returned array has a data-type descriptor that is in
+ machine byte-order, over-riding any specification in the *dtype*
+ argument. Normally, the byte-order requirement is determined by
+ the *dtype* argument. If this flag is set and the dtype argument
+ does not indicate a machine byte-order descriptor (or is NULL and
+ the object is already an array with a data-type descriptor that is
+ not in machine byte-order), then a new data-type descriptor is
+ created and used with its byte-order field set to native.
+
+.. c:var:: NPY_ARRAY_BEHAVED_NS
+
+ :c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEABLE` \| :c:data:`NPY_ARRAY_NOTSWAPPED`
+
+.. c:var:: NPY_ARRAY_ELEMENTSTRIDES
+
+ Make sure the returned array has strides that are multiples of the
+ element size.
+
+.. c:function:: PyObject* PyArray_FromArray( \
+ PyArrayObject* op, PyArray_Descr* newtype, int requirements)
+
+ Special case of :c:func:`PyArray_FromAny` for when *op* is already an
+ array but it needs to be of a specific *newtype* (including
+ byte-order) or has certain *requirements*.
+
+.. c:function:: PyObject* PyArray_FromStructInterface(PyObject* op)
+
+ Returns an ndarray object from a Python object that exposes the
+ :obj:`__array_struct__` attribute and follows the array interface
+ protocol. If the object does not contain this attribute then a
+ borrowed reference to :c:data:`Py_NotImplemented` is returned.
+
+.. c:function:: PyObject* PyArray_FromInterface(PyObject* op)
+
+ Returns an ndarray object from a Python object that exposes the
+ :obj:`__array_interface__` attribute following the array interface
+ protocol. If the object does not contain this attribute then a
+ borrowed reference to :c:data:`Py_NotImplemented` is returned.
+
+.. c:function:: PyObject* PyArray_FromArrayAttr( \
+ PyObject* op, PyArray_Descr* dtype, PyObject* context)
+
+ Return an ndarray object from a Python object that exposes the
+ :obj:`~numpy.class.__array__` method. The :obj:`~numpy.class.__array__` method can take 0, 1, or 2
+ arguments ([dtype, context]) where *context* is used to pass
+ information about where the :obj:`~numpy.class.__array__` method is being called
+ from (currently only used in ufuncs).
+
+.. c:function:: PyObject* PyArray_ContiguousFromAny( \
+ PyObject* op, int typenum, int min_depth, int max_depth)
+
+ This function returns a (C-style) contiguous and behaved array
+ from any nested sequence or array-interface exporting
+ object, *op*, of (non-flexible) type given by the enumerated
+ *typenum*, of minimum depth *min_depth*, and of maximum depth
+ *max_depth*. Equivalent to a call to :c:func:`PyArray_FromAny` with
+ requirements set to :c:data:`NPY_ARRAY_DEFAULT` and the type_num member of the
+ type argument set to *typenum*.
+
+.. c:function:: PyObject *PyArray_FromObject( \
+ PyObject *op, int typenum, int min_depth, int max_depth)
+
+ Return an aligned array in native byte order from any nested
+ sequence or array-interface exporting object, *op*, of a type given by
+ the enumerated *typenum*. The minimum number of dimensions the array can
+ have is given by *min_depth* while the maximum is *max_depth*. This is
+ equivalent to a call to :c:func:`PyArray_FromAny` with *requirements* set to
+ :c:data:`NPY_ARRAY_BEHAVED`.
+
+.. c:function:: PyObject* PyArray_EnsureArray(PyObject* op)
+
+ This function **steals a reference** to ``op`` and makes sure that
+ ``op`` is a base-class ndarray. It special cases array scalars,
+ but otherwise calls :c:func:`PyArray_FromAny` ( ``op``, NULL, 0, 0,
+ :c:data:`NPY_ARRAY_ENSUREARRAY`, NULL).
+
+.. c:function:: PyObject* PyArray_FromString( \
+ char* string, npy_intp slen, PyArray_Descr* dtype, npy_intp num, \
+ char* sep)
+
+ Construct a one-dimensional ndarray of a single type from a binary
+ or (ASCII) text ``string`` of length ``slen``. The data-type of
+ the array to-be-created is given by ``dtype``. If num is -1, then
+ **copy** the entire string and return an appropriately sized
+ array, otherwise, ``num`` is the number of items to **copy** from
+ the string. If ``sep`` is NULL (or ""), then interpret the string
+ as bytes of binary data, otherwise convert the sub-strings
+ separated by ``sep`` to items of data-type ``dtype``. Some
+ data-types may not be readable in text mode and an error will be
+ raised if that occurs. All errors return NULL.
+
+.. c:function:: PyObject* PyArray_FromFile( \
+ FILE* fp, PyArray_Descr* dtype, npy_intp num, char* sep)
+
+ Construct a one-dimensional ndarray of a single type from a binary
+ or text file. The open file pointer is ``fp``, the data-type of
+ the array to be created is given by ``dtype``. This must match
+ the data in the file. If ``num`` is -1, then read until the end of
+ the file and return an appropriately sized array, otherwise,
+ ``num`` is the number of items to read. If ``sep`` is NULL (or
+ ""), then read from the file in binary mode, otherwise read from
+ the file in text mode with ``sep`` providing the item
+ separator. Some array types cannot be read in text mode in which
+ case an error is raised.
+
+.. c:function:: PyObject* PyArray_FromBuffer( \
+ PyObject* buf, PyArray_Descr* dtype, npy_intp count, npy_intp offset)
+
+ Construct a one-dimensional ndarray of a single type from an
+ object, ``buf``, that exports the (single-segment) buffer protocol
+ (or has an attribute __buffer\__ that returns an object that
+ exports the buffer protocol). A writeable buffer will be tried
+ first followed by a read-only buffer. The :c:data:`NPY_ARRAY_WRITEABLE`
+ flag of the returned array will reflect which one was
+ successful. The data is assumed to start at ``offset`` bytes from
+ the start of the memory location for the object. The type of the
+ data in the buffer will be interpreted depending on the data-type
+ descriptor, ``dtype``. If ``count`` is negative then it will be
+ determined from the size of the buffer and the requested itemsize,
+ otherwise, ``count`` represents how many elements should be
+ converted from the buffer.
+
+.. c:function:: int PyArray_CopyInto(PyArrayObject* dest, PyArrayObject* src)
+
+ Copy from the source array, ``src``, into the destination array,
+ ``dest``, performing a data-type conversion if necessary. If an
+ error occurs return -1 (otherwise 0). The shape of ``src`` must be
+ broadcastable to the shape of ``dest``. The data areas of dest
+ and src must not overlap.
+
+.. c:function:: int PyArray_MoveInto(PyArrayObject* dest, PyArrayObject* src)
+
+ Move data from the source array, ``src``, into the destination
+ array, ``dest``, performing a data-type conversion if
+ necessary. If an error occurs return -1 (otherwise 0). The shape
+ of ``src`` must be broadcastable to the shape of ``dest``. The
+ data areas of dest and src may overlap.
+
+.. c:function:: PyArrayObject* PyArray_GETCONTIGUOUS(PyObject* op)
+
+ If ``op`` is already (C-style) contiguous and well-behaved then
+ just return a reference, otherwise return a (contiguous and
+ well-behaved) copy of the array. The parameter op must be a
+ (sub-class of an) ndarray and no checking for that is done.
+
+.. c:function:: PyObject* PyArray_FROM_O(PyObject* obj)
+
+ Convert ``obj`` to an ndarray. The argument can be any nested
+ sequence or object that exports the array interface. This is a
+ macro form of :c:func:`PyArray_FromAny` using ``NULL``, 0, 0, 0 for the
+ other arguments. Your code must be able to handle any data-type
+ descriptor and any combination of data-flags to use this macro.
+
+.. c:function:: PyObject* PyArray_FROM_OF(PyObject* obj, int requirements)
+
+ Similar to :c:func:`PyArray_FROM_O` except it can take an argument
+ of *requirements* indicating properties the resulting array must
+ have. Available requirements that can be enforced are
+ :c:data:`NPY_ARRAY_C_CONTIGUOUS`, :c:data:`NPY_ARRAY_F_CONTIGUOUS`,
+ :c:data:`NPY_ARRAY_ALIGNED`, :c:data:`NPY_ARRAY_WRITEABLE`,
+ :c:data:`NPY_ARRAY_NOTSWAPPED`, :c:data:`NPY_ARRAY_ENSURECOPY`,
+ :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`, :c:data:`NPY_ARRAY_UPDATEIFCOPY`,
+ :c:data:`NPY_ARRAY_FORCECAST`, and
+ :c:data:`NPY_ARRAY_ENSUREARRAY`. Standard combinations of flags can also
+ be used:
+
+.. c:function:: PyObject* PyArray_FROM_OT(PyObject* obj, int typenum)
+
+ Similar to :c:func:`PyArray_FROM_O` except it can take an argument of
+ *typenum* specifying the type-number of the returned array.
+
+.. c:function:: PyObject* PyArray_FROM_OTF( \
+ PyObject* obj, int typenum, int requirements)
+
+ Combination of :c:func:`PyArray_FROM_OF` and :c:func:`PyArray_FROM_OT`
+ allowing both a *typenum* and a *flags* argument to be provided.
+
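+ For example, a sketch of the common pattern of converting a function
+ argument ``obj`` into a well-behaved array of a known type:
+
+ .. code-block:: c
+
+     PyObject *arr = PyArray_FROM_OTF(obj, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY);
+
+     if (arr == NULL) {
+         return NULL;
+     }
+     /* ... read the data through PyArray_DATA(arr) ... */
+     Py_DECREF(arr);
+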
+.. c:function:: PyObject* PyArray_FROMANY( \
+ PyObject* obj, int typenum, int min, int max, int requirements)
+
+ Similar to :c:func:`PyArray_FromAny` except the data-type is
+ specified using a typenumber. :c:func:`PyArray_DescrFromType`
+ (*typenum*) is passed directly to :c:func:`PyArray_FromAny`. This
+ macro also adds :c:data:`NPY_ARRAY_DEFAULT` to requirements if
+ :c:data:`NPY_ARRAY_ENSURECOPY` is passed in as requirements.
+
+.. c:function:: PyObject *PyArray_CheckAxis( \
+ PyObject* obj, int* axis, int requirements)
+
+ Encapsulate the functionality of functions and methods that take
+ the axis= keyword and work properly with None as the axis
+ argument. The input array is ``obj``, while ``*axis`` is a
+ converted integer (so that >=MAXDIMS is the None value), and
+ ``requirements`` gives the needed properties of ``obj``. The
+ output is a converted version of the input so that requirements
+ are met and if needed a flattening has occurred. On output
+ negative values of ``*axis`` are converted and the new value is
+ checked to ensure consistency with the shape of ``obj``.
+
+
+Dealing with types
+------------------
+
+
+General check of Python Type
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. c:function:: PyArray_Check(PyObject *op)
+
+ Evaluates true if *op* is a Python object whose type is a sub-type
+ of :c:data:`PyArray_Type`.
+
+.. c:function:: PyArray_CheckExact(PyObject *op)
+
+ Evaluates true if *op* is a Python object with type
+ :c:data:`PyArray_Type`.
+
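+ For example, a sketch of a module function that accepts only ndarray
+ instances (including subclasses):
+
+ .. code-block:: c
+
+     static PyObject *
+     process(PyObject *self, PyObject *obj)
+     {
+         if (!PyArray_Check(obj)) {
+             PyErr_SetString(PyExc_TypeError, "expected a numpy.ndarray");
+             return NULL;
+         }
+         /* obj can now safely be treated as a PyArrayObject *. */
+         Py_RETURN_NONE;
+     }
+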
+.. c:function:: PyArray_HasArrayInterface(PyObject *op, PyObject *out)
+
+ If ``op`` implements any part of the array interface, then ``out``
+ will contain a new reference to the newly created ndarray using
+ the interface or ``out`` will contain ``NULL`` if an error during
+ conversion occurs. Otherwise, out will contain a borrowed
+ reference to :c:data:`Py_NotImplemented` and no error condition is set.
+
+.. c:function:: PyArray_HasArrayInterfaceType(op, type, context, out)
+
+ If ``op`` implements any part of the array interface, then ``out``
+ will contain a new reference to the newly created ndarray using
+ the interface or ``out`` will contain ``NULL`` if an error during
+ conversion occurs. Otherwise, out will contain a borrowed
+ reference to Py_NotImplemented and no error condition is set.
+ This version allows setting of the type and context in the part of
+ the array interface that looks for the :obj:`~numpy.class.__array__` attribute.
+
+.. c:function:: PyArray_IsZeroDim(op)
+
+ Evaluates true if *op* is an instance of (a subclass of)
+ :c:data:`PyArray_Type` and has 0 dimensions.
+
+.. c:function:: PyArray_IsScalar(op, cls)
+
+ Evaluates true if *op* is an instance of :c:data:`Py{cls}ArrType_Type`.
+
+.. c:function:: PyArray_CheckScalar(op)
+
+ Evaluates true if *op* is either an array scalar (an instance of a
+ sub-type of :c:data:`PyGenericArr_Type` ), or an instance of (a
+ sub-class of) :c:data:`PyArray_Type` whose dimensionality is 0.
+
+.. c:function:: PyArray_IsPythonNumber(op)
+
+ Evaluates true if *op* is an instance of a builtin numeric type (int,
+ float, complex, long, bool)
+
+.. c:function:: PyArray_IsPythonScalar(op)
+
+ Evaluates true if *op* is a builtin Python scalar object (int,
+ float, complex, str, unicode, long, bool).
+
+.. c:function:: PyArray_IsAnyScalar(op)
+
+ Evaluates true if *op* is either a Python scalar object (see
+ :c:func:`PyArray_IsPythonScalar`) or an array scalar (an instance of
+ a sub-type of :c:data:`PyGenericArr_Type` ).
+
+.. c:function:: PyArray_CheckAnyScalar(op)
+
+ Evaluates true if *op* is a Python scalar object (see
+ :c:func:`PyArray_IsPythonScalar`), an array scalar (an instance of a
+ sub-type of :c:data:`PyGenericArr_Type`) or an instance of a sub-type of
+ :c:data:`PyArray_Type` whose dimensionality is 0.
+
+
+Data-type checking
+^^^^^^^^^^^^^^^^^^
+
+For the typenum macros, the argument is an integer representing an
+enumerated array data type. For the array type checking macros the
+argument must be a :c:type:`PyObject *<PyObject>` that can be directly interpreted as a
+:c:type:`PyArrayObject *`.
+
+.. c:function:: PyTypeNum_ISUNSIGNED(num)
+
+.. c:function:: PyDataType_ISUNSIGNED(descr)
+
+.. c:function:: PyArray_ISUNSIGNED(obj)
+
+ Type represents an unsigned integer.
+
+.. c:function:: PyTypeNum_ISSIGNED(num)
+
+.. c:function:: PyDataType_ISSIGNED(descr)
+
+.. c:function:: PyArray_ISSIGNED(obj)
+
+ Type represents a signed integer.
+
+.. c:function:: PyTypeNum_ISINTEGER(num)
+
+.. c:function:: PyDataType_ISINTEGER(descr)
+
+.. c:function:: PyArray_ISINTEGER(obj)
+
+ Type represents any integer.
+
+.. c:function:: PyTypeNum_ISFLOAT(num)
+
+.. c:function:: PyDataType_ISFLOAT(descr)
+
+.. c:function:: PyArray_ISFLOAT(obj)
+
+ Type represents any floating point number.
+
+.. c:function:: PyTypeNum_ISCOMPLEX(num)
+
+.. c:function:: PyDataType_ISCOMPLEX(descr)
+
+.. c:function:: PyArray_ISCOMPLEX(obj)
+
+ Type represents any complex floating point number.
+
+.. c:function:: PyTypeNum_ISNUMBER(num)
+
+.. c:function:: PyDataType_ISNUMBER(descr)
+
+.. c:function:: PyArray_ISNUMBER(obj)
+
+ Type represents any integer, floating point, or complex floating point
+ number.
+
+.. c:function:: PyTypeNum_ISSTRING(num)
+
+.. c:function:: PyDataType_ISSTRING(descr)
+
+.. c:function:: PyArray_ISSTRING(obj)
+
+ Type represents a string data type.
+
+.. c:function:: PyTypeNum_ISPYTHON(num)
+
+.. c:function:: PyDataType_ISPYTHON(descr)
+
+.. c:function:: PyArray_ISPYTHON(obj)
+
+ Type represents an enumerated type corresponding to one of the
+ standard Python scalars (bool, int, float, or complex).
+
+.. c:function:: PyTypeNum_ISFLEXIBLE(num)
+
+.. c:function:: PyDataType_ISFLEXIBLE(descr)
+
+.. c:function:: PyArray_ISFLEXIBLE(obj)
+
+ Type represents one of the flexible array types ( :c:data:`NPY_STRING`,
+ :c:data:`NPY_UNICODE`, or :c:data:`NPY_VOID` ).
+
+.. c:function:: PyDataType_ISUNSIZED(descr)
+
+ Type has no size information attached, and can be resized. Should only be
+ called on flexible dtypes. Types that are attached to an array will always
+ be sized, hence there is no array form of this macro.
+
+.. c:function:: PyTypeNum_ISUSERDEF(num)
+
+.. c:function:: PyDataType_ISUSERDEF(descr)
+
+.. c:function:: PyArray_ISUSERDEF(obj)
+
+ Type represents a user-defined type.
+
+.. c:function:: PyTypeNum_ISEXTENDED(num)
+
+.. c:function:: PyDataType_ISEXTENDED(descr)
+
+.. c:function:: PyArray_ISEXTENDED(obj)
+
+ Type is either flexible or user-defined.
+
+.. c:function:: PyTypeNum_ISOBJECT(num)
+
+.. c:function:: PyDataType_ISOBJECT(descr)
+
+.. c:function:: PyArray_ISOBJECT(obj)
+
+ Type represents object data type.
+
+.. c:function:: PyTypeNum_ISBOOL(num)
+
+.. c:function:: PyDataType_ISBOOL(descr)
+
+.. c:function:: PyArray_ISBOOL(obj)
+
+ Type represents Boolean data type.
+
+.. c:function:: PyDataType_HASFIELDS(descr)
+
+.. c:function:: PyArray_HASFIELDS(obj)
+
+ Type has fields associated with it.
+
+.. c:function:: PyArray_ISNOTSWAPPED(m)
+
+ Evaluates true if the data area of the ndarray *m* is in machine
+ byte-order according to the array's data-type descriptor.
+
+.. c:function:: PyArray_ISBYTESWAPPED(m)
+
+ Evaluates true if the data area of the ndarray *m* is **not** in
+ machine byte-order according to the array's data-type descriptor.
+
+.. c:function:: Bool PyArray_EquivTypes( \
+ PyArray_Descr* type1, PyArray_Descr* type2)
+
+ Return :c:data:`NPY_TRUE` if *type1* and *type2* actually represent
+ equivalent types for this platform (the fortran member of each
+ type is ignored). For example, on 32-bit platforms,
+ :c:data:`NPY_LONG` and :c:data:`NPY_INT` are equivalent. Otherwise
+ return :c:data:`NPY_FALSE`.
+
+.. c:function:: Bool PyArray_EquivArrTypes( \
+ PyArrayObject* a1, PyArrayObject* a2)
+
+ Return :c:data:`NPY_TRUE` if *a1* and *a2* are arrays with equivalent
+ types for this platform.
+
+.. c:function:: Bool PyArray_EquivTypenums(int typenum1, int typenum2)
+
+ Special case of :c:func:`PyArray_EquivTypes` (...) that does not accept
+ flexible data types but may be easier to call.
+
+.. c:function:: int PyArray_EquivByteorders({byteorder} b1, {byteorder} b2)
+
+ True if byteorder characters ( :c:data:`NPY_LITTLE`,
+ :c:data:`NPY_BIG`, :c:data:`NPY_NATIVE`, :c:data:`NPY_IGNORE` ) are
+ either equal or equivalent as to their specification of a native
+ byte order. Thus, on a little-endian machine :c:data:`NPY_LITTLE`
+ and :c:data:`NPY_NATIVE` are equivalent where they are not
+ equivalent on a big-endian machine.
+
+
+Converting data types
+^^^^^^^^^^^^^^^^^^^^^
+
+.. c:function:: PyObject* PyArray_Cast(PyArrayObject* arr, int typenum)
+
+ Mainly for backwards compatibility to the Numeric C-API and for
+ simple casts to non-flexible types. Return a new array object with
+ the elements of *arr* cast to the data-type *typenum* which must
+ be one of the enumerated types and not a flexible type.
+
+.. c:function:: PyObject* PyArray_CastToType( \
+ PyArrayObject* arr, PyArray_Descr* type, int fortran)
+
+ Return a new array of the *type* specified, casting the elements
+ of *arr* as appropriate. The fortran argument specifies the
+ ordering of the output array.
+
+.. c:function:: int PyArray_CastTo(PyArrayObject* out, PyArrayObject* in)
+
+ As of 1.6, this function simply calls :c:func:`PyArray_CopyInto`,
+ which handles the casting.
+
+ Cast the elements of the array *in* into the array *out*. The
+ output array should be writeable, have an integer-multiple of the
+ number of elements in the input array (more than one copy can be
+ placed in out), and have a data type that is one of the builtin
+ types. Returns 0 on success and -1 if an error occurs.
+
+.. c:function:: PyArray_VectorUnaryFunc* PyArray_GetCastFunc( \
+ PyArray_Descr* from, int totype)
+
+ Return the low-level casting function to cast from the given
+ descriptor to the builtin type number. If no casting function
+ exists return ``NULL`` and set an error. Using this function
+ instead of direct access to *from* ->f->cast will allow support of
+ any user-defined casting functions added to a descriptors casting
+ dictionary.
+
+.. c:function:: int PyArray_CanCastSafely(int fromtype, int totype)
+
+ Returns non-zero if an array of data type *fromtype* can be cast
+ to an array of data type *totype* without losing information. An
+ exception is that 64-bit integers are allowed to be cast to 64-bit
+ floating point values even though this can lose precision on large
+ integers so as not to proliferate the use of long doubles without
+ explicit requests. Flexible array types are not checked according
+ to their lengths with this function.
+
+.. c:function:: int PyArray_CanCastTo( \
+ PyArray_Descr* fromtype, PyArray_Descr* totype)
+
+ :c:func:`PyArray_CanCastTypeTo` supersedes this function in
+ NumPy 1.6 and later.
+
+ Equivalent to PyArray_CanCastTypeTo(fromtype, totype, NPY_SAFE_CASTING).
+
+.. c:function:: int PyArray_CanCastTypeTo( \
+ PyArray_Descr* fromtype, PyArray_Descr* totype, NPY_CASTING casting)
+
+ .. versionadded:: 1.6
+
+ Returns non-zero if an array of data type *fromtype* (which can
+ include flexible types) can be cast safely to an array of data
+ type *totype* (which can include flexible types) according to
+ the casting rule *casting*. For simple types with :c:data:`NPY_SAFE_CASTING`,
+ this is basically a wrapper around :c:func:`PyArray_CanCastSafely`, but
+ for flexible types such as strings or unicode, it produces results
+ taking into account their sizes. Integer and float types can only be cast
+ to a string or unicode type using :c:data:`NPY_SAFE_CASTING` if the string
+ or unicode type is big enough to hold the max value of the integer/float
+ type being cast from.
+
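+ For example, a sketch that checks whether doubles can be cast safely to
+ 32-bit integers (they cannot, so ``ok`` ends up 0):
+
+ .. code-block:: c
+
+     /* PyArray_DescrFromType returns new references to builtin descriptors. */
+     PyArray_Descr *from = PyArray_DescrFromType(NPY_DOUBLE);
+     PyArray_Descr *to = PyArray_DescrFromType(NPY_INT32);
+     int ok = PyArray_CanCastTypeTo(from, to, NPY_SAFE_CASTING);
+
+     Py_DECREF(from);
+     Py_DECREF(to);
+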
+.. c:function:: int PyArray_CanCastArrayTo( \
+ PyArrayObject* arr, PyArray_Descr* totype, NPY_CASTING casting)
+
+ .. versionadded:: 1.6
+
+ Returns non-zero if *arr* can be cast to *totype* according
+ to the casting rule given in *casting*. If *arr* is an array
+ scalar, its value is taken into account, and non-zero is also
+ returned when the value will not overflow or be truncated to
+ an integer when converting to a smaller type.
+
+ This is almost the same as the result of
+ PyArray_CanCastTypeTo(PyArray_MinScalarType(arr), totype, casting),
+ but it also handles a special case arising because the set
+ of uint values is not a subset of the int values for types with the
+ same number of bits.
+
+.. c:function:: PyArray_Descr* PyArray_MinScalarType(PyArrayObject* arr)
+
+ .. versionadded:: 1.6
+
+ If *arr* is an array, returns its data type descriptor, but if
+ *arr* is an array scalar (has 0 dimensions), it finds the data type
+ of smallest size to which the value may be converted
+ without overflow or truncation to an integer.
+
+ This function will not demote complex to float or anything to
+ boolean, but will demote a signed integer to an unsigned integer
+ when the scalar value is positive.
+
+.. c:function:: PyArray_Descr* PyArray_PromoteTypes( \
+ PyArray_Descr* type1, PyArray_Descr* type2)
+
+ .. versionadded:: 1.6
+
+ Finds the data type of smallest size and kind to which *type1* and
+ *type2* may be safely converted. This function is symmetric and
+ associative. A string or unicode result will be the proper size for
+ storing the max value of the input types converted to a string or unicode.
+
+.. c:function:: PyArray_Descr* PyArray_ResultType( \
+ npy_intp narrs, PyArrayObject**arrs, npy_intp ndtypes, \
+ PyArray_Descr**dtypes)
+
+ .. versionadded:: 1.6
+
+ This applies type promotion to all the inputs,
+ using the NumPy rules for combining scalars and arrays, to
+ determine the output type of a set of operands. This is the
+ same result type that ufuncs produce. The specific algorithm
+ used is as follows.
+
+ Categories are determined by first checking which of boolean,
+ integer (int/uint), or floating point (float/complex) the maximum
+ kind of all the arrays and the scalars are.
+
+ If there are only scalars or the maximum category of the scalars
+ is higher than the maximum category of the arrays,
+ the data types are combined with :c:func:`PyArray_PromoteTypes`
+ to produce the return value.
+
+ Otherwise, PyArray_MinScalarType is called on each array, and
+ the resulting data types are all combined with
+ :c:func:`PyArray_PromoteTypes` to produce the return value.
+
+ The set of int values is not a subset of the uint values for types
+ with the same number of bits, something not reflected in
+ :c:func:`PyArray_MinScalarType`, but handled as a special case in
+ PyArray_ResultType.
+
+.. c:function:: int PyArray_ObjectType(PyObject* op, int mintype)
+
+ This function is superseded by :c:func:`PyArray_MinScalarType` and/or
+ :c:func:`PyArray_ResultType`.
+
+ This function is useful for determining a common type that two or
+ more arrays can be converted to. It only works for non-flexible
+ array types as no itemsize information is passed. The *mintype*
+ argument represents the minimum type acceptable, and *op*
+ represents the object that will be converted to an array. The
+ return value is the enumerated typenumber that represents the
+ data-type that *op* should have.
+
+.. c:function:: void PyArray_ArrayType( \
+ PyObject* op, PyArray_Descr* mintype, PyArray_Descr* outtype)
+
+ This function is superseded by :c:func:`PyArray_ResultType`.
+
+ This function works similarly to :c:func:`PyArray_ObjectType` (...)
+ except it handles flexible arrays. The *mintype* argument can have
+ an itemsize member and the *outtype* argument will have an
+ itemsize member at least as big but perhaps bigger depending on
+ the object *op*.
+
+.. c:function:: PyArrayObject** PyArray_ConvertToCommonType( \
+ PyObject* op, int* n)
+
+ The functionality this provides is largely superseded by the iterator
+ :c:type:`NpyIter`, introduced in 1.6, with the flag
+ :c:data:`NPY_ITER_COMMON_DTYPE` or with the same dtype parameter for
+ all operands.
+
+ Convert a sequence of Python objects contained in *op* to an array
+ of ndarrays each having the same data type. The type is selected
+ based on the typenumber (larger type number is chosen over a
+ smaller one) ignoring objects that are only scalars. The length of
+ the sequence is returned in *n*, and an *n* -length array of
+ :c:type:`PyArrayObject` pointers is the return value (or ``NULL`` if an
+ error occurs). The returned array must be freed by the caller of
+ this routine (using :c:func:`PyDataMem_FREE` ) and all the array objects
+ in it ``DECREF`` 'd or a memory-leak will occur. The example
+ template-code below shows a typical usage:
+
+ .. code-block:: c
+
+ PyArrayObject **mps;
+ int i, n;
+
+ mps = PyArray_ConvertToCommonType(obj, &n);
+ if (mps == NULL) {
+     return NULL;
+ }
+ /* ... code that uses the n converted arrays in mps ... */
+ /* before returning, release the arrays and the pointer array: */
+ for (i = 0; i < n; i++) {
+     Py_DECREF(mps[i]);
+ }
+ PyDataMem_FREE(mps);
+
+.. c:function:: char* PyArray_Zero(PyArrayObject* arr)
+
+ A pointer to newly created memory of size *arr* ->itemsize that
+ holds the representation of 0 for that type. The returned pointer,
+ *ret*, **must be freed** using :c:func:`PyDataMem_FREE` (ret) when it is
+ not needed anymore.
+
+.. c:function:: char* PyArray_One(PyArrayObject* arr)
+
+ A pointer to newly created memory of size *arr* ->itemsize that
+ holds the representation of 1 for that type. The returned pointer,
+ *ret*, **must be freed** using :c:func:`PyDataMem_FREE` (ret) when it
+ is not needed anymore.
+
+.. c:function:: int PyArray_ValidType(int typenum)
+
+ Returns :c:data:`NPY_TRUE` if *typenum* represents a valid type-number
+ (builtin or user-defined or character code). Otherwise, this
+ function returns :c:data:`NPY_FALSE`.
+
+
+New data types
+^^^^^^^^^^^^^^
+
+.. c:function:: void PyArray_InitArrFuncs(PyArray_ArrFuncs* f)
+
+ Initialize all function pointers and members to ``NULL``.
+
+.. c:function:: int PyArray_RegisterDataType(PyArray_Descr* dtype)
+
+ Register a data-type as a new user-defined data type for
+ arrays. The type must have most of its entries filled in. This is
+ not always checked and errors can produce segfaults. In
+ particular, the typeobj member of the ``dtype`` structure must be
+ filled with a Python type that has a fixed-size element-size that
+ corresponds to the elsize member of *dtype*. Also the ``f``
+ member must have the required functions: nonzero, copyswap,
+ copyswapn, getitem, setitem, and cast (some of the cast functions
+ may be ``NULL`` if no support is desired). To avoid confusion, you
+ should choose a unique character typecode but this is not enforced
+ and not relied on internally.
+
+ A user-defined type number is returned that uniquely identifies
+ the type. A pointer to the new structure can then be obtained from
+ :c:func:`PyArray_DescrFromType` using the returned type number. A -1 is
+ returned if an error occurs. If this *dtype* has already been
+ registered (checked only by the address of the pointer), then
+ return the previously-assigned type-number.
+
+.. c:function:: int PyArray_RegisterCastFunc( \
+ PyArray_Descr* descr, int totype, PyArray_VectorUnaryFunc* castfunc)
+
+ Register a low-level casting function, *castfunc*, to convert
+ from the data-type, *descr*, to the given data-type number,
+ *totype*. Any old casting function is over-written. A ``0`` is
+ returned on success or a ``-1`` on failure.
+
+.. c:function:: int PyArray_RegisterCanCast( \
+ PyArray_Descr* descr, int totype, NPY_SCALARKIND scalar)
+
+ Register the data-type number, *totype*, as castable from
+ data-type object, *descr*, of the given *scalar* kind. Use
+ *scalar* = :c:data:`NPY_NOSCALAR` to register that an array of data-type
+ *descr* can be cast safely to a data-type whose type_number is
+ *totype*.
+
+
+Special functions for NPY_OBJECT
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. c:function:: int PyArray_INCREF(PyArrayObject* op)
+
+ Used for an array, *op*, that contains any Python objects. It
+ increments the reference count of every object in the array
+ according to the data-type of *op*. A -1 is returned if an error
+ occurs, otherwise 0 is returned.
+
+.. c:function:: void PyArray_Item_INCREF(char* ptr, PyArray_Descr* dtype)
+
+ A function to INCREF all the objects at the location *ptr*
+ according to the data-type *dtype*. If *ptr* is the start of a
+ structured type with an object at any offset, then this will (recursively)
+ increment the reference count of all object-like items in the
+ structured type.
+
+.. c:function:: int PyArray_XDECREF(PyArrayObject* op)
+
+ Used for an array, *op*, that contains any Python objects. It
+ decrements the reference count of every object in the array
+ according to the data-type of *op*. Normal return value is 0. A
+ -1 is returned if an error occurs.
+
+.. c:function:: void PyArray_Item_XDECREF(char* ptr, PyArray_Descr* dtype)
+
+ A function to XDECREF all the object-like items at the location
+ *ptr* as recorded in the data-type, *dtype*. This works
+ recursively so that if ``dtype`` itself has fields with data-types
+ that contain object-like items, all the object-like fields will be
+ XDECREF'd.
+
+.. c:function:: void PyArray_FillObjectArray(PyArrayObject* arr, PyObject* obj)
+
+ Fill a newly created array with a single value obj at all
+ locations in the structure with object data-types. No checking is
+ performed but *arr* must be of data-type :c:type:`NPY_OBJECT` and be
+ single-segment and uninitialized (no previous objects in
+ position). Use :c:func:`PyArray_DECREF` (*arr*) if you need to
+ decrement all the items in the object array prior to calling this
+ function.
+
+.. c:function:: int PyArray_SetUpdateIfCopyBase(PyArrayObject* arr, PyArrayObject* base)
+
+ Precondition: ``arr`` is a copy of ``base`` (though possibly with different
+ strides, ordering, etc.) Set the UPDATEIFCOPY flag and ``arr->base`` so
+ that when ``arr`` is destructed, it will copy any changes back to ``base``.
+ DEPRECATED, use :c:func:`PyArray_SetWritebackIfCopyBase`.
+
+ Returns 0 for success, -1 for failure.
+
+.. c:function:: int PyArray_SetWritebackIfCopyBase(PyArrayObject* arr, PyArrayObject* base)
+
+ Precondition: ``arr`` is a copy of ``base`` (though possibly with different
+ strides, ordering, etc.) Sets the :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` flag
+   and ``arr->base``, and sets ``base`` to READONLY. Call
+   :c:func:`PyArray_ResolveWritebackIfCopy` before calling
+   ``Py_DECREF`` in order to copy any changes back to ``base`` and
+ reset the READONLY flag.
+
+ Returns 0 for success, -1 for failure.
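+
+A minimal sketch of the intended call pattern (error handling is
+abbreviated; ``arr`` is assumed to be a well-behaved copy of ``base``):
+
+.. code-block:: c
+
+    /* Request that changes made to arr be written back to base later. */
+    if (PyArray_SetWritebackIfCopyBase(arr, base) < 0) {
+        /* handle the error */
+    }
+
+    /* ... modify the data of arr ... */
+
+    /* Copy the changes back into base and restore its WRITEABLE flag. */
+    PyArray_ResolveWritebackIfCopy(arr);
+    Py_DECREF(arr);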
+
+.. _array-flags:
+
+Array flags
+-----------
+
+The ``flags`` attribute of the ``PyArrayObject`` structure contains
+important information about the memory used by the array (pointed to
+by the data member). This flag information must be kept accurate, or
+strange results and even segfaults may occur.
+
+There are 6 (binary) flags that describe the memory area used by the
+data buffer. These constants are defined in ``arrayobject.h`` and
+determine the bit-position of the flag. Python exposes a nice
+attribute-based interface as well as a dictionary-like interface for
+getting (and, if appropriate, setting) these flags.
+
+Memory areas of all kinds can be pointed to by an ndarray, necessitating
+these flags. If you get an arbitrary ``PyArrayObject`` in C-code, you
+need to be aware of the flags that are set. If you need to guarantee
+a certain kind of array (like :c:data:`NPY_ARRAY_C_CONTIGUOUS` and
+:c:data:`NPY_ARRAY_BEHAVED`), then pass these requirements into the
+PyArray_FromAny function.
+
+
+Basic Array Flags
+^^^^^^^^^^^^^^^^^
+
+An ndarray can have a data segment that is not a simple contiguous
+chunk of well-behaved memory you can manipulate. It may not be aligned
+with word boundaries (very important on some platforms). It might have
+its data in a different byte-order than the machine recognizes. It
+might not be writeable. It might be in Fortran-contiguous order. The
+array flags are used to indicate what can be said about data
+associated with an array.
+
+In versions 1.6 and earlier of NumPy, the following flags
+did not have the _ARRAY_ macro namespace in them. That form
+of the constant names is deprecated in 1.7.
+
+.. c:var:: NPY_ARRAY_C_CONTIGUOUS
+
+ The data area is in C-style contiguous order (last index varies the
+ fastest).
+
+.. c:var:: NPY_ARRAY_F_CONTIGUOUS
+
+ The data area is in Fortran-style contiguous order (first index varies
+ the fastest).
+
+.. note::
+
+ Arrays can be both C-style and Fortran-style contiguous simultaneously.
+ This is clear for 1-dimensional arrays, but can also be true for higher
+ dimensional arrays.
+
+ Even for contiguous arrays a stride for a given dimension
+ ``arr.strides[dim]`` may be *arbitrary* if ``arr.shape[dim] == 1``
+ or the array has no elements.
+   It does *not* generally hold that ``self.strides[-1] == self.itemsize``
+   for C-style contiguous arrays or that ``self.strides[0] == self.itemsize``
+   for Fortran-style contiguous arrays. The correct way to access the
+ ``itemsize`` of an array from the C API is ``PyArray_ITEMSIZE(arr)``.
+
+ .. seealso:: :ref:`Internal memory layout of an ndarray <arrays.ndarray>`
+
+.. c:var:: NPY_ARRAY_OWNDATA
+
+ The data area is owned by this array.
+
+.. c:var:: NPY_ARRAY_ALIGNED
+
+ The data area and all array elements are aligned appropriately.
+
+.. c:var:: NPY_ARRAY_WRITEABLE
+
+ The data area can be written to.
+
+   Notice that the above 3 flags are defined so that a new, well-behaved
+   array has these flags defined as true.
+
+.. c:var:: NPY_ARRAY_WRITEBACKIFCOPY
+
+ The data area represents a (well-behaved) copy whose information
+ should be transferred back to the original when
+ :c:func:`PyArray_ResolveWritebackIfCopy` is called.
+
+ This is a special flag that is set if this array represents a copy
+ made because a user required certain flags in
+ :c:func:`PyArray_FromAny` and a copy had to be made of some other
+ array (and the user asked for this flag to be set in such a
+ situation). The base attribute then points to the "misbehaved"
+   array (which is set read_only). :c:func:`PyArray_ResolveWritebackIfCopy`
+ will copy its contents back to the "misbehaved"
+ array (casting if necessary) and will reset the "misbehaved" array
+ to :c:data:`NPY_ARRAY_WRITEABLE`. If the "misbehaved" array was not
+ :c:data:`NPY_ARRAY_WRITEABLE` to begin with then :c:func:`PyArray_FromAny`
+ would have returned an error because :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`
+ would not have been possible.
+
+.. c:var:: NPY_ARRAY_UPDATEIFCOPY
+
+ A deprecated version of :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` which
+ depends upon ``dealloc`` to trigger the writeback. For backwards
+ compatibility, :c:func:`PyArray_ResolveWritebackIfCopy` is called at
+ ``dealloc`` but relying
+ on that behavior is deprecated and not supported in PyPy.
+
+:c:func:`PyArray_UpdateFlags` (obj, flags) will update the ``obj->flags``
+for ``flags`` which can be any of :c:data:`NPY_ARRAY_C_CONTIGUOUS`,
+:c:data:`NPY_ARRAY_F_CONTIGUOUS`, :c:data:`NPY_ARRAY_ALIGNED`, or
+:c:data:`NPY_ARRAY_WRITEABLE`.
+
+
+Combinations of array flags
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. c:var:: NPY_ARRAY_BEHAVED
+
+ :c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEABLE`
+
+.. c:var:: NPY_ARRAY_CARRAY
+
+ :c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_BEHAVED`
+
+.. c:var:: NPY_ARRAY_CARRAY_RO
+
+ :c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
+
+.. c:var:: NPY_ARRAY_FARRAY
+
+ :c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_BEHAVED`
+
+.. c:var:: NPY_ARRAY_FARRAY_RO
+
+ :c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
+
+.. c:var:: NPY_ARRAY_DEFAULT
+
+ :c:data:`NPY_ARRAY_CARRAY`
+
+.. c:var:: NPY_ARRAY_UPDATE_ALL
+
+ :c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
+
+
+Flag-like constants
+^^^^^^^^^^^^^^^^^^^
+
+These constants are used in :c:func:`PyArray_FromAny` (and its macro forms) to
+specify desired properties of the new array.
+
+.. c:var:: NPY_ARRAY_FORCECAST
+
+ Cast to the desired type, even if it can't be done without losing
+ information.
+
+.. c:var:: NPY_ARRAY_ENSURECOPY
+
+ Make sure the resulting array is a copy of the original.
+
+.. c:var:: NPY_ARRAY_ENSUREARRAY
+
+ Make sure the resulting object is an actual ndarray, and not a sub-class.
+
+.. c:var:: NPY_ARRAY_NOTSWAPPED
+
+ Only used in :c:func:`PyArray_CheckFromAny` to over-ride the byteorder
+ of the data-type object passed in.
+
+.. c:var:: NPY_ARRAY_BEHAVED_NS
+
+ :c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEABLE` \| :c:data:`NPY_ARRAY_NOTSWAPPED`
+
+
+Flag checking
+^^^^^^^^^^^^^
+
+For all of these macros *arr* must be an instance of a (subclass of)
+:c:data:`PyArray_Type`.
+
+.. c:function:: PyArray_CHKFLAGS(arr, flags)
+
+ The first parameter, arr, must be an ndarray or subclass. The
+ parameter, *flags*, should be an integer consisting of bitwise
+ combinations of the possible flags an array can have:
+ :c:data:`NPY_ARRAY_C_CONTIGUOUS`, :c:data:`NPY_ARRAY_F_CONTIGUOUS`,
+ :c:data:`NPY_ARRAY_OWNDATA`, :c:data:`NPY_ARRAY_ALIGNED`,
+ :c:data:`NPY_ARRAY_WRITEABLE`, :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`,
+ :c:data:`NPY_ARRAY_UPDATEIFCOPY`.
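+
+For instance, a hedged sketch of guarding direct buffer access with a flag
+check (``arr`` is assumed to hold ``NPY_DOUBLE`` data):
+
+.. code-block:: c
+
+    if (PyArray_CHKFLAGS(arr, NPY_ARRAY_C_CONTIGUOUS | NPY_ARRAY_ALIGNED |
+                              NPY_ARRAY_WRITEABLE)) {
+        /* Safe to treat the buffer as a flat, writable C array. */
+        double *data = (double *)PyArray_DATA(arr);
+        npy_intp i, n = PyArray_SIZE(arr);
+
+        for (i = 0; i < n; i++) {
+            data[i] *= 2.0;
+        }
+    }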
+
+.. c:function:: PyArray_IS_C_CONTIGUOUS(arr)
+
+ Evaluates true if *arr* is C-style contiguous.
+
+.. c:function:: PyArray_IS_F_CONTIGUOUS(arr)
+
+ Evaluates true if *arr* is Fortran-style contiguous.
+
+.. c:function:: PyArray_ISFORTRAN(arr)
+
+ Evaluates true if *arr* is Fortran-style contiguous and *not*
+ C-style contiguous. :c:func:`PyArray_IS_F_CONTIGUOUS`
+ is the correct way to test for Fortran-style contiguity.
+
+.. c:function:: PyArray_ISWRITEABLE(arr)
+
+ Evaluates true if the data area of *arr* can be written to
+
+.. c:function:: PyArray_ISALIGNED(arr)
+
+ Evaluates true if the data area of *arr* is properly aligned on
+ the machine.
+
+.. c:function:: PyArray_ISBEHAVED(arr)
+
+ Evaluates true if the data area of *arr* is aligned and writeable
+ and in machine byte-order according to its descriptor.
+
+.. c:function:: PyArray_ISBEHAVED_RO(arr)
+
+ Evaluates true if the data area of *arr* is aligned and in machine
+ byte-order.
+
+.. c:function:: PyArray_ISCARRAY(arr)
+
+ Evaluates true if the data area of *arr* is C-style contiguous,
+ and :c:func:`PyArray_ISBEHAVED` (*arr*) is true.
+
+.. c:function:: PyArray_ISFARRAY(arr)
+
+ Evaluates true if the data area of *arr* is Fortran-style
+ contiguous and :c:func:`PyArray_ISBEHAVED` (*arr*) is true.
+
+.. c:function:: PyArray_ISCARRAY_RO(arr)
+
+ Evaluates true if the data area of *arr* is C-style contiguous,
+ aligned, and in machine byte-order.
+
+.. c:function:: PyArray_ISFARRAY_RO(arr)
+
+ Evaluates true if the data area of *arr* is Fortran-style
+   contiguous, aligned, and in machine byte-order.
+
+.. c:function:: PyArray_ISONESEGMENT(arr)
+
+ Evaluates true if the data area of *arr* consists of a single
+ (C-style or Fortran-style) contiguous segment.
+
+.. c:function:: void PyArray_UpdateFlags(PyArrayObject* arr, int flagmask)
+
+ The :c:data:`NPY_ARRAY_C_CONTIGUOUS`, :c:data:`NPY_ARRAY_ALIGNED`, and
+ :c:data:`NPY_ARRAY_F_CONTIGUOUS` array flags can be "calculated" from the
+ array object itself. This routine updates one or more of these
+ flags of *arr* as specified in *flagmask* by performing the
+ required calculation.
+
+
+.. warning::
+
+ It is important to keep the flags updated (using
+ :c:func:`PyArray_UpdateFlags` can help) whenever a manipulation with an
+ array is performed that might cause them to change. Later
+ calculations in NumPy that rely on the state of these flags do not
+ repeat the calculation to update them.
+
+
+Array method alternative API
+----------------------------
+
+
+Conversion
+^^^^^^^^^^
+
+.. c:function:: PyObject* PyArray_GetField( \
+ PyArrayObject* self, PyArray_Descr* dtype, int offset)
+
+ Equivalent to :meth:`ndarray.getfield<numpy.ndarray.getfield>`
+ (*self*, *dtype*, *offset*). This function `steals a reference
+ <https://docs.python.org/3/c-api/intro.html?reference-count-details>`_
+ to `PyArray_Descr` and returns a new array of the given `dtype` using
+ the data in the current array at a specified `offset` in bytes. The
+   `offset` plus the itemsize of the new array type must be less than
+   ``self->descr->elsize`` or an error is raised. The same shape and strides
+ as the original array are used. Therefore, this function has the
+ effect of returning a field from a structured array. But, it can also
+ be used to select specific bytes or groups of bytes from any array
+ type.
+
+.. c:function:: int PyArray_SetField( \
+ PyArrayObject* self, PyArray_Descr* dtype, int offset, PyObject* val)
+
+   Equivalent to :meth:`ndarray.setfield<numpy.ndarray.setfield>`
+   (*self*, *val*, *dtype*, *offset*). Set the field starting at *offset*
+   in bytes and of the given *dtype* to *val*. The *offset* plus
+   ``dtype->elsize`` must be less than ``self->descr->elsize`` or an
+   error is raised. Otherwise, the *val* argument is converted to an
+   array and copied into the field pointed to. If necessary, the elements
+   of *val* are repeated to fill the destination array, but the number
+   of elements in the destination must be an integer multiple of the
+   number of elements in *val*.
+
+.. c:function:: PyObject* PyArray_Byteswap(PyArrayObject* self, Bool inplace)
+
+ Equivalent to :meth:`ndarray.byteswap<numpy.ndarray.byteswap>` (*self*, *inplace*). Return an array
+ whose data area is byteswapped. If *inplace* is non-zero, then do
+ the byteswap inplace and return a reference to self. Otherwise,
+ create a byteswapped copy and leave self unchanged.
+
+.. c:function:: PyObject* PyArray_NewCopy(PyArrayObject* old, NPY_ORDER order)
+
+ Equivalent to :meth:`ndarray.copy<numpy.ndarray.copy>` (*self*, *fortran*). Make a copy of the
+ *old* array. The returned array is always aligned and writeable
+ with data interpreted the same as the old array. If *order* is
+ :c:data:`NPY_CORDER`, then a C-style contiguous array is returned. If
+ *order* is :c:data:`NPY_FORTRANORDER`, then a Fortran-style contiguous
+   *order* is :c:data:`NPY_FORTRANORDER`, then a Fortran-style contiguous
+   array is returned. If *order* is :c:data:`NPY_ANYORDER`, then the array
+ returned is Fortran-style contiguous only if the old one is;
+ otherwise, it is C-style contiguous.
+
+.. c:function:: PyObject* PyArray_ToList(PyArrayObject* self)
+
+ Equivalent to :meth:`ndarray.tolist<numpy.ndarray.tolist>` (*self*). Return a nested Python list
+ from *self*.
+
+.. c:function:: PyObject* PyArray_ToString(PyArrayObject* self, NPY_ORDER order)
+
+ Equivalent to :meth:`ndarray.tobytes<numpy.ndarray.tobytes>` (*self*, *order*). Return the bytes
+ of this array in a Python string.
+
+.. c:function:: PyObject* PyArray_ToFile( \
+ PyArrayObject* self, FILE* fp, char* sep, char* format)
+
+ Write the contents of *self* to the file pointer *fp* in C-style
+ contiguous fashion. Write the data as binary bytes if *sep* is the
+   string "" or ``NULL``. Otherwise, write the contents of *self* as
+ text using the *sep* string as the item separator. Each item will
+ be printed to the file. If the *format* string is not ``NULL`` or
+ "", then it is a Python print statement format string showing how
+ the items are to be written.
+
+.. c:function:: int PyArray_Dump(PyObject* self, PyObject* file, int protocol)
+
+ Pickle the object in *self* to the given *file* (either a string
+ or a Python file object). If *file* is a Python string it is
+ considered to be the name of a file which is then opened in binary
+   mode. The given *protocol* is used (if *protocol* is negative, then the
+   highest available protocol is used). This is a simple wrapper around
+   ``cPickle.dump`` (*self*, *file*, *protocol*).
+
+.. c:function:: PyObject* PyArray_Dumps(PyObject* self, int protocol)
+
+ Pickle the object in *self* to a Python string and return it. Use
+ the Pickle *protocol* provided (or the highest available if
+ *protocol* is negative).
+
+.. c:function:: int PyArray_FillWithScalar(PyArrayObject* arr, PyObject* obj)
+
+ Fill the array, *arr*, with the given scalar object, *obj*. The
+ object is first converted to the data type of *arr*, and then
+ copied into every location. A -1 is returned if an error occurs,
+ otherwise 0 is returned.
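+
+A minimal usage sketch (``arr`` is assumed to be a ``NPY_DOUBLE`` array;
+the value 3.14 is only illustrative):
+
+.. code-block:: c
+
+    /* Set every element of arr to 3.14. */
+    PyObject *value = PyFloat_FromDouble(3.14);
+    if (value == NULL || PyArray_FillWithScalar(arr, value) < 0) {
+        Py_XDECREF(value);
+        return NULL;
+    }
+    Py_DECREF(value);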
+
+.. c:function:: PyObject* PyArray_View( \
+ PyArrayObject* self, PyArray_Descr* dtype, PyTypeObject *ptype)
+
+ Equivalent to :meth:`ndarray.view<numpy.ndarray.view>` (*self*, *dtype*). Return a new
+ view of the array *self* as possibly a different data-type, *dtype*,
+ and different array subclass *ptype*.
+
+ If *dtype* is ``NULL``, then the returned array will have the same
+ data type as *self*. The new data-type must be consistent with the
+ size of *self*. Either the itemsizes must be identical, or *self* must
+ be single-segment and the total number of bytes must be the same.
+ In the latter case the dimensions of the returned array will be
+ altered in the last (or first for Fortran-style contiguous arrays)
+ dimension. The data area of the returned array and self is exactly
+ the same.
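+
+A hedged sketch of viewing the bytes of an existing array under a different
+data-type (the choice of ``NPY_UINT8`` is only illustrative):
+
+.. code-block:: c
+
+    /* View the bytes of arr as unsigned 8-bit integers.  Following the
+       general rule for functions that take a PyArray_Descr* and return an
+       array, the call consumes (steals) the descriptor reference. */
+    PyArray_Descr *u8 = PyArray_DescrFromType(NPY_UINT8);
+    PyObject *view;
+
+    if (u8 == NULL) {
+        return NULL;
+    }
+    view = PyArray_View(arr, u8, NULL);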
+
+
+Shape Manipulation
+^^^^^^^^^^^^^^^^^^
+
+.. c:function:: PyObject* PyArray_Newshape( \
+ PyArrayObject* self, PyArray_Dims* newshape, NPY_ORDER order)
+
+ Result will be a new array (pointing to the same memory location
+ as *self* if possible), but having a shape given by *newshape*.
+ If the new shape is not compatible with the strides of *self*,
+ then a copy of the array with the new specified shape will be
+ returned.
+
+.. c:function:: PyObject* PyArray_Reshape(PyArrayObject* self, PyObject* shape)
+
+ Equivalent to :meth:`ndarray.reshape<numpy.ndarray.reshape>` (*self*, *shape*) where *shape* is a
+ sequence. Converts *shape* to a :c:type:`PyArray_Dims` structure and
+ calls :c:func:`PyArray_Newshape` internally.
+   Provided for backward compatibility -- not recommended.
+
+.. c:function:: PyObject* PyArray_Squeeze(PyArrayObject* self)
+
+ Equivalent to :meth:`ndarray.squeeze<numpy.ndarray.squeeze>` (*self*). Return a new view of *self*
+ with all of the dimensions of length 1 removed from the shape.
+
+.. warning::
+
+ matrix objects are always 2-dimensional. Therefore,
+ :c:func:`PyArray_Squeeze` has no effect on arrays of matrix sub-class.
+
+.. c:function:: PyObject* PyArray_SwapAxes(PyArrayObject* self, int a1, int a2)
+
+ Equivalent to :meth:`ndarray.swapaxes<numpy.ndarray.swapaxes>` (*self*, *a1*, *a2*). The returned
+ array is a new view of the data in *self* with the given axes,
+ *a1* and *a2*, swapped.
+
+.. c:function:: PyObject* PyArray_Resize( \
+ PyArrayObject* self, PyArray_Dims* newshape, int refcheck, \
+ NPY_ORDER fortran)
+
+   Equivalent to :meth:`ndarray.resize<numpy.ndarray.resize>` (*self*,
+   *newshape*, refcheck= *refcheck*, order= *fortran*). This function only
+   works on single-segment arrays. It changes the shape of *self* in place
+   and will reallocate the memory for *self* if *newshape* has a
+   different total number of elements than the old shape. If
+   reallocation is necessary, then *self* must own its data, have
+   ``self->base == NULL``, have ``self->weakrefs == NULL``, and
+ (unless refcheck is 0) not be referenced by any other array.
+ The fortran argument can be :c:data:`NPY_ANYORDER`, :c:data:`NPY_CORDER`,
+ or :c:data:`NPY_FORTRANORDER`. It currently has no effect. Eventually
+ it could be used to determine how the resize operation should view
+ the data when constructing a differently-dimensioned array.
+ Returns None on success and NULL on error.
+
+.. c:function:: PyObject* PyArray_Transpose( \
+ PyArrayObject* self, PyArray_Dims* permute)
+
+ Equivalent to :meth:`ndarray.transpose<numpy.ndarray.transpose>` (*self*, *permute*). Permute the
+ axes of the ndarray object *self* according to the data structure
+ *permute* and return the result. If *permute* is ``NULL``, then
+ the resulting array has its axes reversed. For example if *self*
+ has shape :math:`10\times20\times30`, and *permute* ``.ptr`` is
+ (0,2,1) the shape of the result is :math:`10\times30\times20.` If
+ *permute* is ``NULL``, the shape of the result is
+ :math:`30\times20\times10.`
+
+.. c:function:: PyObject* PyArray_Flatten(PyArrayObject* self, NPY_ORDER order)
+
+ Equivalent to :meth:`ndarray.flatten<numpy.ndarray.flatten>` (*self*, *order*). Return a 1-d copy
+ of the array. If *order* is :c:data:`NPY_FORTRANORDER` the elements are
+ scanned out in Fortran order (first-dimension varies the
+ fastest). If *order* is :c:data:`NPY_CORDER`, the elements of ``self``
+ are scanned in C-order (last dimension varies the fastest). If
+   *order* is :c:data:`NPY_ANYORDER`, then the result of
+ :c:func:`PyArray_ISFORTRAN` (*self*) is used to determine which order
+ to flatten.
+
+.. c:function:: PyObject* PyArray_Ravel(PyArrayObject* self, NPY_ORDER order)
+
+ Equivalent to *self*.ravel(*order*). Same basic functionality
+ as :c:func:`PyArray_Flatten` (*self*, *order*) except if *order* is 0
+ and *self* is C-style contiguous, the shape is altered but no copy
+ is performed.
+
+
+Item selection and manipulation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. c:function:: PyObject* PyArray_TakeFrom( \
+ PyArrayObject* self, PyObject* indices, int axis, PyArrayObject* ret, \
+ NPY_CLIPMODE clipmode)
+
+ Equivalent to :meth:`ndarray.take<numpy.ndarray.take>` (*self*, *indices*, *axis*, *ret*,
+ *clipmode*) except *axis* =None in Python is obtained by setting
+ *axis* = :c:data:`NPY_MAXDIMS` in C. Extract the items from self
+ indicated by the integer-valued *indices* along the given *axis.*
+ The clipmode argument can be :c:data:`NPY_RAISE`, :c:data:`NPY_WRAP`, or
+ :c:data:`NPY_CLIP` to indicate what to do with out-of-bound indices. The
+ *ret* argument can specify an output array rather than having one
+ created internally.
+
+.. c:function:: PyObject* PyArray_PutTo( \
+ PyArrayObject* self, PyObject* values, PyObject* indices, \
+ NPY_CLIPMODE clipmode)
+
+ Equivalent to *self*.put(*values*, *indices*, *clipmode*
+ ). Put *values* into *self* at the corresponding (flattened)
+ *indices*. If *values* is too small it will be repeated as
+ necessary.
+
+.. c:function:: PyObject* PyArray_PutMask( \
+ PyArrayObject* self, PyObject* values, PyObject* mask)
+
+ Place the *values* in *self* wherever corresponding positions
+ (using a flattened context) in *mask* are true. The *mask* and
+ *self* arrays must have the same total number of elements. If
+ *values* is too small, it will be repeated as necessary.
+
+.. c:function:: PyObject* PyArray_Repeat( \
+ PyArrayObject* self, PyObject* op, int axis)
+
+ Equivalent to :meth:`ndarray.repeat<numpy.ndarray.repeat>` (*self*, *op*, *axis*). Copy the
+ elements of *self*, *op* times along the given *axis*. Either
+   *op* is a scalar integer or a sequence of length
+   ``self->dimensions[axis]`` indicating how many times to repeat each
+ item along the axis.
+
+.. c:function:: PyObject* PyArray_Choose( \
+ PyArrayObject* self, PyObject* op, PyArrayObject* ret, \
+ NPY_CLIPMODE clipmode)
+
+ Equivalent to :meth:`ndarray.choose<numpy.ndarray.choose>` (*self*, *op*, *ret*, *clipmode*).
+ Create a new array by selecting elements from the sequence of
+ arrays in *op* based on the integer values in *self*. The arrays
+ must all be broadcastable to the same shape and the entries in
+ *self* should be between 0 and len(*op*). The output is placed
+ in *ret* unless it is ``NULL`` in which case a new output is
+ created. The *clipmode* argument determines behavior for when
+ entries in *self* are not between 0 and len(*op*).
+
+ .. c:var:: NPY_RAISE
+
+ raise a ValueError;
+
+ .. c:var:: NPY_WRAP
+
+ wrap values < 0 by adding len(*op*) and values >=len(*op*)
+ by subtracting len(*op*) until they are in range;
+
+ .. c:var:: NPY_CLIP
+
+ all values are clipped to the region [0, len(*op*) ).
+
+
+.. c:function:: PyObject* PyArray_Sort(PyArrayObject* self, int axis, NPY_SORTKIND kind)
+
+ Equivalent to :meth:`ndarray.sort<numpy.ndarray.sort>` (*self*, *axis*, *kind*).
+ Return an array with the items of *self* sorted along *axis*. The array
+   is sorted using the algorithm denoted by *kind*, an enumerated value
+   indicating which sorting algorithm to use.
+
+.. c:function:: PyObject* PyArray_ArgSort(PyArrayObject* self, int axis)
+
+ Equivalent to :meth:`ndarray.argsort<numpy.ndarray.argsort>` (*self*, *axis*).
+ Return an array of indices such that selection of these indices
+ along the given ``axis`` would return a sorted version of *self*. If *self* ->descr
+ is a data-type with fields defined, then self->descr->names is used
+ to determine the sort order. A comparison where the first field is equal
+ will use the second field and so on. To alter the sort order of a
+ structured array, create a new data-type with a different order of names
+ and construct a view of the array with that new data-type.
+
+.. c:function:: PyObject* PyArray_LexSort(PyObject* sort_keys, int axis)
+
+ Given a sequence of arrays (*sort_keys*) of the same shape,
+ return an array of indices (similar to :c:func:`PyArray_ArgSort` (...))
+ that would sort the arrays lexicographically. A lexicographic sort
+ specifies that when two keys are found to be equal, the order is
+ based on comparison of subsequent keys. A merge sort (which leaves
+ equal entries unmoved) is required to be defined for the
+ types. The sort is accomplished by sorting the indices first using
+ the first *sort_key* and then using the second *sort_key* and so
+ forth. This is equivalent to the lexsort(*sort_keys*, *axis*)
+ Python command. Because of the way the merge-sort works, be sure
+ to understand the order the *sort_keys* must be in (reversed from
+ the order you would use when comparing two elements).
+
+ If these arrays are all collected in a structured array, then
+ :c:func:`PyArray_Sort` (...) can also be used to sort the array
+ directly.
+
+.. c:function:: PyObject* PyArray_SearchSorted( \
+ PyArrayObject* self, PyObject* values, NPY_SEARCHSIDE side, \
+ PyObject* perm)
+
+ Equivalent to :meth:`ndarray.searchsorted<numpy.ndarray.searchsorted>` (*self*, *values*, *side*,
+ *perm*). Assuming *self* is a 1-d array in ascending order, then the
+ output is an array of indices the same shape as *values* such that, if
+ the elements in *values* were inserted before the indices, the order of
+ *self* would be preserved. No checking is done on whether or not self is
+ in ascending order.
+
+ The *side* argument indicates whether the index returned should be that of
+ the first suitable location (if :c:data:`NPY_SEARCHLEFT`) or of the last
+ (if :c:data:`NPY_SEARCHRIGHT`).
+
+   The *perm* argument, if not ``NULL``, must be a 1-d array of integer
+   indices, the same length as *self*, that sorts *self* into ascending
+   order. It is typically the result of a call to
+   :c:func:`PyArray_ArgSort` (...). Binary search is used to find the
+   required insertion points.
+
+.. c:function:: int PyArray_Partition( \
+ PyArrayObject *self, PyArrayObject * ktharray, int axis, \
+ NPY_SELECTKIND which)
+
+ Equivalent to :meth:`ndarray.partition<numpy.ndarray.partition>` (*self*, *ktharray*, *axis*,
+   *kind*). Partitions the array so that the values of the element indexed by
+   *ktharray* are in the positions they would occupy if the array were fully
+   sorted, placing all elements smaller than the kth element before it and
+   all elements equal to or greater than the kth element after it. The
+   ordering of all elements within the partitions is undefined.
+ If *self*->descr is a data-type with fields defined, then
+ self->descr->names is used to determine the sort order. A comparison where
+ the first field is equal will use the second field and so on. To alter the
+ sort order of a structured array, create a new data-type with a different
+ order of names and construct a view of the array with that new data-type.
+ Returns zero on success and -1 on failure.
+
+.. c:function:: PyObject* PyArray_ArgPartition( \
+ PyArrayObject *op, PyArrayObject * ktharray, int axis, \
+ NPY_SELECTKIND which)
+
+ Equivalent to :meth:`ndarray.argpartition<numpy.ndarray.argpartition>` (*self*, *ktharray*, *axis*,
+ *kind*). Return an array of indices such that selection of these indices
+ along the given ``axis`` would return a partitioned version of *self*.
+
+.. c:function:: PyObject* PyArray_Diagonal( \
+ PyArrayObject* self, int offset, int axis1, int axis2)
+
+ Equivalent to :meth:`ndarray.diagonal<numpy.ndarray.diagonal>` (*self*, *offset*, *axis1*, *axis2*
+ ). Return the *offset* diagonals of the 2-d arrays defined by
+ *axis1* and *axis2*.
+
+.. c:function:: npy_intp PyArray_CountNonzero(PyArrayObject* self)
+
+ .. versionadded:: 1.6
+
+ Counts the number of non-zero elements in the array object *self*.
+
+.. c:function:: PyObject* PyArray_Nonzero(PyArrayObject* self)
+
+ Equivalent to :meth:`ndarray.nonzero<numpy.ndarray.nonzero>` (*self*). Returns a tuple of index
+ arrays that select elements of *self* that are nonzero. If (nd=
+ :c:func:`PyArray_NDIM` ( ``self`` ))==1, then a single index array is
+ returned. The index arrays have data type :c:data:`NPY_INTP`. If a
+ tuple is returned (nd :math:`\neq` 1), then its length is nd.
+
+.. c:function:: PyObject* PyArray_Compress( \
+ PyArrayObject* self, PyObject* condition, int axis, PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.compress<numpy.ndarray.compress>` (*self*, *condition*, *axis*
+ ). Return the elements along *axis* corresponding to elements of
+ *condition* that are true.
+
+
+Calculation
+^^^^^^^^^^^
+
+.. tip::
+
+ Pass in :c:data:`NPY_MAXDIMS` for axis in order to achieve the same
+ effect that is obtained by passing in *axis* = :const:`None` in Python
+ (treating the array as a 1-d array).
+
+
+.. note::
+
+ The out argument specifies where to place the result. If out is
+ NULL, then the output array is created, otherwise the output is
+ placed in out which must be the correct size and type. A new
+ reference to the output array is always returned even when out
+ is not NULL. The caller of the routine has the responsibility
+ to ``Py_DECREF`` out if not NULL or a memory-leak will occur.
+
+
+.. c:function:: PyObject* PyArray_ArgMax( \
+ PyArrayObject* self, int axis, PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.argmax<numpy.ndarray.argmax>` (*self*, *axis*). Return the index of
+ the largest element of *self* along *axis*.
+
+.. c:function:: PyObject* PyArray_ArgMin( \
+ PyArrayObject* self, int axis, PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.argmin<numpy.ndarray.argmin>` (*self*, *axis*). Return the index of
+ the smallest element of *self* along *axis*.
+
+.. c:function:: PyObject* PyArray_Max( \
+ PyArrayObject* self, int axis, PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.max<numpy.ndarray.max>` (*self*, *axis*). Returns the largest
+ element of *self* along the given *axis*. When the result is a single
+ element, returns a numpy scalar instead of an ndarray.
+
+.. c:function:: PyObject* PyArray_Min( \
+ PyArrayObject* self, int axis, PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.min<numpy.ndarray.min>` (*self*, *axis*). Return the smallest
+ element of *self* along the given *axis*. When the result is a single
+ element, returns a numpy scalar instead of an ndarray.
+
+
+.. c:function:: PyObject* PyArray_Ptp( \
+ PyArrayObject* self, int axis, PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.ptp<numpy.ndarray.ptp>` (*self*, *axis*). Return the difference
+ between the largest element of *self* along *axis* and the
+ smallest element of *self* along *axis*. When the result is a single
+ element, returns a numpy scalar instead of an ndarray.
+
+
+
+
+.. note::
+
+ The rtype argument specifies the data-type the reduction should
+ take place over. This is important if the data-type of the array
+ is not "large" enough to handle the output. By default, all
+ integer data-types are made at least as large as :c:data:`NPY_LONG`
+ for the "add" and "multiply" ufuncs (which form the basis for
+ mean, sum, cumsum, prod, and cumprod functions).
+
+.. c:function:: PyObject* PyArray_Mean( \
+ PyArrayObject* self, int axis, int rtype, PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.mean<numpy.ndarray.mean>` (*self*, *axis*, *rtype*). Returns the
+ mean of the elements along the given *axis*, using the enumerated
+ type *rtype* as the data type to sum in. Default sum behavior is
+ obtained using :c:data:`NPY_NOTYPE` for *rtype*.
+
+.. c:function:: PyObject* PyArray_Trace( \
+ PyArrayObject* self, int offset, int axis1, int axis2, int rtype, \
+ PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.trace<numpy.ndarray.trace>` (*self*, *offset*, *axis1*, *axis2*,
+ *rtype*). Return the sum (using *rtype* as the data type of
+ summation) over the *offset* diagonal elements of the 2-d arrays
+ defined by *axis1* and *axis2* variables. A positive offset
+ chooses diagonals above the main diagonal. A negative offset
+ selects diagonals below the main diagonal.
+
+.. c:function:: PyObject* PyArray_Clip( \
+ PyArrayObject* self, PyObject* min, PyObject* max)
+
+ Equivalent to :meth:`ndarray.clip<numpy.ndarray.clip>` (*self*, *min*, *max*). Clip an array,
+ *self*, so that values larger than *max* are fixed to *max* and
+ values less than *min* are fixed to *min*.
+
+.. c:function:: PyObject* PyArray_Conjugate(PyArrayObject* self)
+
+ Equivalent to :meth:`ndarray.conjugate<numpy.ndarray.conjugate>` (*self*).
+ Return the complex conjugate of *self*. If *self* is not of
+ complex data type, then return *self* with a reference.
+
+.. c:function:: PyObject* PyArray_Round( \
+ PyArrayObject* self, int decimals, PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.round<numpy.ndarray.round>` (*self*, *decimals*, *out*). Returns
+ the array with elements rounded to the nearest decimal place. The
+ decimal place is defined as the :math:`10^{-\textrm{decimals}}`
+   digit so that negative *decimals* cause rounding to the nearest 10's,
+   100's, etc. If *out* is ``NULL``, then the output array is created;
+   otherwise the output is placed in *out*, which must be of the correct
+   size and type.
+
+.. c:function:: PyObject* PyArray_Std( \
+ PyArrayObject* self, int axis, int rtype, PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.std<numpy.ndarray.std>` (*self*, *axis*, *rtype*). Return the
+ standard deviation using data along *axis* converted to data type
+ *rtype*.
+
+.. c:function:: PyObject* PyArray_Sum( \
+ PyArrayObject* self, int axis, int rtype, PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.sum<numpy.ndarray.sum>` (*self*, *axis*, *rtype*). Return 1-d
+ vector sums of elements in *self* along *axis*. Perform the sum
+ after converting data to data type *rtype*.
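+
+For instance, a hedged sketch of summing a ``NPY_DOUBLE`` array ``arr``
+along its first axis while accumulating in double precision:
+
+.. code-block:: c
+
+    /* Column sums of arr, accumulated as NPY_DOUBLE. */
+    PyObject *colsums = PyArray_Sum(arr, 0, NPY_DOUBLE, NULL);
+    if (colsums == NULL) {
+        return NULL;
+    }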
+
+.. c:function:: PyObject* PyArray_CumSum( \
+ PyArrayObject* self, int axis, int rtype, PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.cumsum<numpy.ndarray.cumsum>` (*self*, *axis*, *rtype*). Return
+ cumulative 1-d sums of elements in *self* along *axis*. Perform
+ the sum after converting data to data type *rtype*.
+
+.. c:function:: PyObject* PyArray_Prod( \
+ PyArrayObject* self, int axis, int rtype, PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.prod<numpy.ndarray.prod>` (*self*, *axis*, *rtype*). Return 1-d
+ products of elements in *self* along *axis*. Perform the product
+ after converting data to data type *rtype*.
+
+.. c:function:: PyObject* PyArray_CumProd( \
+ PyArrayObject* self, int axis, int rtype, PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.cumprod<numpy.ndarray.cumprod>` (*self*, *axis*, *rtype*). Return
+ 1-d cumulative products of elements in ``self`` along ``axis``.
+ Perform the product after converting data to data type ``rtype``.
+
+.. c:function:: PyObject* PyArray_All( \
+ PyArrayObject* self, int axis, PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.all<numpy.ndarray.all>` (*self*, *axis*). Return an array with
+ True elements for every 1-d sub-array of ``self`` defined by
+ ``axis`` in which all the elements are True.
+
+.. c:function:: PyObject* PyArray_Any( \
+ PyArrayObject* self, int axis, PyArrayObject* out)
+
+ Equivalent to :meth:`ndarray.any<numpy.ndarray.any>` (*self*, *axis*). Return an array with
+ True elements for every 1-d sub-array of *self* defined by *axis*
+ in which any of the elements are True.
+
+Functions
+---------
+
+
+Array Functions
+^^^^^^^^^^^^^^^
+
+.. c:function:: int PyArray_AsCArray( \
+ PyObject** op, void* ptr, npy_intp* dims, int nd, int typenum, \
+ int itemsize)
+
+ Sometimes it is useful to access a multidimensional array as a
+ C-style multi-dimensional array so that algorithms can be
+ implemented using C's a[i][j][k] syntax. This routine returns a
+ pointer, *ptr*, that simulates this kind of C-style array, for
+ 1-, 2-, and 3-d ndarrays.
+
+ :param op:
+
+ The address to any Python object. This Python object will be replaced
+ with an equivalent well-behaved, C-style contiguous, ndarray of the
+ given data type specified by the last two arguments. Be sure that
+ stealing a reference in this way to the input object is justified.
+
+ :param ptr:
+
+ The address to a (ctype* for 1-d, ctype** for 2-d or ctype*** for 3-d)
+ variable where ctype is the equivalent C-type for the data type. On
+ return, *ptr* will be addressable as a 1-d, 2-d, or 3-d array.
+
+ :param dims:
+
+ An output array that contains the shape of the array object. This
+ array gives boundaries on any looping that will take place.
+
+ :param nd:
+
+ The dimensionality of the array (1, 2, or 3).
+
+ :param typenum:
+
+ The expected data type of the array.
+
+ :param itemsize:
+
+ This argument is only needed when *typenum* represents a
+ flexible array. Otherwise it should be 0.
+
+.. note::
+
+ The simulation of a C-style array is not complete for 2-d and 3-d
+ arrays. For example, the simulated arrays of pointers cannot be passed
+ to subroutines expecting specific, statically-defined 2-d and 3-d
+ arrays. To pass to functions requiring those kind of inputs, you must
+ statically define the required array and copy data.
+
+.. c:function:: int PyArray_Free(PyObject* op, void* ptr)
+
+ Must be called with the same objects and memory locations returned
+ from :c:func:`PyArray_AsCArray` (...). This function cleans up memory
+ that otherwise would get leaked.
+
+.. c:function:: PyObject* PyArray_Concatenate(PyObject* obj, int axis)
+
+ Join the sequence of objects in *obj* together along *axis* into a
+ single array. If the dimensions or types are not compatible an
+ error is raised.
+
+.. c:function:: PyObject* PyArray_InnerProduct(PyObject* obj1, PyObject* obj2)
+
+ Compute a product-sum over the last dimensions of *obj1* and
+ *obj2*. Neither array is conjugated.
+
+.. c:function:: PyObject* PyArray_MatrixProduct(PyObject* obj1, PyObject* obj2)
+
+ Compute a product-sum over the last dimension of *obj1* and the
+ second-to-last dimension of *obj2*. For 2-d arrays this is a
+ matrix-product. Neither array is conjugated.
+
+.. c:function:: PyObject* PyArray_MatrixProduct2( \
+    PyObject* obj1, PyObject* obj2, PyArrayObject* out)
+
+ .. versionadded:: 1.6
+
+ Same as PyArray_MatrixProduct, but store the result in *out*. The
+ output array must have the correct shape, type, and be
+ C-contiguous, or an exception is raised.
+
+.. c:function:: PyObject* PyArray_EinsteinSum( \
+ char* subscripts, npy_intp nop, PyArrayObject** op_in, \
+ PyArray_Descr* dtype, NPY_ORDER order, NPY_CASTING casting, \
+ PyArrayObject* out)
+
+ .. versionadded:: 1.6
+
+ Applies the Einstein summation convention to the array operands
+ provided, returning a new array or placing the result in *out*.
+ The string in *subscripts* is a comma separated list of index
+ letters. The number of operands is in *nop*, and *op_in* is an
+ array containing those operands. The data type of the output can
+ be forced with *dtype*, the output order can be forced with *order*
+ (:c:data:`NPY_KEEPORDER` is recommended), and when *dtype* is specified,
+ *casting* indicates how permissive the data conversion should be.
+
+ See the :func:`~numpy.einsum` function for more details.
+
+.. c:function:: PyObject* PyArray_CopyAndTranspose(PyObject* op)
+
+ A specialized copy and transpose function that works only for 2-d
+ arrays. The returned array is a transposed copy of *op*.
+
+.. c:function:: PyObject* PyArray_Correlate( \
+ PyObject* op1, PyObject* op2, int mode)
+
+   Compute the 1-d correlation of the 1-d arrays *op1* and *op2*.
+   The correlation is computed at each output point by multiplying
+ *op1* by a shifted version of *op2* and summing the result. As a
+ result of the shift, needed values outside of the defined range of
+   *op1* and *op2* are interpreted as zero. The mode determines how
+   many shifts to return: 0 - return only shifts that did not need to
+   assume zero values; 1 - return an object that is the same size as
+   *op1*; 2 - return all possible shifts (any overlap at all is
+   accepted).
+
+ .. rubric:: Notes
+
+ This does not compute the usual correlation: if op2 is larger than op1, the
+ arguments are swapped, and the conjugate is never taken for complex arrays.
+ See PyArray_Correlate2 for the usual signal processing correlation.
+
+.. c:function:: PyObject* PyArray_Correlate2( \
+ PyObject* op1, PyObject* op2, int mode)
+
+ Updated version of PyArray_Correlate, which uses the usual definition of
+ correlation for 1d arrays. The correlation is computed at each output point
+ by multiplying *op1* by a shifted version of *op2* and summing the result.
+ As a result of the shift, needed values outside of the defined range of
+ *op1* and *op2* are interpreted as zero. The mode determines how many
+   shifts to return: 0 - return only shifts that did not need to assume
+   zero values; 1 - return an object that is the same size as *op1*; 2 -
+   return all possible shifts (any overlap at all is accepted).
+
+ .. rubric:: Notes
+
+ Compute z as follows::
+
+ z[k] = sum_n op1[n] * conj(op2[n+k])
+
+.. c:function:: PyObject* PyArray_Where( \
+ PyObject* condition, PyObject* x, PyObject* y)
+
+ If both ``x`` and ``y`` are ``NULL``, then return
+ :c:func:`PyArray_Nonzero` (*condition*). Otherwise, both *x* and *y*
+ must be given and the object returned is shaped like *condition*
+ and has elements of *x* and *y* where *condition* is respectively
+ True or False.
+
+
+Other functions
+^^^^^^^^^^^^^^^
+
+.. c:function:: Bool PyArray_CheckStrides( \
+ int elsize, int nd, npy_intp numbytes, npy_intp const* dims, \
+ npy_intp const* newstrides)
+
+ Determine if *newstrides* is a strides array consistent with the
+ memory of an *nd* -dimensional array with shape ``dims`` and
+ element-size, *elsize*. The *newstrides* array is checked to see
+ if jumping by the provided number of bytes in each direction will
+ ever mean jumping more than *numbytes* which is the assumed size
+ of the available memory segment. If *numbytes* is 0, then an
+ equivalent *numbytes* is computed assuming *nd*, *dims*, and
+ *elsize* refer to a single-segment array. Return :c:data:`NPY_TRUE` if
+ *newstrides* is acceptable, otherwise return :c:data:`NPY_FALSE`.
+
+.. c:function:: npy_intp PyArray_MultiplyList(npy_intp const* seq, int n)
+
+.. c:function:: int PyArray_MultiplyIntList(int const* seq, int n)
+
+ Both of these routines multiply an *n* -length array, *seq*, of
+ integers and return the result. No overflow checking is performed.
+
+.. c:function:: int PyArray_CompareLists(npy_intp const* l1, npy_intp const* l2, int n)
+
+ Given two *n* -length arrays of integers, *l1*, and *l2*, return
+ 1 if the lists are identical; otherwise, return 0.
+
+
+Auxiliary Data With Object Semantics
+------------------------------------
+
+.. versionadded:: 1.7.0
+
+.. c:type:: NpyAuxData
+
+When working with more complex dtypes which are composed of other dtypes,
+such as the struct dtype, creating inner loops that manipulate the dtypes
+requires carrying along additional data. NumPy supports this idea
+through a struct :c:type:`NpyAuxData`, mandating a few conventions so that
+it is possible to do this.
+
+Defining an :c:type:`NpyAuxData` is similar to defining a class in C++,
+but the object semantics have to be tracked manually since the API is in C.
+Here's an example for a function which doubles up an element using
+an element copier function as a primitive.::
+
+ typedef struct {
+ NpyAuxData base;
+ ElementCopier_Func *func;
+ NpyAuxData *funcdata;
+ } eldoubler_aux_data;
+
+ void free_element_doubler_aux_data(NpyAuxData *data)
+ {
+ eldoubler_aux_data *d = (eldoubler_aux_data *)data;
+ /* Free the memory owned by this auxdata */
+ NPY_AUXDATA_FREE(d->funcdata);
+ PyArray_free(d);
+ }
+
+ NpyAuxData *clone_element_doubler_aux_data(NpyAuxData *data)
+ {
+ eldoubler_aux_data *ret = PyArray_malloc(sizeof(eldoubler_aux_data));
+ if (ret == NULL) {
+ return NULL;
+ }
+
+ /* Raw copy of all data */
+ memcpy(ret, data, sizeof(eldoubler_aux_data));
+
+ /* Fix up the owned auxdata so we have our own copy */
+ ret->funcdata = NPY_AUXDATA_CLONE(ret->funcdata);
+ if (ret->funcdata == NULL) {
+ PyArray_free(ret);
+ return NULL;
+ }
+
+ return (NpyAuxData *)ret;
+ }
+
+ NpyAuxData *create_element_doubler_aux_data(
+ ElementCopier_Func *func,
+ NpyAuxData *funcdata)
+ {
+ eldoubler_aux_data *ret = PyArray_malloc(sizeof(eldoubler_aux_data));
+ if (ret == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+        /* Zero the allocation pointed to by ret (not the pointer itself) */
+        memset(ret, 0, sizeof(eldoubler_aux_data));
+        /* base is an embedded struct, so its members are set with '.' */
+        ret->base.free = &free_element_doubler_aux_data;
+        ret->base.clone = &clone_element_doubler_aux_data;
+ ret->func = func;
+ ret->funcdata = funcdata;
+
+ return (NpyAuxData *)ret;
+ }
+
+.. c:type:: NpyAuxData_FreeFunc
+
+ The function pointer type for NpyAuxData free functions.
+
+.. c:type:: NpyAuxData_CloneFunc
+
+ The function pointer type for NpyAuxData clone functions. These
+ functions should never set the Python exception on error, because
+ they may be called from a multi-threaded context.
+
+.. c:function:: NPY_AUXDATA_FREE(auxdata)
+
+ A macro which calls the auxdata's free function appropriately,
+ does nothing if auxdata is NULL.
+
+.. c:function:: NPY_AUXDATA_CLONE(auxdata)
+
+ A macro which calls the auxdata's clone function appropriately,
+ returning a deep copy of the auxiliary data.
+
+Array Iterators
+---------------
+
+As of NumPy 1.6.0, these array iterators are superseded by
+the new array iterator, :c:type:`NpyIter`.
+
+An array iterator is a simple way to access the elements of an
+N-dimensional array quickly and efficiently. Section `2
+<#sec-array-iterator>`__ provides more description and examples of
+this useful approach to looping over an array.
+
+.. c:function:: PyObject* PyArray_IterNew(PyObject* arr)
+
+ Return an array iterator object from the array, *arr*. This is
+ equivalent to *arr*. **flat**. The array iterator object makes
+ it easy to loop over an N-dimensional non-contiguous array in
+ C-style contiguous fashion.
+
+.. c:function:: PyObject* PyArray_IterAllButAxis(PyObject* arr, int* axis)
+
+ Return an array iterator that will iterate over all axes but the
+ one provided in *\*axis*. The returned iterator cannot be used
+ with :c:func:`PyArray_ITER_GOTO1D`. This iterator could be used to
+ write something similar to what ufuncs do wherein the loop over
+ the largest axis is done by a separate sub-routine. If *\*axis* is
+ negative then *\*axis* will be set to the axis having the smallest
+ stride and that axis will be used.
+
+.. c:function:: PyObject *PyArray_BroadcastToShape( \
+ PyObject* arr, npy_intp *dimensions, int nd)
+
+ Return an array iterator that is broadcast to iterate as an array
+ of the shape provided by *dimensions* and *nd*.
+
+.. c:function:: int PyArrayIter_Check(PyObject* op)
+
+ Evaluates true if *op* is an array iterator (or instance of a
+ subclass of the array iterator type).
+
+.. c:function:: void PyArray_ITER_RESET(PyObject* iterator)
+
+ Reset an *iterator* to the beginning of the array.
+
+.. c:function:: void PyArray_ITER_NEXT(PyObject* iterator)
+
+   Increment the index and the dataptr members of the *iterator* to
+ point to the next element of the array. If the array is not
+ (C-style) contiguous, also increment the N-dimensional coordinates
+ array.
+
+.. c:function:: void *PyArray_ITER_DATA(PyObject* iterator)
+
+ A pointer to the current element of the array.
+
+.. c:function:: void PyArray_ITER_GOTO( \
+ PyObject* iterator, npy_intp* destination)
+
+ Set the *iterator* index, dataptr, and coordinates members to the
+ location in the array indicated by the N-dimensional c-array,
+   *destination*, which must have size at least
+   ``iterator->nd_m1 + 1``.
+
+.. c:function:: PyArray_ITER_GOTO1D(PyObject* iterator, npy_intp index)
+
+ Set the *iterator* index and dataptr to the location in the array
+ indicated by the integer *index* which points to an element in the
+ C-styled flattened array.
+
+.. c:function:: int PyArray_ITER_NOTDONE(PyObject* iterator)
+
+ Evaluates TRUE as long as the iterator has not looped through all of
+ the elements, otherwise it evaluates FALSE.
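+
+Putting these macros together, a hedged sketch of a loop over every element
+of an arbitrary ndarray ``arr`` (assumed to hold ``NPY_DOUBLE`` data):
+
+.. code-block:: c
+
+    PyObject *it = PyArray_IterNew((PyObject *)arr);
+    double total = 0.0;
+
+    if (it == NULL) {
+        return NULL;
+    }
+    while (PyArray_ITER_NOTDONE(it)) {
+        total += *(double *)PyArray_ITER_DATA(it);
+        PyArray_ITER_NEXT(it);
+    }
+    Py_DECREF(it);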
+
+
+Broadcasting (multi-iterators)
+------------------------------
+
+.. c:function:: PyObject* PyArray_MultiIterNew(int num, ...)
+
+ A simplified interface to broadcasting. This function takes the
+ number of arrays to broadcast and then *num* extra ( :c:type:`PyObject *<PyObject>`
+ ) arguments. These arguments are converted to arrays and iterators
+ are created. :c:func:`PyArray_Broadcast` is then called on the resulting
+   multi-iterator object. The resulting, broadcasted multi-iterator
+   object is then returned. A broadcasted operation can then be
+   performed using a single loop and using :c:func:`PyArray_MultiIter_NEXT`
+   (...).
+
+.. c:function:: void PyArray_MultiIter_RESET(PyObject* multi)
+
+ Reset all the iterators to the beginning in a multi-iterator
+ object, *multi*.
+
+.. c:function:: void PyArray_MultiIter_NEXT(PyObject* multi)
+
+ Advance each iterator in a multi-iterator object, *multi*, to its
+ next (broadcasted) element.
+
+.. c:function:: void *PyArray_MultiIter_DATA(PyObject* multi, int i)
+
+ Return the data-pointer of the *i* :math:`^{\textrm{th}}` iterator
+ in a multi-iterator object.
+
+.. c:function:: void PyArray_MultiIter_NEXTi(PyObject* multi, int i)
+
+ Advance the pointer of only the *i* :math:`^{\textrm{th}}` iterator.
+
+.. c:function:: void PyArray_MultiIter_GOTO( \
+ PyObject* multi, npy_intp* destination)
+
+ Advance each iterator in a multi-iterator object, *multi*, to the
+ given :math:`N` -dimensional *destination* where :math:`N` is the
+ number of dimensions in the broadcasted array.
+
+.. c:function:: void PyArray_MultiIter_GOTO1D(PyObject* multi, npy_intp index)
+
+ Advance each iterator in a multi-iterator object, *multi*, to the
+ corresponding location of the *index* into the flattened
+ broadcasted array.
+
+.. c:function:: int PyArray_MultiIter_NOTDONE(PyObject* multi)
+
+ Evaluates TRUE as long as the multi-iterator has not looped
+ through all of the elements (of the broadcasted result), otherwise
+ it evaluates FALSE.
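+
+A hedged sketch combining these calls to walk two broadcast operands ``a``
+and ``b`` (both assumed to contain ``NPY_DOUBLE`` data):
+
+.. code-block:: c
+
+    PyObject *multi = PyArray_MultiIterNew(2, a, b);
+    if (multi == NULL) {
+        return NULL;
+    }
+    while (PyArray_MultiIter_NOTDONE(multi)) {
+        double x = *(double *)PyArray_MultiIter_DATA(multi, 0);
+        double y = *(double *)PyArray_MultiIter_DATA(multi, 1);
+        /* ... combine x and y element-wise ... */
+        PyArray_MultiIter_NEXT(multi);
+    }
+    Py_DECREF(multi);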
+
+.. c:function:: int PyArray_Broadcast(PyArrayMultiIterObject* mit)
+
+ This function encapsulates the broadcasting rules. The *mit*
+ container should already contain iterators for all the arrays that
+ need to be broadcast. On return, these iterators will be adjusted
+ so that iteration over each simultaneously will accomplish the
+ broadcasting. A negative number is returned if an error occurs.
+
+.. c:function:: int PyArray_RemoveSmallest(PyArrayMultiIterObject* mit)
+
+ This function takes a multi-iterator object that has been
+ previously "broadcasted," finds the dimension with the smallest
+ "sum of strides" in the broadcasted result and adapts all the
+ iterators so as not to iterate over that dimension (by effectively
+ making them of length-1 in that dimension). The corresponding
+ dimension is returned unless *mit* ->nd is 0, then -1 is
+ returned. This function is useful for constructing ufunc-like
+ routines that broadcast their inputs correctly and then call a
+ strided 1-d version of the routine as the inner-loop. This 1-d
+ version is usually optimized for speed and for this reason the
+ loop should be performed over the axis that won't require large
+ stride jumps.
+
+Neighborhood iterator
+---------------------
+
+.. versionadded:: 1.4.0
+
+Neighborhood iterators are subclasses of the iterator object, and can be used
+to iterate over a neighborhood of a point. For example, you may want to
+iterate over every voxel of a 3d image, and for every such voxel, iterate over
+a hypercube. Neighborhood iterators automatically handle boundaries, thus
+making this kind of code much easier to write than handling boundaries
+manually, at the cost of a slight overhead.
+
+.. c:function:: PyObject* PyArray_NeighborhoodIterNew( \
+    PyArrayIterObject* iter, npy_intp* bounds, int mode, \
+ PyArrayObject* fill_value)
+
+ This function creates a new neighborhood iterator from an existing
+ iterator. The neighborhood will be computed relatively to the position
+ currently pointed by *iter*, the bounds define the shape of the
+ neighborhood iterator, and the mode argument the boundaries handling mode.
+
+   The *bounds* argument is expected to be an array of size 2 * iter->ao->nd,
+   such that the range bounds[2*i] to bounds[2*i+1] defines the range where
+   to walk for dimension i (both bounds are included in the walked
+   coordinates). The bounds should be ordered for each dimension
+   (bounds[2*i] <= bounds[2*i+1]).
+
+ The mode should be one of:
+
+ .. c:macro:: NPY_NEIGHBORHOOD_ITER_ZERO_PADDING
+ Zero padding. Outside bounds values will be 0.
+ .. c:macro:: NPY_NEIGHBORHOOD_ITER_ONE_PADDING
+        One padding. Outside bounds values will be 1.
+ .. c:macro:: NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING
+ Constant padding. Outside bounds values will be the
+ same as the first item in fill_value.
+ .. c:macro:: NPY_NEIGHBORHOOD_ITER_MIRROR_PADDING
+ Mirror padding. Outside bounds values will be as if the
+ array items were mirrored. For example, for the array [1, 2, 3, 4],
+        x[-2] will be 2, x[-1] will be 1, x[4] will be 4, x[5] will be 3,
+ etc...
+ .. c:macro:: NPY_NEIGHBORHOOD_ITER_CIRCULAR_PADDING
+ Circular padding. Outside bounds values will be as if the array
+ was repeated. For example, for the array [1, 2, 3, 4], x[-2] will
+        be 3, x[-1] will be 4, x[4] will be 1, x[5] will be 2, etc...
+
+ If the mode is constant filling (`NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING`),
+ fill_value should point to an array object which holds the filling value
+ (the first item will be the filling value if the array contains more than
+ one item). For other cases, fill_value may be NULL.
+
+ - The iterator holds a reference to iter
+ - Return NULL on failure (in which case the reference count of iter is not
+ changed)
+   - iter itself can be a Neighborhood iterator: this can be useful, e.g., for
+     automatic boundary handling
+ - the object returned by this function should be safe to use as a normal
+ iterator
+ - If the position of iter is changed, any subsequent call to
+ PyArrayNeighborhoodIter_Next is undefined behavior, and
+ PyArrayNeighborhoodIter_Reset must be called.
+
+ .. code-block:: c
+
+       PyArrayIterObject *iter;
+       PyArrayNeighborhoodIterObject *neigh_iter;
+       npy_intp i, j;
+       /* For a 3x3 kernel */
+       npy_intp bounds[] = {-1, 1, -1, 1};
+
+       iter = (PyArrayIterObject*)PyArray_IterNew(x);
+       neigh_iter = (PyArrayNeighborhoodIterObject*)PyArray_NeighborhoodIterNew(
+               iter, bounds, NPY_NEIGHBORHOOD_ITER_ZERO_PADDING, NULL);
+
+       for (i = 0; i < iter->size; ++i) {
+           for (j = 0; j < neigh_iter->size; ++j) {
+               /* Walk around the item currently pointed to by iter->dataptr */
+               PyArrayNeighborhoodIter_Next(neigh_iter);
+           }
+
+           /* Move to the next point of iter */
+           PyArray_ITER_NEXT(iter);
+           PyArrayNeighborhoodIter_Reset(neigh_iter);
+       }
+
+.. c:function:: int PyArrayNeighborhoodIter_Reset( \
+ PyArrayNeighborhoodIterObject* iter)
+
+ Reset the iterator position to the first point of the neighborhood. This
+ should be called whenever the iter argument given at
+   :c:func:`PyArray_NeighborhoodIterNew` is changed (see the example above).
+
+.. c:function:: int PyArrayNeighborhoodIter_Next( \
+ PyArrayNeighborhoodIterObject* iter)
+
+ After this call, iter->dataptr points to the next point of the
+ neighborhood. Calling this function after every point of the
+ neighborhood has been visited is undefined.
+
+Array Scalars
+-------------
+
+.. c:function:: PyObject* PyArray_Return(PyArrayObject* arr)
+
+ This function steals a reference to *arr*.
+
+ This function checks to see if *arr* is a 0-dimensional array and,
+ if so, returns the appropriate array scalar. It should be used
+ whenever 0-dimensional arrays could be returned to Python.
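+
+A typical hedged usage, combined with a reduction that produces a
+0-dimensional result (``arr`` is assumed to be a valid array):
+
+.. code-block:: c
+
+    /* Sum over all elements (axis=None in Python) and hand back a numpy
+       scalar rather than a 0-d array. */
+    PyObject *total = PyArray_Sum(arr, NPY_MAXDIMS, NPY_NOTYPE, NULL);
+    if (total == NULL) {
+        return NULL;
+    }
+    return PyArray_Return((PyArrayObject *)total);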
+
+.. c:function:: PyObject* PyArray_Scalar( \
+ void* data, PyArray_Descr* dtype, PyObject* itemsize)
+
+   Return an array scalar object of the data type given by *dtype*
+   and *itemsize* by **copying** from memory pointed to by *data*.
+   The data will be byteswapped if appropriate to the data-type,
+   because array scalars are always in correct machine byte-order.
+
+.. c:function:: PyObject* PyArray_ToScalar(void* data, PyArrayObject* arr)
+
+ Return an array scalar object of the type and itemsize indicated
+ by the array object *arr* copied from the memory pointed to by
+ *data* and swapping if the data in *arr* is not in machine
+ byte-order.
+
+.. c:function:: PyObject* PyArray_FromScalar( \
+ PyObject* scalar, PyArray_Descr* outcode)
+
+ Return a 0-dimensional array of type determined by *outcode* from
+ *scalar* which should be an array-scalar object. If *outcode* is
+ NULL, then the type is determined from *scalar*.
+
+.. c:function:: void PyArray_ScalarAsCtype(PyObject* scalar, void* ctypeptr)
+
+ Return in *ctypeptr* a pointer to the actual value in an array
+ scalar. There is no error checking so *scalar* must be an
+ array-scalar object, and ctypeptr must have enough space to hold
+ the correct type. For flexible-sized types, a pointer to the data
+ is copied into the memory of *ctypeptr*, for all other types, the
+ actual data is copied into the address pointed to by *ctypeptr*.
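+
+A small hedged example (``scalar`` is assumed to be a ``float64`` array
+scalar obtained elsewhere):
+
+.. code-block:: c
+
+    /* Copy the value held by the array scalar into a C double. */
+    double value;
+    PyArray_ScalarAsCtype(scalar, &value);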
+
+.. c:function:: void PyArray_CastScalarToCtype( \
+ PyObject* scalar, void* ctypeptr, PyArray_Descr* outcode)
+
+ Return the data (cast to the data type indicated by *outcode*)
+ from the array-scalar, *scalar*, into the memory pointed to by
+ *ctypeptr* (which must be large enough to handle the incoming
+ memory).
+
+.. c:function:: PyObject* PyArray_TypeObjectFromType(int type)
+
+   Returns a scalar type-object from a type-number, *type*.
+   Equivalent to :c:func:`PyArray_DescrFromType` (*type*)->typeobj
+ except for reference counting and error-checking. Returns a new
+ reference to the typeobject on success or ``NULL`` on failure.
+
+.. c:function:: NPY_SCALARKIND PyArray_ScalarKind( \
+ int typenum, PyArrayObject** arr)
+
+ See the function :c:func:`PyArray_MinScalarType` for an alternative
+ mechanism introduced in NumPy 1.6.0.
+
+ Return the kind of scalar represented by *typenum* and the array
+ in *\*arr* (if *arr* is not ``NULL`` ). The array is assumed to be
+ rank-0 and only used if *typenum* represents a signed integer. If
+ *arr* is not ``NULL`` and the first element is negative then
+ :c:data:`NPY_INTNEG_SCALAR` is returned, otherwise
+ :c:data:`NPY_INTPOS_SCALAR` is returned. The possible return values
+ are :c:data:`NPY_{kind}_SCALAR` where ``{kind}`` can be **INTPOS**,
+ **INTNEG**, **FLOAT**, **COMPLEX**, **BOOL**, or **OBJECT**.
+ :c:data:`NPY_NOSCALAR` is also an enumerated value
+ :c:type:`NPY_SCALARKIND` variables can take on.
+
+.. c:function:: int PyArray_CanCoerceScalar( \
+ char thistype, char neededtype, NPY_SCALARKIND scalar)
+
+ See the function :c:func:`PyArray_ResultType` for details of
+ NumPy type promotion, updated in NumPy 1.6.0.
+
+ Implements the rules for scalar coercion. Scalars are only
+ silently coerced from thistype to neededtype if this function
+ returns nonzero. If scalar is :c:data:`NPY_NOSCALAR`, then this
+ function is equivalent to :c:func:`PyArray_CanCastSafely`. The rule is
+ that scalars of the same KIND can be coerced into arrays of the
+ same KIND. This rule means that high-precision scalars will never
+ cause low-precision arrays of the same KIND to be upcast.
+
+
+Data-type descriptors
+---------------------
+
+
+
+.. warning::
+
+ Data-type objects must be reference counted so be aware of the
+ action on the data-type reference of different C-API calls. The
+ standard rule is that when a data-type object is returned it is a
+ new reference. Functions that take :c:type:`PyArray_Descr *` objects and
+ return arrays steal references to their data-type inputs unless
+ otherwise noted. Therefore, you must own a reference to any
+ data-type object used as input to such a function.
+
+.. c:function:: int PyArray_DescrCheck(PyObject* obj)
+
+ Evaluates as true if *obj* is a data-type object ( :c:type:`PyArray_Descr *` ).
+
+.. c:function:: PyArray_Descr* PyArray_DescrNew(PyArray_Descr* obj)
+
+ Return a new data-type object copied from *obj* (the fields
+ reference is just updated so that the new object points to the
+ same fields dictionary if any).
+
+.. c:function:: PyArray_Descr* PyArray_DescrNewFromType(int typenum)
+
+ Create a new data-type object from the built-in (or
+ user-registered) data-type indicated by *typenum*. The fields of the
+ builtin types should never be changed directly; this function creates a
+ new copy of the :c:type:`PyArray_Descr` structure so that you can fill
+ it in as appropriate. It is especially needed for flexible data-types,
+ which need a new ``elsize`` member in order to be meaningful in array
+ construction.
+
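+For example, a fixed-width string data-type might be built like this (a
+sketch, intended for use inside an extension function; error handling
+abbreviated):
+
+.. code-block:: c
+
+    /* Flexible built-in types start with elsize == 0; set the item size. */
+    PyArray_Descr *descr = PyArray_DescrNewFromType(NPY_STRING);
+    if (descr == NULL) {
+        return NULL;
+    }
+    descr->elsize = 16;   /* 16-byte fixed-width strings */
+    /* descr (a new reference) can now be given to an array constructor
+       such as PyArray_Empty, which steals the reference. */
+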
+.. c:function:: PyArray_Descr* PyArray_DescrNewByteorder( \
+ PyArray_Descr* obj, char newendian)
+
+ Create a new data-type object with the byteorder set according to
+ *newendian*. All referenced data-type objects (in subdescr and
+ fields members of the data-type object) are also changed
+ (recursively). If a byteorder of :c:data:`NPY_IGNORE` is encountered it
+ is left alone. If newendian is :c:data:`NPY_SWAP`, then all byte-orders
+ are swapped. Other valid newendian values are :c:data:`NPY_NATIVE`,
+ :c:data:`NPY_LITTLE`, and :c:data:`NPY_BIG` which all cause the returned
+ data-type descriptor (and all of its
+ referenced data-type descriptors) to have the corresponding byte
+ order.
+
+.. c:function:: PyArray_Descr* PyArray_DescrFromObject( \
+ PyObject* op, PyArray_Descr* mintype)
+
+ Determine an appropriate data-type object from the object *op*
+ (which should be a "nested" sequence object) and the minimum
+ data-type descriptor mintype (which can be ``NULL`` ). Similar in
+ behavior to array(*op*).dtype. Don't confuse this function with
+ :c:func:`PyArray_DescrConverter`. This function essentially looks at
+ all the objects in the (nested) sequence and determines the
+ data-type from the elements it finds.
+
+.. c:function:: PyArray_Descr* PyArray_DescrFromScalar(PyObject* scalar)
+
+ Return a data-type object from an array-scalar object. No checking
+ is done to be sure that *scalar* is an array scalar. If no
+ suitable data-type can be determined, then a data-type of
+ :c:data:`NPY_OBJECT` is returned by default.
+
+.. c:function:: PyArray_Descr* PyArray_DescrFromType(int typenum)
+
+ Returns a data-type object corresponding to *typenum*. The
+ *typenum* can be one of the enumerated types, a character code for
+ one of the enumerated types, or a user-defined type. If you want to use a
+ flexible-size data-type, then you need to use a flexible *typenum* and set
+ the result's ``elsize`` member to the desired size. The typenum is one of
+ the :c:data:`NPY_TYPES`.
+
+.. c:function:: int PyArray_DescrConverter(PyObject* obj, PyArray_Descr** dtype)
+
+ Convert any compatible Python object, *obj*, to a data-type object
+ in *dtype*. A large number of Python objects can be converted to
+ data-type objects. See :ref:`arrays.dtypes` for a complete
+ description. This version of the converter converts None objects
+ to a :c:data:`NPY_DEFAULT_TYPE` data-type object. This function can
+ be used with the "O&" character code in :c:func:`PyArg_ParseTuple`
+ processing.
+
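+A typical use is as an ``O&`` converter in :c:func:`PyArg_ParseTuple`; note
+that the converter stores a new reference, which the caller must release. A
+sketch (the function name is hypothetical):
+
+.. code-block:: c
+
+    static PyObject *
+    describe(PyObject *self, PyObject *args)
+    {
+        PyArray_Descr *dtype = NULL;
+
+        /* "O&" calls PyArray_DescrConverter(obj, &dtype); a None argument
+           becomes the default (double precision) data-type. */
+        if (!PyArg_ParseTuple(args, "O&", PyArray_DescrConverter, &dtype)) {
+            return NULL;
+        }
+        /* ... use dtype ... */
+        Py_DECREF(dtype);
+        Py_RETURN_NONE;
+    }
+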
+.. c:function:: int PyArray_DescrConverter2( \
+ PyObject* obj, PyArray_Descr** dtype)
+
+ Convert any compatible Python object, *obj*, to a data-type
+ object in *dtype*. This version of the converter converts None
+ objects so that the returned data-type is ``NULL``. This function
+ can also be used with the "O&" character in PyArg_ParseTuple
+ processing.
+
+.. c:function:: int PyArray_DescrAlignConverter( \
+ PyObject* obj, PyArray_Descr** dtype)
+
+ Like :c:func:`PyArray_DescrConverter` except it aligns C-struct-like
+ objects on word-boundaries as the compiler would.
+
+.. c:function:: int PyArray_DescrAlignConverter2( \
+ PyObject* obj, PyArray_Descr** dtype)
+
+ Like :c:func:`PyArray_DescrConverter2` except it aligns C-struct-like
+ objects on word-boundaries as the compiler would.
+
+.. c:function:: PyObject *PyArray_FieldNames(PyObject* dict)
+
+ Take the fields dictionary, *dict*, such as the one attached to a
+ data-type object, and construct an ordered list of field names, as
+ stored in the ``names`` field of the :c:type:`PyArray_Descr` object.
+
+
+Conversion Utilities
+--------------------
+
+
+For use with :c:func:`PyArg_ParseTuple`
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+All of these functions can be used in :c:func:`PyArg_ParseTuple` (...) with
+the "O&" format specifier to automatically convert any Python object
+to the required C-object. All of these functions return
+:c:data:`NPY_SUCCEED` if successful and :c:data:`NPY_FAIL` if not. The first
+argument to all of these functions is a Python object. The second
+argument is the **address** of the C-type to convert the Python object
+to.
+
+
+.. warning::
+
+ Be sure to understand what steps you should take to manage the
+ memory when using these conversion functions. These functions can
+ require freeing memory, and/or altering the reference counts of
+ specific objects based on your use.
+
+.. c:function:: int PyArray_Converter(PyObject* obj, PyObject** address)
+
+ Convert any Python object to a :c:type:`PyArrayObject`. If
+ :c:func:`PyArray_Check` (*obj*) is TRUE then its reference count is
+ incremented and a reference placed in *address*. If *obj* is not
+ an array, then convert it to an array using :c:func:`PyArray_FromAny`
+ . No matter what is returned, you must DECREF the object returned
+ by this routine in *address* when you are done with it.
+
+.. c:function:: int PyArray_OutputConverter( \
+ PyObject* obj, PyArrayObject** address)
+
+ This is a default converter for output arrays given to
+ functions. If *obj* is :c:data:`Py_None` or ``NULL``, then *\*address*
+ will be ``NULL`` but the call will succeed. If :c:func:`PyArray_Check` (
+ *obj*) is TRUE then it is returned in *\*address* without
+ incrementing its reference count.
+
+.. c:function:: int PyArray_IntpConverter(PyObject* obj, PyArray_Dims* seq)
+
+ Convert any Python sequence, *obj*, smaller than :c:data:`NPY_MAXDIMS`
+ to a C-array of :c:type:`npy_intp`. The Python object could also be a
+ single number. The *seq* variable is a pointer to a structure with
+ members ptr and len. On successful return, *seq* ->ptr contains a
+ pointer to memory that must be freed, by calling :c:func:`PyDimMem_FREE`,
+ to avoid a memory leak. The restriction on memory size allows this
+ converter to be conveniently used for sequences intended to be
+ interpreted as array shapes.
+
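+Sketch of the usual pattern inside an extension function (``args`` comes from
+the surrounding, hypothetical function), including the required call to
+:c:func:`PyDimMem_FREE`:
+
+.. code-block:: c
+
+    PyArray_Dims shape = {NULL, 0};
+
+    if (!PyArg_ParseTuple(args, "O&", PyArray_IntpConverter, &shape)) {
+        return NULL;
+    }
+    /* shape.ptr now holds shape.len npy_intp values ... */
+    PyDimMem_FREE(shape.ptr);
+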
+.. c:function:: int PyArray_BufferConverter(PyObject* obj, PyArray_Chunk* buf)
+
+ Convert any Python object, *obj*, with a (single-segment) buffer
+ interface to a variable with members that detail the object's use
+ of its chunk of memory. The *buf* variable is a pointer to a
+ structure with base, ptr, len, and flags members. The
+ :c:type:`PyArray_Chunk` structure is binary compatible with
+ Python's buffer object (through its len member on 32-bit platforms
+ and its ptr member on 64-bit platforms or in Python 2.5). On
+ return, the base member is set to *obj* (or its base if *obj* is
+ already a buffer object pointing to another object). If you need
+ to hold on to the memory be sure to INCREF the base member. The
+ chunk of memory is pointed to by *buf* ->ptr member and has length
+ *buf* ->len. The flags member of *buf* is :c:data:`NPY_BEHAVED_RO` with
+ the :c:data:`NPY_ARRAY_WRITEABLE` flag set if *obj* has a writeable buffer
+ interface.
+
+.. c:function:: int PyArray_AxisConverter(PyObject* obj, int* axis)
+
+ Convert a Python object, *obj*, representing an axis argument to
+ the proper value for passing to the functions that take an integer
+ axis. Specifically, if *obj* is None, *axis* is set to
+ :c:data:`NPY_MAXDIMS` which is interpreted correctly by the C-API
+ functions that take axis arguments.
+
+.. c:function:: int PyArray_BoolConverter(PyObject* obj, npy_bool* value)
+
+ Convert any Python object, *obj*, to :c:data:`NPY_TRUE` or
+ :c:data:`NPY_FALSE`, and place the result in *value*.
+
+.. c:function:: int PyArray_ByteorderConverter(PyObject* obj, char* endian)
+
+ Convert Python strings into the corresponding byte-order
+ character:
+ '>', '<', 's', '=', or '\|'.
+
+.. c:function:: int PyArray_SortkindConverter(PyObject* obj, NPY_SORTKIND* sort)
+
+ Convert Python strings into one of :c:data:`NPY_QUICKSORT` (starts
+ with 'q' or 'Q'), :c:data:`NPY_HEAPSORT` (starts with 'h' or 'H'),
+ :c:data:`NPY_MERGESORT` (starts with 'm' or 'M') or :c:data:`NPY_STABLESORT`
+ (starts with 't' or 'T'). :c:data:`NPY_MERGESORT` and :c:data:`NPY_STABLESORT`
+ are aliased to each other for backwards compatibility and may refer to one
+ of several stable sorting algorithms depending on the data type.
+
+.. c:function:: int PyArray_SearchsideConverter( \
+ PyObject* obj, NPY_SEARCHSIDE* side)
+
+ Convert Python strings into one of :c:data:`NPY_SEARCHLEFT` (starts with 'l'
+ or 'L'), or :c:data:`NPY_SEARCHRIGHT` (starts with 'r' or 'R').
+
+.. c:function:: int PyArray_OrderConverter(PyObject* obj, NPY_ORDER* order)
+
+ Convert the Python strings 'C', 'F', 'A', and 'K' into the :c:type:`NPY_ORDER`
+ enumeration :c:data:`NPY_CORDER`, :c:data:`NPY_FORTRANORDER`,
+ :c:data:`NPY_ANYORDER`, and :c:data:`NPY_KEEPORDER`.
+
+.. c:function:: int PyArray_CastingConverter( \
+ PyObject* obj, NPY_CASTING* casting)
+
+ Convert the Python strings 'no', 'equiv', 'safe', 'same_kind', and
+ 'unsafe' into the :c:type:`NPY_CASTING` enumeration :c:data:`NPY_NO_CASTING`,
+ :c:data:`NPY_EQUIV_CASTING`, :c:data:`NPY_SAFE_CASTING`,
+ :c:data:`NPY_SAME_KIND_CASTING`, and :c:data:`NPY_UNSAFE_CASTING`.
+
+.. c:function:: int PyArray_ClipmodeConverter( \
+ PyObject* object, NPY_CLIPMODE* val)
+
+ Convert the Python strings 'clip', 'wrap', and 'raise' into the
+ :c:type:`NPY_CLIPMODE` enumeration :c:data:`NPY_CLIP`, :c:data:`NPY_WRAP`,
+ and :c:data:`NPY_RAISE`.
+
+.. c:function:: int PyArray_ConvertClipmodeSequence( \
+ PyObject* object, NPY_CLIPMODE* modes, int n)
+
+ Converts either a sequence of clipmodes or a single clipmode into
+ a C array of :c:type:`NPY_CLIPMODE` values. The number of clipmodes *n*
+ must be known before calling this function. This function is provided
+ to help functions allow a different clipmode for each dimension.
+
+Other conversions
+^^^^^^^^^^^^^^^^^
+
+.. c:function:: int PyArray_PyIntAsInt(PyObject* op)
+
+ Convert all kinds of Python objects (including arrays and array
+ scalars) to a standard integer. On error, -1 is returned and an
+ exception set. You may find the following macro useful:
+
+ .. code-block:: c
+
+ #define error_converting(x)  (((x) == -1) && PyErr_Occurred())
+
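+For instance (sketch), converting a hypothetical ``obj`` argument to an axis
+number and checking for failure:
+
+.. code-block:: c
+
+    int axis = PyArray_PyIntAsInt(obj);
+    if (error_converting(axis)) {
+        return NULL;
+    }
+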
+.. c:function:: npy_intp PyArray_PyIntAsIntp(PyObject* op)
+
+ Convert all kinds of Python objects (including arrays and array
+ scalars) to a (platform-pointer-sized) integer. On error, -1 is
+ returned and an exception set.
+
+.. c:function:: int PyArray_IntpFromSequence( \
+ PyObject* seq, npy_intp* vals, int maxvals)
+
+ Convert any Python sequence (or single Python number) passed in as
+ *seq* to (up to) *maxvals* pointer-sized integers and place them
+ in the *vals* array. The sequence can be smaller than *maxvals* as
+ the number of converted objects is returned.
+
+.. c:function:: int PyArray_TypestrConvert(int itemsize, int gentype)
+
+ Convert typestring characters (with *itemsize*) to basic
+ enumerated data types. The typestring characters corresponding to
+ signed and unsigned integers, floating point numbers, and
+ complex floating point numbers are recognized and converted. Other
+ values of *gentype* are returned unchanged. This function can be used to
+ convert, for example, the string 'f4' to :c:data:`NPY_FLOAT32`.
+
+
+Miscellaneous
+-------------
+
+
+Importing the API
+^^^^^^^^^^^^^^^^^
+
+In order to make use of the C-API from another extension module, the
+:c:func:`import_array` function must be called. If the extension module is
+self-contained in a single .c file, then that is all that needs to be
+done. If, however, the extension module involves multiple files where
+the C-API is needed then some additional steps must be taken.
+
+.. c:function:: void import_array(void)
+
+ This function must be called in the initialization section of a
+ module that will make use of the C-API. It imports the module
+ where the function-pointer table is stored and points the correct
+ variable to it.
+
+.. c:macro:: PY_ARRAY_UNIQUE_SYMBOL
+
+.. c:macro:: NO_IMPORT_ARRAY
+
+ Using these #defines you can use the C-API in multiple files for a
+ single extension module. In each file you must define
+ :c:macro:`PY_ARRAY_UNIQUE_SYMBOL` to some name that will hold the
+ C-API (*e.g.* myextension_ARRAY_API). This must be done **before**
+ including the numpy/arrayobject.h file. In the module
+ initialization routine you call :c:func:`import_array`. In addition,
+ in the files that do not have the module initialization
+ sub_routine define :c:macro:`NO_IMPORT_ARRAY` prior to including
+ numpy/arrayobject.h.
+
+ Suppose I have two files coolmodule.c and coolhelper.c which need
+ to be compiled and linked into a single extension module. Suppose
+ coolmodule.c contains the required initcool module initialization
+ function (with the import_array() function called). Then,
+ coolmodule.c would have at the top:
+
+ .. code-block:: c
+
+ #define PY_ARRAY_UNIQUE_SYMBOL cool_ARRAY_API
+ #include <numpy/arrayobject.h>
+
+ On the other hand, coolhelper.c would contain at the top:
+
+ .. code-block:: c
+
+ #define NO_IMPORT_ARRAY
+ #define PY_ARRAY_UNIQUE_SYMBOL cool_ARRAY_API
+ #include <numpy/arrayobject.h>
+
+ You can also put the last two lines, which are common to both files, into
+ an extension-local header file, as long as you make sure that
+ NO_IMPORT_ARRAY is #defined before #including that file (a sketch of such
+ a header appears at the end of this section).
+
+ Internally, these #defines work as follows:
+
+ * If neither is defined, the C-API is declared to be
+ :c:type:`static void**`, so it is only visible within the
+ compilation unit that #includes numpy/arrayobject.h.
+ * If :c:macro:`PY_ARRAY_UNIQUE_SYMBOL` is #defined, but
+ :c:macro:`NO_IMPORT_ARRAY` is not, the C-API is declared to
+ be :c:type:`void**`, so that it will also be visible to other
+ compilation units.
+ * If :c:macro:`NO_IMPORT_ARRAY` is #defined, regardless of
+ whether :c:macro:`PY_ARRAY_UNIQUE_SYMBOL` is, the C-API is
+ declared to be :c:type:`extern void**`, so it is expected to
+ be defined in another compilation unit.
+ * Whenever :c:macro:`PY_ARRAY_UNIQUE_SYMBOL` is #defined, it
+ also changes the name of the variable holding the C-API, which
+ defaults to :c:data:`PyArray_API`, to whatever the macro is
+ #defined to.
+
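+As noted above, the two lines common to both files can instead live in a small
+extension-local header (a sketch; the header name is arbitrary):
+
+.. code-block:: c
+
+    /* cool_numpy.h -- shared preamble for this extension */
+    #define PY_ARRAY_UNIQUE_SYMBOL cool_ARRAY_API
+    #include <numpy/arrayobject.h>
+
+so that coolhelper.c reduces to:
+
+.. code-block:: c
+
+    #define NO_IMPORT_ARRAY
+    #include "cool_numpy.h"
+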
+Checking the API Version
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Because python extensions are not used in the same way as usual libraries on
+most platforms, some errors cannot be automatically detected at build time or
+even runtime. For example, if you build an extension using a function available
+only for numpy >= 1.3.0, and you import the extension later with numpy 1.2, you
+will not get an import error (but almost certainly a segmentation fault when
+calling the function). That's why several functions are provided to check for
+numpy versions. The macros :c:data:`NPY_VERSION` and
+:c:data:`NPY_FEATURE_VERSION` correspond to the numpy version used to build the
+extension, whereas the values returned by the functions
+PyArray_GetNDArrayCVersion and PyArray_GetNDArrayCFeatureVersion correspond to
+the version of numpy in use at runtime.
+
+The rules for ABI and API compatibilities can be summarized as follows:
+
+ * Whenever :c:data:`NPY_VERSION` != PyArray_GetNDArrayCVersion, the
+ extension has to be recompiled (ABI incompatibility).
+ * :c:data:`NPY_VERSION` == PyArray_GetNDArrayCVersion and
+ :c:data:`NPY_FEATURE_VERSION` <= PyArray_GetNDArrayCFeatureVersion means
+ backward compatible changes.
+
+ABI incompatibility is automatically detected in every numpy version. API
+incompatibility detection was added in numpy 1.4.0. If you want to support
+many different numpy versions with one extension binary, you have to build
+your extension against the lowest NPY_FEATURE_VERSION possible.
+
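+A runtime check along these lines can be performed at import time of the
+extension (a sketch; the error messages are illustrative):
+
+.. code-block:: c
+
+    if (NPY_VERSION != PyArray_GetNDArrayCVersion()) {
+        PyErr_SetString(PyExc_ImportError,
+                        "extension built against an incompatible NumPy ABI");
+        return NULL;
+    }
+    if (NPY_FEATURE_VERSION > PyArray_GetNDArrayCFeatureVersion()) {
+        PyErr_SetString(PyExc_ImportError,
+                        "extension requires a newer NumPy API than available");
+        return NULL;
+    }
+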
+.. c:function:: unsigned int PyArray_GetNDArrayCVersion(void)
+
+ This just returns the value :c:data:`NPY_VERSION`. :c:data:`NPY_VERSION`
+ changes whenever a backward incompatible change occurs at the ABI level. Because
+ it is in the C-API, however, comparing the output of this function from the
+ value defined in the current header gives a way to test if the C-API has
+ changed thus requiring a re-compilation of extension modules that use the
+ C-API. This is automatically checked in the function :c:func:`import_array`.
+
+.. c:function:: unsigned int PyArray_GetNDArrayCFeatureVersion(void)
+
+ .. versionadded:: 1.4.0
+
+ This just returns the value :c:data:`NPY_FEATURE_VERSION`.
+ :c:data:`NPY_FEATURE_VERSION` changes whenever the API changes (e.g. a
+ function is added). A changed value does not always require a recompile.
+
+Internal Flexibility
+^^^^^^^^^^^^^^^^^^^^
+
+.. c:function:: int PyArray_SetNumericOps(PyObject* dict)
+
+ NumPy stores an internal table of Python callable objects that are
+ used to implement arithmetic operations for arrays as well as
+ certain array calculation methods. This function allows the user
+ to replace any or all of these Python objects with their own
+ versions. The keys of the dictionary, *dict*, are the named
+ functions to replace and the paired value is the Python callable
+ object to use. Care should be taken that the function used to
+ replace an internal array operation does not itself call back to
+ that internal array operation (unless you have designed the
+ function to handle that), or an unchecked infinite recursion can
+ result (possibly causing program crash). The key names that
+ represent operations that can be replaced are:
+
+ **add**, **subtract**, **multiply**, **divide**,
+ **remainder**, **power**, **square**, **reciprocal**,
+ **ones_like**, **sqrt**, **negative**, **positive**,
+ **absolute**, **invert**, **left_shift**, **right_shift**,
+ **bitwise_and**, **bitwise_xor**, **bitwise_or**,
+ **less**, **less_equal**, **equal**, **not_equal**,
+ **greater**, **greater_equal**, **floor_divide**,
+ **true_divide**, **logical_or**, **logical_and**,
+ **floor**, **ceil**, **maximum**, **minimum**, **rint**.
+
+
+ These functions are included here because they are used at least once
+ in the array object's methods. The function returns -1 (without
+ setting a Python Error) if one of the objects being assigned is not
+ callable.
+
+ .. deprecated:: 1.16
+
+.. c:function:: PyObject* PyArray_GetNumericOps(void)
+
+ Return a Python dictionary containing the callable Python objects
+ stored in the internal arithmetic operation table. The keys of
+ this dictionary are given in the explanation for :c:func:`PyArray_SetNumericOps`.
+
+ .. deprecated:: 1.16
+
+.. c:function:: void PyArray_SetStringFunction(PyObject* op, int repr)
+
+ This function allows you to alter the tp_str and tp_repr methods
+ of the array object to any Python function. Thus you can alter
+ what happens for all arrays when str(arr) or repr(arr) is called
+ from Python. The function to be called is passed in as *op*. If
+ *repr* is non-zero, then this function will be called in response
+ to repr(arr), otherwise the function will be called in response to
+ str(arr). No check on whether or not *op* is callable is
+ performed. The callable passed in to *op* should expect an array
+ argument and should return a string to be printed.
+
+
+Memory management
+^^^^^^^^^^^^^^^^^
+
+.. c:function:: char* PyDataMem_NEW(size_t nbytes)
+
+.. c:function:: PyDataMem_FREE(char* ptr)
+
+.. c:function:: char* PyDataMem_RENEW(void * ptr, size_t newbytes)
+
+ Macros to allocate, free, and reallocate memory. These macros are used
+ internally to create arrays.
+
+.. c:function:: npy_intp* PyDimMem_NEW(int nd)
+
+.. c:function:: PyDimMem_FREE(char* ptr)
+
+.. c:function:: npy_intp* PyDimMem_RENEW(void* ptr, size_t newnd)
+
+ Macros to allocate, free, and reallocate dimension and strides memory.
+
+.. c:function:: void* PyArray_malloc(size_t nbytes)
+
+.. c:function:: PyArray_free(void* ptr)
+
+.. c:function:: void* PyArray_realloc(npy_intp* ptr, size_t nbytes)
+
+ These macros use different memory allocators, depending on the
+ constant :c:data:`NPY_USE_PYMEM`. The system malloc is used when
+ :c:data:`NPY_USE_PYMEM` is 0, if :c:data:`NPY_USE_PYMEM` is 1, then
+ the Python memory allocator is used.
+
+.. c:function:: int PyArray_ResolveWritebackIfCopy(PyArrayObject* obj)
+
+ If ``obj.flags`` has :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` or (deprecated)
+ :c:data:`NPY_ARRAY_UPDATEIFCOPY`, this function copies ``obj->data`` to
+ ``obj->base->data``, clears the flags, ``DECREF`` s ``obj->base`` and makes
+ the base writeable, sets ``obj->base`` to NULL, and returns the error state
+ of the copy operation. This is the opposite of
+ :c:func:`PyArray_SetWritebackIfCopyBase`. Usually this is called once
+ you are finished with ``obj``, just before ``Py_DECREF(obj)``. It may be called
+ multiple times, or with ``NULL`` input. See also
+ :c:func:`PyArray_DiscardWritebackIfCopy`.
+
+ Returns 0 if nothing was done, -1 on error, and 1 if action was taken.
+
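+A sketch of the usual teardown for an array ``obj`` that may carry writeback
+semantics (error handling abbreviated):
+
+.. code-block:: c
+
+    /* Write any temporary-copy data back to obj->base, if needed. */
+    if (PyArray_ResolveWritebackIfCopy(obj) < 0) {
+        Py_DECREF(obj);
+        return NULL;       /* the write-back copy failed */
+    }
+    Py_DECREF(obj);
+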
+Threading support
+^^^^^^^^^^^^^^^^^
+
+These macros are only meaningful if :c:data:`NPY_ALLOW_THREADS`
+evaluates to True during compilation of the extension module. Otherwise,
+these macros are equivalent to whitespace. Python uses a single Global
+Interpreter Lock (GIL) for each Python process so that only a single
+thread may execute at a time (even on multi-cpu machines). When
+calling out to a compiled function that may take time to compute (and
+does not have side-effects for other threads like updated global
+variables), the GIL should be released so that other Python threads
+can run while the time-consuming calculations are performed. This can
+be accomplished using two groups of macros. Typically, if one macro in
+a group is used in a code block, all of them must be used in the same
+code block. Currently, :c:data:`NPY_ALLOW_THREADS` is defined to the
+python-defined :c:data:`WITH_THREADS` constant unless the environment
+variable :c:data:`NPY_NOSMP` is set in which case
+:c:data:`NPY_ALLOW_THREADS` is defined to be 0.
+
+Group 1
+"""""""
+
+ This group is used to call code that may take some time but does not
+ use any Python C-API calls. Thus, the GIL should be released during
+ its calculation.
+
+ .. c:macro:: NPY_BEGIN_ALLOW_THREADS
+
+ Equivalent to :c:macro:`Py_BEGIN_ALLOW_THREADS` except it uses
+ :c:data:`NPY_ALLOW_THREADS` to determine whether the macro is
+ replaced with white-space or not.
+
+ .. c:macro:: NPY_END_ALLOW_THREADS
+
+ Equivalent to :c:macro:`Py_END_ALLOW_THREADS` except it uses
+ :c:data:`NPY_ALLOW_THREADS` to determine whether the macro is
+ replaced with white-space or not.
+
+ .. c:macro:: NPY_BEGIN_THREADS_DEF
+
+ Place in the variable declaration area. This macro sets up the
+ variable needed for storing the Python state.
+
+ .. c:macro:: NPY_BEGIN_THREADS
+
+ Place right before code that does not need the Python
+ interpreter (no Python C-API calls). This macro saves the
+ Python state and releases the GIL.
+
+ .. c:macro:: NPY_END_THREADS
+
+ Place right after code that does not need the Python
+ interpreter. This macro acquires the GIL and restores the
+ Python state from the saved variable.
+
+ .. c:function:: NPY_BEGIN_THREADS_DESCR(PyArray_Descr *dtype)
+
+ Useful to release the GIL only if *dtype* does not contain
+ arbitrary Python objects which may need the Python interpreter
+ during execution of the loop.
+
+ .. c:function:: NPY_END_THREADS_DESCR(PyArray_Descr *dtype)
+
+ Useful to regain the GIL in situations where it was released
+ using the BEGIN form of this macro.
+
+ .. c:function:: NPY_BEGIN_THREADS_THRESHOLDED(int loop_size)
+
+ Useful to release the GIL only if *loop_size* exceeds a
+ minimum threshold, currently set to 500. Should be matched
+ with a :c:macro:`NPY_END_THREADS` to regain the GIL.
+
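+A minimal sketch of the Group 1 macros wrapped around a GIL-free loop (note
+the absence of trailing semicolons, as advised in the tip below):
+
+.. code-block:: c
+
+    static void
+    square_all(double *data, npy_intp n)
+    {
+        npy_intp i;
+        NPY_BEGIN_THREADS_DEF
+
+        NPY_BEGIN_THREADS
+        /* pure C work: no Python C-API calls while the GIL is released */
+        for (i = 0; i < n; i++) {
+            data[i] *= data[i];
+        }
+        NPY_END_THREADS
+    }
+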
+Group 2
+"""""""
+
+ This group is used to re-acquire the Python GIL after it has been
+ released. For example, suppose the GIL has been released (using the
+ previous calls), and then some path in the code (perhaps in a
+ different subroutine) requires use of the Python C-API, then these
+ macros are useful to acquire the GIL. These macros accomplish
+ essentially the reverse of the previous three: they acquire the GIL
+ (saving its state) and then release it again using the saved state.
+
+ .. c:macro:: NPY_ALLOW_C_API_DEF
+
+ Place in the variable declaration area to set up the necessary
+ variable.
+
+ .. c:macro:: NPY_ALLOW_C_API
+
+ Place before code that needs to call the Python C-API (when it is
+ known that the GIL has already been released).
+
+ .. c:macro:: NPY_DISABLE_C_API
+
+ Place after code that needs to call the Python C-API (to re-release
+ the GIL).
+
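+Sketch: with the GIL already released by the Group 1 macros, temporarily
+re-acquire it to raise a Python exception (the ``_DEF`` macro belongs in the
+variable declaration area of the enclosing function):
+
+.. code-block:: c
+
+    NPY_ALLOW_C_API_DEF
+
+    NPY_ALLOW_C_API
+    PyErr_SetString(PyExc_RuntimeError, "computation failed");
+    NPY_DISABLE_C_API
+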
+.. tip::
+
+ Never use semicolons after the threading support macros.
+
+
+Priority
+^^^^^^^^
+
+.. c:var:: NPY_PRIORITY
+
+ Default priority for arrays.
+
+.. c:var:: NPY_SUBTYPE_PRIORITY
+
+ Default subtype priority.
+
+.. c:var:: NPY_SCALAR_PRIORITY
+
+ Default scalar priority (very small)
+
+.. c:function:: double PyArray_GetPriority(PyObject* obj, double def)
+
+ Return the :obj:`~numpy.class.__array_priority__` attribute (converted to a
+ double) of *obj* or *def* if no attribute of that name
+ exists. Fast returns that avoid the attribute lookup are provided
+ for objects of type :c:data:`PyArray_Type`.
+
+
+Default buffers
+^^^^^^^^^^^^^^^
+
+.. c:var:: NPY_BUFSIZE
+
+ Default size of the user-settable internal buffers.
+
+.. c:var:: NPY_MIN_BUFSIZE
+
+ Smallest size of user-settable internal buffers.
+
+.. c:var:: NPY_MAX_BUFSIZE
+
+ Largest size allowed for the user-settable buffers.
+
+
+Other constants
+^^^^^^^^^^^^^^^
+
+.. c:var:: NPY_NUM_FLOATTYPE
+
+ The number of floating-point types
+
+.. c:var:: NPY_MAXDIMS
+
+ The maximum number of dimensions allowed in arrays.
+
+.. c:var:: NPY_MAXARGS
+
+ The maximum number of array arguments that can be used in functions.
+
+.. c:var:: NPY_VERSION
+
+ The current version of the ndarray object (check to see if this
+ variable is defined to guarantee the numpy/arrayobject.h header is
+ being used).
+
+.. c:var:: NPY_FALSE
+
+ Defined as 0 for use with Bool.
+
+.. c:var:: NPY_TRUE
+
+ Defined as 1 for use with Bool.
+
+.. c:var:: NPY_FAIL
+
+ The return value of failed converter functions which are called using
+ the "O&" syntax in :c:func:`PyArg_ParseTuple`-like functions.
+
+.. c:var:: NPY_SUCCEED
+
+ The return value of successful converter functions which are called
+ using the "O&" syntax in :c:func:`PyArg_ParseTuple`-like functions.
+
+
+Miscellaneous Macros
+^^^^^^^^^^^^^^^^^^^^
+
+.. c:function:: PyArray_SAMESHAPE(PyArrayObject *a1, PyArrayObject *a2)
+
+ Evaluates as True if arrays *a1* and *a2* have the same shape.
+
+.. c:macro:: PyArray_MAX(a,b)
+
+ Returns the maximum of *a* and *b*. If (*a*) or (*b*) are
+ expressions they are evaluated twice.
+
+.. c:macro:: PyArray_MIN(a,b)
+
+ Returns the minimum of *a* and *b*. If (*a*) or (*b*) are
+ expressions they are evaluated twice.
+
+.. c:macro:: PyArray_CLT(a,b)
+
+.. c:macro:: PyArray_CGT(a,b)
+
+.. c:macro:: PyArray_CLE(a,b)
+
+.. c:macro:: PyArray_CGE(a,b)
+
+.. c:macro:: PyArray_CEQ(a,b)
+
+.. c:macro:: PyArray_CNE(a,b)
+
+ Implements the complex comparisons between two complex numbers
+ (structures with a real and imag member) using NumPy's definition
+ of the ordering which is lexicographic: comparing the real parts
+ first and then the imaginary parts if the real parts are equal.
+
+.. c:function:: PyArray_REFCOUNT(PyObject* op)
+
+ Returns the reference count of any Python object.
+
+.. c:function:: PyArray_DiscardWritebackIfCopy(PyObject* obj)
+
+ If ``obj.flags`` has :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` or (deprecated)
+ :c:data:`NPY_ARRAY_UPDATEIFCOPY`, this function clears the flags, ``DECREF`` s
+ ``obj->base`` and makes it writeable, and sets ``obj->base`` to NULL. In
+ contrast to :c:func:`PyArray_ResolveWritebackIfCopy` it makes no attempt
+ to write the data back to ``obj->base``. This undoes
+ :c:func:`PyArray_SetWritebackIfCopyBase`. Usually this is called after an
+ error when you are finished with ``obj``, just before ``Py_DECREF(obj)``.
+ It may be called multiple times, or with ``NULL`` input.
+
+.. c:function:: PyArray_XDECREF_ERR(PyObject* obj)
+
+ Deprecated in 1.14, use :c:func:`PyArray_DiscardWritebackIfCopy`
+ followed by ``Py_XDECREF``.
+
+ DECREF's an array object which may have the (deprecated)
+ :c:data:`NPY_ARRAY_UPDATEIFCOPY` or :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`
+ flag set without causing the contents to be copied back into the
+ original array. Resets the :c:data:`NPY_ARRAY_WRITEABLE` flag on the base
+ object. This is useful for recovering from an error condition when
+ writeback semantics are used, but will lead to wrong results.
+
+
+Enumerated Types
+^^^^^^^^^^^^^^^^
+
+.. c:type:: NPY_SORTKIND
+
+ A special variable-type which can take on different values to indicate
+ the sorting algorithm being used.
+
+ .. c:var:: NPY_QUICKSORT
+
+ .. c:var:: NPY_HEAPSORT
+
+ .. c:var:: NPY_MERGESORT
+
+ .. c:var:: NPY_STABLESORT
+
+ Used as an alias of :c:data:`NPY_MERGESORT` and vice versa.
+
+ .. c:var:: NPY_NSORTS
+
+ Defined to be the number of sorts. It is fixed at three by the need for
+ backwards compatibility, and consequently :c:data:`NPY_MERGESORT` and
+ :c:data:`NPY_STABLESORT` are aliased to each other and may refer to one
+ of several stable sorting algorithms depending on the data type.
+
+
+.. c:type:: NPY_SCALARKIND
+
+ A special variable type indicating the number of "kinds" of
+ scalars distinguished in determining scalar-coercion rules. This
+ variable can take on the values :c:data:`NPY_{KIND}` where ``{KIND}`` can be
+
+ **NOSCALAR**, **BOOL_SCALAR**, **INTPOS_SCALAR**,
+ **INTNEG_SCALAR**, **FLOAT_SCALAR**, **COMPLEX_SCALAR**,
+ **OBJECT_SCALAR**
+
+ .. c:var:: NPY_NSCALARKINDS
+
+ Defined to be the number of scalar kinds
+ (not including :c:data:`NPY_NOSCALAR`).
+
+.. c:type:: NPY_ORDER
+
+ An enumeration type indicating the element order that an array should be
+ interpreted in. When a brand new array is created, generally
+ only **NPY_CORDER** and **NPY_FORTRANORDER** are used, whereas
+ when one or more inputs are provided, the order can be based on them.
+
+ .. c:var:: NPY_ANYORDER
+
+ Fortran order if all the inputs are Fortran, C otherwise.
+
+ .. c:var:: NPY_CORDER
+
+ C order.
+
+ .. c:var:: NPY_FORTRANORDER
+
+ Fortran order.
+
+ .. c:var:: NPY_KEEPORDER
+
+ An order as close to the order of the inputs as possible, even
+ if the input is in neither C nor Fortran order.
+
+.. c:type:: NPY_CLIPMODE
+
+ A variable type indicating the kind of clipping that should be
+ applied in certain functions.
+
+ .. c:var:: NPY_RAISE
+
+ The default for most operations, raises an exception if an index
+ is out of bounds.
+
+ .. c:var:: NPY_CLIP
+
+ Clips an index to the valid range if it is out of bounds.
+
+ .. c:var:: NPY_WRAP
+
+ Wraps an index to the valid range if it is out of bounds.
+
+.. c:type:: NPY_CASTING
+
+ .. versionadded:: 1.6
+
+ An enumeration type indicating how permissive data conversions should
+ be. This is used by the iterator added in NumPy 1.6, and is intended
+ to be used more broadly in a future version.
+
+ .. c:var:: NPY_NO_CASTING
+
+ Only allow identical types.
+
+ .. c:var:: NPY_EQUIV_CASTING
+
+ Allow identical and casts involving byte swapping.
+
+ .. c:var:: NPY_SAFE_CASTING
+
+ Only allow casts which will not cause values to be rounded,
+ truncated, or otherwise changed.
+
+ .. c:var:: NPY_SAME_KIND_CASTING
+
+ Allow any safe casts, and casts between types of the same kind.
+ For example, float64 -> float32 is permitted with this rule.
+
+ .. c:var:: NPY_UNSAFE_CASTING
+
+ Allow any cast, no matter what kind of data loss may occur.
+
+.. index::
+ pair: ndarray; C-API
diff --git a/doc/source/reference/c-api/config.rst b/doc/source/reference/c-api/config.rst
new file mode 100644
index 000000000..05e6fe44d
--- /dev/null
+++ b/doc/source/reference/c-api/config.rst
@@ -0,0 +1,122 @@
+System configuration
+====================
+
+.. sectionauthor:: Travis E. Oliphant
+
+When NumPy is built, information about system configuration is
+recorded, and is made available for extension modules using NumPy's C
+API. These are mostly defined in ``numpyconfig.h`` (included in
+``ndarrayobject.h``). The public symbols are prefixed by ``NPY_*``.
+NumPy also offers some functions for querying information about the
+platform in use.
+
+For private use, NumPy also constructs a ``config.h`` in the NumPy
+include directory, which is not exported by NumPy (that is, a python
+extension which uses the numpy C API will not see those symbols), to
+avoid namespace pollution.
+
+
+Data type sizes
+---------------
+
+The :c:data:`NPY_SIZEOF_{CTYPE}` constants are defined so that sizeof
+information is available to the pre-processor.
+
+.. c:var:: NPY_SIZEOF_SHORT
+
+ sizeof(short)
+
+.. c:var:: NPY_SIZEOF_INT
+
+ sizeof(int)
+
+.. c:var:: NPY_SIZEOF_LONG
+
+ sizeof(long)
+
+.. c:var:: NPY_SIZEOF_LONGLONG
+
+ sizeof(longlong) where longlong is defined appropriately on the
+ platform.
+
+.. c:var:: NPY_SIZEOF_PY_LONG_LONG
+
+
+.. c:var:: NPY_SIZEOF_FLOAT
+
+ sizeof(float)
+
+.. c:var:: NPY_SIZEOF_DOUBLE
+
+ sizeof(double)
+
+.. c:var:: NPY_SIZEOF_LONG_DOUBLE
+
+ sizeof(longdouble) (A macro defines **NPY_SIZEOF_LONGDOUBLE** as well.)
+
+.. c:var:: NPY_SIZEOF_PY_INTPTR_T
+
+ Size of a pointer on this platform (sizeof(void \*)) (A macro defines
+ NPY_SIZEOF_INTP as well.)
+
+
+Platform information
+--------------------
+
+.. c:var:: NPY_CPU_X86
+.. c:var:: NPY_CPU_AMD64
+.. c:var:: NPY_CPU_IA64
+.. c:var:: NPY_CPU_PPC
+.. c:var:: NPY_CPU_PPC64
+.. c:var:: NPY_CPU_SPARC
+.. c:var:: NPY_CPU_SPARC64
+.. c:var:: NPY_CPU_S390
+.. c:var:: NPY_CPU_PARISC
+
+ .. versionadded:: 1.3.0
+
+ CPU architecture of the platform; only one of the above is
+ defined.
+
+ Defined in ``numpy/npy_cpu.h``
+
+.. c:var:: NPY_LITTLE_ENDIAN
+
+.. c:var:: NPY_BIG_ENDIAN
+
+.. c:var:: NPY_BYTE_ORDER
+
+ .. versionadded:: 1.3.0
+
+ Portable alternatives to the ``endian.h`` macros of GNU Libc.
+ If big endian, :c:data:`NPY_BYTE_ORDER` == :c:data:`NPY_BIG_ENDIAN`, and
+ similarly for little endian architectures.
+
+ Defined in ``numpy/npy_endian.h``.
+
+.. c:function:: PyArray_GetEndianness()
+
+ .. versionadded:: 1.3.0
+
+ Returns the endianness of the current platform.
+ One of :c:data:`NPY_CPU_BIG`, :c:data:`NPY_CPU_LITTLE`,
+ or :c:data:`NPY_CPU_UNKNOWN_ENDIAN`.
+
+
+Compiler directives
+-------------------
+
+.. c:var:: NPY_LIKELY
+.. c:var:: NPY_UNLIKELY
+.. c:var:: NPY_UNUSED
+
+
+Interrupt Handling
+------------------
+
+.. c:var:: NPY_INTERRUPT_H
+.. c:var:: NPY_SIGSETJMP
+.. c:var:: NPY_SIGLONGJMP
+.. c:var:: NPY_SIGJMP_BUF
+.. c:var:: NPY_SIGINT_ON
+.. c:var:: NPY_SIGINT_OFF
diff --git a/doc/source/reference/c-api/coremath.rst b/doc/source/reference/c-api/coremath.rst
new file mode 100644
index 000000000..7e00322f9
--- /dev/null
+++ b/doc/source/reference/c-api/coremath.rst
@@ -0,0 +1,453 @@
+NumPy core libraries
+====================
+
+.. sectionauthor:: David Cournapeau
+
+.. versionadded:: 1.3.0
+
+Starting from numpy 1.3.0, we are working on separating the pure C,
+"computational" code from the python dependent code. The goal is twofolds:
+making the code cleaner, and enabling code reuse by other extensions outside
+numpy (scipy, etc...).
+
+NumPy core math library
+-----------------------
+
+The numpy core math library ('npymath') is a first step in this direction. This
+library contains most math-related C99 functionality, which can be used on
+platforms where C99 is not well supported. The core math functions have the
+same API as the C99 ones, except for the npy_* prefix.
+
+The available functions are defined in <numpy/npy_math.h> - please refer to this header when
+in doubt.
+
+Floating point classification
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. c:var:: NPY_NAN
+
+ This macro is defined to a NaN (Not a Number), and is guaranteed to have
+ the signbit unset ('positive' NaN). The corresponding single and extended
+ precision macros are available with the suffixes F and L.
+
+.. c:var:: NPY_INFINITY
+
+ This macro is defined to positive infinity. The corresponding single and
+ extended precision macros are available with the suffixes F and L.
+
+.. c:var:: NPY_PZERO
+
+ This macro is defined to positive zero. The corresponding single and
+ extended precision macros are available with the suffixes F and L.
+
+.. c:var:: NPY_NZERO
+
+ This macro is defined to negative zero (that is with the sign bit set). The
+ corresponding single and extended precision macros are available with the
+ suffixes F and L.
+
+.. c:function:: int npy_isnan(x)
+
+ This is a macro, and is equivalent to C99 isnan: works for single, double
+ and extended precision, and returns a non-zero value if x is a NaN.
+
+.. c:function:: int npy_isfinite(x)
+
+ This is a macro, and is equivalent to C99 isfinite: works for single,
+ double and extended precision, and returns a non-zero value if x is neither a
+ NaN nor an infinity.
+
+.. c:function:: int npy_isinf(x)
+
+ This is a macro, and is equivalent to C99 isinf: works for single, double
+ and extended precision, and returns a non-zero value if x is infinite (positive
+ or negative).
+
+.. c:function:: int npy_signbit(x)
+
+ This is a macro, and is equivalent to C99 signbit: works for single, double
+ and extended precision, and returns a non-zero value if x has the signbit set
+ (that is, the number is negative).
+
+.. c:function:: double npy_copysign(double x, double y)
+
+ This is a function equivalent to C99 copysign: return x with the same sign
+ as y. Works for any value, including inf and nan. Single and extended
+ precisions are available with suffix f and l.
+
+ .. versionadded:: 1.4.0
+
+Useful math constants
+~~~~~~~~~~~~~~~~~~~~~
+
+The following math constants are available in ``npy_math.h``. Single
+and extended precision are also available by adding the ``f`` and
+``l`` suffixes respectively.
+
+.. c:var:: NPY_E
+
+ Base of natural logarithm (:math:`e`)
+
+.. c:var:: NPY_LOG2E
+
+ Logarithm to base 2 of the Euler constant (:math:`\frac{\ln(e)}{\ln(2)}`)
+
+.. c:var:: NPY_LOG10E
+
+ Logarithm to base 10 of the Euler constant (:math:`\frac{\ln(e)}{\ln(10)}`)
+
+.. c:var:: NPY_LOGE2
+
+ Natural logarithm of 2 (:math:`\ln(2)`)
+
+.. c:var:: NPY_LOGE10
+
+ Natural logarithm of 10 (:math:`\ln(10)`)
+
+.. c:var:: NPY_PI
+
+ Pi (:math:`\pi`)
+
+.. c:var:: NPY_PI_2
+
+ Pi divided by 2 (:math:`\frac{\pi}{2}`)
+
+.. c:var:: NPY_PI_4
+
+ Pi divided by 4 (:math:`\frac{\pi}{4}`)
+
+.. c:var:: NPY_1_PI
+
+ Reciprocal of pi (:math:`\frac{1}{\pi}`)
+
+.. c:var:: NPY_2_PI
+
+ Two times the reciprocal of pi (:math:`\frac{2}{\pi}`)
+
+.. c:var:: NPY_EULER
+
+ The Euler constant
+ :math:`\lim_{n\rightarrow\infty}({\sum_{k=1}^n{\frac{1}{k}}-\ln n})`
+
+Low-level floating point manipulation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+These can be useful for precise floating point comparisons.
+
+.. c:function:: double npy_nextafter(double x, double y)
+
+ This is a function equivalent to C99 nextafter: return next representable
+ floating point value from x in the direction of y. Single and extended
+ precisions are available with suffix f and l.
+
+ .. versionadded:: 1.4.0
+
+.. c:function:: double npy_spacing(double x)
+
+ This is a function equivalent to the Fortran ``SPACING`` intrinsic. Return the
+ distance between x and the next representable floating point value from x,
+ e.g. spacing(1) == eps. The spacing of nan and +/- inf returns nan. Single and extended precisions
+ are available with suffix f and l.
+
+ .. versionadded:: 1.4.0
+
+.. c:function:: void npy_set_floatstatus_divbyzero()
+
+ Set the divide by zero floating point exception
+
+ .. versionadded:: 1.6.0
+
+.. c:function:: void npy_set_floatstatus_overflow()
+
+ Set the overflow floating point exception
+
+ .. versionadded:: 1.6.0
+
+.. c:function:: void npy_set_floatstatus_underflow()
+
+ Set the underflow floating point exception
+
+ .. versionadded:: 1.6.0
+
+.. c:function:: void npy_set_floatstatus_invalid()
+
+ Set the invalid floating point exception
+
+ .. versionadded:: 1.6.0
+
+.. c:function:: int npy_get_floatstatus()
+
+ Get floating point status. Returns a bitmask with the following possible flags:
+
+ * NPY_FPE_DIVIDEBYZERO
+ * NPY_FPE_OVERFLOW
+ * NPY_FPE_UNDERFLOW
+ * NPY_FPE_INVALID
+
+ Note that :c:func:`npy_get_floatstatus_barrier` is preferable as it prevents
+ aggressive compiler optimizations reordering the call relative to
+ the code setting the status, which could lead to incorrect results.
+
+ .. versionadded:: 1.9.0
+
+.. c:function:: int npy_get_floatstatus_barrier(char*)
+
+ Get floating point status. A pointer to a local variable is passed in to
+ prevent aggressive compiler optimizations from reordering this function call
+ relative to the code setting the status, which could lead to incorrect
+ results.
+
+ Returns a bitmask with the following possible flags:
+
+ * NPY_FPE_DIVIDEBYZERO
+ * NPY_FPE_OVERFLOW
+ * NPY_FPE_UNDERFLOW
+ * NPY_FPE_INVALID
+
+ .. versionadded:: 1.15.0
+
+.. c:function:: int npy_clear_floatstatus()
+
+ Clears the floating point status. Returns the previous status mask.
+
+ Note that :c:func:`npy_clear_floatstatus_barrier` is preferable as it
+ prevents aggressive compiler optimizations reordering the call relative to
+ the code setting the status, which could lead to incorrect results.
+
+ .. versionadded:: 1.9.0
+
+.. c:function:: int npy_clear_floatstatus_barrier(char*)
+
+ Clears the floating point status. A pointer to a local variable is passed in to
+ prevent aggressive compiler optimizations from reordering this function call.
+ Returns the previous status mask.
+
+ .. versionadded:: 1.15.0
+
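+A sketch of the intended usage: clear the status before a computation and
+inspect it afterwards, passing the address of a local variable as the barrier:
+
+.. code-block:: c
+
+    char dummy;   /* its address only serves as an optimization barrier */
+    int status;
+
+    npy_clear_floatstatus_barrier(&dummy);
+    /* ... floating point computation ... */
+    status = npy_get_floatstatus_barrier(&dummy);
+    if (status & NPY_FPE_OVERFLOW) {
+        /* handle or report the overflow */
+    }
+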
+Complex functions
+~~~~~~~~~~~~~~~~~
+
+.. versionadded:: 1.4.0
+
+C99-like complex functions have been added. Those can be used if you wish to
+implement portable C extensions. Since we still support platforms without C99
+complex type, you need to restrict to C90-compatible syntax, e.g.:
+
+.. code-block:: c
+
+ /* a = 1 + 2i */
+ npy_cdouble a = npy_cpack(1, 2);
+ npy_cdouble b;
+
+ b = npy_clog(a);
+
+Linking against the core math library in an extension
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. versionadded:: 1.4.0
+
+To use the core math library in your own extension, you need to add the npymath
+compile and link options to your extension in your setup.py:
+
+ >>> from numpy.distutils.misc_util import get_info
+ >>> info = get_info('npymath')
+ >>> config.add_extension('foo', sources=['foo.c'], extra_info=info)
+
+In other words, the usage of info is exactly the same as when using blas_info
+and co.
+
+Half-precision functions
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. versionadded:: 1.6.0
+
+The header file <numpy/halffloat.h> provides functions to work with
+IEEE 754-2008 16-bit floating point values. While this format is
+not typically used for numerical computations, it is useful for
+storing values which require floating point but do not need much precision.
+It can also be used as an educational tool to understand the nature
+of floating point round-off error.
+
+Like for other types, NumPy includes a typedef npy_half for the 16 bit
+float. Unlike for most of the other types, you cannot use this as a
+normal type in C, since it is a typedef for npy_uint16. For example,
+1.0 looks like 0x3c00 to C, and if you do an equality comparison
+between the different signed zeros, you will get -0.0 != 0.0
+(0x8000 != 0x0000), which is incorrect.
+
+For these reasons, NumPy provides an API to work with npy_half values
+accessible by including <numpy/halffloat.h> and linking to 'npymath'.
+For functions that are not provided directly, such as the arithmetic
+operations, the preferred method is to convert to float
+or double and back again, as in the following example.
+
+.. code-block:: c
+
+ npy_half sum(int n, npy_half *array) {
+ float ret = 0;
+ while(n--) {
+ ret += npy_half_to_float(*array++);
+ }
+ return npy_float_to_half(ret);
+ }
+
+External Links:
+
+* `754-2008 IEEE Standard for Floating-Point Arithmetic`__
+* `Half-precision Float Wikipedia Article`__.
+* `OpenGL Half Float Pixel Support`__
+* `The OpenEXR image format`__.
+
+__ https://ieeexplore.ieee.org/document/4610935/
+__ https://en.wikipedia.org/wiki/Half-precision_floating-point_format
+__ https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_half_float_pixel.txt
+__ https://www.openexr.com/about.html
+
+.. c:var:: NPY_HALF_ZERO
+
+ This macro is defined to positive zero.
+
+.. c:var:: NPY_HALF_PZERO
+
+ This macro is defined to positive zero.
+
+.. c:var:: NPY_HALF_NZERO
+
+ This macro is defined to negative zero.
+
+.. c:var:: NPY_HALF_ONE
+
+ This macro is defined to 1.0.
+
+.. c:var:: NPY_HALF_NEGONE
+
+ This macro is defined to -1.0.
+
+.. c:var:: NPY_HALF_PINF
+
+ This macro is defined to +inf.
+
+.. c:var:: NPY_HALF_NINF
+
+ This macro is defined to -inf.
+
+.. c:var:: NPY_HALF_NAN
+
+ This macro is defined to a NaN value, guaranteed to have its sign bit unset.
+
+.. c:function:: float npy_half_to_float(npy_half h)
+
+ Converts a half-precision float to a single-precision float.
+
+.. c:function:: double npy_half_to_double(npy_half h)
+
+ Converts a half-precision float to a double-precision float.
+
+.. c:function:: npy_half npy_float_to_half(float f)
+
+ Converts a single-precision float to a half-precision float. The
+ value is rounded to the nearest representable half, with ties going
+ to the nearest even. If the value is too small or too big, the
+ system's floating point underflow or overflow bit will be set.
+
+.. c:function:: npy_half npy_double_to_half(double d)
+
+ Converts a double-precision float to a half-precision float. The
+ value is rounded to the nearest representable half, with ties going
+ to the nearest even. If the value is too small or too big, the
+ system's floating point underflow or overflow bit will be set.
+
+.. c:function:: int npy_half_eq(npy_half h1, npy_half h2)
+
+ Compares two half-precision floats (h1 == h2).
+
+.. c:function:: int npy_half_ne(npy_half h1, npy_half h2)
+
+ Compares two half-precision floats (h1 != h2).
+
+.. c:function:: int npy_half_le(npy_half h1, npy_half h2)
+
+ Compares two half-precision floats (h1 <= h2).
+
+.. c:function:: int npy_half_lt(npy_half h1, npy_half h2)
+
+ Compares two half-precision floats (h1 < h2).
+
+.. c:function:: int npy_half_ge(npy_half h1, npy_half h2)
+
+ Compares two half-precision floats (h1 >= h2).
+
+.. c:function:: int npy_half_gt(npy_half h1, npy_half h2)
+
+ Compares two half-precision floats (h1 > h2).
+
+.. c:function:: int npy_half_eq_nonan(npy_half h1, npy_half h2)
+
+ Compares two half-precision floats that are known to not be NaN (h1 == h2). If
+ a value is NaN, the result is undefined.
+
+.. c:function:: int npy_half_lt_nonan(npy_half h1, npy_half h2)
+
+ Compares two half-precision floats that are known to not be NaN (h1 < h2). If
+ a value is NaN, the result is undefined.
+
+.. c:function:: int npy_half_le_nonan(npy_half h1, npy_half h2)
+
+ Compares two half-precision floats that are known to not be NaN (h1 <= h2). If
+ a value is NaN, the result is undefined.
+
+.. c:function:: int npy_half_iszero(npy_half h)
+
+ Tests whether the half-precision float has a value equal to zero. This may be slightly
+ faster than calling npy_half_eq(h, NPY_HALF_ZERO).
+
+.. c:function:: int npy_half_isnan(npy_half h)
+
+ Tests whether the half-precision float is a NaN.
+
+.. c:function:: int npy_half_isinf(npy_half h)
+
+ Tests whether the half-precision float is plus or minus Inf.
+
+.. c:function:: int npy_half_isfinite(npy_half h)
+
+ Tests whether the half-precision float is finite (not NaN or Inf).
+
+.. c:function:: int npy_half_signbit(npy_half h)
+
+ Returns 1 if h is negative, 0 otherwise.
+
+.. c:function:: npy_half npy_half_copysign(npy_half x, npy_half y)
+
+ Returns the value of x with the sign bit copied from y. Works for any value,
+ including Inf and NaN.
+
+.. c:function:: npy_half npy_half_spacing(npy_half h)
+
+ This is the same for half-precision float as npy_spacing and npy_spacingf
+ described in the low-level floating point section.
+
+.. c:function:: npy_half npy_half_nextafter(npy_half x, npy_half y)
+
+ This is the same for half-precision float as npy_nextafter and npy_nextafterf
+ described in the low-level floating point section.
+
+.. c:function:: npy_uint16 npy_floatbits_to_halfbits(npy_uint32 f)
+
+ Low-level function which converts a 32-bit single-precision float, stored
+ as a uint32, into a 16-bit half-precision float.
+
+.. c:function:: npy_uint16 npy_doublebits_to_halfbits(npy_uint64 d)
+
+ Low-level function which converts a 64-bit double-precision float, stored
+ as a uint64, into a 16-bit half-precision float.
+
+.. c:function:: npy_uint32 npy_halfbits_to_floatbits(npy_uint16 h)
+
+ Low-level function which converts a 16-bit half-precision float
+ into a 32-bit single-precision float, stored as a uint32.
+
+.. c:function:: npy_uint64 npy_halfbits_to_doublebits(npy_uint16 h)
+
+ Low-level function which converts a 16-bit half-precision float
+ into a 64-bit double-precision float, stored as a uint64.
diff --git a/doc/source/reference/c-api/deprecations.rst b/doc/source/reference/c-api/deprecations.rst
new file mode 100644
index 000000000..a382017a2
--- /dev/null
+++ b/doc/source/reference/c-api/deprecations.rst
@@ -0,0 +1,58 @@
+C API Deprecations
+==================
+
+Background
+----------
+
+The API exposed by NumPy for third-party extensions has grown over
+years of releases, and has allowed programmers to directly access
+NumPy functionality from C. This API can be best described as
+"organic". It has emerged from multiple competing desires and from
+multiple points of view over the years, strongly influenced by the
+desire to make it easy for users to move to NumPy from Numeric and
+Numarray. The core API originated with Numeric in 1995 and there are
+patterns such as the heavy use of macros written to mimic Python's
+C-API as well as account for compiler technology of the late 90's.
+There is also only a small group of volunteers who have had very little
+time to spend on improving this API.
+
+There is an ongoing effort to improve the API.
+It is important in this effort
+to ensure that code that compiles for one NumPy 1.x release continues to
+compile for later NumPy 1.x releases. At the same time, certain API's will be marked
+as deprecated so that future-looking code can avoid these API's and
+follow better practices.
+
+Another important role played by deprecation markings in the C API is to move
+towards hiding internal details of the NumPy implementation. For those
+needing direct, easy, access to the data of ndarrays, this will not
+remove this ability. Rather, there are many potential performance
+optimizations which require changing the implementation details, and
+NumPy developers have been unable to try them because of the high
+value of preserving ABI compatibility. By deprecating this direct
+access, we will in the future be able to improve NumPy's performance
+in ways we cannot presently.
+
+Deprecation Mechanism NPY_NO_DEPRECATED_API
+-------------------------------------------
+
+In C, there is no equivalent to the deprecation warnings that Python
+supports. One way to do deprecations is to flag them in the
+documentation and release notes, then remove or change the deprecated
+features in a future major version (NumPy 2.0 and beyond). Minor
+versions of NumPy should not, however, have major C-API changes that
+prevent code that worked on a previous minor release from working. For example, we
+will do our best to ensure that code that compiled and worked on NumPy
+1.4 should continue to work on NumPy 1.7 (but perhaps with compiler
+warnings).
+
+To use the NPY_NO_DEPRECATED_API mechanism, you need to #define it to
+the target API version of NumPy before #including any NumPy headers.
+If you want to confirm that your code is clean against 1.7, use::
+
+ #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
+
+On compilers which support a #warning mechanism, NumPy issues a
+compiler warning if you do not define the symbol NPY_NO_DEPRECATED_API.
+This way, the fact that there are deprecations will be flagged for
+third-party developers who may not have read the release notes closely.
diff --git a/doc/source/reference/c-api/dtype.rst b/doc/source/reference/c-api/dtype.rst
new file mode 100644
index 000000000..72e908861
--- /dev/null
+++ b/doc/source/reference/c-api/dtype.rst
@@ -0,0 +1,419 @@
+Data Type API
+=============
+
+.. sectionauthor:: Travis E. Oliphant
+
+The standard array can have 24 different data types (and has some
+support for adding your own types). These data types all have an
+enumerated type, an enumerated type-character, and a corresponding
+array scalar Python type object (placed in a hierarchy). There are
+also standard C typedefs to make it easier to manipulate elements of
+the given data type. For the numeric types, there are also bit-width
+equivalent C typedefs and named typenumbers that make it easier to
+select the precision desired.
+
+.. warning::
+
+ The names for the types in C code follow C naming conventions
+ more closely. The Python names for these types follow Python
+ conventions. Thus, :c:data:`NPY_FLOAT` picks up a 32-bit float in
+ C, but :class:`numpy.float_` in Python corresponds to a 64-bit
+ double. The bit-width names can be used in both Python and C for
+ clarity.
+
+
+Enumerated Types
+----------------
+
+.. c:var:: NPY_TYPES
+
+There is a list of enumerated types defined providing the basic 24
+data types plus some useful generic names. Whenever the code requires
+a type number, one of these enumerated types is requested. The types
+are all called :c:data:`NPY_{NAME}`:
+
+.. c:var:: NPY_BOOL
+
+ The enumeration value for the boolean type, stored as one byte.
+ It may only be set to the values 0 and 1.
+
+.. c:var:: NPY_BYTE
+.. c:var:: NPY_INT8
+
+ The enumeration value for an 8-bit/1-byte signed integer.
+
+.. c:var:: NPY_SHORT
+.. c:var:: NPY_INT16
+
+ The enumeration value for a 16-bit/2-byte signed integer.
+
+.. c:var:: NPY_INT
+.. c:var:: NPY_INT32
+
+ The enumeration value for a 32-bit/4-byte signed integer.
+
+.. c:var:: NPY_LONG
+
+ Equivalent to either NPY_INT or NPY_LONGLONG, depending on the
+ platform.
+
+.. c:var:: NPY_LONGLONG
+.. c:var:: NPY_INT64
+
+ The enumeration value for a 64-bit/8-byte signed integer.
+
+.. c:var:: NPY_UBYTE
+.. c:var:: NPY_UINT8
+
+ The enumeration value for an 8-bit/1-byte unsigned integer.
+
+.. c:var:: NPY_USHORT
+.. c:var:: NPY_UINT16
+
+ The enumeration value for a 16-bit/2-byte unsigned integer.
+
+.. c:var:: NPY_UINT
+.. c:var:: NPY_UINT32
+
+ The enumeration value for a 32-bit/4-byte unsigned integer.
+
+.. c:var:: NPY_ULONG
+
+ Equivalent to either NPY_UINT or NPY_ULONGLONG, depending on the
+ platform.
+
+.. c:var:: NPY_ULONGLONG
+.. c:var:: NPY_UINT64
+
+ The enumeration value for a 64-bit/8-byte unsigned integer.
+
+.. c:var:: NPY_HALF
+.. c:var:: NPY_FLOAT16
+
+ The enumeration value for a 16-bit/2-byte IEEE 754-2008 compatible floating
+ point type.
+
+.. c:var:: NPY_FLOAT
+.. c:var:: NPY_FLOAT32
+
+ The enumeration value for a 32-bit/4-byte IEEE 754 compatible floating
+ point type.
+
+.. c:var:: NPY_DOUBLE
+.. c:var:: NPY_FLOAT64
+
+ The enumeration value for a 64-bit/8-byte IEEE 754 compatible floating
+ point type.
+
+.. c:var:: NPY_LONGDOUBLE
+
+ The enumeration value for a platform-specific floating point type which is
+ at least as large as NPY_DOUBLE, but larger on many platforms.
+
+.. c:var:: NPY_CFLOAT
+.. c:var:: NPY_COMPLEX64
+
+ The enumeration value for a 64-bit/8-byte complex type made up of
+ two NPY_FLOAT values.
+
+.. c:var:: NPY_CDOUBLE
+.. c:var:: NPY_COMPLEX128
+
+ The enumeration value for a 128-bit/16-byte complex type made up of
+ two NPY_DOUBLE values.
+
+.. c:var:: NPY_CLONGDOUBLE
+
+ The enumeration value for a platform-specific complex floating point
+ type which is made up of two NPY_LONGDOUBLE values.
+
+.. c:var:: NPY_DATETIME
+
+ The enumeration value for a data type which holds dates or datetimes with
+ a precision based on selectable date or time units.
+
+.. c:var:: NPY_TIMEDELTA
+
+ The enumeration value for a data type which holds lengths of times in
+ integers of selectable date or time units.
+
+.. c:var:: NPY_STRING
+
+ The enumeration value for ASCII strings of a selectable size. The
+ strings have a fixed maximum size within a given array.
+
+.. c:var:: NPY_UNICODE
+
+ The enumeration value for UCS4 strings of a selectable size. The
+ strings have a fixed maximum size within a given array.
+
+.. c:var:: NPY_OBJECT
+
+ The enumeration value for references to arbitrary Python objects.
+
+.. c:var:: NPY_VOID
+
+ Primarily used to hold struct dtypes, but can contain arbitrary
+ binary data.
+
+Some useful aliases of the above types are
+
+.. c:var:: NPY_INTP
+
+ The enumeration value for a signed integer type which is the same
+ size as a (void \*) pointer. This is the type used by all
+ arrays of indices.
+
+.. c:var:: NPY_UINTP
+
+ The enumeration value for an unsigned integer type which is the
+ same size as a (void \*) pointer.
+
+.. c:var:: NPY_MASK
+
+ The enumeration value of the type used for masks, such as with
+ the :c:data:`NPY_ITER_ARRAYMASK` iterator flag. This is equivalent
+ to :c:data:`NPY_UINT8`.
+
+.. c:var:: NPY_DEFAULT_TYPE
+
+    The default type to use when no dtype is explicitly specified, for
+    example when calling ``np.zeros(shape)``. This is equivalent to
+ :c:data:`NPY_DOUBLE`.
+
+Other useful related constants are
+
+.. c:var:: NPY_NTYPES
+
+ The total number of built-in NumPy types. The enumeration covers
+ the range from 0 to NPY_NTYPES-1.
+
+.. c:var:: NPY_NOTYPE
+
+ A signal value guaranteed not to be a valid type enumeration number.
+
+.. c:var:: NPY_USERDEF
+
+ The start of type numbers used for Custom Data types.
+
+The various character codes indicating certain types are also part of
+an enumerated list. References to type characters (should they be
+needed at all) should always use these enumerations. The form of them
+is :c:data:`NPY_{NAME}LTR` where ``{NAME}`` can be
+
+ **BOOL**, **BYTE**, **UBYTE**, **SHORT**, **USHORT**, **INT**,
+ **UINT**, **LONG**, **ULONG**, **LONGLONG**, **ULONGLONG**,
+ **HALF**, **FLOAT**, **DOUBLE**, **LONGDOUBLE**, **CFLOAT**,
+ **CDOUBLE**, **CLONGDOUBLE**, **DATETIME**, **TIMEDELTA**,
+ **OBJECT**, **STRING**, **VOID**
+
+ **INTP**, **UINTP**
+
+ **GENBOOL**, **SIGNED**, **UNSIGNED**, **FLOATING**, **COMPLEX**
+
+The latter group of ``{NAME}s`` corresponds to letters used in the array
+interface typestring specification.
+
+
+Defines
+-------
+
+Max and min values for integers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. c:var:: NPY_MAX_INT{bits}
+
+.. c:var:: NPY_MAX_UINT{bits}
+
+.. c:var:: NPY_MIN_INT{bits}
+
+ These are defined for ``{bits}`` = 8, 16, 32, 64, 128, and 256 and provide
+ the maximum (minimum) value of the corresponding (unsigned) integer
+ type. Note: the actual integer type may not be available on all
+ platforms (i.e. 128-bit and 256-bit integers are rare).
+
+.. c:var:: NPY_MIN_{type}
+
+ This is defined for ``{type}`` = **BYTE**, **SHORT**, **INT**,
+ **LONG**, **LONGLONG**, **INTP**
+
+.. c:var:: NPY_MAX_{type}
+
+    This is defined for ``{type}`` = **BYTE**, **UBYTE**,
+ **SHORT**, **USHORT**, **INT**, **UINT**, **LONG**, **ULONG**,
+ **LONGLONG**, **ULONGLONG**, **INTP**, **UINTP**
+
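+As a small illustration, these limits can be used for explicit range
+checks before a narrowing cast; the helper below is only a sketch and
+is not part of the NumPy API.
+
+.. code-block:: c
+
+    /* Clamp a 64-bit value into the range representable by npy_int32. */
+    static npy_int32
+    clamp_to_int32(npy_int64 x)
+    {
+        if (x > NPY_MAX_INT32) {
+            return NPY_MAX_INT32;
+        }
+        if (x < NPY_MIN_INT32) {
+            return NPY_MIN_INT32;
+        }
+        return (npy_int32)x;
+    }
+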
+
+Number of bits in data types
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+All :c:data:`NPY_SIZEOF_{CTYPE}` constants have corresponding
+:c:data:`NPY_BITSOF_{CTYPE}` constants defined. The :c:data:`NPY_BITSOF_{CTYPE}`
+constants provide the number of bits in the data type. Specifically,
+the available ``{CTYPE}s`` are
+
+ **BOOL**, **CHAR**, **SHORT**, **INT**, **LONG**,
+ **LONGLONG**, **FLOAT**, **DOUBLE**, **LONGDOUBLE**
+
+
+Bit-width references to enumerated typenums
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+All of the numeric data types (integer, floating point, and complex)
+have constants that are defined to be a specific enumerated type
+number. Exactly which enumerated type a bit-width type refers to is
+platform dependent. In particular, the constants available are
+:c:data:`PyArray_{NAME}{BITS}` where ``{NAME}`` is **INT**, **UINT**,
+**FLOAT**, **COMPLEX** and ``{BITS}`` can be 8, 16, 32, 64, 80, 96, 128,
+160, 192, 256, and 512. Obviously not all bit-widths are available on
+all platforms for all the kinds of numeric types. Commonly 8-, 16-,
+32-, 64-bit integers; 32-, 64-bit floats; and 64-, 128-bit complex
+types are available.
+
+
+Integer that can hold a pointer
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The constants **NPY_INTP** and **NPY_UINTP** refer to an
+enumerated integer type that is large enough to hold a pointer on the
+platform. Index arrays should always be converted to **NPY_INTP**,
+because the dimensions of an array are of type npy_intp.
+
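+A hedged sketch of coercing an arbitrary index object to such an array
+(``obj`` here stands in for whatever object the caller received):
+
+.. code-block:: c
+
+    /* Coerce the input to a well-behaved array of NPY_INTP indices. */
+    PyObject *idx = PyArray_FROM_OTF(obj, NPY_INTP, NPY_ARRAY_IN_ARRAY);
+    if (idx == NULL) {
+        return NULL;
+    }
+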
+
+C-type names
+------------
+
+There are standard variable types for each of the numeric data types
+and the bool data type. Some of these are already available in the
+C-specification. You can create variables in extension code with these
+types.
+
+
+Boolean
+^^^^^^^
+
+.. c:type:: npy_bool
+
+ unsigned char; The constants :c:data:`NPY_FALSE` and
+ :c:data:`NPY_TRUE` are also defined.
+
+
+(Un)Signed Integer
+^^^^^^^^^^^^^^^^^^
+
+Unsigned versions of the integers can be defined by prepending a 'u'
+to the integer name.
+
+.. c:type:: npy_(u)byte
+
+ (unsigned) char
+
+.. c:type:: npy_short
+
+ short
+
+.. c:type:: npy_ushort
+
+ unsigned short
+
+.. c:type:: npy_uint
+
+ unsigned int
+
+.. c:type:: npy_int
+
+ int
+
+.. c:type:: npy_int16
+
+ 16-bit integer
+
+.. c:type:: npy_uint16
+
+ 16-bit unsigned integer
+
+.. c:type:: npy_int32
+
+ 32-bit integer
+
+.. c:type:: npy_uint32
+
+ 32-bit unsigned integer
+
+.. c:type:: npy_int64
+
+ 64-bit integer
+
+.. c:type:: npy_uint64
+
+ 64-bit unsigned integer
+
+.. c:type:: npy_(u)long
+
+ (unsigned) long int
+
+.. c:type:: npy_(u)longlong
+
+    (unsigned) long long int
+
+.. c:type:: npy_intp
+
+ Py_intptr_t (an integer that is the size of a pointer on
+ the platform).
+
+.. c:type:: npy_uintp
+
+ unsigned Py_intptr_t (an integer that is the size of a pointer on
+ the platform).
+
+
+(Complex) Floating point
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. c:type:: npy_half
+
+ 16-bit float
+
+.. c:type:: npy_(c)float
+
+ 32-bit float
+
+.. c:type:: npy_(c)double
+
+ 64-bit double
+
+.. c:type:: npy_(c)longdouble
+
+ long double
+
+Complex types are structures with **.real** and **.imag** members (in
+that order).
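+
+For example, a minimal sketch of working with the members directly
+(nothing beyond the member layout described above is assumed):
+
+.. code-block:: c
+
+    npy_cdouble z, conj;
+    z.real = 1.5;
+    z.imag = -2.0;
+    /* Build the complex conjugate from the two members. */
+    conj.real = z.real;
+    conj.imag = -z.imag;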
+
+
+Bit-width names
+^^^^^^^^^^^^^^^
+
+There are also typedefs for signed integers, unsigned integers,
+floating point, and complex floating point types of specific
+bit-widths. The available type names are
+
+ :c:type:`npy_int{bits}`, :c:type:`npy_uint{bits}`, :c:type:`npy_float{bits}`,
+ and :c:type:`npy_complex{bits}`
+
+where ``{bits}`` is the number of bits in the type and can be **8**,
+**16**, **32**, **64**, 128, and 256 for integer types; 16, **32**
+, **64**, 80, 96, 128, and 256 for floating-point types; and 32,
+**64**, **128**, 160, 192, and 512 for complex-valued types. Which
+bit-widths are available is platform dependent. The bolded bit-widths
+are usually available on all platforms.
+
+
+Printf Formatting
+-----------------
+
+For help in printing, the following strings are defined as the correct
+format specifiers to use with ``printf`` and related functions.
+
+ :c:data:`NPY_LONGLONG_FMT`, :c:data:`NPY_ULONGLONG_FMT`,
+ :c:data:`NPY_INTP_FMT`, :c:data:`NPY_UINTP_FMT`,
+ :c:data:`NPY_LONGDOUBLE_FMT`
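+
+For instance, a short sketch of printing an ``npy_intp`` value portably
+(the helper name and the use of ``PyArray_SIZE`` are just for
+illustration):
+
+.. code-block:: c
+
+    static void
+    print_size(PyArrayObject *arr)
+    {
+        npy_intp n = PyArray_SIZE(arr);
+        /* NPY_INTP_FMT supplies the conversion letters for npy_intp. */
+        printf("number of elements: %" NPY_INTP_FMT "\n", n);
+    }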
diff --git a/doc/source/reference/c-api/generalized-ufuncs.rst b/doc/source/reference/c-api/generalized-ufuncs.rst
new file mode 100644
index 000000000..b59f077ad
--- /dev/null
+++ b/doc/source/reference/c-api/generalized-ufuncs.rst
@@ -0,0 +1,216 @@
+.. _c-api.generalized-ufuncs:
+
+==================================
+Generalized Universal Function API
+==================================
+
+There is a general need for looping over not only functions on scalars
+but also over functions on vectors (or arrays).
+This concept is realized in NumPy by generalizing the universal functions
+(ufuncs). In regular ufuncs, the elementary function is limited to
+element-by-element operations, whereas the generalized version (gufuncs)
+supports "sub-array" by "sub-array" operations. The Perl vector library PDL
+provides a similar functionality and its terms are re-used in the following.
+
+Each generalized ufunc has information associated with it that states
+what the "core" dimensionality of the inputs is, as well as the
+corresponding dimensionality of the outputs (the element-wise ufuncs
+have zero core dimensions). The list of the core dimensions for all
+arguments is called the "signature" of a ufunc. For example, the
+ufunc numpy.add has signature ``(),()->()`` defining two scalar inputs
+and one scalar output.
+
+Another example is the function ``inner1d(a, b)`` with a signature of
+``(i),(i)->()``. This applies the inner product along the last axis of
+each input, but keeps the remaining indices intact.
+For example, where ``a`` is of shape ``(3, 5, N)`` and ``b`` is of shape
+``(5, N)``, this will return an output of shape ``(3,5)``.
+The underlying elementary function is called ``3 * 5`` times. In the
+signature, we specify one core dimension ``(i)`` for each input and zero core
+dimensions ``()`` for the output, since it takes two 1-d arrays and
+returns a scalar. By using the same name ``i``, we specify that the two
+corresponding dimensions should be of the same size.
+
+The dimensions beyond the core dimensions are called "loop" dimensions. In
+the above example, this corresponds to ``(3, 5)``.
+
+The signature determines how the dimensions of each input/output array are
+split into core and loop dimensions:
+
+#. Each dimension in the signature is matched to a dimension of the
+ corresponding passed-in array, starting from the end of the shape tuple.
+ These are the core dimensions, and they must be present in the arrays, or
+ an error will be raised.
+#. Core dimensions assigned to the same label in the signature (e.g. the
+ ``i`` in ``inner1d``'s ``(i),(i)->()``) must have exactly matching sizes,
+ no broadcasting is performed.
+#. The core dimensions are removed from all inputs and the remaining
+ dimensions are broadcast together, defining the loop dimensions.
+#. The shape of each output is determined from the loop dimensions plus the
+   output's core dimensions.
+
+Typically, the size of all core dimensions in an output will be determined by
+the size of a core dimension with the same label in an input array. This is
+not a requirement, and it is possible to define a signature where a label
+comes up for the first time in an output, although some precautions must be
+taken when calling such a function. An example would be the function
+``euclidean_pdist(a)``, with signature ``(n,d)->(p)``, that given an array of
+``n`` ``d``-dimensional vectors, computes all unique pairwise Euclidean
+distances among them. The output dimension ``p`` must therefore be equal to
+``n * (n - 1) / 2``, but it is the caller's responsibility to pass in an
+output array of the right size. If the size of a core dimension of an output
+cannot be determined from a passed in input or output array, an error will be
+raised.
+
+Note: Prior to NumPy 1.10.0, less strict checks were in place: missing core
+dimensions were created by prepending 1's to the shape as necessary, core
+dimensions with the same label were broadcast together, and undetermined
+dimensions were created with size 1.
+
+
+Definitions
+-----------
+
+Elementary Function
+ Each ufunc consists of an elementary function that performs the
+ most basic operation on the smallest portion of array arguments
+ (e.g. adding two numbers is the most basic operation in adding two
+ arrays). The ufunc applies the elementary function multiple times
+ on different parts of the arrays. The input/output of elementary
+ functions can be vectors; e.g., the elementary function of inner1d
+ takes two vectors as input.
+
+Signature
+ A signature is a string describing the input/output dimensions of
+ the elementary function of a ufunc. See section below for more
+ details.
+
+Core Dimension
+ The dimensionality of each input/output of an elementary function
+ is defined by its core dimensions (zero core dimensions correspond
+ to a scalar input/output). The core dimensions are mapped to the
+ last dimensions of the input/output arrays.
+
+Dimension Name
+ A dimension name represents a core dimension in the signature.
+ Different dimensions may share a name, indicating that they are of
+ the same size.
+
+Dimension Index
+ A dimension index is an integer representing a dimension name. It
+ enumerates the dimension names according to the order of the first
+ occurrence of each name in the signature.
+
+.. _details-of-signature:
+
+Details of Signature
+--------------------
+
+The signature defines "core" dimensionality of input and output
+variables, and thereby also defines the contraction of the
+dimensions. The signature is represented by a string of the
+following format:
+
+* Core dimensions of each input or output array are represented by a
+ list of dimension names in parentheses, ``(i_1,...,i_N)``; a scalar
+ input/output is denoted by ``()``. Instead of ``i_1``, ``i_2``,
+ etc, one can use any valid Python variable name.
+* Dimension lists for different arguments are separated by ``","``.
+ Input/output arguments are separated by ``"->"``.
+* If one uses the same dimension name in multiple locations, this
+ enforces the same size of the corresponding dimensions.
+
+The formal syntax of signatures is as follows::
+
+ <Signature> ::= <Input arguments> "->" <Output arguments>
+ <Input arguments> ::= <Argument list>
+ <Output arguments> ::= <Argument list>
+ <Argument list> ::= nil | <Argument> | <Argument> "," <Argument list>
+ <Argument> ::= "(" <Core dimension list> ")"
+ <Core dimension list> ::= nil | <Core dimension> |
+ <Core dimension> "," <Core dimension list>
+ <Core dimension> ::= <Dimension name> <Dimension modifier>
+ <Dimension name> ::= valid Python variable name | valid integer
+ <Dimension modifier> ::= nil | "?"
+
+Notes:
+
+#. All quotes are for clarity.
+#. Unmodified core dimensions that share the same name must have the same size.
+ Each dimension name typically corresponds to one level of looping in the
+ elementary function's implementation.
+#. White spaces are ignored.
+#. An integer as a dimension name freezes that dimension to the value.
+#. If the name is suffixed with the "?" modifier, the dimension is a core
+ dimension only if it exists on all inputs and outputs that share it;
+ otherwise it is ignored (and replaced by a dimension of size 1 for the
+ elementary function).
+
+Here are some examples of signatures:
+
++-------------+----------------------------+-----------------------------------+
+| name | signature | common usage |
++=============+============================+===================================+
+| add | ``(),()->()`` | binary ufunc |
++-------------+----------------------------+-----------------------------------+
+| sum1d | ``(i)->()`` | reduction |
++-------------+----------------------------+-----------------------------------+
+| inner1d | ``(i),(i)->()`` | vector-vector multiplication |
++-------------+----------------------------+-----------------------------------+
+| matmat | ``(m,n),(n,p)->(m,p)`` | matrix multiplication |
++-------------+----------------------------+-----------------------------------+
+| vecmat | ``(n),(n,p)->(p)`` | vector-matrix multiplication |
++-------------+----------------------------+-----------------------------------+
+| matvec | ``(m,n),(n)->(m)`` | matrix-vector multiplication |
++-------------+----------------------------+-----------------------------------+
+| matmul | ``(m?,n),(n,p?)->(m?,p?)`` | combination of the four above |
++-------------+----------------------------+-----------------------------------+
+| outer_inner | ``(i,t),(j,t)->(i,j)`` | inner over the last dimension, |
+| | | outer over the second to last, |
+| | | and loop/broadcast over the rest. |
++-------------+----------------------------+-----------------------------------+
+| cross1d | ``(3),(3)->(3)`` | cross product where the last |
+| | | dimension is frozen and must be 3 |
++-------------+----------------------------+-----------------------------------+
+
+.. _frozen:
+
+The last is an instance of freezing a core dimension and can be used to
+improve ufunc performance.
+
+C-API for implementing Elementary Functions
+-------------------------------------------
+
+The current interface remains unchanged, and ``PyUFunc_FromFuncAndData``
+can still be used to implement (specialized) ufuncs, consisting of
+scalar elementary functions.
+
+One can use ``PyUFunc_FromFuncAndDataAndSignature`` to declare a more
+general ufunc. The argument list is the same as
+``PyUFunc_FromFuncAndData``, with an additional argument specifying the
+signature as C string.
+
+Furthermore, the callback function is of the same type as before,
+``void (*foo)(char **args, npy_intp *dimensions, npy_intp *steps, void *func)``.
+When invoked, ``args`` is a list of length ``nargs`` containing
+the data of all input/output arguments. For a scalar elementary
+function, ``steps`` is also of length ``nargs``, denoting the strides used
+for the arguments. ``dimensions`` is a pointer to a single integer
+defining the size of the axis to be looped over.
+
+For a non-trivial signature, ``dimensions`` will also contain the sizes
+of the core dimensions as well, starting at the second entry. Only
+one size is provided for each unique dimension name and the sizes are
+given according to the first occurrence of a dimension name in the
+signature.
+
+The first ``nargs`` elements of ``steps`` remain the same as for scalar
+ufuncs. The following elements contain the strides of all core
+dimensions for all arguments in order.
+
+For example, consider a ufunc with signature ``(i,j),(i)->()``. In
+this case, ``args`` will contain three pointers to the data of the
+input/output arrays ``a``, ``b``, ``c``. Furthermore, ``dimensions`` will be
+``[N, I, J]`` to define the size ``N`` of the loop and the sizes ``I`` and ``J``
+for the core dimensions ``i`` and ``j``. Finally, ``steps`` will be
+``[a_N, b_N, c_N, a_i, a_j, b_i]``, containing all necessary strides.
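+
+As an illustration, here is a hedged sketch of an elementary function
+for the ``(i),(i)->()`` signature of ``inner1d``, specialized to
+``double`` data (the function name is arbitrary and error handling is
+omitted):
+
+.. code-block:: c
+
+    static void
+    double_inner1d(char **args, npy_intp *dimensions,
+                   npy_intp *steps, void *data)
+    {
+        /* dimensions[0] is the loop size N, dimensions[1] the core size I */
+        npy_intp n = dimensions[0], core = dimensions[1];
+        /* steps[0..2]: outer strides of a, b, out; steps[3..4]: core strides */
+        char *a = args[0], *b = args[1], *out = args[2];
+        npy_intp i, j;
+
+        for (i = 0; i < n; i++) {
+            double sum = 0.0;
+            char *pa = a, *pb = b;
+            for (j = 0; j < core; j++) {
+                sum += *(double *)pa * *(double *)pb;
+                pa += steps[3];
+                pb += steps[4];
+            }
+            *(double *)out = sum;
+            a += steps[0];
+            b += steps[1];
+            out += steps[2];
+        }
+    }
+
+Such a loop would then be registered with
+``PyUFunc_FromFuncAndDataAndSignature`` together with the signature
+string ``"(i),(i)->()"``.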
diff --git a/doc/source/reference/c-api/index.rst b/doc/source/reference/c-api/index.rst
new file mode 100644
index 000000000..56fe8e473
--- /dev/null
+++ b/doc/source/reference/c-api/index.rst
@@ -0,0 +1,51 @@
+.. _c-api:
+
+###########
+NumPy C-API
+###########
+
+.. sectionauthor:: Travis E. Oliphant
+
+| Beware of the man who won't be bothered with details.
+| --- *William Feather, Sr.*
+
+| The truth is out there.
+| --- *Chris Carter, The X Files*
+
+
+NumPy provides a C-API to enable users to extend the system and get
+access to the array object for use in other routines. The best way to
+truly understand the C-API is to read the source code. If you are
+unfamiliar with (C) source code, however, this can be a daunting
+experience at first. Be assured that the task becomes easier with
+practice, and you may be surprised at how simple the C-code can be to
+understand. Even if you don't think you can write C-code from scratch,
+it is much easier to understand and modify already-written source code
+than to create it *de novo*.
+
+Python extensions are especially straightforward to understand because
+they all have a very similar structure. Admittedly, NumPy is not a
+trivial extension to Python, and may take a little more snooping to
+grasp. This is especially true because of the code-generation
+techniques, which simplify maintenance of very similar code, but can
+make the code a little less readable to beginners. Still, with a
+little persistence, the code can be opened to your understanding. It
+is my hope that this guide to the C-API can assist in the process of
+becoming familiar with the compiled-level work that can be done with
+NumPy in order to squeeze that last bit of necessary speed out of your
+code.
+
+.. currentmodule:: numpy-c-api
+
+.. toctree::
+ :maxdepth: 2
+
+ types-and-structures
+ config
+ dtype
+ array
+ iterator
+ ufunc
+ generalized-ufuncs
+ coremath
+ deprecations
diff --git a/doc/source/reference/c-api/iterator.rst b/doc/source/reference/c-api/iterator.rst
new file mode 100644
index 000000000..b77d029cc
--- /dev/null
+++ b/doc/source/reference/c-api/iterator.rst
@@ -0,0 +1,1322 @@
+Array Iterator API
+==================
+
+.. sectionauthor:: Mark Wiebe
+
+.. index::
+ pair: iterator; C-API
+ pair: C-API; iterator
+
+.. versionadded:: 1.6
+
+Array Iterator
+--------------
+
+The array iterator encapsulates many of the key features in ufuncs,
+allowing user code to support features like output parameters,
+preservation of memory layouts, and buffering of data with the wrong
+alignment or type, without requiring difficult coding.
+
+This page documents the API for the iterator.
+The iterator is named ``NpyIter`` and functions are
+named ``NpyIter_*``.
+
+There is an :ref:`introductory guide to array iteration <arrays.nditer>`
+which may be of interest for those using this C API. In many instances,
+testing out ideas by creating the iterator in Python is a good idea
+before writing the C iteration code.
+
+Simple Iteration Example
+------------------------
+
+The best way to become familiar with the iterator is to look at its
+usage within the NumPy codebase itself. For example, here is a slightly
+tweaked version of the code for :c:func:`PyArray_CountNonzero`, which counts the
+number of non-zero elements in an array.
+
+.. code-block:: c
+
+ npy_intp PyArray_CountNonzero(PyArrayObject* self)
+ {
+ /* Nonzero boolean function */
+ PyArray_NonzeroFunc* nonzero = PyArray_DESCR(self)->f->nonzero;
+
+ NpyIter* iter;
+ NpyIter_IterNextFunc *iternext;
+ char** dataptr;
+ npy_intp nonzero_count;
+ npy_intp* strideptr,* innersizeptr;
+
+ /* Handle zero-sized arrays specially */
+ if (PyArray_SIZE(self) == 0) {
+ return 0;
+ }
+
+ /*
+ * Create and use an iterator to count the nonzeros.
+ * flag NPY_ITER_READONLY
+ * - The array is never written to.
+ * flag NPY_ITER_EXTERNAL_LOOP
+ * - Inner loop is done outside the iterator for efficiency.
+         *    flag NPY_ITER_REFS_OK
+ * - Reference types are acceptable.
+ * order NPY_KEEPORDER
+ * - Visit elements in memory order, regardless of strides.
+ * This is good for performance when the specific order
+ * elements are visited is unimportant.
+ * casting NPY_NO_CASTING
+ * - No casting is required for this operation.
+ */
+ iter = NpyIter_New(self, NPY_ITER_READONLY|
+ NPY_ITER_EXTERNAL_LOOP|
+ NPY_ITER_REFS_OK,
+ NPY_KEEPORDER, NPY_NO_CASTING,
+ NULL);
+ if (iter == NULL) {
+ return -1;
+ }
+
+ /*
+ * The iternext function gets stored in a local variable
+ * so it can be called repeatedly in an efficient manner.
+ */
+ iternext = NpyIter_GetIterNext(iter, NULL);
+ if (iternext == NULL) {
+ NpyIter_Deallocate(iter);
+ return -1;
+ }
+ /* The location of the data pointer which the iterator may update */
+ dataptr = NpyIter_GetDataPtrArray(iter);
+ /* The location of the stride which the iterator may update */
+ strideptr = NpyIter_GetInnerStrideArray(iter);
+ /* The location of the inner loop size which the iterator may update */
+ innersizeptr = NpyIter_GetInnerLoopSizePtr(iter);
+
+ nonzero_count = 0;
+ do {
+ /* Get the inner loop data/stride/count values */
+ char* data = *dataptr;
+ npy_intp stride = *strideptr;
+ npy_intp count = *innersizeptr;
+
+ /* This is a typical inner loop for NPY_ITER_EXTERNAL_LOOP */
+ while (count--) {
+ if (nonzero(data, self)) {
+ ++nonzero_count;
+ }
+ data += stride;
+ }
+
+ /* Increment the iterator to the next inner loop */
+ } while(iternext(iter));
+
+ NpyIter_Deallocate(iter);
+
+ return nonzero_count;
+ }
+
+Simple Multi-Iteration Example
+------------------------------
+
+Here is a simple copy function using the iterator. The ``order`` parameter
+is used to control the memory layout of the allocated result;
+:c:data:`NPY_KEEPORDER` is typically desired.
+
+.. code-block:: c
+
+ PyObject *CopyArray(PyObject *arr, NPY_ORDER order)
+ {
+ NpyIter *iter;
+ NpyIter_IterNextFunc *iternext;
+ PyObject *op[2], *ret;
+ npy_uint32 flags;
+ npy_uint32 op_flags[2];
+ npy_intp itemsize, *innersizeptr, innerstride;
+ char **dataptrarray;
+
+ /*
+ * No inner iteration - inner loop is handled by CopyArray code
+ */
+ flags = NPY_ITER_EXTERNAL_LOOP;
+ /*
+ * Tell the constructor to automatically allocate the output.
+ * The data type of the output will match that of the input.
+ */
+ op[0] = arr;
+ op[1] = NULL;
+ op_flags[0] = NPY_ITER_READONLY;
+ op_flags[1] = NPY_ITER_WRITEONLY | NPY_ITER_ALLOCATE;
+
+ /* Construct the iterator */
+ iter = NpyIter_MultiNew(2, op, flags, order, NPY_NO_CASTING,
+ op_flags, NULL);
+ if (iter == NULL) {
+ return NULL;
+ }
+
+ /*
+ * Make a copy of the iternext function pointer and
+ * a few other variables the inner loop needs.
+ */
+ iternext = NpyIter_GetIterNext(iter, NULL);
+ innerstride = NpyIter_GetInnerStrideArray(iter)[0];
+ itemsize = NpyIter_GetDescrArray(iter)[0]->elsize;
+ /*
+ * The inner loop size and data pointers may change during the
+ * loop, so just cache the addresses.
+ */
+ innersizeptr = NpyIter_GetInnerLoopSizePtr(iter);
+ dataptrarray = NpyIter_GetDataPtrArray(iter);
+
+ /*
+ * Note that because the iterator allocated the output,
+ * it matches the iteration order and is packed tightly,
+ * so we don't need to check it like the input.
+ */
+ if (innerstride == itemsize) {
+ do {
+ memcpy(dataptrarray[1], dataptrarray[0],
+ itemsize * (*innersizeptr));
+ } while (iternext(iter));
+ } else {
+ /* For efficiency, should specialize this based on item size... */
+ npy_intp i;
+ do {
+ npy_intp size = *innersizeptr;
+ char *src = dataptrarray[0], *dst = dataptrarray[1];
+ for(i = 0; i < size; i++, src += innerstride, dst += itemsize) {
+ memcpy(dst, src, itemsize);
+ }
+ } while (iternext(iter));
+ }
+
+ /* Get the result from the iterator object array */
+ ret = NpyIter_GetOperandArray(iter)[1];
+ Py_INCREF(ret);
+
+ if (NpyIter_Deallocate(iter) != NPY_SUCCEED) {
+ Py_DECREF(ret);
+ return NULL;
+ }
+
+ return ret;
+ }
+
+
+Iterator Data Types
+---------------------
+
+The iterator layout is an internal detail, and user code only sees
+an incomplete struct.
+
+.. c:type:: NpyIter
+
+ This is an opaque pointer type for the iterator. Access to its contents
+ can only be done through the iterator API.
+
+.. c:type:: NpyIter_Type
+
+ This is the type which exposes the iterator to Python. Currently, no
+ API is exposed which provides access to the values of a Python-created
+ iterator. If an iterator is created in Python, it must be used in Python
+ and vice versa. Such an API will likely be created in a future version.
+
+.. c:type:: NpyIter_IterNextFunc
+
+ This is a function pointer for the iteration loop, returned by
+ :c:func:`NpyIter_GetIterNext`.
+
+.. c:type:: NpyIter_GetMultiIndexFunc
+
+ This is a function pointer for getting the current iterator multi-index,
+ returned by :c:func:`NpyIter_GetGetMultiIndex`.
+
+Construction and Destruction
+----------------------------
+
+.. c:function:: NpyIter* NpyIter_New( \
+ PyArrayObject* op, npy_uint32 flags, NPY_ORDER order, \
+ NPY_CASTING casting, PyArray_Descr* dtype)
+
+ Creates an iterator for the given numpy array object ``op``.
+
+ Flags that may be passed in ``flags`` are any combination
+ of the global and per-operand flags documented in
+ :c:func:`NpyIter_MultiNew`, except for :c:data:`NPY_ITER_ALLOCATE`.
+
+ Any of the :c:type:`NPY_ORDER` enum values may be passed to ``order``. For
+ efficient iteration, :c:type:`NPY_KEEPORDER` is the best option, and
+ the other orders enforce the particular iteration pattern.
+
+ Any of the :c:type:`NPY_CASTING` enum values may be passed to ``casting``.
+ The values include :c:data:`NPY_NO_CASTING`, :c:data:`NPY_EQUIV_CASTING`,
+ :c:data:`NPY_SAFE_CASTING`, :c:data:`NPY_SAME_KIND_CASTING`, and
+ :c:data:`NPY_UNSAFE_CASTING`. To allow the casts to occur, copying or
+ buffering must also be enabled.
+
+ If ``dtype`` isn't ``NULL``, then it requires that data type.
+ If copying is allowed, it will make a temporary copy if the data
+ is castable. If :c:data:`NPY_ITER_UPDATEIFCOPY` is enabled, it will
+ also copy the data back with another cast upon iterator destruction.
+
+ Returns NULL if there is an error, otherwise returns the allocated
+ iterator.
+
+ To make an iterator similar to the old iterator, this should work.
+
+ .. code-block:: c
+
+ iter = NpyIter_New(op, NPY_ITER_READWRITE,
+ NPY_CORDER, NPY_NO_CASTING, NULL);
+
+    If you want to edit the array as aligned ``double`` data,
+    but the iteration order doesn't matter, you would use this.
+
+ .. code-block:: c
+
+ dtype = PyArray_DescrFromType(NPY_DOUBLE);
+ iter = NpyIter_New(op, NPY_ITER_READWRITE|
+ NPY_ITER_BUFFERED|
+ NPY_ITER_NBO|
+ NPY_ITER_ALIGNED,
+ NPY_KEEPORDER,
+ NPY_SAME_KIND_CASTING,
+ dtype);
+ Py_DECREF(dtype);
+
+.. c:function:: NpyIter* NpyIter_MultiNew( \
+ npy_intp nop, PyArrayObject** op, npy_uint32 flags, NPY_ORDER order, \
+ NPY_CASTING casting, npy_uint32* op_flags, PyArray_Descr** op_dtypes)
+
+ Creates an iterator for broadcasting the ``nop`` array objects provided
+ in ``op``, using regular NumPy broadcasting rules.
+
+ Any of the :c:type:`NPY_ORDER` enum values may be passed to ``order``. For
+ efficient iteration, :c:data:`NPY_KEEPORDER` is the best option, and the
+ other orders enforce the particular iteration pattern. When using
+ :c:data:`NPY_KEEPORDER`, if you also want to ensure that the iteration is
+ not reversed along an axis, you should pass the flag
+ :c:data:`NPY_ITER_DONT_NEGATE_STRIDES`.
+
+ Any of the :c:type:`NPY_CASTING` enum values may be passed to ``casting``.
+ The values include :c:data:`NPY_NO_CASTING`, :c:data:`NPY_EQUIV_CASTING`,
+ :c:data:`NPY_SAFE_CASTING`, :c:data:`NPY_SAME_KIND_CASTING`, and
+ :c:data:`NPY_UNSAFE_CASTING`. To allow the casts to occur, copying or
+ buffering must also be enabled.
+
+ If ``op_dtypes`` isn't ``NULL``, it specifies a data type or ``NULL``
+ for each ``op[i]``.
+
+ Returns NULL if there is an error, otherwise returns the allocated
+ iterator.
+
+ Flags that may be passed in ``flags``, applying to the whole
+ iterator, are:
+
+ .. c:var:: NPY_ITER_C_INDEX
+
+ Causes the iterator to track a raveled flat index matching C
+ order. This option cannot be used with :c:data:`NPY_ITER_F_INDEX`.
+
+ .. c:var:: NPY_ITER_F_INDEX
+
+ Causes the iterator to track a raveled flat index matching Fortran
+ order. This option cannot be used with :c:data:`NPY_ITER_C_INDEX`.
+
+ .. c:var:: NPY_ITER_MULTI_INDEX
+
+ Causes the iterator to track a multi-index.
+ This prevents the iterator from coalescing axes to
+ produce bigger inner loops. If the loop is also not buffered
+ and no index is being tracked (`NpyIter_RemoveAxis` can be called),
+ then the iterator size can be ``-1`` to indicate that the iterator
+ is too large. This can happen due to complex broadcasting and
+        will result in errors being created when setting the iterator
+        range, removing the multi-index, or getting the next function.
+ However, it is possible to remove axes again and use the iterator
+ normally if the size is small enough after removal.
+
+ .. c:var:: NPY_ITER_EXTERNAL_LOOP
+
+ Causes the iterator to skip iteration of the innermost
+ loop, requiring the user of the iterator to handle it.
+
+ This flag is incompatible with :c:data:`NPY_ITER_C_INDEX`,
+ :c:data:`NPY_ITER_F_INDEX`, and :c:data:`NPY_ITER_MULTI_INDEX`.
+
+ .. c:var:: NPY_ITER_DONT_NEGATE_STRIDES
+
+ This only affects the iterator when :c:type:`NPY_KEEPORDER` is
+ specified for the order parameter. By default with
+ :c:type:`NPY_KEEPORDER`, the iterator reverses axes which have
+ negative strides, so that memory is traversed in a forward
+ direction. This disables this step. Use this flag if you
+ want to use the underlying memory-ordering of the axes,
+ but don't want an axis reversed. This is the behavior of
+ ``numpy.ravel(a, order='K')``, for instance.
+
+ .. c:var:: NPY_ITER_COMMON_DTYPE
+
+ Causes the iterator to convert all the operands to a common
+ data type, calculated based on the ufunc type promotion rules.
+ Copying or buffering must be enabled.
+
+ If the common data type is known ahead of time, don't use this
+ flag. Instead, set the requested dtype for all the operands.
+
+ .. c:var:: NPY_ITER_REFS_OK
+
+ Indicates that arrays with reference types (object
+ arrays or structured arrays containing an object type)
+ may be accepted and used in the iterator. If this flag
+ is enabled, the caller must be sure to check whether
+ :c:func:`NpyIter_IterationNeedsAPI(iter)` is true, in which case
+ it may not release the GIL during iteration.
+
+ .. c:var:: NPY_ITER_ZEROSIZE_OK
+
+ Indicates that arrays with a size of zero should be permitted.
+ Since the typical iteration loop does not naturally work with
+ zero-sized arrays, you must check that the IterSize is larger
+ than zero before entering the iteration loop.
+ Currently only the operands are checked, not a forced shape.
+
+ .. c:var:: NPY_ITER_REDUCE_OK
+
+ Permits writeable operands with a dimension with zero
+ stride and size greater than one. Note that such operands
+ must be read/write.
+
+ When buffering is enabled, this also switches to a special
+ buffering mode which reduces the loop length as necessary to
+ not trample on values being reduced.
+
+ Note that if you want to do a reduction on an automatically
+ allocated output, you must use :c:func:`NpyIter_GetOperandArray`
+ to get its reference, then set every value to the reduction
+ unit before doing the iteration loop. In the case of a
+ buffered reduction, this means you must also specify the
+ flag :c:data:`NPY_ITER_DELAY_BUFALLOC`, then reset the iterator
+ after initializing the allocated operand to prepare the
+ buffers.
+
+ .. c:var:: NPY_ITER_RANGED
+
+ Enables support for iteration of sub-ranges of the full
+        ``iterindex`` range ``[0, NpyIter_GetIterSize(iter))``. Use
+ the function :c:func:`NpyIter_ResetToIterIndexRange` to specify
+ a range for iteration.
+
+ This flag can only be used with :c:data:`NPY_ITER_EXTERNAL_LOOP`
+ when :c:data:`NPY_ITER_BUFFERED` is enabled. This is because
+ without buffering, the inner loop is always the size of the
+ innermost iteration dimension, and allowing it to get cut up
+ would require special handling, effectively making it more
+ like the buffered version.
+
+ .. c:var:: NPY_ITER_BUFFERED
+
+ Causes the iterator to store buffering data, and use buffering
+ to satisfy data type, alignment, and byte-order requirements.
+ To buffer an operand, do not specify the :c:data:`NPY_ITER_COPY`
+ or :c:data:`NPY_ITER_UPDATEIFCOPY` flags, because they will
+ override buffering. Buffering is especially useful for Python
+ code using the iterator, allowing for larger chunks
+ of data at once to amortize the Python interpreter overhead.
+
+ If used with :c:data:`NPY_ITER_EXTERNAL_LOOP`, the inner loop
+ for the caller may get larger chunks than would be possible
+ without buffering, because of how the strides are laid out.
+
+ Note that if an operand is given the flag :c:data:`NPY_ITER_COPY`
+ or :c:data:`NPY_ITER_UPDATEIFCOPY`, a copy will be made in preference
+ to buffering. Buffering will still occur when the array was
+ broadcast so elements need to be duplicated to get a constant
+ stride.
+
+ In normal buffering, the size of each inner loop is equal
+ to the buffer size, or possibly larger if
+ :c:data:`NPY_ITER_GROWINNER` is specified. If
+ :c:data:`NPY_ITER_REDUCE_OK` is enabled and a reduction occurs,
+ the inner loops may become smaller depending
+ on the structure of the reduction.
+
+ .. c:var:: NPY_ITER_GROWINNER
+
+ When buffering is enabled, this allows the size of the inner
+ loop to grow when buffering isn't necessary. This option
+ is best used if you're doing a straight pass through all the
+ data, rather than anything with small cache-friendly arrays
+ of temporary values for each inner loop.
+
+ .. c:var:: NPY_ITER_DELAY_BUFALLOC
+
+ When buffering is enabled, this delays allocation of the
+ buffers until :c:func:`NpyIter_Reset` or another reset function is
+ called. This flag exists to avoid wasteful copying of
+ buffer data when making multiple copies of a buffered
+ iterator for multi-threaded iteration.
+
+ Another use of this flag is for setting up reduction operations.
+ After the iterator is created, and a reduction output
+ is allocated automatically by the iterator (be sure to use
+ READWRITE access), its value may be initialized to the reduction
+ unit. Use :c:func:`NpyIter_GetOperandArray` to get the object.
+ Then, call :c:func:`NpyIter_Reset` to allocate and fill the buffers
+ with their initial values.
+
+ .. c:var:: NPY_ITER_COPY_IF_OVERLAP
+
+ If any write operand has overlap with any read operand, eliminate all
+ overlap by making temporary copies (enabling UPDATEIFCOPY for write
+ operands, if necessary). A pair of operands has overlap if there is
+ a memory address that contains data common to both arrays.
+
+ Because exact overlap detection has exponential runtime
+ in the number of dimensions, the decision is made based
+ on heuristics, which has false positives (needless copies in unusual
+ cases) but has no false negatives.
+
+ If any read/write overlap exists, this flag ensures the result of the
+ operation is the same as if all operands were copied.
+ In cases where copies would need to be made, **the result of the
+ computation may be undefined without this flag!**
+
+ Flags that may be passed in ``op_flags[i]``, where ``0 <= i < nop``:
+
+ .. c:var:: NPY_ITER_READWRITE
+ .. c:var:: NPY_ITER_READONLY
+ .. c:var:: NPY_ITER_WRITEONLY
+
+ Indicate how the user of the iterator will read or write
+ to ``op[i]``. Exactly one of these flags must be specified
+ per operand. Using ``NPY_ITER_READWRITE`` or ``NPY_ITER_WRITEONLY``
+        for a user-provided operand may trigger ``WRITEBACKIFCOPY``
+ semantics. The data will be written back to the original array
+ when ``NpyIter_Deallocate`` is called.
+
+ .. c:var:: NPY_ITER_COPY
+
+ Allow a copy of ``op[i]`` to be made if it does not
+ meet the data type or alignment requirements as specified
+ by the constructor flags and parameters.
+
+ .. c:var:: NPY_ITER_UPDATEIFCOPY
+
+ Triggers :c:data:`NPY_ITER_COPY`, and when an array operand
+ is flagged for writing and is copied, causes the data
+ in a copy to be copied back to ``op[i]`` when
+ ``NpyIter_Deallocate`` is called.
+
+ If the operand is flagged as write-only and a copy is needed,
+ an uninitialized temporary array will be created and then copied
+        back to ``op[i]`` on calling ``NpyIter_Deallocate``, instead of
+ doing the unnecessary copy operation.
+
+ .. c:var:: NPY_ITER_NBO
+ .. c:var:: NPY_ITER_ALIGNED
+ .. c:var:: NPY_ITER_CONTIG
+
+ Causes the iterator to provide data for ``op[i]``
+ that is in native byte order, aligned according to
+ the dtype requirements, contiguous, or any combination.
+
+ By default, the iterator produces pointers into the
+ arrays provided, which may be aligned or unaligned, and
+ with any byte order. If copying or buffering is not
+ enabled and the operand data doesn't satisfy the constraints,
+ an error will be raised.
+
+        The contiguous constraint applies only to the inner loop;
+        successive inner loops may have arbitrary pointer changes.
+
+ If the requested data type is in non-native byte order,
+ the NBO flag overrides it and the requested data type is
+ converted to be in native byte order.
+
+ .. c:var:: NPY_ITER_ALLOCATE
+
+ This is for output arrays, and requires that the flag
+ :c:data:`NPY_ITER_WRITEONLY` or :c:data:`NPY_ITER_READWRITE`
+ be set. If ``op[i]`` is NULL, creates a new array with
+ the final broadcast dimensions, and a layout matching
+ the iteration order of the iterator.
+
+ When ``op[i]`` is NULL, the requested data type
+ ``op_dtypes[i]`` may be NULL as well, in which case it is
+ automatically generated from the dtypes of the arrays which
+ are flagged as readable. The rules for generating the dtype
+        are the same as for UFuncs. Of special note is the handling
+ of byte order in the selected dtype. If there is exactly
+ one input, the input's dtype is used as is. Otherwise,
+        if more than one input dtype is combined, the
+ output will be in native byte order.
+
+ After being allocated with this flag, the caller may retrieve
+ the new array by calling :c:func:`NpyIter_GetOperandArray` and
+ getting the i-th object in the returned C array. The caller
+ must call Py_INCREF on it to claim a reference to the array.
+
+ .. c:var:: NPY_ITER_NO_SUBTYPE
+
+ For use with :c:data:`NPY_ITER_ALLOCATE`, this flag disables
+ allocating an array subtype for the output, forcing
+ it to be a straight ndarray.
+
+ TODO: Maybe it would be better to introduce a function
+ ``NpyIter_GetWrappedOutput`` and remove this flag?
+
+ .. c:var:: NPY_ITER_NO_BROADCAST
+
+ Ensures that the input or output matches the iteration
+ dimensions exactly.
+
+ .. c:var:: NPY_ITER_ARRAYMASK
+
+ .. versionadded:: 1.7
+
+ Indicates that this operand is the mask to use for
+ selecting elements when writing to operands which have
+ the :c:data:`NPY_ITER_WRITEMASKED` flag applied to them.
+ Only one operand may have :c:data:`NPY_ITER_ARRAYMASK` flag
+ applied to it.
+
+ The data type of an operand with this flag should be either
+ :c:data:`NPY_BOOL`, :c:data:`NPY_MASK`, or a struct dtype
+ whose fields are all valid mask dtypes. In the latter case,
+ it must match up with a struct operand being WRITEMASKED,
+ as it is specifying a mask for each field of that array.
+
+ This flag only affects writing from the buffer back to
+ the array. This means that if the operand is also
+ :c:data:`NPY_ITER_READWRITE` or :c:data:`NPY_ITER_WRITEONLY`,
+ code doing iteration can write to this operand to
+ control which elements will be untouched and which ones will be
+ modified. This is useful when the mask should be a combination
+ of input masks.
+
+ .. c:var:: NPY_ITER_WRITEMASKED
+
+ .. versionadded:: 1.7
+
+        This operand is a `writemasked <numpy.nditer>` operand: only
+        elements where the chosen ARRAYMASK operand is True will be
+        written to. In general, the iterator does not enforce this;
+        it is up to the code doing the iteration to follow that
+        promise.
+
+        When the ``writemasked`` flag is used, and this operand is buffered,
+ this changes how data is copied from the buffer into the array.
+ A masked copying routine is used, which only copies the
+ elements in the buffer for which ``writemasked``
+ returns true from the corresponding element in the ARRAYMASK
+ operand.
+
+ .. c:var:: NPY_ITER_OVERLAP_ASSUME_ELEMENTWISE
+
+ In memory overlap checks, assume that operands with
+ ``NPY_ITER_OVERLAP_ASSUME_ELEMENTWISE`` enabled are accessed only
+ in the iterator order.
+
+ This enables the iterator to reason about data dependency,
+ possibly avoiding unnecessary copies.
+
+ This flag has effect only if ``NPY_ITER_COPY_IF_OVERLAP`` is enabled
+ on the iterator.
+
+.. c:function:: NpyIter* NpyIter_AdvancedNew( \
+ npy_intp nop, PyArrayObject** op, npy_uint32 flags, NPY_ORDER order, \
+ NPY_CASTING casting, npy_uint32* op_flags, PyArray_Descr** op_dtypes, \
+ int oa_ndim, int** op_axes, npy_intp const* itershape, npy_intp buffersize)
+
+ Extends :c:func:`NpyIter_MultiNew` with several advanced options providing
+ more control over broadcasting and buffering.
+
+ If -1/NULL values are passed to ``oa_ndim``, ``op_axes``, ``itershape``,
+ and ``buffersize``, it is equivalent to :c:func:`NpyIter_MultiNew`.
+
+ The parameter ``oa_ndim``, when not zero or -1, specifies the number of
+ dimensions that will be iterated with customized broadcasting.
+    If it is provided, ``op_axes`` must also be provided, and
+    ``itershape`` may be provided as well. The ``op_axes`` parameter
+    lets you control in detail how the axes of the operand arrays get
+    matched together and iterated. In ``op_axes``, you must provide an
+    array of ``nop`` pointers to ``oa_ndim``-sized arrays of type
+    ``int``. If an entry in ``op_axes`` is NULL, normal broadcasting
+    rules will apply. The entry ``op_axes[j][i]`` stores either a valid
+    axis of ``op[j]``, or -1 which means ``newaxis``. Within each
+    ``op_axes[j]`` array, axes may not be repeated. The following
+    example shows how normal broadcasting applies to a 3-D array, a
+    2-D array, a 1-D array and a scalar.
+
+    **Note**: Before NumPy 1.8, ``oa_ndim == 0`` was used for signalling
+    that ``op_axes`` and ``itershape`` are unused. This is deprecated and
+ should be replaced with -1. Better backward compatibility may be
+ achieved by using :c:func:`NpyIter_MultiNew` for this case.
+
+ .. code-block:: c
+
+ int oa_ndim = 3; /* # iteration axes */
+ int op0_axes[] = {0, 1, 2}; /* 3-D operand */
+ int op1_axes[] = {-1, 0, 1}; /* 2-D operand */
+ int op2_axes[] = {-1, -1, 0}; /* 1-D operand */
+        int op3_axes[] = {-1, -1, -1};  /* 0-D (scalar) operand */
+ int* op_axes[] = {op0_axes, op1_axes, op2_axes, op3_axes};
+
+ The ``itershape`` parameter allows you to force the iterator
+ to have a specific iteration shape. It is an array of length
+ ``oa_ndim``. When an entry is negative, its value is determined
+ from the operands. This parameter allows automatically allocated
+ outputs to get additional dimensions which don't match up with
+ any dimension of an input.
+
+ If ``buffersize`` is zero, a default buffer size is used,
+    otherwise it specifies how big of a buffer to use. Buffer sizes
+    which are powers of 2, such as 4096 or 8192, are recommended.
+
+ Returns NULL if there is an error, otherwise returns the allocated
+ iterator.
+
+.. c:function:: NpyIter* NpyIter_Copy(NpyIter* iter)
+
+ Makes a copy of the given iterator. This function is provided
+ primarily to enable multi-threaded iteration of the data.
+
+ *TODO*: Move this to a section about multithreaded iteration.
+
+ The recommended approach to multithreaded iteration is to
+ first create an iterator with the flags
+ :c:data:`NPY_ITER_EXTERNAL_LOOP`, :c:data:`NPY_ITER_RANGED`,
+ :c:data:`NPY_ITER_BUFFERED`, :c:data:`NPY_ITER_DELAY_BUFALLOC`, and
+ possibly :c:data:`NPY_ITER_GROWINNER`. Create a copy of this iterator
+ for each thread (minus one for the first iterator). Then, take
+ the iteration index range ``[0, NpyIter_GetIterSize(iter))`` and
+ split it up into tasks, for example using a TBB parallel_for loop.
+ When a thread gets a task to execute, it then uses its copy of
+ the iterator by calling :c:func:`NpyIter_ResetToIterIndexRange` and
+ iterating over the full range.
+
+ When using the iterator in multi-threaded code or in code not
+ holding the Python GIL, care must be taken to only call functions
+ which are safe in that context. :c:func:`NpyIter_Copy` cannot be safely
+ called without the Python GIL, because it increments Python
+ references. The ``Reset*`` and some other functions may be safely
+ called by passing in the ``errmsg`` parameter as non-NULL, so that
+ the functions will pass back errors through it instead of setting
+ a Python exception.
+
+ :c:func:`NpyIter_Deallocate` must be called for each copy.
+
+.. c:function:: int NpyIter_RemoveAxis(NpyIter* iter, int axis)
+
+ Removes an axis from iteration. This requires that
+ :c:data:`NPY_ITER_MULTI_INDEX` was set for iterator creation, and does
+ not work if buffering is enabled or an index is being tracked. This
+ function also resets the iterator to its initial state.
+
+ This is useful for setting up an accumulation loop, for example.
+ The iterator can first be created with all the dimensions, including
+ the accumulation axis, so that the output gets created correctly.
+ Then, the accumulation axis can be removed, and the calculation
+ done in a nested fashion.
+
+ **WARNING**: This function may change the internal memory layout of
+ the iterator. Any cached functions or pointers from the iterator
+ must be retrieved again! The iterator range will be reset as well.
+
+ Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
+
+
+.. c:function:: int NpyIter_RemoveMultiIndex(NpyIter* iter)
+
+ If the iterator is tracking a multi-index, this strips support for them,
+ and does further iterator optimizations that are possible if multi-indices
+ are not needed. This function also resets the iterator to its initial
+ state.
+
+ **WARNING**: This function may change the internal memory layout of
+ the iterator. Any cached functions or pointers from the iterator
+ must be retrieved again!
+
+ After calling this function, :c:func:`NpyIter_HasMultiIndex(iter)` will
+ return false.
+
+ Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
+
+.. c:function:: int NpyIter_EnableExternalLoop(NpyIter* iter)
+
+ If :c:func:`NpyIter_RemoveMultiIndex` was called, you may want to enable the
+ flag :c:data:`NPY_ITER_EXTERNAL_LOOP`. This flag is not permitted
+ together with :c:data:`NPY_ITER_MULTI_INDEX`, so this function is provided
+ to enable the feature after :c:func:`NpyIter_RemoveMultiIndex` is called.
+ This function also resets the iterator to its initial state.
+
+ **WARNING**: This function changes the internal logic of the iterator.
+ Any cached functions or pointers from the iterator must be retrieved
+ again!
+
+ Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
+
+.. c:function:: int NpyIter_Deallocate(NpyIter* iter)
+
+ Deallocates the iterator object and resolves any needed writebacks.
+
+ Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
+
+.. c:function:: int NpyIter_Reset(NpyIter* iter, char** errmsg)
+
+ Resets the iterator back to its initial state, at the beginning
+ of the iteration range.
+
+ Returns ``NPY_SUCCEED`` or ``NPY_FAIL``. If errmsg is non-NULL,
+ no Python exception is set when ``NPY_FAIL`` is returned.
+ Instead, \*errmsg is set to an error message. When errmsg is
+ non-NULL, the function may be safely called without holding
+ the Python GIL.
+
+.. c:function:: int NpyIter_ResetToIterIndexRange( \
+ NpyIter* iter, npy_intp istart, npy_intp iend, char** errmsg)
+
+ Resets the iterator and restricts it to the ``iterindex`` range
+ ``[istart, iend)``. See :c:func:`NpyIter_Copy` for an explanation of
+ how to use this for multi-threaded iteration. This requires that
+ the flag :c:data:`NPY_ITER_RANGED` was passed to the iterator constructor.
+
+ If you want to reset both the ``iterindex`` range and the base
+ pointers at the same time, you can do the following to avoid
+ extra buffer copying (be sure to add the return code error checks
+ when you copy this code).
+
+ .. code-block:: c
+
+ /* Set to a trivial empty range */
+ NpyIter_ResetToIterIndexRange(iter, 0, 0);
+ /* Set the base pointers */
+ NpyIter_ResetBasePointers(iter, baseptrs);
+ /* Set to the desired range */
+ NpyIter_ResetToIterIndexRange(iter, istart, iend);
+
+ Returns ``NPY_SUCCEED`` or ``NPY_FAIL``. If errmsg is non-NULL,
+ no Python exception is set when ``NPY_FAIL`` is returned.
+ Instead, \*errmsg is set to an error message. When errmsg is
+ non-NULL, the function may be safely called without holding
+ the Python GIL.
+
+.. c:function:: int NpyIter_ResetBasePointers( \
+ NpyIter *iter, char** baseptrs, char** errmsg)
+
+ Resets the iterator back to its initial state, but using the values
+ in ``baseptrs`` for the data instead of the pointers from the arrays
+    being iterated. This function is intended to be used, together with
+ the ``op_axes`` parameter, by nested iteration code with two or more
+ iterators.
+
+ Returns ``NPY_SUCCEED`` or ``NPY_FAIL``. If errmsg is non-NULL,
+ no Python exception is set when ``NPY_FAIL`` is returned.
+ Instead, \*errmsg is set to an error message. When errmsg is
+ non-NULL, the function may be safely called without holding
+ the Python GIL.
+
+ *TODO*: Move the following into a special section on nested iterators.
+
+ Creating iterators for nested iteration requires some care. All
+ the iterator operands must match exactly, or the calls to
+ :c:func:`NpyIter_ResetBasePointers` will be invalid. This means that
+ automatic copies and output allocation should not be used haphazardly.
+ It is possible to still use the automatic data conversion and casting
+ features of the iterator by creating one of the iterators with
+ all the conversion parameters enabled, then grabbing the allocated
+ operands with the :c:func:`NpyIter_GetOperandArray` function and passing
+ them into the constructors for the rest of the iterators.
+
+ **WARNING**: When creating iterators for nested iteration,
+ the code must not use a dimension more than once in the different
+ iterators. If this is done, nested iteration will produce
+ out-of-bounds pointers during iteration.
+
+ **WARNING**: When creating iterators for nested iteration, buffering
+ can only be applied to the innermost iterator. If a buffered iterator
+ is used as the source for ``baseptrs``, it will point into a small buffer
+ instead of the array and the inner iteration will be invalid.
+
+ The pattern for using nested iterators is as follows.
+
+ .. code-block:: c
+
+        NpyIter *iter1, *iter2;
+ NpyIter_IterNextFunc *iternext1, *iternext2;
+ char **dataptrs1;
+
+ /*
+ * With the exact same operands, no copies allowed, and
+ * no axis in op_axes used both in iter1 and iter2.
+ * Buffering may be enabled for iter2, but not for iter1.
+ */
+ iter1 = ...; iter2 = ...;
+
+        iternext1 = NpyIter_GetIterNext(iter1, NULL);
+        iternext2 = NpyIter_GetIterNext(iter2, NULL);
+ dataptrs1 = NpyIter_GetDataPtrArray(iter1);
+
+ do {
+            NpyIter_ResetBasePointers(iter2, dataptrs1, NULL);
+ do {
+ /* Use the iter2 values */
+ } while (iternext2(iter2));
+ } while (iternext1(iter1));
+
+.. c:function:: int NpyIter_GotoMultiIndex(NpyIter* iter, npy_intp const* multi_index)
+
+ Adjusts the iterator to point to the ``ndim`` indices
+ pointed to by ``multi_index``. Returns an error if a multi-index
+ is not being tracked, the indices are out of bounds,
+ or inner loop iteration is disabled.
+
+ Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
+
+.. c:function:: int NpyIter_GotoIndex(NpyIter* iter, npy_intp index)
+
+ Adjusts the iterator to point to the ``index`` specified.
+ If the iterator was constructed with the flag
+ :c:data:`NPY_ITER_C_INDEX`, ``index`` is the C-order index,
+ and if the iterator was constructed with the flag
+ :c:data:`NPY_ITER_F_INDEX`, ``index`` is the Fortran-order
+ index. Returns an error if there is no index being tracked,
+ the index is out of bounds, or inner loop iteration is disabled.
+
+ Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
+
+.. c:function:: npy_intp NpyIter_GetIterSize(NpyIter* iter)
+
+ Returns the number of elements being iterated. This is the product
+ of all the dimensions in the shape. When a multi index is being tracked
+ (and `NpyIter_RemoveAxis` may be called) the size may be ``-1`` to
+ indicate an iterator is too large. Such an iterator is invalid, but
+ may become valid after `NpyIter_RemoveAxis` is called. It is not
+ necessary to check for this case.
+
+.. c:function:: npy_intp NpyIter_GetIterIndex(NpyIter* iter)
+
+ Gets the ``iterindex`` of the iterator, which is an index matching
+ the iteration order of the iterator.
+
+.. c:function:: void NpyIter_GetIterIndexRange( \
+ NpyIter* iter, npy_intp* istart, npy_intp* iend)
+
+ Gets the ``iterindex`` sub-range that is being iterated. If
+ :c:data:`NPY_ITER_RANGED` was not specified, this always returns the
+    range ``[0, NpyIter_GetIterSize(iter))``.
+
+.. c:function:: int NpyIter_GotoIterIndex(NpyIter* iter, npy_intp iterindex)
+
+ Adjusts the iterator to point to the ``iterindex`` specified.
+ The IterIndex is an index matching the iteration order of the iterator.
+ Returns an error if the ``iterindex`` is out of bounds,
+ buffering is enabled, or inner loop iteration is disabled.
+
+ Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
+
+.. c:function:: npy_bool NpyIter_HasDelayedBufAlloc(NpyIter* iter)
+
+ Returns 1 if the flag :c:data:`NPY_ITER_DELAY_BUFALLOC` was passed
+ to the iterator constructor, and no call to one of the Reset
+ functions has been done yet, 0 otherwise.
+
+.. c:function:: npy_bool NpyIter_HasExternalLoop(NpyIter* iter)
+
+ Returns 1 if the caller needs to handle the inner-most 1-dimensional
+ loop, or 0 if the iterator handles all looping. This is controlled
+ by the constructor flag :c:data:`NPY_ITER_EXTERNAL_LOOP` or
+ :c:func:`NpyIter_EnableExternalLoop`.
+
+.. c:function:: npy_bool NpyIter_HasMultiIndex(NpyIter* iter)
+
+ Returns 1 if the iterator was created with the
+ :c:data:`NPY_ITER_MULTI_INDEX` flag, 0 otherwise.
+
+.. c:function:: npy_bool NpyIter_HasIndex(NpyIter* iter)
+
+ Returns 1 if the iterator was created with the
+ :c:data:`NPY_ITER_C_INDEX` or :c:data:`NPY_ITER_F_INDEX`
+ flag, 0 otherwise.
+
+.. c:function:: npy_bool NpyIter_RequiresBuffering(NpyIter* iter)
+
+ Returns 1 if the iterator requires buffering, which occurs
+ when an operand needs conversion or alignment and so cannot
+ be used directly.
+
+.. c:function:: npy_bool NpyIter_IsBuffered(NpyIter* iter)
+
+ Returns 1 if the iterator was created with the
+ :c:data:`NPY_ITER_BUFFERED` flag, 0 otherwise.
+
+.. c:function:: npy_bool NpyIter_IsGrowInner(NpyIter* iter)
+
+ Returns 1 if the iterator was created with the
+ :c:data:`NPY_ITER_GROWINNER` flag, 0 otherwise.
+
+.. c:function:: npy_intp NpyIter_GetBufferSize(NpyIter* iter)
+
+ If the iterator is buffered, returns the size of the buffer
+ being used, otherwise returns 0.
+
+.. c:function:: int NpyIter_GetNDim(NpyIter* iter)
+
+ Returns the number of dimensions being iterated. If a multi-index
+ was not requested in the iterator constructor, this value
+ may be smaller than the number of dimensions in the original
+ objects.
+
+.. c:function:: int NpyIter_GetNOp(NpyIter* iter)
+
+ Returns the number of operands in the iterator.
+
+.. c:function:: npy_intp* NpyIter_GetAxisStrideArray(NpyIter* iter, int axis)
+
+ Gets the array of strides for the specified axis. Requires that
+ the iterator be tracking a multi-index, and that buffering not
+ be enabled.
+
+ This may be used when you want to match up operand axes in
+ some fashion, then remove them with :c:func:`NpyIter_RemoveAxis` to
+ handle their processing manually. By calling this function
+ before removing the axes, you can get the strides for the
+ manual processing.
+
+ Returns ``NULL`` on error.
+
+.. c:function:: int NpyIter_GetShape(NpyIter* iter, npy_intp* outshape)
+
+ Returns the broadcast shape of the iterator in ``outshape``.
+ This can only be called on an iterator which is tracking a multi-index.
+
+ Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
+
+.. c:function:: PyArray_Descr** NpyIter_GetDescrArray(NpyIter* iter)
+
+ This gives back a pointer to the ``nop`` data type Descrs for
+ the objects being iterated. The result points into ``iter``,
+ so the caller does not gain any references to the Descrs.
+
+ This pointer may be cached before the iteration loop, calling
+ ``iternext`` will not change it.
+
+.. c:function:: PyArrayObject** NpyIter_GetOperandArray(NpyIter* iter)
+
+ This gives back a pointer to the ``nop`` operand PyObjects
+ that are being iterated. The result points into ``iter``,
+ so the caller does not gain any references to the PyObjects.
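+
+    A common use is retrieving an operand that the iterator allocated
+    itself. A hedged sketch (assuming ``op[1]`` was passed as NULL with
+    the :c:data:`NPY_ITER_ALLOCATE` flag, and that error handling is done
+    elsewhere):
+
+    .. code-block:: c
+
+        PyArrayObject **operands = NpyIter_GetOperandArray(iter);
+        PyArrayObject *result = operands[1];
+
+        /* Take a reference of our own before the iterator, which owns
+         * the only other reference, is deallocated. */
+        Py_INCREF(result);
+        NpyIter_Deallocate(iter);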
+
+.. c:function:: PyArrayObject* NpyIter_GetIterView(NpyIter* iter, npy_intp i)
+
+ This gives back a reference to a new ndarray view, which is a view
+ into the i-th object in the array :c:func:`NpyIter_GetOperandArray()`,
+ whose dimensions and strides match the internal optimized
+ iteration pattern. A C-order iteration of this view is equivalent
+ to the iterator's iteration order.
+
+ For example, if an iterator was created with a single array as its
+ input, and it was possible to rearrange all its axes and then
+ collapse it into a single strided iteration, this would return
+ a view that is a one-dimensional array.
+
+.. c:function:: void NpyIter_GetReadFlags(NpyIter* iter, char* outreadflags)
+
+ Fills ``nop`` flags. Sets ``outreadflags[i]`` to 1 if
+ ``op[i]`` can be read from, and to 0 if not.
+
+.. c:function:: void NpyIter_GetWriteFlags(NpyIter* iter, char* outwriteflags)
+
+ Fills ``nop`` flags. Sets ``outwriteflags[i]`` to 1 if
+ ``op[i]`` can be written to, and to 0 if not.
+
+.. c:function:: int NpyIter_CreateCompatibleStrides( \
+ NpyIter* iter, npy_intp itemsize, npy_intp* outstrides)
+
+ Builds a set of strides which are the same as the strides of an
+ output array created using the :c:data:`NPY_ITER_ALLOCATE` flag, where NULL
+ was passed for op_axes. This is for data packed contiguously,
+ but not necessarily in C or Fortran order. This should be used
+ together with :c:func:`NpyIter_GetShape` and :c:func:`NpyIter_GetNDim`
+ with the flag :c:data:`NPY_ITER_MULTI_INDEX` passed into the constructor.
+
+ A use case for this function is to match the shape and layout of
+ the iterator and tack on one or more dimensions. For example,
+ in order to generate a vector per input value for a numerical gradient,
+ you pass in ndim*itemsize for itemsize, then add another dimension to
+ the end with size ndim and stride itemsize. To do the Hessian matrix,
+ you do the same thing but add two dimensions, or take advantage of
+ the symmetry and pack it into 1 dimension with a particular encoding.
+
+ This function may only be called if the iterator is tracking a multi-index
+ and if :c:data:`NPY_ITER_DONT_NEGATE_STRIDES` was used to prevent an axis
+ from being iterated in reverse order.
+
+ If an array is created with this method, simply adding 'itemsize'
+ for each iteration will traverse the new array matching the
+ iterator.
+
+ Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
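+
+    As a minimal sketch (assuming ``double`` elements and an iterator
+    created with :c:data:`NPY_ITER_MULTI_INDEX` and
+    :c:data:`NPY_ITER_DONT_NEGATE_STRIDES`), compatible strides can be
+    obtained like this:
+
+    .. code-block:: c
+
+        npy_intp shape[NPY_MAXDIMS], strides[NPY_MAXDIMS];
+        int ndim = NpyIter_GetNDim(iter);
+
+        if (NpyIter_GetShape(iter, shape) != NPY_SUCCEED) {
+            return NULL;
+        }
+        if (NpyIter_CreateCompatibleStrides(iter, sizeof(double),
+                                            strides) != NPY_SUCCEED) {
+            return NULL;
+        }
+        /* shape[0..ndim-1] and strides[0..ndim-1] now describe a buffer
+         * packed in the same order that the iterator visits elements */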
+
+.. c:function:: npy_bool NpyIter_IsFirstVisit(NpyIter* iter, int iop)
+
+ .. versionadded:: 1.7
+
+    Checks to see whether the elements of the specified reduction operand
+    which the iterator points at are being seen for the first time.
+    The function returns a reasonable answer
+ for reduction operands and when buffering is disabled. The answer
+ may be incorrect for buffered non-reduction operands.
+
+ This function is intended to be used in EXTERNAL_LOOP mode only,
+ and will produce some wrong answers when that mode is not enabled.
+
+ If this function returns true, the caller should also check the inner
+ loop stride of the operand, because if that stride is 0, then only
+ the first element of the innermost external loop is being visited
+ for the first time.
+
+ *WARNING*: For performance reasons, 'iop' is not bounds-checked,
+ it is not confirmed that 'iop' is actually a reduction operand,
+ and it is not confirmed that EXTERNAL_LOOP mode is enabled. These
+ checks are the responsibility of the caller, and should be done
+ outside of any inner loops.
+
+Functions For Iteration
+-----------------------
+
+.. c:function:: NpyIter_IterNextFunc* NpyIter_GetIterNext( \
+ NpyIter* iter, char** errmsg)
+
+ Returns a function pointer for iteration. A specialized version
+ of the function pointer may be calculated by this function
+ instead of being stored in the iterator structure. Thus, to
+ get good performance, it is required that the function pointer
+ be saved in a variable rather than retrieved for each loop iteration.
+
+ Returns NULL if there is an error. If errmsg is non-NULL,
+ no Python exception is set when ``NPY_FAIL`` is returned.
+ Instead, \*errmsg is set to an error message. When errmsg is
+ non-NULL, the function may be safely called without holding
+ the Python GIL.
+
+ The typical looping construct is as follows.
+
+ .. code-block:: c
+
+ NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL);
+ char** dataptr = NpyIter_GetDataPtrArray(iter);
+
+ do {
+ /* use the addresses dataptr[0], ... dataptr[nop-1] */
+ } while(iternext(iter));
+
+ When :c:data:`NPY_ITER_EXTERNAL_LOOP` is specified, the typical
+ inner loop construct is as follows.
+
+ .. code-block:: c
+
+ NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL);
+ char** dataptr = NpyIter_GetDataPtrArray(iter);
+ npy_intp* stride = NpyIter_GetInnerStrideArray(iter);
+ npy_intp* size_ptr = NpyIter_GetInnerLoopSizePtr(iter), size;
+ npy_intp iop, nop = NpyIter_GetNOp(iter);
+
+ do {
+ size = *size_ptr;
+ while (size--) {
+ /* use the addresses dataptr[0], ... dataptr[nop-1] */
+ for (iop = 0; iop < nop; ++iop) {
+ dataptr[iop] += stride[iop];
+ }
+ }
+        } while (iternext(iter));
+
+ Observe that we are using the dataptr array inside the iterator, not
+ copying the values to a local temporary. This is possible because
+ when ``iternext()`` is called, these pointers will be overwritten
+ with fresh values, not incrementally updated.
+
+ If a compile-time fixed buffer is being used (both flags
+ :c:data:`NPY_ITER_BUFFERED` and :c:data:`NPY_ITER_EXTERNAL_LOOP`), the
+ inner size may be used as a signal as well. The size is guaranteed
+ to become zero when ``iternext()`` returns false, enabling the
+ following loop construct. Note that if you use this construct,
+ you should not pass :c:data:`NPY_ITER_GROWINNER` as a flag, because it
+ will cause larger sizes under some circumstances.
+
+ .. code-block:: c
+
+ /* The constructor should have buffersize passed as this value */
+ #define FIXED_BUFFER_SIZE 1024
+
+ NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL);
+ char **dataptr = NpyIter_GetDataPtrArray(iter);
+ npy_intp *stride = NpyIter_GetInnerStrideArray(iter);
+ npy_intp *size_ptr = NpyIter_GetInnerLoopSizePtr(iter), size;
+ npy_intp i, iop, nop = NpyIter_GetNOp(iter);
+
+ /* One loop with a fixed inner size */
+ size = *size_ptr;
+ while (size == FIXED_BUFFER_SIZE) {
+ /*
+ * This loop could be manually unrolled by a factor
+ * which divides into FIXED_BUFFER_SIZE
+ */
+ for (i = 0; i < FIXED_BUFFER_SIZE; ++i) {
+ /* use the addresses dataptr[0], ... dataptr[nop-1] */
+ for (iop = 0; iop < nop; ++iop) {
+ dataptr[iop] += stride[iop];
+ }
+ }
+            iternext(iter);
+ size = *size_ptr;
+ }
+
+ /* Finish-up loop with variable inner size */
+ if (size > 0) do {
+ size = *size_ptr;
+ while (size--) {
+ /* use the addresses dataptr[0], ... dataptr[nop-1] */
+ for (iop = 0; iop < nop; ++iop) {
+ dataptr[iop] += stride[iop];
+ }
+ }
+        } while (iternext(iter));
+
+.. c:function:: NpyIter_GetMultiIndexFunc *NpyIter_GetGetMultiIndex( \
+ NpyIter* iter, char** errmsg)
+
+ Returns a function pointer for getting the current multi-index
+ of the iterator. Returns NULL if the iterator is not tracking
+ a multi-index. It is recommended that this function
+ pointer be cached in a local variable before the iteration
+ loop.
+
+ Returns NULL if there is an error. If errmsg is non-NULL,
+ no Python exception is set when ``NPY_FAIL`` is returned.
+ Instead, \*errmsg is set to an error message. When errmsg is
+ non-NULL, the function may be safely called without holding
+ the Python GIL.
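+
+    A minimal sketch of the usual pattern (assuming the iterator tracks a
+    multi-index and external loop mode is not enabled):
+
+    .. code-block:: c
+
+        NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL);
+        NpyIter_GetMultiIndexFunc *get_multi_index =
+                                    NpyIter_GetGetMultiIndex(iter, NULL);
+        npy_intp multi_index[NPY_MAXDIMS];
+
+        do {
+            get_multi_index(iter, multi_index);
+            /* multi_index[0] ... multi_index[ndim-1] hold the current
+             * coordinates */
+        } while (iternext(iter));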
+
+.. c:function:: char** NpyIter_GetDataPtrArray(NpyIter* iter)
+
+ This gives back a pointer to the ``nop`` data pointers. If
+ :c:data:`NPY_ITER_EXTERNAL_LOOP` was not specified, each data
+ pointer points to the current data item of the iterator. If
+ no inner iteration was specified, it points to the first data
+ item of the inner loop.
+
+ This pointer may be cached before the iteration loop, calling
+ ``iternext`` will not change it. This function may be safely
+ called without holding the Python GIL.
+
+.. c:function:: char** NpyIter_GetInitialDataPtrArray(NpyIter* iter)
+
+ Gets the array of data pointers directly into the arrays (never
+ into the buffers), corresponding to iteration index 0.
+
+ These pointers are different from the pointers accepted by
+ ``NpyIter_ResetBasePointers``, because the direction along
+ some axes may have been reversed.
+
+ This function may be safely called without holding the Python GIL.
+
+.. c:function:: npy_intp* NpyIter_GetIndexPtr(NpyIter* iter)
+
+ This gives back a pointer to the index being tracked, or NULL
+    if no index is being tracked. It is only usable if one of
+    the flags :c:data:`NPY_ITER_C_INDEX` or :c:data:`NPY_ITER_F_INDEX`
+    was specified during construction.
+
+When the flag :c:data:`NPY_ITER_EXTERNAL_LOOP` is used, the code
+needs to know the parameters for doing the inner loop. These
+functions provide that information.
+
+.. c:function:: npy_intp* NpyIter_GetInnerStrideArray(NpyIter* iter)
+
+ Returns a pointer to an array of the ``nop`` strides,
+ one for each iterated object, to be used by the inner loop.
+
+ This pointer may be cached before the iteration loop, calling
+ ``iternext`` will not change it. This function may be safely
+ called without holding the Python GIL.
+
+ **WARNING**: While the pointer may be cached, its values may
+ change if the iterator is buffered.
+
+.. c:function:: npy_intp* NpyIter_GetInnerLoopSizePtr(NpyIter* iter)
+
+ Returns a pointer to the number of iterations the
+ inner loop should execute.
+
+ This address may be cached before the iteration loop, calling
+ ``iternext`` will not change it. The value itself may change during
+ iteration, in particular if buffering is enabled. This function
+ may be safely called without holding the Python GIL.
+
+.. c:function:: void NpyIter_GetInnerFixedStrideArray( \
+ NpyIter* iter, npy_intp* out_strides)
+
+ Gets an array of strides which are fixed, or will not change during
+ the entire iteration. For strides that may change, the value
+ NPY_MAX_INTP is placed in the stride.
+
+ Once the iterator is prepared for iteration (after a reset if
+    :c:data:`NPY_ITER_DELAY_BUFALLOC` was used), call this to get the strides
+ which may be used to select a fast inner loop function. For example,
+ if the stride is 0, that means the inner loop can always load its
+ value into a variable once, then use the variable throughout the loop,
+ or if the stride equals the itemsize, a contiguous version for that
+ operand may be used.
+
+ This function may be safely called without holding the Python GIL.
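+
+    For example, a specialized inner loop might be selected as follows
+    (a sketch that assumes operand 0 holds ``double`` data):
+
+    .. code-block:: c
+
+        npy_intp fixed_strides[NPY_MAXARGS];
+
+        NpyIter_GetInnerFixedStrideArray(iter, fixed_strides);
+        if (fixed_strides[0] == 0) {
+            /* operand 0 is constant over the inner loop: load it once */
+        }
+        else if (fixed_strides[0] == sizeof(double)) {
+            /* operand 0 is contiguous: a contiguous loop can be used */
+        }
+        else {
+            /* fall back to a generic strided inner loop */
+        }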
+
+.. index::
+ pair: iterator; C-API
+
+Converting from Previous NumPy Iterators
+----------------------------------------
+
+The old iterator API includes functions like PyArrayIter_Check,
+PyArray_Iter* and PyArray_ITER_*. The multi-iterator API includes
+PyArray_MultiIter*, PyArray_Broadcast, and PyArray_RemoveSmallest. The
+new iterator design replaces all of this functionality with a single object
+and associated API. One goal of the new API is that all uses of the
+existing iterator should be replaceable with the new iterator without
+significant effort. In 1.6, the major exception to this is the neighborhood
+iterator, which does not have corresponding features in this iterator.
+
+Here is a conversion table for which functions to use with the new iterator:
+
+===================================== ===================================================
+*Iterator Functions*
+:c:func:`PyArray_IterNew` :c:func:`NpyIter_New`
+:c:func:`PyArray_IterAllButAxis` :c:func:`NpyIter_New` + ``axes`` parameter **or**
+ Iterator flag :c:data:`NPY_ITER_EXTERNAL_LOOP`
+:c:func:`PyArray_BroadcastToShape` **NOT SUPPORTED** (Use the support for
+ multiple operands instead.)
+:c:func:`PyArrayIter_Check` Will need to add this in Python exposure
+:c:func:`PyArray_ITER_RESET` :c:func:`NpyIter_Reset`
+:c:func:`PyArray_ITER_NEXT` Function pointer from :c:func:`NpyIter_GetIterNext`
+:c:func:`PyArray_ITER_DATA` :c:func:`NpyIter_GetDataPtrArray`
+:c:func:`PyArray_ITER_GOTO` :c:func:`NpyIter_GotoMultiIndex`
+:c:func:`PyArray_ITER_GOTO1D` :c:func:`NpyIter_GotoIndex` or
+ :c:func:`NpyIter_GotoIterIndex`
+:c:func:`PyArray_ITER_NOTDONE` Return value of ``iternext`` function pointer
+*Multi-iterator Functions*
+:c:func:`PyArray_MultiIterNew` :c:func:`NpyIter_MultiNew`
+:c:func:`PyArray_MultiIter_RESET` :c:func:`NpyIter_Reset`
+:c:func:`PyArray_MultiIter_NEXT` Function pointer from :c:func:`NpyIter_GetIterNext`
+:c:func:`PyArray_MultiIter_DATA` :c:func:`NpyIter_GetDataPtrArray`
+:c:func:`PyArray_MultiIter_NEXTi` **NOT SUPPORTED** (always lock-step iteration)
+:c:func:`PyArray_MultiIter_GOTO` :c:func:`NpyIter_GotoMultiIndex`
+:c:func:`PyArray_MultiIter_GOTO1D` :c:func:`NpyIter_GotoIndex` or
+ :c:func:`NpyIter_GotoIterIndex`
+:c:func:`PyArray_MultiIter_NOTDONE` Return value of ``iternext`` function pointer
+:c:func:`PyArray_Broadcast` Handled by :c:func:`NpyIter_MultiNew`
+:c:func:`PyArray_RemoveSmallest` Iterator flag :c:data:`NPY_ITER_EXTERNAL_LOOP`
+*Other Functions*
+:c:func:`PyArray_ConvertToCommonType` Iterator flag :c:data:`NPY_ITER_COMMON_DTYPE`
+===================================== ===================================================
diff --git a/doc/source/reference/c-api/types-and-structures.rst b/doc/source/reference/c-api/types-and-structures.rst
new file mode 100644
index 000000000..336dff211
--- /dev/null
+++ b/doc/source/reference/c-api/types-and-structures.rst
@@ -0,0 +1,1422 @@
+
+*****************************
+Python Types and C-Structures
+*****************************
+
+.. sectionauthor:: Travis E. Oliphant
+
+Several new types are defined in the C-code. Most of these are
+accessible from Python, but a few are not exposed due to their limited
+use. Every new Python type has an associated :c:type:`PyObject *<PyObject>` with an
+internal structure that includes a pointer to a "method table" that
+defines how the new object behaves in Python. When you receive a
+Python object into C code, you always get a pointer to a
+:c:type:`PyObject` structure. Because a :c:type:`PyObject` structure is
+very generic and defines only :c:macro:`PyObject_HEAD`, by itself it
+is not very interesting. However, different objects contain more
+details after the :c:macro:`PyObject_HEAD` (but you have to cast to the
+correct type to access them --- or use accessor functions or macros).
+
+
+New Python Types Defined
+========================
+
+Python types are the functional equivalent in C of classes in Python.
+By constructing a new Python type you make available a new object for
+Python. The ndarray object is an example of a new type defined in C.
+New types are defined in C by two basic steps:
+
+1. creating a C-structure (usually named :c:type:`Py{Name}Object`) that is
+ binary- compatible with the :c:type:`PyObject` structure itself but holds
+ the additional information needed for that particular object;
+
+2. populating the :c:type:`PyTypeObject` table (pointed to by the ob_type
+ member of the :c:type:`PyObject` structure) with pointers to functions
+ that implement the desired behavior for the type.
+
+Instead of special method names which define behavior for Python
+classes, there are "function tables" which point to functions that
+implement the desired results. Since Python 2.2, the PyTypeObject
+itself has become dynamic, which allows C types to be "sub-typed"
+from other C-types in C, and sub-classed in Python. The children
+types inherit the attributes and methods from their parent(s).
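+
+To make the two steps concrete, here is a hedged, minimal sketch using
+hypothetical names (``PyMyObject`` and ``PyMy_Type`` are illustrations
+only, not part of NumPy):
+
+.. code-block:: c
+
+    /* Step 1: a C-structure that is binary-compatible with PyObject
+     * but carries extra per-instance information. */
+    typedef struct {
+        PyObject_HEAD
+        int extra_field;
+    } PyMyObject;
+
+    /* Step 2: the type table that defines how the object behaves. */
+    static PyTypeObject PyMy_Type = {
+        PyVarObject_HEAD_INIT(NULL, 0)
+        .tp_name = "mymodule.MyObject",
+        .tp_basicsize = sizeof(PyMyObject),
+        .tp_flags = Py_TPFLAGS_DEFAULT,
+        .tp_doc = "A minimal example type",
+    };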
+
+There are two major new types: the ndarray ( :c:data:`PyArray_Type` )
+and the ufunc ( :c:data:`PyUFunc_Type` ). Additional types play a
+supportive role: the :c:data:`PyArrayIter_Type`, the
+:c:data:`PyArrayMultiIter_Type`, and the :c:data:`PyArrayDescr_Type`.
+The :c:data:`PyArrayIter_Type` is the type for a flat iterator for an
+ndarray (the object that is returned when getting the flat
+attribute). The :c:data:`PyArrayMultiIter_Type` is the type of the
+object returned when calling ``broadcast()``. It handles iteration
+and broadcasting over a collection of nested sequences. Also, the
+:c:data:`PyArrayDescr_Type` is the data-type-descriptor type whose
+instances describe the data. Finally, there are 21 new scalar-array
+types which are new Python scalars corresponding to each of the
+fundamental data types available for arrays. An additional 10 other
+types are placeholders that allow the array scalars to fit into a
+hierarchy of actual Python types.
+
+
+PyArray_Type and PyArrayObject
+------------------------------
+
+.. c:var:: PyArray_Type
+
+ The Python type of the ndarray is :c:data:`PyArray_Type`. In C, every
+ ndarray is a pointer to a :c:type:`PyArrayObject` structure. The ob_type
+ member of this structure contains a pointer to the :c:data:`PyArray_Type`
+ typeobject.
+
+.. c:type:: PyArrayObject
+
+ The :c:type:`PyArrayObject` C-structure contains all of the required
+ information for an array. All instances of an ndarray (and its
+ subclasses) will have this structure. For future compatibility,
+ these structure members should normally be accessed using the
+ provided macros. If you need a shorter name, then you can make use
+ of :c:type:`NPY_AO` (deprecated) which is defined to be equivalent to
+    :c:type:`PyArrayObject`. Direct access to the struct fields is
+    deprecated. Use the `PyArray_*(arr)` form instead.
+
+ .. code-block:: c
+
+ typedef struct PyArrayObject {
+ PyObject_HEAD
+ char *data;
+ int nd;
+ npy_intp *dimensions;
+ npy_intp *strides;
+ PyObject *base;
+ PyArray_Descr *descr;
+ int flags;
+ PyObject *weakreflist;
+ } PyArrayObject;
+
+.. c:macro:: PyArrayObject.PyObject_HEAD
+
+ This is needed by all Python objects. It consists of (at least)
+ a reference count member ( ``ob_refcnt`` ) and a pointer to the
+ typeobject ( ``ob_type`` ). (Other elements may also be present
+ if Python was compiled with special options see
+    if Python was compiled with special options; see
+ information). The ob_type member points to a Python type
+ object.
+
+.. c:member:: char *PyArrayObject.data
+
+ Accessible via :c:data:`PyArray_DATA`, this data member is a
+ pointer to the first element of the array. This pointer can
+ (and normally should) be recast to the data type of the array.
+
+.. c:member:: int PyArrayObject.nd
+
+ An integer providing the number of dimensions for this
+ array. When nd is 0, the array is sometimes called a rank-0
+ array. Such arrays have undefined dimensions and strides and
+ cannot be accessed. Macro :c:data:`PyArray_NDIM` defined in
+ ``ndarraytypes.h`` points to this data member. :c:data:`NPY_MAXDIMS`
+ is the largest number of dimensions for any array.
+
+.. c:member:: npy_intp *PyArrayObject.dimensions
+
+ An array of integers providing the shape in each dimension as
+ long as nd :math:`\geq` 1. The integer is always large enough
+ to hold a pointer on the platform, so the dimension size is
+ only limited by memory. :c:data:`PyArray_DIMS` is the macro
+ associated with this data member.
+
+.. c:member:: npy_intp *PyArrayObject.strides
+
+ An array of integers providing for each dimension the number of
+ bytes that must be skipped to get to the next element in that
+ dimension. Associated with macro :c:data:`PyArray_STRIDES`.
+
+.. c:member:: PyObject *PyArrayObject.base
+
+ Pointed to by :c:data:`PyArray_BASE`, this member is used to hold a
+ pointer to another Python object that is related to this array.
+ There are two use cases:
+
+ - If this array does not own its own memory, then base points to the
+ Python object that owns it (perhaps another array object)
+ - If this array has the (deprecated) :c:data:`NPY_ARRAY_UPDATEIFCOPY` or
+ :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` flag set, then this array is a working
+ copy of a "misbehaved" array.
+
+ When ``PyArray_ResolveWritebackIfCopy`` is called, the array pointed to
+ by base will be updated with the contents of this array.
+
+.. c:member:: PyArray_Descr *PyArrayObject.descr
+
+ A pointer to a data-type descriptor object (see below). The
+ data-type descriptor object is an instance of a new built-in
+ type which allows a generic description of memory. There is a
+ descriptor structure for each data type supported. This
+ descriptor structure contains useful information about the type
+ as well as a pointer to a table of function pointers to
+ implement specific functionality. As the name suggests, it is
+ associated with the macro :c:data:`PyArray_DESCR`.
+
+.. c:member:: int PyArrayObject.flags
+
+ Pointed to by the macro :c:data:`PyArray_FLAGS`, this data member represents
+ the flags indicating how the memory pointed to by data is to be
+ interpreted. Possible flags are :c:data:`NPY_ARRAY_C_CONTIGUOUS`,
+ :c:data:`NPY_ARRAY_F_CONTIGUOUS`, :c:data:`NPY_ARRAY_OWNDATA`,
+ :c:data:`NPY_ARRAY_ALIGNED`, :c:data:`NPY_ARRAY_WRITEABLE`,
+ :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`, and :c:data:`NPY_ARRAY_UPDATEIFCOPY`.
+
+.. c:member:: PyObject *PyArrayObject.weakreflist
+
+ This member allows array objects to have weak references (using the
+ weakref module).
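+
+For example, a minimal sketch of reading these members through the access
+macros (assuming ``arr`` is a ``PyArrayObject *`` whose elements are C
+``double`` values):
+
+.. code-block:: c
+
+    int ndim = PyArray_NDIM(arr);
+    npy_intp *shape = PyArray_DIMS(arr);
+    npy_intp *strides = PyArray_STRIDES(arr);
+    int flags = PyArray_FLAGS(arr);
+    double *data = (double *)PyArray_DATA(arr);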
+
+
+PyArrayDescr_Type and PyArray_Descr
+-----------------------------------
+
+.. c:var:: PyArrayDescr_Type
+
+ The :c:data:`PyArrayDescr_Type` is the built-in type of the
+ data-type-descriptor objects used to describe how the bytes comprising
+ the array are to be interpreted. There are 21 statically-defined
+ :c:type:`PyArray_Descr` objects for the built-in data-types. While these
+ participate in reference counting, their reference count should never
+ reach zero. There is also a dynamic table of user-defined
+ :c:type:`PyArray_Descr` objects that is also maintained. Once a
+ data-type-descriptor object is "registered" it should never be
+ deallocated either. The function :c:func:`PyArray_DescrFromType` (...) can
+ be used to retrieve a :c:type:`PyArray_Descr` object from an enumerated
+    type-number (either built-in or user-defined).
+
+.. c:type:: PyArray_Descr
+
+ The :c:type:`PyArray_Descr` structure lies at the heart of the
+ :c:data:`PyArrayDescr_Type`. While it is described here for
+ completeness, it should be considered internal to NumPy and manipulated via
+ ``PyArrayDescr_*`` or ``PyDataType*`` functions and macros. The size of this
+ structure is subject to change across versions of NumPy. To ensure
+ compatibility:
+
+ - Never declare a non-pointer instance of the struct
+    - Never perform pointer arithmetic
+    - Never use ``sizeof(PyArray_Descr)``
+
+ It has the following structure:
+
+ .. code-block:: c
+
+ typedef struct {
+ PyObject_HEAD
+ PyTypeObject *typeobj;
+ char kind;
+ char type;
+ char byteorder;
+ char flags;
+ int type_num;
+ int elsize;
+ int alignment;
+ PyArray_ArrayDescr *subarray;
+ PyObject *fields;
+ PyObject *names;
+ PyArray_ArrFuncs *f;
+ PyObject *metadata;
+ NpyAuxData *c_metadata;
+ npy_hash_t hash;
+ } PyArray_Descr;
+
+.. c:member:: PyTypeObject *PyArray_Descr.typeobj
+
+ Pointer to a typeobject that is the corresponding Python type for
+ the elements of this array. For the builtin types, this points to
+ the corresponding array scalar. For user-defined types, this
+ should point to a user-defined typeobject. This typeobject can
+ either inherit from array scalars or not. If it does not inherit
+ from array scalars, then the :c:data:`NPY_USE_GETITEM` and
+ :c:data:`NPY_USE_SETITEM` flags should be set in the ``flags`` member.
+
+.. c:member:: char PyArray_Descr.kind
+
+ A character code indicating the kind of array (using the array
+    interface typestring notation). A 'b' represents Boolean, an 'i'
+ represents signed integer, a 'u' represents unsigned integer, 'f'
+ represents floating point, 'c' represents complex floating point, 'S'
+ represents 8-bit zero-terminated bytes, 'U' represents 32-bit/character
+ unicode string, and 'V' represents arbitrary.
+
+.. c:member:: char PyArray_Descr.type
+
+ A traditional character code indicating the data type.
+
+.. c:member:: char PyArray_Descr.byteorder
+
+    A character indicating the byte-order: '>' (big-endian), '<'
+    (little-endian), '=' (native), '\|' (irrelevant, ignore). All
+    builtin data-types have byteorder '='.
+
+.. c:member:: char PyArray_Descr.flags
+
+    A data-type bit-flag that determines if the data-type exhibits
+    object-array like behavior. Each bit in this member is a flag named
+    as follows:
+
+ .. c:var:: NPY_ITEM_REFCOUNT
+
+ Indicates that items of this data-type must be reference
+ counted (using :c:func:`Py_INCREF` and :c:func:`Py_DECREF` ).
+
+ .. c:var:: NPY_ITEM_HASOBJECT
+
+ Same as :c:data:`NPY_ITEM_REFCOUNT`.
+
+ .. c:var:: NPY_LIST_PICKLE
+
+ Indicates arrays of this data-type must be converted to a list
+ before pickling.
+
+ .. c:var:: NPY_ITEM_IS_POINTER
+
+ Indicates the item is a pointer to some other data-type
+
+ .. c:var:: NPY_NEEDS_INIT
+
+ Indicates memory for this data-type must be initialized (set
+ to 0) on creation.
+
+ .. c:var:: NPY_NEEDS_PYAPI
+
+ Indicates this data-type requires the Python C-API during
+ access (so don't give up the GIL if array access is going to
+ be needed).
+
+ .. c:var:: NPY_USE_GETITEM
+
+ On array access use the ``f->getitem`` function pointer
+ instead of the standard conversion to an array scalar. Must
+ use if you don't define an array scalar to go along with
+ the data-type.
+
+ .. c:var:: NPY_USE_SETITEM
+
+ When creating a 0-d array from an array scalar use
+ ``f->setitem`` instead of the standard copy from an array
+ scalar. Must use if you don't define an array scalar to go
+ along with the data-type.
+
+ .. c:var:: NPY_FROM_FIELDS
+
+ The bits that are inherited for the parent data-type if these
+ bits are set in any field of the data-type. Currently (
+ :c:data:`NPY_NEEDS_INIT` \| :c:data:`NPY_LIST_PICKLE` \|
+ :c:data:`NPY_ITEM_REFCOUNT` \| :c:data:`NPY_NEEDS_PYAPI` ).
+
+ .. c:var:: NPY_OBJECT_DTYPE_FLAGS
+
+ Bits set for the object data-type: ( :c:data:`NPY_LIST_PICKLE`
+ \| :c:data:`NPY_USE_GETITEM` \| :c:data:`NPY_ITEM_IS_POINTER` \|
+        :c:data:`NPY_ITEM_REFCOUNT` \| :c:data:`NPY_NEEDS_INIT` \|
+ :c:data:`NPY_NEEDS_PYAPI`).
+
+ .. c:function:: PyDataType_FLAGCHK(PyArray_Descr *dtype, int flags)
+
+ Return true if all the given flags are set for the data-type
+ object.
+
+ .. c:function:: PyDataType_REFCHK(PyArray_Descr *dtype)
+
+ Equivalent to :c:func:`PyDataType_FLAGCHK` (*dtype*,
+ :c:data:`NPY_ITEM_REFCOUNT`).
+
+.. c:member:: int PyArray_Descr.type_num
+
+ A number that uniquely identifies the data type. For new data-types,
+ this number is assigned when the data-type is registered.
+
+.. c:member:: int PyArray_Descr.elsize
+
+ For data types that are always the same size (such as long), this
+ holds the size of the data type. For flexible data types where
+ different arrays can have a different elementsize, this should be
+ 0.
+
+.. c:member:: int PyArray_Descr.alignment
+
+ A number providing alignment information for this data type.
+ Specifically, it shows how far from the start of a 2-element
+ structure (whose first element is a ``char`` ), the compiler
+ places an item of this type: ``offsetof(struct {char c; type v;},
+ v)``
+
+.. c:member:: PyArray_ArrayDescr *PyArray_Descr.subarray
+
+ If this is non- ``NULL``, then this data-type descriptor is a
+ C-style contiguous array of another data-type descriptor. In
+ other-words, each element that this descriptor describes is
+ actually an array of some other base descriptor. This is most
+ useful as the data-type descriptor for a field in another
+ data-type descriptor. The fields member should be ``NULL`` if this
+ is non- ``NULL`` (the fields member of the base descriptor can be
+ non- ``NULL`` however). The :c:type:`PyArray_ArrayDescr` structure is
+ defined using
+
+ .. code-block:: c
+
+ typedef struct {
+ PyArray_Descr *base;
+ PyObject *shape;
+ } PyArray_ArrayDescr;
+
+ The elements of this structure are:
+
+ .. c:member:: PyArray_Descr *PyArray_ArrayDescr.base
+
+ The data-type-descriptor object of the base-type.
+
+ .. c:member:: PyObject *PyArray_ArrayDescr.shape
+
+ The shape (always C-style contiguous) of the sub-array as a Python
+ tuple.
+
+
+.. c:member:: PyObject *PyArray_Descr.fields
+
+ If this is non-NULL, then this data-type-descriptor has fields
+ described by a Python dictionary whose keys are names (and also
+ titles if given) and whose values are tuples that describe the
+ fields. Recall that a data-type-descriptor always describes a
+ fixed-length set of bytes. A field is a named sub-region of that
+ total, fixed-length collection. A field is described by a tuple
+    composed of another data-type-descriptor and a byte
+ offset. Optionally, the tuple may contain a title which is
+ normally a Python string. These tuples are placed in this
+ dictionary keyed by name (and also title if given).
+
+.. c:member:: PyObject *PyArray_Descr.names
+
+ An ordered tuple of field names. It is NULL if no field is
+ defined.
+
+.. c:member:: PyArray_ArrFuncs *PyArray_Descr.f
+
+ A pointer to a structure containing functions that the type needs
+ to implement internal features. These functions are not the same
+ thing as the universal functions (ufuncs) described later. Their
+ signatures can vary arbitrarily.
+
+.. c:member:: PyObject *PyArray_Descr.metadata
+
+ Metadata about this dtype.
+
+.. c:member:: NpyAuxData *PyArray_Descr.c_metadata
+
+ Metadata specific to the C implementation
+ of the particular dtype. Added for NumPy 1.7.0.
+
+.. c:member:: npy_hash_t PyArray_Descr.hash
+
+ Currently unused. Reserved for future use in caching
+ hash values.
+
+.. c:type:: PyArray_ArrFuncs
+
+ Functions implementing internal features. Not all of these
+ function pointers must be defined for a given type. The required
+ members are ``nonzero``, ``copyswap``, ``copyswapn``, ``setitem``,
+ ``getitem``, and ``cast``. These are assumed to be non- ``NULL``
+ and ``NULL`` entries will cause a program crash. The other
+ functions may be ``NULL`` which will just mean reduced
+ functionality for that data-type. (Also, the nonzero function will
+ be filled in with a default function if it is ``NULL`` when you
+ register a user-defined data-type).
+
+ .. code-block:: c
+
+ typedef struct {
+ PyArray_VectorUnaryFunc *cast[NPY_NTYPES];
+ PyArray_GetItemFunc *getitem;
+ PyArray_SetItemFunc *setitem;
+ PyArray_CopySwapNFunc *copyswapn;
+ PyArray_CopySwapFunc *copyswap;
+ PyArray_CompareFunc *compare;
+ PyArray_ArgFunc *argmax;
+ PyArray_DotFunc *dotfunc;
+ PyArray_ScanFunc *scanfunc;
+ PyArray_FromStrFunc *fromstr;
+ PyArray_NonzeroFunc *nonzero;
+ PyArray_FillFunc *fill;
+ PyArray_FillWithScalarFunc *fillwithscalar;
+ PyArray_SortFunc *sort[NPY_NSORTS];
+ PyArray_ArgSortFunc *argsort[NPY_NSORTS];
+ PyObject *castdict;
+ PyArray_ScalarKindFunc *scalarkind;
+ int **cancastscalarkindto;
+ int *cancastto;
+ PyArray_FastClipFunc *fastclip;
+ PyArray_FastPutmaskFunc *fastputmask;
+ PyArray_FastTakeFunc *fasttake;
+ PyArray_ArgFunc *argmin;
+ } PyArray_ArrFuncs;
+
+ The concept of a behaved segment is used in the description of the
+ function pointers. A behaved segment is one that is aligned and in
+ native machine byte-order for the data-type. The ``nonzero``,
+ ``copyswap``, ``copyswapn``, ``getitem``, and ``setitem``
+ functions can (and must) deal with mis-behaved arrays. The other
+ functions require behaved memory segments.
+
+ .. c:member:: void cast( \
+ void *from, void *to, npy_intp n, void *fromarr, void *toarr)
+
+ An array of function pointers to cast from the current type to
+ all of the other builtin types. Each function casts a
+ contiguous, aligned, and notswapped buffer pointed at by
+ *from* to a contiguous, aligned, and notswapped buffer pointed
+        at by *to*. The number of items to cast is given by *n*, and
+ the arguments *fromarr* and *toarr* are interpreted as
+ PyArrayObjects for flexible arrays to get itemsize
+ information.
+
+ .. c:member:: PyObject *getitem(void *data, void *arr)
+
+ A pointer to a function that returns a standard Python object
+ from a single element of the array object *arr* pointed to by
+        *data*. This function must be able to deal with "misbehaved"
+        (misaligned and/or swapped) arrays correctly.
+
+ .. c:member:: int setitem(PyObject *item, void *data, void *arr)
+
+ A pointer to a function that sets the Python object *item*
+        into the array, *arr*, at the position pointed to by *data*.
+        This function deals with "misbehaved" arrays. If successful,
+ a zero is returned, otherwise, a negative one is returned (and
+ a Python error set).
+
+ .. c:member:: void copyswapn( \
+ void *dest, npy_intp dstride, void *src, npy_intp sstride, \
+ npy_intp n, int swap, void *arr)
+
+ .. c:member:: void copyswap(void *dest, void *src, int swap, void *arr)
+
+ These members are both pointers to functions to copy data from
+ *src* to *dest* and *swap* if indicated. The value of arr is
+ only used for flexible ( :c:data:`NPY_STRING`, :c:data:`NPY_UNICODE`,
+ and :c:data:`NPY_VOID` ) arrays (and is obtained from
+ ``arr->descr->elsize`` ). The second function copies a single
+ value, while the first loops over n values with the provided
+ strides. These functions can deal with misbehaved *src*
+ data. If *src* is NULL then no copy is performed. If *swap* is
+ 0, then no byteswapping occurs. It is assumed that *dest* and
+ *src* do not overlap. If they overlap, then use ``memmove``
+ (...) first followed by ``copyswap(n)`` with NULL valued
+ ``src``.
+
+ .. c:member:: int compare(const void* d1, const void* d2, void* arr)
+
+ A pointer to a function that compares two elements of the
+ array, ``arr``, pointed to by ``d1`` and ``d2``. This
+ function requires behaved (aligned and not swapped) arrays.
+        The return value is 1 if ``*d1`` > ``*d2``, 0 if ``*d1`` == ``*d2``,
+        and -1 if ``*d1`` < ``*d2``. The array object ``arr`` is
+ used to retrieve itemsize and field information for flexible arrays.
+
+ .. c:member:: int argmax( \
+ void* data, npy_intp n, npy_intp* max_ind, void* arr)
+
+ A pointer to a function that retrieves the index of the
+ largest of ``n`` elements in ``arr`` beginning at the element
+ pointed to by ``data``. This function requires that the
+ memory segment be contiguous and behaved. The return value is
+ always 0. The index of the largest element is returned in
+ ``max_ind``.
+
+ .. c:member:: void dotfunc( \
+ void* ip1, npy_intp is1, void* ip2, npy_intp is2, void* op, \
+ npy_intp n, void* arr)
+
+ A pointer to a function that multiplies two ``n`` -length
+ sequences together, adds them, and places the result in
+ element pointed to by ``op`` of ``arr``. The start of the two
+ sequences are pointed to by ``ip1`` and ``ip2``. To get to
+ the next element in each sequence requires a jump of ``is1``
+ and ``is2`` *bytes*, respectively. This function requires
+ behaved (though not necessarily contiguous) memory.
+
+ .. c:member:: int scanfunc(FILE* fd, void* ip, void* arr)
+
+ A pointer to a function that scans (scanf style) one element
+ of the corresponding type from the file descriptor ``fd`` into
+ the array memory pointed to by ``ip``. The array is assumed
+ to be behaved.
+ The last argument ``arr`` is the array to be scanned into.
+ Returns number of receiving arguments successfully assigned (which
+ may be zero in case a matching failure occurred before the first
+ receiving argument was assigned), or EOF if input failure occurs
+ before the first receiving argument was assigned.
+ This function should be called without holding the Python GIL, and
+ has to grab it for error reporting.
+
+ .. c:member:: int fromstr(char* str, void* ip, char** endptr, void* arr)
+
+ A pointer to a function that converts the string pointed to by
+ ``str`` to one element of the corresponding type and places it
+ in the memory location pointed to by ``ip``. After the
+ conversion is completed, ``*endptr`` points to the rest of the
+ string. The last argument ``arr`` is the array into which ip
+        points (needed for variable-size data-types). Returns 0 on
+ success or -1 on failure. Requires a behaved array.
+ This function should be called without holding the Python GIL, and
+ has to grab it for error reporting.
+
+ .. c:member:: Bool nonzero(void* data, void* arr)
+
+ A pointer to a function that returns TRUE if the item of
+ ``arr`` pointed to by ``data`` is nonzero. This function can
+ deal with misbehaved arrays.
+
+ .. c:member:: void fill(void* data, npy_intp length, void* arr)
+
+ A pointer to a function that fills a contiguous array of given
+ length with data. The first two elements of the array must
+ already be filled- in. From these two values, a delta will be
+        already be filled in. From these two values, a delta will be
+ computed by repeatedly adding this computed delta. The data
+ buffer must be well-behaved.
+
+ .. c:member:: void fillwithscalar( \
+ void* buffer, npy_intp length, void* value, void* arr)
+
+ A pointer to a function that fills a contiguous ``buffer`` of
+ the given ``length`` with a single scalar ``value`` whose
+ address is given. The final argument is the array which is
+ needed to get the itemsize for variable-length arrays.
+
+ .. c:member:: int sort(void* start, npy_intp length, void* arr)
+
+        An array of function pointers to particular sorting
+        algorithms. A particular sorting algorithm is obtained using a
+ key (so far :c:data:`NPY_QUICKSORT`, :c:data:`NPY_HEAPSORT`,
+ and :c:data:`NPY_MERGESORT` are defined). These sorts are done
+ in-place assuming contiguous and aligned data.
+
+ .. c:member:: int argsort( \
+ void* start, npy_intp* result, npy_intp length, void *arr)
+
+ An array of function pointers to sorting algorithms for this
+ data type. The same sorting algorithms as for sort are
+ available. The indices producing the sort are returned in
+ ``result`` (which must be initialized with indices 0 to
+ ``length-1`` inclusive).
+
+ .. c:member:: PyObject *castdict
+
+ Either ``NULL`` or a dictionary containing low-level casting
+        functions for user-defined data-types. Each function is
+ wrapped in a :c:type:`PyCObject *` and keyed by the data-type number.
+
+ .. c:member:: NPY_SCALARKIND scalarkind(PyArrayObject* arr)
+
+ A function to determine how scalars of this type should be
+ interpreted. The argument is ``NULL`` or a 0-dimensional array
+ containing the data (if that is needed to determine the kind
+ of scalar). The return value must be of type
+ :c:type:`NPY_SCALARKIND`.
+
+ .. c:member:: int **cancastscalarkindto
+
+ Either ``NULL`` or an array of :c:type:`NPY_NSCALARKINDS`
+ pointers. These pointers should each be either ``NULL`` or a
+ pointer to an array of integers (terminated by
+ :c:data:`NPY_NOTYPE`) indicating data-types that a scalar of
+ this data-type of the specified kind can be cast to safely
+ (this usually means without losing precision).
+
+ .. c:member:: int *cancastto
+
+ Either ``NULL`` or an array of integers (terminated by
+        :c:data:`NPY_NOTYPE` ) indicating data-types that this data-type
+ can be cast to safely (this usually means without losing
+ precision).
+
+ .. c:member:: void fastclip( \
+ void *in, npy_intp n_in, void *min, void *max, void *out)
+
+ A function that reads ``n_in`` items from ``in``, and writes to
+ ``out`` the read value if it is within the limits pointed to by
+ ``min`` and ``max``, or the corresponding limit if outside. The
+ memory segments must be contiguous and behaved, and either
+ ``min`` or ``max`` may be ``NULL``, but not both.
+
+ .. c:member:: void fastputmask( \
+ void *in, void *mask, npy_intp n_in, void *values, npy_intp nv)
+
+ A function that takes a pointer ``in`` to an array of ``n_in``
+        items, a pointer ``mask`` to an array of ``n_in`` boolean
+        values, and a pointer ``values`` to an array of ``nv`` items.
+        Items from ``values`` are copied into ``in`` wherever the value
+        in ``mask`` is non-zero, tiling ``values`` as needed if
+ ``nv < n_in``. All arrays must be contiguous and behaved.
+
+ .. c:member:: void fasttake( \
+ void *dest, void *src, npy_intp *indarray, npy_intp nindarray, \
+ npy_intp n_outer, npy_intp m_middle, npy_intp nelem, \
+ NPY_CLIPMODE clipmode)
+
+ A function that takes a pointer ``src`` to a C contiguous,
+ behaved segment, interpreted as a 3-dimensional array of shape
+ ``(n_outer, nindarray, nelem)``, a pointer ``indarray`` to a
+ contiguous, behaved segment of ``m_middle`` integer indices,
+ and a pointer ``dest`` to a C contiguous, behaved segment,
+ interpreted as a 3-dimensional array of shape
+ ``(n_outer, m_middle, nelem)``. The indices in ``indarray`` are
+ used to index ``src`` along the second dimension, and copy the
+ corresponding chunks of ``nelem`` items into ``dest``.
+ ``clipmode`` (which can take on the values :c:data:`NPY_RAISE`,
+ :c:data:`NPY_WRAP` or :c:data:`NPY_CLIP`) determines how will
+        :c:data:`NPY_WRAP` or :c:data:`NPY_CLIP`) determines how
+        indices smaller than 0 or larger than ``nindarray`` will be
+
+ .. c:member:: int argmin( \
+ void* data, npy_intp n, npy_intp* min_ind, void* arr)
+
+ A pointer to a function that retrieves the index of the
+ smallest of ``n`` elements in ``arr`` beginning at the element
+ pointed to by ``data``. This function requires that the
+ memory segment be contiguous and behaved. The return value is
+ always 0. The index of the smallest element is returned in
+ ``min_ind``.
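+
+    As a hedged illustration only (a hypothetical user-defined data-type,
+    not NumPy code), ``getitem`` and ``setitem`` implementations matching
+    the signatures above might look like the following, assuming the
+    element is stored as a plain, possibly unaligned C ``double``:
+
+    .. code-block:: c
+
+        /* Assumes <Python.h> and the NumPy headers are included. */
+        #include <string.h>   /* memcpy */
+
+        static PyObject *
+        example_getitem(void *data, void *arr)
+        {
+            double value;
+
+            /* memcpy handles misaligned data */
+            memcpy(&value, data, sizeof(double));
+            return PyFloat_FromDouble(value);
+        }
+
+        static int
+        example_setitem(PyObject *item, void *data, void *arr)
+        {
+            double value = PyFloat_AsDouble(item);
+
+            if (value == -1.0 && PyErr_Occurred()) {
+                return -1;   /* conversion failed, error already set */
+            }
+            memcpy(data, &value, sizeof(double));
+            return 0;
+        }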
+
+
+The :c:data:`PyArray_Type` typeobject implements many of the features of
+:c:type:`Python objects <PyTypeObject>` including the :c:member:`tp_as_number
+<PyTypeObject.tp_as_number>`, :c:member:`tp_as_sequence
+<PyTypeObject.tp_as_sequence>`, :c:member:`tp_as_mapping
+<PyTypeObject.tp_as_mapping>`, and :c:member:`tp_as_buffer
+<PyTypeObject.tp_as_buffer>` interfaces. The :c:type:`rich comparison
+<richcmpfunc>` is also used along with new-style attribute lookup for
+member (:c:member:`tp_members <PyTypeObject.tp_members>`) and properties
+(:c:member:`tp_getset <PyTypeObject.tp_getset>`).
+The :c:data:`PyArray_Type` can also be sub-typed.
+
+.. tip::
+
+ The ``tp_as_number`` methods use a generic approach to call whatever
+ function has been registered for handling the operation. When the
+    ``_multiarray_umath`` module is imported, it sets the numeric operations
+    for all arrays to the corresponding ufuncs. This choice can be changed with
+    :c:func:`PyUFunc_ReplaceLoopBySignature`. The ``tp_str`` and ``tp_repr``
+ methods can also be altered using :c:func:`PyArray_SetStringFunction`.
+
+
+PyUFunc_Type and PyUFuncObject
+------------------------------
+
+.. c:var:: PyUFunc_Type
+
+ The ufunc object is implemented by creation of the
+ :c:data:`PyUFunc_Type`. It is a very simple type that implements only
+ basic getattribute behavior, printing behavior, and has call
+ behavior which allows these objects to act like functions. The
+ basic idea behind the ufunc is to hold a reference to fast
+ 1-dimensional (vector) loops for each data type that supports the
+ operation. These one-dimensional loops all have the same signature
+ and are the key to creating a new ufunc. They are called by the
+ generic looping code as appropriate to implement the N-dimensional
+ function. There are also some generic 1-d loops defined for
+ floating and complexfloating arrays that allow you to define a
+ ufunc using a single scalar function (*e.g.* atanh).
+
+
+.. c:type:: PyUFuncObject
+
+ The core of the ufunc is the :c:type:`PyUFuncObject` which contains all
+ the information needed to call the underlying C-code loops that
+ perform the actual work. While it is described here for completeness, it
+ should be considered internal to NumPy and manipulated via ``PyUFunc_*``
+ functions. The size of this structure is subject to change across versions
+ of NumPy. To ensure compatibility:
+
+ - Never declare a non-pointer instance of the struct
+ - Never perform pointer arithmetic
+ - Never use ``sizeof(PyUFuncObject)``
+
+ It has the following structure:
+
+ .. code-block:: c
+
+ typedef struct {
+ PyObject_HEAD
+ int nin;
+ int nout;
+ int nargs;
+ int identity;
+ PyUFuncGenericFunction *functions;
+ void **data;
+ int ntypes;
+ int reserved1;
+ const char *name;
+ char *types;
+ const char *doc;
+ void *ptr;
+ PyObject *obj;
+ PyObject *userloops;
+ int core_enabled;
+ int core_num_dim_ix;
+ int *core_num_dims;
+ int *core_dim_ixs;
+ int *core_offsets;
+ char *core_signature;
+ PyUFunc_TypeResolutionFunc *type_resolver;
+ PyUFunc_LegacyInnerLoopSelectionFunc *legacy_inner_loop_selector;
+ PyUFunc_MaskedInnerLoopSelectionFunc *masked_inner_loop_selector;
+ npy_uint32 *op_flags;
+            npy_uint32 iter_flags;
+ /* new in API version 0x0000000D */
+ npy_intp *core_dim_sizes;
+            npy_uint32 *core_dim_flags;
+
+ } PyUFuncObject;
+
+ .. c:macro: PyUFuncObject.PyObject_HEAD
+
+ required for all Python objects.
+
+ .. c:member:: int PyUFuncObject.nin
+
+ The number of input arguments.
+
+ .. c:member:: int PyUFuncObject.nout
+
+ The number of output arguments.
+
+ .. c:member:: int PyUFuncObject.nargs
+
+ The total number of arguments (*nin* + *nout*). This must be
+ less than :c:data:`NPY_MAXARGS`.
+
+ .. c:member:: int PyUFuncObject.identity
+
+ Either :c:data:`PyUFunc_One`, :c:data:`PyUFunc_Zero`,
+ :c:data:`PyUFunc_None` or :c:data:`PyUFunc_AllOnes` to indicate
+ the identity for this operation. It is only used for a
+ reduce-like call on an empty array.
+
+ .. c:member:: void PyUFuncObject.functions( \
+ char** args, npy_intp* dims, npy_intp* steps, void* extradata)
+
+ An array of function pointers --- one for each data type
+ supported by the ufunc. This is the vector loop that is called
+ to implement the underlying function *dims* [0] times. The
+ first argument, *args*, is an array of *nargs* pointers to
+ behaved memory. Pointers to the data for the input arguments
+ are first, followed by the pointers to the data for the output
+ arguments. How many bytes must be skipped to get to the next
+ element in the sequence is specified by the corresponding entry
+ in the *steps* array. The last argument allows the loop to
+ receive extra information. This is commonly used so that a
+ single, generic vector loop can be used for multiple
+ functions. In this case, the actual scalar function to call is
+ passed in as *extradata*. The size of this function pointer
+ array is ntypes.
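+
+        As a hedged sketch only (a hypothetical double -> double loop,
+        not NumPy code), one entry of this array might look like:
+
+        .. code-block:: c
+
+            static void
+            example_loop(char **args, npy_intp *dims, npy_intp *steps,
+                         void *extradata)
+            {
+                npy_intp i, n = dims[0];
+                char *in = args[0], *out = args[1];
+
+                for (i = 0; i < n; i++) {
+                    /* hypothetical operation: double the input */
+                    *(double *)out = 2.0 * (*(double *)in);
+                    in += steps[0];
+                    out += steps[1];
+                }
+            }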
+
+ .. c:member:: void **PyUFuncObject.data
+
+ Extra data to be passed to the 1-d vector loops or ``NULL`` if
+ no extra-data is needed. This C-array must be the same size (
+ *i.e.* ntypes) as the functions array. ``NULL`` is used if
+ extra_data is not needed. Several C-API calls for UFuncs are
+ just 1-d vector loops that make use of this extra data to
+ receive a pointer to the actual function to call.
+
+ .. c:member:: int PyUFuncObject.ntypes
+
+ The number of supported data types for the ufunc. This number
+ specifies how many different 1-d loops (of the builtin data
+ types) are available.
+
+ .. c:member:: int PyUFuncObject.reserved1
+
+ Unused.
+
+ .. c:member:: char *PyUFuncObject.name
+
+ A string name for the ufunc. This is used dynamically to build
+ the __doc\__ attribute of ufuncs.
+
+ .. c:member:: char *PyUFuncObject.types
+
+ An array of :math:`nargs \times ntypes` 8-bit type_numbers
+ which contains the type signature for the function for each of
+ the supported (builtin) data types. For each of the *ntypes*
+ functions, the corresponding set of type numbers in this array
+ shows how the *args* argument should be interpreted in the 1-d
+ vector loop. These type numbers do not have to be the same type
+ and mixed-type ufuncs are supported.
+
+ .. c:member:: char *PyUFuncObject.doc
+
+ Documentation for the ufunc. Should not contain the function
+ signature as this is generated dynamically when __doc\__ is
+ retrieved.
+
+ .. c:member:: void *PyUFuncObject.ptr
+
+ Any dynamically allocated memory. Currently, this is used for
+ dynamic ufuncs created from a python function to store room for
+ the types, data, and name members.
+
+ .. c:member:: PyObject *PyUFuncObject.obj
+
+ For ufuncs dynamically created from python functions, this member
+ holds a reference to the underlying Python function.
+
+ .. c:member:: PyObject *PyUFuncObject.userloops
+
+ A dictionary of user-defined 1-d vector loops (stored as CObject
+ ptrs) for user-defined types. A loop may be registered by the
+ user for any user-defined type. It is retrieved by type number.
+ User defined type numbers are always larger than
+ :c:data:`NPY_USERDEF`.
+
+ .. c:member:: int PyUFuncObject.core_enabled
+
+ 0 for scalar ufuncs; 1 for generalized ufuncs
+
+ .. c:member:: int PyUFuncObject.core_num_dim_ix
+
+ Number of distinct core dimension names in the signature
+
+ .. c:member:: int *PyUFuncObject.core_num_dims
+
+ Number of core dimensions of each argument
+
+ .. c:member:: int *PyUFuncObject.core_dim_ixs
+
+ Dimension indices in a flattened form; indices of argument ``k`` are
+ stored in ``core_dim_ixs[core_offsets[k] : core_offsets[k] +
+        core_num_dims[k]]``
+
+ .. c:member:: int *PyUFuncObject.core_offsets
+
+ Position of 1st core dimension of each argument in ``core_dim_ixs``,
+ equivalent to cumsum(``core_num_dims``)
+
+ .. c:member:: char *PyUFuncObject.core_signature
+
+ Core signature string
+
+ .. c:member:: PyUFunc_TypeResolutionFunc *PyUFuncObject.type_resolver
+
+ A function which resolves the types and fills an array with the dtypes
+ for the inputs and outputs
+
+ .. c:member:: PyUFunc_LegacyInnerLoopSelectionFunc *PyUFuncObject.legacy_inner_loop_selector
+
+ A function which returns an inner loop. The ``legacy`` in the name arises
+ because for NumPy 1.6 a better variant had been planned. This variant
+ has not yet come about.
+
+ .. c:member:: void *PyUFuncObject.reserved2
+
+ For a possible future loop selector with a different signature.
+
+ .. c:member:: PyUFunc_MaskedInnerLoopSelectionFunc *PyUFuncObject.masked_inner_loop_selector
+
+ Function which returns a masked inner loop for the ufunc
+
+    .. c:member:: npy_uint32 *PyUFuncObject.op_flags
+
+ Override the default operand flags for each ufunc operand.
+
+ .. c:member:: npy_uint32 PyUFuncObject.iter_flags
+
+ Override the default nditer flags for the ufunc.
+
+ Added in API version 0x0000000D
+
+ .. c:member:: npy_intp *PyUFuncObject.core_dim_sizes
+
+ For each distinct core dimension, the possible
+ :ref:`frozen <frozen>` size if :c:data:`UFUNC_CORE_DIM_SIZE_INFERRED` is 0
+
+ .. c:member:: npy_uint32 *PyUFuncObject.core_dim_flags
+
+ For each distinct core dimension, a set of ``UFUNC_CORE_DIM*`` flags
+
+ - :c:data:`UFUNC_CORE_DIM_CAN_IGNORE` if the dim name ends in ``?``
+ - :c:data:`UFUNC_CORE_DIM_SIZE_INFERRED` if the dim size will be
+ determined from the operands and not from a :ref:`frozen <frozen>` signature
+
+PyArrayIter_Type and PyArrayIterObject
+--------------------------------------
+
+.. c:var:: PyArrayIter_Type
+
+ This is an iterator object that makes it easy to loop over an
+ N-dimensional array. It is the object returned from the flat
+ attribute of an ndarray. It is also used extensively throughout the
+ implementation internals to loop over an N-dimensional array. The
+ tp_as_mapping interface is implemented so that the iterator object
+ can be indexed (using 1-d indexing), and a few methods are
+ implemented through the tp_methods table. This object implements the
+ next method and can be used anywhere an iterator can be used in
+ Python.
+
+.. c:type:: PyArrayIterObject
+
+ The C-structure corresponding to an object of :c:data:`PyArrayIter_Type` is
+ the :c:type:`PyArrayIterObject`. The :c:type:`PyArrayIterObject` is used to
+ keep track of a pointer into an N-dimensional array. It contains associated
+ information used to quickly march through the array. The pointer can
+ be adjusted in three basic ways: 1) advance to the "next" position in
+ the array in a C-style contiguous fashion, 2) advance to an arbitrary
+ N-dimensional coordinate in the array, and 3) advance to an arbitrary
+ one-dimensional index into the array. The members of the
+ :c:type:`PyArrayIterObject` structure are used in these
+ calculations. Iterator objects keep their own dimension and strides
+ information about an array. This can be adjusted as needed for
+ "broadcasting," or to loop over only specific dimensions.
+
+ .. code-block:: c
+
+ typedef struct {
+ PyObject_HEAD
+ int nd_m1;
+ npy_intp index;
+ npy_intp size;
+ npy_intp coordinates[NPY_MAXDIMS];
+ npy_intp dims_m1[NPY_MAXDIMS];
+ npy_intp strides[NPY_MAXDIMS];
+ npy_intp backstrides[NPY_MAXDIMS];
+ npy_intp factors[NPY_MAXDIMS];
+ PyArrayObject *ao;
+ char *dataptr;
+ Bool contiguous;
+ } PyArrayIterObject;
+
+ .. c:member:: int PyArrayIterObject.nd_m1
+
+ :math:`N-1` where :math:`N` is the number of dimensions in the
+ underlying array.
+
+ .. c:member:: npy_intp PyArrayIterObject.index
+
+ The current 1-d index into the array.
+
+ .. c:member:: npy_intp PyArrayIterObject.size
+
+ The total size of the underlying array.
+
+ .. c:member:: npy_intp *PyArrayIterObject.coordinates
+
+ An :math:`N` -dimensional index into the array.
+
+ .. c:member:: npy_intp *PyArrayIterObject.dims_m1
+
+ The size of the array minus 1 in each dimension.
+
+ .. c:member:: npy_intp *PyArrayIterObject.strides
+
+ The strides of the array. How many bytes needed to jump to the next
+ element in each dimension.
+
+ .. c:member:: npy_intp *PyArrayIterObject.backstrides
+
+ How many bytes needed to jump from the end of a dimension back
+ to its beginning. Note that ``backstrides[k] == strides[k] *
+ dims_m1[k]``, but it is stored here as an optimization.
+
+ .. c:member:: npy_intp *PyArrayIterObject.factors
+
+ This array is used in computing an N-d index from a 1-d index. It
+ contains needed products of the dimensions.
+
+ .. c:member:: PyArrayObject *PyArrayIterObject.ao
+
+ A pointer to the underlying ndarray this iterator was created to
+ represent.
+
+ .. c:member:: char *PyArrayIterObject.dataptr
+
+ This member points to an element in the ndarray indicated by the
+ index.
+
+ .. c:member:: npy_bool PyArrayIterObject.contiguous
+
+ This flag is true if the underlying array is
+ :c:data:`NPY_ARRAY_C_CONTIGUOUS`. It is used to simplify
+ calculations when possible.
+
+
+How to use an array iterator on a C-level is explained more fully in
+later sections. Typically, you do not need to concern yourself with
+the internal structure of the iterator object, and merely interact
+with it through the use of the macros :c:func:`PyArray_ITER_NEXT` (it),
+:c:func:`PyArray_ITER_GOTO` (it, dest), or :c:func:`PyArray_ITER_GOTO1D`
+(it, index). All of these macros require the argument *it* to be a
+:c:type:`PyArrayIterObject *`.
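+
+For illustration, a minimal sketch of such a loop is shown below; it assumes
+that ``arr`` is an existing double-precision ndarray and that the code runs
+inside a C extension function that returns ``NULL`` on error.
+
+.. code-block:: c
+
+   PyArrayIterObject *it;
+   double sum = 0.0;
+
+   /* create an iterator that walks arr in C-contiguous order */
+   it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)arr);
+   if (it == NULL) {
+       return NULL;
+   }
+   while (PyArray_ITER_NOTDONE(it)) {
+       /* PyArray_ITER_DATA gives the pointer to the current element */
+       sum += *(double *)PyArray_ITER_DATA(it);
+       PyArray_ITER_NEXT(it);
+   }
+   Py_DECREF(it);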
+
+
+PyArrayMultiIter_Type and PyArrayMultiIterObject
+------------------------------------------------
+
+.. c:var:: PyArrayMultiIter_Type
+
+ This type provides an iterator that encapsulates the concept of
+ broadcasting. It allows :math:`N` arrays to be broadcast together
+ so that the loop progresses in C-style contiguous fashion over the
+ broadcasted array. The corresponding C-structure is the
+ :c:type:`PyArrayMultiIterObject`, whose memory layout must appear at the
+ start of any object, *obj*, passed in to the :c:func:`PyArray_Broadcast` (obj)
+ function. Broadcasting is performed by adjusting array iterators so
+ that each iterator represents the broadcasted shape and size, but
+ has its strides adjusted so that the correct element from the array
+ is used at each iteration.
+
+
+.. c:type:: PyArrayMultiIterObject
+
+ .. code-block:: c
+
+ typedef struct {
+ PyObject_HEAD
+ int numiter;
+ npy_intp size;
+ npy_intp index;
+ int nd;
+ npy_intp dimensions[NPY_MAXDIMS];
+ PyArrayIterObject *iters[NPY_MAXDIMS];
+ } PyArrayMultiIterObject;
+
+ .. c:macro:: PyArrayMultiIterObject.PyObject_HEAD
+
+ Needed at the start of every Python object (holds reference count
+ and type identification).
+
+ .. c:member:: int PyArrayMultiIterObject.numiter
+
+ The number of arrays that need to be broadcast to the same shape.
+
+ .. c:member:: npy_intp PyArrayMultiIterObject.size
+
+ The total broadcasted size.
+
+ .. c:member:: npy_intp PyArrayMultiIterObject.index
+
+ The current (1-d) index into the broadcasted result.
+
+ .. c:member:: int PyArrayMultiIterObject.nd
+
+ The number of dimensions in the broadcasted result.
+
+ .. c:member:: npy_intp *PyArrayMultiIterObject.dimensions
+
+ The shape of the broadcasted result (only ``nd`` slots are used).
+
+ .. c:member:: PyArrayIterObject **PyArrayMultiIterObject.iters
+
+ An array of iterator objects that holds the iterators for the
+ arrays to be broadcast together. On return, the iterators are
+ adjusted for broadcasting.
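+
+As an illustration, the sketch below broadcasts two double-precision arrays
+against each other; ``a1`` and ``a2`` are assumed to be existing
+:c:type:`PyArrayObject *` values and error handling is abbreviated.
+
+.. code-block:: c
+
+   PyArrayMultiIterObject *mit;
+
+   /* broadcast a1 and a2 to a common shape */
+   mit = (PyArrayMultiIterObject *)PyArray_MultiIterNew(
+           2, (PyObject *)a1, (PyObject *)a2);
+   if (mit == NULL) {
+       return NULL;
+   }
+   while (PyArray_MultiIter_NOTDONE(mit)) {
+       double x = *(double *)PyArray_MultiIter_DATA(mit, 0);
+       double y = *(double *)PyArray_MultiIter_DATA(mit, 1);
+       /* ... use the broadcast pair (x, y) ... */
+       PyArray_MultiIter_NEXT(mit);
+   }
+   Py_DECREF(mit);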
+
+PyArrayNeighborhoodIter_Type and PyArrayNeighborhoodIterObject
+--------------------------------------------------------------
+
+.. c:var:: PyArrayNeighborhoodIter_Type
+
+ This is an iterator object that makes it easy to loop over an
+ N-dimensional neighborhood.
+
+.. c:type:: PyArrayNeighborhoodIterObject
+
+ The C-structure corresponding to an object of
+ :c:data:`PyArrayNeighborhoodIter_Type` is the
+ :c:type:`PyArrayNeighborhoodIterObject`.
+
+ .. code-block:: c
+
+ typedef struct {
+ PyObject_HEAD
+ int nd_m1;
+ npy_intp index, size;
+ npy_intp coordinates[NPY_MAXDIMS];
+ npy_intp dims_m1[NPY_MAXDIMS];
+ npy_intp strides[NPY_MAXDIMS];
+ npy_intp backstrides[NPY_MAXDIMS];
+ npy_intp factors[NPY_MAXDIMS];
+ PyArrayObject *ao;
+ char *dataptr;
+ npy_bool contiguous;
+ npy_intp bounds[NPY_MAXDIMS][2];
+ npy_intp limits[NPY_MAXDIMS][2];
+ npy_intp limits_sizes[NPY_MAXDIMS];
+ npy_iter_get_dataptr_t translate;
+ npy_intp nd;
+ npy_intp dimensions[NPY_MAXDIMS];
+ PyArrayIterObject* _internal_iter;
+ char* constant;
+ int mode;
+ } PyArrayNeighborhoodIterObject;
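+
+Such an iterator is normally obtained with
+:c:func:`PyArray_NeighborhoodIterNew` and advanced with
+:c:func:`PyArrayNeighborhoodIter_Next`. The sketch below is only illustrative;
+it assumes ``arr`` is an existing 2-d double-precision array and omits error
+checking.
+
+.. code-block:: c
+
+   PyArrayIterObject *it;
+   PyArrayNeighborhoodIterObject *nit;
+   npy_intp bounds[] = {-1, 1, -1, 1};   /* a 3x3 neighborhood in 2-d */
+   npy_intp i, j;
+
+   it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)arr);
+   nit = (PyArrayNeighborhoodIterObject *)PyArray_NeighborhoodIterNew(
+           it, bounds, NPY_NEIGHBORHOOD_ITER_ZERO_PADDING, NULL);
+   for (i = 0; i < it->size; i++) {
+       for (j = 0; j < nit->size; j++) {
+           /* nit->dataptr walks the neighborhood of the element at it->dataptr */
+           PyArrayNeighborhoodIter_Next(nit);
+       }
+       PyArray_ITER_NEXT(it);
+       PyArrayNeighborhoodIter_Reset(nit);
+   }
+   Py_DECREF(nit);
+   Py_DECREF(it);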
+
+PyArrayFlags_Type and PyArrayFlagsObject
+----------------------------------------
+
+.. c:var:: PyArrayFlags_Type
+
+ When the flags attribute is retrieved from Python, a special
+ builtin object of this type is constructed. This special type makes
+ it easier to work with the different flags by accessing them as
+ attributes or by accessing them as if the object were a dictionary
+ with the flag names as entries.
+
+.. c:type:: PyArrayFlagsObject
+
+ .. code-block:: c
+
+ typedef struct PyArrayFlagsObject {
+ PyObject_HEAD
+ PyObject *arr;
+ int flags;
+ } PyArrayFlagsObject;
+
+
+ScalarArrayTypes
+----------------
+
+There is a Python type for each of the different built-in data types
+that can be present in the array. Most of these are simple wrappers
+around the corresponding data type in C. The C-names for these types
+are :c:data:`Py{TYPE}ArrType_Type` where ``{TYPE}`` can be
+
+ **Bool**, **Byte**, **Short**, **Int**, **Long**, **LongLong**,
+ **UByte**, **UShort**, **UInt**, **ULong**, **ULongLong**,
+ **Half**, **Float**, **Double**, **LongDouble**, **CFloat**,
+ **CDouble**, **CLongDouble**, **String**, **Unicode**, **Void**, and
+ **Object**.
+
+These type names are part of the C-API and can therefore be created in
+extension C-code. There is also a :c:data:`PyIntpArrType_Type` and a
+:c:data:`PyUIntpArrType_Type` that are simple substitutes for one of the
+integer types that can hold a pointer on the platform. The structure
+of these scalar objects is not exposed to C-code. The function
+:c:func:`PyArray_ScalarAsCtype` (..) can be used to extract the C-type
+value from the array scalar and the function :c:func:`PyArray_Scalar`
+(...) can be used to construct an array scalar from a C-value.
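+
+As a small, hedged sketch of both directions (``scalar`` is assumed to be an
+existing `numpy.float64` array scalar):
+
+.. code-block:: c
+
+   double value;
+   PyArray_Descr *descr;
+   PyObject *new_scalar;
+
+   /* copy the C double held by the array scalar into value */
+   PyArray_ScalarAsCtype(scalar, &value);
+
+   /* build a new float64 array scalar from a C double */
+   value *= 2.0;
+   descr = PyArray_DescrFromType(NPY_DOUBLE);
+   new_scalar = PyArray_Scalar(&value, descr, NULL);
+   Py_DECREF(descr);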
+
+
+Other C-Structures
+==================
+
+A few new C-structures were found to be useful in the development of
+NumPy. These C-structures are used in at least one C-API call and are
+therefore documented here. The main reason these structures were
+defined is to make it easy to use the Python ParseTuple C-API to
+convert from Python objects to a useful C-Object.
+
+
+PyArray_Dims
+------------
+
+.. c:type:: PyArray_Dims
+
+ This structure is very useful when shape and/or strides information
+ is supposed to be interpreted. The structure is:
+
+ .. code-block:: c
+
+ typedef struct {
+ npy_intp *ptr;
+ int len;
+ } PyArray_Dims;
+
+ The members of this structure are
+
+ .. c:member:: npy_intp *PyArray_Dims.ptr
+
+ A pointer to a list of (:c:type:`npy_intp`) integers which
+ usually represent array shape or array strides.
+
+ .. c:member:: int PyArray_Dims.len
+
+ The length of the list of integers. It is assumed safe to
+ access *ptr* [0] to *ptr* [len-1].
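+
+In practice this structure is usually filled in by
+:c:func:`PyArray_IntpConverter` through the ``O&`` format of
+``PyArg_ParseTuple``. The sketch below is illustrative only; the function
+name ``set_shape`` is hypothetical.
+
+.. code-block:: c
+
+   static PyObject *
+   set_shape(PyObject *self, PyObject *args)
+   {
+       PyArray_Dims shape = {NULL, 0};
+
+       /* the converter allocates shape.ptr and sets shape.len */
+       if (!PyArg_ParseTuple(args, "O&", PyArray_IntpConverter, &shape)) {
+           return NULL;
+       }
+       /* ... use shape.ptr[0] ... shape.ptr[shape.len - 1] ... */
+       PyDimMem_FREE(shape.ptr);   /* caller is responsible for freeing */
+       Py_RETURN_NONE;
+   }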
+
+
+PyArray_Chunk
+-------------
+
+.. c:type:: PyArray_Chunk
+
+ This is equivalent to the buffer object structure in Python up to
+ the ptr member. On 32-bit platforms (*i.e.* if :c:data:`NPY_SIZEOF_INT`
+ == :c:data:`NPY_SIZEOF_INTP`), the len member also matches an equivalent
+ member of the buffer object. It is useful to represent a generic
+ single-segment chunk of memory.
+
+ .. code-block:: c
+
+ typedef struct {
+ PyObject_HEAD
+ PyObject *base;
+ void *ptr;
+ npy_intp len;
+ int flags;
+ } PyArray_Chunk;
+
+ The members are
+
+ .. c:macro:: PyArray_Chunk.PyObject_HEAD
+
+ Necessary for all Python objects. Included here so that the
+ :c:type:`PyArray_Chunk` structure matches that of the buffer object
+ (at least to the len member).
+
+ .. c:member:: PyObject *PyArray_Chunk.base
+
+ The Python object this chunk of memory comes from. Needed so that
+ memory can be accounted for properly.
+
+ .. c:member:: void *PyArray_Chunk.ptr
+
+ A pointer to the start of the single-segment chunk of memory.
+
+ .. c:member:: npy_intp PyArray_Chunk.len
+
+ The length of the segment in bytes.
+
+ .. c:member:: int PyArray_Chunk.flags
+
+ Any data flags (*e.g.* :c:data:`NPY_ARRAY_WRITEABLE` ) that should
+ be used to interpret the memory.
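+
+A :c:type:`PyArray_Chunk` is typically filled in by
+:c:func:`PyArray_BufferConverter` from any object exposing a single-segment
+buffer. A minimal, hedged sketch (the function name is hypothetical):
+
+.. code-block:: c
+
+   static PyObject *
+   sum_bytes(PyObject *self, PyObject *args)
+   {
+       PyArray_Chunk chunk;
+       npy_intp i;
+       unsigned long total = 0;
+
+       /* fills in base, ptr, len, and flags from a buffer-like object */
+       if (!PyArg_ParseTuple(args, "O&", PyArray_BufferConverter, &chunk)) {
+           return NULL;
+       }
+       for (i = 0; i < chunk.len; i++) {
+           total += ((unsigned char *)chunk.ptr)[i];
+       }
+       return PyLong_FromUnsignedLong(total);
+   }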
+
+
+PyArrayInterface
+----------------
+
+.. seealso:: :ref:`arrays.interface`
+
+.. c:type:: PyArrayInterface
+
+ The :c:type:`PyArrayInterface` structure is defined so that NumPy and
+ other extension modules can use the rapid array interface
+ protocol. The :obj:`__array_struct__` method of an object that
+ supports the rapid array interface protocol should return a
+ :c:type:`PyCObject` that contains a pointer to a :c:type:`PyArrayInterface`
+ structure with the relevant details of the array. After the new
+ array is created, the attribute should be ``DECREF``'d which will
+ free the :c:type:`PyArrayInterface` structure. Remember to ``INCREF`` the
+ object (whose :obj:`__array_struct__` attribute was retrieved) and
+ point the base member of the new :c:type:`PyArrayObject` to this same
+ object. In this way the memory for the array will be managed
+ correctly.
+
+ .. code-block:: c
+
+ typedef struct {
+ int two;
+ int nd;
+ char typekind;
+ int itemsize;
+ int flags;
+ npy_intp *shape;
+ npy_intp *strides;
+ void *data;
+ PyObject *descr;
+ } PyArrayInterface;
+
+ .. c:member:: int PyArrayInterface.two
+
+ the integer 2 as a sanity check.
+
+ .. c:member:: int PyArrayInterface.nd
+
+ the number of dimensions in the array.
+
+ .. c:member:: char PyArrayInterface.typekind
+
+ A character indicating what kind of array is present according to the
+ typestring convention with 't' -> bitfield, 'b' -> Boolean, 'i' ->
+ signed integer, 'u' -> unsigned integer, 'f' -> floating point, 'c' ->
+ complex floating point, 'O' -> object, 'S' -> (byte-)string, 'U' ->
+ unicode, 'V' -> void.
+
+ .. c:member:: int PyArrayInterface.itemsize
+
+ The number of bytes each item in the array requires.
+
+ .. c:member:: int PyArrayInterface.flags
+
+ Any of the bits :c:data:`NPY_ARRAY_C_CONTIGUOUS` (1),
+ :c:data:`NPY_ARRAY_F_CONTIGUOUS` (2), :c:data:`NPY_ARRAY_ALIGNED` (0x100),
+ :c:data:`NPY_ARRAY_NOTSWAPPED` (0x200), or :c:data:`NPY_ARRAY_WRITEABLE`
+ (0x400) to indicate something about the data. The
+ :c:data:`NPY_ARRAY_ALIGNED`, :c:data:`NPY_ARRAY_C_CONTIGUOUS`, and
+ :c:data:`NPY_ARRAY_F_CONTIGUOUS` flags can actually be determined from
+ the other parameters. The flag :c:data:`NPY_ARR_HAS_DESCR`
+ (0x800) can also be set to indicate to objects consuming the
+ version 3 array interface that the descr member of the
+ structure is present (it will be ignored by objects consuming
+ version 2 of the array interface).
+
+ .. c:member:: npy_intp *PyArrayInterface.shape
+
+ An array containing the size of the array in each dimension.
+
+ .. c:member:: npy_intp *PyArrayInterface.strides
+
+ An array containing the number of bytes to jump to get to the next
+ element in each dimension.
+
+ .. c:member:: void *PyArrayInterface.data
+
+ A pointer *to* the first element of the array.
+
+ .. c:member:: PyObject *PyArrayInterface.descr
+
+ A Python object describing the data-type in more detail (same
+ as the *descr* key in :obj:`__array_interface__`). This can be
+ ``NULL`` if *typekind* and *itemsize* provide enough
+ information. This field is also ignored unless the
+ :c:data:`NPY_ARR_HAS_DESCR` flag is set in *flags*.
+
+
+Internally used structures
+--------------------------
+
+Internally, the code uses some additional Python objects primarily for
+memory management. These types are not accessible directly from
+Python, and are not exposed to the C-API. They are included here only
+for completeness and assistance in understanding the code.
+
+
+.. c:type:: PyUFuncLoopObject
+
+ A loose wrapper for a C-structure that contains the information
+ needed for looping. This is useful if you are trying to understand
+ the ufunc looping code. The :c:type:`PyUFuncLoopObject` is the associated
+ C-structure. It is defined in the ``ufuncobject.h`` header.
+
+.. c:type:: PyUFuncReduceObject
+
+ A loose wrapper for the C-structure that contains the information
+ needed for reduce-like methods of ufuncs. This is useful if you are
+ trying to understand the reduce, accumulate, and reduce-at
+ code. The :c:type:`PyUFuncReduceObject` is the associated C-structure. It
+ is defined in the ``ufuncobject.h`` header.
+
+.. c:type:: PyUFunc_Loop1d
+
+ A simple linked-list of C-structures containing the information needed
+ to define a 1-d loop for a ufunc for every defined signature of a
+ user-defined data-type.
+
+.. c:var:: PyArrayMapIter_Type
+
+ Advanced indexing is handled with this Python type. It is simply a
+ loose wrapper around the C-structure containing the variables
+ needed for advanced array indexing. The associated C-structure,
+ :c:type:`PyArrayMapIterObject`, is useful if you are trying to
+ understand the advanced-index mapping code. It is defined in the
+ ``arrayobject.h`` header. This type is not exposed to Python and
+ could be replaced with a C-structure. As a Python type it takes
+ advantage of reference-counted memory management.
diff --git a/doc/source/reference/c-api/ufunc.rst b/doc/source/reference/c-api/ufunc.rst
new file mode 100644
index 000000000..92a679510
--- /dev/null
+++ b/doc/source/reference/c-api/ufunc.rst
@@ -0,0 +1,485 @@
+UFunc API
+=========
+
+.. sectionauthor:: Travis E. Oliphant
+
+.. index::
+ pair: ufunc; C-API
+
+
+Constants
+---------
+
+.. c:var:: UFUNC_ERR_{HANDLER}
+
+ ``{HANDLER}`` can be **IGNORE**, **WARN**, **RAISE**, or **CALL**
+
+.. c:var:: UFUNC_{THING}_{ERR}
+
+ ``{THING}`` can be **MASK**, **SHIFT**, or **FPE**, and ``{ERR}`` can
+ be **DIVIDEBYZERO**, **OVERFLOW**, **UNDERFLOW**, and **INVALID**.
+
+.. c:var:: PyUFunc_{VALUE}
+
+ .. c:var:: PyUFunc_One
+
+ .. c:var:: PyUFunc_Zero
+
+ .. c:var:: PyUFunc_MinusOne
+
+ .. c:var:: PyUFunc_ReorderableNone
+
+ .. c:var:: PyUFunc_None
+
+ .. c:var:: PyUFunc_IdentityValue
+
+
+Macros
+------
+
+.. c:macro:: NPY_LOOP_BEGIN_THREADS
+
+ Used in universal function code to only release the Python GIL if
+ loop->obj is not true (*i.e.* this is not an OBJECT array
+ loop). Requires use of :c:macro:`NPY_BEGIN_THREADS_DEF` in variable
+ declaration area.
+
+.. c:macro:: NPY_LOOP_END_THREADS
+
+ Used in universal function code to re-acquire the Python GIL if it
+ was released (because loop->obj was not true).
+
+
+Functions
+---------
+
+.. c:function:: PyObject* PyUFunc_FromFuncAndData( \
+ PyUFuncGenericFunction* func, void** data, char* types, int ntypes, \
+ int nin, int nout, int identity, char* name, char* doc, int unused)
+
+ Create a new broadcasting universal function from required variables.
+ Each ufunc builds around the notion of an element-by-element
+ operation. Each ufunc object contains pointers to 1-d loops
+ implementing the basic functionality for each supported type.
+
+ .. note::
+
+ The *func*, *data*, *types*, *name*, and *doc* arguments are not
+ copied by :c:func:`PyUFunc_FromFuncAndData`. The caller must ensure
+ that the memory used by these arrays is not freed as long as the
+ ufunc object is alive.
+
+ :param func:
+ Must point to an array of length *ntypes* containing
+ :c:type:`PyUFuncGenericFunction` items. These items are pointers to
+ functions that actually implement the underlying
+ (element-by-element) function :math:`N` times with the following
+ signature:
+
+ .. c:function:: void loopfunc(
+ char** args, npy_intp* dimensions, npy_intp* steps, void* data)
+
+ *args*
+
+ An array of pointers to the actual data for the input and output
+ arrays. The input arguments are given first followed by the output
+ arguments.
+
+ *dimensions*
+
+ A pointer to the size of the dimension over which this function is
+ looping.
+
+ *steps*
+
+ A pointer to the number of bytes to jump to get to the
+ next element in this dimension for each of the input and
+ output arguments.
+
+ *data*
+
+ Arbitrary data (extra arguments, function names, *etc.* )
+ that can be stored with the ufunc and will be passed in
+ when it is called.
+
+ This is an example of a func specialized for addition of doubles
+ returning doubles.
+
+ .. code-block:: c
+
+ static void
+ double_add(char **args, npy_intp *dimensions, npy_intp *steps,
+ void *extra)
+ {
+ npy_intp i;
+ npy_intp is1 = steps[0], is2 = steps[1];
+ npy_intp os = steps[2], n = dimensions[0];
+ char *i1 = args[0], *i2 = args[1], *op = args[2];
+ for (i = 0; i < n; i++) {
+ *((double *)op) = *((double *)i1) +
+ *((double *)i2);
+ i1 += is1;
+ i2 += is2;
+ op += os;
+ }
+ }
+
+ :param data:
+ Should be ``NULL`` or a pointer to an array of size *ntypes*.
+ This array may contain arbitrary extra data to be passed to
+ the corresponding loop function in the ``func`` array.
+
+ :param types:
+ Length ``(nin + nout) * ntypes`` array of ``char`` encoding the
+ `numpy.dtype.num` (built-in only) that the corresponding
+ function in the ``func`` array accepts. For instance, for a comparison
+ ufunc with two ``ntypes``, two ``nin`` and one ``nout``, where the
+ first function accepts `numpy.int32` and the second
+ `numpy.int64`, with both returning `numpy.bool_`, ``types`` would
+ be ``(char[]) {5, 5, 0, 7, 7, 0}`` since ``NPY_INT32`` is 5,
+ ``NPY_INT64`` is 7, and ``NPY_BOOL`` is 0.
+
+ The bit-width names can also be used (e.g. :c:data:`NPY_INT32`,
+ :c:data:`NPY_COMPLEX128` ) if desired.
+
+ :ref:`ufuncs.casting` will be used at runtime to find the first
+ ``func`` callable by the input/output provided.
+
+ :param ntypes:
+ How many different data-type-specific functions the ufunc has implemented.
+
+ :param nin:
+ The number of inputs to this operation.
+
+ :param nout:
+ The number of outputs
+
+ :param identity:
+
+ Either :c:data:`PyUFunc_One`, :c:data:`PyUFunc_Zero`,
+ :c:data:`PyUFunc_MinusOne`, or :c:data:`PyUFunc_None`.
+ This specifies what should be returned when
+ an empty array is passed to the reduce method of the ufunc.
+ The special value :c:data:`PyUFunc_IdentityValue` may only be used with
+ the :c:func:`PyUFunc_FromFuncAndDataAndSignatureAndIdentity` method, to
+ allow an arbitrary python object to be used as the identity.
+
+ :param name:
+ The name for the ufunc as a ``NULL`` terminated string. Specifying
+ a name of 'add' or 'multiply' enables a special behavior for
+ integer-typed reductions when no dtype is given. If the input type is an
+ integer (or boolean) data type smaller than the size of the `numpy.int_`
+ data type, it will be internally upcast to the `numpy.int_` (or
+ `numpy.uint`) data type.
+
+ :param doc:
+ Allows passing in a documentation string to be stored with the
+ ufunc. The documentation string should not contain the name
+ of the function or the calling signature as that will be
+ dynamically determined from the object and available when
+ accessing the **__doc__** attribute of the ufunc.
+
+ :param unused:
+ Unused and present for backwards compatibility of the C-API.
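+
+Putting the pieces together, the following hedged sketch registers the
+``double_add`` loop shown above as a ufunc with a single double-precision
+inner loop; ``m`` is assumed to be the extension module object and the
+surrounding module boilerplate is omitted.
+
+.. code-block:: c
+
+   /* these arrays must stay alive as long as the ufunc does */
+   static PyUFuncGenericFunction add_functions[] = {&double_add};
+   static void *add_data[] = {NULL};
+   static char add_types[] = {NPY_DOUBLE, NPY_DOUBLE, NPY_DOUBLE};
+
+   /* ... inside the module initialization function ... */
+   PyObject *add_ufunc = PyUFunc_FromFuncAndData(
+           add_functions, add_data, add_types,
+           1,              /* ntypes */
+           2, 1,           /* nin, nout */
+           PyUFunc_Zero,   /* identity used when reducing an empty array */
+           "double_add",
+           "element-wise addition of doubles",
+           0);             /* unused */
+   if (add_ufunc == NULL) {
+       return NULL;
+   }
+   PyModule_AddObject(m, "double_add", add_ufunc);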
+
+.. c:function:: PyObject* PyUFunc_FromFuncAndDataAndSignature( \
+ PyUFuncGenericFunction* func, void** data, char* types, int ntypes, \
+ int nin, int nout, int identity, char* name, char* doc, int unused, char *signature)
+
+ This function is very similar to PyUFunc_FromFuncAndData above, but has
+ an extra *signature* argument, to define a
+ :ref:`generalized universal functions <c-api.generalized-ufuncs>`.
+ Similarly to how ufuncs are built around an element-by-element operation,
+ gufuncs are built around subarray-by-subarray operations, with the
+ :ref:`signature <details-of-signature>` defining the subarrays to operate on.
+
+ :param signature:
+ The signature for the new gufunc. Setting it to NULL is equivalent
+ to calling PyUFunc_FromFuncAndData. A copy of the string is made,
+ so the passed in buffer can be freed.
+
+.. c:function:: PyObject* PyUFunc_FromFuncAndDataAndSignatureAndIdentity( \
+ PyUFuncGenericFunction *func, void **data, char *types, int ntypes, \
+ int nin, int nout, int identity, char *name, char *doc, int unused, \
+ char *signature, PyObject *identity_value)
+
+ This function is very similar to `PyUFunc_FromFuncAndDataAndSignature` above,
+ but has an extra *identity_value* argument, to define an arbitrary identity
+ for the ufunc when ``identity`` is passed as ``PyUFunc_IdentityValue``.
+
+ :param identity_value:
+ The identity for the new gufunc. Must be passed as ``NULL`` unless the
+ ``identity`` argument is ``PyUFunc_IdentityValue``. Setting it to NULL
+ is equivalent to calling PyUFunc_FromFuncAndDataAndSignature.
+
+
+.. c:function:: int PyUFunc_RegisterLoopForType( \
+ PyUFuncObject* ufunc, int usertype, PyUFuncGenericFunction function, \
+ int* arg_types, void* data)
+
+ This function allows the user to register a 1-d loop with an
+ already-created ufunc to be used whenever the ufunc is called
+ with any of its input arguments as the user-defined
+ data-type. This is needed in order to make ufuncs work with
+ user-defined data-types. The data-type must have been previously
+ registered with the numpy system. The loop is passed in as
+ *function*. This loop can take arbitrary data which should be
+ passed in as *data*. The data-types the loop requires are passed
+ in as *arg_types* which must be a pointer to memory at least as
+ large as ufunc->nargs.
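+
+As a hedged sketch, registering an addition loop for a hypothetical
+user-defined quad type might look like the following; ``quad_type_num`` (from
+a previous ``PyArray_RegisterDataType`` call), ``quad_add_loop`` (a
+:c:type:`PyUFuncGenericFunction` for that type), and ``add_ufunc`` (the
+``numpy.add`` ufunc object) are all assumed to exist already.
+
+.. code-block:: c
+
+   static int quad_add_types[3];
+
+   /* ... after the quad type has been registered ... */
+   quad_add_types[0] = quad_type_num;   /* input 1 */
+   quad_add_types[1] = quad_type_num;   /* input 2 */
+   quad_add_types[2] = quad_type_num;   /* output: nin + nout entries in total */
+   if (PyUFunc_RegisterLoopForType((PyUFuncObject *)add_ufunc, quad_type_num,
+                                   &quad_add_loop, quad_add_types, NULL) < 0) {
+       return NULL;
+   }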
+
+.. c:function:: int PyUFunc_RegisterLoopForDescr( \
+ PyUFuncObject* ufunc, PyArray_Descr* userdtype, \
+ PyUFuncGenericFunction function, PyArray_Descr** arg_dtypes, void* data)
+
+ This function behaves like PyUFunc_RegisterLoopForType above, except
+ that it allows the user to register a 1-d loop using PyArray_Descr
+ objects instead of dtype type num values. This allows a 1-d loop to be
+ registered for structured array data-types and custom data-types
+ instead of scalar data-types.
+
+.. c:function:: int PyUFunc_ReplaceLoopBySignature( \
+ PyUFuncObject* ufunc, PyUFuncGenericFunction newfunc, int* signature, \
+ PyUFuncGenericFunction* oldfunc)
+
+ Replace a 1-d loop matching the given *signature* in the
+ already-created *ufunc* with the new 1-d loop newfunc. Return the
+ old 1-d loop function in *oldfunc*. Return 0 on success and -1 on
+ failure. This function works only with built-in types (use
+ :c:func:`PyUFunc_RegisterLoopForType` for user-defined types). A
+ signature is an array of data-type numbers indicating the inputs
+ followed by the outputs assumed by the 1-d loop.
+
+.. c:function:: int PyUFunc_GenericFunction( \
+ PyUFuncObject* self, PyObject* args, PyObject* kwds, PyArrayObject** mps)
+
+ A generic ufunc call. The ufunc is passed in as *self*, the arguments
+ to the ufunc as *args* and *kwds*. The *mps* argument is an array of
+ :c:type:`PyArrayObject` pointers whose values are discarded and which
+ receive the converted input arguments as well as the ufunc outputs
+ when success is returned. The user is responsible for managing this
+ array and receives a new reference for each array in *mps*. The total
+ number of arrays in *mps* is given by *self* ->nin + *self* ->nout.
+
+ Returns 0 on success, -1 on error.
+
+.. c:function:: int PyUFunc_checkfperr(int errmask, PyObject* errobj)
+
+ A simple interface to the IEEE error-flag checking support. The
+ *errmask* argument is a mask of :c:data:`UFUNC_MASK_{ERR}` bitmasks
+ indicating which errors to check for (and how to check for
+ them). The *errobj* must be a Python tuple with two elements: a
+ string containing the name which will be used in any communication
+ of error and either a callable Python object (call-back function)
+ or :c:data:`Py_None`. The callable object will only be used if
+ :c:data:`UFUNC_ERR_CALL` is set as the desired error checking
+ method. This routine manages the GIL and is safe to call even
+ after releasing the GIL. If an error in the IEEE-compatible
+ hardware is detected, -1 is returned; otherwise 0 is
+ returned.
+
+.. c:function:: void PyUFunc_clearfperr()
+
+ Clear the IEEE error flags.
+
+.. c:function:: void PyUFunc_GetPyValues( \
+ char* name, int* bufsize, int* errmask, PyObject** errobj)
+
+ Get the Python values used for ufunc processing from the
+ thread-local storage area unless the defaults have been set in
+ which case the name lookup is bypassed. The name is placed as a
+ string in the first element of *\*errobj*. The second element is
+ the looked-up function to call on error callback. The value of the
+ looked-up buffer-size to use is passed into *bufsize*, and the
+ value of the error mask is placed into *errmask*.
+
+
+Generic functions
+-----------------
+
+At the core of every ufunc is a collection of type-specific functions
+that defines the basic functionality for each of the supported types.
+These functions must evaluate the underlying function :math:`N\geq1`
+times. Extra-data may be passed in that may be used during the
+calculation. This feature allows some general functions to be used as
+these basic looping functions. The general function has all the code
+needed to point variables to the right place and set up a function
+call. The general function assumes that the actual function to call is
+passed in as the extra data and calls it with the correct values. All
+of these functions are suitable for placing directly in the array of
+functions stored in the functions member of the PyUFuncObject
+structure.
+
+.. c:function:: void PyUFunc_f_f_As_d_d( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_d_d( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_f_f( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_g_g( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_F_F_As_D_D( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_F_F( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_D_D( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_G_G( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_e_e( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_e_e_As_f_f( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_e_e_As_d_d( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+ Type specific, core 1-d functions for ufuncs where each
+ calculation is obtained by calling a function taking one input
+ argument and returning one output. This function is passed in as
+ ``func``. The letters correspond to the dtype characters of the supported
+ data types ( ``e`` - half, ``f`` - float, ``d`` - double,
+ ``g`` - long double, ``F`` - cfloat, ``D`` - cdouble,
+ ``G`` - clongdouble). The argument *func* must support the same
+ signature. The ``_As_X_X`` variants assume ndarrays of one data type
+ but cast the values to use an underlying function that takes a
+ different data type. Thus, :c:func:`PyUFunc_f_f_As_d_d` uses
+ ndarrays of data type :c:data:`NPY_FLOAT` but calls out to a
+ C-function that takes double and returns double.
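+
+These generic loops are normally paired with a plain C function passed through
+the *data* array of :c:func:`PyUFunc_FromFuncAndData`. The hedged sketch below
+wraps the C library ``cos`` as a double-only ufunc; the names are illustrative
+only.
+
+.. code-block:: c
+
+   #include <math.h>
+
+   static PyUFuncGenericFunction cos_functions[] = {&PyUFunc_d_d};
+   static void *cos_data[] = {(void *)&cos};   /* the real work is the extra data */
+   static char cos_types[] = {NPY_DOUBLE, NPY_DOUBLE};
+
+   /* ... inside the module initialization function ... */
+   PyObject *cos_ufunc = PyUFunc_FromFuncAndData(
+           cos_functions, cos_data, cos_types,
+           1, 1, 1, PyUFunc_None, "cos_like", "double-only cosine", 0);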
+
+.. c:function:: void PyUFunc_ff_f_As_dd_d( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_ff_f( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_dd_d( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_gg_g( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_FF_F_As_DD_D( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_DD_D( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_FF_F( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_GG_G( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_ee_e( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_ee_e_As_ff_f( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_ee_e_As_dd_d( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+ Type specific, core 1-d functions for ufuncs where each
+ calculation is obtained by calling a function taking two input
+ arguments and returning one output. The underlying function to
+ call is passed in as *func*. The letters correspond to
+ the dtype characters of the specific data type supported by the
+ general-purpose function. The argument ``func`` must support the
+ corresponding signature. The ``_As_XX_X`` variants assume ndarrays
+ of one data type but cast the values at each iteration of the loop
+ to use the underlying function that takes a different data type.
+
+.. c:function:: void PyUFunc_O_O( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+.. c:function:: void PyUFunc_OO_O( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+ One-input, one-output, and two-input, one-output core 1-d functions
+ for the :c:data:`NPY_OBJECT` data type. These functions handle reference
+ count issues and return early on error. The actual function to call is
+ *func* and it must accept calls with the signature ``(PyObject*)
+ (PyObject*)`` for :c:func:`PyUFunc_O_O` or ``(PyObject*)(PyObject *,
+ PyObject *)`` for :c:func:`PyUFunc_OO_O`.
+
+.. c:function:: void PyUFunc_O_O_method( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+ This general purpose 1-d core function assumes that *func* is a string
+ representing a method of the input object. For each
+ iteration of the loop, the Python object is extracted from the array
+ and its *func* method is called, returning the result to the output array.
+
+.. c:function:: void PyUFunc_OO_O_method( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+ This general purpose 1-d core function assumes that *func* is a
+ string representing a method of the input object that takes one
+ argument. The first argument in *args* is the method whose function is
+ called, the second argument in *args* is the argument passed to the
+ function. The output of the function is stored in the third entry
+ of *args*.
+
+.. c:function:: void PyUFunc_On_Om( \
+ char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+
+ This is the 1-d core function used by the dynamic ufuncs created
+ by umath.frompyfunc(function, nin, nout). In this case *func* is a
+ pointer to a :c:type:`PyUFunc_PyFuncData` structure which has definition
+
+ .. c:type:: PyUFunc_PyFuncData
+
+ .. code-block:: c
+
+ typedef struct {
+ int nin;
+ int nout;
+ PyObject *callable;
+ } PyUFunc_PyFuncData;
+
+ At each iteration of the loop, the *nin* input objects are extracted
+ from their object arrays and placed into an argument tuple, the Python
+ *callable* is called with the input arguments, and the nout
+ outputs are placed into their object arrays.
+
+
+Importing the API
+-----------------
+
+.. c:var:: PY_UFUNC_UNIQUE_SYMBOL
+
+.. c:var:: NO_IMPORT_UFUNC
+
+.. c:function:: void import_ufunc(void)
+
+ These are the constants and functions for accessing the ufunc
+ C-API from extension modules in precisely the same way as the
+ array C-API can be accessed. The ``import_ufunc`` () function must
+ always be called (in the initialization subroutine of the
+ extension module). If your extension module is in one file then
+ that is all that is required. The other two constants are useful
+ if your extension module makes use of multiple files. In that
+ case, define :c:data:`PY_UFUNC_UNIQUE_SYMBOL` to something unique to
+ your code and then in source files that do not contain the module
+ initialization function but still need access to the UFUNC API,
+ define :c:data:`PY_UFUNC_UNIQUE_SYMBOL` to the same name used previously
+ and also define :c:data:`NO_IMPORT_UFUNC`.
+
+ The C-API is actually an array of function pointers. This array is
+ created (and pointed to by a global variable) by import_ufunc. The
+ global variable is either statically defined or allowed to be seen
+ by other files depending on the state of
+ :c:data:`PY_UFUNC_UNIQUE_SYMBOL` and :c:data:`NO_IMPORT_UFUNC`.
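+
+A minimal, hedged sketch of a single-file extension module follows; the
+module name ``example`` is illustrative only.
+
+.. code-block:: c
+
+   #include <Python.h>
+   #include "numpy/ndarrayobject.h"
+   #include "numpy/ufuncobject.h"
+
+   static struct PyModuleDef moduledef = {
+       PyModuleDef_HEAD_INIT, "example", NULL, -1, NULL
+   };
+
+   PyMODINIT_FUNC
+   PyInit_example(void)
+   {
+       PyObject *m = PyModule_Create(&moduledef);
+       if (m == NULL) {
+           return NULL;
+       }
+       import_array();   /* array C-API, used by most ufunc extensions */
+       import_ufunc();   /* loads the table of ufunc C-API function pointers */
+       return m;
+   }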
+
+.. index::
+ pair: ufunc; C-API