Diffstat (limited to 'doc/source/reference')
-rw-r--r--  doc/source/reference/arrays.classes.rst                9
-rw-r--r--  doc/source/reference/arrays.indexing.rst              10
-rw-r--r--  doc/source/reference/c-api.array.rst                  51
-rw-r--r--  doc/source/reference/c-api.generalized-ufuncs.rst     66
-rw-r--r--  doc/source/reference/c-api.iterator.rst               95
-rw-r--r--  doc/source/reference/c-api.types-and-structures.rst   12
-rw-r--r--  doc/source/reference/c-api.ufunc.rst                  16
-rw-r--r--  doc/source/reference/routines.array-creation.rst       2
-rw-r--r--  doc/source/reference/routines.array-manipulation.rst   2
-rw-r--r--  doc/source/reference/routines.io.rst                   7
-rw-r--r--  doc/source/reference/routines.ma.rst                   5
-rw-r--r--  doc/source/reference/routines.maskna.rst              11
-rw-r--r--  doc/source/reference/routines.polynomials.classes.rst  6
-rw-r--r--  doc/source/reference/routines.sort.rst                 1
-rw-r--r--  doc/source/reference/ufuncs.rst                       17
15 files changed, 177 insertions, 133 deletions
diff --git a/doc/source/reference/arrays.classes.rst b/doc/source/reference/arrays.classes.rst
index 036185782..e77dfc31e 100644
--- a/doc/source/reference/arrays.classes.rst
+++ b/doc/source/reference/arrays.classes.rst
@@ -41,7 +41,7 @@ Numpy provides several hooks that classes can customize:
.. function:: class.__numpy_ufunc__(self, ufunc, method, i, inputs, **kwargs)
- .. versionadded:: 1.9
+ .. versionadded:: 1.10
Any class (ndarray subclass or not) can define this method to
override behavior of Numpy's ufuncs. This works quite similarly to
@@ -267,13 +267,6 @@ they inherit from the ndarray): :meth:`.flush() <memmap.flush>` which
must be called manually by the user to ensure that any changes to the
array actually get written to disk.
-.. note::
-
- Memory-mapped arrays use the the Python memory-map object which
- (prior to Python 2.5) does not allow files to be larger than a
- certain size depending on the platform. This size is always
- < 2GB even on 64-bit systems.
-
.. autosummary::
:toctree: generated/
diff --git a/doc/source/reference/arrays.indexing.rst b/doc/source/reference/arrays.indexing.rst
index d04f89897..ef0180e0f 100644
--- a/doc/source/reference/arrays.indexing.rst
+++ b/doc/source/reference/arrays.indexing.rst
@@ -31,9 +31,9 @@ integer, or a tuple of slice objects and integers. :const:`Ellipsis`
and :const:`newaxis` objects can be interspersed with these as
well. In order to remain backward compatible with a common usage in
Numeric, basic slicing is also initiated if the selection object is
-any sequence (such as a :class:`list`) containing :class:`slice`
+any non-ndarray sequence (such as a :class:`list`) containing :class:`slice`
objects, the :const:`Ellipsis` object, or the :const:`newaxis` object,
-but no integer arrays or other embedded sequences.
+but not for integer arrays or other embedded sequences.
.. index::
triple: ndarray; special methods; getslice
@@ -46,8 +46,8 @@ scalar <arrays.scalars>` representing the corresponding item. As in
Python, all indices are zero-based: for the *i*-th index :math:`n_i`,
the valid range is :math:`0 \le n_i < d_i` where :math:`d_i` is the
*i*-th element of the shape of the array. Negative indices are
-interpreted as counting from the end of the array (*i.e.*, if *i < 0*,
-it means :math:`n_i + i`).
+interpreted as counting from the end of the array (*i.e.*, if
+:math:`n_i < 0`, it means :math:`n_i + d_i`).
All arrays generated by basic slicing are always :term:`views <view>`
@@ -84,7 +84,7 @@ concepts to remember include:
- Assume *n* is the number of elements in the dimension being
sliced. Then, if *i* is not given it defaults to 0 for *k > 0* and
- *n* for *k < 0* . If *j* is not given it defaults to *n* for *k > 0*
+ *n - 1* for *k < 0* . If *j* is not given it defaults to *n* for *k > 0*
and -1 for *k < 0* . If *k* is not given it defaults to 1. Note that
``::`` is the same as ``:`` and means select all indices along this
axis.
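The slice defaults described above are easy to verify from Python; a minimal illustrative check (not part of the patch itself):

```python
import numpy as np

x = np.arange(10)

# Negative index: x[-2] means x[n - 2]
assert x[-2] == x[8] == 8

# Negative step with i and j omitted: i defaults to n - 1 and j to
# "one before the start", so x[::-1] reverses the array
assert x[::-1].tolist() == [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
assert (x[::-1] == x[9::-1]).all()
```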
diff --git a/doc/source/reference/c-api.array.rst b/doc/source/reference/c-api.array.rst
index 23355bc91..08eba243e 100644
--- a/doc/source/reference/c-api.array.rst
+++ b/doc/source/reference/c-api.array.rst
@@ -108,10 +108,13 @@ sub-types).
.. cfunction:: int PyArray_FLAGS(PyArrayObject* arr)
-.. cfunction:: int PyArray_ITEMSIZE(PyArrayObject* arr)
+.. cfunction:: npy_intp PyArray_ITEMSIZE(PyArrayObject* arr)
Return the itemsize for the elements of this array.
+ Note that, in the old API that was deprecated in version 1.7, this function
+ had the return type ``int``.
+
.. cfunction:: int PyArray_TYPE(PyArrayObject* arr)
Return the (builtin) typenumber for the elements of this array.
@@ -460,7 +463,7 @@ From other objects
.. cvar:: NPY_ARRAY_IN_ARRAY
- :cdata:`NPY_ARRAY_CONTIGUOUS` \| :cdata:`NPY_ARRAY_ALIGNED`
+ :cdata:`NPY_ARRAY_C_CONTIGUOUS` \| :cdata:`NPY_ARRAY_ALIGNED`
.. cvar:: NPY_ARRAY_IN_FARRAY
@@ -1632,11 +1635,11 @@ Conversion
Shape Manipulation
^^^^^^^^^^^^^^^^^^
-.. cfunction:: PyObject* PyArray_Newshape(PyArrayObject* self, PyArray_Dims* newshape)
+.. cfunction:: PyObject* PyArray_Newshape(PyArrayObject* self, PyArray_Dims* newshape, NPY_ORDER order)
Result will be a new array (pointing to the same memory location
- as *self* if possible), but having a shape given by *newshape*
- . If the new shape is not compatible with the strides of *self*,
+ as *self* if possible), but having a shape given by *newshape*.
+ If the new shape is not compatible with the strides of *self*,
then a copy of the array with the new specified shape will be
returned.
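The view-versus-copy behavior described here is visible from Python, where :meth:`ndarray.reshape` goes through the same path; a minimal sketch:

```python
import numpy as np

a = np.arange(6)
v = a.reshape(2, 3)      # compatible strides: a view on the same memory
v[0, 0] = 99
assert a[0] == 99        # writing through the view is visible in a

t = v.T                  # non-contiguous (3, 2) transposed view
c = t.reshape(6)         # strides incompatible with C order: a copy is made
c[0] = -1
assert a[0] == 99        # the original array is unchanged
```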
@@ -1645,6 +1648,7 @@ Shape Manipulation
Equivalent to :meth:`ndarray.reshape` (*self*, *shape*) where *shape* is a
sequence. Converts *shape* to a :ctype:`PyArray_Dims` structure and
calls :cfunc:`PyArray_Newshape` internally.
+   For backward compatibility -- not recommended.
.. cfunction:: PyObject* PyArray_Squeeze(PyArrayObject* self)
@@ -1805,14 +1809,23 @@ Item selection and manipulation
:cfunc:`PyArray_Sort` (...) can also be used to sort the array
directly.
-.. cfunction:: PyObject* PyArray_SearchSorted(PyArrayObject* self, PyObject* values)
+.. cfunction:: PyObject* PyArray_SearchSorted(PyArrayObject* self, PyObject* values, NPY_SEARCHSIDE side, PyObject* perm)
+
+ Equivalent to :meth:`ndarray.searchsorted` (*self*, *values*, *side*,
+ *perm*). Assuming *self* is a 1-d array in ascending order, then the
+ output is an array of indices the same shape as *values* such that, if
+ the elements in *values* were inserted before the indices, the order of
+ *self* would be preserved. No checking is done on whether or not self is
+ in ascending order.
- Equivalent to :meth:`ndarray.searchsorted` (*self*, *values*). Assuming
- *self* is a 1-d array in ascending order representing bin
- boundaries then the output is an array the same shape as *values*
- of bin numbers, giving the bin into which each item in *values*
- would be placed. No checking is done on whether or not self is in
- ascending order.
+    The *side* argument indicates whether the index returned should be that of
+    the first suitable location (if :cdata:`NPY_SEARCHLEFT`) or of the last
+    (if :cdata:`NPY_SEARCHRIGHT`).
+
+    The *perm* argument, if not ``NULL``, must be a 1D array of integer
+    indices the same length as *self* that sorts it into ascending order.
+    This is typically the result of a call to :cfunc:`PyArray_ArgSort` (...).
+    Binary search is used to find the required insertion points.
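From Python the same behavior is exposed through :func:`numpy.searchsorted`, whose ``sorter`` keyword corresponds to the *perm* argument; a small illustration:

```python
import numpy as np

a = np.array([1, 3, 3, 5])
# side='left' returns the first suitable insertion point, 'right' the last
assert np.searchsorted(a, 3, side='left') == 1
assert np.searchsorted(a, 3, side='right') == 3

# An unsorted array plus the permutation that sorts it (cf. argsort)
b = np.array([5, 1, 3, 3])
perm = np.argsort(b)
assert np.searchsorted(b, 4, sorter=perm) == 3
```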
.. cfunction:: int PyArray_Partition(PyArrayObject *self, PyArrayObject * ktharray, int axis, NPY_SELECTKIND which)
@@ -1886,10 +1899,10 @@ Calculation
.. note::
- The out argument specifies where to place the result. If out is
- NULL, then the output array is created, otherwise the output is
- placed in out which must be the correct size and type. A new
- reference to the ouput array is always returned even when out
+ The out argument specifies where to place the result. If out is
+ NULL, then the output array is created, otherwise the output is
+ placed in out which must be the correct size and type. A new
+        reference to the output array is always returned even when out
is not NULL. The caller of the routine has the responsability
to ``DECREF`` out if not NULL or a memory-leak will occur.
@@ -3103,6 +3116,12 @@ Group 1
Useful to regain the GIL in situations where it was released
using the BEGIN form of this macro.
+ .. cfunction:: NPY_BEGIN_THREADS_THRESHOLDED(int loop_size)
+
+ Useful to release the GIL only if *loop_size* exceeds a
+ minimum threshold, currently set to 500. Should be matched
+      with a :cmacro:`NPY_END_THREADS` to regain the GIL.
+
Group 2
"""""""
diff --git a/doc/source/reference/c-api.generalized-ufuncs.rst b/doc/source/reference/c-api.generalized-ufuncs.rst
index 14f33efcb..92dc8aec0 100644
--- a/doc/source/reference/c-api.generalized-ufuncs.rst
+++ b/doc/source/reference/c-api.generalized-ufuncs.rst
@@ -18,30 +18,52 @@ arguments is called the "signature" of a ufunc. For example, the
ufunc numpy.add has signature ``(),()->()`` defining two scalar inputs
and one scalar output.
-Another example is the function ``inner1d(a,b)`` with a signature of
-``(i),(i)->()``. This applies the inner product along the last axis of
+Another example is the function ``inner1d(a, b)`` with a signature of
+``(i),(i)->()``. This applies the inner product along the last axis of
each input, but keeps the remaining indices intact.
-For example, where ``a`` is of shape ``(3,5,N)``
-and ``b`` is of shape ``(5,N)``, this will return an output of shape ``(3,5)``.
+For example, where ``a`` is of shape ``(3, 5, N)`` and ``b`` is of shape
+``(5, N)``, this will return an output of shape ``(3,5)``.
The underlying elementary function is called ``3 * 5`` times. In the
signature, we specify one core dimension ``(i)`` for each input and zero core
dimensions ``()`` for the output, since it takes two 1-d arrays and
returns a scalar. By using the same name ``i``, we specify that the two
-corresponding dimensions should be of the same size (or one of them is
-of size 1 and will be broadcasted).
+corresponding dimensions should be of the same size.
The dimensions beyond the core dimensions are called "loop" dimensions. In
-the above example, this corresponds to ``(3,5)``.
-
-The usual numpy "broadcasting" rules apply, where the signature
-determines how the dimensions of each input/output object are split
-into core and loop dimensions:
-
-#. While an input array has a smaller dimensionality than the corresponding
- number of core dimensions, 1's are pre-pended to its shape.
+the above example, this corresponds to ``(3, 5)``.
+
+The signature determines how the dimensions of each input/output array are
+split into core and loop dimensions:
+
+#. Each dimension in the signature is matched to a dimension of the
+ corresponding passed-in array, starting from the end of the shape tuple.
+ These are the core dimensions, and they must be present in the arrays, or
+ an error will be raised.
+#. Core dimensions assigned to the same label in the signature (e.g. the
+ ``i`` in ``inner1d``'s ``(i),(i)->()``) must have exactly matching sizes,
+ no broadcasting is performed.
#. The core dimensions are removed from all inputs and the remaining
- dimensions are broadcasted; defining the loop dimensions.
-#. The output is given by the loop dimensions plus the output core dimensions.
+ dimensions are broadcast together, defining the loop dimensions.
+#. The shape of each output is determined from the loop dimensions plus the
+   output's core dimensions.
+
+Typically, the size of all core dimensions in an output will be determined by
+the size of a core dimension with the same label in an input array. This is
+not a requirement, and it is possible to define a signature where a label
+comes up for the first time in an output, although some precautions must be
+taken when calling such a function. An example would be the function
+``euclidean_pdist(a)``, with signature ``(n,d)->(p)``, that given an array of
+``n`` ``d``-dimensional vectors, computes all unique pairwise Euclidean
+distances among them. The output dimension ``p`` must therefore be equal to
+``n * (n - 1) / 2``, but it is the caller's responsibility to pass in an
+output array of the right size. If the size of a core dimension of an output
+cannot be determined from a passed-in input or output array, an error will be
+raised.
+
+Note: Prior to Numpy 1.10.0, less strict checks were in place: missing core
+dimensions were created by prepending 1's to the shape as necessary, core
+dimensions with the same label were broadcast together, and undetermined
+dimensions were created with size 1.
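The numbered matching rules above can be sketched at the Python level with :class:`numpy.vectorize` and a gufunc signature (the ``signature`` keyword exists in newer NumPy releases; the ``inner1d`` helper here is illustrative, not NumPy's C implementation):

```python
import numpy as np

# inner1d with signature (i),(i)->(): the core dimension i must match
# exactly, while the remaining (loop) dimensions broadcast together
inner1d = np.vectorize(lambda x, y: (x * y).sum(), signature='(i),(i)->()')

a = np.ones((3, 5, 4))   # loop dims (3, 5), core dim i = 4
b = np.ones((5, 4))      # loop dims (5,),  core dim i = 4
out = inner1d(a, b)
assert out.shape == (3, 5)
assert (out == 4.0).all()

# Mismatched core dimensions raise instead of broadcasting
try:
    inner1d(np.ones(3), np.ones(4))
    raise AssertionError("expected a core-dimension mismatch error")
except ValueError:
    pass
```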
Definitions
@@ -70,7 +92,7 @@ Core Dimension
Dimension Name
A dimension name represents a core dimension in the signature.
Different dimensions may share a name, indicating that they are of
- the same size (or are broadcastable).
+ the same size.
Dimension Index
A dimension index is an integer representing a dimension name. It
@@ -93,8 +115,7 @@ following format:
* Dimension lists for different arguments are separated by ``","``.
Input/output arguments are separated by ``"->"``.
* If one uses the same dimension name in multiple locations, this
- enforces the same size (or broadcastable size) of the corresponding
- dimensions.
+ enforces the same size of the corresponding dimensions.
The formal syntax of signatures is as follows::
@@ -111,10 +132,9 @@ The formal syntax of signatures is as follows::
Notes:
#. All quotes are for clarity.
-#. Core dimensions that share the same name must be broadcastable, as
- the two ``i`` in our example above. Each dimension name typically
- corresponding to one level of looping in the elementary function's
- implementation.
+#. Core dimensions that share the same name must have the exact same size.
+ Each dimension name typically corresponds to one level of looping in the
+ elementary function's implementation.
#. White spaces are ignored.
Here are some examples of signatures:
diff --git a/doc/source/reference/c-api.iterator.rst b/doc/source/reference/c-api.iterator.rst
index 084fdcbce..1d90ce302 100644
--- a/doc/source/reference/c-api.iterator.rst
+++ b/doc/source/reference/c-api.iterator.rst
@@ -18,8 +18,6 @@ preservation of memory layouts, and buffering of data with the wrong
alignment or type, without requiring difficult coding.
This page documents the API for the iterator.
-The C-API naming convention chosen is based on the one in the numpy-refactor
-branch, so will integrate naturally into the refactored code base.
The iterator is named ``NpyIter`` and functions are
named ``NpyIter_*``.
@@ -28,51 +26,6 @@ which may be of interest for those using this C API. In many instances,
testing out ideas by creating the iterator in Python is a good idea
before writing the C iteration code.
-Converting from Previous NumPy Iterators
-----------------------------------------
-
-The existing iterator API includes functions like PyArrayIter_Check,
-PyArray_Iter* and PyArray_ITER_*. The multi-iterator array includes
-PyArray_MultiIter*, PyArray_Broadcast, and PyArray_RemoveSmallest. The
-new iterator design replaces all of this functionality with a single object
-and associated API. One goal of the new API is that all uses of the
-existing iterator should be replaceable with the new iterator without
-significant effort. In 1.6, the major exception to this is the neighborhood
-iterator, which does not have corresponding features in this iterator.
-
-Here is a conversion table for which functions to use with the new iterator:
-
-===================================== =============================================
-*Iterator Functions*
-:cfunc:`PyArray_IterNew` :cfunc:`NpyIter_New`
-:cfunc:`PyArray_IterAllButAxis` :cfunc:`NpyIter_New` + ``axes`` parameter **or**
- Iterator flag :cdata:`NPY_ITER_EXTERNAL_LOOP`
-:cfunc:`PyArray_BroadcastToShape` **NOT SUPPORTED** (Use the support for
- multiple operands instead.)
-:cfunc:`PyArrayIter_Check` Will need to add this in Python exposure
-:cfunc:`PyArray_ITER_RESET` :cfunc:`NpyIter_Reset`
-:cfunc:`PyArray_ITER_NEXT` Function pointer from :cfunc:`NpyIter_GetIterNext`
-:cfunc:`PyArray_ITER_DATA` :cfunc:`NpyIter_GetDataPtrArray`
-:cfunc:`PyArray_ITER_GOTO` :cfunc:`NpyIter_GotoMultiIndex`
-:cfunc:`PyArray_ITER_GOTO1D` :cfunc:`NpyIter_GotoIndex` or
- :cfunc:`NpyIter_GotoIterIndex`
-:cfunc:`PyArray_ITER_NOTDONE` Return value of ``iternext`` function pointer
-*Multi-iterator Functions*
-:cfunc:`PyArray_MultiIterNew` :cfunc:`NpyIter_MultiNew`
-:cfunc:`PyArray_MultiIter_RESET` :cfunc:`NpyIter_Reset`
-:cfunc:`PyArray_MultiIter_NEXT` Function pointer from :cfunc:`NpyIter_GetIterNext`
-:cfunc:`PyArray_MultiIter_DATA` :cfunc:`NpyIter_GetDataPtrArray`
-:cfunc:`PyArray_MultiIter_NEXTi` **NOT SUPPORTED** (always lock-step iteration)
-:cfunc:`PyArray_MultiIter_GOTO` :cfunc:`NpyIter_GotoMultiIndex`
-:cfunc:`PyArray_MultiIter_GOTO1D` :cfunc:`NpyIter_GotoIndex` or
- :cfunc:`NpyIter_GotoIterIndex`
-:cfunc:`PyArray_MultiIter_NOTDONE` Return value of ``iternext`` function pointer
-:cfunc:`PyArray_Broadcast` Handled by :cfunc:`NpyIter_MultiNew`
-:cfunc:`PyArray_RemoveSmallest` Iterator flag :cdata:`NPY_ITER_EXTERNAL_LOOP`
-*Other Functions*
-:cfunc:`PyArray_ConvertToCommonType` Iterator flag :cdata:`NPY_ITER_COMMON_DTYPE`
-===================================== =============================================
-
Simple Iteration Example
------------------------
@@ -91,6 +44,7 @@ number of non-zero elements in an array.
NpyIter* iter;
NpyIter_IterNextFunc *iternext;
char** dataptr;
+ npy_intp nonzero_count;
npy_intp* strideptr,* innersizeptr;
/* Handle zero-sized arrays specially */
@@ -138,7 +92,7 @@ number of non-zero elements in an array.
/* The location of the inner loop size which the iterator may update */
innersizeptr = NpyIter_GetInnerLoopSizePtr(iter);
- /* The iteration loop */
+ nonzero_count = 0;
do {
/* Get the inner loop data/stride/count values */
char* data = *dataptr;
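A rough Python analogue of this C counting loop uses :class:`numpy.nditer` with the external-loop flag, which likewise hands back one-dimensional inner-loop buffers:

```python
import numpy as np

a = np.array([[1, 0, 2], [0, 3, 0]])
nonzero_count = 0
it = np.nditer(a, flags=['external_loop'], order='K')
for chunk in it:                     # chunk plays the role of the inner loop
    nonzero_count += int((chunk != 0).sum())
assert nonzero_count == np.count_nonzero(a) == 3
```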
@@ -1296,3 +1250,48 @@ functions provide that information.
.. index::
pair: iterator; C-API
+
+Converting from Previous NumPy Iterators
+----------------------------------------
+
+The old iterator API includes functions like PyArrayIter_Check,
+PyArray_Iter* and PyArray_ITER_*. The multi-iterator array includes
+PyArray_MultiIter*, PyArray_Broadcast, and PyArray_RemoveSmallest. The
+new iterator design replaces all of this functionality with a single object
+and associated API. One goal of the new API is that all uses of the
+existing iterator should be replaceable with the new iterator without
+significant effort. In 1.6, the major exception to this is the neighborhood
+iterator, which does not have corresponding features in this iterator.
+
+Here is a conversion table for which functions to use with the new iterator:
+
+===================================== =============================================
+*Iterator Functions*
+:cfunc:`PyArray_IterNew` :cfunc:`NpyIter_New`
+:cfunc:`PyArray_IterAllButAxis` :cfunc:`NpyIter_New` + ``axes`` parameter **or**
+ Iterator flag :cdata:`NPY_ITER_EXTERNAL_LOOP`
+:cfunc:`PyArray_BroadcastToShape` **NOT SUPPORTED** (Use the support for
+ multiple operands instead.)
+:cfunc:`PyArrayIter_Check` Will need to add this in Python exposure
+:cfunc:`PyArray_ITER_RESET` :cfunc:`NpyIter_Reset`
+:cfunc:`PyArray_ITER_NEXT` Function pointer from :cfunc:`NpyIter_GetIterNext`
+:cfunc:`PyArray_ITER_DATA` :cfunc:`NpyIter_GetDataPtrArray`
+:cfunc:`PyArray_ITER_GOTO` :cfunc:`NpyIter_GotoMultiIndex`
+:cfunc:`PyArray_ITER_GOTO1D` :cfunc:`NpyIter_GotoIndex` or
+ :cfunc:`NpyIter_GotoIterIndex`
+:cfunc:`PyArray_ITER_NOTDONE` Return value of ``iternext`` function pointer
+*Multi-iterator Functions*
+:cfunc:`PyArray_MultiIterNew` :cfunc:`NpyIter_MultiNew`
+:cfunc:`PyArray_MultiIter_RESET` :cfunc:`NpyIter_Reset`
+:cfunc:`PyArray_MultiIter_NEXT` Function pointer from :cfunc:`NpyIter_GetIterNext`
+:cfunc:`PyArray_MultiIter_DATA` :cfunc:`NpyIter_GetDataPtrArray`
+:cfunc:`PyArray_MultiIter_NEXTi` **NOT SUPPORTED** (always lock-step iteration)
+:cfunc:`PyArray_MultiIter_GOTO` :cfunc:`NpyIter_GotoMultiIndex`
+:cfunc:`PyArray_MultiIter_GOTO1D` :cfunc:`NpyIter_GotoIndex` or
+ :cfunc:`NpyIter_GotoIterIndex`
+:cfunc:`PyArray_MultiIter_NOTDONE` Return value of ``iternext`` function pointer
+:cfunc:`PyArray_Broadcast` Handled by :cfunc:`NpyIter_MultiNew`
+:cfunc:`PyArray_RemoveSmallest` Iterator flag :cdata:`NPY_ITER_EXTERNAL_LOOP`
+*Other Functions*
+:cfunc:`PyArray_ConvertToCommonType` Iterator flag :cdata:`NPY_ITER_COMMON_DTYPE`
+===================================== =============================================
diff --git a/doc/source/reference/c-api.types-and-structures.rst b/doc/source/reference/c-api.types-and-structures.rst
index f1e216a5c..473e25010 100644
--- a/doc/source/reference/c-api.types-and-structures.rst
+++ b/doc/source/reference/c-api.types-and-structures.rst
@@ -244,7 +244,7 @@ PyArrayDescr_Type
Indicates that items of this data-type must be reference
counted (using :cfunc:`Py_INCREF` and :cfunc:`Py_DECREF` ).
- .. cvar:: NPY_ITEM_LISTPICKLE
+ .. cvar:: NPY_LIST_PICKLE
Indicates arrays of this data-type must be converted to a list
before pickling.
@@ -646,9 +646,9 @@ PyUFunc_Type
void **data;
int ntypes;
int check_return;
- char *name;
+ const char *name;
char *types;
- char *doc;
+ const char *doc;
void *ptr;
PyObject *obj;
PyObject *userloops;
@@ -1031,9 +1031,9 @@ PyArray_Chunk
This is equivalent to the buffer object structure in Python up to
the ptr member. On 32-bit platforms (*i.e.* if :cdata:`NPY_SIZEOF_INT`
- == :cdata:`NPY_SIZEOF_INTP` ) or in Python 2.5, the len member also
- matches an equivalent member of the buffer object. It is useful to
- represent a generic single- segment chunk of memory.
+ == :cdata:`NPY_SIZEOF_INTP`), the len member also matches an equivalent
+ member of the buffer object. It is useful to represent a generic
+ single-segment chunk of memory.
.. code-block:: c
diff --git a/doc/source/reference/c-api.ufunc.rst b/doc/source/reference/c-api.ufunc.rst
index 71abffd04..3673958d9 100644
--- a/doc/source/reference/c-api.ufunc.rst
+++ b/doc/source/reference/c-api.ufunc.rst
@@ -114,7 +114,6 @@ Functions
data type, it will be internally upcast to the int_ (or uint)
data type.
-
:param doc:
Allows passing in a documentation string to be stored with the
ufunc. The documentation string should not contain the name
@@ -128,6 +127,21 @@ Functions
structure and it does get set with this value when the ufunc
object is created.
+.. cfunction:: PyObject* PyUFunc_FromFuncAndDataAndSignature(PyUFuncGenericFunction* func,
+ void** data, char* types, int ntypes, int nin, int nout, int identity,
+ char* name, char* doc, int check_return, char *signature)
+
+ This function is very similar to PyUFunc_FromFuncAndData above, but has
+ an extra *signature* argument, to define generalized universal functions.
+   Similarly to how ufuncs are built around an element-by-element operation,
+   gufuncs are built around subarray-by-subarray operations, with the
+   signature defining the subarrays to operate on.
+
+ :param signature:
+ The signature for the new gufunc. Setting it to NULL is equivalent
+ to calling PyUFunc_FromFuncAndData. A copy of the string is made,
+       so the passed-in buffer can be freed.
+
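As a Python-level sketch of the kind of gufunc this function can define, consider the ``euclidean_pdist`` example with signature ``(n,d)->(p)``; the helper below is illustrative (built on :class:`numpy.vectorize`, not the C API):

```python
import numpy as np

def pdist(a):
    # Unique pairwise Euclidean distances of n d-dimensional vectors;
    # the output core dimension has size p = n * (n - 1) / 2
    i, j = np.triu_indices(a.shape[0], k=1)
    d = a[i] - a[j]
    return np.sqrt((d * d).sum(axis=-1))

euclidean_pdist = np.vectorize(pdist, signature='(n,d)->(p)')

pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
out = euclidean_pdist(pts)
assert out.shape == (3,)                       # p = 3 * 2 / 2
assert np.allclose(out, [3.0, 4.0, 5.0])

# Loop dimensions still broadcast: a stack of two point sets
assert euclidean_pdist(np.stack([pts, pts])).shape == (2, 3)
```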
.. cfunction:: int PyUFunc_RegisterLoopForType(PyUFuncObject* ufunc,
int usertype, PyUFuncGenericFunction function, int* arg_types, void* data)
diff --git a/doc/source/reference/routines.array-creation.rst b/doc/source/reference/routines.array-creation.rst
index 23b35243b..c7c6ab815 100644
--- a/doc/source/reference/routines.array-creation.rst
+++ b/doc/source/reference/routines.array-creation.rst
@@ -20,6 +20,8 @@ Ones and zeros
ones_like
zeros
zeros_like
+ full
+ full_like
From existing data
------------------
diff --git a/doc/source/reference/routines.array-manipulation.rst b/doc/source/reference/routines.array-manipulation.rst
index ca97bb479..81af0a315 100644
--- a/doc/source/reference/routines.array-manipulation.rst
+++ b/doc/source/reference/routines.array-manipulation.rst
@@ -54,6 +54,8 @@ Changing kind of array
asmatrix
asfarray
asfortranarray
+ ascontiguousarray
+ asarray_chkfinite
asscalar
require
diff --git a/doc/source/reference/routines.io.rst b/doc/source/reference/routines.io.rst
index 26afbfb4f..b99754912 100644
--- a/doc/source/reference/routines.io.rst
+++ b/doc/source/reference/routines.io.rst
@@ -3,8 +3,8 @@ Input and output
.. currentmodule:: numpy
-NPZ files
----------
+Numpy binary files (NPY, NPZ)
+-----------------------------
.. autosummary::
:toctree: generated/
@@ -13,6 +13,9 @@ NPZ files
savez
savez_compressed
+The format of these binary file types is documented in
+http://docs.scipy.org/doc/numpy/neps/npy-format.html
+
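A short round trip through the two binary formats, for illustration:

```python
import os
import tempfile
import numpy as np

a = np.arange(5)
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'a.npy')
    np.save(path, a)                     # a single array -> .npy file
    assert (np.load(path) == a).all()

    zpath = os.path.join(d, 'ab.npz')
    np.savez(zpath, a=a, twice=2 * a)    # several arrays -> .npz archive
    with np.load(zpath) as z:
        assert (z['twice'] == 2 * a).all()
```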
Text files
----------
.. autosummary::
diff --git a/doc/source/reference/routines.ma.rst b/doc/source/reference/routines.ma.rst
index 5cb38e83f..2408899b3 100644
--- a/doc/source/reference/routines.ma.rst
+++ b/doc/source/reference/routines.ma.rst
@@ -65,6 +65,8 @@ Inspecting the array
ma.nonzero
ma.shape
ma.size
+ ma.is_masked
+ ma.is_mask
ma.MaskedArray.data
ma.MaskedArray.mask
@@ -141,6 +143,7 @@ Joining arrays
ma.column_stack
ma.concatenate
+ ma.append
ma.dstack
ma.hstack
ma.vstack
@@ -181,6 +184,8 @@ Finding masked data
ma.flatnotmasked_edges
ma.notmasked_contiguous
ma.notmasked_edges
+ ma.clump_masked
+ ma.clump_unmasked
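A small illustration of the newly listed helpers:

```python
import numpy as np
import numpy.ma as ma

x = ma.array([1, 2, 3, 4], mask=[0, 1, 1, 0])
assert ma.is_masked(x)                        # at least one value is masked
assert ma.is_mask(x.mask)                     # a valid boolean mask array
assert ma.clump_masked(x) == [slice(1, 3)]    # runs of masked entries
assert ma.clump_unmasked(x) == [slice(0, 1), slice(3, 4)]
```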
Modifying a mask
diff --git a/doc/source/reference/routines.maskna.rst b/doc/source/reference/routines.maskna.rst
deleted file mode 100644
index 2910acbac..000000000
--- a/doc/source/reference/routines.maskna.rst
+++ /dev/null
@@ -1,11 +0,0 @@
-NA-Masked Array Routines
-========================
-
-.. currentmodule:: numpy
-
-NA Values
----------
-.. autosummary::
- :toctree: generated/
-
- isna
diff --git a/doc/source/reference/routines.polynomials.classes.rst b/doc/source/reference/routines.polynomials.classes.rst
index 14729f08b..c40795434 100644
--- a/doc/source/reference/routines.polynomials.classes.rst
+++ b/doc/source/reference/routines.polynomials.classes.rst
@@ -211,7 +211,7 @@ constant are 0, but both can be specified.::
In the first case the lower bound of the integration is set to -1 and the
integration constant is 0. In the second the constant of integration is set
to 1 as well. Differentiation is simpler since the only option is the
-number times the polynomial is differentiated::
+number of times the polynomial is differentiated::
>>> p = P([1, 2, 3])
>>> p.deriv(1)
@@ -270,7 +270,7 @@ polynomials up to degree 5 are plotted below.
>>> import matplotlib.pyplot as plt
>>> from numpy.polynomial import Chebyshev as T
>>> x = np.linspace(-1, 1, 100)
- >>> for i in range(6): ax = plt.plot(x, T.basis(i)(x), lw=2, label="T_%d"%i)
+ >>> for i in range(6): ax = plt.plot(x, T.basis(i)(x), lw=2, label="$T_%d$"%i)
...
>>> plt.legend(loc="upper left")
<matplotlib.legend.Legend object at 0x3b3ee10>
@@ -284,7 +284,7 @@ The same plots over the range -2 <= `x` <= 2 look very different:
>>> import matplotlib.pyplot as plt
>>> from numpy.polynomial import Chebyshev as T
>>> x = np.linspace(-2, 2, 100)
- >>> for i in range(6): ax = plt.plot(x, T.basis(i)(x), lw=2, label="T_%d"%i)
+ >>> for i in range(6): ax = plt.plot(x, T.basis(i)(x), lw=2, label="$T_%d$"%i)
...
>>> plt.legend(loc="lower right")
<matplotlib.legend.Legend object at 0x3b3ee10>
diff --git a/doc/source/reference/routines.sort.rst b/doc/source/reference/routines.sort.rst
index 2b36aec75..c22fa0cd6 100644
--- a/doc/source/reference/routines.sort.rst
+++ b/doc/source/reference/routines.sort.rst
@@ -39,4 +39,3 @@ Counting
:toctree: generated/
count_nonzero
- count_reduce_items
diff --git a/doc/source/reference/ufuncs.rst b/doc/source/reference/ufuncs.rst
index 2ae794f59..3d6112058 100644
--- a/doc/source/reference/ufuncs.rst
+++ b/doc/source/reference/ufuncs.rst
@@ -313,16 +313,15 @@ advanced usage and will not typically be used.
.. versionadded:: 1.6
+ May be 'no', 'equiv', 'safe', 'same_kind', or 'unsafe'.
+ See :func:`can_cast` for explanations of the parameter values.
+
Provides a policy for what kind of casting is permitted. For compatibility
- with previous versions of NumPy, this defaults to 'unsafe'. May be 'no',
- 'equiv', 'safe', 'same_kind', or 'unsafe'. See :func:`can_cast` for
- explanations of the parameter values.
-
- In a future version of numpy, this argument will default to
- 'same_kind'. As part of this transition, starting in version 1.7,
- ufuncs will produce a DeprecationWarning for calls which are
- allowed under the 'unsafe' rules, but not under the 'same_kind'
- rules.
+ with previous versions of NumPy, this defaults to 'unsafe' for numpy < 1.7.
+ In numpy 1.7 a transition to 'same_kind' was begun where ufuncs produce a
+ DeprecationWarning for calls which are allowed under the 'unsafe'
+ rules, but not under the 'same_kind' rules. In numpy 1.10 the default
+ will be 'same_kind'.
*order*
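A minimal illustration of the casting policies, via :func:`can_cast` and a ufunc call with an explicit output array:

```python
import numpy as np

# 'same_kind' permits float64 -> float32 but not float64 -> int32
assert np.can_cast(np.float64, np.float32, casting='same_kind')
assert not np.can_cast(np.float64, np.int32, casting='same_kind')
assert np.can_cast(np.float64, np.int32, casting='unsafe')

# Passing casting= to a ufunc with an out array enforces the policy
out = np.empty(3, dtype=np.float32)
np.add(np.arange(3, dtype=np.float64), 1.0, out=out, casting='same_kind')
assert out.tolist() == [1.0, 2.0, 3.0]
```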