Diffstat (limited to 'doc/source/reference')
21 files changed, 305 insertions, 211 deletions
diff --git a/doc/source/reference/arrays.classes.rst b/doc/source/reference/arrays.classes.rst index 036185782..caaf3a73b 100644 --- a/doc/source/reference/arrays.classes.rst +++ b/doc/source/reference/arrays.classes.rst @@ -41,7 +41,7 @@ Numpy provides several hooks that classes can customize: .. function:: class.__numpy_ufunc__(self, ufunc, method, i, inputs, **kwargs) - .. versionadded:: 1.9 + .. versionadded:: 1.10 Any class (ndarray subclass or not) can define this method to override behavior of Numpy's ufuncs. This works quite similarly to @@ -267,13 +267,6 @@ they inherit from the ndarray): :meth:`.flush() <memmap.flush>` which must be called manually by the user to ensure that any changes to the array actually get written to disk. -.. note:: - - Memory-mapped arrays use the the Python memory-map object which - (prior to Python 2.5) does not allow files to be larger than a - certain size depending on the platform. This size is always - < 2GB even on 64-bit systems. - .. autosummary:: :toctree: generated/ @@ -344,7 +337,7 @@ Record arrays (:mod:`numpy.rec`) :ref:`arrays.dtypes`. Numpy provides the :class:`recarray` class which allows accessing the -fields of a record/structured array as attributes, and a corresponding +fields of a structured array as attributes, and a corresponding scalar data type object :class:`record`. .. currentmodule:: numpy diff --git a/doc/source/reference/arrays.dtypes.rst b/doc/source/reference/arrays.dtypes.rst index 797f1f6f8..a43c23218 100644 --- a/doc/source/reference/arrays.dtypes.rst +++ b/doc/source/reference/arrays.dtypes.rst @@ -14,12 +14,12 @@ following aspects of the data: 1. Type of the data (integer, float, Python object, etc.) 2. Size of the data (how many bytes is in *e.g.* the integer) 3. Byte order of the data (:term:`little-endian` or :term:`big-endian`) -4. If the data type is a :term:`record`, an aggregate of other +4. 
If the data type is :term:`structured`, an aggregate of other data types, (*e.g.*, describing an array item consisting of an integer and a float), - 1. what are the names of the ":term:`fields <field>`" of the record, - by which they can be :ref:`accessed <arrays.indexing.rec>`, + 1. what are the names of the ":term:`fields <field>`" of the structure, + by which they can be :ref:`accessed <arrays.indexing.fields>`, 2. what is the data-type of each :term:`field`, and 3. which part of the memory block each field takes. @@ -40,15 +40,14 @@ needed in Numpy. .. index:: pair: dtype; field - pair: dtype; record -Struct data types are formed by creating a data type whose +Structured data types are formed by creating a data type whose :term:`fields` contain other data types. Each field has a name by -which it can be :ref:`accessed <arrays.indexing.rec>`. The parent data +which it can be :ref:`accessed <arrays.indexing.fields>`. The parent data type should be of sufficient size to contain all its fields; the parent is nearly always based on the :class:`void` type which allows -an arbitrary item size. Struct data types may also contain nested struct -sub-array data types in their fields. +an arbitrary item size. Structured data types may also contain nested +structured sub-array data types in their fields. .. index:: pair: dtype; sub-array @@ -60,7 +59,7 @@ fixed size. If an array is created using a data-type describing a sub-array, the dimensions of the sub-array are appended to the shape of the array when the array is created. Sub-arrays in a field of a -record behave differently, see :ref:`arrays.indexing.rec`. +structured type behave differently, see :ref:`arrays.indexing.fields`. Sub-arrays always have a C-contiguous memory layout. @@ -83,7 +82,7 @@ Sub-arrays always have a C-contiguous memory layout. .. 
admonition:: Example - A record data type containing a 16-character string (in field 'name') + A structured data type containing a 16-character string (in field 'name') and a sub-array of two 64-bit floating-point number (in field 'grades'): >>> dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))]) @@ -246,8 +245,8 @@ Array-protocol type strings (see :ref:`arrays.interface`) String with comma-separated fields - Numarray introduced a short-hand notation for specifying the format - of a record as a comma-separated string of basic formats. + A short-hand notation for specifying the format of a structured data type is + a comma-separated string of basic formats. A basic format in this context is an optional shape specifier followed by an array-protocol type string. Parenthesis are required @@ -315,7 +314,7 @@ Type strings >>> dt = np.dtype((np.int32, (2,2))) # 2 x 2 integer sub-array >>> dt = np.dtype(('S10', 1)) # 10-character string - >>> dt = np.dtype(('i4, (2,3)f8, f4', (2,3))) # 2 x 3 record sub-array + >>> dt = np.dtype(('i4, (2,3)f8, f4', (2,3))) # 2 x 3 structured sub-array .. index:: triple: dtype; construction; from list @@ -432,7 +431,8 @@ Type strings Both arguments must be convertible to data-type objects in this case. The *base_dtype* is the data-type object that the new data-type builds on. This is how you could assign named fields to - any built-in data-type object. + any built-in data-type object, as done in + :ref:`record arrays <arrays.classes.rec>`. .. admonition:: Example @@ -486,7 +486,7 @@ Endianness of this data: dtype.byteorder -Information about sub-data-types in a :term:`record`: +Information about sub-data-types in a :term:`structured` data type: .. 
autosummary:: :toctree: generated/ diff --git a/doc/source/reference/arrays.indexing.rst b/doc/source/reference/arrays.indexing.rst index d04f89897..2eb07c4e0 100644 --- a/doc/source/reference/arrays.indexing.rst +++ b/doc/source/reference/arrays.indexing.rst @@ -11,7 +11,7 @@ Indexing :class:`ndarrays <ndarray>` can be indexed using the standard Python ``x[obj]`` syntax, where *x* is the array and *obj* the selection. -There are three kinds of indexing available: record access, basic +There are three kinds of indexing available: field access, basic slicing, advanced indexing. Which one occurs depends on *obj*. .. note:: @@ -31,9 +31,9 @@ integer, or a tuple of slice objects and integers. :const:`Ellipsis` and :const:`newaxis` objects can be interspersed with these as well. In order to remain backward compatible with a common usage in Numeric, basic slicing is also initiated if the selection object is -any sequence (such as a :class:`list`) containing :class:`slice` +any non-ndarray sequence (such as a :class:`list`) containing :class:`slice` objects, the :const:`Ellipsis` object, or the :const:`newaxis` object, -but no integer arrays or other embedded sequences. +but not for integer arrays or other embedded sequences. .. index:: triple: ndarray; special methods; getslice @@ -46,8 +46,8 @@ scalar <arrays.scalars>` representing the corresponding item. As in Python, all indices are zero-based: for the *i*-th index :math:`n_i`, the valid range is :math:`0 \le n_i < d_i` where :math:`d_i` is the *i*-th element of the shape of the array. Negative indices are -interpreted as counting from the end of the array (*i.e.*, if *i < 0*, -it means :math:`n_i + i`). +interpreted as counting from the end of the array (*i.e.*, if +:math:`n_i < 0`, it means :math:`n_i + d_i`). All arrays generated by basic slicing are always :term:`views <view>` @@ -84,7 +84,7 @@ concepts to remember include: - Assume *n* is the number of elements in the dimension being sliced. 
Then, if *i* is not given it defaults to 0 for *k > 0* and - *n* for *k < 0* . If *j* is not given it defaults to *n* for *k > 0* + *n - 1* for *k < 0* . If *j* is not given it defaults to *n* for *k > 0* and -1 for *k < 0* . If *k* is not given it defaults to 1. Note that ``::`` is the same as ``:`` and means select all indices along this axis. @@ -489,25 +489,25 @@ indexing (in no particular order): view on the data. This *must* be done if the subclasses ``__getitem__`` does not return views. -.. _arrays.indexing.rec: +.. _arrays.indexing.fields: -Record Access +Field Access ------------- .. seealso:: :ref:`arrays.dtypes`, :ref:`arrays.scalars` -If the :class:`ndarray` object is a record array, *i.e.* its data type -is a :term:`record` data type, the :term:`fields <field>` of the array -can be accessed by indexing the array with strings, dictionary-like. +If the :class:`ndarray` object is a structured array the :term:`fields <field>` +of the array can be accessed by indexing the array with strings, +dictionary-like. Indexing ``x['field-name']`` returns a new :term:`view` to the array, which is of the same shape as *x* (except when the field is a sub-array) but of data type ``x.dtype['field-name']`` and contains -only the part of the data in the specified field. Also record array -scalars can be "indexed" this way. +only the part of the data in the specified field. Also +:ref:`record array <arrays.classes.rec>` scalars can be "indexed" this way. -Indexing into a record array can also be done with a list of field names, +Indexing into a structured array can also be done with a list of field names, *e.g.* ``x[['field-name1','field-name2']]``. Currently this returns a new array containing a copy of the values in the fields specified in the list. 
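As an illustrative aside (not part of the patch): the single-field access semantics described in the hunk above can be sketched in a short session. The array and field names (``x``, ``'name'``, ``'age'``) are arbitrary, and only single-field indexing is shown, since the copy-vs-view semantics of list-of-fields indexing were still in flux at the time of this change.

```python
import numpy as np

# A small structured array with two named fields.
x = np.array([(b'alice', 30), (b'bob', 25)],
             dtype=[('name', 'S8'), ('age', np.int32)])

# Indexing with a field name returns a *view* of the same shape as x,
# with data type x.dtype['age'].
ages = x['age']
assert ages.dtype == np.int32
assert ages.shape == x.shape

# Because it is a view, writing through it modifies x itself.
ages[0] = 31
assert x[0]['age'] == 31
```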
As of NumPy 1.7, returning a copy is being deprecated in favor of returning diff --git a/doc/source/reference/arrays.interface.rst b/doc/source/reference/arrays.interface.rst index 16abe5ce1..50595c2d8 100644 --- a/doc/source/reference/arrays.interface.rst +++ b/doc/source/reference/arrays.interface.rst @@ -103,19 +103,19 @@ This approach to the interface consists of the object having an not a requirement. The only requirement is that the number of bytes represented in the *typestr* key is the same as the total number of bytes represented here. The idea is to support - descriptions of C-like structs (records) that make up array + descriptions of C-like structs that make up array elements. The elements of each tuple in the list are 1. A string providing a name associated with this portion of - the record. This could also be a tuple of ``('full name', + the datatype. This could also be a tuple of ``('full name', 'basic_name')`` where basic name would be a valid Python variable name representing the full name of the field. 2. Either a basic-type description string as in *typestr* or - another list (for nested records) + another list (for nested structured types) 3. An optional shape tuple providing how many times this part - of the record should be repeated. No repeats are assumed + of the structure should be repeated. No repeats are assumed if this is not given. Very complicated structures can be described using this generic interface. Notice, however, that each element of the array is still of the same @@ -301,7 +301,8 @@ more information which may be important for various applications:: typestr == '|V16' descr == [('ival','>i4'),('','|V4'),('dval','>f8')] -It should be clear that any record type could be described using this interface. +It should be clear that any structured type could be described using this +interface. 
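As an illustrative aside (not part of the patch): the relationship between ``typestr`` and ``descr`` discussed in the hunk above can be inspected from Python through ``__array_interface__``. This sketch mirrors the document's struct example but uses little-endian formats; the explicit offsets are an assumption chosen to produce 4 bytes of padding between the fields.

```python
import numpy as np

# A 16-byte item: a 4-byte int at offset 0, 4 bytes of padding,
# then an 8-byte float at offset 8.
dt = np.dtype({'names': ['ival', 'dval'],
               'formats': ['<i4', '<f8'],
               'offsets': [0, 8],
               'itemsize': 16})
a = np.zeros(3, dtype=dt)

iface = a.__array_interface__
# The total byte count in 'typestr' matches the bytes described by 'descr'.
assert iface['typestr'] == '|V16'
assert iface['descr'][0] == ('ival', '<i4')   # the named leading portion
print(iface['descr'])  # the padding appears as an unnamed void entry
```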
Differences with Array interface (Version 2) ============================================ diff --git a/doc/source/reference/arrays.ndarray.rst b/doc/source/reference/arrays.ndarray.rst index e9c0a6d87..c8d834d1c 100644 --- a/doc/source/reference/arrays.ndarray.rst +++ b/doc/source/reference/arrays.ndarray.rst @@ -82,7 +82,7 @@ Indexing arrays Arrays can be indexed using an extended Python slicing syntax, ``array[selection]``. Similar syntax is also used for accessing -fields in a :ref:`record array <arrays.dtypes>`. +fields in a :ref:`structured array <arrays.dtypes.field>`. .. seealso:: :ref:`Array Indexing <arrays.indexing>`. diff --git a/doc/source/reference/arrays.scalars.rst b/doc/source/reference/arrays.scalars.rst index f229efb07..652fa62e1 100644 --- a/doc/source/reference/arrays.scalars.rst +++ b/doc/source/reference/arrays.scalars.rst @@ -250,7 +250,7 @@ array scalar, - ``x[()]`` returns a 0-dimensional :class:`ndarray` - ``x['field-name']`` returns the array scalar in the field *field-name*. - (*x* can have fields, for example, when it corresponds to a record data type.) + (*x* can have fields, for example, when it corresponds to a structured data type.) Methods ======= @@ -282,10 +282,10 @@ Defining new types ================== There are two ways to effectively define a new array scalar type -(apart from composing record :ref:`dtypes <arrays.dtypes>` from the built-in -scalar types): One way is to simply subclass the :class:`ndarray` and -overwrite the methods of interest. This will work to a degree, but -internally certain behaviors are fixed by the data type of the array. -To fully customize the data type of an array you need to define a new -data-type, and register it with NumPy. Such new types can only be -defined in C, using the :ref:`Numpy C-API <c-api>`. +(apart from composing structured types :ref:`dtypes <arrays.dtypes>` from +the built-in scalar types): One way is to simply subclass the +:class:`ndarray` and overwrite the methods of interest. 
This will work to +a degree, but internally certain behaviors are fixed by the data type of +the array. To fully customize the data type of an array you need to +define a new data-type, and register it with NumPy. Such new types can only +be defined in C, using the :ref:`Numpy C-API <c-api>`. diff --git a/doc/source/reference/c-api.array.rst b/doc/source/reference/c-api.array.rst index 23355bc91..f5f753292 100644 --- a/doc/source/reference/c-api.array.rst +++ b/doc/source/reference/c-api.array.rst @@ -108,10 +108,13 @@ sub-types). .. cfunction:: int PyArray_FLAGS(PyArrayObject* arr) -.. cfunction:: int PyArray_ITEMSIZE(PyArrayObject* arr) +.. cfunction:: npy_intp PyArray_ITEMSIZE(PyArrayObject* arr) Return the itemsize for the elements of this array. + Note that, in the old API that was deprecated in version 1.7, this function + had the return type ``int``. + .. cfunction:: int PyArray_TYPE(PyArrayObject* arr) Return the (builtin) typenumber for the elements of this array. @@ -460,7 +463,7 @@ From other objects .. cvar:: NPY_ARRAY_IN_ARRAY - :cdata:`NPY_ARRAY_CONTIGUOUS` \| :cdata:`NPY_ARRAY_ALIGNED` + :cdata:`NPY_ARRAY_C_CONTIGUOUS` \| :cdata:`NPY_ARRAY_ALIGNED` .. cvar:: NPY_ARRAY_IN_FARRAY @@ -590,18 +593,16 @@ From other objects .. cfunction:: PyObject* PyArray_FromStructInterface(PyObject* op) Returns an ndarray object from a Python object that exposes the - :obj:`__array_struct__`` method and follows the array interface - protocol. If the object does not contain this method then a + :obj:`__array_struct__` attribute and follows the array interface + protocol. If the object does not contain this attribute then a borrowed reference to :cdata:`Py_NotImplemented` is returned. .. cfunction:: PyObject* PyArray_FromInterface(PyObject* op) Returns an ndarray object from a Python object that exposes the - :obj:`__array_shape__` and :obj:`__array_typestr__` - methods following - the array interface protocol. 
If the object does not contain one - of these method then a borrowed reference to :cdata:`Py_NotImplemented` - is returned. + :obj:`__array_interface__` attribute following the array interface + protocol. If the object does not contain this attribute then a + borrowed reference to :cdata:`Py_NotImplemented` is returned. .. cfunction:: PyObject* PyArray_FromArrayAttr(PyObject* op, PyArray_Descr* dtype, PyObject* context) @@ -803,15 +804,28 @@ General check of Python Type sub-type of :cdata:`PyGenericArr_Type` ), or an instance of (a sub-class of) :cdata:`PyArray_Type` whose dimensionality is 0. +.. cfunction:: PyArray_IsPythonNumber(op) + + Evaluates true if *op* is an instance of a builtin numeric type (int, + float, complex, long, bool) + .. cfunction:: PyArray_IsPythonScalar(op) - Evaluates true if *op* is a builtin Python "scalar" object (int, + Evaluates true if *op* is a builtin Python scalar object (int, float, complex, str, unicode, long, bool). .. cfunction:: PyArray_IsAnyScalar(op) - Evaluates true if *op* is either a Python scalar or an array - scalar (an instance of a sub- type of :cdata:`PyGenericArr_Type` ). + Evaluates true if *op* is either a Python scalar object (see + :cfunc:`PyArray_IsPythonScalar`) or an array scalar (an instance of a sub- + type of :cdata:`PyGenericArr_Type` ). + +.. cfunction:: PyArray_CheckAnyScalar(op) + + Evaluates true if *op* is a Python scalar object (see + :cfunc:`PyArray_IsPythonScalar`), an array scalar (an instance of a + sub-type of :cdata:`PyGenericArr_Type`) or an instance of a sub-type of + :cdata:`PyArray_Type` whose dimensionality is 0. Data-type checking @@ -1237,9 +1251,9 @@ Special functions for NPY_OBJECT A function to INCREF all the objects at the location *ptr* according to the data-type *dtype*. 
If *ptr* is the start of a - record with an object at any offset, then this will (recursively) + structured type with an object at any offset, then this will (recursively) increment the reference count of all object-like items in the - record. + structured type. .. cfunction:: int PyArray_XDECREF(PyArrayObject* op) @@ -1250,7 +1264,7 @@ Special functions for NPY_OBJECT .. cfunction:: void PyArray_Item_XDECREF(char* ptr, PyArray_Descr* dtype) - A function to XDECREF all the object-like items at the loacation + A function to XDECREF all the object-like items at the location *ptr* as recorded in the data-type, *dtype*. This works recursively so that if ``dtype`` itself has fields with data-types that contain object-like items, all the object-like fields will be @@ -1537,7 +1551,7 @@ Conversion itemsize of the new array type must be less than *self* ->descr->elsize or an error is raised. The same shape and strides as the original array are used. Therefore, this function has the - effect of returning a field from a record array. But, it can also + effect of returning a field from a structured array. But, it can also be used to select specific bytes or groups of bytes from any array type. @@ -1632,11 +1646,11 @@ Conversion Shape Manipulation ^^^^^^^^^^^^^^^^^^ -.. cfunction:: PyObject* PyArray_Newshape(PyArrayObject* self, PyArray_Dims* newshape) +.. cfunction:: PyObject* PyArray_Newshape(PyArrayObject* self, PyArray_Dims* newshape, NPY_ORDER order) Result will be a new array (pointing to the same memory location - as *self* if possible), but having a shape given by *newshape* - . If the new shape is not compatible with the strides of *self*, + as *self* if possible), but having a shape given by *newshape*. + If the new shape is not compatible with the strides of *self*, then a copy of the array with the new specified shape will be returned. @@ -1645,6 +1659,7 @@ Shape Manipulation Equivalent to :meth:`ndarray.reshape` (*self*, *shape*) where *shape* is a sequence. 
Converts *shape* to a :ctype:`PyArray_Dims` structure and calls :cfunc:`PyArray_Newshape` internally. + For backward compatibility -- Not recommended .. cfunction:: PyObject* PyArray_Squeeze(PyArrayObject* self) @@ -1782,7 +1797,7 @@ Item selection and manipulation ->descr is a data-type with fields defined, then self->descr->names is used to determine the sort order. A comparison where the first field is equal will use the second - field and so on. To alter the sort order of a record array, create + field and so on. To alter the sort order of a structured array, create a new data-type with a different order of names and construct a view of the array with that new data-type. @@ -1801,18 +1816,27 @@ Item selection and manipulation to understand the order the *sort_keys* must be in (reversed from the order you would use when comparing two elements). - If these arrays are all collected in a record array, then + If these arrays are all collected in a structured array, then :cfunc:`PyArray_Sort` (...) can also be used to sort the array directly. -.. cfunction:: PyObject* PyArray_SearchSorted(PyArrayObject* self, PyObject* values) +.. cfunction:: PyObject* PyArray_SearchSorted(PyArrayObject* self, PyObject* values, NPY_SEARCHSIDE side, PyObject* perm) + + Equivalent to :meth:`ndarray.searchsorted` (*self*, *values*, *side*, + *perm*). Assuming *self* is a 1-d array in ascending order, then the + output is an array of indices the same shape as *values* such that, if + the elements in *values* were inserted before the indices, the order of + *self* would be preserved. No checking is done on whether or not self is + in ascending order. + + The *side* argument indicates whether the index returned should be that of + the first suitable location (if :cdata:`NPY_SEARCHLEFT`) or of the last + (if :cdata:`NPY_SEARCHRIGHT`). - Equivalent to :meth:`ndarray.searchsorted` (*self*, *values*). 
Assuming - *self* is a 1-d array in ascending order representing bin - boundaries then the output is an array the same shape as *values* - of bin numbers, giving the bin into which each item in *values* - would be placed. No checking is done on whether or not self is in - ascending order. + The *sorter* argument, if not ``NULL``, must be a 1D array of integer + indices the same length as *self*, that sorts it into ascending order. + This is typically the result of a call to :cfunc:`PyArray_ArgSort` (...) + Binary search is used to find the required insertion points. .. cfunction:: int PyArray_Partition(PyArrayObject *self, PyArrayObject * ktharray, int axis, NPY_SELECTKIND which) @@ -1825,7 +1849,7 @@ Item selection and manipulation If *self*->descr is a data-type with fields defined, then self->descr->names is used to determine the sort order. A comparison where the first field is equal will use the second field and so on. To alter the - sort order of a record array, create a new data-type with a different + sort order of a structured array, create a new data-type with a different order of names and construct a view of the array with that new data-type. Returns zero on success and -1 on failure. @@ -1886,10 +1910,10 @@ Calculation .. note:: - The out argument specifies where to place the result. If out is - NULL, then the output array is created, otherwise the output is - placed in out which must be the correct size and type. A new - reference to the ouput array is always returned even when out + The out argument specifies where to place the result. If out is + NULL, then the output array is created, otherwise the output is + placed in out which must be the correct size and type. A new + reference to the output array is always returned even when out is not NULL. The caller of the routine has the responsability to ``DECREF`` out if not NULL or a memory-leak will occur. @@ -2506,6 +2530,8 @@ Array Scalars .. 
cfunction:: PyObject* PyArray_Return(PyArrayObject* arr) + This function steals a reference to *arr*. + This function checks to see if *arr* is a 0-dimensional array and, if so, returns the appropriate array scalar. It should be used whenever 0-dimensional arrays could be returned to Python. @@ -3103,6 +3129,12 @@ Group 1 Useful to regain the GIL in situations where it was released using the BEGIN form of this macro. + .. cfunction:: NPY_BEGIN_THREADS_THRESHOLDED(int loop_size) + + Useful to release the GIL only if *loop_size* exceeds a + minimum threshold, currently set to 500. Should be matched + with a :cmacro:`NPY_END_THREADS` to regain the GIL. + Group 2 """"""" diff --git a/doc/source/reference/c-api.generalized-ufuncs.rst b/doc/source/reference/c-api.generalized-ufuncs.rst index 14f33efcb..92dc8aec0 100644 --- a/doc/source/reference/c-api.generalized-ufuncs.rst +++ b/doc/source/reference/c-api.generalized-ufuncs.rst @@ -18,30 +18,52 @@ arguments is called the "signature" of a ufunc. For example, the ufunc numpy.add has signature ``(),()->()`` defining two scalar inputs and one scalar output. -Another example is the function ``inner1d(a,b)`` with a signature of -``(i),(i)->()``. This applies the inner product along the last axis of +Another example is the function ``inner1d(a, b)`` with a signature of +``(i),(i)->()``. This applies the inner product along the last axis of each input, but keeps the remaining indices intact. -For example, where ``a`` is of shape ``(3,5,N)`` -and ``b`` is of shape ``(5,N)``, this will return an output of shape ``(3,5)``. +For example, where ``a`` is of shape ``(3, 5, N)`` and ``b`` is of shape +``(5, N)``, this will return an output of shape ``(3,5)``. The underlying elementary function is called ``3 * 5`` times. In the signature, we specify one core dimension ``(i)`` for each input and zero core dimensions ``()`` for the output, since it takes two 1-d arrays and returns a scalar. 
By using the same name ``i``, we specify that the two -corresponding dimensions should be of the same size (or one of them is -of size 1 and will be broadcasted). +corresponding dimensions should be of the same size. The dimensions beyond the core dimensions are called "loop" dimensions. In -the above example, this corresponds to ``(3,5)``. - -The usual numpy "broadcasting" rules apply, where the signature -determines how the dimensions of each input/output object are split -into core and loop dimensions: - -#. While an input array has a smaller dimensionality than the corresponding - number of core dimensions, 1's are pre-pended to its shape. +the above example, this corresponds to ``(3, 5)``. + +The signature determines how the dimensions of each input/output array are +split into core and loop dimensions: + +#. Each dimension in the signature is matched to a dimension of the + corresponding passed-in array, starting from the end of the shape tuple. + These are the core dimensions, and they must be present in the arrays, or + an error will be raised. +#. Core dimensions assigned to the same label in the signature (e.g. the + ``i`` in ``inner1d``'s ``(i),(i)->()``) must have exactly matching sizes, + no broadcasting is performed. #. The core dimensions are removed from all inputs and the remaining - dimensions are broadcasted; defining the loop dimensions. -#. The output is given by the loop dimensions plus the output core dimensions. + dimensions are broadcast together, defining the loop dimensions. +#. The shape of each output is determined from the loop dimensions plus the + output's core dimensions + +Typically, the size of all core dimensions in an output will be determined by +the size of a core dimension with the same label in an input array. This is +not a requirement, and it is possible to define a signature where a label +comes up for the first time in an output, although some precautions must be +taken when calling such a function. 
An example would be the function +``euclidean_pdist(a)``, with signature ``(n,d)->(p)``, that given an array of +``n`` ``d``-dimensional vectors, computes all unique pairwise Euclidean +distances among them. The output dimension ``p`` must therefore be equal to +``n * (n - 1) / 2``, but it is the caller's responsibility to pass in an +output array of the right size. If the size of a core dimension of an output +cannot be determined from a passed in input or output array, an error will be +raised. + +Note: Prior to Numpy 1.10.0, less strict checks were in place: missing core +dimensions were created by prepending 1's to the shape as necessary, core +dimensions with the same label were broadcast together, and undetermined +dimensions were created with size 1. Definitions @@ -70,7 +92,7 @@ Core Dimension Dimension Name A dimension name represents a core dimension in the signature. Different dimensions may share a name, indicating that they are of - the same size (or are broadcastable). + the same size. Dimension Index A dimension index is an integer representing a dimension name. It @@ -93,8 +115,7 @@ following format: * Dimension lists for different arguments are separated by ``","``. Input/output arguments are separated by ``"->"``. * If one uses the same dimension name in multiple locations, this - enforces the same size (or broadcastable size) of the corresponding - dimensions. + enforces the same size of the corresponding dimensions. The formal syntax of signatures is as follows:: @@ -111,10 +132,9 @@ The formal syntax of signatures is as follows:: Notes: #. All quotes are for clarity. -#. Core dimensions that share the same name must be broadcastable, as - the two ``i`` in our example above. Each dimension name typically - corresponding to one level of looping in the elementary function's - implementation. +#. Core dimensions that share the same name must have the exact same size. 
+ Each dimension name typically corresponds to one level of looping in the + elementary function's implementation. #. White spaces are ignored. Here are some examples of signatures: diff --git a/doc/source/reference/c-api.iterator.rst b/doc/source/reference/c-api.iterator.rst index 084fdcbce..1d90ce302 100644 --- a/doc/source/reference/c-api.iterator.rst +++ b/doc/source/reference/c-api.iterator.rst @@ -18,8 +18,6 @@ preservation of memory layouts, and buffering of data with the wrong alignment or type, without requiring difficult coding. This page documents the API for the iterator. -The C-API naming convention chosen is based on the one in the numpy-refactor -branch, so will integrate naturally into the refactored code base. The iterator is named ``NpyIter`` and functions are named ``NpyIter_*``. @@ -28,51 +26,6 @@ which may be of interest for those using this C API. In many instances, testing out ideas by creating the iterator in Python is a good idea before writing the C iteration code. -Converting from Previous NumPy Iterators ----------------------------------------- - -The existing iterator API includes functions like PyArrayIter_Check, -PyArray_Iter* and PyArray_ITER_*. The multi-iterator array includes -PyArray_MultiIter*, PyArray_Broadcast, and PyArray_RemoveSmallest. The -new iterator design replaces all of this functionality with a single object -and associated API. One goal of the new API is that all uses of the -existing iterator should be replaceable with the new iterator without -significant effort. In 1.6, the major exception to this is the neighborhood -iterator, which does not have corresponding features in this iterator. 
- -Here is a conversion table for which functions to use with the new iterator: - -===================================== ============================================= -*Iterator Functions* -:cfunc:`PyArray_IterNew` :cfunc:`NpyIter_New` -:cfunc:`PyArray_IterAllButAxis` :cfunc:`NpyIter_New` + ``axes`` parameter **or** - Iterator flag :cdata:`NPY_ITER_EXTERNAL_LOOP` -:cfunc:`PyArray_BroadcastToShape` **NOT SUPPORTED** (Use the support for - multiple operands instead.) -:cfunc:`PyArrayIter_Check` Will need to add this in Python exposure -:cfunc:`PyArray_ITER_RESET` :cfunc:`NpyIter_Reset` -:cfunc:`PyArray_ITER_NEXT` Function pointer from :cfunc:`NpyIter_GetIterNext` -:cfunc:`PyArray_ITER_DATA` :cfunc:`NpyIter_GetDataPtrArray` -:cfunc:`PyArray_ITER_GOTO` :cfunc:`NpyIter_GotoMultiIndex` -:cfunc:`PyArray_ITER_GOTO1D` :cfunc:`NpyIter_GotoIndex` or - :cfunc:`NpyIter_GotoIterIndex` -:cfunc:`PyArray_ITER_NOTDONE` Return value of ``iternext`` function pointer -*Multi-iterator Functions* -:cfunc:`PyArray_MultiIterNew` :cfunc:`NpyIter_MultiNew` -:cfunc:`PyArray_MultiIter_RESET` :cfunc:`NpyIter_Reset` -:cfunc:`PyArray_MultiIter_NEXT` Function pointer from :cfunc:`NpyIter_GetIterNext` -:cfunc:`PyArray_MultiIter_DATA` :cfunc:`NpyIter_GetDataPtrArray` -:cfunc:`PyArray_MultiIter_NEXTi` **NOT SUPPORTED** (always lock-step iteration) -:cfunc:`PyArray_MultiIter_GOTO` :cfunc:`NpyIter_GotoMultiIndex` -:cfunc:`PyArray_MultiIter_GOTO1D` :cfunc:`NpyIter_GotoIndex` or - :cfunc:`NpyIter_GotoIterIndex` -:cfunc:`PyArray_MultiIter_NOTDONE` Return value of ``iternext`` function pointer -:cfunc:`PyArray_Broadcast` Handled by :cfunc:`NpyIter_MultiNew` -:cfunc:`PyArray_RemoveSmallest` Iterator flag :cdata:`NPY_ITER_EXTERNAL_LOOP` -*Other Functions* -:cfunc:`PyArray_ConvertToCommonType` Iterator flag :cdata:`NPY_ITER_COMMON_DTYPE` -===================================== ============================================= - Simple Iteration Example ------------------------ @@ -91,6 +44,7 @@ number of non-zero 
elements in an array. NpyIter* iter; NpyIter_IterNextFunc *iternext; char** dataptr; + npy_intp nonzero_count; npy_intp* strideptr,* innersizeptr; /* Handle zero-sized arrays specially */ @@ -138,7 +92,7 @@ number of non-zero elements in an array. /* The location of the inner loop size which the iterator may update */ innersizeptr = NpyIter_GetInnerLoopSizePtr(iter); - /* The iteration loop */ + nonzero_count = 0; do { /* Get the inner loop data/stride/count values */ char* data = *dataptr; @@ -1296,3 +1250,48 @@ functions provide that information. .. index:: pair: iterator; C-API + +Converting from Previous NumPy Iterators +---------------------------------------- + +The old iterator API includes functions like PyArrayIter_Check, +PyArray_Iter* and PyArray_ITER_*. The multi-iterator array includes +PyArray_MultiIter*, PyArray_Broadcast, and PyArray_RemoveSmallest. The +new iterator design replaces all of this functionality with a single object +and associated API. One goal of the new API is that all uses of the +existing iterator should be replaceable with the new iterator without +significant effort. In 1.6, the major exception to this is the neighborhood +iterator, which does not have corresponding features in this iterator. + +Here is a conversion table for which functions to use with the new iterator: + +===================================== ============================================= +*Iterator Functions* +:cfunc:`PyArray_IterNew` :cfunc:`NpyIter_New` +:cfunc:`PyArray_IterAllButAxis` :cfunc:`NpyIter_New` + ``axes`` parameter **or** + Iterator flag :cdata:`NPY_ITER_EXTERNAL_LOOP` +:cfunc:`PyArray_BroadcastToShape` **NOT SUPPORTED** (Use the support for + multiple operands instead.) 
+:cfunc:`PyArrayIter_Check` Will need to add this in Python exposure +:cfunc:`PyArray_ITER_RESET` :cfunc:`NpyIter_Reset` +:cfunc:`PyArray_ITER_NEXT` Function pointer from :cfunc:`NpyIter_GetIterNext` +:cfunc:`PyArray_ITER_DATA` :cfunc:`NpyIter_GetDataPtrArray` +:cfunc:`PyArray_ITER_GOTO` :cfunc:`NpyIter_GotoMultiIndex` +:cfunc:`PyArray_ITER_GOTO1D` :cfunc:`NpyIter_GotoIndex` or + :cfunc:`NpyIter_GotoIterIndex` +:cfunc:`PyArray_ITER_NOTDONE` Return value of ``iternext`` function pointer +*Multi-iterator Functions* +:cfunc:`PyArray_MultiIterNew` :cfunc:`NpyIter_MultiNew` +:cfunc:`PyArray_MultiIter_RESET` :cfunc:`NpyIter_Reset` +:cfunc:`PyArray_MultiIter_NEXT` Function pointer from :cfunc:`NpyIter_GetIterNext` +:cfunc:`PyArray_MultiIter_DATA` :cfunc:`NpyIter_GetDataPtrArray` +:cfunc:`PyArray_MultiIter_NEXTi` **NOT SUPPORTED** (always lock-step iteration) +:cfunc:`PyArray_MultiIter_GOTO` :cfunc:`NpyIter_GotoMultiIndex` +:cfunc:`PyArray_MultiIter_GOTO1D` :cfunc:`NpyIter_GotoIndex` or + :cfunc:`NpyIter_GotoIterIndex` +:cfunc:`PyArray_MultiIter_NOTDONE` Return value of ``iternext`` function pointer +:cfunc:`PyArray_Broadcast` Handled by :cfunc:`NpyIter_MultiNew` +:cfunc:`PyArray_RemoveSmallest` Iterator flag :cdata:`NPY_ITER_EXTERNAL_LOOP` +*Other Functions* +:cfunc:`PyArray_ConvertToCommonType` Iterator flag :cdata:`NPY_ITER_COMMON_DTYPE` +===================================== ============================================= diff --git a/doc/source/reference/c-api.types-and-structures.rst b/doc/source/reference/c-api.types-and-structures.rst index f1e216a5c..43abe24c7 100644 --- a/doc/source/reference/c-api.types-and-structures.rst +++ b/doc/source/reference/c-api.types-and-structures.rst @@ -244,7 +244,7 @@ PyArrayDescr_Type Indicates that items of this data-type must be reference counted (using :cfunc:`Py_INCREF` and :cfunc:`Py_DECREF` ). - .. cvar:: NPY_ITEM_LISTPICKLE + .. 
cvar:: NPY_LIST_PICKLE Indicates arrays of this data-type must be converted to a list before pickling. @@ -407,7 +407,10 @@ PyArrayDescr_Type PyArray_ScalarKindFunc *scalarkind; int **cancastscalarkindto; int *cancastto; - int listpickle + PyArray_FastClipFunc *fastclip; + PyArray_FastPutmaskFunc *fastputmask; + PyArray_FastTakeFunc *fasttake; + PyArray_ArgFunc *argmin; } PyArray_ArrFuncs; The concept of a behaved segment is used in the description of the @@ -417,8 +420,7 @@ PyArrayDescr_Type functions can (and must) deal with mis-behaved arrays. The other functions require behaved memory segments. - .. cmember:: void cast(void *from, void *to, npy_intp n, void *fromarr, - void *toarr) + .. cmember:: void cast(void *from, void *to, npy_intp n, void *fromarr, void *toarr) An array of function pointers to cast from the current type to all of the other builtin types. Each function casts a @@ -444,8 +446,7 @@ PyArrayDescr_Type a zero is returned, otherwise, a negative one is returned (and a Python error set). - .. cmember:: void copyswapn(void *dest, npy_intp dstride, void *src, - npy_intp sstride, npy_intp n, int swap, void *arr) + .. cmember:: void copyswapn(void *dest, npy_intp dstride, void *src, npy_intp sstride, npy_intp n, int swap, void *arr) .. cmember:: void copyswap(void *dest, void *src, int swap, void *arr) @@ -471,8 +472,7 @@ PyArrayDescr_Type ``d2``, and -1 if * ``d1`` < * ``d2``. The array object ``arr`` is used to retrieve itemsize and field information for flexible arrays. - .. cmember:: int argmax(void* data, npy_intp n, npy_intp* max_ind, - void* arr) + .. cmember:: int argmax(void* data, npy_intp n, npy_intp* max_ind, void* arr) A pointer to a function that retrieves the index of the largest of ``n`` elements in ``arr`` beginning at the element @@ -481,8 +481,7 @@ PyArrayDescr_Type always 0. The index of the largest element is returned in ``max_ind``. - .. 
cmember:: void dotfunc(void* ip1, npy_intp is1, void* ip2, npy_intp is2, - void* op, npy_intp n, void* arr) + .. cmember:: void dotfunc(void* ip1, npy_intp is1, void* ip2, npy_intp is2, void* op, npy_intp n, void* arr) A pointer to a function that multiplies two ``n`` -length sequences together, adds them, and places the result in @@ -532,8 +531,7 @@ PyArrayDescr_Type computed by repeatedly adding this computed delta. The data buffer must be well-behaved. - .. cmember:: void fillwithscalar(void* buffer, npy_intp length, - void* value, void* arr) + .. cmember:: void fillwithscalar(void* buffer, npy_intp length, void* value, void* arr) A pointer to a function that fills a contiguous ``buffer`` of the given ``length`` with a single scalar ``value`` whose @@ -548,14 +546,13 @@ PyArrayDescr_Type :cdata:`NPY_MERGESORT` are defined). These sorts are done in-place assuming contiguous and aligned data. - .. cmember:: int argsort(void* start, npy_intp* result, npy_intp length, - void \*arr) + .. cmember:: int argsort(void* start, npy_intp* result, npy_intp length, void *arr) An array of function pointers to sorting algorithms for this data type. The same sorting algorithms as for sort are available. The indices producing the sort are returned in - result (which must be initialized with indices 0 to length-1 - inclusive). + ``result`` (which must be initialized with indices 0 to + ``length-1`` inclusive). .. cmember:: PyObject *castdict @@ -587,9 +584,48 @@ PyArrayDescr_Type can be cast to safely (this usually means without losing precision). - .. cmember:: int listpickle + .. cmember:: void fastclip(void *in, npy_intp n_in, void *min, void *max, void *out) + + A function that reads ``n_in`` items from ``in``, and writes to + ``out`` the read value if it is within the limits pointed to by + ``min`` and ``max``, or the corresponding limit if outside. The + memory segments must be contiguous and behaved, and either + ``min`` or ``max`` may be ``NULL``, but not both. + + .. 
cmember:: void fastputmask(void *in, void *mask, npy_intp n_in, void *values, npy_intp nv)
+
+     A function that takes a pointer ``in`` to an array of ``n_in``
+     items, a pointer ``mask`` to an array of ``n_in`` boolean
+     values, and a pointer ``values`` to an array of ``nv`` items.
+     Items from ``values`` are copied into ``in`` wherever the value
+     in ``mask`` is non-zero, tiling ``values`` as needed if
+     ``nv < n_in``. All arrays must be contiguous and behaved.
+
+  .. cmember:: void fasttake(void *dest, void *src, npy_intp *indarray, npy_intp nindarray, npy_intp n_outer, npy_intp m_middle, npy_intp nelem, NPY_CLIPMODE clipmode)
+
+     A function that takes a pointer ``src`` to a C contiguous,
+     behaved segment, interpreted as a 3-dimensional array of shape
+     ``(n_outer, nindarray, nelem)``, a pointer ``indarray`` to a
+     contiguous, behaved segment of ``m_middle`` integer indices,
+     and a pointer ``dest`` to a C contiguous, behaved segment,
+     interpreted as a 3-dimensional array of shape
+     ``(n_outer, m_middle, nelem)``. The indices in ``indarray`` are
+     used to index ``src`` along the second dimension, and the
+     corresponding chunks of ``nelem`` items are copied into ``dest``.
+     ``clipmode`` (which can take on the values :cdata:`NPY_RAISE`,
+     :cdata:`NPY_WRAP` or :cdata:`NPY_CLIP`) determines how indices
+     smaller than 0 or larger than ``nindarray`` are handled.
+
+  .. cmember:: int argmin(void* data, npy_intp n, npy_intp* min_ind, void* arr)
+
+     A pointer to a function that retrieves the index of the
+     smallest of ``n`` elements in ``arr`` beginning at the element
+     pointed to by ``data``. This function requires that the
+     memory segment be contiguous and behaved. The return value is
+     always 0. The index of the smallest element is returned in
+     ``min_ind``.
-     Unused.
The :cdata:`PyArray_Type` typeobject implements many of the features of Python objects including the tp_as_number, tp_as_sequence, @@ -646,9 +682,9 @@ PyUFunc_Type void **data; int ntypes; int check_return; - char *name; + const char *name; char *types; - char *doc; + const char *doc; void *ptr; PyObject *obj; PyObject *userloops; @@ -1031,9 +1067,9 @@ PyArray_Chunk This is equivalent to the buffer object structure in Python up to the ptr member. On 32-bit platforms (*i.e.* if :cdata:`NPY_SIZEOF_INT` - == :cdata:`NPY_SIZEOF_INTP` ) or in Python 2.5, the len member also - matches an equivalent member of the buffer object. It is useful to - represent a generic single- segment chunk of memory. + == :cdata:`NPY_SIZEOF_INTP`), the len member also matches an equivalent + member of the buffer object. It is useful to represent a generic + single-segment chunk of memory. .. code-block:: c diff --git a/doc/source/reference/c-api.ufunc.rst b/doc/source/reference/c-api.ufunc.rst index 71abffd04..3673958d9 100644 --- a/doc/source/reference/c-api.ufunc.rst +++ b/doc/source/reference/c-api.ufunc.rst @@ -114,7 +114,6 @@ Functions data type, it will be internally upcast to the int_ (or uint) data type. - :param doc: Allows passing in a documentation string to be stored with the ufunc. The documentation string should not contain the name @@ -128,6 +127,21 @@ Functions structure and it does get set with this value when the ufunc object is created. +.. cfunction:: PyObject* PyUFunc_FromFuncAndDataAndSignature(PyUFuncGenericFunction* func, + void** data, char* types, int ntypes, int nin, int nout, int identity, + char* name, char* doc, int check_return, char *signature) + + This function is very similar to PyUFunc_FromFuncAndData above, but has + an extra *signature* argument, to define generalized universal functions. 
+   Similarly to how ufuncs are built around an element-by-element operation,
+   gufuncs are built around subarray-by-subarray operations, the signature
+   defining the subarrays to operate on.
+
+   :param signature:
+       The signature for the new gufunc. Setting it to NULL is equivalent
+       to calling PyUFunc_FromFuncAndData. A copy of the string is made,
+       so the passed-in buffer can be freed.
+
 .. cfunction:: int PyUFunc_RegisterLoopForType(PyUFuncObject* ufunc, int usertype, PyUFuncGenericFunction function, int* arg_types, void* data)

diff --git a/doc/source/reference/internals.code-explanations.rst b/doc/source/reference/internals.code-explanations.rst
index 580661cb3..f01300e25 100644
--- a/doc/source/reference/internals.code-explanations.rst
+++ b/doc/source/reference/internals.code-explanations.rst
@@ -74,9 +74,9 @@ optimizations that by-pass this mechanism, but the point of the data-
 type abstraction is to allow new data-types to be added. One of the
 built-in data-types, the void data-type allows for
-arbitrary records containing 1 or more fields as elements of the
+arbitrary structured types containing 1 or more fields as elements of the
 array. A field is simply another data-type object along with an offset
-into the current record. In order to support arbitrarily nested
+into the current structured type. In order to support arbitrarily nested
 fields, several recursive implementations of data-type access are
 implemented for the void type. A common idiom is to cycle through the
 elements of the dictionary and perform a specific operation based on
@@ -184,7 +184,7 @@ The array scalars also offer the same methods and attributes as arrays
 with the intent that the same code can be used to support arbitrary
 dimensions (including 0-dimensions).
The array scalars are read-only (immutable) with the exception of the void scalar which can also be -written to so that record-array field setting works more naturally +written to so that structured array field setting works more naturally (a[0]['f1'] = ``value`` ). diff --git a/doc/source/reference/routines.array-creation.rst b/doc/source/reference/routines.array-creation.rst index 23b35243b..c7c6ab815 100644 --- a/doc/source/reference/routines.array-creation.rst +++ b/doc/source/reference/routines.array-creation.rst @@ -20,6 +20,8 @@ Ones and zeros ones_like zeros zeros_like + full + full_like From existing data ------------------ diff --git a/doc/source/reference/routines.array-manipulation.rst b/doc/source/reference/routines.array-manipulation.rst index ca97bb479..81af0a315 100644 --- a/doc/source/reference/routines.array-manipulation.rst +++ b/doc/source/reference/routines.array-manipulation.rst @@ -54,6 +54,8 @@ Changing kind of array asmatrix asfarray asfortranarray + ascontiguousarray + asarray_chkfinite asscalar require diff --git a/doc/source/reference/routines.io.rst b/doc/source/reference/routines.io.rst index 26afbfb4f..b99754912 100644 --- a/doc/source/reference/routines.io.rst +++ b/doc/source/reference/routines.io.rst @@ -3,8 +3,8 @@ Input and output .. currentmodule:: numpy -NPZ files ---------- +Numpy binary files (NPY, NPZ) +----------------------------- .. autosummary:: :toctree: generated/ @@ -13,6 +13,9 @@ NPZ files savez savez_compressed +The format of these binary file types is documented in +http://docs.scipy.org/doc/numpy/neps/npy-format.html + Text files ---------- .. 
autosummary:: diff --git a/doc/source/reference/routines.ma.rst b/doc/source/reference/routines.ma.rst index 5cb38e83f..2408899b3 100644 --- a/doc/source/reference/routines.ma.rst +++ b/doc/source/reference/routines.ma.rst @@ -65,6 +65,8 @@ Inspecting the array ma.nonzero ma.shape ma.size + ma.is_masked + ma.is_mask ma.MaskedArray.data ma.MaskedArray.mask @@ -141,6 +143,7 @@ Joining arrays ma.column_stack ma.concatenate + ma.append ma.dstack ma.hstack ma.vstack @@ -181,6 +184,8 @@ Finding masked data ma.flatnotmasked_edges ma.notmasked_contiguous ma.notmasked_edges + ma.clump_masked + ma.clump_unmasked Modifying a mask diff --git a/doc/source/reference/routines.maskna.rst b/doc/source/reference/routines.maskna.rst deleted file mode 100644 index 2910acbac..000000000 --- a/doc/source/reference/routines.maskna.rst +++ /dev/null @@ -1,11 +0,0 @@ -NA-Masked Array Routines -======================== - -.. currentmodule:: numpy - -NA Values ---------- -.. autosummary:: - :toctree: generated/ - - isna diff --git a/doc/source/reference/routines.polynomials.classes.rst b/doc/source/reference/routines.polynomials.classes.rst index 14729f08b..c40795434 100644 --- a/doc/source/reference/routines.polynomials.classes.rst +++ b/doc/source/reference/routines.polynomials.classes.rst @@ -211,7 +211,7 @@ constant are 0, but both can be specified.:: In the first case the lower bound of the integration is set to -1 and the integration constant is 0. In the second the constant of integration is set to 1 as well. Differentiation is simpler since the only option is the -number times the polynomial is differentiated:: +number of times the polynomial is differentiated:: >>> p = P([1, 2, 3]) >>> p.deriv(1) @@ -270,7 +270,7 @@ polynomials up to degree 5 are plotted below. 
>>> import matplotlib.pyplot as plt >>> from numpy.polynomial import Chebyshev as T >>> x = np.linspace(-1, 1, 100) - >>> for i in range(6): ax = plt.plot(x, T.basis(i)(x), lw=2, label="T_%d"%i) + >>> for i in range(6): ax = plt.plot(x, T.basis(i)(x), lw=2, label="$T_%d$"%i) ... >>> plt.legend(loc="upper left") <matplotlib.legend.Legend object at 0x3b3ee10> @@ -284,7 +284,7 @@ The same plots over the range -2 <= `x` <= 2 look very different: >>> import matplotlib.pyplot as plt >>> from numpy.polynomial import Chebyshev as T >>> x = np.linspace(-2, 2, 100) - >>> for i in range(6): ax = plt.plot(x, T.basis(i)(x), lw=2, label="T_%d"%i) + >>> for i in range(6): ax = plt.plot(x, T.basis(i)(x), lw=2, label="$T_%d$"%i) ... >>> plt.legend(loc="lower right") <matplotlib.legend.Legend object at 0x3b3ee10> diff --git a/doc/source/reference/routines.sort.rst b/doc/source/reference/routines.sort.rst index 2b36aec75..c22fa0cd6 100644 --- a/doc/source/reference/routines.sort.rst +++ b/doc/source/reference/routines.sort.rst @@ -39,4 +39,3 @@ Counting :toctree: generated/ count_nonzero - count_reduce_items diff --git a/doc/source/reference/swig.testing.rst b/doc/source/reference/swig.testing.rst index decc681c5..c0daaec66 100644 --- a/doc/source/reference/swig.testing.rst +++ b/doc/source/reference/swig.testing.rst @@ -10,8 +10,8 @@ data types are supported, each with 74 different argument signatures, for a total of 888 typemaps supported "out of the box". Each of these typemaps, in turn, might require several unit tests in order to verify expected behavior for both proper and improper inputs. Currently, -this results in 1,427 individual unit tests that are performed when -``make test`` is run in the ``numpy/docs/swig`` subdirectory. +this results in more than 1,000 individual unit tests executed when +``make test`` is run in the ``numpy/tools/swig`` subdirectory. 
To facilitate this many similar unit tests, some high-level
programming techniques are employed, including C and `SWIG`_ macros,
diff --git a/doc/source/reference/ufuncs.rst b/doc/source/reference/ufuncs.rst
index 2ae794f59..3d6112058 100644
--- a/doc/source/reference/ufuncs.rst
+++ b/doc/source/reference/ufuncs.rst
@@ -313,16 +313,15 @@ advanced usage and will not typically be used.

     .. versionadded:: 1.6

+    May be 'no', 'equiv', 'safe', 'same_kind', or 'unsafe'.
+    See :func:`can_cast` for explanations of the parameter values.
+
     Provides a policy for what kind of casting is permitted. For compatibility
-    with previous versions of NumPy, this defaults to 'unsafe'. May be 'no',
-    'equiv', 'safe', 'same_kind', or 'unsafe'. See :func:`can_cast` for
-    explanations of the parameter values.
-
-    In a future version of numpy, this argument will default to
-    'same_kind'. As part of this transition, starting in version 1.7,
-    ufuncs will produce a DeprecationWarning for calls which are
-    allowed under the 'unsafe' rules, but not under the 'same_kind'
-    rules.
+    with previous versions of NumPy, this defaults to 'unsafe' for numpy < 1.7.
+    In numpy 1.7 a transition to 'same_kind' was begun where ufuncs produce a
+    DeprecationWarning for calls which are allowed under the 'unsafe'
+    rules, but not under the 'same_kind' rules. In numpy 1.10 the default
+    will be 'same_kind'.

 *order*
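
The casting-policy change documented in the ufuncs.rst hunk above can be observed from Python. The following sketch (illustrative only, not part of the patch) uses :func:`can_cast` to show which conversions the 'same_kind' and 'unsafe' rules permit, and passes ``casting=`` explicitly to a ufunc call so the code behaves the same under either default:

```python
import numpy as np

# 'same_kind' forbids the float64 -> int32 downcast that 'unsafe' allows.
assert not np.can_cast(np.float64, np.int32, casting='same_kind')
assert np.can_cast(np.float64, np.int32, casting='unsafe')
# Casting within a kind (float64 -> float32) remains allowed.
assert np.can_cast(np.float64, np.float32, casting='same_kind')

# Passing casting= explicitly keeps a ufunc call valid regardless of
# which default ('unsafe' or 'same_kind') the running numpy uses.
a = np.arange(3, dtype=np.float64)
out = np.empty(3, dtype=np.int32)
np.add(a, a, out=out, casting='unsafe')
print(out.tolist())  # [0, 2, 4]
```

Code that relies on an implicit downcast into an integer ``out`` array is exactly the kind of call that raises a DeprecationWarning during the transition; spelling out ``casting='unsafe'`` documents the intent and silences the warning.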