author     Pauli Virtanen <pav@iki.fi>    2009-03-21 21:19:53 +0000
committer  Pauli Virtanen <pav@iki.fi>    2009-03-21 21:19:53 +0000
commit     bab64b897064cfdf8cf86fcc62b44e21df1153ee (patch)
tree       6e1cee5b837bbccdfb2c78f12f3f6205ed40953a /doc/source/reference
parent     b2634ff922176acd12ddd3725434d3dfaaf25422 (diff)
download   numpy-bab64b897064cfdf8cf86fcc62b44e21df1153ee.tar.gz
docs: strip trailing whitespace from RST files
Diffstat (limited to 'doc/source/reference')
21 files changed, 509 insertions, 509 deletions
Diff (limited to 'doc/source/reference'): every hunk in this commit removes
trailing whitespace only, so the "-" and "+" lines differ only in trailing
spaces or blank-line padding. The files touched are:

diff --git a/doc/source/reference/arrays.classes.rst b/doc/source/reference/arrays.classes.rst
index f5a262076..865838699 100644
diff --git a/doc/source/reference/arrays.dtypes.rst b/doc/source/reference/arrays.dtypes.rst
index 6b0d2cea3..4cc5a88d8 100644
diff --git a/doc/source/reference/arrays.indexing.rst b/doc/source/reference/arrays.indexing.rst
index 000a06def..a47474922 100644
diff --git a/doc/source/reference/arrays.interface.rst b/doc/source/reference/arrays.interface.rst
index e17bb7dfc..4afa3afc1 100644
diff --git a/doc/source/reference/arrays.ndarray.rst b/doc/source/reference/arrays.ndarray.rst
index 7713bff9c..0def05ced 100644
diff --git a/doc/source/reference/arrays.scalars.rst b/doc/source/reference/arrays.scalars.rst
index 70c1d07c9..33d5ceff6 100644
diff --git a/doc/source/reference/c-api.array.rst b/doc/source/reference/c-api.array.rst
index 56950a8d9..a500cc277 100644
diff --git a/doc/source/reference/c-api.config.rst b/doc/source/reference/c-api.config.rst
index 4bcd2ecd6..0c7f6b147 100644
diff --git a/doc/source/reference/c-api.dtype.rst b/doc/source/reference/c-api.dtype.rst
index 071b4b629..569a4ccb3 100644
diff --git a/doc/source/reference/c-api.rst b/doc/source/reference/c-api.rst
index a6ab83e0c..158e04a16 100644
diff --git a/doc/source/reference/c-api.types-and-structures.rst b/doc/source/reference/c-api.types-and-structures.rst
index 82b529663..b99702e11 100644
diff --git a/doc/source/reference/c-api.ufunc.rst b/doc/source/reference/c-api.ufunc.rst
index 8e4e625f0..bd0ee8e02 100644
diff --git a/doc/source/reference/distutils.rst b/doc/source/reference/distutils.rst
index b01c0bfc5..051a1c031 100644
diff --git a/doc/source/reference/internals.code-explanations.rst b/doc/source/reference/internals.code-explanations.rst
index 48f487205..7c1ab6ccb 100644
+ --- *George Santayana* An authority is a person who can tell you more about something than - you really care to know. - --- *Unknown* + you really care to know. + --- *Unknown* This Chapter attempts to explain the logic behind some of the new pieces of code. The purpose behind these explanations is to enable somebody to be able to understand the ideas behind the implementation somewhat more easily than just staring at the code. Perhaps in this way, the algorithms can be improved on, borrowed from, and/or -optimized. +optimized. Memory model @@ -38,7 +38,7 @@ pointers because strides are in units of bytes. Keep in mind also that strides do not have to be unit-multiples of the element size. Also, remember that if the number of dimensions of the array is 0 (sometimes called a rank-0 array), then the strides and dimensions variables are -NULL. +NULL. Besides the structural information contained in the strides and dimensions members of the :ctype:`PyArrayObject`, the flags contain important @@ -54,7 +54,7 @@ the array. It is also possible to obtain a pointer to an unwriteable memory area. Sometimes, writing to the memory area when the :cdata:`NPY_WRITEABLE` flag is not set will just be rude. Other times it can cause program crashes ( *e.g.* a data-area that is a read-only -memory-mapped file). +memory-mapped file). Data-type encapsulation @@ -71,7 +71,7 @@ list of function pointers pointed to by the 'f' member of the extended simply by providing a :ctype:`PyArray_Descr` structure with suitable function pointers in the 'f' member. For built-in types there are some optimizations that by-pass this mechanism, but the point of the data- -type abstraction is to allow new data-types to be added. +type abstraction is to allow new data-types to be added. One of the built-in data-types, the void data-type allows for arbitrary records containing 1 or more fields as elements of the @@ -82,7 +82,7 @@ implemented for the void type. A common idiom is to cycle through the elements of the dictionary and perform a specific operation based on the data-type object stored at the given offset. These offsets can be arbitrary numbers. Therefore, the possibility of encountering mis- -aligned data must be recognized and taken into account if necessary. +aligned data must be recognized and taken into account if necessary. N-D Iterators @@ -100,7 +100,7 @@ dataptr member of the iterator object structure and call the macro :cfunc:`PyArray_ITER_NEXT` (it) on the iterator object to move to the next element. The "next" element is always in C-contiguous order. The macro works by first special casing the C-contiguous, 1-d, and 2-d cases -which work very simply. +which work very simply. For the general case, the iteration works by keeping track of a list of coordinate counters in the iterator object. At each iteration, the @@ -118,13 +118,13 @@ but a local dimension counter is decremented so that the next-to-last dimension replaces the role that the last dimension played and the previously-described tests are executed again on the next-to-last dimension. In this way, the dataptr is adjusted appropriately for -arbitrary striding. +arbitrary striding. The coordinates member of the :ctype:`PyArrayIterObject` structure maintains the current N-d counter unless the underlying array is C-contiguous in which case the coordinate counting is by-passed. The index member of the :ctype:`PyArrayIterObject` keeps track of the current flat index of the -iterator. It is updated by the :cfunc:`PyArray_ITER_NEXT` macro. +iterator. 
It is updated by the :cfunc:`PyArray_ITER_NEXT` macro. Broadcasting @@ -142,7 +142,7 @@ binary equivalent) to be passed in. The :ctype:`PyArrayMultiIterObject` keeps track of the broadcasted number of dimensions and size in each dimension along with the total size of the broadcasted result. It also keeps track of the number of arrays being broadcast and a pointer to -an iterator for each of the arrays being broadcasted. +an iterator for each of the arrays being broadcasted. The :cfunc:`PyArray_Broadcast` function takes the iterators that have already been defined and uses them to determine the broadcast shape in each @@ -155,14 +155,14 @@ because the iterator strides are also adjusted. Broadcasting only adjusts (or adds) length-1 dimensions. For these dimensions, the strides variable is simply set to 0 so that the data-pointer for the iterator over that array doesn't move as the broadcasting operation -operates over the extended dimension. +operates over the extended dimension. Broadcasting was always implemented in Numeric using 0-valued strides for the extended dimensions. It is done in exactly the same way in NumPy. The big difference is that now the array of strides is kept track of in a :ctype:`PyArrayIterObject`, the iterators involved in a broadcasted result are kept track of in a :ctype:`PyArrayMultiIterObject`, -and the :cfunc:`PyArray_BroadCast` call implements the broad-casting rules. +and the :cfunc:`PyArray_BroadCast` call implements the broad-casting rules. Array Scalars @@ -178,14 +178,14 @@ array. An exception to this rule was made with object arrays. Object arrays are heterogeneous collections of arbitrary Python objects. When you select an item from an object array, you get back the original Python object (and not an object array scalar which does exist but is -rarely used for practical purposes). +rarely used for practical purposes). The array scalars also offer the same methods and attributes as arrays with the intent that the same code can be used to support arbitrary dimensions (including 0-dimensions). The array scalars are read-only (immutable) with the exception of the void scalar which can also be written to so that record-array field setting works more naturally -(a[0]['f1'] = ``value`` ). +(a[0]['f1'] = ``value`` ). Advanced ("Fancy") Indexing @@ -202,7 +202,7 @@ The second is general-purpose that works for arrays of "arbitrary dimension" (up to a fixed maximum). The one-dimensional indexing approaches were implemented in a rather straightforward fashion, and so it is the general-purpose indexing code that will be the focus of -this section. +this section. There is a multi-layer approach to indexing because the indexing code can at times return an array scalar and at other times return an @@ -218,7 +218,7 @@ not created only to be discarded as the array scalar is returned instead. This provides significant speed-up for code that is selecting many scalars out of an array (such as in a loop). However, it is still not faster than simply using a list to store standard Python scalars, -because that is optimized by the Python interpreter itself. +because that is optimized by the Python interpreter itself. After these optimizations, the array_subscript function itself is called. This function first checks for field selection which occurs @@ -230,7 +230,7 @@ using code borrowed from Numeric which parses the indexing object and returns the offset into the data-buffer and the dimensions necessary to create a new view of the array. 
The strides are also changed by multiplying each stride by the step-size requested along the -corresponding dimension. +corresponding dimension. Fancy-indexing check @@ -248,7 +248,7 @@ contains any slice, newaxis, or Ellipsis objects, and no arrays or additional sequences are also contained in the sequence. The purpose of this is to allow the construction of "slicing" sequences which is a common technique for building up code that works in arbitrary numbers -of dimensions. +of dimensions. Fancy-indexing implementation @@ -265,7 +265,7 @@ binding the :ctype:`PyArrayMapIterObject` to the array being indexed, and (3) getting (or setting) the items determined by the indexing object. There is an optimization implemented so that the :ctype:`PyArrayIterObject` (which has it's own less complicated fancy-indexing) is used for -indexing when possible. +indexing when possible. Creating the mapping object @@ -276,7 +276,7 @@ where iterators are created for all of the index array inputs and all Boolean arrays are converted to equivalent integer index arrays (as if nonzero(arr) had been called). Finally, all integer arrays are replaced with the integer 0 in the indexing object and all of the -index-array iterators are "broadcast" to the same shape. +index-array iterators are "broadcast" to the same shape. Binding the mapping object @@ -296,7 +296,7 @@ accomplished by extracting a sub-space view of the array (using the index object resulting from replacing all the integer index arrays with 0) and storing the information about where this sub-space starts in the mapping object. This is used later during mapping-object -iteration to select the correct elements from the underlying array. +iteration to select the correct elements from the underlying array. Getting (or Setting) @@ -312,7 +312,7 @@ next coordinate location indicated by all of the indexing-object iterators while adjusting, if necessary, for the presence of a sub- space. The result of this function is that the dataptr member of the mapping object structure is pointed to the next position in the array -that needs to be copied out or set to some value. +that needs to be copied out or set to some value. When advanced indexing is used to extract an array, an iterator for the new array is constructed and advanced in phase with the mapping @@ -320,7 +320,7 @@ object iterator. When advanced indexing is used to place values in an array, a special "broadcasted" iterator is constructed from the object being placed into the array so that it will only work if the values used for setting have a shape that is "broadcastable" to the shape -implied by the indexing object. +implied by the indexing object. Universal Functions @@ -338,7 +338,7 @@ in C, although there is a mechanism for creating ufuncs from Python functions (:func:`frompyfunc`). The user must supply a 1-d loop that implements the basic function taking the input scalar values and placing the resulting scalars into the appropriate output slots as -explaine n implementation. +explaine n implementation. Setup @@ -352,7 +352,7 @@ for small arrays than the ufunc. In particular, using ufuncs to perform many calculations on 0-d arrays will be slower than other Python-based solutions (the silently-imported scalarmath module exists precisely to give array scalars the look-and-feel of ufunc-based -calculations with significantly reduced overhead). +calculations with significantly reduced overhead). When a ufunc is called, many things must be done. 
The information collected from these setup operations is stored in a loop-object. This @@ -360,7 +360,7 @@ loop object is a C-structure (that could become a Python object but is not initialized as such because it is only used internally). This loop object has the layout needed to be used with PyArray_Broadcast so that the broadcasting can be handled in the same way as it is handled in -other sections of code. +other sections of code. The first thing done is to look-up in the thread-specific global dictionary the current values for the buffer-size, the error mask, and @@ -372,14 +372,14 @@ contiguous and of the correct type so that a single 1-d loop is performed, then the flags may not be checked until all elements of the array have been calcluated. Looking up these values in a thread- specific dictionary takes time which is easily ignored for all but -very small arrays. +very small arrays. After checking, the thread-specific global variables, the inputs are evaluated to determine how the ufunc should proceed and the input and output arrays are constructed if necessary. Any inputs which are not arrays are converted to arrays (using context if necessary). Which of the inputs are scalars (and therefore converted to 0-d arrays) is -noted. +noted. Next, an appropriate 1-d loop is selected from the 1-d loops available to the ufunc based on the input array types. This 1-d loop is selected @@ -397,7 +397,7 @@ implication of this search procedure is that "lesser types" should be placed below "larger types" when the signatures are stored. If no 1-d loop is found, then an error is reported. Otherwise, the argument_list is updated with the stored signature --- in case casting is necessary -and to fix the output types assumed by the 1-d loop. +and to fix the output types assumed by the 1-d loop. If the ufunc has 2 inputs and 1 output and the second input is an Object array then a special-case check is performed so that @@ -406,13 +406,13 @@ the __array_priority\__ attribute, and has an __r{op}\__ special method. In this way, Python is signaled to give the other object a chance to complete the operation instead of using generic object-array calculations. This allows (for example) sparse matrices to override -the multiplication operator 1-d loop. +the multiplication operator 1-d loop. For input arrays that are smaller than the specified buffer size, copies are made of all non-contiguous, mis-aligned, or out-of- byteorder arrays to ensure that for small arrays, a single-loop is used. Then, array iterators are created for all the input arrays and -the resulting collection of iterators is broadcast to a single shape. +the resulting collection of iterators is broadcast to a single shape. The output arguments (if any) are then processed and any missing return arrays are constructed. If any provided output array doesn't @@ -420,7 +420,7 @@ have the correct type (or is mis-aligned) and is smaller than the buffer size, then a new output array is constructed with the special UPDATEIFCOPY flag set so that when it is DECREF'd on completion of the function, it's contents will be copied back into the output array. -Iterators for the output arguments are then processed. +Iterators for the output arguments are then processed. 
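As a rough sketch of the output-argument handling described above (assuming a correctly-typed, contiguous output array so that no UPDATEIFCOPY copy-back is triggered; the array values are arbitrary)::

    >>> import numpy as np
    >>> x = np.arange(4, dtype=float)
    >>> out = np.empty(4)
    >>> np.multiply(x, 2.0, out)   # result is written directly into ``out``
    array([ 0.,  2.,  4.,  6.])
    >>> out
    array([ 0.,  2.,  4.,  6.])

Had ``out`` been of the wrong type or mis-aligned (and small enough), the machinery above would instead compute into a temporary array and copy the result back into ``out`` when that temporary is destroyed.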
Finally, the decision is made about how to execute the looping mechanism to ensure that all elements of the input arrays are combined @@ -429,7 +429,7 @@ execution are one-loop (for contiguous, aligned, and correct data- type), strided-loop (for non-contiguous but still aligned and correct data-type), and a buffered loop (for mis-aligned or incorrect data- type situations). Depending on which execution method is called for, -the loop is then setup and computed. +the loop is then setup and computed. Function call @@ -442,7 +442,7 @@ compilation, then the Python Global Interpreter Lock (GIL) is released prior to calling all of these loops (as long as they don't involve object arrays). It is re-acquired if necessary to handle error conditions. The hardware error flags are checked only after the 1-d -loop is calcluated. +loop is calcluated. One Loop @@ -455,7 +455,7 @@ and output and all arrays have uniform strides (either contiguous, 0-d, or 1-d). In this case, the 1-d computational loop is called once to compute the calculation for the entire array. Note that the hardware error flags are only checked after the entire calculation is -complete. +complete. Strided Loop @@ -468,7 +468,7 @@ approach converts all of the iterators for the input and output arguments to iterate over all but the largest dimension. The inner loop is then handled by the underlying 1-d computational loop. The outer loop is a standard iterator loop on the converted iterators. The -hardware error flags are checked after each 1-d loop is completed. +hardware error flags are checked after each 1-d loop is completed. Buffered Loop @@ -484,7 +484,7 @@ processing is performed on the outputs in bufsize chunks (where bufsize is a user-settable parameter). The underlying 1-d computational loop is called on data that is copied over (if it needs to be). The setup code and the loop code is considerably more -complicated in this case because it has to handle: +complicated in this case because it has to handle: - memory allocation of the temporary buffers @@ -501,7 +501,7 @@ complicated in this case because it has to handle: remainder). Again, the hardware error flags are checked at the end of each 1-d -loop. +loop. Final output manipulation @@ -520,7 +520,7 @@ calling styles of the :obj:`__array_wrap__` function supported. The first takes the ndarray as the first argument and a tuple of "context" as the second argument. The context is (ufunc, arguments, output argument number). This is the first call tried. If a TypeError occurs, then the -function is called with just the ndarray as the first argument. +function is called with just the ndarray as the first argument. Methods @@ -534,7 +534,7 @@ corresponding to no-elements, one-element, strided-loop, and buffered- loop. These are the same basic loop styles as implemented for the general purpose function call except for the no-element and one- element cases which are special-cases occurring when the input array -objects have 0 and 1 elements respectively. +objects have 0 and 1 elements respectively. Setup @@ -564,7 +564,7 @@ to work with a well-behaved output array but the result will be copied back into the true output array when the method computation is complete. Finally, iterators are set up to loop over the correct axis (depending on the value of axis provided to the method) and the setup -routine returns to the actual computation routine. +routine returns to the actual computation routine. 
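The three methods described in the subsections that follow can be summarized from Python with a small illustration (using ``np.add`` on an arbitrary five-element array; the buffering and chunking details discussed above are not visible at this level)::

    >>> import numpy as np
    >>> a = np.arange(1, 6)            # array([1, 2, 3, 4, 5])
    >>> np.add.reduce(a)               # repeated accumulation into a single output
    15
    >>> np.add.accumulate(a)           # same loop, but every partial result is kept
    array([ 1,  3,  6, 10, 15])
    >>> np.add.reduceat(a, [0, 2])     # reduce over a[0:2] and a[2:]
    array([ 3, 12])

Each call ultimately dispatches to the same underlying 1-d loop; only the way the output and second-input pointers are stepped differs, as the next sections explain.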
Reduce @@ -580,7 +580,7 @@ reduce is that the 1-d loop is called with the output and the second input pointing to the same position in memory and both having a step- size of 0. The first input is pointing to the input array with a step- size given by the appropriate stride for the selected axis. In this -way, the operation performed is +way, the operation performed is .. math:: :nowrap: @@ -596,14 +596,14 @@ where :math:`N+1` is the number of elements in the input, :math:`i`, This basic operations is repeated for arrays with greater than 1 dimension so that the reduction takes place for every 1-d sub-array along the selected axis. An iterator with the selected dimension -removed handles this looping. +removed handles this looping. For buffered loops, care must be taken to copy and cast data before the loop function is called because the underlying loop expects aligned data of the correct data-type (including byte-order). The buffered loop must handle this copying and casting prior to calling the loop function on chunks no greater than the user-specified -bufsize. +bufsize. Accumulate @@ -615,7 +615,7 @@ Accumulate The accumulate function is very similar to the reduce function in that the output and the second input both point to the output. The difference is that the second input points to memory one stride behind -the current output pointer. Thus, the operation performed is +the current output pointer. Thus, the operation performed is .. math:: :nowrap: @@ -627,7 +627,7 @@ the current output pointer. Thus, the operation performed is The output has the same shape as the input and each 1-d loop operates over :math:`N` elements when the shape in the selected axis is :math:`N+1`. Again, buffered loops take care to copy and cast the data before -calling the underlying 1-d computational loop. +calling the underlying 1-d computational loop. Reduceat @@ -653,7 +653,7 @@ computational loop is fixed to be the difference between the current index and the next index (when the current index is the last index, then the next index is assumed to be the length of the array along the selected dimension). In this way, the 1-d loop will implement a reduce -over the specified indices. +over the specified indices. Mis-aligned or a loop data-type that does not match the input and/or output data-type is handled using buffered code where-in data is @@ -662,4 +662,4 @@ necessary prior to calling the underlying 1-d function. The temporary buffers are created in (element) sizes no bigger than the user settable buffer-size value. Thus, the loop must be flexible enough to call the underlying 1-d computational loop enough times to complete -the total calculation in chunks no bigger than the buffer-size. +the total calculation in chunks no bigger than the buffer-size. diff --git a/doc/source/reference/maskedarray.baseclass.rst b/doc/source/reference/maskedarray.baseclass.rst index 66418541f..9789765e4 100644 --- a/doc/source/reference/maskedarray.baseclass.rst +++ b/doc/source/reference/maskedarray.baseclass.rst @@ -86,7 +86,7 @@ Attributes and properties of masked arrays .. attribute:: MaskedArray.mask Returns the underlying mask, as an array with the same shape and structure - as the data, but where all fields are atomically booleans. + as the data, but where all fields are atomically booleans. A value of ``True`` indicates an invalid entry. @@ -122,7 +122,7 @@ Attributes and properties of masked arrays object '?' string 'N/A' ======== ======== - + .. 
attribute:: MaskedArray.baseclass @@ -319,7 +319,7 @@ Arithmetic: .. autosummary:: :toctree: generated/ - + MaskedArray.__abs__ MaskedArray.__add__ MaskedArray.__radd__ @@ -356,7 +356,7 @@ Arithmetic, in-place: .. autosummary:: :toctree: generated/ - + MaskedArray.__iadd__ MaskedArray.__isub__ MaskedArray.__imul__ diff --git a/doc/source/reference/maskedarray.generic.rst b/doc/source/reference/maskedarray.generic.rst index 70d94bdad..580c8a3de 100644 --- a/doc/source/reference/maskedarray.generic.rst +++ b/doc/source/reference/maskedarray.generic.rst @@ -47,7 +47,7 @@ The :mod:`numpy.ma` module -------------------------- -The main feature of the :mod:`numpy.ma` module is the :class:`MaskedArray` class, which is a subclass of :class:`numpy.ndarray`. +The main feature of the :mod:`numpy.ma` module is the :class:`MaskedArray` class, which is a subclass of :class:`numpy.ndarray`. The class, its attributes and methods are described in more details in the :ref:`MaskedArray class <maskedarray.baseclass>` section. @@ -137,7 +137,7 @@ Accessing the data The underlying data of a masked array can be accessed through several ways: -* through the :attr:`~MaskedArray.data` attribute. The output is a view of the array as +* through the :attr:`~MaskedArray.data` attribute. The output is a view of the array as a :class:`numpy.ndarray` or one of its subclasses, depending on the type of the underlying data at the masked array creation. @@ -145,7 +145,7 @@ The underlying data of a masked array can be accessed through several ways: * by directly taking a view of the masked array as a :class:`numpy.ndarray` or one of its subclass (which is actually what using the :attr:`~MaskedArray.data` attribute does). -* by using the :func:`getdata` function. +* by using the :func:`getdata` function. None of these methods is completely satisfactory if some entries have been marked as invalid. As a general rule, invalid data should not be relied on. @@ -161,7 +161,7 @@ We must keep in mind that a ``True`` entry in the mask indicates an *invalid* da Another possibility is to use the :func:`getmask` and :func:`getmaskarray` functions. :func:`getmask(x)` outputs the mask of ``x`` if ``x`` is a masked array, and the special value :data:`nomask` otherwise. :func:`getmaskarray(x)` outputs the mask of ``x`` if ``x`` is a masked array. -If ``x`` has no invalid entry or is not a masked array, the function outputs a boolean array of ``False`` with as many elements as ``x``. +If ``x`` has no invalid entry or is not a masked array, the function outputs a boolean array of ``False`` with as many elements as ``x``. @@ -177,8 +177,8 @@ To retrieve only the valid entries, we can use the inverse of the mask as an ind mask = [False False], fill_value = 999999) -Another way to retrieve the valid data is to use the :meth:`compressed` method, -which returns a one-dimensional :class:`~numpy.ndarray` (or one of its subclasses, +Another way to retrieve the valid data is to use the :meth:`compressed` method, +which returns a one-dimensional :class:`~numpy.ndarray` (or one of its subclasses, depending on the value of the :attr:`~MaskedArray.baseclass` attribute):: >>> x.compressed() @@ -222,8 +222,8 @@ The recommended way to mark one or several specific entries of a masked array as fill_value = 999999) -A second possibility is to modify the :attr:`~MaskedArray.mask` directly, -but this usage is discouraged. +A second possibility is to modify the :attr:`~MaskedArray.mask` directly, +but this usage is discouraged. .. 
note:: When creating a new masked array with a simple, non-structured datatype, the mask is initially set to the special value :attr:`nomask`, that corresponds roughly to the boolean ``False``. Trying to set an element of :attr:`nomask` will fail with a :exc:`TypeError` exception, as a boolean does not support item assignment. @@ -265,10 +265,10 @@ To unmask one or several specific entries, we can just assign one or several new .. note:: Unmasking an entry by direct assignment will not work if the masked array - has a *hard* mask, as shown by the :attr:`~MaskedArray.hardmask` attribute. + has a *hard* mask, as shown by the :attr:`~MaskedArray.hardmask` attribute. This feature was introduced to prevent the overwriting of the mask. To force the unmasking of an entry in such circumstance, the mask has first - to be softened with the :meth:`soften_mask` method before the allocation, + to be softened with the :meth:`soften_mask` method before the allocation, and then re-hardened with :meth:`harden_mask`:: >>> x = ma.array([1, 2, 3], mask=[0, 0, 1]) @@ -322,10 +322,10 @@ When accessing a single entry of a masked array with no named fields, the output >>> x[-1] is ma.masked True -If the masked array has named fields, accessing a single entry returns a +If the masked array has named fields, accessing a single entry returns a :class:`numpy.void` object if none of the fields are masked, or a 0d masked array with the same dtype as the initial array if at least one of the fields is masked. - >>> y = ma.masked_array([(1,2), (3, 4)], + >>> y = ma.masked_array([(1,2), (3, 4)], ... mask=[(0, 0), (0, 1)], ... dtype=[('a', int), ('b', int)]) >>> y[0] @@ -364,17 +364,17 @@ Operations on masked arrays --------------------------- Arithmetic and comparison operations are supported by masked arrays. -As much as possible, invalid entries of a masked array are not processed, -meaning that the corresponding :attr:`~MaskedArray.data` entries *should* be +As much as possible, invalid entries of a masked array are not processed, +meaning that the corresponding :attr:`~MaskedArray.data` entries *should* be the same before and after the operation. .. warning:: - We need to stress that this behavior may not be systematic, that invalid + We need to stress that this behavior may not be systematic, that invalid data may actually be affected by the operation in some cases and once again that invalid data should not be relied on. The :mod:`numpy.ma` module comes with a specific implementation of most -ufuncs. +ufuncs. Unary and binary functions that have a validity domain (such as :func:`~numpy.log` or :func:`~numpy.divide`) return the :data:`masked` constant whenever the input is masked or falls outside the validity domain:: >>> ma.log([-1, 0, 1, 2]) @@ -432,14 +432,14 @@ Numerical operations can be easily performed without worrying about missing valu >>> y = ma.array([1., 2., 0., 4., 5., 6.], mask=[0,0,0,0,0,1]) >>> print np.sqrt(x/y) [1.0 -- -- 1.0 -- --] - + Four values of the output are invalid: the first one comes from taking the square root of a negative number, the second from the division by zero, and the last two where the inputs were masked. Ignoring extreme values ----------------------- -Let's consider an array ``d`` of random floats between 0 and 1. +Let's consider an array ``d`` of random floats between 0 and 1. 
We wish to compute the average of the values of ``d`` while ignoring any data outside the range ``[0.1, 0.9]``:: >>> print ma.masked_outside(d, 0.1, 0.9).mean() diff --git a/doc/source/reference/routines.array-creation.rst b/doc/source/reference/routines.array-creation.rst index b5385fb86..25196232a 100644 --- a/doc/source/reference/routines.array-creation.rst +++ b/doc/source/reference/routines.array-creation.rst @@ -12,12 +12,12 @@ Ones and zeros .. autosummary:: :toctree: generated/ - empty - empty_like - eye - identity - ones - ones_like + empty + empty_like + eye + identity + ones + ones_like zeros zeros_like @@ -26,16 +26,16 @@ From existing data .. autosummary:: :toctree: generated/ - array - asarray + array + asarray asanyarray ascontiguousarray asmatrix copy frombuffer - fromfile - fromfunction - fromiter + fromfile + fromfunction + fromiter loadtxt .. _routines.array-creation.rec: diff --git a/doc/source/reference/routines.array-manipulation.rst b/doc/source/reference/routines.array-manipulation.rst index 5dedf01d7..e5163bcfc 100644 --- a/doc/source/reference/routines.array-manipulation.rst +++ b/doc/source/reference/routines.array-manipulation.rst @@ -46,8 +46,8 @@ Changing kind of array .. autosummary:: :toctree: generated/ - asarray - asanyarray + asarray + asanyarray asmatrix asfarray asfortranarray @@ -59,11 +59,11 @@ Joining arrays .. autosummary:: :toctree: generated/ - append - column_stack - concatenate - dstack - hstack + append + column_stack + concatenate + dstack + hstack vstack Splitting arrays @@ -71,10 +71,10 @@ Splitting arrays .. autosummary:: :toctree: generated/ - array_split - dsplit - hsplit - split + array_split + dsplit + hsplit + split vsplit Tiling arrays @@ -82,7 +82,7 @@ Tiling arrays .. autosummary:: :toctree: generated/ - tile + tile repeat Adding and removing elements @@ -90,9 +90,9 @@ Adding and removing elements .. autosummary:: :toctree: generated/ - delete - insert - resize + delete + insert + resize trim_zeros unique @@ -101,8 +101,8 @@ Rearranging elements .. autosummary:: :toctree: generated/ - fliplr - flipud - reshape - roll + fliplr + flipud + reshape + roll rot90 diff --git a/doc/source/reference/routines.ma.rst b/doc/source/reference/routines.ma.rst index e6173407b..736755338 100644 --- a/doc/source/reference/routines.ma.rst +++ b/doc/source/reference/routines.ma.rst @@ -11,7 +11,7 @@ Constants .. autosummary:: :toctree: generated/ - + ma.MaskType @@ -38,7 +38,7 @@ Ones and zeros .. autosummary:: :toctree: generated/ - + ma.empty ma.empty_like ma.masked_all @@ -65,11 +65,11 @@ Inspecting the array ma.nonzero ma.shape ma.size - + ma.MaskedArray.data ma.MaskedArray.mask ma.MaskedArray.recordmask - + ma.MaskedArray.all ma.MaskedArray.any ma.MaskedArray.count @@ -88,7 +88,7 @@ Changing the shape .. autosummary:: :toctree: generated/ - + ma.ravel ma.reshape ma.resize @@ -103,10 +103,10 @@ Modifying axes ~~~~~~~~~~~~~~ .. autosummary:: :toctree: generated/ - + ma.swapaxes ma.transpose - + ma.MaskedArray.swapaxes ma.MaskedArray.transpose @@ -115,7 +115,7 @@ Changing the number of dimensions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: generated/ - + ma.atleast_1d ma.atleast_2d ma.atleast_3d @@ -123,7 +123,7 @@ Changing the number of dimensions ma.squeeze ma.MaskedArray.squeeze - + ma.column_stack ma.concatenate ma.dstack @@ -139,10 +139,10 @@ Joining arrays .. 
autosummary:: :toctree: generated/ - ma.column_stack - ma.concatenate - ma.dstack - ma.hstack + ma.column_stack + ma.concatenate + ma.dstack + ma.hstack ma.vstack @@ -194,7 +194,7 @@ Modifying a mask ma.mask_rows ma.harden_mask ma.soften_mask - + ma.MaskedArray.harden_mask ma.MaskedArray.soften_mask ma.MaskedArray.shrink_mask @@ -238,7 +238,7 @@ Conversion operations ma.compress_rows ma.compressed ma.filled - + ma.MaskedArray.compressed ma.MaskedArray.filled @@ -247,7 +247,7 @@ Conversion operations ~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: generated/ - + ma.MaskedArray.tofile ma.MaskedArray.tolist ma.MaskedArray.torecords @@ -258,7 +258,7 @@ Pickling and unpickling ~~~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: generated/ - + ma.dump ma.dumps ma.load @@ -275,7 +275,7 @@ Filling a masked array ma.maximum_fill_value ma.maximum_fill_value ma.set_fill_value - + ma.MaskedArray.get_fill_value ma.MaskedArray.set_fill_value ma.MaskedArray.fill_value @@ -290,7 +290,7 @@ Arithmetics ~~~~~~~~~~~ .. autosummary:: :toctree: generated/ - + ma.anom ma.anomalies ma.average @@ -306,7 +306,7 @@ Arithmetics ma.std ma.sum ma.var - + ma.MaskedArray.anom ma.MaskedArray.cumprod ma.MaskedArray.cumsum @@ -369,7 +369,7 @@ Polynomial fit ~~~~~~~~~~~~~~ .. autosummary:: :toctree: generated/ - + ma.vander ma.polyfit diff --git a/doc/source/reference/routines.math.rst b/doc/source/reference/routines.math.rst index 2ae1762c6..326391292 100644 --- a/doc/source/reference/routines.math.rst +++ b/doc/source/reference/routines.math.rst @@ -8,14 +8,14 @@ Trigonometric functions .. autosummary:: :toctree: generated/ - sin - cos + sin + cos tan - arcsin - arccos - arctan - hypot - arctan2 + arcsin + arccos + arctan + hypot + arctan2 degrees radians unwrap @@ -25,12 +25,12 @@ Hyperbolic functions .. autosummary:: :toctree: generated/ - sinh - cosh - tanh - arcsinh - arccosh - arctanh + sinh + cosh + tanh + arcsinh + arccosh + arctanh Rounding -------- @@ -40,24 +40,24 @@ Rounding around round_ rint - fix - floor - ceil + fix + floor + ceil Sums, products, differences --------------------------- .. autosummary:: :toctree: generated/ - prod + prod sum nansum - cumprod - cumsum + cumprod + cumsum diff ediff1d gradient - cross + cross trapz Exponents and logarithms @@ -117,7 +117,7 @@ Handling complex numbers angle real imag - conj + conj Miscellaneous @@ -126,7 +126,7 @@ Miscellaneous :toctree: generated/ convolve - clip + clip sqrt square diff --git a/doc/source/reference/routines.statistics.rst b/doc/source/reference/routines.statistics.rst index 89009e210..b41b62839 100644 --- a/doc/source/reference/routines.statistics.rst +++ b/doc/source/reference/routines.statistics.rst @@ -48,4 +48,4 @@ Histograms histogram2d histogramdd bincount - digitize + digitize |
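To make the histogram routines listed in the last hunk concrete, a short illustrative session (the sample values and bin edges are arbitrary)::

    >>> import numpy as np
    >>> x = np.array([0.2, 6.4, 3.0, 1.6])
    >>> bins = np.array([0.0, 1.0, 2.5, 4.0, 10.0])
    >>> np.digitize(x, bins)       # index of the bin each value falls into
    array([1, 4, 3, 2])
    >>> np.histogram(x, bins)[0]   # counts per bin
    array([1, 1, 1, 1])
    >>> np.bincount(np.array([0, 1, 1, 3]))   # occurrences of each non-negative integer
    array([1, 2, 0, 1])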