83 files changed, 13249 insertions, 5903 deletions
diff --git a/LICENSE.txt b/LICENSE.txt index 6f9aec2e9..4371a777b 100644 --- a/LICENSE.txt +++ b/LICENSE.txt @@ -1,4 +1,4 @@ -Copyright (c) 2005, NumPy Developers +Copyright (c) 2005-2009, NumPy Developers. All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/MANIFEST.in b/MANIFEST.in index 581a1d939..176446485 100644 --- a/MANIFEST.in +++ b/MANIFEST.in @@ -11,3 +11,9 @@ include setupegg.py recursive-include numpy/core/code_generators *.py include numpy/core/include/numpy/numpyconfig.h.in recursive-include numpy SConstruct +# Add documentation: we don't use add_data_dir since we do not want to include +# this at installation, only for sdist-generated tarballs +include doc/Makefile doc/postprocess.py +recursive-include doc/release * +recursive-include doc/source * +recursive-include doc/sphinxext * diff --git a/THANKS.txt b/THANKS.txt index 699117e21..5a29c0e3b 100644 --- a/THANKS.txt +++ b/THANKS.txt @@ -45,9 +45,18 @@ A.M. Archibald for no-copy-reshape code, strided array tricks, Pierre Gerard-Marchant for rewriting masked array functionality. Roberto de Almeida for the buffered array iterator. Alan McIntyre for updating the NumPy test framework to use nose, improve - the test coverage, and enhancing the test system documentation + the test coverage, and enhancing the test system documentation. +Joe Harrington for administering the 2008 Documentation Sprint. NumPy is based on the Numeric (Jim Hugunin, Paul Dubois, Konrad Hinsen, and David Ascher) and NumArray (Perry Greenfield, J Todd Miller, Rick White and Paul Barrett) projects. We thank them for paving the way ahead. + +Institutions +------------ + +Enthought for providing resources and finances for development of NumPy. +UC Berkeley for providing travel money and hosting numerous sprints. +The University of Central Florida for funding the 2008 Documentation Marathon. +The University of Stellenbosch for hosting the buildbot. 
diff --git a/doc/source/reference/arrays.classes.rst b/doc/source/reference/arrays.classes.rst index 65fc10af5..f5a262076 100644 --- a/doc/source/reference/arrays.classes.rst +++ b/doc/source/reference/arrays.classes.rst @@ -261,22 +261,7 @@ scalar data type object :class:`record`. Masked arrays (:mod:`numpy.ma`) =============================== -.. seealso:: :ref:`routines.ma` - -.. XXX: masked array documentation should be improved - -.. currentmodule:: numpy - -.. index:: - single: masked arrays - -.. autosummary:: - :toctree: generated/ - - ma.masked_array - -.. automodule:: numpy.ma - +.. seealso:: :ref:`maskedarray` Standard container class ======================== diff --git a/doc/source/reference/arrays.ndarray.rst b/doc/source/reference/arrays.ndarray.rst index f07199603..7713bff9c 100644 --- a/doc/source/reference/arrays.ndarray.rst +++ b/doc/source/reference/arrays.ndarray.rst @@ -135,6 +135,8 @@ in a different scheme. is automatically made. +.. _arrays.ndarray.attributes: + Array attributes ================ @@ -217,6 +219,9 @@ Array interface .. note:: XXX: update and check these docstrings. + +.. _array.ndarray.methods: + Array methods ============= diff --git a/doc/source/reference/arrays.rst b/doc/source/reference/arrays.rst index b6d28fe2c..4204f13a4 100644 --- a/doc/source/reference/arrays.rst +++ b/doc/source/reference/arrays.rst @@ -43,4 +43,5 @@ of also more complicated arrangements of data. arrays.dtypes arrays.indexing arrays.classes + maskedarray arrays.interface diff --git a/doc/source/reference/maskedarray.baseclass.rst b/doc/source/reference/maskedarray.baseclass.rst new file mode 100644 index 000000000..bef16b100 --- /dev/null +++ b/doc/source/reference/maskedarray.baseclass.rst @@ -0,0 +1,395 @@ + +.. currentmodule:: numpy.ma + + +.. 
_numpy.ma.constants: + +Constants of the :mod:`numpy.ma` module +======================================= + +In addition to the :class:`MaskedArray` class, the :mod:`numpy.ma` module +defines several constants. + +.. data:: masked + + The :attr:`masked` constant is a special case of :class:`MaskedArray`, + with a float datatype and a null shape. It is used to test whether a + specific entry of a masked array is masked, or to mask one or several + entries of a masked array:: + + >>> x = ma.array([1, 2, 3], mask=[0, 1, 0]) + >>> x[1] is ma.masked + True + >>> x[-1] = ma.masked + >>> x + masked_array(data = [1 -- --], + mask = [False True True], + fill_value = 999999) + + +.. data:: nomask + + Value indicating that a masked array has no invalid entry. + :attr:`nomask` is used internally to speed up computations when the mask + is not needed. + + +.. data:: masked_print_options + + String used in lieu of missing data when a masked array is printed. + By default, this string is ``'--'``. + + + + +.. _maskedarray.baseclass: + +The :class:`MaskedArray` class +============================== + +An instance of :class:`MaskedArray` can be thought of as the combination of several elements: + +* The :attr:`data`, as a regular :class:`numpy.ndarray` of any shape or datatype. +* A boolean :attr:`mask` with the same shape as the data, where a ``True`` value indicates that the corresponding element of the data is invalid. + The special value :attr:`nomask` is also acceptable for arrays without named fields, and indicates that no data is invalid. +* A :attr:`fill_value`, a value that may be used to replace the invalid entries in order to return a standard :class:`numpy.ndarray`. + + + +Attributes and properties of masked arrays +------------------------------------------ + +.. seealso:: :ref:`Array Attributes <arrays.ndarray.attributes>` + + +.. attribute:: MaskedArray.data + + Returns the underlying data, as a view of the masked array.
+ If the underlying data is a subclass of :class:`numpy.ndarray`, it is + returned as such. + + >>> x = ma.array(np.matrix([[1, 2], [3, 4]]), mask=[[0, 1], [1, 0]]) + >>> x.data + matrix([[1, 2], + [3, 4]]) + + The type of the data can be accessed through the :attr:`baseclass` + attribute. + +.. attribute:: MaskedArray.mask + + Returns the underlying mask, as an array with the same shape and structure + as the data, but where all fields are booleans. + A value of ``True`` indicates an invalid entry. + + +.. attribute:: MaskedArray.recordmask + + Returns the mask of the array if it has no named fields. For structured + arrays, returns an ndarray of booleans where entries are ``True`` if **all** + the fields are masked, ``False`` otherwise:: + + >>> x = ma.array([(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)], + ... mask=[(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)], + ... dtype=[('a', int), ('b', int)]) + >>> x.recordmask + array([False, False, True, False, False], dtype=bool) + + +.. attribute:: MaskedArray.fill_value + + Returns the value used to fill the invalid entries of a masked array. + The value is either a scalar (if the masked array has no named fields), + or a 0d-ndarray with the same datatype as the masked array if it has + named fields. + + The default filling value depends on the datatype of the array: + + ======== ======== + datatype default + ======== ======== + bool True + int 999999 + float 1.e20 + complex 1.e20+0j + object '?' + string 'N/A' + ======== ======== + + + +.. attribute:: MaskedArray.baseclass + + Returns the class of the underlying data:: + + >>> x = ma.array(np.matrix([[1, 2], [3, 4]]), mask=[[0, 0], [1, 0]]) + >>> x.baseclass + <class 'numpy.core.defmatrix.matrix'> + + +.. attribute:: MaskedArray.sharedmask + + Returns whether the mask of the array is shared between several arrays. + If this is the case, any modification to the mask of one array will be + propagated to the other masked arrays. + + +..
attribute:: MaskedArray.hardmask + + Returns whether the mask is hard (``True``) or soft (``False``). + When the mask is hard, masked entries cannot be unmasked. + + +As :class:`MaskedArray` is a subclass of :class:`~numpy.ndarray`, a masked array also inherits all the attributes and properties of a :class:`~numpy.ndarray` instance. + +.. autosummary:: + :toctree: generated/ + + MaskedArray.flags + MaskedArray.shape + MaskedArray.strides + MaskedArray.ndim + MaskedArray.size + MaskedArray.itemsize + MaskedArray.nbytes + MaskedArray.base + MaskedArray.dtype + MaskedArray.T + MaskedArray.real + MaskedArray.imag + MaskedArray.flat + MaskedArray.ctypes + MaskedArray.__array_priority__ + + + +:class:`MaskedArray` methods +============================ + +.. seealso:: :ref:`Array methods <array.ndarray.methods>` + + +Conversion +---------- + +.. autosummary:: + :toctree: generated/ + + MaskedArray.view + MaskedArray.astype + MaskedArray.filled + MaskedArray.tofile + MaskedArray.toflex + MaskedArray.tolist + MaskedArray.torecords + MaskedArray.tostring + + +Shape manipulation +------------------ + +For reshape, resize, and transpose, the single tuple argument may be +replaced with ``n`` integers which will be interpreted as an n-tuple. + +.. autosummary:: + :toctree: generated/ + + MaskedArray.flatten + MaskedArray.ravel + MaskedArray.reshape + MaskedArray.resize + MaskedArray.squeeze + MaskedArray.swapaxes + MaskedArray.transpose + + +Item selection and manipulation +------------------------------- + +For array methods that take an *axis* keyword, it defaults to +:const:`None`. If axis is *None*, then the array is treated as a 1-D +array. Any other value for *axis* represents the dimension along which +the operation should proceed. + +.. 
autosummary:: + :toctree: generated/ + + MaskedArray.argsort + MaskedArray.choose + MaskedArray.compress + MaskedArray.diagonal + MaskedArray.nonzero + MaskedArray.put + MaskedArray.repeat + MaskedArray.searchsorted + MaskedArray.sort + MaskedArray.take + + +Calculations +------------ + +.. autosummary:: + :toctree: generated/ + + MaskedArray.all + MaskedArray.anom + MaskedArray.any + MaskedArray.argmax + MaskedArray.argmin + MaskedArray.clip + MaskedArray.conj + MaskedArray.cumprod + MaskedArray.cumsum + MaskedArray.mean + MaskedArray.min + MaskedArray.prod + MaskedArray.ptp + MaskedArray.round + MaskedArray.std + MaskedArray.sum + MaskedArray.trace + MaskedArray.var + + +Arithmetic and comparison operations +------------------------------------ + +.. index:: comparison, arithmetic, operation, operator + +Comparison operators: +~~~~~~~~~~~~~~~~~~~~~ + +.. autosummary:: + :toctree: generated/ + + MaskedArray.__lt__ + MaskedArray.__le__ + MaskedArray.__gt__ + MaskedArray.__ge__ + MaskedArray.__eq__ + MaskedArray.__ne__ + +Truth value of an array (:func:`bool()`): +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. autosummary:: + :toctree: generated/ + + MaskedArray.__nonzero__ + + +Arithmetic: +~~~~~~~~~~~ + +.. autosummary:: + :toctree: generated/ + + MaskedArray.__add__ + MaskedArray.__sub__ + MaskedArray.__mul__ + MaskedArray.__div__ + MaskedArray.__truediv__ + MaskedArray.__floordiv__ + MaskedArray.__mod__ + MaskedArray.__divmod__ + MaskedArray.__pow__ + MaskedArray.__lshift__ + MaskedArray.__rshift__ + MaskedArray.__and__ + MaskedArray.__or__ + MaskedArray.__xor__ + + +Arithmetic, in-place: +~~~~~~~~~~~~~~~~~~~~~ + +.. 
autosummary:: + :toctree: generated/ + + MaskedArray.__iadd__ + MaskedArray.__isub__ + MaskedArray.__imul__ + MaskedArray.__idiv__ + MaskedArray.__itruediv__ + MaskedArray.__ifloordiv__ + MaskedArray.__imod__ + MaskedArray.__ipow__ + MaskedArray.__ilshift__ + MaskedArray.__irshift__ + MaskedArray.__iand__ + MaskedArray.__ior__ + MaskedArray.__ixor__ + + + +Special methods +--------------- + +For standard library functions: + +.. autosummary:: + :toctree: generated/ + + MaskedArray.__copy__ + MaskedArray.__deepcopy__ + MaskedArray.__reduce__ + MaskedArray.__setstate__ + +Basic customization: + +.. autosummary:: + :toctree: generated/ + + MaskedArray.__new__ + MaskedArray.__array__ + MaskedArray.__array_wrap__ + +Container customization: (see :ref:`Indexing <arrays.indexing>`) + +.. autosummary:: + :toctree: generated/ + + MaskedArray.__len__ + MaskedArray.__getitem__ + MaskedArray.__setitem__ + MaskedArray.__getslice__ + MaskedArray.__setslice__ + MaskedArray.__contains__ + + + +Specific methods +---------------- + +Handling the mask +~~~~~~~~~~~~~~~~~ + +The following methods can be used to access information about the mask or to +manipulate the mask. + +.. autosummary:: + :toctree: generated/ + + MaskedArray.harden_mask + MaskedArray.soften_mask + MaskedArray.unshare_mask + MaskedArray.shrink_mask + + +Handling the `fill_value` +~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. autosummary:: + :toctree: generated/ + + MaskedArray.get_fill_value + MaskedArray.set_fill_value + + +.. autosummary:: + :toctree: generated/ + + MaskedArray.compressed + MaskedArray.count + diff --git a/doc/source/reference/maskedarray.generic.rst b/doc/source/reference/maskedarray.generic.rst new file mode 100644 index 000000000..ba6b97deb --- /dev/null +++ b/doc/source/reference/maskedarray.generic.rst @@ -0,0 +1,427 @@ +.. currentmodule:: numpy.ma + +.. 
_maskedarray.generic: + + + +The :mod:`numpy.ma` module +========================== + +Rationale +--------- + +Masked arrays are arrays that may have missing or invalid entries. +The :mod:`numpy.ma` module provides a nearly work-alike replacement for numpy +that supports data arrays with masks. + + + +What is a masked array? +----------------------- + +In many circumstances, datasets can be incomplete or tainted by the presence of invalid data. For example, a sensor may have failed to record a value, or +recorded an invalid one. +The :mod:`numpy.ma` module provides a convenient way to address this issue, by introducing masked arrays. + +A masked array is the combination of a standard :class:`numpy.ndarray` and a mask. A mask is either :attr:`nomask`, indicating that no value of the associated array is invalid, or an array of booleans that determines for each element of the associated array whether the value is valid or not. When an element of the mask is ``False``, the corresponding element of the associated array is valid and is said to be unmasked. When an element of the mask is ``True``, the corresponding element of the associated array is said to be masked (invalid). + +The package ensures that masked entries are not used in computations. + +As an illustration, let's consider the following dataset:: + + >>> import numpy as np + >>> import numpy.ma as ma + >>> x = np.array([1, 2, 3, -1, 5]) + +We wish to mark the fourth entry as invalid. The easiest way is to create a masked +array:: + + >>> mx = ma.masked_array(x, mask=[0, 0, 0, 1, 0]) + +We can now compute the mean of the dataset, without taking the invalid data into account:: + + >>> mx.mean() + 2.75 + + +The :mod:`numpy.ma` module +-------------------------- + + +The main feature of the :mod:`numpy.ma` module is the :class:`~numpy.ma.MaskedArray` class, which is a subclass of :class:`numpy.ndarray`. +The class, its attributes and methods are described in more detail in the +:ref:`MaskedArray class <maskedarray.baseclass>` section.
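As a quick, self-contained sketch of the workflow the new docs describe (nothing here beyond the `numpy.ma` calls shown in this section):

```python
import numpy as np
import numpy.ma as ma

# Dataset where the fourth entry is known to be invalid.
x = np.array([1, 2, 3, -1, 5])
mx = ma.masked_array(x, mask=[0, 0, 0, 1, 0])

# Reductions skip the masked entry: mean of [1, 2, 3, 5].
print(mx.mean())   # 2.75
print(mx.count())  # 4 valid entries
```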
+ +The :mod:`numpy.ma` module can be used as an addition to :mod:`numpy`:: + + >>> import numpy as np + >>> import numpy.ma as ma + +To create an array with the second element invalid, we would do:: + + >>> y = ma.array([1, 2, 3], mask = [0, 1, 0]) + +To create a masked array where all values close to 1.e20 are invalid, we would +do:: + + >>> z = ma.masked_values([1.0, 1.e20, 3.0, 4.0], 1.e20) + +For a complete discussion of creation methods for masked arrays, please see +section :ref:`Constructing masked arrays <maskedarray.generic.constructing>`. + + + + +Using numpy.ma +============== + +.. _maskedarray.generic.constructing: + +Constructing masked arrays +-------------------------- + +There are several ways to construct a masked array. + +* A first possibility is to directly invoke the :class:`MaskedArray` class. + +* A second possibility is to use the two masked array constructors, + :func:`array` and :func:`masked_array`. + + .. autosummary:: + :toctree: generated/ + + array + masked_array + + +* A third option is to take a view of an existing array. In that case, the + mask of the view is set to :attr:`nomask` if the array has no named fields, + or an array of booleans with the same structure as the array otherwise:: + + >>> x = np.array([1, 2, 3]) + >>> x.view(ma.MaskedArray) + masked_array(data = [1 2 3], + mask = False, + fill_value = 999999) + +* Yet another possibility is to use any of the following functions: + + .. autosummary:: + :toctree: generated/ + + asarray + asanyarray + fix_invalid + masked_equal + masked_greater + masked_greater_equal + masked_inside + masked_invalid + masked_less + masked_less_equal + masked_not_equal + masked_object + masked_outside + masked_values + masked_where + + + +Accessing the data +------------------ + +The underlying data of a masked array can be accessed in several ways: + +* through the :attr:`data` attribute.
The output is a view of the array as + a :class:`numpy.ndarray` or one of its subclasses, depending on the type + of the underlying data when the masked array was created. + +* through the :meth:`~MaskedArray.__array__` method. The output is then a :class:`numpy.ndarray`. + +* by directly taking a view of the masked array as a :class:`numpy.ndarray` or one of its subclasses (which is actually what using the :attr:`data` attribute does). + +* by using the :func:`getdata` function. + + +None of these methods is completely satisfactory if some entries have been marked as invalid. As a general rule, invalid data should not be relied on. +If a representation of the array is needed without any masked entries, it is recommended to fill the array with the :meth:`filled` method. + + + +Accessing the mask +------------------ + +The mask of a masked array is accessible through its :attr:`mask` attribute. +We must keep in mind that a ``True`` entry in the mask indicates an *invalid* entry. + +Another possibility is to use the :func:`getmask` and :func:`getmaskarray` functions. :func:`getmask(x)` outputs the mask of ``x`` if ``x`` is a masked array, and the special value :data:`nomask` otherwise. +:func:`getmaskarray(x)` outputs the mask of ``x`` if ``x`` is a masked array. +If ``x`` has no invalid entry or is not a masked array, the function outputs a boolean array of ``False`` with as many elements as ``x``. + + + + +Accessing only the valid entries +--------------------------------- + +To retrieve only the valid entries, we can use the inverse of the mask as an index.
The inverse of the mask can be calculated with the :func:`numpy.logical_not` function or simply with the ``~`` operator:: + + >>> x = ma.array([[1, 2], [3, 4]], mask=[[0, 1], [1, 0]]) + >>> x[~x.mask] + masked_array(data = [1 4], + mask = [False False], + fill_value = 999999) + +Another way to retrieve the valid data is to use the :meth:`compressed` method, which returns a one-dimensional :class:`~numpy.ndarray` (or one of its subclasses, depending on the value of the :attr:`baseclass` attribute):: + + >>> x.compressed() + array([1, 4]) + + + +Modifying the mask +------------------ + +Masking an entry +~~~~~~~~~~~~~~~~ + +The recommended way to mark one or several specific entries of a masked array as invalid is to assign the special value :attr:`masked` to them:: + + >>> x = ma.array([1, 2, 3]) + >>> x[0] = ma.masked + >>> x + masked_array(data = [-- 2 3], + mask = [ True False False], + fill_value = 999999) + >>> y = ma.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) + >>> y[(0, 1, 2), (1, 2, 0)] = ma.masked + >>> y + masked_array(data = + [[1 -- 3] + [4 5 --] + [-- 8 9]], + mask = + [[False True False] + [False False True] + [ True False False]], + fill_value = 999999) + >>> z = ma.array([1, 2, 3, 4]) + >>> z[:-2] = ma.masked + >>> z + masked_array(data = [-- -- 3 4], + mask = [ True True False False], + fill_value = 999999) + + +A second possibility is to modify the mask directly, but this usage is discouraged. + +.. note:: + When creating a new masked array with a simple, non-structured datatype, the mask is initially set to the special value :attr:`nomask`, which corresponds roughly to the boolean ``False``. Trying to set an element of :attr:`nomask` will fail with a :exc:`TypeError` exception, as a boolean does not support item assignment.
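The note's `nomask` behavior can be sketched as follows (a minimal illustration; the exact `TypeError` message may vary across NumPy versions):

```python
import numpy.ma as ma

x = ma.array([1, 2, 3])        # simple dtype: the mask starts out as nomask
print(x.mask)                  # False

# nomask is a boolean scalar, so item assignment on it fails...
try:
    x.mask[0] = True
except TypeError:
    print("nomask does not support item assignment")

# ...the supported way is to assign the `masked` constant instead.
x[0] = ma.masked
print(x.count())               # 2 valid entries remain
```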
+ + +All the entries of an array can be masked at once by assigning ``True`` to the mask:: + + >>> x = ma.array([1, 2, 3], mask=[0, 0, 1]) + >>> x.mask = True + >>> x + masked_array(data = [-- -- --], + mask = [ True True True], + fill_value = 999999) + +Finally, specific entries can be masked and/or unmasked by assigning to the mask a sequence of booleans:: + + >>> x = ma.array([1, 2, 3]) + >>> x.mask = [0, 1, 0] + >>> x + masked_array(data = [1 -- 3], + mask = [False True False], + fill_value = 999999) + +Unmasking an entry +~~~~~~~~~~~~~~~~~~ + +To unmask one or several specific entries, we can just assign one or several new valid values to them:: + + >>> x = ma.array([1, 2, 3], mask=[0, 0, 1]) + >>> x + masked_array(data = [1 2 --], + mask = [False False True], + fill_value = 999999) + >>> x[-1] = 5 + >>> x + masked_array(data = [1 2 5], + mask = [False False False], + fill_value = 999999) + +.. note:: + Unmasking an entry by direct assignment will not work if the masked array + has a *hard* mask, as shown by the :attr:`hardmask` attribute. + This feature was introduced to prevent the overwriting of the mask.
+ To force the unmasking of an entry in such circumstances, the mask must first + be softened with the :meth:`soften_mask` method before the assignment, + and then re-hardened with :meth:`harden_mask`:: + + >>> x = ma.array([1, 2, 3], mask=[0, 0, 1], hard_mask=True) + >>> x + masked_array(data = [1 2 --], + mask = [False False True], + fill_value = 999999) + >>> x[-1] = 5 + >>> x + masked_array(data = [1 2 --], + mask = [False False True], + fill_value = 999999) + >>> x.soften_mask() + >>> x[-1] = 5 + >>> x + masked_array(data = [1 2 5], + mask = [False False False], + fill_value = 999999) + >>> x.harden_mask() + + +To unmask all masked entries of a masked array, the simplest solution is to assign the constant :attr:`nomask` to the mask:: + + >>> x = ma.array([1, 2, 3], mask=[0, 0, 1]) + >>> x + masked_array(data = [1 2 --], + mask = [False False True], + fill_value = 999999) + >>> x.mask = ma.nomask + >>> x + masked_array(data = [1 2 3], + mask = [False False False], + fill_value = 999999) + + + +Indexing and slicing +-------------------- + +As a :class:`MaskedArray` is a subclass of :class:`numpy.ndarray`, it inherits its mechanisms for indexing and slicing. + +When accessing a single entry of a masked array with no named fields, the output is either a scalar (if the corresponding entry of the mask is ``False``) or the special value :attr:`masked` (if the corresponding entry of the mask is ``True``):: + + >>> x = ma.array([1, 2, 3], mask=[0, 0, 1]) + >>> x[0] + 1 + >>> x[-1] + masked_array(data = --, + mask = True, + fill_value = 1e+20) + >>> x[-1] is ma.masked + True + +If the masked array has named fields, accessing a single entry returns a +:class:`numpy.void` object if none of the fields are masked, or a 0d masked array with the same dtype as the initial array if at least one of the fields is masked. + + >>> y = ma.masked_array([(1,2), (3, 4)], + ... mask=[(0, 0), (0, 1)], + ...
dtype=[('a', int), ('b', int)]) + >>> y[0] + (1, 2) + >>> y[-1] + masked_array(data = (3, --), + mask = (False, True), + fill_value = (999999, 999999), + dtype = [('a', '<i4'), ('b', '<i4')]) + + +When accessing a slice, the output is a masked array whose :attr:`data` attribute is a view of the original data, and whose mask is either :attr:`nomask` (if there were no invalid entries in the original array) or a copy of the corresponding slice of the original mask. The copy is required to avoid propagation of any modification of the mask to the original. + + >>> x = ma.array([1, 2, 3, 4, 5], mask=[0, 1, 0, 0, 1]) + >>> mx = x[:3] + >>> mx + masked_array(data = [1 -- 3], + mask = [False True False], + fill_value = 999999) + >>> mx[1] = -1 + >>> mx + masked_array(data = [1 -1 3], + mask = [False False False], + fill_value = 999999) + >>> x.mask + array([False, True, False, False, True], dtype=bool) + >>> x.data + array([ 1, -1, 3, 4, 5]) + + +Accessing a field of a masked array with a structured datatype returns a :class:`MaskedArray`. + + + +Operations on masked arrays +--------------------------- + +Arithmetic and comparison operations are supported by masked arrays. +As much as possible, invalid entries of a masked array are not processed, meaning that the corresponding :attr:`data` entries *should* be the same before and after the operation. +We must stress that this behavior is not guaranteed: invalid data may actually be affected by the operation in some cases, so, once again, invalid data should not be relied on. + +The :mod:`numpy.ma` module comes with a specific implementation of most +ufuncs.
Unary and binary functions that have a validity domain (such as :func:`~numpy.log` or :func:`~numpy.divide`) return the :data:`masked` constant whenever the input is masked or falls outside the validity domain:: + + >>> ma.log([-1, 0, 1, 2]) + masked_array(data = [-- -- 0.0 0.69314718056], + mask = [ True True False False], + fill_value = 1e+20) + +Masked arrays also support standard numpy ufuncs. The output is then a masked array. The result of a unary ufunc is masked wherever the input is masked. The result of a binary ufunc is masked wherever any of the inputs is masked. If the ufunc also returns the optional context output (a 3-element tuple containing the name of the ufunc, its arguments and its domain), the context is processed and entries of the output masked array are masked wherever the corresponding inputs fall outside the validity domain:: + + >>> x = ma.array([-1, 1, 0, 2, 3], mask=[0, 0, 0, 0, 1]) + >>> np.log(x) + masked_array(data = [-- 0.0 -- 0.69314718056 --], + mask = [ True False True False True], + fill_value = 1e+20) + + + +Examples +======== + +Data with a given value representing missing data +------------------------------------------------- + +Let's consider a list of elements, ``x``, where values of -9999. represent missing data. +We wish to compute the average value of the data and the vector of anomalies (deviations from the average):: + + >>> import numpy.ma as ma + >>> x = [0.,1.,-9999.,3.,4.] + >>> mx = ma.masked_values(x, -9999.) + >>> print mx.mean() + 2.0 + >>> print mx - mx.mean() + [-2.0 -1.0 -- 1.0 2.0] + >>> print mx.anom() + [-2.0 -1.0 -- 1.0 2.0] + + +Filling in the missing data +--------------------------- + +Suppose now that we wish to print that same data, but with the missing values +replaced by the average value. + + >>> print mx.filled(mx.mean()) + [ 0. 1. 2. 3. 4.]
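The two examples above can be combined into one runnable sketch (same data and sentinel value; written with Python 3 `print` calls rather than the Python 2 syntax the docs use):

```python
import numpy.ma as ma

x = [0., 1., -9999., 3., 4.]
mx = ma.masked_values(x, -9999.)      # mask entries equal to the sentinel

anomalies = mx - mx.mean()            # stays masked where data was missing
filled = mx.filled(mx.mean())         # plain ndarray with the gap filled in

print(mx.mean())                      # 2.0
print(filled)                         # [0. 1. 2. 3. 4.]
```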
+ + +Numerical operations +-------------------- + +Numerical operations can be easily performed without worrying about missing values, dividing by zero, square roots of negative numbers, etc.:: + + >>> import numpy as np, numpy.ma as ma + >>> x = ma.array([1., -1., 3., 4., 5., 6.], mask=[0,0,0,0,1,0]) + >>> y = ma.array([1., 2., 0., 4., 5., 6.], mask=[0,0,0,0,0,1]) + >>> print np.sqrt(x/y) + [1.0 -- -- 1.0 -- --] + +Four values of the output are invalid: the first one comes from taking the square root of a negative number, the second from the division by zero, and the last two where the inputs were masked. + + +Ignoring extreme values +----------------------- + +Let's consider an array ``d`` of random floats between 0 and 1. +We wish to compute the average of the values of ``d`` while ignoring any data outside the range [0.1, 0.9]:: + + >>> print ma.masked_outside(d, 0.1, 0.9).mean() diff --git a/doc/source/reference/maskedarray.rst b/doc/source/reference/maskedarray.rst new file mode 100644 index 000000000..c2deb3ba1 --- /dev/null +++ b/doc/source/reference/maskedarray.rst @@ -0,0 +1,19 @@ +.. _maskedarray: + +************* +Masked arrays +************* + +Masked arrays are arrays that may have missing or invalid entries. +The :mod:`numpy.ma` module provides a nearly work-alike replacement for numpy +that supports data arrays with masks. + +.. index:: + single: masked arrays + +.. toctree:: + :maxdepth: 2 + + maskedarray.generic + maskedarray.baseclass + routines.ma diff --git a/doc/sphinxext/docscrape.py b/doc/sphinxext/docscrape.py index 904270a52..f374b3ddc 100644 --- a/doc/sphinxext/docscrape.py +++ b/doc/sphinxext/docscrape.py @@ -406,11 +406,13 @@ def header(text, style='-'): class FunctionDoc(NumpyDocString): - def __init__(self, func, role='func'): + def __init__(self, func, role='func', doc=None): self._f = func self._role = role # e.g. 
"func" or "meth" + if doc is None: + doc = inspect.getdoc(func) or '' try: - NumpyDocString.__init__(self,inspect.getdoc(func) or '') + NumpyDocString.__init__(self, doc) except ValueError, e: print '*'*78 print "ERROR: '%s' while parsing `%s`" % (e, self._f) @@ -459,7 +461,7 @@ class FunctionDoc(NumpyDocString): class ClassDoc(NumpyDocString): - def __init__(self,cls,modulename='',func_doc=FunctionDoc): + def __init__(self,cls,modulename='',func_doc=FunctionDoc,doc=None): if not inspect.isclass(cls): raise ValueError("Initialise using a class. Got %r" % cls) self._cls = cls @@ -470,7 +472,10 @@ class ClassDoc(NumpyDocString): self._name = cls.__name__ self._func_doc = func_doc - NumpyDocString.__init__(self, pydoc.getdoc(cls)) + if doc is None: + doc = pydoc.getdoc(cls) + + NumpyDocString.__init__(self, doc) @property def methods(self): diff --git a/doc/sphinxext/docscrape_sphinx.py b/doc/sphinxext/docscrape_sphinx.py index d431ecd3f..77ed271b0 100644 --- a/doc/sphinxext/docscrape_sphinx.py +++ b/doc/sphinxext/docscrape_sphinx.py @@ -115,7 +115,7 @@ class SphinxFunctionDoc(SphinxDocString, FunctionDoc): class SphinxClassDoc(SphinxDocString, ClassDoc): pass -def get_doc_object(obj, what=None): +def get_doc_object(obj, what=None, doc=None): if what is None: if inspect.isclass(obj): what = 'class' @@ -126,8 +126,11 @@ def get_doc_object(obj, what=None): else: what = 'object' if what == 'class': - return SphinxClassDoc(obj, '', func_doc=SphinxFunctionDoc) + return SphinxClassDoc(obj, '', func_doc=SphinxFunctionDoc, doc=doc) elif what in ('function', 'method'): - return SphinxFunctionDoc(obj, '') + return SphinxFunctionDoc(obj, '', doc=doc) else: - return SphinxDocString(pydoc.getdoc(obj)) + if doc is None: + doc = pydoc.getdoc(obj) + return SphinxDocString(doc) + diff --git a/doc/sphinxext/numpydoc.py b/doc/sphinxext/numpydoc.py index 21a5ae5ec..ff6c44c53 100644 --- a/doc/sphinxext/numpydoc.py +++ b/doc/sphinxext/numpydoc.py @@ -28,7 +28,7 @@ def 
mangle_docstrings(app, what, name, obj, options, lines, re.I|re.S) lines[:] = title_re.sub('', "\n".join(lines)).split("\n") else: - doc = get_doc_object(obj, what) + doc = get_doc_object(obj, what, "\n".join(lines)) lines[:] = str(doc).split("\n") if app.config.numpydoc_edit_link and hasattr(obj, '__name__') and \ diff --git a/doc/sphinxext/plot_directive.py b/doc/sphinxext/plot_directive.py index 6a9418831..6b5ff6eaf 100644 --- a/doc/sphinxext/plot_directive.py +++ b/doc/sphinxext/plot_directive.py @@ -57,6 +57,9 @@ The plot directive has the following configuration options: plot_include_source Default value for the include-source option + plot_formats + The set of files to generate. Default: ['png', 'pdf', 'hires.png'], + ie. everything. TODO ---- @@ -75,22 +78,27 @@ def setup(app): setup.app = app setup.config = app.config setup.confdir = app.confdir - - app.add_config_value('plot_output_dir', '_static', True) + + static_path = '_static' + if hasattr(app.config, 'html_static_path') and app.config.html_static_path: + static_path = app.config.html_static_path[0] + + app.add_config_value('plot_output_dir', static_path, True) app.add_config_value('plot_pre_code', '', True) app.add_config_value('plot_rcparams', sane_rcparameters, True) app.add_config_value('plot_include_source', False, True) + app.add_config_value('plot_formats', ['png', 'hires.png', 'pdf'], True) app.add_directive('plot', plot_directive, True, (0, 1, False), **plot_directive_options) sane_rcparameters = { - 'font.size': 8, - 'axes.titlesize': 8, - 'axes.labelsize': 8, - 'xtick.labelsize': 8, - 'ytick.labelsize': 8, - 'legend.fontsize': 8, + 'font.size': 9, + 'axes.titlesize': 9, + 'axes.labelsize': 9, + 'xtick.labelsize': 9, + 'ytick.labelsize': 9, + 'legend.fontsize': 9, 'figure.figsize': (4, 3), } @@ -134,10 +142,16 @@ def run_code(code, code_path): # Change the working directory to the directory of the example, so # it can get at its data files, if any. 
pwd = os.getcwd() + old_sys_path = list(sys.path) if code_path is not None: - os.chdir(os.path.dirname(code_path)) + dirname = os.path.abspath(os.path.dirname(code_path)) + os.chdir(dirname) + sys.path.insert(0, dirname) + + # Redirect stdout stdout = sys.stdout sys.stdout = cStringIO.StringIO() + try: code = unescape_doctest(code) ns = {} @@ -145,9 +159,11 @@ def run_code(code, code_path): exec code in ns finally: os.chdir(pwd) + sys.path[:] = old_sys_path sys.stdout = stdout return ns + #------------------------------------------------------------------------------ # Generating figures #------------------------------------------------------------------------------ @@ -160,16 +176,19 @@ def out_of_date(original, derived): return (not os.path.exists(derived) or os.stat(derived).st_mtime < os.stat(original).st_mtime) + def makefig(code, code_path, output_dir, output_base, config): """ run a pyplot script and save the low and high res PNGs and a PDF in _static """ - formats = [('png', 100), - ('hires.png', 200), - ('pdf', 50), - ] + included_formats = config.plot_formats + if type(included_formats) is str: + included_formats = eval(included_formats) + + formats = [x for x in [('png', 80), ('hires.png', 200), ('pdf', 50)] + if x[0] in config.plot_formats] all_exists = True @@ -181,26 +200,25 @@ def makefig(code, code_path, output_dir, output_base, config): break if all_exists: - return 1 + return [output_base] - # Then look for multi-figure output files, assuming - # if we have some we have all... 
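The `run_code` hunk above follows a save/mutate/restore pattern: the working directory, `sys.path`, and `sys.stdout` are captured up front and unconditionally restored in `finally`, so a failing plot script cannot leak state into the rest of the Sphinx build. A stripped-down Python 3 rendering of the same pattern (the original is Python 2 and uses `cStringIO`):

```python
import os
import sys
from io import StringIO  # the Python 2 original uses cStringIO

def run_code(code, code_path=None):
    """Exec a plot script with cwd, sys.path and stdout sandboxed,
    restoring all three even if the script raises."""
    pwd = os.getcwd()
    old_sys_path = list(sys.path)
    if code_path is not None:
        dirname = os.path.abspath(os.path.dirname(code_path))
        os.chdir(dirname)            # let the script find its data files
        sys.path.insert(0, dirname)  # ...and import its own helper modules
    stdout = sys.stdout
    sys.stdout = StringIO()          # swallow the script's print output
    try:
        ns = {}
        exec(code, ns)
    finally:
        os.chdir(pwd)
        sys.path[:] = old_sys_path
        sys.stdout = stdout
    return ns
```

Inserting the script's own directory into `sys.path` is the new behaviour this hunk adds, alongside the `sys.path` restoration.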
- i = 0 - while True: - all_exists = True + # Then look for multi-figure output files + image_names = [] + for i in xrange(1000): + image_names.append('%s_%02d' % (output_base, i)) for format, dpi in formats: output_path = os.path.join(output_dir, - '%s_%02d.%s' % (output_base, i, format)) + '%s.%s' % (image_names[-1], format)) if out_of_date(code_path, output_path): all_exists = False break - if all_exists: - i += 1 - else: + if not all_exists: + # assume that if we have one, we have them all + all_exists = (i > 0) break - if i != 0: - return i + if all_exists: + return image_names # We didn't find the files, so build them print "-- Plotting figures %s" % output_base @@ -212,31 +230,24 @@ def makefig(code, code_path, output_dir, output_base, config): matplotlib.rcdefaults() matplotlib.rcParams.update(config.plot_rcparams) - try: - run_code(code, code_path) - except: - raise - s = cbook.exception_to_str("Exception running plot %s" % code_path) - warnings.warn(s) - return 0 + # Run code + run_code(code, code_path) + + # Collect images + image_names = [] fig_managers = _pylab_helpers.Gcf.get_all_fig_managers() for i, figman in enumerate(fig_managers): + if len(fig_managers) == 1: + name = output_base + else: + name = "%s_%02d" % (output_base, i) + image_names.append(name) for format, dpi in formats: - if len(fig_managers) == 1: - name = output_base - else: - name = "%s_%02d" % (output_base, i) path = os.path.join(output_dir, '%s.%s' % (name, format)) - try: - figman.canvas.figure.savefig(path, dpi=dpi) - except: - s = cbook.exception_to_str("Exception running plot %s" - % code_path) - warnings.warn(s) - return 0 + figman.canvas.figure.savefig(path, dpi=dpi) - return len(fig_managers) + return image_names #------------------------------------------------------------------------------ # Generating output @@ -303,7 +314,7 @@ def run(arguments, content, options, state_machine, state, lineno): document.attributes['_plot_counter'] = counter output_base = '%d-%s' % 
(counter, os.path.basename(file_name)) - rel_name = relative_path(file_name, setup.confdir) + rel_name = relpath(file_name, setup.confdir) base, ext = os.path.splitext(output_base) if ext in ('.py', '.rst', '.txt'): @@ -334,13 +345,19 @@ def run(arguments, content, options, state_machine, state, lineno): f.write(unescape_doctest(code)) f.close() - source_link = relative_path(target_name, rst_dir) + source_link = relpath(target_name, rst_dir) # determine relative reference - link_dir = relative_path(output_dir, rst_dir) + link_dir = relpath(output_dir, rst_dir) # make figures - num_figs = makefig(code, file_name, output_dir, output_base, config) + try: + image_names = makefig(code, file_name, output_dir, output_base, config) + except RuntimeError, err: + reporter = state.memo.reporter + sm = reporter.system_message(3, "Exception occurred rendering plot", + line=lineno) + return [sm] # generate output if options['include-source']: @@ -353,20 +370,6 @@ def run(arguments, content, options, state_machine, state, lineno): else: source_code = "" - if num_figs > 0: - image_names = [] - for i in range(num_figs): - if num_figs == 1: - image_names.append(output_base) - else: - image_names.append("%s_%02d" % (output_base, i)) - else: - reporter = state.memo.reporter - sm = reporter.system_message(3, "Exception occurred rendering plot", - line=lineno) - return [sm] - - opts = [':%s: %s' % (key, val) for key, val in options.items() if key in ('alt', 'height', 'width', 'scale', 'align', 'class')] @@ -381,23 +384,48 @@ def run(arguments, content, options, state_machine, state, lineno): if len(lines): state_machine.insert_input( lines, state_machine.input_lines.source(0)) - return [] - -def relative_path(target, base): - target = os.path.abspath(os.path.normpath(target)) - base = os.path.abspath(os.path.normpath(base)) + return [] - target_parts = target.split(os.path.sep) - base_parts = base.split(os.path.sep) - rel_parts = 0 - while target_parts and base_parts and target_parts[0] 
== base_parts[0]: - target_parts.pop(0) - base_parts.pop(0) +if hasattr(os.path, 'relpath'): + relpath = os.path.relpath +else: + def relpath(target, base=os.curdir): + """ + Return a relative path to the target from either the current + dir or an optional base dir. Base can be a directory + specified either as absolute or relative to current dir. + """ + + if not os.path.exists(target): + raise OSError, 'Target does not exist: '+target + + if not os.path.isdir(base): + raise OSError, 'Base is not a directory or does not exist: '+base + + base_list = (os.path.abspath(base)).split(os.sep) + target_list = (os.path.abspath(target)).split(os.sep) + + # On the windows platform the target may be on a completely + # different drive from the base. + if os.name in ['nt','dos','os2'] and base_list[0] <> target_list[0]: + raise OSError, 'Target is on a different drive to base. Target: '+target_list[0].upper()+', base: '+base_list[0].upper() + + # Starting from the filepath root, work out how much of the + # filepath is shared by base and target. + for i in range(min(len(base_list), len(target_list))): + if base_list[i] <> target_list[i]: break + else: + # If we broke out of the loop, i is pointing to the first + # differing path elements. If we didn't break out of the + # loop, i is pointing to identical path elements. + # Increment i so that in all cases it points to the first + # differing path elements. + i+=1 - rel_parts += len(base_parts) - return os.path.sep.join([os.path.pardir] * rel_parts + target_parts) + rel_list = [os.pardir] * (len(base_list)-i) + target_list[i:] + return os.path.join(*rel_list) #------------------------------------------------------------------------------ # plot:: directive registration etc. 
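The `relpath` shim above is only bound when `os.path.relpath` is missing (it was added in Python 2.6); otherwise the stdlib function is used directly. The contract the shim replicates — drop the shared leading components, climb out of `base` with `..` entries, then descend into `target` — can be checked against the stdlib version (`posixpath` is used so the example behaves the same on every platform):

```python
import posixpath

# relpath(target, base): shared prefix is dropped, each remaining
# component of `base` becomes '..', the rest of `target` is appended.
assert posixpath.relpath('/a/b/c/d', '/a/b/x/y') == '../../c/d'
assert posixpath.relpath('/a/b/c', '/a/b') == 'c'
assert posixpath.relpath('/a/b', '/a/b') == '.'
```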
@@ -412,21 +440,11 @@ except ImportError: from docutils.parsers.rst.directives.images import Image align = Image.align -try: - from docutils.parsers.rst import Directive -except ImportError: - from docutils.parsers.rst.directives import _directives +def plot_directive(name, arguments, options, content, lineno, + content_offset, block_text, state, state_machine): + return run(arguments, content, options, state_machine, state, lineno) - def plot_directive(name, arguments, options, content, lineno, - content_offset, block_text, state, state_machine): - return run(arguments, content, options, state_machine, state, lineno) - plot_directive.__doc__ = __doc__ -else: - class plot_directive(Directive): - def run(self): - return run(self.arguments, self.content, self.options, - self.state_machine, self.state, self.lineno) - plot_directive.__doc__ = __doc__ +plot_directive.__doc__ = __doc__ def _option_boolean(arg): if not arg or not arg.strip(): diff --git a/numpy/add_newdocs.py b/numpy/add_newdocs.py index cff9e5e03..68224485c 100644 --- a/numpy/add_newdocs.py +++ b/numpy/add_newdocs.py @@ -8,140 +8,6 @@ from lib import add_newdoc -add_newdoc('numpy.core', 'dtype', -"""Create a data type. - -A numpy array is homogeneous, and contains elements described by a -dtype. A dtype can be constructed from different combinations of -fundamental numeric types, as illustrated below. 
- -Examples --------- - -Using array-scalar type: ->>> np.dtype(np.int16) -dtype('int16') - -Record, one field name 'f1', containing int16: ->>> np.dtype([('f1', np.int16)]) -dtype([('f1', '<i2')]) - -Record, one field named 'f1', in itself containing a record with one field: ->>> np.dtype([('f1', [('f1', np.int16)])]) -dtype([('f1', [('f1', '<i2')])]) - -Record, two fields: the first field contains an unsigned int, the -second an int32: ->>> np.dtype([('f1', np.uint), ('f2', np.int32)]) -dtype([('f1', '<u4'), ('f2', '<i4')]) - -Using array-protocol type strings: ->>> np.dtype([('a','f8'),('b','S10')]) -dtype([('a', '<f8'), ('b', '|S10')]) - -Using comma-separated field formats. The shape is (2,3): ->>> np.dtype("i4, (2,3)f8") -dtype([('f0', '<i4'), ('f1', '<f8', (2, 3))]) - -Using tuples. ``int`` is a fixed type, 3 the field's shape. ``void`` -is a flexible type, here of size 10: ->>> np.dtype([('hello',(np.int,3)),('world',np.void,10)]) -dtype([('hello', '<i4', 3), ('world', '|V10')]) - -Subdivide ``int16`` into 2 ``int8``'s, called x and y. 0 and 1 are -the offsets in bytes: ->>> np.dtype((np.int16, {'x':(np.int8,0), 'y':(np.int8,1)})) -dtype(('<i2', [('x', '|i1'), ('y', '|i1')])) - -Using dictionaries. Two fields named 'gender' and 'age': ->>> np.dtype({'names':['gender','age'], 'formats':['S1',np.uint8]}) -dtype([('gender', '|S1'), ('age', '|u1')]) - -Offsets in bytes, here 0 and 25: ->>> np.dtype({'surname':('S25',0),'age':(np.uint8,25)}) -dtype([('surname', '|S25'), ('age', '|u1')]) - -""") - -add_newdoc('numpy.core', 'dtype', - """ - dtype(obj, align=False, copy=False) - - Create a data type object. - - A numpy array is homogeneous, and contains elements described by a - dtype object. A dtype object can be constructed from different - combinations of fundamental numeric types. - - Parameters - ---------- - obj - Object to be converted to a data type object. 
- align : bool, optional - Add padding to the fields to match what a C compiler would output - for a similar C-struct. Can be ``True`` only if `obj` is a dictionary - or a comma-separated string. - copy : bool, optional - Make a new copy of the data-type object. If ``False``, the result - may just be a reference to a built-in data-type object. - - Examples - -------- - Using array-scalar type: - - >>> np.dtype(np.int16) - dtype('int16') - - Record, one field name 'f1', containing int16: - - >>> np.dtype([('f1', np.int16)]) - dtype([('f1', '<i2')]) - - Record, one field named 'f1', in itself containing a record with one field: - - >>> np.dtype([('f1', [('f1', np.int16)])]) - dtype([('f1', [('f1', '<i2')])]) - - Record, two fields: the first field contains an unsigned int, the - second an int32: - - >>> np.dtype([('f1', np.uint), ('f2', np.int32)]) - dtype([('f1', '<u4'), ('f2', '<i4')]) - - Using array-protocol type strings: - - >>> np.dtype([('a','f8'),('b','S10')]) - dtype([('a', '<f8'), ('b', '|S10')]) - - Using comma-separated field formats. The shape is (2,3): - - >>> np.dtype("i4, (2,3)f8") - dtype([('f0', '<i4'), ('f1', '<f8', (2, 3))]) - - Using tuples. ``int`` is a fixed type, 3 the field's shape. ``void`` - is a flexible type, here of size 10: - - >>> np.dtype([('hello',(np.int,3)),('world',np.void,10)]) - dtype([('hello', '<i4', 3), ('world', '|V10')]) - - Subdivide ``int16`` into 2 ``int8``'s, called x and y. 0 and 1 are - the offsets in bytes: - - >>> np.dtype((np.int16, {'x':(np.int8,0), 'y':(np.int8,1)})) - dtype(('<i2', [('x', '|i1'), ('y', '|i1')])) - - Using dictionaries. 
Two fields named 'gender' and 'age': - - >>> np.dtype({'names':['gender','age'], 'formats':['S1',np.uint8]}) - dtype([('gender', '|S1'), ('age', '|u1')]) - - Offsets in bytes, here 0 and 25: - - >>> np.dtype({'surname':('S25',0),'age':(np.uint8,25)}) - dtype([('surname', '|S25'), ('age', '|u1')]) - - """) - ############################################################################### # # flatiter @@ -150,7 +16,12 @@ add_newdoc('numpy.core', 'dtype', # ############################################################################### -# attributes +add_newdoc('numpy.core', 'flatiter', + """ + """) + +# flatiter attributes + add_newdoc('numpy.core', 'flatiter', ('base', """documentation needed @@ -170,9 +41,8 @@ add_newdoc('numpy.core', 'flatiter', ('index', """)) +# flatiter functions - -# functions add_newdoc('numpy.core', 'flatiter', ('__array__', """__array__(type=None) Get array from iterator @@ -191,37 +61,37 @@ add_newdoc('numpy.core', 'flatiter', ('copy', # ############################################################################### +add_newdoc('numpy.core', 'broadcast', + """ + """) + # attributes + add_newdoc('numpy.core', 'broadcast', ('index', """current index in broadcasted result """)) - add_newdoc('numpy.core', 'broadcast', ('iters', """tuple of individual iterators """)) - add_newdoc('numpy.core', 'broadcast', ('nd', """number of dimensions of broadcasted result """)) - add_newdoc('numpy.core', 'broadcast', ('numiter', """number of iterators """)) - add_newdoc('numpy.core', 'broadcast', ('shape', """shape of broadcasted result """)) - add_newdoc('numpy.core', 'broadcast', ('size', """total size of broadcasted result @@ -1997,6 +1867,32 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('newbyteorder', Equivalent to a.view(a.dtype.newbytorder(byteorder)) + Return array with dtype changed to interpret array data as + specified byte order. + + Changes are also made in all fields and sub-arrays of the array + data type. 
+ + Parameters + ---------- + new_order : string, optional + Byte order to force; a value from the byte order + specifications below. The default value ('S') results in + swapping the current byte order. + `new_order` codes can be any of: + * {'<', 'L'} - little endian + * {'>', 'B'} - big endian + * {'=', 'N'} - native order + * 'S' - swap dtype from current to opposite endian + * {'|', 'I'} - ignore (no change to byte order) + The code does a case-insensitive check on the first letter of + `new_order` for these alternatives. For example, any of '>' + or 'B' or 'b' or 'brian' are valid to specify big-endian. + + Returns + ------- + new_arr : array + array with the given change to the dtype byte order. """)) @@ -2555,6 +2451,25 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('view', """)) + +############################################################################## +# +# umath functions +# +############################################################################## + +add_newdoc('numpy.core.umath', 'frexp', + """ + """) + +add_newdoc('numpy.core.umath', 'frompyfunc', + """ + """) + +add_newdoc('numpy.core.umath', 'ldexp', + """ + """) + add_newdoc('numpy.core.umath','geterrobj', """geterrobj() @@ -2584,6 +2499,102 @@ add_newdoc('numpy.core.umath', 'seterrobj', """) + +############################################################################## +# +# lib._compiled_base functions +# +############################################################################## + +add_newdoc('numpy.lib._compiled_base', 'digitize', + """ + digitize(x,bins) + + Return the index of the bin to which each value of x belongs. + + Each index i returned is such that bins[i-1] <= x < bins[i] if + bins is monotonically increasing, or bins [i-1] > x >= bins[i] if + bins is monotonically decreasing. + + Beyond the bounds of the bins 0 or len(bins) is returned as appropriate. 
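The `digitize` behaviour documented above can be modelled with `bisect` for the increasing-bins case (a pure-Python sketch; the compiled routine also handles monotonically decreasing bins):

```python
import bisect

def digitize(x, bins):
    """Return, for each value v in x, the index i such that
    bins[i-1] <= v < bins[i], assuming `bins` is monotonically
    increasing; 0 or len(bins) for values beyond the bounds."""
    return [bisect.bisect_right(bins, v) for v in x]
```

For example, `digitize([0.2, 6.4, 3.0, 1.6], [0.0, 1.0, 2.5, 4.0, 10.0])` gives `[1, 4, 3, 2]`.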
+ """) + +add_newdoc('numpy.lib._compiled_base', 'bincount', + """ + bincount(x,weights=None) + + Return the number of occurrences of each value in x. + + x must be a list of non-negative integers. The output, b[i], + represents the number of times that i is found in x. If weights + is specified, every occurrence of i at a position p contributes + weights[p] instead of 1. + + See also: histogram, digitize, unique. + """) + +add_newdoc('numpy.lib._compiled_base', 'add_docstring', + """ + docstring(obj, docstring) + + Add a docstring to a built-in obj if possible. + If the obj already has a docstring raise a RuntimeError + If this routine does not know how to add a docstring to the object + raise a TypeError + """) + +add_newdoc('numpy.lib._compiled_base', 'packbits', + """ + out = numpy.packbits(myarray, axis=None) + + myarray : an integer type array whose elements should be packed to bits + + This routine packs the elements of a binary-valued dataset into a + NumPy array of type uint8 ('B') whose bits correspond to + the logical (0 or nonzero) value of the input elements. + The dimension over-which bit-packing is done is given by axis. + The shape of the output has the same number of dimensions as the input + (unless axis is None, in which case the output is 1-d). + + Example: + >>> a = array([[[1,0,1], + ... [0,1,0]], + ... [[1,1,0], + ... [0,0,1]]]) + >>> b = numpy.packbits(a,axis=-1) + >>> b + array([[[160],[64]],[[192],[32]]], dtype=uint8) + + Note that 160 = 128 + 32 + 192 = 128 + 64 + """) + +add_newdoc('numpy.lib._compiled_base', 'unpackbits', + """ + out = numpy.unpackbits(myarray, axis=None) + + myarray - array of uint8 type where each element represents a bit-field + that should be unpacked into a boolean output array + + The shape of the output array is either 1-d (if axis is None) or + the same shape as the input array with unpacking done along the + axis specified. 
+ """) + + +############################################################################## +# +# Documentation for ufunc attributes and methods +# +############################################################################## + + +############################################################################## +# +# ufunc object +# +############################################################################## + add_newdoc('numpy.core', 'ufunc', """ Functions that operate element by element on whole arrays. @@ -2636,6 +2647,12 @@ add_newdoc('numpy.core', 'ufunc', """) +############################################################################## +# +# ufunc methods +# +############################################################################## + add_newdoc('numpy.core', 'ufunc', ('reduce', """ reduce(array, axis=0, dtype=None, out=None) @@ -2815,3 +2832,680 @@ add_newdoc('numpy.core', 'ufunc', ('outer', [12, 15, 18]]) """)) + + +############################################################################## +# +# Documentation for dtype attributes and methods +# +############################################################################## + +############################################################################## +# +# dtype object +# +############################################################################## + +add_newdoc('numpy.core.multiarray', 'dtype', + """ + dtype(obj, align=False, copy=False) + + Create a data type object. + + A numpy array is homogeneous, and contains elements described by a + dtype object. A dtype object can be constructed from different + combinations of fundamental numeric types. + + Parameters + ---------- + obj + Object to be converted to a data type object. + align : bool, optional + Add padding to the fields to match what a C compiler would output + for a similar C-struct. Can be ``True`` only if `obj` is a dictionary + or a comma-separated string. + copy : bool, optional + Make a new copy of the data-type object. 
If ``False``, the result + may just be a reference to a built-in data-type object. + + Examples + -------- + Using array-scalar type: + + >>> np.dtype(np.int16) + dtype('int16') + + Record, one field name 'f1', containing int16: + + >>> np.dtype([('f1', np.int16)]) + dtype([('f1', '<i2')]) + + Record, one field named 'f1', in itself containing a record with one field: + + >>> np.dtype([('f1', [('f1', np.int16)])]) + dtype([('f1', [('f1', '<i2')])]) + + Record, two fields: the first field contains an unsigned int, the + second an int32: + + >>> np.dtype([('f1', np.uint), ('f2', np.int32)]) + dtype([('f1', '<u4'), ('f2', '<i4')]) + + Using array-protocol type strings: + + >>> np.dtype([('a','f8'),('b','S10')]) + dtype([('a', '<f8'), ('b', '|S10')]) + + Using comma-separated field formats. The shape is (2,3): + + >>> np.dtype("i4, (2,3)f8") + dtype([('f0', '<i4'), ('f1', '<f8', (2, 3))]) + + Using tuples. ``int`` is a fixed type, 3 the field's shape. ``void`` + is a flexible type, here of size 10: + + >>> np.dtype([('hello',(np.int,3)),('world',np.void,10)]) + dtype([('hello', '<i4', 3), ('world', '|V10')]) + + Subdivide ``int16`` into 2 ``int8``'s, called x and y. 0 and 1 are + the offsets in bytes: + + >>> np.dtype((np.int16, {'x':(np.int8,0), 'y':(np.int8,1)})) + dtype(('<i2', [('x', '|i1'), ('y', '|i1')])) + + Using dictionaries. 
Two fields named 'gender' and 'age': + + >>> np.dtype({'names':['gender','age'], 'formats':['S1',np.uint8]}) + dtype([('gender', '|S1'), ('age', '|u1')]) + + Offsets in bytes, here 0 and 25: + + >>> np.dtype({'surname':('S25',0),'age':(np.uint8,25)}) + dtype([('surname', '|S25'), ('age', '|u1')]) + + """) + +############################################################################## +# +# dtype attributes +# +############################################################################## + +add_newdoc('numpy.core.multiarray', 'dtype', ('alignment', + """ + """)) + +add_newdoc('numpy.core.multiarray', 'dtype', ('byteorder', + ''' + dt.byteorder + + String giving byteorder of dtype + + One of: + * '=' - native byteorder + * '<' - little endian + * '>' - big endian + * '|' - endian not relevant + + Examples + -------- + >>> dt = np.dtype('i2') + >>> dt.byteorder + '=' + >>> # endian is not relevant for 8 bit numbers + >>> np.dtype('i1').byteorder + '|' + >>> # or ASCII strings + >>> np.dtype('S2').byteorder + '|' + >>> # Even if specific code is given, and it is native + >>> # '=' is the byteorder + >>> import sys + >>> sys_is_le = sys.byteorder == 'little' + >>> native_code = sys_is_le and '<' or '>' + >>> swapped_code = sys_is_le and '>' or '<' + >>> dt = np.dtype(native_code + 'i2') + >>> dt.byteorder + '=' + >>> # Swapped code shows up as itself + >>> dt = np.dtype(swapped_code + 'i2') + >>> dt.byteorder == swapped_code + True + ''')) + +add_newdoc('numpy.core.multiarray', 'dtype', ('char', + """ + """)) + +add_newdoc('numpy.core.multiarray', 'dtype', ('descr', + """ + """)) + +add_newdoc('numpy.core.multiarray', 'dtype', ('fields', + """ + """)) + +add_newdoc('numpy.core.multiarray', 'dtype', ('flags', + """ + """)) + +add_newdoc('numpy.core.multiarray', 'dtype', ('hasobject', + """ + """)) + +add_newdoc('numpy.core.multiarray', 'dtype', ('isbuiltin', + """ + """)) + +add_newdoc('numpy.core.multiarray', 'dtype', ('isnative', + """ + """)) + 
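The `byteorder` examples above derive the machine's explicit byte-order codes from `sys.byteorder`. The same `'<'` / `'>'` / `'='` characters drive the stdlib `struct` module, which shows the codes in action without NumPy:

```python
import struct
import sys

sys_is_le = sys.byteorder == 'little'
native_code = '<' if sys_is_le else '>'
swapped_code = '>' if sys_is_le else '<'

# Packing an int16 with the explicit native code matches '=' (native),
# while the swapped code yields the byte-reversed representation.
value = 0x0102
assert struct.pack(native_code + 'h', value) == struct.pack('=h', value)
assert struct.pack(swapped_code + 'h', value) == struct.pack('=h', value)[::-1]
```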
+add_newdoc('numpy.core.multiarray', 'dtype', ('itemsize', + """ + """)) + +add_newdoc('numpy.core.multiarray', 'dtype', ('kind', + """ + """)) + +add_newdoc('numpy.core.multiarray', 'dtype', ('name', + """ + """)) + +add_newdoc('numpy.core.multiarray', 'dtype', ('names', + """ + """)) + +add_newdoc('numpy.core.multiarray', 'dtype', ('num', + """ + """)) + +add_newdoc('numpy.core.multiarray', 'dtype', ('shape', + """ + """)) + +add_newdoc('numpy.core.multiarray', 'dtype', ('str', + """ + """)) + +add_newdoc('numpy.core.multiarray', 'dtype', ('subdtype', + """ + """)) + +add_newdoc('numpy.core.multiarray', 'dtype', ('type', + """ + """)) + +############################################################################## +# +# dtype methods +# +############################################################################## + +add_newdoc('numpy.core.multiarray', 'dtype', ('newbyteorder', + ''' + newbyteorder(new_order='S') + + Return a new dtype with a different byte order. + + Changes are also made in all fields and sub-arrays of the data type. + + Parameters + ---------- + new_order : string, optional + Byte order to force; a value from the byte order + specifications below. The default value ('S') results in + swapping the current byte order. + `new_order` codes can be any of: + * {'<', 'L'} - little endian + * {'>', 'B'} - big endian + * {'=', 'N'} - native order + * 'S' - swap dtype from current to opposite endian + * {'|', 'I'} - ignore (no change to byte order) + The code does a case-insensitive check on the first letter of + `new_order` for these alternatives. For example, any of '>' + or 'B' or 'b' or 'brian' are valid to specify big-endian. + + Returns + ------- + new_dtype : dtype + New dtype object with the given change to the byte order. 
+ + Examples + -------- + >>> import sys + >>> sys_is_le = sys.byteorder == 'little' + >>> native_code = sys_is_le and '<' or '>' + >>> swapped_code = sys_is_le and '>' or '<' + >>> native_dt = np.dtype(native_code+'i2') + >>> swapped_dt = np.dtype(swapped_code+'i2') + >>> native_dt.newbyteorder('S') == swapped_dt + True + >>> native_dt.newbyteorder() == swapped_dt + True + >>> native_dt == swapped_dt.newbyteorder('S') + True + >>> native_dt == swapped_dt.newbyteorder('=') + True + >>> native_dt == swapped_dt.newbyteorder('N') + True + >>> native_dt == native_dt.newbyteorder('|') + True + >>> np.dtype('<i2') == native_dt.newbyteorder('<') + True + >>> np.dtype('<i2') == native_dt.newbyteorder('L') + True + >>> np.dtype('>i2') == native_dt.newbyteorder('>') + True + >>> np.dtype('>i2') == native_dt.newbyteorder('B') + True + ''')) + + +############################################################################## +# +# nd_grid instances +# +############################################################################## + +add_newdoc('numpy.lib.index_tricks', 'mgrid', + """ + Construct a multi-dimensional filled "meshgrid". + + Returns a mesh-grid when indexed. The dimension and number of the + output arrays are equal to the number of indexing dimensions. If + the step length is not a complex number, then the stop is not + inclusive. + + However, if the step length is a **complex number** (e.g. 5j), + then the integer part of its magnitude is interpreted as + specifying the number of points to create between the start and + stop values, where the stop value **is inclusive**. + + See also + -------- + ogrid + + Examples + -------- + >>> np.mgrid[0:5,0:5] + array([[[0, 0, 0, 0, 0], + [1, 1, 1, 1, 1], + [2, 2, 2, 2, 2], + [3, 3, 3, 3, 3], + [4, 4, 4, 4, 4]], + <BLANKLINE> + [[0, 1, 2, 3, 4], + [0, 1, 2, 3, 4], + [0, 1, 2, 3, 4], + [0, 1, 2, 3, 4], + [0, 1, 2, 3, 4]]]) + >>> np.mgrid[-1:1:5j] + array([-1. , -0.5, 0. , 0.5, 1. 
]) + """) + +add_newdoc('numpy.lib.index_tricks', 'ogrid', + """ + Construct a multi-dimensional open "meshgrid". + + Returns an 'open' mesh-grid when indexed. The dimension and + number of the output arrays are equal to the number of indexing + dimensions. If the step length is not a complex number, then the + stop is not inclusive. + + The returned mesh-grid is open (or not fleshed out), so that only + one-dimension of each returned argument is greater than 1 + + If the step length is a **complex number** (e.g. 5j), then the + integer part of its magnitude is interpreted as specifying the + number of points to create between the start and stop values, + where the stop value **is inclusive**. + + See also + -------- + mgrid + + Examples + -------- + >>> np.ogrid[0:5,0:5] + [array([[0], + [1], + [2], + [3], + [4]]), array([[0, 1, 2, 3, 4]])] + """) + + +############################################################################## +# +# Documentation for `generic` attributes and methods +# +############################################################################## + +add_newdoc('numpy.core.numerictypes', 'generic', + """ + """) + +# Attributes + +add_newdoc('numpy.core.numerictypes', 'generic', ('T', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('base', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('data', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('dtype', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('flags', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('flat', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('imag', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('itemsize', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('nbytes', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('ndim', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('real', + """ + """)) + 
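The complex-step convention documented for `mgrid` and `ogrid` above (a step of ``5j`` means "5 points, stop value inclusive") can be sketched for a single axis in pure Python; `grid_points` is an illustrative name, not part of NumPy:

```python
def grid_points(start, stop, step):
    """One axis of an nd_grid: a complex step means "that many evenly
    spaced points, stop inclusive"; a real step is an ordinary
    stop-exclusive range."""
    if isinstance(step, complex):
        n = int(abs(step))
        if n == 1:
            return [float(start)]
        d = (stop - start) / (n - 1)
        return [start + i * d for i in range(n)]
    out, v = [], start
    while v < stop:
        out.append(v)
        v += step
    return out
```

This reproduces the docstring example: `grid_points(-1, 1, 5j)` gives `[-1.0, -0.5, 0.0, 0.5, 1.0]`, matching `np.mgrid[-1:1:5j]`.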
+add_newdoc('numpy.core.numerictypes', 'generic', ('shape', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('size', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('strides', + """ + """)) + +# Methods + +add_newdoc('numpy.core.numerictypes', 'generic', ('all', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('any', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('argmax', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('argmin', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('argsort', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('astype', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('byteswap', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('choose', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('clip', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('compress', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('conjugate', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('copy', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('cumprod', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('cumsum', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('diagonal', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('dump', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('dumps', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('fill', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('flatten', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('getfield', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('item', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('itemset', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('max', + """ + 
""")) + +add_newdoc('numpy.core.numerictypes', 'generic', ('mean', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('min', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('newbyteorder', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('nonzero', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('prod', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('ptp', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('put', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('ravel', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('repeat', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('reshape', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('resize', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('round', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('searchsorted', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('setfield', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('setflags', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('sort', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('squeeze', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('std', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('sum', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('swapaxes', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('take', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('tofile', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('tolist', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('tostring', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('trace', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('transpose', + """ 
+ """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('var', + """ + """)) + +add_newdoc('numpy.core.numerictypes', 'generic', ('view', + """ + """)) + + +############################################################################## +# +# Documentation for other scalar classes +# +############################################################################## + +add_newdoc('numpy.core.numerictypes', 'bool_', + """ + """) + +add_newdoc('numpy.core.numerictypes', 'complex64', + """ + """) + +add_newdoc('numpy.core.numerictypes', 'complex128', + """ + """) + +add_newdoc('numpy.core.numerictypes', 'complex256', + """ + """) + +add_newdoc('numpy.core.numerictypes', 'float32', + """ + """) + +add_newdoc('numpy.core.numerictypes', 'float64', + """ + """) + +add_newdoc('numpy.core.numerictypes', 'float96', + """ + """) + +add_newdoc('numpy.core.numerictypes', 'float128', + """ + """) + +add_newdoc('numpy.core.numerictypes', 'int8', + """ + """) + +add_newdoc('numpy.core.numerictypes', 'int16', + """ + """) + +add_newdoc('numpy.core.numerictypes', 'int32', + """ + """) + +add_newdoc('numpy.core.numerictypes', 'int64', + """ + """) + +add_newdoc('numpy.core.numerictypes', 'object_', + """ + """) diff --git a/numpy/core/SConscript b/numpy/core/SConscript index 70d56c902..b611c04c9 100644 --- a/numpy/core/SConscript +++ b/numpy/core/SConscript @@ -211,6 +211,10 @@ if sys.platform=='win32' or os.name=='nt': config.Define('DISTUTILS_USE_SDK', distutils_use_sdk, "define to 1 to disable SMP support ") + if a == "Intel": + config.Define('FORCE_NO_LONG_DOUBLE_FORMATTING', 1, + "define to 1 to force long double format string to the" \ + " same as double (Lg -> g)") #-------------- # Checking Blas #-------------- diff --git a/numpy/core/_internal.py b/numpy/core/_internal.py index 558d2fe93..7d5c3a49e 100644 --- a/numpy/core/_internal.py +++ b/numpy/core/_internal.py @@ -292,3 +292,22 @@ def _newnames(datatype, order): raise ValueError, "unknown field name: %s" % (name,) return 
tuple(list(order) + nameslist) raise ValueError, "unsupported order value: %s" % (order,) + +# Given an array with fields and a sequence of field names, +# construct a new array with just those fields copied over +def _index_fields(ary, fields): + from multiarray import empty, dtype + dt = ary.dtype + new_dtype = [(name, dt[name]) for name in dt.names if name in fields] + if ary.flags.f_contiguous: + order = 'F' + else: + order = 'C' + + newarray = empty(ary.shape, dtype=new_dtype, order=order) + + for name in fields: + newarray[name] = ary[name] + + return newarray + diff --git a/numpy/core/code_generators/generate_numpy_api.py b/numpy/core/code_generators/generate_numpy_api.py index 9cbc317ac..2b5a550de 100644 --- a/numpy/core/code_generators/generate_numpy_api.py +++ b/numpy/core/code_generators/generate_numpy_api.py @@ -65,6 +65,13 @@ static void **PyArray_API=NULL; static int _import_array(void) { +#ifdef WORDS_BIGENDIAN + union { + long i; + char c[sizeof(long)]; + } bint = {1}; +#endif + PyObject *numpy = PyImport_ImportModule("numpy.core.multiarray"); PyObject *c_api = NULL; if (numpy == NULL) return -1; @@ -83,6 +90,17 @@ _import_array(void) (int) NPY_VERSION, (int) PyArray_GetNDArrayCVersion()); return -1; } + +#ifdef WORDS_BIGENDIAN + if (bint.c[0] == 1) { + PyErr_Format(PyExc_RuntimeError, "module compiled against "\ + "python headers configured as big endian, but little endian arch "\ + "detected: this is a python 2.6.* bug (see bug 4728 in python bug "\ + "tracker)"); + return -1; + } +#endif + return 0; } diff --git a/numpy/core/setup.py b/numpy/core/setup.py index 4c590eea1..38767c5f1 100644 --- a/numpy/core/setup.py +++ b/numpy/core/setup.py @@ -67,8 +67,8 @@ def check_math_capabilities(config, moredefs, mathlibs): # Mandatory functions: if not found, fail the build mandatory_funcs = ["sin", "cos", "tan", "sinh", "cosh", "tanh", "fabs", - "floor", "ceil", "sqrt", "log10", "log", "exp", "asin", - "acos", "atan", "fmod", 'modf', 'frexp', 'ldexp'] + 
"floor", "ceil", "sqrt", "log10", "log", "exp", "asin", + "acos", "atan", "fmod", 'modf', 'frexp', 'ldexp'] if not check_funcs_once(mandatory_funcs): raise SystemError("One of the required function to build numpy is not" @@ -81,6 +81,14 @@ def check_math_capabilities(config, moredefs, mathlibs): optional_stdfuncs = ["expm1", "log1p", "acosh", "asinh", "atanh", "rint", "trunc", "exp2", "log2"] + # XXX: hack to circumvent cpp pollution from python: python put its + # config.h in the public namespace, so we have a clash for the common + # functions we test. We remove every function tested by python's autoconf, + # hoping their own test are correct + if sys.version_info[0] == 2 and sys.version_info[1] >= 6: + for f in ["expm1", "log1p", "acosh", "atanh", "asinh"]: + optional_stdfuncs.remove(f) + check_funcs(optional_stdfuncs) # C99 functions: float and long double versions @@ -179,6 +187,14 @@ def configuration(parent_package='',top_path=None): headers=['stdlib.h']): moredefs.append(('PyOS_ascii_strtod', 'strtod')) + if sys.platform == "win32": + from numpy.distutils.misc_util import get_build_architecture + # On win32, force long double format string to be 'g', not + # 'Lg', since the MS runtime does not support long double whose + # size is > sizeof(double) + if get_build_architecture()=="Intel": + moredefs.append('FORCE_NO_LONG_DOUBLE_FORMATTING') + target_f = open(target,'a') for d in moredefs: if isinstance(d,str): @@ -322,6 +338,7 @@ def configuration(parent_package='',top_path=None): deps = [join('src','arrayobject.c'), join('src','arraymethods.c'), join('src','scalartypes.inc.src'), + join('src','numpyos.c'), join('src','arraytypes.inc.src'), join('src','_signbit.c'), join('src','ucsnarrow.c'), diff --git a/numpy/core/src/arraymethods.c b/numpy/core/src/arraymethods.c index 7cf409173..ee4a7ea39 100644 --- a/numpy/core/src/arraymethods.c +++ b/numpy/core/src/arraymethods.c @@ -4,10 +4,10 @@ static PyObject * array_take(PyArrayObject *self, PyObject *args, 
PyObject *kwds) { - int dimension=MAX_DIMS; + int dimension = MAX_DIMS; PyObject *indices; - PyArrayObject *out=NULL; - NPY_CLIPMODE mode=NPY_RAISE; + PyArrayObject *out = NULL; + NPY_CLIPMODE mode = NPY_RAISE; static char *kwlist[] = {"indices", "axis", "out", "mode", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O&O&O&", kwlist, @@ -26,9 +26,12 @@ static PyObject * array_fill(PyArrayObject *self, PyObject *args) { PyObject *obj; - if (!PyArg_ParseTuple(args, "O", &obj)) + if (!PyArg_ParseTuple(args, "O", &obj)) { return NULL; - if (PyArray_FillWithScalar(self, obj) < 0) return NULL; + } + if (PyArray_FillWithScalar(self, obj) < 0) { + return NULL; + } Py_INCREF(Py_None); return Py_None; } @@ -37,7 +40,7 @@ static PyObject * array_put(PyArrayObject *self, PyObject *args, PyObject *kwds) { PyObject *indices, *values; - NPY_CLIPMODE mode=NPY_RAISE; + NPY_CLIPMODE mode = NPY_RAISE; static char *kwlist[] = {"indices", "values", "mode", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO|O&", kwlist, @@ -53,7 +56,7 @@ array_reshape(PyArrayObject *self, PyObject *args, PyObject *kwds) { PyArray_Dims newshape; PyObject *ret; - PyArray_ORDER order=PyArray_CORDER; + PyArray_ORDER order = PyArray_CORDER; int n; if (kwds != NULL) { @@ -64,16 +67,20 @@ array_reshape(PyArrayObject *self, PyObject *args, PyObject *kwds) "invalid keyword argument"); return NULL; } - if ((PyArray_OrderConverter(ref, &order) == PY_FAIL)) + if ((PyArray_OrderConverter(ref, &order) == PY_FAIL)) { return NULL; + } } n = PyTuple_Size(args); if (n <= 1) { - if (PyTuple_GET_ITEM(args, 0) == Py_None) + if (PyTuple_GET_ITEM(args, 0) == Py_None) { return PyArray_View(self, NULL, NULL); + } if (!PyArg_ParseTuple(args, "O&", PyArray_IntpConverter, - &newshape)) return NULL; + &newshape)) { + return NULL; + } } else { if (!PyArray_IntpConverter(args, &newshape)) { @@ -96,16 +103,18 @@ array_reshape(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_squeeze(PyArrayObject 
*self, PyObject *args) { - if (!PyArg_ParseTuple(args, "")) return NULL; + if (!PyArg_ParseTuple(args, "")) { + return NULL; + } return PyArray_Squeeze(self); } static PyObject * array_view(PyArrayObject *self, PyObject *args, PyObject *kwds) { - PyObject *out_dtype=NULL; - PyObject *out_type=NULL; - PyArray_Descr *dtype=NULL; + PyObject *out_dtype = NULL; + PyObject *out_type = NULL; + PyArray_Descr *dtype = NULL; static char *kwlist[] = {"dtype", "type", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|OO", kwlist, @@ -151,8 +160,8 @@ array_view(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_argmax(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=MAX_DIMS; - PyArrayObject *out=NULL; + int axis = MAX_DIMS; + PyArrayObject *out = NULL; static char *kwlist[] = {"axis", "out", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&", kwlist, @@ -168,8 +177,8 @@ array_argmax(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_argmin(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=MAX_DIMS; - PyArrayObject *out=NULL; + int axis = MAX_DIMS; + PyArrayObject *out = NULL; static char *kwlist[] = {"axis", "out", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&", kwlist, @@ -185,8 +194,8 @@ array_argmin(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_max(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=MAX_DIMS; - PyArrayObject *out=NULL; + int axis = MAX_DIMS; + PyArrayObject *out = NULL; static char *kwlist[] = {"axis", "out", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&", kwlist, @@ -202,8 +211,8 @@ array_max(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_ptp(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=MAX_DIMS; - PyArrayObject *out=NULL; + int axis = MAX_DIMS; + PyArrayObject *out = NULL; static char *kwlist[] = {"axis", "out", NULL}; if 
(!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&", kwlist, @@ -220,8 +229,8 @@ array_ptp(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_min(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=MAX_DIMS; - PyArrayObject *out=NULL; + int axis = MAX_DIMS; + PyArrayObject *out = NULL; static char *kwlist[] = {"axis", "out", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&", kwlist, @@ -239,8 +248,9 @@ array_swapaxes(PyArrayObject *self, PyObject *args) { int axis1, axis2; - if (!PyArg_ParseTuple(args, "ii", &axis1, &axis2)) return NULL; - + if (!PyArg_ParseTuple(args, "ii", &axis1, &axis2)) { + return NULL; + } return PyArray_SwapAxes(self, axis1, axis2); } @@ -252,7 +262,7 @@ array_swapaxes(PyArrayObject *self, PyObject *args) static PyObject * PyArray_GetField(PyArrayObject *self, PyArray_Descr *typed, int offset) { - PyObject *ret=NULL; + PyObject *ret = NULL; if (offset < 0 || (offset + typed->elsize) > self->descr->elsize) { PyErr_Format(PyExc_ValueError, @@ -268,7 +278,9 @@ PyArray_GetField(PyArrayObject *self, PyArray_Descr *typed, int offset) self->strides, self->data + offset, self->flags, (PyObject *)self); - if (ret == NULL) return NULL; + if (ret == NULL) { + return NULL; + } Py_INCREF(self); ((PyArrayObject *)ret)->base = (PyObject *)self; @@ -280,7 +292,7 @@ static PyObject * array_getfield(PyArrayObject *self, PyObject *args, PyObject *kwds) { - PyArray_Descr *dtype=NULL; + PyArray_Descr *dtype = NULL; int offset = 0; static char *kwlist[] = {"dtype", "offset", 0}; @@ -302,7 +314,7 @@ static int PyArray_SetField(PyArrayObject *self, PyArray_Descr *dtype, int offset, PyObject *val) { - PyObject *ret=NULL; + PyObject *ret = NULL; int retval = 0; if (offset < 0 || (offset + dtype->elsize) > self->descr->elsize) { @@ -317,7 +329,9 @@ PyArray_SetField(PyArrayObject *self, PyArray_Descr *dtype, dtype, self->nd, self->dimensions, self->strides, self->data + offset, self->flags, (PyObject *)self); - if (ret == 
NULL) return -1; + if (ret == NULL) { + return -1; + } Py_INCREF(self); ((PyArrayObject *)ret)->base = (PyObject *)self; @@ -330,7 +344,7 @@ PyArray_SetField(PyArrayObject *self, PyArray_Descr *dtype, static PyObject * array_setfield(PyArrayObject *self, PyObject *args, PyObject *kwds) { - PyArray_Descr *dtype=NULL; + PyArray_Descr *dtype = NULL; int offset = 0; PyObject *value; static char *kwlist[] = {"value", "dtype", "offset", 0}; @@ -342,8 +356,9 @@ array_setfield(PyArrayObject *self, PyObject *args, PyObject *kwds) return NULL; } - if (PyArray_SetField(self, dtype, offset, value) < 0) + if (PyArray_SetField(self, dtype, offset, value) < 0) { return NULL; + } Py_INCREF(Py_None); return Py_None; } @@ -391,8 +406,9 @@ PyArray_Byteswap(PyArrayObject *self, Bool inplace) } else { PyObject *new; - if ((ret = (PyArrayObject *)PyArray_NewCopy(self,-1)) == NULL) + if ((ret = (PyArrayObject *)PyArray_NewCopy(self,-1)) == NULL) { return NULL; + } new = PyArray_Byteswap(ret, TRUE); Py_DECREF(new); return (PyObject *)ret; @@ -403,18 +419,20 @@ PyArray_Byteswap(PyArrayObject *self, Bool inplace) static PyObject * array_byteswap(PyArrayObject *self, PyObject *args) { - Bool inplace=FALSE; + Bool inplace = FALSE; - if (!PyArg_ParseTuple(args, "|O&", PyArray_BoolConverter, &inplace)) + if (!PyArg_ParseTuple(args, "|O&", PyArray_BoolConverter, &inplace)) { return NULL; - + } return PyArray_Byteswap(self, inplace); } static PyObject * array_tolist(PyArrayObject *self, PyObject *args) { - if (!PyArg_ParseTuple(args, "")) return NULL; + if (!PyArg_ParseTuple(args, "")) { + return NULL; + } return PyArray_ToList(self); } @@ -422,12 +440,14 @@ array_tolist(PyArrayObject *self, PyObject *args) static PyObject * array_tostring(PyArrayObject *self, PyObject *args, PyObject *kwds) { - NPY_ORDER order=NPY_CORDER; + NPY_ORDER order = NPY_CORDER; static char *kwlist[] = {"order", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&", kwlist, PyArray_OrderConverter, - &order)) return 
NULL; + &order)) { + return NULL; + } return PyArray_ToString(self, order); } @@ -441,17 +461,21 @@ array_tofile(PyArrayObject *self, PyObject *args, PyObject *kwds) int ret; PyObject *file; FILE *fd; - char *sep=""; - char *format=""; + char *sep = ""; + char *format = ""; static char *kwlist[] = {"file", "sep", "format", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|ss", kwlist, - &file, &sep, &format)) return NULL; + &file, &sep, &format)) { + return NULL; + } if (PyString_Check(file) || PyUnicode_Check(file)) { file = PyObject_CallFunction((PyObject *)&PyFile_Type, "Os", file, "wb"); - if (file==NULL) return NULL; + if (file == NULL) { + return NULL; + } } else { Py_INCREF(file); @@ -465,7 +489,9 @@ array_tofile(PyArrayObject *self, PyObject *args, PyObject *kwds) } ret = PyArray_ToFile(self, fd, sep, format); Py_DECREF(file); - if (ret < 0) return NULL; + if (ret < 0) { + return NULL; + } Py_INCREF(Py_None); return Py_None; } @@ -476,7 +502,7 @@ array_toscalar(PyArrayObject *self, PyObject *args) { int n, nd; n = PyTuple_GET_SIZE(args); - if (n==1) { + if (n == 1) { PyObject *obj; obj = PyTuple_GET_ITEM(args, 0); if (PyTuple_Check(obj)) { @@ -485,7 +511,7 @@ array_toscalar(PyArrayObject *self, PyObject *args) { } } - if (n==0) { + if (n == 0) { if (self->nd == 0 || PyArray_SIZE(self) == 1) return self->descr->f->getitem(self->data, self); else { @@ -495,13 +521,13 @@ array_toscalar(PyArrayObject *self, PyObject *args) { return NULL; } } - else if (n != self->nd && (n > 1 || self->nd==0)) { + else if (n != self->nd && (n > 1 || self->nd == 0)) { PyErr_SetString(PyExc_ValueError, "incorrect number of indices for " \ "array"); return NULL; } - else if (n==1) { /* allows for flat getting as well as 1-d case */ + else if (n == 1) { /* allows for flat getting as well as 1-d case */ intp value, loc, index, factor; intp factors[MAX_DIMS]; value = PyArray_PyIntAsIntp(PyTuple_GET_ITEM(args, 0)); @@ -528,7 +554,7 @@ array_toscalar(PyArrayObject *self, PyObject 
*args) { factor *= self->dimensions[nd]; } loc = 0; - for (nd=0; nd < self->nd; nd++) { + for (nd = 0; nd < self->nd; nd++) { index = value / factors[nd]; value = value % factors[nd]; loc += self->strides[nd]*index; @@ -541,11 +567,14 @@ array_toscalar(PyArrayObject *self, PyObject *args) { else { intp loc, index[MAX_DIMS]; nd = PyArray_IntpFromSequence(args, index, MAX_DIMS); - if (nd < n) return NULL; + if (nd < n) { + return NULL; + } loc = 0; while (nd--) { - if (index[nd] < 0) + if (index[nd] < 0) { index[nd] += self->dimensions[nd]; + } if (index[nd] < 0 || index[nd] >= self->dimensions[nd]) { PyErr_SetString(PyExc_ValueError, @@ -563,7 +592,7 @@ array_setscalar(PyArrayObject *self, PyObject *args) { int n, nd; int ret = -1; PyObject *obj; - n = PyTuple_GET_SIZE(args)-1; + n = PyTuple_GET_SIZE(args) - 1; if (n < 0) { PyErr_SetString(PyExc_ValueError, @@ -571,7 +600,7 @@ array_setscalar(PyArrayObject *self, PyObject *args) { return NULL; } obj = PyTuple_GET_ITEM(args, n); - if (n==0) { + if (n == 0) { if (self->nd == 0 || PyArray_SIZE(self) == 1) { ret = self->descr->f->setitem(obj, self->data, self); } @@ -582,13 +611,13 @@ array_setscalar(PyArrayObject *self, PyObject *args) { return NULL; } } - else if (n != self->nd && (n > 1 || self->nd==0)) { + else if (n != self->nd && (n > 1 || self->nd == 0)) { PyErr_SetString(PyExc_ValueError, "incorrect number of indices for " \ "array"); return NULL; } - else if (n==1) { /* allows for flat setting as well as 1-d case */ + else if (n == 1) { /* allows for flat setting as well as 1-d case */ intp value, loc, index, factor; intp factors[MAX_DIMS]; PyObject *indobj; @@ -602,7 +631,7 @@ array_setscalar(PyArrayObject *self, PyObject *args) { nn = PyTuple_GET_SIZE(indobj); newargs = PyTuple_New(nn+1); Py_INCREF(obj); - for (i=0; i<nn; i++) { + for (i = 0; i < nn; i++) { tmp = PyTuple_GET_ITEM(indobj, i); Py_INCREF(tmp); PyTuple_SET_ITEM(newargs, i, tmp); @@ -636,7 +665,7 @@ array_setscalar(PyArrayObject *self, PyObject 
*args) { factor *= self->dimensions[nd]; } loc = 0; - for (nd=0; nd < self->nd; nd++) { + for (nd = 0; nd < self->nd; nd++) { index = value / factors[nd]; value = value % factors[nd]; loc += self->strides[nd]*index; @@ -650,11 +679,14 @@ array_setscalar(PyArrayObject *self, PyObject *args) { tupargs = PyTuple_GetSlice(args, 0, n); nd = PyArray_IntpFromSequence(tupargs, index, MAX_DIMS); Py_DECREF(tupargs); - if (nd < n) return NULL; + if (nd < n) { + return NULL; + } loc = 0; while (nd--) { - if (index[nd] < 0) + if (index[nd] < 0) { index[nd] += self->dimensions[nd]; + } if (index[nd] < 0 || index[nd] >= self->dimensions[nd]) { PyErr_SetString(PyExc_ValueError, @@ -667,7 +699,9 @@ array_setscalar(PyArrayObject *self, PyObject *args) { } finish: - if (ret < 0) return NULL; + if (ret < 0) { + return NULL; + } Py_INCREF(Py_None); return Py_None; } @@ -676,7 +710,7 @@ array_setscalar(PyArrayObject *self, PyObject *args) { static PyObject * array_cast(PyArrayObject *self, PyObject *args) { - PyArray_Descr *descr=NULL; + PyArray_Descr *descr = NULL; PyObject *obj; if (!PyArg_ParseTuple(args, "O&", PyArray_DescrConverter, @@ -729,7 +763,9 @@ array_wraparray(PyArrayObject *self, PyObject *args) PyArray_DIMS(arr), PyArray_STRIDES(arr), PyArray_DATA(arr), PyArray_FLAGS(arr), (PyObject *)self); - if (ret == NULL) return NULL; + if (ret == NULL) { + return NULL; + } Py_INCREF(arr); PyArray_BASE(ret) = arr; return ret; @@ -739,7 +775,7 @@ array_wraparray(PyArrayObject *self, PyObject *args) static PyObject * array_getarray(PyArrayObject *self, PyObject *args) { - PyArray_Descr *newtype=NULL; + PyArray_Descr *newtype = NULL; PyObject *ret; if (!PyArg_ParseTuple(args, "|O&", PyArray_DescrConverter, @@ -765,7 +801,9 @@ array_getarray(PyArrayObject *self, PyObject *args) PyArray_STRIDES(self), PyArray_DATA(self), PyArray_FLAGS(self), NULL); - if (new == NULL) return NULL; + if (new == NULL) { + return NULL; + } Py_INCREF(self); PyArray_BASE(new) = (PyObject *)self; self = 
(PyArrayObject *)new; @@ -774,7 +812,7 @@ array_getarray(PyArrayObject *self, PyObject *args) Py_INCREF(self); } - if ((newtype == NULL) || \ + if ((newtype == NULL) || PyArray_EquivTypes(self->descr, newtype)) { return (PyObject *)self; } @@ -791,7 +829,9 @@ array_copy(PyArrayObject *self, PyObject *args) { PyArray_ORDER fortran=PyArray_CORDER; if (!PyArg_ParseTuple(args, "|O&", PyArray_OrderConverter, - &fortran)) return NULL; + &fortran)) { + return NULL; + } return PyArray_NewCopy(self, fortran); } @@ -804,7 +844,7 @@ array_resize(PyArrayObject *self, PyObject *args, PyObject *kwds) PyObject *ret; int n; int refcheck = 1; - PyArray_ORDER fortran=PyArray_ANYORDER; + PyArray_ORDER fortran = PyArray_ANYORDER; if (kwds != NULL) { PyObject *ref; @@ -817,8 +857,9 @@ array_resize(PyArrayObject *self, PyObject *args, PyObject *kwds) } ref = PyDict_GetItemString(kwds, "order"); if (ref != NULL || - (PyArray_OrderConverter(ref, &fortran) == PY_FAIL)) + (PyArray_OrderConverter(ref, &fortran) == PY_FAIL)) { return NULL; + } } n = PyTuple_Size(args); if (n <= 1) { @@ -827,7 +868,9 @@ array_resize(PyArrayObject *self, PyObject *args, PyObject *kwds) return Py_None; } if (!PyArg_ParseTuple(args, "O&", PyArray_IntpConverter, - &newshape)) return NULL; + &newshape)) { + return NULL; + } } else { if (!PyArray_IntpConverter(args, &newshape)) { @@ -840,7 +883,9 @@ array_resize(PyArrayObject *self, PyObject *args, PyObject *kwds) } ret = PyArray_Resize(self, &newshape, refcheck, fortran); PyDimMem_FREE(newshape.ptr); - if (ret == NULL) return NULL; + if (ret == NULL) { + return NULL; + } Py_DECREF(ret); Py_INCREF(Py_None); return Py_None; @@ -849,13 +894,14 @@ array_resize(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_repeat(PyArrayObject *self, PyObject *args, PyObject *kwds) { PyObject *repeats; - int axis=MAX_DIMS; + int axis = MAX_DIMS; static char *kwlist[] = {"repeats", "axis", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O&", kwlist, 
&repeats, PyArray_AxisConverter, - &axis)) return NULL; - + &axis)) { + return NULL; + } return _ARET(PyArray_Repeat(self, repeats, axis)); } @@ -864,26 +910,27 @@ array_choose(PyArrayObject *self, PyObject *args, PyObject *kwds) { PyObject *choices; int n; - PyArrayObject *out=NULL; - NPY_CLIPMODE clipmode=NPY_RAISE; + PyArrayObject *out = NULL; + NPY_CLIPMODE clipmode = NPY_RAISE; n = PyTuple_Size(args); if (n <= 1) { - if (!PyArg_ParseTuple(args, "O", &choices)) + if (!PyArg_ParseTuple(args, "O", &choices)) { return NULL; + } } else { choices = args; } if (kwds && PyDict_Check(kwds)) { - if (PyArray_OutputConverter(PyDict_GetItemString(kwds, - "out"), - &out) == PY_FAIL) + if (PyArray_OutputConverter(PyDict_GetItemString(kwds, "out"), + &out) == PY_FAIL) { return NULL; - if (PyArray_ClipmodeConverter(PyDict_GetItemString(kwds, - "mode"), - &clipmode) == PY_FAIL) + } + if (PyArray_ClipmodeConverter(PyDict_GetItemString(kwds, "mode"), + &clipmode) == PY_FAIL) { return NULL; + } } return _ARET(PyArray_Choose(self, choices, out, clipmode)); @@ -894,18 +941,20 @@ array_sort(PyArrayObject *self, PyObject *args, PyObject *kwds) { int axis=-1; int val; - PyArray_SORTKIND which=PyArray_QUICKSORT; - PyObject *order=NULL; - PyArray_Descr *saved=NULL; + PyArray_SORTKIND which = PyArray_QUICKSORT; + PyObject *order = NULL; + PyArray_Descr *saved = NULL; PyArray_Descr *newd; static char *kwlist[] = {"axis", "kind", "order", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|iO&O", kwlist, &axis, PyArray_SortkindConverter, &which, - &order)) + &order)) { return NULL; - - if (order == Py_None) order = NULL; + } + if (order == Py_None) { + order = NULL; + } if (order != NULL) { PyObject *new_name; PyObject *_numpy_internal; @@ -916,11 +965,15 @@ array_sort(PyArrayObject *self, PyObject *args, PyObject *kwds) return NULL; } _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal == NULL) return NULL; + if (_numpy_internal == NULL) { + return 
NULL; + } new_name = PyObject_CallMethod(_numpy_internal, "_newnames", "OO", saved, order); Py_DECREF(_numpy_internal); - if (new_name == NULL) return NULL; + if (new_name == NULL) { + return NULL; + } newd = PyArray_DescrNew(saved); newd->names = new_name; self->descr = newd; @@ -931,7 +984,9 @@ array_sort(PyArrayObject *self, PyObject *args, PyObject *kwds) Py_XDECREF(self->descr); self->descr = saved; } - if (val < 0) return NULL; + if (val < 0) { + return NULL; + } Py_INCREF(Py_None); return Py_None; } @@ -939,19 +994,21 @@ array_sort(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_argsort(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=-1; - PyArray_SORTKIND which=PyArray_QUICKSORT; - PyObject *order=NULL, *res; + int axis = -1; + PyArray_SORTKIND which = PyArray_QUICKSORT; + PyObject *order = NULL, *res; PyArray_Descr *newd, *saved=NULL; static char *kwlist[] = {"axis", "kind", "order", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&O", kwlist, PyArray_AxisConverter, &axis, PyArray_SortkindConverter, &which, - &order)) + &order)) { return NULL; - - if (order == Py_None) order = NULL; + } + if (order == Py_None) { + order = NULL; + } if (order != NULL) { PyObject *new_name; PyObject *_numpy_internal; @@ -962,11 +1019,15 @@ array_argsort(PyArrayObject *self, PyObject *args, PyObject *kwds) return NULL; } _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal == NULL) return NULL; + if (_numpy_internal == NULL) { + return NULL; + } new_name = PyObject_CallMethod(_numpy_internal, "_newnames", "OO", saved, order); Py_DECREF(_numpy_internal); - if (new_name == NULL) return NULL; + if (new_name == NULL) { + return NULL; + } newd = PyArray_DescrNew(saved); newd->names = new_name; self->descr = newd; @@ -989,9 +1050,9 @@ array_searchsorted(PyArrayObject *self, PyObject *args, PyObject *kwds) if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O&:searchsorted", kwlist, &keys, - 
PyArray_SearchsideConverter, &side)) + PyArray_SearchsideConverter, &side)) { return NULL; - + } return _ARET(PyArray_SearchSorted(self, keys, side)); } @@ -999,16 +1060,22 @@ static void _deepcopy_call(char *iptr, char *optr, PyArray_Descr *dtype, PyObject *deepcopy, PyObject *visit) { - if (!PyDataType_REFCHK(dtype)) return; + if (!PyDataType_REFCHK(dtype)) { + return; + } else if (PyDescr_HASFIELDS(dtype)) { - PyObject *key, *value, *title=NULL; + PyObject *key, *value, *title = NULL; PyArray_Descr *new; int offset; - Py_ssize_t pos=0; + Py_ssize_t pos = 0; while (PyDict_Next(dtype->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) continue; + if NPY_TITLE_KEY(key, value) { + continue; + } if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, - &title)) return; + &title)) { + return; + } _deepcopy_call(iptr + offset, optr + offset, new, deepcopy, visit); } @@ -1020,8 +1087,7 @@ _deepcopy_call(char *iptr, char *optr, PyArray_Descr *dtype, otemp = (PyObject **)optr; Py_XINCREF(*itemp); /* call deepcopy on this argument */ - res = PyObject_CallFunctionObjArgs(deepcopy, - *itemp, visit, NULL); + res = PyObject_CallFunctionObjArgs(deepcopy, *itemp, visit, NULL); Py_XDECREF(*itemp); Py_XDECREF(*otemp); *otemp = res; @@ -1038,20 +1104,28 @@ array_deepcopy(PyArrayObject *self, PyObject *args) PyArrayIterObject *it; PyObject *copy, *ret, *deepcopy; - if (!PyArg_ParseTuple(args, "O", &visit)) return NULL; + if (!PyArg_ParseTuple(args, "O", &visit)) { + return NULL; + } ret = PyArray_Copy(self); if (PyDataType_REFCHK(self->descr)) { copy = PyImport_ImportModule("copy"); - if (copy == NULL) return NULL; + if (copy == NULL) { + return NULL; + } deepcopy = PyObject_GetAttrString(copy, "deepcopy"); Py_DECREF(copy); - if (deepcopy == NULL) return NULL; + if (deepcopy == NULL) { + return NULL; + } it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)self); - if (it == NULL) {Py_DECREF(deepcopy); return NULL;} + if (it == NULL) { + Py_DECREF(deepcopy); + return NULL; + 
} optr = PyArray_DATA(ret); while(it->index < it->size) { - _deepcopy_call(it->dataptr, optr, self->descr, - deepcopy, visit); + _deepcopy_call(it->dataptr, optr, self->descr, deepcopy, visit); optr += self->descr->elsize; PyArray_ITER_NEXT(it); } @@ -1066,15 +1140,20 @@ static PyObject * _getlist_pkl(PyArrayObject *self) { PyObject *theobject; - PyArrayIterObject *iter=NULL; + PyArrayIterObject *iter = NULL; PyObject *list; PyArray_GetItemFunc *getitem; getitem = self->descr->f->getitem; iter = (PyArrayIterObject *)PyArray_IterNew((PyObject *)self); - if (iter == NULL) return NULL; + if (iter == NULL) { + return NULL; + } list = PyList_New(iter->size); - if (list == NULL) {Py_DECREF(iter); return NULL;} + if (list == NULL) { + Py_DECREF(iter); + return NULL; + } while (iter->index < iter->size) { theobject = getitem(iter->dataptr, self); PyList_SET_ITEM(list, (int) iter->index, theobject); @@ -1088,12 +1167,14 @@ static int _setlist_pkl(PyArrayObject *self, PyObject *list) { PyObject *theobject; - PyArrayIterObject *iter=NULL; + PyArrayIterObject *iter = NULL; PyArray_SetItemFunc *setitem; setitem = self->descr->f->setitem; iter = (PyArrayIterObject *)PyArray_IterNew((PyObject *)self); - if (iter == NULL) return -1; + if (iter == NULL) { + return -1; + } while(iter->index < iter->size) { theobject = PyList_GET_ITEM(list, (int) iter->index); setitem(theobject, iter->dataptr, self); @@ -1111,8 +1192,8 @@ array_reduce(PyArrayObject *self, PyObject *NPY_UNUSED(args)) change the format. Be sure to handle the old versions in array_setstate. 
*/ const int version = 1; - PyObject *ret=NULL, *state=NULL, *obj=NULL, *mod=NULL; - PyObject *mybool, *thestr=NULL; + PyObject *ret = NULL, *state = NULL, *obj = NULL, *mod = NULL; + PyObject *mybool, *thestr = NULL; PyArray_Descr *descr; /* Return a tuple of (callable object, arguments, object's state) */ @@ -1120,9 +1201,14 @@ array_reduce(PyArrayObject *self, PyObject *NPY_UNUSED(args)) it can use the string object as memory without a copy */ ret = PyTuple_New(3); - if (ret == NULL) return NULL; + if (ret == NULL) { + return NULL; + } mod = PyImport_ImportModule("numpy.core.multiarray"); - if (mod == NULL) {Py_DECREF(ret); return NULL;} + if (mod == NULL) { + Py_DECREF(ret); + return NULL; + } obj = PyObject_GetAttrString(mod, "_reconstruct"); Py_DECREF(mod); PyTuple_SET_ITEM(ret, 0, obj); @@ -1150,7 +1236,8 @@ array_reduce(PyArrayObject *self, PyObject *NPY_UNUSED(args)) state = PyTuple_New(5); if (state == NULL) { - Py_DECREF(ret); return NULL; + Py_DECREF(ret); + return NULL; } PyTuple_SET_ITEM(state, 0, PyInt_FromLong(version)); PyTuple_SET_ITEM(state, 1, PyObject_GetAttrString((PyObject *)self, @@ -1227,7 +1314,9 @@ array_setstate(PyArrayObject *self, PyObject *args) self->descr = typecode; Py_INCREF(typecode); nd = PyArray_IntpFromSequence(shape, dimensions, MAX_DIMS); - if (nd < 0) return NULL; + if (nd < 0) { + return NULL; + } size = PyArray_MultiplyList(dimensions, nd); if (self->descr->elsize == 0) { PyErr_SetString(PyExc_ValueError, "Invalid data-type size."); @@ -1264,8 +1353,9 @@ array_setstate(PyArrayObject *self, PyObject *args) } if ((self->flags & OWNDATA)) { - if (self->data != NULL) + if (self->data != NULL) { PyDataMem_FREE(self->data); + } self->flags &= ~OWNDATA; } Py_XDECREF(self->base); @@ -1312,10 +1402,12 @@ array_setstate(PyArrayObject *self, PyObject *args) } else { self->descr = PyArray_DescrNew(typecode); - if (self->descr->byteorder == PyArray_BIG) + if (self->descr->byteorder == PyArray_BIG) { self->descr->byteorder = 
PyArray_LITTLE; - else if (self->descr->byteorder == PyArray_LITTLE) + } + else if (self->descr->byteorder == PyArray_LITTLE) { self->descr->byteorder = PyArray_BIG; + } } Py_DECREF(typecode); } @@ -1335,15 +1427,19 @@ array_setstate(PyArrayObject *self, PyObject *args) if (self->data == NULL) { self->nd = 0; self->data = PyDataMem_NEW(self->descr->elsize); - if (self->dimensions) PyDimMem_FREE(self->dimensions); + if (self->dimensions) { + PyDimMem_FREE(self->dimensions); + } return PyErr_NoMemory(); } - if (PyDataType_FLAGCHK(self->descr, NPY_NEEDS_INIT)) + if (PyDataType_FLAGCHK(self->descr, NPY_NEEDS_INIT)) { memset(self->data, 0, PyArray_NBYTES(self)); + } self->flags |= OWNDATA; self->base = NULL; - if (_setlist_pkl(self, rawdata) < 0) + if (_setlist_pkl(self, rawdata) < 0) { return NULL; + } } PyArray_UpdateFlags(self, UPDATE_ALL); @@ -1356,24 +1452,32 @@ array_setstate(PyArrayObject *self, PyObject *args) static int PyArray_Dump(PyObject *self, PyObject *file, int protocol) { - PyObject *cpick=NULL; + PyObject *cpick = NULL; PyObject *ret; - if (protocol < 0) protocol = 2; + if (protocol < 0) { + protocol = 2; + } cpick = PyImport_ImportModule("cPickle"); - if (cpick==NULL) return -1; - + if (cpick == NULL) { + return -1; + } if PyString_Check(file) { - file = PyFile_FromString(PyString_AS_STRING(file), "wb"); - if (file==NULL) return -1; + file = PyFile_FromString(PyString_AS_STRING(file), "wb"); + if (file == NULL) { + return -1; } - else Py_INCREF(file); - ret = PyObject_CallMethod(cpick, "dump", "OOi", self, - file, protocol); + } + else { + Py_INCREF(file); + } + ret = PyObject_CallMethod(cpick, "dump", "OOi", self, file, protocol); Py_XDECREF(ret); Py_DECREF(file); Py_DECREF(cpick); - if (PyErr_Occurred()) return -1; + if (PyErr_Occurred()) { + return -1; + } return 0; } @@ -1381,12 +1485,15 @@ PyArray_Dump(PyObject *self, PyObject *file, int protocol) static PyObject * PyArray_Dumps(PyObject *self, int protocol) { - PyObject *cpick=NULL; + PyObject 
*cpick = NULL; PyObject *ret; - if (protocol < 0) protocol = 2; - + if (protocol < 0) { + protocol = 2; + } cpick = PyImport_ImportModule("cPickle"); - if (cpick==NULL) return NULL; + if (cpick == NULL) { + return NULL; + } ret = PyObject_CallMethod(cpick, "dumps", "Oi", self, protocol); Py_DECREF(cpick); return ret; @@ -1396,13 +1503,16 @@ PyArray_Dumps(PyObject *self, int protocol) static PyObject * array_dump(PyArrayObject *self, PyObject *args) { - PyObject *file=NULL; + PyObject *file = NULL; int ret; - if (!PyArg_ParseTuple(args, "O", &file)) + if (!PyArg_ParseTuple(args, "O", &file)) { return NULL; + } ret = PyArray_Dump((PyObject *)self, file, 2); - if (ret < 0) return NULL; + if (ret < 0) { + return NULL; + } Py_INCREF(Py_None); return Py_None; } @@ -1411,8 +1521,9 @@ array_dump(PyArrayObject *self, PyObject *args) static PyObject * array_dumps(PyArrayObject *self, PyObject *args) { - if (!PyArg_ParseTuple(args, "")) + if (!PyArg_ParseTuple(args, "")) { return NULL; + } return PyArray_Dumps((PyObject *)self, 2); } @@ -1420,19 +1531,26 @@ array_dumps(PyArrayObject *self, PyObject *args) static PyObject * array_transpose(PyArrayObject *self, PyObject *args) { - PyObject *shape=Py_None; + PyObject *shape = Py_None; int n; PyArray_Dims permute; PyObject *ret; n = PyTuple_Size(args); - if (n > 1) shape = args; - else if (n == 1) shape = PyTuple_GET_ITEM(args, 0); + if (n > 1) { + shape = args; + } + else if (n == 1) { + shape = PyTuple_GET_ITEM(args, 0); + } - if (shape == Py_None) + if (shape == Py_None) { ret = PyArray_Transpose(self, NULL); + } else { - if (!PyArray_IntpConverter(shape, &permute)) return NULL; + if (!PyArray_IntpConverter(shape, &permute)) { + return NULL; + } ret = PyArray_Transpose(self, &permute); PyDimMem_FREE(permute.ptr); } @@ -1447,9 +1565,9 @@ array_transpose(PyArrayObject *self, PyObject *args) static int _get_type_num_double(PyArray_Descr *dtype1, PyArray_Descr *dtype2) { - if (dtype2 != NULL) + if (dtype2 != NULL) { return 
dtype2->type_num; - + } /* For integer or bool data-types */ if (dtype1->type_num < NPY_FLOAT) { return NPY_DOUBLE; @@ -1464,9 +1582,9 @@ _get_type_num_double(PyArray_Descr *dtype1, PyArray_Descr *dtype2) static PyObject * array_mean(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=MAX_DIMS; - PyArray_Descr *dtype=NULL; - PyArrayObject *out=NULL; + int axis = MAX_DIMS; + PyArray_Descr *dtype = NULL; + PyArrayObject *out = NULL; int num; static char *kwlist[] = {"axis", "dtype", "out", NULL}; @@ -1488,9 +1606,9 @@ array_mean(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_sum(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=MAX_DIMS; - PyArray_Descr *dtype=NULL; - PyArrayObject *out=NULL; + int axis = MAX_DIMS; + PyArray_Descr *dtype = NULL; + PyArrayObject *out = NULL; int rtype; static char *kwlist[] = {"axis", "dtype", "out", NULL}; @@ -1513,9 +1631,9 @@ array_sum(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_cumsum(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=MAX_DIMS; - PyArray_Descr *dtype=NULL; - PyArrayObject *out=NULL; + int axis = MAX_DIMS; + PyArray_Descr *dtype = NULL; + PyArrayObject *out = NULL; int rtype; static char *kwlist[] = {"axis", "dtype", "out", NULL}; @@ -1537,9 +1655,9 @@ array_cumsum(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_prod(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=MAX_DIMS; - PyArray_Descr *dtype=NULL; - PyArrayObject *out=NULL; + int axis = MAX_DIMS; + PyArray_Descr *dtype = NULL; + PyArrayObject *out = NULL; int rtype; static char *kwlist[] = {"axis", "dtype", "out", NULL}; @@ -1561,9 +1679,9 @@ array_prod(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_cumprod(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=MAX_DIMS; - PyArray_Descr *dtype=NULL; - PyArrayObject *out=NULL; + int axis = MAX_DIMS; + PyArray_Descr 
*dtype = NULL; + PyArrayObject *out = NULL; int rtype; static char *kwlist[] = {"axis", "dtype", "out", NULL}; @@ -1586,8 +1704,8 @@ array_cumprod(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_any(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=MAX_DIMS; - PyArrayObject *out=NULL; + int axis = MAX_DIMS; + PyArrayObject *out = NULL; static char *kwlist[] = {"axis", "out", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&", kwlist, @@ -1604,8 +1722,8 @@ array_any(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_all(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=MAX_DIMS; - PyArrayObject *out=NULL; + int axis = MAX_DIMS; + PyArrayObject *out = NULL; static char *kwlist[] = {"axis", "out", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&", kwlist, @@ -1625,9 +1743,9 @@ __New_PyArray_Std(PyArrayObject *self, int axis, int rtype, PyArrayObject *out, static PyObject * array_stddev(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=MAX_DIMS; - PyArray_Descr *dtype=NULL; - PyArrayObject *out=NULL; + int axis = MAX_DIMS; + PyArray_Descr *dtype = NULL; + PyArrayObject *out = NULL; int num; int ddof = 0; static char *kwlist[] = {"axis", "dtype", "out", "ddof", NULL}; @@ -1651,9 +1769,9 @@ array_stddev(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_variance(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=MAX_DIMS; - PyArray_Descr *dtype=NULL; - PyArrayObject *out=NULL; + int axis = MAX_DIMS; + PyArray_Descr *dtype = NULL; + PyArrayObject *out = NULL; int num; int ddof = 0; static char *kwlist[] = {"axis", "dtype", "out", "ddof", NULL}; @@ -1666,7 +1784,7 @@ array_variance(PyArrayObject *self, PyObject *args, PyObject *kwds) &out, &ddof)) { Py_XDECREF(dtype); return NULL; - } + } num = _get_type_num_double(self->descr, dtype); Py_XDECREF(dtype); @@ -1677,17 +1795,18 @@ array_variance(PyArrayObject *self, 
PyObject *args, PyObject *kwds) static PyObject * array_compress(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis=MAX_DIMS; + int axis = MAX_DIMS; PyObject *condition; - PyArrayObject *out=NULL; + PyArrayObject *out = NULL; static char *kwlist[] = {"condition", "axis", "out", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O&O&", kwlist, &condition, PyArray_AxisConverter, &axis, PyArray_OutputConverter, - &out)) return NULL; - + &out)) { + return NULL; + } return _ARET(PyArray_Compress(self, condition, axis, out)); } @@ -1695,8 +1814,9 @@ array_compress(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_nonzero(PyArrayObject *self, PyObject *args) { - if (!PyArg_ParseTuple(args, "")) return NULL; - + if (!PyArg_ParseTuple(args, "")) { + return NULL; + } return PyArray_Nonzero(self); } @@ -1704,9 +1824,9 @@ array_nonzero(PyArrayObject *self, PyObject *args) static PyObject * array_trace(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis1=0, axis2=1, offset=0; - PyArray_Descr *dtype=NULL; - PyArrayObject *out=NULL; + int axis1 = 0, axis2 = 1, offset = 0; + PyArray_Descr *dtype = NULL; + PyArrayObject *out = NULL; int rtype; static char *kwlist[] = {"offset", "axis1", "axis2", "dtype", "out", NULL}; @@ -1720,9 +1840,7 @@ array_trace(PyArrayObject *self, PyObject *args, PyObject *kwds) rtype = _CHKTYPENUM(dtype); Py_XDECREF(dtype); - - return _ARET(PyArray_Trace(self, offset, axis1, axis2, - rtype, out)); + return _ARET(PyArray_Trace(self, offset, axis1, axis2, rtype, out)); } #undef _CHKTYPENUM @@ -1731,19 +1849,19 @@ array_trace(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_clip(PyArrayObject *self, PyObject *args, PyObject *kwds) { - PyObject *min=NULL, *max=NULL; - PyArrayObject *out=NULL; + PyObject *min = NULL, *max = NULL; + PyArrayObject *out = NULL; static char *kwlist[] = {"min", "max", "out", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|OOO&", kwlist, 
&min, &max, PyArray_OutputConverter, - &out)) + &out)) { return NULL; - + } if (max == NULL && min == NULL) { PyErr_SetString(PyExc_ValueError, "One of max or min must be given."); - return NULL; + return NULL; } return _ARET(PyArray_Clip(self, min, max, out)); } @@ -1753,11 +1871,12 @@ static PyObject * array_conjugate(PyArrayObject *self, PyObject *args) { - PyArrayObject *out=NULL; + PyArrayObject *out = NULL; if (!PyArg_ParseTuple(args, "|O&", PyArray_OutputConverter, - &out)) return NULL; - + &out)) { + return NULL; + } return PyArray_Conjugate(self, out); } @@ -1765,13 +1884,13 @@ array_conjugate(PyArrayObject *self, PyObject *args) static PyObject * array_diagonal(PyArrayObject *self, PyObject *args, PyObject *kwds) { - int axis1=0, axis2=1, offset=0; + int axis1 = 0, axis2 = 1, offset = 0; static char *kwlist[] = {"offset", "axis1", "axis2", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|iii", kwlist, - &offset, &axis1, &axis2)) + &offset, &axis1, &axis2)) { return NULL; - + } return _ARET(PyArray_Diagonal(self, offset, axis1, axis2)); } @@ -1779,11 +1898,11 @@ array_diagonal(PyArrayObject *self, PyObject *args, PyObject *kwds) static PyObject * array_flatten(PyArrayObject *self, PyObject *args) { - PyArray_ORDER fortran=PyArray_CORDER; - - if (!PyArg_ParseTuple(args, "|O&", PyArray_OrderConverter, - &fortran)) return NULL; + PyArray_ORDER fortran = PyArray_CORDER; + if (!PyArg_ParseTuple(args, "|O&", PyArray_OrderConverter, &fortran)) { + return NULL; + } return PyArray_Flatten(self, fortran); } @@ -1791,11 +1910,12 @@ array_flatten(PyArrayObject *self, PyObject *args) static PyObject * array_ravel(PyArrayObject *self, PyObject *args) { - PyArray_ORDER fortran=PyArray_CORDER; + PyArray_ORDER fortran = PyArray_CORDER; if (!PyArg_ParseTuple(args, "|O&", PyArray_OrderConverter, - &fortran)) return NULL; - + &fortran)) { + return NULL; + } return PyArray_Ravel(self, fortran); } @@ -1804,14 +1924,14 @@ static PyObject * array_round(PyArrayObject *self, 
PyObject *args, PyObject *kwds) { int decimals = 0; - PyArrayObject *out=NULL; + PyArrayObject *out = NULL; static char *kwlist[] = {"decimals", "out", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|iO&", kwlist, &decimals, PyArray_OutputConverter, - &out)) + &out)) { return NULL; - + } return _ARET(PyArray_Round(self, decimals, out)); } @@ -1824,9 +1944,9 @@ static PyObject * array_setflags(PyArrayObject *self, PyObject *args, PyObject *kwds) { static char *kwlist[] = {"write", "align", "uic", NULL}; - PyObject *write=Py_None; - PyObject *align=Py_None; - PyObject *uic=Py_None; + PyObject *write = Py_None; + PyObject *align = Py_None; + PyObject *uic = Py_None; int flagback = self->flags; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|OOO", kwlist, @@ -1834,8 +1954,12 @@ array_setflags(PyArrayObject *self, PyObject *args, PyObject *kwds) return NULL; if (align != Py_None) { - if (PyObject_Not(align)) self->flags &= ~ALIGNED; - else if (_IsAligned(self)) self->flags |= ALIGNED; + if (PyObject_Not(align)) { + self->flags &= ~ALIGNED; + } + else if (_IsAligned(self)) { + self->flags |= ALIGNED; + } else { PyErr_SetString(PyExc_ValueError, "cannot set aligned flag of mis-"\ @@ -1888,10 +2012,13 @@ array_newbyteorder(PyArrayObject *self, PyObject *args) PyArray_Descr *new; if (!PyArg_ParseTuple(args, "|O&", PyArray_ByteorderConverter, - &endian)) return NULL; - + &endian)) { + return NULL; + } new = PyArray_DescrNewByteorder(self->descr, endian); - if (!new) return NULL; + if (!new) { + return NULL; + } return PyArray_View(self, new, NULL); } diff --git a/numpy/core/src/arrayobject.c b/numpy/core/src/arrayobject.c index 32d49eaf2..5c1c8887c 100644 --- a/numpy/core/src/arrayobject.c +++ b/numpy/core/src/arrayobject.c @@ -29,13 +29,15 @@ static double PyArray_GetPriority(PyObject *obj, double default_) { PyObject *ret; - double priority=PyArray_PRIORITY; + double priority = PyArray_PRIORITY; if (PyArray_CheckExact(obj)) return priority; ret = 
PyObject_GetAttrString(obj, "__array_priority__"); - if (ret != NULL) priority = PyFloat_AsDouble(ret); + if (ret != NULL) { + priority = PyFloat_AsDouble(ret); + } if (PyErr_Occurred()) { PyErr_Clear(); priority = default_; @@ -79,7 +81,9 @@ PyArray_Zero(PyArrayObject *arr) int ret, storeflags; PyObject *obj; - if (_check_object_rec(arr->descr) < 0) return NULL; + if (_check_object_rec(arr->descr) < 0) { + return NULL; + } zeroval = PyDataMem_NEW(arr->descr->elsize); if (zeroval == NULL) { PyErr_SetNone(PyExc_MemoryError); @@ -165,13 +169,15 @@ PyArray_Item_INCREF(char *data, PyArray_Descr *descr) Py_XINCREF(*temp); } else if (PyDescr_HASFIELDS(descr)) { - PyObject *key, *value, *title=NULL; + PyObject *key, *value, *title = NULL; PyArray_Descr *new; int offset; - Py_ssize_t pos=0; + Py_ssize_t pos = 0; while (PyDict_Next(descr->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) continue; + if NPY_TITLE_KEY(key, value) { + continue; + } if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, &title)) { return; @@ -199,13 +205,15 @@ PyArray_Item_XDECREF(char *data, PyArray_Descr *descr) Py_XDECREF(*temp); } else if PyDescr_HASFIELDS(descr) { - PyObject *key, *value, *title=NULL; + PyObject *key, *value, *title = NULL; PyArray_Descr *new; int offset; - Py_ssize_t pos=0; + Py_ssize_t pos = 0; while (PyDict_Next(descr->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) continue; + if NPY_TITLE_KEY(key, value) { + continue; + } if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, &title)) { return; @@ -250,12 +258,12 @@ PyArray_INCREF(PyArrayObject *mp) data = (PyObject **)mp->data; n = PyArray_SIZE(mp); if (PyArray_ISALIGNED(mp)) { - for(i = 0; i < n; i++, data++) { + for (i = 0; i < n; i++, data++) { Py_XINCREF(*data); } } else { - for(i=0; i<n; i++, data++) { + for (i = 0; i < n; i++, data++) { temp = data; Py_XINCREF(*temp); } @@ -308,10 +316,10 @@ PyArray_XDECREF(PyArrayObject *mp) data = (PyObject **)mp->data; n = PyArray_SIZE(mp); if
(PyArray_ISALIGNED(mp)) { - for(i = 0; i < n; i++, data++) Py_XDECREF(*data); + for (i = 0; i < n; i++, data++) Py_XDECREF(*data); } else { - for(i = 0; i < n; i++, data++) { + for (i = 0; i < n; i++, data++) { temp = data; Py_XDECREF(*temp); } @@ -358,7 +366,7 @@ _strided_byte_copy(char *dst, intp outstrides, char *src, intp instrides, case 2: _FAST_MOVE(Int16); case 16: - for(i=0; i<N; i++) { + for (i = 0; i < N; i++) { ((Int64 *)tout)[0] = ((Int64 *)tin)[0]; ((Int64 *)tout)[1] = ((Int64 *)tin)[1]; tin += instrides; @@ -366,7 +374,7 @@ _strided_byte_copy(char *dst, intp outstrides, char *src, intp instrides, } return; default: - for(i=0; i<N; i++) { + for (i = 0; i < N; i++) { for(j=0; j<elsize; j++) { *tout++ = *tin++; } @@ -451,21 +459,21 @@ _unaligned_strided_byte_copy(char *dst, intp outstrides, char *src, static void _strided_byte_swap(void *p, intp stride, intp n, int size) { - char *a, *b, c=0; - int j,m; + char *a, *b, c = 0; + int j, m; switch(size) { case 1: /* no byteswap necessary */ break; case 4: - for(a = (char*)p ; n > 0; n--, a += stride-1) { + for (a = (char*)p; n > 0; n--, a += stride - 1) { b = a + 3; c = *a; *a++ = *b; *b-- = c; c = *a; *a = *b; *b = c; } break; case 8: - for(a = (char*)p ; n > 0; n--, a += stride-3) { + for (a = (char*)p; n > 0; n--, a += stride - 3) { b = a + 7; c = *a; *a++ = *b; *b-- = c; c = *a; *a++ = *b; *b-- = c; @@ -474,16 +482,16 @@ _strided_byte_swap(void *p, intp stride, intp n, int size) } break; case 2: - for(a = (char*)p ; n > 0; n--, a += stride) { + for (a = (char*)p; n > 0; n--, a += stride) { b = a + 1; c = *a; *a = *b; *b = c; } break; default: - m = size / 2; - for(a = (char *)p ; n > 0; n--, a += stride-m) { - b = a + (size-1); - for(j=0; j<m; j++) { + m = size/2; + for (a = (char *)p; n > 0; n--, a += stride - m) { + b = a + (size - 1); + for (j = 0; j < m; j++) { c=*a; *a++ = *b; *b-- = c; } } @@ -508,10 +516,11 @@ copy_and_swap(void *dst, void *src, int itemsize, intp numitems, char *d1 = (char *)dst;
- if ((numitems == 1) || (itemsize == srcstrides)) + if ((numitems == 1) || (itemsize == srcstrides)) { memcpy(d1, s1, itemsize*numitems); + } else { - for(i = 0; i < numitems; i++) { + for (i = 0; i < numitems; i++) { memcpy(d1, s1, itemsize); d1 += itemsize; s1 += srcstrides; @@ -554,7 +563,6 @@ PyArray_PyIntAsIntp(PyObject *o) PyErr_SetString(PyExc_TypeError, msg); return -1; } - if (PyInt_Check(o)) { long_value = (longlong) PyInt_AS_LONG(o); goto finish; @@ -593,7 +601,7 @@ PyArray_PyIntAsIntp(PyObject *o) #if (PY_VERSION_HEX >= 0x02050000) if (PyIndex_Check(o)) { PyObject* value = PyNumber_Index(o); - if (value==NULL) { + if (value == NULL) { return -1; } long_value = (longlong) PyInt_AsSsize_t(value); @@ -655,7 +663,6 @@ PyArray_PyIntAsInt(PyObject *o) PyErr_SetString(PyExc_TypeError, msg); return -1; } - if (PyInt_Check(o)) { long_value = (long) PyInt_AS_LONG(o); goto finish; @@ -665,7 +672,7 @@ PyArray_PyIntAsInt(PyObject *o) } descr = &INT_Descr; - arr=NULL; + arr = NULL; if (PyArray_Check(o)) { if (PyArray_SIZE(o)!=1 || !PyArray_ISINTEGER(o)) { PyErr_SetString(PyExc_TypeError, msg); @@ -720,8 +727,7 @@ PyArray_PyIntAsInt(PyObject *o) #if (SIZEOF_LONG > SIZEOF_INT) if ((long_value < INT_MIN) || (long_value > INT_MAX)) { - PyErr_SetString(PyExc_ValueError, - "integer won't fit into a C int"); + PyErr_SetString(PyExc_ValueError, "integer won't fit into a C int"); return -1; } #endif @@ -732,17 +738,19 @@ static char * index2ptr(PyArrayObject *mp, intp i) { intp dim0; - if(mp->nd == 0) { - PyErr_SetString(PyExc_IndexError, - "0-d arrays can't be indexed"); + + if (mp->nd == 0) { + PyErr_SetString(PyExc_IndexError, "0-d arrays can't be indexed"); return NULL; } dim0 = mp->dimensions[0]; - if (i<0) i += dim0; - if (i==0 && dim0 > 0) + if (i < 0) { + i += dim0; + } + if (i == 0 && dim0 > 0) { return mp->data; - - if (i>0 && i < dim0) { + } + if (i > 0 && i < dim0) { return mp->data+i*mp->strides[0]; } PyErr_SetString(PyExc_IndexError,"index out of bounds"); @@ 
-766,11 +774,11 @@ PyArray_Size(PyObject *op) static int _copy_from0d(PyArrayObject *dest, PyArrayObject *src, int usecopy, int swap) { - char *aligned=NULL; + char *aligned = NULL; char *sptr; int numcopies, nbytes; void (*myfunc)(char *, intp, char *, intp, intp, int); - int retval=-1; + int retval = -1; NPY_BEGIN_THREADS_DEF; numcopies = PyArray_SIZE(dest); @@ -807,10 +815,12 @@ _copy_from0d(PyArrayObject *dest, PyArrayObject *src, int usecopy, int swap) intp dstride; dptr = dest->data; - if (dest->nd == 1) + if (dest->nd == 1) { dstride = dest->strides[0]; - else + } + else { dstride = nbytes; + } /* Refcount note: src and dest may have different sizes */ PyArray_INCREF(src); @@ -826,9 +836,10 @@ _copy_from0d(PyArrayObject *dest, PyArrayObject *src, int usecopy, int swap) } else { PyArrayIterObject *dit; - int axis=-1; - dit = (PyArrayIterObject *)\ - PyArray_IterAllButAxis((PyObject *)dest, &axis); + int axis = -1; + + dit = (PyArrayIterObject *) + PyArray_IterAllButAxis((PyObject *)dest, &axis); if (dit == NULL) { goto finish; } @@ -837,12 +848,10 @@ _copy_from0d(PyArrayObject *dest, PyArrayObject *src, int usecopy, int swap) PyArray_XDECREF(dest); NPY_BEGIN_THREADS; while(dit->index < dit->size) { - myfunc(dit->dataptr, PyArray_STRIDE(dest, axis), - sptr, 0, + myfunc(dit->dataptr, PyArray_STRIDE(dest, axis), sptr, 0, PyArray_DIM(dest, axis), nbytes); if (swap) { - _strided_byte_swap(dit->dataptr, - PyArray_STRIDE(dest, axis), + _strided_byte_swap(dit->dataptr, PyArray_STRIDE(dest, axis), PyArray_DIM(dest, axis), nbytes); } PyArray_ITER_NEXT(dit); @@ -928,8 +937,7 @@ _flat_copyinto(PyObject *dst, PyObject *src, NPY_ORDER order) PyArray_XDECREF((PyArrayObject *)dst); NPY_BEGIN_THREADS; while(it->index < it->size) { - myfunc(dptr, elsize, it->dataptr, - PyArray_STRIDE(src,axis), + myfunc(dptr, elsize, it->dataptr, PyArray_STRIDE(src,axis), PyArray_DIM(src,axis), elsize); dptr += nbytes; PyArray_ITER_NEXT(it); @@ -949,7 +957,7 @@ 
_copy_from_same_shape(PyArrayObject *dest, PyArrayObject *src, void (*myfunc)(char *, intp, char *, intp, intp, int), int swap) { - int maxaxis=-1, elsize; + int maxaxis = -1, elsize; intp maxdim; PyArrayIterObject *dit, *sit; NPY_BEGIN_THREADS_DEF; @@ -1323,7 +1331,7 @@ PyArray_FromDimsAndDataAndDescr(int nd, int *d, } if (!PyArray_ISNBO(descr->byteorder)) descr->byteorder = '='; - for(i = 0; i < nd; i++) { + for (i = 0; i < nd; i++) { newd[i] = (intp) d[i]; } ret = PyArray_NewFromDescr(&PyArray_Type, descr, @@ -1409,8 +1417,9 @@ PyArray_Scalar(void *data, PyArray_Descr *descr, PyObject *base) int swap; type_num = descr->type_num; - if (type_num == PyArray_BOOL) + if (type_num == PyArray_BOOL) { PyArrayScalar_RETURN_BOOL_FROM_LONG(*(Bool*)data); + } else if (PyDataType_FLAGCHK(descr, NPY_USE_GETITEM)) { return descr->f->getitem(data, base); } @@ -1420,18 +1429,23 @@ PyArray_Scalar(void *data, PyArray_Descr *descr, PyObject *base) swap = !PyArray_ISNBO(descr->byteorder); if PyTypeNum_ISSTRING(type_num) { /* Eliminate NULL bytes */ char *dptr = data; - dptr += itemsize-1; - while(itemsize && *dptr-- == 0) itemsize--; + + dptr += itemsize - 1; + while(itemsize && *dptr-- == 0) { + itemsize--; + } if (type_num == PyArray_UNICODE && itemsize) { /* make sure itemsize is a multiple of 4 */ /* so round up to nearest multiple */ itemsize = (((itemsize-1) >> 2) + 1) << 2; } } - if (type->tp_itemsize != 0) /* String type */ + if (type->tp_itemsize != 0) { /* String type */ obj = type->tp_alloc(type, itemsize); - else + } + else { obj = type->tp_alloc(type, 0); + } if (obj == NULL) { return NULL; } @@ -1449,7 +1463,7 @@ PyArray_Scalar(void *data, PyArray_Descr *descr, PyObject *base) int length = itemsize >> 2; #ifndef Py_UNICODE_WIDE char *buffer; - int alloc=0; + int alloc = 0; length *= 2; #endif /* Need an extra slot and need to use @@ -1468,22 +1482,25 @@ PyArray_Scalar(void *data, PyArray_Descr *descr, PyObject *base) uni->defenc = NULL; #ifdef Py_UNICODE_WIDE 
memcpy(destptr, data, itemsize); - if (swap) + if (swap) { byte_swap_vector(destptr, length, 4); + } #else /* need aligned data buffer */ if ((swap) || ((((intp)data) % descr->alignment) != 0)) { buffer = _pya_malloc(itemsize); - if (buffer == NULL) + if (buffer == NULL) { return PyErr_NoMemory(); + } alloc = 1; memcpy(buffer, data, itemsize); if (swap) { - byte_swap_vector(buffer, - itemsize >> 2, 4); + byte_swap_vector(buffer, itemsize >> 2, 4); } } - else buffer = data; + else { + buffer = data; + } /* Allocated enough for 2-characters per itemsize. Now convert from the data-buffer @@ -1491,7 +1508,9 @@ PyArray_Scalar(void *data, PyArray_Descr *descr, PyObject *base) length = PyUCS2Buffer_FromUCS4(uni->str, (PyArray_UCS4 *)buffer, itemsize >> 2); - if (alloc) _pya_free(buffer); + if (alloc) { + _pya_free(buffer); + } /* Resize the unicode result */ if (MyPyUnicode_Resize(uni, length) < 0) { Py_DECREF(obj); @@ -1635,7 +1654,7 @@ _default_copyswapn(void *dst, npy_intp dstride, void *src, copyswap = PyArray_DESCR(arr)->f->copyswap; - for(i = 0; i < n; i++) { + for (i = 0; i < n; i++) { copyswap(dstptr, srcptr, swap, arr); dstptr += dstride; srcptr += sstride; @@ -1657,12 +1676,12 @@ PyArray_TypeNumFromName(char *str) int i; PyArray_Descr *descr; - for(i=0; i<NPY_NUMUSERTYPES; i++) { + for (i = 0; i < NPY_NUMUSERTYPES; i++) { descr = userdescrs[i]; - if (strcmp(descr->typeobj->tp_name, str) == 0) + if (strcmp(descr->typeobj->tp_name, str) == 0) { return descr->type_num; + } } - return PyArray_NOTYPE; } @@ -1684,10 +1703,11 @@ PyArray_RegisterDataType(PyArray_Descr *descr) PyArray_ArrFuncs *f; /* See if this type is already registered */ - for(i=0; i<NPY_NUMUSERTYPES; i++) { + for (i = 0; i < NPY_NUMUSERTYPES; i++) { descr2 = userdescrs[i]; - if (descr2 == descr) + if (descr2 == descr) { return descr->type_num; + } } typenum = PyArray_USERDEF + NPY_NUMUSERTYPES; descr->type_num = typenum; @@ -1733,6 +1753,7 @@ PyArray_RegisterCastFunc(PyArray_Descr *descr, int 
totype, { PyObject *cobj, *key; int ret; + if (totype < PyArray_NTYPES) { descr->f->cast[totype] = castfunc; return 0; @@ -1743,12 +1764,19 @@ PyArray_RegisterCastFunc(PyArray_Descr *descr, int totype, } if (descr->f->castdict == NULL) { descr->f->castdict = PyDict_New(); - if (descr->f->castdict == NULL) return -1; + if (descr->f->castdict == NULL) { + return -1; + } } key = PyInt_FromLong(totype); - if (PyErr_Occurred()) return -1; + if (PyErr_Occurred()) { + return -1; + } cobj = PyCObject_FromVoidPtr((void *)castfunc, NULL); - if (cobj == NULL) {Py_DECREF(key); return -1;} + if (cobj == NULL) { + Py_DECREF(key); + return -1; + } ret = PyDict_SetItem(descr->f->castdict, key, cobj); Py_DECREF(key); Py_DECREF(cobj); @@ -1758,13 +1786,15 @@ PyArray_RegisterCastFunc(PyArray_Descr *descr, int totype, static int * _append_new(int *types, int insert) { - int n=0; + int n = 0; int *newtypes; - while (types[n] != PyArray_NOTYPE) n++; - newtypes = (int *)realloc(types, (n+2)*sizeof(int)); + while (types[n] != PyArray_NOTYPE) { + n++; + } + newtypes = (int *)realloc(types, (n + 2)*sizeof(int)); newtypes[n] = insert; - newtypes[n+1] = PyArray_NOTYPE; + newtypes[n + 1] = PyArray_NOTYPE; return newtypes; } @@ -1791,22 +1821,20 @@ PyArray_RegisterCanCast(PyArray_Descr *descr, int totype, /* register with cancastscalarkindto */ if (descr->f->cancastscalarkindto == NULL) { int i; - descr->f->cancastscalarkindto = \ - (int **)malloc(PyArray_NSCALARKINDS* \ - sizeof(int*)); - for(i=0; i<PyArray_NSCALARKINDS; i++) { + descr->f->cancastscalarkindto = + (int **)malloc(PyArray_NSCALARKINDS* sizeof(int*)); + for (i = 0; i < PyArray_NSCALARKINDS; i++) { descr->f->cancastscalarkindto[i] = NULL; } } if (descr->f->cancastscalarkindto[scalar] == NULL) { - descr->f->cancastscalarkindto[scalar] = \ + descr->f->cancastscalarkindto[scalar] = (int *)malloc(1*sizeof(int)); - descr->f->cancastscalarkindto[scalar][0] = \ + descr->f->cancastscalarkindto[scalar][0] = PyArray_NOTYPE; } - 
descr->f->cancastscalarkindto[scalar] = \ - _append_new(descr->f->cancastscalarkindto[scalar], - totype); + descr->f->cancastscalarkindto[scalar] = + _append_new(descr->f->cancastscalarkindto[scalar], totype); } return 0; } @@ -1859,7 +1887,7 @@ PyArray_ToFile(PyArrayObject *self, FILE *fp, char *sep, char *format) it = (PyArrayIterObject *) PyArray_IterNew((PyObject *)self); NPY_BEGIN_THREADS; - while(it->index < it->size) { + while (it->index < it->size) { if (fwrite((const void *)it->dataptr, (size_t) self->descr->elsize, 1, fp) < 1) { @@ -1885,7 +1913,7 @@ PyArray_ToFile(PyArrayObject *self, FILE *fp, char *sep, char *format) it = (PyArrayIterObject *) PyArray_IterNew((PyObject *)self); n4 = (format ? strlen((const char *)format) : 0); - while(it->index < it->size) { + while (it->index < it->size) { obj = self->descr->f->getitem(it->dataptr, self); if (obj == NULL) { Py_DECREF(it); @@ -1977,7 +2005,7 @@ PyArray_ToList(PyArrayObject *self) sz = self->dimensions[0]; lp = PyList_New(sz); - for(i = 0; i < sz; i++) { + for (i = 0; i < sz; i++) { v = (PyArrayObject *)array_big_item(self, i); if (PyArray_Check(v) && (v->nd >= self->nd)) { PyErr_SetString(PyExc_RuntimeError, @@ -2015,7 +2043,7 @@ PyArray_ToString(PyArrayObject *self, NPY_ORDER order) */ numbytes = PyArray_NBYTES(self); - if ((PyArray_ISCONTIGUOUS(self) && (order == NPY_CORDER)) || \ + if ((PyArray_ISCONTIGUOUS(self) && (order == NPY_CORDER)) || (PyArray_ISFORTRAN(self) && (order == NPY_FORTRANORDER))) { ret = PyString_FromStringAndSize(self->data, (int) numbytes); } @@ -2024,7 +2052,9 @@ PyArray_ToString(PyArrayObject *self, NPY_ORDER order) if (order == NPY_FORTRANORDER) { /* iterators are always in C-order */ new = PyArray_Transpose(self, NULL); - if (new == NULL) return NULL; + if (new == NULL) { + return NULL; + } } else { Py_INCREF(self); @@ -2032,13 +2062,18 @@ PyArray_ToString(PyArrayObject *self, NPY_ORDER order) } it = (PyArrayIterObject *)PyArray_IterNew(new); Py_DECREF(new); - if (it==NULL) 
return NULL; + if (it == NULL) { + return NULL; + } ret = PyString_FromStringAndSize(NULL, (int) numbytes); - if (ret == NULL) {Py_DECREF(it); return NULL;} + if (ret == NULL) { + Py_DECREF(it); + return NULL; + } dptr = PyString_AS_STRING(ret); index = it->size; elsize = self->descr->elsize; - while(index--) { + while (index--) { memcpy(dptr, it->dataptr, elsize); dptr += elsize; PyArray_ITER_NEXT(it); @@ -2057,30 +2092,34 @@ PyArray_ToString(PyArrayObject *self, NPY_ORDER order) static void array_dealloc(PyArrayObject *self) { - if (self->weakreflist != NULL) + if (self->weakreflist != NULL) { PyObject_ClearWeakRefs((PyObject *)self); - - if(self->base) { - /* UPDATEIFCOPY means that base points to an - array that should be updated with the contents - of this array upon destruction. - self->base->flags must have been WRITEABLE - (checked previously) and it was locked here - thus, unlock it. - */ + } + if (self->base) { + /* + * UPDATEIFCOPY means that base points to an + * array that should be updated with the contents + * of this array upon destruction. + * self->base->flags must have been WRITEABLE + * (checked previously) and it was locked here + * thus, unlock it. + */ if (self->flags & UPDATEIFCOPY) { ((PyArrayObject *)self->base)->flags |= WRITEABLE; Py_INCREF(self); /* hold on to self in next call */ - if (PyArray_CopyAnyInto((PyArrayObject *)self->base, - self) < 0) { + if (PyArray_CopyAnyInto((PyArrayObject *)self->base, self) < 0) { PyErr_Print(); PyErr_Clear(); } - /* Don't need to DECREF -- because we are deleting - self already... */ + /* + * Don't need to DECREF -- because we are deleting + *self already... 
+         */
     }
-        /* In any case base is pointing to something that we need
-           to DECREF -- either a view or a buffer object */
+        /*
+         * In any case base is pointing to something that we need
+         * to DECREF -- either a view or a buffer object
+         */
         Py_DECREF(self->base);
     }
@@ -2089,16 +2128,16 @@ array_dealloc(PyArrayObject *self) {
         if (PyDataType_FLAGCHK(self->descr, NPY_ITEM_REFCOUNT)) {
             Py_INCREF(self); /*hold on to self */
             PyArray_XDECREF(self);
-            /* Don't need to DECREF -- because we are deleting
-               self already... */
+            /*
+             * Don't need to DECREF -- because we are deleting
+             * self already...
+             */
         }
         PyDataMem_FREE(self->data);
     }
     PyDimMem_FREE(self->dimensions);
-
     Py_DECREF(self->descr);
-
     self->ob_type->tp_free((PyObject *)self);
 }
@@ -2128,8 +2167,9 @@ array_big_item(PyArrayObject *self, intp i)
                         "0-d arrays can't be indexed");
         return NULL;
     }
-    if ((item = index2ptr(self, i)) == NULL) return NULL;
-
+    if ((item = index2ptr(self, i)) == NULL) {
+        return NULL;
+    }
     Py_INCREF(self->descr);
     r = (PyArrayObject *)PyArray_NewFromDescr(self->ob_type,
                                               self->descr,
@@ -2138,7 +2178,9 @@ array_big_item(PyArrayObject *self, intp i)
                                               self->strides+1, item,
                                               self->flags,
                                               (PyObject *)self);
-    if (r == NULL) return NULL;
+    if (r == NULL) {
+        return NULL;
+    }
     Py_INCREF(self);
     r->base = (PyObject *)self;
     PyArray_UpdateFlags(r, CONTIGUOUS | FORTRAN);
@@ -2151,12 +2193,14 @@ array_item_nice(PyArrayObject *self, Py_ssize_t i)
 {
     if (self->nd == 1) {
         char *item;
-        if ((item = index2ptr(self, i)) == NULL) return NULL;
+        if ((item = index2ptr(self, i)) == NULL) {
+            return NULL;
+        }
         return PyArray_Scalar(item, self->descr, (PyObject *)self);
     }
     else {
-        return PyArray_Return((PyArrayObject *)\
-                              array_big_item(self, (intp) i));
+        return PyArray_Return(
+                (PyArrayObject *) array_big_item(self, (intp) i));
     }
 }
@@ -2185,15 +2229,20 @@ array_ass_big_item(PyArrayObject *self, intp i, PyObject *v)
     if (self->nd > 1) {
-        if((tmp = (PyArrayObject *)array_big_item(self, i)) == NULL)
+        if((tmp = (PyArrayObject *)array_big_item(self, i)) == NULL) {
             return -1;
+        }
         ret = PyArray_CopyObject(tmp, v);
         Py_DECREF(tmp);
         return ret;
     }
-    if ((item = index2ptr(self, i)) == NULL) return -1;
-    if (self->descr->f->setitem(v, item, self) == -1) return -1;
+    if ((item = index2ptr(self, i)) == NULL) {
+        return -1;
+    }
+    if (self->descr->f->setitem(v, item, self) == -1) {
+        return -1;
+    }
     return 0;
 }
@@ -2239,8 +2288,11 @@ slice_GetIndices(PySliceObject *r, intp length,
     if (r->step == Py_None) {
         *step = 1;
-    } else {
-        if (!slice_coerce_index(r->step, step)) return -1;
+    }
+    else {
+        if (!slice_coerce_index(r->step, step)) {
+            return -1;
+        }
         if (*step == 0) {
             PyErr_SetString(PyExc_ValueError,
                             "slice step cannot be zero");
@@ -2248,15 +2300,20 @@ slice_GetIndices(PySliceObject *r, intp length,
         }
     }
     /* defstart = *step < 0 ? length - 1 : 0; */
-    defstop = *step < 0 ? -1 : length;
-
     if (r->start == Py_None) {
         *start = *step < 0 ? length-1 : 0;
-    } else {
-        if (!slice_coerce_index(r->start, start)) return -1;
-        if (*start < 0) *start += length;
-        if (*start < 0) *start = (*step < 0) ? -1 : 0;
+    }
+    else {
+        if (!slice_coerce_index(r->start, start)) {
+            return -1;
+        }
+        if (*start < 0) {
+            *start += length;
+        }
+        if (*start < 0) {
+            *start = (*step < 0) ? -1 : 0;
+        }
         if (*start >= length) {
             *start = (*step < 0) ? length - 1 : length;
         }
@@ -2264,19 +2321,30 @@ slice_GetIndices(PySliceObject *r, intp length,
     if (r->stop == Py_None) {
         *stop = defstop;
-    } else {
-        if (!slice_coerce_index(r->stop, stop)) return -1;
-        if (*stop < 0) *stop += length;
-        if (*stop < 0) *stop = -1;
-        if (*stop > length) *stop = length;
+    }
+    else {
+        if (!slice_coerce_index(r->stop, stop)) {
+            return -1;
+        }
+        if (*stop < 0) {
+            *stop += length;
+        }
+        if (*stop < 0) {
+            *stop = -1;
+        }
+        if (*stop > length) {
+            *stop = length;
+        }
     }
-    if ((*step < 0 && *stop >= *start) || \
+    if ((*step < 0 && *stop >= *start) ||
         (*step > 0 && *start >= *stop)) {
         *slicelength = 0;
-    } else if (*step < 0) {
+    }
+    else if (*step < 0) {
         *slicelength = (*stop - *start + 1) / (*step) + 1;
-    } else {
+    }
+    else {
         *slicelength = (*stop - *start - 1) / (*step) + 1;
     }
@@ -2295,10 +2363,12 @@ parse_subindex(PyObject *op, intp *step_size, intp *n_steps, intp max)
     if (op == Py_None) {
         *n_steps = PseudoIndex;
         index = 0;
-    } else if (op == Py_Ellipsis) {
+    }
+    else if (op == Py_Ellipsis) {
         *n_steps = RubberIndex;
         index = 0;
-    } else if (PySlice_Check(op)) {
+    }
+    else if (PySlice_Check(op)) {
         intp stop;
         if (slice_GetIndices((PySliceObject *)op, max,
                              &index, &stop, step_size, n_steps) < 0) {
@@ -2313,7 +2383,8 @@ parse_subindex(PyObject *op, intp *step_size, intp *n_steps, intp max)
             *step_size = 1;
             index = 0;
         }
-    } else {
+    }
+    else {
         index = PyArray_PyIntAsIntp(op);
         if (error_converting(index)) {
             PyErr_SetString(PyExc_IndexError,
@@ -2324,13 +2395,16 @@ parse_subindex(PyObject *op, intp *step_size, intp *n_steps, intp max)
         }
         *n_steps = SingleIndex;
         *step_size = 0;
-        if (index < 0) index += max;
+        if (index < 0) {
+            index += max;
+        }
         if (index >= max || index < 0) {
             PyErr_SetString(PyExc_IndexError, "invalid index");
             goto fail;
         }
     }
     return index;
+
 fail:
     return -1;
 }
@@ -2343,7 +2417,7 @@ parse_index(PyArrayObject *self, PyObject *op,
     int i, j, n;
     int nd_old, nd_new, n_add, n_pseudo;
     intp n_steps, start, offset, step_size;
-    PyObject *op1=NULL;
+    PyObject *op1 = NULL;
     int is_slice;

     if (PySlice_Check(op) || op == Py_Ellipsis || op == Py_None) {
@@ -2367,7 +2441,7 @@ parse_index(PyArrayObject *self, PyObject *op,
     nd_old = nd_new = 0;

     offset = 0;
-    for(i=0; i<n; i++) {
+    for (i = 0; i < n; i++) {
         if (!is_slice) {
             if (!(op1=PySequence_GetItem(op, i))) {
                 PyErr_SetString(PyExc_IndexError,
@@ -2375,21 +2449,24 @@ parse_index(PyArrayObject *self, PyObject *op,
                 return -1;
             }
         }
-
         start = parse_subindex(op1, &step_size, &n_steps,
-                               nd_old < self->nd ? \
+                               nd_old < self->nd ?
                                self->dimensions[nd_old] : 0);
         Py_DECREF(op1);
-        if (start == -1) break;
-
+        if (start == -1) {
+            break;
+        }
         if (n_steps == PseudoIndex) {
             dimensions[nd_new] = 1; strides[nd_new] = 0;
             nd_new++;
-        } else {
+        }
+        else {
             if (n_steps == RubberIndex) {
-                for(j=i+1, n_pseudo=0; j<n; j++) {
+                for (j = i + 1, n_pseudo = 0; j < n; j++) {
                     op1 = PySequence_GetItem(op, j);
-                    if (op1 == Py_None) n_pseudo++;
+                    if (op1 == Py_None) {
+                        n_pseudo++;
+                    }
                     Py_DECREF(op1);
                 }
                 n_add = self->nd-(n-i-n_pseudo-1+nd_old);
@@ -2398,14 +2475,15 @@ parse_index(PyArrayObject *self, PyObject *op,
                                     "too many indices");
                     return -1;
                 }
-                for(j=0; j<n_add; j++) {
+                for (j = 0; j < n_add; j++) {
                     dimensions[nd_new] = \
                         self->dimensions[nd_old];
                     strides[nd_new] = \
                         self->strides[nd_old];
                     nd_new++; nd_old++;
                 }
-            } else {
+            }
+            else {
                 if (nd_old >= self->nd) {
                     PyErr_SetString(PyExc_IndexError,
                                     "too many indices");
@@ -2422,12 +2500,15 @@ parse_index(PyArrayObject *self, PyObject *op,
             }
         }
     }
-    if (i < n) return -1;
+    if (i < n) {
+        return -1;
+    }
     n_add = self->nd-nd_old;
-    for(j=0; j<n_add; j++) {
+    for (j = 0; j < n_add; j++) {
         dimensions[nd_new] = self->dimensions[nd_old];
         strides[nd_new] = self->strides[nd_old];
-        nd_new++; nd_old++;
+        nd_new++;
+        nd_old++;
     }
     *offset_ptr = offset;
     return nd_new;
@@ -2446,68 +2527,73 @@ _swap_axes(PyArrayMapIterObject *mit, PyArrayObject **ret, int getmap)
     permute.ptr = d;
     permute.len = mit->nd;

-    /* arr might not have the right number of dimensions
-       and need to be reshaped first by pre-pending ones */
+    /*
+     * arr might not have the right number of dimensions
+     * and need to be reshaped first by pre-pending ones
+     */
     arr = *ret;
     if (arr->nd != mit->nd) {
-        for(i=1; i<=arr->nd; i++) {
+        for (i = 1; i <= arr->nd; i++) {
             permute.ptr[mit->nd-i] = arr->dimensions[arr->nd-i];
         }
-        for(i=0; i<mit->nd-arr->nd; i++) {
+        for (i = 0; i < mit->nd-arr->nd; i++) {
             permute.ptr[i] = 1;
         }
         new = PyArray_Newshape(arr, &permute, PyArray_ANYORDER);
         Py_DECREF(arr);
         *ret = (PyArrayObject *)new;
-        if (new == NULL) return;
+        if (new == NULL) {
+            return;
+        }
     }

-    /* Setting and getting need to have different permutations.
-       On the get we are permuting the returned object, but on
-       setting we are permuting the object-to-be-set.
-       The set permutation is the inverse of the get permutation.
-    */
+    /*
+     * Setting and getting need to have different permutations.
+     * On the get we are permuting the returned object, but on
+     * setting we are permuting the object-to-be-set.
+     * The set permutation is the inverse of the get permutation.
+     */

-    /* For getting the array the tuple for transpose is
-       (n1,...,n1+n2-1,0,...,n1-1,n1+n2,...,n3-1)
-       n1 is the number of dimensions of
-       the broadcasted index array
-       n2 is the number of dimensions skipped at the
-       start
-       n3 is the number of dimensions of the
-       result
-    */
+    /*
+     * For getting the array the tuple for transpose is
+     * (n1,...,n1+n2-1,0,...,n1-1,n1+n2,...,n3-1)
+     * n1 is the number of dimensions of the broadcast index array
+     * n2 is the number of dimensions skipped at the start
+     * n3 is the number of dimensions of the result
+     */

-    /* For setting the array the tuple for transpose is
-       (n2,...,n1+n2-1,0,...,n2-1,n1+n2,...n3-1)
-    */
+    /*
+     * For setting the array the tuple for transpose is
+     * (n2,...,n1+n2-1,0,...,n2-1,n1+n2,...n3-1)
+     */
     n1 = mit->iters[0]->nd_m1 + 1;
     n2 = mit->iteraxes[0];
     n3 = mit->nd;

-    bnd = (getmap ? n1 : n2); /* use n1 as the boundary if getting
-                                 but n2 if setting */
-
+    /* use n1 as the boundary if getting but n2 if setting */
+    bnd = getmap ? n1 : n2;
     val = bnd;
     i = 0;
-    while(val < n1+n2)
+    while (val < n1 + n2) {
         permute.ptr[i++] = val++;
+    }
     val = 0;
-    while(val < bnd)
+    while (val < bnd) {
         permute.ptr[i++] = val++;
-    val = n1+n2;
-    while(val < n3)
+    }
+    val = n1 + n2;
+    while (val < n3) {
         permute.ptr[i++] = val++;
-
+    }
     new = PyArray_Transpose(*ret, &permute);
     Py_DECREF(*ret);
     *ret = (PyArrayObject *)new;
 }

-/* Prototypes for Mapping calls --- not part of the C-API
-   because only useful as part of a getitem call.
-*/
-
+/*
+ * Prototypes for Mapping calls --- not part of the C-API
+ * because only useful as part of a getitem call.
+ */
 static void PyArray_MapIterReset(PyArrayMapIterObject *);
 static void PyArray_MapIterNext(PyArrayMapIterObject *);
 static void PyArray_MapIterBind(PyArrayMapIterObject *, PyArrayObject *);
@@ -2524,28 +2610,33 @@ PyArray_GetMap(PyArrayMapIterObject *mit)
     PyArray_CopySwapFunc *copyswap;

     /* Unbound map iterator --- Bind should have been called */
-    if (mit->ait == NULL) return NULL;
+    if (mit->ait == NULL) {
+        return NULL;
+    }

     /* This relies on the map iterator object telling us the shape
        of the new array in nd and dimensions.
     */
     temp = mit->ait->ao;
     Py_INCREF(temp->descr);
-    ret = (PyArrayObject *)\
+    ret = (PyArrayObject *)
         PyArray_NewFromDescr(temp->ob_type,
                              temp->descr,
                              mit->nd, mit->dimensions,
                              NULL, NULL,
                              PyArray_ISFORTRAN(temp),
                              (PyObject *)temp);
-    if (ret == NULL) return NULL;
+    if (ret == NULL) {
+        return NULL;
+    }

-    /* Now just iterate through the new array filling it in
-       with the next object from the original array as
-       defined by the mapping iterator */
+    /*
+     * Now just iterate through the new array filling it in
+     * with the next object from the original array as
+     * defined by the mapping iterator
+     */

-    if ((it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)ret))
-        == NULL) {
+    if ((it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)ret)) == NULL) {
         Py_DECREF(ret);
         return NULL;
     }
@@ -2572,7 +2663,7 @@ PyArray_GetMap(PyArrayMapIterObject *mit)
 static int
 PyArray_SetMap(PyArrayMapIterObject *mit, PyObject *op)
 {
-    PyObject *arr=NULL;
+    PyObject *arr = NULL;
     PyArrayIterObject *it;
     int index;
     int swap;
@@ -2580,17 +2671,21 @@ PyArray_SetMap(PyArrayMapIterObject *mit, PyObject *op)
     PyArray_Descr *descr;

     /* Unbound Map Iterator */
-    if (mit->ait == NULL) return -1;
-
+    if (mit->ait == NULL) {
+        return -1;
+    }
     descr = mit->ait->ao->descr;
     Py_INCREF(descr);
     arr = PyArray_FromAny(op, descr, 0, 0, FORCECAST, NULL);
-    if (arr == NULL) return -1;
-
+    if (arr == NULL) {
+        return -1;
+    }
     if ((mit->subspace != NULL) && (mit->consec)) {
         if (mit->iteraxes[0] > 0) {  /* then we need to swap */
             _swap_axes(mit, (PyArrayObject **)&arr, 0);
-            if (arr == NULL) return -1;
+            if (arr == NULL) {
+                return -1;
+            }
         }
     }
@@ -2604,7 +2699,7 @@ PyArray_SetMap(PyArrayMapIterObject *mit, PyObject *op)
     }

     index = mit->size;
-    swap = (PyArray_ISNOTSWAPPED(mit->ait->ao) != \
+    swap = (PyArray_ISNOTSWAPPED(mit->ait->ao) !=
             (PyArray_ISNOTSWAPPED(arr)));
     copyswap = PyArray_DESCR(arr)->f->copyswap;
     PyArray_MapIterReset(mit);
@@ -2615,8 +2710,9 @@ PyArray_SetMap(PyArrayMapIterObject *mit, PyObject *op)
             PyArray_Item_INCREF(it->dataptr, PyArray_DESCR(arr));
             memmove(mit->dataptr, it->dataptr, sizeof(PyObject *));
             /* ignored unless VOID array with object's */
-            if (swap)
+            if (swap) {
                 copyswap(mit->dataptr, NULL, swap, arr);
+            }
             PyArray_MapIterNext(mit);
             PyArray_ITER_NEXT(it);
         }
@@ -2626,8 +2722,9 @@ PyArray_SetMap(PyArrayMapIterObject *mit, PyObject *op)
     }
     while(index--) {
         memmove(mit->dataptr, it->dataptr, PyArray_ITEMSIZE(arr));
-        if (swap)
+        if (swap) {
             copyswap(mit->dataptr, NULL, swap, arr);
+        }
         PyArray_MapIterNext(mit);
         PyArray_ITER_NEXT(it);
     }
@@ -2644,12 +2741,17 @@ count_new_axes_0d(PyObject *tuple)
     int newaxis_count = 0;

     argument_count = PyTuple_GET_SIZE(tuple);
-
-    for(i = 0; i < argument_count; ++i) {
+    for (i = 0; i < argument_count; ++i) {
         PyObject *arg = PyTuple_GET_ITEM(tuple, i);
-        if (arg == Py_Ellipsis && !ellipsis_count) ellipsis_count++;
-        else if (arg == Py_None) newaxis_count++;
-        else break;
+        if (arg == Py_Ellipsis && !ellipsis_count) {
+            ellipsis_count++;
+        }
+        else if (arg == Py_None) {
+            newaxis_count++;
+        }
+        else {
+            break;
+        }
     }
     if (i < argument_count) {
         PyErr_SetString(PyExc_IndexError,
@@ -2659,8 +2761,7 @@ count_new_axes_0d(PyObject *tuple)
         return -1;
     }
     if (newaxis_count > MAX_DIMS) {
-        PyErr_SetString(PyExc_IndexError,
-                        "too many dimensions");
+        PyErr_SetString(PyExc_IndexError, "too many dimensions");
         return -1;
     }
     return newaxis_count;
@@ -2672,7 +2773,8 @@ add_new_axes_0d(PyArrayObject *arr,  int newaxis_count)
     PyArrayObject *other;
     intp dimensions[MAX_DIMS];
     int i;
-    for(i = 0; i < newaxis_count; ++i) {
+
+    for (i = 0; i < newaxis_count; ++i) {
         dimensions[i]  = 1;
     }
     Py_INCREF(arr->descr);
@@ -2706,13 +2808,16 @@ fancy_indexing_check(PyObject *args)

     if (PyTuple_Check(args)) {
         n = PyTuple_GET_SIZE(args);
-        if (n >= MAX_DIMS) return SOBJ_TOOMANY;
-        for(i=0; i<n; i++) {
+        if (n >= MAX_DIMS) {
+            return SOBJ_TOOMANY;
+        }
+        for (i = 0; i < n; i++) {
             obj = PyTuple_GET_ITEM(args,i);
             if (PyArray_Check(obj)) {
                 if (PyArray_ISINTEGER(obj) ||
-                    PyArray_ISBOOL(obj))
+                    PyArray_ISBOOL(obj)) {
                     retval = SOBJ_ISFANCY;
+                }
                 else {
                     retval = SOBJ_BADARRAY;
                     break;
@@ -2725,62 +2830,69 @@ fancy_indexing_check(PyObject *args)
    }
    else if (PyArray_Check(args)) {
        if ((PyArray_TYPE(args)==PyArray_BOOL) ||
-           (PyArray_ISINTEGER(args)))
+           (PyArray_ISINTEGER(args))) {
            return SOBJ_ISFANCY;
-       else
+       }
+       else {
            return SOBJ_BADARRAY;
+       }
    }
    else if (PySequence_Check(args)) {
-       /* Sequences < MAX_DIMS with any slice objects
-          or newaxis, or Ellipsis is considered standard
-          as long as there are also no Arrays and or additional
-          sequences embedded.
-       */
+       /*
+        * Sequences < MAX_DIMS with any slice objects
+        * or newaxis, or Ellipsis is considered standard
+        * as long as there are also no Arrays and or additional
+        * sequences embedded.
+        */
        retval = SOBJ_ISFANCY;
        n = PySequence_Size(args);
-       if (n<0 || n>=MAX_DIMS) return SOBJ_ISFANCY;
-       for(i=0; i<n; i++) {
+       if (n < 0 || n >= MAX_DIMS) {
+           return SOBJ_ISFANCY;
+       }
+       for (i = 0; i < n; i++) {
            obj = PySequence_GetItem(args, i);
-           if (obj == NULL) return SOBJ_ISFANCY;
+           if (obj == NULL) {
+               return SOBJ_ISFANCY;
+           }
            if (PyArray_Check(obj)) {
-               if (PyArray_ISINTEGER(obj) ||
-                   PyArray_ISBOOL(obj))
+               if (PyArray_ISINTEGER(obj) || PyArray_ISBOOL(obj)) {
                    retval = SOBJ_LISTTUP;
-               else
+               }
+               else {
                    retval = SOBJ_BADARRAY;
+               }
            }
            else if (PySequence_Check(obj)) {
                retval = SOBJ_LISTTUP;
            }
            else if (PySlice_Check(obj) || obj == Py_Ellipsis ||
-                    obj == Py_None) {
+                   obj == Py_None) {
                retval = SOBJ_NOTFANCY;
            }
            Py_DECREF(obj);
-           if (retval > SOBJ_ISFANCY) return retval;
+           if (retval > SOBJ_ISFANCY) {
+               return retval;
+           }
        }
    }
    return retval;
 }

-/* Called when treating array object like a mapping -- called first from
-   Python when using a[object] unless object is a standard slice object
-   (not an extended one).
-
-*/
-
-/* There are two situations:
-
-   1 - the subscript is a standard view and a reference to the
-   array can be returned
-
-   2 - the subscript uses Boolean masks or integer indexing and
-   therefore a new array is created and returned.
-
-*/
+/*
+ * Called when treating array object like a mapping -- called first from
+ * Python when using a[object] unless object is a standard slice object
+ * (not an extended one).
+ *
+ * There are two situations:
+ *
+ *   1 - the subscript is a standard view and a reference to the
+ *       array can be returned
+ *
+ *   2 - the subscript uses Boolean masks or integer indexing and
+ *       therefore a new array is created and returned.
+ */

 /* Always returns arrays */
-
 static PyObject *iter_subscript(PyArrayIterObject *, PyObject *);
@@ -2800,24 +2912,22 @@ array_subscript_simple(PyArrayObject *self, PyObject *op)
     PyErr_Clear();

     /* Standard (view-based) Indexing */
-    if ((nd = parse_index(self, op, dimensions, strides, &offset))
-        == -1) return NULL;
-
+    if ((nd = parse_index(self, op, dimensions, strides, &offset)) == -1) {
+        return NULL;
+    }
     /* This will only work if new array will be a view */
     Py_INCREF(self->descr);
-    if ((other = (PyArrayObject *)                                  \
+    if ((other = (PyArrayObject *)
          PyArray_NewFromDescr(self->ob_type, self->descr,
                               nd, dimensions,
                               strides, self->data+offset,
                               self->flags,
-                              (PyObject *)self)) == NULL)
+                              (PyObject *)self)) == NULL) {
         return NULL;
-
+    }
     other->base = (PyObject *)self;
     Py_INCREF(self);
-
     PyArray_UpdateFlags(other, UPDATE_ALL);
-
     return (PyObject *)other;
 }
@@ -2827,21 +2937,19 @@ array_subscript(PyArrayObject *self, PyObject *op)
     int nd, fancy;
     PyArrayObject *other;
     PyArrayMapIterObject *mit;
+    PyObject *obj;

     if (PyString_Check(op) || PyUnicode_Check(op)) {
         if (self->descr->names) {
-            PyObject *obj;
             obj = PyDict_GetItem(self->descr->fields, op);
             if (obj != NULL) {
                 PyArray_Descr *descr;
                 int offset;
                 PyObject *title;

-                if (PyArg_ParseTuple(obj, "Oi|O",
-                                     &descr, &offset, &title)) {
+                if (PyArg_ParseTuple(obj, "Oi|O", &descr, &offset, &title)) {
                     Py_INCREF(descr);
-                    return PyArray_GetField(self, descr,
-                                            offset);
+                    return PyArray_GetField(self, descr, offset);
                 }
             }
         }
@@ -2852,26 +2960,58 @@ array_subscript(PyArrayObject *self, PyObject *op)
         return NULL;
     }

+    /* Check for multiple field access */
+    if (self->descr->names && PySequence_Check(op) && !PyTuple_Check(op)) {
+        int seqlen, i;
+        seqlen = PySequence_Size(op);
+        for (i = 0; i < seqlen; i++) {
+            obj = PySequence_GetItem(op, i);
+            if (!PyString_Check(obj) && !PyUnicode_Check(obj)) {
+                Py_DECREF(obj);
+                break;
+            }
+            Py_DECREF(obj);
+        }
+        /*
+         * extract multiple fields if all elements in sequence
+         * are either string or unicode (i.e. no break occurred).
+         */
+        fancy = ((seqlen > 0) && (i == seqlen));
+        if (fancy) {
+            PyObject *_numpy_internal;
+            _numpy_internal = PyImport_ImportModule("numpy.core._internal");
+            if (_numpy_internal == NULL) {
+                return NULL;
+            }
+            obj = PyObject_CallMethod(_numpy_internal,
+                    "_index_fields", "OO", self, op);
+            Py_DECREF(_numpy_internal);
+            return obj;
+        }
+    }
+
     if (op == Py_Ellipsis) {
         Py_INCREF(self);
         return (PyObject *)self;
     }

     if (self->nd == 0) {
-        if (op == Py_None)
+        if (op == Py_None) {
             return add_new_axes_0d(self, 1);
+        }
         if (PyTuple_Check(op)) {
             if (0 == PyTuple_GET_SIZE(op))  {
                 Py_INCREF(self);
                 return (PyObject *)self;
             }
-            if ((nd = count_new_axes_0d(op)) == -1)
+            if ((nd = count_new_axes_0d(op)) == -1) {
                 return NULL;
+            }
             return add_new_axes_0d(self, nd);
         }
         /* Allow Boolean mask selection also */
-        if ((PyArray_Check(op) && (PyArray_DIMS(op)==0) &&
-             PyArray_ISBOOL(op))) {
+        if ((PyArray_Check(op) && (PyArray_DIMS(op)==0)
+             && PyArray_ISBOOL(op))) {
             if (PyObject_IsTrue(op)) {
                 Py_INCREF(self);
                 return (PyObject *)self;
@@ -2887,28 +3027,30 @@ array_subscript(PyArrayObject *self, PyObject *op)
                                             NULL);
             }
         }
-        PyErr_SetString(PyExc_IndexError,
-                        "0-d arrays can't be indexed.");
+        PyErr_SetString(PyExc_IndexError, "0-d arrays can't be indexed.");
        return NULL;
     }

     fancy = fancy_indexing_check(op);
-
     if (fancy != SOBJ_NOTFANCY) {
         int oned;
+
         oned = ((self->nd == 1) &&
                 !(PyTuple_Check(op) && PyTuple_GET_SIZE(op) > 1));

         /* wrap arguments into a mapiter object */
-        mit = (PyArrayMapIterObject *)\
-            PyArray_MapIterNew(op, oned, fancy);
-        if (mit == NULL) return NULL;
+        mit = (PyArrayMapIterObject *) PyArray_MapIterNew(op, oned, fancy);
+        if (mit == NULL) {
+            return NULL;
+        }
         if (oned) {
             PyArrayIterObject *it;
             PyObject *rval;
-            it = (PyArrayIterObject *)\
-                PyArray_IterNew((PyObject *)self);
-            if (it == NULL) {Py_DECREF(mit); return NULL;}
+            it = (PyArrayIterObject *) PyArray_IterNew((PyObject *)self);
+            if (it == NULL) {
+                Py_DECREF(mit);
+                return NULL;
+            }
             rval = iter_subscript(it, mit->indexobj);
             Py_DECREF(it);
             Py_DECREF(mit);
@@ -2924,15 +3066,13 @@ array_subscript(PyArrayObject *self, PyObject *op)
 }


-/* Another assignment hacked by using CopyObject.  */
-
-/* This only works if subscript returns a standard view.  */
-
-/* Again there are two cases.  In the first case, PyArray_CopyObject
-   can be used.  In the second case, a new indexing function has to be
-   used.
-*/
-
+/*
+ * Another assignment hacked by using CopyObject.
+ * This only works if subscript returns a standard view.
+ * Again there are two cases.  In the first case, PyArray_CopyObject
+ * can be used.  In the second case, a new indexing function has to be
+ * used.
+ */
 static int iter_ass_subscript(PyArrayIterObject *, PyObject *, PyObject *);

 static int
@@ -2952,12 +3092,16 @@ array_ass_sub_simple(PyArrayObject *self, PyObject *index, PyObject *op)

     if (PyArray_CheckExact(self)) {
         tmp = (PyArrayObject *)array_subscript_simple(self, index);
-        if (tmp == NULL) return -1;
+        if (tmp == NULL) {
+            return -1;
+        }
     }
     else {
         PyObject *tmp0;
         tmp0 = PyObject_GetItem((PyObject *)self, index);
-        if (tmp0 == NULL) return -1;
+        if (tmp0 == NULL) {
+            return -1;
+        }
         if (!PyArray_Check(tmp0)) {
             PyErr_SetString(PyExc_RuntimeError,
                             "Getitem not returning array.");
@@ -2990,10 +3134,14 @@ _tuple_of_integers(PyObject *seq, intp *vals, int maxvals)

     for(i=0; i<maxvals; i++) {
         obj = PyTuple_GET_ITEM(seq, i);
-        if ((PyArray_Check(obj) && PyArray_NDIM(obj) > 0) ||
-            PyList_Check(obj)) return -1;
+        if ((PyArray_Check(obj) && PyArray_NDIM(obj) > 0)
+            || PyList_Check(obj)) {
+            return -1;
+        }
         temp = PyArray_PyIntAsIntp(obj);
-        if (error_converting(temp)) return -1;
+        if (error_converting(temp)) {
+            return -1;
+        }
         vals[i] = temp;
     }
     return 0;
@@ -3023,26 +3171,27 @@ array_ass_sub(PyArrayObject *self, PyObject *index, PyObject *op)
          !PySequence_Check(index))) {
         intp value;
         value = PyArray_PyIntAsIntp(index);
-        if (PyErr_Occurred())
+        if (PyErr_Occurred()) {
             PyErr_Clear();
-        else
+        }
+        else {
             return array_ass_big_item(self, value, op);
+        }
     }

     if (PyString_Check(index) || PyUnicode_Check(index)) {
         if (self->descr->names) {
             PyObject *obj;
+
             obj = PyDict_GetItem(self->descr->fields, index);
             if (obj != NULL) {
                 PyArray_Descr *descr;
                 int offset;
                 PyObject *title;

-                if (PyArg_ParseTuple(obj, "Oi|O",
-                                     &descr, &offset, &title)) {
+                if (PyArg_ParseTuple(obj, "Oi|O", &descr, &offset, &title)) {
                     Py_INCREF(descr);
-                    return PyArray_SetField(self, descr,
-                                            offset, op);
+                    return PyArray_SetField(self, descr, offset, op);
                 }
             }
         }
@@ -3054,17 +3203,19 @@ array_ass_sub(PyArrayObject *self, PyObject *index, PyObject *op)
     }

     if (self->nd == 0) {
-        /* Several different exceptions to the 0-d no-indexing rule
-
-           1) ellipses
-           2) empty tuple
-           3) Using newaxis (None)
-           4) Boolean mask indexing
-        */
-        if (index == Py_Ellipsis || index == Py_None ||         \
-            (PyTuple_Check(index) && (0 == PyTuple_GET_SIZE(index) || \
-                                      count_new_axes_0d(index) > 0)))
+        /*
+         * Several different exceptions to the 0-d no-indexing rule
+         *
+         *  1) ellipses
+         *  2) empty tuple
+         *  3) Using newaxis (None)
+         *  4) Boolean mask indexing
+         */
+        if (index == Py_Ellipsis || index == Py_None ||
+            (PyTuple_Check(index) && (0 == PyTuple_GET_SIZE(index) ||
+                                      count_new_axes_0d(index) > 0))) {
             return self->descr->f->setitem(op, self->data, self);
+        }
         if (PyBool_Check(index) || PyArray_IsScalar(index, Bool) ||
             (PyArray_Check(index) && (PyArray_DIMS(index)==0) &&
              PyArray_ISBOOL(index))) {
@@ -3075,8 +3226,7 @@ array_ass_sub(PyArrayObject *self, PyObject *index, PyObject *op)
                 return 0;
             }
         }
-        PyErr_SetString(PyExc_IndexError,
-                        "0-d arrays can't be indexed.");
+        PyErr_SetString(PyExc_IndexError, "0-d arrays can't be indexed.");
         return -1;
     }
@@ -3086,8 +3236,11 @@ array_ass_sub(PyArrayObject *self, PyObject *index, PyObject *op)
         && (_tuple_of_integers(index, vals, self->nd) >= 0)) {
         int i;
         char *item;
-        for(i=0; i<self->nd; i++) {
-            if (vals[i] < 0) vals[i] += self->dimensions[i];
+
+        for (i = 0; i < self->nd; i++) {
+            if (vals[i] < 0) {
+                vals[i] += self->dimensions[i];
+            }
             if ((vals[i] < 0) || (vals[i] >= self->dimensions[i])) {
                 PyErr_Format(PyExc_IndexError,
                              "index (%"INTP_FMT") out of range "\
@@ -3097,25 +3250,27 @@ array_ass_sub(PyArrayObject *self, PyObject *index, PyObject *op)
             }
         }
         item = PyArray_GetPtr(self, vals);
-        /* fprintf(stderr, "Here I am...\n");*/
         return self->descr->f->setitem(op, item, self);
     }
     PyErr_Clear();

     fancy = fancy_indexing_check(index);
-
     if (fancy != SOBJ_NOTFANCY) {
         oned = ((self->nd == 1) &&
                 !(PyTuple_Check(index) && PyTuple_GET_SIZE(index) > 1));
-
-        mit = (PyArrayMapIterObject *)                  \
-            PyArray_MapIterNew(index, oned, fancy);
-        if (mit == NULL) return -1;
+        mit = (PyArrayMapIterObject *) PyArray_MapIterNew(index, oned, fancy);
+        if (mit == NULL) {
+            return -1;
+        }
         if (oned) {
             PyArrayIterObject *it;
             int rval;
+
             it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)self);
-            if (it == NULL) {Py_DECREF(mit); return -1;}
+            if (it == NULL) {
+                Py_DECREF(mit);
+                return -1;
+            }
             rval = iter_ass_subscript(it, mit->indexobj, op);
             Py_DECREF(it);
             Py_DECREF(mit);
@@ -3131,10 +3286,11 @@ array_ass_sub(PyArrayObject *self, PyObject *index, PyObject *op)
 }


-/* There are places that require that array_subscript return a PyArrayObject
-   and not possibly a scalar.  Thus, this is the function exposed to
-   Python so that 0-dim arrays are passed as scalars
-*/
+/*
+ * There are places that require that array_subscript return a PyArrayObject
+ * and not possibly a scalar.  Thus, this is the function exposed to
+ * Python so that 0-dim arrays are passed as scalars
+ */


 static PyObject *
@@ -3144,13 +3300,14 @@ array_subscript_nice(PyArrayObject *self, PyObject *op)
     PyArrayObject *mp;
     intp vals[MAX_DIMS];

-    if (PyInt_Check(op) || PyArray_IsScalar(op, Integer) || \
+    if (PyInt_Check(op) || PyArray_IsScalar(op, Integer) ||
         PyLong_Check(op) || (PyIndex_Check(op) &&
                              !PySequence_Check(op))) {
         intp value;
         value = PyArray_PyIntAsIntp(op);
-        if (PyErr_Occurred())
+        if (PyErr_Occurred()) {
             PyErr_Clear();
+        }
         else {
             return array_item_nice(self, (Py_ssize_t) value);
         }
@@ -3161,8 +3318,11 @@ array_subscript_nice(PyArrayObject *self, PyObject *op)
         && (_tuple_of_integers(op, vals, self->nd) >= 0)) {
         int i;
         char *item;
-        for(i=0; i<self->nd; i++) {
-            if (vals[i] < 0) vals[i] += self->dimensions[i];
+
+        for (i = 0; i < self->nd; i++) {
+            if (vals[i] < 0) {
+                vals[i] += self->dimensions[i];
+            }
             if ((vals[i] < 0) || (vals[i] >= self->dimensions[i])) {
                 PyErr_Format(PyExc_IndexError,
                              "index (%"INTP_FMT") out of range "\
@@ -3177,27 +3337,29 @@ array_subscript_nice(PyArrayObject *self, PyObject *op)
     PyErr_Clear();

     mp = (PyArrayObject *)array_subscript(self, op);
+    /*
+     * mp could be a scalar if op is not an Int, Scalar, Long or other Index
+     * object and still convertable to an integer (so that the code goes to
+     * array_subscript_simple).  So, this cast is a bit dangerous..
+     */

-    /* mp could be a scalar if op is not an Int, Scalar, Long or other Index
-       object and still convertable to an integer (so that the code goes to
-       array_subscript_simple).  So, this cast is a bit dangerous..
-    */
-
-    /* The following is just a copy of PyArray_Return with an
-       additional logic in the nd == 0 case.
-    */
-
-    if (mp == NULL) return NULL;
+    /*
+     * The following is just a copy of PyArray_Return with an
+     * additional logic in the nd == 0 case.
+     */
+    if (mp == NULL) {
+        return NULL;
+    }

     if (PyErr_Occurred()) {
         Py_XDECREF(mp);
         return NULL;
     }
-
     if (PyArray_Check(mp) && mp->nd == 0) {
         Bool noellipses = TRUE;
-        if ((op == Py_Ellipsis) || PyString_Check(op) || PyUnicode_Check(op))
+        if ((op == Py_Ellipsis) || PyString_Check(op) || PyUnicode_Check(op)) {
             noellipses = FALSE;
+        }
         else if (PyBool_Check(op) || PyArray_IsScalar(op, Bool) ||
                  (PyArray_Check(op) && (PyArray_DIMS(op)==0) &&
                   PyArray_ISBOOL(op))) {
@@ -3206,12 +3368,14 @@ array_subscript_nice(PyArrayObject *self, PyObject *op)
         else if (PySequence_Check(op)) {
             int n, i;
             PyObject *temp;
+
             n = PySequence_Size(op);
-            i=0;
+            i = 0;
             while (i<n && noellipses) {
                 temp = PySequence_GetItem(op, i);
-                if (temp == Py_Ellipsis)
+                if (temp == Py_Ellipsis) {
                     noellipses = FALSE;
+                }
                 Py_DECREF(temp);
                 i++;
             }
@@ -3249,15 +3413,15 @@ static PyMappingMethods array_as_mapping = {
 static Py_ssize_t
 array_getsegcount(PyArrayObject *self, Py_ssize_t *lenp)
 {
-    if (lenp)
+    if (lenp) {
         *lenp = PyArray_NBYTES(self);
-
+    }
     if (PyArray_ISONESEGMENT(self)) {
         return 1;
     }
-
-    if (lenp)
+    if (lenp) {
         *lenp = 0;
+    }
     return 0;
 }
@@ -3269,7 +3433,6 @@ array_getreadbuf(PyArrayObject *self, Py_ssize_t segment, void **ptrptr)
                         "accessing non-existing array segment");
         return -1;
     }
-
     if (PyArray_ISONESEGMENT(self)) {
         *ptrptr = self->data;
         return PyArray_NBYTES(self);
@@ -3283,10 +3446,11 @@ array_getreadbuf(PyArrayObject *self, Py_ssize_t segment, void **ptrptr)
 static Py_ssize_t
 array_getwritebuf(PyArrayObject *self, Py_ssize_t segment, void **ptrptr)
 {
-    if (PyArray_CHKFLAGS(self, WRITEABLE))
+    if (PyArray_CHKFLAGS(self, WRITEABLE)) {
         return array_getreadbuf(self, segment, (void **) ptrptr);
+    }
     else {
-        PyErr_SetString(PyExc_ValueError, "array cannot be "\
+        PyErr_SetString(PyExc_ValueError, "array cannot be "
                         "accessed as a writeable buffer");
         return -1;
     }
@@ -3300,14 +3464,14 @@ array_getcharbuf(PyArrayObject *self, Py_ssize_t segment, constchar **ptrptr)

 static PyBufferProcs array_as_buffer = {
 #if PY_VERSION_HEX >= 0x02050000
-    (readbufferproc)array_getreadbuf,    /*bf_getreadbuffer*/
-    (writebufferproc)array_getwritebuf,  /*bf_getwritebuffer*/
-    (segcountproc)array_getsegcount,     /*bf_getsegcount*/
-    (charbufferproc)array_getcharbuf,    /*bf_getcharbuffer*/
+    (readbufferproc)array_getreadbuf,       /*bf_getreadbuffer*/
+    (writebufferproc)array_getwritebuf,     /*bf_getwritebuffer*/
+    (segcountproc)array_getsegcount,        /*bf_getsegcount*/
+    (charbufferproc)array_getcharbuf,       /*bf_getcharbuffer*/
 #else
     (getreadbufferproc)array_getreadbuf,    /*bf_getreadbuffer*/
     (getwritebufferproc)array_getwritebuf,  /*bf_getwritebuffer*/
-    (getsegcountproc)array_getsegcount,     /*bf_getsegcount*/
+    (getsegcountproc)array_getsegcount,     /*bf_getsegcount*/
     (getcharbufferproc)array_getcharbuf,    /*bf_getcharbuffer*/
 #endif
 };
@@ -3321,40 +3485,40 @@

 typedef struct {
-    PyObject *add,
-        *subtract,
-        *multiply,
-        *divide,
-        *remainder,
-        *power,
-        *square,
-        *reciprocal,
-        *ones_like,
-        *sqrt,
-        *negative,
-        *absolute,
-        *invert,
-        *left_shift,
-        *right_shift,
-        *bitwise_and,
-        *bitwise_xor,
-        *bitwise_or,
-        *less,
-        *less_equal,
-        *equal,
-        *not_equal,
-        *greater,
-        *greater_equal,
-        *floor_divide,
-        *true_divide,
-        *logical_or,
-        *logical_and,
-        *floor,
-        *ceil,
-        *maximum,
-        *minimum,
-        *rint,
-        *conjugate;
+    PyObject *add;
+    PyObject *subtract;
+    PyObject *multiply;
+    PyObject *divide;
+    PyObject *remainder;
+    PyObject *power;
+    PyObject *square;
+    PyObject *reciprocal;
+    PyObject *ones_like;
+    PyObject *sqrt;
+    PyObject *negative;
+    PyObject *absolute;
+    PyObject *invert;
+    PyObject *left_shift;
+    PyObject *right_shift;
+    PyObject *bitwise_and;
+    PyObject *bitwise_xor;
+    PyObject *bitwise_or;
+    PyObject *less;
+    PyObject *less_equal;
+    PyObject *equal;
+    PyObject *not_equal;
+    PyObject *greater;
+    PyObject *greater_equal;
+    PyObject *floor_divide;
+    PyObject *true_divide;
+    PyObject *logical_or;
+    PyObject *logical_and;
+    PyObject *floor;
+    PyObject *ceil;
+    PyObject *maximum;
+    PyObject *minimum;
+    PyObject *rint;
+    PyObject *conjugate;
 } NumericOps;

 static NumericOps n_ops; /* NB: static objects initialized to zero */
@@ -3472,21 +3636,19 @@ PyArray_GetNumericOps(void)
 static PyObject *
 _get_keywords(int rtype, PyArrayObject *out)
 {
-    PyObject *kwds=NULL;
+    PyObject *kwds = NULL;
     if (rtype != PyArray_NOTYPE || out != NULL) {
         kwds = PyDict_New();
         if (rtype != PyArray_NOTYPE) {
             PyArray_Descr *descr;
             descr = PyArray_DescrFromType(rtype);
             if (descr) {
-                PyDict_SetItemString(kwds, "dtype",
-                                     (PyObject *)descr);
+                PyDict_SetItemString(kwds, "dtype", (PyObject *)descr);
                 Py_DECREF(descr);
             }
         }
         if (out != NULL) {
-            PyDict_SetItemString(kwds, "out",
-                                 (PyObject *)out);
+            PyDict_SetItemString(kwds, "out", (PyObject *)out);
         }
     }
     return kwds;
@@ -3496,7 +3658,7 @@ static PyObject *
 PyArray_GenericReduceFunction(PyArrayObject *m1, PyObject *op, int axis,
                               int rtype, PyArrayObject *out)
 {
-    PyObject *args, *ret=NULL, *meth;
+    PyObject *args, *ret = NULL, *meth;
     PyObject *kwds;
     if (op == NULL) {
         Py_INCREF(Py_NotImplemented);
@@ -3519,7 +3681,7 @@ static PyObject *
 PyArray_GenericAccumulateFunction(PyArrayObject *m1, PyObject *op, int axis,
                                   int rtype, PyArrayObject *out)
 {
-    PyObject *args, *ret=NULL, *meth;
+    PyObject *args, *ret = NULL, *meth;
     PyObject *kwds;
     if (op == NULL) {
         Py_INCREF(Py_NotImplemented);
@@ -3640,8 +3802,9 @@ array_power_is_scalar(PyObject *o2, double* exp)
         PyObject* value = PyNumber_Index(o2);
         Py_ssize_t val;
         if (value==NULL) {
-            if (PyErr_Occurred())
+            if (PyErr_Occurred()) {
                 PyErr_Clear();
+            }
             return 0;
         }
         val = PyInt_AsSsize_t(value);
@@ -3658,8 +3821,10 @@ array_power_is_scalar(PyObject *o2, double* exp)

 /* optimize float array or complex array to a scalar power */
 static PyObject *
-fast_scalar_power(PyArrayObject *a1, PyObject *o2, int inplace) {
+fast_scalar_power(PyArrayObject *a1, PyObject *o2, int inplace)
+{
     double exp;
+
     if (PyArray_Check(a1) && array_power_is_scalar(o2, &exp)) {
         PyObject *fastop = NULL;
         if (PyArray_ISFLOAT(a1) || PyArray_ISCOMPLEX(a1)) {
@@ -3675,33 +3840,37 @@ fast_scalar_power(PyArrayObject *a1, PyObject *o2, int inplace) {
                 }
                 else {
                     return PyArray_Copy(a1);
                 }
-            } else if (exp == -1.0) {
+            }
+            else if (exp == -1.0) {
                 fastop = n_ops.reciprocal;
-            } else if (exp ==  0.0) {
+            }
+            else if (exp ==  0.0) {
                 fastop = n_ops.ones_like;
-            } else if (exp ==  0.5) {
+            }
+            else if (exp ==  0.5) {
                 fastop = n_ops.sqrt;
-            } else if (exp ==  2.0) {
+            }
+            else if (exp ==  2.0) {
                 fastop = n_ops.square;
-            } else {
+            }
+            else {
                 return NULL;
             }
+
             if (inplace) {
-                return PyArray_GenericInplaceUnaryFunction(a1,
-                                                           fastop);
+                return PyArray_GenericInplaceUnaryFunction(a1, fastop);
             }
             else {
-                return PyArray_GenericUnaryFunction(a1,
-                                                    fastop);
+                return PyArray_GenericUnaryFunction(a1, fastop);
             }
         }
         else if (exp==2.0) {
             fastop = n_ops.multiply;
             if (inplace) {
-                return PyArray_GenericInplaceBinaryFunction \
+                return PyArray_GenericInplaceBinaryFunction
                     (a1, (PyObject *)a1, fastop);
             }
             else {
-                return PyArray_GenericBinaryFunction \
+                return PyArray_GenericBinaryFunction
                     (a1, (PyObject *)a1, fastop);
             }
         }
@@ -3877,7 +4046,9 @@ array_any_nonzero(PyArrayObject *mp)
     Bool anyTRUE = FALSE;

     it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)mp);
-    if (it==NULL) return anyTRUE;
+    if (it == NULL) {
+        return anyTRUE;
+    }
     index = it->size;
     while(index--) {
         if (mp->descr->f->nonzero(it->dataptr, mp)) {
@@ -3894,6 +4065,7 @@ static int
 _array_nonzero(PyArrayObject *mp)
 {
     intp n;
+
     n = PyArray_SIZE(mp);
     if (n == 1) {
         return mp->descr->f->nonzero(mp->data, mp);
@@ -3918,7 +4090,9 @@ array_divmod(PyArrayObject *op1, PyObject *op2)
     PyObject *divp, *modp, *result;

     divp = array_floor_divide(op1, op2);
-    if (divp == NULL) return NULL;
+    if (divp == NULL) {
+        return NULL;
+    }
     modp = array_remainder(op1, op2);
     if (modp == NULL) {
         Py_DECREF(divp);
@@ -3941,7 +4115,9 @@ array_int(PyArrayObject *v)
         return NULL;
     }
     pv = v->descr->f->getitem(v->data, v);
-    if (pv == NULL) return NULL;
+    if (pv == NULL) {
+        return NULL;
+    }
     if (pv->ob_type->tp_as_number == 0) {
         PyErr_SetString(PyExc_TypeError, "cannot convert to an int; "\
                         "scalar object is not a number");
@@ -3970,7 +4146,9 @@ array_float(PyArrayObject *v)
         return NULL;
     }
     pv = v->descr->f->getitem(v->data, v);
-    if (pv == NULL) return NULL;
+    if (pv == NULL) {
+        return NULL;
+    }
     if (pv->ob_type->tp_as_number == 0) {
         PyErr_SetString(PyExc_TypeError, "cannot convert to a "\
                         "float; scalar object is not a number");
@@ -4066,8 +4244,7 @@ array_hex(PyArrayObject *v)
 static PyObject *
 _array_copy_nice(PyArrayObject *self)
 {
-    return PyArray_Return((PyArrayObject *) \
-                          PyArray_Copy(self));
+    return PyArray_Return((PyArrayObject *) PyArray_Copy(self));
 }

 #if PY_VERSION_HEX >= 0x02050000
@@ -4109,8 +4286,10 @@ static PyNumberMethods array_as_number = {
     (unaryfunc)array_oct,               /*nb_oct*/
     (unaryfunc)array_hex,               /*nb_hex*/

-    /*This code adds augmented assignment functionality*/
-    /*that was made available in Python 2.0*/
+    /*
+     * This code adds augmented assignment functionality
+     * that was made available in Python 2.0
+     */
     (binaryfunc)array_inplace_add,      /*inplace_add*/
     (binaryfunc)array_inplace_subtract, /*inplace_subtract*/
     (binaryfunc)array_inplace_multiply, /*inplace_multiply*/
@@ -4160,15 +4339,26 @@ array_slice(PyArrayObject *self, Py_ssize_t ilow,
     }

     l=self->dimensions[0];
-    if (ilow < 0) ilow = 0;
-    else if (ilow > l) ilow = l;
-    if (ihigh < ilow) ihigh = ilow;
-    else if (ihigh > l) ihigh = l;
+    if (ilow < 0) {
+        ilow = 0;
+    }
+    else if (ilow > l) {
+        ilow = l;
+    }
+    if (ihigh < ilow) {
+        ihigh = ilow;
+    }
+    else if (ihigh > l) {
+        ihigh = l;
+    }

     if (ihigh != ilow) {
         data = index2ptr(self, ilow);
-        if (data == NULL) return NULL;
-    } else {
+        if (data == NULL) {
+            return NULL;
+        }
+    }
+    else {
         data = self->data;
     }
@@ -4180,7 +4370,9 @@ array_slice(PyArrayObject *self, Py_ssize_t ilow,
                                               self->strides, data,
                                               self->flags, (PyObject *)self);
     self->dimensions[0] = l;
-    if (r == NULL) return NULL;
+    if (r == NULL) {
+        return NULL;
+    }
     r->base = (PyObject *)self;
     Py_INCREF(self);
     PyArray_UpdateFlags(r, UPDATE_ALL);
@@ -4204,9 +4396,9 @@ array_ass_slice(PyArrayObject *self, Py_ssize_t ilow,
                         "array is not writeable");
         return -1;
     }
-    if ((tmp = (PyArrayObject *)array_slice(self, ilow, ihigh)) \
-        == NULL)
+    if ((tmp = (PyArrayObject *)array_slice(self, ilow, ihigh)) == NULL) {
         return -1;
+    }
     ret = PyArray_CopyObject(tmp, v);
     Py_DECREF(tmp);
@@ -4223,7 +4415,9 @@ array_contains(PyArrayObject *self, PyObject *el)
     res = PyArray_EnsureAnyArray(PyObject_RichCompare((PyObject *)self,
                                                       el, Py_EQ));
-    if (res == NULL) return -1;
+    if (res == NULL) {
+        return -1;
+    }
     ret = array_any_nonzero((PyArrayObject *)res);
     Py_DECREF(res);
     return ret;
@@ -4268,11 +4462,12 @@ dump_data(char **string, int *n, int *max_n, char *data, int nd,
     char *ostring;
     int i, N;

-#define CHECK_MEMORY if (*n >= *max_n-16) { *max_n *= 2; \
-        *string = (char *)_pya_realloc(*string, *max_n); }
+#define CHECK_MEMORY do { if (*n >= *max_n-16) { \
+        *max_n *= 2; \
+        *string = (char *)_pya_realloc(*string, *max_n); \
+    }} while (0)

     if (nd == 0) {
-
         if ((op = descr->f->getitem(data, self)) == NULL) {
             return -1;
         }
@@ -4284,33 +4479,33 @@ dump_data(char **string, int *n, int *max_n, char *data, int nd,
         ostring = PyString_AsString(sp);
        N = 
PyString_Size(sp)*sizeof(char); *n += N; - CHECK_MEMORY - memmove(*string + (*n - N), ostring, N); + CHECK_MEMORY; + memmove(*string + (*n - N), ostring, N); Py_DECREF(sp); Py_DECREF(op); return 0; } else { - CHECK_MEMORY - (*string)[*n] = '['; + CHECK_MEMORY; + (*string)[*n] = '['; *n += 1; - for(i = 0; i < dimensions[0]; i++) { + for (i = 0; i < dimensions[0]; i++) { if (dump_data(string, n, max_n, data + (*strides)*i, nd - 1, dimensions + 1, strides + 1, self) < 0) { return -1; } - CHECK_MEMORY - if (i < dimensions[0] - 1) { - (*string)[*n] = ','; - (*string)[*n+1] = ' '; - *n += 2; - } + CHECK_MEMORY; + if (i < dimensions[0] - 1) { + (*string)[*n] = ','; + (*string)[*n+1] = ' '; + *n += 2; + } } - CHECK_MEMORY - (*string)[*n] = ']'; - *n += 1; + CHECK_MEMORY; + (*string)[*n] = ']'; + *n += 1; return 0; } @@ -4369,8 +4564,8 @@ static PyObject *PyArray_StrFunction = NULL; static PyObject *PyArray_ReprFunction = NULL; /*NUMPY_API - Set the array print function to be a Python function. -*/ + * Set the array print function to be a Python function. 
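The `CHECK_MEMORY` change in the `dump_data` hunk above wraps the macro body in `do { ... } while (0)` so that the macro expands to a single statement and can safely be written with a trailing semicolon, including as the body of an unbraced `if`. A standalone sketch of the same pattern (hypothetical `GROW` macro, not NumPy's):

```c
#include <stdlib.h>

/* Simplified version of the CHECK_MEMORY pattern from the diff: the
 * do { ... } while (0) wrapper makes the macro one statement, so
 * "GROW(...);" parses correctly even as the body of an if/else. */
#define GROW(buf, n, cap)  do {                        \
    if ((n) >= (cap) - 16) {                           \
        (cap) *= 2;                                    \
        (buf) = (char *)realloc((buf), (cap));         \
    }                                                  \
} while (0)

static int grow_demo(void)
{
    int n = 120, cap = 128;
    char *buf = (char *)malloc(cap);

    if (buf != NULL)
        GROW(buf, n, cap);   /* safe inside an unbraced if/else */
    else
        return -1;

    free(buf);
    return cap;              /* capacity doubled by the macro */
}
```

Without the `do`/`while (0)` wrapper, the semicolon after `GROW(buf, n, cap);` would terminate the `if` early and make the following `else` a syntax error.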
+ */ static void PyArray_SetStringFunction(PyObject *op, int repr) { @@ -4381,7 +4576,8 @@ PyArray_SetStringFunction(PyObject *op, int repr) Py_XINCREF(op); /* Remember new callback */ PyArray_ReprFunction = op; - } else { + } + else { /* Dispose of previous callback */ Py_XDECREF(PyArray_StrFunction); /* Add a reference to new callback */ @@ -4398,7 +4594,8 @@ array_repr(PyArrayObject *self) if (PyArray_ReprFunction == NULL) { s = array_repr_builtin(self, 1); - } else { + } + else { arglist = Py_BuildValue("(O)", self); s = PyEval_CallObject(PyArray_ReprFunction, arglist); Py_DECREF(arglist); @@ -4413,7 +4610,8 @@ array_str(PyArrayObject *self) if (PyArray_StrFunction == NULL) { s = array_repr_builtin(self, 0); - } else { + } + else { arglist = Py_BuildValue("(O)", self); s = PyEval_CallObject(PyArray_StrFunction, arglist); Py_DECREF(arglist); @@ -4483,29 +4681,46 @@ _myunincmp(PyArray_UCS4 *s1, PyArray_UCS4 *s2, int len1, int len2) memcpy(s2t, s2, size); } val = PyArray_CompareUCS4(s1t, s2t, MIN(len1,len2)); - if ((val != 0) || (len1 == len2)) goto finish; - if (len2 > len1) {sptr = s2t+len1; val = -1; diff=len2-len1;} - else {sptr = s1t+len2; val = 1; diff=len1-len2;} + if ((val != 0) || (len1 == len2)) { + goto finish; + } + if (len2 > len1) { + sptr = s2t+len1; + val = -1; + diff = len2-len1; + } + else { + sptr = s1t+len2; + val = 1; + diff=len1-len2; + } while (diff--) { - if (*sptr != 0) goto finish; + if (*sptr != 0) { + goto finish; + } sptr++; } val = 0; finish: - if (s1t != s1) free(s1t); - if (s2t != s2) free(s2t); + if (s1t != s1) { + free(s1t); + } + if (s2t != s2) { + free(s2t); + } return val; } -/* Compare s1 and s2 which are not necessarily NULL-terminated. - s1 is of length len1 - s2 is of length len2 - If they are NULL terminated, then stop comparison. -*/ +/* + * Compare s1 and s2 which are not necessarily NULL-terminated. + * s1 is of length len1 + * s2 is of length len2 + * If they are NULL terminated, then stop comparison. 
+ */ static int _mystrncmp(char *s1, char *s2, int len1, int len2) { @@ -4514,11 +4729,23 @@ _mystrncmp(char *s1, char *s2, int len1, int len2) int diff; val = memcmp(s1, s2, MIN(len1, len2)); - if ((val != 0) || (len1 == len2)) return val; - if (len2 > len1) {sptr = s2+len1; val = -1; diff=len2-len1;} - else {sptr = s1+len2; val = 1; diff=len1-len2;} + if ((val != 0) || (len1 == len2)) { + return val; + } + if (len2 > len1) { + sptr = s2 + len1; + val = -1; + diff = len2 - len1; + } + else { + sptr = s1 + len2; + val = 1; + diff = len1 - len2; + } while (diff--) { - if (*sptr != 0) return val; + if (*sptr != 0) { + return val; + } sptr++; } return 0; /* Only happens if NULLs are everywhere */ @@ -4536,27 +4763,30 @@ _mystrncmp(char *s1, char *s2, int len1, int len2) static void _rstripw(char *s, int n) { int i; - for(i=n-1; i>=1; i--) /* Never strip to length 0. */ - { - int c = s[i]; - if (!c || isspace(c)) - s[i] = 0; - else - break; + for (i = n - 1; i >= 1; i--) { /* Never strip to length 0. */ + int c = s[i]; + + if (!c || isspace(c)) { + s[i] = 0; } + else { + break; + } + } } static void _unistripw(PyArray_UCS4 *s, int n) { int i; - for(i=n-1; i>=1; i--) /* Never strip to length 0. */ - { - PyArray_UCS4 c = s[i]; - if (!c || isspace(c)) - s[i] = 0; - else - break; + for (i = n - 1; i >= 1; i--) { /* Never strip to length 0. 
*/ + PyArray_UCS4 c = s[i]; + if (!c || isspace(c)) { + s[i] = 0; } + else { + break; + } + } } @@ -4695,8 +4925,7 @@ _compare_strings(PyObject *result, PyArrayMultiIterObject *multi, _loop(>=) break; default: - PyErr_SetString(PyExc_RuntimeError, - "bad comparison operator"); + PyErr_SetString(PyExc_RuntimeError, "bad comparison operator"); return -1; } return 0; @@ -4718,7 +4947,7 @@ _strings_richcompare(PyArrayObject *self, PyArrayObject *other, int cmp_op, /* Cast arrays to a common type */ if (self->descr->type_num != other->descr->type_num) { PyObject *new; - if (self->descr->type_num == PyArray_STRING && \ + if (self->descr->type_num == PyArray_STRING && other->descr->type_num == PyArray_UNICODE) { Py_INCREF(other->descr); new = PyArray_FromAny((PyObject *)self, other->descr, @@ -4729,7 +4958,7 @@ _strings_richcompare(PyArrayObject *self, PyArrayObject *other, int cmp_op, Py_INCREF(other); self = (PyArrayObject *)new; } - else if (self->descr->type_num == PyArray_UNICODE && \ + else if (self->descr->type_num == PyArray_UNICODE && other->descr->type_num == PyArray_STRING) { Py_INCREF(self->descr); new = PyArray_FromAny((PyObject *)other, self->descr, @@ -4771,12 +5000,10 @@ _strings_richcompare(PyArrayObject *self, PyArrayObject *other, int cmp_op, } if (self->descr->type_num == PyArray_UNICODE) { - val = _compare_strings(result, mit, cmp_op, _myunincmp, - rstrip); + val = _compare_strings(result, mit, cmp_op, _myunincmp, rstrip); } else { - val = _compare_strings(result, mit, cmp_op, _mystrncmp, - rstrip); + val = _compare_strings(result, mit, cmp_op, _mystrncmp, rstrip); } if (val < 0) { @@ -4788,16 +5015,16 @@ _strings_richcompare(PyArrayObject *self, PyArrayObject *other, int cmp_op, return result; } -/* VOID-type arrays can only be compared equal and not-equal - in which case the fields are all compared by extracting the fields - and testing one at a time... - equality testing is performed using logical_ands on all the fields. 
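The `_mystrncmp` hunk above compares two buffers that need not be NUL-terminated: if the common prefix matches and the lengths differ, the longer buffer compares equal only when its tail is all NUL padding. A self-contained sketch of that rule (hypothetical name, plain `char` version):

```c
#include <string.h>

/* Compare fixed-length, possibly unterminated buffers; trailing NULs
 * in the longer one count as padding, not as extra content. */
static int padded_strncmp(const char *s1, const char *s2, int len1, int len2)
{
    int minlen = len1 < len2 ? len1 : len2;
    int val = memcmp(s1, s2, minlen);
    const char *tail;
    int diff;

    if (val != 0 || len1 == len2) {
        return val;
    }
    if (len2 > len1) {
        tail = s2 + len1;
        val = -1;
        diff = len2 - len1;
    }
    else {
        tail = s1 + len2;
        val = 1;
        diff = len1 - len2;
    }
    while (diff--) {
        if (*tail != 0) {
            return val;      /* real content in the tail: not equal */
        }
        tail++;
    }
    return 0;                /* tail was all NUL padding: equal */
}
```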
- in-equality testing is performed using logical_ors on all the fields. - - VOID-type arrays without fields are compared for equality by comparing their - memory at each location directly (using string-code). -*/ - +/* + * VOID-type arrays can only be compared equal and not-equal + * in which case the fields are all compared by extracting the fields + * and testing one at a time... + * equality testing is performed using logical_ands on all the fields. + * in-equality testing is performed using logical_ors on all the fields. + * + * VOID-type arrays without fields are compared for equality by comparing their + * memory at each location directly (using string-code). + */ static PyObject *array_richcompare(PyArrayObject *, PyObject *, int); @@ -4810,21 +5037,23 @@ _void_compare(PyArrayObject *self, PyArrayObject *other, int cmp_op) return NULL; } if (PyArray_HASFIELDS(self)) { - PyObject *res=NULL, *temp, *a, *b; + PyObject *res = NULL, *temp, *a, *b; PyObject *key, *value, *temp2; PyObject *op; - Py_ssize_t pos=0; + Py_ssize_t pos = 0; op = (cmp_op == Py_EQ ? n_ops.logical_and : n_ops.logical_or); while (PyDict_Next(self->descr->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) continue; + if NPY_TITLE_KEY(key, value) { + continue; + } a = PyArray_EnsureAnyArray(array_subscript(self, key)); - if (a==NULL) { + if (a == NULL) { Py_XDECREF(res); return NULL; } b = array_subscript(other, key); - if (b==NULL) { + if (b == NULL) { Py_XDECREF(res); Py_DECREF(a); return NULL; @@ -4855,8 +5084,10 @@ _void_compare(PyArrayObject *self, PyArrayObject *other, int cmp_op) return res; } else { - /* compare as a string */ - /* assumes self and other have same descr->type */ + /* + * compare as a string. 
Assumes self and + * other have same descr->type + */ return _strings_richcompare(self, other, cmp_op, 0); } } @@ -4867,15 +5098,14 @@ array_richcompare(PyArrayObject *self, PyObject *other, int cmp_op) PyObject *array_other, *result = NULL; int typenum; - switch (cmp_op) - { + switch (cmp_op) { case Py_LT: result = PyArray_GenericBinaryFunction(self, other, - n_ops.less); + n_ops.less); break; case Py_LE: result = PyArray_GenericBinaryFunction(self, other, - n_ops.less_equal); + n_ops.less_equal); break; case Py_EQ: if (other == Py_None) { @@ -4889,15 +5119,14 @@ array_richcompare(PyArrayObject *self, PyObject *other, int cmp_op) typenum = PyArray_NOTYPE; } array_other = PyArray_FromObject(other, - typenum, 0, 0); - /* If not successful, then return False - This fixes code that used to - allow equality comparisons between arrays - and other objects which would give a result - of False - */ - if ((array_other == NULL) || \ - (array_other == Py_None)) { + typenum, 0, 0); + /* + * If not successful, then return False. This fixes code + * that used to allow equality comparisons between arrays + * and other objects which would give a result of False. 
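The `_void_compare` comment above states that structured (VOID) arrays support only `==` and `!=`, computed field by field: per-field results are combined with `logical_and` for equality and `logical_or` for inequality. A minimal scalar analogue over a two-field record (hypothetical struct, not a NumPy type):

```c
/* Field-wise comparison combined the way the diff describes:
 * == is the AND of per-field equality, != the OR of per-field
 * inequality. */
struct rec {
    int a;
    double b;
};

static int rec_compare(const struct rec *x, const struct rec *y, int eq)
{
    int field_a = (x->a == y->a);
    int field_b = (x->b == y->b);

    if (eq) {
        return field_a && field_b;   /* logical_and over fields */
    }
    return !field_a || !field_b;     /* logical_or of per-field != */
}
```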
+ */ + if ((array_other == NULL) || + (array_other == Py_None)) { Py_XDECREF(array_other); PyErr_Clear(); Py_INCREF(Py_False); @@ -4909,16 +5138,17 @@ array_richcompare(PyArrayObject *self, PyObject *other, int cmp_op) array_other = other; } result = PyArray_GenericBinaryFunction(self, - array_other, - n_ops.equal); + array_other, + n_ops.equal); if ((result == Py_NotImplemented) && - (self->descr->type_num == PyArray_VOID)) { + (self->descr->type_num == PyArray_VOID)) { int _res; - _res = PyObject_RichCompareBool \ - ((PyObject *)self->descr, - (PyObject *)\ - PyArray_DESCR(array_other), - Py_EQ); + + _res = PyObject_RichCompareBool + ((PyObject *)self->descr, + (PyObject *)\ + PyArray_DESCR(array_other), + Py_EQ); if (_res < 0) { Py_DECREF(result); Py_DECREF(array_other); @@ -4926,18 +5156,19 @@ array_richcompare(PyArrayObject *self, PyObject *other, int cmp_op) } if (_res) { Py_DECREF(result); - result = _void_compare\ - (self, - (PyArrayObject *)array_other, - cmp_op); + result = _void_compare + (self, + (PyArrayObject *)array_other, + cmp_op); Py_DECREF(array_other); } return result; } - /* If the comparison results in NULL, then the - two array objects can not be compared together so - return zero - */ + /* + * If the comparison results in NULL, then the + * two array objects can not be compared together so + * return zero + */ Py_DECREF(array_other); if (result == NULL) { PyErr_Clear(); @@ -4956,14 +5187,13 @@ array_richcompare(PyArrayObject *self, PyObject *other, int cmp_op) if (typenum != PyArray_OBJECT) { typenum = PyArray_NOTYPE; } - array_other = PyArray_FromObject(other, - typenum, 0, 0); - /* If not successful, then objects cannot be - compared and cannot be equal, therefore, - return True; - */ - if ((array_other == NULL) || \ - (array_other == Py_None)) { + array_other = PyArray_FromObject(other, typenum, 0, 0); + /* + * If not successful, then objects cannot be + * compared and cannot be equal, therefore, + * return True; + */ + if ((array_other 
== NULL) || (array_other == Py_None)) { Py_XDECREF(array_other); PyErr_Clear(); Py_INCREF(Py_True); @@ -4975,16 +5205,17 @@ array_richcompare(PyArrayObject *self, PyObject *other, int cmp_op) array_other = other; } result = PyArray_GenericBinaryFunction(self, - array_other, - n_ops.not_equal); + array_other, + n_ops.not_equal); if ((result == Py_NotImplemented) && - (self->descr->type_num == PyArray_VOID)) { + (self->descr->type_num == PyArray_VOID)) { int _res; - _res = PyObject_RichCompareBool\ - ((PyObject *)self->descr, - (PyObject *)\ - PyArray_DESCR(array_other), - Py_EQ); + + _res = PyObject_RichCompareBool( + (PyObject *)self->descr, + (PyObject *) + PyArray_DESCR(array_other), + Py_EQ); if (_res < 0) { Py_DECREF(result); Py_DECREF(array_other); @@ -4992,10 +5223,10 @@ array_richcompare(PyArrayObject *self, PyObject *other, int cmp_op) } if (_res) { Py_DECREF(result); - result = _void_compare\ - (self, - (PyArrayObject *)array_other, - cmp_op); + result = _void_compare( + self, + (PyArrayObject *)array_other, + cmp_op); Py_DECREF(array_other); } return result; @@ -5010,19 +5241,21 @@ array_richcompare(PyArrayObject *self, PyObject *other, int cmp_op) break; case Py_GT: result = PyArray_GenericBinaryFunction(self, other, - n_ops.greater); + n_ops.greater); break; case Py_GE: result = PyArray_GenericBinaryFunction(self, other, - n_ops.greater_equal); + n_ops.greater_equal); break; default: result = Py_NotImplemented; Py_INCREF(result); - } + } if (result == Py_NotImplemented) { /* Try to handle string comparisons */ - if (self->descr->type_num == PyArray_OBJECT) return result; + if (self->descr->type_num == PyArray_OBJECT) { + return result; + } array_other = PyArray_FromObject(other,PyArray_NOTYPE, 0, 0); if (PyArray_ISSTRING(self) && PyArray_ISSTRING(array_other)) { Py_DECREF(result); @@ -5047,7 +5280,10 @@ PyArray_CheckAxis(PyArrayObject *arr, int *axis, int flags) if ((*axis >= MAX_DIMS) || (n==0)) { if (n != 1) { temp1 = PyArray_Ravel(arr,0); - if (temp1 
== NULL) {*axis=0; return NULL;} + if (temp1 == NULL) { + *axis = 0; + return NULL; + } *axis = PyArray_NDIM(temp1)-1; } else { @@ -5055,7 +5291,9 @@ PyArray_CheckAxis(PyArrayObject *arr, int *axis, int flags) Py_INCREF(temp1); *axis = 0; } - if (!flags) return temp1; + if (!flags) { + return temp1; + } } else { temp1 = (PyObject *)arr; @@ -5065,13 +5303,17 @@ PyArray_CheckAxis(PyArrayObject *arr, int *axis, int flags) temp2 = PyArray_CheckFromAny((PyObject *)temp1, NULL, 0, 0, flags, NULL); Py_DECREF(temp1); - if (temp2 == NULL) return NULL; + if (temp2 == NULL) { + return NULL; + } } else { temp2 = (PyObject *)temp1; } n = PyArray_NDIM(temp2); - if (*axis < 0) *axis += n; + if (*axis < 0) { + *axis += n; + } if ((*axis < 0) || (*axis >= n)) { PyErr_Format(PyExc_ValueError, "axis(=%d) out of bounds", *axis); @@ -5094,8 +5336,11 @@ PyArray_IntTupleFromIntp(int len, intp *vals) { int i; PyObject *intTuple = PyTuple_New(len); - if (!intTuple) goto fail; - for(i=0; i<len; i++) { + + if (!intTuple) { + goto fail; + } + for (i = 0; i < len; i++) { #if SIZEOF_INTP <= SIZEOF_LONG PyObject *o = PyInt_FromLong((long) vals[i]); #else @@ -5108,29 +5353,36 @@ PyArray_IntTupleFromIntp(int len, intp *vals) } PyTuple_SET_ITEM(intTuple, i, o); } + fail: return intTuple; } -/* Returns the number of dimensions or -1 if an error occurred */ -/* vals must be large enough to hold maxvals */ /*NUMPY_API - PyArray_IntpFromSequence -*/ + * PyArray_IntpFromSequence + * Returns the number of dimensions or -1 if an error occurred. + * vals must be large enough to hold maxvals + */ static int PyArray_IntpFromSequence(PyObject *seq, intp *vals, int maxvals) { int nd, i; PyObject *op; - /* Check to see if sequence is a single integer first. - or, can be made into one */ + /* + * Check to see if sequence is a single integer first. 
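The `PyArray_CheckAxis` hunk above normalizes the axis argument: negative axes wrap around by adding the number of dimensions, and anything still out of range raises "axis out of bounds". The rule in isolation (hypothetical helper):

```c
/* Wrap negative axes (axis += ndim), then range-check; a return of
 * -1 stands in for the ValueError the real code raises. */
static int normalize_axis(int axis, int ndim)
{
    if (axis < 0) {
        axis += ndim;
    }
    if (axis < 0 || axis >= ndim) {
        return -1;    /* caller raises "axis(=%d) out of bounds" */
    }
    return axis;
}
```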
+ * or, can be made into one + */ if ((nd=PySequence_Length(seq)) == -1) { if (PyErr_Occurred()) PyErr_Clear(); #if SIZEOF_LONG >= SIZEOF_INTP - if (!(op = PyNumber_Int(seq))) return -1; + if (!(op = PyNumber_Int(seq))) { + return -1; + } #else - if (!(op = PyNumber_Long(seq))) return -1; + if (!(op = PyNumber_Long(seq))) { + return -1; + } #endif nd = 1; #if SIZEOF_LONG >= SIZEOF_INTP @@ -5139,17 +5391,22 @@ PyArray_IntpFromSequence(PyObject *seq, intp *vals, int maxvals) vals[0] = (intp ) PyLong_AsLongLong(op); #endif Py_DECREF(op); - } else { - for(i=0; i < MIN(nd,maxvals); i++) { + } + else { + for (i = 0; i < MIN(nd,maxvals); i++) { op = PySequence_GetItem(seq, i); - if (op == NULL) return -1; + if (op == NULL) { + return -1; + } #if SIZEOF_LONG >= SIZEOF_INTP vals[i]=(intp )PyInt_AsLong(op); #else vals[i]=(intp )PyLong_AsLongLong(op); #endif Py_DECREF(op); - if(PyErr_Occurred()) return -1; + if(PyErr_Occurred()) { + return -1; + } } } return nd; @@ -5157,10 +5414,12 @@ PyArray_IntpFromSequence(PyObject *seq, intp *vals, int maxvals) -/* Check whether the given array is stored contiguously (row-wise) in - memory. */ - -/* 0-strided arrays are not contiguous (even if dimension == 1) */ +/* + * Check whether the given array is stored contiguously + * (row-wise) in memory. 
+ * + * 0-strided arrays are not contiguous (even if dimension == 1) + */ static int _IsContiguous(PyArrayObject *ap) { @@ -5168,15 +5427,22 @@ _IsContiguous(PyArrayObject *ap) register intp dim; register int i; - if (ap->nd == 0) return 1; + if (ap->nd == 0) { + return 1; + } sd = ap->descr->elsize; - if (ap->nd == 1) return (ap->dimensions[0] == 1 || \ - sd == ap->strides[0]); - for(i = ap->nd-1; i >= 0; --i) { + if (ap->nd == 1) { + return ap->dimensions[0] == 1 || sd == ap->strides[0]; + } + for (i = ap->nd - 1; i >= 0; --i) { dim = ap->dimensions[i]; /* contiguous by definition */ - if (dim == 0) return 1; - if (ap->strides[i] != sd) return 0; + if (dim == 0) { + return 1; + } + if (ap->strides[i] != sd) { + return 0; + } sd *= dim; } return 1; @@ -5191,15 +5457,22 @@ _IsFortranContiguous(PyArrayObject *ap) register intp dim; register int i; - if (ap->nd == 0) return 1; + if (ap->nd == 0) { + return 1; + } sd = ap->descr->elsize; - if (ap->nd == 1) return (ap->dimensions[0] == 1 || \ - sd == ap->strides[0]); - for(i=0; i< ap->nd; ++i) { + if (ap->nd == 1) { + return ap->dimensions[0] == 1 || sd == ap->strides[0]; + } + for (i = 0; i < ap->nd; ++i) { dim = ap->dimensions[i]; /* fortran contiguous by definition */ - if (dim == 0) return 1; - if (ap->strides[i] != sd) return 0; + if (dim == 0) { + return 1; + } + if (ap->strides[i] != sd) { + return 0; + } sd *= dim; } return 1; @@ -5208,20 +5481,22 @@ _IsFortranContiguous(PyArrayObject *ap) static int _IsAligned(PyArrayObject *ap) { - int i, alignment, aligned=1; + int i, alignment, aligned = 1; intp ptr; int type = ap->descr->type_num; - if ((type == PyArray_STRING) || (type == PyArray_VOID)) + if ((type == PyArray_STRING) || (type == PyArray_VOID)) { return 1; - + } alignment = ap->descr->alignment; - if (alignment == 1) return 1; - + if (alignment == 1) { + return 1; + } ptr = (intp) ap->data; aligned = (ptr % alignment) == 0; - for(i=0; i <ap->nd; i++) + for (i = 0; i < ap->nd; i++) { aligned &= 
((ap->strides[i] % alignment) == 0); + } return aligned != 0; } @@ -5233,31 +5508,37 @@ _IsWriteable(PyArrayObject *ap) Py_ssize_t n; /* If we own our own data, then no-problem */ - if ((base == NULL) || (ap->flags & OWNDATA)) return TRUE; - - /* Get to the final base object - If it is a writeable array, then return TRUE - If we can find an array object - or a writeable buffer object as the final base object - or a string object (for pickling support memory savings). - - this last could be removed if a proper pickleable - buffer was added to Python. - */ + if ((base == NULL) || (ap->flags & OWNDATA)) { + return TRUE; + } + /* + * Get to the final base object + * If it is a writeable array, then return TRUE + * If we can find an array object + * or a writeable buffer object as the final base object + * or a string object (for pickling support memory savings). + * - this last could be removed if a proper pickleable + * buffer was added to Python. + */ while(PyArray_Check(base)) { - if (PyArray_CHKFLAGS(base, OWNDATA)) + if (PyArray_CHKFLAGS(base, OWNDATA)) { return (Bool) (PyArray_ISWRITEABLE(base)); + } base = PyArray_BASE(base); } - /* here so pickle support works seamlessly - and unpickled array can be set and reset writeable - -- could be abused -- */ - if PyString_Check(base) return TRUE; - - if (PyObject_AsWriteBuffer(base, &dummy, &n) < 0) + /* + * here so pickle support works seamlessly + * and unpickled array can be set and reset writeable + * -- could be abused -- + */ + if PyString_Check(base) { + return TRUE; + } + if (PyObject_AsWriteBuffer(base, &dummy, &n) < 0) { return FALSE; - + } return TRUE; } @@ -5267,20 +5548,21 @@ _IsWriteable(PyArrayObject *ap) static int PyArray_ElementStrides(PyObject *arr) { - register int itemsize = PyArray_ITEMSIZE(arr); - register int i, N=PyArray_NDIM(arr); - register intp *strides = PyArray_STRIDES(arr); + int itemsize = PyArray_ITEMSIZE(arr); + int i, N = PyArray_NDIM(arr); + intp *strides = PyArray_STRIDES(arr); - 
for(i=0; i<N; i++) { - if ((strides[i] % itemsize) != 0) return 0; + for (i = 0; i < N; i++) { + if ((strides[i] % itemsize) != 0) { + return 0; + } } - return 1; } /*NUMPY_API - Update Several Flags at once. -*/ + * Update Several Flags at once. + */ static void PyArray_UpdateFlags(PyArrayObject *ret, int flagmask) { @@ -5288,45 +5570,64 @@ PyArray_UpdateFlags(PyArrayObject *ret, int flagmask) if (flagmask & FORTRAN) { if (_IsFortranContiguous(ret)) { ret->flags |= FORTRAN; - if (ret->nd > 1) ret->flags &= ~CONTIGUOUS; + if (ret->nd > 1) { + ret->flags &= ~CONTIGUOUS; + } + } + else { + ret->flags &= ~FORTRAN; } - else ret->flags &= ~FORTRAN; } if (flagmask & CONTIGUOUS) { if (_IsContiguous(ret)) { ret->flags |= CONTIGUOUS; - if (ret->nd > 1) ret->flags &= ~FORTRAN; + if (ret->nd > 1) { + ret->flags &= ~FORTRAN; + } + } + else { + ret->flags &= ~CONTIGUOUS; } - else ret->flags &= ~CONTIGUOUS; } if (flagmask & ALIGNED) { - if (_IsAligned(ret)) ret->flags |= ALIGNED; - else ret->flags &= ~ALIGNED; + if (_IsAligned(ret)) { + ret->flags |= ALIGNED; + } + else { + ret->flags &= ~ALIGNED; + } } - /* This is not checked by default WRITEABLE is not - part of UPDATE_ALL */ + /* + * This is not checked by default WRITEABLE is not + * part of UPDATE_ALL + */ if (flagmask & WRITEABLE) { - if (_IsWriteable(ret)) ret->flags |= WRITEABLE; - else ret->flags &= ~WRITEABLE; + if (_IsWriteable(ret)) { + ret->flags |= WRITEABLE; + } + else { + ret->flags &= ~WRITEABLE; + } } return; } -/* This routine checks to see if newstrides (of length nd) will not - ever be able to walk outside of the memory implied numbytes and offset. - - The available memory is assumed to start at -offset and proceed - to numbytes-offset. The strides are checked to ensure - that accessing memory using striding will not try to reach beyond - this memory for any of the axes. - - If numbytes is 0 it will be calculated using the dimensions and - element-size. 
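The `_IsContiguous` hunk above checks C-contiguity by walking the axes from the innermost outward: each stride must equal the running product of item size and trailing dimensions. A standalone sketch of the same walk (hypothetical types, `intp_t` standing in for NumPy's `intp`):

```c
typedef long intp_t;   /* stand-in for NumPy's intp */

/* An array is C-contiguous iff, scanning from the last axis, every
 * stride equals itemsize times the product of the dims after it;
 * zero-size arrays are contiguous by definition. */
static int is_c_contiguous(int nd, const intp_t *dims,
                           const intp_t *strides, intp_t itemsize)
{
    intp_t sd = itemsize;
    int i;

    if (nd == 0) {
        return 1;
    }
    if (nd == 1) {
        return dims[0] == 1 || strides[0] == sd;
    }
    for (i = nd - 1; i >= 0; --i) {
        if (dims[i] == 0) {
            return 1;    /* contiguous by definition */
        }
        if (strides[i] != sd) {
            return 0;
        }
        sd *= dims[i];
    }
    return 1;
}
```

The Fortran variant in the diff is identical except that it scans the axes front to back.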
- - This function checks for walking beyond the beginning and right-end - of the buffer and therefore works for any integer stride (positive - or negative). -*/ +/* + * This routine checks to see if newstrides (of length nd) will not + * ever be able to walk outside of the memory implied numbytes and offset. + * + * The available memory is assumed to start at -offset and proceed + * to numbytes-offset. The strides are checked to ensure + * that accessing memory using striding will not try to reach beyond + * this memory for any of the axes. + * + * If numbytes is 0 it will be calculated using the dimensions and + * element-size. + * + * This function checks for walking beyond the beginning and right-end + * of the buffer and therefore works for any integer stride (positive + * or negative). + */ /*NUMPY_API*/ static Bool @@ -5338,36 +5639,37 @@ PyArray_CheckStrides(int elsize, int nd, intp numbytes, intp offset, intp begin; intp end; - if (numbytes == 0) + if (numbytes == 0) { numbytes = PyArray_MultiplyList(dims, nd) * elsize; - + } begin = -offset; end = numbytes - offset - elsize; - for(i=0; i<nd; i++) { - byte_begin = newstrides[i]*(dims[i]-1); - if ((byte_begin < begin) || (byte_begin > end)) + for (i = 0; i < nd; i++) { + byte_begin = newstrides[i]*(dims[i] - 1); + if ((byte_begin < begin) || (byte_begin > end)) { return FALSE; + } } return TRUE; - } -/* This is the main array creation routine. */ - -/* Flags argument has multiple related meanings - depending on data and strides: - - If data is given, then flags is flags associated with data. - If strides is not given, then a contiguous strides array will be created - and the CONTIGUOUS bit will be set. If the flags argument - has the FORTRAN bit set, then a FORTRAN-style strides array will be - created (and of course the FORTRAN flag bit will be set). 
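The `PyArray_CheckStrides` hunk above validates user-supplied strides: for each axis, the farthest byte offset reachable, `strides[i] * (dims[i] - 1)`, must lie within `[-offset, numbytes - offset - elsize]`, which makes the check work for negative strides too. A self-contained sketch of that bound (hypothetical helper, `intp_t` standing in for `intp`):

```c
typedef long intp_t;   /* stand-in for NumPy's intp */

/* Reject any strides that could index outside the buffer implied by
 * numbytes and offset, for positive or negative strides alike. */
static int strides_ok(intp_t elsize, int nd, intp_t numbytes, intp_t offset,
                      const intp_t *dims, const intp_t *strides)
{
    intp_t begin = -offset;
    intp_t end = numbytes - offset - elsize;
    int i;

    for (i = 0; i < nd; i++) {
        intp_t byte_extent = strides[i] * (dims[i] - 1);
        if (byte_extent < begin || byte_extent > end) {
            return 0;
        }
    }
    return 1;
}
```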
- - If data is not given but created here, then flags will be DEFAULT - and a non-zero flags argument can be used to indicate a FORTRAN style - array is desired. -*/ +/* + * This is the main array creation routine. + * + * Flags argument has multiple related meanings + * depending on data and strides: + * + * If data is given, then flags is flags associated with data. + * If strides is not given, then a contiguous strides array will be created + * and the CONTIGUOUS bit will be set. If the flags argument + * has the FORTRAN bit set, then a FORTRAN-style strides array will be + * created (and of course the FORTRAN flag bit will be set). + * + * If data is not given but created here, then flags will be DEFAULT + * and a non-zero flags argument can be used to indicate a FORTRAN style + * array is desired. + */ static size_t _array_fill_strides(intp *strides, intp *dims, int nd, size_t itemsize, @@ -5376,29 +5678,37 @@ _array_fill_strides(intp *strides, intp *dims, int nd, size_t itemsize, int i; /* Only make Fortran strides if not contiguous as well */ if ((inflag & FORTRAN) && !(inflag & CONTIGUOUS)) { - for(i=0; i<nd; i++) { + for (i = 0; i < nd; i++) { strides[i] = itemsize; itemsize *= dims[i] ? dims[i] : 1; } *objflags |= FORTRAN; - if (nd > 1) *objflags &= ~CONTIGUOUS; - else *objflags |= CONTIGUOUS; + if (nd > 1) { + *objflags &= ~CONTIGUOUS; + } + else { + *objflags |= CONTIGUOUS; + } } else { - for(i=nd-1;i>=0;i--) { + for (i = nd - 1; i >= 0; i--) { strides[i] = itemsize; itemsize *= dims[i] ? dims[i] : 1; } *objflags |= CONTIGUOUS; - if (nd > 1) *objflags &= ~FORTRAN; - else *objflags |= FORTRAN; + if (nd > 1) { + *objflags &= ~FORTRAN; + } + else { + *objflags |= FORTRAN; + } } return itemsize; } /*NUMPY_API - Generic new array creation routine. -*/ + * Generic new array creation routine. 
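The `_array_fill_strides` hunk above fills in default strides for a freshly created array: Fortran order fills front to back, C order back to front, and either way the running product ends up as the total byte size. A standalone sketch (hypothetical helper, `intp_t` standing in for `intp`, flag handling omitted):

```c
#include <stddef.h>

typedef long intp_t;   /* stand-in for NumPy's intp */

/* Compute contiguous strides; zero-length dims are treated as 1 so
 * the stride product stays nonzero, matching the diff. */
static size_t fill_strides(intp_t *strides, const intp_t *dims, int nd,
                           size_t itemsize, int fortran)
{
    int i;

    if (fortran) {
        for (i = 0; i < nd; i++) {
            strides[i] = itemsize;
            itemsize *= dims[i] ? dims[i] : 1;
        }
    }
    else {
        for (i = nd - 1; i >= 0; i--) {
            strides[i] = itemsize;
            itemsize *= dims[i] ? dims[i] : 1;
        }
    }
    return itemsize;   /* total bytes of the array */
}
```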
+ */ static PyObject * PyArray_New(PyTypeObject *subtype, int nd, intp *dims, int type_num, intp *strides, void *data, int itemsize, int flags, @@ -5408,7 +5718,9 @@ PyArray_New(PyTypeObject *subtype, int nd, intp *dims, int type_num, PyObject *new; descr = PyArray_DescrFromType(type_num); - if (descr == NULL) return NULL; + if (descr == NULL) { + return NULL; + } if (descr->elsize == 0) { if (itemsize < 1) { PyErr_SetString(PyExc_ValueError, @@ -5424,14 +5736,16 @@ PyArray_New(PyTypeObject *subtype, int nd, intp *dims, int type_num, return new; } -/* Change a sub-array field to the base descriptor */ -/* and update the dimensions and strides - appropriately. Dimensions and strides are added - to the end unless we have a FORTRAN array - and then they are added to the beginning - - Strides are only added if given (because data is given). -*/ +/* + * Change a sub-array field to the base descriptor + * + * and update the dimensions and strides + * appropriately. Dimensions and strides are added + * to the end unless we have a FORTRAN array + * and then they are added to the beginning + * + * Strides are only added if given (because data is given). 
+ */ static int _update_descr_and_dimensions(PyArray_Descr **des, intp *newdims, intp *newstrides, int oldnd, int isfortran) @@ -5458,16 +5772,17 @@ _update_descr_and_dimensions(PyArray_Descr **des, intp *newdims, newnd = oldnd + numnew; - if (newnd > MAX_DIMS) goto finish; + if (newnd > MAX_DIMS) { + goto finish; + } if (isfortran) { memmove(newdims+numnew, newdims, oldnd*sizeof(intp)); mydim = newdims; } - if (tuple) { - for(i=0; i<numnew; i++) { - mydim[i] = (intp) PyInt_AsLong \ - (PyTuple_GET_ITEM(old->subarray->shape, i)); + for (i = 0; i < numnew; i++) { + mydim[i] = (intp) PyInt_AsLong( + PyTuple_GET_ITEM(old->subarray->shape, i)); } } else { @@ -5477,15 +5792,15 @@ _update_descr_and_dimensions(PyArray_Descr **des, intp *newdims, if (newstrides) { intp tempsize; intp *mystrides; + mystrides = newstrides + oldnd; if (isfortran) { - memmove(newstrides+numnew, newstrides, - oldnd*sizeof(intp)); + memmove(newstrides+numnew, newstrides, oldnd*sizeof(intp)); mystrides = newstrides; } /* Make new strides -- alwasy C-contiguous */ tempsize = (*des)->elsize; - for(i=numnew-1; i>=0; i--) { + for (i = numnew - 1; i >= 0; i--) { mystrides[i] = tempsize; tempsize *= mydim[i] ? mydim[i] : 1; } @@ -5498,10 +5813,11 @@ _update_descr_and_dimensions(PyArray_Descr **des, intp *newdims, } -/* steals a reference to descr (even on failure) */ /*NUMPY_API - Generic new array creation routine. -*/ + * Generic new array creation routine. 
+ * + * steals a reference to descr (even on failure) + */ static PyObject * PyArray_NewFromDescr(PyTypeObject *subtype, PyArray_Descr *descr, int nd, intp *dims, intp *strides, void *data, @@ -5516,9 +5832,9 @@ PyArray_NewFromDescr(PyTypeObject *subtype, PyArray_Descr *descr, int nd, if (descr->subarray) { PyObject *ret; intp newdims[2*MAX_DIMS]; - intp *newstrides=NULL; - int isfortran=0; - isfortran = (data && (flags & FORTRAN) && !(flags & CONTIGUOUS)) || \ + intp *newstrides = NULL; + int isfortran = 0; + isfortran = (data && (flags & FORTRAN) && !(flags & CONTIGUOUS)) || (!data && flags); memcpy(newdims, dims, nd*sizeof(intp)); if (strides) { @@ -5532,7 +5848,6 @@ PyArray_NewFromDescr(PyTypeObject *subtype, PyArray_Descr *descr, int nd, data, flags, obj); return ret; } - if (nd < 0) { PyErr_SetString(PyExc_ValueError, "number of dimensions must be >=0"); @@ -5556,13 +5871,19 @@ PyArray_NewFromDescr(PyTypeObject *subtype, PyArray_Descr *descr, int nd, return NULL; } PyArray_DESCR_REPLACE(descr); - if (descr->type_num == NPY_STRING) descr->elsize = 1; - else descr->elsize = sizeof(PyArray_UCS4); + if (descr->type_num == NPY_STRING) { + descr->elsize = 1; + } + else { + descr->elsize = sizeof(PyArray_UCS4); + } sd = (size_t) descr->elsize; } largest = MAX_INTP / sd; - for(i=0;i<nd;i++) { - if (dims[i] == 0) continue; + for (i = 0; i < nd; i++) { + if (dims[i] == 0) { + continue; + } if (dims[i] < 0) { PyErr_SetString(PyExc_ValueError, "negative dimensions " \ @@ -5591,12 +5912,15 @@ PyArray_NewFromDescr(PyTypeObject *subtype, PyArray_Descr *descr, int nd, self->flags = DEFAULT; if (flags) { self->flags |= FORTRAN; - if (nd > 1) self->flags &= ~CONTIGUOUS; + if (nd > 1) { + self->flags &= ~CONTIGUOUS; + } flags = FORTRAN; } } - else self->flags = (flags & ~UPDATEIFCOPY); - + else { + self->flags = (flags & ~UPDATEIFCOPY); + } self->descr = descr; self->base = (PyObject *)NULL; self->weakreflist = (PyObject *)NULL; @@ -5613,84 +5937,102 @@ 
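The `largest = MAX_INTP / sd` check in `PyArray_NewFromDescr` guards the size computation against integer overflow. A pure-Python sketch of that guard, assuming a 64-bit `intp`:

```python
MAX_INTP = 2**63 - 1  # assumption: 64-bit intp

def checked_element_count(dims, elsize):
    # Mirror the guard in PyArray_NewFromDescr: zero-length dimensions
    # are skipped, negative ones rejected, and the running product of
    # the rest must stay at or below MAX_INTP // elsize so that
    # size * elsize cannot overflow.
    size = 1
    largest = MAX_INTP // elsize
    for d in dims:
        if d == 0:
            continue
        if d < 0:
            raise ValueError("negative dimensions are not allowed")
        size *= d
        if size > largest:
            raise ValueError("array is too big")
    return size

print(checked_element_count([2, 3, 4], 8))  # 24
```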
PyArray_NewFromDescr(PyTypeObject *subtype, PyArray_Descr *descr, int nd, sd = _array_fill_strides(self->strides, dims, nd, sd, flags, &(self->flags)); } - else { /* we allow strides even when we create - the memory, but be careful with this... - */ + else { + /* + * we allow strides even when we create + * the memory, but be careful with this... + */ memcpy(self->strides, strides, sizeof(intp)*nd); sd *= size; } } - else { self->dimensions = self->strides = NULL; } + else { + self->dimensions = self->strides = NULL; + } if (data == NULL) { + /* + * Allocate something even for zero-space arrays + * e.g. shape=(0,) -- otherwise buffer exposure + * (a.data) doesn't work as it should. + */ - /* Allocate something even for zero-space arrays - e.g. shape=(0,) -- otherwise buffer exposure - (a.data) doesn't work as it should. */ - - if (sd==0) sd = descr->elsize; - - if ((data = PyDataMem_NEW(sd))==NULL) { + if (sd == 0) { + sd = descr->elsize; + } + if ((data = PyDataMem_NEW(sd)) == NULL) { PyErr_NoMemory(); goto fail; } self->flags |= OWNDATA; - /* It is bad to have unitialized OBJECT pointers */ - /* which could also be sub-fields of a VOID array */ + /* + * It is bad to have uninitialized OBJECT pointers + * which could also be sub-fields of a VOID array + */ if (PyDataType_FLAGCHK(descr, NPY_NEEDS_INIT)) { memset(data, 0, sd); } } else { - self->flags &= ~OWNDATA; /* If data is passed in, - this object won't own it - by default. - Caller must arrange for - this to be reset if truly - desired */ + /* + * If data is passed in, this object won't own it by default. + * Caller must arrange for this to be reset if truly desired + */ + self->flags &= ~OWNDATA; } self->data = data; - /* call the __array_finalize__ - method if a subtype. - If obj is NULL, then call method with Py_None - */ + /* + * call the __array_finalize__ + * method if a subtype.
+ * If obj is NULL, then call method with Py_None + */ if ((subtype != &PyArray_Type)) { PyObject *res, *func, *args; - static PyObject *str=NULL; + static PyObject *str = NULL; if (str == NULL) { str = PyString_InternFromString("__array_finalize__"); } func = PyObject_GetAttr((PyObject *)self, str); if (func && func != Py_None) { - if (strides != NULL) { /* did not allocate own data - or funny strides */ - /* update flags before finalize function */ + if (strides != NULL) { + /* + * did not allocate own data or funny strides + * update flags before finalize function + */ PyArray_UpdateFlags(self, UPDATE_ALL); } - if PyCObject_Check(func) { /* A C-function is stored here */ - PyArray_FinalizeFunc *cfunc; - cfunc = PyCObject_AsVoidPtr(func); - Py_DECREF(func); - if (cfunc(self, obj) < 0) goto fail; + if PyCObject_Check(func) { + /* A C-function is stored here */ + PyArray_FinalizeFunc *cfunc; + cfunc = PyCObject_AsVoidPtr(func); + Py_DECREF(func); + if (cfunc(self, obj) < 0) { + goto fail; } + } else { args = PyTuple_New(1); - if (obj == NULL) obj=Py_None; + if (obj == NULL) { + obj=Py_None; + } Py_INCREF(obj); PyTuple_SET_ITEM(args, 0, obj); res = PyObject_Call(func, args, NULL); Py_DECREF(args); Py_DECREF(func); - if (res == NULL) goto fail; - else Py_DECREF(res); + if (res == NULL) { + goto fail; + } + else { + Py_DECREF(res); + } } } else Py_XDECREF(func); } - return (PyObject *)self; fail: @@ -5705,14 +6047,17 @@ _putzero(char *optr, PyObject *zero, PyArray_Descr *dtype) memset(optr, 0, dtype->elsize); } else if (PyDescr_HASFIELDS(dtype)) { - PyObject *key, *value, *title=NULL; + PyObject *key, *value, *title = NULL; PyArray_Descr *new; int offset; - Py_ssize_t pos=0; + Py_ssize_t pos = 0; while (PyDict_Next(dtype->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) continue; - if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, - &title)) return; + if NPY_TITLE_KEY(key, value) { + continue; + } + if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, 
&title)) { + return; + } _putzero(optr + offset, zero, new); } } @@ -5727,13 +6072,11 @@ _putzero(char *optr, PyObject *zero, PyArray_Descr *dtype) /*NUMPY_API - Resize (reallocate data). Only works if nothing else is referencing - this array and it is contiguous. - If refcheck is 0, then the reference count is not checked - and assumed to be 1. - You still must own this data and have no weak-references and no base - object. -*/ + * Resize (reallocate data). Only works if nothing else is referencing this + * array and it is contiguous. If refcheck is 0, then the reference count is + * not checked and assumed to be 1. You still must own this data and have no + * weak-references and no base object. + */ static PyObject * PyArray_Resize(PyArrayObject *self, PyArray_Dims *newshape, int refcheck, NPY_ORDER fortran) @@ -5754,9 +6097,9 @@ PyArray_Resize(PyArrayObject *self, PyArray_Dims *newshape, int refcheck, return NULL; } - if (fortran == PyArray_ANYORDER) + if (fortran == PyArray_ANYORDER) { fortran = PyArray_CORDER; - + } if (self->descr->elsize == 0) { PyErr_SetString(PyExc_ValueError, "Bad data-type size."); return NULL; @@ -5764,7 +6107,9 @@ PyArray_Resize(PyArrayObject *self, PyArray_Dims *newshape, int refcheck, newsize = 1; largest = MAX_INTP / self->descr->elsize; for(k=0; k<new_nd; k++) { - if (new_dimensions[k]==0) break; + if (new_dimensions[k]==0) { + break; + } if (new_dimensions[k] < 0) { PyErr_SetString(PyExc_ValueError, "negative dimensions not allowed"); @@ -5785,9 +6130,13 @@ PyArray_Resize(PyArrayObject *self, PyArray_Dims *newshape, int refcheck, return NULL; } - if (refcheck) refcnt = REFCOUNT(self); - else refcnt = 1; - if ((refcnt > 2) || (self->base != NULL) || \ + if (refcheck) { + refcnt = REFCOUNT(self); + } + else { + refcnt = 1; + } + if ((refcnt > 2) || (self->base != NULL) || (self->weakreflist != NULL)) { PyErr_SetString(PyExc_ValueError, "cannot resize an array that has "\ @@ -5797,8 +6146,12 @@ PyArray_Resize(PyArrayObject *self, 
PyArray_Dims *newshape, int refcheck, return NULL; } - if (newsize == 0) sd = self->descr->elsize; - else sd = newsize * self->descr->elsize; + if (newsize == 0) { + sd = self->descr->elsize; + } + else { + sd = newsize*self->descr->elsize; + } /* Reallocate space if needed */ new_data = PyDataMem_RENEW(self->data, sd); if (new_data == NULL) { @@ -5817,21 +6170,20 @@ PyArray_Resize(PyArrayObject *self, PyArray_Dims *newshape, int refcheck, char *optr; optr = self->data + oldsize*elsize; n = newsize - oldsize; - for(k=0; k<n; k++) { + for (k = 0; k < n; k++) { _putzero((char *)optr, zero, self->descr); optr += elsize; } Py_DECREF(zero); } else{ - memset(self->data+oldsize*elsize, 0, - (newsize-oldsize)*elsize); + memset(self->data+oldsize*elsize, 0, (newsize-oldsize)*elsize); } } - if (self->nd != new_nd) { /* Different number of dimensions. */ + if (self->nd != new_nd) { + /* Different number of dimensions. */ self->nd = new_nd; - /* Need new dimensions and strides arrays */ dimptr = PyDimMem_RENEW(self->dimensions, 2*new_nd); if (dimptr == NULL) { @@ -5848,42 +6200,44 @@ PyArray_Resize(PyArrayObject *self, PyArray_Dims *newshape, int refcheck, sd = (size_t) self->descr->elsize; sd = (size_t) _array_fill_strides(new_strides, new_dimensions, new_nd, sd, self->flags, &(self->flags)); - memmove(self->dimensions, new_dimensions, new_nd*sizeof(intp)); memmove(self->strides, new_strides, new_nd*sizeof(intp)); - Py_INCREF(Py_None); return Py_None; - } static void _fillobject(char *optr, PyObject *obj, PyArray_Descr *dtype) { if (!PyDataType_FLAGCHK(dtype, NPY_ITEM_REFCOUNT)) { - if ((obj == Py_None) || - (PyInt_Check(obj) && PyInt_AsLong(obj)==0)) + if ((obj == Py_None) || (PyInt_Check(obj) && PyInt_AsLong(obj)==0)) { return; + } else { PyObject *arr; Py_INCREF(dtype); arr = PyArray_NewFromDescr(&PyArray_Type, dtype, 0, NULL, NULL, NULL, 0, NULL); - if (arr!=NULL) + if (arr!=NULL) { dtype->f->setitem(obj, optr, arr); + } Py_XDECREF(arr); } } else if 
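`PyArray_Resize` keeps the old contents and zero-fills only the newly added trailing elements (via `memset`, or `_putzero` for object/struct dtypes). A sketch of that grow/shrink behavior on a flat buffer (an illustration of the semantics, not the C implementation):

```python
def resize_flat(buf, newsize, zero=0):
    # Shrinking truncates; growing keeps the old data and zero-fills
    # the new tail, as PyArray_Resize does after PyDataMem_RENEW.
    if newsize <= len(buf):
        return buf[:newsize]
    return buf + [zero] * (newsize - len(buf))

print(resize_flat([1, 2, 3], 5))  # [1, 2, 3, 0, 0]
```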
(PyDescr_HASFIELDS(dtype)) { - PyObject *key, *value, *title=NULL; + PyObject *key, *value, *title = NULL; PyArray_Descr *new; int offset; - Py_ssize_t pos=0; + Py_ssize_t pos = 0; + while (PyDict_Next(dtype->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) continue; - if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, - &title)) return; + if NPY_TITLE_KEY(key, value) { + continue; + } + if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, &title)) { + return; + } _fillobject(optr + offset, obj, new); } } @@ -5896,8 +6250,9 @@ _fillobject(char *optr, PyObject *obj, PyArray_Descr *dtype) } } -/* Assumes contiguous */ -/*NUMPY_API*/ +/*NUMPY_API + * Assumes contiguous + */ static void PyArray_FillObjectArray(PyArrayObject *arr, PyObject *obj) { @@ -5908,12 +6263,12 @@ PyArray_FillObjectArray(PyArrayObject *arr, PyObject *obj) optr = (PyObject **)(arr->data); n = PyArray_SIZE(arr); if (obj == NULL) { - for(i=0; i<n; i++) { + for (i = 0; i < n; i++) { *optr++ = NULL; } } else { - for(i=0; i<n; i++) { + for (i = 0; i < n; i++) { Py_INCREF(obj); *optr++ = obj; } @@ -5922,7 +6277,7 @@ PyArray_FillObjectArray(PyArrayObject *arr, PyObject *obj) else { char *optr; optr = arr->data; - for(i=0; i<n; i++) { + for (i = 0; i < n; i++) { _fillobject(optr, obj, arr->descr); optr += arr->descr->elsize; } @@ -5950,7 +6305,9 @@ PyArray_FillWithScalar(PyArrayObject *arr, PyObject *obj) descr = PyArray_DESCR(arr); Py_INCREF(descr); newarr = PyArray_FromAny(obj, descr, 0,0, ALIGNED, NULL); - if (newarr == NULL) return -1; + if (newarr == NULL) { + return -1; + } fromptr = PyArray_DATA(newarr); swap = (PyArray_ISNOTSWAPPED(arr) != PyArray_ISNOTSWAPPED(newarr)); } @@ -5980,7 +6337,7 @@ PyArray_FillWithScalar(PyArrayObject *arr, PyObject *obj) Py_XDECREF(newarr); return -1; } - while(size--) { + while (size--) { copyswap(iter->dataptr, fromptr, swap, arr); PyArray_ITER_NEXT(iter); } @@ -6007,14 +6364,11 @@ array_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds) 
PyArrayObject *ret; buffer.ptr = NULL; - /* Usually called with shape and type - but can also be called with buffer, strides, and swapped info - */ - - /* For now, let's just use this to create an empty, contiguous - array of a specific type and shape. - */ - + /* + * Usually called with shape and type but can also be called with buffer, + * strides, and swapped info. For now, let's just use this to create an + * empty, contiguous array of a specific type and shape. + */ if (!PyArg_ParseTupleAndKeywords(args, kwds, "O&|O&O&LO&O&", kwlist, PyArray_IntpConverter, &dims, @@ -6026,16 +6380,17 @@ array_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds) &PyArray_IntpConverter, &strides, &PyArray_OrderConverter, - &order)) + &order)) { goto fail; - - if (order == PyArray_FORTRANORDER) fortran = 1; - - if (descr == NULL) + } + if (order == PyArray_FORTRANORDER) { + fortran = 1; + } + if (descr == NULL) { descr = PyArray_DescrFromType(PyArray_DEFAULT); + } itemsize = descr->elsize; - if (itemsize == 0) { PyErr_SetString(PyExc_ValueError, "data-type with unspecified variable length"); @@ -6073,27 +6428,31 @@ array_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds) } if (buffer.ptr == NULL) { - ret = (PyArrayObject *) \ + ret = (PyArrayObject *) PyArray_NewFromDescr(subtype, descr, (int)dims.len, dims.ptr, strides.ptr, NULL, fortran, NULL); - if (ret == NULL) {descr=NULL;goto fail;} + if (ret == NULL) { + descr = NULL; + goto fail; + } if (PyDataType_FLAGCHK(descr, NPY_ITEM_HASOBJECT)) { /* place Py_None in object positions */ PyArray_FillObjectArray(ret, Py_None); if (PyErr_Occurred()) { - descr=NULL; + descr = NULL; goto fail; } } } - else { /* buffer given -- use it */ + else { + /* buffer given -- use it */ if (dims.len == 1 && dims.ptr[0] == -1) { dims.ptr[0] = (buffer.len-(intp)offset) / itemsize; } - else if ((strides.ptr == NULL) && \ - (buffer.len < ((intp)itemsize)* \ + else if ((strides.ptr == NULL) && + (buffer.len < ((intp)itemsize)*
PyArray_MultiplyList(dims.ptr, dims.len))) { PyErr_SetString(PyExc_TypeError, "buffer is too small for " \ @@ -6101,27 +6460,38 @@ array_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds) goto fail; } /* get writeable and aligned */ - if (fortran) buffer.flags |= FORTRAN; + if (fortran) { + buffer.flags |= FORTRAN; + } ret = (PyArrayObject *)\ PyArray_NewFromDescr(subtype, descr, dims.len, dims.ptr, strides.ptr, offset + (char *)buffer.ptr, buffer.flags, NULL); - if (ret == NULL) {descr=NULL; goto fail;} + if (ret == NULL) { + descr = NULL; + goto fail; + } PyArray_UpdateFlags(ret, UPDATE_ALL); ret->base = buffer.base; Py_INCREF(buffer.base); } PyDimMem_FREE(dims.ptr); - if (strides.ptr) PyDimMem_FREE(strides.ptr); + if (strides.ptr) { + PyDimMem_FREE(strides.ptr); + } return (PyObject *)ret; fail: Py_XDECREF(descr); - if (dims.ptr) PyDimMem_FREE(dims.ptr); - if (strides.ptr) PyDimMem_FREE(strides.ptr); + if (dims.ptr) { + PyDimMem_FREE(dims.ptr); + } + if (strides.ptr) { + PyDimMem_FREE(strides.ptr); + } return NULL; } @@ -6167,7 +6537,9 @@ array_shape_set(PyArrayObject *self, PyObject *val) /* Assumes C-order */ ret = PyArray_Reshape(self, val); - if (ret == NULL) return -1; + if (ret == NULL) { + return -1; + } if (PyArray_DATA(ret) != PyArray_DATA(self)) { Py_DECREF(ret); PyErr_SetString(PyExc_AttributeError, @@ -6180,7 +6552,8 @@ array_shape_set(PyArrayObject *self, PyObject *val) PyDimMem_FREE(self->dimensions); nd = PyArray_NDIM(ret); self->nd = nd; - if (nd > 0) { /* create new dimensions and strides */ + if (nd > 0) { + /* create new dimensions and strides */ self->dimensions = PyDimMem_NEW(2*nd); if (self->dimensions == NULL) { Py_DECREF(ret); @@ -6188,12 +6561,13 @@ array_shape_set(PyArrayObject *self, PyObject *val) return -1; } self->strides = self->dimensions + nd; - memcpy(self->dimensions, PyArray_DIMS(ret), - nd*sizeof(intp)); - memcpy(self->strides, PyArray_STRIDES(ret), - nd*sizeof(intp)); + memcpy(self->dimensions, PyArray_DIMS(ret), 
nd*sizeof(intp)); + memcpy(self->strides, PyArray_STRIDES(ret), nd*sizeof(intp)); + } + else { + self->dimensions = NULL; + self->strides = NULL; } - else {self->dimensions=NULL; self->strides=NULL;} Py_DECREF(ret); PyArray_UpdateFlags(self, CONTIGUOUS | FORTRAN); return 0; @@ -6211,12 +6585,12 @@ array_strides_set(PyArrayObject *self, PyObject *obj) { PyArray_Dims newstrides = {NULL, 0}; PyArrayObject *new; - intp numbytes=0; - intp offset=0; + intp numbytes = 0; + intp offset = 0; Py_ssize_t buf_len; char *buf; - if (!PyArray_IntpConverter(obj, &newstrides) || \ + if (!PyArray_IntpConverter(obj, &newstrides) || newstrides.ptr == NULL) { PyErr_SetString(PyExc_TypeError, "invalid strides"); return -1; @@ -6230,9 +6604,10 @@ array_strides_set(PyArrayObject *self, PyObject *obj) while(new->base && PyArray_Check(new->base)) { new = (PyArrayObject *)(new->base); } - /* Get the available memory through the buffer - interface on new->base or if that fails - from the current new */ + /* + * Get the available memory through the buffer interface on + * new->base or if that fails from the current new + */ if (new->base && PyObject_AsReadBuffer(new->base, (const void **)&buf, &buf_len) >= 0) { @@ -6268,10 +6643,12 @@ array_strides_set(PyArrayObject *self, PyObject *obj) static PyObject * array_priority_get(PyArrayObject *self) { - if (PyArray_CheckExact(self)) + if (PyArray_CheckExact(self)) { return PyFloat_FromDouble(PyArray_PRIORITY); - else + } + else { return PyFloat_FromDouble(PyArray_SUBTYPE_PRIORITY); + } } static PyObject *arraydescr_protocol_typestr_get(PyArray_Descr *); @@ -6298,16 +6675,23 @@ array_protocol_descr_get(PyArrayObject *self) PyObject *dobj; res = arraydescr_protocol_descr_get(self->descr); - if (res) return res; + if (res) { + return res; + } PyErr_Clear(); /* get default */ dobj = PyTuple_New(2); - if (dobj == NULL) return NULL; + if (dobj == NULL) { + return NULL; + } PyTuple_SET_ITEM(dobj, 0, PyString_FromString("")); PyTuple_SET_ITEM(dobj, 1, 
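`array_strides_set` only accepts new strides if every addressable element stays inside the available memory (found through the buffer interface on the base object). A simplified model of that bounds rule, with function and variable names of my own choosing:

```python
def strides_fit_buffer(shape, strides, itemsize, buflen, offset=0):
    # Compute the lowest and highest byte offsets any index tuple can
    # reach; the whole range, plus one item, must fit in the buffer.
    lo = hi = 0
    for n, s in zip(shape, strides):
        if n == 0:
            return True  # no elements at all, nothing to touch
        if s >= 0:
            hi += (n - 1) * s
        else:
            lo += (n - 1) * s
    return offset + lo >= 0 and offset + hi + itemsize <= buflen

print(strides_fit_buffer((2, 3), (24, 8), 8, 48))  # True
```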
array_typestr_get(self)); res = PyList_New(1); - if (res == NULL) {Py_DECREF(dobj); return NULL;} + if (res == NULL) { + Py_DECREF(dobj); + return NULL; + } PyList_SET_ITEM(res, 0, dobj); return res; } @@ -6316,9 +6700,9 @@ static PyObject * array_protocol_strides_get(PyArrayObject *self) { if PyArray_ISCONTIGUOUS(self) { - Py_INCREF(Py_None); - return Py_None; - } + Py_INCREF(Py_None); + return Py_None; + } return PyArray_IntTupleFromIntp(self->nd, self->strides); } @@ -6339,9 +6723,10 @@ array_ctypes_get(PyArrayObject *self) PyObject *_numpy_internal; PyObject *ret; _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal == NULL) return NULL; - ret = PyObject_CallMethod(_numpy_internal, "_ctypes", - "ON", self, + if (_numpy_internal == NULL) { + return NULL; + } + ret = PyObject_CallMethod(_numpy_internal, "_ctypes", "ON", self, PyLong_FromVoidPtr(self->data)); Py_DECREF(_numpy_internal); return ret; @@ -6352,8 +6737,11 @@ array_interface_get(PyArrayObject *self) { PyObject *dict; PyObject *obj; + dict = PyDict_New(); - if (dict == NULL) return NULL; + if (dict == NULL) { + return NULL; + } /* dataptr */ obj = array_dataptr_get(self); @@ -6393,11 +6781,12 @@ array_data_get(PyArrayObject *self) return NULL; } nbytes = PyArray_NBYTES(self); - if PyArray_ISWRITEABLE(self) - return PyBuffer_FromReadWriteObject((PyObject *)self, 0, - (int) nbytes); - else + if PyArray_ISWRITEABLE(self) { + return PyBuffer_FromReadWriteObject((PyObject *)self, 0, (int) nbytes); + } + else { return PyBuffer_FromObject((PyObject *)self, 0, (int) nbytes); + } } static int @@ -6409,8 +6798,7 @@ array_data_set(PyArrayObject *self, PyObject *op) if (PyObject_AsWriteBuffer(op, &buf, &buf_len) < 0) { writeable = 0; - if (PyObject_AsReadBuffer(op, (const void **)&buf, - &buf_len) < 0) { + if (PyObject_AsReadBuffer(op, (const void **)&buf, &buf_len) < 0) { PyErr_SetString(PyExc_AttributeError, "object does not have single-segment " \ "buffer interface"); @@ -6423,8 
+6811,7 @@ array_data_set(PyArrayObject *self, PyObject *op) return -1; } if (PyArray_NBYTES(self) > buf_len) { - PyErr_SetString(PyExc_AttributeError, - "not enough data for array"); + PyErr_SetString(PyExc_AttributeError, "not enough data for array"); return -1; } if (self->flags & OWNDATA) { @@ -6442,8 +6829,9 @@ array_data_set(PyArrayObject *self, PyObject *op) self->base = op; self->data = buf; self->flags = CARRAY; - if (!writeable) + if (!writeable) { self->flags &= ~WRITEABLE; + } return 0; } @@ -6461,10 +6849,12 @@ array_size_get(PyArrayObject *self) #if SIZEOF_INTP <= SIZEOF_LONG return PyInt_FromLong((long) size); #else - if (size > MAX_LONG || size < MIN_LONG) + if (size > MAX_LONG || size < MIN_LONG) { return PyLong_FromLongLong(size); - else + } + else { return PyInt_FromLong((long) size); + } #endif } @@ -6475,28 +6865,29 @@ array_nbytes_get(PyArrayObject *self) #if SIZEOF_INTP <= SIZEOF_LONG return PyInt_FromLong((long) nbytes); #else - if (nbytes > MAX_LONG || nbytes < MIN_LONG) + if (nbytes > MAX_LONG || nbytes < MIN_LONG) { return PyLong_FromLongLong(nbytes); - else + } + else { return PyInt_FromLong((long) nbytes); + } #endif } -/* If the type is changed. - Also needing change: strides, itemsize - - Either itemsize is exactly the same - or the array is single-segment (contiguous or fortran) with - compatibile dimensions - - The shape and strides will be adjusted in that case as well. -*/ +/* + * If the type is changed. + * Also needing change: strides, itemsize + * + * Either itemsize is exactly the same or the array is single-segment + * (contiguous or fortran) with compatible dimensions. The shape and strides + * will be adjusted in that case as well.
+ */ static int array_descr_set(PyArrayObject *self, PyObject *arg) { - PyArray_Descr *newtype=NULL; + PyArray_Descr *newtype = NULL; intp newdim; int index; char *msg = "new type not compatible with array."; @@ -6525,51 +6916,61 @@ array_descr_set(PyArrayObject *self, PyObject *arg) } - if ((newtype->elsize != self->descr->elsize) && \ - (self->nd == 0 || !PyArray_ISONESEGMENT(self) || \ - newtype->subarray)) goto fail; - - if (PyArray_ISCONTIGUOUS(self)) index = self->nd - 1; - else index = 0; - + if ((newtype->elsize != self->descr->elsize) && + (self->nd == 0 || !PyArray_ISONESEGMENT(self) || + newtype->subarray)) { + goto fail; + } + if (PyArray_ISCONTIGUOUS(self)) { + index = self->nd - 1; + } + else { + index = 0; + } if (newtype->elsize < self->descr->elsize) { - /* if it is compatible increase the size of the - dimension at end (or at the front for FORTRAN) - */ - if (self->descr->elsize % newtype->elsize != 0) + /* + * if it is compatible increase the size of the + * dimension at end (or at the front for FORTRAN) + */ + if (self->descr->elsize % newtype->elsize != 0) { goto fail; + } newdim = self->descr->elsize / newtype->elsize; self->dimensions[index] *= newdim; self->strides[index] = newtype->elsize; } - else if (newtype->elsize > self->descr->elsize) { - - /* Determine if last (or first if FORTRAN) dimension - is compatible */ - + /* + * Determine if last (or first if FORTRAN) dimension + * is compatible + */ newdim = self->dimensions[index] * self->descr->elsize; - if ((newdim % newtype->elsize) != 0) goto fail; - + if ((newdim % newtype->elsize) != 0) { + goto fail; + } self->dimensions[index] = newdim / newtype->elsize; self->strides[index] = newtype->elsize; } /* fall through -- adjust type*/ - Py_DECREF(self->descr); if (newtype->subarray) { - /* create new array object from data and update - dimensions, strides and descr from it */ + /* + * create new array object from data and update + * dimensions, strides and descr from it + */ PyArrayObject 
*temp; - - /* We would decref newtype here --- temp will - steal a reference to it */ - temp = (PyArrayObject *) \ + /* + * We would decref newtype here. + * temp will steal a reference to it + */ + temp = (PyArrayObject *) PyArray_NewFromDescr(&PyArray_Type, newtype, self->nd, self->dimensions, self->strides, self->data, self->flags, NULL); - if (temp == NULL) return -1; + if (temp == NULL) { + return -1; + } PyDimMem_FREE(self->dimensions); self->dimensions = temp->dimensions; self->nd = temp->nd; @@ -6584,7 +6985,6 @@ array_descr_set(PyArrayObject *self, PyObject *arg) self->descr = newtype; PyArray_UpdateFlags(self, UPDATE_ALL); - return 0; fail: @@ -6599,7 +6999,9 @@ array_struct_get(PyArrayObject *self) PyArrayInterface *inter; inter = (PyArrayInterface *)_pya_malloc(sizeof(PyArrayInterface)); - if (inter==NULL) return PyErr_NoMemory(); + if (inter==NULL) { + return PyErr_NoMemory(); + } inter->two = 2; inter->nd = self->nd; inter->typekind = self->descr->kind; @@ -6608,9 +7010,10 @@ array_struct_get(PyArrayObject *self) /* reset unused flags */ inter->flags &= ~(UPDATEIFCOPY | OWNDATA); if (PyArray_ISNOTSWAPPED(self)) inter->flags |= NOTSWAPPED; - /* Copy shape and strides over since these can be reset - when the array is "reshaped". - */ + /* + * Copy shape and strides over since these can be reset + * when the array is "reshaped".
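When the new itemsize differs, `array_descr_set` rescales one dimension (the last for C order, the first for Fortran) so the total byte count is preserved, failing when the sizes don't divide evenly. A sketch of that arithmetic for the C-contiguous case:

```python
def reinterpret_last_dim(shape, old_elsize, new_elsize):
    dims = list(shape)
    if new_elsize < old_elsize:
        # Shrinking the itemsize: the last dimension grows,
        # provided the old itemsize divides evenly.
        if old_elsize % new_elsize:
            raise ValueError("new type not compatible with array.")
        dims[-1] *= old_elsize // new_elsize
    elif new_elsize > old_elsize:
        # Growing the itemsize: the last dimension's byte extent
        # must be a multiple of the new itemsize.
        newdim = dims[-1] * old_elsize
        if newdim % new_elsize:
            raise ValueError("new type not compatible with array.")
        dims[-1] = newdim // new_elsize
    return tuple(dims)

print(reinterpret_last_dim((3, 4), 8, 4))  # (3, 8)
```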
+ */ if (self->nd > 0) { inter->shape = (intp *)_pya_malloc(2*sizeof(intp)*self->nd); if (inter->shape == NULL) { @@ -6628,10 +7031,16 @@ array_struct_get(PyArrayObject *self) inter->data = self->data; if (self->descr->names) { inter->descr = arraydescr_protocol_descr_get(self->descr); - if (inter->descr == NULL) PyErr_Clear(); - else inter->flags &= ARR_HAS_DESCR; + if (inter->descr == NULL) { + PyErr_Clear(); + } + else { + inter->flags &= ARR_HAS_DESCR; + } + } + else { + inter->descr = NULL; } - else inter->descr = NULL; Py_INCREF(self); return PyCObject_FromVoidPtrAndDesc(inter, self, gentype_struct_free); } @@ -6658,7 +7067,7 @@ _zerofill(PyArrayObject *ret) PyArray_FillObjectArray(ret, zero); Py_DECREF(zero); if (PyErr_Occurred()) { - Py_DECREF(ret); + Py_DECREF(ret); return -1; } } @@ -6666,14 +7075,14 @@ _zerofill(PyArrayObject *ret) intp n = PyArray_NBYTES(ret); memset(ret->data, 0, n); } - return 0; + return 0; } -/* Create a view of a complex array with an equivalent data-type - except it is real instead of complex. -*/ - +/* + * Create a view of a complex array with an equivalent data-type + * except it is real instead of complex. 
+ */ static PyArrayObject * _get_part(PyArrayObject *self, int imag) { @@ -6692,7 +7101,7 @@ _get_part(PyArrayObject *self, int imag) Py_DECREF(type); type = new; } - ret = (PyArrayObject *) \ + ret = (PyArrayObject *) PyArray_NewFromDescr(self->ob_type, type, self->nd, @@ -6700,7 +7109,9 @@ _get_part(PyArrayObject *self, int imag) self->strides, self->data + offset, self->flags, (PyObject *)self); - if (ret == NULL) return NULL; + if (ret == NULL) { + return NULL; + } ret->flags &= ~CONTIGUOUS; ret->flags &= ~FORTRAN; Py_INCREF(self); @@ -6733,14 +7144,19 @@ array_real_set(PyArrayObject *self, PyObject *val) if (PyArray_ISCOMPLEX(self)) { ret = _get_part(self, 0); - if (ret == NULL) return -1; + if (ret == NULL) { + return -1; + } } else { Py_INCREF(self); ret = self; } new = (PyArrayObject *)PyArray_FromAny(val, NULL, 0, 0, 0, NULL); - if (new == NULL) {Py_DECREF(ret); return -1;} + if (new == NULL) { + Py_DECREF(ret); + return -1; + } rint = PyArray_MoveInto(ret, new); Py_DECREF(ret); Py_DECREF(new); @@ -6759,15 +7175,17 @@ array_imag_get(PyArrayObject *self) Py_INCREF(self->descr); ret = (PyArrayObject *)PyArray_NewFromDescr(self->ob_type, self->descr, - self->nd, + self->nd, self->dimensions, NULL, NULL, PyArray_ISFORTRAN(self), (PyObject *)self); - if (ret == NULL) return NULL; - - if (_zerofill(ret) < 0) return NULL; - + if (ret == NULL) { + return NULL; + } + if (_zerofill(ret) < 0) { + return NULL; + } ret->flags &= ~WRITEABLE; } return (PyObject *) ret; @@ -6782,9 +7200,14 @@ array_imag_set(PyArrayObject *self, PyObject *val) int rint; ret = _get_part(self, 1); - if (ret == NULL) return -1; + if (ret == NULL) { + return -1; + } new = (PyArrayObject *)PyArray_FromAny(val, NULL, 0, 0, 0, NULL); - if (new == NULL) {Py_DECREF(ret); return -1;} + if (new == NULL) { + Py_DECREF(ret); + return -1; + } rint = PyArray_MoveInto(ret, new); Py_DECREF(ret); Py_DECREF(new); @@ -6806,9 +7229,9 @@ array_flat_get(PyArrayObject *self) static int 
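`_get_part` returns a view over the same bytes: the real part starts at byte offset 0 and the imaginary part at `elsize / 2` (8 bytes for a 16-byte complex). The layout can be checked with the `struct` module; the values here are illustrative:

```python
import struct

# One complex128 value laid out as two little-endian float64s.
buf = struct.pack("<dd", 1.5, -2.0)

real, = struct.unpack_from("<d", buf, 0)   # offset 0: real part
imag, = struct.unpack_from("<d", buf, 8)   # offset elsize // 2: imag part

print(real, imag)  # 1.5 -2.0
```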
array_flat_set(PyArrayObject *self, PyObject *val) { - PyObject *arr=NULL; + PyObject *arr = NULL; int retval = -1; - PyArrayIterObject *selfit=NULL, *arrit=NULL; + PyArrayIterObject *selfit = NULL, *arrit = NULL; PyArray_Descr *typecode; int swap; PyArray_CopySwapFunc *copyswap; @@ -6817,28 +7240,36 @@ array_flat_set(PyArrayObject *self, PyObject *val) Py_INCREF(typecode); arr = PyArray_FromAny(val, typecode, 0, 0, FORCECAST | FORTRAN_IF(self), NULL); - if (arr == NULL) return -1; + if (arr == NULL) { + return -1; + } arrit = (PyArrayIterObject *)PyArray_IterNew(arr); - if (arrit == NULL) goto exit; + if (arrit == NULL) { + goto exit; + } selfit = (PyArrayIterObject *)PyArray_IterNew((PyObject *)self); - if (selfit == NULL) goto exit; - - if (arrit->size == 0) {retval = 0; goto exit;} - + if (selfit == NULL) { + goto exit; + } + if (arrit->size == 0) { + retval = 0; + goto exit; + } swap = PyArray_ISNOTSWAPPED(self) != PyArray_ISNOTSWAPPED(arr); copyswap = self->descr->f->copyswap; if (PyDataType_REFCHK(self->descr)) { - while(selfit->index < selfit->size) { + while (selfit->index < selfit->size) { PyArray_Item_XDECREF(selfit->dataptr, self->descr); PyArray_Item_INCREF(arrit->dataptr, PyArray_DESCR(arr)); - memmove(selfit->dataptr, arrit->dataptr, - sizeof(PyObject **)); - if (swap) + memmove(selfit->dataptr, arrit->dataptr, sizeof(PyObject **)); + if (swap) { copyswap(selfit->dataptr, NULL, swap, self); + } PyArray_ITER_NEXT(selfit); PyArray_ITER_NEXT(arrit); - if (arrit->index == arrit->size) + if (arrit->index == arrit->size) { PyArray_ITER_RESET(arrit); + } } retval = 0; goto exit; @@ -6846,14 +7277,17 @@ array_flat_set(PyArrayObject *self, PyObject *val) while(selfit->index < selfit->size) { memmove(selfit->dataptr, arrit->dataptr, self->descr->elsize); - if (swap) + if (swap) { copyswap(selfit->dataptr, NULL, swap, self); + } PyArray_ITER_NEXT(selfit); PyArray_ITER_NEXT(arrit); - if (arrit->index == arrit->size) + if (arrit->index == arrit->size) { 
PyArray_ITER_RESET(arrit); + } } retval = 0; + exit: Py_XDECREF(selfit); Py_XDECREF(arrit); @@ -6961,77 +7395,78 @@ array_alloc(PyTypeObject *type, Py_ssize_t NPY_UNUSED(nitems)) static PyTypeObject PyArray_Type = { PyObject_HEAD_INIT(NULL) - 0, /*ob_size*/ - "numpy.ndarray", /*tp_name*/ - sizeof(PyArrayObject), /*tp_basicsize*/ - 0, /*tp_itemsize*/ + 0, /* ob_size */ + "numpy.ndarray", /* tp_name */ + sizeof(PyArrayObject), /* tp_basicsize */ + 0, /* tp_itemsize */ /* methods */ - (destructor)array_dealloc, /*tp_dealloc */ - (printfunc)NULL, /*tp_print*/ - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - (cmpfunc)0, /*tp_compare*/ - (reprfunc)array_repr, /*tp_repr*/ - &array_as_number, /*tp_as_number*/ - &array_as_sequence, /*tp_as_sequence*/ - &array_as_mapping, /*tp_as_mapping*/ - (hashfunc)0, /*tp_hash*/ - (ternaryfunc)0, /*tp_call*/ - (reprfunc)array_str, /*tp_str*/ - - (getattrofunc)0, /*tp_getattro*/ - (setattrofunc)0, /*tp_setattro*/ - &array_as_buffer, /*tp_as_buffer*/ + (destructor)array_dealloc, /* tp_dealloc */ + (printfunc)NULL, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + (cmpfunc)0, /* tp_compare */ + (reprfunc)array_repr, /* tp_repr */ + &array_as_number, /* tp_as_number */ + &array_as_sequence, /* tp_as_sequence */ + &array_as_mapping, /* tp_as_mapping */ + (hashfunc)0, /* tp_hash */ + (ternaryfunc)0, /* tp_call */ + (reprfunc)array_str, /* tp_str */ + (getattrofunc)0, /* tp_getattro */ + (setattrofunc)0, /* tp_setattro */ + &array_as_buffer, /* tp_as_buffer */ (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE - | Py_TPFLAGS_CHECKTYPES), /*tp_flags*/ + | Py_TPFLAGS_CHECKTYPES), /* tp_flags */ /*Documentation string */ - 0, /*tp_doc*/ + 0, /* tp_doc */ - (traverseproc)0, /*tp_traverse */ - (inquiry)0, /*tp_clear */ - (richcmpfunc)array_richcompare, /*tp_richcompare */ - offsetof(PyArrayObject, weakreflist), /*tp_weaklistoffset */ + (traverseproc)0, /* tp_traverse */ + (inquiry)0, /* tp_clear */ + (richcmpfunc)array_richcompare, /* tp_richcompare */ + 
offsetof(PyArrayObject, weakreflist), /* tp_weaklistoffset */ /* Iterator support (use standard) */ - (getiterfunc)array_iter, /* tp_iter */ - (iternextfunc)0, /* tp_iternext */ + (getiterfunc)array_iter, /* tp_iter */ + (iternextfunc)0, /* tp_iternext */ /* Sub-classing (new-style object) support */ - array_methods, /* tp_methods */ - 0, /* tp_members */ - array_getsetlist, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - (initproc)0, /* tp_init */ - array_alloc, /* tp_alloc */ - (newfunc)array_new, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ + array_methods, /* tp_methods */ + 0, /* tp_members */ + array_getsetlist, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + (initproc)0, /* tp_init */ + array_alloc, /* tp_alloc */ + (newfunc)array_new, /* tp_new */ + 0, /* tp_free */ + 0, /* tp_is_gc */ + 0, /* tp_bases */ + 0, /* tp_mro */ + 0, /* tp_cache */ + 0, /* tp_subclasses */ + 0, /* tp_weaklist */ + 0, /* tp_del */ #ifdef COUNT_ALLOCS - /* these must be last and never explicitly initialized */ - 0, /* tp_allocs */ - 0, /* tp_frees */ - 0, /* tp_maxalloc */ - 0, /* tp_prev */ - 0, /* *tp_next */ + /* these must be last and never explicitly initialized */ + 0, /* tp_allocs */ + 0, /* tp_frees */ + 0, /* tp_maxalloc */ + 0, /* tp_prev */ + 0, /* *tp_next */ #endif }; -/* The rest of this code is to build the right kind of array from a python */ -/* object. */ +/* + * The rest of this code is to build the right kind of array + * from a python object. 
+ */ static int discover_depth(PyObject *s, int max, int stop_at_string, int stop_at_tuple) @@ -7120,13 +7555,12 @@ discover_itemsize(PyObject *s, int nd, int *itemsize) } n = PyObject_Length(s); - if ((nd == 0) || PyString_Check(s) || PyUnicode_Check(s) || PyBuffer_Check(s)) { *itemsize = MAX(*itemsize, n); return 0; } - for(i = 0; i < n; i++) { + for (i = 0; i < n; i++) { if ((e = PySequence_GetItem(s,i))==NULL) { return -1; } @@ -7156,8 +7590,7 @@ discover_dimensions(PyObject *s, int nd, intp *d, int check_it) } return 0; } - - n=PyObject_Length(s); + n = PyObject_Length(s); *d = n; if (*d < 0) { return -1; @@ -7207,10 +7640,11 @@ _array_small_type(PyArray_Descr *chktype, PyArray_Descr* mintype) } - if (chktype->type_num > mintype->type_num) + if (chktype->type_num > mintype->type_num) { outtype_num = chktype->type_num; + } else { - if (PyDataType_ISOBJECT(chktype) && \ + if (PyDataType_ISOBJECT(chktype) && PyDataType_ISSTRING(mintype)) { return PyArray_DescrFromType(NPY_OBJECT); } @@ -7220,10 +7654,11 @@ _array_small_type(PyArray_Descr *chktype, PyArray_Descr* mintype) } save_num = outtype_num; - while(outtype_num < PyArray_NTYPES && + while (outtype_num < PyArray_NTYPES && !(PyArray_CanCastSafely(chktype->type_num, outtype_num) - && PyArray_CanCastSafely(mintype->type_num, outtype_num))) + && PyArray_CanCastSafely(mintype->type_num, outtype_num))) { outtype_num++; + } if (outtype_num == PyArray_NTYPES) { outtype = PyArray_DescrFromType(save_num); } @@ -7232,11 +7667,13 @@ _array_small_type(PyArray_Descr *chktype, PyArray_Descr* mintype) } if (PyTypeNum_ISEXTENDED(outtype->type_num)) { int testsize = outtype->elsize; - register int chksize, minsize; + int chksize, minsize; chksize = chktype->elsize; minsize = mintype->elsize; - /* Handle string->unicode case separately - because string itemsize is 4* as large */ + /* + * Handle string->unicode case separately + * because string itemsize is 4* as large + */ if (outtype->type_num == PyArray_UNICODE && 
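The `discover_depth`/`discover_dimensions` pair reformatted in the hunks above walks a nested Python sequence recursively: strings and scalars stop the descent, and the first element at each level supplies the next dimension. A minimal Python sketch of that strategy (illustrative names only, not the C implementation, which also handles tuples, buffers, and the array interface):

```python
def discover_depth(s, max_depth):
    """Return the nesting depth of s: strings/scalars are depth 0,
    each sequence level adds one, bounded by max_depth."""
    if isinstance(s, (str, bytes)) or not hasattr(s, "__len__"):
        return 0
    if max_depth <= 0:
        raise ValueError("maximum nesting depth exceeded")
    if len(s) == 0:
        return 1
    # Like the C code, only the first element is followed downward.
    return 1 + discover_depth(s[0], max_depth - 1)

def discover_dimensions(s, nd):
    """Collect the length at each of nd levels by walking first elements."""
    dims = []
    for _ in range(nd):
        dims.append(len(s))
        if dims[-1] == 0:
            break
        s = s[0]
    return dims
```

Mismatched inner lengths are caught later (during assignment), not here, which is why only the first element is inspected.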
mintype->type_num == PyArray_STRING) { testsize = MAX(chksize, 4*minsize); @@ -7269,7 +7706,8 @@ _array_find_python_scalar_type(PyObject *op) /* bools are a subclass of int */ if (PyBool_Check(op)) { return PyArray_DescrFromType(PyArray_BOOL); - } else { + } + else { return PyArray_DescrFromType(PyArray_LONG); } } @@ -7307,39 +7745,42 @@ _use_default_type(PyObject *op) } -/* op is an object to be converted to an ndarray. - - minitype is the minimum type-descriptor needed. - - max is the maximum number of dimensions -- used for recursive call - to avoid infinite recursion... - -*/ - +/* + * op is an object to be converted to an ndarray. + * + * minitype is the minimum type-descriptor needed. + * + * max is the maximum number of dimensions -- used for recursive call + * to avoid infinite recursion... + */ static PyArray_Descr * _array_find_type(PyObject *op, PyArray_Descr *minitype, int max) { int l; PyObject *ip; - PyArray_Descr *chktype=NULL; + PyArray_Descr *chktype = NULL; PyArray_Descr *outtype; - /* These need to come first because if op already carries - a descr structure, then we want it to be the result if minitype - is NULL. - */ - + /* + * These need to come first because if op already carries + * a descr structure, then we want it to be the result if minitype + * is NULL. 
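`_array_small_type` above promotes two descriptors by starting at the larger type number and walking upward until it finds a type that both inputs can cast to safely. A toy Python sketch of that upward walk, assuming a simplified linear type lattice (the real check is `PyArray_CanCastSafely` over NumPy's full type table, with extra handling for string/unicode sizes):

```python
# Toy ordering of a few scalar kinds, smallest to largest; purely
# illustrative, not NumPy's actual type-number table.
TYPES = ["bool", "int8", "int32", "int64", "float64", "complex128", "object"]

def can_cast_safely(a, b):
    # In this toy lattice a cast is safe iff the target is not smaller.
    return TYPES.index(a) <= TYPES.index(b)

def small_type(chktype, mintype):
    """Smallest type both inputs cast to safely, mirroring the
    while-loop that increments outtype_num in _array_small_type."""
    start = max(TYPES.index(chktype), TYPES.index(mintype))
    for cand in TYPES[start:]:
        if can_cast_safely(chktype, cand) and can_cast_safely(mintype, cand):
            return cand
    return TYPES[-1]  # fall back to the most general type
```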
+ */ if (PyArray_Check(op)) { chktype = PyArray_DESCR(op); Py_INCREF(chktype); - if (minitype == NULL) return chktype; + if (minitype == NULL) { + return chktype; + } Py_INCREF(minitype); goto finish; } if (PyArray_IsScalar(op, Generic)) { chktype = PyArray_DescrFromScalar(op); - if (minitype == NULL) return chktype; + if (minitype == NULL) { + return chktype; + } Py_INCREF(minitype); goto finish; } @@ -7347,10 +7788,12 @@ _array_find_type(PyObject *op, PyArray_Descr *minitype, int max) if (minitype == NULL) { minitype = PyArray_DescrFromType(PyArray_BOOL); } - else Py_INCREF(minitype); - - if (max < 0) goto deflt; - + else { + Py_INCREF(minitype); + } + if (max < 0) { + goto deflt; + } chktype = _array_find_python_scalar_type(op); if (chktype) { goto finish; } @@ -7361,15 +7804,17 @@ PyObject *new; new = PyDict_GetItemString(ip, "typestr"); if (new && PyString_Check(new)) { - chktype =_array_typedescr_fromstr \ - (PyString_AS_STRING(new)); + chktype = _array_typedescr_fromstr(PyString_AS_STRING(new)); } } Py_DECREF(ip); - if (chktype) goto finish; + if (chktype) { + goto finish; + } + } + else { + PyErr_Clear(); } - else PyErr_Clear(); - if ((ip=PyObject_GetAttrString(op, "__array_struct__")) != NULL) { PyArrayInterface *inter; char buf[40]; @@ -7382,9 +7827,13 @@ } } Py_DECREF(ip); - if (chktype) goto finish; + if (chktype) { + goto finish; + } + } + else { + PyErr_Clear(); } - else PyErr_Clear(); if (PyString_Check(op)) { chktype = PyArray_DescrNewFromType(PyArray_STRING); @@ -7420,10 +7869,10 @@ _array_find_type(PyObject *op, PyArray_Descr *minitype, int max) if (PyErr_Occurred()) PyErr_Clear(); } - if (PyInstance_Check(op)) goto deflt; - + if (PyInstance_Check(op)) { + goto deflt; + } if (PySequence_Check(op)) { - l = PyObject_Length(op); if (l < 0 && PyErr_Occurred()) { PyErr_Clear(); @@ -7457,13 +7906,14 @@ _array_find_type(PyObject
*op, PyArray_Descr *minitype, int max) chktype = _use_default_type(op); finish: - outtype = _array_small_type(chktype, minitype); Py_DECREF(chktype); Py_DECREF(minitype); - /* VOID Arrays should not occur by "default" - unless input was already a VOID */ - if (outtype->type_num == PyArray_VOID && \ + /* + * VOID Arrays should not occur by "default" + * unless input was already a VOID + */ + if (outtype->type_num == PyArray_VOID && minitype->type_num != PyArray_VOID) { Py_DECREF(outtype); return PyArray_DescrFromType(PyArray_OBJECT); @@ -7478,15 +7928,15 @@ setArrayFromSequence(PyArrayObject *a, PyObject *s, int dim, intp offset) Py_ssize_t i, slen; int res = 0; - /* This code is to ensure that the sequence access below will - return a lower-dimensional sequence. + /* + * This code is to ensure that the sequence access below will + * return a lower-dimensional sequence. */ if (PyArray_Check(s) && !(PyArray_CheckExact(s))) { - /* FIXME: This could probably copy the entire subarray - at once here using a faster algorithm. - Right now, just make sure a base-class array - is used so that the dimensionality reduction assumption - is correct. + /* + * FIXME: This could probably copy the entire subarray at once here using + * a faster algorithm. Right now, just make sure a base-class array is + * used so that the dimensionality reduction assumption is correct. 
*/ s = PyArray_EnsureArray(s); } @@ -7498,14 +7948,13 @@ setArrayFromSequence(PyArrayObject *a, PyObject *s, int dim, intp offset) } slen = PySequence_Length(s); - if (slen != a->dimensions[dim]) { PyErr_Format(PyExc_ValueError, "setArrayFromSequence: sequence/array shape mismatch."); return -1; } - for(i=0; i<slen; i++) { + for (i = 0; i < slen; i++) { PyObject *o = PySequence_GetItem(s, i); if ((a->nd - dim) > 1) { res = setArrayFromSequence(a, o, dim+1, offset); @@ -7514,7 +7963,9 @@ setArrayFromSequence(PyArrayObject *a, PyObject *s, int dim, intp offset) res = a->descr->f->setitem(o, (a->data + offset), a); } Py_DECREF(o); - if (res < 0) return res; + if (res < 0) { + return res; + } offset += a->strides[dim]; } return 0; @@ -7534,12 +7985,13 @@ Assign_Array(PyArrayObject *self, PyObject *v) "assignment to 0-d array"); return -1; } - return setArrayFromSequence(self, v, 0, 0); } -/* "Array Scalars don't call this code" */ -/* steals reference to typecode -- no NULL*/ +/* + * "Array Scalars don't call this code" + * steals reference to typecode -- no NULL + */ static PyObject * Array_FromPyScalar(PyObject *op, PyArray_Descr *typecode) { @@ -7552,7 +8004,6 @@ Array_FromPyScalar(PyObject *op, PyArray_Descr *typecode) if (itemsize == 0 && PyTypeNum_ISEXTENDED(type)) { itemsize = PyObject_Length(op); - if (type == PyArray_UNICODE) { itemsize *= 4; } @@ -7579,21 +8030,21 @@ Array_FromPyScalar(PyObject *op, PyArray_Descr *typecode) if (PyErr_Occurred()) { Py_DECREF(ret); return NULL; - } + } else { return (PyObject *)ret; } } -/* If s is not a list, return 0 - Otherwise: - - run object_depth_and_dimension on all the elements - and make sure the returned shape and size - is the same for each element - -*/ +/* + * If s is not a list, return 0 + * Otherwise: + * + * run object_depth_and_dimension on all the elements + * and make sure the returned shape and size is the + * same for each element + */ static int object_depth_and_dimension(PyObject *s, int max, intp *dims) 
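`setArrayFromSequence` above recurses one dimension at a time, checking each level's length against the array's shape and advancing by that dimension's stride per item. A Python sketch of the same recursion over a flat buffer (strides here count elements rather than bytes, and the element-setting step stands in for `descr->f->setitem`):

```python
def set_from_sequence(flat, shape, strides, s, dim=0, offset=0):
    """Recursively copy a nested sequence into a flat element buffer,
    moving strides[dim] elements forward per item at each level."""
    if len(s) != shape[dim]:
        raise ValueError("setArrayFromSequence: sequence/array shape mismatch.")
    for item in s:
        if dim < len(shape) - 1:
            set_from_sequence(flat, shape, strides, item, dim + 1, offset)
        else:
            flat[offset] = item  # leaf level: store the scalar
        offset += strides[dim]
```

With C-contiguous element strides `(3, 1)` for shape `(2, 3)`, the nested list lands in row-major order.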
{ @@ -7631,7 +8082,7 @@ object_depth_and_dimension(PyObject *s, int max, intp *dims) } nd = object_depth_and_dimension(obj, max - 1, newdims); - for(i = 1; i < size; i++) { + for (i = 1; i < size; i++) { if (islist) { obj = PyList_GET_ITEM(s, i); } @@ -7647,7 +8098,7 @@ object_depth_and_dimension(PyObject *s, int max, intp *dims) } } - for(i = 1; i <= nd; i++) { + for (i = 1; i <= nd; i++) { dims[i] = newdims[i-1]; } dims[0] = size; @@ -7670,12 +8121,10 @@ ObjectArray_FromNestedList(PyObject *s, PyArray_Descr *typecode, int fortran) if (nd == 0) { return Array_FromPyScalar(s, typecode); } - r = (PyArrayObject*)PyArray_NewFromDescr(&PyArray_Type, typecode, nd, d, NULL, NULL, fortran, NULL); - if (!r) { return NULL; } @@ -7686,12 +8135,12 @@ ObjectArray_FromNestedList(PyObject *s, PyArray_Descr *typecode, int fortran) return (PyObject*)r; } -/* +/* * isobject means that we are constructing an * object array on-purpose with a nested list. * Only a list is interpreted as a sequence with these rules + * steals reference to typecode */ -/* steals reference to typecode */ static PyObject * Array_FromSequence(PyObject *s, PyArray_Descr *typecode, int fortran, int min_depth, int max_depth) @@ -7707,11 +8156,9 @@ Array_FromSequence(PyObject *s, PyArray_Descr *typecode, int fortran, int itemsize = typecode->elsize; check_it = (typecode->type != PyArray_CHARLTR); - stop_at_string = (type != PyArray_STRING) || (typecode->type == PyArray_STRINGLTR); - - stop_at_tuple = (type == PyArray_VOID && (typecode->names \ + stop_at_tuple = (type == PyArray_VOID && (typecode->names || typecode->subarray)); nd = discover_depth(s, MAX_DIMS + 1, stop_at_string, stop_at_tuple); @@ -7776,8 +8223,8 @@ Array_FromSequence(PyObject *s, PyArray_Descr *typecode, int fortran, /*NUMPY_API - Is the typenum valid? -*/ + * Is the typenum valid? 
+ */ static int PyArray_ValidType(int type) { @@ -7792,11 +8239,11 @@ PyArray_ValidType(int type) return res; } -/* For backward compatibility */ - -/* steals reference to at --- cannot be NULL*/ /*NUMPY_API - *Cast an array using typecode structure. + * For backward compatibility + * + * Cast an array using typecode structure. + * steals reference to at --- cannot be NULL */ static PyObject * PyArray_CastToType(PyArrayObject *mp, PyArray_Descr *at, int fortran) @@ -7807,12 +8254,11 @@ PyArray_CastToType(PyArrayObject *mp, PyArray_Descr *at, int fortran) mpd = mp->descr; - if (((mpd == at) || ((mpd->type_num == at->type_num) && \ - PyArray_EquivByteorders(mpd->byteorder,\ - at->byteorder) && \ - ((mpd->elsize == at->elsize) || \ - (at->elsize==0)))) && \ - PyArray_ISBEHAVED_RO(mp)) { + if (((mpd == at) || + ((mpd->type_num == at->type_num) && + PyArray_EquivByteorders(mpd->byteorder, at->byteorder) && + ((mpd->elsize == at->elsize) || (at->elsize==0)))) && + PyArray_ISBEHAVED_RO(mp)) { Py_DECREF(at); Py_INCREF(mp); return (PyObject *)mp; @@ -7823,7 +8269,7 @@ PyArray_CastToType(PyArrayObject *mp, PyArray_Descr *at, int fortran) if (at == NULL) { return NULL; } - if (mpd->type_num == PyArray_STRING && + if (mpd->type_num == PyArray_STRING && at->type_num == PyArray_UNICODE) { at->elsize = mpd->elsize << 2; } @@ -7857,14 +8303,15 @@ PyArray_CastToType(PyArrayObject *mp, PyArray_Descr *at, int fortran) } /*NUMPY_API - Get a cast function to cast from the input descriptor to the - output type_number (must be a registered data-type). - Returns NULL if un-successful. -*/ + * Get a cast function to cast from the input descriptor to the + * output type_number (must be a registered data-type). + * Returns NULL if un-successful. 
+ */ static PyArray_VectorUnaryFunc * PyArray_GetCastFunc(PyArray_Descr *descr, int type_num) { - PyArray_VectorUnaryFunc *castfunc=NULL; + PyArray_VectorUnaryFunc *castfunc = NULL; + if (type_num < PyArray_NTYPES) { castfunc = descr->f->cast[type_num]; } @@ -7889,19 +8336,19 @@ PyArray_GetCastFunc(PyArray_Descr *descr, int type_num) return castfunc; } - PyErr_SetString(PyExc_ValueError, - "No cast function available."); + PyErr_SetString(PyExc_ValueError, "No cast function available."); return NULL; } -/* Reference counts: - copyswapn is used which increases and decreases reference counts for OBJECT arrays. - All that needs to happen is for any reference counts in the buffers to be - decreased when completely finished with the buffers. - - buffers[0] is the destination - buffers[1] is the source -*/ +/* + * Reference counts: + * copyswapn is used which increases and decreases reference counts for OBJECT arrays. + * All that needs to happen is for any reference counts in the buffers to be + * decreased when completely finished with the buffers. + * + * buffers[0] is the destination + * buffers[1] is the source + */ static void _strided_buffered_cast(char *dptr, intp dstride, int delsize, int dswap, PyArray_CopySwapNFunc *dcopyfunc, @@ -7913,10 +8360,11 @@ _strided_buffered_cast(char *dptr, intp dstride, int delsize, int dswap, { int i; if (N <= bufsize) { - /* 1. copy input to buffer and swap - 2. cast input to output - 3. swap output if necessary and copy from output buffer - */ + /* + * 1. copy input to buffer and swap + * 2. cast input to output + * 3. 
swap output if necessary and copy from output buffer + */ scopyfunc(buffers[1], selsize, sptr, sstride, N, sswap, src); castfunc(buffers[1], buffers[0], N, src, dest); dcopyfunc(dptr, dstride, buffers[0], delsize, N, dswap, dest); @@ -7925,9 +8373,9 @@ _strided_buffered_cast(char *dptr, intp dstride, int delsize, int dswap, /* otherwise we need to divide up into bufsize pieces */ i = 0; - while(N > 0) { - int newN; - newN = MIN(N, bufsize); + while (N > 0) { + int newN = MIN(N, bufsize); + _strided_buffered_cast(dptr+i*dstride, dstride, delsize, dswap, dcopyfunc, sptr+i*sstride, sstride, selsize, @@ -8007,7 +8455,7 @@ _broadcast_cast(PyArrayObject *out, PyArrayObject *in, } #endif - while(multi->index < multi->size) { + while (multi->index < multi->size) { _strided_buffered_cast(multi->iters[0]->dataptr, ostrides, delsize, oswap, ocopyfunc, @@ -8026,13 +8474,13 @@ _broadcast_cast(PyArrayObject *out, PyArrayObject *in, Py_DECREF(multi); if (PyDataType_REFCHK(in->descr)) { obptr = buffers[1]; - for(i = 0; i < N; i++, obptr+=selsize) { + for (i = 0; i < N; i++, obptr+=selsize) { PyArray_Item_XDECREF(obptr, out->descr); } } if (PyDataType_REFCHK(out->descr)) { obptr = buffers[0]; - for(i = 0; i < N; i++, obptr+=delsize) { + for (i = 0; i < N; i++, obptr+=delsize) { PyArray_Item_XDECREF(obptr, out->descr); } } @@ -8062,7 +8510,7 @@ PyArray_CastTo(PyArrayObject *out, PyArrayObject *mp) { int simple; int same; - PyArray_VectorUnaryFunc *castfunc=NULL; + PyArray_VectorUnaryFunc *castfunc = NULL; int mpsize = PyArray_SIZE(mp); int iswap, oswap; NPY_BEGIN_THREADS_DEF; @@ -8071,8 +8519,7 @@ PyArray_CastTo(PyArrayObject *out, PyArrayObject *mp) return 0; } if (!PyArray_ISWRITEABLE(out)) { - PyErr_SetString(PyExc_ValueError, - "output array is not writeable"); + PyErr_SetString(PyExc_ValueError, "output array is not writeable"); return -1; } @@ -8127,13 +8574,13 @@ _bufferedcast(PyArrayObject *out, PyArrayObject *in, { char *inbuffer, *bptr, *optr; char *outbuffer=NULL; - 
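When `N` exceeds the staging-buffer size, `_strided_buffered_cast` above divides the work into `bufsize`-sized pieces, each one doing the copy-in, cast, copy-out sequence. A Python sketch of that chunking loop, with a plain callable standing in for the cast function (no byte-swapping or strides, which the C version also handles):

```python
def buffered_cast(src, cast, bufsize):
    """Cast src through a small staging buffer, bufsize elements at a
    time, mirroring the divide-into-pieces loop in _strided_buffered_cast."""
    out = []
    i = 0
    n = len(src)
    while n > 0:
        step = min(n, bufsize)
        buf = src[i:i + step]             # 1. copy input slice into buffer
        out.extend(cast(x) for x in buf)  # 2. cast buffer to output
        i += step
        n -= step
    return out
```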
PyArrayIterObject *it_in=NULL, *it_out=NULL; + PyArrayIterObject *it_in = NULL, *it_out = NULL; register intp i, index; intp ncopies = PyArray_SIZE(out) / PyArray_SIZE(in); int elsize=in->descr->elsize; int nels = PyArray_BUFSIZE; int el; - int inswap, outswap=0; + int inswap, outswap = 0; int obuf=!PyArray_ISCARRAY(out); int oelsize = out->descr->elsize; PyArray_CopySwapFunc *in_csn; @@ -8152,45 +8599,50 @@ _bufferedcast(PyArrayObject *out, PyArrayObject *in, inswap = !(PyArray_ISFLEXIBLE(in) || PyArray_ISNOTSWAPPED(in)); inbuffer = PyDataMem_NEW(PyArray_BUFSIZE*elsize); - if (inbuffer == NULL) return -1; - if (PyArray_ISOBJECT(in)) + if (inbuffer == NULL) { + return -1; + } + if (PyArray_ISOBJECT(in)) { memset(inbuffer, 0, PyArray_BUFSIZE*elsize); + } it_in = (PyArrayIterObject *)PyArray_IterNew((PyObject *)in); - if (it_in == NULL) goto exit; - + if (it_in == NULL) { + goto exit; + } if (obuf) { - outswap = !(PyArray_ISFLEXIBLE(out) || \ + outswap = !(PyArray_ISFLEXIBLE(out) || PyArray_ISNOTSWAPPED(out)); outbuffer = PyDataMem_NEW(PyArray_BUFSIZE*oelsize); - if (outbuffer == NULL) goto exit; - if (PyArray_ISOBJECT(out)) + if (outbuffer == NULL) { + goto exit; + } + if (PyArray_ISOBJECT(out)) { memset(outbuffer, 0, PyArray_BUFSIZE*oelsize); - + } it_out = (PyArrayIterObject *)PyArray_IterNew((PyObject *)out); - if (it_out == NULL) goto exit; - + if (it_out == NULL) { + goto exit; + } nels = MIN(nels, PyArray_BUFSIZE); } optr = (obuf) ? 
outbuffer: out->data; bptr = inbuffer; el = 0; - while(ncopies--) { + while (ncopies--) { index = it_in->size; PyArray_ITER_RESET(it_in); - while(index--) { + while (index--) { in_csn(bptr, it_in->dataptr, inswap, in); bptr += elsize; PyArray_ITER_NEXT(it_in); el += 1; if ((el == nels) || (index == 0)) { /* buffer filled, do cast */ - castfunc(inbuffer, optr, el, in, out); - if (obuf) { /* Copy from outbuffer to array */ - for(i=0; i<el; i++) { + for (i = 0; i < el; i++) { out_csn(it_out->dataptr, optr, outswap, out); @@ -8208,6 +8660,7 @@ _bufferedcast(PyArrayObject *out, PyArrayObject *in, } } retval = 0; + exit: Py_XDECREF(it_in); PyDataMem_FREE(inbuffer); @@ -8219,20 +8672,21 @@ _bufferedcast(PyArrayObject *out, PyArrayObject *in, } /*NUMPY_API - Cast to an already created array. Arrays don't have to be "broadcastable" - Only requirement is they have the same number of elements. -*/ + * Cast to an already created array. Arrays don't have to be "broadcastable" + * Only requirement is they have the same number of elements. 
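The `while (ncopies--)` loop in `_bufferedcast` above exists because the output may hold a whole-number multiple of the input's elements (`ncopies = PyArray_SIZE(out) / PyArray_SIZE(in)`), in which case the input is streamed through the buffer repeatedly to tile the output. A Python sketch of that outer loop (illustrative only; the C code interleaves this with the buffering and byte-swap machinery):

```python
def repeat_cast(src, out_size, cast):
    """Tile a smaller input across a larger output, casting each pass,
    as _bufferedcast's outer ncopies loop does."""
    if not src or out_size % len(src):
        raise ValueError("output size must be a nonzero multiple of input size")
    out = []
    for _ in range(out_size // len(src)):  # ncopies passes over the input
        out.extend(cast(x) for x in src)
    return out
```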
+ */ static int PyArray_CastAnyTo(PyArrayObject *out, PyArrayObject *mp) { int simple; - PyArray_VectorUnaryFunc *castfunc=NULL; + PyArray_VectorUnaryFunc *castfunc = NULL; int mpsize = PyArray_SIZE(mp); - if (mpsize == 0) return 0; + if (mpsize == 0) { + return 0; + } if (!PyArray_ISWRITEABLE(out)) { - PyErr_SetString(PyExc_ValueError, - "output array is not writeable"); + PyErr_SetString(PyExc_ValueError, "output array is not writeable"); return -1; } @@ -8244,36 +8698,34 @@ PyArray_CastAnyTo(PyArrayObject *out, PyArrayObject *mp) } castfunc = PyArray_GetCastFunc(mp->descr, out->descr->type_num); - if (castfunc == NULL) return -1; - - + if (castfunc == NULL) { + return -1; + } simple = ((PyArray_ISCARRAY_RO(mp) && PyArray_ISCARRAY(out)) || (PyArray_ISFARRAY_RO(mp) && PyArray_ISFARRAY(out))); - if (simple) { castfunc(mp->data, out->data, mpsize, mp, out); return 0; } - if (PyArray_SAMESHAPE(out, mp)) { int iswap, oswap; iswap = PyArray_ISBYTESWAPPED(mp) && !PyArray_ISFLEXIBLE(mp); oswap = PyArray_ISBYTESWAPPED(out) && !PyArray_ISFLEXIBLE(out); return _broadcast_cast(out, mp, castfunc, iswap, oswap); } - return _bufferedcast(out, mp, castfunc); } -/* steals reference to newtype --- acc. NULL */ -/*NUMPY_API*/ +/*NUMPY_API + * steals reference to newtype --- acc. NULL + */ static PyObject * PyArray_FromArray(PyArrayObject *arr, PyArray_Descr *newtype, int flags) { - PyArrayObject *ret=NULL; + PyArrayObject *ret = NULL; int itemsize; int copy = 0; int arrflags; @@ -8282,9 +8734,7 @@ PyArray_FromArray(PyArrayObject *arr, PyArray_Descr *newtype, int flags) PyTypeObject *subtype; oldtype = PyArray_DESCR(arr); - subtype = arr->ob_type; - if (newtype == NULL) { newtype = oldtype; Py_INCREF(oldtype); } @@ -8298,10 +8748,11 @@ PyArray_FromArray(PyArrayObject *arr, PyArray_Descr *newtype, int flags) itemsize = newtype->elsize; } - /* Can't cast unless ndim-0 array, FORCECAST is specified - or the cast is safe. 
- */ - if (!(flags & FORCECAST) && !PyArray_NDIM(arr)==0 && + /* + * Can't cast unless ndim-0 array, FORCECAST is specified + * or the cast is safe. + */ + if (!(flags & FORCECAST) && PyArray_NDIM(arr) != 0 && !PyArray_CanCastTo(oldtype, newtype)) { Py_DECREF(newtype); PyErr_SetString(PyExc_TypeError, @@ -8313,16 +8764,15 @@ PyArray_FromArray(PyArrayObject *arr, PyArray_Descr *newtype, int flags) /* Don't copy if sizes are compatible */ if ((flags & ENSURECOPY) || PyArray_EquivTypes(oldtype, newtype)) { arrflags = arr->flags; - - copy = (flags & ENSURECOPY) || \ - ((flags & CONTIGUOUS) && (!(arrflags & CONTIGUOUS))) \ - || ((flags & ALIGNED) && (!(arrflags & ALIGNED))) \ - || (arr->nd > 1 && \ - ((flags & FORTRAN) && (!(arrflags & FORTRAN)))) \ + copy = (flags & ENSURECOPY) || + ((flags & CONTIGUOUS) && (!(arrflags & CONTIGUOUS))) + || ((flags & ALIGNED) && (!(arrflags & ALIGNED))) + || (arr->nd > 1 && + ((flags & FORTRAN) && (!(arrflags & FORTRAN)))) || ((flags & WRITEABLE) && (!(arrflags & WRITEABLE))); if (copy) { - if ((flags & UPDATEIFCOPY) && \ + if ((flags & UPDATEIFCOPY) && (!PyArray_ISWRITEABLE(arr))) { Py_DECREF(newtype); PyErr_SetString(PyExc_ValueError, msg); @@ -8331,7 +8781,7 @@ if ((flags & ENSUREARRAY)) { subtype = &PyArray_Type; } - ret = (PyArrayObject *) \ + ret = (PyArrayObject *) PyArray_NewFromDescr(subtype, newtype, arr->nd, arr->dimensions, @@ -8352,14 +8802,16 @@ Py_INCREF(arr); } } - /* If no copy then just increase the reference - count and return the input */ + /* + * If no copy then just increase the reference + * count and return the input + */ else { Py_DECREF(newtype); if ((flags & ENSUREARRAY) && !PyArray_CheckExact(arr)) { Py_INCREF(arr->descr); - ret = (PyArrayObject *) \ + ret = (PyArrayObject *) PyArray_NewFromDescr(&PyArray_Type, arr->descr, arr->nd, @@ -8379,10 +8831,12 @@
PyArray_FromArray(PyArrayObject *arr, PyArray_Descr *newtype, int flags) } } - /* The desired output type is different than the input - array type and copy was not specified */ + /* + * The desired output type is different than the input + * array type and copy was not specified + */ else { - if ((flags & UPDATEIFCOPY) && \ + if ((flags & UPDATEIFCOPY) && (!PyArray_ISWRITEABLE(arr))) { Py_DECREF(newtype); PyErr_SetString(PyExc_ValueError, msg); @@ -8391,7 +8845,7 @@ PyArray_FromArray(PyArrayObject *arr, PyArray_Descr *newtype, int flags) if ((flags & ENSUREARRAY)) { subtype = &PyArray_Type; } - ret = (PyArrayObject *) \ + ret = (PyArrayObject *) PyArray_NewFromDescr(subtype, newtype, arr->nd, arr->dimensions, NULL, NULL, @@ -8429,99 +8883,137 @@ _array_typedescr_fromstr(char *str) swapchar = str[0]; str += 1; -#define _MY_FAIL { \ - PyErr_SetString(PyExc_ValueError, msg); \ - return NULL; \ - } - typechar = str[0]; size = atoi(str + 1); switch (typechar) { - case 'b': - if (size == sizeof(Bool)) - type_num = PyArray_BOOL; - else _MY_FAIL - break; - case 'u': - if (size == sizeof(uintp)) - type_num = PyArray_UINTP; - else if (size == sizeof(char)) - type_num = PyArray_UBYTE; - else if (size == sizeof(short)) - type_num = PyArray_USHORT; - else if (size == sizeof(ulong)) - type_num = PyArray_ULONG; - else if (size == sizeof(int)) - type_num = PyArray_UINT; - else if (size == sizeof(ulonglong)) - type_num = PyArray_ULONGLONG; - else _MY_FAIL - break; - case 'i': - if (size == sizeof(intp)) - type_num = PyArray_INTP; - else if (size == sizeof(char)) - type_num = PyArray_BYTE; - else if (size == sizeof(short)) - type_num = PyArray_SHORT; - else if (size == sizeof(long)) - type_num = PyArray_LONG; - else if (size == sizeof(int)) - type_num = PyArray_INT; - else if (size == sizeof(longlong)) - type_num = PyArray_LONGLONG; - else _MY_FAIL - break; - case 'f': - if (size == sizeof(float)) - type_num = PyArray_FLOAT; - else if (size == sizeof(double)) - type_num = 
PyArray_DOUBLE; - else if (size == sizeof(longdouble)) - type_num = PyArray_LONGDOUBLE; - else _MY_FAIL - break; - case 'c': - if (size == sizeof(float)*2) - type_num = PyArray_CFLOAT; - else if (size == sizeof(double)*2) - type_num = PyArray_CDOUBLE; - else if (size == sizeof(longdouble)*2) - type_num = PyArray_CLONGDOUBLE; - else _MY_FAIL - break; - case 'O': - if (size == sizeof(PyObject *)) - type_num = PyArray_OBJECT; - else _MY_FAIL - break; - case PyArray_STRINGLTR: - type_num = PyArray_STRING; - break; - case PyArray_UNICODELTR: - type_num = PyArray_UNICODE; - size <<= 2; - break; - case 'V': - type_num = PyArray_VOID; - break; - default: - _MY_FAIL + case 'b': + if (size == sizeof(Bool)) { + type_num = PyArray_BOOL; } - -#undef _MY_FAIL + else { + PyErr_SetString(PyExc_ValueError, msg); + return NULL; + } + break; + case 'u': + if (size == sizeof(uintp)) { + type_num = PyArray_UINTP; + } + else if (size == sizeof(char)) { + type_num = PyArray_UBYTE; + } + else if (size == sizeof(short)) { + type_num = PyArray_USHORT; + } + else if (size == sizeof(ulong)) { + type_num = PyArray_ULONG; + } + else if (size == sizeof(int)) { + type_num = PyArray_UINT; + } + else if (size == sizeof(ulonglong)) { + type_num = PyArray_ULONGLONG; + } + else { + PyErr_SetString(PyExc_ValueError, msg); + return NULL; + } + break; + case 'i': + if (size == sizeof(intp)) { + type_num = PyArray_INTP; + } + else if (size == sizeof(char)) { + type_num = PyArray_BYTE; + } + else if (size == sizeof(short)) { + type_num = PyArray_SHORT; + } + else if (size == sizeof(long)) { + type_num = PyArray_LONG; + } + else if (size == sizeof(int)) { + type_num = PyArray_INT; + } + else if (size == sizeof(longlong)) { + type_num = PyArray_LONGLONG; + } + else { + PyErr_SetString(PyExc_ValueError, msg); + return NULL; + } + break; + case 'f': + if (size == sizeof(float)) { + type_num = PyArray_FLOAT; + } + else if (size == sizeof(double)) { + type_num = PyArray_DOUBLE; + } + else if (size == 
sizeof(longdouble)) { + type_num = PyArray_LONGDOUBLE; + } + else { + PyErr_SetString(PyExc_ValueError, msg); + return NULL; + } + break; + case 'c': + if (size == sizeof(float)*2) { + type_num = PyArray_CFLOAT; + } + else if (size == sizeof(double)*2) { + type_num = PyArray_CDOUBLE; + } + else if (size == sizeof(longdouble)*2) { + type_num = PyArray_CLONGDOUBLE; + } + else { + PyErr_SetString(PyExc_ValueError, msg); + return NULL; + } + break; + case 'O': + if (size == sizeof(PyObject *)) { + type_num = PyArray_OBJECT; + } + else { + PyErr_SetString(PyExc_ValueError, msg); + return NULL; + } + break; + case PyArray_STRINGLTR: + type_num = PyArray_STRING; + break; + case PyArray_UNICODELTR: + type_num = PyArray_UNICODE; + size <<= 2; + break; + case 'V': + type_num = PyArray_VOID; + break; + default: + PyErr_SetString(PyExc_ValueError, msg); + return NULL; + } descr = PyArray_DescrFromType(type_num); - if (descr == NULL) return NULL; + if (descr == NULL) { + return NULL; + } swap = !PyArray_ISNBO(swapchar); if (descr->elsize == 0 || swap) { /* Need to make a new PyArray_Descr */ PyArray_DESCR_REPLACE(descr); - if (descr==NULL) return NULL; - if (descr->elsize == 0) + if (descr==NULL) { + return NULL; + } + if (descr->elsize == 0) { descr->elsize = size; - if (swap) + } + if (swap) { descr->byteorder = swapchar; + } } return descr; } @@ -8530,7 +9022,7 @@ _array_typedescr_fromstr(char *str) static PyObject * PyArray_FromStructInterface(PyObject *input) { - PyArray_Descr *thetype=NULL; + PyArray_Descr *thetype = NULL; char buf[40]; PyArrayInterface *inter; PyObject *attr, *r; @@ -8541,9 +9033,13 @@ PyArray_FromStructInterface(PyObject *input) PyErr_Clear(); return Py_NotImplemented; } - if (!PyCObject_Check(attr)) goto fail; + if (!PyCObject_Check(attr)) { + goto fail; + } inter = PyCObject_AsVoidPtr(attr); - if (inter->two != 2) goto fail; + if (inter->two != 2) { + goto fail; + } if ((inter->flags & NOTSWAPPED) != NOTSWAPPED) { endian = PyArray_OPPBYTE; 
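`_array_typedescr_fromstr`, expanded above, pulls three fields out of an array-interface typestr such as `'<i4'`: the byte-order character, the kind character it switches on, and the item size. A small Python sketch of that front-end parsing (validation sets are illustrative; the C code then maps kind and size onto concrete type numbers via `sizeof` comparisons):

```python
import sys

def parse_typestr(ts):
    """Split a typestr like '<i4' into (order, kind, size, swapped),
    the fields _array_typedescr_fromstr extracts before its switch."""
    order, kind, size = ts[0], ts[1], int(ts[2:])
    if order not in "<>|=":
        raise ValueError("bad byte-order character")
    if kind not in "buifcOSUV":
        raise ValueError("bad kind character")
    native = "<" if sys.byteorder == "little" else ">"
    swapped = order in "<>" and order != native  # '|' means not applicable
    return order, kind, size, swapped
```

As in the C code, a unicode typestr's size would then be scaled up (the `size <<= 2` branch) before building the descriptor.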
inter->flags &= ~NOTSWAPPED; @@ -8587,10 +9083,10 @@ PyArray_FromStructInterface(PyObject *input) static PyObject * PyArray_FromInterface(PyObject *input) { - PyObject *attr=NULL, *item=NULL; - PyObject *tstr=NULL, *shape=NULL; - PyObject *inter=NULL; - PyObject *base=NULL; + PyObject *attr = NULL, *item = NULL; + PyObject *tstr = NULL, *shape = NULL; + PyObject *inter = NULL; + PyObject *base = NULL; PyArrayObject *ret; PyArray_Descr *type=NULL; char *data; @@ -8605,26 +9101,42 @@ PyArray_FromInterface(PyObject *input) /* Get the strides */ inter = PyObject_GetAttrString(input, "__array_interface__"); - if (inter == NULL) {PyErr_Clear(); return Py_NotImplemented;} - if (!PyDict_Check(inter)) {Py_DECREF(inter); return Py_NotImplemented;} - + if (inter == NULL) { + PyErr_Clear(); + return Py_NotImplemented; + } + if (!PyDict_Check(inter)) { + Py_DECREF(inter); + return Py_NotImplemented; + } shape = PyDict_GetItemString(inter, "shape"); - if (shape == NULL) {Py_DECREF(inter); return Py_NotImplemented;} + if (shape == NULL) { + Py_DECREF(inter); + return Py_NotImplemented; + } tstr = PyDict_GetItemString(inter, "typestr"); - if (tstr == NULL) {Py_DECREF(inter); return Py_NotImplemented;} + if (tstr == NULL) { + Py_DECREF(inter); + return Py_NotImplemented; + } attr = PyDict_GetItemString(inter, "data"); base = input; if ((attr == NULL) || (attr==Py_None) || (!PyTuple_Check(attr))) { - if (attr && (attr != Py_None)) item=attr; - else item=input; - res = PyObject_AsWriteBuffer(item, (void **)&data, - &buffer_len); + if (attr && (attr != Py_None)) { + item = attr; + } + else { + item = input; + } + res = PyObject_AsWriteBuffer(item, (void **)&data, &buffer_len); if (res < 0) { PyErr_Clear(); - res = PyObject_AsReadBuffer(item, (const void **)&data, - &buffer_len); - if (res < 0) goto fail; + res = PyObject_AsReadBuffer( + item, (const void **)&data, &buffer_len); + if (res < 0) { + goto fail; + } dataflags &= ~WRITEABLE; } attr = PyDict_GetItemString(inter, "offset"); 
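`PyArray_FromInterface`, cleaned up in the hunks that follow, consumes the `__array_interface__` dict protocol: it requires `shape` and `typestr`, and falls back to C-contiguous strides when `strides` is absent. A Python sketch of those checks (strides here count elements, whereas real NumPy strides are in bytes; offset, data-tuple, and writeability handling are omitted):

```python
def read_array_interface(obj):
    """Extract the mandatory __array_interface__ fields and derive
    default C-contiguous element strides, roughly as PyArray_FromInterface
    does before building the array."""
    inter = getattr(obj, "__array_interface__", None)
    if not isinstance(inter, dict):
        return NotImplemented          # mirrors returning Py_NotImplemented
    shape = inter.get("shape")
    typestr = inter.get("typestr")
    if shape is None or typestr is None:
        return NotImplemented
    strides = inter.get("strides")
    if strides is None:                # default: C-contiguous layout
        strides, step = [], 1
        for dim in reversed(shape):
            strides.insert(0, step)
            step *= dim
    return tuple(shape), typestr, tuple(strides)
```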
@@ -8679,7 +9191,9 @@ PyArray_FromInterface(PyObject *input) goto fail; } type = _array_typedescr_fromstr(PyString_AS_STRING(attr)); - if (type==NULL) goto fail; + if (type == NULL) { + goto fail; + } attr = shape; if (!PyTuple_Check(attr)) { PyErr_SetString(PyExc_TypeError, "shape must be a tuple"); @@ -8687,17 +9201,21 @@ PyArray_FromInterface(PyObject *input) goto fail; } n = PyTuple_GET_SIZE(attr); - for(i=0; i<n; i++) { + for (i = 0; i < n; i++) { item = PyTuple_GET_ITEM(attr, i); dims[i] = PyArray_PyIntAsIntp(item); - if (error_converting(dims[i])) break; + if (error_converting(dims[i])) { + break; + } } ret = (PyArrayObject *)PyArray_NewFromDescr(&PyArray_Type, type, n, dims, NULL, data, dataflags, NULL); - if (ret == NULL) return NULL; + if (ret == NULL) { + return NULL; + } Py_INCREF(base); ret->base = base; @@ -8716,12 +9234,16 @@ PyArray_FromInterface(PyObject *input) Py_DECREF(ret); return NULL; } - for(i=0; i<n; i++) { + for (i = 0; i < n; i++) { item = PyTuple_GET_ITEM(attr, i); strides[i] = PyArray_PyIntAsIntp(item); - if (error_converting(strides[i])) break; + if (error_converting(strides[i])) { + break; + } + } + if (PyErr_Occurred()) { + PyErr_Clear(); } - if (PyErr_Occurred()) PyErr_Clear(); memcpy(ret->strides, strides, n*sizeof(intp)); } else PyErr_Clear(); @@ -8742,35 +9264,38 @@ PyArray_FromArrayAttr(PyObject *op, PyArray_Descr *typecode, PyObject *context) PyObject *array_meth; array_meth = PyObject_GetAttrString(op, "__array__"); - if (array_meth == NULL) {PyErr_Clear(); return Py_NotImplemented;} + if (array_meth == NULL) { + PyErr_Clear(); + return Py_NotImplemented; + } if (context == NULL) { - if (typecode == NULL) new = PyObject_CallFunction(array_meth, - NULL); - else new = PyObject_CallFunction(array_meth, "O", typecode); + if (typecode == NULL) { + new = PyObject_CallFunction(array_meth, NULL); + } + else { + new = PyObject_CallFunction(array_meth, "O", typecode); + } } else { if (typecode == NULL) { - new = 
PyObject_CallFunction(array_meth, "OO", Py_None, - context); - if (new == NULL && \ - PyErr_ExceptionMatches(PyExc_TypeError)) { + new = PyObject_CallFunction(array_meth, "OO", Py_None, context); + if (new == NULL && PyErr_ExceptionMatches(PyExc_TypeError)) { PyErr_Clear(); new = PyObject_CallFunction(array_meth, ""); } } else { - new = PyObject_CallFunction(array_meth, "OO", - typecode, context); - if (new == NULL && \ - PyErr_ExceptionMatches(PyExc_TypeError)) { + new = PyObject_CallFunction(array_meth, "OO", typecode, context); + if (new == NULL && PyErr_ExceptionMatches(PyExc_TypeError)) { PyErr_Clear(); - new = PyObject_CallFunction(array_meth, "O", - typecode); + new = PyObject_CallFunction(array_meth, "O", typecode); } } } Py_DECREF(array_meth); - if (new == NULL) return NULL; + if (new == NULL) { + return NULL; + } if (!PyArray_Check(new)) { PyErr_SetString(PyExc_ValueError, "object __array__ method not " \ @@ -8781,23 +9306,27 @@ PyArray_FromArrayAttr(PyObject *op, PyArray_Descr *typecode, PyObject *context) return new; } -/* Does not check for ENSURECOPY and NOTSWAPPED in flags */ -/* Steals a reference to newtype --- which can be NULL */ -/*NUMPY_API*/ +/*NUMPY_API + * Does not check for ENSURECOPY and NOTSWAPPED in flags + * Steals a reference to newtype --- which can be NULL + */ static PyObject * PyArray_FromAny(PyObject *op, PyArray_Descr *newtype, int min_depth, int max_depth, int flags, PyObject *context) { - /* This is the main code to make a NumPy array from a Python - Object. It is called from lot's of different places which - is why there are so many checks. The comments try to - explain some of the checks. */ - - PyObject *r=NULL; + /* + * This is the main code to make a NumPy array from a Python + * object. It is called from lots of different places, which + * is why there are so many checks. The comments try to + * explain some of the checks. + */ + PyObject *r = NULL; int seq = FALSE; - /* Is input object already an array? 
*/ - /* This is where the flags are used */ + /* + * Is input object already an array? + * This is where the flags are used + */ if (PyArray_Check(op)) { r = PyArray_FromArray((PyArrayObject *)op, newtype, flags); } @@ -8821,8 +9350,7 @@ PyArray_FromAny(PyObject *op, PyArray_Descr *newtype, int min_depth, return NULL; } if (newtype != NULL || flags != 0) { - new = PyArray_FromArray((PyArrayObject *)r, newtype, - flags); + new = PyArray_FromArray((PyArrayObject *)r, newtype, flags); Py_DECREF(r); r = new; } @@ -8858,7 +9386,7 @@ PyArray_FromAny(PyObject *op, PyArray_Descr *newtype, int min_depth, PyErr_Clear(); if (isobject) { Py_INCREF(newtype); - r = ObjectArray_FromNestedList \ + r = ObjectArray_FromNestedList (op, newtype, flags & FORTRAN); seq = TRUE; Py_DECREF(newtype); @@ -8880,7 +9408,6 @@ PyArray_FromAny(PyObject *op, PyArray_Descr *newtype, int min_depth, } /* Be sure we succeed here */ - if(!PyArray_Check(r)) { PyErr_SetString(PyExc_RuntimeError, "internal error: PyArray_FromAny "\ @@ -8910,8 +9437,9 @@ PyArray_FromAny(PyObject *op, PyArray_Descr *newtype, int min_depth, return NULL; } -/* new reference -- accepts NULL for mintype*/ -/*NUMPY_API*/ +/*NUMPY_API +* new reference -- accepts NULL for mintype +*/ static PyArray_Descr * PyArray_DescrFromObject(PyObject *op, PyArray_Descr *mintype) { @@ -8919,9 +9447,8 @@ PyArray_DescrFromObject(PyObject *op, PyArray_Descr *mintype) } /*NUMPY_API - Return the typecode of the array a Python object would be converted - to -*/ + * Return the typecode of the array a Python object would be converted to + */ static int PyArray_ObjectType(PyObject *op, int minimum_type) { @@ -8930,7 +9457,9 @@ PyArray_ObjectType(PyObject *op, int minimum_type) int ret; intype = PyArray_DescrFromType(minimum_type); - if (intype == NULL) PyErr_Clear(); + if (intype == NULL) { + PyErr_Clear(); + } outtype = _array_find_type(op, intype, MAX_DIMS); ret = outtype->type_num; Py_DECREF(outtype); @@ -8939,56 +9468,57 @@ 
PyArray_ObjectType(PyObject *op, int minimum_type) } -/* flags is any of - CONTIGUOUS, - FORTRAN, - ALIGNED, - WRITEABLE, - NOTSWAPPED, - ENSURECOPY, - UPDATEIFCOPY, - FORCECAST, - ENSUREARRAY, - ELEMENTSTRIDES - - or'd (|) together - - Any of these flags present means that the returned array should - guarantee that aspect of the array. Otherwise the returned array - won't guarantee it -- it will depend on the object as to whether or - not it has such features. - - Note that ENSURECOPY is enough - to guarantee CONTIGUOUS, ALIGNED and WRITEABLE - and therefore it is redundant to include those as well. - - BEHAVED == ALIGNED | WRITEABLE - CARRAY = CONTIGUOUS | BEHAVED - FARRAY = FORTRAN | BEHAVED - - FORTRAN can be set in the FLAGS to request a FORTRAN array. - Fortran arrays are always behaved (aligned, - notswapped, and writeable) and not (C) CONTIGUOUS (if > 1d). - - UPDATEIFCOPY flag sets this flag in the returned array if a copy is - made and the base argument points to the (possibly) misbehaved array. - When the new array is deallocated, the original array held in base - is updated with the contents of the new array. - - FORCECAST will cause a cast to occur regardless of whether or not - it is safe. -*/ - +/* + * flags is any of + * CONTIGUOUS, + * FORTRAN, + * ALIGNED, + * WRITEABLE, + * NOTSWAPPED, + * ENSURECOPY, + * UPDATEIFCOPY, + * FORCECAST, + * ENSUREARRAY, + * ELEMENTSTRIDES + * + * or'd (|) together + * + * Any of these flags present means that the returned array should + * guarantee that aspect of the array. Otherwise the returned array + * won't guarantee it -- it will depend on the object as to whether or + * not it has such features. + * + * Note that ENSURECOPY is enough + * to guarantee CONTIGUOUS, ALIGNED and WRITEABLE + * and therefore it is redundant to include those as well. 
+ * + * BEHAVED == ALIGNED | WRITEABLE + * CARRAY = CONTIGUOUS | BEHAVED + * FARRAY = FORTRAN | BEHAVED + * + * FORTRAN can be set in the FLAGS to request a FORTRAN array. + * Fortran arrays are always behaved (aligned, + * notswapped, and writeable) and not (C) CONTIGUOUS (if > 1d). + * + * UPDATEIFCOPY flag sets this flag in the returned array if a copy is + * made and the base argument points to the (possibly) misbehaved array. + * When the new array is deallocated, the original array held in base + * is updated with the contents of the new array. + * + * FORCECAST will cause a cast to occur regardless of whether or not + * it is safe. + */ -/* steals a reference to descr -- accepts NULL */ -/*NUMPY_API*/ +/*NUMPY_API + * steals a reference to descr -- accepts NULL + */ static PyObject * PyArray_CheckFromAny(PyObject *op, PyArray_Descr *descr, int min_depth, int max_depth, int requires, PyObject *context) { PyObject *obj; if (requires & NOTSWAPPED) { - if (!descr && PyArray_Check(op) && \ + if (!descr && PyArray_Check(op) && !PyArray_ISNBO(PyArray_DESCR(op)->byteorder)) { descr = PyArray_DescrNew(PyArray_DESCR(op)); } @@ -9000,9 +9530,10 @@ PyArray_CheckFromAny(PyObject *op, PyArray_Descr *descr, int min_depth, } } - obj = PyArray_FromAny(op, descr, min_depth, max_depth, - requires, context); - if (obj == NULL) return NULL; + obj = PyArray_FromAny(op, descr, min_depth, max_depth, requires, context); + if (obj == NULL) { + return NULL; + } if ((requires & ELEMENTSTRIDES) && !PyArray_ElementStrides(obj)) { PyObject *new; @@ -9013,25 +9544,25 @@ PyArray_CheckFromAny(PyObject *op, PyArray_Descr *descr, int min_depth, return obj; } -/* This is a quick wrapper around PyArray_FromAny(op, NULL, 0, 0, - ENSUREARRAY) */ -/* that special cases Arrays and PyArray_Scalars up front */ -/* It *steals a reference* to the object */ -/* It also guarantees that the result is PyArray_Type */ - -/* Because it decrefs op if any conversion needs to take place - so it can be used like 
PyArray_EnsureArray(some_function(...)) */ - -/*NUMPY_API*/ +/*NUMPY_API + * This is a quick wrapper around PyArray_FromAny(op, NULL, 0, 0, ENSUREARRAY) + * that special cases Arrays and PyArray_Scalars up front + * It *steals a reference* to the object + * It also guarantees that the result is PyArray_Type + * Because it decrefs op if any conversion needs to take place + * so it can be used like PyArray_EnsureArray(some_function(...)) + */ static PyObject * PyArray_EnsureArray(PyObject *op) { PyObject *new; - if (op == NULL) return NULL; - - if (PyArray_CheckExact(op)) return op; - + if (op == NULL) { + return NULL; + } + if (PyArray_CheckExact(op)) { + return op; + } if (PyArray_Check(op)) { new = PyArray_View((PyArrayObject *)op, NULL, &PyArray_Type); Py_DECREF(op); @@ -9051,25 +9582,36 @@ PyArray_EnsureArray(PyObject *op) static PyObject * PyArray_EnsureAnyArray(PyObject *op) { - if (op && PyArray_Check(op)) return op; + if (op && PyArray_Check(op)) { + return op; + } return PyArray_EnsureArray(op); } /*NUMPY_API - Check the type coercion rules. -*/ + *Check the type coercion rules. 
+ */ static int PyArray_CanCastSafely(int fromtype, int totype) { PyArray_Descr *from, *to; register int felsize, telsize; - if (fromtype == totype) return 1; - if (fromtype == PyArray_BOOL) return 1; - if (totype == PyArray_BOOL) return 0; - if (totype == PyArray_OBJECT || totype == PyArray_VOID) return 1; - if (fromtype == PyArray_OBJECT || fromtype == PyArray_VOID) return 0; - + if (fromtype == totype) { + return 1; + } + if (fromtype == PyArray_BOOL) { + return 1; + } + if (totype == PyArray_BOOL) { + return 0; + } + if (totype == PyArray_OBJECT || totype == PyArray_VOID) { + return 1; + } + if (fromtype == PyArray_OBJECT || fromtype == PyArray_VOID) { + return 0; + } from = PyArray_DescrFromType(fromtype); /* * cancastto is a PyArray_NOTYPE terminated C-int-array of types that @@ -9079,11 +9621,14 @@ PyArray_CanCastSafely(int fromtype, int totype) int *curtype; curtype = from->f->cancastto; while (*curtype != PyArray_NOTYPE) { - if (*curtype++ == totype) return 1; + if (*curtype++ == totype) { + return 1; + } } } - if (PyTypeNum_ISUSERDEF(totype)) return 0; - + if (PyTypeNum_ISUSERDEF(totype)) { + return 0; + } to = PyArray_DescrFromType(totype); telsize = to->elsize; felsize = from->elsize; @@ -9091,79 +9636,94 @@ PyArray_CanCastSafely(int fromtype, int totype) Py_DECREF(to); switch(fromtype) { - case PyArray_BYTE: - case PyArray_SHORT: - case PyArray_INT: - case PyArray_LONG: - case PyArray_LONGLONG: - if (PyTypeNum_ISINTEGER(totype)) { - if (PyTypeNum_ISUNSIGNED(totype)) { - return 0; + case PyArray_BYTE: + case PyArray_SHORT: + case PyArray_INT: + case PyArray_LONG: + case PyArray_LONGLONG: + if (PyTypeNum_ISINTEGER(totype)) { + if (PyTypeNum_ISUNSIGNED(totype)) { + return 0; + } + else { + return telsize >= felsize; + } + } + else if (PyTypeNum_ISFLOAT(totype)) { + if (felsize < 8) { + return telsize > felsize; + } + else { + return telsize >= felsize; + } + } + else if (PyTypeNum_ISCOMPLEX(totype)) { + if (felsize < 8) { + return (telsize >> 1) > 
felsize; + } + else { + return (telsize >> 1) >= felsize; + } } else { - return (telsize >= felsize); + return totype > fromtype; } - } - else if (PyTypeNum_ISFLOAT(totype)) { - if (felsize < 8) - return (telsize > felsize); - else - return (telsize >= felsize); - } - else if (PyTypeNum_ISCOMPLEX(totype)) { - if (felsize < 8) - return ((telsize >> 1) > felsize); - else - return ((telsize >> 1) >= felsize); - } - else return totype > fromtype; - case PyArray_UBYTE: - case PyArray_USHORT: - case PyArray_UINT: - case PyArray_ULONG: - case PyArray_ULONGLONG: - if (PyTypeNum_ISINTEGER(totype)) { - if (PyTypeNum_ISSIGNED(totype)) { - return (telsize > felsize); + case PyArray_UBYTE: + case PyArray_USHORT: + case PyArray_UINT: + case PyArray_ULONG: + case PyArray_ULONGLONG: + if (PyTypeNum_ISINTEGER(totype)) { + if (PyTypeNum_ISSIGNED(totype)) { + return telsize > felsize; + } + else { + return telsize >= felsize; + } + } + else if (PyTypeNum_ISFLOAT(totype)) { + if (felsize < 8) { + return telsize > felsize; + } + else { + return telsize >= felsize; + } + } + else if (PyTypeNum_ISCOMPLEX(totype)) { + if (felsize < 8) { + return (telsize >> 1) > felsize; + } + else { + return (telsize >> 1) >= felsize; + } } else { - return (telsize >= felsize); + return totype > fromtype; } - } - else if (PyTypeNum_ISFLOAT(totype)) { - if (felsize < 8) - return (telsize > felsize); - else - return (telsize >= felsize); - } - else if (PyTypeNum_ISCOMPLEX(totype)) { - if (felsize < 8) - return ((telsize >> 1) > felsize); - else - return ((telsize >> 1) >= felsize); - } - else return totype > fromtype; - case PyArray_FLOAT: - case PyArray_DOUBLE: - case PyArray_LONGDOUBLE: - if (PyTypeNum_ISCOMPLEX(totype)) - return ((telsize >> 1) >= felsize); - else - return (totype > fromtype); - case PyArray_CFLOAT: - case PyArray_CDOUBLE: - case PyArray_CLONGDOUBLE: - return (totype > fromtype); - case PyArray_STRING: - case PyArray_UNICODE: - return (totype > fromtype); - default: - return 0; + case 
PyArray_FLOAT: + case PyArray_DOUBLE: + case PyArray_LONGDOUBLE: + if (PyTypeNum_ISCOMPLEX(totype)) { + return (telsize >> 1) >= felsize; + } + else { + return totype > fromtype; + } + case PyArray_CFLOAT: + case PyArray_CDOUBLE: + case PyArray_CLONGDOUBLE: + return totype > fromtype; + case PyArray_STRING: + case PyArray_UNICODE: + return totype > fromtype; + default: + return 0; } } -/* leaves reference count alone --- cannot be NULL*/ -/*NUMPY_API*/ +/*NUMPY_API + * leaves reference count alone --- cannot be NULL + */ static Bool PyArray_CanCastTo(PyArray_Descr *from, PyArray_Descr *to) { @@ -9172,14 +9732,14 @@ PyArray_CanCastTo(PyArray_Descr *from, PyArray_Descr *to) Bool ret; ret = (Bool) PyArray_CanCastSafely(fromtype, totype); - if (ret) { /* Check String and Unicode more closely */ + if (ret) { + /* Check String and Unicode more closely */ if (fromtype == PyArray_STRING) { if (totype == PyArray_STRING) { ret = (from->elsize <= to->elsize); } else if (totype == PyArray_UNICODE) { - ret = (from->elsize << 2 \ - <= to->elsize); + ret = (from->elsize << 2 <= to->elsize); } } else if (fromtype == PyArray_UNICODE) { @@ -9187,17 +9747,18 @@ PyArray_CanCastTo(PyArray_Descr *from, PyArray_Descr *to) ret = (from->elsize <= to->elsize); } } - /* TODO: If totype is STRING or unicode - see if the length is long enough to hold the - stringified value of the object. - */ + /* + * TODO: If totype is STRING or unicode + * see if the length is long enough to hold the + * stringified value of the object. + */ } return ret; } /*NUMPY_API - See if array scalars can be cast. -*/ + * See if array scalars can be cast. 
+ */ static Bool PyArray_CanCastScalar(PyTypeObject *from, PyTypeObject *to) { @@ -9206,8 +9767,9 @@ PyArray_CanCastScalar(PyTypeObject *from, PyTypeObject *to) fromtype = _typenum_fromtypeobj((PyObject *)from, 0); totype = _typenum_fromtypeobj((PyObject *)to, 0); - if (fromtype == PyArray_NOTYPE || totype == PyArray_NOTYPE) + if (fromtype == PyArray_NOTYPE || totype == PyArray_NOTYPE) { return FALSE; + } return (Bool) PyArray_CanCastSafely(fromtype, totype); } @@ -9217,8 +9779,8 @@ PyArray_CanCastScalar(PyTypeObject *from, PyTypeObject *to) /* and Python's array iterator ***/ /*NUMPY_API - Get Iterator. -*/ + * Get Iterator. + */ static PyObject * PyArray_IterNew(PyObject *obj) { @@ -9234,26 +9796,29 @@ PyArray_IterNew(PyObject *obj) it = (PyArrayIterObject *)_pya_malloc(sizeof(PyArrayIterObject)); PyObject_Init((PyObject *)it, &PyArrayIter_Type); /* it = PyObject_New(PyArrayIterObject, &PyArrayIter_Type);*/ - if (it == NULL) + if (it == NULL) { return NULL; - + } nd = ao->nd; PyArray_UpdateFlags(ao, CONTIGUOUS); - if PyArray_ISCONTIGUOUS(ao) it->contiguous = 1; - else it->contiguous = 0; + if (PyArray_ISCONTIGUOUS(ao)) { + it->contiguous = 1; + } + else { + it->contiguous = 0; + } Py_INCREF(ao); it->ao = ao; it->size = PyArray_SIZE(ao); it->nd_m1 = nd - 1; it->factors[nd-1] = 1; - for(i=0; i < nd; i++) { + for (i = 0; i < nd; i++) { it->dims_m1[i] = ao->dimensions[i] - 1; it->strides[i] = ao->strides[i]; - it->backstrides[i] = it->strides[i] * \ - it->dims_m1[i]; - if (i > 0) - it->factors[nd-i-1] = it->factors[nd-i] * \ - ao->dimensions[nd-i]; + it->backstrides[i] = it->strides[i] * it->dims_m1[i]; + if (i > 0) { + it->factors[nd-i-1] = it->factors[nd-i] * ao->dimensions[nd-i]; + } } PyArray_ITER_RESET(it); @@ -9261,8 +9826,8 @@ PyArray_IterNew(PyObject *obj) } /*NUMPY_API - Get Iterator broadcast to a particular shape -*/ + * Get Iterator broadcast to a particular shape + */ static PyObject * PyArray_BroadcastToShape(PyObject *obj, intp *dims, int nd) { @@ 
-9270,51 +9835,57 @@ PyArray_BroadcastToShape(PyObject *obj, intp *dims, int nd) int i, diff, j, compat, k; PyArrayObject *ao = (PyArrayObject *)obj; - if (ao->nd > nd) goto err; + if (ao->nd > nd) { + goto err; + } compat = 1; diff = j = nd - ao->nd; - for(i=0; i<ao->nd; i++, j++) { - if (ao->dimensions[i] == 1) continue; + for (i = 0; i < ao->nd; i++, j++) { + if (ao->dimensions[i] == 1) { + continue; + } if (ao->dimensions[i] != dims[j]) { compat = 0; break; } } - if (!compat) goto err; - + if (!compat) { + goto err; + } it = (PyArrayIterObject *)_pya_malloc(sizeof(PyArrayIterObject)); PyObject_Init((PyObject *)it, &PyArrayIter_Type); - if (it == NULL) + if (it == NULL) { return NULL; - + } PyArray_UpdateFlags(ao, CONTIGUOUS); - if PyArray_ISCONTIGUOUS(ao) it->contiguous = 1; - else it->contiguous = 0; + if (PyArray_ISCONTIGUOUS(ao)) { + it->contiguous = 1; + } + else { + it->contiguous = 0; + } Py_INCREF(ao); it->ao = ao; it->size = PyArray_MultiplyList(dims, nd); it->nd_m1 = nd - 1; it->factors[nd-1] = 1; - for(i=0; i < nd; i++) { + for (i = 0; i < nd; i++) { it->dims_m1[i] = dims[i] - 1; k = i - diff; - if ((k < 0) || - ao->dimensions[k] != dims[i]) { + if ((k < 0) || ao->dimensions[k] != dims[i]) { it->contiguous = 0; it->strides[i] = 0; } else { it->strides[i] = ao->strides[k]; } - it->backstrides[i] = it->strides[i] * \ - it->dims_m1[i]; - if (i > 0) - it->factors[nd-i-1] = it->factors[nd-i] * \ - dims[nd-i]; + it->backstrides[i] = it->strides[i] * it->dims_m1[i]; + if (i > 0) { + it->factors[nd-i-1] = it->factors[nd-i] * dims[nd-i]; + } } PyArray_ITER_RESET(it); - return (PyObject *)it; err: @@ -9328,29 +9899,31 @@ PyArray_BroadcastToShape(PyObject *obj, intp *dims, int nd) /*NUMPY_API - Get Iterator that iterates over all but one axis (don't use this with - PyArray_ITER_GOTO1D). The axis will be over-written if negative - with the axis having the smallest stride. 
-*/ + * Get Iterator that iterates over all but one axis (don't use this with + * PyArray_ITER_GOTO1D). The axis will be over-written if negative + * with the axis having the smallest stride. + */ static PyObject * PyArray_IterAllButAxis(PyObject *obj, int *inaxis) { PyArrayIterObject *it; int axis; it = (PyArrayIterObject *)PyArray_IterNew(obj); - if (it == NULL) return NULL; - - if (PyArray_NDIM(obj)==0) + if (it == NULL) { + return NULL; + } + if (PyArray_NDIM(obj)==0) { return (PyObject *)it; + } if (*inaxis < 0) { - int i, minaxis=0; - intp minstride=0; + int i, minaxis = 0; + intp minstride = 0; i = 0; - while (minstride==0 && i<PyArray_NDIM(obj)) { + while (minstride == 0 && i < PyArray_NDIM(obj)) { minstride = PyArray_STRIDE(obj,i); i++; } - for(i=1; i<PyArray_NDIM(obj); i++) { + for (i = 1; i < PyArray_NDIM(obj); i++) { if (PyArray_STRIDE(obj,i) > 0 && PyArray_STRIDE(obj, i) < minstride) { minaxis = i; @@ -9368,21 +9941,21 @@ PyArray_IterAllButAxis(PyObject *obj, int *inaxis) it->dims_m1[axis] = 0; it->backstrides[axis] = 0; - /* (won't fix factors so don't use - PyArray_ITER_GOTO1D with this iterator) */ + /* + * (won't fix factors so don't use + * PyArray_ITER_GOTO1D with this iterator) + */ return (PyObject *)it; } - -/* don't use with PyArray_ITER_GOTO1D because factors are not - adjusted */ - /*NUMPY_API - Adjusts previously broadcasted iterators so that the axis with - the smallest sum of iterator strides is not iterated over. - Returns dimension which is smallest in the range [0,multi->nd). - A -1 is returned if multi->nd == 0. -*/ + * Adjusts previously broadcasted iterators so that the axis with + * the smallest sum of iterator strides is not iterated over. + * Returns dimension which is smallest in the range [0,multi->nd). + * A -1 is returned if multi->nd == 0. 
+ * + * don't use with PyArray_ITER_GOTO1D because factors are not adjusted + */ static int PyArray_RemoveSmallest(PyArrayMultiIterObject *multi) { @@ -9392,34 +9965,33 @@ PyArray_RemoveSmallest(PyArrayMultiIterObject *multi) intp smallest; intp sumstrides[NPY_MAXDIMS]; - if (multi->nd == 0) return -1; - - - for(i=0; i<multi->nd; i++) { + if (multi->nd == 0) { + return -1; + } + for (i = 0; i < multi->nd; i++) { sumstrides[i] = 0; - for(j=0; j<multi->numiter; j++) { + for (j = 0; j < multi->numiter; j++) { sumstrides[i] += multi->iters[j]->strides[i]; } } - axis=0; + axis = 0; smallest = sumstrides[0]; /* Find longest dimension */ - for(i=1; i<multi->nd; i++) { + for (i = 1; i < multi->nd; i++) { if (sumstrides[i] < smallest) { axis = i; smallest = sumstrides[i]; } } - - for(i=0; i<multi->numiter; i++) { + for(i = 0; i < multi->numiter; i++) { it = multi->iters[i]; it->contiguous = 0; - if (it->size != 0) + if (it->size != 0) { it->size /= (it->dims_m1[axis]+1); + } it->dims_m1[axis] = 0; it->backstrides[axis] = 0; } - multi->size = multi->iters[0]->size; return axis; } @@ -9457,7 +10029,7 @@ static PyObject * iter_subscript_Bool(PyArrayIterObject *self, PyArrayObject *ind) { int index, strides, itemsize; - intp count=0; + intp count = 0; char *dptr, *optr; PyObject *r; int swap; @@ -9479,9 +10051,10 @@ iter_subscript_Bool(PyArrayIterObject *self, PyArrayObject *ind) strides = ind->strides[0]; dptr = ind->data; /* Get size of return array */ - while(index--) { - if (*((Bool *)dptr) != 0) + while (index--) { + if (*((Bool *)dptr) != 0) { count++; + } dptr += strides; } itemsize = self->ao->descr->elsize; @@ -9490,17 +10063,17 @@ iter_subscript_Bool(PyArrayIterObject *self, PyArrayObject *ind) self->ao->descr, 1, &count, NULL, NULL, 0, (PyObject *)self->ao); - if (r==NULL) return NULL; - + if (r == NULL) { + return NULL; + } /* Set up loop */ optr = PyArray_DATA(r); index = ind->dimensions[0]; dptr = ind->data; - copyswap = self->ao->descr->f->copyswap; /* Loop over 
Boolean array */ swap = (PyArray_ISNOTSWAPPED(self->ao) != PyArray_ISNOTSWAPPED(r)); - while(index--) { + while (index--) { if (*((Bool *)dptr) != 0) { copyswap(optr, self->dataptr, swap, self->ao); optr += itemsize; @@ -9527,7 +10100,9 @@ iter_subscript_int(PyArrayIterObject *self, PyArrayObject *ind) itemsize = self->ao->descr->elsize; if (ind->nd == 0) { num = *((intp *)ind->data); - if (num < 0) num += self->size; + if (num < 0) { + num += self->size; + } if (num < 0 || num >= self->size) { PyErr_Format(PyExc_IndexError, "index %d out of bounds" \ @@ -9548,17 +10123,23 @@ iter_subscript_int(PyArrayIterObject *self, PyArrayObject *ind) ind->nd, ind->dimensions, NULL, NULL, 0, (PyObject *)self->ao); - if (r==NULL) return NULL; - + if (r == NULL) { + return NULL; + } optr = PyArray_DATA(r); ind_it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)ind); - if (ind_it == NULL) {Py_DECREF(r); return NULL;} + if (ind_it == NULL) { + Py_DECREF(r); + return NULL; + } index = ind_it->size; copyswap = PyArray_DESCR(r)->f->copyswap; swap = (PyArray_ISNOTSWAPPED(r) != PyArray_ISNOTSWAPPED(self->ao)); - while(index--) { + while (index--) { num = *((intp *)(ind_it->dataptr)); - if (num < 0) num += self->size; + if (num < 0) { + num += self->size; + } if (num < 0 || num >= self->size) { PyErr_Format(PyExc_IndexError, "index %d out of bounds" \ @@ -9583,7 +10164,7 @@ iter_subscript_int(PyArrayIterObject *self, PyArrayObject *ind) static PyObject * iter_subscript(PyArrayIterObject *self, PyObject *ind) { - PyArray_Descr *indtype=NULL; + PyArray_Descr *indtype = NULL; intp start, step_size; intp n_steps; PyObject *r; @@ -9601,7 +10182,9 @@ iter_subscript(PyArrayIterObject *self, PyObject *ind) if (PyTuple_Check(ind)) { int len; len = PyTuple_GET_SIZE(ind); - if (len > 1) goto fail; + if (len > 1) { + goto fail; + } if (len == 0) { Py_INCREF(self->ao); return (PyObject *)self->ao; @@ -9609,12 +10192,11 @@ iter_subscript(PyArrayIterObject *self, PyObject *ind) ind = 
PyTuple_GET_ITEM(ind, 0); } - /* Tuples >1d not accepted --- i.e. no newaxis */ - /* Could implement this with adjusted strides - and dimensions in iterator */ - - /* Check for Boolean -- this is first becasue - Bool is a subclass of Int */ + /* + * Tuples >1d not accepted --- i.e. no newaxis + * Could implement this with adjusted strides and dimensions in iterator + * Check for Boolean -- this is first because Bool is a subclass of Int + */ PyArray_ITER_RESET(self); if (PyBool_Check(ind)) { @@ -9634,12 +10216,12 @@ iter_subscript(PyArrayIterObject *self, PyObject *ind) } /* Check for Integer or Slice */ - if (PyLong_Check(ind) || PyInt_Check(ind) || PySlice_Check(ind)) { start = parse_subindex(ind, &step_size, &n_steps, self->size); - if (start == -1) + if (start == -1) { goto fail; + } if (n_steps == RubberIndex || n_steps == PseudoIndex) { PyErr_SetString(PyExc_IndexError, "cannot use Ellipsis or newaxes here"); @@ -9658,10 +10240,12 @@ iter_subscript(PyArrayIterObject *self, PyObject *ind) 1, &n_steps, NULL, NULL, 0, (PyObject *)self->ao); - if (r==NULL) goto fail; + if (r == NULL) { + goto fail; + } dptr = PyArray_DATA(r); copyswap = PyArray_DESCR(r)->f->copyswap; - while(n_steps--) { + while (n_steps--) { copyswap(dptr, self->dataptr, 0, r); start += step_size; PyArray_ITER_GOTO1D(self, start) @@ -9672,12 +10256,13 @@ iter_subscript(PyArrayIterObject *self, PyObject *ind) } /* convert to INTP array if Integer array scalar or List */ - indtype = PyArray_DescrFromType(PyArray_INTP); if (PyArray_IsScalar(ind, Integer) || PyList_Check(ind)) { Py_INCREF(indtype); obj = PyArray_FromAny(ind, indtype, 0, 0, FORCECAST, NULL); - if (obj == NULL) goto fail; + if (obj == NULL) { + goto fail; + } } else { Py_INCREF(ind); @@ -9695,7 +10280,9 @@ iter_subscript(PyArrayIterObject *self, PyObject *ind) PyObject *new; new = PyArray_FromAny(obj, indtype, 0, 0, FORCECAST | ALIGNED, NULL); - if (new==NULL) goto fail; + if (new == NULL) { + goto fail; + } Py_DECREF(obj); obj = new; 
r = iter_subscript_int(self, (PyArrayObject *)obj); @@ -9706,12 +10293,15 @@ iter_subscript(PyArrayIterObject *self, PyObject *ind) Py_DECREF(obj); return r; } - else Py_DECREF(indtype); + else { + Py_DECREF(indtype); + } fail: - if (!PyErr_Occurred()) + if (!PyErr_Occurred()) { PyErr_SetString(PyExc_IndexError, "unsupported iterator index"); + } Py_XDECREF(indtype); Py_XDECREF(obj); return NULL; @@ -9745,12 +10335,13 @@ iter_ass_sub_Bool(PyArrayIterObject *self, PyArrayObject *ind, PyArray_ITER_RESET(self); /* Loop over Boolean array */ copyswap = self->ao->descr->f->copyswap; - while(index--) { + while (index--) { if (*((Bool *)dptr) != 0) { copyswap(self->dataptr, val->dataptr, swap, self->ao); PyArray_ITER_NEXT(val); - if (val->index==val->size) + if (val->index == val->size) { PyArray_ITER_RESET(val); + } } dptr += strides; PyArray_ITER_NEXT(self); @@ -9778,11 +10369,15 @@ iter_ass_sub_int(PyArrayIterObject *self, PyArrayObject *ind, return 0; } ind_it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)ind); - if (ind_it == NULL) return -1; + if (ind_it == NULL) { + return -1; + } index = ind_it->size; - while(index--) { + while (index--) { num = *((intp *)(ind_it->dataptr)); - if (num < 0) num += self->size; + if (num < 0) { + num += self->size; + } if ((num < 0) || (num >= self->size)) { PyErr_Format(PyExc_IndexError, "index %d out of bounds" \ @@ -9795,8 +10390,9 @@ iter_ass_sub_int(PyArrayIterObject *self, PyArrayObject *ind, copyswap(self->dataptr, val->dataptr, swap, self->ao); PyArray_ITER_NEXT(ind_it); PyArray_ITER_NEXT(val); - if (val->index == val->size) + if (val->index == val->size) { PyArray_ITER_RESET(val); + } } Py_DECREF(ind_it); return 0; @@ -9805,14 +10401,14 @@ iter_ass_sub_int(PyArrayIterObject *self, PyArrayObject *ind, static int iter_ass_subscript(PyArrayIterObject *self, PyObject *ind, PyObject *val) { - PyObject *arrval=NULL; - PyArrayIterObject *val_it=NULL; + PyObject *arrval = NULL; + PyArrayIterObject *val_it = NULL; PyArray_Descr 
*type; - PyArray_Descr *indtype=NULL; - int swap, retval=-1; + PyArray_Descr *indtype = NULL; + int swap, retval = -1; intp start, step_size; intp n_steps; - PyObject *obj=NULL; + PyObject *obj = NULL; PyArray_CopySwapFunc *copyswap; @@ -9826,15 +10422,18 @@ iter_ass_subscript(PyArrayIterObject *self, PyObject *ind, PyObject *val) if (PyTuple_Check(ind)) { int len; len = PyTuple_GET_SIZE(ind); - if (len > 1) goto finish; + if (len > 1) { + goto finish; + } ind = PyTuple_GET_ITEM(ind, 0); } type = self->ao->descr; - /* Check for Boolean -- this is first becasue - Bool is a subclass of Int */ - + /* + * Check for Boolean -- this is first because + * Bool is a subclass of Int + */ if (PyBool_Check(ind)) { retval = 0; if (PyObject_IsTrue(ind)) { @@ -9843,9 +10442,13 @@ iter_ass_subscript(PyArrayIterObject *self, PyObject *ind, PyObject *val) goto finish; } - if (PySequence_Check(ind) || PySlice_Check(ind)) goto skip; + if (PySequence_Check(ind) || PySlice_Check(ind)) { + goto skip; + } start = PyArray_PyIntAsIntp(ind); - if (start==-1 && PyErr_Occurred()) PyErr_Clear(); + if (start==-1 && PyErr_Occurred()) { + PyErr_Clear(); + } else { if (start < -self->size || start >= self->size) { PyErr_Format(PyExc_ValueError, @@ -9867,41 +10470,48 @@ iter_ass_subscript(PyArrayIterObject *self, PyObject *ind, PyObject *val) skip: Py_INCREF(type); arrval = PyArray_FromAny(val, type, 0, 0, 0, NULL); - if (arrval==NULL) return -1; + if (arrval == NULL) { + return -1; + } val_it = (PyArrayIterObject *)PyArray_IterNew(arrval); - if (val_it==NULL) goto finish; - if (val_it->size == 0) {retval = 0; goto finish;} + if (val_it == NULL) { + goto finish; + } + if (val_it->size == 0) { + retval = 0; + goto finish; + } copyswap = PyArray_DESCR(arrval)->f->copyswap; swap = (PyArray_ISNOTSWAPPED(self->ao)!=PyArray_ISNOTSWAPPED(arrval)); /* Check Slice */ - if (PySlice_Check(ind)) { - start = parse_subindex(ind, &step_size, &n_steps, - self->size); - if (start == -1) goto finish; + start = 
parse_subindex(ind, &step_size, &n_steps, self->size); + if (start == -1) { + goto finish; + } if (n_steps == RubberIndex || n_steps == PseudoIndex) { PyErr_SetString(PyExc_IndexError, "cannot use Ellipsis or newaxes here"); goto finish; } PyArray_ITER_GOTO1D(self, start); - if (n_steps == SingleIndex) { /* Integer */ - copyswap(self->dataptr, PyArray_DATA(arrval), - swap, arrval); + if (n_steps == SingleIndex) { + /* Integer */ + copyswap(self->dataptr, PyArray_DATA(arrval), swap, arrval); PyArray_ITER_RESET(self); - retval=0; + retval = 0; goto finish; } - while(n_steps--) { - copyswap(self->dataptr, val_it->dataptr, - swap, arrval); + while (n_steps--) { + copyswap(self->dataptr, val_it->dataptr, swap, arrval); start += step_size; - PyArray_ITER_GOTO1D(self, start) - PyArray_ITER_NEXT(val_it); - if (val_it->index == val_it->size) + PyArray_ITER_GOTO1D(self, start); + PyArray_ITER_NEXT(val_it); + if (val_it->index == val_it->size) { PyArray_ITER_RESET(val_it); + } } PyArray_ITER_RESET(self); retval = 0; @@ -9909,7 +10519,6 @@ iter_ass_subscript(PyArrayIterObject *self, PyObject *ind, PyObject *val) } /* convert to INTP array if Integer array scalar or List */ - indtype = PyArray_DescrFromType(PyArray_INTP); if (PyList_Check(ind)) { Py_INCREF(indtype); @@ -9924,8 +10533,9 @@ iter_ass_subscript(PyArrayIterObject *self, PyObject *ind, PyObject *val) /* Check for Boolean object */ if (PyArray_TYPE(obj)==PyArray_BOOL) { if (iter_ass_sub_Bool(self, (PyArrayObject *)obj, - val_it, swap) < 0) + val_it, swap) < 0) { goto finish; + } retval=0; } /* Check for integer array */ @@ -9936,18 +10546,21 @@ iter_ass_subscript(PyArrayIterObject *self, PyObject *ind, PyObject *val) FORCECAST | BEHAVED_NS, NULL); Py_DECREF(obj); obj = new; - if (new==NULL) goto finish; + if (new == NULL) { + goto finish; + } if (iter_ass_sub_int(self, (PyArrayObject *)obj, - val_it, swap) < 0) + val_it, swap) < 0) { goto finish; - retval=0; + } + retval = 0; } } finish: - if (!PyErr_Occurred() && 
retval < 0) - PyErr_SetString(PyExc_IndexError, - "unsupported iterator index"); + if (!PyErr_Occurred() && retval < 0) { + PyErr_SetString(PyExc_IndexError, "unsupported iterator index"); + } Py_XDECREF(indtype); Py_XDECREF(obj); Py_XDECREF(val_it); @@ -9979,13 +10592,12 @@ iter_array(PyArrayIterObject *it, PyObject *NPY_UNUSED(op)) /* Any argument ignored */ /* Two options: - 1) underlying array is contiguous - -- return 1-d wrapper around it - 2) underlying array is not contiguous - -- make new 1-d contiguous array with updateifcopy flag set - to copy back to the old array - */ - + * 1) underlying array is contiguous + * -- return 1-d wrapper around it + * 2) underlying array is not contiguous + * -- make new 1-d contiguous array with updateifcopy flag set + * to copy back to the old array + */ size = PyArray_SIZE(it->ao); Py_INCREF(it->ao->descr); if (PyArray_ISCONTIGUOUS(it->ao)) { @@ -9995,7 +10607,9 @@ iter_array(PyArrayIterObject *it, PyObject *NPY_UNUSED(op)) NULL, it->ao->data, it->ao->flags, (PyObject *)it->ao); - if (r==NULL) return NULL; + if (r == NULL) { + return NULL; + } } else { r = PyArray_NewFromDescr(&PyArray_Type, @@ -10003,7 +10617,9 @@ iter_array(PyArrayIterObject *it, PyObject *NPY_UNUSED(op)) 1, &size, NULL, NULL, 0, (PyObject *)it->ao); - if (r==NULL) return NULL; + if (r == NULL) { + return NULL; + } if (_flat_copyinto(r, (PyObject *)it->ao, PyArray_CORDER) < 0) { Py_DECREF(r); @@ -10021,7 +10637,9 @@ iter_array(PyArrayIterObject *it, PyObject *NPY_UNUSED(op)) static PyObject * iter_copy(PyArrayIterObject *it, PyObject *args) { - if (!PyArg_ParseTuple(args, "")) return NULL; + if (!PyArg_ParseTuple(args, "")) { + return NULL; + } return PyArray_Flatten(it->ao, 0); } @@ -10038,7 +10656,9 @@ iter_richcompare(PyArrayIterObject *self, PyObject *other, int cmp_op) PyArrayObject *new; PyObject *ret; new = (PyArrayObject *)iter_array(self, NULL); - if (new == NULL) return NULL; + if (new == NULL) { + return NULL; + } ret = 
array_richcompare(new, other, cmp_op); Py_DECREF(new); return ret; @@ -10056,12 +10676,15 @@ iter_coords_get(PyArrayIterObject *self) { int nd; nd = self->ao->nd; - if (self->contiguous) { /* coordinates not kept track of --- need to generate - from index */ + if (self->contiguous) { + /* + * coordinates not kept track of --- + * need to generate from index + */ intp val; int i; val = self->index; - for(i=0;i<nd; i++) { + for (i = 0; i < nd; i++) { self->coordinates[i] = val / self->factors[i]; val = val % self->factors[i]; } @@ -10078,60 +10701,60 @@ static PyGetSetDef iter_getsets[] = { static PyTypeObject PyArrayIter_Type = { PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ - "numpy.flatiter", /* tp_name */ - sizeof(PyArrayIterObject), /* tp_basicsize */ - 0, /* tp_itemsize */ + 0, /* ob_size */ + "numpy.flatiter", /* tp_name */ + sizeof(PyArrayIterObject), /* tp_basicsize */ + 0, /* tp_itemsize */ /* methods */ - (destructor)arrayiter_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - &iter_as_mapping, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - (richcmpfunc)iter_richcompare, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - (iternextfunc)arrayiter_next, /* tp_iternext */ - iter_methods, /* tp_methods */ - iter_members, /* tp_members */ - iter_getsets, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ + 
(destructor)arrayiter_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + &iter_as_mapping, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + (richcmpfunc)iter_richcompare, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + (iternextfunc)arrayiter_next, /* tp_iternext */ + iter_methods, /* tp_methods */ + iter_members, /* tp_members */ + iter_getsets, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + 0, /* tp_new */ + 0, /* tp_free */ + 0, /* tp_is_gc */ + 0, /* tp_bases */ + 0, /* tp_mro */ + 0, /* tp_cache */ + 0, /* tp_subclasses */ + 0, /* tp_weaklist */ + 0, /* tp_del */ #ifdef COUNT_ALLOCS - /* these must be last and never explicitly initialized */ - 0, /* tp_allocs */ - 0, /* tp_frees */ - 0, /* tp_maxalloc */ - 0, /* tp_prev */ - 0, /* *tp_next */ + /* these must be last and never explicitly initialized */ + 0, /* tp_allocs */ + 0, /* tp_frees */ + 0, /* tp_maxalloc */ + 0, /* tp_prev */ + 0, /* *tp_next */ #endif }; @@ -10162,18 +10785,23 @@ _convert_obj(PyObject *obj, PyArrayIterObject **iter) PyArray_Descr *indtype; PyObject *arr; - if (PySlice_Check(obj) || (obj == Py_Ellipsis)) + if (PySlice_Check(obj) || (obj == Py_Ellipsis)) { return 0; + } else if (PyArray_Check(obj) && PyArray_ISBOOL(obj)) { return _nonzero_indices(obj, iter); } else { indtype = PyArray_DescrFromType(PyArray_INTP); arr = PyArray_FromAny(obj, indtype, 0, 0, FORCECAST, NULL); - if (arr == NULL) return -1; + if (arr == NULL) { + return -1; + } *iter = (PyArrayIterObject *)PyArray_IterNew(arr); Py_DECREF(arr); - if (*iter == 
NULL) return -1; + if (*iter == NULL) { + return -1; + } } return 1; } @@ -10190,23 +10818,26 @@ PyArray_Broadcast(PyArrayMultiIterObject *mit) PyArrayIterObject *it; /* Discover the broadcast number of dimensions */ - for(i=0, nd=0; i<mit->numiter; i++) + for (i = 0, nd = 0; i < mit->numiter; i++) { nd = MAX(nd, mit->iters[i]->ao->nd); + } mit->nd = nd; /* Discover the broadcast shape in each dimension */ - for(i=0; i<nd; i++) { + for (i = 0; i < nd; i++) { mit->dimensions[i] = 1; - for(j=0; j<mit->numiter; j++) { + for (j = 0; j < mit->numiter; j++) { it = mit->iters[j]; - /* This prepends 1 to shapes not already - equal to nd */ + /* This prepends 1 to shapes not already equal to nd */ k = i + it->ao->nd - nd; - if (k>=0) { + if (k >= 0) { tmp = it->ao->dimensions[k]; - if (tmp == 1) continue; - if (mit->dimensions[i] == 1) + if (tmp == 1) { + continue; + } + if (mit->dimensions[i] == 1) { mit->dimensions[i] = tmp; + } else if (mit->dimensions[i] != tmp) { PyErr_SetString(PyExc_ValueError, "shape mismatch: objects" \ @@ -10218,9 +10849,11 @@ PyArray_Broadcast(PyArrayMultiIterObject *mit) } } - /* Reset the iterator dimensions and strides of each iterator - object -- using 0 valued strides for broadcasting */ - /* Need to check for overflow */ + /* + * Reset the iterator dimensions and strides of each iterator + * object -- using 0 valued strides for broadcasting + * Need to check for overflow + */ tmp = PyArray_OverflowMultiplyList(mit->dimensions, mit->nd); if (tmp < 0) { PyErr_SetString(PyExc_ValueError, @@ -10228,18 +10861,20 @@ PyArray_Broadcast(PyArrayMultiIterObject *mit) return -1; } mit->size = tmp; - for(i=0; i<mit->numiter; i++) { + for (i = 0; i < mit->numiter; i++) { it = mit->iters[i]; it->nd_m1 = mit->nd - 1; it->size = tmp; nd = it->ao->nd; it->factors[mit->nd-1] = 1; - for(j=0; j < mit->nd; j++) { + for (j = 0; j < mit->nd; j++) { it->dims_m1[j] = mit->dimensions[j] - 1; k = j + nd - mit->nd; - /* If this dimension was added or shape - of 
underlying array was 1 */ - if ((k < 0) || \ + /* + * If this dimension was added or shape of + * underlying array was 1 + */ + if ((k < 0) || it->ao->dimensions[k] != mit->dimensions[j]) { it->contiguous = 0; it->strides[j] = 0; @@ -10247,12 +10882,10 @@ PyArray_Broadcast(PyArrayMultiIterObject *mit) else { it->strides[j] = it->ao->strides[k]; } - it->backstrides[j] = it->strides[j] * \ - it->dims_m1[j]; + it->backstrides[j] = it->strides[j] * it->dims_m1[j]; if (j > 0) - it->factors[mit->nd-j-1] = \ - it->factors[mit->nd-j] * \ - mit->dimensions[mit->nd-j]; + it->factors[mit->nd-j-1] = + it->factors[mit->nd-j] * mit->dimensions[mit->nd-j]; } PyArray_ITER_RESET(it); } @@ -10274,12 +10907,11 @@ PyArray_MapIterReset(PyArrayMapIterObject *mit) if (mit->subspace != NULL) { memcpy(coord, mit->bscoord, sizeof(intp)*mit->ait->ao->nd); PyArray_ITER_RESET(mit->subspace); - for(i=0; i<mit->numiter; i++) { + for (i = 0; i < mit->numiter; i++) { it = mit->iters[i]; PyArray_ITER_RESET(it); j = mit->iteraxes[i]; - copyswap(coord+j,it->dataptr, - !PyArray_ISNOTSWAPPED(it->ao), + copyswap(coord+j,it->dataptr, !PyArray_ISNOTSWAPPED(it->ao), it->ao); } PyArray_ITER_GOTO(mit->ait, coord); @@ -10287,15 +10919,16 @@ PyArray_MapIterReset(PyArrayMapIterObject *mit) mit->dataptr = mit->subspace->dataptr; } else { - for(i=0; i<mit->numiter; i++) { + for (i = 0; i < mit->numiter; i++) { it = mit->iters[i]; if (it->size != 0) { PyArray_ITER_RESET(it); - copyswap(coord+i,it->dataptr, - !PyArray_ISNOTSWAPPED(it->ao), + copyswap(coord+i,it->dataptr, !PyArray_ISNOTSWAPPED(it->ao), it->ao); } - else coord[i] = 0; + else { + coord[i] = 0; + } } PyArray_ITER_GOTO(mit->ait, coord); mit->dataptr = mit->ait->dataptr; @@ -10303,9 +10936,10 @@ PyArray_MapIterReset(PyArrayMapIterObject *mit) return; } -/* This function needs to update the state of the map iterator - and point mit->dataptr to the memory-location of the next object -*/ +/* + * This function needs to update the state of the map iterator + 
* and point mit->dataptr to the memory-location of the next object + */ static void PyArray_MapIterNext(PyArrayMapIterObject *mit) { @@ -10315,23 +10949,22 @@ PyArray_MapIterNext(PyArrayMapIterObject *mit) PyArray_CopySwapFunc *copyswap; mit->index += 1; - if (mit->index >= mit->size) return; + if (mit->index >= mit->size) { + return; + } copyswap = mit->iters[0]->ao->descr->f->copyswap; /* Sub-space iteration */ if (mit->subspace != NULL) { PyArray_ITER_NEXT(mit->subspace); if (mit->subspace->index >= mit->subspace->size) { - /* reset coord to coordinates of - beginning of the subspace */ - memcpy(coord, mit->bscoord, - sizeof(intp)*mit->ait->ao->nd); + /* reset coord to coordinates of beginning of the subspace */ + memcpy(coord, mit->bscoord, sizeof(intp)*mit->ait->ao->nd); PyArray_ITER_RESET(mit->subspace); - for(i=0; i<mit->numiter; i++) { + for (i = 0; i < mit->numiter; i++) { it = mit->iters[i]; PyArray_ITER_NEXT(it); j = mit->iteraxes[i]; - copyswap(coord+j,it->dataptr, - !PyArray_ISNOTSWAPPED(it->ao), + copyswap(coord+j,it->dataptr, !PyArray_ISNOTSWAPPED(it->ao), it->ao); } PyArray_ITER_GOTO(mit->ait, coord); @@ -10340,7 +10973,7 @@ PyArray_MapIterNext(PyArrayMapIterObject *mit) mit->dataptr = mit->subspace->dataptr; } else { - for(i=0; i<mit->numiter; i++) { + for (i = 0; i < mit->numiter; i++) { it = mit->iters[i]; PyArray_ITER_NEXT(it); copyswap(coord+i,it->dataptr, @@ -10353,26 +10986,26 @@ PyArray_MapIterNext(PyArrayMapIterObject *mit) return; } -/* Bind a mapiteration to a particular array */ - -/* Determine if subspace iteration is necessary. If so, - 1) Fill in mit->iteraxes - 2) Create subspace iterator - 3) Update nd, dimensions, and size. - - Subspace iteration is necessary if: arr->nd > mit->numiter -*/ - -/* Need to check for index-errors somewhere. - - Let's do it at bind time and also convert all <0 values to >0 here - as well. -*/ +/* + * Bind a mapiteration to a particular array + * + * Determine if subspace iteration is necessary. 
If so, + * 1) Fill in mit->iteraxes + * 2) Create subspace iterator + * 3) Update nd, dimensions, and size. + * + * Subspace iteration is necessary if: arr->nd > mit->numiter + * + * Need to check for index-errors somewhere. + * + * Let's do it at bind time and also convert all <0 values to >0 here + * as well. + */ static void PyArray_MapIterBind(PyArrayMapIterObject *mit, PyArrayObject *arr) { int subnd; - PyObject *sub, *obj=NULL; + PyObject *sub, *obj = NULL; int i, j, n, curraxis, ellipexp, noellip; PyArrayIterObject *it; intp dimsize; @@ -10386,22 +11019,24 @@ PyArray_MapIterBind(PyArrayMapIterObject *mit, PyArrayObject *arr) } mit->ait = (PyArrayIterObject *)PyArray_IterNew((PyObject *)arr); - if (mit->ait == NULL) return; - + if (mit->ait == NULL) { + return; + } /* no subspace iteration needed. Finish up and Return */ if (subnd == 0) { n = arr->nd; - for(i=0; i<n; i++) { + for (i = 0; i < n; i++) { mit->iteraxes[i] = i; } goto finish; } - /* all indexing arrays have been converted to 0 - therefore we can extract the subspace with a simple - getitem call which will use view semantics - */ - /* But, be sure to do it with a true array. + /* + * all indexing arrays have been converted to 0 + * therefore we can extract the subspace with a simple + * getitem call which will use view semantics + * + * But, be sure to do it with a true array. 
*/ if (PyArray_CheckExact(arr)) { sub = array_subscript_simple(arr, mit->indexobj); @@ -10409,54 +11044,65 @@ PyArray_MapIterBind(PyArrayMapIterObject *mit, PyArrayObject *arr) else { Py_INCREF(arr); obj = PyArray_EnsureArray((PyObject *)arr); - if (obj == NULL) goto fail; + if (obj == NULL) { + goto fail; + } sub = array_subscript_simple((PyArrayObject *)obj, mit->indexobj); Py_DECREF(obj); } - if (sub == NULL) goto fail; + if (sub == NULL) { + goto fail; + } mit->subspace = (PyArrayIterObject *)PyArray_IterNew(sub); Py_DECREF(sub); - if (mit->subspace == NULL) goto fail; - + if (mit->subspace == NULL) { + goto fail; + } /* Expand dimensions of result */ n = mit->subspace->ao->nd; - for(i=0; i<n; i++) + for (i = 0; i < n; i++) { mit->dimensions[mit->nd+i] = mit->subspace->ao->dimensions[i]; + } mit->nd += n; - /* Now, we still need to interpret the ellipsis and slice objects - to determine which axes the indexing arrays are referring to - */ + /* + * Now, we still need to interpret the ellipsis and slice objects + * to determine which axes the indexing arrays are referring to + */ n = PyTuple_GET_SIZE(mit->indexobj); - /* The number of dimensions an ellipsis takes up */ ellipexp = arr->nd - n + 1; - /* Now fill in iteraxes -- remember indexing arrays have been - converted to 0's in mit->indexobj */ + /* + * Now fill in iteraxes -- remember indexing arrays have been + * converted to 0's in mit->indexobj + */ curraxis = 0; j = 0; - noellip = 1; /* Only expand the first ellipsis */ + /* Only expand the first ellipsis */ + noellip = 1; memset(mit->bscoord, 0, sizeof(intp)*arr->nd); - for(i=0; i<n; i++) { - /* We need to fill in the starting coordinates for - the subspace */ + for (i = 0; i < n; i++) { + /* + * We need to fill in the starting coordinates for + * the subspace + */ obj = PyTuple_GET_ITEM(mit->indexobj, i); - if (PyInt_Check(obj) || PyLong_Check(obj)) + if (PyInt_Check(obj) || PyLong_Check(obj)) { mit->iteraxes[j++] = curraxis++; + } else if (noellip && 
obj == Py_Ellipsis) { curraxis += ellipexp; noellip = 0; } else { - intp start=0; + intp start = 0; intp stop, step; - /* Should be slice object or - another Ellipsis */ + /* Should be slice object or another Ellipsis */ if (obj == Py_Ellipsis) { mit->bscoord[curraxis] = 0; } - else if (!PySlice_Check(obj) || \ + else if (!PySlice_Check(obj) || (slice_GetIndices((PySliceObject *)obj, arr->dimensions[curraxis], &start, &stop, &step, @@ -10473,6 +11119,7 @@ PyArray_MapIterBind(PyArrayMapIterObject *mit, PyArrayObject *arr) curraxis += 1; } } + finish: /* Here check the indexes (now that we have iteraxes) */ mit->size = PyArray_OverflowMultiplyList(mit->dimensions, mit->nd); @@ -10487,15 +11134,17 @@ PyArray_MapIterBind(PyArrayMapIterObject *mit, PyArrayObject *arr) goto fail; } - for(i=0; i<mit->numiter; i++) { + for (i = 0; i < mit->numiter; i++) { intp indval; it = mit->iters[i]; PyArray_ITER_RESET(it); dimsize = arr->dimensions[mit->iteraxes[i]]; - while(it->index < it->size) { + while (it->index < it->size) { indptr = ((intp *)it->dataptr); indval = *indptr; - if (indval < 0) indval += dimsize; + if (indval < 0) { + indval += dimsize; + } if (indval < 0 || indval >= dimsize) { PyErr_Format(PyExc_IndexError, "index (%d) out of range "\ @@ -10518,14 +11167,15 @@ PyArray_MapIterBind(PyArrayMapIterObject *mit, PyArrayObject *arr) return; } -/* This function takes a Boolean array and constructs index objects and - iterators as if nonzero(Bool) had been called -*/ +/* + * This function takes a Boolean array and constructs index objects and + * iterators as if nonzero(Bool) had been called + */ static int _nonzero_indices(PyObject *myBool, PyArrayIterObject **iters) { PyArray_Descr *typecode; - PyArrayObject *ba =NULL, *new=NULL; + PyArrayObject *ba = NULL, *new = NULL; int nd, j; intp size, i, count; Bool *ptr; @@ -10535,45 +11185,59 @@ _nonzero_indices(PyObject *myBool, PyArrayIterObject **iters) typecode=PyArray_DescrFromType(PyArray_BOOL); ba = (PyArrayObject 
*)PyArray_FromAny(myBool, typecode, 0, 0, CARRAY, NULL); - if (ba == NULL) return -1; + if (ba == NULL) { + return -1; + } nd = ba->nd; - for(j=0; j<nd; j++) iters[j] = NULL; + for (j = 0; j < nd; j++) { + iters[j] = NULL; + } size = PyArray_SIZE(ba); ptr = (Bool *)ba->data; count = 0; /* pre-determine how many nonzero entries there are */ - for(i=0; i<size; i++) - if (*(ptr++)) count++; + for (i = 0; i < size; i++) { + if (*(ptr++)) { + count++; + } + } /* create count-sized index arrays for each dimension */ - for(j=0; j<nd; j++) { + for (j = 0; j < nd; j++) { new = (PyArrayObject *)PyArray_New(&PyArray_Type, 1, &count, PyArray_INTP, NULL, NULL, 0, 0, NULL); - if (new == NULL) goto fail; - iters[j] = (PyArrayIterObject *) \ + if (new == NULL) { + goto fail; + } + iters[j] = (PyArrayIterObject *) PyArray_IterNew((PyObject *)new); Py_DECREF(new); - if (iters[j] == NULL) goto fail; + if (iters[j] == NULL) { + goto fail; + } dptr[j] = (intp *)iters[j]->ao->data; coords[j] = 0; dims_m1[j] = ba->dimensions[j]-1; } - ptr = (Bool *)ba->data; + if (count == 0) { + goto finish; + } - if (count == 0) goto finish; - - /* Loop through the Boolean array and copy coordinates - for non-zero entries */ - for(i=0; i<size; i++) { + /* + * Loop through the Boolean array and copy coordinates + * for non-zero entries + */ + for (i = 0; i < size; i++) { if (*(ptr++)) { - for(j=0; j<nd; j++) + for (j = 0; j < nd; j++) { *(dptr[j]++) = coords[j]; + } } /* Borrowed from ITER_NEXT macro */ - for(j=nd-1; j>=0; j--) { + for (j = nd - 1; j >= 0; j--) { if (coords[j] < dims_m1[j]) { coords[j]++; break; @@ -10589,7 +11253,7 @@ _nonzero_indices(PyObject *myBool, PyArrayIterObject **iters) return nd; fail: - for(j=0; j<nd; j++) { + for (j = 0; j < nd; j++) { Py_XDECREF(iters[j]); } Py_XDECREF(ba); @@ -10617,10 +11281,12 @@ PyArray_MapIterNew(PyObject *indexobj, int oned, int fancy) mit = (PyArrayMapIterObject *)_pya_malloc(sizeof(PyArrayMapIterObject)); PyObject_Init((PyObject *)mit, 
&PyArrayMapIter_Type); - if (mit == NULL) + if (mit == NULL) { return NULL; - for(i=0; i<MAX_DIMS; i++) + } + for (i = 0; i < MAX_DIMS; i++) { mit->iters[i] = NULL; + } mit->index = 0; mit->ait = NULL; mit->subspace = NULL; @@ -10632,7 +11298,9 @@ PyArray_MapIterNew(PyObject *indexobj, int oned, int fancy) if (fancy == SOBJ_LISTTUP) { PyObject *newobj; newobj = PySequence_Tuple(indexobj); - if (newobj == NULL) goto fail; + if (newobj == NULL) { + goto fail; + } Py_DECREF(indexobj); indexobj = newobj; mit->indexobj = indexobj; @@ -10644,25 +11312,30 @@ PyArray_MapIterNew(PyObject *indexobj, int oned, int fancy) #undef SOBJ_TOOMANY #undef SOBJ_LISTTUP - if (oned) return (PyObject *)mit; - - /* Must have some kind of fancy indexing if we are here */ - /* indexobj is either a list, an arrayobject, or a tuple - (with at least 1 list or arrayobject or Bool object), */ + if (oned) { + return (PyObject *)mit; + } + /* + * Must have some kind of fancy indexing if we are here + * indexobj is either a list, an arrayobject, or a tuple + * (with at least 1 list or arrayobject or Bool object) + */ /* convert all inputs to iterators */ - if (PyArray_Check(indexobj) && \ - (PyArray_TYPE(indexobj) == PyArray_BOOL)) { + if (PyArray_Check(indexobj) && (PyArray_TYPE(indexobj) == PyArray_BOOL)) { mit->numiter = _nonzero_indices(indexobj, mit->iters); - if (mit->numiter < 0) goto fail; + if (mit->numiter < 0) { + goto fail; + } mit->nd = 1; mit->dimensions[0] = mit->iters[0]->dims_m1[0]+1; Py_DECREF(mit->indexobj); mit->indexobj = PyTuple_New(mit->numiter); - if (mit->indexobj == NULL) goto fail; - for(i=0; i<mit->numiter; i++) { - PyTuple_SET_ITEM(mit->indexobj, i, - PyInt_FromLong(0)); + if (mit->indexobj == NULL) { + goto fail; + } + for (i = 0; i < mit->numiter; i++) { + PyTuple_SET_ITEM(mit->indexobj, i, PyInt_FromLong(0)); } } @@ -10670,31 +11343,41 @@ PyArray_MapIterNew(PyObject *indexobj, int oned, int fancy) mit->numiter = 1; indtype = PyArray_DescrFromType(PyArray_INTP); arr = 
PyArray_FromAny(indexobj, indtype, 0, 0, FORCECAST, NULL); - if (arr == NULL) goto fail; + if (arr == NULL) { + goto fail; + } mit->iters[0] = (PyArrayIterObject *)PyArray_IterNew(arr); - if (mit->iters[0] == NULL) {Py_DECREF(arr); goto fail;} + if (mit->iters[0] == NULL) { + Py_DECREF(arr); + goto fail; + } mit->nd = PyArray_NDIM(arr); - memcpy(mit->dimensions,PyArray_DIMS(arr),mit->nd*sizeof(intp)); + memcpy(mit->dimensions, PyArray_DIMS(arr), mit->nd*sizeof(intp)); mit->size = PyArray_SIZE(arr); Py_DECREF(arr); Py_DECREF(mit->indexobj); mit->indexobj = Py_BuildValue("(N)", PyInt_FromLong(0)); } - else { /* must be a tuple */ + else { + /* must be a tuple */ PyObject *obj; PyArrayIterObject **iterp; PyObject *new; int numiters, j, n2; - /* Make a copy of the tuple -- we will be replacing - index objects with 0's */ + /* + * Make a copy of the tuple -- we will be replacing + * index objects with 0's + */ n = PyTuple_GET_SIZE(indexobj); n2 = n; new = PyTuple_New(n2); - if (new == NULL) goto fail; + if (new == NULL) { + goto fail; + } started = 0; nonindex = 0; j = 0; - for(i=0; i<n; i++) { + for (i = 0; i < n; i++) { obj = PyTuple_GET_ITEM(indexobj,i); iterp = mit->iters + mit->numiter; if ((numiters=_convert_obj(obj, iterp)) < 0) { @@ -10703,39 +11386,45 @@ PyArray_MapIterNew(PyObject *indexobj, int oned, int fancy) } if (numiters > 0) { started = 1; - if (nonindex) mit->consec = 0; + if (nonindex) { + mit->consec = 0; + } mit->numiter += numiters; if (numiters == 1) { - PyTuple_SET_ITEM(new,j++, - PyInt_FromLong(0)); + PyTuple_SET_ITEM(new,j++, PyInt_FromLong(0)); } - else { /* we need to grow the - new indexing object and fill - it with 0s for each of the iterators - produced */ + else { + /* + * we need to grow the new indexing object and fill + * it with 0s for each of the iterators produced + */ int k; n2 += numiters - 1; - if (_PyTuple_Resize(&new, n2) < 0) + if (_PyTuple_Resize(&new, n2) < 0) { goto fail; - for(k=0;k<numiters;k++) { - PyTuple_SET_ITEM \ - 
(new,j++, - PyInt_FromLong(0)); + } + for (k = 0; k < numiters; k++) { + PyTuple_SET_ITEM(new, j++, PyInt_FromLong(0)); } } } else { - if (started) nonindex = 1; + if (started) { + nonindex = 1; + } Py_INCREF(obj); PyTuple_SET_ITEM(new,j++,obj); } } Py_DECREF(mit->indexobj); mit->indexobj = new; - /* Store the number of iterators actually converted */ - /* These will be mapped to actual axes at bind time */ - if (PyArray_Broadcast((PyArrayMultiIterObject *)mit) < 0) + /* + * Store the number of iterators actually converted + * These will be mapped to actual axes at bind time + */ + if (PyArray_Broadcast((PyArrayMultiIterObject *)mit) < 0) { goto fail; + } } return (PyObject *)mit; @@ -10753,96 +11442,94 @@ arraymapiter_dealloc(PyArrayMapIterObject *mit) Py_XDECREF(mit->indexobj); Py_XDECREF(mit->ait); Py_XDECREF(mit->subspace); - for(i=0; i<mit->numiter; i++) + for (i = 0; i < mit->numiter; i++) { Py_XDECREF(mit->iters[i]); + } _pya_free(mit); } -/* The mapiter object must be created new each time. It does not work - to bind to a new array, and continue. - - This was the orginal intention, but currently that does not work. - Do not expose the MapIter_Type to Python. - - It's not very useful anyway, since mapiter(indexobj); mapiter.bind(a); - mapiter is equivalent to a[indexobj].flat but the latter gets to use - slice syntax. -*/ - +/* + * The mapiter object must be created new each time. It does not work + * to bind to a new array, and continue. + * + * This was the original intention, but currently that does not work. + * Do not expose the MapIter_Type to Python. + * + * It's not very useful anyway, since mapiter(indexobj); mapiter.bind(a); + * mapiter is equivalent to a[indexobj].flat but the latter gets to use + * slice syntax.
+ */ static PyTypeObject PyArrayMapIter_Type = { PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ - "numpy.mapiter", /* tp_name */ - sizeof(PyArrayIterObject), /* tp_basicsize */ - 0, /* tp_itemsize */ + 0, /* ob_size */ + "numpy.mapiter", /* tp_name */ + sizeof(PyArrayIterObject), /* tp_basicsize */ + 0, /* tp_itemsize */ /* methods */ - (destructor)arraymapiter_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ - 0, /* tp_doc */ - (traverseproc)0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - (iternextfunc)0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - (initproc)0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ + (destructor)arraymapiter_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ + 0, /* tp_doc */ + (traverseproc)0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + (iternextfunc)0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* 
tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + (initproc)0, /* tp_init */ + 0, /* tp_alloc */ + 0, /* tp_new */ + 0, /* tp_free */ + 0, /* tp_is_gc */ + 0, /* tp_bases */ + 0, /* tp_mro */ + 0, /* tp_cache */ + 0, /* tp_subclasses */ + 0, /* tp_weaklist */ + 0, /* tp_del */ #ifdef COUNT_ALLOCS - /* these must be last and never explicitly initialized */ - 0, /* tp_allocs */ - 0, /* tp_frees */ - 0, /* tp_maxalloc */ - 0, /* tp_prev */ - 0, /* *tp_next */ + /* these must be last and never explicitly initialized */ + 0, /* tp_allocs */ + 0, /* tp_frees */ + 0, /* tp_maxalloc */ + 0, /* tp_prev */ + 0, /* *tp_next */ #endif - }; /** END of Subscript Iterator **/ -/* - NUMPY_API - Get MultiIterator from array of Python objects and any additional - - PyObject **mps -- array of PyObjects - int n - number of PyObjects in the array - int nadd - number of additional arrays to include in the - iterator. - - Returns a multi-iterator object. +/*NUMPY_API + * Get MultiIterator from array of Python objects and any additional + * + * PyObject **mps -- array of PyObjects + * int n - number of PyObjects in the array + * int nadd - number of additional arrays to include in the iterator. + * + * Returns a multi-iterator object. */ static PyObject * PyArray_MultiIterFromObjects(PyObject **mps, int n, int nadd, ...) @@ -10861,17 +11548,20 @@ PyArray_MultiIterFromObjects(PyObject **mps, int n, int nadd, ...) 
"array objects (inclusive).", NPY_MAXARGS); return NULL; } - multi = _pya_malloc(sizeof(PyArrayMultiIterObject)); - if (multi == NULL) return PyErr_NoMemory(); + if (multi == NULL) { + return PyErr_NoMemory(); + } PyObject_Init((PyObject *)multi, &PyArrayMultiIter_Type); - for(i=0; i<ntot; i++) multi->iters[i] = NULL; + for (i = 0; i < ntot; i++) { + multi->iters[i] = NULL; + } multi->numiter = ntot; multi->index = 0; va_start(va, nadd); - for(i=0; i<ntot; i++) { + for (i = 0; i < ntot; i++) { if (i < n) { current = mps[i]; } @@ -10879,32 +11569,31 @@ PyArray_MultiIterFromObjects(PyObject **mps, int n, int nadd, ...) current = va_arg(va, PyObject *); } arr = PyArray_FROM_O(current); - if (arr==NULL) { - err=1; break; + if (arr == NULL) { + err = 1; + break; } else { multi->iters[i] = (PyArrayIterObject *)PyArray_IterNew(arr); Py_DECREF(arr); } } - va_end(va); - if (!err && PyArray_Broadcast(multi) < 0) err=1; - + if (!err && PyArray_Broadcast(multi) < 0) { + err = 1; + } if (err) { Py_DECREF(multi); return NULL; } - PyArray_MultiIter_RESET(multi); - - return (PyObject *)multi; + return (PyObject *)multi; } /*NUMPY_API - Get MultiIterator, -*/ + * Get MultiIterator, + */ static PyObject * PyArray_MultiIterNew(int n, ...) { @@ -10913,7 +11602,7 @@ PyArray_MultiIterNew(int n, ...) PyObject *current; PyObject *arr; - int i, err=0; + int i, err = 0; if (n < 2 || n > NPY_MAXARGS) { PyErr_Format(PyExc_ValueError, @@ -10925,37 +11614,40 @@ PyArray_MultiIterNew(int n, ...) 
/* fprintf(stderr, "multi new...");*/ multi = _pya_malloc(sizeof(PyArrayMultiIterObject)); - if (multi == NULL) return PyErr_NoMemory(); + if (multi == NULL) { + return PyErr_NoMemory(); + } PyObject_Init((PyObject *)multi, &PyArrayMultiIter_Type); - for(i=0; i<n; i++) multi->iters[i] = NULL; + for (i = 0; i < n; i++) { + multi->iters[i] = NULL; + } multi->numiter = n; multi->index = 0; va_start(va, n); - for(i=0; i<n; i++) { + for (i = 0; i < n; i++) { current = va_arg(va, PyObject *); arr = PyArray_FROM_O(current); - if (arr==NULL) { - err=1; break; + if (arr == NULL) { + err = 1; + break; } else { multi->iters[i] = (PyArrayIterObject *)PyArray_IterNew(arr); Py_DECREF(arr); } } - va_end(va); - if (!err && PyArray_Broadcast(multi) < 0) err=1; - + if (!err && PyArray_Broadcast(multi) < 0) { + err = 1; + } if (err) { Py_DECREF(multi); return NULL; } - PyArray_MultiIter_RESET(multi); - return (PyObject *)multi; } @@ -10975,7 +11667,9 @@ arraymultiter_new(PyTypeObject *NPY_UNUSED(subtype), PyObject *args, PyObject *k n = PyTuple_Size(args); if (n < 2 || n > NPY_MAXARGS) { - if (PyErr_Occurred()) return NULL; + if (PyErr_Occurred()) { + return NULL; + } PyErr_Format(PyExc_ValueError, "Need at least two and fewer than (%d) " \ "array objects.", NPY_MAXARGS); @@ -10983,23 +11677,31 @@ arraymultiter_new(PyTypeObject *NPY_UNUSED(subtype), PyObject *args, PyObject *k } multi = _pya_malloc(sizeof(PyArrayMultiIterObject)); - if (multi == NULL) return PyErr_NoMemory(); + if (multi == NULL) { + return PyErr_NoMemory(); + } PyObject_Init((PyObject *)multi, &PyArrayMultiIter_Type); multi->numiter = n; multi->index = 0; - for(i=0; i<n; i++) multi->iters[i] = NULL; - for(i=0; i<n; i++) { + for (i = 0; i < n; i++) { + multi->iters[i] = NULL; + } + for (i = 0; i < n; i++) { arr = PyArray_FromAny(PyTuple_GET_ITEM(args, i), NULL, 0, 0, 0, NULL); - if (arr == NULL) goto fail; - if ((multi->iters[i] = \ - (PyArrayIterObject *)PyArray_IterNew(arr))==NULL) + if (arr == NULL) { goto fail; + 
} + if ((multi->iters[i] = (PyArrayIterObject *)PyArray_IterNew(arr)) + == NULL) { + goto fail; + } Py_DECREF(arr); } - if (PyArray_Broadcast(multi) < 0) goto fail; + if (PyArray_Broadcast(multi) < 0) { + goto fail; + } PyArray_MultiIter_RESET(multi); - return (PyObject *)multi; fail: @@ -11015,9 +11717,11 @@ arraymultiter_next(PyArrayMultiIterObject *multi) n = multi->numiter; ret = PyTuple_New(n); - if (ret == NULL) return NULL; + if (ret == NULL) { + return NULL; + } if (multi->index < multi->size) { - for(i=0; i < n; i++) { + for (i = 0; i < n; i++) { PyArrayIterObject *it=multi->iters[i]; PyTuple_SET_ITEM(ret, i, PyArray_ToScalar(it->dataptr, it->ao)); @@ -11034,8 +11738,9 @@ arraymultiter_dealloc(PyArrayMultiIterObject *multi) { int i; - for(i=0; i<multi->numiter; i++) + for (i = 0; i < multi->numiter; i++) { Py_XDECREF(multi->iters[i]); + } multi->ob_type->tp_free((PyObject *)multi); } @@ -11045,10 +11750,12 @@ arraymultiter_size_get(PyArrayMultiIterObject *self) #if SIZEOF_INTP <= SIZEOF_LONG return PyInt_FromLong((long) self->size); #else - if (self->size < MAX_LONG) + if (self->size < MAX_LONG) { return PyInt_FromLong((long) self->size); - else + } + else { return PyLong_FromLongLong((longlong) self->size); + } #endif } @@ -11058,10 +11765,12 @@ arraymultiter_index_get(PyArrayMultiIterObject *self) #if SIZEOF_INTP <= SIZEOF_LONG return PyInt_FromLong((long) self->index); #else - if (self->size < MAX_LONG) + if (self->size < MAX_LONG) { return PyInt_FromLong((long) self->index); - else + } + else { return PyLong_FromLongLong((longlong) self->index); + } #endif } @@ -11076,10 +11785,13 @@ arraymultiter_iters_get(PyArrayMultiIterObject *self) { PyObject *res; int i, n; + n = self->numiter; res = PyTuple_New(n); - if (res == NULL) return res; - for(i=0; i<n; i++) { + if (res == NULL) { + return res; + } + for (i = 0; i < n; i++) { Py_INCREF(self->iters[i]); PyTuple_SET_ITEM(res, i, (PyObject *)self->iters[i]); } @@ -11112,8 +11824,9 @@ static PyMemberDef 
arraymultiter_members[] = { static PyObject * arraymultiter_reset(PyArrayMultiIterObject *self, PyObject *args) { - if (!PyArg_ParseTuple(args, "")) return NULL; - + if (!PyArg_ParseTuple(args, "")) { + return NULL; + } PyArray_MultiIter_RESET(self); Py_INCREF(Py_None); return Py_None; @@ -11126,61 +11839,61 @@ static PyMethodDef arraymultiter_methods[] = { static PyTypeObject PyArrayMultiIter_Type = { PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ - "numpy.broadcast", /* tp_name */ - sizeof(PyArrayMultiIterObject), /* tp_basicsize */ - 0, /* tp_itemsize */ + 0, /* ob_size */ + "numpy.broadcast", /* tp_name */ + sizeof(PyArrayMultiIterObject), /* tp_basicsize */ + 0, /* tp_itemsize */ /* methods */ - (destructor)arraymultiter_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - (iternextfunc)arraymultiter_next, /* tp_iternext */ - arraymultiter_methods, /* tp_methods */ - arraymultiter_members, /* tp_members */ - arraymultiter_getsetlist, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - (initproc)0, /* tp_init */ - 0, /* tp_alloc */ - arraymultiter_new, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ + (destructor)arraymultiter_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* 
tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + (iternextfunc)arraymultiter_next, /* tp_iternext */ + arraymultiter_methods, /* tp_methods */ + arraymultiter_members, /* tp_members */ + arraymultiter_getsetlist, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + (initproc)0, /* tp_init */ + 0, /* tp_alloc */ + arraymultiter_new, /* tp_new */ + 0, /* tp_free */ + 0, /* tp_is_gc */ + 0, /* tp_bases */ + 0, /* tp_mro */ + 0, /* tp_cache */ + 0, /* tp_subclasses */ + 0, /* tp_weaklist */ + 0, /* tp_del */ #ifdef COUNT_ALLOCS - /* these must be last and never explicitly initialized */ - 0, /* tp_allocs */ - 0, /* tp_frees */ - 0, /* tp_maxalloc */ - 0, /* tp_prev */ - 0, /* *tp_next */ + /* these must be last and never explicitly initialized */ + 0, /* tp_allocs */ + 0, /* tp_frees */ + 0, /* tp_maxalloc */ + 0, /* tp_prev */ + 0, /* *tp_next */ #endif }; @@ -11197,21 +11910,23 @@ PyArray_DescrNewFromType(int type_num) return new; } -/*** Array Descr Objects for dynamic types **/ - -/** There are some statically-defined PyArray_Descr objects corresponding - to the basic built-in types. - These can and should be DECREF'd and INCREF'd as appropriate, anyway. - If a mistake is made in reference counting, deallocation on these - builtins will be attempted leading to problems. +/** Array Descr Objects for dynamic types **/ - This let's us deal with all PyArray_Descr objects using reference - counting (regardless of whether they are statically or dynamically - allocated). -**/ +/* + * There are some statically-defined PyArray_Descr objects corresponding + * to the basic built-in types. 
+ * These can and should be DECREF'd and INCREF'd as appropriate, anyway. + * If a mistake is made in reference counting, deallocation on these + * builtins will be attempted leading to problems. + * + * This lets us deal with all PyArray_Descr objects using reference + * counting (regardless of whether they are statically or dynamically + * allocated). + */ -/* base cannot be NULL */ -/*NUMPY_API*/ +/*NUMPY_API + * base cannot be NULL + */ static PyArray_Descr * PyArray_DescrNew(PyArray_Descr *base) { @@ -11240,9 +11955,10 @@ PyArray_DescrNew(PyArray_Descr *base) return new; } -/* should never be called for builtin-types unless - there is a reference-count problem -*/ +/* + * should never be called for builtin-types unless + * there is a reference-count problem + */ static void arraydescr_dealloc(PyArray_Descr *self) { @@ -11265,20 +11981,29 @@ arraydescr_dealloc(PyArray_Descr *self) self->ob_type->tp_free((PyObject *)self); } -/* we need to be careful about setting attributes because these - objects are pointed to by arrays that depend on them for interpreting - data. Currently no attributes of data-type objects can be set - directly except names. -*/ +/* + * we need to be careful about setting attributes because these + * objects are pointed to by arrays that depend on them for interpreting + * data. Currently no attributes of data-type objects can be set + * directly except names.
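For readers following the patch: the `arraydescr_protocol_typestr_get` getter cleaned up in the hunks below builds the array-interface typestr (e.g. `'<i4'`) from the descriptor's kind, byteorder, and itemsize, resolving the native `'='` code to an explicit `'<'` or `'>'`. A rough pure-Python sketch of that logic (the function name and signature here are illustrative, not the C API):

```python
import sys

def protocol_typestr(kind, byteorder, itemsize):
    """Sketch of arraydescr_protocol_typestr_get: build a typestr
    such as '<i4' from (kind, byteorder, itemsize)."""
    endian = byteorder
    if endian == '=':
        # Native order is reported as an explicit '<' or '>'.
        endian = '<' if sys.byteorder == 'little' else '>'
    if kind == 'U':
        # Unicode itemsize is stored in bytes (UCS-4); the typestr
        # counts code points, hence the >> 2 in the C code.
        itemsize >>= 2
    return "%c%c%d" % (endian, kind, itemsize)

print(protocol_typestr('f', '>', 8))  # '>f8'
```

This mirrors the `endian == '='` normalization and the `size >>= 2` special case for `PyArray_UNICODE` visible in the diff.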
+ */ static PyMemberDef arraydescr_members[] = { - {"type", T_OBJECT, offsetof(PyArray_Descr, typeobj), RO, NULL}, - {"kind", T_CHAR, offsetof(PyArray_Descr, kind), RO, NULL}, - {"char", T_CHAR, offsetof(PyArray_Descr, type), RO, NULL}, - {"num", T_INT, offsetof(PyArray_Descr, type_num), RO, NULL}, - {"byteorder", T_CHAR, offsetof(PyArray_Descr, byteorder), RO, NULL}, - {"itemsize", T_INT, offsetof(PyArray_Descr, elsize), RO, NULL}, - {"alignment", T_INT, offsetof(PyArray_Descr, alignment), RO, NULL}, - {"flags", T_UBYTE, offsetof(PyArray_Descr, hasobject), RO, NULL}, + {"type", + T_OBJECT, offsetof(PyArray_Descr, typeobj), RO, NULL}, + {"kind", + T_CHAR, offsetof(PyArray_Descr, kind), RO, NULL}, + {"char", + T_CHAR, offsetof(PyArray_Descr, type), RO, NULL}, + {"num", + T_INT, offsetof(PyArray_Descr, type_num), RO, NULL}, + {"byteorder", + T_CHAR, offsetof(PyArray_Descr, byteorder), RO, NULL}, + {"itemsize", + T_INT, offsetof(PyArray_Descr, elsize), RO, NULL}, + {"alignment", + T_INT, offsetof(PyArray_Descr, alignment), RO, NULL}, + {"flags", + T_UBYTE, offsetof(PyArray_Descr, hasobject), RO, NULL}, {NULL, 0, 0, 0, NULL}, }; @@ -11296,15 +12021,16 @@ arraydescr_subdescr_get(PyArray_Descr *self) static PyObject * arraydescr_protocol_typestr_get(PyArray_Descr *self) { - char basic_=self->kind; + char basic_ = self->kind; char endian = self->byteorder; - int size=self->elsize; + int size = self->elsize; if (endian == '=') { endian = '<'; - if (!PyArray_IsNativeByteOrder(endian)) endian = '>'; + if (!PyArray_IsNativeByteOrder(endian)) { + endian = '>'; + } } - if (self->type_num == PyArray_UNICODE) { size >>= 2; } @@ -11318,7 +12044,8 @@ arraydescr_typename_get(PyArray_Descr *self) PyTypeObject *typeobj = self->typeobj; PyObject *res; char *s; - static int prefix_len=0; + /* fixme: not reentrant */ + static int prefix_len = 0; if (PyTypeNum_ISUSERDEF(self->type_num)) { s = strrchr(typeobj->tp_name, '.'); @@ -11326,17 +12053,18 @@ arraydescr_typename_get(PyArray_Descr 
*self) res = PyString_FromString(typeobj->tp_name); } else { - res = PyString_FromStringAndSize(s+1, strlen(s)-1); + res = PyString_FromStringAndSize(s + 1, strlen(s) - 1); } return res; } else { - if (prefix_len == 0) + if (prefix_len == 0) { prefix_len = strlen("numpy."); - + } len = strlen(typeobj->tp_name); - if (*(typeobj->tp_name + (len-1)) == '_') - len-=1; + if (*(typeobj->tp_name + (len-1)) == '_') { + len -= 1; + } len -= prefix_len; res = PyString_FromStringAndSize(typeobj->tp_name+prefix_len, len); } @@ -11381,35 +12109,45 @@ arraydescr_protocol_descr_get(PyArray_Descr *self) if (self->names == NULL) { /* get default */ dobj = PyTuple_New(2); - if (dobj == NULL) return NULL; + if (dobj == NULL) { + return NULL; + } PyTuple_SET_ITEM(dobj, 0, PyString_FromString("")); - PyTuple_SET_ITEM(dobj, 1, \ - arraydescr_protocol_typestr_get(self)); + PyTuple_SET_ITEM(dobj, 1, arraydescr_protocol_typestr_get(self)); res = PyList_New(1); - if (res == NULL) {Py_DECREF(dobj); return NULL;} + if (res == NULL) { + Py_DECREF(dobj); + return NULL; + } PyList_SET_ITEM(res, 0, dobj); return res; } _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal == NULL) return NULL; - res = PyObject_CallMethod(_numpy_internal, "_array_descr", - "O", self); + if (_numpy_internal == NULL) { + return NULL; + } + res = PyObject_CallMethod(_numpy_internal, "_array_descr", "O", self); Py_DECREF(_numpy_internal); return res; } -/* returns 1 for a builtin type - and 2 for a user-defined data-type descriptor - return 0 if neither (i.e. it's a copy of one) -*/ +/* + * returns 1 for a builtin type + * and 2 for a user-defined data-type descriptor + * return 0 if neither (i.e. 
it's a copy of one) + */ static PyObject * arraydescr_isbuiltin_get(PyArray_Descr *self) { long val; val = 0; - if (self->fields == Py_None) val = 1; - if (PyTypeNum_ISUSERDEF(self->type_num)) val = 2; + if (self->fields == Py_None) { + val = 1; + } + if (PyTypeNum_ISUSERDEF(self->type_num)) { + val = 2; + } return PyInt_FromLong(val); } @@ -11420,34 +12158,42 @@ _arraydescr_isnative(PyArray_Descr *self) return PyArray_ISNBO(self->byteorder); } else { - PyObject *key, *value, *title=NULL; + PyObject *key, *value, *title = NULL; PyArray_Descr *new; int offset; - Py_ssize_t pos=0; - while(PyDict_Next(self->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) continue; - if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, - &title)) return -1; - if (!_arraydescr_isnative(new)) return 0; + Py_ssize_t pos = 0; + while (PyDict_Next(self->fields, &pos, &key, &value)) { + if NPY_TITLE_KEY(key, value) { + continue; + } + if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, &title)) { + return -1; + } + if (!_arraydescr_isnative(new)) { + return 0; + } } } return 1; } -/* return Py_True if this data-type descriptor - has native byteorder if no fields are defined - - or if all sub-fields have native-byteorder if - fields are defined -*/ +/* + * return Py_True if this data-type descriptor + * has native byteorder if no fields are defined + * + * or if all sub-fields have native-byteorder if + * fields are defined + */ static PyObject * arraydescr_isnative_get(PyArray_Descr *self) { PyObject *ret; int retval; retval = _arraydescr_isnative(self); - if (retval == -1) return NULL; - ret = (retval ? Py_True : Py_False); + if (retval == -1) { + return NULL; + } + ret = retval ? 
Py_True : Py_False; Py_INCREF(ret); return ret; } @@ -11466,10 +12212,12 @@ static PyObject * arraydescr_hasobject_get(PyArray_Descr *self) { PyObject *res; - if (PyDataType_FLAGCHK(self, NPY_ITEM_HASOBJECT)) + if (PyDataType_FLAGCHK(self, NPY_ITEM_HASOBJECT)) { res = Py_True; - else + } + else { res = Py_False; + } Py_INCREF(res); return res; } @@ -11503,9 +12251,9 @@ arraydescr_names_set(PyArray_Descr *self, PyObject *val) return -1; } /* Make sure all entries are strings */ - for(i=0; i<N; i++) { + for (i = 0; i < N; i++) { PyObject *item; - int valid=1; + int valid = 1; item = PySequence_GetItem(val, i); valid = PyString_Check(item); Py_DECREF(item); @@ -11518,8 +12266,7 @@ arraydescr_names_set(PyArray_Descr *self, PyObject *val) } /* Update dictionary keys in fields */ new_names = PySequence_Tuple(val); - - for(i=0; i<N; i++) { + for (i = 0; i < N; i++) { PyObject *key; PyObject *item; PyObject *new_key; @@ -11542,39 +12289,39 @@ arraydescr_names_set(PyArray_Descr *self, PyObject *val) static PyGetSetDef arraydescr_getsets[] = { {"subdtype", - (getter)arraydescr_subdescr_get, - NULL, NULL, NULL}, + (getter)arraydescr_subdescr_get, + NULL, NULL, NULL}, {"descr", - (getter)arraydescr_protocol_descr_get, - NULL, NULL, NULL}, + (getter)arraydescr_protocol_descr_get, + NULL, NULL, NULL}, {"str", - (getter)arraydescr_protocol_typestr_get, - NULL, NULL, NULL}, + (getter)arraydescr_protocol_typestr_get, + NULL, NULL, NULL}, {"name", - (getter)arraydescr_typename_get, - NULL, NULL, NULL}, + (getter)arraydescr_typename_get, + NULL, NULL, NULL}, {"base", - (getter)arraydescr_base_get, - NULL, NULL, NULL}, + (getter)arraydescr_base_get, + NULL, NULL, NULL}, {"shape", - (getter)arraydescr_shape_get, - NULL, NULL, NULL}, + (getter)arraydescr_shape_get, + NULL, NULL, NULL}, {"isbuiltin", - (getter)arraydescr_isbuiltin_get, - NULL, NULL, NULL}, + (getter)arraydescr_isbuiltin_get, + NULL, NULL, NULL}, {"isnative", - (getter)arraydescr_isnative_get, - NULL, NULL, NULL}, + 
(getter)arraydescr_isnative_get, + NULL, NULL, NULL}, {"fields", - (getter)arraydescr_fields_get, - NULL, NULL, NULL}, + (getter)arraydescr_fields_get, + NULL, NULL, NULL}, {"names", - (getter)arraydescr_names_get, - (setter)arraydescr_names_set, - NULL, NULL}, + (getter)arraydescr_names_get, + (setter)arraydescr_names_set, + NULL, NULL}, {"hasobject", - (getter)arraydescr_hasobject_get, - NULL, NULL, NULL}, + (getter)arraydescr_hasobject_get, + NULL, NULL, NULL}, {NULL, NULL, NULL, NULL, NULL}, }; @@ -11583,22 +12330,24 @@ arraydescr_new(PyTypeObject *NPY_UNUSED(subtype), PyObject *args, PyObject *kwds { PyObject *odescr; PyArray_Descr *descr, *conv; - Bool align=FALSE; - Bool copy=FALSE; + Bool align = FALSE; + Bool copy = FALSE; static char *kwlist[] = {"dtype", "align", "copy", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O&O&", kwlist, &odescr, PyArray_BoolConverter, &align, - PyArray_BoolConverter, ©)) + PyArray_BoolConverter, ©)) { return NULL; - + } if (align) { - if (!PyArray_DescrAlignConverter(odescr, &conv)) + if (!PyArray_DescrAlignConverter(odescr, &conv)) { return NULL; + } } - else if (!PyArray_DescrConverter(odescr, &conv)) + else if (!PyArray_DescrConverter(odescr, &conv)) { return NULL; + } /* Get a new copy of it unless it's already a copy */ if (copy && conv->fields == Py_None) { descr = PyArray_DescrNew(conv); @@ -11613,9 +12362,11 @@ arraydescr_new(PyTypeObject *NPY_UNUSED(subtype), PyObject *args, PyObject *kwds static PyObject * arraydescr_reduce(PyArray_Descr *self, PyObject *NPY_UNUSED(args)) { - /* version number of this pickle type. Increment if we need to - change the format. Be sure to handle the old versions in - arraydescr_setstate. */ + /* + * version number of this pickle type. Increment if we need to + * change the format. Be sure to handle the old versions in + * arraydescr_setstate. 
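The `arraydescr_reduce` function reformatted below implements the standard pickle `__reduce__` protocol: it returns a `(constructor, args, state)` triple, and `arraydescr_setstate` later restores the state tuple, guarded by a version number. A minimal pure-Python analogue of that pattern (a toy class, not the dtype code itself):

```python
import pickle

class Descr:
    """Toy object pickled via the same (callable, args, state)
    protocol that arraydescr_reduce/arraydescr_setstate use in C."""
    VERSION = 3  # bump when the state layout changes

    def __init__(self, char, byteorder='='):
        self.char = char
        self.byteorder = byteorder

    def __reduce__(self):
        # (callable to rebuild the object, its args, opaque state)
        return (Descr, (self.char,), (self.VERSION, self.byteorder))

    def __setstate__(self, state):
        version, byteorder = state
        if not 0 <= version <= 3:
            # mirrors the "can't handle version %d" check in C
            raise ValueError("can't handle version %d" % version)
        self.byteorder = byteorder

d = pickle.loads(pickle.dumps(Descr('i', '>')))
print(d.char, d.byteorder)  # i >
```

The version field is what lets newer code keep reading pickles produced by the older 5-, 6-, and 7-element state layouts handled in `arraydescr_setstate`.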
+ */ const int version = 3; PyObject *ret, *mod, *obj; PyObject *state; @@ -11623,15 +12374,23 @@ arraydescr_reduce(PyArray_Descr *self, PyObject *NPY_UNUSED(args)) int elsize, alignment; ret = PyTuple_New(3); - if (ret == NULL) return NULL; + if (ret == NULL) { + return NULL; + } mod = PyImport_ImportModule("numpy.core.multiarray"); - if (mod == NULL) {Py_DECREF(ret); return NULL;} + if (mod == NULL) { + Py_DECREF(ret); + return NULL; + } obj = PyObject_GetAttrString(mod, "dtype"); Py_DECREF(mod); - if (obj == NULL) {Py_DECREF(ret); return NULL;} + if (obj == NULL) { + Py_DECREF(ret); + return NULL; + } PyTuple_SET_ITEM(ret, 0, obj); - if (PyTypeNum_ISUSERDEF(self->type_num) || \ - ((self->type_num == PyArray_VOID && \ + if (PyTypeNum_ISUSERDEF(self->type_num) || + ((self->type_num == PyArray_VOID && self->typeobj != &PyVoidArrType_Type))) { obj = (PyObject *)self->typeobj; Py_INCREF(obj); @@ -11645,12 +12404,16 @@ arraydescr_reduce(PyArray_Descr *self, PyObject *NPY_UNUSED(args)) } PyTuple_SET_ITEM(ret, 1, Py_BuildValue("(Nii)", obj, 0, 1)); - /* Now return the state which is at least - byteorder, subarray, and fields */ + /* + * Now return the state which is at least byteorder, + * subarray, and fields + */ endian = self->byteorder; if (endian == '=') { endian = '<'; - if (!PyArray_IsNativeByteOrder(endian)) endian = '>'; + if (!PyArray_IsNativeByteOrder(endian)) { + endian = '>'; + } } state = PyTuple_New(8); PyTuple_SET_ITEM(state, 0, PyInt_FromLong(version)); @@ -11674,12 +12437,13 @@ arraydescr_reduce(PyArray_Descr *self, PyObject *NPY_UNUSED(args)) elsize = self->elsize; alignment = self->alignment; } - else {elsize = -1; alignment = -1;} - + else { + elsize = -1; + alignment = -1; + } PyTuple_SET_ITEM(state, 5, PyInt_FromLong(elsize)); PyTuple_SET_ITEM(state, 6, PyInt_FromLong(alignment)); PyTuple_SET_ITEM(state, 7, PyInt_FromLong(self->hasobject)); - PyTuple_SET_ITEM(ret, 2, state); return ret; } @@ -11691,17 +12455,20 @@ static int 
_descr_find_object(PyArray_Descr *self) { if (self->hasobject || self->type_num == PyArray_OBJECT || - self->kind == 'O') + self->kind == 'O') { return NPY_OBJECT_DTYPE_FLAGS; + } if (PyDescr_HASFIELDS(self)) { - PyObject *key, *value, *title=NULL; + PyObject *key, *value, *title = NULL; PyArray_Descr *new; int offset; - Py_ssize_t pos=0; + Py_ssize_t pos = 0; + while (PyDict_Next(self->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) continue; - if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, - &title)) { + if NPY_TITLE_KEY(key, value) { + continue; + } + if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, &title)) { PyErr_Clear(); return 0; } @@ -11714,64 +12481,68 @@ _descr_find_object(PyArray_Descr *self) return 0; } -/* state is at least byteorder, subarray, and fields but could include elsize - and alignment for EXTENDED arrays -*/ - +/* + * state is at least byteorder, subarray, and fields but could include elsize + * and alignment for EXTENDED arrays + */ static PyObject * arraydescr_setstate(PyArray_Descr *self, PyObject *args) { int elsize = -1, alignment = -1; int version = 3; char endian; - PyObject *subarray, *fields, *names=NULL; + PyObject *subarray, *fields, *names = NULL; int incref_names = 1; - int dtypeflags=0; - - if (self->fields == Py_None) {Py_INCREF(Py_None); return Py_None;} + int dtypeflags = 0; + if (self->fields == Py_None) { + Py_INCREF(Py_None); + return Py_None; + } if (PyTuple_GET_SIZE(args) != 1 || !(PyTuple_Check(PyTuple_GET_ITEM(args, 0)))) { PyErr_BadInternalCall(); return NULL; } switch (PyTuple_GET_SIZE(PyTuple_GET_ITEM(args,0))) { - case 8: - if (!PyArg_ParseTuple(args, "(icOOOiii)", &version, &endian, - &subarray, &names, &fields, &elsize, - &alignment, &dtypeflags)) { - return NULL; - } - break; - case 7: - if (!PyArg_ParseTuple(args, "(icOOOii)", &version, &endian, - &subarray, &names, &fields, &elsize, - &alignment)) { - return NULL; - } - break; - case 6: - if (!PyArg_ParseTuple(args, "(icOOii)", 
&version, - &endian, &subarray, &fields, - &elsize, &alignment)) { - PyErr_Clear(); - } - break; - case 5: - version = 0; - if (!PyArg_ParseTuple(args, "(cOOii)", - &endian, &subarray, &fields, &elsize, - &alignment)) { - return NULL; - } - break; - default: - version = -1; /* raise an error */ + case 8: + if (!PyArg_ParseTuple(args, "(icOOOiii)", &version, &endian, + &subarray, &names, &fields, &elsize, + &alignment, &dtypeflags)) { + return NULL; + } + break; + case 7: + if (!PyArg_ParseTuple(args, "(icOOOii)", &version, &endian, + &subarray, &names, &fields, &elsize, + &alignment)) { + return NULL; + } + break; + case 6: + if (!PyArg_ParseTuple(args, "(icOOii)", &version, + &endian, &subarray, &fields, + &elsize, &alignment)) { + PyErr_Clear(); + } + break; + case 5: + version = 0; + if (!PyArg_ParseTuple(args, "(cOOii)", + &endian, &subarray, &fields, &elsize, + &alignment)) { + return NULL; + } + break; + default: + /* raise an error */ + version = -1; } - /* If we ever need another pickle format, increment the version - number. But we should still be able to handle the old versions. - */ + /* + * If we ever need another pickle format, increment the version + * number. But we should still be able to handle the old versions. 
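The re-indented `switch` above dispatches on the length of the pickled state tuple so that older pickle layouts keep loading. The same size-based dispatch, sketched in Python (field names follow the C variables; the helper itself is illustrative):

```python
def unpack_state(state):
    """Sketch of arraydescr_setstate's size-based dispatch.
    Unknown layouts get version -1, which the real code turns
    into a "can't handle version" ValueError."""
    defaults = {'version': 3, 'names': None, 'elsize': -1,
                'alignment': -1, 'dtypeflags': 0}
    if len(state) == 8:
        keys = ('version', 'endian', 'subarray', 'names', 'fields',
                'elsize', 'alignment', 'dtypeflags')
    elif len(state) == 7:
        keys = ('version', 'endian', 'subarray', 'names', 'fields',
                'elsize', 'alignment')
    elif len(state) == 6:
        keys = ('version', 'endian', 'subarray', 'fields',
                'elsize', 'alignment')
    elif len(state) == 5:
        defaults['version'] = 0  # oldest format had no version field
        keys = ('endian', 'subarray', 'fields', 'elsize', 'alignment')
    else:
        defaults['version'] = -1  # triggers the error path
        keys = ()
    out = dict(defaults)
    out.update(zip(keys, state))
    return out

print(unpack_state(('<', None, None, 8, 8))['version'])  # 0
```

Note how the 5-element (version-0) layout is recognized purely by its length, exactly as in the C `case 5:` branch.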
+ */ if (version < 0 || version > 3) { PyErr_Format(PyExc_ValueError, "can't handle version %d of numpy.dtype pickle", @@ -11784,7 +12555,9 @@ arraydescr_setstate(PyArray_Descr *self, PyObject *args) PyObject *key, *list; key = PyInt_FromLong(-1); list = PyDict_GetItem(fields, key); - if (!list) return NULL; + if (!list) { + return NULL; + } Py_INCREF(list); names = list; PyDict_DelItem(fields, key); @@ -11796,16 +12569,16 @@ arraydescr_setstate(PyArray_Descr *self, PyObject *args) } - if ((fields == Py_None && names != Py_None) || \ + if ((fields == Py_None && names != Py_None) || (names == Py_None && fields != Py_None)) { PyErr_Format(PyExc_ValueError, "inconsistent fields and names"); return NULL; } - if (endian != '|' && - PyArray_IsNativeByteOrder(endian)) endian = '='; - + if (endian != '|' && PyArray_IsNativeByteOrder(endian)) { + endian = '='; + } self->byteorder = endian; if (self->subarray) { Py_XDECREF(self->subarray->base); @@ -11828,8 +12601,9 @@ arraydescr_setstate(PyArray_Descr *self, PyObject *args) Py_INCREF(fields); Py_XDECREF(self->names); self->names = names; - if (incref_names) + if (incref_names) { Py_INCREF(names); + } } if (PyTypeNum_ISEXTENDED(self->type_num)) { @@ -11846,23 +12620,23 @@ arraydescr_setstate(PyArray_Descr *self, PyObject *args) } -/* returns a copy of the PyArray_Descr structure with the byteorder - altered: - no arguments: The byteorder is swapped (in all subfields as well) - single argument: The byteorder is forced to the given state - (in all subfields as well) - - Valid states: ('big', '>') or ('little' or '<') - ('native', or '=') - - If a descr structure with | is encountered it's own - byte-order is not changed but any fields are: -*/ - -/*NUMPY_API - Deep bytorder change of a data-type descriptor - *** Leaves reference count of self unchanged --- does not DECREF self *** - */ + /*NUMPY_API + * returns a copy of the PyArray_Descr structure with the byteorder + * altered: + * no arguments: The byteorder is swapped (in 
all subfields as well) + * single argument: The byteorder is forced to the given state + * (in all subfields as well) + * + * Valid states: ('big', '>') or ('little' or '<') + * ('native', or '=') + * + * If a descr structure with | is encountered its own + * byte-order is not changed but any fields are: + * + * + * Deep byteorder change of a data-type descriptor + * *** Leaves reference count of self unchanged --- does not DECREF self *** + */ static PyArray_Descr * PyArray_DescrNewByteorder(PyArray_Descr *self, char newendian) { @@ -11872,9 +12646,14 @@ PyArray_DescrNewByteorder(PyArray_Descr *self, char newendian) new = PyArray_DescrNew(self); endian = new->byteorder; if (endian != PyArray_IGNORE) { - if (newendian == PyArray_SWAP) { /* swap byteorder */ - if PyArray_ISNBO(endian) endian = PyArray_OPPBYTE; - else endian = PyArray_NATBYTE; + if (newendian == PyArray_SWAP) { + /* swap byteorder */ + if PyArray_ISNBO(endian) { + endian = PyArray_OPPBYTE; + } + else { + endian = PyArray_NATBYTE; + } new->byteorder = endian; } else if (newendian != PyArray_IGNORE) { @@ -11889,28 +12668,31 @@ PyArray_DescrNewByteorder(PyArray_Descr *self, char newendian) PyArray_Descr *newdescr; Py_ssize_t pos = 0; int len, i; + newfields = PyDict_New(); - /* make new dictionary with replaced */ - /* PyArray_Descr Objects */ + /* make new dictionary with replaced PyArray_Descr Objects */ while(PyDict_Next(self->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) continue; - if (!PyString_Check(key) || \ - !PyTuple_Check(value) || \ - ((len=PyTuple_GET_SIZE(value)) < 2)) + if NPY_TITLE_KEY(key, value) { continue; - + } + if (!PyString_Check(key) || + !PyTuple_Check(value) || + ((len=PyTuple_GET_SIZE(value)) < 2)) { + continue; + } old = PyTuple_GET_ITEM(value, 0); - if (!PyArray_DescrCheck(old)) continue; - newdescr = PyArray_DescrNewByteorder \ - ((PyArray_Descr *)old, newendian); + if (!PyArray_DescrCheck(old)) { + continue; + } + newdescr = PyArray_DescrNewByteorder( + 
(PyArray_Descr *)old, newendian); if (newdescr == NULL) { Py_DECREF(newfields); Py_DECREF(new); return NULL; } newvalue = PyTuple_New(len); - PyTuple_SET_ITEM(newvalue, 0, \ - (PyObject *)newdescr); - for(i=1; i<len; i++) { + PyTuple_SET_ITEM(newvalue, 0, (PyObject *)newdescr); + for (i = 1; i < len; i++) { old = PyTuple_GET_ITEM(value, i); Py_INCREF(old); PyTuple_SET_ITEM(newvalue, i, old); @@ -11923,7 +12705,7 @@ PyArray_DescrNewByteorder(PyArray_Descr *self, char newendian) } if (new->subarray) { Py_DECREF(new->subarray->base); - new->subarray->base = PyArray_DescrNewByteorder \ + new->subarray->base = PyArray_DescrNewByteorder (self->subarray->base, newendian); } return new; @@ -11936,19 +12718,20 @@ arraydescr_newbyteorder(PyArray_Descr *self, PyObject *args) char endian=PyArray_SWAP; if (!PyArg_ParseTuple(args, "|O&", PyArray_ByteorderConverter, - &endian)) return NULL; - + &endian)) { + return NULL; + } return (PyObject *)PyArray_DescrNewByteorder(self, endian); } static PyMethodDef arraydescr_methods[] = { /* for pickling */ - {"__reduce__", (PyCFunction)arraydescr_reduce, METH_VARARGS, - NULL}, - {"__setstate__", (PyCFunction)arraydescr_setstate, METH_VARARGS, - NULL}, - {"newbyteorder", (PyCFunction)arraydescr_newbyteorder, METH_VARARGS, - NULL}, + {"__reduce__", + (PyCFunction)arraydescr_reduce, METH_VARARGS, NULL}, + {"__setstate__", + (PyCFunction)arraydescr_setstate, METH_VARARGS, NULL}, + {"newbyteorder", + (PyCFunction)arraydescr_newbyteorder, METH_VARARGS, NULL}, {NULL, NULL, 0, NULL} /* sentinel */ }; @@ -11964,7 +12747,9 @@ arraydescr_str(PyArray_Descr *self) sub = PyString_FromString("<err>"); PyErr_Clear(); } - else sub = PyObject_Str(lst); + else { + sub = PyObject_Str(lst); + } Py_XDECREF(lst); if (self->type_num != PyArray_VOID) { PyObject *p; @@ -12035,55 +12820,66 @@ arraydescr_repr(PyArray_Descr *self) static PyObject * arraydescr_richcompare(PyArray_Descr *self, PyObject *other, int cmp_op) { - PyArray_Descr *new=NULL; + PyArray_Descr 
*new = NULL; PyObject *result = Py_NotImplemented; if (!PyArray_DescrCheck(other)) { - if (PyArray_DescrConverter(other, &new) == PY_FAIL) + if (PyArray_DescrConverter(other, &new) == PY_FAIL) { return NULL; + } } else { new = (PyArray_Descr *)other; Py_INCREF(new); } switch (cmp_op) { - case Py_LT: - if (!PyArray_EquivTypes(self, new) && PyArray_CanCastTo(self, new)) - result = Py_True; - else - result = Py_False; - break; - case Py_LE: - if (PyArray_CanCastTo(self, new)) - result = Py_True; - else - result = Py_False; - break; - case Py_EQ: - if (PyArray_EquivTypes(self, new)) - result = Py_True; - else - result = Py_False; - break; - case Py_NE: - if (PyArray_EquivTypes(self, new)) - result = Py_False; - else - result = Py_True; - break; - case Py_GT: - if (!PyArray_EquivTypes(self, new) && PyArray_CanCastTo(new, self)) - result = Py_True; - else - result = Py_False; - break; - case Py_GE: - if (PyArray_CanCastTo(new, self)) - result = Py_True; - else - result = Py_False; - break; - default: - result = Py_NotImplemented; + case Py_LT: + if (!PyArray_EquivTypes(self, new) && PyArray_CanCastTo(self, new)) { + result = Py_True; + } + else { + result = Py_False; + } + break; + case Py_LE: + if (PyArray_CanCastTo(self, new)) { + result = Py_True; + } + else { + result = Py_False; + } + break; + case Py_EQ: + if (PyArray_EquivTypes(self, new)) { + result = Py_True; + } + else { + result = Py_False; + } + break; + case Py_NE: + if (PyArray_EquivTypes(self, new)) { + result = Py_False; + } + else { + result = Py_True; + } + break; + case Py_GT: + if (!PyArray_EquivTypes(self, new) && PyArray_CanCastTo(new, self)) { + result = Py_True; + } + else { + result = Py_False; + } + break; + case Py_GE: + if (PyArray_CanCastTo(new, self)) { + result = Py_True; + } + else { + result = Py_False; + } + break; + default: + result = Py_NotImplemented; } Py_XDECREF(new); @@ -12098,12 +12894,14 @@ arraydescr_richcompare(PyArray_Descr *self, PyObject *other, int cmp_op) static Py_ssize_t 
descr_length(PyObject *self0) { - PyArray_Descr *self = (PyArray_Descr *)self0; - if (self->names) + PyArray_Descr *self = (PyArray_Descr *)self0; + + if (self->names) { return PyTuple_GET_SIZE(self->names); - else return 0; + } + else { + return 0; + } } static PyObject * @@ -12111,7 +12909,7 @@ descr_repeat(PyObject *self, Py_ssize_t length) { PyObject *tup; PyArray_Descr *new; - if (length < 0) + if (length < 0) { return PyErr_Format(PyExc_ValueError, #if (PY_VERSION_HEX < 0x02050000) "Array length must be >= 0, not %d", @@ -12119,8 +12917,11 @@ descr_repeat(PyObject *self, Py_ssize_t length) "Array length must be >= 0, not %zd", #endif length); + } tup = Py_BuildValue("O" NPY_SSIZE_T_PYFMT, self, length); - if (tup == NULL) return NULL; + if (tup == NULL) { + return NULL; + } PyArray_DescrConverter(tup, &new); Py_DECREF(tup); return (PyObject *)new; } @@ -12132,11 +12933,9 @@ descr_subscript(PyArray_Descr *self, PyObject *op) { if (self->names) { if (PyString_Check(op) || PyUnicode_Check(op)) { - PyObject *obj; - obj = PyDict_GetItem(self->fields, op); + PyObject *obj = PyDict_GetItem(self->fields, op); if (obj != NULL) { - PyObject *descr; - descr = PyTuple_GET_ITEM(obj, 0); + PyObject *descr = PyTuple_GET_ITEM(obj, 0); Py_INCREF(descr); return descr; } @@ -12148,12 +12947,12 @@ descr_subscript(PyArray_Descr *self, PyObject *op) } else { PyObject *name; - int value; - value = PyArray_PyIntAsInt(op); + int value = PyArray_PyIntAsInt(op); if (!PyErr_Occurred()) { - int size; - size = PyTuple_GET_SIZE(self->names); - if (value < 0) value += size; + int size = PyTuple_GET_SIZE(self->names); + if (value < 0) { + value += size; + } if (value < 0 || value >= size) { PyErr_Format(PyExc_IndexError, "0<=index<%d not %d", @@ -12184,17 +12983,17 @@ static PySequenceMethods descr_as_sequence = { (binaryfunc)NULL, descr_repeat, NULL, NULL, - NULL, /* sq_ass_item */ - NULL, /* ssizessizeobjargproc sq_ass_slice */ - 0, /* sq_contains */ - 0, /* sq_inplace_concat */ - 0, /* sq_inplace_repeat */ + NULL, /* sq_ass_item */ + NULL, 
/* ssizessizeobjargproc sq_ass_slice */ + 0, /* sq_contains */ + 0, /* sq_inplace_concat */ + 0, /* sq_inplace_repeat */ }; static PyMappingMethods descr_as_mapping = { - descr_length, /*mp_length*/ - (binaryfunc)descr_subscript, /*mp_subscript*/ - (objobjargproc)NULL, /*mp_ass_subscript*/ + descr_length, /* mp_length*/ + (binaryfunc)descr_subscript, /* mp_subscript*/ + (objobjargproc)NULL, /* mp_ass_subscript*/ }; /****************** End of Mapping Protocol ******************************/ @@ -12202,70 +13001,71 @@ static PyMappingMethods descr_as_mapping = { static PyTypeObject PyArrayDescr_Type = { PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ - "numpy.dtype", /* tp_name */ - sizeof(PyArray_Descr), /* tp_basicsize */ - 0, /* tp_itemsize */ + 0, /* ob_size */ + "numpy.dtype", /* tp_name */ + sizeof(PyArray_Descr), /* tp_basicsize */ + 0, /* tp_itemsize */ /* methods */ - (destructor)arraydescr_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - (reprfunc)arraydescr_repr, /* tp_repr */ - 0, /* tp_as_number */ - &descr_as_sequence, /* tp_as_sequence */ - &descr_as_mapping, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - (reprfunc)arraydescr_str, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - (richcmpfunc)arraydescr_richcompare, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - arraydescr_methods, /* tp_methods */ - arraydescr_members, /* tp_members */ - arraydescr_getsets, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - arraydescr_new, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* 
tp_del */ + (destructor)arraydescr_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + (reprfunc)arraydescr_repr, /* tp_repr */ + 0, /* tp_as_number */ + &descr_as_sequence, /* tp_as_sequence */ + &descr_as_mapping, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + (reprfunc)arraydescr_str, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + (richcmpfunc)arraydescr_richcompare, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + arraydescr_methods, /* tp_methods */ + arraydescr_members, /* tp_members */ + arraydescr_getsets, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + arraydescr_new, /* tp_new */ + 0, /* tp_free */ + 0, /* tp_is_gc */ + 0, /* tp_bases */ + 0, /* tp_mro */ + 0, /* tp_cache */ + 0, /* tp_subclasses */ + 0, /* tp_weaklist */ + 0, /* tp_del */ #ifdef COUNT_ALLOCS - /* these must be last and never explicitly initialized */ - 0, /* tp_allocs */ - 0, /* tp_frees */ - 0, /* tp_maxalloc */ - 0, /* tp_prev */ - 0, /* *tp_next */ + /* these must be last and never explicitly initialized */ + 0, /* tp_allocs */ + 0, /* tp_frees */ + 0, /* tp_maxalloc */ + 0, /* tp_prev */ + 0, /* *tp_next */ #endif }; -/** Array Flags Object **/ +/* Array Flags Object */ /*NUMPY_API - Get New ArrayFlagsObject -*/ + * + * Get New ArrayFlagsObject + */ static PyObject * PyArray_NewFlagsObject(PyObject *obj) { @@ -12278,11 +13078,12 @@ PyArray_NewFlagsObject(PyObject *obj) flags = PyArray_FLAGS(obj); } flagobj = PyArrayFlags_Type.tp_alloc(&PyArrayFlags_Type, 0); - if (flagobj == NULL) return NULL; + if (flagobj == NULL) { + return NULL; + } Py_XINCREF(obj); ((PyArrayFlagsObject *)flagobj)->arr = obj; 
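The flags getters reformatted below derive compound properties from the raw bit flags: `forc` is F-contiguous *or* C-contiguous, `fnc` is F-contiguous *and not* C-contiguous, and `farray` additionally requires aligned and writeable. A small Python sketch of those predicates (the constant values here are stand-ins, not the real C flag bits):

```python
# Stand-in values for the C flag bits (the real constants differ).
CONTIGUOUS, FORTRAN, ALIGNED, WRITEABLE = 0x1, 0x2, 0x100, 0x400

def forc(flags):
    """F-contiguous or C-contiguous (arrayflags_forc_get)."""
    return bool(flags & FORTRAN) or bool(flags & CONTIGUOUS)

def fnc(flags):
    """F-contiguous and not C-contiguous (arrayflags_fnc_get)."""
    return bool(flags & FORTRAN) and not (flags & CONTIGUOUS)

def farray(flags):
    """Aligned, writeable, F-contiguous, and not C-contiguous
    (arrayflags_farray_get)."""
    want = ALIGNED | WRITEABLE | FORTRAN
    return (flags & want) == want and not (flags & CONTIGUOUS)

f = FORTRAN | ALIGNED | WRITEABLE
print(forc(f), fnc(f), farray(f))  # True True True
```

The `not CONTIGUOUS` clause in `fnc`/`farray` matters for the edge case of arrays that are both C- and F-contiguous (e.g. 1-D arrays), which these predicates deliberately exclude.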
((PyArrayFlagsObject *)flagobj)->flags = flags; - return flagobj; } @@ -12320,11 +13121,12 @@ arrayflags_forc_get(PyArrayFlagsObject *self) PyObject *item; if (((self->flags & FORTRAN) == FORTRAN) || - ((self->flags & CONTIGUOUS) == CONTIGUOUS)) + ((self->flags & CONTIGUOUS) == CONTIGUOUS)) { item = Py_True; - else + } + else { item = Py_False; - + } Py_INCREF(item); return item; } @@ -12335,11 +13137,12 @@ arrayflags_fnc_get(PyArrayFlagsObject *self) PyObject *item; if (((self->flags & FORTRAN) == FORTRAN) && - !((self->flags & CONTIGUOUS) == CONTIGUOUS)) + !((self->flags & CONTIGUOUS) == CONTIGUOUS)) { item = Py_True; - else + } + else { item = Py_False; - + } Py_INCREF(item); return item; } @@ -12349,13 +13152,14 @@ arrayflags_farray_get(PyArrayFlagsObject *self) { PyObject *item; - if (((self->flags & (ALIGNED|WRITEABLE|FORTRAN)) == \ + if (((self->flags & (ALIGNED|WRITEABLE|FORTRAN)) == (ALIGNED|WRITEABLE|FORTRAN)) && - !((self->flags & CONTIGUOUS) == CONTIGUOUS)) + !((self->flags & CONTIGUOUS) == CONTIGUOUS)) { item = Py_True; - else + } + else { item = Py_False; - + } Py_INCREF(item); return item; } @@ -12377,7 +13181,9 @@ arrayflags_updateifcopy_set(PyArrayFlagsObject *self, PyObject *obj) } res = PyObject_CallMethod(self->arr, "setflags", "OOO", Py_None, Py_None, (PyObject_IsTrue(obj) ? Py_True : Py_False)); - if (res == NULL) return -1; + if (res == NULL) { + return -1; + } Py_DECREF(res); return 0; } @@ -12393,7 +13199,9 @@ arrayflags_aligned_set(PyArrayFlagsObject *self, PyObject *obj) res = PyObject_CallMethod(self->arr, "setflags", "OOO", Py_None, (PyObject_IsTrue(obj) ? Py_True : Py_False), Py_None); - if (res == NULL) return -1; + if (res == NULL) { + return -1; + } Py_DECREF(res); return 0; } @@ -12409,7 +13217,9 @@ arrayflags_writeable_set(PyArrayFlagsObject *self, PyObject *obj) res = PyObject_CallMethod(self->arr, "setflags", "OOO", (PyObject_IsTrue(obj) ? 
Py_True : Py_False), Py_None, Py_None); - if (res == NULL) return -1; + if (res == NULL) { + return -1; + } Py_DECREF(res); return 0; } @@ -12417,61 +13227,61 @@ arrayflags_writeable_set(PyArrayFlagsObject *self, PyObject *obj) static PyGetSetDef arrayflags_getsets[] = { {"contiguous", - (getter)arrayflags_contiguous_get, - NULL, - "", NULL}, + (getter)arrayflags_contiguous_get, + NULL, + "", NULL}, {"c_contiguous", - (getter)arrayflags_contiguous_get, - NULL, - "", NULL}, + (getter)arrayflags_contiguous_get, + NULL, + "", NULL}, {"f_contiguous", - (getter)arrayflags_fortran_get, - NULL, - "", NULL}, + (getter)arrayflags_fortran_get, + NULL, + "", NULL}, {"fortran", - (getter)arrayflags_fortran_get, - NULL, - "", NULL}, + (getter)arrayflags_fortran_get, + NULL, + "", NULL}, {"updateifcopy", - (getter)arrayflags_updateifcopy_get, - (setter)arrayflags_updateifcopy_set, - "", NULL}, + (getter)arrayflags_updateifcopy_get, + (setter)arrayflags_updateifcopy_set, + "", NULL}, {"owndata", - (getter)arrayflags_owndata_get, - NULL, - "", NULL}, + (getter)arrayflags_owndata_get, + NULL, + "", NULL}, {"aligned", - (getter)arrayflags_aligned_get, - (setter)arrayflags_aligned_set, - "", NULL}, + (getter)arrayflags_aligned_get, + (setter)arrayflags_aligned_set, + "", NULL}, {"writeable", - (getter)arrayflags_writeable_get, - (setter)arrayflags_writeable_set, - "", NULL}, + (getter)arrayflags_writeable_get, + (setter)arrayflags_writeable_set, + "", NULL}, {"fnc", - (getter)arrayflags_fnc_get, - NULL, - "", NULL}, + (getter)arrayflags_fnc_get, + NULL, + "", NULL}, {"forc", - (getter)arrayflags_forc_get, - NULL, - "", NULL}, + (getter)arrayflags_forc_get, + NULL, + "", NULL}, {"behaved", - (getter)arrayflags_behaved_get, - NULL, - "", NULL}, + (getter)arrayflags_behaved_get, + NULL, + "", NULL}, {"carray", - (getter)arrayflags_carray_get, - NULL, - "", NULL}, + (getter)arrayflags_carray_get, + NULL, + "", NULL}, {"farray", - (getter)arrayflags_farray_get, - NULL, - "", NULL}, + 
(getter)arrayflags_farray_get, + NULL, + "", NULL}, {"num", - (getter)arrayflags_num_get, - NULL, - "", NULL}, + (getter)arrayflags_num_get, + NULL, + "", NULL}, {NULL, NULL, NULL, NULL, NULL}, }; @@ -12480,76 +13290,93 @@ arrayflags_getitem(PyArrayFlagsObject *self, PyObject *ind) { char *key; int n; - if (!PyString_Check(ind)) goto fail; + if (!PyString_Check(ind)) { + goto fail; + } key = PyString_AS_STRING(ind); n = PyString_GET_SIZE(ind); switch(n) { - case 1: - switch(key[0]) { - case 'C': - return arrayflags_contiguous_get(self); - case 'F': - return arrayflags_fortran_get(self); - case 'W': - return arrayflags_writeable_get(self); - case 'B': - return arrayflags_behaved_get(self); - case 'O': - return arrayflags_owndata_get(self); - case 'A': - return arrayflags_aligned_get(self); - case 'U': - return arrayflags_updateifcopy_get(self); - default: - goto fail; - } - break; - case 2: - if (strncmp(key, "CA", n)==0) - return arrayflags_carray_get(self); - if (strncmp(key, "FA", n)==0) - return arrayflags_farray_get(self); - break; - case 3: - if (strncmp(key, "FNC", n)==0) - return arrayflags_fnc_get(self); - break; - case 4: - if (strncmp(key, "FORC", n)==0) - return arrayflags_forc_get(self); - break; - case 6: - if (strncmp(key, "CARRAY", n)==0) - return arrayflags_carray_get(self); - if (strncmp(key, "FARRAY", n)==0) - return arrayflags_farray_get(self); - break; - case 7: - if (strncmp(key,"FORTRAN",n)==0) - return arrayflags_fortran_get(self); - if (strncmp(key,"BEHAVED",n)==0) - return arrayflags_behaved_get(self); - if (strncmp(key,"OWNDATA",n)==0) - return arrayflags_owndata_get(self); - if (strncmp(key,"ALIGNED",n)==0) - return arrayflags_aligned_get(self); - break; - case 9: - if (strncmp(key,"WRITEABLE",n)==0) - return arrayflags_writeable_get(self); - break; - case 10: - if (strncmp(key,"CONTIGUOUS",n)==0) - return arrayflags_contiguous_get(self); - break; - case 12: - if (strncmp(key, "UPDATEIFCOPY", n)==0) - return 
arrayflags_updateifcopy_get(self); - if (strncmp(key, "C_CONTIGUOUS", n)==0) - return arrayflags_contiguous_get(self); - if (strncmp(key, "F_CONTIGUOUS", n)==0) - return arrayflags_fortran_get(self); - break; + case 1: + switch(key[0]) { + case 'C': + return arrayflags_contiguous_get(self); + case 'F': + return arrayflags_fortran_get(self); + case 'W': + return arrayflags_writeable_get(self); + case 'B': + return arrayflags_behaved_get(self); + case 'O': + return arrayflags_owndata_get(self); + case 'A': + return arrayflags_aligned_get(self); + case 'U': + return arrayflags_updateifcopy_get(self); + default: + goto fail; + } + break; + case 2: + if (strncmp(key, "CA", n) == 0) { + return arrayflags_carray_get(self); + } + if (strncmp(key, "FA", n) == 0) { + return arrayflags_farray_get(self); + } + break; + case 3: + if (strncmp(key, "FNC", n) == 0) { + return arrayflags_fnc_get(self); + } + break; + case 4: + if (strncmp(key, "FORC", n) == 0) { + return arrayflags_forc_get(self); + } + break; + case 6: + if (strncmp(key, "CARRAY", n) == 0) { + return arrayflags_carray_get(self); + } + if (strncmp(key, "FARRAY", n) == 0) { + return arrayflags_farray_get(self); + } + break; + case 7: + if (strncmp(key,"FORTRAN",n) == 0) { + return arrayflags_fortran_get(self); + } + if (strncmp(key,"BEHAVED",n) == 0) { + return arrayflags_behaved_get(self); + } + if (strncmp(key,"OWNDATA",n) == 0) { + return arrayflags_owndata_get(self); + } + if (strncmp(key,"ALIGNED",n) == 0) { + return arrayflags_aligned_get(self); + } + break; + case 9: + if (strncmp(key,"WRITEABLE",n) == 0) { + return arrayflags_writeable_get(self); + } + break; + case 10: + if (strncmp(key,"CONTIGUOUS",n) == 0) { + return arrayflags_contiguous_get(self); + } + break; + case 12: + if (strncmp(key, "UPDATEIFCOPY", n) == 0) { + return arrayflags_updateifcopy_get(self); + } + if (strncmp(key, "C_CONTIGUOUS", n) == 0) { + return arrayflags_contiguous_get(self); + } + if (strncmp(key, "F_CONTIGUOUS", n) == 0) { + 
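The reindented `arrayflags_getitem` buckets flag names by string length before `strncmp`-matching them. A stripped-down sketch of that dispatch, returning a small arbitrary integer id instead of a flags object (the ids and the subset of names here are illustrative):

```c
#include <string.h>

/* Map a flag-name key to an arbitrary id, or -1 if unknown.  Keys are
 * first dispatched on length, then compared with strncmp, mirroring
 * the structure of arrayflags_getitem. */
static int flag_id(const char *key)
{
    size_t n = strlen(key);

    switch (n) {
    case 1:
        switch (key[0]) {
        case 'C': return 0;   /* CONTIGUOUS */
        case 'F': return 1;   /* FORTRAN */
        case 'W': return 2;   /* WRITEABLE */
        default:  return -1;
        }
    case 7:
        if (strncmp(key, "FORTRAN", n) == 0) return 1;
        if (strncmp(key, "ALIGNED", n) == 0) return 3;
        break;
    case 9:
        if (strncmp(key, "WRITEABLE", n) == 0) return 2;
        break;
    case 10:
        if (strncmp(key, "CONTIGUOUS", n) == 0) return 0;
        break;
    }
    return -1;
}
```

Switching on the length first means at most a couple of string comparisons per lookup, with no hash table needed for such a small fixed key set.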
return arrayflags_fortran_get(self); + } + break; } fail: @@ -12562,18 +13389,23 @@ arrayflags_setitem(PyArrayFlagsObject *self, PyObject *ind, PyObject *item) { char *key; int n; - if (!PyString_Check(ind)) goto fail; + if (!PyString_Check(ind)) { + goto fail; + } key = PyString_AS_STRING(ind); n = PyString_GET_SIZE(ind); - if (((n==9) && (strncmp(key, "WRITEABLE", n)==0)) || - ((n==1) && (strncmp(key, "W", n)==0))) + if (((n==9) && (strncmp(key, "WRITEABLE", n) == 0)) || + ((n==1) && (strncmp(key, "W", n) == 0))) { return arrayflags_writeable_set(self, item); - else if (((n==7) && (strncmp(key, "ALIGNED", n)==0)) || - ((n==1) && (strncmp(key, "A", n)==0))) + } + else if (((n==7) && (strncmp(key, "ALIGNED", n) == 0)) || + ((n==1) && (strncmp(key, "A", n) == 0))) { return arrayflags_aligned_set(self, item); - else if (((n==12) && (strncmp(key, "UPDATEIFCOPY", n)==0)) || - ((n==1) && (strncmp(key, "U", n)==0))) + } + else if (((n==12) && (strncmp(key, "UPDATEIFCOPY", n) == 0)) || + ((n==1) && (strncmp(key, "U", n) == 0))) { return arrayflags_updateifcopy_set(self, item); + } fail: PyErr_SetString(PyExc_KeyError, "Unknown flag"); @@ -12583,8 +13415,12 @@ arrayflags_setitem(PyArrayFlagsObject *self, PyObject *ind, PyObject *item) static char * _torf_(int flags, int val) { - if ((flags & val) == val) return "True"; - else return "False"; + if ((flags & val) == val) { + return "True"; + } + else { + return "False"; + } } static PyObject * @@ -12606,12 +13442,15 @@ arrayflags_print(PyArrayFlagsObject *self) static int arrayflags_compare(PyArrayFlagsObject *self, PyArrayFlagsObject *other) { - if (self->flags == other->flags) + if (self->flags == other->flags) { return 0; - else if (self->flags < other->flags) + } + else if (self->flags < other->flags) { return -1; - else + } + else { return 1; + } } static PyMappingMethods arrayflags_as_mapping = { @@ -12629,9 +13468,9 @@ static PyObject * arrayflags_new(PyTypeObject *NPY_UNUSED(self), PyObject *args, PyObject 
*NPY_UNUSED(kwds)) { PyObject *arg=NULL; - if (!PyArg_UnpackTuple(args, "flagsobj", 0, 1, &arg)) + if (!PyArg_UnpackTuple(args, "flagsobj", 0, 1, &arg)) { return NULL; - + } if ((arg != NULL) && PyArray_Check(arg)) { return PyArray_NewFlagsObject(arg); } @@ -12645,7 +13484,7 @@ static PyTypeObject PyArrayFlags_Type = { 0, "numpy.flagsobj", sizeof(PyArrayFlagsObject), - 0, /* tp_itemsize */ + 0, /* tp_itemsize */ /* methods */ (destructor)arrayflags_dealloc, /* tp_dealloc */ 0, /* tp_print */ @@ -12670,32 +13509,32 @@ static PyTypeObject PyArrayFlags_Type = { 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - arrayflags_getsets, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - arrayflags_new, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ + 0, /* tp_methods */ + 0, /* tp_members */ + arrayflags_getsets, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + arrayflags_new, /* tp_new */ + 0, /* tp_free */ + 0, /* tp_is_gc */ + 0, /* tp_bases */ + 0, /* tp_mro */ + 0, /* tp_cache */ + 0, /* tp_subclasses */ + 0, /* tp_weaklist */ + 0, /* tp_del */ #ifdef COUNT_ALLOCS - /* these must be last and never explicitly initialized */ - 0, /* tp_allocs */ - 0, /* tp_frees */ - 0, /* tp_maxalloc */ - 0, /* tp_prev */ - 0, /* *tp_next */ + /* these must be last and never explicitly initialized */ + 0, /* tp_allocs */ + 0, /* tp_frees */ + 0, /* tp_maxalloc */ + 0, /* tp_prev */ + 0, /* *tp_next */ #endif }; diff --git a/numpy/core/src/arraytypes.inc.src b/numpy/core/src/arraytypes.inc.src index 764bad1f4..250190f8e 100644 --- a/numpy/core/src/arraytypes.inc.src +++ 
b/numpy/core/src/arraytypes.inc.src @@ -2,41 +2,17 @@ #include "config.h" static double -_getNAN(void) { -#ifdef NAN - return NAN; -#else - static double nan=0; - - if (nan == 0) { - double mul = 1e100; - double tmp = 0.0; - double pinf=0; - pinf = mul; - for (;;) { - pinf *= mul; - if (pinf == tmp) break; - tmp = pinf; - } - nan = pinf / pinf; - } - return nan; -#endif -} - - -static double MyPyFloat_AsDouble(PyObject *obj) { double ret = 0; PyObject *num; if (obj == Py_None) { - return _getNAN(); + return NumPyOS_NAN; } num = PyNumber_Float(obj); if (num == NULL) { - return _getNAN(); + return NumPyOS_NAN; } ret = PyFloat_AsDouble(num); Py_DECREF(num); @@ -192,7 +168,7 @@ static int op2 = op; Py_INCREF(op); } if (op2 == Py_None) { - oop.real = oop.imag = _getNAN(); + oop.real = oop.imag = NumPyOS_NAN; } else { oop = PyComplex_AsCComplex (op2); @@ -897,17 +873,30 @@ static void */ /**begin repeat - -#fname=SHORT,USHORT,INT,UINT,LONG,ULONG,LONGLONG,ULONGLONG,FLOAT,DOUBLE,LONGDOUBLE# -#type=short,ushort,int,uint,long,ulong,longlong,ulonglong,float,double,longdouble# -#format="hd","hu","d","u","ld","lu",LONGLONG_FMT,ULONGLONG_FMT,"f","lf","Lf"# +#fname=SHORT,USHORT,INT,UINT,LONG,ULONG,LONGLONG,ULONGLONG# +#type=short,ushort,int,uint,long,ulong,longlong,ulonglong# +#format="hd","hu","d","u","ld","lu",LONGLONG_FMT,ULONGLONG_FMT# */ static int @fname@_scan (FILE *fp, @type@ *ip, void *NPY_UNUSED(ignore), PyArray_Descr *NPY_UNUSED(ignored)) { return fscanf(fp, "%"@format@, ip); } +/**end repeat**/ +/**begin repeat +#fname=FLOAT,DOUBLE,LONGDOUBLE# +#type=float,double,longdouble# +*/ +static int +@fname@_scan (FILE *fp, @type@ *ip, void *NPY_UNUSED(ignore), PyArray_Descr *NPY_UNUSED(ignored)) +{ + double result; + int ret; + ret = NumPyOS_ascii_ftolf(fp, &result); + *ip = (@type@) result; + return ret; +} /**end repeat**/ /**begin repeat @@ -966,19 +955,15 @@ static int #fname=FLOAT,DOUBLE,LONGDOUBLE# #type=float,double,longdouble# */ -#if (PY_VERSION_HEX >= 0x02040000) || 
defined(PyOS_ascii_strtod) static int @fname@_fromstr(char *str, @type@ *ip, char **endptr, PyArray_Descr *NPY_UNUSED(ignore)) { double result; - result = PyOS_ascii_strtod(str, endptr); + result = NumPyOS_ascii_strtod(str, endptr); *ip = (@type@) result; return 0; } -#else -#define @fname@_fromstr NULL -#endif /**end repeat**/ diff --git a/numpy/core/src/multiarraymodule.c b/numpy/core/src/multiarraymodule.c index 53ce89a94..d680671c3 100644 --- a/numpy/core/src/multiarraymodule.c +++ b/numpy/core/src/multiarraymodule.c @@ -81,6 +81,10 @@ _arraydescr_fromobj(PyObject *obj) return NULL; } +/* XXX: We include c99 compat math module here because it is needed for + * numpyos.c (included by arrayobject). This is bad - we should separate + * declaration/implementation and share this in a lib. */ +#include "umath_funcs_c99.inc" /* Including this file is the only way I know how to declare functions static in each file, and store the pointers from functions in both @@ -7705,6 +7709,9 @@ PyMODINIT_FUNC initmultiarray(void) { PyObject *m, *d, *s; PyObject *c_api; + /* Initialize constants etc. */ + NumPyOS_init(); + /* Create the module and add the functions */ m = Py_InitModule("multiarray", array_module_methods); if (!m) goto err; diff --git a/numpy/core/src/numpyos.c b/numpy/core/src/numpyos.c new file mode 100644 index 000000000..5408851f9 --- /dev/null +++ b/numpy/core/src/numpyos.c @@ -0,0 +1,630 @@ +#include <locale.h> +#include <stdio.h> + +/* From the C99 standard, section 7.19.6: The exponent always contains at least + two digits, and only as many more digits as necessary to represent the + exponent. +*/ +/* We force 3 digits on windows for python < 2.6 for compatibility reason */ +#if defined(MS_WIN32) && (PY_VERSION_HEX < 0x02060000) +#define MIN_EXPONENT_DIGITS 3 +#else +#define MIN_EXPONENT_DIGITS 2 +#endif + +/* Ensure that any exponent, if present, is at least MIN_EXPONENT_DIGITS + in length. 
*/ +static void +_ensure_minimum_exponent_length(char* buffer, size_t buf_size) +{ + char *p = strpbrk(buffer, "eE"); + if (p && (*(p + 1) == '-' || *(p + 1) == '+')) { + char *start = p + 2; + int exponent_digit_cnt = 0; + int leading_zero_cnt = 0; + int in_leading_zeros = 1; + int significant_digit_cnt; + + /* Skip over the exponent and the sign. */ + p += 2; + + /* Find the end of the exponent, keeping track of leading + zeros. */ + while (*p && isdigit(Py_CHARMASK(*p))) { + if (in_leading_zeros && *p == '0') + ++leading_zero_cnt; + if (*p != '0') + in_leading_zeros = 0; + ++p; + ++exponent_digit_cnt; + } + + significant_digit_cnt = exponent_digit_cnt - leading_zero_cnt; + if (exponent_digit_cnt == MIN_EXPONENT_DIGITS) { + /* If there are 2 exactly digits, we're done, + regardless of what they contain */ + } + else if (exponent_digit_cnt > MIN_EXPONENT_DIGITS) { + int extra_zeros_cnt; + + /* There are more than 2 digits in the exponent. See + if we can delete some of the leading zeros */ + if (significant_digit_cnt < MIN_EXPONENT_DIGITS) + significant_digit_cnt = MIN_EXPONENT_DIGITS; + + extra_zeros_cnt = exponent_digit_cnt - significant_digit_cnt; + + /* Delete extra_zeros_cnt worth of characters from the + front of the exponent */ + assert(extra_zeros_cnt >= 0); + + /* Add one to significant_digit_cnt to copy the + trailing 0 byte, thus setting the length */ + memmove(start, start + extra_zeros_cnt, significant_digit_cnt + 1); + } + else { + /* If there are fewer than 2 digits, add zeros + until there are 2, if there's enough room */ + int zeros = MIN_EXPONENT_DIGITS - exponent_digit_cnt; + if (start + zeros + exponent_digit_cnt + 1 < buffer + buf_size) { + memmove(start + zeros, start, exponent_digit_cnt + 1); + memset(start, '0', zeros); + } + } + } +} + +/* Ensure that buffer has a decimal point in it. The decimal point + will not be in the current locale, it will always be '.' 
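The decimal-point fix-up described here can be sketched in a self-contained, simplified form (it splices ".0" in before any exponent tail, but omits the real `_ensure_decimal_point`'s special case for a bare trailing `'.'`):

```c
#include <ctype.h>
#include <string.h>

/* Make sure a formatted number contains ".0" after its integer digits,
 * shifting any exponent tail right.  Skips quietly when the buffer is
 * too small, as the original does. */
static void ensure_decimal_point(char *buffer, size_t buf_size)
{
    char *p = buffer;

    if (*p == '-' || *p == '+')
        ++p;                              /* skip leading sign */
    while (*p && isdigit((unsigned char)*p))
        ++p;                              /* skip integer digits */
    if (*p == '.')
        return;                           /* already has a decimal point */
    if (strlen(buffer) + 2 + 1 > buf_size)
        return;                           /* not enough room; skip */
    memmove(p + 2, p, strlen(p) + 1);     /* shift tail (e.g. "e+05") */
    memcpy(p, ".0", 2);
}
```

So "123" becomes "123.0" and "1e+05" becomes "1.0e+05", while "-2.5" is left alone.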
*/ +static void +_ensure_decimal_point(char* buffer, size_t buf_size) +{ + int insert_count = 0; + char* chars_to_insert; + + /* search for the first non-digit character */ + char *p = buffer; + if (*p == '-' || *p == '+') + /* Skip leading sign, if present. I think this could only + ever be '-', but it can't hurt to check for both. */ + ++p; + while (*p && isdigit(Py_CHARMASK(*p))) + ++p; + + if (*p == '.') { + if (isdigit(Py_CHARMASK(*(p+1)))) { + /* Nothing to do, we already have a decimal + point and a digit after it */ + } + else { + /* We have a decimal point, but no following + digit. Insert a zero after the decimal. */ + ++p; + chars_to_insert = "0"; + insert_count = 1; + } + } + else { + chars_to_insert = ".0"; + insert_count = 2; + } + if (insert_count) { + size_t buf_len = strlen(buffer); + if (buf_len + insert_count + 1 >= buf_size) { + /* If there is not enough room in the buffer + for the additional text, just skip it. It's + not worth generating an error over. */ + } + else { + memmove(p + insert_count, p, + buffer + strlen(buffer) - p + 1); + memcpy(p, chars_to_insert, insert_count); + } + } +} + +/* see FORMATBUFLEN in unicodeobject.c */ +#define FLOAT_FORMATBUFLEN 120 + +/* Given a string that may have a decimal point in the current + locale, change it back to a dot. Since the string cannot get + longer, no need for a maximum buffer size parameter. */ +static void +_change_decimal_from_locale_to_dot(char* buffer) +{ + struct lconv *locale_data = localeconv(); + const char *decimal_point = locale_data->decimal_point; + + if (decimal_point[0] != '.' 
|| decimal_point[1] != 0) { + size_t decimal_point_len = strlen(decimal_point); + + if (*buffer == '+' || *buffer == '-') + buffer++; + while (isdigit(Py_CHARMASK(*buffer))) + buffer++; + if (strncmp(buffer, decimal_point, decimal_point_len) == 0) { + *buffer = '.'; + buffer++; + if (decimal_point_len > 1) { + /* buffer needs to get smaller */ + size_t rest_len = strlen(buffer + + (decimal_point_len - 1)); + memmove(buffer, + buffer + (decimal_point_len - 1), + rest_len); + buffer[rest_len] = 0; + } + } + } +} + +/* + * Check that the format string is a valid one for NumPyOS_ascii_format* + */ +static int +_check_ascii_format(const char *format) +{ + char format_char; + size_t format_len = strlen(format); + + /* The last character in the format string must be the format char */ + format_char = format[format_len - 1]; + + if (format[0] != '%') { + return -1; + } + + /* I'm not sure why this test is here. It's ensuring that the format + string after the first character doesn't have a single quote, a + lowercase l, or a percent. This is the reverse of the commented-out + test about 10 lines ago. */ + if (strpbrk(format + 1, "'l%")) { + return -1; + } + + /* Also curious about this function is that it accepts format strings + like "%xg", which are invalid for floats. In general, the + interface to this function is not very good, but changing it is + difficult because it's a public API. */ + + if (!(format_char == 'e' || format_char == 'E' || + format_char == 'f' || format_char == 'F' || + format_char == 'g' || format_char == 'G')) { + return -1; + } + + return 0; +} + +/* + * Fix the generated string: make sure the decimal is ., that exponent has a + * minimal number of digits, and that it has a decimal + one digit after that + * decimal if decimal argument != 0 (Same effect that 'Z' format in + * PyOS_ascii_formatd + */ +static char* +_fix_ascii_format(char* buf, size_t buflen, int decimal) +{ + /* Get the current locale, and find the decimal point string. 
+ Convert that string back to a dot. */ + _change_decimal_from_locale_to_dot(buf); + + /* If an exponent exists, ensure that the exponent is at least + MIN_EXPONENT_DIGITS digits, providing the buffer is large enough + for the extra zeros. Also, if there are more than + MIN_EXPONENT_DIGITS, remove as many zeros as possible until we get + back to MIN_EXPONENT_DIGITS */ + _ensure_minimum_exponent_length(buf, buflen); + + if (decimal != 0) { + _ensure_decimal_point(buf, buflen); + } + + return buf; +} + +/* + * NumPyOS_ascii_format*: + * - buffer: A buffer to place the resulting string in + * - buf_size: The length of the buffer. + * - format: The printf()-style format to use for converting. + * - value: The value to convert + * - decimal: if != 0, always has a decimal, and at least one digit after + * the decimal. This has the same effect as passing 'Z' in the original + * PyOS_ascii_formatd + * + * This is similar to PyOS_ascii_formatd in python > 2.6, except that it does + * not handle 'n', and handles nan / inf. + * + * Converts a double to a string, using '.' as the decimal point. To format + * the number you pass in a printf()-style format string. Allowed conversion + * specifiers are 'e', 'E', 'f', 'F', 'g', 'G'. + * + * Return value: The pointer to the buffer with the converted string. 
+ */ +#define _ASCII_FORMAT(type, suffix, print_type) \ + static char* \ + NumPyOS_ascii_format ## suffix(char *buffer, size_t buf_size, \ + const char *format, \ + type val, int decimal) \ + { \ + if (isfinite(val)) { \ + if(_check_ascii_format(format)) { \ + return NULL; \ + } \ + PyOS_snprintf(buffer, buf_size, format, (print_type)val); \ + return _fix_ascii_format(buffer, buf_size, decimal); \ + } \ + else if (isnan(val)){ \ + if (buf_size < 4) { \ + return NULL; \ + } \ + strcpy(buffer, "nan"); \ + } \ + else { \ + if (signbit(val)) { \ + if (buf_size < 5) { \ + return NULL; \ + } \ + strcpy(buffer, "-inf"); \ + } \ + else { \ + if (buf_size < 4) { \ + return NULL; \ + } \ + strcpy(buffer, "inf"); \ + } \ + } \ + return buffer; \ + } + +_ASCII_FORMAT(float, f, float) +_ASCII_FORMAT(double, d, double) +#ifndef FORCE_NO_LONG_DOUBLE_FORMATTING +_ASCII_FORMAT(long double, l, long double) +#else +_ASCII_FORMAT(long double, l, double) +#endif + + +static double NumPyOS_PINF; /* Positive infinity */ +static double NumPyOS_PZERO; /* +0 */ +static double NumPyOS_NAN; /* NaN */ + +/* NumPyOS_init: + * + * initialize floating-point constants + */ +static void +NumPyOS_init(void) { + double mul = 1e100; + double div = 1e10; + double tmp, c; + + tmp = 0; + c = mul; + for (;;) { + c *= mul; + if (c == tmp) break; + tmp = c; + } + NumPyOS_PINF = c; + + tmp = 0; + c = div; + for (;;) { + c /= div; + if (c == tmp) break; + tmp = c; + } + NumPyOS_PZERO = c; + + NumPyOS_NAN = NumPyOS_PINF / NumPyOS_PINF; +} + + +/* NumPyOS_ascii_isspace: + * + * Same as isspace under C locale + */ +static int +NumPyOS_ascii_isspace(char c) +{ + return c == ' ' || c == '\f' || c == '\n' || c == '\r' || c == '\t' || + c == '\v'; +} + + +/* NumPyOS_ascii_isalpha: + * + * Same as isalpha under C locale + */ +static int +NumPyOS_ascii_isalpha(char c) +{ + return (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z'); +} + + +/* NumPyOS_ascii_isdigit: + * + * Same as isdigit under C locale + */ +static int 
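A hedged sketch of the trick `NumPyOS_init` uses to build its floating-point constants without relying on a `NAN`/`INFINITY` macro: multiply until the value stops changing (overflow to +inf), then derive NaN as inf/inf. This assumes IEEE-754 doubles; aggressive compiler options such as `-ffast-math` can break it.

```c
/* Derive +infinity portably: once c overflows to +inf, a further
 * multiply leaves it unchanged and the loop terminates. */
static double make_pinf(void)
{
    double mul = 1e100;
    double c = mul, tmp = 0.0;

    for (;;) {
        c *= mul;
        if (c == tmp)
            break;
        tmp = c;
    }
    return c;
}
```

The same shape with division by `1e10` yields the +0 constant, exactly as in `NumPyOS_init` above.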
+NumPyOS_ascii_isdigit(char c) +{ + return (c >= '0' && c <= '9'); +} + + +/* NumPyOS_ascii_isalnum: + * + * Same as isalnum under C locale + */ +static int +NumPyOS_ascii_isalnum(char c) +{ + return NumPyOS_ascii_isdigit(c) || NumPyOS_ascii_isalpha(c); +} + + +/* NumPyOS_ascii_tolower: + * + * Same as tolower under C locale + */ +static char +NumPyOS_ascii_tolower(char c) +{ + if (c >= 'A' && c <= 'Z') + return c + ('a'-'A'); + return c; +} + + +/* NumPyOS_ascii_strncasecmp: + * + * Same as strncasecmp under C locale + */ +static int +NumPyOS_ascii_strncasecmp(const char* s1, const char* s2, size_t len) +{ + int diff; + while (len > 0 && *s1 != '\0' && *s2 != '\0') { + diff = ((int)NumPyOS_ascii_tolower(*s1)) - + ((int)NumPyOS_ascii_tolower(*s2)); + if (diff != 0) return diff; + ++s1; + ++s2; + --len; + } + if (len > 0) + return ((int)*s1) - ((int)*s2); + return 0; +} + + +/* NumPyOS_ascii_strtod: + * + * Work around bugs in PyOS_ascii_strtod + */ +static double +NumPyOS_ascii_strtod(const char *s, char** endptr) +{ + struct lconv *locale_data = localeconv(); + const char *decimal_point = locale_data->decimal_point; + size_t decimal_point_len = strlen(decimal_point); + + char buffer[FLOAT_FORMATBUFLEN+1]; + const char *p; + char *q; + size_t n; + double result; + + while (NumPyOS_ascii_isspace(*s)) { + ++s; + } + + /* ##1 + * + * Recognize POSIX inf/nan representations on all platforms. 
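The POSIX `inf`/`nan` spellings are matched case-insensitively with `NumPyOS_ascii_strncasecmp`, a C-locale reimplementation of `strncasecmp`. A standalone sketch of that helper:

```c
#include <stddef.h>

/* tolower restricted to ASCII, independent of the current locale */
static char ascii_tolower(char c)
{
    return (c >= 'A' && c <= 'Z') ? (char)(c + ('a' - 'A')) : c;
}

/* strncasecmp as it behaves under the C locale */
static int ascii_strncasecmp(const char *s1, const char *s2, size_t len)
{
    while (len > 0 && *s1 != '\0' && *s2 != '\0') {
        int diff = (int)ascii_tolower(*s1) - (int)ascii_tolower(*s2);
        if (diff != 0)
            return diff;
        ++s1;
        ++s2;
        --len;
    }
    return (len > 0) ? ((int)*s1 - (int)*s2) : 0;
}
```

Avoiding the libc `strncasecmp` keeps the comparison stable even when a Turkish-style locale remaps the case of 'i'/'I'.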
+ */ + p = s; + result = 1.0; + if (*p == '-') { + result = -1.0; + ++p; + } + else if (*p == '+') { + ++p; + } + if (NumPyOS_ascii_strncasecmp(p, "nan", 3) == 0) { + p += 3; + if (*p == '(') { + ++p; + while (NumPyOS_ascii_isalnum(*p) || *p == '_') ++p; + if (*p == ')') ++p; + } + if (endptr != NULL) *endptr = (char*)p; + return NumPyOS_NAN; + } + else if (NumPyOS_ascii_strncasecmp(p, "inf", 3) == 0) { + p += 3; + if (NumPyOS_ascii_strncasecmp(p, "inity", 5) == 0) + p += 5; + if (endptr != NULL) *endptr = (char*)p; + return result*NumPyOS_PINF; + } + /* End of ##1 */ + + /* ## 2 + * + * At least Python versions <= 2.5.2 and <= 2.6.1 + * + * Fails to do best-efforts parsing of strings of the form "1<DP>234" + * where <DP> is the decimal point under the foreign locale. + */ + if (decimal_point[0] != '.' || decimal_point[1] != 0) { + p = s; + if (*p == '+' || *p == '-') + ++p; + while (*p >= '0' && *p <= '9') + ++p; + if (strncmp(p, decimal_point, decimal_point_len) == 0) { + n = (size_t)(p - s); + if (n > FLOAT_FORMATBUFLEN) + n = FLOAT_FORMATBUFLEN; + memcpy(buffer, s, n); + buffer[n] = '\0'; + result = PyOS_ascii_strtod(buffer, &q); + if (endptr != NULL) { + *endptr = (char*)(s + (q - buffer)); + } + return result; + } + } + /* End of ##2 */ + + return PyOS_ascii_strtod(s, endptr); +} + + +/* + * NumPyOS_ascii_ftolf: + * * fp: FILE pointer + * * value: Place to store the value read + * + * Similar to PyOS_ascii_strtod, except that it reads input from a file. + * + * Similarly to fscanf, this function always consumes leading whitespace, + * and any text that could be the leading part in valid input. + * + * Return value: similar to fscanf. + * * 0 if no number read, + * * 1 if a number read, + * * EOF if end-of-file met before reading anything. 
+ */ +static int +NumPyOS_ascii_ftolf(FILE *fp, double *value) +{ + char buffer[FLOAT_FORMATBUFLEN+1]; + char *endp; + char *p; + int c; + int ok; + + /* + * Pass on to PyOS_ascii_strtod the leftmost matching part in regexp + * + * \s*[+-]? ( [0-9]*\.[0-9]+([eE][+-]?[0-9]+) + * | nan ( \([:alphanum:_]*\) )? + * | inf(inity)? + * ) + * + * case-insensitively. + * + * The "do { ... } while (0)" wrapping in macros ensures that they behave + * properly eg. in "if ... else" structures. + */ + +#define END_MATCH() \ + goto buffer_filled + +#define NEXT_CHAR() \ + do { \ + if (c == EOF || endp >= buffer + FLOAT_FORMATBUFLEN) \ + END_MATCH(); \ + *endp++ = (char)c; \ + c = getc(fp); \ + } while (0) + +#define MATCH_ALPHA_STRING_NOCASE(string) \ + do { \ + for (p=(string); *p!='\0' && (c==*p || c+('a'-'A')==*p); ++p) \ + NEXT_CHAR(); \ + if (*p != '\0') END_MATCH(); \ + } while (0) + +#define MATCH_ONE_OR_NONE(condition) \ + do { if (condition) NEXT_CHAR(); } while (0) + +#define MATCH_ONE_OR_MORE(condition) \ + do { \ + ok = 0; \ + while (condition) { NEXT_CHAR(); ok = 1; } \ + if (!ok) END_MATCH(); \ + } while (0) + +#define MATCH_ZERO_OR_MORE(condition) \ + while (condition) { NEXT_CHAR(); } + + /* 1. emulate fscanf EOF handling */ + c = getc(fp); + if (c == EOF) + return EOF; + + /* 2. consume leading whitespace unconditionally */ + while (NumPyOS_ascii_isspace(c)) { + c = getc(fp); + } + + /* 3. 
start reading matching input to buffer */ + endp = buffer; + + /* 4.1 sign (optional) */ + MATCH_ONE_OR_NONE(c == '+' || c == '-'); + + /* 4.2 nan, inf, infinity; [case-insensitive] */ + if (c == 'n' || c == 'N') { + NEXT_CHAR(); + MATCH_ALPHA_STRING_NOCASE("an"); + + /* accept nan([:alphanum:_]*), similarly to strtod */ + if (c == '(') { + NEXT_CHAR(); + MATCH_ZERO_OR_MORE(NumPyOS_ascii_isalnum(c) || c == '_'); + if (c == ')') NEXT_CHAR(); + } + END_MATCH(); + } + else if (c == 'i' || c == 'I') { + NEXT_CHAR(); + MATCH_ALPHA_STRING_NOCASE("nfinity"); + END_MATCH(); + } + + /* 4.3 mantissa */ + MATCH_ZERO_OR_MORE(NumPyOS_ascii_isdigit(c)); + + if (c == '.') { + NEXT_CHAR(); + MATCH_ONE_OR_MORE(NumPyOS_ascii_isdigit(c)); + } + + /* 4.4 exponent */ + if (c == 'e' || c == 'E') { + NEXT_CHAR(); + MATCH_ONE_OR_NONE(c == '+' || c == '-'); + MATCH_ONE_OR_MORE(NumPyOS_ascii_isdigit(c)); + } + + END_MATCH(); + +buffer_filled: + + ungetc(c, fp); + *endp = '\0'; + + /* 5. try to convert buffer. */ + + *value = NumPyOS_ascii_strtod(buffer, &p); + + return (buffer == p) ? 
0 : 1; /* if something was read */ +} + +#undef END_MATCH +#undef NEXT_CHAR +#undef MATCH_ALPHA_STRING_NOCASE +#undef MATCH_ONE_OR_NONE +#undef MATCH_ONE_OR_MORE +#undef MATCH_ZERO_OR_MORE diff --git a/numpy/core/src/scalarmathmodule.c.src b/numpy/core/src/scalarmathmodule.c.src index dd86678a3..3262999a0 100644 --- a/numpy/core/src/scalarmathmodule.c.src +++ b/numpy/core/src/scalarmathmodule.c.src @@ -636,8 +636,11 @@ static PyObject * &errobj) < 0) return NULL; first = 1; - if (PyUFunc_handlefperr(errmask, errobj, retstatus, &first)) + if (PyUFunc_handlefperr(errmask, errobj, retstatus, &first)) { + Py_XDECREF(errobj); return NULL; + } + Py_XDECREF(errobj); } #endif @@ -736,8 +739,11 @@ static PyObject * &errobj) < 0) return NULL; first = 1; - if (PyUFunc_handlefperr(errmask, errobj, retstatus, &first)) + if (PyUFunc_handlefperr(errmask, errobj, retstatus, &first)) { + Py_XDECREF(errobj); return NULL; + } + Py_XDECREF(errobj); } #if @isint@ diff --git a/numpy/core/src/scalartypes.inc.src b/numpy/core/src/scalartypes.inc.src index b65f7adbc..399047ee8 100644 --- a/numpy/core/src/scalartypes.inc.src +++ b/numpy/core/src/scalartypes.inc.src @@ -5,77 +5,82 @@ #endif #include "numpy/arrayscalars.h" +#include "config.h" +#include "numpyos.c" + static PyBoolScalarObject _PyArrayScalar_BoolValues[2] = { {PyObject_HEAD_INIT(&PyBoolArrType_Type) 0}, {PyObject_HEAD_INIT(&PyBoolArrType_Type) 1}, }; -/* Inheritance established later when tp_bases is set (or tp_base for - single inheritance) */ +/* + * Inheritance is established later when tp_bases is set (or tp_base for + * single inheritance) + */ /**begin repeat - -#name=number, integer, signedinteger, unsignedinteger, inexact, floating, complexfloating, flexible, character# -#NAME=Number, Integer, SignedInteger, UnsignedInteger, Inexact, Floating, ComplexFloating, Flexible, Character# -*/ - + * #name = number, integer, signedinteger, unsignedinteger, inexact, + * floating, complexfloating, flexible, character# + * #NAME = 
Number, Integer, SignedInteger, UnsignedInteger, Inexact, + * Floating, ComplexFloating, Flexible, Character# + */ static PyTypeObject Py@NAME@ArrType_Type = { PyObject_HEAD_INIT(NULL) - 0, /*ob_size*/ - "numpy.@name@", /*tp_name*/ - sizeof(PyObject), /*tp_basicsize*/ - 0, /* tp_itemsize */ + 0, /* ob_size*/ + "numpy.@name@", /* tp_name*/ + sizeof(PyObject), /* tp_basicsize*/ + 0, /* tp_itemsize */ /* methods */ - 0, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - 0, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ + 0, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + 0, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ 
+ 0, /* tp_init */ + 0, /* tp_alloc */ + 0, /* tp_new */ + 0, /* tp_free */ + 0, /* tp_is_gc */ + 0, /* tp_bases */ + 0, /* tp_mro */ + 0, /* tp_cache */ + 0, /* tp_subclasses */ + 0, /* tp_weaklist */ + 0, /* tp_del */ #ifdef COUNT_ALLOCS - /* these must be last and never explicitly initialized */ - 0, /* tp_allocs */ - 0, /* tp_frees */ - 0, /* tp_maxalloc */ - 0, /* tp_prev */ - 0, /* *tp_next */ + /* these must be last and never explicitly initialized */ + 0, /* tp_allocs */ + 0, /* tp_frees */ + 0, /* tp_maxalloc */ + 0, /* tp_prev */ + 0, /* *tp_next */ #endif }; /**end repeat**/ @@ -115,13 +120,18 @@ scalar_value(PyObject *scalar, PyArray_Descr *descr) CASE(CLONGDOUBLE, CLongDouble); CASE(OBJECT, Object); #undef CASE - case NPY_STRING: return (void *)PyString_AS_STRING(scalar); - case NPY_UNICODE: return (void *)PyUnicode_AS_DATA(scalar); - case NPY_VOID: return ((PyVoidScalarObject *)scalar)->obval; + case NPY_STRING: + return (void *)PyString_AS_STRING(scalar); + case NPY_UNICODE: + return (void *)PyUnicode_AS_DATA(scalar); + case NPY_VOID: + return ((PyVoidScalarObject *)scalar)->obval; } - /* Must be a user-defined type --- check to see which - scalar it inherits from. */ + /* + * Must be a user-defined type --- check to see which + * scalar it inherits from. 
+ */ #define _CHK(cls) (PyObject_IsInstance(scalar, \ (PyObject *)&Py##cls##ArrType_Type)) @@ -137,7 +147,8 @@ scalar_value(PyObject *scalar, PyArray_Descr *descr) _IFCASE(Long); _IFCASE(LongLong); } - else { /* Unsigned Integer */ + else { + /* Unsigned Integer */ _IFCASE(UByte); _IFCASE(UShort); _IFCASE(UInt); @@ -145,49 +156,64 @@ scalar_value(PyObject *scalar, PyArray_Descr *descr) _IFCASE(ULongLong); } } - else { /* Inexact */ + else { + /* Inexact */ if _CHK(Floating) { _IFCASE(Float); _IFCASE(Double); _IFCASE(LongDouble); } - else { /*ComplexFloating */ + else { + /*ComplexFloating */ _IFCASE(CFloat); _IFCASE(CDouble); _IFCASE(CLongDouble); } } } - else if _CHK(Bool) return _OBJ(Bool); - else if _CHK(Flexible) { - if _CHK(String) return (void *)PyString_AS_STRING(scalar); - if _CHK(Unicode) return (void *)PyUnicode_AS_DATA(scalar); - if _CHK(Void) return ((PyVoidScalarObject *)scalar)->obval; + else if (_CHK(Bool)) { + return _OBJ(Bool); + } + else if (_CHK(Flexible)) { + if (_CHK(String)) { + return (void *)PyString_AS_STRING(scalar); + } + if (_CHK(Unicode)) { + return (void *)PyUnicode_AS_DATA(scalar); + } + if (_CHK(Void)) { + return ((PyVoidScalarObject *)scalar)->obval; + } + } + else { + _IFCASE(Object); } - else _IFCASE(Object); - /* Use the alignment flag to figure out where the data begins - after a PyObject_HEAD + /* + * Use the alignment flag to figure out where the data begins + * after a PyObject_HEAD */ memloc = (intp)scalar; memloc += sizeof(PyObject); - /* now round-up to the nearest alignment value - */ + /* now round-up to the nearest alignment value */ align = descr->alignment; - if (align > 1) memloc = ((memloc + align - 1)/align)*align; + if (align > 1) { + memloc = ((memloc + align - 1)/align)*align; + } return (void *)memloc; #undef _IFCASE #undef _OBJ #undef _CHK } -/* no error checking is performed -- ctypeptr must be same type as scalar */ -/* in case of flexible type, the data is not copied - into ctypeptr which is expected to be 
a pointer to pointer */ /*NUMPY_API - Convert to c-type -*/ + * Convert to c-type + * + * no error checking is performed -- ctypeptr must be same type as scalar + * in case of flexible type, the data is not copied + * into ctypeptr which is expected to be a pointer to pointer + */ static void PyArray_ScalarAsCtype(PyObject *scalar, void *ctypeptr) { @@ -199,24 +225,23 @@ PyArray_ScalarAsCtype(PyObject *scalar, void *ctypeptr) if (PyTypeNum_ISEXTENDED(typecode->type_num)) { void **ct = (void **)ctypeptr; *ct = newptr; - } else { + } + else { memcpy(ctypeptr, newptr, typecode->elsize); } Py_DECREF(typecode); return; } -/* The output buffer must be large-enough to receive the value */ -/* Even for flexible types which is different from ScalarAsCtype - where only a reference for flexible types is returned -*/ - -/* This may not work right on narrow builds for NumPy unicode scalars. - */ - /*NUMPY_API - Cast Scalar to c-type -*/ + * Cast Scalar to c-type + * + * The output buffer must be large-enough to receive the value + * Even for flexible types which is different from ScalarAsCtype + * where only a reference for flexible types is returned + * + * This may not work right on narrow builds for NumPy unicode scalars. 
+ */ static int PyArray_CastScalarToCtype(PyObject *scalar, void *ctypeptr, PyArray_Descr *outcode) @@ -226,7 +251,9 @@ PyArray_CastScalarToCtype(PyObject *scalar, void *ctypeptr, descr = PyArray_DescrFromScalar(scalar); castfunc = PyArray_GetCastFunc(descr, outcode->type_num); - if (castfunc == NULL) return -1; + if (castfunc == NULL) { + return -1; + } if (PyTypeNum_ISEXTENDED(descr->type_num) || PyTypeNum_ISEXTENDED(outcode->type_num)) { PyArrayObject *ain, *aout; @@ -242,7 +269,10 @@ PyArray_CastScalarToCtype(PyObject *scalar, void *ctypeptr, 0, NULL, NULL, ctypeptr, CARRAY, NULL); - if (aout == NULL) {Py_DECREF(ain); return -1;} + if (aout == NULL) { + Py_DECREF(ain); + return -1; + } castfunc(ain->data, aout->data, 1, ain, aout); Py_DECREF(ain); Py_DECREF(aout); @@ -255,8 +285,8 @@ PyArray_CastScalarToCtype(PyObject *scalar, void *ctypeptr, } /*NUMPY_API - Cast Scalar to c-type -*/ + * Cast Scalar to c-type + */ static int PyArray_CastScalarDirect(PyObject *scalar, PyArray_Descr *indescr, void *ctypeptr, int outtype) @@ -264,22 +294,24 @@ PyArray_CastScalarDirect(PyObject *scalar, PyArray_Descr *indescr, PyArray_VectorUnaryFunc* castfunc; void *ptr; castfunc = PyArray_GetCastFunc(indescr, outtype); - if (castfunc == NULL) return -1; + if (castfunc == NULL) { + return -1; + } ptr = scalar_value(scalar, indescr); castfunc(ptr, ctypeptr, 1, NULL, NULL); return 0; } -/* 0-dim array from array-scalar object */ -/* always contains a copy of the data - unless outcode is NULL, it is of void type and the referrer does - not own it either. -*/ - -/* steals reference to outcode */ /*NUMPY_API - Get 0-dim array from scalar -*/ + * Get 0-dim array from scalar + * + * 0-dim array from array-scalar object + * always contains a copy of the data + * unless outcode is NULL, it is of void type and the referrer does + * not own it either. 
+ * + * steals reference to outcode + */ static PyObject * PyArray_FromScalar(PyObject *scalar, PyArray_Descr *outcode) { @@ -307,8 +339,10 @@ PyArray_FromScalar(PyObject *scalar, PyArray_Descr *outcode) typecode, 0, NULL, NULL, NULL, 0, NULL); - if (r==NULL) {Py_XDECREF(outcode); return NULL;} - + if (r==NULL) { + Py_XDECREF(outcode); + return NULL; + } if (PyDataType_FLAGCHK(typecode, NPY_USE_SETITEM)) { if (typecode->f->setitem(scalar, PyArray_DATA(r), r) < 0) { Py_XDECREF(outcode); Py_DECREF(r); @@ -325,7 +359,8 @@ PyArray_FromScalar(PyObject *scalar, PyArray_Descr *outcode) (PyArray_UCS4 *)PyArray_DATA(r), PyUnicode_GET_SIZE(scalar), PyArray_ITEMSIZE(r) >> 2); - } else + } + else #endif { memcpy(PyArray_DATA(r), memptr, PyArray_ITEMSIZE(r)); @@ -335,8 +370,9 @@ finish: - if (outcode == NULL) return r; - + if (outcode == NULL) { + return r; + } if (outcode->type_num == typecode->type_num) { if (!PyTypeNum_ISEXTENDED(typecode->type_num) || (outcode->elsize == typecode->elsize)) @@ -350,10 +386,10 @@ } /*NUMPY_API - Get an Array Scalar From a Python Object - Returns NULL if unsuccessful but error is only - set if another error occurred. Currently only Numeric-like - object supported. + * Get an Array Scalar From a Python Object + * + * Returns NULL if unsuccessful; an exception is set only if another error occurred. + * Currently only Numeric-like objects are supported. 
*/ static PyObject * PyArray_ScalarFromObject(PyObject *object) @@ -364,17 +400,23 @@ PyArray_ScalarFromObject(PyObject *object) } if (PyInt_Check(object)) { ret = PyArrayScalar_New(Long); - if (ret == NULL) return NULL; + if (ret == NULL) { + return NULL; + } PyArrayScalar_VAL(ret, Long) = PyInt_AS_LONG(object); } else if (PyFloat_Check(object)) { ret = PyArrayScalar_New(Double); - if (ret == NULL) return NULL; + if (ret == NULL) { + return NULL; + } PyArrayScalar_VAL(ret, Double) = PyFloat_AS_DOUBLE(object); } else if (PyComplex_Check(object)) { ret = PyArrayScalar_New(CDouble); - if (ret == NULL) return NULL; + if (ret == NULL) { + return NULL; + } PyArrayScalar_VAL(ret, CDouble).real = ((PyComplexObject *)object)->cval.real; PyArrayScalar_VAL(ret, CDouble).imag = @@ -388,7 +430,9 @@ PyArray_ScalarFromObject(PyObject *object) return NULL; } ret = PyArrayScalar_New(LongLong); - if (ret == NULL) return NULL; + if (ret == NULL) { + return NULL; + } PyArrayScalar_VAL(ret, LongLong) = val; } else if (PyBool_Check(object)) { @@ -407,14 +451,16 @@ static PyObject * gentype_alloc(PyTypeObject *type, Py_ssize_t nitems) { PyObject *obj; - const size_t size = _PyObject_VAR_SIZE(type, nitems+1); + const size_t size = _PyObject_VAR_SIZE(type, nitems + 1); obj = (PyObject *)_pya_malloc(size); memset(obj, 0, size); - if (type->tp_itemsize == 0) + if (type->tp_itemsize == 0) { PyObject_INIT(obj, type); - else + } + else { (void) PyObject_INIT_VAR((PyVarObject *)obj, type, nitems); + } return obj; } @@ -433,8 +479,7 @@ gentype_power(PyObject *m1, PyObject *m2, PyObject *NPY_UNUSED(m3)) if (!PyArray_IsScalar(m1,Generic)) { if (PyArray_Check(m1)) { - ret = m1->ob_type->tp_as_number->nb_power(m1,m2, - Py_None); + ret = m1->ob_type->tp_as_number->nb_power(m1,m2, Py_None); } else { if (!PyArray_IsScalar(m2,Generic)) { @@ -442,17 +487,17 @@ gentype_power(PyObject *m1, PyObject *m2, PyObject *NPY_UNUSED(m3)) return NULL; } arr = PyArray_FromScalar(m2, NULL); - if (arr == NULL) return 
NULL; - ret = arr->ob_type->tp_as_number->nb_power(m1, arr, - Py_None); + if (arr == NULL) { + return NULL; + } + ret = arr->ob_type->tp_as_number->nb_power(m1, arr, Py_None); Py_DECREF(arr); } return ret; } if (!PyArray_IsScalar(m2, Generic)) { if (PyArray_Check(m2)) { - ret = m2->ob_type->tp_as_number->nb_power(m1,m2, - Py_None); + ret = m2->ob_type->tp_as_number->nb_power(m1,m2, Py_None); } else { if (!PyArray_IsScalar(m1, Generic)) { @@ -460,18 +505,21 @@ gentype_power(PyObject *m1, PyObject *m2, PyObject *NPY_UNUSED(m3)) return NULL; } arr = PyArray_FromScalar(m1, NULL); - if (arr == NULL) return NULL; - ret = arr->ob_type->tp_as_number->nb_power(arr, m2, - Py_None); + if (arr == NULL) { + return NULL; + } + ret = arr->ob_type->tp_as_number->nb_power(arr, m2, Py_None); Py_DECREF(arr); } return ret; } - arr=arg2=NULL; + arr = arg2 = NULL; arr = PyArray_FromScalar(m1, NULL); arg2 = PyArray_FromScalar(m2, NULL); if (arr == NULL || arg2 == NULL) { - Py_XDECREF(arr); Py_XDECREF(arg2); return NULL; + Py_XDECREF(arr); + Py_XDECREF(arg2); + return NULL; } ret = arr->ob_type->tp_as_number->nb_power(arr, arg2, Py_None); Py_DECREF(arr); @@ -486,26 +534,35 @@ gentype_generic_method(PyObject *self, PyObject *args, PyObject *kwds, PyObject *arr, *meth, *ret; arr = PyArray_FromScalar(self, NULL); - if (arr == NULL) return NULL; + if (arr == NULL) { + return NULL; + } meth = PyObject_GetAttrString(arr, str); - if (meth == NULL) {Py_DECREF(arr); return NULL;} - if (kwds == NULL) + if (meth == NULL) { + Py_DECREF(arr); + return NULL; + } + if (kwds == NULL) { ret = PyObject_CallObject(meth, args); - else + } + else { ret = PyObject_Call(meth, args, kwds); + } Py_DECREF(meth); Py_DECREF(arr); - if (ret && PyArray_Check(ret)) + if (ret && PyArray_Check(ret)) { return PyArray_Return((PyArrayObject *)ret); - else + } + else { return ret; + } } /**begin repeat * - * #name=add, subtract, divide, remainder, divmod, lshift, rshift, and, xor, or, floor_divide, true_divide# + * #name = 
add, subtract, divide, remainder, divmod, lshift, rshift, + * and, xor, or, floor_divide, true_divide# */ - static PyObject * gentype_@name@(PyObject *m1, PyObject *m2) { @@ -518,28 +575,30 @@ gentype_@name@(PyObject *m1, PyObject *m2) static PyObject * gentype_multiply(PyObject *m1, PyObject *m2) { - PyObject *ret=NULL; + PyObject *ret = NULL; long repeat; if (!PyArray_IsScalar(m1, Generic) && ((m1->ob_type->tp_as_number == NULL) || (m1->ob_type->tp_as_number->nb_multiply == NULL))) { - /* Try to convert m2 to an int and try sequence - repeat */ + /* Try to convert m2 to an int and try sequence repeat */ repeat = PyInt_AsLong(m2); - if (repeat == -1 && PyErr_Occurred()) return NULL; + if (repeat == -1 && PyErr_Occurred()) { + return NULL; + } ret = PySequence_Repeat(m1, (int) repeat); } else if (!PyArray_IsScalar(m2, Generic) && ((m2->ob_type->tp_as_number == NULL) || (m2->ob_type->tp_as_number->nb_multiply == NULL))) { - /* Try to convert m1 to an int and try sequence - repeat */ + /* Try to convert m1 to an int and try sequence repeat */ repeat = PyInt_AsLong(m1); - if (repeat == -1 && PyErr_Occurred()) return NULL; + if (repeat == -1 && PyErr_Occurred()) { + return NULL; + } ret = PySequence_Repeat(m2, (int) repeat); } - if (ret==NULL) { + if (ret == NULL) { PyErr_Clear(); /* no effect if not set */ ret = PyArray_Type.tp_as_number->nb_multiply(m1, m2); } @@ -547,17 +606,18 @@ gentype_multiply(PyObject *m1, PyObject *m2) } /**begin repeat - -#name=positive, negative, absolute, invert, int, long, float, oct, hex# -*/ - + * + * #name=positive, negative, absolute, invert, int, long, float, oct, hex# + */ static PyObject * gentype_@name@(PyObject *m1) { PyObject *arr, *ret; arr = PyArray_FromScalar(m1, NULL); - if (arr == NULL) return NULL; + if (arr == NULL) { + return NULL; + } ret = arr->ob_type->tp_as_number->nb_@name@(arr); Py_DECREF(arr); return ret; @@ -571,7 +631,9 @@ gentype_nonzero_number(PyObject *m1) int ret; arr = PyArray_FromScalar(m1, NULL); - if (arr 
== NULL) return -1; + if (arr == NULL) { + return -1; + } ret = arr->ob_type->tp_as_number->nb_nonzero(arr); Py_DECREF(arr); return ret; @@ -584,7 +646,9 @@ gentype_str(PyObject *self) PyObject *ret; arr = (PyArrayObject *)PyArray_FromScalar(self, NULL); - if (arr==NULL) return NULL; + if (arr == NULL) { + return NULL; + } ret = PyObject_Str((PyObject *)arr); Py_DECREF(arr); return ret; @@ -598,29 +662,44 @@ gentype_repr(PyObject *self) PyObject *ret; arr = (PyArrayObject *)PyArray_FromScalar(self, NULL); - if (arr==NULL) return NULL; + if (arr == NULL) { + return NULL; + } ret = PyObject_Str((PyObject *)arr); Py_DECREF(arr); return ret; } +#ifdef FORCE_NO_LONG_DOUBLE_FORMATTING +#undef NPY_LONGDOUBLE_FMT +#define NPY_LONGDOUBLE_FMT NPY_DOUBLE_FMT +#endif + /**begin repeat - * #name=float, double, longdouble# - * #NAME=FLOAT, DOUBLE, LONGDOUBLE# + * #name = float, double, longdouble# + * #NAME = FLOAT, DOUBLE, LONGDOUBLE# + * #type = f, d, l# */ -#define FMT "%.*" NPY_@NAME@_FMT -#define CFMT1 "%.*" NPY_@NAME@_FMT "j" -#define CFMT2 "(%.*" NPY_@NAME@_FMT "%+.*" NPY_@NAME@_FMT "j)" +#define _FMT1 "%%.%i" NPY_@NAME@_FMT +#define _FMT2 "%%+.%i" NPY_@NAME@_FMT static void format_@name@(char *buf, size_t buflen, @name@ val, unsigned int prec) { - int cnt, i; + /* XXX: Find a correct size here for format string */ + char format[64], *res; + int i, cnt; - cnt = PyOS_snprintf(buf, buflen, FMT, prec, val); + PyOS_snprintf(format, sizeof(format), _FMT1, prec); + res = NumPyOS_ascii_format@type@(buf, buflen, format, val, 0); + if (res == NULL) { + fprintf(stderr, "Error while formatting\n"); + return; + } /* If nothing but digits after sign, append ".0" */ + cnt = strlen(buf); for (i = (val < 0) ? 
1 : 0; i < cnt; ++i) { if (!isdigit(Py_CHARMASK(buf[i]))) { break; @@ -634,33 +713,56 @@ format_@name@(char *buf, size_t buflen, @name@ val, unsigned int prec) static void format_c@name@(char *buf, size_t buflen, c@name@ val, unsigned int prec) { + /* XXX: Find a correct size here for format string */ + char format[64]; + char *res; if (val.real == 0.0) { - PyOS_snprintf(buf, buflen, CFMT1, prec, val.imag); + PyOS_snprintf(format, sizeof(format), _FMT1, prec); + res = NumPyOS_ascii_format@type@(buf, buflen-1, format, val.imag, 0); + if (res == NULL) { + fprintf(stderr, "Error while formatting\n"); + return; + } + strncat(buf, "j", 1); } else { - PyOS_snprintf(buf, buflen, CFMT2, prec, val.real, prec, val.imag); + char re[64], im[64]; + PyOS_snprintf(format, sizeof(format), _FMT1, prec); + res = NumPyOS_ascii_format@type@(re, sizeof(re), format, val.real, 0); + if (res == NULL) { + fprintf(stderr, "Error while formatting\n"); + return; + } + + PyOS_snprintf(format, sizeof(format), _FMT2, prec); + res = NumPyOS_ascii_format@type@(im, sizeof(im), format, val.imag, 0); + if (res == NULL) { + fprintf(stderr, "Error while formatting\n"); + return; + } + PyOS_snprintf(buf, buflen, "(%s%sj)", re, im); } } -#undef FMT -#undef CFMT1 -#undef CFMT2 +#undef _FMT1 +#undef _FMT2 /**end repeat**/ -/* over-ride repr and str of array-scalar strings and unicode to - remove NULL bytes and then call the corresponding functions - of string and unicode. +/* + * over-ride repr and str of array-scalar strings and unicode to + * remove NULL bytes and then call the corresponding functions + * of string and unicode. 
*/ /**begin repeat -#name=string*2,unicode*2# -#form=(repr,str)*2# -#Name=String*2,Unicode*2# -#NAME=STRING*2,UNICODE*2# -#extra=AndSize*2,,# -#type=char*2, Py_UNICODE*2# -*/ + * #name = string*2,unicode*2# + * #form = (repr,str)*2# + * #Name = String*2,Unicode*2# + * #NAME = STRING*2,UNICODE*2# + * #extra = AndSize*2,,# + * #type = char*2, Py_UNICODE*2# + */ static PyObject * @name@type_@form@(PyObject *self) { @@ -672,9 +774,13 @@ static PyObject * ip = dptr = Py@Name@_AS_@NAME@(self); len = Py@Name@_GET_SIZE(self); dptr += len-1; - while(len > 0 && *dptr-- == 0) len--; + while(len > 0 && *dptr-- == 0) { + len--; + } new = Py@Name@_From@Name@@extra@(ip, len); - if (new == NULL) return PyString_FromString(""); + if (new == NULL) { + return PyString_FromString(""); + } ret = Py@Name@_Type.tp_@form@(new); Py_DECREF(new); return ret; @@ -699,10 +805,11 @@ static PyObject * * * These functions will return NULL if PyString creation fails. */ + /**begin repeat - * #name=float, double, longdouble# - * #Name=Float, Double, LongDouble# - * #NAME=FLOAT, DOUBLE, LONGDOUBLE# + * #name = float, double, longdouble# + * #Name = Float, Double, LongDouble# + * #NAME = FLOAT, DOUBLE, LONGDOUBLE# */ /**begin repeat1 * #kind = str, repr# @@ -736,6 +843,46 @@ c@name@type_@kind@(PyObject *self) /**end repeat1**/ /**end repeat**/ +/* + * float type print (control print a, where a is a float type instance) + */ +/**begin repeat + * #name = float, double, longdouble# + * #Name = Float, Double, LongDouble# + * #NAME = FLOAT, DOUBLE, LONGDOUBLE# + */ + +static int +@name@type_print(PyObject *v, FILE *fp, int flags) +{ + char buf[100]; + @name@ val = ((Py@Name@ScalarObject *)v)->obval; + + format_@name@(buf, sizeof(buf), val, + (flags & Py_PRINT_RAW) ? 
@NAME@PREC_STR : @NAME@PREC_REPR); + Py_BEGIN_ALLOW_THREADS + fputs(buf, fp); + Py_END_ALLOW_THREADS + return 0; +} + +static int +c@name@type_print(PyObject *v, FILE *fp, int flags) +{ + /* Size of buf: twice sizeof(real) + 2 (for the parenthesis) */ + char buf[202]; + c@name@ val = ((PyC@Name@ScalarObject *)v)->obval; + + format_c@name@(buf, sizeof(buf), val, + (flags & Py_PRINT_RAW) ? @NAME@PREC_STR : @NAME@PREC_REPR); + Py_BEGIN_ALLOW_THREADS + fputs(buf, fp); + Py_END_ALLOW_THREADS + return 0; +} + +/**end repeat**/ + /* * Could improve this with a PyLong_FromLongDouble(longdouble ldval) @@ -743,13 +890,13 @@ c@name@type_@kind@(PyObject *self) */ /**begin repeat - -#name=(int, long, hex, oct, float)*2# -#KIND=(Long*4, Float)*2# -#char=,,,,,c*5# -#CHAR=,,,,,C*5# -#POST=,,,,,.real*5# -*/ + * + * #name = (int, long, hex, oct, float)*2# + * #KIND = (Long*4, Float)*2# + * #char = ,,,,,c*5# + * #CHAR = ,,,,,C*5# + * #POST = ,,,,,.real*5# + */ static PyObject * @char@longdoubletype_@name@(PyObject *self) { @@ -766,46 +913,46 @@ static PyObject * static PyNumberMethods gentype_as_number = { - (binaryfunc)gentype_add, /*nb_add*/ - (binaryfunc)gentype_subtract, /*nb_subtract*/ - (binaryfunc)gentype_multiply, /*nb_multiply*/ - (binaryfunc)gentype_divide, /*nb_divide*/ - (binaryfunc)gentype_remainder, /*nb_remainder*/ - (binaryfunc)gentype_divmod, /*nb_divmod*/ - (ternaryfunc)gentype_power, /*nb_power*/ + (binaryfunc)gentype_add, /*nb_add*/ + (binaryfunc)gentype_subtract, /*nb_subtract*/ + (binaryfunc)gentype_multiply, /*nb_multiply*/ + (binaryfunc)gentype_divide, /*nb_divide*/ + (binaryfunc)gentype_remainder, /*nb_remainder*/ + (binaryfunc)gentype_divmod, /*nb_divmod*/ + (ternaryfunc)gentype_power, /*nb_power*/ (unaryfunc)gentype_negative, - (unaryfunc)gentype_positive, /*nb_pos*/ - (unaryfunc)gentype_absolute, /*(unaryfunc)gentype_abs,*/ - (inquiry)gentype_nonzero_number, /*nb_nonzero*/ - (unaryfunc)gentype_invert, /*nb_invert*/ - (binaryfunc)gentype_lshift, 
/*nb_lshift*/ - (binaryfunc)gentype_rshift, /*nb_rshift*/ - (binaryfunc)gentype_and, /*nb_and*/ - (binaryfunc)gentype_xor, /*nb_xor*/ - (binaryfunc)gentype_or, /*nb_or*/ - 0, /*nb_coerce*/ - (unaryfunc)gentype_int, /*nb_int*/ - (unaryfunc)gentype_long, /*nb_long*/ - (unaryfunc)gentype_float, /*nb_float*/ - (unaryfunc)gentype_oct, /*nb_oct*/ - (unaryfunc)gentype_hex, /*nb_hex*/ - 0, /*inplace_add*/ - 0, /*inplace_subtract*/ - 0, /*inplace_multiply*/ - 0, /*inplace_divide*/ - 0, /*inplace_remainder*/ - 0, /*inplace_power*/ - 0, /*inplace_lshift*/ - 0, /*inplace_rshift*/ - 0, /*inplace_and*/ - 0, /*inplace_xor*/ - 0, /*inplace_or*/ - (binaryfunc)gentype_floor_divide, /*nb_floor_divide*/ - (binaryfunc)gentype_true_divide, /*nb_true_divide*/ - 0, /*nb_inplace_floor_divide*/ - 0, /*nb_inplace_true_divide*/ + (unaryfunc)gentype_positive, /*nb_pos*/ + (unaryfunc)gentype_absolute, /*(unaryfunc)gentype_abs,*/ + (inquiry)gentype_nonzero_number, /*nb_nonzero*/ + (unaryfunc)gentype_invert, /*nb_invert*/ + (binaryfunc)gentype_lshift, /*nb_lshift*/ + (binaryfunc)gentype_rshift, /*nb_rshift*/ + (binaryfunc)gentype_and, /*nb_and*/ + (binaryfunc)gentype_xor, /*nb_xor*/ + (binaryfunc)gentype_or, /*nb_or*/ + 0, /*nb_coerce*/ + (unaryfunc)gentype_int, /*nb_int*/ + (unaryfunc)gentype_long, /*nb_long*/ + (unaryfunc)gentype_float, /*nb_float*/ + (unaryfunc)gentype_oct, /*nb_oct*/ + (unaryfunc)gentype_hex, /*nb_hex*/ + 0, /*inplace_add*/ + 0, /*inplace_subtract*/ + 0, /*inplace_multiply*/ + 0, /*inplace_divide*/ + 0, /*inplace_remainder*/ + 0, /*inplace_power*/ + 0, /*inplace_lshift*/ + 0, /*inplace_rshift*/ + 0, /*inplace_and*/ + 0, /*inplace_xor*/ + 0, /*inplace_or*/ + (binaryfunc)gentype_floor_divide, /*nb_floor_divide*/ + (binaryfunc)gentype_true_divide, /*nb_true_divide*/ + 0, /*nb_inplace_floor_divide*/ + 0, /*nb_inplace_true_divide*/ #if PY_VERSION_HEX >= 0x02050000 - (unaryfunc)NULL, /*nb_index*/ + (unaryfunc)NULL, /*nb_index*/ #endif }; @@ -816,7 +963,9 @@ 
gentype_richcompare(PyObject *self, PyObject *other, int cmp_op) PyObject *arr, *ret; arr = PyArray_FromScalar(self, NULL); - if (arr == NULL) return NULL; + if (arr == NULL) { + return NULL; + } ret = arr->ob_type->tp_richcompare(arr, other, cmp_op); Py_DECREF(arr); return ret; @@ -839,7 +988,9 @@ voidtype_flags_get(PyVoidScalarObject *self) { PyObject *flagobj; flagobj = PyArrayFlags_Type.tp_alloc(&PyArrayFlags_Type, 0); - if (flagobj == NULL) return NULL; + if (flagobj == NULL) { + return NULL; + } ((PyArrayFlagsObject *)flagobj)->arr = NULL; ((PyArrayFlagsObject *)flagobj)->flags = self->flags; return flagobj; @@ -938,9 +1089,13 @@ gentype_interface_get(PyObject *self) PyObject *inter; arr = (PyArrayObject *)PyArray_FromScalar(self, NULL); - if (arr == NULL) return NULL; + if (arr == NULL) { + return NULL; + } inter = PyObject_GetAttrString((PyObject *)arr, "__array_interface__"); - if (inter != NULL) PyDict_SetItemString(inter, "__ref", (PyObject *)arr); + if (inter != NULL) { + PyDict_SetItemString(inter, "__ref", (PyObject *)arr); + } Py_DECREF(arr); return inter; } @@ -998,7 +1153,9 @@ gentype_real_get(PyObject *self) else if (PyArray_IsScalar(self, Object)) { PyObject *obj = ((PyObjectScalarObject *)self)->obval; ret = PyObject_GetAttrString(obj, "real"); - if (ret != NULL) return ret; + if (ret != NULL) { + return ret; + } PyErr_Clear(); } Py_INCREF(self); @@ -1016,8 +1173,7 @@ gentype_imag_get(PyObject *self) char *ptr; typecode = _realdescr_fromcomplexscalar(self, &typenum); ptr = (char *)scalar_value(self, NULL); - ret = PyArray_Scalar(ptr + typecode->elsize, - typecode, NULL); + ret = PyArray_Scalar(ptr + typecode->elsize, typecode, NULL); } else if (PyArray_IsScalar(self, Object)) { PyObject *obj = ((PyObjectScalarObject *)self)->obval; @@ -1053,7 +1209,9 @@ gentype_flat_get(PyObject *self) PyObject *ret, *arr; arr = PyArray_FromScalar(self, NULL); - if (arr == NULL) return NULL; + if (arr == NULL) { + return NULL; + } ret = PyArray_IterNew(arr); 
Py_DECREF(arr); return ret; @@ -1201,10 +1359,11 @@ gentype_wraparray(PyObject *NPY_UNUSED(scalar), PyObject *args) /**begin repeat - -#name=tolist, item, tostring, astype, copy, __deepcopy__, searchsorted, view, swapaxes, conj, conjugate, nonzero, flatten, ravel, fill, transpose, newbyteorder# -*/ - + * + * #name = tolist, item, tostring, astype, copy, __deepcopy__, searchsorted, + * view, swapaxes, conj, conjugate, nonzero, flatten, ravel, fill, + * transpose, newbyteorder# + */ static PyObject * gentype_@name@(PyObject *self, PyObject *args) { @@ -1222,7 +1381,9 @@ gentype_itemset(PyObject *NPY_UNUSED(self), PyObject *NPY_UNUSED(args)) static PyObject * gentype_squeeze(PyObject *self, PyObject *args) { - if (!PyArg_ParseTuple(args, "")) return NULL; + if (!PyArg_ParseTuple(args, "")) { + return NULL; + } Py_INCREF(self); return self; } @@ -1235,17 +1396,16 @@ gentype_byteswap(PyObject *self, PyObject *args) { Bool inplace=FALSE; - if (!PyArg_ParseTuple(args, "|O&", PyArray_BoolConverter, &inplace)) + if (!PyArg_ParseTuple(args, "|O&", PyArray_BoolConverter, &inplace)) { return NULL; - + } if (inplace) { PyErr_SetString(PyExc_ValueError, "cannot byteswap a scalar in-place"); return NULL; } else { - /* get the data, copyswap it and pass it to a new Array scalar - */ + /* get the data, copyswap it and pass it to a new Array scalar */ char *data; int numbytes; PyArray_Descr *descr; @@ -1255,8 +1415,13 @@ gentype_byteswap(PyObject *self, PyObject *args) numbytes = gentype_getreadbuf(self, 0, (void **)&data); descr = PyArray_DescrFromScalar(self); newmem = _pya_malloc(descr->elsize); - if (newmem == NULL) {Py_DECREF(descr); return PyErr_NoMemory();} - else memcpy(newmem, data, descr->elsize); + if (newmem == NULL) { + Py_DECREF(descr); + return PyErr_NoMemory(); + } + else { + memcpy(newmem, data, descr->elsize); + } byte_swap_vector(newmem, 1, descr->elsize); new = PyArray_Scalar(newmem, descr, NULL); _pya_free(newmem); @@ -1267,10 +1432,12 @@ 
gentype_byteswap(PyObject *self, PyObject *args) /**begin repeat - -#name=take, getfield, put, repeat, tofile, mean, trace, diagonal, clip, std, var, sum, cumsum, prod, cumprod, compress, sort, argsort, round, argmax, argmin, max, min, ptp, any, all, resize, reshape, choose# -*/ - + * + * #name = take, getfield, put, repeat, tofile, mean, trace, diagonal, clip, + * std, var, sum, cumsum, prod, cumprod, compress, sort, argsort, + * round, argmax, argmin, max, min, ptp, any, all, resize, reshape, + * choose# + */ static PyObject * gentype_@name@(PyObject *self, PyObject *args, PyObject *kwds) { @@ -1284,7 +1451,9 @@ voidtype_getfield(PyVoidScalarObject *self, PyObject *args, PyObject *kwds) PyObject *ret; ret = gentype_generic_method((PyObject *)self, args, kwds, "getfield"); - if (!ret) return ret; + if (!ret) { + return ret; + } if (PyArray_IsScalar(ret, Generic) && \ (!PyArray_IsScalar(ret, Void))) { PyArray_Descr *new; @@ -1310,7 +1479,7 @@ gentype_setfield(PyObject *NPY_UNUSED(self), PyObject *NPY_UNUSED(args), PyObjec static PyObject * voidtype_setfield(PyVoidScalarObject *self, PyObject *args, PyObject *kwds) { - PyArray_Descr *typecode=NULL; + PyArray_Descr *typecode = NULL; int offset = 0; PyObject *value, *src; int mysize; @@ -1318,8 +1487,7 @@ voidtype_setfield(PyVoidScalarObject *self, PyObject *args, PyObject *kwds) static char *kwlist[] = {"value", "dtype", "offset", 0}; if ((self->flags & WRITEABLE) != WRITEABLE) { - PyErr_SetString(PyExc_RuntimeError, - "Can't write to memory"); + PyErr_SetString(PyExc_RuntimeError, "Can't write to memory"); return NULL; } if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO&|i", kwlist, @@ -1354,7 +1522,9 @@ voidtype_setfield(PyVoidScalarObject *self, PyObject *args, PyObject *kwds) else { /* Copy data from value to correct place in dptr */ src = PyArray_FromAny(value, typecode, 0, 0, CARRAY, NULL); - if (src == NULL) return NULL; + if (src == NULL) { + return NULL; + } typecode->f->copyswap(dptr, PyArray_DATA(src), 
!PyArray_ISNBO(self->descr->byteorder), src); @@ -1368,38 +1538,44 @@ voidtype_setfield(PyVoidScalarObject *self, PyObject *args, PyObject *kwds) static PyObject * gentype_reduce(PyObject *self, PyObject *NPY_UNUSED(args)) { - PyObject *ret=NULL, *obj=NULL, *mod=NULL; + PyObject *ret = NULL, *obj = NULL, *mod = NULL; const char *buffer; Py_ssize_t buflen; /* Return a tuple of (callable object, arguments) */ - ret = PyTuple_New(2); - if (ret == NULL) return NULL; + ret = PyTuple_New(2); + if (ret == NULL) { + return NULL; + } if (PyObject_AsReadBuffer(self, (const void **)&buffer, &buflen)<0) { - Py_DECREF(ret); return NULL; + Py_DECREF(ret); + return NULL; } mod = PyImport_ImportModule("numpy.core.multiarray"); - if (mod == NULL) return NULL; + if (mod == NULL) { + return NULL; + } obj = PyObject_GetAttrString(mod, "scalar"); Py_DECREF(mod); - if (obj == NULL) return NULL; + if (obj == NULL) { + return NULL; + } PyTuple_SET_ITEM(ret, 0, obj); obj = PyObject_GetAttrString((PyObject *)self, "dtype"); if (PyArray_IsScalar(self, Object)) { mod = ((PyObjectScalarObject *)self)->obval; - PyTuple_SET_ITEM(ret, 1, - Py_BuildValue("NO", obj, mod)); + PyTuple_SET_ITEM(ret, 1, Py_BuildValue("NO", obj, mod)); } else { #ifndef Py_UNICODE_WIDE - /* We need to expand the buffer so that we always write - UCS4 to disk for pickle of unicode scalars. - - This could be in a unicode_reduce function, but - that would require re-factoring. - */ - int alloc=0; + /* + * We need to expand the buffer so that we always write + * UCS4 to disk for pickle of unicode scalars. + * + * This could be in a unicode_reduce function, but + * that would require re-factoring. 
+ */ + int alloc = 0; char *tmp; int newlen; @@ -1448,13 +1624,16 @@ gentype_setstate(PyObject *NPY_UNUSED(self), PyObject *NPY_UNUSED(args)) static PyObject * gentype_dump(PyObject *self, PyObject *args) { - PyObject *file=NULL; + PyObject *file = NULL; int ret; - if (!PyArg_ParseTuple(args, "O", &file)) + if (!PyArg_ParseTuple(args, "O", &file)) { return NULL; + } ret = PyArray_Dump(self, file, 2); - if (ret < 0) return NULL; + if (ret < 0) { + return NULL; + } Py_INCREF(Py_None); return Py_None; } @@ -1462,15 +1641,17 @@ gentype_dump(PyObject *self, PyObject *args) static PyObject * gentype_dumps(PyObject *self, PyObject *args) { - if (!PyArg_ParseTuple(args, "")) + if (!PyArg_ParseTuple(args, "")) { return NULL; + } return PyArray_Dumps(self, 2); } /* setting flags cannot be done for scalars */ static PyObject * -gentype_setflags(PyObject *NPY_UNUSED(self), PyObject *NPY_UNUSED(args), PyObject *NPY_UNUSED(kwds)) +gentype_setflags(PyObject *NPY_UNUSED(self), PyObject *NPY_UNUSED(args), + PyObject *NPY_UNUSED(kwds)) { Py_INCREF(Py_None); return Py_None; @@ -1698,7 +1879,9 @@ voidtype_item(PyVoidScalarObject *self, Py_ssize_t n) } flist = self->descr->names; m = PyTuple_GET_SIZE(flist); - if (n < 0) n += m; + if (n < 0) { + n += m; + } if (n < 0 || n >= m) { PyErr_Format(PyExc_IndexError, "invalid index (%d)", (int) n); return NULL; @@ -1725,14 +1908,17 @@ voidtype_subscript(PyVoidScalarObject *self, PyObject *ind) if (PyString_Check(ind) || PyUnicode_Check(ind)) { /* look up in fields */ fieldinfo = PyDict_GetItem(self->descr->fields, ind); - if (!fieldinfo) goto fail; + if (!fieldinfo) { + goto fail; + } return voidtype_getfield(self, fieldinfo, NULL); } /* try to convert it to a number */ n = PyArray_PyIntAsIntp(ind); - if (error_converting(n)) goto fail; - + if (error_converting(n)) { + goto fail; + } return voidtype_item(self, (Py_ssize_t)n); fail: @@ -1755,8 +1941,12 @@ voidtype_ass_item(PyVoidScalarObject *self, Py_ssize_t n, PyObject *val) flist = 
self->descr->names; m = PyTuple_GET_SIZE(flist); - if (n < 0) n += m; - if (n < 0 || n >= m) goto fail; + if (n < 0) { + n += m; + } + if (n < 0 || n >= m) { + goto fail; + } fieldinfo = PyDict_GetItem(self->descr->fields, PyTuple_GET_ITEM(flist, n)); newtup = Py_BuildValue("(OOO)", val, @@ -1764,7 +1954,9 @@ voidtype_ass_item(PyVoidScalarObject *self, Py_ssize_t n, PyObject *val) PyTuple_GET_ITEM(fieldinfo, 1)); res = voidtype_setfield(self, newtup, NULL); Py_DECREF(newtup); - if (!res) return -1; + if (!res) { + return -1; + } Py_DECREF(res); return 0; @@ -1790,20 +1982,26 @@ voidtype_ass_subscript(PyVoidScalarObject *self, PyObject *ind, PyObject *val) if (PyString_Check(ind) || PyUnicode_Check(ind)) { /* look up in fields */ fieldinfo = PyDict_GetItem(self->descr->fields, ind); - if (!fieldinfo) goto fail; + if (!fieldinfo) { + goto fail; + } newtup = Py_BuildValue("(OOO)", val, PyTuple_GET_ITEM(fieldinfo, 0), PyTuple_GET_ITEM(fieldinfo, 1)); res = voidtype_setfield(self, newtup, NULL); Py_DECREF(newtup); - if (!res) return -1; + if (!res) { + return -1; + } Py_DECREF(res); return 0; } /* try to convert it to a number */ n = PyArray_PyIntAsIntp(ind); - if (error_converting(n)) goto fail; + if (error_converting(n)) { + goto fail; + } return voidtype_ass_item(self, (Py_ssize_t)n, val); fail: @@ -1813,35 +2011,35 @@ fail: static PyMappingMethods voidtype_as_mapping = { #if PY_VERSION_HEX >= 0x02050000 - (lenfunc)voidtype_length, /*mp_length*/ + (lenfunc)voidtype_length, /*mp_length*/ #else - (inquiry)voidtype_length, /*mp_length*/ + (inquiry)voidtype_length, /*mp_length*/ #endif - (binaryfunc)voidtype_subscript, /*mp_subscript*/ - (objobjargproc)voidtype_ass_subscript, /*mp_ass_subscript*/ + (binaryfunc)voidtype_subscript, /*mp_subscript*/ + (objobjargproc)voidtype_ass_subscript, /*mp_ass_subscript*/ }; static PySequenceMethods voidtype_as_sequence = { #if PY_VERSION_HEX >= 0x02050000 - (lenfunc)voidtype_length, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ 
- (ssizeargfunc)voidtype_item, /*sq_item*/ - 0, /*sq_slice*/ - (ssizeobjargproc)voidtype_ass_item, /*sq_ass_item*/ + (lenfunc)voidtype_length, /*sq_length*/ + 0, /*sq_concat*/ + 0, /*sq_repeat*/ + (ssizeargfunc)voidtype_item, /*sq_item*/ + 0, /*sq_slice*/ + (ssizeobjargproc)voidtype_ass_item, /*sq_ass_item*/ #else - (inquiry)voidtype_length, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - (intargfunc)voidtype_item, /*sq_item*/ - 0, /*sq_slice*/ - (intobjargproc)voidtype_ass_item, /*sq_ass_item*/ + (inquiry)voidtype_length, /*sq_length*/ + 0, /*sq_concat*/ + 0, /*sq_repeat*/ + (intargfunc)voidtype_item, /*sq_item*/ + 0, /*sq_slice*/ + (intobjargproc)voidtype_ass_item, /*sq_ass_item*/ #endif - 0, /* ssq_ass_slice */ - 0, /* sq_contains */ - 0, /* sq_inplace_concat */ - 0, /* sq_inplace_repeat */ + 0, /* ssq_ass_slice */ + 0, /* sq_contains */ + 0, /* sq_inplace_concat */ + 0, /* sq_inplace_repeat */ }; @@ -1892,9 +2090,10 @@ gentype_getsegcount(PyObject *self, Py_ssize_t *lenp) static Py_ssize_t gentype_getcharbuf(PyObject *self, Py_ssize_t segment, constchar **ptrptr) { - if (PyArray_IsScalar(self, String) || \ - PyArray_IsScalar(self, Unicode)) + if (PyArray_IsScalar(self, String) || + PyArray_IsScalar(self, Unicode)) { return gentype_getreadbuf(self, segment, (void **)ptrptr); + } else { PyErr_SetString(PyExc_TypeError, "Non-character array cannot be interpreted "\ @@ -1905,10 +2104,10 @@ gentype_getcharbuf(PyObject *self, Py_ssize_t segment, constchar **ptrptr) static PyBufferProcs gentype_as_buffer = { - gentype_getreadbuf, /*bf_getreadbuffer*/ - NULL, /*bf_getwritebuffer*/ - gentype_getsegcount, /*bf_getsegcount*/ - gentype_getcharbuf, /*bf_getcharbuffer*/ + gentype_getreadbuf, /* bf_getreadbuffer*/ + NULL, /* bf_getwritebuffer*/ + gentype_getsegcount, /* bf_getsegcount*/ + gentype_getcharbuf, /* bf_getcharbuffer*/ }; @@ -1917,69 +2116,70 @@ static PyBufferProcs gentype_as_buffer = { static PyTypeObject PyGenericArrType_Type = { PyObject_HEAD_INIT(NULL) - 
0, /*ob_size*/ - "numpy.generic", /*tp_name*/ - sizeof(PyObject), /*tp_basicsize*/ - 0, /* tp_itemsize */ + 0, /* ob_size*/ + "numpy.generic", /* tp_name*/ + sizeof(PyObject), /* tp_basicsize*/ + 0, /* tp_itemsize */ /* methods */ - 0, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - 0, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ + 0, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + 0, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + 0, /* tp_new */ + 0, /* tp_free */ + 0, /* tp_is_gc */ + 0, /* tp_bases */ + 0, /* tp_mro */ + 0, /* tp_cache */ + 0, /* tp_subclasses */ + 0, 
/* tp_weaklist */ + 0, /* tp_del */ #ifdef COUNT_ALLOCS - /* these must be last and never explicitly initialized */ - 0, /* tp_allocs */ - 0, /* tp_frees */ - 0, /* tp_maxalloc */ - 0, /* tp_prev */ - 0, /* *tp_next */ + /* these must be last and never explicitly initialized */ + 0, /* tp_allocs */ + 0, /* tp_frees */ + 0, /* tp_maxalloc */ + 0, /* tp_prev */ + 0, /* *tp_next */ #endif }; static void void_dealloc(PyVoidScalarObject *v) { - if (v->flags & OWNDATA) + if (v->flags & OWNDATA) { PyDataMem_FREE(v->obval); + } Py_XDECREF(v->descr); Py_XDECREF(v->base); v->ob_type->tp_free(v); @@ -1992,11 +2192,13 @@ object_arrtype_dealloc(PyObject *v) v->ob_type->tp_free(v); } -/* string and unicode inherit from Python Type first and so GET_ITEM is different to get to the Python Type. +/* + * string and unicode inherit from Python Type first and so GET_ITEM + * is different to get to the Python Type. + * + * ok is a work-around for a bug in complex_new that doesn't allocate + * memory from the sub-types memory allocator. */ -/* ok is a work-around for a bug in complex_new that doesn't allocate - memory from the sub-types memory allocator. 
-*/ #define _WORK(num) \ if (type->tp_bases && (PyTuple_GET_SIZE(type->tp_bases)==2)) { \ @@ -2015,14 +2217,18 @@ object_arrtype_dealloc(PyObject *v) #define _WORKz _WORK(0) #define _WORK0 -/**begin repeat1 -#name=byte, short, int, long, longlong, ubyte, ushort, uint, ulong, ulonglong, float, double, longdouble, cfloat, cdouble, clongdouble, string, unicode, object# -#TYPE=BYTE, SHORT, INT, LONG, LONGLONG, UBYTE, USHORT, UINT, ULONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, CFLOAT, CDOUBLE, CLONGDOUBLE, STRING, UNICODE, OBJECT# -#work=0,0,1,1,1,0,0,0,0,0,0,1,0,0,0,0,z,z,0# -#default=0*16,1*2,2# -*/ +/**begin repeat + * #name = byte, short, int, long, longlong, ubyte, ushort, uint, ulong, + * ulonglong, float, double, longdouble, cfloat, cdouble, clongdouble, + * string, unicode, object# + * #TYPE = BYTE, SHORT, INT, LONG, LONGLONG, UBYTE, USHORT, UINT, ULONG, + * ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, CFLOAT, CDOUBLE, CLONGDOUBLE, + * STRING, UNICODE, OBJECT# + * #work = 0,0,1,1,1,0,0,0,0,0,0,1,0,0,0,0,z,z,0# + * #default = 0*16,1*2,2# + */ -#define _NPY_UNUSED2_1 +#define _NPY_UNUSED2_1 #define _NPY_UNUSED2_z #define _NPY_UNUSED2_0 NPY_UNUSED #define _NPY_UNUSED1_0 @@ -2041,17 +2247,20 @@ static PyObject * void *dest, *src; #endif - /* allow base-class (if any) to do conversion */ - /* If successful, this will jump to finish: */ + /* + * allow base-class (if any) to do conversion + * If successful, this will jump to finish: + */ _WORK@work@ if (!PyArg_ParseTuple(args, "|O", &obj)) { return NULL; } typecode = PyArray_DescrFromType(PyArray_@TYPE@); - /* typecode is new reference and stolen by - PyArray_FromAny but not PyArray_Scalar - */ + /* + * typecode is new reference and stolen by + * PyArray_FromAny but not PyArray_Scalar + */ if (obj == NULL) { #if @default@ == 0 char *mem = malloc(sizeof(@name@)); @@ -2062,30 +2271,32 @@ static PyObject * #elif @default@ == 1 robj = PyArray_Scalar(NULL, typecode, NULL); #elif @default@ == 2 - Py_INCREF(Py_None); - robj = 
Py_None; + Py_INCREF(Py_None); + robj = Py_None; #endif - Py_DECREF(typecode); + Py_DECREF(typecode); goto finish; } - /* It is expected at this point that robj is a PyArrayScalar - (even for Object Data Type) - */ + /* + * It is expected at this point that robj is a PyArrayScalar + * (even for Object Data Type) + */ arr = PyArray_FromAny(obj, typecode, 0, 0, FORCECAST, NULL); if ((arr == NULL) || (PyArray_NDIM(arr) > 0)) { return arr; } /* 0-d array */ robj = PyArray_ToScalar(PyArray_DATA(arr), (NPY_AO *)arr); - Py_DECREF(arr); + Py_DECREF(arr); finish: - -#if @default@ == 2 /* In OBJECT case, robj is no longer a - PyArrayScalar at this point but the - remaining code assumes it is - */ + /* + * In OBJECT case, robj is no longer a + * PyArrayScalar at this point but the + * remaining code assumes it is + */ +#if @default@ == 2 return robj; #else /* Normal return */ @@ -2093,9 +2304,11 @@ finish: return robj; } - /* This return path occurs when the requested type is not created - but another scalar object is created instead (i.e. when - the base-class does the conversion in _WORK macro) */ + /* + * This return path occurs when the requested type is not created + * but another scalar object is created instead (i.e. 
when + * the base-class does the conversion in _WORK macro) + */ /* Need to allocate new type and copy data-area over */ if (type->tp_itemsize) { @@ -2118,7 +2331,7 @@ finish: *((npy_@name@ *)dest) = *((npy_@name@ *)src); #elif @default@ == 1 /* unicode and strings */ if (itemsize == 0) { /* unicode */ - itemsize = ((PyUnicodeObject *)robj)->length * sizeof(Py_UNICODE); + itemsize = ((PyUnicodeObject *)robj)->length * sizeof(Py_UNICODE); } memcpy(dest, src, itemsize); /* @default@ == 2 won't get here */ @@ -2138,16 +2351,21 @@ finish: static PyObject * bool_arrtype_new(PyTypeObject *NPY_UNUSED(type), PyObject *args, PyObject *NPY_UNUSED(kwds)) { - PyObject *obj=NULL; + PyObject *obj = NULL; PyObject *arr; - if (!PyArg_ParseTuple(args, "|O", &obj)) return NULL; - if (obj == NULL) + if (!PyArg_ParseTuple(args, "|O", &obj)) { + return NULL; + } + if (obj == NULL) { PyArrayScalar_RETURN_FALSE; - if (obj == Py_False) + } + if (obj == Py_False) { PyArrayScalar_RETURN_FALSE; - if (obj == Py_True) + } + if (obj == Py_True) { PyArrayScalar_RETURN_TRUE; + } arr = PyArray_FROM_OTF(obj, PyArray_BOOL, FORCECAST); if (arr && 0 == PyArray_NDIM(arr)) { Bool val = *((Bool *)PyArray_DATA(arr)); @@ -2160,27 +2378,30 @@ bool_arrtype_new(PyTypeObject *NPY_UNUSED(type), PyObject *args, PyObject *NPY_U static PyObject * bool_arrtype_and(PyObject *a, PyObject *b) { - if (PyArray_IsScalar(a, Bool) && PyArray_IsScalar(b, Bool)) + if (PyArray_IsScalar(a, Bool) && PyArray_IsScalar(b, Bool)) { PyArrayScalar_RETURN_BOOL_FROM_LONG ((a == PyArrayScalar_True)&(b == PyArrayScalar_True)); + } return PyGenericArrType_Type.tp_as_number->nb_and(a, b); } static PyObject * bool_arrtype_or(PyObject *a, PyObject *b) { - if (PyArray_IsScalar(a, Bool) && PyArray_IsScalar(b, Bool)) + if (PyArray_IsScalar(a, Bool) && PyArray_IsScalar(b, Bool)) { PyArrayScalar_RETURN_BOOL_FROM_LONG ((a == PyArrayScalar_True)|(b == PyArrayScalar_True)); + } return PyGenericArrType_Type.tp_as_number->nb_or(a, b); } static 
PyObject * bool_arrtype_xor(PyObject *a, PyObject *b) { - if (PyArray_IsScalar(a, Bool) && PyArray_IsScalar(b, Bool)) + if (PyArray_IsScalar(a, Bool) && PyArray_IsScalar(b, Bool)) { PyArrayScalar_RETURN_BOOL_FROM_LONG ((a == PyArrayScalar_True)^(b == PyArrayScalar_True)); + } return PyGenericArrType_Type.tp_as_number->nb_xor(a, b); } @@ -2192,10 +2413,13 @@ bool_arrtype_nonzero(PyObject *a) #if PY_VERSION_HEX >= 0x02050000 /**begin repeat -#name=byte, short, int, long, ubyte, ushort, longlong, uint, ulong, ulonglong# -#Name=Byte, Short, Int, Long, UByte, UShort, LongLong, UInt, ULong, ULongLong# -#type=PyInt_FromLong*6, PyLong_FromLongLong*1, PyLong_FromUnsignedLong*2, PyLong_FromUnsignedLongLong# -*/ + * #name = byte, short, int, long, ubyte, ushort, longlong, uint, ulong, + * ulonglong# + * #Name = Byte, Short, Int, Long, UByte, UShort, LongLong, UInt, ULong, + * ULongLong# + * #type = PyInt_FromLong*6, PyLong_FromLongLong*1, PyLong_FromUnsignedLong*2, + * PyLong_FromUnsignedLongLong# + */ static PyNumberMethods @name@_arrtype_as_number; static PyObject * @name@_index(PyObject *self) @@ -2203,6 +2427,7 @@ static PyObject * return @type@(PyArrayScalar_VAL(self, @Name@)); } /**end repeat**/ + static PyObject * bool_index(PyObject *a) { @@ -2212,67 +2437,71 @@ bool_index(PyObject *a) /* Arithmetic methods -- only so we can override &, |, ^. 
*/ static PyNumberMethods bool_arrtype_as_number = { - 0, /* nb_add */ - 0, /* nb_subtract */ - 0, /* nb_multiply */ - 0, /* nb_divide */ - 0, /* nb_remainder */ - 0, /* nb_divmod */ - 0, /* nb_power */ - 0, /* nb_negative */ - 0, /* nb_positive */ - 0, /* nb_absolute */ - (inquiry)bool_arrtype_nonzero, /* nb_nonzero */ - 0, /* nb_invert */ - 0, /* nb_lshift */ - 0, /* nb_rshift */ - (binaryfunc)bool_arrtype_and, /* nb_and */ - (binaryfunc)bool_arrtype_xor, /* nb_xor */ - (binaryfunc)bool_arrtype_or, /* nb_or */ - 0, /* nb_coerce */ - 0, /* nb_int */ - 0, /* nb_long */ - 0, /* nb_float */ - 0, /* nb_oct */ - 0, /* nb_hex */ + 0, /* nb_add */ + 0, /* nb_subtract */ + 0, /* nb_multiply */ + 0, /* nb_divide */ + 0, /* nb_remainder */ + 0, /* nb_divmod */ + 0, /* nb_power */ + 0, /* nb_negative */ + 0, /* nb_positive */ + 0, /* nb_absolute */ + (inquiry)bool_arrtype_nonzero, /* nb_nonzero */ + 0, /* nb_invert */ + 0, /* nb_lshift */ + 0, /* nb_rshift */ + (binaryfunc)bool_arrtype_and, /* nb_and */ + (binaryfunc)bool_arrtype_xor, /* nb_xor */ + (binaryfunc)bool_arrtype_or, /* nb_or */ + 0, /* nb_coerce */ + 0, /* nb_int */ + 0, /* nb_long */ + 0, /* nb_float */ + 0, /* nb_oct */ + 0, /* nb_hex */ /* Added in release 2.0 */ - 0, /* nb_inplace_add */ - 0, /* nb_inplace_subtract */ - 0, /* nb_inplace_multiply */ - 0, /* nb_inplace_divide */ - 0, /* nb_inplace_remainder */ - 0, /* nb_inplace_power */ - 0, /* nb_inplace_lshift */ - 0, /* nb_inplace_rshift */ - 0, /* nb_inplace_and */ - 0, /* nb_inplace_xor */ - 0, /* nb_inplace_or */ + 0, /* nb_inplace_add */ + 0, /* nb_inplace_subtract */ + 0, /* nb_inplace_multiply */ + 0, /* nb_inplace_divide */ + 0, /* nb_inplace_remainder */ + 0, /* nb_inplace_power */ + 0, /* nb_inplace_lshift */ + 0, /* nb_inplace_rshift */ + 0, /* nb_inplace_and */ + 0, /* nb_inplace_xor */ + 0, /* nb_inplace_or */ /* Added in release 2.2 */ /* The following require the Py_TPFLAGS_HAVE_CLASS flag */ - 0, /* nb_floor_divide */ - 0, /* nb_true_divide 
*/ - 0, /* nb_inplace_floor_divide */ - 0, /* nb_inplace_true_divide */ + 0, /* nb_floor_divide */ + 0, /* nb_true_divide */ + 0, /* nb_inplace_floor_divide */ + 0, /* nb_inplace_true_divide */ /* Added in release 2.5 */ - 0, /* nb_index */ +#if PY_VERSION_HEX >= 0x02050000 + 0, /* nb_index */ +#endif }; static PyObject * void_arrtype_new(PyTypeObject *type, PyObject *args, PyObject *NPY_UNUSED(kwds)) { PyObject *obj, *arr; - ulonglong memu=1; - PyObject *new=NULL; + ulonglong memu = 1; + PyObject *new = NULL; char *destptr; - if (!PyArg_ParseTuple(args, "O", &obj)) return NULL; - /* For a VOID scalar first see if obj is an integer or long - and create new memory of that size (filled with 0) for the scalar - */ - - if (PyLong_Check(obj) || PyInt_Check(obj) || \ + if (!PyArg_ParseTuple(args, "O", &obj)) { + return NULL; + } + /* + * For a VOID scalar first see if obj is an integer or long + * and create new memory of that size (filled with 0) for the scalar + */ + if (PyLong_Check(obj) || PyInt_Check(obj) || PyArray_IsScalar(obj, Integer) || - (PyArray_Check(obj) && PyArray_NDIM(obj)==0 && \ + (PyArray_Check(obj) && PyArray_NDIM(obj)==0 && PyArray_ISINTEGER(obj))) { new = obj->ob_type->tp_as_number->nb_long(obj); } @@ -2288,7 +2517,9 @@ void_arrtype_new(PyTypeObject *type, PyObject *args, PyObject *NPY_UNUSED(kwds)) return NULL; } destptr = PyDataMem_NEW((int) memu); - if (destptr == NULL) return PyErr_NoMemory(); + if (destptr == NULL) { + return PyErr_NoMemory(); + } ret = type->tp_alloc(type, 0); if (ret == NULL) { PyDataMem_FREE(destptr); @@ -2296,8 +2527,8 @@ void_arrtype_new(PyTypeObject *type, PyObject *args, PyObject *NPY_UNUSED(kwds)) } ((PyVoidScalarObject *)ret)->obval = destptr; ((PyVoidScalarObject *)ret)->ob_size = (int) memu; - ((PyVoidScalarObject *)ret)->descr = \ - PyArray_DescrNewFromType(PyArray_VOID); + ((PyVoidScalarObject *)ret)->descr = + PyArray_DescrNewFromType(PyArray_VOID); ((PyVoidScalarObject *)ret)->descr->elsize = (int) memu; 
((PyVoidScalarObject *)ret)->flags = BEHAVED | OWNDATA; ((PyVoidScalarObject *)ret)->base = NULL; @@ -2313,8 +2544,8 @@ void_arrtype_new(PyTypeObject *type, PyObject *args, PyObject *NPY_UNUSED(kwds)) /**************** Define Hash functions ********************/ /**begin repeat -#lname=bool,ubyte,ushort# -#name=Bool,UByte, UShort# + * #lname = bool,ubyte,ushort# + * #name = Bool,UByte, UShort# */ static long @lname@_arrtype_hash(PyObject *obj) @@ -2324,14 +2555,16 @@ static long /**end repeat**/ /**begin repeat -#lname=byte,short,uint,ulong# -#name=Byte,Short,UInt,ULong# + * #lname=byte,short,uint,ulong# + * #name=Byte,Short,UInt,ULong# */ static long @lname@_arrtype_hash(PyObject *obj) { long x = (long)(((Py@name@ScalarObject *)obj)->obval); - if (x == -1) x=-2; + if (x == -1) { + x = -2; + } return x; } /**end repeat**/ @@ -2341,16 +2574,18 @@ static long int_arrtype_hash(PyObject *obj) { long x = (long)(((PyIntScalarObject *)obj)->obval); - if (x == -1) x=-2; + if (x == -1) { + x = -2; + } return x; } #endif /**begin repeat -#char=,u# -#Char=,U# -#ext=&& (x >= LONG_MIN),# -*/ + * #char = ,u# + * #Char = ,U# + * #ext = && (x >= LONG_MIN),# + */ #if SIZEOF_LONG != SIZEOF_LONGLONG /* we assume SIZEOF_LONGLONG=2*SIZEOF_LONG */ static long @@ -2371,7 +2606,9 @@ static long both.v = x; y = both.hashvals[0] + (1000003)*both.hashvals[1]; } - if (y == -1) y = -2; + if (y == -1) { + y = -2; + } return y; } #endif @@ -2382,7 +2619,9 @@ static long ulonglong_arrtype_hash(PyObject *obj) { long x = (long)(((PyULongLongScalarObject *)obj)->obval); - if (x == -1) x=-2; + if (x == -1) { + x = -2; + } return x; } #endif @@ -2390,9 +2629,10 @@ ulonglong_arrtype_hash(PyObject *obj) /* Wrong thing to do for longdouble, but....*/ + /**begin repeat -#lname=float, longdouble# -#name=Float, LongDouble# + * #lname = float, longdouble# + * #name = Float, LongDouble# */ static long @lname@_arrtype_hash(PyObject *obj) @@ -2405,16 +2645,21 @@ static long c@lname@_arrtype_hash(PyObject *obj) 
{ long hashreal, hashimag, combined; - hashreal = _Py_HashDouble((double) \ + hashreal = _Py_HashDouble((double) (((PyC@name@ScalarObject *)obj)->obval).real); - if (hashreal == -1) return -1; - hashimag = _Py_HashDouble((double) \ + if (hashreal == -1) { + return -1; + } + hashimag = _Py_HashDouble((double) (((PyC@name@ScalarObject *)obj)->obval).imag); - if (hashimag == -1) return -1; - + if (hashimag == -1) { + return -1; + } combined = hashreal + 1000003 * hashimag; - if (combined == -1) combined = -2; + if (combined == -1) { + combined = -2; + } return combined; } /**end repeat**/ @@ -2440,7 +2685,9 @@ object_arrtype_getattro(PyObjectScalarObject *obj, PyObject *attr) { /* first look in object and then hand off to generic type */ res = PyObject_GenericGetAttr(obj->obval, attr); - if (res) return res; + if (res) { + return res; + } PyErr_Clear(); return PyObject_GenericGetAttr((PyObject *)obj, attr); } @@ -2451,7 +2698,9 @@ object_arrtype_setattro(PyObjectScalarObject *obj, PyObject *attr, PyObject *val /* first look in object and then hand off to generic type */ res = PyObject_GenericSetAttr(obj->obval, attr, val); - if (res >= 0) return res; + if (res >= 0) { + return res; + } PyErr_Clear(); return PyObject_GenericSetAttr((PyObject *)obj, attr, val); } @@ -2507,27 +2756,27 @@ object_arrtype_inplace_repeat(PyObjectScalarObject *self, Py_ssize_t count) static PySequenceMethods object_arrtype_as_sequence = { #if PY_VERSION_HEX >= 0x02050000 - (lenfunc)object_arrtype_length, /*sq_length*/ - (binaryfunc)object_arrtype_concat, /*sq_concat*/ - (ssizeargfunc)object_arrtype_repeat, /*sq_repeat*/ - 0, /*sq_item*/ - 0, /*sq_slice*/ - 0, /* sq_ass_item */ - 0, /* sq_ass_slice */ - (objobjproc)object_arrtype_contains, /* sq_contains */ - (binaryfunc)object_arrtype_inplace_concat, /* sq_inplace_concat */ - (ssizeargfunc)object_arrtype_inplace_repeat, /* sq_inplace_repeat */ + (lenfunc)object_arrtype_length, /*sq_length*/ + (binaryfunc)object_arrtype_concat, /*sq_concat*/ + 
(ssizeargfunc)object_arrtype_repeat, /*sq_repeat*/ + 0, /*sq_item*/ + 0, /*sq_slice*/ + 0, /* sq_ass_item */ + 0, /* sq_ass_slice */ + (objobjproc)object_arrtype_contains, /* sq_contains */ + (binaryfunc)object_arrtype_inplace_concat, /* sq_inplace_concat */ + (ssizeargfunc)object_arrtype_inplace_repeat, /* sq_inplace_repeat */ #else - (inquiry)object_arrtype_length, /*sq_length*/ - (binaryfunc)object_arrtype_concat, /*sq_concat*/ - (intargfunc)object_arrtype_repeat, /*sq_repeat*/ - 0, /*sq_item*/ - 0, /*sq_slice*/ - 0, /* sq_ass_item */ - 0, /* sq_ass_slice */ - (objobjproc)object_arrtype_contains, /* sq_contains */ - (binaryfunc)object_arrtype_inplace_concat, /* sq_inplace_concat */ - (intargfunc)object_arrtype_inplace_repeat, /* sq_inplace_repeat */ + (inquiry)object_arrtype_length, /*sq_length*/ + (binaryfunc)object_arrtype_concat, /*sq_concat*/ + (intargfunc)object_arrtype_repeat, /*sq_repeat*/ + 0, /*sq_item*/ + 0, /*sq_slice*/ + 0, /* sq_ass_item */ + 0, /* sq_ass_slice */ + (objobjproc)object_arrtype_contains, /* sq_contains */ + (binaryfunc)object_arrtype_inplace_concat, /* sq_inplace_concat */ + (intargfunc)object_arrtype_inplace_repeat, /* sq_inplace_repeat */ #endif }; @@ -2550,14 +2799,14 @@ object_arrtype_getsegcount(PyObjectScalarObject *self, Py_ssize_t *lenp) int cnt; PyBufferProcs *pb = self->obval->ob_type->tp_as_buffer; - if (pb == NULL || \ - pb->bf_getsegcount == NULL || \ - (cnt = (*pb->bf_getsegcount)(self->obval, &newlen)) != 1) + if (pb == NULL || + pb->bf_getsegcount == NULL || + (cnt = (*pb->bf_getsegcount)(self->obval, &newlen)) != 1) { return 0; - - if (lenp) + } + if (lenp) { *lenp = newlen; - + } return cnt; } @@ -2566,14 +2815,13 @@ object_arrtype_getreadbuf(PyObjectScalarObject *self, Py_ssize_t segment, void * { PyBufferProcs *pb = self->obval->ob_type->tp_as_buffer; - if (pb == NULL || \ + if (pb == NULL || pb->bf_getreadbuffer == NULL || pb->bf_getsegcount == NULL) { PyErr_SetString(PyExc_TypeError, "expected a readable buffer 
object"); return -1; } - return (*pb->bf_getreadbuffer)(self->obval, segment, ptrptr); } @@ -2582,14 +2830,13 @@ object_arrtype_getwritebuf(PyObjectScalarObject *self, Py_ssize_t segment, void { PyBufferProcs *pb = self->obval->ob_type->tp_as_buffer; - if (pb == NULL || \ + if (pb == NULL || pb->bf_getwritebuffer == NULL || pb->bf_getsegcount == NULL) { PyErr_SetString(PyExc_TypeError, "expected a writeable buffer object"); return -1; } - return (*pb->bf_getwritebuffer)(self->obval, segment, ptrptr); } @@ -2599,14 +2846,13 @@ object_arrtype_getcharbuf(PyObjectScalarObject *self, Py_ssize_t segment, { PyBufferProcs *pb = self->obval->ob_type->tp_as_buffer; - if (pb == NULL || \ + if (pb == NULL || pb->bf_getcharbuffer == NULL || pb->bf_getsegcount == NULL) { PyErr_SetString(PyExc_TypeError, "expected a character buffer object"); return -1; } - return (*pb->bf_getcharbuffer)(self->obval, segment, ptrptr); } @@ -2627,64 +2873,64 @@ static PyBufferProcs object_arrtype_as_buffer = { static PyObject * object_arrtype_call(PyObjectScalarObject *obj, PyObject *args, PyObject *kwds) { - return PyObject_Call(obj->obval, args, kwds); + return PyObject_Call(obj->obval, args, kwds); } static PyTypeObject PyObjectArrType_Type = { PyObject_HEAD_INIT(NULL) - 0, /*ob_size*/ - "numpy.object_", /*tp_name*/ - sizeof(PyObjectScalarObject), /*tp_basicsize*/ - 0, /* tp_itemsize */ - (destructor)object_arrtype_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - &object_arrtype_as_sequence, /* tp_as_sequence */ - &object_arrtype_as_mapping, /* tp_as_mapping */ - 0, /* tp_hash */ - (ternaryfunc)object_arrtype_call, /* tp_call */ - 0, /* tp_str */ - (getattrofunc)object_arrtype_getattro, /* tp_getattro */ - (setattrofunc)object_arrtype_setattro, /* tp_setattro */ - &object_arrtype_as_buffer, /* tp_as_buffer */ - 0, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ 
- 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ + 0, /* ob_size*/ + "numpy.object_", /* tp_name*/ + sizeof(PyObjectScalarObject), /* tp_basicsize*/ + 0, /* tp_itemsize */ + (destructor)object_arrtype_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + &object_arrtype_as_sequence, /* tp_as_sequence */ + &object_arrtype_as_mapping, /* tp_as_mapping */ + 0, /* tp_hash */ + (ternaryfunc)object_arrtype_call, /* tp_call */ + 0, /* tp_str */ + (getattrofunc)object_arrtype_getattro, /* tp_getattro */ + (setattrofunc)object_arrtype_setattro, /* tp_setattro */ + &object_arrtype_as_buffer, /* tp_as_buffer */ + 0, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + 0, /* tp_new */ + 0, /* tp_free */ + 0, /* tp_is_gc */ + 0, /* tp_bases */ + 0, /* tp_mro */ + 0, /* tp_cache */ + 0, /* tp_subclasses */ + 0, /* tp_weaklist */ + 0, /* tp_del */ #ifdef COUNT_ALLOCS /* these must be last and never explicitly initialized */ - 0, /* tp_allocs */ - 0, /* tp_frees */ - 0, /* tp_maxalloc */ - 0, /* tp_prev */ - 0, /* *tp_next */ + 0, /* tp_allocs */ + 0, /* tp_frees */ + 0, /* tp_maxalloc */ + 0, /* tp_prev */ + 0, 
/* *tp_next */
 #endif
 };

@@ -2698,12 +2944,12 @@ count_new_axes_0d(PyObject *);
 static PyObject *
 gen_arrtype_subscript(PyObject *self, PyObject *key)
 {
-    /* Only [...], [...,<???>], [<???>, ...],
-       is allowed for indexing a scalar
-
-       These return a new N-d array with a copy of
-       the data where N is the number of None's in <???>.
-
+    /*
+     * Only [...], [...,<???>], [<???>, ...],
+     * is allowed for indexing a scalar
+     *
+     * These return a new N-d array with a copy of
+     * the data where N is the number of None's in <???>.
      */
     PyObject *res, *ret;
     int N;
@@ -2717,19 +2963,19 @@ gen_arrtype_subscript(PyObject *self, PyObject *key)
                      "invalid index to scalar variable.");
         return NULL;
     }
-
-    if (key == Py_Ellipsis)
+    if (key == Py_Ellipsis) {
         return res;
-
+    }
     if (key == Py_None) {
         ret = add_new_axes_0d((PyArrayObject *)res, 1);
         Py_DECREF(res);
         return ret;
     }
     /* Must be a Tuple */
     N = count_new_axes_0d(key);
-    if (N < 0) return NULL;
+    if (N < 0) {
+        return NULL;
+    }
     ret = add_new_axes_0d((PyArrayObject *)res, N);
     Py_DECREF(res);
     return ret;
@@ -2737,74 +2983,75 @@
 /**begin repeat
- * #name=bool, string, unicode, void#
- * #NAME=Bool, String, Unicode, Void#
- * #ex=_,_,_,#
+ * #name = bool, string, unicode, void#
+ * #NAME = Bool, String, Unicode, Void#
+ * #ex = _,_,_,#
 */
 static PyTypeObject Py@NAME@ArrType_Type = {
     PyObject_HEAD_INIT(NULL)
-    0, /*ob_size*/
-    "numpy.@name@@ex@", /*tp_name*/
-    sizeof(Py@NAME@ScalarObject), /*tp_basicsize*/
-    0, /* tp_itemsize */
-    0, /* tp_dealloc */
-    0, /* tp_print */
-    0, /* tp_getattr */
-    0, /* tp_setattr */
-    0, /* tp_compare */
-    0, /* tp_repr */
-    0, /* tp_as_number */
-    0, /* tp_as_sequence */
-    0, /* tp_as_mapping */
-    0, /* tp_hash */
-    0, /* tp_call */
-    0, /* tp_str */
-    0, /* tp_getattro */
-    0, /* tp_setattro */
-    0, /* tp_as_buffer */
-    0, /* tp_flags */
-    0, /* tp_doc */
-    0, /* tp_traverse */
-    0, /* tp_clear */
-    0, /* tp_richcompare */
-    0, /* tp_weaklistoffset */
-
0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ + 0, /* ob_size*/ + "numpy.@name@@ex@", /* tp_name*/ + sizeof(Py@NAME@ScalarObject), /* tp_basicsize*/ + 0, /* tp_itemsize */ + 0, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + 0, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + 0, /* tp_new */ + 0, /* tp_free */ + 0, /* tp_is_gc */ + 0, /* tp_bases */ + 0, /* tp_mro */ + 0, /* tp_cache */ + 0, /* tp_subclasses */ + 0, /* tp_weaklist */ + 0, /* tp_del */ #ifdef COUNT_ALLOCS /* these must be last and never explicitly initialized */ - 0, /* tp_allocs */ - 0, /* tp_frees */ - 0, /* tp_maxalloc */ - 0, /* tp_prev */ - 0, /* *tp_next */ + 0, /* tp_allocs */ + 0, /* tp_frees */ + 0, /* tp_maxalloc */ + 0, /* tp_prev */ + 0, /* *tp_next */ #endif }; /**end repeat**/ /**begin repeat -#NAME=Byte, Short, Int, Long, LongLong, UByte, UShort, UInt, ULong, ULongLong, Float, Double, LongDouble# -#name=int*5, uint*5, float*3# -#CNAME=(CHAR, SHORT, INT, LONG, LONGLONG)*2, FLOAT, DOUBLE, 
LONGDOUBLE# -*/ + * #NAME = Byte, Short, Int, Long, LongLong, UByte, UShort, UInt, ULong, + * ULongLong, Float, Double, LongDouble# + * #name = int*5, uint*5, float*3# + * #CNAME = (CHAR, SHORT, INT, LONG, LONGLONG)*2, FLOAT, DOUBLE, LONGDOUBLE# + */ #if BITSOF_@CNAME@ == 8 #define _THIS_SIZE "8" #elif BITSOF_@CNAME@ == 16 @@ -2824,59 +3071,59 @@ static PyTypeObject Py@NAME@ArrType_Type = { #endif static PyTypeObject Py@NAME@ArrType_Type = { PyObject_HEAD_INIT(NULL) - 0, /*ob_size*/ - "numpy.@name@" _THIS_SIZE, /*tp_name*/ - sizeof(Py@NAME@ScalarObject), /*tp_basicsize*/ - 0, /* tp_itemsize */ - 0, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - 0, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ + 0, /* ob_size*/ + "numpy.@name@" _THIS_SIZE, /* tp_name*/ + sizeof(Py@NAME@ScalarObject), /* tp_basicsize*/ + 0, /* tp_itemsize */ + 0, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + 0, /* tp_flags */ + 0, /* tp_doc 
*/ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + 0, /* tp_new */ + 0, /* tp_free */ + 0, /* tp_is_gc */ + 0, /* tp_bases */ + 0, /* tp_mro */ + 0, /* tp_cache */ + 0, /* tp_subclasses */ + 0, /* tp_weaklist */ + 0, /* tp_del */ #ifdef COUNT_ALLOCS /* these must be last and never explicitly initialized */ - 0, /* tp_allocs */ - 0, /* tp_frees */ - 0, /* tp_maxalloc */ - 0, /* tp_prev */ - 0, /* *tp_next */ + 0, /* tp_allocs */ + 0, /* tp_frees */ + 0, /* tp_maxalloc */ + 0, /* tp_prev */ + 0, /* *tp_next */ #endif }; @@ -2892,10 +3139,10 @@ static PyMappingMethods gentype_as_mapping = { /**begin repeat -#NAME=CFloat, CDouble, CLongDouble# -#name=complex*3# -#CNAME=FLOAT, DOUBLE, LONGDOUBLE# -*/ + * #NAME = CFloat, CDouble, CLongDouble# + * #name = complex*3# + * #CNAME = FLOAT, DOUBLE, LONGDOUBLE# + */ #if BITSOF_@CNAME@ == 16 #define _THIS_SIZE2 "16" #define _THIS_SIZE1 "32" @@ -2918,65 +3165,69 @@ static PyMappingMethods gentype_as_mapping = { #define _THIS_SIZE2 "256" #define _THIS_SIZE1 "512" #endif -static PyTypeObject Py@NAME@ArrType_Type = { + +#define _THIS_DOC "Composed of two " _THIS_SIZE2 " bit floats" + + static PyTypeObject Py@NAME@ArrType_Type = { PyObject_HEAD_INIT(NULL) - 0, /*ob_size*/ - "numpy.@name@" _THIS_SIZE1, /*tp_name*/ - sizeof(Py@NAME@ScalarObject), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - 0, /*tp_dealloc*/ - 0, /*tp_print*/ - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - 0, /*tp_compare*/ - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash */ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT, /*tp_flags*/ - "Composed of two " 
_THIS_SIZE2 " bit floats", /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ + 0, /* ob_size*/ + "numpy.@name@" _THIS_SIZE1, /* tp_name*/ + sizeof(Py@NAME@ScalarObject), /* tp_basicsize*/ + 0, /* tp_itemsize*/ + 0, /* tp_dealloc*/ + 0, /* tp_print*/ + 0, /* tp_getattr*/ + 0, /* tp_setattr*/ + 0, /* tp_compare*/ + 0, /* tp_repr*/ + 0, /* tp_as_number*/ + 0, /* tp_as_sequence*/ + 0, /* tp_as_mapping*/ + 0, /* tp_hash */ + 0, /* tp_call*/ + 0, /* tp_str*/ + 0, /* tp_getattro*/ + 0, /* tp_setattro*/ + 0, /* tp_as_buffer*/ + Py_TPFLAGS_DEFAULT, /* tp_flags*/ + _THIS_DOC, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + 0, /* tp_new */ + 0, /* tp_free */ + 0, /* tp_is_gc */ + 0, /* tp_bases */ + 0, /* tp_mro */ + 0, /* tp_cache */ + 0, /* tp_subclasses */ + 0, /* tp_weaklist */ + 0, /* tp_del */ #ifdef COUNT_ALLOCS /* these must be last and never explicitly initialized */ - 0, /* tp_allocs */ - 0, /* tp_frees */ - 0, /* tp_maxalloc */ - 0, /* tp_prev */ - 0, /* *tp_next */ + 0, /* tp_allocs */ + 0, /* tp_frees */ + 0, /* tp_maxalloc */ + 0, /* tp_prev */ + 0, /* *tp_next */ #endif }; #undef _THIS_SIZE1 #undef _THIS_SIZE2 +#undef _THIS_DOC /**end repeat**/ @@ -3004,12 
+3255,15 @@ initialize_numeric_types(void) PyBoolArrType_Type.tp_as_number = &bool_arrtype_as_number; #if PY_VERSION_HEX >= 0x02050000 - /* need to add dummy versions with filled-in nb_index - in-order for PyType_Ready to fill in .__index__() method + /* + * need to add dummy versions with filled-in nb_index + * in-order for PyType_Ready to fill in .__index__() method */ /**begin repeat -#name=byte, short, int, long, longlong, ubyte, ushort, uint, ulong, ulonglong# -#NAME=Byte, Short, Int, Long, LongLong, UByte, UShort, UInt, ULong, ULongLong# + * #name = byte, short, int, long, longlong, ubyte, ushort, + * uint, ulong, ulonglong# + * #NAME = Byte, Short, Int, Long, LongLong, UByte, UShort, + * UInt, ULong, ULongLong# */ Py@NAME@ArrType_Type.tp_as_number = &@name@_arrtype_as_number; Py@NAME@ArrType_Type.tp_as_number->nb_index = (unaryfunc)@name@_index; @@ -3033,15 +3287,19 @@ initialize_numeric_types(void) PyVoidArrType_Type.tp_as_sequence = &voidtype_as_sequence; /**begin repeat -#NAME=Number, Integer, SignedInteger, UnsignedInteger, Inexact, Floating, -ComplexFloating, Flexible, Character# + * #NAME= Number, Integer, SignedInteger, UnsignedInteger, Inexact, + * Floating, ComplexFloating, Flexible, Character# */ Py@NAME@ArrType_Type.tp_flags = BASEFLAGS; /**end repeat**/ /**begin repeat -#name=bool, byte, short, int, long, longlong, ubyte, ushort, uint, ulong, ulonglong, float, double, longdouble, cfloat, cdouble, clongdouble, string, unicode, void, object# -#NAME=Bool, Byte, Short, Int, Long, LongLong, UByte, UShort, UInt, ULong, ULongLong, Float, Double, LongDouble, CFloat, CDouble, CLongDouble, String, Unicode, Void, Object# + * #name = bool, byte, short, int, long, longlong, ubyte, ushort, uint, + * ulong, ulonglong, float, double, longdouble, cfloat, cdouble, + * clongdouble, string, unicode, void, object# + * #NAME = Bool, Byte, Short, Int, Long, LongLong, UByte, UShort, UInt, + * ULong, ULongLong, Float, Double, LongDouble, CFloat, CDouble, + * CLongDouble, 
String, Unicode, Void, Object# */ Py@NAME@ArrType_Type.tp_flags = BASEFLAGS; Py@NAME@ArrType_Type.tp_new = @name@_arrtype_new; @@ -3049,8 +3307,10 @@ ComplexFloating, Flexible, Character# /**end repeat**/ /**begin repeat -#name=bool, byte, short, ubyte, ushort, uint, ulong, ulonglong, float, longdouble, cfloat, clongdouble, void, object# -#NAME=Bool, Byte, Short, UByte, UShort, UInt, ULong, ULongLong, Float, LongDouble, CFloat, CLongDouble, Void, Object# + * #name = bool, byte, short, ubyte, ushort, uint, ulong, ulonglong, + * float, longdouble, cfloat, clongdouble, void, object# + * #NAME = Bool, Byte, Short, UByte, UShort, UInt, ULong, ULongLong, + * Float, LongDouble, CFloat, CLongDouble, Void, Object# */ Py@NAME@ArrType_Type.tp_hash = @name@_arrtype_hash; /**end repeat**/ @@ -3066,7 +3326,7 @@ ComplexFloating, Flexible, Character# #endif /**begin repeat - *#name = repr, str# + * #name = repr, str# */ PyFloatArrType_Type.tp_@name@ = floattype_@name@; PyCFloatArrType_Type.tp_@name@ = cfloattype_@name@; @@ -3075,15 +3335,24 @@ ComplexFloating, Flexible, Character# PyCDoubleArrType_Type.tp_@name@ = cdoubletype_@name@; /**end repeat**/ - /* These need to be coded specially because getitem does not - return a normal Python type + PyFloatArrType_Type.tp_print = floattype_print; + PyDoubleArrType_Type.tp_print = doubletype_print; + PyLongDoubleArrType_Type.tp_print = longdoubletype_print; + + PyCFloatArrType_Type.tp_print = cfloattype_print; + PyCDoubleArrType_Type.tp_print = cdoubletype_print; + PyCLongDoubleArrType_Type.tp_print = clongdoubletype_print; + + /* + * These need to be coded specially because getitem does not + * return a normal Python type */ PyLongDoubleArrType_Type.tp_as_number = &longdoubletype_as_number; PyCLongDoubleArrType_Type.tp_as_number = &clongdoubletype_as_number; /**begin repeat - * #name=int, long, hex, oct, float, repr, str# - * #kind=tp_as_number->nb*5, tp*2# + * #name = int, long, hex, oct, float, repr, str# + * #kind = 
tp_as_number->nb*5, tp*2# */ PyLongDoubleArrType_Type.@kind@_@name@ = longdoubletype_@name@; PyCLongDoubleArrType_Type.@kind@_@name@ = clongdoubletype_@name@; @@ -3137,8 +3406,9 @@ _typenum_fromtypeobj(PyObject *type, int user) i++; } - if (!user) return typenum; - + if (!user) { + return typenum; + } /* Search any registered types */ i = 0; while (i < PyArray_NUMUSERTYPES) { @@ -3179,36 +3449,41 @@ PyArray_DescrFromTypeObject(PyObject *type) } /* Check the generic types */ - if ((type == (PyObject *) &PyNumberArrType_Type) || \ - (type == (PyObject *) &PyInexactArrType_Type) || \ - (type == (PyObject *) &PyFloatingArrType_Type)) + if ((type == (PyObject *) &PyNumberArrType_Type) || + (type == (PyObject *) &PyInexactArrType_Type) || + (type == (PyObject *) &PyFloatingArrType_Type)) { typenum = PyArray_DOUBLE; - else if (type == (PyObject *)&PyComplexFloatingArrType_Type) + } + else if (type == (PyObject *)&PyComplexFloatingArrType_Type) { typenum = PyArray_CDOUBLE; - else if ((type == (PyObject *)&PyIntegerArrType_Type) || \ - (type == (PyObject *)&PySignedIntegerArrType_Type)) + } + else if ((type == (PyObject *)&PyIntegerArrType_Type) || + (type == (PyObject *)&PySignedIntegerArrType_Type)) { typenum = PyArray_LONG; - else if (type == (PyObject *) &PyUnsignedIntegerArrType_Type) + } + else if (type == (PyObject *) &PyUnsignedIntegerArrType_Type) { typenum = PyArray_ULONG; - else if (type == (PyObject *) &PyCharacterArrType_Type) + } + else if (type == (PyObject *) &PyCharacterArrType_Type) { typenum = PyArray_STRING; - else if ((type == (PyObject *) &PyGenericArrType_Type) || \ - (type == (PyObject *) &PyFlexibleArrType_Type)) + } + else if ((type == (PyObject *) &PyGenericArrType_Type) || + (type == (PyObject *) &PyFlexibleArrType_Type)) { typenum = PyArray_VOID; + } if (typenum != PyArray_NOTYPE) { return PyArray_DescrFromType(typenum); } - /* Otherwise --- type is a sub-type of an array scalar - not corresponding to a registered data-type object. 
+ /* + * Otherwise --- type is a sub-type of an array scalar + * not corresponding to a registered data-type object. */ - /* Do special thing for VOID sub-types - */ + /* Do special thing for VOID sub-types */ if (PyType_IsSubtype((PyTypeObject *)type, &PyVoidArrType_Type)) { new = PyArray_DescrNewFromType(PyArray_VOID); - conv = _arraydescr_fromobj(type); if (conv) { new->fields = conv->fields; @@ -3229,8 +3504,8 @@ PyArray_DescrFromTypeObject(PyObject *type) } /*NUMPY_API - Return the tuple of ordered field names from a dictionary. -*/ + * Return the tuple of ordered field names from a dictionary. + */ static PyObject * PyArray_FieldNames(PyObject *fields) { @@ -3244,20 +3519,25 @@ PyArray_FieldNames(PyObject *fields) return NULL; } _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal == NULL) return NULL; + if (_numpy_internal == NULL) { + return NULL; + } tup = PyObject_CallMethod(_numpy_internal, "_makenames_list", "O", fields); Py_DECREF(_numpy_internal); - if (tup == NULL) return NULL; + if (tup == NULL) { + return NULL; + } ret = PyTuple_GET_ITEM(tup, 0); ret = PySequence_Tuple(ret); Py_DECREF(tup); return ret; } -/* New reference */ /*NUMPY_API - Return descr object from array scalar. -*/ + * Return descr object from array scalar. 
+ * + * New reference + */ static PyArray_Descr * PyArray_DescrFromScalar(PyObject *sc) { @@ -3273,8 +3553,9 @@ PyArray_DescrFromScalar(PyObject *sc) if (descr->elsize == 0) { PyArray_DESCR_REPLACE(descr); type_num = descr->type_num; - if (type_num == PyArray_STRING) + if (type_num == PyArray_STRING) { descr->elsize = PyString_GET_SIZE(sc); + } else if (type_num == PyArray_UNICODE) { descr->elsize = PyUnicode_GET_DATA_SIZE(sc); #ifndef Py_UNICODE_WIDE @@ -3290,18 +3571,20 @@ PyArray_DescrFromScalar(PyObject *sc) Py_XDECREF(descr->fields); descr->fields = NULL; } - if (descr->fields) + if (descr->fields) { descr->names = PyArray_FieldNames(descr->fields); + } PyErr_Clear(); } } return descr; } -/* New reference */ /*NUMPY_API - Get a typeobject from a type-number -- can return NULL. -*/ + * Get a typeobject from a type-number -- can return NULL. + * + * New reference + */ static PyObject * PyArray_TypeObjectFromType(int type) { @@ -3309,7 +3592,9 @@ PyArray_TypeObjectFromType(int type) PyObject *obj; descr = PyArray_DescrFromType(type); - if (descr == NULL) return NULL; + if (descr == NULL) { + return NULL; + } obj = (PyObject *)descr->typeobj; Py_XINCREF(obj); Py_DECREF(descr); diff --git a/numpy/core/tests/test_memmap.py b/numpy/core/tests/test_memmap.py index 2b8f276e3..d80724dea 100644 --- a/numpy/core/tests/test_memmap.py +++ b/numpy/core/tests/test_memmap.py @@ -14,6 +14,9 @@ class TestMemmap(TestCase): self.data = arange(12, dtype=self.dtype) self.data.resize(self.shape) + def tearDown(self): + self.tmpfp.close() + def test_roundtrip(self): # Write data to file fp = memmap(self.tmpfp, dtype=self.dtype, mode='w+', diff --git a/numpy/core/tests/test_multiarray.py b/numpy/core/tests/test_multiarray.py index ccfbe354c..7bc9875ab 100644 --- a/numpy/core/tests/test_multiarray.py +++ b/numpy/core/tests/test_multiarray.py @@ -1,9 +1,12 @@ import tempfile import sys +import os import numpy as np from numpy.testing import * from numpy.core import * +from test_print 
import in_foreign_locale + class TestFlags(TestCase): def setUp(self): self.a = arange(10) @@ -113,41 +116,6 @@ class TestDtypedescr(TestCase): d2 = dtype('f8') assert_equal(d2, dtype(float64)) - -class TestFromstring(TestCase): - def test_binary(self): - a = fromstring('\x00\x00\x80?\x00\x00\x00@\x00\x00@@\x00\x00\x80@',dtype='<f4') - assert_array_equal(a, array([1,2,3,4])) - - def test_string(self): - a = fromstring('1,2,3,4', sep=',') - assert_array_equal(a, [1., 2., 3., 4.]) - - def test_counted_string(self): - a = fromstring('1,2,3,4', count=4, sep=',') - assert_array_equal(a, [1., 2., 3., 4.]) - a = fromstring('1,2,3,4', count=3, sep=',') - assert_array_equal(a, [1., 2., 3.]) - - def test_string_with_ws(self): - a = fromstring('1 2 3 4 ', dtype=int, sep=' ') - assert_array_equal(a, [1, 2, 3, 4]) - - def test_counted_string_with_ws(self): - a = fromstring('1 2 3 4 ', count=3, dtype=int, sep=' ') - assert_array_equal(a, [1, 2, 3]) - - def test_ascii(self): - a = fromstring('1 , 2 , 3 , 4', sep=',') - b = fromstring('1,2,3,4', dtype=float, sep=',') - assert_array_equal(a, [1.,2.,3.,4.]) - assert_array_equal(a,b) - - def test_malformed(self): - a = fromstring('1.234 1,234', sep=' ') - assert_array_equal(a, [1.234, 1.]) - - class TestZeroRank(TestCase): def setUp(self): self.d = array(0), array('x', object) @@ -812,42 +780,154 @@ class TestLexsort(TestCase): assert_array_equal(x[1][idx],np.sort(x[1])) -class TestFromToFile(TestCase): +class TestIO(object): + """Test tofile, fromfile, tostring, and fromstring""" + def setUp(self): - shape = (4,7) + shape = (2,4,3) rand = np.random.random - self.x = rand(shape) + rand(shape).astype(np.complex)*1j + self.x[0,:,1] = [nan, inf, -inf, nan] self.dtype = self.x.dtype + self.filename = tempfile.mktemp() - def test_file(self): - # Test disabled on Windows, since the tempfile does not flush - # properly. 
The test ensures that both filenames and file - # objects are accepted in tofile and fromfile, so as long as - # it runs on at least one platform, we should be ok. - if not sys.platform.startswith('win'): - tmp_file = tempfile.NamedTemporaryFile('wb', - prefix='numpy_tofromfile') - self.x.tofile(tmp_file.file) - tmp_file.flush() - y = np.fromfile(tmp_file.name,dtype=self.dtype) - assert_array_equal(y,self.x.flat) - - def test_filename(self): - filename = tempfile.mktemp() - f = open(filename,'wb') + def tearDown(self): + if os.path.isfile(self.filename): + os.unlink(self.filename) + #tmp_file.close() + + def test_roundtrip_file(self): + f = open(self.filename, 'wb') self.x.tofile(f) f.close() - y = np.fromfile(filename,dtype=self.dtype) - assert_array_equal(y,self.x.flat) + # NB. doesn't work with flush+seek, due to use of C stdio + f = open(self.filename, 'rb') + y = np.fromfile(f, dtype=self.dtype) + f.close() + assert_array_equal(y, self.x.flat) + os.unlink(self.filename) + + def test_roundtrip_filename(self): + self.x.tofile(self.filename) + y = np.fromfile(self.filename, dtype=self.dtype) + assert_array_equal(y, self.x.flat) + + def test_roundtrip_binary_str(self): + s = self.x.tostring() + y = np.fromstring(s, dtype=self.dtype) + assert_array_equal(y, self.x.flat) + + s = self.x.tostring('F') + y = np.fromstring(s, dtype=self.dtype) + assert_array_equal(y, self.x.flatten('F')) + + def test_roundtrip_str(self): + x = self.x.real.ravel() + s = "@".join(map(str, x)) + y = np.fromstring(s, sep="@") + # NB. 
str imbues less precision + nan_mask = ~np.isfinite(x) + assert_array_equal(x[nan_mask], y[nan_mask]) + assert_array_almost_equal(x[~nan_mask], y[~nan_mask], decimal=5) + + def test_roundtrip_repr(self): + x = self.x.real.ravel() + s = "@".join(map(repr, x)) + y = np.fromstring(s, sep="@") + assert_array_equal(x, y) + + def _check_from(self, s, value, **kw): + y = np.fromstring(s, **kw) + assert_array_equal(y, value) + + f = open(self.filename, 'wb') + f.write(s) + f.close() + y = np.fromfile(self.filename, **kw) + assert_array_equal(y, value) + + def test_nan(self): + self._check_from("nan +nan -nan NaN nan(foo) +NaN(BAR) -NAN(q_u_u_x_)", + [nan, nan, nan, nan, nan, nan, nan], + sep=' ') + + def test_inf(self): + self._check_from("inf +inf -inf infinity -Infinity iNfInItY -inF", + [inf, inf, -inf, inf, -inf, inf, -inf], sep=' ') + + def test_numbers(self): + self._check_from("1.234 -1.234 .3 .3e55 -123133.1231e+133", + [1.234, -1.234, .3, .3e55, -123133.1231e+133], sep=' ') + + def test_binary(self): + self._check_from('\x00\x00\x80?\x00\x00\x00@\x00\x00@@\x00\x00\x80@', + array([1,2,3,4]), + dtype='<f4') + + def test_string(self): + self._check_from('1,2,3,4', [1., 2., 3., 4.], sep=',') + + def test_counted_string(self): + self._check_from('1,2,3,4', [1., 2., 3., 4.], count=4, sep=',') + self._check_from('1,2,3,4', [1., 2., 3.], count=3, sep=',') + self._check_from('1,2,3,4', [1., 2., 3., 4.], count=-1, sep=',') + + def test_string_with_ws(self): + self._check_from('1 2 3 4 ', [1, 2, 3, 4], dtype=int, sep=' ') + + def test_counted_string_with_ws(self): + self._check_from('1 2 3 4 ', [1,2,3], count=3, dtype=int, + sep=' ') + + def test_ascii(self): + self._check_from('1 , 2 , 3 , 4', [1.,2.,3.,4.], sep=',') + self._check_from('1,2,3,4', [1.,2.,3.,4.], dtype=float, sep=',') def test_malformed(self): - filename = tempfile.mktemp() - f = open(filename,'w') - f.write("1.234 1,234") + self._check_from('1.234 1,234', [1.234, 1.], sep=' ') + + def test_long_sep(self): + 
self._check_from('1_x_3_x_4_x_5', [1,3,4,5], sep='_x_') + + def test_dtype(self): + v = np.array([1,2,3,4], dtype=np.int_) + self._check_from('1,2,3,4', v, sep=',', dtype=np.int_) + + def test_tofile_sep(self): + x = np.array([1.51, 2, 3.51, 4], dtype=float) + f = open(self.filename, 'w') + x.tofile(f, sep=',') + f.close() + f = open(self.filename, 'r') + s = f.read() + f.close() + assert_equal(s, '1.51,2.0,3.51,4.0') + os.unlink(self.filename) + + def test_tofile_format(self): + x = np.array([1.51, 2, 3.51, 4], dtype=float) + f = open(self.filename, 'w') + x.tofile(f, sep=',', format='%.2f') + f.close() + f = open(self.filename, 'r') + s = f.read() f.close() - y = np.fromfile(filename, sep=' ') - assert_array_equal(y, [1.234, 1.]) + assert_equal(s, '1.51,2.00,3.51,4.00') + + @in_foreign_locale + def _run_in_foreign_locale(self, func, fail=False): + np.testing.dec.knownfailureif(fail)(func)(self) + + def test_locale(self): + yield self._run_in_foreign_locale, TestIO.test_numbers + yield self._run_in_foreign_locale, TestIO.test_nan + yield self._run_in_foreign_locale, TestIO.test_inf + yield self._run_in_foreign_locale, TestIO.test_counted_string + yield self._run_in_foreign_locale, TestIO.test_ascii + yield self._run_in_foreign_locale, TestIO.test_malformed + yield self._run_in_foreign_locale, TestIO.test_tofile_sep + yield self._run_in_foreign_locale, TestIO.test_tofile_format + class TestFromBuffer(TestCase): def tst_basic(self,buffer,expected,kwargs): @@ -951,7 +1031,7 @@ class TestChoose(TestCase): self.x = 2*ones((3,),dtype=int) self.y = 3*ones((3,),dtype=int) self.x2 = 2*ones((2,3), dtype=int) - self.y2 = 3*ones((2,3), dtype=int) + self.y2 = 3*ones((2,3), dtype=int) self.ind = [0,0,1] def test_basic(self): @@ -961,11 +1041,11 @@ class TestChoose(TestCase): def test_broadcast1(self): A = np.choose(self.ind, (self.x2, self.y2)) assert_equal(A, [[2,2,3],[2,2,3]]) - + def test_broadcast2(self): A = np.choose(self.ind, (self.x, self.y2)) assert_equal(A, 
[[2,2,3],[2,2,3]]) - + if __name__ == "__main__": run_module_suite() diff --git a/numpy/core/tests/test_numerictypes.py b/numpy/core/tests/test_numerictypes.py index 745f48737..4e0bb462b 100644 --- a/numpy/core/tests/test_numerictypes.py +++ b/numpy/core/tests/test_numerictypes.py @@ -97,7 +97,7 @@ def normalize_descr(descr): # Creation tests ############################################################ -class create_zeros: +class create_zeros(object): """Check the creation of heterogeneous arrays zero-valued""" def test_zeros0D(self): @@ -140,7 +140,7 @@ class test_create_zeros_nested(create_zeros, TestCase): _descr = Ndescr -class create_values: +class create_values(object): """Check the creation of heterogeneous arrays with values""" def test_tuple(self): @@ -200,7 +200,7 @@ class test_create_values_nested_multiple(create_values, TestCase): # Reading tests ############################################################ -class read_values_plain: +class read_values_plain(object): """Check the reading of values in heterogeneous arrays (plain)""" def test_access_fields(self): @@ -232,7 +232,7 @@ class test_read_values_plain_multiple(read_values_plain, TestCase): multiple_rows = 1 _buffer = PbufferT -class read_values_nested: +class read_values_nested(object): """Check the reading of values in heterogeneous arrays (nested)""" @@ -353,6 +353,16 @@ class TestCommonType(TestCase): res = np.find_common_type(['u8','i8','i8'],['f8']) assert(res == 'f8') +class TestMultipleFields(TestCase): + def setUp(self): + self.ary = np.array([(1,2,3,4),(5,6,7,8)], dtype='i4,f4,i2,c8') + def _bad_call(self): + return self.ary['f0','f1'] + def test_no_tuple(self): + self.failUnlessRaises(ValueError, self._bad_call) + def test_return(self): + res = self.ary[['f0','f2']].tolist() + assert(res == [(1,3), (5,7)]) if __name__ == "__main__": run_module_suite() diff --git a/numpy/core/tests/test_print.py b/numpy/core/tests/test_print.py index 368dd9cfb..a94cc36d2 100644 --- 
a/numpy/core/tests/test_print.py +++ b/numpy/core/tests/test_print.py @@ -1,34 +1,197 @@ import numpy as np from numpy.testing import * +import nose -class TestPrint(TestCase): +import locale +import sys +from StringIO import StringIO - def test_float_types(self) : - """ Check formatting. +_REF = {np.inf: 'inf', -np.inf: '-inf', np.nan: 'nan'} - This is only for the str function, and only for simple types. - The precision of np.float and np.longdouble aren't the same as the - python float precision. - """ - for t in [np.float, np.double, np.longdouble] : - for x in [0, 1,-1, 1e10, 1e20] : - assert_equal(str(t(x)), str(float(x))) +def check_float_type(tp): + for x in [0, 1,-1, 1e20] : + assert_equal(str(tp(x)), str(float(x)), + err_msg='Failed str formatting for type %s' % tp) - def test_complex_types(self) : - """Check formatting. + if tp(1e10).itemsize > 4: + assert_equal(str(tp(1e10)), str(float('1e10')), + err_msg='Failed str formatting for type %s' % tp) + else: + if sys.platform == 'win32' and sys.version_info[0] <= 2 and \ + sys.version_info[1] <= 5: + ref = '1e+010' + else: + ref = '1e+10' + assert_equal(str(tp(1e10)), ref, + err_msg='Failed str formatting for type %s' % tp) - This is only for the str function, and only for simple types. - The precision of np.float and np.longdouble aren't the same as the - python float precision. +#@dec.knownfailureif(True, "formatting tests are known to fail") +def test_float_types(): + """ Check formatting. - """ - for t in [np.cfloat, np.cdouble, np.clongdouble] : - for x in [0, 1,-1, 1e10, 1e20] : - assert_equal(str(t(x)), str(complex(x))) - assert_equal(str(t(x*1j)), str(complex(x*1j))) - assert_equal(str(t(x + x*1j)), str(complex(x + x*1j))) + This is only for the str function, and only for simple types. + The precision of np.float and np.longdouble aren't the same as the + python float precision. 
+ """ + for t in [np.float32, np.double, np.longdouble] : + yield check_float_type, t + +def check_nan_inf_float(tp): + for x in [np.inf, -np.inf, np.nan]: + assert_equal(str(tp(x)), _REF[x], + err_msg='Failed str formatting for type %s' % tp) + +#@dec.knownfailureif(True, "formatting tests are known to fail") +def test_nan_inf_float(): + """ Check formatting of nan & inf. + + This is only for the str function, and only for simple types. + The precision of np.float and np.longdouble aren't the same as the + python float precision. + + """ + for t in [np.float32, np.double, np.longdouble] : + yield check_nan_inf_float, t + +def check_complex_type(tp): + for x in [0, 1,-1, 1e20] : + assert_equal(str(tp(x)), str(complex(x)), + err_msg='Failed str formatting for type %s' % tp) + assert_equal(str(tp(x*1j)), str(complex(x*1j)), + err_msg='Failed str formatting for type %s' % tp) + assert_equal(str(tp(x + x*1j)), str(complex(x + x*1j)), + err_msg='Failed str formatting for type %s' % tp) + + if tp(1e10).itemsize > 8: + assert_equal(str(tp(1e10)), str(complex(1e10)), + err_msg='Failed str formatting for type %s' % tp) + else: + if sys.platform == 'win32' and sys.version_info[0] <= 2 and \ + sys.version_info[1] <= 5: + ref = '(1e+010+0j)' + else: + ref = '(1e+10+0j)' + assert_equal(str(tp(1e10)), ref, + err_msg='Failed str formatting for type %s' % tp) + +#@dec.knownfailureif(True, "formatting tests are known to fail") +def test_complex_types(): + """Check formatting of complex types. + + This is only for the str function, and only for simple types. + The precision of np.float and np.longdouble aren't the same as the + python float precision. 
+ + """ + for t in [np.complex64, np.cdouble, np.clongdouble] : + yield check_complex_type, t + +# print tests +def _test_redirected_print(x, tp, ref=None): + file = StringIO() + file_tp = StringIO() + stdout = sys.stdout + try: + sys.stdout = file_tp + print tp(x) + sys.stdout = file + if ref: + print ref + else: + print x + finally: + sys.stdout = stdout + + assert_equal(file.getvalue(), file_tp.getvalue(), + err_msg='print failed for type%s' % tp) + +def check_float_type_print(tp): + for x in [0, 1,-1, 1e20]: + _test_redirected_print(float(x), tp) + + for x in [np.inf, -np.inf, np.nan]: + _test_redirected_print(float(x), tp, _REF[x]) + + if tp(1e10).itemsize > 4: + _test_redirected_print(float(1e10), tp) + else: + if sys.platform == 'win32' and sys.version_info[0] <= 2 and \ + sys.version_info[1] <= 5: + ref = '1e+010' + else: + ref = '1e+10' + _test_redirected_print(float(1e10), tp, ref) + +#@dec.knownfailureif(True, "formatting tests are known to fail") +def check_complex_type_print(tp): + # We do not create complex with inf/nan directly because the feature is + # missing in python < 2.6 + for x in [0, 1, -1, 1e20]: + _test_redirected_print(complex(x), tp) + + if tp(1e10).itemsize > 8: + _test_redirected_print(complex(1e10), tp) + else: + if sys.platform == 'win32' and sys.version_info[0] <= 2 and \ + sys.version_info[1] <= 5: + ref = '(1e+010+0j)' + else: + ref = '(1e+10+0j)' + _test_redirected_print(complex(1e10), tp, ref) + + _test_redirected_print(complex(np.inf, 1), tp, '(inf+1j)') + _test_redirected_print(complex(-np.inf, 1), tp, '(-inf+1j)') + _test_redirected_print(complex(-np.nan, 1), tp, '(nan+1j)') + +def test_float_type_print(): + """Check formatting when using print """ + for t in [np.float32, np.double, np.longdouble] : + yield check_float_type_print, t + +#@dec.knownfailureif(True, "formatting tests are known to fail") +def test_complex_type_print(): + """Check formatting when using print """ + for t in [np.complex64, np.cdouble, np.clongdouble] 
: + yield check_complex_type_print, t + +# Locale tests: scalar types formatting should be independent of the locale +def in_foreign_locale(func): + # XXX: How to query locale on a given system ? + + # French is one language where the decimal is ',' not '.', and should be + # relatively common on many systems + def wrapper(*args, **kwargs): + curloc = locale.getlocale(locale.LC_NUMERIC) + try: + try: + if not sys.platform == 'win32': + locale.setlocale(locale.LC_NUMERIC, 'fr_FR') + else: + locale.setlocale(locale.LC_NUMERIC, 'FRENCH') + except locale.Error: + raise nose.SkipTest("Skipping locale test, because " + "French locale not found") + return func(*args, **kwargs) + finally: + locale.setlocale(locale.LC_NUMERIC, locale=curloc) + return nose.tools.make_decorator(func)(wrapper) + +#@dec.knownfailureif(True, "formatting tests are known to fail") +@in_foreign_locale +def test_locale_single(): + assert_equal(str(np.float32(1.2)), str(float(1.2))) + +#@dec.knownfailureif(True, "formatting tests are known to fail") +@in_foreign_locale +def test_locale_double(): + assert_equal(str(np.double(1.2)), str(float(1.2))) + +#@dec.knownfailureif(True, "formatting tests are known to fail") +@in_foreign_locale +def test_locale_longdouble(): + assert_equal(str(np.longdouble(1.2)), str(float(1.2))) if __name__ == "__main__": run_module_suite() diff --git a/numpy/core/tests/test_regression.py b/numpy/core/tests/test_regression.py index c6ba51b3e..aab2870dc 100644 --- a/numpy/core/tests/test_regression.py +++ b/numpy/core/tests/test_regression.py @@ -1,7 +1,7 @@ - from StringIO import StringIO import pickle import sys +import gc from os import path from numpy.testing import * import numpy as np @@ -1208,5 +1208,17 @@ class TestRegression(TestCase): a = np.array(1) self.failUnlessRaises(ValueError, lambda x: x.choose([]), a) + def test_errobj_reference_leak(self, level=rlevel): + """Ticket #955""" + z = int(0) + p = np.int32(-1) + + gc.collect() + n_before = len(gc.get_objects()) + 
z**p # this shouldn't leak a reference to errobj + gc.collect() + n_after = len(gc.get_objects()) + assert n_before >= n_after, (n_before, n_after) + if __name__ == "__main__": run_module_suite() diff --git a/numpy/core/tests/test_unicode.py b/numpy/core/tests/test_unicode.py index 4968b28ac..3588e3d35 100644 --- a/numpy/core/tests/test_unicode.py +++ b/numpy/core/tests/test_unicode.py @@ -17,7 +17,7 @@ ucs4_value = u'\U0010FFFF' # Creation tests ############################################################ -class create_zeros: +class create_zeros(object): """Check the creation of zero-valued arrays""" def content_check(self, ua, ua_scalar, nbytes): @@ -69,7 +69,7 @@ class test_create_zeros_1009(create_zeros, TestCase): ulen = 1009 -class create_values: +class create_values(object): """Check the creation of unicode arrays with values""" def content_check(self, ua, ua_scalar, nbytes): @@ -154,7 +154,7 @@ class test_create_values_1009_ucs4(create_values, TestCase): # Assignment tests ############################################################ -class assign_values: +class assign_values(object): """Check the assignment of unicode arrays with values""" def content_check(self, ua, ua_scalar, nbytes): diff --git a/numpy/ctypeslib.py b/numpy/ctypeslib.py index 2d868d017..eb66b570d 100644 --- a/numpy/ctypeslib.py +++ b/numpy/ctypeslib.py @@ -353,8 +353,3 @@ if ctypes is not None: result = tp.from_address(addr) result.__keep = ai return result - - -def test(level=1, verbosity=1): - from numpy.testing import NumpyTest - return NumpyTest().test(level, verbosity) diff --git a/numpy/distutils/command/config.py b/numpy/distutils/command/config.py index d24d60598..408b9f0b4 100644 --- a/numpy/distutils/command/config.py +++ b/numpy/distutils/command/config.py @@ -5,11 +5,13 @@ import os, signal import warnings +import sys from distutils.command.config import config as old_config from distutils.command.config import LANG_EXT from distutils import log from distutils.file_util import 
copy_file +import distutils from numpy.distutils.exec_command import exec_command from numpy.distutils.mingw32ccompiler import generate_manifest @@ -39,6 +41,30 @@ class config(old_config): def _check_compiler (self): old_config._check_compiler(self) from numpy.distutils.fcompiler import FCompiler, new_fcompiler + + if sys.platform == 'win32' and self.compiler.compiler_type == 'msvc': + # XXX: hack to circumvent a python 2.6 bug with msvc9compiler: + # initialize calls query_vcvarsall, which throws an IOError, and + # causes an error along the way without much information. We try to + # catch it here, hoping it is early enough, and print a helpful + # message instead of Error: None. + if not self.compiler.initialized: + try: + self.compiler.initialize() + except IOError, e: + msg = """\ +Could not initialize compiler instance: do you have Visual Studio +installed? If you are trying to build with mingw, please use "python setup.py +build -c mingw32" instead. If you have Visual Studio installed, check that it +is correctly installed and that you have the right version (VS 2008 for python +2.6, VS 2003 for 2.5, etc.). 
Original exception was: %s, and the Compiler +class was %s +============================================================================""" \ + % (e, self.compiler.__class__.__name__) + print """\ +============================================================================""" + raise distutils.errors.DistutilsPlatformError(msg) + if not isinstance(self.fcompiler, FCompiler): self.fcompiler = new_fcompiler(compiler=self.fcompiler, dry_run=self.dry_run, force=1, diff --git a/numpy/distutils/command/scons.py b/numpy/distutils/command/scons.py index d5303bb29..5f4c42ba4 100644 --- a/numpy/distutils/command/scons.py +++ b/numpy/distutils/command/scons.py @@ -361,9 +361,13 @@ class scons(old_build_ext): try: minver = "0.9.3" - from numscons import get_version - if get_version() < minver: - raise ValueError() + try: + # version_info was added in 0.10.0 + from numscons import version_info + except ImportError: + from numscons import get_version + if get_version() < minver: + raise ValueError() except ImportError: raise RuntimeError("You need numscons >= %s to build numpy "\ "with numscons (imported numscons path " \ diff --git a/numpy/distutils/fcompiler/compaq.py b/numpy/distutils/fcompiler/compaq.py index 9d6d9c5ab..ca2595e83 100644 --- a/numpy/distutils/fcompiler/compaq.py +++ b/numpy/distutils/fcompiler/compaq.py @@ -79,12 +79,16 @@ class CompaqVisualFCompiler(FCompiler): m.initialize() ar_exe = m.lib except DistutilsPlatformError, msg: - print 'Ignoring "%s" (one should fix me in fcompiler/compaq.py)' % (msg) + pass except AttributeError, msg: if '_MSVCCompiler__root' in str(msg): print 'Ignoring "%s" (I think it is msvccompiler.py bug)' % (msg) else: raise + except IOError, e: + if not "vcvarsall.bat" in str(e): + print "Unexpected IOError in", __file__ + raise e executables = { 'version_cmd' : ['<F90>', "/what"], diff --git a/numpy/distutils/fcompiler/gnu.py b/numpy/distutils/fcompiler/gnu.py index 7b642aff3..1fb4d3e25 100644 --- a/numpy/distutils/fcompiler/gnu.py 
+++ b/numpy/distutils/fcompiler/gnu.py @@ -87,21 +87,29 @@ class GnuFCompiler(FCompiler): def get_flags_linker_so(self): opt = self.linker_so[1:] if sys.platform=='darwin': - # MACOSX_DEPLOYMENT_TARGET must be at least 10.3. This is - # a reasonable default value even when building on 10.4 when using - # the official Python distribution and those derived from it (when - # not broken). target = os.environ.get('MACOSX_DEPLOYMENT_TARGET', None) - if target is None or target == '': - target = '10.3' - major, minor = target.split('.') - if int(minor) < 3: - minor = '3' - warnings.warn('Environment variable ' - 'MACOSX_DEPLOYMENT_TARGET reset to %s.%s' % (major, minor)) - os.environ['MACOSX_DEPLOYMENT_TARGET'] = '%s.%s' % (major, - minor) - + # If MACOSX_DEPLOYMENT_TARGET is set, we simply trust the value + # and leave it alone. But distutils will complain if the + # environment's value is different from the one in the Python + # Makefile used to build Python. We let distutils handle this + # error checking. + if not target: + # If MACOSX_DEPLOYMENT_TARGET is not set in the environment, + # we try to get it first from the Python Makefile and then we + # fall back to setting it to 10.3 to maximize the set of + # versions we can work with. This is a reasonable default + # even when using the official Python dist and those derived + # from it. + import distutils.sysconfig as sc + g = {} + filename = sc.get_makefile_filename() + sc.parse_makefile(filename, g) + target = g.get('MACOSX_DEPLOYMENT_TARGET', '10.3') + os.environ['MACOSX_DEPLOYMENT_TARGET'] = target + if target == '10.3': + s = 'Env. 
variable MACOSX_DEPLOYMENT_TARGET set to 10.3' + warnings.warn(s) + opt.extend(['-undefined', 'dynamic_lookup', '-bundle']) else: opt.append("-shared") @@ -272,30 +280,30 @@ class Gnu95FCompiler(GnuFCompiler): def get_library_dirs(self): opt = GnuFCompiler.get_library_dirs(self) - if sys.platform == 'win32': - c_compiler = self.c_compiler - if c_compiler and c_compiler.compiler_type == "msvc": - target = self.get_target() - if target: + if sys.platform == 'win32': + c_compiler = self.c_compiler + if c_compiler and c_compiler.compiler_type == "msvc": + target = self.get_target() + if target: d = os.path.normpath(self.get_libgcc_dir()) - root = os.path.join(d, os.pardir, os.pardir, os.pardir, os.pardir) - mingwdir = os.path.normpath(os.path.join(root, target, "lib")) - full = os.path.join(mingwdir, "libmingwex.a") - if os.path.exists(full): - opt.append(mingwdir) - return opt + root = os.path.join(d, os.pardir, os.pardir, os.pardir, os.pardir) + mingwdir = os.path.normpath(os.path.join(root, target, "lib")) + full = os.path.join(mingwdir, "libmingwex.a") + if os.path.exists(full): + opt.append(mingwdir) + return opt def get_libraries(self): opt = GnuFCompiler.get_libraries(self) if sys.platform == 'darwin': opt.remove('cc_dynamic') - if sys.platform == 'win32': - c_compiler = self.c_compiler - if c_compiler and c_compiler.compiler_type == "msvc": - if "gcc" in opt: - i = opt.index("gcc") - opt.insert(i+1, "mingwex") - opt.insert(i+1, "mingw32") + if sys.platform == 'win32': + c_compiler = self.c_compiler + if c_compiler and c_compiler.compiler_type == "msvc": + if "gcc" in opt: + i = opt.index("gcc") + opt.insert(i+1, "mingwex") + opt.insert(i+1, "mingw32") return opt def get_target(self): @@ -303,9 +311,9 @@ class Gnu95FCompiler(GnuFCompiler): ['-v'], use_tee=0) if not status: - m = TARGET_R.search(output) - if m: - return m.group(1) + m = TARGET_R.search(output) + if m: + return m.group(1) return "" if __name__ == '__main__': diff --git a/numpy/distutils/lib2def.py 
b/numpy/distutils/lib2def.py index 583f244c0..a486b13bd 100644 --- a/numpy/distutils/lib2def.py +++ b/numpy/distutils/lib2def.py @@ -1,6 +1,7 @@ import re import sys import os +import subprocess __doc__ = """This module generates a DEF file from the symbols in an MSVC-compiled DLL import library. It correctly discriminates between @@ -59,13 +60,13 @@ libfile, deffile = parse_cmd()""" deffile = None return libfile, deffile -def getnm(nm_cmd = 'nm -Cs python%s.lib' % py_ver): +def getnm(nm_cmd = ['nm', '-Cs', 'python%s.lib' % py_ver]): """Returns the output of nm_cmd via a pipe. nm_output = getnm(nm_cmd = 'nm -Cs py_lib')""" - f = os.popen(nm_cmd) - nm_output = f.read() - f.close() + f = subprocess.Popen(nm_cmd, shell=True, stdout=subprocess.PIPE) + nm_output = f.stdout.read() + f.stdout.close() return nm_output def parse_nm(nm_output): @@ -107,7 +108,7 @@ if __name__ == '__main__': deffile = sys.stdout else: deffile = open(deffile, 'w') - nm_cmd = '%s %s' % (DEFAULT_NM, libfile) + nm_cmd = [str(DEFAULT_NM), str(libfile)] nm_output = getnm(nm_cmd) dlist, flist = parse_nm(nm_output) output_def(dlist, flist, DEF_HEADER, deffile) diff --git a/numpy/distutils/mingw32ccompiler.py b/numpy/distutils/mingw32ccompiler.py index 15da9471b..989d2155d 100644 --- a/numpy/distutils/mingw32ccompiler.py +++ b/numpy/distutils/mingw32ccompiler.py @@ -9,6 +9,7 @@ Support code for building Python extensions on Windows. 
""" import os +import subprocess import sys import log import subprocess @@ -56,9 +57,10 @@ class Mingw32CCompiler(distutils.cygwinccompiler.CygwinCCompiler): # get_versions methods regex if self.gcc_version is None: import re - out = os.popen('gcc -dumpversion','r') - out_string = out.read() - out.close() + p = subprocess.Popen(['gcc', '-dumpversion'], shell=True, + stdout=subprocess.PIPE) + out_string = p.stdout.read() + p.stdout.close() result = re.search('(\d+\.\d+)',out_string) if result: self.gcc_version = StrictVersion(result.group(1)) @@ -336,23 +338,37 @@ def _build_import_library_x86(): # raise DistutilsPlatformError, msg return +#===================================== +# Dealing with Visual Studio MANIFESTS +#===================================== + # Functions to deal with visual studio manifests. Manifest are a mechanism to # enforce strong DLL versioning on windows, and has nothing to do with # distutils MANIFEST. manifests are XML files with version info, and used by -# the OS loader; they are necessary when linking against a DLL no in the system -# path; in particular, python 2.6 is built against the MS runtime 9 (the one -# from VS 2008), which is not available on most windows systems; python 2.6 -# installer does install it in the Win SxS (Side by side) directory, but this -# requires the manifest too. This is a big mess, thanks MS for a wonderful -# system. - -# XXX: ideally, we should use exactly the same version as used by python, but I -# have no idea how to obtain the exact version from python. We could use the -# strings utility on python.exe, maybe ? 
-_MSVCRVER_TO_FULLVER = {'90': "9.0.21022.8", - # I took one version in my SxS directory: no idea if it is the good - # one, and we can't retrieve it from python - '80': "8.0.50727.42"} +# the OS loader; they are necessary when linking against a DLL not in the +# system path; in particular, official python 2.6 binary is built against the +# MS runtime 9 (the one from VS 2008), which is not available on most windows +# systems; python 2.6 installer does install it in the Win SxS (Side by side) +# directory, but this requires the manifest for this to work. This is a big +# mess, thanks MS for a wonderful system. + +# XXX: ideally, we should use exactly the same version as used by python. I +# submitted a patch to get this version, but it was only included for python +# 2.6.1 and above. So for versions below, we use a "best guess". +_MSVCRVER_TO_FULLVER = {} +if sys.platform == 'win32': + try: + import msvcrt + if hasattr(msvcrt, "CRT_ASSEMBLY_VERSION"): + _MSVCRVER_TO_FULLVER['90'] = msvcrt.CRT_ASSEMBLY_VERSION + else: + _MSVCRVER_TO_FULLVER['90'] = "9.0.21022.8" + _MSVCRVER_TO_FULLVER['80'] = "8.0.50727.42" + except ImportError: + # If we are here, means python was not built with MSVC. 
Not sure what to do + in that case: manifest building will fail, but it should not be used in + that case anyway + log.warn('Cannot import msvcrt: using manifest will not be possible') def msvc_manifest_xml(maj, min): """Given a major and minor version of the MSVCR, returns the diff --git a/numpy/distutils/misc_util.py b/numpy/distutils/misc_util.py index 23848b72a..1ba44d89f 100644 --- a/numpy/distutils/misc_util.py +++ b/numpy/distutils/misc_util.py @@ -6,6 +6,7 @@ import copy import glob import atexit import tempfile +import subprocess try: set @@ -1340,7 +1341,10 @@ class Configuration(object): revision = None m = None try: - sin, sout = os.popen4('svnversion') + p = subprocess.Popen(['svnversion'], shell=True, + stdout=subprocess.PIPE, stderr=subprocess.STDOUT, + close_fds=True) + sout = p.stdout m = re.match(r'(?P<revision>\d+)', sout.read()) except: pass diff --git a/numpy/distutils/system_info.py b/numpy/distutils/system_info.py index 1f5cb4676..5f5f088ea 100644 --- a/numpy/distutils/system_info.py +++ b/numpy/distutils/system_info.py @@ -128,6 +128,50 @@ from numpy.distutils.exec_command import \ from numpy.distutils.misc_util import is_sequence, is_string from numpy.distutils.command.config import config as cmd_config +# Determine number of bits +import platform +_bits = {'32bit':32,'64bit':64} +platform_bits = _bits[platform.architecture()[0]] + +def libpaths(paths,bits): + """Return a list of library paths valid on 32 or 64 bit systems. + + Inputs: + paths : sequence + A sequence of strings (typically paths) + bits : int + An integer, the only valid values are 32 or 64. A ValueError exception + is raised otherwise. 
+ + Examples: + + Consider a list of directories + >>> paths = ['/usr/X11R6/lib','/usr/X11/lib','/usr/lib'] + + For a 32-bit platform, this is already valid: + >>> libpaths(paths,32) + ['/usr/X11R6/lib', '/usr/X11/lib', '/usr/lib'] + + On a 64-bit system, each path is preceded by its '64'-suffixed variant + >>> libpaths(paths,64) + ['/usr/X11R6/lib64', '/usr/X11R6/lib', '/usr/X11/lib64', '/usr/X11/lib', + '/usr/lib64', '/usr/lib'] + """ + if bits not in (32, 64): + raise ValueError("Invalid bit size in libpaths: 32 or 64 only") + + # Handle 32bit case + if bits==32: + return paths + + # Handle 64bit case + out = [] + for p in paths: + out.extend([p+'64', p]) + + return out + + if sys.platform == 'win32': default_lib_dirs = ['C:\\', os.path.join(distutils.sysconfig.EXEC_PREFIX, @@ -137,24 +181,16 @@ if sys.platform == 'win32': default_x11_lib_dirs = [] default_x11_include_dirs = [] else: - default_lib_dirs = ['/usr/local/lib', '/opt/lib', '/usr/lib', - '/opt/local/lib', '/sw/lib'] + default_lib_dirs = libpaths(['/usr/local/lib','/opt/lib','/usr/lib', + '/opt/local/lib','/sw/lib'], platform_bits) default_include_dirs = ['/usr/local/include', '/opt/include', '/usr/include', - '/opt/local/include', '/sw/include'] + '/opt/local/include', '/sw/include', + '/usr/include/suitesparse'] default_src_dirs = ['.','/usr/local/src', '/opt/src','/sw/src'] - try: - platform = os.uname() - bit64 = platform[-1].endswith('64') - except: - bit64 = False - - if bit64: - default_x11_lib_dirs = ['/usr/lib64'] - else: - default_x11_lib_dirs = ['/usr/X11R6/lib','/usr/X11/lib','/usr/lib'] - + default_x11_lib_dirs = libpaths(['/usr/X11R6/lib','/usr/X11/lib', + '/usr/lib'], platform_bits) default_x11_include_dirs = ['/usr/X11R6/include','/usr/X11/include', '/usr/include'] @@ -364,14 +400,16 @@ class system_info: self.files.extend(get_standard_file('.numpy-site.cfg')) self.files.extend(get_standard_file('site.cfg')) self.parse_config_files() - self.search_static_first = self.cp.getboolean(self.section, 
'search_static_first') + if self.section is not None: + self.search_static_first = self.cp.getboolean(self.section, + 'search_static_first') assert isinstance(self.search_static_first, int) def parse_config_files(self): self.cp.read(self.files) if not self.cp.has_section(self.section): - self.cp.add_section(self.section) + if self.section is not None: + self.cp.add_section(self.section) def calc_libraries_info(self): libs = self.get_libraries() diff --git a/numpy/doc/constants.py b/numpy/doc/constants.py new file mode 100644 index 000000000..8240aab8e --- /dev/null +++ b/numpy/doc/constants.py @@ -0,0 +1,80 @@ +""" +========= +Constants +========= + +Numpy includes several constants: + +%(constant_list)s +""" +import textwrap + +# Maintain same format as in numpy.add_newdocs +constants = [] +def add_newdoc(module, name, doc): + constants.append((name, doc)) + +add_newdoc('numpy', 'Inf', + """ + """) + +add_newdoc('numpy', 'Infinity', + """ + """) + +add_newdoc('numpy', 'NAN', + """ + """) + +add_newdoc('numpy', 'NINF', + """ + """) + +add_newdoc('numpy', 'NZERO', + """ + """) + +add_newdoc('numpy', 'NaN', + """ + """) + +add_newdoc('numpy', 'PINF', + """ + """) + +add_newdoc('numpy', 'PZERO', + """ + """) + +add_newdoc('numpy', 'e', + """ + """) + +add_newdoc('numpy', 'inf', + """ + """) + +add_newdoc('numpy', 'infty', + """ + """) + +add_newdoc('numpy', 'nan', + """ + """) + +add_newdoc('numpy', 'newaxis', + """ + """) + +if __doc__: + constants_str = [] + constants.sort() + for name, doc in constants: + constants_str.append(""".. 
const:: %s\n %s""" % ( + name, textwrap.dedent(doc).replace("\n", "\n "))) + constants_str = "\n".join(constants_str) + + __doc__ = __doc__ % dict(constant_list=constants_str) + del constants_str, name, doc + +del constants, add_newdoc diff --git a/numpy/f2py/cfuncs.py b/numpy/f2py/cfuncs.py index 02462241c..5312b0ec5 100644 --- a/numpy/f2py/cfuncs.py +++ b/numpy/f2py/cfuncs.py @@ -472,15 +472,17 @@ cppmacros['CHECKARRAY']="""\ cppmacros['CHECKSTRING']="""\ #define CHECKSTRING(check,tcheck,name,show,var)\\ \tif (!(check)) {\\ -\t\tPyErr_SetString(#modulename#_error,\"(\"tcheck\") failed for \"name);\\ -\t\tfprintf(stderr,show\"\\n\",slen(var),var);\\ +\t\tchar errstring[256];\\ +\t\tsprintf(errstring, \"%s: \"show, \"(\"tcheck\") failed for \"name, slen(var), var);\\ +\t\tPyErr_SetString(#modulename#_error, errstring);\\ \t\t/*goto capi_fail;*/\\ \t} else """ cppmacros['CHECKSCALAR']="""\ #define CHECKSCALAR(check,tcheck,name,show,var)\\ \tif (!(check)) {\\ -\t\tPyErr_SetString(#modulename#_error,\"(\"tcheck\") failed for \"name);\\ -\t\tfprintf(stderr,show\"\\n\",var);\\ +\t\tchar errstring[256];\\ +\t\tsprintf(errstring, \"%s: \"show, \"(\"tcheck\") failed for \"name, var);\\ +\t\tPyErr_SetString(#modulename#_error,errstring);\\ \t\t/*goto capi_fail;*/\\ \t} else """ ## cppmacros['CHECKDIMS']="""\ diff --git a/numpy/f2py/crackfortran.py b/numpy/f2py/crackfortran.py index 8e5f2882f..449db33a3 100755 --- a/numpy/f2py/crackfortran.py +++ b/numpy/f2py/crackfortran.py @@ -2446,9 +2446,9 @@ def crack2fortrangen(block,tab='\n'): global skipfuncs, onlyfuncs setmesstext(block) ret='' - if type(block) is type([]): + if isinstance(block, list): for g in block: - if g['block'] in ['function','subroutine']: + if g and g['block'] in ['function','subroutine']: if g['name'] in skipfuncs: continue if onlyfuncs and g['name'] not in onlyfuncs: diff --git a/numpy/f2py/f2py.1 b/numpy/f2py/f2py.1 index b8769a0cc..b9391e592 100644 --- a/numpy/f2py/f2py.1 +++ b/numpy/f2py/f2py.1 @@ 
-20,7 +20,7 @@ f2py \- Fortran to Python interface generator This program generates a Python C/API file (<modulename>module.c) that contains wrappers for given Fortran or C functions so that they can be called from Python. -With the -c option the corresponding +With the \-c option the corresponding extension modules are built. .SH OPTIONS .TP @@ -49,8 +49,8 @@ Name of the module; f2py generates a Python/C API file \'untitled\'. .TP .B \-\-[no\-]lower -Do [not] lower the cases in <fortran files>. By default, --lower is -assumed with -h key, and --no-lower without -h key. +Do [not] lower the cases in <fortran files>. By default, \-\-lower is +assumed with \-h key, and \-\-no\-lower without \-h key. .TP .B \-\-build\-dir <dirname> All f2py generated files are created in <dirname>. Default is tempfile.mktemp(). @@ -59,14 +59,14 @@ All f2py generated files are created in <dirname>. Default is tempfile.mktemp(). Overwrite existing signature file. .TP .B \-\-[no\-]latex\-doc -Create (or not) <modulename>module.tex. Default is --no-latex-doc. +Create (or not) <modulename>module.tex. Default is \-\-no\-latex\-doc. .TP .B \-\-short\-latex Create 'incomplete' LaTeX document (without commands \\documentclass, \\tableofcontents, and \\begin{document}, \\end{document}). .TP .B \-\-[no\-]rest\-doc -Create (or not) <modulename>module.rst. Default is --no-rest-doc. +Create (or not) <modulename>module.rst. Default is \-\-no\-rest\-doc. .TP .B \-\-debug\-capi Create C/API code that reports the state of the wrappers during @@ -81,12 +81,12 @@ statement in signature files instead. .TP .B \-\-[no\-]wrap\-functions Create Fortran subroutine wrappers to Fortran 77 -functions. --wrap-functions is default because it ensures maximum +functions. \-\-wrap\-functions is default because it ensures maximum portability/compiler independence. .TP .B \-\-help\-link [..] List system resources found by system_info.py. [..] may contain -a list of resources names. 
See also --link-<resource> switch below. +a list of resource names. See also \-\-link\-<resource> switch below. .TP .B \-\-quiet Run quietly. @@ -100,7 +100,7 @@ Print f2py version ID and exit. .B \-\-include_paths path1:path2:... Search include files (that f2py will scan) from the given directories. .SH "CONFIG_FC OPTIONS" -The following options are effective only when -c switch is used. +The following options are effective only when the \-c switch is used. .TP .B \-\-help-compiler List available Fortran compilers [DEPRECATED]. @@ -147,13 +147,13 @@ Compile without arch-dependent optimization. .B \-\-debug Compile with debugging information. .SH "EXTRA OPTIONS" -The following options are effective only when -c switch is used. +The following options are effective only when the \-c switch is used. .TP .B \-\-link-<resource> Link extension module with <resource> as defined by numpy_distutils/system_info.py. E.g. to link with optimized LAPACK libraries (vecLib on MacOSX, ATLAS elsewhere), use ---link-lapack_opt. See also --help-link switch. +\-\-link\-lapack_opt. See also \-\-help\-link switch. 
.TP .B -L/path/to/lib/ -l<libname> diff --git a/numpy/f2py/f2py2e.py b/numpy/f2py/f2py2e.py index 2fd4f6caa..264a01312 100755 --- a/numpy/f2py/f2py2e.py +++ b/numpy/f2py/f2py2e.py @@ -543,7 +543,7 @@ def run_compile(): setup(ext_modules = [ext]) if remove_build_dir and os.path.exists(build_dir): - import shutil + import shutil outmess('Removing build directory %s\n'%(build_dir)) shutil.rmtree(build_dir) diff --git a/numpy/f2py/rules.py b/numpy/f2py/rules.py index 825b13b23..ef4b9cc34 100644 --- a/numpy/f2py/rules.py +++ b/numpy/f2py/rules.py @@ -245,7 +245,7 @@ static PyObject *#apiname#(const PyObject *capi_self, f2py_start_clock(); #endif \tif (!PyArg_ParseTupleAndKeywords(capi_args,capi_keywds,\\ -\t\t\"#argformat#|#keyformat##xaformat#:#pyname#\",\\ +\t\t\"#argformat##keyformat##xaformat#:#pyname#\",\\ \t\tcapi_kwlist#args_capi##keys_capi##keys_xa#))\n\t\treturn NULL; #frompyobj# /*end of frompyobj*/ @@ -1355,6 +1355,16 @@ def buildapi(rout): rd['latexdocstrsigns']=rd['latexdocstrsigns']+rd[k][0:1]+\ ['\\begin{description}']+rd[k][1:]+\ ['\\end{description}'] + + # Workaround for Python 2.6, 2.6.1 bug: http://bugs.python.org/issue4720 + if rd['keyformat'] or rd['xaformat']: + argformat = rd['argformat'] + if isinstance(argformat, list): + argformat.append('|') + else: + assert isinstance(argformat, str),repr((argformat, type(argformat))) + rd['argformat'] += '|' + ar=applyrules(routine_rules,rd) if ismoduleroutine(rout): outmess('\t\t\t %s\n'%(ar['docshort'])) diff --git a/numpy/lib/__init__.py b/numpy/lib/__init__.py index 296ca7135..07f6d5c27 100644 --- a/numpy/lib/__init__.py +++ b/numpy/lib/__init__.py @@ -1,151 +1,3 @@ -""" -Basic functions used by several sub-packages and -useful to have in the main name-space. 
- -Type Handling -------------- -================ =================== -iscomplexobj Test for complex object, scalar result -isrealobj Test for real object, scalar result -iscomplex Test for complex elements, array result -isreal Test for real elements, array result -imag Imaginary part -real Real part -real_if_close Turns complex number with tiny imaginary part to real -isneginf Tests for negative infinity, array result -isposinf Tests for positive infinity, array result -isnan Tests for nans, array result -isinf Tests for infinity, array result -isfinite Tests for finite numbers, array result -isscalar True if argument is a scalar -nan_to_num Replaces NaN's with 0 and infinities with large numbers -cast Dictionary of functions to force cast to each type -common_type Determine the minimum common type code for a group - of arrays -mintypecode Return minimal allowed common typecode. -================ =================== - -Index Tricks ------------- -================ =================== -mgrid Method which allows easy construction of N-d - 'mesh-grids' -``r_`` Append and construct arrays: turns slice objects into - ranges and concatenates them, for 2d arrays appends rows. -index_exp Konrad Hinsen's index_expression class instance which - can be useful for building complicated slicing syntax. 
-================ =================== - -Useful Functions ----------------- -================ =================== -select Extension of where to multiple conditions and choices -extract Extract 1d array from flattened array according to mask -insert Insert 1d array of values into Nd array according to mask -linspace Evenly spaced samples in linear space -logspace Evenly spaced samples in logarithmic space -fix Round x to nearest integer towards zero -mod Modulo mod(x,y) = x % y except keeps sign of y -amax Array maximum along axis -amin Array minimum along axis -ptp Array max-min along axis -cumsum Cumulative sum along axis -prod Product of elements along axis -cumprod Cumluative product along axis -diff Discrete differences along axis -angle Returns angle of complex argument -unwrap Unwrap phase along given axis (1-d algorithm) -sort_complex Sort a complex-array (based on real, then imaginary) -trim_zeros Trim the leading and trailing zeros from 1D array. -vectorize A class that wraps a Python function taking scalar - arguments into a generalized function which can handle - arrays of arguments using the broadcast rules of - numerix Python. -================ =================== - -Shape Manipulation ------------------- -================ =================== -squeeze Return a with length-one dimensions removed. 
-atleast_1d Force arrays to be > 1D -atleast_2d Force arrays to be > 2D -atleast_3d Force arrays to be > 3D -vstack Stack arrays vertically (row on row) -hstack Stack arrays horizontally (column on column) -column_stack Stack 1D arrays as columns into 2D array -dstack Stack arrays depthwise (along third dimension) -split Divide array into a list of sub-arrays -hsplit Split into columns -vsplit Split into rows -dsplit Split along third dimension -================ =================== - -Matrix (2D Array) Manipulations -------------------------------- -================ =================== -fliplr 2D array with columns flipped -flipud 2D array with rows flipped -rot90 Rotate a 2D array a multiple of 90 degrees -eye Return a 2D array with ones down a given diagonal -diag Construct a 2D array from a vector, or return a given - diagonal from a 2D array. -mat Construct a Matrix -bmat Build a Matrix from blocks -================ =================== - -Polynomials ------------ -================ =================== -poly1d A one-dimensional polynomial class -poly Return polynomial coefficients from roots -roots Find roots of polynomial given coefficients -polyint Integrate polynomial -polyder Differentiate polynomial -polyadd Add polynomials -polysub Substract polynomials -polymul Multiply polynomials -polydiv Divide polynomials -polyval Evaluate polynomial at given argument -================ =================== - -Import Tricks -------------- -================ =================== -ppimport Postpone module import until trying to use it -ppimport_attr Postpone module import until trying to use its attribute -ppresolve Import postponed module and return it. 
-================ =================== - -Machine Arithmetics -------------------- -================ =================== -machar_single Single precision floating point arithmetic parameters -machar_double Double precision floating point arithmetic parameters -================ =================== - -Threading Tricks ----------------- -================ =================== -ParallelExec Execute commands in parallel thread. -================ =================== - -1D Array Set Operations ------------------------ -Set operations for 1D numeric arrays based on sort() function. - -================ =================== -ediff1d Array difference (auxiliary function). -unique1d Unique elements of 1D array. -intersect1d Intersection of 1D arrays with unique elements. -intersect1d_nu Intersection of 1D arrays with any elements. -setxor1d Set exclusive-or of 1D arrays with unique elements. -setmember1d Return an array of shape of ar1 containing 1 where - the elements of ar1 are in ar2 and 0 otherwise. -union1d Union of 1D arrays with unique elements. -setdiff1d Set difference of 1D arrays with unique elements. -================ =================== - -""" from info import __doc__ from numpy.version import version as __version__ diff --git a/numpy/lib/_iotools.py b/numpy/lib/_iotools.py new file mode 100644 index 000000000..23053bf4d --- /dev/null +++ b/numpy/lib/_iotools.py @@ -0,0 +1,493 @@ +""" +A collection of functions designed to help I/O with ascii file. + +""" +__docformat__ = "restructuredtext en" + +import numpy as np +import numpy.core.numeric as nx +from __builtin__ import bool, int, long, float, complex, object, unicode, str + + +def _is_string_like(obj): + """ + Check whether obj behaves like a string. + """ + try: + obj + '' + except (TypeError, ValueError): + return False + return True + + +def _to_filehandle(fname, flag='r', return_opened=False): + """ + Returns the filehandle corresponding to a string or a file. 
+ If the string ends in '.gz', the file is automatically unzipped. + + Parameters + ---------- + fname : string, filehandle + Name of the file whose filehandle must be returned. + flag : string, optional + Flag indicating the status of the file ('r' for read, 'w' for write). + return_opened : boolean, optional + Whether to return the opening status of the file. + """ + if _is_string_like(fname): + if fname.endswith('.gz'): + import gzip + fhd = gzip.open(fname, flag) + elif fname.endswith('.bz2'): + import bz2 + fhd = bz2.BZ2File(fname) + else: + fhd = file(fname, flag) + opened = True + elif hasattr(fname, 'seek'): + fhd = fname + opened = False + else: + raise ValueError('fname must be a string or file handle') + if return_opened: + return fhd, opened + return fhd + + +def has_nested_fields(ndtype): + """ + Returns whether one or several fields of a structured array are nested. + """ + for name in ndtype.names or (): + if ndtype[name].names: + return True + return False + + +def flatten_dtype(ndtype): + """ + Unpack a structured data-type. + + """ + names = ndtype.names + if names is None: + return [ndtype] + else: + types = [] + for field in names: + (typ, _) = ndtype.fields[field] + flat_dt = flatten_dtype(typ) + types.extend(flat_dt) + return types + + +class LineSplitter: + """ + Defines a function to split a string at a given delimiter or at given places. + + Parameters + ---------- + comments : {'#', string} + Character used to mark the beginning of a comment. + delimiter : var, optional + If a string, character used to delimit consecutive fields. + If an integer or a sequence of integers, width(s) of each field. + autostrip : boolean, optional + Whether to strip each individual field. + """ + + def autostrip(self, method): + "Wrapper to strip each member of the output of `method`." 
+ return lambda input: [_.strip() for _ in method(input)] + # + def __init__(self, delimiter=None, comments='#', autostrip=True): + self.comments = comments + # Delimiter is a character + if (delimiter is None) or _is_string_like(delimiter): + delimiter = delimiter or None + _handyman = self._delimited_splitter + # Delimiter is a list of field widths + elif hasattr(delimiter, '__iter__'): + _handyman = self._variablewidth_splitter + idx = np.cumsum([0]+list(delimiter)) + delimiter = [slice(i,j) for (i,j) in zip(idx[:-1], idx[1:])] + # Delimiter is a single integer + elif int(delimiter): + (_handyman, delimiter) = (self._fixedwidth_splitter, int(delimiter)) + else: + (_handyman, delimiter) = (self._delimited_splitter, None) + self.delimiter = delimiter + if autostrip: + self._handyman = self.autostrip(_handyman) + else: + self._handyman = _handyman + # + def _delimited_splitter(self, line): + line = line.split(self.comments)[0].strip() + if not line: + return [] + return line.split(self.delimiter) + # + def _fixedwidth_splitter(self, line): + line = line.split(self.comments)[0] + if not line: + return [] + fixed = self.delimiter + slices = [slice(i, i+fixed) for i in range(len(line))[::fixed]] + return [line[s] for s in slices] + # + def _variablewidth_splitter(self, line): + line = line.split(self.comments)[0] + if not line: + return [] + slices = self.delimiter + return [line[s] for s in slices] + # + def __call__(self, line): + return self._handyman(line) + + + +class NameValidator: + """ + Validates a list of strings to use as field names. + The strings are stripped of any non alphanumeric character, and spaces + are replaced by `_`. If the optional input parameter `case_sensitive` + is False, the strings are set to upper case. + + During instantiation, the user can define a list of names to exclude, as + well as a list of invalid characters. Names in the exclusion list + are appended a '_' character. 
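The fixed-width strategy used by `LineSplitter._fixedwidth_splitter` earlier in this file can be sketched in isolation. The helper name `fixedwidth_split` is hypothetical; only the slicing logic mirrors the method:

```python
def fixedwidth_split(line, width, comments='#'):
    # Drop any trailing comment, then cut the line into consecutive
    # slices of `width` characters and strip each field.
    line = line.split(comments)[0]
    slices = [slice(i, i + width) for i in range(0, len(line), width)]
    return [line[s].strip() for s in slices]

print(fixedwidth_split("AAABBBCCC", 3))   # -> ['AAA', 'BBB', 'CCC']
```

For delimited input the class instead relies on `str.split`, which is why a `None` delimiter collapses runs of whitespace.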
+ + Once an instance has been created, it can be called with a list of names + and a list of valid names will be created. + The `__call__` method accepts an optional keyword, `default`, that sets + the default name in case of ambiguity. By default, `default = 'f'`, so + that names will default to `f0`, `f1` + + Parameters + ---------- + excludelist : sequence, optional + A list of names to exclude. This list is appended to the default list + ['return','file','print']. Excluded names are appended an underscore: + for example, `file` would become `file_`. + deletechars : string, optional + A string combining invalid characters that must be deleted from the names. + casesensitive : {True, False, 'upper', 'lower'}, optional + If True, field names are case_sensitive. + If False or 'upper', field names are converted to upper case. + If 'lower', field names are converted to lower case. + """ + # + defaultexcludelist = ['return','file','print'] + defaultdeletechars = set("""~!@#$%^&*()-=+~\|]}[{';: /?.>,<""") + # + def __init__(self, excludelist=None, deletechars=None, case_sensitive=None): + # + if excludelist is None: + excludelist = [] + excludelist.extend(self.defaultexcludelist) + self.excludelist = excludelist + # + if deletechars is None: + delete = self.defaultdeletechars + else: + delete = set(deletechars) + delete.add('"') + self.deletechars = delete + + if (case_sensitive is None) or (case_sensitive is True): + self.case_converter = lambda x: x + elif (case_sensitive is False) or ('u' in case_sensitive): + self.case_converter = lambda x: x.upper() + elif 'l' in case_sensitive: + self.case_converter = lambda x: x.lower() + else: + self.case_converter = lambda x: x + # + def validate(self, names, default='f'): + # + if names is None: + return + # + validatednames = [] + seen = dict() + # + deletechars = self.deletechars + excludelist = self.excludelist + # + case_converter = self.case_converter + # + for i, item in enumerate(names): + item = case_converter(item) + 
item = item.strip().replace(' ', '_') + item = ''.join([c for c in item if c not in deletechars]) + if not len(item): + item = '%s%d' % (default, i) + elif item in excludelist: + item += '_' + cnt = seen.get(item, 0) + if cnt > 0: + validatednames.append(item + '_%d' % cnt) + else: + validatednames.append(item) + seen[item] = cnt+1 + return validatednames + # + def __call__(self, names, default='f'): + return self.validate(names, default) + + + +def str2bool(value): + """ + Tries to transform a string supposed to represent a boolean to a boolean. + + Raises + ------ + ValueError + If the string is not 'True' or 'False' (case independent) + """ + value = value.upper() + if value == 'TRUE': + return True + elif value == 'FALSE': + return False + else: + raise ValueError("Invalid boolean") + + + +class StringConverter: + """ + Factory class for function transforming a string into another object (int, + float). + + After initialization, an instance can be called to transform a string + into another object. If the string is recognized as representing a missing + value, a default value is returned. + + Parameters + ---------- + dtype_or_func : {None, dtype, function}, optional + Input data type, used to define a basic function and a default value + for missing data. For example, when `dtype` is float, the :attr:`func` + attribute is set to ``float`` and the default value to `np.nan`. + Alternatively, function used to convert a string to another object. + In that later case, it is recommended to give an associated default + value as input. + default : {None, var}, optional + Value to return by default, that is, when the string to be converted + is flagged as missing. + missing_values : {sequence}, optional + Sequence of strings indicating a missing value. + locked : {boolean}, optional + Whether the StringConverter should be locked to prevent automatic + upgrade or not. 
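The NameValidator rules above (strip whitespace, underscore spaces, suffix excluded names, deduplicate with a counter) can be illustrated with a standalone sketch. `validate_names` is a hypothetical simplification that keeps only alphanumerics and underscores instead of using the full `deletechars` set:

```python
def validate_names(names, exclude=('return', 'file', 'print'), default='f'):
    """Minimal sketch of the NameValidator rules described above."""
    seen = {}
    out = []
    for i, name in enumerate(names):
        name = name.strip().replace(' ', '_')
        name = ''.join(c for c in name if c.isalnum() or c == '_')
        if not name:
            name = '%s%d' % (default, i)       # fall back to f0, f1, ...
        elif name in exclude:
            name += '_'                        # 'file' -> 'file_'
        cnt = seen.get(name, 0)
        out.append(name if cnt == 0 else '%s_%d' % (name, cnt))
        seen[name] = cnt + 1
    return out

print(validate_names(['A A', 'file', '', 'A A']))
# -> ['A_A', 'file_', 'f2', 'A_A_1']
```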
+
+    Attributes
+    ----------
+    func : function
+        Function used for the conversion.
+    default : var
+        Default value to return when the input corresponds to a missing value.
+    type : type
+        Type of the output.
+    _status : integer
+        Integer representing the order of the conversion.
+    _mapper : sequence of tuples
+        Sequence of tuples (dtype, function, default value) to evaluate in order.
+    _locked : boolean
+        Whether the StringConverter is locked, preventing any automatic
+        upgrade.
+
+    """
+    #
+    _mapper = [(nx.bool_, str2bool, False),
+               (nx.integer, int, -1),
+               (nx.floating, float, nx.nan),
+               (complex, complex, nx.nan+0j),
+               (nx.string_, str, '???')]
+    (_defaulttype, _defaultfunc, _defaultfill) = zip(*_mapper)
+    #
+    @classmethod
+    def _getsubdtype(cls, val):
+        """Returns the type of the dtype of the input variable."""
+        return np.array(val).dtype.type
+    #
+    @classmethod
+    def upgrade_mapper(cls, func, default=None):
+        """
+        Upgrade the mapper of a StringConverter by adding a new function and
+        its corresponding default.
+
+        The input function (or sequence of functions) and its associated
+        default value (if any) are inserted in the penultimate position of
+        the mapper. The corresponding type is estimated from the dtype of
+        the default value.
+
+        Parameters
+        ----------
+        func : var
+            Function, or sequence of functions
+
+        Examples
+        --------
+        >>> import dateutil.parser
+        >>> import datetime
+        >>> dateparser = dateutil.parser.parse
+        >>> defaultdate = datetime.date(2000, 1, 1)
+        >>> StringConverter.upgrade_mapper(dateparser, default=defaultdate)
+        """
+        # Func is a single function
+        if hasattr(func, '__call__'):
+            cls._mapper.insert(-1, (cls._getsubdtype(default), func, default))
+            return
+        elif hasattr(func, '__iter__'):
+            if isinstance(func[0], (tuple, list)):
+                for _ in func:
+                    cls._mapper.insert(-1, _)
+                return
+            if default is None:
+                default = [None] * len(func)
+            else:
+                default = list(default)
+                default.extend([None] * (len(func) - len(default)))
+            for (fct, dft) in zip(func, default):
+                cls._mapper.insert(-1, (cls._getsubdtype(dft), fct, dft))
+    #
+    def __init__(self, dtype_or_func=None, default=None, missing_values=None,
+                 locked=False):
+        # Defines a lock for upgrade
+        self._locked = bool(locked)
+        # No input dtype: minimal initialization
+        if dtype_or_func is None:
+            self.func = str2bool
+            self._status = 0
+            self.default = default or False
+            ttype = np.bool
+        else:
+            # Is the input a np.dtype ?
+            try:
+                self.func = None
+                ttype = np.dtype(dtype_or_func).type
+            except TypeError:
+                # dtype_or_func must be a function, then
+                if not hasattr(dtype_or_func, '__call__'):
+                    errmsg = "The input argument `dtype` is neither a function"\
+                             " nor a dtype (got '%s' instead)"
+                    raise TypeError(errmsg % type(dtype_or_func))
+                # Set the function
+                self.func = dtype_or_func
+                # If we don't have a default, try to guess it or set it to None
+                if default is None:
+                    try:
+                        default = self.func('0')
+                    except ValueError:
+                        default = None
+                ttype = self._getsubdtype(default)
+        # Set the status according to the dtype
+        _status = -1
+        for (i, (deftype, func, default_def)) in enumerate(self._mapper):
+            if np.issubdtype(ttype, deftype):
+                _status = i
+                self.default = default or default_def
+                break
+        if _status == -1:
+            # We never found a match in the _mapper...
+            _status = 0
+            self.default = default
+        self._status = _status
+        # If the input was a dtype, set the function to the last we saw
+        if self.func is None:
+            self.func = func
+        # If the status is 1 (int), change the function to something more robust
+        if self.func == self._mapper[1][1]:
+            self.func = lambda x : int(float(x))
+        # Store the list of strings corresponding to missing values.
+ if missing_values is None: + self.missing_values = set(['']) + else: + self.missing_values = set(list(missing_values) + ['']) + # + self._callingfunction = self._strict_call + self.type = ttype + self._checked = False + # + def _loose_call(self, value): + try: + return self.func(value) + except ValueError: + return self.default + # + def _strict_call(self, value): + try: + return self.func(value) + except ValueError: + if value.strip() in self.missing_values: + if not self._status: + self._checked = False + return self.default + raise ValueError("Cannot convert string '%s'" % value) + # + def __call__(self, value): + return self._callingfunction(value) + # + def upgrade(self, value): + """ + Tries to find the best converter for `value`, by testing different + converters in order. + The order in which the converters are tested is read from the + :attr:`_status` attribute of the instance. + """ + self._checked = True + try: + self._strict_call(value) + except ValueError: + # Raise an exception if we locked the converter... + if self._locked: + raise ValueError("Converter is locked and cannot be upgraded") + _statusmax = len(self._mapper) + # Complains if we try to upgrade by the maximum + if self._status == _statusmax: + raise ValueError("Could not find a valid conversion function") + elif self._status < _statusmax - 1: + self._status += 1 + (self.type, self.func, self.default) = self._mapper[self._status] + self.upgrade(value) + # + def update(self, func, default=None, missing_values='', locked=False): + """ + Sets the :attr:`func` and :attr:`default` attributes directly. + + Parameters + ---------- + func : function + Conversion function. + default : {var}, optional + Default value to return when a missing value is encountered. + missing_values : {var}, optional + Sequence of strings representing missing values. + locked : {False, True}, optional + Whether the status should be locked to prevent automatic upgrade. 
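The upgrade mechanism above walks `_mapper` from the most specific converter (boolean) to the most general (string). A standalone sketch of that cascade, with `best_converter` as a hypothetical stand-in for the real class:

```python
def str2bool(value):
    # Same contract as the str2bool helper above.
    value = value.upper()
    if value == 'TRUE':
        return True
    if value == 'FALSE':
        return False
    raise ValueError("Invalid boolean")

# (conversion function, default for missing values), most specific first,
# mirroring the ordering of StringConverter._mapper.
_mapper = [(str2bool, False),
           (int, -1),
           (float, float('nan')),
           (complex, complex(float('nan'), 0)),
           (str, '???')]

def best_converter(value):
    """Return the first (func, default) pair able to convert `value`."""
    for func, default in _mapper:
        try:
            func(value)
            return func, default
        except ValueError:
            pass
    return str, '???'

print(best_converter('1.5')[0]('1.5'))   # int('1.5') fails, float succeeds
```

The real class additionally remembers its current `_status` so repeated upgrades only ever move toward more general types.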
+ """ + self.func = func + self._locked = locked + # Don't reset the default to None if we can avoid it + if default is not None: + self.default = default + # Add the missing values to the existing set + if missing_values is not None: + if _is_string_like(missing_values): + self.missing_values.add(missing_values) + elif hasattr(missing_values, '__iter__'): + for val in missing_values: + self.missing_values.add(val) + else: + self.missing_values = [] + # Update the type + try: + tester = func('0') + except ValueError: + tester = None + self.type = self._getsubdtype(tester) + diff --git a/numpy/lib/arraysetops.py b/numpy/lib/arraysetops.py index 49a1d3e89..f023f6027 100644 --- a/numpy/lib/arraysetops.py +++ b/numpy/lib/arraysetops.py @@ -52,13 +52,19 @@ def ediff1d(ary, to_end=None, to_begin=None): If provided, this number will be taked onto the beginning of the returned differences. + Notes + ----- + When applied to masked arrays, this function drops the mask information + if the `to_begin` and/or `to_end` parameters are used + + Returns ------- ed : array The differences. Loosely, this will be (ary[1:] - ary[:-1]). """ - ary = np.asarray(ary).flat + ary = np.asanyarray(ary).flat ed = ary[1:] - ary[:-1] arrays = [ed] if to_begin is not None: @@ -132,7 +138,7 @@ def unique1d(ar1, return_index=False, return_inverse=False): "the output was (indices, unique_arr), but " "has now been reversed to be more consistent.") - ar = np.asarray(ar1).flatten() + ar = np.asanyarray(ar1).flatten() if ar.size == 0: if return_inverse and return_index: return ar, np.empty(0, np.bool), np.empty(0, np.bool) diff --git a/numpy/lib/function_base.py b/numpy/lib/function_base.py index 8bf20d3fb..283e3faff 100644 --- a/numpy/lib/function_base.py +++ b/numpy/lib/function_base.py @@ -228,10 +228,10 @@ def histogram(a, bins=10, range=None, normed=False, weights=None, new=None): * None : the new behaviour is used, no warning is printed. 
* True : the new behaviour is used and a warning is raised about the future removal of the `new` keyword. - * False : the old behaviour is used and a DeprecationWarning + * False : the old behaviour is used and a DeprecationWarning is raised. - As of NumPy 1.3, this keyword should not be used explicitly since it - will disappear in NumPy 1.4. + As of NumPy 1.3, this keyword should not be used explicitly since it + will disappear in NumPy 1.4. Returns ------- @@ -267,9 +267,9 @@ def histogram(a, bins=10, range=None, normed=False, weights=None, new=None): # Old behavior if new == False: warnings.warn(""" - The histogram semantics being used is now deprecated and - will disappear in NumPy 1.4. Please update your code to - use the default semantics. + The histogram semantics being used is now deprecated and + will disappear in NumPy 1.4. Please update your code to + use the default semantics. """, DeprecationWarning) a = asarray(a).ravel() @@ -320,8 +320,8 @@ def histogram(a, bins=10, range=None, normed=False, weights=None, new=None): elif new in [True, None]: if new is True: warnings.warn(""" - The new semantics of histogram is now the default and the `new` - keyword will be removed in NumPy 1.4. + The new semantics of histogram is now the default and the `new` + keyword will be removed in NumPy 1.4. """, Warning) a = asarray(a) if weights is not None: @@ -1073,53 +1073,6 @@ def diff(a, n=1, axis=-1): else: return a[slice1]-a[slice2] -try: - add_docstring(digitize, -r"""digitize(x,bins) - -Return the index of the bin to which each value of x belongs. - -Each index i returned is such that bins[i-1] <= x < bins[i] if -bins is monotonically increasing, or bins [i-1] > x >= bins[i] if -bins is monotonically decreasing. - -Beyond the bounds of the bins 0 or len(bins) is returned as appropriate. - -""") -except RuntimeError: - pass - -try: - add_docstring(bincount, -r"""bincount(x,weights=None) - -Return the number of occurrences of each value in x. 
- -x must be a list of non-negative integers. The output, b[i], -represents the number of times that i is found in x. If weights -is specified, every occurrence of i at a position p contributes -weights[p] instead of 1. - -See also: histogram, digitize, unique. - -""") -except RuntimeError: - pass - -try: - add_docstring(add_docstring, -r"""docstring(obj, docstring) - -Add a docstring to a built-in obj if possible. -If the obj already has a docstring raise a RuntimeError -If this routine does not know how to add a docstring to the object -raise a TypeError - -""") -except RuntimeError: - pass - - def interp(x, xp, fp, left=None, right=None): """ One-dimensional linear interpolation. @@ -2818,9 +2771,9 @@ def trapz(y, x=None, dx=1.0, axis=-1): y : array_like Input array to integrate. x : array_like, optional - If `x` is None, then spacing between all `y` elements is 1. + If `x` is None, then spacing between all `y` elements is `dx`. dx : scalar, optional - If `x` is None, spacing given by `dx` is assumed. + If `x` is None, spacing given by `dx` is assumed. Default is 1. axis : int, optional Specify the axis. 
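The `trapz` hunk here lets a 1-D `x` be paired with an N-D `y` by reshaping the spacing array along `axis`. A quick check of that behaviour (the `trapezoid` fallback accounts for NumPy 2.0 renaming the function):

```python
import numpy as np

try:                                  # NumPy >= 2.0
    from numpy import trapezoid as trapz
except ImportError:                   # older NumPy keeps the original name
    from numpy import trapz

# Two rows of samples sharing one 1-D grid of sample positions.
y = np.array([[0., 1., 2., 3.],
              [0., 2., 4., 6.]])
x = np.array([0., 1., 2., 3.])

# The 1-D x is reshaped against `axis`, so each row is integrated
# over the same grid: 4.5 for the first row, 9.0 for the second.
print(trapz(y, x, axis=-1))
```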
@@ -2836,7 +2789,15 @@ def trapz(y, x=None, dx=1.0, axis=-1): if x is None: d = dx else: - d = diff(x,axis=axis) + x = asarray(x) + if x.ndim == 1: + d = diff(x) + # reshape to correct shape + shape = [1]*y.ndim + shape[axis] = d.shape[0] + d = d.reshape(shape) + else: + d = diff(x, axis=axis) nd = len(y.shape) slice1 = [slice(None)]*nd slice2 = [slice(None)]*nd diff --git a/numpy/lib/getlimits.py b/numpy/lib/getlimits.py index bc5fbbf5e..ccb0e2b3d 100644 --- a/numpy/lib/getlimits.py +++ b/numpy/lib/getlimits.py @@ -88,6 +88,12 @@ class finfo(object): _finfo_cache = {} def __new__(cls, dtype): + try: + dtype = np.dtype(dtype) + except TypeError: + # In case a float instance was given + dtype = np.dtype(type(dtype)) + obj = cls._finfo_cache.get(dtype,None) if obj is not None: return obj @@ -115,7 +121,7 @@ class finfo(object): return obj def _init(self, dtype): - self.dtype = dtype + self.dtype = np.dtype(dtype) if dtype is ntypes.double: itype = ntypes.int64 fmt = '%24.16e' @@ -149,23 +155,23 @@ class finfo(object): self.nexp = machar.iexp self.nmant = machar.it self.machar = machar - self._str_tiny = machar._str_xmin - self._str_max = machar._str_xmax - self._str_epsneg = machar._str_epsneg - self._str_eps = machar._str_eps - self._str_resolution = machar._str_resolution + self._str_tiny = machar._str_xmin.strip() + self._str_max = machar._str_xmax.strip() + self._str_epsneg = machar._str_epsneg.strip() + self._str_eps = machar._str_eps.strip() + self._str_resolution = machar._str_resolution.strip() return self def __str__(self): return '''\ Machine parameters for %(dtype)s --------------------------------------------------------------------- -precision=%(precision)3s resolution=%(_str_resolution)s -machep=%(machep)6s eps= %(_str_eps)s -negep =%(negep)6s epsneg= %(_str_epsneg)s -minexp=%(minexp)6s tiny= %(_str_tiny)s -maxexp=%(maxexp)6s max= %(_str_max)s -nexp =%(nexp)6s min= -max +precision=%(precision)3s resolution= %(_str_resolution)s +machep=%(machep)6s eps= 
%(_str_eps)s +negep =%(negep)6s epsneg= %(_str_epsneg)s +minexp=%(minexp)6s tiny= %(_str_tiny)s +maxexp=%(maxexp)6s max= %(_str_max)s +nexp =%(nexp)6s min= -max --------------------------------------------------------------------- ''' % self.__dict__ @@ -220,8 +226,11 @@ class iinfo: _min_vals = {} _max_vals = {} - def __init__(self, type): - self.dtype = np.dtype(type) + def __init__(self, int_type): + try: + self.dtype = np.dtype(int_type) + except TypeError: + self.dtype = np.dtype(type(int_type)) self.kind = self.dtype.kind self.bits = self.dtype.itemsize * 8 self.key = "%s%d" % (self.kind, self.bits) @@ -256,6 +265,17 @@ class iinfo: max = property(max) + def __str__(self): + """String representation.""" + return '''\ +Machine parameters for %(dtype)s +--------------------------------------------------------------------- +min = %(min)s +max = %(max)s +--------------------------------------------------------------------- +''' % {'dtype': self.dtype, 'min': self.min, 'max': self.max} + + if __name__ == '__main__': f = finfo(ntypes.single) print 'single epsilon:',f.eps diff --git a/numpy/lib/index_tricks.py b/numpy/lib/index_tricks.py index 3021635dc..fcd3909af 100644 --- a/numpy/lib/index_tricks.py +++ b/numpy/lib/index_tricks.py @@ -212,6 +212,8 @@ class nd_grid(object): mgrid = nd_grid(sparse=False) ogrid = nd_grid(sparse=True) +mgrid.__doc__ = None # set in numpy.add_newdocs +ogrid.__doc__ = None # set in numpy.add_newdocs class AxisConcatenator(object): """Translates slice objects to concatenation along an axis. 
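The `finfo`/`iinfo` changes in this hunk make both classes fall back to `np.dtype(type(...))`, so a plain scalar instance can stand in for a dtype. This behaviour should still hold in recent NumPy:

```python
import numpy as np

# A dtype works as before...
print(np.iinfo(np.int32).max)        # -> 2147483647

# ...and, with the fallback above, so does a float instance.
print(np.finfo(1.0).eps == np.finfo(np.float64).eps)   # -> True
```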
diff --git a/numpy/lib/info.py b/numpy/lib/info.py index 0944fa9b5..f93234d57 100644 --- a/numpy/lib/info.py +++ b/numpy/lib/info.py @@ -1,134 +1,149 @@ -__doc_title__ = """Basic functions used by several sub-packages and -useful to have in the main name-space.""" -__doc__ = __doc_title__ + """ - -Type handling -============== -iscomplexobj -- Test for complex object, scalar result -isrealobj -- Test for real object, scalar result -iscomplex -- Test for complex elements, array result -isreal -- Test for real elements, array result -imag -- Imaginary part -real -- Real part -real_if_close -- Turns complex number with tiny imaginary part to real -isneginf -- Tests for negative infinity ---| -isposinf -- Tests for positive infinity | -isnan -- Tests for nans |---- array results -isinf -- Tests for infinity | -isfinite -- Tests for finite numbers ---| -isscalar -- True if argument is a scalar -nan_to_num -- Replaces NaN's with 0 and infinities with large numbers -cast -- Dictionary of functions to force cast to each type -common_type -- Determine the 'minimum common type code' for a group - of arrays -mintypecode -- Return minimal allowed common typecode. +""" +Basic functions used by several sub-packages and +useful to have in the main name-space. -Index tricks -================== -mgrid -- Method which allows easy construction of N-d 'mesh-grids' -r_ -- Append and construct arrays: turns slice objects into - ranges and concatenates them, for 2d arrays appends - rows. -index_exp -- Konrad Hinsen's index_expression class instance which - can be useful for building complicated slicing syntax. 
+Type Handling +------------- +================ =================== +iscomplexobj Test for complex object, scalar result +isrealobj Test for real object, scalar result +iscomplex Test for complex elements, array result +isreal Test for real elements, array result +imag Imaginary part +real Real part +real_if_close Turns complex number with tiny imaginary part to real +isneginf Tests for negative infinity, array result +isposinf Tests for positive infinity, array result +isnan Tests for nans, array result +isinf Tests for infinity, array result +isfinite Tests for finite numbers, array result +isscalar True if argument is a scalar +nan_to_num Replaces NaN's with 0 and infinities with large numbers +cast Dictionary of functions to force cast to each type +common_type Determine the minimum common type code for a group + of arrays +mintypecode Return minimal allowed common typecode. +================ =================== -Useful functions -================== -select -- Extension of where to multiple conditions and choices -extract -- Extract 1d array from flattened array according to mask -insert -- Insert 1d array of values into Nd array according to mask -linspace -- Evenly spaced samples in linear space -logspace -- Evenly spaced samples in logarithmic space -fix -- Round x to nearest integer towards zero -mod -- Modulo mod(x,y) = x % y except keeps sign of y -amax -- Array maximum along axis -amin -- Array minimum along axis -ptp -- Array max-min along axis -cumsum -- Cumulative sum along axis -prod -- Product of elements along axis -cumprod -- Cumluative product along axis -diff -- Discrete differences along axis -angle -- Returns angle of complex argument -unwrap -- Unwrap phase along given axis (1-d algorithm) -sort_complex -- Sort a complex-array (based on real, then imaginary) -trim_zeros -- trim the leading and trailing zeros from 1D array. 
+Index Tricks +------------ +================ =================== +mgrid Method which allows easy construction of N-d + 'mesh-grids' +``r_`` Append and construct arrays: turns slice objects into + ranges and concatenates them, for 2d arrays appends rows. +index_exp Konrad Hinsen's index_expression class instance which + can be useful for building complicated slicing syntax. +================ =================== -vectorize -- a class that wraps a Python function taking scalar - arguments into a generalized function which - can handle arrays of arguments using the broadcast - rules of numerix Python. +Useful Functions +---------------- +================ =================== +select Extension of where to multiple conditions and choices +extract Extract 1d array from flattened array according to mask +insert Insert 1d array of values into Nd array according to mask +linspace Evenly spaced samples in linear space +logspace Evenly spaced samples in logarithmic space +fix Round x to nearest integer towards zero +mod Modulo mod(x,y) = x % y except keeps sign of y +amax Array maximum along axis +amin Array minimum along axis +ptp Array max-min along axis +cumsum Cumulative sum along axis +prod Product of elements along axis +cumprod Cumluative product along axis +diff Discrete differences along axis +angle Returns angle of complex argument +unwrap Unwrap phase along given axis (1-d algorithm) +sort_complex Sort a complex-array (based on real, then imaginary) +trim_zeros Trim the leading and trailing zeros from 1D array. +vectorize A class that wraps a Python function taking scalar + arguments into a generalized function which can handle + arrays of arguments using the broadcast rules of + numerix Python. +================ =================== -Shape manipulation -=================== -squeeze -- Return a with length-one dimensions removed. 
-atleast_1d -- Force arrays to be > 1D -atleast_2d -- Force arrays to be > 2D -atleast_3d -- Force arrays to be > 3D -vstack -- Stack arrays vertically (row on row) -hstack -- Stack arrays horizontally (column on column) -column_stack -- Stack 1D arrays as columns into 2D array -dstack -- Stack arrays depthwise (along third dimension) -split -- Divide array into a list of sub-arrays -hsplit -- Split into columns -vsplit -- Split into rows -dsplit -- Split along third dimension +Shape Manipulation +------------------ +================ =================== +squeeze Return a with length-one dimensions removed. +atleast_1d Force arrays to be > 1D +atleast_2d Force arrays to be > 2D +atleast_3d Force arrays to be > 3D +vstack Stack arrays vertically (row on row) +hstack Stack arrays horizontally (column on column) +column_stack Stack 1D arrays as columns into 2D array +dstack Stack arrays depthwise (along third dimension) +split Divide array into a list of sub-arrays +hsplit Split into columns +vsplit Split into rows +dsplit Split along third dimension +================ =================== -Matrix (2d array) manipluations -=============================== -fliplr -- 2D array with columns flipped -flipud -- 2D array with rows flipped -rot90 -- Rotate a 2D array a multiple of 90 degrees -eye -- Return a 2D array with ones down a given diagonal -diag -- Construct a 2D array from a vector, or return a given - diagonal from a 2D array. -mat -- Construct a Matrix -bmat -- Build a Matrix from blocks +Matrix (2D Array) Manipulations +------------------------------- +================ =================== +fliplr 2D array with columns flipped +flipud 2D array with rows flipped +rot90 Rotate a 2D array a multiple of 90 degrees +eye Return a 2D array with ones down a given diagonal +diag Construct a 2D array from a vector, or return a given + diagonal from a 2D array. 
+mat Construct a Matrix +bmat Build a Matrix from blocks +================ =================== Polynomials -============ -poly1d -- A one-dimensional polynomial class - -poly -- Return polynomial coefficients from roots -roots -- Find roots of polynomial given coefficients -polyint -- Integrate polynomial -polyder -- Differentiate polynomial -polyadd -- Add polynomials -polysub -- Substract polynomials -polymul -- Multiply polynomials -polydiv -- Divide polynomials -polyval -- Evaluate polynomial at given argument +----------- +================ =================== +poly1d A one-dimensional polynomial class +poly Return polynomial coefficients from roots +roots Find roots of polynomial given coefficients +polyint Integrate polynomial +polyder Differentiate polynomial +polyadd Add polynomials +polysub Substract polynomials +polymul Multiply polynomials +polydiv Divide polynomials +polyval Evaluate polynomial at given argument +================ =================== -Import tricks -============= -ppimport -- Postpone module import until trying to use it -ppimport_attr -- Postpone module import until trying to use its - attribute -ppresolve -- Import postponed module and return it. +Import Tricks +------------- +================ =================== +ppimport Postpone module import until trying to use it +ppimport_attr Postpone module import until trying to use its attribute +ppresolve Import postponed module and return it. 
+================ =================== -Machine arithmetics -=================== -machar_single -- MachAr instance storing the parameters of system - single precision floating point arithmetics -machar_double -- MachAr instance storing the parameters of system - double precision floating point arithmetics +Machine Arithmetics +------------------- +================ =================== +machar_single Single precision floating point arithmetic parameters +machar_double Double precision floating point arithmetic parameters +================ =================== -Threading tricks -================ -ParallelExec -- Execute commands in parallel thread. +Threading Tricks +---------------- +================ =================== +ParallelExec Execute commands in parallel thread. +================ =================== -1D array set operations -======================= +1D Array Set Operations +----------------------- Set operations for 1D numeric arrays based on sort() function. -ediff1d -- Array difference (auxiliary function). -unique1d -- Unique elements of 1D array. -intersect1d -- Intersection of 1D arrays with unique elements. -intersect1d_nu -- Intersection of 1D arrays with any elements. -setxor1d -- Set exclusive-or of 1D arrays with unique elements. -setmember1d -- Return an array of shape of ar1 containing 1 where - the elements of ar1 are in ar2 and 0 otherwise. -union1d -- Union of 1D arrays with unique elements. -setdiff1d -- Set difference of 1D arrays with unique elements. +================ =================== +ediff1d Array difference (auxiliary function). +unique1d Unique elements of 1D array. +intersect1d Intersection of 1D arrays with unique elements. +intersect1d_nu Intersection of 1D arrays with any elements. +setxor1d Set exclusive-or of 1D arrays with unique elements. +setmember1d Return an array of shape of ar1 containing 1 where + the elements of ar1 are in ar2 and 0 otherwise. +union1d Union of 1D arrays with unique elements. 
+setdiff1d Set difference of 1D arrays with unique elements. +================ =================== """ diff --git a/numpy/lib/io.py b/numpy/lib/io.py index e9a012db1..12765e17c 100644 --- a/numpy/lib/io.py +++ b/numpy/lib/io.py @@ -1,4 +1,5 @@ __all__ = ['savetxt', 'loadtxt', + 'genfromtxt', 'ndfromtxt', 'mafromtxt', 'recfromtxt', 'recfromcsv', 'load', 'loads', 'save', 'savez', 'packbits', 'unpackbits', @@ -15,7 +16,11 @@ from cPickle import load as _cload, loads from _datasource import DataSource from _compiled_base import packbits, unpackbits +from _iotools import LineSplitter, NameValidator, StringConverter, \ + _is_string_like, has_nested_fields, flatten_dtype + _file = file +_string_like = _is_string_like class BagObj(object): """A simple class that converts attribute lookups to @@ -264,10 +269,6 @@ def _getconv(dtype): return str -def _string_like(obj): - try: obj + '' - except (TypeError, ValueError): return 0 - return 1 def loadtxt(fname, dtype=float, comments='#', delimiter=None, converters=None, skiprows=0, usecols=None, unpack=False): @@ -342,7 +343,7 @@ def loadtxt(fname, dtype=float, comments='#', delimiter=None, converters=None, if usecols is not None: usecols = list(usecols) - if _string_like(fname): + if _is_string_like(fname): if fname.endswith('.gz'): import gzip fh = gzip.open(fname) @@ -520,7 +521,7 @@ def savetxt(fname, X, fmt='%.18e',delimiter=' '): """ - if _string_like(fname): + if _is_string_like(fname): if fname.endswith('.gz'): import gzip fh = gzip.open(fname,'wb') @@ -603,8 +604,508 @@ def fromregex(file, regexp, dtype): seq = regexp.findall(file.read()) if seq and not isinstance(seq[0], tuple): - # make sure np.array doesn't interpret strings as binary data - # by always producing a list of tuples - seq = [(x,) for x in seq] - output = np.array(seq, dtype=dtype) + # Only one group is in the regexp. + # Create the new array as a single data-type and then + # re-interpret as a single-field structured array. 
+ newdtype = np.dtype(dtype[dtype.names[0]]) + output = np.array(seq, dtype=newdtype) + output.dtype = dtype + else: + output = np.array(seq, dtype=dtype) + return output + + + + +#####-------------------------------------------------------------------------- +#---- --- ASCII functions --- +#####-------------------------------------------------------------------------- + + + +def genfromtxt(fname, dtype=float, comments='#', delimiter=None, skiprows=0, + converters=None, missing='', missing_values=None, usecols=None, + names=None, excludelist=None, deletechars=None, + case_sensitive=True, unpack=None, usemask=False, loose=True): + """ + Load data from a text file. + + Each line after the first `skiprows` lines is split at the `delimiter` + character, and characters following the `comments` character are discarded. + + + + Parameters + ---------- + fname : file or string + File or filename to read. If the filename extension is `.gz` or `.bz2`, + the file is first decompressed. + dtype : data-type + Data type of the resulting array. If this is a flexible data-type, + the resulting array will be 1-dimensional, and each row will be + interpreted as an element of the array. In this case, the number + of columns used must match the number of fields in the data-type, + and the names of each field will be set by the corresponding name + of the dtype. + If None, the dtypes will be determined by the contents of each + column, individually. + comments : {string}, optional + The character used to indicate the start of a comment. + All the characters occurring on a line after a comment are discarded. + delimiter : {string}, optional + The string used to separate values. By default, any consecutive + whitespace acts as the delimiter. + skiprows : {int}, optional + Number of lines to skip at the beginning of the file. + converters : {None, dictionary}, optional + A dictionary mapping a column number to a function that will convert + values in the column to a number.
Converters can also be used to + provide a default value for missing data: + ``converters = {3: lambda s: float(s or 0)}``. + missing : {string}, optional + A string representing a missing value, irrespective of the column where + it appears (e.g., `'missing'` or `'unused'`). + missing_values : {None, dictionary}, optional + A dictionary mapping a column number to a string indicating whether the + corresponding field should be masked. + usecols : {None, sequence}, optional + Which columns to read, with 0 being the first. For example, + ``usecols = (1,4,5)`` will extract the 2nd, 5th and 6th columns. + names : {None, True, string, sequence}, optional + If `names` is True, the field names are read from the first valid line + after the first `skiprows` lines. + If `names` is a sequence or a single string of comma-separated names, + the names will be used to define the field names in a flexible dtype. + If `names` is None, the names of the dtype fields will be used, if any. + excludelist : {sequence}, optional + A list of names to exclude. This list is appended to the default list + ['return','file','print']. Excluded names have an underscore appended: + for example, `file` would become `file_`. + deletechars : {string}, optional + A string combining invalid characters that must be deleted from the names. + case_sensitive : {True, False, 'upper', 'lower'}, optional + If True, field names are case-sensitive. + If False or 'upper', field names are converted to upper case. + If 'lower', field names are converted to lower case. + unpack : {bool}, optional + If True, the returned array is transposed, so that arguments may be + unpacked using ``x, y, z = genfromtxt(...)``. + usemask : {bool}, optional + If True, returns a masked array. + If False, returns a regular ndarray. + + Returns + ------- + out : MaskedArray + Data read from the text file.
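The `converters` entry documented above can double as a per-column default for empty fields. A simplified pure-Python sketch of the split-then-convert step (no comment stripping or missing-value masking; `parse_line` is my name, not part of the patch):

```python
def parse_line(line, converters, delimiter=None):
    """Split one line and run each field through its column converter."""
    values = line.strip().split(delimiter)
    return tuple(conv(v) for conv, v in zip(converters, values))

# float(s or 0) turns an empty field into the default 0.0,
# mirroring the ``converters = {3: lambda s: float(s or 0)}`` idiom above.
converters = [int, lambda s: float(s or 0), str]
row = parse_line("1,,abc", converters, delimiter=",")
```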
+ + Notes + ----- + * When spaces are used as delimiters, or when no delimiter has been given + as input, there should not be any missing data between two fields. + * When the variables are named (either by a flexible dtype or with `names`), + there must not be any header in the file (else a :exc:`ValueError` + exception is raised). + + Warnings + -------- + * Individual values are not stripped of spaces by default. + When using a custom converter, make sure the function removes spaces. + + See Also + -------- + numpy.loadtxt : equivalent function when no data is missing. + + """ + # + if usemask: + from numpy.ma import MaskedArray, make_mask_descr + # Check the input dictionary of converters + user_converters = converters or {} + if not isinstance(user_converters, dict): + errmsg = "The input argument 'converters' should be a valid dictionary "\ + "(got '%s' instead)" + raise TypeError(errmsg % type(user_converters)) + # Check the input dictionary of missing values + user_missing_values = missing_values or {} + if not isinstance(user_missing_values, dict): + errmsg = "The input argument 'missing_values' should be a valid "\ + "dictionary (got '%s' instead)" + raise TypeError(errmsg % type(missing_values)) + defmissing = [_.strip() for _ in missing.split(',')] + [''] + + # Initialize the filehandle, the LineSplitter and the NameValidator +# fhd = _to_filehandle(fname) + if isinstance(fname, basestring): + fhd = np.lib._datasource.open(fname) + elif not hasattr(fname, 'read'): + raise TypeError("The input should be a string or a filehandle. 
"\ + "(got %s instead)" % type(fname)) + else: + fhd = fname + split_line = LineSplitter(delimiter=delimiter, comments=comments, + autostrip=False)._handyman + validate_names = NameValidator(excludelist=excludelist, + deletechars=deletechars, + case_sensitive=case_sensitive) + + # Get the first valid lines after the first skiprows ones + for i in xrange(skiprows): + fhd.readline() + first_values = None + while not first_values: + first_line = fhd.readline() + if first_line == '': + raise IOError('End-of-file reached before encountering data.') + if names is True: + first_values = first_line.strip().split(delimiter) + else: + first_values = split_line(first_line) + if names is True: + fval = first_values[0].strip() + if fval in comments: + del first_values[0] + + # Check the columns to use + if usecols is not None: + usecols = list(usecols) + nbcols = len(usecols or first_values) + + # Check the names and overwrite the dtype.names if needed + if dtype is not None: + dtype = np.dtype(dtype) + dtypenames = getattr(dtype, 'names', None) + if names is True: + names = validate_names([_.strip() for _ in first_values]) + first_line ='' + elif _is_string_like(names): + names = validate_names([_.strip() for _ in names.split(',')]) + elif names: + names = validate_names(names) + elif dtypenames: + dtype.names = validate_names(dtypenames) + if names and dtypenames: + dtype.names = names + + # If usecols is a list of names, convert to a list of indices + if usecols: + for (i, current) in enumerate(usecols): + if _is_string_like(current): + usecols[i] = names.index(current) + + # If user_missing_values has names as keys, transform them to indices + missing_values = {} + for (key, val) in user_missing_values.iteritems(): + # If val is a list, flatten it. 
In any case, add missing &'' to the list + if isinstance(val, (list, tuple)): + val = [str(_) for _ in val] + else: + val = [str(val),] + val.extend(defmissing) + if _is_string_like(key): + try: + missing_values[names.index(key)] = val + except ValueError: + pass + else: + missing_values[key] = val + + + # Initialize the default converters + if dtype is None: + # Note: we can't use a [...]*nbcols, as we would have 3 times the same + # ... converter, instead of 3 different converters. + converters = [StringConverter(None, + missing_values=missing_values.get(_, defmissing)) + for _ in range(nbcols)] + else: + flatdtypes = flatten_dtype(dtype) + # Initialize the converters + if len(flatdtypes) > 1: + # Flexible type : get a converter from each dtype + converters = [StringConverter(dt, + missing_values=missing_values.get(i, defmissing), + locked=True) + for (i, dt) in enumerate(flatdtypes)] + else: + # Set to a default converter (but w/ different missing values) + converters = [StringConverter(dtype, + missing_values=missing_values.get(_, defmissing), + locked=True) + for _ in range(nbcols)] + missing_values = [_.missing_values for _ in converters] + + # Update the converters to use the user-defined ones + uc_update = [] + for (i, conv) in user_converters.iteritems(): + # If the converter is specified by column names, use the index instead + if _is_string_like(i): + i = names.index(i) + if usecols: + try: + i = usecols.index(i) + except ValueError: + # Unused converter specified + continue + converters[i].update(conv, default=None, + missing_values=missing_values[i], + locked=True) + uc_update.append((i, conv)) + # Make sure we have the corrected keys in user_converters... 
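The loop above normalizes the user's `missing_values` (keys may be column names or indices, values may be scalars or lists) into a uniform index-to-list-of-strings mapping. A standalone sketch of that transformation (`normalize_missing` and its argument names are hypothetical, not the patch's):

```python
def normalize_missing(user_missing, names, defaults):
    """Map column indices to the strings that flag a missing value."""
    out = {}
    for key, val in user_missing.items():
        # Flatten scalars and sequences into a list of strings.
        if isinstance(val, (list, tuple)):
            vals = [str(v) for v in val]
        else:
            vals = [str(val)]
        vals.extend(defaults)            # global sentinels apply to every column
        if isinstance(key, str):         # a column name: translate to an index
            if key in names:
                out[names.index(key)] = vals
        else:
            out[key] = vals
    return out

table = normalize_missing({'b': 'N/A', 0: ['none', '?']},
                          names=['a', 'b'], defaults=[''])
```

As in the patch, a name that cannot be resolved to an index is silently dropped.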
+ user_converters.update(uc_update) + + # Reset the names to match the usecols + if (not first_line) and usecols: + names = [names[_] for _ in usecols] + + rows = [] + append_to_rows = rows.append + if usemask: + masks = [] + append_to_masks = masks.append + # Parse each line + for line in itertools.chain([first_line,], fhd): + values = split_line(line) + # Skip an empty line + if len(values) == 0: + continue + # Select only the columns we need + if usecols: + values = [values[_] for _ in usecols] + # Check whether we need to update the converter + if dtype is None: + for (converter, item) in zip(converters, values): + converter.upgrade(item) + # Store the values + append_to_rows(tuple(values)) + if usemask: + append_to_masks(tuple([val.strip() in mss + for (val, mss) in zip(values, + missing_values)])) + + # Convert each value according to the converter: + # We want to modify the list in place to avoid creating a new one... + if loose: + conversionfuncs = [conv._loose_call for conv in converters] + else: + conversionfuncs = [conv._strict_call for conv in converters] + for (i, vals) in enumerate(rows): + rows[i] = tuple([convert(val) + for (convert, val) in zip(conversionfuncs, vals)]) + + # Reset the dtype + data = rows + if dtype is None: + # Get the dtypes from the types of the converters + coldtypes = [conv.type for conv in converters] + # Find the columns with strings... + strcolidx = [i for (i, v) in enumerate(coldtypes) + if v in (type('S'), np.string_)] + # ... and take the largest number of chars. 
+ for i in strcolidx: + coldtypes[i] = "|S%i" % max(len(row[i]) for row in data) + # + if names is None: + # If the dtype is uniform, don't define names, else use '' + base = set([c.type for c in converters if c._checked]) + + if len(base) == 1: + (ddtype, mdtype) = (list(base)[0], np.bool) + else: + ddtype = [('', dt) for dt in coldtypes] + mdtype = [('', np.bool) for dt in coldtypes] + else: + ddtype = zip(names, coldtypes) + mdtype = zip(names, [np.bool] * len(coldtypes)) + output = np.array(data, dtype=ddtype) + if usemask: + outputmask = np.array(masks, dtype=mdtype) + else: + # Overwrite the initial dtype names if needed + if names and dtype.names: + dtype.names = names + flatdtypes = flatten_dtype(dtype) + # Case 1. We have a structured type + if len(flatdtypes) > 1: + # Nested dtype, eg [('a', int), ('b', [('b0', int), ('b1', 'f4')])] + # First, create the array using a flattened dtype: + # [('a', int), ('b1', int), ('b2', float)] + # Then, view the array using the specified dtype. + if has_nested_fields(dtype): + if 'O' in (_.char for _ in flatdtypes): + errmsg = "Nested fields involving objects "\ + "are not supported..." + raise NotImplementedError(errmsg) + rows = np.array(data, dtype=[('', t) for t in flatdtypes]) + output = rows.view(dtype) + else: + output = np.array(data, dtype=dtype) + # Now, process the rowmasks the same way + if usemask: + rowmasks = np.array(masks, + dtype=np.dtype([('', np.bool) + for t in flatdtypes])) + # Construct the new dtype + mdtype = make_mask_descr(dtype) + outputmask = rowmasks.view(mdtype) + # Case #2. 
We have a basic dtype + else: + # We used some user-defined converters + if user_converters: + ishomogeneous = True + descr = [] + for (i, ttype) in enumerate([conv.type for conv in converters]): + # Keep the dtype of the current converter + if i in user_converters: + ishomogeneous &= (ttype == dtype.type) + if ttype == np.string_: + ttype = "|S%i" % max(len(row[i]) for row in data) + descr.append(('', ttype)) + else: + descr.append(('', dtype)) + # So we changed the dtype ? + if not ishomogeneous: + # We have more than one field + if len(descr) > 1: + dtype = np.dtype(descr) + # We have only one field: drop the name if not needed. + else: + dtype = np.dtype(ttype) + # + output = np.array(data, dtype) + if usemask: + if dtype.names: + mdtype = [(_, np.bool) for _ in dtype.names] + else: + mdtype = np.bool + outputmask = np.array(masks, dtype=mdtype) + # Try to take care of the missing data we missed + if usemask and output.dtype.names: + for (name, conv) in zip(names or (), converters): + missing_values = [conv(_) for _ in conv.missing_values if _ != ''] + for mval in missing_values: + outputmask[name] |= (output[name] == mval) + # Construct the final array + if usemask: + output = output.view(MaskedArray) + output._mask = outputmask + if unpack: + return output.squeeze().T + return output.squeeze() + + + +def ndfromtxt(fname, dtype=float, comments='#', delimiter=None, skiprows=0, + converters=None, missing='', missing_values=None, + usecols=None, unpack=None, names=None, + excludelist=None, deletechars=None, case_sensitive=True,): + """ + Load ASCII data stored in fname and returns a ndarray. + + Complete description of all the optional input parameters is available in + the docstring of the `genfromtxt` function. + + See Also + -------- + numpy.genfromtxt : generic function. 
+ + """ + kwargs = dict(dtype=dtype, comments=comments, delimiter=delimiter, + skiprows=skiprows, converters=converters, + missing=missing, missing_values=missing_values, + usecols=usecols, unpack=unpack, names=names, + excludelist=excludelist, deletechars=deletechars, + case_sensitive=case_sensitive, usemask=False) + return genfromtxt(fname, **kwargs) + +def mafromtxt(fname, dtype=float, comments='#', delimiter=None, skiprows=0, + converters=None, missing='', missing_values=None, + usecols=None, unpack=None, names=None, + excludelist=None, deletechars=None, case_sensitive=True,): + """ + Load ASCII data stored in fname and returns a MaskedArray. + + Complete description of all the optional input parameters is available in + the docstring of the `genfromtxt` function. + + See Also + -------- + numpy.genfromtxt : generic function. + """ + kwargs = dict(dtype=dtype, comments=comments, delimiter=delimiter, + skiprows=skiprows, converters=converters, + missing=missing, missing_values=missing_values, + usecols=usecols, unpack=unpack, names=names, + excludelist=excludelist, deletechars=deletechars, + case_sensitive=case_sensitive, + usemask=True) + return genfromtxt(fname, **kwargs) + + +def recfromtxt(fname, dtype=None, comments='#', delimiter=None, skiprows=0, + converters=None, missing='', missing_values=None, + usecols=None, unpack=None, names=None, + excludelist=None, deletechars=None, case_sensitive=True, + usemask=False): + """ + Load ASCII data stored in fname and returns a standard recarray (if + `usemask=False`) or a MaskedRecords (if `usemask=True`). + + Complete description of all the optional input parameters is available in + the docstring of the `genfromtxt` function. + + See Also + -------- + numpy.genfromtxt : generic function + + Warnings + -------- + * by default, `dtype=None`, which means that the dtype of the output array + will be determined from the data. 
+ """ + kwargs = dict(dtype=dtype, comments=comments, delimiter=delimiter, + skiprows=skiprows, converters=converters, + missing=missing, missing_values=missing_values, + usecols=usecols, unpack=unpack, names=names, + excludelist=excludelist, deletechars=deletechars, + case_sensitive=case_sensitive, usemask=usemask) + output = genfromtxt(fname, **kwargs) + if usemask: + from numpy.ma.mrecords import MaskedRecords + output = output.view(MaskedRecords) + else: + output = output.view(np.recarray) + return output + + +def recfromcsv(fname, dtype=None, comments='#', skiprows=0, + converters=None, missing='', missing_values=None, + usecols=None, unpack=None, names=True, + excludelist=None, deletechars=None, case_sensitive='lower', + usemask=False): + """ + Load ASCII data stored in comma-separated file and returns a recarray (if + `usemask=False`) or a MaskedRecords (if `usemask=True`). + + Complete description of all the optional input parameters is available in + the docstring of the `genfromtxt` function. + + See Also + -------- + numpy.genfromtxt : generic function + """ + kwargs = dict(dtype=dtype, comments=comments, delimiter=",", + skiprows=skiprows, converters=converters, + missing=missing, missing_values=missing_values, + usecols=usecols, unpack=unpack, names=names, + excludelist=excludelist, deletechars=deletechars, + case_sensitive=case_sensitive, usemask=usemask) + output = genfromtxt(fname, **kwargs) + if usemask: + from numpy.ma.mrecords import MaskedRecords + output = output.view(MaskedRecords) + else: + output = output.view(np.recarray) + return output + diff --git a/numpy/lib/recfunctions.py b/numpy/lib/recfunctions.py new file mode 100644 index 000000000..b3eecdc0e --- /dev/null +++ b/numpy/lib/recfunctions.py @@ -0,0 +1,942 @@ +""" +Collection of utilities to manipulate structured arrays. + +Most of these functions were initially implemented by John Hunter for matplotlib. +They have been rewritten and extended for convenience. 
+ + +""" + + +import itertools +from itertools import chain as iterchain, repeat as iterrepeat, izip as iterizip +import numpy as np +from numpy import ndarray, recarray +import numpy.ma as ma +from numpy.ma import MaskedArray +from numpy.ma.mrecords import MaskedRecords + +from numpy.lib._iotools import _is_string_like + +_check_fill_value = np.ma.core._check_fill_value + +__all__ = ['append_fields', + 'drop_fields', + 'find_duplicates', + 'get_fieldstructure', + 'join_by', + 'merge_arrays', + 'rec_append_fields', 'rec_drop_fields', 'rec_join', + 'recursive_fill_fields', 'rename_fields', + 'stack_arrays', + ] + + +def recursive_fill_fields(input, output): + """ + Fills fields from output with fields from input, + with support for nested structures. + + Parameters + ---------- + input : ndarray + Input array. + output : ndarray + Output array. + + Notes + ----- + * `output` should be at least the same size as `input` + + Examples + -------- + >>> a = np.array([(1, 10.), (2, 20.)], dtype=[('A', int), ('B', float)]) + >>> b = np.zeros((3,), dtype=a.dtype) + >>> recursive_fill_fields(a, b) + np.array([(1, 10.), (2, 20.), (0, 0.)], dtype=[('A', int), ('B', float)]) + + """ + newdtype = output.dtype + for field in newdtype.names: + try: + current = input[field] + except ValueError: + continue + if current.dtype.names: + recursive_fill_fields(current, output[field]) + else: + output[field][:len(current)] = current + return output + + + +def get_names(adtype): + """ + Returns the field names of the input datatype as a tuple. 
+ + Parameters + ---------- + adtype : dtype + Input datatype + + Examples + -------- + >>> get_names(np.empty((1,), dtype=int)) is None + True + >>> get_names(np.empty((1,), dtype=[('A',int), ('B', float)])) + ('A', 'B') + >>> adtype = np.dtype([('a', int), ('b', [('ba', int), ('bb', int)])]) + >>> get_names(adtype) + ('a', ('b', ('ba', 'bb'))) + """ + listnames = [] + names = adtype.names + for name in names: + current = adtype[name] + if current.names: + listnames.append((name, tuple(get_names(current)))) + else: + listnames.append(name) + return tuple(listnames) or None + + +def get_names_flat(adtype): + """ + Returns the field names of the input datatype as a tuple. Nested structures + are flattened beforehand. + + Parameters + ---------- + adtype : dtype + Input datatype + + Examples + -------- + >>> get_names_flat(np.empty((1,), dtype=int)) is None + True + >>> get_names_flat(np.empty((1,), dtype=[('A',int), ('B', float)])) + ('A', 'B') + >>> adtype = np.dtype([('a', int), ('b', [('ba', int), ('bb', int)])]) + >>> get_names_flat(adtype) + ('a', 'b', 'ba', 'bb') + """ + listnames = [] + names = adtype.names + for name in names: + listnames.append(name) + current = adtype[name] + if current.names: + listnames.extend(get_names_flat(current)) + return tuple(listnames) or None + + +def flatten_descr(ndtype): + """ + Flatten a structured data-type description. + + Examples + -------- + >>> ndtype = np.dtype([('a', '<i4'), ('b', [('ba', '<f8'), ('bb', '<i4')])]) + >>> flatten_descr(ndtype) + (('a', dtype('int32')), ('ba', dtype('float64')), ('bb', dtype('int32'))) + + """ + names = ndtype.names + if names is None: + return ndtype.descr + else: + descr = [] + for field in names: + (typ, _) = ndtype.fields[field] + if typ.names: + descr.extend(flatten_descr(typ)) + else: + descr.append((field, typ)) + return tuple(descr) + + +def zip_descr(seqarrays, flatten=False): + """ + Combine the dtype description of a series of arrays. 
+ + Parameters + ---------- + seqarrays : sequence of arrays + Sequence of arrays + flatten : {boolean}, optional + Whether to collapse nested descriptions. + """ + newdtype = [] + if flatten: + for a in seqarrays: + newdtype.extend(flatten_descr(a.dtype)) + else: + for a in seqarrays: + current = a.dtype + names = current.names or () + if len(names) > 1: + newdtype.append(('', current.descr)) + else: + newdtype.extend(current.descr) + return np.dtype(newdtype).descr + + +def get_fieldstructure(adtype, lastname=None, parents=None,): + """ + Returns a dictionary with fields as keys and a list of parent fields as values. + + This function is used to simplify access to fields nested in other fields. + + Parameters + ---------- + adtype : np.dtype + Input datatype + lastname : optional + Last processed field name (used internally during recursion). + parents : dictionary + Dictionary of parent fields (used internally during recursion). + + Examples + -------- + >>> ndtype = np.dtype([('A', int), + ... ('B', [('BA', int), + ... ('BB', [('BBA', int), ('BBB', int)])])]) + >>> get_fieldstructure(ndtype) + {'A': [], 'B': [], 'BA': ['B'], 'BB': ['B'], + 'BBA': ['B', 'BB'], 'BBB': ['B', 'BB']} + + """ + if parents is None: + parents = {} + names = adtype.names + for name in names: + current = adtype[name] + if current.names: + if lastname: + parents[name] = [lastname,] + else: + parents[name] = [] + parents.update(get_fieldstructure(current, name, parents)) + else: + lastparent = [_ for _ in (parents.get(lastname, []) or [])] + if lastparent: +# if (lastparent[-1] != lastname): + lastparent.append(lastname) + elif lastname: + lastparent = [lastname,] + parents[name] = lastparent or [] + return parents or None + + +def _izip_fields_flat(iterable): + """ + Returns an iterator of concatenated fields from a sequence of arrays, + collapsing any nested structure. 
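The parent-tracking recursion of `get_fieldstructure` above can be reproduced over a plain nested description list, which makes the bookkeeping easier to follow (a sketch; `field_parents` is my name, and the description format mimics a dtype's nested `descr`):

```python
def field_parents(descr, lastname=None, parents=None):
    """Map each field name to the list of fields enclosing it."""
    if parents is None:
        parents = {}
    for name, typ in descr:
        if isinstance(typ, list):            # a nested structure: recurse
            parents[name] = [lastname] if lastname else []
            field_parents(typ, name, parents)
        else:                                # a leaf field: inherit the chain
            chain = list(parents.get(lastname, []))
            if chain:
                chain.append(lastname)
            elif lastname:
                chain = [lastname]
            parents[name] = chain
    return parents

ndtype = [('A', int),
          ('B', [('BA', int),
                 ('BB', [('BBA', int), ('BBB', int)])])]
tree = field_parents(ndtype)
```

For the nested dtype of the docstring example this reproduces the documented mapping, e.g. `'BBA'` is reached through `['B', 'BB']`.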
+ """ + for element in iterable: + if isinstance(element, np.void): + for f in _izip_fields_flat(tuple(element)): + yield f + else: + yield element + + +def _izip_fields(iterable): + """ + Returns an iterator of concatenated fields from a sequence of arrays. + """ + for element in iterable: + if hasattr(element, '__iter__') and not isinstance(element, basestring): + for f in _izip_fields(element): + yield f + elif isinstance(element, np.void) and len(tuple(element)) == 1: + for f in _izip_fields(element): + yield f + else: + yield element + + +def izip_records(seqarrays, fill_value=None, flatten=True): + """ + Returns an iterator of concatenated items from a sequence of arrays. + + Parameters + ---------- + seqarray : sequence of arrays + Sequence of arrays. + fill_value : {None, integer} + Value used to pad shorter iterables. + flatten : {True, False}, + Whether to + """ + # OK, that's a complete ripoff from Python2.6 itertools.izip_longest + def sentinel(counter = ([fill_value]*(len(seqarrays)-1)).pop): + "Yields the fill_value or raises IndexError" + yield counter() + # + fillers = iterrepeat(fill_value) + iters = [iterchain(it, sentinel(), fillers) for it in seqarrays] + # Should we flatten the items, or just use a nested approach + if flatten: + zipfunc = _izip_fields_flat + else: + zipfunc = _izip_fields + # + try: + for tup in iterizip(*iters): + yield tuple(zipfunc(tup)) + except IndexError: + pass + + +def _fix_output(output, usemask=True, asrecarray=False): + """ + Private function: return a recarray, a ndarray, a MaskedArray + or a MaskedRecords depending on the input parameters + """ + if not isinstance(output, MaskedArray): + usemask = False + if usemask: + if asrecarray: + output = output.view(MaskedRecords) + else: + output = ma.filled(output) + if asrecarray: + output = output.view(recarray) + return output + + +def _fix_defaults(output, defaults=None): + """ + Update the fill_value and masked data of `output` + from the default given in a 
dictionary defaults. + """ + names = output.dtype.names + (data, mask, fill_value) = (output.data, output.mask, output.fill_value) + for (k, v) in (defaults or {}).iteritems(): + if k in names: + fill_value[k] = v + data[k][mask[k]] = v + return output + + +def merge_arrays(seqarrays, + fill_value=-1, flatten=False, usemask=True, asrecarray=False): + """ + Merge arrays field by field. + + Parameters + ---------- + seqarrays : sequence of ndarrays + Sequence of arrays + fill_value : {float}, optional + Filling value used to pad missing data on the shorter arrays. + flatten : {False, True}, optional + Whether to collapse nested fields. + usemask : {False, True}, optional + Whether to return a masked array or not. + asrecarray : {False, True}, optional + Whether to return a recarray (MaskedRecords) or not. + + Examples + -------- + >>> merge_arrays((np.array([1, 2]), np.array([10., 20., 30.]))) + masked_array(data = [(1, 10.0) (2, 20.0) (--, 30.0)], + mask = [(False, False) (False, False) (True, False)], + fill_value=(999999, 1e+20) + dtype=[('f0', '<i4'), ('f1', '<f8')]) + >>> merge_arrays((np.array([1, 2]), np.array([10., 20., 30.])), + ... 
usemask=False) + array(data = [(1, 10.0) (2, 20.0) (-1, 30.0)], + dtype=[('f0', '<i4'), ('f1', '<f8')]) + >>> merge_arrays((np.array([1, 2]).view([('a', int)]), + np.array([10., 20., 30.])), + usemask=False, asrecarray=True) + rec.array(data = [(1, 10.0) (2, 20.0) (-1, 30.0)], + dtype=[('a', int), ('f1', '<f8')]) + """ + if (len(seqarrays) == 1): + seqarrays = seqarrays[0] + if isinstance(seqarrays, ndarray): + seqdtype = seqarrays.dtype + if (not flatten) or \ + (zip_descr((seqarrays,), flatten=True) == seqdtype.descr): + seqarrays = seqarrays.ravel() + if not seqdtype.names: + seqarrays = seqarrays.view([('', seqdtype)]) + if usemask: + if asrecarray: + return seqarrays.view(MaskedRecords) + return seqarrays.view(MaskedArray) + elif asrecarray: + return seqarrays.view(recarray) + return seqarrays + else: + seqarrays = (seqarrays,) + # Get the dtype + newdtype = zip_descr(seqarrays, flatten=flatten) + # Get the data and the fill_value from each array + seqdata = [ma.getdata(a.ravel()) for a in seqarrays] + seqmask = [ma.getmaskarray(a).ravel() for a in seqarrays] + fill_value = [_check_fill_value(fill_value, a.dtype) for a in seqdata] + # Make an iterator from each array, padding w/ fill_values + maxlength = max(len(a) for a in seqarrays) + for (i, (a, m, fval)) in enumerate(zip(seqdata, seqmask, fill_value)): + # Flatten the fill_values if there's only one field + if isinstance(fval, (ndarray, np.void)): + fmsk = ma.ones((1,), m.dtype)[0] + if len(fval.dtype) == 1: + fval = fval.item()[0] + fmsk = True + else: + # fval and fmsk should be np.void objects + fval = np.array([fval,], dtype=a.dtype)[0] +# fmsk = np.array([fmsk,], dtype=m.dtype)[0] + else: + fmsk = True + nbmissing = (maxlength-len(a)) + seqdata[i] = iterchain(a, [fval]*nbmissing) + seqmask[i] = iterchain(m, [fmsk]*nbmissing) + # + data = izip_records(seqdata, flatten=flatten) + data = tuple(data) + if usemask: + mask = izip_records(seqmask, fill_value=True, flatten=flatten) + mask = tuple(mask) + 
output = ma.array(np.fromiter(data, dtype=newdtype)) + output._mask[:] = list(mask) + if asrecarray: + output = output.view(MaskedRecords) + else: + output = np.fromiter(data, dtype=newdtype) + if asrecarray: + output = output.view(recarray) + return output + + + +def drop_fields(base, drop_names, usemask=True, asrecarray=False): + """ + Return a new array with fields in `drop_names` dropped. + + Nested fields are supported. + + Parameters + ---------- + base : array + Input array + drop_names : string or sequence + String or sequence of strings corresponding to the names of the fields + to drop. + usemask : {False, True}, optional + Whether to return a masked array or not. + asrecarray : string or sequence + Whether to return a recarray or a mrecarray (`asrecarray=True`) or + a plain ndarray or masked array with flexible dtype (`asrecarray=False`) + + Examples + -------- + >>> a = np.array([(1, (2, 3.0)), (4, (5, 6.0))], + dtype=[('a', int), ('b', [('ba', float), ('bb', int)])]) + >>> drop_fields(a, 'a') + array([((2.0, 3),), ((5.0, 6),)], + dtype=[('b', [('ba', '<f8'), ('bb', '<i4')])]) + >>> drop_fields(a, 'ba') + array([(1, (3,)), (4, (6,))], + dtype=[('a', '<i4'), ('b', [('bb', '<i4')])]) + >>> drop_fields(a, ['ba', 'bb']) + array([(1,), (4,)], + dtype=[('a', '<i4')]) + """ + if _is_string_like(drop_names): + drop_names = [drop_names,] + else: + drop_names = set(drop_names) + # + def _drop_descr(ndtype, drop_names): + names = ndtype.names + newdtype = [] + for name in names: + current = ndtype[name] + if name in drop_names: + continue + if current.names: + descr = _drop_descr(current, drop_names) + if descr: + newdtype.append((name, descr)) + else: + newdtype.append((name, current)) + return newdtype + # + newdtype = _drop_descr(base.dtype, drop_names) + if not newdtype: + return None + # + output = np.empty(base.shape, dtype=newdtype) + output = recursive_fill_fields(base, output) + return _fix_output(output, usemask=usemask, asrecarray=asrecarray) + + +def 
rec_drop_fields(base, drop_names): + """ + Returns a new numpy.recarray with fields in `drop_names` dropped. + """ + return drop_fields(base, drop_names, usemask=False, asrecarray=True) + + + +def rename_fields(base, namemapper): + """ + Rename the fields from a flexible-datatype ndarray or recarray. + + Nested fields are supported. + + Parameters + ---------- + base : ndarray + Input array whose fields must be modified. + namemapper : dictionary + Dictionary mapping old field names to their new version. + + Examples + -------- + >>> a = np.array([(1, (2, [3.0, 30.])), (4, (5, [6.0, 60.]))], + dtype=[('a', int), + ('b', [('ba', float), ('bb', (float, 2))])]) + >>> rename_fields(a, {'a':'A', 'bb':'BB'}) + array([(1, (2.0, 3)), (4, (5.0, 6))], + dtype=[('A', '<i4'), ('b', [('ba', '<f8'), ('BB', '<i4')])]) + + """ + def _recursive_rename_fields(ndtype, namemapper): + newdtype = [] + for name in ndtype.names: + newname = namemapper.get(name, name) + current = ndtype[name] + if current.names: + newdtype.append((newname, + _recursive_rename_fields(current, namemapper))) + else: + newdtype.append((newname, current)) + return newdtype + newdtype = _recursive_rename_fields(base.dtype, namemapper) + return base.view(newdtype) + + +def append_fields(base, names, data=None, dtypes=None, + fill_value=-1, usemask=True, asrecarray=False): + """ + Add new fields to an existing array. + + The names of the fields are given with the `names` arguments, + the corresponding values with the `data` arguments. + If a single field is appended, `names`, `data` and `dtypes` do not have + to be lists but just values. + + Parameters + ---------- + base : array + Input array to extend. + names : string, sequence + String or sequence of strings corresponding to the names + of the new fields. + data : array or sequence of arrays + Array or sequence of arrays storing the fields to add to the base. + dtypes : sequence of datatypes + Datatype or sequence of datatypes. 
+ If None, the datatypes are estimated from the `data`. + fill_value : {float}, optional + Filling value used to pad missing data on the shorter arrays. + usemask : {False, True}, optional + Whether to return a masked array or not. + asrecarray : {False, True}, optional + Whether to return a recarray (MaskedRecords) or not. + + """ + # Check the names + if isinstance(names, (tuple, list)): + if len(names) != len(data): + err_msg = "The number of arrays does not match the number of names" + raise ValueError(err_msg) + elif isinstance(names, basestring): + names = [names,] + data = [data,] + # + if dtypes is None: + data = [np.array(a, copy=False, subok=True) for a in data] + data = [a.view([(name, a.dtype)]) for (name, a) in zip(names, data)] + elif not hasattr(dtypes, '__iter__'): + dtypes = [dtypes,] + if len(data) != len(dtypes): + if len(dtypes) == 1: + dtypes = dtypes * len(data) + else: + msg = "The dtypes argument must be None, "\ + "a single dtype or a list." + raise ValueError(msg) + data = [np.array(a, copy=False, subok=True, dtype=d).view([(n, d)]) + for (a, n, d) in zip(data, names, dtypes)] + # + base = merge_arrays(base, usemask=usemask, fill_value=fill_value) + if len(data) > 1: + data = merge_arrays(data, flatten=True, usemask=usemask, + fill_value=fill_value) + else: + data = data.pop() + # + output = ma.masked_all(max(len(base), len(data)), + dtype=base.dtype.descr + data.dtype.descr) + output = recursive_fill_fields(base, output) + output = recursive_fill_fields(data, output) + # + return _fix_output(output, usemask=usemask, asrecarray=asrecarray) + + + +def rec_append_fields(base, names, data, dtypes=None): + """ + Add new fields to an existing array. + + The names of the fields are given with the `names` arguments, + the corresponding values with the `data` arguments. + If a single field is appended, `names`, `data` and `dtypes` do not have + to be lists but just values. + + Parameters + ---------- + base : array + Input array to extend. 
+    names : string, sequence
+        String or sequence of strings corresponding to the names
+        of the new fields.
+    data : array or sequence of arrays
+        Array or sequence of arrays storing the fields to add to the base.
+    dtypes : sequence of datatypes, optional
+        Datatype or sequence of datatypes.
+        If None, the datatypes are estimated from the `data`.
+
+    See Also
+    --------
+    append_fields
+
+    Returns
+    -------
+    appended_array : np.recarray
+    """
+    return append_fields(base, names, data=data, dtypes=dtypes,
+                         asrecarray=True, usemask=False)
+
+
+
+def stack_arrays(arrays, defaults=None, usemask=True, asrecarray=False,
+                 autoconvert=False):
+    """
+    Superpose arrays field by field.
+
+    Parameters
+    ----------
+    arrays : array or sequence
+        Sequence of input arrays.
+    defaults : dictionary, optional
+        Dictionary mapping field names to the corresponding default values.
+    usemask : {True, False}, optional
+        Whether to return a MaskedArray (or MaskedRecords if `asrecarray==True`)
+        or a ndarray.
+    asrecarray : {False, True}, optional
+        Whether to return a recarray (or MaskedRecords if `usemask==True`) or
+        just a flexible-type ndarray.
+    autoconvert : {False, True}, optional
+        Whether to automatically cast the type of the field to the maximum.
+
+    Examples
+    --------
+    >>> x = np.array([1, 2,])
+    >>> stack_arrays(x) is x
+    True
+    >>> z = np.array([('A', 1), ('B', 2)], dtype=[('A', '|S3'), ('B', float)])
+    >>> zz = np.array([('a', 10., 100.), ('b', 20., 200.), ('c', 30., 300.)],
+    ...               dtype=[('A', '|S3'), ('B', float), ('C', float)])
+    >>> test = stack_arrays((z,zz))
+    >>> test
+    masked_array(data = [('A', 1.0, --) ('B', 2.0, --) ('a', 10.0, 100.0)
+     ('b', 20.0, 200.0) ('c', 30.0, 300.0)],
+                 mask = [(False, False, True) (False, False, True) (False, False, False)
+     (False, False, False) (False, False, False)],
+           fill_value=('N/A', 1e+20, 1e+20)
dtype=[('A', '|S3'), ('B', '<f8'), ('C', '<f8')]) + + """ + if isinstance(arrays, ndarray): + return arrays + elif len(arrays) == 1: + return arrays[0] + seqarrays = [np.asanyarray(a).ravel() for a in arrays] + nrecords = [len(a) for a in seqarrays] + ndtype = [a.dtype for a in seqarrays] + fldnames = [d.names for d in ndtype] + # + dtype_l = ndtype[0] + newdescr = dtype_l.descr + names = [_[0] for _ in newdescr] + for dtype_n in ndtype[1:]: + for descr in dtype_n.descr: + name = descr[0] or '' + if name not in names: + newdescr.append(descr) + names.append(name) + else: + nameidx = names.index(name) + current_descr = newdescr[nameidx] + if autoconvert: + if np.dtype(descr[1]) > np.dtype(current_descr[-1]): + current_descr = list(current_descr) + current_descr[-1] = descr[1] + newdescr[nameidx] = tuple(current_descr) + elif descr[1] != current_descr[-1]: + raise TypeError("Incompatible type '%s' <> '%s'" %\ + (dict(newdescr)[name], descr[1])) + # Only one field: use concatenate + if len(newdescr) == 1: + output = ma.concatenate(seqarrays) + else: + # + output = ma.masked_all((np.sum(nrecords),), newdescr) + offset = np.cumsum(np.r_[0, nrecords]) + seen = [] + for (a, n, i, j) in zip(seqarrays, fldnames, offset[:-1], offset[1:]): + names = a.dtype.names + if names is None: + output['f%i' % len(seen)][i:j] = a + else: + for name in n: + output[name][i:j] = a[name] + if name not in seen: + seen.append(name) + # + return _fix_output(_fix_defaults(output, defaults), + usemask=usemask, asrecarray=asrecarray) + + + +def find_duplicates(a, key=None, ignoremask=True, return_index=False): + """ + Find the duplicates in a structured array along a given key + + Parameters + ---------- + a : array-like + Input array + key : {string, None}, optional + Name of the fields along which to check the duplicates. + If None, the search is performed by records + ignoremask : {True, False}, optional + Whether masked data should be discarded or considered as duplicates. 
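As a quick usage sketch of the `stack_arrays` function added above (illustrative only; it assumes the `numpy.lib.recfunctions` import path this patch introduces, and the sample data is made up):

```python
import numpy as np
from numpy.lib import recfunctions as rfn  # module added by this patch

# Two structured arrays with partially overlapping fields.
z = np.array([('A', 1), ('B', 2)], dtype=[('A', '|S3'), ('B', float)])
zz = np.array([('a', 10., 100.), ('b', 20., 200.)],
              dtype=[('A', '|S3'), ('B', float), ('C', float)])

# Fields missing from an input end up masked in the stacked result.
stacked = rfn.stack_arrays((z, zz))
print(stacked.dtype.names)  # ('A', 'B', 'C')
print(len(stacked))         # 4
print(stacked['C'].mask)    # the rows coming from z have no 'C' data
```

The default `usemask=True` is what makes the missing `'C'` cells representable at all: they come back masked rather than filled with an arbitrary value.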
+    return_index : {False, True}, optional
+        Whether to return the indices of the duplicated values.
+
+    Examples
+    --------
+    >>> ndtype = [('a', int)]
+    >>> a = ma.array([1, 1, 1, 2, 2, 3, 3],
+    ...         mask=[0, 0, 1, 0, 0, 0, 1]).view(ndtype)
+    >>> find_duplicates(a, ignoremask=True, return_index=True)
+    """
+    a = np.asanyarray(a).ravel()
+    # Get a dictionary of fields
+    fields = get_fieldstructure(a.dtype)
+    # Get the sorting data (by selecting the corresponding field)
+    base = a
+    if key:
+        for f in fields[key]:
+            base = base[f]
+        base = base[key]
+    # Get the sorting indices and the sorted data
+    sortidx = base.argsort()
+    sortedbase = base[sortidx]
+    sorteddata = sortedbase.filled()
+    # Compare the sorting data
+    flag = (sorteddata[:-1] == sorteddata[1:])
+    # If masked data must be ignored, set the flag to false where needed
+    if ignoremask:
+        sortedmask = sortedbase.recordmask
+        flag[sortedmask[1:]] = False
+    flag = np.concatenate(([False], flag))
+    # We need to take the point on the left as well (else we're missing it)
+    flag[:-1] = flag[:-1] + flag[1:]
+    duplicates = a[sortidx][flag]
+    if return_index:
+        return (duplicates, sortidx[flag])
+    else:
+        return duplicates
+
+
+
+def join_by(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2',
+            defaults=None, usemask=True, asrecarray=False):
+    """
+    Join arrays `r1` and `r2` on key `key`.
+
+    The key should be either a string or a sequence of strings corresponding
+    to the fields used to join the array.
+    An exception is raised if the `key` field cannot be found in both input
+    arrays.
+    Neither `r1` nor `r2` should have any duplicates along `key`: the presence
+    of duplicates will make the output quite unreliable. Note that the
+    algorithm does not check for duplicates.
+
+    Parameters
+    ----------
+    key : {string, sequence}
+        A string or a sequence of strings corresponding to the fields used
+        for comparison.
+    r1, r2 : arrays
+        Structured arrays.
+    jointype : {'inner', 'outer', 'leftouter'}, optional
+        If 'inner', returns the elements common to both r1 and r2.
+        If 'outer', returns the common elements as well as the elements of r1
+        not in r2 and the elements of r2 not in r1.
+        If 'leftouter', returns the common elements and the elements of r1 not
+        in r2.
+    r1postfix : string, optional
+        String appended to the names of the fields of r1 that are present in r2
+        but not part of the key.
+    r2postfix : string, optional
+        String appended to the names of the fields of r2 that are present in r1
+        but not part of the key.
+    defaults : {dictionary}, optional
+        Dictionary mapping field names to the corresponding default values.
+    usemask : {True, False}, optional
+        Whether to return a MaskedArray (or MaskedRecords if `asrecarray==True`)
+        or a ndarray.
+    asrecarray : {False, True}, optional
+        Whether to return a recarray (or MaskedRecords if `usemask==True`) or
+        just a flexible-type ndarray.
+
+    Notes
+    -----
+    * The output is sorted along the key.
+    * A temporary array is formed by dropping the fields not in the key for the
+      two arrays and concatenating the result. This array is then sorted, and
+      the common entries selected. The output is constructed by filling the fields
+      with the selected entries. Matching is not preserved if there are
+      duplicates.
+ + """ + # Check jointype + if jointype not in ('inner', 'outer', 'leftouter'): + raise ValueError("The 'jointype' argument should be in 'inner', "\ + "'outer' or 'leftouter' (got '%s' instead)" % jointype) + # If we have a single key, put it in a tuple + if isinstance(key, basestring): + key = (key, ) + + # Check the keys + for name in key: + if name not in r1.dtype.names: + raise ValueError('r1 does not have key field %s'%name) + if name not in r2.dtype.names: + raise ValueError('r2 does not have key field %s'%name) + + # Make sure we work with ravelled arrays + r1 = r1.ravel() + r2 = r2.ravel() + (nb1, nb2) = (len(r1), len(r2)) + (r1names, r2names) = (r1.dtype.names, r2.dtype.names) + + # Make temporary arrays of just the keys + r1k = drop_fields(r1, [n for n in r1names if n not in key]) + r2k = drop_fields(r2, [n for n in r2names if n not in key]) + + # Concatenate the two arrays for comparison + aux = ma.concatenate((r1k, r2k)) + idx_sort = aux.argsort(order=key) + aux = aux[idx_sort] + # + # Get the common keys + flag_in = ma.concatenate(([False], aux[1:] == aux[:-1])) + flag_in[:-1] = flag_in[1:] + flag_in[:-1] + idx_in = idx_sort[flag_in] + idx_1 = idx_in[(idx_in < nb1)] + idx_2 = idx_in[(idx_in >= nb1)] - nb1 + (r1cmn, r2cmn) = (len(idx_1), len(idx_2)) + if jointype == 'inner': + (r1spc, r2spc) = (0, 0) + elif jointype == 'outer': + idx_out = idx_sort[~flag_in] + idx_1 = np.concatenate((idx_1, idx_out[(idx_out < nb1)])) + idx_2 = np.concatenate((idx_2, idx_out[(idx_out >= nb1)] - nb1)) + (r1spc, r2spc) = (len(idx_1) - r1cmn, len(idx_2) - r2cmn) + elif jointype == 'leftouter': + idx_out = idx_sort[~flag_in] + idx_1 = np.concatenate((idx_1, idx_out[(idx_out < nb1)])) + (r1spc, r2spc) = (len(idx_1) - r1cmn, 0) + # Select the entries from each input + (s1, s2) = (r1[idx_1], r2[idx_2]) + # + # Build the new description of the output array ....... 
+    # Start with the key fields
+    ndtype = [list(_) for _ in r1k.dtype.descr]
+    # Add the other fields
+    ndtype.extend(list(_) for _ in r1.dtype.descr if _[0] not in key)
+    # Find the new list of names (it may be different from r1names)
+    names = list(_[0] for _ in ndtype)
+    for desc in r2.dtype.descr:
+        desc = list(desc)
+        name = desc[0]
+        # Have we seen the current name already?
+        if name in names:
+            nameidx = names.index(name)
+            current = ndtype[nameidx]
+            # The current field is part of the key: take the largest dtype
+            if name in key:
+                current[-1] = max(desc[1], current[-1])
+            # The current field is not part of the key: add the suffixes
+            else:
+                current[0] += r1postfix
+                desc[0] += r2postfix
+                ndtype.insert(nameidx+1, desc)
+        #... we haven't: just add the description to the current list
+        else:
+            names.append(desc[0])
+            ndtype.append(desc)
+    # Revert the elements to tuples
+    ndtype = [tuple(_) for _ in ndtype]
+    # Find the largest number of common fields: r1cmn and r2cmn should be equal, but...
+    cmn = max(r1cmn, r2cmn)
+    # Construct an empty array
+    output = ma.masked_all((cmn + r1spc + r2spc,), dtype=ndtype)
+    names = output.dtype.names
+    for f in r1names:
+        selected = s1[f]
+        if f not in names:
+            f += r1postfix
+        current = output[f]
+        current[:r1cmn] = selected[:r1cmn]
+        if jointype in ('outer', 'leftouter'):
+            current[cmn:cmn+r1spc] = selected[r1cmn:]
+    for f in r2names:
+        selected = s2[f]
+        if f not in names:
+            f += r2postfix
+        current = output[f]
+        current[:r2cmn] = selected[:r2cmn]
+        if (jointype == 'outer') and r2spc:
+            current[-r2spc:] = selected[r2cmn:]
+    # Sort and finalize the output
+    output.sort(order=key)
+    kwargs = dict(usemask=usemask, asrecarray=asrecarray)
+    return _fix_output(_fix_defaults(output, defaults), **kwargs)
+
+
+def rec_join(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2',
+             defaults=None):
+    """
+    Join arrays `r1` and `r2` on keys.
+    Alternative to join_by that always returns a np.recarray.
+ + See Also + -------- + join_by : equivalent function + """ + kwargs = dict(jointype=jointype, r1postfix=r1postfix, r2postfix=r2postfix, + defaults=defaults, usemask=False, asrecarray=True) + return join_by(key, r1, r2, **kwargs) diff --git a/numpy/lib/src/_compiled_base.c b/numpy/lib/src/_compiled_base.c index ddab9f851..54955e60a 100644 --- a/numpy/lib/src/_compiled_base.c +++ b/numpy/lib/src/_compiled_base.c @@ -494,34 +494,45 @@ arr_add_docstring(PyObject *NPY_UNUSED(dummy), PyObject *args) #define _TESTDOC1(typebase) (obj->ob_type == &Py##typebase##_Type) #define _TESTDOC2(typebase) (obj->ob_type == Py##typebase##_TypePtr) -#define _ADDDOC(typebase, doc, name) { \ +#define _ADDDOC(typebase, doc, name) do { \ Py##typebase##Object *new = (Py##typebase##Object *)obj; \ if (!(doc)) { \ doc = docstr; \ } \ else { \ - PyErr_Format(PyExc_RuntimeError, \ - "%s method %s",name, msg); \ + PyErr_Format(PyExc_RuntimeError, "%s method %s", name, msg); \ return NULL; \ } \ - } + } while (0) + + if (_TESTDOC1(CFunction)) + _ADDDOC(CFunction, new->m_ml->ml_doc, new->m_ml->ml_name); + else if (_TESTDOC1(Type)) + _ADDDOC(Type, new->tp_doc, new->tp_name); + else if (_TESTDOC2(MemberDescr)) + _ADDDOC(MemberDescr, new->d_member->doc, new->d_member->name); + else if (_TESTDOC2(GetSetDescr)) + _ADDDOC(GetSetDescr, new->d_getset->doc, new->d_getset->name); + else if (_TESTDOC2(MethodDescr)) + _ADDDOC(MethodDescr, new->d_method->ml_doc, new->d_method->ml_name); + else { + PyObject *doc_attr; + + doc_attr = PyObject_GetAttrString(obj, "__doc__"); + if (doc_attr != NULL && doc_attr != Py_None) { + PyErr_Format(PyExc_RuntimeError, "object %s", msg); + return NULL; + } + Py_XDECREF(doc_attr); - if _TESTDOC1(CFunction) - _ADDDOC(CFunction, new->m_ml->ml_doc, new->m_ml->ml_name) - else if _TESTDOC1(Type) - _ADDDOC(Type, new->tp_doc, new->tp_name) - else if _TESTDOC2(MemberDescr) - _ADDDOC(MemberDescr, new->d_member->doc, new->d_member->name) - else if _TESTDOC2(GetSetDescr) - 
_ADDDOC(GetSetDescr, new->d_getset->doc, new->d_getset->name) - else if _TESTDOC2(MethodDescr) - _ADDDOC(MethodDescr, new->d_method->ml_doc, - new->d_method->ml_name) - else { - PyErr_SetString(PyExc_TypeError, - "Cannot set a docstring for that object"); - return NULL; - } + if (PyObject_SetAttrString(obj, "__doc__", str) < 0) { + PyErr_SetString(PyExc_TypeError, + "Cannot set a docstring for that object"); + return NULL; + } + Py_INCREF(Py_None); + return Py_None; + } #undef _TESTDOC1 #undef _TESTDOC2 @@ -533,35 +544,6 @@ arr_add_docstring(PyObject *NPY_UNUSED(dummy), PyObject *args) } -static char packbits_doc[] = - "out = numpy.packbits(myarray, axis=None)\n\n" - " myarray : an integer type array whose elements should be packed to bits\n\n" - " This routine packs the elements of a binary-valued dataset into a\n" - " NumPy array of type uint8 ('B') whose bits correspond to\n" - " the logical (0 or nonzero) value of the input elements.\n" - " The dimension over-which bit-packing is done is given by axis.\n" - " The shape of the output has the same number of dimensions as the input\n" - " (unless axis is None, in which case the output is 1-d).\n" - "\n" - " Example:\n" - " >>> a = array([[[1,0,1],\n" - " ... [0,1,0]],\n" - " ... [[1,1,0],\n" - " ... 
[0,0,1]]])\n" - " >>> b = numpy.packbits(a,axis=-1)\n" - " >>> b\n" - " array([[[160],[64]],[[192],[32]]], dtype=uint8)\n\n" - " Note that 160 = 128 + 32\n" - " 192 = 128 + 64\n"; - -static char unpackbits_doc[] = - "out = numpy.unpackbits(myarray, axis=None)\n\n" - " myarray - array of uint8 type where each element represents a bit-field\n" - " that should be unpacked into a boolean output array\n\n" - " The shape of the output array is either 1-d (if axis is None) or\n" - " the same shape as the input array with unpacking done along the\n" - " axis specified."; - /* PACKBITS This function packs binary (0 or 1) 1-bit per pixel arrays @@ -809,9 +791,9 @@ static struct PyMethodDef methods[] = { {"add_docstring", (PyCFunction)arr_add_docstring, METH_VARARGS, NULL}, {"packbits", (PyCFunction)io_pack, METH_VARARGS | METH_KEYWORDS, - packbits_doc}, + NULL}, {"unpackbits", (PyCFunction)io_unpack, METH_VARARGS | METH_KEYWORDS, - unpackbits_doc}, + NULL}, {NULL, NULL} /* sentinel */ }; diff --git a/numpy/lib/tests/test__iotools.py b/numpy/lib/tests/test__iotools.py new file mode 100644 index 000000000..2cb8461c3 --- /dev/null +++ b/numpy/lib/tests/test__iotools.py @@ -0,0 +1,167 @@ + +import StringIO + +import numpy as np +from numpy.lib._iotools import LineSplitter, NameValidator, StringConverter,\ + has_nested_fields +from numpy.testing import * + +class TestLineSplitter(TestCase): + "Tests the LineSplitter class." 
+ # + def test_no_delimiter(self): + "Test LineSplitter w/o delimiter" + strg = " 1 2 3 4 5 # test" + test = LineSplitter()(strg) + assert_equal(test, ['1', '2', '3', '4', '5']) + test = LineSplitter('')(strg) + assert_equal(test, ['1', '2', '3', '4', '5']) + + def test_space_delimiter(self): + "Test space delimiter" + strg = " 1 2 3 4 5 # test" + test = LineSplitter(' ')(strg) + assert_equal(test, ['1', '2', '3', '4', '', '5']) + test = LineSplitter(' ')(strg) + assert_equal(test, ['1 2 3 4', '5']) + + def test_tab_delimiter(self): + "Test tab delimiter" + strg= " 1\t 2\t 3\t 4\t 5 6" + test = LineSplitter('\t')(strg) + assert_equal(test, ['1', '2', '3', '4', '5 6']) + strg= " 1 2\t 3 4\t 5 6" + test = LineSplitter('\t')(strg) + assert_equal(test, ['1 2', '3 4', '5 6']) + + def test_other_delimiter(self): + "Test LineSplitter on delimiter" + strg = "1,2,3,4,,5" + test = LineSplitter(',')(strg) + assert_equal(test, ['1', '2', '3', '4', '', '5']) + # + strg = " 1,2,3,4,,5 # test" + test = LineSplitter(',')(strg) + assert_equal(test, ['1', '2', '3', '4', '', '5']) + + def test_constant_fixed_width(self): + "Test LineSplitter w/ fixed-width fields" + strg = " 1 2 3 4 5 # test" + test = LineSplitter(3)(strg) + assert_equal(test, ['1', '2', '3', '4', '', '5', '']) + # + strg = " 1 3 4 5 6# test" + test = LineSplitter(20)(strg) + assert_equal(test, ['1 3 4 5 6']) + # + strg = " 1 3 4 5 6# test" + test = LineSplitter(30)(strg) + assert_equal(test, ['1 3 4 5 6']) + + def test_variable_fixed_width(self): + strg = " 1 3 4 5 6# test" + test = LineSplitter((3,6,6,3))(strg) + assert_equal(test, ['1', '3', '4 5', '6']) + # + strg = " 1 3 4 5 6# test" + test = LineSplitter((6,6,9))(strg) + assert_equal(test, ['1', '3 4', '5 6']) + + +#------------------------------------------------------------------------------- + +class TestNameValidator(TestCase): + # + def test_case_sensitivity(self): + "Test case sensitivity" + names = ['A', 'a', 'b', 'c'] + test = 
NameValidator().validate(names) + assert_equal(test, ['A', 'a', 'b', 'c']) + test = NameValidator(case_sensitive=False).validate(names) + assert_equal(test, ['A', 'A_1', 'B', 'C']) + test = NameValidator(case_sensitive='upper').validate(names) + assert_equal(test, ['A', 'A_1', 'B', 'C']) + test = NameValidator(case_sensitive='lower').validate(names) + assert_equal(test, ['a', 'a_1', 'b', 'c']) + # + def test_excludelist(self): + "Test excludelist" + names = ['dates', 'data', 'Other Data', 'mask'] + validator = NameValidator(excludelist = ['dates', 'data', 'mask']) + test = validator.validate(names) + assert_equal(test, ['dates_', 'data_', 'Other_Data', 'mask_']) + + +#------------------------------------------------------------------------------- + +class TestStringConverter(TestCase): + "Test StringConverter" + # + def test_creation(self): + "Test creation of a StringConverter" + converter = StringConverter(int, -99999) + assert_equal(converter._status, 1) + assert_equal(converter.default, -99999) + # + def test_upgrade(self): + "Tests the upgrade method." + converter = StringConverter() + assert_equal(converter._status, 0) + converter.upgrade('0') + assert_equal(converter._status, 1) + converter.upgrade('0.') + assert_equal(converter._status, 2) + converter.upgrade('0j') + assert_equal(converter._status, 3) + converter.upgrade('a') + assert_equal(converter._status, len(converter._mapper)-1) + # + def test_missing(self): + "Tests the use of missing values." 
+ converter = StringConverter(missing_values=('missing','missed')) + converter.upgrade('0') + assert_equal(converter('0'), 0) + assert_equal(converter(''), converter.default) + assert_equal(converter('missing'), converter.default) + assert_equal(converter('missed'), converter.default) + try: + converter('miss') + except ValueError: + pass + # + def test_upgrademapper(self): + "Tests updatemapper" + from datetime import date + import time + dateparser = lambda s : date(*time.strptime(s, "%Y-%m-%d")[:3]) + StringConverter.upgrade_mapper(dateparser, date(2000,1,1)) + convert = StringConverter(dateparser, date(2000, 1, 1)) + test = convert('2001-01-01') + assert_equal(test, date(2001, 01, 01)) + test = convert('2009-01-01') + assert_equal(test, date(2009, 01, 01)) + test = convert('') + assert_equal(test, date(2000, 01, 01)) + # + def test_string_to_object(self): + "Make sure that string-to-object functions are properly recognized" + from datetime import date + import time + conv = StringConverter(lambda s: date(*(time.strptime(s)[:3]))) + assert_equal(conv._mapper[-2][0](0), 0j) + assert(hasattr(conv, 'default')) + + +#------------------------------------------------------------------------------- + +class TestMiscFunctions(TestCase): + # + def test_has_nested_dtype(self): + "Test has_nested_dtype" + ndtype = np.dtype(np.float) + assert_equal(has_nested_fields(ndtype), False) + ndtype = np.dtype([('A', '|S3'), ('B', float)]) + assert_equal(has_nested_fields(ndtype), False) + ndtype = np.dtype([('A', int), ('B', [('BA', float), ('BB', '|S1')])]) + assert_equal(has_nested_fields(ndtype), True) + diff --git a/numpy/lib/tests/test_function_base.py b/numpy/lib/tests/test_function_base.py index ca8104b53..143e28ae5 100644 --- a/numpy/lib/tests/test_function_base.py +++ b/numpy/lib/tests/test_function_base.py @@ -430,6 +430,44 @@ class TestTrapz(TestCase): #check integral of normal equals 1 assert_almost_equal(sum(r,axis=0),1,7) + def test_ndim(self): + x = linspace(0, 1, 3) 
+ y = linspace(0, 2, 8) + z = linspace(0, 3, 13) + + wx = ones_like(x) * (x[1]-x[0]) + wx[0] /= 2 + wx[-1] /= 2 + wy = ones_like(y) * (y[1]-y[0]) + wy[0] /= 2 + wy[-1] /= 2 + wz = ones_like(z) * (z[1]-z[0]) + wz[0] /= 2 + wz[-1] /= 2 + + q = x[:,None,None] + y[None,:,None] + z[None,None,:] + + qx = (q*wx[:,None,None]).sum(axis=0) + qy = (q*wy[None,:,None]).sum(axis=1) + qz = (q*wz[None,None,:]).sum(axis=2) + + # n-d `x` + r = trapz(q, x=x[:,None,None], axis=0) + assert_almost_equal(r, qx) + r = trapz(q, x=y[None,:,None], axis=1) + assert_almost_equal(r, qy) + r = trapz(q, x=z[None,None,:], axis=2) + assert_almost_equal(r, qz) + + # 1-d `x` + r = trapz(q, x=x, axis=0) + assert_almost_equal(r, qx) + r = trapz(q, x=y, axis=1) + assert_almost_equal(r, qy) + r = trapz(q, x=z, axis=2) + assert_almost_equal(r, qz) + + class TestSinc(TestCase): def test_simple(self): assert(sinc(0)==1) diff --git a/numpy/lib/tests/test_getlimits.py b/numpy/lib/tests/test_getlimits.py index 3fe939b32..325e5a444 100644 --- a/numpy/lib/tests/test_getlimits.py +++ b/numpy/lib/tests/test_getlimits.py @@ -51,5 +51,9 @@ class TestIinfo(TestCase): assert_equal(iinfo(T).max, T(-1)) +def test_instances(): + iinfo(10) + finfo(3.0) + if __name__ == "__main__": run_module_suite() diff --git a/numpy/lib/tests/test_io.py b/numpy/lib/tests/test_io.py index 885fd3616..c38d83add 100644 --- a/numpy/lib/tests/test_io.py +++ b/numpy/lib/tests/test_io.py @@ -1,10 +1,25 @@ -from numpy.testing import * + import numpy as np +import numpy.ma as ma +from numpy.ma.testutils import * + import StringIO from tempfile import NamedTemporaryFile +import sys, time +from datetime import datetime + + +MAJVER, MINVER = sys.version_info[:2] -class RoundtripTest: +def strptime(s, fmt=None): + """This function is available in the datetime module only + from Python >= 2.5. 
+ + """ + return datetime(*time.strptime(s, fmt)[:3]) + +class RoundtripTest(object): def roundtrip(self, save_func, *args, **kwargs): """ save_func : callable @@ -25,7 +40,14 @@ class RoundtripTest: file_on_disk = kwargs.get('file_on_disk', False) if file_on_disk: - target_file = NamedTemporaryFile() + # Do not delete the file on windows, because we can't + # reopen an already opened file on that platform, so we + # need to close the file and reopen it, implying no + # automatic deletion. + if sys.platform == 'win32' and MAJVER >= 2 and MINVER >= 6: + target_file = NamedTemporaryFile(delete=False) + else: + target_file = NamedTemporaryFile() load_file = target_file.name else: target_file = StringIO.StringIO() @@ -37,6 +59,9 @@ class RoundtripTest: target_file.flush() target_file.seek(0) + if sys.platform == 'win32' and not isinstance(target_file, StringIO.StringIO): + target_file.close() + arr_reloaded = np.load(load_file, **load_kwds) self.arr = arr @@ -59,6 +84,7 @@ class RoundtripTest: a = np.array([1, 2, 3, 4], int) self.roundtrip(a) + @np.testing.dec.knownfailureif(sys.platform=='win32', "Fail on Win32") def test_mmap(self): a = np.array([[1, 2.5], [4, 7.3]]) self.roundtrip(a, file_on_disk=True, load_kwds={'mmap_mode': 'r'}) @@ -95,6 +121,7 @@ class TestSavezLoad(RoundtripTest, TestCase): class TestSaveTxt(TestCase): + @np.testing.dec.knownfailureif(sys.platform=='win32', "Fail on Win32") def test_array(self): a =np.array([[1, 2], [3, 4]], float) c = StringIO.StringIO() @@ -319,7 +346,6 @@ class Testfromregex(TestCase): assert_array_equal(x, a) def test_record_2(self): - return # pass this test until #736 is resolved c = StringIO.StringIO() c.write('1312 foo\n1534 bar\n4444 qux') c.seek(0) @@ -341,5 +367,447 @@ class Testfromregex(TestCase): assert_array_equal(x, a) +#####-------------------------------------------------------------------------- + + +class TestFromTxt(TestCase): + # + def test_record(self): + "Test w/ explicit dtype" + data = 
StringIO.StringIO('1 2\n3 4') +# data.seek(0) + test = np.ndfromtxt(data, dtype=[('x', np.int32), ('y', np.int32)]) + control = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + assert_equal(test, control) + # + data = StringIO.StringIO('M 64.0 75.0\nF 25.0 60.0') +# data.seek(0) + descriptor = {'names': ('gender','age','weight'), + 'formats': ('S1', 'i4', 'f4')} + control = np.array([('M', 64.0, 75.0), ('F', 25.0, 60.0)], + dtype=descriptor) + test = np.ndfromtxt(data, dtype=descriptor) + assert_equal(test, control) + + def test_array(self): + "Test outputing a standard ndarray" + data = StringIO.StringIO('1 2\n3 4') + control = np.array([[1,2],[3,4]], dtype=int) + test = np.ndfromtxt(data, dtype=int) + assert_array_equal(test, control) + # + data.seek(0) + control = np.array([[1,2],[3,4]], dtype=float) + test = np.loadtxt(data, dtype=float) + assert_array_equal(test, control) + + def test_1D(self): + "Test squeezing to 1D" + control = np.array([1, 2, 3, 4], int) + # + data = StringIO.StringIO('1\n2\n3\n4\n') + test = np.ndfromtxt(data, dtype=int) + assert_array_equal(test, control) + # + data = StringIO.StringIO('1,2,3,4\n') + test = np.ndfromtxt(data, dtype=int, delimiter=',') + assert_array_equal(test, control) + + def test_comments(self): + "Test the stripping of comments" + control = np.array([1, 2, 3, 5], int) + # Comment on its own line + data = StringIO.StringIO('# comment\n1,2,3,5\n') + test = np.ndfromtxt(data, dtype=int, delimiter=',', comments='#') + assert_equal(test, control) + # Comment at the end of a line + data = StringIO.StringIO('1,2,3,5# comment\n') + test = np.ndfromtxt(data, dtype=int, delimiter=',', comments='#') + assert_equal(test, control) + + def test_skiprows(self): + "Test row skipping" + control = np.array([1, 2, 3, 5], int) + # + data = StringIO.StringIO('comment\n1,2,3,5\n') + test = np.ndfromtxt(data, dtype=int, delimiter=',', skiprows=1) + assert_equal(test, control) + # + data = StringIO.StringIO('# 
comment\n1,2,3,5\n') + test = np.loadtxt(data, dtype=int, delimiter=',', skiprows=1) + assert_equal(test, control) + + def test_header(self): + "Test retrieving a header" + data = StringIO.StringIO('gender age weight\nM 64.0 75.0\nF 25.0 60.0') + test = np.ndfromtxt(data, dtype=None, names=True) + control = {'gender': np.array(['M', 'F']), + 'age': np.array([64.0, 25.0]), + 'weight': np.array([75.0, 60.0])} + assert_equal(test['gender'], control['gender']) + assert_equal(test['age'], control['age']) + assert_equal(test['weight'], control['weight']) + + def test_auto_dtype(self): + "Test the automatic definition of the output dtype" + data = StringIO.StringIO('A 64 75.0 3+4j True\nBCD 25 60.0 5+6j False') + test = np.ndfromtxt(data, dtype=None) + control = [np.array(['A', 'BCD']), + np.array([64, 25]), + np.array([75.0, 60.0]), + np.array([3+4j, 5+6j]), + np.array([True, False]),] + assert_equal(test.dtype.names, ['f0','f1','f2','f3','f4']) + for (i, ctrl) in enumerate(control): + assert_equal(test['f%i' % i], ctrl) + + + def test_auto_dtype_uniform(self): + "Tests whether the output dtype can be uniformized" + data = StringIO.StringIO('1 2 3 4\n5 6 7 8\n') + test = np.ndfromtxt(data, dtype=None) + control = np.array([[1,2,3,4],[5,6,7,8]]) + assert_equal(test, control) + + + def test_fancy_dtype(self): + "Check that a nested dtype isn't MIA" + data = StringIO.StringIO('1,2,3.0\n4,5,6.0\n') + fancydtype = np.dtype([('x', int), ('y', [('t', int), ('s', float)])]) + test = np.ndfromtxt(data, dtype=fancydtype, delimiter=',') + control = np.array([(1,(2,3.0)),(4,(5,6.0))], dtype=fancydtype) + assert_equal(test, control) + + + def test_names_overwrite(self): + "Test overwriting the names of the dtype" + descriptor = {'names': ('g','a','w'), + 'formats': ('S1', 'i4', 'f4')} + data = StringIO.StringIO('M 64.0 75.0\nF 25.0 60.0') + names = ('gender','age','weight') + test = np.ndfromtxt(data, dtype=descriptor, names=names) + descriptor['names'] = names + control = 
np.array([('M', 64.0, 75.0), + ('F', 25.0, 60.0)], dtype=descriptor) + assert_equal(test, control) + + + def test_commented_header(self): + "Check that names can be retrieved even if the line is commented out." + data = StringIO.StringIO(""" +#gender age weight +M 21 72.100000 +F 35 58.330000 +M 33 21.99 + """) + # The # is part of the first name and should be deleted automatically. + test = np.genfromtxt(data, names=True, dtype=None) + ctrl = np.array([('M', 21, 72.1), ('F', 35, 58.33), ('M', 33, 21.99)], + dtype=[('gender','|S1'), ('age', int), ('weight', float)]) + assert_equal(test, ctrl) + # Ditto, but we should get rid of the first element + data = StringIO.StringIO(""" +# gender age weight +M 21 72.100000 +F 35 58.330000 +M 33 21.99 + """) + test = np.genfromtxt(data, names=True, dtype=None) + assert_equal(test, ctrl) + + + def test_autonames_and_usecols(self): + "Tests names and usecols" + data = StringIO.StringIO('A B C D\n aaaa 121 45 9.1') + test = np.ndfromtxt(data, usecols=('A', 'C', 'D'), + names=True, dtype=None) + control = np.array(('aaaa', 45, 9.1), + dtype=[('A', '|S4'), ('C', int), ('D', float)]) + assert_equal(test, control) + + + def test_converters_with_usecols(self): + "Test the combination user-defined converters and usecol" + data = StringIO.StringIO('1,2,3,,5\n6,7,8,9,10\n') + test = np.ndfromtxt(data, dtype=int, delimiter=',', + converters={3:lambda s: int(s or -999)}, + usecols=(1, 3, )) + control = np.array([[2, -999], [7, 9]], int) + assert_equal(test, control) + + def test_converters_with_usecols_and_names(self): + "Tests names and usecols" + data = StringIO.StringIO('A B C D\n aaaa 121 45 9.1') + test = np.ndfromtxt(data, usecols=('A', 'C', 'D'), names=True, + dtype=None, converters={'C':lambda s: 2 * int(s)}) + control = np.array(('aaaa', 90, 9.1), + dtype=[('A', '|S4'), ('C', int), ('D', float)]) + assert_equal(test, control) + + def test_converters_cornercases(self): + "Test the conversion to datetime." 
+ converter = {'date': lambda s: strptime(s, '%Y-%m-%d %H:%M:%SZ')} + data = StringIO.StringIO('2009-02-03 12:00:00Z, 72214.0') + test = np.ndfromtxt(data, delimiter=',', dtype=None, + names=['date','stid'], converters=converter) + control = np.array((datetime(2009,02,03), 72214.), + dtype=[('date', np.object_), ('stid', float)]) + assert_equal(test, control) + + + def test_unused_converter(self): + "Test whether unused converters are forgotten" + data = StringIO.StringIO("1 21\n 3 42\n") + test = np.ndfromtxt(data, usecols=(1,), + converters={0: lambda s: int(s, 16)}) + assert_equal(test, [21, 42]) + # + data.seek(0) + test = np.ndfromtxt(data, usecols=(1,), + converters={1: lambda s: int(s, 16)}) + assert_equal(test, [33, 66]) + + + def test_dtype_with_converters(self): + dstr = "2009; 23; 46" + test = np.ndfromtxt(StringIO.StringIO(dstr,), + delimiter=";", dtype=float, converters={0:str}) + control = np.array([('2009', 23., 46)], + dtype=[('f0','|S4'), ('f1', float), ('f2', float)]) + assert_equal(test, control) + test = np.ndfromtxt(StringIO.StringIO(dstr,), + delimiter=";", dtype=float, converters={0:float}) + control = np.array([2009., 23., 46],) + assert_equal(test, control) + + + def test_dtype_with_object(self): + "Test using an explicit dtype with an object" + from datetime import date + import time + data = """ + 1; 2001-01-01 + 2; 2002-01-31 + """ + ndtype = [('idx', int), ('code', np.object)] + func = lambda s: strptime(s.strip(), "%Y-%m-%d") + converters = {1: func} + test = np.genfromtxt(StringIO.StringIO(data), delimiter=";", dtype=ndtype, + converters=converters) + control = np.array([(1, datetime(2001,1,1)), (2, datetime(2002,1,31))], + dtype=ndtype) + assert_equal(test, control) + # + ndtype = [('nest', [('idx', int), ('code', np.object)])] + try: + test = np.genfromtxt(StringIO.StringIO(data), delimiter=";", + dtype=ndtype, converters=converters) + except NotImplementedError: + pass + else: + errmsg = "Nested dtype involving objects should be 
supported." + raise AssertionError(errmsg) + + + def test_userconverters_with_explicit_dtype(self): + "Test user_converters w/ explicit (standard) dtype" + data = StringIO.StringIO('skip,skip,2001-01-01,1.0,skip') + test = np.genfromtxt(data, delimiter=",", names=None, dtype=float, + usecols=(2, 3), converters={2: str}) + control = np.array([('2001-01-01', 1.)], + dtype=[('', '|S10'), ('', float)]) + assert_equal(test, control) + + + def test_spacedelimiter(self): + "Test space delimiter" + data = StringIO.StringIO("1 2 3 4 5\n6 7 8 9 10") + test = np.ndfromtxt(data) + control = np.array([[ 1., 2., 3., 4., 5.], + [ 6., 7., 8., 9.,10.]]) + assert_equal(test, control) + + + def test_missing(self): + data = StringIO.StringIO('1,2,3,,5\n') + test = np.ndfromtxt(data, dtype=int, delimiter=',', \ + converters={3:lambda s: int(s or -999)}) + control = np.array([1, 2, 3, -999, 5], int) + assert_equal(test, control) + + + def test_usecols(self): + "Test the selection of columns" + # Select 1 column + control = np.array( [[1, 2], [3, 4]], float) + data = StringIO.StringIO() + np.savetxt(data, control) + data.seek(0) + test = np.ndfromtxt(data, dtype=float, usecols=(1,)) + assert_equal(test, control[:, 1]) + # + control = np.array( [[1, 2, 3], [3, 4, 5]], float) + data = StringIO.StringIO() + np.savetxt(data, control) + data.seek(0) + test = np.ndfromtxt(data, dtype=float, usecols=(1, 2)) + assert_equal(test, control[:, 1:]) + # Testing with arrays instead of tuples. + data.seek(0) + test = np.ndfromtxt(data, dtype=float, usecols=np.array([1, 2])) + assert_equal(test, control[:, 1:]) + # Checking with dtypes defined converters. 
+ data = StringIO.StringIO("""JOE 70.1 25.3\nBOB 60.5 27.9""") + names = ['stid', 'temp'] + dtypes = ['S4', 'f8'] + test = np.ndfromtxt(data, usecols=(0, 2), dtype=zip(names, dtypes)) + assert_equal(test['stid'], ["JOE", "BOB"]) + assert_equal(test['temp'], [25.3, 27.9]) + + + def test_empty_file(self): + "Test that an empty file raises the proper exception" + data = StringIO.StringIO() + assert_raises(IOError, np.ndfromtxt, data) + + + def test_fancy_dtype_alt(self): + "Check that a nested dtype isn't MIA" + data = StringIO.StringIO('1,2,3.0\n4,5,6.0\n') + fancydtype = np.dtype([('x', int), ('y', [('t', int), ('s', float)])]) + test = np.mafromtxt(data, dtype=fancydtype, delimiter=',') + control = ma.array([(1,(2,3.0)),(4,(5,6.0))], dtype=fancydtype) + assert_equal(test, control) + + + def test_withmissing(self): + data = StringIO.StringIO('A,B\n0,1\n2,N/A') + test = np.mafromtxt(data, dtype=None, delimiter=',', missing='N/A', + names=True) + control = ma.array([(0, 1), (2, -1)], + mask=[(False, False), (False, True)], + dtype=[('A', np.int), ('B', np.int)]) + assert_equal(test, control) + assert_equal(test.mask, control.mask) + # + data.seek(0) + test = np.mafromtxt(data, delimiter=',', missing='N/A', names=True) + control = ma.array([(0, 1), (2, -1)], + mask=[[False, False], [False, True]],) + assert_equal(test, control) + assert_equal(test.mask, control.mask) + + + def test_user_missing_values(self): + datastr ="A, B, C\n0, 0., 0j\n1, N/A, 1j\n-9, 2.2, N/A\n3, -99, 3j" + data = StringIO.StringIO(datastr) + basekwargs = dict(dtype=None, delimiter=',', names=True, missing='N/A') + mdtype = [('A', int), ('B', float), ('C', complex)] + # + test = np.mafromtxt(data, **basekwargs) + control = ma.array([( 0, 0.0, 0j), (1, -999, 1j), + ( -9, 2.2, -999j), (3, -99, 3j)], + mask=[(0, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)], + dtype=mdtype) + assert_equal(test, control) + # + data.seek(0) + test = np.mafromtxt(data, + missing_values={0:-9, 1:-99, 2:-999j}, **basekwargs) + 
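The masked-value tests above use the older `np.mafromtxt` entry point and its `missing=` keyword. A minimal standalone sketch of the same behavior, assuming the modern `np.genfromtxt` spelling (`missing_values=` plus `usemask=True`) and `io.StringIO` rather than the Python 2 `StringIO` module:

```python
import io
import numpy as np

# Cells matching missing_values come back masked when usemask=True.
# (Assumes the modern genfromtxt keywords; the tests above use the
# older mafromtxt/missing spelling.)
data = io.StringIO("A,B\n0,1\n2,N/A")
arr = np.genfromtxt(data, dtype=int, delimiter=",", names=True,
                    missing_values="N/A", usemask=True)

print(arr.dtype.names)            # field names taken from the header row
print(arr["B"].mask.tolist())     # the N/A cell is masked
```

The data under a masked cell is the per-type filling value (`-1` for integers by default), which is why the controls above compare against `-1` with an explicit mask.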
control = ma.array([( 0, 0.0, 0j), (1, -999, 1j), + ( -9, 2.2, -999j), (3, -99, 3j)], + mask=[(0, 0, 0), (0, 1, 0), (1, 0, 1), (0, 1, 0)], + dtype=mdtype) + assert_equal(test, control) + # + data.seek(0) + test = np.mafromtxt(data, + missing_values={0:-9, 'B':-99, 'C':-999j}, + **basekwargs) + control = ma.array([( 0, 0.0, 0j), (1, -999, 1j), + ( -9, 2.2, -999j), (3, -99, 3j)], + mask=[(0, 0, 0), (0, 1, 0), (1, 0, 1), (0, 1, 0)], + dtype=mdtype) + assert_equal(test, control) + + + def test_withmissing_float(self): + data = StringIO.StringIO('A,B\n0,1.5\n2,-999.00') + test = np.mafromtxt(data, dtype=None, delimiter=',', missing='-999.0', + names=True,) + control = ma.array([(0, 1.5), (2, -1.)], + mask=[(False, False), (False, True)], + dtype=[('A', np.int), ('B', np.float)]) + assert_equal(test, control) + assert_equal(test.mask, control.mask) + + + def test_with_masked_column_uniform(self): + "Test masked column" + data = StringIO.StringIO('1 2 3\n4 5 6\n') + test = np.genfromtxt(data, missing='2,5', dtype=None, usemask=True) + control = ma.array([[1, 2, 3], [4, 5, 6]], mask=[[0, 1, 0],[0, 1, 0]]) + assert_equal(test, control) + + def test_with_masked_column_various(self): + "Test masked column" + data = StringIO.StringIO('True 2 3\nFalse 5 6\n') + test = np.genfromtxt(data, missing='2,5', dtype=None, usemask=True) + control = ma.array([(1, 2, 3), (0, 5, 6)], + mask=[(0, 1, 0),(0, 1, 0)], + dtype=[('f0', bool), ('f1', bool), ('f2', int)]) + assert_equal(test, control) + + + def test_recfromtxt(self): + # + data = StringIO.StringIO('A,B\n0,1\n2,3') + test = np.recfromtxt(data, delimiter=',', missing='N/A', names=True) + control = np.array([(0, 1), (2, 3)], + dtype=[('A', np.int), ('B', np.int)]) + self.failUnless(isinstance(test, np.recarray)) + assert_equal(test, control) + # + data = StringIO.StringIO('A,B\n0,1\n2,N/A') + test = np.recfromtxt(data, dtype=None, delimiter=',', missing='N/A', + names=True, usemask=True) + control = ma.array([(0, 1), (2, -1)], + 
mask=[(False, False), (False, True)], + dtype=[('A', np.int), ('B', np.int)]) + assert_equal(test, control) + assert_equal(test.mask, control.mask) + assert_equal(test.A, [0, 2]) + + + def test_recfromcsv(self): + # + data = StringIO.StringIO('A,B\n0,1\n2,3') + test = np.recfromcsv(data, missing='N/A', + names=True, case_sensitive=True) + control = np.array([(0, 1), (2, 3)], + dtype=[('A', np.int), ('B', np.int)]) + self.failUnless(isinstance(test, np.recarray)) + assert_equal(test, control) + # + data = StringIO.StringIO('A,B\n0,1\n2,N/A') + test = np.recfromcsv(data, dtype=None, missing='N/A', + names=True, case_sensitive=True, usemask=True) + control = ma.array([(0, 1), (2, -1)], + mask=[(False, False), (False, True)], + dtype=[('A', np.int), ('B', np.int)]) + assert_equal(test, control) + assert_equal(test.mask, control.mask) + assert_equal(test.A, [0, 2]) + # + data = StringIO.StringIO('A,B\n0,1\n2,3') + test = np.recfromcsv(data, missing='N/A',) + control = np.array([(0, 1), (2, 3)], + dtype=[('a', np.int), ('b', np.int)]) + self.failUnless(isinstance(test, np.recarray)) + assert_equal(test, control) + + + + if __name__ == "__main__": run_module_suite() diff --git a/numpy/lib/tests/test_recfunctions.py b/numpy/lib/tests/test_recfunctions.py new file mode 100644 index 000000000..424d60ae4 --- /dev/null +++ b/numpy/lib/tests/test_recfunctions.py @@ -0,0 +1,606 @@ +import sys + +import numpy as np +import numpy.ma as ma +from numpy.ma.testutils import * + +from numpy.ma.mrecords import MaskedRecords + +from numpy.lib.recfunctions import * +get_names = np.lib.recfunctions.get_names +get_names_flat = np.lib.recfunctions.get_names_flat +zip_descr = np.lib.recfunctions.zip_descr + +class TestRecFunctions(TestCase): + """ + Misc tests + """ + # + def setUp(self): + x = np.array([1, 2,]) + y = np.array([10, 20, 30]) + z = np.array([('A', 1.), ('B', 2.)], + dtype=[('A', '|S3'), ('B', float)]) + w = np.array([(1, (2, 3.0)), (4, (5, 6.0))], + dtype=[('a', int), ('b', 
[('ba', float), ('bb', int)])]) + self.data = (w, x, y, z) + + + def test_zip_descr(self): + "Test zip_descr" + (w, x, y, z) = self.data + # Std array + test = zip_descr((x, x), flatten=True) + assert_equal(test, + np.dtype([('', int), ('', int)])) + test = zip_descr((x, x), flatten=False) + assert_equal(test, + np.dtype([('', int), ('', int)])) + # Std & flexible-dtype + test = zip_descr((x, z), flatten=True) + assert_equal(test, + np.dtype([('', int), ('A', '|S3'), ('B', float)])) + test = zip_descr((x, z), flatten=False) + assert_equal(test, + np.dtype([('', int), + ('', [('A', '|S3'), ('B', float)])])) + # Standard & nested dtype + test = zip_descr((x, w), flatten=True) + assert_equal(test, + np.dtype([('', int), + ('a', int), + ('ba', float), ('bb', int)])) + test = zip_descr((x, w), flatten=False) + assert_equal(test, + np.dtype([('', int), + ('', [('a', int), + ('b', [('ba', float), ('bb', int)])])])) + + + def test_drop_fields(self): + "Test drop_fields" + a = np.array([(1, (2, 3.0)), (4, (5, 6.0))], + dtype=[('a', int), ('b', [('ba', float), ('bb', int)])]) + # A basic field + test = drop_fields(a, 'a') + control = np.array([((2, 3.0),), ((5, 6.0),)], + dtype=[('b', [('ba', float), ('bb', int)])]) + assert_equal(test, control) + # Another basic field (but nesting two fields) + test = drop_fields(a, 'b') + control = np.array([(1,), (4,)], dtype=[('a', int)]) + assert_equal(test, control) + # A nested sub-field + test = drop_fields(a, ['ba',]) + control = np.array([(1, (3.0,)), (4, (6.0,))], + dtype=[('a', int), ('b', [('bb', int)])]) + assert_equal(test, control) + # All the nested sub-field from a field: zap that field + test = drop_fields(a, ['ba', 'bb']) + control = np.array([(1,), (4,)], dtype=[('a', int)]) + assert_equal(test, control) + # + test = drop_fields(a, ['a', 'b']) + assert(test is None) + + + def test_rename_fields(self): + "Tests rename fields" + a = np.array([(1, (2, [3.0, 30.])), (4, (5, [6.0, 60.]))], + dtype=[('a', int), + ('b', [('ba', 
float), ('bb', (float, 2))])]) + test = rename_fields(a, {'a':'A', 'bb':'BB'}) + newdtype = [('A', int), ('b', [('ba', float), ('BB', (float, 2))])] + control = a.view(newdtype) + assert_equal(test.dtype, newdtype) + assert_equal(test, control) + + + def test_get_names(self): + "Tests get_names" + ndtype = np.dtype([('A', '|S3'), ('B', float)]) + test = get_names(ndtype) + assert_equal(test, ('A', 'B')) + # + ndtype = np.dtype([('a', int), ('b', [('ba', float), ('bb', int)])]) + test = get_names(ndtype) + assert_equal(test, ('a', ('b', ('ba', 'bb')))) + + + def test_get_names_flat(self): + "Test get_names_flat" + ndtype = np.dtype([('A', '|S3'), ('B', float)]) + test = get_names_flat(ndtype) + assert_equal(test, ('A', 'B')) + # + ndtype = np.dtype([('a', int), ('b', [('ba', float), ('bb', int)])]) + test = get_names_flat(ndtype) + assert_equal(test, ('a', 'b', 'ba', 'bb')) + + + def test_get_fieldstructure(self): + "Test get_fieldstructure" + # No nested fields + ndtype = np.dtype([('A', '|S3'), ('B', float)]) + test = get_fieldstructure(ndtype) + assert_equal(test, {'A':[], 'B':[]}) + # One 1-nested field + ndtype = np.dtype([('A', int), ('B', [('BA', float), ('BB', '|S1')])]) + test = get_fieldstructure(ndtype) + assert_equal(test, {'A': [], 'B': [], 'BA':['B',], 'BB':['B']}) + # One 2-nested fields + ndtype = np.dtype([('A', int), + ('B', [('BA', int), + ('BB', [('BBA', int), ('BBB', int)])])]) + test = get_fieldstructure(ndtype) + control = {'A': [], 'B': [], 'BA': ['B'], 'BB': ['B'], + 'BBA': ['B', 'BB'], 'BBB': ['B', 'BB']} + assert_equal(test, control) + + + @np.testing.dec.knownfailureif(sys.platform=='win32', "Fail on Win32") + def test_find_duplicates(self): + "Test find_duplicates" + a = ma.array([(2, (2., 'B')), (1, (2., 'B')), (2, (2., 'B')), + (1, (1., 'B')), (2, (2., 'B')), (2, (2., 'C'))], + mask=[(0, (0, 0)), (0, (0, 0)), (0, (0, 0)), + (0, (0, 0)), (1, (0, 0)), (0, (1, 0))], + dtype=[('A', int), ('B', [('BA', float), ('BB', '|S1')])]) + # + test = 
find_duplicates(a, ignoremask=False, return_index=True) + control = [0, 2] + assert_equal(test[-1], control) + assert_equal(test[0], a[control]) + # + test = find_duplicates(a, key='A', return_index=True) + control = [1, 3, 0, 2, 5] + assert_equal(test[-1], control) + assert_equal(test[0], a[control]) + # + test = find_duplicates(a, key='B', return_index=True) + control = [0, 1, 2, 4] + assert_equal(test[-1], control) + assert_equal(test[0], a[control]) + # + test = find_duplicates(a, key='BA', return_index=True) + control = [0, 1, 2, 4] + assert_equal(test[-1], control) + assert_equal(test[0], a[control]) + # + test = find_duplicates(a, key='BB', return_index=True) + control = [0, 1, 2, 3, 4] + assert_equal(test[-1], control) + assert_equal(test[0], a[control]) + + + @np.testing.dec.knownfailureif(sys.platform=='win32', "Fail on Win32") + def test_find_duplicates_ignoremask(self): + "Test the ignoremask option of find_duplicates" + ndtype = [('a', int)] + a = ma.array([1, 1, 1, 2, 2, 3, 3], + mask=[0, 0, 1, 0, 0, 0, 1]).view(ndtype) + test = find_duplicates(a, ignoremask=True, return_index=True) + control = [0, 1, 3, 4] + assert_equal(test[-1], control) + assert_equal(test[0], a[control]) + # + test = find_duplicates(a, ignoremask=False, return_index=True) + control = [0, 1, 3, 4, 6, 2] + try: + assert_equal(test[-1], control) + except AssertionError: + assert_equal(test[-1], [0, 1, 3, 4, 2, 6]) + assert_equal(test[0], a[control]) + + +class TestRecursiveFillFields(TestCase): + """ + Test recursive_fill_fields. 
+ """ + def test_simple_flexible(self): + "Test recursive_fill_fields on flexible-array" + a = np.array([(1, 10.), (2, 20.)], dtype=[('A', int), ('B', float)]) + b = np.zeros((3,), dtype=a.dtype) + test = recursive_fill_fields(a, b) + control = np.array([(1, 10.), (2, 20.), (0, 0.)], + dtype=[('A', int), ('B', float)]) + assert_equal(test, control) + # + def test_masked_flexible(self): + "Test recursive_fill_fields on masked flexible-array" + a = ma.array([(1, 10.), (2, 20.)], mask=[(0, 1), (1, 0)], + dtype=[('A', int), ('B', float)]) + b = ma.zeros((3,), dtype=a.dtype) + test = recursive_fill_fields(a, b) + control = ma.array([(1, 10.), (2, 20.), (0, 0.)], + mask=[(0, 1), (1, 0), (0, 0)], + dtype=[('A', int), ('B', float)]) + assert_equal(test, control) + # + + + +class TestMergeArrays(TestCase): + """ + Test merge_arrays + """ + def setUp(self): + x = np.array([1, 2,]) + y = np.array([10, 20, 30]) + z = np.array([('A', 1.), ('B', 2.)], dtype=[('A', '|S3'), ('B', float)]) + w = np.array([(1, (2, 3.0)), (4, (5, 6.0))], + dtype=[('a', int), ('b', [('ba', float), ('bb', int)])]) + self.data = (w, x, y, z) + # + def test_solo(self): + "Test merge_arrays on a single array." 
+ (_, x, _, z) = self.data + # + test = merge_arrays(x) + control = np.array([(1,), (2,)], dtype=[('f0', int)]) + assert_equal(test, control) + test = merge_arrays((x,)) + assert_equal(test, control) + # + test = merge_arrays(z, flatten=False) + assert_equal(test, z) + test = merge_arrays(z, flatten=True) + assert_equal(test, z) + # + def test_solo_w_flatten(self): + "Test merge_arrays on a single array w & w/o flattening" + w = self.data[0] + test = merge_arrays(w, flatten=False) + assert_equal(test, w) + # + test = merge_arrays(w, flatten=True) + control = np.array([(1, 2, 3.0), (4, 5, 6.0)], + dtype=[('a', int), ('ba', float), ('bb', int)]) + assert_equal(test, control) + # + def test_standard(self): + "Test standard & standard" + # Test merge arrays + (_, x, y, _) = self.data + test = merge_arrays((x, y), usemask=False) + control = np.array([(1, 10), (2, 20), (-1, 30)], + dtype=[('f0', int), ('f1', int)]) + assert_equal(test, control) + # + test = merge_arrays((x, y), usemask=True) + control = ma.array([(1, 10), (2, 20), (-1, 30)], + mask=[(0, 0), (0, 0), (1, 0)], + dtype=[('f0', int), ('f1', int)]) + assert_equal(test, control) + assert_equal(test.mask, control.mask) + # + def test_flatten(self): + "Test standard & flexible" + (_, x, _, z) = self.data + test = merge_arrays((x, z), flatten=True) + control = np.array([(1, 'A', 1.), (2, 'B', 2.)], + dtype=[('f0', int), ('A', '|S3'), ('B', float)]) + assert_equal(test, control) + # + test = merge_arrays((x, z), flatten=False) + control = np.array([(1, ('A', 1.)), (2, ('B', 2.))], + dtype=[('f0', int), + ('f1', [('A', '|S3'), ('B', float)])]) + assert_equal(test, control) + # + def test_flatten_wflexible(self): + "Test flatten standard & nested" + (w, x, _, _) = self.data + test = merge_arrays((x, w), flatten=True) + control = np.array([(1, 1, 2, 3.0), (2, 4, 5, 6.0)], + dtype=[('f0', int), + ('a', int), ('ba', float), ('bb', int)]) + assert_equal(test, control) + # + test = merge_arrays((x, w), flatten=False) + 
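The `merge_arrays` tests above combine arrays of unequal length field-by-field. A rough sketch under the same assumptions (names `x`/`y` are illustrative; the `usemask` default has varied across numpy versions, so it is passed explicitly):

```python
import numpy as np
from numpy.lib import recfunctions as rfn

# Two plain arrays of unequal length are merged into one structured array;
# with usemask=True the missing tail of the shorter input is masked.
x = np.array([1, 2])
y = np.array([10.0, 20.0, 30.0])
merged = rfn.merge_arrays((x, y), usemask=True)

print(merged.dtype.names)           # auto-generated names ('f0', 'f1')
print(merged["f0"].mask.tolist())   # third row of the shorter input masked
```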
controldtype = [('f0', int), + ('f1', [('a', int), + ('b', [('ba', float), ('bb', int)])])] + control = np.array([(1., (1, (2, 3.0))), (2, (4, (5, 6.0)))], + dtype=controldtype) + assert_equal(test, control) + # + def test_wmasked_arrays(self): + "Test merge_arrays masked arrays" + (_, x, _, _) = self.data + mx = ma.array([1, 2, 3], mask=[1, 0, 0]) + test = merge_arrays((x, mx), usemask=True) + control = ma.array([(1, 1), (2, 2), (-1, 3)], + mask=[(0, 1), (0, 0), (1, 0)], + dtype=[('f0', int), ('f1', int)]) + assert_equal(test, control) + test = merge_arrays((x, mx), usemask=True, asrecarray=True) + assert_equal(test, control) + assert(isinstance(test, MaskedRecords)) + # + def test_w_singlefield(self): + "Test single field" + test = merge_arrays((np.array([1, 2]).view([('a', int)]), + np.array([10., 20., 30.])),) + control = ma.array([(1, 10.), (2, 20.), (-1, 30.)], + mask=[(0, 0), (0, 0), (1, 0)], + dtype=[('a', int), ('f1', float)]) + assert_equal(test, control) + # + def test_w_shorter_flex(self): + "Test merge_arrays w/ a shorter flexndarray." 
+ z = self.data[-1] + test = merge_arrays((z, np.array([10, 20, 30]).view([('C', int)]))) + control = np.array([('A', 1., 10), ('B', 2., 20), ('-1', -1, 20)], + dtype=[('A', '|S3'), ('B', float), ('C', int)]) + + + +class TestAppendFields(TestCase): + """ + Test append_fields + """ + def setUp(self): + x = np.array([1, 2,]) + y = np.array([10, 20, 30]) + z = np.array([('A', 1.), ('B', 2.)], dtype=[('A', '|S3'), ('B', float)]) + w = np.array([(1, (2, 3.0)), (4, (5, 6.0))], + dtype=[('a', int), ('b', [('ba', float), ('bb', int)])]) + self.data = (w, x, y, z) + # + def test_append_single(self): + "Test simple case" + (_, x, _, _) = self.data + test = append_fields(x, 'A', data=[10, 20, 30]) + control = ma.array([(1, 10), (2, 20), (-1, 30)], + mask=[(0, 0), (0, 0), (1, 0)], + dtype=[('f0', int), ('A', int)],) + assert_equal(test, control) + # + def test_append_double(self): + "Test simple case" + (_, x, _, _) = self.data + test = append_fields(x, ('A', 'B'), data=[[10, 20, 30], [100, 200]]) + control = ma.array([(1, 10, 100), (2, 20, 200), (-1, 30, -1)], + mask=[(0, 0, 0), (0, 0, 0), (1, 0, 1)], + dtype=[('f0', int), ('A', int), ('B', int)],) + assert_equal(test, control) + # + def test_append_on_flex(self): + "Test append_fields on flexible type arrays" + z = self.data[-1] + test = append_fields(z, 'C', data=[10, 20, 30]) + control = ma.array([('A', 1., 10), ('B', 2., 20), (-1, -1., 30)], + mask=[(0, 0, 0), (0, 0, 0), (1, 1, 0)], + dtype=[('A', '|S3'), ('B', float), ('C', int)],) + assert_equal(test, control) + # + def test_append_on_nested(self): + "Test append_fields on nested fields" + w = self.data[0] + test = append_fields(w, 'C', data=[10, 20, 30]) + control = ma.array([(1, (2, 3.0), 10), + (4, (5, 6.0), 20), + (-1, (-1, -1.), 30)], + mask=[(0, (0, 0), 0), (0, (0, 0), 0), (1, (1, 1), 0)], + dtype=[('a', int), + ('b', [('ba', float), ('bb', int)]), + ('C', int)],) + assert_equal(test, control) + + + +class TestStackArrays(TestCase): + """ + Test stack_arrays + 
""" + def setUp(self): + x = np.array([1, 2,]) + y = np.array([10, 20, 30]) + z = np.array([('A', 1.), ('B', 2.)], dtype=[('A', '|S3'), ('B', float)]) + w = np.array([(1, (2, 3.0)), (4, (5, 6.0))], + dtype=[('a', int), ('b', [('ba', float), ('bb', int)])]) + self.data = (w, x, y, z) + # + def test_solo(self): + "Test stack_arrays on single arrays" + (_, x, _, _) = self.data + test = stack_arrays((x,)) + assert_equal(test, x) + self.failUnless(test is x) + # + test = stack_arrays(x) + assert_equal(test, x) + self.failUnless(test is x) + # + def test_unnamed_fields(self): + "Tests combinations of arrays w/o named fields" + (_, x, y, _) = self.data + # + test = stack_arrays((x, x), usemask=False) + control = np.array([1, 2, 1, 2]) + assert_equal(test, control) + # + test = stack_arrays((x, y), usemask=False) + control = np.array([1, 2, 10, 20, 30]) + assert_equal(test, control) + # + test = stack_arrays((y, x), usemask=False) + control = np.array([10, 20, 30, 1, 2]) + assert_equal(test, control) + # + def test_unnamed_and_named_fields(self): + "Test combination of arrays w/ & w/o named fields" + (_, x, _, z) = self.data + # + test = stack_arrays((x, z)) + control = ma.array([(1, -1, -1), (2, -1, -1), + (-1, 'A', 1), (-1, 'B', 2)], + mask=[(0, 1, 1), (0, 1, 1), + (1, 0, 0), (1, 0, 0)], + dtype=[('f0', int), ('A', '|S3'), ('B', float)]) + assert_equal(test, control) + assert_equal(test.mask, control.mask) + # + test = stack_arrays((z, x)) + control = ma.array([('A', 1, -1), ('B', 2, -1), + (-1, -1, 1), (-1, -1, 2),], + mask=[(0, 0, 1), (0, 0, 1), + (1, 1, 0), (1, 1, 0)], + dtype=[('A', '|S3'), ('B', float), ('f2', int)]) + assert_equal(test, control) + assert_equal(test.mask, control.mask) + # + test = stack_arrays((z, z, x)) + control = ma.array([('A', 1, -1), ('B', 2, -1), + ('A', 1, -1), ('B', 2, -1), + (-1, -1, 1), (-1, -1, 2),], + mask=[(0, 0, 1), (0, 0, 1), + (0, 0, 1), (0, 0, 1), + (1, 1, 0), (1, 1, 0)], + dtype=[('A', '|S3'), ('B', float), ('f2', int)]) + 
assert_equal(test, control) + # + def test_matching_named_fields(self): + "Test combination of arrays w/ matching field names" + (_, x, _, z) = self.data + zz = np.array([('a', 10., 100.), ('b', 20., 200.), ('c', 30., 300.)], + dtype=[('A', '|S3'), ('B', float), ('C', float)]) + test = stack_arrays((z, zz)) + control = ma.array([('A', 1, -1), ('B', 2, -1), + ('a', 10., 100.), ('b', 20., 200.), ('c', 30., 300.)], + dtype=[('A', '|S3'), ('B', float), ('C', float)], + mask=[(0, 0, 1), (0, 0, 1), + (0, 0, 0), (0, 0, 0), (0, 0, 0)]) + assert_equal(test, control) + assert_equal(test.mask, control.mask) + # + test = stack_arrays((z, zz, x)) + ndtype = [('A', '|S3'), ('B', float), ('C', float), ('f3', int)] + control = ma.array([('A', 1, -1, -1), ('B', 2, -1, -1), + ('a', 10., 100., -1), ('b', 20., 200., -1), + ('c', 30., 300., -1), + (-1, -1, -1, 1), (-1, -1, -1, 2)], + dtype=ndtype, + mask=[(0, 0, 1, 1), (0, 0, 1, 1), + (0, 0, 0, 1), (0, 0, 0, 1), (0, 0, 0, 1), + (1, 1, 1, 0), (1, 1, 1, 0)]) + assert_equal(test, control) + assert_equal(test.mask, control.mask) + + + def test_defaults(self): + "Test defaults: no exception raised if keys of defaults are not fields." 
+ (_, _, _, z) = self.data + zz = np.array([('a', 10., 100.), ('b', 20., 200.), ('c', 30., 300.)], + dtype=[('A', '|S3'), ('B', float), ('C', float)]) + defaults = {'A':'???', 'B':-999., 'C':-9999., 'D':-99999.} + test = stack_arrays((z, zz), defaults=defaults) + control = ma.array([('A', 1, -9999.), ('B', 2, -9999.), + ('a', 10., 100.), ('b', 20., 200.), ('c', 30., 300.)], + dtype=[('A', '|S3'), ('B', float), ('C', float)], + mask=[(0, 0, 1), (0, 0, 1), + (0, 0, 0), (0, 0, 0), (0, 0, 0)]) + assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + + + def test_autoconversion(self): + "Tests autoconversion" + adtype = [('A', int), ('B', bool), ('C', float)] + a = ma.array([(1, 2, 3)], mask=[(0, 1, 0)], dtype=adtype) + bdtype = [('A', int), ('B', float), ('C', float)] + b = ma.array([(4, 5, 6)], dtype=bdtype) + control = ma.array([(1, 2, 3), (4, 5, 6)], mask=[(0, 1, 0), (0, 0, 0)], + dtype=bdtype) + test = stack_arrays((a, b), autoconvert=True) + assert_equal(test, control) + assert_equal(test.mask, control.mask) + try: + test = stack_arrays((a, b), autoconvert=False) + except TypeError: + pass + else: + raise AssertionError + + + def test_checktitles(self): + "Test using titles in the field names" + adtype = [(('a', 'A'), int), (('b', 'B'), bool), (('c', 'C'), float)] + a = ma.array([(1, 2, 3)], mask=[(0, 1, 0)], dtype=adtype) + bdtype = [(('a', 'A'), int), (('b', 'B'), bool), (('c', 'C'), float)] + b = ma.array([(4, 5, 6)], dtype=bdtype) + test = stack_arrays((a, b)) + control = ma.array([(1, 2, 3), (4, 5, 6)], mask=[(0, 1, 0), (0, 0, 0)], + dtype=bdtype) + assert_equal(test, control) + assert_equal(test.mask, control.mask) + + +class TestJoinBy(TestCase): + # + def test_base(self): + "Basic test of join_by" + a = np.array(zip(np.arange(10), np.arange(50, 60), np.arange(100, 110)), + dtype=[('a', int), ('b', int), ('c', int)]) + b = np.array(zip(np.arange(5, 15), np.arange(65, 75), np.arange(100, 110)), + 
dtype=[('a', int), ('b', int), ('d', int)]) + # + test = join_by('a', a, b, jointype='inner') + control = np.array([(5, 55, 65, 105, 100), (6, 56, 66, 106, 101), + (7, 57, 67, 107, 102), (8, 58, 68, 108, 103), + (9, 59, 69, 109, 104)], + dtype=[('a', int), ('b1', int), ('b2', int), + ('c', int), ('d', int)]) + assert_equal(test, control) + # + test = join_by(('a', 'b'), a, b) + control = np.array([(5, 55, 105, 100), (6, 56, 106, 101), + (7, 57, 107, 102), (8, 58, 108, 103), + (9, 59, 109, 104)], + dtype=[('a', int), ('b', int), + ('c', int), ('d', int)]) + # + test = join_by(('a', 'b'), a, b, 'outer') + control = ma.array([( 0, 50, 100, -1), ( 1, 51, 101, -1), + ( 2, 52, 102, -1), ( 3, 53, 103, -1), + ( 4, 54, 104, -1), ( 5, 55, 105, -1), + ( 5, 65, -1, 100), ( 6, 56, 106, -1), + ( 6, 66, -1, 101), ( 7, 57, 107, -1), + ( 7, 67, -1, 102), ( 8, 58, 108, -1), + ( 8, 68, -1, 103), ( 9, 59, 109, -1), + ( 9, 69, -1, 104), (10, 70, -1, 105), + (11, 71, -1, 106), (12, 72, -1, 107), + (13, 73, -1, 108), (14, 74, -1, 109)], + mask=[( 0, 0, 0, 1), ( 0, 0, 0, 1), + ( 0, 0, 0, 1), ( 0, 0, 0, 1), + ( 0, 0, 0, 1), ( 0, 0, 0, 1), + ( 0, 0, 1, 0), ( 0, 0, 0, 1), + ( 0, 0, 1, 0), ( 0, 0, 0, 1), + ( 0, 0, 1, 0), ( 0, 0, 0, 1), + ( 0, 0, 1, 0), ( 0, 0, 0, 1), + ( 0, 0, 1, 0), ( 0, 0, 1, 0), + ( 0, 0, 1, 0), ( 0, 0, 1, 0), + ( 0, 0, 1, 0), ( 0, 0, 1, 0)], + dtype=[('a', int), ('b', int), + ('c', int), ('d', int)]) + assert_equal(test, control) + # + test = join_by(('a', 'b'), a, b, 'leftouter') + control = ma.array([(0, 50, 100, -1), (1, 51, 101, -1), + (2, 52, 102, -1), (3, 53, 103, -1), + (4, 54, 104, -1), (5, 55, 105, -1), + (6, 56, 106, -1), (7, 57, 107, -1), + (8, 58, 108, -1), (9, 59, 109, -1)], + mask=[(0, 0, 0, 1), (0, 0, 0, 1), + (0, 0, 0, 1), (0, 0, 0, 1), + (0, 0, 0, 1), (0, 0, 0, 1), + (0, 0, 0, 1), (0, 0, 0, 1), + (0, 0, 0, 1), (0, 0, 0, 1)], + dtype=[('a', int), ('b', int), ('c', int), ('d', int)]) + + + + +if __name__ == '__main__': + run_module_suite() diff --git 
a/numpy/lib/utils.py b/numpy/lib/utils.py index d749f00b6..9717a7a8f 100644 --- a/numpy/lib/utils.py +++ b/numpy/lib/utils.py @@ -699,11 +699,11 @@ def _lookfor_generate_cache(module, import_modules, regenerate): # import sub-packages if import_modules and hasattr(item, '__path__'): - for pth in item.__path__: - for mod_path in os.listdir(pth): - init_py = os.path.join(pth, mod_path, '__init__.py') + for pth in item.__path__: + for mod_path in os.listdir(pth): + init_py = os.path.join(pth, mod_path, '__init__.py') if not os.path.isfile(init_py): - continue + continue if _all is not None and mod_path not in _all: continue try: diff --git a/numpy/linalg/linalg.py b/numpy/linalg/linalg.py index 352b47549..583ab2f71 100644 --- a/numpy/linalg/linalg.py +++ b/numpy/linalg/linalg.py @@ -9,7 +9,7 @@ dgeev, zgeev, dgesdd, zgesdd, dgelsd, zgelsd, dsyevd, zheevd, dgetrf, zgetrf, dpotrf, zpotrf, dgeqrf, zgeqrf, zungqr, dorgqr. """ -__all__ = ['matrix_power', 'solve', 'tensorsolve', 'tensorinv', 'inv', +__all__ = ['matrix_power', 'solve', 'tensorsolve', 'tensorinv', 'inv', 'cholesky', 'eigvals', 'eigvalsh', 'pinv', 'det', 'svd', 'eig', 'eigh','lstsq', 'norm', 'qr', 'cond', 'LinAlgError'] diff --git a/numpy/linalg/tests/test_linalg.py b/numpy/linalg/tests/test_linalg.py index 8fd8b72ed..e8e5a8abd 100644 --- a/numpy/linalg/tests/test_linalg.py +++ b/numpy/linalg/tests/test_linalg.py @@ -202,7 +202,7 @@ class TestBoolPower(TestCase): assert_equal(matrix_power(A,2),A) -class HermitianTestCase: +class HermitianTestCase(object): def test_single(self): a = array([[1.,2.], [2.,1.]], dtype=single) self.do(a) diff --git a/numpy/ma/core.py b/numpy/ma/core.py index a3cda941d..e5ae9598f 100644 --- a/numpy/ma/core.py +++ b/numpy/ma/core.py @@ -1,18 +1,22 @@ # pylint: disable-msg=E1002 -"""MA: a facility for dealing with missing observations -MA is generally used as a numpy.array look-alike. -by Paul F. Dubois. +""" +numpy.ma : a package to handle missing or invalid values. 
+ +This package was initially written for numarray by Paul F. Dubois +at Lawrence Livermore National Laboratory. +In 2006, the package was completely rewritten by Pierre Gerard-Marchant +(University of Georgia) to make the MaskedArray class a subclass of ndarray, +and to improve support of structured arrays. + Copyright 1999, 2000, 2001 Regents of the University of California. Released for unlimited redistribution. -Adapted for numpy_core 2005 by Travis Oliphant and -(mainly) Paul Dubois. - -Subclassing of the base ndarray 2006 by Pierre Gerard-Marchant. -pgmdevlist_AT_gmail_DOT_com -Improvements suggested by Reggie Dugard (reggie_AT_merfinllc_DOT_com) +* Adapted for numpy_core 2005 by Travis Oliphant and (mainly) Paul Dubois. +* Subclassing of the base ndarray 2006 by Pierre Gerard-Marchant + (pgmdevlist_AT_gmail_DOT_com) +* Improvements suggested by Reggie Dugard (reggie_AT_merfinllc_DOT_com) -:author: Pierre Gerard-Marchant +.. moduleauthor:: Pierre Gerard-Marchant """ @@ -33,7 +37,8 @@ __all__ = ['MAError', 'MaskError', 'MaskType', 'MaskedArray', 'default_fill_value', 'diag', 'diagonal', 'divide', 'dump', 'dumps', 'empty', 'empty_like', 'equal', 'exp', 'expand_dims', 'fabs', 'flatten_mask', 'fmod', 'filled', 'floor', 'floor_divide', - 'fix_invalid', 'frombuffer', 'fromfunction', + 'fix_invalid', 'flatten_structured_array', 'frombuffer', 'fromflex', + 'fromfunction', 'getdata','getmask', 'getmaskarray', 'greater', 'greater_equal', 'harden_mask', 'hypot', 'identity', 'ids', 'indices', 'inner', 'innerproduct', @@ -54,7 +59,7 @@ __all__ = ['MAError', 'MaskError', 'MaskType', 'MaskedArray', 'rank', 'ravel', 'remainder', 'repeat', 'reshape', 'resize', 'right_shift', 'round_', 'round', 'set_fill_value', 'shape', 'sin', 'sinh', 'size', 'sometrue', - 'sort', 'soften_mask', 'sqrt', 'squeeze', 'std', 'subtract', 'sum', + 'sort', 'soften_mask', 'sqrt', 'squeeze', 'std', 'subtract', 'sum', 'swapaxes', 'take', 'tan', 'tanh', 'trace', 'transpose', 'true_divide', 'var', 
'where', @@ -152,7 +157,7 @@ def default_fill_value(obj): """ if hasattr(obj,'dtype'): - defval = default_filler[obj.dtype.kind] + defval = _check_fill_value(None, obj.dtype) elif isinstance(obj, np.dtype): if obj.subdtype: defval = default_filler[obj.subdtype[0].kind] @@ -170,6 +175,18 @@ def default_fill_value(obj): defval = default_filler['O'] return defval + +def _recursive_extremum_fill_value(ndtype, extremum): + names = ndtype.names + if names: + deflist = [] + for name in names: + fval = _recursive_extremum_fill_value(ndtype[name], extremum) + deflist.append(fval) + return tuple(deflist) + return extremum[ndtype] + + def minimum_fill_value(obj): """ Calculate the default fill value suitable for taking the minimum of ``obj``. @@ -177,11 +194,7 @@ def minimum_fill_value(obj): """ errmsg = "Unsuitable type for calculating minimum." if hasattr(obj, 'dtype'): - objtype = obj.dtype - filler = min_filler[objtype] - if filler is None: - raise TypeError(errmsg) - return filler + return _recursive_extremum_fill_value(obj.dtype, min_filler) elif isinstance(obj, float): return min_filler[ntypes.typeDict['float_']] elif isinstance(obj, int): @@ -193,6 +206,7 @@ def minimum_fill_value(obj): else: raise TypeError(errmsg) + def maximum_fill_value(obj): """ Calculate the default fill value suitable for taking the maximum of ``obj``. @@ -200,11 +214,7 @@ def maximum_fill_value(obj): """ errmsg = "Unsuitable type for calculating maximum." 
if hasattr(obj, 'dtype'): - objtype = obj.dtype - filler = max_filler[objtype] - if filler is None: - raise TypeError(errmsg) - return filler + return _recursive_extremum_fill_value(obj.dtype, max_filler) elif isinstance(obj, float): return max_filler[ntypes.typeDict['float_']] elif isinstance(obj, int): @@ -217,6 +227,28 @@ def maximum_fill_value(obj): raise TypeError(errmsg) +def _recursive_set_default_fill_value(dtypedescr): + deflist = [] + for currentdescr in dtypedescr: + currenttype = currentdescr[1] + if isinstance(currenttype, list): + deflist.append(tuple(_recursive_set_default_fill_value(currenttype))) + else: + deflist.append(default_fill_value(np.dtype(currenttype))) + return tuple(deflist) + +def _recursive_set_fill_value(fillvalue, dtypedescr): + fillvalue = np.resize(fillvalue, len(dtypedescr)) + output_value = [] + for (fval, descr) in zip(fillvalue, dtypedescr): + cdtype = descr[1] + if isinstance(cdtype, list): + output_value.append(tuple(_recursive_set_fill_value(fval, cdtype))) + else: + output_value.append(np.array(fval, dtype=cdtype).item()) + return tuple(output_value) + + def _check_fill_value(fill_value, ndtype): """ Private function validating the given `fill_value` for the given dtype. 
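The new `_recursive_extremum_fill_value` helper lets `minimum_fill_value` and `maximum_fill_value` walk structured dtypes field by field, returning a per-field tuple of fillers instead of failing on a flexible dtype. A minimal sketch of the intended caller-visible behaviour (illustrative, assuming a numpy with this change applied):

```python
import numpy as np
import numpy.ma as ma

# For a plain float array, the "minimum" filler is +inf (so masked
# entries never win a minimum) and the "maximum" filler is -inf.
a = ma.array([1.0, 2.0, 3.0], mask=[0, 1, 0])
min_fv = ma.minimum_fill_value(a)
max_fv = ma.maximum_fill_value(a)

# With the recursive helper, a structured dtype yields one filler
# per field instead of raising TypeError.
dt = np.dtype([('x', np.float64), ('y', np.float64)])
b = ma.array([(1.0, 2.0)], dtype=dt)
min_struct = ma.minimum_fill_value(b)
```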
@@ -233,10 +265,9 @@ def _check_fill_value(fill_value, ndtype): fields = ndtype.fields if fill_value is None: if fields: - fdtype = [(_[0], _[1]) for _ in ndtype.descr] - fill_value = np.array(tuple([default_fill_value(fields[n][0]) - for n in ndtype.names]), - dtype=fdtype) + descr = ndtype.descr + fill_value = np.array(_recursive_set_default_fill_value(descr), + dtype=ndtype,) else: fill_value = default_fill_value(ndtype) elif fields: @@ -248,10 +279,9 @@ def _check_fill_value(fill_value, ndtype): err_msg = "Unable to transform %s to dtype %s" raise ValueError(err_msg % (fill_value, fdtype)) else: - fval = np.resize(fill_value, len(ndtype.descr)) - fill_value = [np.asarray(f).astype(desc[1]).item() - for (f, desc) in zip(fval, ndtype.descr)] - fill_value = np.array(tuple(fill_value), copy=False, dtype=fdtype) + descr = ndtype.descr + fill_value = np.array(_recursive_set_fill_value(fill_value, descr), + dtype=ndtype) else: if isinstance(fill_value, basestring) and (ndtype.char not in 'SV'): fill_value = default_fill_value(ndtype) @@ -315,7 +345,7 @@ def common_fill_value(a, b): def filled(a, fill_value = None): """ Return `a` as an array where masked data have been replaced by `value`. - + If `a` is not a MaskedArray, `a` itself is returned. If `a` is a MaskedArray and `fill_value` is None, `fill_value` is set to `a.fill_value`. @@ -367,7 +397,7 @@ def get_masked_subclass(*arrays): return rcls #####-------------------------------------------------------------------------- -def get_data(a, subok=True): +def getdata(a, subok=True): """ Return the `_data` part of `a` if `a` is a MaskedArray, or `a` itself. @@ -384,8 +414,8 @@ def get_data(a, subok=True): if not subok: return data.view(ndarray) return data +get_data = getdata -getdata = get_data def fix_invalid(a, mask=nomask, copy=True, fill_value=None): """ @@ -535,17 +565,20 @@ class _MaskedUnaryOperation: # ... but np.putmask looks more efficient, despite the copy. 
np.putmask(d1, dm, self.fill) # Take care of the masked singletong first ... - if not m.ndim and m: + if (not m.ndim) and m: return masked - # Get the result class ....................... - if isinstance(a, MaskedArray): - subtype = type(a) + elif m is nomask: + result = self.f(d1, *args, **kwargs) else: - subtype = MaskedArray - # Get the result as a view of the subtype ... - result = self.f(d1, *args, **kwargs).view(subtype) - # Fix the mask if we don't have a scalar - if result.ndim > 0: + result = np.where(m, d1, self.f(d1, *args, **kwargs)) + # If result is not a scalar + if result.ndim: + # Get the result subclass: + if isinstance(a, MaskedArray): + subtype = type(a) + else: + subtype = MaskedArray + result = result.view(subtype) result._mask = m result._update_from(a) return result @@ -583,20 +616,50 @@ class _MaskedBinaryOperation: def __call__ (self, a, b, *args, **kwargs): "Execute the call behavior." - m = mask_or(getmask(a), getmask(b)) - (d1, d2) = (get_data(a), get_data(b)) - result = self.f(d1, d2, *args, **kwargs).view(get_masked_subclass(a, b)) - if len(result.shape): - if m is not nomask: - result._mask = make_mask_none(result.shape) - result._mask.flat = m + m = mask_or(getmask(a), getmask(b), shrink=False) + (da, db) = (getdata(a), getdata(b)) + # Easy case: there's no mask... 
+ if m is nomask: + result = self.f(da, db, *args, **kwargs) + # There are some masked elements: run only on the unmasked + else: + result = np.where(m, da, self.f(da, db, *args, **kwargs)) + # Transforms to a (subclass of) MaskedArray if we don't have a scalar + if result.shape: + result = result.view(get_masked_subclass(a, b)) + # If we have a mask, make sure it's broadcasted properly + if m.any(): + result._mask = mask_or(getmaskarray(a), getmaskarray(b)) + # If some initial masks where not shrunk, don't shrink the result + elif m.shape: + result._mask = make_mask_none(result.shape, result.dtype) if isinstance(a, MaskedArray): result._update_from(a) if isinstance(b, MaskedArray): result._update_from(b) + # ... or return masked if we have a scalar and the common mask is True elif m: return masked return result +# +# result = self.f(d1, d2, *args, **kwargs).view(get_masked_subclass(a, b)) +# if len(result.shape): +# if m is not nomask: +# result._mask = make_mask_none(result.shape) +# result._mask.flat = m +# #!!!!! +# # Force m to be at least 1D +# m.shape = m.shape or (1,) +# print "Resetting data" +# result.data[m].flat = d1.flat +# #!!!!! +# if isinstance(a, MaskedArray): +# result._update_from(a) +# if isinstance(b, MaskedArray): +# result._update_from(b) +# elif m: +# return masked +# return result def reduce(self, target, axis=0, dtype=None): """Reduce `target` along the given `axis`.""" @@ -639,11 +702,13 @@ class _MaskedBinaryOperation: m = umath.logical_or.outer(ma, mb) if (not m.ndim) and m: return masked - rcls = get_masked_subclass(a, b) - # We could fill the arguments first, butis it useful ? 
- # d = self.f.outer(filled(a, self.fillx), filled(b, self.filly)).view(rcls) - d = self.f.outer(getdata(a), getdata(b)).view(rcls) - if d.ndim > 0: + (da, db) = (getdata(a), getdata(b)) + if m is nomask: + d = self.f.outer(da, db) + else: + d = np.where(m, da, self.f.outer(da, db)) + if d.shape: + d = d.view(get_masked_subclass(a, b)) d._mask = m return d @@ -655,7 +720,7 @@ class _MaskedBinaryOperation: if isinstance(target, MaskedArray): tclass = type(target) else: - tclass = masked_array + tclass = MaskedArray t = filled(target, self.filly) return self.f.accumulate(t, axis).view(tclass) @@ -664,7 +729,8 @@ class _MaskedBinaryOperation: #.............................................................................. class _DomainedBinaryOperation: - """Define binary operations that have a domain, like divide. + """ + Define binary operations that have a domain, like divide. They have no reduce, outer or accumulate. @@ -689,26 +755,36 @@ class _DomainedBinaryOperation: ufunc_domain[dbfunc] = domain ufunc_fills[dbfunc] = (fillx, filly) - def __call__(self, a, b): + def __call__(self, a, b, *args, **kwargs): "Execute the call behavior." ma = getmask(a) - mb = getmask(b) - d1 = getdata(a) - d2 = get_data(b) - t = narray(self.domain(d1, d2), copy=False) + mb = getmaskarray(b) + da = getdata(a) + db = getdata(b) + t = narray(self.domain(da, db), copy=False) if t.any(None): - mb = mask_or(mb, t) + mb = mask_or(mb, t, shrink=False) # The following line controls the domain filling - if t.size == d2.size: - d2 = np.where(t, self.filly, d2) + if t.size == db.size: + db = np.where(t, self.filly, db) else: - d2 = np.where(np.resize(t, d2.shape), self.filly, d2) - m = mask_or(ma, mb) + db = np.where(np.resize(t, db.shape), self.filly, db) + # Shrink m if a.mask was nomask, otherwise don't. 
+ m = mask_or(ma, mb, shrink=(getattr(a, '_mask', nomask) is nomask)) if (not m.ndim) and m: return masked - result = self.f(d1, d2).view(get_masked_subclass(a, b)) - if result.ndim > 0: - result._mask = m + elif (m is nomask): + result = self.f(da, db, *args, **kwargs) + else: + result = np.where(m, da, self.f(da, db, *args, **kwargs)) + if result.shape: + result = result.view(get_masked_subclass(a, b)) + # If we have a mask, make sure it's broadcasted properly + if m.any(): + result._mask = mask_or(getmaskarray(a), mb) + # If some initial masks where not shrunk, don't shrink the result + elif m.shape: + result._mask = make_mask_none(result.shape, result.dtype) if isinstance(a, MaskedArray): result._update_from(a) if isinstance(b, MaskedArray): @@ -796,36 +872,37 @@ mod = _DomainedBinaryOperation(umath.mod, _DomainSafeDivide(), 0, 1) #---- --- Mask creation functions --- #####-------------------------------------------------------------------------- +def _recursive_make_descr(datatype, newtype=bool_): + "Private function allowing recursion in make_descr." + # Do we have some name fields ? + if datatype.names: + descr = [] + for name in datatype.names: + field = datatype.fields[name] + if len(field) == 3: + # Prepend the title to the name + name = (field[-1], name) + descr.append((name, _recursive_make_descr(field[0], newtype))) + return descr + # Is this some kind of composite a la (np.float,2) + elif datatype.subdtype: + mdescr = list(datatype.subdtype) + mdescr[0] = newtype + return tuple(mdescr) + else: + return newtype + def make_mask_descr(ndtype): """Constructs a dtype description list from a given dtype. Each field is set to a bool. """ - def _make_descr(datatype): - "Private function allowing recursion." - # Do we have some name fields ? 
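For `_DomainedBinaryOperation` the net effect for callers is that entries where the second operand falls outside the domain (for example, division by zero) come back masked rather than raising or producing infs, and the domain filler keeps the underlying computation well defined. For instance:

```python
import numpy as np
import numpy.ma as ma

num = ma.array([1.0, 2.0, 3.0])
den = ma.array([2.0, 0.0, 4.0])
q = num / den          # equivalent to ma.divide(num, den)
# q[1] is masked: 2.0 / 0.0 is outside the division domain.
```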
- if datatype.names: - descr = [] - for name in datatype.names: - field = datatype.fields[name] - if len(field) == 3: - # Prepend the title to the name - name = (field[-1], name) - descr.append((name, _make_descr(field[0]))) - return descr - # Is this some kind of composite a la (np.float,2) - elif datatype.subdtype: - mdescr = list(datatype.subdtype) - mdescr[0] = np.dtype(bool) - return tuple(mdescr) - else: - return np.bool # Make sure we do have a dtype if not isinstance(ndtype, np.dtype): ndtype = np.dtype(ndtype) - return np.dtype(_make_descr(ndtype)) + return np.dtype(_recursive_make_descr(ndtype, np.bool)) -def get_mask(a): +def getmask(a): """Return the mask of a, if any, or nomask. To get a full array of booleans of the same shape as a, use @@ -833,7 +910,7 @@ def get_mask(a): """ return getattr(a, '_mask', nomask) -getmask = get_mask +get_mask = getmask def getmaskarray(arr): """Return the mask of arr, if any, or a boolean array of the shape @@ -952,7 +1029,17 @@ def mask_or (m1, m2, copy=False, shrink=True): ValueError If m1 and m2 have different flexible dtypes. 
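The recursion in `make_mask_descr` is factored out into the module-level `_recursive_make_descr` so it can be reused with other leaf types (the `__str__` path above reuses it with `"|O8"`). Its behaviour on nested dtypes, for reference:

```python
import numpy as np
import numpy.ma as ma

ndtype = np.dtype([('a', np.int64),
                   ('b', [('c', np.float64), ('d', np.float64)])])
mdtype = ma.make_mask_descr(ndtype)
# Every leaf field is mapped to bool; the nesting is preserved.
```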
- """ + """ + def _recursive_mask_or(m1, m2, newmask): + names = m1.dtype.names + for name in names: + current1 = m1[name] + if current1.dtype.names: + _recursive_mask_or(current1, m2[name], newmask[name]) + else: + umath.logical_or(current1, m2[name], newmask[name]) + return + # if (m1 is nomask) or (m1 is False): dtype = getattr(m2, 'dtype', MaskType) return make_mask(m2, copy=copy, shrink=shrink, dtype=dtype) @@ -966,8 +1053,7 @@ def mask_or (m1, m2, copy=False, shrink=True): raise ValueError("Incompatible dtypes '%s'<>'%s'" % (dtype1, dtype2)) if dtype1.names: newmask = np.empty_like(m1) - for n in dtype1.names: - newmask[n] = umath.logical_or(m1[n], m2[n]) + _recursive_mask_or(m1, m2, newmask) return newmask return make_mask(umath.logical_or(m1, m2), copy=copy, shrink=shrink) @@ -976,7 +1062,7 @@ def flatten_mask(mask): """ Returns a completely flattened version of the mask, where nested fields are collapsed. - + Parameters ---------- mask : array_like @@ -999,7 +1085,7 @@ def flatten_mask(mask): >>> mask = np.array([(0, (0, 0)), (0, (0, 1))], dtype=mdtype) >>> flatten_mask(mask) array([False, False, False, False, False, True], dtype=bool) - + """ # def _flatmask(mask): @@ -1033,7 +1119,7 @@ def flatten_mask(mask): def masked_where(condition, a, copy=True): """ - Return ``a`` as an array masked where ``condition`` is True. + Return ``a`` as an array masked where ``condition`` is ``True``. Masked values of ``a`` or ``condition`` are kept. Parameters @@ -1063,34 +1149,44 @@ def masked_where(condition, a, copy=True): result._mask = cond return result + def masked_greater(x, value, copy=True): """ - Return the array `x` masked where (x > value). + Return the array `x` masked where ``(x > value)``. Any value of mask already masked is kept masked. """ return masked_where(greater(x, value), x, copy=copy) + def masked_greater_equal(x, value, copy=True): - "Shortcut to masked_where, with condition = (x >= value)." 
+ "Shortcut to masked_where, with condition ``(x >= value)``." return masked_where(greater_equal(x, value), x, copy=copy) + def masked_less(x, value, copy=True): - "Shortcut to masked_where, with condition = (x < value)." + "Shortcut to masked_where, with condition ``(x < value)``." return masked_where(less(x, value), x, copy=copy) + def masked_less_equal(x, value, copy=True): - "Shortcut to masked_where, with condition = (x <= value)." + "Shortcut to masked_where, with condition ``(x <= value)``." return masked_where(less_equal(x, value), x, copy=copy) + def masked_not_equal(x, value, copy=True): - "Shortcut to masked_where, with condition = (x != value)." + "Shortcut to masked_where, with condition ``(x != value)``." return masked_where(not_equal(x, value), x, copy=copy) + def masked_equal(x, value, copy=True): """ - Shortcut to masked_where, with condition = (x == value). For - floating point, consider ``masked_values(x, value)`` instead. + Shortcut to masked_where, with condition ``(x == value)``. + + See Also + -------- + masked_where : base function + masked_values : equivalent function for floats. """ # An alternative implementation relies on filling first: probably not needed. @@ -1100,6 +1196,7 @@ def masked_equal(x, value, copy=True): # return array(d, mask=m, copy=copy) return masked_where(equal(x, value), x, copy=copy) + def masked_inside(x, v1, v2, copy=True): """ Shortcut to masked_where, where ``condition`` is True for x inside @@ -1117,6 +1214,7 @@ def masked_inside(x, v1, v2, copy=True): condition = (xf >= v1) & (xf <= v2) return masked_where(condition, x, copy=copy) + def masked_outside(x, v1, v2, copy=True): """ Shortcut to ``masked_where``, where ``condition`` is True for x outside @@ -1134,7 +1232,7 @@ def masked_outside(x, v1, v2, copy=True): condition = (xf < v1) | (xf > v2) return masked_where(condition, x, copy=copy) -# + def masked_object(x, value, copy=True, shrink=True): """ Mask the array `x` where the data are exactly equal to value. 
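The `masked_*` shortcuts touched here are all thin wrappers around `masked_where` with a fixed condition; the docstring edits above don't change behaviour. For reference:

```python
import numpy as np
import numpy.ma as ma

a = np.arange(5)                      # [0, 1, 2, 3, 4]
g = ma.masked_greater(a, 2)           # masks 3 and 4
e = ma.masked_equal(a, 3)             # masks 3
o = ma.masked_outside(a, 1, 3)        # masks 0 and 4
```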
@@ -1163,6 +1261,7 @@ def masked_object(x, value, copy=True, shrink=True): mask = mask_or(mask, make_mask(condition, shrink=shrink)) return masked_array(x, mask=mask, copy=copy, fill_value=value) + def masked_values(x, value, rtol=1.e-5, atol=1.e-8, copy=True, shrink=True): """ Mask the array x where the data are approximately equal in @@ -1200,6 +1299,7 @@ def masked_values(x, value, rtol=1.e-5, atol=1.e-8, copy=True, shrink=True): mask = mask_or(mask, make_mask(condition, shrink=shrink)) return masked_array(xnew, mask=mask, copy=copy, fill_value=value) + def masked_invalid(a, copy=True): """ Mask the array for invalid values (NaNs or infs). @@ -1221,6 +1321,7 @@ def masked_invalid(a, copy=True): #####-------------------------------------------------------------------------- #---- --- Printing options --- #####-------------------------------------------------------------------------- + class _MaskedPrintOption: """ Handle the string used to represent missing data in a masked array. @@ -1255,10 +1356,65 @@ class _MaskedPrintOption: #if you single index into a masked location you get this object. masked_print_option = _MaskedPrintOption('--') + +def _recursive_printoption(result, mask, printopt): + """ + Puts printoptions in result where mask is True. 
+ Private function allowing for recursion + """ + names = result.dtype.names + for name in names: + (curdata, curmask) = (result[name], mask[name]) + if curdata.dtype.names: + _recursive_printoption(curdata, curmask, printopt) + else: + np.putmask(curdata, curmask, printopt) + return + +_print_templates = dict(long = """\ +masked_%(name)s(data = + %(data)s, + %(nlen)s mask = + %(mask)s, + %(nlen)s fill_value = %(fill)s) +""", + short = """\ +masked_%(name)s(data = %(data)s, + %(nlen)s mask = %(mask)s, +%(nlen)s fill_value = %(fill)s) +""", + long_flx = """\ +masked_%(name)s(data = + %(data)s, + %(nlen)s mask = + %(mask)s, +%(nlen)s fill_value = %(fill)s, + %(nlen)s dtype = %(dtype)s) +""", + short_flx = """\ +masked_%(name)s(data = %(data)s, +%(nlen)s mask = %(mask)s, +%(nlen)s fill_value = %(fill)s, +%(nlen)s dtype = %(dtype)s) +""") + #####-------------------------------------------------------------------------- #---- --- MaskedArray class --- #####-------------------------------------------------------------------------- +def _recursive_filled(a, mask, fill_value): + """ + Recursively fill `a` with `fill_value`. + Private function + """ + names = a.dtype.names + for name in names: + current = a[name] + if current.dtype.names: + _recursive_filled(current, mask[name], fill_value[name]) + else: + np.putmask(current, mask[name], fill_value[name]) + #............................................................................... class _arraymethod(object): """ @@ -1313,17 +1469,17 @@ class _arraymethod(object): elif mask is not nomask: result.__setmask__(getattr(mask, methodname)(*args, **params)) else: - if mask.ndim and mask.all(): + if mask.ndim and (not mask.dtype.names and mask.all()): return masked return result #.......................................................... -class FlatIter(object): +class MaskedIterator(object): "Define an interator." 
def __init__(self, ma): self.ma = ma - self.ma_iter = np.asarray(ma).flat - + self.dataiter = ma._data.flat + # if ma._mask is nomask: self.maskiter = None else: @@ -1332,19 +1488,79 @@ class FlatIter(object): def __iter__(self): return self + def __getitem__(self, indx): + result = self.dataiter.__getitem__(indx).view(type(self.ma)) + if self.maskiter is not None: + _mask = self.maskiter.__getitem__(indx) + _mask.shape = result.shape + result._mask = _mask + return result + ### This won't work is ravel makes a copy def __setitem__(self, index, value): - a = self.ma.ravel() - a[index] = value + self.dataiter[index] = getdata(value) + if self.maskiter is not None: + self.maskiter[index] = getmaskarray(value) +# self.ma1d[index] = value def next(self): "Returns the next element of the iterator." - d = self.ma_iter.next() + d = self.dataiter.next() if self.maskiter is not None and self.maskiter.next(): d = masked return d +def flatten_structured_array(a): + """ + Flatten a strutured array. + + The datatype of the output is the largest datatype of the (nested) fields. + + Returns + ------- + output : var + Flatten MaskedArray if the input is a MaskedArray, + standard ndarray otherwise. 
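`FlatIter` becomes `MaskedIterator`, iterating the `_data` buffer and the mask in lockstep and gaining `__getitem__`/`__setitem__` support. The visible contract, sketched against current `numpy.ma` (the diff's intermediate implementation may differ in edge cases):

```python
import numpy as np
import numpy.ma as ma

x = ma.array([[1, 2], [3, 4]], mask=[[0, 1], [0, 0]])

first = x.flat[0]                  # unmasked element, indexable
second = x.flat[1]                 # masked element yields ma.masked
seen = [v is ma.masked for v in x.flat]

x.flat[3] = 99                     # writes through to the data buffer
```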
+ + Examples + -------- + >>> ndtype = [('a', int), ('b', float)] + >>> a = np.array([(1, 1), (2, 2)], dtype=ndtype) + >>> flatten_structured_array(a) + array([[1., 1.], + [2., 2.]]) + + """ + # + def flatten_sequence(iterable): + """Flattens a compound of nested iterables.""" + for elm in iter(iterable): + if hasattr(elm,'__iter__'): + for f in flatten_sequence(elm): + yield f + else: + yield elm + # + a = np.asanyarray(a) + inishape = a.shape + a = a.ravel() + if isinstance(a, MaskedArray): + out = np.array([tuple(flatten_sequence(d.item())) for d in a._data]) + out = out.view(MaskedArray) + out._mask = np.array([tuple(flatten_sequence(d.item())) + for d in getmaskarray(a)]) + else: + out = np.array([tuple(flatten_sequence(d.item())) for d in a]) + if len(inishape) > 1: + newshape = list(out.shape) + newshape[0] = inishape + out.shape = tuple(flatten_sequence(newshape)) + return out + + + + class MaskedArray(ndarray): """ Arrays with possibly masked values. Masked values of True @@ -1358,32 +1574,32 @@ class MaskedArray(ndarray): ---------- data : {var} Input data. - mask : {nomask, sequence} + mask : {nomask, sequence}, optional Mask. Must be convertible to an array of booleans with the same shape as data: True indicates a masked (eg., invalid) data. - dtype : dtype - Data type of the output. If None, the type of the data - argument is used. If dtype is not None and different from - data.dtype, a copy is performed. - copy : bool - Whether to copy the input data (True), or to use a - reference instead. Note: data are NOT copied by default. - subok : {True, boolean} + dtype : {dtype}, optional + Data type of the output. + If dtype is None, the type of the data argument (`data.dtype`) is used. + If dtype is not None and different from `data.dtype`, a copy is performed. + copy : {False, True}, optional + Whether to copy the input data (True), or to use a reference instead. + Note: data are NOT copied by default. 
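Beyond the plain-ndarray doctest above, `flatten_structured_array` is meant to preserve masks when handed a MaskedArray, flattening the mask alongside the data. An illustrative sketch (assuming the masked branch behaves as the added code reads):

```python
import numpy as np
import numpy.ma as ma

ndtype = [('a', int), ('b', float)]
a = ma.array([(1, 1.0), (2, 2.0)], dtype=ndtype,
             mask=[(0, 1), (0, 0)])
flat = ma.flatten_structured_array(a)
# -> a 2x2 MaskedArray; the mask on field 'b' of the first record survives.
```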
+ subok : {True, False}, optional Whether to return a subclass of MaskedArray (if possible) or a plain MaskedArray. - ndmin : {0, int} + ndmin : {0, int}, optional Minimum number of dimensions - fill_value : {var} - Value used to fill in the masked values when necessary. If - None, a default based on the datatype is used. - keep_mask : {True, boolean} + fill_value : {var}, optional + Value used to fill in the masked values when necessary. + If None, a default based on the datatype is used. + keep_mask : {True, boolean}, optional Whether to combine mask with the mask of the input data, if any (True), or to use only mask for the output (False). - hard_mask : {False, boolean} - Whether to use a hard mask or not. With a hard mask, - masked values cannot be unmasked. - shrink : {True, boolean} + hard_mask : {False, boolean}, optional + Whether to use a hard mask or not. + With a hard mask, masked values cannot be unmasked. + shrink : {True, boolean}, optional Whether to force compression of an empty mask. """ @@ -1397,10 +1613,12 @@ class MaskedArray(ndarray): subok=True, ndmin=0, fill_value=None, keep_mask=True, hard_mask=None, flag=None, shrink=True, **options): - """Create a new masked array from scratch. + """ + Create a new masked array from scratch. - Note: you can also create an array with the .view(MaskedArray) - method. + Notes + ----- + A masked array can also be created by taking a .view(MaskedArray). """ if flag is not None: @@ -1564,7 +1782,8 @@ class MaskedArray(ndarray): return #.................................. def __array_wrap__(self, obj, context=None): - """Special hook for ufuncs. + """ + Special hook for ufuncs. Wraps the numpy array and sets the mask according to context. """ result = obj.view(type(self)) @@ -1577,10 +1796,11 @@ class MaskedArray(ndarray): # Get the domain mask................ 
domain = ufunc_domain.get(func, None) if domain is not None: + # Take the domain, and make sure it's a ndarray if len(args) > 2: - d = reduce(domain, args) + d = filled(reduce(domain, args), True) else: - d = domain(*args) + d = filled(domain(*args), True) # Fill the result where the domain is wrong try: # Binary domain: take the last value @@ -1598,7 +1818,8 @@ class MaskedArray(ndarray): if d is not nomask: m = d else: - m |= d + # Don't modify inplace, we risk back-propagation + m = (m | d) # Make sure the mask has the proper size if result.shape == () and m: return masked @@ -1630,7 +1851,7 @@ class MaskedArray(ndarray): if dtype is None: dtype = output.dtype mdtype = make_mask_descr(dtype) - + output._mask = self._mask.view(mdtype, ndarray) output._mask.shape = output.shape # Make sure to reset the _fill_value if needed @@ -1797,7 +2018,8 @@ class MaskedArray(ndarray): ndarray.__setitem__(_data, indx, dindx) _mask[indx] = mindx return - #............................................ + + def __getslice__(self, i, j): """x.__getslice__(i, j) <==> x[i:j] @@ -1806,7 +2028,8 @@ class MaskedArray(ndarray): """ return self.__getitem__(slice(i, j)) - #........................ + + def __setslice__(self, i, j, value): """x.__setslice__(i, j, value) <==> x[i:j]=value @@ -1815,7 +2038,8 @@ class MaskedArray(ndarray): """ self.__setitem__(slice(i, j), value) - #............................................ + + def __setmask__(self, mask, copy=False): """Set the mask. @@ -1881,33 +2105,28 @@ class MaskedArray(ndarray): # return self._mask.reshape(self.shape) return self._mask mask = property(fget=_get_mask, fset=__setmask__, doc="Mask") - # - def _getrecordmask(self): - """Return the mask of the records. + + + def _get_recordmask(self): + """ + Return the mask of the records. A record is masked when all the fields are masked. 
""" _mask = ndarray.__getattribute__(self, '_mask').view(ndarray) if _mask.dtype.names is None: return _mask - if _mask.size > 1: - axis = 1 - else: - axis = None - # - try: - return _mask.view((bool_, len(self.dtype))).all(axis) - except ValueError: - return np.all([[f[n].all() for n in _mask.dtype.names] - for f in _mask], axis=axis) + return np.all(flatten_structured_array(_mask), axis=-1) + - def _setrecordmask(self): + def _set_recordmask(self): """Return the mask of the records. A record is masked when all the fields are masked. """ raise NotImplementedError("Coming soon: setting the mask per records!") - recordmask = property(fget=_getrecordmask) + recordmask = property(fget=_get_recordmask) + #............................................ def harden_mask(self): """Force the mask to hard. @@ -1921,6 +2140,10 @@ class MaskedArray(ndarray): """ self._hardmask = False + hardmask = property(fget=lambda self: self._hardmask, + doc="Hardness of the mask") + + def unshare_mask(self): """Copy the mask and set the sharedmask flag to False. @@ -1929,6 +2152,9 @@ class MaskedArray(ndarray): self._mask = self._mask.copy() self._sharedmask = False + sharedmask = property(fget=lambda self: self._sharedmask, + doc="Share status of the mask (read-only).") + def shrink_mask(self): """Reduce a mask to nomask when possible. @@ -1938,6 +2164,10 @@ class MaskedArray(ndarray): self._mask = nomask #............................................ + + baseclass = property(fget= lambda self:self._baseclass, + doc="Class of the underlying data (read-only).") + def _get_data(self): """Return the current data, as a view of the original underlying data. @@ -1960,7 +2190,7 @@ class MaskedArray(ndarray): """Return a flat iterator. """ - return FlatIter(self) + return MaskedIterator(self) # def _set_flat (self, value): """Set a flattened version of self to value. 
@@ -1991,24 +2221,25 @@ class MaskedArray(ndarray): fill_value = property(fget=get_fill_value, fset=set_fill_value, doc="Filling value.") + def filled(self, fill_value=None): - """Return a copy of self._data, where masked values are filled - with fill_value. + """ + Return a copy of self, where masked values are filled with `fill_value`. - If fill_value is None, self.fill_value is used instead. + If `fill_value` is None, `self.fill_value` is used instead. - Notes - ----- - + Subclassing is preserved - + The result is NOT a MaskedArray ! + Notes + ----- + + Subclassing is preserved + + The result is NOT a MaskedArray ! - Examples - -------- - >>> x = np.ma.array([1,2,3,4,5], mask=[0,0,1,0,1], fill_value=-999) - >>> x.filled() - array([1,2,-999,4,-999]) - >>> type(x.filled()) - <type 'numpy.ndarray'> + Examples + -------- + >>> x = np.ma.array([1,2,3,4,5], mask=[0,0,1,0,1], fill_value=-999) + >>> x.filled() + array([1,2,-999,4,-999]) + >>> type(x.filled()) + <type 'numpy.ndarray'> """ m = self._mask @@ -2025,9 +2256,7 @@ class MaskedArray(ndarray): # if m.dtype.names: result = self._data.copy() - for n in result.dtype.names: - field = result[n] - np.putmask(field, self._mask[n], fill_value[n]) + _recursive_filled(result, self._mask, fill_value) elif not m.any(): return self._data else: @@ -2148,13 +2377,9 @@ class MaskedArray(ndarray): res = self._data.astype("|O8") res[m] = f else: - rdtype = [list(_) for _ in self.dtype.descr] - for r in rdtype: - r[1] = '|O8' - rdtype = [tuple(_) for _ in rdtype] + rdtype = _recursive_make_descr(self.dtype, "|O8") res = self._data.astype(rdtype) - for field in names: - np.putmask(res[field], m[field], f) + _recursive_printoption(res, m, f) else: res = self.filled(self.fill_value) return str(res) @@ -2163,44 +2388,71 @@ class MaskedArray(ndarray): """Literal string representation. 
""" - with_mask = """\ -masked_%(name)s(data = - %(data)s, - mask = - %(mask)s, - fill_value=%(fill)s) -""" - with_mask1 = """\ -masked_%(name)s(data = %(data)s, - mask = %(mask)s, - fill_value=%(fill)s) -""" - with_mask_flx = """\ -masked_%(name)s(data = - %(data)s, - mask = - %(mask)s, - fill_value=%(fill)s, - dtype=%(dtype)s) -""" - with_mask1_flx = """\ -masked_%(name)s(data = %(data)s, - mask = %(mask)s, - fill_value=%(fill)s - dtype=%(dtype)s) -""" n = len(self.shape) name = repr(self._data).split('(')[0] - parameters = dict(name=name, data=str(self), mask=str(self._mask), + parameters = dict(name=name, nlen=" "*len(name), + data=str(self), mask=str(self._mask), fill=str(self.fill_value), dtype=str(self.dtype)) if self.dtype.names: if n <= 1: - return with_mask1_flx % parameters - return with_mask_flx % parameters + return _print_templates['short_flx'] % parameters + return _print_templates['long_flx'] % parameters elif n <= 1: - return with_mask1 % parameters - return with_mask % parameters + return _print_templates['short'] % parameters + return _print_templates['long'] % parameters #............................................ 
+ def __eq__(self, other): + "Check whether other equals self elementwise" + omask = getattr(other, '_mask', nomask) + if omask is nomask: + check = ndarray.__eq__(self.filled(0), other).view(type(self)) + check._mask = self._mask + else: + odata = filled(other, 0) + check = ndarray.__eq__(self.filled(0), odata).view(type(self)) + if self._mask is nomask: + check._mask = omask + else: + mask = mask_or(self._mask, omask) + if mask.dtype.names: + if mask.size > 1: + axis = 1 + else: + axis = None + try: + mask = mask.view((bool_, len(self.dtype))).all(axis) + except ValueError: + mask = np.all([[f[n].all() for n in mask.dtype.names] + for f in mask], axis=axis) + check._mask = mask + return check + # + def __ne__(self, other): + "Check whether other doesn't equal self elementwise" + omask = getattr(other, '_mask', nomask) + if omask is nomask: + check = ndarray.__ne__(self.filled(0), other).view(type(self)) + check._mask = self._mask + else: + odata = filled(other, 0) + check = ndarray.__ne__(self.filled(0), odata).view(type(self)) + if self._mask is nomask: + check._mask = omask + else: + mask = mask_or(self._mask, omask) + if mask.dtype.names: + if mask.size > 1: + axis = 1 + else: + axis = None + try: + mask = mask.view((bool_, len(self.dtype))).all(axis) + except ValueError: + mask = np.all([[f[n].all() for n in mask.dtype.names] + for f in mask], axis=axis) + check._mask = mask + return check + # def __add__(self, other): "Add other to self, and return a new masked array." return add(self, other) @@ -2223,7 +2475,7 @@ masked_%(name)s(data = %(data)s, # def __rmul__(self, other): "Multiply other by self, and return a new masked array." - return multiply(other, self) + return multiply(self, other) # def __div__(self, other): "Divide other into self, and return a new masked array." @@ -2243,32 +2495,39 @@ masked_%(name)s(data = %(data)s, #............................................ def __iadd__(self, other): "Add other to self in-place." 
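The new explicit `__eq__`/`__ne__` fill both operands with 0 before comparing, so the comparison itself never trips over masked slots; the result then carries the combined mask. Observable behaviour:

```python
import numpy as np
import numpy.ma as ma

x = ma.array([1, 2, 3], mask=[False, True, False])
eq = (x == np.array([1, 99, 4]))   # masked where x is masked
ne = (x != np.array([1, 99, 4]))
```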
- ndarray.__iadd__(self._data, getdata(other)) m = getmask(other) if self._mask is nomask: - self._mask = m - elif m is not nomask: - self._mask += m + if m is not nomask and m.any(): + self._mask = make_mask_none(self.shape, self.dtype) + self._mask += m + else: + if m is not nomask: + self._mask += m + ndarray.__iadd__(self._data, np.where(self._mask, 0, getdata(other))) return self #.... def __isub__(self, other): "Subtract other from self in-place." - ndarray.__isub__(self._data, getdata(other)) m = getmask(other) if self._mask is nomask: - self._mask = m + if m is not nomask and m.any(): + self._mask = make_mask_none(self.shape, self.dtype) + self._mask += m elif m is not nomask: self._mask += m + ndarray.__isub__(self._data, np.where(self._mask, 0, getdata(other))) return self #.... def __imul__(self, other): "Multiply self by other in-place." - ndarray.__imul__(self._data, getdata(other)) m = getmask(other) if self._mask is nomask: - self._mask = m + if m is not nomask and m.any(): + self._mask = make_mask_none(self.shape, self.dtype) + self._mask += m elif m is not nomask: self._mask += m + ndarray.__imul__(self._data, np.where(self._mask, 1, getdata(other))) return self #.... def __idiv__(self, other): @@ -2281,21 +2540,25 @@ masked_%(name)s(data = %(data)s, if dom_mask.any(): (_, fval) = ufunc_fills[np.divide] other_data = np.where(dom_mask, fval, other_data) - ndarray.__idiv__(self._data, other_data) - self._mask = mask_or(self._mask, new_mask) +# self._mask = mask_or(self._mask, new_mask) + self._mask |= new_mask + ndarray.__idiv__(self._data, np.where(self._mask, 1, other_data)) return self #... def __ipow__(self, other): - "Raise self to the power other, in place" - _data = self._data + "Raise self to the power other, in place." 
other_data = getdata(other) other_mask = getmask(other) - ndarray.__ipow__(_data, other_data) - invalid = np.logical_not(np.isfinite(_data)) + ndarray.__ipow__(self._data, np.where(self._mask, 1, other_data)) + invalid = np.logical_not(np.isfinite(self._data)) + if invalid.any(): + if self._mask is not nomask: + self._mask |= invalid + else: + self._mask = invalid + np.putmask(self._data, invalid, self.fill_value) new_mask = mask_or(other_mask, invalid) self._mask = mask_or(self._mask, new_mask) - # The following line is potentially problematic, as we change _data... - np.putmask(self._data, invalid, self.fill_value) return self #............................................ def __float__(self): @@ -2453,25 +2716,24 @@ masked_%(name)s(data = %(data)s, return result # def resize(self, newshape, refcheck=True, order=False): - """Attempt to modify the size and the shape of the array in place. - - The array must own its own memory and not be referenced by - other arrays. - - Returns - ------- - None. + """ + Change shape and size of array in-place. """ - try: - self._data.resize(newshape, refcheck, order) - if self.mask is not nomask: - self._mask.resize(newshape, refcheck, order) - except ValueError: - raise ValueError("Cannot resize an array that has been referenced " - "or is referencing another array in this way.\n" - "Use the resize function.") - return None + # Note : the 'order' keyword looks broken, let's just drop it +# try: +# ndarray.resize(self, newshape, refcheck=refcheck) +# if self.mask is not nomask: +# self._mask.resize(newshape, refcheck=refcheck) +# except ValueError: +# raise ValueError("Cannot resize an array that has been referenced " +# "or is referencing another array in this way.\n" +# "Use the numpy.ma.resize function.") +# return None + errmsg = "A masked array does not own its data "\ + "and therefore cannot be resized.\n" \ + "Use the numpy.ma.resize function instead." 
+ raise ValueError(errmsg) # def put(self, indices, values, mode='raise'): """ @@ -3062,7 +3324,7 @@ masked_%(name)s(data = %(data)s, index_array : ndarray, int Array of indices that sort `a` along the specified axis. In other words, ``a[index_array]`` yields a sorted `a`. - + See Also -------- sort : Describes sorting algorithms used. @@ -3389,7 +3651,7 @@ masked_%(name)s(data = %(data)s, outmask = out._mask = make_mask_none(out.shape) outmask.flat = newmask else: - + if out.dtype.kind in 'biu': errmsg = "Masked data information would be lost in one or more"\ " location." @@ -3506,9 +3768,10 @@ masked_%(name)s(data = %(data)s, def tofile(self, fid, sep="", format="%s"): raise NotImplementedError("Not implemented yet, sorry...") - def torecords(self): + def toflex(self): """ Transforms a MaskedArray into a flexible-type array with two fields: + * the ``_data`` field stores the ``_data`` part of the array; * the ``_mask`` field stores the ``_mask`` part of the array; @@ -3551,6 +3814,7 @@ masked_%(name)s(data = %(data)s, record['_data'] = self._data record['_mask'] = self._mask return record + torecords = toflex #-------------------------------------------- # Pickling def __getstate__(self): @@ -3610,7 +3874,7 @@ def _mareconstruct(subtype, baseclass, baseshape, basetype,): """ _data = ndarray.__new__(baseclass, baseshape, basetype) - _mask = ndarray.__new__(ndarray, baseshape, 'b1') + _mask = ndarray.__new__(ndarray, baseshape, make_mask_descr(basetype)) return subtype.__new__(subtype, _data, mask=_mask, dtype=basetype,) @@ -3848,22 +4112,22 @@ def power(a, b, third=None): else: basetype = MaskedArray # Get the result and view it as a (subclass of) MaskedArray - result = umath.power(fa, fb).view(basetype) + result = np.where(m, fa, umath.power(fa, fb)).view(basetype) + result._update_from(a) # Find where we're in trouble w/ NaNs and Infs invalid = np.logical_not(np.isfinite(result.view(ndarray))) - # Retrieve some extra attributes if needed - if isinstance(result, 
MaskedArray): - result._update_from(a) # Add the initial mask if m is not nomask: - if np.isscalar(result): + if not (result.ndim): return masked + m |= invalid result._mask = m # Fix the invalid parts if invalid.any(): if not result.ndim: return masked - result[invalid] = masked + elif result._mask is nomask: + result._mask = invalid result._data[invalid] = result.fill_value return result @@ -3934,12 +4198,12 @@ sort.__doc__ = MaskedArray.sort.__doc__ def compressed(x): """ Return a 1-D array of all the non-masked data. - + See Also -------- MaskedArray.compressed equivalent method - + """ if getmask(x) is nomask: return np.asanyarray(x) @@ -4307,8 +4571,8 @@ def inner(a, b): Returns the inner product of a and b for arrays of floating point types. Like the generic NumPy equivalent the product sum is over the last dimension - of a and b. - + of a and b. + Notes ----- The first argument is not conjugated. @@ -4343,7 +4607,8 @@ outer.__doc__ = doc_note(np.outer.__doc__, outerproduct = outer def allequal (a, b, fill_value=True): - """Return True if all entries of a and b are equal, using + """ + Return True if all entries of a and b are equal, using fill_value as a truth value where either or both are masked. """ @@ -4378,7 +4643,7 @@ def allclose (a, b, masked_equal=True, rtol=1.e-5, atol=1.e-8, fill_value=None): fill_value : boolean, optional Whether masked values in a or b are considered equal (True) or not (False). - + rtol : Relative tolerance The relative difference is equal to `rtol` * `b`. atol : Absolute tolerance @@ -4401,7 +4666,7 @@ def allclose (a, b, masked_equal=True, rtol=1.e-5, atol=1.e-8, fill_value=None): True. absolute(`a` - `b`) <= (`atol` + `rtol` * absolute(`b`)) - + Return True if all elements of a and b are equal subject to given tolerances. @@ -4434,10 +4699,10 @@ def allclose (a, b, masked_equal=True, rtol=1.e-5, atol=1.e-8, fill_value=None): return np.all(d) #.............................................................................. 
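The rewritten `power` above computes `np.where(m, fa, umath.power(fa, fb))` so masked slots keep their original data, then folds any remaining non-finite results into the mask and overwrites them with the fill value. That guard-then-mask pattern can be sketched on plain ndarrays (a simplified sketch, not the actual `numpy.ma.power` implementation):

```python
import numpy as np

data = np.array([-1.0, 0.25, 4.0])
mask = np.array([False, False, True])   # last entry is masked

# Compute the power everywhere, silencing the domain warning, then put
# the original data back under the mask (the np.where(m, fa, ...) step).
with np.errstate(invalid="ignore"):
    raw = np.power(data, 0.5)           # (-1.0) ** 0.5 -> nan
result = np.where(mask, data, raw)

# Any nan/inf produced by a domain violation is folded into the mask,
# and the offending slot is overwritten with a fill value, as the diff
# does with result._data[invalid] = result.fill_value.
invalid = ~np.isfinite(result)
mask = mask | invalid
result[invalid] = 999999.0
```

This is why the hunk drops the old "potentially problematic" `putmask` comment: invalid entries are now masked *and* filled in one pass instead of being written through the masked view.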
-def asarray(a, dtype=None): +def asarray(a, dtype=None, order=None): """ - Convert the input to a masked array. - + Convert the input `a` to a masked array of the given datatype. + Parameters ---------- a : array_like @@ -4449,28 +4714,39 @@ def asarray(a, dtype=None): order : {'C', 'F'}, optional Whether to use row-major ('C') or column-major ('FORTRAN') memory representation. Defaults to 'C'. - + Returns ------- out : ndarray MaskedArray interpretation of `a`. No copy is performed if the input - is already an ndarray. If `a` is a subclass of ndarray, a base - class ndarray is returned. - Return a as a MaskedArray object of the given dtype. - If dtype is not given or None, is is set to the dtype of a. - No copy is performed if a is already an array. - Subclasses are converted to the base class MaskedArray. + is already an ndarray. If `a` is a subclass of MaskedArray, a base + class MaskedArray is returned. """ return masked_array(a, dtype=dtype, copy=False, keep_mask=True, subok=False) def asanyarray(a, dtype=None): - """asanyarray(data, dtype) = array(data, dtype, copy=0, subok=1) + """ + Convert the input `a` to a masked array of the given datatype. + If `a` is a subclass of MaskedArray, its class is conserved. + + Parameters + ---------- + a : array_like + Input data, in any form that can be converted to an array. This + includes lists, lists of tuples, tuples, tuples of tuples, tuples + of lists and ndarrays. + dtype : data-type, optional + By default, the data-type is inferred from the input data. + order : {'C', 'F'}, optional + Whether to use row-major ('C') or column-major ('FORTRAN') memory + representation. Defaults to 'C'. - Return a as an masked array. - If dtype is not given or None, is is set to the dtype of a. - No copy is performed if a is already an array. - Subclasses are conserved. + Returns + ------- + out : ndarray + MaskedArray interpretation of `a`. No copy is performed if the input + is already an ndarray. 
""" return masked_array(a, dtype=dtype, copy=False, keep_mask=True, subok=True) @@ -4516,6 +4792,15 @@ def fromfile(file, dtype=float, count=-1, sep=''): raise NotImplementedError("Not yet implemented. Sorry") +def fromflex(fxarray): + """ + Rebuilds a masked_array from a flexible-type array output by the '.torecord' + array + """ + return masked_array(fxarray['_data'], mask=fxarray['_mask']) + + + class _convert2ma: """Convert functions from numpy to numpy.ma. diff --git a/numpy/ma/extras.py b/numpy/ma/extras.py index cf80180e4..b2f06ea6c 100644 --- a/numpy/ma/extras.py +++ b/numpy/ma/extras.py @@ -19,11 +19,14 @@ __all__ = ['apply_along_axis', 'atleast_1d', 'atleast_2d', 'atleast_3d', 'ediff1d', 'flatnotmasked_contiguous', 'flatnotmasked_edges', 'hsplit', 'hstack', + 'intersect1d', 'intersect1d_nu', 'mask_cols', 'mask_rowcols', 'mask_rows', 'masked_all', 'masked_all_like', 'median', 'mr_', 'notmasked_contiguous', 'notmasked_edges', 'polyfit', 'row_stack', + 'setdiff1d', 'setmember1d', 'setxor1d', + 'unique1d', 'union1d', 'vander', 'vstack', ] @@ -45,22 +48,19 @@ from numpy.linalg import lstsq #............................................................................... def issequence(seq): """Is seq a sequence (ndarray, list or tuple)?""" - if isinstance(seq, ndarray): - return True - elif isinstance(seq, tuple): - return True - elif isinstance(seq, list): + if isinstance(seq, (ndarray, tuple, list)): return True return False def count_masked(arr, axis=None): - """Count the number of masked elements along the given axis. + """ + Count the number of masked elements along the given axis. Parameters ---------- - axis : int, optional - Axis along which to count. - If None (default), a flattened version of the array is used. + axis : int, optional + Axis along which to count. + If None (default), a flattened version of the array is used. 
""" m = getmaskarray(arr) @@ -136,9 +136,12 @@ class _fromnxfunction: res.append(masked_array(_d, mask=_m)) return res -atleast_1d = _fromnxfunction('atleast_1d') -atleast_2d = _fromnxfunction('atleast_2d') -atleast_3d = _fromnxfunction('atleast_3d') +#atleast_1d = _fromnxfunction('atleast_1d') +#atleast_2d = _fromnxfunction('atleast_2d') +#atleast_3d = _fromnxfunction('atleast_3d') +atleast_1d = np.atleast_1d +atleast_2d = np.atleast_2d +atleast_3d = np.atleast_3d vstack = row_stack = _fromnxfunction('vstack') hstack = _fromnxfunction('hstack') @@ -252,7 +255,8 @@ apply_along_axis.__doc__ = np.apply_along_axis.__doc__ def average(a, axis=None, weights=None, returned=False): - """Average the array over the given axis. + """ + Average the array over the given axis. Parameters ---------- @@ -440,10 +444,10 @@ def median(a, axis=None, out=None, overwrite_input=False): #.............................................................................. def compress_rowcols(x, axis=None): """ - Suppress the rows and/or columns of a 2D array that contains + Suppress the rows and/or columns of a 2D array that contain masked values. - The suppression behavior is selected with the `axis`parameter. + The suppression behavior is selected with the `axis` parameter. - If axis is None, rows and columns are suppressed. - If axis is 0, only rows are suppressed. @@ -482,13 +486,15 @@ def compress_rowcols(x, axis=None): return x._data[idxr][:,idxc] def compress_rows(a): - """Suppress whole rows of a 2D array that contain masked values. + """ + Suppress whole rows of a 2D array that contain masked values. """ return compress_rowcols(a, 0) def compress_cols(a): - """Suppress whole columnss of a 2D array that contain masked values. + """ + Suppress whole columns of a 2D array that contain masked values. """ return compress_rowcols(a, 1) @@ -530,30 +536,35 @@ def mask_rowcols(a, axis=None): return a def mask_rows(a, axis=None): - """Mask whole rows of a 2D array that contain masked values. 
+ """ + Mask whole rows of a 2D array that contain masked values. Parameters ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. + axis : int, optional + Axis along which to perform the operation. + If None, applies to a flattened version of the array. + """ return mask_rowcols(a, 0) def mask_cols(a, axis=None): - """Mask whole columns of a 2D array that contain masked values. + """ + Mask whole columns of a 2D array that contain masked values. Parameters ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. + axis : int, optional + Axis along which to perform the operation. + If None, applies to a flattened version of the array. + """ return mask_rowcols(a, 1) def dot(a,b, strict=False): - """Return the dot product of two 2D masked arrays a and b. + """ + Return the dot product of two 2D masked arrays a and b. Like the generic numpy equivalent, the product sum is over the last dimension of a and the second-to-last dimension of b. If strict is True, @@ -582,72 +593,211 @@ def dot(a,b, strict=False): m = ~np.dot(am, bm) return masked_array(d, mask=m) -#............................................................................... -def ediff1d(array, to_end=None, to_begin=None): - """Return the differences between consecutive elements of an - array, possibly with prefixed and/or appended values. +#####-------------------------------------------------------------------------- +#---- --- arraysetops --- +#####-------------------------------------------------------------------------- - Parameters - ---------- - array : {array} - Input array, will be flattened before the difference is taken. - to_end : {number}, optional - If provided, this number will be tacked onto the end of the returned - differences. 
-    to_begin : {number}, optional
-        If provided, this number will be taked onto the beginning of the
-        returned differences.
+def ediff1d(arr, to_end=None, to_begin=None):
+    """
+    Computes the differences between consecutive elements of an array.
+
+    This function is the equivalent of `numpy.ediff1d` that takes masked
+    values into account.
+
+    See Also
+    --------
+    numpy.ediff1d : equivalent function for ndarrays.

     Returns
     -------
-    ed : {array}
-        The differences. Loosely, this will be (ary[1:] - ary[:-1]).
-
+    output : MaskedArray
+
     """
-    a = masked_array(array, copy=True)
-    if a.ndim > 1:
-        a.reshape((a.size,))
-    (d, m, n) = (a._data, a._mask, a.size-1)
-    dd = d[1:]-d[:-1]
-    if m is nomask:
-        dm = nomask
-    else:
-        dm = m[1:]-m[:-1]
+    arr = ma.asanyarray(arr).flat
+    ed = arr[1:] - arr[:-1]
+    arrays = [ed]
+    #
+    if to_begin is not None:
+        arrays.insert(0, to_begin)
     if to_end is not None:
-        to_end = asarray(to_end)
-        nend = to_end.size
-        if to_begin is not None:
-            to_begin = asarray(to_begin)
-            nbegin = to_begin.size
-            r_data = np.empty((n+nend+nbegin,), dtype=a.dtype)
-            r_mask = np.zeros((n+nend+nbegin,), dtype=bool)
-            r_data[:nbegin] = to_begin._data
-            r_mask[:nbegin] = to_begin._mask
-            r_data[nbegin:-nend] = dd
-            r_mask[nbegin:-nend] = dm
-        else:
-            r_data = np.empty((n+nend,), dtype=a.dtype)
-            r_mask = np.zeros((n+nend,), dtype=bool)
-            r_data[:-nend] = dd
-            r_mask[:-nend] = dm
-        r_data[-nend:] = to_end._data
-        r_mask[-nend:] = to_end._mask
+        arrays.append(to_end)
+    #
+    if len(arrays) != 1:
+        # We'll save ourselves a copy of a potentially large array in the common
+        # case where neither to_begin nor to_end was given.
+        ed = hstack(arrays)
+    #
+    return ed
+
+
+def unique1d(ar1, return_index=False, return_inverse=False):
+    """
+    Finds the unique elements of an array.
+
+    Masked values are considered the same element (masked).
+
+    The output array is always a MaskedArray.
+
+    See Also
+    --------
+    np.unique1d : equivalent function for ndarrays.
+ """ + output = np.unique1d(ar1, + return_index=return_index, + return_inverse=return_inverse) + if isinstance(output, tuple): + output = list(output) + output[0] = output[0].view(MaskedArray) + output = tuple(output) + else: + output = output.view(MaskedArray) + return output + + +def intersect1d(ar1, ar2): + """ + Returns the repeated or unique elements belonging to the two arrays. + + Masked values are assumed equals one to the other. + The output is always a masked array + + See Also + -------- + numpy.intersect1d : equivalent function for ndarrays. + + Examples + -------- + >>> x = array([1, 3, 3, 3], mask=[0, 0, 0, 1]) + >>> y = array([3, 1, 1, 1], mask=[0, 0, 0, 1]) + >>> intersect1d(x, y) + masked_array(data = [1 1 3 3 --], + mask = [False False False False True], + fill_value = 999999) + """ + aux = ma.concatenate((ar1,ar2)) + aux.sort() + return aux[aux[1:] == aux[:-1]] + + + +def intersect1d_nu(ar1, ar2): + """ + Returns the unique elements common to both arrays. + + Masked values are considered equal one to the other. + The output is always a masked array. + + See Also + -------- + intersect1d : Returns repeated or unique common elements. + numpy.intersect1d_nu : equivalent function for ndarrays. + + Examples + -------- + >>> x = array([1, 3, 3, 3], mask=[0, 0, 0, 1]) + >>> y = array([3, 1, 1, 1], mask=[0, 0, 0, 1]) + >>> intersect1d_nu(x, y) + masked_array(data = [1 3 --], + mask = [False False True], + fill_value = 999999) + + """ + # Might be faster than unique1d( intersect1d( ar1, ar2 ) )? + aux = ma.concatenate((unique1d(ar1), unique1d(ar2))) + aux.sort() + return aux[aux[1:] == aux[:-1]] + + + +def setxor1d(ar1, ar2): + """ + Set exclusive-or of 1D arrays with unique elements. 
+ + See Also + -------- + numpy.setxor1d : equivalent function for ndarrays + + """ + aux = ma.concatenate((ar1, ar2)) + if aux.size == 0: + return aux + aux.sort() + auxf = aux.filled() +# flag = ediff1d( aux, to_end = 1, to_begin = 1 ) == 0 + flag = ma.concatenate(([True], (auxf[1:] != auxf[:-1]), [True])) +# flag2 = ediff1d( flag ) == 0 + flag2 = (flag[1:] == flag[:-1]) + return aux[flag2] + + +def setmember1d(ar1, ar2): + """ + Return a boolean array set True where first element is in second array. + + See Also + -------- + numpy.setmember1d : equivalent function for ndarrays. + + """ + ar1 = ma.asanyarray(ar1) + ar2 = ma.asanyarray( ar2 ) + ar = ma.concatenate((ar1, ar2 )) + b1 = ma.zeros(ar1.shape, dtype = np.int8) + b2 = ma.ones(ar2.shape, dtype = np.int8) + tt = ma.concatenate((b1, b2)) + + # We need this to be a stable sort, so always use 'mergesort' here. The + # values from the first array should always come before the values from the + # second array. + perm = ar.argsort(kind='mergesort') + aux = ar[perm] + aux2 = tt[perm] +# flag = ediff1d( aux, 1 ) == 0 + flag = ma.concatenate((aux[1:] == aux[:-1], [False])) + ii = ma.where( flag * aux2 )[0] + aux = perm[ii+1] + perm[ii+1] = perm[ii] + perm[ii] = aux # - elif to_begin is not None: - to_begin = asarray(to_begin) - nbegin = to_begin.size - r_data = np.empty((n+nbegin,), dtype=a.dtype) - r_mask = np.zeros((n+nbegin,), dtype=bool) - r_data[:nbegin] = to_begin._data - r_mask[:nbegin] = to_begin._mask - r_data[nbegin:] = dd - r_mask[nbegin:] = dm + indx = perm.argsort(kind='mergesort')[:len( ar1 )] # + return flag[indx] + + +def union1d(ar1, ar2): + """ + Union of 1D arrays with unique elements. + + See also + -------- + numpy.union1d : equivalent function for ndarrays. + + """ + return unique1d(ma.concatenate((ar1, ar2))) + + +def setdiff1d(ar1, ar2): + """ + Set difference of 1D arrays with unique elements. 
+ + See Also + -------- + numpy.setdiff1d : equivalent function for ndarrays + + """ + aux = setmember1d(ar1,ar2) + if aux.size == 0: + return aux else: - r_data = dd - r_mask = dm - return masked_array(r_data, mask=r_mask) + return ma.asarray(ar1)[aux == 0] + + + +#####-------------------------------------------------------------------------- +#---- --- Covariance --- +#####-------------------------------------------------------------------------- + + def _covhelper(x, y=None, rowvar=True, allow_masked=True): @@ -747,7 +897,8 @@ def cov(x, y=None, rowvar=True, bias=False, allow_masked=True): def corrcoef(x, y=None, rowvar=True, bias=False, allow_masked=True): - """The correlation coefficients formed from the array x, where the + """ + The correlation coefficients formed from the array x, where the rows are the observations, and the columns are variables. corrcoef(x,y) where x and y are 1d arrays is the same as @@ -818,7 +969,8 @@ def corrcoef(x, y=None, rowvar=True, bias=False, allow_masked=True): #####-------------------------------------------------------------------------- class MAxisConcatenator(AxisConcatenator): - """Translate slice objects to concatenation along an axis. + """ + Translate slice objects to concatenation along an axis. """ @@ -877,11 +1029,13 @@ class MAxisConcatenator(AxisConcatenator): return self._retval(res) class mr_class(MAxisConcatenator): - """Translate slice objects to concatenation along the first axis. + """ + Translate slice objects to concatenation along the first axis. 
- For example: - >>> np.ma.mr_[np.ma.array([1,2,3]), 0, 0, np.ma.array([4,5,6])] - array([1, 2, 3, 0, 0, 4, 5, 6]) + Examples + -------- + >>> np.ma.mr_[np.ma.array([1,2,3]), 0, 0, np.ma.array([4,5,6])] + array([1, 2, 3, 0, 0, 4, 5, 6]) """ def __init__(self): @@ -894,7 +1048,8 @@ mr_ = mr_class() #####-------------------------------------------------------------------------- def flatnotmasked_edges(a): - """Find the indices of the first and last not masked values in a + """ + Find the indices of the first and last not masked values in a 1D masked array. If all values are masked, returns None. """ @@ -907,8 +1062,10 @@ def flatnotmasked_edges(a): else: return None + def notmasked_edges(a, axis=None): - """Find the indices of the first and last not masked values along + """ + Find the indices of the first and last not masked values along the given axis in a masked array. If all values are masked, return None. Otherwise, return a list @@ -917,9 +1074,10 @@ def notmasked_edges(a, axis=None): Parameters ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. + axis : int, optional + Axis along which to perform the operation. + If None, applies to a flattened version of the array. + """ a = asarray(a) if axis is None or a.ndim == 1: @@ -929,8 +1087,10 @@ def notmasked_edges(a, axis=None): return [tuple([idx[i].min(axis).compressed() for i in range(a.ndim)]), tuple([idx[i].max(axis).compressed() for i in range(a.ndim)]),] + def flatnotmasked_contiguous(a): - """Find contiguous unmasked data in a flattened masked array. + """ + Find contiguous unmasked data in a flattened masked array. Return a sorted sequence of slices (start index, end index). @@ -950,22 +1110,22 @@ def flatnotmasked_contiguous(a): return result def notmasked_contiguous(a, axis=None): - """Find contiguous unmasked data in a masked array along the given - axis. 
+ """ + Find contiguous unmasked data in a masked array along the given axis. Parameters ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. + axis : int, optional + Axis along which to perform the operation. + If None, applies to a flattened version of the array. Returns ------- - A sorted sequence of slices (start index, end index). + A sorted sequence of slices (start index, end index). Notes ----- - Only accepts 2D arrays at most. + Only accepts 2D arrays at most. """ a = asarray(a) diff --git a/numpy/ma/mrecords.py b/numpy/ma/mrecords.py index d5c03c22d..72e78f507 100644 --- a/numpy/ma/mrecords.py +++ b/numpy/ma/mrecords.py @@ -357,7 +357,7 @@ The fieldname base is either `_data` or `_mask`.""" dtype = None else: output = ndarray.view(self, dtype) - # OK, there's the change + # OK, there's the change except TypeError: dtype = np.dtype(dtype) # we need to revert to MaskedArray, but keeping the possibility diff --git a/numpy/ma/tests/test_core.py b/numpy/ma/tests/test_core.py index f9f46e563..a270775d9 100644 --- a/numpy/ma/tests/test_core.py +++ b/numpy/ma/tests/test_core.py @@ -474,6 +474,20 @@ class TestMaskedArray(TestCase): np.array([(1, '1', 1.)], dtype=flexi.dtype)) + def test_filled_w_nested_dtype(self): + "Test filled w/ nested dtype" + ndtype = [('A', int), ('B', [('BA', int), ('BB', int)])] + a = array([(1, (1, 1)), (2, (2, 2))], + mask=[(0, (1, 0)), (0, (0, 1))], dtype=ndtype) + test = a.filled(0) + control = np.array([(1, (0, 1)), (2, (2, 0))], dtype=ndtype) + assert_equal(test, control) + # + test = a['B'].filled(0) + control = np.array([(0, 1), (2, 0)], dtype=a['B'].dtype) + assert_equal(test, control) + + def test_optinfo_propagation(self): "Checks that _optinfo dictionary isn't back-propagated" x = array([1,2,3,], dtype=float) @@ -483,6 +497,55 @@ class TestMaskedArray(TestCase): y._optinfo['info'] = '!!!' 
assert_equal(x._optinfo['info'], '???') + + def test_fancy_printoptions(self): + "Test printing a masked array w/ fancy dtype." + fancydtype = np.dtype([('x', int), ('y', [('t', int), ('s', float)])]) + test = array([(1, (2, 3.0)), (4, (5, 6.0))], + mask=[(1, (0, 1)), (0, (1, 0))], + dtype=fancydtype) + control = "[(--, (2, --)) (4, (--, 6.0))]" + assert_equal(str(test), control) + + + def test_flatten_structured_array(self): + "Test flatten_structured_array on arrays" + # On ndarray + ndtype = [('a', int), ('b', float)] + a = np.array([(1, 1), (2, 2)], dtype=ndtype) + test = flatten_structured_array(a) + control = np.array([[1., 1.], [2., 2.]], dtype=np.float) + assert_equal(test, control) + assert_equal(test.dtype, control.dtype) + # On masked_array + a = array([(1, 1), (2, 2)], mask=[(0, 1), (1, 0)], dtype=ndtype) + test = flatten_structured_array(a) + control = array([[1., 1.], [2., 2.]], + mask=[[0, 1], [1, 0]], dtype=np.float) + assert_equal(test, control) + assert_equal(test.dtype, control.dtype) + assert_equal(test.mask, control.mask) + # On masked array with nested structure + ndtype = [('a', int), ('b', [('ba', int), ('bb', float)])] + a = array([(1, (1, 1.1)), (2, (2, 2.2))], + mask=[(0, (1, 0)), (1, (0, 1))], dtype=ndtype) + test = flatten_structured_array(a) + control = array([[1., 1., 1.1], [2., 2., 2.2]], + mask=[[0, 1, 0], [1, 0, 1]], dtype=np.float) + assert_equal(test, control) + assert_equal(test.dtype, control.dtype) + assert_equal(test.mask, control.mask) + # Keeping the initial shape + ndtype = [('a', int), ('b', float)] + a = np.array([[(1, 1),], [(2, 2),]], dtype=ndtype) + test = flatten_structured_array(a) + control = np.array([[[1., 1.],], [[2., 2.],]], dtype=np.float) + assert_equal(test, control) + assert_equal(test.dtype, control.dtype) + + + + #------------------------------------------------------------------------------ class TestMaskedArrayArithmetic(TestCase): @@ -539,6 +602,7 @@ class TestMaskedArrayArithmetic(TestCase): 
assert_equal(np.multiply(x,y), multiply(xm, ym)) assert_equal(np.divide(x,y), divide(xm, ym)) + def test_divide_on_different_shapes(self): x = arange(6, dtype=float) x.shape = (2,3) @@ -557,6 +621,7 @@ class TestMaskedArrayArithmetic(TestCase): assert_equal(z, [[-1.,-1.,-1.], [3.,4.,5.]]) assert_equal(z.mask, [[1,1,1],[0,0,0]]) + def test_mixed_arithmetic(self): "Tests mixed arithmetics." na = np.array([1]) @@ -571,6 +636,7 @@ class TestMaskedArrayArithmetic(TestCase): assert_equal(getmaskarray(a/2), [0,0,0]) assert_equal(getmaskarray(2/a), [1,0,1]) + def test_masked_singleton_arithmetic(self): "Tests some scalar arithmetics on MaskedArrays." # Masked singleton should remain masked no matter what @@ -581,6 +647,7 @@ class TestMaskedArrayArithmetic(TestCase): self.failUnless(maximum(xm, xm).mask) self.failUnless(minimum(xm, xm).mask) + def test_arithmetic_with_masked_singleton(self): "Checks that there's no collapsing to masked" x = masked_array([1,2]) @@ -593,6 +660,7 @@ class TestMaskedArrayArithmetic(TestCase): assert_equal(y.shape, x.shape) assert_equal(y._mask, [True, True]) + def test_arithmetic_with_masked_singleton_on_1d_singleton(self): "Check that we're not losing the shape of a singleton" x = masked_array([1, ]) @@ -600,6 +668,7 @@ class TestMaskedArrayArithmetic(TestCase): assert_equal(y.shape, x.shape) assert_equal(y.mask, [True, ]) + def test_scalar_arithmetic(self): x = array(0, mask=0) assert_equal(x.filled().ctypes.data, x.ctypes.data) @@ -608,6 +677,7 @@ class TestMaskedArrayArithmetic(TestCase): assert_equal(xm.shape,(2,)) assert_equal(xm.mask,[1,1]) + def test_basic_ufuncs (self): "Test various functions such as sin, cos." (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d @@ -649,6 +719,7 @@ class TestMaskedArrayArithmetic(TestCase): assert getmask(count(ott,0)) is nomask assert_equal([1,2],count(ott,0)) + def test_minmax_func (self): "Tests minimum and maximum." 
(x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d @@ -672,6 +743,7 @@ class TestMaskedArrayArithmetic(TestCase): x[-1,-1] = masked assert_equal(maximum(x), 2) + def test_minimummaximum_func(self): a = np.ones((2,2)) aminimum = minimum(a,a) @@ -690,6 +762,7 @@ class TestMaskedArrayArithmetic(TestCase): self.failUnless(isinstance(amaximum, MaskedArray)) assert_equal(amaximum, np.maximum.outer(a,a)) + def test_minmax_funcs_with_output(self): "Tests the min/max functions with explicit outputs" mask = np.random.rand(12).round() @@ -735,7 +808,8 @@ class TestMaskedArrayArithmetic(TestCase): self.failUnless(x.min() is masked) self.failUnless(x.max() is masked) self.failUnless(x.ptp() is masked) - #........................ + + def test_addsumprod (self): "Tests add, sum, product." (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d @@ -757,6 +831,98 @@ class TestMaskedArrayArithmetic(TestCase): assert_equal(np.sum(x,1), sum(x,1)) assert_equal(np.product(x,1), product(x,1)) + + def test_binops_d2D(self): + "Test binary operations on 2D data" + a = array([[1.], [2.], [3.]], mask=[[False], [True], [True]]) + b = array([[2., 3.], [4., 5.], [6., 7.]]) + # + test = a * b + control = array([[2., 3.], [2., 2.], [3., 3.]], + mask=[[0, 0], [1, 1], [1, 1]]) + assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + # + test = b * a + control = array([[2., 3.], [4., 5.], [6., 7.]], + mask=[[0, 0], [1, 1], [1, 1]]) + assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + # + a = array([[1.], [2.], [3.]]) + b = array([[2., 3.], [4., 5.], [6., 7.]], + mask=[[0, 0], [0, 0], [0, 1]]) + test = a * b + control = array([[2, 3], [8, 10], [18, 3]], + mask=[[0, 0], [0, 0], [0, 1]]) + assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + # + test = b * a + control = array([[2, 3], [8, 10], [18, 7]], + mask=[[0, 0], [0, 0], [0, 1]]) + 
assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + + + def test_domained_binops_d2D(self): + "Test domained binary operations on 2D data" + a = array([[1.], [2.], [3.]], mask=[[False], [True], [True]]) + b = array([[2., 3.], [4., 5.], [6., 7.]]) + # + test = a / b + control = array([[1./2., 1./3.], [2., 2.], [3., 3.]], + mask=[[0, 0], [1, 1], [1, 1]]) + assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + # + test = b / a + control = array([[2./1., 3./1.], [4., 5.], [6., 7.]], + mask=[[0, 0], [1, 1], [1, 1]]) + assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + # + a = array([[1.], [2.], [3.]]) + b = array([[2., 3.], [4., 5.], [6., 7.]], + mask=[[0, 0], [0, 0], [0, 1]]) + test = a / b + control = array([[1./2, 1./3], [2./4, 2./5], [3./6, 3]], + mask=[[0, 0], [0, 0], [0, 1]]) + assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + # + test = b / a + control = array([[2/1., 3/1.], [4/2., 5/2.], [6/3., 7]], + mask=[[0, 0], [0, 0], [0, 1]]) + assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + + + def test_noshrinking(self): + "Check that we don't shrink a mask when not wanted" + # Binary operations + a = masked_array([1,2,3], mask=[False,False,False], shrink=False) + b = a + 1 + assert_equal(b.mask, [0, 0, 0]) + # In place binary operation + a += 1 + assert_equal(a.mask, [0, 0, 0]) + # Domained binary operation + b = a / 1. + assert_equal(b.mask, [0, 0, 0]) + # In place binary operation + a /= 1. 
+ assert_equal(a.mask, [0, 0, 0]) + + def test_mod(self): "Tests mod" (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d @@ -767,7 +933,6 @@ class TestMaskedArrayArithmetic(TestCase): test = mod(xm, ym) assert_equal(test, np.mod(xm, ym)) assert_equal(test.mask, mask_or(mask_or(xm.mask, ym.mask), (ym == 0))) - def test_TakeTransposeInnerOuter(self): @@ -825,6 +990,57 @@ class TestMaskedArrayArithmetic(TestCase): self.failUnless(result is output) self.failUnless(output[0] is masked) + + def test_eq_on_structured(self): + "Test the equality of structured arrays" + ndtype = [('A', int), ('B', int)] + a = array([(1, 1), (2, 2)], mask=[(0, 1), (0, 0)], dtype=ndtype) + test = (a == a) + assert_equal(test, [True, True]) + assert_equal(test.mask, [False, False]) + b = array([(1, 1), (2, 2)], mask=[(1, 0), (0, 0)], dtype=ndtype) + test = (a == b) + assert_equal(test, [False, True]) + assert_equal(test.mask, [True, False]) + b = array([(1, 1), (2, 2)], mask=[(0, 1), (1, 0)], dtype=ndtype) + test = (a == b) + assert_equal(test, [True, False]) + assert_equal(test.mask, [False, False]) + + + def test_ne_on_structured(self): + "Test the inequality of structured arrays" + ndtype = [('A', int), ('B', int)] + a = array([(1, 1), (2, 2)], mask=[(0, 1), (0, 0)], dtype=ndtype) + test = (a != a) + assert_equal(test, [False, False]) + assert_equal(test.mask, [False, False]) + b = array([(1, 1), (2, 2)], mask=[(1, 0), (0, 0)], dtype=ndtype) + test = (a != b) + assert_equal(test, [True, False]) + assert_equal(test.mask, [True, False]) + b = array([(1, 1), (2, 2)], mask=[(0, 1), (1, 0)], dtype=ndtype) + test = (a != b) + assert_equal(test, [False, True]) + assert_equal(test.mask, [False, False]) + + + def test_numpyarithmetics(self): + "Check that the mask is not back-propagated when using numpy functions" + a = masked_array([-1, 0, 1, 2, 3], mask=[0, 0, 0, 0, 1]) + control = masked_array([np.nan, np.nan, 0, np.log(2), -1], + mask=[1, 1, 0, 0, 1]) + # + test = log(a) + assert_equal(test, control)
+ assert_equal(test.mask, control.mask) + assert_equal(a.mask, [0, 0, 0, 0, 1]) + # + test = np.log(a) + assert_equal(test, control) + assert_equal(test.mask, control.mask) + assert_equal(a.mask, [0, 0, 0, 0, 1]) + #------------------------------------------------------------------------------ class TestMaskedArrayAttributes(TestCase): @@ -922,8 +1138,17 @@ class TestMaskedArrayAttributes(TestCase): a[1] = 1 assert_equal(a._mask, zeros(10)) - def _wtv(self): - int(np.nan) + def test_flat(self): + "Test flat on masked_matrices" + test = masked_array(np.matrix([[1, 2, 3]]), mask=[0, 0, 1]) + test.flat = masked_array([3, 2, 1], mask=[1, 0, 0]) + control = masked_array(np.matrix([[3, 2, 1]]), mask=[1, 0, 0]) + assert_equal(test, control) + # + test = masked_array(np.matrix([[1, 2, 3]]), mask=[0, 0, 1]) + testflat = test.flat + testflat[:] = testflat[[2, 1, 0]] + assert_equal(test, control) #------------------------------------------------------------------------------ @@ -1050,21 +1275,44 @@ class TestFillingValues(TestCase): # The shape shouldn't matter ndtype = [('f0', float, (2, 2))] control = np.array((default_fill_value(0.),), - dtype=[('f0',float)]) + dtype=[('f0',float)]).astype(ndtype) assert_equal(_check_fill_value(None, ndtype), control) - control = np.array((0,), dtype=[('f0',float)]) + control = np.array((0,), dtype=[('f0',float)]).astype(ndtype) assert_equal(_check_fill_value(0, ndtype), control) # ndtype = np.dtype("int, (2,3)float, float") control = np.array((default_fill_value(0), default_fill_value(0.), default_fill_value(0.),), - dtype="int, float, float") + dtype="int, float, float").astype(ndtype) test = _check_fill_value(None, ndtype) assert_equal(test, control) - control = np.array((0,0,0), dtype="int, float, float") + control = np.array((0,0,0), dtype="int, float, float").astype(ndtype) assert_equal(_check_fill_value(0, ndtype), control) + + def test_extremum_fill_value(self): + "Tests extremum fill values for flexible type." 
+ a = array([(1, (2, 3)), (4, (5, 6))], + dtype=[('A', int), ('B', [('BA', int), ('BB', int)])]) + test = a.fill_value + assert_equal(test['A'], default_fill_value(a['A'])) + assert_equal(test['B']['BA'], default_fill_value(a['B']['BA'])) + assert_equal(test['B']['BB'], default_fill_value(a['B']['BB'])) + # + test = minimum_fill_value(a) + assert_equal(test[0], minimum_fill_value(a['A'])) + assert_equal(test[1][0], minimum_fill_value(a['B']['BA'])) + assert_equal(test[1][1], minimum_fill_value(a['B']['BB'])) + assert_equal(test[1], minimum_fill_value(a['B'])) + # + test = maximum_fill_value(a) + assert_equal(test[0], maximum_fill_value(a['A'])) + assert_equal(test[1][0], maximum_fill_value(a['B']['BA'])) + assert_equal(test[1][1], maximum_fill_value(a['B']['BB'])) + assert_equal(test[1], maximum_fill_value(a['B'])) + + #------------------------------------------------------------------------------ class TestUfuncs(TestCase): @@ -1126,6 +1374,16 @@ class TestUfuncs(TestCase): self.failUnless(amask.max(1)[0].mask) self.failUnless(amask.min(1)[0].mask) + def test_ndarray_mask(self): + "Check that the mask of the result is a ndarray (not a MaskedArray...)" + a = masked_array([-1, 0, 1, 2, 3], mask=[0, 0, 0, 0, 1]) + test = np.sqrt(a) + control = masked_array([-1, 0, 1, np.sqrt(2), -1], + mask=[1, 0, 0, 0, 1]) + assert_equal(test, control) + assert_equal(test.mask, control.mask) + self.failUnless(not isinstance(test.mask, MaskedArray)) + #------------------------------------------------------------------------------ @@ -1242,22 +1500,176 @@ class TestMaskedArrayInPlaceArithmetics(TestCase): def test_inplace_division_misc(self): # - x = np.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) - y = np.array([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) - m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] - m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 ,0, 1] + x = [1., 1., 1.,-2., pi/2., 4., 5., -10., 10., 1., 2., 3.] + y = [5., 0., 3., 2., -1., -4., 0., -10., 10., 1., 0., 3.] 
+ m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] + m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 , 0, 1] xm = masked_array(x, mask=m1) ym = masked_array(y, mask=m2) # z = xm/ym assert_equal(z._mask, [1,1,1,0,0,1,1,0,0,0,1,1]) - assert_equal(z._data, [0.2,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.]) + assert_equal(z._data, [1.,1.,1.,-1.,-pi/2.,4.,5.,1.,1.,1.,2.,3.]) + #assert_equal(z._data, [0.2,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.]) # xm = xm.copy() xm /= ym assert_equal(xm._mask, [1,1,1,0,0,1,1,0,0,0,1,1]) - assert_equal(xm._data, [1/5.,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.]) - + assert_equal(z._data, [1.,1.,1.,-1.,-pi/2.,4.,5.,1.,1.,1.,2.,3.]) + #assert_equal(xm._data, [1/5.,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.]) + + + def test_datafriendly_add(self): + "Test keeping data w/ (inplace) addition" + x = array([1, 2, 3], mask=[0, 0, 1]) + # Test add w/ scalar + xx = x + 1 + assert_equal(xx.data, [2, 3, 3]) + assert_equal(xx.mask, [0, 0, 1]) + # Test iadd w/ scalar + x += 1 + assert_equal(x.data, [2, 3, 3]) + assert_equal(x.mask, [0, 0, 1]) + # Test add w/ array + x = array([1, 2, 3], mask=[0, 0, 1]) + xx = x + array([1, 2, 3], mask=[1, 0, 0]) + assert_equal(xx.data, [1, 4, 3]) + assert_equal(xx.mask, [1, 0, 1]) + # Test iadd w/ array + x = array([1, 2, 3], mask=[0, 0, 1]) + x += array([1, 2, 3], mask=[1, 0, 0]) + assert_equal(x.data, [1, 4, 3]) + assert_equal(x.mask, [1, 0, 1]) + + + def test_datafriendly_sub(self): + "Test keeping data w/ (inplace) subtraction" + # Test sub w/ scalar + x = array([1, 2, 3], mask=[0, 0, 1]) + xx = x - 1 + assert_equal(xx.data, [0, 1, 3]) + assert_equal(xx.mask, [0, 0, 1]) + # Test isub w/ scalar + x = array([1, 2, 3], mask=[0, 0, 1]) + x -= 1 + assert_equal(x.data, [0, 1, 3]) + assert_equal(x.mask, [0, 0, 1]) + # Test sub w/ array + x = array([1, 2, 3], mask=[0, 0, 1]) + xx = x - array([1, 2, 3], mask=[1, 0, 0]) + assert_equal(xx.data, [1, 0, 3]) + assert_equal(xx.mask, [1, 0, 1]) + # Test isub w/ array + x = array([1, 2, 3], mask=[0, 0, 
1]) + x -= array([1, 2, 3], mask=[1, 0, 0]) + assert_equal(x.data, [1, 0, 3]) + assert_equal(x.mask, [1, 0, 1]) + + + def test_datafriendly_mul(self): + "Test keeping data w/ (inplace) multiplication" + # Test mul w/ scalar + x = array([1, 2, 3], mask=[0, 0, 1]) + xx = x * 2 + assert_equal(xx.data, [2, 4, 3]) + assert_equal(xx.mask, [0, 0, 1]) + # Test imul w/ scalar + x = array([1, 2, 3], mask=[0, 0, 1]) + x *= 2 + assert_equal(x.data, [2, 4, 3]) + assert_equal(x.mask, [0, 0, 1]) + # Test mul w/ array + x = array([1, 2, 3], mask=[0, 0, 1]) + xx = x * array([10, 20, 30], mask=[1, 0, 0]) + assert_equal(xx.data, [1, 40, 3]) + assert_equal(xx.mask, [1, 0, 1]) + # Test imul w/ array + x = array([1, 2, 3], mask=[0, 0, 1]) + x *= array([10, 20, 30], mask=[1, 0, 0]) + assert_equal(x.data, [1, 40, 3]) + assert_equal(x.mask, [1, 0, 1]) + + + def test_datafriendly_div(self): + "Test keeping data w/ (inplace) division" + # Test div on scalar + x = array([1, 2, 3], mask=[0, 0, 1]) + xx = x / 2. + assert_equal(xx.data, [1/2., 2/2., 3]) + assert_equal(xx.mask, [0, 0, 1]) + # Test idiv on scalar + x = array([1., 2., 3.], mask=[0, 0, 1]) + x /= 2. 
+ assert_equal(x.data, [1/2., 2/2., 3]) + assert_equal(x.mask, [0, 0, 1]) + # Test div on array + x = array([1., 2., 3.], mask=[0, 0, 1]) + xx = x / array([10., 20., 30.], mask=[1, 0, 0]) + assert_equal(xx.data, [1., 2./20., 3.]) + assert_equal(xx.mask, [1, 0, 1]) + # Test idiv on array + x = array([1., 2., 3.], mask=[0, 0, 1]) + x /= array([10., 20., 30.], mask=[1, 0, 0]) + assert_equal(x.data, [1., 2/20., 3.]) + assert_equal(x.mask, [1, 0, 1]) + + + def test_datafriendly_pow(self): + "Test keeping data w/ (inplace) power" + # Test pow on scalar + x = array([1., 2., 3.], mask=[0, 0, 1]) + xx = x ** 2.5 + assert_equal(xx.data, [1., 2.**2.5, 3.]) + assert_equal(xx.mask, [0, 0, 1]) + # Test ipow on scalar + x **= 2.5 + assert_equal(x.data, [1., 2.**2.5, 3]) + assert_equal(x.mask, [0, 0, 1]) + + + def test_datafriendly_add_arrays(self): + a = array([[1, 1], [3, 3]]) + b = array([1, 1], mask=[0, 0]) + a += b + assert_equal(a, [[2, 2], [4, 4]]) + if a.mask is not nomask: + assert_equal(a.mask, [[0, 0], [0, 0]]) + # + a = array([[1, 1], [3, 3]]) + b = array([1, 1], mask=[0, 1]) + a += b + assert_equal(a, [[2, 2], [4, 4]]) + assert_equal(a.mask, [[0, 1], [0, 1]]) + + + def test_datafriendly_sub_arrays(self): + a = array([[1, 1], [3, 3]]) + b = array([1, 1], mask=[0, 0]) + a -= b + assert_equal(a, [[0, 0], [2, 2]]) + if a.mask is not nomask: + assert_equal(a.mask, [[0, 0], [0, 0]]) + # + a = array([[1, 1], [3, 3]]) + b = array([1, 1], mask=[0, 1]) + a -= b + assert_equal(a, [[0, 0], [2, 2]]) + assert_equal(a.mask, [[0, 1], [0, 1]]) + + + def test_datafriendly_mul_arrays(self): + a = array([[1, 1], [3, 3]]) + b = array([1, 1], mask=[0, 0]) + a *= b + assert_equal(a, [[1, 1], [3, 3]]) + if a.mask is not nomask: + assert_equal(a.mask, [[0, 0], [0, 0]]) + # + a = array([[1, 1], [3, 3]]) + b = array([1, 1], mask=[0, 1]) + a *= b + assert_equal(a, [[1, 1], [3, 3]]) + assert_equal(a.mask, [[0, 1], [0, 1]]) 
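The `test_datafriendly_*` additions above all pin down one contract: binary operations on a `MaskedArray` combine the masks but leave the payload under masked entries alone. A minimal illustration with the public `numpy.ma` API (assuming NumPy is available; this sketch is not part of the patch itself):

```python
import numpy.ma as ma

# Arithmetic ignores masked entries: the mask propagates, and the
# payload stored under a masked slot is not clobbered by the operation.
x = ma.array([1, 2, 3], mask=[0, 0, 1])
xx = x + 1
print(xx)                 # [2 3 --]

# With two masked operands, the result is masked wherever either is.
y = ma.array([10, 20, 30], mask=[1, 0, 0])
z = x + y
print(z.mask.tolist())    # [True, False, True]
```

The same rule carries over to the in-place variants (`+=`, `-=`, `*=`) exercised by the tests.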
#------------------------------------------------------------------------------ @@ -1334,7 +1746,7 @@ class TestMaskedArrayMethods(TestCase): a *= 1e-8 a[0] = 0 self.failUnless(allclose(a, 0, masked_equal=True)) - + def test_allany(self): """Checks the any/all methods/functions.""" @@ -1702,6 +2114,28 @@ class TestMaskedArrayMethods(TestCase): assert_equal(am, an) + def test_sort_flexible(self): + "Test sort on flexible dtype." + a = array([(3, 3), (3, 2), (2, 2), (2, 1), (1, 0), (1, 1), (1, 2)], + mask=[(0, 0), (0, 1), (0, 0), (0, 0), (1, 0), (0, 0), (0, 0)], + dtype=[('A', int), ('B', int)]) + # + test = sort(a) + b = array([(1, 1), (1, 2), (2, 1), (2, 2), (3, 3), (3, 2), (1, 0)], + mask=[(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 1), (1, 0)], + dtype=[('A', int), ('B', int)]) + assert_equal(test, b) + assert_equal(test.mask, b.mask) + # + test = sort(a, endwith=False) + b = array([(1, 0), (1, 1), (1, 2), (2, 1), (2, 2), (3, 2), (3, 3),], + mask=[(1, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 1), (0, 0),], + dtype=[('A', int), ('B', int)]) + assert_equal(test, b) + assert_equal(test.mask, b.mask) + # + + def test_squeeze(self): "Check squeeze" data = masked_array([[1,2,3]]) @@ -1775,15 +2209,15 @@ class TestMaskedArrayMethods(TestCase): assert_equal(x.tolist(), [(1,1.1,'one'),(2,2.2,'two'),(None,None,None)]) - def test_torecords(self): + def test_toflex(self): "Test the conversion to records" data = arange(10) - record = data.torecords() + record = data.toflex() assert_equal(record['_data'], data._data) assert_equal(record['_mask'], data._mask) # data[[0,1,2,-1]] = masked - record = data.torecords() + record = data.toflex() assert_equal(record['_data'], data._data) assert_equal(record['_mask'], data._mask) # @@ -1793,7 +2227,7 @@ class TestMaskedArrayMethods(TestCase): np.random.rand(10))], dtype=ndtype) data[[0,1,2,-1]] = masked - record = data.torecords() + record = data.toflex() assert_equal(record['_data'], data._data) assert_equal(record['_mask'], data._mask) 
# @@ -1803,9 +2237,28 @@ class TestMaskedArrayMethods(TestCase): np.random.rand(10))], dtype=ndtype) data[[0,1,2,-1]] = masked - record = data.torecords() - assert_equal(record['_data'], data._data) - assert_equal(record['_mask'], data._mask) + record = data.toflex() + assert_equal_records(record['_data'], data._data) + assert_equal_records(record['_mask'], data._mask) + + + def test_fromflex(self): + "Test the reconstruction of a masked_array from a record" + a = array([1, 2, 3]) + test = fromflex(a.toflex()) + assert_equal(test, a) + assert_equal(test.mask, a.mask) + # + a = array([1, 2, 3], mask=[0, 0, 1]) + test = fromflex(a.toflex()) + assert_equal(test, a) + assert_equal(test.mask, a.mask) + # + a = array([(1, 1.), (2, 2.), (3, 3.)], mask=[(1, 0), (0, 0), (0, 1)], + dtype=[('A', int), ('B', float)]) + test = fromflex(a.toflex()) + assert_equal(test, a) + assert_equal(test.data, a.data) #------------------------------------------------------------------------------ @@ -1970,7 +2423,7 @@ class TestMaskArrayMathMethod(TestCase): assert_equal(out, [0, 4, 8]) assert_equal(out.mask, [0, 1, 0]) out = diag(out) - control = array([[0, 0, 0], [0, 4, 0], [0, 0, 8]], + control = array([[0, 0, 0], [0, 4, 0], [0, 0, 8]], mask = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]) assert_equal(out, control) @@ -2155,8 +2608,8 @@ class TestMaskedArrayFunctions(TestCase): def test_power(self): x = -1.1 - assert_almost_equal(power(x,2.), 1.21) - self.failUnless(power(x,masked) is masked) + assert_almost_equal(power(x, 2.), 1.21) + self.failUnless(power(x, masked) is masked) x = array([-1.1,-1.1,1.1,1.1,0.]) b = array([0.5,2.,0.5,2.,-1.], mask=[0,0,0,0,1]) y = power(x,b) @@ -2423,6 +2876,12 @@ class TestMaskedArrayFunctions(TestCase): test = mask_or(mask, other) except ValueError: pass + # Using nested arrays + dtype = [('a', np.bool), ('b', [('ba', np.bool), ('bb', np.bool)])] + amask = np.array([(0, (1, 0)), (0, (1, 0))], dtype=dtype) + bmask = np.array([(1, (0, 1)), (0, (0, 0))], dtype=dtype) 
+ cntrl = np.array([(1, (1, 1)), (0, (1, 0))], dtype=dtype) + assert_equal(mask_or(amask, bmask), cntrl) def test_flatten_mask(self): @@ -2435,7 +2894,7 @@ class TestMaskedArrayFunctions(TestCase): test = flatten_mask(mask) control = np.array([0, 0, 0, 1], dtype=bool) assert_equal(test, control) - + mdtype = [('a', bool), ('b', [('ba', bool), ('bb', bool)])] data = [(0, (0, 0)), (0, (0, 1))] mask = np.array(data, dtype=mdtype) @@ -2583,7 +3042,7 @@ class TestMaskedView(TestCase): self.failUnless(isinstance(test, MaskedArray)) assert_equal(test._data, a._data) assert_equal(test._mask, a._mask) - + # def test_view_to_type(self): (data, a, controlmask) = self.data @@ -2619,7 +3078,7 @@ class TestMaskedView(TestCase): assert_equal(test.dtype.names, ('A', 'B')) assert_equal(test['A'], a['a'][-1]) assert_equal(test['B'], a['b'][-1]) - + # def test_view_to_subdtype(self): (data, a, controlmask) = self.data diff --git a/numpy/ma/tests/test_extras.py b/numpy/ma/tests/test_extras.py index 344dfce5a..5ae7bd72c 100644 --- a/numpy/ma/tests/test_extras.py +++ b/numpy/ma/tests/test_extras.py @@ -22,7 +22,7 @@ class TestGeneric(TestCase): # def test_masked_all(self): "Tests masked_all" - # Standard dtype + # Standard dtype test = masked_all((2,), dtype=float) control = array([1, 1], mask=[1, 1], dtype=float) assert_equal(test, control) @@ -53,7 +53,7 @@ class TestGeneric(TestCase): def test_masked_all_like(self): "Tests masked_all" - # Standard dtype + # Standard dtype base = array([1, 2], dtype=float) test = masked_all_like(base) control = array([1, 1], mask=[1, 1], dtype=float) @@ -338,39 +338,7 @@ class Test2DFunctions(TestCase): c = dot(b,a,False) assert_equal(c, np.dot(b.filled(0),a.filled(0))) - def test_ediff1d(self): - "Tests mediff1d" - x = masked_array(np.arange(5), mask=[1,0,0,0,1]) - difx_d = (x._data[1:]-x._data[:-1]) - difx_m = (x._mask[1:]-x._mask[:-1]) - dx = ediff1d(x) - assert_equal(dx._data, difx_d) - assert_equal(dx._mask, difx_m) - # - dx = ediff1d(x, 
to_begin=masked) - assert_equal(dx._data, np.r_[0,difx_d]) - assert_equal(dx._mask, np.r_[1,difx_m]) - dx = ediff1d(x, to_begin=[1,2,3]) - assert_equal(dx._data, np.r_[[1,2,3],difx_d]) - assert_equal(dx._mask, np.r_[[0,0,0],difx_m]) - # - dx = ediff1d(x, to_end=masked) - assert_equal(dx._data, np.r_[difx_d,0]) - assert_equal(dx._mask, np.r_[difx_m,1]) - dx = ediff1d(x, to_end=[1,2,3]) - assert_equal(dx._data, np.r_[difx_d,[1,2,3]]) - assert_equal(dx._mask, np.r_[difx_m,[0,0,0]]) - # - dx = ediff1d(x, to_end=masked, to_begin=masked) - assert_equal(dx._data, np.r_[0,difx_d,0]) - assert_equal(dx._mask, np.r_[1,difx_m,1]) - dx = ediff1d(x, to_end=[1,2,3], to_begin=masked) - assert_equal(dx._data, np.r_[0,difx_d,[1,2,3]]) - assert_equal(dx._mask, np.r_[1,difx_m,[0,0,0]]) - # - dx = ediff1d(x._data, to_end=masked, to_begin=masked) - assert_equal(dx._data, np.r_[0,difx_d,0]) - assert_equal(dx._mask, np.r_[1,0,0,0,0,1]) + class TestApplyAlongAxis(TestCase): # @@ -383,6 +351,7 @@ class TestApplyAlongAxis(TestCase): assert_equal(xa,[[1,4],[7,10]]) + class TestMedian(TestCase): # def test_2d(self): @@ -422,11 +391,12 @@ class TestMedian(TestCase): assert_equal(median(x,0), [[12,10],[8,9],[16,17]]) + class TestCov(TestCase): - # + def setUp(self): self.data = array(np.random.rand(12)) - # + def test_1d_wo_missing(self): "Test cov on 1D variable w/o missing values" x = self.data @@ -434,7 +404,7 @@ class TestCov(TestCase): assert_almost_equal(np.cov(x, rowvar=False), cov(x, rowvar=False)) assert_almost_equal(np.cov(x, rowvar=False, bias=True), cov(x, rowvar=False, bias=True)) - # + def test_2d_wo_missing(self): "Test cov on 1 2D variable w/o missing values" x = self.data.reshape(3,4) @@ -442,7 +412,7 @@ class TestCov(TestCase): assert_almost_equal(np.cov(x, rowvar=False), cov(x, rowvar=False)) assert_almost_equal(np.cov(x, rowvar=False, bias=True), cov(x, rowvar=False, bias=True)) - # + def test_1d_w_missing(self): "Test cov 1 1D variable w/missing values" x = self.data @@ 
-466,7 +436,7 @@ class TestCov(TestCase): cov(x, x[::-1], rowvar=False)) assert_almost_equal(np.cov(nx, nx[::-1], rowvar=False, bias=True), cov(x, x[::-1], rowvar=False, bias=True)) - # + def test_2d_w_missing(self): "Test cov on 2D variable w/ missing value" x = self.data @@ -486,11 +456,12 @@ class TestCov(TestCase): np.cov(xf, rowvar=False, bias=True) * x.shape[0]/frac) + class TestCorrcoef(TestCase): - # + def setUp(self): self.data = array(np.random.rand(12)) - # + def test_1d_wo_missing(self): "Test cov on 1D variable w/o missing values" x = self.data @@ -499,7 +470,7 @@ class TestCorrcoef(TestCase): corrcoef(x, rowvar=False)) assert_almost_equal(np.corrcoef(x, rowvar=False, bias=True), corrcoef(x, rowvar=False, bias=True)) - # + def test_2d_wo_missing(self): "Test corrcoef on 1 2D variable w/o missing values" x = self.data.reshape(3,4) @@ -508,7 +479,7 @@ class TestCorrcoef(TestCase): corrcoef(x, rowvar=False)) assert_almost_equal(np.corrcoef(x, rowvar=False, bias=True), corrcoef(x, rowvar=False, bias=True)) - # + def test_1d_w_missing(self): "Test corrcoef 1 1D variable w/missing values" x = self.data @@ -532,7 +503,7 @@ class TestCorrcoef(TestCase): corrcoef(x, x[::-1], rowvar=False)) assert_almost_equal(np.corrcoef(nx, nx[::-1], rowvar=False, bias=True), corrcoef(x, x[::-1], rowvar=False, bias=True)) - # + def test_2d_w_missing(self): "Test corrcoef on 2D variable w/ missing value" x = self.data @@ -575,6 +546,213 @@ class TestPolynomial(TestCase): assert_almost_equal(a, a_) + +class TestArraySetOps(TestCase): + # + def test_unique1d_onlist(self): + "Test unique1d on list" + data = [1, 1, 1, 2, 2, 3] + test = unique1d(data, return_index=True, return_inverse=True) + self.failUnless(isinstance(test[0], MaskedArray)) + assert_equal(test[0], masked_array([1, 2, 3], mask=[0, 0, 0])) + assert_equal(test[1], [0, 3, 5]) + assert_equal(test[2], [0, 0, 0, 1, 1, 2]) + + def test_unique1d_onmaskedarray(self): + "Test unique1d on masked data w/use_mask=True" + data = 
masked_array([1, 1, 1, 2, 2, 3], mask=[0, 0, 1, 0, 1, 0]) + test = unique1d(data, return_index=True, return_inverse=True) + assert_equal(test[0], masked_array([1, 2, 3, -1], mask=[0, 0, 0, 1])) + assert_equal(test[1], [0, 3, 5, 2]) + assert_equal(test[2], [0, 0, 3, 1, 3, 2]) + # + data.fill_value = 3 + data = masked_array([1, 1, 1, 2, 2, 3], + mask=[0, 0, 1, 0, 1, 0], fill_value=3) + test = unique1d(data, return_index=True, return_inverse=True) + assert_equal(test[0], masked_array([1, 2, 3, -1], mask=[0, 0, 0, 1])) + assert_equal(test[1], [0, 3, 5, 2]) + assert_equal(test[2], [0, 0, 3, 1, 3, 2]) + + def test_unique1d_allmasked(self): + "Test all masked" + data = masked_array([1, 1, 1], mask=True) + test = unique1d(data, return_index=True, return_inverse=True) + assert_equal(test[0], masked_array([1,], mask=[True])) + assert_equal(test[1], [0]) + assert_equal(test[2], [0, 0, 0]) + # + "Test masked" + data = masked + test = unique1d(data, return_index=True, return_inverse=True) + assert_equal(test[0], masked_array(masked)) + assert_equal(test[1], [0]) + assert_equal(test[2], [0]) + + def test_ediff1d(self): + "Tests mediff1d" + x = masked_array(np.arange(5), mask=[1,0,0,0,1]) + control = array([1, 1, 1, 4], mask=[1, 0, 0, 1]) + test = ediff1d(x) + assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + # + def test_ediff1d_tobegin(self): + "Test ediff1d w/ to_begin" + x = masked_array(np.arange(5), mask=[1,0,0,0,1]) + test = ediff1d(x, to_begin=masked) + control = array([0, 1, 1, 1, 4], mask=[1, 1, 0, 0, 1]) + assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + # + test = ediff1d(x, to_begin=[1,2,3]) + control = array([1, 2, 3, 1, 1, 1, 4], mask=[0, 0, 0, 1, 0, 0, 1]) + assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + # + def test_ediff1d_toend(self): + "Test ediff1d w/ to_end" + x = 
masked_array(np.arange(5), mask=[1,0,0,0,1]) + test = ediff1d(x, to_end=masked) + control = array([1, 1, 1, 4, 0], mask=[1, 0, 0, 1, 1]) + assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + # + test = ediff1d(x, to_end=[1,2,3]) + control = array([1, 1, 1, 4, 1, 2, 3], mask=[1, 0, 0, 1, 0, 0, 0]) + assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + # + def test_ediff1d_tobegin_toend(self): + "Test ediff1d w/ to_begin and to_end" + x = masked_array(np.arange(5), mask=[1,0,0,0,1]) + test = ediff1d(x, to_end=masked, to_begin=masked) + control = array([0, 1, 1, 1, 4, 0], mask=[1, 1, 0, 0, 1, 1]) + assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + # + test = ediff1d(x, to_end=[1,2,3], to_begin=masked) + control = array([0, 1, 1, 1, 4, 1, 2, 3], mask=[1, 1, 0, 0, 1, 0, 0, 0]) + assert_equal(test, control) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + # + def test_ediff1d_ndarray(self): + "Test ediff1d w/ a ndarray" + x = np.arange(5) + test = ediff1d(x) + control = array([1, 1, 1, 1], mask=[0, 0, 0, 0]) + assert_equal(test, control) + self.failUnless(isinstance(test, MaskedArray)) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + # + test = ediff1d(x, to_end=masked, to_begin=masked) + control = array([0, 1, 1, 1, 1, 0], mask=[1, 0, 0, 0, 0, 1]) + self.failUnless(isinstance(test, MaskedArray)) + assert_equal(test.data, control.data) + assert_equal(test.mask, control.mask) + + + def test_intersect1d(self): + "Test intersect1d" + x = array([1, 3, 3, 3], mask=[0, 0, 0, 1]) + y = array([3, 1, 1, 1], mask=[0, 0, 0, 1]) + test = intersect1d(x, y) + control = array([1, 1, 3, 3, -1], mask=[0, 0, 0, 0, 1]) + assert_equal(test, control) + + + def test_intersect1d_nu(self): + "Test intersect1d_nu" + x = array([1, 3, 3, 3], mask=[0, 0, 0, 1]) 
+ y = array([3, 1, 1, 1], mask=[0, 0, 0, 1]) + test = intersect1d_nu(x, y) + control = array([1, 3, -1], mask=[0, 0, 1]) + assert_equal(test, control) + + + def test_setxor1d(self): + "Test setxor1d" + a = array([1, 2, 5, 7, -1], mask=[0, 0, 0, 0, 1]) + b = array([1, 2, 3, 4, 5, -1], mask=[0, 0, 0, 0, 0, -1]) + test = setxor1d(a, b) + assert_equal(test, array([3, 4, 7])) + # + a = array([1, 2, 5, 7, -1], mask=[0, 0, 0, 0, 1]) + b = [1, 2, 3, 4, 5] + test = setxor1d(a, b) + assert_equal(test, array([3, 4, 7, -1], mask=[0, 0, 0, 1])) + # + a = array( [1, 2, 3] ) + b = array( [6, 5, 4] ) + test = setxor1d(a, b) + assert(isinstance(test, MaskedArray)) + assert_equal(test, [1, 2, 3, 4, 5, 6]) + # + a = array([1, 8, 2, 3], mask=[0, 1, 0, 0]) + b = array([6, 5, 4, 8], mask=[0, 0, 0, 1]) + test = setxor1d(a, b) + assert(isinstance(test, MaskedArray)) + assert_equal(test, [1, 2, 3, 4, 5, 6]) + # + assert_array_equal([], setxor1d([],[])) + + + def test_setmember1d( self ): + "Test setmember1d" + a = array([1, 2, 5, 7, -1], mask=[0, 0, 0, 0, 1]) + b = array([1, 2, 3, 4, 5, -1], mask=[0, 0, 0, 0, 0, -1]) + test = setmember1d(a, b) + assert_equal(test, [True, True, True, False, True]) + # + assert_array_equal([], setmember1d([],[])) + + + def test_union1d( self ): + "Test union1d" + a = array([1, 2, 5, 7, -1], mask=[0, 0, 0, 0, 1]) + b = array([1, 2, 3, 4, 5, -1], mask=[0, 0, 0, 0, 0, -1]) + test = union1d(a, b) + control = array([1, 2, 3, 4, 5, 7, -1], mask=[0, 0, 0, 0, 0, 0, 1]) + assert_equal(test, control) + # + assert_array_equal([], setmember1d([],[])) + + + def test_setdiff1d( self ): + "Test setdiff1d" + a = array([6, 5, 4, 7, 1, 2, 1], mask=[0, 0, 0, 0, 0, 0, 1]) + b = array([2, 4, 3, 3, 2, 1, 5]) + test = setdiff1d(a, b) + assert_equal(test, array([6, 7, -1], mask=[0, 0, 1])) + # + a = arange(10) + b = arange(8) + assert_equal(setdiff1d(a, b), array([8, 9])) + + + def test_setdiff1d_char_array(self): + "Test setdiff1d_charray" + a = np.array(['a','b','c']) + b = 
np.array(['a','b','s']) + assert_array_equal(setdiff1d(a,b), np.array(['c'])) + + + + +class TestShapeBase(TestCase): + # + def test_atleast1d(self): + pass + + ############################################################################### #------------------------------------------------------------------------------ if __name__ == "__main__": diff --git a/numpy/ma/tests/test_mrecords.py b/numpy/ma/tests/test_mrecords.py index 6e9248953..769c5330e 100644 --- a/numpy/ma/tests/test_mrecords.py +++ b/numpy/ma/tests/test_mrecords.py @@ -334,8 +334,8 @@ class TestMRecords(TestCase): mult[0] = masked mult[1] = (1, 1, 1) mult.filled(0) - assert_equal(mult.filled(0), - np.array([(0,0,0),(1,1,1)], dtype=mult.dtype)) + assert_equal_records(mult.filled(0), + np.array([(0,0,0),(1,1,1)], dtype=mult.dtype)) class TestView(TestCase): diff --git a/numpy/ma/tests/test_subclassing.py b/numpy/ma/tests/test_subclassing.py index 939b8ee56..5943ad6c1 100644 --- a/numpy/ma/tests/test_subclassing.py +++ b/numpy/ma/tests/test_subclassing.py @@ -153,5 +153,3 @@ class TestSubclassing(TestCase): ################################################################################ if __name__ == '__main__': run_module_suite() - - diff --git a/numpy/ma/testutils.py b/numpy/ma/testutils.py index 28754bccc..5234e0db4 100644 --- a/numpy/ma/testutils.py +++ b/numpy/ma/testutils.py @@ -110,14 +110,14 @@ def assert_equal(actual,desired,err_msg=''): return _assert_equal_on_sequences(actual.tolist(), desired.tolist(), err_msg='') - elif actual_dtype.char in "OV" and desired_dtype.char in "OV": - if (actual_dtype != desired_dtype) and actual_dtype: - msg = build_err_msg([actual_dtype, desired_dtype], - err_msg, header='', names=('actual', 'desired')) - raise ValueError(msg) - return _assert_equal_on_sequences(actual.tolist(), - desired.tolist(), - err_msg='') +# elif actual_dtype.char in "OV" and desired_dtype.char in "OV": +# if (actual_dtype != desired_dtype) and actual_dtype: +# msg = 
build_err_msg([actual_dtype, desired_dtype], +# err_msg, header='', names=('actual', 'desired')) +# raise ValueError(msg) +# return _assert_equal_on_sequences(actual.tolist(), +# desired.tolist(), +# err_msg='') return assert_array_equal(actual, desired, err_msg) @@ -167,12 +167,12 @@ def assert_array_compare(comparison, x, y, err_msg='', verbose=True, header='', """Asserts that a comparison relation between two masked arrays is satisfied elementwise.""" # Fill the data first - xf = filled(x) - yf = filled(y) +# xf = filled(x) +# yf = filled(y) # Allocate a common mask and refill m = mask_or(getmask(x), getmask(y)) - x = masked_array(xf, copy=False, mask=m) - y = masked_array(yf, copy=False, mask=m) + x = masked_array(x, copy=False, mask=m, keep_mask=False, subok=False) + y = masked_array(y, copy=False, mask=m, keep_mask=False, subok=False) if ((x is masked) and not (y is masked)) or \ ((y is masked) and not (x is masked)): msg = build_err_msg([x, y], err_msg=err_msg, verbose=verbose, @@ -180,14 +180,16 @@ def assert_array_compare(comparison, x, y, err_msg='', verbose=True, header='', raise ValueError(msg) # OK, now run the basic tests on filled versions return utils.assert_array_compare(comparison, - x.filled(fill_value), y.filled(fill_value), - err_msg=err_msg, - verbose=verbose, header=header) + x.filled(fill_value), + y.filled(fill_value), + err_msg=err_msg, + verbose=verbose, header=header) def assert_array_equal(x, y, err_msg='', verbose=True): """Checks the elementwise equality of two masked arrays.""" - assert_array_compare(equal, x, y, err_msg=err_msg, verbose=verbose, + assert_array_compare(operator.__eq__, x, y, + err_msg=err_msg, verbose=verbose, header='Arrays are not equal') @@ -221,7 +223,8 @@ def assert_array_almost_equal(x, y, decimal=6, err_msg='', verbose=True): def assert_array_less(x, y, err_msg='', verbose=True): "Checks that x is smaller than y elementwise." 
- assert_array_compare(less, x, y, err_msg=err_msg, verbose=verbose, + assert_array_compare(operator.__lt__, x, y, + err_msg=err_msg, verbose=verbose, header='Arrays are not less-ordered') diff --git a/numpy/numarray/util.py b/numpy/numarray/util.py index 01002f194..3f0fc20d0 100644 --- a/numpy/numarray/util.py +++ b/numpy/numarray/util.py @@ -1,7 +1,7 @@ import os import numpy -__all__ = ['MathDomainError', 'UnderflowError', 'NumOverflowError', +__all__ = ['MathDomainError', 'UnderflowError', 'NumOverflowError', 'handleError', 'get_numarray_include_dirs'] class MathDomainError(ArithmeticError): pass diff --git a/numpy/oldnumeric/arrayfns.py b/numpy/oldnumeric/arrayfns.py index dbb910770..230b200a9 100644 --- a/numpy/oldnumeric/arrayfns.py +++ b/numpy/oldnumeric/arrayfns.py @@ -1,8 +1,8 @@ """Backward compatible with arrayfns from Numeric """ -__all__ = ['array_set', 'construct3', 'digitize', 'error', 'find_mask', - 'histogram', 'index_sort', 'interp', 'nz', 'reverse', 'span', +__all__ = ['array_set', 'construct3', 'digitize', 'error', 'find_mask', + 'histogram', 'index_sort', 'interp', 'nz', 'reverse', 'span', 'to_corners', 'zmin_zmax'] import numpy as np diff --git a/numpy/oldnumeric/mlab.py b/numpy/oldnumeric/mlab.py index c11e34c1f..151649f6a 100644 --- a/numpy/oldnumeric/mlab.py +++ b/numpy/oldnumeric/mlab.py @@ -1,7 +1,7 @@ # This module is for compatibility only. All functions are defined elsewhere. 
__all__ = ['rand', 'tril', 'trapz', 'hanning', 'rot90', 'triu', 'diff', 'angle', - 'roots', 'ptp', 'kaiser', 'randn', 'cumprod', 'diag', 'msort', + 'roots', 'ptp', 'kaiser', 'randn', 'cumprod', 'diag', 'msort', 'LinearAlgebra', 'RandomArray', 'prod', 'std', 'hamming', 'flipud', 'max', 'blackman', 'corrcoef', 'bartlett', 'eye', 'squeeze', 'sinc', 'tri', 'cov', 'svd', 'min', 'median', 'fliplr', 'eig', 'mean'] diff --git a/numpy/oldnumeric/rng.py b/numpy/oldnumeric/rng.py index b4c72a68c..28d3f16df 100644 --- a/numpy/oldnumeric/rng.py +++ b/numpy/oldnumeric/rng.py @@ -5,7 +5,7 @@ __all__ = ['CreateGenerator','ExponentialDistribution','LogNormalDistribution', - 'NormalDistribution', 'UniformDistribution', 'error', 'ranf', + 'NormalDistribution', 'UniformDistribution', 'error', 'ranf', 'default_distribution', 'random_sample', 'standard_generator'] import numpy.random.mtrand as mt diff --git a/numpy/testing/__init__.py b/numpy/testing/__init__.py index 53941fd76..f391c8053 100644 --- a/numpy/testing/__init__.py +++ b/numpy/testing/__init__.py @@ -5,12 +5,10 @@ in a single location, so that test scripts can just import it and work right away. """ -#import unittest from unittest import TestCase import decorators as dec from utils import * -from parametric import ParametricTestCase from numpytest import * from nosetester import NoseTester as Tester from nosetester import run_module_suite diff --git a/numpy/testing/decorators.py b/numpy/testing/decorators.py index 5d8f863d2..70301f250 100644 --- a/numpy/testing/decorators.py +++ b/numpy/testing/decorators.py @@ -51,8 +51,11 @@ def skipif(skip_condition, msg=None): Parameters --------- - skip_condition : bool - Flag to determine whether to skip test (True) or not (False) + skip_condition : bool or callable. + Flag to determine whether to skip test. If the condition is a + callable, it is used at runtime to dynamically make the decision. 
This + is useful for tests that may require costly imports, to delay the cost + until the test suite is actually executed. msg : string Message to give on raising a SkipTest exception @@ -69,28 +72,66 @@ def skipif(skip_condition, msg=None): decorator with the nose.tools.make_decorator function in order to transmit function name, and various other metadata. ''' - if msg is None: - msg = 'Test skipped due to test condition' + def skip_decorator(f): # Local import to avoid a hard nose dependency and only incur the # import time overhead at actual test-time. import nose - def skipper(*args, **kwargs): - if skip_condition: - raise nose.SkipTest, msg + + # Allow for both boolean or callable skip conditions. + if callable(skip_condition): + skip_val = lambda : skip_condition() + else: + skip_val = lambda : skip_condition + + def get_msg(func,msg=None): + """Skip message with information about function being skipped.""" + if msg is None: + out = 'Test skipped due to test condition' + else: + out = '\n'+msg + + return "Skipping test: %s%s" % (func.__name__,out) + + # We need to define *two* skippers because Python doesn't allow both + # return with value and yield inside the same function. + def skipper_func(*args, **kwargs): + """Skipper for normal test functions.""" + if skip_val(): + raise nose.SkipTest(get_msg(f,msg)) else: return f(*args, **kwargs) + + def skipper_gen(*args, **kwargs): + """Skipper for test generators.""" + if skip_val(): + raise nose.SkipTest(get_msg(f,msg)) + else: + for x in f(*args, **kwargs): + yield x + + # Choose the right skipper to use when building the actual decorator. 
+ if nose.util.isgenerator(f): + skipper = skipper_gen + else: + skipper = skipper_func + return nose.tools.make_decorator(f)(skipper) + return skip_decorator -def knownfailureif(skip_condition, msg=None): - ''' Make function raise KnownFailureTest exception if skip_condition is true + +def knownfailureif(fail_condition, msg=None): + ''' Make function raise KnownFailureTest exception if fail_condition is true Parameters --------- - skip_condition : bool - Flag to determine whether to mark test as known failure (True) - or not (False) + fail_condition : bool or callable. + Flag to determine whether to mark test as known failure (True) + or not (False). If the condition is a callable, it is used at + runtime to dynamically make the decision. This is useful for + tests that may require costly imports, to delay the cost + until the test suite is actually executed. msg : string Message to give on raising a KnownFailureTest exception @@ -109,15 +150,23 @@ def knownfailureif(skip_condition, msg=None): ''' if msg is None: msg = 'Test skipped due to known failure' - def skip_decorator(f): + + # Allow for both boolean or callable known failure conditions. + if callable(fail_condition): + fail_val = lambda : fail_condition() + else: + fail_val = lambda : fail_condition + + def knownfail_decorator(f): # Local import to avoid a hard nose dependency and only incur the # import time overhead at actual test-time. 
import nose from noseclasses import KnownFailureTest - def skipper(*args, **kwargs): - if skip_condition: + def knownfailer(*args, **kwargs): + if fail_val(): raise KnownFailureTest, msg else: return f(*args, **kwargs) - return nose.tools.make_decorator(f)(skipper) - return skip_decorator + return nose.tools.make_decorator(f)(knownfailer) + + return knownfail_decorator diff --git a/numpy/testing/noseclasses.py b/numpy/testing/noseclasses.py index d838a5f84..05d244083 100644 --- a/numpy/testing/noseclasses.py +++ b/numpy/testing/noseclasses.py @@ -1,4 +1,6 @@ -# These classes implement a doctest runner plugin for nose. +# These classes implement a doctest runner plugin for nose, a "known failure" +# error class, and a customized TestProgram for NumPy. + # Because this module imports nose directly, it should not # be used except by nosetester.py to avoid a general NumPy # dependency on nose. @@ -6,6 +8,7 @@ import os import doctest +import nose from nose.plugins import doctests as npd from nose.plugins.errorclass import ErrorClass, ErrorClassPlugin from nose.plugins.base import Plugin @@ -251,7 +254,7 @@ class KnownFailureTest(Exception): class KnownFailure(ErrorClassPlugin): - '''Plugin that installs a KNOWNFAIL error class for the + '''Plugin that installs a KNOWNFAIL error class for the KnownFailureClass exception. When KnownFailureTest is raised, the exception will be logged in the knownfail attribute of the result, 'K' or 'KNOWNFAIL' (verbose) will be output, and the @@ -275,3 +278,25 @@ class KnownFailure(ErrorClassPlugin): disable = getattr(options, 'noKnownFail', False) if disable: self.enabled = False + + + +# Because nose currently discards the test result object, but we need +# to return it to the user, override TestProgram.runTests to retain +# the result +class NumpyTestProgram(nose.core.TestProgram): + def runTests(self): + """Run Tests. Returns true on success, false on failure, and + sets self.success to the same value. 
+ """ + if self.testRunner is None: + self.testRunner = nose.core.TextTestRunner(stream=self.config.stream, + verbosity=self.config.verbosity, + config=self.config) + plug_runner = self.config.plugins.prepareTestRunner(self.testRunner) + if plug_runner is not None: + self.testRunner = plug_runner + + self.result = self.testRunner.run(self.test) + self.success = self.result.wasSuccessful() + return self.success diff --git a/numpy/testing/nosetester.py b/numpy/testing/nosetester.py index bd4c78308..7a10a5b1f 100644 --- a/numpy/testing/nosetester.py +++ b/numpy/testing/nosetester.py @@ -5,7 +5,6 @@ Implements test and bench functions for modules. ''' import os import sys -import warnings def get_package_name(filepath): # find the package name given a path name that's part of the package @@ -28,7 +27,6 @@ def get_package_name(filepath): pkg_name.reverse() return '.'.join(pkg_name) - def import_nose(): """ Import nose only when needed. """ @@ -166,8 +164,8 @@ class NoseTester(object): print "nose version %d.%d.%d" % nose.__versioninfo__ - def test(self, label='fast', verbose=1, extra_argv=None, doctests=False, - coverage=False, **kwargs): + def prepare_test_args(self, label='fast', verbose=1, extra_argv=None, + doctests=False, coverage=False): ''' Run tests for module using nose %(test_header)s @@ -179,39 +177,6 @@ class NoseTester(object): http://nedbatchelder.com/code/modules/coverage.html) ''' - old_args = set(['level', 'verbosity', 'all', 'sys_argv', - 'testcase_pattern']) - unexpected_args = set(kwargs.keys()) - old_args - if len(unexpected_args) > 0: - ua = ', '.join(unexpected_args) - raise TypeError("test() got unexpected arguments: %s" % ua) - - # issue a deprecation warning if any of the pre-1.2 arguments to - # test are given - if old_args.intersection(kwargs.keys()): - warnings.warn("This method's signature will change in the next " \ - "release; the level, verbosity, all, sys_argv, " \ - "and testcase_pattern keyword arguments will be " \ - "removed. 
Please update your code.", - DeprecationWarning, stacklevel=2) - - # Use old arguments if given (where it makes sense) - # For the moment, level and sys_argv are ignored - - # replace verbose with verbosity - if kwargs.get('verbosity') is not None: - verbose = kwargs.get('verbosity') - # cap verbosity at 3 because nose becomes *very* verbose beyond that - verbose = min(verbose, 3) - - import utils - utils.verbose = verbose - - # if all evaluates as True, omit attribute filter and run doctests - if kwargs.get('all'): - label = '' - doctests = True - # if doctests is in the extra args, remove it and set the doctest # flag so the NumPy doctester is used instead if extra_argv and '--with-doctest' in extra_argv: @@ -221,9 +186,6 @@ class NoseTester(object): argv = self._test_argv(label, verbose, extra_argv) if doctests: argv += ['--with-numpydoctest'] - print "Running unit tests and doctests for %s" % self.package_name - else: - print "Running unit tests for %s" % self.package_name if coverage: argv+=['--cover-package=%s' % self.package_name, '--with-coverage', @@ -237,33 +199,8 @@ class NoseTester(object): argv += ['--exclude','swig_ext'] argv += ['--exclude','array_from_pyobj'] - self._show_system_info() - nose = import_nose() - # Because nose currently discards the test result object, but we need - # to return it to the user, override TestProgram.runTests to retain - # the result - class NumpyTestProgram(nose.core.TestProgram): - def runTests(self): - """Run Tests. Returns true on success, false on failure, and - sets self.success to the same value. 
- """ - if self.testRunner is None: - self.testRunner = nose.core.TextTestRunner(stream=self.config.stream, - verbosity=self.config.verbosity, - config=self.config) - plug_runner = self.config.plugins.prepareTestRunner(self.testRunner) - if plug_runner is not None: - self.testRunner = plug_runner - self.result = self.testRunner.run(self.test) - self.success = self.result.wasSuccessful() - return self.success - - # reset doctest state on every run - import doctest - doctest.master = None - # construct list of plugins, omitting the existing doctest plugin import nose.plugins.builtin from noseclasses import NumpyDoctest, KnownFailure @@ -271,10 +208,46 @@ class NoseTester(object): for p in nose.plugins.builtin.plugins: plug = p() if plug.name == 'doctest': + # skip the builtin doctest plugin continue plugins.append(plug) + return argv, plugins + + def test(self, label='fast', verbose=1, extra_argv=None, doctests=False, + coverage=False): + ''' Run tests for module using nose + + %(test_header)s + doctests : boolean + If True, run doctests in module, default False + coverage : boolean + If True, report coverage of NumPy code, default False + (Requires the coverage module: + http://nedbatchelder.com/code/modules/coverage.html) + ''' + + # cap verbosity at 3 because nose becomes *very* verbose beyond that + verbose = min(verbose, 3) + + import utils + utils.verbose = verbose + + if doctests: + print "Running unit tests and doctests for %s" % self.package_name + else: + print "Running unit tests for %s" % self.package_name + + self._show_system_info() + + # reset doctest state on every run + import doctest + doctest.master = None + + argv, plugins = self.prepare_test_args(label, verbose, extra_argv, + doctests, coverage) + from noseclasses import NumpyTestProgram t = NumpyTestProgram(argv=argv, exit=False, plugins=plugins) return t.result @@ -286,9 +259,10 @@ class NoseTester(object): print "Running benchmarks for %s" % self.package_name self._show_system_info() - nose = 
import_nose() argv = self._test_argv(label, verbose, extra_argv) argv += ['--match', r'(?:^|[\\b_\\.%s-])[Bb]ench' % os.sep] + + nose = import_nose() return nose.run(argv=argv) # generate method docstrings diff --git a/numpy/testing/numpytest.py b/numpy/testing/numpytest.py index c08215383..5ef2cc7f5 100644 --- a/numpy/testing/numpytest.py +++ b/numpy/testing/numpytest.py @@ -1,91 +1,16 @@ import os -import re import sys -import imp -import types -import unittest import traceback -import warnings -__all__ = ['set_package_path', 'set_local_path', 'restore_path', - 'IgnoreException', 'NumpyTestCase', 'NumpyTest', 'importall',] +__all__ = ['IgnoreException', 'importall',] DEBUG=0 -from numpy.testing.utils import jiffies get_frame = sys._getframe class IgnoreException(Exception): "Ignoring this exception due to disabled feature" -def set_package_path(level=1): - """ Prepend package directory to sys.path. - - set_package_path should be called from a test_file.py that - satisfies the following tree structure: - - <somepath>/<somedir>/test_file.py - - Then the first existing path name from the following list - - <somepath>/build/lib.<platform>-<version> - <somepath>/.. - - is prepended to sys.path. - The caller is responsible for removing this path by using - - restore_path() - """ - warnings.warn("set_package_path will be removed in NumPy 1.3; please " - "update your code", DeprecationWarning, stacklevel=2) - - from distutils.util import get_platform - f = get_frame(level) - if f.f_locals['__name__']=='__main__': - testfile = sys.argv[0] - else: - testfile = f.f_locals['__file__'] - d = os.path.dirname(os.path.dirname(os.path.abspath(testfile))) - d1 = os.path.join(d,'build','lib.%s-%s'%(get_platform(),sys.version[:3])) - if not os.path.isdir(d1): - d1 = os.path.dirname(d) - if DEBUG: - print 'Inserting %r to sys.path for test_file %r' % (d1, testfile) - sys.path.insert(0,d1) - return - - -def set_local_path(reldir='', level=1): - """ Prepend local directory to sys.path. 
- - The caller is responsible for removing this path by using - - restore_path() - """ - warnings.warn("set_local_path will be removed in NumPy 1.3; please " - "update your code", DeprecationWarning, stacklevel=2) - - f = get_frame(level) - if f.f_locals['__name__']=='__main__': - testfile = sys.argv[0] - else: - testfile = f.f_locals['__file__'] - local_path = os.path.normpath(os.path.join(os.path.dirname(os.path.abspath(testfile)),reldir)) - if DEBUG: - print 'Inserting %r to sys.path' % (local_path) - sys.path.insert(0,local_path) - return - -def restore_path(): - warnings.warn("restore_path will be removed in NumPy 1.3; please " - "update your code", DeprecationWarning, stacklevel=2) - - if DEBUG: - print 'Removing %r from sys.path' % (sys.path[0]) - del sys.path[0] - return - - def output_exception(printstream = sys.stdout): try: type, value, tb = sys.exc_info() @@ -99,576 +24,6 @@ def output_exception(printstream = sys.stdout): type = value = tb = None # clean up return - -class _dummy_stream: - def __init__(self,stream): - self.data = [] - self.stream = stream - def write(self,message): - if not self.data and not message.startswith('E'): - self.stream.write(message) - self.stream.flush() - message = '' - self.data.append(message) - def writeln(self,message): - self.write(message+'\n') - def flush(self): - self.stream.flush() - - -class NumpyTestCase (unittest.TestCase): - def __init__(self, *args, **kwds): - warnings.warn("NumpyTestCase will be removed in the next release; please update your code to use nose or unittest", - DeprecationWarning, stacklevel=2) - unittest.TestCase.__init__(self, *args, **kwds) - - def measure(self,code_str,times=1): - """ Return elapsed time for executing code_str in the - namespace of the caller for given times. 
- """ - frame = get_frame(1) - locs,globs = frame.f_locals,frame.f_globals - code = compile(code_str, - 'NumpyTestCase runner for '+self.__class__.__name__, - 'exec') - i = 0 - elapsed = jiffies() - while i<times: - i += 1 - exec code in globs,locs - elapsed = jiffies() - elapsed - return 0.01*elapsed - - def __call__(self, result=None): - if result is None or not hasattr(result, 'errors') \ - or not hasattr(result, 'stream'): - return unittest.TestCase.__call__(self, result) - - nof_errors = len(result.errors) - save_stream = result.stream - result.stream = _dummy_stream(save_stream) - unittest.TestCase.__call__(self, result) - if nof_errors != len(result.errors): - test, errstr = result.errors[-1][:2] - if isinstance(errstr, tuple): - errstr = str(errstr[0]) - elif isinstance(errstr, str): - errstr = errstr.split('\n')[-2] - else: - # allow for proxy classes - errstr = str(errstr).split('\n')[-2] - l = len(result.stream.data) - if errstr.startswith('IgnoreException:'): - if l==1: - assert result.stream.data[-1]=='E', \ - repr(result.stream.data) - result.stream.data[-1] = 'i' - else: - assert result.stream.data[-1]=='ERROR\n', \ - repr(result.stream.data) - result.stream.data[-1] = 'ignoring\n' - del result.errors[-1] - map(save_stream.write, result.stream.data) - save_stream.flush() - result.stream = save_stream - - def warn(self, message): - from numpy.distutils.misc_util import yellow_text - print>>sys.stderr,yellow_text('Warning: %s' % (message)) - sys.stderr.flush() - def info(self, message): - print>>sys.stdout, message - sys.stdout.flush() - - def rundocs(self, filename=None): - """ Run doc string tests found in filename. 
- """ - import doctest - if filename is None: - f = get_frame(1) - filename = f.f_globals['__file__'] - name = os.path.splitext(os.path.basename(filename))[0] - path = [os.path.dirname(filename)] - file, pathname, description = imp.find_module(name, path) - try: - m = imp.load_module(name, file, pathname, description) - finally: - file.close() - if sys.version[:3]<'2.4': - doctest.testmod(m, verbose=False) - else: - tests = doctest.DocTestFinder().find(m) - runner = doctest.DocTestRunner(verbose=False) - for test in tests: - runner.run(test) - return - - -def _get_all_method_names(cls): - names = dir(cls) - if sys.version[:3]<='2.1': - for b in cls.__bases__: - for n in dir(b)+_get_all_method_names(b): - if n not in names: - names.append(n) - return names - - -# for debug build--check for memory leaks during the test. -class _NumPyTextTestResult(unittest._TextTestResult): - def startTest(self, test): - unittest._TextTestResult.startTest(self, test) - if self.showAll: - N = len(sys.getobjects(0)) - self._totnumobj = N - self._totrefcnt = sys.gettotalrefcount() - return - - def stopTest(self, test): - if self.showAll: - N = len(sys.getobjects(0)) - self.stream.write("objects: %d ===> %d; " % (self._totnumobj, N)) - self.stream.write("refcnts: %d ===> %d\n" % (self._totrefcnt, - sys.gettotalrefcount())) - return - -class NumPyTextTestRunner(unittest.TextTestRunner): - def _makeResult(self): - return _NumPyTextTestResult(self.stream, self.descriptions, self.verbosity) - - -class NumpyTest: - """ Numpy tests site manager. - - Usage: NumpyTest(<package>).test(level=1,verbosity=1) - - <package> is package name or its module object. - - Package is supposed to contain a directory tests/ with test_*.py - files where * refers to the names of submodules. See .rename() - method to redefine name mapping between test_*.py files and names of - submodules. Pattern test_*.py can be overwritten by redefining - .get_testfile() method. 
- - test_*.py files are supposed to define a classes, derived from - NumpyTestCase or unittest.TestCase, with methods having names - starting with test or bench or check. The names of TestCase classes - must have a prefix test. This can be overwritten by redefining - .check_testcase_name() method. - - And that is it! No need to implement test or test_suite functions - in each .py file. - - Old-style test_suite(level=1) hooks are also supported. - """ - _check_testcase_name = re.compile(r'test.*|Test.*').match - def check_testcase_name(self, name): - """ Return True if name matches TestCase class. - """ - return not not self._check_testcase_name(name) - - testfile_patterns = ['test_%(modulename)s.py'] - def get_testfile(self, module, verbosity = 0): - """ Return path to module test file. - """ - mstr = self._module_str - short_module_name = self._get_short_module_name(module) - d = os.path.split(module.__file__)[0] - test_dir = os.path.join(d,'tests') - local_test_dir = os.path.join(os.getcwd(),'tests') - if os.path.basename(os.path.dirname(local_test_dir)) \ - == os.path.basename(os.path.dirname(test_dir)): - test_dir = local_test_dir - for pat in self.testfile_patterns: - fn = os.path.join(test_dir, pat % {'modulename':short_module_name}) - if os.path.isfile(fn): - return fn - if verbosity>1: - self.warn('No test file found in %s for module %s' \ - % (test_dir, mstr(module))) - return - - def __init__(self, package=None): - warnings.warn("NumpyTest will be removed in the next release; please update your code to use nose or unittest", - DeprecationWarning, stacklevel=2) - if package is None: - from numpy.distutils.misc_util import get_frame - f = get_frame(1) - package = f.f_locals.get('__name__',f.f_globals.get('__name__',None)) - assert package is not None - self.package = package - self._rename_map = {} - - def rename(self, **kws): - """Apply renaming submodule test file test_<name>.py to - test_<newname>.py. 
- - Usage: self.rename(name='newname') before calling the - self.test() method. - - If 'newname' is None, then no tests will be executed for a given - module. - """ - for k,v in kws.items(): - self._rename_map[k] = v - return - - def _module_str(self, module): - filename = module.__file__[-30:] - if filename!=module.__file__: - filename = '...'+filename - return '<module %r from %r>' % (module.__name__, filename) - - def _get_method_names(self,clsobj,level): - names = [] - for mthname in _get_all_method_names(clsobj): - if mthname[:5] not in ['bench','check'] \ - and mthname[:4] not in ['test']: - continue - mth = getattr(clsobj, mthname) - if type(mth) is not types.MethodType: - continue - d = mth.im_func.func_defaults - if d is not None: - mthlevel = d[0] - else: - mthlevel = 1 - if level>=mthlevel: - if mthname not in names: - names.append(mthname) - for base in clsobj.__bases__: - for n in self._get_method_names(base,level): - if n not in names: - names.append(n) - return names - - def _get_short_module_name(self, module): - d,f = os.path.split(module.__file__) - short_module_name = os.path.splitext(os.path.basename(f))[0] - if short_module_name=='__init__': - short_module_name = module.__name__.split('.')[-1] - short_module_name = self._rename_map.get(short_module_name,short_module_name) - return short_module_name - - def _get_module_tests(self, module, level, verbosity): - mstr = self._module_str - - short_module_name = self._get_short_module_name(module) - if short_module_name is None: - return [] - - test_file = self.get_testfile(module, verbosity) - - if test_file is None: - return [] - - if not os.path.isfile(test_file): - if short_module_name[:5]=='info_' \ - and short_module_name[5:]==module.__name__.split('.')[-2]: - return [] - if short_module_name in ['__cvs_version__','__svn_version__']: - return [] - if short_module_name[-8:]=='_version' \ - and short_module_name[:-8]==module.__name__.split('.')[-2]: - return [] - if verbosity>1: - 
self.warn(test_file) - self.warn(' !! No test file %r found for %s' \ - % (os.path.basename(test_file), mstr(module))) - return [] - - if test_file in self.test_files: - return [] - - parent_module_name = '.'.join(module.__name__.split('.')[:-1]) - test_module_name,ext = os.path.splitext(os.path.basename(test_file)) - test_dir_module = parent_module_name+'.tests' - test_module_name = test_dir_module+'.'+test_module_name - - if test_dir_module not in sys.modules: - sys.modules[test_dir_module] = imp.new_module(test_dir_module) - - old_sys_path = sys.path[:] - try: - f = open(test_file,'r') - test_module = imp.load_module(test_module_name, f, - test_file, ('.py', 'r', 1)) - f.close() - except: - sys.path[:] = old_sys_path - self.warn('FAILURE importing tests for %s' % (mstr(module))) - output_exception(sys.stderr) - return [] - sys.path[:] = old_sys_path - - self.test_files.append(test_file) - - return self._get_suite_list(test_module, level, module.__name__) - - def _get_suite_list(self, test_module, level, module_name='__main__', - verbosity=1): - suite_list = [] - if hasattr(test_module, 'test_suite'): - suite_list.extend(test_module.test_suite(level)._tests) - for name in dir(test_module): - obj = getattr(test_module, name) - if type(obj) is not type(unittest.TestCase) \ - or not issubclass(obj, unittest.TestCase) \ - or not self.check_testcase_name(obj.__name__): - continue - for mthname in self._get_method_names(obj,level): - suite = obj(mthname) - if getattr(suite,'isrunnable',lambda mthname:1)(mthname): - suite_list.append(suite) - matched_suite_list = [suite for suite in suite_list \ - if self.testcase_match(suite.id()\ - .replace('__main__.',''))] - if verbosity>=0: - self.info(' Found %s/%s tests for %s' \ - % (len(matched_suite_list), len(suite_list), module_name)) - return matched_suite_list - - def _test_suite_from_modules(self, this_package, level, verbosity): - package_name = this_package.__name__ - modules = [] - for name, module in 
sys.modules.items(): - if not name.startswith(package_name) or module is None: - continue - if not hasattr(module,'__file__'): - continue - if os.path.basename(os.path.dirname(module.__file__))=='tests': - continue - modules.append((name, module)) - - modules.sort() - modules = [m[1] for m in modules] - - self.test_files = [] - suites = [] - for module in modules: - suites.extend(self._get_module_tests(module, abs(level), verbosity)) - - suites.extend(self._get_suite_list(sys.modules[package_name], - abs(level), verbosity=verbosity)) - return unittest.TestSuite(suites) - - def _test_suite_from_all_tests(self, this_package, level, verbosity): - importall(this_package) - package_name = this_package.__name__ - - # Find all tests/ directories under the package - test_dirs_names = {} - for name, module in sys.modules.items(): - if not name.startswith(package_name) or module is None: - continue - if not hasattr(module, '__file__'): - continue - d = os.path.dirname(module.__file__) - if os.path.basename(d)=='tests': - continue - d = os.path.join(d, 'tests') - if not os.path.isdir(d): - continue - if d in test_dirs_names: - continue - test_dir_module = '.'.join(name.split('.')[:-1]+['tests']) - test_dirs_names[d] = test_dir_module - - test_dirs = test_dirs_names.keys() - test_dirs.sort() - - # For each file in each tests/ directory with a test case in it, - # import the file, and add the test cases to our list - suite_list = [] - testcase_match = re.compile(r'\s*class\s+\w+\s*\(.*TestCase').match - for test_dir in test_dirs: - test_dir_module = test_dirs_names[test_dir] - - if test_dir_module not in sys.modules: - sys.modules[test_dir_module] = imp.new_module(test_dir_module) - - for fn in os.listdir(test_dir): - base, ext = os.path.splitext(fn) - if ext != '.py': - continue - f = os.path.join(test_dir, fn) - - # check that file contains TestCase class definitions: - fid = open(f, 'r') - skip = True - for line in fid: - if testcase_match(line): - skip = False - break - 
fid.close() - if skip: - continue - - # import the test file - n = test_dir_module + '.' + base - # in case test files import local modules - sys.path.insert(0, test_dir) - fo = None - try: - try: - fo = open(f) - test_module = imp.load_module(n, fo, f, - ('.py', 'U', 1)) - except Exception, msg: - print 'Failed importing %s: %s' % (f,msg) - continue - finally: - if fo: - fo.close() - del sys.path[0] - - suites = self._get_suite_list(test_module, level, - module_name=n, - verbosity=verbosity) - suite_list.extend(suites) - - all_tests = unittest.TestSuite(suite_list) - return all_tests - - def test(self, level=1, verbosity=1, all=True, sys_argv=[], - testcase_pattern='.*'): - """Run Numpy module test suite with level and verbosity. - - level: - None --- do nothing, return None - < 0 --- scan for tests of level=abs(level), - don't run them, return TestSuite-list - > 0 --- scan for tests of level, run them, - return TestRunner - > 10 --- run all tests (same as specifying all=True). - (backward compatibility). - - verbosity: - >= 0 --- show information messages - > 1 --- show warnings on missing tests - - all: - True --- run all test files (like self.testall()) - False (default) --- only run test files associated with a module - - sys_argv --- replacement of sys.argv[1:] during running - tests. - - testcase_pattern --- run only tests that match given pattern. - - It is assumed (when all=False) that package tests suite follows - the following convention: for each package module, there exists - file <packagepath>/tests/test_<modulename>.py that defines - TestCase classes (with names having prefix 'test_') with methods - (with names having prefixes 'check_' or 'bench_'); each of these - methods are called when running unit tests. - """ - if level is None: # Do nothing. 
- return - - if isinstance(self.package, str): - exec 'import %s as this_package' % (self.package) - else: - this_package = self.package - - self.testcase_match = re.compile(testcase_pattern).match - - if all: - all_tests = self._test_suite_from_all_tests(this_package, - level, verbosity) - else: - all_tests = self._test_suite_from_modules(this_package, - level, verbosity) - - if level < 0: - return all_tests - - runner = unittest.TextTestRunner(verbosity=verbosity) - old_sys_argv = sys.argv[1:] - sys.argv[1:] = sys_argv - # Use the builtin displayhook. If the tests are being run - # under IPython (for instance), any doctest test suites will - # fail otherwise. - old_displayhook = sys.displayhook - sys.displayhook = sys.__displayhook__ - try: - r = runner.run(all_tests) - finally: - sys.displayhook = old_displayhook - sys.argv[1:] = old_sys_argv - return r - - def testall(self, level=1,verbosity=1): - """ Run Numpy module test suite with level and verbosity. - - level: - None --- do nothing, return None - < 0 --- scan for tests of level=abs(level), - don't run them, return TestSuite-list - > 0 --- scan for tests of level, run them, - return TestRunner - - verbosity: - >= 0 --- show information messages - > 1 --- show warnings on missing tests - - Different from .test(..) method, this method looks for - TestCase classes from all files in <packagedir>/tests/ - directory and no assumptions are made for naming the - TestCase classes or their methods. - """ - return self.test(level=level, verbosity=verbosity, all=True) - - def run(self): - """ Run Numpy module test suite with level and verbosity - taken from sys.argv. Requires optparse module. 
- """ - - # delayed import of shlex to reduce startup time - import shlex - - try: - from optparse import OptionParser - except ImportError: - self.warn('Failed to import optparse module, ignoring.') - return self.test() - usage = r'usage: %prog [-v <verbosity>] [-l <level>]'\ - r' [-s "<replacement of sys.argv[1:]>"]'\ - r' [-t "<testcase pattern>"]' - parser = OptionParser(usage) - parser.add_option("-v", "--verbosity", - action="store", - dest="verbosity", - default=1, - type='int') - parser.add_option("-l", "--level", - action="store", - dest="level", - default=1, - type='int') - parser.add_option("-s", "--sys-argv", - action="store", - dest="sys_argv", - default='', - type='string') - parser.add_option("-t", "--testcase-pattern", - action="store", - dest="testcase_pattern", - default=r'.*', - type='string') - (options, args) = parser.parse_args() - return self.test(options.level,options.verbosity, - sys_argv=shlex.split(options.sys_argv or ''), - testcase_pattern=options.testcase_pattern) - - def warn(self, message): - from numpy.distutils.misc_util import yellow_text - print>>sys.stderr,yellow_text('Warning: %s' % (message)) - sys.stderr.flush() - def info(self, message): - print>>sys.stdout, message - sys.stdout.flush() - def importall(package): """ Try recursively to import all subpackages under package. diff --git a/numpy/testing/parametric.py b/numpy/testing/parametric.py deleted file mode 100644 index 27b9d23c6..000000000 --- a/numpy/testing/parametric.py +++ /dev/null @@ -1,311 +0,0 @@ -"""Support for parametric tests in unittest. - -:Author: Fernando Perez - -Purpose -======= - -Briefly, the main class in this module allows you to easily and cleanly -(without the gross name-mangling hacks that are normally needed) to write -unittest TestCase classes that have parametrized tests. 
That is, tests which -consist of multiple sub-tests that scan for example a parameter range, but -where you want each sub-test to: - -* count as a separate test in the statistics. - -* be run even if others in the group error out or fail. - - -The class offers a simple name-based convention to create such tests (see -simple example at the end), in one of two ways: - -* Each sub-test in a group can be run fully independently, with the - setUp/tearDown methods being called each time. - -* The whole group can be run with setUp/tearDown being called only once for the - group. This lets you conveniently reuse state that may be very expensive to - compute for multiple tests. Be careful not to corrupt it!!! - - -Caveats -======= - -This code relies on implementation details of the unittest module (some key -methods are heavily modified versions of those, after copying them in). So it -may well break either if you make sophisticated use of the unittest APIs, or if -unittest itself changes in the future. I have only tested this with Python -2.5. - -""" -__docformat__ = "restructuredtext en" - -import unittest -import warnings - -class _ParametricTestCase(unittest.TestCase): - """TestCase subclass with support for parametric tests. - - Subclasses of this class can implement test methods that return a list of - tests and arguments to call those with, to do parametric testing (often - also called 'data driven' testing.""" - - #: Prefix for tests with independent state. These methods will be run with - #: a separate setUp/tearDown call for each test in the group. - _indepParTestPrefix = 'testip' - - #: Prefix for tests with shared state. These methods will be run with - #: a single setUp/tearDown call for the whole group. This is useful when - #: writing a group of tests for which the setup is expensive and one wants - #: to actually share that state. Use with care (especially be careful not - #: to mutate the state you are using, which will alter later tests). 
-    _shareParTestPrefix = 'testsp'
-
-    def __init__(self, methodName = 'runTest'):
-        warnings.warn("ParametricTestCase will be removed in the next NumPy "
-                      "release", DeprecationWarning)
-        unittest.TestCase.__init__(self, methodName)
-
-    def exec_test(self,test,args,result):
-        """Execute a single test.  Returns a success boolean"""
-
-        ok = False
-        try:
-            test(*args)
-            ok = True
-        except self.failureException:
-            result.addFailure(self, self._exc_info())
-        except KeyboardInterrupt:
-            raise
-        except:
-            result.addError(self, self._exc_info())
-
-        return ok
-
-    def set_testMethodDoc(self,doc):
-        self._testMethodDoc = doc
-        self._TestCase__testMethodDoc = doc
-
-    def get_testMethodDoc(self):
-        return self._testMethodDoc
-
-    testMethodDoc = property(fset=set_testMethodDoc, fget=get_testMethodDoc)
-
-    def get_testMethodName(self):
-        try:
-            return getattr(self,"_testMethodName")
-        except:
-            return getattr(self,"_TestCase__testMethodName")
-
-    testMethodName = property(fget=get_testMethodName)
-
-    def run_test(self, testInfo,result):
-        """Run one test with arguments"""
-
-        test,args = testInfo[0],testInfo[1:]
-
-        # Reset the doc attribute to be the docstring of this particular test,
-        # so that in error messages it prints the actual test's docstring and
-        # not that of the test factory.
-        self.testMethodDoc = test.__doc__
-        result.startTest(self)
-        try:
-            try:
-                self.setUp()
-            except KeyboardInterrupt:
-                raise
-            except:
-                result.addError(self, self._exc_info())
-                return
-
-            ok = self.exec_test(test,args,result)
-
-            try:
-                self.tearDown()
-            except KeyboardInterrupt:
-                raise
-            except:
-                result.addError(self, self._exc_info())
-                ok = False
-            if ok: result.addSuccess(self)
-        finally:
-            result.stopTest(self)
-
-    def run_tests(self, tests,result):
-        """Run many tests with a common setUp/tearDown.
-
-        The entire set of tests is run with a single setUp/tearDown call."""
-
-        try:
-            self.setUp()
-        except KeyboardInterrupt:
-            raise
-        except:
-            result.testsRun += 1
-            result.addError(self, self._exc_info())
-            return
-
-        saved_doc = self.testMethodDoc
-
-        try:
-            # Run all the tests specified
-            for testInfo in tests:
-                test,args = testInfo[0],testInfo[1:]
-
-                # Set the doc argument for this test.  Note that even if we do
-                # this, the fail/error tracebacks still print the docstring for
-                # the parent factory, because they only generate the message at
-                # the end of the run, AFTER we've restored it.  There is no way
-                # to tell the unittest system (without overriding a lot of
-                # stuff) to extract this information right away, the logic is
-                # hardcoded to pull it later, since unittest assumes it doesn't
-                # change.
-                self.testMethodDoc = test.__doc__
-                result.startTest(self)
-                ok = self.exec_test(test,args,result)
-                if ok: result.addSuccess(self)
-
-        finally:
-            # Restore docstring info and run tearDown once only.
-            self.testMethodDoc = saved_doc
-            try:
-                self.tearDown()
-            except KeyboardInterrupt:
-                raise
-            except:
-                result.addError(self, self._exc_info())
-
-    def run(self, result=None):
-        """Test runner."""
-
-        #print
-        #print '*** run for method:',self._testMethodName  # dbg
-        #print '*** doc:',self._testMethodDoc  # dbg
-
-        if result is None: result = self.defaultTestResult()
-
-        # Independent tests: each gets its own setup/teardown
-        if self.testMethodName.startswith(self._indepParTestPrefix):
-            for t in getattr(self,self.testMethodName)():
-                self.run_test(t,result)
-        # Shared-state test: single setup/teardown for all
-        elif self.testMethodName.startswith(self._shareParTestPrefix):
-            tests = getattr(self,self.testMethodName,'runTest')()
-            self.run_tests(tests,result)
-        # Normal unittest Test methods
-        else:
-            unittest.TestCase.run(self,result)
-
-# The underscore was added to the class name to keep nose from trying
-# to run the test class (nose ignores class names that begin with an
-# underscore by default).
-ParametricTestCase = _ParametricTestCase
-
-#############################################################################
-# Quick and dirty interactive example/test
-if __name__ == '__main__':
-
-    class ExampleTestCase(ParametricTestCase):
-
-        #-------------------------------------------------------------------
-        # An instrumented setUp method so we can see when it gets called and
-        # how many times per instance
-        counter = 0
-
-        def setUp(self):
-            self.counter += 1
-            print 'setUp count: %2s for: %s' % (self.counter,
-                                                self.testMethodDoc)
-
-        #-------------------------------------------------------------------
-        # A standard test method, just like in the unittest docs.
-        def test_foo(self):
-            """Normal test for feature foo."""
-            pass
-
-        #-------------------------------------------------------------------
-        # Testing methods that need parameters.  These can NOT be named test*,
-        # since they would be picked up by unittest and called without
-        # arguments.  Instead, call them anything else (I use tst*) and then
-        # load them via the factories below.
-        def tstX(self,i):
-            "Test feature X with parameters."
-            print 'tstX, i=',i
-            if i==1 or i==3:
-                # Test fails
-                self.fail('i is bad, bad: %s' % i)
-
-        def tstY(self,i):
-            "Test feature Y with parameters."
-            print 'tstY, i=',i
-            if i==1:
-                # Force an error
-                1/0
-
-        def tstXX(self,i,j):
-            "Test feature XX with parameters."
-            print 'tstXX, i=',i,'j=',j
-            if i==1:
-                # Test fails
-                self.fail('i is bad, bad: %s' % i)
-
-        def tstYY(self,i):
-            "Test feature YY with parameters."
-            print 'tstYY, i=',i
-            if i==2:
-                # Force an error
-                1/0
-
-        def tstZZ(self):
-            """Test feature ZZ without parameters, needs multiple runs.
-
-            This could be a random test that you want to run multiple times."""
-            pass
-
-        #-------------------------------------------------------------------
-        # Parametric test factories that create the test groups to call the
-        # above tst* methods with their required arguments.
-        def testip(self):
-            """Independent parametric test factory.
-
-            A separate setUp() call is made for each test returned by this
-            method.
-
-            You must return an iterable (list or generator is fine) containing
-            tuples with the actual method to be called as the first argument,
-            and the arguments for that call later."""
-            return [(self.tstX,i) for i in range(5)]
-
-        def testip2(self):
-            """Another independent parametric test factory"""
-            return [(self.tstY,i) for i in range(5)]
-
-        def testip3(self):
-            """Test factory combining different subtests.
-
-            This one shows how to assemble calls to different tests."""
-            return [(self.tstX,3),(self.tstX,9),(self.tstXX,4,10),
-                    (self.tstZZ,),(self.tstZZ,)]
-
-        def testsp(self):
-            """Shared parametric test factory
-
-            A single setUp() call is made for all the tests returned by this
-            method.
- """ - return [(self.tstXX,i,i+1) for i in range(5)] - - def testsp2(self): - """Another shared parametric test factory""" - return [(self.tstYY,i) for i in range(5)] - - def testsp3(self): - """Another shared parametric test factory. - - This one simply calls the same test multiple times, without any - arguments. Note that you must still return tuples, even if there - are no arguments.""" - return [(self.tstZZ,) for i in range(10)] - - - # This test class runs normally under unittest's default runner - unittest.main() diff --git a/numpy/testing/tests/test_decorators.py b/numpy/testing/tests/test_decorators.py new file mode 100644 index 000000000..504971e61 --- /dev/null +++ b/numpy/testing/tests/test_decorators.py @@ -0,0 +1,156 @@ +import numpy as np +from numpy.testing import * +from numpy.testing.noseclasses import KnownFailureTest +import nose + +def test_slow(): + @dec.slow + def slow_func(x,y,z): + pass + + assert(slow_func.slow) + +def test_setastest(): + @dec.setastest() + def f_default(a): + pass + + @dec.setastest(True) + def f_istest(a): + pass + + @dec.setastest(False) + def f_isnottest(a): + pass + + assert(f_default.__test__) + assert(f_istest.__test__) + assert(not f_isnottest.__test__) + +class DidntSkipException(Exception): + pass + +def test_skip_functions_hardcoded(): + @dec.skipif(True) + def f1(x): + raise DidntSkipException + + try: + f1('a') + except DidntSkipException: + raise Exception('Failed to skip') + except nose.SkipTest: + pass + + @dec.skipif(False) + def f2(x): + raise DidntSkipException + + try: + f2('a') + except DidntSkipException: + pass + except nose.SkipTest: + raise Exception('Skipped when not expected to') + + +def test_skip_functions_callable(): + def skip_tester(): + return skip_flag == 'skip me!' + + @dec.skipif(skip_tester) + def f1(x): + raise DidntSkipException + + try: + skip_flag = 'skip me!' 
+        f1('a')
+    except DidntSkipException:
+        raise Exception('Failed to skip')
+    except nose.SkipTest:
+        pass
+
+    @dec.skipif(skip_tester)
+    def f2(x):
+        raise DidntSkipException
+
+    try:
+        skip_flag = 'five is right out!'
+        f2('a')
+    except DidntSkipException:
+        pass
+    except nose.SkipTest:
+        raise Exception('Skipped when not expected to')
+
+
+def test_skip_generators_hardcoded():
+    @dec.knownfailureif(True, "This test is known to fail")
+    def g1(x):
+        for i in xrange(x):
+            yield i
+
+    try:
+        for j in g1(10):
+            pass
+    except KnownFailureTest:
+        pass
+    else:
+        raise Exception('Failed to mark as known failure')
+
+
+    @dec.knownfailureif(False, "This test is NOT known to fail")
+    def g2(x):
+        for i in xrange(x):
+            yield i
+        raise DidntSkipException('FAIL')
+
+    try:
+        for j in g2(10):
+            pass
+    except KnownFailureTest:
+        raise Exception('Marked incorrectly as known failure')
+    except DidntSkipException:
+        pass
+
+
+def test_skip_generators_callable():
+    def skip_tester():
+        return skip_flag == 'skip me!'
+
+    @dec.knownfailureif(skip_tester, "This test is known to fail")
+    def g1(x):
+        for i in xrange(x):
+            yield i
+
+    try:
+        skip_flag = 'skip me!'
+        for j in g1(10):
+            pass
+    except KnownFailureTest:
+        pass
+    else:
+        raise Exception('Failed to mark as known failure')
+
+
+    @dec.knownfailureif(skip_tester, "This test is NOT known to fail")
+    def g2(x):
+        for i in xrange(x):
+            yield i
+        raise DidntSkipException('FAIL')
+
+    try:
+        skip_flag = 'do not skip'
+        for j in g2(10):
+            pass
+    except KnownFailureTest:
+        raise Exception('Marked incorrectly as known failure')
+    except DidntSkipException:
+        pass
+
+
+if __name__ == '__main__':
+    run_module_suite()
+
+
+