-rw-r--r--  README.txt | 3
-rw-r--r--  doc/DISTUTILS.rst.txt | 65
-rw-r--r--  doc/release/1.8.0-notes.rst | 24
-rw-r--r--  doc/source/reference/arrays.indexing.rst | 4
-rw-r--r--  doc/source/reference/c-api.ufunc.rst | 10
-rw-r--r--  doc/source/reference/routines.math.rst | 2
-rw-r--r--  doc/source/reference/ufuncs.rst | 4
-rw-r--r--  doc/source/user/c-info.ufunc-tutorial.rst | 165
-rw-r--r--  numpy/core/code_generators/numpy_api.py | 1
-rw-r--r--  numpy/core/code_generators/ufunc_docstrings.py | 74
-rw-r--r--  numpy/core/fromnumeric.py | 74
-rw-r--r--  numpy/core/include/numpy/ndarraytypes.h | 4
-rw-r--r--  numpy/core/include/numpy/npy_math.h | 53
-rw-r--r--  numpy/core/include/numpy/ufuncobject.h | 2
-rw-r--r--  numpy/core/numeric.py | 4
-rw-r--r--  numpy/core/setup.py | 13
-rw-r--r--  numpy/core/setup_common.py | 9
-rw-r--r--  numpy/core/src/multiarray/ctors.c | 27
-rw-r--r--  numpy/core/src/multiarray/iterators.c | 6
-rw-r--r--  numpy/core/src/multiarray/iterators.h | 3
-rw-r--r--  numpy/core/src/multiarray/mapping.c | 92
-rw-r--r--  numpy/core/src/multiarray/multiarray_tests.c.src | 8
-rw-r--r--  numpy/core/src/umath/operand_flag_tests.c.src | 1
-rw-r--r--  numpy/core/src/umath/reduction.c | 3
-rw-r--r--  numpy/core/src/umath/struct_ufunc_test.c.src | 122
-rw-r--r--  numpy/core/src/umath/test_rational.c.src | 358
-rw-r--r--  numpy/core/src/umath/ufunc_object.c | 206
-rw-r--r--  numpy/core/src/umath/ufunc_type_resolution.c | 53
-rw-r--r--  numpy/core/tests/test_api.py | 8
-rw-r--r--  numpy/core/tests/test_indexing.py | 331
-rw-r--r--  numpy/core/tests/test_numeric.py | 6
-rw-r--r--  numpy/core/tests/test_ufunc.py | 39
-rw-r--r--  numpy/distutils/ccompiler.py | 5
-rw-r--r--  numpy/distutils/misc_util.py | 31
-rw-r--r--  numpy/distutils/system_info.py | 6
-rw-r--r--  numpy/f2py/setup.py | 4
-rw-r--r--  numpy/lib/arraypad.py | 1501
-rw-r--r--  numpy/lib/function_base.py | 56
-rw-r--r--  numpy/lib/stride_tricks.py | 5
-rw-r--r--  numpy/lib/tests/test_arraypad.py | 30
-rw-r--r--  numpy/linalg/tests/test_linalg.py | 44
-rw-r--r--  numpy/linalg/umath_linalg.c.src | 4
-rw-r--r--  numpy/ma/extras.py | 7
-rw-r--r--  numpy/ma/tests/test_regression.py | 8
-rwxr-xr-x  runtests.py | 197
-rw-r--r--  tools/osxbuild/README.txt | 32
-rw-r--r--  tools/osxbuild/build.py | 105
-rw-r--r--  tools/osxbuild/docs/README.txt | 25
-rw-r--r--  tools/osxbuild/install_and_test.py | 52
-rw-r--r--  tox.ini | 30
50 files changed, 2826 insertions, 1090 deletions
diff --git a/README.txt b/README.txt
index f7a8e2a37..31ba1c02a 100644
--- a/README.txt
+++ b/README.txt
@@ -16,9 +16,6 @@ After installation, tests can be run with:
python -c 'import numpy; numpy.test()'
-Starting in NumPy 1.7, deprecation warnings have been set to 'raise' by
-default, so the -Wd command-line option is no longer necessary.
-
The most current development version is always available from our
git repository:
diff --git a/doc/DISTUTILS.rst.txt b/doc/DISTUTILS.rst.txt
index a2ac0b986..363112ea9 100644
--- a/doc/DISTUTILS.rst.txt
+++ b/doc/DISTUTILS.rst.txt
@@ -10,7 +10,7 @@ SciPy structure
Currently the SciPy project consists of two packages:
-- NumPy (previously called SciPy core) --- it provides packages like:
+- NumPy --- it provides packages like:
+ numpy.distutils - extension to Python distutils
+ numpy.f2py - a tool to bind Fortran/C codes to Python
@@ -38,7 +38,6 @@ A SciPy package contains, in addition to its sources, the following
files and directories:
+ ``setup.py`` --- building script
- + ``info.py`` --- contains documentation and import flags
+ ``__init__.py`` --- package initializer
+ ``tests/`` --- directory of unittests
@@ -398,64 +397,32 @@ Useful functions in ``numpy.distutils.misc_util``
+ ``find_executable(exe, path=None)``
+ ``exec_command( command, execute_in='', use_shell=None, use_tee=None, **env )``
-The ``info.py`` file
-''''''''''''''''''''
-
-SciPy package import hooks assume that each package contains a
-``info.py`` file. This file contains overall documentation about the package
-and variables defining the order of package imports, dependency
-relations between packages, etc.
-
-On import, the following information will be looked for in ``info.py``:
-
-__doc__
- The documentation string of the package.
-
-__doc_title__
- The title of the package. If not defined then the first non-empty
- line of ``__doc__`` will be used.
-
-__all__
- List of symbols that package exports. Optional.
-
-global_symbols
- List of names that should be imported to numpy name space. To import
- all symbols to ``numpy`` namespace, define ``global_symbols=['*']``.
-
-depends
- List of names that the package depends on. Prefix ``numpy.``
- will be automatically added to package names. For example,
- use ``testing`` to indicate dependence on ``numpy.testing``
- package. Default value is ``[]``.
-
-postpone_import
- Boolean variable indicating that importing the package should be
- postponed until the first attempt of its usage. Default value is ``False``.
- Depreciated.
-
The ``__init__.py`` file
''''''''''''''''''''''''
-To speed up the import time and minimize memory usage, numpy
-uses ``ppimport`` hooks to transparently postpone importing large modules,
-which might not be used during the SciPy session. In order to
-have access to the documentation of all SciPy packages, including
-postponed packages, the docstring from ``info.py`` is imported
-into ``__init__.py``.
+The header of a typical SciPy ``__init__.py`` is::
-The header of a typical ``__init__.py`` is::
+ """
+ Package docstring, typically with a brief description and function listing.
+ """
+
+ # py3k related imports
+ from __future__ import division, print_function, absolute_import
- #
- # Package ... - ...
- #
-
- from info import __doc__
+ # import functions into module namespace
+ from .subpackage import *
...
+ __all__ = [s for s in dir() if not s.startswith('_')]
+
from numpy.testing import Tester
test = Tester().test
bench = Tester().bench
+Note that NumPy submodules still use a file named ``info.py`` in which the
+module docstring and ``__all__`` list are defined. These files will be removed
+at some point.
+
Extra features in NumPy Distutils
'''''''''''''''''''''''''''''''''
diff --git a/doc/release/1.8.0-notes.rst b/doc/release/1.8.0-notes.rst
index e65658ad3..76dcf50c2 100644
--- a/doc/release/1.8.0-notes.rst
+++ b/doc/release/1.8.0-notes.rst
@@ -110,6 +110,12 @@ New `invert` argument to `in1d`
The function `in1d` now accepts an `invert` argument which, when `True`,
causes the returned array to be inverted.
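As a plain-NumPy sketch of the semantics (the broadcast comparison below is only illustrative; `in1d` itself uses a faster sort-based algorithm):

```python
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([2, 4])
# membership mask equivalent to in1d(a, b); passing invert=True
# returns the negated mask, faster than computing ~in1d(a, b)
member = (a[:, None] == b).any(axis=1)
print(~member)  # inverted mask: True where a is NOT in b
```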
+Advanced indexing using `np.newaxis`
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+It is now possible to use `np.newaxis`/`None` together with index
+arrays instead of only in simple indices. This means that
+``array[np.newaxis, [0, 1]]`` will now work as expected.
+
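A minimal sketch of the new behavior (shapes are illustrative):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
# np.newaxis combined with an index array: the length-1 axis is
# prepended, then the fancy index selects rows 0 and 1
r = a[np.newaxis, [0, 1]]
print(r.shape)  # (1, 2, 3)
```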
C-API
~~~~~
@@ -120,6 +126,22 @@ loop signature matching logic wasn't looking at the output operand type.
Now the correct ufunc loop is found, as long as the user provides an output
argument with the correct output type.
+runtests.py
+~~~~~~~~~~~
+
+A simple test runner script ``runtests.py`` was added. It also builds Numpy via
+``setup.py build`` and can be used to run tests easily during development.
+
+
+Improvements
+============
+
+Performance improvements to `pad`
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The `pad` function has a new implementation, greatly improving performance for
+all inputs except `mode=<function>` (retained for backwards compatibility).
+Scaling with dimensionality is dramatically improved for rank >= 4.
+
Changes
=======
@@ -145,6 +167,8 @@ Several changes to np.insert and np.delete:
`np.insert(arr, 3, [1,2,3])` to insert multiple items at a single position.
  In Numpy 1.8, this is also possible for `np.insert(arr, [3], [1, 2, 3])`.
+Padded regions from np.pad are now correctly rounded, not truncated.
+
C-API
~~~~~
diff --git a/doc/source/reference/arrays.indexing.rst b/doc/source/reference/arrays.indexing.rst
index f8966f5c1..e759b6ff8 100644
--- a/doc/source/reference/arrays.indexing.rst
+++ b/doc/source/reference/arrays.indexing.rst
@@ -170,8 +170,8 @@ concepts to remember include:
.. data:: newaxis
- The :const:`newaxis` object can be used in the basic slicing syntax
- discussed above. :const:`None` can also be used instead of
+ The :const:`newaxis` object can be used in all slicing operations
+ as discussed above. :const:`None` can also be used instead of
:const:`newaxis`.
diff --git a/doc/source/reference/c-api.ufunc.rst b/doc/source/reference/c-api.ufunc.rst
index 45268b261..d4de28188 100644
--- a/doc/source/reference/c-api.ufunc.rst
+++ b/doc/source/reference/c-api.ufunc.rst
@@ -140,6 +140,16 @@ Functions
in as *arg_types* which must be a pointer to memory at least as
large as ufunc->nargs.
+.. cfunction:: int PyUFunc_RegisterLoopForDescr(PyUFuncObject* ufunc,
+ PyArray_Descr* userdtype, PyUFuncGenericFunction function,
+ PyArray_Descr** arg_dtypes, void* data)
+
+ This function behaves like PyUFunc_RegisterLoopForType above, except
+ that it allows the user to register a 1-d loop using PyArray_Descr
+ objects instead of dtype type num values. This allows a 1-d loop to be
+ registered for structured array data-dtypes and custom data-types
+ instead of scalar data-types.
+
.. cfunction:: int PyUFunc_ReplaceLoopBySignature(PyUFuncObject* ufunc,
PyUFuncGenericFunction newfunc, int* signature,
PyUFuncGenericFunction* oldfunc)
diff --git a/doc/source/reference/routines.math.rst b/doc/source/reference/routines.math.rst
index 7ce77c24d..0e7a60b76 100644
--- a/doc/source/reference/routines.math.rst
+++ b/doc/source/reference/routines.math.rst
@@ -143,6 +143,8 @@ Miscellaneous
sign
maximum
minimum
+ fmax
+ fmin
nan_to_num
real_if_close
diff --git a/doc/source/reference/ufuncs.rst b/doc/source/reference/ufuncs.rst
index dcd4ae6d0..2154bca37 100644
--- a/doc/source/reference/ufuncs.rst
+++ b/doc/source/reference/ufuncs.rst
@@ -604,6 +604,10 @@ Comparison functions
``a > b`` and uses it to return either `a` or `b` (as a whole). A similar
difference exists between ``minimum(a, b)`` and ``min(a, b)``.
+.. autosummary::
+
+ fmax
+ fmin
Floating functions
------------------
diff --git a/doc/source/user/c-info.ufunc-tutorial.rst b/doc/source/user/c-info.ufunc-tutorial.rst
index ef2a0f9ee..d3b1deb26 100644
--- a/doc/source/user/c-info.ufunc-tutorial.rst
+++ b/doc/source/user/c-info.ufunc-tutorial.rst
@@ -884,6 +884,171 @@ as well as all other properties of a ufunc.
}
#endif
+
+.. _`sec:Numpy-struct-dtype`:
+
+Example Numpy ufunc with structured array dtype arguments
+=========================================================
+
+This example shows how to create a ufunc for a structured array dtype.
+For the example we show a trivial ufunc for adding two arrays with dtype
+'u8,u8,u8'. The process is a bit different from the other examples since
+a call to PyUFunc_FromFuncAndData doesn't fully register ufuncs for
+custom dtypes and structured array dtypes. We need to also call
+PyUFunc_RegisterLoopForDescr to finish setting up the ufunc.
+
+We only give the C code as the setup.py file is exactly the same as
+the setup.py file in `Example Numpy ufunc for one dtype`_, except that
+the line
+
+ .. code-block:: python
+
+ config.add_extension('npufunc', ['single_type_logit.c'])
+
+is replaced with
+
+ .. code-block:: python
+
+ config.add_extension('npufunc', ['add_triplet.c'])
+
+The C file is given below.
+
+ .. code-block:: c
+
+ #include "Python.h"
+ #include "math.h"
+ #include "numpy/ndarraytypes.h"
+ #include "numpy/ufuncobject.h"
+ #include "numpy/npy_3kcompat.h"
+
+
+ /*
+ * add_triplet.c
+ * This is the C code for creating your own
+ * Numpy ufunc for a structured array dtype.
+ *
+ * Details explaining the Python-C API can be found under
+ * 'Extending and Embedding' and 'Python/C API' at
+ * docs.python.org .
+ */
+
+ static PyMethodDef StructUfuncTestMethods[] = {
+ {NULL, NULL, 0, NULL}
+ };
+
+ /* The loop definition must precede the PyMODINIT_FUNC. */
+
+ static void add_uint64_triplet(char **args, npy_intp *dimensions,
+ npy_intp* steps, void* data)
+ {
+ npy_intp i;
+ npy_intp is1=steps[0];
+ npy_intp is2=steps[1];
+ npy_intp os=steps[2];
+ npy_intp n=dimensions[0];
+ uint64_t *x, *y, *z;
+
+ char *i1=args[0];
+ char *i2=args[1];
+ char *op=args[2];
+
+ for (i = 0; i < n; i++) {
+
+ x = (uint64_t*)i1;
+ y = (uint64_t*)i2;
+ z = (uint64_t*)op;
+
+ z[0] = x[0] + y[0];
+ z[1] = x[1] + y[1];
+ z[2] = x[2] + y[2];
+
+ i1 += is1;
+ i2 += is2;
+ op += os;
+ }
+ }
+
+ /* This a pointer to the above function */
+ PyUFuncGenericFunction funcs[1] = {&add_uint64_triplet};
+
+ /* These are the input and return dtypes of add_uint64_triplet. */
+ static char types[3] = {NPY_UINT64, NPY_UINT64, NPY_UINT64};
+
+ static void *data[1] = {NULL};
+
+ #if defined(NPY_PY3K)
+ static struct PyModuleDef moduledef = {
+ PyModuleDef_HEAD_INIT,
+ "struct_ufunc_test",
+ NULL,
+ -1,
+ StructUfuncTestMethods,
+ NULL,
+ NULL,
+ NULL,
+ NULL
+ };
+ #endif
+
+ #if defined(NPY_PY3K)
+ PyMODINIT_FUNC PyInit_struct_ufunc_test(void)
+ #else
+ PyMODINIT_FUNC initstruct_ufunc_test(void)
+ #endif
+ {
+ PyObject *m, *add_triplet, *d;
+ PyObject *dtype_dict;
+ PyArray_Descr *dtype;
+ PyArray_Descr *dtypes[3];
+
+ #if defined(NPY_PY3K)
+ m = PyModule_Create(&moduledef);
+ #else
+ m = Py_InitModule("struct_ufunc_test", StructUfuncTestMethods);
+ #endif
+
+ if (m == NULL) {
+ #if defined(NPY_PY3K)
+ return NULL;
+ #else
+ return;
+ #endif
+ }
+
+ import_array();
+ import_umath();
+
+ /* Create a new ufunc object */
+ add_triplet = PyUFunc_FromFuncAndData(NULL, NULL, NULL, 0, 2, 1,
+ PyUFunc_None, "add_triplet",
+ "add_triplet_docstring", 0);
+
+ dtype_dict = Py_BuildValue("[(s, s), (s, s), (s, s)]",
+ "f0", "u8", "f1", "u8", "f2", "u8");
+ PyArray_DescrConverter(dtype_dict, &dtype);
+ Py_DECREF(dtype_dict);
+
+ dtypes[0] = dtype;
+ dtypes[1] = dtype;
+ dtypes[2] = dtype;
+
+ /* Register ufunc for structured dtype */
+ PyUFunc_RegisterLoopForDescr(add_triplet,
+ dtype,
+ &add_uint64_triplet,
+ dtypes,
+ NULL);
+
+ d = PyModule_GetDict(m);
+
+ PyDict_SetItemString(d, "add_triplet", add_triplet);
+ Py_DECREF(add_triplet);
+ #if defined(NPY_PY3K)
+ return m;
+ #endif
+ }
+
+
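A plain-NumPy sketch of what the ``add_uint64_triplet`` loop computes per element (field-by-field addition; this does not use the compiled module, and the default field names ``f0``/``f1``/``f2`` come from the ``'u8,u8,u8'`` dtype):

```python
import numpy as np

dt = np.dtype('u8,u8,u8')
a = np.array([(1, 2, 3)], dtype=dt)
b = np.array([(10, 20, 30)], dtype=dt)

# field-by-field addition, mirroring z[0..2] = x[0..2] + y[0..2]
out = np.zeros(1, dtype=dt)
for f in dt.names:
    out[f] = a[f] + b[f]
print(out[0])  # (11, 22, 33)
```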
.. _`sec:PyUFunc-spec`:
PyUFunc_FromFuncAndData Specification
diff --git a/numpy/core/code_generators/numpy_api.py b/numpy/core/code_generators/numpy_api.py
index 07a87f98d..152d0f948 100644
--- a/numpy/core/code_generators/numpy_api.py
+++ b/numpy/core/code_generators/numpy_api.py
@@ -384,6 +384,7 @@ ufunc_funcs_api = {
# End 1.6 API
'PyUFunc_DefaultTypeResolver': 39,
'PyUFunc_ValidateCasting': 40,
+ 'PyUFunc_RegisterLoopForDescr': 41,
}
# List of all the dicts which define the C API
diff --git a/numpy/core/code_generators/ufunc_docstrings.py b/numpy/core/code_generators/ufunc_docstrings.py
index 5bb5f3f00..53ccfcfda 100644
--- a/numpy/core/code_generators/ufunc_docstrings.py
+++ b/numpy/core/code_generators/ufunc_docstrings.py
@@ -2170,14 +2170,12 @@ add_newdoc('numpy.core.umath', 'maximum',
"""
Element-wise maximum of array elements.
- Compare two arrays and returns a new array containing
- the element-wise maxima. If one of the elements being
- compared is a nan, then that element is returned. If
- both elements are nans then the first is returned. The
- latter distinction is important for complex nans,
- which are defined as at least one of the real or
- imaginary parts being a nan. The net effect is that
- nans are propagated.
+ Compare two arrays and returns a new array containing the element-wise
+ maxima. If one of the elements being compared is a nan, then that element
+ is returned. If both elements are nans then the first is returned. The
+ latter distinction is important for complex nans, which are defined as at
+ least one of the real or imaginary parts being a nan. The net effect is
+ that nans are propagated.
Parameters
----------
@@ -2194,25 +2192,27 @@ add_newdoc('numpy.core.umath', 'maximum',
See Also
--------
minimum :
- element-wise minimum
-
+ Element-wise minimum of two arrays, propagating any NaNs.
fmax :
- element-wise maximum that ignores nans unless both inputs are nans.
-
- fmin :
- element-wise minimum that ignores nans unless both inputs are nans.
+ Element-wise maximum of two arrays, ignoring any NaNs.
+ amax :
+ The maximum value of an array along a given axis, propagating any NaNs.
+ nanmax :
+ The maximum value of an array along a given axis, ignoring any NaNs.
+
+ fmin, amin, nanmin
Notes
-----
- Equivalent to ``np.where(x1 > x2, x1, x2)`` but faster and does proper
- broadcasting.
+ The maximum is equivalent to ``np.where(x1 >= x2, x1, x2)`` when neither
+ x1 nor x2 are nans, but it is faster and does proper broadcasting.
Examples
--------
>>> np.maximum([2, 3, 4], [1, 5, 2])
array([2, 5, 4])
- >>> np.maximum(np.eye(2), [0.5, 2])
+ >>> np.maximum(np.eye(2), [0.5, 2]) # broadcasting
array([[ 1. , 2. ],
[ 0.5, 2. ]])
@@ -2249,11 +2249,15 @@ add_newdoc('numpy.core.umath', 'minimum',
See Also
--------
maximum :
- element-wise minimum that propagates nans.
- fmax :
- element-wise maximum that ignores nans unless both inputs are nans.
+ Element-wise maximum of two arrays, propagating any NaNs.
fmin :
- element-wise minimum that ignores nans unless both inputs are nans.
+ Element-wise minimum of two arrays, ignoring any NaNs.
+ amin :
+ The minimum value of an array along a given axis, propagating any NaNs.
+ nanmin :
+ The minimum value of an array along a given axis, ignoring any NaNs.
+
+ fmax, amax, nanmax
Notes
-----
@@ -2271,6 +2275,8 @@ add_newdoc('numpy.core.umath', 'minimum',
>>> np.minimum([np.nan, 0, np.nan],[0, np.nan, np.nan])
array([ NaN, NaN, NaN])
+ >>> np.minimum(-np.Inf, 1)
+ -inf
""")
@@ -2300,11 +2306,15 @@ add_newdoc('numpy.core.umath', 'fmax',
See Also
--------
fmin :
- element-wise minimum that ignores nans unless both inputs are nans.
+ Element-wise minimum of two arrays, ignoring any NaNs.
maximum :
- element-wise maximum that propagates nans.
- minimum :
- element-wise minimum that propagates nans.
+ Element-wise maximum of two arrays, propagating any NaNs.
+ amax :
+ The maximum value of an array along a given axis, propagating any NaNs.
+ nanmax :
+ The maximum value of an array along a given axis, ignoring any NaNs.
+
+ minimum, amin, nanmin
Notes
-----
@@ -2329,8 +2339,6 @@ add_newdoc('numpy.core.umath', 'fmax',
add_newdoc('numpy.core.umath', 'fmin',
"""
- fmin(x1, x2[, out])
-
Element-wise minimum of array elements.
Compare two arrays and returns a new array containing the element-wise
@@ -2355,11 +2363,15 @@ add_newdoc('numpy.core.umath', 'fmin',
See Also
--------
fmax :
- element-wise maximum that ignores nans unless both inputs are nans.
- maximum :
- element-wise maximum that propagates nans.
+ Element-wise maximum of two arrays, ignoring any NaNs.
minimum :
- element-wise minimum that propagates nans.
+ Element-wise minimum of two arrays, propagating any NaNs.
+ amin :
+ The minimum value of an array along a given axis, propagating any NaNs.
+ nanmin :
+ The minimum value of an array along a given axis, ignoring any NaNs.
+
+ maximum, amax, nanmax
Notes
-----
diff --git a/numpy/core/fromnumeric.py b/numpy/core/fromnumeric.py
index c34348e22..5735b6124 100644
--- a/numpy/core/fromnumeric.py
+++ b/numpy/core/fromnumeric.py
@@ -1903,11 +1903,11 @@ def amax(a, axis=None, out=None, keepdims=False):
a : array_like
Input data.
axis : int, optional
- Axis along which to operate. By default flattened input is used.
+ Axis along which to operate. By default, flattened input is used.
out : ndarray, optional
- Alternate output array in which to place the result. Must be of
- the same shape and buffer length as the expected output. See
- `doc.ufuncs` (Section "Output arguments") for more details.
+ Alternative output array in which to place the result. Must
+ be of the same shape and buffer length as the expected output.
+ See `doc.ufuncs` (Section "Output arguments") for more details.
keepdims : bool, optional
If this is set to True, the axes which are reduced are left
in the result as dimensions with size one. With this option,
@@ -1922,15 +1922,28 @@ def amax(a, axis=None, out=None, keepdims=False):
See Also
--------
- nanmax : NaN values are ignored instead of being propagated.
- fmax : same behavior as the C99 fmax function.
- argmax : indices of the maximum values.
+ amin :
+ The minimum value of an array along a given axis, propagating any NaNs.
+ nanmax :
+ The maximum value of an array along a given axis, ignoring any NaNs.
+ maximum :
+ Element-wise maximum of two arrays, propagating any NaNs.
+ fmax :
+ Element-wise maximum of two arrays, ignoring any NaNs.
+ argmax :
+ Return the indices of the maximum values.
+
+ nanmin, minimum, fmin
Notes
-----
NaN values are propagated, that is if at least one item is NaN, the
- corresponding max value will be NaN as well. To ignore NaN values
+ corresponding max value will be NaN as well. To ignore NaN values
(MATLAB behavior), please use nanmax.
+
+ Don't use `amax` for element-wise comparison of 2 arrays; when
+ ``a.shape[0]`` is 2, ``maximum(a[0], a[1])`` is faster than
+ ``amax(a, axis=0)``.
Examples
--------
@@ -1938,11 +1951,11 @@ def amax(a, axis=None, out=None, keepdims=False):
>>> a
array([[0, 1],
[2, 3]])
- >>> np.amax(a)
+ >>> np.amax(a) # Maximum of the flattened array
3
- >>> np.amax(a, axis=0)
+ >>> np.amax(a, axis=0) # Maxima along the first axis
array([2, 3])
- >>> np.amax(a, axis=1)
+ >>> np.amax(a, axis=1) # Maxima along the second axis
array([1, 3])
>>> b = np.arange(5, dtype=np.float)
@@ -1959,7 +1972,7 @@ def amax(a, axis=None, out=None, keepdims=False):
except AttributeError:
return _methods._amax(a, axis=axis,
out=out, keepdims=keepdims)
- # NOTE: Dropping and keepdims parameter
+ # NOTE: Dropping the keepdims parameter
return amax(axis=axis, out=out)
else:
return _methods._amax(a, axis=axis,
@@ -1974,7 +1987,7 @@ def amin(a, axis=None, out=None, keepdims=False):
a : array_like
Input data.
axis : int, optional
- Axis along which to operate. By default a flattened input is used.
+ Axis along which to operate. By default, flattened input is used.
out : ndarray, optional
Alternative output array in which to place the result. Must
be of the same shape and buffer length as the expected output.
@@ -1986,22 +1999,35 @@ def amin(a, axis=None, out=None, keepdims=False):
Returns
-------
- amin : ndarray
- A new array or a scalar array with the result.
+ amin : ndarray or scalar
+ Minimum of `a`. If `axis` is None, the result is a scalar value.
+ If `axis` is given, the result is an array of dimension
+ ``a.ndim - 1``.
See Also
--------
- nanmin: nan values are ignored instead of being propagated
- fmin: same behavior as the C99 fmin function
- argmin: Return the indices of the minimum values.
+ amax :
+ The maximum value of an array along a given axis, propagating any NaNs.
+ nanmin :
+ The minimum value of an array along a given axis, ignoring any NaNs.
+ minimum :
+ Element-wise minimum of two arrays, propagating any NaNs.
+ fmin :
+ Element-wise minimum of two arrays, ignoring any NaNs.
+ argmin :
+ Return the indices of the minimum values.
- amax, nanmax, fmax
+ nanmax, maximum, fmax
Notes
-----
- NaN values are propagated, that is if at least one item is nan, the
- corresponding min value will be nan as well. To ignore NaN values (matlab
- behavior), please use nanmin.
+ NaN values are propagated, that is if at least one item is NaN, the
+ corresponding min value will be NaN as well. To ignore NaN values
+ (MATLAB behavior), please use nanmin.
+
+ Don't use `amin` for element-wise comparison of 2 arrays; when
+ ``a.shape[0]`` is 2, ``minimum(a[0], a[1])`` is faster than
+ ``amin(a, axis=0)``.
Examples
--------
@@ -2011,9 +2037,9 @@ def amin(a, axis=None, out=None, keepdims=False):
[2, 3]])
>>> np.amin(a) # Minimum of the flattened array
0
- >>> np.amin(a, axis=0) # Minima along the first axis
+ >>> np.amin(a, axis=0) # Minima along the first axis
array([0, 1])
- >>> np.amin(a, axis=1) # Minima along the second axis
+ >>> np.amin(a, axis=1) # Minima along the second axis
array([0, 2])
>>> b = np.arange(5, dtype=np.float)
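A sketch of the advice in the notes above: for two rows, the element-wise ufunc gives the same result as the axis reduction.

```python
import numpy as np

a = np.arange(8).reshape(2, 4)
col_min = np.amin(a, axis=0)       # reduction over the first axis
pairwise = np.minimum(a[0], a[1])  # same result, faster for 2 rows
print(np.array_equal(col_min, pairwise))  # True
```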
diff --git a/numpy/core/include/numpy/ndarraytypes.h b/numpy/core/include/numpy/ndarraytypes.h
index 7cc37bff8..bb3f1065c 100644
--- a/numpy/core/include/numpy/ndarraytypes.h
+++ b/numpy/core/include/numpy/ndarraytypes.h
@@ -1275,6 +1275,10 @@ typedef struct {
npy_intp bscoord[NPY_MAXDIMS];
PyObject *indexobj; /* creating obj */
+ /*
+     * consec is first used to indicate whether fancy indices are
+ * consecutive and then denotes at which axis they are inserted
+ */
int consec;
char *dataptr;
diff --git a/numpy/core/include/numpy/npy_math.h b/numpy/core/include/numpy/npy_math.h
index a7c50f6e6..625999022 100644
--- a/numpy/core/include/numpy/npy_math.h
+++ b/numpy/core/include/numpy/npy_math.h
@@ -6,6 +6,9 @@ extern "C" {
#endif
#include <math.h>
+#ifdef NPY_HAVE_CONFIG_H
+#include <npy_config.h>
+#endif
#include <numpy/npy_common.h>
/*
@@ -148,33 +151,51 @@ double npy_spacing(double x);
/*
* IEEE 754 fpu handling. Those are guaranteed to be macros
*/
-#ifndef NPY_HAVE_DECL_ISNAN
- #define npy_isnan(x) ((x) != (x))
+
+/* use builtins to avoid function calls in tight loops
+ * only available if npy_config.h is available (= numpy's own build) */
+#if HAVE___BUILTIN_ISNAN
+ #define npy_isnan(x) __builtin_isnan(x)
#else
- #ifdef _MSC_VER
- #define npy_isnan(x) _isnan((x))
+ #ifndef NPY_HAVE_DECL_ISNAN
+ #define npy_isnan(x) ((x) != (x))
#else
- #define npy_isnan(x) isnan((x))
+ #ifdef _MSC_VER
+ #define npy_isnan(x) _isnan((x))
+ #else
+ #define npy_isnan(x) isnan(x)
+ #endif
#endif
#endif
-#ifndef NPY_HAVE_DECL_ISFINITE
- #ifdef _MSC_VER
- #define npy_isfinite(x) _finite((x))
+
+/* only available if npy_config.h is available (= numpy's own build) */
+#if HAVE___BUILTIN_ISFINITE
+ #define npy_isfinite(x) __builtin_isfinite(x)
+#else
+ #ifndef NPY_HAVE_DECL_ISFINITE
+ #ifdef _MSC_VER
+ #define npy_isfinite(x) _finite((x))
+ #else
+ #define npy_isfinite(x) !npy_isnan((x) + (-x))
+ #endif
#else
- #define npy_isfinite(x) !npy_isnan((x) + (-x))
+ #define npy_isfinite(x) isfinite((x))
#endif
-#else
- #define npy_isfinite(x) isfinite((x))
#endif
-#ifndef NPY_HAVE_DECL_ISINF
- #define npy_isinf(x) (!npy_isfinite(x) && !npy_isnan(x))
+/* only available if npy_config.h is available (= numpy's own build) */
+#if HAVE___BUILTIN_ISINF
+ #define npy_isinf(x) __builtin_isinf(x)
#else
- #ifdef _MSC_VER
- #define npy_isinf(x) (!_finite((x)) && !_isnan((x)))
+ #ifndef NPY_HAVE_DECL_ISINF
+ #define npy_isinf(x) (!npy_isfinite(x) && !npy_isnan(x))
#else
- #define npy_isinf(x) isinf((x))
+ #ifdef _MSC_VER
+ #define npy_isinf(x) (!_finite((x)) && !_isnan((x)))
+ #else
+ #define npy_isinf(x) isinf((x))
+ #endif
#endif
#endif
diff --git a/numpy/core/include/numpy/ufuncobject.h b/numpy/core/include/numpy/ufuncobject.h
index 686d12c38..75611426c 100644
--- a/numpy/core/include/numpy/ufuncobject.h
+++ b/numpy/core/include/numpy/ufuncobject.h
@@ -319,6 +319,8 @@ typedef struct _loop1d_info {
void *data;
int *arg_types;
struct _loop1d_info *next;
+ int nargs;
+ PyArray_Descr **arg_dtypes;
} PyUFunc_Loop1d;
diff --git a/numpy/core/numeric.py b/numpy/core/numeric.py
index d689982db..a187d7c5b 100644
--- a/numpy/core/numeric.py
+++ b/numpy/core/numeric.py
@@ -2137,7 +2137,7 @@ def array_equal(a1, a2):
return False
if a1.shape != a2.shape:
return False
- return bool(equal(a1,a2).all())
+ return bool(asarray(a1 == a2).all())
def array_equiv(a1, a2):
"""
@@ -2179,7 +2179,7 @@ def array_equiv(a1, a2):
except:
return False
try:
- return bool(equal(a1,a2).all())
+ return bool(asarray(a1 == a2).all())
except ValueError:
return False
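A Python sketch of the patched logic (the function name is illustrative): using ``==`` instead of the ``np.equal`` ufunc means dtypes without an ``equal`` loop, such as structured dtypes, can still be compared.

```python
import numpy as np

def array_equal_sketch(a1, a2):
    # mirrors the patched np.array_equal above
    try:
        a1, a2 = np.asarray(a1), np.asarray(a2)
    except Exception:
        return False
    if a1.shape != a2.shape:
        return False
    return bool(np.asarray(a1 == a2).all())

print(array_equal_sketch([1, 2], [1, 2]))    # True
print(array_equal_sketch([1, 2], [[1, 2]]))  # False: shapes differ
```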
diff --git a/numpy/core/setup.py b/numpy/core/setup.py
index 3b08d6edd..5a1e6cf6e 100644
--- a/numpy/core/setup.py
+++ b/numpy/core/setup.py
@@ -161,6 +161,10 @@ def check_math_capabilities(config, moredefs, mathlibs):
check_funcs(OPTIONAL_STDFUNCS)
+ for f, args in OPTIONAL_INTRINSICS:
+ if config.check_func(f, decl=False, call=True, call_args=args):
+ moredefs.append((fname2def(f), 1))
+
# C99 functions: float and long double versions
check_funcs(C99_FUNCS_SINGLE)
check_funcs(C99_FUNCS_EXTENDED)
@@ -602,6 +606,8 @@ def configuration(parent_package='',top_path=None):
config.add_include_dirs(join('src', 'umath'))
config.add_include_dirs(join('src', 'npysort'))
+ config.add_define_macros([("HAVE_NPY_CONFIG_H", "1")])
+
config.numpy_include_dirs.extend(config.paths('include'))
deps = [join('src','npymath','_signbit.c'),
@@ -929,6 +935,13 @@ def configuration(parent_package='',top_path=None):
sources = [join('src','umath', 'test_rational.c.src')])
#######################################################################
+ # struct_ufunc_test module #
+ #######################################################################
+
+ config.add_extension('struct_ufunc_test',
+ sources = [join('src','umath', 'struct_ufunc_test.c.src')])
+
+ #######################################################################
# multiarray_tests module #
#######################################################################
diff --git a/numpy/core/setup_common.py b/numpy/core/setup_common.py
index 3f705cbe4..e778e507b 100644
--- a/numpy/core/setup_common.py
+++ b/numpy/core/setup_common.py
@@ -97,6 +97,15 @@ OPTIONAL_STDFUNCS = ["expm1", "log1p", "acosh", "asinh", "atanh",
"rint", "trunc", "exp2", "log2", "hypot", "atan2", "pow",
"copysign", "nextafter"]
+# optional gcc compiler builtins and their call arguments
+# call arguments are required as the compiler will do strict signature checking
+OPTIONAL_INTRINSICS = [("__builtin_isnan", '5.'),
+ ("__builtin_isinf", '5.'),
+ ("__builtin_isfinite", '5.'),
+ ("__builtin_bswap32", '5u'),
+ ("__builtin_bswap64", '5u'),
+ ]
+
 # Subset of OPTIONAL_STDFUNCS which may already have HAVE_* defined by Python.h
OPTIONAL_STDFUNCS_MAYBE = ["expm1", "log1p", "acosh", "atanh", "asinh", "hypot",
"copysign"]
diff --git a/numpy/core/src/multiarray/ctors.c b/numpy/core/src/multiarray/ctors.c
index f366a34b1..b1a9d9859 100644
--- a/numpy/core/src/multiarray/ctors.c
+++ b/numpy/core/src/multiarray/ctors.c
@@ -311,25 +311,38 @@ _strided_byte_swap(void *p, npy_intp stride, npy_intp n, int size)
case 1: /* no byteswap necessary */
break;
case 4:
- for (a = (char*)p; n > 0; n--, a += stride - 1) {
- b = a + 3;
- c = *a; *a++ = *b; *b-- = c;
- c = *a; *a = *b; *b = c;
+ for (a = (char*)p; n > 0; n--, a += stride) {
+ npy_uint32 * a_ = (npy_uint32 *)a;
+#ifdef HAVE___BUILTIN_BSWAP32
+ *a_ = __builtin_bswap32(*a_);
+#else
+ /* a decent compiler can convert this to bswap too */
+ *a_ = ((*a_ & 0xff000000u) >> 24) | ((*a_ & 0x00ff0000u) >> 8) |
+ ((*a_ & 0x0000ff00u) << 8) | ((*a_ & 0x000000ffu) << 24);
+#endif
}
break;
case 8:
- for (a = (char*)p; n > 0; n--, a += stride - 3) {
+ for (a = (char*)p; n > 0; n--) {
+#ifdef HAVE___BUILTIN_BSWAP64
+ npy_uint64 * a_ = (npy_uint64 *)a;
+ *a_ = __builtin_bswap64(*a_);
+ a += stride;
+#else
+ /* mask version would be faster but requires C99 */
b = a + 7;
c = *a; *a++ = *b; *b-- = c;
c = *a; *a++ = *b; *b-- = c;
c = *a; *a++ = *b; *b-- = c;
c = *a; *a = *b; *b = c;
+ a += stride - 3;
+#endif
}
break;
case 2:
for (a = (char*)p; n > 0; n--, a += stride) {
- b = a + 1;
- c = *a; *a = *b; *b = c;
+ npy_uint16 * a_ = (npy_uint16 *)a;
+ *a_ = (((*a_ >> 8) & 0xffu) | ((*a_ & 0xffu) << 8));
}
break;
default:
diff --git a/numpy/core/src/multiarray/iterators.c b/numpy/core/src/multiarray/iterators.c
index ce2ef4659..abaff6c98 100644
--- a/numpy/core/src/multiarray/iterators.c
+++ b/numpy/core/src/multiarray/iterators.c
@@ -97,7 +97,8 @@ NPY_NO_EXPORT int
parse_index(PyArrayObject *self, PyObject *op,
npy_intp *out_dimensions,
npy_intp *out_strides,
- npy_intp *out_offset)
+ npy_intp *out_offset,
+ int check_index)
{
int i, j, n;
int nd_old, nd_new, n_add, n_ellipsis;
@@ -136,7 +137,8 @@ parse_index(PyArrayObject *self, PyObject *op,
start = parse_index_entry(op1, &step_size, &n_steps,
nd_old < PyArray_NDIM(self) ?
PyArray_DIMS(self)[nd_old] : 0,
- nd_old, nd_old < PyArray_NDIM(self));
+ nd_old, check_index ?
+ nd_old < PyArray_NDIM(self) : 0);
Py_DECREF(op1);
if (start == -1) {
break;
diff --git a/numpy/core/src/multiarray/iterators.h b/numpy/core/src/multiarray/iterators.h
index 8276a3ceb..dad345935 100644
--- a/numpy/core/src/multiarray/iterators.h
+++ b/numpy/core/src/multiarray/iterators.h
@@ -9,7 +9,8 @@ NPY_NO_EXPORT int
parse_index(PyArrayObject *self, PyObject *op,
npy_intp *out_dimensions,
npy_intp *out_strides,
- npy_intp *out_offset);
+ npy_intp *out_offset,
+ int check_index);
NPY_NO_EXPORT PyObject
*iter_subscript(PyArrayIterObject *, PyObject *);
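The new `check_index` flag controls whether `parse_index` validates scalar indexes against the array's dimensions. When checking is enabled (the normal path), an out-of-range index fails from Python as usual; a quick sanity check:

```python
import numpy as np

a = np.arange(3)
try:
    a[5]          # scalar index is checked against the dimension
except IndexError as e:
    print("caught:", e)
```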
diff --git a/numpy/core/src/multiarray/mapping.c b/numpy/core/src/multiarray/mapping.c
index 0b4022874..17089761d 100644
--- a/numpy/core/src/multiarray/mapping.c
+++ b/numpy/core/src/multiarray/mapping.c
@@ -25,7 +25,7 @@
#define SOBJ_LISTTUP 4
static PyObject *
-array_subscript_simple(PyArrayObject *self, PyObject *op);
+array_subscript_simple(PyArrayObject *self, PyObject *op, int check_index);
/******************************************************************************
*** IMPLEMENT MAPPING PROTOCOL ***
@@ -226,7 +226,7 @@ PyArray_MapIterSwapAxes(PyArrayMapIterObject *mit, PyArrayObject **ret, int getm
* (n2,...,n1+n2-1,0,...,n2-1,n1+n2,...n3-1)
*/
n1 = mit->iters[0]->nd_m1 + 1;
- n2 = mit->iteraxes[0];
+ n2 = mit->consec; /* axes to insert at */
n3 = mit->nd;
/* use n1 as the boundary if getting but n2 if setting */
@@ -303,9 +303,7 @@ PyArray_GetMap(PyArrayMapIterObject *mit)
/* check for consecutive axes */
if ((mit->subspace != NULL) && (mit->consec)) {
- if (mit->iteraxes[0] > 0) { /* then we need to swap */
- PyArray_MapIterSwapAxes(mit, &ret, 1);
- }
+ PyArray_MapIterSwapAxes(mit, &ret, 1);
}
return (PyObject *)ret;
}
@@ -338,11 +336,9 @@ PyArray_SetMap(PyArrayMapIterObject *mit, PyObject *op)
return -1;
}
if ((mit->subspace != NULL) && (mit->consec)) {
- if (mit->iteraxes[0] > 0) { /* then we need to swap */
- PyArray_MapIterSwapAxes(mit, &arr, 0);
- if (arr == NULL) {
- return -1;
- }
+ PyArray_MapIterSwapAxes(mit, &arr, 0);
+ if (arr == NULL) {
+ return -1;
}
}
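The `consec`/`MapIterSwapAxes` handling above decides where the broadcast dimensions of the index arrays land in the result. This is observable from Python: when the advanced indices occupy consecutive positions, the broadcast dims replace them in place; when they are separated, the broadcast dims move to the front:

```python
import numpy as np

a = np.zeros((3, 4, 5))
# consecutive advanced index: result dims stay in place
print(a[:, [0, 1], :].shape)        # (3, 2, 5)
# non-consecutive advanced indices: broadcast dims go to the front
print(a[[0, 1], :, [0, 2]].shape)   # (2, 4)
```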
@@ -556,7 +552,7 @@ fancy_indexing_check(PyObject *args)
*/
NPY_NO_EXPORT PyObject *
-array_subscript_simple(PyArrayObject *self, PyObject *op)
+array_subscript_simple(PyArrayObject *self, PyObject *op, int check_index)
{
npy_intp dimensions[NPY_MAXDIMS], strides[NPY_MAXDIMS];
npy_intp offset;
@@ -601,7 +597,7 @@ array_subscript_simple(PyArrayObject *self, PyObject *op)
/* Standard (view-based) Indexing */
nd = parse_index(self, op, dimensions,
- strides, &offset);
+ strides, &offset, check_index);
if (nd == -1) {
return NULL;
}
@@ -1222,7 +1218,7 @@ array_subscript_fromobject(PyArrayObject *self, PyObject *op)
return array_subscript_fancy(self, op, fancy);
}
else {
- return array_subscript_simple(self, op);
+ return array_subscript_simple(self, op, 1);
}
}
@@ -1256,9 +1252,10 @@ array_subscript(PyArrayObject *self, PyObject *op)
ret = array_subscript_fancy(self, op, fancy);
}
else {
- ret = array_subscript_simple(self, op);
+ ret = array_subscript_simple(self, op, 1);
}
}
+
if (ret == NULL) {
return NULL;
}
@@ -1301,7 +1298,7 @@ array_ass_sub_simple(PyArrayObject *self, PyObject *ind, PyObject *op)
/* Rest of standard (view-based) indexing */
if (PyArray_CheckExact(self)) {
- tmp = (PyArrayObject *)array_subscript_simple(self, ind);
+ tmp = (PyArrayObject *)array_subscript_simple(self, ind, 1);
if (tmp == NULL) {
return -1;
}
@@ -1665,7 +1662,7 @@ _nonzero_indices(PyObject *myBool, PyArrayIterObject **iters)
}
/* convert an indexing object to an INTP indexing array iterator
- if possible -- otherwise, it is a Slice or Ellipsis object
+ if possible -- otherwise, it is a Slice, Ellipsis or None object
and has to be interpreted on bind to a particular
array so leave it NULL for now.
*/
@@ -1675,7 +1672,7 @@ _convert_obj(PyObject *obj, PyArrayIterObject **iter)
PyArray_Descr *indtype;
PyObject *arr;
- if (PySlice_Check(obj) || (obj == Py_Ellipsis)) {
+ if (PySlice_Check(obj) || (obj == Py_Ellipsis) || (obj == Py_None)) {
return 0;
}
else if (PyArray_Check(obj) && PyArray_ISBOOL((PyArrayObject *)obj)) {
@@ -1811,7 +1808,7 @@ PyArray_MapIterBind(PyArrayMapIterObject *mit, PyArrayObject *arr)
{
int subnd;
PyObject *sub, *obj = NULL;
- int i, j, n, curraxis, ellipexp, noellip;
+ int i, j, n, curraxis, ellipexp, noellip, newaxes;
PyArrayIterObject *it;
npy_intp dimsize;
npy_intp *indptr;
@@ -1827,24 +1824,18 @@ PyArray_MapIterBind(PyArrayMapIterObject *mit, PyArrayObject *arr)
if (mit->ait == NULL) {
goto fail;
}
- /* no subspace iteration needed. Finish up and Return */
- if (subnd == 0) {
- n = PyArray_NDIM(arr);
- for (i = 0; i < n; i++) {
- mit->iteraxes[i] = i;
- }
- goto finish;
- }
/*
* all indexing arrays have been converted to 0
* therefore we can extract the subspace with a simple
- * getitem call which will use view semantics
+ * getitem call which will use view semantics, but
+ * without index checking since all original normal
+ * indexes are checked later as fancy ones.
*
* But, be sure to do it with a true array.
*/
if (PyArray_CheckExact(arr)) {
- sub = array_subscript_simple(arr, mit->indexobj);
+ sub = array_subscript_simple(arr, mit->indexobj, 0);
}
else {
Py_INCREF(arr);
@@ -1852,32 +1843,53 @@ PyArray_MapIterBind(PyArrayMapIterObject *mit, PyArrayObject *arr)
if (obj == NULL) {
goto fail;
}
- sub = array_subscript_simple((PyArrayObject *)obj, mit->indexobj);
+ sub = array_subscript_simple((PyArrayObject *)obj, mit->indexobj, 0);
Py_DECREF(obj);
}
if (sub == NULL) {
goto fail;
}
+
+ subnd = PyArray_NDIM(sub);
+ /* no subspace iteration needed. Finish up and Return */
+ if (subnd == 0) {
+ n = PyArray_NDIM(arr);
+ for (i = 0; i < n; i++) {
+ mit->iteraxes[i] = i;
+ }
+ goto finish;
+ }
+
mit->subspace = (PyArrayIterObject *)PyArray_IterNew(sub);
Py_DECREF(sub);
if (mit->subspace == NULL) {
goto fail;
}
+
+ if (mit->nd + subnd > NPY_MAXDIMS) {
+ PyErr_Format(PyExc_ValueError,
+ "number of dimensions must be within [0, %d], "
+ "indexed array has %d",
+ NPY_MAXDIMS, mit->nd + subnd);
+ goto fail;
+ }
+
/* Expand dimensions of result */
- n = PyArray_NDIM(mit->subspace->ao);
- for (i = 0; i < n; i++) {
+ for (i = 0; i < subnd; i++) {
mit->dimensions[mit->nd+i] = PyArray_DIMS(mit->subspace->ao)[i];
}
- mit->nd += n;
+ mit->nd += subnd;
/*
- * Now, we still need to interpret the ellipsis and slice objects
- * to determine which axes the indexing arrays are referring to
+ * Now, we still need to interpret the ellipsis, slice and None
+ * objects to determine which axes the indexing arrays are
+ * referring to
*/
n = PyTuple_GET_SIZE(mit->indexobj);
/* The number of dimensions an ellipsis takes up */
- ellipexp = PyArray_NDIM(arr) - n + 1;
+ newaxes = subnd - (PyArray_NDIM(arr) - mit->numiter);
+ ellipexp = PyArray_NDIM(arr) + newaxes - n + 1;
/*
* Now fill in iteraxes -- remember indexing arrays have been
* converted to 0's in mit->indexobj
@@ -1886,6 +1898,8 @@ PyArray_MapIterBind(PyArrayMapIterObject *mit, PyArrayObject *arr)
j = 0;
/* Only expand the first ellipsis */
noellip = 1;
+ /* count newaxes before iter axes */
+ newaxes = 0;
memset(mit->bscoord, 0, sizeof(npy_intp)*PyArray_NDIM(arr));
for (i = 0; i < n; i++) {
/*
@@ -1900,6 +1914,11 @@ PyArray_MapIterBind(PyArrayMapIterObject *mit, PyArrayObject *arr)
curraxis += ellipexp;
noellip = 0;
}
+ else if (obj == Py_None) {
+ if (j == 0) {
+ newaxes += 1;
+ }
+ }
else {
npy_intp start = 0;
npy_intp stop, step;
@@ -1924,6 +1943,9 @@ PyArray_MapIterBind(PyArrayMapIterObject *mit, PyArrayObject *arr)
curraxis += 1;
}
}
+ if (mit->consec) {
+ mit->consec = mit->iteraxes[0] + newaxes;
+ }
finish:
/* Here check the indexes (now that we have iteraxes) */
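With `Py_None` now accepted by `_convert_obj` and counted through `newaxes` in `PyArray_MapIterBind`, `np.newaxis` can be combined with index arrays. The resulting shapes can be checked directly:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
# newaxis before the index array: inserted axis comes first
print(a[None, [0, 1]].shape)   # (1, 2, 3)
# newaxis after the index array: inserted axis follows the fancy dims
print(a[[0, 1], None].shape)   # (2, 1, 3)
```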
diff --git a/numpy/core/src/multiarray/multiarray_tests.c.src b/numpy/core/src/multiarray/multiarray_tests.c.src
index 4aa179b68..f22b7462d 100644
--- a/numpy/core/src/multiarray/multiarray_tests.c.src
+++ b/numpy/core/src/multiarray/multiarray_tests.c.src
@@ -471,11 +471,9 @@ map_increment(PyArrayMapIterObject *mit, PyObject *op, inplace_map_binop add_inp
}
if ((mit->subspace != NULL) && (mit->consec)) {
- if (mit->iteraxes[0] > 0) {
- PyArray_MapIterSwapAxes(mit, (PyArrayObject **)&arr, 0);
- if (arr == NULL) {
- return -1;
- }
+ PyArray_MapIterSwapAxes(mit, (PyArrayObject **)&arr, 0);
+ if (arr == NULL) {
+ return -1;
}
}
diff --git a/numpy/core/src/umath/operand_flag_tests.c.src b/numpy/core/src/umath/operand_flag_tests.c.src
index 0cae4db36..7eaf1fbfc 100644
--- a/numpy/core/src/umath/operand_flag_tests.c.src
+++ b/numpy/core/src/umath/operand_flag_tests.c.src
@@ -1,6 +1,5 @@
#define NPY_NO_DEPRECATED_API NPY_API_VERSION
-#include <stdint.h>
#include <math.h>
#include <Python.h>
#include <structmember.h>
diff --git a/numpy/core/src/umath/reduction.c b/numpy/core/src/umath/reduction.c
index f69aea2d0..3f2b94a4a 100644
--- a/numpy/core/src/umath/reduction.c
+++ b/numpy/core/src/umath/reduction.c
@@ -483,7 +483,8 @@ PyUFunc_ReduceWrapper(PyArrayObject *operand, PyArrayObject *out,
if (op_view == NULL) {
goto fail;
}
- if (PyArray_SIZE(op_view) == 0) {
+ /* empty op_view signals no reduction; but 0-d arrays cannot be empty */
+ if ((PyArray_SIZE(op_view) == 0) || (PyArray_NDIM(operand) == 0)) {
Py_DECREF(op_view);
op_view = NULL;
goto finish;
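The extra 0-d test matters because `PyArray_SIZE` of a 0-d array is always 1, so the empty-view signal alone can never fire for it. From Python, both an empty reduction (which yields the identity) and a 0-d reduction take the short path:

```python
import numpy as np

print(np.sum(np.array([])))    # 0.0, the identity of addition
print(np.sum(np.array(5.0)))   # 5.0, reducing a 0-d array
```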
diff --git a/numpy/core/src/umath/struct_ufunc_test.c.src b/numpy/core/src/umath/struct_ufunc_test.c.src
new file mode 100644
index 000000000..4bd24559f
--- /dev/null
+++ b/numpy/core/src/umath/struct_ufunc_test.c.src
@@ -0,0 +1,122 @@
+#include "Python.h"
+#include "math.h"
+#include "numpy/ndarraytypes.h"
+#include "numpy/ufuncobject.h"
+#include "numpy/npy_3kcompat.h"
+
+
+/*
+ * struct_ufunc_test.c
+ * This is the C code for creating your own
+ * NumPy ufunc for a structured array dtype.
+ *
+ * Details explaining the Python-C API can be found under
+ * 'Extending and Embedding' and 'Python/C API' at
+ * docs.python.org .
+ */
+
+static PyMethodDef StructUfuncTestMethods[] = {
+ {NULL, NULL, 0, NULL}
+};
+
+/* The loop definition must precede the PyMODINIT_FUNC. */
+
+static void add_uint64_triplet(char **args, npy_intp *dimensions,
+ npy_intp* steps, void* data)
+{
+ npy_intp i;
+ npy_intp is1=steps[0];
+ npy_intp is2=steps[1];
+ npy_intp os=steps[2];
+ npy_intp n=dimensions[0];
+    npy_uint64 *x, *y, *z;
+
+ char *i1=args[0];
+ char *i2=args[1];
+ char *op=args[2];
+
+ for (i = 0; i < n; i++) {
+
+        x = (npy_uint64*)i1;
+        y = (npy_uint64*)i2;
+        z = (npy_uint64*)op;
+
+ z[0] = x[0] + y[0];
+ z[1] = x[1] + y[1];
+ z[2] = x[2] + y[2];
+
+ i1 += is1;
+ i2 += is2;
+ op += os;
+ }
+}
+
+#if defined(NPY_PY3K)
+static struct PyModuleDef moduledef = {
+ PyModuleDef_HEAD_INIT,
+ "struct_ufunc_test",
+ NULL,
+ -1,
+ StructUfuncTestMethods,
+ NULL,
+ NULL,
+ NULL,
+ NULL
+};
+#endif
+
+#if defined(NPY_PY3K)
+PyMODINIT_FUNC PyInit_struct_ufunc_test(void)
+#else
+PyMODINIT_FUNC initstruct_ufunc_test(void)
+#endif
+{
+ PyObject *m, *add_triplet, *d;
+ PyObject *dtype_dict;
+ PyArray_Descr *dtype;
+ PyArray_Descr *dtypes[3];
+
+#if defined(NPY_PY3K)
+ m = PyModule_Create(&moduledef);
+#else
+ m = Py_InitModule("struct_ufunc_test", StructUfuncTestMethods);
+#endif
+
+ if (m == NULL) {
+#if defined(NPY_PY3K)
+ return NULL;
+#else
+ return;
+#endif
+ }
+
+ import_array();
+ import_umath();
+
+ add_triplet = PyUFunc_FromFuncAndData(NULL, NULL, NULL, 0, 2, 1,
+ PyUFunc_None, "add_triplet",
+ "add_triplet_docstring", 0);
+
+ dtype_dict = Py_BuildValue("[(s, s), (s, s), (s, s)]",
+ "f0", "u8", "f1", "u8", "f2", "u8");
+ PyArray_DescrConverter(dtype_dict, &dtype);
+ Py_DECREF(dtype_dict);
+
+ dtypes[0] = dtype;
+ dtypes[1] = dtype;
+ dtypes[2] = dtype;
+
+ PyUFunc_RegisterLoopForDescr(add_triplet,
+ dtype,
+ &add_uint64_triplet,
+ dtypes,
+ NULL);
+
+ d = PyModule_GetDict(m);
+
+ PyDict_SetItemString(d, "add_triplet", add_triplet);
+ Py_DECREF(add_triplet);
+#if defined(NPY_PY3K)
+ return m;
+#endif
+}
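What the new test module exercises can be mimicked in pure NumPy: build the `u8` triplet dtype that the module registers and add field-wise, which is exactly what `add_uint64_triplet` does per element. This is only a sketch of the semantics; the real test imports the compiled `struct_ufunc_test.add_triplet` ufunc.

```python
import numpy as np

dt = np.dtype([('f0', 'u8'), ('f1', 'u8'), ('f2', 'u8')])
x = np.array([(1, 2, 3)], dtype=dt)
y = np.array([(10, 20, 30)], dtype=dt)
z = np.zeros(1, dtype=dt)
for f in dt.names:             # field-wise add, as in the C inner loop
    z[f] = x[f] + y[f]
print(z)                       # z[0] == (11, 22, 33)
```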
diff --git a/numpy/core/src/umath/test_rational.c.src b/numpy/core/src/umath/test_rational.c.src
index aca3d21f3..f9153b87c 100644
--- a/numpy/core/src/umath/test_rational.c.src
+++ b/numpy/core/src/umath/test_rational.c.src
@@ -2,13 +2,12 @@
#define NPY_NO_DEPRECATED_API NPY_API_VERSION
-#include <stdint.h>
#include <math.h>
#include <Python.h>
#include <structmember.h>
#include <numpy/arrayobject.h>
#include <numpy/ufuncobject.h>
-#include "numpy/npy_3kcompat.h"
+#include <numpy/npy_3kcompat.h>
/* Relevant arithmetic exceptions */
@@ -47,63 +46,67 @@ set_zero_divide(void) {
/* Integer arithmetic utilities */
-static NPY_INLINE int32_t
-safe_neg(int32_t x) {
- if (x==(int32_t)1<<31) {
+static NPY_INLINE npy_int32
+safe_neg(npy_int32 x) {
+ if (x==(npy_int32)1<<31) {
set_overflow();
}
return -x;
}
-static NPY_INLINE int32_t
-safe_abs32(int32_t x) {
+static NPY_INLINE npy_int32
+safe_abs32(npy_int32 x) {
+ npy_int32 nx;
if (x>=0) {
return x;
}
- int32_t nx = -x;
+ nx = -x;
if (nx<0) {
set_overflow();
}
return nx;
}
-static NPY_INLINE int64_t
-safe_abs64(int64_t x) {
+static NPY_INLINE npy_int64
+safe_abs64(npy_int64 x) {
+ npy_int64 nx;
if (x>=0) {
return x;
}
- int64_t nx = -x;
+ nx = -x;
if (nx<0) {
set_overflow();
}
return nx;
}
-static NPY_INLINE int64_t
-gcd(int64_t x, int64_t y) {
+static NPY_INLINE npy_int64
+gcd(npy_int64 x, npy_int64 y) {
x = safe_abs64(x);
y = safe_abs64(y);
if (x < y) {
- int64_t t = x;
+ npy_int64 t = x;
x = y;
y = t;
}
while (y) {
+ npy_int64 t;
x = x%y;
- int64_t t = x;
+ t = x;
x = y;
y = t;
}
return x;
}
-static NPY_INLINE int64_t
-lcm(int64_t x, int64_t y) {
+static NPY_INLINE npy_int64
+lcm(npy_int64 x, npy_int64 y) {
+ npy_int64 lcm;
if (!x || !y) {
return 0;
}
x /= gcd(x,y);
- int64_t lcm = x*y;
+ lcm = x*y;
if (lcm/y!=x) {
set_overflow();
}
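The `gcd`/`lcm` pair translates directly; a Python sketch of the same algorithm (Euclid's algorithm for the gcd, and `x // gcd * y` for the lcm, without the fixed-width overflow guards the C needs):

```python
def gcd(x, y):
    x, y = abs(x), abs(y)
    while y:                 # Euclid's algorithm
        x, y = y, x % y
    return x

def lcm(x, y):
    if not x or not y:
        return 0
    return x // gcd(x, y) * y  # divide first, as the C does, to delay overflow

print(gcd(12, 18), lcm(4, 6))  # 6 12
```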
@@ -114,17 +117,17 @@ lcm(int64_t x, int64_t y) {
typedef struct {
/* numerator */
- int32_t n;
+ npy_int32 n;
/*
* denominator minus one: numpy.zeros() uses memset(0) for non-object
* types, so need to ensure that rational(0) has all zero bytes
*/
- int32_t dmm;
+ npy_int32 dmm;
} rational;
static NPY_INLINE rational
-make_rational_int(int64_t n) {
- rational r = {n,0};
+make_rational_int(npy_int64 n) {
+ rational r = {(npy_int32)n,0};
if (r.n != n) {
set_overflow();
}
@@ -132,17 +135,18 @@ make_rational_int(int64_t n) {
}
static rational
-make_rational_slow(int64_t n_, int64_t d_) {
+make_rational_slow(npy_int64 n_, npy_int64 d_) {
rational r = {0};
if (!d_) {
set_zero_divide();
}
else {
- int64_t g = gcd(n_,d_);
+ npy_int64 g = gcd(n_,d_);
+ npy_int32 d;
n_ /= g;
d_ /= g;
- r.n = n_;
- int32_t d = d_;
+ r.n = (npy_int32)n_;
+ d = (npy_int32)d_;
if (r.n!=n_ || d!=d_) {
set_overflow();
}
@@ -157,20 +161,20 @@ make_rational_slow(int64_t n_, int64_t d_) {
return r;
}
-static NPY_INLINE int32_t
+static NPY_INLINE npy_int32
d(rational r) {
return r.dmm+1;
}
/* Assumes d_ > 0 */
static rational
-make_rational_fast(int64_t n_, int64_t d_) {
- int64_t g = gcd(n_,d_);
+make_rational_fast(npy_int64 n_, npy_int64 d_) {
+ npy_int64 g = gcd(n_,d_);
+ rational r;
n_ /= g;
d_ /= g;
- rational r;
- r.n = n_;
- r.dmm = d_-1;
+ r.n = (npy_int32)n_;
+ r.dmm = (npy_int32)(d_-1);
if (r.n!=n_ || r.dmm+1!=d_) {
set_overflow();
}
@@ -191,29 +195,29 @@ rational_add(rational x, rational y) {
* Note that the numerator computation can never overflow int128_t,
* since each term is strictly under 2**128/4 (since d > 0).
*/
- return make_rational_fast((int64_t)x.n*d(y)+(int64_t)d(x)*y.n,
- (int64_t)d(x)*d(y));
+ return make_rational_fast((npy_int64)x.n*d(y)+(npy_int64)d(x)*y.n,
+ (npy_int64)d(x)*d(y));
}
static NPY_INLINE rational
rational_subtract(rational x, rational y) {
/* We're safe from overflow as with + */
- return make_rational_fast((int64_t)x.n*d(y)-(int64_t)d(x)*y.n,
- (int64_t)d(x)*d(y));
+ return make_rational_fast((npy_int64)x.n*d(y)-(npy_int64)d(x)*y.n,
+ (npy_int64)d(x)*d(y));
}
static NPY_INLINE rational
rational_multiply(rational x, rational y) {
/* We're safe from overflow as with + */
- return make_rational_fast((int64_t)x.n*y.n,(int64_t)d(x)*d(y));
+ return make_rational_fast((npy_int64)x.n*y.n,(npy_int64)d(x)*d(y));
}
static NPY_INLINE rational
rational_divide(rational x, rational y) {
- return make_rational_slow((int64_t)x.n*d(y),(int64_t)d(x)*y.n);
+ return make_rational_slow((npy_int64)x.n*d(y),(npy_int64)d(x)*y.n);
}
-static NPY_INLINE int64_t
+static NPY_INLINE npy_int64
rational_floor(rational x) {
/* Always round down */
if (x.n>=0) {
@@ -223,10 +227,10 @@ rational_floor(rational x) {
* This can be done without casting up to 64 bits, but it requires
* working out all the sign cases
*/
- return -((-(int64_t)x.n+d(x)-1)/d(x));
+ return -((-(npy_int64)x.n+d(x)-1)/d(x));
}
-static NPY_INLINE int64_t
+static NPY_INLINE npy_int64
rational_ceil(rational x) {
return -rational_floor(rational_negative(x));
}
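These arithmetic helpers reduce by the gcd before multiplying, which is exactly what Python's `fractions.Fraction` does; that makes `Fraction` a convenient oracle for cross-checking the rational loops:

```python
from fractions import Fraction

x, y = Fraction(1, 2), Fraction(1, 3)
# add, subtract, multiply, divide: always in lowest terms
print(x + y, x - y, x * y, x / y)  # 5/6 1/6 1/6 3/2
```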
@@ -245,14 +249,14 @@ rational_abs(rational x) {
return y;
}
-static NPY_INLINE int64_t
+static NPY_INLINE npy_int64
rational_rint(rational x) {
/*
* Round towards nearest integer, moving exact half integers towards
* zero
*/
- int32_t d_ = d(x);
- return (2*(int64_t)x.n+(x.n<0?-d_:d_))/(2*(int64_t)d_);
+ npy_int32 d_ = d(x);
+ return (2*(npy_int64)x.n+(x.n<0?-d_:d_))/(2*(npy_int64)d_);
}
static NPY_INLINE int
@@ -267,13 +271,14 @@ rational_inverse(rational x) {
set_zero_divide();
}
else {
+ npy_int32 d_;
y.n = d(x);
- int32_t d = x.n;
- if (d <= 0) {
- d = safe_neg(d);
+ d_ = x.n;
+ if (d_ <= 0) {
+ d_ = safe_neg(d_);
y.n = -y.n;
}
- y.dmm = d-1;
+ y.dmm = d_-1;
}
return y;
}
@@ -294,7 +299,7 @@ rational_ne(rational x, rational y) {
static NPY_INLINE int
rational_lt(rational x, rational y) {
- return (int64_t)x.n*d(y) < (int64_t)y.n*d(x);
+ return (npy_int64)x.n*d(y) < (npy_int64)y.n*d(x);
}
static NPY_INLINE int
@@ -312,7 +317,7 @@ rational_ge(rational x, rational y) {
return !rational_lt(x,y);
}
-static NPY_INLINE int32_t
+static NPY_INLINE npy_int32
rational_int(rational x) {
return x.n/d(x);
}
@@ -331,10 +336,11 @@ static int
scan_rational(const char** s, rational* x) {
long n,d;
int offset;
+ const char* ss;
if (sscanf(*s,"%ld%n",&n,&offset)<=0) {
return 0;
}
- const char* ss = *s+offset;
+ ss = *s+offset;
if (*ss!='/') {
*s = ss;
*x = make_rational_int(n);
@@ -352,7 +358,7 @@ scan_rational(const char** s, rational* x) {
/* Expose rational to Python as a numpy scalar */
typedef struct {
- PyObject_HEAD;
+ PyObject_HEAD
rational r;
} PyRational;
@@ -374,18 +380,24 @@ PyRational_FromRational(rational x) {
static PyObject*
pyrational_new(PyTypeObject* type, PyObject* args, PyObject* kwds) {
+ Py_ssize_t size;
+ PyObject* x[2];
+ long n[2]={0,1};
+ int i;
+ rational r;
if (kwds && PyDict_Size(kwds)) {
PyErr_SetString(PyExc_TypeError,
"constructor takes no keyword arguments");
return 0;
}
- Py_ssize_t size = PyTuple_GET_SIZE(args);
+ size = PyTuple_GET_SIZE(args);
if (size>2) {
PyErr_SetString(PyExc_TypeError,
"expected rational or numerator and optional denominator");
return 0;
}
- PyObject* x[2] = {PyTuple_GET_ITEM(args,0),PyTuple_GET_ITEM(args,1)};
+ x[0] = PyTuple_GET_ITEM(args,0);
+ x[1] = PyTuple_GET_ITEM(args,1);
if (size==1) {
if (PyRational_Check(x[0])) {
Py_INCREF(x[0]);
@@ -409,9 +421,9 @@ pyrational_new(PyTypeObject* type, PyObject* args, PyObject* kwds) {
return 0;
}
}
- long n[2]={0,1};
- int i;
for (i=0;i<size;i++) {
+ PyObject* y;
+ int eq;
n[i] = PyInt_AsLong(x[i]);
if (n[i]==-1 && PyErr_Occurred()) {
if (PyErr_ExceptionMatches(PyExc_TypeError)) {
@@ -423,11 +435,11 @@ pyrational_new(PyTypeObject* type, PyObject* args, PyObject* kwds) {
return 0;
}
/* Check that we had an exact integer */
- PyObject* y = PyInt_FromLong(n[i]);
+ y = PyInt_FromLong(n[i]);
if (!y) {
return 0;
}
- int eq = PyObject_RichCompareBool(x[i],y,Py_EQ);
+ eq = PyObject_RichCompareBool(x[i],y,Py_EQ);
Py_DECREF(y);
if (eq<0) {
return 0;
@@ -440,7 +452,7 @@ pyrational_new(PyTypeObject* type, PyObject* args, PyObject* kwds) {
return 0;
}
}
- rational r = make_rational_slow(n[0],n[1]);
+ r = make_rational_slow(n[0],n[1]);
if (PyErr_Occurred()) {
return 0;
}
@@ -452,41 +464,46 @@ pyrational_new(PyTypeObject* type, PyObject* args, PyObject* kwds) {
* overflow error for too long ints
*/
#define AS_RATIONAL(dst,object) \
- rational dst = {0}; \
- if (PyRational_Check(object)) { \
- dst = ((PyRational*)object)->r; \
- } \
- else { \
- long n_ = PyInt_AsLong(object); \
- if (n_==-1 && PyErr_Occurred()) { \
- if (PyErr_ExceptionMatches(PyExc_TypeError)) { \
- PyErr_Clear(); \
+ { \
+ dst.n = 0; \
+ if (PyRational_Check(object)) { \
+ dst = ((PyRational*)object)->r; \
+ } \
+ else { \
+ PyObject* y_; \
+ int eq_; \
+ long n_ = PyInt_AsLong(object); \
+ if (n_==-1 && PyErr_Occurred()) { \
+ if (PyErr_ExceptionMatches(PyExc_TypeError)) { \
+ PyErr_Clear(); \
+ Py_INCREF(Py_NotImplemented); \
+ return Py_NotImplemented; \
+ } \
+ return 0; \
+ } \
+ y_ = PyInt_FromLong(n_); \
+ if (!y_) { \
+ return 0; \
+ } \
+ eq_ = PyObject_RichCompareBool(object,y_,Py_EQ); \
+ Py_DECREF(y_); \
+ if (eq_<0) { \
+ return 0; \
+ } \
+ if (!eq_) { \
Py_INCREF(Py_NotImplemented); \
return Py_NotImplemented; \
} \
- return 0; \
- } \
- PyObject* y_ = PyInt_FromLong(n_); \
- if (!y_) { \
- return 0; \
+ dst = make_rational_int(n_); \
} \
- int eq_ = PyObject_RichCompareBool(object,y_,Py_EQ); \
- Py_DECREF(y_); \
- if (eq_<0) { \
- return 0; \
- } \
- if (!eq_) { \
- Py_INCREF(Py_NotImplemented); \
- return Py_NotImplemented; \
- } \
- dst = make_rational_int(n_); \
}
static PyObject*
pyrational_richcompare(PyObject* a, PyObject* b, int op) {
+ rational x, y;
+ int result = 0;
AS_RATIONAL(x,a);
AS_RATIONAL(y,b);
- int result = 0;
#define OP(py,op) case py: result = rational_##op(x,y); break;
switch (op) {
OP(Py_LT,lt)
@@ -538,9 +555,10 @@ pyrational_hash(PyObject* self) {
#define RATIONAL_BINOP_2(name,exp) \
static PyObject* \
pyrational_##name(PyObject* a, PyObject* b) { \
+ rational x, y, z; \
AS_RATIONAL(x,a); \
AS_RATIONAL(y,b); \
- rational z = exp; \
+ z = exp; \
if (PyErr_Occurred()) { \
return 0; \
} \
@@ -644,9 +662,9 @@ static PyGetSetDef pyrational_getset[] = {
static PyTypeObject PyRational_Type = {
#if defined(NPY_PY3K)
- PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ PyVarObject_HEAD_INIT(NULL, 0)
#else
- PyObject_HEAD_INIT(&PyType_Type)
+ PyObject_HEAD_INIT(NULL)
0, /* ob_size */
#endif
"rational", /* tp_name */
@@ -720,14 +738,16 @@ npyrational_setitem(PyObject* item, void* data, void* arr) {
}
else {
long n = PyInt_AsLong(item);
+ PyObject* y;
+ int eq;
if (n==-1 && PyErr_Occurred()) {
return -1;
}
- PyObject* y = PyInt_FromLong(n);
+ y = PyInt_FromLong(n);
if (!y) {
return -1;
}
- int eq = PyObject_RichCompareBool(item,y,Py_EQ);
+ eq = PyObject_RichCompareBool(item,y,Py_EQ);
Py_DECREF(y);
if (eq<0) {
return -1;
@@ -744,11 +764,11 @@ npyrational_setitem(PyObject* item, void* data, void* arr) {
}
static NPY_INLINE void
-byteswap(int32_t* x) {
+byteswap(npy_int32* x) {
char* p = (char*)x;
size_t i;
for (i = 0; i < sizeof(*x)/2; i++) {
- int j = sizeof(*x)-1-i;
+ size_t j = sizeof(*x)-1-i;
char t = p[i];
p[i] = p[j];
p[j] = t;
@@ -759,10 +779,10 @@ static void
npyrational_copyswapn(void* dst_, npy_intp dstride, void* src_,
npy_intp sstride, npy_intp n, int swap, void* arr) {
char *dst = (char*)dst_, *src = (char*)src_;
+ npy_intp i;
if (!src) {
return;
}
- npy_intp i;
if (swap) {
for (i = 0; i < n; i++) {
rational* r = (rational*)(dst+dstride*i);
@@ -783,10 +803,11 @@ npyrational_copyswapn(void* dst_, npy_intp dstride, void* src_,
static void
npyrational_copyswap(void* dst, void* src, int swap, void* arr) {
+ rational* r;
if (!src) {
return;
}
- rational* r = (rational*)dst;
+ r = (rational*)dst;
memcpy(r,src,sizeof(rational));
if (swap) {
byteswap(&r->n);
@@ -805,13 +826,17 @@ npyrational_compare(const void* d0, const void* d1, void* arr) {
static int \
npyrational_##name(void* data_, npy_intp n, \
npy_intp* max_ind, void* arr) { \
+ const rational* data; \
+ npy_intp best_i; \
+ rational best_r; \
+ npy_intp i; \
if (!n) { \
return 0; \
} \
- const rational* data = (rational*)data_; \
- npy_intp best_i = 0; \
- rational best_r = data[0]; \
- npy_intp i; \
+ data = (rational*)data_; \
+ best_i = 0; \
+ best_r = data[0]; \
for (i = 1; i < n; i++) { \
if (rational_##op(data[i],best_r)) { \
best_i = i; \
@@ -910,9 +935,9 @@ PyArray_Descr npyrational_descr = {
} \
}
#define DEFINE_INT_CAST(bits) \
- DEFINE_CAST(int##bits##_t,rational,rational y = make_rational_int(x);) \
- DEFINE_CAST(rational,int##bits##_t,int32_t z = rational_int(x); \
- int##bits##_t y = z; if (y != z) set_overflow();)
+ DEFINE_CAST(npy_int##bits,rational,rational y = make_rational_int(x);) \
+ DEFINE_CAST(rational,npy_int##bits,npy_int32 z = rational_int(x); \
+ npy_int##bits y = z; if (y != z) set_overflow();)
DEFINE_INT_CAST(8)
DEFINE_INT_CAST(16)
DEFINE_INT_CAST(32)
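The integer casts narrow through `npy_int32` and flag overflow whenever the round-trip changes the value. The same check, sketched in Python with a hypothetical helper `checked_cast`:

```python
def checked_cast(x, bits):
    # reduce into the signed bits-wide range, as a C truncation would
    y = ((x + (1 << (bits - 1))) % (1 << bits)) - (1 << (bits - 1))
    if y != x:               # mirrors "if (y != z) set_overflow()"
        raise OverflowError("%d does not fit in int%d" % (x, bits))
    return y

print(checked_cast(100, 8))  # -> 100
```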
@@ -955,8 +980,8 @@ RATIONAL_BINARY_UFUNC(greater,npy_bool,rational_gt(x,y))
RATIONAL_BINARY_UFUNC(less_equal,npy_bool,rational_le(x,y))
RATIONAL_BINARY_UFUNC(greater_equal,npy_bool,rational_ge(x,y))
-BINARY_UFUNC(gcd_ufunc,int64_t,int64_t,int64_t,gcd(x,y))
-BINARY_UFUNC(lcm_ufunc,int64_t,int64_t,int64_t,lcm(x,y))
+BINARY_UFUNC(gcd_ufunc,npy_int64,npy_int64,npy_int64,gcd(x,y))
+BINARY_UFUNC(lcm_ufunc,npy_int64,npy_int64,npy_int64,lcm(x,y))
#define UNARY_UFUNC(name,type,exp) \
void rational_ufunc_##name(char** args, npy_intp* dimensions, \
@@ -979,8 +1004,8 @@ UNARY_UFUNC(square,rational,rational_multiply(x,x))
UNARY_UFUNC(rint,rational,make_rational_int(rational_rint(x)))
UNARY_UFUNC(sign,rational,make_rational_int(rational_sign(x)))
UNARY_UFUNC(reciprocal,rational,rational_inverse(x))
-UNARY_UFUNC(numerator,int64_t,x.n)
-UNARY_UFUNC(denominator,int64_t,d(x))
+UNARY_UFUNC(numerator,npy_int64,x.n)
+UNARY_UFUNC(denominator,npy_int64,d(x))
static NPY_INLINE void
rational_matrix_multiply(char **args, npy_intp *dimensions, npy_intp *steps)
@@ -1059,8 +1084,8 @@ rational_ufunc_test_add(char** args, npy_intp* dimensions,
char *i0 = args[0], *i1 = args[1], *o = args[2];
int k;
for (k = 0; k < n; k++) {
- int64_t x = *(int64_t*)i0;
- int64_t y = *(int64_t*)i1;
+ npy_int64 x = *(npy_int64*)i0;
+ npy_int64 y = *(npy_int64*)i1;
*(rational*)o = rational_add(make_rational_fast(x, 1),
make_rational_fast(y, 1));
i0 += is0; i1 += is1; o += os;
@@ -1068,6 +1093,21 @@ rational_ufunc_test_add(char** args, npy_intp* dimensions,
}
+static void
+rational_ufunc_test_add_rationals(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* data) {
+ npy_intp is0 = steps[0], is1 = steps[1], os = steps[2], n = *dimensions;
+ char *i0 = args[0], *i1 = args[1], *o = args[2];
+ int k;
+ for (k = 0; k < n; k++) {
+ rational x = *(rational*)i0;
+ rational y = *(rational*)i1;
+ *(rational*)o = rational_add(x, y);
+ i0 += is0; i1 += is1; o += os;
+ }
+}
+
+
PyMethodDef module_methods[] = {
{0} /* sentinel */
};
@@ -1095,6 +1135,10 @@ PyMODINIT_FUNC inittest_rational(void) {
#endif
PyObject *m = NULL;
+ PyObject* numpy_str;
+ PyObject* numpy;
+ int npy_rational;
+ PyObject* gufunc;
import_array();
if (PyErr_Occurred()) {
@@ -1104,11 +1148,11 @@ PyMODINIT_FUNC inittest_rational(void) {
if (PyErr_Occurred()) {
goto fail;
}
- PyObject* numpy_str = PyUString_FromString("numpy");
+ numpy_str = PyUString_FromString("numpy");
if (!numpy_str) {
goto fail;
}
- PyObject* numpy = PyImport_Import(numpy_str);
+ numpy = PyImport_Import(numpy_str);
Py_DECREF(numpy_str);
if (!numpy) {
goto fail;
@@ -1137,7 +1181,7 @@ PyMODINIT_FUNC inittest_rational(void) {
npyrational_arrfuncs.fillwithscalar = npyrational_fillwithscalar;
/* Left undefined: scanfunc, fromstr, sort, argsort */
Py_TYPE(&npyrational_descr) = &PyArrayDescr_Type;
- int npy_rational = PyArray_RegisterDataType(&npyrational_descr);
+ npy_rational = PyArray_RegisterDataType(&npyrational_descr);
if (npy_rational<0) {
goto fail;
}
@@ -1149,21 +1193,23 @@ PyMODINIT_FUNC inittest_rational(void) {
}
/* Register casts to and from rational */
- #define REGISTER_CAST(From,To,from_descr,to_typenum,safe) \
- PyArray_Descr* from_descr_##From##_##To = (from_descr); \
- if (PyArray_RegisterCastFunc(from_descr_##From##_##To, (to_typenum), \
- npycast_##From##_##To) < 0) { \
- goto fail; \
- } \
- if (safe && PyArray_RegisterCanCast(from_descr_##From##_##To, \
- (to_typenum), \
- NPY_NOSCALAR) < 0) { \
- goto fail; \
+ #define REGISTER_CAST(From,To,from_descr,to_typenum,safe) { \
+ PyArray_Descr* from_descr_##From##_##To = (from_descr); \
+ if (PyArray_RegisterCastFunc(from_descr_##From##_##To, \
+ (to_typenum), \
+ npycast_##From##_##To) < 0) { \
+ goto fail; \
+ } \
+ if (safe && PyArray_RegisterCanCast(from_descr_##From##_##To, \
+ (to_typenum), \
+ NPY_NOSCALAR) < 0) { \
+ goto fail; \
+ } \
}
#define REGISTER_INT_CASTS(bits) \
- REGISTER_CAST(int##bits##_t, rational, \
+ REGISTER_CAST(npy_int##bits, rational, \
PyArray_DescrFromType(NPY_INT##bits), npy_rational, 1) \
- REGISTER_CAST(rational, int##bits##_t, &npyrational_descr, \
+ REGISTER_CAST(rational, npy_int##bits, &npyrational_descr, \
NPY_INT##bits, 0)
REGISTER_INT_CASTS(8)
REGISTER_INT_CASTS(16)
@@ -1179,10 +1225,10 @@ PyMODINIT_FUNC inittest_rational(void) {
#define REGISTER_UFUNC(name,...) { \
PyUFuncObject* ufunc = \
(PyUFuncObject*)PyObject_GetAttrString(numpy, #name); \
+ int _types[] = __VA_ARGS__; \
if (!ufunc) { \
goto fail; \
} \
- int _types[] = __VA_ARGS__; \
if (sizeof(_types)/sizeof(int)!=ufunc->nargs) { \
PyErr_Format(PyExc_AssertionError, \
"ufunc %s takes %d arguments, our loop takes %ld", \
@@ -1244,42 +1290,64 @@ PyMODINIT_FUNC inittest_rational(void) {
PyModule_AddObject(m,"rational",(PyObject*)&PyRational_Type);
/* Create matrix multiply generalized ufunc */
- PyObject* gufunc = PyUFunc_FromFuncAndDataAndSignature(0,0,0,0,2,1,
- PyUFunc_None,(char*)"matrix_multiply",
- (char*)"return result of multiplying two matrices of rationals",
- 0,"(m,n),(n,p)->(m,p)");
- if (!gufunc) {
- goto fail;
- }
- int types2[3] = {npy_rational,npy_rational,npy_rational};
- if (PyUFunc_RegisterLoopForType((PyUFuncObject*)gufunc, npy_rational,
- rational_gufunc_matrix_multiply, types2, 0) < 0) {
- goto fail;
+ {
+ int types2[3] = {npy_rational,npy_rational,npy_rational};
+ PyObject* gufunc = PyUFunc_FromFuncAndDataAndSignature(0,0,0,0,2,1,
+ PyUFunc_None,(char*)"matrix_multiply",
+ (char*)"return result of multiplying two matrices of rationals",
+ 0,"(m,n),(n,p)->(m,p)");
+ if (!gufunc) {
+ goto fail;
+ }
+ if (PyUFunc_RegisterLoopForType((PyUFuncObject*)gufunc, npy_rational,
+ rational_gufunc_matrix_multiply, types2, 0) < 0) {
+ goto fail;
+ }
+ PyModule_AddObject(m,"matrix_multiply",(PyObject*)gufunc);
}
- PyModule_AddObject(m,"matrix_multiply",(PyObject*)gufunc);
/* Create test ufunc with built in input types and rational output type */
- PyObject* ufunc = PyUFunc_FromFuncAndData(0,0,0,0,2,1,
- PyUFunc_None,(char*)"test_add",
- (char*)"add two matrices of int64 and return rational matrix",0);
- if (!ufunc) {
- goto fail;
+ {
+ int types3[3] = {NPY_INT64,NPY_INT64,npy_rational};
+ PyObject* ufunc = PyUFunc_FromFuncAndData(0,0,0,0,2,1,
+ PyUFunc_None,(char*)"test_add",
+ (char*)"add two matrices of int64 and return rational matrix",0);
+ if (!ufunc) {
+ goto fail;
+ }
+ if (PyUFunc_RegisterLoopForType((PyUFuncObject*)ufunc, npy_rational,
+ rational_ufunc_test_add, types3, 0) < 0) {
+ goto fail;
+ }
+ PyModule_AddObject(m,"test_add",(PyObject*)ufunc);
}
- int types3[3] = {NPY_INT64,NPY_INT64,npy_rational};
- if (PyUFunc_RegisterLoopForType((PyUFuncObject*)ufunc, npy_rational,
- rational_ufunc_test_add, types3, 0) < 0) {
- goto fail;
+
+ /* Create test ufunc with rational types using RegisterLoopForDescr */
+ {
+        PyArray_Descr* types[3] = {&npyrational_descr,
+                                   &npyrational_descr,
+                                   &npyrational_descr};
+        PyObject* ufunc = PyUFunc_FromFuncAndData(0,0,0,0,2,1,
+            PyUFunc_None,(char*)"test_add_rationals",
+            (char*)"add two matrices of rationals and return rational matrix",0);
+        if (!ufunc) {
+            goto fail;
+        }
+ if (PyUFunc_RegisterLoopForDescr((PyUFuncObject*)ufunc, &npyrational_descr,
+ rational_ufunc_test_add_rationals, types, 0) < 0) {
+ goto fail;
+ }
+ PyModule_AddObject(m,"test_add_rationals",(PyObject*)ufunc);
}
- PyModule_AddObject(m,"test_add",(PyObject*)ufunc);
/* Create numerator and denominator ufuncs */
#define NEW_UNARY_UFUNC(name,type,doc) { \
+ int types[2] = {npy_rational,type}; \
PyObject* ufunc = PyUFunc_FromFuncAndData(0,0,0,0,1,1, \
PyUFunc_None,(char*)#name,(char*)doc,0); \
if (!ufunc) { \
goto fail; \
} \
- int types[2] = {npy_rational,type}; \
if (PyUFunc_RegisterLoopForType((PyUFuncObject*)ufunc, \
npy_rational,rational_ufunc_##name,types,0)<0) { \
goto fail; \
diff --git a/numpy/core/src/umath/ufunc_object.c b/numpy/core/src/umath/ufunc_object.c
index c9cc18aff..1bf56a0f0 100644
--- a/numpy/core/src/umath/ufunc_object.c
+++ b/numpy/core/src/umath/ufunc_object.c
@@ -731,8 +731,9 @@ static int get_ufunc_arguments(PyUFuncObject *ufunc,
PyObject *obj, *context;
PyObject *str_key_obj = NULL;
char *ufunc_name;
+ int type_num;
- int any_flexible = 0, any_object = 0;
+ int any_flexible = 0, any_object = 0, any_flexible_userloops = 0;
ufunc_name = ufunc->name ? ufunc->name : "<unnamed ufunc>";
@@ -771,23 +772,55 @@ static int get_ufunc_arguments(PyUFuncObject *ufunc,
if (out_op[i] == NULL) {
return -1;
}
+
+ type_num = PyArray_DESCR(out_op[i])->type_num;
if (!any_flexible &&
- PyTypeNum_ISFLEXIBLE(PyArray_DESCR(out_op[i])->type_num)) {
+ PyTypeNum_ISFLEXIBLE(type_num)) {
any_flexible = 1;
}
if (!any_object &&
- PyTypeNum_ISOBJECT(PyArray_DESCR(out_op[i])->type_num)) {
+ PyTypeNum_ISOBJECT(type_num)) {
any_object = 1;
}
+
+ /*
+         * If any operand is a flexible dtype, check to see if any
+         * struct dtype ufunc loops are registered. A loop has been
+         * registered for a struct dtype if its arg_dtypes array is not NULL.
+ */
+ if (PyTypeNum_ISFLEXIBLE(type_num) &&
+ !any_flexible_userloops &&
+ ufunc->userloops != NULL) {
+ PyUFunc_Loop1d *funcdata;
+ PyObject *key, *obj;
+ key = PyInt_FromLong(type_num);
+ if (key == NULL) {
+ continue;
+ }
+ obj = PyDict_GetItem(ufunc->userloops, key);
+ Py_DECREF(key);
+ if (obj == NULL) {
+ continue;
+ }
+ funcdata = (PyUFunc_Loop1d *)NpyCapsule_AsVoidPtr(obj);
+ while (funcdata != NULL) {
+ if (funcdata->arg_dtypes != NULL) {
+ any_flexible_userloops = 1;
+ break;
+ }
+ funcdata = funcdata->next;
+ }
+ }
}
/*
* Indicate not implemented if there are flexible objects (structured
- * type or string) but no object types.
+ * type or string) but no object types and no registered struct
+ * dtype ufuncs.
*
* Not sure - adding this increased to 246 errors, 150 failures.
*/
- if (any_flexible && !any_object) {
+ if (any_flexible && !any_flexible_userloops && !any_object) {
return -2;
}
@@ -2015,18 +2048,6 @@ PyUFunc_GeneralizedFunction(PyUFuncObject *ufunc,
NPY_ITER_NO_BROADCAST;
}
- /*
- * If there are no iteration dimensions, create a fake one
- * so that the scalar edge case works right.
- */
- if (iter_ndim == 0) {
- iter_ndim = 1;
- iter_shape[0] = 1;
- for (i = 0; i < nop; ++i) {
- op_axes[i][0] = -1;
- }
- }
-
iter_flags = ufunc->iter_flags |
NPY_ITER_MULTI_INDEX |
NPY_ITER_REFS_OK |
@@ -3738,31 +3759,16 @@ PyUFunc_GenericReduction(PyUFuncObject *ufunc, PyObject *args,
* 'prod', et al, also allow a reduction where axis=0, even
* though this is technically incorrect.
*/
- if (operation == UFUNC_REDUCE &&
- (naxes == 0 || (naxes == 1 && axes[0] == 0))) {
+        if (!(operation == UFUNC_REDUCE &&
+                (naxes == 0 || (naxes == 1 && axes[0] == 0)))) {
+            PyErr_Format(PyExc_TypeError, "cannot %s on a scalar",
+                         _reduce_type[operation]);
             Py_XDECREF(otype);
-            /* If there's an output parameter, copy the value */
-            if (out != NULL) {
-                if (PyArray_CopyInto(out, mp) < 0) {
-                    Py_DECREF(mp);
-                    return NULL;
-                }
-                else {
-                    Py_DECREF(mp);
-                    Py_INCREF(out);
-                    return (PyObject *)out;
-                }
-            }
-            /* Otherwise return the array unscathed */
-            else {
-                return PyArray_Return(mp);
-            }
         }
-        PyErr_Format(PyExc_TypeError, "cannot %s on a scalar",
-                     _reduce_type[operation]);
-        Py_XDECREF(otype);
-        Py_DECREF(mp);
-        return NULL;
+        /* Reduce over the scalar: let the normal reduce machinery run */
+        naxes = 0;
}
/*
@@ -4390,9 +4396,19 @@ cmp_arg_types(int *arg1, int *arg2, int n)
static NPY_INLINE void
_free_loop1d_list(PyUFunc_Loop1d *data)
{
+ int i;
+
while (data != NULL) {
PyUFunc_Loop1d *next = data->next;
PyArray_free(data->arg_types);
+
+ if (data->arg_dtypes != NULL) {
+ for (i = 0; i < data->nargs; i++) {
+ Py_DECREF(data->arg_dtypes[i]);
+ }
+ PyArray_free(data->arg_dtypes);
+ }
+
PyArray_free(data);
data = next;
}
@@ -4415,6 +4431,112 @@ _loop1d_list_free(void *ptr)
#endif
+/*
+ * This function allows the user to register a 1-d loop with an already
+ * created ufunc. This function is similar to RegisterLoopForType except
+ * that it allows a 1-d loop to be registered with PyArray_Descr objects
+ * instead of dtype type num values. This allows a 1-d loop to be registered
+ * for a structured array dtype or a custom dtype. The ufunc is called
+ * whenever any of its input arguments match the user_dtype argument.
+ * ufunc - ufunc object created from call to PyUFunc_FromFuncAndData
+ * user_dtype - dtype that ufunc will be registered with
+ * function - 1-d loop function pointer
+ * arg_dtypes - array of dtype objects describing the ufunc operands
+ * data - arbitrary data pointer passed in to loop function
+ */
+/*UFUNC_API*/
+NPY_NO_EXPORT int
+PyUFunc_RegisterLoopForDescr(PyUFuncObject *ufunc,
+ PyArray_Descr *user_dtype,
+ PyUFuncGenericFunction function,
+ PyArray_Descr **arg_dtypes,
+ void *data)
+{
+ int i;
+ int result = 0;
+ int *arg_typenums;
+ PyObject *key, *cobj;
+
+ if (user_dtype == NULL) {
+ PyErr_SetString(PyExc_TypeError,
+ "unknown user defined struct dtype");
+ return -1;
+ }
+
+ key = PyInt_FromLong((long) user_dtype->type_num);
+ if (key == NULL) {
+ return -1;
+ }
+
+ arg_typenums = PyArray_malloc(ufunc->nargs * sizeof(int));
+ if (arg_typenums == NULL) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ if (arg_dtypes != NULL) {
+ for (i = 0; i < ufunc->nargs; i++) {
+ arg_typenums[i] = arg_dtypes[i]->type_num;
+ }
+ }
+ else {
+ for (i = 0; i < ufunc->nargs; i++) {
+ arg_typenums[i] = user_dtype->type_num;
+ }
+ }
+
+ result = PyUFunc_RegisterLoopForType(ufunc, user_dtype->type_num,
+ function, arg_typenums, data);
+
+ if (result == 0) {
+ cobj = PyDict_GetItem(ufunc->userloops, key);
+ if (cobj == NULL) {
+ PyErr_SetString(PyExc_KeyError,
+ "userloop for user dtype not found");
+ result = -1;
+ }
+ else {
+ PyUFunc_Loop1d *current, *prev = NULL;
+ int cmp = 1;
+ current = (PyUFunc_Loop1d *)NpyCapsule_AsVoidPtr(cobj);
+ while (current != NULL) {
+ cmp = cmp_arg_types(current->arg_types,
+ arg_typenums, ufunc->nargs);
+ if (cmp >= 0 && current->arg_dtypes == NULL) {
+ break;
+ }
+ prev = current;
+ current = current->next;
+ }
+ if (cmp == 0 && current->arg_dtypes == NULL) {
+ current->arg_dtypes = PyArray_malloc(ufunc->nargs *
+ sizeof(PyArray_Descr*));
+ if (arg_dtypes != NULL) {
+ for (i = 0; i < ufunc->nargs; i++) {
+ current->arg_dtypes[i] = arg_dtypes[i];
+ Py_INCREF(current->arg_dtypes[i]);
+ }
+ }
+ else {
+ for (i = 0; i < ufunc->nargs; i++) {
+ current->arg_dtypes[i] = user_dtype;
+ Py_INCREF(current->arg_dtypes[i]);
+ }
+ }
+ current->nargs = ufunc->nargs;
+ }
+ else {
+ result = -1;
+ }
+ }
+ }
+
+ PyArray_free(arg_typenums);
+
+ Py_DECREF(key);
+
+ return result;
+}
+
/*UFUNC_API*/
NPY_NO_EXPORT int
PyUFunc_RegisterLoopForType(PyUFuncObject *ufunc,
@@ -4430,7 +4552,7 @@ PyUFunc_RegisterLoopForType(PyUFuncObject *ufunc,
int *newtypes=NULL;
descr=PyArray_DescrFromType(usertype);
- if ((usertype < NPY_USERDEF) || (descr==NULL)) {
+ if ((usertype < NPY_USERDEF && usertype != NPY_VOID) || (descr==NULL)) {
PyErr_SetString(PyExc_TypeError, "unknown user-defined type");
return -1;
}
@@ -4466,6 +4588,8 @@ PyUFunc_RegisterLoopForType(PyUFuncObject *ufunc,
funcdata->arg_types = newtypes;
funcdata->data = data;
funcdata->next = NULL;
+ funcdata->arg_dtypes = NULL;
+ funcdata->nargs = 0;
/* Get entry for this user-defined type*/
cobj = PyDict_GetItem(ufunc->userloops, key);
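The registration code above keys `ufunc->userloops` by type number and chains loops in a linked list; `PyUFunc_RegisterLoopForDescr` additionally attaches `arg_dtypes`, which is what lets several struct dtypes (all sharing the `NPY_VOID` type number) coexist under one key. A rough Python sketch of that registry (all names hypothetical, lists standing in for the C linked list):

```python
# Hypothetical sketch of the userloops registry described above: a dict
# keyed by type_num, each entry a list of loops. Loops registered via
# RegisterLoopForDescr also carry arg_dtypes, so different struct dtypes
# that all report NPY_VOID can still be told apart.
userloops = {}

def register_loop(type_num, loop, arg_dtypes=None):
    userloops.setdefault(type_num, []).append(
        {'loop': loop, 'arg_dtypes': arg_dtypes})

def find_loop(type_num, arg_dtypes):
    for entry in userloops.get(type_num, []):
        if entry['arg_dtypes'] is None or entry['arg_dtypes'] == arg_dtypes:
            return entry['loop']
    return None

NPY_VOID = 20  # NumPy's type number for void/structured dtypes
register_loop(NPY_VOID, 'add_rationals',
              arg_dtypes=('rational', 'rational', 'rational'))
```

This is only a model of the lookup order, not the C implementation; the real code also deduplicates by comparing `arg_types` with `cmp_arg_types`.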
diff --git a/numpy/core/src/umath/ufunc_type_resolution.c b/numpy/core/src/umath/ufunc_type_resolution.c
index 8344d73e5..6a16692b0 100644
--- a/numpy/core/src/umath/ufunc_type_resolution.c
+++ b/numpy/core/src/umath/ufunc_type_resolution.c
@@ -1180,14 +1180,16 @@ find_userloop(PyUFuncObject *ufunc,
int last_userdef = -1;
for (i = 0; i < nargs; ++i) {
+ int type_num;
/* no more ufunc arguments to check */
if (dtypes[i] == NULL) {
break;
}
- int type_num = dtypes[i]->type_num;
- if (type_num != last_userdef && PyTypeNum_ISUSERDEF(type_num)) {
+ type_num = dtypes[i]->type_num;
+ if (type_num != last_userdef &&
+ (PyTypeNum_ISUSERDEF(type_num) || type_num == NPY_VOID)) {
PyObject *key, *obj;
last_userdef = type_num;
@@ -1435,7 +1437,7 @@ ufunc_loop_matches(PyUFuncObject *self,
NPY_CASTING output_casting,
int any_object,
int use_min_scalar,
- int *types,
+ int *types, PyArray_Descr **dtypes,
int *out_no_castable_output,
char *out_err_src_typecode,
char *out_err_dst_typecode)
@@ -1462,7 +1464,18 @@ ufunc_loop_matches(PyUFuncObject *self,
return 0;
}
- tmp = PyArray_DescrFromType(types[i]);
+ /*
+ * If type num is NPY_VOID and struct dtypes have been passed in,
+ * use struct dtype object. Otherwise create new dtype object
+ * from type num.
+ */
+ if (types[i] == NPY_VOID && dtypes != NULL) {
+ tmp = dtypes[i];
+ Py_INCREF(tmp);
+ }
+ else {
+ tmp = PyArray_DescrFromType(types[i]);
+ }
if (tmp == NULL) {
return -1;
}
@@ -1524,7 +1537,7 @@ ufunc_loop_matches(PyUFuncObject *self,
static int
set_ufunc_loop_data_types(PyUFuncObject *self, PyArrayObject **op,
PyArray_Descr **out_dtypes,
- int *type_nums)
+ int *type_nums, PyArray_Descr **dtypes)
{
int i, nin = self->nin, nop = nin + self->nout;
@@ -1535,11 +1548,16 @@ set_ufunc_loop_data_types(PyUFuncObject *self, PyArrayObject **op,
* instead of creating a new one, similarly to preserve metadata.
**/
for (i = 0; i < nop; ++i) {
+ if (dtypes != NULL) {
+ out_dtypes[i] = dtypes[i];
+ Py_XINCREF(out_dtypes[i]);
/*
* Copy the dtype from 'op' if the type_num matches,
* to preserve metadata.
*/
- if (op[i] != NULL && PyArray_DESCR(op[i])->type_num == type_nums[i]) {
+ }
+ else if (op[i] != NULL &&
+ PyArray_DESCR(op[i])->type_num == type_nums[i]) {
out_dtypes[i] = ensure_dtype_nbo(PyArray_DESCR(op[i]));
Py_XINCREF(out_dtypes[i]);
}
@@ -1594,14 +1612,16 @@ linear_search_userloop_type_resolver(PyUFuncObject *self,
int last_userdef = -1;
for (i = 0; i < nop; ++i) {
+ int type_num;
/* no more ufunc arguments to check */
if (op[i] == NULL) {
break;
}
- int type_num = PyArray_DESCR(op[i])->type_num;
- if (type_num != last_userdef && PyTypeNum_ISUSERDEF(type_num)) {
+ type_num = PyArray_DESCR(op[i])->type_num;
+ if (type_num != last_userdef &&
+ (PyTypeNum_ISUSERDEF(type_num) || type_num == NPY_VOID)) {
PyObject *key, *obj;
last_userdef = type_num;
@@ -1621,7 +1641,7 @@ linear_search_userloop_type_resolver(PyUFuncObject *self,
switch (ufunc_loop_matches(self, op,
input_casting, output_casting,
any_object, use_min_scalar,
- types,
+ types, funcdata->arg_dtypes,
out_no_castable_output, out_err_src_typecode,
out_err_dst_typecode)) {
/* Error */
@@ -1629,7 +1649,7 @@ linear_search_userloop_type_resolver(PyUFuncObject *self,
return -1;
/* Found a match */
case 1:
- set_ufunc_loop_data_types(self, op, out_dtype, types);
+ set_ufunc_loop_data_types(self, op, out_dtype, types, funcdata->arg_dtypes);
return 1;
}
@@ -1705,12 +1725,13 @@ type_tuple_userloop_type_resolver(PyUFuncObject *self,
switch (ufunc_loop_matches(self, op,
casting, casting,
any_object, use_min_scalar,
- types,
+ types, NULL,
&no_castable_output, &err_src_typecode,
&err_dst_typecode)) {
/* It works */
case 1:
- set_ufunc_loop_data_types(self, op, out_dtype, types);
+ set_ufunc_loop_data_types(self, op,
+ out_dtype, types, NULL);
return 1;
/* Didn't match */
case 0:
@@ -1873,7 +1894,7 @@ linear_search_type_resolver(PyUFuncObject *self,
switch (ufunc_loop_matches(self, op,
input_casting, output_casting,
any_object, use_min_scalar,
- types,
+ types, NULL,
&no_castable_output, &err_src_typecode,
&err_dst_typecode)) {
/* Error */
@@ -1881,7 +1902,7 @@ linear_search_type_resolver(PyUFuncObject *self,
return -1;
/* Found a match */
case 1:
- set_ufunc_loop_data_types(self, op, out_dtype, types);
+ set_ufunc_loop_data_types(self, op, out_dtype, types, NULL);
return 0;
}
}
@@ -2079,7 +2100,7 @@ type_tuple_type_resolver(PyUFuncObject *self,
switch (ufunc_loop_matches(self, op,
casting, casting,
any_object, use_min_scalar,
- types,
+ types, NULL,
&no_castable_output, &err_src_typecode,
&err_dst_typecode)) {
/* Error */
@@ -2087,7 +2108,7 @@ type_tuple_type_resolver(PyUFuncObject *self,
return -1;
/* It worked */
case 1:
- set_ufunc_loop_data_types(self, op, out_dtype, types);
+ set_ufunc_loop_data_types(self, op, out_dtype, types, NULL);
return 0;
/* Didn't work */
case 0:
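The `NPY_VOID` special-casing above is needed because every structured dtype reports the same type number; only the full descriptor distinguishes them, which is why the resolver must consult the per-loop `arg_dtypes`. This is easy to confirm from Python:

```python
import numpy as np

# Every structured dtype shares the NPY_VOID type number, so type_num
# alone cannot distinguish them -- hence the arg_dtypes checks above.
a = np.dtype('u4,u4,u4')
b = np.dtype('S1,u4')
assert a.num == b.num == np.dtype(np.void).num
assert a != b  # the full descriptors still differ
```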
diff --git a/numpy/core/tests/test_api.py b/numpy/core/tests/test_api.py
index 376097f7b..1f56d6cf6 100644
--- a/numpy/core/tests/test_api.py
+++ b/numpy/core/tests/test_api.py
@@ -276,5 +276,13 @@ def test_contiguous_flags():
check_contig(a.ravel(), True, True)
check_contig(np.ones((1,3,1)).squeeze(), True, True)
+def test_broadcast_arrays():
+ # Test user defined dtypes
+ a = np.array([(1,2,3)], dtype='u4,u4,u4')
+ b = np.array([(1,2,3),(4,5,6),(7,8,9)], dtype='u4,u4,u4')
+ result = np.broadcast_arrays(a, b)
+ assert_equal(result[0], np.array([(1,2,3),(1,2,3),(1,2,3)], dtype='u4,u4,u4'))
+ assert_equal(result[1], np.array([(1,2,3),(4,5,6),(7,8,9)], dtype='u4,u4,u4'))
+
if __name__ == "__main__":
run_module_suite()
diff --git a/numpy/core/tests/test_indexing.py b/numpy/core/tests/test_indexing.py
index 6906dcf6b..0a81a70d0 100644
--- a/numpy/core/tests/test_indexing.py
+++ b/numpy/core/tests/test_indexing.py
@@ -1,21 +1,13 @@
from __future__ import division, absolute_import, print_function
import numpy as np
+from itertools import product
from numpy.compat import asbytes
from numpy.testing import *
import sys, warnings
-# The C implementation of fancy indexing is relatively complicated,
-# and has many seeming inconsistencies. It also appears to lack any
-# kind of test suite, making any changes to the underlying code difficult
-# because of its fragility.
-
-# This file is to remedy the test suite part a little bit,
-# but hopefully NumPy indexing can be changed to be more systematic
-# at some point in the future.
class TestIndexing(TestCase):
-
def test_none_index(self):
# `None` index adds newaxis
a = np.array([1, 2, 3])
@@ -117,5 +109,326 @@ class TestIndexing(TestCase):
[4, 0, 6],
[0, 8, 0]])
+
+class TestMultiIndexingAutomated(TestCase):
+ """
+    These tests use code to mimic the C-code indexing for selection.
+
+    NOTE: * This still lacks tests for complex item setting.
+          * If you change the behavior of indexing, you might want to modify
+            these tests to try more combinations.
+          * Behavior was written to match numpy version 1.8. (though a
+            first version matched 1.7.)
+          * Only tuple indices are supported by the mimicking code.
+            (and tested as of writing this)
+          * Error types should match most of the time as long as there
+            is only one error. For multiple errors, what gets raised
+            will usually not be the same one. They are *not* tested.
+ """
+ def setUp(self):
+ self.a = np.arange(np.prod([3,1,5,6])).reshape(3,1,5,6)
+ self.b = np.empty((3,0,5,6))
+ self.complex_indices = ['skip', Ellipsis,
+ 0,
+ # Boolean indices, up to 3-d for some special cases of eating up
+ # dimensions, also need to test all False
+ np.array(False),
+ np.array([True, False, False]),
+ np.array([[True, False], [False, True]]),
+ np.array([[[False, False], [False, False]]]),
+ # Some slices:
+ slice(-5, 5, 2),
+ slice(1, 1, 100),
+ slice(4, -1, -2),
+ slice(None,None,-3),
+ # Some Fancy indexes:
+ np.empty((0,1,1), dtype=np.intp), # empty broadcastable
+ np.array([0,1,-2]),
+ np.array([[2],[0],[1]]),
+ np.array([[0,-1], [0,1]]),
+ np.array([2,-1]),
+ np.zeros([1]*31, dtype=int), # trigger too large array.
+ np.array([0., 1.])] # invalid datatype
+ # Some simpler indices that still cover a bit more
+ self.simple_indices = [Ellipsis, None, -1, [1], np.array([True]), 'skip']
+ # Very simple ones to fill the rest:
+ self.fill_indices = [slice(None,None), 'skip']
+
+
+ def _get_multi_index(self, arr, indices):
+        """Mimic multi-dimensional indexing. Returns the indexed array and a
+        flag no_copy. If no_copy is True, np.may_share_memory(arr, arr[indices])
+        should be True (though this may be wrong for 0-d arrays sometimes).
+        If this function raises an error, it should most of the time match the
+        real error, as long as there is exactly one error in the index.
+        """
+ in_indices = list(indices)
+ indices = []
+ # if False, this is a fancy or boolean index
+ no_copy = True
+ # number of fancy/scalar indexes that are not consecutive
+ num_fancy = 0
+ # number of dimensions indexed by a "fancy" index
+ fancy_dim = 0
+ # NOTE: This is a funny twist (and probably OK to change).
+ # The boolean array has illegal indexes, but this is
+ # allowed if the broadcasted fancy-indices are 0-sized.
+ # This variable is to catch that case.
+ error_unless_broadcast_to_empty = False
+
+ # We need to handle Ellipsis and make arrays from indices, also
+ # check if this is fancy indexing (set no_copy).
+ ndim = 0
+ ellipsis_pos = None # define here mostly to replace all but first.
+ for i, indx in enumerate(in_indices):
+ if indx is None:
+ continue
+ if isinstance(indx, np.ndarray) and indx.dtype == bool:
+ no_copy = False
+ if indx.ndim == 0:
+ raise IndexError
+ # boolean indices can have higher dimensions
+ ndim += indx.ndim
+ fancy_dim += indx.ndim
+ continue
+ if indx is Ellipsis:
+ if ellipsis_pos is None:
+ ellipsis_pos = i
+ continue # do not increment ndim counter
+ in_indices[i] = slice(None,None)
+ ndim += 1
+ continue
+ if isinstance(indx, slice):
+ ndim += 1
+ continue
+ if not isinstance(indx, np.ndarray):
+ # This could be open for changes in numpy.
+ # numpy should maybe raise an error if casting to intp
+ # is not safe. It rejects np.array([1., 2.]) but not
+ # [1., 2.] as index (same for ie. np.take).
+ # (Note the importance of empty lists if changing this here)
+ indx = np.array(indx, dtype=np.intp)
+ in_indices[i] = indx
+ elif indx.dtype.kind != 'b' and indx.dtype.kind != 'i':
+ raise IndexError('arrays used as indices must be of integer (or boolean) type')
+ if indx.ndim != 0:
+ no_copy = False
+ ndim += 1
+ fancy_dim += 1
+
+ if arr.ndim - ndim < 0:
+            # we can't take more dimensions than we have, not even for 0-d
+            # arrays. since a[()] makes sense, but not a[(),]. We will raise
+            # an error later on, unless a broadcasting error occurs first.
+ raise IndexError
+
+ if ndim == 0 and not None in in_indices:
+ # Well we have no indexes or one Ellipsis. This is legal.
+ return arr.copy(), no_copy
+
+ if ellipsis_pos is not None:
+ in_indices[ellipsis_pos:ellipsis_pos+1] = [slice(None,None)] * (arr.ndim - ndim)
+
+ for ax, indx in enumerate(in_indices):
+ if isinstance(indx, slice):
+                # convert to an index array anyway:
+ indx = np.arange(*indx.indices(arr.shape[ax]))
+ indices.append(['s', indx])
+ continue
+ elif indx is None:
+ # this is like taking a slice with one element from a new axis:
+ indices.append(['n', np.array([0], dtype=np.intp)])
+ arr = arr.reshape((arr.shape[:ax] + (1,) + arr.shape[ax:]))
+ continue
+ if isinstance(indx, np.ndarray) and indx.dtype == bool:
+ # This may be open for improvement in numpy.
+ # numpy should probably cast boolean lists to boolean indices
+ # instead of intp!
+
+                # NumPy supports a boolean index with
+                # non-matching shape as long as the True values are not
+                # out of bounds. NumPy should maybe not allow this
+                # (at least not arrays that are larger than the original one).
+ try:
+ flat_indx = np.ravel_multi_index(np.nonzero(indx),
+ arr.shape[ax:ax+indx.ndim], mode='raise')
+ except:
+ error_unless_broadcast_to_empty = True
+ # fill with 0s instead, and raise error later
+ flat_indx = np.array([0]*indx.sum(), dtype=np.intp)
+ # concatenate axis into a single one:
+ if indx.ndim != 0:
+ arr = arr.reshape((arr.shape[:ax]
+ + (np.prod(arr.shape[ax:ax+indx.ndim]),)
+ + arr.shape[ax+indx.ndim:]))
+ indx = flat_indx
+ else:
+ raise IndexError
+ if len(indices) > 0 and indices[-1][0] == 'f' and ax != ellipsis_pos:
+ # NOTE: There could still have been a 0-sized Ellipsis
+ # between them. Checked that with ellipsis_pos.
+ indices[-1].append(indx)
+ else:
+ # We have a fancy index that is not after an existing one.
+ # NOTE: A 0-d array triggers this as well, while
+ # one may expect it to not trigger it, since a scalar
+ # would not be considered fancy indexing.
+ num_fancy += 1
+ indices.append(['f', indx])
+
+ if num_fancy > 1 and not no_copy:
+ # We have to flush the fancy indexes left
+ new_indices = indices[:]
+ axes = list(range(arr.ndim))
+ fancy_axes = []
+ new_indices.insert(0, ['f'])
+ ni = 0
+ ai = 0
+ for indx in indices:
+ ni += 1
+ if indx[0] == 'f':
+ new_indices[0].extend(indx[1:])
+ del new_indices[ni]
+ ni -= 1
+ for ax in range(ai, ai + len(indx[1:])):
+ fancy_axes.append(ax)
+ axes.remove(ax)
+ ai += len(indx) - 1 # axis we are at
+ indices = new_indices
+ # and now we need to transpose arr:
+ arr = arr.transpose(*(fancy_axes + axes))
+
+ # We only have one 'f' index now and arr is transposed accordingly.
+ # Now handle newaxes by reshaping...
+ ax = 0
+ for indx in indices:
+ if indx[0] == 'f':
+ if len(indx) == 1:
+ continue
+ # First of all, reshape arr to combine fancy axes into one:
+ orig_shape = arr.shape
+ orig_slice = orig_shape[ax:ax + len(indx[1:])]
+ arr = arr.reshape((arr.shape[:ax]
+ + (np.prod(orig_slice).astype(int),)
+ + arr.shape[ax + len(indx[1:]):]))
+
+ # Check if broadcasting works
+ if len(indx[1:]) != 1:
+ res = np.broadcast(*indx[1:]) # raises ValueError...
+ else:
+ res = indx[1]
+                # unfortunately the indices might be out of bounds. So check
+ # that first, and use mode='wrap' then. However only if
+ # there are any indices...
+ if res.size != 0:
+ if error_unless_broadcast_to_empty:
+ raise IndexError
+ for _indx, _size in zip(indx[1:], orig_slice):
+ if _indx.size == 0:
+ continue
+ if np.any(_indx >= _size) or np.any(_indx < -_size):
+ raise IndexError
+ if len(indx[1:]) == len(orig_slice):
+ if np.product(orig_slice) == 0:
+ # Work around for a crash or IndexError with 'wrap'
+ # in some 0-sized cases.
+ try:
+ mi = np.ravel_multi_index(indx[1:], orig_slice, mode='raise')
+ except:
+ # This happens with 0-sized orig_slice (sometimes?)
+ # here it is a ValueError, but indexing gives a:
+ raise IndexError('invalid index into 0-sized')
+ else:
+ mi = np.ravel_multi_index(indx[1:], orig_slice, mode='wrap')
+ else:
+ # Maybe never happens...
+ raise ValueError
+ arr = arr.take(mi.ravel(), axis=ax)
+ arr = arr.reshape((arr.shape[:ax]
+ + mi.shape
+ + arr.shape[ax+1:]))
+ ax += mi.ndim
+ continue
+
+ # If we are here, we have a 1D array for take:
+ arr = arr.take(indx[1], axis=ax)
+ ax += 1
+
+ return arr, no_copy
+
+
+ def _check_multi_index(self, arr, index):
+ """Check mult index getting and simple setting. Input array
+ must be a reshaped arange for __setitem__ check for non-view
+ arrays to work. It then relies on .flat to work.
+ """
+ # Test item getting
+ try:
+ mimic_get, no_copy = self._get_multi_index(arr, index)
+ except Exception as e:
+ assert_raises(Exception, arr.__getitem__, index)
+ assert_raises(Exception, arr.__setitem__, index, 0)
+ return
+
+ arr = arr.copy()
+ indexed_arr = arr[index]
+ assert_array_equal(indexed_arr, mimic_get)
+        # Check if we got a view, unless it is a 0-sized or 0-d array
+        # (then it is not a view, and that does not matter)
+ if indexed_arr.size != 0 and indexed_arr.ndim != 0:
+ assert_(np.may_share_memory(indexed_arr, arr) == no_copy)
+
+ sys.stdout.flush()
+ # Test non-broadcast setitem:
+ b = arr.copy()
+ b[index] = mimic_get + 1000
+ if b.size == 0:
+ return # nothing to compare here...
+ if no_copy and indexed_arr.ndim != 0:
+ # change indexed_arr in-place to manipulate original:
+ indexed_arr += 1000
+ assert_array_equal(arr, b)
+ return
+ # Use the fact that the array is originally an arange:
+ arr.flat[indexed_arr.ravel()] += 1000
+ assert_array_equal(arr, b)
+
+
+ def test_boolean(self):
+ a = np.array(5)
+ assert_equal(a[np.array(True)], 5)
+ a[np.array(True)] = 1
+ assert_equal(a, 1)
+        # NOTE: This is different from normal broadcasting, as
+        # arr[boolean_array] works like in a multi index. This means
+        # it is aligned to the left, which is probably correct for
+        # consistency with arr[boolean_array,]. Also, no broadcasting
+        # is done at all.
+ self._check_multi_index(self.a, (np.zeros_like(self.a, dtype=bool),))
+ self._check_multi_index(self.a, (np.zeros_like(self.a, dtype=bool)[...,0],))
+ self._check_multi_index(self.a, (np.zeros_like(self.a, dtype=bool)[None,...],))
+
+
+ def test_multidim(self):
+        # Check all combinations, with the complex indices tried at every
+        # position. Since the indices include None and Ellipsis, those
+        # cases are covered as well.
+ for simple_pos in [0,2,3]:
+ tocheck = [self.fill_indices, self.complex_indices, self.fill_indices, self.fill_indices]
+ tocheck[simple_pos] = self.simple_indices
+ for index in product(*tocheck):
+ index = tuple(i for i in index if i != 'skip')
+ self._check_multi_index(self.a, index)
+ self._check_multi_index(self.b, index)
+ # Check very simple item getting:
+ self._check_multi_index(self.a, (0,0,0,0))
+ self._check_multi_index(self.b, (0,0,0,0))
+ # Also check (simple cases of) too many indices:
+ assert_raises(IndexError, self.a.__getitem__, (0,0,0,0,0))
+ assert_raises(IndexError, self.a.__setitem__, (0,0,0,0,0), 0)
+ assert_raises(IndexError, self.a.__getitem__, (0,0,[1],0,0))
+ assert_raises(IndexError, self.a.__setitem__, (0,0,[1],0,0), 0)
+
+
if __name__ == "__main__":
run_module_suite()
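The fancy-index flushing these tests mimic (non-consecutive fancy axes transposed to the front, consecutive ones left in place) matches NumPy's observable behavior:

```python
import numpy as np

# When fancy indices are consecutive, the broadcast result stays at that
# position; when a slice separates them, the result axes move to the
# front -- the case the num_fancy > 1 branch above mimics.
a = np.arange(24).reshape(2, 3, 4)
assert a[:, [0, 1], [0, 1]].shape == (2, 2)  # consecutive: stays at axis 1
assert a[[0, 1], :, [0, 1]].shape == (2, 3)  # split: fancy axis moves front
```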
diff --git a/numpy/core/tests/test_numeric.py b/numpy/core/tests/test_numeric.py
index 3a6118f06..5c8de3734 100644
--- a/numpy/core/tests/test_numeric.py
+++ b/numpy/core/tests/test_numeric.py
@@ -656,6 +656,12 @@ class TestArrayComparisons(TestCase):
res = array_equal(array([1,2]), array([1,3]))
assert_(not res)
assert_(type(res) is bool)
+ res = array_equal(array(['a'], dtype='S1'), array(['a'], dtype='S1'))
+ assert_(res)
+ assert_(type(res) is bool)
+ res = array_equal(array([('a',1)], dtype='S1,u4'), array([('a',1)], dtype='S1,u4'))
+ assert_(res)
+ assert_(type(res) is bool)
def test_array_equiv(self):
res = array_equiv(array([1,2]), array([1,2]))
diff --git a/numpy/core/tests/test_ufunc.py b/numpy/core/tests/test_ufunc.py
index c969fa8ac..ad489124e 100644
--- a/numpy/core/tests/test_ufunc.py
+++ b/numpy/core/tests/test_ufunc.py
@@ -565,12 +565,25 @@ class TestUfunc(TestCase):
assert_equal(np.max(3, axis=0), 3)
assert_equal(np.min(2.5, axis=0), 2.5)
+ # Check scalar behaviour for ufuncs without an identity
+ assert_equal(np.power.reduce(3), 3)
+
# Make sure that scalars are coming out from this operation
assert_(type(np.prod(np.float32(2.5), axis=0)) is np.float32)
assert_(type(np.sum(np.float32(2.5), axis=0)) is np.float32)
assert_(type(np.max(np.float32(2.5), axis=0)) is np.float32)
assert_(type(np.min(np.float32(2.5), axis=0)) is np.float32)
+ # check if scalars/0-d arrays get cast
+ assert_(type(np.any(0, axis=0)) is np.bool_)
+
+ # assert that 0-d arrays get wrapped
+ class MyArray(np.ndarray):
+ pass
+ a = np.array(1).view(MyArray)
+ assert_(type(np.any(a)) is MyArray)
+
+
def test_casting_out_param(self):
# Test that it's possible to do casts on output
a = np.ones((200,100), np.int64)
@@ -795,5 +808,31 @@ class TestUfunc(TestCase):
assert_equal(a, np.array([[0,2,4,3],[7,9,11,7],
[14,16,18,11],[12,13,14,15]], dtype='i8'))
+ a = np.array(0)
+ opflag_tests.inplace_add(a, 3)
+ assert_equal(a, 3)
+ opflag_tests.inplace_add(a, [3, 4])
+ assert_equal(a, 10)
+
+ def test_struct_ufunc(self):
+ import numpy.core.struct_ufunc_test as struct_ufunc
+
+ a = np.array([(1,2,3)], dtype='u8,u8,u8')
+ b = np.array([(1,2,3)], dtype='u8,u8,u8')
+
+ result = struct_ufunc.add_triplet(a, b)
+ assert_equal(result, np.array([(2, 4, 6)], dtype='u8,u8,u8'))
+
+ def test_custom_ufunc(self):
+ a = np.array([rational(1,2), rational(1,3), rational(1,4)],
+ dtype=rational);
+ b = np.array([rational(1,2), rational(1,3), rational(1,4)],
+ dtype=rational);
+
+ result = test_add_rationals(a, b)
+ expected = np.array([rational(1), rational(2,3), rational(1,2)],
+ dtype=rational);
+ assert_equal(result, expected);
+
if __name__ == "__main__":
run_module_suite()
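The scalar-reduce change these tests exercise can be checked directly from Python (behavior as of NumPy 1.8, per the tests above):

```python
import numpy as np

# Reducing a scalar now goes through the normal reduce machinery instead
# of returning the input unscathed, so ufuncs without an identity work
# and the output is cast to the loop's result type.
assert np.power.reduce(3) == 3
assert type(np.any(0)) is np.bool_
```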
diff --git a/numpy/distutils/ccompiler.py b/numpy/distutils/ccompiler.py
index 51a349aea..48f6f09b6 100644
--- a/numpy/distutils/ccompiler.py
+++ b/numpy/distutils/ccompiler.py
@@ -16,7 +16,7 @@ from distutils.version import LooseVersion
from numpy.distutils import log
from numpy.distutils.exec_command import exec_command
from numpy.distutils.misc_util import cyg2win32, is_sequence, mingw32, \
- quote_args, msvc_on_amd64
+ quote_args
from numpy.distutils.compat import get_exception
@@ -654,6 +654,3 @@ def split_quoted(s):
return words
ccompiler.split_quoted = split_quoted
##Fix distutils.util.split_quoted:
-
-# define DISTUTILS_USE_SDK when necessary to workaround distutils/msvccompiler.py bug
-msvc_on_amd64()
diff --git a/numpy/distutils/misc_util.py b/numpy/distutils/misc_util.py
index 5bd3a9086..bc073aee4 100644
--- a/numpy/distutils/misc_util.py
+++ b/numpy/distutils/misc_util.py
@@ -367,17 +367,6 @@ def msvc_runtime_library():
lib = None
return lib
-def msvc_on_amd64():
- if not (sys.platform=='win32' or os.name=='nt'):
- return
- if get_build_architecture() != 'AMD64':
- return
- if 'DISTUTILS_USE_SDK' in os.environ:
- return
- # try to avoid _MSVCCompiler__root attribute error
- print('Forcing DISTUTILS_USE_SDK=1')
- os.environ['DISTUTILS_USE_SDK']='1'
- return
#########################
@@ -671,7 +660,7 @@ class Configuration(object):
_list_keys = ['packages', 'ext_modules', 'data_files', 'include_dirs',
'libraries', 'headers', 'scripts', 'py_modules',
- 'installed_libraries']
+ 'installed_libraries', 'define_macros']
_dict_keys = ['package_dir', 'installed_pkg_config']
_extra_keys = ['name', 'version']
@@ -1273,6 +1262,22 @@ class Configuration(object):
### XXX Implement add_py_modules
+ def add_define_macros(self, macros):
+ """Add define macros to configuration
+
+        Add the given sequence of (name, value) macro tuples to the
+        define_macros list. This list will be visible to all extension
+ modules of the current package.
+ """
+ dist = self.get_distribution()
+ if dist is not None:
+ if not hasattr(dist, 'define_macros'):
+ dist.define_macros = []
+ dist.define_macros.extend(macros)
+ else:
+ self.define_macros.extend(macros)
+
+
def add_include_dirs(self,*paths):
"""Add paths to configuration include directories.
@@ -1440,6 +1445,8 @@ class Configuration(object):
libnames.append(libname)
ext_args['libraries'] = libnames + ext_args['libraries']
+ ext_args['define_macros'] = \
+ self.define_macros + ext_args.get('define_macros', [])
from numpy.distutils.core import Extension
ext = Extension(**ext_args)
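The `define_macros` merge in `add_extension` above can be sketched in isolation (macro names here are hypothetical):

```python
# Sketch of the define_macros merge done in add_extension above:
# configuration-level macros are placed before any macros passed
# directly to the extension (names are hypothetical examples).
config_define_macros = [('NPY_EXAMPLE', '1')]
ext_args = {'define_macros': [('EXT_LOCAL', None)]}
ext_args['define_macros'] = \
    config_define_macros + ext_args.get('define_macros', [])
assert ext_args['define_macros'] == [('NPY_EXAMPLE', '1'),
                                     ('EXT_LOCAL', None)]
```

Putting the configuration macros first means an extension-local `-D` can still be overridden by later entries on the compile line, following the usual last-definition-wins behavior of C preprocessor flags.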
diff --git a/numpy/distutils/system_info.py b/numpy/distutils/system_info.py
index 125a1f175..13f52801e 100644
--- a/numpy/distutils/system_info.py
+++ b/numpy/distutils/system_info.py
@@ -223,8 +223,10 @@ else:
tmp = open(os.devnull, 'w')
p = sp.Popen(["gcc", "-print-multiarch"], stdout=sp.PIPE,
stderr=tmp)
- except OSError:
- pass # gcc is not installed
+ except (OSError, DistutilsError):
+ # OSError if gcc is not installed, or SandboxViolation (DistutilsError
+ # subclass) if an old setuptools bug is triggered (see gh-3160).
+ pass
else:
triplet = str(p.communicate()[0].decode().strip())
if p.returncode == 0:
diff --git a/numpy/f2py/setup.py b/numpy/f2py/setup.py
index 1fae54531..96a047ec7 100644
--- a/numpy/f2py/setup.py
+++ b/numpy/f2py/setup.py
@@ -53,7 +53,7 @@ def configuration(parent_package='',top_path=None):
log.info('Creating %s', target)
f = open(target,'w')
f.write('''\
-#!/usr/bin/env %s
+#!%s
# See http://cens.ioc.ee/projects/f2py2e/
import os, sys
for mode in ["g3-numpy", "2e-numeric", "2e-numarray", "2e-numpy"]:
@@ -77,7 +77,7 @@ else:
sys.stderr.write("Unknown mode: " + repr(mode) + "\\n")
sys.exit(1)
main()
-'''%(os.path.basename(sys.executable)))
+'''%(sys.executable))
f.close()
return target
diff --git a/numpy/lib/arraypad.py b/numpy/lib/arraypad.py
index 09ca99332..c80f6f54d 100644
--- a/numpy/lib/arraypad.py
+++ b/numpy/lib/arraypad.py
@@ -15,508 +15,1075 @@ __all__ = ['pad']
# Private utility functions.
-def _create_vector(vector, pad_tuple, before_val, after_val):
+def _arange_ndarray(arr, shape, axis, reverse=False):
"""
- Private function which creates the padded vector.
+ Create an ndarray of `shape` with increments along the specified `axis`.
Parameters
----------
- vector : ndarray of rank 1, length N + pad_tuple[0] + pad_tuple[1]
- Input vector including blank padded values. `N` is the lenth of the
- original vector.
- pad_tuple : tuple
- This tuple represents the (before, after) width of the padding along
- this particular iaxis.
- before_val : scalar or ndarray of rank 1, length pad_tuple[0]
- This is the value(s) that will pad the beginning of `vector`.
- after_val : scalar or ndarray of rank 1, length pad_tuple[1]
- This is the value(s) that will pad the end of the `vector`.
+ arr : ndarray
+ Input array of arbitrary shape.
+ shape : tuple of ints
+ Shape of the desired array. Should be equivalent to `arr.shape` except
+ for `shape[axis]`, which may have any positive value.
+ axis : int
+ Axis to increment along.
+ reverse : bool
+ If False, increment in a positive fashion from 1 to `shape[axis]`,
+ inclusive. If True, the bounds are the same but the order is reversed.
Returns
-------
- _create_vector : ndarray
- Vector with before_val and after_val replacing the blank pad values.
+ padarr : ndarray
+ Output array sized to pad `arr` along `axis`, with linear range from
+ 1 to `shape[axis]` along specified `axis`.
+
+ Notes
+ -----
+ The range is deliberately 1-indexed for this specific use case. Think of
+ this algorithm as broadcasting `np.arange` to a single `axis` of an
+ arbitrarily shaped ndarray.
"""
- vector[:pad_tuple[0]] = before_val
- if pad_tuple[1] > 0:
- vector[-pad_tuple[1]:] = after_val
- return vector
+ initshape = tuple(1 if i != axis else shape[axis]
+ for (i, x) in enumerate(arr.shape))
+ if not reverse:
+ padarr = np.arange(1, shape[axis] + 1)
+ else:
+ padarr = np.arange(shape[axis], 0, -1)
+ padarr = padarr.reshape(initshape)
+ for i, dim in enumerate(shape):
+ if padarr.shape[i] != dim:
+ padarr = padarr.repeat(dim, axis=i)
+ return padarr
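The helper added above can be exercised on its own; the block below reproduces its body from the patch and broadcasts a 1-based arange along one axis of a small array (the demo shapes are illustrative):

```python
import numpy as np

def _arange_ndarray(arr, shape, axis, reverse=False):
    # Broadcast a 1-based np.arange along `axis` to the full `shape`,
    # repeating it across every other dimension.
    initshape = tuple(1 if i != axis else shape[axis]
                      for (i, x) in enumerate(arr.shape))
    if not reverse:
        padarr = np.arange(1, shape[axis] + 1)
    else:
        padarr = np.arange(shape[axis], 0, -1)
    padarr = padarr.reshape(initshape)
    for i, dim in enumerate(shape):
        if padarr.shape[i] != dim:
            padarr = padarr.repeat(dim, axis=i)
    return padarr

arr = np.zeros((2, 3))
out = _arange_ndarray(arr, (2, 4), axis=1)
print(out.tolist())  # -> [[1, 2, 3, 4], [1, 2, 3, 4]]
```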
-def _normalize_shape(narray, shape):
+def _round_ifneeded(arr, dtype):
"""
- Private function which does some checks and normalizes the possibly
- much simpler representations of 'pad_width', 'stat_length',
- 'constant_values', 'end_values'.
+ Round `arr` in place if the destination dtype is integer.
Parameters
----------
- narray : ndarray
- Input ndarray
- shape : {sequence, int}, optional
- The width of padding (pad_width) or the number of elements on the
- edge of the narray used for statistics (stat_length).
- ((before_1, after_1), ... (before_N, after_N)) unique number of
- elements for each axis where `N` is rank of `narray`.
- ((before, after),) yields same before and after constants for each
- axis.
- (constant,) or int is a shortcut for before = after = constant for
- all axes.
+ arr : ndarray
+ Input array.
+ dtype : dtype
+ The dtype of the destination array.
+
+ """
+ if np.issubdtype(dtype, np.integer):
+ arr.round(out=arr)
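The rounding guard above only fires for integer destination dtypes; floats pass through untouched. A self-contained check (demo values are illustrative):

```python
import numpy as np

def _round_ifneeded(arr, dtype):
    # Round in place only when the destination dtype is integer.
    if np.issubdtype(dtype, np.integer):
        arr.round(out=arr)

a = np.array([0.4, 1.6, 3.2])
_round_ifneeded(a, np.dtype(np.int64))   # integer destination: rounds
print(a.tolist())  # -> [0.0, 2.0, 3.0]

b = np.array([0.4, 1.6, 3.2])
_round_ifneeded(b, np.dtype(np.float64))  # float destination: no-op
print(b.tolist())  # -> [0.4, 1.6, 3.2]
```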
+
+
+def _prepend_const(arr, pad_amt, val, axis=-1):
+ """
+ Prepend constant `val` along `axis` of `arr`.
+
+ Parameters
+ ----------
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : int
+ Amount of padding to prepend.
+ val : scalar
+ Constant value to use. For best results it should be of type
+ `arr.dtype`; if not, it will be cast to `arr.dtype`.
+ axis : int
+ Axis along which to pad `arr`.
Returns
-------
- _normalize_shape : tuple of tuples
- int => ((int, int), (int, int), ...)
- [[int1, int2], [int3, int4], ...] => ((int1, int2), (int3, int4), ...)
- ((int1, int2), (int3, int4), ...) => no change
- [[int1, int2], ] => ((int1, int2), (int1, int2), ...)
- ((int1, int2), ) => ((int1, int2), (int1, int2), ...)
- [[int , ], ] => ((int, int), (int, int), ...)
- ((int , ), ) => ((int, int), (int, int), ...)
+ padarr : ndarray
+ Output array, with `pad_amt` constant `val` prepended along `axis`.
"""
- normshp = None
- shapelen = len(np.shape(narray))
- if (isinstance(shape, int)):
- normshp = ((shape, shape), ) * shapelen
- elif (isinstance(shape, (tuple, list))
- and isinstance(shape[0], (tuple, list))
- and len(shape) == shapelen):
- normshp = shape
- for i in normshp:
- if len(i) != 2:
- fmt = "Unable to create correctly shaped tuple from %s"
- raise ValueError(fmt % (normshp,))
- elif (isinstance(shape, (tuple, list))
- and isinstance(shape[0], (int, float, long))
- and len(shape) == 1):
- normshp = ((shape[0], shape[0]), ) * shapelen
- elif (isinstance(shape, (tuple, list))
- and isinstance(shape[0], (int, float, long))
- and len(shape) == 2):
- normshp = (shape, ) * shapelen
- if normshp is None:
- fmt = "Unable to create correctly shaped tuple from %s"
- raise ValueError(fmt % (shape,))
- return normshp
+ if pad_amt == 0:
+ return arr
+ padshape = tuple(x if i != axis else pad_amt
+ for (i, x) in enumerate(arr.shape))
+ if val == 0:
+ return np.concatenate((np.zeros(padshape, dtype=arr.dtype), arr),
+ axis=axis)
+ else:
+ return np.concatenate(((np.zeros(padshape) + val).astype(arr.dtype),
+ arr), axis=axis)
-def _validate_lengths(narray, number_elements):
+def _append_const(arr, pad_amt, val, axis=-1):
"""
- Private function which does some checks and reformats pad_width and
- stat_length using _normalize_shape.
+ Append constant `val` along `axis` of `arr`.
Parameters
----------
- narray : ndarray
- Input ndarray
- number_elements : {sequence, int}, optional
- The width of padding (pad_width) or the number of elements on the edge
- of the narray used for statistics (stat_length).
- ((before_1, after_1), ... (before_N, after_N)) unique number of
- elements for each axis.
- ((before, after),) yields same before and after constants for each
- axis.
- (constant,) or int is a shortcut for before = after = constant for all
- axes.
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : int
+ Amount of padding to append.
+ val : scalar
+ Constant value to use. For best results it should be of type
+ `arr.dtype`; if not, it will be cast to `arr.dtype`.
+ axis : int
+ Axis along which to pad `arr`.
Returns
-------
- _validate_lengths : tuple of tuples
- int => ((int, int), (int, int), ...)
- [[int1, int2], [int3, int4], ...] => ((int1, int2), (int3, int4), ...)
- ((int1, int2), (int3, int4), ...) => no change
- [[int1, int2], ] => ((int1, int2), (int1, int2), ...)
- ((int1, int2), ) => ((int1, int2), (int1, int2), ...)
- [[int , ], ] => ((int, int), (int, int), ...)
- ((int , ), ) => ((int, int), (int, int), ...)
+ padarr : ndarray
+ Output array, with `pad_amt` constant `val` appended along `axis`.
"""
- normshp = _normalize_shape(narray, number_elements)
- for i in normshp:
- if i[0] < 0 or i[1] < 0:
- fmt = "%s cannot contain negative values."
- raise ValueError(fmt % (number_elements,))
- return normshp
+ if pad_amt == 0:
+ return arr
+ padshape = tuple(x if i != axis else pad_amt
+ for (i, x) in enumerate(arr.shape))
+ if val == 0:
+ return np.concatenate((arr, np.zeros(padshape, dtype=arr.dtype)),
+ axis=axis)
+ else:
+ return np.concatenate(
+ (arr, (np.zeros(padshape) + val).astype(arr.dtype)), axis=axis)
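Constant padding as added above is just a `concatenate` with a block of the constant. The block below reproduces `_prepend_const` from the patch and applies it to a small array (demo values are illustrative; note it is called with an explicit `axis`, as `pad` does):

```python
import numpy as np

def _prepend_const(arr, pad_amt, val, axis=-1):
    # Prepend `pad_amt` copies of `val` along `axis`.
    if pad_amt == 0:
        return arr
    padshape = tuple(x if i != axis else pad_amt
                     for (i, x) in enumerate(arr.shape))
    if val == 0:
        return np.concatenate((np.zeros(padshape, dtype=arr.dtype), arr),
                              axis=axis)
    else:
        return np.concatenate(((np.zeros(padshape) + val).astype(arr.dtype),
                               arr), axis=axis)

a = np.array([[1, 2], [3, 4]])
out = _prepend_const(a, 2, 9, axis=1)
print(out.tolist())  # -> [[9, 9, 1, 2], [9, 9, 3, 4]]
```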
-def _create_stat_vectors(vector, pad_tuple, iaxis, kwargs):
+def _prepend_edge(arr, pad_amt, axis=-1):
"""
- Returns the portion of the vector required for any statistic.
+ Prepend `pad_amt` to `arr` along `axis` by extending edge values.
Parameters
----------
- vector : ndarray
- Input vector that already includes empty padded values.
- pad_tuple : tuple
- This tuple represents the (before, after) width of the padding
- along this particular iaxis.
- iaxis : int
- The axis currently being looped across.
- kwargs : keyword arguments
- Keyword arguments. Only 'stat_length' is used. 'stat_length'
- defaults to the entire vector if not supplied.
-
- Return
- ------
- _create_stat_vectors : ndarray
- The values from the original vector that will be used to calculate
- the statistic.
-
- """
-
- # Can't have 0 represent the end if a slice... a[1:0] doesnt' work
- pt1 = -pad_tuple[1]
- if pt1 == 0:
- pt1 = None
-
- # Default is the entire vector from the original array.
- sbvec = vector[pad_tuple[0]:pt1]
- savec = vector[pad_tuple[0]:pt1]
-
- if kwargs['stat_length']:
- stat_length = kwargs['stat_length'][iaxis]
- sl0 = min(stat_length[0], len(sbvec))
- sl1 = min(stat_length[1], len(savec))
- sbvec = np.arange(0)
- savec = np.arange(0)
- if pad_tuple[0] > 0:
- sbvec = vector[pad_tuple[0]:pad_tuple[0] + sl0]
- if pad_tuple[1] > 0:
- savec = vector[-pad_tuple[1] - sl1:pt1]
- return (sbvec, savec)
-
-
-def _maximum(vector, pad_tuple, iaxis, kwargs):
- """
- Private function to calculate the before/after vectors for pad_maximum.
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : int
+ Amount of padding to prepend.
+ axis : int
+ Axis along which to pad `arr`.
+
+ Returns
+ -------
+ padarr : ndarray
+ Output array, extended by `pad_amt` edge values prepended along `axis`.
+
+ """
+ if pad_amt == 0:
+ return arr
+
+ edge_slice = tuple(slice(None) if i != axis else 0
+ for (i, x) in enumerate(arr.shape))
+
+ # Shape to restore singleton dimension after slicing
+ pad_singleton = tuple(x if i != axis else 1
+ for (i, x) in enumerate(arr.shape))
+ edge_arr = arr[edge_slice].reshape(pad_singleton)
+ return np.concatenate((edge_arr.repeat(pad_amt, axis=axis), arr),
+ axis=axis)
+
+
+def _append_edge(arr, pad_amt, axis=-1):
+ """
+ Append `pad_amt` to `arr` along `axis` by extending edge values.
Parameters
----------
- vector : ndarray
- Input vector that already includes empty padded values.
- pad_tuple : tuple
- This tuple represents the (before, after) width of the padding
- along this particular iaxis.
- iaxis : int
- The axis currently being looped across.
- kwargs : keyword arguments
- Keyword arguments. Only 'stat_length' is used. 'stat_length'
- defaults to the entire vector if not supplied.
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : int
+ Amount of padding to append.
+ axis : int
+ Axis along which to pad `arr`.
- Return
- ------
- _maximum : ndarray
- Padded vector
+ Returns
+ -------
+ padarr : ndarray
+ Output array, extended by `pad_amt` edge values appended along
+ `axis`.
"""
- sbvec, savec = _create_stat_vectors(vector, pad_tuple, iaxis, kwargs)
- return _create_vector(vector, pad_tuple, max(sbvec), max(savec))
+ if pad_amt == 0:
+ return arr
+ edge_slice = tuple(slice(None) if i != axis else arr.shape[axis] - 1
+ for (i, x) in enumerate(arr.shape))
-def _minimum(vector, pad_tuple, iaxis, kwargs):
+ # Shape to restore singleton dimension after slicing
+ pad_singleton = tuple(x if i != axis else 1
+ for (i, x) in enumerate(arr.shape))
+ edge_arr = arr[edge_slice].reshape(pad_singleton)
+ return np.concatenate((arr, edge_arr.repeat(pad_amt, axis=axis)),
+ axis=axis)
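Edge padding as implemented above slices out the boundary hyperplane, restores its singleton dimension, and repeats it. The block below reproduces `_prepend_edge` from the patch on a small array (demo values are illustrative):

```python
import numpy as np

def _prepend_edge(arr, pad_amt, axis=-1):
    # Repeat the leading edge values `pad_amt` times along `axis`.
    if pad_amt == 0:
        return arr
    edge_slice = tuple(slice(None) if i != axis else 0
                       for (i, x) in enumerate(arr.shape))
    # Shape to restore singleton dimension after slicing
    pad_singleton = tuple(x if i != axis else 1
                          for (i, x) in enumerate(arr.shape))
    edge_arr = arr[edge_slice].reshape(pad_singleton)
    return np.concatenate((edge_arr.repeat(pad_amt, axis=axis), arr),
                          axis=axis)

a = np.array([[1, 2, 3], [4, 5, 6]])
out = _prepend_edge(a, 2, axis=1)
print(out.tolist())  # -> [[1, 1, 1, 2, 3], [4, 4, 4, 5, 6]]
```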
+
+
+def _prepend_ramp(arr, pad_amt, end, axis=-1):
"""
- Private function to calculate the before/after vectors for pad_minimum.
+ Prepend linear ramp along `axis`.
Parameters
----------
- vector : ndarray
- Input vector that already includes empty padded values.
- pad_tuple : tuple
- This tuple represents the (before, after) width of the padding
- along this particular iaxis.
- iaxis : int
- The axis currently being looped across.
- kwargs : keyword arguments
- Keyword arguments. Only 'stat_length' is used. 'stat_length'
- defaults to the entire vector if not supplied.
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : int
+ Amount of padding to prepend.
+ end : scalar
+ Constant value to use. For best results it should be of type
+ `arr.dtype`; if not, it will be cast to `arr.dtype`.
+ axis : int
+ Axis along which to pad `arr`.
- Return
- ------
- _minimum : ndarray
- Padded vector
+ Returns
+ -------
+ padarr : ndarray
+ Output array, with `pad_amt` values prepended along `axis`. The
+ prepended region ramps linearly from the edge value to `end`.
"""
- sbvec, savec = _create_stat_vectors(vector, pad_tuple, iaxis, kwargs)
- return _create_vector(vector, pad_tuple, min(sbvec), min(savec))
+ if pad_amt == 0:
+ return arr
+
+ # Generate shape for final concatenated array
+ padshape = tuple(x if i != axis else pad_amt
+ for (i, x) in enumerate(arr.shape))
+ # Generate an n-dimensional array incrementing along `axis`
+ ramp_arr = _arange_ndarray(arr, padshape, axis,
+ reverse=True).astype(np.float64)
-def _median(vector, pad_tuple, iaxis, kwargs):
+
+ # Appropriate slicing to extract n-dimensional edge along `axis`
+ edge_slice = tuple(slice(None) if i != axis else 0
+ for (i, x) in enumerate(arr.shape))
+
+ # Shape to restore singleton dimension after slicing
+ pad_singleton = tuple(x if i != axis else 1
+ for (i, x) in enumerate(arr.shape))
+
+ # Extract edge, reshape to original rank, and extend along `axis`
+ edge_pad = arr[edge_slice].reshape(pad_singleton).repeat(pad_amt, axis)
+
+ # Linear ramp
+ slope = (end - edge_pad) / float(pad_amt)
+ ramp_arr = ramp_arr * slope
+ ramp_arr += edge_pad
+ _round_ifneeded(ramp_arr, arr.dtype)
+
+ # Ramp values will most likely be float, cast them to the same type as arr
+ return np.concatenate((ramp_arr.astype(arr.dtype), arr), axis=axis)
+
+
+def _append_ramp(arr, pad_amt, end, axis=-1):
"""
- Private function to calculate the before/after vectors for pad_median.
+ Append linear ramp along `axis`.
Parameters
----------
- vector : ndarray
- Input vector that already includes empty padded values.
- pad_tuple : tuple
- This tuple represents the (before, after) width of the padding
- along this particular iaxis.
- iaxis : int
- The axis currently being looped across.
- kwargs : keyword arguments
- Keyword arguments. Only 'stat_length' is used. 'stat_length'
- defaults to the entire vector if not supplied.
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : int
+ Amount of padding to append.
+ end : scalar
+ Constant value to use. For best results it should be of type
+ `arr.dtype`; if not, it will be cast to `arr.dtype`.
+ axis : int
+ Axis along which to pad `arr`.
- Return
- ------
- _median : ndarray
- Padded vector
+ Returns
+ -------
+ padarr : ndarray
+ Output array, with `pad_amt` values appended along `axis`. The
+ appended region ramps linearly from the edge value to `end`.
"""
- sbvec, savec = _create_stat_vectors(vector, pad_tuple, iaxis, kwargs)
- return _create_vector(vector, pad_tuple, np.median(sbvec),
- np.median(savec))
+ if pad_amt == 0:
+ return arr
+
+ # Generate shape for final concatenated array
+ padshape = tuple(x if i != axis else pad_amt
+ for (i, x) in enumerate(arr.shape))
+
+ # Generate an n-dimensional array incrementing along `axis`
+ ramp_arr = _arange_ndarray(arr, padshape, axis,
+ reverse=False).astype(np.float64)
+ # Slice a chunk from the edge to calculate stats on
+ edge_slice = tuple(slice(None) if i != axis else -1
+ for (i, x) in enumerate(arr.shape))
-def _mean(vector, pad_tuple, iaxis, kwargs):
+ # Shape to restore singleton dimension after slicing
+ pad_singleton = tuple(x if i != axis else 1
+ for (i, x) in enumerate(arr.shape))
+
+ # Extract edge, reshape to original rank, and extend along `axis`
+ edge_pad = arr[edge_slice].reshape(pad_singleton).repeat(pad_amt, axis)
+
+ # Linear ramp
+ slope = (end - edge_pad) / float(pad_amt)
+ ramp_arr = ramp_arr * slope
+ ramp_arr += edge_pad
+ _round_ifneeded(ramp_arr, arr.dtype)
+
+ # Ramp values will most likely be float, cast them to the same type as arr
+ return np.concatenate((arr, ramp_arr.astype(arr.dtype)), axis=axis)
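The ramp arithmetic above (`ramp * slope + edge`) can be worked in 1-D to see what the appended region looks like. In the append case the arange runs 1..`pad_amt`, so the region starts one step inside the edge value and reaches exactly `end` at the outer boundary (the edge/end values below are illustrative):

```python
import numpy as np

edge = 10.0   # last value of `arr` along the padded axis
end = 0.0     # target value at the outer boundary
pad_amt = 4

# Same formula as the n-dimensional code, specialized to 1-D append:
slope = (end - edge) / float(pad_amt)
ramp = np.arange(1, pad_amt + 1) * slope + edge
print(ramp.tolist())  # -> [7.5, 5.0, 2.5, 0.0]
```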
+
+
+def _prepend_max(arr, pad_amt, num, axis=-1):
"""
- Private function to calculate the before/after vectors for pad_mean.
+ Prepend `pad_amt` maximum values along `axis`.
Parameters
----------
- vector : ndarray
- Input vector that already includes empty padded values.
- pad_tuple : tuple
- This tuple represents the (before, after) width of the padding
- along this particular iaxis.
- iaxis : int
- The axis currently being looped across.
- kwargs : keyword arguments
- Keyword arguments. Only 'stat_length' is used. 'stat_length'
- defaults to the entire vector if not supplied.
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : int
+ Amount of padding to prepend.
+ num : int
+ Depth into `arr` along `axis` to calculate maximum.
+ Range: [1, `arr.shape[axis]`] or None (entire axis)
+ axis : int
+ Axis along which to pad `arr`.
- Return
- ------
- _mean : ndarray
- Padded vector
+ Returns
+ -------
+ padarr : ndarray
+ Output array, with `pad_amt` values prepended along `axis`. The
+ prepended region is the maximum of the first `num` values along
+ `axis`.
"""
- sbvec, savec = _create_stat_vectors(vector, pad_tuple, iaxis, kwargs)
- return _create_vector(vector, pad_tuple, np.average(sbvec),
- np.average(savec))
+ if pad_amt == 0:
+ return arr
+
+ # Equivalent to edge padding for single value, so do that instead
+ if num == 1:
+ return _prepend_edge(arr, pad_amt, axis)
+
+ # Use entire array if `num` is too large
+ if num is not None:
+ if num >= arr.shape[axis]:
+ num = None
+
+ # Slice a chunk from the edge to calculate stats on
+ max_slice = tuple(slice(None) if i != axis else slice(num)
+ for (i, x) in enumerate(arr.shape))
+ # Shape to restore singleton dimension after slicing
+ pad_singleton = tuple(x if i != axis else 1
+ for (i, x) in enumerate(arr.shape))
-def _constant(vector, pad_tuple, iaxis, kwargs):
+ # Extract slice, calculate max, reshape to add singleton dimension back
+ max_chunk = arr[max_slice].max(axis=axis).reshape(pad_singleton)
+
+ # Concatenate `arr` with `max_chunk`, extended along `axis` by `pad_amt`
+ return np.concatenate((max_chunk.repeat(pad_amt, axis=axis), arr),
+ axis=axis)
+
+
+def _append_max(arr, pad_amt, num, axis=-1):
"""
- Private function to calculate the before/after vectors for
- pad_constant.
+ Pad one `axis` of `arr` with the maximum of the last `num` elements.
Parameters
----------
- vector : ndarray
- Input vector that already includes empty padded values.
- pad_tuple : tuple
- This tuple represents the (before, after) width of the padding
- along this particular iaxis.
- iaxis : int
- The axis currently being looped across.
- kwargs : keyword arguments
- Keyword arguments. Need 'constant_values' keyword argument.
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : int
+ Amount of padding to append.
+ num : int
+ Depth into `arr` along `axis` to calculate maximum.
+ Range: [1, `arr.shape[axis]`] or None (entire axis)
+ axis : int
+ Axis along which to pad `arr`.
- Return
- ------
- _constant : ndarray
- Padded vector
+ Returns
+ -------
+ padarr : ndarray
+ Output array, with `pad_amt` values appended along `axis`. The
+ appended region is the maximum of the final `num` values along `axis`.
"""
- nconstant = kwargs['constant_values'][iaxis]
- return _create_vector(vector, pad_tuple, nconstant[0], nconstant[1])
+ if pad_amt == 0:
+ return arr
+
+ # Equivalent to edge padding for single value, so do that instead
+ if num == 1:
+ return _append_edge(arr, pad_amt, axis)
+
+ # Use entire array if `num` is too large
+ if num is not None:
+ if num >= arr.shape[axis]:
+ num = None
+
+ # Slice a chunk from the edge to calculate stats on
+ end = arr.shape[axis] - 1
+ if num is not None:
+ max_slice = tuple(
+ slice(None) if i != axis else slice(end, end - num, -1)
+ for (i, x) in enumerate(arr.shape))
+ else:
+ max_slice = tuple(slice(None) for x in arr.shape)
+ # Shape to restore singleton dimension after slicing
+ pad_singleton = tuple(x if i != axis else 1
+ for (i, x) in enumerate(arr.shape))
-def _linear_ramp(vector, pad_tuple, iaxis, kwargs):
+ # Extract slice, calculate max, reshape to add singleton dimension back
+ max_chunk = arr[max_slice].max(axis=axis).reshape(pad_singleton)
+
+ # Concatenate `arr` with `max_chunk`, extended along `axis` by `pad_amt`
+ return np.concatenate((arr, max_chunk.repeat(pad_amt, axis=axis)),
+ axis=axis)
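The statistic-based helpers above all follow the same slice/reduce/reshape/repeat pattern. A simplified sketch of `_prepend_max` for an explicit `axis` (it drops the `num == 1` edge shortcut and the `num = None` clamp from the patch, so it assumes `2 <= num <= arr.shape[axis]`):

```python
import numpy as np

def prepend_max(arr, pad_amt, num, axis):
    # Take the max over the first `num` values along `axis`, then
    # repeat that hyperplane `pad_amt` times in front of `arr`.
    max_slice = tuple(slice(None) if i != axis else slice(num)
                      for i in range(arr.ndim))
    pad_singleton = tuple(x if i != axis else 1
                          for (i, x) in enumerate(arr.shape))
    max_chunk = arr[max_slice].max(axis=axis).reshape(pad_singleton)
    return np.concatenate((max_chunk.repeat(pad_amt, axis=axis), arr),
                          axis=axis)

a = np.array([[5, 1, 2], [0, 7, 3]])
out = prepend_max(a, 2, num=2, axis=1)
print(out.tolist())  # -> [[5, 5, 5, 1, 2], [7, 7, 0, 7, 3]]
```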
+
+
+def _prepend_mean(arr, pad_amt, num, axis=-1):
"""
- Private function to calculate the before/after vectors for
- pad_linear_ramp.
+ Prepend `pad_amt` mean values along `axis`.
Parameters
----------
- vector : ndarray
- Input vector that already includes empty padded values.
- pad_tuple : tuple
- This tuple represents the (before, after) width of the padding
- along this particular iaxis.
- iaxis : int
- The axis currently being looped across. Not used in _linear_ramp.
- kwargs : keyword arguments
- Keyword arguments. Not used in _linear_ramp.
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : int
+ Amount of padding to prepend.
+ num : int
+ Depth into `arr` along `axis` to calculate mean.
+ Range: [1, `arr.shape[axis]`] or None (entire axis)
+ axis : int
+ Axis along which to pad `arr`.
- Return
- ------
- _linear_ramp : ndarray
- Padded vector
+ Returns
+ -------
+ padarr : ndarray
+ Output array, with `pad_amt` values prepended along `axis`. The
+ prepended region is the mean of the first `num` values along `axis`.
"""
- end_values = kwargs['end_values'][iaxis]
- before_delta = ((vector[pad_tuple[0]] - end_values[0])
- / float(pad_tuple[0]))
- after_delta = ((vector[-pad_tuple[1] - 1] - end_values[1])
- / float(pad_tuple[1]))
+ if pad_amt == 0:
+ return arr
- before_vector = np.ones((pad_tuple[0], )) * end_values[0]
- before_vector = before_vector.astype(vector.dtype)
- for i in range(len(before_vector)):
- before_vector[i] = before_vector[i] + i * before_delta
+ # Equivalent to edge padding for single value, so do that instead
+ if num == 1:
+ return _prepend_edge(arr, pad_amt, axis)
- after_vector = np.ones((pad_tuple[1], )) * end_values[1]
- after_vector = after_vector.astype(vector.dtype)
- for i in range(len(after_vector)):
- after_vector[i] = after_vector[i] + i * after_delta
- after_vector = after_vector[::-1]
+ # Use entire array if `num` is too large
+ if num is not None:
+ if num >= arr.shape[axis]:
+ num = None
- return _create_vector(vector, pad_tuple, before_vector, after_vector)
+ # Slice a chunk from the edge to calculate stats on
+ mean_slice = tuple(slice(None) if i != axis else slice(num)
+ for (i, x) in enumerate(arr.shape))
+ # Shape to restore singleton dimension after slicing
+ pad_singleton = tuple(x if i != axis else 1
+ for (i, x) in enumerate(arr.shape))
-def _reflect(vector, pad_tuple, iaxis, kwargs):
+ # Extract slice, calculate mean, reshape to add singleton dimension back
+ mean_chunk = arr[mean_slice].mean(axis).reshape(pad_singleton)
+ _round_ifneeded(mean_chunk, arr.dtype)
+
+ # Concatenate `arr` with `mean_chunk`, extended along `axis` by `pad_amt`
+ return np.concatenate((mean_chunk.repeat(pad_amt, axis).astype(arr.dtype),
+ arr), axis=axis)
+
+
+def _append_mean(arr, pad_amt, num, axis=-1):
"""
- Private function to calculate the before/after vectors for pad_reflect.
+ Append `pad_amt` mean values along `axis`.
Parameters
----------
- vector : ndarray
- Input vector that already includes empty padded values.
- pad_tuple : tuple
- This tuple represents the (before, after) width of the padding
- along this particular iaxis.
- iaxis : int
- The axis currently being looped across. Not used in _reflect.
- kwargs : keyword arguments
- Keyword arguments. Not used in _reflect.
-
- Return
- ------
- _reflect : ndarray
- Padded vector
-
- """
- # Can't have pad_tuple[1] be used in the slice if == to 0.
- if pad_tuple[1] == 0:
- after_vector = vector[pad_tuple[0]:None]
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : int
+ Amount of padding to append.
+ num : int
+ Depth into `arr` along `axis` to calculate mean.
+ Range: [1, `arr.shape[axis]`] or None (entire axis)
+ axis : int
+ Axis along which to pad `arr`.
+
+ Returns
+ -------
+ padarr : ndarray
+ Output array, with `pad_amt` values appended along `axis`. The
+ appended region is the mean of the final `num` values along `axis`.
+
+ """
+ if pad_amt == 0:
+ return arr
+
+ # Equivalent to edge padding for single value, so do that instead
+ if num == 1:
+ return _append_edge(arr, pad_amt, axis)
+
+ # Use entire array if `num` is too large
+ if num is not None:
+ if num >= arr.shape[axis]:
+ num = None
+
+ # Slice a chunk from the edge to calculate stats on
+ end = arr.shape[axis] - 1
+ if num is not None:
+ mean_slice = tuple(
+ slice(None) if i != axis else slice(end, end - num, -1)
+ for (i, x) in enumerate(arr.shape))
else:
- after_vector = vector[pad_tuple[0]:-pad_tuple[1]]
+ mean_slice = tuple(slice(None) for x in arr.shape)
- reverse = after_vector[::-1]
+ # Shape to restore singleton dimension after slicing
+ pad_singleton = tuple(x if i != axis else 1
+ for (i, x) in enumerate(arr.shape))
- before_vector = np.resize(
- np.concatenate((after_vector[1:-1], reverse)), pad_tuple[0])[::-1]
- after_vector = np.resize(
- np.concatenate((reverse[1:-1], after_vector)), pad_tuple[1])
+ # Extract slice, calculate mean, reshape to add singleton dimension back
+ mean_chunk = arr[mean_slice].mean(axis=axis).reshape(pad_singleton)
+ _round_ifneeded(mean_chunk, arr.dtype)
- if kwargs['reflect_type'] == 'even':
- pass
- elif kwargs['reflect_type'] == 'odd':
- before_vector = 2 * vector[pad_tuple[0]] - before_vector
- after_vector = 2 * vector[-pad_tuple[-1] - 1] - after_vector
- else:
- raise ValueError("The keyword '%s' cannot have the value '%s'."
- % ('reflect_type', kwargs['reflect_type']))
- return _create_vector(vector, pad_tuple, before_vector, after_vector)
+ # Concatenate `arr` with `mean_chunk`, extended along `axis` by `pad_amt`
+ return np.concatenate(
+ (arr, mean_chunk.repeat(pad_amt, axis).astype(arr.dtype)), axis=axis)
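Mean padding follows the same pattern but produces float intermediates, so it needs the rounding step before casting back. A simplified sketch of `_prepend_mean` for an explicit `axis` (again dropping the `num == 1` and `num = None` guards, and inlining the integer-rounding check):

```python
import numpy as np

def prepend_mean(arr, pad_amt, num, axis):
    # Mean of the first `num` values along `axis`, repeated `pad_amt` times.
    mean_slice = tuple(slice(None) if i != axis else slice(num)
                       for i in range(arr.ndim))
    pad_singleton = tuple(x if i != axis else 1
                          for (i, x) in enumerate(arr.shape))
    mean_chunk = arr[mean_slice].mean(axis).reshape(pad_singleton)
    if np.issubdtype(arr.dtype, np.integer):
        mean_chunk.round(out=mean_chunk)  # inlined _round_ifneeded
    return np.concatenate(
        (mean_chunk.repeat(pad_amt, axis).astype(arr.dtype), arr), axis=axis)

a = np.array([[1, 3, 100], [2, 4, 100]])
out = prepend_mean(a, 2, num=2, axis=1)
print(out.tolist())  # -> [[2, 2, 1, 3, 100], [3, 3, 2, 4, 100]]
```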
-def _symmetric(vector, pad_tuple, iaxis, kwargs):
+def _prepend_med(arr, pad_amt, num, axis=-1):
"""
- Private function to calculate the before/after vectors for
- pad_symmetric.
+ Prepend `pad_amt` median values along `axis`.
Parameters
----------
- vector : ndarray
- Input vector that already includes empty padded values.
- pad_tuple : tuple
- This tuple represents the (before, after) width of the padding
- along this particular iaxis.
- iaxis : int
- The axis currently being looped across. Not used in _symmetric.
- kwargs : keyword arguments
- Keyword arguments. Not used in _symmetric.
-
- Return
- ------
- _symmetric : ndarray
- Padded vector
-
- """
- if pad_tuple[1] == 0:
- after_vector = vector[pad_tuple[0]:None]
- else:
- after_vector = vector[pad_tuple[0]:-pad_tuple[1]]
-
- before_vector = np.resize(
- np.concatenate((after_vector, after_vector[::-1])),
- pad_tuple[0])[::-1]
- after_vector = np.resize(
- np.concatenate((after_vector[::-1], after_vector)),
- pad_tuple[1])
-
- if kwargs['reflect_type'] == 'even':
- pass
- elif kwargs['reflect_type'] == 'odd':
- before_vector = 2 * vector[pad_tuple[0]] - before_vector
- after_vector = 2 * vector[-pad_tuple[1] - 1] - after_vector
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : int
+ Amount of padding to prepend.
+ num : int
+ Depth into `arr` along `axis` to calculate median.
+ Range: [1, `arr.shape[axis]`] or None (entire axis)
+ axis : int
+ Axis along which to pad `arr`.
+
+ Returns
+ -------
+ padarr : ndarray
+ Output array, with `pad_amt` values prepended along `axis`. The
+ prepended region is the median of the first `num` values along `axis`.
+
+ """
+ if pad_amt == 0:
+ return arr
+
+ # Equivalent to edge padding for single value, so do that instead
+ if num == 1:
+ return _prepend_edge(arr, pad_amt, axis)
+
+ # Use entire array if `num` is too large
+ if num is not None:
+ if num >= arr.shape[axis]:
+ num = None
+
+ # Slice a chunk from the edge to calculate stats on
+ med_slice = tuple(slice(None) if i != axis else slice(num)
+ for (i, x) in enumerate(arr.shape))
+
+ # Shape to restore singleton dimension after slicing
+ pad_singleton = tuple(x if i != axis else 1
+ for (i, x) in enumerate(arr.shape))
+
+ # Extract slice, calculate median, reshape to add singleton dimension back
+ med_chunk = np.median(arr[med_slice], axis=axis).reshape(pad_singleton)
+ _round_ifneeded(med_chunk, arr.dtype)
+
+ # Concatenate `arr` with `med_chunk`, extended along `axis` by `pad_amt`
+ return np.concatenate(
+ (med_chunk.repeat(pad_amt, axis).astype(arr.dtype), arr), axis=axis)
+
+
+def _append_med(arr, pad_amt, num, axis=-1):
+ """
+ Append `pad_amt` median values along `axis`.
+
+ Parameters
+ ----------
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : int
+ Amount of padding to append.
+ num : int
+ Depth into `arr` along `axis` to calculate median.
+ Range: [1, `arr.shape[axis]`] or None (entire axis)
+ axis : int
+ Axis along which to pad `arr`.
+
+ Returns
+ -------
+ padarr : ndarray
+ Output array, with `pad_amt` values appended along `axis`. The
+ appended region is the median of the final `num` values along `axis`.
+
+ """
+ if pad_amt == 0:
+ return arr
+
+ # Equivalent to edge padding for single value, so do that instead
+ if num == 1:
+ return _append_edge(arr, pad_amt, axis)
+
+ # Use entire array if `num` is too large
+ if num is not None:
+ if num >= arr.shape[axis]:
+ num = None
+
+ # Slice a chunk from the edge to calculate stats on
+ end = arr.shape[axis] - 1
+ if num is not None:
+ med_slice = tuple(
+ slice(None) if i != axis else slice(end, end - num, -1)
+ for (i, x) in enumerate(arr.shape))
else:
- raise ValueError("The keyword '%s' cannot have the value '%s'."
- % ('reflect_type', kwargs['reflect_type']))
- return _create_vector(vector, pad_tuple, before_vector, after_vector)
+ med_slice = tuple(slice(None) for x in arr.shape)
+
+ # Shape to restore singleton dimension after slicing
+ pad_singleton = tuple(x if i != axis else 1
+ for (i, x) in enumerate(arr.shape))
+
+ # Extract slice, calculate median, reshape to add singleton dimension back
+ med_chunk = np.median(arr[med_slice], axis=axis).reshape(pad_singleton)
+ _round_ifneeded(med_chunk, arr.dtype)
+
+ # Concatenate `arr` with `med_chunk`, extended along `axis` by `pad_amt`
+ return np.concatenate(
+ (arr, med_chunk.repeat(pad_amt, axis).astype(arr.dtype)), axis=axis)
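For the append-side helpers the chunk is taken with a reversed slice from the end of the axis. A simplified sketch of `_append_med` for an explicit `axis` (dropping the `num == 1` shortcut, the `num = None` clamp, and the rounding step, so it assumes `2 <= num < arr.shape[axis]`):

```python
import numpy as np

def append_med(arr, pad_amt, num, axis):
    # Median of the final `num` values along `axis`, repeated `pad_amt`
    # times and appended after `arr`.
    end = arr.shape[axis] - 1
    med_slice = tuple(
        slice(None) if i != axis else slice(end, end - num, -1)
        for i in range(arr.ndim))
    pad_singleton = tuple(x if i != axis else 1
                          for (i, x) in enumerate(arr.shape))
    med_chunk = np.median(arr[med_slice], axis=axis).reshape(pad_singleton)
    return np.concatenate(
        (arr, med_chunk.repeat(pad_amt, axis).astype(arr.dtype)), axis=axis)

a = np.array([[1, 2, 8], [3, 9, 5]])
out = append_med(a, 2, num=2, axis=1)
print(out.tolist())  # -> [[1, 2, 8, 5, 5], [3, 9, 5, 7, 7]]
```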
+
+
+def _prepend_min(arr, pad_amt, num, axis=-1):
+ """
+ Prepend `pad_amt` minimum values along `axis`.
+
+ Parameters
+ ----------
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : int
+ Amount of padding to prepend.
+ num : int
+ Depth into `arr` along `axis` to calculate minimum.
+ Range: [1, `arr.shape[axis]`] or None (entire axis)
+ axis : int
+ Axis along which to pad `arr`.
+
+ Returns
+ -------
+ padarr : ndarray
+ Output array, with `pad_amt` values prepended along `axis`. The
+ prepended region is the minimum of the first `num` values along
+ `axis`.
+
+ """
+ if pad_amt == 0:
+ return arr
+
+ # Equivalent to edge padding for single value, so do that instead
+ if num == 1:
+ return _prepend_edge(arr, pad_amt, axis)
+
+ # Use entire array if `num` is too large
+ if num is not None:
+ if num >= arr.shape[axis]:
+ num = None
+
+ # Slice a chunk from the edge to calculate stats on
+ min_slice = tuple(slice(None) if i != axis else slice(num)
+ for (i, x) in enumerate(arr.shape))
+
+ # Shape to restore singleton dimension after slicing
+ pad_singleton = tuple(x if i != axis else 1
+ for (i, x) in enumerate(arr.shape))
+
+ # Extract slice, calculate min, reshape to add singleton dimension back
+ min_chunk = arr[min_slice].min(axis=axis).reshape(pad_singleton)
+ # Concatenate `arr` with `min_chunk`, extended along `axis` by `pad_amt`
+ return np.concatenate((min_chunk.repeat(pad_amt, axis=axis), arr),
+ axis=axis)
-def _wrap(vector, pad_tuple, iaxis, kwargs):
+
+def _append_min(arr, pad_amt, num, axis=-1):
"""
- Private function to calculate the before/after vectors for pad_wrap.
+ Append `pad_amt` minimum values along `axis`.
Parameters
----------
- vector : ndarray
- Input vector that already includes empty padded values.
- pad_tuple : tuple
- This tuple represents the (before, after) width of the padding
- along this particular iaxis.
- iaxis : int
- The axis currently being looped across. Not used in _wrap.
- kwargs : keyword arguments
- Keyword arguments. Not used in _wrap.
-
- Return
- ------
- _wrap : ndarray
- Padded vector
-
- """
- if pad_tuple[1] == 0:
- after_vector = vector[pad_tuple[0]:None]
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : int
+ Amount of padding to append.
+ num : int
+ Depth into `arr` along `axis` to calculate minimum.
+ Range: [1, `arr.shape[axis]`] or None (entire axis)
+ axis : int
+ Axis along which to pad `arr`.
+
+ Returns
+ -------
+ padarr : ndarray
+ Output array, with `pad_amt` values appended along `axis`. The
+ appended region is the minimum of the final `num` values along `axis`.
+
+ """
+ if pad_amt == 0:
+ return arr
+
+ # Equivalent to edge padding for single value, so do that instead
+ if num == 1:
+ return _append_edge(arr, pad_amt, axis)
+
+ # Use entire array if `num` is too large
+ if num is not None:
+ if num >= arr.shape[axis]:
+ num = None
+
+ # Slice a chunk from the edge to calculate stats on
+ end = arr.shape[axis] - 1
+ if num is not None:
+ min_slice = tuple(
+ slice(None) if i != axis else slice(end, end - num, -1)
+ for (i, x) in enumerate(arr.shape))
else:
- after_vector = vector[pad_tuple[0]:-pad_tuple[1]]
+ min_slice = tuple(slice(None) for x in arr.shape)
+
+ # Shape to restore singleton dimension after slicing
+ pad_singleton = tuple(x if i != axis else 1
+ for (i, x) in enumerate(arr.shape))
+
+ # Extract slice, calculate min, reshape to add singleton dimension back
+ min_chunk = arr[min_slice].min(axis=axis).reshape(pad_singleton)
+
+ # Concatenate `arr` with `min_chunk`, extended along `axis` by `pad_amt`
+ return np.concatenate((arr, min_chunk.repeat(pad_amt, axis=axis)),
+ axis=axis)
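These min-padding helpers are private; their behavior is reachable through the public `np.pad` API with `mode='minimum'` and `stat_length`. A minimal sketch (array values chosen arbitrarily):

```python
import numpy as np

# 'minimum' mode pads each side with the minimum of the nearest
# `stat_length` values along the axis.
a = np.array([5, 3, 8, 1, 9])
padded = np.pad(a, (2, 2), mode='minimum', stat_length=2)
# Left pad uses min(5, 3) == 3; right pad uses min(1, 9) == 1:
# [3, 3, 5, 3, 8, 1, 9, 1, 1]
```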
+
+
+def _pad_ref(arr, pad_amt, method, axis=-1):
+ """
+ Pad `axis` of `arr` by reflection.
+
+ Parameters
+ ----------
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : tuple of ints, length 2
+ Padding to (prepend, append) along `axis`.
+ method : str
+ Controls method of reflection; options are 'even' or 'odd'.
+ axis : int
+ Axis along which to pad `arr`.
+
+ Returns
+ -------
+ padarr : ndarray
+ Output array, with `pad_amt[0]` values prepended and `pad_amt[1]`
+ values appended along `axis`. Both regions are padded with reflected
+ values from the original array.
+
+ Notes
+ -----
+ This algorithm does not pad with repetition, i.e. the edges are not
+ repeated in the reflection. For that behavior, use `method='symmetric'`.
+
+    The modes 'reflect', 'symmetric', and 'wrap' must be handled within a
+    single function; otherwise, pad amounts that are not integer multiples
+    of the original shape would break the repetition in the final iteration
+    of the indexing tricks.
+
+ """
+    # Nothing to do if no padding is requested along this axis
+ if pad_amt[0] == 0 and pad_amt[1] == 0:
+ return arr
+
+ ##########################################################################
+ # Prepended region
+
+ # Slice off a reverse indexed chunk from near edge to pad `arr` before
+ ref_slice = tuple(slice(None) if i != axis else slice(pad_amt[0], 0, -1)
+ for (i, x) in enumerate(arr.shape))
+
+ ref_chunk1 = arr[ref_slice]
+
+ # Shape to restore singleton dimension after slicing
+ pad_singleton = tuple(x if i != axis else 1
+ for (i, x) in enumerate(arr.shape))
+ if pad_amt[0] == 1:
+ ref_chunk1 = ref_chunk1.reshape(pad_singleton)
+
+ # Memory/computationally more expensive, only do this if `method='odd'`
+ if 'odd' in method and pad_amt[0] > 0:
+ edge_slice1 = tuple(slice(None) if i != axis else 0
+ for (i, x) in enumerate(arr.shape))
+ edge_chunk = arr[edge_slice1].reshape(pad_singleton)
+ ref_chunk1 = 2 * edge_chunk - ref_chunk1
+ del edge_chunk
+
+ ##########################################################################
+ # Appended region
+
+ # Slice off a reverse indexed chunk from far edge to pad `arr` after
+ start = arr.shape[axis] - pad_amt[1] - 1
+ end = arr.shape[axis] - 1
+ ref_slice = tuple(slice(None) if i != axis else slice(start, end)
+ for (i, x) in enumerate(arr.shape))
+ rev_idx = tuple(slice(None) if i != axis else slice(None, None, -1)
+ for (i, x) in enumerate(arr.shape))
+ ref_chunk2 = arr[ref_slice][rev_idx]
+
+ if pad_amt[1] == 1:
+ ref_chunk2 = ref_chunk2.reshape(pad_singleton)
+
+ if 'odd' in method:
+ edge_slice2 = tuple(slice(None) if i != axis else -1
+ for (i, x) in enumerate(arr.shape))
+ edge_chunk = arr[edge_slice2].reshape(pad_singleton)
+ ref_chunk2 = 2 * edge_chunk - ref_chunk2
+ del edge_chunk
+
+ # Concatenate `arr` with both chunks, extending along `axis`
+ return np.concatenate((ref_chunk1, arr, ref_chunk2), axis=axis)
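Through the public API, the even/odd distinction handled here corresponds to the `reflect_type` keyword of `np.pad`. A small sketch:

```python
import numpy as np

a = np.array([1, 2, 3, 4])
# 'even' (default) reflection mirrors about the edge value without repeating it.
even = np.pad(a, (2, 2), mode='reflect')
# [3, 2, 1, 2, 3, 4, 3, 2]
# 'odd' reflection additionally flips the reflected values through the edge value.
odd = np.pad(a, (2, 2), mode='reflect', reflect_type='odd')
# [-1, 0, 1, 2, 3, 4, 5, 6]
```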
+
+
+def _pad_sym(arr, pad_amt, method, axis=-1):
+ """
+ Pad `axis` of `arr` by symmetry.
+
+ Parameters
+ ----------
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : tuple of ints, length 2
+ Padding to (prepend, append) along `axis`.
+ method : str
+ Controls method of symmetry; options are 'even' or 'odd'.
+ axis : int
+ Axis along which to pad `arr`.
+
+ Returns
+ -------
+ padarr : ndarray
+ Output array, with `pad_amt[0]` values prepended and `pad_amt[1]`
+ values appended along `axis`. Both regions are padded with symmetric
+ values from the original array.
+
+ Notes
+ -----
+ This algorithm DOES pad with repetition, i.e. the edges are repeated.
+ For a method that does not repeat edges, use `method='reflect'`.
+
+    The modes 'reflect', 'symmetric', and 'wrap' must be handled within a
+    single function; otherwise, pad amounts that are not integer multiples
+    of the original shape would break the repetition in the final iteration
+    of the indexing tricks.
+
+ """
+    # Nothing to do if no padding is requested along this axis
+ if pad_amt[0] == 0 and pad_amt[1] == 0:
+ return arr
+
+ ##########################################################################
+ # Prepended region
+
+ # Slice off a reverse indexed chunk from near edge to pad `arr` before
+ sym_slice = tuple(slice(None) if i != axis else slice(0, pad_amt[0])
+ for (i, x) in enumerate(arr.shape))
+ rev_idx = tuple(slice(None) if i != axis else slice(None, None, -1)
+ for (i, x) in enumerate(arr.shape))
+ sym_chunk1 = arr[sym_slice][rev_idx]
+
+ # Shape to restore singleton dimension after slicing
+ pad_singleton = tuple(x if i != axis else 1
+ for (i, x) in enumerate(arr.shape))
+ if pad_amt[0] == 1:
+ sym_chunk1 = sym_chunk1.reshape(pad_singleton)
+
+ # Memory/computationally more expensive, only do this if `method='odd'`
+ if 'odd' in method and pad_amt[0] > 0:
+ edge_slice1 = tuple(slice(None) if i != axis else 0
+ for (i, x) in enumerate(arr.shape))
+ edge_chunk = arr[edge_slice1].reshape(pad_singleton)
+ sym_chunk1 = 2 * edge_chunk - sym_chunk1
+ del edge_chunk
+
+ ##########################################################################
+ # Appended region
+
+ # Slice off a reverse indexed chunk from far edge to pad `arr` after
+ start = arr.shape[axis] - pad_amt[1]
+ end = arr.shape[axis]
+ sym_slice = tuple(slice(None) if i != axis else slice(start, end)
+ for (i, x) in enumerate(arr.shape))
+ sym_chunk2 = arr[sym_slice][rev_idx]
+
+ if pad_amt[1] == 1:
+ sym_chunk2 = sym_chunk2.reshape(pad_singleton)
+
+ if 'odd' in method:
+ edge_slice2 = tuple(slice(None) if i != axis else -1
+ for (i, x) in enumerate(arr.shape))
+ edge_chunk = arr[edge_slice2].reshape(pad_singleton)
+ sym_chunk2 = 2 * edge_chunk - sym_chunk2
+ del edge_chunk
+
+ # Concatenate `arr` with both chunks, extending along `axis`
+ return np.concatenate((sym_chunk1, arr, sym_chunk2), axis=axis)
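The edge-repetition described in the Notes is easiest to see next to `'reflect'` through the public API. A sketch:

```python
import numpy as np

a = np.array([1, 2, 3, 4])
# 'symmetric' repeats the edge values in the mirrored region,
# unlike 'reflect', which skips them.
sym = np.pad(a, (2, 2), mode='symmetric')
# [2, 1, 1, 2, 3, 4, 4, 3]
```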
+
+
+def _pad_wrap(arr, pad_amt, axis=-1):
+ """
+ Pad `axis` of `arr` via wrapping.
+
+ Parameters
+ ----------
+ arr : ndarray
+ Input array of arbitrary shape.
+ pad_amt : tuple of ints, length 2
+ Padding to (prepend, append) along `axis`.
+ axis : int
+ Axis along which to pad `arr`.
+
+ Returns
+ -------
+ padarr : ndarray
+ Output array, with `pad_amt[0]` values prepended and `pad_amt[1]`
+        values appended along `axis`. Both regions are padded with wrapped values
+ from the opposite end of `axis`.
+
+ Notes
+ -----
+ This method of padding is also known as 'tile' or 'tiling'.
+
+    The modes 'reflect', 'symmetric', and 'wrap' must be handled within a
+    single function; otherwise, pad amounts that are not integer multiples
+    of the original shape would break the repetition in the final iteration
+    of the indexing tricks.
+
+ """
+    # Nothing to do if no padding is requested along this axis
+ if pad_amt[0] == 0 and pad_amt[1] == 0:
+ return arr
+
+ ##########################################################################
+ # Prepended region
+
+    # Slice off a chunk from the far edge to pad `arr` before
+ start = arr.shape[axis] - pad_amt[0]
+ end = arr.shape[axis]
+ wrap_slice = tuple(slice(None) if i != axis else slice(start, end)
+ for (i, x) in enumerate(arr.shape))
+ wrap_chunk1 = arr[wrap_slice]
+
+ # Shape to restore singleton dimension after slicing
+ pad_singleton = tuple(x if i != axis else 1
+ for (i, x) in enumerate(arr.shape))
+ if pad_amt[0] == 1:
+ wrap_chunk1 = wrap_chunk1.reshape(pad_singleton)
+
+ ##########################################################################
+ # Appended region
+
+    # Slice off a chunk from the near edge to pad `arr` after
+ wrap_slice = tuple(slice(None) if i != axis else slice(0, pad_amt[1])
+ for (i, x) in enumerate(arr.shape))
+ wrap_chunk2 = arr[wrap_slice]
+
+ if pad_amt[1] == 1:
+ wrap_chunk2 = wrap_chunk2.reshape(pad_singleton)
+
+ # Concatenate `arr` with both chunks, extending along `axis`
+ return np.concatenate((wrap_chunk1, arr, wrap_chunk2), axis=axis)
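The tiling behavior this helper implements looks like the following through the public `np.pad` API:

```python
import numpy as np

a = np.array([1, 2, 3, 4])
# 'wrap' tiles the array: the prepended region comes from the far edge
# and the appended region from the near edge.
wrapped = np.pad(a, (2, 2), mode='wrap')
# [3, 4, 1, 2, 3, 4, 1, 2]
```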
+
+
+def _normalize_shape(narray, shape):
+ """
+ Private function which does some checks and normalizes the possibly
+ much simpler representations of 'pad_width', 'stat_length',
+ 'constant_values', 'end_values'.
- before_vector = np.resize(after_vector[::-1], pad_tuple[0])[::-1]
- after_vector = np.resize(after_vector, pad_tuple[1])
+ Parameters
+ ----------
+ narray : ndarray
+ Input ndarray
+ shape : {sequence, int}, optional
+ The width of padding (pad_width) or the number of elements on the
+ edge of the narray used for statistics (stat_length).
+ ((before_1, after_1), ... (before_N, after_N)) unique number of
+ elements for each axis where `N` is rank of `narray`.
+ ((before, after),) yields same before and after constants for each
+ axis.
+ (constant,) or int is a shortcut for before = after = constant for
+ all axes.
- return _create_vector(vector, pad_tuple, before_vector, after_vector)
+ Returns
+ -------
+ _normalize_shape : tuple of tuples
+ int => ((int, int), (int, int), ...)
+ [[int1, int2], [int3, int4], ...] => ((int1, int2), (int3, int4), ...)
+ ((int1, int2), (int3, int4), ...) => no change
+ [[int1, int2], ] => ((int1, int2), (int1, int2), ...)
+ ((int1, int2), ) => ((int1, int2), (int1, int2), ...)
+ [[int , ], ] => ((int, int), (int, int), ...)
+ ((int , ), ) => ((int, int), (int, int), ...)
+
+ """
+ normshp = None
+ shapelen = len(np.shape(narray))
+    if isinstance(shape, int) or shape is None:
+ normshp = ((shape, shape), ) * shapelen
+ elif (isinstance(shape, (tuple, list))
+ and isinstance(shape[0], (tuple, list))
+ and len(shape) == shapelen):
+ normshp = shape
+ for i in normshp:
+ if len(i) != 2:
+ fmt = "Unable to create correctly shaped tuple from %s"
+ raise ValueError(fmt % (normshp,))
+ elif (isinstance(shape, (tuple, list))
+ and isinstance(shape[0], (int, float, long))
+ and len(shape) == 1):
+ normshp = ((shape[0], shape[0]), ) * shapelen
+ elif (isinstance(shape, (tuple, list))
+ and isinstance(shape[0], (int, float, long))
+ and len(shape) == 2):
+ normshp = (shape, ) * shapelen
+ if normshp is None:
+ fmt = "Unable to create correctly shaped tuple from %s"
+ raise ValueError(fmt % (shape,))
+ return normshp
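The shorthand expansions documented above are visible in the shapes `np.pad` produces. A sketch using a 2-D input:

```python
import numpy as np

a = np.zeros((2, 3), dtype=int)
# A scalar pad_width pads every axis by the same amount on both sides:
assert np.pad(a, 2, mode='constant').shape == (6, 7)
# The single-element shorthand (2,) behaves the same way:
assert np.pad(a, (2,), mode='constant').shape == (6, 7)
# A full per-axis spec gives independent before/after widths:
assert np.pad(a, ((1, 2), (3, 4)), mode='constant').shape == (5, 10)
```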
-def _edge(vector, pad_tuple, iaxis, kwargs):
+def _validate_lengths(narray, number_elements):
"""
- Private function to calculate the before/after vectors for pad_edge.
+ Private function which does some checks and reformats pad_width and
+ stat_length using _normalize_shape.
Parameters
----------
- vector : ndarray
- Input vector that already includes empty padded values.
- pad_tuple : tuple
- This tuple represents the (before, after) width of the padding
- along this particular iaxis.
- iaxis : int
- The axis currently being looped across. Not used in _edge.
- kwargs : keyword arguments
- Keyword arguments. Not used in _edge.
+ narray : ndarray
+ Input ndarray
+ number_elements : {sequence, int}, optional
+ The width of padding (pad_width) or the number of elements on the edge
+ of the narray used for statistics (stat_length).
+ ((before_1, after_1), ... (before_N, after_N)) unique number of
+ elements for each axis.
+ ((before, after),) yields same before and after constants for each
+ axis.
+ (constant,) or int is a shortcut for before = after = constant for all
+ axes.
- Return
- ------
- _edge : ndarray
- Padded vector
+ Returns
+ -------
+ _validate_lengths : tuple of tuples
+ int => ((int, int), (int, int), ...)
+ [[int1, int2], [int3, int4], ...] => ((int1, int2), (int3, int4), ...)
+ ((int1, int2), (int3, int4), ...) => no change
+ [[int1, int2], ] => ((int1, int2), (int1, int2), ...)
+ ((int1, int2), ) => ((int1, int2), (int1, int2), ...)
+ [[int , ], ] => ((int, int), (int, int), ...)
+ ((int , ), ) => ((int, int), (int, int), ...)
"""
- return _create_vector(vector, pad_tuple, vector[pad_tuple[0]],
- vector[-pad_tuple[1] - 1])
+ normshp = _normalize_shape(narray, number_elements)
+ for i in normshp:
+ chk = [1 if x is None else x for x in i]
+ chk = [1 if x > 0 else -1 for x in chk]
+ if (chk[0] < 0) or (chk[1] < 0):
+ fmt = "%s cannot contain negative values."
+ raise ValueError(fmt % (number_elements,))
+ return normshp
###############################################################################
@@ -714,19 +1281,6 @@ def pad(array, pad_width, mode=None, **kwargs):
narray = np.array(array)
pad_width = _validate_lengths(narray, pad_width)
- modefunc = {
- 'constant': _constant,
- 'edge': _edge,
- 'linear_ramp': _linear_ramp,
- 'maximum': _maximum,
- 'mean': _mean,
- 'median': _median,
- 'minimum': _minimum,
- 'reflect': _reflect,
- 'symmetric': _symmetric,
- 'wrap': _wrap,
- }
-
allowedkwargs = {
'constant': ['constant_values'],
'edge': [],
@@ -748,8 +1302,6 @@ def pad(array, pad_width, mode=None, **kwargs):
}
if isinstance(mode, str):
- function = modefunc[mode]
-
# Make sure have allowed kwargs appropriate for mode
for key in kwargs:
if key not in allowedkwargs[mode]:
@@ -762,35 +1314,156 @@ def pad(array, pad_width, mode=None, **kwargs):
# Need to only normalize particular keywords.
for i in kwargs:
- if i == 'stat_length' and kwargs[i]:
+ if i == 'stat_length':
kwargs[i] = _validate_lengths(narray, kwargs[i])
if i in ['end_values', 'constant_values']:
kwargs[i] = _normalize_shape(narray, kwargs[i])
elif mode is None:
raise ValueError('Keyword "mode" must be a function or one of %s.' %
- (list(modefunc.keys()),))
+ (list(allowedkwargs.keys()),))
else:
- # User supplied function, I hope
+ # Drop back to old, slower np.apply_along_axis mode for user-supplied
+ # vector function
function = mode
- # Create a new padded array
- rank = list(range(len(narray.shape)))
- total_dim_increase = [np.sum(pad_width[i]) for i in rank]
- offset_slices = [slice(pad_width[i][0],
- pad_width[i][0] + narray.shape[i])
- for i in rank]
- new_shape = np.array(narray.shape) + total_dim_increase
- newmat = np.zeros(new_shape).astype(narray.dtype)
-
- # Insert the original array into the padded array
- newmat[offset_slices] = narray
-
- # This is the core of pad ...
- for iaxis in rank:
- np.apply_along_axis(function,
- iaxis,
- newmat,
- pad_width[iaxis],
- iaxis,
- kwargs)
+ # Create a new padded array
+ rank = list(range(len(narray.shape)))
+ total_dim_increase = [np.sum(pad_width[i]) for i in rank]
+ offset_slices = [slice(pad_width[i][0],
+ pad_width[i][0] + narray.shape[i])
+ for i in rank]
+ new_shape = np.array(narray.shape) + total_dim_increase
+ newmat = np.zeros(new_shape).astype(narray.dtype)
+
+ # Insert the original array into the padded array
+ newmat[offset_slices] = narray
+
+ # This is the core of pad ...
+ for iaxis in rank:
+ np.apply_along_axis(function,
+ iaxis,
+ newmat,
+ pad_width[iaxis],
+ iaxis,
+ kwargs)
+ return newmat
+
+ # If we get here, use new padding method
+ newmat = narray.copy()
+
+ # API preserved, but completely new algorithm which pads by building the
+ # entire block to pad before/after `arr` with in one step, for each axis.
+ if mode == 'constant':
+ for axis, ((pad_before, pad_after), (before_val, after_val)) \
+ in enumerate(zip(pad_width, kwargs['constant_values'])):
+ newmat = _prepend_const(newmat, pad_before, before_val, axis)
+ newmat = _append_const(newmat, pad_after, after_val, axis)
+
+ elif mode == 'edge':
+ for axis, (pad_before, pad_after) in enumerate(pad_width):
+ newmat = _prepend_edge(newmat, pad_before, axis)
+ newmat = _append_edge(newmat, pad_after, axis)
+
+ elif mode == 'linear_ramp':
+ for axis, ((pad_before, pad_after), (before_val, after_val)) \
+ in enumerate(zip(pad_width, kwargs['end_values'])):
+ newmat = _prepend_ramp(newmat, pad_before, before_val, axis)
+ newmat = _append_ramp(newmat, pad_after, after_val, axis)
+
+ elif mode == 'maximum':
+ for axis, ((pad_before, pad_after), (chunk_before, chunk_after)) \
+ in enumerate(zip(pad_width, kwargs['stat_length'])):
+ newmat = _prepend_max(newmat, pad_before, chunk_before, axis)
+ newmat = _append_max(newmat, pad_after, chunk_after, axis)
+
+ elif mode == 'mean':
+ for axis, ((pad_before, pad_after), (chunk_before, chunk_after)) \
+ in enumerate(zip(pad_width, kwargs['stat_length'])):
+ newmat = _prepend_mean(newmat, pad_before, chunk_before, axis)
+ newmat = _append_mean(newmat, pad_after, chunk_after, axis)
+
+ elif mode == 'median':
+ for axis, ((pad_before, pad_after), (chunk_before, chunk_after)) \
+ in enumerate(zip(pad_width, kwargs['stat_length'])):
+ newmat = _prepend_med(newmat, pad_before, chunk_before, axis)
+ newmat = _append_med(newmat, pad_after, chunk_after, axis)
+
+ elif mode == 'minimum':
+ for axis, ((pad_before, pad_after), (chunk_before, chunk_after)) \
+ in enumerate(zip(pad_width, kwargs['stat_length'])):
+ newmat = _prepend_min(newmat, pad_before, chunk_before, axis)
+ newmat = _append_min(newmat, pad_after, chunk_after, axis)
+
+ elif mode == 'reflect':
+ for axis, (pad_before, pad_after) in enumerate(pad_width):
+ # Recursive padding along any axis where `pad_amt` is too large
+ # for indexing tricks. We can only safely pad the original axis
+ # length, to keep the period of the reflections consistent.
+ if ((pad_before > 0) or
+ (pad_after > 0)) and newmat.shape[axis] == 1:
+ # Extending singleton dimension for 'reflect' is legacy
+ # behavior; it really should raise an error.
+ newmat = _prepend_edge(newmat, pad_before, axis)
+ newmat = _append_edge(newmat, pad_after, axis)
+ continue
+
+ method = kwargs['reflect_type']
+ safe_pad = newmat.shape[axis] - 1
+ repeat = safe_pad
+ while ((pad_before > safe_pad) or (pad_after > safe_pad)):
+ offset = 0
+ pad_iter_b = min(safe_pad,
+ safe_pad * (pad_before // safe_pad))
+ pad_iter_a = min(safe_pad, safe_pad * (pad_after // safe_pad))
+ newmat = _pad_ref(newmat, (pad_iter_b,
+ pad_iter_a), method, axis)
+ pad_before -= pad_iter_b
+ pad_after -= pad_iter_a
+ if pad_iter_b > 0:
+ offset += 1
+ if pad_iter_a > 0:
+ offset += 1
+ safe_pad += pad_iter_b + pad_iter_a
+ newmat = _pad_ref(newmat, (pad_before, pad_after), method, axis)
+
+ elif mode == 'symmetric':
+ for axis, (pad_before, pad_after) in enumerate(pad_width):
+ # Recursive padding along any axis where `pad_amt` is too large
+ # for indexing tricks. We can only safely pad the original axis
+ # length, to keep the period of the reflections consistent.
+ method = kwargs['reflect_type']
+ safe_pad = newmat.shape[axis]
+ repeat = safe_pad
+ while ((pad_before > safe_pad) or
+ (pad_after > safe_pad)):
+ pad_iter_b = min(safe_pad,
+ safe_pad * (pad_before // safe_pad))
+ pad_iter_a = min(safe_pad, safe_pad * (pad_after // safe_pad))
+ newmat = _pad_sym(newmat, (pad_iter_b,
+ pad_iter_a), method, axis)
+ pad_before -= pad_iter_b
+ pad_after -= pad_iter_a
+ safe_pad += pad_iter_b + pad_iter_a
+ newmat = _pad_sym(newmat, (pad_before, pad_after), method, axis)
+
+ elif mode == 'wrap':
+ for axis, (pad_before, pad_after) in enumerate(pad_width):
+ # Recursive padding along any axis where `pad_amt` is too large
+ # for indexing tricks. We can only safely pad the original axis
+ # length, to keep the period of the reflections consistent.
+ safe_pad = newmat.shape[axis]
+ repeat = safe_pad
+ while ((pad_before > safe_pad) or
+ (pad_after > safe_pad)):
+ pad_iter_b = min(safe_pad,
+ safe_pad * (pad_before // safe_pad))
+ pad_iter_a = min(safe_pad, safe_pad * (pad_after // safe_pad))
+ newmat = _pad_wrap(newmat, (pad_iter_b, pad_iter_a), axis)
+
+ pad_before -= pad_iter_b
+ pad_after -= pad_iter_a
+ safe_pad += pad_iter_b + pad_iter_a
+
+ newmat = _pad_wrap(newmat, (pad_before, pad_after), axis)
+
return newmat
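The iterative loops above exist because the slicing tricks can only safely pad one period of the array at a time. Through the public API, pad amounts larger than the axis length therefore simply keep repeating the pattern. A sketch:

```python
import numpy as np

a = np.array([1, 2, 3])
# 'wrap' keeps tiling with period len(a) == 3:
tiled = np.pad(a, (0, 5), mode='wrap')
# [1, 2, 3, 1, 2, 3, 1, 2]
# 'reflect' keeps bouncing with period 2 * (len(a) - 1) == 4:
bounced = np.pad(a, (5, 0), mode='reflect')
# [2, 1, 2, 3, 2, 1, 2, 3]
```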
diff --git a/numpy/lib/function_base.py b/numpy/lib/function_base.py
index a7163a7ca..d782f454a 100644
--- a/numpy/lib/function_base.py
+++ b/numpy/lib/function_base.py
@@ -1478,28 +1478,40 @@ def nansum(a, axis=None):
def nanmin(a, axis=None):
"""
- Return the minimum of an array or minimum along an axis ignoring any NaNs.
+ Return the minimum of an array or minimum along an axis, ignoring any NaNs.
Parameters
----------
a : array_like
- Array containing numbers whose minimum is desired.
+ Array containing numbers whose minimum is desired. If `a` is not
+ an array, a conversion is attempted.
axis : int, optional
- Axis along which the minimum is computed.The default is to compute
+ Axis along which the minimum is computed. The default is to compute
the minimum of the flattened array.
Returns
-------
nanmin : ndarray
- A new array or a scalar array with the result.
+ An array with the same shape as `a`, with the specified axis removed.
+ If `a` is a 0-d array, or if axis is None, an ndarray scalar is
+ returned. The same dtype as `a` is returned.
See Also
--------
- numpy.amin : Minimum across array including any Not a Numbers.
- numpy.nanmax : Maximum across array ignoring any Not a Numbers.
- isnan : Shows which elements are Not a Number (NaN).
- isfinite: Shows which elements are not: Not a Number, positive and
- negative infinity
+ nanmax :
+ The maximum value of an array along a given axis, ignoring any NaNs.
+ amin :
+ The minimum value of an array along a given axis, propagating any NaNs.
+ fmin :
+ Element-wise minimum of two arrays, ignoring any NaNs.
+ minimum :
+ Element-wise minimum of two arrays, propagating any NaNs.
+ isnan :
+ Shows which elements are Not a Number (NaN).
+ isfinite:
+ Shows which elements are neither NaN nor infinity.
+
+ amax, fmax, maximum
Notes
-----
@@ -1510,7 +1522,6 @@ def nanmin(a, axis=None):
If the input has an integer type the function is equivalent to np.min.
-
Examples
--------
>>> a = np.array([[1, 2], [3, np.nan]])
@@ -1572,7 +1583,7 @@ def nanargmin(a, axis=None):
def nanmax(a, axis=None):
"""
- Return the maximum of an array or maximum along an axis ignoring any NaNs.
+ Return the maximum of an array or maximum along an axis, ignoring any NaNs.
Parameters
----------
@@ -1587,16 +1598,25 @@ def nanmax(a, axis=None):
-------
nanmax : ndarray
An array with the same shape as `a`, with the specified axis removed.
- If `a` is a 0-d array, or if axis is None, a ndarray scalar is
- returned. The the same dtype as `a` is returned.
+ If `a` is a 0-d array, or if axis is None, an ndarray scalar is
+ returned. The same dtype as `a` is returned.
See Also
--------
- numpy.amax : Maximum across array including any Not a Numbers.
- numpy.nanmin : Minimum across array ignoring any Not a Numbers.
- isnan : Shows which elements are Not a Number (NaN).
- isfinite: Shows which elements are not: Not a Number, positive and
- negative infinity
+ nanmin :
+ The minimum value of an array along a given axis, ignoring any NaNs.
+ amax :
+ The maximum value of an array along a given axis, propagating any NaNs.
+ fmax :
+ Element-wise maximum of two arrays, ignoring any NaNs.
+ maximum :
+ Element-wise maximum of two arrays, propagating any NaNs.
+ isnan :
+ Shows which elements are Not a Number (NaN).
+ isfinite:
+ Shows which elements are neither NaN nor infinity.
+
+ amin, fmin, minimum
Notes
-----
diff --git a/numpy/lib/stride_tricks.py b/numpy/lib/stride_tricks.py
index 1f08131ec..7b6b06fdc 100644
--- a/numpy/lib/stride_tricks.py
+++ b/numpy/lib/stride_tricks.py
@@ -27,7 +27,10 @@ def as_strided(x, shape=None, strides=None):
interface['shape'] = tuple(shape)
if strides is not None:
interface['strides'] = tuple(strides)
- return np.asarray(DummyArray(interface, base=x))
+ array = np.asarray(DummyArray(interface, base=x))
+ # Make sure dtype is correct in case of custom dtype
+ array.dtype = x.dtype
+ return array
def broadcast_arrays(*args):
"""
diff --git a/numpy/lib/tests/test_arraypad.py b/numpy/lib/tests/test_arraypad.py
index 041d7e6e0..c6acd4db0 100644
--- a/numpy/lib/tests/test_arraypad.py
+++ b/numpy/lib/tests/test_arraypad.py
@@ -178,21 +178,21 @@ class TestStatistic(TestCase):
a = [[4, 5, 6]]
a = pad(a, (5, 7), 'mean', stat_length=2)
b = np.array([
- [4, 4, 4, 4, 4, 4, 5, 6, 5, 5, 5, 5, 5, 5, 5],
- [4, 4, 4, 4, 4, 4, 5, 6, 5, 5, 5, 5, 5, 5, 5],
- [4, 4, 4, 4, 4, 4, 5, 6, 5, 5, 5, 5, 5, 5, 5],
- [4, 4, 4, 4, 4, 4, 5, 6, 5, 5, 5, 5, 5, 5, 5],
- [4, 4, 4, 4, 4, 4, 5, 6, 5, 5, 5, 5, 5, 5, 5],
-
- [4, 4, 4, 4, 4, 4, 5, 6, 5, 5, 5, 5, 5, 5, 5],
-
- [4, 4, 4, 4, 4, 4, 5, 6, 5, 5, 5, 5, 5, 5, 5],
- [4, 4, 4, 4, 4, 4, 5, 6, 5, 5, 5, 5, 5, 5, 5],
- [4, 4, 4, 4, 4, 4, 5, 6, 5, 5, 5, 5, 5, 5, 5],
- [4, 4, 4, 4, 4, 4, 5, 6, 5, 5, 5, 5, 5, 5, 5],
- [4, 4, 4, 4, 4, 4, 5, 6, 5, 5, 5, 5, 5, 5, 5],
- [4, 4, 4, 4, 4, 4, 5, 6, 5, 5, 5, 5, 5, 5, 5],
- [4, 4, 4, 4, 4, 4, 5, 6, 5, 5, 5, 5, 5, 5, 5]])
+ [4, 4, 4, 4, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6],
+ [4, 4, 4, 4, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6],
+ [4, 4, 4, 4, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6],
+ [4, 4, 4, 4, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6],
+ [4, 4, 4, 4, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6],
+
+ [4, 4, 4, 4, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6],
+
+ [4, 4, 4, 4, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6],
+ [4, 4, 4, 4, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6],
+ [4, 4, 4, 4, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6],
+ [4, 4, 4, 4, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6],
+ [4, 4, 4, 4, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6],
+ [4, 4, 4, 4, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6],
+ [4, 4, 4, 4, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6]])
assert_array_equal(a, b)
def test_check_mean_2(self):
diff --git a/numpy/linalg/tests/test_linalg.py b/numpy/linalg/tests/test_linalg.py
index c64253d21..2dc55ab5e 100644
--- a/numpy/linalg/tests/test_linalg.py
+++ b/numpy/linalg/tests/test_linalg.py
@@ -2,6 +2,7 @@
"""
from __future__ import division, absolute_import, print_function
+import os
import sys
import numpy as np
@@ -750,8 +751,49 @@ def test_generalized_raise_multiloop():
assert_raises(np.linalg.LinAlgError, np.linalg.inv, x)
+def _is_xerbla_safe():
+ """
+ Check that running the xerbla test is safe --- if python_xerbla
+ is not successfully linked in, the standard xerbla routine is called,
+ which aborts the process.
-@dec.skipif(sys.platform == "win32", "python_xerbla not enabled on Win32")
+ """
+
+ try:
+ pid = os.fork()
+ except (OSError, AttributeError):
+ # fork failed, or not running on POSIX
+ return False
+
+ if pid == 0:
+ # child; close i/o file handles
+ os.close(1)
+ os.close(0)
+ # avoid producing core files
+ import resource
+ resource.setrlimit(resource.RLIMIT_CORE, (0, 0))
+ # these calls may abort
+ try:
+ a = np.array([[1]])
+ np.linalg.lapack_lite.dgetrf(
+ 1, 1, a.astype(np.double),
+ 0, # <- invalid value
+ a.astype(np.intc), 0)
+ except:
+ pass
+ try:
+ np.linalg.lapack_lite.xerbla()
+ except:
+ pass
+ os._exit(111)
+ else:
+ # parent
+ pid, status = os.wait()
+ if os.WEXITSTATUS(status) == 111 and not os.WIFSIGNALED(status):
+ return True
+ return False
+
+@dec.skipif(not _is_xerbla_safe(), "python_xerbla not found")
def test_xerbla():
# Test that xerbla works (with GIL)
a = np.array([[1]])
diff --git a/numpy/linalg/umath_linalg.c.src b/numpy/linalg/umath_linalg.c.src
index eadbde8e7..796f76778 100644
--- a/numpy/linalg/umath_linalg.c.src
+++ b/numpy/linalg/umath_linalg.c.src
@@ -1196,11 +1196,11 @@ init_gemm_params(GEMM_PARAMS_t *params,
npy_intp* steps,
size_t sot)
{
+ npy_uint8 *mem_buff = NULL;
matrix_desc a, b, c;
matrix_desc_init(&a, steps + 0, sot, m, k);
matrix_desc_init(&b, steps + 2, sot, k, n);
matrix_desc_init(&c, steps + 4, sot, m, n);
- npy_uint8 *mem_buff = NULL;
if (a.size + b.size + c.size)
{
@@ -3029,7 +3029,7 @@ static inline void
npy_intp* dimensions,
npy_intp* steps)
{
- ptrdiff_t outer_steps[3];
+ ptrdiff_t outer_steps[4];
int error_occurred = get_fp_invalid_and_clear();
size_t iter;
size_t outer_dim = *dimensions++;
diff --git a/numpy/ma/extras.py b/numpy/ma/extras.py
index 2f3159c49..5a484ce9d 100644
--- a/numpy/ma/extras.py
+++ b/numpy/ma/extras.py
@@ -1402,14 +1402,13 @@ def corrcoef(x, y=None, rowvar=True, bias=False, allow_masked=True, ddof=None):
if rowvar:
for i in range(n - 1):
for j in range(i + 1, n):
- _x = mask_cols(vstack((x[i], x[j]))).var(axis=1,
- ddof=1 - bias)
+ _x = mask_cols(vstack((x[i], x[j]))).var(axis=1, ddof=ddof)
_denom[i, j] = _denom[j, i] = ma.sqrt(ma.multiply.reduce(_x))
else:
for i in range(n - 1):
for j in range(i + 1, n):
- _x = mask_cols(vstack((x[:, i], x[:, j]))).var(axis=1,
- ddof=1 - bias)
+ _x = mask_cols(
+ vstack((x[:, i], x[:, j]))).var(axis=1, ddof=ddof)
_denom[i, j] = _denom[j, i] = ma.sqrt(ma.multiply.reduce(_x))
return c / _denom
diff --git a/numpy/ma/tests/test_regression.py b/numpy/ma/tests/test_regression.py
index f713a8a1a..eb301aa05 100644
--- a/numpy/ma/tests/test_regression.py
+++ b/numpy/ma/tests/test_regression.py
@@ -61,6 +61,14 @@ class TestRegression(TestCase):
a.var(out=mout)
assert_(mout._data == 0)
+ def test_ddof_corrcoef(self):
+ # See gh-3336
+ x = np.ma.masked_equal([1,2,3,4,5], 4)
+ y = np.array([2,2.5,3.1,3,5])
+ r0 = np.ma.corrcoef(x, y, ddof=0)
+ r1 = np.ma.corrcoef(x, y, ddof=1)
+ # ddof should not have an effect (it gets cancelled out)
+ assert_allclose(r0.data, r1.data)
if __name__ == "__main__":
run_module_suite()
diff --git a/runtests.py b/runtests.py
new file mode 100755
index 000000000..7660546c8
--- /dev/null
+++ b/runtests.py
@@ -0,0 +1,197 @@
+#!/usr/bin/env python
+"""
+runtests.py [OPTIONS] [-- ARGS]
+
+Run tests, building the project first.
+
+Examples::
+
+ $ python runtests.py
+ $ python runtests.py -s {SAMPLE_SUBMODULE}
+ $ python runtests.py -t {SAMPLE_TEST}
+ $ python runtests.py --ipython
+
+"""
+
+#
+# This is a generic test runner script for projects using Numpy's test
+# framework. Change the following values to adapt to your project:
+#
+
+PROJECT_MODULE = "numpy"
+PROJECT_ROOT_FILES = ['numpy', 'LICENSE.txt', 'setup.py']
+SAMPLE_TEST = "numpy/linalg/tests/test_linalg.py:test_byteorder_check"
+SAMPLE_SUBMODULE = "linalg"
+
+# ---------------------------------------------------------------------
+
+__doc__ = __doc__.format(**globals())
+
+import sys
+import os
+
+# In case we are run from the source directory, we don't want to import the
+# project from there:
+sys.path.pop(0)
+
+import shutil
+import subprocess
+from argparse import ArgumentParser, REMAINDER
+
+def main(argv):
+ parser = ArgumentParser(usage=__doc__.lstrip())
+ parser.add_argument("--verbose", "-v", action="count", default=1,
+ help="more verbosity")
+ parser.add_argument("--no-build", "-n", action="store_true", default=False,
+ help="do not build the project (use system installed version)")
+ parser.add_argument("--build-only", "-b", action="store_true", default=False,
+ help="just build, do not run any tests")
+ parser.add_argument("--doctests", action="store_true", default=False,
+ help="Run doctests in module")
+ parser.add_argument("--coverage", action="store_true", default=False,
+ help=("report coverage of project code. HTML output goes "
+ "under build/coverage"))
+ parser.add_argument("--mode", "-m", default="fast",
+ help="'fast', 'full', or something that could be "
+ "passed to nosetests -A [default: fast]")
+ parser.add_argument("--submodule", "-s", default=None,
+ help="Submodule whose tests to run (core, linalg, ...)")
+ parser.add_argument("--pythonpath", "-p", default=None,
+ help="Paths to prepend to PYTHONPATH")
+ parser.add_argument("--tests", "-t", action='append',
+ help="Specify tests to run")
+ parser.add_argument("--python", action="store_true",
+ help="Start a Python shell with PYTHONPATH set")
+ parser.add_argument("--ipython", "-i", action="store_true",
+ help="Start IPython shell with PYTHONPATH set")
+ parser.add_argument("--shell", action="store_true",
+ help="Start Unix shell with PYTHONPATH set")
+ parser.add_argument("--debug", "-g", action="store_true",
+ help="Debug build")
+ parser.add_argument("args", metavar="ARGS", default=[], nargs=REMAINDER,
+ help="Arguments to pass to Nose")
+ args = parser.parse_args(argv)
+
+ if args.pythonpath:
+ for p in reversed(args.pythonpath.split(os.pathsep)):
+ sys.path.insert(0, p)
+
+ if not args.no_build:
+ site_dir = build_project(args)
+ sys.path.insert(0, site_dir)
+ os.environ['PYTHONPATH'] = site_dir
+
+ if args.python:
+ import code
+ code.interact()
+ sys.exit(0)
+
+ if args.ipython:
+ import IPython
+ IPython.embed()
+ sys.exit(0)
+
+ if args.shell:
+ shell = os.environ.get('SHELL', 'sh')
+ print("Spawning a Unix shell...")
+ os.execv(shell, [shell])
+ sys.exit(1)
+
+ extra_argv = args.args
+
+ if args.coverage:
+ dst_dir = os.path.join('build', 'coverage')
+ fn = os.path.join(dst_dir, 'coverage_html.js')
+ if os.path.isdir(dst_dir) and os.path.isfile(fn):
+ shutil.rmtree(dst_dir)
+ extra_argv += ['--cover-html',
+ '--cover-html-dir='+dst_dir]
+
+ if args.build_only:
+ sys.exit(0)
+ elif args.submodule:
+ modname = PROJECT_MODULE + '.' + args.submodule
+ try:
+ __import__(modname)
+ test = sys.modules[modname].test
+ except (ImportError, KeyError, AttributeError):
+ print("Cannot run tests for %s" % modname)
+ sys.exit(2)
+ elif args.tests:
+ def test(*a, **kw):
+ extra_argv = kw.pop('extra_argv', ())
+ extra_argv = list(extra_argv) + args.tests[1:]
+ kw['extra_argv'] = extra_argv
+ from numpy.testing import Tester
+ return Tester(args.tests[0]).test(*a, **kw)
+ else:
+ __import__(PROJECT_MODULE)
+ test = sys.modules[PROJECT_MODULE].test
+
+ result = test(args.mode,
+ verbose=args.verbose,
+ extra_argv=extra_argv,
+ doctests=args.doctests,
+ coverage=args.coverage)
+
+ if result.wasSuccessful():
+ sys.exit(0)
+ else:
+ sys.exit(1)
+
+def build_project(args):
+ """
+ Build a dev version of the project.
+
+ Returns
+ -------
+ site_dir
+ site-packages directory where it was installed
+
+ """
+
+ root_dir = os.path.abspath(os.path.join(os.path.dirname(__file__)))
+ root_ok = [os.path.exists(os.path.join(root_dir, fn))
+ for fn in PROJECT_ROOT_FILES]
+ if not all(root_ok):
+ print("To build the project, run runtests.py from a "
+ "git checkout or unpacked source tree")
+ sys.exit(1)
+
+ dst_dir = os.path.join(root_dir, 'build', 'testenv')
+
+ env = dict(os.environ)
+ cmd = [sys.executable, 'setup.py']
+
+ # Always use ccache, if installed
+ env['PATH'] = os.pathsep.join(['/usr/lib/ccache']
+ + env.get('PATH', '').split(os.pathsep))
+
+ if args.debug:
+ # assume everyone uses gcc/gfortran
+ env['OPT'] = '-O0 -ggdb'
+ env['FOPT'] = '-O0 -ggdb'
+ cmd += ["build", "--debug"]
+
+ cmd += ['install', '--prefix=' + dst_dir]
+
+ print("Building, see build.log...")
+ with open('build.log', 'w') as log:
+ ret = subprocess.call(cmd, env=env, stdout=log, stderr=log,
+ cwd=root_dir)
+
+ if ret == 0:
+ print("Build OK")
+ else:
+ with open('build.log', 'r') as f:
+ print(f.read())
+ print("Build failed!")
+ sys.exit(1)
+
+ from distutils.sysconfig import get_python_lib
+ site_dir = get_python_lib(prefix=dst_dir)
+
+ return site_dir
+
+if __name__ == "__main__":
+ main(argv=sys.argv[1:])
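runtests.py forwards everything after its own options to nose through `nargs=REMAINDER`. A small standalone sketch of that argparse pattern (option and token names here are illustrative, not taken from the script):

```python
from argparse import ArgumentParser, REMAINDER

parser = ArgumentParser()
parser.add_argument("--mode", default="fast")
# REMAINDER swallows the rest of the command line, including
# option-like tokens such as "-v", so they can be handed to
# another tool (here, nose) untouched.
parser.add_argument("args", nargs=REMAINDER)

ns = parser.parse_args(["--mode", "full", "test_linalg", "-v"])
```

Once the first non-option token is reached, everything after it lands in `ns.args`, which is how `python runtests.py -- -v` passes `-v` through to the test runner.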
diff --git a/tools/osxbuild/README.txt b/tools/osxbuild/README.txt
deleted file mode 100644
index 00600f190..000000000
--- a/tools/osxbuild/README.txt
+++ /dev/null
@@ -1,32 +0,0 @@
-==================================
- Building an OSX binary for numpy
-==================================
-
-This directory contains the scripts to build a universal binary for
-OSX. The binaries work on OSX 10.4 and 10.5.
-
-The docstring in build.py may contain more current details.
-
-Requirements
-============
-
-* bdist_mpkg v0.4.3
-
-Build
-=====
-
-The build script will build a numpy distribution using bdist_mpkg and
-create the mac package (mpkg) bundled in a disk image (dmg). To run
-the build script::
-
- python build.py
-
-Install and test
-----------------
-
-The *install_and_test.py* script will find the numpy*.mpkg, install it
-using the Mac installer and then run the numpy test suite. To run the
-install and test::
-
- python install_and_test.py
-
diff --git a/tools/osxbuild/build.py b/tools/osxbuild/build.py
deleted file mode 100644
index 71d37889d..000000000
--- a/tools/osxbuild/build.py
+++ /dev/null
@@ -1,105 +0,0 @@
-"""Python script to build the OSX universal binaries.
-
-This is a simple script, most of the heavy lifting is done in bdist_mpkg.
-
-To run this script: 'python build.py'
-
-Requires a svn version of numpy is installed, svn is used to revert
-file changes made to the docs for the end-user install. Installer is
-built using sudo so file permissions are correct when installed on
-user system. Script will prompt for sudo pwd.
-
-"""
-from __future__ import division, print_function
-
-import os
-import shutil
-import subprocess
-from getpass import getuser
-
-SRC_DIR = '../../'
-
-USER_README = 'docs/README.txt'
-DEV_README = SRC_DIR + 'README.txt'
-
-BUILD_DIR = 'build'
-DIST_DIR = 'dist'
-
-def remove_dirs():
- print('Removing old build and distribution directories...')
- print("""The distribution is built as root, so the files have the correct
- permissions when installed by the user. Chown them to user for removal.""")
- if os.path.exists(BUILD_DIR):
- cmd = 'sudo chown -R %s %s' % (getuser(), BUILD_DIR)
- shellcmd(cmd)
- shutil.rmtree(BUILD_DIR)
- if os.path.exists(DIST_DIR):
- cmd = 'sudo chown -R %s %s' % (getuser(), DIST_DIR)
- shellcmd(cmd)
- shutil.rmtree(DIST_DIR)
-
-def build_dist():
- print('Building distribution... (using sudo)')
- cmd = 'sudo python setupegg.py bdist_mpkg'
- shellcmd(cmd)
-
-def build_dmg():
- print('Building disk image...')
- # Since we removed the dist directory at the start of the script,
- # our pkg should be the only file there.
- pkg = os.listdir(DIST_DIR)[0]
- fn, ext = os.path.splitext(pkg)
- dmg = fn + '.dmg'
- srcfolder = os.path.join(DIST_DIR, pkg)
- dstfolder = os.path.join(DIST_DIR, dmg)
- # build disk image
- cmd = 'sudo hdiutil create -srcfolder %s %s' % (srcfolder, dstfolder)
- shellcmd(cmd)
-
-def copy_readme():
- """Copy a user README with info regarding the website, instead of
- the developer README which tells one how to build the source.
- """
- print('Copy user README.txt for installer.')
- shutil.copy(USER_README, DEV_README)
-
-def revert_readme():
- """Revert the developer README."""
- print('Reverting README.txt...')
- cmd = 'svn revert %s' % DEV_README
- shellcmd(cmd)
-
-def shellcmd(cmd, verbose=True):
- """Call a shell command."""
- if verbose:
- print(cmd)
- try:
- subprocess.check_call(cmd, shell=True)
- except subprocess.CalledProcessError as err:
- msg = """
- Error while executing a shell command.
- %s
- """ % str(err)
- raise Exception(msg)
-
-def build():
- # update end-user documentation
- copy_readme()
- shellcmd("svn stat %s"%DEV_README)
-
- # change to source directory
- cwd = os.getcwd()
- os.chdir(SRC_DIR)
-
- # build distribution
- remove_dirs()
- build_dist()
- build_dmg()
-
- # change back to original directory
- os.chdir(cwd)
- # restore developer documentation
- revert_readme()
-
-if __name__ == '__main__':
- build()
diff --git a/tools/osxbuild/docs/README.txt b/tools/osxbuild/docs/README.txt
deleted file mode 100644
index f7a8e2a37..000000000
--- a/tools/osxbuild/docs/README.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-NumPy is the fundamental package needed for scientific computing with Python.
-This package contains:
-
- * a powerful N-dimensional array object
- * sophisticated (broadcasting) functions
- * tools for integrating C/C++ and Fortran code
- * useful linear algebra, Fourier transform, and random number capabilities.
-
-It derives from the old Numeric code base and can be used as a replacement for Numeric. It also adds the features introduced by numarray and can be used to replace numarray.
-
-More information can be found at the website:
-
-http://www.numpy.org
-
-After installation, tests can be run with:
-
-python -c 'import numpy; numpy.test()'
-
-Starting in NumPy 1.7, deprecation warnings have been set to 'raise' by
-default, so the -Wd command-line option is no longer necessary.
-
-The most current development version is always available from our
-git repository:
-
-http://github.com/numpy/numpy
diff --git a/tools/osxbuild/install_and_test.py b/tools/osxbuild/install_and_test.py
deleted file mode 100644
index 0243724d9..000000000
--- a/tools/osxbuild/install_and_test.py
+++ /dev/null
@@ -1,52 +0,0 @@
-#!/usr/bin/env python
-"""Install the built package and run the tests.
-
-"""
-from __future__ import division, print_function
-
-import os
-
-# FIXME: Should handle relative import better!
-#from .build import DIST_DIR
-from build import SRC_DIR, DIST_DIR, shellcmd
-
-clrgreen = '\033[0;32m'
-clrnull = '\033[0m'
-# print '\033[0;32m foobar \033[0m'
-def color_print(msg):
- """Add color to this print output."""
- clrmsg = clrgreen + msg + clrnull
- print(clrmsg)
-
-distdir = os.path.join(SRC_DIR, DIST_DIR)
-
-# Find the package and build abspath to it
-pkg = None
-filelist = os.listdir(distdir)
-for fn in filelist:
- if fn.endswith('mpkg'):
- pkg = fn
- break
-if pkg is None:
- raise IOError('Package is not found in directory %s' % distdir)
-
-pkgpath = os.path.abspath(os.path.join(SRC_DIR, DIST_DIR, pkg))
-color_print('Installing package: %s' % pkgpath)
-
-# Run the installer
-print()
-color_print('Installer requires admin rights, you will be prompted for sudo')
-print()
-cmd = 'sudo installer -verbose -package %s -target /' % pkgpath
-#color_print(cmd)
-shellcmd(cmd)
-
-# Null out the PYTHONPATH so we're sure to test the Installed version of numpy
-os.environ['PYTHONPATH'] = '0'
-
-print()
-color_print('Install successful!')
-color_print('Running numpy test suite!')
-print()
-import numpy
-numpy.test()
diff --git a/tox.ini b/tox.ini
index fd52674dc..0fb63300b 100644
--- a/tox.ini
+++ b/tox.ini
@@ -13,7 +13,7 @@
# - Use pip to install the numpy sdist into the virtualenv
# - Run the numpy tests
# To run against a specific subset of Python versions, use:
-# tox -e py24,py27
+# tox -e py27
# Extra arguments will be passed to test-installed-numpy.py. To run
# the full testsuite:
@@ -22,27 +22,35 @@
# tox -- -v
# Tox assumes that you have appropriate Python interpreters already
-# installed and that they can be run as 'python2.4', 'python2.5', etc.
+# installed and that they can be run as 'python2.7', 'python3.3', etc.
[tox]
-envlist = py24,py25,py26,py27,py31,py32,py27-separate,py32-separate
+envlist = py26,py27,py32,py33,py27-monolithic,py33-monolithic,py27-relaxed-strides,py33-relaxed-strides
[testenv]
deps=
nose
changedir={envdir}
-commands=python {toxinidir}/tools/test-installed-numpy.py {posargs:}
+commands={envpython} {toxinidir}/tools/test-installed-numpy.py --mode=full {posargs:}
-[testenv:py27-separate]
+[testenv:py27-monolithic]
basepython=python2.7
-env=NPY_SEPARATE_COMPILATION=1
+env=NPY_SEPARATE_COMPILATION=0
-[testenv:py32-separate]
-basepython=python3.2
-env=NPY_SEPARATE_COMPILATION=1
+[testenv:py33-monolithic]
+basepython=python3.3
+env=NPY_SEPARATE_COMPILATION=0
+
+[testenv:py27-relaxed-strides]
+basepython=python2.7
+env=NPY_RELAXED_STRIDES_CHECKING=1
+
+[testenv:py33-relaxed-strides]
+basepython=python3.3
+env=NPY_RELAXED_STRIDES_CHECKING=1
# Not run by default. Set up the way you want then use 'tox -e debug'
# if you want it:
[testenv:debug]
-basepython=PYTHON-WITH-DEBUG-INFO
-commands=gdb --args {envpython} {toxinidir}/tools/test-installed-numpy.py {posargs:}
+basepython=python-dbg
+commands=gdb --args {envpython} {toxinidir}/tools/test-installed-numpy.py --mode=full {posargs:}