-rw-r--r--  doc/neps/missing-data.rst                               78
-rw-r--r--  doc/source/reference/c-api.array.rst                     8
-rw-r--r--  doc/source/reference/routines.array-manipulation.rst     7
-rw-r--r--  numpy/add_newdocs.py                                    48
-rw-r--r--  numpy/core/code_generators/numpy_api.py                  5
-rw-r--r--  numpy/core/include/numpy/ndarrayobject.h                50
-rw-r--r--  numpy/core/include/numpy/ndarraytypes.h                101
-rw-r--r--  numpy/core/include/numpy/npy_deprecated_api.h           11
-rw-r--r--  numpy/core/numeric.py                                   14
-rw-r--r--  numpy/core/src/multiarray/arrayobject.c                 10
-rw-r--r--  numpy/core/src/multiarray/calculation.c                 70
-rw-r--r--  numpy/core/src/multiarray/convert_datatype.c             4
-rw-r--r--  numpy/core/src/multiarray/ctors.c                      285
-rw-r--r--  numpy/core/src/multiarray/dtype_transfer.c             221
-rw-r--r--  numpy/core/src/multiarray/item_selection.c               5
-rw-r--r--  numpy/core/src/multiarray/methods.c                     10
-rw-r--r--  numpy/core/src/multiarray/multiarraymodule.c            93
-rw-r--r--  numpy/core/src/multiarray/nditer_api.c                   7
-rw-r--r--  numpy/core/src/multiarray/nditer_constr.c              317
-rw-r--r--  numpy/core/src/multiarray/nditer_impl.h                 10
-rw-r--r--  numpy/core/src/multiarray/nditer_pywrap.c               39
-rw-r--r--  numpy/core/src/multiarray/scalartypes.c.src              2
-rw-r--r--  numpy/core/src/multiarray/shape.c                       18
-rw-r--r--  numpy/core/src/private/lowlevel_strided_loops.h         42
-rw-r--r--  numpy/core/src/umath/ufunc_type_resolution.c            12
-rw-r--r--  numpy/core/tests/test_api.py                            36
-rw-r--r--  numpy/lib/tests/test_format.py                          24
27 files changed, 1215 insertions, 312 deletions
diff --git a/doc/neps/missing-data.rst b/doc/neps/missing-data.rst
index 478d019a5..7a2c076cb 100644
--- a/doc/neps/missing-data.rst
+++ b/doc/neps/missing-data.rst
@@ -591,6 +591,33 @@ cannot hold values, but will conform to the input types in functions like
maps to [('a', 'NA[f4]'), ('b', 'NA[i4]')]. Thus, to view the memory
of an 'f8' array 'arr' with 'NA[f8]', you can say arr.view(dtype='NA').
+Future Expansion to multi-NA Payloads
+=====================================
+
+The packages SAS and Stata both support multiple different "NA" values.
+This allows one to specify different reasons for why a value is missing,
+for example homework that wasn't done because the dog ate it or because
+the student was sick. In these packages, the different NA values have a
+linear ordering which specifies how they combine together.
+
+In the sections on C implementation details, the mask has been designed
+so that a mask with a payload is a strict superset of the NumPy boolean
+type, and the boolean type has a payload of just zero. Different payloads
+combine with the 'min' operation.
+
+The important part of future-proofing the design is making sure
+the C ABI-level choices and the Python API-level choices have a natural
+transition to multi-NA support. Here is one way multi-NA support could look::
+
+ >>> a = np.array([np.NA(1), 3, np.NA(2)], namasked='multi')
+ >>> np.sum(a)
+ NA(1)
+ >>> np.sum(a[1:])
+ NA(2)
+ >>> b = np.array([np.NA, 2, 5], namasked=True)
+ >>> a + b
+ array([NA(0), 5, NA(2)], namasked='multi')
+
PEP 3118
========
@@ -677,7 +704,7 @@ This gives us the following additions to the PyArrayObject::
* NPY_ARRAY_OWNNAMASK enabled, it owns this memory and
* must call PyArray_free on it when destroyed.
*/
- char *maskdata;
+ npy_uint8 *maskdata;
/*
* Just like dimensions and strides point into the same memory
* buffer, we now just make the buffer 3x the nd instead of 2x
@@ -708,6 +735,38 @@ PyArray_ContainsNA(PyArrayObject* obj)
true if the array has NA support AND there is an
NA anywhere in the array.
+Mask Binary Format
+==================
+
+The format of the mask itself is designed to indicate whether an
+element is masked or not, as well as contain a payload so that multiple
+different NAs with different payloads can be used in the future.
+Initially, we will simply use the payload 0.
+
+The mask has type npy_uint8, and bit 0 is used to indicate whether
+a value is masked. If ((m&0x01) == 0), the element is masked; otherwise
+it is unmasked. The rest of the bits are the payload, which is (m>>1).
+The convention for combining masks with payloads is that smaller
+payloads propagate. This design gives 128 payload values to masked elements,
+and 128 payload values to unmasked elements.
+
+The big benefit of this approach is that npy_bool also
+works as a mask, because it takes on the values 0 for False and 1
+for True. Additionally, the payload for npy_bool, which is always
+zero, dominates over all the other possible payloads.
+
+Since the design involves giving the mask its own dtype, we can
+distinguish between masking with a single NA value (npy_bool mask),
+and masking with multi-NA (npy_uint8 mask). Initial implementations
+will just support the npy_bool mask.
+
+An idea that was discarded is to make the combination of whole mask
+bytes (flag + payload) a single 'min' operation. This can be done by
+putting the payload in bits 0 through 6, so that the payload is (m&0x7f),
+and using bit 7 for the masking flag, so that ((m&0x80) == 0) means the
+element is masked. The fact that this makes masks completely different
+from booleans, instead of a strict superset, is the primary reason this
+choice was discarded.
+
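The byte layout and combination rule above can be sketched in a few lines of Python. The helper names are illustrative only, not part of the proposal; the rule shown (a result is exposed only when both operands are exposed, and the smallest payload among the masked operands propagates) reproduces the multi-NA example from the earlier section:

```python
def is_exposed(m):
    # Bit 0 is the exposure flag: (m & 0x01) == 0 means masked (NA).
    return (m & 0x01) != 0

def payload(m):
    # The remaining seven bits carry the payload.
    return m >> 1

def combine(m1, m2):
    # Exposed only when both operands are exposed; otherwise the
    # smallest payload among the masked operands propagates ('min').
    if is_exposed(m1) and is_exposed(m2):
        return 0x01                      # exposed, payload 0
    payloads = [payload(m) for m in (m1, m2) if not is_exposed(m)]
    return min(payloads) << 1            # bit 0 stays 0: masked

# npy_bool values work unchanged as masks:
# False (0) is NA with payload 0, True (1) is exposed.
print(payload(combine(1 << 1, 0 << 1)))   # NA(1) + NA(0) -> 0
print(payload(combine(2 << 1, 0x01)))     # NA(2) + exposed -> 2
```

The two printed payloads match the `a + b` example from the multi-NA section.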
********************************************
C Iterator API Changes: Iteration With Masks
********************************************
@@ -738,6 +797,10 @@ NPY_ITER_WRITEMASKED
to know the mask ahead of time, and copying everything into
the buffer will never destroy data.
+ The code using the iterator should only write to values that
+ are not masked by the specified mask; otherwise the result will
+ differ depending on whether buffering is enabled.
+
NPY_ITER_ARRAYMASK
Indicates that this array is a boolean mask to use when copying
any WRITEMASKED argument from a buffer back to the array. There
@@ -751,12 +814,13 @@ NPY_ITER_ARRAYMASK
into the NA bitpattern when copying from the buffer to the
array.
-NPY_ITER_VIRTUALMASK
- Indicates that the mask is not an array, but rather created on
- the fly by the inner iteration code. This allocates enough buffer
- space for the code to write the mask into, but does not have
- an actual array backing the data. There can only be one such
- mask, and there cannot also be an array mask.
+NPY_ITER_VIRTUAL
+ Indicates that this operand is not an array, but rather created on
+ the fly for the inner iteration code. This allocates enough buffer
+ space for the code to read/write data, but does not have
+ an actual array backing the data. When combined with NPY_ITER_ARRAYMASK,
+ allows for creating a "virtual mask", specifying which values
+ are unmasked without ever creating a full mask array.
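At the Python level, this operand pair eventually surfaces through `np.nditer`'s per-operand flags. A minimal sketch, assuming the `'writemasked'` and `'arraymask'` op_flags behave as described above:

```python
import numpy as np

a = np.arange(5.0)
mask = np.array([True, False, True, False, True])

# 'writemasked' is a promise that we only write where the single
# 'arraymask' operand is True; with buffering enabled, only those
# elements are copied from the buffer back into the array.
it = np.nditer([a, mask],
               op_flags=[['readwrite', 'writemasked'],
                         ['readonly', 'arraymask']])
with it:
    for x, m in it:
        if m:
            x[...] = x * 10

print(a)        # [ 0.  1. 20.  3. 40.]
```

Only the elements selected by the mask are modified; the unmasked-out elements keep their original values.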
Iterator NA-array Features
==========================
diff --git a/doc/source/reference/c-api.array.rst b/doc/source/reference/c-api.array.rst
index 02aa03ff0..51802c436 100644
--- a/doc/source/reference/c-api.array.rst
+++ b/doc/source/reference/c-api.array.rst
@@ -475,6 +475,10 @@ From other objects
indicates that the array should be aligned in the sense that the
strides are multiples of the element size.
+ In versions 1.6 and earlier of NumPy, the following flags
+ did not have the _ARRAY_ macro namespace in them. That form
+ of the constant names is deprecated in 1.7.
+
.. cvar:: NPY_ARRAY_NOTSWAPPED
Make sure the returned array has a data-type descriptor that is in
@@ -1210,6 +1214,10 @@ might not be writeable. It might be in Fortran-contiguous order. The
array flags are used to indicate what can be said about data
associated with an array.
+In versions 1.6 and earlier of NumPy, the following flags
+did not have the _ARRAY_ macro namespace in them. That form
+of the constant names is deprecated in 1.7.
+
.. cvar:: NPY_ARRAY_C_CONTIGUOUS
The data area is in C-style contiguous order (last index varies the
diff --git a/doc/source/reference/routines.array-manipulation.rst b/doc/source/reference/routines.array-manipulation.rst
index 2c1a5b200..ca97bb479 100644
--- a/doc/source/reference/routines.array-manipulation.rst
+++ b/doc/source/reference/routines.array-manipulation.rst
@@ -3,6 +3,13 @@ Array manipulation routines
.. currentmodule:: numpy
+Basic operations
+================
+.. autosummary::
+ :toctree: generated/
+
+ copyto
+
Changing array shape
====================
.. autosummary::
diff --git a/numpy/add_newdocs.py b/numpy/add_newdocs.py
index 2f9071378..1e1d237a4 100644
--- a/numpy/add_newdocs.py
+++ b/numpy/add_newdocs.py
@@ -3184,6 +3184,10 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('copy',
order. If order is 'A' ('Any'), then the result has the same order
as the input.
+ See also
+ --------
+ numpy.copyto
+
Examples
--------
>>> x = np.array([[1,2,3],[4,5,6]], order='F')
@@ -3690,11 +3694,45 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('put',
"""))
+add_newdoc('numpy.core.multiarray', 'copyto',
+ """
+ copyto(dst, src, casting='same_kind', where=None)
+
+ Copies values from `src` into `dst`, broadcasting as necessary.
+ Raises a TypeError if the casting rule is violated. If `where`
+ is provided, only the elements it selects are copied.
+
+ .. versionadded:: 1.7.0
+
+ Parameters
+ ----------
+ dst : ndarray
+ The array into which values are copied.
+ src : array_like
+ The array from which values are copied.
+ casting : {'no', 'equiv', 'safe', 'same_kind', 'unsafe'}, optional
+ Controls what kind of data casting may occur when copying.
+
+ * 'no' means the data types should not be cast at all.
+ * 'equiv' means only byte-order changes are allowed.
+ * 'safe' means only casts which can preserve values are allowed.
+ * 'same_kind' means only safe casts or casts within a kind,
+ like float64 to float32, are allowed.
+ * 'unsafe' means any data conversions may be done.
+ where : array_like of bool
+ A boolean array which is broadcasted to match the dimensions
+ of `dst`, and selects elements to copy from `src` to `dst`
+ wherever it contains the value True.
+
+ """)
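A short usage sketch of the docstring above: the `where` mask selects destination elements, and the default `'same_kind'` casting rejects value-changing conversions:

```python
import numpy as np

dst = np.zeros(5)
src = np.arange(5.0)
mask = np.array([True, False, True, False, True])

# Only elements where `mask` is True are copied from src to dst.
np.copyto(dst, src, where=mask)
print(dst)                        # [0. 0. 2. 0. 4.]

# float64 -> int64 is not a 'same_kind' cast, so this raises TypeError:
try:
    np.copyto(np.zeros(3, dtype=np.int64), src[:3])
except TypeError:
    print("cast refused")
```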
add_newdoc('numpy.core.multiarray', 'putmask',
"""
putmask(a, mask, values)
+ This function is deprecated as of NumPy 1.7. Use
+ ``np.copyto(a, values, where=mask)`` instead.
+
Changes elements of an array based on conditional and input values.
Sets ``a.flat[n] = values[n]`` for each n where ``mask.flat[n]==True``.
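The replacement suggested above can be checked directly. One caveat worth noting: `putmask` cycles through `values` when it is shorter than `a`, while `copyto` broadcasts `src`, so the two agree for scalars and broadcast-compatible arrays:

```python
import numpy as np

a = np.array([1, 2, 3, 4])
mask = np.array([True, False, True, False])

np.copyto(a, 0, where=mask)       # replaces np.putmask(a, mask, 0)
print(a)                          # [0 2 0 4]
```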
@@ -3714,7 +3752,7 @@ add_newdoc('numpy.core.multiarray', 'putmask',
See Also
--------
- place, put, take
+ place, put, take, copyto
Examples
--------
@@ -5959,6 +5997,8 @@ add_newdoc('numpy.core.multiarray', 'busdaycalendar',
information defining business days for the business
day-related functions.
+ .. versionadded:: 1.7.0
+
Parameters
----------
weekmask : str or array_like of bool, optional
@@ -6018,6 +6058,8 @@ add_newdoc('numpy.core.multiarray', 'is_busday',
Calculates which of the given dates are valid business days, and
which are not.
+ .. versionadded:: 1.7.0
+
Parameters
----------
dates : array_like of datetime64[D]
@@ -6070,6 +6112,8 @@ add_newdoc('numpy.core.multiarray', 'busday_offset',
the ``roll`` rule, then applies offsets to the given dates
counted in business days.
+ .. versionadded:: 1.7.0
+
Parameters
----------
dates : array_like of datetime64[D]
@@ -6158,6 +6202,8 @@ add_newdoc('numpy.core.multiarray', 'busday_count',
Counts the number of business days between `begindates` and
`enddates`, not including the day of `enddates`.
+ .. versionadded:: 1.7.0
+
Parameters
----------
begindates : array_like of datetime64[D]
diff --git a/numpy/core/code_generators/numpy_api.py b/numpy/core/code_generators/numpy_api.py
index b8cde7b3b..d83371319 100644
--- a/numpy/core/code_generators/numpy_api.py
+++ b/numpy/core/code_generators/numpy_api.py
@@ -256,7 +256,7 @@ multiarray_funcs_api = {
'PyArray_TimedeltaToTimedeltaStruct': 221,
'PyArray_DatetimeStructToDatetime': 222,
'PyArray_TimedeltaStructToTimedelta': 223,
- # New Iterator API
+ # NDIter API
'NpyIter_New': 224,
'NpyIter_MultiNew': 225,
'NpyIter_AdvancedNew': 226,
@@ -315,6 +315,9 @@ multiarray_funcs_api = {
'PyArray_GetArrayParamsFromObject': 278,
'PyArray_ConvertClipmodeSequence': 279,
'PyArray_MatrixProduct2': 280,
+ # End 1.6 API
+ 'PyArray_MaskedCopyInto': 281,
+ 'PyArray_MaskedMoveInto': 282,
}
ufunc_types_api = {
diff --git a/numpy/core/include/numpy/ndarrayobject.h b/numpy/core/include/numpy/ndarrayobject.h
index c7b51346b..04aa8d0e3 100644
--- a/numpy/core/include/numpy/ndarrayobject.h
+++ b/numpy/core/include/numpy/ndarrayobject.h
@@ -98,41 +98,41 @@ extern "C" CONFUSE_EMACS
(((flags) & NPY_ARRAY_ENSURECOPY) ? \
(flags) | NPY_ARRAY_DEFAULT : (flags)), NULL)
-#define PyArray_ZEROS(m, dims, type, fortran) \
- PyArray_Zeros(m, dims, PyArray_DescrFromType(type), fortran)
+#define PyArray_ZEROS(m, dims, type, is_f_order) \
+ PyArray_Zeros(m, dims, PyArray_DescrFromType(type), is_f_order)
-#define PyArray_EMPTY(m, dims, type, fortran) \
- PyArray_Empty(m, dims, PyArray_DescrFromType(type), fortran)
+#define PyArray_EMPTY(m, dims, type, is_f_order) \
+ PyArray_Empty(m, dims, PyArray_DescrFromType(type), is_f_order)
-#define PyArray_FILLWBYTE(obj, val) memset(PyArray_DATA(obj), val, \
+#define PyArray_FILLWBYTE(obj, val) memset(PyArray_DATA(obj), val, \
PyArray_NBYTES(obj))
#define PyArray_REFCOUNT(obj) (((PyObject *)(obj))->ob_refcnt)
#define NPY_REFCOUNT PyArray_REFCOUNT
#define NPY_MAX_ELSIZE (2 * NPY_SIZEOF_LONGDOUBLE)
-#define PyArray_ContiguousFromAny(op, type, min_depth, max_depth) \
- PyArray_FromAny(op, PyArray_DescrFromType(type), min_depth, \
+#define PyArray_ContiguousFromAny(op, type, min_depth, max_depth) \
+ PyArray_FromAny(op, PyArray_DescrFromType(type), min_depth, \
max_depth, NPY_ARRAY_DEFAULT, NULL)
-#define PyArray_EquivArrTypes(a1, a2) \
+#define PyArray_EquivArrTypes(a1, a2) \
PyArray_EquivTypes(PyArray_DESCR(a1), PyArray_DESCR(a2))
-#define PyArray_EquivByteorders(b1, b2) \
+#define PyArray_EquivByteorders(b1, b2) \
(((b1) == (b2)) || (PyArray_ISNBO(b1) == PyArray_ISNBO(b2)))
-#define PyArray_SimpleNew(nd, dims, typenum) \
+#define PyArray_SimpleNew(nd, dims, typenum) \
PyArray_New(&PyArray_Type, nd, dims, typenum, NULL, NULL, 0, 0, NULL)
-#define PyArray_SimpleNewFromData(nd, dims, typenum, data) \
- PyArray_New(&PyArray_Type, nd, dims, typenum, NULL, \
+#define PyArray_SimpleNewFromData(nd, dims, typenum, data) \
+ PyArray_New(&PyArray_Type, nd, dims, typenum, NULL, \
data, 0, NPY_ARRAY_CARRAY, NULL)
-#define PyArray_SimpleNewFromDescr(nd, dims, descr) \
- PyArray_NewFromDescr(&PyArray_Type, descr, nd, dims, \
+#define PyArray_SimpleNewFromDescr(nd, dims, descr) \
+ PyArray_NewFromDescr(&PyArray_Type, descr, nd, dims, \
NULL, NULL, 0, NULL)
-#define PyArray_ToScalar(data, arr) \
+#define PyArray_ToScalar(data, arr) \
PyArray_Scalar(data, PyArray_DESCR(arr), (PyObject *)arr)
@@ -141,22 +141,22 @@ extern "C" CONFUSE_EMACS
inline the constants inside a for loop making it a moot point
*/
-#define PyArray_GETPTR1(obj, i) ((void *)(PyArray_BYTES(obj) + \
+#define PyArray_GETPTR1(obj, i) ((void *)(PyArray_BYTES(obj) + \
(i)*PyArray_STRIDES(obj)[0]))
-#define PyArray_GETPTR2(obj, i, j) ((void *)(PyArray_BYTES(obj) + \
- (i)*PyArray_STRIDES(obj)[0] + \
+#define PyArray_GETPTR2(obj, i, j) ((void *)(PyArray_BYTES(obj) + \
+ (i)*PyArray_STRIDES(obj)[0] + \
(j)*PyArray_STRIDES(obj)[1]))
-#define PyArray_GETPTR3(obj, i, j, k) ((void *)(PyArray_BYTES(obj) + \
- (i)*PyArray_STRIDES(obj)[0] + \
- (j)*PyArray_STRIDES(obj)[1] + \
+#define PyArray_GETPTR3(obj, i, j, k) ((void *)(PyArray_BYTES(obj) + \
+ (i)*PyArray_STRIDES(obj)[0] + \
+ (j)*PyArray_STRIDES(obj)[1] + \
(k)*PyArray_STRIDES(obj)[2]))
-#define PyArray_GETPTR4(obj, i, j, k, l) ((void *)(PyArray_BYTES(obj) + \
- (i)*PyArray_STRIDES(obj)[0] + \
- (j)*PyArray_STRIDES(obj)[1] + \
- (k)*PyArray_STRIDES(obj)[2] + \
+#define PyArray_GETPTR4(obj, i, j, k, l) ((void *)(PyArray_BYTES(obj) + \
+ (i)*PyArray_STRIDES(obj)[0] + \
+ (j)*PyArray_STRIDES(obj)[1] + \
+ (k)*PyArray_STRIDES(obj)[2] + \
(l)*PyArray_STRIDES(obj)[3]))
#define PyArray_XDECREF_ERR(obj) \
diff --git a/numpy/core/include/numpy/ndarraytypes.h b/numpy/core/include/numpy/ndarraytypes.h
index bf8af1661..66aa15820 100644
--- a/numpy/core/include/numpy/ndarraytypes.h
+++ b/numpy/core/include/numpy/ndarraytypes.h
@@ -104,49 +104,52 @@ enum NPY_TYPES { NPY_BOOL=0,
* module
*/
-/* except 'p' -- signed integer for pointer type */
-
-enum NPY_TYPECHAR { NPY_BOOLLTR = '?',
- NPY_BYTELTR = 'b',
- NPY_UBYTELTR = 'B',
- NPY_SHORTLTR = 'h',
- NPY_USHORTLTR = 'H',
- NPY_INTLTR = 'i',
- NPY_UINTLTR = 'I',
- NPY_LONGLTR = 'l',
- NPY_ULONGLTR = 'L',
- NPY_LONGLONGLTR = 'q',
- NPY_ULONGLONGLTR = 'Q',
- NPY_HALFLTR = 'e',
- NPY_FLOATLTR = 'f',
- NPY_DOUBLELTR = 'd',
- NPY_LONGDOUBLELTR = 'g',
- NPY_CFLOATLTR = 'F',
- NPY_CDOUBLELTR = 'D',
- NPY_CLONGDOUBLELTR = 'G',
- NPY_OBJECTLTR = 'O',
- NPY_STRINGLTR = 'S',
- NPY_STRINGLTR2 = 'a',
- NPY_UNICODELTR = 'U',
- NPY_VOIDLTR = 'V',
- NPY_DATETIMELTR = 'M',
- NPY_TIMEDELTALTR = 'm',
- NPY_CHARLTR = 'c',
-
- /*
- * No Descriptor, just a define -- this let's
- * Python users specify an array of integers
- * large enough to hold a pointer on the
- * platform
- */
- NPY_INTPLTR = 'p',
- NPY_UINTPLTR = 'P',
-
- NPY_GENBOOLLTR ='b',
- NPY_SIGNEDLTR = 'i',
- NPY_UNSIGNEDLTR = 'u',
- NPY_FLOATINGLTR = 'f',
- NPY_COMPLEXLTR = 'c'
+enum NPY_TYPECHAR {
+ NPY_BOOLLTR = '?',
+ NPY_BYTELTR = 'b',
+ NPY_UBYTELTR = 'B',
+ NPY_SHORTLTR = 'h',
+ NPY_USHORTLTR = 'H',
+ NPY_INTLTR = 'i',
+ NPY_UINTLTR = 'I',
+ NPY_LONGLTR = 'l',
+ NPY_ULONGLTR = 'L',
+ NPY_LONGLONGLTR = 'q',
+ NPY_ULONGLONGLTR = 'Q',
+ NPY_HALFLTR = 'e',
+ NPY_FLOATLTR = 'f',
+ NPY_DOUBLELTR = 'd',
+ NPY_LONGDOUBLELTR = 'g',
+ NPY_CFLOATLTR = 'F',
+ NPY_CDOUBLELTR = 'D',
+ NPY_CLONGDOUBLELTR = 'G',
+ NPY_OBJECTLTR = 'O',
+ NPY_STRINGLTR = 'S',
+ NPY_STRINGLTR2 = 'a',
+ NPY_UNICODELTR = 'U',
+ NPY_VOIDLTR = 'V',
+ NPY_DATETIMELTR = 'M',
+ NPY_TIMEDELTALTR = 'm',
+ NPY_CHARLTR = 'c',
+
+ /*
+ * No Descriptor, just a define -- this lets
+ * Python users specify an array of integers
+ * large enough to hold a pointer on the
+ * platform
+ */
+ NPY_INTPLTR = 'p',
+ NPY_UINTPLTR = 'P',
+
+ /*
+ * These are for dtype 'kinds', not dtype 'typecodes'
+ * as the above are for.
+ */
+ NPY_GENBOOLLTR ='b',
+ NPY_SIGNEDLTR = 'i',
+ NPY_UNSIGNEDLTR = 'u',
+ NPY_FLOATINGLTR = 'f',
+ NPY_COMPLEXLTR = 'c'
};
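The comment added above distinguishes typecodes from kinds; the distinction is visible from Python through the dtype object, whose `.char` attribute maps to the typecode letters and `.kind` to the kind letters:

```python
import numpy as np

d = np.dtype(np.float64)
print(d.char, d.kind)                 # d f  (NPY_DOUBLELTR, NPY_FLOATINGLTR)

# Different typecodes can share one kind:
print(np.dtype(np.float32).kind)      # f
print(np.dtype(np.int16).kind)        # i
```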
typedef enum {
@@ -210,7 +213,7 @@ typedef enum {
/* The special not-a-time (NaT) value */
#define NPY_DATETIME_NAT NPY_MIN_INT64
/*
- * Theoretical maximum length of a DATETIME ISO 8601 string
+ * Upper bound on the length of a DATETIME ISO 8601 string
* YEAR: 21 (64-bit year)
* MONTH: 3
* DAY: 3
@@ -604,10 +607,6 @@ typedef struct PyArrayObject {
PyObject *weakreflist; /* For weakreferences */
} PyArrayObject;
-#define NPY_AO PyArrayObject
-
-#define fortran fortran_ /* For some compilers */
-
/* Array Flags Object */
typedef struct PyArrayFlagsObject {
PyObject_HEAD
@@ -890,8 +889,14 @@ typedef void (NpyIter_GetMultiIndexFunc)(NpyIter *iter,
#define NPY_ITER_ALLOCATE 0x01000000
/* If an operand is allocated, don't use any subtype */
#define NPY_ITER_NO_SUBTYPE 0x02000000
+/* This is a virtual array slot, operand is NULL but temporary data is there */
+#define NPY_ITER_VIRTUAL 0x04000000
/* Require that the dimension match the iterator dimensions exactly */
#define NPY_ITER_NO_BROADCAST 0x08000000
+/* A mask is being used on this array, affects buffer -> array copy */
+#define NPY_ITER_WRITEMASKED 0x10000000
+/* This array is the mask for all WRITEMASKED operands */
+#define NPY_ITER_ARRAYMASK 0x20000000
#define NPY_ITER_GLOBAL_FLAGS 0x0000ffff
#define NPY_ITER_PER_OP_FLAGS 0xffff0000
diff --git a/numpy/core/include/numpy/npy_deprecated_api.h b/numpy/core/include/numpy/npy_deprecated_api.h
index d00af8217..413d24d4e 100644
--- a/numpy/core/include/numpy/npy_deprecated_api.h
+++ b/numpy/core/include/numpy/npy_deprecated_api.h
@@ -77,5 +77,16 @@
PyDict_GetItemString(descr->metadata, NPY_METADATA_DTSTR)))))
#endif
+/*
+ * Deprecated as of NumPy 1.7, this kind of shortcut doesn't
+ * belong in the public API.
+ */
+#define NPY_AO PyArrayObject
+
+/*
+ * Deprecated as of NumPy 1.7, an all-lowercase macro doesn't
+ * belong in the public API.
+ */
+#define fortran fortran_
#endif
diff --git a/numpy/core/numeric.py b/numpy/core/numeric.py
index 67235e4d1..80db6573e 100644
--- a/numpy/core/numeric.py
+++ b/numpy/core/numeric.py
@@ -1,7 +1,7 @@
__all__ = ['newaxis', 'ndarray', 'flatiter', 'nditer', 'nested_iters', 'ufunc',
'arange', 'array', 'zeros', 'count_nonzero', 'empty', 'broadcast',
'dtype', 'fromstring', 'fromfile', 'frombuffer',
- 'int_asbuffer', 'where', 'argwhere',
+ 'int_asbuffer', 'where', 'argwhere', 'copyto',
'concatenate', 'fastCopyAndTranspose', 'lexsort', 'set_numeric_ops',
'can_cast', 'promote_types', 'min_scalar_type', 'result_type',
'asarray', 'asanyarray', 'ascontiguousarray', 'asfortranarray',
@@ -58,6 +58,7 @@ nditer = multiarray.nditer
nested_iters = multiarray.nested_iters
broadcast = multiarray.broadcast
dtype = multiarray.dtype
+copyto = multiarray.copyto
ufunc = type(sin)
@@ -113,7 +114,7 @@ def zeros_like(a, dtype=None, order='K', subok=True):
"""
res = empty_like(a, dtype=dtype, order=order, subok=subok)
- res.fill(0)
+ multiarray.copyto(res, 0, casting='unsafe')
return res
# end Fernando's utilities
@@ -1817,14 +1818,7 @@ def ones(shape, dtype=None, order='C'):
"""
a = empty(shape, dtype, order)
- try:
- a.fill(1)
- # Above is faster now after addition of fast loops.
- #a = zeros(shape, dtype, order)
- #a+=1
- except TypeError:
- obj = _maketup(dtype, 1)
- a.fill(obj)
+ multiarray.copyto(a, 1, casting='unsafe')
return a
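The new `ones` body amounts to the following sketch (`ones_sketch` is an illustrative name, not NumPy API); `casting='unsafe'` lets the scalar 1 fill any destination dtype, replacing the removed try/except fallback:

```python
import numpy as np

def ones_sketch(shape, dtype=float, order='C'):
    # Allocate uninitialized memory, then fill it with 1 via copyto;
    # 'unsafe' casting lets the scalar fill any destination dtype.
    a = np.empty(shape, dtype, order)
    np.copyto(a, 1, casting='unsafe')
    return a

print(ones_sketch((2, 2), np.int32))
```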
def identity(n, dtype=None):
diff --git a/numpy/core/src/multiarray/arrayobject.c b/numpy/core/src/multiarray/arrayobject.c
index 9ef88b815..a1dc4399a 100644
--- a/numpy/core/src/multiarray/arrayobject.c
+++ b/numpy/core/src/multiarray/arrayobject.c
@@ -1216,8 +1216,8 @@ array_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds)
PyArray_Dims strides = {NULL, 0};
PyArray_Chunk buffer;
longlong offset = 0;
- NPY_ORDER order = PyArray_CORDER;
- int fortran = 0;
+ NPY_ORDER order = NPY_CORDER;
+ int is_f_order = 0;
PyArrayObject *ret;
buffer.ptr = NULL;
@@ -1241,7 +1241,7 @@ array_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds)
goto fail;
}
if (order == NPY_FORTRANORDER) {
- fortran = 1;
+ is_f_order = 1;
}
if (descr == NULL) {
descr = PyArray_DescrFromType(NPY_DEFAULT_TYPE);
@@ -1289,7 +1289,7 @@ array_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds)
PyArray_NewFromDescr(subtype, descr,
(int)dims.len,
dims.ptr,
- strides.ptr, NULL, fortran, NULL);
+ strides.ptr, NULL, is_f_order, NULL);
if (ret == NULL) {
descr = NULL;
goto fail;
@@ -1318,7 +1318,7 @@ array_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds)
goto fail;
}
/* get writeable and aligned */
- if (fortran) {
+ if (is_f_order) {
buffer.flags |= NPY_ARRAY_F_CONTIGUOUS;
}
ret = (PyArrayObject *)\
diff --git a/numpy/core/src/multiarray/calculation.c b/numpy/core/src/multiarray/calculation.c
index 4e8e0da19..ddb68953d 100644
--- a/numpy/core/src/multiarray/calculation.c
+++ b/numpy/core/src/multiarray/calculation.c
@@ -16,10 +16,6 @@
#include "calculation.h"
-/* FIXME: just remove _check_axis ? */
-#define _check_axis PyArray_CheckAxis
-#define PyAO PyArrayObject
-
static double
power_of_ten(int n)
{
@@ -52,7 +48,7 @@ PyArray_ArgMax(PyArrayObject *op, int axis, PyArrayObject *out)
int copyret = 0;
NPY_BEGIN_THREADS_DEF;
- if ((ap=(PyAO *)_check_axis(op, &axis, 0)) == NULL) {
+ if ((ap=(PyArrayObject *)PyArray_CheckAxis(op, &axis, 0)) == NULL) {
return NULL;
}
/*
@@ -69,7 +65,7 @@ PyArray_ArgMax(PyArrayObject *op, int axis, PyArrayObject *out)
for (i = 0; i < axis; i++) dims[i] = i;
for (i = axis; i < ap->nd - 1; i++) dims[i] = i + 1;
dims[ap->nd - 1] = axis;
- op = (PyAO *)PyArray_Transpose(ap, &newaxes);
+ op = (PyArrayObject *)PyArray_Transpose(ap, &newaxes);
Py_DECREF(ap);
if (op == NULL) {
return NULL;
@@ -194,7 +190,7 @@ PyArray_Max(PyArrayObject *ap, int axis, PyArrayObject *out)
PyArrayObject *arr;
PyObject *ret;
- if ((arr=(PyArrayObject *)_check_axis(ap, &axis, 0)) == NULL) {
+ if ((arr=(PyArrayObject *)PyArray_CheckAxis(ap, &axis, 0)) == NULL) {
return NULL;
}
ret = PyArray_GenericReduceFunction(arr, n_ops.maximum, axis,
@@ -212,7 +208,7 @@ PyArray_Min(PyArrayObject *ap, int axis, PyArrayObject *out)
PyArrayObject *arr;
PyObject *ret;
- if ((arr=(PyArrayObject *)_check_axis(ap, &axis, 0)) == NULL) {
+ if ((arr=(PyArrayObject *)PyArray_CheckAxis(ap, &axis, 0)) == NULL) {
return NULL;
}
ret = PyArray_GenericReduceFunction(arr, n_ops.minimum, axis,
@@ -231,7 +227,7 @@ PyArray_Ptp(PyArrayObject *ap, int axis, PyArrayObject *out)
PyObject *ret;
PyObject *obj1 = NULL, *obj2 = NULL;
- if ((arr=(PyArrayObject *)_check_axis(ap, &axis, 0)) == NULL) {
+ if ((arr=(PyArrayObject *)PyArray_CheckAxis(ap, &axis, 0)) == NULL) {
return NULL;
}
obj1 = PyArray_Max(arr, axis, out);
@@ -282,11 +278,11 @@ __New_PyArray_Std(PyArrayObject *self, int axis, int rtype, PyArrayObject *out,
int i, n;
intp val;
- if ((new = _check_axis(self, &axis, 0)) == NULL) {
+ if ((new = PyArray_CheckAxis(self, &axis, 0)) == NULL) {
return NULL;
}
/* Compute and reshape mean */
- obj1 = PyArray_EnsureAnyArray(PyArray_Mean((PyAO *)new, axis, rtype, NULL));
+ obj1 = PyArray_EnsureAnyArray(PyArray_Mean((PyArrayObject *)new, axis, rtype, NULL));
if (obj1 == NULL) {
Py_DECREF(new);
return NULL;
@@ -307,7 +303,7 @@ __New_PyArray_Std(PyArrayObject *self, int axis, int rtype, PyArrayObject *out,
}
PyTuple_SET_ITEM(newshape, i, PyInt_FromLong((long)val));
}
- obj2 = PyArray_Reshape((PyAO *)obj1, newshape);
+ obj2 = PyArray_Reshape((PyArrayObject *)obj1, newshape);
Py_DECREF(obj1);
Py_DECREF(newshape);
if (obj2 == NULL) {
@@ -324,7 +320,7 @@ __New_PyArray_Std(PyArrayObject *self, int axis, int rtype, PyArrayObject *out,
}
/* Compute x * x */
if (PyArray_ISCOMPLEX(obj1)) {
- obj3 = PyArray_Conjugate((PyAO *)obj1, NULL);
+ obj3 = PyArray_Conjugate((PyArrayObject *)obj1, NULL);
}
else {
obj3 = obj1;
@@ -335,7 +331,7 @@ __New_PyArray_Std(PyArrayObject *self, int axis, int rtype, PyArrayObject *out,
return NULL;
}
obj2 = PyArray_EnsureAnyArray \
- (PyArray_GenericBinaryFunction((PyAO *)obj1, obj3, n_ops.multiply));
+ (PyArray_GenericBinaryFunction((PyArrayObject *)obj1, obj3, n_ops.multiply));
Py_DECREF(obj1);
Py_DECREF(obj3);
if (obj2 == NULL) {
@@ -365,7 +361,7 @@ __New_PyArray_Std(PyArrayObject *self, int axis, int rtype, PyArrayObject *out,
return NULL;
}
/* Compute add.reduce(x*x,axis) */
- obj1 = PyArray_GenericReduceFunction((PyAO *)obj3, n_ops.add,
+ obj1 = PyArray_GenericReduceFunction((PyArrayObject *)obj3, n_ops.add,
axis, rtype, NULL);
Py_DECREF(obj3);
Py_DECREF(obj2);
@@ -391,7 +387,7 @@ __New_PyArray_Std(PyArrayObject *self, int axis, int rtype, PyArrayObject *out,
if (!variance) {
obj1 = PyArray_EnsureAnyArray(ret);
/* sqrt() */
- ret = PyArray_GenericUnaryFunction((PyAO *)obj1, n_ops.sqrt);
+ ret = PyArray_GenericUnaryFunction((PyArrayObject *)obj1, n_ops.sqrt);
Py_DECREF(obj1);
}
if (ret == NULL) {
@@ -407,7 +403,7 @@ __New_PyArray_Std(PyArrayObject *self, int axis, int rtype, PyArrayObject *out,
if (obj1 == NULL) {
return NULL;
}
- ret = PyArray_View((PyAO *)obj1, NULL, Py_TYPE(self));
+ ret = PyArray_View((PyArrayObject *)obj1, NULL, Py_TYPE(self));
Py_DECREF(obj1);
finish:
@@ -432,10 +428,10 @@ PyArray_Sum(PyArrayObject *self, int axis, int rtype, PyArrayObject *out)
{
PyObject *new, *ret;
- if ((new = _check_axis(self, &axis, 0)) == NULL) {
+ if ((new = PyArray_CheckAxis(self, &axis, 0)) == NULL) {
return NULL;
}
- ret = PyArray_GenericReduceFunction((PyAO *)new, n_ops.add, axis,
+ ret = PyArray_GenericReduceFunction((PyArrayObject *)new, n_ops.add, axis,
rtype, out);
Py_DECREF(new);
return ret;
@@ -449,10 +445,10 @@ PyArray_Prod(PyArrayObject *self, int axis, int rtype, PyArrayObject *out)
{
PyObject *new, *ret;
- if ((new = _check_axis(self, &axis, 0)) == NULL) {
+ if ((new = PyArray_CheckAxis(self, &axis, 0)) == NULL) {
return NULL;
}
- ret = PyArray_GenericReduceFunction((PyAO *)new, n_ops.multiply, axis,
+ ret = PyArray_GenericReduceFunction((PyArrayObject *)new, n_ops.multiply, axis,
rtype, out);
Py_DECREF(new);
return ret;
@@ -466,10 +462,10 @@ PyArray_CumSum(PyArrayObject *self, int axis, int rtype, PyArrayObject *out)
{
PyObject *new, *ret;
- if ((new = _check_axis(self, &axis, 0)) == NULL) {
+ if ((new = PyArray_CheckAxis(self, &axis, 0)) == NULL) {
return NULL;
}
- ret = PyArray_GenericAccumulateFunction((PyAO *)new, n_ops.add, axis,
+ ret = PyArray_GenericAccumulateFunction((PyArrayObject *)new, n_ops.add, axis,
rtype, out);
Py_DECREF(new);
return ret;
@@ -483,11 +479,11 @@ PyArray_CumProd(PyArrayObject *self, int axis, int rtype, PyArrayObject *out)
{
PyObject *new, *ret;
- if ((new = _check_axis(self, &axis, 0)) == NULL) {
+ if ((new = PyArray_CheckAxis(self, &axis, 0)) == NULL) {
return NULL;
}
- ret = PyArray_GenericAccumulateFunction((PyAO *)new,
+ ret = PyArray_GenericAccumulateFunction((PyArrayObject *)new,
n_ops.multiply, axis,
rtype, out);
Py_DECREF(new);
@@ -662,10 +658,10 @@ PyArray_Mean(PyArrayObject *self, int axis, int rtype, PyArrayObject *out)
PyObject *obj1 = NULL, *obj2 = NULL;
PyObject *new, *ret;
- if ((new = _check_axis(self, &axis, 0)) == NULL) {
+ if ((new = PyArray_CheckAxis(self, &axis, 0)) == NULL) {
return NULL;
}
- obj1 = PyArray_GenericReduceFunction((PyAO *)new, n_ops.add, axis,
+ obj1 = PyArray_GenericReduceFunction((PyArrayObject *)new, n_ops.add, axis,
rtype, out);
obj2 = PyFloat_FromDouble((double) PyArray_DIM(new,axis));
Py_DECREF(new);
@@ -697,10 +693,10 @@ PyArray_Any(PyArrayObject *self, int axis, PyArrayObject *out)
{
PyObject *new, *ret;
- if ((new = _check_axis(self, &axis, 0)) == NULL) {
+ if ((new = PyArray_CheckAxis(self, &axis, 0)) == NULL) {
return NULL;
}
- ret = PyArray_GenericReduceFunction((PyAO *)new,
+ ret = PyArray_GenericReduceFunction((PyArrayObject *)new,
n_ops.logical_or, axis,
PyArray_BOOL, out);
Py_DECREF(new);
@@ -715,10 +711,10 @@ PyArray_All(PyArrayObject *self, int axis, PyArrayObject *out)
{
PyObject *new, *ret;
- if ((new = _check_axis(self, &axis, 0)) == NULL) {
+ if ((new = PyArray_CheckAxis(self, &axis, 0)) == NULL) {
return NULL;
}
- ret = PyArray_GenericReduceFunction((PyAO *)new,
+ ret = PyArray_GenericReduceFunction((PyArrayObject *)new,
n_ops.logical_and, axis,
PyArray_BOOL, out);
Py_DECREF(new);
@@ -873,7 +869,7 @@ PyArray_Clip(PyArrayObject *self, PyObject *min, PyObject *max, PyArrayObject *o
/* Convert max to an array */
if (max != NULL) {
- maxa = (NPY_AO *)PyArray_FromAny(max, indescr, 0, 0,
+ maxa = (PyArrayObject *)PyArray_FromAny(max, indescr, 0, 0,
NPY_ARRAY_DEFAULT, NULL);
if (maxa == NULL) {
return NULL;
@@ -915,7 +911,7 @@ PyArray_Clip(PyArrayObject *self, PyObject *min, PyObject *max, PyArrayObject *o
/* Convert min to an array */
Py_INCREF(indescr);
- mina = (NPY_AO *)PyArray_FromAny(min, indescr, 0, 0,
+ mina = (PyArrayObject *)PyArray_FromAny(min, indescr, 0, 0,
NPY_ARRAY_DEFAULT, NULL);
Py_DECREF(min);
if (mina == NULL) {
@@ -944,7 +940,7 @@ PyArray_Clip(PyArrayObject *self, PyObject *min, PyObject *max, PyArrayObject *o
flags = NPY_ARRAY_CARRAY;
}
Py_INCREF(indescr);
- newin = (NPY_AO *)PyArray_FromArray(self, indescr, flags);
+ newin = (PyArrayObject *)PyArray_FromArray(self, indescr, flags);
if (newin == NULL) {
goto fail;
}
@@ -977,7 +973,7 @@ PyArray_Clip(PyArrayObject *self, PyObject *min, PyObject *max, PyArrayObject *o
*/
if (out == NULL) {
Py_INCREF(indescr);
- out = (NPY_AO*)PyArray_NewFromDescr(Py_TYPE(self),
+ out = (PyArrayObject*)PyArray_NewFromDescr(Py_TYPE(self),
indescr, self->nd,
self->dimensions,
NULL, NULL,
@@ -1012,7 +1008,7 @@ PyArray_Clip(PyArrayObject *self, PyObject *min, PyObject *max, PyArrayObject *o
oflags = NPY_ARRAY_CARRAY;
oflags |= NPY_ARRAY_UPDATEIFCOPY | NPY_ARRAY_FORCECAST;
Py_INCREF(indescr);
- newout = (NPY_AO*)PyArray_FromArray(out, indescr, oflags);
+ newout = (PyArrayObject*)PyArray_FromArray(out, indescr, oflags);
if (newout == NULL) {
goto fail;
}
@@ -1105,7 +1101,7 @@ PyArray_Trace(PyArrayObject *self, int offset, int axis1, int axis2,
if (diag == NULL) {
return NULL;
}
- ret = PyArray_GenericReduceFunction((PyAO *)diag, n_ops.add, -1, rtype, out);
+ ret = PyArray_GenericReduceFunction((PyArrayObject *)diag, n_ops.add, -1, rtype, out);
Py_DECREF(diag);
return ret;
}
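As the code shows, `PyArray_Trace` extracts the diagonal and reduces it with `n_ops.add`; at the Python level this means the trace is simply the sum of the diagonal:

```python
import numpy as np

# trace == reduce(add, diagonal), matching the C implementation above
m = np.arange(9).reshape(3, 3)
assert m.trace() == m.diagonal().sum() == 12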
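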
diff --git a/numpy/core/src/multiarray/convert_datatype.c b/numpy/core/src/multiarray/convert_datatype.c
index 1acdeed23..da3f7205b 100644
--- a/numpy/core/src/multiarray/convert_datatype.c
+++ b/numpy/core/src/multiarray/convert_datatype.c
@@ -30,7 +30,7 @@
* doesn't change.
*/
NPY_NO_EXPORT PyObject *
-PyArray_CastToType(PyArrayObject *arr, PyArray_Descr *dtype, int fortran)
+PyArray_CastToType(PyArrayObject *arr, PyArray_Descr *dtype, int is_f_order)
{
PyObject *out;
@@ -44,7 +44,7 @@ PyArray_CastToType(PyArrayObject *arr, PyArray_Descr *dtype, int fortran)
arr->nd,
arr->dimensions,
NULL, NULL,
- fortran,
+ is_f_order,
(PyObject *)arr);
if (out == NULL) {
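`PyArray_CastToType` underlies `ndarray.astype`; the renamed `is_f_order` argument selects the memory layout of the newly allocated result. A quick sketch:

```python
import numpy as np

# astype allocates a new array in the requested order and casts into it,
# which is roughly what PyArray_CastToType(arr, dtype, is_f_order) does.
a = np.arange(6).reshape(2, 3)
f = a.astype(np.float64, order='F')
assert f.dtype == np.float64
assert f.flags['F_CONTIGUOUS']
```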
diff --git a/numpy/core/src/multiarray/ctors.c b/numpy/core/src/multiarray/ctors.c
index deaa152d7..9388c41a7 100644
--- a/numpy/core/src/multiarray/ctors.c
+++ b/numpy/core/src/multiarray/ctors.c
@@ -21,6 +21,7 @@
#include "buffer.h"
#include "numpymemoryview.h"
#include "lowlevel_strided_loops.h"
+#include "methods.h"
#include "_datetime.h"
#include "datetime_strings.h"
@@ -505,6 +506,91 @@ PyArray_MoveInto(PyArrayObject *dst, PyArrayObject *src)
}
}
+/*NUMPY_API
+ * Copy the memory of one array into another, allowing for overlapping data
+ * and selecting which elements to move based on a mask.
+ *
+ * Precisely handling the overlapping data is in general a difficult
+ * problem to solve efficiently, because strides can be negative.
+ * Consider "a = np.arange(3); a[::-1] = a", which previously produced
+ * the incorrect [0, 1, 0].
+ *
+ * Instead of trying to be fancy, we simply check for overlap and make
+ * a temporary copy when one exists.
+ *
+ * Returns 0 on success, negative on failure.
+ */
+NPY_NO_EXPORT int
+PyArray_MaskedMoveInto(PyArrayObject *dst, PyArrayObject *src,
+ PyArrayObject *mask, NPY_CASTING casting)
+{
+ /*
+     * Performance fix for expressions like "a[1000:6000] += x". In this
+ * case, first an in-place add is done, followed by an assignment,
+ * equivalently expressed like this:
+ *
+ * tmp = a[1000:6000] # Calls array_subscript_nice in mapping.c
+ * np.add(tmp, x, tmp)
+ * a[1000:6000] = tmp # Calls array_ass_sub in mapping.c
+ *
+ * In the assignment the underlying data type, shape, strides, and
+ * data pointers are identical, but src != dst because they are separately
+ * generated slices. By detecting this and skipping the redundant
+ * copy of values to themselves, we potentially give a big speed boost.
+ *
+ * Note that we don't call EquivTypes, because usually the exact same
+ * dtype object will appear, and we don't want to slow things down
+ * with a complicated comparison. The comparisons are ordered to
+ * try and reject this with as little work as possible.
+ */
+ if (PyArray_DATA(src) == PyArray_DATA(dst) &&
+ PyArray_DESCR(src) == PyArray_DESCR(dst) &&
+ PyArray_NDIM(src) == PyArray_NDIM(dst) &&
+ PyArray_CompareLists(PyArray_DIMS(src),
+ PyArray_DIMS(dst),
+ PyArray_NDIM(src)) &&
+ PyArray_CompareLists(PyArray_STRIDES(src),
+ PyArray_STRIDES(dst),
+ PyArray_NDIM(src))) {
+ /*printf("Redundant copy operation detected\n");*/
+ return 0;
+ }
+
+ /*
+ * A special case is when there is just one dimension with positive
+ * strides, and we pass that to CopyInto, which correctly handles
+ * it for most cases. It may still incorrectly handle copying of
+ * partially-overlapping data elements, where the data pointer was offset
+ * by a fraction of the element size.
+ */
+ if ((PyArray_NDIM(dst) == 1 &&
+ PyArray_NDIM(src) == 1 &&
+ PyArray_STRIDE(dst, 0) > 0 &&
+ PyArray_STRIDE(src, 0) > 0) ||
+ !_arrays_overlap(dst, src)) {
+ return PyArray_MaskedCopyInto(dst, src, mask, casting);
+ }
+ else {
+ PyArrayObject *tmp;
+ int ret;
+
+ /*
+ * Allocate a temporary copy array.
+ */
+ tmp = (PyArrayObject *)PyArray_NewLikeArray(dst,
+ NPY_KEEPORDER, NULL, 0);
+ if (tmp == NULL) {
+ return -1;
+ }
+ ret = PyArray_CopyInto(tmp, src);
+ if (ret == 0) {
+ ret = PyArray_MaskedCopyInto(dst, tmp, mask, casting);
+ }
+ Py_DECREF(tmp);
+ return ret;
+ }
+}
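The overlap case the comment describes can be seen directly from Python. With correct overlap handling (a temporary copy when source and destination share memory), reversed self-assignment produces the reversed array rather than the corrupted result:

```python
import numpy as np

# The destination view shares memory with the source, so a naive
# element-by-element copy would clobber values it still needs to read.
a = np.arange(3)
a[::-1] = a          # overlap is detected; a temporary copy is used
print(a)             # [2 1 0], not the buggy [0 1 0]
```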
+
/* adapted from Numarray */
@@ -1425,7 +1511,7 @@ fail:
* // Could make custom strides here too
* arr = PyArray_NewFromDescr(&PyArray_Type, dtype, ndim,
* dims, NULL,
- * fortran ? NPY_ARRAY_F_CONTIGUOUS : 0,
+ * is_f_order ? NPY_ARRAY_F_CONTIGUOUS : 0,
* NULL);
* if (arr == NULL) {
* return NULL;
@@ -2798,6 +2884,195 @@ PyArray_CopyInto(PyArrayObject *dst, PyArrayObject *src)
}
}
+/*NUMPY_API
+ * Copy an array into another array, wherever the mask specifies.
+ * The memory of src and dst must not overlap.
+ *
+ * Broadcast to the destination shape if necessary.
+ *
+ * Returns 0 on success, -1 on failure.
+ */
+NPY_NO_EXPORT int
+PyArray_MaskedCopyInto(PyArrayObject *dst, PyArrayObject *src,
+ PyArrayObject *mask, NPY_CASTING casting)
+{
+ PyArray_MaskedStridedTransferFn *stransfer = NULL;
+ NpyAuxData *transferdata = NULL;
+ NPY_BEGIN_THREADS_DEF;
+
+ if (!PyArray_ISWRITEABLE(dst)) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "cannot write to array");
+ return -1;
+ }
+
+ if (!PyArray_CanCastArrayTo(src, PyArray_DESCR(dst), casting)) {
+ PyObject *errmsg;
+ errmsg = PyUString_FromString("Cannot cast array data from ");
+ PyUString_ConcatAndDel(&errmsg,
+ PyObject_Repr((PyObject *)PyArray_DESCR(src)));
+ PyUString_ConcatAndDel(&errmsg,
+ PyUString_FromString(" to "));
+ PyUString_ConcatAndDel(&errmsg,
+ PyObject_Repr((PyObject *)PyArray_DESCR(dst)));
+ PyUString_ConcatAndDel(&errmsg,
+ PyUString_FromFormat(" according to the rule %s",
+ npy_casting_to_string(casting)));
+ PyErr_SetObject(PyExc_TypeError, errmsg);
+ return -1;
+ }
+
+
+ if (PyArray_NDIM(dst) >= PyArray_NDIM(src) &&
+ PyArray_NDIM(dst) >= PyArray_NDIM(mask) &&
+ PyArray_TRIVIALLY_ITERABLE_TRIPLE(dst, src, mask)) {
+ char *dst_data, *src_data, *mask_data;
+ npy_intp count, dst_stride, src_stride, src_itemsize, mask_stride;
+
+ int needs_api = 0;
+
+ PyArray_PREPARE_TRIVIAL_TRIPLE_ITERATION(dst, src, mask, count,
+ dst_data, src_data, mask_data,
+ dst_stride, src_stride, mask_stride);
+
+ /*
+ * Check for overlap with positive strides, and if found,
+ * possibly reverse the order
+ */
+ if (dst_data > src_data && src_stride > 0 && dst_stride > 0 &&
+ (dst_data < src_data+src_stride*count) &&
+ (src_data < dst_data+dst_stride*count)) {
+ dst_data += dst_stride*(count-1);
+ src_data += src_stride*(count-1);
+ mask_data += mask_stride*(count-1);
+ dst_stride = -dst_stride;
+ src_stride = -src_stride;
+ mask_stride = -mask_stride;
+ }
+
+ if (PyArray_GetMaskedDTypeTransferFunction(
+ PyArray_ISALIGNED(src) && PyArray_ISALIGNED(dst),
+ src_stride, dst_stride, mask_stride,
+ PyArray_DESCR(src),
+ PyArray_DESCR(dst),
+ PyArray_DESCR(mask),
+ 0,
+ &stransfer, &transferdata,
+ &needs_api) != NPY_SUCCEED) {
+ return -1;
+ }
+
+ src_itemsize = PyArray_DESCR(src)->elsize;
+
+ if (!needs_api) {
+ NPY_BEGIN_THREADS;
+ }
+
+ stransfer(dst_data, dst_stride, src_data, src_stride,
+ (npy_uint8 *)mask_data, mask_stride,
+ count, src_itemsize, transferdata);
+
+ if (!needs_api) {
+ NPY_END_THREADS;
+ }
+
+ NPY_AUXDATA_FREE(transferdata);
+
+ return PyErr_Occurred() ? -1 : 0;
+ }
+ else {
+ PyArrayObject *op[3];
+ npy_uint32 op_flags[3];
+ NpyIter *iter;
+
+ NpyIter_IterNextFunc *iternext;
+ char **dataptr;
+ npy_intp *stride;
+ npy_intp *countptr;
+ npy_intp src_itemsize;
+ int needs_api;
+
+ op[0] = dst;
+ op[1] = src;
+ op[2] = mask;
+ /*
+         * TODO: In NumPy 2.0, re-enable NPY_ITER_NO_BROADCAST. This
+ * was removed during NumPy 1.6 testing for compatibility
+ * with NumPy 1.5, as per Travis's -10 veto power.
+ */
+ /*op_flags[0] = NPY_ITER_WRITEONLY|NPY_ITER_NO_BROADCAST;*/
+ op_flags[0] = NPY_ITER_WRITEONLY;
+ op_flags[1] = NPY_ITER_READONLY;
+ op_flags[2] = NPY_ITER_READONLY;
+
+ iter = NpyIter_MultiNew(3, op,
+ NPY_ITER_EXTERNAL_LOOP|
+ NPY_ITER_REFS_OK|
+ NPY_ITER_ZEROSIZE_OK,
+ NPY_KEEPORDER,
+ NPY_NO_CASTING,
+ op_flags,
+ NULL);
+ if (iter == NULL) {
+ return -1;
+ }
+
+ iternext = NpyIter_GetIterNext(iter, NULL);
+ if (iternext == NULL) {
+ NpyIter_Deallocate(iter);
+ return -1;
+ }
+ dataptr = NpyIter_GetDataPtrArray(iter);
+ stride = NpyIter_GetInnerStrideArray(iter);
+ countptr = NpyIter_GetInnerLoopSizePtr(iter);
+ src_itemsize = PyArray_DESCR(src)->elsize;
+
+ needs_api = NpyIter_IterationNeedsAPI(iter);
+
+ /*
+ * Because buffering is disabled in the iterator, the inner loop
+ * strides will be the same throughout the iteration loop. Thus,
+ * we can pass them to this function to take advantage of
+ * contiguous strides, etc.
+ */
+ if (PyArray_GetMaskedDTypeTransferFunction(
+ PyArray_ISALIGNED(src) && PyArray_ISALIGNED(dst),
+ stride[1], stride[0], stride[2],
+ PyArray_DESCR(src),
+ PyArray_DESCR(dst),
+ PyArray_DESCR(mask),
+ 0,
+ &stransfer, &transferdata,
+ &needs_api) != NPY_SUCCEED) {
+ NpyIter_Deallocate(iter);
+ return -1;
+ }
+
+
+ if (NpyIter_GetIterSize(iter) != 0) {
+ if (!needs_api) {
+ NPY_BEGIN_THREADS;
+ }
+
+ do {
+ stransfer(dataptr[0], stride[0],
+ dataptr[1], stride[1],
+ (npy_uint8 *)dataptr[2], stride[2],
+ *countptr, src_itemsize, transferdata);
+ } while(iternext(iter));
+
+ if (!needs_api) {
+ NPY_END_THREADS;
+ }
+ }
+
+ NPY_AUXDATA_FREE(transferdata);
+ NpyIter_Deallocate(iter);
+
+ return PyErr_Occurred() ? -1 : 0;
+ }
+}
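`PyArray_MaskedCopyInto` is what the new `np.copyto` calls when a `where` mask is given: only elements where the mask is true are written, and the mask and source broadcast to the destination shape. A minimal illustration:

```python
import numpy as np

dst = np.zeros(4)
src = np.arange(4.0)
mask = np.array([True, False, True, False])
np.copyto(dst, src, where=mask)   # only masked-true elements are written
print(dst)  # [0. 0. 2. 0.]
```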
+
/*NUMPY_API
* PyArray_CheckAxis
@@ -2866,7 +3141,7 @@ PyArray_CheckAxis(PyArrayObject *arr, int *axis, int flags)
* accepts NULL type
*/
NPY_NO_EXPORT PyObject *
-PyArray_Zeros(int nd, npy_intp *dims, PyArray_Descr *type, int fortran)
+PyArray_Zeros(int nd, npy_intp *dims, PyArray_Descr *type, int is_f_order)
{
PyArrayObject *ret;
@@ -2877,7 +3152,7 @@ PyArray_Zeros(int nd, npy_intp *dims, PyArray_Descr *type, int fortran)
type,
nd, dims,
NULL, NULL,
- fortran, NULL);
+ is_f_order, NULL);
if (ret == NULL) {
return NULL;
}
@@ -2895,7 +3170,7 @@ PyArray_Zeros(int nd, npy_intp *dims, PyArray_Descr *type, int fortran)
* steals referenct to type
*/
NPY_NO_EXPORT PyObject *
-PyArray_Empty(int nd, npy_intp *dims, PyArray_Descr *type, int fortran)
+PyArray_Empty(int nd, npy_intp *dims, PyArray_Descr *type, int is_f_order)
{
PyArrayObject *ret;
@@ -2903,7 +3178,7 @@ PyArray_Empty(int nd, npy_intp *dims, PyArray_Descr *type, int fortran)
ret = (PyArrayObject *)PyArray_NewFromDescr(&PyArray_Type,
type, nd, dims,
NULL, NULL,
- fortran, NULL);
+ is_f_order, NULL);
if (ret == NULL) {
return NULL;
}
diff --git a/numpy/core/src/multiarray/dtype_transfer.c b/numpy/core/src/multiarray/dtype_transfer.c
index 7609f0016..47182b82a 100644
--- a/numpy/core/src/multiarray/dtype_transfer.c
+++ b/numpy/core/src/multiarray/dtype_transfer.c
@@ -2999,6 +2999,149 @@ get_setdestzero_fields_transfer_function(int aligned,
return NPY_SUCCEED;
}
+/************************* MASKED TRANSFER WRAPPER *************************/
+
+typedef struct {
+ NpyAuxData base;
+ /* The transfer function being wrapped */
+ PyArray_StridedTransferFn *stransfer;
+ NpyAuxData *transferdata;
+
+ /* The src decref function if necessary */
+ PyArray_StridedTransferFn *decsrcref_stransfer;
+ NpyAuxData *decsrcref_transferdata;
+} _masked_wrapper_transfer_data;
+
+/* transfer data free function */
+void _masked_wrapper_transfer_data_free(NpyAuxData *data)
+{
+ _masked_wrapper_transfer_data *d = (_masked_wrapper_transfer_data *)data;
+ NPY_AUXDATA_FREE(d->transferdata);
+ NPY_AUXDATA_FREE(d->decsrcref_transferdata);
+ PyArray_free(data);
+}
+
+/* transfer data copy function */
+NpyAuxData *_masked_wrapper_transfer_data_clone(NpyAuxData *data)
+{
+ _masked_wrapper_transfer_data *d = (_masked_wrapper_transfer_data *)data;
+ _masked_wrapper_transfer_data *newdata;
+
+ /* Allocate the data and populate it */
+ newdata = (_masked_wrapper_transfer_data *)PyArray_malloc(
+ sizeof(_masked_wrapper_transfer_data));
+ if (newdata == NULL) {
+ return NULL;
+ }
+ memcpy(newdata, d, sizeof(_masked_wrapper_transfer_data));
+
+ /* Clone all the owned auxdata as well */
+ if (newdata->transferdata != NULL) {
+ newdata->transferdata = NPY_AUXDATA_CLONE(newdata->transferdata);
+ if (newdata->transferdata == NULL) {
+ PyArray_free(newdata);
+ return NULL;
+ }
+ }
+ if (newdata->decsrcref_transferdata != NULL) {
+ newdata->decsrcref_transferdata =
+ NPY_AUXDATA_CLONE(newdata->decsrcref_transferdata);
+ if (newdata->decsrcref_transferdata == NULL) {
+ NPY_AUXDATA_FREE(newdata->transferdata);
+ PyArray_free(newdata);
+ return NULL;
+ }
+ }
+
+ return (NpyAuxData *)newdata;
+}
+
+void _strided_masked_wrapper_decsrcref_transfer_function(
+ char *dst, npy_intp dst_stride,
+ char *src, npy_intp src_stride,
+ npy_uint8 *mask, npy_intp mask_stride,
+ npy_intp N, npy_intp src_itemsize,
+ NpyAuxData *transferdata)
+{
+ _masked_wrapper_transfer_data *d =
+ (_masked_wrapper_transfer_data *)transferdata;
+ npy_intp subloopsize;
+ PyArray_StridedTransferFn *unmasked_stransfer, *decsrcref_stransfer;
+ NpyAuxData *unmasked_transferdata, *decsrcref_transferdata;
+
+ unmasked_stransfer = d->stransfer;
+ unmasked_transferdata = d->transferdata;
+ decsrcref_stransfer = d->decsrcref_stransfer;
+ decsrcref_transferdata = d->decsrcref_transferdata;
+
+ while (N > 0) {
+ /* Skip masked values, still calling decsrcref for move_references */
+ subloopsize = 0;
+ while (subloopsize < N && ((*mask)&0x01) == 0) {
+ ++subloopsize;
+ mask += mask_stride;
+ }
+ decsrcref_stransfer(NULL, 0, src, src_stride,
+ subloopsize, src_itemsize, decsrcref_transferdata);
+ dst += subloopsize * dst_stride;
+ src += subloopsize * src_stride;
+ N -= subloopsize;
+ /* Process unmasked values */
+ subloopsize = 0;
+ while (subloopsize < N && ((*mask)&0x01) != 0) {
+ ++subloopsize;
+ mask += mask_stride;
+ }
+ unmasked_stransfer(dst, dst_stride, src, src_stride,
+ subloopsize, src_itemsize, unmasked_transferdata);
+ dst += subloopsize * dst_stride;
+ src += subloopsize * src_stride;
+ N -= subloopsize;
+ }
+}
+
+void _strided_masked_wrapper_transfer_function(
+ char *dst, npy_intp dst_stride,
+ char *src, npy_intp src_stride,
+ npy_uint8 *mask, npy_intp mask_stride,
+ npy_intp N, npy_intp src_itemsize,
+ NpyAuxData *transferdata)
+{
+
+ _masked_wrapper_transfer_data *d =
+ (_masked_wrapper_transfer_data *)transferdata;
+ npy_intp subloopsize;
+ PyArray_StridedTransferFn *unmasked_stransfer;
+ NpyAuxData *unmasked_transferdata;
+
+ unmasked_stransfer = d->stransfer;
+ unmasked_transferdata = d->transferdata;
+
+ while (N > 0) {
+ /* Skip masked values */
+ subloopsize = 0;
+ while (subloopsize < N && ((*mask)&0x01) == 0) {
+ ++subloopsize;
+ mask += mask_stride;
+ }
+ dst += subloopsize * dst_stride;
+ src += subloopsize * src_stride;
+ N -= subloopsize;
+ /* Process unmasked values */
+ subloopsize = 0;
+ while (subloopsize < N && ((*mask)&0x01) != 0) {
+ ++subloopsize;
+ mask += mask_stride;
+ }
+ unmasked_stransfer(dst, dst_stride, src, src_stride,
+ subloopsize, src_itemsize, unmasked_transferdata);
+ dst += subloopsize * dst_stride;
+ src += subloopsize * src_stride;
+ N -= subloopsize;
+ }
+}
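The two wrapper loops above share one idea: partition the mask into alternating runs of masked and unmasked elements, and invoke the wrapped (unmasked) transfer function once per unmasked run, so contiguous specializations still apply. A pure-Python sketch of that run-length loop (the `copy_run` callback is a hypothetical stand-in for the wrapped `stransfer`):

```python
def masked_transfer(dst, src, mask, copy_run):
    """Sketch of the wrapper loop: skip masked-out runs, then hand each
    contiguous unmasked run to the wrapped transfer in a single call."""
    n, i = len(src), 0
    while i < n:
        # skip values whose mask bit is clear
        while i < n and not (mask[i] & 0x01):
            i += 1
        start = i
        # extend over the run of unmasked values
        while i < n and (mask[i] & 0x01):
            i += 1
        copy_run(dst, src, start, i)   # one call per contiguous run

data_dst = [0] * 6
masked_transfer(data_dst, [1, 2, 3, 4, 5, 6], [1, 1, 0, 0, 1, 0],
                lambda d, s, a, b: d.__setitem__(slice(a, b), s[a:b]))
print(data_dst)  # [1, 2, 0, 0, 5, 0]
```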
+
+
/************************* DEST BOOL SETONE *******************************/
static void
@@ -3603,6 +3746,84 @@ PyArray_GetDTypeTransferFunction(int aligned,
}
NPY_NO_EXPORT int
+PyArray_GetMaskedDTypeTransferFunction(int aligned,
+ npy_intp src_stride,
+ npy_intp dst_stride,
+ npy_intp mask_stride,
+ PyArray_Descr *src_dtype,
+ PyArray_Descr *dst_dtype,
+ PyArray_Descr *mask_dtype,
+ int move_references,
+ PyArray_MaskedStridedTransferFn **out_stransfer,
+ NpyAuxData **out_transferdata,
+ int *out_needs_api)
+{
+ PyArray_StridedTransferFn *stransfer = NULL;
+ NpyAuxData *transferdata = NULL;
+ _masked_wrapper_transfer_data *data;
+
+ /* TODO: Add struct-based mask_dtype support later */
+ if (mask_dtype->type_num != NPY_BOOL &&
+ mask_dtype->type_num != NPY_UINT8) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "Only bool and uint8 masks are supported at the moment, "
+                "structs of bool/uint8 are planned for the future");
+ return NPY_FAIL;
+ }
+
+ /* TODO: Special case some important cases so they're fast */
+
+ /* Fall back to wrapping a non-masked transfer function */
+ if (PyArray_GetDTypeTransferFunction(aligned,
+ src_stride, dst_stride,
+ src_dtype, dst_dtype,
+ move_references,
+ &stransfer, &transferdata,
+ out_needs_api) != NPY_SUCCEED) {
+ return NPY_FAIL;
+ }
+
+ /* Create the wrapper function's auxdata */
+ data = (_masked_wrapper_transfer_data *)PyArray_malloc(
+ sizeof(_masked_wrapper_transfer_data));
+ if (data == NULL) {
+ PyErr_NoMemory();
+ NPY_AUXDATA_FREE(transferdata);
+ return NPY_FAIL;
+ }
+
+ /* Fill in the auxdata object */
+ memset(data, 0, sizeof(_masked_wrapper_transfer_data));
+ data->base.free = &_masked_wrapper_transfer_data_free;
+ data->base.clone = &_masked_wrapper_transfer_data_clone;
+
+ data->stransfer = stransfer;
+ data->transferdata = transferdata;
+
+ /* If the src object will need a DECREF, get a function to handle that */
+ if (move_references && PyDataType_REFCHK(src_dtype)) {
+ if (get_decsrcref_transfer_function(aligned,
+ src_stride,
+ src_dtype,
+ &data->decsrcref_stransfer,
+ &data->decsrcref_transferdata,
+ out_needs_api) != NPY_SUCCEED) {
+ NPY_AUXDATA_FREE((NpyAuxData *)data);
+ return NPY_FAIL;
+ }
+
+ *out_stransfer = &_strided_masked_wrapper_decsrcref_transfer_function;
+ }
+ else {
+ *out_stransfer = &_strided_masked_wrapper_transfer_function;
+ }
+
+ *out_transferdata = (NpyAuxData *)data;
+
+ return NPY_SUCCEED;
+}
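The masked transfer function is built for a specific (src, dst, mask) dtype and stride combination, but at the Python level the `where=` mask behaves like any other operand and broadcasts against the destination:

```python
import numpy as np

# A 1-D boolean mask broadcasting against a 2-D destination; the scalar
# source is also broadcast.
dst = np.zeros((2, 3))
np.copyto(dst, 1.0, where=np.array([True, False, True]))
print(dst)
# [[1. 0. 1.]
#  [1. 0. 1.]]
```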
+
+NPY_NO_EXPORT int
PyArray_CastRawArrays(npy_intp count,
char *src, char *dst,
npy_intp src_stride, npy_intp dst_stride,
diff --git a/numpy/core/src/multiarray/item_selection.c b/numpy/core/src/multiarray/item_selection.c
index b4aef4ad6..74203774c 100644
--- a/numpy/core/src/multiarray/item_selection.c
+++ b/numpy/core/src/multiarray/item_selection.c
@@ -393,6 +393,11 @@ PyArray_PutMask(PyArrayObject *self, PyObject* values0, PyObject* mask0)
char *src, *dest;
int copied = 0;
+ if (DEPRECATE("putmask has been deprecated. Use copyto with 'where' as "
+ "the mask instead") < 0) {
+ return NULL;
+ }
+
mask = NULL;
values = NULL;
if (!PyArray_Check(self)) {
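The deprecation message points at `copyto` with a `where` mask as the replacement. For the common case of a boolean mask and a scalar (or same-shaped) value, the two are equivalent; note that `putmask` cycles a shorter value array over the masked positions, while `copyto` broadcasts instead:

```python
import numpy as np

a = np.arange(5)
mask = a > 2
# Deprecated form:
#   np.putmask(a, mask, -1)
# Suggested replacement:
np.copyto(a, -1, where=mask)
print(a)  # [ 0  1  2 -1 -1]
```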
diff --git a/numpy/core/src/multiarray/methods.c b/numpy/core/src/multiarray/methods.c
index 2213ae99d..68f697a4d 100644
--- a/numpy/core/src/multiarray/methods.c
+++ b/numpy/core/src/multiarray/methods.c
@@ -1444,7 +1444,7 @@ array_setstate(PyArrayObject *self, PyObject *args)
PyObject *shape;
PyArray_Descr *typecode;
int version = 1;
- int fortran;
+ int is_f_order;
PyObject *rawdata = NULL;
char *datastr;
Py_ssize_t len;
@@ -1458,14 +1458,14 @@ array_setstate(PyArrayObject *self, PyObject *args)
&version,
&PyTuple_Type, &shape,
&PyArrayDescr_Type, &typecode,
- &fortran,
+ &is_f_order,
&rawdata)) {
PyErr_Clear();
version = 0;
if (!PyArg_ParseTuple(args, "(O!O!iO)",
&PyTuple_Type, &shape,
&PyArrayDescr_Type, &typecode,
- &fortran,
+ &is_f_order,
&rawdata)) {
return NULL;
}
@@ -1564,8 +1564,8 @@ array_setstate(PyArrayObject *self, PyObject *args)
memcpy(self->dimensions, dimensions, sizeof(intp)*nd);
(void) _array_fill_strides(self->strides, dimensions, nd,
(size_t) self->descr->elsize,
- (fortran ? NPY_ARRAY_F_CONTIGUOUS :
- NPY_ARRAY_C_CONTIGUOUS),
+ (is_f_order ? NPY_ARRAY_F_CONTIGUOUS :
+ NPY_ARRAY_C_CONTIGUOUS),
&(self->flags));
}
diff --git a/numpy/core/src/multiarray/multiarraymodule.c b/numpy/core/src/multiarray/multiarraymodule.c
index 3fb7fc37e..452ddec20 100644
--- a/numpy/core/src/multiarray/multiarraymodule.c
+++ b/numpy/core/src/multiarray/multiarraymodule.c
@@ -45,6 +45,7 @@ NPY_NO_EXPORT int NPY_NUMUSERTYPES = 0;
#include "numpymemoryview.h"
#include "convert_datatype.h"
#include "nditer_pywrap.h"
+#include "methods.h"
#include "_datetime.h"
#include "datetime_strings.h"
#include "datetime_busday.h"
@@ -1649,6 +1650,79 @@ clean_type:
}
static PyObject *
+array_copyto(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *kwds)
+{
+
+ static char *kwlist[] = {"dst","src","casting","where",NULL};
+ PyObject *wheremask_in = NULL;
+ PyArrayObject *dst = NULL, *src = NULL, *wheremask = NULL;
+ NPY_CASTING casting = NPY_SAME_KIND_CASTING;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "O!O&|O&O", kwlist,
+ &PyArray_Type, &dst,
+ &PyArray_Converter, &src,
+ &PyArray_CastingConverter, &casting,
+ &wheremask_in)) {
+ goto fail;
+ }
+
+ if (wheremask_in != NULL) {
+ /* Get the boolean where mask */
+ PyArray_Descr *dtype = PyArray_DescrFromType(NPY_BOOL);
+ if (dtype == NULL) {
+ goto fail;
+ }
+ wheremask = (PyArrayObject *)PyArray_FromAny(wheremask_in,
+ dtype, 0, 0, 0, NULL);
+ if (wheremask == NULL) {
+ goto fail;
+ }
+
+ /* Use the 'move' function which handles overlapping */
+ if (PyArray_MaskedMoveInto(dst, src, wheremask, casting) < 0) {
+ goto fail;
+ }
+ }
+ else {
+ /*
+ * MoveInto doesn't accept a casting rule, must check it
+ * ourselves.
+ */
+ if (!PyArray_CanCastArrayTo(src, PyArray_DESCR(dst), casting)) {
+ PyObject *errmsg;
+ errmsg = PyUString_FromString("Cannot cast array data from ");
+ PyUString_ConcatAndDel(&errmsg,
+ PyObject_Repr((PyObject *)PyArray_DESCR(src)));
+ PyUString_ConcatAndDel(&errmsg,
+ PyUString_FromString(" to "));
+ PyUString_ConcatAndDel(&errmsg,
+ PyObject_Repr((PyObject *)PyArray_DESCR(dst)));
+ PyUString_ConcatAndDel(&errmsg,
+ PyUString_FromFormat(" according to the rule %s",
+ npy_casting_to_string(casting)));
+ PyErr_SetObject(PyExc_TypeError, errmsg);
+ goto fail;
+ }
+
+ /* Use the 'move' function which handles overlapping */
+ if (PyArray_MoveInto(dst, src) < 0) {
+ goto fail;
+ }
+ }
+
+ Py_XDECREF(src);
+ Py_XDECREF(wheremask);
+
+ Py_INCREF(Py_None);
+ return Py_None;
+
+ fail:
+ Py_XDECREF(src);
+ Py_XDECREF(wheremask);
+ return NULL;
+}
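`array_copyto` defaults to `NPY_SAME_KIND_CASTING` and checks the rule with `PyArray_CanCastArrayTo` before copying, so a lossy cross-kind copy is rejected unless the caller opts in:

```python
import numpy as np

dst = np.zeros(3, dtype=np.int32)
src = np.arange(3, dtype=np.float64)

try:
    np.copyto(dst, src)                # default casting='same_kind'
except TypeError as e:
    print("rejected:", e)              # float64 -> int32 is not same-kind

np.copyto(dst, src, casting='unsafe')  # explicit opt-in succeeds
print(dst)  # [0 1 2]
```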
+
+static PyObject *
array_empty(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *kwds)
{
@@ -1656,7 +1730,7 @@ array_empty(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *kwds)
PyArray_Descr *typecode = NULL;
PyArray_Dims shape = {NULL, 0};
NPY_ORDER order = NPY_CORDER;
- Bool fortran;
+ npy_bool is_f_order;
PyObject *ret = NULL;
if (!PyArg_ParseTupleAndKeywords(args, kwds, "O&|O&O&", kwlist,
@@ -1668,10 +1742,10 @@ array_empty(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *kwds)
switch (order) {
case NPY_CORDER:
- fortran = FALSE;
+ is_f_order = FALSE;
break;
case NPY_FORTRANORDER:
- fortran = TRUE;
+ is_f_order = TRUE;
break;
default:
PyErr_SetString(PyExc_ValueError,
@@ -1679,7 +1753,7 @@ array_empty(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *kwds)
goto fail;
}
- ret = PyArray_Empty(shape.len, shape.ptr, typecode, fortran);
+ ret = PyArray_Empty(shape.len, shape.ptr, typecode, is_f_order);
PyDimMem_FREE(shape.ptr);
return ret;
@@ -1789,7 +1863,7 @@ array_zeros(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *kwds)
PyArray_Descr *typecode = NULL;
PyArray_Dims shape = {NULL, 0};
NPY_ORDER order = NPY_CORDER;
- Bool fortran = FALSE;
+ npy_bool is_f_order = FALSE;
PyObject *ret = NULL;
if (!PyArg_ParseTupleAndKeywords(args, kwds, "O&|O&O&", kwlist,
@@ -1801,10 +1875,10 @@ array_zeros(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *kwds)
switch (order) {
case NPY_CORDER:
- fortran = FALSE;
+ is_f_order = FALSE;
break;
case NPY_FORTRANORDER:
- fortran = TRUE;
+ is_f_order = TRUE;
break;
default:
PyErr_SetString(PyExc_ValueError,
@@ -1812,7 +1886,7 @@ array_zeros(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *kwds)
goto fail;
}
- ret = PyArray_Zeros(shape.len, shape.ptr, typecode, (int) fortran);
+ ret = PyArray_Zeros(shape.len, shape.ptr, typecode, (int) is_f_order);
PyDimMem_FREE(shape.ptr);
return ret;
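The `is_f_order` flag threaded through `PyArray_Zeros`/`PyArray_Empty` corresponds to the `order=` keyword at the Python level:

```python
import numpy as np

# order='F' maps to is_f_order=1 in the C constructors above
z = np.zeros((2, 3), order='F')
assert z.flags['F_CONTIGUOUS']

e = np.empty((2, 3), order='C')
assert e.flags['C_CONTIGUOUS']
```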
@@ -3363,6 +3437,9 @@ static struct PyMethodDef array_module_methods[] = {
{"array",
(PyCFunction)_array_fromobject,
METH_VARARGS|METH_KEYWORDS, NULL},
+ {"copyto",
+ (PyCFunction)array_copyto,
+ METH_VARARGS|METH_KEYWORDS, NULL},
{"nested_iters",
(PyCFunction)NpyIter_NestedIters,
METH_VARARGS|METH_KEYWORDS, NULL},
diff --git a/numpy/core/src/multiarray/nditer_api.c b/numpy/core/src/multiarray/nditer_api.c
index 10f4afd8b..1328ebc38 100644
--- a/numpy/core/src/multiarray/nditer_api.c
+++ b/numpy/core/src/multiarray/nditer_api.c
@@ -1337,6 +1337,9 @@ NpyIter_DebugPrint(NpyIter *iter)
printf("\n");
printf("| NDim: %d\n", (int)ndim);
printf("| NOp: %d\n", (int)nop);
+ if (NIT_MASKOP(iter) >= 0) {
+ printf("| MaskOp: %d\n", (int)NIT_MASKOP(iter));
+ }
printf("| IterSize: %d\n", (int)NIT_ITERSIZE(iter));
printf("| IterStart: %d\n", (int)NIT_ITERSTART(iter));
printf("| IterEnd: %d\n", (int)NIT_ITEREND(iter));
@@ -1418,6 +1421,10 @@ NpyIter_DebugPrint(NpyIter *iter)
printf("ALIGNED ");
if ((NIT_OPITFLAGS(iter)[iop])&NPY_OP_ITFLAG_REDUCE)
printf("REDUCE ");
+ if ((NIT_OPITFLAGS(iter)[iop])&NPY_OP_ITFLAG_VIRTUAL)
+ printf("VIRTUAL ");
+ if ((NIT_OPITFLAGS(iter)[iop])&NPY_OP_ITFLAG_WRITEMASKED)
+ printf("WRITEMASKED ");
printf("\n");
}
printf("|\n");
diff --git a/numpy/core/src/multiarray/nditer_constr.c b/numpy/core/src/multiarray/nditer_constr.c
index 774ed65e4..e99a0fb0a 100644
--- a/numpy/core/src/multiarray/nditer_constr.c
+++ b/numpy/core/src/multiarray/nditer_constr.c
@@ -39,7 +39,8 @@ npyiter_prepare_operands(int nop, PyArrayObject **op_in,
PyArray_Descr **op_request_dtypes,
PyArray_Descr **op_dtype,
npy_uint32 flags,
- npy_uint32 *op_flags, char *op_itflags);
+ npy_uint32 *op_flags, char *op_itflags,
+ npy_int8 *out_maskop);
static int
npyiter_check_casting(int nop, PyArrayObject **op,
PyArray_Descr **op_dtype,
@@ -200,7 +201,7 @@ NpyIter_AdvancedNew(int nop, PyArrayObject **op_in, npy_uint32 flags,
if (!npyiter_prepare_operands(nop, op_in, op, op_dataptr,
op_request_dtypes, op_dtype,
flags,
- op_flags, op_itflags)) {
+ op_flags, op_itflags, &NIT_MASKOP(iter))) {
PyArray_free(iter);
return NULL;
}
@@ -213,7 +214,7 @@ NpyIter_AdvancedNew(int nop, PyArrayObject **op_in, npy_uint32 flags,
* Initialize buffer data (must set the buffers and transferdata
* to NULL before we might deallocate the iterator).
*/
- if (itflags&NPY_ITFLAG_BUFFER) {
+ if (itflags & NPY_ITFLAG_BUFFER) {
bufferdata = NIT_BUFFERDATA(iter);
NBF_SIZE(bufferdata) = 0;
memset(NBF_BUFFERS(bufferdata), 0, nop*NPY_SIZEOF_INTP);
@@ -231,7 +232,7 @@ NpyIter_AdvancedNew(int nop, PyArrayObject **op_in, npy_uint32 flags,
NPY_IT_TIME_POINT(c_fill_axisdata);
- if (itflags&NPY_ITFLAG_BUFFER) {
+ if (itflags & NPY_ITFLAG_BUFFER) {
/*
* If buffering is enabled and no buffersize was given, use a default
* chosen to be big enough to get some amortization benefits, but
@@ -276,7 +277,7 @@ NpyIter_AdvancedNew(int nop, PyArrayObject **op_in, npy_uint32 flags,
/* Flag this so later we can avoid flipping axes */
any_allocate = 1;
/* If a subtype may be used, indicate so */
- if (!(op_flags[iop]&NPY_ITER_NO_SUBTYPE)) {
+ if (!(op_flags[iop] & NPY_ITER_NO_SUBTYPE)) {
need_subtype = 1;
}
/*
@@ -293,7 +294,7 @@ NpyIter_AdvancedNew(int nop, PyArrayObject **op_in, npy_uint32 flags,
* If the ordering was not forced, reorder the axes
* and flip negative strides to find the best one.
*/
- if (!(itflags&NPY_ITFLAG_FORCEDORDER)) {
+ if (!(itflags & NPY_ITFLAG_FORCEDORDER)) {
if (ndim > 1) {
npyiter_find_best_axis_ordering(iter);
}
@@ -301,7 +302,7 @@ NpyIter_AdvancedNew(int nop, PyArrayObject **op_in, npy_uint32 flags,
* If there's an output being allocated, we must not negate
* any strides.
*/
- if (!any_allocate && !(flags&NPY_ITER_DONT_NEGATE_STRIDES)) {
+ if (!any_allocate && !(flags & NPY_ITER_DONT_NEGATE_STRIDES)) {
npyiter_flip_negative_strides(iter);
}
itflags = NIT_ITFLAGS(iter);
@@ -320,9 +321,9 @@ NpyIter_AdvancedNew(int nop, PyArrayObject **op_in, npy_uint32 flags,
* If an automatically allocated output didn't have a specified
* dtype, we need to figure it out now, before allocating the outputs.
*/
- if (any_missing_dtypes || (flags&NPY_ITER_COMMON_DTYPE)) {
+ if (any_missing_dtypes || (flags & NPY_ITER_COMMON_DTYPE)) {
PyArray_Descr *dtype;
- int only_inputs = !(flags&NPY_ITER_COMMON_DTYPE);
+ int only_inputs = !(flags & NPY_ITER_COMMON_DTYPE);
op = NIT_OPERANDS(iter);
op_dtype = NIT_DTYPES(iter);
@@ -336,7 +337,7 @@ NpyIter_AdvancedNew(int nop, PyArrayObject **op_in, npy_uint32 flags,
NpyIter_Deallocate(iter);
return NULL;
}
- if (flags&NPY_ITER_COMMON_DTYPE) {
+ if (flags & NPY_ITER_COMMON_DTYPE) {
NPY_IT_DBG_PRINT("Iterator: Replacing all data types\n");
/* Replace all the data types */
for (iop = 0; iop < nop; ++iop) {
@@ -392,7 +393,7 @@ NpyIter_AdvancedNew(int nop, PyArrayObject **op_in, npy_uint32 flags,
* Finally, if a multi-index wasn't requested,
* it may be possible to coalesce some axes together.
*/
- if (ndim > 1 && !(itflags&NPY_ITFLAG_HASMULTIINDEX)) {
+ if (ndim > 1 && !(itflags & NPY_ITFLAG_HASMULTIINDEX)) {
npyiter_coalesce_axes(iter);
/*
* The operation may have changed the layout, so we have to
@@ -412,9 +413,9 @@ NpyIter_AdvancedNew(int nop, PyArrayObject **op_in, npy_uint32 flags,
* Now that the axes are finished, check whether we can apply
* the single iteration optimization to the iternext function.
*/
- if (!(itflags&NPY_ITFLAG_BUFFER)) {
+ if (!(itflags & NPY_ITFLAG_BUFFER)) {
NpyIter_AxisData *axisdata = NIT_AXISDATA(iter);
- if (itflags&NPY_ITFLAG_EXLOOP) {
+ if (itflags & NPY_ITFLAG_EXLOOP) {
if (NIT_ITERSIZE(iter) == NAD_SHAPE(axisdata)) {
NIT_ITFLAGS(iter) |= NPY_ITFLAG_ONEITERATION;
}
@@ -428,11 +429,11 @@ NpyIter_AdvancedNew(int nop, PyArrayObject **op_in, npy_uint32 flags,
* If REFS_OK was specified, check whether there are any
* reference arrays and flag it if so.
*/
- if (flags&NPY_ITER_REFS_OK) {
+ if (flags & NPY_ITER_REFS_OK) {
for (iop = 0; iop < nop; ++iop) {
PyArray_Descr *rdt = op_dtype[iop];
- if ((rdt->flags&(NPY_ITEM_REFCOUNT|
- NPY_ITEM_IS_POINTER|
+ if ((rdt->flags & (NPY_ITEM_REFCOUNT |
+ NPY_ITEM_IS_POINTER |
NPY_NEEDS_PYAPI)) != 0) {
/* Iteration needs API access */
NIT_ITFLAGS(iter) |= NPY_ITFLAG_NEEDSAPI;
@@ -441,12 +442,12 @@ NpyIter_AdvancedNew(int nop, PyArrayObject **op_in, npy_uint32 flags,
}
/* If buffering is set without delayed allocation */
- if (itflags&NPY_ITFLAG_BUFFER) {
+ if (itflags & NPY_ITFLAG_BUFFER) {
if (!npyiter_allocate_transfer_functions(iter)) {
NpyIter_Deallocate(iter);
return NULL;
}
- if (itflags&NPY_ITFLAG_DELAYBUF) {
+ if (itflags & NPY_ITFLAG_DELAYBUF) {
bufferdata = NIT_BUFFERDATA(iter);
/* Make the data pointers NULL */
memset(NBF_PTRS(bufferdata), 0, nop*NPY_SIZEOF_INTP);
@@ -513,7 +514,7 @@ NpyIter_New(PyArrayObject *op, npy_uint32 flags,
PyArray_Descr* dtype)
{
/* Split the flags into separate global and op flags */
- npy_uint32 op_flags = flags&NPY_ITER_PER_OP_FLAGS;
+ npy_uint32 op_flags = flags & NPY_ITER_PER_OP_FLAGS;
flags &= NPY_ITER_GLOBAL_FLAGS;
return NpyIter_AdvancedNew(1, &op, flags, order, casting,
@@ -553,7 +554,7 @@ NpyIter_Copy(NpyIter *iter)
}
/* Allocate buffers and make copies of the transfer data if necessary */
- if (itflags&NPY_ITFLAG_BUFFER) {
+ if (itflags & NPY_ITFLAG_BUFFER) {
NpyIter_BufferData *bufferdata;
npy_intp buffersize, itemsize;
char **buffers;
@@ -638,7 +639,7 @@ NpyIter_Deallocate(NpyIter *iter)
PyArrayObject **object = NIT_OPERANDS(iter);
/* Deallocate any buffers and buffering data */
- if (itflags&NPY_ITFLAG_BUFFER) {
+ if (itflags & NPY_ITFLAG_BUFFER) {
NpyIter_BufferData *bufferdata = NIT_BUFFERDATA(iter);
char **buffers;
NpyAuxData **transferdata;
@@ -686,7 +687,7 @@ NpyIter_Deallocate(NpyIter *iter)
static int
npyiter_check_global_flags(npy_uint32 flags, npy_uint32* itflags)
{
- if ((flags&NPY_ITER_PER_OP_FLAGS) != 0) {
+ if ((flags & NPY_ITER_PER_OP_FLAGS) != 0) {
PyErr_SetString(PyExc_ValueError,
"A per-operand flag was passed as a global flag "
"to the iterator constructor");
@@ -694,8 +695,8 @@ npyiter_check_global_flags(npy_uint32 flags, npy_uint32* itflags)
}
/* Check for an index */
- if (flags&(NPY_ITER_C_INDEX | NPY_ITER_F_INDEX)) {
- if ((flags&(NPY_ITER_C_INDEX | NPY_ITER_F_INDEX)) ==
+ if (flags & (NPY_ITER_C_INDEX | NPY_ITER_F_INDEX)) {
+ if ((flags & (NPY_ITER_C_INDEX | NPY_ITER_F_INDEX)) ==
(NPY_ITER_C_INDEX | NPY_ITER_F_INDEX)) {
PyErr_SetString(PyExc_ValueError,
"Iterator flags C_INDEX and "
@@ -705,7 +706,7 @@ npyiter_check_global_flags(npy_uint32 flags, npy_uint32* itflags)
(*itflags) |= NPY_ITFLAG_HASINDEX;
}
/* Check if a multi-index was requested */
- if (flags&NPY_ITER_MULTI_INDEX) {
+ if (flags & NPY_ITER_MULTI_INDEX) {
/*
* This flag primarily disables dimension manipulations that
* would produce an incorrect multi-index.
@@ -713,8 +714,8 @@ npyiter_check_global_flags(npy_uint32 flags, npy_uint32* itflags)
(*itflags) |= NPY_ITFLAG_HASMULTIINDEX;
}
/* Check if the caller wants to handle inner iteration */
- if (flags&NPY_ITER_EXTERNAL_LOOP) {
- if ((*itflags)&(NPY_ITFLAG_HASINDEX|NPY_ITFLAG_HASMULTIINDEX)) {
+ if (flags & NPY_ITER_EXTERNAL_LOOP) {
+ if ((*itflags) & (NPY_ITFLAG_HASINDEX | NPY_ITFLAG_HASMULTIINDEX)) {
PyErr_SetString(PyExc_ValueError,
"Iterator flag EXTERNAL_LOOP cannot be used "
"if an index or multi-index is being tracked");
@@ -723,10 +724,10 @@ npyiter_check_global_flags(npy_uint32 flags, npy_uint32* itflags)
(*itflags) |= NPY_ITFLAG_EXLOOP;
}
/* Ranged */
- if (flags&NPY_ITER_RANGED) {
+ if (flags & NPY_ITER_RANGED) {
(*itflags) |= NPY_ITFLAG_RANGE;
- if ((flags&NPY_ITER_EXTERNAL_LOOP) &&
- !(flags&NPY_ITER_BUFFERED)) {
+ if ((flags & NPY_ITER_EXTERNAL_LOOP) &&
+ !(flags & NPY_ITER_BUFFERED)) {
PyErr_SetString(PyExc_ValueError,
"Iterator flag RANGED cannot be used with "
"the flag EXTERNAL_LOOP unless "
@@ -735,12 +736,12 @@ npyiter_check_global_flags(npy_uint32 flags, npy_uint32* itflags)
}
}
/* Buffering */
- if (flags&NPY_ITER_BUFFERED) {
+ if (flags & NPY_ITER_BUFFERED) {
(*itflags) |= NPY_ITFLAG_BUFFER;
- if (flags&NPY_ITER_GROWINNER) {
+ if (flags & NPY_ITER_GROWINNER) {
(*itflags) |= NPY_ITFLAG_GROWINNER;
}
- if (flags&NPY_ITER_DELAY_BUFALLOC) {
+ if (flags & NPY_ITER_DELAY_BUFALLOC) {
(*itflags) |= NPY_ITFLAG_DELAYBUF;
}
}
@@ -845,7 +846,7 @@ npyiter_calculate_ndim(int nop, PyArrayObject **op_in,
static int
npyiter_check_per_op_flags(npy_uint32 op_flags, char *op_itflags)
{
- if ((op_flags&NPY_ITER_GLOBAL_FLAGS) != 0) {
+ if ((op_flags & NPY_ITER_GLOBAL_FLAGS) != 0) {
PyErr_SetString(PyExc_ValueError,
"A global iterator flag was passed as a per-operand flag "
"to the iterator constructor");
@@ -853,9 +854,9 @@ npyiter_check_per_op_flags(npy_uint32 op_flags, char *op_itflags)
}
/* Check the read/write flags */
- if (op_flags&NPY_ITER_READONLY) {
+ if (op_flags & NPY_ITER_READONLY) {
/* The read/write flags are mutually exclusive */
- if (op_flags&(NPY_ITER_READWRITE|NPY_ITER_WRITEONLY)) {
+ if (op_flags & (NPY_ITER_READWRITE | NPY_ITER_WRITEONLY)) {
PyErr_SetString(PyExc_ValueError,
"Only one of the iterator flags READWRITE, "
"READONLY, and WRITEONLY may be "
@@ -865,9 +866,9 @@ npyiter_check_per_op_flags(npy_uint32 op_flags, char *op_itflags)
*op_itflags = NPY_OP_ITFLAG_READ;
}
- else if (op_flags&NPY_ITER_READWRITE) {
+ else if (op_flags & NPY_ITER_READWRITE) {
/* The read/write flags are mutually exclusive */
- if (op_flags&NPY_ITER_WRITEONLY) {
+ if (op_flags & NPY_ITER_WRITEONLY) {
PyErr_SetString(PyExc_ValueError,
"Only one of the iterator flags READWRITE, "
"READONLY, and WRITEONLY may be "
@@ -877,7 +878,7 @@ npyiter_check_per_op_flags(npy_uint32 op_flags, char *op_itflags)
*op_itflags = NPY_OP_ITFLAG_READ|NPY_OP_ITFLAG_WRITE;
}
- else if(op_flags&NPY_ITER_WRITEONLY) {
+ else if (op_flags & NPY_ITER_WRITEONLY) {
*op_itflags = NPY_OP_ITFLAG_WRITE;
}
else {
@@ -889,8 +890,8 @@ npyiter_check_per_op_flags(npy_uint32 op_flags, char *op_itflags)
}
/* Check the flags for temporary copies */
- if (((*op_itflags)&NPY_OP_ITFLAG_WRITE) &&
- (op_flags&(NPY_ITER_COPY|
+ if (((*op_itflags) & NPY_OP_ITFLAG_WRITE) &&
+ (op_flags & (NPY_ITER_COPY |
NPY_ITER_UPDATEIFCOPY)) == NPY_ITER_COPY) {
PyErr_SetString(PyExc_ValueError,
"If an iterator operand is writeable, must use "
@@ -899,6 +900,33 @@ npyiter_check_per_op_flags(npy_uint32 op_flags, char *op_itflags)
return 0;
}
+ /* Check the flag for write-masked operands */
+ if (op_flags & NPY_ITER_WRITEMASKED) {
+ if (!((*op_itflags) & NPY_OP_ITFLAG_WRITE)) {
+ PyErr_SetString(PyExc_ValueError,
+ "The iterator flag WRITEMASKED may only "
+ "be used with READWRITE or WRITEONLY");
+ return 0;
+ }
+ if ((op_flags & NPY_ITER_ARRAYMASK) != 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "The iterator flag WRITEMASKED may not "
+ "be used together with ARRAYMASK");
+ return 0;
+ }
+ *op_itflags |= NPY_OP_ITFLAG_WRITEMASKED;
+ }
+
+ if ((op_flags & NPY_ITER_VIRTUAL) != 0) {
+ if ((op_flags & NPY_ITER_READWRITE) == 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "The iterator flag VIRTUAL should "
+ "be used together with READWRITE");
+ return 0;
+ }
+ *op_itflags |= NPY_OP_ITFLAG_VIRTUAL;
+ }
+
return 1;
}
@@ -919,52 +947,78 @@ npyiter_prepare_one_operand(PyArrayObject **op,
{
/* NULL operands must be automatically allocated outputs */
if (*op == NULL) {
- /* ALLOCATE should be enabled */
- if (!(op_flags&NPY_ITER_ALLOCATE)) {
+ /* ALLOCATE or VIRTUAL should be enabled */
+ if ((op_flags & (NPY_ITER_ALLOCATE|NPY_ITER_VIRTUAL)) == 0) {
PyErr_SetString(PyExc_ValueError,
- "Iterator operand was NULL, but automatic allocation as an "
- "output wasn't requested");
+ "Iterator operand was NULL, but neither the "
+ "ALLOCATE nor the VIRTUAL flag was specified");
return 0;
}
- /* Writing should be enabled */
- if (!((*op_itflags)&NPY_OP_ITFLAG_WRITE)) {
- PyErr_SetString(PyExc_ValueError,
- "Automatic allocation was requested for an iterator "
- "operand, but it wasn't flagged for writing");
- return 0;
+
+ if (op_flags & NPY_ITER_ALLOCATE) {
+ /* Writing should be enabled */
+ if (!((*op_itflags) & NPY_OP_ITFLAG_WRITE)) {
+ PyErr_SetString(PyExc_ValueError,
+ "Automatic allocation was requested for an iterator "
+ "operand, but it wasn't flagged for writing");
+ return 0;
+ }
+ /*
+ * Reading should be disabled if buffering is enabled without
+ * also enabling NPY_ITER_DELAY_BUFALLOC. In all other cases,
+ * the caller may initialize the allocated operand to a value
+ * before beginning iteration.
+ */
+ if (((flags & (NPY_ITER_BUFFERED |
+ NPY_ITER_DELAY_BUFALLOC)) == NPY_ITER_BUFFERED) &&
+ ((*op_itflags) & NPY_OP_ITFLAG_READ)) {
+ PyErr_SetString(PyExc_ValueError,
+ "Automatic allocation was requested for an iterator "
+ "operand, and it was flagged as readable, but "
+ "buffering without delayed allocation was enabled");
+ return 0;
+ }
+
+ /* If a requested dtype was provided, use it, otherwise NULL */
+ Py_XINCREF(op_request_dtype);
+ *op_dtype = op_request_dtype;
}
- /*
- * Reading should be disabled if buffering is enabled without
- * also enabling NPY_ITER_DELAY_BUFALLOC. In all other cases,
- * the caller may initialize the allocated operand to a value
- * before beginning iteration.
- */
- if (((flags&(NPY_ITER_BUFFERED|
- NPY_ITER_DELAY_BUFALLOC)) == NPY_ITER_BUFFERED) &&
- ((*op_itflags)&NPY_OP_ITFLAG_READ)) {
- PyErr_SetString(PyExc_ValueError,
- "Automatic allocation was requested for an iterator "
- "operand, and it was flagged as readable, but buffering "
- " without delayed allocation was enabled");
- return 0;
+ else {
+ *op_dtype = NULL;
+ }
+
+ /* Specify bool if no dtype was requested for the mask */
+ if (op_flags & NPY_ITER_ARRAYMASK) {
+ if (*op_dtype == NULL) {
+ *op_dtype = PyArray_DescrFromType(NPY_BOOL);
+ if (*op_dtype == NULL) {
+ return 0;
+ }
+ }
}
+
*op_dataptr = NULL;
- /* If a requested dtype was provided, use it, otherwise NULL */
- Py_XINCREF(op_request_dtype);
- *op_dtype = op_request_dtype;
return 1;
}
+ /* VIRTUAL operands must be NULL */
+ if (op_flags & NPY_ITER_VIRTUAL) {
+ PyErr_SetString(PyExc_ValueError,
+ "Iterator operand flag VIRTUAL was specified, "
+ "but the operand was not NULL");
+ return 0;
+ }
+
if (PyArray_Check(*op)) {
- if (((*op_itflags)&NPY_OP_ITFLAG_WRITE) &&
+ if (((*op_itflags) & NPY_OP_ITFLAG_WRITE) &&
(!PyArray_CHKFLAGS(*op, NPY_ARRAY_WRITEABLE))) {
PyErr_SetString(PyExc_ValueError,
"Iterator operand was a non-writeable array, but was "
"flagged as writeable");
return 0;
}
- if (!(flags&NPY_ITER_ZEROSIZE_OK) && PyArray_SIZE(*op) == 0) {
+ if (!(flags & NPY_ITER_ZEROSIZE_OK) && PyArray_SIZE(*op) == 0) {
PyErr_SetString(PyExc_ValueError,
"Iteration of zero-sized operands is not enabled");
return 0;
@@ -974,7 +1028,7 @@ npyiter_prepare_one_operand(PyArrayObject **op,
*op_dtype = PyArray_DESCR(*op);
if (*op_dtype == NULL) {
PyErr_SetString(PyExc_ValueError,
- "Iterator input array object has no dtype descr");
+ "Iterator input operand has no dtype descr");
return 0;
}
Py_INCREF(*op_dtype);
@@ -982,12 +1036,12 @@ npyiter_prepare_one_operand(PyArrayObject **op,
* If references weren't specifically allowed, make sure there
* are no references in the inputs or requested dtypes.
*/
- if (!(flags&NPY_ITER_REFS_OK)) {
+ if (!(flags & NPY_ITER_REFS_OK)) {
PyArray_Descr *dt = PyArray_DESCR(*op);
- if (((dt->flags&(NPY_ITEM_REFCOUNT|
+ if (((dt->flags & (NPY_ITEM_REFCOUNT |
NPY_ITEM_IS_POINTER)) != 0) ||
(dt != *op_dtype &&
- (((*op_dtype)->flags&(NPY_ITEM_REFCOUNT|
+ (((*op_dtype)->flags & (NPY_ITEM_REFCOUNT |
NPY_ITEM_IS_POINTER))) != 0)) {
PyErr_SetString(PyExc_TypeError,
"Iterator operand or requested dtype holds "
@@ -1016,7 +1070,7 @@ npyiter_prepare_one_operand(PyArrayObject **op,
}
/* Check if the operand is in the byte order requested */
- if (op_flags&NPY_ITER_NBO) {
+ if (op_flags & NPY_ITER_NBO) {
/* Check byte order */
if (!PyArray_ISNBO((*op_dtype)->byteorder)) {
PyArray_Descr *nbo_dtype;
@@ -1033,7 +1087,7 @@ npyiter_prepare_one_operand(PyArrayObject **op,
}
}
/* Check if the operand is aligned */
- if (op_flags&NPY_ITER_ALIGNED) {
+ if (op_flags & NPY_ITER_ALIGNED) {
/* Check alignment */
if (!PyArray_ISALIGNED(*op)) {
NPY_IT_DBG_PRINT("Iterator: Setting NPY_OP_ITFLAG_CAST "
@@ -1066,9 +1120,11 @@ npyiter_prepare_operands(int nop, PyArrayObject **op_in,
PyArray_Descr **op_request_dtypes,
PyArray_Descr **op_dtype,
npy_uint32 flags,
- npy_uint32 *op_flags, char *op_itflags)
+ npy_uint32 *op_flags, char *op_itflags,
+ npy_int8 *out_maskop)
{
int iop, i;
+ npy_int8 maskop = -1;
for (iop = 0; iop < nop; ++iop) {
op[iop] = op_in[iop];
@@ -1084,6 +1140,23 @@ npyiter_prepare_operands(int nop, PyArrayObject **op_in,
return 0;
}
+ /* Extract the operand which is for masked iteration */
+ if ((op_flags[iop] & NPY_ITER_ARRAYMASK) != 0) {
+ if (maskop != -1) {
+ PyErr_SetString(PyExc_ValueError,
+ "Only one iterator operand may receive an "
+ "ARRAYMASK flag");
+ for (i = 0; i <= iop; ++i) {
+ Py_XDECREF(op[i]);
+ Py_XDECREF(op_dtype[i]);
+ }
+ return 0;
+ }
+
+ maskop = iop;
+ *out_maskop = iop;
+ }
+
/*
* Prepare the operand. This produces an op_dtype[iop] reference
* on success.
@@ -1221,7 +1294,7 @@ npyiter_check_casting(int nop, PyArrayObject **op,
if (op[iop] != NULL && !PyArray_EquivTypes(PyArray_DESCR(op[iop]),
op_dtype[iop])) {
/* Check read (op -> temp) casting */
- if ((op_itflags[iop]&NPY_OP_ITFLAG_READ) &&
+ if ((op_itflags[iop] & NPY_OP_ITFLAG_READ) &&
!PyArray_CanCastArrayTo(op[iop],
op_dtype[iop],
casting)) {
@@ -1242,7 +1315,7 @@ npyiter_check_casting(int nop, PyArrayObject **op,
return 0;
}
/* Check write (temp -> op) casting */
- if ((op_itflags[iop]&NPY_OP_ITFLAG_WRITE) &&
+ if ((op_itflags[iop] & NPY_OP_ITFLAG_WRITE) &&
!PyArray_CanCastTypeTo(op_dtype[iop],
PyArray_DESCR(op[iop]),
casting)) {
@@ -1409,25 +1482,25 @@ npyiter_fill_axisdata(NpyIter *iter, npy_uint32 flags, char *op_itflags,
if (bshape == 1) {
strides[iop] = 0;
if (idim >= ondim && !output_scalars &&
- (op_flags[iop]&NPY_ITER_NO_BROADCAST)) {
+ (op_flags[iop] & NPY_ITER_NO_BROADCAST)) {
goto operand_different_than_broadcast;
}
}
else if (idim >= ondim ||
PyArray_DIM(op_cur, ondim-idim-1) == 1) {
strides[iop] = 0;
- if (op_flags[iop]&NPY_ITER_NO_BROADCAST) {
+ if (op_flags[iop] & NPY_ITER_NO_BROADCAST) {
goto operand_different_than_broadcast;
}
/* If it's writeable, this means a reduction */
- if (op_itflags[iop]&NPY_OP_ITFLAG_WRITE) {
- if (!(flags&NPY_ITER_REDUCE_OK)) {
+ if (op_itflags[iop] & NPY_OP_ITFLAG_WRITE) {
+ if (!(flags & NPY_ITER_REDUCE_OK)) {
PyErr_SetString(PyExc_ValueError,
"output operand requires a reduction, but "
"reduction is not enabled");
return 0;
}
- if (!(op_itflags[iop]&NPY_OP_ITFLAG_READ)) {
+ if (!(op_itflags[iop] & NPY_OP_ITFLAG_READ)) {
PyErr_SetString(PyExc_ValueError,
"output operand requires a reduction, but "
"is flagged as write-only, not "
@@ -1452,18 +1525,18 @@ npyiter_fill_axisdata(NpyIter *iter, npy_uint32 flags, char *op_itflags,
}
else if (PyArray_DIM(op_cur, i) == 1) {
strides[iop] = 0;
- if (op_flags[iop]&NPY_ITER_NO_BROADCAST) {
+ if (op_flags[iop] & NPY_ITER_NO_BROADCAST) {
goto operand_different_than_broadcast;
}
/* If it's writeable, this means a reduction */
- if (op_itflags[iop]&NPY_OP_ITFLAG_WRITE) {
- if (!(flags&NPY_ITER_REDUCE_OK)) {
+ if (op_itflags[iop] & NPY_OP_ITFLAG_WRITE) {
+ if (!(flags & NPY_ITER_REDUCE_OK)) {
PyErr_SetString(PyExc_ValueError,
"output operand requires a reduction, but "
"reduction is not enabled");
return 0;
}
- if (!(op_itflags[iop]&NPY_OP_ITFLAG_READ)) {
+ if (!(op_itflags[iop] & NPY_OP_ITFLAG_READ)) {
PyErr_SetString(PyExc_ValueError,
"output operand requires a reduction, but "
"is flagged as write-only, not "
@@ -1484,14 +1557,14 @@ npyiter_fill_axisdata(NpyIter *iter, npy_uint32 flags, char *op_itflags,
else {
strides[iop] = 0;
/* If it's writeable, this means a reduction */
- if (op_itflags[iop]&NPY_OP_ITFLAG_WRITE) {
- if (!(flags&NPY_ITER_REDUCE_OK)) {
+ if (op_itflags[iop] & NPY_OP_ITFLAG_WRITE) {
+ if (!(flags & NPY_ITER_REDUCE_OK)) {
PyErr_SetString(PyExc_ValueError,
"output operand requires a reduction, but "
"reduction is not enabled");
return 0;
}
- if (!(op_itflags[iop]&NPY_OP_ITFLAG_READ)) {
+ if (!(op_itflags[iop] & NPY_OP_ITFLAG_READ)) {
PyErr_SetString(PyExc_ValueError,
"output operand requires a reduction, but "
"is flagged as write-only, not "
@@ -1644,7 +1717,7 @@ operand_different_than_broadcast: {
PyObject *errmsg, *tmp;
/* Start of error message */
- if (op_flags[iop]&NPY_ITER_READONLY) {
+ if (op_flags[iop] & NPY_ITER_READONLY) {
errmsg = PyUString_FromString("non-broadcastable operand "
"with shape ");
}
@@ -1857,14 +1930,14 @@ npyiter_compute_index_strides(NpyIter *iter, npy_uint32 flags)
* incremented.
*/
if (NIT_ITERSIZE(iter) == 1) {
- if (itflags&NPY_ITFLAG_HASINDEX) {
+ if (itflags & NPY_ITFLAG_HASINDEX) {
axisdata = NIT_AXISDATA(iter);
NAD_PTRS(axisdata)[nop] = 0;
}
return;
}
- if (flags&NPY_ITER_C_INDEX) {
+ if (flags & NPY_ITER_C_INDEX) {
sizeof_axisdata = NIT_AXISDATA_SIZEOF(itflags, ndim, nop);
axisdata = NIT_AXISDATA(iter);
indexstride = 1;
@@ -1881,7 +1954,7 @@ npyiter_compute_index_strides(NpyIter *iter, npy_uint32 flags)
indexstride *= shape;
}
}
- else if (flags&NPY_ITER_F_INDEX) {
+ else if (flags & NPY_ITER_F_INDEX) {
sizeof_axisdata = NIT_AXISDATA_SIZEOF(itflags, ndim, nop);
axisdata = NIT_INDEX_AXISDATA(NIT_AXISDATA(iter), ndim-1);
indexstride = 1;
@@ -2237,7 +2310,7 @@ npyiter_get_common_dtype(int nop, PyArrayObject **op,
for (iop = 0; iop < nop; ++iop) {
if (op_dtype[iop] != NULL &&
- (!only_inputs || (op_itflags[iop]&NPY_OP_ITFLAG_READ))) {
+ (!only_inputs || (op_itflags[iop] & NPY_OP_ITFLAG_READ))) {
/* If no dtype was requested and the op is a scalar, pass the op */
if ((op_request_dtypes == NULL ||
op_request_dtypes[iop] == NULL) &&
@@ -2380,13 +2453,13 @@ npyiter_new_temp_array(NpyIter *iter, PyTypeObject *subtype,
* reduction wasn't enabled, throw an error
*/
if (NAD_SHAPE(axisdata) != 1) {
- if (!(flags&NPY_ITER_REDUCE_OK)) {
+ if (!(flags & NPY_ITER_REDUCE_OK)) {
PyErr_SetString(PyExc_ValueError,
"output requires a reduction, but "
"reduction is not enabled");
return NULL;
}
- if (!((*op_itflags)&NPY_OP_ITFLAG_READ)) {
+ if (!((*op_itflags) & NPY_OP_ITFLAG_READ)) {
PyErr_SetString(PyExc_ValueError,
"output requires a reduction, but "
"is flagged as write-only, not read-write");
@@ -2572,7 +2645,7 @@ npyiter_allocate_arrays(NpyIter *iter,
NpyIter_BufferData *bufferdata = NULL;
PyArrayObject **op = NIT_OPERANDS(iter);
- if (itflags&NPY_ITFLAG_BUFFER) {
+ if (itflags & NPY_ITFLAG_BUFFER) {
bufferdata = NIT_BUFFERDATA(iter);
}
@@ -2585,7 +2658,7 @@ npyiter_allocate_arrays(NpyIter *iter,
int ondim = output_scalars ? 0 : ndim;
/* Check whether the subtype was disabled */
- op_subtype = (op_flags[iop]&NPY_ITER_NO_SUBTYPE) ?
+ op_subtype = (op_flags[iop] & NPY_ITER_NO_SUBTYPE) ?
&PyArray_Type : subtype;
/* Allocate the output array */
@@ -2617,9 +2690,9 @@ npyiter_allocate_arrays(NpyIter *iter,
* it's an array scalar, make a copy whether or not the
* copy flag is enabled.
*/
- else if ((op_itflags[iop]&(NPY_OP_ITFLAG_CAST|
- NPY_OP_ITFLAG_READ|
- NPY_OP_ITFLAG_WRITE)) == (NPY_OP_ITFLAG_CAST|
+ else if ((op_itflags[iop] & (NPY_OP_ITFLAG_CAST |
+ NPY_OP_ITFLAG_READ |
+ NPY_OP_ITFLAG_WRITE)) == (NPY_OP_ITFLAG_CAST |
NPY_OP_ITFLAG_READ) &&
PyArray_NDIM(op[iop]) == 0) {
PyArrayObject *temp;
@@ -2648,16 +2721,16 @@ npyiter_allocate_arrays(NpyIter *iter,
* New arrays are aligned need no cast, and in the case
* of scalars, always have stride 0 so never need buffering
*/
- op_itflags[iop] |= (NPY_OP_ITFLAG_ALIGNED|
+ op_itflags[iop] |= (NPY_OP_ITFLAG_ALIGNED |
NPY_OP_ITFLAG_BUFNEVER);
op_itflags[iop] &= ~NPY_OP_ITFLAG_CAST;
- if (itflags&NPY_ITFLAG_BUFFER) {
+ if (itflags & NPY_ITFLAG_BUFFER) {
NBF_STRIDES(bufferdata)[iop] = 0;
}
}
/* If casting is required and permitted */
- else if ((op_itflags[iop]&NPY_OP_ITFLAG_CAST) &&
- (op_flags[iop]&(NPY_ITER_COPY|NPY_ITER_UPDATEIFCOPY))) {
+ else if ((op_itflags[iop] & NPY_OP_ITFLAG_CAST) &&
+ (op_flags[iop] & (NPY_ITER_COPY | NPY_ITER_UPDATEIFCOPY))) {
PyArrayObject *temp;
int ondim = PyArray_NDIM(op[iop]);
@@ -2673,14 +2746,14 @@ npyiter_allocate_arrays(NpyIter *iter,
}
/* If the data will be read, copy it into temp */
- if (op_itflags[iop]&NPY_OP_ITFLAG_READ) {
+ if (op_itflags[iop] & NPY_OP_ITFLAG_READ) {
if (PyArray_CopyInto(temp, op[iop]) != 0) {
Py_DECREF(temp);
return 0;
}
}
/* If the data will be written to, set UPDATEIFCOPY */
- if (op_itflags[iop]&NPY_OP_ITFLAG_WRITE) {
+ if (op_itflags[iop] & NPY_OP_ITFLAG_WRITE) {
PyArray_FLAGS(temp) |= NPY_ARRAY_UPDATEIFCOPY;
PyArray_FLAGS(op[iop]) &= ~NPY_ARRAY_WRITEABLE;
Py_INCREF(op[iop]);
@@ -2706,8 +2779,8 @@ npyiter_allocate_arrays(NpyIter *iter,
* Buffering must be enabled for casting/conversion if copy
* wasn't specified.
*/
- if ((op_itflags[iop]&NPY_OP_ITFLAG_CAST) &&
- !(itflags&NPY_ITFLAG_BUFFER)) {
+ if ((op_itflags[iop] & NPY_OP_ITFLAG_CAST) &&
+ !(itflags & NPY_ITFLAG_BUFFER)) {
PyErr_SetString(PyExc_TypeError,
"Iterator operand required copying or buffering, "
"but neither copying nor buffering was enabled");
@@ -2724,7 +2797,7 @@ npyiter_allocate_arrays(NpyIter *iter,
}
/* Here we can finally check for contiguous iteration */
- if (op_flags[iop]&NPY_ITER_CONTIG) {
+ if (op_flags[iop] & NPY_ITER_CONTIG) {
NpyIter_AxisData *axisdata = NIT_AXISDATA(iter);
npy_intp stride = NAD_STRIDES(axisdata)[iop];
@@ -2732,7 +2805,7 @@ npyiter_allocate_arrays(NpyIter *iter,
NPY_IT_DBG_PRINT("Iterator: Setting NPY_OP_ITFLAG_CAST "
"because of NPY_ITER_CONTIG\n");
op_itflags[iop] |= NPY_OP_ITFLAG_CAST;
- if (!(itflags&NPY_ITFLAG_BUFFER)) {
+ if (!(itflags & NPY_ITFLAG_BUFFER)) {
PyErr_SetString(PyExc_TypeError,
"Iterator operand required buffering, "
"to be contiguous as requested, but "
@@ -2747,7 +2820,7 @@ npyiter_allocate_arrays(NpyIter *iter,
* the inner stride of this operand works for the whole
* array, we can set NPY_OP_ITFLAG_BUFNEVER.
*/
- if ((itflags&NPY_ITFLAG_BUFFER) && !(op_itflags[iop]&NPY_OP_ITFLAG_CAST)) {
+ if ((itflags & NPY_ITFLAG_BUFFER) && !(op_itflags[iop] & NPY_OP_ITFLAG_CAST)) {
NpyIter_AxisData *axisdata = NIT_AXISDATA(iter);
if (ndim == 1) {
op_itflags[iop] |= NPY_OP_ITFLAG_BUFNEVER;
@@ -2816,7 +2889,7 @@ npyiter_get_priority_subtype(int nop, PyArrayObject **op,
int iop;
for (iop = 0; iop < nop; ++iop) {
- if (op[iop] != NULL && op_itflags[iop]&NPY_OP_ITFLAG_READ) {
+ if (op[iop] != NULL && op_itflags[iop] & NPY_OP_ITFLAG_READ) {
double priority = PyArray_GetPriority((PyObject *)op[iop], 0.0);
if (priority > *subtype_priority) {
*subtype_priority = priority;
@@ -2855,18 +2928,18 @@ npyiter_allocate_transfer_functions(NpyIter *iter)
* Reduction operands may be buffered with a different stride,
* so we must pass NPY_MAX_INTP to the transfer function factory.
*/
- op_stride = (flags&NPY_OP_ITFLAG_REDUCE) ? NPY_MAX_INTP :
+ op_stride = (flags & NPY_OP_ITFLAG_REDUCE) ? NPY_MAX_INTP :
strides[iop];
/*
* If we have determined that a buffer may be needed,
* allocate the appropriate transfer functions
*/
- if (!(flags&NPY_OP_ITFLAG_BUFNEVER)) {
- if (flags&NPY_OP_ITFLAG_READ) {
+ if (!(flags & NPY_OP_ITFLAG_BUFNEVER)) {
+ if (flags & NPY_OP_ITFLAG_READ) {
int move_references = 0;
if (PyArray_GetDTypeTransferFunction(
- (flags&NPY_OP_ITFLAG_ALIGNED) != 0,
+ (flags & NPY_OP_ITFLAG_ALIGNED) != 0,
op_stride,
op_dtype[iop]->elsize,
PyArray_DESCR(op[iop]),
@@ -2883,10 +2956,10 @@ npyiter_allocate_transfer_functions(NpyIter *iter)
else {
readtransferfn[iop] = NULL;
}
- if (flags&NPY_OP_ITFLAG_WRITE) {
+ if (flags & NPY_OP_ITFLAG_WRITE) {
int move_references = 1;
if (PyArray_GetDTypeTransferFunction(
- (flags&NPY_OP_ITFLAG_ALIGNED) != 0,
+ (flags & NPY_OP_ITFLAG_ALIGNED) != 0,
op_dtype[iop]->elsize,
op_stride,
op_dtype[iop],
@@ -2908,7 +2981,7 @@ npyiter_allocate_transfer_functions(NpyIter *iter)
* src references.
*/
if (PyArray_GetDTypeTransferFunction(
- (flags&NPY_OP_ITFLAG_ALIGNED) != 0,
+ (flags & NPY_OP_ITFLAG_ALIGNED) != 0,
op_dtype[iop]->elsize, 0,
op_dtype[iop], NULL,
1,
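The per-operand flag checks added to npyiter_check_per_op_flags above enforce three constraints: WRITEMASKED requires a writeable operand, WRITEMASKED and ARRAYMASK are mutually exclusive on one operand, and VIRTUAL must be paired with READWRITE. A minimal Python sketch of that validation logic (the bit values below are illustrative, not the real NPY_ITER_* constants):

```python
# Illustrative flag bits; the actual NPY_ITER_* values in NumPy differ.
READONLY, READWRITE, WRITEONLY = 0x1, 0x2, 0x4
WRITEMASKED, ARRAYMASK, VIRTUAL = 0x8, 0x10, 0x20

def check_per_op_flags(op_flags):
    """Mirror the WRITEMASKED/VIRTUAL checks from npyiter_check_per_op_flags."""
    writable = bool(op_flags & (READWRITE | WRITEONLY))
    if op_flags & WRITEMASKED:
        if not writable:
            raise ValueError("WRITEMASKED may only be used with "
                             "READWRITE or WRITEONLY")
        if op_flags & ARRAYMASK:
            raise ValueError("WRITEMASKED may not be used together "
                             "with ARRAYMASK")
    if (op_flags & VIRTUAL) and not (op_flags & READWRITE):
        raise ValueError("VIRTUAL should be used together with READWRITE")
    return True
```

Note that the C code checks writability via the already-computed op_itflags; the sketch recomputes it from the input flags for self-containment.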
diff --git a/numpy/core/src/multiarray/nditer_impl.h b/numpy/core/src/multiarray/nditer_impl.h
index f79ac3415..e5ec487f8 100644
--- a/numpy/core/src/multiarray/nditer_impl.h
+++ b/numpy/core/src/multiarray/nditer_impl.h
@@ -116,6 +116,10 @@
#define NPY_OP_ITFLAG_ALIGNED 0x10
/* The operand is being reduced */
#define NPY_OP_ITFLAG_REDUCE 0x20
+/* The operand is for temporary use and does not have a backing array */
+#define NPY_OP_ITFLAG_VIRTUAL 0x40
+/* The operand requires masking when copying buffer -> array */
+#define NPY_OP_ITFLAG_WRITEMASKED 0x80
/*
* The data layout of the iterator is fully specified by
@@ -129,7 +133,9 @@
struct NpyIter_InternalOnly {
/* Initial fixed position data */
npy_uint32 itflags;
- npy_uint16 ndim, nop;
+ npy_uint8 ndim, nop;
+ npy_int8 maskop;
+ npy_uint8 unused_padding;
npy_intp itersize, iterstart, iterend;
/* iterindex is only used if RANGED or BUFFERED is set */
npy_intp iterindex;
@@ -188,6 +194,8 @@ typedef struct NpyIter_BD NpyIter_BufferData;
((iter)->ndim)
#define NIT_NOP(iter) \
((iter)->nop)
+#define NIT_MASKOP(iter) \
+ ((iter)->maskop)
#define NIT_ITERSIZE(iter) \
(iter->itersize)
#define NIT_ITERSTART(iter) \
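The nditer_impl.h change above shrinks `ndim` and `nop` from `npy_uint16` to `npy_uint8` so the new `maskop` field and a padding byte fit without growing the struct header. A quick check with Python's `struct` module (assuming standard C sizes, no extra alignment) shows both layouts occupy the same 8 bytes:

```python
import struct

# Old header prefix: itflags (uint32), ndim (uint16), nop (uint16).
old_size = struct.calcsize('=IHH')

# New header prefix: itflags (uint32), ndim (uint8), nop (uint8),
# maskop (int8), one byte of explicit padding.
new_size = struct.calcsize('=IBBbB')
```

This preserves the offsets of the `npy_intp` fields that follow, so the fixed-position layout the iterator macros rely on is unchanged.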
diff --git a/numpy/core/src/multiarray/nditer_pywrap.c b/numpy/core/src/multiarray/nditer_pywrap.c
index 8b2f3a0c0..1e86487f5 100644
--- a/numpy/core/src/multiarray/nditer_pywrap.c
+++ b/numpy/core/src/multiarray/nditer_pywrap.c
@@ -370,11 +370,22 @@ NpyIter_OpFlagsConverter(PyObject *op_flags_in,
flag = 0;
switch (str[0]) {
case 'a':
- if (strcmp(str, "allocate") == 0) {
- flag = NPY_ITER_ALLOCATE;
- }
- if (strcmp(str, "aligned") == 0) {
- flag = NPY_ITER_ALIGNED;
+ if (length > 2) switch (str[2]) {
+ case 'i':
+ if (strcmp(str, "aligned") == 0) {
+ flag = NPY_ITER_ALIGNED;
+ }
+ break;
+ case 'l':
+ if (strcmp(str, "allocate") == 0) {
+ flag = NPY_ITER_ALLOCATE;
+ }
+ break;
+ case 'r':
+ if (strcmp(str, "arraymask") == 0) {
+ flag = NPY_ITER_ARRAYMASK;
+ }
+ break;
}
break;
case 'c':
@@ -421,9 +432,23 @@ NpyIter_OpFlagsConverter(PyObject *op_flags_in,
flag = NPY_ITER_UPDATEIFCOPY;
}
break;
+ case 'v':
+ if (strcmp(str, "virtual") == 0) {
+ flag = NPY_ITER_VIRTUAL;
+ }
+ break;
case 'w':
- if (strcmp(str, "writeonly") == 0) {
- flag = NPY_ITER_WRITEONLY;
+ if (length > 5) switch (str[5]) {
+ case 'o':
+ if (strcmp(str, "writeonly") == 0) {
+ flag = NPY_ITER_WRITEONLY;
+ }
+ break;
+ case 'm':
+ if (strcmp(str, "writemasked") == 0) {
+ flag = NPY_ITER_WRITEMASKED;
+ }
+ break;
}
break;
}
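The converter changes above extend the character-switch dispatch (first on `str[0]`, then on a distinguishing inner character) to cover the new "arraymask", "virtual", and "writemasked" strings. In Python the same mapping is naturally a dict lookup; the flag values below are hypothetical placeholders, not the real NPY_ITER_* constants:

```python
# Hypothetical flag values for illustration only.
OP_FLAG_NAMES = {
    'aligned':     0x0001,
    'allocate':    0x0002,
    'arraymask':   0x0004,
    'virtual':     0x0008,
    'writeonly':   0x0010,
    'writemasked': 0x0020,
}

def op_flag_from_string(name):
    """Dict-based equivalent of the two-level character switch in C."""
    try:
        return OP_FLAG_NAMES[name]
    except KeyError:
        raise ValueError("Unexpected per-operand flag %r" % (name,))
```

The C version switches on individual characters before calling strcmp so that only one full string comparison runs per flag, a small constant-factor optimization over a chain of strcmp calls.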
diff --git a/numpy/core/src/multiarray/scalartypes.c.src b/numpy/core/src/multiarray/scalartypes.c.src
index 9f0da42e4..e2674ac50 100644
--- a/numpy/core/src/multiarray/scalartypes.c.src
+++ b/numpy/core/src/multiarray/scalartypes.c.src
@@ -2482,7 +2482,7 @@ static PyObject *
return arr;
}
/* 0-d array */
- robj = PyArray_ToScalar(PyArray_DATA(arr), (NPY_AO *)arr);
+ robj = PyArray_ToScalar(PyArray_DATA(arr), (PyArrayObject *)arr);
Py_DECREF(arr);
finish:
diff --git a/numpy/core/src/multiarray/shape.c b/numpy/core/src/multiarray/shape.c
index 88353fc47..ff022faf0 100644
--- a/numpy/core/src/multiarray/shape.c
+++ b/numpy/core/src/multiarray/shape.c
@@ -28,7 +28,7 @@ _fix_unknown_dimension(PyArray_Dims *newshape, intp s_original);
static int
_attempt_nocopy_reshape(PyArrayObject *self, int newnd, intp* newdims,
- intp *newstrides, int fortran);
+ intp *newstrides, int is_f_order);
static void
_putzero(char *optr, PyObject *zero, PyArray_Descr *dtype);
@@ -41,7 +41,7 @@ _putzero(char *optr, PyObject *zero, PyArray_Descr *dtype);
*/
NPY_NO_EXPORT PyObject *
PyArray_Resize(PyArrayObject *self, PyArray_Dims *newshape, int refcheck,
- NPY_ORDER fortran)
+ NPY_ORDER order)
{
intp oldsize, newsize;
int new_nd=newshape->len, k, n, elsize;
@@ -166,7 +166,7 @@ PyArray_Resize(PyArrayObject *self, PyArray_Dims *newshape, int refcheck,
/*
* Returns a new array
* with the new shape from the data
- * in the old array --- order-perspective depends on fortran argument.
+ * in the old array --- order-perspective depends on order argument.
* copy-only-if-necessary
*/
@@ -186,7 +186,7 @@ PyArray_Newshape(PyArrayObject *self, PyArray_Dims *newdims,
intp newstrides[MAX_DIMS];
int flags;
- if (order == PyArray_ANYORDER) {
+ if (order == NPY_ANYORDER) {
order = PyArray_ISFORTRAN(self);
}
/* Quick check to make sure anything actually needs to be done */
@@ -418,7 +418,7 @@ _putzero(char *optr, PyObject *zero, PyArray_Descr *dtype)
* If no copy is needed, returns 1 and fills newstrides
* with appropriate strides
*
- * The "fortran" argument describes how the array should be viewed
+ * The "is_f_order" argument describes how the array should be viewed
* during the reshape, not how it is stored in memory (that
* information is in self->strides).
*
@@ -428,7 +428,7 @@ _putzero(char *optr, PyObject *zero, PyArray_Descr *dtype)
*/
static int
_attempt_nocopy_reshape(PyArrayObject *self, int newnd, intp* newdims,
- intp *newstrides, int fortran)
+ intp *newstrides, int is_f_order)
{
int oldnd;
intp olddims[MAX_DIMS];
@@ -452,7 +452,7 @@ _attempt_nocopy_reshape(PyArrayObject *self, int newnd, intp* newdims,
fprintf(stderr, ") -> (");
for (ni=0; ni<newnd; ni++)
fprintf(stderr, "(%d,*), ", newdims[ni]);
- fprintf(stderr, "), fortran=%d)\n", fortran);
+ fprintf(stderr, "), is_f_order=%d)\n", is_f_order);
*/
@@ -490,7 +490,7 @@ _attempt_nocopy_reshape(PyArrayObject *self, int newnd, intp* newdims,
}
for (ok = oi; ok < oj - 1; ok++) {
- if (fortran) {
+ if (is_f_order) {
if (oldstrides[ok+1] != olddims[ok]*oldstrides[ok]) {
/* not contiguous enough */
return 0;
@@ -505,7 +505,7 @@ _attempt_nocopy_reshape(PyArrayObject *self, int newnd, intp* newdims,
}
}
- if (fortran) {
+ if (is_f_order) {
newstrides[ni] = oldstrides[oi];
for (nk = ni + 1; nk < nj; nk++) {
newstrides[nk] = newstrides[nk - 1]*newdims[nk - 1];
diff --git a/numpy/core/src/private/lowlevel_strided_loops.h b/numpy/core/src/private/lowlevel_strided_loops.h
index a1f183e50..b4cd79f9a 100644
--- a/numpy/core/src/private/lowlevel_strided_loops.h
+++ b/numpy/core/src/private/lowlevel_strided_loops.h
@@ -27,6 +27,20 @@ typedef void (PyArray_StridedTransferFn)(char *dst, npy_intp dst_stride,
NpyAuxData *transferdata);
/*
+ * This is for pointers to functions which behave exactly as
+ * for PyArray_StridedTransferFn, but with an additional mask controlling
+ * which values are transferred.
+ *
+ * In particular, the 'i'-th element is transferred if and only if
+ * (((mask[i*mask_stride]) & 0x01) == 0x01).
+ */
+typedef void (PyArray_MaskedStridedTransferFn)(char *dst, npy_intp dst_stride,
+ char *src, npy_intp src_stride,
+ npy_uint8 *mask, npy_intp mask_stride,
+ npy_intp N, npy_intp src_itemsize,
+ NpyAuxData *transferdata);
+
+/*
* Gives back a function pointer to a specialized function for copying
* strided memory. Returns NULL if there is a problem with the inputs.
*
@@ -174,6 +188,34 @@ PyArray_GetDTypeTransferFunction(int aligned,
int *out_needs_api);
/*
+ * This is identical to PyArray_GetDTypeTransferFunction, but
+ * returns a transfer function which also takes a mask as a parameter.
+ * Bit zero of the mask is used to determine which values to copy,
+ * data is transferred exactly when ((mask[i]) & 0x01) == 0x01.
+ *
+ * If move_references is true, values which are not copied to the
+ * destination will still have their source reference decremented.
+ *
+ * If mask_dtype is NPY_BOOL or NPY_UINT8, each full element is either
+ * transferred or not according to the mask as described above. If
+ * dst_dtype and mask_dtype are both struct dtypes, their names must
+ * match exactly, and the dtype of each leaf field in mask_dtype must
+ * be either NPY_BOOL or NPY_UINT8.
+ */
+NPY_NO_EXPORT int
+PyArray_GetMaskedDTypeTransferFunction(int aligned,
+ npy_intp src_stride,
+ npy_intp dst_stride,
+ npy_intp mask_stride,
+ PyArray_Descr *src_dtype,
+ PyArray_Descr *dst_dtype,
+ PyArray_Descr *mask_dtype,
+ int move_references,
+ PyArray_MaskedStridedTransferFn **out_stransfer,
+ NpyAuxData **out_transferdata,
+ int *out_needs_api);
+
+/*
* Casts the specified number of elements from 'src' with data type
* 'src_dtype' to 'dst' with 'dst_dtype'. See
* PyArray_GetDTypeTransferFunction for more details.
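The mask semantics documented above for PyArray_MaskedStridedTransferFn can be sketched in Python: element `i` is transferred exactly when bit zero of its mask byte is set. A minimal reference model (contiguous data, unit element stride assumed):

```python
def masked_strided_copy(dst, src, mask, mask_stride=1):
    """Copy src[i] to dst[i] when (mask[i*mask_stride] & 0x01) == 0x01."""
    for i in range(len(src)):
        if mask[i * mask_stride] & 0x01:
            dst[i] = src[i]
    return dst
```

Because only bit zero is tested, mask bytes like 0x02 leave the destination element untouched even though they are nonzero, which is why the C code masks with 0x01 rather than testing truthiness.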
diff --git a/numpy/core/src/umath/ufunc_type_resolution.c b/numpy/core/src/umath/ufunc_type_resolution.c
index 8eb1f8ddf..8fb17a441 100644
--- a/numpy/core/src/umath/ufunc_type_resolution.c
+++ b/numpy/core/src/umath/ufunc_type_resolution.c
@@ -1391,7 +1391,7 @@ unmasked_ufunc_loop_as_masked(
void *unmasked_innerloopdata;
npy_intp loopsize, subloopsize;
char *mask;
- npy_intp maskstep;
+ npy_intp mask_stride;
/* Put the aux data into local variables */
data = (_ufunc_masker_data *)innerloopdata;
@@ -1400,16 +1400,16 @@ unmasked_ufunc_loop_as_masked(
nargs = data->nargs;
loopsize = *dimensions;
mask = args[nargs];
- maskstep = steps[nargs];
+ mask_stride = steps[nargs];
/* Process the data as runs of unmasked values */
do {
/* Skip masked values */
subloopsize = 0;
- while (subloopsize < loopsize && *(npy_bool *)mask == 0) {
+ while (subloopsize < loopsize && ((*(npy_uint8 *)mask) & 0x01) == 0) {
++subloopsize;
- mask += maskstep;
+ mask += mask_stride;
}
for (iargs = 0; iargs < nargs; ++iargs) {
args[iargs] += subloopsize * steps[iargs];
@@ -1420,9 +1420,9 @@ unmasked_ufunc_loop_as_masked(
* mess with the 'args' pointer values)
*/
subloopsize = 0;
- while (subloopsize < loopsize && *(npy_bool *)mask != 0) {
+ while (subloopsize < loopsize && ((*(npy_uint8 *)mask) & 0x01) != 0) {
++subloopsize;
- mask += maskstep;
+ mask += mask_stride;
}
unmasked_innerloop(args, &subloopsize, steps, unmasked_innerloopdata);
for (iargs = 0; iargs < nargs; ++iargs) {
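The loop in unmasked_ufunc_loop_as_masked above processes the data as alternating runs: skip elements whose mask bit zero is clear, then hand a maximal run of unmasked elements to the inner loop. A Python sketch of just the run-finding part:

```python
def masked_runs(mask):
    """Yield (start, length) for each maximal run where bit zero is set."""
    i, n = 0, len(mask)
    while i < n:
        # Skip masked-off values (bit zero clear)
        while i < n and not (mask[i] & 0x01):
            i += 1
        start = i
        # Extend through consecutive unmasked values (bit zero set)
        while i < n and (mask[i] & 0x01):
            i += 1
        if i > start:
            yield (start, i - start)
```

In the C code each run is then dispatched to the wrapped unmasked inner loop with the argument pointers advanced by `subloopsize * steps[iargs]`, so the unmasked loop never sees a masked-off element.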
diff --git a/numpy/core/tests/test_api.py b/numpy/core/tests/test_api.py
index 2c9b7d4d0..7ebcb932b 100644
--- a/numpy/core/tests/test_api.py
+++ b/numpy/core/tests/test_api.py
@@ -84,5 +84,41 @@ def test_array_astype():
assert_(not (a is b))
assert_(type(b) != np.matrix)
+def test_copyto():
+ a = np.arange(6, dtype='i4').reshape(2,3)
+
+ # Simple copy
+ np.copyto(a, [[3,1,5], [6,2,1]])
+ assert_equal(a, [[3,1,5], [6,2,1]])
+
+ # Overlapping copy should work
+ np.copyto(a[:,:2], a[::-1, 1::-1])
+ assert_equal(a, [[2,6,5], [1,3,1]])
+
+ # Defaults to 'same_kind' casting
+ assert_raises(TypeError, np.copyto, a, 1.5)
+
+ # Force a copy with 'unsafe' casting, truncating 1.5 to 1
+ np.copyto(a, 1.5, casting='unsafe')
+ assert_equal(a, 1)
+
+ # Copying with a mask
+ np.copyto(a, 3, where=[True,False,True])
+ assert_equal(a, [[3,1,3],[3,1,3]])
+
+ # Casting rule still applies with a mask
+ assert_raises(TypeError, np.copyto, a, 3.5, where=[True,False,True])
+
+    # A list of integer 0's and 1's is ok too
+ np.copyto(a, 4, where=[[0,1,1], [1,0,0]])
+ assert_equal(a, [[3,4,4], [4,1,3]])
+
+ # Overlapping copy with mask should work
+ np.copyto(a[:,:2], a[::-1, 1::-1], where=[[0,1],[1,1]])
+ assert_equal(a, [[3,4,4], [4,3,3]])
+
+ # 'dst' must be an array
+ assert_raises(TypeError, np.copyto, [1,2,3], [2,3,4])
+
if __name__ == "__main__":
run_module_suite()
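The new tests exercise `np.copyto`, combining masked assignment with the `casting` rule. A brief standalone sketch of those semantics, assuming a current NumPy:

```python
import numpy as np

a = np.zeros((2, 3), dtype='i4')

# A boolean mask broadcasts against the destination;
# columns 0 and 2 receive the scalar, column 1 is untouched
np.copyto(a, 7, where=[True, False, True])

# The default 'same_kind' casting rejects float -> int
try:
    np.copyto(a, 1.5)
except TypeError:
    pass

# 'unsafe' casting permits it, truncating 1.5 to 1 everywhere
np.copyto(a, 1.5, casting='unsafe')
print(a)
```

The casting check runs before any data is written, so the rejected `copyto` leaves the destination unchanged.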
diff --git a/numpy/lib/tests/test_format.py b/numpy/lib/tests/test_format.py
index 76fc81397..213d69760 100644
--- a/numpy/lib/tests/test_format.py
+++ b/numpy/lib/tests/test_format.py
@@ -259,18 +259,18 @@ Test the header writing.
"F\x00{'descr': '>c16', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '>c16', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '>c16', 'fortran_order': False, 'shape': (3, 3)} \n"
- "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (0,)} \n"
- "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': ()} \n"
- "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (15,)} \n"
- "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (3, 5)} \n"
- "F\x00{'descr': '|O4', 'fortran_order': True, 'shape': (5, 3)} \n"
- "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (3, 3)} \n"
- "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (0,)} \n"
- "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': ()} \n"
- "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (15,)} \n"
- "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (3, 5)} \n"
- "F\x00{'descr': '|O4', 'fortran_order': True, 'shape': (5, 3)} \n"
- "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (3, 3)} \n"
+ "F\x00{'descr': 'O', 'fortran_order': False, 'shape': (0,)} \n"
+ "F\x00{'descr': 'O', 'fortran_order': False, 'shape': ()} \n"
+ "F\x00{'descr': 'O', 'fortran_order': False, 'shape': (15,)} \n"
+ "F\x00{'descr': 'O', 'fortran_order': False, 'shape': (3, 5)} \n"
+ "F\x00{'descr': 'O', 'fortran_order': True, 'shape': (5, 3)} \n"
+ "F\x00{'descr': 'O', 'fortran_order': False, 'shape': (3, 3)} \n"
+ "F\x00{'descr': 'O', 'fortran_order': False, 'shape': (0,)} \n"
+ "F\x00{'descr': 'O', 'fortran_order': False, 'shape': ()} \n"
+ "F\x00{'descr': 'O', 'fortran_order': False, 'shape': (15,)} \n"
+ "F\x00{'descr': 'O', 'fortran_order': False, 'shape': (3, 5)} \n"
+ "F\x00{'descr': 'O', 'fortran_order': True, 'shape': (5, 3)} \n"
+ "F\x00{'descr': 'O', 'fortran_order': False, 'shape': (3, 3)} \n"
"v\x00{'descr': [('x', '<i4', (2,)), ('y', '<f8', (2, 2)), ('z', '|u1')],\n 'fortran_order': False,\n 'shape': (2,)} \n"
"\x16\x02{'descr': [('x', '<i4', (2,)),\n ('Info',\n [('value', '<c16'),\n ('y2', '<f8'),\n ('Info2',\n [('name', '|S2'),\n ('value', '<c16', (2,)),\n ('y3', '<f8', (2,)),\n ('z3', '<u4', (2,))]),\n ('name', '|S2'),\n ('z2', '|b1')]),\n ('color', '|S2'),\n ('info', [('Name', '<U8'), ('Value', '<c16')]),\n ('y', '<f8', (2, 2)),\n ('z', '|u1')],\n 'fortran_order': False,\n 'shape': (2,)} \n"
"v\x00{'descr': [('x', '>i4', (2,)), ('y', '>f8', (2, 2)), ('z', '|u1')],\n 'fortran_order': False,\n 'shape': (2,)} \n"
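The fixture change above replaces `'|O4'` with `'O'` in the expected `.npy` headers: the old spelling baked the pointer size (4 bytes on a 32-bit build) into the descr string, making the fixtures platform-dependent, while the new spelling is size-free. On a current NumPy the array-interface string for the object dtype is likewise size-free (spelled `'|O'`), which this sketch shows using `numpy.lib.format.dtype_to_descr`, the helper that produces the descr stored in `.npy` headers:

```python
import numpy as np
from numpy.lib.format import dtype_to_descr

# Object arrays store pointers, so byte order and width are
# meaningless; the array-interface string carries neither
s = np.dtype(object).str
print(s)

# dtype_to_descr yields the descr written into .npy headers
d = dtype_to_descr(np.dtype(object))
print(d)
```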