-rw-r--r--  .lgtm.yml                                    |   9
-rw-r--r--  doc/release/1.13.0-notes.rst                 |   6
-rw-r--r--  doc/release/1.13.1-notes.rst                 |   4
-rw-r--r--  doc/release/1.14.0-notes.rst                 |   8
-rw-r--r--  doc/release/1.14.1-notes.rst                 |   2
-rw-r--r--  doc/release/1.15.0-notes.rst                 |   9
-rw-r--r--  doc/release/1.16.0-notes.rst                 |  11
-rw-r--r--  doc/source/reference/c-api.ufunc.rst         |  75
-rw-r--r--  doc/source/user/c-info.beyond-basics.rst     |  32
-rw-r--r--  doc/source/user/c-info.ufunc-tutorial.rst    | 131
-rw-r--r--  numpy/core/_add_newdocs.py                   |   9
-rw-r--r--  numpy/core/_internal.py                      |   4
-rw-r--r--  numpy/core/records.py                        |   4
-rw-r--r--  numpy/core/src/multiarray/buffer.c           |  21
-rw-r--r--  numpy/core/tests/test_multiarray.py          | 115
-rw-r--r--  numpy/core/tests/test_scalarbuffer.py        |   8
-rw-r--r--  numpy/lib/_datasource.py                     |   8
-rw-r--r--  numpy/lib/nanfunctions.py                    |  16
-rw-r--r--  numpy/lib/polynomial.py                      |   3
-rw-r--r--  numpy/lib/tests/test__datasource.py          |  23
-rw-r--r--  numpy/polynomial/_polybase.py                |   2
-rw-r--r--  numpy/testing/_private/nosetester.py         |  20
-rwxr-xr-x  tools/travis-test.sh                         |  24
23 files changed, 264 insertions, 280 deletions
diff --git a/.lgtm.yml b/.lgtm.yml
new file mode 100644
index 000000000..c0a9cf59a
--- /dev/null
+++ b/.lgtm.yml
@@ -0,0 +1,9 @@
+path_classifiers:
+ library:
+ - tools
+ generated:
+ # The exports defined in __init__.py are defined in the Cython module
+ # np.random.mtrand. By excluding this file we suppress a number of
+ # "undefined export" alerts
+ - numpy/random/__init__.py
+
diff --git a/doc/release/1.13.0-notes.rst b/doc/release/1.13.0-notes.rst
index 4554e53ea..3b719db09 100644
--- a/doc/release/1.13.0-notes.rst
+++ b/doc/release/1.13.0-notes.rst
@@ -183,11 +183,11 @@ override the behavior of NumPy's ufuncs. This works quite similarly to Python's
``__mul__`` and other binary operation routines. See the documentation for a
more detailed description of the implementation and behavior of this new
option. The API is provisional, we do not yet guarantee backward compatibility
-as modifications may be made pending feedback. See the NEP_ and
+as modifications may be made pending feedback. See `NEP 13`_ and
documentation_ for more details.
-.. _NEP: https://github.com/numpy/numpy/blob/master/doc/neps/ufunc-overrides.rst
-.. _documentation: https://github.com/charris/numpy/blob/master/doc/source/reference/arrays.classes.rst
+.. _`NEP 13`: http://www.numpy.org/neps/nep-0013-ufunc-overrides.html
+.. _documentation: https://github.com/numpy/numpy/blob/master/doc/source/reference/arrays.classes.rst
New ``positive`` ufunc
----------------------
diff --git a/doc/release/1.13.1-notes.rst b/doc/release/1.13.1-notes.rst
index 807296a85..88a4bc3dd 100644
--- a/doc/release/1.13.1-notes.rst
+++ b/doc/release/1.13.1-notes.rst
@@ -13,7 +13,7 @@ used with 3.6.0 due to Python bug 29943_. NumPy 1.13.2 will be released shortly
after Python 3.6.2 is out to fix that problem. If you are using 3.6.0 the
workaround is to upgrade to 3.6.1 or use an earlier Python version.
-.. _#29943: https://bugs.python.org/issue29943
+.. _29943: https://bugs.python.org/issue29943
Pull requests merged
@@ -21,7 +21,7 @@ Pull requests merged
A total of 19 pull requests were merged for this release.
* #9240 DOC: BLD: fix lots of Sphinx warnings/errors.
-* #9255 Revert "DEP: Raise TypeError for subtract(bool_, bool_)."
+* #9255 Revert "DEP: Raise TypeError for subtract(bool, bool)."
* #9261 BUG: don't elide into readonly and updateifcopy temporaries for...
* #9262 BUG: fix missing keyword rename for common block in numpy.f2py
* #9263 BUG: handle resize of 0d array
diff --git a/doc/release/1.14.0-notes.rst b/doc/release/1.14.0-notes.rst
index 0f14f7703..462631de6 100644
--- a/doc/release/1.14.0-notes.rst
+++ b/doc/release/1.14.0-notes.rst
@@ -14,11 +14,11 @@ dropping Python 2.7 support in the runup to 2020. The decision has been made to
support 2.7 for all releases made in 2018, with the last release being
designated a long term release with support for bug fixes extending through
2019. In 2019 support for 2.7 will be dropped in all new releases. More details
-can be found in the relevant NEP_.
+can be found in `NEP 14`_.
This release supports Python 2.7 and 3.4 - 3.6.
-.. _NEP: https://github.com/numpy/numpy/blob/master/doc/neps/dropping-python2.7-proposal.rst
+.. _`NEP 14`: http://www.numpy.org/neps/nep-0014-dropping-python2.7-proposal.html
Highlights
@@ -134,8 +134,8 @@ are marked readonly. In the past, it was possible to get away with::
var_arr = np.asarray(val)
val_arr += 1 # now errors, previously changed np.ma.masked.data
-``np.ma`` functions producing ``fill_value``s have changed
-----------------------------------------------------------
+``np.ma`` functions producing ``fill_value`` s have changed
+-----------------------------------------------------------
Previously, ``np.ma.default_fill_value`` would return a 0d array, but
``np.ma.minimum_fill_value`` and ``np.ma.maximum_fill_value`` would return a
tuple of the fields. Instead, all three methods return a structured ``np.void``
diff --git a/doc/release/1.14.1-notes.rst b/doc/release/1.14.1-notes.rst
index 2ed4c3e14..7b95c2e28 100644
--- a/doc/release/1.14.1-notes.rst
+++ b/doc/release/1.14.1-notes.rst
@@ -67,7 +67,7 @@ A total of 36 pull requests were merged for this release.
* `#10431 <https://github.com/numpy/numpy/pull/10431>`__: REL: Add 1.14.1 release notes template
* `#10435 <https://github.com/numpy/numpy/pull/10435>`__: MAINT: Use ValueError for duplicate field names in lookup (backport)
* `#10534 <https://github.com/numpy/numpy/pull/10534>`__: BUG: Provide a better error message for out-of-order fields
-* `#10536 <https://github.com/numpy/numpy/pull/10536>`__: BUG: Resize bytes_ columns in genfromtxt (backport of #10401)
+* `#10536 <https://github.com/numpy/numpy/pull/10536>`__: BUG: Resize bytes columns in genfromtxt (backport of #10401)
* `#10537 <https://github.com/numpy/numpy/pull/10537>`__: BUG: multifield-indexing adds padding bytes: revert for 1.14.1
* `#10539 <https://github.com/numpy/numpy/pull/10539>`__: BUG: fix np.save issue with python 2.7.5
* `#10540 <https://github.com/numpy/numpy/pull/10540>`__: BUG: Add missing DECREF in Py2 int() cast
diff --git a/doc/release/1.15.0-notes.rst b/doc/release/1.15.0-notes.rst
index 0e3d2a525..7235ca915 100644
--- a/doc/release/1.15.0-notes.rst
+++ b/doc/release/1.15.0-notes.rst
@@ -99,7 +99,7 @@ Deprecations
* Users of ``nditer`` should use the nditer object as a context manager
anytime one of the iterator operands is writeable, so that numpy can
manage writeback semantics, or should call ``it.close()``. A
- `RuntimeWarning` may be emitted otherwise in these cases.
+ `RuntimeWarning` may be emitted otherwise in these cases.
* The ``normed`` argument of ``np.histogram``, deprecated long ago in 1.6.0,
now emits a ``DeprecationWarning``.
@@ -227,13 +227,6 @@ Changes to ``PyArray_GetDTypeTransferFunction``
significant performance hit, consider implementing ``copyswapn`` to reflect the
implementation of ``PyArray_GetStridedCopyFn``. See `#10898
<https://github.com/numpy/numpy/pull/10898>`__.
-* Functions ``npy_get_floatstatus_barrier`` and ``npy_clear_floatstatus_barrier``
- have been added and should be used in place of the ``npy_get_floatstatus``and
- ``npy_clear_status`` functions. Optimizing compilers like GCC 8.1 and Clang
- were rearranging the order of operations when the previous functions were
- used in the ufunc SIMD functions, resulting in the floatstatus flags being '
- checked before the operation whose status we wanted to check was run.
- See `#10339 <https://github.com/numpy/numpy/issues/10370>`__.
New Features
diff --git a/doc/release/1.16.0-notes.rst b/doc/release/1.16.0-notes.rst
index a93aed7d1..d11eed2d2 100644
--- a/doc/release/1.16.0-notes.rst
+++ b/doc/release/1.16.0-notes.rst
@@ -15,7 +15,7 @@ Deprecations
============
`typeNA` and `sctypeNA` have been deprecated
--------------------------------------------
+--------------------------------------------
The type dictionaries `numpy.core.typeNA` and `numpy.core.sctypeNA` were buggy
and not documented. They will be removed in the 1.18 release. Use
@@ -47,20 +47,21 @@ Improvements
============
`np.polynomial.Polynomial` classes render in LaTeX in Jupyter notebooks
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------------------------------------------------
+
When used in a front-end that supports it, `Polynomial` instances are now
rendered through LaTeX. The current format is experimental, and is subject to
change.
``randint`` and ``choice`` now work on empty distributions
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+----------------------------------------------------------
Even when no elements needed to be drawn, ``np.random.randint`` and
``np.random.choice`` raised an error when the arguments described an empty
distribution. This has been fixed so that e.g.
``np.random.choice([], 0) == np.array([], dtype=float64)``.
``linalg.lstsq`` and ``linalg.qr`` now work with empty matrices
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------------------------------------
Previously, a ``LinAlgError`` would be raised when an empty matrix/empty
matrices (with zero rows and/or columns) is/are passed in. Now outputs of
appropriate shapes are returned.
@@ -81,7 +82,7 @@ behavior will be appending. This applied to: `LDFLAGS`, `F77FLAGS`,
details.
``np.clip`` and the ``clip`` method check for memory overlap
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------------------------------------------
The ``out`` argument to these functions is now always tested for memory overlap
to avoid corrupted results when memory overlap occurs.
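Two of the 1.16.0 changes noted above (empty distributions in ``np.random`` and empty matrices in ``np.linalg.lstsq``) are easy to verify interactively. A minimal sketch, assuming NumPy 1.16 or later; it is illustrative only and not part of this patch:

.. code-block:: python

    import numpy as np

    # Drawing zero samples from an empty distribution no longer raises.
    empty_draw = np.random.choice([], 0)
    print(empty_draw, empty_draw.dtype)      # [] float64

    # lstsq with a zero-row coefficient matrix now returns outputs of the
    # appropriate shapes instead of raising LinAlgError.
    a = np.empty((0, 2))
    b = np.empty((0,))
    x, residuals, rank, sv = np.linalg.lstsq(a, b, rcond=None)
    print(x.shape, rank)                     # one solution entry per column of ``a``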
diff --git a/doc/source/reference/c-api.ufunc.rst b/doc/source/reference/c-api.ufunc.rst
index 8c2554a9e..07c7b0c80 100644
--- a/doc/source/reference/c-api.ufunc.rst
+++ b/doc/source/reference/c-api.ufunc.rst
@@ -85,12 +85,61 @@ Functions
Must point to an array of length *ntypes* containing
:c:type:`PyUFuncGenericFunction` items. These items are pointers to
functions that actually implement the underlying
- (element-by-element) function :math:`N` times.
+ (element-by-element) function :math:`N` times with the following
+ signature:
+
+ .. c:function:: void loopfunc(
+ char** args, npy_intp* dimensions, npy_intp* steps, void* data)
+
+ *args*
+
+ An array of pointers to the actual data for the input and output
+ arrays. The input arguments are given first followed by the output
+ arguments.
+
+ *dimensions*
+
+ A pointer to the size of the dimension over which this function is
+ looping.
+
+ *steps*
+
+ A pointer to the number of bytes to jump to get to the
+ next element in this dimension for each of the input and
+ output arguments.
+
+ *data*
+
+ Arbitrary data (extra arguments, function names, *etc.* )
+ that can be stored with the ufunc and will be passed in
+ when it is called.
+
+ This is an example of a func specialized for addition of doubles
+ returning doubles.
+
+ .. code-block:: c
+
+ static void
+ double_add(char **args, npy_intp *dimensions, npy_intp *steps,
+ void *extra)
+ {
+ npy_intp i;
+ npy_intp is1 = steps[0], is2 = steps[1];
+ npy_intp os = steps[2], n = dimensions[0];
+ char *i1 = args[0], *i2 = args[1], *op = args[2];
+ for (i = 0; i < n; i++) {
+ *((double *)op) = *((double *)i1) +
+ *((double *)i2);
+ i1 += is1;
+ i2 += is2;
+ op += os;
+ }
+ }
:param data:
Should be ``NULL`` or a pointer to an array of size *ntypes*
. This array may contain arbitrary extra-data to be passed to
- the corresponding 1-d loop function in the func array.
+ the corresponding loop function in the func array.
:param types:
Length ``(nin + nout) * ntypes`` array of ``char`` encoding the
@@ -102,6 +151,9 @@ Functions
be ``(char[]) {5, 5, 0, 7, 7, 0}`` since ``NPY_INT32`` is 5,
``NPY_INT64`` is 7, and ``NPY_BOOL`` is 0.
+ The bit-width names can also be used (e.g. :c:data:`NPY_INT32`,
+ :c:data:`NPY_COMPLEX128` ) if desired.
+
:ref:`ufuncs.casting` will be used at runtime to find the first
``func`` callable by the input/output provided.
@@ -114,14 +166,19 @@ Functions
:param nout:
The number of outputs
+ :param identity:
+
+ Either :c:data:`PyUFunc_One`, :c:data:`PyUFunc_Zero`,
+ :c:data:`PyUFunc_None`. This specifies what should be returned when
+ an empty array is passed to the reduce method of the ufunc.
+
:param name:
- The name for the ufunc. Specifying a name of 'add' or
- 'multiply' enables a special behavior for integer-typed
- reductions when no dtype is given. If the input type is an
- integer (or boolean) data type smaller than the size of the
- `numpy.int_` data type, it will be internally upcast to the
- `numpy.int_` (or `numpy.uint`)
- data type.
+ The name for the ufunc as a ``NULL`` terminated string. Specifying
+ a name of 'add' or 'multiply' enables a special behavior for
+ integer-typed reductions when no dtype is given. If the input type is an
+ integer (or boolean) data type smaller than the size of the `numpy.int_`
+ data type, it will be internally upcast to the `numpy.int_` (or
+ `numpy.uint`) data type.
:param doc:
Allows passing in a documentation string to be stored with the
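The loop signature and the *types*, *identity*, and *name* parameters described above map directly onto attributes that can be inspected from Python on any existing ufunc. A short illustrative sketch using ``np.add`` (not part of this patch; the exact loop list varies between NumPy builds):

.. code-block:: python

    import numpy as np

    print(np.add.nin, np.add.nout)   # 2 1
    print(np.add.types[:3])          # one signature string per registered inner loop
    print(np.add.identity)           # 0 -- the PyUFunc_Zero identity used by reduce
    print(np.add.reduce(np.array([], dtype=np.float64)))   # 0.0, from the identity

    # The special behavior tied to the name 'add': reducing a boolean (or
    # small integer) array without an explicit dtype upcasts to np.int_.
    print(np.add.reduce(np.array([True, True, True])).dtype)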
diff --git a/doc/source/user/c-info.beyond-basics.rst b/doc/source/user/c-info.beyond-basics.rst
index aee68f6e7..d4d941a5e 100644
--- a/doc/source/user/c-info.beyond-basics.rst
+++ b/doc/source/user/c-info.beyond-basics.rst
@@ -358,38 +358,6 @@ previously created. Then you call :c:func:`PyUFunc_RegisterLoopForType`
this function is ``0`` if the process was successful and ``-1`` with
an error condition set if it was not successful.
-.. c:function:: int PyUFunc_RegisterLoopForType( \
- PyUFuncObject* ufunc, int usertype, PyUFuncGenericFunction function, \
- int* arg_types, void* data)
-
- *ufunc*
-
- The ufunc to attach this loop to.
-
- *usertype*
-
- The user-defined type this loop should be indexed under. This number
- must be a user-defined type or an error occurs.
-
- *function*
-
- The ufunc inner 1-d loop. This function must have the signature as
- explained in Section `3 <#sec-creating-a-new>`__ .
-
- *arg_types*
-
- (optional) If given, this should contain an array of integers of at
- least size ufunc.nargs containing the data-types expected by the loop
- function. The data will be copied into a NumPy-managed structure so
- the memory for this argument should be deleted after calling this
- function. If this is NULL, then it will be assumed that all data-types
- are of type usertype.
-
- *data*
-
- (optional) Specify any optional data needed by the function which will
- be passed when the function is called.
-
.. index::
pair: dtype; adding new
diff --git a/doc/source/user/c-info.ufunc-tutorial.rst b/doc/source/user/c-info.ufunc-tutorial.rst
index 788a3429f..96a73f9a6 100644
--- a/doc/source/user/c-info.ufunc-tutorial.rst
+++ b/doc/source/user/c-info.ufunc-tutorial.rst
@@ -893,9 +893,9 @@ Example NumPy ufunc with structured array dtype arguments
This example shows how to create a ufunc for a structured array dtype.
For the example we show a trivial ufunc for adding two arrays with dtype
'u8,u8,u8'. The process is a bit different from the other examples since
-a call to PyUFunc_FromFuncAndData doesn't fully register ufuncs for
+a call to :c:func:`PyUFunc_FromFuncAndData` doesn't fully register ufuncs for
custom dtypes and structured array dtypes. We need to also call
-PyUFunc_RegisterLoopForDescr to finish setting up the ufunc.
+:c:func:`PyUFunc_RegisterLoopForDescr` to finish setting up the ufunc.
We only give the C code as the setup.py file is exactly the same as
the setup.py file in `Example NumPy ufunc for one dtype`_, except that
@@ -1048,133 +1048,6 @@ The C file is given below.
#endif
}
-
-.. _`sec:PyUFunc-spec`:
-
-PyUFunc_FromFuncAndData Specification
-=====================================
-
-What follows is the full specification of PyUFunc_FromFuncAndData, which
-automatically generates a ufunc from a C function with the correct signature.
-
-.. seealso:: :c:func:`PyUFunc_FromFuncAndDataAndSignature`
-
-.. c:function:: PyObject *PyUFunc_FromFuncAndData( \
- PyUFuncGenericFunction* func, void** data, char* types, int ntypes, \
- int nin, int nout, int identity, char* name, char* doc, int unused)
-
- *func*
-
- A pointer to an array of 1-d functions to use. This array must be at
- least ntypes long. Each entry in the array must be a
- ``PyUFuncGenericFunction`` function. This function has the following
- signature. An example of a valid 1d loop function is also given.
-
- .. c:function:: void loop1d( \
- char** args, npy_intp* dimensions, npy_intp* steps, void* data)
-
- *args*
-
- An array of pointers to the actual data for the input and output
- arrays. The input arguments are given first followed by the output
- arguments.
-
- *dimensions*
-
- A pointer to the size of the dimension over which this function is
- looping.
-
- *steps*
-
- A pointer to the number of bytes to jump to get to the
- next element in this dimension for each of the input and
- output arguments.
-
- *data*
-
- Arbitrary data (extra arguments, function names, *etc.* )
- that can be stored with the ufunc and will be passed in
- when it is called.
-
- .. code-block:: c
-
- static void
- double_add(char **args, npy_intp *dimensions, npy_intp *steps,
- void *extra)
- {
- npy_intp i;
- npy_intp is1 = steps[0], is2 = steps[1];
- npy_intp os = steps[2], n = dimensions[0];
- char *i1 = args[0], *i2 = args[1], *op = args[2];
- for (i = 0; i < n; i++) {
- *((double *)op) = *((double *)i1) +
- *((double *)i2);
- i1 += is1;
- i2 += is2;
- op += os;
- }
- }
-
- *data*
-
- An array of data. There should be ntypes entries (or NULL) --- one for
- every loop function defined for this ufunc. This data will be passed
- in to the 1-d loop. One common use of this data variable is to pass in
- an actual function to call to compute the result when a generic 1-d
- loop (e.g. :c:func:`PyUFunc_d_d`) is being used.
-
- *types*
-
- An array of type-number signatures (type ``char`` ). This
- array should be of size (nin+nout)*ntypes and contain the
- data-types for the corresponding 1-d loop. The inputs should
- be first followed by the outputs. For example, suppose I have
- a ufunc that supports 1 integer and 1 double 1-d loop
- (length-2 func and data arrays) that takes 2 inputs and
- returns 1 output that is always a complex double, then the
- types array would be
-
- .. code-block:: c
-
- static char types[3] = {NPY_INT, NPY_DOUBLE, NPY_CDOUBLE}
-
- The bit-width names can also be used (e.g. :c:data:`NPY_INT32`,
- :c:data:`NPY_COMPLEX128` ) if desired.
-
- *ntypes*
-
- The number of data-types supported. This is equal to the number of 1-d
- loops provided.
-
- *nin*
-
- The number of input arguments.
-
- *nout*
-
- The number of output arguments.
-
- *identity*
-
- Either :c:data:`PyUFunc_One`, :c:data:`PyUFunc_Zero`,
- :c:data:`PyUFunc_None`. This specifies what should be returned when
- an empty array is passed to the reduce method of the ufunc.
-
- *name*
-
- A ``NULL`` -terminated string providing the name of this ufunc
- (should be the Python name it will be called).
-
- *doc*
-
- A documentation string for this ufunc (will be used in generating the
- response to ``{ufunc_name}.__doc__``). Do not include the function
- signature or the name as this is generated automatically.
-
- *unused*
-
- Unused; kept for compatibility. Just set it to zero.
-
.. index::
pair: ufunc; adding new
diff --git a/numpy/core/_add_newdocs.py b/numpy/core/_add_newdocs.py
index b65920fde..9f82a9093 100644
--- a/numpy/core/_add_newdocs.py
+++ b/numpy/core/_add_newdocs.py
@@ -1454,11 +1454,10 @@ add_newdoc('numpy.core.multiarray', 'arange',
Values are generated within the half-open interval ``[start, stop)``
(in other words, the interval including `start` but excluding `stop`).
For integer arguments the function is equivalent to the Python built-in
- `range <https://docs.python.org/library/functions.html#func-range>`_ function,
- but returns an ndarray rather than a list.
+ `range` function, but returns an ndarray rather than a list.
When using a non-integer step, such as 0.1, the results will often not
- be consistent. It is better to use ``linspace`` for these cases.
+ be consistent. It is better to use `numpy.linspace` for these cases.
Parameters
----------
@@ -7158,10 +7157,10 @@ add_newdoc('numpy.core.multiarray', 'datetime_data',
array(250, dtype='timedelta64[s]')
The result can be used to construct a datetime that uses the same units
- as a timedelta::
+ as a timedelta
>>> np.datetime64('2010', np.datetime_data(dt_25s))
- numpy.datetime64('2010-01-01T00:00:00','25s')
+ numpy.datetime64('2010-01-01T00:00:00', '25s')
""")
diff --git a/numpy/core/_internal.py b/numpy/core/_internal.py
index 1cf89aab0..ce7ef7060 100644
--- a/numpy/core/_internal.py
+++ b/numpy/core/_internal.py
@@ -9,7 +9,7 @@ from __future__ import division, absolute_import, print_function
import re
import sys
-from numpy.compat import basestring
+from numpy.compat import basestring, unicode
from .multiarray import dtype, array, ndarray
try:
import ctypes
@@ -294,7 +294,7 @@ def _newnames(datatype, order):
"""
oldnames = datatype.names
nameslist = list(oldnames)
- if isinstance(order, str):
+ if isinstance(order, (str, unicode)):
order = [order]
seen = set()
if isinstance(order, (list, tuple)):
diff --git a/numpy/core/records.py b/numpy/core/records.py
index 612d39322..a483871ba 100644
--- a/numpy/core/records.py
+++ b/numpy/core/records.py
@@ -42,7 +42,7 @@ import warnings
from . import numeric as sb
from . import numerictypes as nt
-from numpy.compat import isfileobj, bytes, long
+from numpy.compat import isfileobj, bytes, long, unicode
from .arrayprint import get_printoptions
# All of the functions allow formats to be a dtype
@@ -174,7 +174,7 @@ class format_parser(object):
if (names):
if (type(names) in [list, tuple]):
pass
- elif isinstance(names, str):
+ elif isinstance(names, (str, unicode)):
names = names.split(',')
else:
raise NameError("illegal input names %s" % repr(names))
diff --git a/numpy/core/src/multiarray/buffer.c b/numpy/core/src/multiarray/buffer.c
index 0325f3c6a..d549a3029 100644
--- a/numpy/core/src/multiarray/buffer.c
+++ b/numpy/core/src/multiarray/buffer.c
@@ -175,6 +175,14 @@ _is_natively_aligned_at(PyArray_Descr *descr,
return 1;
}
+/*
+ * Fill in str with an appropriate PEP 3118 format string, based on
+ * descr. For structured dtypes, calls itself recursively. Each call extends
+ * str at offset then updates offset, and uses descr->byteorder, (and
+ * possibly the byte order in obj) to determine the byte-order char.
+ *
+ * Returns 0 for success, -1 for failure
+ */
static int
_buffer_format_string(PyArray_Descr *descr, _tmp_string_t *str,
PyObject* obj, Py_ssize_t *offset,
@@ -195,8 +203,8 @@ _buffer_format_string(PyArray_Descr *descr, _tmp_string_t *str,
PyObject *item, *subarray_tuple;
Py_ssize_t total_count = 1;
Py_ssize_t dim_size;
+ Py_ssize_t old_offset;
char buf[128];
- int old_offset;
int ret;
if (PyTuple_Check(descr->subarray->shape)) {
@@ -230,15 +238,15 @@ _buffer_format_string(PyArray_Descr *descr, _tmp_string_t *str,
return ret;
}
else if (PyDataType_HASFIELDS(descr)) {
- int base_offset = *offset;
+ Py_ssize_t base_offset = *offset;
_append_str(str, "T{");
for (k = 0; k < PyTuple_GET_SIZE(descr->names); ++k) {
PyObject *name, *item, *offset_obj, *tmp;
PyArray_Descr *child;
char *p;
- Py_ssize_t len;
- int new_offset;
+ Py_ssize_t len, new_offset;
+ int ret;
name = PyTuple_GET_ITEM(descr->names, k);
item = PyDict_GetItem(descr->fields, name);
@@ -266,8 +274,11 @@ _buffer_format_string(PyArray_Descr *descr, _tmp_string_t *str,
}
/* Insert child item */
- _buffer_format_string(child, str, obj, offset,
+ ret = _buffer_format_string(child, str, obj, offset,
active_byteorder);
+ if (ret < 0) {
+ return -1;
+ }
/* Insert field name */
#if defined(NPY_PY3K)
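The format strings built by ``_buffer_format_string`` are what ``memoryview`` reports for NumPy arrays, so the effect of the error propagation added above can be observed from Python. A hedged sketch (the exact format string can differ between platforms and NumPy versions):

.. code-block:: python

    import numpy as np

    dt = np.dtype([('a', '<i4'), ('b', '<f8')])
    m = memoryview(np.zeros(3, dtype=dt))
    print(m.format)      # a PEP 3118 struct spec naming the fields, e.g. 'T{<i:a:<d:b:}'
    print(m.itemsize)    # 12 for this packed dtype

    # A field type with no buffer-protocol representation (e.g. datetime64) is
    # rejected; the check added above makes sure such a failure in a child call
    # propagates instead of being ignored.
    bad = np.zeros(1, dtype=[('a', int), ('b', 'M8[s]')])
    try:
        memoryview(bad)
    except (ValueError, BufferError) as exc:
        print(type(exc).__name__, exc)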
diff --git a/numpy/core/tests/test_multiarray.py b/numpy/core/tests/test_multiarray.py
index 1511f5b6b..d751657db 100644
--- a/numpy/core/tests/test_multiarray.py
+++ b/numpy/core/tests/test_multiarray.py
@@ -4748,55 +4748,72 @@ class TestRecord(object):
# Error raised when multiple fields have the same name
assert_raises(ValueError, test_assign)
- if sys.version_info[0] >= 3:
- def test_bytes_fields(self):
- # Bytes are not allowed in field names and not recognized in titles
- # on Py3
- assert_raises(TypeError, np.dtype, [(b'a', int)])
- assert_raises(TypeError, np.dtype, [(('b', b'a'), int)])
-
- dt = np.dtype([((b'a', 'b'), int)])
- assert_raises(TypeError, dt.__getitem__, b'a')
-
- x = np.array([(1,), (2,), (3,)], dtype=dt)
- assert_raises(IndexError, x.__getitem__, b'a')
-
- y = x[0]
- assert_raises(IndexError, y.__getitem__, b'a')
-
- def test_multiple_field_name_unicode(self):
- def test_assign_unicode():
- dt = np.dtype([("\u20B9", "f8"),
- ("B", "f8"),
- ("\u20B9", "f8")])
-
- # Error raised when multiple fields have the same name(unicode included)
- assert_raises(ValueError, test_assign_unicode)
-
- else:
- def test_unicode_field_titles(self):
- # Unicode field titles are added to field dict on Py2
- title = u'b'
- dt = np.dtype([((title, 'a'), int)])
- dt[title]
- dt['a']
- x = np.array([(1,), (2,), (3,)], dtype=dt)
- x[title]
- x['a']
- y = x[0]
- y[title]
- y['a']
-
- def test_unicode_field_names(self):
- # Unicode field names are converted to ascii on Python 2:
- encodable_name = u'b'
- assert_equal(np.dtype([(encodable_name, int)]).names[0], b'b')
- assert_equal(np.dtype([(('a', encodable_name), int)]).names[0], b'b')
-
- # But raises UnicodeEncodeError if it can't be encoded:
- nonencodable_name = u'\uc3bc'
- assert_raises(UnicodeEncodeError, np.dtype, [(nonencodable_name, int)])
- assert_raises(UnicodeEncodeError, np.dtype, [(('a', nonencodable_name), int)])
+ @pytest.mark.skipif(sys.version_info[0] < 3, reason="Not Python 3")
+ def test_bytes_fields(self):
+ # Bytes are not allowed in field names and not recognized in titles
+ # on Py3
+ assert_raises(TypeError, np.dtype, [(b'a', int)])
+ assert_raises(TypeError, np.dtype, [(('b', b'a'), int)])
+
+ dt = np.dtype([((b'a', 'b'), int)])
+ assert_raises(TypeError, dt.__getitem__, b'a')
+
+ x = np.array([(1,), (2,), (3,)], dtype=dt)
+ assert_raises(IndexError, x.__getitem__, b'a')
+
+ y = x[0]
+ assert_raises(IndexError, y.__getitem__, b'a')
+
+ @pytest.mark.skipif(sys.version_info[0] < 3, reason="Not Python 3")
+ def test_multiple_field_name_unicode(self):
+ def test_assign_unicode():
+ dt = np.dtype([("\u20B9", "f8"),
+ ("B", "f8"),
+ ("\u20B9", "f8")])
+
+ # Error raised when multiple fields have the same name (unicode included)
+ assert_raises(ValueError, test_assign_unicode)
+
+ @pytest.mark.skipif(sys.version_info[0] >= 3, reason="Not Python 2")
+ def test_unicode_field_titles(self):
+ # Unicode field titles are added to field dict on Py2
+ title = u'b'
+ dt = np.dtype([((title, 'a'), int)])
+ dt[title]
+ dt['a']
+ x = np.array([(1,), (2,), (3,)], dtype=dt)
+ x[title]
+ x['a']
+ y = x[0]
+ y[title]
+ y['a']
+
+ @pytest.mark.skipif(sys.version_info[0] >= 3, reason="Not Python 2")
+ def test_unicode_field_names(self):
+ # Unicode field names are converted to ascii on Python 2:
+ encodable_name = u'b'
+ assert_equal(np.dtype([(encodable_name, int)]).names[0], b'b')
+ assert_equal(np.dtype([(('a', encodable_name), int)]).names[0], b'b')
+
+ # But raises UnicodeEncodeError if it can't be encoded:
+ nonencodable_name = u'\uc3bc'
+ assert_raises(UnicodeEncodeError, np.dtype, [(nonencodable_name, int)])
+ assert_raises(UnicodeEncodeError, np.dtype, [(('a', nonencodable_name), int)])
+
+ def test_fromarrays_unicode(self):
+ # A single name string provided to fromarrays() is allowed to be unicode
+ # on both Python 2 and 3:
+ x = np.core.records.fromarrays([[0], [1]], names=u'a,b', formats=u'i4,i4')
+ assert_equal(x['a'][0], 0)
+ assert_equal(x['b'][0], 1)
+
+ def test_unicode_order(self):
+ # Test that we can sort with order as a unicode field name in both Python 2 and
+ # 3:
+ name = u'b'
+ x = np.array([1, 3, 2], dtype=[(name, int)])
+ x.sort(order=name)
+ assert_equal(x[u'b'], np.array([1, 2, 3]))
def test_field_names(self):
# Test unicode and 8-bit / byte strings can be used
diff --git a/numpy/core/tests/test_scalarbuffer.py b/numpy/core/tests/test_scalarbuffer.py
index 6d57a5014..9e75fec3e 100644
--- a/numpy/core/tests/test_scalarbuffer.py
+++ b/numpy/core/tests/test_scalarbuffer.py
@@ -5,7 +5,7 @@ import sys
import numpy as np
import pytest
-from numpy.testing import assert_, assert_equal
+from numpy.testing import assert_, assert_equal, assert_raises
# PEP3118 format strings for native (standard alignment and byteorder) types
scalars_and_codes = [
@@ -77,3 +77,9 @@ class TestScalarPEP3118(object):
mv_a = memoryview(a)
assert_equal(mv_x.itemsize, mv_a.itemsize)
assert_equal(mv_x.format, mv_a.format)
+
+ def test_invalid_buffer_format(self):
+ # datetime64 cannot be used in a buffer yet
+ dt = np.dtype([('a', int), ('b', 'M8[s]')])
+ a = np.empty(1, dt)
+ assert_raises((ValueError, BufferError), memoryview, a[0])
diff --git a/numpy/lib/_datasource.py b/numpy/lib/_datasource.py
index 6f1295f09..ab00b1444 100644
--- a/numpy/lib/_datasource.py
+++ b/numpy/lib/_datasource.py
@@ -37,6 +37,7 @@ from __future__ import division, absolute_import, print_function
import os
import sys
+import warnings
import shutil
import io
@@ -85,9 +86,10 @@ def _python2_bz2open(fn, mode, encoding, newline):
if "t" in mode:
# BZ2File is missing necessary functions for TextIOWrapper
- raise ValueError("bz2 text files not supported in python2")
- else:
- return bz2.BZ2File(fn, mode)
+ warnings.warn("Assuming latin1 encoding for bz2 text file in Python2",
+ RuntimeWarning, stacklevel=5)
+ mode = mode.replace("t", "")
+ return bz2.BZ2File(fn, mode)
def _python2_gzipopen(fn, mode, encoding, newline):
""" Wrapper to open gzip in text mode.
diff --git a/numpy/lib/nanfunctions.py b/numpy/lib/nanfunctions.py
index abd2da1a2..8d6b0f139 100644
--- a/numpy/lib/nanfunctions.py
+++ b/numpy/lib/nanfunctions.py
@@ -1178,13 +1178,15 @@ def nanquantile(a, q, axis=None, out=None, overwrite_input=False,
This optional parameter specifies the interpolation method to
use when the desired quantile lies between two data points
``i < j``:
- * linear: ``i + (j - i) * fraction``, where ``fraction``
- is the fractional part of the index surrounded by ``i``
- and ``j``.
- * lower: ``i``.
- * higher: ``j``.
- * nearest: ``i`` or ``j``, whichever is nearest.
- * midpoint: ``(i + j) / 2``.
+
+ * linear: ``i + (j - i) * fraction``, where ``fraction``
+ is the fractional part of the index surrounded by ``i``
+ and ``j``.
+ * lower: ``i``.
+ * higher: ``j``.
+ * nearest: ``i`` or ``j``, whichever is nearest.
+ * midpoint: ``(i + j) / 2``.
+
keepdims : bool, optional
If this is set to True, the axes which are reduced are left in
the result as dimensions with size one. With this option, the
diff --git a/numpy/lib/polynomial.py b/numpy/lib/polynomial.py
index 0de39877a..9f3b84732 100644
--- a/numpy/lib/polynomial.py
+++ b/numpy/lib/polynomial.py
@@ -400,8 +400,7 @@ def polyfit(x, y, deg, rcond=None, full=False, w=None, cov=False):
The `Polynomial.fit <numpy.polynomial.polynomial.Polynomial.fit>` class
method is recommended for new code as it is more stable numerically. See
- the documentation for the method for more information, or the convenience
- function `polynomial.polyfit <numpy.polynomial.polynomial.polyfit>`.
+ the documentation of the method for more information.
Parameters
----------
diff --git a/numpy/lib/tests/test__datasource.py b/numpy/lib/tests/test__datasource.py
index 32812990c..70fff3bb0 100644
--- a/numpy/lib/tests/test__datasource.py
+++ b/numpy/lib/tests/test__datasource.py
@@ -2,11 +2,14 @@ from __future__ import division, absolute_import, print_function
import os
import sys
+import pytest
from tempfile import mkdtemp, mkstemp, NamedTemporaryFile
from shutil import rmtree
-from numpy.testing import assert_, assert_equal, assert_raises, SkipTest
import numpy.lib._datasource as datasource
+from numpy.testing import (
+ assert_, assert_equal, assert_raises, assert_warns, SkipTest
+ )
if sys.version_info[0] >= 3:
import urllib.request as urllib_request
@@ -161,6 +164,24 @@ class TestDataSourceOpen(object):
fp.close()
assert_equal(magic_line, result)
+ @pytest.mark.skipif(sys.version_info[0] >= 3, reason="Python 2 only")
+ def test_Bz2File_text_mode_warning(self):
+ try:
+ import bz2
+ except ImportError:
+ # We don't have the bz2 capabilities to test.
+ raise SkipTest
+ # Test datasource's internal file_opener for BZip2 files.
+ filepath = os.path.join(self.tmpdir, 'foobar.txt.bz2')
+ fp = bz2.BZ2File(filepath, 'w')
+ fp.write(magic_line)
+ fp.close()
+ with assert_warns(RuntimeWarning):
+ fp = self.ds.open(filepath, 'rt')
+ result = fp.readline()
+ fp.close()
+ assert_equal(magic_line, result)
+
class TestDataSourceExists(object):
def setup(self):
diff --git a/numpy/polynomial/_polybase.py b/numpy/polynomial/_polybase.py
index 98f16836c..ccbf30bda 100644
--- a/numpy/polynomial/_polybase.py
+++ b/numpy/polynomial/_polybase.py
@@ -862,7 +862,7 @@ class ABCPolyBase(object):
A series that represents the least squares fit to the data and
has the domain and window specified in the call. If the
coefficients for the unscaled and unshifted basis polynomials are
- of interest, do ``new_series.convert().coef``
+ of interest, do ``new_series.convert().coef``.
[resid, rank, sv, rcond] : list
These values are only returned if `full` = True
diff --git a/numpy/testing/_private/nosetester.py b/numpy/testing/_private/nosetester.py
index c2cf58377..1728d9d1f 100644
--- a/numpy/testing/_private/nosetester.py
+++ b/numpy/testing/_private/nosetester.py
@@ -338,12 +338,14 @@ class NoseTester(object):
Identifies the tests to run. This can be a string to pass to
the nosetests executable with the '-A' option, or one of several
special values. Special values are:
+
* 'fast' - the default - which corresponds to the ``nosetests -A``
option of 'not slow'.
* 'full' - fast (as above) and slow tests as in the
'no -A' option to nosetests - this is the same as ''.
* None or '' - run all tests.
- attribute_identifier - string passed directly to nosetests as '-A'.
+ * attribute_identifier - string passed directly to nosetests as '-A'.
+
verbose : int, optional
Verbosity value for test outputs, in the range 1-10. Default is 1.
extra_argv : list, optional
@@ -352,16 +354,14 @@ class NoseTester(object):
If True, run doctests in module. Default is False.
coverage : bool, optional
If True, report coverage of NumPy code. Default is False.
- (This requires the `coverage module:
- <http://nedbatchelder.com/code/modules/coverage.html>`_).
+ (This requires the
+ `coverage module <https://nedbatchelder.com/code/modules/coverage.html>`_).
raise_warnings : None, str or sequence of warnings, optional
This specifies which warnings to configure as 'raise' instead
- of being shown once during the test execution. Valid strings are:
-
- - "develop" : equals ``(Warning,)``
- - "release" : equals ``()``, don't raise on any warnings.
+ of being shown once during the test execution. Valid strings are:
- The default is to use the class initialization value.
+ * "develop" : equals ``(Warning,)``
+ * "release" : equals ``()``, do not raise on any warnings.
timer : bool or int, optional
Timing of individual tests with ``nose-timer`` (which needs to be
installed). If True, time tests and report on all of them.
@@ -489,12 +489,14 @@ class NoseTester(object):
Identifies the benchmarks to run. This can be a string to pass to
the nosetests executable with the '-A' option, or one of several
special values. Special values are:
+
* 'fast' - the default - which corresponds to the ``nosetests -A``
option of 'not slow'.
* 'full' - fast (as above) and slow benchmarks as in the
'no -A' option to nosetests - this is the same as ''.
* None or '' - run all tests.
- attribute_identifier - string passed directly to nosetests as '-A'.
+ * attribute_identifier - string passed directly to nosetests as '-A'.
+
verbose : int, optional
Verbosity value for benchmark outputs, in the range 1-10. Default is 1.
extra_argv : list, optional
diff --git a/tools/travis-test.sh b/tools/travis-test.sh
index 11863b5fe..97b192813 100755
--- a/tools/travis-test.sh
+++ b/tools/travis-test.sh
@@ -125,6 +125,22 @@ run_test()
fi
if [ -n "$RUN_COVERAGE" ]; then
+ # move back up to the source dir because we want to execute
+ # gcov on the source files after the tests have gone through
+ # the code paths
+ cd ..
+
+ # execute gcov on source files
+ find . -name '*.gcno' -type f -exec gcov -pb {} +
+
+ # move the C line coverage report files to the same path
+ # as the Python report data
+ mv *.gcov empty
+
+ # move back to the previous path for good measure
+ # as the Python coverage data is there
+ cd empty
+
# Upload coverage files to codecov
bash <(curl -s https://codecov.io/bash) -X gcov -X coveragepy
fi
@@ -153,6 +169,14 @@ if [ -n "$USE_WHEEL" ] && [ $# -eq 0 ]; then
$PIP install -U virtualenv
# ensure some warnings are not issued
export CFLAGS=$CFLAGS" -Wno-sign-compare -Wno-unused-result"
+ # adjust gcc flags if C coverage requested
+ if [ -n "$RUN_COVERAGE" ]; then
+ export NPY_DISTUTILS_APPEND_FLAGS=1
+ export CC='gcc --coverage'
+ export F77='gfortran --coverage'
+ export F90='gfortran --coverage'
+ export LDFLAGS='--coverage'
+ fi
$PYTHON setup.py bdist_wheel
# Make another virtualenv to install into
virtualenv --python=`which $PYTHON` venv-for-wheel