Diffstat (limited to 'doc')
-rw-r--r--  doc/release/1.12.1-notes.rst                     9
-rw-r--r--  doc/release/1.13.0-notes.rst                    53
-rw-r--r--  doc/source/reference/arrays.indexing.rst         4
-rw-r--r--  doc/source/reference/arrays.ndarray.rst          8
-rw-r--r--  doc/source/reference/arrays.scalars.rst         15
-rw-r--r--  doc/source/reference/c-api.array.rst             8
-rw-r--r--  doc/source/reference/maskedarray.baseclass.rst   2
-rw-r--r--  doc/source/user/quickstart.rst                  27
8 files changed, 95 insertions, 31 deletions
diff --git a/doc/release/1.12.1-notes.rst b/doc/release/1.12.1-notes.rst
new file mode 100644
index 000000000..9e18a6fc7
--- /dev/null
+++ b/doc/release/1.12.1-notes.rst
@@ -0,0 +1,9 @@
+==========================
+NumPy 1.12.1 Release Notes
+==========================
+
+NumPy 1.12.1 supports Python 2.7 and 3.4 - 3.6 and fixes bugs and regressions
+found in NumPy 1.12.0. Wheels for Linux, Windows, and OSX can be found on PyPI.
+
+Fixes Merged
+============
diff --git a/doc/release/1.13.0-notes.rst b/doc/release/1.13.0-notes.rst
index 3d8f7734f..3c22b1f61 100644
--- a/doc/release/1.13.0-notes.rst
+++ b/doc/release/1.13.0-notes.rst
@@ -56,6 +56,26 @@ See Changes section for more detail.
* ``array == None`` and ``array != None`` do element-wise comparison.
* ``np.equal, np.not_equal``, object identity doesn't override comparison result.
+dtypes are now always true
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Previously ``bool(dtype)`` would fall back to the default Python
+implementation, which checked if ``len(dtype) > 0``. Since ``dtype`` objects
+implement ``__len__`` as the number of record fields, ``bool`` of scalar dtypes
+would evaluate to ``False``, which was unintuitive. Now ``bool(dtype) == True``
+for all dtypes.
+
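The new truthiness rule described above can be sketched as follows (a minimal example illustrating the release note, not part of the patch; assumes NumPy >= 1.13):

```python
import numpy as np

# Under the new rule, bool() of any dtype is True, even for scalar
# dtypes whose len() (number of record fields) is 0.
scalar_dtype = np.dtype(np.float64)
record_dtype = np.dtype([('x', np.int32), ('y', np.int32)])

assert len(scalar_dtype) == 0        # no record fields
assert bool(scalar_dtype) is True    # previously False via the len() fallback
assert bool(record_dtype) is True    # structured dtypes were already truthy
```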
+``__getslice__`` and ``__setslice__`` have been removed from ``ndarray``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+When subclassing ``np.ndarray`` in Python 2.7, it is no longer *necessary* to
+implement ``__*slice__`` on the derived class, as ``__*item__`` will intercept
+these calls correctly.
+
+Any code that did implement these will work exactly as before, with the
+obvious exception of any code that tries to directly call
+``ndarray.__getslice__`` (e.g. through ``super(...).__getslice__``). In
+this case, ``.__getitem__(slice(start, end))`` will act as a replacement.
+
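A small sketch of the replacement pattern mentioned above (illustrative only, not part of the patch; the subclass name is hypothetical):

```python
import numpy as np

class LoggingArray(np.ndarray):
    """Minimal ndarray subclass; slices now reach __getitem__ directly."""
    def __getitem__(self, index):
        # a[1:3] arrives here as slice(1, 3, None); no __getslice__ needed.
        return super(LoggingArray, self).__getitem__(index)

a = np.arange(5).view(LoggingArray)
assert list(a[1:3]) == [1, 2]

# Replacement for code that directly called ndarray.__getslice__(start, end):
assert list(a.__getitem__(slice(1, 3))) == [1, 2]
```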
C API
~~~~~
@@ -83,10 +103,43 @@ for instance). Note that this does not remove the need for Mingwpy; if you make
extensive use of the runtime, you will most likely run into issues_. Instead,
it should be regarded as a band-aid until Mingwpy is fully functional.
+Extensions can also be compiled using the MinGW toolset using the runtime
+library from the (moveable) WinPython 3.4 distribution, which can be useful for
+programs with a PySide1/Qt4 front-end.
+
.. _MinGW: https://sf.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/6.2.0/threads-win32/seh/
.. _issues: https://mingwpy.github.io/issues.html
+Performance improvements for ``packbits`` and ``unpackbits``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The functions ``numpy.packbits`` with boolean input and ``numpy.unpackbits`` have
+been optimized to be significantly faster for contiguous data.
+
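The pair of functions named above round-trips between boolean flags and packed bytes; a quick sketch of their behavior (illustrative, not part of the patch):

```python
import numpy as np

# packbits packs a boolean array into uint8, 8 flags per byte (big-endian
# bit order by default); unpackbits reverses the operation.
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=bool)
packed = np.packbits(bits)
unpacked = np.unpackbits(packed)

assert packed[0] == 0b10110010        # 178
assert list(unpacked) == [1, 0, 1, 1, 0, 0, 1, 0]
```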
+Fix for PPC long double floating point information
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+In previous versions of numpy, the ``finfo`` function returned invalid
+information about the `double double`_ format of the ``longdouble`` float type
+on Power PC (PPC). The invalid values resulted from the failure of the numpy
+algorithm to deal with the `variable number of digits in the significand
+<https://www.ibm.com/support/knowledgecenter/en/ssw_aix_71/com.ibm.aix.genprogc/128bit_long_double_floating-point_datatype.htm>`_
+that are a feature of PPC long doubles. This release bypasses the failing
+algorithm by using heuristics to detect the presence of the PPC double double
+format. A side effect of using these heuristics is that the ``finfo``
+function is faster than in previous releases.
+
+.. _issues: https://github.com/numpy/numpy/issues/2669
+
+.. _double double: https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format#Double-double_arithmetic
+
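The ``finfo`` behavior discussed above can be exercised as follows (illustrative only; the exact values reported are platform dependent, so only self-consistency is checked):

```python
import numpy as np

# finfo probes the machine parameters of a float type. longdouble may be
# 80-bit extended (x86), double-double (PPC), or plain IEEE double,
# depending on the platform.
info = np.finfo(np.longdouble)

assert info.eps > 0
# longdouble carries at least as many significand bits as float64.
assert info.nmant >= np.finfo(np.float64).nmant
```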
+Support for returning arrays of arbitrary dimensionality in `apply_along_axis`
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Previously, only scalars or 1D arrays could be returned by the function passed
+to `apply_along_axis`. Now, it can return an array of any dimensionality
+(including 0D), and the shape of this array replaces the axis of the array
+being iterated over.
+
+
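The shape-replacement rule described above can be sketched like this (an illustrative example, not part of the patch; assumes NumPy >= 1.13):

```python
import numpy as np

# Each length-3 row is mapped to a 3x3 matrix by np.diag; the (3, 3)
# result shape replaces the iterated axis, giving (2, 3, 3) overall.
a = np.arange(6).reshape(2, 3)
result = np.apply_along_axis(np.diag, -1, a)

assert result.shape == (2, 3, 3)
```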
Changes
=======
diff --git a/doc/source/reference/arrays.indexing.rst b/doc/source/reference/arrays.indexing.rst
index b7bc3a655..6a5f428da 100644
--- a/doc/source/reference/arrays.indexing.rst
+++ b/doc/source/reference/arrays.indexing.rst
@@ -36,8 +36,8 @@ objects, the :const:`Ellipsis` object, or the :const:`newaxis` object,
but not for integer arrays or other embedded sequences.
.. index::
- triple: ndarray; special methods; getslice
- triple: ndarray; special methods; setslice
+ triple: ndarray; special methods; getitem
+ triple: ndarray; special methods; setitem
single: ellipsis
single: newaxis
diff --git a/doc/source/reference/arrays.ndarray.rst b/doc/source/reference/arrays.ndarray.rst
index 14d35271e..4c8bbf66d 100644
--- a/doc/source/reference/arrays.ndarray.rst
+++ b/doc/source/reference/arrays.ndarray.rst
@@ -119,12 +119,12 @@ strided scheme, and correspond to memory that can be *addressed* by the strides:
.. math::
- s_k^{\mathrm{column}} = \prod_{j=0}^{k-1} d_j ,
- \quad s_k^{\mathrm{row}} = \prod_{j=k+1}^{N-1} d_j .
+ s_k^{\mathrm{column}} = \mathrm{itemsize} \prod_{j=0}^{k-1} d_j ,
+ \quad s_k^{\mathrm{row}} = \mathrm{itemsize} \prod_{j=k+1}^{N-1} d_j .
.. index:: single-segment, contiguous, non-contiguous
-where :math:`d_j` `= self.itemsize * self.shape[j]`.
+where :math:`d_j` `= self.shape[j]`.
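The corrected stride formulas can be checked numerically (an illustrative sketch, not part of the patch):

```python
import numpy as np

# For a C-ordered array, stride s_k is itemsize times the product of the
# dimensions *after* axis k; for Fortran order, the dimensions *before* it.
a = np.zeros((2, 3, 4), dtype=np.float64)   # itemsize = 8 bytes
assert a.itemsize == 8
assert a.strides == (3 * 4 * 8, 4 * 8, 8)   # (96, 32, 8)

f = np.zeros((2, 3, 4), dtype=np.float64, order='F')
assert f.strides == (8, 2 * 8, 2 * 3 * 8)   # (8, 16, 48)
```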
Both the C and Fortran orders are :term:`contiguous`, *i.e.,*
:term:`single-segment`, memory layouts, in which every part of the
@@ -595,8 +595,6 @@ Container customization: (see :ref:`Indexing <arrays.indexing>`)
ndarray.__len__
ndarray.__getitem__
ndarray.__setitem__
- ndarray.__getslice__
- ndarray.__setslice__
ndarray.__contains__
Conversion; the operations :func:`complex()`, :func:`int()`,
diff --git a/doc/source/reference/arrays.scalars.rst b/doc/source/reference/arrays.scalars.rst
index 4acaf1b3b..f76087ce2 100644
--- a/doc/source/reference/arrays.scalars.rst
+++ b/doc/source/reference/arrays.scalars.rst
@@ -248,7 +248,8 @@ Indexing
Array scalars can be indexed like 0-dimensional arrays: if *x* is an
array scalar,
-- ``x[()]`` returns a 0-dimensional :class:`ndarray`
+- ``x[()]`` returns a copy of array scalar
+- ``x[...]`` returns a 0-dimensional :class:`ndarray`
- ``x['field-name']`` returns the array scalar in the field *field-name*.
(*x* can have fields, for example, when it corresponds to a structured data type.)
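The two indexing forms contrasted above can be demonstrated briefly (illustrative only; assumes NumPy >= 1.13 semantics for array-scalar indexing):

```python
import numpy as np

x = np.float64(1.5)                    # an array scalar

assert np.isscalar(x[()])              # x[()] -> a copy of the array scalar
assert isinstance(x[...], np.ndarray)  # x[...] -> 0-dimensional ndarray
assert x[...].ndim == 0
```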
@@ -282,10 +283,10 @@ Defining new types
==================
There are two ways to effectively define a new array scalar type
-(apart from composing structured types :ref:`dtypes <arrays.dtypes>` from
-the built-in scalar types): One way is to simply subclass the
-:class:`ndarray` and overwrite the methods of interest. This will work to
-a degree, but internally certain behaviors are fixed by the data type of
-the array. To fully customize the data type of an array you need to
-define a new data-type, and register it with NumPy. Such new types can only
+(apart from composing structured types :ref:`dtypes <arrays.dtypes>` from
+the built-in scalar types): One way is to simply subclass the
+:class:`ndarray` and overwrite the methods of interest. This will work to
+a degree, but internally certain behaviors are fixed by the data type of
+the array. To fully customize the data type of an array you need to
+define a new data-type, and register it with NumPy. Such new types can only
be defined in C, using the :ref:`NumPy C-API <c-api>`.
diff --git a/doc/source/reference/c-api.array.rst b/doc/source/reference/c-api.array.rst
index 3574282a4..2a7bb3a32 100644
--- a/doc/source/reference/c-api.array.rst
+++ b/doc/source/reference/c-api.array.rst
@@ -1686,12 +1686,12 @@ Shape Manipulation
different total number of elements then the old shape. If
reallocation is necessary, then *self* must own its data, have
*self* - ``>base==NULL``, have *self* - ``>weakrefs==NULL``, and
- (unless refcheck is 0) not be referenced by any other array. A
- reference to the new array is returned. The fortran argument can
- be :c:data:`NPY_ANYORDER`, :c:data:`NPY_CORDER`, or
- :c:data:`NPY_FORTRANORDER`. It currently has no effect. Eventually
+ (unless refcheck is 0) not be referenced by any other array.
+ The fortran argument can be :c:data:`NPY_ANYORDER`, :c:data:`NPY_CORDER`,
+ or :c:data:`NPY_FORTRANORDER`. It currently has no effect. Eventually
it could be used to determine how the resize operation should view
the data when constructing a differently-dimensioned array.
+ Returns None on success and NULL on error.
.. c:function:: PyObject* PyArray_Transpose(PyArrayObject* self, PyArray_Dims* permute)
diff --git a/doc/source/reference/maskedarray.baseclass.rst b/doc/source/reference/maskedarray.baseclass.rst
index a1c90a45d..f35b0ea88 100644
--- a/doc/source/reference/maskedarray.baseclass.rst
+++ b/doc/source/reference/maskedarray.baseclass.rst
@@ -417,8 +417,6 @@ Container customization: (see :ref:`Indexing <arrays.indexing>`)
MaskedArray.__getitem__
MaskedArray.__setitem__
MaskedArray.__delitem__
- MaskedArray.__getslice__
- MaskedArray.__setslice__
MaskedArray.__contains__
diff --git a/doc/source/user/quickstart.rst b/doc/source/user/quickstart.rst
index 65840c724..f69eb3ace 100644
--- a/doc/source/user/quickstart.rst
+++ b/doc/source/user/quickstart.rst
@@ -713,27 +713,32 @@ Several arrays can be stacked together along different axes::
The function `column_stack`
stacks 1D arrays as columns into a 2D array. It is equivalent to
-`vstack` only for 1D arrays::
+`hstack` only for 2D arrays::
>>> from numpy import newaxis
- >>> np.column_stack((a,b)) # With 2D arrays
+ >>> np.column_stack((a,b)) # with 2D arrays
array([[ 8., 8., 1., 8.],
[ 0., 0., 0., 4.]])
>>> a = np.array([4.,2.])
- >>> b = np.array([2.,8.])
- >>> a[:,newaxis] # This allows to have a 2D columns vector
+ >>> b = np.array([3.,8.])
+ >>> np.column_stack((a,b)) # returns a 2D array
+ array([[ 4., 3.],
+ [ 2., 8.]])
+ >>> np.hstack((a,b)) # the result is different
+ array([ 4., 2., 3., 8.])
+ >>> a[:,newaxis] # this allows us to have a 2D column vector
array([[ 4.],
[ 2.]])
>>> np.column_stack((a[:,newaxis],b[:,newaxis]))
- array([[ 4., 2.],
+ array([[ 4., 3.],
+ [ 2., 8.]])
+ >>> np.hstack((a[:,newaxis],b[:,newaxis])) # the result is the same
+ array([[ 4., 3.],
[ 2., 8.]])
- >>> np.vstack((a[:,newaxis],b[:,newaxis])) # The behavior of vstack is different
- array([[ 4.],
- [ 2.],
- [ 2.],
- [ 8.]])
-For arrays of with more than two dimensions,
+On the other hand, the function `row_stack` is equivalent to `vstack`
+for any input arrays.
+In general, for arrays with more than two dimensions,
`hstack` stacks along their second
axes, `vstack` stacks along their
first axes, and `concatenate`