Diffstat (limited to 'doc/source')
-rw-r--r--  doc/source/about.rst                                  |   3
-rw-r--r--  doc/source/conf.py                                    |  14
-rw-r--r--  doc/source/dev/gitwash/development_setup.rst          |  17
-rw-r--r--  doc/source/dev/governance/people.rst                  |  12
-rw-r--r--  doc/source/f2py/index.rst                             |  32
-rw-r--r--  doc/source/reference/arrays.datetime.rst              |   3
-rw-r--r--  doc/source/reference/c-api.array.rst                  |  44
-rw-r--r--  doc/source/reference/internals.code-explanations.rst  |   6
-rw-r--r--  doc/source/reference/maskedarray.generic.rst          |   4
-rw-r--r--  doc/source/reference/routines.linalg.rst              |   1
-rw-r--r--  doc/source/reference/routines.logic.rst               |   1
-rw-r--r--  doc/source/reference/routines.math.rst                |   1
-rw-r--r--  doc/source/reference/routines.polynomials.classes.rst |  48
-rw-r--r--  doc/source/reference/routines.testing.rst             |   2
-rw-r--r--  doc/source/reference/ufuncs.rst                       |  18
-rw-r--r--  doc/source/user/basics.io.genfromtxt.rst              |  92
-rw-r--r--  doc/source/user/building.rst                          |   4
-rw-r--r--  doc/source/user/c-info.beyond-basics.rst              |   4
-rw-r--r--  doc/source/user/c-info.ufunc-tutorial.rst             |   2
-rw-r--r--  doc/source/user/numpy-for-matlab-users.rst            |  20
-rw-r--r--  doc/source/user/quickstart.rst                        |  34
21 files changed, 190 insertions(+), 172 deletions(-)
diff --git a/doc/source/about.rst b/doc/source/about.rst
index 0f585950a..be1ced13e 100644
--- a/doc/source/about.rst
+++ b/doc/source/about.rst
@@ -40,8 +40,7 @@ Our main means of communication are:
- `Old NumPy Trac <http://projects.scipy.org/numpy>`__ (no longer used)
-More information about the development of NumPy can be found at
-http://scipy.org/Developer_Zone
+More information about the development of NumPy can be found at our `Developer Zone <https://scipy.scipy.org/scipylib/dev-zone.html>`__.
If you want to fix issues in this documentation, the easiest way
is to participate in `our ongoing documentation marathon
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 2bafc50eb..9ac729961 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -5,8 +5,8 @@ import sys, os, re
# Check Sphinx version
import sphinx
-if sphinx.__version__ < "1.0.1":
- raise RuntimeError("Sphinx 1.0.1 or newer required")
+if sphinx.__version__ < "1.2.1":
+ raise RuntimeError("Sphinx 1.2.1 or newer required")
needs_sphinx = '1.0'
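One caveat worth noting about the version check bumped above: comparing version strings with ``<`` is lexicographic, so a two-digit component sorts incorrectly ("1.10.0" compares as less than "1.2.1"). A minimal sketch of a numeric comparison — ``version_tuple`` is a hypothetical helper, not part of NumPy's conf.py:

```python
# Hedged sketch: the string comparison sphinx.__version__ < "1.2.1"
# orders lexicographically, so "1.10.0" would wrongly fail the check.
# Comparing tuples of ints orders numerically.
def version_tuple(version):
    """Parse the leading numeric components, e.g. '1.2.1b1' -> (1, 2, 1)."""
    parts = []
    for piece in version.split('.'):
        digits = ''
        for ch in piece:
            if not ch.isdigit():
                break
            digits += ch
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

# tuple comparison gets the ordering right where the string one does not
print(version_tuple("1.10.0") > version_tuple("1.2.1"))  # True
print("1.10.0" < "1.2.1")                                # True (wrong order)
```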
@@ -33,7 +33,7 @@ source_suffix = '.rst'
# General substitutions.
project = 'NumPy'
-copyright = '2008-2009, The Scipy community'
+copyright = '2008-2017, The SciPy community'
# The default replacements for |version| and |release|, also used in various
# other places throughout the built documents.
@@ -126,6 +126,8 @@ htmlhelp_basename = 'numpy'
pngmath_use_preview = True
pngmath_dvipng_args = ['-gamma', '1.5', '-D', '96', '-bg', 'Transparent']
+plot_html_show_formats = False
+plot_html_show_source_link = False
# -----------------------------------------------------------------------------
# LaTeX output
@@ -306,19 +308,19 @@ def linkcode_resolve(domain, info):
for part in fullname.split('.'):
try:
obj = getattr(obj, part)
- except:
+ except Exception:
return None
try:
fn = inspect.getsourcefile(obj)
- except:
+ except Exception:
fn = None
if not fn:
return None
try:
source, lineno = inspect.getsourcelines(obj)
- except:
+ except Exception:
lineno = None
if lineno:
diff --git a/doc/source/dev/gitwash/development_setup.rst b/doc/source/dev/gitwash/development_setup.rst
index 5623364a2..1ebd4b486 100644
--- a/doc/source/dev/gitwash/development_setup.rst
+++ b/doc/source/dev/gitwash/development_setup.rst
@@ -62,7 +62,7 @@ Overview
git clone https://github.com/your-user-name/numpy.git
cd numpy
- git remote add upstream git://github.com/numpy/numpy.git
+ git remote add upstream https://github.com/numpy/numpy.git
In detail
=========
@@ -95,21 +95,16 @@ Linking your repository to the upstream repo
::
cd numpy
- git remote add upstream git://github.com/numpy/numpy.git
+ git remote add upstream https://github.com/numpy/numpy.git
``upstream`` here is just the arbitrary name we're using to refer to the
main NumPy_ repository at `NumPy github`_.
-Note that we've used ``git://`` for the URL rather than ``https://``. The
-``git://`` URL is read only. This means we that we can't accidentally
-(or deliberately) write to the upstream repo, and we are only going to
-use it to merge into our own code.
-
Just for your own satisfaction, show yourself that you now have a new
'remote', with ``git remote -v show``, giving you something like::
- upstream git://github.com/numpy/numpy.git (fetch)
- upstream git://github.com/numpy/numpy.git (push)
+ upstream https://github.com/numpy/numpy.git (fetch)
+ upstream https://github.com/numpy/numpy.git (push)
origin https://github.com/your-user-name/numpy.git (fetch)
origin https://github.com/your-user-name/numpy.git (push)
@@ -122,7 +117,7 @@ so it pulls from ``upstream`` by default. This can be done with::
You may also want to have easy access to all pull requests sent to the
NumPy repository::
- git config --add remote.upstream.fetch '+refs/pull//head:refs/remotes/upstream/pr/'
+ git config --add remote.upstream.fetch '+refs/pull/*/head:refs/remotes/upstream/pr/*'
Your config file should now look something like (from
``$ cat .git/config``)::
@@ -138,7 +133,7 @@ Your config file should now look something like (from
url = https://github.com/your-user-name/numpy.git
fetch = +refs/heads/*:refs/remotes/origin/*
[remote "upstream"]
- url = git://github.com/numpy/numpy.git
+ url = https://github.com/numpy/numpy.git
fetch = +refs/heads/*:refs/remotes/upstream/*
fetch = +refs/pull/*/head:refs/remotes/upstream/pr/*
[branch "master"]
diff --git a/doc/source/dev/governance/people.rst b/doc/source/dev/governance/people.rst
index a0f08b57d..b22852a5a 100644
--- a/doc/source/dev/governance/people.rst
+++ b/doc/source/dev/governance/people.rst
@@ -12,8 +12,6 @@ Steering council
* Ralf Gommers
-* Alex Griffing
-
* Charles Harris
* Nathaniel Smith
@@ -22,12 +20,22 @@ Steering council
* Pauli Virtanen
+* Eric Wieser
+
+* Marten van Kerkwijk
+
+* Stephan Hoyer
+
+* Allan Haldane
+
Emeritus members
----------------
* Travis Oliphant - Project Founder / Emeritus Leader (served: 2005-2012)
+* Alex Griffing (served: 2015-2017)
+
NumFOCUS Subcommittee
---------------------
diff --git a/doc/source/f2py/index.rst b/doc/source/f2py/index.rst
index 0cebbfd16..8b7d1453a 100644
--- a/doc/source/f2py/index.rst
+++ b/doc/source/f2py/index.rst
@@ -1,31 +1,20 @@
-.. -*- rest -*-
+#####################################
+F2PY Users Guide and Reference Manual
+#####################################
-//////////////////////////////////////////////////////////////////////
- F2PY Users Guide and Reference Manual
-//////////////////////////////////////////////////////////////////////
-
-:Author: Pearu Peterson
-:Contact: pearu@cens.ioc.ee
-:Web site: http://cens.ioc.ee/projects/f2py2e/
-:Date: 2005/04/02 10:03:26
-
-================
- Introduction
-================
-
-The purpose of the F2PY_ --*Fortran to Python interface generator*--
-project is to provide a connection between Python and Fortran
-languages. F2PY is a Python_ package (with a command line tool
-``f2py`` and a module ``f2py2e``) that facilitates creating/building
-Python C/API extension modules that make it possible
+The purpose of the ``F2PY`` --*Fortran to Python interface generator*--
+is to provide a connection between Python and Fortran
+languages. F2PY is a part of NumPy_ (``numpy.f2py``) and also available as a
+standalone command line tool ``f2py`` when ``numpy`` is installed that
+facilitates creating/building Python C/API extension modules that make it
+possible
* to call Fortran 77/90/95 external subroutines and Fortran 90/95
module subroutines as well as C functions;
* to access Fortran 77 ``COMMON`` blocks and Fortran 90/95 module data,
including allocatable arrays
-from Python. See F2PY_ web site for more information and installation
-instructions.
+from Python.
.. toctree::
:maxdepth: 2
@@ -37,7 +26,6 @@ instructions.
distutils
advanced
-.. _F2PY: http://cens.ioc.ee/projects/f2py2e/
.. _Python: http://www.python.org/
.. _NumPy: http://www.numpy.org/
.. _SciPy: http://www.numpy.org/
diff --git a/doc/source/reference/arrays.datetime.rst b/doc/source/reference/arrays.datetime.rst
index 139f23f11..e64d0c17e 100644
--- a/doc/source/reference/arrays.datetime.rst
+++ b/doc/source/reference/arrays.datetime.rst
@@ -363,7 +363,8 @@ As a corollary to this change, we no longer prohibit casting between datetimes
with date units and datetimes with timeunits. With timezone naive datetimes,
the rule for casting from dates to times is no longer ambiguous.
-pandas_: http://pandas.pydata.org
+.. _pandas: http://pandas.pydata.org
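The change described above — casting between date-unit and time-unit datetimes is permitted for timezone-naive values — can be sketched as follows (a minimal illustration assuming NumPy 1.7+ behavior):

```python
import numpy as np

# a date-unit datetime: 'D' (day) resolution
d = np.datetime64('2017-03-01')

# casting from a date unit to a time unit is no longer prohibited;
# for timezone-naive datetimes the result is unambiguous
t = d.astype('datetime64[m]')   # minute resolution
print(t.dtype)                  # datetime64[m]
```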
+
Differences Between 1.6 and 1.7 Datetimes
=========================================
diff --git a/doc/source/reference/c-api.array.rst b/doc/source/reference/c-api.array.rst
index 35df42daa..90bb56b2d 100644
--- a/doc/source/reference/c-api.array.rst
+++ b/doc/source/reference/c-api.array.rst
@@ -137,7 +137,7 @@ sub-types).
.. c:function:: npy_intp PyArray_Size(PyArrayObject* obj)
- Returns 0 if *obj* is not a sub-class of bigndarray. Otherwise,
+ Returns 0 if *obj* is not a sub-class of ndarray. Otherwise,
returns the total number of elements in the array. Safer version
of :c:func:`PyArray_SIZE` (*obj*).
@@ -257,7 +257,7 @@ From scratch
PyTypeObject* subtype, int nd, npy_intp* dims, int type_num, \
npy_intp* strides, void* data, int itemsize, int flags, PyObject* obj)
- This is similar to :c:func:`PyArray_DescrNew` (...) except you
+ This is similar to :c:func:`PyArray_NewFromDescr` (...) except you
specify the data-type descriptor with *type_num* and *itemsize*,
where *type_num* corresponds to a builtin (or user-defined)
type. If the type always has the same number of bytes, then
@@ -303,7 +303,7 @@ From scratch
.. c:function:: PyArray_FILLWBYTE(PyObject* obj, int val)
Fill the array pointed to by *obj* ---which must be a (subclass
- of) bigndarray---with the contents of *val* (evaluated as a byte).
+ of) ndarray---with the contents of *val* (evaluated as a byte).
This macro calls memset, so obj must be contiguous.
.. c:function:: PyObject* PyArray_Zeros( \
@@ -433,9 +433,9 @@ From other objects
.. c:var:: NPY_ARRAY_ENSUREARRAY
- Make sure the result is a base-class ndarray or bigndarray. By
- default, if *op* is an instance of a subclass of the
- bigndarray, an instance of that same subclass is returned. If
+ Make sure the result is a base-class ndarray. By
+ default, if *op* is an instance of a subclass of
+ ndarray, an instance of that same subclass is returned. If
this flag is set, an ndarray object will be returned instead.
.. c:var:: NPY_ARRAY_FORCECAST
@@ -455,8 +455,7 @@ From other objects
is deleted (presumably after your calculations are complete),
its contents will be copied back into *op* and the *op* array
will be made writeable again. If *op* is not writeable to begin
- with, then an error is raised. If *op* is not already an array,
- then this flag has no effect.
+ with, or if it is not already an array, then an error is raised.
.. c:var:: NPY_ARRAY_BEHAVED
@@ -1483,8 +1482,7 @@ specify desired properties of the new array.
.. c:var:: NPY_ARRAY_ENSUREARRAY
- Make sure the resulting object is an actual ndarray (or bigndarray),
- and not a sub-class.
+ Make sure the resulting object is an actual ndarray, and not a sub-class.
.. c:var:: NPY_ARRAY_NOTSWAPPED
@@ -2888,10 +2886,10 @@ to.
to a C-array of :c:type:`npy_intp`. The Python object could also be a
single number. The *seq* variable is a pointer to a structure with
members ptr and len. On successful return, *seq* ->ptr contains a
- pointer to memory that must be freed to avoid a memory leak. The
- restriction on memory size allows this converter to be
- conveniently used for sequences intended to be interpreted as
- array shapes.
+ pointer to memory that must be freed, by calling :c:func:`PyDimMem_FREE`,
+ to avoid a memory leak. The restriction on memory size allows this
+ converter to be conveniently used for sequences intended to be
+ interpreted as array shapes.
.. c:function:: int PyArray_BufferConverter(PyObject* obj, PyArray_Chunk* buf)
@@ -3064,6 +3062,24 @@ the C-API is needed then some additional steps must be taken.
header file as long as you make sure that NO_IMPORT_ARRAY is
#defined before #including that file.
+ Internally, these #defines work as follows:
+
+ * If neither is defined, the C-API is declared to be
+ :c:type:`static void**`, so it is only visible within the
+ compilation unit that #includes numpy/arrayobject.h.
+ * If :c:macro:`PY_ARRAY_UNIQUE_SYMBOL` is #defined, but
+ :c:macro:`NO_IMPORT_ARRAY` is not, the C-API is declared to
+ be :c:type:`void**`, so that it will also be visible to other
+ compilation units.
+ * If :c:macro:`NO_IMPORT_ARRAY` is #defined, regardless of
+ whether :c:macro:`PY_ARRAY_UNIQUE_SYMBOL` is, the C-API is
+ declared to be :c:type:`extern void**`, so it is expected to
+ be defined in another compilation unit.
+ * Whenever :c:macro:`PY_ARRAY_UNIQUE_SYMBOL` is #defined, it
+ also changes the name of the variable holding the C-API, which
+ defaults to :c:data:`PyArray_API`, to whatever the macro is
+ #defined to.
+
Checking the API Version
^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/doc/source/reference/internals.code-explanations.rst b/doc/source/reference/internals.code-explanations.rst
index af34d716f..94e827429 100644
--- a/doc/source/reference/internals.code-explanations.rst
+++ b/doc/source/reference/internals.code-explanations.rst
@@ -105,7 +105,7 @@ which work very simply.
For the general case, the iteration works by keeping track of a list
of coordinate counters in the iterator object. At each iteration, the
last coordinate counter is increased (starting from 0). If this
-counter is smaller then one less than the size of the array in that
+counter is smaller than one less than the size of the array in that
dimension (a pre-computed and stored value), then the counter is
increased and the dataptr member is increased by the strides in that
dimension and the macro ends. If the end of a dimension is reached,
@@ -369,7 +369,7 @@ return arrays are constructed. If any provided output array doesn't
have the correct type (or is mis-aligned) and is smaller than the
buffer size, then a new output array is constructed with the special
UPDATEIFCOPY flag set so that when it is DECREF'd on completion of the
-function, it's contents will be copied back into the output array.
+function, its contents will be copied back into the output array.
Iterators for the output arguments are then processed.
Finally, the decision is made about how to execute the looping
@@ -475,7 +475,7 @@ function is called with just the ndarray as the first argument.
Methods
-------
-Their are three methods of ufuncs that require calculation similar to
+There are three methods of ufuncs that require calculation similar to
the general-purpose ufuncs. These are reduce, accumulate, and
reduceat. Each of these methods requires a setup command followed by a
loop. There are four loop styles possible for the methods
diff --git a/doc/source/reference/maskedarray.generic.rst b/doc/source/reference/maskedarray.generic.rst
index adb51416a..1fee9a74a 100644
--- a/doc/source/reference/maskedarray.generic.rst
+++ b/doc/source/reference/maskedarray.generic.rst
@@ -379,8 +379,8 @@ is masked.
When accessing a slice, the output is a masked array whose
:attr:`~MaskedArray.data` attribute is a view of the original data, and whose
mask is either :attr:`nomask` (if there was no invalid entries in the original
-array) or a copy of the corresponding slice of the original mask. The copy is
-required to avoid propagation of any modification of the mask to the original.
+array) or a view of the corresponding slice of the original mask. The view is
+required to ensure propagation of any modification of the mask to the original.
>>> x = ma.array([1, 2, 3, 4, 5], mask=[0, 1, 0, 0, 1])
>>> mx = x[:3]
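The view behavior introduced by this hunk — mask modifications made through a slice propagate back to the original array — can be sketched as (a minimal illustration of the documented behavior, continuing the example above):

```python
import numpy.ma as ma

x = ma.array([1, 2, 3, 4, 5], mask=[0, 1, 0, 0, 1])
mx = x[:3]            # the slice's mask is a view of x's mask

mx[0] = ma.masked     # masking an entry through the slice...
print(x.mask[0])      # ...is visible in the original: True
```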
diff --git a/doc/source/reference/routines.linalg.rst b/doc/source/reference/routines.linalg.rst
index 09c7d9b4e..4715f636e 100644
--- a/doc/source/reference/routines.linalg.rst
+++ b/doc/source/reference/routines.linalg.rst
@@ -18,6 +18,7 @@ Matrix and vector products
matmul
tensordot
einsum
+ einsum_path
linalg.matrix_power
kron
diff --git a/doc/source/reference/routines.logic.rst b/doc/source/reference/routines.logic.rst
index 88edde855..7fa0cd1de 100644
--- a/doc/source/reference/routines.logic.rst
+++ b/doc/source/reference/routines.logic.rst
@@ -19,6 +19,7 @@ Array contents
isfinite
isinf
isnan
+ isnat
isneginf
isposinf
diff --git a/doc/source/reference/routines.math.rst b/doc/source/reference/routines.math.rst
index a2fb06958..4c2f2800a 100644
--- a/doc/source/reference/routines.math.rst
+++ b/doc/source/reference/routines.math.rst
@@ -108,6 +108,7 @@ Arithmetic operations
add
reciprocal
+ positive
negative
multiply
divide
diff --git a/doc/source/reference/routines.polynomials.classes.rst b/doc/source/reference/routines.polynomials.classes.rst
index 0db77eb7c..f44ddd46c 100644
--- a/doc/source/reference/routines.polynomials.classes.rst
+++ b/doc/source/reference/routines.polynomials.classes.rst
@@ -52,7 +52,7 @@ the conventional Polynomial class because of its familiarity::
>>> from numpy.polynomial import Polynomial as P
>>> p = P([1,2,3])
>>> p
- Polynomial([ 1., 2., 3.], [-1., 1.], [-1., 1.])
+ Polynomial([ 1., 2., 3.], domain=[-1, 1], window=[-1, 1])
Note that there are three parts to the long version of the printout. The
first is the coefficients, the second is the domain, and the third is the
@@ -77,19 +77,19 @@ we ignore them and run through the basic algebraic and arithmetic operations.
Addition and Subtraction::
>>> p + p
- Polynomial([ 2., 4., 6.], [-1., 1.], [-1., 1.])
+ Polynomial([ 2., 4., 6.], domain=[-1, 1], window=[-1, 1])
>>> p - p
- Polynomial([ 0.], [-1., 1.], [-1., 1.])
+ Polynomial([ 0.], domain=[-1, 1], window=[-1, 1])
Multiplication::
>>> p * p
- Polynomial([ 1., 4., 10., 12., 9.], [-1., 1.], [-1., 1.])
+ Polynomial([ 1., 4., 10., 12., 9.], domain=[-1, 1], window=[-1, 1])
Powers::
>>> p**2
- Polynomial([ 1., 4., 10., 12., 9.], [-1., 1.], [-1., 1.])
+ Polynomial([ 1., 4., 10., 12., 9.], domain=[-1, 1], window=[-1, 1])
Division:
@@ -100,20 +100,20 @@ versions the '/' will only work for division by scalars. At some point it
will be deprecated::
>>> p // P([-1, 1])
- Polynomial([ 5., 3.], [-1., 1.], [-1., 1.])
+ Polynomial([ 5., 3.], domain=[-1, 1], window=[-1, 1])
Remainder::
>>> p % P([-1, 1])
- Polynomial([ 6.], [-1., 1.], [-1., 1.])
+ Polynomial([ 6.], domain=[-1, 1], window=[-1, 1])
Divmod::
>>> quo, rem = divmod(p, P([-1, 1]))
>>> quo
- Polynomial([ 5., 3.], [-1., 1.], [-1., 1.])
+ Polynomial([ 5., 3.], domain=[-1, 1], window=[-1, 1])
>>> rem
- Polynomial([ 6.], [-1., 1.], [-1., 1.])
+ Polynomial([ 6.], domain=[-1, 1], window=[-1, 1])
Evaluation::
@@ -134,7 +134,7 @@ the polynomials are regarded as functions this is composition of
functions::
>>> p(p)
- Polynomial([ 6., 16., 36., 36., 27.], [-1., 1.], [-1., 1.])
+ Polynomial([ 6., 16., 36., 36., 27.], domain=[-1, 1], window=[-1, 1])
Roots::
@@ -148,11 +148,11 @@ tuples, lists, arrays, and scalars are automatically cast in the arithmetic
operations::
>>> p + [1, 2, 3]
- Polynomial([ 2., 4., 6.], [-1., 1.], [-1., 1.])
+ Polynomial([ 2., 4., 6.], domain=[-1, 1], window=[-1, 1])
>>> [1, 2, 3] * p
- Polynomial([ 1., 4., 10., 12., 9.], [-1., 1.], [-1., 1.])
+ Polynomial([ 1., 4., 10., 12., 9.], domain=[-1, 1], window=[-1, 1])
>>> p / 2
- Polynomial([ 0.5, 1. , 1.5], [-1., 1.], [-1., 1.])
+ Polynomial([ 0.5, 1. , 1.5], domain=[-1, 1], window=[-1, 1])
Polynomials that differ in domain, window, or class can't be mixed in
arithmetic::
@@ -180,7 +180,7 @@ conversion of Polynomial classes among themselves is done for type, domain,
and window casting::
>>> p(T([0, 1]))
- Chebyshev([ 2.5, 2. , 1.5], [-1., 1.], [-1., 1.])
+ Chebyshev([ 2.5, 2. , 1.5], domain=[-1, 1], window=[-1, 1])
Which gives the polynomial `p` in Chebyshev form. This works because
:math:`T_1(x) = x` and substituting :math:`x` for :math:`x` doesn't change
@@ -195,18 +195,18 @@ Polynomial instances can be integrated and differentiated.::
>>> from numpy.polynomial import Polynomial as P
>>> p = P([2, 6])
>>> p.integ()
- Polynomial([ 0., 2., 3.], [-1., 1.], [-1., 1.])
+ Polynomial([ 0., 2., 3.], domain=[-1, 1], window=[-1, 1])
>>> p.integ(2)
- Polynomial([ 0., 0., 1., 1.], [-1., 1.], [-1., 1.])
+ Polynomial([ 0., 0., 1., 1.], domain=[-1, 1], window=[-1, 1])
The first example integrates `p` once, the second example integrates it
twice. By default, the lower bound of the integration and the integration
constant are 0, but both can be specified.::
>>> p.integ(lbnd=-1)
- Polynomial([-1., 2., 3.], [-1., 1.], [-1., 1.])
+ Polynomial([-1., 2., 3.], domain=[-1, 1], window=[-1, 1])
>>> p.integ(lbnd=-1, k=1)
- Polynomial([ 0., 2., 3.], [-1., 1.], [-1., 1.])
+ Polynomial([ 0., 2., 3.], domain=[-1, 1], window=[-1, 1])
In the first case the lower bound of the integration is set to -1 and the
integration constant is 0. In the second the constant of integration is set
@@ -215,9 +215,9 @@ number of times the polynomial is differentiated::
>>> p = P([1, 2, 3])
>>> p.deriv(1)
- Polynomial([ 2., 6.], [-1., 1.], [-1., 1.])
+ Polynomial([ 2., 6.], domain=[-1, 1], window=[-1, 1])
>>> p.deriv(2)
- Polynomial([ 6.], [-1., 1.], [-1., 1.])
+ Polynomial([ 6.], domain=[-1, 1], window=[-1, 1])
Other Polynomial Constructors
@@ -233,9 +233,9 @@ are demonstrated below::
>>> from numpy.polynomial import Chebyshev as T
>>> p = P.fromroots([1, 2, 3])
>>> p
- Polynomial([ -6., 11., -6., 1.], [-1., 1.], [-1., 1.])
+ Polynomial([ -6., 11., -6., 1.], domain=[-1, 1], window=[-1, 1])
>>> p.convert(kind=T)
- Chebyshev([ -9. , 11.75, -3. , 0.25], [-1., 1.], [-1., 1.])
+ Chebyshev([ -9. , 11.75, -3. , 0.25], domain=[-1, 1], window=[-1, 1])
The convert method can also convert domain and window::
@@ -249,9 +249,9 @@ available. The cast method works like the convert method while the basis
method returns the basis polynomial of given degree::
>>> P.basis(3)
- Polynomial([ 0., 0., 0., 1.], [-1., 1.], [-1., 1.])
+ Polynomial([ 0., 0., 0., 1.], domain=[-1, 1], window=[-1, 1])
>>> T.cast(p)
- Chebyshev([ -9. , 11.75, -3. , 0.25], [-1., 1.], [-1., 1.])
+ Chebyshev([ -9. , 11.75, -3. , 0.25], domain=[-1, 1], window=[-1, 1])
Conversions between types can be useful, but it is *not* recommended
for routine use. The loss of numerical precision in passing from a
diff --git a/doc/source/reference/routines.testing.rst b/doc/source/reference/routines.testing.rst
index c43aeeed9..ad95bb399 100644
--- a/doc/source/reference/routines.testing.rst
+++ b/doc/source/reference/routines.testing.rst
@@ -41,7 +41,6 @@ Decorators
decorators.slow
decorate_methods
-
Test Running
------------
.. autosummary::
@@ -50,3 +49,4 @@ Test Running
Tester
run_module_suite
rundocs
+ suppress_warnings
diff --git a/doc/source/reference/ufuncs.rst b/doc/source/reference/ufuncs.rst
index b3fb4d384..e28496cf6 100644
--- a/doc/source/reference/ufuncs.rst
+++ b/doc/source/reference/ufuncs.rst
@@ -426,12 +426,14 @@ Methods
All ufuncs have four methods. However, these methods only make sense on
ufuncs that take two input arguments and return one output argument.
Attempting to call these methods on other ufuncs will cause a
-:exc:`ValueError`. The reduce-like methods all take an *axis* keyword
-and a *dtype* keyword, and the arrays must all have dimension >= 1.
+:exc:`ValueError`. The reduce-like methods all take an *axis* keyword, a *dtype*
+keyword, and an *out* keyword, and the arrays must all have dimension >= 1.
The *axis* keyword specifies the axis of the array over which the reduction
-will take place and may be negative, but must be an integer. The
-*dtype* keyword allows you to manage a very common problem that arises
-when naively using :ref:`{op}.reduce <ufunc.reduce>`. Sometimes you may
+will take place (with negative values counting backwards). Generally, it is an
+integer, though for :meth:`ufunc.reduce`, it can also be a tuple of `int` to
+reduce over several axes at once, or `None`, to reduce over all axes.
+The *dtype* keyword allows you to manage a very common problem that arises
+when naively using :meth:`ufunc.reduce`. Sometimes you may
have an array of a certain data type and wish to add up all of its
elements, but the result does not fit into the data type of the
array. This commonly happens if you have an array of single-byte
@@ -443,7 +445,10 @@ mostly up to you. There is one exception: if no *dtype* is given for a
reduction on the "add" or "multiply" operations, then if the input type is
an integer (or Boolean) data-type and smaller than the size of the
:class:`int_` data type, it will be internally upcast to the :class:`int_`
-(or :class:`uint`) data-type.
+(or :class:`uint`) data-type. Finally, the *out* keyword allows you to provide
+an output array (for single-output ufuncs, which are currently the only ones
+supported; for future extension, however, a tuple with a single argument
+can be passed in). If *out* is given, the *dtype* argument is ignored.
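The keywords described above can be sketched together (a minimal illustration of *axis* as a tuple, *dtype* for the accumulator, and *out* for the result array):

```python
import numpy as np

a = np.arange(8, dtype=np.uint8).reshape(2, 2, 2)

# axis may be a tuple of ints for ufunc.reduce: sum over axes 0 and 2,
# leaving axis 1
s = np.add.reduce(a, axis=(0, 2))       # array([10, 18])

# dtype keeps a small input type from overflowing during the reduction
total = np.add.reduce(np.full(300, 255, dtype=np.uint8), dtype=np.int64)

# out supplies the output array; when given, the dtype argument is ignored
out = np.empty(2, dtype=np.int64)
np.add.reduce(a, axis=(0, 2), out=out)
```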
Ufuncs also have a fifth method that allows in place operations to be
performed using fancy indexing. No buffering is used on the dimensions where
@@ -660,6 +665,7 @@ single operation.
isfinite
isinf
isnan
+ isnat
fabs
signbit
copysign
diff --git a/doc/source/user/basics.io.genfromtxt.rst b/doc/source/user/basics.io.genfromtxt.rst
index 1048ab725..2bdd5a0d0 100644
--- a/doc/source/user/basics.io.genfromtxt.rst
+++ b/doc/source/user/basics.io.genfromtxt.rst
@@ -46,12 +46,12 @@ ends with ``'.gz'``, a :class:`gzip` archive is expected; if it ends with
Splitting the lines into columns
================================
-The :keyword:`delimiter` argument
----------------------------------
+The ``delimiter`` argument
+--------------------------
Once the file is defined and open for reading, :func:`~numpy.genfromtxt`
splits each non-empty line into a sequence of strings. Empty or commented
-lines are just skipped. The :keyword:`delimiter` keyword is used to define
+lines are just skipped. The ``delimiter`` keyword is used to define
how the splitting should take place.
Quite often, a single character marks the separation between columns. For
@@ -71,7 +71,7 @@ spaces are considered as a single white space.
Alternatively, we may be dealing with a fixed-width file, where columns are
defined as a given number of characters. In that case, we need to set
-:keyword:`delimiter` to a single integer (if all the columns have the same
+``delimiter`` to a single integer (if all the columns have the same
size) or to a sequence of integers (if columns can have different sizes)::
>>> data = " 1 2 3\n 4 5 67\n890123 4"
@@ -86,13 +86,13 @@ size) or to a sequence of integers (if columns can have different sizes)::
[ 4., 567., 9.]])
-The :keyword:`autostrip` argument
----------------------------------
+The ``autostrip`` argument
+--------------------------
By default, when a line is decomposed into a series of strings, the
individual entries are not stripped of leading nor trailing white spaces.
This behavior can be overwritten by setting the optional argument
-:keyword:`autostrip` to a value of ``True``::
+``autostrip`` to a value of ``True``::
>>> data = "1, abc , 2\n 3, xxx, 4"
>>> # Without autostrip
@@ -107,10 +107,10 @@ This behavior can be overwritten by setting the optional argument
dtype='|U5')
-The :keyword:`comments` argument
---------------------------------
+The ``comments`` argument
+-------------------------
-The optional argument :keyword:`comments` is used to define a character
+The optional argument ``comments`` is used to define a character
string that marks the beginning of a comment. By default,
:func:`~numpy.genfromtxt` assumes ``comments='#'``. The comment marker may
occur anywhere on the line. Any character present after the comment
@@ -143,15 +143,15 @@ marker(s) is simply ignored::
Skipping lines and choosing columns
===================================
-The :keyword:`skip_header` and :keyword:`skip_footer` arguments
+The ``skip_header`` and ``skip_footer`` arguments
---------------------------------------------------------------
The presence of a header in the file can hinder data processing. In that
-case, we need to use the :keyword:`skip_header` optional argument. The
+case, we need to use the ``skip_header`` optional argument. The
values of this argument must be an integer which corresponds to the number
of lines to skip at the beginning of the file, before any other action is
performed. Similarly, we can skip the last ``n`` lines of the file by
-using the :keyword:`skip_footer` attribute and giving it a value of ``n``::
+using the ``skip_footer`` attribute and giving it a value of ``n``::
>>> data = "\n".join(str(i) for i in range(10))
>>> np.genfromtxt(BytesIO(data),)
@@ -164,12 +164,12 @@ By default, ``skip_header=0`` and ``skip_footer=0``, meaning that no lines
are skipped.
-The :keyword:`usecols` argument
--------------------------------
+The ``usecols`` argument
+------------------------
In some cases, we are not interested in all the columns of the data but
only a few of them. We can select which columns to import with the
-:keyword:`usecols` argument. This argument accepts a single integer or a
+``usecols`` argument. This argument accepts a single integer or a
sequence of integers corresponding to the indices of the columns to import.
Remember that by convention, the first column has an index of 0. Negative
integers behave the same as regular Python negative indexes.
@@ -183,7 +183,7 @@ can use ``usecols=(0, -1)``::
[ 4., 6.]])
If the columns have names, we can also select which columns to import by
-giving their name to the :keyword:`usecols` argument, either as a sequence
+giving their name to the ``usecols`` argument, either as a sequence
of strings or a comma-separated string::
>>> data = "1 2 3\n4 5 6"
@@ -203,12 +203,12 @@ Choosing the data type
======================
The main way to control how the sequences of strings we have read from the
-file are converted to other types is to set the :keyword:`dtype` argument.
+file are converted to other types is to set the ``dtype`` argument.
Acceptable values for this argument are:
* a single type, such as ``dtype=float``.
The output will be 2D with the given dtype, unless a name has been
- associated with each column with the use of the :keyword:`names` argument
+ associated with each column with the use of the ``names`` argument
(see below). Note that ``dtype=float`` is the default for
:func:`~numpy.genfromtxt`.
* a sequence of types, such as ``dtype=(int, float, float)``.
@@ -223,7 +223,7 @@ Acceptable values for this argument are:
In all the cases but the first one, the output will be a 1D array with a
structured dtype. This dtype has as many fields as items in the sequence.
-The field names are defined with the :keyword:`names` keyword.
+The field names are defined with the ``names`` keyword.
When ``dtype=None``, the type of each column is determined iteratively from
@@ -242,8 +242,8 @@ significantly slower than setting the dtype explicitly.
Setting the names
=================
-The :keyword:`names` argument
------------------------------
+The ``names`` argument
+----------------------
A natural approach when dealing with tabular data is to allocate a name to
each column. A first possibility is to use an explicit structured dtype,
@@ -254,7 +254,7 @@ as mentioned previously::
array([(1, 2, 3), (4, 5, 6)],
dtype=[('a', '<i8'), ('b', '<i8'), ('c', '<i8')])
-Another simpler possibility is to use the :keyword:`names` keyword with a
+Another simpler possibility is to use the ``names`` keyword with a
sequence of strings or a comma-separated string::
>>> data = BytesIO("1 2 3\n 4 5 6")
@@ -267,7 +267,7 @@ By giving a sequence of names, we are forcing the output to a structured
dtype.
We may sometimes need to define the column names from the data itself. In
-that case, we must use the :keyword:`names` keyword with a value of
+that case, we must use the ``names`` keyword with a value of
``True``. The names will then be read from the first line (after the
``skip_header`` ones), even if the line is commented out::
@@ -276,7 +276,7 @@ that case, we must use the :keyword:`names` keyword with a value of
array([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)],
dtype=[('a', '<f8'), ('b', '<f8'), ('c', '<f8')])
-The default value of :keyword:`names` is ``None``. If we give any other
+The default value of ``names`` is ``None``. If we give any other
value to the keyword, the new names will overwrite the field names we may
have defined with the dtype::
@@ -288,8 +288,8 @@ have defined with the dtype::
dtype=[('A', '<i8'), ('B', '<f8'), ('C', '<i8')])
-The :keyword:`defaultfmt` argument
-----------------------------------
+The ``defaultfmt`` argument
+---------------------------
If ``names=None`` but a structured dtype is expected, names are defined
with the standard NumPy default of ``"f%i"``, yielding names like ``f0``,
@@ -308,7 +308,7 @@ dtype, the missing names will be defined with this default template::
array([(1, 2.0, 3), (4, 5.0, 6)],
dtype=[('a', '<i8'), ('f0', '<f8'), ('f1', '<i8')])
-We can overwrite this default with the :keyword:`defaultfmt` argument, that
+We can overwrite this default with the ``defaultfmt`` argument, which
takes any format string::
>>> data = BytesIO(b"1 2 3\n 4 5 6")
@@ -333,16 +333,16 @@ correspond to the name of a standard attribute (like ``size`` or
``shape``), which would confuse the interpreter. :func:`~numpy.genfromtxt`
accepts three optional arguments that provide a finer control on the names:
- :keyword:`deletechars`
+ ``deletechars``
Gives a string combining all the characters that must be deleted from
the name. By default, invalid characters are
``~!@#$%^&*()-=+~\|]}[{';:
/?.>,<``.
- :keyword:`excludelist`
+ ``excludelist``
Gives a list of the names to exclude, such as ``return``, ``file``,
``print``... If one of the input names is part of this list, an
underscore character (``'_'``) will be appended to it.
- :keyword:`case_sensitive`
+ ``case_sensitive``
Whether the names should be case-sensitive (``case_sensitive=True``),
converted to upper case (``case_sensitive=False`` or
``case_sensitive='upper'``) or to lower case
@@ -353,15 +353,15 @@ accepts three optional arguments that provide a finer control on the names:
Tweaking the conversion
=======================
-The :keyword:`converters` argument
-----------------------------------
+The ``converters`` argument
+---------------------------
Usually, defining a dtype is sufficient to define how the sequence of
strings must be converted. However, some additional control may sometimes
be required. For example, we may want to make sure that a date in a format
``YYYY/MM/DD`` is converted to a :class:`datetime` object, or that a string
like ``xx%`` is properly converted to a float between 0 and 1. In such
-cases, we should define conversion functions with the :keyword:`converters`
+cases, we should define conversion functions with the ``converters``
argument.
The value of this argument is typically a dictionary with column indices or
@@ -427,16 +427,16 @@ float. However, user-defined converters may rapidly become cumbersome to
manage.
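A minimal sketch of the percentage case mentioned above (the column index and data are illustrative; the converter handles both ``bytes`` and ``str`` input, since what :func:`~numpy.genfromtxt` passes to a converter depends on the NumPy version and the ``encoding`` argument):

```python
from io import BytesIO
import numpy as np

def percent(field):
    """Convert a string like '78.9%' to a float between 0 and 1."""
    if isinstance(field, bytes):
        field = field.decode("ascii")
    return float(field.strip().rstrip("%")) / 100.0

# Column 1 holds percentages; the converter rescales it to [0, 1].
data = BytesIO(b"1,2.3%,45.\n6,78.9%,0")
result = np.genfromtxt(data, delimiter=",", converters={1: percent})
print(result[:, 1])
```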
The :func:`~numpy.genfromtxt` function provides two other complementary
-mechanisms: the :keyword:`missing_values` argument is used to recognize
-missing data and a second argument, :keyword:`filling_values`, is used to
+mechanisms: the ``missing_values`` argument is used to recognize
+missing data and a second argument, ``filling_values``, is used to
process these missing data.
-:keyword:`missing_values`
--------------------------
+``missing_values``
+------------------
By default, any empty string is marked as missing. We can also consider
more complex strings, such as ``"N/A"`` or ``"???"`` to represent missing
-or invalid data. The :keyword:`missing_values` argument accepts three kind
+or invalid data. The ``missing_values`` argument accepts three kinds
of values:
a string or a comma-separated string
@@ -451,8 +451,8 @@ of values:
define a default applicable to all columns.
-:keyword:`filling_values`
--------------------------
+``filling_values``
+------------------
We know how to recognize missing data, but we still need to provide a value
for these missing entries. By default, this value is determined from the
@@ -469,8 +469,8 @@ Expected type Default
============= ==============
We can get a finer control on the conversion of missing values with the
-:keyword:`filling_values` optional argument. Like
-:keyword:`missing_values`, this argument accepts different kind of values:
+``filling_values`` optional argument. Like
+``missing_values``, this argument accepts different kinds of values:
a single value
This will be the default for all columns
@@ -497,13 +497,13 @@ and second column, and to -999 if they occur in the last column::
dtype=[('a', '<i8'), ('b', '<i8'), ('c', '<i8')])
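A self-contained sketch of the two mechanisms working together (the marker string and the fill value are illustrative):

```python
from io import BytesIO
import numpy as np

# "N/A" marks missing entries in any column; each missing entry is
# replaced by -999 before conversion to int.
data = BytesIO(b"1,N/A,3\n4,5,N/A")
arr = np.genfromtxt(data, delimiter=",", dtype=int,
                    missing_values="N/A", filling_values=-999)
print(arr)
```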
-:keyword:`usemask`
-------------------
+``usemask``
+-----------
We may also want to keep track of the occurrence of missing data by
constructing a boolean mask, with ``True`` entries where data was missing
and ``False`` otherwise. To do that, we just have to set the optional
-argument :keyword:`usemask` to ``True`` (the default is ``False``). The
+argument ``usemask`` to ``True`` (the default is ``False``). The
output array will then be a :class:`~numpy.ma.MaskedArray`.
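A short sketch of the masked output (marker string is illustrative):

```python
from io import BytesIO
import numpy as np

data = BytesIO(b"1,N/A,3\n4,5,6")
arr = np.genfromtxt(data, delimiter=",", dtype=int,
                    missing_values="N/A", usemask=True)
# arr is a numpy.ma.MaskedArray; the mask is True where data was missing.
print(arr.mask)
```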
diff --git a/doc/source/user/building.rst b/doc/source/user/building.rst
index fa3f2ccb4..b98f89c2d 100644
--- a/doc/source/user/building.rst
+++ b/doc/source/user/building.rst
@@ -32,7 +32,7 @@ Building NumPy requires the following software installed:
FORTRAN 77 compiler installed.
Note that NumPy is developed mainly using GNU compilers. Compilers from
- other vendors such as Intel, Absoft, Sun, NAG, Compaq, Vast, Porland,
+ other vendors such as Intel, Absoft, Sun, NAG, Compaq, Vast, Portland,
Lahey, HP, IBM, Microsoft are only supported in the form of community
feedback, and may not work out of the box. GCC 4.x (and later) compilers
are recommended.
@@ -137,7 +137,7 @@ Additional compiler flags can be supplied by setting the ``OPT``,
Building with ATLAS support
---------------------------
-Ubuntu
+Ubuntu
~~~~~~
You can install the necessary package for optimized ATLAS with this command::
diff --git a/doc/source/user/c-info.beyond-basics.rst b/doc/source/user/c-info.beyond-basics.rst
index 1f19c8405..5c321088d 100644
--- a/doc/source/user/c-info.beyond-basics.rst
+++ b/doc/source/user/c-info.beyond-basics.rst
@@ -390,8 +390,8 @@ an error condition set if it was not successful.
(optional) Specify any optional data needed by the function which will
be passed when the function is called.
- .. index::
- pair: dtype; adding new
+.. index::
+ pair: dtype; adding new
Subtyping the ndarray in C
diff --git a/doc/source/user/c-info.ufunc-tutorial.rst b/doc/source/user/c-info.ufunc-tutorial.rst
index 59e3dc6dc..addc38f45 100644
--- a/doc/source/user/c-info.ufunc-tutorial.rst
+++ b/doc/source/user/c-info.ufunc-tutorial.rst
@@ -1098,7 +1098,7 @@ automatically generates a ufunc from a C function with the correct signature.
.. code-block:: c
static void
- double_add(char *args, npy_intp *dimensions, npy_intp *steps,
+ double_add(char **args, npy_intp *dimensions, npy_intp *steps,
void *extra)
{
npy_intp i;
diff --git a/doc/source/user/numpy-for-matlab-users.rst b/doc/source/user/numpy-for-matlab-users.rst
index 7f48e7031..66641eed3 100644
--- a/doc/source/user/numpy-for-matlab-users.rst
+++ b/doc/source/user/numpy-for-matlab-users.rst
@@ -31,7 +31,7 @@ Some Key Differences
these arrays are designed to act more or less like matrix operations
in linear algebra.
- In NumPy the basic type is a multidimensional ``array``. Operations
- on these arrays in all dimensionalities including 2D are elementwise
+ on these arrays in all dimensionalities including 2D are element-wise
operations. However, there is a special ``matrix`` type for doing
linear algebra, which is just a subclass of the ``array`` class.
Operations on matrix-class arrays are linear algebra operations.
@@ -77,9 +77,10 @@ Short answer
linear algebra operations.
- You can have standard vectors or row/column vectors if you like.
-The only disadvantage of using the array type is that you will have to
-use ``dot`` instead of ``*`` to multiply (reduce) two tensors (scalar
-product, matrix vector multiplication etc.).
+Until Python 3.5, the only disadvantage of using the array type was that you
+had to use ``dot`` instead of ``*`` to multiply (reduce) two tensors
+(scalar product, matrix-vector multiplication, etc.). Since Python 3.5, you
+can use the matrix multiplication ``@`` operator.
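The difference can be sketched in a few lines (array values are arbitrary):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])

elementwise = A * B     # element-wise product on plain arrays
via_dot = np.dot(A, B)  # matrix product, pre-3.5 style
via_at = A @ B          # matrix product with the @ operator (Python >= 3.5)
print(np.array_equal(via_dot, via_at))  # True
```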
Long answer
-----------
@@ -136,7 +137,9 @@ There are pros and cons to using both:
``dot(v,A)`` treats ``v`` as a row vector. This can save you having to
type a lot of transposes.
- ``<:(`` Having to use the ``dot()`` function for matrix-multiply is
- messy -- ``dot(dot(A,B),C)`` vs. ``A*B*C``.
+ messy -- ``dot(dot(A,B),C)`` vs. ``A*B*C``. This isn't an issue with
+ Python >= 3.5 because the ``@`` operator allows it to be written as
+ ``A @ B @ C``.
- ``:)`` Element-wise multiplication is easy: ``A*B``.
- ``:)`` ``array`` is the "default" NumPy type, so it gets the most
testing, and is the type most likely to be returned by 3rd party
@@ -145,7 +148,7 @@ There are pros and cons to using both:
- ``:)`` Closer in semantics to tensor algebra, if you are familiar
with that.
- ``:)`` *All* operations (``*``, ``/``, ``+``, ``-`` etc.) are
- elementwise
+ element-wise.
- ``matrix``
@@ -160,11 +163,12 @@ There are pros and cons to using both:
it's a bug), but 3rd party code based on NumPy may not honor type
preservation like NumPy does.
- ``:)`` ``A*B`` is matrix multiplication, so more convenient for
- linear algebra.
+ linear algebra (for Python >= 3.5, plain arrays have the same convenience
+ with the ``@`` operator).
- ``<:(`` Element-wise multiplication requires calling a function,
``multiply(A,B)``.
- ``<:(`` The use of operator overloading is a bit illogical: ``*``
- does not work elementwise but ``/`` does.
+ does not work element-wise but ``/`` does.
The ``array`` is thus much more advisable to use.
diff --git a/doc/source/user/quickstart.rst b/doc/source/user/quickstart.rst
index 7295d1aca..4a10faae8 100644
--- a/doc/source/user/quickstart.rst
+++ b/doc/source/user/quickstart.rst
@@ -25,14 +25,12 @@ The Basics
NumPy's main object is the homogeneous multidimensional array. It is a
table of elements (usually numbers), all of the same type, indexed by a
-tuple of positive integers. In NumPy dimensions are called *axes*. The
-number of axes is *rank*.
+tuple of positive integers. In NumPy dimensions are called *axes*.
-For example, the coordinates of a point in 3D space ``[1, 2, 1]`` is an
-array of rank 1, because it has one axis. That axis has a length of 3.
-In the example pictured below, the array has rank 2 (it is 2-dimensional).
-The first dimension (axis) has a length of 2, the second dimension has a
-length of 3.
+For example, the coordinates of a point in 3D space, ``[1, 2, 1]``, form
+an array with one axis. That axis has 3 elements in it, so we say it has a length
+of 3. In the example pictured below, the array has 2 axes. The first
+axis has a length of 2, the second axis has a length of 3.
::
@@ -46,14 +44,12 @@ arrays and offers less functionality. The more important attributes of
an ``ndarray`` object are:
ndarray.ndim
- the number of axes (dimensions) of the array. In the Python world,
- the number of dimensions is referred to as *rank*.
+ the number of axes (dimensions) of the array.
ndarray.shape
the dimensions of the array. This is a tuple of integers indicating
the size of the array in each dimension. For a matrix with *n* rows
and *m* columns, ``shape`` will be ``(n,m)``. The length of the
- ``shape`` tuple is therefore the rank, or number of dimensions,
- ``ndim``.
+ ``shape`` tuple is therefore the number of axes, ``ndim``.
ndarray.size
the total number of elements of the array. This is equal to the
product of the elements of ``shape``.
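The attributes above can be sketched on a small array:

```python
import numpy as np

a = np.arange(15).reshape(3, 5)  # 2 axes: length 3 and length 5
print(a.ndim)   # 2
print(a.shape)  # (3, 5)
print(a.size)   # 15
```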
@@ -537,8 +533,8 @@ remaining axes. NumPy also allows you to write this using dots as
``b[i,...]``.
The **dots** (``...``) represent as many colons as needed to produce a
-complete indexing tuple. For example, if ``x`` is a rank 5 array (i.e.,
-it has 5 axes), then
+complete indexing tuple. For example, if ``x`` is an array with 5
+axes, then
- ``x[1,2,...]`` is equivalent to ``x[1,2,:,:,:]``,
- ``x[...,3]`` to ``x[:,:,:,:,3]`` and
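The equivalences above can be checked directly (array contents are arbitrary):

```python
import numpy as np

x = np.arange(720).reshape(2, 3, 4, 5, 6)  # an array with 5 axes
# The dots expand to as many ':' as needed for a complete indexing tuple.
print(np.array_equal(x[1, 2, ...], x[1, 2, :, :, :]))  # True
print(np.array_equal(x[..., 3], x[:, :, :, :, 3]))     # True
```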
@@ -1119,13 +1115,13 @@ value of time-dependent series::
[-0.53657292, 0.42016704, 0.99060736, 0.65028784],
[-0.28790332, -0.96139749, -0.75098725, 0.14987721]])
>>>
- >>> ind = data.argmax(axis=0) # index of the maxima for each series
+ >>> ind = data.argmax(axis=0) # index of the maxima for each series
>>> ind
array([2, 0, 3, 1])
>>>
- >>> time_max = time[ ind] # times corresponding to the maxima
+ >>> time_max = time[ind] # times corresponding to the maxima
>>>
- >>> data_max = data[ind, xrange(data.shape[1])] # => data[ind[0],0], data[ind[1],1]...
+ >>> data_max = data[ind, range(data.shape[1])] # => data[ind[0],0], data[ind[1],1]...
>>>
>>> time_max
array([ 82.5 , 20. , 113.75, 51.25])
@@ -1245,9 +1241,9 @@ selecting the slices we want::
Note that the length of the 1D boolean array must coincide with the
length of the dimension (or axis) you want to slice. In the previous
-example, ``b1`` is a 1-rank array with length 3 (the number of *rows* in
-``a``), and ``b2`` (of length 4) is suitable to index the 2nd rank
-(columns) of ``a``.
+example, ``b1`` has length 3 (the number of *rows* in ``a``), and
+``b2`` (of length 4) is suitable to index the 2nd axis (columns) of
+``a``.
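A minimal sketch of the length requirement (values are illustrative):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
b1 = np.array([False, True, True])         # length 3: matches the rows of a
b2 = np.array([True, False, True, False])  # length 4: matches the columns of a
rows = a[b1, :]   # selects rows 1 and 2
cols = a[:, b2]   # selects columns 0 and 2
print(rows.shape, cols.shape)  # (2, 4) (3, 2)
```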
The ix_() function
-------------------