author     Charles Harris <charlesr.harris@gmail.com>  2019-06-26 10:43:36 -0700
committer  GitHub <noreply@github.com>                 2019-06-26 10:43:36 -0700
commit     df096f8386cb1478e117fc959c3c3c374fb1895b (patch)
tree       6f716b3d39fc6305ac4b7c0a489bbe693b258c0d /doc
parent     113560b576c57fcbaa758cb8e7b12b7f19f51c2f (diff)
parent     4262579fe671bad09aefa1716f45c73023011048 (diff)
download   numpy-df096f8386cb1478e117fc959c3c3c374fb1895b.tar.gz
Merge branch 'master' into force-zip64
Diffstat (limited to 'doc')
-rw-r--r--  doc/C_STYLE_GUIDE.rst.txt                                      |   15
-rw-r--r--  doc/DISTUTILS.rst.txt                                          |  149
-rw-r--r--  doc/Makefile                                                   |    5
-rw-r--r--  doc/RELEASE_WALKTHROUGH.rst.txt                                |   23
-rw-r--r--  doc/TESTS.rst.txt                                              |   16
-rw-r--r--  doc/changelog/1.16.4-changelog.rst                             |   39
-rw-r--r--  doc/neps/nep-0010-new-iterator-ufunc.rst                       |    4
-rw-r--r--  doc/neps/nep-0016-abstract-array.rst                           |    2
-rw-r--r--  doc/neps/nep-0018-array-function-protocol.rst                  |  272
-rw-r--r--  doc/neps/nep-0019-rng-policy.rst                               |    8
-rw-r--r--  doc/neps/nep-0026-missing-data-summary.rst                     |    4
-rw-r--r--  doc/neps/nep-0027-zero-rank-arrarys.rst                        |   28
-rw-r--r--  doc/neps/roadmap.rst                                           |  129
-rw-r--r--  doc/release/1.14.4-notes.rst                                   |    2
-rw-r--r--  doc/release/1.16.4-notes.rst                                   |   94
-rw-r--r--  doc/release/1.17.0-notes.rst                                   |  107
-rw-r--r--  doc/source/about.rst                                           |    2
-rw-r--r--  doc/source/conf.py                                             |   38
-rw-r--r--  doc/source/dev/development_environment.rst                     |    4
-rw-r--r--  doc/source/reference/arrays.dtypes.rst                         |    1
-rw-r--r--  doc/source/reference/c-api.array.rst                           |   49
-rw-r--r--  doc/source/reference/c-api.config.rst                          |   19
-rw-r--r--  doc/source/reference/c-api.coremath.rst                        |    8
-rw-r--r--  doc/source/reference/c-api.iterator.rst                        |   49
-rw-r--r--  doc/source/reference/c-api.types-and-structures.rst           |  143
-rw-r--r--  doc/source/reference/distutils.rst                             |  100
-rw-r--r--  doc/source/reference/random/bit_generators/bitgenerators.rst  |   11
-rw-r--r--  doc/source/reference/random/bit_generators/index.rst          |   71
-rw-r--r--  doc/source/reference/random/bit_generators/mt19937.rst        |   34
-rw-r--r--  doc/source/reference/random/bit_generators/pcg64.rst          |   33
-rw-r--r--  doc/source/reference/random/bit_generators/philox.rst         |   35
-rw-r--r--  doc/source/reference/random/bit_generators/sfc64.rst          |   28
-rw-r--r--  doc/source/reference/random/entropy.rst                       |    6
-rw-r--r--  doc/source/reference/random/extending.rst                     |  165
-rw-r--r--  doc/source/reference/random/generator.rst                     |   82
-rw-r--r--  doc/source/reference/random/index.rst                         |  187
-rw-r--r--  doc/source/reference/random/legacy.rst                        |  128
-rw-r--r--  doc/source/reference/random/multithreading.rst                |  106
-rw-r--r--  doc/source/reference/random/new-or-different.rst              |  116
-rw-r--r--  doc/source/reference/random/parallel.rst                      |  135
-rw-r--r--  doc/source/reference/random/performance.py                    |   74
-rw-r--r--  doc/source/reference/random/performance.rst                   |  135
-rw-r--r--  doc/source/reference/routines.char.rst                        |   10
-rw-r--r--  doc/source/reference/routines.random.rst                      |   83
-rw-r--r--  doc/source/reference/routines.rst                             |    2
-rw-r--r--  doc/source/reference/ufuncs.rst                               |    2
-rw-r--r--  doc/source/release.rst                                        |    1
-rw-r--r--  doc/source/user/basics.io.genfromtxt.rst                      |    6
-rw-r--r--  doc/source/user/building.rst                                  |    7
49 files changed, 2136 insertions, 631 deletions
diff --git a/doc/C_STYLE_GUIDE.rst.txt b/doc/C_STYLE_GUIDE.rst.txt
index fa1d3a77d..07f4b99df 100644
--- a/doc/C_STYLE_GUIDE.rst.txt
+++ b/doc/C_STYLE_GUIDE.rst.txt
@@ -10,9 +10,6 @@ to achieve uniformity. Because the NumPy conventions are very close to
those in PEP-0007, that PEP is used as a template below with the NumPy
additions and variations in the appropriate spots.
-NumPy modified PEP-0007
-=======================
-
Introduction
------------
@@ -31,10 +28,7 @@ Two good reasons to break a particular rule:
C dialect
---------
-* Use ANSI/ISO standard C (the 1989 version of the standard).
- This means, amongst many other things, that all declarations
- must be at the top of a block (not necessarily at the top of
- function).
+* Use C99 (that is, the standard defined by ISO/IEC 9899:1999).
* Don't use GCC extensions (e.g. don't write multi-line strings
without trailing backslashes). Preferably break long strings
@@ -49,9 +43,6 @@ C dialect
* All function declarations and definitions must use full
prototypes (i.e. specify the types of all arguments).
-* Do not use C++ style // one line comments, they aren't portable.
- Note: this will change with the proposed transition to C++.
-
* No compiler warnings with major compilers (gcc, VC++, a few others).
Note: NumPy still produces compiler warnings that need to be addressed.
@@ -179,12 +170,12 @@ Code lay-out
Trailing comments should be used sparingly. Instead of ::
- if (yes) {/* Success! */
+ if (yes) { // Success!
do ::
if (yes) {
- /* Success! */
+ // Success!
* All functions and global variables should be declared static
when they aren't needed outside the current compilation unit.
diff --git a/doc/DISTUTILS.rst.txt b/doc/DISTUTILS.rst.txt
index 42aa9561d..eadde63f8 100644
--- a/doc/DISTUTILS.rst.txt
+++ b/doc/DISTUTILS.rst.txt
@@ -297,29 +297,140 @@ in writing setup scripts:
+ ``config.get_info(*names)`` ---
-Template files
---------------
+
+.. _templating:
+
+Conversion of ``.src`` files using Templates
+--------------------------------------------
+
+NumPy distutils supports automatic conversion of source files named
+<somefile>.src. This facility can be used to maintain very similar
+code blocks requiring only simple changes between blocks. During the
+build phase of setup, if a template file named <somefile>.src is
+encountered, a new file named <somefile> is constructed from the
+template and placed in the build directory to be used instead. Two
+forms of template conversion are supported. The first form occurs for
+files named <file>.ext.src where ext is a recognized Fortran
+extension (f, f90, f95, f77, for, ftn, pyf). The second form is used
+for all other cases.
+
+.. index::
+ single: code generation
+
+Fortran files
+-------------
+
+This template converter will replicate all **function** and
+**subroutine** blocks in the file with names that contain '<...>'
+according to the rules in '<...>'. The number of comma-separated words
+in '<...>' determines the number of times the block is repeated, and
+the words themselves give the successive replacements for that repeat
+rule, '<...>', in each block. All of the repeat rules in a block must
+contain the same number of comma-separated words indicating the number
+of times that block should be repeated. If a word in the repeat rule
+needs a comma, left arrow ('<'), or right arrow ('>'), then prepend it
+with a backslash ' \'. If a word in the repeat rule matches ' \\<index>' then
+it will be replaced with the <index>-th word in the same repeat
+specification. There are two forms for the repeat rule: named and
+short.
+
+Named repeat rule
+^^^^^^^^^^^^^^^^^
+
+A named repeat rule is useful when the same set of repeats must be
+used several times in a block. It is specified using <rule1=item1,
+item2, item3,..., itemN>, where N is the number of times the block
+should be repeated. On each repeat of the block, the entire
+expression, '<...>' will be replaced first with item1, and then with
+item2, and so forth until N repeats are accomplished. Once a named
+repeat specification has been introduced, the same repeat rule may be
+used **in the current block** by referring only to the name
+(i.e. <rule1>).
+
+
+Short repeat rule
+^^^^^^^^^^^^^^^^^
+
+A short repeat rule looks like <item1, item2, item3, ..., itemN>. The
+rule specifies that the entire expression, '<...>' should be replaced
+first with item1, and then with item2, and so forth until N repeats
+are accomplished.
+
+
+Pre-defined names
+^^^^^^^^^^^^^^^^^
+
+The following predefined named repeat rules are available:
+
+- <prefix=s,d,c,z>
+
+- <_c=s,d,c,z>
+
+- <_t=real, double precision, complex, double complex>
+
+- <ftype=real, double precision, complex, double complex>
+
+- <ctype=float, double, complex_float, complex_double>
+
+- <ftypereal=float, double precision, \\0, \\1>
+
+- <ctypereal=float, double, \\0, \\1>
+
+
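For illustration, a named rule such as ``<prefix=s,d,c,z>`` replicates a
routine four times, substituting one item per copy. The following Python
sketch is purely editorial (NumPy's actual converter for Fortran ``.src``
files lives in ``numpy/distutils/from_template.py``) and only mimics the
single-block expansion described above::

    # Editorial sketch of named-repeat-rule expansion; not NumPy's code.
    def expand_block(block, rules):
        """Repeat `block` once per item, substituting each <name>."""
        n = len(next(iter(rules.values())))  # all rules share this length
        out = []
        for i in range(n):
            text = block
            for name, items in rules.items():
                text = text.replace('<%s>' % name, items[i])
            out.append(text)
        return ''.join(out)

    block = ("      subroutine <prefix>copy(n, x, y)\n"
             "      end subroutine <prefix>copy\n")
    print(expand_block(block, {'prefix': ['s', 'd', 'c', 'z']}))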
+Other files
+------------
+
+Non-Fortran files use a separate syntax for defining template blocks
+that should be repeated using a variable expansion similar to the
+named repeat rules of the Fortran-specific repeats.
NumPy Distutils preprocesses C source files (extension: :file:`.c.src`) written
in a custom templating language to generate C code. The :c:data:`@` symbol is
-used to wrap macro-style variables to empower a string substitution mechansim
+used to wrap macro-style variables to empower a string substitution mechanism
that might describe (for instance) a set of data types.
-As a more detailed scenario, a loop in the NumPy C source code may
-have a :c:data:`@TYPE@` variable, targeted for string substitution,
-which is preprocessed to a number of otherwise identical loops with
-several strings such as :c:data:`INT, LONG, UINT, ULONG`. The :c:data:`@TYPE@`
-style syntax thus reduces code duplication and maintenance burden by
-mimicking languages that have generic type support. By convention,
-and as required by the preprocessor, generically typed blocks are preceded
-by comment blocks that enumerate the intended string substitutions.
-
The template language blocks are delimited by :c:data:`/**begin repeat`
and :c:data:`/**end repeat**/` lines, which may also be nested using
consecutively numbered delimiting lines such as :c:data:`/**begin repeat1`
-and :c:data:`/**end repeat1**/`. String replacement specifications are started
-and terminated using :c:data:`#`. This may be clearer in the following
-template source example::
+and :c:data:`/**end repeat1**/`:
+
+1. "/\**begin repeat "on a line by itself marks the beginning of
+a segment that should be repeated.
+
+2. Named variable expansions are defined using ``#name=item1, item2, item3,
+..., itemN#`` and placed on successive lines. These variables are
+replaced in each repeat block with corresponding word. All named
+variables in the same repeat block must define the same number of
+words.
+
+3. In specifying the repeat rule for a named variable, ``item*N`` is
+shorthand for ``item, item, ..., item`` repeated N times. In addition,
+parentheses in combination with ``*N`` can be used for grouping several
+items that should be repeated. Thus, ``#name=(item1, item2)*4#`` is
+equivalent to ``#name=item1, item2, item1, item2, item1, item2, item1,
+item2#``.
+
+4. "\*/ "on a line by itself marks the end of the variable expansion
+naming. The next line is the first line that will be repeated using
+the named rules.
+
+5. Inside the block to be repeated, the variables that should be expanded
+are specified as ``@name@``.
+
+6. "/\**end repeat**/ "on a line by itself marks the previous line
+as the last line of the block to be repeated.
+
+7. A loop in the NumPy C source code may have a ``@TYPE@`` variable, targeted
+for string substitution, which is preprocessed to a number of otherwise
+identical loops with several strings such as INT, LONG, UINT, ULONG. The
+``@TYPE@`` style syntax thus reduces code duplication and maintenance burden by
+mimicking languages that have generic type support.
+
+The above rules may be clearer in the following template source example:
+
+.. code-block:: NumPyC
+ :linenos:
+ :emphasize-lines: 3, 13, 29, 31
/* TIMEDELTA to non-float types */
@@ -356,10 +467,10 @@ template source example::
The preprocessing of generically typed C source files (whether in NumPy
proper or in any third party package using NumPy Distutils) is performed
by `conv_template.py`_.
-The type specific C files generated (extension: :file:`.c`) by these modules
-during the build process are ready to be compiled. This form
-of generic typing is also supported for C header files (preprocessed to
-produce :file:`.h` files).
+The type specific C files generated (extension: .c)
+by these modules during the build process are ready to be compiled. This
+form of generic typing is also supported for C header files (preprocessed
+to produce .h files).
.. _conv_template.py: https://github.com/numpy/numpy/blob/master/numpy/distutils/conv_template.py
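For a quick, hands-on check of the conversion described above, the converter
can also be driven from Python. This is a minimal sketch, assuming
``process_str`` is importable from ``numpy.distutils.conv_template`` as in the
NumPy source tree; the exact output formatting may differ::

    from numpy.distutils.conv_template import process_str

    # One repeat block, expanded once per named item into plain C.
    src = ("/**begin repeat\n"
           " * #type = int, long#\n"
           " * #TYPE = INT, LONG#\n"
           " */\n"
           "static @type@ @TYPE@_zero(void) { return (@type@)0; }\n"
           "/**end repeat**/\n")

    print(process_str(src))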
diff --git a/doc/Makefile b/doc/Makefile
index 4db17b297..842d2ad13 100644
--- a/doc/Makefile
+++ b/doc/Makefile
@@ -5,7 +5,7 @@
# issues with the amendments to PYTHONPATH and install paths (see DIST_VARS).
# Use explicit "version_info" indexing since make cannot handle colon characters, and
-# evaluate it now to allow easier debugging when printing the varaible
+# evaluate it now to allow easier debugging when printing the variable
PYVER:=$(shell python3 -c 'from sys import version_info as v; print("{0}.{1}".format(v[0], v[1]))')
PYTHON = python$(PYVER)
@@ -46,7 +46,8 @@ help:
@echo " upload USERNAME=... RELEASE=... to upload built docs to docs.scipy.org"
clean:
- -rm -rf build/* source/reference/generated
+ -rm -rf build/*
+ find . -name generated -type d -prune -exec rm -rf "{}" ";"
version-check:
ifeq "$(GITVER)" "Unknown"
diff --git a/doc/RELEASE_WALKTHROUGH.rst.txt b/doc/RELEASE_WALKTHROUGH.rst.txt
index 79a296ffe..6987dd6c1 100644
--- a/doc/RELEASE_WALKTHROUGH.rst.txt
+++ b/doc/RELEASE_WALKTHROUGH.rst.txt
@@ -139,7 +139,7 @@ environment.
Generate the README files
-------------------------
-This needs to be done after all installers are present, but before the pavement
+This needs to be done after all installers are downloaded, but before the pavement
file is updated for continued development::
$ cd ../numpy
@@ -151,7 +151,7 @@ Tag the release
Once the wheels have been built and downloaded without errors, go back to your
numpy repository in the maintenance branch and tag the ``REL`` commit, signing
-it with your gpg key, and build the source distribution archives::
+it with your gpg key::
$ git tag -s v1.14.5
@@ -171,9 +171,22 @@ Reset the maintenance branch into a development state
-----------------------------------------------------
Add another ``REL`` commit to the numpy maintenance branch, which resets the
-``ISRELEASED`` flag to ``False`` and increments the version counter.::
+``ISRELEASED`` flag to ``False`` and increments the version counter::
$ gvim pavement.py setup.py
+
+Create release notes for next release and edit them to set the version::
+
+ $ cp doc/release/template.rst doc/release/1.14.6-notes.rst
+ $ gvim doc/release/1.14.6-notes.rst
+ $ git add doc/release/1.14.6-notes.rst
+
+Add new release notes to the documentation release list::
+
+ $ gvim doc/source/release.rst
+
+Commit the result::
+
$ git commit -a -m"REL: prepare 1.14.x for further development"
$ git push upstream maintenance/1.14.x
@@ -182,7 +195,9 @@ Upload to PyPI
--------------
Upload to PyPI using ``twine``. A recent version of ``twine`` is needed
-after recent PyPI changes, version ``1.11.0`` was used here.::
+after recent PyPI changes; version ``1.11.0`` was used here.
+
+.. code-block:: sh
$ cd ../numpy
$ twine upload release/installers/*.whl
diff --git a/doc/TESTS.rst.txt b/doc/TESTS.rst.txt
index 8169ea38a..14cb28df8 100644
--- a/doc/TESTS.rst.txt
+++ b/doc/TESTS.rst.txt
@@ -321,35 +321,33 @@ Known failures & skipping tests
Sometimes you might want to skip a test or mark it as a known failure,
such as when the test suite is being written before the code it's
meant to test, or if a test only fails on a particular architecture.
-The decorators from numpy.testing.dec can be used to do this.
To skip a test, simply use ``skipif``::
- from numpy.testing import dec
+ import pytest
- @dec.skipif(SkipMyTest, "Skipping this test because...")
+ @pytest.mark.skipif(SkipMyTest, reason="Skipping this test because...")
def test_something(foo):
...
The test is marked as skipped if ``SkipMyTest`` evaluates to nonzero,
and the message in verbose test output is the second argument given to
``skipif``. Similarly, a test can be marked as a known failure by
-using ``knownfailureif``::
+using ``xfail``::
- from numpy.testing import dec
+ import pytest
- @dec.knownfailureif(MyTestFails, "This test is known to fail because...")
+ @pytest.mark.xfail(MyTestFails, reason="This test is known to fail because...")
def test_something_else(foo):
...
Of course, a test can be unconditionally skipped or marked as a known
-failure by passing ``True`` as the first argument to ``skipif`` or
-``knownfailureif``, respectively.
+failure by using ``skip`` or ``xfail`` without a condition argument,
+respectively.
A total of the number of skipped and known failing tests is displayed
at the end of the test run. Skipped tests are marked as ``'S'`` in
the test results (or ``'SKIPPED'`` for ``verbose > 1``), and known
-failing tests are marked as ``'K'`` (or ``'KNOWN'`` if ``verbose >
+failing tests are marked as ``'x'`` (or ``'XFAIL'`` if ``verbose >
1``).
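Putting the two markers together, a complete test module might look like the
following sketch (the conditions and reasons are illustrative)::

    import sys

    import pytest
    import numpy as np

    # The condition is evaluated when the test is collected.
    @pytest.mark.skipif(sys.maxsize <= 2**32,
                        reason="requires a 64-bit platform")
    def test_intp_size():
        assert np.intp(0).itemsize == 8

    # Reported as 'x' / 'XFAIL' instead of a failure.
    @pytest.mark.xfail(reason="demonstrates a known-failure marker")
    def test_known_failure():
        assert 1 + 1 == 3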
Tests on random data
diff --git a/doc/changelog/1.16.4-changelog.rst b/doc/changelog/1.16.4-changelog.rst
new file mode 100644
index 000000000..b32881c37
--- /dev/null
+++ b/doc/changelog/1.16.4-changelog.rst
@@ -0,0 +1,39 @@
+
+Contributors
+============
+
+A total of 10 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Charles Harris
+* Eric Wieser
+* Dennis Zollo +
+* Hunter Damron +
+* Jingbei Li +
+* Kevin Sheppard
+* Matti Picus
+* Nicola Soranzo +
+* Sebastian Berg
+* Tyler Reddy
+
+Pull requests merged
+====================
+
+A total of 16 pull requests were merged for this release.
+
+* `#13392 <https://github.com/numpy/numpy/pull/13392>`__: BUG: Some PyPy versions lack PyStructSequence_InitType2.
+* `#13394 <https://github.com/numpy/numpy/pull/13394>`__: MAINT, DEP: Fix deprecated ``assertEquals()``
+* `#13396 <https://github.com/numpy/numpy/pull/13396>`__: BUG: Fix structured_to_unstructured on single-field types (backport)
+* `#13549 <https://github.com/numpy/numpy/pull/13549>`__: BLD: Make CI pass again with pytest 4.5
+* `#13552 <https://github.com/numpy/numpy/pull/13552>`__: TST: Register markers in conftest.py.
+* `#13559 <https://github.com/numpy/numpy/pull/13559>`__: BUG: Removes ValueError for empty kwargs in arraymultiter_new
+* `#13560 <https://github.com/numpy/numpy/pull/13560>`__: BUG: Add TypeError to accepted exceptions in crackfortran.
+* `#13561 <https://github.com/numpy/numpy/pull/13561>`__: BUG: Handle subarrays in descr_to_dtype
+* `#13562 <https://github.com/numpy/numpy/pull/13562>`__: BUG: Protect generators from log(0.0)
+* `#13563 <https://github.com/numpy/numpy/pull/13563>`__: BUG: Always return views from structured_to_unstructured when...
+* `#13564 <https://github.com/numpy/numpy/pull/13564>`__: BUG: Catch stderr when checking compiler version
+* `#13565 <https://github.com/numpy/numpy/pull/13565>`__: BUG: longdouble(int) does not work
+* `#13587 <https://github.com/numpy/numpy/pull/13587>`__: BUG: distutils/system_info.py fix missing subprocess import (#13523)
+* `#13620 <https://github.com/numpy/numpy/pull/13620>`__: BUG,DEP: Fix writeable flag setting for arrays without base
+* `#13641 <https://github.com/numpy/numpy/pull/13641>`__: MAINT: Prepare for the 1.16.4 release.
+* `#13644 <https://github.com/numpy/numpy/pull/13644>`__: BUG: special case object arrays when printing rel-, abs-error
diff --git a/doc/neps/nep-0010-new-iterator-ufunc.rst b/doc/neps/nep-0010-new-iterator-ufunc.rst
index 8601b4a4c..fd7b3e52c 100644
--- a/doc/neps/nep-0010-new-iterator-ufunc.rst
+++ b/doc/neps/nep-0010-new-iterator-ufunc.rst
@@ -1877,8 +1877,8 @@ the new iterator.
Here is one of the original functions, for reference, and some
random image data.::
- In [5]: rand1 = np.random.random_sample(1080*1920*4).astype(np.float32)
- In [6]: rand2 = np.random.random_sample(1080*1920*4).astype(np.float32)
+ In [5]: rand1 = np.random.random(1080*1920*4).astype(np.float32)
+ In [6]: rand2 = np.random.random(1080*1920*4).astype(np.float32)
In [7]: image1 = rand1.reshape(1080,1920,4).swapaxes(0,1)
In [8]: image2 = rand2.reshape(1080,1920,4).swapaxes(0,1)
diff --git a/doc/neps/nep-0016-abstract-array.rst b/doc/neps/nep-0016-abstract-array.rst
index 86d164d8e..7551b11b9 100644
--- a/doc/neps/nep-0016-abstract-array.rst
+++ b/doc/neps/nep-0016-abstract-array.rst
@@ -266,7 +266,7 @@ array, then they'll get a segfault. Right now, in the same situation,
``asarray`` will instead invoke the object's ``__array__`` method, or
use the buffer interface to make a view, or pass through an array with
object dtype, or raise an error, or similar. Probably none of these
-outcomes are actually desireable in most cases, so maybe making it a
+outcomes are actually desirable in most cases, so maybe making it a
segfault instead would be OK? But it's dangerous given that we don't
know how common such code is. OTOH, if we were starting from scratch
then this would probably be the ideal solution.
diff --git a/doc/neps/nep-0018-array-function-protocol.rst b/doc/neps/nep-0018-array-function-protocol.rst
index de5adeacd..fb9b838b5 100644
--- a/doc/neps/nep-0018-array-function-protocol.rst
+++ b/doc/neps/nep-0018-array-function-protocol.rst
@@ -10,7 +10,7 @@ NEP 18 — A dispatch mechanism for NumPy's high level array functions
:Status: Provisional
:Type: Standards Track
:Created: 2018-05-29
-:Updated: 2019-04-11
+:Updated: 2019-05-25
:Resolution: https://mail.python.org/pipermail/numpy-discussion/2018-August/078493.html
Abstract
@@ -98,12 +98,15 @@ A prototype implementation can be found in
.. note::
- Dispatch with the ``__array_function__`` protocol has been implemented on
- NumPy's master branch but is not yet enabled by default. In NumPy 1.16,
- you will need to set the environment variable
- ``NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=1`` before importing NumPy to test
- NumPy function overrides. We anticipate the protocol will be enabled by
- default in NumPy 1.17.
+ Dispatch with the ``__array_function__`` protocol has been implemented but is
+ not yet enabled by default:
+
+ - In NumPy 1.16, you need to set the environment variable
+ ``NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=1`` before importing NumPy to test
+ NumPy function overrides.
+ - In NumPy 1.17, the protocol will be enabled by default, but can be disabled
+ with ``NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=0``.
+ - Eventually, expect ``__array_function__`` to always be enabled.
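For instance, the NumPy 1.16 opt-in can be done from within Python, as long as
it happens before NumPy is first imported (an illustrative sketch; the
variable is read once at import time):

.. code:: python

    import os

    # Must be set before the first `import numpy`.
    os.environ['NUMPY_EXPERIMENTAL_ARRAY_FUNCTION'] = '1'

    import numpy as np  # overrides are now active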
The interface
~~~~~~~~~~~~~
@@ -208,75 +211,6 @@ were explicitly used in the NumPy function call.
be impossible to correctly override NumPy functions from another object
if the operation also includes one of your objects.
-Avoiding nested ``__array_function__`` overrides
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The special ``__skip_array_function__`` attribute found on NumPy functions that
-support overrides with ``__array_function__`` allows for calling these
-functions without any override checks.
-
-``__skip_array_function__`` always points back to the original NumPy-array
-specific implementation of a function. These functions do not check for
-``__array_function__`` overrides, and instead usually coerce all of their
-array-like arguments to NumPy arrays.
-
-.. note::
-
- ``__skip_array_function__`` was not included as part of the initial
- opt-in-only preview of ``__array_function__`` in NumPy 1.16.
-
-Defaulting to NumPy's coercive implementations
-''''''''''''''''''''''''''''''''''''''''''''''
-
-Some projects may prefer to default to NumPy's implementation, rather than
-explicitly defining implementing a supported API. This allows for incrementally
-overriding NumPy's API in projects that already support it implicitly by
-allowing their objects to be converted into NumPy arrays (e.g., because they
-implemented special methods such as ``__array__``). We don't recommend this
-for most new projects ("Explicit is better than implicit"), but in some cases
-it is the most expedient option.
-
-Adapting the previous example:
-
-.. code:: python
-
- class MyArray:
- def __array_function__(self, func, types, args, kwargs):
- # It is still best practice to defer to unrecognized types
- if not all(issubclass(t, (MyArray, np.ndarray)) for t in types):
- return NotImplemented
-
- my_func = HANDLED_FUNCTIONS.get(func)
- if my_func is None:
- return func.__skip_array_function__(*args, **kwargs)
- return my_func(*args, **kwargs)
-
- def __array__(self, dtype):
- # convert this object into a NumPy array
-
-Now, if a NumPy function that isn't explicitly handled is called on
-``MyArray`` object, the operation will act (almost) as if MyArray's
-``__array_function__`` method never existed.
-
-Explicitly reusing NumPy's implementation
-'''''''''''''''''''''''''''''''''''''''''
-
-``__skip_array_function__`` is also convenient for cases where an explicit
-set of NumPy functions should still use NumPy's implementation, by
-calling ``func.__skip__array_function__(*args, **kwargs)`` inside
-``__array_function__`` instead of ``func(*args, **kwargs)`` (which would
-lead to infinite recursion). For example, to explicitly reuse NumPy's
-``array_repr()`` function on a custom array type:
-
-.. code:: python
-
- class MyArray:
- def __array_function__(self, func, types, args, kwargs):
- ...
- if func is np.array_repr:
- return np.array_repr.__skip_array_function__(*args, **kwargs)
- ...
-
Necessary changes within the NumPy codebase itself
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -400,12 +334,7 @@ The ``__array_function__`` method on ``numpy.ndarray``
The use cases for subclasses with ``__array_function__`` are the same as those
with ``__array_ufunc__``, so ``numpy.ndarray`` also defines a
-``__array_function__`` method.
-
-``ndarray.__array_function__`` is a trivial case of the "Defaulting to NumPy's
-implementation" strategy described above: *every* NumPy function on NumPy
-arrays is defined by calling NumPy's own implementation if there are other
-overrides:
+``__array_function__`` method:
.. code:: python
@@ -413,23 +342,34 @@ overrides:
if not all(issubclass(t, ndarray) for t in types):
# Defer to any non-subclasses that implement __array_function__
return NotImplemented
- return func.__skip_array_function__(*args, **kwargs)
+
+ # Use NumPy's private implementation without __array_function__
+ # dispatching
+ return func._implementation(*args, **kwargs)
This method matches NumPy's dispatching rules, so for the most part it is
possible to pretend that ``ndarray.__array_function__`` does not exist.
+The private ``_implementation`` attribute, defined below in the
+``array_function_dispatch`` decorator, allows us to avoid the special cases for
+NumPy arrays that were needed in the ``__array_ufunc__`` protocol.
The ``__array_function__`` protocol always calls subclasses before
superclasses, so if any ``ndarray`` subclasses are involved in an operation,
they will get the chance to override it, just as if any other argument
-overrides ``__array_function__``. However, the default behavior in an operation
+overrides ``__array_function__``. But the default behavior in an operation
that combines a base NumPy array and a subclass is different: if the subclass
returns ``NotImplemented``, NumPy's implementation of the function will be
called instead of raising an exception. This is appropriate since subclasses
are `expected to be substitutable <https://en.wikipedia.org/wiki/Liskov_substitution_principle>`_.
-Notice that the ``__skip_array_function__`` function attribute allows us
-to avoid the special cases for NumPy arrays that were needed in the
-``__array_ufunc__`` protocol.
+We nonetheless advise authors of subclasses to exercise caution when relying
+upon details of NumPy's internal implementations. It is not always possible to
+write a perfectly substitutable ndarray subclass, e.g., in cases involving the
+creation of new arrays, not least because NumPy makes use of internal
+optimizations specialized to base NumPy arrays, e.g., code written in C. Even
+if NumPy's implementation happens to work today, it may not work in the future.
+In these cases, your recourse is to re-implement top-level NumPy functions via
+``__array_function__`` on your subclass.
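As a minimal sketch of that recourse (the ``np.mean`` override below is a
placeholder, not a recommendation):

.. code:: python

    import numpy as np

    class SubArray(np.ndarray):
        def __array_function__(self, func, types, args, kwargs):
            # Re-implement selected top-level functions; defer everything
            # else to ndarray's handling of the protocol.
            if func is np.mean:
                return 0.0  # placeholder re-implementation
            return super().__array_function__(func, types, args, kwargs)

    x = np.arange(4).view(SubArray)
    np.mean(x)  # returns 0.0 once the protocol is enabled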
Changes within NumPy functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -441,9 +381,8 @@ but of fairly simple and innocuous code that should complete quickly and
without effect if no arguments implement the ``__array_function__``
protocol.
-In most cases, these functions should written using the
-``array_function_dispatch`` decorator. Error checking aside, here's what the
-core implementation looks like:
+To achieve this, we define an ``array_function_dispatch`` decorator to rewrite
+NumPy functions. The basic implementation is as follows:
.. code:: python
@@ -457,25 +396,27 @@ core implementation looks like:
implementation, public_api, relevant_args, args, kwargs)
if module is not None:
public_api.__module__ = module
- public_api.__skip_array_function__ = implementation
+ # for ndarray.__array_function__
+ public_api._implementation = implementation
return public_api
return decorator
# example usage
- def broadcast_to(array, shape, subok=None):
+ def _broadcast_to_dispatcher(array, shape, subok=None):
return (array,)
- @array_function_dispatch(broadcast_to, module='numpy')
+ @array_function_dispatch(_broadcast_to_dispatcher, module='numpy')
def broadcast_to(array, shape, subok=False):
... # existing definition of np.broadcast_to
Using a decorator is great! We don't need to change the definitions of
existing NumPy functions, and only need to write a few additional lines
-to define dispatcher function. We originally thought that we might want to
-implement dispatching for some NumPy functions without the decorator, but
-so far it seems to cover every case.
+for the dispatcher function. We could even reuse a single dispatcher for
+families of functions with the same signature (e.g., ``sum`` and ``prod``).
+For such functions, the largest change could be adding a few lines to the
+docstring to note which arguments are checked for overloads.
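As a sketch of that reuse (hypothetical, and assuming the
``array_function_dispatch`` decorator defined above is in scope), a single
dispatcher can serve several reductions:

.. code:: python

    def _reduction_dispatcher(a, axis=None, dtype=None, out=None):
        # Only array-like arguments matter for dispatch.
        return (a, out)

    @array_function_dispatch(_reduction_dispatcher, module='numpy')
    def sum(a, axis=None, dtype=None, out=None):
        ...  # existing definition of np.sum

    @array_function_dispatch(_reduction_dispatcher, module='numpy')
    def prod(a, axis=None, dtype=None, out=None):
        ...  # existing definition of np.prod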
-Within NumPy's implementation, it's worth calling out the decorator's use of
+It's particularly worth calling out the decorator's use of
``functools.wraps``:
- This ensures that the wrapped function has the same name and docstring as
@@ -489,14 +430,6 @@ Within NumPy's implementation, it's worth calling out the decorator's use of
The example usage illustrates several best practices for writing dispatchers
relevant to NumPy contributors:
-- We gave the "dispatcher" function ``broadcast_to`` the exact same name and
- arguments as the "implementation" function. The matching arguments are
- required, because the function generated by ``array_function_dispatch`` will
- call the dispatcher in *exactly* the same way as it was called. The matching
- function name isn't strictly necessary, but ensures that Python reports the
- original function name in error messages if invalid arguments are used, e.g.,
- ``TypeError: broadcast_to() got an unexpected keyword argument``.
-
- We passed the ``module`` argument, which in turn sets the ``__module__``
attribute on the generated function. This is for the benefit of better error
messages, here for errors raised internally by NumPy when no implementation
@@ -600,36 +533,6 @@ concerned about performance differences measured in microsecond(s) on NumPy
functions, because it's difficult to do *anything* in Python in less than a
microsecond.
-For rare cases where NumPy functions are called in performance critical inner
-loops on small arrays or scalars, it is possible to avoid the overhead of
-dispatching by calling the versions of NumPy functions skipping
-``__array_function__`` checks available in the ``__skip_array_function__``
-attribute. For example:
-
-.. code:: python
-
- dot = getattr(np.dot, '__skip_array_function__', np.dot)
-
- def naive_matrix_power(x, n):
- x = np.array(x)
- for _ in range(n):
- dot(x, x, out=x)
- return x
-
-NumPy will use this internally to minimize overhead for NumPy functions
-defined in terms of other NumPy functions, but
-**we do not recommend it for most users**:
-
-- The specific implementation of overrides is still provisional, so the
- ``__skip_array_function__`` attribute on particular functions could be
- removed in any NumPy release without warning.
- For this reason, access to ``__skip_array_function__`` attribute outside of
- ``__array_function__`` methods should *always* be guarded by using
- ``getattr()`` with a default value.
-- In cases where this makes a difference, you will get far greater speed-ups
- rewriting your inner loops in a compiled language, e.g., with Cython or
- Numba.
-
Use outside of NumPy
~~~~~~~~~~~~~~~~~~~~
@@ -739,7 +642,7 @@ layer, separating NumPy's high level API from default implementations on
The downsides are that this would require an explicit opt-in from all
existing code, e.g., ``import numpy.api as np``, and in the long term
-would result in the maintainence of two separate NumPy APIs. Also, many
+would result in the maintenance of two separate NumPy APIs. Also, many
functions from ``numpy`` itself are already overloaded (but
inadequately), so confusion about high vs. low level APIs in NumPy would
still persist.
@@ -809,48 +712,60 @@ nearly every public function in NumPy's API. This does not preclude the future
possibility of rewriting NumPy functions in terms of simplified core
functionality with ``__array_function__`` and a protocol and/or base class for
ensuring that arrays expose methods and properties like ``numpy.ndarray``.
+However, to work well this would require the possibility of implementing
+*some* but not all functions with ``__array_function__``, e.g., as described
+in the next section.
-Coercion to a NumPy array as a catch-all fallback
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Partial implementation of NumPy's API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
With the current design, classes that implement ``__array_function__``
-to overload at least one function can opt-out of overriding other functions
-by using the ``__skip_array_function__`` function, as described above under
-"Defaulting to NumPy's implementation."
-
-However, this still results in different behavior than not implementing
-``__array_function__`` in at least one edge case. If multiple objects implement
-``__array_function__`` but don't know about each other NumPy will raise
-``TypeError`` if all methods return ``NotImplemented``, whereas if no arguments
-defined ``__array_function__`` methods it would attempt to coerce all of them
-to NumPy arrays.
-
-Alternatively, this could be "fixed" by writing a ``__array_function__``
-method that always calls ``__skip_array_function__()`` instead of returning
-``NotImplemented`` for some functions, but that would result in a type
-whose implementation cannot be overriden by over argumetns -- like NumPy
-arrays themselves prior to the introduction of this protocol.
-
-Either way, it is not possible to *exactly* maintain the current behavior of
-all NumPy functions if at least one more function is overriden. If preserving
-this behavior is important, we could potentially solve it by changing the
-handling of return values in ``__array_function__`` in either of two ways:
-
-1. Change the meaning of all arguments returning ``NotImplemented`` to indicate
- that all arguments should be coerced to NumPy arrays and the operation
- should be retried. However, many array libraries (e.g., scipy.sparse) really
- don't want implicit conversions to NumPy arrays, and often avoid implementing
- ``__array__`` for exactly this reason. Implicit conversions can result in
- silent bugs and performance degradation.
+to overload at least one function implicitly declare an intent to
+implement the entire NumPy API. It's not possible to implement *only*
+``np.concatenate()`` on a type, but fall back to NumPy's default
+behavior of casting with ``np.asarray()`` for all other functions.
+
+This could present a backwards compatibility concern that would
+discourage libraries from adopting ``__array_function__`` in an
+incremental fashion. For example, currently most numpy functions will
+implicitly convert ``pandas.Series`` objects into NumPy arrays, behavior
+that assuredly many pandas users rely on. If pandas implemented
+``__array_function__`` only for ``np.concatenate``, unrelated NumPy
+functions like ``np.nanmean`` would suddenly break on pandas objects by
+raising TypeError.
+
+Even libraries that reimplement most of NumPy's public API sometimes rely upon
+using utility functions from NumPy without a wrapper. For example, both CuPy
+and JAX simply `use an alias <https://github.com/numpy/numpy/issues/12974>`_ to
+``np.result_type``, which already supports duck-types with a ``dtype``
+attribute.
+
+With ``__array_ufunc__``, it's possible to alleviate this concern by
+casting all arguments to numpy arrays and re-calling the ufunc, but the
+heterogeneous function signatures supported by ``__array_function__``
+make it impossible to implement this generic fallback behavior for
+``__array_function__``.
+
+We considered three possible ways to resolve this issue, but none were
+entirely satisfactory:
+
+1. Change the meaning of all arguments returning ``NotImplemented`` from
+ ``__array_function__`` to indicate that all arguments should be coerced to
+ NumPy arrays and the operation should be retried. However, many array
+ libraries (e.g., scipy.sparse) really don't want implicit conversions to
+ NumPy arrays, and often avoid implementing ``__array__`` for exactly this
+ reason. Implicit conversions can result in silent bugs and performance
+ degradation.
Potentially, we could enable this behavior only for types that implement
``__array__``, which would resolve the most problematic cases like
scipy.sparse. But in practice, a large fraction of classes that present a
high level API like NumPy arrays already implement ``__array__``. This would
preclude reliable use of NumPy's high level API on these objects.
+
2. Use another sentinel value of some sort, e.g.,
- ``np.NotImplementedButCoercible``, to indicate that a class implementing part
- of NumPy's higher level array API is coercible as a fallback. If all
+ ``np.NotImplementedButCoercible``, to indicate that a class implementing
+ part of NumPy's higher level array API is coercible as a fallback. If all
arguments return ``NotImplementedButCoercible``, arguments would be coerced
and the operation would be retried.
@@ -863,10 +778,20 @@ handling of return values in ``__array_function__`` in either of two ways:
logic an arbitrary number of times. Either way, the dispatching rules would
definitely get more complex and harder to reason about.
-At present, neither of these alternatives looks like a good idea. Reusing
-``__skip_array_function__()`` looks like it should suffice for most purposes.
-Arguably this loss in flexibility is a virtue: fallback implementations often
-result in unpredictable and undesired behavior.
+3. Allow access to NumPy's implementation of functions, e.g., in the form of
+ a publicly exposed ``__skip_array_function__`` attribute on the NumPy
+ functions. This would allow for falling back to NumPy's implementation by
+ using ``func.__skip_array_function__`` inside ``__array_function__``
+ methods, and could also potentially be used to avoid the
+ overhead of dispatching. However, it runs the risk of potentially exposing
+ details of NumPy's implementations for NumPy functions that do not call
+ ``np.asarray()`` internally. See
+ `this note <https://mail.python.org/pipermail/numpy-discussion/2019-May/079541.html>`_
+ for a summary of the full discussion.
+
+These solutions would solve real use cases, but at the cost of additional
+complexity. We would like to gain experience with how ``__array_function__`` is
+actually used before making decisions that would be difficult to roll back.
A magic decorator that inspects type annotations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -965,8 +890,7 @@ There are two other arguments that we think *might* be important to pass to
- Access to the non-dispatched implementation (i.e., before wrapping with
``array_function_dispatch``) in ``ndarray.__array_function__`` would allow
us to drop special case logic for that method from
- ``implement_array_function``. *Update: This has been implemented, as the
- ``__skip_array_function__`` attributes.*
+ ``implement_array_function``.
- Access to the ``dispatcher`` function passed into
``array_function_dispatch()`` would allow ``__array_function__``
implementations to determine the list of "array-like" arguments in a generic
diff --git a/doc/neps/nep-0019-rng-policy.rst b/doc/neps/nep-0019-rng-policy.rst
index 46dcbd0d7..aa5fdc653 100644
--- a/doc/neps/nep-0019-rng-policy.rst
+++ b/doc/neps/nep-0019-rng-policy.rst
@@ -143,11 +143,11 @@ We will be more strict about a select subset of methods on these BitGenerator
objects. They MUST guarantee stream-compatibility for a specified set
of methods which are chosen to make it easier to compose them to build other
distributions and which are needed to abstract over the implementation details
-of the variety of core PRNG algorithms. Namely,
+of the variety of BitGenerator algorithms. Namely,
* ``.bytes()``
- * ``integers`` (formerly ``.random_uintegers()``)
- * ``random`` (formerly ``.random_sample()``)
+ * ``integers()`` (formerly ``.random_integers()``)
+ * ``random()`` (formerly ``.random_sample()``)
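For illustration, the stream-guaranteed methods look like this on the new
``Generator`` interface (assuming NumPy 1.17+, where ``Generator`` and
``PCG64`` are available)::

    from numpy.random import Generator, PCG64

    gen = Generator(PCG64(12345))
    gen.bytes(8)            # 8 random bytes
    gen.random(3)           # 3 floats in [0, 1); formerly random_sample
    gen.integers(0, 10, 3)  # 3 integers drawn from [0, 10)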
The distributions class (``Generator``) SHOULD have all of the same
distribution methods as ``RandomState`` with close-enough function signatures
@@ -296,7 +296,7 @@ satisfactory subset. At least some projects used a fairly broad selection of
the ``RandomState`` methods in unit tests.
Downstream project owners would have been forced to modify their code to
-accomodate the new PRNG subsystem. Some modifications might be simply
+accommodate the new PRNG subsystem. Some modifications might be simply
mechanical, but the bulk of the work would have been tedious churn for no
positive improvement to the downstream project, just avoiding being broken.
diff --git a/doc/neps/nep-0026-missing-data-summary.rst b/doc/neps/nep-0026-missing-data-summary.rst
index e99138cdd..78fe999df 100644
--- a/doc/neps/nep-0026-missing-data-summary.rst
+++ b/doc/neps/nep-0026-missing-data-summary.rst
@@ -669,7 +669,7 @@ NumPy could more easily be overtaken by another project.
In the case of the existing NA contribution at issue, how we resolve
this disagreement represents a decision about how NumPy's
-developers, contributers, and users should interact. If we create
+developers, contributors, and users should interact. If we create
a document describing a dispute resolution process, how do we
design it so that it doesn't introduce a large burden and excessive
uncertainty on developers that could prevent them from productively
@@ -677,7 +677,7 @@ contributing code?
If we go this route of writing up a decision process which includes
such a dispute resolution mechanism, I think the meat of it should
-be a roadmap that potential contributers and developers can follow
+be a roadmap that potential contributors and developers can follow
to gain influence over NumPy. NumPy development needs broad support
beyond code contributions, and tying influence in the project to
contributions seems to me like it would be a good way to encourage
diff --git a/doc/neps/nep-0027-zero-rank-arrarys.rst b/doc/neps/nep-0027-zero-rank-arrarys.rst
index d932bb609..430397235 100644
--- a/doc/neps/nep-0027-zero-rank-arrarys.rst
+++ b/doc/neps/nep-0027-zero-rank-arrarys.rst
@@ -51,7 +51,7 @@ However there are some important differences:
* Array scalars are immutable
* Array scalars have different python type for different data types
-
+
Motivation for Array Scalars
----------------------------
@@ -62,7 +62,7 @@ we will try to explain why it is necessary to have three different ways to
represent a number.
There were several numpy-discussion threads:
-
+
* `rank-0 arrays`_ in a 2002 mailing list thread.
* Thoughts about zero dimensional arrays vs Python scalars in a `2005 mailing list thread`_]
@@ -71,7 +71,7 @@ It has been suggested several times that NumPy just use rank-0 arrays to
represent scalar quantities in all case. Pros and cons of converting rank-0
arrays to scalars were summarized as follows:
-- Pros:
+- Pros:
- Some cases when Python expects an integer (the most
dramatic is when slicing and indexing a sequence:
@@ -94,15 +94,15 @@ arrays to scalars were summarized as follows:
files (though this could also be done by a special case
in the pickling code for arrays)
-- Cons:
+- Cons:
- It is difficult to write generic code because scalars
do not have the same methods and attributes as arrays.
(such as ``.type`` or ``.shape``). Also Python scalars have
- different numeric behavior as well.
+ different numeric behavior as well.
- - This results in a special-case checking that is not
- pleasant. Fundamentally it lets the user believe that
+ - This results in a special-case checking that is not
+ pleasant. Fundamentally it lets the user believe that
somehow multidimensional homogeneous arrays
are something like Python lists (which except for
Object arrays they are not).
@@ -117,7 +117,7 @@ The Need for Zero-Rank Arrays
-----------------------------
Once the idea to use zero-rank arrays to represent scalars was rejected, it was
-natural to consider whether zero-rank arrays can be eliminated alltogether.
+natural to consider whether zero-rank arrays can be eliminated altogether.
However there are some important use cases where zero-rank arrays cannot be
replaced by array scalars. See also `A case for rank-0 arrays`_ from February
2006.
@@ -164,12 +164,12 @@ Alexander started a `Jan 2006 discussion`_ on scipy-dev
with the following proposal:
... it may be reasonable to allow ``a[...]``. This way
- ellipsis can be interpereted as any number of ``:`` s including zero.
+ ellipsis can be interpereted as any number of ``:`` s including zero.
Another subscript operation that makes sense for scalars would be
- ``a[...,newaxis]`` or even ``a[{newaxis, }* ..., {newaxis,}*]``, where
- ``{newaxis,}*`` stands for any number of comma-separated newaxis tokens.
+ ``a[...,newaxis]`` or even ``a[{newaxis, }* ..., {newaxis,}*]``, where
+ ``{newaxis,}*`` stands for any number of comma-separated newaxis tokens.
This will allow one to use ellipsis in generic code that would work on
- any numpy type.
+ any numpy type.
Francesc Altet supported the idea of ``[...]`` on zero-rank arrays and
`suggested`_ that ``[()]`` be supported as well.
@@ -204,7 +204,7 @@ remains on what should be the type of the result - zero rank ndarray or ``x.dtyp
1
Since most if not all numpy functions automatically convert zero-rank arrays to scalars on return, there is no reason for
-``[...]`` and ``[()]`` operations to be different.
+``[...]`` and ``[()]`` operations to be different.
See SVN changeset 1864 (which became git commit `9024ff0`_) for
implementation of ``x[...]`` and ``x[()]`` returning numpy scalars.
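In present-day NumPy the resulting behavior can be seen directly (an
illustrative session; the exact scalar type is platform dependent)::

    >>> import numpy as np
    >>> x = np.array(5)   # zero-rank array
    >>> x.shape
    ()
    >>> x[()]             # an array scalar
    5
    >>> type(x[()])       # e.g. numpy.int64 on a 64-bit Linux build
    <class 'numpy.int64'>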
@@ -234,7 +234,7 @@ Currently all indexing on zero-rank arrays is implemented in a special ``if (nd
that the changes do not affect any existing usage (except, the usage that
relies on exceptions). On the other hand part of motivation for these changes
was to make behavior of ndarrays more uniform and this should allow to
-eliminate ``if (nd == 0)`` checks alltogether.
+eliminate ``if (nd == 0)`` checks altogether.
Copyright
---------
diff --git a/doc/neps/roadmap.rst b/doc/neps/roadmap.rst
index a45423711..2ec0b7520 100644
--- a/doc/neps/roadmap.rst
+++ b/doc/neps/roadmap.rst
@@ -6,74 +6,78 @@ This is a live snapshot of tasks and features we will be investing resources
in. It may be used to encourage and inspire developers and to search for
funding.
-Interoperability protocols & duck typing
-----------------------------------------
-
-- `__array_function__`
-
- See `NEP 18`_ and a sample implementation_
-
-- Array Duck-Typing
-
- `NEP 22`_ `np.asduckarray()`
-
-- Mixins like `NDArrayOperatorsMixin`:
+Interoperability
+----------------
+
+We aim to make it easier to interoperate with NumPy. There are many NumPy-like
+packages that add interesting new capabilities to the Python ecosystem, as well
+as many libraries that extend NumPy's model in various ways. Work in NumPy to
+facilitate interoperability with all such packages, and the code that uses them,
+may include (among other things) interoperability protocols, better duck typing
+support and ndarray subclass handling.
+
+- The ``__array_function__`` protocol is currently experimental and needs to be
+ matured. See `NEP 18`_ for details.
+- New protocols for overriding other functionality in NumPy may be needed.
+- Array duck typing, or handling "duck arrays", needs improvements. See
+ `NEP 22`_ for details.
+
+Extensibility
+-------------
- - for mutable arrays
- - for reduction methods implemented as ufuncs
+We aim to make it much easier to extend NumPy. The primary topic here is to
+improve the dtype system.
-Better dtypes
--------------
+- Easier custom dtypes:
-- Easier custom dtypes
- Simplify and/or wrap the current C-API
- More consistent support for dtype metadata
- Support for writing a dtype in Python
-- New string dtype(s):
- - Encoded strings with fixed-width storage (utf8, latin1, ...) and/or
- - Variable length strings (could share implementation with dtype=object, but are explicitly type-checked)
- - One of these should probably be the default for text data. The current behavior on Python 3 is neither efficient nor user friendly.
-- `np.int` should not be platform dependent
-- better coercion for string + number
-Random number generation policy & rewrite
------------------------------------------
+- New string dtype(s):
-`NEP 19`_ and a `reference implementation`_
+ - Encoded strings with fixed-width storage (utf8, latin1, ...) and/or
+ - Variable length strings (could share implementation with dtype=object,
+ but are explicitly type-checked)
+ - One of these should probably be the default for text data. The current
+ behavior on Python 3 is neither efficient nor user friendly.
-Indexing
---------
+- `np.int` should not be platform dependent
+- Better coercion for string + number
-vindex/oindex `NEP 21`_
+Performance
+-----------
-Infrastructure
---------------
+We want to further improve NumPy's performance, through:
-NumPy is much more than just the code base itself, we also maintain
-docs, CI, benchmarks, etc.
+- Better use of SIMD instructions, also on platforms other than x86.
+- Reducing ufunc overhead.
+- Optimizations in individual functions.
-- Rewrite numpy.org
-- Benchmarking: improve the extent of the existing suite, and run & render
- the results as part of the docs or website.
+Furthermore, we would like to improve the benchmarking system, in terms of coverage,
+ease of use, and publication of the results (now
+`here <https://pv.github.io/numpy-bench>`__) as part of the docs or website.
- - Hardware: find a machine that can reliably run serial benchmarks
- - ASV produces graphs, could we set up a site? Currently at
- https://pv.github.io/numpy-bench/, should that become a community resource?
+Website and documentation
+-------------------------
-Functionality outside core
---------------------------
+Our website (https://numpy.org) is in very poor shape and needs to be rewritten
+completely.
-Some things inside NumPy do not actually match the `Scope of NumPy`.
+The NumPy `documentation <https://www.numpy.org/devdocs/user/index.html>`__ is
+of varying quality - in particular the User Guide needs major improvements.
-- A backend system for `numpy.fft` (so that e.g. `fft-mkl` doesn't need to monkeypatch numpy)
+Random number generation policy & rewrite
+-----------------------------------------
-- Rewrite masked arrays to not be a ndarray subclass -- maybe in a separate project?
-- MaskedArray as a duck-array type, and/or
-- dtypes that support missing values
+A new random number generation framework with higher performance generators is
+close to completion, see `NEP 19`_ and `PR 13163`_.
-- Write a strategy on how to deal with overlap between numpy and scipy for `linalg` and `fft` (and implement it).
+Indexing
+--------
-- Deprecate `np.matrix`
+We intend to add new indexing modes for "vectorized indexing" and "outer indexing",
+see `NEP 21`_.
Continuous Integration
----------------------
@@ -81,31 +85,25 @@ Continuous Integration
We depend on CI to discover problems as we continue to develop NumPy before the
code reaches downstream users.
-- CI for more exotic platforms (e.g. ARM is now available from
- http://www.shippable.com/, but it is not free).
+- CI for more exotic platforms (if available as a service).
- Multi-package testing
- Add an official channel for numpy dev builds for CI usage by other projects so
they may confirm new builds do not break their package.
-Typing
-------
+Other functionality
+-------------------
-Python type annotation syntax should support ndarrays and dtypes.
+- ``MaskedArray`` needs to be improved; ideas include:
-- Type annotations for NumPy: github.com/numpy/numpy-stubs
-- Support for typing shape and dtype in multi-dimensional arrays in Python more generally
-
-NumPy scalars
--------------
 - Rewrite masked arrays to not be an ndarray subclass -- maybe in a separate project?
+ - MaskedArray as a duck-array type, and/or
+ - dtypes that support missing values
-Numpy has both scalars and zero-dimensional arrays.
+- A backend system for ``numpy.fft`` (so that e.g. ``fft-mkl`` doesn't need to monkeypatch numpy)
+- Write a strategy on how to deal with overlap between NumPy and SciPy for ``linalg``
+ and ``fft`` (and implement it).
+- Deprecate ``np.matrix`` (very slowly)
-- The current implementation adds a large maintenance burden -- can we remove
- scalars and/or simplify it internally?
-- Zero dimensional arrays get converted into scalars by most NumPy
- functions (i.e., output of `np.sin(x)` depends on whether `x` is
- zero-dimensional or not). This inconsistency should be addressed,
- so that one could, e.g., write sane type annotations.
.. _`NEP 19`: https://www.numpy.org/neps/nep-0019-rng-policy.html
.. _`NEP 22`: http://www.numpy.org/neps/nep-0022-ndarray-duck-typing-overview.html
@@ -113,3 +111,4 @@ Numpy has both scalars and zero-dimensional arrays.
.. _implementation: https://gist.github.com/shoyer/1f0a308a06cd96df20879a1ddb8f0006
.. _`reference implementation`: https://github.com/bashtage/randomgen
.. _`NEP 21`: https://www.numpy.org/neps/nep-0021-advanced-indexing.html
+.. _`PR 13163`: https://github.com/numpy/numpy/pull/13163
diff --git a/doc/release/1.14.4-notes.rst b/doc/release/1.14.4-notes.rst
index 174094c1c..3fb94383b 100644
--- a/doc/release/1.14.4-notes.rst
+++ b/doc/release/1.14.4-notes.rst
@@ -19,7 +19,7 @@ values are now correct.
Note that NumPy will error on import if it detects incorrect float32 `dot`
results. This problem has been seen on the Mac when working in the Anaconda
-enviroment and is due to a subtle interaction between MKL and PyQt5. It is not
+environment and is due to a subtle interaction between MKL and PyQt5. It is not
strictly a NumPy problem, but it is best that users be aware of it. See the
gh-8577 NumPy issue for more information.
diff --git a/doc/release/1.16.4-notes.rst b/doc/release/1.16.4-notes.rst
new file mode 100644
index 000000000..a236b05c8
--- /dev/null
+++ b/doc/release/1.16.4-notes.rst
@@ -0,0 +1,94 @@
+==========================
+NumPy 1.16.4 Release Notes
+==========================
+
+The NumPy 1.16.4 release fixes bugs reported against the 1.16.3 release, and
+also backports several enhancements from master that seem appropriate for a
+release series that is the last to support Python 2.7. The wheels on PyPI are
+linked with OpenBLAS v0.3.7-dev, which should fix issues on Skylake series
+CPUs.
+
+Downstream developers building this release should use Cython >= 0.29.2 and,
+if using OpenBLAS, OpenBLAS > v0.3.7. The supported Python versions are 2.7 and
+3.5-3.7.
+
+
+New deprecations
+================
+Writeable flag of C-API wrapped arrays
+--------------------------------------
+When an array is created from the C-API to wrap a pointer to data, the only
+indication we have of the read-write nature of the data is the ``writeable``
+flag set during creation. It is dangerous to force the flag to writeable. In
+the future it will not be possible to switch the writeable flag to ``True``
+from Python. This deprecation should not affect many users since arrays
+created in such a manner are very rare in practice and only available through
+the NumPy C-API.
+
+
+Compatibility notes
+===================
+
+Potential changes to the random stream
+--------------------------------------
+Due to bugs in the application of log to random floating point numbers,
+the stream may change when sampling from ``np.random.beta``, ``np.random.binomial``,
+``np.random.laplace``, ``np.random.logistic``, ``np.random.logseries`` or
+``np.random.multinomial`` if a 0 is generated in the underlying MT19937 random stream.
+There is a 1 in :math:`10^{53}` chance of this occurring, and so the probability that
+the stream changes for any given seed is extremely small. If a 0 is encountered in the
+underlying generator, then the incorrect value produced (either ``np.inf``
+or ``np.nan``) is now dropped.
+
+
+Changes
+=======
+
+`numpy.lib.recfunctions.structured_to_unstructured` does not squeeze single-field views
+---------------------------------------------------------------------------------------
+Previously ``structured_to_unstructured(arr[['a']])`` would produce a squeezed
+result inconsistent with ``structured_to_unstructured(arr[['a', 'b']])``. This
+was accidental. The old behavior can be retained with
+``structured_to_unstructured(arr[['a']]).squeeze(axis=-1)`` or far more simply,
+``arr['a']``.
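+
+For example (a sketch with hypothetical field names)::
+
+    >>> import numpy as np
+    >>> from numpy.lib.recfunctions import structured_to_unstructured
+    >>> arr = np.zeros(3, dtype=[('a', 'f4'), ('b', 'f4')])
+    >>> structured_to_unstructured(arr[['a']]).shape   # now (3, 1), not (3,)
+    (3, 1)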
+
+
+Contributors
+============
+
+A total of 10 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Charles Harris
+* Eric Wieser
+* Dennis Zollo +
+* Hunter Damron +
+* Jingbei Li +
+* Kevin Sheppard
+* Matti Picus
+* Nicola Soranzo +
+* Sebastian Berg
+* Tyler Reddy
+
+
+Pull requests merged
+====================
+
+A total of 16 pull requests were merged for this release.
+
+* `#13392 <https://github.com/numpy/numpy/pull/13392>`__: BUG: Some PyPy versions lack PyStructSequence_InitType2.
+* `#13394 <https://github.com/numpy/numpy/pull/13394>`__: MAINT, DEP: Fix deprecated ``assertEquals()``
+* `#13396 <https://github.com/numpy/numpy/pull/13396>`__: BUG: Fix structured_to_unstructured on single-field types (backport)
+* `#13549 <https://github.com/numpy/numpy/pull/13549>`__: BLD: Make CI pass again with pytest 4.5
+* `#13552 <https://github.com/numpy/numpy/pull/13552>`__: TST: Register markers in conftest.py.
+* `#13559 <https://github.com/numpy/numpy/pull/13559>`__: BUG: Removes ValueError for empty kwargs in arraymultiter_new
+* `#13560 <https://github.com/numpy/numpy/pull/13560>`__: BUG: Add TypeError to accepted exceptions in crackfortran.
+* `#13561 <https://github.com/numpy/numpy/pull/13561>`__: BUG: Handle subarrays in descr_to_dtype
+* `#13562 <https://github.com/numpy/numpy/pull/13562>`__: BUG: Protect generators from log(0.0)
+* `#13563 <https://github.com/numpy/numpy/pull/13563>`__: BUG: Always return views from structured_to_unstructured when...
+* `#13564 <https://github.com/numpy/numpy/pull/13564>`__: BUG: Catch stderr when checking compiler version
+* `#13565 <https://github.com/numpy/numpy/pull/13565>`__: BUG: longdouble(int) does not work
+* `#13587 <https://github.com/numpy/numpy/pull/13587>`__: BUG: distutils/system_info.py fix missing subprocess import (#13523)
+* `#13620 <https://github.com/numpy/numpy/pull/13620>`__: BUG,DEP: Fix writeable flag setting for arrays without base
+* `#13641 <https://github.com/numpy/numpy/pull/13641>`__: MAINT: Prepare for the 1.16.4 release.
+* `#13644 <https://github.com/numpy/numpy/pull/13644>`__: BUG: special case object arrays when printing rel-, abs-error
diff --git a/doc/release/1.17.0-notes.rst b/doc/release/1.17.0-notes.rst
index f4e00f3d2..f053b2ef6 100644
--- a/doc/release/1.17.0-notes.rst
+++ b/doc/release/1.17.0-notes.rst
@@ -43,10 +43,27 @@ from python.
This deprecation should not affect many users since arrays created in such
a manner are very rare in practice and only available through the NumPy C-API.
+`numpy.nonzero` should no longer be called on 0d arrays
+-------------------------------------------------------
+The behavior of nonzero on 0d arrays was surprising, making uses of it almost
+always incorrect. If the old behavior was intended, it can be preserved without
+a warning by using ``nonzero(atleast_1d(arr))`` instead of ``nonzero(arr)``.
+In a future release, this will most likely raise a `ValueError`.
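+
+For example (a sketch)::
+
+    >>> arr = np.array(1)                  # a 0-d array
+    >>> np.nonzero(np.atleast_1d(arr))     # explicit, warning-free form
+    (array([0]),)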
+
Future Changes
==============
+Shape-1 fields in dtypes won't be collapsed to scalars in a future version
+--------------------------------------------------------------------------
+
+Currently, a field specified as ``[(name, dtype, 1)]`` or ``"1type"`` is
+interpreted as a scalar field (i.e., the same as ``[(name, dtype)]`` or
+``[(name, dtype, ())]``). This now raises a FutureWarning; in a future version,
+it will be interpreted as a shape-(1,) field, i.e. the same as ``[(name,
+dtype, (1,))]`` or ``"(1,)type"`` (consistently with ``[(name, dtype, n)]``
+/ ``"ntype"`` with ``n>1``, which is already equivalent to ``[(name, dtype,
+(n,))]`` / ``"(n,)type"``).
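+
+A sketch of the two interpretations::
+
+    np.dtype([('f', 'i4', 1)])       # warns today; currently a scalar field
+    np.dtype([('f', 'i4', (1,))])    # explicit shape-(1,) field, the future meaning
+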
Expired deprecations
====================
@@ -114,13 +131,72 @@ the stream changes for any given seed is extremely small. If a 0 is encountered
underlying generator, then the incorrect value produced (either ``np.inf``
or ``np.nan``) is now dropped.
+``i0`` now always returns a result with the same shape as the input
+-------------------------------------------------------------------
+Previously, the output was squeezed, such that, e.g., input with just a single
+element would lead to an array scalar being returned, and inputs with shapes
+such as ``(10, 1)`` would yield results that would not broadcast against the
+input.
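+
+For example (a sketch)::
+
+    >>> np.i0(np.array([0.]))   # now shape (1,); previously squeezed to a scalar
+    array([1.])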
+
+Note that we generally recommend the scipy implementation over the numpy one:
+it is a proper ufunc written in C, and more than an order of magnitude faster.
+
+``np.can_cast`` no longer assumes all unsafe casting is allowed
+---------------------------------------------------------------
+Previously, ``can_cast`` returned `True` for almost all inputs for
+``casting='unsafe'``, even for cases where casting was not possible, such as
+from a structured dtype to a regular one. This has been fixed, making it
+more consistent with actual casting using, e.g., the ``.astype`` method.
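+
+A sketch of the new behavior (the structured dtype is illustrative)::
+
+    >>> np.can_cast(np.dtype([('a', 'f4')]), np.float32, casting='unsafe')
+    False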
+
C API changes
=============
+dimension or stride input arguments are now passed by ``npy_intp const*``
+-------------------------------------------------------------------------
+Previously these function arguments were declared as the more strict
+``npy_intp*``, which prevented the caller passing constant data.
+This change is backwards compatible, but now allows code like::
+
+ npy_intp const fixed_dims[] = {1, 2, 3};
+ // no longer complains that the const-qualifier is discarded
+ npy_intp size = PyArray_MultiplyList(fixed_dims, 3);
+
New Features
============
+libFLAME
+--------
+Support for building NumPy with the libFLAME linear algebra package as the
+LAPACK implementation, see
+`libFLAME <https://www.cs.utexas.edu/~flame/web/libFLAME.html>`_ for details.
+
+User-defined BLAS detection order
+---------------------------------
+``numpy.distutils`` now uses an environment variable, comma-separated and case
+insensitive, to determine the detection order for BLAS libraries.
+By default ``NPY_BLAS_ORDER=mkl,blis,openblas,atlas,accelerate,blas``.
+For example, to force the use of OpenBLAS::
+
+    NPY_BLAS_ORDER=openblas python setup.py build
+
+This may be helpful for users who have an MKL installation but wish to try
+out different implementations.
+
+User-defined LAPACK detection order
+-----------------------------------
+``numpy.distutils`` now uses an environment variable, comma-separated and case
+insensitive, to determine the detection order for LAPACK libraries.
+By default ``NPY_LAPACK_ORDER=mkl,openblas,flame,atlas,accelerate,lapack``.
+For example, to force the use of OpenBLAS::
+
+    NPY_LAPACK_ORDER=openblas python setup.py build
+
+This may be helpful for users who have an MKL installation but wish to try
+out different implementations.
+
``np.ufunc.reduce`` and related functions now accept a ``where`` mask
---------------------------------------------------------------------
``np.ufunc.reduce``, ``np.sum``, ``np.prod``, ``np.min``, ``np.max`` all
@@ -163,12 +239,17 @@ divmod operation is now supported for two ``timedelta64`` operands
The divmod operator now handles two ``np.timedelta64`` operands, with
type signature mm->qm.
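+
+For example::
+
+    >>> divmod(np.timedelta64(7, 's'), np.timedelta64(2, 's'))
+    (3, numpy.timedelta64(1,'s'))
+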
+``np.fromfile`` now takes an ``offset`` argument
+------------------------------------------------
+This function now takes an ``offset`` keyword argument for binary files,
+which specifies the offset (in bytes) from the file's current position.
+Defaults to 0.
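+
+For example (a sketch; the file name is illustrative)::
+
+    >>> a = np.arange(10, dtype=np.int64)
+    >>> a.tofile('data.bin')
+    >>> np.fromfile('data.bin', dtype=np.int64, offset=16)   # skip two items
+    array([2, 3, 4, 5, 6, 7, 8, 9])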
+
New mode "empty" for ``np.pad``
-------------------------------
This mode pads an array to a desired shape without initializing the new
entries.
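+
+For example (a sketch; the padded values are uninitialized)::
+
+    np.pad(np.array([1, 2, 3]), 2, mode='empty')   # shape (7,), ends arbitrary
+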
-
``np.empty_like`` and related functions now accept a ``shape`` argument
-----------------------------------------------------------------------
``np.empty_like``, ``np.full_like``, ``np.ones_like`` and ``np.zeros_like`` now
@@ -183,6 +264,18 @@ Floating point scalars implement ``as_integer_ratio`` to match the builtin float
This returns a (numerator, denominator) pair, which can be used to construct a
`fractions.Fraction`.
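+
+For example::
+
+    >>> np.float64(0.25).as_integer_ratio()
+    (1, 4)
+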
+structured ``dtype`` objects can be indexed with multiple fields names
+----------------------------------------------------------------------
+``arr.dtype[['a', 'b']]`` now returns a dtype that is equivalent to
+``arr[['a', 'b']].dtype``, for consistency with
+``arr.dtype['a'] == arr['a'].dtype``.
+
+Like the dtype of structured arrays indexed with a list of fields, this dtype
+has the same `itemsize` as the original, but only keeps a subset of the fields.
+
+This means that `arr[['a', 'b']]` and ``arr.view(arr.dtype[['a', 'b']])`` are
+equivalent.
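+
+For example (a sketch with hypothetical field names)::
+
+    >>> arr = np.zeros(3, dtype=[('a', 'i4'), ('b', 'f4'), ('c', 'f8')])
+    >>> arr[['a', 'c']].dtype == arr.dtype[['a', 'c']]
+    True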
+
``.npy`` files support unicode field names
------------------------------------------
A new format version of 3.0 has been introduced, which enables structured types
@@ -271,6 +364,11 @@ concatenation.
In some cases where ``np.interp`` would previously return ``np.nan``, it now
returns an appropriate infinity.
+Pathlib support for ``np.fromfile``, ``ndarray.tofile`` and ``ndarray.dump``
+----------------------------------------------------------------------------
+``np.fromfile``, ``np.ndarray.tofile`` and ``np.ndarray.dump`` now support
+the `pathlib.Path` type for the ``file``/``fid`` parameter.
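+
+For example (a sketch; the path is illustrative)::
+
+    >>> from pathlib import Path
+    >>> a = np.arange(5)
+    >>> a.tofile(Path('data.bin'))
+    >>> np.fromfile(Path('data.bin'), dtype=a.dtype)
+    array([0, 1, 2, 3, 4])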
+
Specialized ``np.isnan``, ``np.isinf``, and ``np.isfinite`` ufuncs for bool and int types
-----------------------------------------------------------------------------------------
The boolean and integer types are incapable of storing ``np.nan`` and
@@ -377,9 +475,14 @@ The interface may use an ``offset`` value that was mistakenly ignored.
Pickle protocol in ``np.savez`` set to 3 for ``force zip64`` flag
-----------------------------------------------------------------
-
``np.savez`` was not using the ``force_zip64`` flag, which limited the size of
the archive to 2GB. But using the flag requires us to use pickle protocol 3 to
write ``object`` arrays. The protocol used was bumped to 3, meaning the archive
will be unreadable by Python2.
+Structured arrays indexed with non-existent fields raise ``KeyError`` not ``ValueError``
+----------------------------------------------------------------------------------------
+``arr['bad_field']`` on a structured type raises ``KeyError``, for consistency
+with ``dict['bad_field']``.
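+
+For example (field names are illustrative)::
+
+    arr = np.zeros(3, dtype=[('a', 'i4')])
+    arr['bad_field']    # raises KeyError, previously ValueError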
+
+.. _`NEP 18` : http://www.numpy.org/neps/nep-0018-array-function-protocol.html
diff --git a/doc/source/about.rst b/doc/source/about.rst
index 5ac4facbb..3e83833d1 100644
--- a/doc/source/about.rst
+++ b/doc/source/about.rst
@@ -8,7 +8,7 @@ needed for scientific computing with Python. This package contains:
- sophisticated :ref:`(broadcasting) functions <ufuncs>`
- basic :ref:`linear algebra functions <routines.linalg>`
- basic :ref:`Fourier transforms <routines.fft>`
-- sophisticated :ref:`random number capabilities <routines.random>`
+- sophisticated :ref:`random number capabilities <numpyrandom>`
- tools for integrating Fortran code
- tools for integrating C/C++ code
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 072a3b44e..fa0c0e7e4 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -19,11 +19,19 @@ needs_sphinx = '1.0'
sys.path.insert(0, os.path.abspath('../sphinxext'))
-extensions = ['sphinx.ext.autodoc', 'numpydoc',
- 'sphinx.ext.intersphinx', 'sphinx.ext.coverage',
- 'sphinx.ext.doctest', 'sphinx.ext.autosummary',
- 'sphinx.ext.graphviz', 'sphinx.ext.ifconfig',
- 'matplotlib.sphinxext.plot_directive']
+extensions = [
+ 'sphinx.ext.autodoc',
+ 'numpydoc',
+ 'sphinx.ext.intersphinx',
+ 'sphinx.ext.coverage',
+ 'sphinx.ext.doctest',
+ 'sphinx.ext.autosummary',
+ 'sphinx.ext.graphviz',
+ 'sphinx.ext.ifconfig',
+ 'matplotlib.sphinxext.plot_directive',
+ 'IPython.sphinxext.ipython_console_highlighting',
+ 'IPython.sphinxext.ipython_directive',
+]
if sphinx.__version__ >= "1.4":
extensions.append('sphinx.ext.imgmath')
@@ -234,7 +242,7 @@ numpydoc_use_plots = True
# -----------------------------------------------------------------------------
import glob
-autosummary_generate = glob.glob("reference/*.rst")
+autosummary_generate = True
# -----------------------------------------------------------------------------
# Coverage checker
@@ -355,3 +363,21 @@ def linkcode_resolve(domain, info):
else:
return "https://github.com/numpy/numpy/blob/v%s/numpy/%s%s" % (
numpy.__version__, fn, linespec)
+
+from pygments.lexers import CLexer
+from pygments import token
+from sphinx.highlighting import lexers
+import copy
+
+class NumPyLexer(CLexer):
+ name = 'NUMPYLEXER'
+
+ tokens = copy.deepcopy(lexers['c'].tokens)
+ # Extend the regex for valid identifiers with @
+ for k, val in tokens.items():
+ for i, v in enumerate(val):
+ if isinstance(v, tuple):
+ if isinstance(v[0], str):
+ val[i] = (v[0].replace('a-zA-Z', 'a-zA-Z@'),) + v[1:]
+
+lexers['NumPyC'] = NumPyLexer(stripnl=False)
diff --git a/doc/source/dev/development_environment.rst b/doc/source/dev/development_environment.rst
index 445ce3204..bc491b711 100644
--- a/doc/source/dev/development_environment.rst
+++ b/doc/source/dev/development_environment.rst
@@ -147,9 +147,9 @@ That also takes extra arguments, like ``--pdb`` which drops you into the Python
debugger when a test fails or an exception is raised.
Running tests with `tox`_ is also supported. For example, to build NumPy and
-run the test suite with Python 3.4, use::
+run the test suite with Python 3.7, use::
- $ tox -e py34
+ $ tox -e py37
For more extensive information, see :ref:`testing-guidelines`
diff --git a/doc/source/reference/arrays.dtypes.rst b/doc/source/reference/arrays.dtypes.rst
index b55feb247..ab743a8ee 100644
--- a/doc/source/reference/arrays.dtypes.rst
+++ b/doc/source/reference/arrays.dtypes.rst
@@ -538,6 +538,7 @@ Attributes providing additional information:
dtype.isnative
dtype.descr
dtype.alignment
+ dtype.base
Methods
diff --git a/doc/source/reference/c-api.array.rst b/doc/source/reference/c-api.array.rst
index aeb55ca03..bd6062b16 100644
--- a/doc/source/reference/c-api.array.rst
+++ b/doc/source/reference/c-api.array.rst
@@ -33,7 +33,7 @@ sub-types).
Returns a pointer to the dimensions/shape of the array. The
number of elements matches the number of dimensions
- of the array.
+ of the array. Can return ``NULL`` for 0-dimensional arrays.
.. c:function:: npy_intp *PyArray_SHAPE(PyArrayObject *arr)
@@ -199,8 +199,8 @@ From scratch
^^^^^^^^^^^^
.. c:function:: PyObject* PyArray_NewFromDescr( \
- PyTypeObject* subtype, PyArray_Descr* descr, int nd, npy_intp* dims, \
- npy_intp* strides, void* data, int flags, PyObject* obj)
+ PyTypeObject* subtype, PyArray_Descr* descr, int nd, npy_intp const* dims, \
+ npy_intp const* strides, void* data, int flags, PyObject* obj)
This function steals a reference to *descr*. The easiest way to get one
is using :c:func:`PyArray_DescrFromType`.
@@ -219,7 +219,7 @@ From scratch
If *data* is ``NULL``, then new unitinialized memory will be allocated and
*flags* can be non-zero to indicate a Fortran-style contiguous array. Use
- :c:func:`PyArray_FILLWBYTE` to initialze the memory.
+ :c:func:`PyArray_FILLWBYTE` to initialize the memory.
If *data* is not ``NULL``, then it is assumed to point to the memory
to be used for the array and the *flags* argument is used as the
@@ -266,8 +266,9 @@ From scratch
base-class array.
.. c:function:: PyObject* PyArray_New( \
- PyTypeObject* subtype, int nd, npy_intp* dims, int type_num, \
- npy_intp* strides, void* data, int itemsize, int flags, PyObject* obj)
+ PyTypeObject* subtype, int nd, npy_intp const* dims, int type_num, \
+ npy_intp const* strides, void* data, int itemsize, int flags, \
+ PyObject* obj)
This is similar to :c:func:`PyArray_NewFromDescr` (...) except you
specify the data-type descriptor with *type_num* and *itemsize*,
@@ -288,7 +289,7 @@ From scratch
are passed in they must be consistent with the dimensions, the
itemsize, and the data of the array.
-.. c:function:: PyObject* PyArray_SimpleNew(int nd, npy_intp* dims, int typenum)
+.. c:function:: PyObject* PyArray_SimpleNew(int nd, npy_intp const* dims, int typenum)
Create a new uninitialized array of type, *typenum*, whose size in
each of *nd* dimensions is given by the integer array, *dims*.The memory
@@ -301,7 +302,7 @@ From scratch
used to create a flexible-type array (no itemsize given).
.. c:function:: PyObject* PyArray_SimpleNewFromData( \
- int nd, npy_intp* dims, int typenum, void* data)
+ int nd, npy_intp const* dims, int typenum, void* data)
Create an array wrapper around *data* pointed to by the given
pointer. The array flags will have a default that the data area is
@@ -316,7 +317,7 @@ From scratch
as the ndarray is deallocated, set the OWNDATA flag on the returned ndarray.
.. c:function:: PyObject* PyArray_SimpleNewFromDescr( \
- int nd, npy_intp* dims, PyArray_Descr* descr)
+    int nd, npy_intp const* dims, PyArray_Descr* descr)
This function steals a reference to *descr*.
@@ -330,7 +331,7 @@ From scratch
This macro calls memset, so obj must be contiguous.
.. c:function:: PyObject* PyArray_Zeros( \
- int nd, npy_intp* dims, PyArray_Descr* dtype, int fortran)
+ int nd, npy_intp const* dims, PyArray_Descr* dtype, int fortran)
Construct a new *nd* -dimensional array with shape given by *dims*
and data type given by *dtype*. If *fortran* is non-zero, then a
@@ -339,13 +340,13 @@ From scratch
corresponds to :c:type:`NPY_OBJECT` ).
.. c:function:: PyObject* PyArray_ZEROS( \
- int nd, npy_intp* dims, int type_num, int fortran)
+ int nd, npy_intp const* dims, int type_num, int fortran)
Macro form of :c:func:`PyArray_Zeros` which takes a type-number instead
of a data-type object.
.. c:function:: PyObject* PyArray_Empty( \
- int nd, npy_intp* dims, PyArray_Descr* dtype, int fortran)
+ int nd, npy_intp const* dims, PyArray_Descr* dtype, int fortran)
Construct a new *nd* -dimensional array with shape given by *dims*
and data type given by *dtype*. If *fortran* is non-zero, then a
@@ -355,7 +356,7 @@ From scratch
filled with :c:data:`Py_None`.
.. c:function:: PyObject* PyArray_EMPTY( \
- int nd, npy_intp* dims, int typenum, int fortran)
+ int nd, npy_intp const* dims, int typenum, int fortran)
Macro form of :c:func:`PyArray_Empty` which takes a type-number,
*typenum*, instead of a data-type object.
@@ -1671,11 +1672,13 @@ Conversion
.. c:function:: PyObject* PyArray_GetField( \
PyArrayObject* self, PyArray_Descr* dtype, int offset)
- Equivalent to :meth:`ndarray.getfield<numpy.ndarray.getfield>` (*self*, *dtype*, *offset*). Return
- a new array of the given *dtype* using the data in the current
- array at a specified *offset* in bytes. The *offset* plus the
- itemsize of the new array type must be less than *self*
- ->descr->elsize or an error is raised. The same shape and strides
+ Equivalent to :meth:`ndarray.getfield<numpy.ndarray.getfield>`
+ (*self*, *dtype*, *offset*). This function `steals a reference
+   <https://docs.python.org/3/c-api/intro.html#reference-count-details>`_
+ to `PyArray_Descr` and returns a new array of the given `dtype` using
+   the data in the current array at a specified `offset` in bytes. The
+   `offset` plus the itemsize of the new array type must be less than
+   ``self->descr->elsize`` or an error is raised. The same shape and strides
as the original array are used. Therefore, this function has the
effect of returning a field from a structured array. But, it can also
be used to select specific bytes or groups of bytes from any array
@@ -2355,8 +2358,8 @@ Other functions
^^^^^^^^^^^^^^^
.. c:function:: Bool PyArray_CheckStrides( \
- int elsize, int nd, npy_intp numbytes, npy_intp* dims, \
- npy_intp* newstrides)
+ int elsize, int nd, npy_intp numbytes, npy_intp const* dims, \
+ npy_intp const* newstrides)
Determine if *newstrides* is a strides array consistent with the
memory of an *nd* -dimensional array with shape ``dims`` and
@@ -2368,14 +2371,14 @@ Other functions
*elsize* refer to a single-segment array. Return :c:data:`NPY_TRUE` if
*newstrides* is acceptable, otherwise return :c:data:`NPY_FALSE`.
-.. c:function:: npy_intp PyArray_MultiplyList(npy_intp* seq, int n)
+.. c:function:: npy_intp PyArray_MultiplyList(npy_intp const* seq, int n)
-.. c:function:: int PyArray_MultiplyIntList(int* seq, int n)
+.. c:function:: int PyArray_MultiplyIntList(int const* seq, int n)
Both of these routines multiply an *n* -length array, *seq*, of
integers and return the result. No overflow checking is performed.
-.. c:function:: int PyArray_CompareLists(npy_intp* l1, npy_intp* l2, int n)
+.. c:function:: int PyArray_CompareLists(npy_intp const* l1, npy_intp const* l2, int n)
Given two *n* -length arrays of integers, *l1*, and *l2*, return
1 if the lists are identical; otherwise, return 0.
diff --git a/doc/source/reference/c-api.config.rst b/doc/source/reference/c-api.config.rst
index 60bf61a32..05e6fe44d 100644
--- a/doc/source/reference/c-api.config.rst
+++ b/doc/source/reference/c-api.config.rst
@@ -101,3 +101,22 @@ Platform information
Returns the endianness of the current platform.
One of :c:data:`NPY_CPU_BIG`, :c:data:`NPY_CPU_LITTLE`,
or :c:data:`NPY_CPU_UNKNOWN_ENDIAN`.
+
+
+Compiler directives
+-------------------
+
+.. c:var:: NPY_LIKELY
+.. c:var:: NPY_UNLIKELY
+.. c:var:: NPY_UNUSED
+
+
+Interrupt Handling
+------------------
+
+.. c:var:: NPY_INTERRUPT_H
+.. c:var:: NPY_SIGSETJMP
+.. c:var:: NPY_SIGLONGJMP
+.. c:var:: NPY_SIGJMP_BUF
+.. c:var:: NPY_SIGINT_ON
+.. c:var:: NPY_SIGINT_OFF
diff --git a/doc/source/reference/c-api.coremath.rst b/doc/source/reference/c-api.coremath.rst
index bb457eb0d..7e00322f9 100644
--- a/doc/source/reference/c-api.coremath.rst
+++ b/doc/source/reference/c-api.coremath.rst
@@ -185,7 +185,7 @@ Those can be useful for precise floating point comparison.
* NPY_FPE_INVALID
Note that :c:func:`npy_get_floatstatus_barrier` is preferable as it prevents
- agressive compiler optimizations reordering the call relative to
+ aggressive compiler optimizations reordering the call relative to
the code setting the status, which could lead to incorrect results.
.. versionadded:: 1.9.0
@@ -193,7 +193,7 @@ Those can be useful for precise floating point comparison.
.. c:function:: int npy_get_floatstatus_barrier(char*)
Get floating point status. A pointer to a local variable is passed in to
- prevent aggresive compiler optimizations from reodering this function call
+   prevent aggressive compiler optimizations from reordering this function call
relative to the code setting the status, which could lead to incorrect
results.
@@ -211,7 +211,7 @@ Those can be useful for precise floating point comparison.
Clears the floating point status. Returns the previous status mask.
Note that :c:func:`npy_clear_floatstatus_barrier` is preferable as it
- prevents agressive compiler optimizations reordering the call relative to
+ prevents aggressive compiler optimizations reordering the call relative to
the code setting the status, which could lead to incorrect results.
.. versionadded:: 1.9.0
@@ -219,7 +219,7 @@ Those can be useful for precise floating point comparison.
.. c:function:: int npy_clear_floatstatus_barrier(char*)
Clears the floating point status. A pointer to a local variable is passed in to
- prevent aggresive compiler optimizations from reodering this function call.
+   prevent aggressive compiler optimizations from reordering this function call.
Returns the previous status mask.
.. versionadded:: 1.15.0
diff --git a/doc/source/reference/c-api.iterator.rst b/doc/source/reference/c-api.iterator.rst
index 940452d3c..b77d029cc 100644
--- a/doc/source/reference/c-api.iterator.rst
+++ b/doc/source/reference/c-api.iterator.rst
@@ -593,25 +593,23 @@ Construction and Destruction
code doing iteration can write to this operand to
control which elements will be untouched and which ones will be
modified. This is useful when the mask should be a combination
- of input masks, for example. Mask values can be created
- with the :c:func:`NpyMask_Create` function.
+ of input masks.
.. c:var:: NPY_ITER_WRITEMASKED
.. versionadded:: 1.7
- Indicates that only elements which the operand with
- the ARRAYMASK flag indicates are intended to be modified
- by the iteration. In general, the iterator does not enforce
- this, it is up to the code doing the iteration to follow
- that promise. Code can use the :c:func:`NpyMask_IsExposed`
- inline function to test whether the mask at a particular
- element allows writing.
+    This array is the mask for all `writemasked <numpy.nditer>`
+    operands. The ``writemasked`` flag indicates that only
+    elements where the chosen ARRAYMASK operand is True will be
+    written to. In general, the iterator does not enforce this;
+    it is up to the code doing the iteration to keep that
+    promise.
- When this flag is used, and this operand is buffered, this
- changes how data is copied from the buffer into the array.
+    When the ``writemasked`` flag is used, and this operand is buffered,
+ this changes how data is copied from the buffer into the array.
A masked copying routine is used, which only copies the
- elements in the buffer for which :c:func:`NpyMask_IsExposed`
+ elements in the buffer for which ``writemasked``
returns true from the corresponding element in the ARRAYMASK
operand.
@@ -630,7 +628,7 @@ Construction and Destruction
.. c:function:: NpyIter* NpyIter_AdvancedNew( \
npy_intp nop, PyArrayObject** op, npy_uint32 flags, NPY_ORDER order, \
NPY_CASTING casting, npy_uint32* op_flags, PyArray_Descr** op_dtypes, \
- int oa_ndim, int** op_axes, npy_intp* itershape, npy_intp buffersize)
+ int oa_ndim, int** op_axes, npy_intp const* itershape, npy_intp buffersize)
Extends :c:func:`NpyIter_MultiNew` with several advanced options providing
more control over broadcasting and buffering.
@@ -867,7 +865,7 @@ Construction and Destruction
} while (iternext2(iter2));
} while (iternext1(iter1));
-.. c:function:: int NpyIter_GotoMultiIndex(NpyIter* iter, npy_intp* multi_index)
+.. c:function:: int NpyIter_GotoMultiIndex(NpyIter* iter, npy_intp const* multi_index)
Adjusts the iterator to point to the ``ndim`` indices
pointed to by ``multi_index``. Returns an error if a multi-index
@@ -974,19 +972,6 @@ Construction and Destruction
Returns the number of operands in the iterator.
- When :c:data:`NPY_ITER_USE_MASKNA` is used on an operand, a new
- operand is added to the end of the operand list in the iterator
- to track that operand's NA mask. Thus, this equals the number
- of construction operands plus the number of operands for
- which the flag :c:data:`NPY_ITER_USE_MASKNA` was specified.
-
-.. c:function:: int NpyIter_GetFirstMaskNAOp(NpyIter* iter)
-
- .. versionadded:: 1.7
-
- Returns the index of the first NA mask operand in the array. This
- value is equal to the number of operands passed into the constructor.
-
.. c:function:: npy_intp* NpyIter_GetAxisStrideArray(NpyIter* iter, int axis)
Gets the array of strides for the specified axis. Requires that
@@ -1023,16 +1008,6 @@ Construction and Destruction
that are being iterated. The result points into ``iter``,
so the caller does not gain any references to the PyObjects.
-.. c:function:: npy_int8* NpyIter_GetMaskNAIndexArray(NpyIter* iter)
-
- .. versionadded:: 1.7
-
- This gives back a pointer to the ``nop`` indices which map
- construction operands with :c:data:`NPY_ITER_USE_MASKNA` flagged
- to their corresponding NA mask operands and vice versa. For
- operands which were not flagged with :c:data:`NPY_ITER_USE_MASKNA`,
- this array contains negative values.
-
.. c:function:: PyObject* NpyIter_GetIterView(NpyIter* iter, npy_intp i)
This gives back a reference to a new ndarray view, which is a view
diff --git a/doc/source/reference/c-api.types-and-structures.rst b/doc/source/reference/c-api.types-and-structures.rst
index f411ebc44..a716b5a06 100644
--- a/doc/source/reference/c-api.types-and-structures.rst
+++ b/doc/source/reference/c-api.types-and-structures.rst
@@ -57,8 +57,8 @@ types are place holders that allow the array scalars to fit into a
hierarchy of actual Python types.
-PyArray_Type
-------------
+PyArray_Type and PyArrayObject
+------------------------------
.. c:var:: PyArray_Type
@@ -74,7 +74,7 @@ PyArray_Type
subclasses) will have this structure. For future compatibility,
these structure members should normally be accessed using the
provided macros. If you need a shorter name, then you can make use
- of :c:type:`NPY_AO` which is defined to be equivalent to
+ of :c:type:`NPY_AO` (deprecated) which is defined to be equivalent to
:c:type:`PyArrayObject`.
.. code-block:: c
@@ -91,7 +91,7 @@ PyArray_Type
PyObject *weakreflist;
} PyArrayObject;
-.. c:macro: PyArrayObject.PyObject_HEAD
+.. c:macro:: PyArrayObject.PyObject_HEAD
This is needed by all Python objects. It consists of (at least)
a reference count member ( ``ob_refcnt`` ) and a pointer to the
@@ -130,14 +130,16 @@ PyArray_Type
.. c:member:: PyObject *PyArrayObject.base
This member is used to hold a pointer to another Python object that
- is related to this array. There are two use cases: 1) If this array
- does not own its own memory, then base points to the Python object
- that owns it (perhaps another array object), 2) If this array has
- the (deprecated) :c:data:`NPY_ARRAY_UPDATEIFCOPY` or
- :c:data:NPY_ARRAY_WRITEBACKIFCOPY`: flag set, then this array is
- a working copy of a "misbehaved" array. When
- ``PyArray_ResolveWritebackIfCopy`` is called, the array pointed to by base
- will be updated with the contents of this array.
+ is related to this array. There are two use cases:
+
+ - If this array does not own its own memory, then base points to the
+ Python object that owns it (perhaps another array object)
+ - If this array has the (deprecated) :c:data:`NPY_ARRAY_UPDATEIFCOPY` or
+ :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` flag set, then this array is a working
+ copy of a "misbehaved" array.
+
+ When ``PyArray_ResolveWritebackIfCopy`` is called, the array pointed to
+ by base will be updated with the contents of this array.
.. c:member:: PyArray_Descr *PyArrayObject.descr
@@ -163,8 +165,8 @@ PyArray_Type
weakref module).
-PyArrayDescr_Type
------------------
+PyArrayDescr_Type and PyArray_Descr
+-----------------------------------
.. c:var:: PyArrayDescr_Type
@@ -253,11 +255,13 @@ PyArrayDescr_Type
.. c:var:: NPY_ITEM_REFCOUNT
- .. c:var:: NPY_ITEM_HASOBJECT
-
Indicates that items of this data-type must be reference
counted (using :c:func:`Py_INCREF` and :c:func:`Py_DECREF` ).
+ .. c:var:: NPY_ITEM_HASOBJECT
+
+ Same as :c:data:`NPY_ITEM_REFCOUNT`.
+
.. c:var:: NPY_LIST_PICKLE
Indicates arrays of this data-type must be converted to a list
@@ -530,20 +534,19 @@ PyArrayDescr_Type
and ``is2`` *bytes*, respectively. This function requires
behaved (though not necessarily contiguous) memory.
- .. c:member:: int scanfunc(FILE* fd, void* ip , void* sep , void* arr)
+ .. c:member:: int scanfunc(FILE* fd, void* ip, void* arr)
A pointer to a function that scans (scanf style) one element
of the corresponding type from the file descriptor ``fd`` into
the array memory pointed to by ``ip``. The array is assumed
- to be behaved. If ``sep`` is not NULL, then a separator string
- is also scanned from the file before returning. The last
- argument ``arr`` is the array to be scanned into. A 0 is
- returned if the scan is successful. A negative number
- indicates something went wrong: -1 means the end of file was
- reached before the separator string could be scanned, -4 means
- that the end of file was reached before the element could be
- scanned, and -3 means that the element could not be
- interpreted from the format string. Requires a behaved array.
+ to be behaved.
+ The last argument ``arr`` is the array to be scanned into.
+      Returns the number of receiving arguments successfully assigned (which
+ may be zero in case a matching failure occurred before the first
+ receiving argument was assigned), or EOF if input failure occurs
+ before the first receiving argument was assigned.
+ This function should be called without holding the Python GIL, and
+ has to grab it for error reporting.
.. c:member:: int fromstr(char* str, void* ip, char** endptr, void* arr)
@@ -554,6 +557,8 @@ PyArrayDescr_Type
string. The last argument ``arr`` is the array into which ip
points (needed for variable-size data- types). Returns 0 on
success or -1 on failure. Requires a behaved array.
+ This function should be called without holding the Python GIL, and
+ has to grab it for error reporting.
.. c:member:: Bool nonzero(void* data, void* arr)
@@ -675,25 +680,28 @@ PyArrayDescr_Type
The :c:data:`PyArray_Type` typeobject implements many of the features of
-Python objects including the tp_as_number, tp_as_sequence,
-tp_as_mapping, and tp_as_buffer interfaces. The rich comparison
-(tp_richcompare) is also used along with new-style attribute lookup
-for methods (tp_methods) and properties (tp_getset). The
-:c:data:`PyArray_Type` can also be sub-typed.
+:c:type:`Python objects <PyTypeObject>` including the :c:member:`tp_as_number
+<PyTypeObject.tp_as_number>`, :c:member:`tp_as_sequence
+<PyTypeObject.tp_as_sequence>`, :c:member:`tp_as_mapping
+<PyTypeObject.tp_as_mapping>`, and :c:member:`tp_as_buffer
+<PyTypeObject.tp_as_buffer>` interfaces. The :c:type:`rich comparison
+<richcmpfunc>` is also used along with new-style attribute lookup for
+members (:c:member:`tp_members <PyTypeObject.tp_members>`) and properties
+(:c:member:`tp_getset <PyTypeObject.tp_getset>`).
+The :c:data:`PyArray_Type` can also be sub-typed.
.. tip::
- The tp_as_number methods use a generic approach to call whatever
- function has been registered for handling the operation. The
- function PyNumeric_SetOps(..) can be used to register functions to
- handle particular mathematical operations (for all arrays). When
- the umath module is imported, it sets the numeric operations for
- all arrays to the corresponding ufuncs. The tp_str and tp_repr
- methods can also be altered using PyString_SetStringFunction(...).
+ The ``tp_as_number`` methods use a generic approach to call whatever
+ function has been registered for handling the operation. When the
+   ``_multiarray_umath`` module is imported, it sets the numeric operations
+   for all arrays to the corresponding ufuncs. This choice can be changed with
+   :c:func:`PyUFunc_ReplaceLoopBySignature`. The ``tp_str`` and ``tp_repr``
+ methods can also be altered using :c:func:`PyArray_SetStringFunction`.
-PyUFunc_Type
-------------
+PyUFunc_Type and PyUFuncObject
+------------------------------
.. c:var:: PyUFunc_Type
@@ -785,8 +793,8 @@ PyUFunc_Type
the identity for this operation. It is only used for a
reduce-like call on an empty array.
- .. c:member:: void PyUFuncObject.functions(char** args, npy_intp* dims,
- npy_intp* steps, void* extradata)
+ .. c:member:: void PyUFuncObject.functions( \
+ char** args, npy_intp* dims, npy_intp* steps, void* extradata)
An array of function pointers --- one for each data type
supported by the ufunc. This is the vector loop that is called
@@ -931,8 +939,8 @@ PyUFunc_Type
- :c:data:`UFUNC_CORE_DIM_SIZE_INFERRED` if the dim size will be
determined from the operands and not from a :ref:`frozen <frozen>` signature
-PyArrayIter_Type
-----------------
+PyArrayIter_Type and PyArrayIterObject
+--------------------------------------
.. c:var:: PyArrayIter_Type
@@ -1041,8 +1049,8 @@ with it through the use of the macros :c:func:`PyArray_ITER_NEXT` (it),
:c:type:`PyArrayIterObject *`.
-PyArrayMultiIter_Type
----------------------
+PyArrayMultiIter_Type and PyArrayMultiIterObject
+------------------------------------------------
.. c:var:: PyArrayMultiIter_Type
@@ -1103,8 +1111,8 @@ PyArrayMultiIter_Type
arrays to be broadcast together. On return, the iterators are
adjusted for broadcasting.
-PyArrayNeighborhoodIter_Type
-----------------------------
+PyArrayNeighborhoodIter_Type and PyArrayNeighborhoodIterObject
+--------------------------------------------------------------
.. c:var:: PyArrayNeighborhoodIter_Type
@@ -1117,8 +1125,33 @@ PyArrayNeighborhoodIter_Type
:c:data:`PyArrayNeighborhoodIter_Type` is the
:c:type:`PyArrayNeighborhoodIterObject`.
-PyArrayFlags_Type
------------------
+ .. code-block:: c
+
+ typedef struct {
+ PyObject_HEAD
+ int nd_m1;
+ npy_intp index, size;
+        npy_intp coordinates[NPY_MAXDIMS];
+ npy_intp dims_m1[NPY_MAXDIMS];
+ npy_intp strides[NPY_MAXDIMS];
+ npy_intp backstrides[NPY_MAXDIMS];
+ npy_intp factors[NPY_MAXDIMS];
+ PyArrayObject *ao;
+ char *dataptr;
+ npy_bool contiguous;
+ npy_intp bounds[NPY_MAXDIMS][2];
+ npy_intp limits[NPY_MAXDIMS][2];
+ npy_intp limits_sizes[NPY_MAXDIMS];
+ npy_iter_get_dataptr_t translate;
+ npy_intp nd;
+ npy_intp dimensions[NPY_MAXDIMS];
+ PyArrayIterObject* _internal_iter;
+ char* constant;
+ int mode;
+ } PyArrayNeighborhoodIterObject;
+
+PyArrayFlags_Type and PyArrayFlagsObject
+----------------------------------------
.. c:var:: PyArrayFlags_Type
@@ -1128,6 +1161,16 @@ PyArrayFlags_Type
attributes or by accessing them as if the object were a dictionary
with the flag names as entries.
+.. c:type:: PyArrayFlagsObject
+
+ .. code-block:: c
+
+ typedef struct PyArrayFlagsObject {
+ PyObject_HEAD
+ PyObject *arr;
+ int flags;
+ } PyArrayFlagsObject;
+
ScalarArrayTypes
----------------
diff --git a/doc/source/reference/distutils.rst b/doc/source/reference/distutils.rst
index 88e533832..46e5ec25e 100644
--- a/doc/source/reference/distutils.rst
+++ b/doc/source/reference/distutils.rst
@@ -214,102 +214,4 @@ template and placed in the build directory to be used instead. Two
forms of template conversion are supported. The first form occurs for
files named <file>.ext.src where ext is a recognized Fortran
extension (f, f90, f95, f77, for, ftn, pyf). The second form is used
-for all other cases.
-
-.. index::
- single: code generation
-
-Fortran files
--------------
-
-This template converter will replicate all **function** and
-**subroutine** blocks in the file with names that contain '<...>'
-according to the rules in '<...>'. The number of comma-separated words
-in '<...>' determines the number of times the block is repeated. What
-these words are indicates what that repeat rule, '<...>', should be
-replaced with in each block. All of the repeat rules in a block must
-contain the same number of comma-separated words indicating the number
-of times that block should be repeated. If the word in the repeat rule
-needs a comma, leftarrow, or rightarrow, then prepend it with a
-backslash ' \'. If a word in the repeat rule matches ' \\<index>' then
-it will be replaced with the <index>-th word in the same repeat
-specification. There are two forms for the repeat rule: named and
-short.
-
-
-Named repeat rule
-^^^^^^^^^^^^^^^^^
-
-A named repeat rule is useful when the same set of repeats must be
-used several times in a block. It is specified using <rule1=item1,
-item2, item3,..., itemN>, where N is the number of times the block
-should be repeated. On each repeat of the block, the entire
-expression, '<...>' will be replaced first with item1, and then with
-item2, and so forth until N repeats are accomplished. Once a named
-repeat specification has been introduced, the same repeat rule may be
-used **in the current block** by referring only to the name
-(i.e. <rule1>.
-
-
-Short repeat rule
-^^^^^^^^^^^^^^^^^
-
-A short repeat rule looks like <item1, item2, item3, ..., itemN>. The
-rule specifies that the entire expression, '<...>' should be replaced
-first with item1, and then with item2, and so forth until N repeats
-are accomplished.
-
-
-Pre-defined names
-^^^^^^^^^^^^^^^^^
-
-The following predefined named repeat rules are available:
-
-- <prefix=s,d,c,z>
-
-- <_c=s,d,c,z>
-
-- <_t=real, double precision, complex, double complex>
-
-- <ftype=real, double precision, complex, double complex>
-
-- <ctype=float, double, complex_float, complex_double>
-
-- <ftypereal=float, double precision, \\0, \\1>
-
-- <ctypereal=float, double, \\0, \\1>
-
-
-Other files
------------
-
-Non-Fortran files use a separate syntax for defining template blocks
-that should be repeated using a variable expansion similar to the
-named repeat rules of the Fortran-specific repeats. The template rules
-for these files are:
-
-1. "/\**begin repeat "on a line by itself marks the beginning of
- a segment that should be repeated.
-
-2. Named variable expansions are defined using #name=item1, item2, item3,
- ..., itemN# and placed on successive lines. These variables are
- replaced in each repeat block with corresponding word. All named
- variables in the same repeat block must define the same number of
- words.
-
-3. In specifying the repeat rule for a named variable, item*N is short-
- hand for item, item, ..., item repeated N times. In addition,
- parenthesis in combination with \*N can be used for grouping several
- items that should be repeated. Thus, #name=(item1, item2)*4# is
- equivalent to #name=item1, item2, item1, item2, item1, item2, item1,
- item2#
-
-4. "\*/ "on a line by itself marks the end of the variable expansion
- naming. The next line is the first line that will be repeated using
- the named rules.
-
-5. Inside the block to be repeated, the variables that should be expanded
- are specified as @name@.
-
-6. "/\**end repeat**/ "on a line by itself marks the previous line
- as the last line of the block to be repeated.
+for all other cases. See :ref:`templating`.
diff --git a/doc/source/reference/random/bit_generators/bitgenerators.rst b/doc/source/reference/random/bit_generators/bitgenerators.rst
new file mode 100644
index 000000000..1474f7dac
--- /dev/null
+++ b/doc/source/reference/random/bit_generators/bitgenerators.rst
@@ -0,0 +1,11 @@
+:orphan:
+
+BitGenerator
+------------
+
+.. currentmodule:: numpy.random.bit_generator
+
+.. autosummary::
+ :toctree: generated/
+
+ BitGenerator
diff --git a/doc/source/reference/random/bit_generators/index.rst b/doc/source/reference/random/bit_generators/index.rst
new file mode 100644
index 000000000..4540f60d9
--- /dev/null
+++ b/doc/source/reference/random/bit_generators/index.rst
@@ -0,0 +1,71 @@
+.. _bit_generator:
+
+.. currentmodule:: numpy.random
+
+Bit Generators
+--------------
+
+The random values produced by :class:`~Generator`
+originate in a BitGenerator. The BitGenerators do not directly provide
+random numbers and only contain methods used for seeding, getting or
+setting the state, jumping or advancing the state, and for accessing
+low-level wrappers for consumption by code that can efficiently
+access the functions provided, e.g., `numba <https://numba.pydata.org>`_.
+
+Supported BitGenerators
+=======================
+
+The included BitGenerators are:
+
+* MT19937 - The standard Python BitGenerator. Adds a `~mt19937.MT19937.jumped`
+ function that returns a new generator with state as-if ``2**128`` draws have
+ been made.
+* PCG-64 - Fast generator that supports many parallel streams and
+  can be advanced by an arbitrary amount (see the sketch below and the
+  documentation for :meth:`~.PCG64.advance`). PCG-64 has a period of
+  :math:`2^{128}`. See the `PCG author's page`_ for more details about
+  this class of PRNG.
+* Philox - a counter-based generator capable of being advanced an
+ arbitrary number of steps or generating independent streams. See the
+ `Random123`_ page for more details about this class of bit generators.
+
+.. _`PCG author's page`: http://www.pcg-random.org/
+.. _`Random123`: https://www.deshawresearch.com/resources_random123.html
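+
+A minimal sketch of building parallel streams with ``jumped``, using the
+classes above::
+
+    from numpy.random import PCG64
+
+    streams = [PCG64(1234)]
+    for _ in range(3):
+        # each jump moves far ahead in the stream, so streams do not overlap
+        streams.append(streams[-1].jumped())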
+
+
+.. toctree::
+ :maxdepth: 1
+
+ BitGenerator <bitgenerators>
+ MT19937 <mt19937>
+ PCG64 <pcg64>
+ Philox <philox>
+ SFC64 <sfc64>
+
+Seeding and Entropy
+-------------------
+
+A BitGenerator provides a stream of random values. In order to generate
+reproducible streams, BitGenerators support setting their initial state via a
+seed. But how best to seed the BitGenerator? On first impulse one would like to
+do something like ``[bg(i) for i in range(12)]`` to obtain 12 non-correlated,
+independent BitGenerators. However, using such a highly correlated set of seeds
+could generate BitGenerators that are correlated or overlap within a few samples.
+
+NumPy uses a `SeedSequence` class to mix the seed in a reproducible way that
+introduces the necessary entropy to produce independent and largely
+non-overlapping streams. Small seeds are unable to fill the complete range of
+initialization states, and lead to biases among an ensemble of small-seed
+runs. For many cases, that doesn't matter. If you just want to hold things in
+place while you debug something, biases aren't a concern. For actual
+simulations whose results you care about, let ``SeedSequence(None)`` do its
+thing and then log/print the `SeedSequence.entropy` for repeatable
+`BitGenerator` streams.
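+
+A minimal sketch of that workflow::
+
+    from numpy.random import Generator, PCG64, SeedSequence
+
+    ss = SeedSequence()        # gathers fresh entropy from the OS
+    print(ss.entropy)          # log this value to reproduce the stream later
+    rg = Generator(PCG64(ss))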
+
+.. autosummary::
+ :toctree: generated/
+
+ bit_generator.ISeedSequence
+ bit_generator.ISpawnableSeedSequence
+ SeedSequence
+ bit_generator.SeedlessSeedSequence
diff --git a/doc/source/reference/random/bit_generators/mt19937.rst b/doc/source/reference/random/bit_generators/mt19937.rst
new file mode 100644
index 000000000..25ba1d7b5
--- /dev/null
+++ b/doc/source/reference/random/bit_generators/mt19937.rst
@@ -0,0 +1,34 @@
+Mersenne Twister (MT19937)
+--------------------------
+
+.. module:: numpy.random.mt19937
+
+.. currentmodule:: numpy.random.mt19937
+
+.. autoclass:: MT19937
+ :exclude-members:
+
+State
+=====
+
+.. autosummary::
+ :toctree: generated/
+
+ ~MT19937.state
+
+Parallel generation
+===================
+.. autosummary::
+ :toctree: generated/
+
+ ~MT19937.jumped
+
+Extending
+=========
+.. autosummary::
+ :toctree: generated/
+
+ ~MT19937.cffi
+ ~MT19937.ctypes
+
+
diff --git a/doc/source/reference/random/bit_generators/pcg64.rst b/doc/source/reference/random/bit_generators/pcg64.rst
new file mode 100644
index 000000000..7aef1e0dd
--- /dev/null
+++ b/doc/source/reference/random/bit_generators/pcg64.rst
@@ -0,0 +1,33 @@
+Parallel Congruent Generator (64-bit, PCG64)
+--------------------------------------------
+
+.. module:: numpy.random.pcg64
+
+.. currentmodule:: numpy.random.pcg64
+
+.. autoclass:: PCG64
+ :exclude-members:
+
+State
+=====
+
+.. autosummary::
+ :toctree: generated/
+
+ ~PCG64.state
+
+Parallel generation
+===================
+.. autosummary::
+ :toctree: generated/
+
+ ~PCG64.advance
+ ~PCG64.jumped
+
+Extending
+=========
+.. autosummary::
+ :toctree: generated/
+
+ ~PCG64.cffi
+ ~PCG64.ctypes
diff --git a/doc/source/reference/random/bit_generators/philox.rst b/doc/source/reference/random/bit_generators/philox.rst
new file mode 100644
index 000000000..5e581e094
--- /dev/null
+++ b/doc/source/reference/random/bit_generators/philox.rst
@@ -0,0 +1,35 @@
+Philox Counter-based RNG
+------------------------
+
+.. module:: numpy.random.philox
+
+.. currentmodule:: numpy.random.philox
+
+.. autoclass:: Philox
+ :exclude-members:
+
+State
+=====
+
+.. autosummary::
+ :toctree: generated/
+
+ ~Philox.state
+
+Parallel generation
+===================
+.. autosummary::
+ :toctree: generated/
+
+ ~Philox.advance
+ ~Philox.jumped
+
+Extending
+=========
+.. autosummary::
+ :toctree: generated/
+
+ ~Philox.cffi
+ ~Philox.ctypes
+
+
diff --git a/doc/source/reference/random/bit_generators/sfc64.rst b/doc/source/reference/random/bit_generators/sfc64.rst
new file mode 100644
index 000000000..dc03820ae
--- /dev/null
+++ b/doc/source/reference/random/bit_generators/sfc64.rst
@@ -0,0 +1,28 @@
+SFC64 Small Fast Chaotic PRNG
+-----------------------------
+
+.. module:: numpy.random.sfc64
+
+.. currentmodule:: numpy.random.sfc64
+
+.. autoclass:: SFC64
+ :exclude-members:
+
+State
+=====
+
+.. autosummary::
+ :toctree: generated/
+
+ ~SFC64.state
+
+Extending
+=========
+.. autosummary::
+ :toctree: generated/
+
+ ~SFC64.cffi
+ ~SFC64.ctypes
+
+
+
diff --git a/doc/source/reference/random/entropy.rst b/doc/source/reference/random/entropy.rst
new file mode 100644
index 000000000..0664da6f9
--- /dev/null
+++ b/doc/source/reference/random/entropy.rst
@@ -0,0 +1,6 @@
+System Entropy
+==============
+
+.. module:: numpy.random.entropy
+
+.. autofunction:: random_entropy
diff --git a/doc/source/reference/random/extending.rst b/doc/source/reference/random/extending.rst
new file mode 100644
index 000000000..22f9cb7e4
--- /dev/null
+++ b/doc/source/reference/random/extending.rst
@@ -0,0 +1,165 @@
+.. currentmodule:: numpy.random
+
+Extending
+---------
+The BitGenerators have been designed to be extendable using standard tools for
+high-performance Python -- numba and Cython. The `~Generator` object can also
+be used with user-provided BitGenerators as long as these export a small set of
+required functions.
+
+Numba
+=====
+Numba can be used with either CTypes or CFFI. The current iteration of the
+BitGenerators all export a small set of functions through both interfaces.
+
+This example shows how numba can be used to produce Box-Muller normals using
+a pure Python implementation which is then compiled. The random numbers are
+provided by ``ctypes.next_double``.
+
+.. code-block:: python
+
+ from numpy.random import PCG64
+ import numpy as np
+ import numba as nb
+
+ x = PCG64()
+ f = x.ctypes.next_double
+ s = x.ctypes.state
+ state_addr = x.ctypes.state_address
+
+ def normals(n, state):
+ out = np.empty(n)
+ for i in range((n+1)//2):
+ x1 = 2.0*f(state) - 1.0
+ x2 = 2.0*f(state) - 1.0
+ r2 = x1*x1 + x2*x2
+ while r2 >= 1.0 or r2 == 0.0:
+ x1 = 2.0*f(state) - 1.0
+ x2 = 2.0*f(state) - 1.0
+ r2 = x1*x1 + x2*x2
+ g = np.sqrt(-2.0*np.log(r2)/r2)
+ out[2*i] = g*x1
+ if 2*i+1 < n:
+ out[2*i+1] = g*x2
+ return out
+
+ # Compile using Numba
+ print(normals(10, s).var())
+ # Warm up
+ normalsj = nb.jit(normals, nopython=True)
+ # Must use state address not state with numba
+ normalsj(1, state_addr)
+ %timeit normalsj(1000000, state_addr)
+ print('1,000,000 Box-Muller (numba/PCG64) randoms')
+ %timeit np.random.standard_normal(1000000)
+ print('1,000,000 Box-Muller (NumPy) randoms')
+
+
+Both CTypes and CFFI allow the more complicated distributions to be used
+directly in Numba after compiling the file distributions.c into a DLL or
+``.so``. An example showing the use of a more complicated distribution is in
+the examples folder.
+
+.. _randomgen_cython:
+
+Cython
+======
+
+Cython can be used to unpack the ``PyCapsule`` provided by a BitGenerator.
+This example uses `~pcg64.PCG64` and
+``random_gauss_zig``, the Ziggurat-based generator for normals, to fill an
+array. The usual caveats for writing high-performance code using Cython --
+removing bounds checks and wrap around, providing array alignment information
+-- still apply.
+
+.. code-block:: cython
+
+ import numpy as np
+ cimport numpy as np
+ cimport cython
+ from cpython.pycapsule cimport PyCapsule_IsValid, PyCapsule_GetPointer
+ from numpy.random.common cimport *
+ from numpy.random.distributions cimport random_gauss_zig
+ from numpy.random import PCG64
+
+
+ @cython.boundscheck(False)
+ @cython.wraparound(False)
+ def normals_zig(Py_ssize_t n):
+ cdef Py_ssize_t i
+ cdef bitgen_t *rng
+ cdef const char *capsule_name = "BitGenerator"
+ cdef double[::1] random_values
+
+ x = PCG64()
+ capsule = x.capsule
+ if not PyCapsule_IsValid(capsule, capsule_name):
+ raise ValueError("Invalid pointer to anon_func_state")
+ rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)
+ random_values = np.empty(n)
+ # Best practice is to release GIL and acquire the lock
+ with x.lock, nogil:
+ for i in range(n):
+ random_values[i] = random_gauss_zig(rng)
+ randoms = np.asarray(random_values)
+ return randoms
+
+The BitGenerator can also be directly accessed using the members of the basic
+RNG structure.
+
+.. code-block:: cython
+
+ @cython.boundscheck(False)
+ @cython.wraparound(False)
+ def uniforms(Py_ssize_t n):
+ cdef Py_ssize_t i
+ cdef bitgen_t *rng
+ cdef const char *capsule_name = "BitGenerator"
+ cdef double[::1] random_values
+
+ x = PCG64()
+ capsule = x.capsule
+        # Optional check that the capsule is from a BitGenerator
+ if not PyCapsule_IsValid(capsule, capsule_name):
+ raise ValueError("Invalid pointer to anon_func_state")
+ # Cast the pointer
+ rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)
+ random_values = np.empty(n)
+ with x.lock, nogil:
+ for i in range(n):
+ # Call the function
+ random_values[i] = rng.next_double(rng.state)
+ randoms = np.asarray(random_values)
+ return randoms
+
+These functions along with a minimal setup file are included in the
+examples folder.
+
+New Basic RNGs
+==============
+`~Generator` can be used with other user-provided BitGenerators. The simplest
+way to write a new BitGenerator is to examine the pyx file of one of the
+existing BitGenerators. The key structure that must be provided is the
+``capsule`` which contains a ``PyCapsule`` to a struct pointer of type
+``bitgen_t``,
+
+.. code-block:: c
+
+ typedef struct bitgen {
+ void *state;
+ uint64_t (*next_uint64)(void *st);
+ uint32_t (*next_uint32)(void *st);
+ double (*next_double)(void *st);
+ uint64_t (*next_raw)(void *st);
+ } bitgen_t;
+
+which provides 5 pointers. The first is an opaque pointer to the data structure
+used by the BitGenerators. The remaining four are function pointers which return
+the next 64- and 32-bit unsigned integers, the next random double and the next
+raw value. The final function, ``next_raw``, is used for testing and so can be
+set to the next 64-bit unsigned integer function if not needed. Functions inside
+``Generator`` use this structure as in
+
+.. code-block:: c
+
+ bitgen_state->next_uint64(bitgen_state->state)
diff --git a/doc/source/reference/random/generator.rst b/doc/source/reference/random/generator.rst
new file mode 100644
index 000000000..22bce2e6c
--- /dev/null
+++ b/doc/source/reference/random/generator.rst
@@ -0,0 +1,82 @@
+.. currentmodule:: numpy.random
+
+Random Generator
+----------------
+The `~Generator` provides access to
+a wide range of distributions, and serves as a replacement for
+:class:`~numpy.random.RandomState`. The main difference between
+the two is that ``Generator`` relies on an additional BitGenerator to
+manage state and generate the random bits, which are then transformed into
+random values from useful distributions. The default BitGenerator used by
+``Generator`` is `~PCG64`. The BitGenerator
+can be changed by passing an instantiated BitGenerator to ``Generator``.
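+
+A minimal usage sketch::
+
+    from numpy.random import Generator, PCG64
+
+    rg = Generator(PCG64(12345))
+    rg.standard_normal(3)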
+
+
+.. autoclass:: Generator
+ :exclude-members:
+
+Accessing the BitGenerator
+==========================
+.. autosummary::
+ :toctree: generated/
+
+ ~Generator.bit_generator
+
+Simple random data
+==================
+.. autosummary::
+ :toctree: generated/
+
+ ~Generator.integers
+ ~Generator.random
+ ~Generator.choice
+ ~Generator.bytes
+
+Permutations
+============
+.. autosummary::
+ :toctree: generated/
+
+ ~Generator.shuffle
+ ~Generator.permutation
+
+Distributions
+=============
+.. autosummary::
+ :toctree: generated/
+
+ ~Generator.beta
+ ~Generator.binomial
+ ~Generator.chisquare
+ ~Generator.dirichlet
+ ~Generator.exponential
+ ~Generator.f
+ ~Generator.gamma
+ ~Generator.geometric
+ ~Generator.gumbel
+ ~Generator.hypergeometric
+ ~Generator.laplace
+ ~Generator.logistic
+ ~Generator.lognormal
+ ~Generator.logseries
+ ~Generator.multinomial
+ ~Generator.multivariate_normal
+ ~Generator.negative_binomial
+ ~Generator.noncentral_chisquare
+ ~Generator.noncentral_f
+ ~Generator.normal
+ ~Generator.pareto
+ ~Generator.poisson
+ ~Generator.power
+ ~Generator.rayleigh
+ ~Generator.standard_cauchy
+ ~Generator.standard_exponential
+ ~Generator.standard_gamma
+ ~Generator.standard_normal
+ ~Generator.standard_t
+ ~Generator.triangular
+ ~Generator.uniform
+ ~Generator.vonmises
+ ~Generator.wald
+ ~Generator.weibull
+ ~Generator.zipf
diff --git a/doc/source/reference/random/index.rst b/doc/source/reference/random/index.rst
new file mode 100644
index 000000000..f32853e7c
--- /dev/null
+++ b/doc/source/reference/random/index.rst
@@ -0,0 +1,187 @@
+.. _numpyrandom:
+
+.. currentmodule:: numpy.random
+
+numpy.random
+============
+
+NumPy's random number routines produce pseudo-random numbers using
+combinations of a `BitGenerator` to create sequences and a `Generator`
+to use those sequences to sample from different statistical distributions:
+
+* SeedSequence: Objects that provide entropy for the initial state of a
+ BitGenerator. A good SeedSequence will provide initializations across the
+ entire range of possible states for the BitGenerator, otherwise biases may
+ creep into the generated bit streams.
+* BitGenerators: Objects that generate random numbers. These are typically
+ unsigned integer words filled with sequences of either 32 or 64 random bits.
+* Generators: Objects that transform sequences of random bits from a
+ BitGenerator into sequences of numbers that follow a specific probability
+ distribution (such as uniform, Normal or Binomial) within a specified
+ interval.
+
+Since NumPy version 1.17.0, the Generator can be initialized with a
+number of different BitGenerators. It exposes many different probability
+distributions. See `NEP 19 <https://www.numpy.org/neps/
+nep-0019-rng-policy.html>`_ for context on the updated NumPy random
+number routines. The legacy `RandomState` random number routines are still
+available, but limited to a single BitGenerator.
+
+For convenience and backward compatibility, a single `RandomState`
+instance's methods are imported into the numpy.random namespace; see
+:ref:`legacy` for the complete list.
+
+Quick Start
+-----------
+
+By default, `Generator` uses bits provided by `PCG64`, which has better
+statistical properties than the legacy `MT19937` used in `RandomState`.
+
+.. code-block:: python
+
+ # Uses the old numpy.random.RandomState
+ from numpy import random
+ random.standard_normal()
+
+`Generator` can be used as a direct replacement for `~RandomState`, although
+the random values are generated by `~PCG64`. The
+`Generator` holds an instance of a BitGenerator. It is accessible as
+``rg.bit_generator``, as in the example below.
+
+.. code-block:: python
+
+    # As replacement for RandomState()
+    from numpy.random import Generator, PCG64
+    rg = Generator(PCG64())
+    rg.standard_normal()
+    rg.bit_generator
+
+Seeds can be passed to any of the BitGenerators. The provided value is mixed
+via `~.SeedSequence` to spread a possible sequence of seeds across a wider
+range of initialization states for the BitGenerator. Here `~.PCG64` is used and
+is wrapped with a `~.Generator`.
+
+.. code-block:: python
+
+ from numpy.random import Generator, PCG64
+ rg = Generator(PCG64(12345))
+ rg.standard_normal()
+
+Introduction
+------------
+The new infrastructure takes a different approach to producing random numbers from the
+`RandomState` object. Random number generation is separated into three
+components, a seed sequence, a bit generator and a random generator.
+
+The `BitGenerator` has a limited set of responsibilities. It manages state
+and provides functions to produce random doubles and random unsigned 32- and
+64-bit values.
+
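+A small sketch of the state-management side (the ``state`` attribute is a
+dict snapshot of the BitGenerator's full state):
+
+.. code-block:: python
+
+    from numpy.random import PCG64
+
+    bg = PCG64(12345)
+    saved = bg.state       # snapshot the full state
+    bg2 = PCG64()
+    bg2.state = saved      # bg2 now produces the identical stream
+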
+The `SeedSequence` takes a seed and provides the initial state for the
+`BitGenerator`. Since consecutive seeds can cause bad effects when comparing
+`BitGenerator` streams, the `SeedSequence` uses current best-practice methods
+to spread the initial state out. However, small seeds may still be unable to
+reach all possible initialization states, which can cause biases among an
+ensemble of small-seed runs. For many cases, that doesn't matter. If you just
+want to hold things in place while you debug something, biases aren't a
+concern. For actual simulations whose results you care about, let
+``SeedSequence(None)`` do its thing and then log/print the
+`SeedSequence.entropy` for repeatable `BitGenerator` streams.
+
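+A minimal sketch of that workflow:
+
+.. code-block:: python
+
+    from numpy.random import Generator, PCG64, SeedSequence
+
+    sq = SeedSequence()          # gathers fresh entropy from the OS
+    print(sq.entropy)            # log this value for reproducibility
+    rg = Generator(PCG64(sq))
+
+    # Later, recreate the identical stream from the logged entropy
+    rg2 = Generator(PCG64(SeedSequence(sq.entropy)))
+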
+The `random generator <Generator>` takes the
+bit generator-provided stream and transforms it into more useful
+distributions, e.g., simulated normal random values. This structure allows
+alternative bit generators to be used with little code duplication.
+
+The `Generator` is the user-facing object that is nearly identical to
+`RandomState`. The canonical method to initialize a generator passes a
+`PCG64` bit generator as the sole argument. Note that the BitGenerator
+must be instantiated.
+
+.. code-block:: python
+
+ from numpy.random import Generator, PCG64
+ rg = Generator(PCG64())
+ rg.random()
+
+Seed information is directly passed to the bit generator.
+
+.. code-block:: python
+
+ rg = Generator(PCG64(12345))
+ rg.random()
+
+What's New or Different
+~~~~~~~~~~~~~~~~~~~~~~~
+.. warning::
+
+ The Box-Muller method used to produce NumPy's normals is no longer available
+ in `Generator`. It is not possible to reproduce the exact random
+ values using Generator for the normal distribution or any other
+  distribution that relies on the normal such as `numpy.random.gamma` or
+ `numpy.random.standard_t`. If you require bitwise backward compatible
+ streams, use `RandomState`.
+
+* The Generator's normal, exponential and gamma functions use 256-step Ziggurat
+ methods which are 2-10 times faster than NumPy's Box-Muller or inverse CDF
+ implementations.
+* Optional ``dtype`` argument that accepts ``np.float32`` or ``np.float64``
+  to produce either single or double precision uniform random variables for
+ select distributions
+* Optional ``out`` argument that allows existing arrays to be filled for
+ select distributions
+* `~entropy.random_entropy` provides access to the system
+ source of randomness that is used in cryptographic applications (e.g.,
+ ``/dev/urandom`` on Unix).
+* All BitGenerators can produce doubles, uint64s and uint32s via CTypes
+ (`~PCG64.ctypes`) and CFFI
+  (`~PCG64.cffi`). This allows the bit generators to
+ be used in numba.
+* The bit generators can be used in downstream projects via
+ :ref:`Cython <randomgen_cython>`.
+* `~.Generator.integers` is now the canonical way to generate integer
+  random numbers from a discrete uniform distribution. The ``rand`` and
+  ``randn`` methods are only available through the legacy `~.RandomState`.
+  The ``endpoint`` keyword can be used to specify half-open or closed
+  intervals (see the sketch after this list). This replaces both ``randint``
+  and the deprecated ``random_integers``.
+* `~.Generator.random` is now the canonical way to generate floating-point
+ random numbers, which replaces `random_sample`, `sample`, and `ranf`. This
+ is consistent with Python's `random.random`.
+
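+A short sketch of the ``integers`` and ``random`` behavior described above:
+
+.. code-block:: python
+
+    from numpy.random import Generator, PCG64
+
+    rg = Generator(PCG64(12345))
+    rg.integers(0, 10, size=5)                 # draws from {0, ..., 9}
+    rg.integers(0, 10, size=5, endpoint=True)  # draws from {0, ..., 10}
+    rg.random(3, dtype='float32')              # single-precision uniforms
+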
+See :ref:`new-or-different` for a complete list of improvements and
+differences from the traditional ``RandomState``.
+
+Parallel Generation
+~~~~~~~~~~~~~~~~~~~
+
+The included generators can be used in parallel, distributed applications in
+one of two ways:
+
+* :ref:`independent-streams`
+* :ref:`jump-and-advance`
+
+Concepts
+--------
+.. toctree::
+ :maxdepth: 1
+
+ generator
+ legacy mtrand <legacy>
+ BitGenerators, SeedSequences <bit_generators/index>
+
+Features
+--------
+.. toctree::
+ :maxdepth: 2
+
+ Parallel Applications <parallel>
+ Multithreaded Generation <multithreading>
+ new-or-different
+ Comparing Performance <performance>
+ extending
+ Reading System Entropy <entropy>
+
+Original Source
+~~~~~~~~~~~~~~~
+
+This package was developed independently of NumPy and was integrated in version
+1.17.0. The original repo is at https://github.com/bashtage/randomgen.
diff --git a/doc/source/reference/random/legacy.rst b/doc/source/reference/random/legacy.rst
new file mode 100644
index 000000000..d9391e9e2
--- /dev/null
+++ b/doc/source/reference/random/legacy.rst
@@ -0,0 +1,128 @@
+.. _legacy:
+
+Legacy Random Generation
+------------------------
+`~mtrand.RandomState` provides access to the
+legacy generators. This generator is considered frozen and will have
+no further improvements. It is guaranteed to produce the same values
+as the final point release of NumPy v1.16. These all depend on Box-Muller
+normals or inverse CDF exponentials or gammas. This class should only be used
+if it is essential to have random numbers that are identical to what
+would have been produced by previous versions of NumPy.
+
+`~mtrand.RandomState` adds additional information
+to the state which is required when using Box-Muller normals since these
+are produced in pairs. It is important to use
+`~mtrand.RandomState.get_state`, and not the underlying bit generator's
+``state``, when accessing the state so that these extra values are saved.
+
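+A short sketch of saving and restoring the full legacy state:
+
+.. code-block:: python
+
+    from numpy.random import RandomState
+
+    rs = RandomState(12345)
+    rs.standard_normal()     # generates a pair, caches the second normal
+    state = rs.get_state()   # includes the cached-normal information
+
+    rs2 = RandomState()
+    rs2.set_state(state)
+    # rs2 continues the stream exactly where rs left off
+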
+.. warning::
+
+  `~mtrand.RandomState` only contains the legacy methods and is frozen. To
+  combine the legacy stream with methods that are only available on
+  `~numpy.random.Generator`, construct both a ``RandomState`` and a
+  ``Generator`` driven by the same bit generator. Legacy methods must be
+  called from the ``RandomState`` instance and new methods from the
+  ``Generator`` instance.
+
+
+.. code-block:: python
+
+ from numpy.random import MT19937
+ from numpy.random import RandomState
+
+ # Use same seed
+ rs = RandomState(12345)
+ mt19937 = MT19937(12345)
+ lg = RandomState(mt19937)
+
+ # Identical output
+ rs.standard_normal()
+ lg.standard_normal()
+
+ rs.random()
+ lg.random()
+
+ rs.standard_exponential()
+ lg.standard_exponential()
+
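+A minimal sketch of driving both interfaces from a single shared bit
+generator:
+
+.. code-block:: python
+
+    from numpy.random import Generator, MT19937, RandomState
+
+    mt = MT19937(12345)
+    legacy = RandomState(mt)   # legacy methods draw from mt
+    new = Generator(mt)        # new-style methods draw from the same mt
+
+    legacy.standard_normal()   # advances the shared MT19937 state
+    new.integers(0, 10)        # continues from where legacy left off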
+
+.. currentmodule:: numpy.random.mtrand
+
+.. autoclass:: RandomState
+ :exclude-members:
+
+Seeding and State
+=================
+
+.. autosummary::
+ :toctree: generated/
+
+ ~RandomState.get_state
+ ~RandomState.set_state
+ ~RandomState.seed
+
+Simple random data
+==================
+.. autosummary::
+ :toctree: generated/
+
+ ~RandomState.rand
+ ~RandomState.randn
+ ~RandomState.randint
+ ~RandomState.random_integers
+ ~RandomState.random_sample
+ ~RandomState.choice
+ ~RandomState.bytes
+
+Permutations
+============
+.. autosummary::
+ :toctree: generated/
+
+ ~RandomState.shuffle
+ ~RandomState.permutation
+
+Distributions
+=============
+.. autosummary::
+ :toctree: generated/
+
+ ~RandomState.beta
+ ~RandomState.binomial
+ ~RandomState.chisquare
+ ~RandomState.dirichlet
+ ~RandomState.exponential
+ ~RandomState.f
+ ~RandomState.gamma
+ ~RandomState.geometric
+ ~RandomState.gumbel
+ ~RandomState.hypergeometric
+ ~RandomState.laplace
+ ~RandomState.logistic
+ ~RandomState.lognormal
+ ~RandomState.logseries
+ ~RandomState.multinomial
+ ~RandomState.multivariate_normal
+ ~RandomState.negative_binomial
+ ~RandomState.noncentral_chisquare
+ ~RandomState.noncentral_f
+ ~RandomState.normal
+ ~RandomState.pareto
+ ~RandomState.poisson
+ ~RandomState.power
+ ~RandomState.rayleigh
+ ~RandomState.standard_cauchy
+ ~RandomState.standard_exponential
+ ~RandomState.standard_gamma
+ ~RandomState.standard_normal
+ ~RandomState.standard_t
+ ~RandomState.triangular
+ ~RandomState.uniform
+ ~RandomState.vonmises
+ ~RandomState.wald
+ ~RandomState.weibull
+ ~RandomState.zipf
diff --git a/doc/source/reference/random/multithreading.rst b/doc/source/reference/random/multithreading.rst
new file mode 100644
index 000000000..849d64d4e
--- /dev/null
+++ b/doc/source/reference/random/multithreading.rst
@@ -0,0 +1,106 @@
+Multithreaded Generation
+========================
+
+The four core distributions (`~.Generator.random`,
+`~.Generator.standard_normal`, `~.Generator.standard_exponential` and
+`~.Generator.standard_gamma`) all allow existing arrays to be filled using
+the ``out`` keyword argument. Existing arrays need to be contiguous and
+well-behaved (writable and aligned). Under normal circumstances, arrays
+created using the common constructors such as :func:`numpy.empty` will satisfy
+these requirements.
+
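+A minimal single-threaded sketch of the ``out`` keyword:
+
+.. code-block:: python
+
+    import numpy as np
+    from numpy.random import Generator, PCG64
+
+    rg = Generator(PCG64(12345))
+    buf = np.empty(1000)           # contiguous, aligned, writable
+    rg.standard_normal(out=buf)    # filled in place, no new allocation
+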
+This example makes use of Python 3 :mod:`concurrent.futures` to fill an array
+using multiple threads. Threads are long-lived so that repeated calls do not
+require any additional overhead from thread creation. The underlying
+BitGenerator is `PCG64` which is fast, has a long period and supports
+using `PCG64.jumped` to return a new generator while advancing the
+state. The random numbers generated are reproducible in the sense that the same
+seed will produce the same outputs.
+
+.. code-block:: python
+
+    from numpy.random import Generator, PCG64
+    import multiprocessing
+    import concurrent.futures
+    import numpy as np
+
+    class MultithreadedRNG(object):
+        def __init__(self, n, seed=None, threads=None):
+            rg = PCG64(seed)
+            if threads is None:
+                threads = multiprocessing.cpu_count()
+            self.threads = threads
+
+            # Wrap each bit generator in a Generator; jumped() returns a
+            # bit generator far away in the random stream
+            self._random_generators = [Generator(rg)]
+            last_rg = rg
+            for _ in range(0, threads - 1):
+                new_rg = last_rg.jumped()
+                self._random_generators.append(Generator(new_rg))
+                last_rg = new_rg
+
+            self.n = n
+            self.executor = concurrent.futures.ThreadPoolExecutor(threads)
+            self.values = np.empty(n)
+            self.step = int(np.ceil(n / threads))
+
+        def fill(self):
+            def _fill(random_gen, out, first, last):
+                random_gen.standard_normal(out=out[first:last])
+
+            futures = {}
+            for i in range(self.threads):
+                args = (_fill,
+                        self._random_generators[i],
+                        self.values,
+                        i * self.step,
+                        (i + 1) * self.step)
+                futures[self.executor.submit(*args)] = i
+            concurrent.futures.wait(futures)
+
+        def __del__(self):
+            self.executor.shutdown(False)
+
+
+The multithreaded random number generator can be used to fill an array.
+The ``values`` attribute shows the zero value before the fill and the
+random values after.
+
+.. code-block:: ipython
+
+ In [2]: mrng = MultithreadedRNG(10000000, seed=0)
+ ...: print(mrng.values[-1])
+ 0.0
+
+ In [3]: mrng.fill()
+ ...: print(mrng.values[-1])
+ 3.296046120254392
+
+The time required to fill the array using multiple threads can be compared to
+the time required to generate the same values using a single thread.
+
+.. code-block:: ipython
+
+ In [4]: print(mrng.threads)
+ ...: %timeit mrng.fill()
+
+ 4
+ 32.8 ms ± 2.71 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
+
+The single threaded call uses a single ``Generator`` to fill the array.
+
+.. code-block:: ipython
+
+ In [5]: values = np.empty(10000000)
+ ...: rg = Generator(PCG64())
+ ...: %timeit rg.standard_normal(out=values)
+
+ 99.6 ms ± 222 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
+
+The gains are substantial and the scaling is reasonable even for arrays that
+are only moderately large. The gains are even larger when compared to a call
+that does not use an existing array due to array creation overhead.
+
+.. code-block:: ipython
+
+ In [6]: rg = Generator(PCG64())
+ ...: %timeit rg.standard_normal(10000000)
+
+ 125 ms ± 309 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
diff --git a/doc/source/reference/random/new-or-different.rst b/doc/source/reference/random/new-or-different.rst
new file mode 100644
index 000000000..11638824a
--- /dev/null
+++ b/doc/source/reference/random/new-or-different.rst
@@ -0,0 +1,116 @@
+.. _new-or-different:
+
+.. currentmodule:: numpy.random
+
+What's New or Different
+-----------------------
+
+.. warning::
+
+ The Box-Muller method used to produce NumPy's normals is no longer available
+ in `Generator`. It is not possible to reproduce the exact random
+ values using ``Generator`` for the normal distribution or any other
+  distribution that relies on the normal such as `gamma` or
+ `standard_t`. If you require bitwise backward compatible
+ streams, use `RandomState`.
+
+Quick comparison of legacy :ref:`mtrand <legacy>` to the new `Generator`:
+
+================== ==================== =============
+Feature Older Equivalent Notes
+------------------ -------------------- -------------
+`Generator`        `RandomState`        ``Generator`` requires a stream
+                                        source, called a `BitGenerator
+                                        <bit_generators>`. A number of these
+                                        are provided. ``RandomState`` uses
+                                        the Mersenne Twister ``MT19937`` by
+                                        default.
+------------------ -------------------- -------------
+``np.random.``     ``np.random.``       Access the values in a BitGenerator,
+``Generator().``   ``random_sample()``  convert them to ``float64`` in the
+``random()``                            interval ``[0.0, 1.0)``.
+ In addition to the ``size`` kwarg, now
+ supports ``dtype='d'`` or ``dtype='f'``,
+                                        and an ``out`` kwarg to fill a
+                                        user-supplied array.
+
+ Many other distributions are also
+ supported.
+------------------ -------------------- -------------
+``Generator().``   ``randint``,         Use the ``endpoint`` kwarg to adjust
+``integers()``     ``random_integers``  the inclusion or exclusion of the
+                                        ``high`` interval endpoint
+================== ==================== =============
+
+And in more detail:
+
+* `~.entropy.random_entropy` provides access to the system
+ source of randomness that is used in cryptographic applications (e.g.,
+ ``/dev/urandom`` on Unix).
+* The normal, exponential and gamma generators use 256-step Ziggurat
+  methods which are 2-10 times faster than NumPy's default implementation in
+  `~.Generator.standard_normal`, `~.Generator.standard_exponential` or
+  `~.Generator.standard_gamma`.
+* `~.Generator.integers` is now the canonical way to generate integer
+  random numbers from a discrete uniform distribution. The ``rand`` and
+  ``randn`` methods are only available through the legacy `~.RandomState`.
+  This replaces both ``randint`` and the deprecated ``random_integers``.
+* The Box-Muller method used to produce NumPy's normals is no longer
+  available.
+* All bit generators can produce doubles, uint64s and
+  uint32s via CTypes (`~PCG64.ctypes`) and CFFI (`~PCG64.cffi`).
+  This allows these bit generators to be used in numba (see the sketch
+  after this list).
+* The bit generators can be used in downstream projects via
+  Cython.
+
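+A minimal sketch of the ``ctypes`` interface described above (the interface
+returned by ``PCG64.ctypes`` also provides ``next_uint64`` and
+``next_uint32``):
+
+.. code-block:: python
+
+    from numpy.random import PCG64
+
+    bg = PCG64(12345)
+    interface = bg.ctypes            # ctypes view of the bit generator
+    next_double = interface.next_double
+    state = interface.state          # ctypes void pointer to the state
+    next_double(state)               # one raw uniform double in [0, 1)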
+
+.. ipython:: python
+
+ from numpy.random import Generator, PCG64
+ import numpy.random
+ rg = Generator(PCG64())
+ %timeit rg.standard_normal(100000)
+ %timeit numpy.random.standard_normal(100000)
+
+.. ipython:: python
+
+ %timeit rg.standard_exponential(100000)
+ %timeit numpy.random.standard_exponential(100000)
+
+.. ipython:: python
+
+ %timeit rg.standard_gamma(3.0, 100000)
+ %timeit numpy.random.standard_gamma(3.0, 100000)
+
+* Optional ``dtype`` argument that accepts ``np.float32`` or ``np.float64``
+  to produce either single or double precision uniform random variables for
+ select distributions
+
+ * Uniforms (`~.Generator.random` and `~.Generator.integers`)
+ * Normals (`~.Generator.standard_normal`)
+ * Standard Gammas (`~.Generator.standard_gamma`)
+ * Standard Exponentials (`~.Generator.standard_exponential`)
+
+.. ipython:: python
+
+ rg = Generator(PCG64(0))
+ rg.random(3, dtype='d')
+ rg.random(3, dtype='f')
+
+* Optional ``out`` argument that allows existing arrays to be filled for
+ select distributions
+
+ * Uniforms (`~.Generator.random`)
+ * Normals (`~.Generator.standard_normal`)
+ * Standard Gammas (`~.Generator.standard_gamma`)
+ * Standard Exponentials (`~.Generator.standard_exponential`)
+
+ This allows multithreading to fill large arrays in chunks using suitable
+ BitGenerators in parallel.
+
+.. ipython:: python
+
+    import numpy as np
+
+    existing = np.zeros(4)
+    rg.random(out=existing[:2])
+    print(existing)
+
diff --git a/doc/source/reference/random/parallel.rst b/doc/source/reference/random/parallel.rst
new file mode 100644
index 000000000..36e173ef2
--- /dev/null
+++ b/doc/source/reference/random/parallel.rst
@@ -0,0 +1,135 @@
+Parallel Random Number Generation
+=================================
+
+There are three strategies implemented that can be used to produce
+repeatable pseudo-random numbers across multiple processes (local
+or distributed).
+
+.. _independent-streams:
+
+.. currentmodule:: numpy.random
+
+Independent Streams
+-------------------
+
+`SeedSequence` can be used to create independent streams by spawning child
+sequences from a single parent; each child then seeds its own bit generator.
+This example spawns ten children of a single parent sequence and uses each
+to seed a `PCG64` instance.
+
+.. code-block:: python
+
+    from numpy.random import PCG64, SeedSequence
+
+    seed_seq = SeedSequence(12345)
+    streams = [PCG64(child) for child in seed_seq.spawn(10)]
+
+
+:class:`Philox` is a
+counter-based RNG which uses a counter and key. Different keys can be used
+to produce independent streams.
+
+.. code-block:: python
+
+    import numpy as np
+    from numpy.random import Philox
+    from numpy.random.entropy import random_entropy
+
+    # Philox uses a 2-element uint64 key
+    key = random_entropy(4).view(np.uint64)
+    key[0] = 0
+    step = np.zeros(2, dtype=np.uint64)
+    step[0] = 1
+    streams = [Philox(key=key + stream * step) for stream in range(10)]
+
+.. _jump-and-advance:
+
+Jump/Advance the BitGenerator state
+-----------------------------------
+
+Jumped
+******
+
+``jumped`` advances the state of the BitGenerator *as-if* a large number of
+random numbers have been drawn, and returns a new instance with this state.
+The specific number of draws varies by BitGenerator, and ranges from
+:math:`2^{64}` to :math:`2^{128}`. Additionally, the *as-if* draws also depend
+on the size of the default random number produced by the specific BitGenerator.
+The BitGenerators that support ``jumped``, along with the period of the
+BitGenerator, the size of the jump and the bits in the default unsigned random
+number are listed below.
+
++-----------------+-------------------------+-------------------------+-------------------------+
+| BitGenerator | Period | Jump Size | Bits |
++=================+=========================+=========================+=========================+
+| MT19937 | :math:`2^{19937}` | :math:`2^{128}` | 32 |
++-----------------+-------------------------+-------------------------+-------------------------+
+| PCG64 | :math:`2^{128}` | :math:`2^{64}` | 64 |
++-----------------+-------------------------+-------------------------+-------------------------+
+| Philox | :math:`2^{256}` | :math:`2^{128}` | 64 |
++-----------------+-------------------------+-------------------------+-------------------------+
+
+``jumped`` can be used to produce a sequence of generators whose blocks are
+long enough not to overlap.
+
+.. code-block:: python
+
+    import numpy as np
+    from numpy.random.entropy import random_entropy
+    from numpy.random import PCG64
+
+    entropy = random_entropy(2).astype(np.uint64)
+    # 64-bit number as a seed
+    seed = int(entropy[0]) * 2**32 + int(entropy[1])
+    blocked_rng = []
+    rng = PCG64(seed)
+    for i in range(10):
+        blocked_rng.append(rng.jumped(i))
+
+Advance
+*******
+``advance`` can be used to jump the state an arbitrary number of steps, and so
+is a more general approach than ``jumped``. :class:`PCG64` and
+:class:`Philox` support ``advance``, and since these also support
+independent streams, it is not usually necessary to use ``advance``.
+
+Advancing a BitGenerator updates the underlying state as-if a given number of
+calls to the BitGenerator have been made. In general there is not a
+one-to-one relationship between the number of output random values from a
+particular distribution and the number of draws from the core BitGenerator.
+This occurs for two reasons:
+
+* The random values are simulated using a rejection-based method
+ and so more than one value from the underlying BitGenerator can be required
+  to generate a single draw.
+* The number of bits required to generate a simulated value differs from the
+ number of bits generated by the underlying BitGenerator. For example, two
+ 16-bit integer values can be simulated from a single draw of a 32-bit value.
+
+Advancing the BitGenerator state resets any pre-computed random numbers. This
+is required to ensure exact reproducibility.
+
+This example uses ``advance`` to move a :class:`PCG64`
+generator :math:`2^{127}` steps to set up a sequence of random number
+generators.
+
+.. code-block:: python
+
+ from numpy.random import PCG64
+ bit_generator = PCG64()
+ bit_generator_copy = PCG64()
+ bit_generator_copy.state = bit_generator.state
+
+ advance = 2**127
+ bit_generators = [bit_generator]
+ for _ in range(9):
+ bit_generator_copy.advance(advance)
+ bit_generator = PCG64()
+ bit_generator.state = bit_generator_copy.state
+ bit_generators.append(bit_generator)
+
+
diff --git a/doc/source/reference/random/performance.py b/doc/source/reference/random/performance.py
new file mode 100644
index 000000000..ed8745078
--- /dev/null
+++ b/doc/source/reference/random/performance.py
@@ -0,0 +1,74 @@
+from collections import OrderedDict
+from timeit import repeat
+
+import pandas as pd
+
+import numpy as np
+from numpy.random import MT19937, PCG64, Philox, SFC64
+
+PRNGS = [MT19937, PCG64, Philox, SFC64]
+
+funcs = OrderedDict()
+integers = 'integers(0, 2**{bits},size=1000000, dtype="uint{bits}")'
+funcs['32-bit Unsigned Ints'] = integers.format(bits=32)
+funcs['64-bit Unsigned Ints'] = integers.format(bits=64)
+funcs['Uniforms'] = 'random(size=1000000)'
+funcs['Normals'] = 'standard_normal(size=1000000)'
+funcs['Exponentials'] = 'standard_exponential(size=1000000)'
+funcs['Gammas'] = 'standard_gamma(3.0,size=1000000)'
+funcs['Binomials'] = 'binomial(9, .1, size=1000000)'
+funcs['Laplaces'] = 'laplace(size=1000000)'
+funcs['Poissons'] = 'poisson(3.0, size=1000000)'
+
+setup = """
+from numpy.random import {prng}, Generator
+rg = Generator({prng}())
+"""
+
+test = "rg.{func}"
+table = OrderedDict()
+for prng in PRNGS:
+    print(prng)
+    col = OrderedDict()
+    for key in funcs:
+        t = repeat(test.format(func=funcs[key]),
+                   setup.format(prng=prng.__name__),
+                   number=1, repeat=3)
+        col[key] = 1000 * min(t)
+    col = pd.Series(col)
+    table[prng.__name__] = col
+
+npfuncs = OrderedDict()
+npfuncs.update(funcs)
+npfuncs['32-bit Unsigned Ints'] = 'randint(2**32,dtype="uint32",size=1000000)'
+npfuncs['64-bit Unsigned Ints'] = 'randint(2**64,dtype="uint64",size=1000000)'
+setup = """
+from numpy.random import RandomState
+rg = RandomState()
+"""
+col = {}
+for key in npfuncs:
+    t = repeat(test.format(func=npfuncs[key]),
+               setup,
+               number=1, repeat=3)
+    col[key] = 1000 * min(t)
+table['RandomState'] = pd.Series(col)
+
+table = pd.DataFrame(table)
+table = table.reindex(table.mean(1).sort_values().index)
+order = np.log(table).mean().sort_values().index
+table = table.T
+table = table.reindex(order)
+table = table.T
+table = table.reindex([k for k in funcs], axis=0)
+print(table.to_csv(float_format='%0.1f'))
+
+rel = table.loc[:, ['RandomState']].values @ np.ones(
+ (1, table.shape[1])) / table
+rel.pop('RandomState')
+rel = rel.T
+rel['Overall'] = np.exp(np.log(rel).mean(1))
+rel *= 100
+rel = np.round(rel)
+rel = rel.T
+print(rel.to_csv(float_format='%0d'))
diff --git a/doc/source/reference/random/performance.rst b/doc/source/reference/random/performance.rst
new file mode 100644
index 000000000..3e5c20e3a
--- /dev/null
+++ b/doc/source/reference/random/performance.rst
@@ -0,0 +1,135 @@
+Performance
+-----------
+
+.. py:module:: numpy.random
+
+.. currentmodule:: numpy.random
+
+Recommendation
+**************
+The recommended generator for single use is :class:`~PCG64`.
+
+Timings
+*******
+
+The timings below are the time in ns to produce 1 random value from a
+specific distribution. The original :class:`MT19937` generator is
+much slower since it requires two 32-bit values to equal the output of the
+faster generators.
+
+Integer performance has a similar ordering.
+
+The pattern is similar for other, more complex generators. The normal
+performance of the legacy :class:`~mtrand.RandomState` generator is much
+lower than the others since it uses the Box-Muller transformation rather
+than the Ziggurat method. The performance gap for Exponentials is also
+large due to the cost of computing the log function to invert the CDF.
+The column labeled MT19937 uses the same 32-bit generator as
+:class:`~mtrand.RandomState` but produces random values using
+:class:`Generator`.
+
+.. csv-table::
+ :header: ,PCG64,MT19937,Philox,RandomState
+ :widths: 14,14,14,14,14
+
+ 32-bit Unsigned Ints,3.2,3.3,4.8,3.2
+ 64-bit Unsigned Ints,4.8,5.7,6.9,5.7
+ Uniforms,5.0,7.3,8.0,7.3
+ Normals,11.3,13.0,13.7,34.4
+ Exponentials,6.7,7.9,8.6,40.3
+ Gammas,30.6,34.2,35.1,58.1
+ Binomials,25.7,27.7,28.4,25.9
+ Laplaces,41.1,44.5,45.4,46.9
+ Poissons,58.1,68.4,70.2,86.0
+
+
+The next table presents the performance as a percentage relative to values
+generated by the legacy generator, ``RandomState(MT19937())``. The overall
+performance was computed using a geometric mean.
+
+.. csv-table::
+ :header: ,PCG64,MT19937,Philox
+ :widths: 14,14,14,14
+
+ 32-bit Unsigned Ints,100,99,67
+ 64-bit Unsigned Ints,118,100,83
+ Uniforms,147,100,91
+ Normals,304,264,252
+ Exponentials,601,512,467
+ Gammas,190,170,166
+ Binomials,101,93,91
+ Laplaces,114,105,103
+ Poissons,148,126,123
+ Overall,167,145,131
+
+.. note::
+
+  All timings were taken using Linux on an i5-3570 processor.
+
+Performance on different Operating Systems
+******************************************
+Performance differs across platforms due to differences in compilers and
+hardware capabilities (e.g., register width). The default bit generator has
+been chosen to perform well on 64-bit platforms. Performance on 32-bit
+operating systems is very different.
+
+The values reported are normalized relative to the speed of MT19937 in
+each table. A value of 100 indicates that the performance matches that of
+MT19937.
+Higher values indicate improved performance. These values cannot be compared
+across tables.
+
+64-bit Linux
+~~~~~~~~~~~~
+
+=================== ========= ======= ========
+Distribution MT19937 PCG64 Philox
+=================== ========= ======= ========
+32-bit Unsigned Int 100 113.9 72.1
+64-bit Unsigned Int 100 143.3 89.7
+Uniform 100 181.5 90.8
+Exponential 100 145.5 92.5
+Normal 100 121.4 98.3
+**Overall** 100 139.3 88.2
+=================== ========= ======= ========
+
+
+64-bit Windows
+~~~~~~~~~~~~~~
+The performance on 64-bit Linux and 64-bit Windows is broadly similar.
+
+
+=================== ========= ======= ========
+Distribution MT19937 PCG64 Philox
+=================== ========= ======= ========
+32-bit Unsigned Int 100 134.9 44.1
+64-bit Unsigned Int 100 162.7 41.0
+Uniform 100 200.0 44.8
+Exponential 100 167.8 47.4
+Normal 100 135.6 60.3
+**Overall** 100 158.4 47.1
+=================== ========= ======= ========
+
+32-bit Windows
+~~~~~~~~~~~~~~
+
+The performance of 64-bit generators on 32-bit Windows is much lower than on 64-bit
+operating systems due to register width. MT19937, the generator that has been
+in NumPy since 2005, operates on 32-bit integers.
+
+=================== ========= ======= ========
+Distribution MT19937 PCG64 Philox
+=================== ========= ======= ========
+32-bit Unsigned Int 100 30.6 28.1
+64-bit Unsigned Int 100 24.2 23.7
+Uniform 100 26.7 28.4
+Exponential 100 32.1 32.6
+Normal 100 36.3 37.5
+**Overall** 100 29.7 29.7
+=================== ========= ======= ========
+
+
+.. note::
+
+ Linux timings used Ubuntu 18.04 and GCC 7.4. Windows timings were made on
+ Windows 10 using Microsoft C/C++ Optimizing Compiler Version 19 (Visual
+  Studio 2015). All timings were produced on an i5-3570 processor.
diff --git a/doc/source/reference/routines.char.rst b/doc/source/reference/routines.char.rst
index 3f4efdfc5..513f975e7 100644
--- a/doc/source/reference/routines.char.rst
+++ b/doc/source/reference/routines.char.rst
@@ -1,11 +1,13 @@
String operations
*****************
-.. currentmodule:: numpy.core.defchararray
+.. currentmodule:: numpy.char
-This module provides a set of vectorized string operations for arrays
-of type `numpy.string_` or `numpy.unicode_`. All of them are based on
-the string methods in the Python standard library.
+.. module:: numpy.char
+
+The `numpy.char` module provides a set of vectorized string
+operations for arrays of type `numpy.string_` or `numpy.unicode_`.
+All of them are based on the string methods in the Python standard library.
String operations
-----------------
diff --git a/doc/source/reference/routines.random.rst b/doc/source/reference/routines.random.rst
deleted file mode 100644
index cda4e2b61..000000000
--- a/doc/source/reference/routines.random.rst
+++ /dev/null
@@ -1,83 +0,0 @@
-.. _routines.random:
-
-.. module:: numpy.random
-
-Random sampling (:mod:`numpy.random`)
-*************************************
-
-.. currentmodule:: numpy.random
-
-Simple random data
-==================
-.. autosummary::
- :toctree: generated/
-
- rand
- randn
- randint
- random_integers
- random_sample
- random
- ranf
- sample
- choice
- bytes
-
-Permutations
-============
-.. autosummary::
- :toctree: generated/
-
- shuffle
- permutation
-
-Distributions
-=============
-.. autosummary::
- :toctree: generated/
-
- beta
- binomial
- chisquare
- dirichlet
- exponential
- f
- gamma
- geometric
- gumbel
- hypergeometric
- laplace
- logistic
- lognormal
- logseries
- multinomial
- multivariate_normal
- negative_binomial
- noncentral_chisquare
- noncentral_f
- normal
- pareto
- poisson
- power
- rayleigh
- standard_cauchy
- standard_exponential
- standard_gamma
- standard_normal
- standard_t
- triangular
- uniform
- vonmises
- wald
- weibull
- zipf
-
-Random generator
-================
-.. autosummary::
- :toctree: generated/
-
- RandomState
- seed
- get_state
- set_state
diff --git a/doc/source/reference/routines.rst b/doc/source/reference/routines.rst
index a9e80480b..7a9b97d77 100644
--- a/doc/source/reference/routines.rst
+++ b/doc/source/reference/routines.rst
@@ -41,7 +41,7 @@ indentation.
routines.other
routines.padding
routines.polynomials
- routines.random
+ random/index
routines.set
routines.sort
routines.statistics
diff --git a/doc/source/reference/ufuncs.rst b/doc/source/reference/ufuncs.rst
index c71c8c9a7..d00e88b34 100644
--- a/doc/source/reference/ufuncs.rst
+++ b/doc/source/reference/ufuncs.rst
@@ -118,7 +118,7 @@ all output arrays will be passed to the :obj:`~class.__array_prepare__` and
the highest :obj:`~class.__array_priority__` of any other input to the
universal function. The default :obj:`~class.__array_priority__` of the
ndarray is 0.0, and the default :obj:`~class.__array_priority__` of a subtype
-is 1.0. Matrices have :obj:`~class.__array_priority__` equal to 10.0.
+is 0.0. Matrices have :obj:`~class.__array_priority__` equal to 10.0.
All ufuncs can also take output arguments. If necessary, output will
be cast to the data-type(s) of the provided output array(s). If a class
diff --git a/doc/source/release.rst b/doc/source/release.rst
index a6908bb07..f8d83726f 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -3,6 +3,7 @@ Release Notes
*************
.. include:: ../release/1.17.0-notes.rst
+.. include:: ../release/1.16.4-notes.rst
.. include:: ../release/1.16.3-notes.rst
.. include:: ../release/1.16.2-notes.rst
.. include:: ../release/1.16.1-notes.rst
diff --git a/doc/source/user/basics.io.genfromtxt.rst b/doc/source/user/basics.io.genfromtxt.rst
index 21832e5aa..6ef80bf8e 100644
--- a/doc/source/user/basics.io.genfromtxt.rst
+++ b/doc/source/user/basics.io.genfromtxt.rst
@@ -521,12 +521,6 @@ provides several convenience functions derived from
:func:`~numpy.genfromtxt`. These functions work the same way as the
original, but they have different default values.
-:func:`~numpy.ndfromtxt`
- Always set ``usemask=False``.
- The output is always a standard :class:`numpy.ndarray`.
-:func:`~numpy.mafromtxt`
- Always set ``usemask=True``.
- The output is always a :class:`~numpy.ma.MaskedArray`
:func:`~numpy.recfromtxt`
Returns a standard :class:`numpy.recarray` (if ``usemask=False``) or a
   :class:`~numpy.ma.MaskedRecords` array (if ``usemask=True``). The
diff --git a/doc/source/user/building.rst b/doc/source/user/building.rst
index a9ec496c5..a13e1160a 100644
--- a/doc/source/user/building.rst
+++ b/doc/source/user/building.rst
@@ -155,9 +155,10 @@ The default order for the libraries are:
1. MKL
2. OpenBLAS
-3. ATLAS
-4. Accelerate (MacOS)
-5. LAPACK (NetLIB)
+3. libFLAME
+4. ATLAS
+5. Accelerate (MacOS)
+6. LAPACK (NetLIB)
If you wish to build against OpenBLAS but you also have MKL available one