Diffstat (limited to 'doc/source')
-rw-r--r--  doc/source/dev/alignment.rst                        2
-rw-r--r--  doc/source/dev/depending_on_numpy.rst             106
-rw-r--r--  doc/source/dev/development_workflow.rst            34
-rw-r--r--  doc/source/dev/gitwash/git_resources.rst            2
-rw-r--r--  doc/source/dev/index.rst                           12
-rw-r--r--  doc/source/dev/reviewer_guidelines.rst             31
-rw-r--r--  doc/source/f2py/buildtools/cmake.rst                2
-rw-r--r--  doc/source/f2py/code/ftype.f                        2
-rw-r--r--  doc/source/f2py/f2py.getting-started.rst            8
-rw-r--r--  doc/source/index.rst                                1
-rw-r--r--  doc/source/reference/arrays.nditer.rst             22
-rw-r--r--  doc/source/reference/arrays.scalars.rst             2
-rw-r--r--  doc/source/reference/c-api/array.rst               79
-rw-r--r--  doc/source/reference/c-api/data_memory.rst         84
-rw-r--r--  doc/source/reference/c-api/deprecations.rst         4
-rw-r--r--  doc/source/reference/c-api/dtype.rst              194
-rw-r--r--  doc/source/reference/c-api/types-and-structures.rst  197
-rw-r--r--  doc/source/reference/distutils_guide.rst            2
-rw-r--r--  doc/source/reference/distutils_status_migration.rst  2
-rw-r--r--  doc/source/reference/global_state.rst              10
-rw-r--r--  doc/source/reference/index.rst                      2
-rw-r--r--  doc/source/reference/internals.code-explanations.rst  2
-rw-r--r--  doc/source/reference/maskedarray.baseclass.rst     14
-rw-r--r--  doc/source/reference/random/bit_generators/index.rst  2
-rw-r--r--  doc/source/reference/random/compatibility.rst      87
-rw-r--r--  doc/source/reference/random/index.rst             309
-rw-r--r--  doc/source/reference/random/legacy.rst             15
-rw-r--r--  doc/source/reference/random/new-or-different.rst   56
-rw-r--r--  doc/source/reference/routines.ctypeslib.rst         2
-rw-r--r--  doc/source/reference/routines.datetime.rst          4
-rw-r--r--  doc/source/reference/routines.io.rst                2
-rw-r--r--  doc/source/reference/routines.ma.rst               15
-rw-r--r--  doc/source/reference/routines.other.rst             2
-rw-r--r--  doc/source/reference/simd/gen_features.py            4
-rw-r--r--  doc/source/release.rst                              1
-rw-r--r--  doc/source/release/1.24.0-notes.rst                 2
-rw-r--r--  doc/source/release/1.24.3-notes.rst                49
-rw-r--r--  doc/source/user/basics.creation.rst                 4
-rw-r--r--  doc/source/user/basics.indexing.rst                13
-rw-r--r--  doc/source/user/howtos_index.rst                    2
-rw-r--r--  doc/source/user/theory.broadcasting.rst             2
41 files changed, 779 insertions, 606 deletions
diff --git a/doc/source/dev/alignment.rst b/doc/source/dev/alignment.rst
index bb1198ebf..f2321d17b 100644
--- a/doc/source/dev/alignment.rst
+++ b/doc/source/dev/alignment.rst
@@ -3,7 +3,7 @@
.. _alignment:
****************
-Memory Alignment
+Memory alignment
****************
NumPy alignment goals
diff --git a/doc/source/dev/depending_on_numpy.rst b/doc/source/dev/depending_on_numpy.rst
index 0582048cb..868b44162 100644
--- a/doc/source/dev/depending_on_numpy.rst
+++ b/doc/source/dev/depending_on_numpy.rst
@@ -48,70 +48,80 @@ job, either all warnings or otherwise at least ``DeprecationWarning`` and
adapt your code.
+.. _depending_on_numpy:
+
Adding a dependency on NumPy
----------------------------
Build-time dependency
~~~~~~~~~~~~~~~~~~~~~
+.. note::
+
+ Before NumPy 1.25, the NumPy C-API was *not* backwards compatible. This
+ means that when compiling with a NumPy version earlier than 1.25 you
+ have to compile with the oldest version you wish to support.
+ This can be done by using
+ `oldest-supported-numpy <https://github.com/scipy/oldest-supported-numpy/>`__.
+  Please see the `NumPy 1.24 documentation
+  <https://numpy.org/doc/1.24/dev/depending_on_numpy.html>`__.
+
+
If a package either uses the NumPy C API directly or it uses some other tool
that depends on it like Cython or Pythran, NumPy is a *build-time* dependency
-of the package. Because the NumPy ABI is only forward compatible, you must
-build your own binaries (wheels or other package formats) against the lowest
-NumPy version that you support (or an even older version).
-
-Picking the correct NumPy version to build against for each Python version and
-platform can get complicated. There are a couple of ways to do this.
-Build-time dependencies are specified in ``pyproject.toml`` (see PEP 517),
-which is the file used to build wheels by PEP 517 compliant tools (e.g.,
-when using ``pip wheel``).
-
-You can specify everything manually in ``pyproject.toml``, or you can instead
-rely on the `oldest-supported-numpy <https://github.com/scipy/oldest-supported-numpy/>`__
-metapackage. ``oldest-supported-numpy`` will specify the correct NumPy version
-at build time for wheels, taking into account Python version, Python
-implementation (CPython or PyPy), operating system and hardware platform. It
-will specify the oldest NumPy version that supports that combination of
-characteristics. Note: for platforms for which NumPy provides wheels on PyPI,
-it will be the first version with wheels (even if some older NumPy version
-happens to build).
-
-For conda-forge it's a little less complicated: there's dedicated handling for
-NumPy in build-time and runtime dependencies, so typically this is enough
-(see `here <https://conda-forge.org/docs/maintainer/knowledge_base.html#building-against-numpy>`__ for docs)::
+of the package.
- host:
- - numpy
- run:
- - {{ pin_compatible('numpy') }}
+By default, NumPy exposes an API that is backwards compatible with the
+oldest NumPy version that supports the oldest Python version NumPy itself
+still supports. NumPy 1.25.0 supports Python 3.9 and higher, and NumPy 1.19
+is the first version to support Python 3.9. Thus, we guarantee that, when
+using defaults, NumPy 1.25 will expose a C-API compatible with NumPy 1.19
+(the exact version is set within NumPy-internal header files).
-.. note::
+NumPy is also forward compatible for all minor releases, but a major release
+will require recompilation.
+
+The default behavior can be customized for example by adding::
- ``pip`` has ``--no-use-pep517`` and ``--no-build-isolation`` flags that may
- ignore ``pyproject.toml`` or treat it differently - if users use those
- flags, they are responsible for installing the correct build dependencies
- themselves.
+ #define NPY_TARGET_VERSION NPY_1_22_API_VERSION
- ``conda`` will always use ``-no-build-isolation``; dependencies for conda
- builds are given in the conda recipe (``meta.yaml``), the ones in
- ``pyproject.toml`` have no effect.
+before including any NumPy headers (or the equivalent ``-D`` compiler flag) in
+every extension module that requires the NumPy C-API.
+This is mainly useful if you need to use a newly added API at the cost of not
+being compatible with older versions.
- Please do not use ``setup_requires`` (it is deprecated and may invoke
- ``easy_install``).
+If for some reason you wish to compile for the currently installed NumPy
+version by default you can add::
-Because for NumPy you have to care about ABI compatibility, you
-specify the version with ``==`` to the lowest supported version. For your other
-build dependencies you can probably be looser, however it's still important to
-set lower and upper bounds for each dependency. It's fine to specify either a
-range or a specific version for a dependency like ``wheel`` or ``setuptools``.
+ #ifndef NPY_TARGET_VERSION
+ #define NPY_TARGET_VERSION NPY_API_VERSION
+ #endif
-.. warning::
+This allows a user to override the default via ``-DNPY_TARGET_VERSION``.
+This define must be consistent for each extension module (each use of
+``import_array()``) and also applies to the umath module.
+
+When you compile against NumPy, you should add the proper version restrictions
+to your ``pyproject.toml`` (see PEP 517), since your extension will not be
+compatible with a new major release of NumPy and may not be compatible with
+very old versions.
+
+For conda-forge packages, please see `the conda-forge documentation
+<https://conda-forge.org/docs/maintainer/knowledge_base.html#building-against-numpy>`__.
+
+As of now, it is usually as easy as including::
+
+ host:
+ - numpy
+ run:
+ - {{ pin_compatible('numpy') }}
+
+.. note::
- Note that ``setuptools`` does major releases often and those may contain
- changes that break ``numpy.distutils``, which will *not* be updated anymore
- for new ``setuptools`` versions. It is therefore recommended to set an
- upper version bound in your build configuration for the last known version
- of ``setuptools`` that works with your build.
+ At the time of NumPy 1.25, NumPy 2.0 is expected to be the next release
+ of NumPy. The NumPy 2.0 release is expected to require a different pin,
+ since NumPy 2+ will be needed in order to be compatible with both NumPy
+ 1.x and 2.x.
Runtime dependency & version ranges
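The runtime-dependency guidance above often boils down to gating optional features on the NumPy version found at import time. A minimal sketch, using the documented ``numpy.lib.NumpyVersion`` helper for robust version comparisons; the ``supports_custom_allocators`` helper name is hypothetical:

```python
# Sketch: gate an optional code path on the NumPy version found at runtime.
# numpy.lib.NumpyVersion parses version strings (including rc/dev suffixes)
# and compares them correctly, unlike plain string comparison.
import numpy as np
from numpy.lib import NumpyVersion


def supports_custom_allocators() -> bool:
    # Hypothetical helper: PyDataMem_SetHandler was added in NumPy 1.22,
    # so only use it when the installed NumPy is at least that new.
    return NumpyVersion(np.__version__) >= NumpyVersion('1.22.0')


print(np.__version__, supports_custom_allocators())
```

Comparing parsed versions rather than raw strings matters because, for example, ``'1.22.0rc1'`` sorts before ``'1.22.0'`` as a release but after it as a string.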
diff --git a/doc/source/dev/development_workflow.rst b/doc/source/dev/development_workflow.rst
index c54a34192..9fa9c5809 100644
--- a/doc/source/dev/development_workflow.rst
+++ b/doc/source/dev/development_workflow.rst
@@ -200,12 +200,46 @@ sequences. In such cases you may explicitly skip CI by including one of these
fragments in your commit message:
* ``[skip ci]``: skip all CI
+
+  Only recommended if you are still not ready for the checks to run on your PR
+  (for example, if this is only a draft).
+
* ``[skip actions]``: skip GitHub Actions jobs
+
+ `GitHub Actions <https://docs.github.com/actions>`__ is where most of the CI
+ checks are run, including the linter, benchmarking, running basic tests for
+ most architectures and OSs, and several compiler and CPU optimization
+ settings.
+ `See the configuration files for these checks. <https://github.com/numpy/numpy/tree/main/.github/workflows>`__
+
* ``[skip travis]``: skip TravisCI jobs
+
+ `TravisCI <https://www.travis-ci.com/>`__ will test your changes against
+ Python 3.9 on the PowerPC and s390x architectures.
+ `See the configuration file for these checks. <https://github.com/numpy/numpy/blob/main/.travis.yml>`__
+
* ``[skip azp]``: skip Azure jobs
+
+ `Azure <https://azure.microsoft.com/en-us/products/devops/pipelines>`__ is
+ where all comprehensive tests are run. This is an expensive run, and one you
+ could typically skip if you do documentation-only changes, for example.
+ `See the main configuration file for these checks. <https://github.com/numpy/numpy/blob/main/azure-pipelines.yml>`__
+
* ``[skip circle]``: skip CircleCI jobs
+
+ `CircleCI <https://circleci.com/>`__ is where we build the documentation and
+ store the generated artifact for preview in each PR. This check will also run
+ all the docstrings examples and verify their results. If you don't make
+ documentation changes, but you make changes to a function's API, for example,
+ you may need to run these tests to verify that the doctests are still valid.
+ `See the configuration file for these checks. <https://github.com/numpy/numpy/blob/main/.circleci/config.yml>`__
+
* ``[skip cirrus]``: skip Cirrus jobs
+
+  `CirrusCI <https://cirrus-ci.org/>`__ mostly triggers Linux aarch64 and
+  macOS arm64 wheel uploads.
+ `See the configuration file for these checks. <https://github.com/numpy/numpy/blob/main/.cirrus.star>`__
+
Test building wheels
~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/dev/gitwash/git_resources.rst b/doc/source/dev/gitwash/git_resources.rst
index c41af762c..b6930f394 100644
--- a/doc/source/dev/gitwash/git_resources.rst
+++ b/doc/source/dev/gitwash/git_resources.rst
@@ -1,7 +1,7 @@
.. _git-resources:
=========================
-Additional Git_ Resources
+Additional Git_ resources
=========================
Tutorials and summaries
diff --git a/doc/source/dev/index.rst b/doc/source/dev/index.rst
index 22ab55b31..b4479fa0d 100644
--- a/doc/source/dev/index.rst
+++ b/doc/source/dev/index.rst
@@ -60,12 +60,16 @@ Here's the short summary, complete TOC links are below:
- ``upstream``, which refers to the ``numpy`` repository
- ``origin``, which refers to your personal fork
-2. Develop your contribution:
-
- * Pull the latest changes from upstream::
+ * Pull the latest changes from upstream, including tags::
git checkout main
- git pull upstream main
+ git pull upstream main --tags
+
+ * Initialize numpy's submodules::
+
+ git submodule update --init
+
+2. Develop your contribution:
* Create a branch for the feature you want to work on. Since the
branch name will appear in the merge message, use a sensible name
diff --git a/doc/source/dev/reviewer_guidelines.rst b/doc/source/dev/reviewer_guidelines.rst
index 8d938319e..0ed9bf2e5 100644
--- a/doc/source/dev/reviewer_guidelines.rst
+++ b/doc/source/dev/reviewer_guidelines.rst
@@ -1,7 +1,7 @@
.. _reviewer-guidelines:
===================
-Reviewer Guidelines
+Reviewer guidelines
===================
Reviewing open pull requests (PRs) helps move the project forward. We encourage
@@ -101,7 +101,34 @@ For maintainers
If a PR becomes inactive, maintainers may make larger changes.
Remember, a PR is a collaboration between a contributor and a reviewer/s,
sometimes a direct push is the best way to finish it.
-
+
+API changes
+-----------
+As mentioned above, most public API changes should be discussed ahead of time,
+often with a wider audience (on the mailing list, or even through a NEP).
+
+For changes to the public C-API, be aware that the NumPy C-API is backwards
+compatible, so any addition must be ABI compatible with previous versions.
+When that is not the case, you must add a guard.
+
+For example ``PyUnicodeScalarObject`` struct contains the following::
+
+ #if NPY_FEATURE_VERSION >= NPY_1_20_API_VERSION
+ char *buffer_fmt;
+ #endif
+
+This guard is needed because the ``buffer_fmt`` field was added at the end of
+the struct in NumPy 1.20 (all previous fields remained ABI compatible).
+Similarly, any function added to the API table in
+``numpy/core/code_generators/numpy_api.py`` must use the ``MinVersion``
+annotation.
+For example::
+
+ 'PyDataMem_SetHandler': (304, MinVersion("1.22")),
+
+Header-only functionality (such as a new macro) typically does not need to be
+guarded.
+
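To make the effect of the ``MinVersion`` annotation concrete, here is a toy model (not NumPy's actual build machinery) of how a version-annotated API table can hide entries from extensions that target an older NumPy. ``PyFoo_Bar`` is a made-up name; the ``PyDataMem_SetHandler`` entry mirrors the annotation shown above.

```python
# Toy model: a MinVersion-style annotation gates which API table entries
# are visible to an extension compiled for a given target version.
API_TABLE = {
    # name: (table slot, minimum NumPy version, or None if always available)
    'PyFoo_Bar': (0, None),                  # hypothetical, unguarded entry
    'PyDataMem_SetHandler': (304, '1.22'),   # guarded, added in NumPy 1.22
}


def visible_entries(target_version: str) -> set:
    """Return the API names an extension targeting `target_version` may use."""
    def key(version: str) -> tuple:
        return tuple(int(part) for part in version.split('.'))
    return {
        name
        for name, (_slot, min_version) in API_TABLE.items()
        if min_version is None or key(target_version) >= key(min_version)
    }


print(visible_entries('1.20'))  # the guarded entry is hidden
print(visible_entries('1.22'))  # both entries are visible
```

In the real table, an extension whose ``NPY_TARGET_VERSION`` predates an entry's ``MinVersion`` simply cannot use that function.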
GitHub Workflow
---------------
diff --git a/doc/source/f2py/buildtools/cmake.rst b/doc/source/f2py/buildtools/cmake.rst
index 8c654c73e..db64453b4 100644
--- a/doc/source/f2py/buildtools/cmake.rst
+++ b/doc/source/f2py/buildtools/cmake.rst
@@ -18,7 +18,7 @@ but this `extensive CMake collection`_ of resources is great.
``f2py`` is not particularly native or pleasant; and a more natural approach
is to consider :ref:`f2py-skbuild`
-Fibonacci Walkthrough (F77)
+Fibonacci walkthrough (F77)
===========================
Returning to the ``fib`` example from :ref:`f2py-getting-started` section.
diff --git a/doc/source/f2py/code/ftype.f b/doc/source/f2py/code/ftype.f
index cabbb9e2d..67d5a224b 100644
--- a/doc/source/f2py/code/ftype.f
+++ b/doc/source/f2py/code/ftype.f
@@ -4,6 +4,6 @@ C FILE: FTYPE.F
Cf2py integer optional,intent(in) :: n = 13
REAL A,X
COMMON /DATA/ A,X(3)
- PRINT*, "IN FOO: N=",N," A=",A," X=[",X(1),X(2),X(3),"]"
+C PRINT*, "IN FOO: N=",N," A=",A," X=[",X(1),X(2),X(3),"]"
END
C END OF FTYPE.F
diff --git a/doc/source/f2py/f2py.getting-started.rst b/doc/source/f2py/f2py.getting-started.rst
index da88b46f5..e96564814 100644
--- a/doc/source/f2py/f2py.getting-started.rst
+++ b/doc/source/f2py/f2py.getting-started.rst
@@ -18,6 +18,7 @@ following steps:
* F2PY reads a signature file and writes a Python C/API module containing
Fortran/C/Python bindings.
+
* F2PY compiles all sources and builds an extension module containing
the wrappers.
@@ -26,6 +27,13 @@ following steps:
Fortran, SGI MIPSpro, Absoft, NAG, Compaq etc. For different build systems,
see :ref:`f2py-bldsys`.
+ * Depending on your operating system, you may need to install the Python
+   development headers (which provide the file ``Python.h``) separately. On
+   Debian-based Linux distributions this package is called ``python3-dev``; on
+   Fedora-based distributions it is ``python3-devel``. On macOS, depending on
+   how Python was installed, your mileage may vary. On Windows, the headers are
+   typically installed already.
+
Depending on the situation, these steps can be carried out in a single composite
command or step-by-step; in which case some steps can be omitted or combined
with others.
diff --git a/doc/source/index.rst b/doc/source/index.rst
index f31e730d7..5e31ec0a4 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -17,7 +17,6 @@ NumPy documentation
**Version**: |version|
**Download documentation**:
-`PDF Version <https://numpy.org/doc/stable/numpy-user.pdf>`_ |
`Historical versions of documentation <https://numpy.org/doc/>`_
**Useful links**:
diff --git a/doc/source/reference/arrays.nditer.rst b/doc/source/reference/arrays.nditer.rst
index 8cabc1a06..fb1c91a07 100644
--- a/doc/source/reference/arrays.nditer.rst
+++ b/doc/source/reference/arrays.nditer.rst
@@ -3,7 +3,7 @@
.. _arrays.nditer:
*********************
-Iterating Over Arrays
+Iterating over arrays
*********************
.. note::
@@ -23,7 +23,7 @@ can accelerate the inner loop in Cython. Since the Python exposure of
iterator API, these ideas will also provide help working with array
iteration from C or C++.
-Single Array Iteration
+Single array iteration
======================
The most basic task that can be done with the :class:`nditer` is to
@@ -64,7 +64,7 @@ namely the order they are stored in memory, whereas the elements of
`a.T.copy(order='C')` get visited in a different order because they
have been put into a different memory layout.
-Controlling Iteration Order
+Controlling iteration order
---------------------------
There are times when it is important to visit the elements of an array
@@ -88,7 +88,7 @@ order='C' for C order and order='F' for Fortran order.
.. _nditer-context-manager:
-Modifying Array Values
+Modifying array values
----------------------
By default, the :class:`nditer` treats the input operand as a read-only
@@ -128,7 +128,7 @@ note that prior to 1.15, :class:`nditer` was not a context manager and
did not have a `close` method. Instead it relied on the destructor to
initiate the writeback of the buffer.
-Using an External Loop
+Using an external loop
----------------------
In all the examples so far, the elements of `a` are provided by the
@@ -161,7 +161,7 @@ elements each.
...
[0 3] [1 4] [2 5]
-Tracking an Index or Multi-Index
+Tracking an index or multi-index
--------------------------------
During iteration, you may want to use the index of the current
@@ -210,7 +210,7 @@ raise an exception.
File "<stdin>", line 1, in <module>
ValueError: Iterator flag EXTERNAL_LOOP cannot be used if an index or multi-index is being tracked
-Alternative Looping and Element Access
+Alternative looping and element access
--------------------------------------
To make its properties more readily accessible during iteration,
@@ -246,7 +246,7 @@ produce identical results to the ones in the previous section.
array([[ 0, 1, 2],
[-1, 0, 1]])
-Buffering the Array Elements
+Buffering the array elements
----------------------------
When forcing an iteration order, we observed that the external loop
@@ -274,7 +274,7 @@ is enabled.
...
[0 3 1 4 2 5]
-Iterating as a Specific Data Type
+Iterating as a specific data type
---------------------------------
There are times when it is necessary to treat an array as a different
@@ -382,7 +382,7 @@ would violate the casting rule.
File "<stdin>", line 2, in <module>
TypeError: Iterator requested dtype could not be cast from dtype('float64') to dtype('int64'), the operand 0 dtype, according to the rule 'same_kind'
-Broadcasting Array Iteration
+Broadcasting array iteration
============================
NumPy has a set of rules for dealing with arrays that have differing
@@ -417,7 +417,7 @@ which includes the input shapes to help diagnose the problem.
...
ValueError: operands could not be broadcast together with shapes (2,) (2,3)
-Iterator-Allocated Output Arrays
+Iterator-allocated output arrays
--------------------------------
A common case in NumPy functions is to have outputs allocated based
diff --git a/doc/source/reference/arrays.scalars.rst b/doc/source/reference/arrays.scalars.rst
index 9a592984c..4dba54d6b 100644
--- a/doc/source/reference/arrays.scalars.rst
+++ b/doc/source/reference/arrays.scalars.rst
@@ -337,8 +337,6 @@ are also provided.
.. note that these are documented with ..attribute because that is what
autoclass does for aliases under the hood.
-.. autoclass:: numpy.bool8
-
.. attribute:: int8
int16
int32
diff --git a/doc/source/reference/c-api/array.rst b/doc/source/reference/c-api/array.rst
index 79949acc7..038702bcf 100644
--- a/doc/source/reference/c-api/array.rst
+++ b/doc/source/reference/c-api/array.rst
@@ -548,22 +548,6 @@ From other objects
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
:c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`
-.. c:function:: int PyArray_GetArrayParamsFromObject( \
- PyObject* op, PyArray_Descr* requested_dtype, npy_bool writeable, \
- PyArray_Descr** out_dtype, int* out_ndim, npy_intp* out_dims, \
- PyArrayObject** out_arr, PyObject* context)
-
- .. deprecated:: NumPy 1.19
-
- Unless NumPy is made aware of an issue with this, this function
- is scheduled for rapid removal without replacement.
-
- .. versionchanged:: NumPy 1.19
-
- `context` is never used. Its use results in an error.
-
- .. versionadded:: 1.6
-
.. c:function:: PyObject* PyArray_CheckFromAny( \
PyObject* op, PyArray_Descr* dtype, int min_depth, int max_depth, \
int requirements, PyObject* context)
@@ -1165,29 +1149,13 @@ Converting data types
.. versionadded:: 1.6
- This applies type promotion to all the inputs,
- using the NumPy rules for combining scalars and arrays, to
- determine the output type of a set of operands. This is the
- same result type that ufuncs produce. The specific algorithm
- used is as follows.
-
- Categories are determined by first checking which of boolean,
- integer (int/uint), or floating point (float/complex) the maximum
- kind of all the arrays and the scalars are.
+ This applies type promotion to all the input arrays and dtype
+ objects, using the NumPy rules for combining scalars and arrays, to
+ determine the output type for an operation with the given set of
+ operands. This is the same result type that ufuncs produce.
- If there are only scalars or the maximum category of the scalars
- is higher than the maximum category of the arrays,
- the data types are combined with :c:func:`PyArray_PromoteTypes`
- to produce the return value.
-
- Otherwise, PyArray_MinScalarType is called on each array, and
- the resulting data types are all combined with
- :c:func:`PyArray_PromoteTypes` to produce the return value.
-
- The set of int values is not a subset of the uint values for types
- with the same number of bits, something not reflected in
- :c:func:`PyArray_MinScalarType`, but handled as a special case in
- PyArray_ResultType.
+ See the documentation of :func:`numpy.result_type` for more
+ detail about the type promotion algorithm.
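Since ``PyArray_ResultType`` produces the same result type that ufuncs do, the promotion rules can be explored from Python with ``numpy.result_type``; a short sketch:

```python
# The C-level PyArray_ResultType mirrors numpy.result_type, so the
# dtype promotion rules can be checked interactively from Python.
import numpy as np

print(np.result_type(np.int8, np.int16))     # int16
print(np.result_type(np.uint8, np.int8))     # int16: no 8-bit type holds both
print(np.result_type(np.int32, np.float32))  # float64: float32 can't hold int32
```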
.. c:function:: int PyArray_ObjectType(PyObject* op, int mintype)
@@ -1202,17 +1170,6 @@ Converting data types
return value is the enumerated typenumber that represents the
data-type that *op* should have.
-.. c:function:: void PyArray_ArrayType( \
- PyObject* op, PyArray_Descr* mintype, PyArray_Descr* outtype)
-
- This function is superseded by :c:func:`PyArray_ResultType`.
-
- This function works similarly to :c:func:`PyArray_ObjectType` (...)
- except it handles flexible arrays. The *mintype* argument can have
- an itemsize member and the *outtype* argument will have an
- itemsize member at least as big but perhaps bigger depending on
- the object *op*.
-
.. c:function:: PyArrayObject** PyArray_ConvertToCommonType( \
PyObject* op, int* n)
@@ -2276,7 +2233,7 @@ Array Functions
output array must have the correct shape, type, and be
C-contiguous, or an exception is raised.
-.. c:function:: PyObject* PyArray_EinsteinSum( \
+.. c:function:: PyArrayObject* PyArray_EinsteinSum( \
char* subscripts, npy_intp nop, PyArrayObject** op_in, \
PyArray_Descr* dtype, NPY_ORDER order, NPY_CASTING casting, \
PyArrayObject* out)
@@ -3378,7 +3335,7 @@ Memory management
.. c:function:: int PyArray_ResolveWritebackIfCopy(PyArrayObject* obj)
- If ``obj.flags`` has :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`, this function
+ If ``obj->flags`` has :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`, this function
clears the flags, `DECREF` s
`obj->base` and makes it writeable, and sets ``obj->base`` to NULL. It then
copies ``obj->data`` to `obj->base->data`, and returns the error state of
@@ -3608,29 +3565,17 @@ Miscellaneous Macros
Returns the reference count of any Python object.
-.. c:function:: void PyArray_DiscardWritebackIfCopy(PyObject* obj)
+.. c:function:: void PyArray_DiscardWritebackIfCopy(PyArrayObject* obj)
- If ``obj.flags`` has :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`, this function
+ If ``obj->flags`` has :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`, this function
clears the flags, `DECREF` s
`obj->base` and makes it writeable, and sets ``obj->base`` to NULL. In
- contrast to :c:func:`PyArray_DiscardWritebackIfCopy` it makes no attempt
- to copy the data from `obj->base` This undoes
+ contrast to :c:func:`PyArray_ResolveWritebackIfCopy` it makes no attempt
+ to copy the data from `obj->base`. This undoes
:c:func:`PyArray_SetWritebackIfCopyBase`. Usually this is called after an
error when you are finished with ``obj``, just before ``Py_DECREF(obj)``.
It may be called multiple times, or with ``NULL`` input.
-.. c:function:: void PyArray_XDECREF_ERR(PyObject* obj)
-
- Deprecated in 1.14, use :c:func:`PyArray_DiscardWritebackIfCopy`
- followed by ``Py_XDECREF``
-
- DECREF's an array object which may have the
- :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`
- flag set without causing the contents to be copied back into the
- original array. Resets the :c:data:`NPY_ARRAY_WRITEABLE` flag on the base
- object. This is useful for recovering from an error condition when
- writeback semantics are used, but will lead to wrong results.
-
Enumerated Types
~~~~~~~~~~~~~~~~
diff --git a/doc/source/reference/c-api/data_memory.rst b/doc/source/reference/c-api/data_memory.rst
index 2084ab5d0..cab587efc 100644
--- a/doc/source/reference/c-api/data_memory.rst
+++ b/doc/source/reference/c-api/data_memory.rst
@@ -159,3 +159,87 @@ A better technique would be to use a ``PyCapsule`` as a base object:
return NULL;
}
...
+
+Example of memory tracing with ``np.lib.tracemalloc_domain``
+------------------------------------------------------------
+
+Since Python 3.6, the builtin ``tracemalloc`` module can be used to
+track allocations inside NumPy. NumPy places its CPU memory allocations into
+the ``np.lib.tracemalloc_domain`` domain.
+For additional information, see the
+`tracemalloc documentation <https://docs.python.org/3/library/tracemalloc.html>`__.
+
+Here is an example of how to use ``np.lib.tracemalloc_domain``:
+
+.. code-block:: python
+
+ """
+ The goal of this example is to show how to trace memory
+ from an application that has NumPy and non-NumPy sections.
+ We only select the sections using NumPy related calls.
+ """
+
+ import tracemalloc
+ import numpy as np
+
+ # Flag to determine if we select NumPy domain
+ use_np_domain = True
+
+ nx = 300
+ ny = 500
+
+ # Start to trace memory
+ tracemalloc.start()
+
+ # Section 1
+ # ---------
+
+ # NumPy related call
+ a = np.zeros((nx,ny))
+
+ # non-NumPy related call
+ b = [i**2 for i in range(nx*ny)]
+
+ snapshot1 = tracemalloc.take_snapshot()
+ # We filter the snapshot to only select NumPy related calls
+ np_domain = np.lib.tracemalloc_domain
+ dom_filter = tracemalloc.DomainFilter(inclusive=use_np_domain,
+ domain=np_domain)
+ snapshot1 = snapshot1.filter_traces([dom_filter])
+ top_stats1 = snapshot1.statistics('traceback')
+
+ print("================ SNAPSHOT 1 =================")
+ for stat in top_stats1:
+ print(f"{stat.count} memory blocks: {stat.size / 1024:.1f} KiB")
+ print(stat.traceback.format()[-1])
+
+ # Clear traces of memory blocks allocated by Python
+ # before moving to the next section.
+ tracemalloc.clear_traces()
+
+ # Section 2
+ # ---------
+
+ # We are only using NumPy
+ c = np.sum(a*a)
+
+ snapshot2 = tracemalloc.take_snapshot()
+ top_stats2 = snapshot2.statistics('traceback')
+
+ print()
+ print("================ SNAPSHOT 2 =================")
+ for stat in top_stats2:
+ print(f"{stat.count} memory blocks: {stat.size / 1024:.1f} KiB")
+ print(stat.traceback.format()[-1])
+
+ tracemalloc.stop()
+
+ print()
+ print("============================================")
+ print("\nTracing Status : ", tracemalloc.is_tracing())
+
+ try:
+ print("\nTrying to Take Snapshot After Tracing is Stopped.")
+ snap = tracemalloc.take_snapshot()
+ except Exception as e:
+ print("Exception : ", e)
+
diff --git a/doc/source/reference/c-api/deprecations.rst b/doc/source/reference/c-api/deprecations.rst
index 5b1abc6f2..d805421f2 100644
--- a/doc/source/reference/c-api/deprecations.rst
+++ b/doc/source/reference/c-api/deprecations.rst
@@ -58,3 +58,7 @@ On compilers which support a #warning mechanism, NumPy issues a
compiler warning if you do not define the symbol NPY_NO_DEPRECATED_API.
This way, the fact that there are deprecations will be flagged for
third-party developers who may not have read the release notes closely.
+
+Note that defining NPY_NO_DEPRECATED_API is not sufficient to make your
+extension ABI compatible with a given NumPy version. See
+:ref:`for-downstream-package-authors`.
diff --git a/doc/source/reference/c-api/dtype.rst b/doc/source/reference/c-api/dtype.rst
index 642f62749..bdd5a6f55 100644
--- a/doc/source/reference/c-api/dtype.rst
+++ b/doc/source/reference/c-api/dtype.rst
@@ -25,161 +25,161 @@ select the precision desired.
Enumerated Types
----------------
-.. c:enumerator:: NPY_TYPES
+.. c:enum:: NPY_TYPES
-There is a list of enumerated types defined providing the basic 24
-data types plus some useful generic names. Whenever the code requires
-a type number, one of these enumerated types is requested. The types
-are all called ``NPY_{NAME}``:
+ There is a list of enumerated types defined providing the basic 24
+ data types plus some useful generic names. Whenever the code requires
+ a type number, one of these enumerated types is requested. The types
+ are all called ``NPY_{NAME}``:
-.. c:enumerator:: NPY_BOOL
+ .. c:enumerator:: NPY_BOOL
- The enumeration value for the boolean type, stored as one byte.
- It may only be set to the values 0 and 1.
+ The enumeration value for the boolean type, stored as one byte.
+ It may only be set to the values 0 and 1.
-.. c:enumerator:: NPY_BYTE
-.. c:enumerator:: NPY_INT8
+ .. c:enumerator:: NPY_BYTE
+ .. c:enumerator:: NPY_INT8
- The enumeration value for an 8-bit/1-byte signed integer.
+ The enumeration value for an 8-bit/1-byte signed integer.
-.. c:enumerator:: NPY_SHORT
-.. c:enumerator:: NPY_INT16
+ .. c:enumerator:: NPY_SHORT
+ .. c:enumerator:: NPY_INT16
- The enumeration value for a 16-bit/2-byte signed integer.
+ The enumeration value for a 16-bit/2-byte signed integer.
-.. c:enumerator:: NPY_INT
-.. c:enumerator:: NPY_INT32
+ .. c:enumerator:: NPY_INT
+ .. c:enumerator:: NPY_INT32
- The enumeration value for a 32-bit/4-byte signed integer.
+ The enumeration value for a 32-bit/4-byte signed integer.
-.. c:enumerator:: NPY_LONG
+ .. c:enumerator:: NPY_LONG
- Equivalent to either NPY_INT or NPY_LONGLONG, depending on the
- platform.
+ Equivalent to either NPY_INT or NPY_LONGLONG, depending on the
+ platform.
-.. c:enumerator:: NPY_LONGLONG
-.. c:enumerator:: NPY_INT64
+ .. c:enumerator:: NPY_LONGLONG
+ .. c:enumerator:: NPY_INT64
- The enumeration value for a 64-bit/8-byte signed integer.
+ The enumeration value for a 64-bit/8-byte signed integer.
-.. c:enumerator:: NPY_UBYTE
-.. c:enumerator:: NPY_UINT8
+ .. c:enumerator:: NPY_UBYTE
+ .. c:enumerator:: NPY_UINT8
- The enumeration value for an 8-bit/1-byte unsigned integer.
+ The enumeration value for an 8-bit/1-byte unsigned integer.
-.. c:enumerator:: NPY_USHORT
-.. c:enumerator:: NPY_UINT16
+ .. c:enumerator:: NPY_USHORT
+ .. c:enumerator:: NPY_UINT16
- The enumeration value for a 16-bit/2-byte unsigned integer.
+ The enumeration value for a 16-bit/2-byte unsigned integer.
-.. c:enumerator:: NPY_UINT
-.. c:enumerator:: NPY_UINT32
+ .. c:enumerator:: NPY_UINT
+ .. c:enumerator:: NPY_UINT32
- The enumeration value for a 32-bit/4-byte unsigned integer.
+ The enumeration value for a 32-bit/4-byte unsigned integer.
-.. c:enumerator:: NPY_ULONG
+ .. c:enumerator:: NPY_ULONG
- Equivalent to either NPY_UINT or NPY_ULONGLONG, depending on the
- platform.
+ Equivalent to either NPY_UINT or NPY_ULONGLONG, depending on the
+ platform.
-.. c:enumerator:: NPY_ULONGLONG
-.. c:enumerator:: NPY_UINT64
+ .. c:enumerator:: NPY_ULONGLONG
+ .. c:enumerator:: NPY_UINT64
- The enumeration value for a 64-bit/8-byte unsigned integer.
+ The enumeration value for a 64-bit/8-byte unsigned integer.
-.. c:enumerator:: NPY_HALF
-.. c:enumerator:: NPY_FLOAT16
+ .. c:enumerator:: NPY_HALF
+ .. c:enumerator:: NPY_FLOAT16
- The enumeration value for a 16-bit/2-byte IEEE 754-2008 compatible floating
- point type.
+ The enumeration value for a 16-bit/2-byte IEEE 754-2008 compatible floating
+ point type.
-.. c:enumerator:: NPY_FLOAT
-.. c:enumerator:: NPY_FLOAT32
+ .. c:enumerator:: NPY_FLOAT
+ .. c:enumerator:: NPY_FLOAT32
- The enumeration value for a 32-bit/4-byte IEEE 754 compatible floating
- point type.
+ The enumeration value for a 32-bit/4-byte IEEE 754 compatible floating
+ point type.
-.. c:enumerator:: NPY_DOUBLE
-.. c:enumerator:: NPY_FLOAT64
+ .. c:enumerator:: NPY_DOUBLE
+ .. c:enumerator:: NPY_FLOAT64
- The enumeration value for a 64-bit/8-byte IEEE 754 compatible floating
- point type.
+ The enumeration value for a 64-bit/8-byte IEEE 754 compatible floating
+ point type.
-.. c:enumerator:: NPY_LONGDOUBLE
+ .. c:enumerator:: NPY_LONGDOUBLE
- The enumeration value for a platform-specific floating point type which is
- at least as large as NPY_DOUBLE, but larger on many platforms.
+ The enumeration value for a platform-specific floating point type which is
+ at least as large as NPY_DOUBLE, but larger on many platforms.
-.. c:enumerator:: NPY_CFLOAT
-.. c:enumerator:: NPY_COMPLEX64
+ .. c:enumerator:: NPY_CFLOAT
+ .. c:enumerator:: NPY_COMPLEX64
- The enumeration value for a 64-bit/8-byte complex type made up of
- two NPY_FLOAT values.
+ The enumeration value for a 64-bit/8-byte complex type made up of
+ two NPY_FLOAT values.
-.. c:enumerator:: NPY_CDOUBLE
-.. c:enumerator:: NPY_COMPLEX128
+ .. c:enumerator:: NPY_CDOUBLE
+ .. c:enumerator:: NPY_COMPLEX128
- The enumeration value for a 128-bit/16-byte complex type made up of
- two NPY_DOUBLE values.
+ The enumeration value for a 128-bit/16-byte complex type made up of
+ two NPY_DOUBLE values.
-.. c:enumerator:: NPY_CLONGDOUBLE
+ .. c:enumerator:: NPY_CLONGDOUBLE
- The enumeration value for a platform-specific complex floating point
- type which is made up of two NPY_LONGDOUBLE values.
+ The enumeration value for a platform-specific complex floating point
+ type which is made up of two NPY_LONGDOUBLE values.
-.. c:enumerator:: NPY_DATETIME
+ .. c:enumerator:: NPY_DATETIME
- The enumeration value for a data type which holds dates or datetimes with
- a precision based on selectable date or time units.
+ The enumeration value for a data type which holds dates or datetimes with
+ a precision based on selectable date or time units.
-.. c:enumerator:: NPY_TIMEDELTA
+ .. c:enumerator:: NPY_TIMEDELTA
- The enumeration value for a data type which holds lengths of times in
- integers of selectable date or time units.
+ The enumeration value for a data type which holds lengths of times in
+ integers of selectable date or time units.
-.. c:enumerator:: NPY_STRING
+ .. c:enumerator:: NPY_STRING
- The enumeration value for ASCII strings of a selectable size. The
- strings have a fixed maximum size within a given array.
+ The enumeration value for ASCII strings of a selectable size. The
+ strings have a fixed maximum size within a given array.
-.. c:enumerator:: NPY_UNICODE
+ .. c:enumerator:: NPY_UNICODE
- The enumeration value for UCS4 strings of a selectable size. The
- strings have a fixed maximum size within a given array.
+ The enumeration value for UCS4 strings of a selectable size. The
+ strings have a fixed maximum size within a given array.
-.. c:enumerator:: NPY_OBJECT
+ .. c:enumerator:: NPY_OBJECT
- The enumeration value for references to arbitrary Python objects.
+ The enumeration value for references to arbitrary Python objects.
-.. c:enumerator:: NPY_VOID
+ .. c:enumerator:: NPY_VOID
- Primarily used to hold struct dtypes, but can contain arbitrary
- binary data.
+ Primarily used to hold struct dtypes, but can contain arbitrary
+ binary data.
-Some useful aliases of the above types are
+ Some useful aliases of the above types are
-.. c:enumerator:: NPY_INTP
+ .. c:enumerator:: NPY_INTP
- The enumeration value for a signed integer type which is the same
- size as a (void \*) pointer. This is the type used by all
- arrays of indices.
+ The enumeration value for a signed integer type which is the same
+ size as a (void \*) pointer. This is the type used by all
+ arrays of indices.
-.. c:enumerator:: NPY_UINTP
+ .. c:enumerator:: NPY_UINTP
- The enumeration value for an unsigned integer type which is the
- same size as a (void \*) pointer.
+ The enumeration value for an unsigned integer type which is the
+ same size as a (void \*) pointer.
-.. c:enumerator:: NPY_MASK
+ .. c:enumerator:: NPY_MASK
- The enumeration value of the type used for masks, such as with
- the :c:data:`NPY_ITER_ARRAYMASK` iterator flag. This is equivalent
- to :c:data:`NPY_UINT8`.
+ The enumeration value of the type used for masks, such as with
+ the :c:data:`NPY_ITER_ARRAYMASK` iterator flag. This is equivalent
+ to :c:data:`NPY_UINT8`.
-.. c:enumerator:: NPY_DEFAULT_TYPE
+ .. c:enumerator:: NPY_DEFAULT_TYPE
- The default type to use when no dtype is explicitly specified, for
- example when calling np.zero(shape). This is equivalent to
- :c:data:`NPY_DOUBLE`.
+ The default type to use when no dtype is explicitly specified, for
+ example when calling np.zeros(shape). This is equivalent to
+ :c:data:`NPY_DOUBLE`.
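On the Python side, each of these enumeration values is exposed as the ``num`` attribute of the corresponding ``dtype`` object. A minimal sketch of that correspondence (assuming a standard NumPy build; the specific ``num`` values shown are the fixed positions of these types in the enumeration):

```python
import numpy as np

# Each C enumeration value (NPY_BOOL, NPY_DOUBLE, ...) is mirrored by the
# ``num`` attribute of the corresponding dtype object on the Python side.
assert np.dtype(np.bool_).num == 0      # NPY_BOOL
assert np.dtype(np.float64).num == 12   # NPY_DOUBLE

# NPY_INTP and NPY_UINTP alias the integer types that are the same size
# as a (void *) pointer, so their itemsizes match each other.
assert np.dtype(np.intp).itemsize == np.dtype(np.uintp).itemsize

# NPY_DEFAULT_TYPE is NPY_DOUBLE: arrays created without an explicit
# dtype are float64.
assert np.zeros(3).dtype == np.float64
```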
Other useful related constants are
diff --git a/doc/source/reference/c-api/types-and-structures.rst b/doc/source/reference/c-api/types-and-structures.rst
index 8081576b5..cff1b3d38 100644
--- a/doc/source/reference/c-api/types-and-structures.rst
+++ b/doc/source/reference/c-api/types-and-structures.rst
@@ -285,83 +285,16 @@ PyArrayDescr_Type and PyArray_Descr
array like behavior. Each bit in this member is a flag which are named
as:
- .. c:member:: int alignment
-
- Non-NULL if this type is an array (C-contiguous) of some other type
-
-
-..
- dedented to allow internal linking, pending a refactoring
-
-.. c:macro:: NPY_ITEM_REFCOUNT
-
- Indicates that items of this data-type must be reference
- counted (using :c:func:`Py_INCREF` and :c:func:`Py_DECREF` ).
-
- .. c:macro:: NPY_ITEM_HASOBJECT
-
- Same as :c:data:`NPY_ITEM_REFCOUNT`.
-
-..
- dedented to allow internal linking, pending a refactoring
-
-.. c:macro:: NPY_LIST_PICKLE
-
- Indicates arrays of this data-type must be converted to a list
- before pickling.
-
-.. c:macro:: NPY_ITEM_IS_POINTER
-
- Indicates the item is a pointer to some other data-type
-
-.. c:macro:: NPY_NEEDS_INIT
-
- Indicates memory for this data-type must be initialized (set
- to 0) on creation.
-
-.. c:macro:: NPY_NEEDS_PYAPI
-
- Indicates this data-type requires the Python C-API during
- access (so don't give up the GIL if array access is going to
- be needed).
-
-.. c:macro:: NPY_USE_GETITEM
-
- On array access use the ``f->getitem`` function pointer
- instead of the standard conversion to an array scalar. Must
- use if you don't define an array scalar to go along with
- the data-type.
-
-.. c:macro:: NPY_USE_SETITEM
-
- When creating a 0-d array from an array scalar use
- ``f->setitem`` instead of the standard copy from an array
- scalar. Must use if you don't define an array scalar to go
- along with the data-type.
-
- .. c:macro:: NPY_FROM_FIELDS
-
- The bits that are inherited for the parent data-type if these
- bits are set in any field of the data-type. Currently (
- :c:data:`NPY_NEEDS_INIT` \| :c:data:`NPY_LIST_PICKLE` \|
- :c:data:`NPY_ITEM_REFCOUNT` \| :c:data:`NPY_NEEDS_PYAPI` ).
-
- .. c:macro:: NPY_OBJECT_DTYPE_FLAGS
-
- Bits set for the object data-type: ( :c:data:`NPY_LIST_PICKLE`
- \| :c:data:`NPY_USE_GETITEM` \| :c:data:`NPY_ITEM_IS_POINTER` \|
- :c:data:`NPY_ITEM_REFCOUNT` \| :c:data:`NPY_NEEDS_INIT` \|
- :c:data:`NPY_NEEDS_PYAPI`).
-
- .. c:function:: int PyDataType_FLAGCHK(PyArray_Descr *dtype, int flags)
-
- Return true if all the given flags are set for the data-type
- object.
-
- .. c:function:: int PyDataType_REFCHK(PyArray_Descr *dtype)
-
- Equivalent to :c:func:`PyDataType_FLAGCHK` (*dtype*,
- :c:data:`NPY_ITEM_REFCOUNT`).
+ * :c:macro:`NPY_ITEM_REFCOUNT`
+ * :c:macro:`NPY_ITEM_HASOBJECT`
+ * :c:macro:`NPY_LIST_PICKLE`
+ * :c:macro:`NPY_ITEM_IS_POINTER`
+ * :c:macro:`NPY_NEEDS_INIT`
+ * :c:macro:`NPY_NEEDS_PYAPI`
+ * :c:macro:`NPY_USE_GETITEM`
+ * :c:macro:`NPY_USE_SETITEM`
+ * :c:macro:`NPY_FROM_FIELDS`
+ * :c:macro:`NPY_OBJECT_DTYPE_FLAGS`
.. c:member:: int type_num
@@ -452,6 +385,73 @@ PyArrayDescr_Type and PyArray_Descr
Currently unused. Reserved for future use in caching
hash values.
+.. c:macro:: NPY_ITEM_REFCOUNT
+
+ Indicates that items of this data-type must be reference
+ counted (using :c:func:`Py_INCREF` and :c:func:`Py_DECREF` ).
+
+.. c:macro:: NPY_ITEM_HASOBJECT
+
+ Same as :c:data:`NPY_ITEM_REFCOUNT`.
+
+.. c:macro:: NPY_LIST_PICKLE
+
+ Indicates arrays of this data-type must be converted to a list
+ before pickling.
+
+.. c:macro:: NPY_ITEM_IS_POINTER
+
+ Indicates the item is a pointer to some other data-type
+
+.. c:macro:: NPY_NEEDS_INIT
+
+ Indicates memory for this data-type must be initialized (set
+ to 0) on creation.
+
+.. c:macro:: NPY_NEEDS_PYAPI
+
+ Indicates this data-type requires the Python C-API during
+ access (so don't give up the GIL if array access is going to
+ be needed).
+
+.. c:macro:: NPY_USE_GETITEM
+
+ On array access use the ``f->getitem`` function pointer
+ instead of the standard conversion to an array scalar. Must
+ use if you don't define an array scalar to go along with
+ the data-type.
+
+.. c:macro:: NPY_USE_SETITEM
+
+ When creating a 0-d array from an array scalar use
+ ``f->setitem`` instead of the standard copy from an array
+ scalar. Must use if you don't define an array scalar to go
+ along with the data-type.
+
+.. c:macro:: NPY_FROM_FIELDS
+
+ The bits that are inherited for the parent data-type if these
+ bits are set in any field of the data-type. Currently (
+ :c:data:`NPY_NEEDS_INIT` \| :c:data:`NPY_LIST_PICKLE` \|
+ :c:data:`NPY_ITEM_REFCOUNT` \| :c:data:`NPY_NEEDS_PYAPI` ).
+
+.. c:macro:: NPY_OBJECT_DTYPE_FLAGS
+
+ Bits set for the object data-type: ( :c:data:`NPY_LIST_PICKLE`
+ \| :c:data:`NPY_USE_GETITEM` \| :c:data:`NPY_ITEM_IS_POINTER` \|
+ :c:data:`NPY_ITEM_REFCOUNT` \| :c:data:`NPY_NEEDS_INIT` \|
+ :c:data:`NPY_NEEDS_PYAPI`).
+
+.. c:function:: int PyDataType_FLAGCHK(PyArray_Descr *dtype, int flags)
+
+ Return true if all the given flags are set for the data-type
+ object.
+
+.. c:function:: int PyDataType_REFCHK(PyArray_Descr *dtype)
+
+ Equivalent to :c:func:`PyDataType_FLAGCHK` (*dtype*,
+ :c:data:`NPY_ITEM_REFCOUNT`).
+
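Some of these flags have a partial Python-side reflection; for instance, whether :c:data:`NPY_ITEM_REFCOUNT` is set can be observed through ``dtype.hasobject``. A sketch, assuming a standard NumPy build:

```python
import numpy as np

# NPY_ITEM_REFCOUNT (checked in C with PyDataType_REFCHK) is surfaced in
# Python as dtype.hasobject: true when items hold reference-counted
# Python objects.
assert np.dtype(object).hasobject
assert not np.dtype(np.float64).hasobject

# Consistent with NPY_FROM_FIELDS, a structured dtype inherits the flag
# when any of its fields carries it.
rec = np.dtype([('x', np.float64), ('y', object)])
assert rec.hasobject
```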
.. c:type:: PyArray_ArrFuncs
Functions implementing internal features. Not all of these
@@ -973,7 +973,7 @@ PyUFunc_Type and PyUFuncObject
Some fallback support for this slot exists, but will be removed
eventually. A universal function that relied on this will
have to be ported eventually.
- See ref:`NEP 41 <NEP41>` and ref:`NEP 43 <NEP43>`
+ See :ref:`NEP 41 <NEP41>` and :ref:`NEP 43 <NEP43>`
.. c:member:: void *reserved2
@@ -997,10 +997,14 @@ PyUFunc_Type and PyUFuncObject
.. c:member:: npy_uint32 *core_dim_flags
- For each distinct core dimension, a set of ``UFUNC_CORE_DIM*`` flags
+ For each distinct core dimension, a set of flags (
+ :c:macro:`UFUNC_CORE_DIM_CAN_IGNORE` and
+ :c:macro:`UFUNC_CORE_DIM_SIZE_INFERRED`)
-..
- dedented to allow internal linking, pending a refactoring
+ .. c:member:: PyObject *identity_value
+
+ Identity for reduction, when :c:member:`PyUFuncObject.identity`
+ is equal to :c:data:`PyUFunc_IdentityValue`.
.. c:macro:: UFUNC_CORE_DIM_CAN_IGNORE
@@ -1011,11 +1015,6 @@ PyUFunc_Type and PyUFuncObject
if the dim size will be determined from the operands
and not from a :ref:`frozen <frozen>` signature
- .. c:member:: PyObject *identity_value
-
- Identity for reduction, when :c:member:`PyUFuncObject.identity`
- is equal to :c:data:`PyUFunc_IdentityValue`.
-
PyArrayIter_Type and PyArrayIterObject
--------------------------------------
@@ -1253,7 +1252,7 @@ ScalarArrayTypes
----------------
There is a Python type for each of the different built-in data types
-that can be present in the array Most of these are simple wrappers
+that can be present in the array. Most of these are simple wrappers
around the corresponding data type in C. The C-names for these types
are ``Py{TYPE}ArrType_Type`` where ``{TYPE}`` can be
@@ -1457,22 +1456,6 @@ memory management. These types are not accessible directly from
Python, and are not exposed to the C-API. They are included here only
for completeness and assistance in understanding the code.
-
-.. c:type:: PyUFuncLoopObject
-
- A loose wrapper for a C-structure that contains the information
- needed for looping. This is useful if you are trying to understand
- the ufunc looping code. The :c:type:`PyUFuncLoopObject` is the associated
- C-structure. It is defined in the ``ufuncobject.h`` header.
-
-.. c:type:: PyUFuncReduceObject
-
- A loose wrapper for the C-structure that contains the information
- needed for reduce-like methods of ufuncs. This is useful if you are
- trying to understand the reduce, accumulate, and reduce-at
- code. The :c:type:`PyUFuncReduceObject` is the associated C-structure. It
- is defined in the ``ufuncobject.h`` header.
-
.. c:type:: PyUFunc_Loop1d
A simple linked-list of C-structures containing the information needed
@@ -1483,8 +1466,12 @@ for completeness and assistance in understanding the code.
Advanced indexing is handled with this Python type. It is simply a
loose wrapper around the C-structure containing the variables
- needed for advanced array indexing. The associated C-structure,
- ``PyArrayMapIterObject``, is useful if you are trying to
+ needed for advanced array indexing.
+
+.. c:type:: PyArrayMapIterObject
+
+ The C-structure associated with :c:var:`PyArrayMapIter_Type`.
+ This structure is useful if you are trying to
understand the advanced-index mapping code. It is defined in the
``arrayobject.h`` header. This type is not exposed to Python and
could be replaced with a C-structure. As a Python type it takes
diff --git a/doc/source/reference/distutils_guide.rst b/doc/source/reference/distutils_guide.rst
index a72a9c4d6..6f927c231 100644
--- a/doc/source/reference/distutils_guide.rst
+++ b/doc/source/reference/distutils_guide.rst
@@ -1,6 +1,6 @@
.. _distutils-user-guide:
-NumPy Distutils - Users Guide
+NumPy distutils - users guide
=============================
.. warning::
diff --git a/doc/source/reference/distutils_status_migration.rst b/doc/source/reference/distutils_status_migration.rst
index eda4790b5..67695978c 100644
--- a/doc/source/reference/distutils_status_migration.rst
+++ b/doc/source/reference/distutils_status_migration.rst
@@ -83,7 +83,7 @@ present in ``setuptools``:
- Support for a few other scientific libraries, like FFTW and UMFPACK
- Better MinGW support
- Per-compiler build flag customization (e.g. `-O3` and `SSE2` flags are default)
-- a simple user build config system, see [site.cfg.example](https://github.com/numpy/numpy/blob/master/site.cfg.example)
+- a simple user build config system, see `site.cfg.example <https://github.com/numpy/numpy/blob/master/site.cfg.example>`__
- SIMD intrinsics support
The most widely used feature is nested ``setup.py`` files. This feature will
diff --git a/doc/source/reference/global_state.rst b/doc/source/reference/global_state.rst
index d73da4244..a898450af 100644
--- a/doc/source/reference/global_state.rst
+++ b/doc/source/reference/global_state.rst
@@ -1,7 +1,7 @@
.. _global_state:
************
-Global State
+Global state
************
NumPy has a few import-time, compile-time, or runtime options
@@ -11,10 +11,10 @@ purposes and will not be interesting to the vast majority
of users.
-Performance-Related Options
+Performance-related options
===========================
-Number of Threads used for Linear Algebra
+Number of threads used for linear algebra
-----------------------------------------
NumPy itself is normally intentionally limited to a single thread
@@ -56,10 +56,10 @@ Setting ``NPY_DISABLE_CPU_FEATURES`` will exclude simd features at runtime.
See :ref:`runtime-simd-dispatch`.
-Debugging-Related Options
+Debugging-related options
=========================
-Relaxed Strides Checking
+Relaxed strides checking
------------------------
The *compile-time* environment variable::
diff --git a/doc/source/reference/index.rst b/doc/source/reference/index.rst
index 8e4a81436..4756af5fa 100644
--- a/doc/source/reference/index.rst
+++ b/doc/source/reference/index.rst
@@ -3,7 +3,7 @@
.. _reference:
###############
-NumPy Reference
+NumPy reference
###############
:Release: |version|
diff --git a/doc/source/reference/internals.code-explanations.rst b/doc/source/reference/internals.code-explanations.rst
index d34314610..db3b3b452 100644
--- a/doc/source/reference/internals.code-explanations.rst
+++ b/doc/source/reference/internals.code-explanations.rst
@@ -1,7 +1,7 @@
:orphan:
*************************
-NumPy C Code Explanations
+NumPy C code explanations
*************************
.. This document has been moved to ../dev/internals.code-explanations.rst.
diff --git a/doc/source/reference/maskedarray.baseclass.rst b/doc/source/reference/maskedarray.baseclass.rst
index fcd310faa..7121914b9 100644
--- a/doc/source/reference/maskedarray.baseclass.rst
+++ b/doc/source/reference/maskedarray.baseclass.rst
@@ -72,19 +72,19 @@ Attributes and properties of masked arrays
.. seealso:: :ref:`Array Attributes <arrays.ndarray.attributes>`
-.. autoattribute:: MaskedArray.data
+.. autoattribute:: numpy::ma.MaskedArray.data
-.. autoattribute:: MaskedArray.mask
+.. autoattribute:: numpy::ma.MaskedArray.mask
-.. autoattribute:: MaskedArray.recordmask
+.. autoattribute:: numpy::ma.MaskedArray.recordmask
-.. autoattribute:: MaskedArray.fill_value
+.. autoattribute:: numpy::ma.MaskedArray.fill_value
-.. autoattribute:: MaskedArray.baseclass
+.. autoattribute:: numpy::ma.MaskedArray.baseclass
-.. autoattribute:: MaskedArray.sharedmask
+.. autoattribute:: numpy::ma.MaskedArray.sharedmask
-.. autoattribute:: MaskedArray.hardmask
+.. autoattribute:: numpy::ma.MaskedArray.hardmask
As :class:`MaskedArray` is a subclass of :class:`~numpy.ndarray`, a masked array also inherits all the attributes and properties of a :class:`~numpy.ndarray` instance.
diff --git a/doc/source/reference/random/bit_generators/index.rst b/doc/source/reference/random/bit_generators/index.rst
index 14c19a6bd..e523b4339 100644
--- a/doc/source/reference/random/bit_generators/index.rst
+++ b/doc/source/reference/random/bit_generators/index.rst
@@ -1,5 +1,7 @@
.. currentmodule:: numpy.random
+.. _random-bit-generators:
+
Bit Generators
==============
diff --git a/doc/source/reference/random/compatibility.rst b/doc/source/reference/random/compatibility.rst
new file mode 100644
index 000000000..138f03ca3
--- /dev/null
+++ b/doc/source/reference/random/compatibility.rst
@@ -0,0 +1,87 @@
+.. _random-compatibility:
+
+.. currentmodule:: numpy.random
+
+Compatibility Policy
+====================
+
+`numpy.random` has a somewhat stricter compatibility policy than the rest of
+NumPy. Users of pseudorandomness often have use cases for being able to
+reproduce runs in fine detail given the same seed (so-called "stream
+compatibility"), and so we try to balance those needs with the flexibility to
+enhance our algorithms. :ref:`NEP 19 <NEP19>` describes the evolution of this
+policy.
+
+The main kind of compatibility that we enforce is stream-compatibility from run
+to run under certain conditions. If you create a `Generator` with the same
+`BitGenerator`, with the same seed, perform the same sequence of method calls
+with the same arguments, on the same build of ``numpy``, in the same
+environment, on the same machine, you should get the same stream of numbers.
+Note that these conditions are very strict. There are a number of factors
+outside of NumPy's control that limit our ability to guarantee much more than
+this. For example, different CPUs implement floating point arithmetic
+differently, and this can cause differences in certain edge cases that cascade
+to the rest of the stream. `Generator.multivariate_normal`, for another
+example, uses a matrix decomposition from ``numpy.linalg``. Even on the same
+platform, a different build of ``numpy`` may use a different version of this
+matrix decomposition algorithm from the LAPACK that it links to, causing
+`Generator.multivariate_normal` to return completely different (but equally
+valid!) results. We strive to prefer algorithms that are more resistant to
+these effects, but this is always imperfect.
+
+.. note::
+
+ Most of the `Generator` methods allow you to draw multiple values from
+ a distribution as arrays. The requested size of this array counts as an
+ argument for the purposes of the above policy. Calling ``rng.random()``
+ 5 times is not *guaranteed* to give the same numbers as ``rng.random(5)``.
+ We reserve the ability to use different algorithms for different-sized
+ blocks. In practice, this happens rarely.
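The run-to-run guarantee described above can be exercised directly; a minimal sketch (the values themselves depend on the build, so only equality between identically seeded generators is asserted):

```python
import numpy as np

# Same BitGenerator, same seed, same sequence of method calls with the
# same arguments (including the requested size) -> the same stream,
# within one environment.
rng_a = np.random.default_rng(12345)
rng_b = np.random.default_rng(12345)
assert np.array_equal(rng_a.random(5), rng_b.random(5))

# The requested size is one of those arguments: five single draws are a
# different sequence of calls, so matching random(5) is not promised by
# the policy (even if it may hold in a given release).
rng_c = np.random.default_rng(12345)
singles = np.array([rng_c.random() for _ in range(5)])
assert singles.shape == (5,)  # only the shape is guaranteed here
```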
+
+Like the rest of NumPy, we generally maintain API source
+compatibility from version to version. If we *must* make an API-breaking
+change, then we will only do so with an appropriate deprecation period and
+warnings, according to :ref:`general NumPy policy <NEP23>`.
+
+Breaking stream-compatibility in order to introduce new features or
+improve performance in `Generator` or `default_rng` will be *allowed* with
+*caution*. Such changes will be considered features, and as such will be no
+faster than the standard release cadence of features (i.e. on ``X.Y`` releases,
+never ``X.Y.Z``). Slowness will not be considered a bug for this purpose.
+Correctness bug fixes that break stream-compatibility can happen on bugfix
+releases, per usual, but developers should consider if they can wait until the
+next feature release. We encourage developers to carefully weigh users' pain
+from the break in stream-compatibility against the improvements. One example
+of a worthwhile improvement would be to change algorithms for a significant
+increase in performance, such as moving from the `Box-Muller transform
+<https://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform>`_ method of
+Gaussian variate generation to the faster `Ziggurat algorithm
+<https://en.wikipedia.org/wiki/Ziggurat_algorithm>`_. An example of
+a discouraged improvement would be tweaking the Ziggurat tables just a little
+bit for a small performance improvement.
+
+.. note::
+
+ In particular, `default_rng` is allowed to change the default
+ `BitGenerator` that it uses (again, with *caution* and plenty of advance
+ warning).
+
+In general, `BitGenerator` classes have stronger guarantees of
+version-to-version stream compatibility. This allows them to be a firmer
+building block for downstream users that need it. Their limited API surface
+makes it easier for them to maintain this compatibility from version to version. See
+the docstrings of each `BitGenerator` class for their individual compatibility
+guarantees.
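Code that needs this firmer guarantee can pin the BitGenerator explicitly instead of relying on whatever `default_rng` currently uses; a sketch:

```python
import numpy as np

# Wrapping an explicit BitGenerator pins the bit stream to that
# algorithm, independent of what default_rng may choose in future
# releases.
rng = np.random.Generator(np.random.PCG64(seed=2023))
pinned = rng.standard_normal(4)

# An identically seeded PCG64 reproduces the same stream.
rng2 = np.random.Generator(np.random.PCG64(seed=2023))
assert np.array_equal(pinned, rng2.standard_normal(4))
```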
+
+The legacy `RandomState` and the :ref:`associated convenience functions
+<functions-in-numpy-random>` have a stricter version-to-version
+compatibility guarantee. For reasons outlined in :ref:`NEP 19 <NEP19>`, we made
+stronger promises about their version-to-version stability early in NumPy's
+development. There are still some limited use cases for this kind of
+compatibility (like generating data for tests), so we maintain as much
+compatibility as we can. There will be no more modifications to `RandomState`,
+not even to fix correctness bugs. There are a few gray areas where we can make
+minor fixes to keep `RandomState` working without segfaulting as NumPy's
+internals change, and some docstring fixes. However, the previously-mentioned
+caveats about the variability from machine to machine and build to build still
+apply to `RandomState` just as much as they do to `Generator`.
diff --git a/doc/source/reference/random/index.rst b/doc/source/reference/random/index.rst
index 83a27d80c..38555b133 100644
--- a/doc/source/reference/random/index.rst
+++ b/doc/source/reference/random/index.rst
@@ -7,210 +7,124 @@
Random sampling (:mod:`numpy.random`)
=====================================
-Numpy's random number routines produce pseudo random numbers using
-combinations of a `BitGenerator` to create sequences and a `Generator`
-to use those sequences to sample from different statistical distributions:
-
-* BitGenerators: Objects that generate random numbers. These are typically
- unsigned integer words filled with sequences of either 32 or 64 random bits.
-* Generators: Objects that transform sequences of random bits from a
- BitGenerator into sequences of numbers that follow a specific probability
- distribution (such as uniform, Normal or Binomial) within a specified
- interval.
-
-Since Numpy version 1.17.0 the Generator can be initialized with a
-number of different BitGenerators. It exposes many different probability
-distributions. See `NEP 19 <https://www.numpy.org/neps/
-nep-0019-rng-policy.html>`_ for context on the updated random Numpy number
-routines. The legacy `RandomState` random number routines are still
-available, but limited to a single BitGenerator. See :ref:`new-or-different`
-for a complete list of improvements and differences from the legacy
-``RandomState``.
-
-For convenience and backward compatibility, a single `RandomState`
-instance's methods are imported into the numpy.random namespace, see
-:ref:`legacy` for the complete list.
-
.. _random-quick-start:
Quick Start
-----------
-Call `default_rng` to get a new instance of a `Generator`, then call its
-methods to obtain samples from different distributions. By default,
-`Generator` uses bits provided by `PCG64` which has better statistical
-properties than the legacy `MT19937` used in `RandomState`.
-
-.. code-block:: python
-
- # Do this (new version)
- from numpy.random import default_rng
- rng = default_rng()
- vals = rng.standard_normal(10)
- more_vals = rng.standard_normal(10)
-
- # instead of this (legacy version)
- from numpy import random
- vals = random.standard_normal(10)
- more_vals = random.standard_normal(10)
-
-`Generator` can be used as a replacement for `RandomState`. Both class
-instances hold an internal `BitGenerator` instance to provide the bit
-stream, it is accessible as ``gen.bit_generator``. Some long-overdue API
-cleanup means that legacy and compatibility methods have been removed from
-`Generator`
-
-=================== ============== ============
-`RandomState` `Generator` Notes
-------------------- -------------- ------------
-``random_sample``, ``random`` Compatible with `random.random`
-``rand``
-------------------- -------------- ------------
-``randint``, ``integers`` Add an ``endpoint`` kwarg
-``random_integers``
-------------------- -------------- ------------
-``tomaxint`` removed Use ``integers(0, np.iinfo(np.int_).max,``
- ``endpoint=False)``
-------------------- -------------- ------------
-``seed`` removed Use `SeedSequence.spawn`
-=================== ============== ============
-
-See :ref:`new-or-different` for more information.
-
-Something like the following code can be used to support both ``RandomState``
-and ``Generator``, with the understanding that the interfaces are slightly
-different
-
-.. code-block:: python
-
- try:
- rng_integers = rng.integers
- except AttributeError:
- rng_integers = rng.randint
- a = rng_integers(1000)
-
-Seeds can be passed to any of the BitGenerators. The provided value is mixed
-via `SeedSequence` to spread a possible sequence of seeds across a wider
-range of initialization states for the BitGenerator. Here `PCG64` is used and
-is wrapped with a `Generator`.
-
-.. code-block:: python
-
- from numpy.random import Generator, PCG64
- rng = Generator(PCG64(12345))
- rng.standard_normal()
-
-Here we use `default_rng` to create an instance of `Generator` to generate a
-random float:
-
->>> import numpy as np
->>> rng = np.random.default_rng(12345)
->>> print(rng)
-Generator(PCG64)
->>> rfloat = rng.random()
->>> rfloat
-0.22733602246716966
->>> type(rfloat)
-<class 'float'>
-
-Here we use `default_rng` to create an instance of `Generator` to generate 3
-random integers between 0 (inclusive) and 10 (exclusive):
-
->>> import numpy as np
->>> rng = np.random.default_rng(12345)
->>> rints = rng.integers(low=0, high=10, size=3)
->>> rints
-array([6, 2, 7])
->>> type(rints[0])
-<class 'numpy.int64'>
-
-Introduction
-------------
-The new infrastructure takes a different approach to producing random numbers
-from the `RandomState` object. Random number generation is separated into
-two components, a bit generator and a random generator.
-
-The `BitGenerator` has a limited set of responsibilities. It manages state
-and provides functions to produce random doubles and random unsigned 32- and
-64-bit values.
-
-The `random generator <Generator>` takes the
-bit generator-provided stream and transforms them into more useful
-distributions, e.g., simulated normal random values. This structure allows
-alternative bit generators to be used with little code duplication.
-
-The `Generator` is the user-facing object that is nearly identical to the
-legacy `RandomState`. It accepts a bit generator instance as an argument.
-The default is currently `PCG64` but this may change in future versions.
-As a convenience NumPy provides the `default_rng` function to hide these
-details:
-
->>> from numpy.random import default_rng
->>> rng = default_rng(12345)
->>> print(rng)
-Generator(PCG64)
->>> print(rng.random())
-0.22733602246716966
-
-One can also instantiate `Generator` directly with a `BitGenerator` instance.
-
-To use the default `PCG64` bit generator, one can instantiate it directly and
-pass it to `Generator`:
-
->>> from numpy.random import Generator, PCG64
->>> rng = Generator(PCG64(12345))
->>> print(rng)
-Generator(PCG64)
-
-Similarly to use the older `MT19937` bit generator (not recommended), one can
-instantiate it directly and pass it to `Generator`:
-
->>> from numpy.random import Generator, MT19937
->>> rng = Generator(MT19937(12345))
->>> print(rng)
-Generator(MT19937)
-
-What's New or Different
-~~~~~~~~~~~~~~~~~~~~~~~
+The :mod:`numpy.random` module implements pseudo-random number generators
+(PRNGs or RNGs, for short) with the ability to draw samples from a variety of
+probability distributions. In general, users will create a `Generator` instance
+with `default_rng` and call the various methods on it to obtain samples from
+different distributions.
+
+::
+
+ >>> import numpy as np
+ >>> rng = np.random.default_rng()
+ # Generate one random float uniformly distributed over the range [0, 1)
+ >>> rng.random() #doctest: +SKIP
+ 0.06369197489564249 # may vary
+ # Generate an array of 10 numbers according to a unit Gaussian distribution.
+ >>> rng.standard_normal(10) #doctest: +SKIP
+ array([-0.31018314, -1.8922078 , -0.3628523 , -0.63526532, 0.43181166, # may vary
+ 0.51640373, 1.25693945, 0.07779185, 0.84090247, -2.13406828])
+ # Generate an array of 5 integers uniformly over the range [0, 10).
+ >>> rng.integers(low=0, high=10, size=5) #doctest: +SKIP
+ array([8, 7, 6, 2, 0]) # may vary
+
+Our RNGs are deterministic sequences and can be reproduced by specifying a seed
+integer to derive their initial state. By default, with no seed provided,
+`default_rng` will seed the RNG from nondeterministic data from the operating
+system and therefore generate different numbers each time. The pseudo-random
+sequences will be independent for all practical purposes, at least for those
+purposes for which our pseudo-randomness was good in the first place.
+
+::
+
+ >>> rng1 = np.random.default_rng()
+ >>> rng1.random() #doctest: +SKIP
+ 0.6596288841243357 # may vary
+ >>> rng2 = np.random.default_rng()
+ >>> rng2.random() #doctest: +SKIP
+ 0.11885628817151628 # may vary
+
.. warning::
- The Box-Muller method used to produce NumPy's normals is no longer available
- in `Generator`. It is not possible to reproduce the exact random
- values using Generator for the normal distribution or any other
- distribution that relies on the normal such as the `RandomState.gamma` or
- `RandomState.standard_t`. If you require bitwise backward compatible
- streams, use `RandomState`.
-
-* The Generator's normal, exponential and gamma functions use 256-step Ziggurat
- methods which are 2-10 times faster than NumPy's Box-Muller or inverse CDF
- implementations.
-* Optional ``dtype`` argument that accepts ``np.float32`` or ``np.float64``
- to produce either single or double precision uniform random variables for
- select distributions
-* Optional ``out`` argument that allows existing arrays to be filled for
- select distributions
-* All BitGenerators can produce doubles, uint64s and uint32s via CTypes
- (`PCG64.ctypes`) and CFFI (`PCG64.cffi`). This allows the bit generators
- to be used in numba.
-* The bit generators can be used in downstream projects via
- :ref:`Cython <random_cython>`.
-* `Generator.integers` is now the canonical way to generate integer
- random numbers from a discrete uniform distribution. The ``rand`` and
- ``randn`` methods are only available through the legacy `RandomState`.
- The ``endpoint`` keyword can be used to specify open or closed intervals.
- This replaces both ``randint`` and the deprecated ``random_integers``.
-* `Generator.random` is now the canonical way to generate floating-point
- random numbers, which replaces `RandomState.random_sample`,
- `RandomState.sample`, and `RandomState.ranf`. This is consistent with
- Python's `random.random`.
-* All BitGenerators in numpy use `SeedSequence` to convert seeds into
- initialized states.
-* The addition of an ``axis`` keyword argument to methods such as
- `Generator.choice`, `Generator.permutation`, and `Generator.shuffle`
- improves support for sampling from and shuffling multi-dimensional arrays.
-
-See :ref:`new-or-different` for a complete list of improvements and
-differences from the traditional ``Randomstate``.
+ The pseudo-random number generators implemented in this module are designed
+ for statistical modeling and simulation. They are not suitable for security
+ or cryptographic purposes. See the :py:mod:`secrets` module from the
+ standard library for such use cases.
+
+Seeds should be large positive integers. `default_rng` can take positive
+integers of any size. We recommend using very large, unique numbers to ensure
+that your seed is different from anyone else's. This is good practice to ensure
+that your results are statistically independent from theirs unless you are
+intentionally *trying* to reproduce their result. A convenient way to get
+such a seed number is to use :py:func:`secrets.randbits` to get an
+arbitrary 128-bit integer.
+
+::
+
+ >>> import secrets
+ >>> import numpy as np
+ >>> secrets.randbits(128) #doctest: +SKIP
+ 122807528840384100672342137672332424406 # may vary
+ >>> rng1 = np.random.default_rng(122807528840384100672342137672332424406)
+ >>> rng1.random()
+ 0.5363922081269535
+ >>> rng2 = np.random.default_rng(122807528840384100672342137672332424406)
+ >>> rng2.random()
+ 0.5363922081269535
+
+See the documentation on `default_rng` and `SeedSequence` for more advanced
+options for controlling the seed in specialized scenarios.
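For instance, here is a small sketch (based on the documented `default_rng` signature) showing that a sequence of integers and an explicit `SeedSequence` built from the same integers produce the same stream:

```python
import numpy as np

# Both forms feed the same entropy into SeedSequence, so the
# resulting generators produce identical streams.
ss = np.random.SeedSequence([1, 2, 3])
rng_a = np.random.default_rng(ss)
rng_b = np.random.default_rng([1, 2, 3])
print(rng_a.random() == rng_b.random())  # True
```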
+
+`Generator` and its associated infrastructure were introduced in NumPy version
+1.17.0. There is still a lot of code that uses the older `RandomState` and the
+functions in `numpy.random`. While there are no plans to remove them at this
+time, we do recommend transitioning to `Generator` when you can. The algorithms
+are faster, more flexible, and will receive more improvements in the future.
+For the most part, `Generator` can be used as a replacement for `RandomState`.
+See :ref:`legacy` for information on the legacy infrastructure,
+:ref:`new-or-different` for information on transitioning, and :ref:`NEP 19
+<NEP19>` for some of the reasoning for the transition.
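As an illustrative sketch of such a replacement (the method renames are covered in :ref:`new-or-different`):

```python
import numpy as np

# Legacy style: RandomState with randint.
rs = np.random.RandomState(0)
legacy = rs.randint(0, 10, size=3)

# Preferred style: Generator with integers (the randint replacement).
rng = np.random.default_rng(0)
modern = rng.integers(0, 10, size=3)

print(legacy.shape, modern.shape)  # (3,) (3,)
```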
+
+Design
+------
+
+Users primarily interact with `Generator` instances. Each `Generator` instance
+owns a `BitGenerator` instance that implements the core RNG algorithm. The
+`BitGenerator` has a limited set of responsibilities. It manages state and
+provides functions to produce random doubles and random unsigned 32- and 64-bit
+values.
+
+The `Generator` takes the stream provided by the `BitGenerator` and transforms
+it into more useful distributions, e.g., simulated normal random values. This
+structure allows alternative bit generators to be used with little code
+duplication.
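For example, a sketch of this composition, passing an explicit `BitGenerator` to the `Generator` constructor (the seed 12345 is arbitrary):

```python
from numpy.random import Generator, PCG64, MT19937

# The same Generator front end works with different core algorithms.
rng_pcg = Generator(PCG64(12345))
rng_mt = Generator(MT19937(12345))
print(type(rng_pcg.bit_generator).__name__)  # PCG64
print(type(rng_mt.bit_generator).__name__)   # MT19937
```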
+
+NumPy provides several `BitGenerator` classes that implement different RNG
+algorithms. `default_rng` currently uses `~PCG64` as the
+default `BitGenerator`. It has better statistical properties and performance
+than the `~MT19937` algorithm used in the legacy `RandomState`. See
+:ref:`random-bit-generators` for more details on the supported BitGenerators.
+
+`default_rng` and BitGenerators delegate the conversion of seeds into RNG
+states to `SeedSequence` internally. `SeedSequence` implements a sophisticated
+algorithm that intermediates between the user's input and the internal
+implementation details of each `BitGenerator` algorithm, each of which can
+require a different number of bits for its state. Importantly, it lets you mix
+arbitrary-sized integers, and arbitrary sequences of such integers, into the
+RNG state. This is a useful primitive for constructing
+a :ref:`flexible pattern for parallel RNG streams <seedsequence-spawn>`.
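A minimal sketch of that pattern, spawning independent child seeds from one parent `SeedSequence` (the seed value is arbitrary):

```python
import numpy as np

# One parent seed yields statistically independent child streams.
parent = np.random.SeedSequence(122807528840384100672342137672332424406)
children = parent.spawn(3)
streams = [np.random.default_rng(child) for child in children]
print([round(rng.random(), 3) for rng in streams])
```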
+
+For backward compatibility, we still maintain the legacy `RandomState` class.
+It continues to use the `~MT19937` algorithm by default, and old seeds continue
+to reproduce the same results. The convenience :ref:`functions-in-numpy-random`
+are still aliases to the methods on a single global `RandomState` instance. See
+:ref:`legacy` for the complete details. See :ref:`new-or-different` for
+a detailed comparison between `Generator` and `RandomState`.
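A brief sketch of that legacy behavior, relying only on the documented guarantee that old seeds reproduce old results and that the module-level functions share one global `RandomState`:

```python
import numpy as np

# An explicit RandomState and the module-level functions (aliases to a
# single global RandomState) produce the same stream for the same seed.
rs = np.random.RandomState(12345)
x = rs.random_sample()
np.random.seed(12345)
y = np.random.random_sample()
print(x == y)  # True
```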
Parallel Generation
~~~~~~~~~~~~~~~~~~~
@@ -235,6 +149,7 @@ Concepts
Legacy Generator (RandomState) <legacy>
BitGenerators, SeedSequences <bit_generators/index>
Upgrading PCG64 with PCG64DXSM <upgrading-pcg64>
+ compatibility
Features
--------
diff --git a/doc/source/reference/random/legacy.rst b/doc/source/reference/random/legacy.rst
index b1fce49a1..00921c477 100644
--- a/doc/source/reference/random/legacy.rst
+++ b/doc/source/reference/random/legacy.rst
@@ -52,7 +52,7 @@ using the state of the `RandomState`:
:exclude-members: __init__
Seeding and State
------------------
+=================
.. autosummary::
:toctree: generated/
@@ -62,7 +62,7 @@ Seeding and State
~RandomState.seed
Simple random data
-------------------
+==================
.. autosummary::
:toctree: generated/
@@ -75,7 +75,7 @@ Simple random data
~RandomState.bytes
Permutations
-------------
+============
.. autosummary::
:toctree: generated/
@@ -83,7 +83,7 @@ Permutations
~RandomState.permutation
Distributions
--------------
+==============
.. autosummary::
:toctree: generated/
@@ -123,8 +123,10 @@ Distributions
~RandomState.weibull
~RandomState.zipf
+.. _functions-in-numpy-random:
+
Functions in `numpy.random`
----------------------------
+===========================
Many of the RandomState methods above are exported as functions in
`numpy.random`. This usage is discouraged, as it is implemented via a global
`RandomState` instance, which is not advised on two counts:
@@ -133,8 +135,7 @@ Many of the RandomState methods above are exported as functions in
- It uses a `RandomState` rather than the more modern `Generator`.
-For backward compatible legacy reasons, we cannot change this. See
-:ref:`random-quick-start`.
+For backward compatible legacy reasons, we will not change this.
.. autosummary::
:toctree: generated/
diff --git a/doc/source/reference/random/new-or-different.rst b/doc/source/reference/random/new-or-different.rst
index 8f4a70540..9b5bf38e5 100644
--- a/doc/source/reference/random/new-or-different.rst
+++ b/doc/source/reference/random/new-or-different.rst
@@ -5,17 +5,9 @@
What's New or Different
-----------------------
-.. warning::
-
- The Box-Muller method used to produce NumPy's normals is no longer available
- in `Generator`. It is not possible to reproduce the exact random
- values using ``Generator`` for the normal distribution or any other
- distribution that relies on the normal such as the `Generator.gamma` or
- `Generator.standard_t`. If you require bitwise backward compatible
- streams, use `RandomState`, i.e., `RandomState.gamma` or
- `RandomState.standard_t`.
-
-Quick comparison of legacy :ref:`mtrand <legacy>` to the new `Generator`
+NumPy 1.17.0 introduced `Generator` as an improved replacement for
+the :ref:`legacy <legacy>` `RandomState`. Here is a quick comparison of the two
+implementations.
================== ==================== =============
Feature Older Equivalent Notes
@@ -44,21 +36,17 @@ Feature Older Equivalent Notes
``high`` interval endpoint
================== ==================== =============
-And in more detail:
-
-* Simulate from the complex normal distribution
- (`~.Generator.complex_normal`)
* The normal, exponential and gamma generators use 256-step Ziggurat
methods which are 2-10 times faster than NumPy's default implementation in
`~.Generator.standard_normal`, `~.Generator.standard_exponential` or
- `~.Generator.standard_gamma`.
-
+ `~.Generator.standard_gamma`. Because of the change in algorithms, it is not
+ possible to reproduce the exact random values using ``Generator`` for these
+ distributions or any distribution method that relies on them.
.. ipython:: python
- from numpy.random import Generator, PCG64
import numpy.random
- rng = Generator(PCG64())
+ rng = np.random.default_rng()
%timeit -n 1 rng.standard_normal(100000)
%timeit -n 1 numpy.random.standard_normal(100000)
@@ -74,18 +62,25 @@ And in more detail:
* `~.Generator.integers` is now the canonical way to generate integer
- random numbers from a discrete uniform distribution. The ``rand`` and
- ``randn`` methods are only available through the legacy `~.RandomState`.
- This replaces both ``randint`` and the deprecated ``random_integers``.
-* The Box-Muller method used to produce NumPy's normals is no longer available.
+ random numbers from a discrete uniform distribution. This replaces both
+ ``randint`` and the deprecated ``random_integers``.
+* The ``rand`` and ``randn`` methods are only available through the legacy
+ `~.RandomState`.
+* `Generator.random` is now the canonical way to generate floating-point
+ random numbers, which replaces `RandomState.random_sample`,
+ `sample`, and `ranf`, all of which were aliases. This is consistent with
+ Python's `random.random`.
* All bit generators can produce doubles, uint64s and
uint32s via CTypes (`~PCG64.ctypes`) and CFFI (`~PCG64.cffi`).
This allows these bit generators to be used in numba.
* The bit generators can be used in downstream projects via
Cython.
+* All bit generators use `SeedSequence` to :ref:`convert seed integers to
+ initialized states <seeding_and_entropy>`.
* Optional ``dtype`` argument that accepts ``np.float32`` or ``np.float64``
to produce either single or double precision uniform random variables for
- select distributions
+ select distributions. `~.Generator.integers` accepts a ``dtype`` argument
+ with any signed or unsigned integer dtype.
* Uniforms (`~.Generator.random` and `~.Generator.integers`)
* Normals (`~.Generator.standard_normal`)
@@ -94,9 +89,10 @@ And in more detail:
.. ipython:: python
- rng = Generator(PCG64(0))
- rng.random(3, dtype='d')
- rng.random(3, dtype='f')
+ rng = np.random.default_rng()
+ rng.random(3, dtype=np.float64)
+ rng.random(3, dtype=np.float32)
+ rng.integers(0, 256, size=3, dtype=np.uint8)
* Optional ``out`` argument that allows existing arrays to be filled for
select distributions
@@ -111,6 +107,7 @@ And in more detail:
.. ipython:: python
+ rng = np.random.default_rng()
existing = np.zeros(4)
rng.random(out=existing[:2])
print(existing)
@@ -121,9 +118,12 @@ And in more detail:
.. ipython:: python
- rng = Generator(PCG64(123456789))
+ rng = np.random.default_rng()
a = np.arange(12).reshape((3, 4))
a
rng.choice(a, axis=1, size=5)
rng.shuffle(a, axis=1) # Shuffle in-place
a
+
+* Added a method to sample from the complex normal distribution
+ (`~.Generator.complex_normal`)
diff --git a/doc/source/reference/routines.ctypeslib.rst b/doc/source/reference/routines.ctypeslib.rst
index c6127ca64..fe5ffb5a8 100644
--- a/doc/source/reference/routines.ctypeslib.rst
+++ b/doc/source/reference/routines.ctypeslib.rst
@@ -1,7 +1,7 @@
.. module:: numpy.ctypeslib
***********************************************************
-C-Types Foreign Function Interface (:mod:`numpy.ctypeslib`)
+C-Types foreign function interface (:mod:`numpy.ctypeslib`)
***********************************************************
.. currentmodule:: numpy.ctypeslib
diff --git a/doc/source/reference/routines.datetime.rst b/doc/source/reference/routines.datetime.rst
index 966ed5a47..72513882b 100644
--- a/doc/source/reference/routines.datetime.rst
+++ b/doc/source/reference/routines.datetime.rst
@@ -1,6 +1,6 @@
.. _routines.datetime:
-Datetime Support Functions
+Datetime support functions
**************************
.. currentmodule:: numpy
@@ -12,7 +12,7 @@ Datetime Support Functions
datetime_data
-Business Day Functions
+Business day functions
======================
.. currentmodule:: numpy
diff --git a/doc/source/reference/routines.io.rst b/doc/source/reference/routines.io.rst
index 2542b336f..1ec2ccb5e 100644
--- a/doc/source/reference/routines.io.rst
+++ b/doc/source/reference/routines.io.rst
@@ -83,7 +83,7 @@ Data sources
DataSource
-Binary Format Description
+Binary format description
-------------------------
.. autosummary::
:toctree: generated/
diff --git a/doc/source/reference/routines.ma.rst b/doc/source/reference/routines.ma.rst
index d503cc243..fd22a74aa 100644
--- a/doc/source/reference/routines.ma.rst
+++ b/doc/source/reference/routines.ma.rst
@@ -31,6 +31,7 @@ From existing data
ma.fromfunction
ma.MaskedArray.copy
+ ma.diagflat
Ones and zeros
@@ -72,6 +73,9 @@ Inspecting the array
ma.isMaskedArray
ma.isMA
ma.isarray
+ ma.isin
+ ma.in1d
+ ma.unique
ma.MaskedArray.all
@@ -394,6 +398,17 @@ Clipping and rounding
ma.MaskedArray.clip
ma.MaskedArray.round
+Set operations
+~~~~~~~~~~~~~~
+.. autosummary::
+ :toctree: generated/
+
+ ma.intersect1d
+ ma.setdiff1d
+ ma.setxor1d
+ ma.union1d
+
Miscellanea
~~~~~~~~~~~
diff --git a/doc/source/reference/routines.other.rst b/doc/source/reference/routines.other.rst
index 7b60545f1..769b3d910 100644
--- a/doc/source/reference/routines.other.rst
+++ b/doc/source/reference/routines.other.rst
@@ -59,3 +59,5 @@ Matlab-like Functions
disp
.. automodule:: numpy.exceptions
+
+.. automodule:: numpy.dtypes
diff --git a/doc/source/reference/simd/gen_features.py b/doc/source/reference/simd/gen_features.py
index 9a38ef5c9..b141e23d0 100644
--- a/doc/source/reference/simd/gen_features.py
+++ b/doc/source/reference/simd/gen_features.py
@@ -168,7 +168,7 @@ if __name__ == '__main__':
gen_path = path.join(
path.dirname(path.realpath(__file__)), "generated_tables"
)
- with open(path.join(gen_path, 'cpu_features.inc'), 'wt') as fd:
+ with open(path.join(gen_path, 'cpu_features.inc'), 'w') as fd:
fd.write(f'.. generated via {__file__}\n\n')
for arch in (
("x86", "PPC64", "PPC64LE", "ARMHF", "AARCH64", "S390X")
@@ -177,7 +177,7 @@ if __name__ == '__main__':
table = Features(arch, 'gcc').table()
fd.write(wrapper_section(title, table))
- with open(path.join(gen_path, 'compilers-diff.inc'), 'wt') as fd:
+ with open(path.join(gen_path, 'compilers-diff.inc'), 'w') as fd:
fd.write(f'.. generated via {__file__}\n\n')
for arch, cc_names in (
("x86", ("clang", "ICC", "MSVC")),
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 75decb2cd..b8a877924 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -6,6 +6,7 @@ Release notes
:maxdepth: 3
1.25.0 <release/1.25.0-notes>
+ 1.24.3 <release/1.24.3-notes>
1.24.2 <release/1.24.2-notes>
1.24.1 <release/1.24.1-notes>
1.24.0 <release/1.24.0-notes>
diff --git a/doc/source/release/1.24.0-notes.rst b/doc/source/release/1.24.0-notes.rst
index bcccd8c57..1c9e719b3 100644
--- a/doc/source/release/1.24.0-notes.rst
+++ b/doc/source/release/1.24.0-notes.rst
@@ -389,7 +389,7 @@ Users can modify the behavior of these warnings using ``np.errstate``.
Note that for float to int casts, the exact warnings that are given may
be platform dependent. For example::
- arr = np.full(100, value=1000, dtype=np.float64)
+ arr = np.full(100, fill_value=1000, dtype=np.float64)
arr.astype(np.int8)
May give a result equivalent to (the intermediate cast means no warning is
diff --git a/doc/source/release/1.24.3-notes.rst b/doc/source/release/1.24.3-notes.rst
new file mode 100644
index 000000000..1aaf79cb6
--- /dev/null
+++ b/doc/source/release/1.24.3-notes.rst
@@ -0,0 +1,49 @@
+.. currentmodule:: numpy
+
+==========================
+NumPy 1.24.3 Release Notes
+==========================
+NumPy 1.24.3 is a maintenance release that fixes bugs and regressions discovered after the
+1.24.2 release. The Python versions supported by this release are 3.8-3.11.
+
+Contributors
+============
+
+A total of 12 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Aleksei Nikiforov +
+* Alexander Heger
+* Bas van Beek
+* Bob Eldering
+* Brock Mendel
+* Charles Harris
+* Kyle Sunden
+* Peter Hawkins
+* Rohit Goswami
+* Sebastian Berg
+* Warren Weckesser
+* dependabot[bot]
+
+Pull requests merged
+====================
+
+A total of 17 pull requests were merged for this release.
+
+* `#23206 <https://github.com/numpy/numpy/pull/23206>`__: BUG: fix for f2py string scalars (#23194)
+* `#23207 <https://github.com/numpy/numpy/pull/23207>`__: BUG: datetime64/timedelta64 comparisons return NotImplemented
+* `#23208 <https://github.com/numpy/numpy/pull/23208>`__: MAINT: Pin matplotlib to version 3.6.3 for refguide checks
+* `#23221 <https://github.com/numpy/numpy/pull/23221>`__: DOC: Fix matplotlib error in documentation
+* `#23226 <https://github.com/numpy/numpy/pull/23226>`__: CI: Ensure submodules are initialized in gitpod.
+* `#23341 <https://github.com/numpy/numpy/pull/23341>`__: TYP: Replace duplicate reduce in ufunc type signature with reduceat.
+* `#23342 <https://github.com/numpy/numpy/pull/23342>`__: TYP: Remove duplicate CLIP/WRAP/RAISE in ``__init__.pyi``.
+* `#23343 <https://github.com/numpy/numpy/pull/23343>`__: TYP: Mark ``d`` argument to fftfreq and rfftfreq as optional...
+* `#23344 <https://github.com/numpy/numpy/pull/23344>`__: TYP: Add type annotations for comparison operators to MaskedArray.
+* `#23345 <https://github.com/numpy/numpy/pull/23345>`__: TYP: Remove some stray type-check-only imports of ``msort``
+* `#23370 <https://github.com/numpy/numpy/pull/23370>`__: BUG: Ensure like is only stripped for ``like=`` dispatched functions
+* `#23543 <https://github.com/numpy/numpy/pull/23543>`__: BUG: fix loading and storing big arrays on s390x
+* `#23544 <https://github.com/numpy/numpy/pull/23544>`__: MAINT: Bump larsoner/circleci-artifacts-redirector-action
+* `#23634 <https://github.com/numpy/numpy/pull/23634>`__: BUG: Ignore invalid and overflow warnings in masked setitem
+* `#23635 <https://github.com/numpy/numpy/pull/23635>`__: BUG: Fix masked array raveling when ``order="A"`` or ``order="K"``
+* `#23636 <https://github.com/numpy/numpy/pull/23636>`__: MAINT: Update conftest for newer hypothesis versions
+* `#23637 <https://github.com/numpy/numpy/pull/23637>`__: BUG: Fix bug in parsing F77 style string arrays.
diff --git a/doc/source/user/basics.creation.rst b/doc/source/user/basics.creation.rst
index 40b4e378d..3ee501889 100644
--- a/doc/source/user/basics.creation.rst
+++ b/doc/source/user/basics.creation.rst
@@ -75,8 +75,8 @@ the computation, here ``uint32`` and ``int32`` can both be represented in
as ``int64``.
The default NumPy behavior is to create arrays in either 32 or 64-bit signed
-integers (platform dependent and matches C int size) or double precision
-floating point numbers, int32/int64 and float, respectively. If you expect your
+integers (platform dependent and matches C ``long`` size) or double precision
+floating point numbers. If you expect your
integer arrays to be a specific type, then you need to specify the dtype while
you create the array.
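A short sketch of the defaults described above (the integer default varies by platform, so only the float default and an explicit dtype are shown):

```python
import numpy as np

# Floating-point input defaults to double precision.
a = np.array([1.0, 2.0])
print(a.dtype)  # float64

# The integer default is platform dependent; pass dtype= to pin it down.
b = np.array([1, 2, 3], dtype=np.int16)
print(b.dtype)  # int16
```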
diff --git a/doc/source/user/basics.indexing.rst b/doc/source/user/basics.indexing.rst
index b140e223a..17fc9c0cc 100644
--- a/doc/source/user/basics.indexing.rst
+++ b/doc/source/user/basics.indexing.rst
@@ -481,12 +481,13 @@ tuple (of length :attr:`obj.ndim <ndarray.ndim>`) of integer index
arrays showing the :py:data:`True` elements of *obj*. However, it is
faster when ``obj.shape == x.shape``.
-If ``obj.ndim == x.ndim``, ``x[obj]`` returns a 1-dimensional array
-filled with the elements of *x* corresponding to the :py:data:`True`
-values of *obj*. The search order will be :term:`row-major`,
-C-style. If *obj* has :py:data:`True` values at entries that are outside
-of the bounds of *x*, then an index error will be raised. If *obj* is
-smaller than *x* it is identical to filling it with :py:data:`False`.
+If ``obj.ndim == x.ndim``, ``x[obj]``
+returns a 1-dimensional array filled with the elements of *x*
+corresponding to the :py:data:`True` values of *obj*. The search order
+will be :term:`row-major`, C-style. An index error will be raised if
+the shape of *obj* does not match the corresponding dimensions of *x*,
+regardless of whether those values are :py:data:`True` or
+:py:data:`False`.
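A short sketch of both cases described above:

```python
import numpy as np

x = np.arange(6).reshape(2, 3)
obj = np.array([[True, False, True],
                [False, True, False]])

# Matching shapes: a 1-D result in row-major (C-style) search order.
print(x[obj])  # [0 2 4]

# A mismatched boolean index raises IndexError regardless of its values.
try:
    x[np.array([[True, False], [False, True]])]
except IndexError:
    print("IndexError")
```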
A common use case for this is filtering for desired element values.
For example, one may wish to select all entries from an array which
diff --git a/doc/source/user/howtos_index.rst b/doc/source/user/howtos_index.rst
index 4a0a22900..ca30f7e91 100644
--- a/doc/source/user/howtos_index.rst
+++ b/doc/source/user/howtos_index.rst
@@ -1,7 +1,7 @@
.. _howtos:
################
-NumPy How Tos
+NumPy how-tos
################
These documents are intended as recipes to common tasks using NumPy. For
diff --git a/doc/source/user/theory.broadcasting.rst b/doc/source/user/theory.broadcasting.rst
index a4973e4e6..f277d4afd 100644
--- a/doc/source/user/theory.broadcasting.rst
+++ b/doc/source/user/theory.broadcasting.rst
@@ -1,7 +1,7 @@
:orphan:
===========================
-Array Broadcasting in Numpy
+Array broadcasting in NumPy
===========================
..