Diffstat (limited to 'doc')
71 files changed, 1650 insertions, 292 deletions
diff --git a/doc/TESTS.rst.txt b/doc/TESTS.rst.txt index d048a4569..0d8137f4a 100644 --- a/doc/TESTS.rst.txt +++ b/doc/TESTS.rst.txt @@ -139,6 +139,21 @@ originally written without unit tests, there are still several modules that don't have tests yet. Please feel free to choose one of these modules and develop tests for it. +Using C code in tests +--------------------- + +NumPy exposes a rich :ref:`C-API<c-api>` . These are tested using c-extension +modules written "as-if" they know nothing about the internals of NumPy, rather +using the official C-API interfaces only. Examples of such modules are tests +for a user-defined ``rational`` dtype in ``_rational_tests`` or the ufunc +machinery tests in ``_umath_tests`` which are part of the binary distribution. +Starting from version 1.21, you can also write snippets of C code in tests that +will be compiled locally into c-extension modules and loaded into python. + +.. currentmodule:: numpy.testing.extbuild + +.. autofunction:: build_and_import_extension + Labeling tests -------------- diff --git a/doc/changelog/1.21.3-changelog.rst b/doc/changelog/1.21.3-changelog.rst new file mode 100644 index 000000000..767794721 --- /dev/null +++ b/doc/changelog/1.21.3-changelog.rst @@ -0,0 +1,28 @@ + +Contributors +============ + +A total of 7 people contributed to this release. People with a "+" by their +names contributed a patch for the first time. + +* Aaron Meurer +* Bas van Beek +* Charles Harris +* Developer-Ecosystem-Engineering + +* Kevin Sheppard +* Sebastian Berg +* Warren Weckesser + +Pull requests merged +==================== + +A total of 8 pull requests were merged for this release. + +* `#19745 <https://github.com/numpy/numpy/pull/19745>`__: ENH: Add dtype-support to 3 `generic`/`ndarray` methods +* `#19955 <https://github.com/numpy/numpy/pull/19955>`__: BUG: Resolve Divide by Zero on Apple silicon + test failures... +* `#19958 <https://github.com/numpy/numpy/pull/19958>`__: MAINT: Mark type-check-only ufunc subclasses as ufunc aliases... +* `#19994 <https://github.com/numpy/numpy/pull/19994>`__: BUG: np.tan(np.inf) test failure +* `#20080 <https://github.com/numpy/numpy/pull/20080>`__: BUG: Correct incorrect advance in PCG with emulated int128 +* `#20081 <https://github.com/numpy/numpy/pull/20081>`__: BUG: Fix NaT handling in the PyArray_CompareFunc for datetime... +* `#20082 <https://github.com/numpy/numpy/pull/20082>`__: DOC: Ensure that we add documentation also as to the dict for... +* `#20106 <https://github.com/numpy/numpy/pull/20106>`__: BUG: core: result_type(0, np.timedelta64(4)) would seg. fault. diff --git a/doc/changelog/1.21.4-changelog.rst b/doc/changelog/1.21.4-changelog.rst new file mode 100644 index 000000000..3452627c0 --- /dev/null +++ b/doc/changelog/1.21.4-changelog.rst @@ -0,0 +1,29 @@ + +Contributors +============ + +A total of 7 people contributed to this release. People with a "+" by their +names contributed a patch for the first time. + +* Bas van Beek +* Charles Harris +* Isuru Fernando +* Matthew Brett +* Sayed Adel +* Sebastian Berg +* 傅立业(Chris Fu) + + +Pull requests merged +==================== + +A total of 9 pull requests were merged for this release. 
+ +* `#20278 <https://github.com/numpy/numpy/pull/20278>`__: BUG: Fix shadowed reference of ``dtype`` in type stub +* `#20293 <https://github.com/numpy/numpy/pull/20293>`__: BUG: Fix headers for universal2 builds +* `#20294 <https://github.com/numpy/numpy/pull/20294>`__: BUG: ``VOID_nonzero`` could sometimes mutate alignment flag +* `#20295 <https://github.com/numpy/numpy/pull/20295>`__: BUG: Do not use nonzero fastpath on unaligned arrays +* `#20296 <https://github.com/numpy/numpy/pull/20296>`__: BUG: Distutils patch to allow for 2 as a minor version (!) +* `#20297 <https://github.com/numpy/numpy/pull/20297>`__: BUG, SIMD: Fix 64-bit/8-bit integer division by a scalar +* `#20298 <https://github.com/numpy/numpy/pull/20298>`__: BUG, SIMD: Workaround broadcasting SIMD 64-bit integers on MSVC... +* `#20300 <https://github.com/numpy/numpy/pull/20300>`__: REL: Prepare for the NumPy 1.21.4 release. +* `#20302 <https://github.com/numpy/numpy/pull/20302>`__: TST: Fix a ``Arrayterator`` typing test failure diff --git a/doc/neps/nep-0013-ufunc-overrides.rst b/doc/neps/nep-0013-ufunc-overrides.rst index 2f455e9b4..c132113db 100644 --- a/doc/neps/nep-0013-ufunc-overrides.rst +++ b/doc/neps/nep-0013-ufunc-overrides.rst @@ -556,7 +556,7 @@ in turn immediately raises :exc:`TypeError`, because one of its operands ``arr.__array_ufunc__``, which will return :obj:`NotImplemented`, which we catch. -.. note :: the reason for not allowing in-place operations to return +.. note:: the reason for not allowing in-place operations to return :obj:`NotImplemented` is that these cannot generically be replaced by a simple reverse operation: most array operations assume the contents of the instance are changed in-place, and do not expect a new diff --git a/doc/neps/nep-0027-zero-rank-arrarys.rst b/doc/neps/nep-0027-zero-rank-arrarys.rst index 4515cf96f..eef4bcacc 100644 --- a/doc/neps/nep-0027-zero-rank-arrarys.rst +++ b/doc/neps/nep-0027-zero-rank-arrarys.rst @@ -10,7 +10,7 @@ NEP 27 — Zero rank arrays :Created: 2006-06-10 :Resolution: https://mail.python.org/pipermail/numpy-discussion/2018-October/078824.html -.. note :: +.. note:: NumPy has both zero rank arrays and scalars. This design document, adapted from a `2006 wiki entry`_, describes what zero rank arrays are and why they diff --git a/doc/neps/nep-0047-array-api-standard.rst b/doc/neps/nep-0047-array-api-standard.rst index 3e63602cc..53b8e35b0 100644 --- a/doc/neps/nep-0047-array-api-standard.rst +++ b/doc/neps/nep-0047-array-api-standard.rst @@ -338,9 +338,10 @@ the options already present in NumPy are: Adding support for DLPack to NumPy entails: -- Adding a ``ndarray.__dlpack__`` method. -- Adding a ``from_dlpack`` function, which takes as input an object - supporting ``__dlpack__``, and returns an ``ndarray``. +- Adding a ``ndarray.__dlpack__()`` method which returns a ``dlpack`` C + structure wrapped in a ``PyCapsule``. +- Adding a ``np._from_dlpack(obj)`` function, where ``obj`` supports + ``__dlpack__()``, and returns an ``ndarray``. DLPack is currently a ~200 LoC header, and is meant to be included directly, so no external dependency is needed. Implementation should be straightforward. 
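The NEP text above describes the two DLPack entry points only in prose, so a minimal sketch may help; it assumes a NumPy build that already ships ``ndarray.__dlpack__()`` and the provisional ``np._from_dlpack`` helper named in the NEP (the final public name may differ between releases):

.. code-block:: python

    import numpy as np

    x = np.arange(5.0)

    # ``__dlpack__()`` wraps the DLPack C struct in a PyCapsule
    capsule = x.__dlpack__()

    # ``np._from_dlpack`` accepts any object exposing ``__dlpack__``
    # and returns an ndarray viewing the same memory (zero-copy)
    y = np._from_dlpack(x)
    y[0] = 42.0
    assert x[0] == 42.0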
diff --git a/doc/neps/nep-0049.rst b/doc/neps/nep-0049.rst index 51a3f11b1..3bd1d102c 100644 --- a/doc/neps/nep-0049.rst +++ b/doc/neps/nep-0049.rst @@ -3,10 +3,10 @@ NEP 49 — Data allocation strategies =================================== :Author: Matti Picus -:Status: Draft +:Status: Final :Type: Standards Track :Created: 2021-04-18 -:Resolution: http://numpy-discussion.10968.n7.nabble.com/NEP-49-Data-allocation-strategies-tt49185.html +:Resolution: https://mail.python.org/archives/list/numpy-discussion@python.org/thread/YZ3PNTXZUT27B6ITFAD3WRSM3T3SRVK4/#PKYXCTG4R5Q6LIRZC4SEWLNBM6GLRF26 Abstract @@ -93,19 +93,30 @@ High level design Users who wish to change the NumPy data memory management routines will use :c:func:`PyDataMem_SetHandler`, which uses a :c:type:`PyDataMem_Handler` -structure to hold pointers to functions used to manage the data memory. +structure to hold pointers to functions used to manage the data memory. In +order to allow lifetime management of the ``context``, the structure is wrapped +in a ``PyCapsule``. Since a call to ``PyDataMem_SetHandler`` will change the default functions, but that function may be called during the lifetime of an ``ndarray`` object, each -``ndarray`` will carry with it the ``PyDataMem_Handler`` struct used at the -time of its instantiation, and these will be used to reallocate or free the -data memory of the instance. Internally NumPy may use ``memcpy`` or ``memset`` -on the pointer to the data memory. +``ndarray`` will carry with it the ``PyDataMem_Handler``-wrapped PyCapsule used +at the time of its instantiation, and these will be used to reallocate or free +the data memory of the instance. Internally NumPy may use ``memcpy`` or +``memset`` on the pointer to the data memory. The name of the handler will be exposed on the python level via a ``numpy.core.multiarray.get_handler_name(arr)`` function. If called as ``numpy.core.multiarray.get_handler_name()`` it will return the name of the -global handler that will be used to allocate data for the next new `ndarrray`. +handler that will be used to allocate data for the next new `ndarrray`. + +The version of the handler will be exposed on the python level via a +``numpy.core.multiarray.get_handler_version(arr)`` function. If called as +``numpy.core.multiarray.get_handler_version()`` it will return the version of the +handler that will be used to allocate data for the next new `ndarrray`. + +The version, currently 1, allows for future enhancements to the +``PyDataMemAllocator``. If fields are added, they must be added to the end. + NumPy C-API functions ===================== @@ -117,7 +128,8 @@ NumPy C-API functions .. code-block:: c typedef struct { - char name[128]; /* multiple of 64 to keep the struct aligned */ + char name[127]; /* multiple of 64 to keep the struct aligned */ + uint8_t version; /* currently 1 */ PyDataMemAllocator allocator; } PyDataMem_Handler; @@ -150,20 +162,19 @@ NumPy C-API functions 15780_ and 15788_ but has not yet been resolved. When it is this NEP should be revisited. -.. c:function:: const PyDataMem_Handler * PyDataMem_SetHandler(PyDataMem_Handler *handler) +.. c:function:: PyObject * PyDataMem_SetHandler(PyObject *handler) Sets a new allocation policy. If the input value is ``NULL``, will reset - the policy to the default. Returns the previous policy, ``NULL`` if the - previous policy was the default. We wrap the user-provided functions + the policy to the default. Return the previous policy, or + return NULL if an error has occurred. 
We wrap the user-provided so they will still call the Python and NumPy memory management callback hooks. All the function pointers must be filled in, ``NULL`` is not accepted. -.. c:function:: const PyDataMem_Handler * PyDataMem_GetHandler(PyArrayObject *obj) +.. c:function:: const PyObject * PyDataMem_GetHandler() - Return the ``PyDataMem_Handler`` used by the - ``PyArrayObject``. If ``NULL``, return the handler - that will be used to allocate data for the next ``PyArrayObject``. + Return the current policy that will be used to allocate data for the + next ``PyArrayObject``. On failure, return ``NULL``. ``PyDataMem_Handler`` thread safety and lifetime ================================================ @@ -278,6 +289,7 @@ the ``sz`` argument is correct. static PyDataMem_Handler new_handler = { "secret_data_allocator", + 1, { &new_handler_ctx, shift_alloc, /* malloc */ diff --git a/doc/release/upcoming_changes/17530.improvement.rst b/doc/release/upcoming_changes/17530.improvement.rst deleted file mode 100644 index 07a23f0e5..000000000 --- a/doc/release/upcoming_changes/17530.improvement.rst +++ /dev/null @@ -1,5 +0,0 @@ -`ctypeslib.load_library` can now take any path-like object ------------------------------------------------------------------------ -All parameters in the can now take any :term:`python:path-like object`. -This includes the likes of strings, bytes and objects implementing the -:meth:`__fspath__<os.PathLike.__fspath__>` protocol. diff --git a/doc/release/upcoming_changes/18536.improvement.rst b/doc/release/upcoming_changes/18536.improvement.rst deleted file mode 100644 index 8693916db..000000000 --- a/doc/release/upcoming_changes/18536.improvement.rst +++ /dev/null @@ -1,7 +0,0 @@ -Add ``smallest_normal`` and ``smallest_subnormal`` attributes to `finfo` -------------------------------------------------------------------------- - -The attributes ``smallest_normal`` and ``smallest_subnormal`` are available as -an extension of `finfo` class for any floating-point data type. To use these -new attributes, write ``np.finfo(np.float64).smallest_normal`` or -``np.finfo(np.float64).smallest_subnormal``. diff --git a/doc/release/upcoming_changes/18585.new_feature.rst b/doc/release/upcoming_changes/18585.new_feature.rst deleted file mode 100644 index bb83d755c..000000000 --- a/doc/release/upcoming_changes/18585.new_feature.rst +++ /dev/null @@ -1,15 +0,0 @@ -Implementation of the NEP 47 (adopting the array API standard) --------------------------------------------------------------- - -An initial implementation of `NEP 47`_ (adoption the array API standard) has -been added as ``numpy.array_api``. The implementation is experimental and will -issue a UserWarning on import, as the `array API standard -<https://data-apis.org/array-api/latest/index.html>`_ is still in draft state. -``numpy.array_api`` is a conforming implementation of the array API standard, -which is also minimal, meaning that only those functions and behaviors that -are required by the standard are implemented (see the NEP for more info). -Libraries wishing to make use of the array API standard are encouraged to use -``numpy.array_api`` to check that they are only using functionality that is -guaranteed to be present in standard conforming implementations. - -.. 
_`NEP 47`: https://numpy.org/neps/nep-0047-array-api-standard.html diff --git a/doc/release/upcoming_changes/18884.new_feature.rst b/doc/release/upcoming_changes/18884.new_feature.rst deleted file mode 100644 index 41503b00e..000000000 --- a/doc/release/upcoming_changes/18884.new_feature.rst +++ /dev/null @@ -1,7 +0,0 @@ -Generate C/C++ API reference documentation from comments blocks is now possible -------------------------------------------------------------------------------- -This feature depends on Doxygen_ in the generation process and on Breathe_ -to integrate it with Sphinx. - -.. _`Doxygen`: https://www.doxygen.nl/index.html -.. _`Breathe`: https://breathe.readthedocs.io/en/latest/ diff --git a/doc/release/upcoming_changes/19062.new_feature.rst b/doc/release/upcoming_changes/19062.new_feature.rst deleted file mode 100644 index 171715568..000000000 --- a/doc/release/upcoming_changes/19062.new_feature.rst +++ /dev/null @@ -1,21 +0,0 @@ -Assign the platform-specific ``c_intp`` precision via a mypy plugin -------------------------------------------------------------------- - -The mypy_ plugin, introduced in `numpy/numpy#17843`_, has again been expanded: -the plugin now is now responsible for setting the platform-specific precision -of `numpy.ctypeslib.c_intp`, the latter being used as data type for various -`numpy.ndarray.ctypes` attributes. - -Without the plugin, aforementioned type will default to `ctypes.c_int64`. - -To enable the plugin, one must add it to their mypy `configuration file`_: - -.. code-block:: ini - - [mypy] - plugins = numpy.typing.mypy_plugin - - -.. _mypy: http://mypy-lang.org/ -.. _configuration file: https://mypy.readthedocs.io/en/stable/config_file.html -.. _`numpy/numpy#17843`: https://github.com/numpy/numpy/pull/17843 diff --git a/doc/release/upcoming_changes/19135.change.rst b/doc/release/upcoming_changes/19135.change.rst deleted file mode 100644 index 0b900a16a..000000000 --- a/doc/release/upcoming_changes/19135.change.rst +++ /dev/null @@ -1,10 +0,0 @@ -Removed floor division support for complex types ------------------------------------------------- - -Floor division of complex types will now result in a `TypeError` - -.. code-block:: python - - >>> a = np.arange(10) + 1j* np.arange(10) - >>> a // 1 - TypeError: ufunc 'floor_divide' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' diff --git a/doc/release/upcoming_changes/19151.improvement.rst b/doc/release/upcoming_changes/19151.improvement.rst deleted file mode 100644 index 2108b9c4f..000000000 --- a/doc/release/upcoming_changes/19151.improvement.rst +++ /dev/null @@ -1,6 +0,0 @@ -`numpy.linalg.qr` accepts stacked matrices as inputs ----------------------------------------------------- - -`numpy.linalg.qr` is able to produce results for stacked matrices as inputs. -Moreover, the implementation of QR decomposition has been shifted to C -from Python. diff --git a/doc/release/upcoming_changes/19211.new_feature.rst b/doc/release/upcoming_changes/19211.new_feature.rst deleted file mode 100644 index 40e42387c..000000000 --- a/doc/release/upcoming_changes/19211.new_feature.rst +++ /dev/null @@ -1,7 +0,0 @@ -``keepdims`` optional argument added to `numpy.argmin`, `numpy.argmax` ----------------------------------------------------------------------- - -``keepdims`` argument is added to `numpy.argmin`, `numpy.argmax`. -If set to ``True``, the axes which are reduced are left in the result as dimensions with size one. 
-The resulting array has the same number of dimensions and will broadcast with the -input array. diff --git a/doc/release/upcoming_changes/19259.c_api.rst b/doc/release/upcoming_changes/19259.c_api.rst deleted file mode 100644 index dac9f520a..000000000 --- a/doc/release/upcoming_changes/19259.c_api.rst +++ /dev/null @@ -1,12 +0,0 @@ -Masked inner-loops cannot be customized anymore ------------------------------------------------ -The masked inner-loop selector is now never used. A warning -will be given in the unlikely event that it was customized. - -We do not expect that any code uses this. If you do use it, -you must unset the selector on newer NumPy version. -Please also contact the NumPy developers, we do anticipate -providing a new, more specific, mechanism. - -The customization was part of a never-implemented feature to allow -for faster masked operations. diff --git a/doc/release/upcoming_changes/19356.change.rst b/doc/release/upcoming_changes/19356.change.rst deleted file mode 100644 index 3c5ef4a91..000000000 --- a/doc/release/upcoming_changes/19356.change.rst +++ /dev/null @@ -1,7 +0,0 @@ -`numpy.vectorize` functions now produce the same output class as the base function ----------------------------------------------------------------------------------- -When a function that respects `numpy.ndarray` subclasses is vectorized using -`numpy.vectorize`, the vectorized function will now be subclass-safe -also for cases that a signature is given (i.e., when creating a ``gufunc``): -the output class will be the same as that returned by the first call to -the underlying function. diff --git a/doc/release/upcoming_changes/19459.new_feature.rst b/doc/release/upcoming_changes/19459.new_feature.rst deleted file mode 100644 index aecae670f..000000000 --- a/doc/release/upcoming_changes/19459.new_feature.rst +++ /dev/null @@ -1,4 +0,0 @@ -The ``ndim`` and ``axis`` attributes have been added to `numpy.AxisError` -------------------------------------------------------------------------- -The ``ndim`` and ``axis`` parameters are now also stored as attributes -within each `numpy.AxisError` instance. diff --git a/doc/release/upcoming_changes/19462.change.rst b/doc/release/upcoming_changes/19462.change.rst deleted file mode 100644 index 8fbadb394..000000000 --- a/doc/release/upcoming_changes/19462.change.rst +++ /dev/null @@ -1,3 +0,0 @@ -OpenBLAS v0.3.17 ----------------- -Update the OpenBLAS used in testing and in wheels to v0.3.17 diff --git a/doc/release/upcoming_changes/19478.performance.rst b/doc/release/upcoming_changes/19478.performance.rst deleted file mode 100644 index 6a389c20e..000000000 --- a/doc/release/upcoming_changes/19478.performance.rst +++ /dev/null @@ -1,11 +0,0 @@ -Vectorize umath module using AVX-512 -------------------------------------- - -By leveraging Intel Short Vector Math Library (SVML), 18 umath functions -(``exp2``, ``log2``, ``log10``, ``expm1``, ``log1p``, ``cbrt``, ``sin``, -``cos``, ``tan``, ``arcsin``, ``arccos``, ``arctan``, ``sinh``, ``cosh``, -``tanh``, ``arcsinh``, ``arccosh``, ``arctanh``) are vectorized using AVX-512 -instruction set for both single and double precision implementations. This -change is currently enabled only for Linux users and on processors with -AVX-512 instruction set. It provides an average speed up of 32x and 14x for -single and double precision functions respectively. 
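As a short illustration of the ``keepdims`` behaviour described in the ``argmin``/``argmax`` release note above — a sketch assuming a NumPy version where the argument is available (1.22 or later):

.. code-block:: python

    >>> import numpy as np
    >>> a = np.arange(6).reshape(2, 3)
    >>> np.argmax(a, axis=1)
    array([2, 2])
    >>> np.argmax(a, axis=1, keepdims=True)  # result broadcasts against ``a``
    array([[2],
           [2]])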
diff --git a/doc/release/upcoming_changes/19479.compatibility.rst b/doc/release/upcoming_changes/19479.compatibility.rst deleted file mode 100644 index 83533a305..000000000 --- a/doc/release/upcoming_changes/19479.compatibility.rst +++ /dev/null @@ -1,7 +0,0 @@ -Distutils forces strict floating point model on clang ------------------------------------------------------ -NumPy now sets the ``-ftrapping-math`` option on clang to enforce correct -floating point error handling for universal functions. -Clang defaults to non-IEEE and C99 conform behaviour otherwise. -This change (using the equivalent but newer ``-ffp-exception-behavior=strict``) -was attempted in NumPy 1.21, but was effectively never used. diff --git a/doc/release/upcoming_changes/19513.new_feature.rst b/doc/release/upcoming_changes/19513.new_feature.rst deleted file mode 100644 index 5f945cea2..000000000 --- a/doc/release/upcoming_changes/19513.new_feature.rst +++ /dev/null @@ -1,4 +0,0 @@ -Preliminary support for `windows/arm64` target ----------------------------------------------- -``numpy`` added support for windows/arm64 target. Please note -``OpenBLAS`` support is not yet available for windows/arm64 target. diff --git a/doc/release/upcoming_changes/19527.new_feature.rst b/doc/release/upcoming_changes/19527.new_feature.rst deleted file mode 100644 index 3967f1841..000000000 --- a/doc/release/upcoming_changes/19527.new_feature.rst +++ /dev/null @@ -1,3 +0,0 @@ -Added support for LoongArch ------------------------------------------------- -LoongArch is a new instruction set, numpy compilation failure on LoongArch architecture, so add the commit. diff --git a/doc/release/upcoming_changes/19539.expired.rst b/doc/release/upcoming_changes/19539.expired.rst deleted file mode 100644 index 6e94f175d..000000000 --- a/doc/release/upcoming_changes/19539.expired.rst +++ /dev/null @@ -1,2 +0,0 @@ -* Using the strings ``"Bytes0"``, ``"Datetime64"``, ``"Str0"``, ``"Uint32"``, - and ``"Uint64"`` as a dtype will now raise a ``TypeError``.
\ No newline at end of file diff --git a/doc/release/upcoming_changes/19615.expired.rst b/doc/release/upcoming_changes/19615.expired.rst deleted file mode 100644 index 4e02771e3..000000000 --- a/doc/release/upcoming_changes/19615.expired.rst +++ /dev/null @@ -1,8 +0,0 @@ -Expired deprecations for ``loads``, ``ndfromtxt``, and ``mafromtxt`` in npyio ------------------------------------------------------------------------------ - -``numpy.loads`` was deprecated in v1.15, with the recommendation that users -use `pickle.loads` instead. -``ndfromtxt`` and ``mafromtxt`` were both deprecated in v1.17 - users should -use `numpy.genfromtxt` instead with the appropriate value for the -``usemask`` parameter. diff --git a/doc/release/upcoming_changes/19665.change.rst b/doc/release/upcoming_changes/19665.change.rst deleted file mode 100644 index 2c2315dd2..000000000 --- a/doc/release/upcoming_changes/19665.change.rst +++ /dev/null @@ -1,4 +0,0 @@ -Python 3.7 is no longer supported ---------------------------------- -Python support has been dropped. This is rather strict, there are -changes that require Python >=3.8. diff --git a/doc/release/upcoming_changes/19680.improvement.rst b/doc/release/upcoming_changes/19680.improvement.rst deleted file mode 100644 index 1a2a3496b..000000000 --- a/doc/release/upcoming_changes/19680.improvement.rst +++ /dev/null @@ -1,5 +0,0 @@ -`numpy.fromregex` now accepts ``os.PathLike`` implementations -------------------------------------------------------------- - -`numpy.fromregex` now accepts objects implementing the `__fspath__<os.PathLike>` -protocol, *e.g.* `pathlib.Path`. diff --git a/doc/release/upcoming_changes/19687.change.rst b/doc/release/upcoming_changes/19687.change.rst deleted file mode 100644 index c7f7512b6..000000000 --- a/doc/release/upcoming_changes/19687.change.rst +++ /dev/null @@ -1,8 +0,0 @@ -str/repr of complex dtypes now include space after punctuation --------------------------------------------------------------- - -The repr of ``np.dtype({"names": ["a"], "formats": [int], "offsets": [2]})`` is -now ``dtype({'names': ['a'], 'formats': ['<i8'], 'offsets': [2], 'itemsize': 10})``, -whereas spaces where previously omitted after colons and between fields. - -The old behavior can be restored via ``np.set_printoptions(legacy="1.21")``. diff --git a/doc/release/upcoming_changes/19754.new_feature.rst b/doc/release/upcoming_changes/19754.new_feature.rst deleted file mode 100644 index 4e91e4cb3..000000000 --- a/doc/release/upcoming_changes/19754.new_feature.rst +++ /dev/null @@ -1,7 +0,0 @@ -A ``.clang-format`` file has been added ---------------------------------------- -Clang-format is a C/C++ code formatter, together with the added -``.clang-format`` file, it produces code close enough to the NumPy -C_STYLE_GUIDE for general use. Clang-format version 12+ is required -due to the use of several new features, it is available in -Fedora 34 and Ubuntu Focal among other distributions. diff --git a/doc/release/upcoming_changes/19803.new_feature.rst b/doc/release/upcoming_changes/19803.new_feature.rst deleted file mode 100644 index 942325822..000000000 --- a/doc/release/upcoming_changes/19803.new_feature.rst +++ /dev/null @@ -1,14 +0,0 @@ -``is_integer`` is now available to `numpy.floating` and `numpy.integer` ------------------------------------------------------------------------ -Based on its counterpart in `float` and `int`, the numpy floating point and -integer types now support `~float.is_integer`. 
Returns ``True`` if the -number is finite with integral value, and ``False`` otherwise. - -.. code-block:: python - - >>> np.float32(-2.0).is_integer() - True - >>> np.float64(3.2).is_integer() - False - >>> np.int32(-2).is_integer() - True diff --git a/doc/release/upcoming_changes/19805.new_feature.rst b/doc/release/upcoming_changes/19805.new_feature.rst deleted file mode 100644 index f59409254..000000000 --- a/doc/release/upcoming_changes/19805.new_feature.rst +++ /dev/null @@ -1,5 +0,0 @@ -Symbolic parser for Fortran dimension specifications ----------------------------------------------------- -A new symbolic parser has been added to f2py in order to correctly parse -dimension specifications. The parser is the basis for future improvements -and provides compatibility with Draft Fortran 202x. diff --git a/doc/release/upcoming_changes/19879.new_feature.rst b/doc/release/upcoming_changes/19879.new_feature.rst deleted file mode 100644 index c6624138b..000000000 --- a/doc/release/upcoming_changes/19879.new_feature.rst +++ /dev/null @@ -1,15 +0,0 @@ -``ndarray``, ``dtype`` and ``number`` are now runtime-subscriptable -------------------------------------------------------------------- -Mimicking :pep:`585`, the `~numpy.ndarray`, `~numpy.dtype` and `~numpy.number` -classes are now subscriptable for python 3.9 and later. -Consequently, expressions that were previously only allowed in .pyi stub files -or with the help of ``from __future__ import annotations`` are now also legal -during runtime. - -.. code-block:: python - - >>> import numpy as np - >>> from typing import Any - - >>> np.ndarray[Any, np.dtype[np.float64]] - numpy.ndarray[typing.Any, numpy.dtype[numpy.float64]] diff --git a/doc/release/upcoming_changes/19921.deprecation.rst b/doc/release/upcoming_changes/19921.deprecation.rst deleted file mode 100644 index 17fa0f605..000000000 --- a/doc/release/upcoming_changes/19921.deprecation.rst +++ /dev/null @@ -1,3 +0,0 @@ -* the misspelled keyword argument ``delimitor`` of - ``numpy.ma.mrecords.fromtextfile()`` has been changed into - ``delimiter``, using it will emit a deprecation warning. diff --git a/doc/release/upcoming_changes/20000.deprecation.rst b/doc/release/upcoming_changes/20000.deprecation.rst deleted file mode 100644 index e0a56cd47..000000000 --- a/doc/release/upcoming_changes/20000.deprecation.rst +++ /dev/null @@ -1,5 +0,0 @@ -Passing boolean ``kth`` values to (arg-)partition has been deprecated ---------------------------------------------------------------------- -`~numpy.partition` and `~numpy.argpartition` would previously accept boolean -values for the ``kth`` parameter, which would subsequently be converted into -integers. This behavior has now been deprecated. diff --git a/doc/release/upcoming_changes/20027.improvement.rst b/doc/release/upcoming_changes/20027.improvement.rst deleted file mode 100644 index 86b3bed74..000000000 --- a/doc/release/upcoming_changes/20027.improvement.rst +++ /dev/null @@ -1,17 +0,0 @@ -Missing parameters have been added to the ``nan<x>`` functions --------------------------------------------------------------- -A number of the ``nan<x>`` functions previously lacked parameters that were -present in their ``<x>``-based counterpart, *e.g.* the ``where`` parameter was -present in `~numpy.mean` but absent from `~numpy.nanmean`. 
- -The following parameters have now been added to the ``nan<x>`` functions: - -* nanmin: ``initial`` & ``where`` -* nanmax: ``initial`` & ``where`` -* nanargmin: ``keepdims`` & ``out`` -* nanargmax: ``keepdims`` & ``out`` -* nansum: ``initial`` & ``where`` -* nanprod: ``initial`` & ``where`` -* nanmean: ``where`` -* nanvar: ``where`` -* nanstd: ``where`` diff --git a/doc/release/upcoming_changes/20049.change.rst b/doc/release/upcoming_changes/20049.change.rst deleted file mode 100644 index e1f08b343..000000000 --- a/doc/release/upcoming_changes/20049.change.rst +++ /dev/null @@ -1,5 +0,0 @@ -Corrected ``advance`` in ``PCG64DSXM`` and ``PCG64`` ----------------------------------------------------- -Fixed a bug in the ``advance`` method of ``PCG64DSXM`` and ``PCG64``. The bug only -affects results when the step was larger than :math:`2^{64}` on platforms -that do not support 128-bit integers(e.g., Windows and 32-bit Linux). diff --git a/doc/release/upcoming_changes/20394.deprecation.rst b/doc/release/upcoming_changes/20394.deprecation.rst new file mode 100644 index 000000000..44d1c8a20 --- /dev/null +++ b/doc/release/upcoming_changes/20394.deprecation.rst @@ -0,0 +1,6 @@ +Deprecate PyDataMem_SetEventHook +-------------------------------- + +The ability to track allocations is now built-in to python via ``tracemalloc``. +The hook function ``PyDataMem_SetEventHook`` has been deprecated and the +demonstration of its use in tool/allocation_tracking has been removed. diff --git a/doc/source/dev/development_workflow.rst b/doc/source/dev/development_workflow.rst index 8c56f6fb2..585aacfc9 100644 --- a/doc/source/dev/development_workflow.rst +++ b/doc/source/dev/development_workflow.rst @@ -187,6 +187,27 @@ Standard acronyms to start the commit message with are:: TST: addition or modification of tests REL: related to releasing numpy +Commands to skip continuous integration +``````````````````````````````````````` + +By default a lot of continuous integration (CI) jobs are run for every PR, +from running the test suite on different operating systems and hardware +platforms to building the docs. In some cases you already know that CI isn't +needed (or not all of it), for example if you work on CI config files, text in +the README, or other files that aren't involved in regular build, test or docs +sequences. In such cases you may explicitly skip CI by including one of these +fragments in your commit message:: + + ``[ci skip]``: skip as much CI as possible (not all jobs can be skipped) + ``[skip github]``: skip GitHub Actions "build numpy and run tests" jobs + ``[skip travis]``: skip TravisCI jobs + ``[skip azurepipelines]``: skip Azure jobs + +*Note: unfortunately not all CI systems implement this feature well, or at all. +CircleCI supports ``ci skip`` but has no command to skip only CircleCI. +Azure chooses to still run jobs with skip commands on PRs, the jobs only get +skipped on merging to master.* + .. _workflow_mailing_list: diff --git a/doc/source/f2py/buildtools/cmake.rst b/doc/source/f2py/buildtools/cmake.rst new file mode 100644 index 000000000..3ed5a2bee --- /dev/null +++ b/doc/source/f2py/buildtools/cmake.rst @@ -0,0 +1,60 @@ +.. _f2py-cmake: + +=================== +Using via ``cmake`` +=================== + +In terms of complexity, ``cmake`` falls between ``make`` and ``meson``. The +learning curve is steeper since CMake syntax is not pythonic and is closer to +``make`` with environment variables. 
+ +However, the trade-off is enhanced flexibility and support for most architectures +and compilers. An introduction to the syntax is out of scope for this document, +but this `extensive CMake collection`_ of resources is a good starting point. + +.. note:: + + ``cmake`` is very popular for mixed-language systems; however, support for + ``f2py`` is not particularly native or pleasant, so a more natural approach + is to consider :ref:`f2py-skbuild`. + +Fibonacci Walkthrough (F77) +=========================== + +Returning to the ``fib`` example from the :ref:`f2py-getting-started` section: + +.. literalinclude:: ./../code/fib1.f + :language: fortran + +We do not need to explicitly generate the ``python -m numpy.f2py fib1.f`` +output, ``fib1module.c``, which is beneficial. With this, we can now +initialize a ``CMakeLists.txt`` file as follows: + +.. literalinclude:: ./../code/CMakeLists.txt + :language: cmake + +A key element of the ``CMakeLists.txt`` file defined above is that the +``add_custom_command`` is used to generate the wrapper ``C`` files and then +added as a dependency of the actual shared library target via an +``add_custom_target`` directive, which prevents the command from running every +time. Additionally, the method used for obtaining the ``fortranobject.c`` file +can also be used to grab the ``numpy`` headers on older ``cmake`` versions. + +This then works in the same manner as the other modules, although the naming +conventions are different and the output library is not automatically prefixed +with the ``cython`` information. + +.. code:: bash + + ls . + # CMakeLists.txt fib1.f + mkdir build && cd build + cmake .. + make + python -c "import numpy as np; import fibby; a = np.zeros(9); fibby.fib(a); print (a)" + # [ 0. 1. 1. 2. 3. 5. 8. 13. 21.] + +This is particularly useful when a toolchain already exists and +``scikit-build`` or other additional ``python`` dependencies are discouraged. + +.. _extensive CMake collection: https://cliutils.gitlab.io/modern-cmake/ diff --git a/doc/source/f2py/distutils.rst b/doc/source/f2py/buildtools/distutils.rst index 575dacdff..9abeee8b8 100644 --- a/doc/source/f2py/distutils.rst +++ b/doc/source/f2py/buildtools/distutils.rst @@ -1,3 +1,5 @@ +.. _f2py-distutils: + ============================= Using via `numpy.distutils` ============================= @@ -10,23 +12,21 @@ compile Fortran sources, call F2PY to construct extension modules, etc. .. topic:: Example - Consider the following `setup file`__ for the ``fib`` examples in the previous - section: + Consider the following ``setup_file.py`` for the ``fib`` and ``scalar`` + examples from the :ref:`f2py-getting-started` section: - .. literalinclude:: ./code/setup_example.py + .. literalinclude:: ./../code/setup_example.py :language: python Running - :: + .. code-block:: bash python setup_example.py build will build two extension modules ``scalar`` and ``fib2`` to the build directory. - - __ setup_example.py - + Extensions to ``distutils`` =========================== @@ -57,7 +57,7 @@ Extensions to ``distutils`` Run - :: + .. code-block:: bash python <setup.py file> config_fc build_src build_ext --help @@ -73,6 +73,6 @@ Extensions to ``distutils`` See ``numpy_distutils/fcompiler.py`` for an up-to-date list of supported compilers for different platforms, or run - :: + ..
code-block:: bash - f2py -c --help-fcompiler + python -m numpy.f2py -c --help-fcompiler diff --git a/doc/source/f2py/buildtools/index.rst b/doc/source/f2py/buildtools/index.rst new file mode 100644 index 000000000..aa41fd37f --- /dev/null +++ b/doc/source/f2py/buildtools/index.rst @@ -0,0 +1,102 @@ +.. _f2py-bldsys: + +======================= +F2PY and Build Systems +======================= + +In this section we will cover the various popular build systems and their usage +with ``f2py``. + +.. note:: + **As of November 2021** + + The default build system for ``F2PY`` has traditionally been the through the + enhanced ``numpy.distutils`` module. This module is based on ``distutils`` which + will be removed in ``Python 3.12.0`` in **October 2023**; ``setuptools`` does not + have support for Fortran or ``F2PY`` and it is unclear if it will be supported + in the future. Alternative methods are thus increasingly more important. + + +Basic Concepts +=============== + +Building an extension module which includes Python and Fortran consists of: + +- Fortran source(s) +- One or more generated files from ``f2py`` + + + A ``C`` wrapper file is always created + + Code with modules require an additional ``.f90`` wrapper + +- ``fortranobject.{c,h}`` + + + Distributed with ``numpy`` + + Can be queried via ``python -c "import numpy.f2py; print(numpy.f2py.get_include())"`` + +- NumPy headers + + + Can be queried via ``python -c "import numpy; print(numpy.get_include())"`` + +- Python libraries and development headers + +Broadly speaking there are three cases which arise when considering the outputs of ``f2py``: + +Fortran 77 programs + - Input file ``blah.f`` + - Generates + + + ``blahmodule.c`` + + ``f2pywrappers.f`` + + When no ``COMMON`` blocks are present only a ``C`` wrapper file is generated. + Wrappers are also generated to rewrite assumed shape arrays as automatic + arrays. + +Fortran 90 programs + - Input file ``blah.f90`` + - Generates: + + + ``blahmodule.c`` + + ``blah-f2pywrappers2.f90`` + + The secondary wrapper is used to handle code which is subdivided into + modules. It rewrites assumed shape arrays as automatic arrays. + +Signature files + - Input file ``blah.pyf`` + - Generates: + + + ``blahmodule.c`` + + ``blah-f2pywrappers2.f90`` (occasionally) + + ``f2pywrappers.f`` (occasionally) + + Signature files ``.pyf`` do not signal their language standard via the file + extension, they may generate the F90 and F77 specific wrappers depending on + their contents; which shifts the burden of checking for generated files onto + the build system. + +.. note:: + + The signature file output situation is being reconsidered in `issue 20385`_ . + + +In theory keeping the above requirements in hand, any build system can be +adapted to generate ``f2py`` extension modules. Here we will cover a subset of +the more popular systems. + +.. note:: + ``make`` has no place in a modern multi-language setup, and so is not + discussed further. + +Build Systems +============== + +.. toctree:: + :maxdepth: 2 + + distutils + meson + cmake + skbuild + +.. _`issue 20385`: https://github.com/numpy/numpy/issues/20385 diff --git a/doc/source/f2py/buildtools/meson.rst b/doc/source/f2py/buildtools/meson.rst new file mode 100644 index 000000000..d98752e65 --- /dev/null +++ b/doc/source/f2py/buildtools/meson.rst @@ -0,0 +1,114 @@ +.. 
_f2py-meson: + +=================== +Using via ``meson`` +=================== + +The key advantage gained by leveraging ``meson`` over the techniques described +in :ref:`f2py-distutils` is that this feeds into existing systems and larger +projects with ease. ``meson`` has a rather pythonic syntax which makes it more +comfortable and amenable to extension for ``python`` users. + +.. note:: + + Meson needs to be at least ``0.46.0`` in order to resolve the ``python`` include directories. + + +Fibonacci Walkthrough (F77) +=========================== + + +We will need the generated ``C`` wrapper before we can use a general purpose +build system like ``meson``. We will acquire this by: + +.. code-block:: bash + + python -m numpy.f2py fib1.f -m fib2 + +Now, consider the following ``meson.build`` file for the ``fib`` and ``scalar`` +examples from the :ref:`f2py-getting-started` section: + +.. literalinclude:: ./../code/meson.build + :language: meson + +At this point the build will complete, but the import will fail: + +.. code-block:: bash + + meson setup builddir + meson compile -C builddir + cd builddir + python -c 'import fib2' + Traceback (most recent call last): + File "<string>", line 1, in <module> + ImportError: fib2.cpython-39-x86_64-linux-gnu.so: undefined symbol: FIB_ + # Check this isn't a false positive + nm -A fib2.cpython-39-x86_64-linux-gnu.so | grep FIB_ + fib2.cpython-39-x86_64-linux-gnu.so: U FIB_ + +Recall that the original example, as reproduced below, was in SCREAMCASE: + +.. literalinclude:: ./../code/fib1.f + :language: fortran + +With the standard approach, the subroutine exposed to ``python`` is ``fib`` and +not ``FIB``. This means we have a few options. One approach (where possible) is +to lowercase the original Fortran file, for example: + +.. code-block:: bash + + tr "[:upper:]" "[:lower:]" < fib1.f > fib1_lower.f && mv fib1_lower.f fib1.f + python -m numpy.f2py fib1.f -m fib2 + meson --wipe builddir + meson compile -C builddir + cd builddir + python -c 'import fib2' + +However, this requires the ability to modify the source, which is not always +possible. The easiest way to solve this is to let ``f2py`` deal with it: + +.. code-block:: bash + + python -m numpy.f2py fib1.f -m fib2 --lower + meson --wipe builddir + meson compile -C builddir + cd builddir + python -c 'import fib2' + + +Automating wrapper generation +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +A major pain point in the workflow defined above is the manual tracking of +inputs; figuring out the actual outputs would require more effort, for reasons +discussed in :ref:`f2py-bldsys`. + +However, we can augment our workflow in a straightforward way to take into account +files for which the outputs are known when the build system is set up. + +.. literalinclude:: ./../code/meson_upd.build + :language: meson + +This can be compiled and run as before. + +.. code-block:: bash + + rm -rf builddir + meson setup builddir + meson compile -C builddir + cd builddir + python -c "import numpy as np; import fibby; a = np.zeros(9); fibby.fib(a); print (a)" + # [ 0. 1. 1. 2. 3. 5. 8. 13. 21.]
+ +Salient points +=============== + +It is worth keeping in mind the following: + +* ``meson`` will default to passing ``-fimplicit-none`` under ``gfortran`` by + default, which differs from that of the standard ``np.distutils`` behaviour + +* It is not possible to use SCREAMCASE in this context, so either the contents + of the ``.f`` file or the generated wrapper ``.c`` needs to be lowered to + regular letters; which can be facilitated by the ``--lower`` option of + ``F2PY`` diff --git a/doc/source/f2py/buildtools/skbuild.rst b/doc/source/f2py/buildtools/skbuild.rst new file mode 100644 index 000000000..af18ea43b --- /dev/null +++ b/doc/source/f2py/buildtools/skbuild.rst @@ -0,0 +1,94 @@ +.. _f2py-skbuild: + +============================ +Using via ``scikit-build`` +============================ + +``scikit-build`` provides two separate concepts geared towards the users of Python extension modules. + +1. A ``setuptools`` replacement (legacy behaviour) +2. A series of ``cmake`` modules with definitions which help building Python extensions + +.. note:: + + It is possible to use ``scikit-build``'s ``cmake`` modules to `bypass the + cmake setup mechanism`_ completely, and to write targets which call ``f2py + -c``. This usage is **not recommended** since the point of these build system + documents are to move away from the internal ``numpy.distutils`` methods. + +For situations where no ``setuptools`` replacements are required or wanted (i.e. +if ``wheels`` are not needed), it is recommended to instead use the vanilla +``cmake`` setup described in :ref:`f2py-cmake`. + +Fibonacci Walkthrough (F77) +=========================== + +We will consider the ``fib`` example from :ref:`f2py-getting-started` section. + +.. literalinclude:: ./../code/fib1.f + :language: fortran + +``CMake`` modules only +^^^^^^^^^^^^^^^^^^^^^^^ + +Consider using the following ``CMakeLists.txt``. + +.. literalinclude:: ./../code/CMakeLists_skbuild.txt + :language: cmake + +Much of the logic is the same as in :ref:`f2py-cmake`, however notably here the +appropriate module suffix is generated via ``sysconfig.get_config_var("SO")``. +The resulting extension can be built and loaded in the standard workflow. + +.. code:: bash + + ls . + # CMakeLists.txt fib1.f + mkdir build && cd build + cmake .. + make + python -c "import numpy as np; import fibby; a = np.zeros(9); fibby.fib(a); print (a)" + # [ 0. 1. 1. 2. 3. 5. 8. 13. 21.] + + +``setuptools`` replacement +^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +.. note:: + + **As of November 2021** + + The behavior described here of driving the ``cmake`` build of a module is + considered to be legacy behaviour and should not be depended on. + +The utility of ``scikit-build`` lies in being able to drive the generation of +more than extension modules, in particular a common usage pattern is the +generation of Python distributables (for example for PyPI). + +The workflow with ``scikit-build`` straightforwardly supports such packaging requirements. Consider augmenting the project with a ``setup.py`` as defined: + +.. literalinclude:: ./../code/setup_skbuild.py + :language: python + +Along with a commensurate ``pyproject.toml`` + +.. literalinclude:: ./../code/pyproj_skbuild.toml + :language: toml + +Together these can build the extension using ``cmake`` in tandem with other +standard ``setuptools`` outputs. Running ``cmake`` through ``setup.py`` is +mostly used when it is necessary to integrate with extension modules not built +with ``cmake``. + +.. code:: bash + + ls . 
+ # CMakeLists.txt fib1.f pyproject.toml setup.py + python setup.py build_ext --inplace + python -c "import numpy as np; import fibby.fibby; a = np.zeros(9); fibby.fibby.fib(a); print (a)" + # [ 0. 1. 1. 2. 3. 5. 8. 13. 21.] + +Where we have modified the path to the module as ``--inplace`` places the +extension module in a subfolder. + +.. _bypass the cmake setup mechanism: https://scikit-build.readthedocs.io/en/latest/cmake-modules/F2PY.html diff --git a/doc/source/f2py/code/CMakeLists.txt b/doc/source/f2py/code/CMakeLists.txt new file mode 100644 index 000000000..62ff193bb --- /dev/null +++ b/doc/source/f2py/code/CMakeLists.txt @@ -0,0 +1,80 @@ +### setup project ### +cmake_minimum_required(VERSION 3.17.3) # 3.17 > for Python3_SOABI +set(CMAKE_CXX_STANDARD_REQUIRED ON) + +project(fibby + VERSION 1.0 + DESCRIPTION "FIB module" + LANGUAGES C Fortran + ) + +# Safety net +if(PROJECT_SOURCE_DIR STREQUAL PROJECT_BINARY_DIR) + message( + FATAL_ERROR + "In-source builds not allowed. Please make a new directory (called a build directory) and run CMake from there.\n" + ) +endif() + +# Grab Python +find_package(Python3 3.9 REQUIRED + COMPONENTS Interpreter Development NumPy) + +# Grab the variables from a local Python installation +# F2PY headers +execute_process( + COMMAND "${Python3_EXECUTABLE}" + -c "import numpy.f2py; print(numpy.f2py.get_include())" + OUTPUT_VARIABLE F2PY_INCLUDE_DIR + OUTPUT_STRIP_TRAILING_WHITESPACE +) + +# Project scope; consider using target_include_directories instead +include_directories( + BEFORE + ${Python3_INCLUDE_DIRS} + ${Python3_NumPy_INCLUDE_DIRS} + ${F2PY_INCLUDE_DIR} + ) + +message(STATUS ${Python3_INCLUDE_DIRS}) +message(STATUS ${F2PY_INCLUDE_DIR}) +message(STATUS ${Python3_NumPy_INCLUDE_DIRS}) + +# Vars +set(f2py_module_name "fibby") +set(fortran_src_file "${CMAKE_SOURCE_DIR}/fib1.f") +set(f2py_module_c "${f2py_module_name}module.c") +set(generated_module_file "${f2py_module_name}${Python3_SOABI}") + +# Generate sources +add_custom_target( + genpyf + DEPENDS "${CMAKE_CURRENT_BINARY_DIR}/${f2py_module_c}" + ) +add_custom_command( + OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/${f2py_module_c}" + COMMAND ${Python3_EXECUTABLE} -m "numpy.f2py" + "${fortran_src_file}" + -m "fibby" + --lower # Important + DEPENDS fib1.f # Fortran source + ) + +# Set up target +add_library(${CMAKE_PROJECT_NAME} SHARED + "${CMAKE_CURRENT_BINARY_DIR}/${f2py_module_c}" # Generated + "${F2PY_INCLUDE_DIR}/fortranobject.c" # From NumPy + "${fortran_src_file}" # Fortran source(s) + ) + +# Depend on sources +add_dependencies(${CMAKE_PROJECT_NAME} genpyf) + +set_target_properties( + ${CMAKE_PROJECT_NAME} + PROPERTIES + PREFIX "" + OUTPUT_NAME "${CMAKE_PROJECT_NAME}" + LINKER_LANGUAGE C + ) diff --git a/doc/source/f2py/code/CMakeLists_skbuild.txt b/doc/source/f2py/code/CMakeLists_skbuild.txt new file mode 100644 index 000000000..97bc5c744 --- /dev/null +++ b/doc/source/f2py/code/CMakeLists_skbuild.txt @@ -0,0 +1,89 @@ +### setup project ### +cmake_minimum_required(VERSION 3.17.3) +set(CMAKE_CXX_STANDARD_REQUIRED ON) + +project(fibby + VERSION 1.0 + DESCRIPTION "FIB module" + LANGUAGES C Fortran + ) + +# Safety net +if(PROJECT_SOURCE_DIR STREQUAL PROJECT_BINARY_DIR) + message( + FATAL_ERROR + "In-source builds not allowed. 
Please make a new directory (called a build directory) and run CMake from there.\n" + ) +endif() + +# Grab Python +find_package(Python3 3.9 REQUIRED + COMPONENTS Interpreter Development) + +# Ensure scikit-build modules +if (NOT SKBUILD) + # Kanged -->https://github.com/Kitware/torch_liberator/blob/master/CMakeLists.txt + # If skbuild is not the driver; include its utilities in CMAKE_MODULE_PATH + execute_process( + COMMAND "${Python3_EXECUTABLE}" + -c "import os, skbuild; print(os.path.dirname(skbuild.__file__))" + OUTPUT_VARIABLE SKBLD_DIR + OUTPUT_STRIP_TRAILING_WHITESPACE + ) + set(SKBLD_CMAKE_DIR "${SKBLD_DIR}/resources/cmake") + list(APPEND CMAKE_MODULE_PATH ${SKBLD_CMAKE_DIR}) +endif() + +# scikit-build style includes +find_package(PythonExtensions REQUIRED) # for ${PYTHON_EXTENSION_MODULE_SUFFIX} +find_package(NumPy REQUIRED) # for ${NumPy_INCLUDE_DIRS} +find_package(F2PY REQUIRED) # for ${F2PY_INCLUDE_DIR} + +# Prepping the module +set(f2py_module_name "fibby") +set(fortran_src_file "${CMAKE_SOURCE_DIR}/fib1.f") +set(generated_module_file ${f2py_module_name}${PYTHON_EXTENSION_MODULE_SUFFIX}) + +# Target for enforcing dependencies +add_custom_target(${f2py_module_name} ALL + DEPENDS "${fortran_src_file}" + ) + +# Custom command for generating .c +add_custom_command( + OUTPUT "${f2py_module_name}module.c" + COMMAND ${F2PY_EXECUTABLE} + -m ${f2py_module_name} + ${fortran_src_file} + --lower + WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR} + DEPENDS ${fortran_src_file} + ) + +add_library(${generated_module_file} MODULE + "${f2py_module_name}module.c" + "${F2PY_INCLUDE_DIR}/fortranobject.c" + "${fortran_src_file}") + +target_include_directories(${generated_module_file} PUBLIC + ${F2PY_INCLUDE_DIRS} + ${PYTHON_INCLUDE_DIRS}) +set_target_properties(${generated_module_file} PROPERTIES SUFFIX "") +set_target_properties(${generated_module_file} PROPERTIES PREFIX "") + +# Linker fixes +if (UNIX) + if (APPLE) + set_target_properties(${generated_module_file} PROPERTIES + LINK_FLAGS '-Wl,-dylib,-undefined,dynamic_lookup') + else() + set_target_properties(${generated_module_file} PROPERTIES + LINK_FLAGS '-Wl,--allow-shlib-undefined') + endif() +endif() + +if (SKBUILD) + install(TARGETS ${generated_module_file} DESTINATION fibby) +else() + install(TARGETS ${generated_module_file} DESTINATION ${CMAKE_SOURCE_DIR}/fibby) +endif() diff --git a/doc/source/f2py/code/meson.build b/doc/source/f2py/code/meson.build new file mode 100644 index 000000000..b756abf8f --- /dev/null +++ b/doc/source/f2py/code/meson.build @@ -0,0 +1,38 @@ +project('f2py_examples', 'c', + version : '0.1', + default_options : ['warning_level=2']) + +add_languages('fortran') + +py_mod = import('python') +py3 = py_mod.find_installation('python3') +py3_dep = py3.dependency() +message(py3.path()) +message(py3.get_install_dir()) + +incdir_numpy = run_command(py3, + ['-c', 'import os; os.chdir(".."); import numpy; print(numpy.get_include())'], + check : true +).stdout().strip() + +incdir_f2py = run_command(py3, + ['-c', 'import os; os.chdir(".."); import numpy.f2py; print(numpy.f2py.get_include())'], + check : true +).stdout().strip() + +fibby_source = custom_target('fibbymodule.c', + input : ['fib1.f'], + output : ['fibbymodule.c'], + command : [ py3, '-m', 'numpy.f2py', '@INPUT@', + '-m', 'fibby', '--lower' ] + ) + +inc_np = include_directories(incdir_numpy, incdir_f2py) + +py3.extension_module('fibby', + 'fib1.f', + fibby_source, + incdir_f2py+'/fortranobject.c', + include_directories: inc_np, + dependencies : py3_dep, + install : 
true) diff --git a/doc/source/f2py/code/meson_upd.build b/doc/source/f2py/code/meson_upd.build new file mode 100644 index 000000000..97bd8d175 --- /dev/null +++ b/doc/source/f2py/code/meson_upd.build @@ -0,0 +1,37 @@ +project('f2py_examples', 'c', + version : '0.1', + default_options : ['warning_level=2']) + +add_languages('fortran') + +py_mod = import('python') +py3 = py_mod.find_installation('python3') +py3_dep = py3.dependency() +message(py3.path()) +message(py3.get_install_dir()) + +incdir_numpy = run_command(py3, + ['-c', 'import os; os.chdir(".."); import numpy; print(numpy.get_include())'], + check : true +).stdout().strip() + +incdir_f2py = run_command(py3, + ['-c', 'import os; os.chdir(".."); import numpy.f2py; print(numpy.f2py.get_include())'], + check : true +).stdout().strip() + +fibby_source = custom_target('fibbymodule.c', + input : ['fib1.f'], + output : ['fibbymodule.c'], + command : [ py3, '-m', 'numpy.f2py', '@INPUT@', + '-m', 'fibby', '--lower' ]) + +inc_np = include_directories(incdir_numpy, incdir_f2py) + +py3.extension_module('fibby', + 'fib1.f', + fibby_source, + incdir_f2py+'/fortranobject.c', + include_directories: inc_np, + dependencies : py3_dep, + install : true) diff --git a/doc/source/f2py/code/pyproj_skbuild.toml b/doc/source/f2py/code/pyproj_skbuild.toml new file mode 100644 index 000000000..6686d1736 --- /dev/null +++ b/doc/source/f2py/code/pyproj_skbuild.toml @@ -0,0 +1,5 @@ +[project] +requires-python = ">=3.7" + +[build-system] +requires = ["setuptools>=42", "wheel", "scikit-build", "cmake>=3.18", "numpy>=1.21"] diff --git a/doc/source/f2py/code/setup_skbuild.py b/doc/source/f2py/code/setup_skbuild.py new file mode 100644 index 000000000..4dfc6af8b --- /dev/null +++ b/doc/source/f2py/code/setup_skbuild.py @@ -0,0 +1,10 @@ +from skbuild import setup + +setup( + name="fibby", + version="0.0.1", + description="a minimal example package (fortran version)", + license="MIT", + packages=['fibby'], + cmake_args=['-DSKBUILD=ON'] +) diff --git a/doc/source/f2py/f2py.getting-started.rst b/doc/source/f2py/f2py.getting-started.rst index 1709aad61..c1a006f6f 100644 --- a/doc/source/f2py/f2py.getting-started.rst +++ b/doc/source/f2py/f2py.getting-started.rst @@ -1,3 +1,5 @@ +.. _f2py-getting-started: + ====================================== Three ways to wrap - getting started ====================================== diff --git a/doc/source/f2py/index.rst b/doc/source/f2py/index.rst index c774a0df6..56df31b4e 100644 --- a/doc/source/f2py/index.rst +++ b/doc/source/f2py/index.rst @@ -23,9 +23,9 @@ from Python. usage f2py.getting-started - distutils python-usage signature-file + buildtools/index advanced .. _Python: https://www.python.org/ diff --git a/doc/source/reference/c-api/array.rst b/doc/source/reference/c-api/array.rst index 6a135fd71..bb4405825 100644 --- a/doc/source/reference/c-api/array.rst +++ b/doc/source/reference/c-api/array.rst @@ -325,8 +325,7 @@ From scratch should be increased after the pointer is passed in, and the base member of the returned ndarray should point to the Python object that owns the data. This will ensure that the provided memory is not - freed while the returned array is in existence. To free memory as soon - as the ndarray is deallocated, set the OWNDATA flag on the returned ndarray. + freed while the returned array is in existence. .. 
c:function:: PyObject* PyArray_SimpleNewFromDescr( \ int nd, npy_int const* dims, PyArray_Descr* descr) @@ -1323,7 +1322,7 @@ User-defined data types data-type object, *descr*, of the given *scalar* kind. Use *scalar* = :c:data:`NPY_NOSCALAR` to register that an array of data-type *descr* can be cast safely to a data-type whose type_number is - *totype*. + *totype*. The return value is 0 on success or -1 on failure. .. c:function:: int PyArray_TypeNumFromName( \ char const *str) @@ -1463,7 +1462,9 @@ of the constant names is deprecated in 1.7. .. c:macro:: NPY_ARRAY_OWNDATA - The data area is owned by this array. + The data area is owned by this array. Should never be set manually, instead + create a ``PyObject`` wrapping the data and set the array's base to that + object. For an example, see the test in ``test_mem_policy``. .. c:macro:: NPY_ARRAY_ALIGNED @@ -2778,13 +2779,19 @@ Array Scalars whenever 0-dimensional arrays could be returned to Python. .. c:function:: PyObject* PyArray_Scalar( \ - void* data, PyArray_Descr* dtype, PyObject* itemsize) - - Return an array scalar object of the given enumerated *typenum* - and *itemsize* by **copying** from memory pointed to by *data* - . If *swap* is nonzero then this function will byteswap the data - if appropriate to the data-type because array scalars are always - in correct machine-byte order. + void* data, PyArray_Descr* dtype, PyObject* base) + + Return an array scalar object of the given *dtype* by **copying** + from memory pointed to by *data*. *base* is expected to be the + array object that is the owner of the data. *base* is required + if `dtype` is a ``void`` scalar, or if the ``NPY_USE_GETITEM`` + flag is set and it is known that the ``getitem`` method uses + the ``arr`` argument without checking if it is ``NULL``. Otherwise + `base` may be ``NULL``. + + If the data is not in native byte order (as indicated by + ``dtype->byteorder``) then this function will byteswap the data, + because array scalars are always in correct machine-byte order. .. c:function:: PyObject* PyArray_ToScalar(void* data, PyArrayObject* arr) diff --git a/doc/source/reference/c-api/data_memory.rst b/doc/source/reference/c-api/data_memory.rst new file mode 100644 index 000000000..2084ab5d0 --- /dev/null +++ b/doc/source/reference/c-api/data_memory.rst @@ -0,0 +1,161 @@ +.. _data_memory: + +Memory management in NumPy +========================== + +The `numpy.ndarray` is a python class. It requires additional memory allocations +to hold `numpy.ndarray.strides`, `numpy.ndarray.shape` and +`numpy.ndarray.data` attributes. These attributes are specially allocated +after creating the python object in `__new__`. The ``strides`` and +``shape`` are stored in a piece of memory allocated internally. + +The ``data`` allocation used to store the actual array values (which could be +pointers in the case of ``object`` arrays) can be very large, so NumPy has +provided interfaces to manage its allocation and release. This document details +how those interfaces work. + +Historical overview +------------------- + +Since version 1.7.0, NumPy has exposed a set of ``PyDataMem_*`` functions +(:c:func:`PyDataMem_NEW`, :c:func:`PyDataMem_FREE`, :c:func:`PyDataMem_RENEW`) +which are backed by `alloc`, `free`, `realloc` respectively. In that version +NumPy also exposed the `PyDataMem_EventHook` function (now deprecated) +described below, which wrap the OS-level calls. 
+ +Since those early days, Python also improved its memory management +capabilities, and began providing +various :ref:`management policies <memoryoverview>` in version +3.4. These routines are divided into a set of domains; each domain has a +:c:type:`PyMemAllocatorEx` structure of routines for memory management. Python also +added a `tracemalloc` module to trace calls to the various routines. These +tracking hooks were added to the NumPy ``PyDataMem_*`` routines. + +NumPy added a small cache of allocated memory in its internal +``npy_alloc_cache``, ``npy_alloc_cache_zero``, and ``npy_free_cache`` +functions. These wrap ``alloc``, ``alloc-and-memset(0)`` and ``free`` +respectively, but when ``npy_free_cache`` is called, it adds the pointer to a +short list of available blocks marked by size. These blocks can be re-used by +subsequent calls to ``npy_alloc*``, avoiding memory thrashing. + +Configurable memory routines in NumPy (NEP 49) +---------------------------------------------- + +Users may wish to override the internal data memory routines with ones of their +own. Since NumPy does not use the Python domain strategy to manage data memory, +it provides an alternative set of C-APIs to change memory routines. There are +no Python domain-wide strategies for large chunks of object data, so those are +less suited to NumPy's needs. Users who wish to change the NumPy data memory +management routines can use :c:func:`PyDataMem_SetHandler`, which uses a +:c:type:`PyDataMem_Handler` structure to hold pointers to functions used to +manage the data memory. The calls are still wrapped by internal routines that +call :c:func:`PyTraceMalloc_Track` and :c:func:`PyTraceMalloc_Untrack`, and that +still use the deprecated :c:func:`PyDataMem_EventHookFunc` mechanism. Since the +functions may change during the lifetime of the process, each ``ndarray`` +carries with it the functions used at the time of its instantiation, and these +will be used to reallocate or free the data memory of the instance. + +.. c:type:: PyDataMem_Handler + + A struct to hold function pointers used to manipulate memory. + + .. code-block:: c + + typedef struct { + char name[127]; /* multiple of 64 to keep the struct aligned */ + uint8_t version; /* currently 1 */ + PyDataMemAllocator allocator; + } PyDataMem_Handler; + + where the allocator structure is + + .. code-block:: c + + /* The declaration of free differs from PyMemAllocatorEx */ + typedef struct { + void *ctx; + void* (*malloc) (void *ctx, size_t size); + void* (*calloc) (void *ctx, size_t nelem, size_t elsize); + void* (*realloc) (void *ctx, void *ptr, size_t new_size); + void (*free) (void *ctx, void *ptr, size_t size); + } PyDataMemAllocator; + +.. c:function:: PyObject * PyDataMem_SetHandler(PyObject *handler) + + Set a new allocation policy. If the input value is ``NULL``, the + policy is reset to the default. Return the previous policy, or + ``NULL`` if an error has occurred. The user-provided functions are wrapped + so they will still call the Python and NumPy memory management callback + hooks. + +.. c:function:: PyObject * PyDataMem_GetHandler() + + Return the current policy that will be used to allocate data for the + next ``PyArrayObject``. On failure, return ``NULL``. + +For an example of setting up and using a ``PyDataMem_Handler``, see the test in +:file:`numpy/core/tests/test_mem_policy.py`. + +.. 
c:function:: void PyDataMem_EventHookFunc(void *inp, void *outp, size_t size, void *user_data); + + This function will be called during data memory manipulation. + +.. c:function:: PyDataMem_EventHookFunc * PyDataMem_SetEventHook(PyDataMem_EventHookFunc *newhook, void *user_data, void **old_data) + + Sets the allocation event hook for NumPy array data. + + Returns a pointer to the previous hook or ``NULL``. If ``old_data`` is + non-``NULL``, the previous ``user_data`` pointer will be copied to it. + + If not ``NULL``, the hook will be called at the end of each ``PyDataMem_NEW/FREE/RENEW``: + + .. code-block:: c + + result = PyDataMem_NEW(size) -> (*hook)(NULL, result, size, user_data) + PyDataMem_FREE(ptr) -> (*hook)(ptr, NULL, 0, user_data) + result = PyDataMem_RENEW(ptr, size) -> (*hook)(ptr, result, size, user_data) + + When the hook is called, the GIL will be held by the calling + thread. The hook should be written to be reentrant if it performs + operations that might cause new allocation events (such as the + creation/destruction of NumPy objects, or creating/destroying Python + objects which might cause a garbage collection). + + Deprecated in v1.23. + +What happens when deallocating if there is no policy set +-------------------------------------------------------- + +A rare but useful technique is to allocate a buffer outside NumPy, use +:c:func:`PyArray_NewFromDescr` to wrap the buffer in an ``ndarray``, then switch +the ``OWNDATA`` flag to true. When the ``ndarray`` is released, the +appropriate function from the ``ndarray``'s ``PyDataMem_Handler`` should be +called to free the buffer. But since the ``PyDataMem_Handler`` field was never set, +it will be ``NULL``. For backward compatibility, NumPy will call ``free()`` to +release the buffer. If ``NUMPY_WARN_IF_NO_MEM_POLICY`` is set to ``1``, a +warning will be emitted. The current default is not to emit a warning; this may +change in a future version of NumPy. + +A better technique would be to use a ``PyCapsule`` as a base object: + +.. code-block:: c + + /* define a PyCapsule_Destructor, using the correct deallocator for buf */ + void free_wrap(PyObject *capsule){ + void *obj = PyCapsule_GetPointer(capsule, PyCapsule_GetName(capsule)); + free(obj); + } + + /* then inside the function that creates arr from buf */ + ... + arr = PyArray_NewFromDescr(... buf, ...); + if (arr == NULL) { + return NULL; + } + capsule = PyCapsule_New(buf, "my_wrapped_buffer", + (PyCapsule_Destructor)&free_wrap); + if (PyArray_SetBaseObject(arr, capsule) == -1) { + Py_DECREF(arr); + return NULL; + } + ... diff --git a/doc/source/reference/c-api/index.rst b/doc/source/reference/c-api/index.rst index bb1ed154e..6288ff33b 100644 --- a/doc/source/reference/c-api/index.rst +++ b/doc/source/reference/c-api/index.rst @@ -49,3 +49,4 @@ code. generalized-ufuncs coremath deprecations + data_memory diff --git a/doc/source/reference/global_state.rst b/doc/source/reference/global_state.rst index f18481235..20874ceaa 100644 --- a/doc/source/reference/global_state.rst +++ b/doc/source/reference/global_state.rst @@ -84,3 +84,13 @@ contiguous in memory. Most users will have no reason to change these; for details see the :ref:`memory layout <memory-layout>` documentation. + +Warn if no memory allocation policy when deallocating data +---------------------------------------------------------- + +Some users might pass ownership of the data pointer to the ``ndarray`` by +setting the ``OWNDATA`` flag. If they do this without manually setting a +memory allocation policy, the default will be to call ``free``. 
If +``NUMPY_WARN_IF_NO_MEM_POLICY`` is set to ``"1"``, a ``RuntimeWarning`` will +be emitted. A better alternative is to use a ``PyCapsule`` with a deallocator +and set the ``ndarray.base``. diff --git a/doc/source/reference/random/index.rst b/doc/source/reference/random/index.rst index 96cd47017..aaabc9b39 100644 --- a/doc/source/reference/random/index.rst +++ b/doc/source/reference/random/index.rst @@ -55,7 +55,7 @@ properties than the legacy `MT19937` used in `RandomState`. more_vals = random.standard_normal(10) `Generator` can be used as a replacement for `RandomState`. Both class -instances hold a internal `BitGenerator` instance to provide the bit +instances hold an internal `BitGenerator` instance to provide the bit stream, it is accessible as ``gen.bit_generator``. Some long-overdue API cleanup means that legacy and compatibility methods have been removed from `Generator` diff --git a/doc/source/reference/random/performance.rst b/doc/source/reference/random/performance.rst index 85855be59..cb9b94113 100644 --- a/doc/source/reference/random/performance.rst +++ b/doc/source/reference/random/performance.rst @@ -13,7 +13,7 @@ full-featured, and fast on most platforms, but somewhat slow when compiled for parallelism would indicate using `PCG64DXSM`. `Philox` is fairly slow, but its statistical properties have -very high quality, and it is easy to get assuredly-independent stream by using +very high quality, and it is easy to get an assuredly-independent stream by using unique keys. If that is the style you wish to use for parallel streams, or you are porting from another system that uses that style, then `Philox` is your choice. diff --git a/doc/source/reference/routines.math.rst b/doc/source/reference/routines.math.rst index 3c2f96830..2a09b8d20 100644 --- a/doc/source/reference/routines.math.rst +++ b/doc/source/reference/routines.math.rst @@ -143,6 +143,21 @@ Handling complex numbers conj conjugate +Extrema Finding +--------------- +.. autosummary:: + :toctree: generated/ + + maximum + fmax + amax + nanmax + + minimum + fmin + amin + nanmin + Miscellaneous ------------- @@ -160,11 +175,7 @@ Miscellaneous fabs sign heaviside - maximum - minimum - fmax - fmin - + nan_to_num real_if_close diff --git a/doc/source/reference/routines.statistics.rst b/doc/source/reference/routines.statistics.rst index c675b6090..cd93e6025 100644 --- a/doc/source/reference/routines.statistics.rst +++ b/doc/source/reference/routines.statistics.rst @@ -9,11 +9,7 @@ Order statistics .. autosummary:: :toctree: generated/ - - amin - amax - nanmin - nanmax + ptp percentile nanpercentile diff --git a/doc/source/release.rst b/doc/source/release.rst index 62bd15790..9504c6e97 100644 --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -5,7 +5,10 @@ Release notes .. toctree:: :maxdepth: 3 + 1.23.0 <release/1.23.0-notes> 1.22.0 <release/1.22.0-notes> + 1.21.4 <release/1.21.4-notes> + 1.21.3 <release/1.21.3-notes> 1.21.2 <release/1.21.2-notes> 1.21.1 <release/1.21.1-notes> 1.21.0 <release/1.21.0-notes> diff --git a/doc/source/release/1.21.3-notes.rst b/doc/source/release/1.21.3-notes.rst new file mode 100644 index 000000000..4058452ef --- /dev/null +++ b/doc/source/release/1.21.3-notes.rst @@ -0,0 +1,44 @@ +.. currentmodule:: numpy + +========================== +NumPy 1.21.3 Release Notes +========================== + +NumPy 1.21.3 is a maintenance release that fixes a few bugs discovered after +1.21.2. It also provides 64 bit Python 3.10.0 wheels. 
Note a few oddities about +Python 3.10: + +* There are no 32 bit wheels for Windows, Mac, or Linux. +* The Mac Intel builds are only available in universal2 wheels. + +The Python versions supported in this release are 3.7-3.10. If you want to +compile your own version using gcc-11, you will need to use gcc-11.2+ to avoid +problems. + +Contributors +============ + +A total of 7 people contributed to this release. People with a "+" by their +names contributed a patch for the first time. + +* Aaron Meurer +* Bas van Beek +* Charles Harris +* Developer-Ecosystem-Engineering + +* Kevin Sheppard +* Sebastian Berg +* Warren Weckesser + +Pull requests merged +==================== + +A total of 8 pull requests were merged for this release. + +* `#19745 <https://github.com/numpy/numpy/pull/19745>`__: ENH: Add dtype-support to 3 ``generic``/``ndarray`` methods +* `#19955 <https://github.com/numpy/numpy/pull/19955>`__: BUG: Resolve Divide by Zero on Apple silicon + test failures... +* `#19958 <https://github.com/numpy/numpy/pull/19958>`__: MAINT: Mark type-check-only ufunc subclasses as ufunc aliases... +* `#19994 <https://github.com/numpy/numpy/pull/19994>`__: BUG: np.tan(np.inf) test failure +* `#20080 <https://github.com/numpy/numpy/pull/20080>`__: BUG: Correct incorrect advance in PCG with emulated int128 +* `#20081 <https://github.com/numpy/numpy/pull/20081>`__: BUG: Fix NaT handling in the PyArray_CompareFunc for datetime... +* `#20082 <https://github.com/numpy/numpy/pull/20082>`__: DOC: Ensure that we add documentation also as to the dict for... +* `#20106 <https://github.com/numpy/numpy/pull/20106>`__: BUG: core: result_type(0, np.timedelta64(4)) would seg. fault. diff --git a/doc/source/release/1.21.4-notes.rst b/doc/source/release/1.21.4-notes.rst new file mode 100644 index 000000000..e35d8c880 --- /dev/null +++ b/doc/source/release/1.21.4-notes.rst @@ -0,0 +1,46 @@ +.. currentmodule:: numpy + +========================== +NumPy 1.21.4 Release Notes +========================== + +NumPy 1.21.4 is a maintenance release that fixes a few bugs discovered +after 1.21.3. The most important is a fix to the NumPy header files +to make them work for both x86_64 and M1 hardware when included in the Mac +universal2 wheels. Previously, the header files only worked for M1 and this +caused problems for folks building x86_64 extensions. This problem was not seen +before Python 3.10 because there were thin wheels for x86_64 that had +precedence. This release also provides thin x86_64 Mac wheels for Python 3.10. + +The Python versions supported in this release are 3.7-3.10. If you want to +compile your own version using gcc-11, you will need to use gcc-11.2+ to avoid +problems. + +Contributors +============ + +A total of 7 people contributed to this release. People with a "+" by their +names contributed a patch for the first time. + +* Bas van Beek +* Charles Harris +* Isuru Fernando +* Matthew Brett +* Sayed Adel +* Sebastian Berg +* 傅立业(Chris Fu) + + +Pull requests merged +==================== + +A total of 9 pull requests were merged for this release. 
+ +* `#20278 <https://github.com/numpy/numpy/pull/20278>`__: BUG: Fix shadowed reference of ``dtype`` in type stub +* `#20293 <https://github.com/numpy/numpy/pull/20293>`__: BUG: Fix headers for universal2 builds +* `#20294 <https://github.com/numpy/numpy/pull/20294>`__: BUG: ``VOID_nonzero`` could sometimes mutate alignment flag +* `#20295 <https://github.com/numpy/numpy/pull/20295>`__: BUG: Do not use nonzero fastpath on unaligned arrays +* `#20296 <https://github.com/numpy/numpy/pull/20296>`__: BUG: Distutils patch to allow for 2 as a minor version (!) +* `#20297 <https://github.com/numpy/numpy/pull/20297>`__: BUG, SIMD: Fix 64-bit/8-bit integer division by a scalar +* `#20298 <https://github.com/numpy/numpy/pull/20298>`__: BUG, SIMD: Workaround broadcasting SIMD 64-bit integers on MSVC... +* `#20300 <https://github.com/numpy/numpy/pull/20300>`__: REL: Prepare for the NumPy 1.21.4 release. +* `#20302 <https://github.com/numpy/numpy/pull/20302>`__: TST: Fix a ``Arrayterator`` typing test failure diff --git a/doc/source/release/1.23.0-notes.rst b/doc/source/release/1.23.0-notes.rst new file mode 100644 index 000000000..330e7fd44 --- /dev/null +++ b/doc/source/release/1.23.0-notes.rst @@ -0,0 +1,45 @@ +.. currentmodule:: numpy + +========================== +NumPy 1.23.0 Release Notes +========================== + + +Highlights +========== + + +New functions +============= + + +Deprecations +============ + + +Future Changes +============== + + +Expired deprecations +==================== + + +Compatibility notes +=================== + + +C API changes +============= + + +New Features +============ + + +Improvements +============ + + +Changes +======= diff --git a/doc/source/user/absolute_beginners.rst b/doc/source/user/absolute_beginners.rst index bb570f622..27e9e1f63 100644 --- a/doc/source/user/absolute_beginners.rst +++ b/doc/source/user/absolute_beginners.rst @@ -391,7 +391,7 @@ this array to an array with three rows and two columns:: With ``np.reshape``, you can specify a few optional parameters:: - >>> numpy.reshape(a, newshape=(1, 6), order='C') + >>> np.reshape(a, newshape=(1, 6), order='C') array([[0, 1, 2, 3, 4, 5]]) ``a`` is the array to be reshaped. @@ -613,7 +613,7 @@ How to create an array from existing data ----- -You can easily use create a new array from a section of an existing array. +You can easily create a new array from a section of an existing array. Let's say you have this array: @@ -899,12 +899,18 @@ You can aggregate matrices the same way you aggregated vectors:: .. image:: images/np_matrix_aggregation.png You can aggregate all the values in a matrix and you can aggregate them across -columns or rows using the ``axis`` parameter:: +columns or rows using the ``axis`` parameter. To illustrate this point, let's +look at a slightly modified dataset:: + >>> data = np.array([[1, 2], [5, 3], [4, 6]]) + >>> data + array([[1, 2], + [5, 3], + [4, 6]]) >>> data.max(axis=0) array([5, 6]) >>> data.max(axis=1) - array([2, 4, 6]) + array([2, 5, 6]) .. image:: images/np_matrix_aggregation_row.png diff --git a/doc/source/user/basics.copies.rst b/doc/source/user/basics.copies.rst new file mode 100644 index 000000000..e8ba68bc0 --- /dev/null +++ b/doc/source/user/basics.copies.rst @@ -0,0 +1,154 @@ +.. _basics.copies-and-views: + +**************** +Copies and views +**************** + +When operating on NumPy arrays, it is possible to access the internal data +buffer directly using a :ref:`view <view>` without copying data around. 
This +ensures good performance but can also cause unwanted problems if the user is +not aware of how this works. Hence, it is important to know the difference +between these two terms and to know which operations return copies and +which return views. + +The NumPy array is a data structure consisting of two parts: +the :term:`contiguous` data buffer with the actual data elements and the +metadata that contains information about the data buffer. The metadata +includes data type, strides, and other important information that helps +manipulate the :class:`.ndarray` easily. See the :ref:`numpy-internals` +section for a detailed look. + +.. _view: + +View +==== + +It is possible to access the array differently by just changing certain +metadata like :term:`stride` and :term:`dtype` without changing the +data buffer. This creates a new way of looking at the data and these new +arrays are called views. The data buffer remains the same, so any changes made +to a view are reflected in the original array. A view can be forced through the +:meth:`.ndarray.view` method. + +Copy +==== + +When a new array is created by duplicating the data buffer as well as the +metadata, it is called a copy. Changes made to the copy +do not reflect on the original array. Making a copy is slower and +more memory-consuming, but sometimes necessary. A copy can be forced by using +:meth:`.ndarray.copy`. + +.. _indexing-operations: + +Indexing operations +=================== + +.. seealso:: :ref:`basics.indexing` + +Views are created when elements can be addressed with offsets and strides +in the original array. Hence, basic indexing always creates views. +For example:: + + >>> x = np.arange(10) + >>> x + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + >>> y = x[1:3] # creates a view + >>> y + array([1, 2]) + >>> x[1:3] = [10, 11] + >>> x + array([ 0, 10, 11, 3, 4, 5, 6, 7, 8, 9]) + >>> y + array([10, 11]) + +Here, ``y`` gets changed when ``x`` is changed because it is a view. + +:ref:`advanced-indexing`, on the other hand, always creates copies. +For example:: + + >>> x = np.arange(9).reshape(3, 3) + >>> x + array([[0, 1, 2], + [3, 4, 5], + [6, 7, 8]]) + >>> y = x[[1, 2]] + >>> y + array([[3, 4, 5], + [6, 7, 8]]) + >>> y.base is None + True + +Here, ``y`` is a copy, as signified by the :attr:`base <.ndarray.base>` +attribute. We can also confirm this by assigning new values to ``x[[1, 2]]`` +which in turn will not affect ``y`` at all:: + + >>> x[[1, 2]] = [[10, 11, 12], [13, 14, 15]] + >>> x + array([[ 0, 1, 2], + [10, 11, 12], + [13, 14, 15]]) + >>> y + array([[3, 4, 5], + [6, 7, 8]]) + +It must be noted here that during the assignment of ``x[[1, 2]]`` no view +or copy is created as the assignment happens in-place. + + +Other operations +================ + +The :func:`numpy.reshape` function creates a view where possible or a copy +otherwise. In most cases, the strides can be modified to reshape the +array with a view. However, in some cases where the array becomes +non-contiguous (perhaps after a :meth:`.ndarray.transpose` operation), +the reshaping cannot be done by modifying strides and requires a copy. +In these cases, assigning the new shape to the ``shape`` attribute of the +array raises an error instead of silently making a copy. For example:: + + >>> x = np.ones((2, 3)) + >>> y = x.T # makes the array non-contiguous + >>> y + array([[1., 1.], + [1., 1.], + [1., 1.]]) + >>> z = y.view() + >>> z.shape = 6 + Traceback (most recent call last): + ... + AttributeError: Incompatible shape for in-place modification. 
Use + `.reshape()` to make a copy with the desired shape. + +Taking the example of another operation, :func:`.ravel` returns a contiguous +flattened view of the array wherever possible. On the other hand, +:meth:`.ndarray.flatten` always returns a flattened copy of the array. +However, to guarantee a view in most cases, ``x.reshape(-1)`` may be preferable. + +How to tell if the array is a view or a copy +============================================ + +The :attr:`base <.ndarray.base>` attribute of the ndarray makes it easy +to tell if an array is a view or a copy. The base attribute of a view returns +the original array while it returns ``None`` for a copy. + + >>> x = np.arange(9) + >>> x + array([0, 1, 2, 3, 4, 5, 6, 7, 8]) + >>> y = x.reshape(3, 3) + >>> y + array([[0, 1, 2], + [3, 4, 5], + [6, 7, 8]]) + >>> y.base # .reshape() creates a view + array([0, 1, 2, 3, 4, 5, 6, 7, 8]) + >>> z = y[[2, 1]] + >>> z + array([[6, 7, 8], + [3, 4, 5]]) + >>> z.base is None # advanced indexing creates a copy + True + +Note that the ``base`` attribute should not be used to determine +if an ndarray object is *new*; only if it is a view or a copy +of another ndarray.
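When the ``base`` chain is inconvenient to follow, :func:`numpy.shares_memory` offers another way to check whether two arrays use the same data buffer; a minimal doctest-style sketch with illustrative values only::

    >>> x = np.arange(9)
    >>> v = x.reshape(3, 3)          # a view of x
    >>> c = x[[0, 1, 2]]             # advanced indexing creates a copy
    >>> np.shares_memory(x, v)
    True
    >>> np.shares_memory(x, c)
    False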
\ No newline at end of file diff --git a/doc/source/user/basics.indexing.rst b/doc/source/user/basics.indexing.rst index 264c3d721..e99682f02 100644 --- a/doc/source/user/basics.indexing.rst +++ b/doc/source/user/basics.indexing.rst @@ -28,6 +28,7 @@ Note that in Python, ``x[(exp1, exp2, ..., expN)]`` is equivalent to ``x[exp1, exp2, ..., expN]``; the latter is just syntactic sugar for the former. +.. _basic-indexing: Basic indexing -------------- @@ -88,6 +89,7 @@ that is subsequently indexed by 2. rapidly changing location in memory. This difference represents a great potential for confusion. +.. _slicing-and-striding: Slicing and striding ^^^^^^^^^^^^^^^^^^^^ @@ -226,6 +228,7 @@ concepts to remember include: .. index:: pair: ndarray; view +.. _dimensional-indexing-tools: Dimensional indexing tools ^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -470,6 +473,7 @@ such an array with an image with shape (ny, nx) with dtype=np.uint8 lookup table) will result in an array of shape (ny, nx, 3) where a triple of RGB values is associated with each pixel location. +.. _boolean-indexing: Boolean array indexing ^^^^^^^^^^^^^^^^^^^^^^ @@ -851,7 +855,7 @@ For this reason, it is possible to use the output from the :meth:`np.nonzero() <ndarray.nonzero>` function directly as an index since it always returns a tuple of index arrays. -Because the special treatment of tuples, they are not automatically +Because of the special treatment of tuples, they are not automatically converted to an array as a list would be. As an example: :: >>> z[[1, 1, 1, 1]] # produces a large array diff --git a/doc/source/user/basics.rst b/doc/source/user/basics.rst index bcd51d983..affb85db2 100644 --- a/doc/source/user/basics.rst +++ b/doc/source/user/basics.rst @@ -19,3 +19,4 @@ fundamental NumPy ideas and philosophy. basics.dispatch basics.subclassing basics.ufuncs + basics.copies diff --git a/doc/source/user/how-to-index.rst b/doc/source/user/how-to-index.rst new file mode 100644 index 000000000..41061d5f4 --- /dev/null +++ b/doc/source/user/how-to-index.rst @@ -0,0 +1,351 @@ +.. currentmodule:: numpy + +.. _how-to-index.rst: + +***************************************** +How to index :class:`ndarrays <.ndarray>` +***************************************** + +.. seealso:: :ref:`basics.indexing` + +This page tackles common examples. For an in-depth look into indexing, refer +to :ref:`basics.indexing`. + +Access specific/arbitrary rows and columns +========================================== + +Use :ref:`basic-indexing` features like :ref:`slicing-and-striding`, and +:ref:`dimensional-indexing-tools`. + + >>> a = np.arange(30).reshape(2, 3, 5) + >>> a + array([[[ 0, 1, 2, 3, 4], + [ 5, 6, 7, 8, 9], + [10, 11, 12, 13, 14]], + <BLANKLINE> + [[15, 16, 17, 18, 19], + [20, 21, 22, 23, 24], + [25, 26, 27, 28, 29]]]) + >>> a[0, 2, :] + array([10, 11, 12, 13, 14]) + >>> a[0, :, 3] + array([ 3, 8, 13]) + +Note that the output from indexing operations can have different shape from the +original object. To preserve the original dimensions after indexing, you can +use :func:`newaxis`. To use other such tools, refer to +:ref:`dimensional-indexing-tools`. + + >>> a[0, :, 3].shape + (3,) + >>> a[0, :, 3, np.newaxis].shape + (3, 1) + >>> a[0, :, 3, np.newaxis, np.newaxis].shape + (3, 1, 1) + +Variables can also be used to index:: + + >>> y = 0 + >>> a[y, :, y+3] + array([ 3, 8, 13]) + +Refer to :ref:`dealing-with-variable-indices` to see how to use +:term:`python:slice` and :py:data:`Ellipsis` in your index variables. 
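As a minimal sketch of that pattern (variable names are illustrative only), a :term:`python:slice` object or :py:data:`Ellipsis` can be stored in a variable and used like any other index::

    >>> a = np.arange(30).reshape(2, 3, 5)
    >>> rows = slice(1, 3)           # equivalent to writing 1:3
    >>> a[0, rows, 2]
    array([ 7, 12])
    >>> trailing = Ellipsis          # equivalent to writing ...
    >>> a[0, trailing].shape
    (3, 5)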
+ +Index columns +------------- + +To index columns, you have to index the last axis. Use +:ref:`dimensional-indexing-tools` to get the desired number of dimensions:: + + >>> a = np.arange(24).reshape(2, 3, 4) + >>> a + array([[[ 0, 1, 2, 3], + [ 4, 5, 6, 7], + [ 8, 9, 10, 11]], + <BLANKLINE> + [[12, 13, 14, 15], + [16, 17, 18, 19], + [20, 21, 22, 23]]]) + >>> a[..., 3] + array([[ 3, 7, 11], + [15, 19, 23]]) + +To index specific elements in each column, make use of :ref:`advanced-indexing` +as below:: + + >>> arr = np.arange(3*4).reshape(3, 4) + >>> arr + array([[ 0, 1, 2, 3], + [ 4, 5, 6, 7], + [ 8, 9, 10, 11]]) + >>> column_indices = [[1, 3], [0, 2], [2, 2]] + >>> np.arange(arr.shape[0]) + array([0, 1, 2]) + >>> row_indices = np.arange(arr.shape[0])[:, np.newaxis] + >>> row_indices + array([[0], + [1], + [2]]) + +Use the ``row_indices`` and ``column_indices`` for advanced +indexing:: + + >>> arr[row_indices, column_indices] + array([[ 1, 3], + [ 4, 6], + [10, 10]]) + +Index along a specific axis +--------------------------- + +Use :meth:`take`. See also :meth:`take_along_axis` and +:meth:`put_along_axis`. + + >>> a = np.arange(30).reshape(2, 3, 5) + >>> a + array([[[ 0, 1, 2, 3, 4], + [ 5, 6, 7, 8, 9], + [10, 11, 12, 13, 14]], + <BLANKLINE> + [[15, 16, 17, 18, 19], + [20, 21, 22, 23, 24], + [25, 26, 27, 28, 29]]]) + >>> np.take(a, [2, 3], axis=2) + array([[[ 2, 3], + [ 7, 8], + [12, 13]], + <BLANKLINE> + [[17, 18], + [22, 23], + [27, 28]]]) + >>> np.take(a, [2], axis=1) + array([[[10, 11, 12, 13, 14]], + <BLANKLINE> + [[25, 26, 27, 28, 29]]]) + +Create subsets of larger matrices +================================= + +Use :ref:`slicing-and-striding` to access chunks of a large array:: + + >>> a = np.arange(100).reshape(10, 10) + >>> a + array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9], + [10, 11, 12, 13, 14, 15, 16, 17, 18, 19], + [20, 21, 22, 23, 24, 25, 26, 27, 28, 29], + [30, 31, 32, 33, 34, 35, 36, 37, 38, 39], + [40, 41, 42, 43, 44, 45, 46, 47, 48, 49], + [50, 51, 52, 53, 54, 55, 56, 57, 58, 59], + [60, 61, 62, 63, 64, 65, 66, 67, 68, 69], + [70, 71, 72, 73, 74, 75, 76, 77, 78, 79], + [80, 81, 82, 83, 84, 85, 86, 87, 88, 89], + [90, 91, 92, 93, 94, 95, 96, 97, 98, 99]]) + >>> a[2:5, 2:5] + array([[22, 23, 24], + [32, 33, 34], + [42, 43, 44]]) + >>> a[2:5, 1:3] + array([[21, 22], + [31, 32], + [41, 42]]) + >>> a[:5, :5] + array([[ 0, 1, 2, 3, 4], + [10, 11, 12, 13, 14], + [20, 21, 22, 23, 24], + [30, 31, 32, 33, 34], + [40, 41, 42, 43, 44]]) + +The same thing can be done with advanced indexing in a slightly more complex +way. 
Remember that +:ref:`advanced indexing creates a copy <indexing-operations>`:: + + >>> a[np.arange(5)[:, None], np.arange(5)[None, :]] + array([[ 0, 1, 2, 3, 4], + [10, 11, 12, 13, 14], + [20, 21, 22, 23, 24], + [30, 31, 32, 33, 34], + [40, 41, 42, 43, 44]]) + +You can also use :meth:`mgrid` to generate indices:: + + >>> indices = np.mgrid[0:6:2] + >>> indices + array([0, 2, 4]) + >>> a[:, indices] + array([[ 0, 2, 4], + [10, 12, 14], + [20, 22, 24], + [30, 32, 34], + [40, 42, 44], + [50, 52, 54], + [60, 62, 64], + [70, 72, 74], + [80, 82, 84], + [90, 92, 94]]) + +Filter values +============= + +Non-zero elements +----------------- + +Use :meth:`nonzero` to get a tuple of array indices of non-zero elements +corresponding to every dimension:: + + >>> z = np.array([[1, 2, 3, 0], [0, 0, 5, 3], [4, 6, 0, 0]]) + >>> z + array([[1, 2, 3, 0], + [0, 0, 5, 3], + [4, 6, 0, 0]]) + >>> np.nonzero(z) + (array([0, 0, 0, 1, 1, 2, 2]), array([0, 1, 2, 2, 3, 0, 1])) + +Use :meth:`flatnonzero` to fetch indices of elements that are non-zero in +the flattened version of the ndarray:: + + >>> np.flatnonzero(z) + array([0, 1, 2, 6, 7, 8, 9]) + +Arbitrary conditions +-------------------- + +Use :meth:`where` to generate indices based on conditions and then +use :ref:`advanced-indexing`. + + >>> a = np.arange(30).reshape(2, 3, 5) + >>> indices = np.where(a % 2 == 0) + >>> indices + (array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]), + array([0, 0, 0, 1, 1, 2, 2, 2, 0, 0, 1, 1, 1, 2, 2]), + array([0, 2, 4, 1, 3, 0, 2, 4, 1, 3, 0, 2, 4, 1, 3])) + >>> a[indices] + array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28]) + +Or, use :ref:`boolean-indexing`:: + + >>> a > 14 + array([[[False, False, False, False, False], + [False, False, False, False, False], + [False, False, False, False, False]], + <BLANKLINE> + [[ True, True, True, True, True], + [ True, True, True, True, True], + [ True, True, True, True, True]]]) + >>> a[a > 14] + array([15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]) + +Replace values after filtering +------------------------------ + +Use assignment with filtering to replace desired values:: + + >>> p = np.arange(-10, 10).reshape(2, 2, 5) + >>> p + array([[[-10, -9, -8, -7, -6], + [ -5, -4, -3, -2, -1]], + <BLANKLINE> + [[ 0, 1, 2, 3, 4], + [ 5, 6, 7, 8, 9]]]) + >>> q = p < 0 + >>> q + array([[[ True, True, True, True, True], + [ True, True, True, True, True]], + <BLANKLINE> + [[False, False, False, False, False], + [False, False, False, False, False]]]) + >>> p[q] = 0 + >>> p + array([[[0, 0, 0, 0, 0], + [0, 0, 0, 0, 0]], + <BLANKLINE> + [[0, 1, 2, 3, 4], + [5, 6, 7, 8, 9]]]) + +Fetch indices of max/min values +=============================== + +Use :meth:`argmax` and :meth:`argmin`:: + + >>> a = np.arange(30).reshape(2, 3, 5) + >>> np.argmax(a) + 29 + >>> np.argmin(a) + 0 + +Use the ``axis`` keyword to get the indices of maximum and minimum +values along a specific axis:: + + >>> np.argmax(a, axis=0) + array([[1, 1, 1, 1, 1], + [1, 1, 1, 1, 1], + [1, 1, 1, 1, 1]]) + >>> np.argmax(a, axis=1) + array([[2, 2, 2, 2, 2], + [2, 2, 2, 2, 2]]) + >>> np.argmax(a, axis=2) + array([[4, 4, 4], + [4, 4, 4]]) + <BLANKLINE> + >>> np.argmin(a, axis=1) + array([[0, 0, 0, 0, 0], + [0, 0, 0, 0, 0]]) + >>> np.argmin(a, axis=2) + array([[0, 0, 0], + [0, 0, 0]]) + +Set ``keepdims`` to ``True`` to keep the axes which are reduced in the +result as dimensions with size one:: + + >>> np.argmin(a, axis=2, keepdims=True) + array([[[0], + [0], + [0]], + <BLANKLINE> + [[0], + [0], + [0]]]) + >>> 
np.argmax(a, axis=1, keepdims=True) + array([[[2, 2, 2, 2, 2]], + <BLANKLINE> + [[2, 2, 2, 2, 2]]]) + +Index the same ndarray multiple times efficiently +================================================= + +It must be kept in mind that basic indexing produces :term:`views <view>` +and advanced indexing produces :term:`copies <copy>`, which are +computationally less efficient. Hence, you should take care to use basic +indexing wherever possible instead of advanced indexing. + +Further reading +=============== + +Nicolas Rougier's `100 NumPy exercises <https://github.com/rougier/numpy-100>`_ +provide a good insight into how indexing is combined with other operations. +Exercises `6`_, `8`_, `10`_, `15`_, `16`_, `19`_, `20`_, `45`_, `59`_, +`64`_, `65`_, `70`_, `71`_, `72`_, `76`_, `80`_, `81`_, `84`_, `87`_, `90`_, +`93`_, `94`_ are specially focused on indexing. + +.. _6: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#6-create-a-null-vector-of-size-10-but-the-fifth-value-which-is-1- +.. _8: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#8-reverse-a-vector-first-element-becomes-last- +.. _10: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#10-find-indices-of-non-zero-elements-from-120040- +.. _15: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#15-create-a-2d-array-with-1-on-the-border-and-0-inside- +.. _16: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#16-how-to-add-a-border-filled-with-0s-around-an-existing-array- +.. _19: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#19-create-a-8x8-matrix-and-fill-it-with-a-checkerboard-pattern- +.. _20: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#20-consider-a-678-shape-array-what-is-the-index-xyz-of-the-100th-element- +.. _45: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#45-create-random-vector-of-size-10-and-replace-the-maximum-value-by-0- +.. _59: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#59-how-to-sort-an-array-by-the-nth-column- +.. _64: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#64-consider-a-given-vector-how-to-add-1-to-each-element-indexed-by-a-second-vector-be-careful-with-repeated-indices- +.. _65: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#65-how-to-accumulate-elements-of-a-vector-x-to-an-array-f-based-on-an-index-list-i- +.. _70: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#70-consider-the-vector-1-2-3-4-5-how-to-build-a-new-vector-with-3-consecutive-zeros-interleaved-between-each-value- +.. _71: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#71-consider-an-array-of-dimension-553-how-to-mulitply-it-by-an-array-with-dimensions-55- +.. _72: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#72-how-to-swap-two-rows-of-an-array- +.. _76: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#76-consider-a-one-dimensional-array-z-build-a-two-dimensional-array-whose-first-row-is-z0z1z2-and-each-subsequent-row-is--shifted-by-1-last-row-should-be-z-3z-2z-1- +.. 
_80: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#80-consider-an-arbitrary-array-write-a-function-that-extract-a-subpart-with-a-fixed-shape-and-centered-on-a-given-element-pad-with-a-fill-value-when-necessary- +.. _81: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#81-consider-an-array-z--1234567891011121314-how-to-generate-an-array-r--1234-2345-3456--11121314- +.. _84: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#84-extract-all-the-contiguous-3x3-blocks-from-a-random-10x10-matrix- +.. _87: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#87-consider-a-16x16-array-how-to-get-the-block-sum-block-size-is-4x4- +.. _90: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#90-given-an-arbitrary-number-of-vectors-build-the-cartesian-product-every-combinations-of-every-item- +.. _93: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#93-consider-two-arrays-a-and-b-of-shape-83-and-22-how-to-find-rows-of-a-that-contain-elements-of-each-row-of-b-regardless-of-the-order-of-the-elements-in-b- +.. _94: https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#94-considering-a-10x3-matrix-extract-rows-with-unequal-values-eg-223-
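To make the earlier note about indexing the same ndarray multiple times concrete, a minimal sketch (illustrative values only) contrasting a reusable basic-indexing view with an advanced-indexing copy::

    >>> a = np.arange(12).reshape(3, 4)
    >>> view = a[:, 1:3]         # basic indexing: a view, no data copied
    >>> copy = a[:, [1, 2]]      # advanced indexing: a separate copy of the data
    >>> a[0, 1] = 99
    >>> view[0, 0]               # the view sees the update
    99
    >>> copy[0, 0]               # the copy does not
    1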
\ No newline at end of file diff --git a/doc/source/user/howtos_index.rst b/doc/source/user/howtos_index.rst index 89a6f54e7..2d66d0638 100644 --- a/doc/source/user/howtos_index.rst +++ b/doc/source/user/howtos_index.rst @@ -13,3 +13,4 @@ the package, see the :ref:`API reference <reference>`. how-to-how-to how-to-io + how-to-index diff --git a/doc/source/user/quickstart.rst b/doc/source/user/quickstart.rst index dd5773878..a9cfeca31 100644 --- a/doc/source/user/quickstart.rst +++ b/doc/source/user/quickstart.rst @@ -45,10 +45,11 @@ NumPy's main object is the homogeneous multidimensional array. It is a table of elements (usually numbers), all of the same type, indexed by a tuple of non-negative integers. In NumPy dimensions are called *axes*. -For example, the coordinates of a point in 3D space ``[1, 2, 1]`` has -one axis. That axis has 3 elements in it, so we say it has a length -of 3. In the example pictured below, the array has 2 axes. The first -axis has a length of 2, the second axis has a length of 3. +For example, the array for the coordinates of a point in 3D space, +``[1, 2, 1]``, has one axis. That axis has 3 elements in it, so we say +it has a length of 3. In the example pictured below, the array has 2 +axes. The first axis has a length of 2, the second axis has a length of +3. :: diff --git a/doc/source/user/whatisnumpy.rst b/doc/source/user/whatisnumpy.rst index 154f91c84..e152a4ae2 100644 --- a/doc/source/user/whatisnumpy.rst +++ b/doc/source/user/whatisnumpy.rst @@ -125,7 +125,7 @@ same shape, or a scalar and an array, or even two arrays with different shapes, provided that the smaller array is "expandable" to the shape of the larger in such a way that the resulting broadcast is unambiguous. For detailed "rules" of broadcasting see -`basics.broadcasting`. +:ref:`Broadcasting <basics.broadcasting>`. Who Else Uses NumPy? --------------------
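To illustrate the broadcasting behaviour referred to in the ``whatisnumpy`` excerpt above, a minimal doctest-style sketch with illustrative values only::

    >>> np.arange(3) * 2.0           # a scalar broadcasts against any shape
    array([0., 2., 4.])
    >>> a = np.ones((3, 4))
    >>> b = np.arange(4)
    >>> (a + b).shape                # b is stretched across the rows of a
    (3, 4)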