author    | RedRuM <44142765+zoj613@users.noreply.github.com> | 2019-11-03 19:53:08 +0200
committer | RedRuM <44142765+zoj613@users.noreply.github.com> | 2019-11-03 19:53:08 +0200
commit    | 87840dc2f09ebeed12c3b9fef68b94dc04f4d16f (patch)
tree      | f746eeaa31cb757de6ee7cb5253c6ae4c0b4218c /doc
parent    | 0117379f1c751296c914ffe9547a84380219b588 (diff)
parent    | 2be03c8d25b14b654064e953feac7d210e6bd44d (diff)
download  | numpy-87840dc2f09ebeed12c3b9fef68b94dc04f4d16f.tar.gz
merge latest changes on master branch
Diffstat (limited to 'doc')
163 files changed, 3249 insertions, 1068 deletions
diff --git a/doc/DISTUTILS.rst.txt b/doc/DISTUTILS.rst.txt index eadde63f8..bcef82500 100644 --- a/doc/DISTUTILS.rst.txt +++ b/doc/DISTUTILS.rst.txt @@ -243,7 +243,7 @@ in writing setup scripts: after processing all source generators, no extension module will be built. This is the recommended way to conditionally define extension modules. Source generator functions are called by the - ``build_src`` command of ``numpy.distutils``. + ``build_src`` sub-command of ``numpy.distutils``. For example, here is a typical source generator function:: diff --git a/doc/HOWTO_RELEASE.rst.txt b/doc/HOWTO_RELEASE.rst.txt index d68763fe6..4b485c8b9 100644 --- a/doc/HOWTO_RELEASE.rst.txt +++ b/doc/HOWTO_RELEASE.rst.txt @@ -197,17 +197,6 @@ best to read the pavement.py script. .. note:: The following steps are repeated for the beta(s), release candidates(s) and the final release. -Check that docs can be built ----------------------------- -Do:: - - cd doc/ - make dist - -to check that the documentation is in a buildable state. See -doc/HOWTO_BUILD_DOCS.rst.txt for more details and for how to update -https://docs.scipy.org. - Check deprecations ------------------ Before the release branch is made, it should be checked that all deprecated @@ -389,14 +378,24 @@ Build the changelog and notes for upload with:: paver write_release_and_log -The tar-files and binary releases for distribution should be uploaded to SourceForge, -together with the Release Notes and the Changelog. Uploading can be done -through a web interface or, more efficiently, through scp/sftp/rsync as -described in the SourceForge -`upload guide <https://sourceforge.net/apps/trac/sourceforge/wiki/Release%20files%20for%20download>`_ (dead link). -For example:: +Build and archive documentation +------------------------------- +Do:: + + cd doc/ + make dist + +to check that the documentation is in a buildable state. Then, after tagging, +create an archive of the documentation in the numpy/doc repo:: - scp <filename> <username>,numpy@frs.sourceforge.net:/home/frs/project/n/nu/numpy/NumPy/<releasedir>/ + # This checks out github.com/numpy/doc and adds (``git add``) the + # documentation to the checked out repo. + make merge-doc + # Now edit the ``index.html`` file in the repo to reflect the new content, + # and commit the changes + git -C dist/merge commit -a "Add documentation for <version>" + # Push to numpy/doc repo + git -C push Update PyPI ----------- @@ -449,28 +448,6 @@ you released you can push the tag and release commit up to github:: where ``upstream`` points to the main https://github.com/numpy/numpy.git repository. -Update docs.scipy.org ---------------------- - -All documentation for a release can be updated on https://docs.scipy.org/ with: - - make dist - make upload USERNAME=<yourname> RELEASE=1.11.0 - -Note that ``<username>`` must have SSH credentials on the server. If you don't -have those, ask someone who does (the list currently includes @rgommers, -@juliantaylor and @pv). - -Also rebuild and upload ``docs.scipy.org`` front page, if the release -series is a new one. The front page sources have their own repo: -https://github.com/scipy/docs.scipy.org. Do the following: - -- Update ``index.rst`` for the new version. -- ``make dist`` -- Check that the built documentation is OK. 
-- ``touch output-is-fine`` -- ``make upload USERNAME=<username> RELEASE=1.x.y`` - Update scipy.org ---------------- diff --git a/doc/Makefile b/doc/Makefile index 00393abc6..3c32cb811 100644 --- a/doc/Makefile +++ b/doc/Makefile @@ -14,6 +14,10 @@ PYTHON = python$(PYVER) SPHINXOPTS ?= SPHINXBUILD ?= LANG=C sphinx-build PAPER ?= +# For merging a documentation archive into a git checkout of numpy/doc +# Turn a tag like v1.18.0 into 1.18 +# Use sed -n -e 's/patttern/match/p' to return a blank value if no match +TAG ?= $(shell git describe --tag | sed -n -e's,v\([1-9]\.[0-9]*\)\.[0-9].*,\1,p') FILES= @@ -24,7 +28,8 @@ ALLSPHINXOPTS = -WT --keep-going -d build/doctrees $(PAPEROPT_$(PAPER)) \ $(SPHINXOPTS) source .PHONY: help clean html web pickle htmlhelp latex changes linkcheck \ - dist dist-build gitwash-update version-check html-build latex-build + dist dist-build gitwash-update version-check html-build latex-build \ + merge-doc #------------------------------------------------------------------------------ @@ -40,6 +45,7 @@ help: @echo " dist PYVER=... to make a distribution-ready tree" @echo " gitwash-update GITWASH=path/to/gitwash update gitwash developer docs" @echo " upload USERNAME=... RELEASE=... to upload built docs to docs.scipy.org" + @echo " merge-doc TAG=... to clone numpy/doc and archive documentation into it" clean: -rm -rf build/* @@ -92,7 +98,9 @@ else endif -dist: +dist: build/dist.tar.gz + +build/dist.tar.gz: make $(DIST_VARS) real-dist real-dist: dist-build html-build html-scipyorg @@ -113,7 +121,7 @@ dist-build: install -d $(subst :, ,$(INSTALL_PPH)) $(PYTHON) `which easy_install` --prefix=$(INSTALL_DIR) ../dist/*.egg -upload: +upload: build/dist.tar.gz # SSH must be correctly configured for this to work. # Assumes that ``make dist`` was already run # Example usage: ``make upload USERNAME=rgommers RELEASE=1.10.1`` @@ -130,6 +138,32 @@ upload: ssh $(USERNAME)@docs.scipy.org rm $(UPLOAD_DIR)/dist.tar.gz ssh $(USERNAME)@docs.scipy.org ln -snf numpy-$(RELEASE) /srv/docs_scipy_org/doc/numpy + +merge-doc: build/dist.tar.gz +ifeq "$(TAG)" "" + echo tag "$(TAG)" not of the form 1.18; + exit 1; +endif + @# Only clone if the directory does not exist + @if ! test -d build/merge; then \ + git clone https://github.com/numpy/doc build/merge; \ + fi; + @# Remove any old content and copy in the new, add it to git + -rm -rf build/merge/$(TAG)/* + -mkdir -p build/merge/$(TAG) + @# -C changes working directory + tar -C build/merge/$(TAG) -xf build/dist.tar.gz + git -C build/merge add $(TAG) + @# For now, the user must do this. If it is onerous, automate it and change + @# the instructions in doc/HOWTO_RELEASE.rst.txt + @echo " " + @echo New documentation archive added to ./build/merge. + @echo Now add/modify the appropiate section after + @echo " <!-- insert here -->" + @echo in build/merge/index.html, + @echo then \"git commit\", \"git push\" + + #------------------------------------------------------------------------------ # Basic Sphinx generation rules for different formats #------------------------------------------------------------------------------ diff --git a/doc/Py3K.rst.txt b/doc/Py3K.rst.txt index f78b9e5db..b23536ca5 100644 --- a/doc/Py3K.rst.txt +++ b/doc/Py3K.rst.txt @@ -812,20 +812,20 @@ Types with tp_as_sequence defined PySequenceMethods in py3k are binary compatible with py2k, but some of the slots have gone away. I suspect this means some functions need redefining so -the semantics of the slots needs to be checked. 
- -PySequenceMethods foo_sequence_methods = { - (lenfunc)0, /* sq_length */ - (binaryfunc)0, /* sq_concat */ - (ssizeargfunc)0, /* sq_repeat */ - (ssizeargfunc)0, /* sq_item */ - (void *)0, /* nee sq_slice */ - (ssizeobjargproc)0, /* sq_ass_item */ - (void *)0, /* nee sq_ass_slice */ - (objobjproc)0, /* sq_contains */ - (binaryfunc)0, /* sq_inplace_concat */ - (ssizeargfunc)0 /* sq_inplace_repeat */ -}; +the semantics of the slots needs to be checked:: + + PySequenceMethods foo_sequence_methods = { + (lenfunc)0, /* sq_length */ + (binaryfunc)0, /* sq_concat */ + (ssizeargfunc)0, /* sq_repeat */ + (ssizeargfunc)0, /* sq_item */ + (void *)0, /* nee sq_slice */ + (ssizeobjargproc)0, /* sq_ass_item */ + (void *)0, /* nee sq_ass_slice */ + (objobjproc)0, /* sq_contains */ + (binaryfunc)0, /* sq_inplace_concat */ + (ssizeargfunc)0 /* sq_inplace_repeat */ + }; PyMappingMethods @@ -840,13 +840,13 @@ Types with tp_as_mapping defined * multiarray/arrayobject.c PyMappingMethods in py3k look to be the same as in py2k. The semantics -of the slots needs to be checked. +of the slots needs to be checked:: -PyMappingMethods foo_mapping_methods = { - (lenfunc)0, /* mp_length */ - (binaryfunc)0, /* mp_subscript */ - (objobjargproc)0 /* mp_ass_subscript */ -}; + PyMappingMethods foo_mapping_methods = { + (lenfunc)0, /* mp_length */ + (binaryfunc)0, /* mp_subscript */ + (objobjargproc)0 /* mp_ass_subscript */ + }; PyFile diff --git a/doc/RELEASE_WALKTHROUGH.rst.txt b/doc/RELEASE_WALKTHROUGH.rst.txt index 445790709..0a761e350 100644 --- a/doc/RELEASE_WALKTHROUGH.rst.txt +++ b/doc/RELEASE_WALKTHROUGH.rst.txt @@ -41,7 +41,7 @@ Finish the Release Note .. note: This has changed now that we use ``towncrier``. See the instructions for - creating the release note in ``changelog/README.rst``. + creating the release note in ``doc/release/upcoming_changes/README.rst``. Fill out the release note ``doc/release/1.14.5-notes.rst`` calling out significant changes. @@ -56,7 +56,7 @@ repository:: $ git checkout maintenance/1.14.x $ git pull upstream maintenance/1.14.x $ git submodule update - $ git clean -xdf > /dev/null + $ git clean -xdfq Edit pavement.py and setup.py as detailed in HOWTO_RELEASE:: @@ -83,7 +83,7 @@ Paver is used to build the source releases. It will create the ``release`` and ``release/installers`` directories and put the ``*.zip`` and ``*.tar.gz`` source releases in the latter. :: - $ cython --version # check that you have the correct cython version + $ python3 -m cython --version # check for correct cython version $ paver sdist # sdist will do a git clean -xdf, so we omit that @@ -232,28 +232,39 @@ add files, using an editable text window and as binary uploads. - Hit the ``{Publish,Update} release`` button at the bottom. -Upload documents to docs.scipy.org ----------------------------------- +Upload documents to numpy.org +----------------------------- This step is only needed for final releases and can be skipped for -pre-releases. You will also need upload permission for the document server, if -you do not have permission ping Pauli Virtanen or Ralf Gommers to generate and -upload the documentation. Otherwise:: +pre-releases. 
``make merge-doc`` clones the ``numpy/doc`` repo into +``doc/build/merge`` and updates it with the new documentation:: $ pushd doc $ make dist - $ make upload USERNAME=<yourname> RELEASE=v1.14.5 + $ make merge-doc $ popd -If the release series is a new one, you will need to rebuild and upload the -``docs.scipy.org`` front page:: +If the release series is a new one, you will need to add a new section to the +``doc/build/merge/index.html`` front page just after the "insert here" comment:: - $ cd ../docs.scipy.org - $ gvim index.rst + $ gvim doc/build/merge/index.html +/'insert here' -Note: there is discussion about moving the docs to github. This section will be -updated when/if that happens. +Otherwise, only the ``zip`` and ``pdf`` links should be updated with the +new tag name:: + $ gvim doc/build/merge/index.html +/'tag v1.14' + +You can "test run" the new documentation in a browser to make sure the links +work:: + + $ firefox doc/build/merge/index.html + +Once everything seems satisfactory, commit and upload the changes:: + + $ pushd doc/build/merge + $ git commit -am"Add documentation for v1.14.5" + $ git push + $ popd Announce the release on scipy.org --------------------------------- diff --git a/doc/changelog/1.16.5-changelog.rst b/doc/changelog/1.16.5-changelog.rst new file mode 100644 index 000000000..19374058d --- /dev/null +++ b/doc/changelog/1.16.5-changelog.rst @@ -0,0 +1,54 @@ + +Contributors +============ + +A total of 18 people contributed to this release. People with a "+" by their +names contributed a patch for the first time. + +* Alexander Shadchin +* Allan Haldane +* Bruce Merry + +* Charles Harris +* Colin Snyder + +* Dan Allan + +* Emile + +* Eric Wieser +* Grey Baker + +* Maksim Shabunin + +* Marten van Kerkwijk +* Matti Picus +* Peter Andreas Entschev + +* Ralf Gommers +* Richard Harris + +* Sebastian Berg +* Sergei Lebedev + +* Stephan Hoyer + +Pull requests merged +==================== + +A total of 23 pull requests were merged for this release. + +* `#13742 <https://github.com/numpy/numpy/pull/13742>`__: ENH: Add project URLs to setup.py +* `#13823 <https://github.com/numpy/numpy/pull/13823>`__: TEST, ENH: fix tests and ctypes code for PyPy +* `#13845 <https://github.com/numpy/numpy/pull/13845>`__: BUG: use npy_intp instead of int for indexing array +* `#13867 <https://github.com/numpy/numpy/pull/13867>`__: TST: Ignore DeprecationWarning during nose imports +* `#13905 <https://github.com/numpy/numpy/pull/13905>`__: BUG: Fix use-after-free in boolean indexing +* `#13933 <https://github.com/numpy/numpy/pull/13933>`__: MAINT/BUG/DOC: Fix errors in _add_newdocs +* `#13984 <https://github.com/numpy/numpy/pull/13984>`__: BUG: fix byte order reversal for datetime64[ns] +* `#13994 <https://github.com/numpy/numpy/pull/13994>`__: MAINT,BUG: Use nbytes to also catch empty descr during allocation +* `#14042 <https://github.com/numpy/numpy/pull/14042>`__: BUG: np.array cleared errors occured in PyMemoryView_FromObject +* `#14043 <https://github.com/numpy/numpy/pull/14043>`__: BUG: Fixes for Undefined Behavior Sanitizer (UBSan) errors. +* `#14044 <https://github.com/numpy/numpy/pull/14044>`__: BUG: ensure that casting to/from structured is properly checked. +* `#14045 <https://github.com/numpy/numpy/pull/14045>`__: MAINT: fix histogram*d dispatchers +* `#14046 <https://github.com/numpy/numpy/pull/14046>`__: BUG: further fixup to histogram2d dispatcher. 
+* `#14052 <https://github.com/numpy/numpy/pull/14052>`__: BUG: Replace contextlib.suppress for Python 2.7 +* `#14056 <https://github.com/numpy/numpy/pull/14056>`__: BUG: fix compilation of 3rd party modules with Py_LIMITED_API... +* `#14057 <https://github.com/numpy/numpy/pull/14057>`__: BUG: Fix memory leak in dtype from dict contructor +* `#14058 <https://github.com/numpy/numpy/pull/14058>`__: DOC: Document array_function at a higher level. +* `#14084 <https://github.com/numpy/numpy/pull/14084>`__: BUG, DOC: add new recfunctions to `__all__` +* `#14162 <https://github.com/numpy/numpy/pull/14162>`__: BUG: Remove stray print that causes a SystemError on python 3.7 +* `#14297 <https://github.com/numpy/numpy/pull/14297>`__: TST: Pin pytest version to 5.0.1. +* `#14322 <https://github.com/numpy/numpy/pull/14322>`__: ENH: Enable huge pages in all Linux builds +* `#14346 <https://github.com/numpy/numpy/pull/14346>`__: BUG: fix behavior of structured_to_unstructured on non-trivial... +* `#14382 <https://github.com/numpy/numpy/pull/14382>`__: REL: Prepare for the NumPy 1.16.5 release. diff --git a/doc/changelog/1.17.1-changelog.rst b/doc/changelog/1.17.1-changelog.rst new file mode 100644 index 000000000..c7c8b6c8e --- /dev/null +++ b/doc/changelog/1.17.1-changelog.rst @@ -0,0 +1,55 @@ + +Contributors +============ + +A total of 17 people contributed to this release. People with a "+" by their +names contributed a patch for the first time. + +* Alexander Jung + +* Allan Haldane +* Charles Harris +* Eric Wieser +* Giuseppe Cuccu + +* Hiroyuki V. Yamazaki +* Jérémie du Boisberranger +* Kmol Yuan + +* Matti Picus +* Max Bolingbroke + +* Maxwell Aladago + +* Oleksandr Pavlyk +* Peter Andreas Entschev +* Sergei Lebedev +* Seth Troisi + +* Vladimir Pershin + +* Warren Weckesser + +Pull requests merged +==================== + +A total of 24 pull requests were merged for this release. + +* `#14156 <https://github.com/numpy/numpy/pull/14156>`__: TST: Allow fuss in testing strided/non-strided exp/log loops +* `#14157 <https://github.com/numpy/numpy/pull/14157>`__: BUG: avx2_scalef_ps must be static +* `#14158 <https://github.com/numpy/numpy/pull/14158>`__: BUG: Remove stray print that causes a SystemError on python 3.7. +* `#14159 <https://github.com/numpy/numpy/pull/14159>`__: BUG: Fix DeprecationWarning in python 3.8. +* `#14160 <https://github.com/numpy/numpy/pull/14160>`__: BLD: Add missing gcd/lcm definitions to npy_math.h +* `#14161 <https://github.com/numpy/numpy/pull/14161>`__: DOC, BUILD: cleanups and fix (again) 'build dist' +* `#14166 <https://github.com/numpy/numpy/pull/14166>`__: TST: Add 3.8-dev to travisCI testing. +* `#14194 <https://github.com/numpy/numpy/pull/14194>`__: BUG: Remove the broken clip wrapper (Backport) +* `#14198 <https://github.com/numpy/numpy/pull/14198>`__: DOC: Fix hermitian argument docs in svd. +* `#14199 <https://github.com/numpy/numpy/pull/14199>`__: MAINT: Workaround for Intel compiler bug leading to failing test +* `#14200 <https://github.com/numpy/numpy/pull/14200>`__: TST: Clean up of test_pocketfft.py +* `#14201 <https://github.com/numpy/numpy/pull/14201>`__: BUG: Make advanced indexing result on read-only subclass writeable... 
+* `#14236 <https://github.com/numpy/numpy/pull/14236>`__: BUG: Fixed default BitGenerator name +* `#14237 <https://github.com/numpy/numpy/pull/14237>`__: ENH: add c-imported modules for freeze analysis in np.random +* `#14296 <https://github.com/numpy/numpy/pull/14296>`__: TST: Pin pytest version to 5.0.1 +* `#14301 <https://github.com/numpy/numpy/pull/14301>`__: BUG: Fix leak in the f2py-generated module init and `PyMem_Del`... +* `#14302 <https://github.com/numpy/numpy/pull/14302>`__: BUG: Fix formatting error in exception message +* `#14307 <https://github.com/numpy/numpy/pull/14307>`__: MAINT: random: Match type of SeedSequence.pool_size to DEFAULT_POOL_SIZE. +* `#14308 <https://github.com/numpy/numpy/pull/14308>`__: BUG: Fix numpy.random bug in platform detection +* `#14309 <https://github.com/numpy/numpy/pull/14309>`__: ENH: Enable huge pages in all Linux builds +* `#14330 <https://github.com/numpy/numpy/pull/14330>`__: BUG: Fix segfault in `random.permutation(x)` when x is a string. +* `#14338 <https://github.com/numpy/numpy/pull/14338>`__: BUG: don't fail when lexsorting some empty arrays (#14228) +* `#14339 <https://github.com/numpy/numpy/pull/14339>`__: BUG: Fix misuse of .names and .fields in various places (backport... +* `#14345 <https://github.com/numpy/numpy/pull/14345>`__: BUG: fix behavior of structured_to_unstructured on non-trivial... +* `#14350 <https://github.com/numpy/numpy/pull/14350>`__: REL: Prepare 1.17.1 release diff --git a/doc/changelog/1.17.2-changelog.rst b/doc/changelog/1.17.2-changelog.rst new file mode 100644 index 000000000..144f40038 --- /dev/null +++ b/doc/changelog/1.17.2-changelog.rst @@ -0,0 +1,28 @@ + +Contributors +============ + +A total of 7 people contributed to this release. People with a "+" by their +names contributed a patch for the first time. + +* CakeWithSteak + +* Charles Harris +* Dan Allan +* Hameer Abbasi +* Lars Grueter +* Matti Picus +* Sebastian Berg + +Pull requests merged +==================== + +A total of 8 pull requests were merged for this release. + +* `#14418 <https://github.com/numpy/numpy/pull/14418>`__: BUG: Fix aradixsort indirect indexing. +* `#14420 <https://github.com/numpy/numpy/pull/14420>`__: DOC: Fix a minor typo in dispatch documentation. +* `#14421 <https://github.com/numpy/numpy/pull/14421>`__: BUG: test, fix regression in converting to ctypes +* `#14430 <https://github.com/numpy/numpy/pull/14430>`__: BUG: Do not show Override module in private error classes. +* `#14432 <https://github.com/numpy/numpy/pull/14432>`__: BUG: Fixed maximum relative error reporting in assert_allclose. +* `#14433 <https://github.com/numpy/numpy/pull/14433>`__: BUG: Fix uint-overflow if padding with linear_ramp and negative... +* `#14436 <https://github.com/numpy/numpy/pull/14436>`__: BUG: Update 1.17.x with 1.18.0-dev pocketfft.py. +* `#14446 <https://github.com/numpy/numpy/pull/14446>`__: REL: Prepare for NumPy 1.17.2 release. diff --git a/doc/changelog/1.17.3-changelog.rst b/doc/changelog/1.17.3-changelog.rst new file mode 100644 index 000000000..f911c8465 --- /dev/null +++ b/doc/changelog/1.17.3-changelog.rst @@ -0,0 +1,32 @@ + +Contributors +============ + +A total of 7 people contributed to this release. People with a "+" by their +names contributed a patch for the first time. + +* Allan Haldane +* Charles Harris +* Kevin Sheppard +* Matti Picus +* Ralf Gommers +* Sebastian Berg +* Warren Weckesser + +Pull requests merged +==================== + +A total of 12 pull requests were merged for this release. 
+ +* `#14456 <https://github.com/numpy/numpy/pull/14456>`__: MAINT: clean up pocketfft modules inside numpy.fft namespace. +* `#14463 <https://github.com/numpy/numpy/pull/14463>`__: BUG: random.hypergeometic assumes npy_long is npy_int64, hung... +* `#14502 <https://github.com/numpy/numpy/pull/14502>`__: BUG: random: Revert gh-14458 and refix gh-14557. +* `#14504 <https://github.com/numpy/numpy/pull/14504>`__: BUG: add a specialized loop for boolean matmul. +* `#14506 <https://github.com/numpy/numpy/pull/14506>`__: MAINT: Update pytest version for Python 3.8 +* `#14512 <https://github.com/numpy/numpy/pull/14512>`__: DOC: random: fix doc linking, was referencing private submodules. +* `#14513 <https://github.com/numpy/numpy/pull/14513>`__: BUG,MAINT: Some fixes and minor cleanup based on clang analysis +* `#14515 <https://github.com/numpy/numpy/pull/14515>`__: BUG: Fix randint when range is 2**32 +* `#14519 <https://github.com/numpy/numpy/pull/14519>`__: MAINT: remove the entropy c-extension module +* `#14563 <https://github.com/numpy/numpy/pull/14563>`__: DOC: remove note about Pocketfft license file (non-existing here). +* `#14578 <https://github.com/numpy/numpy/pull/14578>`__: BUG: random: Create a legacy implementation of random.binomial. +* `#14687 <https://github.com/numpy/numpy/pull/14687>`__: BUG: properly define PyArray_DescrCheck diff --git a/doc/neps/index.rst.tmpl b/doc/neps/index.rst.tmpl index 0ad8e0f80..4c5b7766f 100644 --- a/doc/neps/index.rst.tmpl +++ b/doc/neps/index.rst.tmpl @@ -23,7 +23,7 @@ Meta-NEPs (NEPs about NEPs or Processes) .. toctree:: :maxdepth: 1 -{% for nep, tags in neps.items() if tags['Type'] == 'Process' %} +{% for nep, tags in neps.items() if tags['Status'] == 'Active' %} {{ tags['Title'] }} <{{ tags['Filename'] }}> {% endfor %} diff --git a/doc/neps/nep-0000.rst b/doc/neps/nep-0000.rst index 89ba177cb..0a2dbdefb 100644 --- a/doc/neps/nep-0000.rst +++ b/doc/neps/nep-0000.rst @@ -75,9 +75,11 @@ request`_ to the ``doc/neps`` directory with the name ``nep-<n>.rst`` where ``<n>`` is an appropriately assigned four-digit number (e.g., ``nep-0000.rst``). The draft must use the :doc:`nep-template` file. -Once the PR is in place, the NEP should be announced on the mailing -list for discussion (comments on the PR itself should be restricted to -minor editorial and technical fixes). +Once the PR for the NEP is in place, a post should be made to the +mailing list containing the sections upto "Backward compatibility", +with the purpose of limiting discussion there to usage and impact. +Discussion on the pull request will have a broader scope, also including +details of implementation. At the earliest convenience, the PR should be merged (regardless of whether it is accepted during discussion). Additional PRs may be made @@ -138,7 +140,7 @@ accepted that a competing proposal is a better alternative. When a NEP is ``Accepted``, ``Rejected``, or ``Withdrawn``, the NEP should be updated accordingly. In addition to updating the status field, at the very least the ``Resolution`` header should be added with a link to the relevant -post in the mailing list archives. +thread in the mailing list archives. NEPs can also be ``Superseded`` by a different NEP, rendering the original obsolete. 
The ``Replaced-By`` and ``Replaces`` headers diff --git a/doc/neps/nep-0019-rng-policy.rst b/doc/neps/nep-0019-rng-policy.rst index aa5fdc653..4f766fa2d 100644 --- a/doc/neps/nep-0019-rng-policy.rst +++ b/doc/neps/nep-0019-rng-policy.rst @@ -3,11 +3,11 @@ NEP 19 — Random Number Generator Policy ======================================= :Author: Robert Kern <robert.kern@gmail.com> -:Status: Accepted +:Status: Final :Type: Standards Track :Created: 2018-05-24 :Updated: 2019-05-21 -:Resolution: https://mail.python.org/pipermail/numpy-discussion/2018-June/078126.html +:Resolution: https://mail.python.org/pipermail/numpy-discussion/2018-July/078380.html Abstract -------- diff --git a/doc/neps/nep-0021-advanced-indexing.rst b/doc/neps/nep-0021-advanced-indexing.rst index 5acabbf16..dab9ab022 100644 --- a/doc/neps/nep-0021-advanced-indexing.rst +++ b/doc/neps/nep-0021-advanced-indexing.rst @@ -630,7 +630,7 @@ At this point we have left the straight forward world of ``oindex`` but can do random picking of any element from the array. Note that in the last example a method such as mentioned in the ``Related Questions`` section could be more straight forward. But this approach is even more flexible, since ``rows`` -does not have to be a simple ``arange``, but could be ``intersting_times``:: +does not have to be a simple ``arange``, but could be ``interesting_times``:: >>> interesting_times = np.array([0, 4, 8, 9, 10]) >>> correct_sensors_at_it = correct_sensors[interesting_times, :] diff --git a/doc/neps/nep-0024-missing-data-2.rst b/doc/neps/nep-0024-missing-data-2.rst index c8b19561f..f4414e0a0 100644 --- a/doc/neps/nep-0024-missing-data-2.rst +++ b/doc/neps/nep-0024-missing-data-2.rst @@ -28,7 +28,7 @@ Detailed description Rationale ^^^^^^^^^ -The purpose of this aNEP is to define two interfaces -- one for handling +The purpose of this NEP is to define two interfaces -- one for handling 'missing values', and one for handling 'masked arrays'. An ordinary value is something like an integer or a floating point number. A diff --git a/doc/neps/nep-0028-website-redesign.rst b/doc/neps/nep-0028-website-redesign.rst new file mode 100644 index 000000000..b418ca831 --- /dev/null +++ b/doc/neps/nep-0028-website-redesign.rst @@ -0,0 +1,334 @@ +=================================== +NEP 28 — numpy.org website redesign +=================================== + +:Author: Ralf Gommers <ralf.gommers@gmail.com> +:Author: Joe LaChance <joe@boldmetrics.com> +:Author: Shekhar Rajak <shekharrajak.1994@gmail.com> +:Status: Accepted +:Type: Informational +:Created: 2019-07-16 +:Resolution: https://mail.python.org/pipermail/numpy-discussion/2019-August/079889.html + + +Abstract +-------- + +NumPy is the fundamental library for numerical and scientific computing with +Python. It is used by millions and has a large team of maintainers and +contributors. Despite that, its `numpy.org <http://numpy.org>`_ website has +never received the attention it needed and deserved. We hope and intend to +change that soon. This document describes ideas and requirements for how to +design a replacement for the current website, to better serve the needs of +our diverse community. 
+ +At a high level, what we're aiming for is: + +- a modern, clean look +- an easy to deploy static site +- a structure that's easy to navigate +- content that addresses all types of stakeholders +- Possible multilingual translations / i18n + +This website serves a couple of roles: + +- it's the entry point to the project for new users +- it should link to the documentation (which is hosted separately, now on + http://docs.scipy.org/ and in the near future on http://numpy.org/doc). +- it should address various aspects of the project (e.g. what NumPy is and + why you'd want to use it, community, project organization, funding, + relationship with NumFOCUS and possibly other organizations) +- it should link out to other places, so every type of stakeholder + (beginning and advanced user, educators, packagers, funders, etc.) + can find their way + + +Motivation and Scope +-------------------- + +The current numpy.org website has almost no content and its design is poor. +This affects many users, who come there looking for information. It also +affects many other aspects of the NumPy project, from finding new contributors +to fundraising. + +The scope of the proposed redesign is the top-level numpy.org site, which +now contains only a couple of pages and may contain on the order of ten +pages after the redesign. Changing the documentation (user guide, reference +guide, and some other pages in the NumPy Manual) is out of scope for +this proposal. + + +Detailed description +-------------------- + +User Experience +~~~~~~~~~~~~~~~ + +Besides the NumPy logo, there is little that can or needs to be kept from the +current website. We will rely to a large extent on ideas and proposals by the +designer(s) of the new website. + +As reference points we can use the `Jupyter website <https://jupyter.org/>`_, +which is probably the best designed site in our ecosystem, and the +`QuantEcon <https://quantecon.org>`_ and `Julia <https://julialang.org>`_ +sites which are well-designed too. + +The Website +~~~~~~~~~~~ + +A static site is a must. There are many high-quality static site generators. +The current website uses Sphinx, however that is not the best choice - it's +hard to theme and results in sites that are too text-heavy due to Sphinx' +primary aim being documentation. + +The following should be considered when choosing a static site generator: + +1. *How widely used is it?* This is important when looking for help maintaining + or improving the site. More popular frameworks are usually also better + maintained, so less chance of bugs or obsolescence. +2. *Ease of deployment.* Most generators meet this criterion, however things + like built-in support for GitHub Pages helps. +3. *Preferences of who implements the new site.* Everyone has their own + preferences. And it's a significant amount of work to build a new site. + So we should take the opinion of those doing the work into account. + +Traffic +``````` + +The current site receives on the order of 500,000 unique visitors per month. +With a redesigned site and relevant content, there is potential for visitor +counts to reach 5-6 million -- a similar level as +`scipy.org <http://scipy.org>`_ or `matplotlib.org <http://matplotlib.org>`_ -- +or more. + +Possible options for static site generators +``````````````````````````````````````````` + +1. *Jekyll.* This is a well maintained option with 855 Github contributors, + with contributions within the last month. Jekyll is written in Ruby, and + has a simple CLI interface. 
Jekyll also has a large directory of + `themes <https://jekyllthemes.io>`__, although a majority cost money. + There are several themes (`serif <https://jekyllthemes.io/theme/serif>`_, + `uBuild <https://jekyllthemes.io/theme/ubuild-jekyll-theme>`_, + `Just The Docs <https://jekyllthemes.io/theme/just-the-docs>`_) that are + appropriate and free. Most themes are likely responsive for mobile, and + that should be a requirement. Jekyll uses a combination of liquid templating + and YAML to render HTML, and content is written in Markdown. i18n + functionality is not native to Jekyll, but can be added easily. + One nice benefit of Jekyll is that it can be run automatically by GitHub + Pages, so deployment via a CI system doesn't need to be implemented. +2. *Hugo.* This is another well maintained option with 554 contributors, with + contributions within the last month. Hugo is written in Go, and similar to + Jekyll, has a simple to use CLI interface to generate static sites. Again, + similar to Jekyll, Hugo has a large directory of + `themes <https://themes.gohugo.io>`_. These themes appear to be free, + unlike some of Jekyll's themes. + (`Sample landing page theme <https://themes.gohugo.io/hugo-hero-theme>`_, + `docs theme <https://themes.gohugo.io/hugo-whisper-theme>`_). Hugo uses Jade + as its templating language, and content is also written in Markdown. i18n + functionality is native to Hugo. +3. *Docusaurus.* Docusaurus is a responsive static site generator made by Facebook. + Unlike the previous options, Docusaurus doesn't come with themes, and thus we + would not want to use this for our landing page. This is an excellent docs + option written in React. Docusaurus natively has support for i18n (via + Crowdin_, document versioning, and document search. + +Both Jekyll and Hugo are excellent options that should be supported into the +future and are good choices for NumPy. Docusaurus has several bonus features +such as versioning and search that Jekyll and Hugo don't have, but is likely +a poor candidate for a landing page - it could be a good option for a +high-level docs site later on though. + +Deployment +~~~~~~~~~~ + +There is no need for running a server, and doing so is in our experience a +significant drain on the time of maintainers. + +1. *Netlify.* Using netlify is free until 100GB of bandwidth is used. Additional + bandwidth costs $20/100GB. They support a global CDN system, which will keep + load times quick for users in other regions. Netlify also has Github integration, + which will allow for easy deployment. When a pull request is merged, Netlify + will automatically deploy the changes. DNS is simple, and HTTPS is also supported. +2. *Github Pages.* Github Pages also has a 100GB bandwidth limit, and is unclear if + additional bandwidth can be purchased. It is also unclear where sites are deployed, + and should be assumed sites aren't deployed globally. Github Pages has an easy to + use CI & DNS, similar to to Netlify. HTTPS is supported. +3. *Cloudflare.* An excellent option, additional CI is likely needed for the same + ease of deployment. + +All of the above options are appropriate for the NumPy site based on current +traffic. Updating to a new deployment strategy, if needed, is a minor amount of +work compared to developing the website itself. If a provider such as +Cloudflare is chosen, additional CI may be required, such as CircleCI, to +have a similar deployment to GitHub Pages or Netlify. 
+ +Analytics +~~~~~~~~~ + +It's benefical to maintainers to know how many visitors are coming to +numpy.org. Google Analytics offers visitor counts and locations. This will +help to support and deploy more strategically, and help maintainers +understand where traffic is coming from. + +Google Analytics is free. A script, provided by Google, must be added to the home page. + +Website Structure +~~~~~~~~~~~~~~~~~ + +We aim to keep the first version of the new website small in terms of amount +of content. New pages can be added later on, it's more important right now to +get the site design right and get some essential information up. Note that in +the second half of 2019 we expect to get 1 or 2 tech writers involved in the +project via Google Season of Docs. They will likely help improve the content +and organization of that content. + +We propose the following structure: + +0. Front page: essentials of what NumPy is (compare e.g. jupyter.org), one or + a couple key user stories (compare e.g. julialang.org) +1. Install +2. Documentation +3. Array computing +4. Community +5. Learning +6. About Us +7. Contribute +8. Donate + +There may be a few other pages, e.g. a page on performance, that are linked +from one of the main pages. + +Stakeholder Content +~~~~~~~~~~~~~~~~~~~ + +This should have as little content as possible *within the site*. Somewhere +on the site we should link out to content that's specific to: + +- beginning users (quickstart, tutorial) +- advanced users +- educators +- packagers +- package authors that depend on NumPy +- funders (governance, roadmap) + +Translation (multilingual / i18n) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +NumPy has users all over the world. Most of those users are not native +English speakers, and many don't speak English well or at all. Therefore +having content in multiple languages is potentially addressing a large unmet +need. It would likely also help make the NumPy project more diverse and +welcoming. + +On the other hand, there are good reasons why few projects have a +multi-lingual site. It's potentially a lot of extra work. Extra work for +maintainers is costly - they're already struggling to keep up with the work +load. Therefore we have to very carefully consider whether a multi-lingual +site is feasible and weight costs and benefits. + +We start with an assertion: maintaining translations of all documentation, or +even the whole user guide, as part of the NumPy project is not feasible. One +simply has to look at the volume of our documentation and the frequency with +which we change it to realize that that's the case. Perhaps it will be +feasible though to translate just the top-level pages of the website. Those +do not change very often, and it will be a limited amount of content (order +of magnitude 5-10 pages of text). + +We propose the following requirements for adding a language: + +- The language must have a dedicated maintainer +- There must be a way to validate content changes (e.g. a second + maintainer/reviewer, or high quality language support in a freely + available machine translation tool) +- The language must have a reasonable size target audience (to be + assessed by the NumPy maintainers) + +Furthermore we propose a policy for when to remove support for a language again +(preferably by hiding it rather than deleting content). This may be done when +the language no longer has a maintainer, and coverage of translations falls +below an acceptable threshold (say 80%). 
+ +Benefits of having translations include: + +- Better serve many existing and potential users +- Potentially attract a culturally and geographically more diverse set of contributors + +The tradeoffs are: + +- Cost of maintaining a more complex code base +- Cost of making decisions about whether or not to add a new language +- Higher cost to making content changes, creates work for language maintainers +- Any content change should be rolled out with enough delay to have translations in place + +Can we define a small enough set of pages and content that it makes sense to do this? +Probably yes. + +Is there an easy to use tool to maintain translations and add them to the website? +To be discussed - it needs investigating, and may depend on the choice of static site +generator. One potential option is Crowdin_, which is free for open source projects. + + +Style and graphic design +~~~~~~~~~~~~~~~~~~~~~~~~ + +Beyond the "a modern, clean look" goal we choose to not specify too much. A +designer may have much better ideas than the authors of this proposal, hence we +will work with the designer(s) during the implementation phase. + +The NumPy logo could use a touch-up. The logo widely recognized and its colors and +design are good, however the look-and-feel is perhaps a little dated. + + +Other aspects +~~~~~~~~~~~~~ + +A search box would be nice to have. The Sphinx documentation already has a +search box, however a search box on the main site which provides search results +for the docs, the website, and perhaps other domains that are relevant for +NumPy would make sense. + + +Backward compatibility +---------------------- + +Given a static site generator is chosen, we will migrate away from Sphinx for +numpy.org (the website, *not including the docs*). The current deployment can +be preserved until a future deprecation date is decided (potentially based on +the comfort level of our new site). + +All site generators listed above have visibility into the HTML and Javascript +that is generated, and can continue to be maintained in the event a given +project ceases to be maintained. + + +Alternatives +------------ + +Alternatives we considered for the overall design of the website: + +1. *Update current site.* A new Sphinx theme could be chosen. This would likely + take the least amount of resources initially, however, Sphinx does not have + the features we are looking for moving forward such as i18n, responsive design, + and a clean, modern look. + Note that updating the docs Sphinx theme is likely still a good idea - it's + orthogonal to this NEP though. +2. *Create custom site.* This would take the most amount of resources, and is + likely to have additional benefit in comparison to a static site generator. + All features would be able to be added at the cost of developer time. + + +Discussion +---------- + +Mailing list thread discussing this NEP: TODO + + +References and Footnotes +------------------------ +.. _Crowdin: https://crowdin.com/pricing#annual + +Copyright +--------- + +This document has been placed in the public domain. 
diff --git a/doc/neps/nep-0029-deprecation_policy.rst b/doc/neps/nep-0029-deprecation_policy.rst new file mode 100644 index 000000000..2f5c8ecb5 --- /dev/null +++ b/doc/neps/nep-0029-deprecation_policy.rst @@ -0,0 +1,302 @@ +================================================================================== +NEP 29 — Recommend Python and Numpy version support as a community policy standard +================================================================================== + + +:Author: Thomas A Caswell <tcaswell@gmail.com>, Andreas Mueller, Brian Granger, Madicken Munk, Ralf Gommers, Matt Haberland <mhaberla@calpoly.edu>, Matthias Bussonnier <bussonniermatthias@gmail.com>, Stefan van der Walt <stefanv@berkeley.edu> +:Status: Final +:Type: Informational +:Created: 2019-07-13 +:Resolution: https://mail.python.org/pipermail/numpy-discussion/2019-October/080128.html + + +Abstract +-------- + +This NEP recommends that all projects across the Scientific +Python ecosystem adopt a common "time window-based" policy for +support of Python and NumPy versions. Standardizing a recommendation +for project support of minimum Python and NumPy versions will improve +downstream project planning. + +This is an unusual NEP in that it offers recommendations for +community-wide policy and not for changes to NumPy itself. Since a +common place for SPEEPs (Scientific Python Ecosystem Enhancement +Proposals) does not exist and given NumPy's central role in the +ecosystem, a NEP provides a visible place to document the proposed +policy. + +This NEP is being put forward by maintainers of Matplotlib, scikit-learn, +IPython, Jupyter, yt, SciPy, NumPy, and scikit-image. + + + +Detailed description +-------------------- + +For the purposes of this NEP we assume semantic versioning and define: + +*major version* + A release that changes the first number (e.g. X.0.0) + +*minor version* + A release that changes the second number (e.g 1.Y.0) + +*patch version* + A release that changes the third number (e.g. 1.1.Z) + + +When a project releases a new major or minor version, we recommend that +they support at least all minor versions of Python +introduced and released in the prior 42 months *from the +anticipated release date* with a minimum of 2 minor versions of +Python, and all minor versions of NumPy released in the prior 24 +months *from the anticipated release date* with a minimum of 3 +minor versions of NumPy. + + +Consider the following timeline:: + + Jan 16 Jan 17 Jan 18 Jan 19 Jan 20 + | | | | | + +++++|+++++++++++|+++++++++++|+++++++++++|+++++++++++|++++++++++++ + | | | | + py 3.5.0 py 3.6.0 py 3.7.0 py 3.8.0 + |-----------------------------------------> Feb19 + |-----------------------------------------> Dec19 + |-----------------------------------------> Nov20 + +It shows the 42 month support windows for Python. A project with a +major or minor version release in February 2019 should support Python 3.5 and newer, +a project with a major or minor version released in December 2019 should +support Python 3.6 and newer, and a project with a major or minor version +release in November 2020 should support Python 3.7 and newer. + +The current Python release cadence is 18 months so a 42 month window +ensures that there will always be at least two minor versions of Python +in the window. The window is extended 6 months beyond the anticipated two-release +interval for Python to provides resilience against small fluctuations / +delays in its release schedule. 
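[Editor's note: to make the 42-month window rule above concrete, here is a small illustrative sketch. It is not part of the NEP or of this commit; the function name and the handling of the two-version floor are assumptions made for illustration, and the release dates are taken from the NEP's own reference script further below.]

.. code:: python

    from datetime import datetime, timedelta

    # Python minor-version release dates, as listed in the NEP's reference script.
    python_releases = {
        "3.5": datetime(2015, 9, 13),
        "3.6": datetime(2016, 12, 23),
        "3.7": datetime(2018, 6, 27),
    }

    def supported_python_versions(planned_release, minimum=2):
        """Python minor versions a project releasing on ``planned_release``
        should support under the 42-month rule."""
        window = timedelta(days=int(365 * 3.5 + 1))  # 42 months, as in the NEP's script
        in_window = {v for v, released in python_releases.items()
                     if planned_release - released <= window}
        # Always keep at least ``minimum`` of the newest minor versions.
        newest = sorted(python_releases, key=python_releases.get)[-minimum:]
        return sorted(in_window | set(newest))

    print(supported_python_versions(datetime(2019, 2, 1)))   # ['3.5', '3.6', '3.7']
    print(supported_python_versions(datetime(2019, 12, 1)))  # ['3.6', '3.7']

The December 2019 call drops 3.5 because more than 42 months have elapsed since its release, matching the timeline shown above.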
+ +Because Python minor version support is based only on historical +release dates, a 42 month time window, and a planned project release +date, one can predict with high confidence when a project will be able +to drop any given minor version of Python. This, in turn, could save +months of unnecessary maintenance burden. + +If a project releases immediately after a minor version of Python +drops out of the support window, there will inevitably be some +mismatch in supported versions—but this situation should only last +until other projects in the ecosystem make releases. + +Otherwise, once a project does a minor or major release, it is +guaranteed that there will be a stable release of all other projects +that, at the source level, support the same set of Python versions +supported by the new release. + +If there is a Python 4 or a NumPy 2 this policy will have to be +reviewed in light of the community's and projects' best interests. + + +Support Table +~~~~~~~~~~~~~ + +============ ====== ===== +Date Python NumPy +------------ ------ ----- +Jan 07, 2020 3.6+ 1.15+ +Jun 23, 2020 3.7+ 1.15+ +Jul 23, 2020 3.7+ 1.16+ +Jan 13, 2021 3.7+ 1.17+ +Jul 26, 2021 3.7+ 1.18+ +Dec 26, 2021 3.8+ 1.18+ +============ ====== ===== + + +Drop Schedule +~~~~~~~~~~~~~ + +:: + + On next release, drop support for Python 3.5 (initially released on Sep 13, 2015) + On Jan 07, 2020 drop support for Numpy 1.14 (initially released on Jan 06, 2018) + On Jun 23, 2020 drop support for Python 3.6 (initially released on Dec 23, 2016) + On Jul 23, 2020 drop support for Numpy 1.15 (initially released on Jul 23, 2018) + On Jan 13, 2021 drop support for Numpy 1.16 (initially released on Jan 13, 2019) + On Jul 26, 2021 drop support for Numpy 1.17 (initially released on Jul 26, 2019) + On Dec 26, 2021 drop support for Python 3.7 (initially released on Jun 27, 2018) + + +Implementation +-------------- + +We suggest that all projects adopt the following language into their +development guidelines: + + This project supports: + + - All minor versions of Python released 42 months prior to the + project, and at minimum the two latest minor versions. + - All minor versions of ``numpy`` released in the 24 months prior + to the project, and at minimum the last three minor versions. + + In ``setup.py``, the ``python_requires`` variable should be set to + the minimum supported version of Python. All supported minor + versions of Python should be in the test matrix and have binary + artifacts built for the release. + + Minimum Python and NumPy version support should be adjusted upward + on every major and minor release, but never on a patch release. + + +Backward compatibility +---------------------- + +No backward compatibility issues. + +Alternatives +------------ + +Ad-Hoc version support +~~~~~~~~~~~~~~~~~~~~~~ + +A project could, on every release, evaluate whether to increase +the minimum version of Python supported. +As a major downside, an ad-hoc approach makes it hard for downstream users to predict what +the future minimum versions will be. As there is no objective threshold +to when the minimum version should be dropped, it is easy for these +version support discussions to devolve into `bike shedding <https://en.wikipedia.org/wiki/Wikipedia:Avoid_Parkinson%27s_bicycle-shed_effect>`_ and acrimony. + + +All CPython supported versions +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The CPython supported versions of Python are listed in the Python +Developers Guide and the Python PEPs. Supporting these is a very clear +and conservative approach. 
However, it means that there exists a four +year lag between when a new features is introduced into the language +and when a project is able to use it. Additionally, for projects with +compiled extensions this requires building many binary artifacts for +each release. + +For the case of NumPy, many projects carry workarounds to bugs that +are fixed in subsequent versions of NumPy. Being proactive about +increasing the minimum version of NumPy allows downstream +packages to carry fewer version-specific patches. + + + +Default version on Linux distribution +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The policy could be to support the version of Python that ships by +default in the latest Ubuntu LTS or CentOS/RHEL release. However, we +would still have to standardize across the community which +distribution to follow. + +By following the versions supported by major Linux distributions, we +are giving up technical control of our projects to external +organizations that may have different motivations and concerns than we +do. + + +N minor versions of Python +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Given the current release cadence of the Python, the proposed time (42 +months) is roughly equivalent to "the last two" Python minor versions. +However, if Python changes their release cadence substantially, any +rule based solely on the number of minor releases may need to be +changed to remain sensible. + +A more fundamental problem with a policy based on number of Python +releases is that it is hard to predict when support for a given minor +version of Python will be dropped as that requires correctly +predicting the release schedule of Python for the next 3-4 years. A +time-based rule, in contrast, only depends on past events +and the length of the support window. + + +Time window from the X.Y.1 Python release +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This is equivalent to a few month longer support window from the X.Y.0 +release. This is because X.Y.1 bug-fix release is typically a few +months after the X.Y.0 release, thus a N month window from X.Y.1 is +roughly equivalent to a N+3 month from X.Y.0. + +The X.Y.0 release is naturally a special release. If we were to +anchor the window on X.Y.1 we would then have the discussion of why +not X.Y.M? 
+ + +Discussion +---------- + + +References and Footnotes +------------------------ + +Code to generate support and drop schedule tables :: + + from datetime import datetime, timedelta + + data = """Jan 15, 2017: Numpy 1.12 + Sep 13, 2015: Python 3.5 + Jun 27, 2018: Python 3.7 + Dec 23, 2016: Python 3.6 + Jun 07, 2017: Numpy 1.13 + Jan 06, 2018: Numpy 1.14 + Jul 23, 2018: Numpy 1.15 + Jan 13, 2019: Numpy 1.16 + Jul 26, 2019: Numpy 1.17 + """ + + releases = [] + + plus42 = timedelta(days=int(365*3.5 + 1)) + plus24 = timedelta(days=int(365*2 + 1)) + + for line in data.splitlines(): + date, project_version = line.split(':') + project, version = project_version.strip().split(' ') + release = datetime.strptime(date, '%b %d, %Y') + if project.lower() == 'numpy': + drop = release + plus24 + else: + drop = release + plus42 + releases.append((drop, project, version, release)) + + releases = sorted(releases, key=lambda x: x[0]) + + minpy = '3.8+' + minnum = '1.18+' + + toprint_drop_dates = [''] + toprint_support_table = [] + for d, p, v, r in releases[::-1]: + df = d.strftime('%b %d, %Y') + toprint_drop_dates.append( + f'On {df} drop support for {p} {v} ' + f'(initially released on {r.strftime("%b %d, %Y")})') + toprint_support_table.append(f'{df} {minpy:<6} {minnum:<5}') + if p.lower() == 'numpy': + minnum = v+'+' + else: + minpy = v+'+' + + for e in toprint_drop_dates[::-1]: + print(e) + + print('============ ====== =====') + print('Date Python NumPy') + print('------------ ------ -----') + for e in toprint_support_table[::-1]: + print(e) + print('============ ====== =====') + + +Copyright +--------- + +This document has been placed in the public domain. diff --git a/doc/neps/nep-0030-duck-array-protocol.rst b/doc/neps/nep-0030-duck-array-protocol.rst new file mode 100644 index 000000000..353c5df1e --- /dev/null +++ b/doc/neps/nep-0030-duck-array-protocol.rst @@ -0,0 +1,183 @@ +====================================================== +NEP 30 — Duck Typing for NumPy Arrays - Implementation +====================================================== + +:Author: Peter Andreas Entschev <pentschev@nvidia.com> +:Author: Stephan Hoyer <shoyer@google.com> +:Status: Draft +:Type: Standards Track +:Created: 2019-07-31 +:Updated: 2019-07-31 +:Resolution: + +Abstract +-------- + +We propose the ``__duckarray__`` protocol, following the high-level overview +described in NEP 22, allowing downstream libraries to return arrays of their +defined types, in contrast to ``np.asarray``, that coerces those ``array_like`` +objects to NumPy arrays. + +Detailed description +-------------------- + +NumPy's API, including array definitions, is implemented and mimicked in +countless other projects. By definition, many of those arrays are fairly +similar in how they operate to the NumPy standard. The introduction of +``__array_function__`` allowed dispathing of functions implemented by several +of these projects directly via NumPy's API. This introduces a new requirement, +returning the NumPy-like array itself, rather than forcing a coercion into a +pure NumPy array. + +For the purpose above, NEP 22 introduced the concept of duck typing to NumPy +arrays. The suggested solution described in the NEP allows libraries to avoid +coercion of a NumPy-like array to a pure NumPy array where necessary, while +still allowing that NumPy-like array libraries that do not wish to implement +the protocol to coerce arrays to a pure Numpy array via ``np.asarray``. 
+ +Usage Guidance +~~~~~~~~~~~~~~ + +Code that uses np.duckarray is meant for supporting other ndarray-like objects +that "follow the NumPy API". That is an ill-defined concept at the moment -- +every known library implements the NumPy API only partly, and many deviate +intentionally in at least some minor ways. This cannot be easily remedied, so +for users of ``__duckarray__`` we recommend the following strategy: check if the +NumPy functionality used by the code that follows your use of ``__duckarray__`` +is present in Dask, CuPy and Sparse. If so, it's reasonable to expect any duck +array to work here. If not, we suggest you indicate in your docstring what kinds +of duck arrays are accepted, or what properties they need to have. + +To exemplify the usage of duck arrays, suppose one wants to take the ``mean()`` +of an array-like object ``arr``. Using NumPy to achieve that, one could write +``np.asarray(arr).mean()`` to achieve the intended result. However, libraries +may expect ``arr`` to be a NumPy-like array, and at the same time, the array may +or may not be an object compliant to the NumPy API (either in full or partially) +such as a CuPy, Sparse or a Dask array. In the case where ``arr`` is already an +object compliant to the NumPy API, we would simply return it (and prevent it +from being coerced into a pure NumPy array), otherwise, it would then be coerced +into a NumPy array. + +Implementation +-------------- + +The implementation idea is fairly straightforward, requiring a new function +``duckarray`` to be introduced in NumPy, and a new method ``__duckarray__`` in +NumPy-like array classes. The new ``__duckarray__`` method shall return the +downstream array-like object itself, such as the ``self`` object. If appropriate, +an ``__array__`` method may be implemented that returns a NumPy array or possibly +raise a ``TypeError`` with a helpful message. + +The new NumPy ``duckarray`` function can be implemented as follows: + +.. code:: python + + def duckarray(array_like): + if hasattr(array_like, '__duckarray__'): + return array_like.__duckarray__() + return np.asarray(array_like) + +Example for a project implementing NumPy-like arrays +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Now consider a library that implements a NumPy-compatible array class called +``NumPyLikeArray``, this class shall implement the methods described above, and +a complete implementation would look like the following: + +.. code:: python + + class NumPyLikeArray: + def __duckarray__(self): + return self + + def __array__(self): + return TypeError("NumPyLikeArray can not be converted to a numpy array. " + "You may want to use np.duckarray.") + +The implementation above exemplifies the simplest case, but the overall idea +is that libraries will implement a ``__duckarray__`` method that returns the +original object, and an ``__array__`` method that either creates and returns an +appropriate NumPy array, or raises a``TypeError`` to prevent unintentional use +as an object in a NumPy array (if ``np.asarray`` is called on an arbitrary +object that does not implement ``__array__``, it will create a NumPy array +scalar). + +In case of existing libraries that don't already implement ``__array__`` but +would like to use duck array typing, it is advised that they introduce +both ``__array__`` and``__duckarray__`` methods. + +Usage +----- + +An example of how the ``__duckarray__`` protocol could be used to write a +``stack`` function based on ``concatenate``, and its produced outcome, can be +seen below. 
The example here was chosen not only to demonstrate the usage of
+the ``duckarray`` function, but also its dependency on the NumPy API,
+illustrated by checks on the array's ``shape`` attribute. Note that the
+example is merely a simplified version of NumPy's actual implementation of
+``stack`` working on the first axis, and it is assumed that Dask has
+implemented the ``__duckarray__`` method.
+
+.. code:: python
+
+    def duckarray_stack(arrays):
+        arrays = [np.duckarray(arr) for arr in arrays]
+
+        shapes = {arr.shape for arr in arrays}
+        if len(shapes) != 1:
+            raise ValueError('all input arrays must have the same shape')
+
+        expanded_arrays = [arr[np.newaxis, ...] for arr in arrays]
+        return np.concatenate(expanded_arrays, axis=0)
+
+    dask_arr = dask.array.arange(10)
+    np_arr = np.arange(10)
+    np_like = list(range(10))
+
+    duckarray_stack((dask_arr, dask_arr))  # Returns dask.array
+    duckarray_stack((dask_arr, np_arr))    # Returns dask.array
+    duckarray_stack((dask_arr, np_like))   # Returns dask.array
+
+In contrast, using only ``np.asarray`` (at the time of writing this NEP, the
+usual method employed by library developers to ensure arrays are NumPy-like)
+has a different outcome:
+
+.. code:: python
+
+    def asarray_stack(arrays):
+        arrays = [np.asarray(arr) for arr in arrays]
+
+        # The remaining implementation is the same as that of
+        # ``duckarray_stack`` above
+
+    asarray_stack((dask_arr, dask_arr))  # Returns np.ndarray
+    asarray_stack((dask_arr, np_arr))    # Returns np.ndarray
+    asarray_stack((dask_arr, np_like))   # Returns np.ndarray
+
+Backward compatibility
+----------------------
+
+This proposal does not raise any backward compatibility issues within NumPy,
+given that it only introduces a new function. However, downstream libraries
+that opt to introduce the ``__duckarray__`` protocol may choose to remove the
+ability to coerce arrays back to a NumPy array via the ``np.array`` or
+``np.asarray`` functions, preventing unintended effects of coercion of such
+arrays back to a pure NumPy array (as some libraries already do, such as CuPy
+and Sparse), but still leaving libraries not implementing the protocol with the
+choice of utilizing ``np.duckarray`` to promote ``array_like`` objects to pure
+NumPy arrays.
+
+Previous proposals and discussion
+---------------------------------
+
+The duck typing protocol proposed here was described at a high level in
+`NEP 22 <https://numpy.org/neps/nep-0022-ndarray-duck-typing-overview.html>`_.
+
+Additionally, longer discussions about the protocol and related proposals
+took place in
+`numpy/numpy #13831 <https://github.com/numpy/numpy/issues/13831>`_.
+
+Copyright
+---------
+
+This document has been placed in the public domain.
diff --git a/doc/neps/nep-0031-uarray.rst b/doc/neps/nep-0031-uarray.rst
new file mode 100644
index 000000000..3519b6bc0
--- /dev/null
+++ b/doc/neps/nep-0031-uarray.rst
@@ -0,0 +1,637 @@
+============================================================
+NEP 31 — Context-local and global overrides of the NumPy API
+============================================================
+
+:Author: Hameer Abbasi <habbasi@quansight.com>
+:Author: Ralf Gommers <rgommers@quansight.com>
+:Author: Peter Bell <pbell@quansight.com>
+:Status: Draft
+:Type: Standards Track
+:Created: 2019-08-22
+
+
+Abstract
+--------
+
+This NEP proposes to make all of NumPy's public API overridable via an
+extensible backend mechanism.
+ +Acceptance of this NEP means NumPy would provide global and context-local +overrides, as well as a dispatch mechanism similar to NEP-18 [2]_. First +experiences with ``__array_function__`` show that it is necessary to be able +to override NumPy functions that *do not take an array-like argument*, and +hence aren't overridable via ``__array_function__``. The most pressing need is +array creation and coercion functions, such as ``numpy.zeros`` or +``numpy.asarray``; see e.g. NEP-30 [9]_. + +This NEP proposes to allow, in an opt-in fashion, overriding any part of the +NumPy API. It is intended as a comprehensive resolution to NEP-22 [3]_, and +obviates the need to add an ever-growing list of new protocols for each new +type of function or object that needs to become overridable. + +Motivation and Scope +-------------------- + +The motivation behind ``uarray`` is manyfold: First, there have been several +attempts to allow dispatch of parts of the NumPy API, including (most +prominently), the ``__array_ufunc__`` protocol in NEP-13 [4]_, and the +``__array_function__`` protocol in NEP-18 [2]_, but this has shown the need +for further protocols to be developed, including a protocol for coercion (see +[5]_, [9]_). The reasons these overrides are needed have been extensively +discussed in the references, and this NEP will not attempt to go into the +details of why these are needed; but in short: It is necessary for library +authors to be able to coerce arbitrary objects into arrays of their own types, +such as CuPy needing to coerce to a CuPy array, for example, instead of +a NumPy array. In simpler words, one needs things like ``np.asarray(...)`` or +an alternative to "just work" and return duck-arrays. + +The primary end-goal of this NEP is to make the following possible: + +.. code:: python + + # On the library side + import numpy.overridable as unp + + def library_function(array): + array = unp.asarray(array) + # Code using unumpy as usual + return array + + # On the user side: + import numpy.overridable as unp + import uarray as ua + import dask.array as da + + ua.register_backend(da) # Can be done within Dask itself + + library_function(dask_array) # works and returns dask_array + + with unp.set_backend(da): + library_function([1, 2, 3, 4]) # actually returns a Dask array. + +Here, ``backend`` can be any compatible object defined either by NumPy or an +external library, such as Dask or CuPy. Ideally, it should be the module +``dask.array`` or ``cupy`` itself. + +These kinds of overrides are useful for both the end-user as well as library +authors. End-users may have written or wish to write code that they then later +speed up or move to a different implementation, say PyData/Sparse. They can do +this simply by setting a backend. Library authors may also wish to write code +that is portable across array implementations, for example ``sklearn`` may wish +to write code for a machine learning algorithm that is portable across array +implementations while also using array creation functions. + +This NEP takes a holistic approach: It assumes that there are parts of +the API that need to be overridable, and that these will grow over time. It +provides a general framework and a mechanism to avoid a design of a new +protocol each time this is required. This was the goal of ``uarray``: to +allow for overrides in an API without needing the design of a new protocol. 
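+
+As a purely illustrative sketch of the gap described above (assuming a recent
+Dask version with ``__array_function__`` support), functions that receive an
+array argument already dispatch, while creation and coercion functions do not:
+
+.. code:: python
+
+    import numpy as np
+    import dask.array as da
+
+    d = da.arange(10)
+
+    # Dispatches through __array_function__ (NEP-18): stays a Dask array.
+    np.concatenate([d, d])
+
+    # No array-like argument to dispatch on, so these always produce
+    # np.ndarray; NEP-18 alone offers no way for Dask to override them.
+    np.zeros((3, 3))
+    np.asarray([1, 2, 3])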
+ +This NEP proposes the following: That ``unumpy`` [8]_ becomes the +recommended override mechanism for the parts of the NumPy API not yet covered +by ``__array_function__`` or ``__array_ufunc__``, and that ``uarray`` is +vendored into a new namespace within NumPy to give users and downstream +dependencies access to these overrides. This vendoring mechanism is similar +to what SciPy decided to do for making ``scipy.fft`` overridable (see [10]_). + + +Detailed description +-------------------- + +Using overrides +~~~~~~~~~~~~~~~ + +Here are a few examples of how an end-user would use overrides. + +.. code:: python + + data = da.from_zarr('myfile.zarr') + # result should still be dask, all things being equal + result = library_function(data) + result.to_zarr('output.zarr') + +This would keep on working, assuming the Dask backend was either set or +registered. Registration can also be done at import-time. + +Now consider another function, and what would need to happen in order to +make this work: + +.. code:: python + + from dask import array as da + from magic_library import pytorch_predict + + data = da.from_zarr('myfile.zarr') + # normally here one would use e.g. data.map_overlap + result = pytorch_predict(data) + result.to_zarr('output.zarr') + +This would work in two scenarios: The first is that ``pytorch_predict`` was a +multimethod, and implemented by the Dask backend. Dask could provide utility +functions to allow external libraries to register implementations. + +The second, and perhaps more useful way, is that ``pytorch_predict`` was defined +in an idiomatic style true to NumPy in terms of other multimethods, and that Dask +implemented the required multimethods itself, e.g. ``np.convolve``. If this +happened, then the above example would work without either ``magic_library`` +or Dask having to do anything specific to the other. + +Composing backends +~~~~~~~~~~~~~~~~~~ + +There are some backends which may depend on other backends, for example xarray +depending on `numpy.fft`, and transforming a time axis into a frequency axis, +or Dask/xarray holding an array other than a NumPy array inside it. This would +be handled in the following manner inside code:: + + with ua.set_backend(cupy), ua.set_backend(dask.array): + # Code that has distributed GPU arrays here + +Proposals +~~~~~~~~~ + +The only change this NEP proposes at its acceptance, is to make ``unumpy`` the +officially recommended way to override NumPy, along with making some submodules +overridable by default via ``uarray``. ``unumpy`` will remain a separate +repository/package (which we propose to vendor to avoid a hard dependency, and +use the separate ``unumpy`` package only if it is installed, rather than depend +on for the time being). In concrete terms, ``numpy.overridable`` becomes an +alias for ``unumpy``, if available with a fallback to the a vendored version if +not. ``uarray`` and ``unumpy`` and will be developed primarily with the input +of duck-array authors and secondarily, custom dtype authors, via the usual +GitHub workflow. There are a few reasons for this: + +* Faster iteration in the case of bugs or issues. +* Faster design changes, in the case of needed functionality. +* ``unumpy`` will work with older versions of NumPy as well. +* The user and library author opt-in to the override process, + rather than breakages happening when it is least expected. + In simple terms, bugs in ``unumpy`` mean that ``numpy`` remains + unaffected. 
+* For ``numpy.fft``, ``numpy.linalg`` and ``numpy.random``, the functions in
+  the main namespace will mirror those in the ``numpy.overridable`` namespace.
+  The reason for this is that there may exist functions in these submodules
+  that need backends, even for ``numpy.ndarray`` inputs.
+
+Advantages of ``unumpy`` over other solutions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+``unumpy`` offers a number of advantages over the approach of defining a new
+protocol for every problem encountered: Whenever there is something requiring
+an override, ``unumpy`` will be able to offer a unified API with very minor
+changes. For example:
+
+* ``ufunc`` objects can be overridden via their ``__call__``, ``reduce`` and
+  other methods.
+* Other functions can be overridden in a similar fashion.
+* ``np.asduckarray`` goes away, and becomes ``np.overridable.asarray`` with a
+  backend set.
+* The same holds for array creation functions such as ``np.zeros``,
+  ``np.empty`` and so on.
+
+This also holds for the future: Making something overridable would require only
+minor changes to ``unumpy``.
+
+Another promise ``unumpy`` holds is one of default implementations. Default
+implementations can be provided for any multimethod, in terms of others. This
+allows one to override a large part of the NumPy API by defining only a small
+part of it. This is to ease the creation of new duck-arrays, by providing
+default implementations of many functions that can be easily expressed in
+terms of others, as well as a repository of utility functions that help in the
+implementation of duck-arrays that most duck-arrays would require. This would
+allow us to avoid designing entire protocols, e.g., a protocol for stacking
+and concatenating would be replaced by simply implementing ``stack`` and/or
+``concatenate`` and then providing default implementations for everything else
+in that class. The same applies for transposing, and many other functions for
+which protocols haven't been proposed, such as ``isin`` in terms of ``in1d``,
+``setdiff1d`` in terms of ``unique``, and so on.
+
+It also allows one to override functions in a manner which
+``__array_function__`` simply cannot, such as overriding ``np.einsum`` with the
+version from the ``opt_einsum`` package, or Intel MKL overriding FFT, BLAS
+or ``ufunc`` objects. They would define a backend with the appropriate
+multimethods, and the user would select them via a ``with`` statement, or by
+registering them as a backend.
+
+The last benefit is a clear way to coerce to a given backend (via the
+``coerce`` keyword in ``ua.set_backend``), and a protocol
+for coercing not only arrays, but also ``dtype`` objects and ``ufunc`` objects
+with similar ones from other libraries. This is due to the existence of actual,
+third party dtype packages, and their desire to blend into the NumPy ecosystem
+(see [6]_). This is a separate issue compared to the C-level dtype redesign
+proposed in [7]_; it's about allowing third-party dtype implementations to
+work with NumPy, much like third-party array implementations. These can provide
+features such as, for example, units, jagged arrays or other such features that
+are outside the scope of NumPy.
+
+Mixing NumPy and ``unumpy`` in the same file
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Normally, one would want to import only one of ``unumpy`` or ``numpy``, and
+import it as ``np`` for familiarity.
However, there may be situations +where one wishes to mix NumPy and the overrides, and there are a few ways to do +this, depending on the user's style:: + + from numpy import overridable as unp + import numpy as np + +or:: + + import numpy as np + + # Use unumpy via np.overridable + +Duck-array coercion +~~~~~~~~~~~~~~~~~~~ + +There are inherent problems about returning objects that are not NumPy arrays +from ``numpy.array`` or ``numpy.asarray``, particularly in the context of C/C++ +or Cython code that may get an object with a different memory layout than the +one it expects. However, we believe this problem may apply not only to these +two functions but all functions that return NumPy arrays. For this reason, +overrides are opt-in for the user, by using the submodule ``numpy.overridable`` +rather than ``numpy``. NumPy will continue to work unaffected by anything in +``numpy.overridable``. + +If the user wishes to obtain a NumPy array, there are two ways of doing it: + +1. Use ``numpy.asarray`` (the non-overridable version). +2. Use ``numpy.overridable.asarray`` with the NumPy backend set and coercion + enabled + +Aliases outside of the ``numpy.overridable`` namespace +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +All functionality in ``numpy.random``, ``numpy.linalg`` and ``numpy.fft`` +will be aliased to their respective overridable versions inside +``numpy.overridable``. The reason for this is that there are alternative +implementations of RNGs (``mkl-random``), linear algebra routines (``eigen``, +``blis``) and FFT routines (``mkl-fft``, ``pyFFTW``) that need to operate on +``numpy.ndarray`` inputs, but still need the ability to switch behaviour. + +This is different from monkeypatching in a few different ways: + +* The caller-facing signature of the function is always the same, + so there is at least the loose sense of an API contract. Monkeypatching + does not provide this ability. +* There is the ability of locally switching the backend. +* It has been `suggested <http://numpy-discussion.10968.n7.nabble.com/NEP-31-Context-local-and-global-overrides-of-the-NumPy-API-tp47452p47472.html>`_ + that the reason that 1.17 hasn't landed in the Anaconda defaults channel is + due to the incompatibility between monkeypatching and ``__array_function__``, + as monkeypatching would bypass the protocol completely. +* Statements of the form ``from numpy import x; x`` and ``np.x`` would have + different results depending on whether the import was made before or + after monkeypatching happened. + +All this isn't possible at all with ``__array_function__`` or +``__array_ufunc__``. + +It has been formally realised (at least in part) that a backend system is +needed for this, in the `NumPy roadmap <https://numpy.org/neps/roadmap.html#other-functionality>`_. + +For ``numpy.random``, it's still necessary to make the C-API fit the one +proposed in `NEP-19 <https://numpy.org/neps/nep-0019-rng-policy.html>`_. +This is impossible for `mkl-random`, because then it would need to be +rewritten to fit that framework. The guarantees on stream +compatibility will be the same as before, but if there's a backend that affects +``numpy.random`` set, we make no guarantees about stream compatibility, and it +is up to the backend author to provide their own guarantees. + +Providing a way for implicit dispatch +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +It has been suggested that the ability to dispatch methods which do not take +a dispatchable is needed, while guessing that backend from another dispatchable. 
+ +As a concrete example, consider the following: + +.. code:: python + + with unumpy.determine_backend(array_like, np.ndarray): + unumpy.arange(len(array_like)) + +While this does not exist yet in ``uarray``, it is trivial to add it. The need for +this kind of code exists because one might want to have an alternative for the +proposed ``*_like`` functions, or the ``like=`` keyword argument. The need for these +exists because there are functions in the NumPy API that do not take a dispatchable +argument, but there is still the need to select a backend based on a different +dispatchable. + +The need for an opt-in module +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The need for an opt-in module is realised because of a few reasons: + +* There are parts of the API (like `numpy.asarray`) that simply cannot be + overridden due to incompatibility concerns with C/Cython extensions, however, + one may want to coerce to a duck-array using ``asarray`` with a backend set. +* There are possible issues around an implicit option and monkeypatching, such + as those mentioned above. + +NEP 18 notes that this may require maintenance of two separate APIs. However, +this burden may be lessened by, for example, parametrizing all tests over +``numpy.overridable`` separately via a fixture. This also has the side-effect +of thoroughly testing it, unlike ``__array_function__``. We also feel that it +provides an oppurtunity to separate the NumPy API contract properly from the +implementation. + +Benefits to end-users and mixing backends +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Mixing backends is easy in ``uarray``, one only has to do: + +.. code:: python + + # Explicitly say which backends you want to mix + ua.register_backend(backend1) + ua.register_backend(backend2) + ua.register_backend(backend3) + + # Freely use code that mixes backends here. + +The benefits to end-users extend beyond just writing new code. Old code +(usually in the form of scripts) can be easily ported to different backends +by a simple import switch and a line adding the preferred backend. This way, +users may find it easier to port existing code to GPU or distributed computing. + +Related Work +------------ + +Other override mechanisms +~~~~~~~~~~~~~~~~~~~~~~~~~ + +* NEP-18, the ``__array_function__`` protocol. [2]_ +* NEP-13, the ``__array_ufunc__`` protocol. [3]_ +* NEP-30, the ``__duck_array__`` protocol. 
[9]_ + +Existing NumPy-like array implementations +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +* Dask: https://dask.org/ +* CuPy: https://cupy.chainer.org/ +* PyData/Sparse: https://sparse.pydata.org/ +* Xnd: https://xnd.readthedocs.io/ +* Astropy's Quantity: https://docs.astropy.org/en/stable/units/ + +Existing and potential consumers of alternative arrays +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +* Dask: https://dask.org/ +* scikit-learn: https://scikit-learn.org/ +* xarray: https://xarray.pydata.org/ +* TensorLy: http://tensorly.org/ + +Existing alternate dtype implementations +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +* ``ndtypes``: https://ndtypes.readthedocs.io/en/latest/ +* Datashape: https://datashape.readthedocs.io +* Plum: https://plum-py.readthedocs.io/ + +Alternate implementations of parts of the NumPy API +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +* ``mkl_random``: https://github.com/IntelPython/mkl_random +* ``mkl_fft``: https://github.com/IntelPython/mkl_fft +* ``bottleneck``: https://github.com/pydata/bottleneck +* ``opt_einsum``: https://github.com/dgasmith/opt_einsum + +Implementation +-------------- + +The implementation of this NEP will require the following steps: + +* Implementation of ``uarray`` multimethods corresponding to the + NumPy API, including classes for overriding ``dtype``, ``ufunc`` + and ``array`` objects, in the ``unumpy`` repository, which are usually + very easy to create. +* Moving backends from ``unumpy`` into the respective array libraries. + +Maintenance can be eased by testing over ``{numpy, unumpy}`` via parameterized +tests. If a new argument is added to a method, the corresponding argument +extractor and replacer will need to be updated within ``unumpy``. + +A lot of argument extractors can be re-used from the existing implementation +of the ``__array_function__`` protocol, and the replacers can be usually +re-used across many methods. + +For the parts of the namespace which are going to be overridable by default, +the main method will need to be renamed and hidden behind a ``uarray`` multimethod. + +Default implementations are usually seen in the documentation using the words +"equivalent to", and thus, are easily available. + +``uarray`` Primer +~~~~~~~~~~~~~~~~~ + +**Note:** *This section will not attempt to go into too much detail about +uarray, that is the purpose of the uarray documentation.* [1]_ +*However, the NumPy community will have input into the design of +uarray, via the issue tracker.* + +``unumpy`` is the interface that defines a set of overridable functions +(multimethods) compatible with the numpy API. To do this, it uses the +``uarray`` library. ``uarray`` is a general purpose tool for creating +multimethods that dispatch to one of multiple different possible backend +implementations. In this sense, it is similar to the ``__array_function__`` +protocol but with the key difference that the backend is explicitly installed +by the end-user and not coupled into the array type. + +Decoupling the backend from the array type gives much more flexibility to +end-users and backend authors. For example, it is possible to: + +* override functions not taking arrays as arguments +* create backends out of source from the array type +* install multiple backends for the same array type + +This decoupling also means that ``uarray`` is not constrained to dispatching +over array-like types. The backend is free to inspect the entire set of +function arguments to determine if it can implement the function e.g. 
``dtype`` +parameter dispatching. + +Defining backends +^^^^^^^^^^^^^^^^^ + +``uarray`` consists of two main protocols: ``__ua_convert__`` and +``__ua_function__``, called in that order, along with ``__ua_domain__``. +``__ua_convert__`` is for conversion and coercion. It has the signature +``(dispatchables, coerce)``, where ``dispatchables`` is an iterable of +``ua.Dispatchable`` objects and ``coerce`` is a boolean indicating whether or +not to force the conversion. ``ua.Dispatchable`` is a simple class consisting +of three simple values: ``type``, ``value``, and ``coercible``. +``__ua_convert__`` returns an iterable of the converted values, or +``NotImplemented`` in the case of failure. + +``__ua_function__`` has the signature ``(func, args, kwargs)`` and defines +the actual implementation of the function. It recieves the function and its +arguments. Returning ``NotImplemented`` will cause a move to the default +implementation of the function if one exists, and failing that, the next +backend. + +Here is what will happen assuming a ``uarray`` multimethod is called: + +1. We canonicalise the arguments so any arguments without a default + are placed in ``*args`` and those with one are placed in ``**kwargs``. +2. We check the list of backends. + + a. If it is empty, we try the default implementation. + +3. We check if the backend's ``__ua_convert__`` method exists. If it exists: + + a. We pass it the output of the dispatcher, + which is an iterable of ``ua.Dispatchable`` objects. + b. We feed this output, along with the arguments, + to the argument replacer. ``NotImplemented`` means we move to 3 + with the next backend. + c. We store the replaced arguments as the new arguments. + +4. We feed the arguments into ``__ua_function__``, and return the output, and + exit if it isn't ``NotImplemented``. +5. If the default implementation exists, we try it with the current backend. +6. On failure, we move to 3 with the next backend. If there are no more + backends, we move to 7. +7. We raise a ``ua.BackendNotImplementedError``. + +Defining overridable multimethods +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +To define an overridable function (a multimethod), one needs a few things: + +1. A dispatcher that returns an iterable of ``ua.Dispatchable`` objects. +2. A reverse dispatcher that replaces dispatchable values with the supplied + ones. +3. A domain. +4. Optionally, a default implementation, which can be provided in terms of + other multimethods. + +As an example, consider the following:: + + import uarray as ua + + def full_argreplacer(args, kwargs, dispatchables): + def full(shape, fill_value, dtype=None, order='C'): + return (shape, fill_value), dict( + dtype=dispatchables[0], + order=order + ) + + return full(*args, **kwargs) + + @ua.create_multimethod(full_argreplacer, domain="numpy") + def full(shape, fill_value, dtype=None, order='C'): + return (ua.Dispatchable(dtype, np.dtype),) + +A large set of examples can be found in the ``unumpy`` repository, [8]_. +This simple act of overriding callables allows us to override: + +* Methods +* Properties, via ``fget`` and ``fset`` +* Entire objects, via ``__get__``. 
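+
+As a hedged sketch of the dispatch sequence above (reusing the ``full``
+multimethod defined in the previous example; the backend shown here is
+hypothetical), a backend that returns ``NotImplemented`` causes the call to
+fall through, and with no other backend or default implementation available a
+``ua.BackendNotImplementedError`` is raised:
+
+.. code:: python
+
+    import uarray as ua
+
+    class EmptyBackend:
+        __ua_domain__ = "numpy"
+
+        @staticmethod
+        def __ua_function__(func, args, kwargs):
+            # Implements nothing: every multimethod call falls through.
+            return NotImplemented
+
+    with ua.set_backend(EmptyBackend()):
+        try:
+            full((2, 2), 0.0)
+        except ua.BackendNotImplementedError:
+            print("no backend provides an implementation of `full`")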
+ +Examples for NumPy +^^^^^^^^^^^^^^^^^^ + +A library that implements a NumPy-like API will use it in the following +manner (as an example):: + + import numpy.overridable as unp + _ua_implementations = {} + + __ua_domain__ = "numpy" + + def __ua_function__(func, args, kwargs): + fn = _ua_implementations.get(func, None) + return fn(*args, **kwargs) if fn is not None else NotImplemented + + def implements(ua_func): + def inner(func): + _ua_implementations[ua_func] = func + return func + + return inner + + @implements(unp.asarray) + def asarray(a, dtype=None, order=None): + # Code here + # Either this method or __ua_convert__ must + # return NotImplemented for unsupported types, + # Or they shouldn't be marked as dispatchable. + + # Provides a default implementation for ones and zeros. + @implements(unp.full) + def full(shape, fill_value, dtype=None, order='C'): + # Code here + +Backward compatibility +---------------------- + +There are no backward incompatible changes proposed in this NEP. + +Alternatives +------------ + +The current alternative to this problem is a combination of NEP-18 [2]_, +NEP-13 [4]_ and NEP-30 [9]_ plus adding more protocols (not yet specified) +in addition to it. Even then, some parts of the NumPy API will remain +non-overridable, so it's a partial alternative. + +The main alternative to vendoring ``unumpy`` is to simply move it into NumPy +completely and not distribute it as a separate package. This would also achieve +the proposed goals, however we prefer to keep it a separate package for now, +for reasons already stated above. + +The third alternative is to move ``unumpy`` into the NumPy organisation and +develop it as a NumPy project. This will also achieve the said goals, and is +also a possibility that can be considered by this NEP. However, the act of +doing an extra ``pip install`` or ``conda install`` may discourage some users +from adopting this method. + +An alternative to requiring opt-in is mainly to *not* override ``np.asarray`` +and ``np.array``, and making the rest of the NumPy API surface overridable, +instead providing ``np.duckarray`` and ``np.asduckarray`` +as duck-array friendly alternatives that used the respective overrides. However, +this has the downside of adding a minor overhead to NumPy calls. + +Discussion +---------- + +* ``uarray`` blogpost: https://labs.quansight.org/blog/2019/07/uarray-update-api-changes-overhead-and-comparison-to-__array_function__/ +* The discussion section of NEP-18: https://numpy.org/neps/nep-0018-array-function-protocol.html#discussion +* NEP-22: https://numpy.org/neps/nep-0022-ndarray-duck-typing-overview.html +* Dask issue #4462: https://github.com/dask/dask/issues/4462 +* PR #13046: https://github.com/numpy/numpy/pull/13046 +* Dask issue #4883: https://github.com/dask/dask/issues/4883 +* Issue #13831: https://github.com/numpy/numpy/issues/13831 +* Discussion PR 1: https://github.com/hameerabbasi/numpy/pull/3 +* Discussion PR 2: https://github.com/hameerabbasi/numpy/pull/4 +* Discussion PR 3: https://github.com/numpy/numpy/pull/14389 + + +References and Footnotes +------------------------ + +.. [1] uarray, A general dispatch mechanism for Python: https://uarray.readthedocs.io + +.. [2] NEP 18 — A dispatch mechanism for NumPy’s high level array functions: https://numpy.org/neps/nep-0018-array-function-protocol.html + +.. [3] NEP 22 — Duck typing for NumPy arrays – high level overview: https://numpy.org/neps/nep-0022-ndarray-duck-typing-overview.html + +.. 
[4] NEP 13 — A Mechanism for Overriding Ufuncs: https://numpy.org/neps/nep-0013-ufunc-overrides.html + +.. [5] Reply to Adding to the non-dispatched implementation of NumPy methods: http://numpy-discussion.10968.n7.nabble.com/Adding-to-the-non-dispatched-implementation-of-NumPy-methods-tp46816p46874.html + +.. [6] Custom Dtype/Units discussion: http://numpy-discussion.10968.n7.nabble.com/Custom-Dtype-Units-discussion-td43262.html + +.. [7] The epic dtype cleanup plan: https://github.com/numpy/numpy/issues/2899 + +.. [8] unumpy: NumPy, but implementation-independent: https://unumpy.readthedocs.io + +.. [9] NEP 30 — Duck Typing for NumPy Arrays - Implementation: https://www.numpy.org/neps/nep-0030-duck-array-protocol.html + +.. [10] http://scipy.github.io/devdocs/fft.html#backend-control + + +Copyright +--------- + +This document has been placed in the public domain. diff --git a/doc/neps/nep-0032-remove-financial-functions.rst b/doc/neps/nep-0032-remove-financial-functions.rst new file mode 100644 index 000000000..a78b11fea --- /dev/null +++ b/doc/neps/nep-0032-remove-financial-functions.rst @@ -0,0 +1,214 @@ +================================================== +NEP 32 — Remove the financial functions from NumPy +================================================== + +:Author: Warren Weckesser <warren.weckesser@gmail.com> +:Status: Accepted +:Type: Standards Track +:Created: 2019-08-30 +:Resolution: https://mail.python.org/pipermail/numpy-discussion/2019-September/080074.html + + +Abstract +-------- + +We propose deprecating and ultimately removing the financial functions [1]_ +from NumPy. The functions will be moved to an independent repository, +and provided to the community as a separate package with the name +``numpy_financial``. + + +Motivation and scope +-------------------- + +The NumPy financial functions [1]_ are the 10 functions ``fv``, ``ipmt``, +``irr``, ``mirr``, ``nper``, ``npv``, ``pmt``, ``ppmt``, ``pv`` and ``rate``. +The functions provide elementary financial calculations such as future value, +net present value, etc. These functions were added to NumPy in 2008 [2]_. + +In May, 2009, a request by Joe Harrington to add a function called ``xirr`` to +the financial functions triggered a long thread about these functions [3]_. +One important point that came up in that thread is that a "real" financial +library must be able to handle real dates. The NumPy financial functions do +not work with actual dates or calendars. The preference for a more capable +library independent of NumPy was expressed several times in that thread. + +In June, 2009, D. L. Goldsmith expressed concerns about the correctness of the +implementations of some of the financial functions [4]_. It was suggested then +to move the financial functions out of NumPy to an independent package. + +In a GitHub issue in 2013 [5]_, Nathaniel Smith suggested moving the financial +functions from the top-level namespace to ``numpy.financial``. He also +suggested giving the functions better names. Responses at that time included +the suggestion to deprecate them and move them from NumPy to a separate +package. This issue is still open. + +Later in 2013 [6]_, it was suggested on the mailing list that these functions +be removed from NumPy. + +The arguments for the removal of these functions from NumPy: + +* They are too specialized for NumPy. +* They are not actually useful for "real world" financial calculations, because + they do not handle real dates and calendars. 
+* The definition of "correctness" for some of these functions seems to be a + matter of convention, and the current NumPy developers do not have the + background to judge their correctness. +* There has been little interest among past and present NumPy developers + in maintaining these functions. + +The main arguments for keeping the functions in NumPy are: + +* Removing these functions will be disruptive for some users. Current users + will have to add the new ``numpy_financial`` package to their dependencies, + and then modify their code to use the new package. +* The functions provided, while not "industrial strength", are apparently + similar to functions provided by spreadsheets and some calculators. Having + them available in NumPy makes it easier for some developers to migrate their + software to Python and NumPy. + +It is clear from comments in the mailing list discussions and in the GitHub +issues that many current NumPy developers believe the benefits of removing +the functions outweigh the costs. For example, from [5]_:: + + The financial functions should probably be part of a separate package + -- Charles Harris + + If there's a better package we can point people to we could just deprecate + them and then remove them entirely... I'd be fine with that too... + -- Nathaniel Smith + + +1 to deprecate them. If no other package exists, it can be created if + someone feels the need for that. + -- Ralf Gommers + + I feel pretty strongly that we should deprecate these. If nobody on numpy’s + core team is interested in maintaining them, then it is purely a drag on + development for NumPy. + -- Stephan Hoyer + +And from the 2013 mailing list discussion, about removing the functions from +NumPy:: + + I am +1 as well, I don't think they should have been included in the first + place. + -- David Cournapeau + +But not everyone was in favor of removal:: + + The fin routines are tiny and don't require much maintenance once + written. If we made an effort (putting up pages with examples of common + financial calculations and collecting those under a topical web page, + then linking to that page from various places and talking it up), I + would think they could attract users looking for a free way to play with + financial scenarios. [...] + So, I would say we keep them. If ours are not the best, we should bring + them up to snuff. + -- Joe Harrington + +For an idea of the maintenance burden of the financial functions, one can +look for all the GitHub issues [7]_ and pull requests [8]_ that have the tag +``component: numpy.lib.financial``. + +One method for measuring the effect of removing these functions is to find +all the packages on GitHub that use them. Such a search can be performed +with the ``python-api-inspect`` service [9]_. A search for all uses of the +NumPy financial functions finds just eight repositories. (See the comments +in [5]_ for the actual SQL query.) + + +Implementation +-------------- + +* Create a new Python package, ``numpy_financial``, to be maintained in the + top-level NumPy github organization. This repository will contain the + definitions and unit tests for the financial functions. The package will + be added to PyPI so it can be installed with ``pip``. +* Deprecate the financial functions in the ``numpy`` namespace, beginning in + NumPy version 1.18. Remove the financial functions from NumPy version 1.20. + + +Backward compatibility +---------------------- + +The removal of these functions breaks backward compatibility, as explained +earlier. 
The effects are mitigated by providing the ``numpy_financial`` +library. + + +Alternatives +------------ + +The following alternatives were mentioned in [5]_: + +* *Maintain the functions as they are (i.e. do nothing).* + A review of the history makes clear that this is not the preference of many + NumPy developers. A recurring comment is that the functions simply do not + belong in NumPy. When that sentiment is combined with the history of bug + reports and the ongoing questions about the correctness of the functions, the + conclusion is that the cleanest solution is deprecation and removal. +* *Move the functions from the ``numpy`` namespace to ``numpy.financial``.* + This was the initial suggestion in [5]_. Such a change does not address the + maintenance issues, and doesn't change the misfit that many developers see + between these functions and NumPy. It causes disruption for the current + users of these functions without addressing what many developers see as the + fundamental problem. + + +Discussion +---------- + +Links to past mailing list discussions, and to relevant GitHub issues and pull +requests, have already been given. The announcement of this NEP was made on +the NumPy-Discussion mailing list on 3 September 2019 [10]_, and on the +PyData mailing list on 8 September 2019 [11]_. The formal proposal to accept +the NEP was made on 19 September 2019 [12]_; a notification was also sent to +PyData (same thread as [11]_). There have been no substantive objections. + + +References and footnotes +------------------------ + +.. [1] Financial functions, + https://numpy.org/doc/1.17/reference/routines.financial.html + +.. [2] Numpy-discussion mailing list, "Simple financial functions for NumPy", + https://mail.python.org/pipermail/numpy-discussion/2008-April/032353.html + +.. [3] Numpy-discussion mailing list, "add xirr to numpy financial functions?", + https://mail.python.org/pipermail/numpy-discussion/2009-May/042645.html + +.. [4] Numpy-discussion mailing list, "Definitions of pv, fv, nper, pmt, and rate", + https://mail.python.org/pipermail/numpy-discussion/2009-June/043188.html + +.. [5] Get financial functions out of main namespace, + https://github.com/numpy/numpy/issues/2880 + +.. [6] Numpy-discussion mailing list, "Deprecation of financial routines", + https://mail.python.org/pipermail/numpy-discussion/2013-August/067409.html + +.. [7] ``component: numpy.lib.financial`` issues, + https://github.com/numpy/numpy/issues?utf8=%E2%9C%93&q=is%3Aissue+label%3A%22component%3A+numpy.lib.financial%22+ + +.. [8] ``component: numpy.lib.financial`` pull requests, + https://github.com/numpy/numpy/pulls?utf8=%E2%9C%93&q=is%3Apr+label%3A%22component%3A+numpy.lib.financial%22+ + +.. [9] Quansight-Labs/python-api-inspect, + https://github.com/Quansight-Labs/python-api-inspect/ + +.. [10] Numpy-discussion mailing list, "NEP 32: Remove the financial functions + from NumPy" + https://mail.python.org/pipermail/numpy-discussion/2019-September/079965.html + +.. [11] PyData mailing list (pydata@googlegroups.com), "NumPy proposal to + remove the financial functions. + https://mail.google.com/mail/u/0/h/1w0mjgixc4rpe/?&th=16d5c38be45f77c4&q=nep+32&v=c&s=q + +.. [12] Numpy-discussion mailing list, "Proposal to accept NEP 32: Remove the + financial functions from NumPy" + https://mail.python.org/pipermail/numpy-discussion/2019-September/080074.html + +Copyright +--------- + +This document has been placed in the public domain. 
diff --git a/doc/neps/nep-0034.rst b/doc/neps/nep-0034.rst
new file mode 100644
index 000000000..d9a9c62f2
--- /dev/null
+++ b/doc/neps/nep-0034.rst
@@ -0,0 +1,141 @@
+===========================================================
+NEP 34 — Disallow inferring ``dtype=object`` from sequences
+===========================================================
+
+:Author: Matti Picus
+:Status: Draft
+:Type: Standards Track
+:Created: 2019-10-10
+
+
+Abstract
+--------
+
+When users create arrays with sequences-of-sequences, they sometimes err in
+matching the lengths of the nested sequences_, commonly called "ragged
+arrays". Here we will refer to them as ragged nested sequences. Creating such
+arrays via ``np.array([<ragged_nested_sequence>])`` with no ``dtype`` keyword
+argument will today default to an ``object``-dtype array. This NEP proposes
+changing that behaviour to raise a ``ValueError`` instead.
+
+Motivation and Scope
+--------------------
+
+Users who specify lists-of-lists when creating a `numpy.ndarray` via
+``np.array`` may mistakenly pass in lists of different lengths. Currently we
+accept this input and automatically create an array with ``dtype=object``. This
+can be confusing, since it is rarely what is desired. Changing the automatic
+dtype detection to never return ``object`` for ragged nested sequences (defined as a
+recursive sequence of sequences, where not all the sequences on the same
+level have the same length) will force users who actually wish to create
+``object`` arrays to specify that explicitly. Note that ``lists``, ``tuples``,
+and ``np.ndarrays`` are all sequences [0]_. See for instance `issue 5303`_.
+
+Usage and Impact
+----------------
+
+After this change, array creation with ragged nested sequences must explicitly
+define a dtype:
+
+    >>> np.array([[1, 2], [1]])
+    ValueError: cannot guess the desired dtype from the input
+
+    >>> np.array([[1, 2], [1]], dtype=object)
+    # succeeds, with no change from current behaviour
+
+The deprecation will affect any call that internally calls ``np.asarray``. For
+instance, the ``assert_equal`` family of functions calls ``np.asarray``, so
+users will have to change code like::
+
+    np.testing.assert_equal(a, [[1, 2], 3])
+
+to::
+
+    np.testing.assert_equal(a, np.array([[1, 2], 3], dtype=object))
+
+Detailed description
+--------------------
+
+To explicitly set the shape of the object array, since it is sometimes hard to
+determine what shape is desired, one could use:
+
+    >>> arr = np.empty(correct_shape, dtype=object)
+    >>> arr[...] = values
+
+We will also reject mixed sequences of non-sequence and sequence, for instance
+all of these will be rejected:
+
+    >>> arr = np.array([np.arange(10), [10]])
+    >>> arr = np.array([[range(3), range(3), range(3)], [range(3), 0, 0]])
+
+Related Work
+------------
+
+`PR 14341`_ tried to raise an error when ragged nested sequences were specified
+with a numeric dtype ``np.array([[1], [2, 3]], dtype=int)`` but failed due to
+false-positives, for instance ``np.array([1, np.array([5])], dtype=int)``.
+
+.. _`PR 14341`: https://github.com/numpy/numpy/pull/14341
+
+Implementation
+--------------
+
+The code to be changed is inside ``PyArray_GetArrayParamsFromObject`` and the
+internal ``discover_dimensions`` function. See `PR 14794`_.
+
+Backward compatibility
+----------------------
+
+Anyone depending on creating object arrays from ragged nested sequences will
+need to modify their code. There will be a deprecation period during which the
+current behaviour will emit a ``DeprecationWarning``.
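+
+As a hedged illustration of the transition (the exact warning category emitted
+during the deprecation period is an assumption here), affected code can
+surface the warning early and switch to the explicit spelling:
+
+.. code:: python
+
+    import warnings
+    import numpy as np
+
+    # Turn warnings into errors so affected call sites are easy to find
+    # in a test suite during the deprecation period.
+    with warnings.catch_warnings():
+        warnings.simplefilter("error")
+        try:
+            np.array([[1, 2], [3]])        # ragged, no dtype given
+        except (Warning, ValueError):
+            pass                           # needs an explicit dtype
+
+    # The future-proof spelling states the intent explicitly:
+    ragged = np.array([[1, 2], [3]], dtype=object)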
+
+Alternatives
+------------
+
+- We could continue with the current situation.
+
+- It was also suggested to add a kwarg ``depth`` to array creation, or perhaps
+  to add another array creation API function ``ragged_array_object``. The goal
+  was to eliminate the ambiguity in creating an object array from ``array([[1,
+  2], [1]], dtype=object)``: should the returned array have a shape of
+  ``(1,)``, or ``(2,)``? This NEP does not deal with that issue, and only
+  deprecates the use of ``array`` with no ``dtype=object`` for ragged nested
+  sequences. Users of ragged nested sequences may face another deprecation
+  cycle in the future. Rationale: we expect that there are very few users who
+  intend to use ragged arrays like that; this was never intended as a use case
+  of NumPy arrays. Users are likely better off with `another library`_ or just
+  using lists of lists.
+
+- It was also suggested to deprecate all automatic creation of ``object``-dtype
+  arrays, which would require adding an explicit ``dtype=object`` for something
+  like ``np.array([Decimal(10), Decimal(10)])``. This too is out of scope for
+  the current NEP. Rationale: it's harder to assess the impact of this larger
+  change, and we're not sure how many users it may impact.
+
+Discussion
+----------
+
+Comments on `issue 5303`_ indicate this is unintended behaviour as far back as
+2014. Suggestions to change it have been made in the ensuing years, but none
+have stuck. The WIP implementation in `PR 14794`_ seems to point to the
+viability of this approach.
+
+References and Footnotes
+------------------------
+
+.. _`issue 5303`: https://github.com/numpy/numpy/issues/5303
+.. _sequences: https://docs.python.org/3.7/glossary.html#term-sequence
+.. _`PR 14794`: https://github.com/numpy/numpy/pull/14794
+.. _`another library`: https://github.com/scikit-hep/awkward-array
+
+.. [0] ``np.ndarrays`` are not recursed into, rather their shape is used
+   directly. This will not emit warnings::
+
+      ragged = np.array([[1], [1, 2, 3]], dtype=object)
+      np.array([ragged, ragged])  # no dtype needed
+
+Copyright
+---------
+
+This document has been placed in the public domain.
diff --git a/doc/neps/nep-template.rst b/doc/neps/nep-template.rst
index c3d34ea46..42f717c7a 100644
--- a/doc/neps/nep-template.rst
+++ b/doc/neps/nep-template.rst
@@ -24,6 +24,26 @@ the existing problem, who it affects, what it is trying to solve, and why.
This section should explicitly address the scope of and key requirements for
the proposed change.
+Usage and Impact
+----------------
+
+This section describes how users of NumPy will use features described in this
+NEP. It should consist mainly of code examples that wouldn't be possible
+without acceptance and implementation of this NEP, as well as the impact the
+proposed changes would have on the ecosystem. This section should be written
+from the perspective of the users of NumPy, and the benefits it will provide
+them; and as such, it should include implementation details only if
+necessary to explain the functionality.
+
+Backward compatibility
+----------------------
+
+This section describes the ways in which the NEP breaks backward compatibility.
+
+The mailing list post will contain the NEP up to and including this section.
+Its purpose is to provide a high-level summary to users who are not interested
+in detailed technical discussion, but may have opinions around, e.g., usage and
+impact.
Detailed description
--------------------
@@ -54,12 +74,6 @@ be linked to from here.
(A NEP does not need to be implemented in a single pull request if it makes sense to implement it in discrete phases). -Backward compatibility ----------------------- - -This section describes the ways in which the NEP breaks backward compatibility. - - Alternatives ------------ diff --git a/doc/newdtype_example/example.py b/doc/newdtype_example/example.py deleted file mode 100644 index 6be9caa75..000000000 --- a/doc/newdtype_example/example.py +++ /dev/null @@ -1,18 +0,0 @@ -from __future__ import division, absolute_import, print_function - -import floatint.floatint as ff -import numpy as np - -# Setting using array is hard because -# The parser doesn't stop at tuples always -# So, the setitem code will be called with scalars on the -# wrong shaped array. -# But we can get a view as an ndarray of the given type: -g = np.array([1, 2, 3, 4, 5, 6, 7, 8]).view(ff.floatint_type) - -# Now, the elements will be the scalar type associated -# with the ndarray. -print(g[0]) -print(type(g[1])) - -# Now, you need to register ufuncs and more arrfuncs to do useful things... diff --git a/doc/newdtype_example/floatint.c b/doc/newdtype_example/floatint.c deleted file mode 100644 index 0cc198388..000000000 --- a/doc/newdtype_example/floatint.c +++ /dev/null @@ -1,152 +0,0 @@ - -#include "Python.h" -#include "structmember.h" /* for offset of macro if needed */ -#include "numpy/arrayobject.h" - - -/* Use a Python float as the canonical type being added -*/ - -typedef struct _floatint { - PyObject_HEAD - npy_int32 first; - npy_int32 last; -} PyFloatIntObject; - -static PyTypeObject PyFloatInt_Type = { - PyObject_HEAD_INIT(NULL) - 0, /*ob_size*/ - "floatint.floatint", /*tp_name*/ - sizeof(PyFloatIntObject), /*tp_basicsize*/ -}; - -static PyArray_ArrFuncs _PyFloatInt_Funcs; - -#define _ALIGN(type) offsetof(struct {char c; type v;},v) - -/* The scalar-type */ - -static PyArray_Descr _PyFloatInt_Dtype = { - PyObject_HEAD_INIT(NULL) - &PyFloatInt_Type, - 'f', - '0', - '=', - 0, - 0, - sizeof(double), - _ALIGN(double), - NULL, - NULL, - NULL, - &_PyFloatInt_Funcs -}; - -static void -twoint_copyswap(void *dst, void *src, int swap, void *arr) -{ - if (src != NULL) { - memcpy(dst, src, sizeof(double)); - } - - if (swap) { - register char *a, *b, c; - a = (char *)dst; - b = a + 7; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b = c; - } -} - -static PyObject * -twoint_getitem(char *ip, PyArrayObject *ap) { - npy_int32 a[2]; - - if ((ap==NULL) || PyArray_ISBEHAVED_RO(ap)) { - a[0] = *((npy_int32 *)ip); - a[1] = *((npy_int32 *)ip + 1); - } - else { - ap->descr->f->copyswap(a, ip, !PyArray_ISNOTSWAPPED(ap), ap); - } - return Py_BuildValue("(ii)", a[0], a[1]); -} - -static int -twoint_setitem(PyObject *op, char *ov, PyArrayObject *ap) { - npy_int32 a[2]; - - if (!PyTuple_Check(op)) { - PyErr_SetString(PyExc_TypeError, "must be a tuple"); - return -1; - } - if (!PyArg_ParseTuple(op, "ii", a, a+1)) return -1; - - if (ap == NULL || PyArray_ISBEHAVED(ap)) { - memcpy(ov, a, sizeof(double)); - } - else { - ap->descr->f->copyswap(ov, a, !PyArray_ISNOTSWAPPED(ap), ap); - } - return 0; -} - -static PyArray_Descr * _register_dtype(void) -{ - int userval; - PyArray_InitArrFuncs(&_PyFloatInt_Funcs); - /* Add copyswap, - nonzero, getitem, setitem*/ - _PyFloatInt_Funcs.copyswap = twoint_copyswap; - _PyFloatInt_Funcs.getitem = (PyArray_GetItemFunc *)twoint_getitem; - _PyFloatInt_Funcs.setitem = (PyArray_SetItemFunc *)twoint_setitem; - _PyFloatInt_Dtype.ob_type = 
&PyArrayDescr_Type; - - userval = PyArray_RegisterDataType(&_PyFloatInt_Dtype); - return PyArray_DescrFromType(userval); -} - - -/* Initialization function for the module (*must* be called init<name>) */ - -PyMODINIT_FUNC initfloatint(void) { - PyObject *m, *d; - PyArray_Descr *dtype; - - /* Create the module and add the functions */ - m = Py_InitModule("floatint", NULL); - - /* Import the array objects */ - import_array(); - - - /* Initialize the new float type */ - - /* Add some symbolic constants to the module */ - d = PyModule_GetDict(m); - - if (PyType_Ready(&PyFloat_Type) < 0) return; - PyFloatInt_Type.tp_base = &PyFloat_Type; - /* This is only needed because we are sub-typing the - Float type and must pre-set some function pointers - to get PyType_Ready to fill in the rest. - */ - PyFloatInt_Type.tp_alloc = PyType_GenericAlloc; - PyFloatInt_Type.tp_new = PyFloat_Type.tp_new; - PyFloatInt_Type.tp_dealloc = PyFloat_Type.tp_dealloc; - PyFloatInt_Type.tp_free = PyObject_Del; - if (PyType_Ready(&PyFloatInt_Type) < 0) return; - /* End specific code */ - - - dtype = _register_dtype(); - Py_XINCREF(dtype); - if (dtype != NULL) { - PyDict_SetItemString(d, "floatint_type", (PyObject *)dtype); - } - Py_INCREF(&PyFloatInt_Type); - PyDict_SetItemString(d, "floatint", (PyObject *)&PyFloatInt_Type); - return; -} diff --git a/doc/newdtype_example/floatint/__init__.py b/doc/newdtype_example/floatint/__init__.py deleted file mode 100644 index 1d0f69b67..000000000 --- a/doc/newdtype_example/floatint/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from __future__ import division, absolute_import, print_function diff --git a/doc/newdtype_example/setup.py b/doc/newdtype_example/setup.py deleted file mode 100644 index d7ab040a1..000000000 --- a/doc/newdtype_example/setup.py +++ /dev/null @@ -1,13 +0,0 @@ -from __future__ import division, print_function - -from numpy.distutils.core import setup - -def configuration(parent_package = '', top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('floatint', parent_package, top_path) - - config.add_extension('floatint', - sources = ['floatint.c']) - return config - -setup(configuration=configuration) diff --git a/doc/records.rst.txt b/doc/records.rst.txt index a608880d7..3c0d55216 100644 --- a/doc/records.rst.txt +++ b/doc/records.rst.txt @@ -50,7 +50,7 @@ New possibilities for the "data-type" **Dictionary (keys "names", "titles", and "formats")** - This will be converted to a ``PyArray_VOID`` type with corresponding + This will be converted to a ``NPY_VOID`` type with corresponding fields parameter (the formats list will be converted to actual ``PyArray_Descr *`` objects). @@ -58,10 +58,10 @@ New possibilities for the "data-type" **Objects (anything with an .itemsize and .fields attribute)** If its an instance of (a sub-class of) void type, then a new ``PyArray_Descr*`` structure is created corresponding to its - typeobject (and ``PyArray_VOID``) typenumber. If the type is + typeobject (and ``NPY_VOID``) typenumber. If the type is registered, then the registered type-number is used. - Otherwise a new ``PyArray_VOID PyArray_Descr*`` structure is created + Otherwise a new ``NPY_VOID PyArray_Descr*`` structure is created and filled ->elsize and ->fields filled in appropriately. The itemsize attribute must return a number > 0. 
The fields diff --git a/doc/release/1.18.0-notes.rst b/doc/release/1.18.0-notes.rst deleted file mode 100644 index f20d5e3fe..000000000 --- a/doc/release/1.18.0-notes.rst +++ /dev/null @@ -1,43 +0,0 @@ -========================== -NumPy 1.18.0 Release Notes -========================== - - -Highlights -========== - - -New functions -============= - - -Deprecations -============ - - -Future Changes -============== - - -Expired deprecations -==================== - - -Compatibility notes -=================== - - -C API changes -============= - - -New Features -============ - - -Improvements -============ - - -Changes -======= diff --git a/doc/release/time_based_proposal.rst b/doc/release/time_based_proposal.rst deleted file mode 100644 index 2eb13562d..000000000 --- a/doc/release/time_based_proposal.rst +++ /dev/null @@ -1,129 +0,0 @@ -.. vim:syntax=rst - -Introduction -============ - -This document proposes some enhancements for numpy and scipy releases. -Successive numpy and scipy releases are too far apart from a time point of -view - some people who are in the numpy release team feel that it cannot -improve without a bit more formal release process. The main proposal is to -follow a time-based release, with expected dates for code freeze, beta and rc. -The goal is two folds: make release more predictable, and move the code forward. - -Rationale -========= - -Right now, the release process of numpy is relatively organic. When some -features are there, we may decide to make a new release. Because there is not -fixed schedule, people don't really know when new features and bug fixes will -go into a release. More significantly, having an expected release schedule -helps to *coordinate* efforts: at the beginning of a cycle, everybody can jump -in and put new code, even break things if needed. But after some point, only -bug fixes are accepted: this makes beta and RC releases much easier; calming -things down toward the release date helps focusing on bugs and regressions - -Proposal -======== - -Time schedule -------------- - -The proposed schedule is to release numpy every 9 weeks - the exact period can -be tweaked if it ends up not working as expected. There will be several stages -for the cycle: - - * Development: anything can happen (by anything, we mean as currently - done). The focus is on new features, refactoring, etc... - - * Beta: no new features. No bug fixing which requires heavy changes. - regression fixes which appear on supported platforms and were not - caught earlier. - - * Polish/RC: only docstring changes and blocker regressions are allowed. 
- -The schedule would be as follows: - - +------+-----------------+-----------------+------------------+ - | Week | 1.3.0 | 1.4.0 | Release time | - +======+=================+=================+==================+ - | 1 | Development | | | - +------+-----------------+-----------------+------------------+ - | 2 | Development | | | - +------+-----------------+-----------------+------------------+ - | 3 | Development | | | - +------+-----------------+-----------------+------------------+ - | 4 | Development | | | - +------+-----------------+-----------------+------------------+ - | 5 | Development | | | - +------+-----------------+-----------------+------------------+ - | 6 | Development | | | - +------+-----------------+-----------------+------------------+ - | 7 | Beta | | | - +------+-----------------+-----------------+------------------+ - | 8 | Beta | | | - +------+-----------------+-----------------+------------------+ - | 9 | Beta | | 1.3.0 released | - +------+-----------------+-----------------+------------------+ - | 10 | Polish | Development | | - +------+-----------------+-----------------+------------------+ - | 11 | Polish | Development | | - +------+-----------------+-----------------+------------------+ - | 12 | Polish | Development | | - +------+-----------------+-----------------+------------------+ - | 13 | Polish | Development | | - +------+-----------------+-----------------+------------------+ - | 14 | | Development | | - +------+-----------------+-----------------+------------------+ - | 15 | | Development | | - +------+-----------------+-----------------+------------------+ - | 16 | | Beta | | - +------+-----------------+-----------------+------------------+ - | 17 | | Beta | | - +------+-----------------+-----------------+------------------+ - | 18 | | Beta | 1.4.0 released | - +------+-----------------+-----------------+------------------+ - -Each stage can be defined as follows: - - +------------------+-------------+----------------+----------------+ - | | Development | Beta | Polish | - +==================+=============+================+================+ - | Python Frozen | | slushy | Y | - +------------------+-------------+----------------+----------------+ - | Docstring Frozen | | slushy | thicker slush | - +------------------+-------------+----------------+----------------+ - | C code Frozen | | thicker slush | thicker slush | - +------------------+-------------+----------------+----------------+ - -Terminology: - - * slushy: you can change it if you beg the release team and it's really - important and you coordinate with docs/translations; no "big" - changes. - - * thicker slush: you can change it if it's an open bug marked - showstopper for the Polish release, you beg the release team, the - change is very very small yet very very important, and you feel - extremely guilty about your transgressions. - -The different frozen states are intended to be gradients. The exact meaning is -decided by the release manager: he has the last word on what's go in, what -doesn't. The proposed schedule means that there would be at most 12 weeks -between putting code into the source code repository and being released. - -Release team ------------- - -For every release, there would be at least one release manager. We propose to -rotate the release manager: rotation means it is not always the same person -doing the dirty job, and it should also keep the release manager honest. 
- -References -========== - - * Proposed schedule for Gnome from Havoc Pennington (one of the core - GTK and Gnome manager): - https://mail.gnome.org/archives/gnome-hackers/2002-June/msg00041.html - The proposed schedule is heavily based on this email - - * https://wiki.gnome.org/ReleasePlanning/Freezes diff --git a/doc/release/upcoming_changes/10151.improvement.rst b/doc/release/upcoming_changes/10151.improvement.rst new file mode 100644 index 000000000..3706a5132 --- /dev/null +++ b/doc/release/upcoming_changes/10151.improvement.rst @@ -0,0 +1,9 @@ +Different C numeric types of the same size have unique names +------------------------------------------------------------ +On any given platform, two of ``np.intc``, ``np.int_``, and ``np.longlong`` +would previously appear indistinguishable through their ``repr``, despite +their corresponding ``dtype`` having different properties. +A similar problem existed for the unsigned counterparts to these types, and on +some platforms for ``np.double`` and ``np.longdouble`` + +These types now always print with a unique ``__name__``. diff --git a/doc/release/upcoming_changes/12284.new_feature.rst b/doc/release/upcoming_changes/12284.new_feature.rst new file mode 100644 index 000000000..25321cd9b --- /dev/null +++ b/doc/release/upcoming_changes/12284.new_feature.rst @@ -0,0 +1,5 @@ + +Add our own ``*.pxd`` cython import file +-------------------------------------------- +Added a ``numpy/__init__.pxd`` file. It will be used for `cimport numpy` + diff --git a/doc/release/upcoming_changes/13605.deprecation.rst b/doc/release/upcoming_changes/13605.deprecation.rst new file mode 100644 index 000000000..bff12e965 --- /dev/null +++ b/doc/release/upcoming_changes/13605.deprecation.rst @@ -0,0 +1,9 @@ +`np.fromfile` and `np.fromstring` will error on bad data +-------------------------------------------------------- + +In future numpy releases, the functions `np.fromfile` and `np.fromstring` +will throw an error when parsing bad data. +This will now give a ``DeprecationWarning`` where previously partial or +even invalid data was silently returned. This deprecation also affects +the C defined functions c:func:`PyArray_FromString`` and +c:func:`PyArray_FromFile` diff --git a/doc/release/upcoming_changes/13610.improvement.rst b/doc/release/upcoming_changes/13610.improvement.rst new file mode 100644 index 000000000..6f97b43ad --- /dev/null +++ b/doc/release/upcoming_changes/13610.improvement.rst @@ -0,0 +1,5 @@ +``argwhere`` now produces a consistent result on 0d arrays +---------------------------------------------------------- +On N-d arrays, `numpy.argwhere` now always produces an array of shape +``(n_non_zero, arr.ndim)``, even when ``arr.ndim == 0``. Previously, the +last axis would have a dimension of 1 in this case. diff --git a/doc/release/upcoming_changes/13794.new_function.rst b/doc/release/upcoming_changes/13794.new_function.rst new file mode 100644 index 000000000..cf8b38bb0 --- /dev/null +++ b/doc/release/upcoming_changes/13794.new_function.rst @@ -0,0 +1,5 @@ +Multivariate hypergeometric distribution added to `numpy.random` +---------------------------------------------------------------- +The method `multivariate_hypergeometric` has been added to the class +`numpy.random.Generator`. This method generates random variates from +the multivariate hypergeometric probability distribution. 
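A minimal usage sketch of the new method described above (the urn counts below are invented for illustration; the method lives on the new ``Generator`` API, so a ``Generator`` instance is needed)::

    >>> import numpy as np
    >>> rng = np.random.default_rng()
    >>> # an urn with 16, 8 and 4 marbles of three colors; draw 6 without replacement
    >>> sample = rng.multivariate_hypergeometric([16, 8, 4], 6)
    >>> int(sample.sum())   # the per-color counts always add up to the draw size
    6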
diff --git a/doc/release/upcoming_changes/13829.improvement.rst b/doc/release/upcoming_changes/13829.improvement.rst new file mode 100644 index 000000000..ede1b2a53 --- /dev/null +++ b/doc/release/upcoming_changes/13829.improvement.rst @@ -0,0 +1,6 @@ +Add ``axis`` argument for ``random.permutation`` and ``random.shuffle`` +----------------------------------------------------------------------- + +Previously the ``random.permutation`` and ``random.shuffle`` functions +could only shuffle an array along the first axis; they now have a +new argument ``axis`` which allows shuffling along a specified axis. diff --git a/doc/release/upcoming_changes/13899.change.rst b/doc/release/upcoming_changes/13899.change.rst new file mode 100644 index 000000000..da8277347 --- /dev/null +++ b/doc/release/upcoming_changes/13899.change.rst @@ -0,0 +1,4 @@ +Incorrect ``threshold`` in ``np.set_printoptions`` raises ``TypeError`` or ``ValueError`` +----------------------------------------------------------------------------------------- +Previously an incorrect ``threshold`` raised ``ValueError``; it now raises ``TypeError`` +for non-numeric types and ``ValueError`` for ``nan`` values. diff --git a/doc/release/upcoming_changes/14036.deprecation.rst b/doc/release/upcoming_changes/14036.deprecation.rst new file mode 100644 index 000000000..3d997b9a2 --- /dev/null +++ b/doc/release/upcoming_changes/14036.deprecation.rst @@ -0,0 +1,4 @@ +Deprecate `PyArray_As1D`, `PyArray_As2D` +---------------------------------------- +`PyArray_As1D` and `PyArray_As2D` are deprecated; use +`PyArray_AsCArray` instead
\ No newline at end of file diff --git a/doc/release/upcoming_changes/14036.expired.rst b/doc/release/upcoming_changes/14036.expired.rst new file mode 100644 index 000000000..05164aa38 --- /dev/null +++ b/doc/release/upcoming_changes/14036.expired.rst @@ -0,0 +1,2 @@ +* ``PyArray_As1D`` and ``PyArray_As2D`` have been removed in favor of + ``PyArray_AsCArray`` diff --git a/doc/release/upcoming_changes/14039.expired.rst b/doc/release/upcoming_changes/14039.expired.rst new file mode 100644 index 000000000..effee0626 --- /dev/null +++ b/doc/release/upcoming_changes/14039.expired.rst @@ -0,0 +1,2 @@ +* ``np.rank`` has been removed. This was deprecated in NumPy 1.10 + and has been replaced by ``np.ndim``. diff --git a/doc/release/upcoming_changes/14100.expired.rst b/doc/release/upcoming_changes/14100.expired.rst new file mode 100644 index 000000000..e9ea9eeb4 --- /dev/null +++ b/doc/release/upcoming_changes/14100.expired.rst @@ -0,0 +1,3 @@ +* ``PyArray_FromDimsAndDataAndDescr`` and ``PyArray_FromDims`` have been + removed (they will always raise an error). Use ``PyArray_NewFromDescr`` + and ``PyArray_SimpleNew`` instead. diff --git a/doc/release/upcoming_changes/14181.deprecation.rst b/doc/release/upcoming_changes/14181.deprecation.rst new file mode 100644 index 000000000..9979b2246 --- /dev/null +++ b/doc/release/upcoming_changes/14181.deprecation.rst @@ -0,0 +1,3 @@ +Deprecate `np.alen` +------------------- +`np.alen` was deprecated. Use `len` instead. diff --git a/doc/release/upcoming_changes/14227.improvement.rst b/doc/release/upcoming_changes/14227.improvement.rst new file mode 100644 index 000000000..6e45f47c1 --- /dev/null +++ b/doc/release/upcoming_changes/14227.improvement.rst @@ -0,0 +1,3 @@ +Add complex number support for ``numpy.fromstring`` +--------------------------------------------------- +Now ``numpy.fromstring`` can read complex numbers. diff --git a/doc/release/upcoming_changes/14248.change.rst b/doc/release/upcoming_changes/14248.change.rst new file mode 100644 index 000000000..9ae0f16bc --- /dev/null +++ b/doc/release/upcoming_changes/14248.change.rst @@ -0,0 +1,10 @@ +`numpy.distutils`: append behavior changed for LDFLAGS and similar +------------------------------------------------------------------ +`numpy.distutils` has always overridden rather than appended to ``LDFLAGS`` and +other similar environment variables for compiling Fortran extensions. Now +the default behavior has changed to appending, which is the expected behavior +in most situations. To preserve the old (overwriting) behavior, set the +``NPY_DISTUTILS_APPEND_FLAGS`` environment variable to 0. This applies to: +``LDFLAGS``, ``F77FLAGS``, ``F90FLAGS``, ``FREEFLAGS``, ``FOPT``, ``FDEBUG``, +and ``FFLAGS``. NumPy 1.16 and 1.17 gave build warnings in situations where this +change in behavior would have affected the compile flags used. diff --git a/doc/release/upcoming_changes/14255.improvement.rst b/doc/release/upcoming_changes/14255.improvement.rst new file mode 100644 index 000000000..e17835efd --- /dev/null +++ b/doc/release/upcoming_changes/14255.improvement.rst @@ -0,0 +1,4 @@ +`numpy.unique` has consistent axes order (except the chosen one) when ``axis`` is not None +------------------------------------------------------------------------------------------ +`numpy.unique` now uses ``moveaxis`` instead of ``swapaxes`` internally, so that the +ordering of the axes other than the chosen one is preserved.
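To make the ``numpy.unique`` note above concrete, here is a small sketch (the all-zeros array is purely illustrative): removing duplicates along ``axis=1`` leaves the other axes in place and in their original order::

    >>> import numpy as np
    >>> a = np.zeros((3, 4, 5))
    >>> np.unique(a, axis=1).shape   # axes 0 and 2 keep their positions
    (3, 1, 5)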
diff --git a/doc/release/upcoming_changes/14256.expired.rst b/doc/release/upcoming_changes/14256.expired.rst new file mode 100644 index 000000000..229514171 --- /dev/null +++ b/doc/release/upcoming_changes/14256.expired.rst @@ -0,0 +1,3 @@ +* ``numeric.loads``, ``numeric.load``, ``np.ma.dump``, + ``np.ma.dumps``, ``np.ma.load``, ``np.ma.loads`` are removed; + use ``pickle`` methods instead
\ No newline at end of file diff --git a/doc/release/upcoming_changes/14259.expired.rst b/doc/release/upcoming_changes/14259.expired.rst new file mode 100644 index 000000000..fee44419b --- /dev/null +++ b/doc/release/upcoming_changes/14259.expired.rst @@ -0,0 +1,6 @@ +* ``arrayprint.FloatFormat`` and ``arrayprint.LongFloatFormat`` have been removed; + use ``FloatingFormat`` instead +* ``arrayprint.ComplexFormat`` and ``arrayprint.LongComplexFormat`` have been + removed; use ``ComplexFloatingFormat`` instead +* ``arrayprint.StructureFormat`` has been removed; use ``StructureVoidFormat`` + instead
\ No newline at end of file diff --git a/doc/release/upcoming_changes/14325.expired.rst b/doc/release/upcoming_changes/14325.expired.rst new file mode 100644 index 000000000..348b3d524 --- /dev/null +++ b/doc/release/upcoming_changes/14325.expired.rst @@ -0,0 +1,2 @@ +* ``np.testing.rand`` has been removed. This was deprecated in NumPy 1.11 + and has been replaced by ``np.random.rand``. diff --git a/doc/release/upcoming_changes/14335.expired.rst b/doc/release/upcoming_changes/14335.expired.rst new file mode 100644 index 000000000..53598cea1 --- /dev/null +++ b/doc/release/upcoming_changes/14335.expired.rst @@ -0,0 +1,2 @@ +* Class ``SafeEval`` in ``numpy/lib/utils.py`` has been removed. This was deprecated in NumPy 1.10. + Use ``np.safe_eval`` instead.
\ No newline at end of file diff --git a/doc/release/upcoming_changes/14393.c_api.rst b/doc/release/upcoming_changes/14393.c_api.rst new file mode 100644 index 000000000..0afd27584 --- /dev/null +++ b/doc/release/upcoming_changes/14393.c_api.rst @@ -0,0 +1,5 @@ +PyDataType_ISUNSIZED(descr) now returns False for structured datatypes +---------------------------------------------------------------------- +Previously this returned True for any datatype of itemsize 0, but now this +returns false for the non-flexible datatype with itemsize 0, ``np.dtype([])``. + diff --git a/doc/release/upcoming_changes/14464.improvement.rst b/doc/release/upcoming_changes/14464.improvement.rst new file mode 100644 index 000000000..36ee4090b --- /dev/null +++ b/doc/release/upcoming_changes/14464.improvement.rst @@ -0,0 +1,6 @@ +`numpy.matmul` with boolean output now converts to boolean values +----------------------------------------------------------------- +Calling `numpy.matmul` where the output is a boolean array would fill the array +with uint8 equivalents of the result, rather than 0/1. Now it forces the output +to 0 or 1 (``NPY_TRUE`` or ``NPY_FALSE``). + diff --git a/doc/release/upcoming_changes/14498.change.rst b/doc/release/upcoming_changes/14498.change.rst new file mode 100644 index 000000000..fd784e289 --- /dev/null +++ b/doc/release/upcoming_changes/14498.change.rst @@ -0,0 +1,7 @@ +Remove ``numpy.random.entropy`` without a deprecation +----------------------------------------------------- + +``numpy.random.entropy`` was added to the `numpy.random` namespace in 1.17.0. +It was meant to be a private c-extension module, but was exposed as public. +It has been replaced by `numpy.random.SeedSequence` so the module was +completely removed. diff --git a/doc/release/upcoming_changes/14501.improvement.rst b/doc/release/upcoming_changes/14501.improvement.rst new file mode 100644 index 000000000..f397ecccf --- /dev/null +++ b/doc/release/upcoming_changes/14501.improvement.rst @@ -0,0 +1,6 @@ +`numpy.random.randint` produced incorrect value when the range was ``2**32`` +---------------------------------------------------------------------------- +The implementation introduced in 1.17.0 had an incorrect check when +determining whether to use the 32-bit path or the full 64-bit +path that incorrectly redirected random integer generation with a high - low +range of ``2**32`` to the 64-bit generator. diff --git a/doc/release/upcoming_changes/14510.compatibility.rst b/doc/release/upcoming_changes/14510.compatibility.rst new file mode 100644 index 000000000..fc5edbc39 --- /dev/null +++ b/doc/release/upcoming_changes/14510.compatibility.rst @@ -0,0 +1,12 @@ +`numpy.lib.recfunctions.drop_fields` can no longer return None +-------------------------------------------------------------- +If ``drop_fields`` is used to drop all fields, previously the array would +be completely discarded and None returned. Now it returns an array of the +same shape as the input, but with no fields. 
The old behavior can be retained +with:: + + dropped_arr = drop_fields(arr, ['a', 'b']) + if dropped_arr.dtype.names == (): + dropped_arr = None + +converting the empty recarray to None diff --git a/doc/release/upcoming_changes/14518.change.rst b/doc/release/upcoming_changes/14518.change.rst new file mode 100644 index 000000000..ba3844c85 --- /dev/null +++ b/doc/release/upcoming_changes/14518.change.rst @@ -0,0 +1,18 @@ +Add options to quiet build configuration and build with ``-Werror`` +------------------------------------------------------------------- +Added two new configuration options. During the ``build_src`` subcommand, as +part of configuring NumPy, the files ``_numpyconfig.h`` and ``config.h`` are +created by probing support for various runtime functions and routines. +Previously, the very verbose compiler output during this stage clouded more +important information. By default the output is silenced. Running ``runtests.py +--debug-info`` will add ``--verbose-cfg`` to the ``build_src`` subcommand, +which will restore the previous behaviour. + +Adding ``CFLAGS=-Werror`` to turn warnings into errors would trigger errors +during the configuration. Now ``runtests.py --warn-error`` will add +``--warn-error`` to the ``build`` subcommand, which will percolate to the +``build_ext`` and ``build_lib`` subcommands. This will add the compiler flag +to those stages and turn compiler warnings into errors while actually building +NumPy itself, avoiding the ``build_src`` subcommand compiler calls. + +(`gh-14527 <https://github.com/numpy/numpy/pull/14527>`__) diff --git a/doc/release/upcoming_changes/14567.expired.rst b/doc/release/upcoming_changes/14567.expired.rst new file mode 100644 index 000000000..59cb600fb --- /dev/null +++ b/doc/release/upcoming_changes/14567.expired.rst @@ -0,0 +1,5 @@ +The files ``numpy/testing/decorators.py``, ``numpy/testing/noseclasses.py`` +and ``numpy/testing/nosetester.py`` have been removed. They were never +meant to be public (all relevant objects are present in the +``numpy.testing`` namespace), and importing them has given a deprecation +warning since NumPy 1.15.0 diff --git a/doc/release/upcoming_changes/14583.expired.rst b/doc/release/upcoming_changes/14583.expired.rst new file mode 100644 index 000000000..1fad06309 --- /dev/null +++ b/doc/release/upcoming_changes/14583.expired.rst @@ -0,0 +1,2 @@ +* Remove deprecated support for boolean and empty condition lists in + `numpy.select` diff --git a/doc/release/upcoming_changes/14596.expired.rst b/doc/release/upcoming_changes/14596.expired.rst new file mode 100644 index 000000000..3831d5401 --- /dev/null +++ b/doc/release/upcoming_changes/14596.expired.rst @@ -0,0 +1,2 @@ +* Array order only accepts 'C', 'F', 'A', and 'K'. More permissive options + were deprecated in NumPy 1.11. diff --git a/doc/release/upcoming_changes/14620.expired.rst b/doc/release/upcoming_changes/14620.expired.rst new file mode 100644 index 000000000..e35589b53 --- /dev/null +++ b/doc/release/upcoming_changes/14620.expired.rst @@ -0,0 +1 @@ +* np.linspace param num must be an integer. This was deprecated in NumPy 1.12. diff --git a/doc/release/upcoming_changes/14682.expired.rst b/doc/release/upcoming_changes/14682.expired.rst new file mode 100644 index 000000000..e9a8107ec --- /dev/null +++ b/doc/release/upcoming_changes/14682.expired.rst @@ -0,0 +1,2 @@ +* UFuncs with multiple outputs must use a tuple for the `out` kwarg. This + finishes a deprecation started in NumPy 1.10. 
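A minimal sketch of the last point above, using ``np.divmod`` (a ufunc with two outputs); the input values are arbitrary, and passing a list or a bare array as ``out`` now raises an error::

    >>> import numpy as np
    >>> x = np.arange(6)
    >>> q, r = np.empty_like(x), np.empty_like(x)
    >>> np.divmod(x, 4, out=(q, r))   # ``out`` must be a tuple here
    (array([0, 0, 0, 0, 1, 1]), array([0, 1, 2, 3, 0, 1]))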
diff --git a/doc/release/upcoming_changes/14717.compatibility.rst b/doc/release/upcoming_changes/14717.compatibility.rst new file mode 100644 index 000000000..f6f0ec8e5 --- /dev/null +++ b/doc/release/upcoming_changes/14717.compatibility.rst @@ -0,0 +1,4 @@ +``numpy.argmin/argmax/min/max`` returns ``NaT`` if it exists in array +--------------------------------------------------------------------- +``numpy.argmin``, ``numpy.argmax``, ``numpy.min``, and ``numpy.max`` will return +``NaT`` if it exists in the array. diff --git a/doc/release/upcoming_changes/14720.deprecation.rst b/doc/release/upcoming_changes/14720.deprecation.rst new file mode 100644 index 000000000..46ad6d8f7 --- /dev/null +++ b/doc/release/upcoming_changes/14720.deprecation.rst @@ -0,0 +1,8 @@ +Deprecate the financial functions +--------------------------------- +In accordance with +`NEP-32 <https://numpy.org/neps/nep-0032-remove-financial-functions.html>`_, +the functions `fv`, `ipmt`, `irr`, `mirr`, `nper`, `npv`, `pmt`, `ppmt`, +`pv` and `rate` are deprecated, and will be removed from NumPy 1.20. +The replacement for these functions is the Python package +`numpy-financial <https://pypi.org/project/numpy-financial>`_. diff --git a/doc/release/upcoming_changes/14730.improvement.rst b/doc/release/upcoming_changes/14730.improvement.rst new file mode 100644 index 000000000..ee073d234 --- /dev/null +++ b/doc/release/upcoming_changes/14730.improvement.rst @@ -0,0 +1,3 @@ +Add complex number support for ``numpy.fromfile`` +------------------------------------------------- +Now ``numpy.fromfile`` can read complex numbers. diff --git a/doc/release/upcoming_changes/14771.improvement.rst b/doc/release/upcoming_changes/14771.improvement.rst new file mode 100644 index 000000000..aaea8f8ed --- /dev/null +++ b/doc/release/upcoming_changes/14771.improvement.rst @@ -0,0 +1,5 @@ +``std=c99`` added if compiler is named ``gcc`` +---------------------------------------------- +GCC before version 5 requires the ``-std=c99`` command line argument. Newer +compilers automatically turn on C99 mode. The compiler setup code will +automatically add the code if the compiler name has ``gcc`` in it. diff --git a/doc/release/upcoming_changes/14777.compatibility.rst b/doc/release/upcoming_changes/14777.compatibility.rst new file mode 100644 index 000000000..d594463de --- /dev/null +++ b/doc/release/upcoming_changes/14777.compatibility.rst @@ -0,0 +1,5 @@ +Changed random variate stream from `numpy.random.Generator.integers` +-------------------------------------------------------------------- +There was a bug in `numpy.random.Generator.integers` that caused biased +sampling of 8 and 16 bit integer types. Fixing that bug has changed the +output stream from what it was in previous releases. diff --git a/doc/release/upcoming_changes/README.rst b/doc/release/upcoming_changes/README.rst new file mode 100644 index 000000000..7f6476bda --- /dev/null +++ b/doc/release/upcoming_changes/README.rst @@ -0,0 +1,55 @@ +:orphan: + +Changelog +========= + +This directory contains "news fragments" which are short files that contain a +small **ReST**-formatted text that will be added to the next what's new page. + +Make sure to use full sentences with correct case and punctuation, and please +try to use Sphinx intersphinx using backticks. 
The fragment should have a +header line and an underline using ``------`` + +Each file should be named like ``<PULL REQUEST>.<TYPE>.rst``, where +``<PULL REQUEST>`` is a pull request number, and ``<TYPE>`` is one of: + +* ``new_function``: New user facing functions. +* ``deprecation``: Changes existing code to emit a DeprecationWarning. +* ``future``: Changes existing code to emit a FutureWarning. +* ``expired``: Removal of a deprecated part of the API. +* ``compatibility``: A change which requires users to change code and is not + backwards compatible. (Not to be used for removal of deprecated features.) +* ``c_api``: Changes in the Numpy C-API exported functions +* ``new_feature``: New user facing features like ``kwargs``. +* ``improvement``: Performance and edge-case changes +* ``change``: Other changes +* ``highlight``: Adds a highlight bullet point to use as a possibly highlight + of the release. + +Most categories should be formatted as paragraphs with a heading. +So for example: ``123.new_feature.rst`` would have the content:: + + ``my_new_feature`` option for `my_favorite_function` + ---------------------------------------------------- + The ``my_new_feature`` option is now available for `my_favorite_function`. + To use it, write ``np.my_favorite_function(..., my_new_feature=True)``. + +``highlight`` is usually formatted as bulled points making the fragment +``* This is a highlight``. + +Note the use of single-backticks to get an internal link (assuming +``my_favorite_function`` is exported from the ``numpy`` namespace), +and double-backticks for code. + +If you are unsure what pull request type to use, don't hesitate to ask in your +PR. + +You can install ``towncrier`` and run ``towncrier --draft --version 1.18`` +if you want to get a preview of how your change will look in the final release +notes. + +.. note:: + + This README was adapted from the pytest changelog readme under the terms of + the MIT licence. + diff --git a/doc/release/upcoming_changes/template.rst b/doc/release/upcoming_changes/template.rst new file mode 100644 index 000000000..9c8a3b5fc --- /dev/null +++ b/doc/release/upcoming_changes/template.rst @@ -0,0 +1,38 @@ +{% set title = "NumPy {} Release Notes".format(versiondata.version) %} +{{ "=" * title|length }} +{{ title }} +{{ "=" * title|length }} + +{% for section, _ in sections.items() %} +{% set underline = underlines[0] %}{% if section %}{{ section }} +{{ underline * section|length }}{% set underline = underlines[1] %} + +{% endif %} +{% if sections[section] %} +{% for category, val in definitions.items() if category in sections[section] %} + +{{ definitions[category]['name'] }} +{{ underline * definitions[category]['name']|length }} + +{% if definitions[category]['showcontent'] %} +{% for text, values in sections[section][category].items() %} +{{ text }} +{{ get_indent(text) }}({{values|join(', ') }}) + +{% endfor %} +{% else %} +- {{ sections[section][category]['']|join(', ') }} + +{% endif %} +{% if sections[section][category]|length == 0 %} +No significant changes. + +{% else %} +{% endif %} +{% endfor %} +{% else %} +No significant changes. 
+ + +{% endif %} +{% endfor %} diff --git a/doc/source/_static/numpy_logo.png b/doc/source/_static/numpy_logo.png Binary files differnew file mode 100644 index 000000000..af8cbe323 --- /dev/null +++ b/doc/source/_static/numpy_logo.png diff --git a/doc/source/_templates/autosummary/base.rst b/doc/source/_templates/autosummary/base.rst new file mode 100644 index 000000000..0331154a7 --- /dev/null +++ b/doc/source/_templates/autosummary/base.rst @@ -0,0 +1,14 @@ +{% if objtype == 'property' %} +:orphan: +{% endif %} + +{{ fullname | escape | underline}} + +.. currentmodule:: {{ module }} + +{% if objtype == 'property' %} +property +{% endif %} + +.. auto{{ objtype }}:: {{ objname }} + diff --git a/doc/source/_templates/indexsidebar.html b/doc/source/_templates/indexsidebar.html index 51e7c4308..4707fc0e8 100644 --- a/doc/source/_templates/indexsidebar.html +++ b/doc/source/_templates/indexsidebar.html @@ -1,4 +1,5 @@ <h3>Resources</h3> <ul> + <li><a href="https://numpy.org/">NumPy.org website</a></li> <li><a href="https://scipy.org/">Scipy.org website</a></li> </ul> diff --git a/doc/source/_templates/layout.html b/doc/source/_templates/layout.html index 77da54a00..beaa297db 100644 --- a/doc/source/_templates/layout.html +++ b/doc/source/_templates/layout.html @@ -1,5 +1,15 @@ {% extends "!layout.html" %} +{%- block header %} +<div class="container"> + <div class="top-scipy-org-logo-header" style="background-color: #a2bae8;"> + <a href="{{ pathto('index') }}"> + <img border=0 alt="NumPy" src="{{ pathto('_static/numpy_logo.png', 1) }}"></a> + </div> + </div> +</div> + +{% endblock %} {% block rootrellink %} {% if pagename != 'index' %} <li class="active"><a href="{{ pathto('index') }}">{{ shorttitle|e }}</a></li> diff --git a/doc/source/conf.py b/doc/source/conf.py index fa0c0e7e4..83cecc917 100644 --- a/doc/source/conf.py +++ b/doc/source/conf.py @@ -3,12 +3,8 @@ from __future__ import division, absolute_import, print_function import sys, os, re -# Check Sphinx version -import sphinx -if sphinx.__version__ < "1.2.1": - raise RuntimeError("Sphinx 1.2.1 or newer required") - -needs_sphinx = '1.0' +# Minimum version, enforced by sphinx +needs_sphinx = '2.2.0' # ----------------------------------------------------------------------------- # General configuration @@ -31,13 +27,10 @@ extensions = [ 'matplotlib.sphinxext.plot_directive', 'IPython.sphinxext.ipython_console_highlighting', 'IPython.sphinxext.ipython_directive', + 'sphinx.ext.imgmath', ] -if sphinx.__version__ >= "1.4": - extensions.append('sphinx.ext.imgmath') - imgmath_image_format = 'svg' -else: - extensions.append('sphinx.ext.pngmath') +imgmath_image_format = 'svg' # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] @@ -45,6 +38,8 @@ templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' +master_doc = 'contents' + # General substitutions. 
project = 'NumPy' copyright = '2008-2019, The SciPy community' @@ -93,6 +88,7 @@ pygments_style = 'sphinx' def setup(app): # add a config value for `ifconfig` directives app.add_config_value('python_version_major', str(sys.version_info.major), 'env') + app.add_lexer('NumPyC', NumPyLexer(stripnl=False)) # ----------------------------------------------------------------------------- # HTML output @@ -121,7 +117,9 @@ else: "edit_link": False, "sidebar": "left", "scipy_org_logo": False, - "rootlinks": [] + "rootlinks": [("https://numpy.org/", "NumPy.org"), + ("https://numpy.org/doc", "Docs"), + ] } html_sidebars = {'index': ['indexsidebar.html', 'searchbox.html']} @@ -175,6 +173,10 @@ latex_documents = [ # not chapters. #latex_use_parts = False +latex_elements = { + 'fontenc': r'\usepackage[LGR,T1]{fontenc}' +} + # Additional stuff for the LaTeX preamble. latex_preamble = r''' \usepackage{amsmath} @@ -366,18 +368,15 @@ def linkcode_resolve(domain, info): from pygments.lexers import CLexer from pygments import token -from sphinx.highlighting import lexers import copy class NumPyLexer(CLexer): name = 'NUMPYLEXER' - tokens = copy.deepcopy(lexers['c'].tokens) + tokens = copy.deepcopy(CLexer.tokens) # Extend the regex for valid identifiers with @ for k, val in tokens.items(): for i, v in enumerate(val): if isinstance(v, tuple): if isinstance(v[0], str): val[i] = (v[0].replace('a-zA-Z', 'a-zA-Z@'),) + v[1:] - -lexers['NumPyC'] = NumPyLexer(stripnl=False) diff --git a/doc/source/dev/development_environment.rst b/doc/source/dev/development_environment.rst index 340f22026..297502b31 100644 --- a/doc/source/dev/development_environment.rst +++ b/doc/source/dev/development_environment.rst @@ -11,14 +11,18 @@ Recommended development setup Since NumPy contains parts written in C and Cython that need to be compiled before use, make sure you have the necessary compilers and Python development headers installed - see :ref:`building-from-source`. Building -NumPy as of version ``1.17`` requires a C99 compliant compiler. For -some older compilers this may require ``export CFLAGS='-std=c99'``. +NumPy as of version ``1.17`` requires a C99 compliant compiler. Having compiled code also means that importing NumPy from the development sources needs some additional steps, which are explained below. For the rest of this chapter we assume that you have set up your git repo as described in :ref:`using-git`. +.. _testing-builds: + +Testing builds +-------------- + To build the development version of NumPy and run tests, spawn interactive shells with the Python import paths properly set up etc., do one of:: @@ -47,6 +51,10 @@ When using pytest as a target (the default), you can $ python runtests.py -v -t numpy/core/tests/test_multiarray.py -- -k "MatMul and not vector" +.. note:: + + Remember that all tests of NumPy should pass before commiting your changes. + Using ``runtests.py`` is the recommended approach to running tests. There are also a number of alternatives to it, for example in-place build or installing to a virtualenv. See the FAQ below for details. @@ -87,6 +95,11 @@ installs a ``.egg-link`` file into your site-packages as well as adjusts the Other build options ------------------- +Build options can be discovered by running any of:: + + $ python setup.py --help + $ python setup.py --help-commands + It's possible to do a parallel build with ``numpy.distutils`` with the ``-j`` option; see :ref:`parallel-builds` for more details. 
@@ -97,6 +110,16 @@ source tree is to use:: $ export PYTHONPATH=/some/owned/folder/lib/python3.4/site-packages +NumPy uses a series of tests to probe the compiler and libc libraries for +funtions. The results are stored in ``_numpyconfig.h`` and ``config.h`` files +using ``HAVE_XXX`` definitions. These tests are run during the ``build_src`` +phase of the ``_multiarray_umath`` module in the ``generate_config_h`` and +``generate_numpyconfig_h`` functions. Since the output of these calls includes +many compiler warnings and errors, by default it is run quietly. If you wish +to see this output, you can run the ``build_src`` stage verbosely:: + + $ python build build_src -v + Using virtualenvs ----------------- diff --git a/doc/source/dev/development_workflow.rst b/doc/source/dev/development_workflow.rst index 291b1df73..900431374 100644 --- a/doc/source/dev/development_workflow.rst +++ b/doc/source/dev/development_workflow.rst @@ -203,8 +203,9 @@ function, you should a description of and a motivation for your changes. This may generate changes and feedback. It might be prudent to start with this step if your change may be controversial. -- add a release note to the ``changelog`` directory, following the instructions - and format in the ``changelog/README.rst`` file. +- add a release note to the ``doc/release/upcoming_changes/`` directory, + following the instructions and format in the + ``doc/release/upcoming_changes/README.rst`` file. .. _rebasing-on-master: diff --git a/doc/source/dev/index.rst b/doc/source/dev/index.rst index a8bd0bb46..306c15069 100644 --- a/doc/source/dev/index.rst +++ b/doc/source/dev/index.rst @@ -2,6 +2,33 @@ Contributing to NumPy ##################### +Not a coder? Not a problem! NumPy is multi-faceted, and we can use a lot of help. +These are all activities we'd like to get help with (they're all important, so +we list them in alphabetical order): + +- Code maintenance and development +- Community coordination +- DevOps +- Developing educational content & narrative documentation +- Writing technical documentation +- Fundraising +- Project management +- Marketing +- Translating content +- Website design and development + +The rest of this document discusses working on the NumPy code base and documentation. +We're in the process of updating our descriptions of other activities and roles. +If you are interested in these other activities, please contact us! +You can do this via +the `numpy-discussion mailing list <https://scipy.org/scipylib/mailing-lists.html>`__, +or on GitHub (open an issue or comment on a relevant issue). These are our preferred +communication channels (open source is open by nature!), however if you prefer +to discuss in private first, please reach out to our community coordinators +at `numpy-team@googlegroups.com` or `numpy-team.slack.com` (send an email to +`numpy-team@googlegroups.com` for an invite the first time). + + Development process - summary ============================= @@ -104,8 +131,11 @@ Here's the short summary, complete TOC links are below: Beyond changes to a functions docstring and possible description in the general documentation, if your change introduces any user-facing - modifications, update the current release notes under - ``doc/release/X.XX-notes.rst`` + modifications they may need to be mentioned in the release notes. + To add your change to the release notes, you need to create a short file + with a summary and place it in ``doc/release/upcoming_changes``. 
+ The file ``doc/release/upcoming_changes/README.rst`` details the format and + filename conventions. If your change introduces a deprecation, make sure to discuss this first on GitHub or the mailing list first. If agreement on the deprecation is @@ -199,7 +229,7 @@ Requirements ~~~~~~~~~~~~ `Sphinx <http://www.sphinx-doc.org/en/stable/>`__ is needed to build -the documentation. Matplotlib and SciPy are also required. +the documentation. Matplotlib, SciPy, and IPython are also required. Fixing Warnings ~~~~~~~~~~~~~~~ diff --git a/doc/source/docs/howto_build_docs.rst b/doc/source/docs/howto_build_docs.rst index 4bb7628c1..6deacda5c 100644 --- a/doc/source/docs/howto_build_docs.rst +++ b/doc/source/docs/howto_build_docs.rst @@ -5,7 +5,7 @@ Building the NumPy API and reference docs ========================================= We currently use Sphinx_ for generating the API and reference -documentation for NumPy. You will need Sphinx 1.8.3 or newer. +documentation for NumPy. You will need Sphinx 1.8.3 <= 1.8.5. If you only want to get the documentation, note that pre-built versions can be found at diff --git a/doc/source/f2py/distutils.rst b/doc/source/f2py/distutils.rst index fdcd38468..71f6eab5a 100644 --- a/doc/source/f2py/distutils.rst +++ b/doc/source/f2py/distutils.rst @@ -26,7 +26,7 @@ sources, call F2PY to construct extension modules, etc. :mod:`numpy.distutils` extends ``distutils`` with the following features: -* ``Extension`` class argument ``sources`` may contain Fortran source +* :class:`Extension` class argument ``sources`` may contain Fortran source files. In addition, the list ``sources`` may contain at most one F2PY signature file, and then the name of an Extension module must match with the ``<modulename>`` used in signature file. It is @@ -37,7 +37,7 @@ sources, call F2PY to construct extension modules, etc. to scan Fortran source files for routine signatures to construct the wrappers to Fortran codes. - Additional options to F2PY process can be given using ``Extension`` + Additional options to F2PY process can be given using :class:`Extension` class argument ``f2py_options``. * The following new ``distutils`` commands are defined: diff --git a/doc/source/reference/arrays.classes.rst b/doc/source/reference/arrays.classes.rst index 39410b2a4..9dcbb6267 100644 --- a/doc/source/reference/arrays.classes.rst +++ b/doc/source/reference/arrays.classes.rst @@ -51,7 +51,7 @@ NumPy provides several hooks that classes can customize: .. versionadded:: 1.13 Any class, ndarray subclass or not, can define this method or set it to - :obj:`None` in order to override the behavior of NumPy's ufuncs. This works + None in order to override the behavior of NumPy's ufuncs. This works quite similarly to Python's ``__mul__`` and other binary operation routines. - *ufunc* is the ufunc object that was called. @@ -94,13 +94,13 @@ NumPy provides several hooks that classes can customize: :class:`ndarray` handles binary operations like ``arr + obj`` and ``arr < obj`` when ``arr`` is an :class:`ndarray` and ``obj`` is an instance of a custom class. There are two possibilities. If - ``obj.__array_ufunc__`` is present and not :obj:`None`, then + ``obj.__array_ufunc__`` is present and not None, then ``ndarray.__add__`` and friends will delegate to the ufunc machinery, meaning that ``arr + obj`` becomes ``np.add(arr, obj)``, and then :func:`~numpy.add` invokes ``obj.__array_ufunc__``. This is useful if you want to define an object that acts like an array. 
- Alternatively, if ``obj.__array_ufunc__`` is set to :obj:`None`, then as a + Alternatively, if ``obj.__array_ufunc__`` is set to None, then as a special case, special methods like ``ndarray.__add__`` will notice this and *unconditionally* raise :exc:`TypeError`. This is useful if you want to create objects that interact with arrays via binary operations, but @@ -135,7 +135,7 @@ NumPy provides several hooks that classes can customize: place rather than separately by the ufunc machinery and by the binary operation rules (which gives preference to special methods of subclasses; the alternative way to enforce a one-place only hierarchy, - of setting :func:`__array_ufunc__` to :obj:`None`, would seem very + of setting :func:`__array_ufunc__` to None, would seem very unexpected and thus confusing, as then the subclass would not work at all with ufuncs). - :class:`ndarray` defines its own :func:`__array_ufunc__`, which, @@ -280,7 +280,7 @@ NumPy provides several hooks that classes can customize: .. py:method:: class.__array_prepare__(array, context=None) - At the beginning of every :ref:`ufunc <ufuncs.output-type>`, this + At the beginning of every :ref:`ufunc <ufuncs-output-type>`, this method is called on the input object with the highest array priority, or the output object if one was specified. The output array is passed in and whatever is returned is passed to the ufunc. @@ -295,7 +295,7 @@ NumPy provides several hooks that classes can customize: .. py:method:: class.__array_wrap__(array, context=None) - At the end of every :ref:`ufunc <ufuncs.output-type>`, this method + At the end of every :ref:`ufunc <ufuncs-output-type>`, this method is called on the input object with the highest array priority, or the output object if one was specified. The ufunc-computed array is passed in and whatever is returned is passed to the user. @@ -322,7 +322,7 @@ NumPy provides several hooks that classes can customize: If a class (ndarray subclass or not) having the :func:`__array__` method is used as the output object of an :ref:`ufunc - <ufuncs.output-type>`, results will be written to the object + <ufuncs-output-type>`, results will be written to the object returned by :func:`__array__`. Similar conversion is done on input arrays. diff --git a/doc/source/reference/arrays.datetime.rst b/doc/source/reference/arrays.datetime.rst index 387515f59..2225eedb3 100644 --- a/doc/source/reference/arrays.datetime.rst +++ b/doc/source/reference/arrays.datetime.rst @@ -26,7 +26,9 @@ be either a :ref:`date unit <arrays.dtypes.dateunits>` or a :ref:`time unit <arrays.dtypes.timeunits>`. The date units are years ('Y'), months ('M'), weeks ('W'), and days ('D'), while the time units are hours ('h'), minutes ('m'), seconds ('s'), milliseconds ('ms'), and -some additional SI-prefix seconds-based units. +some additional SI-prefix seconds-based units. The datetime64 data type +also accepts the string "NAT", in any combination of lowercase/uppercase +letters, for a "Not A Time" value. .. admonition:: Example @@ -50,6 +52,11 @@ some additional SI-prefix seconds-based units. >>> np.datetime64('2005-02-25T03:30') numpy.datetime64('2005-02-25T03:30') + NAT (not a time): + + >>> numpy.datetime64('nat') + numpy.datetime64('NaT') + When creating an array of datetimes from a string, it is still possible to automatically select the unit from the inputs, by using the datetime type with generic units. 
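As a short sketch of the generic-unit behavior just described (the dates are arbitrary), the day unit ``D`` is picked up from the input strings::

    >>> import numpy as np
    >>> np.array(['2007-07-13', '2006-01-13', '2010-08-13'], dtype='datetime64')
    array(['2007-07-13', '2006-01-13', '2010-08-13'], dtype='datetime64[D]')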
@@ -100,7 +107,21 @@ Datetime and Timedelta Arithmetic NumPy allows the subtraction of two Datetime values, an operation which produces a number with a time unit. Because NumPy doesn't have a physical quantities system in its core, the timedelta64 data type was created -to complement datetime64. +to complement datetime64. The arguments for timedelta64 are a number, +to represent the number of units, and a date/time unit, such as +(D)ay, (M)onth, (Y)ear, (h)ours, (m)inutes, or (s)econds. The timedelta64 +data type also accepts the string "NAT" in place of the number for a "Not A Time" value. + +.. admonition:: Example + + >>> numpy.timedelta64(1, 'D') + numpy.timedelta64(1,'D') + + >>> numpy.timedelta64(4, 'h') + numpy.timedelta64(4,'h') + + >>> numpy.timedelta64('nAt') + numpy.timedelta64('NaT') Datetimes and Timedeltas work together to provide ways for simple datetime calculations. @@ -122,6 +143,12 @@ simple datetime calculations. >>> np.timedelta64(1,'W') % np.timedelta64(10,'D') numpy.timedelta64(7,'D') + >>> numpy.datetime64('nat') - numpy.datetime64('2009-01-01') + numpy.timedelta64('NaT','D') + + >>> numpy.datetime64('2009-01-01') + numpy.timedelta64('nat') + numpy.datetime64('NaT') + There are two Timedelta units ('Y', years and 'M', months) which are treated specially, because how much time they represent changes depending on when they are used. While a timedelta day unit is equivalent to @@ -366,132 +393,4 @@ As a corollary to this change, we no longer prohibit casting between datetimes with date units and datetimes with timeunits. With timezone naive datetimes, the rule for casting from dates to times is no longer ambiguous. -.. _pandas: http://pandas.pydata.org - - -Differences Between 1.6 and 1.7 Datetimes -========================================= - -The NumPy 1.6 release includes a more primitive datetime data type -than 1.7. This section documents many of the changes that have taken -place. - -String Parsing -`````````````` - -The datetime string parser in NumPy 1.6 is very liberal in what it accepts, -and silently allows invalid input without raising errors. The parser in -NumPy 1.7 is quite strict about only accepting ISO 8601 dates, with a few -convenience extensions. 1.6 always creates microsecond (us) units by -default, whereas 1.7 detects a unit based on the format of the string. 
-Here is a comparison.:: - - # NumPy 1.6.1 - >>> np.datetime64('1979-03-22') - 1979-03-22 00:00:00 - # NumPy 1.7.0 - >>> np.datetime64('1979-03-22') - numpy.datetime64('1979-03-22') - - # NumPy 1.6.1, unit default microseconds - >>> np.datetime64('1979-03-22').dtype - dtype('datetime64[us]') - # NumPy 1.7.0, unit of days detected from string - >>> np.datetime64('1979-03-22').dtype - dtype('<M8[D]') - - # NumPy 1.6.1, ignores invalid part of string - >>> np.datetime64('1979-03-2corruptedstring') - 1979-03-02 00:00:00 - # NumPy 1.7.0, raises error for invalid input - >>> np.datetime64('1979-03-2corruptedstring') - Traceback (most recent call last): - File "<stdin>", line 1, in <module> - ValueError: Error parsing datetime string "1979-03-2corruptedstring" at position 8 - - # NumPy 1.6.1, 'nat' produces today's date - >>> np.datetime64('nat') - 2012-04-30 00:00:00 - # NumPy 1.7.0, 'nat' produces not-a-time - >>> np.datetime64('nat') - numpy.datetime64('NaT') - - # NumPy 1.6.1, 'garbage' produces today's date - >>> np.datetime64('garbage') - 2012-04-30 00:00:00 - # NumPy 1.7.0, 'garbage' raises an exception - >>> np.datetime64('garbage') - Traceback (most recent call last): - File "<stdin>", line 1, in <module> - ValueError: Error parsing datetime string "garbage" at position 0 - - # NumPy 1.6.1, can't specify unit in scalar constructor - >>> np.datetime64('1979-03-22T19:00', 'h') - Traceback (most recent call last): - File "<stdin>", line 1, in <module> - TypeError: function takes at most 1 argument (2 given) - # NumPy 1.7.0, unit in scalar constructor - >>> np.datetime64('1979-03-22T19:00', 'h') - numpy.datetime64('1979-03-22T19:00-0500','h') - - # NumPy 1.6.1, reads ISO 8601 strings w/o TZ as UTC - >>> np.array(['1979-03-22T19:00'], dtype='M8[h]') - array([1979-03-22 19:00:00], dtype=datetime64[h]) - # NumPy 1.7.0, reads ISO 8601 strings w/o TZ as local (ISO specifies this) - >>> np.array(['1979-03-22T19:00'], dtype='M8[h]') - array(['1979-03-22T19-0500'], dtype='datetime64[h]') - - # NumPy 1.6.1, doesn't parse all ISO 8601 strings correctly - >>> np.array(['1979-03-22T12'], dtype='M8[h]') - array([1979-03-22 00:00:00], dtype=datetime64[h]) - >>> np.array(['1979-03-22T12:00'], dtype='M8[h]') - array([1979-03-22 12:00:00], dtype=datetime64[h]) - # NumPy 1.7.0, handles this case correctly - >>> np.array(['1979-03-22T12'], dtype='M8[h]') - array(['1979-03-22T12-0500'], dtype='datetime64[h]') - >>> np.array(['1979-03-22T12:00'], dtype='M8[h]') - array(['1979-03-22T12-0500'], dtype='datetime64[h]') - -Unit Conversion -``````````````` - -The 1.6 implementation of datetime does not convert between units correctly.:: - - # NumPy 1.6.1, the representation value is untouched - >>> np.array(['1979-03-22'], dtype='M8[D]') - array([1979-03-22 00:00:00], dtype=datetime64[D]) - >>> np.array(['1979-03-22'], dtype='M8[D]').astype('M8[M]') - array([2250-08-01 00:00:00], dtype=datetime64[M]) - # NumPy 1.7.0, the representation is scaled accordingly - >>> np.array(['1979-03-22'], dtype='M8[D]') - array(['1979-03-22'], dtype='datetime64[D]') - >>> np.array(['1979-03-22'], dtype='M8[D]').astype('M8[M]') - array(['1979-03'], dtype='datetime64[M]') - -Datetime Arithmetic -``````````````````` - -The 1.6 implementation of datetime only works correctly for a small subset of -arithmetic operations. 
Here we show some simple cases.:: - - # NumPy 1.6.1, produces invalid results if units are incompatible - >>> a = np.array(['1979-03-22T12'], dtype='M8[h]') - >>> b = np.array([3*60], dtype='m8[m]') - >>> a + b - array([1970-01-01 00:00:00.080988], dtype=datetime64[us]) - # NumPy 1.7.0, promotes to higher-resolution unit - >>> a = np.array(['1979-03-22T12'], dtype='M8[h]') - >>> b = np.array([3*60], dtype='m8[m]') - >>> a + b - array(['1979-03-22T15:00-0500'], dtype='datetime64[m]') - - # NumPy 1.6.1, arithmetic works if everything is microseconds - >>> a = np.array(['1979-03-22T12:00'], dtype='M8[us]') - >>> b = np.array([3*60*60*1000000], dtype='m8[us]') - >>> a + b - array([1979-03-22 15:00:00], dtype=datetime64[us]) - # NumPy 1.7.0 - >>> a = np.array(['1979-03-22T12:00'], dtype='M8[us]') - >>> b = np.array([3*60*60*1000000], dtype='m8[us]') - >>> a + b - array(['1979-03-22T15:00:00.000000-0500'], dtype='datetime64[us]') +.. _pandas: http://pandas.pydata.org
\ No newline at end of file diff --git a/doc/source/reference/arrays.dtypes.rst b/doc/source/reference/arrays.dtypes.rst index ab743a8ee..231707b11 100644 --- a/doc/source/reference/arrays.dtypes.rst +++ b/doc/source/reference/arrays.dtypes.rst @@ -128,7 +128,7 @@ What can be converted to a data-type object is described below: Used as-is. -:const:`None` +None .. index:: triple: dtype; construction; from None @@ -392,7 +392,7 @@ Type strings their values must each be lists of the same length as the *names* and *formats* lists. The *offsets* value is a list of byte offsets (limited to `ctypes.c_int`) for each field, while the *titles* value is a - list of titles for each field (:const:`None` can be used if no title is + list of titles for each field (None can be used if no title is desired for that field). The *titles* can be any :class:`string` or :class:`unicode` object and will add another entry to the fields dictionary keyed by the title and referencing the same diff --git a/doc/source/reference/arrays.interface.rst b/doc/source/reference/arrays.interface.rst index f361ccb06..f36a083aa 100644 --- a/doc/source/reference/arrays.interface.rst +++ b/doc/source/reference/arrays.interface.rst @@ -138,18 +138,18 @@ This approach to the interface consists of the object having an This attribute can also be an object exposing the :c:func:`buffer interface <PyObject_AsCharBuffer>` which will be used to share the data. If this key is not present (or - returns :class:`None`), then memory sharing will be done + returns None), then memory sharing will be done through the buffer interface of the object itself. In this case, the offset key can be used to indicate the start of the buffer. A reference to the object exposing the array interface must be stored by the new object if the memory area is to be secured. - **Default**: :const:`None` + **Default**: None **strides** (optional) - Either :const:`None` to indicate a C-style contiguous array or + Either None to indicate a C-style contiguous array or a Tuple of strides which provides the number of bytes needed to jump to the next array element in the corresponding dimension. Each entry must be an integer (a Python @@ -157,29 +157,29 @@ This approach to the interface consists of the object having an be larger than can be represented by a C "int" or "long"; the calling code should handle this appropriately, either by raising an error, or by using :c:type:`Py_LONG_LONG` in C. The - default is :const:`None` which implies a C-style contiguous + default is None which implies a C-style contiguous memory buffer. In this model, the last dimension of the array varies the fastest. For example, the default strides tuple for an object whose array entries are 8 bytes long and whose shape is (10,20,30) would be (4800, 240, 8) - **Default**: :const:`None` (C-style contiguous) + **Default**: None (C-style contiguous) **mask** (optional) - :const:`None` or an object exposing the array interface. All + None or an object exposing the array interface. All elements of the mask array should be interpreted only as true or not true indicating which elements of this array are valid. The shape of this object should be `"broadcastable" <arrays.broadcasting.broadcastable>` to the shape of the original array. - **Default**: :const:`None` (All array values are valid) + **Default**: None (All array values are valid) **offset** (optional) An integer offset into the array data region. 
This can only be - used when data is :const:`None` or returns a :class:`buffer` + used when data is None or returns a :class:`buffer` object. **Default**: 0. diff --git a/doc/source/reference/arrays.ndarray.rst b/doc/source/reference/arrays.ndarray.rst index 8f431bc9c..831d211bc 100644 --- a/doc/source/reference/arrays.ndarray.rst +++ b/doc/source/reference/arrays.ndarray.rst @@ -329,7 +329,7 @@ Item selection and manipulation ------------------------------- For array methods that take an *axis* keyword, it defaults to -:const:`None`. If axis is *None*, then the array is treated as a 1-D +*None*. If axis is *None*, then the array is treated as a 1-D array. Any other value for *axis* represents the dimension along which the operation should proceed. diff --git a/doc/source/reference/arrays.nditer.rst b/doc/source/reference/arrays.nditer.rst index fa8183f75..7dab09a71 100644 --- a/doc/source/reference/arrays.nditer.rst +++ b/doc/source/reference/arrays.nditer.rst @@ -115,13 +115,18 @@ context is exited. array([[ 0, 2, 4], [ 6, 8, 10]]) +If you are writing code that needs to support older versions of numpy, +note that prior to 1.15, :class:`nditer` was not a context manager and +did not have a `close` method. Instead it relied on the destructor to +initiate the writeback of the buffer. + Using an External Loop ---------------------- In all the examples so far, the elements of `a` are provided by the iterator one at a time, because all the looping logic is internal to the -iterator. While this is simple and convenient, it is not very efficient. A -better approach is to move the one-dimensional innermost loop into your +iterator. While this is simple and convenient, it is not very efficient. +A better approach is to move the one-dimensional innermost loop into your code, external to the iterator. This way, NumPy's vectorized operations can be used on larger chunks of the elements being visited. @@ -156,41 +161,29 @@ element in a computation. For example, you may want to visit the elements of an array in memory order, but use a C-order, Fortran-order, or multidimensional index to look up values in a different array. -The Python iterator protocol doesn't have a natural way to query these -additional values from the iterator, so we introduce an alternate syntax -for iterating with an :class:`nditer`. This syntax explicitly works -with the iterator object itself, so its properties are readily accessible -during iteration. With this looping construct, the current value is -accessible by indexing into the iterator, and the index being tracked -is the property `index` or `multi_index` depending on what was requested. - -The Python interactive interpreter unfortunately prints out the -values of expressions inside the while loop during each iteration of the -loop. We have modified the output in the examples using this looping -construct in order to be more readable. +The index is tracked by the iterator object itself, and accessible +through the `index` or `multi_index` properties, depending on what was +requested. The examples below show printouts demonstrating the +progression of the index: .. admonition:: Example >>> a = np.arange(6).reshape(2,3) >>> it = np.nditer(a, flags=['f_index']) - >>> while not it.finished: - ... print("%d <%d>" % (it[0], it.index), end=' ') - ... it.iternext() + >>> for x in it: + ... print("%d <%d>" % (x, it.index), end=' ') ... 0 <0> 1 <2> 2 <4> 3 <1> 4 <3> 5 <5> >>> it = np.nditer(a, flags=['multi_index']) - >>> while not it.finished: - ... 
print("%d <%s>" % (it[0], it.multi_index), end=' ') - ... it.iternext() + >>> for x in it: + ... print("%d <%s>" % (x, it.multi_index), end=' ') ... 0 <(0, 0)> 1 <(0, 1)> 2 <(0, 2)> 3 <(1, 0)> 4 <(1, 1)> 5 <(1, 2)> - >>> it = np.nditer(a, flags=['multi_index'], op_flags=['writeonly']) - >>> with it: - .... while not it.finished: - ... it[0] = it.multi_index[1] - it.multi_index[0] - ... it.iternext() + >>> with np.nditer(a, flags=['multi_index'], op_flags=['writeonly']) as it: + ... for x in it: + ... x[...] = it.multi_index[1] - it.multi_index[0] ... >>> a array([[ 0, 1, 2], @@ -199,7 +192,7 @@ construct in order to be more readable. Tracking an index or multi-index is incompatible with using an external loop, because it requires a different index value per element. If you try to combine these flags, the :class:`nditer` object will -raise an exception +raise an exception. .. admonition:: Example @@ -209,6 +202,42 @@ raise an exception File "<stdin>", line 1, in <module> ValueError: Iterator flag EXTERNAL_LOOP cannot be used if an index or multi-index is being tracked +Alternative Looping and Element Access +-------------------------------------- + +To make its properties more readily accessible during iteration, +:class:`nditer` has an alternative syntax for iterating, which works +explicitly with the iterator object itself. With this looping construct, +the current value is accessible by indexing into the iterator. Other +properties, such as tracked indices remain as before. The examples below +produce identical results to the ones in the previous section. + +.. admonition:: Example + + >>> a = np.arange(6).reshape(2,3) + >>> it = np.nditer(a, flags=['f_index']) + >>> while not it.finished: + ... print("%d <%d>" % (it[0], it.index), end=' ') + ... it.iternext() + ... + 0 <0> 1 <2> 2 <4> 3 <1> 4 <3> 5 <5> + + >>> it = np.nditer(a, flags=['multi_index']) + >>> while not it.finished: + ... print("%d <%s>" % (it[0], it.multi_index), end=' ') + ... it.iternext() + ... + 0 <(0, 0)> 1 <(0, 1)> 2 <(0, 2)> 3 <(1, 0)> 4 <(1, 1)> 5 <(1, 2)> + + >>> with np.nditer(a, flags=['multi_index'], op_flags=['writeonly']) as it: + ... while not it.finished: + ... it[0] = it.multi_index[1] - it.multi_index[0] + ... it.iternext() + ... + >>> a + array([[ 0, 1, 2], + [-1, 0, 1]]) + Buffering the Array Elements ---------------------------- diff --git a/doc/source/reference/c-api/array.rst b/doc/source/reference/c-api/array.rst index a2b56cee7..0530a5747 100644 --- a/doc/source/reference/c-api/array.rst +++ b/doc/source/reference/c-api/array.rst @@ -226,7 +226,7 @@ From scratch If *data* is not ``NULL``, then it is assumed to point to the memory to be used for the array and the *flags* argument is used as the - new flags for the array (except the state of :c:data:`NPY_OWNDATA`, + new flags for the array (except the state of :c:data:`NPY_ARRAY_OWNDATA`, :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` and :c:data:`NPY_ARRAY_UPDATEIFCOPY` flags of the new array will be reset). @@ -916,122 +916,126 @@ enumerated array data type. For the array type checking macros the argument must be a :c:type:`PyObject *<PyObject>` that can be directly interpreted as a :c:type:`PyArrayObject *`. -.. c:function:: PyTypeNum_ISUNSIGNED(num) +.. c:function:: PyTypeNum_ISUNSIGNED(int num) -.. c:function:: PyDataType_ISUNSIGNED(descr) +.. c:function:: PyDataType_ISUNSIGNED(PyArray_Descr *descr) -.. c:function:: PyArray_ISUNSIGNED(obj) +.. c:function:: PyArray_ISUNSIGNED(PyArrayObject *obj) Type represents an unsigned integer. -.. 
c:function:: PyTypeNum_ISSIGNED(num) +.. c:function:: PyTypeNum_ISSIGNED(int num) -.. c:function:: PyDataType_ISSIGNED(descr) +.. c:function:: PyDataType_ISSIGNED(PyArray_Descr *descr) -.. c:function:: PyArray_ISSIGNED(obj) +.. c:function:: PyArray_ISSIGNED(PyArrayObject *obj) Type represents a signed integer. -.. c:function:: PyTypeNum_ISINTEGER(num) +.. c:function:: PyTypeNum_ISINTEGER(int num) -.. c:function:: PyDataType_ISINTEGER(descr) +.. c:function:: PyDataType_ISINTEGER(PyArray_Descr* descr) -.. c:function:: PyArray_ISINTEGER(obj) +.. c:function:: PyArray_ISINTEGER(PyArrayObject *obj) Type represents any integer. -.. c:function:: PyTypeNum_ISFLOAT(num) +.. c:function:: PyTypeNum_ISFLOAT(int num) -.. c:function:: PyDataType_ISFLOAT(descr) +.. c:function:: PyDataType_ISFLOAT(PyArray_Descr* descr) -.. c:function:: PyArray_ISFLOAT(obj) +.. c:function:: PyArray_ISFLOAT(PyArrayObject *obj) Type represents any floating point number. -.. c:function:: PyTypeNum_ISCOMPLEX(num) +.. c:function:: PyTypeNum_ISCOMPLEX(int num) -.. c:function:: PyDataType_ISCOMPLEX(descr) +.. c:function:: PyDataType_ISCOMPLEX(PyArray_Descr* descr) -.. c:function:: PyArray_ISCOMPLEX(obj) +.. c:function:: PyArray_ISCOMPLEX(PyArrayObject *obj) Type represents any complex floating point number. -.. c:function:: PyTypeNum_ISNUMBER(num) +.. c:function:: PyTypeNum_ISNUMBER(int num) -.. c:function:: PyDataType_ISNUMBER(descr) +.. c:function:: PyDataType_ISNUMBER(PyArray_Descr* descr) -.. c:function:: PyArray_ISNUMBER(obj) +.. c:function:: PyArray_ISNUMBER(PyArrayObject *obj) Type represents any integer, floating point, or complex floating point number. -.. c:function:: PyTypeNum_ISSTRING(num) +.. c:function:: PyTypeNum_ISSTRING(int num) -.. c:function:: PyDataType_ISSTRING(descr) +.. c:function:: PyDataType_ISSTRING(PyArray_Descr* descr) -.. c:function:: PyArray_ISSTRING(obj) +.. c:function:: PyArray_ISSTRING(PyArrayObject *obj) Type represents a string data type. -.. c:function:: PyTypeNum_ISPYTHON(num) +.. c:function:: PyTypeNum_ISPYTHON(int num) -.. c:function:: PyDataType_ISPYTHON(descr) +.. c:function:: PyDataType_ISPYTHON(PyArray_Descr* descr) -.. c:function:: PyArray_ISPYTHON(obj) +.. c:function:: PyArray_ISPYTHON(PyArrayObject *obj) Type represents an enumerated type corresponding to one of the standard Python scalar (bool, int, float, or complex). -.. c:function:: PyTypeNum_ISFLEXIBLE(num) +.. c:function:: PyTypeNum_ISFLEXIBLE(int num) -.. c:function:: PyDataType_ISFLEXIBLE(descr) +.. c:function:: PyDataType_ISFLEXIBLE(PyArray_Descr* descr) -.. c:function:: PyArray_ISFLEXIBLE(obj) +.. c:function:: PyArray_ISFLEXIBLE(PyArrayObject *obj) Type represents one of the flexible array types ( :c:data:`NPY_STRING`, :c:data:`NPY_UNICODE`, or :c:data:`NPY_VOID` ). -.. c:function:: PyDataType_ISUNSIZED(descr): +.. c:function:: PyDataType_ISUNSIZED(PyArray_Descr* descr): Type has no size information attached, and can be resized. Should only be called on flexible dtypes. Types that are attached to an array will always be sized, hence the array form of this macro not existing. -.. c:function:: PyTypeNum_ISUSERDEF(num) + .. versionchanged:: 1.18 -.. c:function:: PyDataType_ISUSERDEF(descr) + For structured datatypes with no fields this function now returns False. -.. c:function:: PyArray_ISUSERDEF(obj) +.. c:function:: PyTypeNum_ISUSERDEF(int num) + +.. c:function:: PyDataType_ISUSERDEF(PyArray_Descr* descr) + +.. c:function:: PyArray_ISUSERDEF(PyArrayObject *obj) Type represents a user-defined type. -.. 
c:function:: PyTypeNum_ISEXTENDED(num) +.. c:function:: PyTypeNum_ISEXTENDED(int num) -.. c:function:: PyDataType_ISEXTENDED(descr) +.. c:function:: PyDataType_ISEXTENDED(PyArray_Descr* descr) -.. c:function:: PyArray_ISEXTENDED(obj) +.. c:function:: PyArray_ISEXTENDED(PyArrayObject *obj) Type is either flexible or user-defined. -.. c:function:: PyTypeNum_ISOBJECT(num) +.. c:function:: PyTypeNum_ISOBJECT(int num) -.. c:function:: PyDataType_ISOBJECT(descr) +.. c:function:: PyDataType_ISOBJECT(PyArray_Descr* descr) -.. c:function:: PyArray_ISOBJECT(obj) +.. c:function:: PyArray_ISOBJECT(PyArrayObject *obj) Type represents object data type. -.. c:function:: PyTypeNum_ISBOOL(num) +.. c:function:: PyTypeNum_ISBOOL(int num) -.. c:function:: PyDataType_ISBOOL(descr) +.. c:function:: PyDataType_ISBOOL(PyArray_Descr* descr) -.. c:function:: PyArray_ISBOOL(obj) +.. c:function:: PyArray_ISBOOL(PyArrayObject *obj) Type represents Boolean data type. -.. c:function:: PyDataType_HASFIELDS(descr) +.. c:function:: PyDataType_HASFIELDS(PyArray_Descr* descr) -.. c:function:: PyArray_HASFIELDS(obj) +.. c:function:: PyArray_HASFIELDS(PyArrayObject *obj) Type has fields associated with it. @@ -1580,7 +1584,7 @@ Flag checking For all of these macros *arr* must be an instance of a (subclass of) :c:data:`PyArray_Type`. -.. c:function:: PyArray_CHKFLAGS(arr, flags) +.. c:function:: PyArray_CHKFLAGS(PyObject *arr, flags) The first parameter, arr, must be an ndarray or subclass. The parameter, *flags*, should be an integer consisting of bitwise @@ -1590,60 +1594,60 @@ For all of these macros *arr* must be an instance of a (subclass of) :c:data:`NPY_ARRAY_WRITEABLE`, :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`, :c:data:`NPY_ARRAY_UPDATEIFCOPY`. -.. c:function:: PyArray_IS_C_CONTIGUOUS(arr) +.. c:function:: PyArray_IS_C_CONTIGUOUS(PyObject *arr) Evaluates true if *arr* is C-style contiguous. -.. c:function:: PyArray_IS_F_CONTIGUOUS(arr) +.. c:function:: PyArray_IS_F_CONTIGUOUS(PyObject *arr) Evaluates true if *arr* is Fortran-style contiguous. -.. c:function:: PyArray_ISFORTRAN(arr) +.. c:function:: PyArray_ISFORTRAN(PyObject *arr) Evaluates true if *arr* is Fortran-style contiguous and *not* C-style contiguous. :c:func:`PyArray_IS_F_CONTIGUOUS` is the correct way to test for Fortran-style contiguity. -.. c:function:: PyArray_ISWRITEABLE(arr) +.. c:function:: PyArray_ISWRITEABLE(PyObject *arr) Evaluates true if the data area of *arr* can be written to -.. c:function:: PyArray_ISALIGNED(arr) +.. c:function:: PyArray_ISALIGNED(PyObject *arr) Evaluates true if the data area of *arr* is properly aligned on the machine. -.. c:function:: PyArray_ISBEHAVED(arr) +.. c:function:: PyArray_ISBEHAVED(PyObject *arr) Evaluates true if the data area of *arr* is aligned and writeable and in machine byte-order according to its descriptor. -.. c:function:: PyArray_ISBEHAVED_RO(arr) +.. c:function:: PyArray_ISBEHAVED_RO(PyObject *arr) Evaluates true if the data area of *arr* is aligned and in machine byte-order. -.. c:function:: PyArray_ISCARRAY(arr) +.. c:function:: PyArray_ISCARRAY(PyObject *arr) Evaluates true if the data area of *arr* is C-style contiguous, and :c:func:`PyArray_ISBEHAVED` (*arr*) is true. -.. c:function:: PyArray_ISFARRAY(arr) +.. c:function:: PyArray_ISFARRAY(PyObject *arr) Evaluates true if the data area of *arr* is Fortran-style contiguous and :c:func:`PyArray_ISBEHAVED` (*arr*) is true. -.. c:function:: PyArray_ISCARRAY_RO(arr) +.. 
c:function:: PyArray_ISCARRAY_RO(PyObject *arr) Evaluates true if the data area of *arr* is C-style contiguous, aligned, and in machine byte-order. -.. c:function:: PyArray_ISFARRAY_RO(arr) +.. c:function:: PyArray_ISFARRAY_RO(PyObject *arr) Evaluates true if the data area of *arr* is Fortran-style contiguous, aligned, and in machine byte-order **.** -.. c:function:: PyArray_ISONESEGMENT(arr) +.. c:function:: PyArray_ISONESEGMENT(PyObject *arr) Evaluates true if the data area of *arr* consists of a single (C-style or Fortran-style) contiguous segment. @@ -2049,7 +2053,7 @@ Calculation .. tip:: Pass in :c:data:`NPY_MAXDIMS` for axis in order to achieve the same - effect that is obtained by passing in *axis* = :const:`None` in Python + effect that is obtained by passing in ``axis=None`` in Python (treating the array as a 1-d array). @@ -2655,18 +2659,27 @@ cost of a slight overhead. The mode should be one of: .. c:macro:: NPY_NEIGHBORHOOD_ITER_ZERO_PADDING + Zero padding. Outside bounds values will be 0. + .. c:macro:: NPY_NEIGHBORHOOD_ITER_ONE_PADDING + One padding, Outside bounds values will be 1. + .. c:macro:: NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING + Constant padding. Outside bounds values will be the same as the first item in fill_value. + .. c:macro:: NPY_NEIGHBORHOOD_ITER_MIRROR_PADDING + Mirror padding. Outside bounds values will be as if the array items were mirrored. For example, for the array [1, 2, 3, 4], x[-2] will be 2, x[-2] will be 1, x[4] will be 4, x[5] will be 1, etc... + .. c:macro:: NPY_NEIGHBORHOOD_ITER_CIRCULAR_PADDING + Circular padding. Outside bounds values will be as if the array was repeated. For example, for the array [1, 2, 3, 4], x[-2] will be 3, x[-2] will be 4, x[4] will be 1, x[5] will be 2, etc... @@ -2793,10 +2806,7 @@ Array Scalars *arr* is not ``NULL`` and the first element is negative then :c:data:`NPY_INTNEG_SCALAR` is returned, otherwise :c:data:`NPY_INTPOS_SCALAR` is returned. The possible return values - are :c:data:`NPY_{kind}_SCALAR` where ``{kind}`` can be **INTPOS**, - **INTNEG**, **FLOAT**, **COMPLEX**, **BOOL**, or **OBJECT**. - :c:data:`NPY_NOSCALAR` is also an enumerated value - :c:type:`NPY_SCALARKIND` variables can take on. + are the enumerated values in :c:type:`NPY_SCALARKIND`. .. c:function:: int PyArray_CanCoerceScalar( \ char thistype, char neededtype, NPY_SCALARKIND scalar) @@ -3507,6 +3517,10 @@ Miscellaneous Macros Evaluates as True if arrays *a1* and *a2* have the same shape. +.. c:var:: a + +.. c:var:: b + .. c:macro:: PyArray_MAX(a,b) Returns the maximum of *a* and *b*. If (*a*) or (*b*) are @@ -3592,11 +3606,21 @@ Enumerated Types A special variable type indicating the number of "kinds" of scalars distinguished in determining scalar-coercion rules. This - variable can take on the values :c:data:`NPY_{KIND}` where ``{KIND}`` can be + variable can take on the values: + + .. c:var:: NPY_NOSCALAR + + .. c:var:: NPY_BOOL_SCALAR + + .. c:var:: NPY_INTPOS_SCALAR + + .. c:var:: NPY_INTNEG_SCALAR + + .. c:var:: NPY_FLOAT_SCALAR + + .. c:var:: NPY_COMPLEX_SCALAR - **NOSCALAR**, **BOOL_SCALAR**, **INTPOS_SCALAR**, - **INTNEG_SCALAR**, **FLOAT_SCALAR**, **COMPLEX_SCALAR**, - **OBJECT_SCALAR** + .. c:var:: NPY_OBJECT_SCALAR .. c:var:: NPY_NSCALARKINDS diff --git a/doc/source/reference/c-api/ufunc.rst b/doc/source/reference/c-api/ufunc.rst index 92a679510..c9cc60141 100644 --- a/doc/source/reference/c-api/ufunc.rst +++ b/doc/source/reference/c-api/ufunc.rst @@ -198,10 +198,10 @@ Functions to calling PyUFunc_FromFuncAndData. 
A copy of the string is made, so the passed in buffer can be freed. -.. c:function:: PyObject* PyUFunc_FromFuncAndDataAndSignatureAndIdentity( +.. c:function:: PyObject* PyUFunc_FromFuncAndDataAndSignatureAndIdentity( \ PyUFuncGenericFunction *func, void **data, char *types, int ntypes, \ - int nin, int nout, int identity, char *name, char *doc, int unused, char *signature, - PyObject *identity_value) + int nin, int nout, int identity, char *name, char *doc, int unused, \ + char *signature, PyObject *identity_value) This function is very similar to `PyUFunc_FromFuncAndDataAndSignature` above, but has an extra *identity_value* argument, to define an arbitrary identity diff --git a/doc/source/reference/distutils.rst b/doc/source/reference/distutils.rst index 46e5ec25e..a22db3e8e 100644 --- a/doc/source/reference/distutils.rst +++ b/doc/source/reference/distutils.rst @@ -22,38 +22,30 @@ information is available in the :ref:`distutils-user-guide`. Modules in :mod:`numpy.distutils` ================================= +.. toctree:: + :maxdepth: 2 -misc_util ---------- + distutils/misc_util -.. module:: numpy.distutils.misc_util + +.. currentmodule:: numpy.distutils .. autosummary:: :toctree: generated/ - get_numpy_include_dirs - dict_append - appendpath - allpath - dot_join - generate_config_py - get_cmd - terminal_has_colors - red_text - green_text - yellow_text - blue_text - cyan_text - cyg2win32 - all_strings - has_f_sources - has_cxx_sources - filter_sources - get_dependencies - is_local_src_dir - get_ext_source_files - get_script_files + ccompiler + cpuinfo.cpu + core.Extension + exec_command + log.set_verbosity + system_info.get_info + system_info.get_standard_file + + +Configuration class +=================== +.. currentmodule:: numpy.distutils.misc_util .. class:: Configuration(package_name=None, parent_name=None, top_path=None, package_path=None, **attrs) @@ -109,20 +101,6 @@ misc_util .. automethod:: get_info -Other modules -------------- - -.. currentmodule:: numpy.distutils - -.. autosummary:: - :toctree: generated/ - - system_info.get_info - system_info.get_standard_file - cpuinfo.cpu - log.set_verbosity - exec_command - Building Installable C libraries ================================ diff --git a/doc/source/reference/distutils/misc_util.rst b/doc/source/reference/distutils/misc_util.rst new file mode 100644 index 000000000..bbb83a5ab --- /dev/null +++ b/doc/source/reference/distutils/misc_util.rst @@ -0,0 +1,7 @@ +distutils.misc_util +=================== + +.. automodule:: numpy.distutils.misc_util + :members: + :undoc-members: + :exclude-members: Configuration diff --git a/doc/source/reference/maskedarray.baseclass.rst b/doc/source/reference/maskedarray.baseclass.rst index 204ebfe08..5bbdd0299 100644 --- a/doc/source/reference/maskedarray.baseclass.rst +++ b/doc/source/reference/maskedarray.baseclass.rst @@ -160,9 +160,9 @@ replaced with ``n`` integers which will be interpreted as an n-tuple. Item selection and manipulation ------------------------------- -For array methods that take an *axis* keyword, it defaults to `None`. -If axis is *None*, then the array is treated as a 1-D array. -Any other value for *axis* represents the dimension along which +For array methods that take an ``axis`` keyword, it defaults to None. +If axis is None, then the array is treated as a 1-D array. +Any other value for ``axis`` represents the dimension along which the operation should proceed. .. 
autosummary:: diff --git a/doc/source/reference/maskedarray.generic.rst b/doc/source/reference/maskedarray.generic.rst index 7375d60fb..41c3ee564 100644 --- a/doc/source/reference/maskedarray.generic.rst +++ b/doc/source/reference/maskedarray.generic.rst @@ -74,7 +74,7 @@ To create an array with the second element invalid, we would do:: To create a masked array where all values close to 1.e20 are invalid, we would do:: - >>> z = masked_values([1.0, 1.e20, 3.0, 4.0], 1.e20) + >>> z = ma.masked_values([1.0, 1.e20, 3.0, 4.0], 1.e20) For a complete discussion of creation methods for masked arrays please see section :ref:`Constructing masked arrays <maskedarray.generic.constructing>`. @@ -110,15 +110,15 @@ There are several ways to construct a masked array. >>> x = np.array([1, 2, 3]) >>> x.view(ma.MaskedArray) - masked_array(data = [1 2 3], - mask = False, - fill_value = 999999) + masked_array(data=[1, 2, 3], + mask=False, + fill_value=999999) >>> x = np.array([(1, 1.), (2, 2.)], dtype=[('a',int), ('b', float)]) >>> x.view(ma.MaskedArray) - masked_array(data = [(1, 1.0) (2, 2.0)], - mask = [(False, False) (False, False)], - fill_value = (999999, 1e+20), - dtype = [('a', '<i4'), ('b', '<f8')]) + masked_array(data=[(1, 1.0), (2, 2.0)], + mask=[(False, False), (False, False)], + fill_value=(999999, 1.e+20), + dtype=[('a', '<i8'), ('b', '<f8')]) * Yet another possibility is to use any of the following functions: @@ -195,9 +195,9 @@ index. The inverse of the mask can be calculated with the >>> x = ma.array([[1, 2], [3, 4]], mask=[[0, 1], [1, 0]]) >>> x[~x.mask] - masked_array(data = [1 4], - mask = [False False], - fill_value = 999999) + masked_array(data=[1, 4], + mask=[False, False], + fill_value=999999) Another way to retrieve the valid data is to use the :meth:`compressed` method, which returns a one-dimensional :class:`~numpy.ndarray` (or one of its @@ -223,27 +223,26 @@ as invalid is to assign the special value :attr:`masked` to them:: >>> x = ma.array([1, 2, 3]) >>> x[0] = ma.masked >>> x - masked_array(data = [-- 2 3], - mask = [ True False False], - fill_value = 999999) + masked_array(data=[--, 2, 3], + mask=[ True, False, False], + fill_value=999999) >>> y = ma.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> y[(0, 1, 2), (1, 2, 0)] = ma.masked >>> y - masked_array(data = - [[1 -- 3] - [4 5 --] - [-- 8 9]], - mask = - [[False True False] - [False False True] - [ True False False]], - fill_value = 999999) + masked_array( + data=[[1, --, 3], + [4, 5, --], + [--, 8, 9]], + mask=[[False, True, False], + [False, False, True], + [ True, False, False]], + fill_value=999999) >>> z = ma.array([1, 2, 3, 4]) >>> z[:-2] = ma.masked >>> z - masked_array(data = [-- -- 3 4], - mask = [ True True False False], - fill_value = 999999) + masked_array(data=[--, --, 3, 4], + mask=[ True, True, False, False], + fill_value=999999) A second possibility is to modify the :attr:`~MaskedArray.mask` directly, @@ -263,9 +262,10 @@ mask:: >>> x = ma.array([1, 2, 3], mask=[0, 0, 1]) >>> x.mask = True >>> x - masked_array(data = [-- -- --], - mask = [ True True True], - fill_value = 999999) + masked_array(data=[--, --, --], + mask=[ True, True, True], + fill_value=999999, + dtype=int64) Finally, specific entries can be masked and/or unmasked by assigning to the mask a sequence of booleans:: @@ -273,9 +273,9 @@ mask a sequence of booleans:: >>> x = ma.array([1, 2, 3]) >>> x.mask = [0, 1, 0] >>> x - masked_array(data = [1 -- 3], - mask = [False True False], - fill_value = 999999) + masked_array(data=[1, --, 3], + mask=[False, 
True, False], + fill_value=999999) Unmasking an entry ~~~~~~~~~~~~~~~~~~ @@ -285,14 +285,14 @@ new valid values to them:: >>> x = ma.array([1, 2, 3], mask=[0, 0, 1]) >>> x - masked_array(data = [1 2 --], - mask = [False False True], - fill_value = 999999) + masked_array(data=[1, 2, --], + mask=[False, False, True], + fill_value=999999) >>> x[-1] = 5 >>> x - masked_array(data = [1 2 5], - mask = [False False False], - fill_value = 999999) + masked_array(data=[1, 2, 5], + mask=[False, False, False], + fill_value=999999) .. note:: Unmasking an entry by direct assignment will silently fail if the masked @@ -304,21 +304,27 @@ new valid values to them:: >>> x = ma.array([1, 2, 3], mask=[0, 0, 1], hard_mask=True) >>> x - masked_array(data = [1 2 --], - mask = [False False True], - fill_value = 999999) + masked_array(data=[1, 2, --], + mask=[False, False, True], + fill_value=999999) >>> x[-1] = 5 >>> x - masked_array(data = [1 2 --], - mask = [False False True], - fill_value = 999999) + masked_array(data=[1, 2, --], + mask=[False, False, True], + fill_value=999999) >>> x.soften_mask() + masked_array(data=[1, 2, --], + mask=[False, False, True], + fill_value=999999) >>> x[-1] = 5 >>> x - masked_array(data = [1 2 5], - mask = [False False False], - fill_value = 999999) + masked_array(data=[1, 2, 5], + mask=[False, False, False], + fill_value=999999) >>> x.harden_mask() + masked_array(data=[1, 2, 5], + mask=[False, False, False], + fill_value=999999) To unmask all masked entries of a masked array (provided the mask isn't a hard @@ -327,15 +333,14 @@ mask:: >>> x = ma.array([1, 2, 3], mask=[0, 0, 1]) >>> x - masked_array(data = [1 2 --], - mask = [False False True], - fill_value = 999999) + masked_array(data=[1, 2, --], + mask=[False, False, True], + fill_value=999999) >>> x.mask = ma.nomask >>> x - masked_array(data = [1 2 3], - mask = [False False False], - fill_value = 999999) - + masked_array(data=[1, 2, 3], + mask=[False, False, False], + fill_value=999999) Indexing and slicing @@ -353,9 +358,7 @@ the mask is ``True``):: >>> x[0] 1 >>> x[-1] - masked_array(data = --, - mask = True, - fill_value = 1e+20) + masked >>> x[-1] is ma.masked True @@ -370,10 +373,7 @@ is masked. >>> y[0] (1, 2) >>> y[-1] - masked_array(data = (3, --), - mask = (False, True), - fill_value = (999999, 999999), - dtype = [('a', '<i4'), ('b', '<i4')]) + (3, --) When accessing a slice, the output is a masked array whose @@ -385,20 +385,19 @@ required to ensure propagation of any modification of the mask to the original. >>> x = ma.array([1, 2, 3, 4, 5], mask=[0, 1, 0, 0, 1]) >>> mx = x[:3] >>> mx - masked_array(data = [1 -- 3], - mask = [False True False], - fill_value = 999999) + masked_array(data=[1, --, 3], + mask=[False, True, False], + fill_value=999999) >>> mx[1] = -1 >>> mx - masked_array(data = [1 -1 3], - mask = [False False False], - fill_value = 999999) + masked_array(data=[1, -1, 3], + mask=[False, False, False], + fill_value=999999) >>> x.mask - array([False, True, False, False, True]) + array([False, False, False, False, True]) >>> x.data array([ 1, -1, 3, 4, 5]) - Accessing a field of a masked array with structured datatype returns a :class:`MaskedArray`. @@ -421,9 +420,9 @@ ufuncs. 
Unary and binary functions that have a validity domain (such as constant whenever the input is masked or falls outside the validity domain:: >>> ma.log([-1, 0, 1, 2]) - masked_array(data = [-- -- 0.0 0.69314718056], - mask = [ True True False False], - fill_value = 1e+20) + masked_array(data=[--, --, 0.0, 0.6931471805599453], + mask=[ True, True, False, False], + fill_value=1e+20) Masked arrays also support standard numpy ufuncs. The output is then a masked array. The result of a unary ufunc is masked wherever the input is masked. The @@ -435,10 +434,9 @@ input fall outside the validity domain:: >>> x = ma.array([-1, 1, 0, 2, 3], mask=[0, 0, 0, 0, 1]) >>> np.log(x) - masked_array(data = [-- -- 0.0 0.69314718056 --], - mask = [ True True False False True], - fill_value = 1e+20) - + masked_array(data=[--, 0.0, --, 0.6931471805599453, --], + mask=[ True, False, True, False, True], + fill_value=1e+20) Examples @@ -454,11 +452,11 @@ of anomalies (deviations from the average):: >>> import numpy.ma as ma >>> x = [0.,1.,-9999.,3.,4.] >>> mx = ma.masked_values (x, -9999.) - >>> print mx.mean() + >>> print(mx.mean()) 2.0 - >>> print mx - mx.mean() + >>> print(mx - mx.mean()) [-2.0 -1.0 -- 1.0 2.0] - >>> print mx.anom() + >>> print(mx.anom()) [-2.0 -1.0 -- 1.0 2.0] @@ -468,7 +466,7 @@ Filling in the missing data Suppose now that we wish to print that same data, but with the missing values replaced by the average value. - >>> print mx.filled(mx.mean()) + >>> print(mx.filled(mx.mean())) [ 0. 1. 2. 3. 4.] @@ -478,10 +476,10 @@ Numerical operations Numerical operations can be easily performed without worrying about missing values, dividing by zero, square roots of negative numbers, etc.:: - >>> import numpy as np, numpy.ma as ma + >>> import numpy.ma as ma >>> x = ma.array([1., -1., 3., 4., 5., 6.], mask=[0,0,0,0,1,0]) >>> y = ma.array([1., 2., 0., 4., 5., 6.], mask=[0,0,0,0,0,1]) - >>> print np.sqrt(x/y) + >>> print(ma.sqrt(x/y)) [1.0 -- -- 1.0 -- --] Four values of the output are invalid: the first one comes from taking the @@ -492,8 +490,10 @@ the last two where the inputs were masked. Ignoring extreme values ----------------------- -Let's consider an array ``d`` of random floats between 0 and 1. We wish to +Let's consider an array ``d`` of floats between 0 and 1. We wish to compute the average of the values of ``d`` while ignoring any data outside -the range ``[0.1, 0.9]``:: +the range ``[0.2, 0.9]``:: - >>> print ma.masked_outside(d, 0.1, 0.9).mean() + >>> d = np.linspace(0, 1, 20) + >>> print(d.mean() - ma.masked_outside(d, 0.2, 0.9).mean()) + -0.05263157894736836 diff --git a/doc/source/reference/random/bit_generators/bitgenerators.rst b/doc/source/reference/random/bit_generators/bitgenerators.rst deleted file mode 100644 index 1474f7dac..000000000 --- a/doc/source/reference/random/bit_generators/bitgenerators.rst +++ /dev/null @@ -1,11 +0,0 @@ -:orphan: - -BitGenerator ------------- - -.. currentmodule:: numpy.random.bit_generator - -.. autosummary:: - :toctree: generated/ - - BitGenerator diff --git a/doc/source/reference/random/bit_generators/index.rst b/doc/source/reference/random/bit_generators/index.rst index 35d9e5d09..94d3d8a3c 100644 --- a/doc/source/reference/random/bit_generators/index.rst +++ b/doc/source/reference/random/bit_generators/index.rst @@ -1,5 +1,3 @@ -.. _bit_generator: - .. currentmodule:: numpy.random Bit Generators @@ -35,14 +33,18 @@ The included BitGenerators are: .. _`Random123`: https://www.deshawresearch.com/resources_random123.html .. 
_`SFC author's page`: http://pracrand.sourceforge.net/RNG_engines.txt +.. autosummary:: + :toctree: generated/ + + BitGenerator + .. toctree:: - :maxdepth: 1 + :maxdepth: 1 - BitGenerator <bitgenerators> - MT19937 <mt19937> - PCG64 <pcg64> - Philox <philox> - SFC64 <sfc64> + MT19937 <mt19937> + PCG64 <pcg64> + Philox <philox> + SFC64 <sfc64> Seeding and Entropy ------------------- @@ -53,14 +55,14 @@ seed. All of the provided BitGenerators will take an arbitrary-sized non-negative integer, or a list of such integers, as a seed. BitGenerators need to take those inputs and process them into a high-quality internal state for the BitGenerator. All of the BitGenerators in numpy delegate that task to -`~SeedSequence`, which uses hashing techniques to ensure that even low-quality +`SeedSequence`, which uses hashing techniques to ensure that even low-quality seeds generate high-quality initial states. .. code-block:: python - from numpy.random import PCG64 + from numpy.random import PCG64 - bg = PCG64(12345678903141592653589793) + bg = PCG64(12345678903141592653589793) .. end_block @@ -75,14 +77,14 @@ user, which is up to you. .. code-block:: python - from numpy.random import PCG64, SeedSequence + from numpy.random import PCG64, SeedSequence - # Get the user's seed somehow, maybe through `argparse`. - # If the user did not provide a seed, it should return `None`. - seed = get_user_seed() - ss = SeedSequence(seed) - print('seed = {}'.format(ss.entropy)) - bg = PCG64(ss) + # Get the user's seed somehow, maybe through `argparse`. + # If the user did not provide a seed, it should return `None`. + seed = get_user_seed() + ss = SeedSequence(seed) + print('seed = {}'.format(ss.entropy)) + bg = PCG64(ss) .. end_block @@ -104,9 +106,6 @@ or using ``secrets.randbits(128)`` from the standard library are both convenient ways. .. autosummary:: - :toctree: generated/ + :toctree: generated/ SeedSequence - bit_generator.ISeedSequence - bit_generator.ISpawnableSeedSequence - bit_generator.SeedlessSeedSequence diff --git a/doc/source/reference/random/bit_generators/mt19937.rst b/doc/source/reference/random/bit_generators/mt19937.rst index 25ba1d7b5..71875db4e 100644 --- a/doc/source/reference/random/bit_generators/mt19937.rst +++ b/doc/source/reference/random/bit_generators/mt19937.rst @@ -1,9 +1,7 @@ -Mersenne Twister (MT19937) +Mersenne Twister (MT19937) -------------------------- -.. module:: numpy.random.mt19937 - -.. currentmodule:: numpy.random.mt19937 +.. currentmodule:: numpy.random .. autoclass:: MT19937 :exclude-members: diff --git a/doc/source/reference/random/bit_generators/pcg64.rst b/doc/source/reference/random/bit_generators/pcg64.rst index 7aef1e0dd..5881b7008 100644 --- a/doc/source/reference/random/bit_generators/pcg64.rst +++ b/doc/source/reference/random/bit_generators/pcg64.rst @@ -1,9 +1,7 @@ Parallel Congruent Generator (64-bit, PCG64) -------------------------------------------- -.. module:: numpy.random.pcg64 - -.. currentmodule:: numpy.random.pcg64 +.. currentmodule:: numpy.random .. autoclass:: PCG64 :exclude-members: diff --git a/doc/source/reference/random/bit_generators/philox.rst b/doc/source/reference/random/bit_generators/philox.rst index 5e581e094..8eba2d351 100644 --- a/doc/source/reference/random/bit_generators/philox.rst +++ b/doc/source/reference/random/bit_generators/philox.rst @@ -1,9 +1,7 @@ Philox Counter-based RNG ------------------------ -.. module:: numpy.random.philox - -.. currentmodule:: numpy.random.philox +.. currentmodule:: numpy.random .. 
autoclass:: Philox :exclude-members: diff --git a/doc/source/reference/random/bit_generators/sfc64.rst b/doc/source/reference/random/bit_generators/sfc64.rst index dc03820ae..d34124a33 100644 --- a/doc/source/reference/random/bit_generators/sfc64.rst +++ b/doc/source/reference/random/bit_generators/sfc64.rst @@ -1,9 +1,7 @@ SFC64 Small Fast Chaotic PRNG ----------------------------- -.. module:: numpy.random.sfc64 - -.. currentmodule:: numpy.random.sfc64 +.. currentmodule:: numpy.random .. autoclass:: SFC64 :exclude-members: diff --git a/doc/source/reference/random/entropy.rst b/doc/source/reference/random/entropy.rst deleted file mode 100644 index 0664da6f9..000000000 --- a/doc/source/reference/random/entropy.rst +++ /dev/null @@ -1,6 +0,0 @@ -System Entropy -============== - -.. module:: numpy.random.entropy - -.. autofunction:: random_entropy diff --git a/doc/source/reference/random/generator.rst b/doc/source/reference/random/generator.rst index 068143270..a2cbb493a 100644 --- a/doc/source/reference/random/generator.rst +++ b/doc/source/reference/random/generator.rst @@ -62,6 +62,7 @@ Distributions ~numpy.random.Generator.lognormal ~numpy.random.Generator.logseries ~numpy.random.Generator.multinomial + ~numpy.random.Generator.multivariate_hypergeometric ~numpy.random.Generator.multivariate_normal ~numpy.random.Generator.negative_binomial ~numpy.random.Generator.noncentral_chisquare diff --git a/doc/source/reference/random/index.rst b/doc/source/reference/random/index.rst index 01f9981a2..9b19620d8 100644 --- a/doc/source/reference/random/index.rst +++ b/doc/source/reference/random/index.rst @@ -123,7 +123,7 @@ The `Generator` is the user-facing object that is nearly identical to rg.random() One can also instantiate `Generator` directly with a `BitGenerator` instance. -To use the older `~mt19937.MT19937` algorithm, one can instantiate it directly +To use the older `MT19937` algorithm, one can instantiate it directly and pass it to `Generator`. .. code-block:: python @@ -151,9 +151,6 @@ What's New or Different select distributions * Optional ``out`` argument that allows existing arrays to be filled for select distributions -* `~entropy.random_entropy` provides access to the system - source of randomness that is used in cryptographic applications (e.g., - ``/dev/urandom`` on Unix). * All BitGenerators can produce doubles, uint64s and uint32s via CTypes (`~.PCG64.ctypes`) and CFFI (`~.PCG64.cffi`). This allows the bit generators to be used in numba. @@ -190,7 +187,7 @@ Concepts :maxdepth: 1 generator - legacy mtrand <legacy> + Legacy Generator (RandomState) <legacy> BitGenerators, SeedSequences <bit_generators/index> Features @@ -203,7 +200,6 @@ Features new-or-different Comparing Performance <performance> extending - Reading System Entropy <entropy> Original Source ~~~~~~~~~~~~~~~ diff --git a/doc/source/reference/random/legacy.rst b/doc/source/reference/random/legacy.rst index 04d4d3569..413a42727 100644 --- a/doc/source/reference/random/legacy.rst +++ b/doc/source/reference/random/legacy.rst @@ -4,7 +4,7 @@ Legacy Random Generation ------------------------ -The `~mtrand.RandomState` provides access to +The `RandomState` provides access to legacy generators. This generator is considered frozen and will have no further improvements. It is guaranteed to produce the same values as the final point release of NumPy v1.16. These all depend on Box-Muller @@ -12,19 +12,19 @@ normals or inverse CDF exponentials or gammas. 
This class should only be used if it is essential to have randoms that are identical to what would have been produced by previous versions of NumPy. -`~mtrand.RandomState` adds additional information +`RandomState` adds additional information to the state which is required when using Box-Muller normals since these are produced in pairs. It is important to use -`~mtrand.RandomState.get_state`, and not the underlying bit generators +`RandomState.get_state`, and not the underlying bit generators `state`, when accessing the state so that these extra values are saved. -Although we provide the `~mt19937.MT19937` BitGenerator for use independent of -`~mtrand.RandomState`, note that its default seeding uses `~SeedSequence` -rather than the legacy seeding algorithm. `~mtrand.RandomState` will use the +Although we provide the `MT19937` BitGenerator for use independent of +`RandomState`, note that its default seeding uses `SeedSequence` +rather than the legacy seeding algorithm. `RandomState` will use the legacy seeding algorithm. The methods to use the legacy seeding algorithm are currently private as the main reason to use them is just to implement -`~mtrand.RandomState`. However, one can reset the state of `~mt19937.MT19937` -using the state of the `~mtrand.RandomState`: +`RandomState`. However, one can reset the state of `MT19937` +using the state of the `RandomState`: .. code-block:: python @@ -47,8 +47,6 @@ using the state of the `~mtrand.RandomState`: rs2.standard_exponential() -.. currentmodule:: numpy.random.mtrand - .. autoclass:: RandomState :exclude-members: diff --git a/doc/source/reference/random/new-or-different.rst b/doc/source/reference/random/new-or-different.rst index 5442f46c9..b3bddb443 100644 --- a/doc/source/reference/random/new-or-different.rst +++ b/doc/source/reference/random/new-or-different.rst @@ -10,9 +10,10 @@ What's New or Different The Box-Muller method used to produce NumPy's normals is no longer available in `Generator`. It is not possible to reproduce the exact random values using ``Generator`` for the normal distribution or any other - distribution that relies on the normal such as the `gamma` or - `standard_t`. If you require bitwise backward compatible - streams, use `RandomState`. + distribution that relies on the normal such as the `Generator.gamma` or + `Generator.standard_t`. If you require bitwise backward compatible + streams, use `RandomState`, i.e., `RandomState.gamma` or + `RandomState.standard_t`. Quick comparison of legacy `mtrand <legacy>`_ to the new `Generator` @@ -20,9 +21,9 @@ Quick comparison of legacy `mtrand <legacy>`_ to the new `Generator` Feature Older Equivalent Notes ------------------ -------------------- ------------- `~.Generator` `~.RandomState` ``Generator`` requires a stream - source, called a `BitGenerator - <bit_generators>` A number of these - are provided. ``RandomState`` uses + source, called a `BitGenerator` + A number of these are provided. + ``RandomState`` uses the Mersenne Twister `~.MT19937` by default, but can also be instantiated with any BitGenerator. @@ -45,9 +46,6 @@ Feature Older Equivalent Notes And in more detail: -* `~.entropy.random_entropy` provides access to the system - source of randomness that is used in cryptographic applications (e.g., - ``/dev/urandom`` on Unix). 
* Simulate from the complex normal distribution (`~.Generator.complex_normal`) * The normal, exponential and gamma generators use 256-step Ziggurat diff --git a/doc/source/reference/random/parallel.rst b/doc/source/reference/random/parallel.rst index 2f79f22d8..721584014 100644 --- a/doc/source/reference/random/parallel.rst +++ b/doc/source/reference/random/parallel.rst @@ -18,10 +18,10 @@ a `~BitGenerator`. It uses hashing techniques to ensure that low-quality seeds are turned into high quality initial states (at least, with very high probability). -For example, `~mt19937.MT19937` has a state consisting of 624 +For example, `MT19937` has a state consisting of 624 `uint32` integers. A naive way to take a 32-bit integer seed would be to just set the last element of the state to the 32-bit seed and leave the rest 0s. This is -a valid state for `~mt19937.MT19937`, but not a good one. The Mersenne Twister +a valid state for `MT19937`, but not a good one. The Mersenne Twister algorithm `suffers if there are too many 0s`_. Similarly, two adjacent 32-bit integer seeds (i.e. ``12345`` and ``12346``) would produce very similar streams. @@ -91,15 +91,15 @@ territory ([2]_). .. [2] In this calculation, we can ignore the amount of numbers drawn from each stream. Each of the PRNGs we provide has some extra protection built in that avoids overlaps if the `~SeedSequence` pools differ in the - slightest bit. `~pcg64.PCG64` has :math:`2^{127}` separate cycles + slightest bit. `PCG64` has :math:`2^{127}` separate cycles determined by the seed in addition to the position in the :math:`2^{128}` long period for each cycle, so one has to both get on or near the same cycle *and* seed a nearby position in the cycle. - `~philox.Philox` has completely independent cycles determined by the seed. - `~sfc64.SFC64` incorporates a 64-bit counter so every unique seed is at + `Philox` has completely independent cycles determined by the seed. + `SFC64` incorporates a 64-bit counter so every unique seed is at least :math:`2^{64}` iterations away from any other seed. And - finally, `~mt19937.MT19937` has just an unimaginably huge period. Getting - a collision internal to `~SeedSequence` is the way a failure would be + finally, `MT19937` has just an unimaginably huge period. Getting + a collision internal to `SeedSequence` is the way a failure would be observed. .. _`implements an algorithm`: http://www.pcg-random.org/posts/developing-a-seed_seq-alternative.html @@ -113,10 +113,10 @@ territory ([2]_). Independent Streams ------------------- -:class:`~philox.Philox` is a counter-based RNG based which generates values by +`Philox` is a counter-based RNG based which generates values by encrypting an incrementing counter using weak cryptographic primitives. The seed determines the key that is used for the encryption. Unique keys create -unique, independent streams. :class:`~philox.Philox` lets you bypass the +unique, independent streams. `Philox` lets you bypass the seeding algorithm to directly set the 128-bit key. Similar, but different, keys will still create independent streams. diff --git a/doc/source/reference/random/performance.rst b/doc/source/reference/random/performance.rst index 2d5fca496..d70dd064a 100644 --- a/doc/source/reference/random/performance.rst +++ b/doc/source/reference/random/performance.rst @@ -5,21 +5,21 @@ Performance Recommendation ************** -The recommended generator for general use is :class:`~pcg64.PCG64`. It is +The recommended generator for general use is `PCG64`. 
It is statistically high quality, full-featured, and fast on most platforms, but somewhat slow when compiled for 32-bit processes. -:class:`~philox.Philox` is fairly slow, but its statistical properties have +`Philox` is fairly slow, but its statistical properties have very high quality, and it is easy to get assuredly-independent stream by using unique keys. If that is the style you wish to use for parallel streams, or you are porting from another system that uses that style, then -:class:`~philox.Philox` is your choice. +`Philox` is your choice. -:class:`~sfc64.SFC64` is statistically high quality and very fast. However, it +`SFC64` is statistically high quality and very fast. However, it lacks jumpability. If you are not using that capability and want lots of speed, even on 32-bit processes, this is your choice. -:class:`~mt19937.MT19937` `fails some statistical tests`_ and is not especially +`MT19937` `fails some statistical tests`_ and is not especially fast compared to modern PRNGs. For these reasons, we mostly do not recommend using it on its own, only through the legacy `~.RandomState` for reproducing old results. That said, it has a very long history as a default in @@ -31,20 +31,20 @@ Timings ******* The timings below are the time in ns to produce 1 random value from a -specific distribution. The original :class:`~mt19937.MT19937` generator is +specific distribution. The original `MT19937` generator is much slower since it requires 2 32-bit values to equal the output of the faster generators. Integer performance has a similar ordering. The pattern is similar for other, more complex generators. The normal -performance of the legacy :class:`~.RandomState` generator is much +performance of the legacy `RandomState` generator is much lower than the other since it uses the Box-Muller transformation rather than the Ziggurat generator. The performance gap for Exponentials is also large due to the cost of computing the log function to invert the CDF. The column labeled MT19973 is used the same 32-bit generator as -:class:`~.RandomState` but produces random values using -:class:`~Generator`. +`RandomState` but produces random values using +`Generator`. .. csv-table:: :header: ,MT19937,PCG64,Philox,SFC64,RandomState @@ -61,7 +61,7 @@ The column labeled MT19973 is used the same 32-bit generator as Poissons,67.6,52.4,69.2,46.4,78.1 The next table presents the performance in percentage relative to values -generated by the legacy generator, `RandomState(MT19937())`. The overall +generated by the legacy generator, ``RandomState(MT19937())``. The overall performance was computed using a geometric mean. .. csv-table:: diff --git a/doc/source/reference/routines.array-manipulation.rst b/doc/source/reference/routines.array-manipulation.rst index cc93d1029..bf43232ef 100644 --- a/doc/source/reference/routines.array-manipulation.rst +++ b/doc/source/reference/routines.array-manipulation.rst @@ -9,6 +9,7 @@ Basic operations :toctree: generated/ copyto + shape Changing array shape ==================== diff --git a/doc/source/reference/routines.ma.rst b/doc/source/reference/routines.ma.rst index 491bb6bff..5b2098c7a 100644 --- a/doc/source/reference/routines.ma.rst +++ b/doc/source/reference/routines.ma.rst @@ -264,17 +264,6 @@ Conversion operations ma.MaskedArray.tobytes -Pickling and unpickling -~~~~~~~~~~~~~~~~~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.dump - ma.dumps - ma.load - ma.loads - - Filling a masked array ~~~~~~~~~~~~~~~~~~~~~~ .. 
autosummary:: diff --git a/doc/source/reference/routines.testing.rst b/doc/source/reference/routines.testing.rst index c676dec07..98ce3f377 100644 --- a/doc/source/reference/routines.testing.rst +++ b/doc/source/reference/routines.testing.rst @@ -37,11 +37,11 @@ Decorators .. autosummary:: :toctree: generated/ - decorators.deprecated - decorators.knownfailureif - decorators.setastest - decorators.skipif - decorators.slow + dec.deprecated + dec.knownfailureif + dec.setastest + dec.skipif + dec.slow decorate_methods Test Running diff --git a/doc/source/reference/ufuncs.rst b/doc/source/reference/ufuncs.rst index d00e88b34..0416d6efc 100644 --- a/doc/source/reference/ufuncs.rst +++ b/doc/source/reference/ufuncs.rst @@ -100,7 +100,7 @@ is true: - *d* acts like a (5,6) array where the single value is repeated. -.. _ufuncs.output-type: +.. _ufuncs-output-type: Output type determination ========================= @@ -228,46 +228,47 @@ can generate this table for your system with the code given in the Figure. .. admonition:: Figure - Code segment showing the "can cast safely" table for a 32-bit system. + Code segment showing the "can cast safely" table for a 64-bit system. + Generally the output depends on the system; your system might result in + a different table. + >>> mark = {False: ' -', True: ' Y'} >>> def print_table(ntypes): - ... print 'X', - ... for char in ntypes: print char, - ... print + ... print('X ' + ' '.join(ntypes)) ... for row in ntypes: - ... print row, + ... print(row, end='') ... for col in ntypes: - ... print int(np.can_cast(row, col)), - ... print + ... print(mark[np.can_cast(row, col)], end='') + ... print() + ... >>> print_table(np.typecodes['All']) X ? b h i l q p B H I L Q P e f d g F D G S U V O M m - ? 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 - b 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 - h 0 0 1 1 1 1 1 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 - i 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 0 0 - l 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 0 0 - q 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 0 0 - p 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 0 0 - B 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 - H 0 0 0 1 1 1 1 0 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 0 0 - I 0 0 0 0 1 1 1 0 0 1 1 1 1 0 0 1 1 0 1 1 1 1 1 1 0 0 - L 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 1 1 0 1 1 1 1 1 1 0 0 - Q 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 1 1 0 1 1 1 1 1 1 0 0 - P 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 1 1 0 1 1 1 1 1 1 0 0 - e 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 - f 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 - d 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 0 0 - g 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 1 1 1 1 0 0 - F 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 - D 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 - G 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 - S 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 - U 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 - V 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 - O 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 - M 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 - m 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 - + ? 
Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y - Y + b - Y Y Y Y Y Y - - - - - - Y Y Y Y Y Y Y Y Y Y Y - Y + h - - Y Y Y Y Y - - - - - - - Y Y Y Y Y Y Y Y Y Y - Y + i - - - Y Y Y Y - - - - - - - - Y Y - Y Y Y Y Y Y - Y + l - - - - Y Y Y - - - - - - - - Y Y - Y Y Y Y Y Y - Y + q - - - - Y Y Y - - - - - - - - Y Y - Y Y Y Y Y Y - Y + p - - - - Y Y Y - - - - - - - - Y Y - Y Y Y Y Y Y - Y + B - - Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y - Y + H - - - Y Y Y Y - Y Y Y Y Y - Y Y Y Y Y Y Y Y Y Y - Y + I - - - - Y Y Y - - Y Y Y Y - - Y Y - Y Y Y Y Y Y - Y + L - - - - - - - - - - Y Y Y - - Y Y - Y Y Y Y Y Y - Y + Q - - - - - - - - - - Y Y Y - - Y Y - Y Y Y Y Y Y - Y + P - - - - - - - - - - Y Y Y - - Y Y - Y Y Y Y Y Y - Y + e - - - - - - - - - - - - - Y Y Y Y Y Y Y Y Y Y Y - - + f - - - - - - - - - - - - - - Y Y Y Y Y Y Y Y Y Y - - + d - - - - - - - - - - - - - - - Y Y - Y Y Y Y Y Y - - + g - - - - - - - - - - - - - - - - Y - - Y Y Y Y Y - - + F - - - - - - - - - - - - - - - - - Y Y Y Y Y Y Y - - + D - - - - - - - - - - - - - - - - - - Y Y Y Y Y Y - - + G - - - - - - - - - - - - - - - - - - - Y Y Y Y Y - - + S - - - - - - - - - - - - - - - - - - - - Y Y Y Y - - + U - - - - - - - - - - - - - - - - - - - - - Y Y Y - - + V - - - - - - - - - - - - - - - - - - - - - - Y Y - - + O - - - - - - - - - - - - - - - - - - - - - - Y Y - - + M - - - - - - - - - - - - - - - - - - - - - - Y Y Y - + m - - - - - - - - - - - - - - - - - - - - - - Y Y - Y You should note that, while included in the table for completeness, the 'S', 'U', and 'V' types cannot be operated on by ufuncs. Also, @@ -319,7 +320,7 @@ advanced usage and will not typically be used. .. versionadded:: 1.10 The 'out' keyword argument is expected to be a tuple with one entry per - output (which can be `None` for arrays to be allocated by the ufunc). + output (which can be None for arrays to be allocated by the ufunc). For ufuncs with a single output, passing a single array (instead of a tuple holding a single array) is also valid. @@ -493,7 +494,7 @@ keyword, and an *out* keyword, and the arrays must all have dimension >= 1. The *axis* keyword specifies the axis of the array over which the reduction will take place (with negative values counting backwards). Generally, it is an integer, though for :meth:`ufunc.reduce`, it can also be a tuple of `int` to -reduce over several axes at once, or `None`, to reduce over all axes. +reduce over several axes at once, or None, to reduce over all axes. The *dtype* keyword allows you to manage a very common problem that arises when naively using :meth:`ufunc.reduce`. Sometimes you may have an array of a certain data type and wish to add up all of its diff --git a/doc/source/release.rst b/doc/source/release.rst index 8dfb8db1d..3bfe81243 100644 --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -2,52 +2,59 @@ Release Notes ************* -.. include:: ../release/1.18.0-notes.rst -.. include:: ../release/1.17.0-notes.rst -.. include:: ../release/1.16.4-notes.rst -.. include:: ../release/1.16.3-notes.rst -.. include:: ../release/1.16.2-notes.rst -.. include:: ../release/1.16.1-notes.rst -.. include:: ../release/1.16.0-notes.rst -.. include:: ../release/1.15.4-notes.rst -.. include:: ../release/1.15.3-notes.rst -.. include:: ../release/1.15.2-notes.rst -.. include:: ../release/1.15.1-notes.rst -.. include:: ../release/1.15.0-notes.rst -.. include:: ../release/1.14.6-notes.rst -.. include:: ../release/1.14.5-notes.rst -.. include:: ../release/1.14.4-notes.rst -.. include:: ../release/1.14.3-notes.rst -.. 
include:: ../release/1.14.2-notes.rst -.. include:: ../release/1.14.1-notes.rst -.. include:: ../release/1.14.0-notes.rst -.. include:: ../release/1.13.3-notes.rst -.. include:: ../release/1.13.2-notes.rst -.. include:: ../release/1.13.1-notes.rst -.. include:: ../release/1.13.0-notes.rst -.. include:: ../release/1.12.1-notes.rst -.. include:: ../release/1.12.0-notes.rst -.. include:: ../release/1.11.3-notes.rst -.. include:: ../release/1.11.2-notes.rst -.. include:: ../release/1.11.1-notes.rst -.. include:: ../release/1.11.0-notes.rst -.. include:: ../release/1.10.4-notes.rst -.. include:: ../release/1.10.3-notes.rst -.. include:: ../release/1.10.2-notes.rst -.. include:: ../release/1.10.1-notes.rst -.. include:: ../release/1.10.0-notes.rst -.. include:: ../release/1.9.2-notes.rst -.. include:: ../release/1.9.1-notes.rst -.. include:: ../release/1.9.0-notes.rst -.. include:: ../release/1.8.2-notes.rst -.. include:: ../release/1.8.1-notes.rst -.. include:: ../release/1.8.0-notes.rst -.. include:: ../release/1.7.2-notes.rst -.. include:: ../release/1.7.1-notes.rst -.. include:: ../release/1.7.0-notes.rst -.. include:: ../release/1.6.2-notes.rst -.. include:: ../release/1.6.1-notes.rst -.. include:: ../release/1.6.0-notes.rst -.. include:: ../release/1.5.0-notes.rst -.. include:: ../release/1.4.0-notes.rst -.. include:: ../release/1.3.0-notes.rst +.. toctree:: + :maxdepth: 3 + + 1.18.0 <release/1.18.0-notes> + 1.17.3 <release/1.17.3-notes> + 1.17.2 <release/1.17.2-notes> + 1.17.1 <release/1.17.1-notes> + 1.17.0 <release/1.17.0-notes> + 1.16.5 <release/1.16.5-notes> + 1.16.4 <release/1.16.4-notes> + 1.16.3 <release/1.16.3-notes> + 1.16.2 <release/1.16.2-notes> + 1.16.1 <release/1.16.1-notes> + 1.16.0 <release/1.16.0-notes> + 1.15.4 <release/1.15.4-notes> + 1.15.3 <release/1.15.3-notes> + 1.15.2 <release/1.15.2-notes> + 1.15.1 <release/1.15.1-notes> + 1.15.0 <release/1.15.0-notes> + 1.14.6 <release/1.14.6-notes> + 1.14.5 <release/1.14.5-notes> + 1.14.4 <release/1.14.4-notes> + 1.14.3 <release/1.14.3-notes> + 1.14.2 <release/1.14.2-notes> + 1.14.1 <release/1.14.1-notes> + 1.14.0 <release/1.14.0-notes> + 1.13.3 <release/1.13.3-notes> + 1.13.2 <release/1.13.2-notes> + 1.13.1 <release/1.13.1-notes> + 1.13.0 <release/1.13.0-notes> + 1.12.1 <release/1.12.1-notes> + 1.12.0 <release/1.12.0-notes> + 1.11.3 <release/1.11.3-notes> + 1.11.2 <release/1.11.2-notes> + 1.11.1 <release/1.11.1-notes> + 1.11.0 <release/1.11.0-notes> + 1.10.4 <release/1.10.4-notes> + 1.10.3 <release/1.10.3-notes> + 1.10.2 <release/1.10.2-notes> + 1.10.1 <release/1.10.1-notes> + 1.10.0 <release/1.10.0-notes> + 1.9.2 <release/1.9.2-notes> + 1.9.1 <release/1.9.1-notes> + 1.9.0 <release/1.9.0-notes> + 1.8.2 <release/1.8.2-notes> + 1.8.1 <release/1.8.1-notes> + 1.8.0 <release/1.8.0-notes> + 1.7.2 <release/1.7.2-notes> + 1.7.1 <release/1.7.1-notes> + 1.7.0 <release/1.7.0-notes> + 1.6.2 <release/1.6.2-notes> + 1.6.1 <release/1.6.1-notes> + 1.6.0 <release/1.6.0-notes> + 1.5.0 <release/1.5.0-notes> + 1.4.0 <release/1.4.0-notes> + 1.3.0 <release/1.3.0-notes> diff --git a/doc/release/1.10.0-notes.rst b/doc/source/release/1.10.0-notes.rst index 88062e463..88062e463 100644 --- a/doc/release/1.10.0-notes.rst +++ b/doc/source/release/1.10.0-notes.rst diff --git a/doc/release/1.10.1-notes.rst b/doc/source/release/1.10.1-notes.rst index 4e541d279..4e541d279 100644 --- a/doc/release/1.10.1-notes.rst +++ b/doc/source/release/1.10.1-notes.rst diff --git a/doc/release/1.10.2-notes.rst b/doc/source/release/1.10.2-notes.rst index 8c26b463c..8c26b463c 
100644 --- a/doc/release/1.10.2-notes.rst +++ b/doc/source/release/1.10.2-notes.rst diff --git a/doc/release/1.10.3-notes.rst b/doc/source/release/1.10.3-notes.rst index 0d4df4ce6..0d4df4ce6 100644 --- a/doc/release/1.10.3-notes.rst +++ b/doc/source/release/1.10.3-notes.rst diff --git a/doc/release/1.10.4-notes.rst b/doc/source/release/1.10.4-notes.rst index 481928ca7..481928ca7 100644 --- a/doc/release/1.10.4-notes.rst +++ b/doc/source/release/1.10.4-notes.rst diff --git a/doc/release/1.11.0-notes.rst b/doc/source/release/1.11.0-notes.rst index 166502ac5..166502ac5 100644 --- a/doc/release/1.11.0-notes.rst +++ b/doc/source/release/1.11.0-notes.rst diff --git a/doc/release/1.11.1-notes.rst b/doc/source/release/1.11.1-notes.rst index 6303c32f0..6303c32f0 100644 --- a/doc/release/1.11.1-notes.rst +++ b/doc/source/release/1.11.1-notes.rst diff --git a/doc/release/1.11.2-notes.rst b/doc/source/release/1.11.2-notes.rst index c954089d5..c954089d5 100644 --- a/doc/release/1.11.2-notes.rst +++ b/doc/source/release/1.11.2-notes.rst diff --git a/doc/release/1.11.3-notes.rst b/doc/source/release/1.11.3-notes.rst index 8381a97f7..8381a97f7 100644 --- a/doc/release/1.11.3-notes.rst +++ b/doc/source/release/1.11.3-notes.rst diff --git a/doc/release/1.12.0-notes.rst b/doc/source/release/1.12.0-notes.rst index 711055d16..711055d16 100644 --- a/doc/release/1.12.0-notes.rst +++ b/doc/source/release/1.12.0-notes.rst diff --git a/doc/release/1.12.1-notes.rst b/doc/source/release/1.12.1-notes.rst index f67dab108..f67dab108 100644 --- a/doc/release/1.12.1-notes.rst +++ b/doc/source/release/1.12.1-notes.rst diff --git a/doc/release/1.13.0-notes.rst b/doc/source/release/1.13.0-notes.rst index 3b719db09..3b719db09 100644 --- a/doc/release/1.13.0-notes.rst +++ b/doc/source/release/1.13.0-notes.rst diff --git a/doc/release/1.13.1-notes.rst b/doc/source/release/1.13.1-notes.rst index 88a4bc3dd..88a4bc3dd 100644 --- a/doc/release/1.13.1-notes.rst +++ b/doc/source/release/1.13.1-notes.rst diff --git a/doc/release/1.13.2-notes.rst b/doc/source/release/1.13.2-notes.rst index f2f9120f5..f2f9120f5 100644 --- a/doc/release/1.13.2-notes.rst +++ b/doc/source/release/1.13.2-notes.rst diff --git a/doc/release/1.13.3-notes.rst b/doc/source/release/1.13.3-notes.rst index 7f7170bcc..7f7170bcc 100644 --- a/doc/release/1.13.3-notes.rst +++ b/doc/source/release/1.13.3-notes.rst diff --git a/doc/release/1.14.0-notes.rst b/doc/source/release/1.14.0-notes.rst index 462631de6..462631de6 100644 --- a/doc/release/1.14.0-notes.rst +++ b/doc/source/release/1.14.0-notes.rst diff --git a/doc/release/1.14.1-notes.rst b/doc/source/release/1.14.1-notes.rst index 7b95c2e28..7b95c2e28 100644 --- a/doc/release/1.14.1-notes.rst +++ b/doc/source/release/1.14.1-notes.rst diff --git a/doc/release/1.14.2-notes.rst b/doc/source/release/1.14.2-notes.rst index 3f47cb5f5..3f47cb5f5 100644 --- a/doc/release/1.14.2-notes.rst +++ b/doc/source/release/1.14.2-notes.rst diff --git a/doc/release/1.14.3-notes.rst b/doc/source/release/1.14.3-notes.rst index 60b631168..60b631168 100644 --- a/doc/release/1.14.3-notes.rst +++ b/doc/source/release/1.14.3-notes.rst diff --git a/doc/release/1.14.4-notes.rst b/doc/source/release/1.14.4-notes.rst index 3fb94383b..3fb94383b 100644 --- a/doc/release/1.14.4-notes.rst +++ b/doc/source/release/1.14.4-notes.rst diff --git a/doc/release/1.14.5-notes.rst b/doc/source/release/1.14.5-notes.rst index 9a97cc033..9a97cc033 100644 --- a/doc/release/1.14.5-notes.rst +++ b/doc/source/release/1.14.5-notes.rst diff --git 
a/doc/release/1.14.6-notes.rst b/doc/source/release/1.14.6-notes.rst index ac6a78272..ac6a78272 100644 --- a/doc/release/1.14.6-notes.rst +++ b/doc/source/release/1.14.6-notes.rst diff --git a/doc/release/1.15.0-notes.rst b/doc/source/release/1.15.0-notes.rst index 7235ca915..7235ca915 100644 --- a/doc/release/1.15.0-notes.rst +++ b/doc/source/release/1.15.0-notes.rst diff --git a/doc/release/1.15.1-notes.rst b/doc/source/release/1.15.1-notes.rst index ddb83303c..ddb83303c 100644 --- a/doc/release/1.15.1-notes.rst +++ b/doc/source/release/1.15.1-notes.rst diff --git a/doc/release/1.15.2-notes.rst b/doc/source/release/1.15.2-notes.rst index a3e61fccd..a3e61fccd 100644 --- a/doc/release/1.15.2-notes.rst +++ b/doc/source/release/1.15.2-notes.rst diff --git a/doc/release/1.15.3-notes.rst b/doc/source/release/1.15.3-notes.rst index 753eecec9..753eecec9 100644 --- a/doc/release/1.15.3-notes.rst +++ b/doc/source/release/1.15.3-notes.rst diff --git a/doc/release/1.15.4-notes.rst b/doc/source/release/1.15.4-notes.rst index 033bd5828..033bd5828 100644 --- a/doc/release/1.15.4-notes.rst +++ b/doc/source/release/1.15.4-notes.rst diff --git a/doc/release/1.16.0-notes.rst b/doc/source/release/1.16.0-notes.rst index 1034d6e6c..1034d6e6c 100644 --- a/doc/release/1.16.0-notes.rst +++ b/doc/source/release/1.16.0-notes.rst diff --git a/doc/release/1.16.1-notes.rst b/doc/source/release/1.16.1-notes.rst index 2a190ef91..2a190ef91 100644 --- a/doc/release/1.16.1-notes.rst +++ b/doc/source/release/1.16.1-notes.rst diff --git a/doc/release/1.16.2-notes.rst b/doc/source/release/1.16.2-notes.rst index 62b90dc40..62b90dc40 100644 --- a/doc/release/1.16.2-notes.rst +++ b/doc/source/release/1.16.2-notes.rst diff --git a/doc/release/1.16.3-notes.rst b/doc/source/release/1.16.3-notes.rst index 181a7264d..181a7264d 100644 --- a/doc/release/1.16.3-notes.rst +++ b/doc/source/release/1.16.3-notes.rst diff --git a/doc/release/1.16.4-notes.rst b/doc/source/release/1.16.4-notes.rst index a236b05c8..a236b05c8 100644 --- a/doc/release/1.16.4-notes.rst +++ b/doc/source/release/1.16.4-notes.rst diff --git a/doc/source/release/1.16.5-notes.rst b/doc/source/release/1.16.5-notes.rst new file mode 100644 index 000000000..5b6eb585b --- /dev/null +++ b/doc/source/release/1.16.5-notes.rst @@ -0,0 +1,68 @@ +========================== +NumPy 1.16.5 Release Notes +========================== + +The NumPy 1.16.5 release fixes bugs reported against the 1.16.4 release, and +also backports several enhancements from master that seem appropriate for a +release series that is the last to support Python 2.7. The wheels on PyPI are +linked with OpenBLAS v0.3.7-dev, which should fix errors on Skylake series +cpus. + +Downstream developers building this release should use Cython >= 0.29.2 and, if +using OpenBLAS, OpenBLAS >= v0.3.7. The supported Python versions are 2.7 and +3.5-3.7. + + +Contributors +============ + +A total of 18 people contributed to this release. People with a "+" by their +names contributed a patch for the first time. + +* Alexander Shadchin +* Allan Haldane +* Bruce Merry + +* Charles Harris +* Colin Snyder + +* Dan Allan + +* Emile + +* Eric Wieser +* Grey Baker + +* Maksim Shabunin + +* Marten van Kerkwijk +* Matti Picus +* Peter Andreas Entschev + +* Ralf Gommers +* Richard Harris + +* Sebastian Berg +* Sergei Lebedev + +* Stephan Hoyer + +Pull requests merged +==================== + +A total of 23 pull requests were merged for this release. 
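The 1.16.5 notes above state that the PyPI wheels are linked with OpenBLAS v0.3.7-dev to fix errors on Skylake-series CPUs. As a rough, illustrative check (not part of this patch), an installed wheel can be inspected from Python to confirm its version and which BLAS/LAPACK it was built against::

    import numpy as np

    print(np.__version__)  # e.g. 1.16.5
    np.show_config()       # prints the BLAS/LAPACK build info, typically including
                           # openblas_info / blas_opt_info sections for PyPI wheels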
+ +* `#13742 <https://github.com/numpy/numpy/pull/13742>`__: ENH: Add project URLs to setup.py +* `#13823 <https://github.com/numpy/numpy/pull/13823>`__: TEST, ENH: fix tests and ctypes code for PyPy +* `#13845 <https://github.com/numpy/numpy/pull/13845>`__: BUG: use npy_intp instead of int for indexing array +* `#13867 <https://github.com/numpy/numpy/pull/13867>`__: TST: Ignore DeprecationWarning during nose imports +* `#13905 <https://github.com/numpy/numpy/pull/13905>`__: BUG: Fix use-after-free in boolean indexing +* `#13933 <https://github.com/numpy/numpy/pull/13933>`__: MAINT/BUG/DOC: Fix errors in _add_newdocs +* `#13984 <https://github.com/numpy/numpy/pull/13984>`__: BUG: fix byte order reversal for datetime64[ns] +* `#13994 <https://github.com/numpy/numpy/pull/13994>`__: MAINT,BUG: Use nbytes to also catch empty descr during allocation +* `#14042 <https://github.com/numpy/numpy/pull/14042>`__: BUG: np.array cleared errors occured in PyMemoryView_FromObject +* `#14043 <https://github.com/numpy/numpy/pull/14043>`__: BUG: Fixes for Undefined Behavior Sanitizer (UBSan) errors. +* `#14044 <https://github.com/numpy/numpy/pull/14044>`__: BUG: ensure that casting to/from structured is properly checked. +* `#14045 <https://github.com/numpy/numpy/pull/14045>`__: MAINT: fix histogram*d dispatchers +* `#14046 <https://github.com/numpy/numpy/pull/14046>`__: BUG: further fixup to histogram2d dispatcher. +* `#14052 <https://github.com/numpy/numpy/pull/14052>`__: BUG: Replace contextlib.suppress for Python 2.7 +* `#14056 <https://github.com/numpy/numpy/pull/14056>`__: BUG: fix compilation of 3rd party modules with Py_LIMITED_API... +* `#14057 <https://github.com/numpy/numpy/pull/14057>`__: BUG: Fix memory leak in dtype from dict contructor +* `#14058 <https://github.com/numpy/numpy/pull/14058>`__: DOC: Document array_function at a higher level. +* `#14084 <https://github.com/numpy/numpy/pull/14084>`__: BUG, DOC: add new recfunctions to `__all__` +* `#14162 <https://github.com/numpy/numpy/pull/14162>`__: BUG: Remove stray print that causes a SystemError on python 3.7 +* `#14297 <https://github.com/numpy/numpy/pull/14297>`__: TST: Pin pytest version to 5.0.1. +* `#14322 <https://github.com/numpy/numpy/pull/14322>`__: ENH: Enable huge pages in all Linux builds +* `#14346 <https://github.com/numpy/numpy/pull/14346>`__: BUG: fix behavior of structured_to_unstructured on non-trivial... +* `#14382 <https://github.com/numpy/numpy/pull/14382>`__: REL: Prepare for the NumPy 1.16.5 release. diff --git a/doc/release/1.17.0-notes.rst b/doc/source/release/1.17.0-notes.rst index 8d69e36d9..a0e737982 100644 --- a/doc/release/1.17.0-notes.rst +++ b/doc/source/release/1.17.0-notes.rst @@ -239,7 +239,7 @@ New extensible `numpy.random` module with selectable random number generators ----------------------------------------------------------------------------- A new extensible `numpy.random` module along with four selectable random number generators and improved seeding designed for use in parallel processes has been -added. The currently available :ref:`Bit Generators <bit_generator>` are +added. The currently available `Bit Generators` are `~mt19937.MT19937`, `~pcg64.PCG64`, `~philox.Philox`, and `~sfc64.SFC64`. ``PCG64`` is the new default while ``MT19937`` is retained for backwards compatibility. 
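The 1.17.0 hunk above describes the new extensible `numpy.random` module with selectable bit generators (MT19937, PCG64, Philox, SFC64). As a rough sketch of the new interface (illustrative only, not part of this patch), a `Generator` is constructed around an explicit bit generator and used to draw samples::

    from numpy.random import Generator, PCG64, MT19937

    rg = Generator(PCG64(12345))       # PCG64 is the new default bit generator
    x = rg.standard_normal(3)          # three standard-normal draws
    rg_mt = Generator(MT19937(12345))  # MT19937 remains available for compatibility
    y = rg_mt.integers(0, 10, size=5)  # Generator.integers replaces randint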
Note that the legacy random module is unchanged and is now diff --git a/doc/source/release/1.17.1-notes.rst b/doc/source/release/1.17.1-notes.rst new file mode 100644 index 000000000..bd837ee5b --- /dev/null +++ b/doc/source/release/1.17.1-notes.rst @@ -0,0 +1,73 @@ +.. currentmodule:: numpy + +========================== +NumPy 1.17.1 Release Notes +========================== + +This release contains a number of fixes for bugs reported against NumPy 1.17.0 +along with a few documentation and build improvements. The Python versions +supported are 3.5-3.7, note that Python 2.7 has been dropped. Python 3.8b3 +should work with the released source packages, but there are no future +guarantees. + +Downstream developers should use Cython >= 0.29.13 for Python 3.8 support and +OpenBLAS >= 3.7 to avoid problems on the Skylake architecture. The NumPy wheels +on PyPI are built from the OpenBLAS development branch in order to avoid those +problems. + + +Contributors +============ + +A total of 17 people contributed to this release. People with a "+" by their +names contributed a patch for the first time. + +* Alexander Jung + +* Allan Haldane +* Charles Harris +* Eric Wieser +* Giuseppe Cuccu + +* Hiroyuki V. Yamazaki +* Jérémie du Boisberranger +* Kmol Yuan + +* Matti Picus +* Max Bolingbroke + +* Maxwell Aladago + +* Oleksandr Pavlyk +* Peter Andreas Entschev +* Sergei Lebedev +* Seth Troisi + +* Vladimir Pershin + +* Warren Weckesser + + +Pull requests merged +==================== + +A total of 24 pull requests were merged for this release. + +* `#14156 <https://github.com/numpy/numpy/pull/14156>`__: TST: Allow fuss in testing strided/non-strided exp/log loops +* `#14157 <https://github.com/numpy/numpy/pull/14157>`__: BUG: avx2_scalef_ps must be static +* `#14158 <https://github.com/numpy/numpy/pull/14158>`__: BUG: Remove stray print that causes a SystemError on python 3.7. +* `#14159 <https://github.com/numpy/numpy/pull/14159>`__: BUG: Fix DeprecationWarning in python 3.8. +* `#14160 <https://github.com/numpy/numpy/pull/14160>`__: BLD: Add missing gcd/lcm definitions to npy_math.h +* `#14161 <https://github.com/numpy/numpy/pull/14161>`__: DOC, BUILD: cleanups and fix (again) 'build dist' +* `#14166 <https://github.com/numpy/numpy/pull/14166>`__: TST: Add 3.8-dev to travisCI testing. +* `#14194 <https://github.com/numpy/numpy/pull/14194>`__: BUG: Remove the broken clip wrapper (Backport) +* `#14198 <https://github.com/numpy/numpy/pull/14198>`__: DOC: Fix hermitian argument docs in svd. +* `#14199 <https://github.com/numpy/numpy/pull/14199>`__: MAINT: Workaround for Intel compiler bug leading to failing test +* `#14200 <https://github.com/numpy/numpy/pull/14200>`__: TST: Clean up of test_pocketfft.py +* `#14201 <https://github.com/numpy/numpy/pull/14201>`__: BUG: Make advanced indexing result on read-only subclass writeable... +* `#14236 <https://github.com/numpy/numpy/pull/14236>`__: BUG: Fixed default BitGenerator name +* `#14237 <https://github.com/numpy/numpy/pull/14237>`__: ENH: add c-imported modules for freeze analysis in np.random +* `#14296 <https://github.com/numpy/numpy/pull/14296>`__: TST: Pin pytest version to 5.0.1 +* `#14301 <https://github.com/numpy/numpy/pull/14301>`__: BUG: Fix leak in the f2py-generated module init and `PyMem_Del`... +* `#14302 <https://github.com/numpy/numpy/pull/14302>`__: BUG: Fix formatting error in exception message +* `#14307 <https://github.com/numpy/numpy/pull/14307>`__: MAINT: random: Match type of SeedSequence.pool_size to DEFAULT_POOL_SIZE. 
+* `#14308 <https://github.com/numpy/numpy/pull/14308>`__: BUG: Fix numpy.random bug in platform detection +* `#14309 <https://github.com/numpy/numpy/pull/14309>`__: ENH: Enable huge pages in all Linux builds +* `#14330 <https://github.com/numpy/numpy/pull/14330>`__: BUG: Fix segfault in `random.permutation(x)` when x is a string. +* `#14338 <https://github.com/numpy/numpy/pull/14338>`__: BUG: don't fail when lexsorting some empty arrays (#14228) +* `#14339 <https://github.com/numpy/numpy/pull/14339>`__: BUG: Fix misuse of .names and .fields in various places (backport... +* `#14345 <https://github.com/numpy/numpy/pull/14345>`__: BUG: fix behavior of structured_to_unstructured on non-trivial... +* `#14350 <https://github.com/numpy/numpy/pull/14350>`__: REL: Prepare 1.17.1 release diff --git a/doc/source/release/1.17.2-notes.rst b/doc/source/release/1.17.2-notes.rst new file mode 100644 index 000000000..65cdaf903 --- /dev/null +++ b/doc/source/release/1.17.2-notes.rst @@ -0,0 +1,49 @@ +.. currentmodule:: numpy + +========================== +NumPy 1.17.2 Release Notes +========================== + +This release contains fixes for bugs reported against NumPy 1.17.1 along with +some documentation improvements. The most important fix is for lexsort when the +keys are of type (u)int8 or (u)int16. If you are currently using 1.17 you +should upgrade. + +The Python versions supported in this release are 3.5-3.7, Python 2.7 has been +dropped. Python 3.8b4 should work with the released source packages, but there +are no future guarantees. + +Downstream developers should use Cython >= 0.29.13 for Python 3.8 support and +OpenBLAS >= 3.7 to avoid errors on the Skylake architecture. The NumPy wheels +on PyPI are built from the OpenBLAS development branch in order to avoid those +errors. + + +Contributors +============ + +A total of 7 people contributed to this release. People with a "+" by their +names contributed a patch for the first time. + +* CakeWithSteak + +* Charles Harris +* Dan Allan +* Hameer Abbasi +* Lars Grueter +* Matti Picus +* Sebastian Berg + + +Pull requests merged +==================== + +A total of 8 pull requests were merged for this release. + +* `#14418 <https://github.com/numpy/numpy/pull/14418>`__: BUG: Fix aradixsort indirect indexing. +* `#14420 <https://github.com/numpy/numpy/pull/14420>`__: DOC: Fix a minor typo in dispatch documentation. +* `#14421 <https://github.com/numpy/numpy/pull/14421>`__: BUG: test, fix regression in converting to ctypes +* `#14430 <https://github.com/numpy/numpy/pull/14430>`__: BUG: Do not show Override module in private error classes. +* `#14432 <https://github.com/numpy/numpy/pull/14432>`__: BUG: Fixed maximum relative error reporting in assert_allclose. +* `#14433 <https://github.com/numpy/numpy/pull/14433>`__: BUG: Fix uint-overflow if padding with linear_ramp and negative... +* `#14436 <https://github.com/numpy/numpy/pull/14436>`__: BUG: Update 1.17.x with 1.18.0-dev pocketfft.py. +* `#14446 <https://github.com/numpy/numpy/pull/14446>`__: REL: Prepare for NumPy 1.17.2 release. diff --git a/doc/source/release/1.17.3-notes.rst b/doc/source/release/1.17.3-notes.rst new file mode 100644 index 000000000..e33ca1917 --- /dev/null +++ b/doc/source/release/1.17.3-notes.rst @@ -0,0 +1,59 @@ +.. currentmodule:: numpy + +========================== +NumPy 1.17.3 Release Notes +========================== + +This release contains fixes for bugs reported against NumPy 1.17.2 along with +some documentation improvements.
The Python versions supported in this release +are 3.5-3.8. + +Downstream developers should use Cython >= 0.29.13 for Python 3.8 support and +OpenBLAS >= 3.7 to avoid errors on the Skylake architecture. + + +Highlights +========== + +- Wheels for Python 3.8 +- Boolean ``matmul`` fixed to use booleans instead of integers. + + +Compatibility notes +=================== + +- The seldom used ``PyArray_DescrCheck`` macro has been changed/fixed. + + +Contributors +============ + +A total of 7 people contributed to this release. People with a "+" by their +names contributed a patch for the first time. + +* Allan Haldane +* Charles Harris +* Kevin Sheppard +* Matti Picus +* Ralf Gommers +* Sebastian Berg +* Warren Weckesser + + +Pull requests merged +==================== + +A total of 12 pull requests were merged for this release. + +* `#14456 <https://github.com/numpy/numpy/pull/14456>`__: MAINT: clean up pocketfft modules inside numpy.fft namespace. +* `#14463 <https://github.com/numpy/numpy/pull/14463>`__: BUG: random.hypergeometic assumes npy_long is npy_int64, hung... +* `#14502 <https://github.com/numpy/numpy/pull/14502>`__: BUG: random: Revert gh-14458 and refix gh-14557. +* `#14504 <https://github.com/numpy/numpy/pull/14504>`__: BUG: add a specialized loop for boolean matmul. +* `#14506 <https://github.com/numpy/numpy/pull/14506>`__: MAINT: Update pytest version for Python 3.8 +* `#14512 <https://github.com/numpy/numpy/pull/14512>`__: DOC: random: fix doc linking, was referencing private submodules. +* `#14513 <https://github.com/numpy/numpy/pull/14513>`__: BUG,MAINT: Some fixes and minor cleanup based on clang analysis +* `#14515 <https://github.com/numpy/numpy/pull/14515>`__: BUG: Fix randint when range is 2**32 +* `#14519 <https://github.com/numpy/numpy/pull/14519>`__: MAINT: remove the entropy c-extension module +* `#14563 <https://github.com/numpy/numpy/pull/14563>`__: DOC: remove note about Pocketfft license file (non-existing here). +* `#14578 <https://github.com/numpy/numpy/pull/14578>`__: BUG: random: Create a legacy implementation of random.binomial. +* `#14687 <https://github.com/numpy/numpy/pull/14687>`__: BUG: properly define PyArray_DescrCheck diff --git a/doc/source/release/1.18.0-notes.rst b/doc/source/release/1.18.0-notes.rst new file mode 100644 index 000000000..e66540410 --- /dev/null +++ b/doc/source/release/1.18.0-notes.rst @@ -0,0 +1,8 @@ +The NumPy 1.18 release is currently in development. Please check +the ``numpy/doc/release/upcoming_changes/`` folder for upcoming +release notes. +The ``numpy/doc/release/upcoming_changes/README.txt`` details how +to add new release notes. + +For the work in progress release notes for the current development +version, see the `devdocs <https://numpy.org/devdocs/release.html>`__.
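Among the 1.17.3 highlights above is that boolean ``matmul`` now works on booleans rather than integers. A minimal sketch of the behaviour the fix targets (illustrative only, not part of this patch)::

    import numpy as np

    a = np.array([[True, False], [False, True]])
    b = np.array([[True, True], [False, False]])
    c = a @ b          # matmul of two boolean matrices
    print(c, c.dtype)  # boolean result; 1.17.3 adds a specialized boolean inner loop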
diff --git a/doc/release/1.3.0-notes.rst b/doc/source/release/1.3.0-notes.rst index 239714246..239714246 100644 --- a/doc/release/1.3.0-notes.rst +++ b/doc/source/release/1.3.0-notes.rst diff --git a/doc/release/1.4.0-notes.rst b/doc/source/release/1.4.0-notes.rst index 9480a054e..9480a054e 100644 --- a/doc/release/1.4.0-notes.rst +++ b/doc/source/release/1.4.0-notes.rst diff --git a/doc/release/1.5.0-notes.rst b/doc/source/release/1.5.0-notes.rst index a2184ab13..a2184ab13 100644 --- a/doc/release/1.5.0-notes.rst +++ b/doc/source/release/1.5.0-notes.rst diff --git a/doc/release/1.6.0-notes.rst b/doc/source/release/1.6.0-notes.rst index c5f53a0eb..c5f53a0eb 100644 --- a/doc/release/1.6.0-notes.rst +++ b/doc/source/release/1.6.0-notes.rst diff --git a/doc/release/1.6.1-notes.rst b/doc/source/release/1.6.1-notes.rst index 05fcb4ab9..05fcb4ab9 100644 --- a/doc/release/1.6.1-notes.rst +++ b/doc/source/release/1.6.1-notes.rst diff --git a/doc/release/1.6.2-notes.rst b/doc/source/release/1.6.2-notes.rst index 8f0b06f98..8f0b06f98 100644 --- a/doc/release/1.6.2-notes.rst +++ b/doc/source/release/1.6.2-notes.rst diff --git a/doc/release/1.7.0-notes.rst b/doc/source/release/1.7.0-notes.rst index f111f80dc..f111f80dc 100644 --- a/doc/release/1.7.0-notes.rst +++ b/doc/source/release/1.7.0-notes.rst diff --git a/doc/release/1.7.1-notes.rst b/doc/source/release/1.7.1-notes.rst index 04216b0df..04216b0df 100644 --- a/doc/release/1.7.1-notes.rst +++ b/doc/source/release/1.7.1-notes.rst diff --git a/doc/release/1.7.2-notes.rst b/doc/source/release/1.7.2-notes.rst index b0951bd72..b0951bd72 100644 --- a/doc/release/1.7.2-notes.rst +++ b/doc/source/release/1.7.2-notes.rst diff --git a/doc/release/1.8.0-notes.rst b/doc/source/release/1.8.0-notes.rst index 80c39f8bc..80c39f8bc 100644 --- a/doc/release/1.8.0-notes.rst +++ b/doc/source/release/1.8.0-notes.rst diff --git a/doc/release/1.8.1-notes.rst b/doc/source/release/1.8.1-notes.rst index ea34e75ac..ea34e75ac 100644 --- a/doc/release/1.8.1-notes.rst +++ b/doc/source/release/1.8.1-notes.rst diff --git a/doc/release/1.8.2-notes.rst b/doc/source/release/1.8.2-notes.rst index 71e549526..71e549526 100644 --- a/doc/release/1.8.2-notes.rst +++ b/doc/source/release/1.8.2-notes.rst diff --git a/doc/release/1.9.0-notes.rst b/doc/source/release/1.9.0-notes.rst index 7ea29e354..7ea29e354 100644 --- a/doc/release/1.9.0-notes.rst +++ b/doc/source/release/1.9.0-notes.rst diff --git a/doc/release/1.9.1-notes.rst b/doc/source/release/1.9.1-notes.rst index 4558237f4..4558237f4 100644 --- a/doc/release/1.9.1-notes.rst +++ b/doc/source/release/1.9.1-notes.rst diff --git a/doc/release/1.9.2-notes.rst b/doc/source/release/1.9.2-notes.rst index 268f3aa64..268f3aa64 100644 --- a/doc/release/1.9.2-notes.rst +++ b/doc/source/release/1.9.2-notes.rst diff --git a/doc/release/template.rst b/doc/source/release/template.rst index fdfec2be9..cde7646df 100644 --- a/doc/release/template.rst +++ b/doc/source/release/template.rst @@ -1,3 +1,5 @@ +:orphan: + ========================== NumPy 1.xx.x Release Notes ========================== diff --git a/doc/source/user/basics.io.genfromtxt.rst b/doc/source/user/basics.io.genfromtxt.rst index 6ef80bf8e..19e37eabc 100644 --- a/doc/source/user/basics.io.genfromtxt.rst +++ b/doc/source/user/basics.io.genfromtxt.rst @@ -27,13 +27,13 @@ Defining the input ================== The only mandatory argument of :func:`~numpy.genfromtxt` is the source of -the data. It can be a string, a list of strings, or a generator. 
If a -single string is provided, it is assumed to be the name of a local or -remote file, or an open file-like object with a :meth:`read` method, for -example, a file or :class:`io.StringIO` object. If a list of strings -or a generator returning strings is provided, each string is treated as one -line in a file. When the URL of a remote file is passed, the file is -automatically downloaded to the current directory and opened. +the data. It can be a string, a list of strings, a generator or an open +file-like object with a :meth:`read` method, for example, a file or +:class:`io.StringIO` object. If a single string is provided, it is assumed +to be the name of a local or remote file. If a list of strings or a generator +returning strings is provided, each string is treated as one line in a file. +When the URL of a remote file is passed, the file is automatically downloaded +to the current directory and opened. Recognized file types are text files and archives. Currently, the function recognizes :class:`gzip` and :class:`bz2` (`bzip2`) archives. The type of diff --git a/doc/source/user/building.rst b/doc/source/user/building.rst index 26f251151..b4b4371e5 100644 --- a/doc/source/user/building.rst +++ b/doc/source/user/building.rst @@ -69,6 +69,15 @@ Using ``virtualenv`` should work as expected. *Note: for build instructions to do development work on NumPy itself, see* :ref:`development-environment`. +Testing +------- + +Make sure to test your builds. To ensure everything stays in shape, see if all tests pass:: + + $ python runtests.py -v -m full + +For detailed info on testing, see :ref:`testing-builds`. + .. _parallel-builds: Parallel builds diff --git a/doc/source/user/c-info.beyond-basics.rst b/doc/source/user/c-info.beyond-basics.rst index d4d941a5e..62e8139fe 100644 --- a/doc/source/user/c-info.beyond-basics.rst +++ b/doc/source/user/c-info.beyond-basics.rst @@ -217,14 +217,13 @@ type will behave much like a regular data-type except ufuncs must have 1-d loops registered to handle it separately. Also checking for whether or not other data-types can be cast "safely" to and from this new type or not will always return "can cast" unless you also register -which types your new data-type can be cast to and from. Adding -data-types is one of the less well-tested areas for NumPy 1.0, so -there may be bugs remaining in the approach. Only add a new data-type -if you can't do what you want to do using the OBJECT or VOID -data-types that are already available. As an example of what I -consider a useful application of the ability to add data-types is the -possibility of adding a data-type of arbitrary precision floats to -NumPy. +which types your new data-type can be cast to and from. + +The NumPy source code includes an example of a custom data-type as part +of its test suite. The file ``_rational_tests.c.src`` in the source code +directory ``numpy/numpy/core/src/umath/`` contains an implementation of +a data-type that represents a rational number as the ratio of two 32 bit +integers. .. 
index:: pair: dtype; adding new @@ -300,9 +299,10 @@ An example castfunc is: static void double_to_float(double *from, float* to, npy_intp n, - void* ig1, void* ig2); - while (n--) { - (*to++) = (double) *(from++); + void* ignore1, void* ignore2) { + while (n--) { + (*to++) = (double) *(from++); + } } This could then be registered to convert doubles to floats using the diff --git a/doc/source/user/quickstart.rst b/doc/source/user/quickstart.rst index a23a7b2c7..6211d0c69 100644 --- a/doc/source/user/quickstart.rst +++ b/doc/source/user/quickstart.rst @@ -206,8 +206,8 @@ of elements that we want, instead of the step:: `empty_like`, `arange`, `linspace`, - `numpy.random.mtrand.RandomState.rand`, - `numpy.random.mtrand.RandomState.randn`, + `numpy.random.RandomState.rand`, + `numpy.random.RandomState.randn`, `fromfunction`, `fromfile` |
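The quickstart hunk above repoints the ``See also`` references from ``numpy.random.mtrand.RandomState`` to the public ``numpy.random.RandomState`` path. As a small illustrative sketch (not part of this patch), the referenced array-creation helpers can be exercised like this::

    import numpy as np

    grid = np.linspace(0.0, 2.0, num=9)              # 9 evenly spaced points on [0, 2]
    uniform = np.random.RandomState(42).rand(2, 3)   # legacy uniform samples in [0, 1)
    normal = np.random.RandomState(42).randn(2, 3)   # legacy standard-normal samples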