Diffstat (limited to 'doc/source')
-rw-r--r-- doc/source/_static/numpy.css | 40
-rw-r--r-- doc/source/_static/numpy_logo.png | bin 6103 -> 0 bytes
-rw-r--r-- doc/source/_static/numpylogo.svg | 23
-rw-r--r-- doc/source/_templates/autosummary/attribute.rst | 5
-rw-r--r-- doc/source/_templates/autosummary/base.rst | 5
-rw-r--r-- doc/source/_templates/autosummary/member.rst | 6
-rw-r--r-- doc/source/_templates/autosummary/method.rst | 5
-rw-r--r-- doc/source/_templates/defindex.html | 35
-rw-r--r-- doc/source/_templates/indexcontent.html | 40
-rw-r--r-- doc/source/_templates/layout.html | 30
-rw-r--r-- doc/source/about.rst | 62
-rw-r--r-- doc/source/conf.py | 154
-rw-r--r-- doc/source/contents.rst | 25
-rw-r--r-- doc/source/dev/conduct/code_of_conduct.rst | 163
-rw-r--r-- doc/source/dev/conduct/report_handling_manual.rst | 220
-rw-r--r-- doc/source/dev/development_advanced_debugging.rst | 190
-rw-r--r-- doc/source/dev/development_environment.rst | 4
-rw-r--r-- doc/source/dev/development_workflow.rst | 37
-rw-r--r-- doc/source/dev/index.rst | 32
-rw-r--r-- doc/source/dev/reviewer_guidelines.rst | 119
-rw-r--r-- doc/source/dev/style_guide.rst | 8
-rw-r--r-- doc/source/doc_conventions.rst | 23
-rw-r--r-- doc/source/docs/howto_document.rst | 63
-rw-r--r-- doc/source/f2py/allocarr_session.dat | 9
-rw-r--r-- doc/source/f2py/common_session.dat | 6
-rw-r--r-- doc/source/f2py/distutils.rst | 2
-rw-r--r-- doc/source/f2py/moddata_session.dat | 14
-rw-r--r-- doc/source/glossary.rst | 518
-rw-r--r-- doc/source/license.rst | 35
-rw-r--r-- doc/source/reference/arrays.classes.rst | 8
-rw-r--r-- doc/source/reference/arrays.datetime.rst | 2
-rw-r--r-- doc/source/reference/arrays.dtypes.rst | 53
-rw-r--r-- doc/source/reference/arrays.indexing.rst | 21
-rw-r--r-- doc/source/reference/arrays.interface.rst | 113
-rw-r--r-- doc/source/reference/arrays.ndarray.rst | 20
-rw-r--r-- doc/source/reference/arrays.nditer.cython.rst | 2
-rw-r--r-- doc/source/reference/arrays.scalars.rst | 374
-rw-r--r-- doc/source/reference/c-api/array.rst | 412
-rw-r--r-- doc/source/reference/c-api/config.rst | 79
-rw-r--r-- doc/source/reference/c-api/coremath.rst | 46
-rw-r--r-- doc/source/reference/c-api/deprecations.rst | 4
-rw-r--r-- doc/source/reference/c-api/dtype.rst | 45
-rw-r--r-- doc/source/reference/c-api/iterator.rst | 60
-rw-r--r-- doc/source/reference/c-api/types-and-structures.rst | 586
-rw-r--r-- doc/source/reference/c-api/ufunc.rst | 169
-rw-r--r-- doc/source/reference/global_state.rst | 15
-rw-r--r-- doc/source/reference/internals.code-explanations.rst | 7
-rw-r--r-- doc/source/reference/internals.rst | 158
-rw-r--r-- doc/source/reference/maskedarray.baseclass.rst | 4
-rw-r--r-- doc/source/reference/maskedarray.generic.rst | 18
-rw-r--r-- doc/source/reference/random/c-api.rst | 3
-rw-r--r-- doc/source/reference/random/generator.rst | 94
-rw-r--r-- doc/source/reference/random/legacy.rst | 2
-rw-r--r-- doc/source/reference/routines.array-manipulation.rst | 1
-rw-r--r-- doc/source/reference/routines.char.rst | 2
-rw-r--r-- doc/source/reference/routines.ctypeslib.rst | 1
-rw-r--r-- doc/source/reference/routines.financial.rst | 21
-rw-r--r-- doc/source/reference/routines.indexing.rst | 1
-rw-r--r-- doc/source/reference/routines.io.rst | 2
-rw-r--r-- doc/source/reference/routines.ma.rst | 5
-rw-r--r-- doc/source/reference/routines.other.rst | 1
-rw-r--r-- doc/source/reference/routines.rst | 1
-rw-r--r-- doc/source/reference/routines.set.rst | 5
-rw-r--r-- doc/source/reference/simd/simd-optimizations-tables-diff.inc | 37
-rw-r--r-- doc/source/reference/simd/simd-optimizations-tables.inc | 165
-rw-r--r-- doc/source/reference/simd/simd-optimizations.py | 236
-rw-r--r-- doc/source/reference/simd/simd-optimizations.rst | 42
-rw-r--r-- doc/source/reference/ufuncs.rst | 22
-rw-r--r-- doc/source/release.rst | 5
-rw-r--r-- doc/source/release/1.16.0-notes.rst | 4
-rw-r--r-- doc/source/release/1.17.0-notes.rst | 7
-rw-r--r-- doc/source/release/1.19.1-notes.rst | 68
-rw-r--r-- doc/source/release/1.19.2-notes.rst | 57
-rw-r--r-- doc/source/release/1.19.3-notes.rst | 46
-rw-r--r-- doc/source/release/1.19.4-notes.rst | 30
-rw-r--r-- doc/source/release/1.20.0-notes.rst | 927
-rw-r--r-- doc/source/release/1.21.0-notes.rst | 6
-rw-r--r-- doc/source/user/absolute_beginners.rst | 15
-rw-r--r-- doc/source/user/basics.broadcasting.rst | 176
-rw-r--r-- doc/source/user/basics.byteswapping.rst | 150
-rw-r--r-- doc/source/user/basics.creation.rst | 139
-rw-r--r-- doc/source/user/basics.dispatch.rst | 266
-rw-r--r-- doc/source/user/basics.indexing.rst | 452
-rw-r--r-- doc/source/user/basics.io.genfromtxt.rst | 26
-rw-r--r-- doc/source/user/basics.rec.rst | 646
-rw-r--r-- doc/source/user/basics.rst | 14
-rw-r--r-- doc/source/user/basics.subclassing.rst | 749
-rw-r--r-- doc/source/user/basics.types.rst | 337
-rw-r--r-- doc/source/user/building.rst | 21
-rw-r--r-- doc/source/user/c-info.beyond-basics.rst | 2
-rw-r--r-- doc/source/user/c-info.how-to-extend.rst | 10
-rw-r--r-- doc/source/user/how-to-how-to.rst | 118
-rw-r--r-- doc/source/user/how-to-io.rst | 328
-rw-r--r-- doc/source/user/howtos_index.rst | 3
-rw-r--r-- doc/source/user/images/np_indexing.png | bin 64363 -> 148808 bytes
-rw-r--r-- doc/source/user/index.rst | 31
-rw-r--r-- doc/source/user/install.rst | 17
-rw-r--r-- doc/source/user/ionumpy.rst | 20
-rw-r--r-- doc/source/user/misc.rst | 222
-rw-r--r-- doc/source/user/numpy-for-matlab-users.rst | 765
-rw-r--r-- doc/source/user/quickstart.rst | 20
-rw-r--r-- doc/source/user/setting-up.rst | 10
-rw-r--r-- doc/source/user/theory.broadcasting.rst | 2
-rw-r--r-- doc/source/user/troubleshooting-importerror.rst | 10
-rw-r--r-- doc/source/user/tutorial-ma.rst | 30
-rw-r--r-- doc/source/user/tutorial-svd.rst | 33
-rw-r--r-- doc/source/user/tutorials_index.rst | 6
-rw-r--r-- doc/source/user/whatisnumpy.rst | 2
108 files changed, 8213 insertions(+), 2267 deletions(-)
diff --git a/doc/source/_static/numpy.css b/doc/source/_static/numpy.css
new file mode 100644
index 000000000..22d08cc0d
--- /dev/null
+++ b/doc/source/_static/numpy.css
@@ -0,0 +1,40 @@
+@import url('https://fonts.googleapis.com/css2?family=Lato:ital,wght@0,400;0,700;0,900;1,400;1,700;1,900&family=Open+Sans:ital,wght@0,400;0,600;1,400;1,600&display=swap');
+
+.navbar-brand img {
+ height: 75px;
+}
+.navbar-brand {
+ height: 75px;
+}
+
+body {
+ font-family: 'Open Sans', sans-serif;
+ color:#4A4A4A; /* numpy.org body color */
+}
+
+pre, code {
+ font-size: 100%;
+ line-height: 155%;
+}
+
+h1 {
+ font-family: 'Lato', sans-serif; /* `font-family`, not `font-style`, sets the typeface */
+ color: #013243; /* warm black */
+ font-weight: 700;
+ letter-spacing: -.04em;
+ text-align: right;
+ margin-top: 3rem;
+ margin-bottom: 4rem;
+ font-size: 3rem;
+}
+
+
+h2 {
+ color: #4d77cf; /* han blue */
+ letter-spacing: -.03em;
+}
+
+h3 {
+ color: #013243; /* warm black */
+ letter-spacing: -.03em;
+}
diff --git a/doc/source/_static/numpy_logo.png b/doc/source/_static/numpy_logo.png
deleted file mode 100644
index af8cbe323..000000000
--- a/doc/source/_static/numpy_logo.png
+++ /dev/null
Binary files differ
diff --git a/doc/source/_static/numpylogo.svg b/doc/source/_static/numpylogo.svg
new file mode 100644
index 000000000..5f0dac700
--- /dev/null
+++ b/doc/source/_static/numpylogo.svg
@@ -0,0 +1,23 @@
+<?xml version="1.0" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!--Generator: Xara Designer (www.xara.com), SVG filter version: 6.4.0.3-->
+<svg fill="none" fill-rule="evenodd" stroke="black" stroke-width="0.501" stroke-linejoin="bevel" stroke-miterlimit="10" font-family="Times New Roman" font-size="16" style="font-variant-ligatures:none" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg" version="1.1" overflow="visible" width="255.845pt" height="123.322pt" viewBox="0 -123.322 255.845 123.322">
+ <defs>
+ </defs>
+ <g id="Layer 1" transform="scale(1 -1)">
+ <path d="M 107.188,79.018 C 107.386,78.994 107.58,78.94 107.762,78.859 C 107.941,78.774 108.106,78.663 108.252,78.529 C 108.44,78.349 108.616,78.158 108.78,77.955 L 123.492,59.358 C 123.432,59.95 123.393,60.531 123.364,61.088 C 123.336,61.644 123.322,62.176 123.322,62.672 L 123.322,79.079 L 129.655,79.079 L 129.655,48.109 L 125.913,48.109 C 125.433,48.095 124.956,48.182 124.513,48.364 C 124.073,48.581 123.693,48.902 123.407,49.3 L 108.801,67.73 C 108.847,67.195 108.879,66.667 108.907,66.149 C 108.936,65.632 108.953,65.146 108.953,64.692 L 108.953,48.091 L 102.616,48.091 L 102.616,79.079 L 106.398,79.079 C 106.662,79.076 106.926,79.056 107.188,79.018 Z" fill="#013243" stroke="none" stroke-width="0.354" fill-rule="nonzero" stroke-linejoin="miter" marker-start="none" marker-end="none"/>
+ <path d="M 138.934,70.158 L 138.934,56.172 C 138.934,55.08 139.182,54.237 139.679,53.641 C 140.233,53.023 141.04,52.693 141.869,52.748 C 142.571,52.744 143.265,52.896 143.9,53.195 C 144.571,53.52 145.191,53.943 145.739,54.45 L 145.739,70.158 L 152.328,70.158 L 152.328,48.116 L 148.249,48.116 C 147.515,48.055 146.839,48.516 146.629,49.222 L 146.228,50.498 C 145.814,50.096 145.373,49.722 144.91,49.378 C 144.455,49.046 143.966,48.763 143.453,48.531 C 142.913,48.287 142.349,48.099 141.77,47.971 C 141.128,47.831 140.473,47.763 139.817,47.769 C 138.721,47.749 137.634,47.962 136.627,48.396 C 135.723,48.797 134.92,49.395 134.277,50.147 C 133.624,50.928 133.132,51.832 132.831,52.805 C 132.495,53.893 132.33,55.026 132.342,56.165 L 132.342,70.158 Z" fill="#013243" stroke="none" stroke-width="0.354" fill-rule="nonzero" stroke-linejoin="miter" marker-start="none" marker-end="none"/>
+ <path d="M 156.578,48.109 L 156.578,70.158 L 160.661,70.158 C 161.024,70.171 161.384,70.075 161.692,69.881 C 161.978,69.682 162.185,69.388 162.277,69.052 L 162.631,67.861 C 162.989,68.24 163.371,68.596 163.776,68.924 C 164.175,69.245 164.606,69.522 165.063,69.754 C 166.067,70.263 167.18,70.522 168.306,70.509 C 169.494,70.555 170.661,70.191 171.612,69.477 C 172.508,68.755 173.194,67.805 173.597,66.727 C 173.947,67.379 174.403,67.969 174.948,68.471 C 175.463,68.94 176.043,69.333 176.67,69.637 C 177.291,69.936 177.947,70.157 178.623,70.296 C 179.299,70.437 179.988,70.508 180.679,70.509 C 181.822,70.528 182.96,70.337 184.035,69.945 C 184.97,69.598 185.811,69.037 186.491,68.308 C 187.174,67.546 187.685,66.647 187.99,65.671 C 188.347,64.524 188.519,63.327 188.501,62.126 L 188.501,48.119 L 181.908,48.119 L 181.908,62.116 C 181.908,64.398 180.931,65.538 178.977,65.536 C 178.146,65.563 177.341,65.243 176.755,64.653 C 176.167,64.07 175.873,63.224 175.873,62.116 L 175.873,48.109 L 169.291,48.109 L 169.291,62.116 C 169.291,63.378 169.043,64.264 168.547,64.774 C 168.05,65.284 167.32,65.536 166.356,65.536 C 165.769,65.537 165.19,65.4 164.666,65.135 C 164.115,64.85 163.61,64.484 163.166,64.051 L 163.166,48.102 Z" fill="#013243" stroke="none" stroke-width="0.354" fill-rule="nonzero" stroke-linejoin="miter" marker-start="none" marker-end="none"/>
+ <path d="M 199.516,58.462 L 199.516,48.109 L 192.332,48.109 L 192.332,79.079 L 203.255,79.079 C 205.159,79.121 207.058,78.861 208.88,78.309 C 210.302,77.874 211.618,77.15 212.747,76.183 C 213.741,75.307 214.51,74.206 214.991,72.972 C 215.476,71.697 215.716,70.342 215.699,68.977 C 215.716,67.526 215.464,66.084 214.955,64.724 C 214.472,63.453 213.692,62.316 212.68,61.407 C 211.553,60.424 210.232,59.69 208.802,59.252 C 207.007,58.695 205.135,58.429 203.255,58.462 Z M 199.516,63.881 L 203.255,63.881 C 205.127,63.881 206.474,64.324 207.296,65.221 C 208.118,66.117 208.529,67.347 208.529,68.96 C 208.538,69.619 208.43,70.274 208.21,70.895 C 208.007,71.462 207.676,71.975 207.243,72.394 C 206.774,72.832 206.215,73.162 205.605,73.362 C 204.847,73.607 204.053,73.726 203.255,73.716 L 199.516,73.716 Z" fill="#013243" stroke="none" stroke-width="0.354" fill-rule="nonzero" stroke-linejoin="miter" marker-start="none" marker-end="none"/>
+ <path d="M 228.466,42.388 C 228.316,42.012 228.072,41.68 227.757,41.424 C 227.345,41.186 226.87,41.078 226.396,41.116 L 221.452,41.116 L 225.705,50.04 L 216.908,70.158 L 222.731,70.158 C 223.157,70.179 223.577,70.054 223.922,69.803 C 224.192,69.595 224.398,69.315 224.517,68.995 L 228.129,59.493 C 228.463,58.637 228.74,57.759 228.958,56.867 C 229.1,57.32 229.256,57.767 229.426,58.203 C 229.596,58.639 229.759,59.089 229.915,59.543 L 233.19,69.002 C 233.314,69.343 233.55,69.632 233.86,69.821 C 234.174,70.034 234.544,70.148 234.923,70.151 L 240.24,70.151 Z" fill="#013243" stroke="none" stroke-width="0.354" fill-rule="nonzero" stroke-linejoin="miter" marker-start="none" marker-end="none"/>
+ <path d="M 46.918,89.155 L 33.759,95.797 L 19.312,88.588 L 32.83,81.801 L 46.918,89.155 Z" fill="#4dabcf" stroke="none" stroke-width="0.354" fill-rule="nonzero" stroke-linejoin="miter" marker-start="none" marker-end="none"/>
+ <path d="M 52.954,86.11 L 66.752,79.142 L 52.437,71.955 L 38.898,78.752 L 52.954,86.11 Z" fill="#4dabcf" stroke="none" stroke-width="0.354" fill-rule="nonzero" stroke-linejoin="miter" marker-start="none" marker-end="none"/>
+ <path d="M 71.384,95.698 L 85.561,88.588 L 72.88,82.222 L 59.054,89.197 L 71.384,95.698 Z" fill="#4dabcf" stroke="none" stroke-width="0.354" fill-rule="nonzero" stroke-linejoin="miter" marker-start="none" marker-end="none"/>
+ <path d="M 65.281,98.76 L 52.518,105.161 L 39.894,98.859 L 53.046,92.228 L 65.281,98.76 Z" fill="#4dabcf" stroke="none" stroke-width="0.354" fill-rule="nonzero" stroke-linejoin="miter" marker-start="none" marker-end="none"/>
+ <path d="M 55.304,43.803 L 55.304,26.386 L 70.764,34.102 L 70.75,51.526 L 55.304,43.803 Z" fill="#4dabcf" stroke="none" stroke-width="0.354" fill-rule="nonzero" stroke-linejoin="miter" marker-start="none" marker-end="none"/>
+ <path d="M 70.743,57.607 L 70.725,74.847 L 55.304,67.18 L 55.304,49.934 L 70.743,57.607 Z" fill="#4dabcf" stroke="none" stroke-width="0.354" fill-rule="nonzero" stroke-linejoin="miter" marker-start="none" marker-end="none"/>
+ <path d="M 89.304,60.836 L 89.304,43.352 L 76.116,36.774 L 76.105,54.177 L 89.304,60.836 Z" fill="#4dabcf" stroke="none" stroke-width="0.354" fill-rule="nonzero" stroke-linejoin="miter" marker-start="none" marker-end="none"/>
+ <path d="M 89.304,66.95 L 89.304,84.083 L 76.091,77.516 L 76.102,60.241 L 89.304,66.95 Z" fill="#4dabcf" stroke="none" stroke-width="0.354" fill-rule="nonzero" stroke-linejoin="miter" marker-start="none" marker-end="none"/>
+ <path d="M 49.846,67.18 L 39.433,72.419 L 39.433,49.792 C 39.433,49.792 26.695,76.892 25.518,79.327 C 25.366,79.642 24.742,79.986 24.582,80.071 C 22.286,81.269 15.594,84.657 15.594,84.657 L 15.594,44.667 L 24.852,39.705 L 24.852,60.617 C 24.852,60.617 37.452,36.402 37.583,36.136 C 37.714,35.871 38.972,33.322 40.326,32.426 C 42.123,31.231 49.839,26.592 49.839,26.592 Z" fill="#4d77cf" stroke="none" stroke-width="0.354" fill-rule="nonzero" stroke-linejoin="miter" marker-start="none" marker-end="none"/>
+ </g>
+</svg>
diff --git a/doc/source/_templates/autosummary/attribute.rst b/doc/source/_templates/autosummary/attribute.rst
index a6ed600ef..9e0eaa25f 100644
--- a/doc/source/_templates/autosummary/attribute.rst
+++ b/doc/source/_templates/autosummary/attribute.rst
@@ -6,5 +6,8 @@
attribute
-.. auto{{ objtype }}:: {{ objname }}
+.. auto{{ objtype }}:: {{ fullname | replace("numpy.", "numpy::") }}
+{# In the fullname (e.g. `numpy.ma.MaskedArray.methodname`), the module name
+is ambiguous. Using a `::` separator (e.g. `numpy::ma.MaskedArray.methodname`)
+specifies `numpy` as the module name. #}
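The template change above relies on Jinja2's `replace` filter, which maps directly onto Python's `str.replace`; the fully-qualified name in this sketch is illustrative, not taken from a real autosummary run:

```python
# Jinja2's `replace` filter behaves like str.replace: every occurrence of the
# first argument is substituted, turning the ambiguous dotted prefix into the
# explicit `module::qualname` form autodoc understands.
fullname = "numpy.ma.MaskedArray.mean"
directive_target = fullname.replace("numpy.", "numpy::")
print(directive_target)  # numpy::ma.MaskedArray.mean
```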
diff --git a/doc/source/_templates/autosummary/base.rst b/doc/source/_templates/autosummary/base.rst
index 0331154a7..91bfff9ba 100644
--- a/doc/source/_templates/autosummary/base.rst
+++ b/doc/source/_templates/autosummary/base.rst
@@ -10,5 +10,8 @@
property
{% endif %}
-.. auto{{ objtype }}:: {{ objname }}
+.. auto{{ objtype }}:: {{ fullname | replace("numpy.", "numpy::") }}
+{# In the fullname (e.g. `numpy.ma.MaskedArray.methodname`), the module name
+is ambiguous. Using a `::` separator (e.g. `numpy::ma.MaskedArray.methodname`)
+specifies `numpy` as the module name. #}
diff --git a/doc/source/_templates/autosummary/member.rst b/doc/source/_templates/autosummary/member.rst
index f1f30e123..c0dcd5ed2 100644
--- a/doc/source/_templates/autosummary/member.rst
+++ b/doc/source/_templates/autosummary/member.rst
@@ -6,6 +6,8 @@
member
-.. auto{{ objtype }}:: {{ objname }}
-
+.. auto{{ objtype }}:: {{ fullname | replace("numpy.", "numpy::") }}
+{# In the fullname (e.g. `numpy.ma.MaskedArray.methodname`), the module name
+is ambiguous. Using a `::` separator (e.g. `numpy::ma.MaskedArray.methodname`)
+specifies `numpy` as the module name. #}
diff --git a/doc/source/_templates/autosummary/method.rst b/doc/source/_templates/autosummary/method.rst
index 8abda8677..0dd226393 100644
--- a/doc/source/_templates/autosummary/method.rst
+++ b/doc/source/_templates/autosummary/method.rst
@@ -6,5 +6,8 @@
method
-.. auto{{ objtype }}:: {{ objname }}
+.. auto{{ objtype }}:: {{ fullname | replace("numpy.", "numpy::") }}
+{# In the fullname (e.g. `numpy.ma.MaskedArray.methodname`), the module name
+is ambiguous. Using a `::` separator (e.g. `numpy::ma.MaskedArray.methodname`)
+specifies `numpy` as the module name. #}
diff --git a/doc/source/_templates/defindex.html b/doc/source/_templates/defindex.html
deleted file mode 100644
index 8eaadecb9..000000000
--- a/doc/source/_templates/defindex.html
+++ /dev/null
@@ -1,35 +0,0 @@
-{#
- basic/defindex.html
- ~~~~~~~~~~~~~~~~~~~
-
- Default template for the "index" page.
-
- :copyright: Copyright 2007-2017 by the Sphinx team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-#}
-{%- extends "layout.html" %}
-{% set title = _('Overview') %}
-{% block body %}
- <h1>{{ docstitle|e }}</h1>
- <p>
- {{ _('Welcome! This is') }}
- {% block description %}{{ _('the documentation for') }} {{ project|e }}
- {{ release|e }}{% if last_updated %}, {{ _('last updated') }} {{ last_updated|e }}{% endif %}{% endblock %}.
- </p>
- {% block tables %}
- <p><strong>{{ _('Indices and tables:') }}</strong></p>
- <table class="contentstable"><tr>
- <td style="width: 50%">
- <p class="biglink"><a class="biglink" href="{{ pathto("contents") }}">{{ _('Complete Table of Contents') }}</a><br>
- <span class="linkdescr">{{ _('lists all sections and subsections') }}</span></p>
- <p class="biglink"><a class="biglink" href="{{ pathto("search") }}">{{ _('Search Page') }}</a><br>
- <span class="linkdescr">{{ _('search this documentation') }}</span></p>
- </td><td style="width: 50%">
- <p class="biglink"><a class="biglink" href="{{ pathto("modindex") }}">{{ _('Global Module Index') }}</a><br>
- <span class="linkdescr">{{ _('quick access to all modules') }}</span></p>
- <p class="biglink"><a class="biglink" href="{{ pathto("genindex") }}">{{ _('General Index') }}</a><br>
- <span class="linkdescr">{{ _('all functions, classes, terms') }}</span></p>
- </td></tr>
- </table>
- {% endblock %}
-{% endblock %}
diff --git a/doc/source/_templates/indexcontent.html b/doc/source/_templates/indexcontent.html
index d77c5a85e..6dd6bf9b0 100644
--- a/doc/source/_templates/indexcontent.html
+++ b/doc/source/_templates/indexcontent.html
@@ -1,23 +1,33 @@
-{% extends "defindex.html" %}
-{% block tables %}
+{#
+ Loosely inspired by the deprecated sphinx/themes/basic/defindex.html
+#}
+{%- extends "layout.html" %}
+{% set title = _('Overview') %}
+{% block body %}
+<h1>{{ docstitle|e }}</h1>
+<p>
+ Welcome! This is the documentation for NumPy {{ release|e }}
+ {% if last_updated %}, last updated {{ last_updated|e }}{% endif %}.
+</p>
<p><strong>For users:</strong></p>
<table class="contentstable" align="center"><tr>
<td width="50%">
- <p class="biglink"><a class="biglink" href="{{ pathto("user/setting-up") }}">Setting Up</a><br/>
- <span class="linkdescr">Learn about what NumPy is and how to install it</span></p>
- <p class="biglink"><a class="biglink" href="{{ pathto("user/quickstart") }}">Quickstart Tutorial</a><br/>
+ <p class="biglink"><a class="biglink" href="{{ pathto("user/whatisnumpy") }}">What is NumPy?</a><br/>
+ <span class="linkdescr">Who uses it and why</span></p>
+ <p class="biglink"><a class="biglink" href="https://numpy.org/install/">Installation</a><br/>
+ <p class="biglink"><a class="biglink" href="{{ pathto("user/quickstart") }}">Quickstart</a><br/>
<span class="linkdescr">Aimed at domain experts or people migrating to NumPy</span></p>
- <p class="biglink"><a class="biglink" href="{{ pathto("user/absolute_beginners") }}">Absolute Beginners Tutorial</a><br/>
+ <p class="biglink"><a class="biglink" href="{{ pathto("user/absolute_beginners") }}">Absolute beginner's guide</a><br/>
<span class="linkdescr">Start here for an overview of NumPy features and syntax</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("user/tutorials_index") }}">Tutorials</a><br/>
<span class="linkdescr">Learn about concepts and submodules</span></p>
- <p class="biglink"><a class="biglink" href="{{ pathto("user/howtos_index") }}">How Tos</a><br/>
+ <p class="biglink"><a class="biglink" href="{{ pathto("user/howtos_index") }}">How-tos</a><br/>
<span class="linkdescr">How to do common tasks with NumPy</span></p>
- <p class="biglink"><a class="biglink" href="{{ pathto("reference/index") }}">NumPy API Reference</a><br/>
+ <p class="biglink"><a class="biglink" href="{{ pathto("reference/index") }}">NumPy API reference</a><br/>
<span class="linkdescr">Automatically generated reference documentation</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("user/explanations_index") }}">Explanations</a><br/>
<span class="linkdescr">In depth explanation of concepts, best practices and techniques</span></p>
- <p class="biglink"><a class="biglink" href="{{ pathto("f2py/index") }}">F2Py Guide</a><br/>
+ <p class="biglink"><a class="biglink" href="{{ pathto("f2py/index") }}">F2Py guide</a><br/>
<span class="linkdescr">Documentation for the f2py module (Fortran extensions for Python)</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("glossary") }}">Glossary</a><br/>
<span class="linkdescr">List of the most important terms</span></p>
@@ -27,11 +37,11 @@
<p><strong>For developers/contributors:</strong></p>
<table class="contentstable" align="center"><tr>
<td width="50%">
- <p class="biglink"><a class="biglink" href="{{ pathto("dev/index") }}">NumPy Contributor Guide</a><br/>
+ <p class="biglink"><a class="biglink" href="{{ pathto("dev/index") }}">NumPy contributor guide</a><br/>
<span class="linkdescr">Contributing to NumPy</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("dev/underthehood") }}">Under-the-hood docs</a><br/>
<span class="linkdescr">Specialized, in-depth documentation</span></p>
- <p class="biglink"><a class="biglink" href="{{ pathto("docs/index") }}">Building and Extending the Documentation</a><br/>
+ <p class="biglink"><a class="biglink" href="{{ pathto("docs/index") }}">Building and extending the documentation</a><br/>
<span class="linkdescr">How to contribute to this documentation (user and API)</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("docs/howto_document") }}">The numpydoc docstring guide</a><br/>
<span class="linkdescr">How to write docstrings in the numpydoc format</span></p>
@@ -45,9 +55,9 @@
<table class="contentstable" align="center"><tr>
<td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("bugs") }}">Reporting bugs</a></p>
- <p class="biglink"><a class="biglink" href="{{ pathto("release") }}">Release Notes</a></p>
+ <p class="biglink"><a class="biglink" href="{{ pathto("release") }}">Release notes</a></p>
</td><td width="50%">
- <p class="biglink"><a class="biglink" href="{{ pathto("about") }}">About NumPy</a></p>
+ <p class="biglink"><a class="biglink" href="{{ pathto("doc_conventions") }}">Document conventions</a></p>
<p class="biglink"><a class="biglink" href="{{ pathto("license") }}">License of NumPy</a></p>
</td></tr>
</table>
@@ -56,13 +66,13 @@
<p>
Large parts of this manual originate from Travis E. Oliphant's book
<a href="https://archive.org/details/NumPyBook">"Guide to NumPy"</a>
- (which generously entered Public Domain in August 2008). The reference
+ (which generously entered public domain in August 2008). The reference
documentation for many of the functions are written by numerous
contributors and developers of NumPy.
</p>
<p>
The preferred way to update the documentation is by submitting a pull
- request on Github (see the <a href="{{ pathto("docs/index") }}">Documentation Index</a>).
+ request on GitHub (see the <a href="{{ pathto("docs/index") }}">Documentation index</a>).
Please help us to further improve the NumPy documentation!
</p>
{% endblock %}
diff --git a/doc/source/_templates/layout.html b/doc/source/_templates/layout.html
index beaa297db..e2812fdd5 100644
--- a/doc/source/_templates/layout.html
+++ b/doc/source/_templates/layout.html
@@ -1,30 +1,10 @@
{% extends "!layout.html" %}
-{%- block header %}
-<div class="container">
- <div class="top-scipy-org-logo-header" style="background-color: #a2bae8;">
- <a href="{{ pathto('index') }}">
- <img border=0 alt="NumPy" src="{{ pathto('_static/numpy_logo.png', 1) }}"></a>
- </div>
- </div>
-</div>
+{%- block extrahead %}
+{{ super() }}
+<link rel="stylesheet" href="{{ pathto('_static/numpy.css', 1) }}" type="text/css" />
-{% endblock %}
-{% block rootrellink %}
- {% if pagename != 'index' %}
- <li class="active"><a href="{{ pathto('index') }}">{{ shorttitle|e }}</a></li>
- {% endif %}
-{% endblock %}
+ <!-- PR #17220: This is added via javascript in versionwarning.js -->
+ <!-- link rel="canonical" href="http://numpy.org/doc/stable/{{ pagename }}{{ file_suffix }}" / -->
-{% block sidebarsearch %}
-{%- if sourcename %}
-<ul class="this-page-menu">
-{%- if 'reference/generated' in sourcename %}
- <li><a href="/numpy/docs/{{ sourcename.replace('reference/generated/', '').replace('.txt', '') |e }}">{{_('Edit page')}}</a></li>
-{%- else %}
- <li><a href="/numpy/docs/numpy-docs/{{ sourcename.replace('.txt', '.rst') |e }}">{{_('Edit page')}}</a></li>
-{%- endif %}
-</ul>
-{%- endif %}
-{{ super() }}
{% endblock %}
diff --git a/doc/source/about.rst b/doc/source/about.rst
deleted file mode 100644
index 3e83833d1..000000000
--- a/doc/source/about.rst
+++ /dev/null
@@ -1,62 +0,0 @@
-About NumPy
-===========
-
-NumPy is the fundamental package
-needed for scientific computing with Python. This package contains:
-
-- a powerful N-dimensional :ref:`array object <arrays>`
-- sophisticated :ref:`(broadcasting) functions <ufuncs>`
-- basic :ref:`linear algebra functions <routines.linalg>`
-- basic :ref:`Fourier transforms <routines.fft>`
-- sophisticated :ref:`random number capabilities <numpyrandom>`
-- tools for integrating Fortran code
-- tools for integrating C/C++ code
-
-Besides its obvious scientific uses, *NumPy* can also be used as an
-efficient multi-dimensional container of generic data. Arbitrary
-data types can be defined. This allows *NumPy* to seamlessly and
-speedily integrate with a wide variety of databases.
-
-NumPy is a successor for two earlier scientific Python libraries:
-Numeric and Numarray.
-
-NumPy community
----------------
-
-NumPy is a distributed, volunteer, open-source project. *You* can help
-us make it better; if you believe something should be improved either
-in functionality or in documentation, don't hesitate to contact us --- or
-even better, contact us and participate in fixing the problem.
-
-Our main means of communication are:
-
-- `scipy.org website <https://scipy.org/>`__
-
-- `Mailing lists <https://scipy.org/scipylib/mailing-lists.html>`__
-
-- `NumPy Issues <https://github.com/numpy/numpy/issues>`__ (bug reports go here)
-
-- `Old NumPy Trac <http://projects.scipy.org/numpy>`__ (dead link)
-
-More information about the development of NumPy can be found at our `Developer Zone <https://scipy.scipy.org/scipylib/dev-zone.html>`__.
-
-The project management structure can be found at our :doc:`governance page <dev/governance/index>`
-
-
-About this documentation
-========================
-
-Conventions
------------
-
-Names of classes, objects, constants, etc. are given in **boldface** font.
-Often they are also links to a more detailed documentation of the
-referred object.
-
-This manual contains many examples of use, usually prefixed with the
-Python prompt ``>>>`` (which is not a part of the example code). The
-examples assume that you have first entered::
-
->>> import numpy as np
-
-before running the examples.
diff --git a/doc/source/conf.py b/doc/source/conf.py
index b908a5a28..381a01612 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -6,6 +6,62 @@ import sys
# Minimum version, enforced by sphinx
needs_sphinx = '2.2.0'
+
+# This is a nasty hack to use platform-agnostic names for types in the
+# documentation.
+
+# must be kept alive to hold the patched names
+_name_cache = {}
+
+def replace_scalar_type_names():
+ """ Rename numpy types to use the canonical names to make sphinx behave """
+ import ctypes
+
+ Py_ssize_t = ctypes.c_int64 if ctypes.sizeof(ctypes.c_void_p) == 8 else ctypes.c_int32
+
+ class PyObject(ctypes.Structure):
+ pass
+
+ class PyTypeObject(ctypes.Structure):
+ pass
+
+ PyObject._fields_ = [
+ ('ob_refcnt', Py_ssize_t),
+ ('ob_type', ctypes.POINTER(PyTypeObject)),
+ ]
+
+
+ PyTypeObject._fields_ = [
+ # varhead
+ ('ob_base', PyObject),
+ ('ob_size', Py_ssize_t),
+ # declaration
+ ('tp_name', ctypes.c_char_p),
+ ]
+
+ # prevent numpy attaching docstrings to the scalar types
+ assert 'numpy.core._add_newdocs_scalars' not in sys.modules
+ sys.modules['numpy.core._add_newdocs_scalars'] = object()
+
+ import numpy
+
+ # change the __name__ of the scalar types
+ for name in [
+ 'byte', 'short', 'intc', 'int_', 'longlong',
+ 'ubyte', 'ushort', 'uintc', 'uint', 'ulonglong',
+ 'half', 'single', 'double', 'longdouble',
+ 'csingle', 'cdouble', 'clongdouble',
+ ]:
+ typ = getattr(numpy, name)
+ c_typ = PyTypeObject.from_address(id(typ))
+ c_typ.tp_name = _name_cache[typ] = b"numpy." + name.encode('utf8')
+
+ # now generate the docstrings as usual
+ del sys.modules['numpy.core._add_newdocs_scalars']
+ import numpy.core._add_newdocs_scalars
+
+replace_scalar_type_names()
+
# -----------------------------------------------------------------------------
# General configuration
# -----------------------------------------------------------------------------
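The hunk above patches `tp_name` on NumPy's scalar type objects through ctypes. The essential trick, mirroring the struct layout declared in the new conf.py code, can be checked against any built-in type on a standard CPython build (reading only, no patching):

```python
import ctypes

# Py_ssize_t is pointer-sized, as assumed in the conf.py hack above.
Py_ssize_t = ctypes.c_int64 if ctypes.sizeof(ctypes.c_void_p) == 8 else ctypes.c_int32

class PyObject(ctypes.Structure):
    pass

class PyTypeObject(ctypes.Structure):
    pass

PyObject._fields_ = [
    ('ob_refcnt', Py_ssize_t),
    ('ob_type', ctypes.POINTER(PyTypeObject)),
]

PyTypeObject._fields_ = [
    # PyVarObject header, then the first declared slot: tp_name
    ('ob_base', PyObject),
    ('ob_size', Py_ssize_t),
    ('tp_name', ctypes.c_char_p),
]

# id() of a type is the address of its PyTypeObject, so overlaying the
# struct lets us read (or, as in conf.py, rewrite) tp_name directly.
c_typ = PyTypeObject.from_address(id(int))
print(c_typ.tp_name)  # b'int'
```

This layout assumes a standard (non-`Py_TRACE_REFS`) CPython build, which is why conf.py calls it a "nasty hack".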
@@ -94,34 +150,15 @@ def setup(app):
# HTML output
# -----------------------------------------------------------------------------
-themedir = os.path.join(os.pardir, 'scipy-sphinx-theme', '_theme')
-if not os.path.isdir(themedir):
- raise RuntimeError("Get the scipy-sphinx-theme first, "
- "via git submodule init && git submodule update")
-
-html_theme = 'scipy'
-html_theme_path = [themedir]
-
-if 'scipyorg' in tags:
- # Build for the scipy.org website
- html_theme_options = {
- "edit_link": True,
- "sidebar": "right",
- "scipy_org_logo": True,
- "rootlinks": [("https://scipy.org/", "Scipy.org"),
- ("https://docs.scipy.org/", "Docs")]
- }
-else:
- # Default build
- html_theme_options = {
- "edit_link": False,
- "sidebar": "left",
- "scipy_org_logo": False,
- "rootlinks": [("https://numpy.org/", "NumPy.org"),
- ("https://numpy.org/doc", "Docs"),
- ]
- }
- html_sidebars = {'index': ['indexsidebar.html', 'searchbox.html']}
+html_theme = 'pydata_sphinx_theme'
+
+html_logo = '_static/numpylogo.svg'
+
+html_theme_options = {
+ "github_url": "https://github.com/numpy/numpy",
+ "twitter_url": "https://twitter.com/numpy_team",
+}
+
html_additional_pages = {
'index': 'indexcontent.html',
@@ -246,6 +283,8 @@ intersphinx_mapping = {
'matplotlib': ('https://matplotlib.org', None),
'imageio': ('https://imageio.readthedocs.io/en/stable', None),
'skimage': ('https://scikit-image.org/docs/stable', None),
+ 'pandas': ('https://pandas.pydata.org/pandas-docs/stable', None),
+ 'scipy-lecture-notes': ('https://scipy-lectures.org', None),
}
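With the new `pandas` and `scipy-lecture-notes` entries in `intersphinx_mapping`, NumPy's docs can cross-link into those projects' inventories; a hypothetical reST usage sketch (the specific targets are illustrative):

```rst
See :meth:`pandas.DataFrame.to_numpy` for converting a DataFrame
to an :class:`numpy.ndarray`.
```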
@@ -329,6 +368,17 @@ for name in ['sphinx.ext.linkcode', 'numpydoc.linkcode']:
else:
print("NOTE: linkcode extension not found -- no links to source generated")
+
+def _get_c_source_file(obj):
+ if issubclass(obj, numpy.generic):
+ return r"core/src/multiarray/scalartypes.c.src"
+ elif obj is numpy.ndarray:
+ return r"core/src/multiarray/arrayobject.c"
+ else:
+ # todo: come up with a better way to generate these
+ return None
+
+
def linkcode_resolve(domain, info):
"""
Determine the URL corresponding to Python object
@@ -359,25 +409,33 @@ def linkcode_resolve(domain, info):
else:
obj = unwrap(obj)
- try:
- fn = inspect.getsourcefile(obj)
- except Exception:
- fn = None
- if not fn:
- return None
+ fn = None
+ lineno = None
- try:
- source, lineno = inspect.getsourcelines(obj)
- except Exception:
- lineno = None
+ # Make a poor effort at linking C extension types
+ if isinstance(obj, type) and obj.__module__ == 'numpy':
+ fn = _get_c_source_file(obj)
+
+ if fn is None:
+ try:
+ fn = inspect.getsourcefile(obj)
+ except Exception:
+ fn = None
+ if not fn:
+ return None
+
+ try:
+ source, lineno = inspect.getsourcelines(obj)
+ except Exception:
+ lineno = None
+
+ fn = relpath(fn, start=dirname(numpy.__file__))
if lineno:
linespec = "#L%d-L%d" % (lineno, lineno + len(source) - 1)
else:
linespec = ""
- fn = relpath(fn, start=dirname(numpy.__file__))
-
if 'dev' in numpy.__version__:
return "https://github.com/numpy/numpy/blob/master/numpy/%s%s" % (
fn, linespec)
@@ -386,15 +444,15 @@ def linkcode_resolve(domain, info):
numpy.__version__, fn, linespec)
from pygments.lexers import CLexer
-import copy
+from pygments.lexer import inherit, bygroups
+from pygments.token import Comment
class NumPyLexer(CLexer):
name = 'NUMPYLEXER'
- tokens = copy.deepcopy(CLexer.tokens)
- # Extend the regex for valid identifiers with @
- for k, val in tokens.items():
- for i, v in enumerate(val):
- if isinstance(v, tuple):
- if isinstance(v[0], str):
- val[i] = (v[0].replace('a-zA-Z', 'a-zA-Z@'),) + v[1:]
+ tokens = {
+ 'statements': [
+ (r'@[a-zA-Z_]*@', Comment.Preproc, 'macro'),
+ inherit,
+ ],
+ }
diff --git a/doc/source/contents.rst b/doc/source/contents.rst
index baea7784c..5d4e12097 100644
--- a/doc/source/contents.rst
+++ b/doc/source/contents.rst
@@ -5,23 +5,12 @@ NumPy Documentation
###################
.. toctree::
+ :maxdepth: 1
- user/setting-up
- user/quickstart
- user/absolute_beginners
- user/tutorials_index
- user/howtos_index
- reference/index
- user/explanations_index
- f2py/index
- glossary
- dev/index
- dev/underthehood
- docs/index
- docs/howto_document
- benchmarking
- bugs
- release
- about
- license
+ User Guide <user/index>
+ API reference <reference/index>
+ Development <dev/index>
+.. This is not really the index page, that is found in
+ _templates/indexcontent.html The toctree content here will be added to the
+ top of the template header
diff --git a/doc/source/dev/conduct/code_of_conduct.rst b/doc/source/dev/conduct/code_of_conduct.rst
deleted file mode 100644
index f2f0a536d..000000000
--- a/doc/source/dev/conduct/code_of_conduct.rst
+++ /dev/null
@@ -1,163 +0,0 @@
-NumPy Code of Conduct
-=====================
-
-
-Introduction
-------------
-
-This code of conduct applies to all spaces managed by the NumPy project,
-including all public and private mailing lists, issue trackers, wikis, blogs,
-Twitter, and any other communication channel used by our community. The NumPy
-project does not organise in-person events, however events related to our
-community should have a code of conduct similar in spirit to this one.
-
-This code of conduct should be honored by everyone who participates in
-the NumPy community formally or informally, or claims any affiliation with the
-project, in any project-related activities and especially when representing the
-project, in any role.
-
-This code is not exhaustive or complete. It serves to distill our common
-understanding of a collaborative, shared environment and goals. Please try to
-follow this code in spirit as much as in letter, to create a friendly and
-productive environment that enriches the surrounding community.
-
-
-Specific Guidelines
--------------------
-
-We strive to:
-
-1. Be open. We invite anyone to participate in our community. We prefer to use
- public methods of communication for project-related messages, unless
- discussing something sensitive. This applies to messages for help or
- project-related support, too; not only is a public support request much more
- likely to result in an answer to a question, it also ensures that any
- inadvertent mistakes in answering are more easily detected and corrected.
-
-2. Be empathetic, welcoming, friendly, and patient. We work together to resolve
- conflict, and assume good intentions. We may all experience some frustration
- from time to time, but we do not allow frustration to turn into a personal
- attack. A community where people feel uncomfortable or threatened is not a
- productive one.
-
-3. Be collaborative. Our work will be used by other people, and in turn we will
- depend on the work of others. When we make something for the benefit of the
- project, we are willing to explain to others how it works, so that they can
- build on the work to make it even better. Any decision we make will affect
- users and colleagues, and we take those consequences seriously when making
- decisions.
-
-4. Be inquisitive. Nobody knows everything! Asking questions early avoids many
- problems later, so we encourage questions, although we may direct them to
- the appropriate forum. We will try hard to be responsive and helpful.
-
-5. Be careful in the words that we choose. We are careful and respectful in
- our communication and we take responsibility for our own speech. Be kind to
- others. Do not insult or put down other participants. We will not accept
- harassment or other exclusionary behaviour, such as:
-
- - Violent threats or language directed against another person.
- - Sexist, racist, or otherwise discriminatory jokes and language.
- - Posting sexually explicit or violent material.
- - Posting (or threatening to post) other people's personally identifying information ("doxing").
- - Sharing private content, such as emails sent privately or non-publicly,
- or unlogged forums such as IRC channel history, without the sender's consent.
- - Personal insults, especially those using racist or sexist terms.
- - Unwelcome sexual attention.
- - Excessive profanity. Please avoid swearwords; people differ greatly in their sensitivity to swearing.
- - Repeated harassment of others. In general, if someone asks you to stop, then stop.
- - Advocating for, or encouraging, any of the above behaviour.
-
-
-Diversity Statement
--------------------
-
-The NumPy project welcomes and encourages participation by everyone. We are
-committed to being a community that everyone enjoys being part of. Although
-we may not always be able to accommodate each individual's preferences, we try
-our best to treat everyone kindly.
-
-No matter how you identify yourself or how others perceive you: we welcome you.
-Though no list can hope to be comprehensive, we explicitly honour diversity in:
-age, culture, ethnicity, genotype, gender identity or expression, language,
-national origin, neurotype, phenotype, political beliefs, profession, race,
-religion, sexual orientation, socioeconomic status, subculture and technical
-ability, to the extent that these do not conflict with this code of conduct.
-
-
-Though we welcome people fluent in all languages, NumPy development is
-conducted in English.
-
-Standards for behaviour in the NumPy community are detailed in the Code of
-Conduct above. Participants in our community should uphold these standards
-in all their interactions and help others to do so as well (see next section).
-
-
-Reporting Guidelines
---------------------
-
-We know that it is painfully common for internet communication to start at or
-devolve into obvious and flagrant abuse. We also recognize that sometimes
-people may have a bad day, or be unaware of some of the guidelines in this Code
-of Conduct. Please keep this in mind when deciding on how to respond to a
-breach of this Code.
-
-For clearly intentional breaches, report those to the Code of Conduct committee
-(see below). For possibly unintentional breaches, you may reply to the person
-and point out this code of conduct (either in public or in private, whatever is
-most appropriate). If you would prefer not to do that, please feel free to
-report to the Code of Conduct Committee directly, or ask the Committee for
-advice, in confidence.
-
-You can report issues to the NumPy Code of Conduct committee, at
-numpy-conduct@googlegroups.com. Currently, the committee consists of:
-
-- Stefan van der Walt
-- Melissa Weber Mendonça
-- Anirudh Subramanian
-
-If your report involves any members of the committee, or if they feel they have
-a conflict of interest in handling it, then they will recuse themselves from
-considering your report. Alternatively, if for any reason you feel
-uncomfortable making a report to the committee, then you can also contact:
-
-- Senior `NumFOCUS staff <https://numfocus.org/code-of-conduct#persons-responsible>`__: conduct@numfocus.org
-
-
-Incident reporting resolution & Code of Conduct enforcement
------------------------------------------------------------
-
-*This section summarizes the most important points, more details can be found
-in* :ref:`CoC_reporting_manual`.
-
-We will investigate and respond to all complaints. The NumPy Code of Conduct
-Committee and the NumPy Steering Committee (if involved) will protect the
-identity of the reporter, and treat the content of complaints as confidential
-(unless the reporter agrees otherwise).
-
-In case of severe and obvious breaches, e.g. personal threat or violent, sexist
-or racist language, we will immediately disconnect the originator from NumPy
-communication channels; please see the manual for details.
-
-In cases not involving clear severe and obvious breaches of this code of
-conduct, the process for acting on any received code of conduct violation
-report will be:
-
-1. acknowledge report is received
-2. reasonable discussion/feedback
-3. mediation (if feedback didn't help, and only if both reporter and reportee agree to this)
-4. enforcement via transparent decision (see :ref:`CoC_resolutions`) by the
- Code of Conduct Committee
-
-The committee will respond to any report as soon as possible, and at most
-within 72 hours.
-
-
-Endnotes
---------
-
-We are thankful to the groups behind the following documents, from which we
-drew content and inspiration:
-
-- `The SciPy Code of Conduct <https://docs.scipy.org/doc/scipy/reference/dev/conduct/code_of_conduct.html>`_
-
diff --git a/doc/source/dev/conduct/report_handling_manual.rst b/doc/source/dev/conduct/report_handling_manual.rst
deleted file mode 100644
index d39b615bb..000000000
--- a/doc/source/dev/conduct/report_handling_manual.rst
+++ /dev/null
@@ -1,220 +0,0 @@
-:orphan:
-
-.. _CoC_reporting_manual:
-
-NumPy Code of Conduct - How to follow up on a report
-----------------------------------------------------
-
-This is the manual followed by NumPy's Code of Conduct Committee. It's used
-when we respond to an issue to make sure we're consistent and fair.
-
-Enforcing the Code of Conduct impacts our community today and for the future.
-It's an action that we do not take lightly. When reviewing enforcement
-measures, the Code of Conduct Committee will keep the following values and
-guidelines in mind:
-
-* Act in a personal manner rather than impersonal. The Committee can engage
- the parties to understand the situation, while respecting the privacy and any
- necessary confidentiality of reporters. However, sometimes it is necessary
- to communicate with one or more individuals directly: the Committee's goal is
- to improve the health of our community rather than only produce a formal
- decision.
-
-* Emphasize empathy for individuals rather than judging behavior, avoiding
- binary labels of "good" and "bad/evil". Overt, clear-cut aggression and
- harassment exists and we will be address that firmly. But many scenarios
- that can prove challenging to resolve are those where normal disagreements
- devolve into unhelpful or harmful behavior from multiple parties.
- Understanding the full context and finding a path that re-engages all is
- hard, but ultimately the most productive for our community.
-
-* We understand that email is a difficult medium and can be isolating.
- Receiving criticism over email, without personal contact, can be
- particularly painful. This makes it especially important to keep an
- atmosphere of open-minded respect of the views of others. It also means
- that we must be transparent in our actions, and that we will do everything
- in our power to make sure that all our members are treated fairly and with
- sympathy.
-
-* Discrimination can be subtle and it can be unconscious. It can show itself
- as unfairness and hostility in otherwise ordinary interactions. We know
- that this does occur, and we will take care to look out for it. We would
- very much like to hear from you if you feel you have been treated unfairly,
- and we will use these procedures to make sure that your complaint is heard
- and addressed.
-
-* Help increase engagement in good discussion practice: try to identify where
- discussion may have broken down and provide actionable information, pointers
- and resources that can lead to positive change on these points.
-
-* Be mindful of the needs of new members: provide them with explicit support
- and consideration, with the aim of increasing participation from
- underrepresented groups in particular.
-
-* Individuals come from different cultural backgrounds and native languages.
- Try to identify any honest misunderstandings caused by a non-native speaker
- and help them understand the issue and what they can change to avoid causing
- offence. Complex discussion in a foreign language can be very intimidating,
- and we want to grow our diversity also across nationalities and cultures.
-
-*Mediation*: voluntary, informal mediation is a tool at our disposal. In
-contexts such as when two or more parties have all escalated to the point of
-inappropriate behavior (something sadly common in human conflict), it may be
-useful to facilitate a mediation process. This is only an example: the
-Committee can consider mediation in any case, mindful that the process is meant
-to be strictly voluntary and no party can be pressured to participate. If the
-Committee suggests mediation, it should:
-
-* Find a candidate who can serve as a mediator.
-* Obtain the agreement of the reporter(s). The reporter(s) have complete
- freedom to decline the mediation idea, or to propose an alternate mediator.
-* Obtain the agreement of the reported person(s).
-* Settle on the mediator: while parties can propose a different mediator than
- the suggested candidate, only if common agreement is reached on all terms can
- the process move forward.
-* Establish a timeline for mediation to complete, ideally within two weeks.
-
-The mediator will engage with all the parties and seek a resolution that is
-satisfactory to all. Upon completion, the mediator will provide a report
-(vetted by all parties to the process) to the Committee, with recommendations
-on further steps. The Committee will then evaluate these results (whether
-satisfactory resolution was achieved or not) and decide on any additional
-action deemed necessary.
-
-
-How the committee will respond to reports
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-When the committee (or a committee member) receives a report, they will first
-determine whether the report is about a clear and severe breach (as defined
-below). If so, immediate action needs to be taken in addition to the regular
-report handling process.
-
-Clear and severe breach actions
-+++++++++++++++++++++++++++++++
-
-We know that it is painfully common for internet communication to start at or
-devolve into obvious and flagrant abuse. We will deal quickly with clear and
-severe breaches like personal threats, violent, sexist or racist language.
-
-When a member of the Code of Conduct committee becomes aware of a clear and
-severe breach, they will do the following:
-
-* Immediately disconnect the originator from all NumPy communication channels.
-* Reply to the reporter that their report has been received and that the
- originator has been disconnected.
-* In every case, the moderator should make a reasonable effort to contact the
- originator, and tell them specifically how their language or actions
- qualify as a "clear and severe breach". The moderator should also say
- that, if the originator believes this is unfair or they want to be
- reconnected to NumPy, they have the right to ask for a review, as below, by
- the Code of Conduct Committee.
- The moderator should copy this explanation to the Code of Conduct Committee.
-* The Code of Conduct Committee will formally review and sign off on all cases
- where this mechanism has been applied to make sure it is not being used to
- control ordinary heated disagreement.
-
-Report handling
-+++++++++++++++
-
-When a report is sent to the committee they will immediately reply to the
-reporter to confirm receipt. This reply must be sent within 72 hours, and the
-group should strive to respond much quicker than that.
-
-If a report doesn't contain enough information, the committee will obtain all
-relevant data before acting. The committee is empowered to act on the Steering
-Council’s behalf in contacting any individuals involved to get a more complete
-account of events.
-
-The committee will then review the incident and determine, to the best of their
-ability:
-
-* What happened.
-* Whether this event constitutes a Code of Conduct violation.
-* Who are the responsible party(ies).
-* Whether this is an ongoing situation, and there is a threat to anyone's
- physical safety.
-
-This information will be collected in writing, and whenever possible the
-group's deliberations will be recorded and retained (i.e. chat transcripts,
-email discussions, recorded conference calls, summaries of voice conversations,
-etc).
-
-It is important to retain an archive of all activities of this committee to
-ensure consistency in behavior and provide institutional memory for the
-project. To assist in this, the default channel of discussion for this
-committee will be a private mailing list accessible to current and future
-members of the committee as well as members of the Steering Council upon
-justified request. If the Committee finds the need to use off-list
-communications (e.g. phone calls for early/rapid response), it should in all
-cases summarize these back to the list so there's a good record of the process.
-
-The Code of Conduct Committee should aim to have a resolution agreed upon within
-two weeks. In the event that a resolution can't be determined in that time, the
-committee will respond to the reporter(s) with an update and projected timeline
-for resolution.
-
-
-.. _CoC_resolutions:
-
-Resolutions
-~~~~~~~~~~~
-
-The committee must agree on a resolution by consensus. If the group cannot reach
-consensus and deadlocks for over a week, the group will turn the matter over to
-the Steering Council for resolution.
-
-
-Possible responses may include:
-
-* Taking no further action
-
- - if we determine no violations have occurred.
- - if the matter has been resolved publicly while the committee was considering responses.
-
-* Coordinating voluntary mediation: if all involved parties agree, the
- Committee may facilitate a mediation process as detailed above.
-* Remind publicly, and point out that some behavior/actions/language have been
- judged inappropriate and why in the current context, or can but hurtful to
- some people, requesting the community to self-adjust.
-* A private reprimand from the committee to the individual(s) involved. In this
- case, the group chair will deliver that reprimand to the individual(s) over
- email, cc'ing the group.
-* A public reprimand. In this case, the committee chair will deliver that
- reprimand in the same venue that the violation occurred, within the limits of
- practicality. E.g., the original mailing list for an email violation, but
- for a chat room discussion where the person/context may be gone, they can be
- reached by other means. The group may choose to publish this message
- elsewhere for documentation purposes.
-* A request for a public or private apology, assuming the reporter agrees to
- this idea: they may at their discretion refuse further contact with the
- violator. The chair will deliver this request. The committee may, if it
- chooses, attach "strings" to this request: for example, the group may ask a
- violator to apologize in order to retain one’s membership on a mailing list.
-* A "mutually agreed upon hiatus" where the committee asks the individual to
- temporarily refrain from community participation. If the individual chooses
- not to take a temporary break voluntarily, the committee may issue a
- "mandatory cooling off period".
-* A permanent or temporary ban from some or all NumPy spaces (mailing lists,
- gitter.im, etc.). The group will maintain records of all such bans so that
- they may be reviewed in the future or otherwise maintained.
-
-Once a resolution is agreed upon, but before it is enacted, the committee will
-contact the original reporter and any other affected parties and explain the
-proposed resolution. The committee will ask if this resolution is acceptable,
-and must note feedback for the record.
-
-Finally, the committee will make a report to the NumPy Steering Council (as
-well as the NumPy core team in the event of an ongoing resolution, such as a
-ban).
-
-The committee will never publicly discuss the issue; all public statements will
-be made by the chair of the Code of Conduct Committee or the NumPy Steering
-Council.
-
-
-Conflicts of Interest
-~~~~~~~~~~~~~~~~~~~~~
-
-In the event of any conflict of interest, a committee member must immediately
-notify the other members, and recuse themselves if necessary.
diff --git a/doc/source/dev/development_advanced_debugging.rst b/doc/source/dev/development_advanced_debugging.rst
new file mode 100644
index 000000000..fa4014fdb
--- /dev/null
+++ b/doc/source/dev/development_advanced_debugging.rst
@@ -0,0 +1,190 @@
+========================
+Advanced debugging tools
+========================
+
+If you have reached this page, you want to dive into, or use, more advanced
+tooling.
+This is usually not necessary for first-time contributors and most
+day-to-day development.
+These tools are used more rarely, for example close to a new NumPy release,
+or when a large or particularly complex change was made.
+
+Since not all of these tools are used on a regular basis and some are only
+available on certain systems, please expect differences, issues, or quirks;
+we will be happy to help if you get stuck and appreciate any improvements
+or suggestions to these workflows.
+
+
+Finding C errors with additional tooling
+########################################
+
+Most development will not require more than a typical debugging toolchain
+as shown in :ref:`Debugging <debugging>`.
+But, for example, memory leaks can be particularly subtle or difficult to
+narrow down.
+
+We do not expect any of these tools to be run by most contributors.
+However, you can ensure that we can track down such issues more easily:
+
+* Tests should cover all code paths, including error paths.
+* Try to write short and simple tests. If you have a very complicated test
+ consider creating an additional simpler test as well.
+ This can be helpful, because often it is only easy to find which test
+ triggers an issue and not which line of the test.
+* Never use ``np.empty`` if data is read/used. ``valgrind`` will notice this
+ and report an error. When you do not care about values, you can generate
+ random values instead.
+
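The ``np.empty`` point above can be sketched as follows (a minimal
illustration, not part of the test suite; the seed and array size are
arbitrary):

```python
import numpy as np

# np.empty returns uninitialized memory; valgrind reports an error the
# moment such values are read. When the contents do not matter, draw
# random values instead so that every element is initialized.
rng = np.random.default_rng(seed=0)  # any seed works
arr = rng.random(100)

total = arr.sum()  # reading arr is now safe under valgrind
```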
+This will help us catch any oversights before your change is released
+and means you do not have to worry about making reference counting errors,
+which can be intimidating.
+
+
+Python debug build for finding memory leaks
+===========================================
+
+Debug builds of Python are easily available, for example on ``debian``
+systems, and can be used on all platforms.
+Running the tests or an interactive terminal is usually as easy as::
+
+ python3.8d runtests.py
+ # or
+ python3.8d runtests.py --ipython
+
+as already mentioned in :ref:`Debugging <debugging>`.
+
+A Python debug build will help:
+
+- Find bugs which may otherwise cause random behaviour.
+ One example is when an object is still used after it has been deleted.
+
+- Python debug builds allow checking for correct reference counting.
+ This works using the additional commands::
+
+ sys.gettotalrefcount()
+ sys.getallocatedblocks()
+
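A rough sketch of how these commands can be used to look for reference
leaks (the helper name and warm-up strategy are our own, not NumPy
tooling; ``sys.gettotalrefcount`` only exists on debug builds):

```python
import sys

def total_refcount_delta(func, repeats=10):
    # sys.gettotalrefcount is only present on a debug build of Python;
    # return None on a regular interpreter instead of failing.
    if not hasattr(sys, "gettotalrefcount"):
        return None
    func()  # warm up caches so one-time allocations do not count
    before = sys.gettotalrefcount()
    for _ in range(repeats):
        func()
    # A delta that grows linearly with `repeats` suggests a leak.
    return sys.gettotalrefcount() - before
```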
+
+Use together with ``pytest``
+----------------------------
+
+Running the test suite only with a debug Python build will not find many
+errors on its own. An additional advantage of a debug build of Python is that
+it allows detecting memory leaks.
+
+A tool to make this easier is `pytest-leaks`_, which can be installed using ``pip``.
+Unfortunately, ``pytest`` itself may leak memory, but good results can usually
+(currently) be achieved by removing::
+
+ @pytest.fixture(autouse=True)
+ def add_np(doctest_namespace):
+ doctest_namespace['np'] = numpy
+
+ @pytest.fixture(autouse=True)
+ def env_setup(monkeypatch):
+ monkeypatch.setenv('PYTHONHASHSEED', '0')
+
+from ``numpy/conftest.py`` (This may change with new ``pytest-leaks`` versions
+or ``pytest`` updates).
+
+This allows running the test suite, or part of it, conveniently::
+
+ python3.8d runtests.py -t numpy/core/tests/test_multiarray.py -- -R2:3 -s
+
+where ``-R2:3`` is the ``pytest-leaks`` option (see its documentation) and
+``-s`` causes output to print, which may be necessary (in some versions,
+captured output was detected as a leak).
+
+Note that some tests are known (or even designed) to leak references; we try
+to mark them, but expect some false positives.
+
+.. _pytest-leaks: https://github.com/abalkin/pytest-leaks
+
+``valgrind``
+============
+
+Valgrind is a powerful tool to find certain memory access problems and should
+be run on complicated C code.
+Basic use of ``valgrind`` usually requires no more than::
+
+ PYTHONMALLOC=malloc python runtests.py
+
+where ``PYTHONMALLOC=malloc`` is necessary to avoid false positives from
+Python itself.
+Depending on the system and valgrind version, you may see more false positives.
+``valgrind`` supports "suppressions" to ignore some of these, and Python does
+have a suppression file (and even a compile-time option) which may help if you
+find it necessary.
+
+Valgrind helps:
+
+- Find use of uninitialized variables/memory.
+
+- Detect memory access violations (reading or writing outside of allocated
+ memory).
+
+- Find *many* memory leaks. Note that for *most* leaks the Python
+  debug build approach (and ``pytest-leaks``) is much more sensitive.
+ The reason is that ``valgrind`` can only detect if memory is definitely
+ lost. If::
+
+ dtype = np.dtype(np.int64)
+ arr.astype(dtype=dtype)
+
+  has incorrect reference counting for ``dtype``, this is a bug, but valgrind
+  cannot see it because ``np.dtype(np.int64)`` always returns the same object.
+  However, not all dtypes are singletons, so this might leak memory for
+  different inputs.
+  In rare cases NumPy uses ``malloc`` and not the Python memory allocators,
+  which makes these allocations invisible to the Python debug build.
+  ``malloc`` should normally be avoided, but there are some exceptions
+  (e.g. the ``PyArray_Dims`` structure is public API and cannot use the
+  Python allocators).
+
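The singleton behaviour mentioned above can be checked directly (a small
illustration; the structured dtype is an arbitrary example):

```python
import numpy as np

# Built-in scalar dtypes are cached singletons, so a leaked reference
# to one never shows up as "definitely lost" memory in valgrind.
a = np.dtype(np.int64)
b = np.dtype(np.int64)
assert a is b

# Structured dtypes are created fresh on each call; a reference-count
# bug on these really would leak memory on every call.
c = np.dtype([("x", np.int64)])
d = np.dtype([("x", np.int64)])
assert c is not d  # equal, but distinct objects
```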
+Even though using valgrind for memory leak detection is slow and less
+sensitive, it can be convenient: you can run most programs with valgrind
+without modification.
+
+Things to be aware of:
+
+- Valgrind does not support the NumPy ``longdouble`` type; this means that
+  tests using it will fail or be flagged with errors that are completely fine.
+
+- Expect some errors before and after running your NumPy code.
+
+- Caches can mean that errors (specifically memory leaks) may not be detected
+  or are only detected at a later, unrelated time.
+
+A big advantage of valgrind is that it has no requirements aside from valgrind
+itself (although you probably want to use debug builds for better tracebacks).
+
+
+Use together with ``pytest``
+----------------------------
+You can run the test suite with valgrind, which may be sufficient
+when you are only interested in a few tests::
+
+  PYTHONMALLOC=malloc valgrind python runtests.py \
+ -t numpy/core/tests/test_multiarray.py -- --continue-on-collection-errors
+
+Note the ``--continue-on-collection-errors``, which is currently necessary due to
+missing ``longdouble`` support causing failures (this will usually not be
+necessary if you do not run the full test suite).
+
+If you wish to detect memory leaks you will also require ``--show-leak-kinds=definite``
+and possibly more valgrind options. Just as for ``pytest-leaks``, certain
+tests are known to leak or cause errors in valgrind and may or may not be
+marked as such.
+
+We have developed `pytest-valgrind`_ which:
+
+- Reports errors for each test individually
+
+- Narrows down memory leaks to individual tests (by default valgrind
+ only checks for memory leaks after a program stops, which is very
+ cumbersome).
+
+Please refer to its ``README`` for more information (it includes an example
+command for NumPy).
+
+.. _pytest-valgrind: https://github.com/seberg/pytest-valgrind
+
diff --git a/doc/source/dev/development_environment.rst b/doc/source/dev/development_environment.rst
index ff78cecc5..013414568 100644
--- a/doc/source/dev/development_environment.rst
+++ b/doc/source/dev/development_environment.rst
@@ -207,6 +207,8 @@ repo, use one of::
$ git reset --hard
+.. _debugging:
+
Debugging
---------
@@ -273,7 +275,7 @@ pull requests aren't perfect, the community is always happy to help. As a
volunteer project, things do sometimes get dropped and it's totally fine to
ping us if something has sat without a response for about two to four weeks.
-So go ahead and pick something that annoys or confuses you about numpy,
+So go ahead and pick something that annoys or confuses you about NumPy,
experiment with the code, hang around for discussions or go through the
reference documents to try to fix it. Things will fall in place and soon
you'll have a pretty good understanding of the project as a whole. Good Luck!
diff --git a/doc/source/dev/development_workflow.rst b/doc/source/dev/development_workflow.rst
index d5a49a9f9..34535b2f5 100644
--- a/doc/source/dev/development_workflow.rst
+++ b/doc/source/dev/development_workflow.rst
@@ -188,6 +188,16 @@ Standard acronyms to start the commit message with are::
REL: related to releasing numpy
+.. _workflow_mailing_list:
+
+Get the mailing list's opinion
+==============================
+
+If you plan a new feature or API change, it's wisest to first email the
+NumPy `mailing list <https://mail.python.org/mailman/listinfo/numpy-discussion>`_
+asking for comment. If you haven't heard back in a week, it's
+OK to ping the list again.
+
.. _asking-for-merging:
Asking for your changes to be merged with the main repo
@@ -197,15 +207,24 @@ When you feel your work is finished, you can create a pull request (PR). Github
has a nice help page that outlines the process for `filing pull requests`_.
If your changes involve modifications to the API or addition/modification of a
-function, you should
+function, add a release note to the ``doc/release/upcoming_changes/``
+directory, following the instructions and format in the
+``doc/release/upcoming_changes/README.rst`` file.
+
+
+.. _workflow_PR_timeline:
+
+Getting your PR reviewed
+========================
+
+We review pull requests as soon as we can, typically within a week. If you get
+no review comments within two weeks, feel free to ask for feedback by
+adding a comment on your PR (this will notify maintainers).
+
+If your PR is large or complicated, asking for input on the numpy-discussion
+mailing list may also be useful.
+
-- send an email to the `NumPy mailing list`_ with a link to your PR along with
- a description of and a motivation for your changes. This may generate
- changes and feedback. It might be prudent to start with this step if your
- change may be controversial.
-- add a release note to the ``doc/release/upcoming_changes/`` directory,
- following the instructions and format in the
- ``doc/release/upcoming_changes/README.rst`` file.
.. _rebasing-on-master:
@@ -290,7 +309,7 @@ Rewriting commit history
Do this only for your own feature branches.
-There's an embarrassing typo in a commit you made? Or perhaps the you
+There's an embarrassing typo in a commit you made? Or perhaps you
made several false starts you would like posterity not to see.
This can be done via *interactive rebasing*.
diff --git a/doc/source/dev/index.rst b/doc/source/dev/index.rst
index aeb277a87..bcd144d71 100644
--- a/doc/source/dev/index.rst
+++ b/doc/source/dev/index.rst
@@ -4,6 +4,22 @@
Contributing to NumPy
#####################
+.. TODO: this is hidden because there's a bug in the pydata theme that won't render TOC items under headers
+
+.. toctree::
+ :hidden:
+
+ Git Basics <gitwash/index>
+ development_environment
+ development_workflow
+ development_advanced_debugging
+ ../benchmarking
+ NumPy C style guide <https://numpy.org/neps/nep-0045-c_style_guide.html>
+ releasing
+ governance/index
+ howto-docs
+
+
Not a coder? Not a problem! NumPy is multi-faceted, and we can use a lot of help.
These are all activities we'd like to get help with (they're all important, so
we list them in alphabetical order):
@@ -107,7 +123,8 @@ Here's the short summary, complete TOC links are below:
overall code quality benefits. Therefore, please don't let the review
discourage you from contributing: its only aim is to improve the quality
   of the project, not to criticize (we are, after all, very grateful for the
- time you're donating!).
+ time you're donating!). See our :ref:`Reviewer Guidelines
+ <reviewer-guidelines>` for more information.
* To update your PR, make your changes on your local repository, commit,
**run tests, and only if they succeed** push to your fork. As soon as
@@ -164,6 +181,8 @@ be merged automatically, you have to incorporate changes that have been made
since you started into your branch. Our recommended way to do this is to
:ref:`rebase on master<rebasing-on-master>`.
+.. _guidelines:
+
Guidelines
----------
@@ -171,9 +190,11 @@ Guidelines
* All code should be `documented <https://numpydoc.readthedocs.io/
en/latest/format.html#docstring-standard>`_.
* No changes are ever committed without review and approval by a core
- team member.Please ask politely on the PR or on the `mailing list`_ if you
+ team member. Please ask politely on the PR or on the `mailing list`_ if you
get no response to your pull request within a week.
+.. _stylistic-guidelines:
+
Stylistic Guidelines
--------------------
@@ -218,6 +239,8 @@ This will create a report in ``build/coverage``, which can be viewed with::
$ firefox build/coverage/index.html
+.. _building-docs:
+
Building docs
-------------
@@ -277,12 +300,13 @@ The rest of the story
.. toctree::
:maxdepth: 2
- conduct/code_of_conduct
Git Basics <gitwash/index>
development_environment
development_workflow
+ development_advanced_debugging
+ reviewer_guidelines
../benchmarking
- style_guide
+ NumPy C style guide <https://numpy.org/neps/nep-0045-c_style_guide.html>
releasing
governance/index
howto-docs
diff --git a/doc/source/dev/reviewer_guidelines.rst b/doc/source/dev/reviewer_guidelines.rst
new file mode 100644
index 000000000..0b225b9b6
--- /dev/null
+++ b/doc/source/dev/reviewer_guidelines.rst
@@ -0,0 +1,119 @@
+.. _reviewer-guidelines:
+
+===================
+Reviewer Guidelines
+===================
+
+Reviewing open pull requests (PRs) helps move the project forward. We encourage
+people outside the project to get involved as well; it's a great way to get
+familiar with the codebase.
+
+Who can be a reviewer?
+======================
+
+Reviews can come from outside the NumPy team -- we welcome contributions from
+domain experts (for instance, in `linalg` or `fft`) or maintainers of other
+projects. You do not need to be a NumPy maintainer (a NumPy team member with
+permission to merge a PR) to review.
+
+If we do not know you yet, consider introducing yourself on `the mailing list or
+Slack <https://numpy.org/community/>`_ before you start reviewing pull requests.
+
+Communication Guidelines
+========================
+
+- Every PR, good or bad, is an act of generosity. Opening with a positive
+ comment will help the author feel rewarded, and your subsequent remarks may be
+  heard more clearly. You may feel good, too.
+- Begin if possible with the large issues, so the author knows they've been
+ understood. Resist the temptation to immediately go line by line, or to open
+ with small pervasive issues.
+- You are the face of the project, and NumPy some time ago decided `the kind of
+ project it will be <https://numpy.org/code-of-conduct/>`_: open, empathetic,
+ welcoming, friendly and patient. Be `kind
+ <https://youtu.be/tzFWz5fiVKU?t=49m30s>`_ to contributors.
+- Do not let perfect be the enemy of the good, particularly for documentation.
+ If you find yourself making many small suggestions, or being too nitpicky on
+ style or grammar, consider merging the current PR when all important concerns
+ are addressed. Then, either push a commit directly (if you are a maintainer)
+ or open a follow-up PR yourself.
+- If you need help writing replies in reviews, check out some `Standard replies
+ for reviewing
+ <https://scikit-learn.org/stable/developers/tips.html#saved-replies>`_.
+
+Reviewer Checklist
+==================
+
+- Is the intended behavior clear under all conditions? Some things to watch:
+ - What happens with unexpected inputs like empty arrays or nan/inf values?
+  - Are axis or shape arguments tested to be `int` or `tuple`?
+ - Are unusual `dtypes` tested if a function supports those?
+- Should variable names be improved for clarity or consistency?
+- Should comments be added, or rather removed as unhelpful or extraneous?
+- Does the documentation follow the :ref:`NumPy guidelines<howto-document>`? Are
+ the docstrings properly formatted?
+- Does the code follow NumPy's :ref:`Stylistic Guidelines<stylistic-guidelines>`?
+- If you are a maintainer, and it is not obvious from the PR description, add a
+ short explanation of what a branch did to the merge message and, if closing an
+ issue, also add "Closes gh-123" where 123 is the issue number.
+- For code changes, at least one maintainer (i.e. someone with commit rights)
+ should review and approve a pull request. If you are the first to review a
+ PR and approve of the changes use the GitHub `approve review
+ <https://help.github.com/articles/reviewing-changes-in-pull-requests/>`_ tool
+ to mark it as such. If a PR is straightforward, for example it's a clearly
+ correct bug fix, it can be merged straight away. If it's more complex or
+ changes public API, please leave it open for at least a couple of days so
+ other maintainers get a chance to review.
+- If you are a subsequent reviewer on an already approved PR, please use the
+ same review method as for a new PR (focus on the larger issues, resist the
+ temptation to add only a few nitpicks). If you have commit rights and think
+ no more review is needed, merge the PR.
+
+For maintainers
+---------------
+
+- Make sure all automated CI tests pass before merging a PR, and that the
+ :ref:`documentation builds <building-docs>` without any errors.
+- In case of merge conflicts, ask the PR submitter to :ref:`rebase on master
+ <rebasing-on-master>`.
+- For PRs that add new features or are in some way complex, wait at least a day
+ or two before merging it. That way, others get a chance to comment before the
+ code goes in. Consider adding it to the release notes.
+- When merging contributions, a committer is responsible for ensuring that those
+ meet the requirements outlined in the :ref:`Development process guidelines
+ <guidelines>` for NumPy. Also, check that new features and backwards
+ compatibility breaks were discussed on the `numpy-discussion mailing list
+ <https://mail.python.org/mailman/listinfo/numpy-discussion>`_.
+- Squashing commits or cleaning up commit messages of a PR that you consider too
+ messy is OK. Remember to retain the original author's name when doing this.
+ Make sure commit messages follow the :ref:`rules for NumPy
+ <writing-the-commit-message>`.
+- When you want to reject a PR: if it's very obvious, you can just close it and
+ explain why. If it's not, then it's a good idea to first explain why you
+ think the PR is not suitable for inclusion in NumPy and then let a second
+ committer comment or close.
+
+GitHub Workflow
+---------------
+
+When reviewing pull requests, please use workflow tracking features on GitHub as
+appropriate:
+
+- After you have finished reviewing, if you want to ask for the submitter to
+ make changes, change your review status to "Changes requested." This can be
+  done on the GitHub PR page, under the Files changed tab, using the Review
+  changes button (top right).
+- If you're happy about the current status, mark the pull request as Approved
+ (same way as Changes requested). Alternatively (for maintainers): merge
+ the pull request, if you think it is ready to be merged.
+
+It may be helpful to have a copy of the pull request code checked out on your
+own machine so that you can play with it locally. You can use the `GitHub CLI
+<https://docs.github.com/en/github/getting-started-with-github/github-cli>`_ to
+do this by clicking the ``Open with`` button in the upper right-hand corner of
+the PR page.
+
+Assuming you have your :ref:`development environment<development-environment>`
+set up, you can now build the code and test it.
+
+.. include:: gitwash/git_links.inc
diff --git a/doc/source/dev/style_guide.rst b/doc/source/dev/style_guide.rst
deleted file mode 100644
index bede3424d..000000000
--- a/doc/source/dev/style_guide.rst
+++ /dev/null
@@ -1,8 +0,0 @@
-.. _style_guide:
-
-===================
-NumPy C Style Guide
-===================
-
-.. include:: ../../C_STYLE_GUIDE.rst.txt
- :start-line: 4
diff --git a/doc/source/doc_conventions.rst b/doc/source/doc_conventions.rst
new file mode 100644
index 000000000..e2bc419d1
--- /dev/null
+++ b/doc/source/doc_conventions.rst
@@ -0,0 +1,23 @@
+.. _documentation_conventions:
+
+##############################################################################
+Documentation conventions
+##############################################################################
+
+- Names that look like :func:`numpy.array` are links to detailed
+ documentation.
+
+- Examples often include the Python prompt ``>>>``. This is not part of the
+ code and will cause an error if typed or pasted into the Python
+ shell. It can be safely typed or pasted into the IPython shell; the ``>>>``
+ is ignored.
+
+- Examples often use ``np`` as an alias for ``numpy``; that is, they assume
+ you've run::
+
+ >>> import numpy as np
+
+- If you're a code contributor writing a docstring, see :ref:`docstring_intro`.
+
+- If you're a writer contributing ordinary (non-docstring) documentation, see
+ :ref:`userdoc_guide`.
diff --git a/doc/source/docs/howto_document.rst b/doc/source/docs/howto_document.rst
index cf86b7e99..ff726c67c 100644
--- a/doc/source/docs/howto_document.rst
+++ b/doc/source/docs/howto_document.rst
@@ -1,12 +1,41 @@
.. _howto-document:
-A Guide to NumPy/SciPy Documentation
-====================================
+A Guide to NumPy Documentation
+==============================
+
+.. _userdoc_guide:
User documentation
-*******************
-NumPy text documents should follow the `Google developer documentation style guide <https://developers.google.com/style>`_.
+******************
+- In general, we follow the
+ `Google developer documentation style guide <https://developers.google.com/style>`_.
+
+- NumPy style governs cases where:
+
+ - Google has no guidance, or
+ - We prefer not to use the Google style
+
+ Our current rules:
+
+ - We pluralize *index* as *indices* rather than
+ `indexes <https://developers.google.com/style/word-list#letter-i>`_,
+ following the precedent of :func:`numpy.indices`.
+
+ - For consistency we also pluralize *matrix* as *matrices*.
+
+- Grammatical issues inadequately addressed by the NumPy or Google rules are
+ decided by the section on "Grammar and Usage" in the most recent edition of
+ the `Chicago Manual of Style
+ <https://en.wikipedia.org/wiki/The_Chicago_Manual_of_Style>`_.
+
+- We welcome being
+ `alerted <https://github.com/numpy/numpy/issues>`_ to cases
+ we should add to the NumPy style rules.
+
+
+
+.. _docstring_intro:
Docstrings
**********
@@ -40,29 +69,7 @@ after which you may use it::
np.fft.fft2(...)
-.. rubric::
- **For convenience the** `formatting standard`_ **is included below with an
- example**
-
-.. include:: ../../sphinxext/doc/format.rst
-
-.. _example:
-
-Example Source
-==============
-
-.. literalinclude:: ../../sphinxext/doc/example.py
-
-
-
-Example Rendered
-================
-
-.. ifconfig:: python_version_major < '3'
-
- The example is rendered only when sphinx is run with python3 and above
-
-.. automodule:: doc.example
- :members:
+Please use the numpydoc `formatting standard`_ as shown in their example_.
.. _`formatting standard`: https://numpydoc.readthedocs.io/en/latest/format.html
+.. _example: https://numpydoc.readthedocs.io/en/latest/example.html
diff --git a/doc/source/f2py/allocarr_session.dat b/doc/source/f2py/allocarr_session.dat
index 754d9cb8b..ba168c22a 100644
--- a/doc/source/f2py/allocarr_session.dat
+++ b/doc/source/f2py/allocarr_session.dat
@@ -1,8 +1,11 @@
>>> import allocarr
>>> print(allocarr.mod.__doc__)
-b - 'f'-array(-1,-1), not allocated
-foo - Function signature:
- foo()
+b : 'f'-array(-1,-1), not allocated
+foo()
+
+Wrapper for ``foo``.
+
+
>>> allocarr.mod.foo()
b is not allocated
diff --git a/doc/source/f2py/common_session.dat b/doc/source/f2py/common_session.dat
index 0a38bec27..2595bfbd5 100644
--- a/doc/source/f2py/common_session.dat
+++ b/doc/source/f2py/common_session.dat
@@ -1,8 +1,8 @@
>>> import common
>>> print(common.data.__doc__)
-i - 'i'-scalar
-x - 'i'-array(4)
-a - 'f'-array(2,3)
+i : 'i'-scalar
+x : 'i'-array(4)
+a : 'f'-array(2,3)
>>> common.data.i = 5
>>> common.data.x[1] = 2
diff --git a/doc/source/f2py/distutils.rst b/doc/source/f2py/distutils.rst
index 71f6eab5a..4cf30045e 100644
--- a/doc/source/f2py/distutils.rst
+++ b/doc/source/f2py/distutils.rst
@@ -2,6 +2,8 @@
Using via `numpy.distutils`
=============================
+.. currentmodule:: numpy.distutils.core
+
:mod:`numpy.distutils` is part of NumPy extending standard Python ``distutils``
to deal with Fortran sources and F2PY signature files, e.g. compile Fortran
sources, call F2PY to construct extension modules, etc.
diff --git a/doc/source/f2py/moddata_session.dat b/doc/source/f2py/moddata_session.dat
index e3c758041..824bd86fc 100644
--- a/doc/source/f2py/moddata_session.dat
+++ b/doc/source/f2py/moddata_session.dat
@@ -1,10 +1,14 @@
>>> import moddata
>>> print(moddata.mod.__doc__)
-i - 'i'-scalar
-x - 'i'-array(4)
-a - 'f'-array(2,3)
-foo - Function signature:
- foo()
+i : 'i'-scalar
+x : 'i'-array(4)
+a : 'f'-array(2,3)
+b : 'f'-array(-1,-1), not allocated
+foo()
+
+Wrapper for ``foo``.
+
+
>>> moddata.mod.i = 5
>>> moddata.mod.x[:2] = [1,2]
diff --git a/doc/source/glossary.rst b/doc/source/glossary.rst
index b6ea42909..57e3bcf92 100644
--- a/doc/source/glossary.rst
+++ b/doc/source/glossary.rst
@@ -2,6 +2,520 @@
Glossary
********
-.. toctree::
+.. glossary::
+
+
+ (`n`,)
+ A parenthesized number followed by a comma denotes a tuple with one
+ element. The trailing comma distinguishes a one-element tuple from a
+ parenthesized ``n``.
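A minimal illustration of the distinction, using only standard NumPy:

```python
import numpy as np

a = np.zeros(3)
print(a.shape)           # (3,) -- a one-element tuple, not the integer 3
print(a.shape == (3,))   # True
print(a.shape == 3)      # False
```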
+
+
+ -1
+ - **In a dimension entry**, instructs NumPy to choose the length
+ that will keep the total number of array elements the same.
+
+ >>> np.arange(12).reshape(4, -1).shape
+ (4, 3)
+
+ - **In an index**, any negative value
+ `denotes <https://docs.python.org/dev/faq/programming.html#what-s-a-negative-index>`_
+ indexing from the right.
+
+ . . .
+ An :py:data:`Ellipsis`.
+
+ - **When indexing an array**, shorthand that the missing axes, if they
+ exist, are full slices.
+
+ >>> a = np.arange(24).reshape(2,3,4)
+
+ >>> a[...].shape
+ (2, 3, 4)
+
+ >>> a[...,0].shape
+ (2, 3)
+
+ >>> a[0,...].shape
+ (3, 4)
+
+ >>> a[0,...,0].shape
+ (3,)
+
+ It can be used at most once; ``a[...,0,...]`` raises an :exc:`IndexError`.
+
+ - **In printouts**, NumPy substitutes ``...`` for the middle elements of
+ large arrays. To see the entire array, use `numpy.printoptions`
+
+
+ :
+ The Python :term:`python:slice`
+ operator. In ndarrays, slicing can be applied to every
+ axis:
+
+ >>> a = np.arange(24).reshape(2,3,4)
+ >>> a
+ array([[[ 0, 1, 2, 3],
+ [ 4, 5, 6, 7],
+ [ 8, 9, 10, 11]],
+ <BLANKLINE>
+ [[12, 13, 14, 15],
+ [16, 17, 18, 19],
+ [20, 21, 22, 23]]])
+ <BLANKLINE>
+ >>> a[1:,-2:,:-1]
+ array([[[16, 17, 18],
+ [20, 21, 22]]])
+
+ Trailing slices can be omitted: ::
+
+ >>> a[1] == a[1,:,:]
+ array([[ True, True, True, True],
+ [ True, True, True, True],
+ [ True, True, True, True]])
+
+ In contrast to Python, where slicing creates a copy, in NumPy slicing
+ creates a :term:`view`.
+
+ For details, see :ref:`combining-advanced-and-basic-indexing`.
+
+
+ <
+ In a dtype declaration, indicates that the data is
+ :term:`little-endian` (the bracket is big on the right). ::
+
+ >>> dt = np.dtype('<f') # little-endian single-precision float
+
+
+ >
+ In a dtype declaration, indicates that the data is
+ :term:`big-endian` (the bracket is big on the left). ::
+
+ >>> dt = np.dtype('>H') # big-endian unsigned short
+
+
+ advanced indexing
+ Rather than using a :doc:`scalar <reference/arrays.scalars>` or slice as
+ an index, an axis can be indexed with an array, providing fine-grained
+ selection. This is known as :ref:`advanced indexing<advanced-indexing>`
+ or "fancy indexing".
+
+
+ along an axis
+ An operation `along axis n` of array ``a`` behaves as if its argument
+ were an array of slices of ``a`` where each slice has a successive
+ index of axis `n`.
+
+ For example, if ``a`` is a 3 x `N` array, an operation along axis 0
+ behaves as if its argument were an array containing slices of each row:
+
+ >>> np.array((a[0,:], a[1,:], a[2,:])) #doctest: +SKIP
+
+ To make it concrete, we can pick the operation to be the array-reversal
+ function :func:`numpy.flip`, which accepts an ``axis`` argument. We
+ construct a 3 x 4 array ``a``:
+
+ >>> a = np.arange(12).reshape(3,4)
+ >>> a
+ array([[ 0, 1, 2, 3],
+ [ 4, 5, 6, 7],
+ [ 8, 9, 10, 11]])
+
+ Reversing along axis 0 (the row axis) yields
+
+ >>> np.flip(a,axis=0)
+ array([[ 8, 9, 10, 11],
+ [ 4, 5, 6, 7],
+ [ 0, 1, 2, 3]])
+
+ Recalling the definition of `along an axis`, ``flip`` along axis 0 is
+ treating its argument as if it were
+
+ >>> np.array((a[0,:], a[1,:], a[2,:]))
+ array([[ 0, 1, 2, 3],
+ [ 4, 5, 6, 7],
+ [ 8, 9, 10, 11]])
+
+ and the result of ``np.flip(a,axis=0)`` is to reverse the slices:
+
+ >>> np.array((a[2,:],a[1,:],a[0,:]))
+ array([[ 8, 9, 10, 11],
+ [ 4, 5, 6, 7],
+ [ 0, 1, 2, 3]])
+
+
+ array
+ Used synonymously in the NumPy docs with :term:`ndarray`.
+
+
+ array_like
+ Any :doc:`scalar <reference/arrays.scalars>` or
+ :term:`python:sequence`
+ that can be interpreted as an ndarray. In addition to ndarrays
+ and scalars this category includes lists (possibly nested and with
+ different element types) and tuples. Any argument accepted by
+ :doc:`numpy.array <reference/generated/numpy.array>`
+ is array_like. ::
+
+ >>> a = np.array([[1, 2.0], [0, 0], (1+1j, 3.)])
+
+ >>> a
+ array([[1.+0.j, 2.+0.j],
+ [0.+0.j, 0.+0.j],
+ [1.+1.j, 3.+0.j]])
+
+
+ array scalar
+ For uniformity in handling operands, NumPy treats
+ a :doc:`scalar <reference/arrays.scalars>` as an array of zero
+ dimension.
+
+
+ axis
+ Another term for an array dimension. Axes are numbered left to right;
+ axis 0 is the first element in the shape tuple.
+
+ In a two-dimensional vector, the elements of axis 0 are rows and the
+ elements of axis 1 are columns.
+
+ In higher dimensions, the picture changes. NumPy prints
+ higher-dimensional vectors as replications of row-by-column building
+ blocks, as in this three-dimensional vector:
+
+ >>> a = np.arange(12).reshape(2,2,3)
+ >>> a
+ array([[[ 0, 1, 2],
+ [ 3, 4, 5]],
+      <BLANKLINE>
+ [[ 6, 7, 8],
+ [ 9, 10, 11]]])
+
+ ``a`` is depicted as a two-element array whose elements are 2x3 vectors.
+ From this point of view, rows and columns are the final two axes,
+ respectively, in any shape.
+
+ This rule helps you anticipate how a vector will be printed, and
+ conversely how to find the index of any of the printed elements. For
+ instance, in the example, the last two values of 8's index must be 0 and
+ 2. Since 8 appears in the second of the two 2x3's, the first index must
+ be 1:
+
+ >>> a[1,0,2]
+ 8
+
+ A convenient way to count dimensions in a printed vector is to
+ count ``[`` symbols after the open-parenthesis. This is
+ useful in distinguishing, say, a (1,2,3) shape from a (2,3) shape:
+
+ >>> a = np.arange(6).reshape(2,3)
+ >>> a.ndim
+ 2
+ >>> a
+ array([[0, 1, 2],
+ [3, 4, 5]])
+
+ >>> a = np.arange(6).reshape(1,2,3)
+ >>> a.ndim
+ 3
+ >>> a
+ array([[[0, 1, 2],
+ [3, 4, 5]]])
+
+
+ .base
+ If an array does not own its memory, then its
+ :doc:`base <reference/generated/numpy.ndarray.base>` attribute returns
+ the object whose memory the array is referencing. That object may be
+ referencing the memory from still another object, so the owning object
+ may be ``a.base.base.base...``. Some writers erroneously claim that
+ testing ``base`` determines if arrays are :term:`view`\ s. For the
+ correct way, see :func:`numpy.shares_memory`.
+
+
+ big-endian
+ See `Endianness <https://en.wikipedia.org/wiki/Endianness>`_.
+
+
+ BLAS
+ `Basic Linear Algebra Subprograms <https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms>`_
+
+
+ broadcast
+ *broadcasting* is NumPy's ability to process ndarrays of
+ different sizes as if all were the same size.
+
+ It permits an elegant do-what-I-mean behavior where, for instance,
+ adding a scalar to a vector adds the scalar value to every element.
+
+ >>> a = np.arange(3)
+ >>> a
+ array([0, 1, 2])
+
+ >>> a + [3, 3, 3]
+ array([3, 4, 5])
+
+ >>> a + 3
+ array([3, 4, 5])
+
+      Ordinarily, vector operands must all be the same size, because NumPy
+ works element by element -- for instance, ``c = a * b`` is ::
+
+ c[0,0,0] = a[0,0,0] * b[0,0,0]
+ c[0,0,1] = a[0,0,1] * b[0,0,1]
+ ...
+
+ But in certain useful cases, NumPy can duplicate data along "missing"
+ axes or "too-short" dimensions so shapes will match. The duplication
+ costs no memory or time. For details, see
+ :doc:`Broadcasting. <user/basics.broadcasting>`
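The shape-matching described above can be sketched with a column and a row array; NumPy behaves as if each were duplicated along its length-1 or missing axis:

```python
import numpy as np

col = np.arange(3).reshape(3, 1)   # shape (3, 1)
row = np.arange(4)                 # shape (4,), treated as (1, 4)

result = col + row                 # shapes broadcast to (3, 4)
print(result.shape)                # (3, 4)
print(result[2, 3])                # 5, i.e. col[2] + row[3]
```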
+
+
+ C order
+ Same as :term:`row-major`.
+
+
+ column-major
+ See `Row- and column-major order <https://en.wikipedia.org/wiki/Row-_and_column-major_order>`_.
+
+
+ contiguous
+ An array is contiguous if
+ * it occupies an unbroken block of memory, and
+ * array elements with higher indexes occupy higher addresses (that
+ is, no :term:`stride` is negative).
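The ``flags`` attribute reports contiguity directly; a quick sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)    # freshly created arrays are C-contiguous
print(a.flags['C_CONTIGUOUS'])    # True

b = a[:, ::-1]                    # negative stride along axis 1
print(b.flags['C_CONTIGUOUS'])    # False -- this view is not contiguous
```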
+
+
+ copy
+ See :term:`view`.
+
+
+ dimension
+ See :term:`axis`.
+
+
+ dtype
+ The datatype describing the (identically typed) elements in an ndarray.
+ It can be changed to reinterpret the array contents. For details, see
+ :doc:`Data type objects (dtype). <reference/arrays.dtypes>`
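Reinterpreting the contents by changing the dtype can be sketched with ``view``; the element values printed by ``b`` depend on the machine's byte order, so only sizes are shown:

```python
import numpy as np

a = np.array([1, 2], dtype=np.int32)   # 8 bytes of data in total
b = a.view(np.int16)                   # same bytes, read as 16-bit integers

print(a.nbytes == b.nbytes)   # True -- no data copied or changed
print(b.size)                 # 4 -- twice as many (smaller) elements
```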
+
+
+ fancy indexing
+ Another term for :term:`advanced indexing`.
+
+
+ field
+ In a :term:`structured data type`, each subtype is called a `field`.
+ The `field` has a name (a string), a type (any valid dtype), and
+ an optional `title`. See :ref:`arrays.dtypes`.
+
+
+ Fortran order
+ Same as :term:`column-major`.
+
+
+ flattened
+ See :term:`ravel`.
+
+
+ homogeneous
+ All elements of a homogeneous array have the same type. ndarrays, in
+ contrast to Python lists, are homogeneous. The type can be complicated,
+ as in a :term:`structured array`, but all elements have that type.
+
+ NumPy `object arrays <#term-object-array>`_, which contain references to
+ Python objects, fill the role of heterogeneous arrays.
+
+
+ itemsize
+ The size of the dtype element in bytes.
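For instance:

```python
import numpy as np

print(np.dtype(np.float64).itemsize)          # 8 (bytes)
print(np.zeros(3, dtype=np.int16).itemsize)   # 2
```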
+
+
+ little-endian
+ See `Endianness <https://en.wikipedia.org/wiki/Endianness>`_.
+
+
+ mask
+ A boolean array used to select only certain elements for an operation:
+
+ >>> x = np.arange(5)
+ >>> x
+ array([0, 1, 2, 3, 4])
+
+ >>> mask = (x > 2)
+ >>> mask
+      array([False, False, False,  True,  True])
+
+ >>> x[mask] = -1
+ >>> x
+ array([ 0, 1, 2, -1, -1])
+
+
+ masked array
+ Bad or missing data can be cleanly ignored by putting it in a masked
+ array, which has an internal boolean array indicating invalid
+ entries. Operations with masked arrays ignore these entries. ::
+
+ >>> a = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True])
+ >>> a
+ masked_array(data=[--, 2.0, --],
+ mask=[ True, False, True],
+ fill_value=1e+20)
+
+ >>> a + [1, 2, 3]
+ masked_array(data=[--, 4.0, --],
+ mask=[ True, False, True],
+ fill_value=1e+20)
+
+ For details, see :doc:`Masked arrays. <reference/maskedarray>`
+
+
+ matrix
+ NumPy's two-dimensional
+ :doc:`matrix class <reference/generated/numpy.matrix>`
+ should no longer be used; use regular ndarrays.
+
+
+ ndarray
+ :doc:`NumPy's basic structure <reference/arrays>`.
+
+
+ object array
+ An array whose dtype is ``object``; that is, it contains references to
+ Python objects. Indexing the array dereferences the Python objects, so
+ unlike other ndarrays, an object array has the ability to hold
+ heterogeneous objects.
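A short sketch of the heterogeneity this allows:

```python
import numpy as np

a = np.empty(3, dtype=object)   # three slots for arbitrary Python objects
a[0] = {'key': 'value'}
a[1] = [1, 2, 3]
a[2] = 'text'

print([type(x).__name__ for x in a])   # ['dict', 'list', 'str']
```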
+
+
+ ravel
+ :doc:`numpy.ravel \
+ <reference/generated/numpy.ravel>`
+ and :doc:`numpy.flatten \
+ <reference/generated/numpy.ndarray.flatten>`
+ both flatten an ndarray. ``ravel`` will return a view if possible;
+ ``flatten`` always returns a copy.
+
+      Flattening collapses a multidimensional array to a single dimension;
+ details of how this is done (for instance, whether ``a[n+1]`` should be
+ the next row or next column) are parameters.
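The view-versus-copy distinction can be demonstrated directly:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
r = a.ravel()      # a view, when the memory layout allows it
f = a.flatten()    # always a copy

a[0, 0] = 99
print(r[0], f[0])               # 99 0 -- only the view sees the change
print(np.shares_memory(a, r))   # True
print(np.shares_memory(a, f))   # False
```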
+
+
+ record array
+      A :term:`structured array` that allows access in an attribute style
+ (``a.field``) in addition to ``a['field']``. For details, see
+ :doc:`numpy.recarray. <reference/generated/numpy.recarray>`
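A minimal sketch of the two access styles:

```python
import numpy as np

dt = np.dtype([('name', 'U10'), ('age', np.int32)])
a = np.array([('Alice', 30), ('Bob', 25)], dtype=dt)

rec = a.view(np.recarray)   # same data, attribute-style access added
print(rec.age)              # [30 25]
print(rec['age'])           # dictionary-style access still works
```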
+
+
+ row-major
+ See `Row- and column-major order <https://en.wikipedia.org/wiki/Row-_and_column-major_order>`_.
+ NumPy creates arrays in row-major order by default.
+
+
+ scalar
+ In NumPy, usually a synonym for :term:`array scalar`.
+
+
+ shape
+ A tuple showing the length of each dimension of an ndarray. The
+ length of the tuple itself is the number of dimensions
+ (:doc:`numpy.ndim <reference/generated/numpy.ndarray.ndim>`).
+ The product of the tuple elements is the number of elements in the
+ array. For details, see
+ :doc:`numpy.ndarray.shape <reference/generated/numpy.ndarray.shape>`.
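The relations described above, in code:

```python
import numpy as np

a = np.zeros((2, 3, 4))
print(a.shape)   # (2, 3, 4)
print(a.ndim)    # 3  -- the length of the shape tuple
print(a.size)    # 24 -- the product of the shape entries
```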
+
+
+ stride
+ Physical memory is one-dimensional; strides provide a mechanism to map
+ a given index to an address in memory. For an N-dimensional array, its
+ ``strides`` attribute is an N-element tuple; advancing from index
+ ``i`` to index ``i+1`` on axis ``n`` means adding ``a.strides[n]`` bytes
+ to the address.
+
+ Strides are computed automatically from an array's dtype and
+ shape, but can be directly specified using
+ :doc:`as_strided. <reference/generated/numpy.lib.stride_tricks.as_strided>`
+
+ For details, see
+ :doc:`numpy.ndarray.strides <reference/generated/numpy.ndarray.strides>`.
+
+ To see how striding underlies the power of NumPy views, see
+ `The NumPy array: a structure for efficient numerical computation. \
+ <https://arxiv.org/pdf/1102.1523.pdf>`_
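A concrete sketch of the byte arithmetic, for a 3x4 array of 4-byte integers:

```python
import numpy as np

a = np.arange(12, dtype=np.int32).reshape(3, 4)
print(a.strides)     # (16, 4): the next row is 16 bytes away, the next column 4

# Transposing swaps the strides -- a view; no data is moved
print(a.T.strides)   # (4, 16)
```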
+
+
+ structured array
+ Array whose :term:`dtype` is a :term:`structured data type`.
+
+
+ structured data type
+ Users can create arbitrarily complex :term:`dtypes <dtype>`
+ that can include other arrays and dtypes. These composite dtypes are called
+ :doc:`structured data types. <user/basics.rec>`
+
+
+ subarray
+ An array nested in a :term:`structured data type`, as ``b`` is here:
+
+ >>> dt = np.dtype([('a', np.int32), ('b', np.float32, (3,))])
+ >>> np.zeros(3, dtype=dt)
+ array([(0, [0., 0., 0.]), (0, [0., 0., 0.]), (0, [0., 0., 0.])],
+ dtype=[('a', '<i4'), ('b', '<f4', (3,))])
+
+
+ subarray data type
+ An element of a structured datatype that behaves like an ndarray.
+
+
+ title
+ An alias for a field name in a structured datatype.
+
+
+ type
+ In NumPy, usually a synonym for :term:`dtype`. For the more general
+ Python meaning, :term:`see here. <python:type>`
+
+
+ ufunc
+ NumPy's fast element-by-element computation (:term:`vectorization`)
+      lets the user choose which function is applied. The general term for the
+ function is ``ufunc``, short for ``universal function``. NumPy routines
+ have built-in ufuncs, but users can also
+ :doc:`write their own. <reference/ufuncs>`
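A built-in ufunc, and a (slower) user-defined one made with ``np.frompyfunc``:

```python
import numpy as np

# Built-in ufuncs apply a function element by element
print(np.add([1, 2], [10, 20]))          # [11 22]

# np.frompyfunc wraps a Python function as a ufunc (result has object dtype)
double = np.frompyfunc(lambda x: 2 * x, 1, 1)
print(double(np.arange(3)))              # [0 2 4]
```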
+
+
+ vectorization
+ NumPy hands off array processing to C, where looping and computation are
+ much faster than in Python. To exploit this, programmers using NumPy
+ eliminate Python loops in favor of array-to-array operations.
+ :term:`vectorization` can refer both to the C offloading and to
+ structuring NumPy code to leverage it.
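The trade described above, side by side; both compute the same sum, but only the second hands the loop to C:

```python
import numpy as np

a = np.arange(100_000)

# Python loop: each iteration runs in the interpreter
total_loop = 0
for x in a:
    total_loop += x

# Vectorized: the loop runs in compiled code, typically far faster
total_vec = a.sum()

print(total_loop == total_vec)   # True
```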
+
+ view
+ Without touching underlying data, NumPy can make one array appear
+ to change its datatype and shape.
+
+ An array created this way is a `view`, and NumPy often exploits the
+ performance gain of using a view versus making a new array.
+
+ A potential drawback is that writing to a view can alter the original
+ as well. If this is a problem, NumPy instead needs to create a
+ physically distinct array -- a `copy`.
+
+ Some NumPy routines always return views, some always return copies, some
+ may return one or the other, and for some the choice can be specified.
+ Responsibility for managing views and copies falls to the programmer.
+ :func:`numpy.shares_memory` will check whether ``b`` is a view of
+ ``a``, but an exact answer isn't always feasible, as the documentation
+ page explains.
+
+ >>> x = np.arange(5)
+ >>> x
+ array([0, 1, 2, 3, 4])
+
+ >>> y = x[::2]
+ >>> y
+ array([0, 2, 4])
+
+ >>> x[0] = 3 # changing x changes y as well, since y is a view on x
+ >>> y
+ array([3, 2, 4])
-.. automodule:: numpy.doc.glossary
diff --git a/doc/source/license.rst b/doc/source/license.rst
index 8f360af88..beea023ce 100644
--- a/doc/source/license.rst
+++ b/doc/source/license.rst
@@ -1,35 +1,6 @@
*************
-NumPy License
+NumPy license
*************
-Copyright (c) 2005, NumPy Developers
-
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are
-met:
-
-* Redistributions of source code must retain the above copyright
- notice, this list of conditions and the following disclaimer.
-
-* Redistributions in binary form must reproduce the above
- copyright notice, this list of conditions and the following
- disclaimer in the documentation and/or other materials provided
- with the distribution.
-
-* Neither the name of the NumPy Developers nor the names of any
- contributors may be used to endorse or promote products derived
- from this software without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+.. include:: ../../LICENSE.txt
+ :literal:
diff --git a/doc/source/reference/arrays.classes.rst b/doc/source/reference/arrays.classes.rst
index c5563bddd..3a4ed2168 100644
--- a/doc/source/reference/arrays.classes.rst
+++ b/doc/source/reference/arrays.classes.rst
@@ -480,16 +480,16 @@ Character arrays (:mod:`numpy.char`)
The `chararray` class exists for backwards compatibility with
Numarray; it is not recommended for new development. Starting from numpy
1.4, if one needs arrays of strings, it is recommended to use arrays of
- `dtype` `object_`, `string_` or `unicode_`, and use the free functions
+ `dtype` `object_`, `bytes_` or `str_`, and use the free functions
in the `numpy.char` module for fast vectorized string operations.
-These are enhanced arrays of either :class:`string_` type or
-:class:`unicode_` type. These arrays inherit from the
+These are enhanced arrays of either :class:`str_` type or
+:class:`bytes_` type. These arrays inherit from the
:class:`ndarray`, but specially-define the operations ``+``, ``*``,
and ``%`` on a (broadcasting) element-by-element basis. These
operations are not available on the standard :class:`ndarray` of
character type. In addition, the :class:`chararray` has all of the
-standard :class:`string <str>` (and :class:`unicode`) methods,
+standard :class:`str` (and :class:`bytes`) methods,
executing them on an element-by-element basis. Perhaps the easiest
way to create a chararray is to use :meth:`self.view(chararray)
<ndarray.view>` where *self* is an ndarray of str or unicode
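A quick sketch of the recommended alternative, using the free functions in `numpy.char` on a plain array of `str_` (the array contents here are illustrative):

```python
import numpy as np

# Vectorized string operations from numpy.char, applied element-by-element
a = np.array(['numpy', 'docs'])
upper = np.char.upper(a)          # uppercase each element
shout = np.char.add(upper, '!')   # elementwise string concatenation
```

The same operations on a `chararray` would be spelled with ``+`` and method calls; the `numpy.char` functions give the same vectorized behavior without the legacy class.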
diff --git a/doc/source/reference/arrays.datetime.rst b/doc/source/reference/arrays.datetime.rst
index 9ce77424a..c5947620e 100644
--- a/doc/source/reference/arrays.datetime.rst
+++ b/doc/source/reference/arrays.datetime.rst
@@ -218,7 +218,7 @@ And here are the time units:
m minute +/- 1.7e13 years [1.7e13 BC, 1.7e13 AD]
s second +/- 2.9e11 years [2.9e11 BC, 2.9e11 AD]
ms millisecond +/- 2.9e8 years [ 2.9e8 BC, 2.9e8 AD]
- us microsecond +/- 2.9e5 years [290301 BC, 294241 AD]
+us / μs microsecond +/- 2.9e5 years [290301 BC, 294241 AD]
ns nanosecond +/- 292 years [ 1678 AD, 2262 AD]
ps picosecond +/- 106 days [ 1969 AD, 1970 AD]
fs femtosecond +/- 2.6 hours [ 1969 AD, 1970 AD]
diff --git a/doc/source/reference/arrays.dtypes.rst b/doc/source/reference/arrays.dtypes.rst
index 8afbaeacc..b5ffa1a8b 100644
--- a/doc/source/reference/arrays.dtypes.rst
+++ b/doc/source/reference/arrays.dtypes.rst
@@ -122,14 +122,12 @@ constructor:
What can be converted to a data-type object is described below:
:class:`dtype` object
-
.. index::
triple: dtype; construction; from dtype
Used as-is.
None
-
.. index::
triple: dtype; construction; from None
@@ -139,7 +137,6 @@ None
triple: dtype; construction; from type
Array-scalar types
-
The 24 built-in :ref:`array scalar type objects
<arrays.scalars.built-in>` all convert to an associated data-type object.
This is true for their sub-classes as well.
@@ -155,15 +152,6 @@ Array-scalar types
>>> dt = np.dtype(np.complex128) # 128-bit complex floating-point number
Generic types
-
- .. deprecated NumPy 1.19::
-
- The use of generic types is deprecated. This is because it can be
- unexpected in a context such as ``arr.astype(dtype=np.floating)``.
- ``arr.astype(dtype=np.floating)`` which casts an array of ``float32``
- to an array of ``float64``, even though ``float32`` is a subdtype of
- ``np.floating``.
-
The generic hierarchical type objects convert to corresponding
type objects according to the associations:
@@ -176,8 +164,16 @@ Generic types
:class:`generic`, :class:`flexible` :class:`void`
===================================================== ===============
-Built-in Python types
+ .. deprecated:: 1.19
+
+ This conversion of generic scalar types is deprecated.
+ This is because it can be unexpected in a context such as
+ ``arr.astype(dtype=np.floating)``, which casts an array of ``float32``
+ to an array of ``float64``, even though ``float32`` is a subdtype of
+ ``np.floating``.
+
+Built-in Python types
Several python types are equivalent to a corresponding
array scalar when used to generate a :class:`dtype` object:
@@ -209,7 +205,6 @@ Built-in Python types
that such types may map to a specific (new) dtype in the future.
Types with ``.dtype``
-
Any type object with a ``dtype`` attribute: The attribute will be
accessed and used directly. The attribute must return something
that is convertible into a dtype object.
@@ -223,7 +218,6 @@ prepended with ``'>'`` (:term:`big-endian`), ``'<'``
specify the byte order.
One-character strings
-
Each built-in data-type has a character code
(the updated Numeric typecodes) that uniquely identifies it.
@@ -235,7 +229,6 @@ One-character strings
>>> dt = np.dtype('d') # double-precision floating-point number
Array-protocol type strings (see :ref:`arrays.interface`)
-
The first character specifies the kind of data and the remaining
characters specify the number of bytes per item, except for Unicode,
where it is interpreted as the number of characters. The item size
@@ -271,14 +264,12 @@ Array-protocol type strings (see :ref:`arrays.interface`)
.. admonition:: Note on string types
For backward compatibility with Python 2 the ``S`` and ``a`` typestrings
- remain zero-terminated bytes and ``np.string_`` continues to map to
- ``np.bytes_``.
- To use actual strings in Python 3 use ``U`` or ``np.unicode_``.
+ remain zero-terminated bytes and `numpy.string_` continues to alias
+ `numpy.bytes_`. To use actual strings in Python 3 use ``U`` or `numpy.str_`.
For signed bytes that do not need zero-termination ``b`` or ``i1`` can be
used.
String with comma-separated fields
-
A short-hand notation for specifying the format of a structured data type is
a comma-separated string of basic formats.
@@ -310,7 +301,6 @@ String with comma-separated fields
>>> dt = np.dtype("a3, 3u8, (3,4)a10")
Type strings
-
Any string in :obj:`numpy.sctypeDict`.keys():
.. admonition:: Example
@@ -322,7 +312,6 @@ Type strings
triple: dtype; construction; from tuple
``(flexible_dtype, itemsize)``
-
The first argument must be an object that is converted to a
zero-sized flexible data-type object, the second argument is
an integer providing the desired itemsize.
@@ -333,7 +322,6 @@ Type strings
>>> dt = np.dtype(('U', 10)) # 10-character unicode string
``(fixed_dtype, shape)``
-
.. index::
pair: dtype; sub-array
@@ -354,10 +342,9 @@ Type strings
triple: dtype; construction; from list
``[(field_name, field_dtype, field_shape), ...]``
-
*obj* should be a list of fields where each field is described by a
tuple of length 2 or 3. (Equivalent to the ``descr`` item in the
- :obj:`__array_interface__` attribute.)
+ :obj:`~object.__array_interface__` attribute.)
The first element, *field_name*, is the field name (if this is
``''`` then a standard field name, ``'f#'``, is assigned). The
@@ -394,7 +381,6 @@ Type strings
triple: dtype; construction; from dict
``{'names': ..., 'formats': ..., 'offsets': ..., 'titles': ..., 'itemsize': ...}``
-
This style has two required and three optional keys. The *names*
and *formats* keys are required. Their respective values are
equal-length lists with the field names and the field formats.
@@ -405,9 +391,9 @@ Type strings
their values must each be lists of the same length as the *names*
and *formats* lists. The *offsets* value is a list of byte offsets
(limited to `ctypes.c_int`) for each field, while the *titles* value is a
- list of titles for each field (None can be used if no title is
- desired for that field). The *titles* can be any :class:`string`
- or :class:`unicode` object and will add another entry to the
+ list of titles for each field (``None`` can be used if no title is
+ desired for that field). The *titles* can be any object, but when a
+      :class:`str` object is used, it will add another entry to the
fields dictionary keyed by the title and referencing the same
field tuple which will contain the title as an additional tuple
member.
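A minimal sketch of the title behavior, with hypothetical field names:

```python
import numpy as np

# Two uint8 fields with explicit offsets and titles
dt = np.dtype({'names': ['r', 'g'],
               'formats': [np.uint8, np.uint8],
               'offsets': [0, 1],
               'titles': ['red', 'green']})
# The titles 'red'/'green' become extra keys in dt.fields,
# referencing the same field information as 'r'/'g'
red_field = dt.fields['red']
```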
@@ -436,7 +422,6 @@ Type strings
``{'field1': ..., 'field2': ..., ...}``
-
This usage is discouraged, because it is ambiguous with the
other dict-based construction method. If you have a field
called 'names' and a field called 'formats' there will be
@@ -458,7 +443,6 @@ Type strings
... 'col3': (int, 14)})
``(base_dtype, new_dtype)``
-
In NumPy 1.7 and later, this form allows `base_dtype` to be interpreted as
a structured dtype. Arrays created with this dtype will have underlying
dtype `base_dtype` but will have fields and flags taken from `new_dtype`.
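As an illustrative sketch of the ``(base_dtype, new_dtype)`` form: an ``int32`` reinterpreted as two ``int16`` halves sharing the same bytes (the field names ``lo``/``hi`` are hypothetical, and which half is "high" depends on the platform's byte order):

```python
import numpy as np

# base dtype int32, overlaid with two int16 fields at offsets 0 and 2
dt = np.dtype((np.int32, {'lo': (np.int16, 0), 'hi': (np.int16, 2)}))
x = np.zeros(3, dtype=dt)
x['lo'] = [1, 2, 3]   # writes one int16 half of each underlying int32
```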
@@ -553,6 +537,13 @@ Attributes providing additional information:
dtype.alignment
dtype.base
+Metadata attached by the user:
+
+.. autosummary::
+ :toctree: generated/
+
+ dtype.metadata
+
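A brief sketch of attaching and reading user metadata (the ``'units'`` key is an arbitrary example):

```python
import numpy as np

# Metadata is supplied at dtype construction time and is read-only afterwards
dt = np.dtype(np.float64, metadata={'units': 'km'})
units = dt.metadata['units']
plain = np.dtype(np.float64).metadata   # None when nothing was attached
```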
Methods
-------
diff --git a/doc/source/reference/arrays.indexing.rst b/doc/source/reference/arrays.indexing.rst
index 56b99f272..9f82875ea 100644
--- a/doc/source/reference/arrays.indexing.rst
+++ b/doc/source/reference/arrays.indexing.rst
@@ -34,7 +34,7 @@ Basic Slicing and Indexing
Basic slicing extends Python's basic concept of slicing to N
dimensions. Basic slicing occurs when *obj* is a :class:`slice` object
(constructed by ``start:stop:step`` notation inside of brackets), an
-integer, or a tuple of slice objects and integers. :const:`Ellipsis`
+integer, or a tuple of slice objects and integers. :py:data:`Ellipsis`
and :const:`newaxis` objects can be interspersed with these as
well.
@@ -43,7 +43,7 @@ well.
In order to remain backward compatible with a common usage in
Numeric, basic slicing is also initiated if the selection object is
any non-ndarray and non-tuple sequence (such as a :class:`list`) containing
- :class:`slice` objects, the :const:`Ellipsis` object, or the :const:`newaxis`
+ :class:`slice` objects, the :py:data:`Ellipsis` object, or the :const:`newaxis`
object, but not for integer arrays or other embedded sequences.
.. index::
@@ -129,7 +129,7 @@ concepts to remember include:
[5],
[6]]])
-- :const:`Ellipsis` expands to the number of ``:`` objects needed for the
+- :py:data:`Ellipsis` expands to the number of ``:`` objects needed for the
selection tuple to index all dimensions. In most cases, this means that
  the length of the expanded selection tuple is ``x.ndim``. There may only be a
single ellipsis present.
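A short sketch of the expansion on a hypothetical 3-D array:

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)
a = x[..., 0]      # Ellipsis expands to x[:, :, 0]
b = x[:, :, 0]     # the fully spelled-out equivalent
```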
@@ -198,6 +198,7 @@ concepts to remember include:
create an axis of length one. :const:`newaxis` is an alias for
'None', and 'None' can be used in place of this with the same result.
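For illustration, inserting a length-one axis with either spelling:

```python
import numpy as np

x = np.arange(3)
col = x[:, np.newaxis]   # new trailing axis of length one: shape (3, 1)
same = x[:, None]        # None is an alias with the same result
```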
+.. _advanced-indexing:
Advanced Indexing
-----------------
@@ -304,6 +305,8 @@ understood with an example.
most important thing to remember about indexing with multiple advanced
indexes.
+.. _combining-advanced-and-basic-indexing:
+
Combining advanced and basic indexing
"""""""""""""""""""""""""""""""""""""
@@ -330,7 +333,7 @@ the subspace defined by the basic indexing (excluding integers) and the
subspace from the advanced indexing part. Two cases of index combination
need to be distinguished:
-* The advanced indexes are separated by a slice, :const:`Ellipsis` or :const:`newaxis`.
+* The advanced indexes are separated by a slice, :py:data:`Ellipsis` or :const:`newaxis`.
For example ``x[arr1, :, arr2]``.
* The advanced indexes are all next to each other.
For example ``x[..., arr1, arr2, :]`` but *not* ``x[arr1, :, 1]``
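A sketch of how the two cases place the broadcast dimensions (array shapes here are arbitrary):

```python
import numpy as np

x = np.zeros((5, 6, 7))
ind = np.array([0, 1])
separated = x[ind, :, ind]   # advanced indexes split by a slice: the
                             # broadcast dims (2,) come first -> (2, 6)
adjacent = x[:, ind, ind]    # adjacent advanced indexes: the broadcast
                             # dims stay in place -> (5, 2)
```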
@@ -377,15 +380,15 @@ type, such as may be returned from comparison operators. A single
boolean index array is practically identical to ``x[obj.nonzero()]`` where,
as described above, :meth:`obj.nonzero() <ndarray.nonzero>` returns a
tuple (of length :attr:`obj.ndim <ndarray.ndim>`) of integer index
-arrays showing the :const:`True` elements of *obj*. However, it is
+arrays showing the :py:data:`True` elements of *obj*. However, it is
faster when ``obj.shape == x.shape``.
If ``obj.ndim == x.ndim``, ``x[obj]`` returns a 1-dimensional array
-filled with the elements of *x* corresponding to the :const:`True`
+filled with the elements of *x* corresponding to the :py:data:`True`
values of *obj*. The search order will be :term:`row-major`,
-C-style. If *obj* has :const:`True` values at entries that are outside
+C-style. If *obj* has :py:data:`True` values at entries that are outside
of the bounds of *x*, then an index error will be raised. If *obj* is
-smaller than *x* it is identical to filling it with :const:`False`.
+smaller than *x*, it is identical to filling it with :py:data:`False`.
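A minimal sketch of the equivalence described above:

```python
import numpy as np

x = np.arange(6).reshape(2, 3)
mask = x > 2                      # boolean array, mask.shape == x.shape
picked = x[mask]                  # 1-D result, row-major (C-style) order
via_nonzero = x[mask.nonzero()]   # practically identical, but slower
```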
.. admonition:: Example
@@ -450,7 +453,7 @@ also supports boolean arrays and will work without any surprises.
array([[ 3, 5],
[ 9, 11]])
- Without the ``np.ix_`` call or only the diagonal elements would be
+ Without the ``np.ix_`` call, only the diagonal elements would be
selected.
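As a quick sketch of the contrast on a hypothetical array:

```python
import numpy as np

x = np.arange(12).reshape(4, 3)
mesh = x[np.ix_([0, 2], [0, 2])]   # open mesh: outer product of the lists
diag = x[[0, 2], [0, 2]]           # elementwise pairing: diagonal only
```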
Or without ``np.ix_`` (compare the integer array examples):
diff --git a/doc/source/reference/arrays.interface.rst b/doc/source/reference/arrays.interface.rst
index 4e95535c0..6a8c5f9c4 100644
--- a/doc/source/reference/arrays.interface.rst
+++ b/doc/source/reference/arrays.interface.rst
@@ -49,9 +49,9 @@ Python side
===========
This approach to the interface consists of the object having an
-:data:`__array_interface__` attribute.
+:data:`~object.__array_interface__` attribute.
-.. data:: __array_interface__
+.. data:: object.__array_interface__
A dictionary of items (3 required and 5 optional). The optional
keys in the dictionary have implied defaults if they are not
@@ -60,17 +60,15 @@ This approach to the interface consists of the object having an
The keys are:
**shape** (required)
-
Tuple whose elements are the array size in each dimension. Each
- entry is an integer (a Python int or long). Note that these
- integers could be larger than the platform "int" or "long"
- could hold (a Python int is a C long). It is up to the code
+ entry is an integer (a Python :py:class:`int`). Note that these
+ integers could be larger than the platform ``int`` or ``long``
+ could hold (a Python :py:class:`int` is a C ``long``). It is up to the code
using this attribute to handle this appropriately; either by
raising an error when overflow is possible, or by using
- :c:data:`Py_LONG_LONG` as the C type for the shapes.
+ ``long long`` as the C type for the shapes.
**typestr** (required)
-
    A string providing the basic type of the homogeneous array. The
basic string format consists of 3 parts: a character describing
the byteorder of the data (``<``: little-endian, ``>``:
@@ -97,7 +95,6 @@ This approach to the interface consists of the object having an
===== ================================================================
**descr** (optional)
-
A list of tuples providing a more detailed description of the
memory layout for each item in the homogeneous array. Each
tuple in the list has two or three elements. Normally, this
@@ -127,7 +124,6 @@ This approach to the interface consists of the object having an
**Default**: ``[('', typestr)]``
**data** (optional)
-
A 2-tuple whose first argument is an integer (a long integer
if necessary) that points to the data-area storing the array
contents. This pointer must point to the first element of
@@ -136,7 +132,7 @@ This approach to the interface consists of the object having an
means the data area is read-only).
This attribute can also be an object exposing the
- :c:func:`buffer interface <PyObject_AsCharBuffer>` which
+ :ref:`buffer interface <bufferobjects>` which
will be used to share the data. If this key is not present (or
returns None), then memory sharing will be done
through the buffer interface of the object itself. In this
@@ -148,25 +144,23 @@ This approach to the interface consists of the object having an
**Default**: None
**strides** (optional)
-
- Either None to indicate a C-style contiguous array or
+ Either ``None`` to indicate a C-style contiguous array or
a Tuple of strides which provides the number of bytes needed
to jump to the next array element in the corresponding
dimension. Each entry must be an integer (a Python
- :const:`int` or :const:`long`). As with shape, the values may
- be larger than can be represented by a C "int" or "long"; the
+ :py:class:`int`). As with shape, the values may
+ be larger than can be represented by a C ``int`` or ``long``; the
calling code should handle this appropriately, either by
- raising an error, or by using :c:type:`Py_LONG_LONG` in C. The
- default is None which implies a C-style contiguous
- memory buffer. In this model, the last dimension of the array
+ raising an error, or by using ``long long`` in C. The
+ default is ``None`` which implies a C-style contiguous
+ memory buffer. In this model, the last dimension of the array
varies the fastest. For example, the default strides tuple
for an object whose array entries are 8 bytes long and whose
- shape is (10,20,30) would be (4800, 240, 8)
+ shape is ``(10, 20, 30)`` would be ``(4800, 240, 8)``
- **Default**: None (C-style contiguous)
+ **Default**: ``None`` (C-style contiguous)
**mask** (optional)
-
None or an object exposing the array interface. All
elements of the mask array should be interpreted only as true
or not true indicating which elements of this array are valid.
@@ -177,15 +171,13 @@ This approach to the interface consists of the object having an
**Default**: None (All array values are valid)
**offset** (optional)
-
An integer offset into the array data region. This can only be
- used when data is None or returns a :class:`buffer`
+ used when data is ``None`` or returns a :class:`buffer`
object.
**Default**: 0.
**version** (required)
-
An integer showing the version of the interface (i.e. 3 for
this version). Be careful not to use this to invalidate
objects exposing future versions of the interface.
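A minimal sketch of an object exposing the Python-side interface. The keys are copied from a backing ndarray so that the byte order and data pointer stay correct on any platform; the class and its attribute names are illustrative, and the backing array must be kept alive for as long as the pointer is in use:

```python
import numpy as np

class Exposer:
    """Hypothetical producer exposing a 1-D float buffer."""
    def __init__(self):
        self._backing = np.arange(6.0)
        iface = self._backing.__array_interface__
        self.__array_interface__ = {
            'shape': iface['shape'],        # required
            'typestr': iface['typestr'],    # required, e.g. '<f8'
            'data': iface['data'],          # (pointer, read-only flag)
            'version': 3,                   # required
        }

view = np.asarray(Exposer())   # NumPy consumes the interface dictionary
```

NumPy keeps a reference to the exposing object as the new array's base, which in this sketch is what keeps the backing buffer alive.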
@@ -197,11 +189,11 @@ C-struct access
This approach to the array interface allows for faster access to an
array using only one attribute lookup and a well-defined C-structure.
-.. c:var:: __array_struct__
+.. data:: object.__array_struct__
- A :c:type: `PyCObject` whose :c:data:`voidptr` member contains a
+ A :c:type:`PyCapsule` whose ``pointer`` member contains a
pointer to a filled :c:type:`PyArrayInterface` structure. Memory
- for the structure is dynamically created and the :c:type:`PyCObject`
+ for the structure is dynamically created and the :c:type:`PyCapsule`
is also created with an appropriate destructor so the retriever of
this attribute simply has to apply :c:func:`Py_DECREF()` to the
object returned by this attribute when it is finished. Also,
@@ -211,7 +203,7 @@ array using only one attribute lookup and a well-defined C-structure.
must also not reallocate their memory if other objects are
referencing them.
-The PyArrayInterface structure is defined in ``numpy/ndarrayobject.h``
+The :c:type:`PyArrayInterface` structure is defined in ``numpy/ndarrayobject.h``
as::
typedef struct {
@@ -231,29 +223,32 @@ as::
The flags member may consist of 5 bits showing how the data should be
interpreted and one bit showing how the Interface should be
-interpreted. The data-bits are :const:`CONTIGUOUS` (0x1),
-:const:`FORTRAN` (0x2), :const:`ALIGNED` (0x100), :const:`NOTSWAPPED`
-(0x200), and :const:`WRITEABLE` (0x400). A final flag
-:const:`ARR_HAS_DESCR` (0x800) indicates whether or not this structure
+interpreted. The data-bits are :c:macro:`NPY_ARRAY_C_CONTIGUOUS` (0x1),
+:c:macro:`NPY_ARRAY_F_CONTIGUOUS` (0x2), :c:macro:`NPY_ARRAY_ALIGNED` (0x100),
+:c:macro:`NPY_ARRAY_NOTSWAPPED` (0x200), and :c:macro:`NPY_ARRAY_WRITEABLE` (0x400). A final flag
+:c:macro:`NPY_ARR_HAS_DESCR` (0x800) indicates whether or not this structure
has the arrdescr field. The field should not be accessed unless this
flag is present.
+ .. c:macro:: NPY_ARR_HAS_DESCR
+
.. admonition:: New since June 16, 2006:
- In the past most implementations used the "desc" member of the
- :c:type:`PyCObject` itself (do not confuse this with the "descr" member of
+ In the past most implementations used the ``desc`` member of the ``PyCObject``
+ (now :c:type:`PyCapsule`) itself (do not confuse this with the "descr" member of
the :c:type:`PyArrayInterface` structure above --- they are two separate
things) to hold the pointer to the object exposing the interface.
- This is now an explicit part of the interface. Be sure to own a
- reference to the object when the :c:type:`PyCObject` is created using
- :c:type:`PyCObject_FromVoidPtrAndDesc`.
+ This is now an explicit part of the interface. Be sure to take a
+ reference to the object and call :c:func:`PyCapsule_SetContext` before
+ returning the :c:type:`PyCapsule`, and configure a destructor to decref this
+ reference.
Type description examples
=========================
For clarity it is useful to provide some examples of the type
-description and corresponding :data:`__array_interface__` 'descr'
+description and corresponding :data:`~object.__array_interface__` 'descr'
entries. Thanks to Scott Gilbert for these examples:
In every case, the 'descr' key is optional, but of course provides
@@ -315,25 +310,39 @@ largely aesthetic. In particular:
1. The PyArrayInterface structure had no descr member at the end
(and therefore no flag ARR_HAS_DESCR)
-2. The desc member of the PyCObject returned from __array_struct__ was
+2. The ``context`` member of the :c:type:`PyCapsule` (formerly the ``desc``
+ member of the ``PyCObject``) returned from ``__array_struct__`` was
not specified. Usually, it was the object exposing the array (so
that a reference to it could be kept and destroyed when the
- C-object was destroyed). Now it must be a tuple whose first
- element is a string with "PyArrayInterface Version #" and whose
- second element is the object exposing the array.
+ C-object was destroyed). It is now an explicit requirement that this field
+ be used in some way to hold a reference to the owning object.
+
+ .. note::
+
+ Until August 2020, this said:
+
+ Now it must be a tuple whose first element is a string with
+ "PyArrayInterface Version #" and whose second element is the object
+ exposing the array.
+
+ This design was retracted almost immediately after it was proposed, in
+ <https://mail.python.org/pipermail/numpy-discussion/2006-June/020995.html>.
+ Despite 14 years of documentation to the contrary, at no point was it
+      valid to assume that ``__array_struct__`` capsules held this tuple
+ content.
-3. The tuple returned from __array_interface__['data'] used to be a
+3. The tuple returned from ``__array_interface__['data']`` used to be a
hex-string (now it is an integer or a long integer).
-4. There was no __array_interface__ attribute instead all of the keys
- (except for version) in the __array_interface__ dictionary were
+4. There was no ``__array_interface__`` attribute; instead, all of the keys
+ (except for version) in the ``__array_interface__`` dictionary were
   their own attribute. Thus, to obtain the Python-side information you
had to access separately the attributes:
- * __array_data__
- * __array_shape__
- * __array_strides__
- * __array_typestr__
- * __array_descr__
- * __array_offset__
- * __array_mask__
+ * ``__array_data__``
+ * ``__array_shape__``
+ * ``__array_strides__``
+ * ``__array_typestr__``
+ * ``__array_descr__``
+ * ``__array_offset__``
+ * ``__array_mask__``
diff --git a/doc/source/reference/arrays.ndarray.rst b/doc/source/reference/arrays.ndarray.rst
index 689240c7d..191367058 100644
--- a/doc/source/reference/arrays.ndarray.rst
+++ b/doc/source/reference/arrays.ndarray.rst
@@ -1,11 +1,11 @@
+.. currentmodule:: numpy
+
.. _arrays.ndarray:
******************************************
The N-dimensional array (:class:`ndarray`)
******************************************
-.. currentmodule:: numpy
-
An :class:`ndarray` is a (usually fixed-size) multidimensional
container of items of the same type and size. The number of dimensions
and items in an array is defined by its :attr:`shape <ndarray.shape>`,
@@ -259,10 +259,10 @@ Array interface
.. seealso:: :ref:`arrays.interface`.
-========================== ===================================
-:obj:`__array_interface__` Python-side of the array interface
-:obj:`__array_struct__` C-side of the array interface
-========================== ===================================
+================================== ===================================
+:obj:`~object.__array_interface__` Python-side of the array interface
+:obj:`~object.__array_struct__` C-side of the array interface
+================================== ===================================
:mod:`ctypes` foreign function interface
----------------------------------------
@@ -469,7 +469,7 @@ Comparison operators:
ndarray.__eq__
ndarray.__ne__
-Truth value of an array (:func:`bool()`):
+Truth value of an array (:class:`bool() <bool>`):
.. autosummary::
:toctree: generated/
@@ -604,9 +604,9 @@ Container customization: (see :ref:`Indexing <arrays.indexing>`)
ndarray.__setitem__
ndarray.__contains__
-Conversion; the operations :func:`int()`, :func:`float()` and
-:func:`complex()`.
-. They work only on arrays that have one element in them
+Conversion; the operations :class:`int() <int>`,
+:class:`float() <float>` and :class:`complex() <complex>`.
+They work only on arrays that have exactly one element
and return the appropriate scalar.
.. autosummary::
diff --git a/doc/source/reference/arrays.nditer.cython.rst b/doc/source/reference/arrays.nditer.cython.rst
index 2cc7763ed..43aad9927 100644
--- a/doc/source/reference/arrays.nditer.cython.rst
+++ b/doc/source/reference/arrays.nditer.cython.rst
@@ -5,7 +5,7 @@ Those who want really good performance out of their low level operations
should strongly consider directly using the iteration API provided
in C, but for those who are not comfortable with C or C++, Cython
is a good middle ground with reasonable performance tradeoffs. For
-the :class:`nditer` object, this means letting the iterator take care
+the :class:`~numpy.nditer` object, this means letting the iterator take care
of broadcasting, dtype conversion, and buffering, while giving the inner
loop to Cython.
diff --git a/doc/source/reference/arrays.scalars.rst b/doc/source/reference/arrays.scalars.rst
index d27d61e2c..4b5da2e13 100644
--- a/doc/source/reference/arrays.scalars.rst
+++ b/doc/source/reference/arrays.scalars.rst
@@ -24,14 +24,14 @@ mixing scalar and array operations.
Array scalars live in a hierarchy (see the Figure below) of data
types. They can be detected using the hierarchy: For example,
-``isinstance(val, np.generic)`` will return :const:`True` if *val* is
+``isinstance(val, np.generic)`` will return :py:data:`True` if *val* is
an array scalar object. Alternatively, what kind of array scalar is
present can be determined using other members of the data type
hierarchy. Thus, for example ``isinstance(val, np.complexfloating)``
-will return :const:`True` if *val* is a complex valued type, while
-:const:`isinstance(val, np.flexible)` will return true if *val* is one
-of the flexible itemsize array types (:class:`string`,
-:class:`unicode`, :class:`void`).
+will return :py:data:`True` if *val* is a complex valued type, while
+``isinstance(val, np.flexible)`` will return true if *val* is one
+of the flexible itemsize array types (:class:`str_`,
+:class:`bytes_`, :class:`void`).
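A brief sketch of detecting scalar kinds through the hierarchy:

```python
import numpy as np

val = np.float64(1.0)
is_scalar = isinstance(val, np.generic)           # any array scalar
is_float = isinstance(val, np.floating)           # real floating point
is_complex = isinstance(val, np.complexfloating)  # not a complex type
is_flexible = isinstance(np.str_('x'), np.flexible)  # str_ is flexible
```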
.. figure:: figures/dtype-hierarchy.png
@@ -41,6 +41,13 @@ of the flexible itemsize array types (:class:`string`,
pointer for the platform. All the number types can be obtained
using bit-width names as well.
+
+.. TODO - use something like this instead of the diagram above, as it generates
+ links to the classes and is a vector graphic. Unfortunately it looks worse
+ and the html <map> element providing the linked regions is misaligned.
+
+ .. inheritance-diagram:: byte short intc int_ longlong ubyte ushort uintc uint ulonglong half single double longdouble csingle cdouble clongdouble bool_ datetime64 timedelta64 object_ bytes_ str_ void
+
.. [#] However, array scalars are immutable, so none of the array
scalar attributes are settable.
@@ -51,129 +58,148 @@ of the flexible itemsize array types (:class:`string`,
Built-in scalar types
=====================
-The built-in scalar types are shown below. Along with their (mostly)
-C-derived names, the integer, float, and complex data-types are also
-available using a bit-width convention so that an array of the right
-size can always be ensured (e.g. :class:`int8`, :class:`float64`,
-:class:`complex128`). Two aliases (:class:`intp` and :class:`uintp`)
-pointing to the integer type that is sufficiently large to hold a C pointer
-are also provided. The C-like names are associated with character codes,
-which are shown in the table. Use of the character codes, however,
+The built-in scalar types are shown below. The C-like names are associated with character codes,
+which are shown in their descriptions. Use of the character codes, however,
is discouraged.
Some of the scalar types are essentially equivalent to fundamental
Python types and therefore inherit from them as well as from the
generic array scalar type:
-==================== ================================
-Array scalar type Related Python type
-==================== ================================
-:class:`int_` :class:`IntType` (Python 2 only)
-:class:`float_` :class:`FloatType`
-:class:`complex_` :class:`ComplexType`
-:class:`bytes_` :class:`BytesType`
-:class:`unicode_` :class:`UnicodeType`
-==================== ================================
+==================== =========================== =============
+Array scalar type Related Python type Inherits?
+==================== =========================== =============
+:class:`int_` :class:`int` Python 2 only
+:class:`float_` :class:`float` yes
+:class:`complex_` :class:`complex` yes
+:class:`bytes_` :class:`bytes` yes
+:class:`str_` :class:`str` yes
+:class:`bool_` :class:`bool` no
+:class:`datetime64` :class:`datetime.datetime` no
+:class:`timedelta64` :class:`datetime.timedelta` no
+==================== =========================== =============
The :class:`bool_` data type is very similar to the Python
-:class:`BooleanType` but does not inherit from it because Python's
-:class:`BooleanType` does not allow itself to be inherited from, and
+:class:`bool` but does not inherit from it because Python's
+:class:`bool` does not allow itself to be inherited from, and
on the C-level the size of the actual bool data is not the same as a
Python Boolean scalar.
.. warning::
- The :class:`bool_` type is not a subclass of the :class:`int_` type
- (the :class:`bool_` is not even a number type). This is different
- than Python's default implementation of :class:`bool` as a
- sub-class of int.
-
-.. warning::
-
The :class:`int_` type does **not** inherit from the
:class:`int` built-in under Python 3, because type :class:`int` is no
longer a fixed-width integer type.
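The inheritance relationships above can be checked directly; a quick sketch:

```python
import numpy as np

f = isinstance(np.float64(1.0), float)  # True: float64 inherits from float
b = isinstance(np.bool_(True), bool)    # False: bool cannot be subclassed
i = isinstance(np.int_(1), int)         # False under Python 3
```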
.. tip:: The default data type in NumPy is :class:`float_`.
-In the tables below, ``platform?`` means that the type may not be
-available on all platforms. Compatibility with different C or Python
-types is indicated: two types are compatible if their data is of the
-same size and interpreted in the same way.
-
-Booleans:
-
-=================== ============================= ===============
-Type Remarks Character code
-=================== ============================= ===============
-:class:`bool_` compatible: Python bool ``'?'``
-:class:`bool8` 8 bits
-=================== ============================= ===============
-
-Integers:
-
-=================== ============================= ===============
-:class:`byte` compatible: C char ``'b'``
-:class:`short` compatible: C short ``'h'``
-:class:`intc` compatible: C int ``'i'``
-:class:`int_` compatible: Python int ``'l'``
-:class:`longlong` compatible: C long long ``'q'``
-:class:`intp` large enough to fit a pointer ``'p'``
-:class:`int8` 8 bits
-:class:`int16` 16 bits
-:class:`int32` 32 bits
-:class:`int64` 64 bits
-=================== ============================= ===============
-
-Unsigned integers:
-
-=================== ============================= ===============
-:class:`ubyte` compatible: C unsigned char ``'B'``
-:class:`ushort` compatible: C unsigned short ``'H'``
-:class:`uintc` compatible: C unsigned int ``'I'``
-:class:`uint` compatible: Python int ``'L'``
-:class:`ulonglong` compatible: C long long ``'Q'``
-:class:`uintp` large enough to fit a pointer ``'P'``
-:class:`uint8` 8 bits
-:class:`uint16` 16 bits
-:class:`uint32` 32 bits
-:class:`uint64` 64 bits
-=================== ============================= ===============
-
-Floating-point numbers:
-
-=================== ============================= ===============
-:class:`half` ``'e'``
-:class:`single` compatible: C float ``'f'``
-:class:`double` compatible: C double
-:class:`float_` compatible: Python float ``'d'``
-:class:`longfloat` compatible: C long float ``'g'``
-:class:`float16` 16 bits
-:class:`float32` 32 bits
-:class:`float64` 64 bits
-:class:`float96` 96 bits, platform?
-:class:`float128` 128 bits, platform?
-=================== ============================= ===============
-
-Complex floating-point numbers:
-
-=================== ============================= ===============
-:class:`csingle` ``'F'``
-:class:`complex_` compatible: Python complex ``'D'``
-:class:`clongfloat` ``'G'``
-:class:`complex64` two 32-bit floats
-:class:`complex128` two 64-bit floats
-:class:`complex192` two 96-bit floats,
- platform?
-:class:`complex256` two 128-bit floats,
- platform?
-=================== ============================= ===============
-
-Any Python object:
-
-=================== ============================= ===============
-:class:`object_` any Python object ``'O'``
-=================== ============================= ===============
+.. autoclass:: numpy.generic
+ :exclude-members:
+
+.. autoclass:: numpy.number
+ :exclude-members:
+
+Integer types
+~~~~~~~~~~~~~
+
+.. autoclass:: numpy.integer
+ :exclude-members:
+
+Signed integer types
+++++++++++++++++++++
+
+.. autoclass:: numpy.signedinteger
+ :exclude-members:
+
+.. autoclass:: numpy.byte
+ :exclude-members:
+
+.. autoclass:: numpy.short
+ :exclude-members:
+
+.. autoclass:: numpy.intc
+ :exclude-members:
+
+.. autoclass:: numpy.int_
+ :exclude-members:
+
+.. autoclass:: numpy.longlong
+ :exclude-members:
+
+Unsigned integer types
+++++++++++++++++++++++
+
+.. autoclass:: numpy.unsignedinteger
+ :exclude-members:
+
+.. autoclass:: numpy.ubyte
+ :exclude-members:
+
+.. autoclass:: numpy.ushort
+ :exclude-members:
+
+.. autoclass:: numpy.uintc
+ :exclude-members:
+
+.. autoclass:: numpy.uint
+ :exclude-members:
+
+.. autoclass:: numpy.ulonglong
+ :exclude-members:
+
+Inexact types
+~~~~~~~~~~~~~
+
+.. autoclass:: numpy.inexact
+ :exclude-members:
+
+Floating-point types
+++++++++++++++++++++
+
+.. autoclass:: numpy.floating
+ :exclude-members:
+
+.. autoclass:: numpy.half
+ :exclude-members:
+
+.. autoclass:: numpy.single
+ :exclude-members:
+
+.. autoclass:: numpy.double
+ :exclude-members:
+
+.. autoclass:: numpy.longdouble
+ :exclude-members:
+
+Complex floating-point types
+++++++++++++++++++++++++++++
+
+.. autoclass:: numpy.complexfloating
+ :exclude-members:
+
+.. autoclass:: numpy.csingle
+ :exclude-members:
+
+.. autoclass:: numpy.cdouble
+ :exclude-members:
+
+.. autoclass:: numpy.clongdouble
+ :exclude-members:
+
+Other types
+~~~~~~~~~~~
+
+.. autoclass:: numpy.bool_
+ :exclude-members:
+
+.. autoclass:: numpy.datetime64
+ :exclude-members:
+
+.. autoclass:: numpy.timedelta64
+ :exclude-members:
+
+.. autoclass:: numpy.object_
+ :exclude-members:
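As a brief usage sketch of two of the types above (relying only on the documented constructors): :class:`datetime64` and :class:`timedelta64` scalars combine under ordinary date arithmetic:

```python
import numpy as np

# adding a one-day timedelta64 to a datetime64 yields the next day
d = np.datetime64('2020-01-01') + np.timedelta64(1, 'D')
assert str(d) == '2020-01-02'
assert isinstance(d, np.datetime64)
```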
.. note::
@@ -195,11 +221,17 @@ size and the data they describe can be of different length in different
arrays. (In the character codes ``#`` is an integer denoting how many
elements the data type consists of.)
-=================== ============================== ========
-:class:`bytes_` compatible: Python bytes ``'S#'``
-:class:`unicode_` compatible: Python unicode/str ``'U#'``
-:class:`void` ``'V#'``
-=================== ============================== ========
+.. autoclass:: numpy.flexible
+ :exclude-members:
+
+.. autoclass:: numpy.bytes_
+ :exclude-members:
+
+.. autoclass:: numpy.str_
+ :exclude-members:
+
+.. autoclass:: numpy.void
+ :exclude-members:
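The role of ``#`` in the character codes above can be sketched with the dtype constructor: for :class:`bytes_` it counts bytes, while for :class:`str_` it counts UCS-4 code points of four bytes each:

```python
import numpy as np

# 'S5' stores 5 bytes per element; 'U5' stores 5 UCS-4 code points
assert np.dtype('S5').itemsize == 5
assert np.dtype('U5').itemsize == 20
```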
.. warning::
@@ -214,12 +246,123 @@ elements the data type consists of.)
convention more consistent with other Python modules such as the
:mod:`struct` module.
+Sized aliases
+~~~~~~~~~~~~~
+
+Along with their (mostly)
+C-derived names, the integer, float, and complex data-types are also
+available using a bit-width convention so that an array of the right
+size can always be ensured. Two aliases (:class:`numpy.intp` and :class:`numpy.uintp`)
+pointing to the integer type that is sufficiently large to hold a C pointer
+are also provided.
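A minimal sketch of the bit-width convention (the itemsize values follow directly from the names):

```python
import numpy as np

# each sized alias guarantees its element size in bytes, regardless of
# which C-named type it maps to on the current platform
assert np.dtype(np.int32).itemsize == 4
assert np.dtype(np.uint64).itemsize == 8
assert np.dtype(np.float64).itemsize == 8
```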
+
+.. note that these are documented with ``.. attribute`` because that is what
+ autoclass does for aliases under the hood.
+
+.. autoclass:: numpy.bool8
+
+.. attribute:: int8
+ int16
+ int32
+ int64
+
+ Aliases for the signed integer types (one of `numpy.byte`, `numpy.short`,
+ `numpy.intc`, `numpy.int_` and `numpy.longlong`) with the specified number
+ of bits.
+
+ Compatible with the C99 ``int8_t``, ``int16_t``, ``int32_t``, and
+ ``int64_t``, respectively.
+
+.. attribute:: uint8
+ uint16
+ uint32
+ uint64
+
+ Aliases for the unsigned integer types (one of `numpy.ubyte`, `numpy.ushort`,
+ `numpy.uintc`, `numpy.uint` and `numpy.ulonglong`) with the specified number
+ of bits.
+
+ Compatible with the C99 ``uint8_t``, ``uint16_t``, ``uint32_t``, and
+ ``uint64_t``, respectively.
+
+.. attribute:: intp
+
+ Alias for the signed integer type (one of `numpy.byte`, `numpy.short`,
+ `numpy.intc`, `numpy.int_` and `numpy.longlong`) that is the same size as a
+ pointer.
+
+ Compatible with the C ``intptr_t``.
+
+ :Character code: ``'p'``
+
+.. attribute:: uintp
+
+ Alias for the unsigned integer type (one of `numpy.ubyte`, `numpy.ushort`,
+ `numpy.uintc`, `numpy.uint` and `numpy.ulonglong`) that is the same size as a
+ pointer.
+
+ Compatible with the C ``uintptr_t``.
+
+ :Character code: ``'P'``
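On common platforms this can be checked against :mod:`ctypes` (a sketch; it assumes a platform where ``intp`` has its usual pointer-sized layout):

```python
import ctypes
import numpy as np

# intp and uintp are sized to hold a pointer
assert np.dtype(np.intp).itemsize == ctypes.sizeof(ctypes.c_void_p)
assert np.dtype(np.uintp).itemsize == np.dtype(np.intp).itemsize
```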
+
+.. autoclass:: numpy.float16
+
+.. autoclass:: numpy.float32
+
+.. autoclass:: numpy.float64
+
+.. attribute:: float96
+ float128
+
+ Alias for `numpy.longdouble`, named after its size in bits.
+ The existence of these aliases depends on the platform.
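Because these aliases are platform-dependent, portable code should guard for them; a hedged sketch:

```python
import numpy as np

# float96/float128 exist only where long double has that storage size;
# where present, each is simply another name for np.longdouble
for name in ("float96", "float128"):
    if hasattr(np, name):
        assert getattr(np, name) is np.longdouble
```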
+
+.. autoclass:: numpy.complex64
+
+.. autoclass:: numpy.complex128
+
+.. attribute:: complex192
+ complex256
+
+ Alias for `numpy.clongdouble`, named after its size in bits.
+ The existence of these aliases depends on the platform.
+
+Other aliases
+~~~~~~~~~~~~~
+
+The first two of these are conveniences which resemble the names of the
+builtin types, in the same style as `bool_`, `int_`, `str_`, `bytes_`, and
+`object_`:
+
+.. autoclass:: numpy.float_
+
+.. autoclass:: numpy.complex_
+
+A few more aliases follow alternative naming conventions for extended-precision
+floats and complex numbers:
+
+.. autoclass:: numpy.longfloat
+
+.. autoclass:: numpy.singlecomplex
+
+.. autoclass:: numpy.cfloat
+
+.. autoclass:: numpy.longcomplex
+
+.. autoclass:: numpy.clongfloat
+
+The following aliases originate from Python 2, and it is recommended that they
+not be used in new code.
+
+.. autoclass:: numpy.string_
+
+.. autoclass:: numpy.unicode_
Attributes
==========
The array scalar objects have an :obj:`array priority
-<__array_priority__>` of :c:data:`NPY_SCALAR_PRIORITY`
+<class.__array_priority__>` of :c:data:`NPY_SCALAR_PRIORITY`
(-1,000,000.0). They also do not (yet) have a :attr:`ctypes <ndarray.ctypes>`
attribute. Otherwise, they share the same attributes as arrays:
@@ -273,7 +416,6 @@ The exceptions to the above rules are given below:
.. autosummary::
:toctree: generated/
- generic
generic.__array__
generic.__array_wrap__
generic.squeeze
diff --git a/doc/source/reference/c-api/array.rst b/doc/source/reference/c-api/array.rst
index 10c1704c2..3aa541b79 100644
--- a/doc/source/reference/c-api/array.rst
+++ b/doc/source/reference/c-api/array.rst
@@ -24,7 +24,7 @@ These macros access the :c:type:`PyArrayObject` structure members and are
defined in ``ndarraytypes.h``. The input argument, *arr*, can be any
:c:type:`PyObject *<PyObject>` that is directly interpretable as a
:c:type:`PyArrayObject *` (any instance of the :c:data:`PyArray_Type`
-and itssub-types).
+and its sub-types).
.. c:function:: int PyArray_NDIM(PyArrayObject *arr)
@@ -326,7 +326,7 @@ From scratch
Create a new array with the provided data-type descriptor, *descr*,
of the shape determined by *nd* and *dims*.
-.. c:function:: PyArray_FILLWBYTE(PyObject* obj, int val)
+.. c:function:: void PyArray_FILLWBYTE(PyObject* obj, int val)
Fill the array pointed to by *obj* ---which must be a (subclass
of) ndarray---with the contents of *val* (evaluated as a byte).
@@ -428,44 +428,44 @@ From other objects
have :c:data:`NPY_ARRAY_DEFAULT` as its flags member. The *context*
argument is unused.
- .. c:var:: NPY_ARRAY_C_CONTIGUOUS
+ .. c:macro:: NPY_ARRAY_C_CONTIGUOUS
Make sure the returned array is C-style contiguous
- .. c:var:: NPY_ARRAY_F_CONTIGUOUS
+ .. c:macro:: NPY_ARRAY_F_CONTIGUOUS
Make sure the returned array is Fortran-style contiguous.
- .. c:var:: NPY_ARRAY_ALIGNED
+ .. c:macro:: NPY_ARRAY_ALIGNED
Make sure the returned array is aligned on proper boundaries for its
data type. An aligned array has the data pointer and every strides
factor as a multiple of the alignment factor for the data-type-
descriptor.
- .. c:var:: NPY_ARRAY_WRITEABLE
+ .. c:macro:: NPY_ARRAY_WRITEABLE
Make sure the returned array can be written to.
- .. c:var:: NPY_ARRAY_ENSURECOPY
+ .. c:macro:: NPY_ARRAY_ENSURECOPY
Make sure a copy is made of *op*. If this flag is not
present, data is not copied if it can be avoided.
- .. c:var:: NPY_ARRAY_ENSUREARRAY
+ .. c:macro:: NPY_ARRAY_ENSUREARRAY
Make sure the result is a base-class ndarray. By
default, if *op* is an instance of a subclass of
ndarray, an instance of that same subclass is returned. If
this flag is set, an ndarray object will be returned instead.
- .. c:var:: NPY_ARRAY_FORCECAST
+ .. c:macro:: NPY_ARRAY_FORCECAST
Force a cast to the output type even if it cannot be done
safely. Without this flag, a data cast will occur only if it
can be done safely, otherwise an error is raised.
- .. c:var:: NPY_ARRAY_WRITEBACKIFCOPY
+ .. c:macro:: NPY_ARRAY_WRITEBACKIFCOPY
If *op* is already an array, but does not satisfy the
requirements, then a copy is made (which will satisfy the
@@ -478,67 +478,67 @@ From other objects
will be made writeable again. If *op* is not writeable to begin
with, or if it is not already an array, then an error is raised.
- .. c:var:: NPY_ARRAY_UPDATEIFCOPY
+ .. c:macro:: NPY_ARRAY_UPDATEIFCOPY
Deprecated. Use :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`, which is similar.
This flag "automatically" copies the data back when the returned
array is deallocated, which is not supported in all python
implementations.
- .. c:var:: NPY_ARRAY_BEHAVED
+ .. c:macro:: NPY_ARRAY_BEHAVED
:c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEABLE`
- .. c:var:: NPY_ARRAY_CARRAY
+ .. c:macro:: NPY_ARRAY_CARRAY
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_BEHAVED`
- .. c:var:: NPY_ARRAY_CARRAY_RO
+ .. c:macro:: NPY_ARRAY_CARRAY_RO
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
- .. c:var:: NPY_ARRAY_FARRAY
+ .. c:macro:: NPY_ARRAY_FARRAY
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_BEHAVED`
- .. c:var:: NPY_ARRAY_FARRAY_RO
+ .. c:macro:: NPY_ARRAY_FARRAY_RO
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
- .. c:var:: NPY_ARRAY_DEFAULT
+ .. c:macro:: NPY_ARRAY_DEFAULT
:c:data:`NPY_ARRAY_CARRAY`
- .. c:var:: NPY_ARRAY_IN_ARRAY
+ .. c:macro:: NPY_ARRAY_IN_ARRAY
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
- .. c:var:: NPY_ARRAY_IN_FARRAY
+ .. c:macro:: NPY_ARRAY_IN_FARRAY
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
- .. c:var:: NPY_OUT_ARRAY
+ .. c:macro:: NPY_OUT_ARRAY
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
:c:data:`NPY_ARRAY_ALIGNED`
- .. c:var:: NPY_ARRAY_OUT_ARRAY
+ .. c:macro:: NPY_ARRAY_OUT_ARRAY
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED` \|
:c:data:`NPY_ARRAY_WRITEABLE`
- .. c:var:: NPY_ARRAY_OUT_FARRAY
+ .. c:macro:: NPY_ARRAY_OUT_FARRAY
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
:c:data:`NPY_ARRAY_ALIGNED`
- .. c:var:: NPY_ARRAY_INOUT_ARRAY
+ .. c:macro:: NPY_ARRAY_INOUT_ARRAY
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
:c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` \|
:c:data:`NPY_ARRAY_UPDATEIFCOPY`
- .. c:var:: NPY_ARRAY_INOUT_FARRAY
+ .. c:macro:: NPY_ARRAY_INOUT_FARRAY
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
:c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` \|
@@ -574,7 +574,7 @@ From other objects
did not have the _ARRAY_ macro namespace in them. That form
of the constant names is deprecated in 1.7.
-.. c:var:: NPY_ARRAY_NOTSWAPPED
+.. c:macro:: NPY_ARRAY_NOTSWAPPED
Make sure the returned array has a data-type descriptor that is in
machine byte-order, over-riding any specification in the *dtype*
@@ -585,11 +585,11 @@ From other objects
not in machine byte- order), then a new data-type descriptor is
created and used with its byte-order field set to native.
-.. c:var:: NPY_ARRAY_BEHAVED_NS
+.. c:macro:: NPY_ARRAY_BEHAVED_NS
:c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEABLE` \| :c:data:`NPY_ARRAY_NOTSWAPPED`
-.. c:var:: NPY_ARRAY_ELEMENTSTRIDES
+.. c:macro:: NPY_ARRAY_ELEMENTSTRIDES
Make sure the returned array has strides that are multiples of the
element size.
@@ -604,14 +604,14 @@ From other objects
.. c:function:: PyObject* PyArray_FromStructInterface(PyObject* op)
Returns an ndarray object from a Python object that exposes the
- :obj:`__array_struct__` attribute and follows the array interface
+ :obj:`~object.__array_struct__` attribute and follows the array interface
protocol. If the object does not contain this attribute then a
borrowed reference to :c:data:`Py_NotImplemented` is returned.
.. c:function:: PyObject* PyArray_FromInterface(PyObject* op)
Returns an ndarray object from a Python object that exposes the
- :obj:`__array_interface__` attribute following the array interface
+ :obj:`~object.__array_interface__` attribute following the array interface
protocol. If the object does not contain this attribute then a
borrowed reference to :c:data:`Py_NotImplemented` is returned.
@@ -790,17 +790,17 @@ Dealing with types
General check of Python Type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-.. c:function:: PyArray_Check(PyObject *op)
+.. c:function:: int PyArray_Check(PyObject *op)
Evaluates true if *op* is a Python object whose type is a sub-type
of :c:data:`PyArray_Type`.
-.. c:function:: PyArray_CheckExact(PyObject *op)
+.. c:function:: int PyArray_CheckExact(PyObject *op)
Evaluates true if *op* is a Python object with type
:c:data:`PyArray_Type`.
-.. c:function:: PyArray_HasArrayInterface(PyObject *op, PyObject *out)
+.. c:function:: int PyArray_HasArrayInterface(PyObject *op, PyObject *out)
If ``op`` implements any part of the array interface, then ``out``
will contain a new reference to the newly created ndarray using
@@ -808,7 +808,8 @@ General check of Python Type
conversion occurs. Otherwise, out will contain a borrowed
reference to :c:data:`Py_NotImplemented` and no error condition is set.
-.. c:function:: PyArray_HasArrayInterfaceType(op, dtype, context, out)
+.. c:function:: int PyArray_HasArrayInterfaceType(\
+ PyObject *op, PyArray_Descr *dtype, PyObject *context, PyObject *out)
If ``op`` implements any part of the array interface, then ``out``
will contain a new reference to the newly created ndarray using
@@ -819,38 +820,38 @@ General check of Python Type
that looks for the :obj:`~numpy.class.__array__` attribute. `context` is
unused.
-.. c:function:: PyArray_IsZeroDim(op)
+.. c:function:: int PyArray_IsZeroDim(PyObject *op)
Evaluates true if *op* is an instance of (a subclass of)
:c:data:`PyArray_Type` and has 0 dimensions.
.. c:function:: PyArray_IsScalar(op, cls)
- Evaluates true if *op* is an instance of :c:data:`Py{cls}ArrType_Type`.
+ Evaluates true if *op* is an instance of ``Py{cls}ArrType_Type``.
-.. c:function:: PyArray_CheckScalar(op)
+.. c:function:: int PyArray_CheckScalar(PyObject *op)
Evaluates true if *op* is either an array scalar (an instance of a
sub-type of :c:data:`PyGenericArr_Type` ), or an instance of (a
sub-class of) :c:data:`PyArray_Type` whose dimensionality is 0.
-.. c:function:: PyArray_IsPythonNumber(op)
+.. c:function:: int PyArray_IsPythonNumber(PyObject *op)
Evaluates true if *op* is an instance of a builtin numeric type (int,
float, complex, long, bool)
-.. c:function:: PyArray_IsPythonScalar(op)
+.. c:function:: int PyArray_IsPythonScalar(PyObject *op)
Evaluates true if *op* is a builtin Python scalar object (int,
float, complex, bytes, str, long, bool).
-.. c:function:: PyArray_IsAnyScalar(op)
+.. c:function:: int PyArray_IsAnyScalar(PyObject *op)
Evaluates true if *op* is either a Python scalar object (see
:c:func:`PyArray_IsPythonScalar`) or an array scalar (an instance of a sub-
type of :c:data:`PyGenericArr_Type` ).
-.. c:function:: PyArray_CheckAnyScalar(op)
+.. c:function:: int PyArray_CheckAnyScalar(PyObject *op)
Evaluates true if *op* is a Python scalar object (see
:c:func:`PyArray_IsPythonScalar`), an array scalar (an instance of a
@@ -866,82 +867,82 @@ enumerated array data type. For the array type checking macros the
argument must be a :c:type:`PyObject *<PyObject>` that can be directly interpreted as a
:c:type:`PyArrayObject *`.
-.. c:function:: PyTypeNum_ISUNSIGNED(int num)
+.. c:function:: int PyTypeNum_ISUNSIGNED(int num)
-.. c:function:: PyDataType_ISUNSIGNED(PyArray_Descr *descr)
+.. c:function:: int PyDataType_ISUNSIGNED(PyArray_Descr *descr)
-.. c:function:: PyArray_ISUNSIGNED(PyArrayObject *obj)
+.. c:function:: int PyArray_ISUNSIGNED(PyArrayObject *obj)
Type represents an unsigned integer.
-.. c:function:: PyTypeNum_ISSIGNED(int num)
+.. c:function:: int PyTypeNum_ISSIGNED(int num)
-.. c:function:: PyDataType_ISSIGNED(PyArray_Descr *descr)
+.. c:function:: int PyDataType_ISSIGNED(PyArray_Descr *descr)
-.. c:function:: PyArray_ISSIGNED(PyArrayObject *obj)
+.. c:function:: int PyArray_ISSIGNED(PyArrayObject *obj)
Type represents a signed integer.
-.. c:function:: PyTypeNum_ISINTEGER(int num)
+.. c:function:: int PyTypeNum_ISINTEGER(int num)
-.. c:function:: PyDataType_ISINTEGER(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISINTEGER(PyArray_Descr* descr)
-.. c:function:: PyArray_ISINTEGER(PyArrayObject *obj)
+.. c:function:: int PyArray_ISINTEGER(PyArrayObject *obj)
Type represents any integer.
-.. c:function:: PyTypeNum_ISFLOAT(int num)
+.. c:function:: int PyTypeNum_ISFLOAT(int num)
-.. c:function:: PyDataType_ISFLOAT(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISFLOAT(PyArray_Descr* descr)
-.. c:function:: PyArray_ISFLOAT(PyArrayObject *obj)
+.. c:function:: int PyArray_ISFLOAT(PyArrayObject *obj)
Type represents any floating point number.
-.. c:function:: PyTypeNum_ISCOMPLEX(int num)
+.. c:function:: int PyTypeNum_ISCOMPLEX(int num)
-.. c:function:: PyDataType_ISCOMPLEX(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISCOMPLEX(PyArray_Descr* descr)
-.. c:function:: PyArray_ISCOMPLEX(PyArrayObject *obj)
+.. c:function:: int PyArray_ISCOMPLEX(PyArrayObject *obj)
Type represents any complex floating point number.
-.. c:function:: PyTypeNum_ISNUMBER(int num)
+.. c:function:: int PyTypeNum_ISNUMBER(int num)
-.. c:function:: PyDataType_ISNUMBER(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISNUMBER(PyArray_Descr* descr)
-.. c:function:: PyArray_ISNUMBER(PyArrayObject *obj)
+.. c:function:: int PyArray_ISNUMBER(PyArrayObject *obj)
Type represents any integer, floating point, or complex floating point
number.
-.. c:function:: PyTypeNum_ISSTRING(int num)
+.. c:function:: int PyTypeNum_ISSTRING(int num)
-.. c:function:: PyDataType_ISSTRING(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISSTRING(PyArray_Descr* descr)
-.. c:function:: PyArray_ISSTRING(PyArrayObject *obj)
+.. c:function:: int PyArray_ISSTRING(PyArrayObject *obj)
Type represents a string data type.
-.. c:function:: PyTypeNum_ISPYTHON(int num)
+.. c:function:: int PyTypeNum_ISPYTHON(int num)
-.. c:function:: PyDataType_ISPYTHON(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISPYTHON(PyArray_Descr* descr)
-.. c:function:: PyArray_ISPYTHON(PyArrayObject *obj)
+.. c:function:: int PyArray_ISPYTHON(PyArrayObject *obj)
Type represents an enumerated type corresponding to one of the
standard Python scalar (bool, int, float, or complex).
-.. c:function:: PyTypeNum_ISFLEXIBLE(int num)
+.. c:function:: int PyTypeNum_ISFLEXIBLE(int num)
-.. c:function:: PyDataType_ISFLEXIBLE(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISFLEXIBLE(PyArray_Descr* descr)
-.. c:function:: PyArray_ISFLEXIBLE(PyArrayObject *obj)
+.. c:function:: int PyArray_ISFLEXIBLE(PyArrayObject *obj)
Type represents one of the flexible array types ( :c:data:`NPY_STRING`,
:c:data:`NPY_UNICODE`, or :c:data:`NPY_VOID` ).
-.. c:function:: PyDataType_ISUNSIZED(PyArray_Descr* descr):
+.. c:function:: int PyDataType_ISUNSIZED(PyArray_Descr* descr)
Type has no size information attached, and can be resized. Should only be
called on flexible dtypes. Types that are attached to an array will always
@@ -951,55 +952,55 @@ argument must be a :c:type:`PyObject *<PyObject>` that can be directly interpret
For structured datatypes with no fields this function now returns False.
-.. c:function:: PyTypeNum_ISUSERDEF(int num)
+.. c:function:: int PyTypeNum_ISUSERDEF(int num)
-.. c:function:: PyDataType_ISUSERDEF(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISUSERDEF(PyArray_Descr* descr)
-.. c:function:: PyArray_ISUSERDEF(PyArrayObject *obj)
+.. c:function:: int PyArray_ISUSERDEF(PyArrayObject *obj)
Type represents a user-defined type.
-.. c:function:: PyTypeNum_ISEXTENDED(int num)
+.. c:function:: int PyTypeNum_ISEXTENDED(int num)
-.. c:function:: PyDataType_ISEXTENDED(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISEXTENDED(PyArray_Descr* descr)
-.. c:function:: PyArray_ISEXTENDED(PyArrayObject *obj)
+.. c:function:: int PyArray_ISEXTENDED(PyArrayObject *obj)
Type is either flexible or user-defined.
-.. c:function:: PyTypeNum_ISOBJECT(int num)
+.. c:function:: int PyTypeNum_ISOBJECT(int num)
-.. c:function:: PyDataType_ISOBJECT(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISOBJECT(PyArray_Descr* descr)
-.. c:function:: PyArray_ISOBJECT(PyArrayObject *obj)
+.. c:function:: int PyArray_ISOBJECT(PyArrayObject *obj)
Type represents object data type.
-.. c:function:: PyTypeNum_ISBOOL(int num)
+.. c:function:: int PyTypeNum_ISBOOL(int num)
-.. c:function:: PyDataType_ISBOOL(PyArray_Descr* descr)
+.. c:function:: int PyDataType_ISBOOL(PyArray_Descr* descr)
-.. c:function:: PyArray_ISBOOL(PyArrayObject *obj)
+.. c:function:: int PyArray_ISBOOL(PyArrayObject *obj)
Type represents Boolean data type.
-.. c:function:: PyDataType_HASFIELDS(PyArray_Descr* descr)
+.. c:function:: int PyDataType_HASFIELDS(PyArray_Descr* descr)
-.. c:function:: PyArray_HASFIELDS(PyArrayObject *obj)
+.. c:function:: int PyArray_HASFIELDS(PyArrayObject *obj)
Type has fields associated with it.
-.. c:function:: PyArray_ISNOTSWAPPED(m)
+.. c:function:: int PyArray_ISNOTSWAPPED(PyArrayObject *m)
Evaluates true if the data area of the ndarray *m* is in machine
byte-order according to the array's data-type descriptor.
-.. c:function:: PyArray_ISBYTESWAPPED(m)
+.. c:function:: int PyArray_ISBYTESWAPPED(PyArrayObject *m)
Evaluates true if the data area of the ndarray *m* is **not** in
machine byte-order according to the array's data-type descriptor.
-.. c:function:: Bool PyArray_EquivTypes( \
+.. c:function:: npy_bool PyArray_EquivTypes( \
PyArray_Descr* type1, PyArray_Descr* type2)
Return :c:data:`NPY_TRUE` if *type1* and *type2* actually represent
@@ -1008,18 +1009,18 @@ argument must be a :c:type:`PyObject *<PyObject>` that can be directly interpret
:c:data:`NPY_LONG` and :c:data:`NPY_INT` are equivalent. Otherwise
return :c:data:`NPY_FALSE`.
-.. c:function:: Bool PyArray_EquivArrTypes( \
+.. c:function:: npy_bool PyArray_EquivArrTypes( \
PyArrayObject* a1, PyArrayObject * a2)
Return :c:data:`NPY_TRUE` if *a1* and *a2* are arrays with equivalent
types for this platform.
-.. c:function:: Bool PyArray_EquivTypenums(int typenum1, int typenum2)
+.. c:function:: npy_bool PyArray_EquivTypenums(int typenum1, int typenum2)
Special case of :c:func:`PyArray_EquivTypes` (...) that does not accept
flexible data types but may be easier to call.
-.. c:function:: int PyArray_EquivByteorders({byteorder} b1, {byteorder} b2)
+.. c:function:: int PyArray_EquivByteorders(int b1, int b2)
True if byteorder characters ( :c:data:`NPY_LITTLE`,
:c:data:`NPY_BIG`, :c:data:`NPY_NATIVE`, :c:data:`NPY_IGNORE` ) are
@@ -1142,8 +1143,8 @@ Converting data types
storing the max value of the input types converted to a string or unicode.
.. c:function:: PyArray_Descr* PyArray_ResultType( \
- npy_intp narrs, PyArrayObject**arrs, npy_intp ndtypes, \
- PyArray_Descr**dtypes)
+ npy_intp narrs, PyArrayObject **arrs, npy_intp ndtypes, \
+ PyArray_Descr **dtypes)
.. versionadded:: 1.6
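At the Python level the same promotion machinery is exposed as `numpy.result_type`, which may help illustrate the C function (a sketch, not a definitive mapping of the C API):

```python
import numpy as np

# result_type applies NumPy's promotion rules across mixed operands
assert np.result_type(np.int8, np.float32) == np.dtype(np.float32)
assert np.result_type(np.int32, np.int64) == np.dtype(np.int64)
```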
@@ -1334,7 +1335,7 @@ Special functions for NPY_OBJECT
locations in the structure with object data-types. No checking is
performed but *arr* must be of data-type :c:type:`NPY_OBJECT` and be
single-segment and uninitialized (no previous objects in
- position). Use :c:func:`PyArray_DECREF` (*arr*) if you need to
+ position). Use :c:func:`PyArray_XDECREF` (*arr*) if you need to
decrement all the items in the object array prior to calling this
function.
@@ -1343,7 +1344,7 @@ Special functions for NPY_OBJECT
Precondition: ``arr`` is a copy of ``base`` (though possibly with different
strides, ordering, etc.) Set the UPDATEIFCOPY flag and ``arr->base`` so
that when ``arr`` is destructed, it will copy any changes back to ``base``.
- DEPRECATED, use :c:func:`PyArray_SetWritebackIfCopyBase``.
+ DEPRECATED, use :c:func:`PyArray_SetWritebackIfCopyBase`.
Returns 0 for success, -1 for failure.
@@ -1353,7 +1354,7 @@ Special functions for NPY_OBJECT
strides, ordering, etc.) Sets the :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` flag
and ``arr->base``, and set ``base`` to READONLY. Call
:c:func:`PyArray_ResolveWritebackIfCopy` before calling
- `Py_DECREF`` in order copy any changes back to ``base`` and
+ `Py_DECREF` in order to copy any changes back to ``base`` and
reset the READONLY flag.
Returns 0 for success, -1 for failure.
@@ -1397,12 +1398,12 @@ In versions 1.6 and earlier of NumPy, the following flags
did not have the _ARRAY_ macro namespace in them. That form
of the constant names is deprecated in 1.7.
-.. c:var:: NPY_ARRAY_C_CONTIGUOUS
+.. c:macro:: NPY_ARRAY_C_CONTIGUOUS
The data area is in C-style contiguous order (last index varies the
fastest).
-.. c:var:: NPY_ARRAY_F_CONTIGUOUS
+.. c:macro:: NPY_ARRAY_F_CONTIGUOUS
The data area is in Fortran-style contiguous order (first index varies
the fastest).
@@ -1423,22 +1424,22 @@ of the constant names is deprecated in 1.7.
.. seealso:: :ref:`Internal memory layout of an ndarray <arrays.ndarray>`
-.. c:var:: NPY_ARRAY_OWNDATA
+.. c:macro:: NPY_ARRAY_OWNDATA
The data area is owned by this array.
-.. c:var:: NPY_ARRAY_ALIGNED
+.. c:macro:: NPY_ARRAY_ALIGNED
The data area and all array elements are aligned appropriately.
-.. c:var:: NPY_ARRAY_WRITEABLE
+.. c:macro:: NPY_ARRAY_WRITEABLE
The data area can be written to.
Notice that the above 3 flags are defined so that a new, well-
behaved array has these flags defined as true.
-.. c:var:: NPY_ARRAY_WRITEBACKIFCOPY
+.. c:macro:: NPY_ARRAY_WRITEBACKIFCOPY
The data area represents a (well-behaved) copy whose information
should be transferred back to the original when
@@ -1457,7 +1458,7 @@ of the constant names is deprecated in 1.7.
would have returned an error because :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`
would not have been possible.
-.. c:var:: NPY_ARRAY_UPDATEIFCOPY
+.. c:macro:: NPY_ARRAY_UPDATEIFCOPY
A deprecated version of :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` which
depends upon ``dealloc`` to trigger the writeback. For backwards
@@ -1474,31 +1475,31 @@ for ``flags`` which can be any of :c:data:`NPY_ARRAY_C_CONTIGUOUS`,
Combinations of array flags
^^^^^^^^^^^^^^^^^^^^^^^^^^^
-.. c:var:: NPY_ARRAY_BEHAVED
+.. c:macro:: NPY_ARRAY_BEHAVED
:c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEABLE`
-.. c:var:: NPY_ARRAY_CARRAY
+.. c:macro:: NPY_ARRAY_CARRAY
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_BEHAVED`
-.. c:var:: NPY_ARRAY_CARRAY_RO
+.. c:macro:: NPY_ARRAY_CARRAY_RO
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
-.. c:var:: NPY_ARRAY_FARRAY
+.. c:macro:: NPY_ARRAY_FARRAY
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_BEHAVED`
-.. c:var:: NPY_ARRAY_FARRAY_RO
+.. c:macro:: NPY_ARRAY_FARRAY_RO
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
-.. c:var:: NPY_ARRAY_DEFAULT
+.. c:macro:: NPY_ARRAY_DEFAULT
:c:data:`NPY_ARRAY_CARRAY`
-.. c:var:: NPY_ARRAY_UPDATE_ALL
+.. c:macro:: NPY_ARRAY_UPDATE_ALL
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED`
@@ -1509,28 +1510,19 @@ Flag-like constants
These constants are used in :c:func:`PyArray_FromAny` (and its macro forms) to
specify desired properties of the new array.
-.. c:var:: NPY_ARRAY_FORCECAST
+.. c:macro:: NPY_ARRAY_FORCECAST
Cast to the desired type, even if it can't be done without losing
information.
-.. c:var:: NPY_ARRAY_ENSURECOPY
+.. c:macro:: NPY_ARRAY_ENSURECOPY
Make sure the resulting array is a copy of the original.
-.. c:var:: NPY_ARRAY_ENSUREARRAY
+.. c:macro:: NPY_ARRAY_ENSUREARRAY
Make sure the resulting object is an actual ndarray, and not a sub-class.
-.. c:var:: NPY_ARRAY_NOTSWAPPED
-
- Only used in :c:func:`PyArray_CheckFromAny` to over-ride the byteorder
- of the data-type object passed in.
-
-.. c:var:: NPY_ARRAY_BEHAVED_NS
-
- :c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEABLE` \| :c:data:`NPY_ARRAY_NOTSWAPPED`
-
Flag checking
^^^^^^^^^^^^^
@@ -1538,7 +1530,7 @@ Flag checking
For all of these macros *arr* must be an instance of a (subclass of)
:c:data:`PyArray_Type`.
-.. c:function:: PyArray_CHKFLAGS(PyObject *arr, flags)
+.. c:function:: int PyArray_CHKFLAGS(PyObject *arr, int flags)
The first parameter, arr, must be an ndarray or subclass. The
parameter, *flags*, should be an integer consisting of bitwise
@@ -1548,60 +1540,60 @@ For all of these macros *arr* must be an instance of a (subclass of)
:c:data:`NPY_ARRAY_WRITEABLE`, :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`,
:c:data:`NPY_ARRAY_UPDATEIFCOPY`.
-.. c:function:: PyArray_IS_C_CONTIGUOUS(PyObject *arr)
+.. c:function:: int PyArray_IS_C_CONTIGUOUS(PyObject *arr)
Evaluates true if *arr* is C-style contiguous.
-.. c:function:: PyArray_IS_F_CONTIGUOUS(PyObject *arr)
+.. c:function:: int PyArray_IS_F_CONTIGUOUS(PyObject *arr)
Evaluates true if *arr* is Fortran-style contiguous.
-.. c:function:: PyArray_ISFORTRAN(PyObject *arr)
+.. c:function:: int PyArray_ISFORTRAN(PyObject *arr)
Evaluates true if *arr* is Fortran-style contiguous and *not*
C-style contiguous. :c:func:`PyArray_IS_F_CONTIGUOUS`
is the correct way to test for Fortran-style contiguity.
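The same contiguity information is visible from Python through :attr:`ndarray.flags`, which may be a convenient way to sanity-check what these macros report (a sketch):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
# a freshly created 2-D array is C-contiguous but not Fortran-contiguous;
# its transpose is the reverse
assert a.flags['C_CONTIGUOUS'] and not a.flags['F_CONTIGUOUS']
assert a.T.flags['F_CONTIGUOUS'] and not a.T.flags['C_CONTIGUOUS']
```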
-.. c:function:: PyArray_ISWRITEABLE(PyObject *arr)
+.. c:function:: int PyArray_ISWRITEABLE(PyObject *arr)
Evaluates true if the data area of *arr* can be written to
-.. c:function:: PyArray_ISALIGNED(PyObject *arr)
+.. c:function:: int PyArray_ISALIGNED(PyObject *arr)
Evaluates true if the data area of *arr* is properly aligned on
the machine.
-.. c:function:: PyArray_ISBEHAVED(PyObject *arr)
+.. c:function:: int PyArray_ISBEHAVED(PyObject *arr)
Evaluates true if the data area of *arr* is aligned and writeable
and in machine byte-order according to its descriptor.
-.. c:function:: PyArray_ISBEHAVED_RO(PyObject *arr)
+.. c:function:: int PyArray_ISBEHAVED_RO(PyObject *arr)
Evaluates true if the data area of *arr* is aligned and in machine
byte-order.
-.. c:function:: PyArray_ISCARRAY(PyObject *arr)
+.. c:function:: int PyArray_ISCARRAY(PyObject *arr)
Evaluates true if the data area of *arr* is C-style contiguous,
and :c:func:`PyArray_ISBEHAVED` (*arr*) is true.
-.. c:function:: PyArray_ISFARRAY(PyObject *arr)
+.. c:function:: int PyArray_ISFARRAY(PyObject *arr)
Evaluates true if the data area of *arr* is Fortran-style
contiguous and :c:func:`PyArray_ISBEHAVED` (*arr*) is true.
-.. c:function:: PyArray_ISCARRAY_RO(PyObject *arr)
+.. c:function:: int PyArray_ISCARRAY_RO(PyObject *arr)
Evaluates true if the data area of *arr* is C-style contiguous,
aligned, and in machine byte-order.
-.. c:function:: PyArray_ISFARRAY_RO(PyObject *arr)
+.. c:function:: int PyArray_ISFARRAY_RO(PyObject *arr)
Evaluates true if the data area of *arr* is Fortran-style
    contiguous, aligned, and in machine byte-order.
-.. c:function:: PyArray_ISONESEGMENT(PyObject *arr)
+.. c:function:: int PyArray_ISONESEGMENT(PyObject *arr)
Evaluates true if the data area of *arr* consists of a single
(C-style or Fortran-style) contiguous segment.
@@ -1659,7 +1651,7 @@ Conversion
destination must be an integer multiple of the number of elements
in *val*.
-.. c:function:: PyObject* PyArray_Byteswap(PyArrayObject* self, Bool inplace)
+.. c:function:: PyObject* PyArray_Byteswap(PyArrayObject* self, npy_bool inplace)
Equivalent to :meth:`ndarray.byteswap<numpy.ndarray.byteswap>` (*self*, *inplace*). Return an array
whose data area is byteswapped. If *inplace* is non-zero, then do
@@ -1876,16 +1868,16 @@ Item selection and manipulation
created. The *clipmode* argument determines behavior for when
entries in *self* are not between 0 and len(*op*).
- .. c:var:: NPY_RAISE
+ .. c:macro:: NPY_RAISE
raise a ValueError;
- .. c:var:: NPY_WRAP
+ .. c:macro:: NPY_WRAP
wrap values < 0 by adding len(*op*) and values >=len(*op*)
by subtracting len(*op*) until they are in range;
- .. c:var:: NPY_CLIP
+ .. c:macro:: NPY_CLIP
all values are clipped to the region [0, len(*op*) ).
@@ -2263,7 +2255,7 @@ Array Functions
See the :func:`~numpy.einsum` function for more details.
-.. c:function:: PyObject* PyArray_CopyAndTranspose(PyObject \* op)
+.. c:function:: PyObject* PyArray_CopyAndTranspose(PyObject * op)
A specialized copy and transpose function that works only for 2-d
arrays. The returned array is a transposed copy of *op*.
@@ -2318,7 +2310,7 @@ Array Functions
Other functions
^^^^^^^^^^^^^^^
-.. c:function:: Bool PyArray_CheckStrides( \
+.. c:function:: npy_bool PyArray_CheckStrides( \
int elsize, int nd, npy_intp numbytes, npy_intp const* dims, \
npy_intp const* newstrides)
@@ -2361,7 +2353,9 @@ it is possible to do this.
Defining an :c:type:`NpyAuxData` is similar to defining a class in C++,
but the object semantics have to be tracked manually since the API is in C.
Here's an example for a function which doubles up an element using
-an element copier function as a primitive.::
+an element copier function as a primitive.
+
+.. code-block:: c
typedef struct {
NpyAuxData base;
@@ -2425,12 +2419,12 @@ an element copier function as a primitive.::
functions should never set the Python exception on error, because
they may be called from a multi-threaded context.
-.. c:function:: NPY_AUXDATA_FREE(auxdata)
+.. c:function:: void NPY_AUXDATA_FREE(NpyAuxData *auxdata)
    A macro which calls the auxdata's free function appropriately;
    it does nothing if *auxdata* is NULL.
-.. c:function:: NPY_AUXDATA_CLONE(auxdata)
+.. c:function:: NpyAuxData *NPY_AUXDATA_CLONE(NpyAuxData *auxdata)
A macro which calls the auxdata's clone function appropriately,
returning a deep copy of the auxiliary data.
@@ -2453,7 +2447,7 @@ this useful approach to looping over an array.
it easy to loop over an N-dimensional non-contiguous array in
C-style contiguous fashion.
-.. c:function:: PyObject* PyArray_IterAllButAxis(PyObject* arr, int \*axis)
+.. c:function:: PyObject* PyArray_IterAllButAxis(PyObject* arr, int* axis)
Return an array iterator that will iterate over all axes but the
one provided in *\*axis*. The returned iterator cannot be used
@@ -2497,7 +2491,7 @@ this useful approach to looping over an array.
*destination*, which must have size at least *iterator*
->nd_m1+1.
-.. c:function:: PyArray_ITER_GOTO1D(PyObject* iterator, npy_intp index)
+.. c:function:: void PyArray_ITER_GOTO1D(PyObject* iterator, npy_intp index)
Set the *iterator* index and dataptr to the location in the array
indicated by the integer *index* which points to an element in the
@@ -2818,11 +2812,21 @@ Data-type descriptors
Create a new data-type object with the byteorder set according to
*newendian*. All referenced data-type objects (in subdescr and
fields members of the data-type object) are also changed
- (recursively). If a byteorder of :c:data:`NPY_IGNORE` is encountered it
+ (recursively).
+
+ The value of *newendian* is one of these macros:
+
+ .. c:macro:: NPY_IGNORE
+ NPY_SWAP
+ NPY_NATIVE
+ NPY_LITTLE
+ NPY_BIG
+
+ If a byteorder of :c:data:`NPY_IGNORE` is encountered it
is left alone. If newendian is :c:data:`NPY_SWAP`, then all byte-orders
are swapped. Other valid newendian values are :c:data:`NPY_NATIVE`,
- :c:data:`NPY_LITTLE`, and :c:data:`NPY_BIG` which all cause the returned
- data-typed descriptor (and all it's
+ :c:data:`NPY_LITTLE`, and :c:data:`NPY_BIG` which all cause
+ the returned data-type descriptor (and all its
referenced data-type descriptors) to have the corresponding byte-
order.
@@ -2956,11 +2960,11 @@ to.
already a buffer object pointing to another object). If you need
to hold on to the memory be sure to INCREF the base member. The
chunk of memory is pointed to by *buf* ->ptr member and has length
- *buf* ->len. The flags member of *buf* is :c:data:`NPY_BEHAVED_RO` with
- the :c:data:`NPY_ARRAY_WRITEABLE` flag set if *obj* has a writeable buffer
- interface.
+ *buf* ->len. The flags member of *buf* is :c:data:`NPY_ARRAY_ALIGNED`
+ with the :c:data:`NPY_ARRAY_WRITEABLE` flag set if *obj* has
+ a writeable buffer interface.
-.. c:function:: int PyArray_AxisConverter(PyObject \* obj, int* axis)
+.. c:function:: int PyArray_AxisConverter(PyObject* obj, int* axis)
Convert a Python object, *obj*, representing an axis argument to
the proper value for passing to the functions that take an integer
@@ -2968,7 +2972,7 @@ to.
:c:data:`NPY_MAXDIMS` which is interpreted correctly by the C-API
functions that take axis arguments.
-.. c:function:: int PyArray_BoolConverter(PyObject* obj, Bool* value)
+.. c:function:: int PyArray_BoolConverter(PyObject* obj, npy_bool* value)
Convert any Python object, *obj*, to :c:data:`NPY_TRUE` or
:c:data:`NPY_FALSE`, and place the result in *value*.
@@ -3120,19 +3124,19 @@ the C-API is needed then some additional steps must be taken.
Internally, these #defines work as follows:
* If neither is defined, the C-API is declared to be
- :c:type:`static void**`, so it is only visible within the
+ ``static void**``, so it is only visible within the
compilation unit that #includes numpy/arrayobject.h.
* If :c:macro:`PY_ARRAY_UNIQUE_SYMBOL` is #defined, but
:c:macro:`NO_IMPORT_ARRAY` is not, the C-API is declared to
- be :c:type:`void**`, so that it will also be visible to other
+ be ``void**``, so that it will also be visible to other
compilation units.
* If :c:macro:`NO_IMPORT_ARRAY` is #defined, regardless of
whether :c:macro:`PY_ARRAY_UNIQUE_SYMBOL` is, the C-API is
- declared to be :c:type:`extern void**`, so it is expected to
+ declared to be ``extern void**``, so it is expected to
be defined in another compilation unit.
* Whenever :c:macro:`PY_ARRAY_UNIQUE_SYMBOL` is #defined, it
also changes the name of the variable holding the C-API, which
- defaults to :c:data:`PyArray_API`, to whatever the macro is
+ defaults to ``PyArray_API``, to whatever the macro is
#defined to.
Checking the API Version
@@ -3147,21 +3151,31 @@ calling the function). That's why several functions are provided to check for
numpy versions. The macros :c:data:`NPY_VERSION` and
:c:data:`NPY_FEATURE_VERSION` correspond to the numpy version used to build the
extension, whereas the versions returned by the functions
-PyArray_GetNDArrayCVersion and PyArray_GetNDArrayCFeatureVersion corresponds to
-the runtime numpy's version.
+:c:func:`PyArray_GetNDArrayCVersion` and :c:func:`PyArray_GetNDArrayCFeatureVersion`
+correspond to the version of numpy used at runtime.
The rules for ABI and API compatibilities can be summarized as follows:
- * Whenever :c:data:`NPY_VERSION` != PyArray_GetNDArrayCVersion, the
+ * Whenever :c:data:`NPY_VERSION` != ``PyArray_GetNDArrayCVersion()``, the
extension has to be recompiled (ABI incompatibility).
- * :c:data:`NPY_VERSION` == PyArray_GetNDArrayCVersion and
- :c:data:`NPY_FEATURE_VERSION` <= PyArray_GetNDArrayCFeatureVersion means
+ * :c:data:`NPY_VERSION` == ``PyArray_GetNDArrayCVersion()`` and
+ :c:data:`NPY_FEATURE_VERSION` <= ``PyArray_GetNDArrayCFeatureVersion()`` means
backward compatible changes.
ABI incompatibility is automatically detected in every numpy version. API
incompatibility detection was added in numpy 1.4.0. If you want to support
many different numpy versions with one extension binary, you have to build your
-extension with the lowest NPY_FEATURE_VERSION as possible.
+extension with the lowest :c:data:`NPY_FEATURE_VERSION` possible.
+
+.. c:macro:: NPY_VERSION
+
+ The current version of the ndarray object (check to see if this
+ macro is defined to guarantee the ``numpy/arrayobject.h`` header is
+ being used).
+
+.. c:macro:: NPY_FEATURE_VERSION
+
+ The current version of the C-API.
.. c:function:: unsigned int PyArray_GetNDArrayCVersion(void)
@@ -3242,7 +3256,7 @@ Memory management
.. c:function:: char* PyDataMem_NEW(size_t nbytes)
-.. c:function:: PyDataMem_FREE(char* ptr)
+.. c:function:: void PyDataMem_FREE(char* ptr)
.. c:function:: char* PyDataMem_RENEW(void * ptr, size_t newbytes)
@@ -3251,7 +3265,7 @@ Memory management
.. c:function:: npy_intp* PyDimMem_NEW(int nd)
-.. c:function:: PyDimMem_FREE(char* ptr)
+.. c:function:: void PyDimMem_FREE(char* ptr)
.. c:function:: npy_intp* PyDimMem_RENEW(void* ptr, size_t newnd)
@@ -3259,7 +3273,7 @@ Memory management
.. c:function:: void* PyArray_malloc(size_t nbytes)
-.. c:function:: PyArray_free(void* ptr)
+.. c:function:: void PyArray_free(void* ptr)
.. c:function:: void* PyArray_realloc(npy_intp* ptr, size_t nbytes)
@@ -3268,6 +3282,8 @@ Memory management
:c:data:`NPY_USE_PYMEM` is 0, if :c:data:`NPY_USE_PYMEM` is 1, then
the Python memory allocator is used.
+ .. c:macro:: NPY_USE_PYMEM
+
.. c:function:: int PyArray_ResolveWritebackIfCopy(PyArrayObject* obj)
If ``obj.flags`` has :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` or (deprecated)
@@ -3298,9 +3314,13 @@ be accomplished using two groups of macros. Typically, if one macro in
a group is used in a code block, all of them must be used in the same
code block. Currently, :c:data:`NPY_ALLOW_THREADS` is defined to the
python-defined :c:data:`WITH_THREADS` constant unless the environment
-variable :c:data:`NPY_NOSMP` is set in which case
+variable ``NPY_NOSMP`` is set in which case
:c:data:`NPY_ALLOW_THREADS` is defined to be 0.
+.. c:macro:: NPY_ALLOW_THREADS
+
+.. c:macro:: WITH_THREADS
+
Group 1
"""""""
@@ -3337,18 +3357,18 @@ Group 1
interpreter. This macro acquires the GIL and restores the
Python state from the saved variable.
- .. c:function:: NPY_BEGIN_THREADS_DESCR(PyArray_Descr *dtype)
+ .. c:function:: void NPY_BEGIN_THREADS_DESCR(PyArray_Descr *dtype)
Useful to release the GIL only if *dtype* does not contain
arbitrary Python objects which may need the Python interpreter
during execution of the loop.
- .. c:function:: NPY_END_THREADS_DESCR(PyArray_Descr *dtype)
+ .. c:function:: void NPY_END_THREADS_DESCR(PyArray_Descr *dtype)
Useful to regain the GIL in situations where it was released
using the BEGIN form of this macro.
- .. c:function:: NPY_BEGIN_THREADS_THRESHOLDED(int loop_size)
+ .. c:function:: void NPY_BEGIN_THREADS_THRESHOLDED(int loop_size)
Useful to release the GIL only if *loop_size* exceeds a
minimum threshold, currently set to 500. Should be matched
@@ -3388,15 +3408,15 @@ Group 2
Priority
^^^^^^^^
-.. c:var:: NPY_PRIORITY
+.. c:macro:: NPY_PRIORITY
Default priority for arrays.
-.. c:var:: NPY_SUBTYPE_PRIORITY
+.. c:macro:: NPY_SUBTYPE_PRIORITY
Default subtype priority.
-.. c:var:: NPY_SCALAR_PRIORITY
+.. c:macro:: NPY_SCALAR_PRIORITY
    Default scalar priority (very small).
@@ -3411,15 +3431,15 @@ Priority
Default buffers
^^^^^^^^^^^^^^^
-.. c:var:: NPY_BUFSIZE
+.. c:macro:: NPY_BUFSIZE
Default size of the user-settable internal buffers.
-.. c:var:: NPY_MIN_BUFSIZE
+.. c:macro:: NPY_MIN_BUFSIZE
Smallest size of user-settable internal buffers.
-.. c:var:: NPY_MAX_BUFSIZE
+.. c:macro:: NPY_MAX_BUFSIZE
Largest size allowed for the user-settable buffers.
@@ -3427,38 +3447,32 @@ Default buffers
Other constants
^^^^^^^^^^^^^^^
-.. c:var:: NPY_NUM_FLOATTYPE
+.. c:macro:: NPY_NUM_FLOATTYPE
The number of floating-point types
-.. c:var:: NPY_MAXDIMS
+.. c:macro:: NPY_MAXDIMS
The maximum number of dimensions allowed in arrays.
-.. c:var:: NPY_MAXARGS
+.. c:macro:: NPY_MAXARGS
The maximum number of array arguments that can be used in functions.
-.. c:var:: NPY_VERSION
-
- The current version of the ndarray object (check to see if this
- variable is defined to guarantee the numpy/arrayobject.h header is
- being used).
-
-.. c:var:: NPY_FALSE
+.. c:macro:: NPY_FALSE
Defined as 0 for use with Bool.
-.. c:var:: NPY_TRUE
+.. c:macro:: NPY_TRUE
Defined as 1 for use with Bool.
-.. c:var:: NPY_FAIL
+.. c:macro:: NPY_FAIL
The return value of failed converter functions which are called using
the "O&" syntax in :c:func:`PyArg_ParseTuple`-like functions.
-.. c:var:: NPY_SUCCEED
+.. c:macro:: NPY_SUCCEED
The return value of successful converter functions which are called
using the "O&" syntax in :c:func:`PyArg_ParseTuple`-like functions.
@@ -3467,7 +3481,7 @@ Other constants
Miscellaneous Macros
^^^^^^^^^^^^^^^^^^^^
-.. c:function:: PyArray_SAMESHAPE(PyArrayObject *a1, PyArrayObject *a2)
+.. c:function:: int PyArray_SAMESHAPE(PyArrayObject *a1, PyArrayObject *a2)
Evaluates as True if arrays *a1* and *a2* have the same shape.
@@ -3502,11 +3516,11 @@ Miscellaneous Macros
of the ordering which is lexicographic: comparing the real parts
first and then the complex parts if the real parts are equal.
-.. c:function:: PyArray_REFCOUNT(PyObject* op)
+.. c:function:: npy_intp PyArray_REFCOUNT(PyObject* op)
Returns the reference count of any Python object.
-.. c:function:: PyArray_DiscardWritebackIfCopy(PyObject* obj)
+.. c:function:: void PyArray_DiscardWritebackIfCopy(PyObject* obj)
If ``obj.flags`` has :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` or (deprecated)
:c:data:`NPY_ARRAY_UPDATEIFCOPY`, this function clears the flags, `DECREF` s
@@ -3517,7 +3531,7 @@ Miscellaneous Macros
error when you are finished with ``obj``, just before ``Py_DECREF(obj)``.
It may be called multiple times, or with ``NULL`` input.
-.. c:function:: PyArray_XDECREF_ERR(PyObject* obj)
+.. c:function:: void PyArray_XDECREF_ERR(PyObject* obj)
Deprecated in 1.14, use :c:func:`PyArray_DiscardWritebackIfCopy`
followed by ``Py_XDECREF``
@@ -3623,6 +3637,22 @@ Enumerated Types
Wraps an index to the valid range if it is out of bounds.
+.. c:type:: NPY_SEARCHSIDE
+
+ A variable type indicating whether the index returned should be that of
+ the first suitable location (if :c:data:`NPY_SEARCHLEFT`) or of the last
+ (if :c:data:`NPY_SEARCHRIGHT`).
+
+ .. c:var:: NPY_SEARCHLEFT
+
+ .. c:var:: NPY_SEARCHRIGHT
+
+.. c:type:: NPY_SELECTKIND
+
+ A variable type indicating the selection algorithm being used.
+
+ .. c:var:: NPY_INTROSELECT
+
.. c:type:: NPY_CASTING
.. versionadded:: 1.6
diff --git a/doc/source/reference/c-api/config.rst b/doc/source/reference/c-api/config.rst
index 05e6fe44d..87130699b 100644
--- a/doc/source/reference/c-api/config.rst
+++ b/doc/source/reference/c-api/config.rst
@@ -19,59 +19,62 @@ avoid namespace pollution.
Data type sizes
---------------
-The :c:data:`NPY_SIZEOF_{CTYPE}` constants are defined so that sizeof
+The ``NPY_SIZEOF_{CTYPE}`` constants are defined so that sizeof
information is available to the pre-processor.
-.. c:var:: NPY_SIZEOF_SHORT
+.. c:macro:: NPY_SIZEOF_SHORT
sizeof(short)
-.. c:var:: NPY_SIZEOF_INT
+.. c:macro:: NPY_SIZEOF_INT
sizeof(int)
-.. c:var:: NPY_SIZEOF_LONG
+.. c:macro:: NPY_SIZEOF_LONG
sizeof(long)
-.. c:var:: NPY_SIZEOF_LONGLONG
+.. c:macro:: NPY_SIZEOF_LONGLONG
sizeof(longlong) where longlong is defined appropriately on the
platform.
-.. c:var:: NPY_SIZEOF_PY_LONG_LONG
+.. c:macro:: NPY_SIZEOF_PY_LONG_LONG
-.. c:var:: NPY_SIZEOF_FLOAT
+.. c:macro:: NPY_SIZEOF_FLOAT
sizeof(float)
-.. c:var:: NPY_SIZEOF_DOUBLE
+.. c:macro:: NPY_SIZEOF_DOUBLE
sizeof(double)
-.. c:var:: NPY_SIZEOF_LONG_DOUBLE
+.. c:macro:: NPY_SIZEOF_LONG_DOUBLE
- sizeof(longdouble) (A macro defines **NPY_SIZEOF_LONGDOUBLE** as well.)
+.. c:macro:: NPY_SIZEOF_LONGDOUBLE
-.. c:var:: NPY_SIZEOF_PY_INTPTR_T
+ sizeof(longdouble)
- Size of a pointer on this platform (sizeof(void \*)) (A macro defines
- NPY_SIZEOF_INTP as well.)
+.. c:macro:: NPY_SIZEOF_PY_INTPTR_T
+
+.. c:macro:: NPY_SIZEOF_INTP
+
+ Size of a pointer on this platform (sizeof(void \*))
Platform information
--------------------
-.. c:var:: NPY_CPU_X86
-.. c:var:: NPY_CPU_AMD64
-.. c:var:: NPY_CPU_IA64
-.. c:var:: NPY_CPU_PPC
-.. c:var:: NPY_CPU_PPC64
-.. c:var:: NPY_CPU_SPARC
-.. c:var:: NPY_CPU_SPARC64
-.. c:var:: NPY_CPU_S390
-.. c:var:: NPY_CPU_PARISC
+.. c:macro:: NPY_CPU_X86
+.. c:macro:: NPY_CPU_AMD64
+.. c:macro:: NPY_CPU_IA64
+.. c:macro:: NPY_CPU_PPC
+.. c:macro:: NPY_CPU_PPC64
+.. c:macro:: NPY_CPU_SPARC
+.. c:macro:: NPY_CPU_SPARC64
+.. c:macro:: NPY_CPU_S390
+.. c:macro:: NPY_CPU_PARISC
.. versionadded:: 1.3.0
@@ -80,11 +83,11 @@ Platform information
Defined in ``numpy/npy_cpu.h``
-.. c:var:: NPY_LITTLE_ENDIAN
+.. c:macro:: NPY_LITTLE_ENDIAN
-.. c:var:: NPY_BIG_ENDIAN
+.. c:macro:: NPY_BIG_ENDIAN
-.. c:var:: NPY_BYTE_ORDER
+.. c:macro:: NPY_BYTE_ORDER
.. versionadded:: 1.3.0
@@ -94,7 +97,7 @@ Platform information
Defined in ``numpy/npy_endian.h``.
-.. c:function:: PyArray_GetEndianness()
+.. c:function:: int PyArray_GetEndianness()
.. versionadded:: 1.3.0
@@ -102,21 +105,27 @@ Platform information
One of :c:data:`NPY_CPU_BIG`, :c:data:`NPY_CPU_LITTLE`,
or :c:data:`NPY_CPU_UNKNOWN_ENDIAN`.
+ .. c:macro:: NPY_CPU_BIG
+
+ .. c:macro:: NPY_CPU_LITTLE
+
+ .. c:macro:: NPY_CPU_UNKNOWN_ENDIAN
+
Compiler directives
-------------------
-.. c:var:: NPY_LIKELY
-.. c:var:: NPY_UNLIKELY
-.. c:var:: NPY_UNUSED
+.. c:macro:: NPY_LIKELY
+.. c:macro:: NPY_UNLIKELY
+.. c:macro:: NPY_UNUSED
Interrupt Handling
------------------
-.. c:var:: NPY_INTERRUPT_H
-.. c:var:: NPY_SIGSETJMP
-.. c:var:: NPY_SIGLONGJMP
-.. c:var:: NPY_SIGJMP_BUF
-.. c:var:: NPY_SIGINT_ON
-.. c:var:: NPY_SIGINT_OFF
+.. c:macro:: NPY_INTERRUPT_H
+.. c:macro:: NPY_SIGSETJMP
+.. c:macro:: NPY_SIGLONGJMP
+.. c:macro:: NPY_SIGJMP_BUF
+.. c:macro:: NPY_SIGINT_ON
+.. c:macro:: NPY_SIGINT_OFF
diff --git a/doc/source/reference/c-api/coremath.rst b/doc/source/reference/c-api/coremath.rst
index 0c46475cf..338c584a1 100644
--- a/doc/source/reference/c-api/coremath.rst
+++ b/doc/source/reference/c-api/coremath.rst
@@ -24,23 +24,23 @@ in doubt.
Floating point classification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. c:var:: NPY_NAN
+.. c:macro:: NPY_NAN
This macro is defined to a NaN (Not a Number), and is guaranteed to have
the signbit unset ('positive' NaN). The corresponding single and extended
precision macros are available with the suffixes F and L.
-.. c:var:: NPY_INFINITY
+.. c:macro:: NPY_INFINITY
This macro is defined to a positive inf. The corresponding single and
extended precision macros are available with the suffixes F and L.
-.. c:var:: NPY_PZERO
+.. c:macro:: NPY_PZERO
This macro is defined to positive zero. The corresponding single and
extended precision macros are available with the suffixes F and L.
-.. c:var:: NPY_NZERO
+.. c:macro:: NPY_NZERO
This macro is defined to negative zero (that is with the sign bit set). The
corresponding single and extended precision macros are available with the
@@ -84,47 +84,47 @@ The following math constants are available in ``npy_math.h``. Single
and extended precision are also available by adding the ``f`` and
``l`` suffixes respectively.
-.. c:var:: NPY_E
+.. c:macro:: NPY_E
Base of natural logarithm (:math:`e`)
-.. c:var:: NPY_LOG2E
+.. c:macro:: NPY_LOG2E
Logarithm to base 2 of the Euler constant (:math:`\frac{\ln(e)}{\ln(2)}`)
-.. c:var:: NPY_LOG10E
+.. c:macro:: NPY_LOG10E
Logarithm to base 10 of the Euler constant (:math:`\frac{\ln(e)}{\ln(10)}`)
-.. c:var:: NPY_LOGE2
+.. c:macro:: NPY_LOGE2
Natural logarithm of 2 (:math:`\ln(2)`)
-.. c:var:: NPY_LOGE10
+.. c:macro:: NPY_LOGE10
Natural logarithm of 10 (:math:`\ln(10)`)
-.. c:var:: NPY_PI
+.. c:macro:: NPY_PI
Pi (:math:`\pi`)
-.. c:var:: NPY_PI_2
+.. c:macro:: NPY_PI_2
Pi divided by 2 (:math:`\frac{\pi}{2}`)
-.. c:var:: NPY_PI_4
+.. c:macro:: NPY_PI_4
Pi divided by 4 (:math:`\frac{\pi}{4}`)
-.. c:var:: NPY_1_PI
+.. c:macro:: NPY_1_PI
Reciprocal of pi (:math:`\frac{1}{\pi}`)
-.. c:var:: NPY_2_PI
+.. c:macro:: NPY_2_PI
Two times the reciprocal of pi (:math:`\frac{2}{\pi}`)
-.. c:var:: NPY_EULER
+.. c:macro:: NPY_EULER
The Euler constant
:math:`\lim_{n\rightarrow\infty}({\sum_{k=1}^n{\frac{1}{k}}-\ln n})`
@@ -308,35 +308,35 @@ __ https://en.wikipedia.org/wiki/Half-precision_floating-point_format
__ https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_half_float_pixel.txt
__ https://www.openexr.com/about.html
-.. c:var:: NPY_HALF_ZERO
+.. c:macro:: NPY_HALF_ZERO
This macro is defined to positive zero.
-.. c:var:: NPY_HALF_PZERO
+.. c:macro:: NPY_HALF_PZERO
This macro is defined to positive zero.
-.. c:var:: NPY_HALF_NZERO
+.. c:macro:: NPY_HALF_NZERO
This macro is defined to negative zero.
-.. c:var:: NPY_HALF_ONE
+.. c:macro:: NPY_HALF_ONE
This macro is defined to 1.0.
-.. c:var:: NPY_HALF_NEGONE
+.. c:macro:: NPY_HALF_NEGONE
This macro is defined to -1.0.
-.. c:var:: NPY_HALF_PINF
+.. c:macro:: NPY_HALF_PINF
This macro is defined to +inf.
-.. c:var:: NPY_HALF_NINF
+.. c:macro:: NPY_HALF_NINF
This macro is defined to -inf.
-.. c:var:: NPY_HALF_NAN
+.. c:macro:: NPY_HALF_NAN
This macro is defined to a NaN value, guaranteed to have its sign bit unset.
diff --git a/doc/source/reference/c-api/deprecations.rst b/doc/source/reference/c-api/deprecations.rst
index a382017a2..5b1abc6f2 100644
--- a/doc/source/reference/c-api/deprecations.rst
+++ b/doc/source/reference/c-api/deprecations.rst
@@ -48,7 +48,9 @@ warnings).
To use the NPY_NO_DEPRECATED_API mechanism, you need to #define it to
the target API version of NumPy before #including any NumPy headers.
-If you want to confirm that your code is clean against 1.7, use::
+If you want to confirm that your code is clean against 1.7, use:
+
+.. code-block:: c
#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
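Concretely, the define has to precede the first NumPy include in each compilation unit; otherwise it has no effect. A typical ordering:

```c
/* Must come before any NumPy header is included. */
#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
#include <numpy/arrayobject.h>
```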
diff --git a/doc/source/reference/c-api/dtype.rst b/doc/source/reference/c-api/dtype.rst
index 72e908861..a1a53cdb6 100644
--- a/doc/source/reference/c-api/dtype.rst
+++ b/doc/source/reference/c-api/dtype.rst
@@ -30,7 +30,7 @@ Enumerated Types
There is a list of enumerated types defined providing the basic 24
data types plus some useful generic names. Whenever the code requires
a type number, one of these enumerated types is requested. The types
-are all called :c:data:`NPY_{NAME}`:
+are all called ``NPY_{NAME}``:
.. c:var:: NPY_BOOL
@@ -183,23 +183,23 @@ Some useful aliases of the above types are
Other useful related constants are
-.. c:var:: NPY_NTYPES
+.. c:macro:: NPY_NTYPES
The total number of built-in NumPy types. The enumeration covers
the range from 0 to NPY_NTYPES-1.
-.. c:var:: NPY_NOTYPE
+.. c:macro:: NPY_NOTYPE
A signal value guaranteed not to be a valid type enumeration number.
-.. c:var:: NPY_USERDEF
+.. c:macro:: NPY_USERDEF
The start of type numbers used for Custom Data types.
The various character codes indicating certain types are also part of
an enumerated list. References to type characters (should they be
needed at all) should always use these enumerations. The form of them
-is :c:data:`NPY_{NAME}LTR` where ``{NAME}`` can be
+is ``NPY_{NAME}LTR`` where ``{NAME}`` can be
**BOOL**, **BYTE**, **UBYTE**, **SHORT**, **USHORT**, **INT**,
**UINT**, **LONG**, **ULONG**, **LONGLONG**, **ULONGLONG**,
@@ -221,24 +221,17 @@ Defines
Max and min values for integers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-.. c:var:: NPY_MAX_INT{bits}
-
-.. c:var:: NPY_MAX_UINT{bits}
-
-.. c:var:: NPY_MIN_INT{bits}
-
+``NPY_MAX_INT{bits}``, ``NPY_MAX_UINT{bits}``, ``NPY_MIN_INT{bits}``
These are defined for ``{bits}`` = 8, 16, 32, 64, 128, and 256 and provide
the maximum (minimum) value of the corresponding (unsigned) integer
type. Note: the actual integer type may not be available on all
platforms (e.g. 128-bit and 256-bit integers are rare).
-.. c:var:: NPY_MIN_{type}
-
+``NPY_MIN_{type}``
This is defined for ``{type}`` = **BYTE**, **SHORT**, **INT**,
**LONG**, **LONGLONG**, **INTP**
-.. c:var:: NPY_MAX_{type}
-
+``NPY_MAX_{type}``
This is defined for ``{type}`` = **BYTE**, **UBYTE**,
**SHORT**, **USHORT**, **INT**, **UINT**, **LONG**, **ULONG**,
**LONGLONG**, **ULONGLONG**, **INTP**, **UINTP**
@@ -247,8 +240,8 @@ Max and min values for integers
Number of bits in data types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-All :c:data:`NPY_SIZEOF_{CTYPE}` constants have corresponding
-:c:data:`NPY_BITSOF_{CTYPE}` constants defined. The :c:data:`NPY_BITSOF_{CTYPE}`
+All ``NPY_SIZEOF_{CTYPE}`` constants have corresponding
+``NPY_BITSOF_{CTYPE}`` constants defined. The ``NPY_BITSOF_{CTYPE}``
constants provide the number of bits in the data type. Specifically,
the available ``{CTYPE}s`` are
@@ -263,7 +256,7 @@ All of the numeric data types (integer, floating point, and complex)
have constants that are defined to be a specific enumerated type
number. Exactly which enumerated type a bit-width type refers to is
platform dependent. In particular, the constants available are
-:c:data:`PyArray_{NAME}{BITS}` where ``{NAME}`` is **INT**, **UINT**,
+``PyArray_{NAME}{BITS}`` where ``{NAME}`` is **INT**, **UINT**,
**FLOAT**, **COMPLEX** and ``{BITS}`` can be 8, 16, 32, 64, 80, 96, 128,
160, 192, 256, and 512. Obviously not all bit-widths are available on
all platforms for all the kinds of numeric types. Commonly 8-, 16-,
@@ -397,8 +390,8 @@ There are also typedefs for signed integers, unsigned integers,
floating point, and complex floating point types of specific bit-
widths. The available type names are
- :c:type:`npy_int{bits}`, :c:type:`npy_uint{bits}`, :c:type:`npy_float{bits}`,
- and :c:type:`npy_complex{bits}`
+ ``npy_int{bits}``, ``npy_uint{bits}``, ``npy_float{bits}``,
+ and ``npy_complex{bits}``
where ``{bits}`` is the number of bits in the type and can be **8**,
**16**, **32**, **64**, 128, and 256 for integer types; 16, **32**
@@ -414,6 +407,12 @@ Printf Formatting
For help in printing, the following strings are defined as the correct
format specifier in printf and related commands.
- :c:data:`NPY_LONGLONG_FMT`, :c:data:`NPY_ULONGLONG_FMT`,
- :c:data:`NPY_INTP_FMT`, :c:data:`NPY_UINTP_FMT`,
- :c:data:`NPY_LONGDOUBLE_FMT`
+.. c:macro:: NPY_LONGLONG_FMT
+
+.. c:macro:: NPY_ULONGLONG_FMT
+
+.. c:macro:: NPY_INTP_FMT
+
+.. c:macro:: NPY_UINTP_FMT
+
+.. c:macro:: NPY_LONGDOUBLE_FMT
diff --git a/doc/source/reference/c-api/iterator.rst b/doc/source/reference/c-api/iterator.rst
index b77d029cc..ae96bb3fb 100644
--- a/doc/source/reference/c-api/iterator.rst
+++ b/doc/source/reference/c-api/iterator.rst
@@ -313,17 +313,17 @@ Construction and Destruction
Flags that may be passed in ``flags``, applying to the whole
iterator, are:
- .. c:var:: NPY_ITER_C_INDEX
+ .. c:macro:: NPY_ITER_C_INDEX
Causes the iterator to track a raveled flat index matching C
order. This option cannot be used with :c:data:`NPY_ITER_F_INDEX`.
- .. c:var:: NPY_ITER_F_INDEX
+ .. c:macro:: NPY_ITER_F_INDEX
Causes the iterator to track a raveled flat index matching Fortran
order. This option cannot be used with :c:data:`NPY_ITER_C_INDEX`.
- .. c:var:: NPY_ITER_MULTI_INDEX
+ .. c:macro:: NPY_ITER_MULTI_INDEX
Causes the iterator to track a multi-index.
This prevents the iterator from coalescing axes to
@@ -336,7 +336,7 @@ Construction and Destruction
However, it is possible to remove axes again and use the iterator
normally if the size is small enough after removal.
- .. c:var:: NPY_ITER_EXTERNAL_LOOP
+ .. c:macro:: NPY_ITER_EXTERNAL_LOOP
Causes the iterator to skip iteration of the innermost
loop, requiring the user of the iterator to handle it.
@@ -344,7 +344,7 @@ Construction and Destruction
This flag is incompatible with :c:data:`NPY_ITER_C_INDEX`,
:c:data:`NPY_ITER_F_INDEX`, and :c:data:`NPY_ITER_MULTI_INDEX`.
- .. c:var:: NPY_ITER_DONT_NEGATE_STRIDES
+ .. c:macro:: NPY_ITER_DONT_NEGATE_STRIDES
This only affects the iterator when :c:type:`NPY_KEEPORDER` is
specified for the order parameter. By default with
@@ -355,7 +355,7 @@ Construction and Destruction
but don't want an axis reversed. This is the behavior of
``numpy.ravel(a, order='K')``, for instance.
- .. c:var:: NPY_ITER_COMMON_DTYPE
+ .. c:macro:: NPY_ITER_COMMON_DTYPE
Causes the iterator to convert all the operands to a common
data type, calculated based on the ufunc type promotion rules.
@@ -364,7 +364,7 @@ Construction and Destruction
If the common data type is known ahead of time, don't use this
flag. Instead, set the requested dtype for all the operands.
- .. c:var:: NPY_ITER_REFS_OK
+ .. c:macro:: NPY_ITER_REFS_OK
Indicates that arrays with reference types (object
arrays or structured arrays containing an object type)
@@ -373,7 +373,7 @@ Construction and Destruction
:c:func:`NpyIter_IterationNeedsAPI(iter)` is true, in which case
it may not release the GIL during iteration.
- .. c:var:: NPY_ITER_ZEROSIZE_OK
+ .. c:macro:: NPY_ITER_ZEROSIZE_OK
Indicates that arrays with a size of zero should be permitted.
Since the typical iteration loop does not naturally work with
@@ -381,7 +381,7 @@ Construction and Destruction
than zero before entering the iteration loop.
Currently only the operands are checked, not a forced shape.
- .. c:var:: NPY_ITER_REDUCE_OK
+ .. c:macro:: NPY_ITER_REDUCE_OK
Permits writeable operands with a dimension with zero
stride and size greater than one. Note that such operands
@@ -400,7 +400,7 @@ Construction and Destruction
after initializing the allocated operand to prepare the
buffers.
- .. c:var:: NPY_ITER_RANGED
+ .. c:macro:: NPY_ITER_RANGED
Enables support for iteration of sub-ranges of the full
``iterindex`` range ``[0, NpyIter_IterSize(iter))``. Use
@@ -414,7 +414,7 @@ Construction and Destruction
would require special handling, effectively making it more
like the buffered version.
- .. c:var:: NPY_ITER_BUFFERED
+ .. c:macro:: NPY_ITER_BUFFERED
Causes the iterator to store buffering data, and use buffering
to satisfy data type, alignment, and byte-order requirements.
@@ -441,7 +441,7 @@ Construction and Destruction
the inner loops may become smaller depending
on the structure of the reduction.
- .. c:var:: NPY_ITER_GROWINNER
+ .. c:macro:: NPY_ITER_GROWINNER
When buffering is enabled, this allows the size of the inner
loop to grow when buffering isn't necessary. This option
@@ -449,7 +449,7 @@ Construction and Destruction
data, rather than anything with small cache-friendly arrays
of temporary values for each inner loop.
- .. c:var:: NPY_ITER_DELAY_BUFALLOC
+ .. c:macro:: NPY_ITER_DELAY_BUFALLOC
When buffering is enabled, this delays allocation of the
buffers until :c:func:`NpyIter_Reset` or another reset function is
@@ -465,7 +465,7 @@ Construction and Destruction
Then, call :c:func:`NpyIter_Reset` to allocate and fill the buffers
with their initial values.
- .. c:var:: NPY_ITER_COPY_IF_OVERLAP
+ .. c:macro:: NPY_ITER_COPY_IF_OVERLAP
If any write operand has overlap with any read operand, eliminate all
overlap by making temporary copies (enabling UPDATEIFCOPY for write
@@ -484,9 +484,9 @@ Construction and Destruction
Flags that may be passed in ``op_flags[i]``, where ``0 <= i < nop``:
- .. c:var:: NPY_ITER_READWRITE
- .. c:var:: NPY_ITER_READONLY
- .. c:var:: NPY_ITER_WRITEONLY
+ .. c:macro:: NPY_ITER_READWRITE
+ .. c:macro:: NPY_ITER_READONLY
+ .. c:macro:: NPY_ITER_WRITEONLY
Indicate how the user of the iterator will read or write
to ``op[i]``. Exactly one of these flags must be specified
@@ -495,13 +495,13 @@ Construction and Destruction
semantics. The data will be written back to the original array
when ``NpyIter_Deallocate`` is called.
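The write-back behavior can be sketched with the Python mirror, where leaving the ``with`` block plays the role of ``NpyIter_Deallocate``:

```python
import numpy as np

a = np.arange(4)
# 'readwrite' mirrors NPY_ITER_READWRITE; closing the iterator (end of
# the with-block) triggers any pending write-back to the original array.
with np.nditer(a, op_flags=[['readwrite']]) as it:
    for x in it:
        x[...] = 2 * x
print(a)  # [0 2 4 6]
```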
- .. c:var:: NPY_ITER_COPY
+ .. c:macro:: NPY_ITER_COPY
Allow a copy of ``op[i]`` to be made if it does not
meet the data type or alignment requirements as specified
by the constructor flags and parameters.
- .. c:var:: NPY_ITER_UPDATEIFCOPY
+ .. c:macro:: NPY_ITER_UPDATEIFCOPY
Triggers :c:data:`NPY_ITER_COPY`, and when an array operand
is flagged for writing and is copied, causes the data
@@ -513,9 +513,9 @@ Construction and Destruction
to back to ``op[i]`` on calling ``NpyIter_Deallocate``, instead of
doing the unnecessary copy operation.
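A Python-level sketch of the copy/write-back behavior, using ``np.nditer``'s ``'updateifcopy'`` operand flag (note ``casting='unsafe'`` is needed here because the write-back casts float64 back to int32):

```python
import numpy as np

a = np.arange(4, dtype=np.int32)
# 'updateifcopy' implies 'copy' (as NPY_ITER_UPDATEIFCOPY triggers
# NPY_ITER_COPY): the iterator works on a float64 temporary and writes
# it back to a when the iterator is deallocated (on leaving the block).
with np.nditer([a], op_flags=[['readwrite', 'updateifcopy']],
               op_dtypes=[np.float64], casting='unsafe') as it:
    for x in it:
        x[...] = x * 2
print(a)  # [0 2 4 6]
```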
- .. c:var:: NPY_ITER_NBO
- .. c:var:: NPY_ITER_ALIGNED
- .. c:var:: NPY_ITER_CONTIG
+ .. c:macro:: NPY_ITER_NBO
+ .. c:macro:: NPY_ITER_ALIGNED
+ .. c:macro:: NPY_ITER_CONTIG
Causes the iterator to provide data for ``op[i]``
that is in native byte order, aligned according to
@@ -534,7 +534,7 @@ Construction and Destruction
the NBO flag overrides it and the requested data type is
converted to be in native byte order.
- .. c:var:: NPY_ITER_ALLOCATE
+ .. c:macro:: NPY_ITER_ALLOCATE
This is for output arrays, and requires that the flag
:c:data:`NPY_ITER_WRITEONLY` or :c:data:`NPY_ITER_READWRITE`
@@ -557,7 +557,7 @@ Construction and Destruction
getting the i-th object in the returned C array. The caller
must call Py_INCREF on it to claim a reference to the array.
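The allocation behavior can be sketched from Python by passing ``None`` as an operand with the ``'allocate'`` flag, the mirror of ``NPY_ITER_ALLOCATE``:

```python
import numpy as np

a = np.arange(3.0)
# None + 'allocate' asks the iterator to create the output array; it is
# then retrieved through it.operands, just as the C API returns the
# allocated operand from NpyIter_GetOperandArray.
with np.nditer([a, None],
               op_flags=[['readonly'], ['writeonly', 'allocate']]) as it:
    for x, out in it:
        out[...] = 2 * x
    result = it.operands[1]
print(result)  # [0. 2. 4.]
```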
- .. c:var:: NPY_ITER_NO_SUBTYPE
+ .. c:macro:: NPY_ITER_NO_SUBTYPE
For use with :c:data:`NPY_ITER_ALLOCATE`, this flag disables
allocating an array subtype for the output, forcing
@@ -566,12 +566,12 @@ Construction and Destruction
TODO: Maybe it would be better to introduce a function
``NpyIter_GetWrappedOutput`` and remove this flag?
- .. c:var:: NPY_ITER_NO_BROADCAST
+ .. c:macro:: NPY_ITER_NO_BROADCAST
Ensures that the input or output matches the iteration
dimensions exactly.
- .. c:var:: NPY_ITER_ARRAYMASK
+ .. c:macro:: NPY_ITER_ARRAYMASK
.. versionadded:: 1.7
@@ -595,7 +595,7 @@ Construction and Destruction
modified. This is useful when the mask should be a combination
of input masks.
- .. c:var:: NPY_ITER_WRITEMASKED
+ .. c:macro:: NPY_ITER_WRITEMASKED
.. versionadded:: 1.7
@@ -613,7 +613,7 @@ Construction and Destruction
returns true from the corresponding element in the ARRAYMASK
operand.
- .. c:var:: NPY_ITER_OVERLAP_ASSUME_ELEMENTWISE
+ .. c:macro:: NPY_ITER_OVERLAP_ASSUME_ELEMENTWISE
In memory overlap checks, assume that operands with
``NPY_ITER_OVERLAP_ASSUME_ELEMENTWISE`` enabled are accessed only
@@ -707,7 +707,7 @@ Construction and Destruction
:c:func:`NpyIter_Deallocate` must be called for each copy.
-.. c:function:: int NpyIter_RemoveAxis(NpyIter* iter, int axis)``
+.. c:function:: int NpyIter_RemoveAxis(NpyIter* iter, int axis)
Removes an axis from iteration. This requires that
:c:data:`NPY_ITER_MULTI_INDEX` was set for iterator creation, and does
@@ -1264,7 +1264,7 @@ functions provide that information.
NPY_MAX_INTP is placed in the stride.
Once the iterator is prepared for iteration (after a reset if
- :c:data:`NPY_DELAY_BUFALLOC` was used), call this to get the strides
+ :c:data:`NPY_ITER_DELAY_BUFALLOC` was used), call this to get the strides
which may be used to select a fast inner loop function. For example,
if the stride is 0, that means the inner loop can always load its
value into a variable once, then use the variable throughout the loop,
diff --git a/doc/source/reference/c-api/types-and-structures.rst b/doc/source/reference/c-api/types-and-structures.rst
index 60d8e420b..763f985a6 100644
--- a/doc/source/reference/c-api/types-and-structures.rst
+++ b/doc/source/reference/c-api/types-and-structures.rst
@@ -26,7 +26,7 @@ By constructing a new Python type you make available a new object for
Python. The ndarray object is an example of a new type defined in C.
New types are defined in C by two basic steps:
-1. creating a C-structure (usually named :c:type:`Py{Name}Object`) that is
+1. creating a C-structure (usually named ``Py{Name}Object``) that is
binary- compatible with the :c:type:`PyObject` structure itself but holds
the additional information needed for that particular object;
@@ -69,6 +69,7 @@ PyArray_Type and PyArrayObject
typeobject.
.. c:type:: PyArrayObject
+ NPY_AO
The :c:type:`PyArrayObject` C-structure contains all of the required
information for an array. All instances of an ndarray (and its
@@ -77,7 +78,9 @@ PyArray_Type and PyArrayObject
provided macros. If you need a shorter name, then you can make use
of :c:type:`NPY_AO` (deprecated) which is defined to be equivalent to
:c:type:`PyArrayObject`. Direct access to the struct fields is
- deprecated. Use the `PyArray_*(arr)` form instead.
+ deprecated. Use the ``PyArray_*(arr)`` form instead.
+ As of NumPy 1.20, the size of this struct is not considered part of
+ the NumPy ABI (see note at the end of the member list).
.. code-block:: c
@@ -91,86 +94,108 @@ PyArray_Type and PyArrayObject
PyArray_Descr *descr;
int flags;
PyObject *weakreflist;
+        /* version dependent private members */
} PyArrayObject;
-.. c:macro:: PyArrayObject.PyObject_HEAD
+ .. c:macro:: PyObject_HEAD
- This is needed by all Python objects. It consists of (at least)
- a reference count member ( ``ob_refcnt`` ) and a pointer to the
- typeobject ( ``ob_type`` ). (Other elements may also be present
- if Python was compiled with special options see
- Include/object.h in the Python source tree for more
- information). The ob_type member points to a Python type
- object.
+ This is needed by all Python objects. It consists of (at least)
+ a reference count member ( ``ob_refcnt`` ) and a pointer to the
+ typeobject ( ``ob_type`` ). (Other elements may also be present
+       if Python was compiled with special options; see
+ Include/object.h in the Python source tree for more
+ information). The ob_type member points to a Python type
+ object.
-.. c:member:: char *PyArrayObject.data
+ .. c:member:: char *data
- Accessible via :c:data:`PyArray_DATA`, this data member is a
- pointer to the first element of the array. This pointer can
- (and normally should) be recast to the data type of the array.
+ Accessible via :c:data:`PyArray_DATA`, this data member is a
+ pointer to the first element of the array. This pointer can
+ (and normally should) be recast to the data type of the array.
-.. c:member:: int PyArrayObject.nd
+ .. c:member:: int nd
- An integer providing the number of dimensions for this
- array. When nd is 0, the array is sometimes called a rank-0
- array. Such arrays have undefined dimensions and strides and
- cannot be accessed. Macro :c:data:`PyArray_NDIM` defined in
- ``ndarraytypes.h`` points to this data member. :c:data:`NPY_MAXDIMS`
- is the largest number of dimensions for any array.
+ An integer providing the number of dimensions for this
+ array. When nd is 0, the array is sometimes called a rank-0
+ array. Such arrays have undefined dimensions and strides and
+ cannot be accessed. Macro :c:data:`PyArray_NDIM` defined in
+ ``ndarraytypes.h`` points to this data member. :c:data:`NPY_MAXDIMS`
+ is the largest number of dimensions for any array.
-.. c:member:: npy_intp PyArrayObject.dimensions
+ .. c:member:: npy_intp dimensions
- An array of integers providing the shape in each dimension as
- long as nd :math:`\geq` 1. The integer is always large enough
- to hold a pointer on the platform, so the dimension size is
- only limited by memory. :c:data:`PyArray_DIMS` is the macro
- associated with this data member.
+ An array of integers providing the shape in each dimension as
+ long as nd :math:`\geq` 1. The integer is always large enough
+ to hold a pointer on the platform, so the dimension size is
+ only limited by memory. :c:data:`PyArray_DIMS` is the macro
+ associated with this data member.
-.. c:member:: npy_intp *PyArrayObject.strides
+ .. c:member:: npy_intp *strides
- An array of integers providing for each dimension the number of
- bytes that must be skipped to get to the next element in that
- dimension. Associated with macro :c:data:`PyArray_STRIDES`.
+ An array of integers providing for each dimension the number of
+ bytes that must be skipped to get to the next element in that
+ dimension. Associated with macro :c:data:`PyArray_STRIDES`.
-.. c:member:: PyObject *PyArrayObject.base
+ .. c:member:: PyObject *base
- Pointed to by :c:data:`PyArray_BASE`, this member is used to hold a
- pointer to another Python object that is related to this array.
- There are two use cases:
+ Pointed to by :c:data:`PyArray_BASE`, this member is used to hold a
+ pointer to another Python object that is related to this array.
+ There are two use cases:
- - If this array does not own its own memory, then base points to the
- Python object that owns it (perhaps another array object)
- - If this array has the (deprecated) :c:data:`NPY_ARRAY_UPDATEIFCOPY` or
- :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` flag set, then this array is a working
- copy of a "misbehaved" array.
+ - If this array does not own its own memory, then base points to the
+ Python object that owns it (perhaps another array object)
+ - If this array has the (deprecated) :c:data:`NPY_ARRAY_UPDATEIFCOPY` or
+ :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` flag set, then this array is a working
+ copy of a "misbehaved" array.
- When ``PyArray_ResolveWritebackIfCopy`` is called, the array pointed to
- by base will be updated with the contents of this array.
+ When ``PyArray_ResolveWritebackIfCopy`` is called, the array pointed to
+ by base will be updated with the contents of this array.
-.. c:member:: PyArray_Descr *PyArrayObject.descr
+ .. c:member:: PyArray_Descr *descr
- A pointer to a data-type descriptor object (see below). The
- data-type descriptor object is an instance of a new built-in
- type which allows a generic description of memory. There is a
- descriptor structure for each data type supported. This
- descriptor structure contains useful information about the type
- as well as a pointer to a table of function pointers to
- implement specific functionality. As the name suggests, it is
- associated with the macro :c:data:`PyArray_DESCR`.
+ A pointer to a data-type descriptor object (see below). The
+ data-type descriptor object is an instance of a new built-in
+ type which allows a generic description of memory. There is a
+ descriptor structure for each data type supported. This
+ descriptor structure contains useful information about the type
+ as well as a pointer to a table of function pointers to
+ implement specific functionality. As the name suggests, it is
+ associated with the macro :c:data:`PyArray_DESCR`.
-.. c:member:: int PyArrayObject.flags
+ .. c:member:: int flags
- Pointed to by the macro :c:data:`PyArray_FLAGS`, this data member represents
- the flags indicating how the memory pointed to by data is to be
- interpreted. Possible flags are :c:data:`NPY_ARRAY_C_CONTIGUOUS`,
- :c:data:`NPY_ARRAY_F_CONTIGUOUS`, :c:data:`NPY_ARRAY_OWNDATA`,
- :c:data:`NPY_ARRAY_ALIGNED`, :c:data:`NPY_ARRAY_WRITEABLE`,
- :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`, and :c:data:`NPY_ARRAY_UPDATEIFCOPY`.
+ Pointed to by the macro :c:data:`PyArray_FLAGS`, this data member represents
+ the flags indicating how the memory pointed to by data is to be
+ interpreted. Possible flags are :c:data:`NPY_ARRAY_C_CONTIGUOUS`,
+ :c:data:`NPY_ARRAY_F_CONTIGUOUS`, :c:data:`NPY_ARRAY_OWNDATA`,
+ :c:data:`NPY_ARRAY_ALIGNED`, :c:data:`NPY_ARRAY_WRITEABLE`,
+ :c:data:`NPY_ARRAY_WRITEBACKIFCOPY`, and :c:data:`NPY_ARRAY_UPDATEIFCOPY`.
-.. c:member:: PyObject *PyArrayObject.weakreflist
+ .. c:member:: PyObject *weakreflist
- This member allows array objects to have weak references (using the
- weakref module).
+ This member allows array objects to have weak references (using the
+ weakref module).
+
+ .. note::
+
+      Further members are considered private and version dependent. If the size
+      of the struct is important for your code, special care must be taken.
+      A possible use case where this is relevant is subclassing in C.
+      If your code relies on ``sizeof(PyArrayObject)`` being constant,
+ you must add the following check at import time:
+
+ .. code-block:: c
+
+ if (sizeof(PyArrayObject) < PyArray_Type.tp_basicsize) {
+ PyErr_SetString(PyExc_ImportError,
+ "Binary incompatibility with NumPy, must recompile/update X.");
+ return NULL;
+ }
+
+      To ensure that your code does not have to be compiled for a specific
+      NumPy version, you may add a constant to the allocated size, leaving
+      room for future changes in NumPy. A solution guaranteed to be compatible
+      with any future NumPy version requires the use of a runtime-calculated
+      offset and allocation size.
PyArrayDescr_Type and PyArray_Descr
@@ -226,197 +251,196 @@ PyArrayDescr_Type and PyArray_Descr
npy_hash_t hash;
} PyArray_Descr;
-.. c:member:: PyTypeObject *PyArray_Descr.typeobj
-
- Pointer to a typeobject that is the corresponding Python type for
- the elements of this array. For the builtin types, this points to
- the corresponding array scalar. For user-defined types, this
- should point to a user-defined typeobject. This typeobject can
- either inherit from array scalars or not. If it does not inherit
- from array scalars, then the :c:data:`NPY_USE_GETITEM` and
- :c:data:`NPY_USE_SETITEM` flags should be set in the ``flags`` member.
+ .. c:member:: PyTypeObject *typeobj
-.. c:member:: char PyArray_Descr.kind
+ Pointer to a typeobject that is the corresponding Python type for
+ the elements of this array. For the builtin types, this points to
+ the corresponding array scalar. For user-defined types, this
+ should point to a user-defined typeobject. This typeobject can
+ either inherit from array scalars or not. If it does not inherit
+ from array scalars, then the :c:data:`NPY_USE_GETITEM` and
+ :c:data:`NPY_USE_SETITEM` flags should be set in the ``flags`` member.
- A character code indicating the kind of array (using the array
- interface typestring notation). A 'b' represents Boolean, a 'i'
- represents signed integer, a 'u' represents unsigned integer, 'f'
- represents floating point, 'c' represents complex floating point, 'S'
- represents 8-bit zero-terminated bytes, 'U' represents 32-bit/character
- unicode string, and 'V' represents arbitrary.
+ .. c:member:: char kind
-.. c:member:: char PyArray_Descr.type
+ A character code indicating the kind of array (using the array
+       interface typestring notation). A 'b' represents Boolean, an 'i'
+ represents signed integer, a 'u' represents unsigned integer, 'f'
+ represents floating point, 'c' represents complex floating point, 'S'
+ represents 8-bit zero-terminated bytes, 'U' represents 32-bit/character
+ unicode string, and 'V' represents arbitrary.
- A traditional character code indicating the data type.
+ .. c:member:: char type
-.. c:member:: char PyArray_Descr.byteorder
+ A traditional character code indicating the data type.
- A character indicating the byte-order: '>' (big-endian), '<' (little-
- endian), '=' (native), '\|' (irrelevant, ignore). All builtin data-
- types have byteorder '='.
+ .. c:member:: char byteorder
-.. c:member:: char PyArray_Descr.flags
+ A character indicating the byte-order: '>' (big-endian), '<' (little-
+ endian), '=' (native), '\|' (irrelevant, ignore). All builtin data-
+ types have byteorder '='.
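These three struct members surface in Python as ``dtype.kind``, ``dtype.char`` and ``dtype.byteorder``, which makes them easy to inspect:

```python
import numpy as np

# kind, type and byteorder map to .kind, .char and .byteorder on np.dtype.
dt = np.dtype('>i4')
print(dt.kind, dt.char, dt.byteorder)  # i i >
print(np.dtype(np.int32).byteorder)    # = (native order for builtin types)
```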
- A data-type bit-flag that determines if the data-type exhibits object-
- array like behavior. Each bit in this member is a flag which are named
- as:
+ .. c:member:: char flags
- .. c:var:: NPY_ITEM_REFCOUNT
+       A data-type bit-flag that determines if the data-type exhibits
+       object-array-like behavior. The bits in this member are flags named
+       as follows:
- Indicates that items of this data-type must be reference
- counted (using :c:func:`Py_INCREF` and :c:func:`Py_DECREF` ).
+ .. c:macro:: NPY_ITEM_REFCOUNT
- .. c:var:: NPY_ITEM_HASOBJECT
+ Indicates that items of this data-type must be reference
+ counted (using :c:func:`Py_INCREF` and :c:func:`Py_DECREF` ).
- Same as :c:data:`NPY_ITEM_REFCOUNT`.
+ .. c:macro:: NPY_ITEM_HASOBJECT
- .. c:var:: NPY_LIST_PICKLE
+ Same as :c:data:`NPY_ITEM_REFCOUNT`.
- Indicates arrays of this data-type must be converted to a list
- before pickling.
+ .. c:macro:: NPY_LIST_PICKLE
- .. c:var:: NPY_ITEM_IS_POINTER
+ Indicates arrays of this data-type must be converted to a list
+ before pickling.
- Indicates the item is a pointer to some other data-type
+ .. c:macro:: NPY_ITEM_IS_POINTER
- .. c:var:: NPY_NEEDS_INIT
+ Indicates the item is a pointer to some other data-type
- Indicates memory for this data-type must be initialized (set
- to 0) on creation.
+ .. c:macro:: NPY_NEEDS_INIT
- .. c:var:: NPY_NEEDS_PYAPI
+ Indicates memory for this data-type must be initialized (set
+ to 0) on creation.
- Indicates this data-type requires the Python C-API during
- access (so don't give up the GIL if array access is going to
- be needed).
+ .. c:macro:: NPY_NEEDS_PYAPI
- .. c:var:: NPY_USE_GETITEM
+ Indicates this data-type requires the Python C-API during
+ access (so don't give up the GIL if array access is going to
+ be needed).
- On array access use the ``f->getitem`` function pointer
- instead of the standard conversion to an array scalar. Must
- use if you don't define an array scalar to go along with
- the data-type.
+ .. c:macro:: NPY_USE_GETITEM
- .. c:var:: NPY_USE_SETITEM
+ On array access use the ``f->getitem`` function pointer
+ instead of the standard conversion to an array scalar. Must
+ use if you don't define an array scalar to go along with
+ the data-type.
- When creating a 0-d array from an array scalar use
- ``f->setitem`` instead of the standard copy from an array
- scalar. Must use if you don't define an array scalar to go
- along with the data-type.
+ .. c:macro:: NPY_USE_SETITEM
- .. c:var:: NPY_FROM_FIELDS
+ When creating a 0-d array from an array scalar use
+ ``f->setitem`` instead of the standard copy from an array
+ scalar. Must use if you don't define an array scalar to go
+ along with the data-type.
- The bits that are inherited for the parent data-type if these
- bits are set in any field of the data-type. Currently (
- :c:data:`NPY_NEEDS_INIT` \| :c:data:`NPY_LIST_PICKLE` \|
- :c:data:`NPY_ITEM_REFCOUNT` \| :c:data:`NPY_NEEDS_PYAPI` ).
+ .. c:macro:: NPY_FROM_FIELDS
- .. c:var:: NPY_OBJECT_DTYPE_FLAGS
+ The bits that are inherited for the parent data-type if these
+ bits are set in any field of the data-type. Currently (
+ :c:data:`NPY_NEEDS_INIT` \| :c:data:`NPY_LIST_PICKLE` \|
+ :c:data:`NPY_ITEM_REFCOUNT` \| :c:data:`NPY_NEEDS_PYAPI` ).
- Bits set for the object data-type: ( :c:data:`NPY_LIST_PICKLE`
- \| :c:data:`NPY_USE_GETITEM` \| :c:data:`NPY_ITEM_IS_POINTER` \|
- :c:data:`NPY_REFCOUNT` \| :c:data:`NPY_NEEDS_INIT` \|
- :c:data:`NPY_NEEDS_PYAPI`).
+ .. c:macro:: NPY_OBJECT_DTYPE_FLAGS
- .. c:function:: PyDataType_FLAGCHK(PyArray_Descr *dtype, int flags)
+ Bits set for the object data-type: ( :c:data:`NPY_LIST_PICKLE`
+ \| :c:data:`NPY_USE_GETITEM` \| :c:data:`NPY_ITEM_IS_POINTER` \|
+ :c:data:`NPY_ITEM_REFCOUNT` \| :c:data:`NPY_NEEDS_INIT` \|
+ :c:data:`NPY_NEEDS_PYAPI`).
- Return true if all the given flags are set for the data-type
- object.
+ .. c:function:: int PyDataType_FLAGCHK(PyArray_Descr *dtype, int flags)
- .. c:function:: PyDataType_REFCHK(PyArray_Descr *dtype)
+ Return true if all the given flags are set for the data-type
+ object.
- Equivalent to :c:func:`PyDataType_FLAGCHK` (*dtype*,
- :c:data:`NPY_ITEM_REFCOUNT`).
+ .. c:function:: int PyDataType_REFCHK(PyArray_Descr *dtype)
-.. c:member:: int PyArray_Descr.type_num
+ Equivalent to :c:func:`PyDataType_FLAGCHK` (*dtype*,
+ :c:data:`NPY_ITEM_REFCOUNT`).
- A number that uniquely identifies the data type. For new data-types,
- this number is assigned when the data-type is registered.
+ .. c:member:: int type_num
-.. c:member:: int PyArray_Descr.elsize
+ A number that uniquely identifies the data type. For new data-types,
+ this number is assigned when the data-type is registered.
- For data types that are always the same size (such as long), this
- holds the size of the data type. For flexible data types where
- different arrays can have a different elementsize, this should be
- 0.
+ .. c:member:: int elsize
-.. c:member:: int PyArray_Descr.alignment
+ For data types that are always the same size (such as long), this
+ holds the size of the data type. For flexible data types where
+ different arrays can have a different elementsize, this should be
+ 0.
- A number providing alignment information for this data type.
- Specifically, it shows how far from the start of a 2-element
- structure (whose first element is a ``char`` ), the compiler
- places an item of this type: ``offsetof(struct {char c; type v;},
- v)``
+ .. c:member:: int alignment
-.. c:member:: PyArray_ArrayDescr *PyArray_Descr.subarray
+ A number providing alignment information for this data type.
+ Specifically, it shows how far from the start of a 2-element
+ structure (whose first element is a ``char`` ), the compiler
+ places an item of this type: ``offsetof(struct {char c; type v;},
+ v)``
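The ``elsize`` and ``alignment`` members appear in Python as ``dtype.itemsize`` and ``dtype.alignment``, and the ``offsetof`` construction above can be reproduced with an aligned structured dtype (values shown assume common 64-bit platforms):

```python
import numpy as np

dt = np.dtype(np.float64)
print(dt.itemsize, dt.alignment)  # typically 8 8

# offsetof(struct {char c; double v;}, v) == alignment of double:
rec = np.dtype([('c', 'S1'), ('v', np.float64)], align=True)
offset_of_v = rec.fields['v'][1]
print(offset_of_v == dt.alignment)  # True
```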
- If this is non- ``NULL``, then this data-type descriptor is a
- C-style contiguous array of another data-type descriptor. In
- other-words, each element that this descriptor describes is
- actually an array of some other base descriptor. This is most
- useful as the data-type descriptor for a field in another
- data-type descriptor. The fields member should be ``NULL`` if this
- is non- ``NULL`` (the fields member of the base descriptor can be
- non- ``NULL`` however). The :c:type:`PyArray_ArrayDescr` structure is
- defined using
+ .. c:member:: PyArray_ArrayDescr *subarray
- .. code-block:: c
+       If this is non-``NULL``, then this data-type descriptor is a
+       C-style contiguous array of another data-type descriptor. In
+       other words, each element that this descriptor describes is
+       actually an array of some other base descriptor. This is most
+       useful as the data-type descriptor for a field in another
+       data-type descriptor. The fields member should be ``NULL`` if this
+       is non-``NULL`` (the fields member of the base descriptor can be
+       non-``NULL``, however).
- typedef struct {
- PyArray_Descr *base;
- PyObject *shape;
- } PyArray_ArrayDescr;
+ .. c:type:: PyArray_ArrayDescr
- The elements of this structure are:
+ .. code-block:: c
- .. c:member:: PyArray_Descr *PyArray_ArrayDescr.base
+ typedef struct {
+ PyArray_Descr *base;
+ PyObject *shape;
+ } PyArray_ArrayDescr;
- The data-type-descriptor object of the base-type.
+ .. c:member:: PyArray_Descr *base
- .. c:member:: PyObject *PyArray_ArrayDescr.shape
+ The data-type-descriptor object of the base-type.
- The shape (always C-style contiguous) of the sub-array as a Python
- tuple.
+ .. c:member:: PyObject *shape
+ The shape (always C-style contiguous) of the sub-array as a Python
+ tuple.
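The ``subarray`` member (its ``base`` and ``shape``) is exposed in Python as ``dtype.subdtype``:

```python
import numpy as np

# A dtype whose elements are themselves (2, 3) int32 sub-arrays; the C
# subarray member (base + shape) appears in Python as dtype.subdtype.
dt = np.dtype((np.int32, (2, 3)))
base, shape = dt.subdtype
print(base, shape)  # int32 (2, 3)
```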
-.. c:member:: PyObject *PyArray_Descr.fields
+ .. c:member:: PyObject *fields
- If this is non-NULL, then this data-type-descriptor has fields
- described by a Python dictionary whose keys are names (and also
- titles if given) and whose values are tuples that describe the
- fields. Recall that a data-type-descriptor always describes a
- fixed-length set of bytes. A field is a named sub-region of that
- total, fixed-length collection. A field is described by a tuple
- composed of another data- type-descriptor and a byte
- offset. Optionally, the tuple may contain a title which is
- normally a Python string. These tuples are placed in this
- dictionary keyed by name (and also title if given).
+ If this is non-NULL, then this data-type-descriptor has fields
+ described by a Python dictionary whose keys are names (and also
+ titles if given) and whose values are tuples that describe the
+ fields. Recall that a data-type-descriptor always describes a
+ fixed-length set of bytes. A field is a named sub-region of that
+ total, fixed-length collection. A field is described by a tuple
+       composed of another data-type-descriptor and a byte
+ offset. Optionally, the tuple may contain a title which is
+ normally a Python string. These tuples are placed in this
+ dictionary keyed by name (and also title if given).
-.. c:member:: PyObject *PyArray_Descr.names
+ .. c:member:: PyObject *names
- An ordered tuple of field names. It is NULL if no field is
- defined.
+ An ordered tuple of field names. It is NULL if no field is
+ defined.
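Both members are mirrored on ``np.dtype`` under the same names, which makes the dictionary layout easy to see:

```python
import numpy as np

dt = np.dtype([('x', np.float64), ('y', np.int32)])
# fields maps each name to a (descriptor, byte offset[, title]) tuple;
# names preserves the field order.
print(dt.names)        # ('x', 'y')
print(dt.fields['y'])  # (dtype('int32'), 8) -- y starts after the 8-byte x
```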
-.. c:member:: PyArray_ArrFuncs *PyArray_Descr.f
+ .. c:member:: PyArray_ArrFuncs *f
- A pointer to a structure containing functions that the type needs
- to implement internal features. These functions are not the same
- thing as the universal functions (ufuncs) described later. Their
- signatures can vary arbitrarily.
+ A pointer to a structure containing functions that the type needs
+ to implement internal features. These functions are not the same
+ thing as the universal functions (ufuncs) described later. Their
+ signatures can vary arbitrarily.
-.. c:member:: PyObject *PyArray_Descr.metadata
+ .. c:member:: PyObject *metadata
- Metadata about this dtype.
+ Metadata about this dtype.
-.. c:member:: NpyAuxData *PyArray_Descr.c_metadata
+ .. c:member:: NpyAuxData *c_metadata
- Metadata specific to the C implementation
- of the particular dtype. Added for NumPy 1.7.0.
+ Metadata specific to the C implementation
+ of the particular dtype. Added for NumPy 1.7.0.
-.. c:member:: Npy_hash_t *PyArray_Descr.hash
+ .. c:type:: npy_hash_t
+ .. c:member:: npy_hash_t *hash
- Currently unused. Reserved for future use in caching
- hash values.
+ Currently unused. Reserved for future use in caching
+ hash values.
.. c:type:: PyArray_ArrFuncs
@@ -568,7 +592,7 @@ PyArrayDescr_Type and PyArray_Descr
This function should be called without holding the Python GIL, and
has to grab it for error reporting.
- .. c:member:: Bool nonzero(void* data, void* arr)
+ .. c:member:: npy_bool nonzero(void* data, void* arr)
A pointer to a function that returns TRUE if the item of
``arr`` pointed to by ``data`` is nonzero. This function can
@@ -612,7 +636,8 @@ PyArrayDescr_Type and PyArray_Descr
Either ``NULL`` or a dictionary containing low-level casting
functions for user- defined data-types. Each function is
- wrapped in a :c:type:`PyCObject *` and keyed by the data-type number.
+ wrapped in a :c:type:`PyCapsule *<PyCapsule>` and keyed by
+ the data-type number.
.. c:member:: NPY_SCALARKIND scalarkind(PyArrayObject* arr)
@@ -791,35 +816,37 @@ PyUFunc_Type and PyUFuncObject
npy_uint32 *iter_flags;
/* new in API version 0x0000000D */
npy_intp *core_dim_sizes;
- npy_intp *core_dim_flags;
-
+ npy_uint32 *core_dim_flags;
+ PyObject *identity_value;
} PyUFuncObject;
- .. c:macro: PyUFuncObject.PyObject_HEAD
+    .. c:macro:: PyObject_HEAD
required for all Python objects.
- .. c:member:: int PyUFuncObject.nin
+ .. c:member:: int nin
The number of input arguments.
- .. c:member:: int PyUFuncObject.nout
+ .. c:member:: int nout
The number of output arguments.
- .. c:member:: int PyUFuncObject.nargs
+ .. c:member:: int nargs
The total number of arguments (*nin* + *nout*). This must be
less than :c:data:`NPY_MAXARGS`.
- .. c:member:: int PyUFuncObject.identity
+ .. c:member:: int identity
Either :c:data:`PyUFunc_One`, :c:data:`PyUFunc_Zero`,
- :c:data:`PyUFunc_None` or :c:data:`PyUFunc_AllOnes` to indicate
+ :c:data:`PyUFunc_MinusOne`, :c:data:`PyUFunc_None`,
+ :c:data:`PyUFunc_ReorderableNone`, or
+ :c:data:`PyUFunc_IdentityValue` to indicate
the identity for this operation. It is only used for a
reduce-like call on an empty array.
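These members are exposed as attributes on every Python-level ufunc, where the identity's role in an empty reduce can be seen directly:

```python
import numpy as np

# nin, nout, nargs and identity are mirrored as ufunc attributes.
print(np.add.nin, np.add.nout, np.add.nargs)  # 2 1 3
print(np.add.identity, np.multiply.identity)  # 0 1
print(np.maximum.identity)                    # None (reorderable none)
print(np.add.reduce(np.array([])))            # 0.0 -- the identity is used
```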
- .. c:member:: void PyUFuncObject.functions( \
+ .. c:member:: void functions( \
char** args, npy_intp* dims, npy_intp* steps, void* extradata)
An array of function pointers --- one for each data type
@@ -837,7 +864,7 @@ PyUFunc_Type and PyUFuncObject
passed in as *extradata*. The size of this function pointer
array is ntypes.
- .. c:member:: void **PyUFuncObject.data
+ .. c:member:: void **data
Extra data to be passed to the 1-d vector loops or ``NULL`` if
no extra-data is needed. This C-array must be the same size (
@@ -846,22 +873,22 @@ PyUFunc_Type and PyUFuncObject
just 1-d vector loops that make use of this extra data to
receive a pointer to the actual function to call.
- .. c:member:: int PyUFuncObject.ntypes
+ .. c:member:: int ntypes
The number of supported data types for the ufunc. This number
specifies how many different 1-d loops (of the builtin data
types) are available.
- .. c:member:: int PyUFuncObject.reserved1
+ .. c:member:: int reserved1
Unused.
- .. c:member:: char *PyUFuncObject.name
+ .. c:member:: char *name
A string name for the ufunc. This is used dynamically to build
the __doc\__ attribute of ufuncs.
- .. c:member:: char *PyUFuncObject.types
+ .. c:member:: char *types
An array of :math:`nargs \times ntypes` 8-bit type_numbers
which contains the type signature for the function for each of
@@ -871,24 +898,24 @@ PyUFunc_Type and PyUFuncObject
vector loop. These type numbers do not have to be the same type
and mixed-type ufuncs are supported.
- .. c:member:: char *PyUFuncObject.doc
+ .. c:member:: char *doc
Documentation for the ufunc. Should not contain the function
signature as this is generated dynamically when __doc\__ is
retrieved.
- .. c:member:: void *PyUFuncObject.ptr
+ .. c:member:: void *ptr
Any dynamically allocated memory. Currently, this is used for
dynamic ufuncs created from a python function to store room for
the types, data, and name members.
- .. c:member:: PyObject *PyUFuncObject.obj
+ .. c:member:: PyObject *obj
For ufuncs dynamically created from python functions, this member
holds a reference to the underlying Python function.
- .. c:member:: PyObject *PyUFuncObject.userloops
+ .. c:member:: PyObject *userloops
A dictionary of user-defined 1-d vector loops (stored as CObject
ptrs) for user-defined types. A loop may be registered by the
@@ -896,74 +923,85 @@ PyUFunc_Type and PyUFuncObject
User defined type numbers are always larger than
:c:data:`NPY_USERDEF`.
- .. c:member:: int PyUFuncObject.core_enabled
+ .. c:member:: int core_enabled
0 for scalar ufuncs; 1 for generalized ufuncs
- .. c:member:: int PyUFuncObject.core_num_dim_ix
+ .. c:member:: int core_num_dim_ix
Number of distinct core dimension names in the signature
- .. c:member:: int *PyUFuncObject.core_num_dims
+ .. c:member:: int *core_num_dims
Number of core dimensions of each argument
- .. c:member:: int *PyUFuncObject.core_dim_ixs
+ .. c:member:: int *core_dim_ixs
Dimension indices in a flattened form; indices of argument ``k`` are
stored in ``core_dim_ixs[core_offsets[k] : core_offsets[k] +
core_numdims[k]]``
- .. c:member:: int *PyUFuncObject.core_offsets
+ .. c:member:: int *core_offsets
Position of 1st core dimension of each argument in ``core_dim_ixs``,
equivalent to cumsum(``core_num_dims``)
- .. c:member:: char *PyUFuncObject.core_signature
+ .. c:member:: char *core_signature
Core signature string
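The core signature is visible from Python as the ufunc's ``signature`` attribute; scalar ufuncs report ``None``, while generalized ufuncs such as ``matmul`` expose their full signature (including the flexible ``?`` dimensions governed by the core-dim flags below):

```python
import numpy as np

print(np.add.signature)     # None -- scalar ufunc, no core dimensions
print(np.matmul.signature)  # (n?,k),(k,m?)->(n?,m?)
```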
- .. c:member:: PyUFunc_TypeResolutionFunc *PyUFuncObject.type_resolver
+ .. c:member:: PyUFunc_TypeResolutionFunc *type_resolver
A function which resolves the types and fills an array with the dtypes
for the inputs and outputs
- .. c:member:: PyUFunc_LegacyInnerLoopSelectionFunc *PyUFuncObject.legacy_inner_loop_selector
+ .. c:member:: PyUFunc_LegacyInnerLoopSelectionFunc *legacy_inner_loop_selector
A function which returns an inner loop. The ``legacy`` in the name arises
because for NumPy 1.6 a better variant had been planned. This variant
has not yet come about.
- .. c:member:: void *PyUFuncObject.reserved2
+ .. c:member:: void *reserved2
For a possible future loop selector with a different signature.
- .. c:member:: PyUFunc_MaskedInnerLoopSelectionFunc *PyUFuncObject.masked_inner_loop_selector
+ .. c:member:: PyUFunc_MaskedInnerLoopSelectionFunc *masked_inner_loop_selector
Function which returns a masked inner loop for the ufunc
- .. c:member:: npy_uint32 PyUFuncObject.op_flags
+ .. c:member:: npy_uint32 op_flags
Override the default operand flags for each ufunc operand.
- .. c:member:: npy_uint32 PyUFuncObject.iter_flags
+ .. c:member:: npy_uint32 iter_flags
Override the default nditer flags for the ufunc.
Added in API version 0x0000000D
- .. c:member:: npy_intp *PyUFuncObject.core_dim_sizes
+ .. c:member:: npy_intp *core_dim_sizes
For each distinct core dimension, the possible
- :ref:`frozen <frozen>` size if :c:data:`UFUNC_CORE_DIM_SIZE_INFERRED` is 0
+ :ref:`frozen <frozen>` size if
+ :c:data:`UFUNC_CORE_DIM_SIZE_INFERRED` is ``0``
- .. c:member:: npy_uint32 *PyUFuncObject.core_dim_flags
+ .. c:member:: npy_uint32 *core_dim_flags
For each distinct core dimension, a set of ``UFUNC_CORE_DIM*`` flags
- - :c:data:`UFUNC_CORE_DIM_CAN_IGNORE` if the dim name ends in ``?``
- - :c:data:`UFUNC_CORE_DIM_SIZE_INFERRED` if the dim size will be
- determined from the operands and not from a :ref:`frozen <frozen>` signature
+ .. c:macro:: UFUNC_CORE_DIM_CAN_IGNORE
+
+ if the dim name ends in ``?``
+
+ .. c:macro:: UFUNC_CORE_DIM_SIZE_INFERRED
+
+ if the dim size will be determined from the operands
+ and not from a :ref:`frozen <frozen>` signature
+
+ .. c:member:: PyObject *identity_value
+
+ Identity for reduction, when :c:member:`PyUFuncObject.identity`
+ is equal to :c:data:`PyUFunc_IdentityValue`.
PyArrayIter_Type and PyArrayIterObject
--------------------------------------
@@ -1009,57 +1047,57 @@ PyArrayIter_Type and PyArrayIterObject
npy_intp factors[NPY_MAXDIMS];
PyArrayObject *ao;
char *dataptr;
- Bool contiguous;
+ npy_bool contiguous;
} PyArrayIterObject;
- .. c:member:: int PyArrayIterObject.nd_m1
+ .. c:member:: int nd_m1
:math:`N-1` where :math:`N` is the number of dimensions in the
underlying array.
- .. c:member:: npy_intp PyArrayIterObject.index
+ .. c:member:: npy_intp index
The current 1-d index into the array.
- .. c:member:: npy_intp PyArrayIterObject.size
+ .. c:member:: npy_intp size
The total size of the underlying array.
- .. c:member:: npy_intp *PyArrayIterObject.coordinates
+ .. c:member:: npy_intp *coordinates
An :math:`N` -dimensional index into the array.
- .. c:member:: npy_intp *PyArrayIterObject.dims_m1
+ .. c:member:: npy_intp *dims_m1
The size of the array minus 1 in each dimension.
- .. c:member:: npy_intp *PyArrayIterObject.strides
+ .. c:member:: npy_intp *strides
    The strides of the array. How many bytes are needed to jump to the next
    element in each dimension.
- .. c:member:: npy_intp *PyArrayIterObject.backstrides
+ .. c:member:: npy_intp *backstrides
    How many bytes are needed to jump from the end of a dimension back
    to its beginning. Note that ``backstrides[k] == strides[k] *
    dims_m1[k]``, but it is stored here as an optimization.
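The identity above can be checked from Python with an ordinary ndarray's strides (a sketch; the iterator caches these values in C):

```python
import numpy as np

a = np.zeros((3, 4), dtype=np.float64)  # itemsize 8, so strides are (32, 8)
dims_m1 = [d - 1 for d in a.shape]      # [2, 3]
# backstrides[k] == strides[k] * dims_m1[k]
backstrides = [s * m for s, m in zip(a.strides, dims_m1)]
print(backstrides)  # [64, 24]
```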
- .. c:member:: npy_intp *PyArrayIterObject.factors
+ .. c:member:: npy_intp *factors
This array is used in computing an N-d index from a 1-d index. It
contains needed products of the dimensions.
- .. c:member:: PyArrayObject *PyArrayIterObject.ao
+ .. c:member:: PyArrayObject *ao
A pointer to the underlying ndarray this iterator was created to
represent.
- .. c:member:: char *PyArrayIterObject.dataptr
+ .. c:member:: char *dataptr
This member points to an element in the ndarray indicated by the
index.
- .. c:member:: Bool PyArrayIterObject.contiguous
+ .. c:member:: npy_bool contiguous
This flag is true if the underlying array is
:c:data:`NPY_ARRAY_C_CONTIGUOUS`. It is used to simplify
@@ -1106,32 +1144,32 @@ PyArrayMultiIter_Type and PyArrayMultiIterObject
PyArrayIterObject *iters[NPY_MAXDIMS];
} PyArrayMultiIterObject;
- .. c:macro: PyArrayMultiIterObject.PyObject_HEAD
+ .. c:macro: PyObject_HEAD
Needed at the start of every Python object (holds reference count
and type identification).
- .. c:member:: int PyArrayMultiIterObject.numiter
+ .. c:member:: int numiter
The number of arrays that need to be broadcast to the same shape.
- .. c:member:: npy_intp PyArrayMultiIterObject.size
+ .. c:member:: npy_intp size
The total broadcasted size.
- .. c:member:: npy_intp PyArrayMultiIterObject.index
+ .. c:member:: npy_intp index
The current (1-d) index into the broadcasted result.
- .. c:member:: int PyArrayMultiIterObject.nd
+ .. c:member:: int nd
The number of dimensions in the broadcasted result.
- .. c:member:: npy_intp *PyArrayMultiIterObject.dimensions
+ .. c:member:: npy_intp *dimensions
The shape of the broadcasted result (only ``nd`` slots are used).
- .. c:member:: PyArrayIterObject **PyArrayMultiIterObject.iters
+ .. c:member:: PyArrayIterObject **iters
An array of iterator objects that holds the iterators for the
arrays to be broadcast together. On return, the iterators are
@@ -1204,7 +1242,7 @@ ScalarArrayTypes
There is a Python type for each of the different built-in data types
that can be present in the array. Most of these are simple wrappers
around the corresponding data type in C. The C-names for these types
-are :c:data:`Py{TYPE}ArrType_Type` where ``{TYPE}`` can be
+are ``Py{TYPE}ArrType_Type`` where ``{TYPE}`` can be
**Bool**, **Byte**, **Short**, **Int**, **Long**, **LongLong**,
**UByte**, **UShort**, **UInt**, **ULong**, **ULongLong**,
@@ -1213,8 +1251,8 @@ are :c:data:`Py{TYPE}ArrType_Type` where ``{TYPE}`` can be
**Object**.
These type names are part of the C-API and can therefore be created in
-extension C-code. There is also a :c:data:`PyIntpArrType_Type` and a
-:c:data:`PyUIntpArrType_Type` that are simple substitutes for one of the
+extension C-code. There is also a ``PyIntpArrType_Type`` and a
+``PyUIntpArrType_Type`` that are simple substitutes for one of the
integer types that can hold a pointer on the platform. The structure
of these scalar objects is not exposed to C-code. The function
:c:func:`PyArray_ScalarAsCtype` (..) can be used to extract the C-type
@@ -1249,12 +1287,12 @@ PyArray_Dims
The members of this structure are
- .. c:member:: npy_intp *PyArray_Dims.ptr
+ .. c:member:: npy_intp *ptr
A pointer to a list of (:c:type:`npy_intp`) integers which
usually represent array shape or array strides.
- .. c:member:: int PyArray_Dims.len
+ .. c:member:: int len
The length of the list of integers. It is assumed safe to
access *ptr* [0] to *ptr* [len-1].
@@ -1283,26 +1321,26 @@ PyArray_Chunk
The members are
- .. c:macro: PyArray_Chunk.PyObject_HEAD
+ .. c:macro: PyObject_HEAD
Necessary for all Python objects. Included here so that the
:c:type:`PyArray_Chunk` structure matches that of the buffer object
(at least to the len member).
- .. c:member:: PyObject *PyArray_Chunk.base
+ .. c:member:: PyObject *base
The Python object this chunk of memory comes from. Needed so that
memory can be accounted for properly.
- .. c:member:: void *PyArray_Chunk.ptr
+ .. c:member:: void *ptr
A pointer to the start of the single-segment chunk of memory.
- .. c:member:: npy_intp PyArray_Chunk.len
+ .. c:member:: npy_intp len
The length of the segment in bytes.
- .. c:member:: int PyArray_Chunk.flags
+ .. c:member:: int flags
Any data flags (*e.g.* :c:data:`NPY_ARRAY_WRITEABLE` ) that should
be used to interpret the memory.
@@ -1317,13 +1355,13 @@ PyArrayInterface
The :c:type:`PyArrayInterface` structure is defined so that NumPy and
other extension modules can use the rapid array interface
- protocol. The :obj:`__array_struct__` method of an object that
+ protocol. The :obj:`~object.__array_struct__` method of an object that
supports the rapid array interface protocol should return a
- :c:type:`PyCObject` that contains a pointer to a :c:type:`PyArrayInterface`
+ :c:type:`PyCapsule` that contains a pointer to a :c:type:`PyArrayInterface`
structure with the relevant details of the array. After the new
array is created, the attribute should be ``DECREF``'d which will
free the :c:type:`PyArrayInterface` structure. Remember to ``INCREF`` the
- object (whose :obj:`__array_struct__` attribute was retrieved) and
+ object (whose :obj:`~object.__array_struct__` attribute was retrieved) and
point the base member of the new :c:type:`PyArrayObject` to this same
object. In this way the memory for the array will be managed
correctly.
@@ -1342,15 +1380,15 @@ PyArrayInterface
PyObject *descr;
} PyArrayInterface;
- .. c:member:: int PyArrayInterface.two
+ .. c:member:: int two
the integer 2 as a sanity check.
- .. c:member:: int PyArrayInterface.nd
+ .. c:member:: int nd
the number of dimensions in the array.
- .. c:member:: char PyArrayInterface.typekind
+ .. c:member:: char typekind
A character indicating what kind of array is present according to the
typestring convention with 't' -> bitfield, 'b' -> Boolean, 'i' ->
@@ -1358,11 +1396,11 @@ PyArrayInterface
complex floating point, 'O' -> object, 'S' -> (byte-)string, 'U' ->
unicode, 'V' -> void.
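These typekind characters match the ``dtype.kind`` convention visible from Python, which can serve as a quick reference:

```python
import numpy as np

# dtype.kind uses the same character codes as PyArrayInterface.typekind.
print(np.dtype(np.bool_).kind)       # 'b'
print(np.dtype(np.int32).kind)       # 'i'
print(np.dtype(np.uint8).kind)       # 'u'
print(np.dtype(np.float64).kind)     # 'f'
print(np.dtype(np.complex128).kind)  # 'c'
print(np.dtype('S5').kind)           # 'S'
print(np.dtype('U5').kind)           # 'U'
print(np.dtype(object).kind)         # 'O'
```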
- .. c:member:: int PyArrayInterface.itemsize
+ .. c:member:: int itemsize
The number of bytes each item in the array requires.
- .. c:member:: int PyArrayInterface.flags
+ .. c:member:: int flags
Any of the bits :c:data:`NPY_ARRAY_C_CONTIGUOUS` (1),
:c:data:`NPY_ARRAY_F_CONTIGUOUS` (2), :c:data:`NPY_ARRAY_ALIGNED` (0x100),
@@ -1376,26 +1414,26 @@ PyArrayInterface
structure is present (it will be ignored by objects consuming
version 2 of the array interface).
- .. c:member:: npy_intp *PyArrayInterface.shape
+ .. c:member:: npy_intp *shape
An array containing the size of the array in each dimension.
- .. c:member:: npy_intp *PyArrayInterface.strides
+ .. c:member:: npy_intp *strides
An array containing the number of bytes to jump to get to the next
element in each dimension.
- .. c:member:: void *PyArrayInterface.data
+ .. c:member:: void *data
A pointer *to* the first element of the array.
- .. c:member:: PyObject *PyArrayInterface.descr
+ .. c:member:: PyObject *descr
A Python object describing the data-type in more detail (same
- as the *descr* key in :obj:`__array_interface__`). This can be
+ as the *descr* key in :obj:`~object.__array_interface__`). This can be
``NULL`` if *typekind* and *itemsize* provide enough
information. This field is also ignored unless
- :c:data:`ARR_HAS_DESCR` flag is on in *flags*.
+ :c:data:`NPY_ARR_HAS_DESCR` flag is on in *flags*.
Internally used structures
@@ -1433,7 +1471,7 @@ for completeness and assistance in understanding the code.
Advanced indexing is handled with this Python type. It is simply a
loose wrapper around the C-structure containing the variables
needed for advanced array indexing. The associated C-structure,
- :c:type:`PyArrayMapIterObject`, is useful if you are trying to
+ ``PyArrayMapIterObject``, is useful if you are trying to
understand the advanced-index mapping code. It is defined in the
``arrayobject.h`` header. This type is not exposed to Python and
could be replaced with a C-structure. As a Python type it takes
diff --git a/doc/source/reference/c-api/ufunc.rst b/doc/source/reference/c-api/ufunc.rst
index 16ddde58c..9eb70c3fb 100644
--- a/doc/source/reference/c-api/ufunc.rst
+++ b/doc/source/reference/c-api/ufunc.rst
@@ -10,28 +10,52 @@ UFunc API
Constants
---------
-.. c:var:: UFUNC_ERR_{HANDLER}
+``UFUNC_ERR_{HANDLER}``
+ .. c:macro:: UFUNC_ERR_IGNORE
- ``{HANDLER}`` can be **IGNORE**, **WARN**, **RAISE**, or **CALL**
+ .. c:macro:: UFUNC_ERR_WARN
-.. c:var:: UFUNC_{THING}_{ERR}
+ .. c:macro:: UFUNC_ERR_RAISE
- ``{THING}`` can be **MASK**, **SHIFT**, or **FPE**, and ``{ERR}`` can
- be **DIVIDEBYZERO**, **OVERFLOW**, **UNDERFLOW**, and **INVALID**.
+ .. c:macro:: UFUNC_ERR_CALL
-.. c:var:: PyUFunc_{VALUE}
+``UFUNC_{THING}_{ERR}``
+ .. c:macro:: UFUNC_MASK_DIVIDEBYZERO
- .. c:var:: PyUFunc_One
+ .. c:macro:: UFUNC_MASK_OVERFLOW
- .. c:var:: PyUFunc_Zero
+ .. c:macro:: UFUNC_MASK_UNDERFLOW
- .. c:var:: PyUFunc_MinusOne
+ .. c:macro:: UFUNC_MASK_INVALID
- .. c:var:: PyUFunc_ReorderableNone
+ .. c:macro:: UFUNC_SHIFT_DIVIDEBYZERO
- .. c:var:: PyUFunc_None
+ .. c:macro:: UFUNC_SHIFT_OVERFLOW
- .. c:var:: PyUFunc_IdentityValue
+ .. c:macro:: UFUNC_SHIFT_UNDERFLOW
+
+ .. c:macro:: UFUNC_SHIFT_INVALID
+
+ .. c:macro:: UFUNC_FPE_DIVIDEBYZERO
+
+ .. c:macro:: UFUNC_FPE_OVERFLOW
+
+ .. c:macro:: UFUNC_FPE_UNDERFLOW
+
+ .. c:macro:: UFUNC_FPE_INVALID
+
+``PyUFunc_{VALUE}``
+ .. c:macro:: PyUFunc_One
+
+ .. c:macro:: PyUFunc_Zero
+
+ .. c:macro:: PyUFunc_MinusOne
+
+ .. c:macro:: PyUFunc_ReorderableNone
+
+ .. c:macro:: PyUFunc_None
+
+ .. c:macro:: PyUFunc_IdentityValue
Macros
@@ -50,6 +74,66 @@ Macros
was released (because loop->obj was not true).
+Types
+-----
+
+.. c:type:: PyUFuncGenericFunction
+
+ Pointers to functions that actually implement the underlying
+ (element-by-element) function :math:`N` times with the following
+ signature:
+
+ .. c:function:: void loopfunc(\
+ char** args, npy_intp const *dimensions, npy_intp const *steps, void* data)
+
+ *args*
+
+ An array of pointers to the actual data for the input and output
+ arrays. The input arguments are given first followed by the output
+ arguments.
+
+ *dimensions*
+
+ A pointer to the size of the dimension over which this function is
+ looping.
+
+ *steps*
+
+ A pointer to the number of bytes to jump to get to the
+ next element in this dimension for each of the input and
+ output arguments.
+
+ *data*
+
+ Arbitrary data (extra arguments, function names, *etc.* )
+ that can be stored with the ufunc and will be passed in
+ when it is called.
+
+ This is an example of a func specialized for addition of doubles
+ returning doubles.
+
+ .. code-block:: c
+
+ static void
+ double_add(char **args,
+ npy_intp const *dimensions,
+ npy_intp const *steps,
+ void *extra)
+ {
+ npy_intp i;
+ npy_intp is1 = steps[0], is2 = steps[1];
+ npy_intp os = steps[2], n = dimensions[0];
+ char *i1 = args[0], *i2 = args[1], *op = args[2];
+ for (i = 0; i < n; i++) {
+ *((double *)op) = *((double *)i1) +
+ *((double *)i2);
+ i1 += is1;
+ i2 += is2;
+ op += os;
+ }
+ }
+
+
Functions
---------
@@ -71,60 +155,7 @@ Functions
:param func:
Must point to an array of length *ntypes* containing
- :c:type:`PyUFuncGenericFunction` items. These items are pointers to
- functions that actually implement the underlying
- (element-by-element) function :math:`N` times with the following
- signature:
-
- .. c:function:: void loopfunc(
- char** args, npy_intp const *dimensions, npy_intp const *steps, void* data)
-
- *args*
-
- An array of pointers to the actual data for the input and output
- arrays. The input arguments are given first followed by the output
- arguments.
-
- *dimensions*
-
- A pointer to the size of the dimension over which this function is
- looping.
-
- *steps*
-
- A pointer to the number of bytes to jump to get to the
- next element in this dimension for each of the input and
- output arguments.
-
- *data*
-
- Arbitrary data (extra arguments, function names, *etc.* )
- that can be stored with the ufunc and will be passed in
- when it is called.
-
- This is an example of a func specialized for addition of doubles
- returning doubles.
-
- .. code-block:: c
-
- static void
- double_add(char **args,
- npy_intp const *dimensions,
- npy_intp const *steps,
- void *extra)
- {
- npy_intp i;
- npy_intp is1 = steps[0], is2 = steps[1];
- npy_intp os = steps[2], n = dimensions[0];
- char *i1 = args[0], *i2 = args[1], *op = args[2];
- for (i = 0; i < n; i++) {
- *((double *)op) = *((double *)i1) +
- *((double *)i2);
- i1 += is1;
- i2 += is2;
- op += os;
- }
- }
+ :c:type:`PyUFuncGenericFunction` items.
:param data:
Should be ``NULL`` or a pointer to an array of size *ntypes*
@@ -269,7 +300,7 @@ Functions
.. c:function:: int PyUFunc_checkfperr(int errmask, PyObject* errobj)
A simple interface to the IEEE error-flag checking support. The
- *errmask* argument is a mask of :c:data:`UFUNC_MASK_{ERR}` bitmasks
+ *errmask* argument is a mask of ``UFUNC_MASK_{ERR}`` bitmasks
indicating which errors to check for (and how to check for
them). The *errobj* must be a Python tuple with two elements: a
string containing the name which will be used in any communication
@@ -459,9 +490,9 @@ structure.
Importing the API
-----------------
-.. c:var:: PY_UFUNC_UNIQUE_SYMBOL
+.. c:macro:: PY_UFUNC_UNIQUE_SYMBOL
-.. c:var:: NO_IMPORT_UFUNC
+.. c:macro:: NO_IMPORT_UFUNC
.. c:function:: void import_ufunc(void)
diff --git a/doc/source/reference/global_state.rst b/doc/source/reference/global_state.rst
index 7bf9310e8..b59467210 100644
--- a/doc/source/reference/global_state.rst
+++ b/doc/source/reference/global_state.rst
@@ -83,3 +83,18 @@ in C which iterates through arrays that may or may not be
contiguous in memory.
Most users will have no reason to change these; for details
see the :ref:`memory layout <memory-layout>` documentation.
+
+Using the new casting implementation
+------------------------------------
+
+Within NumPy 1.20 it is possible to enable the new experimental casting
+implementation for testing purposes. To do this, set::
+
+ NPY_USE_NEW_CASTINGIMPL=1
+
+Setting the flag is only useful to aid NumPy development in ensuring the
+new version is bug-free, and it should be avoided in production code.
+It is a helpful test for projects that either create custom datatypes or
+use, for example, complicated structured dtypes. The flag is expected to be
+removed in 1.21, with the new version always in use.
+
diff --git a/doc/source/reference/internals.code-explanations.rst b/doc/source/reference/internals.code-explanations.rst
index 65553e07e..e8e428f2e 100644
--- a/doc/source/reference/internals.code-explanations.rst
+++ b/doc/source/reference/internals.code-explanations.rst
@@ -147,7 +147,8 @@ an iterator for each of the arrays being broadcast.
The :c:func:`PyArray_Broadcast` function takes the iterators that have already
been defined and uses them to determine the broadcast shape in each
dimension (to create the iterators at the same time that broadcasting
-occurs then use the :c:func:`PyMultiIter_New` function). Then, the iterators are
+occurs then use the :c:func:`PyArray_MultiIterNew` function).
+Then, the iterators are
adjusted so that each iterator thinks it is iterating over an array
with the broadcast size. This is done by adjusting the iterators
number of dimensions, and the shape in each dimension. This works
@@ -162,7 +163,7 @@ for the extended dimensions. It is done in exactly the same way in
NumPy. The big difference is that now the array of strides is kept
track of in a :c:type:`PyArrayIterObject`, the iterators involved in a
broadcast result are kept track of in a :c:type:`PyArrayMultiIterObject`,
-and the :c:func:`PyArray_BroadCast` call implements the broad-casting rules.
+and the :c:func:`PyArray_Broadcast` call implements the broad-casting rules.
Array Scalars
@@ -368,7 +369,7 @@ The output arguments (if any) are then processed and any missing
return arrays are constructed. If any provided output array doesn't
have the correct type (or is mis-aligned) and is smaller than the
buffer size, then a new output array is constructed with the special
-:c:data:`WRITEBACKIFCOPY` flag set. At the end of the function,
+:c:data:`NPY_ARRAY_WRITEBACKIFCOPY` flag set. At the end of the function,
:c:func:`PyArray_ResolveWritebackIfCopy` is called so that
its contents will be copied back into the output array.
Iterators for the output arguments are then processed.
diff --git a/doc/source/reference/internals.rst b/doc/source/reference/internals.rst
index aacfabcd3..ed8042c08 100644
--- a/doc/source/reference/internals.rst
+++ b/doc/source/reference/internals.rst
@@ -9,4 +9,160 @@ NumPy internals
internals.code-explanations
alignment
-.. automodule:: numpy.doc.internals
+Internal organization of numpy arrays
+=====================================
+
+It helps to understand a bit about how NumPy arrays are handled under the
+covers in order to understand NumPy better. This section will not go into
+great detail; those wishing to understand the full details are referred to
+Travis Oliphant's book "Guide to NumPy".
+
+NumPy arrays consist of two major components, the raw array data (from now on,
+referred to as the data buffer), and the information about the raw array data.
+The data buffer is typically what people think of as arrays in C or Fortran,
+a contiguous (and fixed) block of memory containing fixed sized data items.
+NumPy also contains a significant set of data that describes how to interpret
+the data in the data buffer. This extra information contains (among other things):
+
+ 1) The basic data element's size in bytes.
+ 2) The start of the data within the data buffer (an offset relative to the
+ beginning of the data buffer).
+ 3) The number of dimensions and the size of each dimension.
+ 4) The separation between elements for each dimension (the 'stride'). This
+ does not have to be a multiple of the element size.
+ 5) The byte order of the data (which may not be the native byte order).
+ 6) Whether the buffer is read-only.
+ 7) Information (via the dtype object) about the interpretation of the basic
+ data element. The basic data element may be as simple as an int or a float,
+ or it may be a compound object (e.g., struct-like), a fixed character field,
+ or Python object pointers.
+ 8) Whether the array is to be interpreted as C-order or Fortran-order.
+
+This arrangement allows for very flexible use of arrays. One thing that it
+allows is simple changes of the metadata to change the interpretation of the
+array buffer. Changing the byte order of the array is a simple change
+involving no rearrangement of the data. The shape of the array can be changed
+very easily, without changing anything in the data buffer or any data copying
+at all.
+
+Among other things, this makes it possible to create a new array metadata
+object that uses the same data buffer: a new view of that buffer with a
+different interpretation (e.g., different shape, offset, byte order, strides,
+etc.) but sharing the same data bytes. Many operations in NumPy, such as
+slicing, do just this. Other operations, such as transpose, don't move data
+elements around in the array either; rather, they change the shape and stride
+information so that the indexing of the array changes, but the data itself
+doesn't move.
+
+Typically these new arrangements of array metadata over the same data buffer
+are called 'views' into the data buffer. There is a different ndarray object,
+but it uses the same data buffer. This is why it is necessary to force copies
+through use of the .copy() method if one really wants a new and independent
+copy of the data buffer.
+
+New views into arrays mean the object reference counts for the data buffer
+increase. Simply doing away with the original array object will not remove the
+data buffer if other views of it still exist.
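The view behavior described above can be seen directly from Python:

```python
import numpy as np

a = np.arange(12)
v = a.reshape(3, 4)    # new metadata object, same data buffer
v[0, 0] = 99
print(a[0])            # 99: the view writes through to the shared buffer
print(v.base is a)     # True: the view keeps a reference to its base array

c = a.copy()           # .copy() makes an independent data buffer
c[0] = 0
print(a[0])            # still 99: the copy does not share memory
```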
+
+Multidimensional Array Indexing Order Issues
+============================================
+
+What is the right way to index
+multi-dimensional arrays? Before you jump to conclusions about the one
+true way to index multi-dimensional arrays, it pays to understand why this is
+a confusing issue. This section will try to explain in detail how numpy
+indexing works and why we adopt the convention we do for images, and when it
+may be appropriate to adopt other conventions.
+
+The first thing to understand is
+that there are two conflicting conventions for indexing 2-dimensional arrays.
+Matrix notation uses the first index to indicate which row is being selected and
+the second index to indicate which column is selected. This is opposite the
+geometrically oriented convention for images, where people generally think the
+first index represents x position (i.e., column) and the second represents y
+position (i.e., row). This alone is the source of much confusion;
+matrix-oriented users and image-oriented users expect two different things with
+regard to indexing.
+
+The second issue to understand is how indices correspond
+to the order the array is stored in memory. In Fortran the first index is the
+most rapidly varying index when moving through the elements of a two
+dimensional array as it is stored in memory. If you adopt the matrix
+convention for indexing, then this means the matrix is stored one column at a
+time (since the first index moves to the next row as it changes). Thus Fortran
+is considered a Column-major language. C has just the opposite convention. In
+C, the last index changes most rapidly as one moves through the array as
+stored in memory. Thus C is a Row-major language. The matrix is stored by
+rows. Note that in both cases it presumes that the matrix convention for
+indexing is being used, i.e., for both Fortran and C, the first index is the
+row. Note this convention implies that the indexing convention is invariant
+and that the data order changes to keep that so.
+
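The two storage orders can be sketched from Python via strides:

```python
import numpy as np

m = np.arange(6, dtype=np.int64).reshape(2, 3)  # C (row-major) order
f = np.asfortranarray(m)                        # same values, Fortran order

print(m.strides)             # (24, 8): the last index varies fastest
print(f.strides)             # (8, 16): the first index varies fastest
print(np.array_equal(m, f))  # True: only the memory layout differs
```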
+But that's not the only way
+to look at it. Suppose one has large two-dimensional arrays (images or
+matrices) stored in data files. Suppose the data are stored by rows rather than
+by columns. If we are to preserve our index convention (whether matrix or
+image) that means that depending on the language we use, we may be forced to
+reorder the data if it is read into memory to preserve our indexing
+convention. For example if we read row-ordered data into memory without
+reordering, it will match the matrix indexing convention for C, but not for
+Fortran. Conversely, it will match the image indexing convention for Fortran,
+but not for C. For C, if one is using data stored in row order, and one wants
+to preserve the image index convention, the data must be reordered when
+reading into memory.
+
+In the end, which you do for Fortran or C depends on
+which is more important, not reordering data or preserving the indexing
+convention. For large images, reordering data is potentially expensive, and
+often the indexing convention is inverted to avoid that.
+
+The situation with
+numpy makes this issue yet more complicated. The internal machinery of numpy
+arrays is flexible enough to accept any ordering of indices. One can simply
+reorder indices by manipulating the internal stride information for arrays
+without reordering the data at all. NumPy will know how to map the new index
+order to the data without moving the data.
+
+So if this is true, why not choose
+the index order that matches what you most expect? In particular, why not define
+row-ordered images to use the image convention? (This is sometimes referred
+to as the Fortran convention vs the C convention, thus the 'C' and 'FORTRAN'
+order options for array ordering in numpy.) The drawback of doing this is
+potential performance penalties. It's common to access the data sequentially,
+either implicitly in array operations or explicitly by looping over rows of an
+image. When that is done, the data will be accessed in non-optimal order.
+As the first index is incremented, what is actually happening is that elements
+spaced far apart in memory are being sequentially accessed, with usually poor
+memory access speeds. For example, consider a two-dimensional image 'im'
+defined so that im[0, 10] represents the value at x=0, y=10. To be consistent
+with usual Python behavior, im[0] would then represent a column at x=0. Yet
+that data would be spread over the whole array, since the data are stored in
+row order. Despite the flexibility of numpy's indexing, it can't really paper
+over the fact that basic operations are rendered inefficient because of data
+order, or that getting contiguous subarrays is still awkward (e.g., im[:, 0]
+for the first row, vs im[0]). Thus one can't use an idiom such as
+``for row in im``; ``for col in im`` does work, but doesn't yield contiguous
+column data.
+
+As it turns out, numpy is smart enough when dealing with ufuncs to determine
+which index is the most rapidly varying one in memory and use that for the
+innermost loop. Thus for ufuncs there is no large intrinsic advantage to
+either approach in most cases. On the other hand, use of .flat with a
+Fortran-ordered array will lead to non-optimal memory access, as adjacent
+elements in the flattened array (iterator, actually) are not contiguous in
+memory.
+
+Indeed, the fact is that Python
+indexing on lists and other sequences naturally leads to an outside-to-inside
+ordering (the first index gets the largest grouping, the next the next largest,
+and the last gets the smallest element). Since image data are normally stored
+by rows, this corresponds to position within rows being the last item indexed.
+
+If you do want to use Fortran ordering, realize that there are two approaches
+to consider: 1) accept that the first index is just not the most rapidly
+changing in memory and have all your I/O routines reorder your data when going
+from memory to disk or vice versa, or 2) use numpy's mechanism for mapping the
+first index to the most rapidly varying data. We recommend the former if
+possible. The disadvantage of the latter is that many of numpy's functions
+will yield arrays without Fortran ordering unless you are careful to use the
+'order' keyword. Doing this would be highly inconvenient.
+
+Otherwise we recommend simply learning to reverse the usual order of indices
+when accessing elements of an array. Granted, it goes against the grain, but
+it is more in line with Python semantics and the natural order of the data.
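A brief sketch of the 'order' keyword mentioned above:

```python
import numpy as np

f = np.zeros((2, 3), order='F')
print(f.flags['F_CONTIGUOUS'])                 # True
print(np.zeros((2, 3)).flags['F_CONTIGUOUS'])  # False: the default is C order

# ravel(order='F') walks the first index fastest:
g = np.asfortranarray(np.arange(6).reshape(2, 3))
print(list(g.ravel(order='F')))                # [0, 3, 1, 4, 2, 5]
```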
+
+
diff --git a/doc/source/reference/maskedarray.baseclass.rst b/doc/source/reference/maskedarray.baseclass.rst
index 5c1bdda23..5a0f99651 100644
--- a/doc/source/reference/maskedarray.baseclass.rst
+++ b/doc/source/reference/maskedarray.baseclass.rst
@@ -242,8 +242,8 @@ Comparison operators:
MaskedArray.__eq__
MaskedArray.__ne__
-Truth value of an array (:func:`bool()`):
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Truth value of an array (:class:`bool() <bool>`):
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
:toctree: generated/
diff --git a/doc/source/reference/maskedarray.generic.rst b/doc/source/reference/maskedarray.generic.rst
index 41c3ee564..d3849c50d 100644
--- a/doc/source/reference/maskedarray.generic.rst
+++ b/doc/source/reference/maskedarray.generic.rst
@@ -177,8 +177,8 @@ attribute. We must keep in mind that a ``True`` entry in the mask indicates an
*invalid* data.
Another possibility is to use the :func:`getmask` and :func:`getmaskarray`
-functions. :func:`getmask(x)` outputs the mask of ``x`` if ``x`` is a masked
-array, and the special value :data:`nomask` otherwise. :func:`getmaskarray(x)`
+functions. ``getmask(x)`` outputs the mask of ``x`` if ``x`` is a masked
+array, and the special value :data:`nomask` otherwise. ``getmaskarray(x)``
outputs the mask of ``x`` if ``x`` is a masked array. If ``x`` has no invalid
entry or is not a masked array, the function outputs a boolean array of
``False`` with as many elements as ``x``.
@@ -296,11 +296,11 @@ new valid values to them::
.. note::
Unmasking an entry by direct assignment will silently fail if the masked
- array has a *hard* mask, as shown by the :attr:`hardmask` attribute. This
- feature was introduced to prevent overwriting the mask. To force the
- unmasking of an entry where the array has a hard mask, the mask must first
- to be softened using the :meth:`soften_mask` method before the allocation.
- It can be re-hardened with :meth:`harden_mask`::
+ array has a *hard* mask, as shown by the :attr:`~MaskedArray.hardmask`
+ attribute. This feature was introduced to prevent overwriting the mask.
+ To force the unmasking of an entry where the array has a hard mask,
+ the mask must first be softened using the :meth:`soften_mask` method
+ before the allocation. It can be re-hardened with :meth:`harden_mask`::
>>> x = ma.array([1, 2, 3], mask=[0, 0, 1], hard_mask=True)
>>> x
@@ -406,8 +406,8 @@ Operations on masked arrays
Arithmetic and comparison operations are supported by masked arrays.
As much as possible, invalid entries of a masked array are not processed,
-meaning that the corresponding :attr:`data` entries *should* be the same
-before and after the operation.
+meaning that the corresponding :attr:`~MaskedArray.data` entries
+*should* be the same before and after the operation.
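A short illustration of this behavior (a sketch only; as the warning below notes, preservation of masked data is not guaranteed in general):

```python
import numpy.ma as ma

x = ma.array([1, 2, 3], mask=[0, 0, 1])  # last entry invalid
y = x + 10                               # arithmetic skips the masked entry

print(y)         # [11 12 --]: the result is masked where x is masked
print(x.data)    # the underlying data of x is unchanged: [1 2 3]
```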
.. warning::
We need to stress that this behavior may not be systematic, that masked
diff --git a/doc/source/reference/random/c-api.rst b/doc/source/reference/random/c-api.rst
index 63b0fdc2b..a79da7a49 100644
--- a/doc/source/reference/random/c-api.rst
+++ b/doc/source/reference/random/c-api.rst
@@ -181,6 +181,5 @@ Generate a single integer
Generate random uint64 numbers in closed interval [off, off + rng].
-.. c:function:: npy_uint64 random_bounded_uint64(bitgen_t *bitgen_state, npy_uint64 off, npy_uint64 rng, npy_uint64 mask, bint use_masked)
-
+.. c:function:: npy_uint64 random_bounded_uint64(bitgen_t *bitgen_state, npy_uint64 off, npy_uint64 rng, npy_uint64 mask, bool use_masked)
diff --git a/doc/source/reference/random/generator.rst b/doc/source/reference/random/generator.rst
index a2cbb493a..8706e1de2 100644
--- a/doc/source/reference/random/generator.rst
+++ b/doc/source/reference/random/generator.rst
@@ -36,11 +36,105 @@ Simple random data
Permutations
============
+The methods for randomly permuting a sequence are
+
.. autosummary::
:toctree: generated/
~numpy.random.Generator.shuffle
~numpy.random.Generator.permutation
+ ~numpy.random.Generator.permuted
+
+The following table summarizes the behaviors of the methods.
+
++--------------+-------------------+------------------+
+| method | copy/in-place | axis handling |
++==============+===================+==================+
+| shuffle | in-place | as if 1d |
++--------------+-------------------+------------------+
+| permutation | copy | as if 1d |
++--------------+-------------------+------------------+
+| permuted | either (use 'out' | axis independent |
+| | for in-place) | |
++--------------+-------------------+------------------+
+
+The following subsections provide more details about the differences.
+
+In-place vs. copy
+~~~~~~~~~~~~~~~~~
+The main difference between `Generator.shuffle` and `Generator.permutation`
+is that `Generator.shuffle` operates in-place, while `Generator.permutation`
+returns a copy.
+
+By default, `Generator.permuted` returns a copy. To operate in-place with
+`Generator.permuted`, pass the same array as the first argument *and* as
+the value of the ``out`` parameter. For example,
+
+ >>> rg = np.random.default_rng()
+ >>> x = np.arange(0, 15).reshape(3, 5)
+ >>> x
+ array([[ 0, 1, 2, 3, 4],
+ [ 5, 6, 7, 8, 9],
+ [10, 11, 12, 13, 14]])
+ >>> y = rg.permuted(x, axis=1, out=x)
+ >>> x
+ array([[ 1, 0, 2, 4, 3], # random
+ [ 6, 7, 8, 9, 5],
+ [10, 14, 11, 13, 12]])
+
+Note that when ``out`` is given, the return value is ``out``:
+
+ >>> y is x
+ True
+
+Handling the ``axis`` parameter
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+An important distinction for these methods is how they handle the ``axis``
+parameter. Both `Generator.shuffle` and `Generator.permutation` treat the
+input as a one-dimensional sequence, and the ``axis`` parameter determines
+which dimension of the input array to use as the sequence. In the case of a
+two-dimensional array, ``axis=0`` will, in effect, rearrange the rows of the
+array, and ``axis=1`` will rearrange the columns. For example
+
+ >>> rg = np.random.default_rng()
+ >>> x = np.arange(0, 15).reshape(3, 5)
+ >>> x
+ array([[ 0, 1, 2, 3, 4],
+ [ 5, 6, 7, 8, 9],
+ [10, 11, 12, 13, 14]])
+ >>> rg.permutation(x, axis=1)
+ array([[ 1, 3, 2, 0, 4], # random
+ [ 6, 8, 7, 5, 9],
+ [11, 13, 12, 10, 14]])
+
+Note that the columns have been rearranged "in bulk": the values within
+each column have not changed.
+
+The method `Generator.permuted` treats the ``axis`` parameter similarly to
+how `numpy.sort` treats it. Each slice along the given axis is shuffled
+independently of the others. Compare the following example of the use of
+`Generator.permuted` to the above example of `Generator.permutation`:
+
+ >>> rg.permuted(x, axis=1)
+ array([[ 1, 0, 2, 4, 3], # random
+ [ 5, 7, 6, 9, 8],
+ [10, 14, 12, 13, 11]])
+
+In this example, the values within each row (i.e. the values along
+``axis=1``) have been shuffled independently. This is not a "bulk"
+shuffle of the columns.
+
+Shuffling non-NumPy sequences
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+`Generator.shuffle` works on non-NumPy sequences. That is, if it is given
+a sequence that is not a NumPy array, it shuffles that sequence in-place.
+For example,
+
+ >>> rg = np.random.default_rng()
+ >>> a = ['A', 'B', 'C', 'D', 'E']
+ >>> rg.shuffle(a) # shuffle the list in-place
+ >>> a
+ ['B', 'D', 'A', 'E', 'C'] # random
Distributions
=============
diff --git a/doc/source/reference/random/legacy.rst b/doc/source/reference/random/legacy.rst
index 91b91dac8..6cf4775b8 100644
--- a/doc/source/reference/random/legacy.rst
+++ b/doc/source/reference/random/legacy.rst
@@ -133,7 +133,7 @@ Many of the RandomState methods above are exported as functions in
- It uses a `RandomState` rather than the more modern `Generator`.
For backward-compatibility reasons, we cannot change this. See
-`random-quick-start`.
+:ref:`random-quick-start`.
.. autosummary::
:toctree: generated/
diff --git a/doc/source/reference/routines.array-manipulation.rst b/doc/source/reference/routines.array-manipulation.rst
index 8d13a1800..1c96495d9 100644
--- a/doc/source/reference/routines.array-manipulation.rst
+++ b/doc/source/reference/routines.array-manipulation.rst
@@ -74,6 +74,7 @@ Joining arrays
hstack
dstack
column_stack
+ row_stack
Splitting arrays
================
diff --git a/doc/source/reference/routines.char.rst b/doc/source/reference/routines.char.rst
index ed8393855..90df14125 100644
--- a/doc/source/reference/routines.char.rst
+++ b/doc/source/reference/routines.char.rst
@@ -6,7 +6,7 @@ String operations
.. module:: numpy.char
The `numpy.char` module provides a set of vectorized string
-operations for arrays of type `numpy.string_` or `numpy.unicode_`.
+operations for arrays of type `numpy.str_` or `numpy.bytes_`.
All of them are based on the string methods in the Python standard library.
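For instance, the vectorized operations mirror the corresponding Python ``str`` methods elementwise (a minimal sketch):

```python
import numpy as np

a = np.array(['hello', 'numpy'])   # array of str_ elements
print(np.char.upper(a))            # elementwise str.upper: ['HELLO' 'NUMPY']
print(np.char.add(a, '!'))         # elementwise concatenation: ['hello!' 'numpy!']
```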
String operations
diff --git a/doc/source/reference/routines.ctypeslib.rst b/doc/source/reference/routines.ctypeslib.rst
index 562638e9c..3a059f5d9 100644
--- a/doc/source/reference/routines.ctypeslib.rst
+++ b/doc/source/reference/routines.ctypeslib.rst
@@ -9,6 +9,5 @@ C-Types Foreign Function Interface (:mod:`numpy.ctypeslib`)
.. autofunction:: as_array
.. autofunction:: as_ctypes
.. autofunction:: as_ctypes_type
-.. autofunction:: ctypes_load_library
.. autofunction:: load_library
.. autofunction:: ndpointer
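As a quick illustration of the round trip between ``as_ctypes`` and ``as_array`` (a sketch; both views share the same memory buffer):

```python
import numpy as np
from numpy import ctypeslib

x = np.arange(4, dtype=np.int32)
c_obj = ctypeslib.as_ctypes(x)   # view the array as a ctypes object
y = ctypeslib.as_array(c_obj)    # and back as an ndarray

y[0] = 99                        # writing through one view is visible in the other
print(x[0])                      # 99
```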
diff --git a/doc/source/reference/routines.financial.rst b/doc/source/reference/routines.financial.rst
deleted file mode 100644
index 5f426d7ab..000000000
--- a/doc/source/reference/routines.financial.rst
+++ /dev/null
@@ -1,21 +0,0 @@
-Financial functions
-*******************
-
-.. currentmodule:: numpy
-
-Simple financial functions
---------------------------
-
-.. autosummary::
- :toctree: generated/
-
- fv
- pv
- npv
- pmt
- ppmt
- ipmt
- irr
- mirr
- nper
- rate
diff --git a/doc/source/reference/routines.indexing.rst b/doc/source/reference/routines.indexing.rst
index aeec1a1bb..eebbf4989 100644
--- a/doc/source/reference/routines.indexing.rst
+++ b/doc/source/reference/routines.indexing.rst
@@ -42,6 +42,7 @@ Indexing-like operations
diag
diagonal
select
+ lib.stride_tricks.sliding_window_view
lib.stride_tricks.as_strided
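The newly listed ``sliding_window_view`` (available from NumPy 1.20) builds overlapping read-only windows without copying data; a minimal sketch:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

x = np.arange(6)
windows = sliding_window_view(x, window_shape=3)  # length-3 overlapping views
print(windows)
# [[0 1 2]
#  [1 2 3]
#  [2 3 4]
#  [3 4 5]]
```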
Inserting data into arrays
diff --git a/doc/source/reference/routines.io.rst b/doc/source/reference/routines.io.rst
index 2e119af9a..3052ee1fb 100644
--- a/doc/source/reference/routines.io.rst
+++ b/doc/source/reference/routines.io.rst
@@ -88,4 +88,4 @@ Binary Format Description
.. autosummary::
:toctree: generated/
- lib.format
+ lib.format
diff --git a/doc/source/reference/routines.ma.rst b/doc/source/reference/routines.ma.rst
index 97859ac67..d961cbf02 100644
--- a/doc/source/reference/routines.ma.rst
+++ b/doc/source/reference/routines.ma.rst
@@ -67,6 +67,9 @@ Inspecting the array
ma.size
ma.is_masked
ma.is_mask
+ ma.isMaskedArray
+ ma.isMA
+ ma.isarray
ma.MaskedArray.all
@@ -272,7 +275,7 @@ Filling a masked array
ma.common_fill_value
ma.default_fill_value
ma.maximum_fill_value
- ma.maximum_fill_value
+ ma.minimum_fill_value
ma.set_fill_value
ma.MaskedArray.get_fill_value
diff --git a/doc/source/reference/routines.other.rst b/doc/source/reference/routines.other.rst
index def5b3e3c..aefd680bb 100644
--- a/doc/source/reference/routines.other.rst
+++ b/doc/source/reference/routines.other.rst
@@ -47,6 +47,7 @@ Utility
show_config
deprecate
deprecate_with_doc
+ broadcast_shapes
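The newly listed ``broadcast_shapes`` (added in NumPy 1.20) resolves the broadcast result of several shapes without allocating any arrays; a minimal sketch:

```python
import numpy as np

print(np.broadcast_shapes((3, 1), (1, 4)))        # (3, 4)
print(np.broadcast_shapes((2, 1), (3,), (2, 3)))  # (2, 3)
```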
Matlab-like Functions
---------------------
diff --git a/doc/source/reference/routines.rst b/doc/source/reference/routines.rst
index 7a9b97d77..5d6a823b7 100644
--- a/doc/source/reference/routines.rst
+++ b/doc/source/reference/routines.rst
@@ -28,7 +28,6 @@ indentation.
routines.emath
routines.err
routines.fft
- routines.financial
routines.functional
routines.help
routines.indexing
diff --git a/doc/source/reference/routines.set.rst b/doc/source/reference/routines.set.rst
index b12d3d5f5..149c33a8b 100644
--- a/doc/source/reference/routines.set.rst
+++ b/doc/source/reference/routines.set.rst
@@ -3,6 +3,11 @@ Set routines
.. currentmodule:: numpy
+.. autosummary::
+ :toctree: generated/
+
+ lib.arraysetops
+
Making proper sets
------------------
.. autosummary::
diff --git a/doc/source/reference/simd/simd-optimizations-tables-diff.inc b/doc/source/reference/simd/simd-optimizations-tables-diff.inc
new file mode 100644
index 000000000..41fa96703
--- /dev/null
+++ b/doc/source/reference/simd/simd-optimizations-tables-diff.inc
@@ -0,0 +1,37 @@
+.. generated via source/reference/simd/simd-optimizations.py
+
+x86::Intel Compiler - CPU feature names
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. table::
+ :align: left
+
+ =========== ==================================================================================================================
+ Name Implies
+ =========== ==================================================================================================================
+ ``FMA3`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` **AVX2**
+ ``AVX2`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` **FMA3**
+ ``AVX512F`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` **AVX512CD**
+ =========== ==================================================================================================================
+
+.. note::
+ The following features aren't supported by x86::Intel Compiler:
+ **XOP FMA4**
+
+x86::Microsoft Visual C/C++ - CPU feature names
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. table::
+ :align: left
+
+ ============ =================================================================================================================================
+ Name Implies
+ ============ =================================================================================================================================
+ ``FMA3`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` **AVX2**
+ ``AVX2`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` **FMA3**
+ ``AVX512F`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` **AVX512CD** **AVX512_SKX**
+ ``AVX512CD`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` **AVX512_SKX**
+ ============ =================================================================================================================================
+
+.. note::
+ The following features aren't supported by x86::Microsoft Visual C/C++:
+ **AVX512_KNL AVX512_KNM**
+
diff --git a/doc/source/reference/simd/simd-optimizations-tables.inc b/doc/source/reference/simd/simd-optimizations-tables.inc
index d5b82ee0c..f038a91e1 100644
--- a/doc/source/reference/simd/simd-optimizations-tables.inc
+++ b/doc/source/reference/simd/simd-optimizations-tables.inc
@@ -1,110 +1,103 @@
.. generated via source/reference/simd/simd-optimizations.py
-``X86`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
+x86 - CPU feature names
+~~~~~~~~~~~~~~~~~~~~~~~
.. table::
:align: left
- ======== =================================================================================================================
- Name Implies
- ======== =================================================================================================================
- SSE ``SSE`` ``SSE2``
- SSE2 ``SSE`` ``SSE2``
- SSE3 ``SSE`` ``SSE2``
- SSSE3 ``SSE`` ``SSE2`` ``SSE3``
- SSE41 ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3``
- POPCNT ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41``
- SSE42 ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT``
- AVX ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42``
- XOP ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX``
- FMA4 ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX``
- F16C ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX``
- FMA3 ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C``
- AVX2 ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C``
- AVX512F ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2``
- AVX512CD ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F``
- ======== =================================================================================================================
-
-``X86`` - Group names
-~~~~~~~~~~~~~~~~~~~~~
-
+ ============ =================================================================================================================
+ Name Implies
+ ============ =================================================================================================================
+ ``SSE`` ``SSE2``
+ ``SSE2`` ``SSE``
+ ``SSE3`` ``SSE`` ``SSE2``
+ ``SSSE3`` ``SSE`` ``SSE2`` ``SSE3``
+ ``SSE41`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3``
+ ``POPCNT`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41``
+ ``SSE42`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT``
+ ``AVX`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42``
+ ``XOP`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX``
+ ``FMA4`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX``
+ ``F16C`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX``
+ ``FMA3`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C``
+ ``AVX2`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C``
+ ``AVX512F`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2``
+ ``AVX512CD`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F``
+ ============ =================================================================================================================
+
+x86 - Group names
+~~~~~~~~~~~~~~~~~
.. table::
:align: left
- ========== ===================================================== ===========================================================================================================================================================================
- Name Gather Implies
- ========== ===================================================== ===========================================================================================================================================================================
- AVX512_KNL ``AVX512ER`` ``AVX512PF`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD``
- AVX512_KNM ``AVX5124FMAPS`` ``AVX5124VNNIW`` ``AVX512VPOPCNTDQ`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_KNL``
- AVX512_SKX ``AVX512VL`` ``AVX512BW`` ``AVX512DQ`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD``
- AVX512_CLX ``AVX512VNNI`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_SKX``
- AVX512_CNL ``AVX512IFMA`` ``AVX512VBMI`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_SKX``
- AVX512_ICL ``AVX512VBMI2`` ``AVX512BITALG`` ``AVX512VPOPCNTDQ`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_SKX`` ``AVX512_CLX`` ``AVX512_CNL``
- ========== ===================================================== ===========================================================================================================================================================================
-
-``IBM/POWER`` ``big-endian`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
+ ============== ===================================================== ===========================================================================================================================================================================
+ Name Gather Implies
+ ============== ===================================================== ===========================================================================================================================================================================
+ ``AVX512_KNL`` ``AVX512ER`` ``AVX512PF`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD``
+ ``AVX512_KNM`` ``AVX5124FMAPS`` ``AVX5124VNNIW`` ``AVX512VPOPCNTDQ`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_KNL``
+ ``AVX512_SKX`` ``AVX512VL`` ``AVX512BW`` ``AVX512DQ`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD``
+ ``AVX512_CLX`` ``AVX512VNNI`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_SKX``
+ ``AVX512_CNL`` ``AVX512IFMA`` ``AVX512VBMI`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_SKX``
+ ``AVX512_ICL`` ``AVX512VBMI2`` ``AVX512BITALG`` ``AVX512VPOPCNTDQ`` ``SSE`` ``SSE2`` ``SSE3`` ``SSSE3`` ``SSE41`` ``POPCNT`` ``SSE42`` ``AVX`` ``F16C`` ``FMA3`` ``AVX2`` ``AVX512F`` ``AVX512CD`` ``AVX512_SKX`` ``AVX512_CLX`` ``AVX512_CNL``
+ ============== ===================================================== ===========================================================================================================================================================================
+
+IBM/POWER big-endian - CPU feature names
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. table::
:align: left
- ==== ================
- Name Implies
- ==== ================
- VSX
- VSX2 ``VSX``
- VSX3 ``VSX`` ``VSX2``
- ==== ================
-
-``IBM/POWER`` ``little-endian mode`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ======== ================
+ Name Implies
+ ======== ================
+ ``VSX``
+ ``VSX2`` ``VSX``
+ ``VSX3`` ``VSX`` ``VSX2``
+ ======== ================
+
+IBM/POWER little-endian - CPU feature names
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. table::
:align: left
- ==== ================
- Name Implies
- ==== ================
- VSX ``VSX`` ``VSX2``
- VSX2 ``VSX`` ``VSX2``
- VSX3 ``VSX`` ``VSX2``
- ==== ================
+ ======== ================
+ Name Implies
+ ======== ================
+ ``VSX`` ``VSX2``
+ ``VSX2`` ``VSX``
+ ``VSX3`` ``VSX`` ``VSX2``
+ ======== ================
-``ARMHF`` - CPU feature names
+ARMv7/A32 - CPU feature names
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
.. table::
:align: left
- ========== ===========================================================
- Name Implies
- ========== ===========================================================
- NEON
- NEON_FP16 ``NEON``
- NEON_VFPV4 ``NEON`` ``NEON_FP16``
- ASIMD ``NEON`` ``NEON_FP16`` ``NEON_VFPV4``
- ASIMDHP ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- ASIMDDP ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- ASIMDFHM ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD`` ``ASIMDHP``
- ========== ===========================================================
-
-``ARM64`` ``AARCH64`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
+ ============== ===========================================================
+ Name Implies
+ ============== ===========================================================
+ ``NEON``
+ ``NEON_FP16`` ``NEON``
+ ``NEON_VFPV4`` ``NEON`` ``NEON_FP16``
+ ``ASIMD`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4``
+ ``ASIMDHP`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
+ ``ASIMDDP`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
+ ``ASIMDFHM`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD`` ``ASIMDHP``
+ ============== ===========================================================
+
+ARMv8/A64 - CPU feature names
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. table::
:align: left
- ========== ===========================================================
- Name Implies
- ========== ===========================================================
- NEON ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- NEON_FP16 ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- NEON_VFPV4 ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- ASIMD ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- ASIMDHP ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- ASIMDDP ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
- ASIMDFHM ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD`` ``ASIMDHP``
- ========== ===========================================================
+ ============== ===========================================================
+ Name Implies
+ ============== ===========================================================
+ ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
+ ``NEON_FP16`` ``NEON`` ``NEON_VFPV4`` ``ASIMD``
+ ``NEON_VFPV4`` ``NEON`` ``NEON_FP16`` ``ASIMD``
+ ``ASIMD`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4``
+ ``ASIMDHP`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
+ ``ASIMDDP`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD``
+ ``ASIMDFHM`` ``NEON`` ``NEON_FP16`` ``NEON_VFPV4`` ``ASIMD`` ``ASIMDHP``
+ ============== ===========================================================
- \ No newline at end of file
diff --git a/doc/source/reference/simd/simd-optimizations.py b/doc/source/reference/simd/simd-optimizations.py
index 628356163..5d6da50e3 100644
--- a/doc/source/reference/simd/simd-optimizations.py
+++ b/doc/source/reference/simd/simd-optimizations.py
@@ -3,10 +3,14 @@ Generate CPU features tables from CCompilerOpt
"""
from os import sys, path
gen_path = path.dirname(path.realpath(__file__))
+#sys.path.append(path.abspath(path.join(gen_path, *([".."]*4), "numpy", "distutils")))
+#from ccompiler_opt import CCompilerOpt
from numpy.distutils.ccompiler_opt import CCompilerOpt
class FakeCCompilerOpt(CCompilerOpt):
fake_info = ""
+ # disable caching; it is not needed here
+ conf_nocache = True
def __init__(self, *args, **kwargs):
no_cc = None
CCompilerOpt.__init__(self, no_cc, **kwargs)
@@ -23,40 +27,49 @@ class FakeCCompilerOpt(CCompilerOpt):
return True
def gen_features_table(self, features, ignore_groups=True,
- field_names=["Name", "Implies"], **kwargs):
+ field_names=["Name", "Implies"],
+ fstyle=None, fstyle_implies=None, **kwargs):
rows = []
- for f in features:
+ if fstyle is None:
+ fstyle = lambda ft: f'``{ft}``'
+ if fstyle_implies is None:
+ fstyle_implies = lambda origin, ft: fstyle(ft)
+ for f in self.feature_sorted(features):
is_group = "group" in self.feature_supported.get(f, {})
if ignore_groups and is_group:
continue
implies = self.feature_sorted(self.feature_implies(f))
- implies = ' '.join(['``%s``' % i for i in implies])
- rows.append([f, implies])
- return self.gen_rst_table(field_names, rows, **kwargs)
+ implies = ' '.join([fstyle_implies(f, i) for i in implies])
+ rows.append([fstyle(f), implies])
+ if rows:
+ return self.gen_rst_table(field_names, rows, **kwargs)
def gen_gfeatures_table(self, features,
field_names=["Name", "Gather", "Implies"],
- **kwargs):
+ fstyle=None, fstyle_implies=None, **kwargs):
rows = []
- for f in features:
+ if fstyle is None:
+ fstyle = lambda ft: f'``{ft}``'
+ if fstyle_implies is None:
+ fstyle_implies = lambda origin, ft: fstyle(ft)
+ for f in self.feature_sorted(features):
gather = self.feature_supported.get(f, {}).get("group", None)
if not gather:
continue
implies = self.feature_sorted(self.feature_implies(f))
- implies = ' '.join(['``%s``' % i for i in implies])
- gather = ' '.join(['``%s``' % i for i in gather])
- rows.append([f, gather, implies])
- return self.gen_rst_table(field_names, rows, **kwargs)
+ implies = ' '.join([fstyle_implies(f, i) for i in implies])
+ gather = ' '.join([fstyle_implies(f, i) for i in gather])
+ rows.append([fstyle(f), gather, implies])
+ if rows:
+ return self.gen_rst_table(field_names, rows, **kwargs)
-
- def gen_rst_table(self, field_names, rows, margin_left=2):
+ def gen_rst_table(self, field_names, rows, tab_size=4):
assert(not rows or len(field_names) == len(rows[0]))
rows.append(field_names)
fld_len = len(field_names)
cls_len = [max(len(c[i]) for c in rows) for i in range(fld_len)]
del rows[-1]
- padding = 0
- cformat = ' '.join('{:<%d}' % (i+padding) for i in cls_len)
+ cformat = ' '.join('{:<%d}' % i for i in cls_len)
border = cformat.format(*['='*i for i in cls_len])
rows = [cformat.format(*row) for row in rows]
@@ -65,102 +78,113 @@ class FakeCCompilerOpt(CCompilerOpt):
# footer
rows += [border]
# add left margin
- rows = [(' ' * margin_left) + r for r in rows]
+ rows = [(' ' * tab_size) + r for r in rows]
return '\n'.join(rows)
-if __name__ == '__main__':
- margin_left = 4*1
- ############### x86 ###############
- FakeCCompilerOpt.fake_info = "x86_64 gcc"
- x64_gcc = FakeCCompilerOpt(cpu_baseline="max")
- x86_tables = """\
-``X86`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. table::
- :align: left
-
-{x86_features}
-
-``X86`` - Group names
-~~~~~~~~~~~~~~~~~~~~~
-
-.. table::
- :align: left
-
-{x86_gfeatures}
-
-""".format(
- x86_features = x64_gcc.gen_features_table(
- x64_gcc.cpu_baseline_names(), margin_left=margin_left
- ),
- x86_gfeatures = x64_gcc.gen_gfeatures_table(
- x64_gcc.cpu_baseline_names(), margin_left=margin_left
+def features_table_sections(name, ftable=None, gtable=None, tab_size=4):
+ tab = ' '*tab_size
+ content = ''
+ if ftable:
+ title = f"{name} - CPU feature names"
+ content = (
+ f"{title}\n{'~'*len(title)}"
+ f"\n.. table::\n{tab}:align: left\n\n"
+ f"{ftable}\n\n"
)
- )
- ############### Power ###############
- FakeCCompilerOpt.fake_info = "ppc64 gcc"
- ppc64_gcc = FakeCCompilerOpt(cpu_baseline="max")
- FakeCCompilerOpt.fake_info = "ppc64le gcc"
- ppc64le_gcc = FakeCCompilerOpt(cpu_baseline="max")
- ppc64_tables = """\
-``IBM/POWER`` ``big-endian`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. table::
- :align: left
-
-{ppc64_features}
-
-``IBM/POWER`` ``little-endian mode`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. table::
- :align: left
-
-{ppc64le_features}
-
-""".format(
- ppc64_features = ppc64_gcc.gen_features_table(
- ppc64_gcc.cpu_baseline_names(), margin_left=margin_left
- ),
- ppc64le_features = ppc64le_gcc.gen_features_table(
- ppc64le_gcc.cpu_baseline_names(), margin_left=margin_left
+ if gtable:
+ title = f"{name} - Group names"
+ content += (
+ f"{title}\n{'~'*len(title)}"
+ f"\n.. table::\n{tab}:align: left\n\n"
+ f"{gtable}\n\n"
)
+ return content
+
+def features_table(arch, cc="gcc", pretty_name=None, **kwargs):
+ FakeCCompilerOpt.fake_info = arch + cc
+ ccopt = FakeCCompilerOpt(cpu_baseline="max")
+ features = ccopt.cpu_baseline_names()
+ ftable = ccopt.gen_features_table(features, **kwargs)
+ gtable = ccopt.gen_gfeatures_table(features, **kwargs)
+
+ if not pretty_name:
+ pretty_name = arch + '/' + cc
+ return features_table_sections(pretty_name, ftable, gtable, **kwargs)
+
+def features_table_diff(arch, cc, cc_vs="gcc", pretty_name=None, **kwargs):
+ FakeCCompilerOpt.fake_info = arch + cc
+ ccopt = FakeCCompilerOpt(cpu_baseline="max")
+ fnames = ccopt.cpu_baseline_names()
+ features = {f:ccopt.feature_implies(f) for f in fnames}
+
+ FakeCCompilerOpt.fake_info = arch + cc_vs
+ ccopt_vs = FakeCCompilerOpt(cpu_baseline="max")
+ fnames_vs = ccopt_vs.cpu_baseline_names()
+ features_vs = {f:ccopt_vs.feature_implies(f) for f in fnames_vs}
+
+ common = set(fnames).intersection(fnames_vs)
+ extra_avl = set(fnames).difference(fnames_vs)
+ not_avl = set(fnames_vs).difference(fnames)
+ diff_impl_f = {f:features[f].difference(features_vs[f]) for f in common}
+ diff_impl = {k for k, v in diff_impl_f.items() if v}
+
+ fbold = lambda ft: f'**{ft}**' if ft in extra_avl else f'``{ft}``'
+ fbold_implies = lambda origin, ft: (
+ f'**{ft}**' if ft in diff_impl_f.get(origin, {}) else f'``{ft}``'
)
- ############### Arm ###############
- FakeCCompilerOpt.fake_info = "armhf gcc"
- armhf_gcc = FakeCCompilerOpt(cpu_baseline="max")
- FakeCCompilerOpt.fake_info = "aarch64 gcc"
- aarch64_gcc = FakeCCompilerOpt(cpu_baseline="max")
- arm_tables = """\
-``ARMHF`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. table::
- :align: left
-
-{armhf_features}
-
-``ARM64`` ``AARCH64`` - CPU feature names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. table::
- :align: left
-
-{aarch64_features}
-
- """.format(
- armhf_features = armhf_gcc.gen_features_table(
- armhf_gcc.cpu_baseline_names(), margin_left=margin_left
- ),
- aarch64_features = aarch64_gcc.gen_features_table(
- aarch64_gcc.cpu_baseline_names(), margin_left=margin_left
- )
+ diff_all = diff_impl.union(extra_avl)
+ ftable = ccopt.gen_features_table(
+ diff_all, fstyle=fbold, fstyle_implies=fbold_implies, **kwargs
+ )
+ gtable = ccopt.gen_gfeatures_table(
+ diff_all, fstyle=fbold, fstyle_implies=fbold_implies, **kwargs
)
- # TODO: diff the difference among all supported compilers
+ if not pretty_name:
+ pretty_name = arch + '/' + cc
+ content = features_table_sections(pretty_name, ftable, gtable, **kwargs)
+
+ if not_avl:
+ not_avl = ccopt_vs.feature_sorted(not_avl)
+ not_avl = ' '.join(not_avl)
+ content += (
+ ".. note::\n"
+ f" The following features aren't supported by {pretty_name}:\n"
+ f" **{not_avl}**\n\n"
+ )
+ return content
+
+if __name__ == '__main__':
+ pretty_names = {
+ "PPC64": "IBM/POWER big-endian",
+ "PPC64LE": "IBM/POWER little-endian",
+ "ARMHF": "ARMv7/A32",
+ "AARCH64": "ARMv8/A64",
+ "ICC": "Intel Compiler",
+ # "ICCW": "Intel Compiler msvc-like",
+ "MSVC": "Microsoft Visual C/C++"
+ }
with open(path.join(gen_path, 'simd-optimizations-tables.inc'), 'wt') as fd:
fd.write(f'.. generated via {__file__}\n\n')
- fd.write(x86_tables)
- fd.write(ppc64_tables)
- fd.write(arm_tables)
+ for arch in (
+ ("x86", "PPC64", "PPC64LE", "ARMHF", "AARCH64")
+ ):
+ pretty_name = pretty_names.get(arch, arch)
+ table = features_table(arch=arch, pretty_name=pretty_name)
+ assert(table)
+ fd.write(table)
+
+ with open(path.join(gen_path, 'simd-optimizations-tables-diff.inc'), 'wt') as fd:
+ fd.write(f'.. generated via {__file__}\n\n')
+ for arch, cc_names in (
+ ("x86", ("clang", "ICC", "MSVC")),
+ ("PPC64", ("clang",)),
+ ("PPC64LE", ("clang",)),
+ ("ARMHF", ("clang",)),
+ ("AARCH64", ("clang",))
+ ):
+ arch_pname = pretty_names.get(arch, arch)
+ for cc in cc_names:
+ pretty_name = f"{arch_pname}::{pretty_names.get(cc, cc)}"
+ table = features_table_diff(arch=arch, cc=cc, pretty_name=pretty_name)
+ if table:
+ fd.write(table)
diff --git a/doc/source/reference/simd/simd-optimizations.rst b/doc/source/reference/simd/simd-optimizations.rst
index eb7eb2a83..59a4892b2 100644
--- a/doc/source/reference/simd/simd-optimizations.rst
+++ b/doc/source/reference/simd/simd-optimizations.rst
@@ -29,8 +29,8 @@ Build options for compilation
safely run on a wide range of platforms within the processor family.
- ``--cpu-dispatch``: dispatched set of additional optimizations.
- The default value for ``x86`` is ``max -xop -fma4`` which enables all CPU
- features, except for AMD legacy features.
+  The default value is ``max -xop -fma4``, which enables all CPU
+  features except for AMD legacy features (in the case of ``x86``).
The command arguments are available in ``build``, ``build_clib``, and
``build_ext``.
@@ -38,13 +38,24 @@ if ``build_clib`` or ``build_ext`` are not specified by the user, the arguments
``build`` will be used instead, which also holds the default values.
Optimization names can be CPU features or groups of features that gather
-several features or special options to perform a series of procedures.
+several features or :ref:`special options <special-options>` to perform a series of procedures.
The following tables show the current supported optimizations sorted from the lowest to the highest interest.
.. include:: simd-optimizations-tables.inc
+----
+
+.. _tables-diff:
+
+While the above tables are based on the GCC compiler, the following tables show
+the differences under other compilers:
+
+.. include:: simd-optimizations-tables-diff.inc
+
+.. _special-options:
+
Special options
~~~~~~~~~~~~~~~
@@ -80,7 +91,7 @@ NOTES
- The order of the requsted optimizations doesn't matter.
-- Either commas or spaces can be used as a separator, e.g. ``--cpu-dispatch``\ =
+- Either commas or spaces can be used as a separator, e.g. ``--cpu-dispatch``\ =
"avx2 avx512f" or ``--cpu-dispatch``\ = "avx2, avx512f" both work, but the
arguments must be enclosed in quotes.
@@ -114,6 +125,25 @@ NOTES
Special cases
~~~~~~~~~~~~~
+**Interrelated CPU features**: Some exceptional conditions force us to link certain features together when it comes to certain compilers or architectures, making it impossible to build them separately.
+These conditions can be divided into two parts, as follows:
+
+- **Architectural compatibility**: The need to align certain CPU features that are assured
+ to be supported by successive generations of the same architecture, for example:
+
+  - On ppc64le, ``VSX (ISA 2.06)`` and ``VSX2 (ISA 2.07)`` imply one another, since
+    the first generation that supports little-endian mode is Power-8 (ISA 2.07).
+  - On AArch64, ``NEON``, ``FP16``, ``VFPV4`` and ``ASIMD`` imply each other, since
+    they are part of the hardware baseline.
+
+- **Compilation compatibility**: Not all **C/C++** compilers provide independent support for all CPU
+  features. For example, **Intel**'s compiler doesn't provide separate flags for ``AVX2`` and ``FMA3``;
+  this makes sense, since all Intel CPUs that come with ``AVX2`` also support ``FMA3`` and vice versa,
+ but this approach is incompatible with other **x86** CPUs from **AMD** or **VIA**.
+ Therefore, there are differences in the depiction of CPU features between the C/C++ compilers,
+ as shown in the :ref:`tables above <tables-diff>`.
+
+
Behaviors and Errors
~~~~~~~~~~~~~~~~~~~~
@@ -224,7 +254,7 @@ Definitely, yes. But the :ref:`dispatch-able sources <dispatchable-sources>` are
treated differently.
What if the user specifies certain **baseline features** during the
-build but at runtime the machine doesn't support even these
+build but at runtime the machine doesn't support even these
features? Will the compiled code be called via one of these definitions, or
maybe the compiler itself auto-generated/vectorized certain piece of code
based on the provided command line compiler flags?
@@ -304,7 +334,7 @@ through ``--cpu-dispatch``, but it can also represent other options such as:
.. code:: c
- /*
+ /*
* this definition is used by NumPy utilities as suffixes for the
* exported symbols
*/
diff --git a/doc/source/reference/ufuncs.rst b/doc/source/reference/ufuncs.rst
index 8f506dd8b..06fbe28dd 100644
--- a/doc/source/reference/ufuncs.rst
+++ b/doc/source/reference/ufuncs.rst
@@ -1,5 +1,7 @@
.. sectionauthor:: adapted from "Guide to NumPy" by Travis E. Oliphant
+.. currentmodule:: numpy
+
.. _ufuncs:
************************************
@@ -8,8 +10,6 @@ Universal functions (:class:`ufunc`)
.. note: XXX: section might need to be made more reference-guideish...
-.. currentmodule:: numpy
-
.. index: ufunc, universal function, arithmetic, operation
A universal function (or :term:`ufunc` for short) is a function that
@@ -298,6 +298,11 @@ them by defining certain special methods. For details, see
:class:`ufunc`
==============
+.. autosummary::
+ :toctree: generated/
+
+ numpy.ufunc
+
.. _ufuncs.kwargs:
Optional keyword arguments
@@ -335,6 +340,19 @@ advanced usage and will not typically be used.
Note that outputs not explicitly filled are left with their
uninitialized values.
+ .. versionadded:: 1.13
+
+ Operations where ufunc input and output operands have memory overlap are
+ defined to be the same as for equivalent operations where there
+ is no memory overlap. Operations affected make temporary copies
+ as needed to eliminate data dependency. As detecting these cases
+ is computationally expensive, a heuristic is used, which may in rare
+ cases result in needless temporary copies. For operations where the
+ data dependency is simple enough for the heuristic to analyze,
+ temporary copies will not be made even if the arrays overlap, if it
+ can be deduced copies are not necessary. As an example,
+ ``np.add(a, b, out=a)`` will not involve copies.
+
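   A minimal runnable sketch of the overlap handling described above (the
   array values are illustrative only):

   ```python
   import numpy as np

   # `a` and its reversed view share memory; NumPy detects the overlap and
   # works from a temporary copy, so the result equals the no-overlap case.
   a = np.arange(4)
   np.add(a, a[::-1], out=a)   # [0+3, 1+2, 2+1, 3+0]

   # Simple data dependencies are recognized by the heuristic and need no
   # copy at all: this doubles b in place without a temporary.
   b = np.arange(4)
   np.add(b, b, out=b)
   ```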
*where*
.. versionadded:: 1.7
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 5a890178c..29199fb83 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -5,7 +5,12 @@ Release Notes
.. toctree::
:maxdepth: 3
+ 1.21.0 <release/1.21.0-notes>
1.20.0 <release/1.20.0-notes>
+ 1.19.4 <release/1.19.4-notes>
+ 1.19.3 <release/1.19.3-notes>
+ 1.19.2 <release/1.19.2-notes>
+ 1.19.1 <release/1.19.1-notes>
1.19.0 <release/1.19.0-notes>
1.18.4 <release/1.18.4-notes>
1.18.3 <release/1.18.3-notes>
diff --git a/doc/source/release/1.16.0-notes.rst b/doc/source/release/1.16.0-notes.rst
index e78e270f4..17d24160a 100644
--- a/doc/source/release/1.16.0-notes.rst
+++ b/doc/source/release/1.16.0-notes.rst
@@ -170,8 +170,8 @@ See the "accessing multiple fields" section of the
C API changes
=============
-The :c:data:`NPY_API_VERSION` was incremented to 0x0000D, due to the addition
-of:
+The :c:data:`NPY_FEATURE_VERSION` was incremented to 0x0000D, due to
+the addition of:
* :c:member:`PyUFuncObject.core_dim_flags`
* :c:member:`PyUFuncObject.core_dim_sizes`
diff --git a/doc/source/release/1.17.0-notes.rst b/doc/source/release/1.17.0-notes.rst
index a93eb2186..4bdc6105f 100644
--- a/doc/source/release/1.17.0-notes.rst
+++ b/doc/source/release/1.17.0-notes.rst
@@ -171,15 +171,15 @@ The functions `load`, and ``lib.format.read_array`` take an
`CVE-2019-6446 <https://nvd.nist.gov/vuln/detail/CVE-2019-6446>`_.
-.. currentmodule:: numpy.random.mtrand
+.. currentmodule:: numpy.random
Potential changes to the random stream in old random module
-----------------------------------------------------------
Due to bugs in the application of ``log`` to random floating point numbers,
the stream may change when sampling from `~RandomState.beta`, `~RandomState.binomial`,
`~RandomState.laplace`, `~RandomState.logistic`, `~RandomState.logseries` or
-`~RandomState.multinomial` if a ``0`` is generated in the underlying `MT19937
-<~numpy.random.mt11937.MT19937>` random stream. There is a ``1`` in
+`~RandomState.multinomial` if a ``0`` is generated in the underlying `MT19937`
+random stream. There is a ``1`` in
:math:`10^{53}` chance of this occurring, so the probability that the stream
changes for any given seed is extremely small. If a ``0`` is encountered in the
underlying generator, then the incorrect value produced (either `numpy.inf` or
@@ -559,4 +559,3 @@ Structured arrays indexed with non-existent fields raise ``KeyError`` not ``Valu
----------------------------------------------------------------------------------------
``arr['bad_field']`` on a structured type raises ``KeyError``, for consistency
with ``dict['bad_field']``.
-
diff --git a/doc/source/release/1.19.1-notes.rst b/doc/source/release/1.19.1-notes.rst
new file mode 100644
index 000000000..4fc5528f5
--- /dev/null
+++ b/doc/source/release/1.19.1-notes.rst
@@ -0,0 +1,68 @@
+.. currentmodule:: numpy
+
+==========================
+NumPy 1.19.1 Release Notes
+==========================
+
+NumPy 1.19.1 fixes several bugs found in the 1.19.0 release, replaces several
+functions deprecated in the upcoming Python-3.9 release, has improved support
+for AIX, and has a number of development related updates to keep CI working
+with recent upstream changes.
+
+This release supports Python 3.6-3.8. Cython >= 0.29.21 needs to be used when
+building with Python 3.9 for testing purposes.
+
+
+Contributors
+============
+
+A total of 15 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Abhinav Reddy +
+* Anirudh Subramanian
+* Antonio Larrosa +
+* Charles Harris
+* Chunlin Fang
+* Eric Wieser
+* Etienne Guesnet +
+* Kevin Sheppard
+* Matti Picus
+* Raghuveer Devulapalli
+* Roman Yurchak
+* Ross Barnowski
+* Sayed Adel
+* Sebastian Berg
+* Tyler Reddy
+
+
+Pull requests merged
+====================
+
+A total of 25 pull requests were merged for this release.
+
+* `#16649 <https://github.com/numpy/numpy/pull/16649>`__: MAINT, CI: disable Shippable cache
+* `#16652 <https://github.com/numpy/numpy/pull/16652>`__: MAINT: Replace `PyUString_GET_SIZE` with `PyUnicode_GetLength`.
+* `#16654 <https://github.com/numpy/numpy/pull/16654>`__: REL: Fix outdated docs link
+* `#16656 <https://github.com/numpy/numpy/pull/16656>`__: BUG: raise IEEE exception on AIX
+* `#16672 <https://github.com/numpy/numpy/pull/16672>`__: BUG: Fix bug in AVX complex absolute while processing array of...
+* `#16693 <https://github.com/numpy/numpy/pull/16693>`__: TST: Add extra debugging information to CPU features detection
+* `#16703 <https://github.com/numpy/numpy/pull/16703>`__: BLD: Add CPU entry for Emscripten / WebAssembly
+* `#16705 <https://github.com/numpy/numpy/pull/16705>`__: TST: Disable Python 3.9-dev testing.
+* `#16714 <https://github.com/numpy/numpy/pull/16714>`__: MAINT: Disable use_hugepages in case of ValueError
+* `#16724 <https://github.com/numpy/numpy/pull/16724>`__: BUG: Fix PyArray_SearchSorted signature.
+* `#16768 <https://github.com/numpy/numpy/pull/16768>`__: MAINT: Fixes for deprecated functions in scalartypes.c.src
+* `#16772 <https://github.com/numpy/numpy/pull/16772>`__: MAINT: Remove unneeded call to PyUnicode_READY
+* `#16776 <https://github.com/numpy/numpy/pull/16776>`__: MAINT: Fix deprecated functions in scalarapi.c
+* `#16779 <https://github.com/numpy/numpy/pull/16779>`__: BLD, ENH: Add RPATH support for AIX
+* `#16780 <https://github.com/numpy/numpy/pull/16780>`__: BUG: Fix default fallback in genfromtxt
+* `#16784 <https://github.com/numpy/numpy/pull/16784>`__: BUG: Added missing return after raising error in methods.c
+* `#16795 <https://github.com/numpy/numpy/pull/16795>`__: BLD: update cython to 0.29.21
+* `#16832 <https://github.com/numpy/numpy/pull/16832>`__: MAINT: setuptools 49.2.0 emits a warning, avoid it
+* `#16872 <https://github.com/numpy/numpy/pull/16872>`__: BUG: Validate output size in bin- and multinomial
+* `#16875 <https://github.com/numpy/numpy/pull/16875>`__: BLD, MAINT: Pin setuptools
+* `#16904 <https://github.com/numpy/numpy/pull/16904>`__: DOC: Reconstruct Testing Guideline.
+* `#16905 <https://github.com/numpy/numpy/pull/16905>`__: TST, BUG: Re-raise MemoryError exception in test_large_zip's...
+* `#16906 <https://github.com/numpy/numpy/pull/16906>`__: BUG,DOC: Fix bad MPL kwarg.
+* `#16916 <https://github.com/numpy/numpy/pull/16916>`__: BUG: Fix string/bytes to complex assignment
+* `#16922 <https://github.com/numpy/numpy/pull/16922>`__: REL: Prepare for NumPy 1.19.1 release
diff --git a/doc/source/release/1.19.2-notes.rst b/doc/source/release/1.19.2-notes.rst
new file mode 100644
index 000000000..1267d5eb1
--- /dev/null
+++ b/doc/source/release/1.19.2-notes.rst
@@ -0,0 +1,57 @@
+.. currentmodule:: numpy
+
+==========================
+NumPy 1.19.2 Release Notes
+==========================
+
+NumPy 1.19.2 fixes several bugs, prepares for the upcoming Cython 3.x release,
+and pins setuptools to keep distutils working while upstream modifications are
+ongoing. The aarch64 wheels are built with the latest manylinux2014 release
+that fixes the problem of differing page sizes used by different linux distros.
+
+This release supports Python 3.6-3.8. Cython >= 0.29.21 needs to be used when
+building with Python 3.9 for testing purposes.
+
+There is a known problem with Windows 10 version 2004 and OpenBLAS svd that we
+are trying to debug. If you are running that Windows version you should use a
+NumPy version that links to the MKL library; earlier Windows versions are fine.
+
+Improvements
+============
+
+Add NumPy declarations for Cython 3.0 and later
+-----------------------------------------------
+The pxd declarations for Cython 3.0 were improved to avoid using deprecated
+NumPy C-API features. Extension modules built with Cython 3.0+ that use NumPy
+can now set the C macro ``NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION`` to avoid
+C compiler warnings about deprecated API usage.
+
+Contributors
+============
+
+A total of 8 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Charles Harris
+* Matti Picus
+* Pauli Virtanen
+* Philippe Ombredanne +
+* Sebastian Berg
+* Stefan Behnel +
+* Stephan Loyd +
+* Zac Hatfield-Dodds
+
+Pull requests merged
+====================
+
+A total of 9 pull requests were merged for this release.
+
+* `#16959 <https://github.com/numpy/numpy/pull/16959>`__: TST: Change aarch64 to arm64 in travis.yml.
+* `#16998 <https://github.com/numpy/numpy/pull/16998>`__: MAINT: Configure hypothesis in ``np.test()`` for determinism,...
+* `#17000 <https://github.com/numpy/numpy/pull/17000>`__: BLD: pin setuptools < 49.2.0
+* `#17015 <https://github.com/numpy/numpy/pull/17015>`__: ENH: Add NumPy declarations to be used by Cython 3.0+
+* `#17125 <https://github.com/numpy/numpy/pull/17125>`__: BUG: Remove non-threadsafe sigint handling from fft calculation
+* `#17243 <https://github.com/numpy/numpy/pull/17243>`__: BUG: core: fix ilp64 blas dot/vdot/... for strides > int32 max
+* `#17244 <https://github.com/numpy/numpy/pull/17244>`__: DOC: Use SPDX license expressions with correct license
+* `#17245 <https://github.com/numpy/numpy/pull/17245>`__: DOC: Fix the link to the quick-start in the old API functions
+* `#17272 <https://github.com/numpy/numpy/pull/17272>`__: BUG: fix pickling of arrays larger than 2GiB
diff --git a/doc/source/release/1.19.3-notes.rst b/doc/source/release/1.19.3-notes.rst
new file mode 100644
index 000000000..f1f1fd2b3
--- /dev/null
+++ b/doc/source/release/1.19.3-notes.rst
@@ -0,0 +1,46 @@
+.. currentmodule:: numpy
+
+==========================
+NumPy 1.19.3 Release Notes
+==========================
+
+NumPy 1.19.3 is a small maintenance release with two major improvements:
+
+- Python 3.9 binary wheels on all supported platforms.
+- OpenBLAS fixes for Windows 10 version 2004 fmod bug.
+
+This release supports Python 3.6-3.9 and is linked with OpenBLAS 0.3.12 to avoid
+some of the fmod problems on Windows 10 version 2004. Microsoft is aware of the
+problem and users should upgrade when the fix becomes available; the fix here
+is limited in scope.
+
+Contributors
+============
+
+A total of 8 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Charles Harris
+* Chris Brown +
+* Daniel Vanzo +
+* E. Madison Bray +
+* Hugo van Kemenade +
+* Ralf Gommers
+* Sebastian Berg
+* @danbeibei +
+
+Pull requests merged
+====================
+
+A total of 10 pull requests were merged for this release.
+
+* `#17298 <https://github.com/numpy/numpy/pull/17298>`__: BLD: set upper versions for build dependencies
+* `#17336 <https://github.com/numpy/numpy/pull/17336>`__: BUG: Set deprecated fields to null in PyArray_InitArrFuncs
+* `#17446 <https://github.com/numpy/numpy/pull/17446>`__: ENH: Warn on unsupported Python 3.10+
+* `#17450 <https://github.com/numpy/numpy/pull/17450>`__: MAINT: Update test_requirements.txt.
+* `#17522 <https://github.com/numpy/numpy/pull/17522>`__: ENH: Support for the NVIDIA HPC SDK nvfortran compiler
+* `#17568 <https://github.com/numpy/numpy/pull/17568>`__: BUG: Cygwin Workaround for #14787 on affected platforms
+* `#17647 <https://github.com/numpy/numpy/pull/17647>`__: BUG: Fix memory leak of buffer-info cache due to relaxed strides
+* `#17652 <https://github.com/numpy/numpy/pull/17652>`__: MAINT: Backport openblas_support from master.
+* `#17653 <https://github.com/numpy/numpy/pull/17653>`__: TST: Add Python 3.9 to the CI testing on Windows, Mac.
+* `#17660 <https://github.com/numpy/numpy/pull/17660>`__: TST: Simplify source path names in test_extending.
diff --git a/doc/source/release/1.19.4-notes.rst b/doc/source/release/1.19.4-notes.rst
new file mode 100644
index 000000000..e7c0863f4
--- /dev/null
+++ b/doc/source/release/1.19.4-notes.rst
@@ -0,0 +1,30 @@
+.. currentmodule:: numpy
+
+==========================
+NumPy 1.19.4 Release Notes
+==========================
+
+NumPy 1.19.4 is a quick release to revert the OpenBLAS library version. It was
+hoped that the 0.3.12 OpenBLAS version used in 1.19.3 would work around the
+Microsoft fmod bug, but problems in some docker environments turned up. Instead,
+1.19.4 will use the older library and run a sanity check on import, raising an
+error if the problem is detected. Microsoft is aware of the problem and has
+promised a fix; users should upgrade when it becomes available.
+
+This release supports Python 3.6-3.9.
+
+Contributors
+============
+
+A total of 1 person contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Charles Harris
+
+Pull requests merged
+====================
+
+A total of 2 pull requests were merged for this release.
+
+* `#17679 <https://github.com/numpy/numpy/pull/17679>`__: MAINT: Add check for Windows 10 version 2004 bug.
+* `#17680 <https://github.com/numpy/numpy/pull/17680>`__: REV: Revert OpenBLAS to 1.19.2 version for 1.19.4
diff --git a/doc/source/release/1.20.0-notes.rst b/doc/source/release/1.20.0-notes.rst
index d91bea762..9f46a3e80 100644
--- a/doc/source/release/1.20.0-notes.rst
+++ b/doc/source/release/1.20.0-notes.rst
@@ -3,4 +3,931 @@
==========================
NumPy 1.20.0 Release Notes
==========================
+This NumPy release is the largest made to date; some 648 PRs contributed by
+182 people have been merged. See the list of highlights below for more details.
+The Python versions supported for this release are 3.7-3.9, support for Python
+3.6 has been dropped. Highlights are
+
+- Annotations for NumPy functions. This work is ongoing and improvements can
+ be expected pending feedback from users.
+
+- Wider use of SIMD to increase execution speed of ufuncs. Much work has been
+ done in introducing universal functions that will ease use of modern
+ features across different hardware platforms. This work is ongoing.
+
+- Preliminary work in changing the dtype and casting implementations in order to
+ provide an easier path to extending dtypes. This work is ongoing but enough
+ has been done to allow experimentation and feedback.
+
+- Extensive documentation improvements comprising some 185 PR merges. This work
+ is ongoing and part of the larger project to improve NumPy's online presence
+ and usefulness to new users.
+
+- Further cleanups related to removing Python 2.7. This improves code
+ readability and removes technical debt.
+
+- Preliminary support for the upcoming Cython 3.0.
+
+
+New functions
+=============
+
+The random.Generator class has a new ``permuted`` function.
+-----------------------------------------------------------
+The new function differs from ``shuffle`` and ``permutation`` in that the
+subarrays indexed by an axis are permuted rather than the axis being treated as
+a separate 1-D array for every combination of the other indexes. For example,
+it is now possible to permute the rows or columns of a 2-D array.
+
+(`gh-15121 <https://github.com/numpy/numpy/pull/15121>`__)
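+
+A minimal sketch of the difference (the seed and array values are
+illustrative only):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(seed=42)
+x = np.arange(12).reshape(3, 4)
+
+# permuted shuffles the 1-D subarrays along the given axis independently,
+# so each row keeps its own elements, only reordered.
+y = rng.permuted(x, axis=1)
+
+# Sorting each row back recovers the original (already sorted) rows.
+print(np.sort(y, axis=1))
+```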
+
+``sliding_window_view`` provides a sliding window view for numpy arrays
+-----------------------------------------------------------------------
+`numpy.lib.stride_tricks.sliding_window_view` constructs views on numpy
+arrays that offer a sliding or moving window access to the array. This allows
+for the simple implementation of certain algorithms, such as running means.
+
+(`gh-17394 <https://github.com/numpy/numpy/pull/17394>`__)
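+
+A short sketch of a running mean built on the new function (window size
+chosen for illustration):
+
+```python
+import numpy as np
+from numpy.lib.stride_tricks import sliding_window_view
+
+x = np.arange(5)
+# A (3, 3) view: [[0 1 2], [1 2 3], [2 3 4]] -- no data is copied.
+windows = sliding_window_view(x, window_shape=3)
+
+# Averaging over the last axis gives the running mean: [1. 2. 3.]
+running_mean = windows.mean(axis=-1)
+```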
+
+`numpy.broadcast_shapes` is a new user-facing function
+------------------------------------------------------
+`~numpy.broadcast_shapes` gets the resulting shape from
+broadcasting the given shape tuples against each other.
+
+.. code:: python
+
+ >>> np.broadcast_shapes((1, 2), (3, 1))
+ (3, 2)
+
+ >>> np.broadcast_shapes(2, (3, 1))
+ (3, 2)
+
+ >>> np.broadcast_shapes((6, 7), (5, 6, 1), (7,), (5, 1, 7))
+ (5, 6, 7)
+
+(`gh-17535 <https://github.com/numpy/numpy/pull/17535>`__)
+
+
+Deprecations
+============
+
+Using the aliases of builtin types like ``np.int`` is deprecated
+----------------------------------------------------------------
+
+For a long time, ``np.int`` has been an alias of the builtin ``int``. This is
+repeatedly a cause of confusion for newcomers, and is also simply not useful.
+
+These aliases have been deprecated. The table below shows the full list of
+deprecated aliases, along with their exact meaning. Replacing uses of items in
+the first column with the contents of the second column will work identically
+and silence the deprecation warning.
+
+In many cases, it may have been intended to use the types from the third column.
+Be aware that use of these types may result in subtle but desirable behavior
+changes.
+
+================== ================================= ==================================================================
+Deprecated name Identical to Possibly intended numpy type
+================== ================================= ==================================================================
+``numpy.bool`` ``bool`` `numpy.bool_`
+``numpy.int`` ``int`` `numpy.int_` (default int dtype), `numpy.cint` (C ``int``)
+``numpy.float`` ``float`` `numpy.float_`, `numpy.double` (equivalent)
+``numpy.complex`` ``complex`` `numpy.complex_`, `numpy.cdouble` (equivalent)
+``numpy.object`` ``object`` `numpy.object_`
+``numpy.str`` ``str`` `numpy.str_`
+``numpy.long`` ``int`` (``long`` on Python 2) `numpy.int_` (C ``long``), `numpy.longlong` (largest integer type)
+``numpy.unicode`` ``str`` (``unicode`` on Python 2) `numpy.unicode_`
+================== ================================= ==================================================================
+
+Note that for technical reasons these deprecation warnings will only be emitted
+on Python 3.7 and above.
+
+(`gh-14882 <https://github.com/numpy/numpy/pull/14882>`__)
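+
+A small sketch of the replacements (the arrays are illustrative only):
+
+```python
+import numpy as np
+
+# Deprecated:   np.array([1, 2, 3], dtype=np.int)
+# Identical replacement using the Python builtin:
+a = np.array([1, 2, 3], dtype=int)
+
+# If the NumPy scalar type was what was actually intended:
+b = np.array([1, 2, 3], dtype=np.int_)
+
+# The default integer dtype is np.int_, so both agree here.
+print(a.dtype, b.dtype)
+```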
+
+Passing ``shape=None`` to functions with a non-optional shape argument is deprecated
+------------------------------------------------------------------------------------
+Previously, this was an alias for passing ``shape=()``.
+This deprecation is emitted by `PyArray_IntpConverter` in the C API. If your
+API is intended to support passing ``None``, then you should check for ``None``
+prior to invoking the converter, so as to be able to distinguish ``None`` and
+``()``.
+
+(`gh-15886 <https://github.com/numpy/numpy/pull/15886>`__)
+
+Indexing errors will be reported even when index result is empty
+----------------------------------------------------------------
+In the future, NumPy will raise an IndexError when an
+integer array index contains out-of-bounds values, even if a non-indexed
+dimension is of length 0. This will now emit a DeprecationWarning.
+This can happen when the array is previously empty, or an empty
+slice is involved::
+
+ arr1 = np.zeros((5, 0))
+ arr1[[20]]
+ arr2 = np.zeros((5, 5))
+ arr2[[20], :0]
+
+Previously the non-empty index ``[20]`` was not checked for correctness.
+It will now be checked, causing a deprecation warning which will be turned
+into an error. This also applies to assignments.
+
+(`gh-15900 <https://github.com/numpy/numpy/pull/15900>`__)
+
+Inexact matches for ``mode`` and ``searchside`` are deprecated
+--------------------------------------------------------------
+Inexact and case insensitive matches for ``mode`` and ``searchside`` were valid
+inputs earlier and will give a DeprecationWarning now. For example, below are
+some example usages which are now deprecated and will give a
+DeprecationWarning::
+
+ import numpy as np
+ arr = np.array([[3, 6, 6], [4, 5, 1]])
+ # mode: inexact match
+ np.ravel_multi_index(arr, (7, 6), mode="clap") # should be "clip"
+ # searchside: inexact match
+ np.searchsorted(arr[0], 4, side='random') # should be "right"
+
+(`gh-16056 <https://github.com/numpy/numpy/pull/16056>`__)
+
+Deprecation of `numpy.dual`
+---------------------------
+The module `numpy.dual` is deprecated. Instead of importing functions
+from `numpy.dual`, the functions should be imported directly from NumPy
+or SciPy.
+
+(`gh-16156 <https://github.com/numpy/numpy/pull/16156>`__)
+
+``outer`` and ``ufunc.outer`` deprecated for matrix
+---------------------------------------------------
+``np.matrix`` use with `~numpy.outer` or generic ufunc outer
+calls such as ``numpy.add.outer`` is deprecated. Previously, the matrix was
+converted to an array here. This will not be done in the future,
+requiring a manual conversion to arrays.
+
+(`gh-16232 <https://github.com/numpy/numpy/pull/16232>`__)
+
+Further Numeric Style types Deprecated
+--------------------------------------
+
+The remaining numeric-style type codes ``Bytes0``, ``Str0``,
+``Uint32``, ``Uint64``, and ``Datetime64``
+have been deprecated. The lower-case variants should be used
+instead. For bytes and string ``"S"`` and ``"U"``
+are further alternatives.
+
+(`gh-16554 <https://github.com/numpy/numpy/pull/16554>`__)
+
+The ``ndincr`` method of ``ndindex`` is deprecated
+--------------------------------------------------
+The documentation has warned against using this function since NumPy 1.8.
+Use ``next(it)`` instead of ``it.ndincr()``.
+
+(`gh-17233 <https://github.com/numpy/numpy/pull/17233>`__)
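+
+A minimal sketch of the replacement:
+
+```python
+import numpy as np
+
+it = np.ndindex(2, 3)
+first = next(it)    # replaces the deprecated it.ndincr()
+second = next(it)
+print(first, second)  # (0, 0) (0, 1)
+```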
+
+
+Future Changes
+==============
+
+Arrays cannot be created with subarray dtypes
+---------------------------------------------
+Array creation and casting using ``np.array(arr, dtype)``
+and ``arr.astype(dtype)`` will use different logic when ``dtype``
+is a subarray dtype such as ``np.dtype("(2)i,")``.
+
+For such a ``dtype`` the following behaviour is true::
+
+ res = np.array(arr, dtype)
+
+ res.dtype is not dtype
+ res.dtype is dtype.base
+ res.shape == arr.shape + dtype.shape
+
+But ``res`` is filled using the logic::
+
+ res = np.empty(arr.shape + dtype.shape, dtype=dtype.base)
+ res[...] = arr
+
+which uses incorrect broadcasting (and often leads to an error).
+In the future, this will instead cast each element individually,
+leading to the same result as::
+
+ res = np.array(arr, dtype=np.dtype(["f", dtype]))["f"]
+
+This expression can normally be used to opt in to the new behaviour.
+
+This change does not affect ``np.array(list, dtype="(2)i,")`` unless the
+``list`` itself includes at least one array. In particular, the behaviour
+is unchanged for a list of tuples.
+
+(`gh-17596 <https://github.com/numpy/numpy/pull/17596>`__)
+
+
+Expired deprecations
+====================
+
+* The deprecation of numeric style type-codes ``np.dtype("Complex64")``
+  (with upper case spelling) is expired. ``"Complex64"`` corresponded to
+ ``"complex128"`` and ``"Complex32"`` corresponded to ``"complex64"``.
+* The deprecation of ``np.sctypeNA`` and ``np.typeNA`` is expired. Both
+ have been removed from the public API. Use ``np.typeDict`` instead.
+
+ (`gh-16554 <https://github.com/numpy/numpy/pull/16554>`__)
+
+* The 14-year deprecation of ``np.ctypeslib.ctypes_load_library`` is expired.
+ Use :func:`~numpy.ctypeslib.load_library` instead, which is identical.
+
+ (`gh-17116 <https://github.com/numpy/numpy/pull/17116>`__)
+
+Financial functions removed
+---------------------------
+In accordance with NEP 32, the financial functions are removed
+from NumPy 1.20. The functions that have been removed are ``fv``,
+``ipmt``, ``irr``, ``mirr``, ``nper``, ``npv``, ``pmt``, ``ppmt``,
+``pv``, and ``rate``. These functions are available in the
+`numpy_financial <https://pypi.org/project/numpy-financial>`_
+library.
+
+(`gh-17067 <https://github.com/numpy/numpy/pull/17067>`__)
+
+
+Compatibility notes
+===================
+
+``isinstance(dtype, np.dtype)`` and not ``type(dtype) is np.dtype``
+-----------------------------------------------------------------------
+NumPy dtypes are no longer direct instances of ``np.dtype``. Code that
+may have used ``type(dtype) is np.dtype`` will now always evaluate to
+``False`` and must be updated to use ``isinstance(dtype, np.dtype)``.
+
+This change also affects the C-side macro ``PyArray_DescrCheck`` if compiled
+against a NumPy older than 1.16.6. If code uses this macro and wishes to
+compile against an older version of NumPy, it must replace the macro
+(see also `C API changes`_ section).
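+
+A minimal sketch of the two checks on a NumPy version with this change:
+
+```python
+import numpy as np
+
+dt = np.dtype(np.float64)
+
+# The supported check works on all NumPy versions:
+supported = isinstance(dt, np.dtype)
+
+# An exact-type check is now always False, since concrete dtypes are
+# instances of subclasses of np.dtype:
+exact = type(dt) is np.dtype
+print(supported, exact)
+```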
+
+
+Same kind casting in concatenate with ``axis=None``
+---------------------------------------------------
+When `~numpy.concatenate` is called with ``axis=None``,
+the flattened arrays were cast with ``unsafe`` casting. Any other axis
+choice uses "same kind" casting. That differing default
+has been deprecated and "same kind" casting will be used
+instead. The new ``casting`` keyword argument
+can be used to retain the old behaviour.
+
+(`gh-16134 <https://github.com/numpy/numpy/pull/16134>`__)
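+
+A runnable sketch of the new default and the explicit opt-out (arrays are
+illustrative only):
+
+```python
+import numpy as np
+
+ints = np.array([1, 2])
+floats = np.array([1.5])
+
+# axis=None flattens; with "same kind" casting the result is float64.
+flat = np.concatenate((ints, floats), axis=None)
+
+# The old unsafe downcast can still be requested explicitly via the new
+# dtype and casting keyword arguments:
+forced = np.concatenate((ints, floats), axis=None,
+                        dtype=np.int64, casting="unsafe")
+```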
+
+NumPy Scalars are cast when assigned to arrays
+----------------------------------------------
+
+When creating or assigning to arrays, in all relevant cases NumPy
+scalars will now be cast identically to NumPy arrays. In particular
+this changes the behaviour in some cases which previously raised an
+error::
+
+ np.array([np.float64(np.nan)], dtype=np.int64)
+
+will succeed and return an undefined result (usually the smallest possible
+integer). This also affects assignments::
+
+ arr[0] = np.float64(np.nan)
+
+At this time, NumPy retains the behaviour for::
+
+ np.array(np.float64(np.nan), dtype=np.int64)
+
+The above changes do not affect Python scalars::
+
+ np.array([float("NaN")], dtype=np.int64)
+
+remains unaffected (``np.nan`` is a Python ``float``, not a NumPy one).
+Unlike signed integers, unsigned integers do not retain this special case,
+since they always behaved more like casting.
+The following code stops raising an error::
+
+ np.array([np.float64(np.nan)], dtype=np.uint64)
+
+To avoid backward compatibility issues, at this time assignment from
+``datetime64`` scalar to strings of too short length remains supported.
+This means that ``np.asarray(np.datetime64("2020-10-10"), dtype="S5")``
+succeeds now, when it failed before. In the long term this may be
+deprecated or the unsafe cast may be allowed generally to make assignment
+of arrays and scalars behave consistently.
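+
+A runnable sketch of the new behaviour (the resulting integer value is
+undefined, so only dtype and shape are meaningful):
+
+```python
+import numpy as np
+
+# Previously raised an error; now casts like an array would (the integer
+# result is undefined and a RuntimeWarning may be emitted).
+a = np.array([np.float64(np.nan)], dtype=np.int64)
+
+# Assignment behaves the same way:
+b = np.zeros(2, dtype=np.int64)
+b[0] = np.float64(np.nan)
+```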
+
+
+Array coercion changes when Strings and other types are mixed
+-------------------------------------------------------------
+
+When strings and other types are mixed, such as::
+
+ np.array(["string", np.float64(3.)], dtype="S")
+
+The results will change, which may lead to string dtypes with longer strings
+in some cases. In particular, if ``dtype="S"`` is not provided, any numerical
+value will lead to a string result long enough to hold all possible numerical
+values (e.g. "S32" for floats). Note that you should always provide
+``dtype="S"`` when converting non-strings to strings.
+
+If ``dtype="S"`` is provided the results will be largely identical to before,
+but NumPy scalars (as opposed to a Python float like ``1.0``) will still
+enforce a uniform string length::
+
+ np.array([np.float64(3.)], dtype="S") # gives "S32"
+ np.array([3.0], dtype="S") # gives "S3"
+
+Previously the first version gave the same result as the second.
+
+
+Array coercion restructure
+--------------------------
+
+Array coercion has been restructured. In general, this should not affect
+users. In extremely rare corner cases where array-likes are nested::
+
+ np.array([array_like1])
+
+Things will now be more consistent with::
+
+ np.array([np.array(array_like1)])
+
+This could potentially subtly change output for badly defined array-likes.
+We are not aware of any such case where the results were not clearly
+incorrect previously.
+
+(`gh-16200 <https://github.com/numpy/numpy/pull/16200>`__)
+
+Writing to the result of `numpy.broadcast_arrays` will export readonly buffers
+------------------------------------------------------------------------------
+
+In NumPy 1.17 `numpy.broadcast_arrays` started warning when the resulting array
+was written to. This warning was skipped when the array was used through the
+buffer interface (e.g. ``memoryview(arr)``). The same now happens for the two
+protocols ``__array_interface__`` and ``__array_struct__``, which return
+read-only buffers instead of giving a warning.
+
+(`gh-16350 <https://github.com/numpy/numpy/pull/16350>`__)
+
+Numeric-style type names have been removed from type dictionaries
+-----------------------------------------------------------------
+
+To stay in sync with the deprecation of ``np.dtype("Complex64")``
+and other numeric-style (capital case) types, these were removed
+from ``np.sctypeDict`` and ``np.typeDict``. You should use
+the lower case versions instead. Note that ``"Complex64"``
+corresponds to ``"complex128"`` and ``"Complex32"`` corresponds
+to ``"complex64"``. The NumPy-style (new) versions denote the full
+size, not the size of the real/imaginary part.
+
+(`gh-16554 <https://github.com/numpy/numpy/pull/16554>`__)
+
+The ``operator.concat`` function now raises TypeError for array arguments
+-------------------------------------------------------------------------
+The previous behavior was to fall back to addition and add the two arrays,
+which was thought to be unexpected behavior for a concatenation function.
+
+(`gh-16570 <https://github.com/numpy/numpy/pull/16570>`__)
+
+``nickname`` attribute removed from ABCPolyBase
+-----------------------------------------------
+
+An abstract property ``nickname`` has been removed from ``ABCPolyBase`` as it
+was no longer used in the derived convenience classes.
+This may affect users who have derived classes from ``ABCPolyBase`` and
+overridden the methods for representation and display, e.g. ``__str__``,
+``__repr__``, ``_repr_latex``, etc.
+
+(`gh-16589 <https://github.com/numpy/numpy/pull/16589>`__)
+
+``float->timedelta`` and ``uint64->timedelta`` promotion will raise a TypeError
+-------------------------------------------------------------------------------
+Float and timedelta promotion consistently raises a TypeError.
+``np.promote_types("float32", "m8")`` aligns with
+``np.promote_types("m8", "float32")`` now and both raise a TypeError.
+Previously, ``np.promote_types("float32", "m8")`` returned ``"m8"`` which
+was considered a bug.
+
+Uint64 and timedelta promotion consistently raises a TypeError.
+``np.promote_types("uint64", "m8")`` aligns with
+``np.promote_types("m8", "uint64")`` now and both raise a TypeError.
+Previously, ``np.promote_types("uint64", "m8")`` returned ``"m8"`` which
+was considered a bug.
+
+(`gh-16592 <https://github.com/numpy/numpy/pull/16592>`__)
+
+``numpy.genfromtxt`` now correctly unpacks structured arrays
+------------------------------------------------------------
+Previously, `numpy.genfromtxt` failed to unpack if it was called with
+``unpack=True`` and a structured datatype was passed to the ``dtype`` argument
+(or ``dtype=None`` was passed and a structured datatype was inferred).
+For example::
+
+ >>> data = StringIO("21 58.0\n35 72.0")
+ >>> np.genfromtxt(data, dtype=None, unpack=True)
+ array([(21, 58.), (35, 72.)], dtype=[('f0', '<i8'), ('f1', '<f8')])
+
+Structured arrays will now correctly unpack into a list of arrays,
+one for each column::
+
+ >>> np.genfromtxt(data, dtype=None, unpack=True)
+ [array([21, 35]), array([58., 72.])]
+
+(`gh-16650 <https://github.com/numpy/numpy/pull/16650>`__)
+
+``mgrid``, ``r_``, etc. consistently return correct outputs for non-default precision input
+-------------------------------------------------------------------------------------------
+Previously, ``np.mgrid[np.float32(0.1):np.float32(0.35):np.float32(0.1),]``
+and ``np.r_[0:10:np.complex64(3j)]`` failed to return meaningful output.
+This bug potentially affected `~numpy.mgrid`, `~numpy.ogrid`, `~numpy.r_`,
+and `~numpy.c_` when an input with a dtype other than the default
+``float64`` or ``complex128`` (or the equivalent Python types) was used.
+The methods have been fixed to handle varying precision correctly.
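For instance, a sketch of the fixed behaviour (exact dtype handling may vary between versions):

```python
import numpy as np

# A non-default precision input now produces a meaningful grid:
grid = np.mgrid[np.float32(0.1):np.float32(0.35):np.float32(0.1)]
# roughly array([0.1, 0.2, 0.3]), rather than garbage values
```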
+
+(`gh-16815 <https://github.com/numpy/numpy/pull/16815>`__)
+
+Boolean array indices with mismatching shapes now properly give ``IndexError``
+------------------------------------------------------------------------------
+
+Previously, if a boolean array index matched the size of the indexed array but
+not the shape, it was incorrectly allowed in some cases. In other cases, it
+gave an error, but the error was incorrectly a ``ValueError`` with a message
+about broadcasting instead of the correct ``IndexError``.
+
+For example, the following used to incorrectly give ``ValueError: operands
+could not be broadcast together with shapes (2,2) (1,4)``:
+
+.. code:: python
+
+ np.empty((2, 2))[np.array([[True, False, False, False]])]
+
+And the following used to incorrectly return ``array([], dtype=float64)``:
+
+.. code:: python
+
+ np.empty((2, 2))[np.array([[False, False, False, False]])]
+
+Both now correctly give ``IndexError: boolean index did not match indexed
+array along dimension 0; dimension is 2 but corresponding boolean dimension is
+1``.
+
+(`gh-17010 <https://github.com/numpy/numpy/pull/17010>`__)
+
+Casting errors interrupt Iteration
+----------------------------------
+When iterating while casting values, an error may stop the iteration
+earlier than before. In any case, a failed casting operation always
+returned undefined, partial results. Those may now be even more
+undefined and partial.
+For users of the ``NpyIter`` C-API such cast errors will now
+cause the `iternext()` function to return 0 and thus abort
+iteration.
+Currently, there is no API to detect such an error directly.
+It is necessary to check ``PyErr_Occurred()``, which
+may be problematic in combination with ``NpyIter_Reset``.
+These issues always existed, but new API could be added
+if required by users.
+
+(`gh-17029 <https://github.com/numpy/numpy/pull/17029>`__)
+
+f2py generated code may return unicode instead of byte strings
+--------------------------------------------------------------
+Some byte strings previously returned by f2py generated code may now be unicode
+strings. This results from the ongoing Python2 -> Python3 cleanup.
+
+(`gh-17068 <https://github.com/numpy/numpy/pull/17068>`__)
+
+The first element of the ``__array_interface__["data"]`` tuple must be an integer
+----------------------------------------------------------------------------------
+This has been the documented interface for many years, but there was still
+code that would accept a byte string representation of the pointer address.
+That code has been removed, passing the address as a byte string will now
+raise an error.
+
+(`gh-17241 <https://github.com/numpy/numpy/pull/17241>`__)
+
+poly1d respects the dtype of all-zero argument
+----------------------------------------------
+Previously, constructing an instance of ``poly1d`` with all-zero
+coefficients would cast the coefficients to ``np.float64``.
+This affected the output dtype of methods which construct
+``poly1d`` instances internally, such as ``np.polymul``.
+
+(`gh-17577 <https://github.com/numpy/numpy/pull/17577>`__)
+
+The numpy.i file for swig is Python 3 only.
+-------------------------------------------
+Uses of Python 2.7 C-API functions have been updated to Python 3 only. Users
+who need the old version should take it from an older version of NumPy.
+
+(`gh-17580 <https://github.com/numpy/numpy/pull/17580>`__)
+
+Void dtype discovery in ``np.array``
+------------------------------------
+In calls using ``np.array(..., dtype="V")``, ``arr.astype("V")``,
+and similar, a TypeError will now be correctly raised unless all
+elements have the identical void length. An example of this is::
+
+ np.array([b"1", b"12"], dtype="V")
+
+Which previously returned an array with dtype ``"V2"`` which
+cannot represent ``b"1"`` faithfully.
+
+(`gh-17706 <https://github.com/numpy/numpy/pull/17706>`__)
+
+
+C API changes
+=============
+
+The ``PyArray_DescrCheck`` macro is modified
+--------------------------------------------
+The ``PyArray_DescrCheck`` macro has been updated since NumPy 1.16.6 to be::
+
+ #define PyArray_DescrCheck(op) PyObject_TypeCheck(op, &PyArrayDescr_Type)
+
+Starting with NumPy 1.20 code that is compiled against an earlier version
+will be API incompatible with NumPy 1.20.
+The fix is to either compile against 1.16.6 (if the NumPy 1.16 release is
+the oldest release you wish to support), or manually inline the macro by
+replacing it with the new definition::
+
+ PyObject_TypeCheck(op, &PyArrayDescr_Type)
+
+which is compatible with all NumPy versions.
+
+
+Size of ``np.ndarray`` and ``np.void_`` changed
+-----------------------------------------------
+The size of the ``PyArrayObject`` and ``PyVoidScalarObject``
+structures have changed. The following header definition has been
+removed::
+
+ #define NPY_SIZEOF_PYARRAYOBJECT (sizeof(PyArrayObject_fields))
+
+since the size must not be considered a compile time constant: it will
+change for different runtime versions of NumPy.
+
+The most likely relevant use case is potential subclasses written in C,
+which will have to be recompiled and should be updated. Please see the
+documentation for :c:type:`PyArrayObject` for more details and contact
+the NumPy developers if you are affected by this change.
+
+NumPy will attempt to give a graceful error but a program expecting a
+fixed structure size may have undefined behaviour and likely crash.
+
+(`gh-16938 <https://github.com/numpy/numpy/pull/16938>`__)
+
+
+New Features
+============
+
+``where`` keyword argument for ``numpy.all`` and ``numpy.any`` functions
+------------------------------------------------------------------------
+The keyword argument ``where`` is added and allows one to consider only
+specified elements or subaxes of an array in the Boolean evaluation of
+``all`` and ``any``. This new keyword is available to the functions ``all``
+and ``any`` both via ``numpy`` directly and in the methods of
+``numpy.ndarray``.
+
+Any broadcastable Boolean array or a scalar can be set as ``where``. It
+defaults to ``True`` to evaluate the functions for all elements in an array if
+``where`` is not set by the user. Examples are given in the documentation of
+the functions.
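A short sketch of the new keyword (assuming NumPy 1.20+):

```python
import numpy as np

x = np.array([1, 0, 3])

np.all(x)                                       # False: x contains a zero
np.all(x, where=np.array([True, False, True]))  # True: the zero is excluded
np.any(x, where=np.array([False, True, False])) # False: only the zero is seen
```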
+
+
+``where`` keyword argument for ``numpy`` functions ``mean``, ``std``, ``var``
+-----------------------------------------------------------------------------
+The keyword argument ``where`` is added and allows limiting the scope of
+the calculation of ``mean``, ``std`` and ``var`` to only a subset of
+elements. It is available both via ``numpy`` directly and in the methods
+of ``numpy.ndarray``.
+
+Any broadcastable Boolean array or a scalar can be set as ``where``. It
+defaults to ``True`` to evaluate the functions for all elements in an array if
+``where`` is not set by the user. Examples are given in the documentation of
+the functions.
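A minimal sketch (assuming NumPy 1.20+):

```python
import numpy as np

a = np.array([1.0, 2.0, 100.0])
mask = np.array([True, True, False])

np.mean(a, where=mask)  # 1.5: the outlier is excluded from the average
np.var(a, where=mask)   # 0.25: variance of [1.0, 2.0] only
```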
+
+(`gh-15852 <https://github.com/numpy/numpy/pull/15852>`__)
+
+``norm=backward``, ``forward`` keyword options for ``numpy.fft`` functions
+--------------------------------------------------------------------------
+The keyword argument option ``norm=backward`` is added as an alias for ``None``
+and acts as the default option; with it, the direct transforms are unscaled
+and the inverse transforms are scaled by ``1/n``.
+
+Using the new keyword argument option ``norm=forward`` scales the direct
+transforms by ``1/n`` and leaves the inverse transforms unscaled (i.e. exactly
+opposite to the default option ``norm=backward``).
+(`gh-16476 <https://github.com/numpy/numpy/pull/16476>`__)
+
+NumPy is now typed
+------------------
+Type annotations have been added for large parts of NumPy. There is
+also a new `numpy.typing` module that contains useful types for
+end-users. The currently available types are
+
+- ``ArrayLike``: for objects that can be coerced to an array
+- ``DtypeLike``: for objects that can be coerced to a dtype
+
+(`gh-16515 <https://github.com/numpy/numpy/pull/16515>`__)
+
+``numpy.typing`` is accessible at runtime
+-----------------------------------------
+The types in ``numpy.typing`` can now be imported at runtime. Code
+like the following will now work:
+
+.. code:: python
+
+ from numpy.typing import ArrayLike
+ x: ArrayLike = [1, 2, 3, 4]
+
+(`gh-16558 <https://github.com/numpy/numpy/pull/16558>`__)
+
+New ``__f2py_numpy_version__`` attribute for f2py generated modules.
+--------------------------------------------------------------------
+Because f2py is released together with NumPy, ``__f2py_numpy_version__``
+provides a way to track the version of f2py that was used to generate the
+module.
+
+(`gh-16594 <https://github.com/numpy/numpy/pull/16594>`__)
+
+``mypy`` tests can be run via runtests.py
+-----------------------------------------
+Currently running mypy with the NumPy stubs configured requires
+either:
+
+* Installing NumPy
+* Adding the source directory to MYPYPATH and linking to the ``mypy.ini``
+
+Both options are somewhat inconvenient, so a ``--mypy`` option was added to
+runtests that handles setting things up for you. This will also be useful in
+the future for any typing codegen, since it will ensure the project is built
+before type checking.
+
+(`gh-17123 <https://github.com/numpy/numpy/pull/17123>`__)
+
+Negation of user defined BLAS/LAPACK detection order
+----------------------------------------------------
+`~numpy.distutils` allows negation of libraries when determining BLAS/LAPACK
+libraries.
+This may be used to remove an item from the library resolution phase, e.g.
+to disallow NetLIB libraries one could do:
+
+.. code:: bash
+
+ NPY_BLAS_ORDER='^blas' NPY_LAPACK_ORDER='^lapack' python setup.py build
+
+That will use any of the accelerated libraries instead.
+
+(`gh-17219 <https://github.com/numpy/numpy/pull/17219>`__)
+
+Allow passing optimizations arguments to asv build
+--------------------------------------------------
+It is now possible to pass ``-j``, ``--cpu-baseline``, ``--cpu-dispatch`` and
+``--disable-optimization`` flags to ASV build when the ``--bench-compare``
+argument is used.
+
+(`gh-17284 <https://github.com/numpy/numpy/pull/17284>`__)
+
+The NVIDIA HPC SDK nvfortran compiler is now supported
+------------------------------------------------------
+Support for the nvfortran compiler, a version of pgfortran, has been added.
+
+(`gh-17344 <https://github.com/numpy/numpy/pull/17344>`__)
+
+``dtype`` option for ``cov`` and ``corrcoef``
+---------------------------------------------
+The ``dtype`` option is now available for `numpy.cov` and `numpy.corrcoef`.
+It specifies which data-type the returned result should have.
+By default the functions still return a `numpy.float64` result.
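A brief sketch of the new option (assuming NumPy 1.20+, where `numpy.cov` and `numpy.corrcoef` accept ``dtype``):

```python
import numpy as np

x = np.array([[0.0, 1.0, 2.0],
              [2.0, 1.0, 0.0]])

np.cov(x).dtype                    # float64, the unchanged default
np.cov(x, dtype=np.float32).dtype  # float32
```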
+
+(`gh-17456 <https://github.com/numpy/numpy/pull/17456>`__)
+
+
+Improvements
+============
+
+Improved string representation for polynomials (``__str__``)
+------------------------------------------------------------
+
+The string representation (``__str__``) of all six polynomial types in
+`numpy.polynomial` has been updated to give the polynomial as a mathematical
+expression instead of an array of coefficients. Two package-wide formats for
+the polynomial expressions are available - one using Unicode characters for
+superscripts and subscripts, and another using only ASCII characters.
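For example, a sketch of the two package-wide formats (the exact rendering may differ slightly between versions):

```python
import numpy as np
from numpy.polynomial import Polynomial

p = Polynomial([1, 2, 3])

np.polynomial.set_default_printstyle("ascii")
str(p)  # e.g. "1.0 + 2.0 x + 3.0 x**2"

np.polynomial.set_default_printstyle("unicode")
str(p)  # the same expression rendered with superscript exponents
```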
+
+(`gh-15666 <https://github.com/numpy/numpy/pull/15666>`__)
+
+Remove the Accelerate library as a candidate LAPACK library
+-----------------------------------------------------------
+Apple no longer supports Accelerate. Remove it.
+
+(`gh-15759 <https://github.com/numpy/numpy/pull/15759>`__)
+
+Object arrays containing multi-line objects have a more readable ``repr``
+-------------------------------------------------------------------------
+If elements of an object array have a ``repr`` containing new lines, then the
+wrapped lines will be aligned by column. Notably, this improves the ``repr`` of
+nested arrays::
+
+ >>> np.array([np.eye(2), np.eye(3)], dtype=object)
+ array([array([[1., 0.],
+ [0., 1.]]),
+ array([[1., 0., 0.],
+ [0., 1., 0.],
+ [0., 0., 1.]])], dtype=object)
+
+(`gh-15997 <https://github.com/numpy/numpy/pull/15997>`__)
+
+Concatenate supports providing an output dtype
+----------------------------------------------
+Support was added to `~numpy.concatenate` to provide
+an output ``dtype`` and ``casting`` using keyword
+arguments. The ``dtype`` argument cannot be provided
+in conjunction with the ``out`` one.
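A small sketch (assuming NumPy 1.20+, where these keywords are available):

```python
import numpy as np

a = np.array([1, 2])      # integer input
b = np.array([3.0, 4.0])  # float input

# Both inputs are cast to the requested output dtype using the
# default "same kind" casting rule:
r = np.concatenate((a, b), dtype=np.float32)
# r is array([1., 2., 3., 4.], dtype=float32)
```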
+
+(`gh-16134 <https://github.com/numpy/numpy/pull/16134>`__)
+
+Thread safe f2py callback functions
+-----------------------------------
+
+Callback functions in f2py are now thread safe.
+
+(`gh-16519 <https://github.com/numpy/numpy/pull/16519>`__)
+
+`numpy.core.records.fromfile` now supports file-like objects
+------------------------------------------------------------
+`numpy.rec.fromfile` can now use file-like objects, for instance
+:py:class:`io.BytesIO`.
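A minimal sketch of reading a record array from an in-memory buffer (assuming NumPy 1.20+, where file-like objects are accepted):

```python
import io

import numpy as np

dt = np.dtype([("x", "<i4"), ("y", "<f8")])
buf = io.BytesIO(np.array([(1, 2.0), (3, 4.0)], dtype=dt).tobytes())

# Read two records back from the file-like object:
rec = np.rec.fromfile(buf, dtype=dt, shape=2)
# rec["x"] is array([1, 3]); rec["y"] is array([2., 4.])
```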
+
+(`gh-16675 <https://github.com/numpy/numpy/pull/16675>`__)
+
+RPATH support on AIX added to distutils
+---------------------------------------
+This allows SciPy to be built on AIX.
+
+(`gh-16710 <https://github.com/numpy/numpy/pull/16710>`__)
+
+Use f90 compiler specified by the command line args
+---------------------------------------------------
+
+The compiler command selection for the Portland Group Fortran compiler is
+changed in `numpy.distutils.fcompiler`. This only affects the linking command.
+It forces the use of the executable provided by the command line option (if
+provided) instead of the pgfortran executable. If no executable is provided to
+the command line option, it defaults to the pgf90 executable, which is an alias
+for pgfortran according to the PGI documentation.
+
+(`gh-16730 <https://github.com/numpy/numpy/pull/16730>`__)
+
+Add NumPy declarations for Cython 3.0 and later
+-----------------------------------------------
+
+The pxd declarations for Cython 3.0 were improved to avoid using deprecated
+NumPy C-API features. Extension modules built with Cython 3.0+ that use NumPy
+can now set the C macro ``NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION`` to avoid
+C compiler warnings about deprecated API usage.
+
+(`gh-16986 <https://github.com/numpy/numpy/pull/16986>`__)
+
+Make the window functions exactly symmetric
+-------------------------------------------
+Make sure the window functions provided by NumPy are symmetric. There were
+previously small deviations from symmetry due to numerical precision that are
+now avoided by better arrangement of the computation.
+
+(`gh-17195 <https://github.com/numpy/numpy/pull/17195>`__)
+
+
+Performance improvements and changes
+====================================
+
+Enable multi-platform SIMD compiler optimizations
+-------------------------------------------------
+
+A series of improvements to NumPy's infrastructure that pave the way to
+**NEP-38**; they can be summarized as follows:
+
+- **New Build Arguments**
+
+  - ``--cpu-baseline`` to specify the minimal set of required
+    optimizations. The default value is ``min``, which provides the
+    minimum CPU features that can safely run on a wide range of
+    user platforms.
+
+  - ``--cpu-dispatch`` to specify the dispatched set of additional
+    optimizations. The default value is ``max -xop -fma4``, which
+    enables all CPU features except for AMD legacy features.
+
+  - ``--disable-optimization`` to explicitly disable all of the new
+    improvements. It also adds a new **C** compiler #definition
+    called ``NPY_DISABLE_OPTIMIZATION`` which can be used as a
+    guard for any SIMD code.
+
+- **Advanced CPU dispatcher**
+
+  A flexible cross-architecture CPU dispatcher built on top of
+  Python/NumPy distutils, supporting all common compilers with a wide range
+  of CPU features.
+
+  The new dispatcher requires a special file extension ``*.dispatch.c`` to
+  mark the dispatch-able **C** sources. These sources can be compiled
+  multiple times, with each compilation pass representing certain CPU
+  features and providing different #definitions and flags that affect the
+  code paths.
+
+- **New auto-generated C header ``core/src/common/_cpu_dispatch.h``**
+
+  This header is generated by the distutils module ``ccompiler_opt``, and
+  contains all the #definitions and headers of instruction sets that have
+  been configured through the command arguments ``--cpu-baseline`` and
+  ``--cpu-dispatch``.
+
+- **New C header ``core/src/common/npy_cpu_dispatch.h``**
+
+  This header contains all the utilities required for the whole CPU
+  dispatching process; it can also be considered a bridge linking the new
+  infrastructure work with NumPy's CPU runtime detection.
+
+- **Add new attributes to the NumPy umath module (Python level)**
+
+  - ``__cpu_baseline__``, a list containing the minimal set of required
+    optimizations that are supported by the compiler and platform, according
+    to the specified value of the command argument ``--cpu-baseline``.
+
+  - ``__cpu_dispatch__``, a list containing the dispatched set of additional
+    optimizations that are supported by the compiler and platform, according
+    to the specified value of the command argument ``--cpu-dispatch``.
+
+- **Print the supported CPU features during the run of PytestTester**
+
+(`gh-13516 <https://github.com/numpy/numpy/pull/13516>`__)
+
+
+Changes
+=======
+
+Changed behavior of ``divmod(1., 0.)`` and related functions
+------------------------------------------------------------
+The changes also assure that different compiler versions have the same
+behavior for nan or inf usages in these operations. This was previously
+compiler dependent; we now force the invalid and divide-by-zero flags,
+making the results the same across compilers. For example, gcc-5, gcc-8,
+and gcc-9 now result in the same behavior. The changes are tabulated below:
+
+.. list-table:: Summary of New Behavior
+ :widths: auto
+ :header-rows: 1
+
+ * - Operator
+ - Old Warning
+ - New Warning
+ - Old Result
+ - New Result
+ - Works on MacOS
+ * - np.divmod(1.0, 0.0)
+ - Invalid
+ - Invalid and Dividebyzero
+ - nan, nan
+ - inf, nan
+ - Yes
+ * - np.fmod(1.0, 0.0)
+ - Invalid
+ - Invalid
+ - nan
+ - nan
+ - No? Yes
+ * - np.floor_divide(1.0, 0.0)
+ - Invalid
+ - Dividebyzero
+ - nan
+ - inf
+ - Yes
+ * - np.remainder(1.0, 0.0)
+ - Invalid
+ - Invalid
+ - nan
+ - nan
+ - Yes
+
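A sketch of the new behaviour (the floating-point warnings are suppressed here to keep the example quiet):

```python
import numpy as np

with np.errstate(divide="ignore", invalid="ignore"):
    q, r = np.divmod(1.0, 0.0)       # quotient inf, remainder nan
    fd = np.floor_divide(1.0, 0.0)   # inf, previously nan

# q is inf, r is nan, fd is inf
```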
+(`gh-16161 <https://github.com/numpy/numpy/pull/16161>`__)
+
+``np.linspace`` on integers now uses floor
+------------------------------------------
+When using an ``int`` dtype in `numpy.linspace`, previously float values would
+be rounded towards zero. Now `numpy.floor` is used instead, which rounds toward
+``-inf``. This changes the results for negative values. For example, the
+following would previously give::
+
+ >>> np.linspace(-3, 1, 8, dtype=int)
+ array([-3, -2, -1, -1, 0, 0, 0, 1])
+
+and now results in::
+
+ >>> np.linspace(-3, 1, 8, dtype=int)
+ array([-3, -3, -2, -2, -1, -1, 0, 1])
+
+The former result can still be obtained with::
+
+ >>> np.linspace(-3, 1, 8).astype(int)
+ array([-3, -2, -1, -1, 0, 0, 0, 1])
+
+(`gh-16841 <https://github.com/numpy/numpy/pull/16841>`__)
+
+
diff --git a/doc/source/release/1.21.0-notes.rst b/doc/source/release/1.21.0-notes.rst
new file mode 100644
index 000000000..5fda1f631
--- /dev/null
+++ b/doc/source/release/1.21.0-notes.rst
@@ -0,0 +1,6 @@
+.. currentmodule:: numpy
+
+==========================
+NumPy 1.21.0 Release Notes
+==========================
+
diff --git a/doc/source/user/absolute_beginners.rst b/doc/source/user/absolute_beginners.rst
index 5873eb108..126f5f2a3 100644
--- a/doc/source/user/absolute_beginners.rst
+++ b/doc/source/user/absolute_beginners.rst
@@ -1090,7 +1090,7 @@ To learn more about finding the unique elements in an array, see `unique`.
Transposing and reshaping a matrix
----------------------------------
-*This section covers* ``arr.reshape()``, ``arr.transpose()``, ``arr.T()``
+*This section covers* ``arr.reshape()``, ``arr.transpose()``, ``arr.T``
-----
@@ -1114,7 +1114,7 @@ You simply need to pass in the new dimensions that you want for the matrix. ::
.. image:: images/np_reshape.png
-You can also use ``.transpose`` to reverse or change the axes of an array
+You can also use ``.transpose()`` to reverse or change the axes of an array
according to the values you specify.
If you start with this array::
@@ -1131,6 +1131,13 @@ You can transpose your array with ``arr.transpose()``. ::
[1, 4],
[2, 5]])
+You can also use ``arr.T``::
+
+ >>> arr.T
+ array([[0, 3],
+ [1, 4],
+ [2, 5]])
+
To learn more about transposing and reshaping arrays, see `transpose` and
`reshape`.
@@ -1138,12 +1145,12 @@ To learn more about transposing and reshaping arrays, see `transpose` and
How to reverse an array
-----------------------
-*This section covers* ``np.flip``
+*This section covers* ``np.flip()``
-----
NumPy's ``np.flip()`` function allows you to flip, or reverse, the contents of
-an array along an axis. When using ``np.flip``, specify the array you would like
+an array along an axis. When using ``np.flip()``, specify the array you would like
to reverse and the axis. If you don't specify the axis, NumPy will reverse the
contents along all of the axes of your input array.
diff --git a/doc/source/user/basics.broadcasting.rst b/doc/source/user/basics.broadcasting.rst
index 00bf17a41..5eae3eb32 100644
--- a/doc/source/user/basics.broadcasting.rst
+++ b/doc/source/user/basics.broadcasting.rst
@@ -10,4 +10,178 @@ Broadcasting
:ref:`array-broadcasting-in-numpy`
An introduction to the concepts discussed here
-.. automodule:: numpy.doc.broadcasting
+.. note::
+ See `this article
+ <https://numpy.org/devdocs/user/theory.broadcasting.html>`_
+ for illustrations of broadcasting concepts.
+
+
+The term broadcasting describes how numpy treats arrays with different
+shapes during arithmetic operations. Subject to certain constraints,
+the smaller array is "broadcast" across the larger array so that they
+have compatible shapes. Broadcasting provides a means of vectorizing
+array operations so that looping occurs in C instead of Python. It does
+this without making needless copies of data and usually leads to
+efficient algorithm implementations. There are, however, cases where
+broadcasting is a bad idea because it leads to inefficient use of memory
+that slows computation.
+
+NumPy operations are usually done on pairs of arrays on an
+element-by-element basis. In the simplest case, the two arrays must
+have exactly the same shape, as in the following example:
+
+ >>> a = np.array([1.0, 2.0, 3.0])
+ >>> b = np.array([2.0, 2.0, 2.0])
+ >>> a * b
+ array([ 2., 4., 6.])
+
+NumPy's broadcasting rule relaxes this requirement when the arrays'
+shapes meet certain constraints. The simplest broadcasting example occurs
+when an array and a scalar value are combined in an operation:
+
+>>> a = np.array([1.0, 2.0, 3.0])
+>>> b = 2.0
+>>> a * b
+array([ 2., 4., 6.])
+
+The result is equivalent to the previous example where ``b`` was an array.
+We can think of the scalar ``b`` being *stretched* during the arithmetic
+operation into an array with the same shape as ``a``. The new elements in
+``b`` are simply copies of the original scalar. The stretching analogy is
+only conceptual. NumPy is smart enough to use the original scalar value
+without actually making copies so that broadcasting operations are as
+memory and computationally efficient as possible.
+
+The code in the second example is more efficient than that in the first
+because broadcasting moves less memory around during the multiplication
+(``b`` is a scalar rather than an array).
+
+General Broadcasting Rules
+==========================
+When operating on two arrays, NumPy compares their shapes element-wise.
+It starts with the trailing (i.e. rightmost) dimensions and works its
+way left. Two dimensions are compatible when
+
+1) they are equal, or
+2) one of them is 1
+
+If these conditions are not met, a
+``ValueError: operands could not be broadcast together`` exception is
+raised, indicating that the arrays have incompatible shapes. The size of
+the resulting array along each axis is the size that is not 1 among the inputs.
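These rules can be checked programmatically with ``np.broadcast_shapes`` (available in NumPy 1.20 and later):

```python
import numpy as np

shape = np.broadcast_shapes((8, 1, 6, 1), (7, 1, 5))
# shape is (8, 7, 6, 5)

# Incompatible shapes raise a ValueError:
try:
    np.broadcast_shapes((3,), (4,))
except ValueError:
    pass  # trailing dimensions 3 and 4 do not match
```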
+
+Arrays do not need to have the same *number* of dimensions. For example,
+if you have a ``256x256x3`` array of RGB values, and you want to scale
+each color in the image by a different value, you can multiply the image
+by a one-dimensional array with 3 values. Lining up the sizes of the
+trailing axes of these arrays according to the broadcast rules shows that
+they are compatible::
+
+ Image (3d array): 256 x 256 x 3
+ Scale (1d array): 3
+ Result (3d array): 256 x 256 x 3
+
+When either of the dimensions compared is one, the other is
+used. In other words, dimensions with size 1 are stretched or "copied"
+to match the other.
+
+In the following example, both the ``A`` and ``B`` arrays have axes with
+length one that are expanded to a larger size during the broadcast
+operation::
+
+ A (4d array): 8 x 1 x 6 x 1
+ B (3d array): 7 x 1 x 5
+ Result (4d array): 8 x 7 x 6 x 5
+
+Here are some more examples::
+
+ A (2d array): 5 x 4
+ B (1d array): 1
+ Result (2d array): 5 x 4
+
+ A (2d array): 5 x 4
+ B (1d array): 4
+ Result (2d array): 5 x 4
+
+ A (3d array): 15 x 3 x 5
+ B (3d array): 15 x 1 x 5
+ Result (3d array): 15 x 3 x 5
+
+ A (3d array): 15 x 3 x 5
+ B (2d array): 3 x 5
+ Result (3d array): 15 x 3 x 5
+
+ A (3d array): 15 x 3 x 5
+ B (2d array): 3 x 1
+ Result (3d array): 15 x 3 x 5
+
+Here are examples of shapes that do not broadcast::
+
+ A (1d array): 3
+ B (1d array): 4 # trailing dimensions do not match
+
+ A (2d array): 2 x 1
+ B (3d array): 8 x 4 x 3 # second from last dimensions mismatched
+
+An example of broadcasting in practice::
+
+ >>> x = np.arange(4)
+ >>> xx = x.reshape(4,1)
+ >>> y = np.ones(5)
+ >>> z = np.ones((3,4))
+
+ >>> x.shape
+ (4,)
+
+ >>> y.shape
+ (5,)
+
+ >>> x + y
+ ValueError: operands could not be broadcast together with shapes (4,) (5,)
+
+ >>> xx.shape
+ (4, 1)
+
+ >>> y.shape
+ (5,)
+
+ >>> (xx + y).shape
+ (4, 5)
+
+ >>> xx + y
+ array([[ 1., 1., 1., 1., 1.],
+ [ 2., 2., 2., 2., 2.],
+ [ 3., 3., 3., 3., 3.],
+ [ 4., 4., 4., 4., 4.]])
+
+ >>> x.shape
+ (4,)
+
+ >>> z.shape
+ (3, 4)
+
+ >>> (x + z).shape
+ (3, 4)
+
+ >>> x + z
+ array([[ 1., 2., 3., 4.],
+ [ 1., 2., 3., 4.],
+ [ 1., 2., 3., 4.]])
+
+Broadcasting provides a convenient way of taking the outer product (or
+any other outer operation) of two arrays. The following example shows an
+outer addition operation of two 1-d arrays::
+
+ >>> a = np.array([0.0, 10.0, 20.0, 30.0])
+ >>> b = np.array([1.0, 2.0, 3.0])
+ >>> a[:, np.newaxis] + b
+ array([[ 1., 2., 3.],
+ [ 11., 12., 13.],
+ [ 21., 22., 23.],
+ [ 31., 32., 33.]])
+
+Here the ``newaxis`` index operator inserts a new axis into ``a``,
+making it a two-dimensional ``4x1`` array. Combining the ``4x1`` array
+with ``b``, which has shape ``(3,)``, yields a ``4x3`` array.
+
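The same outer addition can also be written with the ``outer`` method that
every binary ufunc provides; a short sketch comparing the two forms:

```python
import numpy as np

a = np.array([0.0, 10.0, 20.0, 30.0])
b = np.array([1.0, 2.0, 3.0])

# np.add.outer broadcasts exactly like a[:, np.newaxis] + b
outer = np.add.outer(a, b)
print(outer.shape)
print(np.array_equal(outer, a[:, np.newaxis] + b))
```
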
+
diff --git a/doc/source/user/basics.byteswapping.rst b/doc/source/user/basics.byteswapping.rst
index 4b1008df3..fecdb9ee8 100644
--- a/doc/source/user/basics.byteswapping.rst
+++ b/doc/source/user/basics.byteswapping.rst
@@ -2,4 +2,152 @@
Byte-swapping
*************
-.. automodule:: numpy.doc.byteswapping
+Introduction to byte ordering and ndarrays
+==========================================
+
+The ``ndarray`` is an object that provides a Python array interface to data
+in memory.
+
+It often happens that the memory that you want to view with an array is
+not of the same byte ordering as the computer on which you are running
+Python.
+
+For example, I might be working on a computer with a little-endian CPU -
+such as an Intel Pentium, but I have loaded some data from a file
+written by a computer that is big-endian. Let's say I have loaded 4
+bytes from a file written by a Sun (big-endian) computer. I know that
+these 4 bytes represent two 16-bit integers. On a big-endian machine, a
+two-byte integer is stored with the Most Significant Byte (MSB) first,
+and then the Least Significant Byte (LSB). Thus the bytes are, in memory order:
+
+#. MSB integer 1
+#. LSB integer 1
+#. MSB integer 2
+#. LSB integer 2
+
+Let's say the two integers were in fact 1 and 770. Because 770 = 256 *
+3 + 2, the 4 bytes in memory would contain respectively: 0, 1, 3, 2.
+The bytes I have loaded from the file would have these contents:
+
+>>> big_end_buffer = bytearray([0,1,3,2])
+>>> big_end_buffer
+bytearray(b'\x00\x01\x03\x02')
+
+We might want to use an ``ndarray`` to access these integers. In that
+case, we can create an array around this memory, and tell numpy that
+there are two integers, and that they are 16 bit and big-endian:
+
+>>> import numpy as np
+>>> big_end_arr = np.ndarray(shape=(2,),dtype='>i2', buffer=big_end_buffer)
+>>> big_end_arr[0]
+1
+>>> big_end_arr[1]
+770
+
+Note the array ``dtype`` above of ``>i2``. The ``>`` means 'big-endian'
+(``<`` is little-endian) and ``i2`` means 'signed 2-byte integer'. For
+example, if our data represented a single unsigned 4-byte little-endian
+integer, the dtype string would be ``<u4``.
+
+In fact, why don't we try that?
+
+>>> little_end_u4 = np.ndarray(shape=(1,),dtype='<u4', buffer=big_end_buffer)
+>>> little_end_u4[0] == 1 * 256**1 + 3 * 256**2 + 2 * 256**3
+True
+
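As a purely illustrative cross-check, the standard library ``struct`` module
decodes the same buffer, with the format ``'>h'`` (big-endian signed 16-bit)
playing the role of the numpy dtype ``'>i2'``:

```python
import struct
import numpy as np

buf = bytearray([0, 1, 3, 2])

# '>hh' means two big-endian signed 16-bit integers
print(struct.unpack('>hh', bytes(buf)))

# the numpy view of the same memory agrees
arr = np.ndarray(shape=(2,), dtype='>i2', buffer=buf)
print(arr[0], arr[1])
```
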
+Returning to our ``big_end_arr`` - in this case our underlying data is
+big-endian (data endianness) and we've set the dtype to match (the dtype
+is also big-endian). However, sometimes you need to flip these around.
+
+.. warning::
+
+ Scalars currently do not include byte order information, so extracting
+ a scalar from an array will return an integer in native byte order.
+ Hence:
+
+ >>> big_end_arr[0].dtype.byteorder == little_end_u4[0].dtype.byteorder
+ True
+
+Changing byte ordering
+======================
+
+As you can imagine from the introduction, there are two ways you can
+affect the relationship between the byte ordering of the array and the
+underlying memory it is looking at:
+
+* Change the byte-ordering information in the array dtype so that it
+ interprets the underlying data as being in a different byte order.
+ This is the role of ``arr.newbyteorder()``
+* Change the byte-ordering of the underlying data, leaving the dtype
+ interpretation as it was. This is what ``arr.byteswap()`` does.
+
+The common situations in which you need to change byte ordering are:
+
+#. Your data and dtype endianness don't match, and you want to change
+ the dtype so that it matches the data.
+#. Your data and dtype endianness don't match, and you want to swap the
+   data so that they match the dtype.
+#. Your data and dtype endianness match, but you want the data swapped
+   and the dtype to reflect this.
+
+Data and dtype endianness don't match, change dtype to match data
+-----------------------------------------------------------------
+
+We make something where they don't match:
+
+>>> wrong_end_dtype_arr = np.ndarray(shape=(2,),dtype='<i2', buffer=big_end_buffer)
+>>> wrong_end_dtype_arr[0]
+256
+
+The obvious fix for this situation is to change the dtype so it gives
+the correct endianness:
+
+>>> fixed_end_dtype_arr = wrong_end_dtype_arr.newbyteorder()
+>>> fixed_end_dtype_arr[0]
+1
+
+Note the array has not changed in memory:
+
+>>> fixed_end_dtype_arr.tobytes() == big_end_buffer
+True
+
+Data and type endianness don't match, change data to match dtype
+----------------------------------------------------------------
+
+You might want to do this if you need the data in memory to be a certain
+ordering. For example you might be writing the memory out to a file
+that needs a certain byte ordering.
+
+>>> fixed_end_mem_arr = wrong_end_dtype_arr.byteswap()
+>>> fixed_end_mem_arr[0]
+1
+
+Now the array *has* changed in memory:
+
+>>> fixed_end_mem_arr.tobytes() == big_end_buffer
+False
+
+Data and dtype endianness match, swap data and dtype
+----------------------------------------------------
+
+You may have a correctly specified array dtype, but you need the array
+to have the opposite byte order in memory, and you want the dtype to
+match so the array values make sense. In this case you just do both of
+the previous operations:
+
+>>> swapped_end_arr = big_end_arr.byteswap().newbyteorder()
+>>> swapped_end_arr[0]
+1
+>>> swapped_end_arr.tobytes() == big_end_buffer
+False
+
+An easier way of casting the data to a specific dtype and byte ordering
+can be achieved with the ndarray astype method:
+
+>>> swapped_end_arr = big_end_arr.astype('<i2')
+>>> swapped_end_arr[0]
+1
+>>> swapped_end_arr.tobytes() == big_end_buffer
+False
+
+
diff --git a/doc/source/user/basics.creation.rst b/doc/source/user/basics.creation.rst
index b3fa81017..671a8ec59 100644
--- a/doc/source/user/basics.creation.rst
+++ b/doc/source/user/basics.creation.rst
@@ -6,4 +6,141 @@ Array creation
.. seealso:: :ref:`Array creation routines <routines.array-creation>`
-.. automodule:: numpy.doc.creation
+Introduction
+============
+
+There are 5 general mechanisms for creating arrays:
+
+1) Conversion from other Python structures (e.g., lists, tuples)
+2) Intrinsic numpy array creation objects (e.g., arange, ones, zeros,
+ etc.)
+3) Reading arrays from disk, either from standard or custom formats
+4) Creating arrays from raw bytes through the use of strings or buffers
+5) Use of special library functions (e.g., random)
+
+This section will not cover means of replicating, joining, or otherwise
+expanding or mutating existing arrays. Nor will it cover creating object
+arrays or structured arrays. Both of those are covered in their own sections.
+
+Converting Python array_like Objects to NumPy Arrays
+====================================================
+
+In general, numerical data arranged in an array-like structure in Python can
+be converted to arrays through the use of the array() function. The most
+obvious examples are lists and tuples. See the documentation for array() for
+details for its use. Some objects may support the array-protocol and allow
+conversion to arrays this way. A simple way to find out if the object can be
+converted to a numpy array using array() is simply to try it interactively and
+see if it works! (The Python Way).
+
+Examples: ::
+
 >>> x = np.array([2, 3, 1, 0])
 >>> x = np.array([[1, 2.0], [0, 0], (1+1j, 3.)])  # note mix of tuple and lists, and types
 >>> x = np.array([[ 1.+0.j, 2.+0.j], [ 0.+0.j, 0.+0.j], [ 1.+1.j, 3.+0.j]])
+
+Intrinsic NumPy Array Creation
+==============================
+
+NumPy has built-in functions for creating arrays from scratch:
+
+zeros(shape) will create an array filled with 0 values with the specified
+shape. The default dtype is float64. ::
+
+ >>> np.zeros((2, 3))
 array([[ 0., 0., 0.],
        [ 0., 0., 0.]])
+
+ones(shape) will create an array filled with 1 values. It is identical to
+zeros in all other respects.
+
+arange() will create arrays with regularly incrementing values. Check the
+docstring for complete information on the various ways it can be used. A few
+examples will be given here: ::
+
+ >>> np.arange(10)
+ array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+ >>> np.arange(2, 10, dtype=float)
+ array([ 2., 3., 4., 5., 6., 7., 8., 9.])
+ >>> np.arange(2, 3, 0.1)
+ array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9])
+
+Note that there are some subtleties regarding the last usage that the user
+should be aware of that are described in the arange docstring.
+
+linspace() will create arrays with a specified number of elements, and
+spaced equally between the specified beginning and end values. For
+example: ::
+
+ >>> np.linspace(1., 4., 6)
+ array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ])
+
+The advantage of this creation function is that one can guarantee the
+number of elements and the starting and end point, which arange()
+generally will not do for arbitrary start, stop, and step values.
+
+indices() will create a set of arrays (stacked as a one-higher dimensioned
+array), one per dimension with each representing variation in that dimension.
+An example illustrates much better than a verbal description: ::
+
+ >>> np.indices((3,3))
 array([[[0, 0, 0],
         [1, 1, 1],
         [2, 2, 2]],

        [[0, 1, 2],
         [0, 1, 2],
         [0, 1, 2]]])
+
+This is particularly useful for evaluating functions of multiple dimensions on
+a regular grid.
+
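For instance, a minimal sketch of evaluating a function on the grid produced
by indices() (the function ``f(i, j) = i + 10*j`` here is just an
illustration):

```python
import numpy as np

# one integer array per dimension, each varying along its own axis
i, j = np.indices((3, 3))

# evaluate f at every grid point at once, with no explicit loops
grid = i + 10 * j
print(grid)
```
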
+Reading Arrays From Disk
+========================
+
+This is presumably the most common case of large array creation. The details,
+of course, depend greatly on the format of data on disk and so this section
+can only give general pointers on how to handle various formats.
+
+Standard Binary Formats
+-----------------------
+
+Various fields have standard formats for array data. The following lists the
+formats with known Python libraries that can read them and return numpy arrays
+(there may be others for which it is possible to read and convert to numpy
+arrays, so check the last section as well)
+::
+
+ HDF5: h5py
+ FITS: Astropy
+
+Examples of formats that cannot be read directly but for which it is not hard to
+convert are those formats supported by libraries like PIL (able to read and
+write many image formats such as jpg, png, etc).
+
+Common ASCII Formats
+------------------------
+
+Comma Separated Value files (CSV) are widely used (and an export and import
+option for programs like Excel). There are a number of ways of reading these
+files in Python. There are CSV functions in Python and functions in pylab
+(part of matplotlib).
+
+More generic ascii files can be read using the io package in scipy.
+
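As one small illustration (by no means the only option), numpy's own
loadtxt() can parse simple CSV text; genfromtxt() additionally handles
missing values:

```python
import io
import numpy as np

# a file-like object stands in for a real CSV file here
csv = io.StringIO("1.0,2.0,3.0\n4.0,5.0,6.0")
arr = np.loadtxt(csv, delimiter=',')
print(arr.shape)
```
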
+Custom Binary Formats
+---------------------
+
+There are a variety of approaches one can use. If the file has a relatively
+simple format then one can write a simple I/O library and use the numpy
+fromfile() function and .tofile() method to read and write numpy arrays
+directly (mind your byteorder though!). If a good C or C++ library exists that
+reads the data, one can wrap that library with a variety of techniques, though
+that certainly is much more work and requires significantly more advanced
+knowledge to interface with C or C++.
+
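A minimal sketch of the fromfile()/.tofile() round trip (the file name is
made up for illustration; note that the file stores raw bytes only, so the
shape and dtype must be known out-of-band):

```python
import os
import tempfile
import numpy as np

a = np.arange(6, dtype='<i4')
path = os.path.join(tempfile.mkdtemp(), 'raw.bin')

a.tofile(path)                        # raw bytes, no header
b = np.fromfile(path, dtype='<i4')    # dtype (and byte order) supplied by us
print(b)
```
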
+Use of Special Libraries
+------------------------
+
+There are libraries that can be used to generate arrays for special purposes
+and it isn't possible to enumerate all of them. The most common uses are use
+of the many array generation functions in random that can generate arrays of
+random values, and some utility functions to generate special matrices (e.g.
+diagonal).
+
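For example, a short sketch using the Generator interface of the random
module (available in newer numpy versions via ``default_rng``) alongside the
diag() utility:

```python
import numpy as np

rng = np.random.default_rng(seed=42)   # seeded generator for reproducibility
r = rng.random((2, 3))                 # uniform samples in [0, 1)
d = np.diag([1.0, 2.0, 3.0])           # special matrix: values on the diagonal
print(r.shape, d.shape)
```
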
+
diff --git a/doc/source/user/basics.dispatch.rst b/doc/source/user/basics.dispatch.rst
index f7b8da262..c0e1cf9ba 100644
--- a/doc/source/user/basics.dispatch.rst
+++ b/doc/source/user/basics.dispatch.rst
@@ -4,5 +4,269 @@
Writing custom array containers
*******************************
-.. automodule:: numpy.doc.dispatch
+Numpy's dispatch mechanism, introduced in numpy version v1.16, is the
+recommended approach for writing custom N-dimensional array containers that are
+compatible with the numpy API and provide custom implementations of numpy
+functionality. Applications include `dask <http://dask.pydata.org>`_ arrays, an
+N-dimensional array distributed across multiple nodes, and `cupy
+<https://docs-cupy.chainer.org/en/stable/>`_ arrays, an N-dimensional array on
+a GPU.
+
+To get a feel for writing custom array containers, we'll begin with a simple
+example that has rather narrow utility but illustrates the concepts involved.
+
+>>> import numpy as np
+>>> class DiagonalArray:
+... def __init__(self, N, value):
+... self._N = N
+... self._i = value
+... def __repr__(self):
+... return f"{self.__class__.__name__}(N={self._N}, value={self._i})"
+... def __array__(self):
+... return self._i * np.eye(self._N)
+
+Our custom array can be instantiated like:
+
+>>> arr = DiagonalArray(5, 1)
+>>> arr
+DiagonalArray(N=5, value=1)
+
+We can convert to a numpy array using :func:`numpy.array` or
+:func:`numpy.asarray`, which will call its ``__array__`` method to obtain a
+standard ``numpy.ndarray``.
+
+>>> np.asarray(arr)
+array([[1., 0., 0., 0., 0.],
+ [0., 1., 0., 0., 0.],
+ [0., 0., 1., 0., 0.],
+ [0., 0., 0., 1., 0.],
+ [0., 0., 0., 0., 1.]])
+
+If we operate on ``arr`` with a numpy function, numpy will again use the
+``__array__`` interface to convert it to an array and then apply the function
+in the usual way.
+
+>>> np.multiply(arr, 2)
+array([[2., 0., 0., 0., 0.],
+ [0., 2., 0., 0., 0.],
+ [0., 0., 2., 0., 0.],
+ [0., 0., 0., 2., 0.],
+ [0., 0., 0., 0., 2.]])
+
+
+Notice that the return type is a standard ``numpy.ndarray``.
+
+>>> type(np.multiply(arr, 2))
+numpy.ndarray
+
+How can we pass our custom array type through this function? Numpy allows a
+class to indicate that it would like to handle computations in a custom-defined
+way through the interfaces ``__array_ufunc__`` and ``__array_function__``. Let's
+take one at a time, starting with ``__array_ufunc__``. This method covers
+:ref:`ufuncs`, a class of functions that includes, for example,
+:func:`numpy.multiply` and :func:`numpy.sin`.
+
+The ``__array_ufunc__`` method receives:
+
+- ``ufunc``, a function like ``numpy.multiply``
+- ``method``, a string, differentiating between ``numpy.multiply(...)`` and
+ variants like ``numpy.multiply.outer``, ``numpy.multiply.accumulate``, and so
+ on. For the common case, ``numpy.multiply(...)``, ``method == '__call__'``.
+- ``inputs``, which could be a mixture of different types
+- ``kwargs``, keyword arguments passed to the function
+
+For this example we will only handle the method ``__call__``.
+
+>>> from numbers import Number
+>>> class DiagonalArray:
+... def __init__(self, N, value):
+... self._N = N
+... self._i = value
+... def __repr__(self):
+... return f"{self.__class__.__name__}(N={self._N}, value={self._i})"
+... def __array__(self):
+... return self._i * np.eye(self._N)
+... def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
+... if method == '__call__':
+... N = None
+... scalars = []
+... for input in inputs:
+... if isinstance(input, Number):
+... scalars.append(input)
+... elif isinstance(input, self.__class__):
+... scalars.append(input._i)
+... if N is not None:
+... if N != self._N:
+... raise TypeError("inconsistent sizes")
+... else:
+... N = self._N
+... else:
+... return NotImplemented
+... return self.__class__(N, ufunc(*scalars, **kwargs))
+... else:
+... return NotImplemented
+
+Now our custom array type passes through numpy functions.
+
+>>> arr = DiagonalArray(5, 1)
+>>> np.multiply(arr, 3)
+DiagonalArray(N=5, value=3)
+>>> np.add(arr, 3)
+DiagonalArray(N=5, value=4)
+>>> np.sin(arr)
+DiagonalArray(N=5, value=0.8414709848078965)
+
+At this point ``arr + 3`` does not work.
+
+>>> arr + 3
+TypeError: unsupported operand type(s) for +: 'DiagonalArray' and 'int'
+
+To support it, we need to define the Python interfaces ``__add__``, ``__lt__``,
+and so on to dispatch to the corresponding ufunc. We can achieve this
+conveniently by inheriting from the mixin
+:class:`~numpy.lib.mixins.NDArrayOperatorsMixin`.
+
+>>> import numpy.lib.mixins
+>>> class DiagonalArray(numpy.lib.mixins.NDArrayOperatorsMixin):
+... def __init__(self, N, value):
+... self._N = N
+... self._i = value
+... def __repr__(self):
+... return f"{self.__class__.__name__}(N={self._N}, value={self._i})"
+... def __array__(self):
+... return self._i * np.eye(self._N)
+... def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
+... if method == '__call__':
+... N = None
+... scalars = []
+... for input in inputs:
+... if isinstance(input, Number):
+... scalars.append(input)
+... elif isinstance(input, self.__class__):
+... scalars.append(input._i)
+... if N is not None:
+... if N != self._N:
+... raise TypeError("inconsistent sizes")
+... else:
+... N = self._N
+... else:
+... return NotImplemented
+... return self.__class__(N, ufunc(*scalars, **kwargs))
+... else:
+... return NotImplemented
+
+>>> arr = DiagonalArray(5, 1)
+>>> arr + 3
+DiagonalArray(N=5, value=4)
+>>> arr > 0
+DiagonalArray(N=5, value=True)
+
+Now let's tackle ``__array_function__``. We'll create a dict that maps numpy
+functions to our custom variants.
+
+>>> HANDLED_FUNCTIONS = {}
+>>> class DiagonalArray(numpy.lib.mixins.NDArrayOperatorsMixin):
+... def __init__(self, N, value):
+... self._N = N
+... self._i = value
+... def __repr__(self):
+... return f"{self.__class__.__name__}(N={self._N}, value={self._i})"
+... def __array__(self):
+... return self._i * np.eye(self._N)
+... def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
+... if method == '__call__':
+... N = None
+... scalars = []
+... for input in inputs:
+... # In this case we accept only scalar numbers or DiagonalArrays.
+... if isinstance(input, Number):
+... scalars.append(input)
+... elif isinstance(input, self.__class__):
+... scalars.append(input._i)
+... if N is not None:
+... if N != self._N:
+... raise TypeError("inconsistent sizes")
+... else:
+... N = self._N
+... else:
+... return NotImplemented
+... return self.__class__(N, ufunc(*scalars, **kwargs))
+... else:
+... return NotImplemented
+... def __array_function__(self, func, types, args, kwargs):
+... if func not in HANDLED_FUNCTIONS:
+... return NotImplemented
+... # Note: this allows subclasses that don't override
+... # __array_function__ to handle DiagonalArray objects.
+... if not all(issubclass(t, self.__class__) for t in types):
+... return NotImplemented
+... return HANDLED_FUNCTIONS[func](*args, **kwargs)
+...
+
+A convenient pattern is to define a decorator ``implements`` that can be used
+to add functions to ``HANDLED_FUNCTIONS``.
+
+>>> def implements(np_function):
+... "Register an __array_function__ implementation for DiagonalArray objects."
+... def decorator(func):
+... HANDLED_FUNCTIONS[np_function] = func
+... return func
+... return decorator
+...
+
+Now we write implementations of numpy functions for ``DiagonalArray``.
+For completeness, to support the usage ``arr.sum()`` add a method ``sum`` that
+calls ``numpy.sum(self)``, and the same for ``mean``.
+
+>>> @implements(np.sum)
+... def sum(arr):
+... "Implementation of np.sum for DiagonalArray objects"
+... return arr._i * arr._N
+...
+>>> @implements(np.mean)
+... def mean(arr):
+... "Implementation of np.mean for DiagonalArray objects"
+... return arr._i / arr._N
+...
+>>> arr = DiagonalArray(5, 1)
+>>> np.sum(arr)
+5
+>>> np.mean(arr)
+0.2
+
+If the user tries to use any numpy functions not included in
+``HANDLED_FUNCTIONS``, a ``TypeError`` will be raised by numpy, indicating that
+this operation is not supported. For example, concatenating two
+``DiagonalArrays`` does not produce another diagonal array, so it is not
+supported.
+
+>>> np.concatenate([arr, arr])
+TypeError: no implementation found for 'numpy.concatenate' on types that implement __array_function__: [<class '__main__.DiagonalArray'>]
+
+Additionally, our implementations of ``sum`` and ``mean`` do not accept the
+optional arguments that numpy's implementation does.
+
+>>> np.sum(arr, axis=0)
+TypeError: sum() got an unexpected keyword argument 'axis'
+
+The user always has the option of converting to a normal ``numpy.ndarray`` with
+:func:`numpy.asarray` and using standard numpy from there.
+
+>>> np.concatenate([np.asarray(arr), np.asarray(arr)])
+array([[1., 0., 0., 0., 0.],
+ [0., 1., 0., 0., 0.],
+ [0., 0., 1., 0., 0.],
+ [0., 0., 0., 1., 0.],
+ [0., 0., 0., 0., 1.],
+ [1., 0., 0., 0., 0.],
+ [0., 1., 0., 0., 0.],
+ [0., 0., 1., 0., 0.],
+ [0., 0., 0., 1., 0.],
+ [0., 0., 0., 0., 1.]])
+
+Refer to the `dask source code <https://github.com/dask/dask>`_ and
+`cupy source code <https://github.com/cupy/cupy>`_ for more fully-worked
+examples of custom array containers.
+
+See also :doc:`NEP 18<neps:nep-0018-array-function-protocol>`.
diff --git a/doc/source/user/basics.indexing.rst b/doc/source/user/basics.indexing.rst
index 0dca4b884..9545bb78c 100644
--- a/doc/source/user/basics.indexing.rst
+++ b/doc/source/user/basics.indexing.rst
@@ -10,4 +10,454 @@ Indexing
:ref:`Indexing routines <routines.indexing>`
-.. automodule:: numpy.doc.indexing
+Array indexing refers to any use of the square brackets ([]) to index
+array values. There are many options to indexing, which give numpy
+indexing great power, but with power comes some complexity and the
+potential for confusion. This section is just an overview of the
+various options and issues related to indexing. Aside from single
+element indexing, the details on most of these options are to be
+found in related sections.
+
+Assignment vs referencing
+=========================
+
+Most of the following examples show the use of indexing when
+referencing data in an array. The examples work just as well
+when assigning to an array. See the section at the end for
+specific examples and explanations on how assignments work.
+
+Single element indexing
+=======================
+
+Single element indexing for a 1-D array is what one expects. It works
+exactly like it does for other standard Python sequences. It is 0-based,
+and accepts negative indices for indexing from the end of the array. ::
+
+ >>> x = np.arange(10)
+ >>> x[2]
+ 2
+ >>> x[-2]
+ 8
+
+Unlike lists and tuples, numpy arrays support multidimensional indexing
+for multidimensional arrays. That means that it is not necessary to
+separate each dimension's index into its own set of square brackets. ::
+
+ >>> x.shape = (2,5) # now x is 2-dimensional
+ >>> x[1,3]
+ 8
+ >>> x[1,-1]
+ 9
+
+Note that if one indexes a multidimensional array with fewer indices
+than dimensions, one gets a subdimensional array. For example: ::
+
+ >>> x[0]
+ array([0, 1, 2, 3, 4])
+
+That is, each index specified selects the array corresponding to the
+rest of the dimensions selected. In the above example, choosing 0
+means that the remaining dimension of length 5 is being left unspecified,
+and that what is returned is an array of that dimensionality and size.
+It must be noted that the returned array is not a copy of the original,
+but points to the same values in memory as does the original array.
+In this case, the 1-D array at the first position (0) is returned.
+So using a single index on the returned array, results in a single
+element being returned. That is: ::
+
+ >>> x[0][2]
+ 2
+
+So note that ``x[0,2] == x[0][2]``, though the second case is less
+efficient, as a new temporary array is created after the first index
+that is subsequently indexed by 2.
+
+A note for those used to IDL or Fortran memory order as it relates to
+indexing: NumPy uses C-order indexing. That means that the last
+index usually represents the most rapidly changing memory location,
+unlike Fortran or IDL, where the first index represents the most
+rapidly changing location in memory. This difference represents a
+great potential for confusion.
+
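A small sketch making the difference concrete, using ravel(), which flattens
an array in a requested memory order:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
# C order: the last index varies fastest as memory is walked
print(a.ravel(order='C'))
# Fortran order: the first index varies fastest
print(a.ravel(order='F'))
```
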
+Other indexing options
+======================
+
+It is possible to slice and stride arrays to extract arrays of the
+same number of dimensions, but of different sizes than the original.
+The slicing and striding works exactly the same way it does for lists
+and tuples except that they can be applied to multiple dimensions as
+well. A few examples illustrate it best: ::
+
+ >>> x = np.arange(10)
+ >>> x[2:5]
+ array([2, 3, 4])
+ >>> x[:-7]
+ array([0, 1, 2])
+ >>> x[1:7:2]
+ array([1, 3, 5])
+ >>> y = np.arange(35).reshape(5,7)
+ >>> y[1:5:2,::3]
+ array([[ 7, 10, 13],
+ [21, 24, 27]])
+
+Note that slices of arrays do not copy the internal array data but
+only produce new views of the original data. This is different from
+list or tuple slicing and an explicit ``copy()`` is recommended if
+the original data is not required anymore.
+
+It is possible to index arrays with other arrays for the purposes of
+selecting lists of values out of arrays into new arrays. There are
+two different ways of accomplishing this. One uses one or more arrays
+of index values. The other involves giving a boolean array of the proper
+shape to indicate the values to be selected. Index arrays are a very
+powerful tool that allow one to avoid looping over individual elements in
+arrays and thus greatly improve performance.
+
+It is possible to use special features to effectively increase the
+number of dimensions in an array through indexing so the resulting
+array acquires the shape needed for use in an expression or with a
+specific function.
+
+Index arrays
+============
+
+NumPy arrays may be indexed with other arrays (or any other sequence-
+like object that can be converted to an array, such as lists, with the
+exception of tuples; see the end of this document for why this is). The
+use of index arrays ranges from simple, straightforward cases to
+complex, hard-to-understand cases. For all cases of index arrays, what
+is returned is a copy of the original data, not a view as one gets for
+slices.
+
+Index arrays must be of integer type. Each value in the index array
+indicates which value in the indexed array to use in place of the
+index. To illustrate: ::
+
+ >>> x = np.arange(10,1,-1)
+ >>> x
+ array([10, 9, 8, 7, 6, 5, 4, 3, 2])
+ >>> x[np.array([3, 3, 1, 8])]
+ array([7, 7, 9, 2])
+
+
+The index array consisting of the values 3, 3, 1 and 8 creates an array
+of length 4 (the same as the index array) where each index is replaced
+by the value that the index selects from the array being indexed.
+
+Negative values are permitted and work as they do with single indices
+or slices: ::
+
+ >>> x[np.array([3,3,-3,8])]
+ array([7, 7, 4, 2])
+
+It is an error to have index values out of bounds: ::
+
+ >>> x[np.array([3, 3, 20, 8])]
+ <type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9
+
+Generally speaking, what is returned when index arrays are used is
+an array with the same shape as the index array, but with the type
+and values of the array being indexed. As an example, we can use a
+multidimensional index array instead: ::
+
+ >>> x[np.array([[1,1],[2,3]])]
+ array([[9, 9],
+ [8, 7]])
+
+Indexing Multi-dimensional arrays
+=================================
+
+Things become more complex when multidimensional arrays are indexed,
+particularly with multidimensional index arrays. These tend to be
+more unusual uses, but they are permitted, and they are useful for some
+problems. We'll start with the simplest multidimensional case (using
+the array y from the previous examples): ::
+
+ >>> y[np.array([0,2,4]), np.array([0,1,2])]
+ array([ 0, 15, 30])
+
+In this case, if the index arrays have a matching shape, and there is
+an index array for each dimension of the array being indexed, the
+resultant array has the same shape as the index arrays, and the values
+correspond to the index set for each position in the index arrays. In
+this example, the first index value is 0 for both index arrays, and
+thus the first value of the resultant array is y[0,0]. The next value
+is y[2,1], and the last is y[4,2].
+
+If the index arrays do not have the same shape, there is an attempt to
+broadcast them to the same shape. If they cannot be broadcast to the
+same shape, an exception is raised: ::
+
+ >>> y[np.array([0,2,4]), np.array([0,1])]
 IndexError: shape mismatch: indexing arrays could not be broadcast
 together with shapes (3,) (2,)
+
+The broadcasting mechanism permits index arrays to be combined with
+scalars for other indices. The effect is that the scalar value is used
+for all the corresponding values of the index arrays: ::
+
+ >>> y[np.array([0,2,4]), 1]
+ array([ 1, 15, 29])
+
+Jumping to the next level of complexity, it is possible to only
+partially index an array with index arrays. It takes a bit of thought
+to understand what happens in such cases. For example if we just use
+one index array with y: ::
+
+ >>> y[np.array([0,2,4])]
+ array([[ 0, 1, 2, 3, 4, 5, 6],
+ [14, 15, 16, 17, 18, 19, 20],
+ [28, 29, 30, 31, 32, 33, 34]])
+
+What results is the construction of a new array where each value of
+the index array selects one row from the array being indexed and the
+resultant array has the resulting shape (number of index elements,
+size of row).
+
+An example of where this may be useful is for a color lookup table
+where we want to map the values of an image into RGB triples for
+display. The lookup table could have a shape (nlookup, 3). Indexing
+such an array with an image with shape (ny, nx) with dtype=np.uint8
+(or any integer type so long as values are within the bounds of the
+lookup table) will result in an array of shape (ny, nx, 3) where a
+triple of RGB values is associated with each pixel location.
+
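A minimal sketch of such a lookup table (the colors and the tiny image are
made up for illustration):

```python
import numpy as np

# hypothetical 3-entry lookup table mapping labels 0/1/2 to RGB triples
lut = np.array([[0, 0, 0],
                [255, 0, 0],
                [0, 255, 0]], dtype=np.uint8)
image = np.array([[0, 1],
                  [2, 1]], dtype=np.uint8)   # a (2, 2) label image

# index-array indexing: result shape is image.shape + (3,)
rgb = lut[image]
print(rgb.shape)
```
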
+In general, the shape of the resultant array will be the concatenation
+of the shape of the index array (or the shape that all the index arrays
+were broadcast to) with the shape of any unused dimensions (those not
+indexed) in the array being indexed.
+
+Boolean or "mask" index arrays
+==============================
+
+Boolean arrays used as indices are treated in a different manner
+entirely than index arrays. Boolean arrays must be of the same shape
+as the initial dimensions of the array being indexed. In the
+most straightforward case, the boolean array has the same shape: ::
+
+ >>> b = y>20
+ >>> y[b]
+ array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34])
+
+Unlike in the case of integer index arrays, in the boolean case, the
+result is a 1-D array containing all the elements in the indexed array
+corresponding to all the true elements in the boolean array. The
+elements in the indexed array are always iterated and returned in
+:term:`row-major` (C-style) order. The result is also identical to
+``y[np.nonzero(b)]``. As with index arrays, what is returned is a copy
+of the data, not a view as one gets with slices.
+
+The result will be multidimensional if y has more dimensions than b.
+For example: ::
+
+ >>> b[:,5] # use a 1-D boolean whose first dim agrees with the first dim of y
+ array([False, False, False, True, True])
+ >>> y[b[:,5]]
+ array([[21, 22, 23, 24, 25, 26, 27],
+ [28, 29, 30, 31, 32, 33, 34]])
+
+Here the 4th and 5th rows are selected from the indexed array and
+combined to make a 2-D array.
+
+In general, when the boolean array has fewer dimensions than the array
+being indexed, this is equivalent to ``y[b, ...]``, which means ``y`` is
+indexed by ``b`` followed by as many ``:`` as are needed to fill out the
+rank of ``y``.
+Thus the shape of the result is one dimension containing the number
+of True elements of the boolean array, followed by the remaining
+dimensions of the array being indexed.
+
+For example, using a 2-D boolean array of shape (2,3)
+with four True elements to select rows from a 3-D array of shape
+(2,3,5) results in a 2-D result of shape (4,5): ::
+
+ >>> x = np.arange(30).reshape(2,3,5)
+ >>> x
+ array([[[ 0, 1, 2, 3, 4],
+ [ 5, 6, 7, 8, 9],
+ [10, 11, 12, 13, 14]],
+ [[15, 16, 17, 18, 19],
+ [20, 21, 22, 23, 24],
+ [25, 26, 27, 28, 29]]])
+ >>> b = np.array([[True, True, False], [False, True, True]])
+ >>> x[b]
+ array([[ 0, 1, 2, 3, 4],
+ [ 5, 6, 7, 8, 9],
+ [20, 21, 22, 23, 24],
+ [25, 26, 27, 28, 29]])
+
+For further details, consult the numpy reference documentation on array indexing.
+
+Combining index arrays with slices
+==================================
+
+Index arrays may be combined with slices. For example: ::
+
+ >>> y[np.array([0, 2, 4]), 1:3]
+ array([[ 1, 2],
+ [15, 16],
+ [29, 30]])
+
+In effect, the slice and index array operations are independent.
+The slice operation extracts columns with index 1 and 2
+(i.e. the 2nd and 3rd columns),
+followed by the index array operation which extracts rows with
+index 0, 2 and 4 (i.e. the first, third and fifth rows).
+
+This is equivalent to::
+
+ >>> y[:, 1:3][np.array([0, 2, 4]), :]
+ array([[ 1, 2],
+ [15, 16],
+ [29, 30]])
+
+Likewise, slicing can be combined with broadcasted boolean indices: ::
+
+ >>> b = y > 20
+ >>> b
+ array([[False, False, False, False, False, False, False],
+ [False, False, False, False, False, False, False],
+ [False, False, False, False, False, False, False],
+ [ True, True, True, True, True, True, True],
+ [ True, True, True, True, True, True, True]])
+ >>> y[b[:,5],1:3]
+ array([[22, 23],
+ [29, 30]])
+
+Structural indexing tools
+=========================
+
+To facilitate easy matching of array shapes with expressions and in
+assignments, the np.newaxis object can be used within array indices
+to add new dimensions with a size of 1. For example: ::
+
+ >>> y.shape
+ (5, 7)
+ >>> y[:,np.newaxis,:].shape
+ (5, 1, 7)
+
+Note that there are no new elements in the array, just that the
+dimensionality is increased. This can be handy to combine two
+arrays in a way that otherwise would require explicit reshaping
+operations. For example: ::
+
+ >>> x = np.arange(5)
+ >>> x[:,np.newaxis] + x[np.newaxis,:]
+ array([[0, 1, 2, 3, 4],
+ [1, 2, 3, 4, 5],
+ [2, 3, 4, 5, 6],
+ [3, 4, 5, 6, 7],
+ [4, 5, 6, 7, 8]])
+
+The ellipsis syntax may be used to indicate selecting in full any
+remaining unspecified dimensions. For example: ::
+
+ >>> z = np.arange(81).reshape(3,3,3,3)
+ >>> z[1,...,2]
+ array([[29, 32, 35],
+ [38, 41, 44],
+ [47, 50, 53]])
+
+This is equivalent to: ::
+
+ >>> z[1,:,:,2]
+ array([[29, 32, 35],
+ [38, 41, 44],
+ [47, 50, 53]])
+
+Assigning values to indexed arrays
+==================================
+
+As mentioned, one can select a subset of an array to assign to using
+a single index, slices, and index and mask arrays. The value being
+assigned to the indexed array must be shape consistent (the same shape
+or broadcastable to the shape the index produces). For example, it is
+permitted to assign a constant to a slice: ::
+
+ >>> x = np.arange(10)
+ >>> x[2:7] = 1
+
+or an array of the right size: ::
+
+ >>> x[2:7] = np.arange(5)
+
+Note that assignments may result in loss of information if assigning
+higher precision types to lower precision types (like floats to ints), or
+even in exceptions (assigning complex values to floats or ints): ::
+
+ >>> x[1] = 1.2
+ >>> x[1]
+ 1
+ >>> x[1] = 1.2j
+ TypeError: can't convert complex to int
+
+
+Unlike references obtained with index arrays or boolean masks (which
+return copies of the data), assignments are always made to the original data
+in the array (indeed, nothing else would make sense!). Note though, that some
+actions may not work as one may naively expect. This particular
+example is often surprising to people: ::
+
+ >>> x = np.arange(0, 50, 10)
+ >>> x
+ array([ 0, 10, 20, 30, 40])
+ >>> x[np.array([1, 1, 3, 1])] += 1
+ >>> x
+ array([ 0, 11, 20, 31, 40])
+
+People often expect that the element at index 1 will be incremented by 3.
+In fact, it is only incremented by 1. The reason is that
+a new array is extracted from the original (as a temporary) containing
+the values at 1, 1, 3, 1, then the value 1 is added to the temporary,
+and then the temporary is assigned back to the original array. Thus
+the value of the array at x[1]+1 is assigned to x[1] three times,
+rather than being incremented 3 times.
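When an accumulated, unbuffered in-place operation is actually wanted, the ufunc ``at`` method applies the operation once per index, including repeats; a brief sketch:

```python
import numpy as np

x = np.arange(0, 50, 10)
# np.add.at performs unbuffered in-place addition, so the repeated
# index 1 is incremented once per occurrence (three times here).
np.add.at(x, [1, 1, 3, 1], 1)
print(x)   # [ 0 13 20 31 40]
```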
+
+Dealing with variable numbers of indices within programs
+========================================================
+
+The index syntax is very powerful but limiting when dealing with
+a variable number of indices. For example, if you want to write
+a function that can handle arguments with various numbers of
+dimensions without having to write special case code for each
+number of possible dimensions, how can that be done? If one
+supplies a tuple as the index, the tuple will be interpreted
+as a list of indices. For example (using the previous definition
+for the array z): ::
+
+ >>> indices = (1,1,1,1)
+ >>> z[indices]
+ 40
+
+So one can use code to construct tuples of any number of indices
+and then use these within an index.
+
+Slices can be specified within programs by using the slice() function
+in Python. For example: ::
+
+ >>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2]
+ >>> z[indices]
+ array([39, 40])
+
+Likewise, ellipsis can be specified by code by using the Ellipsis
+object: ::
+
+ >>> indices = (1, Ellipsis, 1) # same as [1,...,1]
+ >>> z[indices]
+ array([[28, 31, 34],
+ [37, 40, 43],
+ [46, 49, 52]])
+
+For this reason it is possible to use the output from the np.nonzero()
+function directly as an index since it always returns a tuple of index
+arrays.
+
+Because of the special treatment of tuples, they are not automatically
+converted to an array as a list would be. As an example: ::
+
+ >>> z[[1,1,1,1]] # produces a large array
+ array([[[[27, 28, 29],
+ [30, 31, 32], ...
+ >>> z[(1,1,1,1)] # returns a single value
+ 40
+
+
diff --git a/doc/source/user/basics.io.genfromtxt.rst b/doc/source/user/basics.io.genfromtxt.rst
index 3fce6a8aa..5364acbe9 100644
--- a/doc/source/user/basics.io.genfromtxt.rst
+++ b/doc/source/user/basics.io.genfromtxt.rst
@@ -28,7 +28,7 @@ Defining the input
The only mandatory argument of :func:`~numpy.genfromtxt` is the source of
the data. It can be a string, a list of strings, a generator or an open
-file-like object with a :meth:`read` method, for example, a file or
+file-like object with a ``read`` method, for example, a file or
:class:`io.StringIO` object. If a single string is provided, it is assumed
to be the name of a local or remote file. If a list of strings or a generator
returning strings is provided, each string is treated as one line in a file.
@@ -36,10 +36,10 @@ When the URL of a remote file is passed, the file is automatically downloaded
to the current directory and opened.
Recognized file types are text files and archives. Currently, the function
-recognizes :class:`gzip` and :class:`bz2` (`bzip2`) archives. The type of
+recognizes ``gzip`` and ``bz2`` (``bzip2``) archives. The type of
the archive is determined from the extension of the file: if the filename
-ends with ``'.gz'``, a :class:`gzip` archive is expected; if it ends with
-``'bz2'``, a :class:`bzip2` archive is assumed.
+ends with ``'.gz'``, a ``gzip`` archive is expected; if it ends with
+``'bz2'``, a ``bzip2`` archive is assumed.
@@ -360,9 +360,9 @@ The ``converters`` argument
Usually, defining a dtype is sufficient to define how the sequence of
strings must be converted. However, some additional control may sometimes
be required. For example, we may want to make sure that a date in a format
-``YYYY/MM/DD`` is converted to a :class:`datetime` object, or that a string
-like ``xx%`` is properly converted to a float between 0 and 1. In such
-cases, we should define conversion functions with the ``converters``
+``YYYY/MM/DD`` is converted to a :class:`~datetime.datetime` object, or that
+a string like ``xx%`` is properly converted to a float between 0 and 1. In
+such cases, we should define conversion functions with the ``converters``
arguments.
The value of this argument is typically a dictionary with column indices or
@@ -427,7 +427,7 @@ previous example, we used a converter to transform an empty string into a
float. However, user-defined converters may rapidly become cumbersome to
manage.
-The :func:`~nummpy.genfromtxt` function provides two other complementary
+The :func:`~numpy.genfromtxt` function provides two other complementary
mechanisms: the ``missing_values`` argument is used to recognize
missing data and a second argument, ``filling_values``, is used to
process these missing data.
@@ -514,15 +514,15 @@ output array will then be a :class:`~numpy.ma.MaskedArray`.
Shortcut functions
==================
-In addition to :func:`~numpy.genfromtxt`, the :mod:`numpy.lib.io` module
+In addition to :func:`~numpy.genfromtxt`, the :mod:`numpy.lib.npyio` module
provides several convenience functions derived from
:func:`~numpy.genfromtxt`. These functions work the same way as the
original, but they have different default values.
-:func:`~numpy.recfromtxt`
+:func:`~numpy.npyio.recfromtxt`
Returns a standard :class:`numpy.recarray` (if ``usemask=False``) or a
- :class:`~numpy.ma.MaskedRecords` array (if ``usemaske=True``). The
+ :class:`~numpy.ma.mrecords.MaskedRecords` array (if ``usemask=True``). The
default dtype is ``dtype=None``, meaning that the types of each column
will be automatically determined.
-:func:`~numpy.recfromcsv`
- Like :func:`~numpy.recfromtxt`, but with a default ``delimiter=","``.
+:func:`~numpy.npyio.recfromcsv`
+ Like :func:`~numpy.npyio.recfromtxt`, but with a default ``delimiter=","``.
diff --git a/doc/source/user/basics.rec.rst b/doc/source/user/basics.rec.rst
index b885c9e77..0524fde8e 100644
--- a/doc/source/user/basics.rec.rst
+++ b/doc/source/user/basics.rec.rst
@@ -4,10 +4,652 @@
Structured arrays
*****************
-.. automodule:: numpy.doc.structured_arrays
+Introduction
+============
+
+Structured arrays are ndarrays whose datatype is a composition of simpler
+datatypes organized as a sequence of named :term:`fields <field>`. For example,
+::
+
+ >>> x = np.array([('Rex', 9, 81.0), ('Fido', 3, 27.0)],
+ ... dtype=[('name', 'U10'), ('age', 'i4'), ('weight', 'f4')])
+ >>> x
+ array([('Rex', 9, 81.), ('Fido', 3, 27.)],
+ dtype=[('name', 'U10'), ('age', '<i4'), ('weight', '<f4')])
+
+Here ``x`` is a one-dimensional array of length two whose datatype is a
+structure with three fields: a string of length 10 or less named 'name', a
+32-bit integer named 'age', and a 32-bit float named 'weight'.
+
+If you index ``x`` at position 1 you get a structure::
+
+ >>> x[1]
+ ('Fido', 3, 27.0)
+
+You can access and modify individual fields of a structured array by indexing
+with the field name::
+
+ >>> x['age']
+ array([9, 3], dtype=int32)
+ >>> x['age'] = 5
+ >>> x
+ array([('Rex', 5, 81.), ('Fido', 5, 27.)],
+ dtype=[('name', 'U10'), ('age', '<i4'), ('weight', '<f4')])
+
+Structured datatypes are designed to be able to mimic 'structs' in the C
+language, and share a similar memory layout. They are meant for interfacing with
+C code and for low-level manipulation of structured buffers, for example for
+interpreting binary blobs. For these purposes they support specialized features
+such as subarrays, nested datatypes, and unions, and allow control over the
+memory layout of the structure.
+
+Users looking to manipulate tabular data, such as that stored in csv files,
+may find other pydata projects more suitable, such as pandas or xarray.
+These provide a high-level interface for tabular data analysis and are better
+optimized for that use. For instance, the C-struct-like memory layout of
+structured arrays in numpy can lead to poor cache behavior in comparison.
+
+.. _defining-structured-types:
+
+Structured Datatypes
+====================
+
+A structured datatype can be thought of as a sequence of bytes of a certain
+length (the structure's :term:`itemsize`) which is interpreted as a collection
+of fields. Each field has a name, a datatype, and a byte offset within the
+structure. The datatype of a field may be any numpy datatype including other
+structured datatypes, and it may also be a :term:`subarray data type` which
+behaves like an ndarray of a specified shape. The offsets of the fields are
+arbitrary, and fields may even overlap. These offsets are usually determined
+automatically by numpy, but can also be specified.
+
+Structured Datatype Creation
+----------------------------
+
+Structured datatypes may be created using the function :func:`numpy.dtype`.
+There are 4 alternative forms of specification which vary in flexibility and
+conciseness. These are further documented in the
+:ref:`Data Type Objects <arrays.dtypes.constructing>` reference page, and in
+summary they are:
+
+1. A list of tuples, one tuple per field
+
+ Each tuple has the form ``(fieldname, datatype, shape)`` where shape is
+ optional. ``fieldname`` is a string (or tuple if titles are used, see
+ :ref:`Field Titles <titles>` below), ``datatype`` may be any object
+ convertible to a datatype, and ``shape`` is a tuple of integers specifying
+ subarray shape.
+
+ >>> np.dtype([('x', 'f4'), ('y', np.float32), ('z', 'f4', (2, 2))])
+ dtype([('x', '<f4'), ('y', '<f4'), ('z', '<f4', (2, 2))])
+
+ If ``fieldname`` is the empty string ``''``, the field will be given a
+ default name of the form ``f#``, where ``#`` is the integer index of the
+ field, counting from 0 from the left::
+
+ >>> np.dtype([('x', 'f4'), ('', 'i4'), ('z', 'i8')])
+ dtype([('x', '<f4'), ('f1', '<i4'), ('z', '<i8')])
+
+ The byte offsets of the fields within the structure and the total
+ structure itemsize are determined automatically.
+
+2. A string of comma-separated dtype specifications
+
+ In this shorthand notation any of the :ref:`string dtype specifications
+ <arrays.dtypes.constructing>` may be used in a string and separated by
+ commas. The itemsize and byte offsets of the fields are determined
+ automatically, and the field names are given the default names ``f0``,
+ ``f1``, etc. ::
+
+ >>> np.dtype('i8, f4, S3')
+ dtype([('f0', '<i8'), ('f1', '<f4'), ('f2', 'S3')])
+ >>> np.dtype('3int8, float32, (2, 3)float64')
+ dtype([('f0', 'i1', (3,)), ('f1', '<f4'), ('f2', '<f8', (2, 3))])
+
+3. A dictionary of field parameter arrays
+
+ This is the most flexible form of specification since it allows control
+ over the byte-offsets of the fields and the itemsize of the structure.
+
+ The dictionary has two required keys, 'names' and 'formats', and four
+ optional keys, 'offsets', 'itemsize', 'aligned' and 'titles'. The values
+ for 'names' and 'formats' should respectively be a list of field names and
+ a list of dtype specifications, of the same length. The optional 'offsets'
+ value should be a list of integer byte-offsets, one for each field within
+ the structure. If 'offsets' is not given the offsets are determined
+ automatically. The optional 'itemsize' value should be an integer
+ describing the total size in bytes of the dtype, which must be large
+ enough to contain all the fields.
+ ::
+
+ >>> np.dtype({'names': ['col1', 'col2'], 'formats': ['i4', 'f4']})
+ dtype([('col1', '<i4'), ('col2', '<f4')])
+ >>> np.dtype({'names': ['col1', 'col2'],
+ ... 'formats': ['i4', 'f4'],
+ ... 'offsets': [0, 4],
+ ... 'itemsize': 12})
+ dtype({'names':['col1','col2'], 'formats':['<i4','<f4'], 'offsets':[0,4], 'itemsize':12})
+
+ Offsets may be chosen such that the fields overlap, though this will mean
+ that assigning to one field may clobber any overlapping field's data. As
+ an exception, fields of :class:`numpy.object_` type cannot overlap with
+ other fields, because of the risk of clobbering the internal object
+ pointer and then dereferencing it.
+
+ The optional 'aligned' value can be set to ``True`` to make the automatic
+ offset computation use aligned offsets (see :ref:`offsets-and-alignment`),
+ as if the 'align' keyword argument of :func:`numpy.dtype` had been set to
+ True.
+
+ The optional 'titles' value should be a list of titles of the same length
+ as 'names', see :ref:`Field Titles <titles>` below.
+
+4. A dictionary of field names
+
+ The use of this form of specification is discouraged, but documented here
+ because older numpy code may use it. The keys of the dictionary are the
+ field names and the values are tuples specifying type and offset::
+
+ >>> np.dtype({'col1': ('i1', 0), 'col2': ('f4', 1)})
+ dtype([('col1', 'i1'), ('col2', '<f4')])
+
+ This form is discouraged because Python dictionaries do not preserve order
+ in Python versions before Python 3.6, and the order of the fields in a
+ structured dtype has meaning. :ref:`Field Titles <titles>` may be
+ specified by using a 3-tuple, see below.
+
+Manipulating and Displaying Structured Datatypes
+------------------------------------------------
+
+The list of field names of a structured datatype can be found in the ``names``
+attribute of the dtype object::
+
+ >>> d = np.dtype([('x', 'i8'), ('y', 'f4')])
+ >>> d.names
+ ('x', 'y')
+
+The field names may be modified by assigning to the ``names`` attribute using a
+sequence of strings of the same length.
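For example, renaming both fields of a two-field dtype (a minimal sketch):

```python
import numpy as np

d = np.dtype([('x', 'i8'), ('y', 'f4')])
# Assigning a sequence of the same length renames the fields in place;
# the field offsets and dtypes are unchanged.
d.names = ('row', 'col')
print(d.names)   # ('row', 'col')
```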
+
+The dtype object also has a dictionary-like attribute, ``fields``, whose keys
+are the field names (and :ref:`Field Titles <titles>`, see below) and whose
+values are tuples containing the dtype and byte offset of each field. ::
+
+ >>> d.fields
+ mappingproxy({'x': (dtype('int64'), 0), 'y': (dtype('float32'), 8)})
+
+Both the ``names`` and ``fields`` attributes will equal ``None`` for
+unstructured arrays. The recommended way to test if a dtype is structured is
+with ``if dt.names is not None`` rather than ``if dt.names``, to account for dtypes
+with 0 fields.
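A small helper illustrating why the ``None`` check is preferred: ``np.dtype([])`` is a structured dtype with zero fields, whose empty ``names`` tuple is falsy.

```python
import numpy as np

def is_structured(dt):
    # dt.names is None for plain dtypes and a (possibly empty)
    # tuple for structured ones, so compare against None.
    return dt.names is not None

print(is_structured(np.dtype('f8')))           # False
print(is_structured(np.dtype([('a', 'i4')])))  # True
print(is_structured(np.dtype([])))             # True, despite 0 fields
```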
+
+The string representation of a structured datatype is shown in the "list of
+tuples" form if possible, otherwise numpy falls back to using the more general
+dictionary form.
+
+.. _offsets-and-alignment:
+
+Automatic Byte Offsets and Alignment
+------------------------------------
+
+Numpy uses one of two methods to automatically determine the field byte offsets
+and the overall itemsize of a structured datatype, depending on whether
+``align=True`` was specified as a keyword argument to :func:`numpy.dtype`.
+
+By default (``align=False``), numpy will pack the fields together such that
+each field starts at the byte offset the previous field ended, and the fields
+are contiguous in memory. ::
+
+ >>> def print_offsets(d):
+ ... print("offsets:", [d.fields[name][1] for name in d.names])
+ ... print("itemsize:", d.itemsize)
+ >>> print_offsets(np.dtype('u1, u1, i4, u1, i8, u2'))
+ offsets: [0, 1, 2, 6, 7, 15]
+ itemsize: 17
+
+If ``align=True`` is set, numpy will pad the structure in the same way many C
+compilers would pad a C-struct. Aligned structures can give a performance
+improvement in some cases, at the cost of increased datatype size. Padding
+bytes are inserted between fields such that each field's byte offset will be a
+multiple of that field's alignment, which is usually equal to the field's size
+in bytes for simple datatypes, see :c:member:`PyArray_Descr.alignment`. The
+structure will also have trailing padding added so that its itemsize is a
+multiple of the largest field's alignment. ::
+
+ >>> print_offsets(np.dtype('u1, u1, i4, u1, i8, u2', align=True))
+ offsets: [0, 1, 4, 8, 16, 24]
+ itemsize: 32
+
+Note that although almost all modern C compilers pad in this way by default,
+padding in C structs is C-implementation-dependent so this memory layout is not
+guaranteed to exactly match that of a corresponding struct in a C program. Some
+work may be needed, either on the numpy side or the C side, to obtain exact
+correspondence.
+
+If offsets were specified using the optional ``offsets`` key in the
+dictionary-based dtype specification, setting ``align=True`` will check that
+each field's offset is a multiple of its size and that the itemsize is a
+multiple of the largest field size, and raise an exception if not.
+
+If the offsets of the fields and itemsize of a structured array satisfy the
+alignment conditions, the array will have the ``ALIGNED`` :attr:`flag
+<numpy.ndarray.flags>` set.
+
+A convenience function :func:`numpy.lib.recfunctions.repack_fields` converts an
+aligned dtype or array to a packed one and vice versa. It takes either a dtype
+or structured ndarray as an argument, and returns a copy with fields re-packed,
+with or without padding bytes.
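For instance, passing an aligned dtype through ``repack_fields`` removes its padding bytes while keeping the fields themselves (a minimal sketch):

```python
import numpy as np
from numpy.lib.recfunctions import repack_fields

aligned = np.dtype('u1, i4, i8', align=True)
packed = repack_fields(aligned)            # same fields, padding removed
print(aligned.itemsize, packed.itemsize)   # 16 13
```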
+
+.. _titles:
+
+Field Titles
+------------
+
+In addition to field names, fields may also have an associated :term:`title`,
+an alternate name, which is sometimes used as an additional description or
+alias for the field. The title may be used to index an array, just like a
+field name.
+
+To add titles when using the list-of-tuples form of dtype specification, the
+field name may be specified as a tuple of two strings instead of a single
+string, which will be the field's title and field name respectively. For
+example::
+
+ >>> np.dtype([(('my title', 'name'), 'f4')])
+ dtype([(('my title', 'name'), '<f4')])
+
+When using the first form of dictionary-based specification, the titles may be
+supplied as an extra ``'titles'`` key as described above. When using the second
+(discouraged) dictionary-based specification, the title can be supplied by
+providing a 3-element tuple ``(datatype, offset, title)`` instead of the usual
+2-element tuple::
+
+ >>> np.dtype({'name': ('i4', 0, 'my title')})
+ dtype([(('my title', 'name'), '<i4')])
+
+The ``dtype.fields`` dictionary will contain titles as keys, if any
+titles are used. This means effectively that a field with a title will be
+represented twice in the fields dictionary. The tuple values for these fields
+will also have a third element, the field title. Because of this, and because
+the ``names`` attribute preserves the field order while the ``fields``
+attribute may not, it is recommended to iterate through the fields of a dtype
+using the ``names`` attribute of the dtype, which will not list titles, as
+in::
+
+ >>> for name in d.names:
+ ... print(d.fields[name][:2])
+ (dtype('int64'), 0)
+ (dtype('float32'), 8)
+
+Union types
+-----------
+
+Structured datatypes are implemented in numpy to have base type
+:class:`numpy.void` by default, but it is possible to interpret other numpy
+types as structured types using the ``(base_dtype, dtype)`` form of dtype
+specification described in
+:ref:`Data Type Objects <arrays.dtypes.constructing>`. Here, ``base_dtype`` is
+the desired underlying dtype, and fields and flags will be copied from
+``dtype``. This dtype is similar to a 'union' in C.
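As an illustration (the field names here are arbitrary), a 32-bit integer can be overlaid with two 16-bit fields that share its storage, much like a two-member C union:

```python
import numpy as np

# Base type int32, with two overlapping 16-bit views of the same bytes.
u = np.dtype((np.int32, {'lo': ('i2', 0), 'hi': ('i2', 2)}))
print(u.itemsize)          # 4: same storage as a plain int32
print(sorted(u.fields))    # ['hi', 'lo']
```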
+
+Indexing and Assignment to Structured arrays
+============================================
+
+Assigning data to a Structured Array
+------------------------------------
+
+There are a number of ways to assign values to a structured array: using
+Python tuples, using scalar values, or using other structured arrays.
+
+Assignment from Python Native Types (Tuples)
+````````````````````````````````````````````
+
+The simplest way to assign values to a structured array is using Python tuples.
+Each assigned value should be a tuple of length equal to the number of fields
+in the array, and not a list or array as these will trigger numpy's
+broadcasting rules. The tuple's elements are assigned to the successive fields
+of the array, from left to right::
+
+ >>> x = np.array([(1, 2, 3), (4, 5, 6)], dtype='i8, f4, f8')
+ >>> x[1] = (7, 8, 9)
+ >>> x
+ array([(1, 2., 3.), (7, 8., 9.)],
+ dtype=[('f0', '<i8'), ('f1', '<f4'), ('f2', '<f8')])
+
+Assignment from Scalars
+```````````````````````
+
+A scalar assigned to a structured element will be assigned to all fields. This
+happens when a scalar is assigned to a structured array, or when an
+unstructured array is assigned to a structured array::
+
+ >>> x = np.zeros(2, dtype='i8, f4, ?, S1')
+ >>> x[:] = 3
+ >>> x
+ array([(3, 3., True, b'3'), (3, 3., True, b'3')],
+ dtype=[('f0', '<i8'), ('f1', '<f4'), ('f2', '?'), ('f3', 'S1')])
+ >>> x[:] = np.arange(2)
+ >>> x
+ array([(0, 0., False, b'0'), (1, 1., True, b'1')],
+ dtype=[('f0', '<i8'), ('f1', '<f4'), ('f2', '?'), ('f3', 'S1')])
+
+Structured arrays can also be assigned to unstructured arrays, but only if the
+structured datatype has just a single field::
+
+ >>> twofield = np.zeros(2, dtype=[('A', 'i4'), ('B', 'i4')])
+ >>> onefield = np.zeros(2, dtype=[('A', 'i4')])
+ >>> nostruct = np.zeros(2, dtype='i4')
+ >>> nostruct[:] = twofield
+ Traceback (most recent call last):
+ ...
+ TypeError: Cannot cast array data from dtype([('A', '<i4'), ('B', '<i4')]) to dtype('int32') according to the rule 'unsafe'
+
+Assignment from other Structured Arrays
+```````````````````````````````````````
+
+Assignment between two structured arrays occurs as if the source elements had
+been converted to tuples and then assigned to the destination elements. That
+is, the first field of the source array is assigned to the first field of the
+destination array, and the second field likewise, and so on, regardless of
+field names. Structured arrays with a different number of fields cannot be
+assigned to each other. Bytes of the destination structure which are not
+included in any of the fields are unaffected. ::
+
+ >>> a = np.zeros(3, dtype=[('a', 'i8'), ('b', 'f4'), ('c', 'S3')])
+ >>> b = np.ones(3, dtype=[('x', 'f4'), ('y', 'S3'), ('z', 'O')])
+ >>> b[:] = a
+ >>> b
+ array([(0., b'0.0', b''), (0., b'0.0', b''), (0., b'0.0', b'')],
+ dtype=[('x', '<f4'), ('y', 'S3'), ('z', 'O')])
+
+
+Assignment involving subarrays
+``````````````````````````````
+
+When assigning to fields which are subarrays, the assigned value will first be
+broadcast to the shape of the subarray.
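For example, with a field holding a ``(3,)`` subarray, a scalar is broadcast across every element of every subarray (a minimal sketch):

```python
import numpy as np

x = np.zeros(2, dtype=[('a', 'i4'), ('b', 'f8', (3,))])
x['b'] = 7            # scalar broadcast to each element's (3,) subarray
print(x['b'].shape)   # (2, 3)
print(x['b'])
```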
+
+Indexing Structured Arrays
+--------------------------
+
+Accessing Individual Fields
+```````````````````````````
+
+Individual fields of a structured array may be accessed and modified by indexing
+the array with the field name. ::
+
+ >>> x = np.array([(1, 2), (3, 4)], dtype=[('foo', 'i8'), ('bar', 'f4')])
+ >>> x['foo']
+ array([1, 3])
+ >>> x['foo'] = 10
+ >>> x
+ array([(10, 2.), (10, 4.)],
+ dtype=[('foo', '<i8'), ('bar', '<f4')])
+
+The resulting array is a view into the original array. It shares the same
+memory locations and writing to the view will modify the original array. ::
+
+ >>> y = x['bar']
+ >>> y[:] = 11
+ >>> x
+ array([(10, 11.), (10, 11.)],
+ dtype=[('foo', '<i8'), ('bar', '<f4')])
+
+This view has the same dtype and itemsize as the indexed field, so it is
+typically a non-structured array, except in the case of nested structures.
+
+ >>> y.dtype, y.shape, y.strides
+ (dtype('float32'), (2,), (12,))
+
+If the accessed field is a subarray, the dimensions of the subarray
+are appended to the shape of the result::
+
+ >>> x = np.zeros((2, 2), dtype=[('a', np.int32), ('b', np.float64, (3, 3))])
+ >>> x['a'].shape
+ (2, 2)
+ >>> x['b'].shape
+ (2, 2, 3, 3)
+
+Accessing Multiple Fields
+```````````````````````````
+
+One can index and assign to a structured array with a multi-field index, where
+the index is a list of field names.
+
+.. warning::
+ The behavior of multi-field indexes changed from Numpy 1.15 to Numpy 1.16.
+
+The result of indexing with a multi-field index is a view into the original
+array, as follows::
+
+ >>> a = np.zeros(3, dtype=[('a', 'i4'), ('b', 'i4'), ('c', 'f4')])
+ >>> a[['a', 'c']]
+ array([(0, 0.), (0, 0.), (0, 0.)],
+ dtype={'names':['a','c'], 'formats':['<i4','<f4'], 'offsets':[0,8], 'itemsize':12})
+
+Assignment to the view modifies the original array. The view's fields will be
+in the order they were indexed. Note that unlike for single-field indexing, the
+dtype of the view has the same itemsize as the original array, and has fields
+at the same offsets as in the original array, and unindexed fields are merely
+missing.
+
+.. warning::
+ In Numpy 1.15, indexing an array with a multi-field index returned a copy of
+ the result above, but with fields packed together in memory as if
+ passed through :func:`numpy.lib.recfunctions.repack_fields`.
+
+ The new behavior as of Numpy 1.16 leads to extra "padding" bytes at the
+ location of unindexed fields compared to 1.15. You will need to update any
+ code which depends on the data having a "packed" layout. For instance code
+ such as::
+
+ >>> a[['a', 'c']].view('i8') # Fails in Numpy 1.16
+ Traceback (most recent call last):
+ File "<stdin>", line 1, in <module>
+ ValueError: When changing to a smaller dtype, its size must be a divisor of the size of original dtype
+
+ will need to be changed. This code has raised a ``FutureWarning`` since
+ Numpy 1.12, and similar code has raised ``FutureWarning`` since 1.7.
+
+ In 1.16 a number of functions have been introduced in the
+ :mod:`numpy.lib.recfunctions` module to help users account for this
+ change. These are
+ :func:`numpy.lib.recfunctions.repack_fields`.
+ :func:`numpy.lib.recfunctions.structured_to_unstructured`,
+ :func:`numpy.lib.recfunctions.unstructured_to_structured`,
+ :func:`numpy.lib.recfunctions.apply_along_fields`,
+ :func:`numpy.lib.recfunctions.assign_fields_by_name`, and
+ :func:`numpy.lib.recfunctions.require_fields`.
+
+ The function :func:`numpy.lib.recfunctions.repack_fields` can always be
+ used to reproduce the old behavior, as it will return a packed copy of the
+ structured array. The code above, for example, can be replaced with:
+
+ >>> from numpy.lib.recfunctions import repack_fields
+ >>> repack_fields(a[['a', 'c']]).view('i8') # supported in 1.16
+ array([0, 0, 0])
+
+ Furthermore, numpy now provides a new function
+ :func:`numpy.lib.recfunctions.structured_to_unstructured` which is a safer
+ and more efficient alternative for users who wish to convert structured
+arrays to unstructured arrays, as the view above is often intended to do.
+ This function allows safe conversion to an unstructured type taking into
+ account padding, often avoids a copy, and also casts the datatypes
+ as needed, unlike the view. Code such as:
+
+ >>> b = np.zeros(3, dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4')])
+ >>> b[['x', 'z']].view('f4')
+ array([0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)
+
+ can be made safer by replacing with:
+
+ >>> from numpy.lib.recfunctions import structured_to_unstructured
+ >>> structured_to_unstructured(b[['x', 'z']])
+ array([[0., 0.],
+ [0., 0.],
+ [0., 0.]], dtype=float32)
+
+
+Assignment to an array with a multi-field index modifies the original array::
+
+ >>> a[['a', 'c']] = (2, 3)
+ >>> a
+ array([(2, 0, 3.), (2, 0, 3.), (2, 0, 3.)],
+ dtype=[('a', '<i4'), ('b', '<i4'), ('c', '<f4')])
+
+This obeys the structured array assignment rules described above. For example,
+this means that one can swap the values of two fields using appropriate
+multi-field indexes::
+
+ >>> a[['a', 'c']] = a[['c', 'a']]
+
+Indexing with an Integer to get a Structured Scalar
+```````````````````````````````````````````````````
+
+Indexing a single element of a structured array (with an integer index) returns
+a structured scalar::
+
+ >>> x = np.array([(1, 2., 3.)], dtype='i, f, f')
+ >>> scalar = x[0]
+ >>> scalar
+ (1, 2., 3.)
+ >>> type(scalar)
+ <class 'numpy.void'>
+
+Unlike other numpy scalars, structured scalars are mutable and act like views
+into the original array, such that modifying the scalar will modify the
+original array. Structured scalars also support access and assignment by field
+name::
+
+ >>> x = np.array([(1, 2), (3, 4)], dtype=[('foo', 'i8'), ('bar', 'f4')])
+ >>> s = x[0]
+ >>> s['bar'] = 100
+ >>> x
+ array([(1, 100.), (3, 4.)],
+ dtype=[('foo', '<i8'), ('bar', '<f4')])
+
+Similarly to tuples, structured scalars can also be indexed with an integer::
+
+ >>> scalar = np.array([(1, 2., 3.)], dtype='i, f, f')[0]
+ >>> scalar[0]
+ 1
+ >>> scalar[1] = 4
+
+Thus, tuples might be thought of as the native Python equivalent to numpy's
+structured types, much like native Python integers are the equivalent of
+numpy's integer types. Structured scalars may be converted to a tuple by
+calling `numpy.ndarray.item`::
+
+ >>> scalar.item(), type(scalar.item())
+ ((1, 4.0, 3.0), <class 'tuple'>)
+
+Viewing Structured Arrays Containing Objects
+--------------------------------------------
+
+In order to prevent clobbering object pointers in fields of
+:class:`object` type, numpy currently does not allow views of structured
+arrays containing objects.
+
+Structure Comparison
+--------------------
+
+If the dtypes of two void structured arrays are equal, testing the equality of
+the arrays will result in a boolean array with the dimensions of the original
+arrays, with elements set to ``True`` where all fields of the corresponding
+structures are equal. Structured dtypes are equal if the field names,
+dtypes and titles are the same, ignoring endianness, and the fields are in
+the same order::
+
+ >>> a = np.zeros(2, dtype=[('a', 'i4'), ('b', 'i4')])
+ >>> b = np.ones(2, dtype=[('a', 'i4'), ('b', 'i4')])
+ >>> a == b
+ array([False, False])
+
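As a quick runnable check of these rules, a structure compares equal only when every one of its fields matches (the dtype and values below are illustrative):

```python
import numpy as np

# Equality is elementwise over whole structures: a row compares equal
# only when all of its fields match the corresponding row.
a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')])
b = np.array([(1, 2), (3, 99)], dtype=[('x', 'i4'), ('y', 'i4')])
print(a == b)  # first row matches in both fields, second does not
```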
+Currently, if the dtypes of two void structured arrays are not equivalent, the
+comparison fails, returning the scalar value ``False``. This behavior is
+deprecated as of numpy 1.10 and will raise an error or perform elementwise
+comparison in the future.
+
+The ``<`` and ``>`` operators always return ``False`` when comparing void
+structured arrays, and arithmetic and bitwise operations are not supported.
+
+Record Arrays
+=============
+
+As an optional convenience numpy provides an ndarray subclass,
+:class:`numpy.recarray`, that allows access to fields of structured arrays by
+attribute instead of only by index.
+Record arrays use a special datatype, :class:`numpy.record`, that allows
+field access by attribute on the structured scalars obtained from the array.
+The :mod:`numpy.rec` module provides functions for creating recarrays from
+various objects.
+Additional helper functions for creating and manipulating structured arrays
+can be found in :mod:`numpy.lib.recfunctions`.
+
+The simplest way to create a record array is with ``numpy.rec.array``::
+
+ >>> recordarr = np.rec.array([(1, 2., 'Hello'), (2, 3., "World")],
+ ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')])
+ >>> recordarr.bar
+ array([ 2., 3.], dtype=float32)
+ >>> recordarr[1:2]
+ rec.array([(2, 3., b'World')],
+ dtype=[('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')])
+ >>> recordarr[1:2].foo
+ array([2], dtype=int32)
+ >>> recordarr.foo[1:2]
+ array([2], dtype=int32)
+ >>> recordarr[1].baz
+ b'World'
+
+:func:`numpy.rec.array` can convert a wide variety of arguments into record
+arrays, including structured arrays::
+
+ >>> arr = np.array([(1, 2., 'Hello'), (2, 3., "World")],
+ ... dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')])
+ >>> recordarr = np.rec.array(arr)
+
+The :mod:`numpy.rec` module provides a number of other convenience functions for
+creating record arrays, see :ref:`record array creation routines
+<routines.array-creation.rec>`.
+
+A record array representation of a structured array can be obtained using the
+appropriate `view <numpy-ndarray-view>`_::
+
+ >>> arr = np.array([(1, 2., 'Hello'), (2, 3., "World")],
+ ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'a10')])
+ >>> recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)),
+ ... type=np.recarray)
+
+For convenience, viewing an ndarray as type :class:`numpy.recarray` will
+automatically convert to :class:`numpy.record` datatype, so the dtype can be left
+out of the view::
+
+ >>> recordarr = arr.view(np.recarray)
+ >>> recordarr.dtype
+ dtype((numpy.record, [('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')]))
+
+To get back to a plain ndarray both the dtype and type must be reset. The
+following view does so, taking into account the unusual case that the
+recordarr was not a structured type::
+
+ >>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray)
+
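A minimal round trip through the view expressions above might look like this (the field names are illustrative):

```python
import numpy as np

# Structured ndarray -> recarray -> plain ndarray, using the view
# expression above to reset both the dtype and the array type.
arr = np.array([(1, 2.0)], dtype=[('foo', 'i4'), ('bar', 'f4')])
recordarr = arr.view(np.recarray)          # dtype becomes numpy.record
arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray)
print(type(arr2) is np.ndarray)  # back to a plain ndarray
print(arr2.dtype.names)          # fields are preserved
```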
+Record array fields accessed by index or by attribute are returned as a record
+array if the field has a structured type but as a plain ndarray otherwise. ::
+
+ >>> recordarr = np.rec.array([('Hello', (1, 2)), ("World", (3, 4))],
+ ... dtype=[('foo', 'S6'),('bar', [('A', int), ('B', int)])])
+ >>> type(recordarr.foo)
+ <class 'numpy.ndarray'>
+ >>> type(recordarr.bar)
+ <class 'numpy.recarray'>
+
+Note that if a field has the same name as an ndarray attribute, the ndarray
+attribute takes precedence. Such fields will be inaccessible by attribute but
+will still be accessible by index.
+
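For instance, a field that happens to share a name with an ndarray attribute behaves as described (the field name ``mean`` below is an illustrative collision):

```python
import numpy as np

# A field named "mean" collides with ndarray.mean: attribute access
# returns the method, while index access still reaches the field.
ra = np.rec.array([(1.0, 2.0)], dtype=[('mean', 'f8'), ('x', 'f8')])
print(callable(ra.mean))   # True - the ndarray method wins
print(ra['mean'][0])       # the field is still reachable by index
```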
Recarray Helper Functions
-*************************
+-------------------------
.. automodule:: numpy.lib.recfunctions
:members:
diff --git a/doc/source/user/basics.rst b/doc/source/user/basics.rst
index e0fc0ece3..66f3f9ee9 100644
--- a/doc/source/user/basics.rst
+++ b/doc/source/user/basics.rst
@@ -1,14 +1,18 @@
-************
-NumPy basics
-************
+******************
+NumPy fundamentals
+******************
+
+These documents clarify concepts, design decisions, and technical
+constraints in NumPy. This is a great place to understand the
+fundamental NumPy ideas and philosophy.
.. toctree::
:maxdepth: 1
- basics.types
basics.creation
- basics.io
basics.indexing
+ basics.io
+ basics.types
basics.broadcasting
basics.byteswapping
basics.rec
diff --git a/doc/source/user/basics.subclassing.rst b/doc/source/user/basics.subclassing.rst
index 43315521c..8ffa31688 100644
--- a/doc/source/user/basics.subclassing.rst
+++ b/doc/source/user/basics.subclassing.rst
@@ -4,4 +4,751 @@
Subclassing ndarray
*******************
-.. automodule:: numpy.doc.subclassing
+Introduction
+------------
+
+Subclassing ndarray is relatively simple, but it has some complications
+compared to other Python objects. On this page we explain the machinery
+that allows you to subclass ndarray, and the implications for
+implementing a subclass.
+
+ndarrays and object creation
+============================
+
+Subclassing ndarray is complicated by the fact that new instances of
+ndarray classes can come about in three different ways. These are:
+
+#. Explicit constructor call - as in ``MySubClass(params)``. This is
+ the usual route to Python instance creation.
+#. View casting - casting an existing ndarray as a given subclass
+#. New from template - creating a new instance from a template
+ instance. Examples include returning slices from a subclassed array,
+ creating return types from ufuncs, and copying arrays. See
+ :ref:`new-from-template` for more details
+
+The last two are characteristics of ndarrays - in order to support
+things like array slicing. The complications of subclassing ndarray are
+due to the mechanisms numpy has to support these latter two routes of
+instance creation.
+
+.. _view-casting:
+
+View casting
+------------
+
+*View casting* is the standard ndarray mechanism by which you take an
+ndarray of any subclass, and return a view of the array as another
+(specified) subclass:
+
+>>> import numpy as np
+>>> # create a completely useless ndarray subclass
+>>> class C(np.ndarray): pass
+>>> # create a standard ndarray
+>>> arr = np.zeros((3,))
+>>> # take a view of it, as our useless subclass
+>>> c_arr = arr.view(C)
+>>> type(c_arr)
+<class 'C'>
+
+.. _new-from-template:
+
+Creating new from template
+--------------------------
+
+New instances of an ndarray subclass can also come about by a very
+similar mechanism to :ref:`view-casting`, when numpy finds it needs to
+create a new instance from a template instance. The most obvious place
+this has to happen is when you are taking slices of subclassed arrays.
+For example:
+
+>>> v = c_arr[1:]
+>>> type(v) # the view is of type 'C'
+<class 'C'>
+>>> v is c_arr # but it's a new instance
+False
+
+The slice is a *view* onto the original ``c_arr`` data. So, when we
+take a view from the ndarray, we return a new ndarray, of the same
+class, that points to the data in the original.
+
+There are other points in the use of ndarrays where we need such views,
+such as copying arrays (``c_arr.copy()``), creating ufunc output arrays
+(see also :ref:`array-wrap`), and reducing methods (like
+``c_arr.mean()``).
+
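A small self-contained check of this, re-creating the do-nothing subclass from above:

```python
import numpy as np

# A do-nothing ndarray subclass: copies and slices preserve the
# subclass type without any extra code on our part.
class C(np.ndarray):
    pass

c_arr = np.zeros(3).view(C)
print(type(c_arr.copy()).__name__)  # copying preserves the class
print(type(c_arr[1:]).__name__)     # so does slicing
```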
+Relationship of view casting and new-from-template
+--------------------------------------------------
+
+These paths both use the same machinery. We make the distinction here,
+because they result in different input to your methods. Specifically,
+:ref:`view-casting` means you have created a new instance of your array
+type from any potential subclass of ndarray. :ref:`new-from-template`
+means you have created a new instance of your class from a pre-existing
+instance, allowing you - for example - to copy across attributes that
+are particular to your subclass.
+
+Implications for subclassing
+----------------------------
+
+If we subclass ndarray, we need to deal not only with explicit
+construction of our array type, but also :ref:`view-casting` or
+:ref:`new-from-template`. NumPy has the machinery to do this, and it is
+this machinery that makes subclassing slightly non-standard.
+
+There are two aspects to the machinery that ndarray uses to support
+views and new-from-template in subclasses.
+
+The first is the use of the ``ndarray.__new__`` method for the main work
+of object initialization, rather than the more usual ``__init__``
+method. The second is the use of the ``__array_finalize__`` method to
+allow subclasses to clean up after the creation of views and new
+instances from templates.
+
+A brief Python primer on ``__new__`` and ``__init__``
+=====================================================
+
+``__new__`` is a standard Python method, and, if present, is called
+before ``__init__`` when we create a class instance. See the `python
+__new__ documentation
+<https://docs.python.org/reference/datamodel.html#object.__new__>`_ for more detail.
+
+For example, consider the following Python code:
+
+.. testcode::
+
+ class C:
+ def __new__(cls, *args):
+ print('Cls in __new__:', cls)
+ print('Args in __new__:', args)
+ # The `object` type __new__ method takes a single argument.
+ return object.__new__(cls)
+
+ def __init__(self, *args):
+ print('type(self) in __init__:', type(self))
+ print('Args in __init__:', args)
+
+meaning that we get:
+
+>>> c = C('hello')
+Cls in __new__: <class 'C'>
+Args in __new__: ('hello',)
+type(self) in __init__: <class 'C'>
+Args in __init__: ('hello',)
+
+When we call ``C('hello')``, the ``__new__`` method gets its own class
+as first argument, and the passed argument, which is the string
+``'hello'``. After Python calls ``__new__``, it usually (see below)
+calls our ``__init__`` method, with the output of ``__new__`` as the
+first argument (now a class instance), and the passed arguments
+following.
+
+As you can see, the object can be initialized in the ``__new__``
+method or the ``__init__`` method, or both, and in fact ndarray does
+not have an ``__init__`` method, because all the initialization is
+done in the ``__new__`` method.
+
+Why use ``__new__`` rather than just the usual ``__init__``? Because
+in some cases, as for ndarray, we want to be able to return an object
+of some other class. Consider the following:
+
+.. testcode::
+
+ class D(C):
+ def __new__(cls, *args):
+ print('D cls is:', cls)
+ print('D args in __new__:', args)
+ return C.__new__(C, *args)
+
+ def __init__(self, *args):
+ # we never get here
+ print('In D __init__')
+
+meaning that:
+
+>>> obj = D('hello')
+D cls is: <class 'D'>
+D args in __new__: ('hello',)
+Cls in __new__: <class 'C'>
+Args in __new__: ('hello',)
+>>> type(obj)
+<class 'C'>
+
+The definition of ``C`` is the same as before, but for ``D``, the
+``__new__`` method returns an instance of class ``C`` rather than
+``D``. Note that the ``__init__`` method of ``D`` does not get
+called. In general, when the ``__new__`` method returns an object of
+class other than the class in which it is defined, the ``__init__``
+method of that class is not called.
+
+This is how subclasses of the ndarray class are able to return views
+that preserve the class type. When taking a view, the standard
+ndarray machinery creates the new ndarray object with something
+like::
+
+ obj = ndarray.__new__(subtype, shape, ...
+
+where ``subtype`` is the subclass. Thus the returned view is of the
+same class as the subclass, rather than being of class ``ndarray``.
+
+That solves the problem of returning views of the same type, but now
+we have a new problem. The machinery of ndarray can set the class
+this way, in its standard methods for taking views, but the ndarray
+``__new__`` method knows nothing of what we have done in our own
+``__new__`` method in order to set attributes, and so on. (Aside -
+why not call ``obj = subtype.__new__(...`` then? Because we may not
+have a ``__new__`` method with the same call signature).
+
+The role of ``__array_finalize__``
+==================================
+
+``__array_finalize__`` is the mechanism that numpy provides to allow
+subclasses to handle the various ways that new instances get created.
+
+Remember that subclass instances can come about in these three ways:
+
+#. explicit constructor call (``obj = MySubClass(params)``). This will
+ call the usual sequence of ``MySubClass.__new__`` then (if it exists)
+ ``MySubClass.__init__``.
+#. :ref:`view-casting`
+#. :ref:`new-from-template`
+
+Our ``MySubClass.__new__`` method only gets called in the case of the
+explicit constructor call, so we can't rely on ``MySubClass.__new__`` or
+``MySubClass.__init__`` to deal with the view casting and
+new-from-template. It turns out that ``MySubClass.__array_finalize__``
+*does* get called for all three methods of object creation, so this is
+where our object creation housekeeping usually goes.
+
+* For the explicit constructor call, our subclass will need to create a
+ new ndarray instance of its own class. In practice this means that
+ we, the authors of the code, will need to make a call to
+ ``ndarray.__new__(MySubClass,...)``, a class-hierarchy prepared call to
+ ``super(MySubClass, cls).__new__(cls, ...)``, or do view casting of an
+ existing array (see below)
+* For view casting and new-from-template, the equivalent of
+ ``ndarray.__new__(MySubClass,...`` is called, at the C level.
+
+The arguments that ``__array_finalize__`` receives differ for the three
+methods of instance creation above.
+
+The following code allows us to look at the call sequences and arguments:
+
+.. testcode::
+
+ import numpy as np
+
+ class C(np.ndarray):
+ def __new__(cls, *args, **kwargs):
+ print('In __new__ with class %s' % cls)
+ return super(C, cls).__new__(cls, *args, **kwargs)
+
+ def __init__(self, *args, **kwargs):
+ # in practice you probably will not need or want an __init__
+ # method for your subclass
+ print('In __init__ with class %s' % self.__class__)
+
+ def __array_finalize__(self, obj):
+ print('In array_finalize:')
+ print(' self type is %s' % type(self))
+ print(' obj type is %s' % type(obj))
+
+
+Now:
+
+>>> # Explicit constructor
+>>> c = C((10,))
+In __new__ with class <class 'C'>
+In array_finalize:
+ self type is <class 'C'>
+ obj type is <class 'NoneType'>
+In __init__ with class <class 'C'>
+>>> # View casting
+>>> a = np.arange(10)
+>>> cast_a = a.view(C)
+In array_finalize:
+ self type is <class 'C'>
+ obj type is <class 'numpy.ndarray'>
+>>> # Slicing (example of new-from-template)
+>>> cv = c[:1]
+In array_finalize:
+ self type is <class 'C'>
+ obj type is <class 'C'>
+
+The signature of ``__array_finalize__`` is::
+
+ def __array_finalize__(self, obj):
+
+One sees that the ``super`` call, which goes to
+``ndarray.__new__``, passes ``__array_finalize__`` the new object, of our
+own class (``self``) as well as the object from which the view has been
+taken (``obj``). As you can see from the output above, the ``self`` is
+always a newly created instance of our subclass, and the type of ``obj``
+differs for the three instance creation methods:
+
+* When called from the explicit constructor, ``obj`` is ``None``
+* When called from view casting, ``obj`` can be an instance of any
+ subclass of ndarray, including our own.
+* When called in new-from-template, ``obj`` is another instance of our
+ own subclass, that we might use to update the new ``self`` instance.
+
+Because ``__array_finalize__`` is the only method that always sees new
+instances being created, it is the sensible place to fill in instance
+defaults for new object attributes, among other tasks.
+
+This may be clearer with an example.
+
+Simple example - adding an extra attribute to ndarray
+-----------------------------------------------------
+
+.. testcode::
+
+ import numpy as np
+
+ class InfoArray(np.ndarray):
+
+ def __new__(subtype, shape, dtype=float, buffer=None, offset=0,
+ strides=None, order=None, info=None):
+ # Create the ndarray instance of our type, given the usual
+ # ndarray input arguments. This will call the standard
+ # ndarray constructor, but return an object of our type.
+ # It also triggers a call to InfoArray.__array_finalize__
+ obj = super(InfoArray, subtype).__new__(subtype, shape, dtype,
+ buffer, offset, strides,
+ order)
+ # set the new 'info' attribute to the value passed
+ obj.info = info
+ # Finally, we must return the newly created object:
+ return obj
+
+ def __array_finalize__(self, obj):
+ # ``self`` is a new object resulting from
+ # ndarray.__new__(InfoArray, ...), therefore it only has
+ # attributes that the ndarray.__new__ constructor gave it -
+ # i.e. those of a standard ndarray.
+ #
+ # We could have got to the ndarray.__new__ call in 3 ways:
+ # From an explicit constructor - e.g. InfoArray():
+ # obj is None
+ # (we're in the middle of the InfoArray.__new__
+ # constructor, and self.info will be set when we return to
+ # InfoArray.__new__)
+ if obj is None: return
+ # From view casting - e.g. arr.view(InfoArray):
+ # obj is arr
+ # (type(obj) can be InfoArray)
+ # From new-from-template - e.g. infoarr[:3]
+ # type(obj) is InfoArray
+ #
+ # Note that it is here, rather than in the __new__ method,
+ # that we set the default value for 'info', because this
+ # method sees all creation of default objects - with the
+ # InfoArray.__new__ constructor, but also with
+ # arr.view(InfoArray).
+ self.info = getattr(obj, 'info', None)
+ # We do not need to return anything
+
+
+Using the object looks like this:
+
+ >>> obj = InfoArray(shape=(3,)) # explicit constructor
+ >>> type(obj)
+ <class 'InfoArray'>
+ >>> obj.info is None
+ True
+ >>> obj = InfoArray(shape=(3,), info='information')
+ >>> obj.info
+ 'information'
+ >>> v = obj[1:] # new-from-template - here - slicing
+ >>> type(v)
+ <class 'InfoArray'>
+ >>> v.info
+ 'information'
+ >>> arr = np.arange(10)
+ >>> cast_arr = arr.view(InfoArray) # view casting
+ >>> type(cast_arr)
+ <class 'InfoArray'>
+ >>> cast_arr.info is None
+ True
+
+This class isn't very useful, because it has the same constructor as the
+bare ndarray object, including passing in buffers and shapes and so on.
+We would probably prefer the constructor to be able to take an already
+formed ndarray from the usual numpy calls to ``np.array`` and return an
+object.
+
+Slightly more realistic example - attribute added to existing array
+-------------------------------------------------------------------
+
+Here is a class that takes a standard ndarray that already exists, casts
+as our type, and adds an extra attribute.
+
+.. testcode::
+
+ import numpy as np
+
+ class RealisticInfoArray(np.ndarray):
+
+ def __new__(cls, input_array, info=None):
+ # Input array is an already formed ndarray instance
+ # We first cast to be our class type
+ obj = np.asarray(input_array).view(cls)
+ # add the new attribute to the created instance
+ obj.info = info
+ # Finally, we must return the newly created object:
+ return obj
+
+ def __array_finalize__(self, obj):
+ # see InfoArray.__array_finalize__ for comments
+ if obj is None: return
+ self.info = getattr(obj, 'info', None)
+
+
+So:
+
+ >>> arr = np.arange(5)
+ >>> obj = RealisticInfoArray(arr, info='information')
+ >>> type(obj)
+ <class 'RealisticInfoArray'>
+ >>> obj.info
+ 'information'
+ >>> v = obj[1:]
+ >>> type(v)
+ <class 'RealisticInfoArray'>
+ >>> v.info
+ 'information'
+
+.. _array-ufunc:
+
+``__array_ufunc__`` for ufuncs
+------------------------------
+
+ .. versionadded:: 1.13
+
+A subclass can override what happens when executing numpy ufuncs on it by
+overriding the default ``ndarray.__array_ufunc__`` method. This method is
+executed *instead* of the ufunc and should return either the result of the
+operation, or :obj:`NotImplemented` if the operation requested is not
+implemented.
+
+The signature of ``__array_ufunc__`` is::
+
+ def __array_ufunc__(ufunc, method, *inputs, **kwargs):
+
+ - *ufunc* is the ufunc object that was called.
+ - *method* is a string indicating how the Ufunc was called, either
+ ``"__call__"`` to indicate it was called directly, or one of its
+ :ref:`methods<ufuncs.methods>`: ``"reduce"``, ``"accumulate"``,
+ ``"reduceat"``, ``"outer"``, or ``"at"``.
+ - *inputs* is a tuple of the input arguments to the ``ufunc``
+ - *kwargs* contains any optional or keyword arguments passed to the
+ function. This includes any ``out`` arguments, which are always
+ contained in a tuple.
+
+A typical implementation would convert any inputs or outputs that are
+instances of one's own class, pass everything on to a superclass using
+``super()``, and finally return the results after possible
+back-conversion. An example, taken from the test case
+``test_ufunc_override_with_super`` in ``core/tests/test_umath.py``, is the
+following.
+
+.. testcode::
+
+ import numpy as np
+
+ class A(np.ndarray):
+ def __array_ufunc__(self, ufunc, method, *inputs, out=None, **kwargs):
+ args = []
+ in_no = []
+ for i, input_ in enumerate(inputs):
+ if isinstance(input_, A):
+ in_no.append(i)
+ args.append(input_.view(np.ndarray))
+ else:
+ args.append(input_)
+
+ outputs = out
+ out_no = []
+ if outputs:
+ out_args = []
+ for j, output in enumerate(outputs):
+ if isinstance(output, A):
+ out_no.append(j)
+ out_args.append(output.view(np.ndarray))
+ else:
+ out_args.append(output)
+ kwargs['out'] = tuple(out_args)
+ else:
+ outputs = (None,) * ufunc.nout
+
+ info = {}
+ if in_no:
+ info['inputs'] = in_no
+ if out_no:
+ info['outputs'] = out_no
+
+ results = super(A, self).__array_ufunc__(ufunc, method,
+ *args, **kwargs)
+ if results is NotImplemented:
+ return NotImplemented
+
+ if method == 'at':
+ if isinstance(inputs[0], A):
+ inputs[0].info = info
+ return
+
+ if ufunc.nout == 1:
+ results = (results,)
+
+ results = tuple((np.asarray(result).view(A)
+ if output is None else output)
+ for result, output in zip(results, outputs))
+ if results and isinstance(results[0], A):
+ results[0].info = info
+
+ return results[0] if len(results) == 1 else results
+
+So, this class does not actually do anything interesting: it just
+converts any instances of its own to regular ndarray (otherwise, we'd
+get infinite recursion!), and adds an ``info`` dictionary that tells
+which inputs and outputs it converted. Hence, e.g.,
+
+>>> a = np.arange(5.).view(A)
+>>> b = np.sin(a)
+>>> b.info
+{'inputs': [0]}
+>>> b = np.sin(np.arange(5.), out=(a,))
+>>> b.info
+{'outputs': [0]}
+>>> a = np.arange(5.).view(A)
+>>> b = np.ones(1).view(A)
+>>> c = a + b
+>>> c.info
+{'inputs': [0, 1]}
+>>> a += b
+>>> a.info
+{'inputs': [0, 1], 'outputs': [0]}
+
+Note that another approach would be to use ``getattr(ufunc,
+method)(*inputs, **kwargs)`` instead of the ``super`` call. For this example,
+the result would be identical, but there is a difference if another operand
+also defines ``__array_ufunc__``. E.g., let's assume that we evaluate
+``np.add(a, b)``, where ``b`` is an instance of another class ``B`` that has
+an override. If you use ``super`` as in the example,
+``ndarray.__array_ufunc__`` will notice that ``b`` has an override, which
+means it cannot evaluate the result itself. Thus, it will return
+`NotImplemented` and so will our class ``A``. Then, control will be passed
+over to ``b``, which either knows how to deal with us and produces a result,
+or does not and returns `NotImplemented`, raising a ``TypeError``.
+
+If instead, we replace our ``super`` call with ``getattr(ufunc, method)``, we
+effectively do ``np.add(a.view(np.ndarray), b)``. Again, ``B.__array_ufunc__``
+will be called, but now it sees an ``ndarray`` as the other argument. Likely,
+it will know how to handle this, and return a new instance of the ``B`` class
+to us. Our example class is not set up to handle this, but it might well be
+the best approach if, e.g., one were to re-implement ``MaskedArray`` using
+``__array_ufunc__``.
+
+As a final note: if the ``super`` route is suited to a given class, an
+advantage of using it is that it helps in constructing class hierarchies.
+E.g., suppose that our other class ``B`` also used the ``super`` in its
+``__array_ufunc__`` implementation, and we created a class ``C`` that depended
+on both, i.e., ``class C(A, B)`` (with, for simplicity, not another
+``__array_ufunc__`` override). Then any ufunc on an instance of ``C`` would
+pass on to ``A.__array_ufunc__``, the ``super`` call in ``A`` would go to
+``B.__array_ufunc__``, and the ``super`` call in ``B`` would go to
+``ndarray.__array_ufunc__``, thus allowing ``A`` and ``B`` to collaborate.
+
+.. _array-wrap:
+
+``__array_wrap__`` for ufuncs and other functions
+-------------------------------------------------
+
+Prior to numpy 1.13, the behaviour of ufuncs could only be tuned using
+``__array_wrap__`` and ``__array_prepare__``. These two allowed one to
+change the output type of a ufunc, but, in contrast to
+``__array_ufunc__``, did not allow one to make any changes to the inputs.
+It is hoped to eventually deprecate these, but ``__array_wrap__`` is also
+used by other numpy functions and methods, such as ``squeeze``, so at the
+present time is still needed for full functionality.
+
+Conceptually, ``__array_wrap__`` "wraps up the action" in the sense of
+allowing a subclass to set the type of the return value and update
+attributes and metadata. Let's show how this works with an example. First
+we return to the simpler example subclass, but with a different name and
+some print statements:
+
+.. testcode::
+
+ import numpy as np
+
+ class MySubClass(np.ndarray):
+
+ def __new__(cls, input_array, info=None):
+ obj = np.asarray(input_array).view(cls)
+ obj.info = info
+ return obj
+
+ def __array_finalize__(self, obj):
+ print('In __array_finalize__:')
+ print(' self is %s' % repr(self))
+ print(' obj is %s' % repr(obj))
+ if obj is None: return
+ self.info = getattr(obj, 'info', None)
+
+ def __array_wrap__(self, out_arr, context=None):
+ print('In __array_wrap__:')
+ print(' self is %s' % repr(self))
+ print(' arr is %s' % repr(out_arr))
+ # then just call the parent
+ return super(MySubClass, self).__array_wrap__(out_arr, context)
+
+We run a ufunc on an instance of our new array:
+
+>>> obj = MySubClass(np.arange(5), info='spam')
+In __array_finalize__:
+ self is MySubClass([0, 1, 2, 3, 4])
+ obj is array([0, 1, 2, 3, 4])
+>>> arr2 = np.arange(5)+1
+>>> ret = np.add(arr2, obj)
+In __array_wrap__:
+ self is MySubClass([0, 1, 2, 3, 4])
+ arr is array([1, 3, 5, 7, 9])
+In __array_finalize__:
+ self is MySubClass([1, 3, 5, 7, 9])
+ obj is MySubClass([0, 1, 2, 3, 4])
+>>> ret
+MySubClass([1, 3, 5, 7, 9])
+>>> ret.info
+'spam'
+
+Note that the ufunc (``np.add``) has called the ``__array_wrap__`` method
+with arguments ``self`` as ``obj``, and ``out_arr`` as the (ndarray) result
+of the addition. In turn, the default ``__array_wrap__``
+(``ndarray.__array_wrap__``) has cast the result to class ``MySubClass``,
+and called ``__array_finalize__`` - hence the copying of the ``info``
+attribute. This has all happened at the C level.
+
+But, we could do anything we wanted:
+
+.. testcode::
+
+ class SillySubClass(np.ndarray):
+
+ def __array_wrap__(self, arr, context=None):
+ return 'I lost your data'
+
+>>> arr1 = np.arange(5)
+>>> obj = arr1.view(SillySubClass)
+>>> arr2 = np.arange(5)
+>>> ret = np.multiply(obj, arr2)
+>>> ret
+'I lost your data'
+
+So, by defining a specific ``__array_wrap__`` method for our subclass,
+we can tweak the output from ufuncs. The ``__array_wrap__`` method
+requires ``self``, then an argument - which is the result of the ufunc -
+and an optional parameter *context*. This parameter is passed by
+ufuncs as a 3-element tuple: (name of the ufunc, arguments of the ufunc,
+domain of the ufunc), but is not set by other numpy functions. Though,
+as seen above, it is possible to do otherwise, ``__array_wrap__`` should
+return an instance of its containing class. See the masked array
+subclass for an implementation.
+
+In addition to ``__array_wrap__``, which is called on the way out of the
+ufunc, there is also an ``__array_prepare__`` method which is called on
+the way into the ufunc, after the output arrays are created but before any
+computation has been performed. The default implementation does nothing
+but pass through the array. ``__array_prepare__`` should not attempt to
+access the array data or resize the array, it is intended for setting the
+output array type, updating attributes and metadata, and performing any
+checks based on the input that may be desired before computation begins.
+Like ``__array_wrap__``, ``__array_prepare__`` must return an ndarray or
+subclass thereof or raise an error.
+
+Extra gotchas - custom ``__del__`` methods and ndarray.base
+-----------------------------------------------------------
+
+One of the problems that ndarray solves is keeping track of memory
+ownership of ndarrays and their views. Consider the case where we have
+created an ndarray, ``arr`` and have taken a slice with ``v = arr[1:]``.
+The two objects are looking at the same memory. NumPy keeps track of
+where the data came from for a particular array or view, with the
+``base`` attribute:
+
+>>> # A normal ndarray, that owns its own data
+>>> arr = np.zeros((4,))
+>>> # In this case, base is None
+>>> arr.base is None
+True
+>>> # We take a view
+>>> v1 = arr[1:]
+>>> # base now points to the array that it derived from
+>>> v1.base is arr
+True
+>>> # Take a view of a view
+>>> v2 = v1[1:]
+>>> # base points to the original array that it was derived from
+>>> v2.base is arr
+True
+
+In general, if the array owns its own memory, as for ``arr`` in this
+case, then ``arr.base`` will be None - there are some exceptions to this
+- see the numpy book for more details.
+
+The ``base`` attribute is useful in being able to tell whether we have
+a view or the original array. This in turn can be useful if we need
+to know whether or not to do some specific cleanup when the subclassed
+array is deleted. For example, we may only want to do the cleanup if
+the original array is deleted, but not the views. For an example of
+how this can work, have a look at the ``memmap`` class in
+``numpy.core``.
+
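A hedged sketch of that idea (the class and its ``cleanup`` hook are illustrative, not part of numpy):

```python
import numpy as np

# Use ``base`` to run cleanup only for the array that owns the memory,
# not for views taken from it.
class CleanupArray(np.ndarray):
    def cleanup(self):
        if self.base is None:
            return 'cleaned'       # owns its data: do the real work here
        return 'view - skipped'    # a view: leave cleanup to the owner

owner = np.zeros(4).view(CleanupArray).copy()  # copy() owns its data
view = owner[1:]                               # view.base points at owner
print(owner.cleanup())
print(view.cleanup())
```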
+Subclassing and Downstream Compatibility
+----------------------------------------
+
+When sub-classing ``ndarray`` or creating duck-types that mimic the ``ndarray``
+interface, it is your responsibility to decide how aligned your APIs will be
+with those of numpy. For convenience, many numpy functions that have a corresponding
+``ndarray`` method (e.g., ``sum``, ``mean``, ``take``, ``reshape``) work by checking
+if the first argument to a function has a method of the same name. If it exists, the
+method is called instead of coercing the arguments to a numpy array.
+
+For example, if you want your sub-class or duck-type to be compatible with
+numpy's ``sum`` function, the method signature for this object's ``sum`` method
+should be the following:
+
+.. testcode::
+
+ def sum(self, axis=None, dtype=None, out=None, keepdims=False):
+ ...
+
+This is exactly the same method signature as ``np.sum``, so if a user calls
+``np.sum`` on this object, numpy will call the object's own ``sum`` method
+and pass in the arguments enumerated in the signature, and no errors will
+be raised because the signatures are completely compatible with each other.
+
+If, however, you decide to deviate from this signature and do something like this:
+
+.. testcode::
+
+ def sum(self, axis=None, dtype=None):
+ ...
+
+then this object is no longer compatible with ``np.sum``, because calling
+``np.sum`` will pass in the unexpected arguments ``out`` and ``keepdims``,
+causing a ``TypeError`` to be raised.
+
+If you wish to maintain compatibility with numpy and its subsequent versions (which
+might add new keyword arguments) but do not want to surface all of numpy's arguments,
+your function's signature should accept ``**kwargs``. For example:
+
+.. testcode::
+
+ def sum(self, axis=None, dtype=None, **unused_kwargs):
+ ...
+
+This object is now compatible with ``np.sum`` again because any extraneous arguments
+(i.e. keywords that are not ``axis`` or ``dtype``) will be hidden away in the
+``**unused_kwargs`` parameter.
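+As a minimal sketch of this pattern (the ``DuckArray`` class below is
+hypothetical, invented purely for illustration), an object whose ``sum``
+method absorbs extra keywords dispatches cleanly through ``np.sum``:

```python
import numpy as np

class DuckArray:
    """A minimal duck-type (hypothetical, for illustration only)."""

    def __init__(self, data):
        self.data = list(data)

    def sum(self, axis=None, dtype=None, **unused_kwargs):
        # Extra keywords such as ``out`` that np.sum passes along are
        # absorbed here instead of raising a TypeError.
        return sum(self.data)

duck = DuckArray([1, 2, 4])
print(np.sum(duck))  # np.sum dispatches to DuckArray.sum -> 7
```

+Here ``np.sum`` finds the object's ``sum`` method and calls it directly,
+so ``duck`` is never coerced to an ndarray.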
+
+
diff --git a/doc/source/user/basics.types.rst b/doc/source/user/basics.types.rst
index 5ce5af15a..ec2af409a 100644
--- a/doc/source/user/basics.types.rst
+++ b/doc/source/user/basics.types.rst
@@ -4,4 +4,339 @@ Data types
.. seealso:: :ref:`Data type objects <arrays.dtypes>`
-.. automodule:: numpy.doc.basics
+Array types and conversions between types
+=========================================
+
+NumPy supports a much greater variety of numerical types than Python does.
+This section shows which are available, and how to modify an array's data-type.
+
+The primitive types supported are tied closely to those in C:
+
+.. list-table::
+ :header-rows: 1
+
+ * - NumPy type
+ - C type
+ - Description
+
+ * - `numpy.bool_`
+ - ``bool``
+ - Boolean (True or False) stored as a byte
+
+ * - `numpy.byte`
+ - ``signed char``
+ - Platform-defined
+
+ * - `numpy.ubyte`
+ - ``unsigned char``
+ - Platform-defined
+
+ * - `numpy.short`
+ - ``short``
+ - Platform-defined
+
+ * - `numpy.ushort`
+ - ``unsigned short``
+ - Platform-defined
+
+ * - `numpy.intc`
+ - ``int``
+ - Platform-defined
+
+ * - `numpy.uintc`
+ - ``unsigned int``
+ - Platform-defined
+
+ * - `numpy.int_`
+ - ``long``
+ - Platform-defined
+
+ * - `numpy.uint`
+ - ``unsigned long``
+ - Platform-defined
+
+ * - `numpy.longlong`
+ - ``long long``
+ - Platform-defined
+
+ * - `numpy.ulonglong`
+ - ``unsigned long long``
+ - Platform-defined
+
+ * - `numpy.half` / `numpy.float16`
+ -
+ - Half precision float:
+ sign bit, 5 bits exponent, 10 bits mantissa
+
+ * - `numpy.single`
+ - ``float``
+ - Platform-defined single precision float:
+ typically sign bit, 8 bits exponent, 23 bits mantissa
+
+ * - `numpy.double`
+ - ``double``
+ - Platform-defined double precision float:
+ typically sign bit, 11 bits exponent, 52 bits mantissa.
+
+ * - `numpy.longdouble`
+ - ``long double``
+ - Platform-defined extended-precision float
+
+ * - `numpy.csingle`
+ - ``float complex``
+ - Complex number, represented by two single-precision floats (real and imaginary components)
+
+ * - `numpy.cdouble`
+ - ``double complex``
+ - Complex number, represented by two double-precision floats (real and imaginary components).
+
+ * - `numpy.clongdouble`
+ - ``long double complex``
+ - Complex number, represented by two extended-precision floats (real and imaginary components).
+
+
+Since many of these have platform-dependent definitions, a set of fixed-size
+aliases are provided:
+
+.. list-table::
+ :header-rows: 1
+
+ * - NumPy type
+ - C type
+ - Description
+
+ * - `numpy.int8`
+ - ``int8_t``
+ - Byte (-128 to 127)
+
+ * - `numpy.int16`
+ - ``int16_t``
+ - Integer (-32768 to 32767)
+
+ * - `numpy.int32`
+ - ``int32_t``
+ - Integer (-2147483648 to 2147483647)
+
+ * - `numpy.int64`
+ - ``int64_t``
+ - Integer (-9223372036854775808 to 9223372036854775807)
+
+ * - `numpy.uint8`
+ - ``uint8_t``
+ - Unsigned integer (0 to 255)
+
+ * - `numpy.uint16`
+ - ``uint16_t``
+ - Unsigned integer (0 to 65535)
+
+ * - `numpy.uint32`
+ - ``uint32_t``
+ - Unsigned integer (0 to 4294967295)
+
+ * - `numpy.uint64`
+ - ``uint64_t``
+ - Unsigned integer (0 to 18446744073709551615)
+
+ * - `numpy.intp`
+ - ``intptr_t``
+ - Integer used for indexing, typically the same as ``ssize_t``
+
+ * - `numpy.uintp`
+ - ``uintptr_t``
+ - Integer large enough to hold a pointer
+
+ * - `numpy.float32`
+ - ``float``
+ -
+
+ * - `numpy.float64` / `numpy.float_`
+ - ``double``
+ - Note that this matches the precision of the builtin python `float`.
+
+ * - `numpy.complex64`
+ - ``float complex``
+ - Complex number, represented by two 32-bit floats (real and imaginary components)
+
+ * - `numpy.complex128` / `numpy.complex_`
+ - ``double complex``
+ - Note that this matches the precision of the builtin python `complex`.
+
+
+NumPy numerical types are instances of ``dtype`` (data-type) objects, each
+having unique characteristics. Once you have imported NumPy using
+
+ ::
+
+ >>> import numpy as np
+
+the dtypes are available as ``np.bool_``, ``np.float32``, etc.
+
+Advanced types, not listed in the table above, are explored in
+section :ref:`structured_arrays`.
+
+There are 5 basic numerical types representing booleans (bool), integers (int),
+unsigned integers (uint), floating point (float) and complex. Those with numbers
+in their name indicate the bitsize of the type (i.e. how many bits are needed
+to represent a single value in memory). Some types, such as ``int`` and
+``intp``, have differing bitsizes, dependent on the platforms (e.g. 32-bit
+vs. 64-bit machines). This should be taken into account when interfacing
+with low-level code (such as C or Fortran) where the raw memory is addressed.
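+A quick way to check these bitsizes is the dtype's ``itemsize`` attribute
+(a small sketch; the printed values vary by platform):

```python
import numpy as np

# The fixed-size aliases have a known width everywhere...
assert np.dtype(np.int32).itemsize == 4   # always 32 bits
assert np.dtype(np.int64).itemsize == 8   # always 64 bits

# ...while the C-named types depend on the platform, so query rather
# than assume when interfacing with low-level code.
print(np.dtype(np.int_).itemsize)   # e.g. 8 on 64-bit Linux, 4 on Windows
print(np.dtype(np.intp).itemsize)   # matches the pointer/index size
```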
+
+Data-types can be used as functions to convert python numbers to array scalars
+(see the array scalar section for an explanation), python sequences of numbers
+to arrays of that type, or as arguments to the dtype keyword that many numpy
+functions or methods accept. Some examples::
+
+ >>> import numpy as np
+ >>> x = np.float32(1.0)
+ >>> x
+ 1.0
+ >>> y = np.int_([1,2,4])
+ >>> y
+ array([1, 2, 4])
+ >>> z = np.arange(3, dtype=np.uint8)
+ >>> z
+ array([0, 1, 2], dtype=uint8)
+
+Array types can also be referred to by character codes, mostly to retain
+backward compatibility with older packages such as Numeric. Some
+documentation may still refer to these, for example::
+
+ >>> np.array([1, 2, 3], dtype='f')
+ array([ 1., 2., 3.], dtype=float32)
+
+We recommend using dtype objects instead.
+
+To convert the type of an array, use the .astype() method (preferred) or
+the type itself as a function. For example: ::
+
+ >>> z.astype(float) #doctest: +NORMALIZE_WHITESPACE
+ array([ 0., 1., 2.])
+ >>> np.int8(z)
+ array([0, 1, 2], dtype=int8)
+
+Note that, above, we use the *Python* float object as a dtype. NumPy knows
+that ``int`` refers to ``np.int_``, ``bool`` means ``np.bool_``,
+that ``float`` is ``np.float_`` and ``complex`` is ``np.complex_``.
+The other data-types do not have Python equivalents.
+
+To determine the type of an array, look at the dtype attribute::
+
+ >>> z.dtype
+ dtype('uint8')
+
+dtype objects also contain information about the type, such as its bit-width
+and its byte-order. The data type can also be used indirectly to query
+properties of the type, such as whether it is an integer::
+
+ >>> d = np.dtype(int)
+ >>> d
+ dtype('int32')
+
+ >>> np.issubdtype(d, np.integer)
+ True
+
+ >>> np.issubdtype(d, np.floating)
+ False
+
+
+Array Scalars
+=============
+
+NumPy generally returns elements of arrays as array scalars (a scalar
+with an associated dtype). Array scalars differ from Python scalars, but
+for the most part they can be used interchangeably (the primary
+exception is for versions of Python older than v2.x, where integer array
+scalars cannot act as indices for lists and tuples). There are some
+exceptions, such as when code requires very specific attributes of a scalar
+or when it checks specifically whether a value is a Python scalar. Generally,
+problems are easily fixed by explicitly converting array scalars
+to Python scalars, using the corresponding Python type function
+(e.g., ``int``, ``float``, ``complex``, ``str``, ``unicode``).
+
+The primary advantage of using array scalars is that
+they preserve the array type (Python may not have a matching scalar type
+available, e.g. ``int16``). Therefore, the use of array scalars ensures
+identical behaviour between arrays and scalars, irrespective of whether the
+value is inside an array or not. NumPy scalars also have many of the same
+methods arrays do.
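+For instance, a small sketch of the explicit conversions mentioned above:

```python
import numpy as np

x = np.float32(5.2)          # an array scalar: it carries a NumPy dtype
print(type(x))               # <class 'numpy.float32'>
print(x.dtype, x.ndim)       # array scalars keep array attributes

# When code insists on a genuine Python scalar, convert explicitly:
y = float(x)        # via the Python type function, or
z = x.item()        # via .item(), which picks the matching Python type
print(type(y), type(z))
```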
+
+Overflow Errors
+===============
+
+The fixed size of NumPy numeric types may cause overflow errors when a value
+requires more memory than available in the data type. For example,
+`numpy.power` evaluates ``100 ** 8`` correctly for 64-bit integers,
+but gives 1874919424 (incorrect) for a 32-bit integer.
+
+ >>> np.power(100, 8, dtype=np.int64)
+ 10000000000000000
+ >>> np.power(100, 8, dtype=np.int32)
+ 1874919424
+
+The behaviour of NumPy and Python integer types differs significantly for
+integer overflows and may confuse users expecting NumPy integers to behave
+similar to Python's ``int``. Unlike NumPy, the size of Python's ``int`` is
+flexible. This means Python integers may expand to accommodate any integer and
+will not overflow.
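+The contrast can be seen in a short sketch (the wraparound values assume a
+32-bit integer dtype):

```python
import numpy as np

big = np.iinfo(np.int32).max          # 2147483647, as a Python int
a = np.array([big], dtype=np.int32)

# NumPy: fixed-width array arithmetic wraps around silently
print((a + 1)[0])                     # -2147483648

# Python: int grows as needed and never overflows
print(big + 1)                        # 2147483648
```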
+
+NumPy provides `numpy.iinfo` and `numpy.finfo` to verify the
+minimum and maximum values of NumPy integer and floating point types,
+respectively::
+
+ >>> np.iinfo(int) # Bounds of the default integer on this system.
+ iinfo(min=-9223372036854775808, max=9223372036854775807, dtype=int64)
+ >>> np.iinfo(np.int32) # Bounds of a 32-bit integer
+ iinfo(min=-2147483648, max=2147483647, dtype=int32)
+ >>> np.iinfo(np.int64) # Bounds of a 64-bit integer
+ iinfo(min=-9223372036854775808, max=9223372036854775807, dtype=int64)
+
+If 64-bit integers are still too small, the result may be cast to a
+floating point number. Floating point numbers offer a larger, but inexact,
+range of possible values.
+
+ >>> np.power(100, 100, dtype=np.int64) # Incorrect even with 64-bit int
+ 0
+ >>> np.power(100, 100, dtype=np.float64)
+ 1e+200
+
+Extended Precision
+==================
+
+Python's floating-point numbers are usually 64-bit floating-point numbers,
+nearly equivalent to ``np.float64``. In some unusual situations it may be
+useful to use floating-point numbers with more precision. Whether this
+is possible in numpy depends on the hardware and on the development
+environment: specifically, x86 machines provide hardware floating-point
+with 80-bit precision, and while most C compilers provide this as their
+``long double`` type, MSVC (standard for Windows builds) makes
+``long double`` identical to ``double`` (64 bits). NumPy makes the
+compiler's ``long double`` available as ``np.longdouble`` (and
+``np.clongdouble`` for the complex numbers). You can find out what your
+numpy provides with ``np.finfo(np.longdouble)``.
+
+NumPy does not provide a dtype with more precision than C's
+``long double``; in particular, the 128-bit IEEE quad precision
+data type (FORTRAN's ``REAL*16``) is not available.
+
+For efficient memory alignment, ``np.longdouble`` is usually stored
+padded with zero bits, either to 96 or 128 bits. Which is more efficient
+depends on hardware and development environment; typically on 32-bit
+systems they are padded to 96 bits, while on 64-bit systems they are
+typically padded to 128 bits. ``np.longdouble`` is padded to the system
+default; ``np.float96`` and ``np.float128`` are provided for users who
+want specific padding. In spite of the names, ``np.float96`` and
+``np.float128`` provide only as much precision as ``np.longdouble``,
+that is, 80 bits on most x86 machines and 64 bits in standard
+Windows builds.
+
+Be warned that even if ``np.longdouble`` offers more precision than
+python ``float``, it is easy to lose that extra precision, since
+python often forces values to pass through ``float``. For example,
+the ``%`` formatting operator requires its arguments to be converted
+to standard python types, and it is therefore impossible to preserve
+extended precision even if many decimal places are requested. It can
+be useful to test your code with the value
+``1 + np.finfo(np.longdouble).eps``.
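+A small sketch of such a test (whether the extra precision survives the
+round-trip through ``float`` depends on the platform's ``long double``):

```python
import numpy as np

eps = np.finfo(np.longdouble).eps
probe = np.longdouble(1) + eps    # distinguishable from 1 in longdouble

print(probe != np.longdouble(1))  # True: the extra precision is present...
print(float(probe) != 1.0)        # ...but may vanish after passing through
                                  # Python float (False on x86, where
                                  # longdouble eps < float64 eps)
```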
+
+
diff --git a/doc/source/user/building.rst b/doc/source/user/building.rst
index 54ece3da3..47399139e 100644
--- a/doc/source/user/building.rst
+++ b/doc/source/user/building.rst
@@ -142,6 +142,16 @@ will prefer to use ATLAS, then BLIS, then OpenBLAS and as a last resort MKL.
If neither of these exists the build will fail (names are compared
lower case).
+Alternatively one may use ``!`` or ``^`` to negate all items::
+
+ NPY_BLAS_ORDER='^blas,atlas' python setup.py build
+
+will allow using anything **but** Netlib BLAS and ATLAS libraries; the order
+of the above list is retained.
+
+One cannot mix negation and positives, nor have multiple negations; such
+cases will raise an error.
+
LAPACK
~~~~~~
@@ -165,6 +175,17 @@ will prefer to use ATLAS, then OpenBLAS and as a last resort MKL.
If neither of these exists the build will fail (names are compared
lower case).
+Alternatively one may use ``!`` or ``^`` to negate all items::
+
+ NPY_LAPACK_ORDER='^lapack' python setup.py build
+
+will allow using anything **but** the Netlib LAPACK library; the order of the
+above list is retained.
+
+One cannot mix negation and positives, nor have multiple negations; such
+cases will raise an error.
+
+
.. deprecated:: 1.20
The native libraries on macOS, provided by Accelerate, are not fit for use
in NumPy since they have bugs that cause wrong output under easily reproducible
diff --git a/doc/source/user/c-info.beyond-basics.rst b/doc/source/user/c-info.beyond-basics.rst
index 9e9cd3067..124162d6c 100644
--- a/doc/source/user/c-info.beyond-basics.rst
+++ b/doc/source/user/c-info.beyond-basics.rst
@@ -115,7 +115,7 @@ processors that use pipelining to enhance fundamental operations.
The :c:func:`PyArray_IterAllButAxis` ( ``array``, ``&dim`` ) constructs an
iterator object that is modified so that it will not iterate over the
dimension indicated by dim. The only restriction on this iterator
-object, is that the :c:func:`PyArray_Iter_GOTO1D` ( ``it``, ``ind`` ) macro
+object, is that the :c:func:`PyArray_ITER_GOTO1D` ( ``it``, ``ind`` ) macro
cannot be used (thus flat indexing won't work either if you pass this
object back to Python --- so you shouldn't do this). Note that the
returned object from this routine is still usually cast to
diff --git a/doc/source/user/c-info.how-to-extend.rst b/doc/source/user/c-info.how-to-extend.rst
index d75242092..845ce0a74 100644
--- a/doc/source/user/c-info.how-to-extend.rst
+++ b/doc/source/user/c-info.how-to-extend.rst
@@ -363,7 +363,6 @@ particular set of requirements ( *e.g.* contiguous, aligned, and
writeable). The syntax is
:c:func:`PyArray_FROM_OTF`
-
Return an ndarray from any Python object, *obj*, that can be
converted to an array. The number of dimensions in the returned
array is determined by the object. The desired data-type of the
@@ -375,7 +374,6 @@ writeable). The syntax is
exception is set.
*obj*
-
The object can be any Python object convertible to an ndarray.
If the object is already (a subclass of) the ndarray that
satisfies the requirements then a new reference is returned.
@@ -394,7 +392,6 @@ writeable). The syntax is
to the requirements flag.
*typenum*
-
One of the enumerated types or :c:data:`NPY_NOTYPE` if the data-type
should be determined from the object itself. The C-based names
can be used:
@@ -422,7 +419,6 @@ writeable). The syntax is
requirements flag to override this behavior.
*requirements*
-
The memory model for an ndarray admits arbitrary strides in
each dimension to advance to the next element of the array.
Often, however, you need to interface with code that expects a
@@ -446,13 +442,11 @@ writeable). The syntax is
:c:data:`NPY_OUT_ARRAY`, and :c:data:`NPY_ARRAY_INOUT_ARRAY`:
:c:data:`NPY_ARRAY_IN_ARRAY`
-
This flag is useful for arrays that must be in C-contiguous
order and aligned. These kinds of arrays are usually input
arrays for some algorithm.
:c:data:`NPY_ARRAY_OUT_ARRAY`
-
This flag is useful to specify an array that is
in C-contiguous order, is aligned, and can be written to
as well. Such an array is usually returned as output
@@ -460,7 +454,6 @@ writeable). The syntax is
scratch).
:c:data:`NPY_ARRAY_INOUT_ARRAY`
-
This flag is useful to specify an array that will be used for both
input and output. :c:func:`PyArray_ResolveWritebackIfCopy`
must be called before :c:func:`Py_DECREF` at
@@ -479,16 +472,13 @@ writeable). The syntax is
Other useful flags that can be OR'd as additional requirements are:
:c:data:`NPY_ARRAY_FORCECAST`
-
Cast to the desired type, even if it can't be done without losing
information.
:c:data:`NPY_ARRAY_ENSURECOPY`
-
Make sure the resulting array is a copy of the original.
:c:data:`NPY_ARRAY_ENSUREARRAY`
-
Make sure the resulting object is an actual ndarray and not a sub-
class.
diff --git a/doc/source/user/how-to-how-to.rst b/doc/source/user/how-to-how-to.rst
new file mode 100644
index 000000000..de8afc28a
--- /dev/null
+++ b/doc/source/user/how-to-how-to.rst
@@ -0,0 +1,118 @@
+.. _how-to-how-to:
+
+##############################################################################
+How to write a NumPy how-to
+##############################################################################
+
+How-tos get straight to the point -- they
+
+ - answer a focused question, or
+ - narrow a broad question into focused questions that the user can
+ choose among.
+
+******************************************************************************
+A stranger has asked for directions...
+******************************************************************************
+
+**"I need to refuel my car."**
+
+******************************************************************************
+Give a brief but explicit answer
+******************************************************************************
+
+ - `"Three kilometers/miles, take a right at Hayseed Road, it's on your left."`
+
+Add helpful details for newcomers ("Hayseed Road", even though it's the only
+turnoff at three km/mi). But not irrelevant ones:
+
+ - Don't also give directions from Route 7.
+ - Don't explain why the town has only one filling station.
+
+If there's related background (tutorial, explanation, reference, alternative
+approach), bring it to the user's attention with a link ("Directions from Route 7,"
+"Why so few filling stations?").
+
+
+******************************************************************************
+Delegate
+******************************************************************************
+
+ - `"Three km/mi, take a right at Hayseed Road, follow the signs."`
+
+If the information is already documented and succinct enough for a how-to,
+just link to it, possibly after an introduction ("Three km/mi, take a right").
+
+******************************************************************************
+If the question is broad, narrow and redirect it
+******************************************************************************
+
+ **"I want to see the sights."**
+
+The `See the sights` how-to should link to a set of narrower how-tos:
+
+- Find historic buildings
+- Find scenic lookouts
+- Find the town center
+
+and these might in turn link to still narrower how-tos -- so the town center
+page might link to
+
+ - Find the court house
+ - Find city hall
+
+By organizing how-tos this way, you not only display the options for people
+who need to narrow their question, you also have provided answers for users
+who start with narrower questions ("I want to see historic buildings," "Which
+way to city hall?").
+
+******************************************************************************
+If there are many steps, break them up
+******************************************************************************
+
+If a how-to has many steps:
+
+ - Consider breaking a step out into an individual how-to and linking to it.
+ - Include subheadings. They help readers grasp what's coming and return
+ where they left off.
+
+******************************************************************************
+Why write how-tos when there's Stack Overflow, Reddit, Gitter...?
+******************************************************************************
+
+ - We have authoritative answers.
+ - How-tos make the site less forbidding to non-experts.
+ - How-tos bring people into the site and help them discover other information
+ that's here.
+ - Creating how-tos helps us see NumPy usability through new eyes.
+
+******************************************************************************
+Aren't how-tos and tutorials the same thing?
+******************************************************************************
+
+People use the terms "how-to" and "tutorial" interchangeably, but we draw a
+distinction, following Daniele Procida's `taxonomy of documentation`_.
+
+ .. _`taxonomy of documentation`: https://documentation.divio.com/
+
+Documentation needs to meet users where they are. `How-tos` offer get-it-done
+information; the user wants steps to copy and doesn't necessarily want to
+understand NumPy. `Tutorials` are warm-fuzzy information; the user wants a
+feel for some aspect of NumPy (and again, may or may not care about deeper
+knowledge).
+
+We distinguish both tutorials and how-tos from `Explanations`, which are
+deep dives intended to give understanding rather than immediate assistance,
+and `References`, which give complete, authoritative data on some concrete
+part of NumPy (like its API) but aren't obligated to paint a broader picture.
+
+For more on tutorials, see the `tutorial how-to`_.
+
+.. _`tutorial how-to`: https://github.com/numpy/numpy-tutorials/blob/master/tutorial_style.ipynb
+
+
+******************************************************************************
+Is this page an example of a how-to?
+******************************************************************************
+
+Yes -- until the sections with question-mark headings; they explain rather
+than giving directions. In a how-to, those would be links. \ No newline at end of file
diff --git a/doc/source/user/how-to-io.rst b/doc/source/user/how-to-io.rst
new file mode 100644
index 000000000..ca9fc41f0
--- /dev/null
+++ b/doc/source/user/how-to-io.rst
@@ -0,0 +1,328 @@
+.. _how-to-io:
+
+##############################################################################
+Reading and writing files
+##############################################################################
+
+This page tackles common applications; for the full collection of I/O
+routines, see :ref:`routines.io`.
+
+
+******************************************************************************
+Reading text and CSV_ files
+******************************************************************************
+
+.. _CSV: https://en.wikipedia.org/wiki/Comma-separated_values
+
+With no missing values
+==============================================================================
+
+Use :func:`numpy.loadtxt`.
+
+With missing values
+==============================================================================
+
+Use :func:`numpy.genfromtxt`.
+
+:func:`numpy.genfromtxt` will either
+
+ - return a :ref:`masked array<maskedarray.generic>`
+ **masking out missing values** (if ``usemask=True``), or
+
+ - **fill in the missing value** with the value specified in
+ ``filling_values`` (default is ``np.nan`` for float, -1 for int).
+
+With non-whitespace delimiters
+------------------------------------------------------------------------------
+::
+
+ >>> print(open("csv.txt").read()) # doctest: +SKIP
+ 1, 2, 3
+ 4,, 6
+ 7, 8, 9
+
+
+Masked-array output
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+::
+
+ >>> np.genfromtxt("csv.txt", delimiter=",", usemask=True) # doctest: +SKIP
+ masked_array(
+ data=[[1.0, 2.0, 3.0],
+ [4.0, --, 6.0],
+ [7.0, 8.0, 9.0]],
+ mask=[[False, False, False],
+ [False, True, False],
+ [False, False, False]],
+ fill_value=1e+20)
+
+Array output
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+::
+
+ >>> np.genfromtxt("csv.txt", delimiter=",") # doctest: +SKIP
+ array([[ 1., 2., 3.],
+ [ 4., nan, 6.],
+ [ 7., 8., 9.]])
+
+Array output, specified fill-in value
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+::
+
+ >>> np.genfromtxt("csv.txt", delimiter=",", dtype=np.int8, filling_values=99) # doctest: +SKIP
+ array([[ 1, 2, 3],
+ [ 4, 99, 6],
+ [ 7, 8, 9]], dtype=int8)
+
+Whitespace-delimited
+-------------------------------------------------------------------------------
+
+:func:`numpy.genfromtxt` can also parse whitespace-delimited data files
+that have missing values if
+
+* **Each field has a fixed width**: Use the width as the `delimiter` argument.
+ ::
+
+ # File with width=4. The data does not have to be justified (for example,
+ # the 2 in row 1), the last column can be less than width (for example, the 6
+ # in row 2), and no delimiting character is required (for instance 8888 and 9
+ # in row 3)
+
+ >>> f = open("fixedwidth.txt").read() # doctest: +SKIP
+ >>> print(f) # doctest: +SKIP
+ 1 2 3
+ 44 6
+ 7 88889
+
+ # Showing spaces as ^
+ >>> print(f.replace(" ","^")) # doctest: +SKIP
+ 1^^^2^^^^^^3
+ 44^^^^^^6
+ 7^^^88889
+
+ >>> np.genfromtxt("fixedwidth.txt", delimiter=4) # doctest: +SKIP
+ array([[1.000e+00, 2.000e+00, 3.000e+00],
+ [4.400e+01, nan, 6.000e+00],
+ [7.000e+00, 8.888e+03, 9.000e+00]])
+
+* **A special value (e.g. "x") indicates a missing field**: Use it as the
+ `missing_values` argument.
+ ::
+
+ >>> print(open("nan.txt").read()) # doctest: +SKIP
+ 1 2 3
+ 44 x 6
+ 7 8888 9
+
+ >>> np.genfromtxt("nan.txt", missing_values="x") # doctest: +SKIP
+ array([[1.000e+00, 2.000e+00, 3.000e+00],
+ [4.400e+01, nan, 6.000e+00],
+ [7.000e+00, 8.888e+03, 9.000e+00]])
+
+* **You want to skip the rows with missing values**: Set
+ `invalid_raise=False`.
+ ::
+
+ >>> print(open("skip.txt").read()) # doctest: +SKIP
+ 1 2 3
+ 44 6
+ 7 888 9
+
+ >>> np.genfromtxt("skip.txt", invalid_raise=False) # doctest: +SKIP
+ __main__:1: ConversionWarning: Some errors were detected !
+ Line #2 (got 2 columns instead of 3)
+ array([[ 1., 2., 3.],
+ [ 7., 888., 9.]])
+
+
+* **The delimiter whitespace character is different from the whitespace that
+ indicates missing data**. For instance, if columns are delimited by ``\t``,
+ then missing data will be recognized if it consists of one
+ or more spaces.
+ ::
+
+ >>> f = open("tabs.txt").read() # doctest: +SKIP
+ >>> print(f) # doctest: +SKIP
+ 1 2 3
+ 44 6
+ 7 888 9
+
+ # Tabs vs. spaces
+ >>> print(f.replace("\t","^")) # doctest: +SKIP
+ 1^2^3
+ 44^ ^6
+ 7^888^9
+
+ >>> np.genfromtxt("tabs.txt", delimiter="\t", missing_values=" +") # doctest: +SKIP
+ array([[ 1., 2., 3.],
+ [ 44., nan, 6.],
+ [ 7., 888., 9.]])
+
+******************************************************************************
+Read a file in .npy or .npz format
+******************************************************************************
+
+Choices:
+
+ - Use :func:`numpy.load`. It can read files generated by any of
+ :func:`numpy.save`, :func:`numpy.savez`, or :func:`numpy.savez_compressed`.
+
+ - Use memory mapping. See `numpy.lib.format.open_memmap`.
+
+******************************************************************************
+Write to a file to be read back by NumPy
+******************************************************************************
+
+Binary
+===============================================================================
+
+Use
+:func:`numpy.save`, or to store multiple arrays :func:`numpy.savez`
+or :func:`numpy.savez_compressed`.
+
+For :ref:`security and portability <how-to-io-pickle-file>`, set
+``allow_pickle=False`` unless the dtype contains Python objects, which
+requires pickling.
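+A minimal round-trip sketch (the file name and temporary directory are
+invented for illustration):

```python
import os
import tempfile
import numpy as np

a = np.arange(6).reshape(2, 3)

# .npy preserves dtype, shape, and endianness; no pickling needed
path = os.path.join(tempfile.mkdtemp(), "a.npy")
np.save(path, a, allow_pickle=False)

b = np.load(path, allow_pickle=False)
print((a == b).all(), a.dtype == b.dtype)
```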
+
+Masked arrays :any:`can't currently be saved <MaskedArray.tofile>`,
+nor can other arbitrary array subclasses.
+
+Human-readable
+==============================================================================
+
+:func:`numpy.save` and :func:`numpy.savez` create binary files. To **write a
+human-readable file**, use :func:`numpy.savetxt`. The array can only be 1- or
+2-dimensional, and there's no ``savetxtz`` for multiple files.
+
+Large arrays
+==============================================================================
+
+See :ref:`how-to-io-large-arrays`.
+
+******************************************************************************
+Read an arbitrarily formatted binary file ("binary blob")
+******************************************************************************
+
+Use a :doc:`structured array <basics.rec>`.
+
+**Example:**
+
+The ``.wav`` file header is a 44-byte block preceding ``data_size`` bytes of the
+actual sound data::
+
+ chunk_id "RIFF"
+ chunk_size 4-byte unsigned little-endian integer
+ format "WAVE"
+ fmt_id "fmt "
+ fmt_size 4-byte unsigned little-endian integer
+ audio_fmt 2-byte unsigned little-endian integer
+ num_channels 2-byte unsigned little-endian integer
+ sample_rate 4-byte unsigned little-endian integer
+ byte_rate 4-byte unsigned little-endian integer
+ block_align 2-byte unsigned little-endian integer
+ bits_per_sample 2-byte unsigned little-endian integer
+ data_id "data"
+ data_size 4-byte unsigned little-endian integer
+
+The ``.wav`` file header as a NumPy structured dtype::
+
+ wav_header_dtype = np.dtype([
+ ("chunk_id", (bytes, 4)), # flexible-sized scalar type, item size 4
+ ("chunk_size", "<u4"), # little-endian unsigned 32-bit integer
+ ("format", "S4"), # 4-byte string, alternate spelling of (bytes, 4)
+ ("fmt_id", "S4"),
+ ("fmt_size", "<u4"),
+ ("audio_fmt", "<u2"), #
+ ("num_channels", "<u2"), # .. more of the same ...
+ ("sample_rate", "<u4"), #
+ ("byte_rate", "<u4"),
+ ("block_align", "<u2"),
+ ("bits_per_sample", "<u2"),
+ ("data_id", "S4"),
+ ("data_size", "<u4"),
+ #
+ # the sound data itself cannot be represented here:
+ # it does not have a fixed size
+ ])
+
+ header = np.fromfile(f, dtype=wav_header_dtype, count=1)[0]
+
+This ``.wav`` example is for illustration; to read a ``.wav`` file in real
+life, use Python's built-in module :mod:`wave`.
+
+(Adapted from Pauli Virtanen, :ref:`advanced_numpy`, licensed
+under `CC BY 4.0 <https://creativecommons.org/licenses/by/4.0/>`_.)
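+As a runnable variation on the sketch above (the header bytes here are
+synthesized with Python's :mod:`struct` module purely for illustration), the
+same structured dtype can parse an in-memory header with ``np.frombuffer``:

```python
import struct
import numpy as np

wav_header_dtype = np.dtype([
    ("chunk_id", "S4"), ("chunk_size", "<u4"), ("format", "S4"),
    ("fmt_id", "S4"), ("fmt_size", "<u4"), ("audio_fmt", "<u2"),
    ("num_channels", "<u2"), ("sample_rate", "<u4"), ("byte_rate", "<u4"),
    ("block_align", "<u2"), ("bits_per_sample", "<u2"),
    ("data_id", "S4"), ("data_size", "<u4"),
])

# Build a synthetic 44-byte header: 1 second of mono 16-bit 44100 Hz audio
raw = struct.pack("<4sI4s4sIHHIIHH4sI",
                  b"RIFF", 36 + 88200, b"WAVE", b"fmt ", 16, 1, 1,
                  44100, 88200, 2, 16, b"data", 88200)

header = np.frombuffer(raw, dtype=wav_header_dtype, count=1)[0]
print(header["chunk_id"], int(header["sample_rate"]))
```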
+
+.. _how-to-io-large-arrays:
+
+******************************************************************************
+Write or read large arrays
+******************************************************************************
+
+**Arrays too large to fit in memory** can be treated like ordinary in-memory
+arrays using memory mapping.
+
+- Raw array data written with :func:`numpy.ndarray.tofile` or
+ :func:`numpy.ndarray.tobytes` can be read with :func:`numpy.memmap`::
+
+ array = numpy.memmap("mydata/myarray.arr", mode="r", dtype=np.int16, shape=(1024, 1024))
+
+- Files output by :func:`numpy.save` (that is, using the numpy format) can be read
+ using :func:`numpy.load` with the ``mmap_mode`` keyword argument::
+
+ large_array[some_slice] = np.load("path/to/small_array", mmap_mode="r")
+
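+A minimal runnable sketch of the first pattern (the file and directory
+names are invented for illustration):

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "myarray.arr")

# Write raw (headerless) data to disk...
np.arange(12, dtype=np.int16).tofile(path)

# ...and map it back; dtype and shape must be supplied by the caller,
# since a raw file carries no metadata.
m = np.memmap(path, mode="r", dtype=np.int16, shape=(3, 4))
print(m[2, 3])   # 11
```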
+Memory mapping lacks features like data chunking and compression; more
+full-featured formats and libraries usable with NumPy include:
+
+* **HDF5**: `h5py <https://www.h5py.org/>`_ or `PyTables <https://www.pytables.org/>`_.
+* **Zarr**: `here <https://zarr.readthedocs.io/en/stable/tutorial.html#reading-and-writing-data>`_.
+* **NetCDF**: :class:`scipy.io.netcdf_file`.
+
+For tradeoffs among memmap, Zarr, and HDF5, see
+`pythonspeed.com <https://pythonspeed.com/articles/mmap-vs-zarr-hdf5/>`_.
+
+******************************************************************************
+Write files for reading by other (non-NumPy) tools
+******************************************************************************
+
+Formats for **exchanging data** with other tools include HDF5, Zarr, and
+NetCDF (see :ref:`how-to-io-large-arrays`).
+
+******************************************************************************
+Write or read a JSON file
+******************************************************************************
+
+NumPy arrays are **not** directly
+`JSON serializable <https://github.com/numpy/numpy/issues/12481>`_.
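One common workaround, sketched below, is to convert to plain Python containers with ``ndarray.tolist`` before serializing (note that dtype information is lost in the process):

```python
import json

import numpy as np

a = np.array([[1, 2], [3, 4]])

# tolist() produces nested Python lists of Python scalars, which json accepts.
text = json.dumps(a.tolist())
restored = np.array(json.loads(text))
```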
+
+
+.. _how-to-io-pickle-file:
+
+******************************************************************************
+Save/restore using a pickle file
+******************************************************************************
+
+Avoid when possible; :doc:`pickles <python:library/pickle>` are not secure
+against erroneous or maliciously constructed data.
+
+Use :func:`numpy.save` and :func:`numpy.load`. Set ``allow_pickle=False``,
+unless the array dtype includes Python objects, in which case pickling is
+required.
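A short sketch of the safe pattern described above, using an in-memory buffer in place of a file:

```python
import io

import numpy as np

a = np.arange(5)

buf = io.BytesIO()
np.save(buf, a)  # .npy format; numeric dtypes need no pickling
buf.seek(0)

# allow_pickle=False makes load refuse any embedded pickled objects.
b = np.load(buf, allow_pickle=False)
```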
+
+******************************************************************************
+Convert from a pandas DataFrame to a NumPy array
+******************************************************************************
+
+See :meth:`pandas.DataFrame.to_numpy`.
+
+******************************************************************************
+Save/restore using `~numpy.ndarray.tofile` and `~numpy.fromfile`
+******************************************************************************
+
+In general, prefer :func:`numpy.save` and :func:`numpy.load`.
+
+:func:`numpy.ndarray.tofile` and :func:`numpy.fromfile` lose information on
+endianness and precision and so are unsuitable for anything but scratch
+storage.
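The pitfall is easy to reproduce: ``fromfile`` defaults to ``float64``, so reading raw ``float32`` data back without repeating the dtype silently misinterprets the bytes (the path here is illustrative):

```python
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.mkdtemp(), "scratch.bin")
a = np.array([1.0, 2.0, 3.0], dtype=np.float32)
a.tofile(path)  # 12 raw bytes; no dtype or shape is recorded

wrong = np.fromfile(path)                    # parsed as float64: wrong values
right = np.fromfile(path, dtype=np.float32)  # dtype must be supplied by hand
```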
+
diff --git a/doc/source/user/howtos_index.rst b/doc/source/user/howtos_index.rst
index c052286b9..89a6f54e7 100644
--- a/doc/source/user/howtos_index.rst
+++ b/doc/source/user/howtos_index.rst
@@ -11,4 +11,5 @@ the package, see the :ref:`API reference <reference>`.
.. toctree::
:maxdepth: 1
- ionumpy
+ how-to-how-to
+ how-to-io
diff --git a/doc/source/user/images/np_indexing.png b/doc/source/user/images/np_indexing.png
index 4303ec35b..863b2d46f 100644
--- a/doc/source/user/images/np_indexing.png
+++ b/doc/source/user/images/np_indexing.png
Binary files differ
diff --git a/doc/source/user/index.rst b/doc/source/user/index.rst
index 4e6a29d9f..28297d9ea 100644
--- a/doc/source/user/index.rst
+++ b/doc/source/user/index.rst
@@ -3,18 +3,17 @@
.. _user:
################
-NumPy User Guide
+NumPy user guide
################
-This guide is intended as an introductory overview of NumPy and
-explains how to install and make use of the most important features of
-NumPy. For detailed reference documentation of the functions and
-classes contained in the package, see the :ref:`reference`.
+This guide is an overview and explains the important features;
+details are found in :ref:`reference`.
.. toctree::
:maxdepth: 1
- setting-up
+ whatisnumpy
+ Installation <https://numpy.org/install/>
quickstart
absolute_beginners
basics
@@ -24,3 +23,23 @@ classes contained in the package, see the :ref:`reference`.
c-info
tutorials_index
howtos_index
+
+
+.. Links to these files are placed directly in the top-level html
+ (doc/source/_templates/indexcontent.html, which appears for the URLs
+ numpy.org/devdocs and numpy.org/doc/XX) and are not in any toctree, so
+ we include them here to avoid a "WARNING: document isn't included in any
+ toctree" message
+
+.. toctree::
+ :hidden:
+
+ explanations_index
+ ../f2py/index
+ ../glossary
+ ../dev/underthehood
+ ../docs/index
+ ../bugs
+ ../release
+ ../doc_conventions
+ ../license
diff --git a/doc/source/user/install.rst b/doc/source/user/install.rst
index 52586f3d7..e05cee2f1 100644
--- a/doc/source/user/install.rst
+++ b/doc/source/user/install.rst
@@ -1,10 +1,7 @@
-****************
-Installing NumPy
-****************
-
-In most use cases the best way to install NumPy on your system is by using a
-pre-built package for your operating system. Please see
-https://scipy.org/install.html for links to available options.
-
-For instructions on building for source package, see
-:doc:`building`. This information is useful mainly for advanced users.
+:orphan:
+
+****************
+Installing NumPy
+****************
+
+See `Installing NumPy <https://numpy.org/install/>`_. \ No newline at end of file
diff --git a/doc/source/user/ionumpy.rst b/doc/source/user/ionumpy.rst
deleted file mode 100644
index a31720322..000000000
--- a/doc/source/user/ionumpy.rst
+++ /dev/null
@@ -1,20 +0,0 @@
-================================================
-How to read and write data using NumPy
-================================================
-
-.. currentmodule:: numpy
-
-.. testsetup::
-
- import numpy as np
- np.random.seed(1)
-
-**Objectives**
-
-- Writing NumPy arrays to files
-- Reading NumPy arrays from files
-- Dealing with encoding and dtype issues
-
-**Content**
-
-To be completed.
diff --git a/doc/source/user/misc.rst b/doc/source/user/misc.rst
index c10aea486..031ce4efa 100644
--- a/doc/source/user/misc.rst
+++ b/doc/source/user/misc.rst
@@ -2,4 +2,224 @@
Miscellaneous
*************
-.. automodule:: numpy.doc.misc
+IEEE 754 Floating Point Special Values
+--------------------------------------
+
+Special values defined in NumPy: ``nan``, ``inf``.
+
+NaNs can be used as a poor man's mask (if you don't care what the
+original value was).
+
+Note: you cannot use equality to test for NaNs, because a NaN compares
+unequal to everything, including itself. For example::
+
+ >>> myarr = np.array([1., 0., np.nan, 3.])
+ >>> np.nonzero(myarr == np.nan)
+ (array([], dtype=int64),)
+ >>> np.nan == np.nan # is always False! Use special numpy functions instead.
+ False
+ >>> myarr[myarr == np.nan] = 0. # doesn't work
+ >>> myarr
+    array([ 1.,  0., nan,  3.])
+    >>> myarr[np.isnan(myarr)] = 0. # use this instead
+ >>> myarr
+ array([ 1., 0., 0., 3.])
+
+Other related special value functions: ::
+
+ isinf(): True if value is inf
+ isfinite(): True if not nan or inf
+ nan_to_num(): Map nan to 0, inf to max float, -inf to min float
+
+The following correspond to the usual functions except that nans are
+excluded from the results::
+
+ nansum()
+ nanmax()
+ nanmin()
+ nanargmax()
+ nanargmin()
+
+ >>> x = np.arange(10.)
+ >>> x[3] = np.nan
+ >>> x.sum()
+ nan
+ >>> np.nansum(x)
+ 42.0
+
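For instance, ``nan_to_num`` cleans all non-finite entries in one call:

```python
import numpy as np

x = np.array([1.0, np.nan, np.inf, -np.inf])

# nan -> 0.0, inf -> largest float, -inf -> most negative float
cleaned = np.nan_to_num(x)
```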
+How numpy handles numerical exceptions
+--------------------------------------
+
+The default is to ``'warn'`` for ``invalid``, ``divide``, and ``overflow``
+and ``'ignore'`` for ``underflow``. But this can be changed, and it can be
+set individually for different kinds of exceptions. The different behaviors
+are:
+
+ - 'ignore' : Take no action when the exception occurs.
+ - 'warn' : Print a `RuntimeWarning` (via the Python `warnings` module).
+ - 'raise' : Raise a `FloatingPointError`.
+ - 'call' : Call a function specified using the `seterrcall` function.
+ - 'print' : Print a warning directly to ``stdout``.
+ - 'log' : Record error in a Log object specified by `seterrcall`.
+
+These behaviors can be set for all kinds of errors or specific ones:
+
+ - all : apply to all numeric exceptions
+ - invalid : when NaNs are generated
+ - divide : divide by zero (for integers as well!)
+ - overflow : floating point overflows
+ - underflow : floating point underflows
+
+Note that integer divide-by-zero is handled by the same machinery.
+These behaviors are set on a per-thread basis.
+
+Examples
+--------
+
+::
+
+ >>> oldsettings = np.seterr(all='warn')
+ >>> np.zeros(5,dtype=np.float32)/0.
+ invalid value encountered in divide
+ >>> j = np.seterr(under='ignore')
+  >>> np.array([1.e-100])**10
+  array([0.])
+ >>> j = np.seterr(invalid='raise')
+ >>> np.sqrt(np.array([-1.]))
+ FloatingPointError: invalid value encountered in sqrt
+ >>> def errorhandler(errstr, errflag):
+ ... print("saw stupid error!")
+ >>> np.seterrcall(errorhandler)
+  <function errorhandler at 0x...>
+ >>> j = np.seterr(all='call')
+ >>> np.zeros(5, dtype=np.int32)/0
+  saw stupid error!
+  array([nan, nan, nan, nan, nan])
+ >>> j = np.seterr(**oldsettings) # restore previous
+ ... # error-handling settings
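A scoped alternative, not shown above, is the ``np.errstate`` context manager, which applies settings only inside a ``with`` block and restores the previous ones automatically:

```python
import numpy as np

# Inside the block, invalid operations raise instead of warning.
with np.errstate(invalid="raise"):
    try:
        np.sqrt(np.array([-1.0]))
    except FloatingPointError as exc:
        message = str(exc)

# Outside the block the previous settings are back in effect,
# so the same operation would only warn and return nan.
```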
+
+Interfacing to C
+----------------
+This is only a survey of the choices, with little detail on how each works.
+
+1) Bare metal, wrap your own C-code manually.
+
+ - Plusses:
+
+ - Efficient
+ - No dependencies on other tools
+
+ - Minuses:
+
+ - Lots of learning overhead:
+
+ - need to learn basics of Python C API
+ - need to learn basics of numpy C API
+ - need to learn how to handle reference counting and love it.
+
+ - Reference counting often difficult to get right.
+
+ - getting it wrong leads to memory leaks, and worse, segfaults
+
+ - API will change for Python 3.0!
+
+2) Cython
+
+ - Plusses:
+
+    - avoid learning C APIs
+    - no dealing with reference counting
+    - can code in pseudo-Python and generate C code
+    - can also interface to existing C code
+    - should shield you from changes to the Python C API
+    - has become the de facto standard within the scientific Python community
+ - fast indexing support for arrays
+
+ - Minuses:
+
+ - Can write code in non-standard form which may become obsolete
+ - Not as flexible as manual wrapping
+
+3) ctypes
+
+ - Plusses:
+
+    - part of the Python standard library
+    - good for interfacing to existing shared libraries, particularly
+      Windows DLLs
+    - avoids API/reference counting issues
+    - good NumPy support: arrays have all these in their ctypes
+      attribute: ::
+
+ a.ctypes.data a.ctypes.get_strides
+ a.ctypes.data_as a.ctypes.shape
+ a.ctypes.get_as_parameter a.ctypes.shape_as
+ a.ctypes.get_data a.ctypes.strides
+ a.ctypes.get_shape a.ctypes.strides_as
+
+ - Minuses:
+
+ - can't use for writing code to be turned into C extensions, only a wrapper
+ tool.
+
+4) SWIG (automatic wrapper generator)
+
+ - Plusses:
+
+ - around a long time
+ - multiple scripting language support
+ - C++ support
+ - Good for wrapping large (many functions) existing C libraries
+
+ - Minuses:
+
+ - generates lots of code between Python and the C code
+ - can cause performance problems that are nearly impossible to optimize
+ out
+ - interface files can be hard to write
+ - doesn't necessarily avoid reference counting issues or needing to know
+ API's
+
+5) scipy.weave
+
+ - Plusses:
+
+ - can turn many numpy expressions into C code
+ - dynamic compiling and loading of generated C code
+ - can embed pure C code in Python module and have weave extract, generate
+ interfaces and compile, etc.
+
+ - Minuses:
+
+    - Future very uncertain: it's the only part of SciPy not ported to Python 3
+ and is effectively deprecated in favor of Cython.
+
+6) Psyco
+
+ - Plusses:
+
+ - Turns pure python into efficient machine code through jit-like
+ optimizations
+ - very fast when it optimizes well
+
+ - Minuses:
+
+    - Only on Intel (Windows?)
+    - Doesn't do much for NumPy?
+
+Interfacing to Fortran
+----------------------
+The clear choice to wrap Fortran code is
+`f2py <https://docs.scipy.org/doc/numpy/f2py/>`_.
+
+Pyfort is an older alternative that is no longer supported.
+Fwrap is a newer project that looked promising but is no longer being
+developed.
+
+Interfacing to C++
+------------------
+ 1) Cython
+ 2) CXX
+ 3) Boost.python
+ 4) SWIG
+ 5) SIP (used mainly in PyQT)
+
+
diff --git a/doc/source/user/numpy-for-matlab-users.rst b/doc/source/user/numpy-for-matlab-users.rst
index 602192ecd..ed0be82a0 100644
--- a/doc/source/user/numpy-for-matlab-users.rst
+++ b/doc/source/user/numpy-for-matlab-users.rst
@@ -1,18 +1,15 @@
.. _numpy-for-matlab-users:
======================
-NumPy for Matlab users
+NumPy for MATLAB users
======================
Introduction
============
-MATLAB® and NumPy/SciPy have a lot in common. But there are many
-differences. NumPy and SciPy were created to do numerical and scientific
-computing in the most natural way with Python, not to be MATLAB® clones.
-This page is intended to be a place to collect wisdom about the
-differences, mostly for the purpose of helping proficient MATLAB® users
-become proficient NumPy and SciPy users.
+MATLAB® and NumPy have a lot in common, but NumPy was created to work with
+Python, not to be a MATLAB clone. This guide will help MATLAB users get started
+with NumPy.
.. raw:: html
@@ -20,234 +17,184 @@ become proficient NumPy and SciPy users.
table.docutils td { border: solid 1px #ccc; }
</style>
-Some Key Differences
+Some key differences
====================
.. list-table::
-
- * - In MATLAB®, the basic data type is a multidimensional array of
- double precision floating point numbers. Most expressions take such
- arrays and return such arrays. Operations on the 2-D instances of
- these arrays are designed to act more or less like matrix operations
- in linear algebra.
- - In NumPy the basic type is a multidimensional ``array``. Operations
- on these arrays in all dimensionalities including 2D are element-wise
- operations. One needs to use specific functions for linear algebra
- (though for matrix multiplication, one can use the ``@`` operator
- in python 3.5 and above).
-
- * - MATLAB® uses 1 (one) based indexing. The initial element of a
- sequence is found using a(1).
+ :class: docutils
+
+ * - In MATLAB, the basic type, even for scalars, is a
+ multidimensional array. Array assignments in MATLAB are stored as
+ 2D arrays of double precision floating point numbers, unless you
+ specify the number of dimensions and type. Operations on the 2D
+ instances of these arrays are modeled on matrix operations in
+ linear algebra.
+
+ - In NumPy, the basic type is a multidimensional ``array``. Array
+ assignments in NumPy are usually stored as :ref:`n-dimensional arrays<arrays>` with the
+ minimum type required to hold the objects in sequence, unless you
+ specify the number of dimensions and type. NumPy performs
+ operations element-by-element, so multiplying 2D arrays with
+ ``*`` is not a matrix multiplication -- it's an
+ element-by-element multiplication. (The ``@`` operator, available
+ since Python 3.5, can be used for conventional matrix
+ multiplication.)
+
+ * - MATLAB numbers indices from 1; ``a(1)`` is the first element.
:ref:`See note INDEXING <numpy-for-matlab-users.notes>`
- - Python uses 0 (zero) based indexing. The initial element of a
- sequence is found using a[0].
-
- * - MATLAB®'s scripting language was created for doing linear algebra.
- The syntax for basic matrix operations is nice and clean, but the API
- for adding GUIs and making full-fledged applications is more or less
- an afterthought.
- - NumPy is based on Python, which was designed from the outset to be
- an excellent general-purpose programming language. While Matlab's
- syntax for some array manipulations is more compact than
- NumPy's, NumPy (by virtue of being an add-on to Python) can do many
- things that Matlab just cannot, for instance dealing properly with
- stacks of matrices.
-
- * - In MATLAB®, arrays have pass-by-value semantics, with a lazy
- copy-on-write scheme to prevent actually creating copies until they
- are actually needed. Slice operations copy parts of the array.
- - In NumPy arrays have pass-by-reference semantics. Slice operations
- are views into an array.
-
-
-'array' or 'matrix'? Which should I use?
-========================================
-
-Historically, NumPy has provided a special matrix type, `np.matrix`, which
-is a subclass of ndarray which makes binary operations linear algebra
-operations. You may see it used in some existing code instead of `np.array`.
-So, which one to use?
-
-Short answer
-------------
-
-**Use arrays**.
-
-- They are the standard vector/matrix/tensor type of numpy. Many numpy
- functions return arrays, not matrices.
-- There is a clear distinction between element-wise operations and
- linear algebra operations.
-- You can have standard vectors or row/column vectors if you like.
-
-Until Python 3.5 the only disadvantage of using the array type was that you
-had to use ``dot`` instead of ``*`` to multiply (reduce) two tensors
-(scalar product, matrix vector multiplication etc.). Since Python 3.5 you
-can use the matrix multiplication ``@`` operator.
-
-Given the above, we intend to deprecate ``matrix`` eventually.
-
-Long answer
------------
-
-NumPy contains both an ``array`` class and a ``matrix`` class. The
-``array`` class is intended to be a general-purpose n-dimensional array
-for many kinds of numerical computing, while ``matrix`` is intended to
-facilitate linear algebra computations specifically. In practice there
-are only a handful of key differences between the two.
-
-- Operators ``*`` and ``@``, functions ``dot()``, and ``multiply()``:
-
- - For ``array``, **``*`` means element-wise multiplication**, while
- **``@`` means matrix multiplication**; they have associated functions
- ``multiply()`` and ``dot()``. (Before python 3.5, ``@`` did not exist
- and one had to use ``dot()`` for matrix multiplication).
- - For ``matrix``, **``*`` means matrix multiplication**, and for
- element-wise multiplication one has to use the ``multiply()`` function.
-
-- Handling of vectors (one-dimensional arrays)
-
- - For ``array``, the **vector shapes 1xN, Nx1, and N are all different
- things**. Operations like ``A[:,1]`` return a one-dimensional array of
- shape N, not a two-dimensional array of shape Nx1. Transpose on a
- one-dimensional ``array`` does nothing.
- - For ``matrix``, **one-dimensional arrays are always upconverted to 1xN
- or Nx1 matrices** (row or column vectors). ``A[:,1]`` returns a
- two-dimensional matrix of shape Nx1.
-
-- Handling of higher-dimensional arrays (ndim > 2)
-
- - ``array`` objects **can have number of dimensions > 2**;
- - ``matrix`` objects **always have exactly two dimensions**.
-
-- Convenience attributes
-
- - ``array`` **has a .T attribute**, which returns the transpose of
- the data.
- - ``matrix`` **also has .H, .I, and .A attributes**, which return
- the conjugate transpose, inverse, and ``asarray()`` of the matrix,
- respectively.
-
-- Convenience constructor
-
- - The ``array`` constructor **takes (nested) Python sequences as
- initializers**. As in, ``array([[1,2,3],[4,5,6]])``.
- - The ``matrix`` constructor additionally **takes a convenient
- string initializer**. As in ``matrix("[1 2 3; 4 5 6]")``.
-
-There are pros and cons to using both:
-
-- ``array``
-
- - ``:)`` Element-wise multiplication is easy: ``A*B``.
- - ``:(`` You have to remember that matrix multiplication has its own
- operator, ``@``.
- - ``:)`` You can treat one-dimensional arrays as *either* row or column
- vectors. ``A @ v`` treats ``v`` as a column vector, while
- ``v @ A`` treats ``v`` as a row vector. This can save you having to
- type a lot of transposes.
- - ``:)`` ``array`` is the "default" NumPy type, so it gets the most
- testing, and is the type most likely to be returned by 3rd party
- code that uses NumPy.
- - ``:)`` Is quite at home handling data of any number of dimensions.
- - ``:)`` Closer in semantics to tensor algebra, if you are familiar
- with that.
- - ``:)`` *All* operations (``*``, ``/``, ``+``, ``-`` etc.) are
- element-wise.
- - ``:(`` Sparse matrices from ``scipy.sparse`` do not interact as well
- with arrays.
-
-- ``matrix``
-
- - ``:\\`` Behavior is more like that of MATLAB® matrices.
- - ``<:(`` Maximum of two-dimensional. To hold three-dimensional data you
- need ``array`` or perhaps a Python list of ``matrix``.
- - ``<:(`` Minimum of two-dimensional. You cannot have vectors. They must be
- cast as single-column or single-row matrices.
- - ``<:(`` Since ``array`` is the default in NumPy, some functions may
- return an ``array`` even if you give them a ``matrix`` as an
- argument. This shouldn't happen with NumPy functions (if it does
- it's a bug), but 3rd party code based on NumPy may not honor type
- preservation like NumPy does.
- - ``:)`` ``A*B`` is matrix multiplication, so it looks just like you write
- it in linear algebra (For Python >= 3.5 plain arrays have the same
- convenience with the ``@`` operator).
- - ``<:(`` Element-wise multiplication requires calling a function,
- ``multiply(A,B)``.
- - ``<:(`` The use of operator overloading is a bit illogical: ``*``
- does not work element-wise but ``/`` does.
- - Interaction with ``scipy.sparse`` is a bit cleaner.
+ - NumPy, like Python, numbers indices from 0; ``a[0]`` is the first
+ element.
-The ``array`` is thus much more advisable to use. Indeed, we intend to
-deprecate ``matrix`` eventually.
-
-Table of Rough MATLAB-NumPy Equivalents
+ * - MATLAB's scripting language was created for linear algebra so the
+ syntax for some array manipulations is more compact than
+ NumPy's. On the other hand, the API for adding GUIs and creating
+ full-fledged applications is more or less an afterthought.
+ - NumPy is based on Python, a
+ general-purpose language. The advantage to NumPy
+ is access to Python libraries including: `SciPy
+ <https://www.scipy.org/>`_, `Matplotlib <https://matplotlib.org/>`_,
+ `Pandas <https://pandas.pydata.org/>`_, `OpenCV <https://opencv.org/>`_,
+ and more. In addition, Python is often `embedded as a scripting language
+ <https://en.wikipedia.org/wiki/List_of_Python_software#Embedded_as_a_scripting_language>`_
+ in other software, allowing NumPy to be used there too.
+
+ * - MATLAB array slicing uses pass-by-value semantics, with a lazy
+ copy-on-write scheme to prevent creating copies until they are
+ needed. Slicing operations copy parts of the array.
+     - NumPy array slicing uses pass-by-reference, which does not copy
+       the data. Slicing operations are views into an array.
+
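The view semantics in the last row can be seen directly:

```python
import numpy as np

x = np.arange(6)
y = x[1:4]   # a view: no data is copied
y[0] = 99    # writes through to x
```

``y.base is x`` confirms that ``y`` borrows ``x``'s memory; use ``x[1:4].copy()`` when an independent array is needed.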
+
+Rough equivalents
=======================================
-The table below gives rough equivalents for some common MATLAB®
-expressions. **These are not exact equivalents**, but rather should be
-taken as hints to get you going in the right direction. For more detail
-read the built-in documentation on the NumPy functions.
+The table below gives rough equivalents for some common MATLAB
+expressions. These are similar expressions, not equivalents. For
+details, see the :ref:`documentation<reference>`.
In the table below, it is assumed that you have executed the following
commands in Python:
::
- from numpy import *
- import scipy.linalg
+ import numpy as np
+ from scipy import io, integrate, linalg, signal
+ from scipy.sparse.linalg import eigs
Also assume below that if the Notes talk about "matrix" that the
arguments are two-dimensional entities.
-General Purpose Equivalents
+General purpose equivalents
---------------------------
.. list-table::
:header-rows: 1
- * - **MATLAB**
- - **numpy**
- - **Notes**
+ * - MATLAB
+ - NumPy
+ - Notes
* - ``help func``
- - ``info(func)`` or ``help(func)`` or ``func?`` (in Ipython)
+ - ``info(func)`` or ``help(func)`` or ``func?`` (in IPython)
- get help on the function *func*
* - ``which func``
- - `see note HELP <numpy-for-matlab-users.notes>`__
+ - :ref:`see note HELP <numpy-for-matlab-users.notes>`
- find out where *func* is defined
* - ``type func``
- - ``source(func)`` or ``func??`` (in Ipython)
+ - ``np.source(func)`` or ``func??`` (in IPython)
- print source for *func* (if not a native function)
+ * - ``% comment``
+ - ``# comment``
+ - comment a line of code with the text ``comment``
+
+ * - ::
+
+ for i=1:3
+ fprintf('%i\n',i)
+ end
+
+ - ::
+
+ for i in range(1, 4):
+ print(i)
+
+ - use a for-loop to print the numbers 1, 2, and 3 using :py:class:`range <range>`
+
* - ``a && b``
- ``a and b``
- - short-circuiting logical AND operator (Python native operator);
+ - short-circuiting logical AND operator (:ref:`Python native operator <python:boolean>`);
scalar arguments only
* - ``a || b``
- ``a or b``
- - short-circuiting logical OR operator (Python native operator);
+ - short-circuiting logical OR operator (:ref:`Python native operator <python:boolean>`);
scalar arguments only
+ * - .. code:: matlab
+
+ >> 4 == 4
+ ans = 1
+ >> 4 == 5
+ ans = 0
+
+ - ::
+
+ >>> 4 == 4
+ True
+ >>> 4 == 5
+ False
+
+ - The :ref:`boolean objects <python:bltin-boolean-values>`
+ in Python are ``True`` and ``False``, as opposed to MATLAB
+ logical types of ``1`` and ``0``.
+
+ * - .. code:: matlab
+
+ a=4
+ if a==4
+ fprintf('a = 4\n')
+ elseif a==5
+ fprintf('a = 5\n')
+ end
+
+ - ::
+
+ a = 4
+ if a == 4:
+ print('a = 4')
+ elif a == 5:
+ print('a = 5')
+
+ - create an if-else statement to check if ``a`` is 4 or 5 and print result
+
* - ``1*i``, ``1*j``, ``1i``, ``1j``
- ``1j``
- complex numbers
* - ``eps``
- - ``np.spacing(1)``
- - Distance between 1 and the nearest floating point number.
+ - ``np.finfo(float).eps`` or ``np.spacing(1)``
+ - Upper bound to relative error due to rounding in 64-bit floating point
+ arithmetic.
+
+ * - ``load data.mat``
+ - ``io.loadmat('data.mat')``
+ - Load MATLAB variables saved to the file ``data.mat``. (Note: When saving arrays to
+ ``data.mat`` in MATLAB/Octave, use a recent binary format. :func:`scipy.io.loadmat`
+ will create a dictionary with the saved arrays and further information.)
* - ``ode45``
- - ``scipy.integrate.solve_ivp(f)``
+ - ``integrate.solve_ivp(f)``
- integrate an ODE with Runge-Kutta 4,5
* - ``ode15s``
- - ``scipy.integrate.solve_ivp(f, method='BDF')``
+ - ``integrate.solve_ivp(f, method='BDF')``
- integrate an ODE with BDF method
-Linear Algebra Equivalents
+
+Linear algebra equivalents
--------------------------
.. list-table::
@@ -258,63 +205,63 @@ Linear Algebra Equivalents
- Notes
* - ``ndims(a)``
- - ``ndim(a)`` or ``a.ndim``
- - get the number of dimensions of an array
+ - ``np.ndim(a)`` or ``a.ndim``
+ - number of dimensions of array ``a``
* - ``numel(a)``
- - ``size(a)`` or ``a.size``
- - get the number of elements of an array
+ - ``np.size(a)`` or ``a.size``
+ - number of elements of array ``a``
* - ``size(a)``
- - ``shape(a)`` or ``a.shape``
- - get the "size" of the matrix
+ - ``np.shape(a)`` or ``a.shape``
+ - "size" of array ``a``
* - ``size(a,n)``
- ``a.shape[n-1]``
- get the number of elements of the n-th dimension of array ``a``. (Note
- that MATLAB® uses 1 based indexing while Python uses 0 based indexing,
+ that MATLAB uses 1 based indexing while Python uses 0 based indexing,
See note :ref:`INDEXING <numpy-for-matlab-users.notes>`)
* - ``[ 1 2 3; 4 5 6 ]``
- - ``array([[1.,2.,3.], [4.,5.,6.]])``
- - 2x3 matrix literal
+ - ``np.array([[1. ,2. ,3.], [4. ,5. ,6.]])``
+ - define a 2x3 2D array
* - ``[ a b; c d ]``
- - ``block([[a,b], [c,d]])``
+ - ``np.block([[a, b], [c, d]])``
- construct a matrix from blocks ``a``, ``b``, ``c``, and ``d``
* - ``a(end)``
- ``a[-1]``
- - access last element in the 1xn matrix ``a``
+ - access last element in MATLAB vector (1xn or nx1) or 1D NumPy array
+ ``a`` (length n)
* - ``a(2,5)``
- - ``a[1,4]``
- - access element in second row, fifth column
+ - ``a[1, 4]``
+ - access element in second row, fifth column in 2D array ``a``
* - ``a(2,:)``
- - ``a[1]`` or ``a[1,:]``
- - entire second row of ``a``
+ - ``a[1]`` or ``a[1, :]``
+ - entire second row of 2D array ``a``
* - ``a(1:5,:)``
- - ``a[0:5]`` or ``a[:5]`` or ``a[0:5,:]``
- - the first five rows of ``a``
+ - ``a[0:5]`` or ``a[:5]`` or ``a[0:5, :]``
+ - first 5 rows of 2D array ``a``
* - ``a(end-4:end,:)``
- ``a[-5:]``
- - the last five rows of ``a``
+ - last 5 rows of 2D array ``a``
* - ``a(1:3,5:9)``
- - ``a[0:3][:,4:9]``
- - rows one to three and columns five to nine of ``a``. This gives
- read-only access.
+ - ``a[0:3, 4:9]``
+ - The first through third rows and fifth through ninth columns of a 2D array, ``a``.
* - ``a([2,4,5],[1,3])``
- - ``a[ix_([1,3,4],[0,2])]``
+ - ``a[np.ix_([1, 3, 4], [0, 2])]``
- rows 2,4 and 5 and columns 1 and 3. This allows the matrix to be
modified, and doesn't require a regular slice.
* - ``a(3:2:21,:)``
- - ``a[ 2:21:2,:]``
+ - ``a[2:21:2,:]``
- every other row of ``a``, starting with the third and going to the
twenty-first
@@ -323,11 +270,11 @@ Linear Algebra Equivalents
- every other row of ``a``, starting with the first
* - ``a(end:-1:1,:)`` or ``flipud(a)``
- - ``a[ ::-1,:]``
+ - ``a[::-1,:]``
- ``a`` with rows in reverse order
* - ``a([1:end 1],:)``
- - ``a[r_[:len(a),0]]``
+ - ``a[np.r_[:len(a),0]]``
- ``a`` with copy of the first row appended to the end
* - ``a.'``
@@ -354,30 +301,30 @@ Linear Algebra Equivalents
- ``a**3``
- element-wise exponentiation
- * - ``(a>0.5)``
- - ``(a>0.5)``
- - matrix whose i,jth element is (a_ij > 0.5). The Matlab result is an
- array of 0s and 1s. The NumPy result is an array of the boolean
+ * - ``(a > 0.5)``
+ - ``(a > 0.5)``
+ - matrix whose i,jth element is (a_ij > 0.5). The MATLAB result is an
+ array of logical values 0 and 1. The NumPy result is an array of the boolean
values ``False`` and ``True``.
- * - ``find(a>0.5)``
- - ``nonzero(a>0.5)``
+ * - ``find(a > 0.5)``
+ - ``np.nonzero(a > 0.5)``
- find the indices where (``a`` > 0.5)
- * - ``a(:,find(v>0.5))``
- - ``a[:,nonzero(v>0.5)[0]]``
+ * - ``a(:,find(v > 0.5))``
+ - ``a[:,np.nonzero(v > 0.5)[0]]``
- extract the columms of ``a`` where vector v > 0.5
* - ``a(:,find(v>0.5))``
- - ``a[:,v.T>0.5]``
+ - ``a[:, v.T > 0.5]``
- extract the columms of ``a`` where column vector v > 0.5
* - ``a(a<0.5)=0``
- - ``a[a<0.5]=0``
+ - ``a[a < 0.5]=0``
- ``a`` with elements less than 0.5 zeroed out
* - ``a .* (a>0.5)``
- - ``a * (a>0.5)``
+ - ``a * (a > 0.5)``
- ``a`` with elements less than 0.5 zeroed out
* - ``a(:) = 3``
@@ -386,74 +333,86 @@ Linear Algebra Equivalents
* - ``y=x``
- ``y = x.copy()``
- - numpy assigns by reference
+ - NumPy assigns by reference
* - ``y=x(2,:)``
- - ``y = x[1,:].copy()``
- - numpy slices are by reference
+ - ``y = x[1, :].copy()``
+ - NumPy slices are by reference
* - ``y=x(:)``
- ``y = x.flatten()``
- turn array into vector (note that this forces a copy). To obtain the
- same data ordering as in Matlab, use ``x.flatten('F')``.
+ same data ordering as in MATLAB, use ``x.flatten('F')``.
* - ``1:10``
- - ``arange(1.,11.)`` or ``r_[1.:11.]`` or ``r_[1:10:10j]``
+ - ``np.arange(1., 11.)`` or ``np.r_[1.:11.]`` or ``np.r_[1:10:10j]``
- create an increasing vector (see note :ref:`RANGES
<numpy-for-matlab-users.notes>`)
* - ``0:9``
- - ``arange(10.)`` or ``r_[:10.]`` or ``r_[:9:10j]``
+ - ``np.arange(10.)`` or ``np.r_[:10.]`` or ``np.r_[:9:10j]``
- create an increasing vector (see note :ref:`RANGES
<numpy-for-matlab-users.notes>`)
* - ``[1:10]'``
- - ``arange(1.,11.)[:, newaxis]``
+ - ``np.arange(1.,11.)[:, np.newaxis]``
- create a column vector
* - ``zeros(3,4)``
- - ``zeros((3,4))``
+ - ``np.zeros((3, 4))``
- 3x4 two-dimensional array full of 64-bit floating point zeros
* - ``zeros(3,4,5)``
- - ``zeros((3,4,5))``
+ - ``np.zeros((3, 4, 5))``
- 3x4x5 three-dimensional array full of 64-bit floating point zeros
* - ``ones(3,4)``
- - ``ones((3,4))``
+ - ``np.ones((3, 4))``
- 3x4 two-dimensional array full of 64-bit floating point ones
* - ``eye(3)``
- - ``eye(3)``
+ - ``np.eye(3)``
- 3x3 identity matrix
* - ``diag(a)``
- - ``diag(a)``
- - vector of diagonal elements of ``a``
+ - ``np.diag(a)``
+ - returns a vector of the diagonal elements of 2D array, ``a``
+
+ * - ``diag(v,0)``
+ - ``np.diag(v, 0)``
+ - returns a square diagonal matrix whose nonzero values are the elements of
+ vector, ``v``
- * - ``diag(a,0)``
- - ``diag(a,0)``
- - square diagonal matrix whose nonzero values are the elements of
- ``a``
+ * - .. code:: matlab
+
+ rng(42,'twister')
+ rand(3,4)
- * - ``rand(3,4)``
- - ``random.rand(3,4)`` or ``random.random_sample((3, 4))``
- - random 3x4 matrix
+  - ::
+
+      from numpy.random import default_rng
+      rng = default_rng(42)
+      rng.random((3, 4))
+
+    or older version: ``np.random.rand(3, 4)``
+
+ - generate a random 3x4 array with default random number generator and
+ seed = 42
* - ``linspace(1,3,4)``
- - ``linspace(1,3,4)``
+ - ``np.linspace(1,3,4)``
- 4 equally spaced samples between 1 and 3, inclusive
* - ``[x,y]=meshgrid(0:8,0:5)``
- - ``mgrid[0:9.,0:6.]`` or ``meshgrid(r_[0:9.],r_[0:6.]``
+  - ``np.mgrid[0:9.,0:6.]`` or ``np.meshgrid(np.r_[0:9.], np.r_[0:6.])``
- two 2D arrays: one of x values, the other of y values
* -
- - ``ogrid[0:9.,0:6.]`` or ``ix_(r_[0:9.],r_[0:6.]``
+  - ``np.ogrid[0:9.,0:6.]`` or ``np.ix_(np.r_[0:9.], np.r_[0:6.])``
- the best way to eval functions on a grid
* - ``[x,y]=meshgrid([1,2,4],[2,4,5])``
- - ``meshgrid([1,2,4],[2,4,5])``
+ - ``np.meshgrid([1,2,4],[2,4,5])``
-
* -
@@ -461,37 +420,38 @@ Linear Algebra Equivalents
- the best way to eval functions on a grid
* - ``repmat(a, m, n)``
- - ``tile(a, (m, n))``
+ - ``np.tile(a, (m, n))``
- create m by n copies of ``a``
* - ``[a b]``
- - ``concatenate((a,b),1)`` or ``hstack((a,b))`` or
- ``column_stack((a,b))`` or ``c_[a,b]``
+ - ``np.concatenate((a,b),1)`` or ``np.hstack((a,b))`` or
+ ``np.column_stack((a,b))`` or ``np.c_[a,b]``
- concatenate columns of ``a`` and ``b``
* - ``[a; b]``
- - ``concatenate((a,b))`` or ``vstack((a,b))`` or ``r_[a,b]``
+ - ``np.concatenate((a,b))`` or ``np.vstack((a,b))`` or ``np.r_[a,b]``
- concatenate rows of ``a`` and ``b``
* - ``max(max(a))``
- - ``a.max()``
- - maximum element of ``a`` (with ndims(a)<=2 for matlab)
+ - ``a.max()`` or ``np.nanmax(a)``
+   - maximum element of ``a`` (with ndims(a)<=2 for MATLAB; if there are
+     NaNs, ``nanmax`` will ignore them and return the largest non-NaN value)
* - ``max(a)``
- ``a.max(0)``
- - maximum element of each column of matrix ``a``
+ - maximum element of each column of array ``a``
* - ``max(a,[],2)``
- ``a.max(1)``
- - maximum element of each row of matrix ``a``
+ - maximum element of each row of array ``a``
* - ``max(a,b)``
- - ``maximum(a, b)``
+ - ``np.maximum(a, b)``
- compares ``a`` and ``b`` element-wise, and returns the maximum value
from each pair
* - ``norm(v)``
- - ``sqrt(v @ v)`` or ``np.linalg.norm(v)``
+ - ``np.sqrt(v @ v)`` or ``np.linalg.norm(v)``
- L2 norm of vector ``v``
* - ``a & b``
@@ -500,7 +460,7 @@ Linear Algebra Equivalents
LOGICOPS <numpy-for-matlab-users.notes>`
* - ``a | b``
- - ``logical_or(a,b)``
+ - ``np.logical_or(a,b)``
- element-by-element OR operator (NumPy ufunc) :ref:`See note LOGICOPS
<numpy-for-matlab-users.notes>`
@@ -514,90 +474,99 @@ Linear Algebra Equivalents
* - ``inv(a)``
- ``linalg.inv(a)``
- - inverse of square matrix ``a``
+ - inverse of square 2D array ``a``
* - ``pinv(a)``
- ``linalg.pinv(a)``
- - pseudo-inverse of matrix ``a``
+ - pseudo-inverse of 2D array ``a``
* - ``rank(a)``
- ``linalg.matrix_rank(a)``
- - matrix rank of a 2D array / matrix ``a``
+ - matrix rank of a 2D array ``a``
* - ``a\b``
- - ``linalg.solve(a,b)`` if ``a`` is square; ``linalg.lstsq(a,b)``
+ - ``linalg.solve(a, b)`` if ``a`` is square; ``linalg.lstsq(a, b)``
otherwise
- solution of a x = b for x
* - ``b/a``
- - Solve a.T x.T = b.T instead
+ - Solve ``a.T x.T = b.T`` instead
- solution of x a = b for x
* - ``[U,S,V]=svd(a)``
- ``U, S, Vh = linalg.svd(a), V = Vh.T``
- singular value decomposition of ``a``
- * - ``chol(a)``
- - ``linalg.cholesky(a).T``
- - cholesky factorization of a matrix (``chol(a)`` in matlab returns an
- upper triangular matrix, but ``linalg.cholesky(a)`` returns a lower
- triangular matrix)
+ * - ``c=chol(a)`` where ``a==c'*c``
+   - ``c = np.linalg.cholesky(a)`` where ``a == c@c.T``
+   - Cholesky factorization of a 2D array (``chol(a)`` in MATLAB returns an
+     upper triangular 2D array, but :func:`~numpy.linalg.cholesky` returns a lower
+     triangular 2D array)
* - ``[V,D]=eig(a)``
- ``D,V = linalg.eig(a)``
- - eigenvalues and eigenvectors of ``a``
+ - eigenvalues :math:`\lambda` and eigenvectors :math:`\bar{v}` of ``a``,
+ where :math:`\lambda\bar{v}=\mathbf{a}\bar{v}`
* - ``[V,D]=eig(a,b)``
- - ``D,V = scipy.linalg.eig(a,b)``
- - eigenvalues and eigenvectors of ``a``, ``b``
+ - ``D,V = linalg.eig(a, b)``
+ - eigenvalues :math:`\lambda` and eigenvectors :math:`\bar{v}` of
+ ``a``, ``b``
+ where :math:`\lambda\mathbf{b}\bar{v}=\mathbf{a}\bar{v}`
- * - ``[V,D]=eigs(a,k)``
- -
- - find the ``k`` largest eigenvalues and eigenvectors of ``a``
+ * - ``[V,D]=eigs(a,3)``
+ - ``D,V = eigs(a, k = 3)``
+ - find the ``k=3`` largest eigenvalues and eigenvectors of 2D array, ``a``
* - ``[Q,R,P]=qr(a,0)``
- - ``Q,R = scipy.linalg.qr(a)``
+ - ``Q,R = linalg.qr(a)``
- QR decomposition
- * - ``[L,U,P]=lu(a)``
- - ``L,U = scipy.linalg.lu(a)`` or ``LU,P=scipy.linalg.lu_factor(a)``
- - LU decomposition (note: P(Matlab) == transpose(P(numpy)) )
+ * - ``[L,U,P]=lu(a)`` where ``a==P'*L*U``
+ - ``P,L,U = linalg.lu(a)`` where ``a == P@L@U``
+ - LU decomposition (note: P(MATLAB) == transpose(P(NumPy)))
* - ``conjgrad``
- - ``scipy.sparse.linalg.cg``
+ - ``cg``
- Conjugate gradients solver
* - ``fft(a)``
- - ``fft(a)``
+   - ``np.fft.fft(a)``
- Fourier transform of ``a``
* - ``ifft(a)``
- - ``ifft(a)``
+   - ``np.fft.ifft(a)``
- inverse Fourier transform of ``a``
* - ``sort(a)``
- - ``sort(a)`` or ``a.sort()``
- - sort the matrix
+   - ``np.sort(a, axis=0)`` or ``a.sort(axis=0)``
+ - sort each column of a 2D array, ``a``
- * - ``[b,I] = sortrows(a,i)``
- - ``I = argsort(a[:,i]), b=a[I,:]``
- - sort the rows of the matrix
+ * - ``sort(a, 2)``
+   - ``np.sort(a, axis=1)`` or ``a.sort(axis=1)``
+   - sort each row of 2D array, ``a``
- * - ``regress(y,X)``
- - ``linalg.lstsq(X,y)``
- - multilinear regression
+ * - ``[b,I]=sortrows(a,1)``
+ - ``I = np.argsort(a[:, 0]); b = a[I,:]``
+ - save the array ``a`` as array ``b`` with rows sorted by the first column
+
+ * - ``x = Z\y``
+ - ``x = linalg.lstsq(Z, y)``
+ - perform a linear regression of the form :math:`\mathbf{Zx}=\mathbf{y}`
* - ``decimate(x, q)``
- - ``scipy.signal.resample(x, len(x)/q)``
+   - ``signal.resample(x, int(np.ceil(len(x)/q)))``
- downsample with low-pass filtering
* - ``unique(a)``
- - ``unique(a)``
- -
+ - ``np.unique(a)``
+ - a vector of unique values in array ``a``
* - ``squeeze(a)``
- ``a.squeeze()``
- -
+ - remove singleton dimensions of array ``a``. Note that MATLAB will always
+ return arrays of 2D or higher while NumPy will return arrays of 0D or
+ higher
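The Generator-based random numbers recommended in the table above can be
sketched as follows; note that, unlike MATLAB's ``rand(3,4)``, the new API
takes the shape as a single tuple (values here are illustrative):

```python
import numpy as np
from numpy.random import default_rng

# New-style Generator API: seeded, shape passed as a tuple
rng = default_rng(42)
a = rng.random((3, 4))

# Legacy API takes separate dimension arguments instead
b = np.random.rand(3, 4)
```

Both calls return uniform samples in ``[0, 1)``; the Generator API is the
recommended interface for new code.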
.. _numpy-for-matlab-users.notes:
@@ -605,15 +574,15 @@ Notes
=====
\ **Submatrix**: Assignment to a submatrix can be done with lists of
-indexes using the ``ix_`` command. E.g., for 2d array ``a``, one might
-do: ``ind=[1,3]; a[np.ix_(ind,ind)]+=100``.
+indices using the ``ix_`` command. E.g., for 2D array ``a``, one might
+do: ``ind=[1, 3]; a[np.ix_(ind, ind)] += 100``.
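As a minimal runnable sketch of the submatrix assignment above (array values
chosen arbitrarily):

```python
import numpy as np

a = np.arange(16).reshape(4, 4)
ind = [1, 3]
# ix_ builds an open mesh: rows 1 and 3 crossed with columns 1 and 3
a[np.ix_(ind, ind)] += 100
# Only the four intersection elements are incremented
print(a[1, 1], a[3, 3])  # 105 115
```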
\ **HELP**: There is no direct equivalent of MATLAB's ``which`` command,
-but the commands ``help`` and ``source`` will usually list the filename
+but the commands :func:`help` and :func:`numpy.source` will usually list the filename
where the function is located. Python also has an ``inspect`` module (do
``import inspect``) which provides a ``getfile`` that often works.
-\ **INDEXING**: MATLAB® uses one based indexing, so the initial element
+\ **INDEXING**: MATLAB uses one based indexing, so the initial element
of a sequence has index 1. Python uses zero based indexing, so the
initial element of a sequence has index 0. Confusion and flamewars arise
because each has advantages and disadvantages. One based indexing is
@@ -623,55 +592,176 @@ indexing <https://groups.google.com/group/comp.lang.python/msg/1bf4d925dfbf368?q
See also `a text by prof.dr. Edsger W.
Dijkstra <https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html>`__.
-\ **RANGES**: In MATLAB®, ``0:5`` can be used as both a range literal
+\ **RANGES**: In MATLAB, ``0:5`` can be used as both a range literal
and a 'slice' index (inside parentheses); however, in Python, constructs
like ``0:5`` can *only* be used as a slice index (inside square
brackets). Thus the somewhat quirky ``r_`` object was created to allow
-numpy to have a similarly terse range construction mechanism. Note that
+NumPy to have a similarly terse range construction mechanism. Note that
``r_`` is not called like a function or a constructor, but rather
*indexed* using square brackets, which allows the use of Python's slice
syntax in the arguments.
-\ **LOGICOPS**: & or \| in NumPy is bitwise AND/OR, while in Matlab &
-and \| are logical AND/OR. The difference should be clear to anyone with
-significant programming experience. The two can appear to work the same,
-but there are important differences. If you would have used Matlab's &
-or \| operators, you should use the NumPy ufuncs
-logical\_and/logical\_or. The notable differences between Matlab's and
-NumPy's & and \| operators are:
+\ **LOGICOPS**: ``&`` or ``|`` in NumPy is bitwise AND/OR, while in MATLAB &
+and ``|`` are logical AND/OR. The two can appear to work the same,
+but there are important differences. If you would have used MATLAB's ``&``
+or ``|`` operators, you should use the NumPy ufuncs
+``logical_and``/``logical_or``. The notable differences between MATLAB's and
+NumPy's ``&`` and ``|`` operators are:
- Non-logical {0,1} inputs: NumPy's output is the bitwise AND of the
- inputs. Matlab treats any non-zero value as 1 and returns the logical
- AND. For example (3 & 4) in NumPy is 0, while in Matlab both 3 and 4
- are considered logical true and (3 & 4) returns 1.
+ inputs. MATLAB treats any non-zero value as 1 and returns the logical
+  AND. For example ``(3 & 4)`` in NumPy is ``0``, while in MATLAB both
+  ``3`` and ``4`` are considered logically true and ``(3 & 4)`` returns ``1``.
- Precedence: NumPy's & operator is higher precedence than logical
- operators like < and >; Matlab's is the reverse.
+ operators like ``<`` and ``>``; MATLAB's is the reverse.
If you know you have boolean arguments, you can get away with using
-NumPy's bitwise operators, but be careful with parentheses, like this: z
-= (x > 1) & (x < 2). The absence of NumPy operator forms of logical\_and
-and logical\_or is an unfortunate consequence of Python's design.
+NumPy's bitwise operators, but be careful with parentheses, like this: ``z
+= (x > 1) & (x < 2)``. The absence of NumPy operator forms of ``logical_and``
+and ``logical_or`` is an unfortunate consequence of Python's design.
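The two differences above can be seen in a short sketch:

```python
import numpy as np

# Bitwise & on non-{0,1} integers differs from MATLAB's logical &
print(3 & 4)                 # 0 (bitwise AND of the bit patterns)
print(np.logical_and(3, 4))  # True (MATLAB-like logical AND)

# & binds tighter than comparisons, so parenthesize each condition
x = np.array([0.5, 1.5, 2.5])
z = (x > 1) & (x < 2)
print(z)  # [False  True False]
```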
-**RESHAPE and LINEAR INDEXING**: Matlab always allows multi-dimensional
+**RESHAPE and LINEAR INDEXING**: MATLAB always allows multi-dimensional
arrays to be accessed using scalar or linear indices, NumPy does not.
-Linear indices are common in Matlab programs, e.g. find() on a matrix
+Linear indices are common in MATLAB programs, e.g. ``find()`` on a matrix
returns them; NumPy's closest equivalent, ``np.flatnonzero``, returns flat
indices directly. When converting
-Matlab code it might be necessary to first reshape a matrix to a linear
+MATLAB code it might be necessary to first reshape a matrix to a linear
sequence, perform some indexing operations and then reshape back. As
reshape (usually) produces views onto the same storage, it should be
possible to do this fairly efficiently. Note that the scan order used by
-reshape in NumPy defaults to the 'C' order, whereas Matlab uses the
+reshape in NumPy defaults to the 'C' order, whereas MATLAB uses the
Fortran order. If you are simply converting to a linear sequence and
-back this doesn't matter. But if you are converting reshapes from Matlab
-code which relies on the scan order, then this Matlab code: z =
-reshape(x,3,4); should become z = x.reshape(3,4,order='F').copy() in
+back this doesn't matter. But if you are converting reshapes from MATLAB
+code which relies on the scan order, then this MATLAB code: ``z =
+reshape(x,3,4);`` should become ``z = x.reshape(3,4,order='F').copy()`` in
NumPy.
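A small sketch of the scan-order difference (shapes chosen arbitrarily):

```python
import numpy as np

x = np.arange(12).reshape(4, 3)
z_c = x.reshape(3, 4)                    # NumPy default: C (row-major) scan order
z_f = x.reshape(3, 4, order='F').copy()  # matches MATLAB's reshape(x,3,4)
print(z_c[0])  # [0 1 2 3]
print(z_f[0])  # [0 9 7 5]
```

The ``.copy()`` matters when order='F' is used, since the Fortran-order view
cannot always share storage with the C-ordered original.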
-Customizing Your Environment
+'array' or 'matrix'? Which should I use?
+========================================
+
+Historically, NumPy has provided a special matrix type, `np.matrix`, which
+is a subclass of ndarray which makes binary operations linear algebra
+operations. You may see it used in some existing code instead of `np.array`.
+So, which one to use?
+
+Short answer
+------------
+
+**Use arrays**.
+
+- They support the multidimensional array algebra that MATLAB supports.
+- They are the standard vector/matrix/tensor type of NumPy. Many NumPy
+ functions return arrays, not matrices.
+- There is a clear distinction between element-wise operations and
+ linear algebra operations.
+- You can have standard vectors or row/column vectors if you like.
+
+Until Python 3.5 the only disadvantage of using the array type was that you
+had to use ``dot`` instead of ``*`` to multiply (reduce) two tensors
+(scalar product, matrix vector multiplication etc.). Since Python 3.5 you
+can use the matrix multiplication ``@`` operator.
+
+Given the above, we intend to deprecate ``matrix`` eventually.
+
+Long answer
+-----------
+
+NumPy contains both an ``array`` class and a ``matrix`` class. The
+``array`` class is intended to be a general-purpose n-dimensional array
+for many kinds of numerical computing, while ``matrix`` is intended to
+facilitate linear algebra computations specifically. In practice there
+are only a handful of key differences between the two.
+
+- Operators ``*`` and ``@``, functions ``dot()``, and ``multiply()``:
+
+ - For ``array``, **``*`` means element-wise multiplication**, while
+ **``@`` means matrix multiplication**; they have associated functions
+ ``multiply()`` and ``dot()``. (Before Python 3.5, ``@`` did not exist
+ and one had to use ``dot()`` for matrix multiplication).
+ - For ``matrix``, **``*`` means matrix multiplication**, and for
+ element-wise multiplication one has to use the ``multiply()`` function.
+
+- Handling of vectors (one-dimensional arrays)
+
+ - For ``array``, the **vector shapes 1xN, Nx1, and N are all different
+ things**. Operations like ``A[:,1]`` return a one-dimensional array of
+ shape N, not a two-dimensional array of shape Nx1. Transpose on a
+ one-dimensional ``array`` does nothing.
+ - For ``matrix``, **one-dimensional arrays are always upconverted to 1xN
+ or Nx1 matrices** (row or column vectors). ``A[:,1]`` returns a
+ two-dimensional matrix of shape Nx1.
+
+- Handling of higher-dimensional arrays (ndim > 2)
+
+ - ``array`` objects **can have number of dimensions > 2**;
+ - ``matrix`` objects **always have exactly two dimensions**.
+
+- Convenience attributes
+
+ - ``array`` **has a .T attribute**, which returns the transpose of
+ the data.
+ - ``matrix`` **also has .H, .I, and .A attributes**, which return
+ the conjugate transpose, inverse, and ``asarray()`` of the matrix,
+ respectively.
+
+- Convenience constructor
+
+ - The ``array`` constructor **takes (nested) Python sequences as
+ initializers**. As in, ``array([[1,2,3],[4,5,6]])``.
+ - The ``matrix`` constructor additionally **takes a convenient
+ string initializer**. As in ``matrix("[1 2 3; 4 5 6]")``.
+
+There are pros and cons to using both:
+
+- ``array``
+
+ - ``:)`` Element-wise multiplication is easy: ``A*B``.
+ - ``:(`` You have to remember that matrix multiplication has its own
+ operator, ``@``.
+ - ``:)`` You can treat one-dimensional arrays as *either* row or column
+ vectors. ``A @ v`` treats ``v`` as a column vector, while
+ ``v @ A`` treats ``v`` as a row vector. This can save you having to
+ type a lot of transposes.
+ - ``:)`` ``array`` is the "default" NumPy type, so it gets the most
+ testing, and is the type most likely to be returned by 3rd party
+ code that uses NumPy.
+ - ``:)`` Is quite at home handling data of any number of dimensions.
+ - ``:)`` Closer in semantics to tensor algebra, if you are familiar
+ with that.
+ - ``:)`` *All* operations (``*``, ``/``, ``+``, ``-`` etc.) are
+ element-wise.
+ - ``:(`` Sparse matrices from ``scipy.sparse`` do not interact as well
+ with arrays.
+
+- ``matrix``
+
+ - ``:\\`` Behavior is more like that of MATLAB matrices.
+ - ``<:(`` Maximum of two-dimensional. To hold three-dimensional data you
+ need ``array`` or perhaps a Python list of ``matrix``.
+ - ``<:(`` Minimum of two-dimensional. You cannot have vectors. They must be
+ cast as single-column or single-row matrices.
+ - ``<:(`` Since ``array`` is the default in NumPy, some functions may
+ return an ``array`` even if you give them a ``matrix`` as an
+ argument. This shouldn't happen with NumPy functions (if it does
+ it's a bug), but 3rd party code based on NumPy may not honor type
+ preservation like NumPy does.
+ - ``:)`` ``A*B`` is matrix multiplication, so it looks just like you write
+ it in linear algebra (For Python >= 3.5 plain arrays have the same
+ convenience with the ``@`` operator).
+ - ``<:(`` Element-wise multiplication requires calling a function,
+ ``multiply(A,B)``.
+ - ``<:(`` The use of operator overloading is a bit illogical: ``*``
+ does not work element-wise but ``/`` does.
+ - Interaction with ``scipy.sparse`` is a bit cleaner.
+
+The ``array`` is thus much more advisable to use. Indeed, we intend to
+deprecate ``matrix`` eventually.
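A minimal sketch of the ``array`` semantics discussed above (using plain
arrays only, since ``matrix`` is slated for deprecation):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
v = np.array([1, 1])

print(A * A)   # element-wise product: [[ 1  4] [ 9 16]]
print(A @ A)   # matrix product:       [[ 7 10] [15 22]]
# A 1-D array acts as a column or row vector depending on position:
print(A @ v)   # v treated as a column vector -> [3 7]
print(v @ A)   # v treated as a row vector    -> [4 6]
```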
+
+Customizing your environment
============================
-In MATLAB® the main tool available to you for customizing the
+In MATLAB the main tool available to you for customizing the
environment is to modify the search path with the locations of your
favorite functions. You can put such customizations into a startup
script that MATLAB will run on startup.
@@ -685,7 +775,7 @@ NumPy, or rather Python, has similar facilities.
interpreter is started, define the ``PYTHONSTARTUP`` environment
variable to contain the name of your startup script.
-Unlike MATLAB®, where anything on your path can be called immediately,
+Unlike MATLAB, where anything on your path can be called immediately,
with Python you need to first do an 'import' statement to make functions
in a particular file accessible.
@@ -696,26 +786,39 @@ this is just an example, not a statement of "best practices"):
# Make all numpy available via shorter 'np' prefix
import numpy as np
- # Make all matlib functions accessible at the top level via M.func()
- import numpy.matlib as M
- # Make some matlib functions accessible directly at the top level via, e.g. rand(3,3)
- from numpy.matlib import rand,zeros,ones,empty,eye
+ #
+ # Make the SciPy linear algebra functions available as linalg.func()
+ # e.g. linalg.lu, linalg.eig (for general l*B@u==A@u solution)
+ from scipy import linalg
+ #
# Define a Hermitian function
def hermitian(A, **kwargs):
- return np.transpose(A,**kwargs).conj()
- # Make some shortcuts for transpose,hermitian:
- # np.transpose(A) --> T(A)
+ return np.conj(A,**kwargs).T
+ # Make a shortcut for hermitian:
# hermitian(A) --> H(A)
- T = np.transpose
H = hermitian
+To use the deprecated `matrix` and other `matlib` functions:
+
+::
+
+ # Make all matlib functions accessible at the top level via M.func()
+ import numpy.matlib as M
+ # Make some matlib functions accessible directly at the top level via, e.g. rand(3,3)
+ from numpy.matlib import matrix,rand,zeros,ones,empty,eye
+
Links
=====
-See http://mathesaurus.sf.net/ for another MATLAB®/NumPy
-cross-reference.
+Another somewhat outdated MATLAB/NumPy cross-reference can be found at
+http://mathesaurus.sf.net/
-An extensive list of tools for scientific work with python can be
+An extensive list of tools for scientific work with Python can be
found in the `topical software page <https://scipy.org/topical-software.html>`__.
-MATLAB® and SimuLink® are registered trademarks of The MathWorks.
+See
+`List of Python software: scripting
+<https://en.wikipedia.org/wiki/List_of_Python_software#Embedded_as_a_scripting_language>`_
+for a list of software that uses Python as a scripting language.
+
+MATLAB® and Simulink® are registered trademarks of The MathWorks, Inc.
diff --git a/doc/source/user/quickstart.rst b/doc/source/user/quickstart.rst
index b1af81886..8fdc6ec36 100644
--- a/doc/source/user/quickstart.rst
+++ b/doc/source/user/quickstart.rst
@@ -1,5 +1,5 @@
===================
-Quickstart tutorial
+NumPy quickstart
===================
.. currentmodule:: numpy
@@ -12,26 +12,24 @@ Quickstart tutorial
Prerequisites
=============
-Before reading this tutorial you should know a bit of Python. If you
-would like to refresh your memory, take a look at the `Python
+You'll need to know a bit of Python. For a refresher, see the `Python
tutorial <https://docs.python.org/tutorial/>`__.
-If you wish to work the examples in this tutorial, you must also have
-some software installed on your computer. Please see
-https://scipy.org/install.html for instructions.
+To work the examples, you'll need ``matplotlib`` installed
+in addition to NumPy.
**Learner profile**
-This tutorial is intended as a quick overview of
-algebra and arrays in NumPy and want to understand how n-dimensional
+This is a quick overview of
+arrays in NumPy. It demonstrates how n-dimensional
(:math:`n>=2`) arrays are represented and can be manipulated. In particular, if
you don't know how to apply common functions to n-dimensional arrays (without
using for-loops), or if you want to understand axis and shape properties for
-n-dimensional arrays, this tutorial might be of help.
+n-dimensional arrays, this article might be of help.
**Learning Objectives**
-After this tutorial, you should be able to:
+After reading, you should be able to:
- Understand the difference between one-, two- and n-dimensional arrays in
NumPy;
@@ -361,7 +359,7 @@ existing array rather than create a new one.
>>> a += b # b is not automatically converted to integer type
Traceback (most recent call last):
...
- numpy.core._exceptions.UFuncTypeError: Cannot cast ufunc 'add' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'
+ numpy.core._exceptions._UFuncOutputCastingError: Cannot cast ufunc 'add' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'
When operating with arrays of different types, the type of the resulting
array corresponds to the more general or precise one (a behavior known
diff --git a/doc/source/user/setting-up.rst b/doc/source/user/setting-up.rst
deleted file mode 100644
index 7ca3a365c..000000000
--- a/doc/source/user/setting-up.rst
+++ /dev/null
@@ -1,10 +0,0 @@
-**********
-Setting up
-**********
-
-.. toctree::
- :maxdepth: 1
-
- whatisnumpy
- install
- troubleshooting-importerror
diff --git a/doc/source/user/theory.broadcasting.rst b/doc/source/user/theory.broadcasting.rst
index b37edeacc..a82d78e6c 100644
--- a/doc/source/user/theory.broadcasting.rst
+++ b/doc/source/user/theory.broadcasting.rst
@@ -69,7 +69,7 @@ numpy on Windows 2000 with one million element arrays.
*Figure 1*
*In the simplest example of broadcasting, the scalar ``b`` is
- stretched to become an array of with the same shape as ``a`` so the shapes
+  stretched to become an array of the same shape as ``a`` so the shapes
are compatible for element-by-element multiplication.*
diff --git a/doc/source/user/troubleshooting-importerror.rst b/doc/source/user/troubleshooting-importerror.rst
index 7d4846f77..1f99491a1 100644
--- a/doc/source/user/troubleshooting-importerror.rst
+++ b/doc/source/user/troubleshooting-importerror.rst
@@ -1,3 +1,11 @@
+:orphan:
+
+.. Reason for orphan: This page is referenced by the installation
+ instructions, which have moved from Sphinx to https://numpy.org/install.
+ All install links in Sphinx now point there, leaving no Sphinx references
+ to this page.
+
+
***************************
Troubleshooting ImportError
***************************
@@ -69,7 +77,7 @@ or conda.
Using Eclipse/PyDev with Anaconda/conda Python (or environments)
----------------------------------------------------------------
-Please see the
+Please see the
`Anaconda Documentation <https://docs.anaconda.com/anaconda/user-guide/tasks/integration/eclipse-pydev/>`_
on how to properly configure Eclipse/PyDev to use Anaconda Python with specific
conda environments.
diff --git a/doc/source/user/tutorial-ma.rst b/doc/source/user/tutorial-ma.rst
index c28353371..88bad3cbe 100644
--- a/doc/source/user/tutorial-ma.rst
+++ b/doc/source/user/tutorial-ma.rst
@@ -9,7 +9,8 @@ Tutorial: Masked Arrays
import numpy as np
np.random.seed(1)
-**Prerequisites**
+Prerequisites
+-------------
Before reading this tutorial, you should know a bit of Python. If you
would like to refresh your memory, take a look at the
@@ -18,13 +19,15 @@ would like to refresh your memory, take a look at the
If you want to be able to run the examples in this tutorial, you should also
have `matplotlib <https://matplotlib.org/>`_ installed on your computer.
-**Learner profile**
+Learner profile
+---------------
This tutorial is for people who have a basic understanding of NumPy and want to
understand how masked arrays and the :mod:`numpy.ma` module can be used in
practice.
-**Learning Objectives**
+Learning Objectives
+-------------------
After this tutorial, you should be able to:
@@ -33,7 +36,8 @@ After this tutorial, you should be able to:
- Decide when the use of masked arrays is appropriate in some of your
applications
-**What are masked arrays?**
+What are masked arrays?
+-----------------------
Consider the following problem. You have a dataset with missing or invalid
entries. If you're doing any kind of processing on this data, and want to
@@ -63,7 +67,8 @@ combination of:
- A ``fill_value``, a value that may be used to replace the invalid entries
in order to return a standard :class:`numpy.ndarray`.
-**When can they be useful?**
+When can they be useful?
+------------------------
There are a few situations where masked arrays can be more useful than just
eliminating the invalid entries of an array:
@@ -84,7 +89,8 @@ comes with a specific implementation of most :term:`NumPy universal functions
functions and operations on masked data. The output is then a masked array.
We'll see some examples of how this works in practice below.
-**Using masked arrays to see COVID-19 data**
+Using masked arrays to see COVID-19 data
+----------------------------------------
From `Kaggle <https://www.kaggle.com/atilamadai/covid19>`_ it is possible to
download a dataset with initial data about the COVID-19 outbreak in the
@@ -149,7 +155,8 @@ can read more about the :func:`numpy.genfromtxt` function from
the :func:`Reference Documentation <numpy.genfromtxt>` or from the
:doc:`Basic IO tutorial <basics.io.genfromtxt>`.
-**Exploring the data**
+Exploring the data
+------------------
First of all, we can plot the whole set of data we have and see what it looks
like. In order to get a readable plot, we select only a few of the dates to
@@ -194,7 +201,8 @@ the :func:`numpy.sum` function to sum all the selected rows (``axis=0``):
Something's wrong with this data - we are not supposed to have negative values
in a cumulative data set. What's going on?
-**Missing data**
+Missing data
+------------
Looking at the data, here's what we find: there is a period with
**missing data**:
@@ -308,7 +316,8 @@ Mainland China:
It's clear that masked arrays are the right solution here. We cannot represent
the missing data without mischaracterizing the evolution of the curve.
-**Fitting Data**
+Fitting Data
+------------
One possibility we can think of is to interpolate the missing data to estimate
the number of cases in late January. Observe that we can select the masked
@@ -367,7 +376,8 @@ after the beginning of the records:
plt.title("COVID-19 cumulative cases from Jan 21 to Feb 3 2020 - Mainland China\n"
"Cubic estimate for 7 days after start");
-**More reading**
+More reading
+------------
Topics not covered in this tutorial can be found in the documentation:
diff --git a/doc/source/user/tutorial-svd.rst b/doc/source/user/tutorial-svd.rst
index 086e0a6de..fd9e366e0 100644
--- a/doc/source/user/tutorial-svd.rst
+++ b/doc/source/user/tutorial-svd.rst
@@ -9,7 +9,8 @@ Tutorial: Linear algebra on n-dimensional arrays
import numpy as np
np.random.seed(1)
-**Prerequisites**
+Prerequisites
+-------------
Before reading this tutorial, you should know a bit of Python. If you
would like to refresh your memory, take a look at the
@@ -19,7 +20,8 @@ If you want to be able to run the examples in this tutorial, you should also
have `matplotlib <https://matplotlib.org/>`_ and `SciPy <https://scipy.org>`_
installed on your computer.
-**Learner profile**
+Learner profile
+---------------
This tutorial is for people who have a basic understanding of linear
algebra and arrays in NumPy and want to understand how n-dimensional
@@ -28,7 +30,8 @@ you don't know how to apply common functions to n-dimensional arrays (without
using for-loops), or if you want to understand axis and shape properties for
n-dimensional arrays, this tutorial might be of help.
-**Learning Objectives**
+Learning Objectives
+-------------------
After this tutorial, you should be able to:
@@ -38,7 +41,8 @@ After this tutorial, you should be able to:
arrays without using for-loops;
- Understand axis and shape properties for n-dimensional arrays.
-**Content**
+Content
+-------
In this tutorial, we will use a `matrix decomposition
<https://en.wikipedia.org/wiki/Matrix_decomposition>`_ from linear algebra, the
@@ -78,7 +82,8 @@ We can see the image using the `matplotlib.pyplot.imshow` function::
If you are executing the commands above in the IPython shell, it might be
necessary to use the command ``plt.show()`` to show the image window.
-**Shape, axis and array properties**
+Shape, axis and array properties
+--------------------------------
Note that, in linear algebra, the dimension of a vector refers to the number of
entries in an array. In NumPy, it instead defines the number of axes. For
@@ -162,7 +167,8 @@ syntax::
>>> green_array = img_array[:, :, 1]
>>> blue_array = img_array[:, :, 2]
-**Operations on an axis**
+Operations on an axis
+---------------------
It is possible to use methods from linear algebra to approximate an existing set
of data. Here, we will use the `SVD (Singular Value Decomposition)
@@ -290,7 +296,8 @@ diagonal and with the appropriate dimensions for multiplying: in our case,
Now, we want to check if the reconstructed ``U @ Sigma @ Vt`` is
close to the original ``img_gray`` matrix.
-**Approximation**
+Approximation
+-------------
The `linalg` module includes a ``norm`` function, which
computes the norm of a vector or matrix represented in a NumPy array. For
@@ -360,7 +367,8 @@ Now, you can go ahead and repeat this experiment with other values of `k`, and
each of your experiments should give you a slightly better (or worse) image
depending on the value you choose.
-**Applying to all colors**
+Applying to all colors
+----------------------
Now we want to do the same kind of operation, but to all three colors. Our
first instinct might be to repeat the same operation we did above to each color
@@ -411,7 +419,8 @@ matrices into the approximation. Now, note that
To build the final approximation matrix, we must understand how multiplication
across different axes works.
-**Products with n-dimensional arrays**
+Products with n-dimensional arrays
+----------------------------------
If you have worked before with only one- or two-dimensional arrays in NumPy,
you might use `numpy.dot` and `numpy.matmul` (or the ``@`` operator)
@@ -495,7 +504,8 @@ Even though the image is not as sharp, using a small number of ``k`` singular
values (compared to the original set of 768 values), we can recover many of the
distinguishing features from this image.
-**Final words**
+Final words
+-----------
Of course, this is not the best method to *approximate* an image.
However, there is, in fact, a result in linear algebra that says that the
@@ -504,7 +514,8 @@ terms of the norm of the difference. For more information, see *G. H. Golub and
C. F. Van Loan, Matrix Computations, Baltimore, MD, Johns Hopkins University
Press, 1985*.
-**Further reading**
+Further reading
+---------------
- :doc:`Python tutorial <python:tutorial/index>`
- :ref:`reference`
diff --git a/doc/source/user/tutorials_index.rst b/doc/source/user/tutorials_index.rst
index 5e9419f96..20e2c256c 100644
--- a/doc/source/user/tutorials_index.rst
+++ b/doc/source/user/tutorials_index.rst
@@ -11,10 +11,6 @@ classes contained in the package, see the :ref:`API reference <reference>`.
.. toctree::
:maxdepth: 1
- basics
- misc
- numpy-for-matlab-users
tutorial-svd
tutorial-ma
- building
- c-info
+
diff --git a/doc/source/user/whatisnumpy.rst b/doc/source/user/whatisnumpy.rst
index 8478a77c4..154f91c84 100644
--- a/doc/source/user/whatisnumpy.rst
+++ b/doc/source/user/whatisnumpy.rst
@@ -125,7 +125,7 @@ same shape, or a scalar and an array, or even two arrays of with
different shapes, provided that the smaller array is "expandable" to
the shape of the larger in such a way that the resulting broadcast is
unambiguous. For detailed "rules" of broadcasting see
-`numpy.doc.broadcasting`.
+`basics.broadcasting`.
Who Else Uses NumPy?
--------------------