-rw-r--r--   doc/neps/nep-0016-abstract-array.rst             2
-rw-r--r--   doc/neps/nep-0018-array-function-protocol.rst    2
-rw-r--r--   doc/neps/nep-0019-rng-policy.rst                 2
-rw-r--r--   doc/neps/nep-0026-missing-data-summary.rst       4
-rw-r--r--   doc/neps/nep-0027-zero-rank-arrarys.rst         28
-rw-r--r--   doc/release/1.14.4-notes.rst                     2
6 files changed, 20 insertions, 20 deletions
diff --git a/doc/neps/nep-0016-abstract-array.rst b/doc/neps/nep-0016-abstract-array.rst
index 86d164d8e..7551b11b9 100644
--- a/doc/neps/nep-0016-abstract-array.rst
+++ b/doc/neps/nep-0016-abstract-array.rst
@@ -266,7 +266,7 @@ array, then they'll get a segfault. Right now, in the same situation,
``asarray`` will instead invoke the object's ``__array__`` method, or
use the buffer interface to make a view, or pass through an array with
object dtype, or raise an error, or similar. Probably none of these
-outcomes are actually desireable in most cases, so maybe making it a
+outcomes are actually desirable in most cases, so maybe making it a
segfault instead would be OK? But it's dangerous given that we don't
know how common such code is. OTOH, if we were starting from scratch
then this would probably be the ideal solution.
diff --git a/doc/neps/nep-0018-array-function-protocol.rst b/doc/neps/nep-0018-array-function-protocol.rst
index 27a462239..fb9b838b5 100644
--- a/doc/neps/nep-0018-array-function-protocol.rst
+++ b/doc/neps/nep-0018-array-function-protocol.rst
@@ -642,7 +642,7 @@ layer, separating NumPy's high level API from default implementations on
The downsides are that this would require an explicit opt-in from all
existing code, e.g., ``import numpy.api as np``, and in the long term
-would result in the maintainence of two separate NumPy APIs. Also, many
+would result in the maintenance of two separate NumPy APIs. Also, many
functions from ``numpy`` itself are already overloaded (but
inadequately), so confusion about high vs. low level APIs in NumPy would
still persist.
diff --git a/doc/neps/nep-0019-rng-policy.rst b/doc/neps/nep-0019-rng-policy.rst
index 815f16f8d..aa5fdc653 100644
--- a/doc/neps/nep-0019-rng-policy.rst
+++ b/doc/neps/nep-0019-rng-policy.rst
@@ -296,7 +296,7 @@ satisfactory subset. At least some projects used a fairly broad selection of
the ``RandomState`` methods in unit tests.
Downstream project owners would have been forced to modify their code to
-accomodate the new PRNG subsystem. Some modifications might be simply
+accommodate the new PRNG subsystem. Some modifications might be simply
mechanical, but the bulk of the work would have been tedious churn for no
positive improvement to the downstream project, just avoiding being broken.
diff --git a/doc/neps/nep-0026-missing-data-summary.rst b/doc/neps/nep-0026-missing-data-summary.rst
index e99138cdd..78fe999df 100644
--- a/doc/neps/nep-0026-missing-data-summary.rst
+++ b/doc/neps/nep-0026-missing-data-summary.rst
@@ -669,7 +669,7 @@ NumPy could more easily be overtaken by another project.
In the case of the existing NA contribution at issue, how we resolve
this disagreement represents a decision about how NumPy's
-developers, contributers, and users should interact. If we create
+developers, contributors, and users should interact. If we create
a document describing a dispute resolution process, how do we
design it so that it doesn't introduce a large burden and excessive
uncertainty on developers that could prevent them from productively
@@ -677,7 +677,7 @@ contributing code?
If we go this route of writing up a decision process which includes
such a dispute resolution mechanism, I think the meat of it should
-be a roadmap that potential contributers and developers can follow
+be a roadmap that potential contributors and developers can follow
to gain influence over NumPy. NumPy development needs broad support
beyond code contributions, and tying influence in the project to
contributions seems to me like it would be a good way to encourage
diff --git a/doc/neps/nep-0027-zero-rank-arrarys.rst b/doc/neps/nep-0027-zero-rank-arrarys.rst
index d932bb609..430397235 100644
--- a/doc/neps/nep-0027-zero-rank-arrarys.rst
+++ b/doc/neps/nep-0027-zero-rank-arrarys.rst
@@ -51,7 +51,7 @@ However there are some important differences:
* Array scalars are immutable
* Array scalars have different python type for different data types
-
+
Motivation for Array Scalars
----------------------------
@@ -62,7 +62,7 @@ we will try to explain why it is necessary to have three different ways to
represent a number.
There were several numpy-discussion threads:
-
+
* `rank-0 arrays`_ in a 2002 mailing list thread.
* Thoughts about zero dimensional arrays vs Python scalars in a `2005 mailing list thread`_]
@@ -71,7 +71,7 @@ It has been suggested several times that NumPy just use rank-0 arrays to
represent scalar quantities in all case. Pros and cons of converting rank-0
arrays to scalars were summarized as follows:
-- Pros:
+- Pros:
- Some cases when Python expects an integer (the most
dramatic is when slicing and indexing a sequence:
@@ -94,15 +94,15 @@ arrays to scalars were summarized as follows:
files (though this could also be done by a special case
in the pickling code for arrays)
-- Cons:
+- Cons:
- It is difficult to write generic code because scalars
do not have the same methods and attributes as arrays.
(such as ``.type`` or ``.shape``). Also Python scalars have
- different numeric behavior as well.
+ different numeric behavior as well.
- - This results in a special-case checking that is not
- pleasant. Fundamentally it lets the user believe that
+ - This results in a special-case checking that is not
+ pleasant. Fundamentally it lets the user believe that
somehow multidimensional homoegeneous arrays
are something like Python lists (which except for
Object arrays they are not).
@@ -117,7 +117,7 @@ The Need for Zero-Rank Arrays
-----------------------------
Once the idea to use zero-rank arrays to represent scalars was rejected, it was
-natural to consider whether zero-rank arrays can be eliminated alltogether.
+natural to consider whether zero-rank arrays can be eliminated altogether.
However there are some important use cases where zero-rank arrays cannot be
replaced by array scalars. See also `A case for rank-0 arrays`_ from February
2006.
@@ -164,12 +164,12 @@ Alexander started a `Jan 2006 discussion`_ on scipy-dev
with the following proposal:
... it may be reasonable to allow ``a[...]``. This way
- ellipsis can be interpereted as any number of ``:`` s including zero.
+ ellipsis can be interpereted as any number of ``:`` s including zero.
Another subscript operation that makes sense for scalars would be
- ``a[...,newaxis]`` or even ``a[{newaxis, }* ..., {newaxis,}*]``, where
- ``{newaxis,}*`` stands for any number of comma-separated newaxis tokens.
+ ``a[...,newaxis]`` or even ``a[{newaxis, }* ..., {newaxis,}*]``, where
+ ``{newaxis,}*`` stands for any number of comma-separated newaxis tokens.
This will allow one to use ellipsis in generic code that would work on
- any numpy type.
+ any numpy type.
Francesc Altet supported the idea of ``[...]`` on zero-rank arrays and
`suggested`_ that ``[()]`` be supported as well.
@@ -204,7 +204,7 @@ remains on what should be the type of the result - zero rank ndarray or ``x.dtyp
1
Since most if not all numpy function automatically convert zero-rank arrays to scalars on return, there is no reason for
-``[...]`` and ``[()]`` operations to be different.
+``[...]`` and ``[()]`` operations to be different.
See SVN changeset 1864 (which became git commit `9024ff0`_) for
implementation of ``x[...]`` and ``x[()]`` returning numpy scalars.
@@ -234,7 +234,7 @@ Currently all indexing on zero-rank arrays is implemented in a special ``if (nd
that the changes do not affect any existing usage (except, the usage that
relies on exceptions). On the other hand part of motivation for these changes
was to make behavior of ndarrays more uniform and this should allow to
-eliminate ``if (nd == 0)`` checks alltogether.
+eliminate ``if (nd == 0)`` checks altogether.
Copyright
---------
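
As a side note for readers of the nep-0027 hunks above, here is a minimal, illustrative sketch of the zero-rank-array vs. array-scalar distinction that text discusses (current NumPy semantics; the variable names are made up for the example)::

    import numpy as np

    x = np.array(5)           # zero-rank (0-d) array: keeps .shape, .ndim, .dtype, etc.
    s = np.float64(5.0)       # array scalar: immutable, with a dtype-specific Python type

    print(x.shape, x.ndim)    # () 0
    print(type(s))            # <class 'numpy.float64'>

    print(x[()])              # empty-tuple indexing extracts an array scalar (a NumPy integer)
    x[...] = 7                # ellipsis indexing allows in-place assignment to the 0-d array
    print(repr(x))            # array(7)
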
diff --git a/doc/release/1.14.4-notes.rst b/doc/release/1.14.4-notes.rst
index 174094c1c..3fb94383b 100644
--- a/doc/release/1.14.4-notes.rst
+++ b/doc/release/1.14.4-notes.rst
@@ -19,7 +19,7 @@ values are now correct.
Note that NumPy will error on import if it detects incorrect float32 `dot`
results. This problem has been seen on the Mac when working in the Anaconda
-enviroment and is due to a subtle interaction between MKL and PyQt5. It is not
+environment and is due to a subtle interaction between MKL and PyQt5. It is not
strictly a NumPy problem, but it is best that users be aware of it. See the
gh-8577 NumPy issue for more information.