-rw-r--r--  .github/FUNDING.yml                                |    1
-rw-r--r--  doc/changelog/1.17.4-changelog.rst                 |   26
-rw-r--r--  doc/neps/nep-0000.rst                              |   13
-rw-r--r--  doc/neps/nep-0034.rst                              |  141
-rw-r--r--  doc/neps/nep-template.rst                          |    5
-rw-r--r--  doc/release/upcoming_changes/14197.improvement.rst |    7
-rw-r--r--  doc/release/upcoming_changes/14227.improvement.rst |    3
-rw-r--r--  doc/release/upcoming_changes/14730.improvement.rst |    3
-rw-r--r--  doc/source/release.rst                             |    1
-rw-r--r--  doc/source/release/1.17.4-notes.rst                |   49
-rw-r--r--  numpy/__init__.py                                  |   10
-rw-r--r--  numpy/core/_add_newdocs.py                         |   14
-rw-r--r--  numpy/core/code_generators/generate_umath.py       |   19
-rw-r--r--  numpy/core/fromnumeric.py                          |   38
-rw-r--r--  numpy/core/setup.py                                |    2
-rw-r--r--  numpy/core/setup_common.py                         |    3
-rw-r--r--  numpy/core/src/common/binop_override.h             |    5
-rw-r--r--  numpy/core/src/common/get_attr_string.h            |   20
-rw-r--r--  numpy/core/src/common/ufunc_override.c             |    3
-rw-r--r--  numpy/core/src/multiarray/arrayfunction_override.c |    8
-rw-r--r--  numpy/core/src/multiarray/arrayobject.c            |    2
-rw-r--r--  numpy/core/src/multiarray/arraytypes.c.src         |  132
-rw-r--r--  numpy/core/src/multiarray/buffer.c                 |    2
-rw-r--r--  numpy/core/src/multiarray/common.c                 |   12
-rw-r--r--  numpy/core/src/multiarray/conversion_utils.c       |    2
-rw-r--r--  numpy/core/src/multiarray/ctors.c                  |   21
-rw-r--r--  numpy/core/src/multiarray/descriptor.c             |    2
-rw-r--r--  numpy/core/src/multiarray/getset.c                 |    2
-rw-r--r--  numpy/core/src/multiarray/multiarraymodule.c       |   15
-rw-r--r--  numpy/core/src/multiarray/npy_buffer.h (renamed from numpy/core/src/multiarray/buffer.h) | 0
-rw-r--r--  numpy/core/src/multiarray/scalartypes.c.src        |    2
-rw-r--r--  numpy/core/tests/test_datetime.py                  |   24
-rw-r--r--  numpy/core/tests/test_deprecations.py              |    2
-rw-r--r--  numpy/core/tests/test_issue14735.py                |   29
-rw-r--r--  numpy/core/tests/test_longdouble.py                |  114
-rw-r--r--  numpy/core/tests/test_numeric.py                   |    5
-rw-r--r--  numpy/core/tests/test_ufunc.py                     |    1
-rw-r--r--  numpy/core/tests/test_umath.py                     |    4
-rw-r--r--  numpy/lib/nanfunctions.py                          |    2
-rw-r--r--  numpy/lib/tests/test_function_base.py              |    4
-rw-r--r--  numpy/linalg/linalg.py                             |   77
-rw-r--r--  numpy/random/.gitignore                            |    3
-rw-r--r--  numpy/random/_bounded_integers.pxd                 |   29
-rw-r--r--  numpy/random/_bounded_integers.pyx                 | 1564
-rw-r--r--  numpy/random/_generator.pyx                        |   59
-rw-r--r--  numpy/random/tests/test_generator_mt19937.py       |   24
-rw-r--r--  shippable.yml                                      |    1
-rw-r--r--  test_requirements.txt                              |    4
-rwxr-xr-x  tools/travis-test.sh                               |    4
49 files changed, 803 insertions, 1710 deletions
diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml
index 4c868d735..25322bd34 100644
--- a/.github/FUNDING.yml
+++ b/.github/FUNDING.yml
@@ -1,2 +1,3 @@
+github: [numfocus]
tidelift: pypi/numpy
custom: https://www.numpy.org/#support-numpy
diff --git a/doc/changelog/1.17.4-changelog.rst b/doc/changelog/1.17.4-changelog.rst
new file mode 100644
index 000000000..96d9f3e9e
--- /dev/null
+++ b/doc/changelog/1.17.4-changelog.rst
@@ -0,0 +1,26 @@
+
+Contributors
+============
+
+A total of 5 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Charles Harris
+* Chris Burr +
+* Matti Picus
+* Qiming Sun +
+* Warren Weckesser
+
+Pull requests merged
+====================
+
+A total of 8 pull requests were merged for this release.
+
+* `#14758 <https://github.com/numpy/numpy/pull/14758>`__: BLD: declare support for python 3.8
+* `#14781 <https://github.com/numpy/numpy/pull/14781>`__: BUG: random: biased samples from integers() with 8 or 16 bit...
+* `#14851 <https://github.com/numpy/numpy/pull/14851>`__: BUG: Fix _ctypes class circular reference. (#13808)
+* `#14852 <https://github.com/numpy/numpy/pull/14852>`__: BLD: add 'apt update' to shippable
+* `#14855 <https://github.com/numpy/numpy/pull/14855>`__: BUG: Fix `np.einsum` errors on Power9 Linux and z/Linux
+* `#14857 <https://github.com/numpy/numpy/pull/14857>`__: BUG: lib: Fix histogram problem with signed integer arrays.
+* `#14858 <https://github.com/numpy/numpy/pull/14858>`__: BLD: Prevent -flto from optimising long double representation...
+* `#14866 <https://github.com/numpy/numpy/pull/14866>`__: MAINT: move buffer.h -> npy_buffer.h to avoid conflicts
diff --git a/doc/neps/nep-0000.rst b/doc/neps/nep-0000.rst
index 090c4850e..0a2dbdefb 100644
--- a/doc/neps/nep-0000.rst
+++ b/doc/neps/nep-0000.rst
@@ -75,14 +75,11 @@ request`_ to the ``doc/neps`` directory with the name ``nep-<n>.rst``
where ``<n>`` is an appropriately assigned four-digit number (e.g.,
``nep-0000.rst``). The draft must use the :doc:`nep-template` file.
-Once the PR is in place, the NEP should be announced on the mailing
-list for discussion. Discussion about implementation details will take place
-on the pull request, but once editorial issues are solved, the PR should be
-merged, even if with draft status. The mailing list e-mail will contain the NEP
-upto the section titled "Backward compatibility", so as to make it digestible
-to a wide audience. The mailing list discussion is intended to target
-end-users, and thus, discussion on the proposed usage and possible impact
-should take place on the mailing list.
+Once the PR for the NEP is in place, a post should be made to the
+mailing list containing the sections up to "Backward compatibility",
+with the purpose of limiting discussion there to usage and impact.
+Discussion on the pull request will have a broader scope, also including
+details of implementation.
At the earliest convenience, the PR should be merged (regardless of
whether it is accepted during discussion). Additional PRs may be made
diff --git a/doc/neps/nep-0034.rst b/doc/neps/nep-0034.rst
new file mode 100644
index 000000000..d9a9c62f2
--- /dev/null
+++ b/doc/neps/nep-0034.rst
@@ -0,0 +1,141 @@
+===========================================================
+NEP 34 — Disallow inferring ``dtype=object`` from sequences
+===========================================================
+
+:Author: Matti Picus
+:Status: Draft
+:Type: Standards Track
+:Created: 2019-10-10
+
+
+Abstract
+--------
+
+When users create arrays with sequences-of-sequences, they sometimes err in
+matching the lengths of the nested sequences_, commonly called "ragged
+arrays". Here we will refer to them as ragged nested sequences. Creating such
+arrays via ``np.array([<ragged_nested_sequence>])`` with no ``dtype`` keyword
+argument currently defaults to an ``object``-dtype array. This NEP proposes
+changing that behaviour to raise a ``ValueError`` instead.
+
+Motivation and Scope
+--------------------
+
+Users who specify lists-of-lists when creating a `numpy.ndarray` via
+``np.array`` may mistakenly pass in lists of different lengths. Currently we
+accept this input and automatically create an array with ``dtype=object``. This
+can be confusing, since it is rarely what is desired. Changing the automatic
+dtype detection to never return ``object`` for ragged nested sequences (defined as a
+recursive sequence of sequences, where not all the sequences on the same
+level have the same length) will force users who actually wish to create
+``object`` arrays to specify that explicitly. Note that ``list``, ``tuple``,
+and ``np.ndarray`` objects are all sequences [0]_. See for instance `issue 5303`_.
+
+Usage and Impact
+----------------
+
+After this change, array creation with ragged nested sequences must explicitly
+define a dtype:
+
+ >>> np.array([[1, 2], [1]])
+ ValueError: cannot guess the desired dtype from the input
+
+ >>> np.array([[1, 2], [1]], dtype=object)
+ # succeeds, with no change from current behaviour
+
+The deprecation will affect any call that internally calls ``np.asarray``. For
+instance, the ``assert_equal`` family of functions calls ``np.asarray``, so
+users will have to change code like::
+
+    np.testing.assert_equal(a, [[1, 2], 3])
+
+to::
+
+    np.testing.assert_equal(a, np.array([[1, 2], 3], dtype=object))
+
+Detailed description
+--------------------
+
+Since it is sometimes hard to determine what shape is desired, one can set the
+shape of the object array explicitly and then fill it:
+
+ >>> arr = np.empty(correct_shape, dtype=object)
+ >>> arr[...] = values
+
+We will also reject mixed sequences of non-sequences and sequences; for
+instance, both of the following will be rejected:
+
+ >>> arr = np.array([np.arange(10), [10]])
+ >>> arr = np.array([[range(3), range(3), range(3)], [range(3), 0, 0]])
+
+Related Work
+------------
+
+`PR 14341`_ tried to raise an error when ragged nested sequences were specified
+with a numeric dtype, as in ``np.array([[1], [2, 3]], dtype=int)``, but failed
+due to false positives, for instance ``np.array([1, np.array([5])], dtype=int)``.
+
+.. _`PR 14341`: https://github.com/numpy/numpy/pull/14341
+
+Implementation
+--------------
+
+The code to be changed is inside ``PyArray_GetArrayParamsFromObject`` and the
+internal ``discover_dimensions`` function. See `PR 14794`_.
+
+Backward compatibility
+----------------------
+
+Anyone depending on creating object arrays from ragged nested sequences will
+need to modify their code. There will be a deprecation period during which the
+current behaviour will emit a ``DeprecationWarning``.
+
+Alternatives
+------------
+
+- We could continue with the current situation.
+
+- It was also suggested to add a kwarg ``depth`` to array creation, or perhaps
+ to add another array creation API function ``ragged_array_object``. The goal
+ was to eliminate the ambiguity in creating an object array from ``array([[1,
+ 2], [1]], dtype=object)``: should the returned array have a shape of
+ ``(1,)``, or ``(2,)``? This NEP does not deal with that issue, and only
+ deprecates the use of ``array`` with no ``dtype=object`` for ragged nested
+ sequences. Users of ragged nested sequences may face another deprecation
+  cycle in the future. Rationale: we expect that there are very few users who
+  intend to use ragged arrays like that; this was never intended as a use case
+  of NumPy arrays. Such users are likely better off with `another library`_ or
+  with plain lists of lists.
+
+- It was also suggested to deprecate all automatic creation of ``object``-dtype
+ arrays, which would require adding an explicit ``dtype=object`` for something
+ like ``np.array([Decimal(10), Decimal(10)])``. This too is out of scope for
+  the current NEP. Rationale: it is harder to assess the impact of this larger
+  change, and we are not sure how many users it would affect.
+
+Discussion
+----------
+
+Comments to `issue 5303`_ indicate this is unintended behaviour as far back as
+2014. Suggestions to change it have been made in the ensuing years, but none
+have stuck. The WIP implementation in `PR 14794`_ seems to point to the
+viability of this approach.
+
+References and Footnotes
+------------------------
+
+.. _`issue 5303`: https://github.com/numpy/numpy/issues/5303
+.. _sequences: https://docs.python.org/3.7/glossary.html#term-sequence
+.. _`PR 14794`: https://github.com/numpy/numpy/pull/14794
+.. _`another library`: https://github.com/scikit-hep/awkward-array
+
+.. [0] ``np.ndarray`` objects are not recursed into; rather, their shape
+   is used directly. This will not emit warnings::
+
+ ragged = np.array([[1], [1, 2, 3]], dtype=object)
+ np.array([ragged, ragged]) # no dtype needed
+
+Copyright
+---------
+
+This document has been placed in the public domain.
diff --git a/doc/neps/nep-template.rst b/doc/neps/nep-template.rst
index 55a872ca9..42f717c7a 100644
--- a/doc/neps/nep-template.rst
+++ b/doc/neps/nep-template.rst
@@ -41,8 +41,9 @@ Backward compatibility
This section describes the ways in which the NEP breaks backward compatibility.
The mailing list post will contain the NEP up to and including this section.
-This is to avoid losing users who are not interested in implementation details
-and instead focus the discussion on usage and impact of the intended features.
+Its purpose is to provide a high-level summary to users who are not interested
+in detailed technical discussion, but who may have opinions on, e.g., usage
+and impact.
Detailed description
--------------------
diff --git a/doc/release/upcoming_changes/14197.improvement.rst b/doc/release/upcoming_changes/14197.improvement.rst
new file mode 100644
index 000000000..c0c52580a
--- /dev/null
+++ b/doc/release/upcoming_changes/14197.improvement.rst
@@ -0,0 +1,7 @@
+``method`` keyword argument for `np.random.multivariate_normal`
+---------------------------------------------------------------
+A ``method`` keyword argument is now available for
+`np.random.multivariate_normal` with possible values
+``{'svd', 'eigh', 'cholesky'}``. To use it, write
+``np.random.multivariate_normal(..., method=<method>)``.
+
diff --git a/doc/release/upcoming_changes/14227.improvement.rst b/doc/release/upcoming_changes/14227.improvement.rst
new file mode 100644
index 000000000..6e45f47c1
--- /dev/null
+++ b/doc/release/upcoming_changes/14227.improvement.rst
@@ -0,0 +1,3 @@
+Add complex number support for ``numpy.fromstring``
+---------------------------------------------------
+``numpy.fromstring`` can now read complex numbers.
diff --git a/doc/release/upcoming_changes/14730.improvement.rst b/doc/release/upcoming_changes/14730.improvement.rst
new file mode 100644
index 000000000..ee073d234
--- /dev/null
+++ b/doc/release/upcoming_changes/14730.improvement.rst
@@ -0,0 +1,3 @@
+Add complex number support for ``numpy.fromfile``
+-------------------------------------------------
+``numpy.fromfile`` can now read complex numbers.
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 3bfe81243..0343275a5 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -6,6 +6,7 @@ Release Notes
:maxdepth: 3
1.18.0 <release/1.18.0-notes>
+ 1.17.4 <release/1.17.4-notes>
1.17.3 <release/1.17.3-notes>
1.17.2 <release/1.17.2-notes>
1.17.1 <release/1.17.1-notes>
diff --git a/doc/source/release/1.17.4-notes.rst b/doc/source/release/1.17.4-notes.rst
new file mode 100644
index 000000000..47f4725f9
--- /dev/null
+++ b/doc/source/release/1.17.4-notes.rst
@@ -0,0 +1,49 @@
+.. currentmodule:: numpy
+
+==========================
+NumPy 1.17.4 Release Notes
+==========================
+
+This release contains fixes for bugs reported against NumPy 1.17.3 along with
+some build improvements. The Python versions supported in this release
+are 3.5-3.8.
+
+Downstream developers should use Cython >= 0.29.13 for Python 3.8 support and
+OpenBLAS >= 3.7 to avoid errors on the Skylake architecture.
+
+
+Highlights
+==========
+
+- Fixed `random.random_integers` biased generation of 8 and 16 bit integers.
+- Fixed `np.einsum` regression on Power9 and z/Linux.
+- Fixed histogram problem with signed integer arrays.
+
+
+Contributors
+============
+
+A total of 5 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Charles Harris
+* Chris Burr +
+* Matti Picus
+* Qiming Sun +
+* Warren Weckesser
+
+
+Pull requests merged
+====================
+
+A total of 8 pull requests were merged for this release.
+
+* `#14758 <https://github.com/numpy/numpy/pull/14758>`__: BLD: declare support for python 3.8
+* `#14781 <https://github.com/numpy/numpy/pull/14781>`__: BUG: random: biased samples from integers() with 8 or 16 bit...
+* `#14851 <https://github.com/numpy/numpy/pull/14851>`__: BUG: Fix _ctypes class circular reference. (#13808)
+* `#14852 <https://github.com/numpy/numpy/pull/14852>`__: BLD: add 'apt update' to shippable
+* `#14855 <https://github.com/numpy/numpy/pull/14855>`__: BUG: Fix `np.einsum` errors on Power9 Linux and z/Linux
+* `#14857 <https://github.com/numpy/numpy/pull/14857>`__: BUG: lib: Fix histogram problem with signed integer arrays.
+* `#14858 <https://github.com/numpy/numpy/pull/14858>`__: BLD: Prevent -flto from optimising long double representation...
+* `#14866 <https://github.com/numpy/numpy/pull/14866>`__: MAINT: move buffer.h -> npy_buffer.h to avoid conflicts
+
diff --git a/numpy/__init__.py b/numpy/__init__.py
index fef8245de..349914b2f 100644
--- a/numpy/__init__.py
+++ b/numpy/__init__.py
@@ -158,6 +158,7 @@ else:
# Make these accessible from numpy name-space
# but not imported in from numpy import *
+ # TODO[gh-6103]: Deprecate these
if sys.version_info[0] >= 3:
from builtins import bool, int, float, complex, object, str
unicode = str
@@ -168,14 +169,17 @@ else:
# now that numpy modules are imported, can initialize limits
core.getlimits._register_known_types()
- __all__.extend(['bool', 'int', 'float', 'complex', 'object', 'unicode',
- 'str'])
__all__.extend(['__version__', 'show_config'])
__all__.extend(core.__all__)
__all__.extend(_mat.__all__)
__all__.extend(lib.__all__)
__all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma'])
+ # These are added by `from .core import *` and `core.__all__`, but we
+ # overwrite them above with builtins we do _not_ want to export.
+ __all__.remove('long')
+ __all__.remove('unicode')
+
# Remove things that are in the numpy.lib but not in the numpy namespace
# Note that there is a test (numpy/tests/test_public_api.py:test_numpy_namespace)
# that prevents adding more things to the main namespace by accident.
@@ -216,7 +220,7 @@ else:
"{!r}".format(__name__, attr))
def __dir__():
- return __all__ + ['Tester', 'testing']
+ return list(globals().keys()) + ['Tester', 'testing']
else:
# We don't actually use this ourselves anymore, but I'm not 100% sure that
diff --git a/numpy/core/_add_newdocs.py b/numpy/core/_add_newdocs.py
index bd309f4a5..0225d12b9 100644
--- a/numpy/core/_add_newdocs.py
+++ b/numpy/core/_add_newdocs.py
@@ -1036,7 +1036,12 @@ add_newdoc('numpy.core.multiarray', 'fromstring',
A string containing the data.
dtype : data-type, optional
The data type of the array; default: float. For binary input data,
- the data must be in exactly this format.
+ the data must be in exactly this format. Most builtin numeric types are
+ supported and extension types may be supported.
+
+ .. versionadded:: 1.18.0
+ Complex dtypes.
+
count : int, optional
Read this number of `dtype` elements from the data. If this is
negative (the default), the count will be determined from the
@@ -1172,6 +1177,11 @@ add_newdoc('numpy.core.multiarray', 'fromfile',
Data type of the returned array.
For binary files, it is used to determine the size and byte-order
of the items in the file.
+ Most builtin numeric types are supported and extension types may be supported.
+
+ .. versionadded:: 1.18.0
+ Complex dtypes.
+
count : int
Number of items to read. ``-1`` means all items (i.e., the complete
file).
@@ -1196,7 +1206,7 @@ add_newdoc('numpy.core.multiarray', 'fromfile',
Notes
-----
Do not rely on the combination of `tofile` and `fromfile` for
- data storage, as the binary files generated are are not platform
+ data storage, as the binary files generated are not platform
independent. In particular, no byte-order or data-type information is
saved. Data can be stored in the platform independent ``.npy`` format
using `save` and `load` instead.
diff --git a/numpy/core/code_generators/generate_umath.py b/numpy/core/code_generators/generate_umath.py
index e0b6a654c..9e67a45ef 100644
--- a/numpy/core/code_generators/generate_umath.py
+++ b/numpy/core/code_generators/generate_umath.py
@@ -226,7 +226,9 @@ chartoname = {
'P': 'OBJECT',
}
-all = '?bBhHiIlLqQefdgFDGOMm'
+noobj = '?bBhHiIlLqQefdgFDGmM'
+all = '?bBhHiIlLqQefdgFDGOmM'
+
O = 'O'
P = 'P'
ints = 'bBhHiIlLqQ'
@@ -246,10 +248,8 @@ inexactvec = 'fd'
noint = inexact+O
nointP = inexact+P
allP = bints+times+flts+cmplxP
-nobool = all[1:]
-noobj = all[:-3]+all[-2:]
-nobool_or_obj = all[1:-3]+all[-2:]
-nobool_or_datetime = all[1:-2]+all[-1:]
+nobool_or_obj = noobj[1:]
+nobool_or_datetime = noobj[1:-1] + O # includes m - timedelta64
intflt = ints+flts
intfltcmplx = ints+flts+cmplx
nocmplx = bints+times+flts
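The string slicing above is easy to sanity-check in plain Python. This is a hypothetical standalone sketch (the names mirror ``generate_umath.py``, but nothing here imports it):

```python
# Type-character strings as defined above in generate_umath.py.
noobj = '?bBhHiIlLqQefdgFDGmM'   # every dtype char except object ('O')
O = 'O'

# Derived sets: drop the leading '?' (bool); for the datetime variant
# also drop the trailing 'M' (datetime64) while keeping 'm'
# (timedelta64) and re-adding 'O'.
nobool_or_obj = noobj[1:]
nobool_or_datetime = noobj[1:-1] + O

print(nobool_or_obj)       # bBhHiIlLqQefdgFDGmM
print(nobool_or_datetime)  # bBhHiIlLqQefdgFDGmO
```

Deriving both sets from ``noobj`` avoids the fragile index arithmetic on ``all`` that the old lines used.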
@@ -433,6 +433,7 @@ defdict = {
'PyUFunc_SimpleBinaryComparisonTypeResolver',
TD(all, out='?', simd=[('avx2', ints)]),
[TypeDescription('O', FullTypeDescr, 'OO', 'O')],
+ TD('O', out='?'),
),
'greater_equal':
Ufunc(2, 1, None,
@@ -440,6 +441,7 @@ defdict = {
'PyUFunc_SimpleBinaryComparisonTypeResolver',
TD(all, out='?', simd=[('avx2', ints)]),
[TypeDescription('O', FullTypeDescr, 'OO', 'O')],
+ TD('O', out='?'),
),
'less':
Ufunc(2, 1, None,
@@ -447,6 +449,7 @@ defdict = {
'PyUFunc_SimpleBinaryComparisonTypeResolver',
TD(all, out='?', simd=[('avx2', ints)]),
[TypeDescription('O', FullTypeDescr, 'OO', 'O')],
+ TD('O', out='?'),
),
'less_equal':
Ufunc(2, 1, None,
@@ -454,6 +457,7 @@ defdict = {
'PyUFunc_SimpleBinaryComparisonTypeResolver',
TD(all, out='?', simd=[('avx2', ints)]),
[TypeDescription('O', FullTypeDescr, 'OO', 'O')],
+ TD('O', out='?'),
),
'equal':
Ufunc(2, 1, None,
@@ -461,6 +465,7 @@ defdict = {
'PyUFunc_SimpleBinaryComparisonTypeResolver',
TD(all, out='?', simd=[('avx2', ints)]),
[TypeDescription('O', FullTypeDescr, 'OO', 'O')],
+ TD('O', out='?'),
),
'not_equal':
Ufunc(2, 1, None,
@@ -468,6 +473,7 @@ defdict = {
'PyUFunc_SimpleBinaryComparisonTypeResolver',
TD(all, out='?', simd=[('avx2', ints)]),
[TypeDescription('O', FullTypeDescr, 'OO', 'O')],
+ TD('O', out='?'),
),
'logical_and':
Ufunc(2, 1, True_,
@@ -475,6 +481,7 @@ defdict = {
'PyUFunc_SimpleBinaryComparisonTypeResolver',
TD(nodatetime_or_obj, out='?', simd=[('avx2', ints)]),
TD(O, f='npy_ObjectLogicalAnd'),
+ TD(O, f='npy_ObjectLogicalAnd', out='?'),
),
'logical_not':
Ufunc(1, 1, None,
@@ -482,6 +489,7 @@ defdict = {
None,
TD(nodatetime_or_obj, out='?', simd=[('avx2', ints)]),
TD(O, f='npy_ObjectLogicalNot'),
+ TD(O, f='npy_ObjectLogicalNot', out='?'),
),
'logical_or':
Ufunc(2, 1, False_,
@@ -489,6 +497,7 @@ defdict = {
'PyUFunc_SimpleBinaryComparisonTypeResolver',
TD(nodatetime_or_obj, out='?', simd=[('avx2', ints)]),
TD(O, f='npy_ObjectLogicalOr'),
+ TD(O, f='npy_ObjectLogicalOr', out='?'),
),
'logical_xor':
Ufunc(2, 1, False_,
diff --git a/numpy/core/fromnumeric.py b/numpy/core/fromnumeric.py
index 5f7716455..6e5f3dabf 100644
--- a/numpy/core/fromnumeric.py
+++ b/numpy/core/fromnumeric.py
@@ -796,7 +796,9 @@ def argpartition(a, kth, axis=-1, kind='introselect', order=None):
--------
partition : Describes partition algorithms used.
ndarray.partition : Inplace partition.
- argsort : Full indirect sort
+ argsort : Full indirect sort.
+ take_along_axis : Apply ``index_array`` from argpartition
+ to an array as if by calling partition.
Notes
-----
@@ -816,6 +818,14 @@ def argpartition(a, kth, axis=-1, kind='introselect', order=None):
>>> np.array(x)[np.argpartition(x, 3)]
array([2, 1, 3, 4])
+ Multi-dimensional array:
+
+ >>> x = np.array([[3, 4, 2], [1, 3, 1]])
+ >>> index_array = np.argpartition(x, kth=1, axis=-1)
+ >>> np.take_along_axis(x, index_array, axis=-1) # same as np.partition(x, kth=1)
+ array([[2, 3, 4],
+ [1, 1, 3]])
+
"""
return _wrapfunc(a, 'argpartition', kth, axis=axis, kind=kind, order=order)
@@ -1025,6 +1035,8 @@ def argsort(a, axis=-1, kind=None, order=None):
lexsort : Indirect stable sort with multiple keys.
ndarray.sort : Inplace sort.
argpartition : Indirect partial sort.
+ take_along_axis : Apply ``index_array`` from argsort
+ to an array as if by calling sort.
Notes
-----
@@ -1120,6 +1132,8 @@ def argmax(a, axis=None, out=None):
ndarray.argmax, argmin
amax : The maximum value along a given axis.
unravel_index : Convert a flat index into an index tuple.
+ take_along_axis : Apply ``np.expand_dims(index_array, axis)``
+ from argmax to an array as if by calling max.
Notes
-----
@@ -1154,6 +1168,16 @@ def argmax(a, axis=None, out=None):
>>> np.argmax(b) # Only the first occurrence is returned.
1
+ >>> x = np.array([[4,2,3], [1,0,3]])
+ >>> index_array = np.argmax(x, axis=-1)
+ >>> # Same as np.max(x, axis=-1, keepdims=True)
+ >>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1)
+ array([[4],
+ [3]])
+ >>> # Same as np.max(x, axis=-1)
+ >>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1).squeeze(axis=-1)
+ array([4, 3])
+
"""
return _wrapfunc(a, 'argmax', axis=axis, out=out)
@@ -1189,6 +1213,8 @@ def argmin(a, axis=None, out=None):
ndarray.argmin, argmax
amin : The minimum value along a given axis.
unravel_index : Convert a flat index into an index tuple.
+ take_along_axis : Apply ``np.expand_dims(index_array, axis)``
+ from argmin to an array as if by calling min.
Notes
-----
@@ -1223,6 +1249,16 @@ def argmin(a, axis=None, out=None):
>>> np.argmin(b) # Only the first occurrence is returned.
0
+ >>> x = np.array([[4,2,3], [1,0,3]])
+ >>> index_array = np.argmin(x, axis=-1)
+ >>> # Same as np.min(x, axis=-1, keepdims=True)
+ >>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1)
+ array([[2],
+ [0]])
+    >>> # Same as np.min(x, axis=-1)
+ >>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1).squeeze(axis=-1)
+ array([2, 0])
+
"""
return _wrapfunc(a, 'argmin', axis=axis, out=out)
diff --git a/numpy/core/setup.py b/numpy/core/setup.py
index a33318472..a4b5cfe5f 100644
--- a/numpy/core/setup.py
+++ b/numpy/core/setup.py
@@ -774,7 +774,7 @@ def configuration(parent_package='',top_path=None):
join('src', 'multiarray', 'arrayobject.h'),
join('src', 'multiarray', 'arraytypes.h'),
join('src', 'multiarray', 'arrayfunction_override.h'),
- join('src', 'multiarray', 'buffer.h'),
+ join('src', 'multiarray', 'npy_buffer.h'),
join('src', 'multiarray', 'calculation.h'),
join('src', 'multiarray', 'common.h'),
join('src', 'multiarray', 'convert_datatype.h'),
diff --git a/numpy/core/setup_common.py b/numpy/core/setup_common.py
index 84b78b585..6356f08ba 100644
--- a/numpy/core/setup_common.py
+++ b/numpy/core/setup_common.py
@@ -266,8 +266,9 @@ def check_long_double_representation(cmd):
except ValueError:
# try linking to support CC="gcc -flto" or icc -ipo
# struct needs to be volatile so it isn't optimized away
+ # additionally "clang -flto" requires the foo struct to be used
body = body.replace('struct', 'volatile struct')
- body += "int main(void) { return 0; }\n"
+ body += "int main(void) { return foo.before[0]; }\n"
src, obj = cmd._compile(body, None, None, 'c')
cmd.temp_files.append("_configtest")
cmd.compiler.link_executable([obj], "_configtest")
diff --git a/numpy/core/src/common/binop_override.h b/numpy/core/src/common/binop_override.h
index 47df63e38..c5e7ab808 100644
--- a/numpy/core/src/common/binop_override.h
+++ b/numpy/core/src/common/binop_override.h
@@ -129,11 +129,14 @@ binop_should_defer(PyObject *self, PyObject *other, int inplace)
* check whether __array_ufunc__ equals None.
*/
attr = PyArray_LookupSpecial(other, "__array_ufunc__");
- if (attr) {
+ if (attr != NULL) {
defer = !inplace && (attr == Py_None);
Py_DECREF(attr);
return defer;
}
+ else if (PyErr_Occurred()) {
+ PyErr_Clear(); /* TODO[gh-14801]: propagate crashes during attribute access? */
+ }
/*
* Otherwise, we need to check for the legacy __array_priority__. But if
* other.__class__ is a subtype of self.__class__, then it's already had
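In Python terms, the deferral rule that ``binop_should_defer`` implements can be sketched as follows. This is a hypothetical illustration only; the real check is in C and also consults ``__array_priority__``, which is elided here:

```python
class _Missing:
    """Sentinel distinguishing 'attribute absent' from 'attribute is None'."""

def binop_should_defer(other, inplace):
    # Mirrors the C logic above: defer to `other` only for out-of-place
    # ops where `other` explicitly opted out of ufuncs by setting
    # __array_ufunc__ = None on its class.
    attr = getattr(type(other), '__array_ufunc__', _Missing)
    if attr is not _Missing:
        return (not inplace) and (attr is None)
    # Fall through to the legacy __array_priority__ checks (not shown).
    return False

class OptedOut:
    __array_ufunc__ = None

print(binop_should_defer(OptedOut(), inplace=False))  # True
print(binop_should_defer(OptedOut(), inplace=True))   # False
print(binop_should_defer(object(), inplace=False))    # False
```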
diff --git a/numpy/core/src/common/get_attr_string.h b/numpy/core/src/common/get_attr_string.h
index d458d9550..d3401aea6 100644
--- a/numpy/core/src/common/get_attr_string.h
+++ b/numpy/core/src/common/get_attr_string.h
@@ -40,18 +40,14 @@ _is_basic_python_type(PyTypeObject *tp)
}
/*
- * Stripped down version of PyObject_GetAttrString,
- * avoids lookups for None, tuple, and List objects,
- * and doesn't create a PyErr since this code ignores it.
+ * Stripped down version of PyObject_GetAttrString(obj, name) that does not
+ * raise PyExc_AttributeError.
*
- * This can be much faster then PyObject_GetAttrString where
- * exceptions are not used by caller.
+ * This allows it to avoid creating then discarding exception objects when
+ * performing lookups on objects without any attributes.
*
- * 'obj' is the object to search for attribute.
- *
- * 'name' is the attribute to search for.
- *
- * Returns attribute value on success, NULL on failure.
+ * Returns attribute value on success, NULL without an exception set if
+ * there is no such attribute, and NULL with an exception on failure.
*/
static NPY_INLINE PyObject *
maybe_get_attr(PyObject *obj, char *name)
@@ -62,7 +58,7 @@ maybe_get_attr(PyObject *obj, char *name)
/* Attribute referenced by (char *)name */
if (tp->tp_getattr != NULL) {
res = (*tp->tp_getattr)(obj, name);
- if (res == NULL) {
+ if (res == NULL && PyErr_ExceptionMatches(PyExc_AttributeError)) {
PyErr_Clear();
}
}
@@ -78,7 +74,7 @@ maybe_get_attr(PyObject *obj, char *name)
}
res = (*tp->tp_getattro)(obj, w);
Py_DECREF(w);
- if (res == NULL) {
+ if (res == NULL && PyErr_ExceptionMatches(PyExc_AttributeError)) {
PyErr_Clear();
}
}
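The revised lookup contract is easy to state in Python. A hypothetical equivalent (the C version avoids even creating the exception object in common cases, but the observable behaviour matches):

```python
def maybe_get_attr(obj, name):
    # Swallow only AttributeError; any other exception raised while
    # looking up the attribute now propagates to the caller instead of
    # being silently cleared.
    try:
        return getattr(obj, name)
    except AttributeError:
        return None

class Broken:
    @property
    def attr(self):
        raise ValueError("side effect during lookup")

print(maybe_get_attr(object(), "missing"))  # None
try:
    maybe_get_attr(Broken(), "attr")
except ValueError as exc:
    print("propagated:", exc)
```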
diff --git a/numpy/core/src/common/ufunc_override.c b/numpy/core/src/common/ufunc_override.c
index 89f08a9cb..3f699bcdd 100644
--- a/numpy/core/src/common/ufunc_override.c
+++ b/numpy/core/src/common/ufunc_override.c
@@ -36,6 +36,9 @@ PyUFuncOverride_GetNonDefaultArrayUfunc(PyObject *obj)
*/
cls_array_ufunc = PyArray_LookupSpecial(obj, "__array_ufunc__");
if (cls_array_ufunc == NULL) {
+ if (PyErr_Occurred()) {
+ PyErr_Clear(); /* TODO[gh-14801]: propagate crashes during attribute access? */
+ }
return NULL;
}
/* Ignore if the same as ndarray.__array_ufunc__ */
diff --git a/numpy/core/src/multiarray/arrayfunction_override.c b/numpy/core/src/multiarray/arrayfunction_override.c
index 62e597764..9ea8efdd9 100644
--- a/numpy/core/src/multiarray/arrayfunction_override.c
+++ b/numpy/core/src/multiarray/arrayfunction_override.c
@@ -26,6 +26,7 @@ static PyObject *
get_array_function(PyObject *obj)
{
static PyObject *ndarray_array_function = NULL;
+ PyObject *array_function;
if (ndarray_array_function == NULL) {
ndarray_array_function = get_ndarray_array_function();
@@ -37,7 +38,12 @@ get_array_function(PyObject *obj)
return ndarray_array_function;
}
- return PyArray_LookupSpecial(obj, "__array_function__");
+ array_function = PyArray_LookupSpecial(obj, "__array_function__");
+ if (array_function == NULL && PyErr_Occurred()) {
+ PyErr_Clear(); /* TODO[gh-14801]: propagate crashes during attribute access? */
+ }
+
+ return array_function;
}
diff --git a/numpy/core/src/multiarray/arrayobject.c b/numpy/core/src/multiarray/arrayobject.c
index 5ed5b7635..a5cebfbd8 100644
--- a/numpy/core/src/multiarray/arrayobject.c
+++ b/numpy/core/src/multiarray/arrayobject.c
@@ -48,7 +48,7 @@ maintainer email: oliphant.travis@ieee.org
#include "mapping.h"
#include "getset.h"
#include "sequence.h"
-#include "buffer.h"
+#include "npy_buffer.h"
#include "array_assign.h"
#include "alloc.h"
#include "mem_overlap.h"
diff --git a/numpy/core/src/multiarray/arraytypes.c.src b/numpy/core/src/multiarray/arraytypes.c.src
index 152a2be9c..e36b95c00 100644
--- a/numpy/core/src/multiarray/arraytypes.c.src
+++ b/numpy/core/src/multiarray/arraytypes.c.src
@@ -36,7 +36,7 @@
#include "cblasfuncs.h"
#include "npy_cblas.h"
-#include "buffer.h"
+#include "npy_buffer.h"
/* check for sequences, but ignore the types numpy considers scalars */
static NPY_INLINE npy_bool
@@ -1081,6 +1081,7 @@ TIMEDELTA_setitem(PyObject *op, void *ov, void *vap)
* npy_long, npy_ulong, npy_longlong, npy_ulonglong,
* npy_float, npy_double, npy_longdouble,
* npy_datetime, npy_timedelta#
+ * #supports_nat = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1#
*/
/**begin repeat1
@@ -1092,6 +1093,7 @@ TIMEDELTA_setitem(PyObject *op, void *ov, void *vap)
* npy_long, npy_ulong, npy_longlong, npy_ulonglong,
* npy_float, npy_double, npy_longdouble,
* npy_datetime, npy_timedelta#
+ * #floatingpoint = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0#
*/
static void
@FROMTYPE@_to_@TOTYPE@(void *input, void *output, npy_intp n,
@@ -1101,7 +1103,15 @@ static void
@totype@ *op = output;
while (n--) {
- *op++ = (@totype@)*ip++;
+ @fromtype@ f = *ip++;
+ @totype@ t = (@totype@)f;
+#if @supports_nat@ && @floatingpoint@
+ /* Avoid undefined behaviour for NaN -> NaT */
+ if (npy_isnan(f)) {
+ t = (@totype@)NPY_DATETIME_NAT;
+ }
+#endif
+ *op++ = t;
}
}
/**end repeat1**/
@@ -1119,7 +1129,15 @@ static void
@totype@ *op = output;
while (n--) {
- *op++ = (@totype@)*ip;
+ @fromtype@ f = *ip;
+ @totype@ t = (@totype@)f;
+#if @supports_nat@
+ /* Avoid undefined behaviour for NaN -> NaT */
+ if (npy_isnan(f)) {
+ t = (@totype@)NPY_DATETIME_NAT;
+ }
+#endif
+ *op++ = t;
ip += 2;
}
}
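A quick sketch of the behaviour the casts above enable, assuming a NumPy build that includes this change: a floating-point NaN now converts to NaT instead of going through an undefined float-to-integer conversion.

```python
import numpy as np

# Sketch (assumes a NumPy build with this change): NaN in a float
# array casts to NaT for both datetime64 and timedelta64 targets.
nan = np.array([np.nan])
dt = nan.astype('datetime64[ns]')
td = nan.astype('timedelta64[ns]')
print(np.isnat(dt).all(), np.isnat(td).all())
```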
@@ -1757,7 +1775,58 @@ BOOL_scan(FILE *fp, npy_bool *ip, void *NPY_UNUSED(ignore),
}
/**begin repeat
- * #fname = CFLOAT, CDOUBLE, CLONGDOUBLE,
+ * #fname = CFLOAT, CDOUBLE#
+ * #type = npy_cfloat, npy_cdouble#
+ */
+static int
+@fname@_scan(FILE *fp, @type@ *ip, void *NPY_UNUSED(ignore),
+ PyArray_Descr *NPY_UNUSED(ignored))
+{
+ double result;
+ int ret_real, ret_imag;
+
+ ret_real = NumPyOS_ascii_ftolf(fp, &result);
+ @type@ output;
+ // Peek next character
+ char next = getc(fp);
+ if ((next == '+') || (next == '-')) {
+ // Imaginary component specified
+ output.real = result;
+ // Revert peek and read imaginary component
+ ungetc(next, fp);
+ ret_imag = NumPyOS_ascii_ftolf(fp, &result);
+ // Peek next character
+ next = getc(fp);
+ if ((ret_imag == 1) && (next == 'j')) {
+ // If read is successful and the immediate following char is j
+ output.imag = result;
+ }
+ else {
+ output.imag = 0;
+ // Push back an invalid char to trigger the "not everything was read" error
+ ungetc('a', fp);
+ }
+ }
+ else if (next == 'j') {
+ // Real component not specified
+ output.real = 0;
+ output.imag = result;
+ }
+ else {
+ // Imaginary component not specified
+ output.real = result;
+ output.imag = 0.;
+ // Next character is not + / - / j. Revert peek.
+ ungetc(next, fp);
+ }
+ *(@type@ *)ip = output;
+ return ret_real;
+}
+/**end repeat**/
+
+
+/**begin repeat
+ * #fname = CLONGDOUBLE,
* OBJECT, STRING, UNICODE, VOID,
* DATETIME, TIMEDELTA#
*/
@@ -1849,7 +1918,60 @@ BOOL_fromstr(char *str, void *ip, char **endptr,
}
/**begin repeat
- * #fname = CFLOAT, CDOUBLE, CLONGDOUBLE,
+ * #fname = CFLOAT, CDOUBLE#
+ * #type = npy_cfloat, npy_cdouble#
+ */
+static int
+@fname@_fromstr(char *str, void *ip, char **endptr,
+ PyArray_Descr *NPY_UNUSED(ignore))
+{
+ double result;
+
+ result = NumPyOS_ascii_strtod(str, endptr);
+ @type@ output;
+
+ if (endptr && ((*endptr[0] == '+') || (*endptr[0] == '-'))) {
+ // Imaginary component specified
+ output.real = result;
+ // Reading imaginary component
+ char **prev = endptr;
+ str = *endptr;
+ result = NumPyOS_ascii_strtod(str, endptr);
+ if (endptr && *endptr[0] == 'j') {
+ // The read succeeds only if the immediately following char is 'j'
+ output.imag = result;
+ // Skip j
+ ++*endptr;
+ }
+ else {
+ /*
+ * Reset endptr to the previous position to trigger the
+ * "not everything was read" error
+ */
+ endptr = prev;
+ output.imag = 0;
+ }
+ }
+ else if (endptr && *endptr[0] == 'j') {
+ // Real component not specified
+ output.real = 0;
+ output.imag = result;
+ // Skip j
+ ++*endptr;
+ }
+ else {
+ // Imaginary component not specified
+ output.real = result;
+ output.imag = 0.;
+ }
+ *(@type@ *)ip = output;
+ return 0;
+}
+/**end repeat**/
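From Python, the parsing added above surfaces through text-mode reads such as `np.fromstring` (a minimal sketch, assuming a NumPy build that includes this change): real-only, imaginary-only, and combined components are all accepted.

```python
import numpy as np

# Sketch (assumes a NumPy build with this change): complex values are
# parsed from text whether the real part, imaginary part, or both are
# present in each token.
a = np.fromstring("1+2j, 3j, 4", sep=",", dtype=complex)
print(a)
```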
+
+
+/**begin repeat
+ * #fname = CLONGDOUBLE,
* OBJECT, STRING, UNICODE, VOID#
*/
diff --git a/numpy/core/src/multiarray/buffer.c b/numpy/core/src/multiarray/buffer.c
index b729027ad..0edadee98 100644
--- a/numpy/core/src/multiarray/buffer.c
+++ b/numpy/core/src/multiarray/buffer.c
@@ -11,7 +11,7 @@
#include "npy_pycompat.h"
-#include "buffer.h"
+#include "npy_buffer.h"
#include "common.h"
#include "numpyos.h"
#include "arrayobject.h"
diff --git a/numpy/core/src/multiarray/common.c b/numpy/core/src/multiarray/common.c
index 3270bc20d..c991f7428 100644
--- a/numpy/core/src/multiarray/common.c
+++ b/numpy/core/src/multiarray/common.c
@@ -12,7 +12,7 @@
#include "usertypes.h"
#include "common.h"
-#include "buffer.h"
+#include "npy_buffer.h"
#include "get_attr_string.h"
#include "mem_overlap.h"
@@ -367,6 +367,10 @@ PyArray_DTypeFromObjectHelper(PyObject *obj, int maxdims,
}
Py_DECREF(ip);
}
+ else if (PyErr_Occurred()) {
+ PyErr_Clear(); /* TODO[gh-14801]: propagate crashes during attribute access? */
+ }
+
/* The array struct interface */
ip = PyArray_LookupSpecial_OnInstance(obj, "__array_struct__");
@@ -389,6 +393,9 @@ PyArray_DTypeFromObjectHelper(PyObject *obj, int maxdims,
}
Py_DECREF(ip);
}
+ else if (PyErr_Occurred()) {
+ PyErr_Clear(); /* TODO[gh-14801]: propagate crashes during attribute access? */
+ }
/* The old buffer interface */
#if !defined(NPY_PY3K)
@@ -419,6 +426,9 @@ PyArray_DTypeFromObjectHelper(PyObject *obj, int maxdims,
goto fail;
}
}
+ else if (PyErr_Occurred()) {
+ PyErr_Clear(); /* TODO[gh-14801]: propagate crashes during attribute access? */
+ }
/*
* If we reached the maximum recursion depth without hitting one
diff --git a/numpy/core/src/multiarray/conversion_utils.c b/numpy/core/src/multiarray/conversion_utils.c
index 5f0ad5817..ca126b4b1 100644
--- a/numpy/core/src/multiarray/conversion_utils.c
+++ b/numpy/core/src/multiarray/conversion_utils.c
@@ -16,7 +16,7 @@
#include "conversion_utils.h"
#include "alloc.h"
-#include "buffer.h"
+#include "npy_buffer.h"
static int
PyArray_PyIntAsInt_ErrMsg(PyObject *o, const char * msg) NPY_GCC_NONNULL(2);
diff --git a/numpy/core/src/multiarray/ctors.c b/numpy/core/src/multiarray/ctors.c
index 9b6f59e3a..64933ae1b 100644
--- a/numpy/core/src/multiarray/ctors.c
+++ b/numpy/core/src/multiarray/ctors.c
@@ -19,7 +19,7 @@
#include "ctors.h"
#include "convert_datatype.h"
#include "shape.h"
-#include "buffer.h"
+#include "npy_buffer.h"
#include "lowlevel_strided_loops.h"
#include "methods.h"
#include "_datetime.h"
@@ -852,6 +852,10 @@ discover_dimensions(PyObject *obj, int *maxndim, npy_intp *d, int check_it,
return 0;
}
}
+ else if (PyErr_Occurred()) {
+ PyErr_Clear(); /* TODO[gh-14801]: propagate crashes during attribute access? */
+ }
+
/* obj has the __array_interface__ interface */
e = PyArray_LookupSpecial_OnInstance(obj, "__array_interface__");
@@ -881,6 +885,9 @@ discover_dimensions(PyObject *obj, int *maxndim, npy_intp *d, int check_it,
return 0;
}
}
+ else if (PyErr_Occurred()) {
+ PyErr_Clear(); /* TODO[gh-14801]: propagate crashes during attribute access? */
+ }
seq = PySequence_Fast(obj, "Could not convert object to sequence");
if (seq == NULL) {
@@ -2351,7 +2358,11 @@ PyArray_FromStructInterface(PyObject *input)
attr = PyArray_LookupSpecial_OnInstance(input, "__array_struct__");
if (attr == NULL) {
- return Py_NotImplemented;
+ if (PyErr_Occurred()) {
+ return NULL;
+ } else {
+ return Py_NotImplemented;
+ }
}
if (!NpyCapsule_Check(attr)) {
goto fail;
@@ -2463,6 +2474,9 @@ PyArray_FromInterface(PyObject *origin)
iface = PyArray_LookupSpecial_OnInstance(origin,
"__array_interface__");
if (iface == NULL) {
+ if (PyErr_Occurred()) {
+ PyErr_Clear(); /* TODO[gh-14801]: propagate crashes during attribute access? */
+ }
return Py_NotImplemented;
}
if (!PyDict_Check(iface)) {
@@ -2716,6 +2730,9 @@ PyArray_FromArrayAttr(PyObject *op, PyArray_Descr *typecode, PyObject *context)
array_meth = PyArray_LookupSpecial_OnInstance(op, "__array__");
if (array_meth == NULL) {
+ if (PyErr_Occurred()) {
+ PyErr_Clear(); /* TODO[gh-14801]: propagate crashes during attribute access? */
+ }
return Py_NotImplemented;
}
if (context == NULL) {
diff --git a/numpy/core/src/multiarray/descriptor.c b/numpy/core/src/multiarray/descriptor.c
index 522b69307..d4e18e457 100644
--- a/numpy/core/src/multiarray/descriptor.c
+++ b/numpy/core/src/multiarray/descriptor.c
@@ -19,7 +19,7 @@
#include "descriptor.h"
#include "alloc.h"
#include "assert.h"
-#include "buffer.h"
+#include "npy_buffer.h"
/*
* offset: A starting offset.
diff --git a/numpy/core/src/multiarray/getset.c b/numpy/core/src/multiarray/getset.c
index 116e37ce5..6e5d480d0 100644
--- a/numpy/core/src/multiarray/getset.c
+++ b/numpy/core/src/multiarray/getset.c
@@ -20,7 +20,7 @@
#include "arrayobject.h"
#include "mem_overlap.h"
#include "alloc.h"
-#include "buffer.h"
+#include "npy_buffer.h"
/******************* array attribute get and set routines ******************/
diff --git a/numpy/core/src/multiarray/multiarraymodule.c b/numpy/core/src/multiarray/multiarraymodule.c
index 9693275e7..9169814c2 100644
--- a/numpy/core/src/multiarray/multiarraymodule.c
+++ b/numpy/core/src/multiarray/multiarraymodule.c
@@ -118,6 +118,9 @@ PyArray_GetPriority(PyObject *obj, double default_)
ret = PyArray_LookupSpecial_OnInstance(obj, "__array_priority__");
if (ret == NULL) {
+ if (PyErr_Occurred()) {
+ PyErr_Clear(); /* TODO[gh-14801]: propagate crashes during attribute access? */
+ }
return default_;
}
@@ -1112,6 +1115,14 @@ _pyarray_correlate(PyArrayObject *ap1, PyArrayObject *ap2, int typenum,
n1 = PyArray_DIMS(ap1)[0];
n2 = PyArray_DIMS(ap2)[0];
+ if (n1 == 0) {
+ PyErr_SetString(PyExc_ValueError, "first array argument cannot be empty");
+ return NULL;
+ }
+ if (n2 == 0) {
+ PyErr_SetString(PyExc_ValueError, "second array argument cannot be empty");
+ return NULL;
+ }
if (n1 < n2) {
ret = ap1;
ap1 = ap2;
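The guard added above surfaces as a plain `ValueError` from Python; a minimal sketch, assuming a NumPy build that includes this change:

```python
import numpy as np

# Sketch (assumes a NumPy build with this change): correlating with an
# empty array is rejected with a ValueError rather than misbehaving.
try:
    np.correlate(np.array([]), np.ones(5), mode='full')
except ValueError as exc:
    print("rejected:", exc)
```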
@@ -2063,7 +2074,7 @@ array_fromfile(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *keywds)
if (file == NULL) {
return NULL;
}
-
+
if (offset != 0 && strcmp(sep, "") != 0) {
PyErr_SetString(PyExc_TypeError, "'offset' argument only permitted for binary files");
Py_XDECREF(type);
@@ -3265,7 +3276,7 @@ array_datetime_data(PyObject *NPY_UNUSED(dummy), PyObject *args)
}
meta = get_datetime_metadata_from_dtype(dtype);
- Py_DECREF(dtype);
+ Py_DECREF(dtype);
if (meta == NULL) {
return NULL;
}
diff --git a/numpy/core/src/multiarray/buffer.h b/numpy/core/src/multiarray/npy_buffer.h
index fae413c85..fae413c85 100644
--- a/numpy/core/src/multiarray/buffer.h
+++ b/numpy/core/src/multiarray/npy_buffer.h
diff --git a/numpy/core/src/multiarray/scalartypes.c.src b/numpy/core/src/multiarray/scalartypes.c.src
index 9adca6773..32d712e0c 100644
--- a/numpy/core/src/multiarray/scalartypes.c.src
+++ b/numpy/core/src/multiarray/scalartypes.c.src
@@ -28,7 +28,7 @@
#include "npy_import.h"
#include "dragon4.h"
#include "npy_longdouble.h"
-#include "buffer.h"
+#include "npy_buffer.h"
#include <stdlib.h>
diff --git a/numpy/core/tests/test_datetime.py b/numpy/core/tests/test_datetime.py
index 11f900c5f..a756dc7e7 100644
--- a/numpy/core/tests/test_datetime.py
+++ b/numpy/core/tests/test_datetime.py
@@ -483,6 +483,30 @@ class TestDateTime(object):
assert_equal(np.datetime64(a, '[Y]'), np.datetime64('NaT', '[Y]'))
assert_equal(np.datetime64(a, '[W]'), np.datetime64('NaT', '[W]'))
+ # NaN -> NaT
+ nan = np.array([np.nan] * 8)
+ fnan = nan.astype('f')
+ lnan = nan.astype('g')
+ cnan = nan.astype('D')
+ cfnan = nan.astype('F')
+ clnan = nan.astype('G')
+
+ nat = np.array([np.datetime64('NaT')] * 8)
+ assert_equal(nan.astype('M8[ns]'), nat)
+ assert_equal(fnan.astype('M8[ns]'), nat)
+ assert_equal(lnan.astype('M8[ns]'), nat)
+ assert_equal(cnan.astype('M8[ns]'), nat)
+ assert_equal(cfnan.astype('M8[ns]'), nat)
+ assert_equal(clnan.astype('M8[ns]'), nat)
+
+ nat = np.array([np.timedelta64('NaT')] * 8)
+ assert_equal(nan.astype('timedelta64[ns]'), nat)
+ assert_equal(fnan.astype('timedelta64[ns]'), nat)
+ assert_equal(lnan.astype('timedelta64[ns]'), nat)
+ assert_equal(cnan.astype('timedelta64[ns]'), nat)
+ assert_equal(cfnan.astype('timedelta64[ns]'), nat)
+ assert_equal(clnan.astype('timedelta64[ns]'), nat)
+
def test_days_creation(self):
assert_equal(np.array('1599', dtype='M8[D]').astype('i8'),
(1600-1970)*365 - (1972-1600)/4 + 3 - 365)
diff --git a/numpy/core/tests/test_deprecations.py b/numpy/core/tests/test_deprecations.py
index 8bffaa9af..363ff26db 100644
--- a/numpy/core/tests/test_deprecations.py
+++ b/numpy/core/tests/test_deprecations.py
@@ -172,7 +172,7 @@ class TestComparisonDeprecations(_DeprecationTestCase):
# (warning is issued a couple of times here)
self.assert_deprecated(op, args=(a, a[:-1]), num=None)
- # Element comparison error (numpy array can't be compared).
+ # Ragged array comparison returns True/False
a = np.array([1, np.array([1,2,3])], dtype=object)
b = np.array([1, np.array([1,2,3])], dtype=object)
self.assert_deprecated(op, args=(a, b), num=None)
diff --git a/numpy/core/tests/test_issue14735.py b/numpy/core/tests/test_issue14735.py
new file mode 100644
index 000000000..6105c8e6a
--- /dev/null
+++ b/numpy/core/tests/test_issue14735.py
@@ -0,0 +1,29 @@
+import pytest
+import warnings
+import numpy as np
+
+
+class Wrapper:
+ def __init__(self, array):
+ self.array = array
+
+ def __len__(self):
+ return len(self.array)
+
+ def __getitem__(self, item):
+ return type(self)(self.array[item])
+
+ def __getattr__(self, name):
+ if name.startswith("__array_"):
+ warnings.warn("object got converted", UserWarning, stacklevel=1)
+
+ return getattr(self.array, name)
+
+ def __repr__(self):
+ return "<Wrapper({self.array})>".format(self=self)
+
+@pytest.mark.filterwarnings("error")
+def test_getattr_warning():
+ array = Wrapper(np.arange(10))
+ with pytest.raises(UserWarning, match="object got converted"):
+ np.asarray(array)
diff --git a/numpy/core/tests/test_longdouble.py b/numpy/core/tests/test_longdouble.py
index 59ac5923c..2b6e1c5a2 100644
--- a/numpy/core/tests/test_longdouble.py
+++ b/numpy/core/tests/test_longdouble.py
@@ -71,6 +71,38 @@ def test_fromstring():
err_msg="reading '%s'" % s)
+def test_fromstring_complex():
+ for ctype in ["complex", "cdouble", "cfloat"]:
+ # Check spacing between separator
+ assert_equal(np.fromstring("1, 2 , 3 ,4", sep=",", dtype=ctype),
+ np.array([1., 2., 3., 4.]))
+ # Real component not specified
+ assert_equal(np.fromstring("1j, -2j, 3j, 4e1j", sep=",", dtype=ctype),
+ np.array([1.j, -2.j, 3.j, 40.j]))
+ # Both components specified
+ assert_equal(np.fromstring("1+1j,2-2j, -3+3j, -4e1+4j", sep=",", dtype=ctype),
+ np.array([1. + 1.j, 2. - 2.j, - 3. + 3.j, - 40. + 4j]))
+ # Spaces at wrong places
+ with assert_warns(DeprecationWarning):
+ assert_equal(np.fromstring("1+2 j,3", dtype=ctype, sep=","),
+ np.array([1.]))
+ with assert_warns(DeprecationWarning):
+ assert_equal(np.fromstring("1+ 2j,3", dtype=ctype, sep=","),
+ np.array([1.]))
+ with assert_warns(DeprecationWarning):
+ assert_equal(np.fromstring("1 +2j,3", dtype=ctype, sep=","),
+ np.array([1.]))
+ with assert_warns(DeprecationWarning):
+ assert_equal(np.fromstring("1+j", dtype=ctype, sep=","),
+ np.array([1.]))
+ with assert_warns(DeprecationWarning):
+ assert_equal(np.fromstring("1+", dtype=ctype, sep=","),
+ np.array([1.]))
+ with assert_warns(DeprecationWarning):
+ assert_equal(np.fromstring("1j+1", dtype=ctype, sep=","),
+ np.array([1j]))
+
+
def test_fromstring_bogus():
with assert_warns(DeprecationWarning):
assert_equal(np.fromstring("1. 2. 3. flop 4.", dtype=float, sep=" "),
@@ -104,6 +136,88 @@ class TestFileBased(object):
res = np.fromfile(path, dtype=float, sep=" ")
assert_equal(res, np.array([1., 2., 3.]))
+ def test_fromfile_complex(self):
+ for ctype in ["complex", "cdouble", "cfloat"]:
+ # Check spacing between separator and only real component specified
+ with temppath() as path:
+ with open(path, 'wt') as f:
+ f.write("1, 2 , 3 ,4\n")
+
+ res = np.fromfile(path, dtype=ctype, sep=",")
+ assert_equal(res, np.array([1., 2., 3., 4.]))
+
+ # Real component not specified
+ with temppath() as path:
+ with open(path, 'wt') as f:
+ f.write("1j, -2j, 3j, 4e1j\n")
+
+ res = np.fromfile(path, dtype=ctype, sep=",")
+ assert_equal(res, np.array([1.j, -2.j, 3.j, 40.j]))
+
+ # Both components specified
+ with temppath() as path:
+ with open(path, 'wt') as f:
+ f.write("1+1j,2-2j, -3+3j, -4e1+4j\n")
+
+ res = np.fromfile(path, dtype=ctype, sep=",")
+ assert_equal(res, np.array([1. + 1.j, 2. - 2.j, - 3. + 3.j, - 40. + 4j]))
+
+ # Spaces at wrong places
+ with temppath() as path:
+ with open(path, 'wt') as f:
+ f.write("1+2 j,3\n")
+
+ with assert_warns(DeprecationWarning):
+ res = np.fromfile(path, dtype=ctype, sep=",")
+ assert_equal(res, np.array([1.]))
+
+ # Spaces at wrong places
+ with temppath() as path:
+ with open(path, 'wt') as f:
+ f.write("1+ 2j,3\n")
+
+ with assert_warns(DeprecationWarning):
+ res = np.fromfile(path, dtype=ctype, sep=",")
+ assert_equal(res, np.array([1.]))
+
+ # Spaces at wrong places
+ with temppath() as path:
+ with open(path, 'wt') as f:
+ f.write("1 +2j,3\n")
+
+ with assert_warns(DeprecationWarning):
+ res = np.fromfile(path, dtype=ctype, sep=",")
+ assert_equal(res, np.array([1.]))
+
+ # Spaces at wrong places
+ with temppath() as path:
+ with open(path, 'wt') as f:
+ f.write("1+j\n")
+
+ with assert_warns(DeprecationWarning):
+ res = np.fromfile(path, dtype=ctype, sep=",")
+ assert_equal(res, np.array([1.]))
+
+ # Spaces at wrong places
+ with temppath() as path:
+ with open(path, 'wt') as f:
+ f.write("1+\n")
+
+ with assert_warns(DeprecationWarning):
+ res = np.fromfile(path, dtype=ctype, sep=",")
+ assert_equal(res, np.array([1.]))
+
+ # Spaces at wrong places
+ with temppath() as path:
+ with open(path, 'wt') as f:
+ f.write("1j+1\n")
+
+ with assert_warns(DeprecationWarning):
+ res = np.fromfile(path, dtype=ctype, sep=",")
+ assert_equal(res, np.array([1.j]))
+
+
+
@pytest.mark.skipif(string_to_longdouble_inaccurate,
reason="Need strtold_l")
def test_fromfile(self):
diff --git a/numpy/core/tests/test_numeric.py b/numpy/core/tests/test_numeric.py
index 1358b45e9..4d322e50e 100644
--- a/numpy/core/tests/test_numeric.py
+++ b/numpy/core/tests/test_numeric.py
@@ -2567,6 +2567,11 @@ class TestCorrelate(object):
z = np.correlate(y, x, mode='full')
assert_array_almost_equal(z, r_z)
+ def test_zero_size(self):
+ with pytest.raises(ValueError):
+ np.correlate(np.array([]), np.ones(1000), mode='full')
+ with pytest.raises(ValueError):
+ np.correlate(np.ones(1000), np.array([]), mode='full')
class TestConvolve(object):
def test_object(self):
diff --git a/numpy/core/tests/test_ufunc.py b/numpy/core/tests/test_ufunc.py
index 707c690dd..ba1aee55b 100644
--- a/numpy/core/tests/test_ufunc.py
+++ b/numpy/core/tests/test_ufunc.py
@@ -1092,7 +1092,6 @@ class TestUfunc(object):
arr0d = np.array(HasComparisons())
assert_equal(arr0d == arr0d, True)
assert_equal(np.equal(arr0d, arr0d), True) # normal behavior is a cast
- assert_equal(np.equal(arr0d, arr0d, dtype=object), '==')
arr1d = np.array([HasComparisons()])
assert_equal(arr1d == arr1d, np.array([True]))
diff --git a/numpy/core/tests/test_umath.py b/numpy/core/tests/test_umath.py
index 9b4ce9e47..1d71766ef 100644
--- a/numpy/core/tests/test_umath.py
+++ b/numpy/core/tests/test_umath.py
@@ -170,7 +170,7 @@ class TestOut(object):
class TestComparisons(object):
def test_ignore_object_identity_in_equal(self):
- # Check error raised when comparing identical objects whose comparison
+ # Check comparing identical objects whose comparison
# is not a simple boolean, e.g., arrays that are compared elementwise.
a = np.array([np.array([1, 2, 3]), None], dtype=object)
assert_raises(ValueError, np.equal, a, a)
@@ -188,7 +188,7 @@ class TestComparisons(object):
assert_equal(np.equal(a, a), [False])
def test_ignore_object_identity_in_not_equal(self):
- # Check error raised when comparing identical objects whose comparison
+ # Check comparing identical objects whose comparison
# is not a simple boolean, e.g., arrays that are compared elementwise.
a = np.array([np.array([1, 2, 3]), None], dtype=object)
assert_raises(ValueError, np.not_equal, a, a)
diff --git a/numpy/lib/nanfunctions.py b/numpy/lib/nanfunctions.py
index 18ccab3b8..457cca146 100644
--- a/numpy/lib/nanfunctions.py
+++ b/numpy/lib/nanfunctions.py
@@ -99,7 +99,7 @@ def _replace_nan(a, val):
if a.dtype == np.object_:
# object arrays do not support `isnan` (gh-9009), so make a guess
- mask = a != a
+ mask = np.not_equal(a, a, dtype=bool)
elif issubclass(a.dtype.type, np.inexact):
mask = np.isnan(a)
else:
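The replacement expression relies on NaN being the only common value unequal to itself; a minimal sketch of the mask it computes for an object array:

```python
import numpy as np

# Sketch of the mask used above: on an object array,
# np.not_equal(a, a, dtype=bool) flags NaN entries (NaN != NaN)
# while forcing a boolean result instead of an object result.
a = np.array([1.0, np.nan, 'x'], dtype=object)
mask = np.not_equal(a, a, dtype=bool)
print(mask)
```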
diff --git a/numpy/lib/tests/test_function_base.py b/numpy/lib/tests/test_function_base.py
index 1eae8ccfb..9075ff538 100644
--- a/numpy/lib/tests/test_function_base.py
+++ b/numpy/lib/tests/test_function_base.py
@@ -2523,7 +2523,7 @@ class TestPercentile(object):
assert_equal(np.percentile(x, 0, interpolation='nearest'), np.nan)
def test_fraction(self):
- x = [Fraction(i, 2) for i in np.arange(8)]
+ x = [Fraction(i, 2) for i in range(8)]
p = np.percentile(x, Fraction(0))
assert_equal(p, Fraction(0))
@@ -2943,7 +2943,7 @@ class TestQuantile(object):
def test_fraction(self):
# fractional input, integral quantile
- x = [Fraction(i, 2) for i in np.arange(8)]
+ x = [Fraction(i, 2) for i in range(8)]
q = np.quantile(x, 0)
assert_equal(q, 0)
diff --git a/numpy/linalg/linalg.py b/numpy/linalg/linalg.py
index 816a200eb..665b9fbec 100644
--- a/numpy/linalg/linalg.py
+++ b/numpy/linalg/linalg.py
@@ -194,37 +194,33 @@ def _fastCopyAndTranspose(type, *arrays):
else:
return cast_arrays
-def _assertRank2(*arrays):
+def _assert_2d(*arrays):
for a in arrays:
if a.ndim != 2:
raise LinAlgError('%d-dimensional array given. Array must be '
'two-dimensional' % a.ndim)
-def _assertRankAtLeast2(*arrays):
+def _assert_stacked_2d(*arrays):
for a in arrays:
if a.ndim < 2:
raise LinAlgError('%d-dimensional array given. Array must be '
'at least two-dimensional' % a.ndim)
-def _assertNdSquareness(*arrays):
+def _assert_stacked_square(*arrays):
for a in arrays:
m, n = a.shape[-2:]
if m != n:
raise LinAlgError('Last 2 dimensions of the array must be square')
-def _assertFinite(*arrays):
+def _assert_finite(*arrays):
for a in arrays:
- if not (isfinite(a).all()):
+ if not isfinite(a).all():
raise LinAlgError("Array must not contain infs or NaNs")
-def _isEmpty2d(arr):
+def _is_empty_2d(arr):
# check size first for efficiency
return arr.size == 0 and product(arr.shape[-2:]) == 0
-def _assertNoEmpty2d(*arrays):
- for a in arrays:
- if _isEmpty2d(a):
- raise LinAlgError("Arrays cannot be empty")
def transpose(a):
"""
@@ -386,8 +382,8 @@ def solve(a, b):
"""
a, _ = _makearray(a)
- _assertRankAtLeast2(a)
- _assertNdSquareness(a)
+ _assert_stacked_2d(a)
+ _assert_stacked_square(a)
b, wrap = _makearray(b)
t, result_t = _commonType(a, b)
@@ -542,8 +538,8 @@ def inv(a):
"""
a, wrap = _makearray(a)
- _assertRankAtLeast2(a)
- _assertNdSquareness(a)
+ _assert_stacked_2d(a)
+ _assert_stacked_square(a)
t, result_t = _commonType(a)
signature = 'D->D' if isComplexType(t) else 'd->d'
@@ -622,8 +618,8 @@ def matrix_power(a, n):
"""
a = asanyarray(a)
- _assertRankAtLeast2(a)
- _assertNdSquareness(a)
+ _assert_stacked_2d(a)
+ _assert_stacked_square(a)
try:
n = operator.index(n)
@@ -752,8 +748,8 @@ def cholesky(a):
extobj = get_linalg_error_extobj(_raise_linalgerror_nonposdef)
gufunc = _umath_linalg.cholesky_lo
a, wrap = _makearray(a)
- _assertRankAtLeast2(a)
- _assertNdSquareness(a)
+ _assert_stacked_2d(a)
+ _assert_stacked_square(a)
t, result_t = _commonType(a)
signature = 'D->D' if isComplexType(t) else 'd->d'
r = gufunc(a, signature=signature, extobj=extobj)
@@ -895,7 +891,7 @@ def qr(a, mode='reduced'):
raise ValueError("Unrecognized mode '%s'" % mode)
a, wrap = _makearray(a)
- _assertRank2(a)
+ _assert_2d(a)
m, n = a.shape
t, result_t = _commonType(a)
a = _fastCopyAndTranspose(t, a)
@@ -1047,9 +1043,9 @@ def eigvals(a):
"""
a, wrap = _makearray(a)
- _assertRankAtLeast2(a)
- _assertNdSquareness(a)
- _assertFinite(a)
+ _assert_stacked_2d(a)
+ _assert_stacked_square(a)
+ _assert_finite(a)
t, result_t = _commonType(a)
extobj = get_linalg_error_extobj(
@@ -1157,8 +1153,8 @@ def eigvalsh(a, UPLO='L'):
gufunc = _umath_linalg.eigvalsh_up
a, wrap = _makearray(a)
- _assertRankAtLeast2(a)
- _assertNdSquareness(a)
+ _assert_stacked_2d(a)
+ _assert_stacked_square(a)
t, result_t = _commonType(a)
signature = 'D->d' if isComplexType(t) else 'd->d'
w = gufunc(a, signature=signature, extobj=extobj)
@@ -1294,9 +1290,9 @@ def eig(a):
"""
a, wrap = _makearray(a)
- _assertRankAtLeast2(a)
- _assertNdSquareness(a)
- _assertFinite(a)
+ _assert_stacked_2d(a)
+ _assert_stacked_square(a)
+ _assert_finite(a)
t, result_t = _commonType(a)
extobj = get_linalg_error_extobj(
@@ -1435,8 +1431,8 @@ def eigh(a, UPLO='L'):
raise ValueError("UPLO argument must be 'L' or 'U'")
a, wrap = _makearray(a)
- _assertRankAtLeast2(a)
- _assertNdSquareness(a)
+ _assert_stacked_2d(a)
+ _assert_stacked_square(a)
t, result_t = _commonType(a)
extobj = get_linalg_error_extobj(
@@ -1608,7 +1604,7 @@ def svd(a, full_matrices=True, compute_uv=True, hermitian=False):
s = abs(s)
return s
- _assertRankAtLeast2(a)
+ _assert_stacked_2d(a)
t, result_t = _commonType(a)
extobj = get_linalg_error_extobj(_raise_linalgerror_svd_nonconvergence)
@@ -1729,7 +1725,8 @@ def cond(x, p=None):
"""
x = asarray(x) # in case we have a matrix
- _assertNoEmpty2d(x)
+ if _is_empty_2d(x):
+ raise LinAlgError("cond is not defined on empty arrays")
if p is None or p == 2 or p == -2:
s = svd(x, compute_uv=False)
with errstate(all='ignore'):
@@ -1740,8 +1737,8 @@ def cond(x, p=None):
else:
# Call inv(x) ignoring errors. The result array will
# contain nans in the entries where inversion failed.
- _assertRankAtLeast2(x)
- _assertNdSquareness(x)
+ _assert_stacked_2d(x)
+ _assert_stacked_square(x)
t, result_t = _commonType(x)
signature = 'D->D' if isComplexType(t) else 'd->d'
with errstate(all='ignore'):
@@ -1956,7 +1953,7 @@ def pinv(a, rcond=1e-15, hermitian=False):
"""
a, wrap = _makearray(a)
rcond = asarray(rcond)
- if _isEmpty2d(a):
+ if _is_empty_2d(a):
m, n = a.shape[-2:]
res = empty(a.shape[:-2] + (n, m), dtype=a.dtype)
return wrap(res)
@@ -2052,8 +2049,8 @@ def slogdet(a):
"""
a = asarray(a)
- _assertRankAtLeast2(a)
- _assertNdSquareness(a)
+ _assert_stacked_2d(a)
+ _assert_stacked_square(a)
t, result_t = _commonType(a)
real_t = _realType(result_t)
signature = 'D->Dd' if isComplexType(t) else 'd->dd'
@@ -2112,8 +2109,8 @@ def det(a):
"""
a = asarray(a)
- _assertRankAtLeast2(a)
- _assertNdSquareness(a)
+ _assert_stacked_2d(a)
+ _assert_stacked_square(a)
t, result_t = _commonType(a)
signature = 'D->D' if isComplexType(t) else 'd->d'
r = _umath_linalg.det(a, signature=signature)
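The renamed `_assert_stacked_square` check above is what callers hit when the trailing two dimensions are not square; a minimal sketch:

```python
import numpy as np

# Sketch of the check performed by _assert_stacked_square: the
# determinant is only defined for (stacks of) square matrices, so a
# 2x3 input raises LinAlgError.
try:
    np.linalg.det(np.ones((2, 3)))
except np.linalg.LinAlgError as exc:
    print("rejected:", exc)
```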
@@ -2224,7 +2221,7 @@ def lstsq(a, b, rcond="warn"):
is_1d = b.ndim == 1
if is_1d:
b = b[:, newaxis]
- _assertRank2(a, b)
+ _assert_2d(a, b)
m, n = a.shape[-2:]
m2, n_rhs = b.shape[-2:]
if m != m2:
@@ -2657,7 +2654,7 @@ def multi_dot(arrays):
arrays[0] = atleast_2d(arrays[0])
if arrays[-1].ndim == 1:
arrays[-1] = atleast_2d(arrays[-1]).T
- _assertRank2(*arrays)
+ _assert_2d(*arrays)
# _multi_dot_three is much faster than _multi_dot_matrix_chain_order
if n == 3:
diff --git a/numpy/random/.gitignore b/numpy/random/.gitignore
new file mode 100644
index 000000000..fea3f955a
--- /dev/null
+++ b/numpy/random/.gitignore
@@ -0,0 +1,3 @@
+# generated files
+_bounded_integers.pyx
+_bounded_integers.pxd
diff --git a/numpy/random/_bounded_integers.pxd b/numpy/random/_bounded_integers.pxd
deleted file mode 100644
index d3ee97a70..000000000
--- a/numpy/random/_bounded_integers.pxd
+++ /dev/null
@@ -1,29 +0,0 @@
-from libc.stdint cimport (uint8_t, uint16_t, uint32_t, uint64_t,
- int8_t, int16_t, int32_t, int64_t, intptr_t)
-import numpy as np
-cimport numpy as np
-ctypedef np.npy_bool bool_t
-
-from ._bit_generator cimport bitgen_t
-
-cdef inline uint64_t _gen_mask(uint64_t max_val) nogil:
- """Mask generator for use in bounded random numbers"""
- # Smallest bit mask >= max
- cdef uint64_t mask = max_val
- mask |= mask >> 1
- mask |= mask >> 2
- mask |= mask >> 4
- mask |= mask >> 8
- mask |= mask >> 16
- mask |= mask >> 32
- return mask
-
-cdef object _rand_uint64(object low, object high, object size, bint use_masked, bint closed, bitgen_t *state, object lock)
-cdef object _rand_uint32(object low, object high, object size, bint use_masked, bint closed, bitgen_t *state, object lock)
-cdef object _rand_uint16(object low, object high, object size, bint use_masked, bint closed, bitgen_t *state, object lock)
-cdef object _rand_uint8(object low, object high, object size, bint use_masked, bint closed, bitgen_t *state, object lock)
-cdef object _rand_bool(object low, object high, object size, bint use_masked, bint closed, bitgen_t *state, object lock)
-cdef object _rand_int64(object low, object high, object size, bint use_masked, bint closed, bitgen_t *state, object lock)
-cdef object _rand_int32(object low, object high, object size, bint use_masked, bint closed, bitgen_t *state, object lock)
-cdef object _rand_int16(object low, object high, object size, bint use_masked, bint closed, bitgen_t *state, object lock)
-cdef object _rand_int8(object low, object high, object size, bint use_masked, bint closed, bitgen_t *state, object lock)
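The deleted `_gen_mask` helper uses the classic bit-smearing trick; for illustration, a plain-Python equivalent (the Cython original is being moved out of this generated file, not removed from the library):

```python
def gen_mask(max_val):
    # Plain-Python equivalent of the deleted Cython _gen_mask helper:
    # OR-ing in progressively shifted copies smears the highest set
    # bit downward, yielding the smallest all-ones mask >= max_val.
    mask = max_val
    for shift in (1, 2, 4, 8, 16, 32):
        mask |= mask >> shift
    return mask

print(gen_mask(100))
```

For example, 100 is `0b1100100`, and smearing produces `0b1111111`, i.e. 127.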
diff --git a/numpy/random/_bounded_integers.pyx b/numpy/random/_bounded_integers.pyx
deleted file mode 100644
index d6a534b43..000000000
--- a/numpy/random/_bounded_integers.pyx
+++ /dev/null
@@ -1,1564 +0,0 @@
-#!python
-#cython: wraparound=False, nonecheck=False, boundscheck=False, cdivision=True
-
-import numpy as np
-cimport numpy as np
-
-__all__ = []
-
-np.import_array()
-
-cdef extern from "include/distributions.h":
- # Generate random numbers in closed interval [off, off + rng].
- uint64_t random_bounded_uint64(bitgen_t *bitgen_state,
- uint64_t off, uint64_t rng,
- uint64_t mask, bint use_masked) nogil
- uint32_t random_buffered_bounded_uint32(bitgen_t *bitgen_state,
- uint32_t off, uint32_t rng,
- uint32_t mask, bint use_masked,
- int *bcnt, uint32_t *buf) nogil
- uint16_t random_buffered_bounded_uint16(bitgen_t *bitgen_state,
- uint16_t off, uint16_t rng,
- uint16_t mask, bint use_masked,
- int *bcnt, uint32_t *buf) nogil
- uint8_t random_buffered_bounded_uint8(bitgen_t *bitgen_state,
- uint8_t off, uint8_t rng,
- uint8_t mask, bint use_masked,
- int *bcnt, uint32_t *buf) nogil
- np.npy_bool random_buffered_bounded_bool(bitgen_t *bitgen_state,
- np.npy_bool off, np.npy_bool rng,
- np.npy_bool mask, bint use_masked,
- int *bcnt, uint32_t *buf) nogil
- void random_bounded_uint64_fill(bitgen_t *bitgen_state,
- uint64_t off, uint64_t rng, np.npy_intp cnt,
- bint use_masked,
- uint64_t *out) nogil
- void random_bounded_uint32_fill(bitgen_t *bitgen_state,
- uint32_t off, uint32_t rng, np.npy_intp cnt,
- bint use_masked,
- uint32_t *out) nogil
- void random_bounded_uint16_fill(bitgen_t *bitgen_state,
- uint16_t off, uint16_t rng, np.npy_intp cnt,
- bint use_masked,
- uint16_t *out) nogil
- void random_bounded_uint8_fill(bitgen_t *bitgen_state,
- uint8_t off, uint8_t rng, np.npy_intp cnt,
- bint use_masked,
- uint8_t *out) nogil
- void random_bounded_bool_fill(bitgen_t *bitgen_state,
- np.npy_bool off, np.npy_bool rng, np.npy_intp cnt,
- bint use_masked,
- np.npy_bool *out) nogil
-
-
-
-_integers_types = {'bool': (0, 2),
- 'int8': (-2**7, 2**7),
- 'int16': (-2**15, 2**15),
- 'int32': (-2**31, 2**31),
- 'int64': (-2**63, 2**63),
- 'uint8': (0, 2**8),
- 'uint16': (0, 2**16),
- 'uint32': (0, 2**32),
- 'uint64': (0, 2**64)}
-
-
-cdef object _rand_uint32_broadcast(np.ndarray low, np.ndarray high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
- Array path for smaller integer types
-
- This path is simpler since the high value in the open interval [low, high)
- must be in-range for the next larger type, uint64. Here we case to
- this type for checking and the recast to uint32 when producing the
- random integers.
- """
- cdef uint32_t rng, last_rng, off, val, mask, out_val, is_open
- cdef uint32_t buf
- cdef uint32_t *out_data
- cdef uint64_t low_v, high_v
- cdef np.ndarray low_arr, high_arr, out_arr
- cdef np.npy_intp i, cnt
- cdef np.broadcast it
- cdef int buf_rem = 0
-
- # Array path
- is_open = not closed
- low_arr = <np.ndarray>low
- high_arr = <np.ndarray>high
- if np.any(np.less(low_arr, 0)):
- raise ValueError('low is out of bounds for uint32')
- if closed:
- high_comp = np.greater_equal
- low_high_comp = np.greater
- else:
- high_comp = np.greater
- low_high_comp = np.greater_equal
-
- if np.any(high_comp(high_arr, 0X100000000ULL)):
- raise ValueError('high is out of bounds for uint32')
- if np.any(low_high_comp(low_arr, high_arr)):
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- low_arr = <np.ndarray>np.PyArray_FROM_OTF(low, np.NPY_UINT64, np.NPY_ALIGNED | np.NPY_FORCECAST)
- high_arr = <np.ndarray>np.PyArray_FROM_OTF(high, np.NPY_UINT64, np.NPY_ALIGNED | np.NPY_FORCECAST)
-
- if size is not None:
- out_arr = <np.ndarray>np.empty(size, np.uint32)
- else:
- it = np.PyArray_MultiIterNew2(low_arr, high_arr)
- out_arr = <np.ndarray>np.empty(it.shape, np.uint32)
-
- it = np.PyArray_MultiIterNew3(low_arr, high_arr, out_arr)
- out_data = <uint32_t *>np.PyArray_DATA(out_arr)
- cnt = np.PyArray_SIZE(out_arr)
- mask = last_rng = 0
- with lock, nogil:
- for i in range(cnt):
- low_v = (<uint64_t*>np.PyArray_MultiIter_DATA(it, 0))[0]
- high_v = (<uint64_t*>np.PyArray_MultiIter_DATA(it, 1))[0]
-            # Subtract 1 since generator produces values on the closed interval [off, off+rng]
- rng = <uint32_t>((high_v - is_open) - low_v)
- off = <uint32_t>(<uint64_t>low_v)
-
- if rng != last_rng:
- # Smallest bit mask >= max
- mask = <uint32_t>_gen_mask(rng)
-
- out_data[i] = random_buffered_bounded_uint32(state, off, rng, mask, use_masked, &buf_rem, &buf)
-
- np.PyArray_MultiIter_NEXT(it)
- return out_arr
-
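The masked rejection strategy used by the `use_masked=True` branch above can be sketched in pure Python. The names `gen_mask` and `bounded_uint32` below are illustrative stand-ins for the C helpers `_gen_mask` and `random_buffered_bounded_uint32`; this is a minimal sketch of the technique, not the actual buffered C implementation.

```python
import random

def gen_mask(rng):
    """Smallest all-ones bit mask m with m >= rng (covers rng's bits)."""
    mask = rng
    mask |= mask >> 1
    mask |= mask >> 2
    mask |= mask >> 4
    mask |= mask >> 8
    mask |= mask >> 16
    return mask

def bounded_uint32(off, rng, draw=lambda: random.getrandbits(32)):
    """Uniform draw on the closed interval [off, off + rng].

    Mask the raw word down to rng's bit width, then reject any value
    above rng; each accepted value is exactly uniform on [0, rng].
    """
    mask = gen_mask(rng)
    while True:
        val = draw() & mask
        if val <= rng:
            return off + val
```

Caching the mask while `rng` is unchanged, as the loop above does with `last_rng`, avoids recomputing it per element when the broadcast bounds repeat.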
-cdef object _rand_uint16_broadcast(np.ndarray low, np.ndarray high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
- Array path for smaller integer types
-
- This path is simpler since the high value in the open interval [low, high)
-    must be in-range for the next larger type, uint32. Here we cast to
-    this type for checking and then recast to uint16 when producing the
-    random integers.
- """
- cdef uint16_t rng, last_rng, off, val, mask, out_val, is_open
- cdef uint32_t buf
- cdef uint16_t *out_data
- cdef uint32_t low_v, high_v
- cdef np.ndarray low_arr, high_arr, out_arr
- cdef np.npy_intp i, cnt
- cdef np.broadcast it
- cdef int buf_rem = 0
-
- # Array path
- is_open = not closed
- low_arr = <np.ndarray>low
- high_arr = <np.ndarray>high
- if np.any(np.less(low_arr, 0)):
- raise ValueError('low is out of bounds for uint16')
- if closed:
- high_comp = np.greater_equal
- low_high_comp = np.greater
- else:
- high_comp = np.greater
- low_high_comp = np.greater_equal
-
- if np.any(high_comp(high_arr, 0X10000UL)):
- raise ValueError('high is out of bounds for uint16')
- if np.any(low_high_comp(low_arr, high_arr)):
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- low_arr = <np.ndarray>np.PyArray_FROM_OTF(low, np.NPY_UINT32, np.NPY_ALIGNED | np.NPY_FORCECAST)
- high_arr = <np.ndarray>np.PyArray_FROM_OTF(high, np.NPY_UINT32, np.NPY_ALIGNED | np.NPY_FORCECAST)
-
- if size is not None:
- out_arr = <np.ndarray>np.empty(size, np.uint16)
- else:
- it = np.PyArray_MultiIterNew2(low_arr, high_arr)
- out_arr = <np.ndarray>np.empty(it.shape, np.uint16)
-
- it = np.PyArray_MultiIterNew3(low_arr, high_arr, out_arr)
- out_data = <uint16_t *>np.PyArray_DATA(out_arr)
- cnt = np.PyArray_SIZE(out_arr)
- mask = last_rng = 0
- with lock, nogil:
- for i in range(cnt):
- low_v = (<uint32_t*>np.PyArray_MultiIter_DATA(it, 0))[0]
- high_v = (<uint32_t*>np.PyArray_MultiIter_DATA(it, 1))[0]
-            # Subtract 1 since generator produces values on the closed interval [off, off+rng]
- rng = <uint16_t>((high_v - is_open) - low_v)
- off = <uint16_t>(<uint32_t>low_v)
-
- if rng != last_rng:
- # Smallest bit mask >= max
- mask = <uint16_t>_gen_mask(rng)
-
- out_data[i] = random_buffered_bounded_uint16(state, off, rng, mask, use_masked, &buf_rem, &buf)
-
- np.PyArray_MultiIter_NEXT(it)
- return out_arr
-
-cdef object _rand_uint8_broadcast(np.ndarray low, np.ndarray high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
- Array path for smaller integer types
-
- This path is simpler since the high value in the open interval [low, high)
-    must be in-range for the next larger type, uint16. Here we cast to
-    this type for checking and then recast to uint8 when producing the
-    random integers.
- """
- cdef uint8_t rng, last_rng, off, val, mask, out_val, is_open
- cdef uint32_t buf
- cdef uint8_t *out_data
- cdef uint16_t low_v, high_v
- cdef np.ndarray low_arr, high_arr, out_arr
- cdef np.npy_intp i, cnt
- cdef np.broadcast it
- cdef int buf_rem = 0
-
- # Array path
- is_open = not closed
- low_arr = <np.ndarray>low
- high_arr = <np.ndarray>high
- if np.any(np.less(low_arr, 0)):
- raise ValueError('low is out of bounds for uint8')
- if closed:
- high_comp = np.greater_equal
- low_high_comp = np.greater
- else:
- high_comp = np.greater
- low_high_comp = np.greater_equal
-
- if np.any(high_comp(high_arr, 0X100UL)):
- raise ValueError('high is out of bounds for uint8')
- if np.any(low_high_comp(low_arr, high_arr)):
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- low_arr = <np.ndarray>np.PyArray_FROM_OTF(low, np.NPY_UINT16, np.NPY_ALIGNED | np.NPY_FORCECAST)
- high_arr = <np.ndarray>np.PyArray_FROM_OTF(high, np.NPY_UINT16, np.NPY_ALIGNED | np.NPY_FORCECAST)
-
- if size is not None:
- out_arr = <np.ndarray>np.empty(size, np.uint8)
- else:
- it = np.PyArray_MultiIterNew2(low_arr, high_arr)
- out_arr = <np.ndarray>np.empty(it.shape, np.uint8)
-
- it = np.PyArray_MultiIterNew3(low_arr, high_arr, out_arr)
- out_data = <uint8_t *>np.PyArray_DATA(out_arr)
- cnt = np.PyArray_SIZE(out_arr)
- mask = last_rng = 0
- with lock, nogil:
- for i in range(cnt):
- low_v = (<uint16_t*>np.PyArray_MultiIter_DATA(it, 0))[0]
- high_v = (<uint16_t*>np.PyArray_MultiIter_DATA(it, 1))[0]
-            # Subtract 1 since generator produces values on the closed interval [off, off+rng]
- rng = <uint8_t>((high_v - is_open) - low_v)
- off = <uint8_t>(<uint16_t>low_v)
-
- if rng != last_rng:
- # Smallest bit mask >= max
- mask = <uint8_t>_gen_mask(rng)
-
- out_data[i] = random_buffered_bounded_uint8(state, off, rng, mask, use_masked, &buf_rem, &buf)
-
- np.PyArray_MultiIter_NEXT(it)
- return out_arr
-
-cdef object _rand_bool_broadcast(np.ndarray low, np.ndarray high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
- Array path for smaller integer types
-
- This path is simpler since the high value in the open interval [low, high)
-    must be in-range for the next larger type, uint8. Here we cast to
-    this type for checking and then recast to bool when producing the
-    random integers.
- """
- cdef bool_t rng, last_rng, off, val, mask, out_val, is_open
- cdef uint32_t buf
- cdef bool_t *out_data
- cdef uint8_t low_v, high_v
- cdef np.ndarray low_arr, high_arr, out_arr
- cdef np.npy_intp i, cnt
- cdef np.broadcast it
- cdef int buf_rem = 0
-
- # Array path
- is_open = not closed
- low_arr = <np.ndarray>low
- high_arr = <np.ndarray>high
- if np.any(np.less(low_arr, 0)):
- raise ValueError('low is out of bounds for bool')
- if closed:
- high_comp = np.greater_equal
- low_high_comp = np.greater
- else:
- high_comp = np.greater
- low_high_comp = np.greater_equal
-
- if np.any(high_comp(high_arr, 0x2UL)):
- raise ValueError('high is out of bounds for bool')
- if np.any(low_high_comp(low_arr, high_arr)):
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- low_arr = <np.ndarray>np.PyArray_FROM_OTF(low, np.NPY_UINT8, np.NPY_ALIGNED | np.NPY_FORCECAST)
- high_arr = <np.ndarray>np.PyArray_FROM_OTF(high, np.NPY_UINT8, np.NPY_ALIGNED | np.NPY_FORCECAST)
-
- if size is not None:
- out_arr = <np.ndarray>np.empty(size, np.bool_)
- else:
- it = np.PyArray_MultiIterNew2(low_arr, high_arr)
- out_arr = <np.ndarray>np.empty(it.shape, np.bool_)
-
- it = np.PyArray_MultiIterNew3(low_arr, high_arr, out_arr)
- out_data = <bool_t *>np.PyArray_DATA(out_arr)
- cnt = np.PyArray_SIZE(out_arr)
- mask = last_rng = 0
- with lock, nogil:
- for i in range(cnt):
- low_v = (<uint8_t*>np.PyArray_MultiIter_DATA(it, 0))[0]
- high_v = (<uint8_t*>np.PyArray_MultiIter_DATA(it, 1))[0]
-            # Subtract 1 since generator produces values on the closed interval [off, off+rng]
- rng = <bool_t>((high_v - is_open) - low_v)
- off = <bool_t>(<uint8_t>low_v)
-
- if rng != last_rng:
- # Smallest bit mask >= max
- mask = <bool_t>_gen_mask(rng)
-
- out_data[i] = random_buffered_bounded_bool(state, off, rng, mask, use_masked, &buf_rem, &buf)
-
- np.PyArray_MultiIter_NEXT(it)
- return out_arr
-
-cdef object _rand_int32_broadcast(np.ndarray low, np.ndarray high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
- Array path for smaller integer types
-
- This path is simpler since the high value in the open interval [low, high)
-    must be in-range for the next larger type, int64. Here we cast to
-    this type for checking and then recast to int32 when producing the
-    random integers.
- """
- cdef uint32_t rng, last_rng, off, val, mask, out_val, is_open
- cdef uint32_t buf
- cdef uint32_t *out_data
- cdef uint64_t low_v, high_v
- cdef np.ndarray low_arr, high_arr, out_arr
- cdef np.npy_intp i, cnt
- cdef np.broadcast it
- cdef int buf_rem = 0
-
- # Array path
- is_open = not closed
- low_arr = <np.ndarray>low
- high_arr = <np.ndarray>high
- if np.any(np.less(low_arr, -0x80000000LL)):
- raise ValueError('low is out of bounds for int32')
- if closed:
- high_comp = np.greater_equal
- low_high_comp = np.greater
- else:
- high_comp = np.greater
- low_high_comp = np.greater_equal
-
- if np.any(high_comp(high_arr, 0x80000000LL)):
- raise ValueError('high is out of bounds for int32')
- if np.any(low_high_comp(low_arr, high_arr)):
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- low_arr = <np.ndarray>np.PyArray_FROM_OTF(low, np.NPY_INT64, np.NPY_ALIGNED | np.NPY_FORCECAST)
- high_arr = <np.ndarray>np.PyArray_FROM_OTF(high, np.NPY_INT64, np.NPY_ALIGNED | np.NPY_FORCECAST)
-
- if size is not None:
- out_arr = <np.ndarray>np.empty(size, np.int32)
- else:
- it = np.PyArray_MultiIterNew2(low_arr, high_arr)
- out_arr = <np.ndarray>np.empty(it.shape, np.int32)
-
- it = np.PyArray_MultiIterNew3(low_arr, high_arr, out_arr)
- out_data = <uint32_t *>np.PyArray_DATA(out_arr)
- cnt = np.PyArray_SIZE(out_arr)
- mask = last_rng = 0
- with lock, nogil:
- for i in range(cnt):
- low_v = (<uint64_t*>np.PyArray_MultiIter_DATA(it, 0))[0]
- high_v = (<uint64_t*>np.PyArray_MultiIter_DATA(it, 1))[0]
-            # Subtract 1 since generator produces values on the closed interval [off, off+rng]
- rng = <uint32_t>((high_v - is_open) - low_v)
- off = <uint32_t>(<uint64_t>low_v)
-
- if rng != last_rng:
- # Smallest bit mask >= max
- mask = <uint32_t>_gen_mask(rng)
-
- out_data[i] = random_buffered_bounded_uint32(state, off, rng, mask, use_masked, &buf_rem, &buf)
-
- np.PyArray_MultiIter_NEXT(it)
- return out_arr
-
-cdef object _rand_int16_broadcast(np.ndarray low, np.ndarray high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
- Array path for smaller integer types
-
- This path is simpler since the high value in the open interval [low, high)
-    must be in-range for the next larger type, int32. Here we cast to
-    this type for checking and then recast to int16 when producing the
-    random integers.
- """
- cdef uint16_t rng, last_rng, off, val, mask, out_val, is_open
- cdef uint32_t buf
- cdef uint16_t *out_data
- cdef uint32_t low_v, high_v
- cdef np.ndarray low_arr, high_arr, out_arr
- cdef np.npy_intp i, cnt
- cdef np.broadcast it
- cdef int buf_rem = 0
-
- # Array path
- is_open = not closed
- low_arr = <np.ndarray>low
- high_arr = <np.ndarray>high
- if np.any(np.less(low_arr, -0x8000LL)):
- raise ValueError('low is out of bounds for int16')
- if closed:
- high_comp = np.greater_equal
- low_high_comp = np.greater
- else:
- high_comp = np.greater
- low_high_comp = np.greater_equal
-
- if np.any(high_comp(high_arr, 0x8000LL)):
- raise ValueError('high is out of bounds for int16')
- if np.any(low_high_comp(low_arr, high_arr)):
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- low_arr = <np.ndarray>np.PyArray_FROM_OTF(low, np.NPY_INT32, np.NPY_ALIGNED | np.NPY_FORCECAST)
- high_arr = <np.ndarray>np.PyArray_FROM_OTF(high, np.NPY_INT32, np.NPY_ALIGNED | np.NPY_FORCECAST)
-
- if size is not None:
- out_arr = <np.ndarray>np.empty(size, np.int16)
- else:
- it = np.PyArray_MultiIterNew2(low_arr, high_arr)
- out_arr = <np.ndarray>np.empty(it.shape, np.int16)
-
- it = np.PyArray_MultiIterNew3(low_arr, high_arr, out_arr)
- out_data = <uint16_t *>np.PyArray_DATA(out_arr)
- cnt = np.PyArray_SIZE(out_arr)
- mask = last_rng = 0
- with lock, nogil:
- for i in range(cnt):
- low_v = (<uint32_t*>np.PyArray_MultiIter_DATA(it, 0))[0]
- high_v = (<uint32_t*>np.PyArray_MultiIter_DATA(it, 1))[0]
-            # Subtract 1 since generator produces values on the closed interval [off, off+rng]
- rng = <uint16_t>((high_v - is_open) - low_v)
- off = <uint16_t>(<uint32_t>low_v)
-
- if rng != last_rng:
- # Smallest bit mask >= max
- mask = <uint16_t>_gen_mask(rng)
-
- out_data[i] = random_buffered_bounded_uint16(state, off, rng, mask, use_masked, &buf_rem, &buf)
-
- np.PyArray_MultiIter_NEXT(it)
- return out_arr
-
-cdef object _rand_int8_broadcast(np.ndarray low, np.ndarray high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
- Array path for smaller integer types
-
- This path is simpler since the high value in the open interval [low, high)
-    must be in-range for the next larger type, int16. Here we cast to
-    this type for checking and then recast to int8 when producing the
-    random integers.
- """
- cdef uint8_t rng, last_rng, off, val, mask, out_val, is_open
- cdef uint32_t buf
- cdef uint8_t *out_data
- cdef uint16_t low_v, high_v
- cdef np.ndarray low_arr, high_arr, out_arr
- cdef np.npy_intp i, cnt
- cdef np.broadcast it
- cdef int buf_rem = 0
-
- # Array path
- is_open = not closed
- low_arr = <np.ndarray>low
- high_arr = <np.ndarray>high
- if np.any(np.less(low_arr, -0x80LL)):
- raise ValueError('low is out of bounds for int8')
- if closed:
- high_comp = np.greater_equal
- low_high_comp = np.greater
- else:
- high_comp = np.greater
- low_high_comp = np.greater_equal
-
- if np.any(high_comp(high_arr, 0x80LL)):
- raise ValueError('high is out of bounds for int8')
- if np.any(low_high_comp(low_arr, high_arr)):
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- low_arr = <np.ndarray>np.PyArray_FROM_OTF(low, np.NPY_INT16, np.NPY_ALIGNED | np.NPY_FORCECAST)
- high_arr = <np.ndarray>np.PyArray_FROM_OTF(high, np.NPY_INT16, np.NPY_ALIGNED | np.NPY_FORCECAST)
-
- if size is not None:
- out_arr = <np.ndarray>np.empty(size, np.int8)
- else:
- it = np.PyArray_MultiIterNew2(low_arr, high_arr)
- out_arr = <np.ndarray>np.empty(it.shape, np.int8)
-
- it = np.PyArray_MultiIterNew3(low_arr, high_arr, out_arr)
- out_data = <uint8_t *>np.PyArray_DATA(out_arr)
- cnt = np.PyArray_SIZE(out_arr)
- mask = last_rng = 0
- with lock, nogil:
- for i in range(cnt):
- low_v = (<uint16_t*>np.PyArray_MultiIter_DATA(it, 0))[0]
- high_v = (<uint16_t*>np.PyArray_MultiIter_DATA(it, 1))[0]
-            # Subtract 1 since generator produces values on the closed interval [off, off+rng]
- rng = <uint8_t>((high_v - is_open) - low_v)
- off = <uint8_t>(<uint16_t>low_v)
-
- if rng != last_rng:
- # Smallest bit mask >= max
- mask = <uint8_t>_gen_mask(rng)
-
- out_data[i] = random_buffered_bounded_uint8(state, off, rng, mask, use_masked, &buf_rem, &buf)
-
- np.PyArray_MultiIter_NEXT(it)
- return out_arr
-
-
-cdef object _rand_uint64_broadcast(object low, object high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
- Array path for 64-bit integer types
-
-    Requires special treatment since the high value can be out-of-range for
-    the largest (64-bit) integer type when the generator is specified on the
-    interval [low, high).
-
-    The internal generator does not have this issue since it generates from
-    the closed interval [low, high-1] and high-1 is always in range for the
-    64-bit integer type.
- """
-
- cdef np.ndarray low_arr, high_arr, out_arr, highm1_arr
- cdef np.npy_intp i, cnt, n
- cdef np.broadcast it
- cdef object closed_upper
- cdef uint64_t *out_data
- cdef uint64_t *highm1_data
- cdef uint64_t low_v, high_v
- cdef uint64_t rng, last_rng, val, mask, off, out_val
-
- low_arr = <np.ndarray>low
- high_arr = <np.ndarray>high
-
- if np.any(np.less(low_arr, 0x0ULL)):
- raise ValueError('low is out of bounds for uint64')
- dt = high_arr.dtype
- if closed or np.issubdtype(dt, np.integer):
- # Avoid object dtype path if already an integer
- high_lower_comp = np.less if closed else np.less_equal
- if np.any(high_lower_comp(high_arr, 0x0ULL)):
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
- high_m1 = high_arr if closed else high_arr - dt.type(1)
- if np.any(np.greater(high_m1, 0xFFFFFFFFFFFFFFFFULL)):
- raise ValueError('high is out of bounds for uint64')
- highm1_arr = <np.ndarray>np.PyArray_FROM_OTF(high_m1, np.NPY_UINT64, np.NPY_ALIGNED | np.NPY_FORCECAST)
- else:
- # If input is object or a floating type
- highm1_arr = <np.ndarray>np.empty_like(high_arr, dtype=np.uint64)
- highm1_data = <uint64_t *>np.PyArray_DATA(highm1_arr)
- cnt = np.PyArray_SIZE(high_arr)
- flat = high_arr.flat
- for i in range(cnt):
-            # Subtract 1 since generator produces values on the closed interval [off, off+rng]
- closed_upper = int(flat[i]) - 1
- if closed_upper > 0xFFFFFFFFFFFFFFFFULL:
- raise ValueError('high is out of bounds for uint64')
- if closed_upper < 0x0ULL:
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
- highm1_data[i] = <uint64_t>closed_upper
-
- if np.any(np.greater(low_arr, highm1_arr)):
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- high_arr = highm1_arr
- low_arr = <np.ndarray>np.PyArray_FROM_OTF(low, np.NPY_UINT64, np.NPY_ALIGNED | np.NPY_FORCECAST)
-
- if size is not None:
- out_arr = <np.ndarray>np.empty(size, np.uint64)
- else:
- it = np.PyArray_MultiIterNew2(low_arr, high_arr)
- out_arr = <np.ndarray>np.empty(it.shape, np.uint64)
-
- it = np.PyArray_MultiIterNew3(low_arr, high_arr, out_arr)
- out_data = <uint64_t *>np.PyArray_DATA(out_arr)
- n = np.PyArray_SIZE(out_arr)
- mask = last_rng = 0
- with lock, nogil:
- for i in range(n):
- low_v = (<uint64_t*>np.PyArray_MultiIter_DATA(it, 0))[0]
- high_v = (<uint64_t*>np.PyArray_MultiIter_DATA(it, 1))[0]
-            # Generator produces values on the closed interval [off, off+rng], -1 subtracted above
- rng = <uint64_t>(high_v - low_v)
- off = <uint64_t>(<uint64_t>low_v)
-
- if rng != last_rng:
- mask = _gen_mask(rng)
- out_data[i] = random_bounded_uint64(state, off, rng, mask, use_masked)
-
- np.PyArray_MultiIter_NEXT(it)
-
- return out_arr
-
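The object-dtype branch above keeps `high` as a Python int until after the 1 is subtracted, because a user-supplied exclusive bound of 2**64 overflows uint64 while 2**64 - 1 does not. A minimal sketch of that validation, assuming a hypothetical helper name `closed_upper_uint64` (not a function in the NumPy source):

```python
import numpy as np

def closed_upper_uint64(high, closed):
    """Map a user-supplied high bound to the closed upper bound.

    Working in Python-int space lets high == 2**64 pass through the
    exclusive-bound path: high - 1 always fits in uint64 even when
    high itself does not.
    """
    upper = int(high) if closed else int(high) - 1
    if upper > 2**64 - 1:
        raise ValueError('high is out of bounds for uint64')
    if upper < 0:
        raise ValueError('low >= high')
    return np.uint64(upper)
```

The fixed-width branches avoid this entirely by validating in the next larger integer type before recasting.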
-cdef object _rand_int64_broadcast(object low, object high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
- Array path for 64-bit integer types
-
-    Requires special treatment since the high value can be out-of-range for
-    the largest (64-bit) integer type when the generator is specified on the
-    interval [low, high).
-
-    The internal generator does not have this issue since it generates from
-    the closed interval [low, high-1] and high-1 is always in range for the
-    64-bit integer type.
- """
-
- cdef np.ndarray low_arr, high_arr, out_arr, highm1_arr
- cdef np.npy_intp i, cnt, n
- cdef np.broadcast it
- cdef object closed_upper
- cdef uint64_t *out_data
- cdef int64_t *highm1_data
- cdef int64_t low_v, high_v
- cdef uint64_t rng, last_rng, val, mask, off, out_val
-
- low_arr = <np.ndarray>low
- high_arr = <np.ndarray>high
-
- if np.any(np.less(low_arr, -0x8000000000000000LL)):
- raise ValueError('low is out of bounds for int64')
- dt = high_arr.dtype
- if closed or np.issubdtype(dt, np.integer):
- # Avoid object dtype path if already an integer
- high_lower_comp = np.less if closed else np.less_equal
- if np.any(high_lower_comp(high_arr, -0x8000000000000000LL)):
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
- high_m1 = high_arr if closed else high_arr - dt.type(1)
- if np.any(np.greater(high_m1, 0x7FFFFFFFFFFFFFFFLL)):
- raise ValueError('high is out of bounds for int64')
- highm1_arr = <np.ndarray>np.PyArray_FROM_OTF(high_m1, np.NPY_INT64, np.NPY_ALIGNED | np.NPY_FORCECAST)
- else:
- # If input is object or a floating type
- highm1_arr = <np.ndarray>np.empty_like(high_arr, dtype=np.int64)
- highm1_data = <int64_t *>np.PyArray_DATA(highm1_arr)
- cnt = np.PyArray_SIZE(high_arr)
- flat = high_arr.flat
- for i in range(cnt):
-            # Subtract 1 since generator produces values on the closed interval [off, off+rng]
- closed_upper = int(flat[i]) - 1
- if closed_upper > 0x7FFFFFFFFFFFFFFFLL:
- raise ValueError('high is out of bounds for int64')
- if closed_upper < -0x8000000000000000LL:
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
- highm1_data[i] = <int64_t>closed_upper
-
- if np.any(np.greater(low_arr, highm1_arr)):
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- high_arr = highm1_arr
- low_arr = <np.ndarray>np.PyArray_FROM_OTF(low, np.NPY_INT64, np.NPY_ALIGNED | np.NPY_FORCECAST)
-
- if size is not None:
- out_arr = <np.ndarray>np.empty(size, np.int64)
- else:
- it = np.PyArray_MultiIterNew2(low_arr, high_arr)
- out_arr = <np.ndarray>np.empty(it.shape, np.int64)
-
- it = np.PyArray_MultiIterNew3(low_arr, high_arr, out_arr)
- out_data = <uint64_t *>np.PyArray_DATA(out_arr)
- n = np.PyArray_SIZE(out_arr)
- mask = last_rng = 0
- with lock, nogil:
- for i in range(n):
- low_v = (<int64_t*>np.PyArray_MultiIter_DATA(it, 0))[0]
- high_v = (<int64_t*>np.PyArray_MultiIter_DATA(it, 1))[0]
-            # Generator produces values on the closed interval [off, off+rng], -1 subtracted above
- rng = <uint64_t>(high_v - low_v)
- off = <uint64_t>(<int64_t>low_v)
-
- if rng != last_rng:
- mask = _gen_mask(rng)
- out_data[i] = random_bounded_uint64(state, off, rng, mask, use_masked)
-
- np.PyArray_MultiIter_NEXT(it)
-
- return out_arr
-
-
-cdef object _rand_uint64(object low, object high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
-    _rand_uint64(low, high, size, use_masked, closed, state, lock)
-
- Return random np.uint64 integers from `low` (inclusive) to `high` (exclusive).
-
- Return random integers from the "discrete uniform" distribution in the
- interval [`low`, `high`). If `high` is None (the default),
- then results are from [0, `low`). On entry the arguments are presumed
- to have been validated for size and order for the np.uint64 type.
-
- Parameters
- ----------
- low : int or array-like
- Lowest (signed) integer to be drawn from the distribution (unless
- ``high=None``, in which case this parameter is the *highest* such
- integer).
- high : int or array-like
- If provided, one above the largest (signed) integer to be drawn from the
- distribution (see above for behavior if ``high=None``).
- size : int or tuple of ints
- Output shape. If the given shape is, e.g., ``(m, n, k)``, then
- ``m * n * k`` samples are drawn. Default is None, in which case a
- single value is returned.
- use_masked : bool
-        If True, use rejection sampling with a range mask; otherwise use
-        Lemire's algorithm.
-    closed : bool
-        If True, sample from [low, high]; if False, sample from [low, high).
- state : bit generator
- Bit generator state to use in the core random number generators
- lock : threading.Lock
-        Lock to prevent multiple threads from using a single generator simultaneously
-
- Returns
- -------
- out : python scalar or ndarray of np.uint64
- `size`-shaped array of random integers from the appropriate
- distribution, or a single such random int if `size` not provided.
-
- Notes
- -----
- The internal integer generator produces values from the closed
- interval [low, high-(not closed)]. This requires some care since
- high can be out-of-range for uint64. The scalar path leaves
- integers as Python integers until the 1 has been subtracted to
- avoid needing to cast to a larger type.
- """
- cdef np.ndarray out_arr, low_arr, high_arr
- cdef uint64_t rng, off, out_val
- cdef uint64_t *out_data
- cdef np.npy_intp i, n, cnt
-
- if size is not None:
- if (np.prod(size) == 0):
- return np.empty(size, dtype=np.uint64)
-
- low_arr = <np.ndarray>np.array(low, copy=False)
- high_arr = <np.ndarray>np.array(high, copy=False)
- low_ndim = np.PyArray_NDIM(low_arr)
- high_ndim = np.PyArray_NDIM(high_arr)
- if ((low_ndim == 0 or (low_ndim == 1 and low_arr.size == 1 and size is not None)) and
- (high_ndim == 0 or (high_ndim == 1 and high_arr.size == 1 and size is not None))):
- low = int(low_arr)
- high = int(high_arr)
- # Subtract 1 since internal generator produces on closed interval [low, high]
- if not closed:
- high -= 1
-
- if low < 0x0ULL:
- raise ValueError("low is out of bounds for uint64")
- if high > 0xFFFFFFFFFFFFFFFFULL:
- raise ValueError("high is out of bounds for uint64")
- if low > high: # -1 already subtracted, closed interval
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- rng = <uint64_t>(high - low)
- off = <uint64_t>(<uint64_t>low)
- if size is None:
- with lock:
- random_bounded_uint64_fill(state, off, rng, 1, use_masked, &out_val)
- return np.uint64(<uint64_t>out_val)
- else:
- out_arr = <np.ndarray>np.empty(size, np.uint64)
- cnt = np.PyArray_SIZE(out_arr)
- out_data = <uint64_t *>np.PyArray_DATA(out_arr)
- with lock, nogil:
- random_bounded_uint64_fill(state, off, rng, cnt, use_masked, out_data)
- return out_arr
- return _rand_uint64_broadcast(low_arr, high_arr, size, use_masked, closed, state, lock)
-
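The `use_masked=False` branch delegates to Lemire's multiply-shift rejection method. A minimal pure-Python sketch for 32-bit words follows; it illustrates the technique only and differs in detail from the C implementation behind `random_bounded_uint64_fill` and friends.

```python
import random

def lemire_bounded_uint32(rng_excl, bits=lambda: random.getrandbits(32)):
    """Uniform draw from [0, rng_excl) for 0 < rng_excl <= 2**32.

    Multiply a random 32-bit word by the range and keep the high half of
    the 64-bit product; reject only when the low half falls below
    (2**32 % rng_excl), which removes the bias of a bare multiply-shift.
    """
    x = bits()
    m = x * rng_excl              # 64-bit product of two 32-bit values
    low = m & 0xFFFFFFFF          # low half decides acceptance
    if low < rng_excl:
        threshold = (2**32 - rng_excl) % rng_excl   # == 2**32 % rng_excl
        while low < threshold:
            x = bits()
            m = x * rng_excl
            low = m & 0xFFFFFFFF
    return m >> 32                # high half is the uniform result
```

Compared with masked rejection, this rejects far fewer draws when the range is just above a power of two, at the cost of a widening multiply per draw.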
-cdef object _rand_uint32(object low, object high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
-    _rand_uint32(low, high, size, use_masked, closed, state, lock)
-
- Return random np.uint32 integers from `low` (inclusive) to `high` (exclusive).
-
- Return random integers from the "discrete uniform" distribution in the
- interval [`low`, `high`). If `high` is None (the default),
- then results are from [0, `low`). On entry the arguments are presumed
- to have been validated for size and order for the np.uint32 type.
-
- Parameters
- ----------
- low : int or array-like
- Lowest (signed) integer to be drawn from the distribution (unless
- ``high=None``, in which case this parameter is the *highest* such
- integer).
- high : int or array-like
- If provided, one above the largest (signed) integer to be drawn from the
- distribution (see above for behavior if ``high=None``).
- size : int or tuple of ints
- Output shape. If the given shape is, e.g., ``(m, n, k)``, then
- ``m * n * k`` samples are drawn. Default is None, in which case a
- single value is returned.
- use_masked : bool
-        If True, use rejection sampling with a range mask; otherwise use
-        Lemire's algorithm.
-    closed : bool
-        If True, sample from [low, high]; if False, sample from [low, high).
- state : bit generator
- Bit generator state to use in the core random number generators
- lock : threading.Lock
-        Lock to prevent multiple threads from using a single generator simultaneously
-
- Returns
- -------
- out : python scalar or ndarray of np.uint32
- `size`-shaped array of random integers from the appropriate
- distribution, or a single such random int if `size` not provided.
-
- Notes
- -----
- The internal integer generator produces values from the closed
- interval [low, high-(not closed)]. This requires some care since
- high can be out-of-range for uint32. The scalar path leaves
- integers as Python integers until the 1 has been subtracted to
- avoid needing to cast to a larger type.
- """
- cdef np.ndarray out_arr, low_arr, high_arr
- cdef uint32_t rng, off, out_val
- cdef uint32_t *out_data
- cdef np.npy_intp i, n, cnt
-
- if size is not None:
- if (np.prod(size) == 0):
- return np.empty(size, dtype=np.uint32)
-
- low_arr = <np.ndarray>np.array(low, copy=False)
- high_arr = <np.ndarray>np.array(high, copy=False)
- low_ndim = np.PyArray_NDIM(low_arr)
- high_ndim = np.PyArray_NDIM(high_arr)
- if ((low_ndim == 0 or (low_ndim == 1 and low_arr.size == 1 and size is not None)) and
- (high_ndim == 0 or (high_ndim == 1 and high_arr.size == 1 and size is not None))):
- low = int(low_arr)
- high = int(high_arr)
- # Subtract 1 since internal generator produces on closed interval [low, high]
- if not closed:
- high -= 1
-
- if low < 0x0UL:
- raise ValueError("low is out of bounds for uint32")
- if high > 0XFFFFFFFFUL:
- raise ValueError("high is out of bounds for uint32")
- if low > high: # -1 already subtracted, closed interval
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- rng = <uint32_t>(high - low)
- off = <uint32_t>(<uint32_t>low)
- if size is None:
- with lock:
- random_bounded_uint32_fill(state, off, rng, 1, use_masked, &out_val)
- return np.uint32(<uint32_t>out_val)
- else:
- out_arr = <np.ndarray>np.empty(size, np.uint32)
- cnt = np.PyArray_SIZE(out_arr)
- out_data = <uint32_t *>np.PyArray_DATA(out_arr)
- with lock, nogil:
- random_bounded_uint32_fill(state, off, rng, cnt, use_masked, out_data)
- return out_arr
- return _rand_uint32_broadcast(low_arr, high_arr, size, use_masked, closed, state, lock)
-
-cdef object _rand_uint16(object low, object high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
-    _rand_uint16(low, high, size, use_masked, closed, state, lock)
-
- Return random np.uint16 integers from `low` (inclusive) to `high` (exclusive).
-
- Return random integers from the "discrete uniform" distribution in the
- interval [`low`, `high`). If `high` is None (the default),
- then results are from [0, `low`). On entry the arguments are presumed
- to have been validated for size and order for the np.uint16 type.
-
- Parameters
- ----------
- low : int or array-like
- Lowest (signed) integer to be drawn from the distribution (unless
- ``high=None``, in which case this parameter is the *highest* such
- integer).
- high : int or array-like
- If provided, one above the largest (signed) integer to be drawn from the
- distribution (see above for behavior if ``high=None``).
- size : int or tuple of ints
- Output shape. If the given shape is, e.g., ``(m, n, k)``, then
- ``m * n * k`` samples are drawn. Default is None, in which case a
- single value is returned.
- use_masked : bool
-        If True, use rejection sampling with a range mask; otherwise use
-        Lemire's algorithm.
-    closed : bool
-        If True, sample from [low, high]; if False, sample from [low, high).
- state : bit generator
- Bit generator state to use in the core random number generators
- lock : threading.Lock
-        Lock to prevent multiple threads from using a single generator simultaneously
-
- Returns
- -------
- out : python scalar or ndarray of np.uint16
- `size`-shaped array of random integers from the appropriate
- distribution, or a single such random int if `size` not provided.
-
- Notes
- -----
- The internal integer generator produces values from the closed
- interval [low, high-(not closed)]. This requires some care since
- high can be out-of-range for uint16. The scalar path leaves
- integers as Python integers until the 1 has been subtracted to
- avoid needing to cast to a larger type.
- """
- cdef np.ndarray out_arr, low_arr, high_arr
- cdef uint16_t rng, off, out_val
- cdef uint16_t *out_data
- cdef np.npy_intp i, n, cnt
-
- if size is not None:
- if (np.prod(size) == 0):
- return np.empty(size, dtype=np.uint16)
-
- low_arr = <np.ndarray>np.array(low, copy=False)
- high_arr = <np.ndarray>np.array(high, copy=False)
- low_ndim = np.PyArray_NDIM(low_arr)
- high_ndim = np.PyArray_NDIM(high_arr)
- if ((low_ndim == 0 or (low_ndim == 1 and low_arr.size == 1 and size is not None)) and
- (high_ndim == 0 or (high_ndim == 1 and high_arr.size == 1 and size is not None))):
- low = int(low_arr)
- high = int(high_arr)
- # Subtract 1 since internal generator produces on closed interval [low, high]
- if not closed:
- high -= 1
-
- if low < 0x0UL:
- raise ValueError("low is out of bounds for uint16")
- if high > 0XFFFFUL:
- raise ValueError("high is out of bounds for uint16")
- if low > high: # -1 already subtracted, closed interval
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- rng = <uint16_t>(high - low)
- off = <uint16_t>(<uint16_t>low)
- if size is None:
- with lock:
- random_bounded_uint16_fill(state, off, rng, 1, use_masked, &out_val)
- return np.uint16(<uint16_t>out_val)
- else:
- out_arr = <np.ndarray>np.empty(size, np.uint16)
- cnt = np.PyArray_SIZE(out_arr)
- out_data = <uint16_t *>np.PyArray_DATA(out_arr)
- with lock, nogil:
- random_bounded_uint16_fill(state, off, rng, cnt, use_masked, out_data)
- return out_arr
- return _rand_uint16_broadcast(low_arr, high_arr, size, use_masked, closed, state, lock)
-
-cdef object _rand_uint8(object low, object high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
- _rand_uint8(low, high, size, use_masked, *state, lock)
-
- Return random np.uint8 integers from `low` (inclusive) to `high` (exclusive).
-
- Return random integers from the "discrete uniform" distribution in the
- interval [`low`, `high`). If `high` is None (the default),
- then results are from [0, `low`). On entry the arguments are presumed
- to have been validated for size and order for the np.uint8 type.
-
- Parameters
- ----------
- low : int or array-like
- Lowest (signed) integer to be drawn from the distribution (unless
- ``high=None``, in which case this parameter is the *highest* such
- integer).
- high : int or array-like
- If provided, one above the largest (signed) integer to be drawn from the
- distribution (see above for behavior if ``high=None``).
- size : int or tuple of ints
- Output shape. If the given shape is, e.g., ``(m, n, k)``, then
- ``m * n * k`` samples are drawn. Default is None, in which case a
- single value is returned.
- use_masked : bool
- If True then rejection sampling with a range mask is used else Lemire's algorithm is used.
- closed : bool
- If True then sample from [low, high]. If False, sample [low, high)
- state : bit generator
- Bit generator state to use in the core random number generators
- lock : threading.Lock
- Lock to prevent multiple using a single generator simultaneously
-
- Returns
- -------
- out : python scalar or ndarray of np.uint8
- `size`-shaped array of random integers from the appropriate
- distribution, or a single such random int if `size` not provided.
-
- Notes
- -----
- The internal integer generator produces values from the closed
- interval [low, high-(not closed)]. This requires some care since
- high can be out-of-range for uint8. The scalar path leaves
- integers as Python integers until the 1 has been subtracted to
- avoid needing to cast to a larger type.
- """
- cdef np.ndarray out_arr, low_arr, high_arr
- cdef uint8_t rng, off, out_val
- cdef uint8_t *out_data
- cdef np.npy_intp i, n, cnt
-
- if size is not None:
- if (np.prod(size) == 0):
- return np.empty(size, dtype=np.uint8)
-
- low_arr = <np.ndarray>np.array(low, copy=False)
- high_arr = <np.ndarray>np.array(high, copy=False)
- low_ndim = np.PyArray_NDIM(low_arr)
- high_ndim = np.PyArray_NDIM(high_arr)
- if ((low_ndim == 0 or (low_ndim == 1 and low_arr.size == 1 and size is not None)) and
- (high_ndim == 0 or (high_ndim == 1 and high_arr.size == 1 and size is not None))):
- low = int(low_arr)
- high = int(high_arr)
- # Subtract 1 since internal generator produces on closed interval [low, high]
- if not closed:
- high -= 1
-
- if low < 0x0UL:
- raise ValueError("low is out of bounds for uint8")
- if high > 0XFFUL:
- raise ValueError("high is out of bounds for uint8")
- if low > high: # -1 already subtracted, closed interval
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- rng = <uint8_t>(high - low)
- off = <uint8_t>(<uint8_t>low)
- if size is None:
- with lock:
- random_bounded_uint8_fill(state, off, rng, 1, use_masked, &out_val)
- return np.uint8(<uint8_t>out_val)
- else:
- out_arr = <np.ndarray>np.empty(size, np.uint8)
- cnt = np.PyArray_SIZE(out_arr)
- out_data = <uint8_t *>np.PyArray_DATA(out_arr)
- with lock, nogil:
- random_bounded_uint8_fill(state, off, rng, cnt, use_masked, out_data)
- return out_arr
- return _rand_uint8_broadcast(low_arr, high_arr, size, use_masked, closed, state, lock)
-
-cdef object _rand_bool(object low, object high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
- _rand_bool(low, high, size, use_masked, *state, lock)
-
- Return random np.bool integers from `low` (inclusive) to `high` (exclusive).
-
- Return random integers from the "discrete uniform" distribution in the
- interval [`low`, `high`). If `high` is None (the default),
- then results are from [0, `low`). On entry the arguments are presumed
- to have been validated for size and order for the np.bool type.
-
- Parameters
- ----------
- low : int or array-like
- Lowest (signed) integer to be drawn from the distribution (unless
- ``high=None``, in which case this parameter is the *highest* such
- integer).
- high : int or array-like
- If provided, one above the largest (signed) integer to be drawn from the
- distribution (see above for behavior if ``high=None``).
- size : int or tuple of ints
- Output shape. If the given shape is, e.g., ``(m, n, k)``, then
- ``m * n * k`` samples are drawn. Default is None, in which case a
- single value is returned.
- use_masked : bool
- If True then rejection sampling with a range mask is used else Lemire's algorithm is used.
- closed : bool
- If True then sample from [low, high]. If False, sample [low, high)
- state : bit generator
- Bit generator state to use in the core random number generators
- lock : threading.Lock
- Lock to prevent multiple using a single generator simultaneously
-
- Returns
- -------
- out : python scalar or ndarray of np.bool
- `size`-shaped array of random integers from the appropriate
- distribution, or a single such random int if `size` not provided.
-
- Notes
- -----
- The internal integer generator produces values from the closed
- interval [low, high-(not closed)]. This requires some care since
- high can be out-of-range for bool. The scalar path leaves
- integers as Python integers until the 1 has been subtracted to
- avoid needing to cast to a larger type.
- """
- cdef np.ndarray out_arr, low_arr, high_arr
- cdef bool_t rng, off, out_val
- cdef bool_t *out_data
- cdef np.npy_intp i, n, cnt
-
- if size is not None:
- if (np.prod(size) == 0):
- return np.empty(size, dtype=np.bool)
-
- low_arr = <np.ndarray>np.array(low, copy=False)
- high_arr = <np.ndarray>np.array(high, copy=False)
- low_ndim = np.PyArray_NDIM(low_arr)
- high_ndim = np.PyArray_NDIM(high_arr)
- if ((low_ndim == 0 or (low_ndim == 1 and low_arr.size == 1 and size is not None)) and
- (high_ndim == 0 or (high_ndim == 1 and high_arr.size == 1 and size is not None))):
- low = int(low_arr)
- high = int(high_arr)
- # Subtract 1 since internal generator produces on closed interval [low, high]
- if not closed:
- high -= 1
-
- if low < 0x0UL:
- raise ValueError("low is out of bounds for bool")
- if high > 0x1UL:
- raise ValueError("high is out of bounds for bool")
- if low > high: # -1 already subtracted, closed interval
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- rng = <bool_t>(high - low)
- off = <bool_t>(<bool_t>low)
- if size is None:
- with lock:
- random_bounded_bool_fill(state, off, rng, 1, use_masked, &out_val)
- return np.bool_(<bool_t>out_val)
- else:
- out_arr = <np.ndarray>np.empty(size, np.bool)
- cnt = np.PyArray_SIZE(out_arr)
- out_data = <bool_t *>np.PyArray_DATA(out_arr)
- with lock, nogil:
- random_bounded_bool_fill(state, off, rng, cnt, use_masked, out_data)
- return out_arr
- return _rand_bool_broadcast(low_arr, high_arr, size, use_masked, closed, state, lock)
-
-cdef object _rand_int64(object low, object high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
- _rand_int64(low, high, size, use_masked, *state, lock)
-
- Return random np.int64 integers from `low` (inclusive) to `high` (exclusive).
-
- Return random integers from the "discrete uniform" distribution in the
- interval [`low`, `high`). If `high` is None (the default),
- then results are from [0, `low`). On entry the arguments are presumed
- to have been validated for size and order for the np.int64 type.
-
- Parameters
- ----------
- low : int or array-like
- Lowest (signed) integer to be drawn from the distribution (unless
- ``high=None``, in which case this parameter is the *highest* such
- integer).
- high : int or array-like
- If provided, one above the largest (signed) integer to be drawn from the
- distribution (see above for behavior if ``high=None``).
- size : int or tuple of ints
- Output shape. If the given shape is, e.g., ``(m, n, k)``, then
- ``m * n * k`` samples are drawn. Default is None, in which case a
- single value is returned.
- use_masked : bool
- If True then rejection sampling with a range mask is used else Lemire's algorithm is used.
- closed : bool
- If True then sample from [low, high]. If False, sample [low, high)
- state : bit generator
- Bit generator state to use in the core random number generators
- lock : threading.Lock
- Lock to prevent multiple using a single generator simultaneously
-
- Returns
- -------
- out : python scalar or ndarray of np.int64
- `size`-shaped array of random integers from the appropriate
- distribution, or a single such random int if `size` not provided.
-
- Notes
- -----
- The internal integer generator produces values from the closed
- interval [low, high-(not closed)]. This requires some care since
- high can be out-of-range for uint64. The scalar path leaves
- integers as Python integers until the 1 has been subtracted to
- avoid needing to cast to a larger type.
- """
- cdef np.ndarray out_arr, low_arr, high_arr
- cdef uint64_t rng, off, out_val
- cdef uint64_t *out_data
- cdef np.npy_intp i, n, cnt
-
- if size is not None:
- if (np.prod(size) == 0):
- return np.empty(size, dtype=np.int64)
-
- low_arr = <np.ndarray>np.array(low, copy=False)
- high_arr = <np.ndarray>np.array(high, copy=False)
- low_ndim = np.PyArray_NDIM(low_arr)
- high_ndim = np.PyArray_NDIM(high_arr)
- if ((low_ndim == 0 or (low_ndim == 1 and low_arr.size == 1 and size is not None)) and
- (high_ndim == 0 or (high_ndim == 1 and high_arr.size == 1 and size is not None))):
- low = int(low_arr)
- high = int(high_arr)
- # Subtract 1 since internal generator produces on closed interval [low, high]
- if not closed:
- high -= 1
-
- if low < -0x8000000000000000LL:
- raise ValueError("low is out of bounds for int64")
- if high > 0x7FFFFFFFFFFFFFFFL:
- raise ValueError("high is out of bounds for int64")
- if low > high: # -1 already subtracted, closed interval
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- rng = <uint64_t>(high - low)
- off = <uint64_t>(<int64_t>low)
- if size is None:
- with lock:
- random_bounded_uint64_fill(state, off, rng, 1, use_masked, &out_val)
- return np.int64(<int64_t>out_val)
- else:
- out_arr = <np.ndarray>np.empty(size, np.int64)
- cnt = np.PyArray_SIZE(out_arr)
- out_data = <uint64_t *>np.PyArray_DATA(out_arr)
- with lock, nogil:
- random_bounded_uint64_fill(state, off, rng, cnt, use_masked, out_data)
- return out_arr
- return _rand_int64_broadcast(low_arr, high_arr, size, use_masked, closed, state, lock)
-
-cdef object _rand_int32(object low, object high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
- _rand_int32(low, high, size, use_masked, *state, lock)
-
- Return random np.int32 integers from `low` (inclusive) to `high` (exclusive).
-
- Return random integers from the "discrete uniform" distribution in the
- interval [`low`, `high`). If `high` is None (the default),
- then results are from [0, `low`). On entry the arguments are presumed
- to have been validated for size and order for the np.int32 type.
-
- Parameters
- ----------
- low : int or array-like
- Lowest (signed) integer to be drawn from the distribution (unless
- ``high=None``, in which case this parameter is the *highest* such
- integer).
- high : int or array-like
- If provided, one above the largest (signed) integer to be drawn from the
- distribution (see above for behavior if ``high=None``).
- size : int or tuple of ints
- Output shape. If the given shape is, e.g., ``(m, n, k)``, then
- ``m * n * k`` samples are drawn. Default is None, in which case a
- single value is returned.
- use_masked : bool
- If True then rejection sampling with a range mask is used else Lemire's algorithm is used.
- closed : bool
- If True then sample from [low, high]. If False, sample [low, high)
- state : bit generator
- Bit generator state to use in the core random number generators
- lock : threading.Lock
- Lock to prevent multiple using a single generator simultaneously
-
- Returns
- -------
- out : python scalar or ndarray of np.int32
- `size`-shaped array of random integers from the appropriate
- distribution, or a single such random int if `size` not provided.
-
- Notes
- -----
- The internal integer generator produces values from the closed
- interval [low, high-(not closed)]. This requires some care since
- high can be out-of-range for uint32. The scalar path leaves
- integers as Python integers until the 1 has been subtracted to
- avoid needing to cast to a larger type.
- """
- cdef np.ndarray out_arr, low_arr, high_arr
- cdef uint32_t rng, off, out_val
- cdef uint32_t *out_data
- cdef np.npy_intp i, n, cnt
-
- if size is not None:
- if (np.prod(size) == 0):
- return np.empty(size, dtype=np.int32)
-
- low_arr = <np.ndarray>np.array(low, copy=False)
- high_arr = <np.ndarray>np.array(high, copy=False)
- low_ndim = np.PyArray_NDIM(low_arr)
- high_ndim = np.PyArray_NDIM(high_arr)
- if ((low_ndim == 0 or (low_ndim == 1 and low_arr.size == 1 and size is not None)) and
- (high_ndim == 0 or (high_ndim == 1 and high_arr.size == 1 and size is not None))):
- low = int(low_arr)
- high = int(high_arr)
- # Subtract 1 since internal generator produces on closed interval [low, high]
- if not closed:
- high -= 1
-
- if low < -0x80000000L:
- raise ValueError("low is out of bounds for int32")
- if high > 0x7FFFFFFFL:
- raise ValueError("high is out of bounds for int32")
- if low > high: # -1 already subtracted, closed interval
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- rng = <uint32_t>(high - low)
- off = <uint32_t>(<int32_t>low)
- if size is None:
- with lock:
- random_bounded_uint32_fill(state, off, rng, 1, use_masked, &out_val)
- return np.int32(<int32_t>out_val)
- else:
- out_arr = <np.ndarray>np.empty(size, np.int32)
- cnt = np.PyArray_SIZE(out_arr)
- out_data = <uint32_t *>np.PyArray_DATA(out_arr)
- with lock, nogil:
- random_bounded_uint32_fill(state, off, rng, cnt, use_masked, out_data)
- return out_arr
- return _rand_int32_broadcast(low_arr, high_arr, size, use_masked, closed, state, lock)
-
-cdef object _rand_int16(object low, object high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
- _rand_int16(low, high, size, use_masked, *state, lock)
-
- Return random np.int16 integers from `low` (inclusive) to `high` (exclusive).
-
- Return random integers from the "discrete uniform" distribution in the
- interval [`low`, `high`). If `high` is None (the default),
- then results are from [0, `low`). On entry the arguments are presumed
- to have been validated for size and order for the np.int16 type.
-
- Parameters
- ----------
- low : int or array-like
- Lowest (signed) integer to be drawn from the distribution (unless
- ``high=None``, in which case this parameter is the *highest* such
- integer).
- high : int or array-like
- If provided, one above the largest (signed) integer to be drawn from the
- distribution (see above for behavior if ``high=None``).
- size : int or tuple of ints
- Output shape. If the given shape is, e.g., ``(m, n, k)``, then
- ``m * n * k`` samples are drawn. Default is None, in which case a
- single value is returned.
- use_masked : bool
- If True then rejection sampling with a range mask is used else Lemire's algorithm is used.
- closed : bool
- If True then sample from [low, high]. If False, sample [low, high)
- state : bit generator
- Bit generator state to use in the core random number generators
- lock : threading.Lock
- Lock to prevent multiple using a single generator simultaneously
-
- Returns
- -------
- out : python scalar or ndarray of np.int16
- `size`-shaped array of random integers from the appropriate
- distribution, or a single such random int if `size` not provided.
-
- Notes
- -----
- The internal integer generator produces values from the closed
- interval [low, high-(not closed)]. This requires some care since
- high can be out-of-range for uint16. The scalar path leaves
- integers as Python integers until the 1 has been subtracted to
- avoid needing to cast to a larger type.
- """
- cdef np.ndarray out_arr, low_arr, high_arr
- cdef uint16_t rng, off, out_val
- cdef uint16_t *out_data
- cdef np.npy_intp i, n, cnt
-
- if size is not None:
- if (np.prod(size) == 0):
- return np.empty(size, dtype=np.int16)
-
- low_arr = <np.ndarray>np.array(low, copy=False)
- high_arr = <np.ndarray>np.array(high, copy=False)
- low_ndim = np.PyArray_NDIM(low_arr)
- high_ndim = np.PyArray_NDIM(high_arr)
- if ((low_ndim == 0 or (low_ndim == 1 and low_arr.size == 1 and size is not None)) and
- (high_ndim == 0 or (high_ndim == 1 and high_arr.size == 1 and size is not None))):
- low = int(low_arr)
- high = int(high_arr)
- # Subtract 1 since internal generator produces on closed interval [low, high]
- if not closed:
- high -= 1
-
- if low < -0x8000L:
- raise ValueError("low is out of bounds for int16")
- if high > 0x7FFFL:
- raise ValueError("high is out of bounds for int16")
- if low > high: # -1 already subtracted, closed interval
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- rng = <uint16_t>(high - low)
- off = <uint16_t>(<int16_t>low)
- if size is None:
- with lock:
- random_bounded_uint16_fill(state, off, rng, 1, use_masked, &out_val)
- return np.int16(<int16_t>out_val)
- else:
- out_arr = <np.ndarray>np.empty(size, np.int16)
- cnt = np.PyArray_SIZE(out_arr)
- out_data = <uint16_t *>np.PyArray_DATA(out_arr)
- with lock, nogil:
- random_bounded_uint16_fill(state, off, rng, cnt, use_masked, out_data)
- return out_arr
- return _rand_int16_broadcast(low_arr, high_arr, size, use_masked, closed, state, lock)
-
-cdef object _rand_int8(object low, object high, object size,
- bint use_masked, bint closed,
- bitgen_t *state, object lock):
- """
- _rand_int8(low, high, size, use_masked, *state, lock)
-
- Return random np.int8 integers from `low` (inclusive) to `high` (exclusive).
-
- Return random integers from the "discrete uniform" distribution in the
- interval [`low`, `high`). If `high` is None (the default),
- then results are from [0, `low`). On entry the arguments are presumed
- to have been validated for size and order for the np.int8 type.
-
- Parameters
- ----------
- low : int or array-like
- Lowest (signed) integer to be drawn from the distribution (unless
- ``high=None``, in which case this parameter is the *highest* such
- integer).
- high : int or array-like
- If provided, one above the largest (signed) integer to be drawn from the
- distribution (see above for behavior if ``high=None``).
- size : int or tuple of ints
- Output shape. If the given shape is, e.g., ``(m, n, k)``, then
- ``m * n * k`` samples are drawn. Default is None, in which case a
- single value is returned.
- use_masked : bool
- If True then rejection sampling with a range mask is used else Lemire's algorithm is used.
- closed : bool
- If True then sample from [low, high]. If False, sample [low, high)
- state : bit generator
- Bit generator state to use in the core random number generators
- lock : threading.Lock
- Lock to prevent multiple using a single generator simultaneously
-
- Returns
- -------
- out : python scalar or ndarray of np.int8
- `size`-shaped array of random integers from the appropriate
- distribution, or a single such random int if `size` not provided.
-
- Notes
- -----
- The internal integer generator produces values from the closed
- interval [low, high-(not closed)]. This requires some care since
- high can be out-of-range for uint8. The scalar path leaves
- integers as Python integers until the 1 has been subtracted to
- avoid needing to cast to a larger type.
- """
- cdef np.ndarray out_arr, low_arr, high_arr
- cdef uint8_t rng, off, out_val
- cdef uint8_t *out_data
- cdef np.npy_intp i, n, cnt
-
- if size is not None:
- if (np.prod(size) == 0):
- return np.empty(size, dtype=np.int8)
-
- low_arr = <np.ndarray>np.array(low, copy=False)
- high_arr = <np.ndarray>np.array(high, copy=False)
- low_ndim = np.PyArray_NDIM(low_arr)
- high_ndim = np.PyArray_NDIM(high_arr)
- if ((low_ndim == 0 or (low_ndim == 1 and low_arr.size == 1 and size is not None)) and
- (high_ndim == 0 or (high_ndim == 1 and high_arr.size == 1 and size is not None))):
- low = int(low_arr)
- high = int(high_arr)
- # Subtract 1 since internal generator produces on closed interval [low, high]
- if not closed:
- high -= 1
-
- if low < -0x80L:
- raise ValueError("low is out of bounds for int8")
- if high > 0x7FL:
- raise ValueError("high is out of bounds for int8")
- if low > high: # -1 already subtracted, closed interval
- comp = '>' if closed else '>='
- raise ValueError('low {comp} high'.format(comp=comp))
-
- rng = <uint8_t>(high - low)
- off = <uint8_t>(<int8_t>low)
- if size is None:
- with lock:
- random_bounded_uint8_fill(state, off, rng, 1, use_masked, &out_val)
- return np.int8(<int8_t>out_val)
- else:
- out_arr = <np.ndarray>np.empty(size, np.int8)
- cnt = np.PyArray_SIZE(out_arr)
- out_data = <uint8_t *>np.PyArray_DATA(out_arr)
- with lock, nogil:
- random_bounded_uint8_fill(state, off, rng, cnt, use_masked, out_data)
- return out_arr
- return _rand_int8_broadcast(low_arr, high_arr, size, use_masked, closed, state, lock)
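The `_rand_*` helpers removed above all share one scalar path: validate `low`/`high` against the dtype's bounds, subtract 1 from `high` when the interval is half-open, then draw from the resulting closed range. A minimal pure-Python sketch of that path (the function name and the use of `Generator.integers` are illustrative, not the NumPy internals):

```python
import numpy as np

def bounded_randint_scalar(low, high, dtype=np.uint16, closed=False, rng=None):
    # Illustrative re-implementation of the scalar path shared by the
    # generated _rand_* helpers; not part of the NumPy API.
    if rng is None:
        rng = np.random.default_rng()
    info = np.iinfo(dtype)
    low, high = int(low), int(high)
    # The core generator samples the closed interval [low, high], so
    # drop 1 from high when the caller asked for half-open [low, high).
    if not closed:
        high -= 1
    if low < info.min:
        raise ValueError("low is out of bounds for %s" % np.dtype(dtype).name)
    if high > info.max:
        raise ValueError("high is out of bounds for %s" % np.dtype(dtype).name)
    if low > high:  # 1 already subtracted, so this checks the closed interval
        raise ValueError("low %s high" % ('>' if closed else '>='))
    # "span" and "off" mirror the unsigned rng/off pair in the Cython code.
    span, off = high - low, low
    return dtype(off + int(rng.integers(0, span, endpoint=True)))
```

Because the subtraction happens while `low` and `high` are still Python ints, `high = 2**16` with `closed=False` stays in range for `uint16` without casting to a wider type, which is the trick the removed docstrings describe.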
diff --git a/numpy/random/_generator.pyx b/numpy/random/_generator.pyx
index b842d6a32..4b012af2f 100644
--- a/numpy/random/_generator.pyx
+++ b/numpy/random/_generator.pyx
@@ -3456,7 +3456,7 @@ cdef class Generator:
# Multivariate distributions:
def multivariate_normal(self, mean, cov, size=None, check_valid='warn',
- tol=1e-8):
+ tol=1e-8, *, method='svd'):
"""
-    multivariate_normal(mean, cov, size=None, check_valid='warn', tol=1e-8)
+    multivariate_normal(mean, cov, size=None, check_valid='warn', tol=1e-8, *, method='svd')
@@ -3486,6 +3486,15 @@ cdef class Generator:
tol : float, optional
Tolerance when checking the singular values in covariance matrix.
cov is cast to double before the check.
+    method : {'svd', 'eigh', 'cholesky'}, optional
+        The cov input is used to compute a factor matrix A such that
+        ``A @ A.T = cov``. This argument selects the method used to
+        compute A. The default, 'svd', is the slowest; 'cholesky' is
+        the fastest but the least robust. The 'eigh' method uses an
+        eigendecomposition to compute A and is faster than 'svd' but
+        slower than 'cholesky'.
+
+ .. versionadded:: 1.18.0
Returns
-------
@@ -3546,10 +3555,16 @@ cdef class Generator:
--------
>>> mean = (1, 2)
>>> cov = [[1, 0], [0, 1]]
- >>> x = np.random.default_rng().multivariate_normal(mean, cov, (3, 3))
+ >>> rng = np.random.default_rng()
+ >>> x = rng.multivariate_normal(mean, cov, (3, 3))
>>> x.shape
(3, 3, 2)
+    We can use a method other than the default to factorize cov:
+ >>> y = rng.multivariate_normal(mean, cov, (3, 3), method='cholesky')
+ >>> y.shape
+ (3, 3, 2)
+
The following is probably true, given that 0.6 is roughly twice the
standard deviation:
@@ -3557,7 +3572,9 @@ cdef class Generator:
[True, True] # random
"""
- from numpy.dual import svd
+ if method not in {'eigh', 'svd', 'cholesky'}:
+ raise ValueError(
+ "method must be one of {'eigh', 'svd', 'cholesky'}")
# Check preconditions on arguments
mean = np.array(mean)
@@ -3600,13 +3617,27 @@ cdef class Generator:
# GH10839, ensure double to make tol meaningful
cov = cov.astype(np.double)
- (u, s, v) = svd(cov)
+ if method == 'svd':
+ from numpy.dual import svd
+ (u, s, vh) = svd(cov)
+ elif method == 'eigh':
+ from numpy.dual import eigh
+ # could call linalg.svd(hermitian=True), but that calculates a vh we don't need
+ (s, u) = eigh(cov)
+ else:
+ from numpy.dual import cholesky
+ l = cholesky(cov)
- if check_valid != 'ignore':
+        # Make sure check_valid is ignored when method == 'cholesky'
+ # since the decomposition will have failed if cov is not valid.
+ if check_valid != 'ignore' and method != 'cholesky':
if check_valid != 'warn' and check_valid != 'raise':
- raise ValueError("check_valid must equal 'warn', 'raise', or 'ignore'")
-
- psd = np.allclose(np.dot(v.T * s, v), cov, rtol=tol, atol=tol)
+ raise ValueError(
+ "check_valid must equal 'warn', 'raise', or 'ignore'")
+ if method == 'svd':
+ psd = np.allclose(np.dot(vh.T * s, vh), cov, rtol=tol, atol=tol)
+ else:
+ psd = not np.any(s < -tol)
if not psd:
if check_valid == 'warn':
warnings.warn("covariance is not positive-semidefinite.",
@@ -3614,7 +3645,17 @@ cdef class Generator:
else:
raise ValueError("covariance is not positive-semidefinite.")
- x = np.dot(x, np.sqrt(s)[:, None] * v)
+ if method == 'cholesky':
+ _factor = l
+ elif method == 'eigh':
+            # When check_valid == 'ignore', ensure that np.sqrt does not
+            # return NaN for eigenvalues that are tiny negative round-off
+            # of zero, or when the covariance is not positive semidefinite.
+ _factor = u * np.sqrt(abs(s))
+ else:
+ _factor = np.sqrt(s)[:, None] * vh
+
+ x = np.dot(x, _factor)
x += mean
x.shape = tuple(final_shape)
return x
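The hunk above reduces to: pick a factor A of cov (by SVD, eigendecomposition, or Cholesky), then map standard-normal draws through it. A sketch of the factor selection, under the assumption that samples are formed as ``mean + z @ A.T`` (the helper name `mvn_factor` is made up for illustration):

```python
import numpy as np

def mvn_factor(cov, method='svd'):
    # Return A with A @ A.T ~= cov, mirroring the three method branches
    # the patch adds to Generator.multivariate_normal (illustrative only).
    cov = np.asarray(cov, dtype=np.double)
    if method == 'svd':
        u, s, vh = np.linalg.svd(cov)
        return u * np.sqrt(s)                 # u @ diag(sqrt(s))
    elif method == 'eigh':
        s, u = np.linalg.eigh(cov)
        # abs() keeps np.sqrt from producing NaN for tiny negative
        # eigenvalues that are round-off of zero.
        return u * np.sqrt(np.abs(s))
    elif method == 'cholesky':
        # Fastest, but raises LinAlgError if cov is not positive definite.
        return np.linalg.cholesky(cov)        # lower L with L @ L.T = cov
    raise ValueError("method must be one of {'eigh', 'svd', 'cholesky'}")

rng = np.random.default_rng()
A = mvn_factor([[2.0, 0.3], [0.3, 1.0]], method='cholesky')
samples = np.array([0.0, 0.0]) + rng.standard_normal((1000, 2)) @ A.T
```

With the rows of ``z`` drawn i.i.d. standard normal, ``cov(z @ A.T) = A @ A.T``, so any of the three factors yields correctly correlated samples; they differ only in speed and in how they fail on a non-positive-semidefinite input.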
diff --git a/numpy/random/tests/test_generator_mt19937.py b/numpy/random/tests/test_generator_mt19937.py
index d4502d276..d85de6b6d 100644
--- a/numpy/random/tests/test_generator_mt19937.py
+++ b/numpy/random/tests/test_generator_mt19937.py
@@ -3,6 +3,8 @@ import sys
import pytest
import numpy as np
+from numpy.dual import cholesky, eigh, svd
+from numpy.linalg import LinAlgError
from numpy.testing import (
assert_, assert_raises, assert_equal, assert_allclose,
assert_warns, assert_no_warnings, assert_array_equal,
@@ -1196,12 +1198,13 @@ class TestRandomDist(object):
[5, 5, 3, 1, 2, 4]]])
assert_array_equal(actual, desired)
- def test_multivariate_normal(self):
+ @pytest.mark.parametrize("method", ["svd", "eigh", "cholesky"])
+ def test_multivariate_normal(self, method):
random = Generator(MT19937(self.seed))
mean = (.123456789, 10)
cov = [[1, 0], [0, 1]]
size = (3, 2)
- actual = random.multivariate_normal(mean, cov, size)
+ actual = random.multivariate_normal(mean, cov, size, method=method)
desired = np.array([[[-1.747478062846581, 11.25613495182354 ],
[-0.9967333370066214, 10.342002097029821 ]],
[[ 0.7850019631242964, 11.181113712443013 ],
@@ -1212,15 +1215,24 @@ class TestRandomDist(object):
assert_array_almost_equal(actual, desired, decimal=15)
# Check for default size, was raising deprecation warning
- actual = random.multivariate_normal(mean, cov)
+ actual = random.multivariate_normal(mean, cov, method=method)
desired = np.array([0.233278563284287, 9.424140804347195])
assert_array_almost_equal(actual, desired, decimal=15)
+        # Check that a non-symmetric covariance input raises a ValueError
+        # when check_valid='raise' with the default 'svd' method.
+ mean = [0, 0]
+ cov = [[1, 2], [1, 2]]
+ assert_raises(ValueError, random.multivariate_normal, mean, cov,
+ check_valid='raise')
# Check that non positive-semidefinite covariance warns with
# RuntimeWarning
- mean = [0, 0]
cov = [[1, 2], [2, 1]]
assert_warns(RuntimeWarning, random.multivariate_normal, mean, cov)
+ assert_warns(RuntimeWarning, random.multivariate_normal, mean, cov,
+ method='eigh')
+ assert_raises(LinAlgError, random.multivariate_normal, mean, cov,
+ method='cholesky')
# and that it doesn't warn with RuntimeWarning check_valid='ignore'
assert_no_warnings(random.multivariate_normal, mean, cov,
@@ -1229,10 +1241,12 @@ class TestRandomDist(object):
# and that it raises with RuntimeWarning check_valid='raises'
assert_raises(ValueError, random.multivariate_normal, mean, cov,
check_valid='raise')
+ assert_raises(ValueError, random.multivariate_normal, mean, cov,
+ check_valid='raise', method='eigh')
cov = np.array([[1, 0.1], [0.1, 1]], dtype=np.float32)
with suppress_warnings() as sup:
- random.multivariate_normal(mean, cov)
+ random.multivariate_normal(mean, cov, method=method)
w = sup.record(RuntimeWarning)
assert len(w) == 0
diff --git a/shippable.yml b/shippable.yml
index af3cfaa04..b5ce17751 100644
--- a/shippable.yml
+++ b/shippable.yml
@@ -22,6 +22,7 @@ runtime:
build:
ci:
# install dependencies
+ - sudo apt-get update
- sudo apt-get install gcc gfortran
- target=$(python tools/openblas_support.py)
- sudo cp -r "${target}"/64/lib/* /usr/lib
diff --git a/test_requirements.txt b/test_requirements.txt
index ea2a4bfbf..32b95897a 100644
--- a/test_requirements.txt
+++ b/test_requirements.txt
@@ -1,5 +1,5 @@
-cython==0.29.13
-pytest==5.2.1
+cython==0.29.14
+pytest==5.2.2
pytz==2019.3
pytest-cov==2.8.1
pickle5; python_version == '3.7'
diff --git a/tools/travis-test.sh b/tools/travis-test.sh
index e04a33143..549a2d570 100755
--- a/tools/travis-test.sh
+++ b/tools/travis-test.sh
@@ -53,7 +53,9 @@ setup_base()
$PYTHON setup.py build build_src --verbose-cfg build_ext --inplace 2>&1 | tee log
fi
grep -v "_configtest" log \
- | grep -vE "ld returned 1|no previously-included files matching|manifest_maker: standard file '-c'" \
+ | grep -vE "ld returned 1|no files found matching" \
+ | grep -vE "no previously-included files matching" \
+ | grep -vE "manifest_maker: standard file '-c'" \
| grep -E "warning\>" \
| tee warnings
if [ "$LAPACK" != "None" ]; then