89 files changed, 1119 insertions, 686 deletions
diff --git a/.circleci/config.yml b/.circleci/config.yml index 92987ef8e..772c3fbfd 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -35,6 +35,14 @@ jobs: pip install scipy - run: + name: create release notes + command: | + . venv/bin/activate + pip install git+https://github.com/hawkowl/towncrier.git@master + VERSION=$(python -c "import setup; print(setup.VERSION)") + towncrier --version $VERSION --yes + ./tools/ci/test_all_newsfragments_used.py + - run: name: build devdocs command: | . venv/bin/activate diff --git a/azure-pipelines.yml b/azure-pipelines.yml index 52b4818d6..0e97d42d6 100644 --- a/azure-pipelines.yml +++ b/azure-pipelines.yml @@ -29,7 +29,7 @@ jobs: python3 -m pip install --user -r test_requirements.txt && \ python3 -m pip install . && \ F77=gfortran-5 F90=gfortran-5 \ - CFLAGS='-UNDEBUG -std=c99' python3 runtests.py -n --mode=full -- -rsx --junitxml=junit/test-results.xml && \ + CFLAGS='-UNDEBUG -std=c99' python3 runtests.py -n --debug-configure --mode=full -- -rsx --junitxml=junit/test-results.xml && \ python3 tools/openblas_support.py --check_version $(OpenBLAS_version)" displayName: 'Run 32-bit Ubuntu Docker Build / Tests' - task: PublishTestResults@2 @@ -94,7 +94,7 @@ jobs: displayName: 'Check for unreachable code paths in Python modules' # prefer usage of clang over gcc proper # to match likely scenario on many user mac machines - - script: python setup.py build -j 4 install + - script: python setup.py build -j 4 build_src -v install displayName: 'Build NumPy' env: BLAS: None diff --git a/changelog/13829.enhancement.rst b/changelog/13829.enhancement.rst new file mode 100644 index 000000000..ede1b2a53 --- /dev/null +++ b/changelog/13829.enhancement.rst @@ -0,0 +1,6 @@ +Add ``axis`` argument for ``random.permutation`` and ``random.shuffle`` +----------------------------------------------------------------------- + +Previously the ``random.permutation`` and ``random.shuffle`` functions +can only shuffle an array along the first 
axis; they now have a +new argument ``axis`` which allows shuffling along a specified axis. diff --git a/doc/DISTUTILS.rst.txt b/doc/DISTUTILS.rst.txt index eadde63f8..bcef82500 100644 --- a/doc/DISTUTILS.rst.txt +++ b/doc/DISTUTILS.rst.txt @@ -243,7 +243,7 @@ in writing setup scripts: after processing all source generators, no extension module will be built. This is the recommended way to conditionally define extension modules. Source generator functions are called by the - ``build_src`` command of ``numpy.distutils``. + ``build_src`` sub-command of ``numpy.distutils``. For example, here is a typical source generator function:: diff --git a/doc/neps/nep-0030-duck-array-protocol.rst b/doc/neps/nep-0030-duck-array-protocol.rst index 07c4275a1..353c5df1e 100644 --- a/doc/neps/nep-0030-duck-array-protocol.rst +++ b/doc/neps/nep-0030-duck-array-protocol.rst @@ -16,7 +16,7 @@ Abstract We propose the ``__duckarray__`` protocol, following the high-level overview described in NEP 22, allowing downstream libraries to return arrays of their defined types, in contrast to ``np.asarray``, that coerces those ``array_like`` -to NumPy arrays. +objects to NumPy arrays. Detailed description -------------------- @@ -64,8 +64,9 @@ Implementation The implementation idea is fairly straightforward, requiring a new function ``duckarray`` to be introduced in NumPy, and a new method ``__duckarray__`` in NumPy-like array classes. The new ``__duckarray__`` method shall return the -downstream array-like object itself, such as the ``self`` object, while the -``__array__`` method returns ``TypeError``. +downstream array-like object itself, such as the ``self`` object. If appropriate, +an ``__array__`` method may be implemented that returns a NumPy array or possibly +raises a ``TypeError`` with a helpful message.
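The hunk above elides the unchanged body of the proposed ``duckarray`` function, so here is a self-contained sketch of the NEP 30 protocol as described. Note that ``duckarray`` is only a proposal and not part of NumPy, the class name ``NumPyLikeArray`` is taken from the NEP's example, and this sketch raises the ``TypeError`` from ``__array__`` (the NEP snippet returns it) so that the coercion actually fails:

```python
import numpy as np

def duckarray(array_like):
    # Sketch of the function proposed in NEP 30; not part of NumPy itself.
    # Prefer the object's own __duckarray__, fall back to coercion.
    if hasattr(array_like, "__duckarray__"):
        return array_like.__duckarray__()
    return np.asarray(array_like)

class NumPyLikeArray:
    # Minimal duck array implementing the proposed protocol.
    def __duckarray__(self):
        return self  # hand back the original object, no coercion

    def __array__(self, dtype=None):
        # Raising here stops np.asarray from silently converting the object.
        raise TypeError("NumPyLikeArray can not be converted to a numpy "
                        "array. You may want to use duckarray.")

arr = NumPyLikeArray()
print(duckarray(arr) is arr)       # the duck array passes through untouched
print(type(duckarray([1, 2, 3])))  # plain array_likes are still coerced
```

A library that already exposes ``__array__`` for real conversion can keep it; the protocol only requires ``__duckarray__`` to return the original object.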
The new NumPy ``duckarray`` function can be implemented as follows: @@ -90,15 +91,20 @@ a complete implementation would look like the following: return self def __array__(self): - return TypeError + return TypeError("NumPyLikeArray can not be converted to a numpy array. " + "You may want to use np.duckarray.") The implementation above exemplifies the simplest case, but the overall idea is that libraries will implement a ``__duckarray__`` method that returns the -original object, and ``__array__`` solely for the purpose of raising a -``TypeError``, thus preventing unintentional NumPy-coercion. In case of existing -libraries that don't already implement ``__array__`` but would like to use duck -array typing, it is advised that they they introduce both ``__array__`` and -``__duckarray__`` methods. +original object, and an ``__array__`` method that either creates and returns an +appropriate NumPy array, or raises a ``TypeError`` to prevent unintentional use +as an object in a NumPy array (if ``np.asarray`` is called on an arbitrary +object that does not implement ``__array__``, it will create a NumPy array +scalar). + +In case of existing libraries that don't already implement ``__array__`` but +would like to use duck array typing, it is advised that they introduce +both ``__array__`` and ``__duckarray__`` methods. Usage ----- diff --git a/doc/records.rst.txt b/doc/records.rst.txt index a608880d7..3c0d55216 100644 --- a/doc/records.rst.txt +++ b/doc/records.rst.txt @@ -50,7 +50,7 @@ New possibilities for the "data-type" **Dictionary (keys "names", "titles", and "formats")** - This will be converted to a ``PyArray_VOID`` type with corresponding + This will be converted to a ``NPY_VOID`` type with corresponding fields parameter (the formats list will be converted to actual ``PyArray_Descr *`` objects).
@@ -58,10 +58,10 @@ New possibilities for the "data-type" **Objects (anything with an .itemsize and .fields attribute)** If its an instance of (a sub-class of) void type, then a new ``PyArray_Descr*`` structure is created corresponding to its - typeobject (and ``PyArray_VOID``) typenumber. If the type is + typeobject (and ``NPY_VOID``) typenumber. If the type is registered, then the registered type-number is used. - Otherwise a new ``PyArray_VOID PyArray_Descr*`` structure is created + Otherwise a new ``NPY_VOID PyArray_Descr*`` structure is created and filled ->elsize and ->fields filled in appropriately. The itemsize attribute must return a number > 0. The fields diff --git a/doc/release/upcoming_changes/10151.improvement.rst b/doc/release/upcoming_changes/10151.improvement.rst index 352e03029..3706a5132 100644 --- a/doc/release/upcoming_changes/10151.improvement.rst +++ b/doc/release/upcoming_changes/10151.improvement.rst @@ -2,8 +2,8 @@ Different C numeric types of the same size have unique names ------------------------------------------------------------ On any given platform, two of ``np.intc``, ``np.int_``, and ``np.longlong`` would previously appear indistinguishable through their ``repr``, despite -having different properties when wrapped into ``dtype``s. +their corresponding ``dtype`` having different properties. A similar problem existed for the unsigned counterparts to these types, and on -some plaforms for ``np.double`` and ``np.longdouble`` +some platforms for ``np.double`` and ``np.longdouble`` These types now always print with a unique ``__name__``. 
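The unique-names note above can be checked with a short probe: the three types are distinct Python classes regardless of their sizes, so their ``dtype`` character codes (and, after this change, their ``__name__`` values) never collide. The concrete names and sizes are platform-dependent, so no output is shown:

```python
import numpy as np

# np.intc, np.int_ and np.longlong are always distinct scalar types,
# even on platforms where two of them have the same itemsize.
for t in (np.intc, np.int_, np.longlong):
    print(t.__name__, np.dtype(t).char, np.dtype(t).itemsize)
```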
diff --git a/changelog/13605.deprecation.rst b/doc/release/upcoming_changes/13605.deprecation.rst index bff12e965..bff12e965 100644 --- a/changelog/13605.deprecation.rst +++ b/doc/release/upcoming_changes/13605.deprecation.rst diff --git a/doc/release/upcoming_changes/14036.deprecation.rst b/doc/release/upcoming_changes/14036.deprecation.rst new file mode 100644 index 000000000..3d997b9a2 --- /dev/null +++ b/doc/release/upcoming_changes/14036.deprecation.rst @@ -0,0 +1,4 @@ +Deprecate `PyArray_As1D`, `PyArray_As2D` +---------------------------------------- +`PyArray_As1D`, `PyArray_As2D` are deprecated, use +`PyArray_AsCArray` instead
\ No newline at end of file diff --git a/doc/release/upcoming_changes/14100.expired.rst b/doc/release/upcoming_changes/14100.expired.rst index 953922c72..e9ea9eeb4 100644 --- a/doc/release/upcoming_changes/14100.expired.rst +++ b/doc/release/upcoming_changes/14100.expired.rst @@ -1,3 +1,3 @@ -* ``PyArray_FromDimsAndDataAndDescr`` has been removed, use - ``PyArray_NewFromDescr`` instead -* ``PyArray_FromDims`` has been removed, use ``PyArray_SimpleNew`` instead +* ``PyArray_FromDimsAndDataAndDescr`` and ``PyArray_FromDims`` have been + removed (they will always raise an error). Use ``PyArray_NewFromDescr`` + and ``PyArray_SimpleNew`` instead. diff --git a/doc/release/upcoming_changes/14248.change.rst b/doc/release/upcoming_changes/14248.change.rst new file mode 100644 index 000000000..9ae0f16bc --- /dev/null +++ b/doc/release/upcoming_changes/14248.change.rst @@ -0,0 +1,10 @@ +`numpy.distutils`: append behavior changed for LDFLAGS and similar +------------------------------------------------------------------ +`numpy.distutils` has always overridden rather than appended to ``LDFLAGS`` and +other similar such environment variables for compiling Fortran extensions. Now +the default behavior has changed to appending - which is the expected behavior +in most situations. To preserve the old (overwriting) behavior, set the +``NPY_DISTUTILS_APPEND_FLAGS`` environment variable to 0. This applies to: +``LDFLAGS``, ``F77FLAGS``, ``F90FLAGS``, ``FREEFLAGS``, ``FOPT``, ``FDEBUG``, +and ``FFLAGS``. NumPy 1.16 and 1.17 gave build warnings in situations where this +change in behavior would have affected the compile flags used. 
diff --git a/doc/release/upcoming_changes/14248.changes.rst b/doc/release/upcoming_changes/14248.changes.rst deleted file mode 100644 index ff5f4acef..000000000 --- a/doc/release/upcoming_changes/14248.changes.rst +++ /dev/null @@ -1,10 +0,0 @@ -numpy.distutils: append behavior changed for LDFLAGS and similar ----------------------------------------------------------------- -`numpy.distutils` has always overridden rather than appended to `LDFLAGS` and -other similar such environment variables for compiling Fortran extensions. Now -the default behavior has changed to appending - which is the expected behavior -in most situations. To preserve the old (overwriting) behavior, set the -`NPY_DISTUTILS_APPEND_FLAGS` environment variable to 0. This applies to: -`LDFLAGS`, `F77FLAGS`, `F90FLAGS`, `FREEFLAGS`, `FOPT`, `FDEBUG`, and `FFLAGS`. -NumPy 1.16 and 1.17 gave build warnings in situations where this change in -behavior would have affected the compile flags used. diff --git a/doc/release/upcoming_changes/14498.change.rst b/doc/release/upcoming_changes/14498.change.rst new file mode 100644 index 000000000..fd784e289 --- /dev/null +++ b/doc/release/upcoming_changes/14498.change.rst @@ -0,0 +1,7 @@ +Remove ``numpy.random.entropy`` without a deprecation +----------------------------------------------------- + +``numpy.random.entropy`` was added to the `numpy.random` namespace in 1.17.0. +It was meant to be a private c-extension module, but was exposed as public. +It has been replaced by `numpy.random.SeedSequence` so the module was +completely removed. 
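For code that used the removed module to gather OS randomness for seeding, the public `numpy.random.SeedSequence` API mentioned in the note covers that use case; a brief sketch:

```python
import numpy as np

# With no argument, SeedSequence draws fresh entropy from the OS source,
# which is what the removed numpy.random.entropy module was used for.
ss = np.random.SeedSequence()
print(ss.entropy)  # the entropy actually used, as an integer

# Recording ss.entropy makes a run reproducible later:
rng = np.random.default_rng(np.random.SeedSequence(ss.entropy))
print(rng.integers(0, 10, size=3))
```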
diff --git a/doc/release/upcoming_changes/14501.improvement.rst b/doc/release/upcoming_changes/14501.improvement.rst new file mode 100644 index 000000000..f397ecccf --- /dev/null +++ b/doc/release/upcoming_changes/14501.improvement.rst @@ -0,0 +1,6 @@ +`numpy.random.randint` produced incorrect value when the range was ``2**32`` +---------------------------------------------------------------------------- +The implementation introduced in 1.17.0 had an incorrect check when +determining whether to use the 32-bit path or the full 64-bit +path that incorrectly redirected random integer generation with a high - low +range of ``2**32`` to the 64-bit generator. diff --git a/doc/release/upcoming_changes/14510.compatibility.rst b/doc/release/upcoming_changes/14510.compatibility.rst new file mode 100644 index 000000000..63d46d2f7 --- /dev/null +++ b/doc/release/upcoming_changes/14510.compatibility.rst @@ -0,0 +1,12 @@ +`numpy.lib.recfunctions.drop_fields` can no longer return `None` +---------------------------------------------------------------- +If ``drop_fields`` is used to drop all fields, previously the array would +be completely discarded and `None` returned. Now it returns an array of the +same shape as the input, but with no fields. The old behavior can be retained +with:: + + dropped_arr = drop_fields(arr, ['a', 'b']) + if dropped_arr.dtype.names == (): + dropped_arr = None + +converting the empty recarray to `None` diff --git a/doc/release/upcoming_changes/14518.change.rst b/doc/release/upcoming_changes/14518.change.rst new file mode 100644 index 000000000..f7b782825 --- /dev/null +++ b/doc/release/upcoming_changes/14518.change.rst @@ -0,0 +1,18 @@ +Add options to quiet build configuration and build with ``-Werror`` +------------------------------------------------------------------- +Added two new configuration options. 
During the ``build_src`` subcommand, as +part of configuring NumPy, the files ``_numpyconfig.h`` and ``config.h`` are +created by probing support for various runtime functions and routines. +Previously, the very verbose compiler output during this stage clouded more +important information. By default the output is silenced. Running ``runtests.py +--debug-configure`` will add ``-v`` to the ``build_src`` subcommand, which +will restore the previous behaviour. + +Adding ``CFLAGS=-Werror`` to turn warnings into errors would trigger errors +during the configuration. Now ``runtests.py --warn-error`` will add +``--warn-error`` to the ``build`` subcommand, which will percolate to the +``build_ext`` and ``build_lib`` subcommands. This will add the compiler flag +to those stages and turn compiler warnings into errors while actually building +NumPy itself, avoiding the ``build_src`` subcommand compiler calls. + +(`gh-14527 <https://github.com/numpy/numpy/pull/14527>`__) diff --git a/doc/release/upcoming_changes/14567.expired.rst b/doc/release/upcoming_changes/14567.expired.rst new file mode 100644 index 000000000..59cb600fb --- /dev/null +++ b/doc/release/upcoming_changes/14567.expired.rst @@ -0,0 +1,5 @@ +The files ``numpy/testing/decorators.py``, ``numpy/testing/noseclasses.py`` +and ``numpy/testing/nosetester.py`` have been removed. 
They were never +meant to be public (all relevant objects are present in the +``numpy.testing`` namespace), and importing them has given a deprecation +warning since NumPy 1.15.0 diff --git a/doc/release/upcoming_changes/template.rst b/doc/release/upcoming_changes/template.rst index 21c4d19c6..9c8a3b5fc 100644 --- a/doc/release/upcoming_changes/template.rst +++ b/doc/release/upcoming_changes/template.rst @@ -1,19 +1,25 @@ +{% set title = "NumPy {} Release Notes".format(versiondata.version) %} +{{ "=" * title|length }} +{{ title }} +{{ "=" * title|length }} + {% for section, _ in sections.items() %} -{% set underline = underlines[0] %}{% if section %}{{section}} +{% set underline = underlines[0] %}{% if section %}{{ section }} {{ underline * section|length }}{% set underline = underlines[1] %} {% endif %} - {% if sections[section] %} -{% for category, val in definitions.items() if category in sections[section]%} +{% for category, val in definitions.items() if category in sections[section] %} + {{ definitions[category]['name'] }} {{ underline * definitions[category]['name']|length }} {% if definitions[category]['showcontent'] %} {% for text, values in sections[section][category].items() %} -{{ text }} ({{ values|join(', ') }}) -{% endfor %} +{{ text }} +{{ get_indent(text) }}({{values|join(', ') }}) +{% endfor %} {% else %} - {{ sections[section][category]['']|join(', ') }} @@ -23,7 +29,6 @@ No significant changes. {% else %} {% endif %} - {% endfor %} {% else %} No significant changes. 
diff --git a/doc/source/dev/development_environment.rst b/doc/source/dev/development_environment.rst index ce571926e..9d618cc9f 100644 --- a/doc/source/dev/development_environment.rst +++ b/doc/source/dev/development_environment.rst @@ -96,6 +96,11 @@ installs a ``.egg-link`` file into your site-packages as well as adjusts the Other build options ------------------- +Build options can be discovered by running any of:: + + $ python setup.py --help + $ python setup.py --help-commands + It's possible to do a parallel build with ``numpy.distutils`` with the ``-j`` option; see :ref:`parallel-builds` for more details. @@ -106,6 +111,16 @@ source tree is to use:: $ export PYTHONPATH=/some/owned/folder/lib/python3.4/site-packages +NumPy uses a series of tests to probe the compiler and libc libraries for +functions. The results are stored in ``_numpyconfig.h`` and ``config.h`` files +using ``HAVE_XXX`` definitions. These tests are run during the ``build_src`` +phase of the ``_multiarray_umath`` module in the ``generate_config_h`` and +``generate_numpyconfig_h`` functions. Since the output of these calls includes +many compiler warnings and errors, by default it is run quietly. If you wish +to see this output, you can run the ``build_src`` stage verbosely:: + + $ python setup.py build build_src -v + Using virtualenvs ----------------- diff --git a/doc/source/reference/arrays.nditer.rst b/doc/source/reference/arrays.nditer.rst index fa8183f75..7dab09a71 100644 --- a/doc/source/reference/arrays.nditer.rst +++ b/doc/source/reference/arrays.nditer.rst @@ -115,13 +115,18 @@ context is exited. array([[ 0, 2, 4], [ 6, 8, 10]]) +If you are writing code that needs to support older versions of numpy, +note that prior to 1.15, :class:`nditer` was not a context manager and +did not have a `close` method. Instead it relied on the destructor to +initiate the writeback of the buffer.
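The note added above concerns writeback of temporary buffers; on NumPy >= 1.15 the recommended pattern is the one sketched below, reusing the doubling example from the surrounding docs:

```python
import numpy as np

a = np.arange(6, dtype='i4').reshape(2, 3)

# Writeable operands may be temporary WRITEBACKIFCOPY views; entering the
# iterator as a context manager guarantees they are written back on exit.
with np.nditer(a, op_flags=['readwrite']) as it:
    for x in it:
        x[...] = 2 * x

print(a)  # [[ 0  2  4]
          #  [ 6  8 10]]
```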
+ Using an External Loop ---------------------- In all the examples so far, the elements of `a` are provided by the iterator one at a time, because all the looping logic is internal to the -iterator. While this is simple and convenient, it is not very efficient. A -better approach is to move the one-dimensional innermost loop into your +iterator. While this is simple and convenient, it is not very efficient. +A better approach is to move the one-dimensional innermost loop into your code, external to the iterator. This way, NumPy's vectorized operations can be used on larger chunks of the elements being visited. @@ -156,41 +161,29 @@ element in a computation. For example, you may want to visit the elements of an array in memory order, but use a C-order, Fortran-order, or multidimensional index to look up values in a different array. -The Python iterator protocol doesn't have a natural way to query these -additional values from the iterator, so we introduce an alternate syntax -for iterating with an :class:`nditer`. This syntax explicitly works -with the iterator object itself, so its properties are readily accessible -during iteration. With this looping construct, the current value is -accessible by indexing into the iterator, and the index being tracked -is the property `index` or `multi_index` depending on what was requested. - -The Python interactive interpreter unfortunately prints out the -values of expressions inside the while loop during each iteration of the -loop. We have modified the output in the examples using this looping -construct in order to be more readable. +The index is tracked by the iterator object itself, and accessible +through the `index` or `multi_index` properties, depending on what was +requested. The examples below show printouts demonstrating the +progression of the index: .. admonition:: Example >>> a = np.arange(6).reshape(2,3) >>> it = np.nditer(a, flags=['f_index']) - >>> while not it.finished: - ... 
print("%d <%d>" % (it[0], it.index), end=' ') - ... it.iternext() + >>> for x in it: + ... print("%d <%d>" % (x, it.index), end=' ') ... 0 <0> 1 <2> 2 <4> 3 <1> 4 <3> 5 <5> >>> it = np.nditer(a, flags=['multi_index']) - >>> while not it.finished: - ... print("%d <%s>" % (it[0], it.multi_index), end=' ') - ... it.iternext() + >>> for x in it: + ... print("%d <%s>" % (x, it.multi_index), end=' ') ... 0 <(0, 0)> 1 <(0, 1)> 2 <(0, 2)> 3 <(1, 0)> 4 <(1, 1)> 5 <(1, 2)> - >>> it = np.nditer(a, flags=['multi_index'], op_flags=['writeonly']) - >>> with it: - .... while not it.finished: - ... it[0] = it.multi_index[1] - it.multi_index[0] - ... it.iternext() + >>> with np.nditer(a, flags=['multi_index'], op_flags=['writeonly']) as it: + ... for x in it: + ... x[...] = it.multi_index[1] - it.multi_index[0] ... >>> a array([[ 0, 1, 2], @@ -199,7 +192,7 @@ construct in order to be more readable. Tracking an index or multi-index is incompatible with using an external loop, because it requires a different index value per element. If you try to combine these flags, the :class:`nditer` object will -raise an exception +raise an exception. .. admonition:: Example @@ -209,6 +202,42 @@ raise an exception File "<stdin>", line 1, in <module> ValueError: Iterator flag EXTERNAL_LOOP cannot be used if an index or multi-index is being tracked +Alternative Looping and Element Access +-------------------------------------- + +To make its properties more readily accessible during iteration, +:class:`nditer` has an alternative syntax for iterating, which works +explicitly with the iterator object itself. With this looping construct, +the current value is accessible by indexing into the iterator. Other +properties, such as tracked indices remain as before. The examples below +produce identical results to the ones in the previous section. + +.. admonition:: Example + + >>> a = np.arange(6).reshape(2,3) + >>> it = np.nditer(a, flags=['f_index']) + >>> while not it.finished: + ... 
print("%d <%d>" % (it[0], it.index), end=' ') + ... it.iternext() + ... + 0 <0> 1 <2> 2 <4> 3 <1> 4 <3> 5 <5> + + >>> it = np.nditer(a, flags=['multi_index']) + >>> while not it.finished: + ... print("%d <%s>" % (it[0], it.multi_index), end=' ') + ... it.iternext() + ... + 0 <(0, 0)> 1 <(0, 1)> 2 <(0, 2)> 3 <(1, 0)> 4 <(1, 1)> 5 <(1, 2)> + + >>> with np.nditer(a, flags=['multi_index'], op_flags=['writeonly']) as it: + ... while not it.finished: + ... it[0] = it.multi_index[1] - it.multi_index[0] + ... it.iternext() + ... + >>> a + array([[ 0, 1, 2], + [-1, 0, 1]]) + Buffering the Array Elements ---------------------------- diff --git a/doc/source/reference/c-api/array.rst b/doc/source/reference/c-api/array.rst index 4dd556e97..08bf06b00 100644 --- a/doc/source/reference/c-api/array.rst +++ b/doc/source/reference/c-api/array.rst @@ -2797,10 +2797,7 @@ Array Scalars *arr* is not ``NULL`` and the first element is negative then :c:data:`NPY_INTNEG_SCALAR` is returned, otherwise :c:data:`NPY_INTPOS_SCALAR` is returned. The possible return values - are :c:data:`NPY_{kind}_SCALAR` where ``{kind}`` can be **INTPOS**, - **INTNEG**, **FLOAT**, **COMPLEX**, **BOOL**, or **OBJECT**. - :c:data:`NPY_NOSCALAR` is also an enumerated value - :c:type:`NPY_SCALARKIND` variables can take on. + are the enumerated values in :c:type:`NPY_SCALARKIND`. .. c:function:: int PyArray_CanCoerceScalar( \ char thistype, char neededtype, NPY_SCALARKIND scalar) @@ -3596,11 +3593,21 @@ Enumerated Types A special variable type indicating the number of "kinds" of scalars distinguished in determining scalar-coercion rules. This - variable can take on the values :c:data:`NPY_{KIND}` where ``{KIND}`` can be + variable can take on the values: - **NOSCALAR**, **BOOL_SCALAR**, **INTPOS_SCALAR**, - **INTNEG_SCALAR**, **FLOAT_SCALAR**, **COMPLEX_SCALAR**, - **OBJECT_SCALAR** + .. c:var:: NPY_NOSCALAR + + .. c:var:: NPY_BOOL_SCALAR + + .. c:var:: NPY_INTPOS_SCALAR + + .. c:var:: NPY_INTNEG_SCALAR + + .. 
c:var:: NPY_FLOAT_SCALAR + + .. c:var:: NPY_COMPLEX_SCALAR + + .. c:var:: NPY_OBJECT_SCALAR .. c:var:: NPY_NSCALARKINDS diff --git a/doc/source/reference/random/entropy.rst b/doc/source/reference/random/entropy.rst deleted file mode 100644 index 0664da6f9..000000000 --- a/doc/source/reference/random/entropy.rst +++ /dev/null @@ -1,6 +0,0 @@ -System Entropy -============== - -.. module:: numpy.random.entropy - -.. autofunction:: random_entropy diff --git a/doc/source/reference/random/index.rst b/doc/source/reference/random/index.rst index 0b7a0bfad..b0283f3a7 100644 --- a/doc/source/reference/random/index.rst +++ b/doc/source/reference/random/index.rst @@ -151,9 +151,6 @@ What's New or Different select distributions * Optional ``out`` argument that allows existing arrays to be filled for select distributions -* `~entropy.random_entropy` provides access to the system - source of randomness that is used in cryptographic applications (e.g., - ``/dev/urandom`` on Unix). * All BitGenerators can produce doubles, uint64s and uint32s via CTypes (`~.PCG64.ctypes`) and CFFI (`~.PCG64.cffi`). This allows the bit generators to be used in numba. @@ -203,7 +200,6 @@ Features new-or-different Comparing Performance <performance> extending - Reading System Entropy <entropy> Original Source ~~~~~~~~~~~~~~~ diff --git a/doc/source/reference/random/new-or-different.rst b/doc/source/reference/random/new-or-different.rst index 5442f46c9..c8815f98f 100644 --- a/doc/source/reference/random/new-or-different.rst +++ b/doc/source/reference/random/new-or-different.rst @@ -45,9 +45,6 @@ Feature Older Equivalent Notes And in more detail: -* `~.entropy.random_entropy` provides access to the system - source of randomness that is used in cryptographic applications (e.g., - ``/dev/urandom`` on Unix). 
* Simulate from the complex normal distribution (`~.Generator.complex_normal`) * The normal, exponential and gamma generators use 256-step Ziggurat diff --git a/doc/source/reference/routines.testing.rst b/doc/source/reference/routines.testing.rst index c676dec07..98ce3f377 100644 --- a/doc/source/reference/routines.testing.rst +++ b/doc/source/reference/routines.testing.rst @@ -37,11 +37,11 @@ Decorators .. autosummary:: :toctree: generated/ - decorators.deprecated - decorators.knownfailureif - decorators.setastest - decorators.skipif - decorators.slow + dec.deprecated + dec.knownfailureif + dec.setastest + dec.skipif + dec.slow decorate_methods Test Running diff --git a/doc/source/reference/ufuncs.rst b/doc/source/reference/ufuncs.rst index d00e88b34..30e9a9171 100644 --- a/doc/source/reference/ufuncs.rst +++ b/doc/source/reference/ufuncs.rst @@ -228,45 +228,47 @@ can generate this table for your system with the code given in the Figure. .. admonition:: Figure - Code segment showing the "can cast safely" table for a 32-bit system. + Code segment showing the "can cast safely" table for a 64-bit system. + Generally the output depends on the system; your system might result in + a different table. + >>> mark = {False: ' -', True: ' ✓'} >>> def print_table(ntypes): - ... print 'X', - ... for char in ntypes: print char, - ... print + ... print('X ' + ' '.join(ntypes)) ... for row in ntypes: - ... print row, + ... print(row, end='') ... for col in ntypes: - ... print int(np.can_cast(row, col)), - ... print + ... print(mark[np.can_cast(row, col)], end='') + ... print() + ... >>> print_table(np.typecodes['All']) X ? b h i l q p B H I L Q P e f d g F D G S U V O M m - ? 
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 - b 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 - h 0 0 1 1 1 1 1 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 - i 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 0 0 - l 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 0 0 - q 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 0 0 - p 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 0 0 - B 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 - H 0 0 0 1 1 1 1 0 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 0 0 - I 0 0 0 0 1 1 1 0 0 1 1 1 1 0 0 1 1 0 1 1 1 1 1 1 0 0 - L 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 1 1 0 1 1 1 1 1 1 0 0 - Q 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 1 1 0 1 1 1 1 1 1 0 0 - P 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 1 1 0 1 1 1 1 1 1 0 0 - e 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 - f 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 - d 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 0 0 - g 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 1 1 1 1 0 0 - F 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 - D 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 - G 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 - S 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 - U 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 - V 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 - O 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 - M 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 - m 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 + ? 
✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ - ✓ + b - ✓ ✓ ✓ ✓ ✓ ✓ - - - - - - ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ - ✓ + h - - ✓ ✓ ✓ ✓ ✓ - - - - - - - ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ - ✓ + i - - - ✓ ✓ ✓ ✓ - - - - - - - - ✓ ✓ - ✓ ✓ ✓ ✓ ✓ ✓ - ✓ + l - - - - ✓ ✓ ✓ - - - - - - - - ✓ ✓ - ✓ ✓ ✓ ✓ ✓ ✓ - ✓ + q - - - - ✓ ✓ ✓ - - - - - - - - ✓ ✓ - ✓ ✓ ✓ ✓ ✓ ✓ - ✓ + p - - - - ✓ ✓ ✓ - - - - - - - - ✓ ✓ - ✓ ✓ ✓ ✓ ✓ ✓ - ✓ + B - - ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ - ✓ + H - - - ✓ ✓ ✓ ✓ - ✓ ✓ ✓ ✓ ✓ - ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ - ✓ + I - - - - ✓ ✓ ✓ - - ✓ ✓ ✓ ✓ - - ✓ ✓ - ✓ ✓ ✓ ✓ ✓ ✓ - ✓ + L - - - - - - - - - - ✓ ✓ ✓ - - ✓ ✓ - ✓ ✓ ✓ ✓ ✓ ✓ - ✓ + Q - - - - - - - - - - ✓ ✓ ✓ - - ✓ ✓ - ✓ ✓ ✓ ✓ ✓ ✓ - ✓ + P - - - - - - - - - - ✓ ✓ ✓ - - ✓ ✓ - ✓ ✓ ✓ ✓ ✓ ✓ - ✓ + e - - - - - - - - - - - - - ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ - - + f - - - - - - - - - - - - - - ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ - - + d - - - - - - - - - - - - - - - ✓ ✓ - ✓ ✓ ✓ ✓ ✓ ✓ - - + g - - - - - - - - - - - - - - - - ✓ - - ✓ ✓ ✓ ✓ ✓ - - + F - - - - - - - - - - - - - - - - - ✓ ✓ ✓ ✓ ✓ ✓ ✓ - - + D - - - - - - - - - - - - - - - - - - ✓ ✓ ✓ ✓ ✓ ✓ - - + G - - - - - - - - - - - - - - - - - - - ✓ ✓ ✓ ✓ ✓ - - + S - - - - - - - - - - - - - - - - - - - - ✓ ✓ ✓ ✓ - - + U - - - - - - - - - - - - - - - - - - - - - ✓ ✓ ✓ - - + V - - - - - - - - - - - - - - - - - - - - - - ✓ ✓ - - + O - - - - - - - - - - - - - - - - - - - - - - ✓ ✓ - - + M - - - - - - - - - - - - - - - - - - - - - - ✓ ✓ ✓ - + m - - - - - - - - - - - - - - - - - - - - - - ✓ ✓ - ✓ You should note that, while included in the table for completeness, diff --git a/doc/source/release.rst b/doc/source/release.rst index 0b65d3e39..fb4e2b14d 100644 --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -5,6 +5,7 @@ Release Notes .. 
toctree:: :maxdepth: 3 + 1.18.0 <release/1.18.0-notes> 1.17.1 <release/1.17.2-notes> 1.17.1 <release/1.17.1-notes> 1.17.0 <release/1.17.0-notes> diff --git a/doc/source/release/1.18.0-notes.rst b/doc/source/release/1.18.0-notes.rst new file mode 100644 index 000000000..e66540410 --- /dev/null +++ b/doc/source/release/1.18.0-notes.rst @@ -0,0 +1,8 @@ +The NumPy 1.18 release is currently in development. Please check +the ``numpy/doc/release/upcoming_changes/`` folder for upcoming +release notes. +The ``numpy/doc/release/upcoming_changes/README.txt`` details how +to add new release notes. + +For the work in progress release notes for the current development +version, see the `devdocs <https://numpy.org/devdocs/release.html>`__. diff --git a/doc/source/user/basics.io.genfromtxt.rst b/doc/source/user/basics.io.genfromtxt.rst index 6ef80bf8e..19e37eabc 100644 --- a/doc/source/user/basics.io.genfromtxt.rst +++ b/doc/source/user/basics.io.genfromtxt.rst @@ -27,13 +27,13 @@ Defining the input ================== The only mandatory argument of :func:`~numpy.genfromtxt` is the source of -the data. It can be a string, a list of strings, or a generator. If a -single string is provided, it is assumed to be the name of a local or -remote file, or an open file-like object with a :meth:`read` method, for -example, a file or :class:`io.StringIO` object. If a list of strings -or a generator returning strings is provided, each string is treated as one -line in a file. When the URL of a remote file is passed, the file is -automatically downloaded to the current directory and opened. +the data. It can be a string, a list of strings, a generator or an open +file-like object with a :meth:`read` method, for example, a file or +:class:`io.StringIO` object. If a single string is provided, it is assumed +to be the name of a local or remote file. If a list of strings or a generator +returning strings is provided, each string is treated as one line in a file.
+When the URL of a remote file is passed, the file is automatically downloaded +to the current directory and opened. Recognized file types are text files and archives. Currently, the function recognizes :class:`gzip` and :class:`bz2` (`bzip2`) archives. The type of diff --git a/doc/source/user/c-info.beyond-basics.rst b/doc/source/user/c-info.beyond-basics.rst index d4d941a5e..dd25861b4 100644 --- a/doc/source/user/c-info.beyond-basics.rst +++ b/doc/source/user/c-info.beyond-basics.rst @@ -300,9 +300,10 @@ An example castfunc is: static void double_to_float(double *from, float* to, npy_intp n, - void* ig1, void* ig2); - while (n--) { - (*to++) = (double) *(from++); + void* ignore1, void* ignore2) { + while (n--) { + (*to++) = (float) *(from++); + } } This could then be registered to convert doubles to floats using the diff --git a/numpy/__init__.py b/numpy/__init__.py index 07d67945c..fef8245de 100644 --- a/numpy/__init__.py +++ b/numpy/__init__.py @@ -143,7 +143,9 @@ else: from .core import * from . import compat from . import lib + # FIXME: why have numpy.lib if everything is imported here?? from .lib import * + from . import linalg from . import fft from . import polynomial @@ -174,6 +176,14 @@ else: __all__.extend(lib.__all__) __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma']) + # Remove things that are in the numpy.lib but not in the numpy namespace + # Note that there is a test (numpy/tests/test_public_api.py:test_numpy_namespace) + # that prevents adding more things to the main namespace by accident.
+ # The list below will grow until the `from .lib import *` fixme above is + # taken care of + __all__.remove('Arrayterator') + del Arrayterator + # Filter out Cython harmless warnings warnings.filterwarnings("ignore", message="numpy.dtype size changed") warnings.filterwarnings("ignore", message="numpy.ufunc size changed") diff --git a/numpy/core/_add_newdocs.py b/numpy/core/_add_newdocs.py index f041e0cd6..dbe3d226f 100644 --- a/numpy/core/_add_newdocs.py +++ b/numpy/core/_add_newdocs.py @@ -386,12 +386,12 @@ add_newdoc('numpy.core', 'nditer', >>> luf(lambda i,j:i*i + j/2, a, b) array([ 0.5, 1.5, 4.5, 9.5, 16.5]) - If operand flags `"writeonly"` or `"readwrite"` are used the operands may - be views into the original data with the `WRITEBACKIFCOPY` flag. In this case - nditer must be used as a context manager or the nditer.close - method must be called before using the result. The temporary - data will be written back to the original data when the `__exit__` - function is called but not before: + If operand flags `"writeonly"` or `"readwrite"` are used the + operands may be views into the original data with the + `WRITEBACKIFCOPY` flag. In this case `nditer` must be used as a + context manager or the `nditer.close` method must be called before + using the result. The temporary data will be written back to the + original data when the `__exit__` function is called but not before: >>> a = np.arange(6, dtype='i4')[::-2] >>> with np.nditer(a, [], @@ -413,6 +413,8 @@ add_newdoc('numpy.core', 'nditer', `x.data` will still point at some part of `a.data`, and writing to one will affect the other. + Context management and the `close` method appeared in version 1.15.0. + """) # nditer methods @@ -568,6 +570,8 @@ add_newdoc('numpy.core', 'nditer', ('close', Resolve all writeback semantics in writeable operands. + .. 
versionadded:: 1.15.0 + See Also -------- @@ -1342,7 +1346,7 @@ add_newdoc('numpy.core.multiarray', 'arange', add_newdoc('numpy.core.multiarray', '_get_ndarray_c_version', """_get_ndarray_c_version() - Return the compile time NDARRAY_VERSION number. + Return the compile time NPY_VERSION (formerly called NDARRAY_VERSION) number. """) diff --git a/numpy/core/code_generators/generate_umath.py b/numpy/core/code_generators/generate_umath.py index ae871ea6f..6729fe197 100644 --- a/numpy/core/code_generators/generate_umath.py +++ b/numpy/core/code_generators/generate_umath.py @@ -664,7 +664,7 @@ defdict = { None, TD('e', f='cos', astype={'e':'f'}), TD('f', simd=[('fma', 'f'), ('avx512f', 'f')]), - TD(inexact, f='cos', astype={'e':'f'}), + TD('fdg' + cmplx, f='cos'), TD(P, f='cos'), ), 'sin': @@ -673,7 +673,7 @@ defdict = { None, TD('e', f='sin', astype={'e':'f'}), TD('f', simd=[('fma', 'f'), ('avx512f', 'f')]), - TD(inexact, f='sin', astype={'e':'f'}), + TD('fdg' + cmplx, f='sin'), TD(P, f='sin'), ), 'tan': @@ -710,7 +710,7 @@ defdict = { None, TD('e', f='exp', astype={'e':'f'}), TD('f', simd=[('fma', 'f'), ('avx512f', 'f')]), - TD(inexact, f='exp', astype={'e':'f'}), + TD('fdg' + cmplx, f='exp'), TD(P, f='exp'), ), 'exp2': @@ -733,7 +733,7 @@ defdict = { None, TD('e', f='log', astype={'e':'f'}), TD('f', simd=[('fma', 'f'), ('avx512f', 'f')]), - TD(inexact, f='log', astype={'e':'f'}), + TD('fdg' + cmplx, f='log'), TD(P, f='log'), ), 'log2': @@ -763,7 +763,7 @@ defdict = { None, TD('e', f='sqrt', astype={'e':'f'}), TD(inexactvec), - TD(inexact, f='sqrt', astype={'e':'f'}), + TD('fdg' + cmplx, f='sqrt'), TD(P, f='sqrt'), ), 'cbrt': diff --git a/numpy/core/setup.py b/numpy/core/setup.py index 63b515b18..5f2f4a7b2 100644 --- a/numpy/core/setup.py +++ b/numpy/core/setup.py @@ -497,10 +497,10 @@ def configuration(parent_package='',top_path=None): #endif """)) - print('File:', target) + log.info('File: %s' % target) with open(target) as target_f: - print(target_f.read()) - 
print('EOF') + log.info(target_f.read()) + log.info('EOF') else: mathlibs = [] with open(target) as target_f: @@ -587,10 +587,10 @@ def configuration(parent_package='',top_path=None): """)) # Dump the numpyconfig.h header to stdout - print('File: %s' % target) + log.info('File: %s' % target) with open(target) as target_f: - print(target_f.read()) - print('EOF') + log.info(target_f.read()) + log.info('EOF') config.add_data_files((header_dir, target)) return target @@ -639,23 +639,6 @@ def configuration(parent_package='',top_path=None): ] ####################################################################### - # dummy module # - ####################################################################### - - # npymath needs the config.h and numpyconfig.h files to be generated, but - # build_clib cannot handle generate_config_h and generate_numpyconfig_h - # (don't ask). Because clib are generated before extensions, we have to - # explicitly add an extension which has generate_config_h and - # generate_numpyconfig_h as sources *before* adding npymath. 
- - config.add_extension('_dummy', - sources=[join('src', 'dummymodule.c'), - generate_config_h, - generate_numpyconfig_h, - generate_numpy_api] - ) - - ####################################################################### # npymath library # ####################################################################### diff --git a/numpy/core/src/multiarray/_multiarray_tests.c.src b/numpy/core/src/multiarray/_multiarray_tests.c.src index b0985c80f..9e6083e2a 100644 --- a/numpy/core/src/multiarray/_multiarray_tests.c.src +++ b/numpy/core/src/multiarray/_multiarray_tests.c.src @@ -675,6 +675,43 @@ npy_updateifcopy_deprecation(PyObject* NPY_UNUSED(self), PyObject* args) Py_RETURN_NONE; } +/* used to test PyArray_As1D usage emits not implemented error */ +static PyObject* +npy_pyarrayas1d_deprecation(PyObject* NPY_UNUSED(self), PyObject* NPY_UNUSED(args)) +{ + PyObject *op = Py_BuildValue("i", 42); + PyObject *result = op; + int dim = 4; + double arg[2] = {1, 2}; + int temp = PyArray_As1D(&result, (char **)&arg, &dim, NPY_DOUBLE); + if (temp < 0) { + Py_DECREF(op); + return NULL; + } + /* op != result */ + Py_DECREF(op); + return result; +} + +/* used to test PyArray_As2D usage emits not implemented error */ +static PyObject* +npy_pyarrayas2d_deprecation(PyObject* NPY_UNUSED(self), PyObject* NPY_UNUSED(args)) +{ + PyObject *op = Py_BuildValue("i", 42); + PyObject *result = op; + int dim1 = 4; + int dim2 = 6; + double arg[2][2] = {{1, 2}, {3, 4}}; + int temp = PyArray_As2D(&result, (char ***)&arg, &dim1, &dim2, NPY_DOUBLE); + if (temp < 0) { + Py_DECREF(op); + return NULL; + } + /* op != result */ + Py_DECREF(op); + return result; +} + /* used to create array with WRITEBACKIFCOPY flag */ static PyObject* npy_create_writebackifcopy(PyObject* NPY_UNUSED(self), PyObject* args) @@ -1961,6 +1998,12 @@ static PyMethodDef Multiarray_TestsMethods[] = { {"npy_updateifcopy_deprecation", npy_updateifcopy_deprecation, METH_O, NULL}, + {"npy_pyarrayas1d_deprecation", + 
npy_pyarrayas1d_deprecation, + METH_NOARGS, NULL}, + {"npy_pyarrayas2d_deprecation", + npy_pyarrayas2d_deprecation, + METH_NOARGS, NULL}, {"npy_create_writebackifcopy", npy_create_writebackifcopy, METH_O, NULL}, diff --git a/numpy/core/tests/test_deprecations.py b/numpy/core/tests/test_deprecations.py index 46cebdd31..b12b71940 100644 --- a/numpy/core/tests/test_deprecations.py +++ b/numpy/core/tests/test_deprecations.py @@ -446,6 +446,18 @@ class TestNPY_CHAR(_DeprecationTestCase): assert_(npy_char_deprecation() == 'S1') +class TestPyArray_AS1D(_DeprecationTestCase): + def test_npy_pyarrayas1d_deprecation(self): + from numpy.core._multiarray_tests import npy_pyarrayas1d_deprecation + assert_raises(NotImplementedError, npy_pyarrayas1d_deprecation) + + +class TestPyArray_AS2D(_DeprecationTestCase): + def test_npy_pyarrayas2d_deprecation(self): + from numpy.core._multiarray_tests import npy_pyarrayas2d_deprecation + assert_raises(NotImplementedError, npy_pyarrayas2d_deprecation) + + class Test_UPDATEIFCOPY(_DeprecationTestCase): """ v1.14 deprecates creating an array with the UPDATEIFCOPY flag, use diff --git a/numpy/core/tests/test_umath_accuracy.py b/numpy/core/tests/test_umath_accuracy.py index fcbed0dd3..0bab04df2 100644 --- a/numpy/core/tests/test_umath_accuracy.py +++ b/numpy/core/tests/test_umath_accuracy.py @@ -35,7 +35,8 @@ class TestAccuracy(object): for filename in files: data_dir = path.join(path.dirname(__file__), 'data') filepath = path.join(data_dir, filename) - file_without_comments = (r for r in open(filepath) if not r[0] in ('$', '#')) + with open(filepath) as fid: + file_without_comments = (r for r in fid if not r[0] in ('$', '#')) data = np.genfromtxt(file_without_comments, dtype=('|S39','|S39','|S39',np.int), names=('type','input','output','ulperr'), diff --git a/numpy/distutils/__init__.py b/numpy/distutils/__init__.py index 55514750e..a6f804bdc 100644 --- a/numpy/distutils/__init__.py +++ b/numpy/distutils/__init__.py @@ -28,7 +28,7 @@ def 
customized_fcompiler(plat=None, compiler=None): c.customize() return c -def customized_ccompiler(plat=None, compiler=None): - c = ccompiler.new_compiler(plat=plat, compiler=compiler) +def customized_ccompiler(plat=None, compiler=None, verbose=1): + c = ccompiler.new_compiler(plat=plat, compiler=compiler, verbose=verbose) c.customize('') return c diff --git a/numpy/distutils/ccompiler.py b/numpy/distutils/ccompiler.py index 14451fa66..643879023 100644 --- a/numpy/distutils/ccompiler.py +++ b/numpy/distutils/ccompiler.py @@ -140,7 +140,10 @@ def CCompiler_spawn(self, cmd, display=None): display = ' '.join(list(display)) log.info(display) try: - subprocess.check_output(cmd) + if self.verbose: + subprocess.check_output(cmd) + else: + subprocess.check_output(cmd, stderr=subprocess.STDOUT) except subprocess.CalledProcessError as exc: o = exc.output s = exc.returncode @@ -162,7 +165,8 @@ def CCompiler_spawn(self, cmd, display=None): if is_sequence(cmd): cmd = ' '.join(list(cmd)) - forward_bytes_to_stdout(o) + if self.verbose: + forward_bytes_to_stdout(o) if re.search(b'Too many open files', o): msg = '\nTry rerunning setup command until build succeeds.' @@ -727,10 +731,12 @@ if sys.platform == 'win32': _distutils_new_compiler = new_compiler def new_compiler (plat=None, compiler=None, - verbose=0, + verbose=None, dry_run=0, force=0): # Try first C compilers from numpy.distutils. 
+ if verbose is None: + verbose = log.get_threshold() <= log.INFO if plat is None: plat = os.name try: @@ -763,6 +769,7 @@ def new_compiler (plat=None, raise DistutilsModuleError(("can't compile C/C++ code: unable to find class '%s' " + "in module '%s'") % (class_name, module_name)) compiler = klass(None, dry_run, force) + compiler.verbose = verbose log.debug('new_compiler returns %s' % (klass)) return compiler diff --git a/numpy/distutils/command/build.py b/numpy/distutils/command/build.py index b3e18b204..5a9da1217 100644 --- a/numpy/distutils/command/build.py +++ b/numpy/distutils/command/build.py @@ -16,8 +16,8 @@ class build(old_build): user_options = old_build.user_options + [ ('fcompiler=', None, "specify the Fortran compiler type"), - ('parallel=', 'j', - "number of parallel jobs"), + ('warn-error', None, + "turn all warnings into errors (-Werror)"), ] help_options = old_build.help_options + [ @@ -28,14 +28,9 @@ class build(old_build): def initialize_options(self): old_build.initialize_options(self) self.fcompiler = None - self.parallel = None + self.warn_error = False def finalize_options(self): - if self.parallel: - try: - self.parallel = int(self.parallel) - except ValueError: - raise ValueError("--parallel/-j argument must be an integer") build_scripts = self.build_scripts old_build.finalize_options(self) plat_specifier = ".{}-{}.{}".format(get_platform(), *sys.version_info[:2]) diff --git a/numpy/distutils/command/build_clib.py b/numpy/distutils/command/build_clib.py index 910493a77..13edf0717 100644 --- a/numpy/distutils/command/build_clib.py +++ b/numpy/distutils/command/build_clib.py @@ -33,15 +33,18 @@ class build_clib(old_build_clib): ('inplace', 'i', 'Build in-place'), ('parallel=', 'j', "number of parallel jobs"), + ('warn-error', None, + "turn all warnings into errors (-Werror)"), ] - boolean_options = old_build_clib.boolean_options + ['inplace'] + boolean_options = old_build_clib.boolean_options + ['inplace', 'warn-error'] def 
initialize_options(self): old_build_clib.initialize_options(self) self.fcompiler = None self.inplace = 0 self.parallel = None + self.warn_error = None def finalize_options(self): if self.parallel: @@ -50,7 +53,10 @@ class build_clib(old_build_clib): except ValueError: raise ValueError("--parallel/-j argument must be an integer") old_build_clib.finalize_options(self) - self.set_undefined_options('build', ('parallel', 'parallel')) + self.set_undefined_options('build', + ('parallel', 'parallel'), + ('warn_error', 'warn_error'), + ) def have_f_sources(self): for (lib_name, build_info) in self.libraries: @@ -86,6 +92,10 @@ class build_clib(old_build_clib): self.compiler.customize(self.distribution, need_cxx=self.have_cxx_sources()) + if self.warn_error: + self.compiler.compiler.append('-Werror') + self.compiler.compiler_so.append('-Werror') + libraries = self.libraries self.libraries = None self.compiler.customize_cmd(self) diff --git a/numpy/distutils/command/build_ext.py b/numpy/distutils/command/build_ext.py index ef54fb25e..cd9b1c6f1 100644 --- a/numpy/distutils/command/build_ext.py +++ b/numpy/distutils/command/build_ext.py @@ -33,6 +33,8 @@ class build_ext (old_build_ext): "specify the Fortran compiler type"), ('parallel=', 'j', "number of parallel jobs"), + ('warn-error', None, + "turn all warnings into errors (-Werror)"), ] help_options = old_build_ext.help_options + [ @@ -40,10 +42,13 @@ class build_ext (old_build_ext): show_fortran_compilers), ] + boolean_options = old_build_ext.boolean_options + ['warn-error'] + def initialize_options(self): old_build_ext.initialize_options(self) self.fcompiler = None self.parallel = None + self.warn_error = None def finalize_options(self): if self.parallel: @@ -69,7 +74,10 @@ class build_ext (old_build_ext): self.include_dirs.extend(incl_dirs) old_build_ext.finalize_options(self) - self.set_undefined_options('build', ('parallel', 'parallel')) + self.set_undefined_options('build', + ('parallel', 'parallel'), + ('warn_error', 
'warn_error'), + ) def run(self): if not self.extensions: @@ -116,6 +124,11 @@ class build_ext (old_build_ext): force=self.force) self.compiler.customize(self.distribution) self.compiler.customize_cmd(self) + + if self.warn_error: + self.compiler.compiler.append('-Werror') + self.compiler.compiler_so.append('-Werror') + self.compiler.show_customization() # Setup directory for storing generated extra DLL files on Windows diff --git a/numpy/distutils/command/build_src.py b/numpy/distutils/command/build_src.py index e183b2090..af8cec08a 100644 --- a/numpy/distutils/command/build_src.py +++ b/numpy/distutils/command/build_src.py @@ -53,9 +53,12 @@ class build_src(build_ext.build_ext): ('inplace', 'i', "ignore build-lib and put compiled extensions into the source " + "directory alongside your pure Python modules"), + ('verbose', 'v', + "change logging level from WARN to INFO which will show all " + + "compiler output") ] - boolean_options = ['force', 'inplace'] + boolean_options = ['force', 'inplace', 'verbose'] help_options = [] @@ -76,6 +79,7 @@ class build_src(build_ext.build_ext): self.swig_opts = None self.swig_cpp = None self.swig = None + self.verbose = None def finalize_options(self): self.set_undefined_options('build', @@ -365,6 +369,13 @@ class build_src(build_ext.build_ext): build_dir = os.path.join(*([self.build_src] +name.split('.')[:-1])) self.mkpath(build_dir) + + if self.verbose: + new_level = log.INFO + else: + new_level = log.WARN + old_level = log.set_threshold(new_level) + for func in func_sources: source = func(extension, build_dir) if not source: @@ -375,7 +386,7 @@ class build_src(build_ext.build_ext): else: log.info(" adding '%s' to sources." 
% (source,)) new_sources.append(source) - + log.set_threshold(old_level) return new_sources def filter_py_files(self, sources): diff --git a/numpy/distutils/log.py b/numpy/distutils/log.py index 37f9fe5dd..ff7de86b1 100644 --- a/numpy/distutils/log.py +++ b/numpy/distutils/log.py @@ -67,6 +67,8 @@ def set_threshold(level, force=False): ' %s to %s' % (prev_level, level)) return prev_level +def get_threshold(): + return _global_log.threshold def set_verbosity(v, force=False): prev_level = _global_log.threshold diff --git a/numpy/distutils/misc_util.py b/numpy/distutils/misc_util.py index 0eaaeb736..7ba8ad862 100644 --- a/numpy/distutils/misc_util.py +++ b/numpy/distutils/misc_util.py @@ -1687,8 +1687,6 @@ class Configuration(object): and will be installed as foo.ini in the 'lib' subpath. - Cross-compilation - ----------------- When cross-compiling with numpy distutils, it might be necessary to use modified npy-pkg-config files. Using the default/generated files will link with the host libraries (i.e. libnpymath.a). For diff --git a/numpy/distutils/system_info.py b/numpy/distutils/system_info.py index 6cfce3b1c..5fd1003ab 100644 --- a/numpy/distutils/system_info.py +++ b/numpy/distutils/system_info.py @@ -146,7 +146,7 @@ else: from distutils.errors import DistutilsError from distutils.dist import Distribution import distutils.sysconfig -from distutils import log +from numpy.distutils import log from distutils.util import get_platform from numpy.distutils.exec_command import ( @@ -550,7 +550,6 @@ class system_info(object): dir_env_var = None search_static_first = 0 # XXX: disabled by default, may disappear in # future unless it is proved to be useful. 
- verbosity = 1 saved_results = {} notfounderror = NotFoundError @@ -558,7 +557,6 @@ class system_info(object): def __init__(self, default_lib_dirs=default_lib_dirs, default_include_dirs=default_include_dirs, - verbosity=1, ): self.__class__.info = {} self.local_prefixes = [] @@ -704,7 +702,7 @@ class system_info(object): log.info(' FOUND:') res = self.saved_results.get(self.__class__.__name__) - if self.verbosity > 0 and flag: + if log.get_threshold() <= log.INFO and flag: for k, v in res.items(): v = str(v) if k in ['sources', 'libraries'] and len(v) > 270: @@ -914,7 +912,7 @@ class system_info(object): """Return a list of existing paths composed by all combinations of items from the arguments. """ - return combine_paths(*args, **{'verbosity': self.verbosity}) + return combine_paths(*args) class fft_opt_info(system_info): @@ -1531,12 +1529,12 @@ def get_atlas_version(**config): try: s, o = c.get_output(atlas_version_c_text, libraries=libraries, library_dirs=library_dirs, - use_tee=(system_info.verbosity > 0)) + ) if s and re.search(r'undefined reference to `_gfortran', o, re.M): s, o = c.get_output(atlas_version_c_text, libraries=libraries + ['gfortran'], library_dirs=library_dirs, - use_tee=(system_info.verbosity > 0)) + ) if not s: warnings.warn(textwrap.dedent(""" ***************************************************** diff --git a/numpy/fft/README.md b/numpy/fft/README.md index 7040a2e9b..f79188139 100644 --- a/numpy/fft/README.md +++ b/numpy/fft/README.md @@ -10,11 +10,6 @@ advantages: - worst case complexity for transform sizes with large prime factors is `N*log(N)`, because Bluestein's algorithm [3] is used for these cases. 
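The pocketfft README claim above (``N*log(N)`` even for large prime sizes, via Bluestein's algorithm) can be sanity-checked without knowing which algorithm is selected internally: whatever path ``np.fft.fft`` takes for a prime transform length, its output must agree with the naive O(N^2) reference DFT. A sketch (not part of the patch):

```python
import numpy as np

def naive_dft(x):
    """O(N^2) reference DFT: X[k] = sum_n x[n] * exp(-2j*pi*k*n/N)."""
    n = np.arange(len(x))
    return np.exp(-2j * np.pi * np.outer(n, n) / len(x)) @ x

rng = np.random.RandomState(0)
x = rng.standard_normal(17)  # 17 is prime, i.e. the Bluestein case
assert np.allclose(np.fft.fft(x), naive_dft(x))
```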
-License -------- - -3-clause BSD (see LICENSE.md) - Some code details ----------------- diff --git a/numpy/lib/__init__.py b/numpy/lib/__init__.py index c1757150e..906bede37 100644 --- a/numpy/lib/__init__.py +++ b/numpy/lib/__init__.py @@ -5,10 +5,15 @@ import math from .info import __doc__ from numpy.version import version as __version__ +# Public submodules +# Note: recfunctions and (maybe) format are public too, but not imported +from . import mixins +from . import scimath as emath + +# Private submodules from .type_check import * from .index_tricks import * from .function_base import * -from .mixins import * from .nanfunctions import * from .shape_base import * from .stride_tricks import * @@ -16,9 +21,7 @@ from .twodim_base import * from .ufunclike import * from .histograms import * -from . import scimath as emath from .polynomial import * -#import convertcode from .utils import * from .arraysetops import * from .npyio import * @@ -28,11 +31,10 @@ from .arraypad import * from ._version import * from numpy.core._multiarray_umath import tracemalloc_domain -__all__ = ['emath', 'math', 'tracemalloc_domain'] +__all__ = ['emath', 'math', 'tracemalloc_domain', 'Arrayterator'] __all__ += type_check.__all__ __all__ += index_tricks.__all__ __all__ += function_base.__all__ -__all__ += mixins.__all__ __all__ += shape_base.__all__ __all__ += stride_tricks.__all__ __all__ += twodim_base.__all__ diff --git a/numpy/lib/financial.py b/numpy/lib/financial.py index 216687475..d72384e99 100644 --- a/numpy/lib/financial.py +++ b/numpy/lib/financial.py @@ -715,8 +715,6 @@ def irr(values): >>> round(np.irr([-5, 10.5, 1, -8, 1]), 5) 0.0886 - (Compare with the Example given for numpy.lib.financial.npv) - """ # `np.roots` call is why this function does not support Decimal type. # @@ -763,6 +761,15 @@ def npv(rate, values): The NPV of the input cash flow series `values` at the discount `rate`. 
+ Warnings + -------- + ``npv`` considers a series of cashflows starting in the present (t = 0). + NPV can also be defined with a series of future cashflows, paid at the + end, rather than the start, of each period. If future cashflows are used, + the first cashflow `values[0]` must be zeroed and added to the net + present value of the future cashflows. This is demonstrated in the + examples. + Notes ----- Returns the result of: [G]_ @@ -776,10 +783,24 @@ def npv(rate, values): Examples -------- - >>> np.npv(0.281,[-100, 39, 59, 55, 20]) - -0.0084785916384548798 # may vary - - (Compare with the Example given for numpy.lib.financial.irr) + Consider a potential project with an initial investment of $40 000 and + projected cashflows of $5 000, $8 000, $12 000 and $30 000 at the end of + each period discounted at a rate of 8% per period. To find the project's + net present value: + + >>> rate, cashflows = 0.08, [-40_000, 5_000, 8_000, 12_000, 30_000] + >>> np.npv(rate, cashflows).round(5) + 3065.22267 + + It may be preferable to split the projected cashflow into an initial + investment and expected future cashflows. 
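The convention stated in the new ``Warnings`` section (cashflows start at t = 0, in the present) can be written out directly. This sketch of the NPV definition (illustration only, not the numpy implementation) reproduces the number used in the rewritten docstring examples:

```python
def npv(rate, values):
    """NPV with t = 0 in the present: sum of values[t] / (1 + rate)**t."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(values))

# Same numbers as the docstring example: $40 000 investment, 8% per period
cashflows = [-40_000, 5_000, 8_000, 12_000, 30_000]
result = npv(0.08, cashflows)
assert abs(result - 3065.22267) < 0.01
```

Because ``values[0]`` is discounted by ``(1 + rate)**0 == 1``, moving the initial investment out of the series and adding it back afterwards gives the same result, which is what the second docstring example demonstrates.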
In this case, the value of + the initial cashflow is zero and the initial investment is later added + to the future cashflows net present value: + + >>> initial_cashflow = cashflows[0] + >>> cashflows[0] = 0 + >>> np.round(np.npv(rate, cashflows) + initial_cashflow, 5) + 3065.22267 """ values = np.asarray(values) diff --git a/numpy/lib/format.py b/numpy/lib/format.py index 3bf818812..1ecd72815 100644 --- a/numpy/lib/format.py +++ b/numpy/lib/format.py @@ -173,6 +173,9 @@ from numpy.compat import ( ) +__all__ = [] + + MAGIC_PREFIX = b'\x93NUMPY' MAGIC_LEN = len(MAGIC_PREFIX) + 2 ARRAY_ALIGN = 64 # plausible values are powers of 2 between 16 and 4096 diff --git a/numpy/lib/function_base.py b/numpy/lib/function_base.py index 21532838b..e39bbf63a 100644 --- a/numpy/lib/function_base.py +++ b/numpy/lib/function_base.py @@ -1167,11 +1167,13 @@ def diff(a, n=1, axis=-1, prepend=np._NoValue, append=np._NoValue): The axis along which the difference is taken, default is the last axis. prepend, append : array_like, optional - Values to prepend or append to "a" along axis prior to + Values to prepend or append to `a` along axis prior to performing the difference. Scalar values are expanded to arrays with length 1 in the direction of axis and the shape of the input array in along all other axes. Otherwise the - dimension and shape must match "a" except along axis. + dimension and shape must match `a` except along axis. + + .. versionadded:: 1.16.0 Returns ------- diff --git a/numpy/lib/mixins.py b/numpy/lib/mixins.py index 52ad45b68..f974a7724 100644 --- a/numpy/lib/mixins.py +++ b/numpy/lib/mixins.py @@ -5,8 +5,8 @@ import sys from numpy.core import umath as um -# Nothing should be exposed in the top-level NumPy module. 
-__all__ = [] + +__all__ = ['NDArrayOperatorsMixin'] def _disables_array_ufunc(obj): diff --git a/numpy/lib/recfunctions.py b/numpy/lib/recfunctions.py index 40060b41a..927161ddb 100644 --- a/numpy/lib/recfunctions.py +++ b/numpy/lib/recfunctions.py @@ -200,7 +200,7 @@ def flatten_descr(ndtype): descr = [] for field in names: (typ, _) = ndtype.fields[field] - if typ.names: + if typ.names is not None: descr.extend(flatten_descr(typ)) else: descr.append((field, typ)) @@ -527,6 +527,10 @@ def drop_fields(base, drop_names, usemask=True, asrecarray=False): Nested fields are supported. + .. versionchanged:: 1.18.0 + `drop_fields` returns an array with 0 fields if all fields are dropped, + rather than returning ``None`` as it did previously. + Parameters ---------- base : array @@ -566,7 +570,7 @@ current = ndtype[name] if name in drop_names: continue - if current.names: + if current.names is not None: descr = _drop_descr(current, drop_names) if descr: newdtype.append((name, descr)) @@ -575,8 +579,6 @@ return newdtype newdtype = _drop_descr(base.dtype, drop_names) - if not newdtype: - return None output = np.empty(base.shape, dtype=newdtype) output = recursive_fill_fields(base, output) diff --git a/numpy/lib/shape_base.py b/numpy/lib/shape_base.py index a5d0040aa..92d52109e 100644 --- a/numpy/lib/shape_base.py +++ b/numpy/lib/shape_base.py @@ -782,7 +782,7 @@ def _split_dispatcher(ary, indices_or_sections, axis=None): @array_function_dispatch(_split_dispatcher) def split(ary, indices_or_sections, axis=0): """ - Split an array into multiple sub-arrays. + Split an array into multiple sub-arrays as views into `ary`. Parameters ---------- @@ -809,7 +809,7 @@ Returns ------- sub-arrays : list of ndarrays - A list of sub-arrays as views into `ary`.
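The two ``is not None`` changes in the ``recfunctions`` hunk above hinge on a subtlety worth spelling out (illustration only): ``dtype.names`` is ``None`` for an unstructured dtype but an empty, *falsy* tuple ``()`` for a structured dtype with zero fields, so a plain truthiness test conflates the two cases.

```python
import numpy as np

plain = np.dtype(float)  # unstructured dtype
empty = np.dtype([])     # structured dtype with zero fields

assert plain.names is None
assert empty.names == ()  # falsy, yet not None

# A truthiness check cannot tell these apart; `is not None` can.
assert bool(plain.names) == bool(empty.names) == False
assert (plain.names is not None) != (empty.names is not None)
```

This is also why ``drop_fields`` can now return an array with zero fields instead of ``None``, as the new ``versionchanged`` note states.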
Raises ------ @@ -854,8 +854,7 @@ if N % sections: raise ValueError( 'array split does not result in an equal division') - res = array_split(ary, indices_or_sections, axis) - return res + return array_split(ary, indices_or_sections, axis) def _hvdsplit_dispatcher(ary, indices_or_sections): diff --git a/numpy/lib/tests/test_financial.py b/numpy/lib/tests/test_financial.py index 524915041..21088765f 100644 --- a/numpy/lib/tests/test_financial.py +++ b/numpy/lib/tests/test_financial.py @@ -9,6 +9,12 @@ from numpy.testing import ( class TestFinancial(object): + def test_npv_irr_congruence(self): + # IRR is defined as the rate required for the present value of + # a series of cashflows to be zero, i.e. NPV(IRR(x), x) = 0 + cashflows = np.array([-40000, 5000, 8000, 12000, 30000]) + assert_allclose(np.npv(np.irr(cashflows), cashflows), 0, atol=1e-10, rtol=0) + def test_rate(self): assert_almost_equal( np.rate(10, 0, -3500, 10000), diff --git a/numpy/lib/tests/test_recfunctions.py b/numpy/lib/tests/test_recfunctions.py index 0c839d486..fa5f4dec2 100644 --- a/numpy/lib/tests/test_recfunctions.py +++ b/numpy/lib/tests/test_recfunctions.py @@ -91,8 +91,10 @@ class TestRecFunctions(object): control = np.array([(1,), (4,)], dtype=[('a', int)]) assert_equal(test, control) + # dropping all fields results in an array with no fields test = drop_fields(a, ['a', 'b']) - assert_(test is None) + control = np.array([(), ()], dtype=[]) + assert_equal(test, control) def test_rename_fields(self): # Test rename fields @@ -378,8 +380,8 @@ class TestMergeArrays(object): z = np.array( [('A', 1.), ('B', 2.)], dtype=[('A', '|S3'), ('B', float)]) w = np.array( - [(1, (2, 3.0)), (4, (5, 6.0))], - dtype=[('a', int), ('b', [('ba', float), ('bb', int)])]) + [(1, (2, 3.0, ())), (4, (5, 6.0, ()))], + dtype=[('a', int), ('b', [('ba', float), ('bb', int), ('bc', [])])]) self.data = (w, x, y, z) def test_solo(self): @@ -450,8 +452,8 @@ class
TestMergeArrays(object): test = merge_arrays((x, w), flatten=False) controldtype = [('f0', int), ('f1', [('a', int), - ('b', [('ba', float), ('bb', int)])])] - control = np.array([(1., (1, (2, 3.0))), (2, (4, (5, 6.0)))], + ('b', [('ba', float), ('bb', int), ('bc', [])])])] + control = np.array([(1., (1, (2, 3.0, ()))), (2, (4, (5, 6.0, ())))], dtype=controldtype) assert_equal(test, control) diff --git a/numpy/lib/utils.py b/numpy/lib/utils.py index 8bcbd8e86..3c71d2a7c 100644 --- a/numpy/lib/utils.py +++ b/numpy/lib/utils.py @@ -788,13 +788,8 @@ def lookfor(what, module=None, import_modules=True, regenerate=False, if kind in ('module', 'object'): # don't show modules or objects continue - ok = True doc = docstring.lower() - for w in whats: - if w not in doc: - ok = False - break - if ok: + if all(w in doc for w in whats): found.append(name) # Relevance sort diff --git a/numpy/ma/version.py b/numpy/ma/version.py deleted file mode 100644 index a2c5c42a8..000000000 --- a/numpy/ma/version.py +++ /dev/null @@ -1,14 +0,0 @@ -"""Version number - -""" -from __future__ import division, absolute_import, print_function - -version = '1.00' -release = False - -if not release: - from . import core - from . 
import extras - revision = [core.__revision__.split(':')[-1][:-1].strip(), - extras.__revision__.split(':')[-1][:-1].strip(),] - version += '.dev%04i' % max([int(rev) for rev in revision]) diff --git a/numpy/matlib.py b/numpy/matlib.py index 9e115943a..604ef470b 100644 --- a/numpy/matlib.py +++ b/numpy/matlib.py @@ -2,7 +2,7 @@ from __future__ import division, absolute_import, print_function import numpy as np from numpy.matrixlib.defmatrix import matrix, asmatrix -# need * as we're copying the numpy namespace +# need * as we're copying the numpy namespace (FIXME: this makes little sense) from numpy import * __version__ = np.__version__ diff --git a/numpy/random/__init__.py b/numpy/random/__init__.py index e7eecc5cd..f7c248451 100644 --- a/numpy/random/__init__.py +++ b/numpy/random/__init__.py @@ -181,7 +181,6 @@ __all__ = [ from . import _pickle from . import common from . import bounded_integers -from . import entropy from .mtrand import * from .generator import Generator, default_rng diff --git a/numpy/random/entropy.pyx b/numpy/random/entropy.pyx deleted file mode 100644 index 95bf7c177..000000000 --- a/numpy/random/entropy.pyx +++ /dev/null @@ -1,155 +0,0 @@ -cimport numpy as np -import numpy as np - -from libc.stdint cimport uint32_t, uint64_t - -__all__ = ['random_entropy', 'seed_by_array'] - -np.import_array() - -cdef extern from "src/splitmix64/splitmix64.h": - cdef uint64_t splitmix64_next(uint64_t *state) nogil - -cdef extern from "src/entropy/entropy.h": - cdef bint entropy_getbytes(void* dest, size_t size) - cdef bint entropy_fallback_getbytes(void *dest, size_t size) - -cdef Py_ssize_t compute_numel(size): - cdef Py_ssize_t i, n = 1 - if isinstance(size, tuple): - for i in range(len(size)): - n *= size[i] - else: - n = size - return n - - -def seed_by_array(object seed, Py_ssize_t n): - """ - Transforms a seed array into an initial state - - Parameters - ---------- - seed: ndarray, 1d, uint64 - Array to use. If seed is a scalar, promote to array. 
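The splitmix64 transform used by the removed ``seed_by_array`` above is small enough to sketch in pure Python. This is a hypothetical port for illustration, mirroring how the Cython code XORs successive seed words into the stream while emitting ``n`` state words; the function names follow the deleted module, not any public NumPy API:

```python
MASK64 = (1 << 64) - 1

def splitmix64_next(state):
    """Advance the splitmix64 state; return (new_state, output word)."""
    state = (state + 0x9E3779B97F4A7C15) & MASK64
    z = state
    z = ((z ^ (z >> 30)) * 0xBF58476D1CE4E5B9) & MASK64
    z = ((z ^ (z >> 27)) * 0x94D049BB133111EB) & MASK64
    return state, z ^ (z >> 31)

def seed_by_array(seed_words, n):
    """Stretch a list of uint64 seed words into n initial-state words."""
    out, state, loc = [0] * n, 0, 0
    for i in range(max(n, len(seed_words))):
        if i < len(seed_words):
            state ^= seed_words[i]  # fold the next seed word in
        state, out[loc] = splitmix64_next(state)
        loc = (loc + 1) % n
    return out
```

With seed word 0 the first output is the well-known splitmix64 test vector ``0xE220A8397B1DCDAF``.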
- n : int - Number of 64-bit unsigned integers required - - Notes - ----- - Uses splitmix64 to perform the transformation - """ - cdef uint64_t seed_copy = 0 - cdef uint64_t[::1] seed_array - cdef uint64_t[::1] initial_state - cdef Py_ssize_t seed_size, iter_bound - cdef int i, loc = 0 - - if hasattr(seed, 'squeeze'): - seed = seed.squeeze() - arr = np.asarray(seed) - if arr.shape == (): - err_msg = 'Scalar seeds must be integers between 0 and 2**64 - 1' - if not np.isreal(arr): - raise TypeError(err_msg) - int_seed = int(seed) - if int_seed != seed: - raise TypeError(err_msg) - if int_seed < 0 or int_seed > 2**64 - 1: - raise ValueError(err_msg) - seed_array = np.array([int_seed], dtype=np.uint64) - elif issubclass(arr.dtype.type, np.inexact): - raise TypeError('seed array must be integers') - else: - err_msg = "Seed values must be integers between 0 and 2**64 - 1" - obj = np.asarray(seed).astype(np.object) - if obj.ndim != 1: - raise ValueError('Array-valued seeds must be 1-dimensional') - if not np.isreal(obj).all(): - raise TypeError(err_msg) - if ((obj > int(2**64 - 1)) | (obj < 0)).any(): - raise ValueError(err_msg) - try: - obj_int = obj.astype(np.uint64, casting='unsafe') - except ValueError: - raise ValueError(err_msg) - if not (obj == obj_int).all(): - raise TypeError(err_msg) - seed_array = obj_int - - seed_size = seed_array.shape[0] - iter_bound = n if n > seed_size else seed_size - - initial_state = <np.ndarray>np.empty(n, dtype=np.uint64) - for i in range(iter_bound): - if i < seed_size: - seed_copy ^= seed_array[i] - initial_state[loc] = splitmix64_next(&seed_copy) - loc += 1 - if loc == n: - loc = 0 - - return np.array(initial_state) - - -def random_entropy(size=None, source='system'): - """ - random_entropy(size=None, source='system') - - Read entropy from the system cryptographic provider - - Parameters - ---------- - size : int or tuple of ints, optional - Output shape. 
If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn. Default is None, in which case a - single value is returned. - source : str {'system', 'fallback'} - Source of entropy. 'system' uses system cryptographic pool. - 'fallback' uses a hash of the time and process id. - - Returns - ------- - entropy : scalar or array - Entropy bits in 32-bit unsigned integers. A scalar is returned if size - is `None`. - - Notes - ----- - On Unix-like machines, reads from ``/dev/urandom``. On Windows machines - reads from the RSA algorithm provided by the cryptographic service - provider. - - This function reads from the system entropy pool and so samples are - not reproducible. In particular, it does *NOT* make use of a - BitGenerator, and so ``seed`` and setting ``state`` have no - effect. - - Raises RuntimeError if the command fails. - """ - cdef bint success = True - cdef Py_ssize_t n = 0 - cdef uint32_t random = 0 - cdef uint32_t [:] randoms - - if source not in ('system', 'fallback'): - raise ValueError('Unknown value in source.') - - if size is None: - if source == 'system': - success = entropy_getbytes(<void *>&random, 4) - else: - success = entropy_fallback_getbytes(<void *>&random, 4) - else: - n = compute_numel(size) - randoms = np.zeros(n, dtype=np.uint32) - if source == 'system': - success = entropy_getbytes(<void *>(&randoms[0]), 4 * n) - else: - success = entropy_fallback_getbytes(<void *>(&randoms[0]), 4 * n) - if not success: - raise RuntimeError('Unable to read from system cryptographic provider') - - if n == 0: - return random - return np.asarray(randoms).reshape(size) diff --git a/numpy/random/generator.pyx b/numpy/random/generator.pyx index 26fd95129..37ac57c06 100644 --- a/numpy/random/generator.pyx +++ b/numpy/random/generator.pyx @@ -4,6 +4,7 @@ import operator import warnings import numpy as np +from numpy.core.multiarray import normalize_axis_index from .bounded_integers import _integers_types from .pcg64 import PCG64 @@ 
-3783,20 +3784,21 @@ cdef class Generator: return diric # Shuffling and permutations: - def shuffle(self, object x): + def shuffle(self, object x, axis=0): """ shuffle(x, axis=0) Modify a sequence in-place by shuffling its contents. - This function only shuffles the array along the first axis of a - multi-dimensional array. The order of sub-arrays is changed but - their contents remains the same. + The order of sub-arrays is changed but their contents remain the same. Parameters ---------- x : array_like The array or list to be shuffled. + axis : int, optional + The axis along which `x` is shuffled. Default is 0. + Values other than 0 are only supported for `ndarray` objects. Returns ------- @@ -3810,8 +3812,6 @@ cdef class Generator: >>> arr [1 7 5 2 9 4 3 6 0 8] # random - Multi-dimensional arrays are only shuffled along the first axis: - >>> arr = np.arange(9).reshape((3, 3)) >>> rng.shuffle(arr) >>> arr @@ -3819,17 +3819,25 @@ cdef class Generator: [6, 7, 8], [0, 1, 2]]) + >>> arr = np.arange(9).reshape((3, 3)) + >>> rng.shuffle(arr, axis=1) + >>> arr + array([[2, 0, 1], # random + [5, 3, 4], + [8, 6, 7]]) """ cdef: np.npy_intp i, j, n = len(x), stride, itemsize char* x_ptr char* buf_ptr + axis = normalize_axis_index(axis, np.ndim(x)) + if type(x) is np.ndarray and x.ndim == 1 and x.size: # Fast, statically typed path: shuffle the underlying buffer. # Only for non-empty, 1d objects of class ndarray (subclasses such # as MaskedArrays may not support this approach). - x_ptr = <char*><size_t>x.ctypes.data + x_ptr = <char*><size_t>np.PyArray_DATA(x) stride = x.strides[0] itemsize = x.dtype.itemsize # As the array x could contain python objects we use a buffer @@ -3837,7 +3845,7 @@ # within the buffer and erroneously decrementing its refcount # when the function exits.
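As an aside on the axis handling added here: shuffling along an arbitrary axis is equivalent to permuting an index array and indexing with it, which is the approach the `permutation` change later in this diff takes. A minimal sketch (`shuffle_along_axis` is a hypothetical helper, not part of this patch; assumes NumPy >= 1.17 for `default_rng`):

```python
import numpy as np

def shuffle_along_axis(rng, arr, axis=0):
    # Hypothetical helper: move whole sub-arrays along `axis` as units by
    # shuffling an index array, mirroring the permutation() implementation.
    idx = rng.permutation(arr.shape[axis])
    slices = [slice(None)] * arr.ndim
    slices[axis] = idx
    return arr[tuple(slices)]

rng = np.random.default_rng(12345)
arr = np.arange(16).reshape(4, 4)
out = shuffle_along_axis(rng, arr, axis=1)
# Whole columns move together, so every row is permuted the same way.
```

Because entire columns travel as units, each row of `out` is the same permutation of the corresponding row of `arr`.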
buf = np.empty(itemsize, dtype=np.int8) # GC'd at function exit - buf_ptr = <char*><size_t>buf.ctypes.data + buf_ptr = <char*><size_t>np.PyArray_DATA(buf) with self.lock: # We trick gcc into providing a specialized implementation for # the most common case, yielding a ~33% performance improvement. @@ -3847,6 +3855,7 @@ else: self._shuffle_raw(n, 1, itemsize, stride, x_ptr, buf_ptr) elif isinstance(x, np.ndarray) and x.ndim and x.size: + x = np.swapaxes(x, 0, axis) buf = np.empty_like(x[0, ...]) with self.lock: for i in reversed(range(1, n)): @@ -3859,6 +3868,9 @@ x[i] = buf else: # Untyped path. + if axis != 0: + raise NotImplementedError("Axis argument is only supported " + "on ndarray objects") with self.lock: for i in reversed(range(1, n)): j = random_interval(&self._bitgen, i) @@ -3914,13 +3926,11 @@ data[j] = data[i] data[i] = temp - def permutation(self, object x): + def permutation(self, object x, axis=0): """ permutation(x, axis=0) Randomly permute a sequence, or return a permuted range. - If `x` is a multi-dimensional array, it is only shuffled along its - first index. Parameters ---------- x : int or array_like If `x` is an integer, randomly permute ``np.arange(x)``. If `x` is an array, make a copy and shuffle the elements randomly. + axis : int, optional + The axis along which `x` is shuffled. Default is 0. Returns ------- @@ -3953,16 +3965,22 @@ Traceback (most recent call last): ... 
numpy.AxisError: x must be an integer or at least 1-dimensional - """ + >>> arr = np.arange(9).reshape((3, 3)) + >>> rng.permutation(arr, axis=1) + array([[0, 2, 1], # random + [3, 5, 4], + [6, 8, 7]]) + + """ if isinstance(x, (int, np.integer)): arr = np.arange(x) self.shuffle(arr) return arr arr = np.asarray(x) - if arr.ndim < 1: - raise np.AxisError("x must be an integer or at least 1-dimensional") + + axis = normalize_axis_index(axis, arr.ndim) # shuffle has fast-path for 1-d if arr.ndim == 1: @@ -3973,9 +3991,11 @@ cdef class Generator: return arr # Shuffle index array, dtype to ensure fast path - idx = np.arange(arr.shape[0], dtype=np.intp) + idx = np.arange(arr.shape[axis], dtype=np.intp) self.shuffle(idx) - return arr[idx] + slices = [slice(None)]*arr.ndim + slices[axis] = idx + return arr[tuple(slices)] def default_rng(seed=None): diff --git a/numpy/random/legacy_distributions.pxd b/numpy/random/legacy_distributions.pxd index 7ba058054..c681388db 100644 --- a/numpy/random/legacy_distributions.pxd +++ b/numpy/random/legacy_distributions.pxd @@ -34,6 +34,8 @@ cdef extern from "legacy-distributions.h": double nonc) nogil double legacy_wald(aug_bitgen_t *aug_state, double mean, double scale) nogil double legacy_lognormal(aug_bitgen_t *aug_state, double mean, double sigma) nogil + int64_t legacy_random_binomial(bitgen_t *bitgen_state, double p, + int64_t n, binomial_t *binomial) nogil int64_t legacy_negative_binomial(aug_bitgen_t *aug_state, double n, double p) nogil int64_t legacy_random_hypergeometric(bitgen_t *bitgen_state, int64_t good, int64_t bad, int64_t sample) nogil int64_t legacy_random_logseries(bitgen_t *bitgen_state, double p) nogil diff --git a/numpy/random/mt19937.pyx b/numpy/random/mt19937.pyx index 49c3622f5..7d0f6cd22 100644 --- a/numpy/random/mt19937.pyx +++ b/numpy/random/mt19937.pyx @@ -5,7 +5,6 @@ cimport numpy as np from .common cimport * from .bit_generator cimport BitGenerator, SeedSequence -from .entropy import random_entropy __all__ = 
['MT19937'] @@ -156,7 +155,8 @@ cdef class MT19937(BitGenerator): Random seed initializing the pseudo-random number generator. Can be an integer in [0, 2**32-1], array of integers in [0, 2**32-1], a `SeedSequence`, or ``None``. If `seed` - is ``None``, then sample entropy for a seed. + is ``None``, then fresh, unpredictable entropy will be pulled from + the OS. Raises ------ @@ -167,7 +167,8 @@ cdef class MT19937(BitGenerator): with self.lock: try: if seed is None: - val = random_entropy(RK_STATE_LEN) + seed = SeedSequence() + val = seed.generate_state(RK_STATE_LEN) # MSB is 1; assuring non-zero initial array self.rng_state.key[0] = 0x80000000UL for i in range(1, RK_STATE_LEN): diff --git a/numpy/random/mtrand.pyx b/numpy/random/mtrand.pyx index 468703e38..c469a4645 100644 --- a/numpy/random/mtrand.pyx +++ b/numpy/random/mtrand.pyx @@ -3086,7 +3086,9 @@ cdef class RandomState: for i in range(cnt): _dp = (<double*>np.PyArray_MultiIter_DATA(it, 1))[0] _in = (<long*>np.PyArray_MultiIter_DATA(it, 2))[0] - (<long*>np.PyArray_MultiIter_DATA(it, 0))[0] = random_binomial(&self._bitgen, _dp, _in, &self._binomial) + (<long*>np.PyArray_MultiIter_DATA(it, 0))[0] = \ + legacy_random_binomial(&self._bitgen, _dp, _in, + &self._binomial) np.PyArray_MultiIter_NEXT(it) @@ -3099,7 +3101,8 @@ if size is None: with self.lock: - return random_binomial(&self._bitgen, _dp, _in, &self._binomial) + return <long>legacy_random_binomial(&self._bitgen, _dp, _in, + &self._binomial) randoms = <np.ndarray>np.empty(size, int) cnt = np.PyArray_SIZE(randoms) @@ -3107,8 +3110,8 @@ with self.lock, nogil: for i in range(cnt): - randoms_data[i] = random_binomial(&self._bitgen, _dp, _in, - &self._binomial) + randoms_data[i] = legacy_random_binomial(&self._bitgen, _dp, _in, + &self._binomial) return randoms @@ -4070,7 +4073,7 @@ cdef class RandomState: # Fast, statically typed path: shuffle the underlying buffer. 
# Only for non-empty, 1d objects of class ndarray (subclasses such # as MaskedArrays may not support this approach). - x_ptr = <char*><size_t>x.ctypes.data + x_ptr = <char*><size_t>np.PyArray_DATA(x) stride = x.strides[0] itemsize = x.dtype.itemsize # As the array x could contain python objects we use a buffer @@ -4078,7 +4081,7 @@ cdef class RandomState: # within the buffer and erroneously decrementing it's refcount # when the function exits. buf = np.empty(itemsize, dtype=np.int8) # GC'd at function exit - buf_ptr = <char*><size_t>buf.ctypes.data + buf_ptr = <char*><size_t>np.PyArray_DATA(buf) with self.lock: # We trick gcc into providing a specialized implementation for # the most common case, yielding a ~33% performance improvement. diff --git a/numpy/random/setup.py b/numpy/random/setup.py index a820d326e..f0ebe331f 100644 --- a/numpy/random/setup.py +++ b/numpy/random/setup.py @@ -61,18 +61,6 @@ def configuration(parent_package='', top_path=None): # One can force emulated 128-bit arithmetic if one wants. 
#PCG64_DEFS += [('PCG_FORCE_EMULATED_128BIT_MATH', '1')] - config.add_extension('entropy', - sources=['entropy.c', 'src/entropy/entropy.c'] + - [generate_libraries], - libraries=EXTRA_LIBRARIES, - extra_compile_args=EXTRA_COMPILE_ARGS, - extra_link_args=EXTRA_LINK_ARGS, - depends=[join('src', 'splitmix64', 'splitmix.h'), - join('src', 'entropy', 'entropy.h'), - 'entropy.pyx', - ], - define_macros=defs, - ) for gen in ['mt19937']: # gen.pyx, src/gen/gen.c, src/gen/gen-jump.c config.add_extension(gen, diff --git a/numpy/random/src/distributions/distributions.c b/numpy/random/src/distributions/distributions.c index 65257ecbf..1244ffe65 100644 --- a/numpy/random/src/distributions/distributions.c +++ b/numpy/random/src/distributions/distributions.c @@ -901,8 +901,8 @@ RAND_INT_TYPE random_binomial_inversion(bitgen_t *bitgen_state, RAND_INT_TYPE n, return X; } -RAND_INT_TYPE random_binomial(bitgen_t *bitgen_state, double p, RAND_INT_TYPE n, - binomial_t *binomial) { +int64_t random_binomial(bitgen_t *bitgen_state, double p, int64_t n, + binomial_t *binomial) { double q; if ((n == 0LL) || (p == 0.0f)) @@ -1478,7 +1478,7 @@ uint64_t random_bounded_uint64(bitgen_t *bitgen_state, uint64_t off, uint64_t rng, uint64_t mask, bool use_masked) { if (rng == 0) { return off; - } else if (rng < 0xFFFFFFFFUL) { + } else if (rng <= 0xFFFFFFFFUL) { /* Call 32-bit generator if range in 32-bit. 
*/ if (use_masked) { return off + buffered_bounded_masked_uint32(bitgen_state, rng, mask, NULL, @@ -1592,7 +1592,7 @@ void random_bounded_uint64_fill(bitgen_t *bitgen_state, uint64_t off, for (i = 0; i < cnt; i++) { out[i] = off; } - } else if (rng < 0xFFFFFFFFUL) { + } else if (rng <= 0xFFFFFFFFUL) { uint32_t buf = 0; int bcnt = 0; diff --git a/numpy/random/src/distributions/distributions.h b/numpy/random/src/distributions/distributions.h index c8cdfd20f..f2c370c07 100644 --- a/numpy/random/src/distributions/distributions.h +++ b/numpy/random/src/distributions/distributions.h @@ -43,11 +43,11 @@ typedef struct s_binomial_t { int has_binomial; /* !=0: following parameters initialized for binomial */ double psave; - int64_t nsave; + RAND_INT_TYPE nsave; double r; double q; double fm; - int64_t m; + RAND_INT_TYPE m; double p1; double xm; double xl; @@ -148,8 +148,18 @@ DECLDIR double random_triangular(bitgen_t *bitgen_state, double left, double mod DECLDIR RAND_INT_TYPE random_poisson(bitgen_t *bitgen_state, double lam); DECLDIR RAND_INT_TYPE random_negative_binomial(bitgen_t *bitgen_state, double n, double p); -DECLDIR RAND_INT_TYPE random_binomial(bitgen_t *bitgen_state, double p, RAND_INT_TYPE n, - binomial_t *binomial); + +DECLDIR RAND_INT_TYPE random_binomial_btpe(bitgen_t *bitgen_state, + RAND_INT_TYPE n, + double p, + binomial_t *binomial); +DECLDIR RAND_INT_TYPE random_binomial_inversion(bitgen_t *bitgen_state, + RAND_INT_TYPE n, + double p, + binomial_t *binomial); +DECLDIR int64_t random_binomial(bitgen_t *bitgen_state, double p, + int64_t n, binomial_t *binomial); + DECLDIR RAND_INT_TYPE random_logseries(bitgen_t *bitgen_state, double p); DECLDIR RAND_INT_TYPE random_geometric_search(bitgen_t *bitgen_state, double p); DECLDIR RAND_INT_TYPE random_geometric_inversion(bitgen_t *bitgen_state, double p); diff --git a/numpy/random/src/entropy/entropy.c b/numpy/random/src/entropy/entropy.c deleted file mode 100644 index eaca37a9c..000000000 --- 
a/numpy/random/src/entropy/entropy.c +++ /dev/null @@ -1,114 +0,0 @@ -#include <stddef.h> -#include <stdio.h> -#include <stdlib.h> -#include <string.h> - -#include "entropy.h" -#ifdef _WIN32 -/* Windows */ -#include <sys/timeb.h> -#include <time.h> -#include <windows.h> - -#include <wincrypt.h> -#else -/* Unix */ -#include <sys/time.h> -#include <time.h> -#include <unistd.h> -#include <fcntl.h> -#endif - -bool entropy_getbytes(void *dest, size_t size) { -#ifndef _WIN32 - - int fd = open("/dev/urandom", O_RDONLY); - if (fd < 0) - return false; - ssize_t sz = read(fd, dest, size); - if ((sz < 0) || ((size_t)sz < size)) - return false; - return close(fd) == 0; - -#else - - HCRYPTPROV hCryptProv; - BOOL done; - - if (!CryptAcquireContext(&hCryptProv, NULL, NULL, PROV_RSA_FULL, - CRYPT_VERIFYCONTEXT) || - !hCryptProv) { - return true; - } - done = CryptGenRandom(hCryptProv, (DWORD)size, (unsigned char *)dest); - CryptReleaseContext(hCryptProv, 0); - if (!done) { - return false; - } - - return true; -#endif -} - -/* Thomas Wang 32/64 bits integer hash function */ -uint32_t entropy_hash_32(uint32_t key) { - key += ~(key << 15); - key ^= (key >> 10); - key += (key << 3); - key ^= (key >> 6); - key += ~(key << 11); - key ^= (key >> 16); - return key; -} - -uint64_t entropy_hash_64(uint64_t key) { - key = (~key) + (key << 21); // key = (key << 21) - key - 1; - key = key ^ (key >> 24); - key = (key + (key << 3)) + (key << 8); // key * 265 - key = key ^ (key >> 14); - key = (key + (key << 2)) + (key << 4); // key * 21 - key = key ^ (key >> 28); - key = key + (key << 31); - return key; -} - -uint32_t entropy_randombytes(void) { - -#ifndef _WIN32 - struct timeval tv; - gettimeofday(&tv, NULL); - return entropy_hash_32(getpid()) ^ entropy_hash_32(tv.tv_sec) ^ - entropy_hash_32(tv.tv_usec) ^ entropy_hash_32(clock()); -#else - uint32_t out = 0; - int64_t counter; - struct _timeb tv; - _ftime_s(&tv); - out = entropy_hash_32(GetCurrentProcessId()) ^ - 
entropy_hash_32((uint32_t)tv.time) ^ entropy_hash_32(tv.millitm) ^ - entropy_hash_32(clock()); - if (QueryPerformanceCounter((LARGE_INTEGER *)&counter) != 0) - out ^= entropy_hash_32((uint32_t)(counter & 0xffffffff)); - return out; -#endif -} - -bool entropy_fallback_getbytes(void *dest, size_t size) { - int hashes = (int)size; - uint32_t *hash = malloc(hashes * sizeof(uint32_t)); - int i; - for (i = 0; i < hashes; i++) { - hash[i] = entropy_randombytes(); - } - memcpy(dest, (void *)hash, size); - free(hash); - return true; -} - -void entropy_fill(void *dest, size_t size) { - bool success; - success = entropy_getbytes(dest, size); - if (!success) { - entropy_fallback_getbytes(dest, size); - } -} diff --git a/numpy/random/src/entropy/entropy.h b/numpy/random/src/entropy/entropy.h deleted file mode 100644 index f00caf61d..000000000 --- a/numpy/random/src/entropy/entropy.h +++ /dev/null @@ -1,14 +0,0 @@ -#ifndef _RANDOMDGEN__ENTROPY_H_ -#define _RANDOMDGEN__ENTROPY_H_ - -#include <stddef.h> -#include <stdbool.h> -#include <stdint.h> - -extern void entropy_fill(void *dest, size_t size); - -extern bool entropy_getbytes(void *dest, size_t size); - -extern bool entropy_fallback_getbytes(void *dest, size_t size); - -#endif diff --git a/numpy/random/src/legacy/legacy-distributions.c b/numpy/random/src/legacy/legacy-distributions.c index 4741a0352..684b3d762 100644 --- a/numpy/random/src/legacy/legacy-distributions.c +++ b/numpy/random/src/legacy/legacy-distributions.c @@ -215,6 +215,37 @@ double legacy_exponential(aug_bitgen_t *aug_state, double scale) { } +static RAND_INT_TYPE legacy_random_binomial_original(bitgen_t *bitgen_state, + double p, + RAND_INT_TYPE n, + binomial_t *binomial) { + double q; + + if (p <= 0.5) { + if (p * n <= 30.0) { + return random_binomial_inversion(bitgen_state, n, p, binomial); + } else { + return random_binomial_btpe(bitgen_state, n, p, binomial); + } + } else { + q = 1.0 - p; + if (q * n <= 30.0) { + return n - 
random_binomial_inversion(bitgen_state, n, q, binomial); + } else { + return n - random_binomial_btpe(bitgen_state, n, q, binomial); + } + } +} + + +int64_t legacy_random_binomial(bitgen_t *bitgen_state, double p, + int64_t n, binomial_t *binomial) { + return (int64_t) legacy_random_binomial_original(bitgen_state, p, + (RAND_INT_TYPE) n, + binomial); +} + + static RAND_INT_TYPE random_hypergeometric_hyp(bitgen_t *bitgen_state, RAND_INT_TYPE good, RAND_INT_TYPE bad, diff --git a/numpy/random/src/legacy/legacy-distributions.h b/numpy/random/src/legacy/legacy-distributions.h index 005c4e5d2..4bc15d58e 100644 --- a/numpy/random/src/legacy/legacy-distributions.h +++ b/numpy/random/src/legacy/legacy-distributions.h @@ -16,26 +16,23 @@ extern double legacy_pareto(aug_bitgen_t *aug_state, double a); extern double legacy_weibull(aug_bitgen_t *aug_state, double a); extern double legacy_power(aug_bitgen_t *aug_state, double a); extern double legacy_gamma(aug_bitgen_t *aug_state, double shape, double scale); -extern double legacy_pareto(aug_bitgen_t *aug_state, double a); -extern double legacy_weibull(aug_bitgen_t *aug_state, double a); extern double legacy_chisquare(aug_bitgen_t *aug_state, double df); extern double legacy_noncentral_chisquare(aug_bitgen_t *aug_state, double df, double nonc); - extern double legacy_noncentral_f(aug_bitgen_t *aug_state, double dfnum, double dfden, double nonc); extern double legacy_wald(aug_bitgen_t *aug_state, double mean, double scale); extern double legacy_lognormal(aug_bitgen_t *aug_state, double mean, double sigma); extern double legacy_standard_t(aug_bitgen_t *aug_state, double df); -extern int64_t legacy_negative_binomial(aug_bitgen_t *aug_state, double n, - double p); extern double legacy_standard_cauchy(aug_bitgen_t *state); extern double legacy_beta(aug_bitgen_t *aug_state, double a, double b); extern double legacy_f(aug_bitgen_t *aug_state, double dfnum, double dfden); extern double legacy_normal(aug_bitgen_t *aug_state, double loc, 
double scale); extern double legacy_standard_gamma(aug_bitgen_t *aug_state, double shape); extern double legacy_exponential(aug_bitgen_t *aug_state, double scale); +extern int64_t legacy_random_binomial(bitgen_t *bitgen_state, double p, + int64_t n, binomial_t *binomial); extern int64_t legacy_negative_binomial(aug_bitgen_t *aug_state, double n, double p); extern int64_t legacy_random_hypergeometric(bitgen_t *bitgen_state, diff --git a/numpy/random/tests/test_generator_mt19937.py b/numpy/random/tests/test_generator_mt19937.py index 853d86fba..20bc10cd0 100644 --- a/numpy/random/tests/test_generator_mt19937.py +++ b/numpy/random/tests/test_generator_mt19937.py @@ -732,6 +732,20 @@ class TestRandomDist(object): desired = conv([4, 1, 9, 8, 0, 5, 3, 6, 2, 7]) assert_array_equal(actual, desired) + def test_shuffle_custom_axis(self): + random = Generator(MT19937(self.seed)) + actual = np.arange(16).reshape((4, 4)) + random.shuffle(actual, axis=1) + desired = np.array([[ 0, 3, 1, 2], + [ 4, 7, 5, 6], + [ 8, 11, 9, 10], + [12, 15, 13, 14]]) + assert_array_equal(actual, desired) + random = Generator(MT19937(self.seed)) + actual = np.arange(16).reshape((4, 4)) + random.shuffle(actual, axis=-1) + assert_array_equal(actual, desired) + def test_shuffle_masked(self): # gh-3263 a = np.ma.masked_values(np.reshape(range(20), (5, 4)) % 3 - 1, -1) @@ -746,6 +760,16 @@ class TestRandomDist(object): assert_equal( sorted(b.data[~b.mask]), sorted(b_orig.data[~b_orig.mask])) + def test_shuffle_exceptions(self): + random = Generator(MT19937(self.seed)) + arr = np.arange(10) + assert_raises(np.AxisError, random.shuffle, arr, 1) + arr = np.arange(9).reshape((3, 3)) + assert_raises(np.AxisError, random.shuffle, arr, 3) + assert_raises(TypeError, random.shuffle, arr, slice(1, 2, None)) + arr = [[1, 2, 3], [4, 5, 6]] + assert_raises(NotImplementedError, random.shuffle, arr, 1) + def test_permutation(self): random = Generator(MT19937(self.seed)) alist = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0] @@ -771,6 
+795,27 @@ class TestRandomDist(object): actual = random.permutation(integer_val) assert_array_equal(actual, desired) + def test_permutation_custom_axis(self): + a = np.arange(16).reshape((4, 4)) + desired = np.array([[ 0, 3, 1, 2], + [ 4, 7, 5, 6], + [ 8, 11, 9, 10], + [12, 15, 13, 14]]) + random = Generator(MT19937(self.seed)) + actual = random.permutation(a, axis=1) + assert_array_equal(actual, desired) + random = Generator(MT19937(self.seed)) + actual = random.permutation(a, axis=-1) + assert_array_equal(actual, desired) + + def test_permutation_exceptions(self): + random = Generator(MT19937(self.seed)) + arr = np.arange(10) + assert_raises(np.AxisError, random.permutation, arr, 1) + arr = np.arange(9).reshape((3, 3)) + assert_raises(np.AxisError, random.permutation, arr, 3) + assert_raises(TypeError, random.permutation, arr, slice(1, 2, None)) + def test_beta(self): random = Generator(MT19937(self.seed)) actual = random.beta(.1, .9, size=(3, 2)) diff --git a/numpy/random/tests/test_randomstate_regression.py b/numpy/random/tests/test_randomstate_regression.py index 29870534a..edf32ea97 100644 --- a/numpy/random/tests/test_randomstate_regression.py +++ b/numpy/random/tests/test_randomstate_regression.py @@ -181,3 +181,30 @@ class TestRegression(object): assert c.dtype == np.dtype(int) c = np.random.choice(10, replace=False, size=2) assert c.dtype == np.dtype(int) + + @pytest.mark.skipif(np.iinfo('l').max < 2**32, + reason='Cannot test with 32-bit C long') + def test_randint_117(self): + # GH 14189 + random.seed(0) + expected = np.array([2357136044, 2546248239, 3071714933, 3626093760, + 2588848963, 3684848379, 2340255427, 3638918503, + 1819583497, 2678185683], dtype='int64') + actual = random.randint(2**32, size=10) + assert_array_equal(actual, expected) + + def test_p_zero_stream(self): + # Regression test for gh-14522. Ensure that future versions + # generate the same variates as version 1.16. 
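For context on the `legacy_random_binomial` change these regression tests exercise: the C dispatch picks the cheap inversion algorithm when the expected count is small and the BTPE rejection algorithm otherwise, using the symmetry `Binomial(n, p) == n - Binomial(n, 1 - p)` when `p > 0.5`. A pure-Python paraphrase of that branch logic (a sketch of the selection only, not the actual C implementation):

```python
def binomial_method(n, p):
    # Paraphrase of legacy_random_binomial_original's dispatch:
    # inversion when min(p, 1 - p) * n <= 30, BTPE otherwise.
    q = p if p <= 0.5 else 1.0 - p
    return "inversion" if q * n <= 30.0 else "btpe"
```

The `p > 0.5` case draws with `1 - p` and returns `n` minus the result, so only the small-probability regime needs dedicated samplers.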
+ np.random.seed(12345) + assert_array_equal(random.binomial(1, [0, 0.25, 0.5, 0.75, 1]), + [0, 0, 0, 1, 1]) + + def test_n_zero_stream(self): + # Regression test for gh-14522. Ensure that future versions + # generate the same variates as version 1.16. + np.random.seed(8675309) + expected = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], + [3, 4, 2, 3, 3, 1, 5, 3, 1, 3]]) + assert_array_equal(random.binomial([[0], [10]], 0.25, size=(2, 10)), + expected) diff --git a/numpy/random/tests/test_smoke.py b/numpy/random/tests/test_smoke.py index 84d261e5e..6e641b5f4 100644 --- a/numpy/random/tests/test_smoke.py +++ b/numpy/random/tests/test_smoke.py @@ -5,7 +5,7 @@ from functools import partial import numpy as np import pytest from numpy.testing import assert_equal, assert_, assert_array_equal -from numpy.random import (Generator, MT19937, PCG64, Philox, SFC64, entropy) +from numpy.random import (Generator, MT19937, PCG64, Philox, SFC64) @pytest.fixture(scope='module', params=(np.bool, np.int8, np.int16, np.int32, np.int64, @@ -806,23 +806,3 @@ class TestDefaultRNG(RNG): np.random.default_rng(-1) with pytest.raises(ValueError): np.random.default_rng([12345, -1]) - - -class TestEntropy(object): - def test_entropy(self): - e1 = entropy.random_entropy() - e2 = entropy.random_entropy() - assert_((e1 != e2)) - e1 = entropy.random_entropy(10) - e2 = entropy.random_entropy(10) - assert_((e1 != e2).all()) - e1 = entropy.random_entropy(10, source='system') - e2 = entropy.random_entropy(10, source='system') - assert_((e1 != e2).all()) - - def test_fallback(self): - e1 = entropy.random_entropy(source='fallback') - time.sleep(0.1) - e2 = entropy.random_entropy(source='fallback') - assert_((e1 != e2)) - diff --git a/numpy/testing/decorators.py b/numpy/testing/decorators.py deleted file mode 100644 index bf78be500..000000000 --- a/numpy/testing/decorators.py +++ /dev/null @@ -1,15 +0,0 @@ -""" -Back compatibility decorators module. 
It will import the appropriate -set of tools - -""" -from __future__ import division, absolute_import, print_function - -import warnings - -# 2018-04-04, numpy 1.15.0 -warnings.warn("Importing from numpy.testing.decorators is deprecated " - "since numpy 1.15.0, import from numpy.testing instead.", - DeprecationWarning, stacklevel=2) - -from ._private.decorators import * diff --git a/numpy/testing/noseclasses.py b/numpy/testing/noseclasses.py deleted file mode 100644 index 5748a9a0f..000000000 --- a/numpy/testing/noseclasses.py +++ /dev/null @@ -1,14 +0,0 @@ -""" -Back compatibility noseclasses module. It will import the appropriate -set of tools -""" -from __future__ import division, absolute_import, print_function - -import warnings - -# 2018-04-04, numpy 1.15.0 -warnings.warn("Importing from numpy.testing.noseclasses is deprecated " - "since 1.15.0, import from numpy.testing instead", - DeprecationWarning, stacklevel=2) - -from ._private.noseclasses import * diff --git a/numpy/testing/nosetester.py b/numpy/testing/nosetester.py deleted file mode 100644 index 2ac212eee..000000000 --- a/numpy/testing/nosetester.py +++ /dev/null @@ -1,19 +0,0 @@ -""" -Back compatibility nosetester module. 
It will import the appropriate -set of tools - -""" -from __future__ import division, absolute_import, print_function - -import warnings - -# 2018-04-04, numpy 1.15.0 -warnings.warn("Importing from numpy.testing.nosetester is deprecated " - "since 1.15.0, import from numpy.testing instead.", - DeprecationWarning, stacklevel=2) - -from ._private.nosetester import * - -__all__ = ['get_package_name', 'run_module_suite', 'NoseTester', - '_numpy_tester', 'get_package_name', 'import_nose', - 'suppress_warnings'] diff --git a/numpy/testing/print_coercion_tables.py b/numpy/testing/print_coercion_tables.py index 3a359f472..72b22cee1 100755 --- a/numpy/testing/print_coercion_tables.py +++ b/numpy/testing/print_coercion_tables.py @@ -70,22 +70,24 @@ def print_coercion_table(ntypes, inputfirstvalue, inputsecondvalue, firstarray, print(char, end=' ') print() -print("can cast") -print_cancast_table(np.typecodes['All']) -print() -print("In these tables, ValueError is '!', OverflowError is '@', TypeError is '#'") -print() -print("scalar + scalar") -print_coercion_table(np.typecodes['All'], 0, 0, False) -print() -print("scalar + neg scalar") -print_coercion_table(np.typecodes['All'], 0, -1, False) -print() -print("array + scalar") -print_coercion_table(np.typecodes['All'], 0, 0, True) -print() -print("array + neg scalar") -print_coercion_table(np.typecodes['All'], 0, -1, True) -print() -print("promote_types") -print_coercion_table(np.typecodes['All'], 0, 0, False, True) + +if __name__ == '__main__': + print("can cast") + print_cancast_table(np.typecodes['All']) + print() + print("In these tables, ValueError is '!', OverflowError is '@', TypeError is '#'") + print() + print("scalar + scalar") + print_coercion_table(np.typecodes['All'], 0, 0, False) + print() + print("scalar + neg scalar") + print_coercion_table(np.typecodes['All'], 0, -1, False) + print() + print("array + scalar") + print_coercion_table(np.typecodes['All'], 0, 0, True) + print() + print("array + neg scalar") + 
print_coercion_table(np.typecodes['All'], 0, -1, True) + print() + print("promote_types") + print_coercion_table(np.typecodes['All'], 0, 0, False, True) diff --git a/numpy/testing/utils.py b/numpy/testing/utils.py index 1e7d65b89..975f6ad5d 100644 --- a/numpy/testing/utils.py +++ b/numpy/testing/utils.py @@ -7,10 +7,11 @@ from __future__ import division, absolute_import, print_function import warnings -# 2018-04-04, numpy 1.15.0 +# 2018-04-04, numpy 1.15.0 ImportWarning +# 2019-09-18, numpy 1.18.0 DeprecationWarning (changed) warnings.warn("Importing from numpy.testing.utils is deprecated " "since 1.15.0, import from numpy.testing instead.", - ImportWarning, stacklevel=2) + DeprecationWarning, stacklevel=2) from ._private.utils import * diff --git a/numpy/tests/test_public_api.py b/numpy/tests/test_public_api.py index df2fc4802..6de171dbb 100644 --- a/numpy/tests/test_public_api.py +++ b/numpy/tests/test_public_api.py @@ -2,14 +2,21 @@ from __future__ import division, absolute_import, print_function import sys import subprocess +import pkgutil +import types +import importlib +import warnings import numpy as np +import numpy import pytest + try: import ctypes except ImportError: ctypes = None + def check_dir(module, module_name=None): """Returns a mapping of all objects with the wrong __module__ attribute.""" if module_name is None: @@ -27,7 +34,8 @@ sys.version_info[0] < 3, reason="NumPy exposes slightly different functions on Python 2") def test_numpy_namespace(): - # None of these objects are publicly documented. 
+ # None of these objects are publicly documented to be part of the main + # NumPy namespace (some are useful though, others need to be cleaned up) undocumented = { 'Tester': 'numpy.testing._private.nosetester.NoseTester', '_add_newdoc_ufunc': 'numpy.core._multiarray_umath._add_newdoc_ufunc', @@ -72,7 +80,7 @@ def test_numpy_namespace(): @pytest.mark.parametrize('name', ['testing', 'Tester']) def test_import_lazy_import(name): - """Make sure we can actually the the modules we lazy load. + """Make sure we can actually use the modules we lazy load. While not exported as part of the public API, it was accessible. With the use of __getattr__ and __dir__, this isn't always true It can happen that @@ -101,6 +109,7 @@ def test_numpy_fft(): bad_results = check_dir(np.fft) assert bad_results == {} + @pytest.mark.skipif(ctypes is None, reason="ctypes not available in this python") def test_NPY_NO_EXPORT(): @@ -109,3 +118,387 @@ def test_NPY_NO_EXPORT(): f = getattr(cdll, 'test_not_exported', None) assert f is None, ("'test_not_exported' is mistakenly exported, " "NPY_NO_EXPORT does not work") + + +# Historically NumPy has not used leading underscores for private submodules +# much. This has resulted in lots of things that look like public modules +# (i.e. things that can be imported as `import numpy.somesubmodule.somefile`), +# but were never intended to be public. The PUBLIC_MODULES list contains +# modules that are either public because they were meant to be, or because they +# contain public functions/objects that aren't present in any other namespace +# for whatever reason and therefore should be treated as public. +# +# The PRIVATE_BUT_PRESENT_MODULES list contains modules that look public (lack +# of underscores) but should not be used. For many of those modules the +# current status is fine. For others it may make sense to work on making them +# private, to clean up our public API and avoid confusion. +PUBLIC_MODULES = ['numpy.' 
+ s for s in [ + "ctypeslib", + "distutils", + "distutils.cpuinfo", + "distutils.exec_command", + "distutils.misc_util", + "distutils.log", + "distutils.system_info", + "doc", + "doc.basics", + "doc.broadcasting", + "doc.byteswapping", + "doc.constants", + "doc.creation", + "doc.dispatch", + "doc.glossary", + "doc.indexing", + "doc.internals", + "doc.misc", + "doc.structured_arrays", + "doc.subclassing", + "doc.ufuncs", + "dual", + "f2py", + "fft", + "lib", + "lib.format", # was this meant to be public? + "lib.mixins", + "lib.recfunctions", + "lib.scimath", + "linalg", + "ma", + "ma.extras", + "ma.mrecords", + "matlib", + "polynomial", + "polynomial.chebyshev", + "polynomial.hermite", + "polynomial.hermite_e", + "polynomial.laguerre", + "polynomial.legendre", + "polynomial.polynomial", + "polynomial.polyutils", + "random", + "testing", + "version", +]] + + +PUBLIC_ALIASED_MODULES = [ + "numpy.char", + "numpy.emath", + "numpy.rec", +] + + +PRIVATE_BUT_PRESENT_MODULES = ['numpy.' + s for s in [ + "compat", + "compat.py3k", + "conftest", + "core", + "core.arrayprint", + "core.defchararray", + "core.einsumfunc", + "core.fromnumeric", + "core.function_base", + "core.getlimits", + "core.info", + "core.machar", + "core.memmap", + "core.multiarray", + "core.numeric", + "core.numerictypes", + "core.overrides", + "core.records", + "core.shape_base", + "core.umath", + "core.umath_tests", + "distutils.ccompiler", + "distutils.command", + "distutils.command.autodist", + "distutils.command.bdist_rpm", + "distutils.command.build", + "distutils.command.build_clib", + "distutils.command.build_ext", + "distutils.command.build_py", + "distutils.command.build_scripts", + "distutils.command.build_src", + "distutils.command.config", + "distutils.command.config_compiler", + "distutils.command.develop", + "distutils.command.egg_info", + "distutils.command.install", + "distutils.command.install_clib", + "distutils.command.install_data", + "distutils.command.install_headers", + 
"distutils.command.sdist", + "distutils.compat", + "distutils.conv_template", + "distutils.core", + "distutils.extension", + "distutils.fcompiler", + "distutils.fcompiler.absoft", + "distutils.fcompiler.compaq", + "distutils.fcompiler.environment", + "distutils.fcompiler.g95", + "distutils.fcompiler.gnu", + "distutils.fcompiler.hpux", + "distutils.fcompiler.ibm", + "distutils.fcompiler.intel", + "distutils.fcompiler.lahey", + "distutils.fcompiler.mips", + "distutils.fcompiler.nag", + "distutils.fcompiler.none", + "distutils.fcompiler.pathf95", + "distutils.fcompiler.pg", + "distutils.fcompiler.sun", + "distutils.fcompiler.vast", + "distutils.from_template", + "distutils.info", + "distutils.intelccompiler", + "distutils.lib2def", + "distutils.line_endings", + "distutils.mingw32ccompiler", + "distutils.msvccompiler", + "distutils.npy_pkg_config", + "distutils.numpy_distribution", + "distutils.pathccompiler", + "distutils.unixccompiler", + "f2py.auxfuncs", + "f2py.capi_maps", + "f2py.cb_rules", + "f2py.cfuncs", + "f2py.common_rules", + "f2py.crackfortran", + "f2py.diagnose", + "f2py.f2py2e", + "f2py.f2py_testing", + "f2py.f90mod_rules", + "f2py.func2subr", + "f2py.info", + "f2py.rules", + "f2py.use_rules", + "fft.helper", + "lib.arraypad", + "lib.arraysetops", + "lib.arrayterator", + "lib.financial", + "lib.function_base", + "lib.histograms", + "lib.index_tricks", + "lib.info", + "lib.nanfunctions", + "lib.npyio", + "lib.polynomial", + "lib.shape_base", + "lib.stride_tricks", + "lib.twodim_base", + "lib.type_check", + "lib.ufunclike", + "lib.user_array", # note: not in np.lib, but probably should just be deleted + "lib.utils", + "linalg.info", + "linalg.lapack_lite", + "linalg.linalg", + "ma.bench", + "ma.core", + "ma.testutils", + "ma.timer_comparison", + "matrixlib", + "matrixlib.defmatrix", + "random.bit_generator", + "random.bounded_integers", + "random.common", + "random.generator", + "random.info", + "random.mt19937", + "random.mtrand", + "random.pcg64", + 
"random.philox", + "random.sfc64", + "testing.print_coercion_tables", + "testing.utils", +]] + + +def is_unexpected(name): + """Check if this needs to be considered.""" + if '._' in name or '.tests' in name or '.setup' in name: + return False + + if name in PUBLIC_MODULES: + return False + + if name in PUBLIC_ALIASED_MODULES: + return False + + if name in PRIVATE_BUT_PRESENT_MODULES: + return False + + return True + + +# These are present in a directory with an __init__.py but cannot be imported +# code_generators/ isn't installed, but present for an inplace build +SKIP_LIST = [ + "numpy.core.code_generators", + "numpy.core.code_generators.genapi", + "numpy.core.code_generators.generate_umath", + "numpy.core.code_generators.ufunc_docstrings", + "numpy.core.code_generators.generate_numpy_api", + "numpy.core.code_generators.generate_ufunc_api", + "numpy.core.code_generators.numpy_api", + "numpy.core.cversions", + "numpy.core.generate_numpy_api", + "numpy.distutils.msvc9compiler", +] + + +def test_all_modules_are_expected(): + """ + Test that we don't add anything that looks like a new public module by + accident. Check is based on filenames. + """ + + modnames = [] + for _, modname, ispkg in pkgutil.walk_packages(path=np.__path__, + prefix=np.__name__ + '.', + onerror=None): + if is_unexpected(modname) and modname not in SKIP_LIST: + # We have a name that is new. If that's on purpose, add it to + # PUBLIC_MODULES. We don't expect to have to add anything to + # PRIVATE_BUT_PRESENT_MODULES. Use an underscore in the name! 
+ modnames.append(modname) + + if modnames: + raise AssertionError("Found unexpected modules: {}".format(modnames)) + + +# Stuff that clearly shouldn't be in the API and is detected by the next test +# below +SKIP_LIST_2 = [ + 'numpy.math', + 'numpy.distutils.log.sys', + 'numpy.distutils.system_info.copy', + 'numpy.distutils.system_info.distutils', + 'numpy.distutils.system_info.log', + 'numpy.distutils.system_info.os', + 'numpy.distutils.system_info.platform', + 'numpy.distutils.system_info.re', + 'numpy.distutils.system_info.shutil', + 'numpy.distutils.system_info.subprocess', + 'numpy.distutils.system_info.sys', + 'numpy.distutils.system_info.tempfile', + 'numpy.distutils.system_info.textwrap', + 'numpy.distutils.system_info.warnings', + 'numpy.doc.constants.re', + 'numpy.doc.constants.textwrap', + 'numpy.lib.emath', + 'numpy.lib.math', + 'numpy.matlib.char', + 'numpy.matlib.rec', + 'numpy.matlib.emath', + 'numpy.matlib.math', + 'numpy.matlib.linalg', + 'numpy.matlib.fft', + 'numpy.matlib.random', + 'numpy.matlib.ctypeslib', + 'numpy.matlib.ma' +] + + +def test_all_modules_are_expected_2(): + """ + Method checking all objects. The pkgutil-based method in + `test_all_modules_are_expected` does not catch imports into a namespace, + only filenames. So this test is more thorough, and checks things like: + + import .lib.scimath as emath + + To check if something in a module is (effectively) public, one can check if + there's anything in that namespace that's a public function/object but is + not exposed in a higher-level namespace.
For example for a `numpy.lib` + submodule:: + + mod = np.lib.mixins + for obj in mod.__all__: + if obj in np.__all__: + continue + elif obj in np.lib.__all__: + continue + + else: + print(obj) + + """ + + def find_unexpected_members(mod_name): + members = [] + module = importlib.import_module(mod_name) + if hasattr(module, '__all__'): + objnames = module.__all__ + else: + objnames = dir(module) + + for objname in objnames: + if not objname.startswith('_'): + fullobjname = mod_name + '.' + objname + if isinstance(getattr(module, objname), types.ModuleType): + if is_unexpected(fullobjname): + if fullobjname not in SKIP_LIST_2: + members.append(fullobjname) + + return members + + unexpected_members = find_unexpected_members("numpy") + for modname in PUBLIC_MODULES: + unexpected_members.extend(find_unexpected_members(modname)) + + if unexpected_members: + raise AssertionError("Found unexpected object(s) that look like " + "modules: {}".format(unexpected_members)) + + +def test_api_importable(): + """ + Check that all submodules listed higher up in this file can be imported + + Note that if a PRIVATE_BUT_PRESENT_MODULES entry goes missing, it may + simply need to be removed from the list (deprecation may or may not be + needed - apply common sense). 
+ """ + def check_importable(module_name): + try: + importlib.import_module(module_name) + except (ImportError, AttributeError): + return False + + return True + + module_names = [] + for module_name in PUBLIC_MODULES: + if not check_importable(module_name): + module_names.append(module_name) + + if module_names: + raise AssertionError("Modules in the public API that cannot be " + "imported: {}".format(module_names)) + + for module_name in PUBLIC_ALIASED_MODULES: + try: + eval(module_name) + except AttributeError: + module_names.append(module_name) + + if module_names: + raise AssertionError("Modules in the public API that were not " + "found: {}".format(module_names)) + + with warnings.catch_warnings(record=True) as w: + warnings.filterwarnings('always', category=DeprecationWarning) + warnings.filterwarnings('always', category=ImportWarning) + for module_name in PRIVATE_BUT_PRESENT_MODULES: + if not check_importable(module_name): + module_names.append(module_name) + + if module_names: + raise AssertionError("Modules that are not really public but looked " + "public and can not be imported: " + "{}".format(module_names)) diff --git a/pyproject.toml b/pyproject.toml index 4439ed229..918cbb278 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -9,13 +9,15 @@ requires = [ [tool.towncrier] # Do no set this since it is hard to import numpy inside the source directory - # Use "--version Numpy" instead - #project = "numpy" - filename = "doc/release/latest-note.rst" - directory = "doc/release/upcoming_changes" + # the name is hardcoded. 
Use "--version 1.18.0" to set the version + single_file = true + filename = "doc/source/release/{version}-notes.rst" + directory = "doc/release/upcoming_changes/" issue_format = "`gh-{issue} <https://github.com/numpy/numpy/pull/{issue}>`__" template = "doc/release/upcoming_changes/template.rst" - underlines="~=" + underlines = "~=" + all_bullets = false + [[tool.towncrier.type]] directory = "highlight" @@ -66,3 +68,4 @@ requires = [ directory = "change" name = "Changes" showcontent = true + diff --git a/runtests.py b/runtests.py index 23245aeac..c469f85d8 100755 --- a/runtests.py +++ b/runtests.py @@ -18,6 +18,10 @@ Run a debugger: $ gdb --args python runtests.py [...other args...] +Disable pytest capturing of output by using its '-s' option: + + $ python runtests.py -- -s + Generate C code coverage listing under build/lcov/: (requires http://ltp.sourceforge.net/coverage/lcov.php) @@ -67,6 +71,10 @@ def main(argv): parser = ArgumentParser(usage=__doc__.lstrip()) parser.add_argument("--verbose", "-v", action="count", default=1, help="more verbosity") + parser.add_argument("--debug-configure", action="store_true", + help=("add -v to build_src to show compiler " + "configuration output while creating " + "_numpyconfig.h and config.h")) parser.add_argument("--no-build", "-n", action="store_true", default=False, help="do not build the project (use system installed version)") parser.add_argument("--build-only", "-b", action="store_true", default=False, @@ -106,6 +114,8 @@ def main(argv): help="Debug build") parser.add_argument("--parallel", "-j", type=int, default=0, help="Number of parallel jobs during build") + parser.add_argument("--warn-error", action="store_true", + help="Set -Werror to convert all compiler warnings to errors") parser.add_argument("--show-build-log", action="store_true", help="Show build output rather than using a log file") parser.add_argument("--bench", action="store_true", @@ -366,6 +376,10 @@ def build_project(args): cmd += ["build"] if 
args.parallel > 1: cmd += ["-j", str(args.parallel)] + if args.debug_configure: + cmd += ["build_src", "--verbose"] + if args.warn_error: + cmd += ["--warn-error"] # Install; avoid producing eggs so numpy can be imported from dst_dir. cmd += ['install', '--prefix=' + dst_dir, '--single-version-externally-managed', @@ -83,6 +83,10 @@ def git_version(): except (subprocess.SubprocessError, OSError): GIT_REVISION = "Unknown" + if not GIT_REVISION: + # this shouldn't happen but apparently can (see gh-8512) + GIT_REVISION = "Unknown" + return GIT_REVISION # BEFORE importing setuptools, remove MANIFEST. Otherwise it may not be @@ -263,7 +267,7 @@ def parse_setuppy_commands(): # below and not standalone. Hence they're not added to good_commands. good_commands = ('develop', 'sdist', 'build', 'build_ext', 'build_py', 'build_clib', 'build_scripts', 'bdist_wheel', 'bdist_rpm', - 'bdist_wininst', 'bdist_msi', 'bdist_mpkg') + 'bdist_wininst', 'bdist_msi', 'bdist_mpkg', 'build_src') for command in good_commands: if command in args: @@ -403,7 +407,8 @@ def setup_package(): classifiers=[_f for _f in CLASSIFIERS.split('\n') if _f], platforms = ["Windows", "Linux", "Solaris", "Mac OS-X", "Unix"], test_suite='nose.collector', - cmdclass={"sdist": sdist_checked}, + cmdclass={"sdist": sdist_checked, + }, python_requires='>=3.5', zip_safe=False, entry_points={ diff --git a/shippable.yml b/shippable.yml index 2abd8a843..91323ceb6 100644 --- a/shippable.yml +++ b/shippable.yml @@ -48,7 +48,7 @@ build: # check OpenBLAS version - python tools/openblas_support.py --check_version 0.3.7 # run the test suite - - python runtests.py -- -rsx --junit-xml=$SHIPPABLE_REPO_DIR/shippable/testresults/tests.xml -n 2 --durations=10 + - python runtests.py --debug-configure --show-build-log -- -rsx --junit-xml=$SHIPPABLE_REPO_DIR/shippable/testresults/tests.xml -n 2 --durations=10 cache: true cache_dir_list: diff --git a/test_requirements.txt b/test_requirements.txt index cb3e5f758..2d52599b1 100644 --- 
a/test_requirements.txt +++ b/test_requirements.txt @@ -1,5 +1,5 @@ cython==0.29.13 -pytest==5.1.2 +pytest==5.1.3 pytz==2019.2 pytest-cov==2.7.1 pickle5; python_version == '3.7' diff --git a/tools/ci/test_all_newsfragments_used.py b/tools/ci/test_all_newsfragments_used.py new file mode 100755 index 000000000..6c4591fd8 --- /dev/null +++ b/tools/ci/test_all_newsfragments_used.py @@ -0,0 +1,16 @@ +#!/usr/bin/env python + +import sys +import toml +import os + +path = toml.load("pyproject.toml")["tool"]["towncrier"]["directory"] + +fragments = os.listdir(path) +fragments.remove("README.rst") +fragments.remove("template.rst") + +if fragments: + print("The following files were not found by towncrier:") + print(" " + "\n ".join(fragments)) + sys.exit(1) diff --git a/tools/pypy-test.sh b/tools/pypy-test.sh index 388a5b75f..b02d18778 100755 --- a/tools/pypy-test.sh +++ b/tools/pypy-test.sh @@ -39,7 +39,7 @@ echo pypy3 version pypy3/bin/pypy3 -c "import sys; print(sys.version)" echo -pypy3/bin/pypy3 runtests.py --show-build-log -- -rsx \ +pypy3/bin/pypy3 runtests.py --debug-configure --show-build-log -v -- -rsx \ --junitxml=junit/test-results.xml --durations 10 echo Make sure the correct openblas has been linked in diff --git a/tools/travis-test.sh b/tools/travis-test.sh index 8fbae4b09..1eda43c31 100755 --- a/tools/travis-test.sh +++ b/tools/travis-test.sh @@ -52,7 +52,7 @@ setup_base() else # Python3.5-dbg on travis seems to need this export CFLAGS=$CFLAGS" -Wno-maybe-uninitialized" - $PYTHON setup.py build_ext --inplace 2>&1 | tee log + $PYTHON setup.py build build_src -v build_ext --inplace 2>&1 | tee log fi grep -v "_configtest" log \ | grep -vE "ld returned 1|no previously-included files matching|manifest_maker: standard file '-c'" \ @@ -151,7 +151,7 @@ if [ -n "$USE_WHEEL" ] && [ $# -eq 0 ]; then export F90='gfortran --coverage' export LDFLAGS='--coverage' fi - $PYTHON setup.py bdist_wheel + $PYTHON setup.py build build_src -v bdist_wheel # Make another virtualenv to
install into virtualenv --python=`which $PYTHON` venv-for-wheel . venv-for-wheel/bin/activate
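The core technique in `test_all_modules_are_expected` above is a `pkgutil.walk_packages` scan filtered through an allow-list (`PUBLIC_MODULES` plus the `is_unexpected` helper). A minimal, self-contained sketch of that pattern follows; it walks the stdlib `email` package instead of `numpy` so it runs anywhere, and the `PUBLIC` allow-list here is a toy example invented for illustration, not NumPy's real list:

```python
import pkgutil
import email  # stdlib package used as a stand-in for numpy

# Toy allow-list playing the role of PUBLIC_MODULES in the test above.
PUBLIC = {"email.message", "email.parser", "email.policy",
          "email.mime", "email.mime.text"}


def is_unexpected(name):
    """Mirror of the test's filter: underscore-private modules never count."""
    if "._" in name:
        return False
    return name not in PUBLIC


# walk_packages recurses into subpackages (it imports them to do so);
# onerror=None silently skips anything that fails to import.
unexpected = sorted(
    modname
    for _, modname, _ in pkgutil.walk_packages(path=email.__path__,
                                               prefix=email.__name__ + ".",
                                               onerror=None)
    if is_unexpected(modname)
)
print(unexpected)  # modules that look public but are not on the allow-list
```

Run against NumPy itself, the same walk is what fills `modnames` in the test, with `PUBLIC_MODULES`, `PUBLIC_ALIASED_MODULES`, and `PRIVATE_BUT_PRESENT_MODULES` together acting as the allow-list.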