Commit message log
|
|
|
This doesn't qualify for fixing under the NEP.
|
|
|
|
|
|
Check that high is weakly larger than low and raise if not.
Closes #17905.
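A minimal sketch of the user-visible effect, assuming the check applies to Generator.uniform (the commit does not name the method here):

    import numpy as np

    rng = np.random.default_rng()

    # With the check in place, low > high raises instead of silently
    # sampling from an inverted interval.
    try:
        rng.uniform(low=1.0, high=0.0)
    except ValueError as exc:
        print(exc)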
|
|
|
|
|
|
|
|
Tests using the MD5 algorithm fail in FIPS mode because MD5 is not
FIPS compliant.
Replace MD5 with SHA256 to address that.
Signed-off-by: Nikola Forró <nforro@redhat.com>
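A minimal sketch of the substitution (standard hashlib names; the actual test code is not shown here):

    import hashlib

    data = b"example payload"

    # MD5 is rejected under FIPS mode; SHA-256 is FIPS approved.
    # Before: digest = hashlib.md5(data).hexdigest()
    digest = hashlib.sha256(data).hexdigest()
    print(digest)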
|
|
|
|
|
|
|
|
|
|
|
* ENH: random: Make _shuffle_raw and _shuffle_int standalone functions.
* ENH: random: Add the method `permuted` to Generator.
The method permuted(x, axis=None, out=None) shuffles an array.
Unlike the existing shuffle method, it shuffles the slices along
the given axis independently.
Closes gh-5173.
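A short usage sketch of the new method:

    import numpy as np

    rng = np.random.default_rng(12345)
    x = np.arange(12).reshape(3, 4)

    # Each row is shuffled independently; shuffle(x, axis=1) would
    # instead permute whole columns as blocks.
    print(rng.permuted(x, axis=1))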
|
|
|
|
|
Check that size is not being broadcast.
Closes #16833.
|
|
|
|
|
|
|
|
|
|
|
|
|
Fixes #16539
The implicit 0-padding that is done to small entropy inputs to make them the
size of the internal pool conflicts with the spawn keys, which start with an
appended 0.
In order to maintain stream compatibility with unspawned `SeedSequence`s, we
explicitly 0-pad short inputs out to the pool size only if the spawn key is
provided, and thus would trigger the bug. This should minimize the impact on
users that were not encountering the bug.
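A short sketch of the spawning pattern this affects:

    import numpy as np

    # A short entropy input (smaller than the internal pool) combined
    # with spawning is the case this fix addresses.
    ss = np.random.SeedSequence(1234)
    children = ss.spawn(2)   # children carry spawn keys (0,), (1,), ...
    streams = [np.random.default_rng(child) for child in children]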
|
|
|
|
|
|
A broadcastable size combined with array inputs did not produce an error
when size produced a smaller output array than the broadcast input shape.
The patch checks that the output shape matches the outer shape of the
broadcast of all inputs and the size, when given.
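A sketch of the failure mode being checked, shown with Generator.normal (an assumption; the check sits in the shared broadcasting machinery):

    import numpy as np

    rng = np.random.default_rng()
    loc = np.zeros(3)   # broadcast input shape is (3,)

    # size=(2,) cannot hold the broadcast of the inputs, so this should
    # raise rather than return a silently truncated array.
    try:
        rng.normal(loc=loc, scale=1.0, size=(2,))
    except ValueError as exc:
        print(exc)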
|
|
|
|
|
Refactor polynomial to be an unsigned long array.
Remove unused code.
Fix MD5 calculation on big-endian systems.
|
|
|
|
|
Use the original loop order instead of an inverted order.
Closes #15394.
BUG: add missing numpy/__init__.pxd to the wheel
|
|
|
|
|
|
|
|
|
|
|
|
|
|
(#14924)
* Add stick-breaking
* Add tests demonstrating slowness for beta and dirichlet generators for small alpha (and beta) values
* Remove the test for beta with small `a` and `b`
* Switch from standard to stick-breaking method whenever alpha.max() < 0.1
Co-authored-by: Warren Weckesser <warren.weckesser@gmail.com>
Co-authored-by: Eric Wieser <wieser.eric@gmail.com>
Co-authored-by: Sebastian Berg <sebastian@sipsolutions.net>
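A small usage sketch of the regime the new path targets (per the commit, stick-breaking is used when alpha.max() < 0.1):

    import numpy as np

    rng = np.random.default_rng(0)

    # Very small concentration parameters push samples toward the
    # corners of the simplex; each row still sums to 1.
    samples = rng.dirichlet([0.01, 0.01, 0.01], size=5)
    print(samples.sum(axis=1))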
|
|
|
|
|
|
|
Add repeatability tests for when the range of the integers is `2**32`
(and `2**32 +/- 1` for good measure) with broadcasting. The underlying
functions called by Generator.integers and random.randint when the
inputs are broadcast are different from those called when the inputs
are scalars.
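For example, these two calls exercise different underlying code paths even though they request the same range (a sketch, not the actual test):

    import numpy as np

    rng1 = np.random.default_rng(42)
    rng2 = np.random.default_rng(42)

    scalar = rng1.integers(0, 2**32)              # scalar path
    broadcast = rng2.integers(0, [2**32, 2**32])  # broadcast path

    # For the same seed the two paths should agree element-wise.
    print(scalar, broadcast)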
|
|
|
|
|
|
|
|
|
|
|
When the input to Generator.integers was 2**32, the value 2**32-1
was being passed as the `rng` argument to the 32-bit Lemire method,
but that method requires `rng` be strictly less than 2**32-1.
The fix was to handle 2**32-1 by calling next_uint32 directly.
This also works for the legacy code without changing the stream
of random integers from `randint`.
Closes gh-16066.
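The previously failing boundary case, sketched:

    import numpy as np

    rng = np.random.default_rng(0)

    # high=2**32 (exclusive) covers the full 32-bit range and is the
    # case that used to hit the rng < 2**32 - 1 requirement.
    vals = rng.integers(0, 2**32, size=4, dtype=np.uint32)
    print(vals)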
|
|
|
|
Only a one-dimensional alpha parameter is currently supported, but
higher-dimensional arrays were silently accepted and gave incorrect
results. This fixes the regression. In the future, the API could be
extended to allow higher-dimensional arrays for alpha.
Fixes gh-15915.
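A sketch of the new guard (assuming it raises ValueError, like the pvals check below):

    import numpy as np

    rng = np.random.default_rng()

    # A 2-D alpha used to be silently accepted and produced incorrect
    # results; it is now rejected.
    try:
        rng.dirichlet(np.ones((2, 3)))
    except ValueError as exc:
        print(exc)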
BUG: Check that `pvals` is 1D in `_generator.multinomial`.
|
Make `Generator.negative_binomial` raise a ValueError if p=0.
`negative_binomial(n, p)` draws samples from the distribution of
the number of failures until n successes are encountered. If p is 0,
then a success is never encountered, so the probability distribution
is 0 for any finite number of failures. In other words, it is not
really a meaningful distribution, so we disallow p=0.
Closes gh-15913.
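The guard, sketched:

    import numpy as np

    rng = np.random.default_rng()

    # p=0 means a success never occurs, so the number of failures
    # before n successes is not a well-defined finite quantity.
    try:
        rng.negative_binomial(10, 0.0)
    except ValueError as exc:
        print(exc)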
|
|
Fixes #15871
|
|
|
|
types (#15816)
Cleanup following the dropping of Python 2.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
(gh-15463)
xref gh-14778
As pointed out in the comment by @jamesthomasgriffin, we did not include
a pxd file to expose the distribution functions documented in the random
C-API. This PR adds a c_distributions.pxd file that exposes them.
Squashed commits:
* BUG: add missing c_distributions.pxd to enable cython use of random C-API
* ENH, TST: add npyrandom library like npymath, test cython use of it
* BUG: actually prefix f-string with f
* MAINT: fixes from review, add _bit_generator.pxd
* STY: fixes from review
* BLD: don't use npyrandom library for mtrand legacy build
* TST: WindowsPath cannot be used in subprocess's list2cmdline
* MAINT, API: move _bit_generator to bit_generator
* DOC: add release note about moving bit_generator
* DOC, MAINT: fixes from review
* MAINT: redo dtype determination from review
|
|
|
|
|
|
* Cleanup unused imports (F401) of mostly standard Python modules,
or some internal but unlikely referenced modules
* Where internal imports are potentially used, mark with noqa
* Avoid redefinition of imports (F811)
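The marker in question, for reference (the module name here is illustrative):

    # Re-exported for the public API; silence the unused-import lint:
    from os import path  # noqa: F401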
|
|
|
|
|
* PEP 8: "Imports should usually be on separate lines"
* Where modified, sort imported modules alphabetically
* Clean-up unused imports from these expanded lines
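The convention applied, for reference:

    # PEP 8: imports on separate lines, sorted alphabetically.
    import os
    import sys

    # rather than:
    # import sys, os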
|
|
|
This implements NEP 34.
|
|
|
|
|
|
Inheriting from object was necessary for Python 2 compatibility to use
new-style classes. In Python 3, this is unnecessary as there are no
old-style classes.
Dropping the explicit `object` base is more idiomatic Python.
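For illustration (a generic class, not one from the codebase):

    # Python 2 required an explicit base for new-style classes:
    # class Example(object): ...

    # Python 3: every class is new-style, so this is equivalent.
    class Example:
        pass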
|
|
|
|
As numpy is Python 3 only, these import statements are now unnecessary
and don't alter runtime behavior.
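The statements in question, for reference:

    # Formerly needed for Python 2 compatibility; no-ops on Python 3.
    # from __future__ import absolute_import, division, print_function

    print(1 / 2)  # 0.5: true division is the default on Python 3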
|
|
|
|
|
|
|
|
|
|
|
|
|
* BUG: use tmp dir and check version for cython test
* TST, MAINT: skip on win32, fix formatting
* TST: fixes from review
* TST: fixes from review
* TST: fixes from review
|
|
DEP: issue deprecation warning when creating ragged array (NEP 34)
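The deprecated pattern, sketched (under later NumPy releases this eventually becomes an error):

    import numpy as np

    # Under NEP 34, ragged input without an explicit dtype draws a
    # VisibleDeprecationWarning:
    #     np.array([[1, 2], [3]])
    # Passing dtype=object states the intent explicitly:
    a = np.array([[1, 2], [3]], dtype=object)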
|
|
TST. API: test using distributions.h via cffi
|
|
|
|
* API: restructure and document numpy.random C-API
* DOC: fix bad reference
* API: ship, document, and start to test numpy.random C-API examples
* API, DOC, TST: fix tests, refactor documentation to include snippets
* BUILD: move public headers to numpy/core/include/numpy/random
* TST: ignore DeprecationWarnings in setuptools and numba
* DOC: document the C-API as used from Cython
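One way to reach these entry points from Python, via the ctypes interface that every BitGenerator exposes (a sketch; the shipped examples cover Cython, CFFI, and Numba):

    import numpy as np

    bg = np.random.PCG64(1234)

    # .ctypes (and .cffi) expose the C function pointers behind the
    # bit generator; these are the entry points the C-API documents.
    iface = bg.ctypes
    print(iface.next_uint64(iface.state))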
Relates to gh-6103
When an 8 or 16 bit dtype was given to the integers() method of the
Generator class, the resulting sample was biased. The problem was
the lines of the form
    const uint8_t threshold = -rng_excl % rng_excl;
in the implementations of Lemire's method, in the C file
distributions.c. The intent was to compute
    (UINT8_MAX+1 - rng_excl) % rng_excl
However, when the type of rng_excl has integer conversion rank lower
than a C int (which is almost certainly the case for the 8 and 16
bit types), the terms in the expression -rng_excl % rng_excl are
promoted to int, and the result of the calculation is always 0.
The fix is to make the expression explicit, and write it as
    const uint8_t threshold = (UINT8_MAX - rng) % rng_excl;
rng is used because rng_excl is simply rng + 1; by using rng, we
only need the constant UINT8_MAX, without the extra +1.
For consistency, I made the same change for all the data types
(8, 16, 32 and 64 bit).
Closes gh-14774.
The new method
multivariate_hypergeometric(self, object colors, object nsample,
size=None, method='marginals')
of the class numpy.random.Generator implements the multivariate
hypergeometric distribution; see
https://en.wikipedia.org/wiki/Hypergeometric_distribution,
specifically the section "Multivariate hypergeometric distribution".
Two algorithms are implemented. The user selects which algorithm
to use with the `method` parameter. The default, `method='marginals'`,
is based on repeated calls of the univariate hypergeometric
distribution function. The other algorithm, selected with
`method='count'`, is a brute-force method that allocates an
internal array of length ``sum(colors)``. It should only be used
when that value is small, but it can be much faster than the
"marginals" algorithm in that case.
The C implementations of the two methods are in the files
random_mvhg_count.c and random_mvhg_marginals.c in
numpy/random/src/distributions.
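A short usage sketch:

    import numpy as np

    rng = np.random.default_rng(2020)

    colors = [16, 8, 4]   # items of each color in the collection
    nsample = 6           # items drawn without replacement

    sample = rng.multivariate_hypergeometric(colors, nsample)
    print(sample, sample.sum())   # per-color counts; sums to nsample

    # 'count' allocates an array of length sum(colors); per the commit,
    # it pays off only when that sum is small.
    sample2 = rng.multivariate_hypergeometric(colors, nsample,
                                              method='count')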