'midpoint' must return the same as 'higher' and 'lower' when the two
are the same, not 'lower' + 0.5 as it was doing.
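
A minimal check of the intended behaviour (a sketch; this is np.percentile's interpolation option, written here with the modern `method` spelling, which the commit's era called `interpolation`):

    import numpy as np

    a = [0, 0, 0, 1]  # the 25th percentile falls between two equal values
    lo = np.percentile(a, 25, method='lower')
    hi = np.percentile(a, 25, method='higher')
    mid = np.percentile(a, 25, method='midpoint')
    assert lo == hi == mid  # all 0.0 with the fix, rather than lo + 0.5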
BUG: Fixed regressions in np.piecewise (refs #5737 and #5729).
Added unit tests for these conditions.
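
A sketch of the kinds of inputs involved (the exact cases are in the tickets; I'm assuming they cover scalar input and the implicit "otherwise" function):

    import numpy as np

    # Scalar input should be accepted and evaluated like a 0-d array.
    y = np.piecewise(1.5, [True], [lambda v: v * 2])
    assert y == 3.0

    # One condition plus a trailing default function (the "otherwise" case).
    x = np.array([-2.0, -1.0, 1.0, 2.0])
    y = np.piecewise(x, [x < 0], [lambda v: -v, lambda v: v])
    assert (y == np.abs(x)).all()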
DOC: Updated documentation wording and examples for np.percentile.
Examples had some factual errors. Wording updated in a couple of places.
np.put and np.place have an effect only when the first argument
is an instance of np.ndarray. These changes cause a TypeError
to be raised in either function when that requirement is not
satisfied.
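
In code (a sketch of the new contract):

    import numpy as np

    a = np.array([1, 2, 3])
    np.put(a, [0], [10])        # fine: first argument is an ndarray
    np.place(a, a > 2, [99])    # fine as well

    try:
        np.put([1, 2, 3], [0], [10])  # a plain list silently did nothing before
    except TypeError as e:
        print("TypeError:", e)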
[ci skip]
[ci skip]
Closes gh-6863.
The rowvar and bias parameters are booleans, not integers.
The bug traces to the PyArray_OrderConverter
function in conversion_utils.c, where no error
is raised if the order parameter passed in
is not a string, or is a string longer than
one character. This commit causes a
DeprecationWarning to be raised instead, which
will be turned into a TypeError or another
kind of error in a future release.
Closes gh-6598.
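
Roughly what this looks like from Python (the invalid value is made up for illustration; on releases after the deprecation period the call is rejected outright, so both outcomes are handled):

    import numpy as np
    import warnings

    try:
        with warnings.catch_warnings(record=True) as caught:
            warnings.simplefilter("always")
            # Historically only the first character was inspected, so a
            # string such as 'CRAP' was silently treated as order='C'.
            np.array([1, 2, 3], order='CRAP')
        print([str(w.message) for w in caught])   # DeprecationWarning here
    except (TypeError, ValueError) as e:
        print("rejected:", e)                     # on newer releases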
In all cases, it's either ...*n^(-1/3) or .../n^(1/3), not both. The actual functions are implemented correctly.
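
To make the equivalence concrete, Scott's rule written both ways (a sketch; I'm assuming the docstrings in question are the histogram bin-width estimators):

    import numpy as np

    x = np.random.default_rng(0).normal(size=1000)
    n = x.size
    h_mul = 3.5 * x.std() * n ** (-1.0 / 3.0)  # ... * n^(-1/3)
    h_div = 3.5 * x.std() / n ** (1.0 / 3.0)   # ... / n^(1/3), the same thing
    assert np.isclose(h_mul, h_div)            # writing both would square the term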
ensure this. Improved some error types.
np.median([]) returns NaN. Fixes bug/regression that raised an IndexError.
Added tests to ensure continued support of empty arrays.
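
The pinned-down behaviour, as a quick check:

    import numpy as np
    import warnings

    with warnings.catch_warnings():
        warnings.simplefilter("ignore", RuntimeWarning)  # "mean of empty slice"
        assert np.isnan(np.median([]))                   # NaN, not IndexError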
The argument for the original input array is named `a`
but in the docstring it was at some point referred to as `arr`.
[skip ci]
ENH: speed up cov by ~10% for large arrays
Replaces n² divisions by one division and n² multiplications.
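
The trick, sketched (not the verbatim diff):

    import numpy as np

    X = np.random.default_rng(1).normal(size=(200, 1000))
    X -= X.mean(axis=1, keepdims=True)
    fact = X.shape[1] - 1

    c_div = np.dot(X, X.T) / fact  # one division per element of the n x n result
    c_mul = np.dot(X, X.T)
    c_mul *= 1.0 / fact            # one division up front, then only multiplications
    assert np.allclose(c_div, c_mul)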
MAINT: corrcoef, memory usage optimization
We calculate sqrt on the small vector rather than on the huge
product matrix, and we combine the "outer" product with element-wise
division. So even though we now have a slower loop over the rows, this
code snippet runs about 3 times faster than before.
However, the speed improvement of the whole function is not really
significant, because cov() takes 80-99% of the time (depending on the
BLAS/LAPACK implementation and the number of CPU cores).
More importantly, we save 1/3 of the memory. For example, corrcoef()
for a [23k, m] matrix now needs 8GB instead of 12GB.
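
The essence of the change, sketched (the helper name is mine, not the diff's):

    import numpy as np

    def corr_from_cov(c):
        # Before: c / sqrt(outer(diag(c), diag(c))) built a second n x n matrix.
        d = np.sqrt(np.diag(c))  # sqrt on the small n-vector instead
        c = c.copy()             # the real code owns c; copied here to keep input intact
        c /= d[:, None]          # divide rows in place ...
        c /= d[None, :]          # ... then columns: no n x n outer-product temporary
        return c

    x = np.random.default_rng(2).normal(size=(4, 100))
    assert np.allclose(corr_from_cov(np.cov(x)), np.corrcoef(x))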
ENH: halve the memory requirement of np.cov
Prevents allocation of an n²-sized array.
XXX For large arrays, multiplying by 1/fact is more than 10% faster
than dividing by fact, but that doesn't pass the tests.
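
Sketched under the assumption that the saving comes from demeaning the data in place instead of materializing a second full-size array (the XXX note above is why the final scaling stays a true division):

    import numpy as np

    def cov_sketch(m, ddof=1):
        X = np.array(m, dtype=np.float64)    # the one working copy
        X -= X.mean(axis=1, keepdims=True)   # in place: no second full-size array
        c = np.dot(X, X.T.conj())
        c /= X.shape[1] - ddof               # division kept: 1/fact failed the tests
        return c

    x = np.random.default_rng(3).normal(size=(3, 50))
    assert np.allclose(cov_sketch(x), np.cov(x))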
https://github.com/numpy/numpy/issues/5900
Slight change to the cumsum docs as well, to match.
When given axis=None, behave the same as when axis is not provided.
Can now pass in bins='auto' (or 'scott', 'fd', 'rice', 'sturges') to get the corresponding rule-of-thumb estimator, which provides a decent estimate of the optimal number of bins for the given data.
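
Usage, briefly:

    import numpy as np

    data = np.random.default_rng(4).normal(size=1000)
    hist, edges = np.histogram(data, bins='auto')       # let numpy pick a rule
    hist_fd, edges_fd = np.histogram(data, bins='fd')   # or name one explicitly
    print(len(edges) - 1, len(edges_fd) - 1)            # bins chosen by each rule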
values to be modified
Most of these fixes involve putting blank lines around
.. versionadded:: x.x.x
and
.. deprecated:: x.x.x
Some of the examples were also fixed.
in array to close issue #586.
Also added unit tests.
Update docs for boolean array indexing and nonzero order.
Add links to row-major and column-major terms where they appear.
Closes #3177
This is to make it easier to find and remove deprecated features.
It would be a good idea if all deprecations were made with similar
comments.
DOC: Fix spelling of Von Hann's surname
ENH: add a weighted covariance calculation.
'fweights' allows integer frequencies to be specified for observation vectors,
and 'aweights' provides a more general importance or probabilistic weighting.
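
A sketch of the distinction: fweights is exactly equivalent to repeating observations, while aweights rescales their relative importance.

    import numpy as np

    x = np.array([[0.0, 1.0, 2.0],
                  [2.0, 1.0, 0.0]])
    f = np.array([1, 2, 1])  # integer repeat counts per observation (column)

    # fweights matches an explicitly repeated data set:
    assert np.allclose(np.cov(x, fweights=f),
                       np.cov(np.repeat(x, f, axis=1)))

    # aweights takes arbitrary non-negative importances instead:
    print(np.cov(x, aweights=np.array([0.5, 1.0, 0.5])))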
ENH: add np.stack
The motivation here is to present a uniform and N-dimensional interface for
joining arrays along a new axis, similarly to how `concatenate` provides a
uniform and N-dimensional interface for joining arrays along an existing axis.

Background
~~~~~~~~~~

Currently, users can choose between `hstack`, `vstack`, `column_stack` and
`dstack`, but none of these functions handle N-dimensional input. In my
opinion, it's also difficult to keep track of the differences between these
methods and to predict how they will handle input with different
dimensions.

In the past, my preferred approach has been either to construct the result
array explicitly and use indexing for assignment, or to use `np.array` to
stack along the first dimension and then use `transpose` (or a similar method)
to reorder dimensions if necessary. This is pretty awkward.

I brought this proposal up a few weeks ago on the numpy-discussion list:
http://mail.scipy.org/pipermail/numpy-discussion/2015-February/072199.html

I also received positive feedback on Twitter:
https://twitter.com/shoyer/status/565937244599377920

Implementation notes
~~~~~~~~~~~~~~~~~~~~

The one-line summaries for `concatenate` and `stack` have been (re)written to
mirror each other, and to make clear that the distinction between these
functions is whether they join over an existing or a new axis.

In general, I've tweaked the documentation and docstrings with an eye toward
pointing users to `concatenate`/`stack`/`split` as a fundamental set of basic
array manipulation routines, and away from
`array_split`/`{h,v,d}split`/`{h,v,d,column_}stack`.

I put this implementation in `numpy.core.shape_base` alongside
`hstack`/`vstack`, but it appears that there is also a `numpy.lib.shape_base`
module that contains another, larger set of functions, including `dstack`.
I'm not really sure where this belongs (or if it even matters).

Finally, it might be a good idea to write a masked array version of `stack`.
But I don't use masked arrays, so I'm not well motivated to do that.
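
The new interface at a glance:

    import numpy as np

    a = np.arange(12).reshape(3, 4)
    b = a + 100

    print(np.stack([a, b]).shape)           # (2, 3, 4): new leading axis
    print(np.stack([a, b], axis=1).shape)   # (3, 2, 4): new middle axis
    print(np.stack([a, b], axis=-1).shape)  # (3, 4, 2): new trailing axis

    # concatenate, by contrast, joins along an existing axis:
    print(np.concatenate([a, b]).shape)     # (6, 4)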
Updated the docstring of the gradient() function to specify that the return value is a `list` of `ndarray`.
This improves the documentation of the gradient() function, as discussed in #5628.
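
Concretely, what the docstring now promises:

    import numpy as np

    f = np.arange(12, dtype=float).reshape(3, 4)
    g = np.gradient(f)              # 2-D input -> a list with one array per axis
    print(type(g), len(g))          # <class 'list'> 2
    print(g[0].shape, g[1].shape)   # (3, 4) (3, 4)

    g1 = np.gradient(np.array([1.0, 2.0, 4.0]))
    print(type(g1))                 # a single ndarray for 1-D input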
The bias and ddof arguments had no effect on the calculation of the
correlation coefficient, because the normalization factor they control
appears in both the numerator and the denominator and cancels out.
Deprecate these arguments to np.corrcoef and np.ma.corrcoef.
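
A quick numeric check of the cancellation (suppressing the new DeprecationWarning so it runs cleanly):

    import numpy as np
    import warnings

    x = np.random.default_rng(5).normal(size=(2, 50))
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", DeprecationWarning)
        r0 = np.corrcoef(x, ddof=0)
        r1 = np.corrcoef(x, ddof=1)
    assert np.allclose(r0, r1)  # identical: the ddof-dependent factor cancels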
Allows access to the internal functions of the file.