| Commit message | Author | Age | Files | Lines |
|
np.put and np.place do something only when the first argument
is an instance of np.ndarray. These changes cause a TypeError
to be raised in either function when that requirement is not
satisfied.
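A minimal sketch of the new guard (hypothetical data; assuming the post-change semantics described above):

    import numpy as np

    a = np.arange(5)
    np.put(a, [0, 2], [-44, -55])     # fine: first argument is an ndarray
    np.place(a, a < 0, 0)             # fine: first argument is an ndarray

    try:
        np.put([1, 2, 3], [0], [99])  # a plain list is not an ndarray
    except TypeError as err:
        print("TypeError:", err)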
|
[ci skip]
|
[ci skip]
|
Closes gh-6863.
|
The rowvar and bias parameters are booleans, not integers.
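For illustration, the documented boolean usage (made-up data; rowvar and bias are the real np.cov parameters):

    import numpy as np

    x = np.array([[0.0, 2.0],
                  [1.0, 1.0],
                  [2.0, 0.0]])
    # rowvar=False: columns are variables; bias=True: normalize by N.
    c = np.cov(x, rowvar=False, bias=True)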
|
The bug traces to the PyArray_OrderConverter
function in conversion_utils.c, where no error
is raised if the order parameter passed in
is not a string or is a string longer than
one character. This commit causes a
DeprecationWarning to be raised instead, which
will later be turned into a TypeError or another
type of error in a future release.
Closes gh-6598.
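A hedged illustration of input that now warns, assuming only the leading character used to be inspected:

    import numpy as np
    import warnings

    a = np.arange(6).reshape(2, 3)
    with warnings.catch_warnings():
        warnings.simplefilter("error", DeprecationWarning)
        try:
            a.flatten(order="Crazy")  # previously treated as order='C'
        except DeprecationWarning as err:
            print("DeprecationWarning:", err)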
|
In all cases, it's either ...*n^(-1/3) or .../n^(1/3), not both. The actual functions are implemented correctly.
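For instance, Scott's rule (used here only as an illustrative assumption about which estimator the docstring describes) scales the bin width as $h = 3.5\,\hat{\sigma}\,n^{-1/3}$; since $n^{-1/3} = 1/n^{1/3}$, writing both factors at once would erroneously give $n^{-2/3}$.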
|
assure this. Improved some error types.
|
np.median([]) returns NaN. Fixes a bug/regression that raised an IndexError.
Added tests to ensure continued support of empty arrays.
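A quick check of the fixed behavior (NaN must be tested with np.isnan, since NaN != NaN):

    import numpy as np

    result = np.median([])   # previously raised IndexError
    assert np.isnan(result)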
|
The argument for the original input array is named `a`
but in the docstring it was at some point referred to as `arr`.
[skip ci]
|
ENH: speed up cov by ~10% for large arrays
|
Replaces n² divisions by one division and n² multiplications.
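A minimal sketch of the trick (illustrative names, not the literal np.cov internals):

    import numpy as np

    X = np.random.rand(2000, 2000)
    fact = X.shape[1] - 1

    c = np.dot(X, X.T)
    c *= 1.0 / fact   # one division up front, then n^2 cheap multiplications

Multiplying by a precomputed reciprocal is typically cheaper than n² divisions, at the cost of one extra rounding step.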
|
MAINT: corrcoef, memory usage optimization
|
We calculate sqrt on the small vector rather than on the huge
product matrix, and we combine the "outer" product with element-wise
division. So even though we now have a slower loop over the rows, this
code snippet runs about 3 times faster than before.
However, the speed improvement of the whole function is not really
significant, because cov() takes 80-99% of the time (depending on the
BLAS/LAPACK implementation and the number of CPU cores).
More important is that we save 1/3 of the memory. For example, corrcoef()
for a [23k, m] matrix now needs 8 GB instead of 12 GB.
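A hedged sketch of the idea (illustrative names, not the literal np.corrcoef source):

    import numpy as np

    def _corrcoef_from_cov(c):
        # sqrt over the length-n diagonal, not over the n x n matrix
        d = np.sqrt(np.diag(c))
        out = c.copy()
        for i in range(out.shape[0]):
            # Dividing one row at a time fuses the outer product of d
            # with the element-wise division, so no second n x n
            # temporary is ever allocated.
            out[i] /= d[i] * d
        return out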
|
ENH: halve the memory requirement of np.cov
|
Prevents allocation of an n²-sized array.
XXX For large arrays, multiplying by 1/fact is more than 10% faster
than dividing by fact, but that doesn't pass the tests.
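A sketch of the memory effect (simplified stand-in for the np.cov internals):

    import numpy as np

    X = np.random.rand(5000, 100)
    fact = X.shape[1] - 1

    # Out-of-place, the dot result and its divided copy coexist
    # (two n^2 arrays):  c = np.dot(X, X.T) / fact
    # In-place, only one n^2 array is ever allocated:
    c = np.dot(X, X.T)
    c /= fact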
|
https://github.com/numpy/numpy/issues/5900
Slight change to the cumsum doc as well, to match.
|
When given axis=None, behave the same as when axis is not provided.
|
Can now pass in bins='auto' (or 'scott', 'fd', 'rice', 'sturges') and have the corresponding rule-of-thumb estimator provide a decent estimate of the optimal number of bins for the given data.
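For example (bins='auto' is the real np.histogram interface; the data here is made up):

    import numpy as np

    data = np.random.randn(1000)
    hist, bin_edges = np.histogram(data, bins='auto')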
|
values to be modified
|
Most of these fixes involve putting blank lines around
.. versionadded:: x.x.x
and
.. deprecated:: x.x.x
directives. Some of the examples were also fixed.
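A sketch of the pattern in a NumPy-style docstring (hypothetical function; the point is the blank line on each side of the directive):

    def example(a, new_option=False):
        """Do something with `a`.

        Parameters
        ----------
        new_option : bool, optional
            Whether to enable the new behavior.

            .. versionadded:: 1.10.0

        """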
|
in array to close issue #586.
Also added unit tests.
|
Update docs for boolean array indexing and nonzero order.
Add links to row-major and column-major terms where they appear.
Closes #3177
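To illustrate the documented ordering (real np.nonzero behavior):

    import numpy as np

    a = np.array([[1, 0],
                  [1, 1]])
    # Indices come back in row-major (C-style) order:
    print(np.nonzero(a))   # (array([0, 1, 1]), array([0, 0, 1]))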
|
This is to make it easier to find and remove deprecated features.
It would be a good idea if all deprecations were made with similar
comments.
|
DOC: Fix spelling of Von Hann's surname
|
ENH: add a weighted covariance calculation.
|
'fweights' allows integer frequencies to be specified for observation vectors,
and 'aweights' provides a more general importance or probabilistic weighting.
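A short usage sketch (fweights and aweights are the real np.cov parameters; the data is made up):

    import numpy as np

    x = np.array([[0.0, 1.0, 2.0],
                  [2.0, 1.0, 0.0]])
    fw = np.array([1, 2, 1])         # integer repeat count per observation
    aw = np.array([0.5, 1.0, 0.5])   # relative importance per observation

    c = np.cov(x, fweights=fw, aweights=aw)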
|
ENH: add np.stack
|
The motivation here is to present a uniform and N-dimensional interface for
joining arrays along a new axis, similar to how `concatenate` provides a
uniform and N-dimensional interface for joining arrays along an existing axis.

Background
~~~~~~~~~~
Currently, users can choose between `hstack`, `vstack`, `column_stack` and
`dstack`, but none of these functions handle N-dimensional input. In my
opinion, it's also difficult to keep track of the differences between these
methods and to predict how they will handle input with different
dimensions.

In the past, my preferred approach has been either to construct the result
array explicitly and use indexing for assignment, or to use `np.array` to
stack along the first dimension and then use `transpose` (or a similar method)
to reorder dimensions if necessary. This is pretty awkward.

I brought this proposal up a few weeks ago on the numpy-discussion list:
http://mail.scipy.org/pipermail/numpy-discussion/2015-February/072199.html
I also received positive feedback on Twitter:
https://twitter.com/shoyer/status/565937244599377920

Implementation notes
~~~~~~~~~~~~~~~~~~~~
The one-line summaries for `concatenate` and `stack` have been (re)written to
mirror each other, and to make clear that the distinction between these
functions is whether they join over an existing or a new axis.

In general, I've tweaked the documentation and docstrings with an eye toward
pointing users to `concatenate`/`stack`/`split` as a fundamental set of basic
array manipulation routines, and away from
`array_split`/`{h,v,d}split`/`{h,v,d,column_}stack`.

I put this implementation in `numpy.core.shape_base` alongside `hstack`/`vstack`,
but it appears that there is also a `numpy.lib.shape_base` module that contains
another, larger set of functions, including `dstack`. I'm not really sure where
this belongs (or if it even matters).

Finally, it might be a good idea to write a masked array version of `stack`.
But I don't use masked arrays, so I'm not well motivated to do that.
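For reference, a quick demonstration of the np.stack API as described above:

    import numpy as np

    a = np.array([1, 2, 3])
    b = np.array([4, 5, 6])

    np.stack([a, b])           # shape (2, 3): joins along a new leading axis
    np.stack([a, b], axis=-1)  # shape (3, 2): joins along a new trailing axis
    np.concatenate([a, b])     # shape (6,): joins along an existing axis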
|
Update the docstring of the gradient() function, specifying that the return value is a `list` of `ndarray`.
|
This improves the documentation for the gradient() function, as discussed in #5628.
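An illustration of the documented return type (real np.gradient behavior on a made-up array):

    import numpy as np

    f = np.array([[1.0, 2.0, 6.0],
                  [3.0, 4.0, 5.0]])
    grads = np.gradient(f)           # one ndarray per axis of f
    print(type(grads), len(grads))   # <class 'list'> 2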
|
The bias and ddof arguments had no effect on the calculation of the
correlation coefficient, because the normalization factor they set
cancels in the calculation. Deprecate these arguments to np.corrcoef
and np.ma.corrcoef.
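The cancellation is easy to see from the standard definition (not quoted from the commit): $r_{ij} = c_{ij}/\sqrt{c_{ii}\,c_{jj}}$. The bias/ddof arguments only rescale the covariance matrix by some factor $\lambda > 0$, and $\lambda c_{ij}/\sqrt{\lambda c_{ii}\cdot\lambda c_{jj}} = r_{ij}$, so they have no effect on the result.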
|
Allows access to internal functions in the file.
|
Change '{ndarray, float}' to 'ndarray or float', since {} is for
when the value is an enumeration.
|
Update to average calculation
|
Closes gh-5202.
|
BUG: copy inherited masks in MaskedArray.__array_finalize__
|
Previously, operations which created a new masked array from an old
masked array -- e.g., np.empty_like -- would tend to result in the new
and old arrays sharing the same .mask attribute. This leads to
horrible brokenness in which writes to one array affect the other. In
particular this was responsible for part of the brokenness that
@jenshnielsen reported in gh-5184 in which np.gradient on masked
arrays would modify the original array's mask.
This fixes the worst part of the issues addressed in gh-3404, though
there's still an argument that we ought to deprecate the mask-copying
behaviour entirely so that empty_like returns an array with an empty
mask. That can wait until we find someone who cares though.
I also applied a small speedup to np.gradient (avoiding one copy);
previously this inefficiency was masking (heh) some of the problems
with masked arrays, so removing it is both an optimization and makes
it easier to test that things are working now.
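A before/after sketch of the fixed semantics (illustrating the behavior described above, not the patch itself):

    import numpy as np
    import numpy.ma as ma

    a = ma.masked_array([1, 2, 3], mask=[True, False, False])
    b = np.empty_like(a)

    # With the fix, b carries its own copy of the mask, so editing it
    # no longer silently rewrites a.mask:
    b.mask[0] = False
    assert a.mask[0]   # a's mask is unchanged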
|
* Previous expected behavior was that the gradient is computed using central
differences in the interior and first differences at the boundaries.
* gradient was updated in v1.9.0 so that second-order accurate calculations are
done at the boundaries, but this breaks expected behavior with old code, so an
`edge_order` keyword (default: 1) is added to specify whether first- or
second-order calculations at the boundaries should be used (see the usage
sketch after this list).
* Since the second argument is *varargs, in order to maintain compatibility
with old code and with Python 2.6 & 2.7, **kwargs is used; any kwarg other
than `edge_order` raises an error listing the offending kwargs.
* Tests and documentation updated to reflect this change.
* Added `.. versionadded:: 1.9.1` to the documentation of the new optional
kwarg `edge_order`, and specified the supported `edge_order` values.
* Clarified documentation for `varargs`.
* Made indentation in the docstring consistent with other docstring styles.
* Corrected the examples.
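A usage sketch of the new keyword (real np.gradient interface; sample values made up):

    import numpy as np

    y = np.array([1.0, 2.0, 4.0, 7.0, 11.0])

    g1 = np.gradient(y, 1.0)                 # first-order boundaries (default)
    g2 = np.gradient(y, 1.0, edge_order=2)   # second-order accurate boundaries

    # np.gradient(y, 1.0, bogus=3)  # would raise: unrecognized keyword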
|