change '{ndarray, float}' -> 'ndarray or float', since {} is reserved
for when the value is an enumeration
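For illustration, a hypothetical numpydoc fragment contrasting the two
notations (the names here are made up for the example):

    Parameters
    ----------
    interpolation : {'linear', 'lower', 'higher'}
        Braces denote an enumeration of allowed values.

    Returns
    -------
    out : ndarray or float
        "or" joins alternative types.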
Update to average calculation

closes gh-5202
BUG: copy inherited masks in MaskedArray.__array_finalize__

Previously, operations which created a new masked array from an old
masked array -- e.g., np.empty_like -- would tend to result in the new
and old arrays sharing the same .mask attribute. This led to horrible
brokenness in which writes to one array affected the other. In
particular, this was responsible for part of the brokenness that
@jenshnielsen reported in gh-5184, in which np.gradient on masked
arrays would modify the original array's mask.

This fixes the worst part of the issues addressed in gh-3404, though
there's still an argument that we ought to deprecate the mask-copying
behaviour entirely, so that empty_like returns an array with an empty
mask. That can wait until we find someone who cares, though.

I also applied a small speedup to np.gradient (avoiding one copy);
previously this inefficiency was masking (heh) some of the problems
with masked arrays, so removing it is both an optimization and makes
it easier to test that things are working now.
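A rough sketch of the failure mode being fixed (illustrative code, not
from the commit itself):

    import numpy as np

    a = np.ma.masked_array([1.0, 2.0, 3.0], mask=[False, True, False])
    b = np.empty_like(a)

    # Before this fix, b.mask could be the very same object as a.mask,
    # so writing to one array's mask silently corrupted the other:
    b.mask[0] = True
    print(a.mask[0])          # pre-fix: True (corrupted); post-fix: False

    # After this change, __array_finalize__ copies the inherited mask:
    print(b.mask is a.mask)   # post-fix: False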
* The previous expected behavior was that the gradient is computed using
  central differences in the interior and first differences at the
  boundaries.
* gradient was updated in v1.9.0 so that second-order accurate
  calculations are done at the boundaries, but this breaks expected
  behavior with old code, so the `edge_order` keyword (default: 1) was
  added to specify whether first or second order calculations at the
  boundaries should be used (a usage sketch follows this list).
* Since the second argument is *varargs, **kwargs is used in order to
  maintain compatibility with old code and with Python 2.6 & 2.7; any
  kwarg other than `edge_order` raises an error listing the offending
  kwargs.
* Tests and documentation were updated to reflect this change.
* Added `.. versionadded:: 1.9.1` to the documentation of the new
  optional kwarg `edge_order`, and specified the supported `edge_order`
  values.
* Clarified the documentation for `varargs`.
* Made indentation in the docstring consistent with other docstring
  styles.
* Corrected the examples.
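A usage sketch of the resulting interface (illustrative):

    import numpy as np

    x = np.linspace(0.0, 1.0, 5)
    y = x**2
    dx = x[1] - x[0]

    # The default keeps the old behaviour: first differences at the
    # boundaries (edge_order=1 implied).
    g1 = np.gradient(y, dx)

    # Opt in to the second-order accurate boundary calculations.
    g2 = np.gradient(y, dx, edge_order=2)

    # Any keyword other than edge_order is rejected, e.g.
    # np.gradient(y, dx, bogus=1) raises an error naming the bad kwarg.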
BUG: Fix np.insert for inserting a single item into a structured array

Note that there are some object array special cases because multiple
inserts are allowed, and `np.array(..., dtype=object)` is not always
clear.
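For example, the single-item case this fixes (a sketch):

    import numpy as np

    a = np.array([(1, 'a'), (2, 'b')], dtype=[('x', int), ('s', 'S1')])

    # A single structured item is given as a tuple matching the dtype;
    # previously this path was broken.
    b = np.insert(a, 1, (9, 'z'))
    print(b)   # [(1, 'a') (9, 'z') (2, 'b')]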
BUG: don't overwrite input percentile arrays
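A sketch of the guarantee this restores:

    import numpy as np

    a = np.arange(10.0)
    q = np.array([25.0, 50.0, 75.0])
    np.percentile(a, q)
    print(q)   # still [25. 50. 75.] -- q is no longer modified in place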
Charris pep8 numpy lib
The rules enforced are the same as those used for scipy.
Some of those problems look like potential coding errors. In those
cases a Fixme comment was added and the offending code, usually an
unused variable, was commented out.
The following inequality causes wrong counting at the edges and can be
avoided by making the edge array of the same type as the input data:

    In [1]: np.around(np.float64(6010.36962890625), 5)
    Out[1]: 6010.3696300000001

    In [2]: np.around(np.float32(6010.36962890625), 5)
    Out[2]: 6010.3701

Closes gh-4799
When `x` has more than one element, the condlist `[True, False]`
is made equivalent to `[[True, False]]`, which is correct.
However, when `x` is zero dimensional, the expected condlist is
`[[True], [False]]`: this commit addresses the issue. In addition,
the documentation stated that there could be undefined values,
but these are actually 0 by default: using `nan` would be desirable,
but for the moment the docs were corrected. Closes #331.
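A sketch of the two behaviours described above:

    import numpy as np

    # Zero-dimensional x: each condition is scalar, so the condlist is
    # treated as [[True], [False]] rather than [[True, False]]:
    x = np.array(3.0)
    print(np.piecewise(x, [x < 0, x >= 0], [-1, 1]))   # 1.0

    # Values not covered by any condition default to 0, as the corrected
    # docs now state:
    y = np.array([-1.0, 1.0])
    print(np.piecewise(y, [y > 0], [42]))              # [ 0. 42.]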
[ci skip]
ENH: Speed improvements and deprecations for np.select

The idea for this (and some of the code) originally comes from
Graeme B Bell (gh-3537). choose is not as fast and is pretty limited,
so an iterative copyto is used instead.

Closes gh-3259, gh-3537, gh-3551, and gh-3254
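A minimal sketch of the iterative-copyto idea (not the actual numpy
implementation; the helper name is made up):

    import numpy as np

    def select_sketch(condlist, choicelist, default=0):
        # Fill from the last condition to the first so that earlier
        # conditions take precedence, matching np.select's documented
        # behaviour.
        shape = np.broadcast(*condlist, *choicelist).shape
        result = np.full(shape, default, dtype=np.result_type(*choicelist))
        for cond, choice in zip(condlist[::-1], choicelist[::-1]):
            np.copyto(result, choice, where=cond)
        return result

    x = np.arange(6)
    print(select_sketch([x < 2, x > 3], [x, x**2]))   # [ 0  1  0  0 16 25]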
Merging median and percentile would break astropy and quantities, as
we don't call mean anymore. These packages rely on overriding mean to
add their own median behavior.
fix #3285
DOC: Fix documentation of normed keyword in histogram2d and histogramdd.

The documentation misrepresented what happened, leaving out the
division by the total number of sample points.

Also run spellcheck over function_base.py and twodim_base.py and break
some long lines.

Closes #2423.
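The now-documented behaviour, sketched:

    import numpy as np

    x, y = np.random.rand(1000), np.random.rand(1000)
    H, xe, ye = np.histogram2d(x, y, bins=4, normed=True)

    # normed=True divides by the total number of sample points *and* by
    # the bin area, giving a probability density that integrates to ~1:
    area = np.diff(xe)[:, None] * np.diff(ye)[None, :]
    print((H * area).sum())   # ~1.0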
Closes issue #4266, fixes histogramdd treatment of events at the rightmost bin edge
Fixes GitHub issue #4266
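A sketch of the corrected edge handling:

    import numpy as np

    # A sample lying exactly on the rightmost bin edge is counted in the
    # last bin instead of being lost:
    H, edges = np.histogramdd(np.array([[1.0]]), bins=(2,),
                              range=[(0.0, 1.0)])
    print(H)   # [0. 1.]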
closes gh-4295
The docstring for interp() contained the grammatically incorrect text
"defaults is". I corrected this to "default is".
In some cases a negative axis argument to np.insert would result
in wrong behaviour due to np.rollaxis; add a modulo operation to
avoid this (an out-of-bounds axis still raises an error via
arr.shape[axis]).

Closes gh-3494
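For example (a sketch):

    import numpy as np

    a = np.arange(6).reshape(2, 3)

    # axis=-1 now behaves like axis=1:
    print(np.insert(a, 1, 99, axis=-1))
    # [[ 0 99  1  2]
    #  [ 3 99  4  5]]

    # A genuinely out-of-range axis still raises, via arr.shape[axis]:
    # np.insert(a, 1, 99, axis=-3)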
Replace slow exec with a direct __import__.
This improves `import numpy` speed by about 10%.
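A toy illustration of the difference (not numpy's actual loader code):

    import timeit

    def load_with_exec(name):
        exec('import ' + name)      # parses and compiles source each call

    def load_with_import(name):
        return __import__(name)     # calls the import machinery directly

    print(timeit.timeit(lambda: load_with_exec('math'), number=10000))
    print(timeit.timeit(lambda: load_with_import('math'), number=10000))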
This preserves the complex (and higher precision float or
object) type of the input array, so that the complex
covariance and correlation coefficients can be calculated.
It also fixes the behaviour of empty arrays. These will
now result in either a 0x0 result or an NxN result filled
with NaNs.

A warning is now issued when ddof is too large and the factor
is set to 0, so that in this case the result is always NaN or
infinity/negative infinity and never a negative number.

Closes gh-597 and gh-2680
Closes gh-3882 (original pull request)
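A sketch of the new behaviour:

    import numpy as np

    z = np.array([[1 + 2j, 3 - 1j, 2 + 0j],
                  [2 - 1j, 1 + 1j, 0 + 2j]])

    # Complex input keeps its type, so the covariance and correlation
    # coefficients come out complex instead of losing the imaginary part:
    print(np.cov(z).dtype)        # complex128
    print(np.corrcoef(z).dtype)   # complex128

    # A too-large ddof warns, and the factor is clamped to 0, so the
    # result is nan/inf rather than a negative number:
    print(np.cov(z[0].real, ddof=5))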
percentile returned scalars and lists of arrays in 1.8;
adapt the new percentile to return scalars and arrays with the
q dimension first, for compatibility.
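That is (a sketch):

    import numpy as np

    a = np.arange(12).reshape(3, 4)

    # Scalar q -> scalar result, as in 1.8:
    print(np.percentile(a, 50))                       # 5.5

    # Sequence q -> array with the q dimension first:
    print(np.percentile(a, [25, 75], axis=0).shape)   # (2, 4)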
closes gh-3846
Continuing the pep8 effort, this adds a newline after each `Error(`
and tries to wrap correctly.
This makes function_base.py almost pep8 compatible.
It also removes the Set import, which has been unneeded since
Python 2.4, and organises the import statements.
* added a note that `overwrite_input` has no effect when `a` is not
  an array in the percentile function.
* added a unit test to verify that no error is raised when `a` is not
  an array and `overwrite_input` is True (see the sketch below).
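A sketch of what the new test covers:

    import numpy as np

    data = [3, 1, 2]   # a plain list, not an ndarray

    # overwrite_input has no effect here, and it must not raise either:
    print(np.percentile(data, 50, overwrite_input=True))   # 2.0
    print(data)   # the caller's list is untouched: [3, 1, 2]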
The percentile function was enhanced by adding limit and interpolation
parameters to give it similar functionality to SciPy's
stats.scoreatpercentile function. In addition, the function was
vectorized along q and rewritten to use the partition method for
better performance.
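A usage sketch of the new interpolation parameter (limit is not shown
here):

    import numpy as np

    a = np.array([1.0, 2.0, 3.0, 4.0])

    # q may now be a vector, and the interpolation scheme is selectable:
    print(np.percentile(a, [30, 60]))                          # [1.9 2.8]
    print(np.percentile(a, [30, 60], interpolation='lower'))   # [1. 2.]
    print(np.percentile(a, [30, 60], interpolation='higher'))  # [2. 3.]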
* numpy.gradient has been enhanced to use a second order accurate
  one-sided finite difference stencil at boundary elements of the
  array. Second order accurate central differences are still used for
  the interior elements. The result is a fully second order accurate
  approximation of the gradient over the full domain (see the check
  after this list).
* The one-sided stencil uses 3 elements, each with a different weight.
  A forward difference is used for the first element,
      dy/dx ~ -(3.0*y[0] - 4.0*y[1] + y[2]) / (2.0*dx)
  and a backward difference is used for the last element,
      dy/dx ~ (3.0*y[-1] - 4.0*y[-2] + y[-3]) / (2.0*dx)
* Because the datetime64 datatype cannot be multiplied, a view of
  datetime64 arrays is taken and cast to int64. The gradient algorithm
  is then applied to the view rather than the input array.
* Previously no dimension checks were performed on the input array.
  Now, if the array size along the differentiation axis is less than 2,
  a ValueError is raised which explains that more elements are needed.
  If the size is exactly two, the function falls back to using a 2
  point stencil (the old behaviour). If the size is 3 or above, the
  higher accuracy methods are used.
* A new test has been added which validates the higher accuracy. Old
  tests have been updated to pass. Note, this should be expected
  because the boundary elements now return different (more accurate)
  values.
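A quick check of the claimed accuracy (using the edge_order keyword
that, per the entry further up, later made the boundary scheme opt-in):

    import numpy as np

    # Central differences are exact for quadratics, and so are the new
    # 3-point one-sided boundary stencils, so the derivative is
    # recovered exactly (up to rounding) over the whole domain:
    x = np.linspace(0.0, 1.0, 6)
    y = x**2
    dx = x[1] - x[0]

    g = np.gradient(y, dx, edge_order=2)
    print(np.allclose(g, 2 * x))   # True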