| Commit message | Author | Age | Files | Lines |
DOC: Fix spelling of Von Hann's surname
Fixes gh-5862
ENH: add a weighted covariance calculation.
'fweights' allows integer frequencies to be specified for observation vectors,
and 'aweights' provides a more general importance or probabilistic weighting.
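A minimal sketch of the two new keywords (the data here is illustrative, not from the commit):

```python
import numpy as np

x = np.array([[1., 2., 3.],
              [4., 5., 7.]])          # two variables, three observations

# fweights: integer repeat counts, one per observation (column)
c_f = np.cov(x, fweights=[2, 1, 1])
# equivalent to literally repeating the first observation
c_rep = np.cov(np.array([[1., 1., 2., 3.],
                         [4., 4., 5., 7.]]))

# aweights: relative importance weights, need not be integers
c_a = np.cov(x, aweights=[0.5, 0.25, 0.25])
```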
ENH: add np.stack
The motivation here is to present a uniform and N-dimensional interface for
joining arrays along a new axis, similarly to how `concatenate` provides a
uniform and N-dimensional interface for joining arrays along an existing axis.
Background
~~~~~~~~~~
Currently, users can choose between `hstack`, `vstack`, `column_stack` and
`dstack`, but none of these functions handle N-dimensional input. In my
opinion, it's also difficult to keep track of the differences between these
methods and to predict how they will handle input with different
dimensions.
In the past, my preferred approach has been either to construct the result
array explicitly and use indexing for assignment, or to use `np.array` to
stack along the first dimension and then use `transpose` (or a similar method)
to reorder dimensions if necessary. This is pretty awkward.
I brought this proposal up a few weeks ago on the numpy-discussion list:
http://mail.scipy.org/pipermail/numpy-discussion/2015-February/072199.html
I also received positive feedback on Twitter:
https://twitter.com/shoyer/status/565937244599377920
Implementation notes
~~~~~~~~~~~~~~~~~~~~
The one-line summaries for `concatenate` and `stack` have been (re)written to
mirror each other, and to make clear that the distinction between these functions
is whether they join over an existing or new axis.
In general, I've tweaked the documentation and docstrings with an eye toward
pointing users to `concatenate`/`stack`/`split` as a fundamental set of basic
array manipulation routines, and away from
`array_split`/`{h,v,d}split`/`{h,v,d,column_}stack`
I put this implementation in `numpy.core.shape_base` alongside `hstack`/`vstack`,
but it appears that there is also a `numpy.lib.shape_base` module that contains
another larger set of functions, including `dstack`. I'm not really sure where
this belongs (or if it even matters).
Finally, it might be a good idea to write a masked array version of `stack`.
But I don't use masked arrays, so I'm not well motivated to do that.
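The distinction between joining along a new axis and an existing one can be sketched as follows (shapes are illustrative):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = a + 10

s = np.stack([a, b], axis=0)        # join along a NEW axis
c = np.concatenate([a, b], axis=0)  # join along an EXISTING axis

# stack inserts a fresh dimension, concatenate extends one that exists
print(s.shape)  # (2, 2, 3)
print(c.shape)  # (4, 3)
```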
BUG: setdiff1d return type
Fixes #5846
Fixes #5846 (If called with an empty array as first argument, the returned
array had dtype bool instead of the dtype of the input array)
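A small sketch of the fixed behaviour (values are illustrative):

```python
import numpy as np

empty = np.array([], dtype=np.int64)
result = np.setdiff1d(empty, np.array([1, 2]))
# after the fix, the result keeps the input dtype (int64) rather than bool
```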
closes gh-5839
Ix intp
Fixes #5804
ENH: Multiple comment tokens in loadtxt
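With this enhancement, `comments` accepts a sequence of tokens rather than a single string; a hedged sketch (the comment markers here are illustrative):

```python
import io
import numpy as np

text = io.StringIO("# one comment style\n1 2\n% another style\n3 4\n")
# lines starting with any listed token are skipped
arr = np.loadtxt(text, comments=['#', '%'])
```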
Update the docstring of the gradient() function, specifying that the return value is a `list` of `ndarray`.
DOC: Return of gradient() function
This improves the documentation of the gradient() function, as discussed in #5628.
Fix read_array_header_*
Previously, the passed-in version was ignored.
Previously read_array_header_1_0 & read_array_header_2_0 were not
returning the documented results.
Closes #5602
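A round trip showing the documented return value, `(shape, fortran_order, dtype)`; the buffer contents here are illustrative:

```python
import io
import numpy as np
from numpy.lib import format as npformat

buf = io.BytesIO()
np.save(buf, np.arange(6, dtype=np.int64).reshape(2, 3))
buf.seek(0)

version = npformat.read_magic(buf)  # consume the magic prefix first
shape, fortran_order, dtype = npformat.read_array_header_1_0(buf)
```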
Also fix the documentation to reflect current behavior
1) Copies of the input are always returned.
2) Type is preserved, in particular, integer is not upcast to float.
Note that the function now accepts all types, not just inexact and
integer. The lack of upcast has been present since at least numpy 1.5, but
may be a bug.
Fix is to properly set array type.
Closes #1478
The bias and ddof arguments had no effect on the calculation of the
correlation coefficient, because their contribution cancels in the calculation.
Deprecate these arguments to np.corrcoef and np.ma.corrcoef.
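The cancellation can be demonstrated directly: the `1/(N - ddof)` factor in the covariance divides out of the normalization, so the correlation matrix is the same for every `ddof` (data here is illustrative):

```python
import numpy as np

x = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 9.]])

results = []
for ddof in (0, 1):
    c = np.cov(x, ddof=ddof)
    d = np.sqrt(np.diag(c))
    results.append(c / np.outer(d, d))  # scale factor cancels in the ratio
```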
BUG: Fixed issue #4679 - make numpy.tile always return a copy
So removed the parentheses and included the return statement.
Tile now copies the input when it is a numpy array and all dimensions are
repeated only once.
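A sketch of the fixed behaviour in the degenerate case (illustrative values):

```python
import numpy as np

a = np.arange(3)
b = np.tile(a, 1)  # every dimension repeated exactly once
b[0] = 99
# a is unchanged: tile now returns a copy even in this case, never a view
```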
ENH: PyArray_FromInterface checks descr if typestr is np.void
When the 'typestr' member of the __array_interface__ dictionary defines
a np.void dtype, check the 'descr' member, and if it is a valid dtype
description and it is not the default one, use it to construct the
dtype for the array to return.
This fixes #5081, as as_strided no longer has to worry about changing
the dtype of the return.
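This can be exercised from Python by handing numpy an object exposing `__array_interface__`; the wrapper class below is a hypothetical example, not part of the change:

```python
import numpy as np

a = np.zeros(3, dtype=[('x', '<i4'), ('y', '<f8')])
iface = a.__array_interface__
# 'typestr' is a void type such as '|V12'; 'descr' holds the field layout

class FromInterface:
    """Hypothetical wrapper exposing only the interface dict."""
    def __init__(self, interface):
        self.__array_interface__ = interface

b = np.array(FromInterface(iface))
# with the fix, b is constructed with the structured dtype from 'descr'
```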
The problem is that the Python complex type constructor only accepts a
pair of numbers or a string, unlike other numeric types it does not work
with byte strings. The numpy error is subtle, as loadtxt opens the file
in the default text mode, but then converts the input lines to byte
strings when they are split into separate values. The fix here is to
convert the values back to strings in the complex converter.
Closes #5655.
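With the converter fixed, reading complex text data works as in this sketch (file contents are illustrative):

```python
import io
import numpy as np

text = io.StringIO("1+2j 3-4j\n5+0j 7+8j\n")
# each token is handed to the Python complex() constructor as a str
arr = np.loadtxt(text, dtype=complex)
```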
Add pickle compatibility flags to numpy.save and numpy.load. Allow only
combinations that cannot corrupt binary data in Numpy arrays. Use the
same default values as Python pickle.
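A sketch of the flag in use (the object-array payload is illustrative):

```python
import io
import numpy as np

buf = io.BytesIO()
np.save(buf, np.arange(4), allow_pickle=False)  # plain numeric data: no pickle needed
buf.seek(0)
a = np.load(buf, allow_pickle=False)

# object arrays require pickling, so this combination is rejected
obj = np.array([{'key': 1}], dtype=object)
refused = False
try:
    np.save(io.BytesIO(), obj, allow_pickle=False)
except ValueError:
    refused = True
```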
Fix StringConverter to avoid OverflowError in genfromtxt. Before, int(2**66) would
work (and return a 'long'), but np.array([2**66], dtype=np.integer) would raise an
OverflowError, which would propagate to genfromtxt. This commit fixes that by testing
in advance whether an OverflowError will occur. In addition, it adds an explicit
np.int64 entry on systems where integer means int32. Values larger than 2**63-1 will
be cast as float. This includes a regression test and adds an entry to the release
notes.
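A sketch of the fixed behaviour, under the assumption that the oversized column is upgraded to float as the commit describes (the input is illustrative; the second value is 2**66):

```python
import io
import numpy as np

text = io.StringIO("1 73786976294838206464\n2 3\n")
data = np.genfromtxt(text, dtype=None)
# the column containing a value > 2**63-1 falls back to float
# instead of raising OverflowError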
| | | |
Fixed docstrings for functions where the parameter names in the docstring
didn't match the function signature.
DOC: Describe return_counts keyword in np.unique docstring
ENH: add np.broadcast_to and reimplement np.broadcast_arrays
|
| | |/
| |/|
| | |
| | |
| | |
| | |
| | |
| | | |
Per the mailing list discussion [1], I have implemented a new function
`broadcast_to` that broadcasts an array to a given shape according to
numpy's broadcasting rules.
[1] http://mail.scipy.org/pipermail/numpy-discussion/2014-December/071796.html
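A minimal sketch of the new function (shapes are illustrative):

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.broadcast_to(x, (2, 3))  # read-only view with the broadcast shape
```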
| | |
The tests were using assert_almost_equal and setting the precision to 3
decimals. The reason for that low precision appears to have been the
failure of the tests for a more reasonable precision. The fix was to use
assert_allclose instead.