Commit messages
Also drop the sometrue/alltrue link; it is equivalent to any/all.
Also remove the test_diagonal_deprecation test and add a test that
checks that a view is returned and that it is not writeable.
Closes #596.
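For reference, a sketch of the behaviour the new test targets, assuming the eventual semantics (NumPy 1.9 and later return a genuinely read-only view; during the deprecation period writing merely warned). The array values are illustrative.

    import numpy as np

    a = np.arange(9).reshape(3, 3)
    d = a.diagonal()
    # The result shares memory with `a` rather than being a fresh copy ...
    print(np.may_share_memory(a, d))   # True
    # ... and it is not writeable, so accidental in-place edits fail loudly.
    print(d.flags.writeable)           # False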
Run the 2to3 ws_comma fixer on *.py files. Some lines are now too long
and will need to be broken at some point; OTOH, some lines were already
too long and needed breaking anyway. Now seems as good a time as any to
do this, with open PRs at a minimum.
Now is as good a time as any, with open PRs at a low.
A partition sorts the kth element into its sorted position and moves all
smaller elements before it and all equal or greater elements after it.
The ordering of the elements within each of the two partitions is
undefined.
It is implemented via the introselect algorithm, which has worst-case
linear complexity, compared to a full sort with linearithmic complexity.
Introselect uses a quickselect with a median-of-three pivot and falls
back to a quickselect with a median-of-medians-of-five pivot if it is
not making sufficient progress.
The pivots used during the search for the wanted kth element can
optionally be stored and reused for further partitionings of the array.
The Python interface does this when an array of kth values is provided
to the partition function. This improves the performance of median,
which needs to select two elements when the size of the array is even;
a percentile function interpolating between values profits from this as
well.
String selection is implemented in terms of quicksort, which for now
has the same properties as a selection.
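A short sketch of the Python-level interface built on this machinery (np.partition/np.argpartition); the array values are illustrative.

    import numpy as np

    a = np.array([7, 2, 9, 1, 5, 3])

    # The kth element (k=2) lands in its sorted position; everything smaller
    # comes before it, everything equal or greater after it, with the order
    # inside each side left unspecified.
    p = np.partition(a, 2)
    print(p[2])              # 3, the value a full sort would put at index 2

    # argpartition returns indices instead of values.
    idx = np.argpartition(a, 2)
    print(a[idx[2]])         # 3 again

    # median only needs the middle element(s), so it can rely on partition
    # rather than a full sort.
    print(np.median(a))      # 4.0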
memmap needs to call may_share_memory in __array_finalize__ to
determine whether it can drop the references on copies.
The Python version of may_share_memory caused significant slowdowns when
slicing these maps.
Closes gh-3364
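For reference, the Python-visible behaviour the C-level check mirrors; np.may_share_memory is a cheap, conservative bounds-overlap test, and the arrays here are illustrative.

    import numpy as np

    a = np.arange(10)
    b = a[2:5]                          # a view into the same buffer
    c = a.copy()                        # a separate allocation

    print(np.may_share_memory(a, b))    # True: the memory bounds overlap
    print(np.may_share_memory(a, c))    # False: disjoint buffers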
Add `print_function` to all `from __future__ import ...` statements
and use the Python 3 print function syntax everywhere.
Closes #3078.
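A minimal before/after sketch of what the conversion means in any one file; the variable is illustrative.

    from __future__ import print_function

    x = 1
    # Python 2 statement form that Python 3 rejects:
    #     print "x =", x
    # Portable function form used everywhere after this change:
    print("x =", x)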
DOC: Formatting fixes using regex
also other spacing or formatting mistakes
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The new import `absolute_import` is added to the `from __future__ import`
statement and the 2to3 `import` fixer is run to make the imports
compatible. There are several things that need to be dealt with to make
this work.
1) Files meant to be run as scripts run in a different environment than
files imported as part of a package, and so changes to those files need
to be skipped. The affected script files are:
* all setup.py files
* numpy/core/code_generators/generate_umath.py
* numpy/core/code_generators/generate_numpy_api.py
* numpy/core/code_generators/generate_ufunc_api.py
2) Some imported modules are not available because they are created
during the build process, and consequently 2to3 is unable to handle them
correctly. Files that import those modules need a bit of extra work.
The affected files are:
* core/__init__.py
* core/numeric.py
* core/_internal.py
* core/arrayprint.py
* core/fromnumeric.py
* numpy/__init__.py
* lib/npyio.py
* lib/function_base.py
* fft/fftpack.py
* random/__init__.py
Closes #3172
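A rough sketch of the rewrite the fixer performs inside a package module; the `sibling`/`package` names are purely illustrative.

    from __future__ import absolute_import

    # Python 2 implicit relative import, rejected by Python 3:
    #     import sibling
    # Explicit relative import, accepted by both Python 2 and 3:
    #     from . import sibling
    # Fully qualified absolute import, also accepted by both:
    #     import package.sibling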
In Python 3 `range` no longer returns a list and `xrange` has been
removed. This has two consequences for code:
1) Where a list is needed, `list(range(...))` must be used.
2) `xrange` must be replaced by `range`.
Both of these changes also work in Python 2, and this patch makes both.
Three of the fixed places do not strictly need the change, but I left
them in so that the result would be `xrange` clean.
Closes #3092
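The two cases in code form (a sketch; the values are illustrative).

    from __future__ import print_function

    # 1) Where an actual list is required, wrap range() explicitly,
    #    since on Python 3 range() returns a lazy sequence, not a list.
    indices = list(range(5))
    print(indices)           # [0, 1, 2, 3, 4]

    # 2) Plain iteration needs no list, so `for i in xrange(n)` simply
    #    becomes `for i in range(n)`.
    total = 0
    for i in range(5):
        total += i
    print(total)             # 10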
This should be harmless, as we are already division clean. However,
placement of this import takes some care. In the future a script can be
used to append new features without worry, at least until the import no
longer fits on a single line. Having that ability will make it easier
to deal with the absolute import and print function updates.
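The single combined line this series of commits works toward looks like the following; keeping everything on one line is what lets a script append further features mechanically.

    from __future__ import division, absolute_import, print_function

    # With true division active, dividing Python ints yields a float on
    # Python 2 as well:
    print(1 / 2)             # 0.5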
More detail: Views are only sensitive to under-the-hood storage when
the dtype storage size has changed.
|
Since most numpy operations are not sensitive to the underlying data
layout (C-ordered versus Fortran-ordered arrays, slices or transposes
of arrays, etc.), but structured-array views ARE sensitive to it, it is
worth saying so explicitly in the documentation.
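A small illustration of that sensitivity, assuming current NumPy semantics (the exact error message varies between versions): a view that changes the itemsize works on a contiguous array but fails on a strided slice of the same data.

    import numpy as np

    a = np.zeros(4, dtype=np.int16)
    print(a.view(np.int8).shape)    # (8,): reinterpreting contiguous memory is fine

    b = a[::2]                      # same buffer, but strided
    try:
        b.view(np.int8)             # the itemsize changes, so layout now matters
    except ValueError as exc:
        print("view failed:", exc)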
This switches us back to the behaviour seen in numpy 1.6 and earlier,
which, it turns out, scikit-learn (and probably others) relied on.
and an order parameter was added
Expand docstring for ``astype`` method.
There are two reasons to want to keep PyArray_ReduceWrapper out of the
public multiarray API:
- Its signature is likely to change if/when masked arrays are added
- It is essentially a wrapper for array->scalar transformations
(*not* just reductions as its name implies -- the whole reason it
is in multiarray.so in the first place is to support count_nonzero,
which is not actually a reduction!). It provides some nice
conveniences (like making it easy to apply such functions to
multiple axes simultaneously), but we already have a general
mechanism for writing array->scalar transformations -- generalized
ufuncs. We do not want to have two independent, redundant
implementations of this functionality, one in multiarray and one in
umath! So in the long run we should add these nice features to the
generalized ufunc machinery. And in the short run, we shouldn't add
it to the public API and commit ourselves to supporting it.
However, simply removing it from numpy_api.py is not easy, because
this code was used in both multiarray and umath. This commit:
- Moves ReduceWrapper and supporting code to umath/, and makes
appropriate changes (e.g. renaming it to PyUFunc_ReduceWrapper and
cleaning up the header files).
- Reverts numpy.count_nonzero to its previous implementation, so that
it loses the new axis= and keepdims= arguments. This is
unfortunate, but this change isn't so urgent that it's worth tying
our APIs in knots forever. (Perhaps in the future it can become a
generalized ufunc.)
The original masked-NA-NEP branch contained a large number of changes
in addition to the core NA support. For example:
- ufunc.__call__ support for where= argument
- nditer support for arbitrary masks (in support of where=)
- ufunc.reduce support for simultaneous reduction over multiple axes
- a new "array assignment API"
- ndarray.diagonal() returning a view in all cases
- bug-fixes in __array_priority__ handling
- datetime test changes
etc. There's no consensus yet on what should be done with the
maskna-related part of this branch, but the rest is generally useful
and uncontroversial, so the goal of this branch is to identify exactly
which code changes are involved in maskna support.
The basic strategy used to create this patch was:
- Remove the new masking-related fields from ndarray, so no arrays
are masked
- Go through and remove all the code that this makes
dead/inaccessible/irrelevant, in a largely mechanical fashion. So
for example, if I saw 'if (PyArray_HASMASK(a)) { ... }' then that
whole block was obviously just dead code if no arrays have masks,
and I removed it. Likewise for function arguments like skipna that
are useless if there aren't any NAs to skip.
This changed the signature of a number of functions that were newly
exposed in the numpy public API. I've removed all such functions from
the public API, since releasing them with the NA-less signature in 1.7
would create pointless compatibility hassles later if and when we add
back the NA-related functionality. Most such functions are removed by
this commit; the exception is PyArray_ReduceWrapper, which requires
more extensive surgery, and will be handled in followup commits.
I also removed the new ndarray.setasflat method. Reason: a comment
noted that the only reason this was added was to allow easier testing
of one branch of PyArray_CopyAsFlat. That branch is now the main
branch, so that isn't an issue. Nonetheless this function is arguably
useful, so perhaps it should have remained, but I judged that since
numpy's API is already hairier than we would like, it's not a good
idea to add extra hair "just in case". (Also AFAICT the test for this
method in test_maskna was actually incorrect, as noted here:
https://github.com/njsmith/numpyNEP/blob/master/numpyNEP.py
so I'm not confident that it ever worked in master, though I haven't
had a chance to follow up on this.)
I also removed numpy.count_reduce_items, since without skipna it
became trivial.
I believe that these are the only exceptions to the "remove dead code"
strategy.
ENH: Give digitize left or right open interval option
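A quick sketch of the two interval conventions, with illustrative values (np.digitize assumes monotonically increasing bins here).

    import numpy as np

    x = np.array([0.5, 1.0, 1.5, 2.0])
    bins = np.array([1.0, 2.0])

    # Default: bins[i-1] <= x < bins[i], i.e. intervals closed on the left.
    print(np.digitize(x, bins))              # [0 1 1 2]
    # right=True: bins[i-1] < x <= bins[i], i.e. intervals closed on the right.
    print(np.digitize(x, bins, right=True))  # [0 0 1 1]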
The new argument allows one to search an argsorted array by passing
in the result of argsorting the array as the 'sorter' argument. For
example:
    np.searchsorted(a, v, sorter=np.argsort(a))
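A small usage sketch with illustrative values:

    import numpy as np

    a = np.array([40, 10, 30, 20])     # not sorted
    order = np.argsort(a)              # permutation that would sort it

    # Insertion position of 25 in sorted order, without reordering `a`:
    print(np.searchsorted(a, 25, sorter=order))   # 2 (between 20 and 30)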
This has been extensively discussed on the mailing list. See #2072.
friends
Also fix some array() NA mask construction issues and make sure the
base object doesn't collapse past the owner of the NA mask being
viewed in addition to the data.
Also add a 'keepdims=' parameter to reductions, to support writing
the np.std function.
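A short example of why keepdims matters for writing np.std/np.mean-style functions (the array is illustrative): the reduced axis is kept with length one, so the result broadcasts back against the input.

    import numpy as np

    a = np.arange(12.0).reshape(3, 4)

    m = a.mean(axis=1, keepdims=True)   # shape (3, 1) instead of (3,)
    print(m.shape)
    # The kept axis lets the deviations broadcast without manual reshaping:
    print((a - m).shape)                # (3, 4)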
This function either cheaply returns the product of the sizes of
all the reduction axes, or counts the number of items that will
be used in a reduction operation when skipna is True. Its purpose
is to make it easy to implement functions like np.mean and np.std.
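The skipna-free case reduces to simple arithmetic; the snippet below illustrates that computation only, not the helper's exact API (which was later removed from the public interface), and the shapes are illustrative.

    import numpy as np

    a = np.ones((3, 4, 5))
    axes = (0, 2)

    # Product of the reduced dimensions: the item count a mean over those
    # axes divides by when no values are skipped.
    n_items = np.prod([a.shape[ax] for ax in axes])
    print(n_items)                                                     # 15
    print(np.allclose(a.sum(axis=axes) / n_items, a.mean(axis=axes)))  # True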
It should also use less memory for heterogeneous inputs, because it
no longer makes extra copies in that case.
always return a view
empty, and empty_like
discoverable