| Commit message (Collapse) | Author | Age | Files | Lines |
|
This switches us back to the behaviour seen in numpy 1.6 and earlier,
which, it turns out, scikit-learn (and probably others) relied on.
|
The original code used arange with offsets and scaling to generate
sample points. Using linspace simplifies the code and clarifies
the intent.
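As an illustration only (the sample count and endpoints below are invented, not taken from the patch), the kind of simplification described looks like this:

    import numpy as np

    n = 5                       # illustrative values
    start, stop = 0.0, 1.0

    # Old style: arange scaled by a step and shifted by an offset to hit
    # the desired endpoints.
    step = (stop - start) / (n - 1)
    old = start + np.arange(n) * step

    # Clearer: linspace states the intent (n evenly spaced samples) directly.
    new = np.linspace(start, stop, n)

    assert np.allclose(old, new)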
|
This should fix the problems with numpy.insert(), where the input values
were not checked for all scalar types and where values did not get inserted
properly, but got duplicated by default.
|
BF bug #808
|
the argument passed to be used as the item to be inserted, and a list was
passed as the positions. This was fixed by simply duplicating the item to
be inserted so that it was a list of equal length, and then control was
passed to the already existing code to handle this case.
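For illustration, a minimal sketch of the case being described, as numpy.insert behaves after the fix (the concrete values are invented):

    import numpy as np

    a = np.array([1, 2, 3, 4])

    # A single item together with a list of positions: the item is
    # duplicated to match the number of positions, and the existing
    # list-insertion path then handles it.
    np.insert(a, [1, 3], 0)   # -> array([1, 0, 2, 3, 0, 4])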
|
Meshgrid enhancements (>2-D, sparse grids, matrix indexing)
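A short sketch of the enhancements named in the summary, using the sparse and indexing keyword arguments of numpy.meshgrid (the grids themselves are invented):

    import numpy as np

    x = np.arange(3)
    y = np.arange(4)
    z = np.arange(5)

    # More than two dimensions, matrix ('ij') indexing, and sparse output
    # that broadcasts instead of materialising the full grids.
    X, Y, Z = np.meshgrid(x, y, z, indexing='ij', sparse=True)
    print(X.shape, Y.shape, Z.shape)   # (3, 1, 1) (1, 4, 1) (1, 1, 5)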
|
The original masked-NA-NEP branch contained a large number of changes
in addition to the core NA support. For example:
- ufunc.__call__ support for where= argument
- nditer support for arbitrary masks (in support of where=)
- ufunc.reduce support for simultaneous reduction over multiple axes
- a new "array assignment API"
- ndarray.diagonal() returning a view in all cases
- bug-fixes in __array_priority__ handling
- datetime test changes
etc.

There's no consensus yet on what should be done with the
maskna-related part of this branch, but the rest is generally useful
and uncontroversial, so the goal of this branch is to identify exactly
which code changes are involved in maskna support.

The basic strategy used to create this patch was:
- Remove the new masking-related fields from ndarray, so no arrays
  are masked
- Go through and remove all the code that this makes
  dead/inaccessible/irrelevant, in a largely mechanical fashion. So
  for example, if I saw 'if (PyArray_HASMASK(a)) { ... }' then that
  whole block was obviously just dead code if no arrays have masks,
  and I removed it. Likewise for function arguments like skipna that
  are useless if there aren't any NAs to skip.

This changed the signature of a number of functions that were newly
exposed in the numpy public API. I've removed all such functions from
the public API, since releasing them with the NA-less signature in 1.7
would create pointless compatibility hassles later if and when we add
back the NA-related functionality. Most such functions are removed by
this commit; the exception is PyArray_ReduceWrapper, which requires
more extensive surgery, and will be handled in followup commits.

I also removed the new ndarray.setasflat method. Reason: a comment
noted that the only reason this was added was to allow easier testing
of one branch of PyArray_CopyAsFlat. That branch is now the main
branch, so that isn't an issue. Nonetheless this function is arguably
useful, so perhaps it should have remained, but I judged that since
numpy's API is already hairier than we would like, it's not a good
idea to add extra hair "just in case". (Also, AFAICT the test for this
method in test_maskna was actually incorrect, as noted here:
https://github.com/njsmith/numpyNEP/blob/master/numpyNEP.py
so I'm not confident that it ever worked in master, though I haven't
had a chance to follow up on this.)

I also removed numpy.count_reduce_items, since without skipna it
became trivial.

I believe that these are the only exceptions to the "remove dead code"
strategy.
|
ENH: Add kwarg support for vectorize (tickets #2100, #1156, and #1487) (clean)
|
This is a substantial rewrite of vectorize to remove all introspection and
caching behaviour. This greatly simplifies the logic of the code, and allows
for much more generalized behaviour, simultaneously fixing tickets #1156,
#1487, and #2100. There will probably be a performance hit because caching is
no longer used (but it should be possible to reinstate it if needed).
As vectorize is a convenience function with poor performance in general,
perhaps this is okay. Rather than trying to inspect the function to determine
the number of arguments, defaults, and argument names, we just use the
arguments passed on the call to determine the behaviour on each call.
All tests pass and code is fully covered.

Fixes:

Ticket #2100: kwarg support for vectorize
- API: Optional excluded argument to exclude some args from vectorization.
- Added documentation, examples, and coverage tests
- Added additional coverage test and base case for functions with no args
- Factored original behaviour into _vectorize_call
- Some minor documentation and error message corrections

Ticket #1156: Support vectorizing over instance methods
- No longer an issue since everything is determined by the call.

Ticket #1487: result depends on execution order
- No longer caching, so the behaviour is as was expected.

ENH: Simple cache for vectorize
- Added a simple cache to prevent vectorize from calling pyfunc twice on the
  first argument when determining the output types, and added a regression test.
- Added documentation for excluded positional arguments.
- Documentation cleanups.
- Cleaned up variable names.

ENH: Performance improvements for backward compatibility of vectorize.
After some simple profiling, I found that the wrapping used to
support the caching of the previous commit wasted more time than
it saved, so I added a flag that lets the user toggle caching.
Moral: caching makes sense only if the function is expensive, so it
is off by default.
I also compared performance with the original vectorize and opted
for keeping a cache of _ufunc if otypes is specified and there are
no kwargs/excluded vars. This case is easy to implement, and allows
users to reproduce (almost) the old performance characteristics if
needed. (The new version is about 5% slower in this case.)
It would be much more complicated to add a similar cache in the case
where kwargs are used, and since a wrapper is used here, the
performance gain would be negligible (profiling showed that wrapping
was a more significant slowdown than the extra call to frompyfunc).
- API: Added cache kwarg which allows the user to toggle caching
  of the first result.
- DOC: Added Notes section with a discussion of performance and a
  warning that vectorize should not be used for performance.
- Added a private _ufunc member to implement the old style of cache for the
  special case with no kwargs, no excluded args, and with otypes specified.
- Modified test case.

Partially address ticket #1982
- I tried to use hasattr(outputs, '__len__') rather than
  isinstance(outputs, tuple) in order to allow functions to return
  lists. This, however, means that strings would get vectorized over
  each character, which breaks previous behaviour. Keeping the old
  behaviour for now.
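As a usage sketch of the excluded and cache keywords described above (the pattern follows the NumPy documentation; the polynomial function itself is only an example):

    import numpy as np

    def mypolyval(p, x):
        """Evaluate a polynomial with coefficient list p at x (Horner's rule)."""
        _p = list(p)
        res = _p.pop(0)
        while _p:
            res = res * x + _p.pop(0)
        return res

    # Exclude 'p' from vectorization so the whole coefficient list is passed
    # through unchanged on each call; cache the first result so pyfunc is not
    # called twice while the output type is being determined.
    vpolyval = np.vectorize(mypolyval, excluded=['p'], cache=True)
    print(vpolyval(p=[1, 2, 3], x=[0, 1]))   # [3 6]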
|
Includes a doctest for dtype.
|
Thanks to Yury Zaytsev for the suggestion.
|
Also add more informative error messages for wrongly specified bins, for both
histogram and histogram2d/dd.
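For reference, a small sketch of the bin specifications histogram2d accepts (the data is invented); inputs outside these forms are what the clearer error messages are aimed at:

    import numpy as np

    np.random.seed(0)
    x = np.random.normal(size=100)
    y = np.random.normal(size=100)

    # bins may be a single int, a single sequence of edges, or one spec per
    # axis (here: 4 equal-width bins along x, explicit edges along y).
    H, xedges, yedges = np.histogram2d(x, y, bins=[4, [-3.0, 0.0, 3.0]])
    print(H.shape)   # (4, 2)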
|
density kw.
This reverts part of the following commits:
3743430e
400a2a67
Behavior for normed keyword is again the same as it was in Numpy 1.5. The
desired behavior (probability density) is implemented by the new density
keyword, which reflects the functionality better than "normed".
For a discussion on this issue, see the Numpy mailing list thread started on
Aug 6th, 2010.
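A minimal sketch of the density keyword with non-uniform bins (the values are invented): the result is a probability density that integrates to one over the bin widths, which is what the normed keyword was originally intended to provide:

    import numpy as np

    samples = np.array([0.1, 0.4, 0.4, 0.7, 1.9])
    bin_edges = [0.0, 0.5, 2.0]   # non-uniform widths

    hist, edges = np.histogram(samples, bins=bin_edges, density=True)
    # Each value is count / (total * bin width), so the integral is 1.
    print(hist)                            # [1.2        0.26666667]
    print(np.sum(hist * np.diff(edges)))   # 1.0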
|
It was suggested in issue #1180 to add an ignore_None= parameter to
average, but I think this does not fit cleanly into NumPy, and that
educating users about Python list comprehensions is the better approach.
This is an attempt to do that.
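As a sketch of the approach being advocated (the data is invented): filter the None entries with a list comprehension before calling average, rather than adding an ignore_None= parameter:

    import numpy as np

    data = [1.0, None, 2.5, None, 4.0]

    # Drop the None entries up front; no new keyword on average() needed.
    cleaned = [x for x in data if x is not None]
    print(np.average(cleaned))   # 2.5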
|
non-uniform bin widths
|