The monotonicity check on digitize's bins array does not properly handle
inputs with repeated entries at the beginning of the array:
```
>>> np.__version__
'1.8.0'
>>> np.digitize([1], [0, 0 , 2])
array([2], dtype=int64)
>>> np.digitize([1], [2, 2, 0])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: The bins must be monotonically increasing or decreasing
```
Modified `check_array_monotonic` in `_compiled_base.c` to skip over repeating
entries before deciding to check for increasing or decreasing monotonicity
and added relevant tests to `test_function_base.py`.
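A minimal Python sketch of the corrected logic (the actual fix is in C in `_compiled_base.c`; the function name and exact strictness rules here are illustrative only):

```python
def check_monotonic(bins):
    """Return 1 for increasing, -1 for decreasing; raise otherwise.

    Sketch of the fix: skip leading repeated entries BEFORE deciding
    which direction to check, so [2, 2, 0] is accepted as decreasing.
    """
    i = 0
    n = len(bins)
    while i < n - 1 and bins[i] == bins[i + 1]:
        i += 1  # skip repeated leading entries
    if i == n - 1:
        return 1  # all entries equal: treat as (weakly) increasing
    direction = 1 if bins[i + 1] > bins[i] else -1
    for j in range(i, n - 1):
        if direction == 1 and bins[j + 1] < bins[j]:
            raise ValueError("bins must be monotonically increasing or decreasing")
        if direction == -1 and bins[j + 1] > bins[j]:
            raise ValueError("bins must be monotonically increasing or decreasing")
    return direction
```

With this ordering, `[2, 2, 0]` yields -1 instead of raising, matching the accepted `[0, 0, 2]` case.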
ENH: Reimplementing the indexing machinery
Some optimizations still missing.
TST: clean up tempfile in test_closing_zipfile_after_load
mktemp only returns a filename; a malicious user could replace the file
before it gets used.
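A safer pattern, as a sketch: `tempfile.mkstemp` (or `NamedTemporaryFile`) creates and opens the file atomically, so there is no window in which an attacker can substitute the path:

```python
import os
import tempfile

# mkstemp creates AND opens the file in one step, unlike mktemp,
# which only returns a name that another process could claim first.
fd, path = tempfile.mkstemp(suffix=".tmp")
try:
    with os.fdopen(fd, "wb") as f:
        f.write(b"payload")
finally:
    os.remove(path)
```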
This change corrects the following two bugs in numpy.irr:
* When the solution was negative, numpy.irr returned nan instead of the
  correct solution because of the mask applied to the roots. Corrected
  by removing the mask that required 0 < res < 1.
* When multiple roots were found, numpy.irr returned an array of
  all roots rather than a single float. This bug was corrected by
  selecting the single root closest to zero, min(abs(roots)).
With these corrections, numpy.irr returns the same result as the
corresponding spreadsheet function in LibreOffice Calc for all test
cases (additional test cases were added to cover cases with multiple
positive and negative roots).
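For illustration, a rough Python sketch of computing IRR via np.roots with the root-closest-to-zero selection described above (the function name and exact masking tolerances are assumptions, not NumPy's actual implementation):

```python
import numpy as np

def irr_sketch(values):
    # IRR is the rate r with sum(values[t] / (1 + r)**t) == 0.
    # Substituting v = 1/(1+r) turns this into a polynomial in v;
    # np.roots expects highest-degree coefficients first.
    res = np.roots(values[::-1])
    # Keep (numerically) real, positive roots; negative rates are
    # allowed, which is why no 0 < v < 1 mask is applied.
    mask = (np.abs(res.imag) < 1e-10) & (res.real > 0)
    if not mask.any():
        return np.nan
    rates = 1.0 / res[mask].real - 1.0
    # With multiple candidates, pick the rate closest to zero.
    return rates[np.argmin(np.abs(rates))]
```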
On Linux, large file support must be enabled and ftello used to avoid
overflows. The result must not be converted to a size_t, but to a long
long.
Observed on 32-bit Linux with Python 3.4b1
This is the default behavior in 3.x; in 2.x open() doesn't have a `newline`
keyword. So make the code Python-version-specific.
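One way to sketch such a version split (the helper name and mode strings are assumptions, not the actual code this commit changed):

```python
import os
import sys
import tempfile

def open_text(path):
    # Python 3's open() takes a `newline` keyword; Python 2's does not,
    # so universal-newline mode 'rU' is used there instead.
    if sys.version_info[0] >= 3:
        return open(path, "rt", newline="")
    else:
        return open(path, "rU")

# Small demonstration: write a file, then read it back through the helper.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("line1\nline2\n")
with open_text(path) as f:
    content = f.read()
os.remove(path)
```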
ENH: Allow meshgrid to take 1D and 0D inputs
FIX: Array split should not hack empty array shape away
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Fixes the result type of empty output arrays.
The FutureWarning warns about changed behaviour. A "kludge" in
array_split converts the result of something like:
>>> np.array_split(np.array([[1, 1]]), 2)
[array([[1, 1]]), array([], dtype=int64)]
instead of retaining the shape (0, 2) for the empty second array.
A FutureWarning is now raised when such a replacement occurs.
Closes gh-612
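Under the newer behaviour that the FutureWarning announced (a small check; on recent NumPy the empty piece keeps its column shape):

```python
import numpy as np

# Splitting a (1, 2) array into two pieces leaves one piece empty;
# the empty piece now retains the full (0, 2) shape instead of being
# replaced by a shapeless empty array.
parts = np.array_split(np.array([[1, 1]]), 2)
```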
Previously, arraypad used zeros(shape).astype(dtype) instead of
zeros(shape, dtype), which could allocate up to eight times more memory
than necessary.
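The difference can be seen directly (a small demonstration, not the arraypad code itself):

```python
import numpy as np

# zeros(shape).astype(dtype) first allocates a float64 buffer
# (8 bytes per element) and then casts to a second, smaller buffer;
# zeros(shape, dtype) allocates the small buffer directly.
wasteful = np.zeros((1000,)).astype(np.int8)   # temporary 8000-byte buffer
direct = np.zeros((1000,), dtype=np.int8)      # 1000 bytes, no temporary
```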
Speeds the test up vs. reading single bytes at a time; the short-read
case is now correctly handled when reading the magic string and array header.
This wrapper function is now used everywhere in format.py to ensure
that the case when fp.read returns fewer bytes than requested is
handled correctly.
Also added a test for the original bug: loading an array of size more
than 64K from a zip file.
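A pure-Python sketch of such a wrapper (the real helper lives in numpy.lib.format; the name used here is made up):

```python
def read_exact(fp, size):
    # A single fp.read() may legally return fewer bytes than requested
    # (e.g. zipfile streams cap each read), so loop until `size` bytes
    # are collected or EOF is hit.
    data = b""
    while len(data) < size:
        chunk = fp.read(size - len(data))
        if not chunk:
            raise ValueError(
                "EOF: read %d of %d bytes" % (len(data), size))
        data += chunk
    return data
```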
Issue 4093: Improved the handling of the case when the amount of data
read by fp.read is smaller than dtype.itemsize. Also changed the test
code to make sure this case is tested.
In Python 2.6.x the number of bytes read from a zip file is 2^16, which
is less than the 2^18 requested by lib.format.py. This change handles
the case where fp.read() returns fewer than the requested number of bytes.
The docstring for interp() contained the grammatically-incorrect text "defaults is". I corrected this to "default is".
In some cases a negative axis argument to np.insert would result
in wrong behaviour due to np.rollaxis; a modulo operation was added
to avoid this (an error is still raised via arr.shape[axis]).
Closes gh-3494
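The effect of the modulo normalization can be checked by comparing a negative axis with its positive equivalent (a small check, not the patch itself):

```python
import numpy as np

arr = np.array([[1, 2], [3, 4]])
# axis % arr.ndim maps -1 -> 1, so inserting along axis=-1 must
# behave exactly like inserting along axis=1.
out_neg = np.insert(arr, 1, 9, axis=-1)
out_pos = np.insert(arr, 1, 9, axis=1)
```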
Replace slow exec with a direct __import__.
Improves `import numpy` speed by about 10%.
Current formatting is not part of rst, and is not rendering correctly at http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.pad.html
This fix correctly calculates the number of BUFFER_SIZE chunks to read.
This fix takes into account dtype sizes that could be larger
than the BUFFER_SIZE, like a long string.
See #4027
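A sketch of the chunk-count arithmetic (the names are illustrative, not the actual implementation): when the itemsize exceeds the buffer size, at least one item must still be read per chunk.

```python
def num_chunks(n_items, itemsize, buffer_size):
    # How many items fit in one buffer-sized read; never less than
    # one, so a dtype larger than BUFFER_SIZE (e.g. a long string)
    # still makes progress on every read.
    items_per_chunk = max(1, buffer_size // itemsize)
    # Ceiling division: the last, possibly partial chunk counts too.
    return -(-n_items // items_per_chunk)
```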
one read
and tests/test_twodim_base.py. Also make a couple more PEP8 tweaks.
more PEP8 tweaks.
order of the powers (either decreasing or increasing).
This preserves the complex (and higher precision float or
object) type of the input array, so that the complex
covariance and correlation coefficients can be calculated.
It also fixes the behaviour of empty arrays. These will
now either result in a 0x0 result, or an NxN result filled
with NaNs.
A warning is now issued when ddof is too large; the factor is then
set to 0, so that the result is always NaN or
infinity/negative infinity and never a negative number.
Closes gh-597 and gh-2680
Closes gh-3882 (original pull request)
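With the fix, complex input keeps its complex type through the covariance calculation (a small check; the values are illustrative):

```python
import numpy as np

# A 1-D complex sample: the (co)variance is E[d * conj(d)] / (n - 1),
# which stays in the complex dtype rather than being cast to float.
z = np.array([1 + 1j, 2 + 2j, 3 + 3j])
c = np.cov(z)
```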
Fix `ResourceWarning: unclosed file` on Python 3
A couple micro optimizations
Move slow test_memmap_roundtrip to the slow tests.
Decrease the excessively large array size used in the np.sin(x)
computation in TestInterp.test_if_len_x_is_small; the code has no
special path for such large size differences.
MAINT: accept NULL in NpyIter_Deallocate and remove redundant NULL checks
Deallocation should just do nothing if provided a NULL pointer;
nditer deletion broke this convention.
Removed many redundant NULL checks for various deallocation functions
used in numpy; they all end up in the standard C free or PyMem_Free,
which are both NULL safe.
STY: pep8 for npyio