| Commit message | Author | Age | Files | Lines |
On Linux, large file support must be enabled and ftello used to avoid
overflows. The result must not be converted to a size_t, but to a long
long.
Observed on 32-bit Linux with Python 3.4b1
This is the default behavior in 3.x; in 2.x open() doesn't have a `newline`
keyword, so make the code Python-version-specific.
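A minimal sketch of such version-specific handling; the helper name and file mode here are illustrative, not the actual code from the commit:

```python
import sys

def open_for_csv(path):
    # Python 3: open in text mode with newline='' so newline
    # translation is disabled; Python 2's open() has no `newline`
    # keyword, so binary mode serves the same purpose there.
    if sys.version_info[0] >= 3:
        return open(path, 'w', newline='')
    else:
        return open(path, 'wb')
```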
ENH: Allow meshgrid to take 1D and 0D inputs
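A quick illustration of the behaviour this enables, assuming it refers to np.meshgrid accepting scalar (0-D) inputs alongside 1-D ones:

```python
import numpy as np

# A 0-D (scalar) input behaves like a length-1 axis in the grids,
# so mixing it with a 1-D input is allowed.
X, Y = np.meshgrid(5, [1, 2, 3])
# With the default 'xy' indexing both outputs have shape (3, 1).
assert X.shape == (3, 1) and Y.shape == (3, 1)
assert (X == 5).all()
```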
FIX: Array split should not hack empty array shape away
Fixes the result type of empty output arrays.
The FutureWarning warns about the changed behaviour. A "kludge" in
array_split converted the result of something like:
>>> np.array_split(np.array([[1, 1]]), 2)
[array([[1, 1]]), array([], dtype=int64)]
instead of retaining the shape (0, 2) of the empty second array.
A FutureWarning is now raised when such a replacement occurs.
Closes gh-612
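With the fixed behaviour (as in current NumPy), the empty part keeps its trailing shape rather than being flattened away:

```python
import numpy as np

parts = np.array_split(np.array([[1, 1]]), 2)
# The second part is empty along axis 0 but retains the row width:
assert parts[0].shape == (1, 2)
assert parts[1].shape == (0, 2)
```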
Previously, arraypad used zeros(shape).astype(dtype) instead of
zeros(shape, dtype), which could allocate up to eight times more memory
than necessary.
Speeds the test up versus reading single bytes at a time; short reads
are now correctly handled when reading the magic string and array header.
This wrapper function is now used everywhere in format.py to correctly
handle the case when fp.read returns fewer bytes than requested.
Also added a test for the original bug, loading an array of size more
than 64K from a zip file.
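The wrapper is essentially a loop that keeps calling fp.read until the requested count is reached or EOF is hit; a sketch of the idea, with an illustrative name and error message:

```python
import io

def read_exactly(fp, size):
    # fp.read(size) may legally return fewer than `size` bytes
    # (e.g. for file objects inside a zip archive), so loop until
    # everything has arrived or EOF is reached.
    data = b''
    while len(data) < size:
        chunk = fp.read(size - len(data))
        if not chunk:  # EOF before the full count was read
            raise ValueError("EOF: read %d of %d bytes" % (len(data), size))
        data += chunk
    return data
```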
Issue 4093: Improved the handling of the case when the amount of data
read by fp.read is smaller than dtype.itemsize. Also changed the test
code to make sure this case is tested.
In Python 2.6.x the number of bytes read from a zip file is 2^16, which
is less than the 2^18 requested by lib.format.py. This change handles
the case where fp.read() returns fewer than the requested number of bytes.
The docstring for interp() contained the grammatically incorrect text "defaults is". I corrected this to "default is".
In some cases a negative axis argument to np.insert would result
in wrong behaviour due to np.rollaxis; add a modulo operation to
avoid this (an out-of-bounds axis still raises an error via
arr.shape[axis]).
Closes gh-3494
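With the modulo fix, a negative axis behaves like its positive equivalent:

```python
import numpy as np

a = np.ones((2, 3))
# axis=-1 is normalized (e.g. -1 % a.ndim == 1), so this inserts
# a column of zeros at position 0 along the last axis:
b = np.insert(a, 0, 0, axis=-1)
assert b.shape == (2, 4)
assert np.array_equal(b, np.insert(a, 0, 0, axis=1))
```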
Replace slow exec with a direct __import__.
Improves `import numpy` speed by about 10%.
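The speed-up comes from replacing dynamic code execution with a plain import call; an illustrative comparison (the module name here is just an example):

```python
# Slow: compiles and executes a source string every time.
ns = {}
exec('import math', ns)

# Faster: a direct function call, no compile step.
math_mod = __import__('math')

# Both bind the very same module object from sys.modules.
assert ns['math'] is math_mod
```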
Current formatting is not part of rst, and is not rendering correctly at http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.pad.html
This fix correctly calculates the number of BUFFER_SIZE chunks to read.
It takes into account dtype sizes that could be larger than BUFFER_SIZE,
such as a long string.
See #4027
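The key point is that the items-per-chunk count must never round down to zero when a single item exceeds the buffer size; a sketch of the corrected arithmetic with illustrative names:

```python
BUFFER_SIZE = 2 ** 18  # bytes per read, as an example

def chunk_plan(count, itemsize):
    # Read at least one item per chunk, even when a single item
    # (e.g. a long string dtype) is bigger than BUFFER_SIZE.
    items_per_chunk = max(1, BUFFER_SIZE // itemsize)
    n_full, remainder = divmod(count, items_per_chunk)
    return items_per_chunk, n_full, remainder

# A dtype larger than the buffer still makes progress, one item at a time:
assert chunk_plan(5, BUFFER_SIZE * 2) == (1, 5, 0)
```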
one read
and tests/test_twodim_base.py. Also make a couple more PEP8 tweaks.
more PEP8 tweaks.
order of the powers (either decreasing or increasing).
This preserves the complex (and higher precision float or
object) type of the input array, so that complex covariance
and correlation coefficients can be calculated.
It also fixes the behaviour of empty arrays: these now result
in either a 0x0 result or an NxN result filled with NaNs.
A warning is now issued when ddof is too large and the factor
is set to 0, so that in this case the result is always NaN or
infinity/negative infinity and never a negative number.
Closes gh-597 and gh-2680
Closes gh-3882 (original pull request)
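The preserved complex type can be seen directly:

```python
import numpy as np

x = np.array([1 + 1j, 2 + 2j, 3 + 3j])
c = np.cov(x)
# The covariance of complex data stays complex instead of being
# silently cast to a real dtype.
assert np.iscomplexobj(c)
```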
Fix `ResourceWarning: unclosed file` on Python 3
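On Python 3, a file object garbage-collected while still open emits `ResourceWarning: unclosed file`; closing deterministically, e.g. with a context manager, avoids it. A generic illustration (not the code from the commit):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
# The `with` block guarantees the file is closed on exit, so no
# ResourceWarning is emitted when the object is collected.
with open(path, 'w') as f:
    f.write('data')
assert f.closed
os.unlink(path)
```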
A couple micro optimizations
Move slow test_memmap_roundtrip to the slow tests.
Decrease the excessively large array size used in the np.sin(x)
computation in TestInterp.test_if_len_x_is_small; the code has no
special path for such large size differences.
MAINT: accept NULL in NpyIter_Deallocate and remove redundant NULL checks
Deallocation should do nothing when given a NULL pointer; nditer
deletion broke this convention.
Removed many redundant NULL checks for various deallocation functions
used in numpy; they all end up in standard C free or PyMem_Free, which
are both NULL safe.
STY: pep8 for npyio
fixing one typo in npyio.py
Two slight style modifications in npyio, regarding line length.
Various pep8 fixes for npyio.py
Also reorganized the imports, and removed the unnecessary (I hope)
`_string_like = _is_string_like` statement.
Fixes gh-2561
BUG: ensure percentile has same output structure as in 1.8
percentile returned scalars and lists of arrays in 1.8; adapt the new
percentile to return scalars and arrays with the q dimension first,
for compatibility.
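The restored 1.8-style output structure, illustrated:

```python
import numpy as np

# A scalar q gives a scalar result:
assert np.isscalar(np.percentile(np.arange(10), 50))

# A sequence of q values puts the q dimension first:
a = np.arange(10).reshape(2, 5)
p = np.percentile(a, [25, 50, 75], axis=1)
assert p.shape == (3, 2)  # (len(q), n_rows), q first
```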