* ENH: __array_function__ support for np.lib, part 2
xref GH12028
np.lib.npyio through np.lib.ufunclike
* Fix failures in numpy/core/tests/test_overrides.py
* CLN: handle deprecation in dispatchers for np.lib.ufunclike
* CLN: fewer dispatchers in lib.twodim_base
* CLN: fewer dispatchers in lib.shape_base
* CLN: more dispatcher consolidation
* BUG: fix test failure
* Use the .all() method instead of the all() function in assert_equal
* DOC: indicate n is array_like in scimath.logn
* MAINT: updates per review
* MAINT: more conservative changes in assert_array_equal
* MAINT: add back in comment
* MAINT: casting tweaks in assert_array_equal
* MAINT: fixes and tests for assert_array_equal on subclasses
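The dispatcher pattern these commits add can be sketched roughly as follows; the function and dispatcher names here are invented for illustration, and the import location is NumPy's historical internal one rather than a public API:

    # Illustrative sketch of the __array_function__ dispatcher pattern.
    import numpy as np
    from numpy.core.overrides import array_function_dispatch  # internal location

    def _scale_dispatcher(a, factor=None, out=None):
        # A dispatcher only yields the arguments that may carry an
        # __array_function__ override; it performs no computation.
        return (a, out)

    @array_function_dispatch(_scale_dispatcher)
    def scale(a, factor=1.0, out=None):
        # The implementation is unchanged; overrides are resolved by the
        # decorator before this body runs.
        return np.multiply(a, factor, out=out)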
All imports of pickle from numpy modules are now done this way:
>>> from numpy.core.numeric import pickle
Also, loops over protocol numbers are added to some pickle tests that were not covered by #12090
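A rough sketch of the protocol-loop pattern mentioned above; inside NumPy's own modules the pickle import goes through numpy.core.numeric as described, but for a standalone illustration the stdlib import is used:

    # Round-trip an arbitrary array under every available pickle protocol.
    import pickle
    import numpy as np

    def test_roundtrip_all_protocols():
        a = np.arange(10).reshape(2, 5)
        for proto in range(pickle.HIGHEST_PROTOCOL + 1):
            b = pickle.loads(pickle.dumps(a, protocol=proto))
            assert np.array_equal(a, b)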
ENH: maximum lines of content to be read from numpy.loadtxt
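A minimal usage sketch, assuming this refers to the max_rows keyword that np.loadtxt provides for limiting how much of a file is read:

    # Read only the first three data rows from a text source.
    import io
    import numpy as np

    text = io.StringIO("1 2\n3 4\n5 6\n7 8\n")
    first_three = np.loadtxt(text, max_rows=3)
    print(first_three.shape)  # (3, 2)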
argument, or the file is an invalid zip
Fixes #9989
* fixed doc typo
* fixed lib typos
* fixed lapack_lite typos
* Revert "fixed lapack_lite typos"
This reverts commit e7dada860cb73af190234402508ab79965ecd079.
Fixes gh-10780
* TST: skip refcount-requiring tests if sys.getrefcount is missing
* ENH: io: add refcheck=False to a safe .resize() call
The array is allocated immediately above, and the resize always succeeds, so it is not necessary to check it. Fixes PyPy compatibility.
* TST: remove unused code
* TST: factor skipif(not HAS_REFCOUNT) into a separate decorator
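A minimal sketch of both ideas, assuming a pytest-style test suite; HAS_REFCOUNT is computed locally here rather than imported from NumPy's test helpers:

    import sys
    import numpy as np
    import pytest

    HAS_REFCOUNT = hasattr(sys, "getrefcount")  # False on PyPy

    requires_refcount = pytest.mark.skipif(
        not HAS_REFCOUNT, reason="python has no sys.getrefcount")

    @requires_refcount
    def test_refcount_behaviour():
        a = np.arange(5)
        assert sys.getrefcount(a) >= 2  # the local name plus the call argument

    def grow(n):
        buf = np.empty(n)
        # buf was just allocated, so nothing else can hold a reference to
        # it yet and the reference-count check can safely be skipped.
        buf.resize(2 * n, refcheck=False)
        return buf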
BUG: fromregex: asbytes called on regexp objects
When calling fromregex() with a binary stream and a regular expression
object, asbytes() was called on the regexp object, resulting in an
incorrect regular expression being compiled and used.
Fixes #10620
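After this fix, a pre-compiled bytes pattern behaves the same as a pattern string when reading from a binary stream; the data and field name below are made up:

    import io
    import re
    import numpy as np

    stream = io.BytesIO(b"gain 10\ngain 20\n")
    pattern = re.compile(br"gain (\d+)")  # compiled regexp object, not a string
    arr = np.fromregex(stream, pattern, dtype=[("gain", np.int64)])
    print(arr["gain"])  # [10 20]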
This is potentially a breaking change for python 3, because the Mapping protocol changed between python 2 and python 3: `items()` and `keys()` now return views, not lists.
In practice, any user who ran `2to3` will have had this fixed automatically.
Also fixes dir(np.lib.npyio.BagObj(dict(a=1))) on python 3.
Fixes gh-1723
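For illustration, the Mapping-style access this enables on the object np.load returns for an .npz archive (the archive name here is arbitrary):

    import numpy as np

    np.savez("example.npz", a=np.arange(3), b=np.eye(2))
    with np.load("example.npz") as npz:
        print(list(npz.keys()))          # ['a', 'b']
        for name, value in npz.items():  # a view on Python 3, not a list
            print(name, value.shape)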
BUG: Resize bytes_ columns in genfromtxt
Fixes gh-10394, due to regression in gh-10054
* DOC: See #10098 and minor punctuation cleanup
* DOC: Correcting per PR comments
* The np.ma functions are misleading, as they do not actually do anything special for ma.array
* The np.loads function doesn't even have numpy-specific documentation, and does not behave consistently with `np.load`
* The string overloads of np.ma.load and np.ma.dump do not work well on python 3, as they make assumptions about whether a binary or text pickle file is used (gh-5491)
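A sketch of the direction the deprecation points in: use pickle (or np.save / np.load) directly rather than the thin np.ma wrappers; the file name is arbitrary:

    import pickle
    import numpy as np
    import numpy.ma as ma

    masked = ma.masked_values([1.0, -999.0, 3.0], -999.0)

    with open("masked.pkl", "wb") as f:
        pickle.dump(masked, f)       # instead of np.ma.dump(masked, "masked.pkl")

    with open("masked.pkl", "rb") as f:
        restored = pickle.load(f)    # instead of np.ma.load("masked.pkl")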
Add docstrings for some of the support functions in _datasource and
npyio in order to aid future maintainers.
[ci skip]
This modifies loadtxt and genfromtxt in several ways intended to add
unicode support for text files, by adding an `encoding` keyword to
np.loadtxt, np.genfromtxt, np.savetxt, and np.fromregex. The relevant
files were originally opened as byte streams, whereas they are now
opened as text files with an encoding. When read, they are decoded to
unicode strings for Python 3 compatibility, and when written, they are
encoded as specified. For backward compatibility, the default encoding
in both cases is latin1.
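A rough usage sketch of the new keyword; the file name and codec here are arbitrary, and latin1 remains the backward-compatible default when no encoding is given:

    import numpy as np

    names = np.array(["café", "señor", "naïve"])
    np.savetxt("names.txt", names, fmt="%s", encoding="utf-8")
    back = np.genfromtxt("names.txt", dtype=str, encoding="utf-8")
    # back round-trips the unicode text as proper str values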
The documentation of the names parameter for npyio.genfromtxt uses the
phrase "valid line", which doesn't completely describe its behavior.
This updates the documentation of the names field to indicate that the
first line, with or without a comment delimiter, will be taken for the
names field.
fixes #9878
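The behaviour the updated documentation describes, as a small illustration (data made up):

    import io
    import numpy as np

    data = io.StringIO("# height weight\n1.7 65\n1.8 82\n")
    table = np.genfromtxt(data, names=True)
    # The commented first line still supplies the field names:
    print(table.dtype.names)  # ('height', 'weight')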
The rendered markdown in the online documentation was broken due to
the one-character indentation added in the multiline enumerations of
the docstring of savetxt.
Since Python 3.6 it is possible to write directly to a ZIP file,
without creating temporary files.
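The underlying Python 3.6+ capability this relies on, shown in isolation (names below are arbitrary):

    import io
    import zipfile
    import numpy as np

    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        # Open a member for writing and stream the array into it directly,
        # with no intermediate temporary file on disk.
        with zf.open("arr_0.npy", "w") as member:
            np.lib.format.write_array(member, np.arange(5))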
savetxt does not support saving arrays of dimension 0 or higher than 2.
This pull request improves the message of the error raised.
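For illustration, what the improved error looks like in use (exact wording may differ between versions):

    import numpy as np

    try:
        np.savetxt("out.txt", np.zeros((2, 2, 2)))  # 3-D input is not supported
    except ValueError as exc:
        print(exc)  # e.g. "Expected 1D or 2D array, got 3D array instead"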
For x in {int, bool, str, float, complex, object}, np.{x} is just the
builtin x. Using the np.{x} version is deceptive, as it suggests that
there is a difference. This change doesn't affect any external
behaviour. The `long` type is missing in python 3, so np.long is still useful.
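For context, on NumPy releases of that era the aliases really were the builtins (they were later deprecated in NumPy 1.20 and removed in 1.24), so the change is purely cosmetic; the distinct NumPy scalar types keep their underscored names:

    import numpy as np

    # On NumPy of that era:
    #     np.int is int      -> True
    #     np.float is float  -> True
    # so writing the builtin directly changes nothing. The actual NumPy
    # scalar types are separate and unaffected:
    x = np.int_(3)
    assert isinstance(x, np.integer)
    assert not isinstance(3, np.integer)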
* make exception raising 2/3 compatible
* remove unnecessary else statement after a while loop without a break clause
* ensure the file is always closed, even in the event of an exception
* ensure the list comprehension variable does not shadow the enclosing loop variable
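For reference, the shape of two of the cleanups listed above (the helper and path are made up):

    def read_magic(path):
        # "with" guarantees the file is closed on success and on error.
        with open(path, "rb") as fh:
            magic = fh.readline()
        if not magic:
            # raise syntax that is valid on both Python 2 and 3
            # (as opposed to the old "raise ValueError, msg" form).
            raise ValueError("empty file: %r" % (path,))
        return magic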
Bare except is very rarely the right thing
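A small illustration of the point; catching named exception classes keeps KeyboardInterrupt and SystemExit from being silently swallowed:

    def to_float(text):
        try:
            return float(text)
        except (ValueError, TypeError):  # not a bare "except:"
            return None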
Since the b prefix is available on every Python version we support, we can
remove asbytes in any case where we just pass a single string literal and
use the b prefix instead.
What we can't do is transform asbytes("tests %d" % num), because %-formatting
fails on bytes in python 3.x < 3.5.
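Both cases side by side; asbytes here is the helper from numpy.compat that these commits refer to:

    from numpy.compat import asbytes

    header = b"magic"                # literal: the b prefix replaces asbytes("magic")
    label = asbytes("tests %d" % 3)  # formatted: kept as-is, because
                                     # b"tests %d" % 3 fails on Python 3.x < 3.5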
This file had two functions called flatten_dtype, which did similar but
different things.
The original dtype converters for bytes and str
did not account for converting objects of str or
bytes dtype respectively. Replace the original
converters with those from numpy.compat, which
are much more robust.
Closes gh-8033.
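The property gained by switching to the numpy.compat helpers, shown with illustrative calls: each converter accepts input that is already of the target type as well as the other string type:

    from numpy.compat import asbytes, asunicode

    assert asbytes("abc") == b"abc"
    assert asbytes(b"abc") == b"abc"    # already bytes: handled, not an error
    assert asunicode(b"abc") == "abc"
    assert asunicode("abc") == "abc"    # already str: handled, not an error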
DOC: See also np.load and np.memmap in np.lib.format.open_memmap