Commit messages
Co-authored-by: Mukulika <60316606+Mukulikaa@users.noreply.github.com>
Before:
```
In [1]: np.dtype({"names": ["a"], "formats": [int], "offsets": [2]})
Out[1]: dtype({'names':['a'], 'formats':['<i8'], 'offsets':[2], 'itemsize':10})
```
After:
```
In [1]: np.dtype({"names": ["a"], "formats": [int], "offsets": [2]})
Out[1]: dtype({'names': ['a'], 'formats': ['<i8'], 'offsets': [2], 'itemsize': 10})
```
* Allow switching back to old dtype printing format.
* Add changelog.
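The first bullet above does not name the switch; a minimal sketch, assuming it is exposed through the `legacy` print option:
```
import numpy as np

# Assumption: legacy="1.21" requests the pre-change dtype formatting
# (no spaces after the colons in the dict form).
np.set_printoptions(legacy="1.21")
print(repr(np.dtype({"names": ["a"], "formats": [int], "offsets": [2]})))

np.set_printoptions(legacy=False)   # back to the current format
```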
ENH: Add annotations for `np.rec`
In fromregex, add a check that verifies that the given dtype is
a structured datatype. This avoids confusing error messages that
can occur when the given data type is not structured.
Also tweaked the code in the Examples section.
Closes gh-8891.
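A short usage sketch of the behaviour described above; the structured dtype gives `np.fromregex` one field per regex group, which is why a plain dtype is now rejected up front (the exact error text is not quoted here):
```
import io
import numpy as np

text = io.StringIO("1312 foo\n1534 bar\n444 qux\n")

# One field per capture group.
dt = np.dtype([("num", np.int64), ("key", "S3")])
arr = np.fromregex(text, r"(\d+)\s+(\w+)", dtype=dt)
print(arr["num"])   # [1312 1534  444]

# Passing a non-structured dtype such as np.int64 now fails early with a
# clear error instead of producing a confusing message later.
```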
DOC: fix docstring formatting of polynomial fit method return values.
Fixes #19897
The second return value of the following methods/functions was badly formatted: the whole list was rendered on a single line. Changed the entries to separate points, which render nicely.
- numpy.polyfit
- numpy.ma.polyfit
- numpy.polynomial.polynomial.polyfit
- numpy.polynomial.polynomial.Polynomial.fit
- numpy.polynomial.chebyshev.chebfit
- numpy.polynomial.chebyshev.Chebyshev.fit
- numpy.polynomial.hermite.hermfit
- numpy.polynomial.hermite.Hermite.fit
- numpy.polynomial.hermite_e.hermefit
- numpy.polynomial.hermite_e.HermiteE.fit
- numpy.polynomial.laguerre.lagfit
- numpy.polynomial.laguerre.Laguerre.fit
- numpy.polynomial.legendre.legfit
- numpy.polynomial.legendre.Legendre.fit
Also fixed erroneous links to `numpy.full` that actually referred to the `full` argument; changed those from single backticks to double-backtick code strings.
Also fixed formatting issues in the third return value of numpy.polyfit
(and hence also numpy.ma.polyfit).
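For context, a brief illustration of the return values whose list was reformatted, using `numpy.polyfit` with `full=True` (the class-based `fit` methods report the same diagnostics, packaged as a list):
```
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.9, 5.1, 7.0])

# With full=True the result also carries the diagnostics the docstring now
# lists as separate points: residuals, rank, singular values, rcond.
coeffs, residuals, rank, singular_values, rcond = np.polyfit(x, y, 1, full=True)
print(coeffs)
print(residuals, rank, singular_values, rcond)
```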
ENH: Add annotations for `np.lib.arraysetops`
* Add explanation of a sparse mesh grid.
Co-authored-by: Ross Barnowski <rossbar@berkeley.edu>
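A minimal sketch of what the sparse mesh grid explanation covers, assuming the usual `np.meshgrid(..., sparse=True)` form:
```
import numpy as np

x = np.linspace(0, 1, 5)
y = np.linspace(0, 1, 3)

# sparse=True keeps each output non-trivial along a single axis instead of
# materializing full (3, 5) grids; broadcasting fills in the rest.
xv, yv = np.meshgrid(x, y, sparse=True)
print(xv.shape, yv.shape)   # (1, 5) (3, 1)

z = xv**2 + yv**2           # broadcasts to shape (3, 5)
print(z.shape)
```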
Currently, np.median is almost completely safe for subclasses, except
if the result is NaN. In that case, it assumes the result is a scalar
and substitutes a NaN with the right dtype. This PR fixes that, since
subclasses like astropy's Quantity generally use array scalars to
preserve subclass information such as the unit.
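A hedged illustration of the fix; `Quantity` below is only a minimal stand-in for a real subclass such as astropy's:
```
import numpy as np

class Quantity(np.ndarray):
    # Minimal stand-in: a real subclass keeps extra state (e.g. a unit)
    # that only survives if results stay array scalars of the subclass.
    pass

data = np.array([1.0, np.nan, 3.0]).view(Quantity)
result = np.median(data)

# With the fix, a NaN result comes back as a 0-d Quantity; previously it
# was substituted with a plain NaN scalar of the right dtype.
print(type(result))
```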
Co-Authored-By: Charles Harris <charlesr.harris@gmail.com>
Parametrize w.r.t. the axis, array dimensionality and dtype
the dtype for all-nan arrays
This makes the test more precise; I otherwise ran into having to broaden
the xfails, because right now reduce-likes incorrectly fail to give
floating-point warnings (and I was fixing that).
Use the same approach as in numpy/numpy#9013
https://github.com/numpy/numpy/blob/410a89ef04a2d3c50dd2dba2ad403c872c3745ac/numpy/core/_methods.py#L265-L270
types
arrays
MAINT: revise OSError aliases (IOError, EnvironmentError)
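For reference, the aliasing that makes this a pure spelling cleanup on Python 3:
```
# IOError and EnvironmentError are plain aliases of OSError on Python 3,
# so raising/catching OSError directly is the idiomatic spelling.
assert IOError is OSError
assert EnvironmentError is OSError

try:
    open("/nonexistent/path")
except OSError as exc:          # previously often written as `except IOError:`
    print(type(exc).__name__)   # FileNotFoundError, a subclass of OSError
```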
MAINT: refactor "for ... in range(len(" statements
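The typical shape of such a rewrite (illustrative only; the actual call sites vary):
```
items = ["a", "b", "c"]

# Before: index-based loop.
for i in range(len(items)):
    print(i, items[i])

# After: iterate directly, or use enumerate when the index is still needed.
for i, item in enumerate(items):
    print(i, item)
```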
MAINT: Use a contextmanager to ensure loadtxt closes the input file.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This seems easier to track than a giant try... finally.
Also move the `fencoding` initialization to within the contextmanager,
in the rather unlikely case an exception occurs during the call to
`getpreferredencoding`.
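A minimal sketch of the pattern, not the actual loadtxt code; the helper name and signature below are made up for illustration:
```
import contextlib
import io
import locale

@contextlib.contextmanager
def _opened_source(fname, encoding):
    # Hypothetical helper: resolve the encoding and open `fname` if it is a
    # path, guaranteeing the handle is closed even if parsing raises.
    fencoding = encoding or locale.getpreferredencoding()
    if hasattr(fname, "read"):
        yield fname, fencoding
    else:
        with open(fname, "rt", encoding=fencoding) as fh:
            yield fh, fencoding

with _opened_source(io.StringIO("1 2\n3 4\n"), None) as (fh, enc):
    rows = [line.split() for line in fh]
print(rows)   # [['1', '2'], ['3', '4']]
```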
As of Py3, np.compat.unicode == str, but that's not entirely obvious
(it could correspond to some numpy dtype too), so just use plain str.
Likewise for np.compat.int.
Tests are intentionally left unchanged, as they can be considered to
implicitly test the np.compat.py3k interface as well.
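The shape of the rewrite, sketched on a dummy value rather than the actual call sites:
```
val = "some text"

# Before (opaque: np.compat.unicode reads as if it might be a NumPy type):
#     is_text = isinstance(val, np.compat.unicode)
# After: on Python 3 the alias is simply the builtin, so use it directly.
is_text = isinstance(val, str)
print(is_text)   # True
```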
7-10% speedup in usecols benchmarks; it appears that even in the
single-usecol case, avoiding the iteration over `usecols` more than
compensates for the cost of the extra function call to usecols_getter.
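A rough sketch of the idea, with a made-up helper mirroring the `usecols_getter` mentioned above (not the actual loadtxt internals):
```
def make_usecols_getter(usecols):
    # Decide how to select columns once, so the per-row hot loop never
    # iterates over `usecols` itself.
    if len(usecols) == 1:
        col = usecols[0]
        return lambda fields: [fields[col]]
    return lambda fields: [fields[c] for c in usecols]

getter = make_usecols_getter([2])
for row in (["1", "2", "3"], ["4", "5", "6"]):
    print(getter(row))   # ['3'] then ['6']
```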
MAINT: In loadtxt, inline read_data.
No speed difference; the point is to avoid an unnecessary inner
generator (which was previously defined quite far away from its point of
use).
BUG: Ignore whitespaces while parsing gufunc signatures
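A hedged example of the effect, assuming the parser in question is the one behind `np.vectorize`'s `signature` argument:
```
import numpy as np

# With whitespace ignored, a spaced-out signature parses the same as the
# compact "(),()->()" form.
add = np.vectorize(lambda a, b: a + b, signature="(), () -> ()")
print(add([1, 2, 3], 10))   # [11 12 13]
```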