 doc/neps/nep-0037-array-module.rst                |  82
 numpy/core/src/multiarray/_multiarray_tests.c.src | 129
 numpy/core/src/multiarray/item_selection.c        |  13
 numpy/core/tests/test_conversion_utils.py         | 124
 numpy/core/tests/test_regression.py               |   5
 numpy/polynomial/tests/test_classes.py            |  50
 numpy/polynomial/tests/test_printing.py           |  49
 7 files changed, 367 insertions(+), 85 deletions(-)
diff --git a/doc/neps/nep-0037-array-module.rst b/doc/neps/nep-0037-array-module.rst
index b3471e227..d789ef0de 100644
--- a/doc/neps/nep-0037-array-module.rst
+++ b/doc/neps/nep-0037-array-module.rst
@@ -13,8 +13,8 @@ Abstract
--------
NEP-18's ``__array_function__`` has been a mixed success. Some projects (e.g.,
-dask, CuPy, xarray, sparse, Pint) have enthusiastically adopted it. Others
-(e.g., PyTorch, JAX, SciPy) have been more reluctant. Here we propose a new
+dask, CuPy, xarray, sparse, Pint, MXNet) have enthusiastically adopted it.
+Others (e.g., JAX) have been more reluctant. Here we propose a new
protocol, ``__array_module__``, that we expect could eventually subsume most
use-cases for ``__array_function__``. The protocol requires explicit adoption
by both users and library authors, which ensures backwards compatibility, and
@@ -26,32 +26,33 @@ Why ``__array_function__`` hasn't been enough
There are two broad ways in which NEP-18 has fallen short of its goals:
-1. **Maintainability concerns**. `__array_function__` has significant
+1. **Backwards compatibility concerns**. `__array_function__` has significant
implications for libraries that use it:
- - Projects like `PyTorch
- <https://github.com/pytorch/pytorch/issues/22402>`_, `JAX
- <https://github.com/google/jax/issues/1565>`_ and even `scipy.sparse
- <https://github.com/scipy/scipy/issues/10362>`_ have been reluctant to
- implement `__array_function__` in part because they are concerned about
- **breaking existing code**: users expect NumPy functions like
+ - `JAX <https://github.com/google/jax/issues/1565>`_ has been reluctant
+ to implement ``__array_function__`` in part because it is concerned about
+ breaking existing code: users expect NumPy functions like
``np.concatenate`` to return NumPy arrays. This is a fundamental
limitation of the ``__array_function__`` design, which we chose to allow
overriding the existing ``numpy`` namespace.
+   Libraries like Dask and CuPy have considered and accepted the backwards
+   incompatibility impact of ``__array_function__``; it would still have been
+   better for them if that impact didn't exist.
+
+ Note that projects like `PyTorch
+ <https://github.com/pytorch/pytorch/issues/22402>`_ and `scipy.sparse
+ <https://github.com/scipy/scipy/issues/10362>`_ have also not
+ adopted ``__array_function__`` yet, because they don't have a
+ NumPy-compatible API or semantics. In the case of PyTorch, that is likely
+ to be added in the future. ``scipy.sparse`` is in the same situation as
+ ``numpy.matrix``: its semantics are not compatible with ``numpy.ndarray``
+   and therefore adding ``__array_function__`` (except perhaps to return
+   ``NotImplemented``) is not a healthy idea.
- ``__array_function__`` currently requires an "all or nothing" approach to
implementing NumPy's API. There is no good pathway for **incremental
adoption**, which is particularly problematic for established projects
for which adopting ``__array_function__`` would result in breaking
changes.
- - It is no longer possible to use **aliases to NumPy functions** within
- modules that support overrides. For example, both CuPy and JAX set
- ``result_type = np.result_type``.
- - Implementing **fall-back mechanisms** for unimplemented NumPy functions
- by using NumPy's implementation is hard to get right (but see the
- `version from dask <https://github.com/dask/dask/pull/5043>`_), because
- ``__array_function__`` does not present a consistent interface.
- Converting all arguments of array type requires recursing into generic
- arguments of the form ``*args, **kwargs``.
2. **Limitations on what can be overridden.** ``__array_function__`` has some
important gaps, most notably array creation and coercion functions:
@@ -71,6 +72,19 @@ There are two broad ways in which NEP-18 has fallen short of its goals:
a separate ``np.duckarray`` function, but this still does not resolve how
to cast one duck array into a type matching another duck array.
+Other maintainability concerns that were raised include:
+
+- It is no longer possible to use **aliases to NumPy functions** within
+ modules that support overrides. For example, both CuPy and JAX set
+ ``result_type = np.result_type`` and now have to wrap use of
+ ``np.result_type`` in their own ``result_type`` function instead.
+- Implementing **fall-back mechanisms** for unimplemented NumPy functions
+ by using NumPy's implementation is hard to get right (but see the
+ `version from dask <https://github.com/dask/dask/pull/5043>`_), because
+ ``__array_function__`` does not present a consistent interface.
+ Converting all arguments of array type requires recursing into generic
+ arguments of the form ``*args, **kwargs``.
+
``get_array_module`` and the ``__array_module__`` protocol
----------------------------------------------------------
@@ -493,23 +507,27 @@ Both ``__array_ufunc__`` and ``__array_function__`` have implicit control over
dispatching: the dispatched functions are determined via the appropriate
protocols in every function call. This generalizes well to handling many
different types of objects, as evidenced by its use for implementing arithmetic
-operators in Python, but it has two downsides:
-
-1. *Speed*: it imposes additional overhead in every function call, because each
- function call needs to inspect each of its arguments for overrides. This is
- why arithmetic on builtin Python numbers is slow.
-2. *Readability*: it is not longer immediately evident to readers of code what
- happens when a function is called, because the function's implementation
- could be overridden by any of its arguments.
-
-In contrast, importing a new library (e.g., ``import dask.array as da``) with
-an API matching NumPy is entirely explicit. There is no overhead from dispatch
-or ambiguity about which implementation is being used.
+operators in Python, but it has an important downside for **readability**:
+it is no longer immediately evident to readers of code what happens when a
+function is called, because the function's implementation could be overridden
+by any of its arguments.
+
+The **speed** implications are:
+
+- When using a *duck-array type*, ``get_array_module`` means type checking only
+ needs to happen once inside each function that supports duck typing, whereas
+ with ``__array_function__`` it happens every time a NumPy function is called.
+  Obviously this depends on the function, but if a typical duck-array
+  supporting function calls into other NumPy functions 3-5 times, this is a
+  factor of 3-5x more overhead.
+- When using *NumPy arrays*, ``get_array_module`` is one extra call per
+ function (``__array_function__`` overhead remains the same), which means a
+ small amount of extra overhead.
Explicit and implicit choice of implementations are not mutually exclusive
options. Indeed, most implementations of NumPy API overrides via
-``__array_function__`` that we are familiar with (namely, dask, CuPy and
-sparse, but not Pint) also include an explicit way to use their version of
+``__array_function__`` that we are familiar with (namely, Dask, CuPy and
+Sparse, but not Pint) also include an explicit way to use their version of
NumPy's API by importing a module directly (``dask.array``, ``cupy`` or
``sparse``, respectively).
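
To make the dispatch pattern discussed in the hunk above concrete, here is a
minimal sketch of the usage this NEP proposes (``numpy.get_array_module`` is
the API proposed by the NEP and does not exist in released NumPy)::

    import numpy

    def duckarray_stack(arrays):
        # One dispatch decision per call: pick the array module that
        # matches the argument types (e.g. numpy, cupy, dask.array).
        np = numpy.get_array_module(*arrays)
        arrays = [np.asarray(arr) for arr in arrays]
        # Every call below goes straight to the chosen module, with no
        # further per-call override checks.
        expanded = [arr[np.newaxis, ...] for arr in arrays]
        return np.concatenate(expanded, axis=0)

This is what the speed point above refers to: the type inspection happens once
in ``get_array_module`` rather than inside each of the NumPy calls that follow.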
diff --git a/numpy/core/src/multiarray/_multiarray_tests.c.src b/numpy/core/src/multiarray/_multiarray_tests.c.src
index ec2928c8f..318559885 100644
--- a/numpy/core/src/multiarray/_multiarray_tests.c.src
+++ b/numpy/core/src/multiarray/_multiarray_tests.c.src
@@ -1938,6 +1938,114 @@ getset_numericops(PyObject* NPY_UNUSED(self), PyObject* NPY_UNUSED(args))
return ret;
}
+static PyObject *
+run_byteorder_converter(PyObject* NPY_UNUSED(self), PyObject *args)
+{
+ char byteorder;
+ if (!PyArg_ParseTuple(args, "O&", PyArray_ByteorderConverter, &byteorder)) {
+ return NULL;
+ }
+ switch (byteorder) {
+ case NPY_BIG: return PyUnicode_FromString("NPY_BIG");
+ case NPY_LITTLE: return PyUnicode_FromString("NPY_LITTLE");
+ case NPY_NATIVE: return PyUnicode_FromString("NPY_NATIVE");
+ case NPY_SWAP: return PyUnicode_FromString("NPY_SWAP");
+ case NPY_IGNORE: return PyUnicode_FromString("NPY_IGNORE");
+ }
+ return PyInt_FromLong(byteorder);
+}
+
+static PyObject *
+run_sortkind_converter(PyObject* NPY_UNUSED(self), PyObject *args)
+{
+ NPY_SORTKIND kind;
+ if (!PyArg_ParseTuple(args, "O&", PyArray_SortkindConverter, &kind)) {
+ return NULL;
+ }
+ switch (kind) {
+ case NPY_QUICKSORT: return PyUnicode_FromString("NPY_QUICKSORT");
+ case NPY_HEAPSORT: return PyUnicode_FromString("NPY_HEAPSORT");
+ case NPY_STABLESORT: return PyUnicode_FromString("NPY_STABLESORT");
+ }
+ return PyInt_FromLong(kind);
+}
+
+static PyObject *
+run_selectkind_converter(PyObject* NPY_UNUSED(self), PyObject *args)
+{
+ NPY_SELECTKIND kind;
+ if (!PyArg_ParseTuple(args, "O&", PyArray_SelectkindConverter, &kind)) {
+ return NULL;
+ }
+ switch (kind) {
+ case NPY_INTROSELECT: return PyUnicode_FromString("NPY_INTROSELECT");
+ }
+ return PyInt_FromLong(kind);
+}
+
+static PyObject *
+run_searchside_converter(PyObject* NPY_UNUSED(self), PyObject *args)
+{
+ NPY_SEARCHSIDE side;
+ if (!PyArg_ParseTuple(args, "O&", PyArray_SearchsideConverter, &side)) {
+ return NULL;
+ }
+ switch (side) {
+ case NPY_SEARCHLEFT: return PyUnicode_FromString("NPY_SEARCHLEFT");
+ case NPY_SEARCHRIGHT: return PyUnicode_FromString("NPY_SEARCHRIGHT");
+ }
+ return PyInt_FromLong(side);
+}
+
+static PyObject *
+run_order_converter(PyObject* NPY_UNUSED(self), PyObject *args)
+{
+ NPY_ORDER order;
+ if (!PyArg_ParseTuple(args, "O&", PyArray_OrderConverter, &order)) {
+ return NULL;
+ }
+ switch (order) {
+ case NPY_ANYORDER: return PyUnicode_FromString("NPY_ANYORDER");
+ case NPY_CORDER: return PyUnicode_FromString("NPY_CORDER");
+ case NPY_FORTRANORDER: return PyUnicode_FromString("NPY_FORTRANORDER");
+ case NPY_KEEPORDER: return PyUnicode_FromString("NPY_KEEPORDER");
+ }
+ return PyInt_FromLong(order);
+}
+
+static PyObject *
+run_clipmode_converter(PyObject* NPY_UNUSED(self), PyObject *args)
+{
+ NPY_CLIPMODE mode;
+ if (!PyArg_ParseTuple(args, "O&", PyArray_ClipmodeConverter, &mode)) {
+ return NULL;
+ }
+ switch (mode) {
+ case NPY_CLIP: return PyUnicode_FromString("NPY_CLIP");
+ case NPY_WRAP: return PyUnicode_FromString("NPY_WRAP");
+ case NPY_RAISE: return PyUnicode_FromString("NPY_RAISE");
+ }
+ return PyInt_FromLong(mode);
+}
+
+static PyObject *
+run_casting_converter(PyObject* NPY_UNUSED(self), PyObject *args)
+{
+ NPY_CASTING casting;
+ if (!PyArg_ParseTuple(args, "O&", PyArray_CastingConverter, &casting)) {
+ return NULL;
+ }
+ switch (casting) {
+ case NPY_NO_CASTING: return PyUnicode_FromString("NPY_NO_CASTING");
+ case NPY_EQUIV_CASTING: return PyUnicode_FromString("NPY_EQUIV_CASTING");
+ case NPY_SAFE_CASTING: return PyUnicode_FromString("NPY_SAFE_CASTING");
+ case NPY_SAME_KIND_CASTING: return PyUnicode_FromString("NPY_SAME_KIND_CASTING");
+ case NPY_UNSAFE_CASTING: return PyUnicode_FromString("NPY_UNSAFE_CASTING");
+ }
+ return PyInt_FromLong(casting);
+}
+
+
static PyMethodDef Multiarray_TestsMethods[] = {
{"IsPythonScalar",
IsPythonScalar,
@@ -2089,6 +2197,27 @@ static PyMethodDef Multiarray_TestsMethods[] = {
{"get_struct_alignments",
get_struct_alignments,
METH_VARARGS, NULL},
+ {"run_byteorder_converter",
+ run_byteorder_converter,
+ METH_VARARGS, NULL},
+ {"run_sortkind_converter",
+ run_sortkind_converter,
+ METH_VARARGS, NULL},
+ {"run_selectkind_converter",
+ run_selectkind_converter,
+ METH_VARARGS, NULL},
+ {"run_searchside_converter",
+ run_searchside_converter,
+ METH_VARARGS, NULL},
+ {"run_order_converter",
+ run_order_converter,
+ METH_VARARGS, NULL},
+ {"run_clipmode_converter",
+ run_clipmode_converter,
+ METH_VARARGS, NULL},
+ {"run_casting_converter",
+ run_casting_converter,
+ METH_VARARGS, NULL},
{NULL, NULL, 0, NULL} /* Sentinel */
};
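
Each wrapper above passes its argument through the corresponding
``PyArray_*Converter`` and returns the name of the enum value produced, which
is what lets the converters be exercised from pure Python. A sketch of the
round trip, assuming a NumPy build that includes this patch::

    import numpy.core._multiarray_tests as mt

    # Each helper reports which C enum value the converter produced.
    assert mt.run_order_converter('C') == 'NPY_CORDER'
    assert mt.run_sortkind_converter('stable') == 'NPY_STABLESORT'
    assert mt.run_casting_converter('same_kind') == 'NPY_SAME_KIND_CASTING'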
diff --git a/numpy/core/src/multiarray/item_selection.c b/numpy/core/src/multiarray/item_selection.c
index f0ef8ba3b..45c019f49 100644
--- a/numpy/core/src/multiarray/item_selection.c
+++ b/numpy/core/src/multiarray/item_selection.c
@@ -1564,6 +1564,16 @@ PyArray_LexSort(PyObject *sort_keys, int axis)
/* Now we can check the axis */
nd = PyArray_NDIM(mps[0]);
+ /*
+ * Special case letting axis={-1,0} slip through for scalars,
+ * for backwards compatibility reasons.
+ */
+ if (nd == 0 && (axis == 0 || axis == -1)) {
+ /* TODO: can we deprecate this? */
+ }
+ else if (check_and_adjust_axis(&axis, nd) < 0) {
+ goto fail;
+ }
if ((nd == 0) || (PyArray_SIZE(mps[0]) <= 1)) {
/* empty/single element case */
ret = (PyArrayObject *)PyArray_NewFromDescr(
@@ -1579,9 +1589,6 @@ PyArray_LexSort(PyObject *sort_keys, int axis)
}
goto finish;
}
- if (check_and_adjust_axis(&axis, nd) < 0) {
- goto fail;
- }
for (i = 0; i < n; i++) {
its[i] = (PyArrayIterObject *)PyArray_IterAllButAxis(
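
The effect of hoisting the axis check ahead of the early-return path: invalid
axes now raise ``AxisError`` even for empty and zero-dimensional inputs, while
the historical allowance of ``axis={-1, 0}`` for 0-d arrays is kept. A sketch
of the resulting Python-level behaviour::

    import numpy as np

    # Still accepted: 0-d input with axis 0 or -1 (backwards compatibility).
    np.lexsort((np.array(0),), axis=0)
    np.lexsort((np.array(0),), axis=-1)

    # Now rejected up front, even though the size<=1 early return used
    # to skip the axis check entirely.
    try:
        np.lexsort((np.array([]),), axis=1)
    except np.AxisError:
        pass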
diff --git a/numpy/core/tests/test_conversion_utils.py b/numpy/core/tests/test_conversion_utils.py
new file mode 100644
index 000000000..9e80fdcbf
--- /dev/null
+++ b/numpy/core/tests/test_conversion_utils.py
@@ -0,0 +1,124 @@
+"""
+Tests for numpy/core/src/multiarray/conversion_utils.c
+"""
+import pytest
+
+import numpy as np
+import numpy.core._multiarray_tests as mt
+
+
+class StringConverterTestCase:
+ allow_bytes = True
+ case_insensitive = True
+ exact_match = False
+
+ def _check(self, val, expected):
+ assert self.conv(val) == expected
+
+ if self.allow_bytes:
+ assert self.conv(val.encode('ascii')) == expected
+ else:
+ with pytest.raises(TypeError):
+ self.conv(val.encode('ascii'))
+
+ if len(val) != 1:
+ if self.exact_match:
+ with pytest.raises(ValueError):
+ self.conv(val[:1])
+ else:
+ assert self.conv(val[:1]) == expected
+
+ if self.case_insensitive:
+ if val != val.lower():
+ assert self.conv(val.lower()) == expected
+ if val != val.upper():
+ assert self.conv(val.upper()) == expected
+ else:
+ if val != val.lower():
+ with pytest.raises(ValueError):
+ self.conv(val.lower())
+ if val != val.upper():
+ with pytest.raises(ValueError):
+ self.conv(val.upper())
+
+
+class TestByteorderConverter(StringConverterTestCase):
+ """ Tests of PyArray_ByteorderConverter """
+ conv = mt.run_byteorder_converter
+ def test_valid(self):
+ for s in ['big', '>']:
+ self._check(s, 'NPY_BIG')
+ for s in ['little', '<']:
+ self._check(s, 'NPY_LITTLE')
+ for s in ['native', '=']:
+ self._check(s, 'NPY_NATIVE')
+ for s in ['ignore', '|']:
+ self._check(s, 'NPY_IGNORE')
+ for s in ['swap']:
+ self._check(s, 'NPY_SWAP')
+
+
+class TestSortkindConverter(StringConverterTestCase):
+ """ Tests of PyArray_SortkindConverter """
+ conv = mt.run_sortkind_converter
+ def test_valid(self):
+ self._check('quick', 'NPY_QUICKSORT')
+ self._check('heap', 'NPY_HEAPSORT')
+ self._check('merge', 'NPY_STABLESORT') # alias
+ self._check('stable', 'NPY_STABLESORT')
+
+
+class TestSelectkindConverter(StringConverterTestCase):
+ """ Tests of PyArray_SelectkindConverter """
+ conv = mt.run_selectkind_converter
+ case_insensitive = False
+ exact_match = True
+
+ def test_valid(self):
+ self._check('introselect', 'NPY_INTROSELECT')
+
+
+class TestSearchsideConverter(StringConverterTestCase):
+ """ Tests of PyArray_SearchsideConverter """
+ conv = mt.run_searchside_converter
+ def test_valid(self):
+ self._check('left', 'NPY_SEARCHLEFT')
+ self._check('right', 'NPY_SEARCHRIGHT')
+
+
+class TestOrderConverter(StringConverterTestCase):
+ """ Tests of PyArray_OrderConverter """
+ conv = mt.run_order_converter
+ def test_valid(self):
+ self._check('c', 'NPY_CORDER')
+ self._check('f', 'NPY_FORTRANORDER')
+ self._check('a', 'NPY_ANYORDER')
+ self._check('k', 'NPY_KEEPORDER')
+
+
+class TestClipmodeConverter(StringConverterTestCase):
+ """ Tests of PyArray_ClipmodeConverter """
+ conv = mt.run_clipmode_converter
+ def test_valid(self):
+ self._check('clip', 'NPY_CLIP')
+ self._check('wrap', 'NPY_WRAP')
+ self._check('raise', 'NPY_RAISE')
+
+ # integer values allowed here
+ assert self.conv(np.CLIP) == 'NPY_CLIP'
+ assert self.conv(np.WRAP) == 'NPY_WRAP'
+ assert self.conv(np.RAISE) == 'NPY_RAISE'
+
+
+class TestCastingConverter(StringConverterTestCase):
+ """ Tests of PyArray_CastingConverter """
+ conv = mt.run_casting_converter
+ case_insensitive = False
+ exact_match = True
+
+ def test_valid(self):
+ self._check("no", "NPY_NO_CASTING")
+ self._check("equiv", "NPY_EQUIV_CASTING")
+ self._check("safe", "NPY_SAFE_CASTING")
+ self._check("same_kind", "NPY_SAME_KIND_CASTING")
+ self._check("unsafe", "NPY_UNSAFE_CASTING")
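
For a converter with the default flags, each ``_check`` call in the suite
above fans out into several assertions. Roughly, ``self._check('left',
'NPY_SEARCHLEFT')`` expands to the following (a sketch of what the mixin
exercises, not code from the patch)::

    import numpy.core._multiarray_tests as mt

    conv = mt.run_searchside_converter

    # str and bytes are both accepted (allow_bytes=True) ...
    assert conv('left') == 'NPY_SEARCHLEFT'
    assert conv(b'left') == 'NPY_SEARCHLEFT'
    # ... abbreviations are accepted (exact_match=False) ...
    assert conv('l') == 'NPY_SEARCHLEFT'
    # ... and case is ignored (case_insensitive=True).
    assert conv('LEFT') == 'NPY_SEARCHLEFT'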
diff --git a/numpy/core/tests/test_regression.py b/numpy/core/tests/test_regression.py
index fb969e5f8..96a6d810f 100644
--- a/numpy/core/tests/test_regression.py
+++ b/numpy/core/tests/test_regression.py
@@ -452,6 +452,11 @@ class TestRegression:
xs.strides = (16, 16)
assert np.lexsort((xs,), axis=0).shape[0] == 2
+ def test_lexsort_invalid_axis(self):
+ assert_raises(np.AxisError, np.lexsort, (np.arange(1),), axis=2)
+ assert_raises(np.AxisError, np.lexsort, (np.array([]),), axis=1)
+ assert_raises(np.AxisError, np.lexsort, (np.array(1),), axis=10)
+
def test_lexsort_zerolen_element(self):
dt = np.dtype([]) # a void dtype with no fields
xs = np.empty(4, dt)
diff --git a/numpy/polynomial/tests/test_classes.py b/numpy/polynomial/tests/test_classes.py
index e9f256cf8..8e71a1945 100644
--- a/numpy/polynomial/tests/test_classes.py
+++ b/numpy/polynomial/tests/test_classes.py
@@ -570,56 +570,6 @@ def test_ufunc_override(Poly):
assert_raises(TypeError, np.add, x, p)
-
-class TestLatexRepr:
- """Test the latex repr used by ipython """
-
- def as_latex(self, obj):
- # right now we ignore the formatting of scalars in our tests, since
- # it makes them too verbose. Ideally, the formatting of scalars will
- # be fixed such that tests below continue to pass
- obj._repr_latex_scalar = lambda x: str(x)
- try:
- return obj._repr_latex_()
- finally:
- del obj._repr_latex_scalar
-
- def test_simple_polynomial(self):
- # default input
- p = Polynomial([1, 2, 3])
- assert_equal(self.as_latex(p),
- r'$x \mapsto 1.0 + 2.0\,x + 3.0\,x^{2}$')
-
- # translated input
- p = Polynomial([1, 2, 3], domain=[-2, 0])
- assert_equal(self.as_latex(p),
- r'$x \mapsto 1.0 + 2.0\,\left(1.0 + x\right) + 3.0\,\left(1.0 + x\right)^{2}$')
-
- # scaled input
- p = Polynomial([1, 2, 3], domain=[-0.5, 0.5])
- assert_equal(self.as_latex(p),
- r'$x \mapsto 1.0 + 2.0\,\left(2.0x\right) + 3.0\,\left(2.0x\right)^{2}$')
-
- # affine input
- p = Polynomial([1, 2, 3], domain=[-1, 0])
- assert_equal(self.as_latex(p),
- r'$x \mapsto 1.0 + 2.0\,\left(1.0 + 2.0x\right) + 3.0\,\left(1.0 + 2.0x\right)^{2}$')
-
- def test_basis_func(self):
- p = Chebyshev([1, 2, 3])
- assert_equal(self.as_latex(p),
- r'$x \mapsto 1.0\,{T}_{0}(x) + 2.0\,{T}_{1}(x) + 3.0\,{T}_{2}(x)$')
- # affine input - check no surplus parens are added
- p = Chebyshev([1, 2, 3], domain=[-1, 0])
- assert_equal(self.as_latex(p),
- r'$x \mapsto 1.0\,{T}_{0}(1.0 + 2.0x) + 2.0\,{T}_{1}(1.0 + 2.0x) + 3.0\,{T}_{2}(1.0 + 2.0x)$')
-
- def test_multichar_basis_func(self):
- p = HermiteE([1, 2, 3])
- assert_equal(self.as_latex(p),
- r'$x \mapsto 1.0\,{He}_{0}(x) + 2.0\,{He}_{1}(x) + 3.0\,{He}_{2}(x)$')
-
-
#
# Test class method that only exists for some classes
#
diff --git a/numpy/polynomial/tests/test_printing.py b/numpy/polynomial/tests/test_printing.py
index 049d3af2f..bbd5502af 100644
--- a/numpy/polynomial/tests/test_printing.py
+++ b/numpy/polynomial/tests/test_printing.py
@@ -64,3 +64,52 @@ class TestRepr:
res = repr(poly.Laguerre([0, 1]))
tgt = 'Laguerre([0., 1.], domain=[0, 1], window=[0, 1])'
assert_equal(res, tgt)
+
+
+class TestLatexRepr:
+ """Test the latex repr used by Jupyter"""
+
+ def as_latex(self, obj):
+ # right now we ignore the formatting of scalars in our tests, since
+ # it makes them too verbose. Ideally, the formatting of scalars will
+ # be fixed such that tests below continue to pass
+ obj._repr_latex_scalar = lambda x: str(x)
+ try:
+ return obj._repr_latex_()
+ finally:
+ del obj._repr_latex_scalar
+
+ def test_simple_polynomial(self):
+ # default input
+ p = poly.Polynomial([1, 2, 3])
+ assert_equal(self.as_latex(p),
+ r'$x \mapsto 1.0 + 2.0\,x + 3.0\,x^{2}$')
+
+ # translated input
+ p = poly.Polynomial([1, 2, 3], domain=[-2, 0])
+ assert_equal(self.as_latex(p),
+ r'$x \mapsto 1.0 + 2.0\,\left(1.0 + x\right) + 3.0\,\left(1.0 + x\right)^{2}$')
+
+ # scaled input
+ p = poly.Polynomial([1, 2, 3], domain=[-0.5, 0.5])
+ assert_equal(self.as_latex(p),
+ r'$x \mapsto 1.0 + 2.0\,\left(2.0x\right) + 3.0\,\left(2.0x\right)^{2}$')
+
+ # affine input
+ p = poly.Polynomial([1, 2, 3], domain=[-1, 0])
+ assert_equal(self.as_latex(p),
+ r'$x \mapsto 1.0 + 2.0\,\left(1.0 + 2.0x\right) + 3.0\,\left(1.0 + 2.0x\right)^{2}$')
+
+ def test_basis_func(self):
+ p = poly.Chebyshev([1, 2, 3])
+ assert_equal(self.as_latex(p),
+ r'$x \mapsto 1.0\,{T}_{0}(x) + 2.0\,{T}_{1}(x) + 3.0\,{T}_{2}(x)$')
+ # affine input - check no surplus parens are added
+ p = poly.Chebyshev([1, 2, 3], domain=[-1, 0])
+ assert_equal(self.as_latex(p),
+ r'$x \mapsto 1.0\,{T}_{0}(1.0 + 2.0x) + 2.0\,{T}_{1}(1.0 + 2.0x) + 3.0\,{T}_{2}(1.0 + 2.0x)$')
+
+ def test_multichar_basis_func(self):
+ p = poly.HermiteE([1, 2, 3])
+ assert_equal(self.as_latex(p),
+ r'$x \mapsto 1.0\,{He}_{0}(x) + 2.0\,{He}_{1}(x) + 3.0\,{He}_{2}(x)$')
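
For reference, ``_repr_latex_`` is the hook Jupyter calls to render these
objects; the ``_repr_latex_scalar`` monkeypatch in ``as_latex`` only
simplifies the scalar formatting so the expected strings stay short. A
minimal sketch of the hook in use (the exact scalar formatting in a real
session is more verbose than in the tests above)::

    from numpy.polynomial import Polynomial

    p = Polynomial([1, 2, 3])
    # Jupyter displays the LaTeX string this method returns.
    print(p._repr_latex_())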