-rw-r--r--  numpy/polynomial/chebyshev.py   39
-rw-r--r--  numpy/polynomial/hermite.py     46
-rw-r--r--  numpy/polynomial/hermite_e.py   50
-rw-r--r--  numpy/polynomial/laguerre.py    48
-rw-r--r--  numpy/polynomial/legendre.py    39
-rw-r--r--  numpy/polynomial/polynomial.py  48
6 files changed, 160 insertions(+), 110 deletions(-)
diff --git a/numpy/polynomial/chebyshev.py b/numpy/polynomial/chebyshev.py
index eb0087395..a81085921 100644
--- a/numpy/polynomial/chebyshev.py
+++ b/numpy/polynomial/chebyshev.py
@@ -1524,9 +1524,16 @@ def chebfit(x, y, deg, rcond=None, full=False, w=None):
"""
Least squares fit of Chebyshev series to data.
- Fit a Chebyshev series ``p(x) = p[0] * T_{0}(x) + ... + p[deg] *
- T_{deg}(x)`` of degree `deg` to points `(x, y)`. Returns a vector of
- coefficients `p` that minimises the squared error.
+ Return the coefficients of a Chebyshev series of degree `deg` that is the
+ least squares fit to the data values `y` given at points `x`. If `y` is
+ 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple
+ fits are done, one for each column of `y`, and the resulting
+ coefficients are stored in the corresponding columns of a 2-D return.
+ The fitted polynomial(s) are in the form
+
+ .. math:: p(x) = c_0 + c_1 * T_1(x) + ... + c_n * T_n(x),
+
+ where `n` is `deg`.
Parameters
----------
@@ -1537,7 +1544,7 @@ def chebfit(x, y, deg, rcond=None, full=False, w=None):
points sharing the same x-coordinates can be fitted at once by
passing in a 2D-array that contains one dataset per column.
deg : int
- Degree of the fitting polynomial
+ Degree of the fitting series
rcond : float, optional
Relative condition number of the fit. Singular values smaller than
this relative to the largest singular value will be ignored. The
@@ -1552,6 +1559,7 @@ def chebfit(x, y, deg, rcond=None, full=False, w=None):
``(x[i],y[i])`` to the fit is weighted by `w[i]`. Ideally the
weights are chosen so that the errors of the products ``w[i]*y[i]``
all have the same variance. The default value is None.
+
.. versionadded:: 1.5.0
Returns
@@ -1578,30 +1586,31 @@ def chebfit(x, y, deg, rcond=None, full=False, w=None):
See Also
--------
+ polyfit, legfit, lagfit, hermfit, hermefit
chebval : Evaluates a Chebyshev series.
chebvander : Vandermonde matrix of Chebyshev series.
- polyfit : least squares fit using polynomials.
+ chebweight : Chebyshev weight function.
linalg.lstsq : Computes a least-squares fit from the matrix.
scipy.interpolate.UnivariateSpline : Computes spline fits.
Notes
-----
- The solution are the coefficients ``c[i]`` of the Chebyshev series
- ``T(x)`` that minimizes the squared error
+ The solution is the coefficients of the Chebyshev series `p` that
+ minimizes the sum of the weighted squared errors
- ``E = \\sum_j |y_j - T(x_j)|^2``.
+ .. math:: E = \\sum_j w_j^2 * |y_j - p(x_j)|^2,
- This problem is solved by setting up as the overdetermined matrix
- equation
+ where :math:`w_j` are the weights. This problem is solved by setting up
+ the (typically) overdetermined matrix equation
- ``V(x)*c = y``,
+ .. math:: V(x) * c = w * y,
- where ``V`` is the Vandermonde matrix of `x`, the elements of ``c`` are
- the coefficients to be solved for, and the elements of `y` are the
+ where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the
+ coefficients to be solved for, `w` are the weights, and `y` are the
observed values. This equation is then solved using the singular value
- decomposition of ``V``.
+ decomposition of `V`.
- If some of the singular values of ``V`` are so small that they are
+ If some of the singular values of `V` are so small that they are
neglected, then a `RankWarning` will be issued. This means that the
coefficient values may be poorly determined. Using a lower order fit
will usually get rid of the warning. The `rcond` parameter can also be
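As a sanity check on the revised `chebfit` docstring (illustrative only, not part of the patch), a minimal round trip: generate data from a known Chebyshev series with `chebval`, then recover the coefficients with `chebfit`.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebfit, chebval

# Evaluate the degree-2 series c_0*T_0 + c_1*T_1 + c_2*T_2 on a grid,
# then fit a degree-2 series back; an exact fit recovers the coefficients.
x = np.linspace(-1, 1, 51)
y = chebval(x, [1.0, 2.0, 3.0])
c = chebfit(x, y, 2)
assert np.allclose(c, [1.0, 2.0, 3.0])
```

With 51 points and only 3 unknowns the matrix equation is overdetermined, as the Notes section describes, but the residual is zero because the data lie exactly on a degree-2 series.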
diff --git a/numpy/polynomial/hermite.py b/numpy/polynomial/hermite.py
index b9862ad5a..ace91d2e2 100644
--- a/numpy/polynomial/hermite.py
+++ b/numpy/polynomial/hermite.py
@@ -1297,9 +1297,16 @@ def hermfit(x, y, deg, rcond=None, full=False, w=None):
"""
Least squares fit of Hermite series to data.
- Fit a Hermite series ``p(x) = p[0] * P_{0}(x) + ... + p[deg] *
- P_{deg}(x)`` of degree `deg` to points `(x, y)`. Returns a vector of
- coefficients `p` that minimises the squared error.
+ Return the coefficients of a Hermite series of degree `deg` that is the
+ least squares fit to the data values `y` given at points `x`. If `y` is
+ 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple
+ fits are done, one for each column of `y`, and the resulting
+ coefficients are stored in the corresponding columns of a 2-D return.
+ The fitted polynomial(s) are in the form
+
+ .. math:: p(x) = c_0 + c_1 * H_1(x) + ... + c_n * H_n(x),
+
+ where `n` is `deg`.
Parameters
----------
@@ -1350,41 +1357,42 @@ def hermfit(x, y, deg, rcond=None, full=False, w=None):
See Also
--------
+ chebfit, legfit, lagfit, polyfit, hermefit
hermval : Evaluates a Hermite series.
hermvander : Vandermonde matrix of Hermite series.
- polyfit : least squares fit using polynomials.
- chebfit : least squares fit using Chebyshev series.
+ hermweight : Hermite weight function.
linalg.lstsq : Computes a least-squares fit from the matrix.
scipy.interpolate.UnivariateSpline : Computes spline fits.
Notes
-----
- The solution are the coefficients ``c[i]`` of the Hermite series
- ``P(x)`` that minimizes the squared error
+ The solution is the coefficients of the Hermite series `p` that
+ minimizes the sum of the weighted squared errors
- ``E = \\sum_j |y_j - P(x_j)|^2``.
+ .. math:: E = \\sum_j w_j^2 * |y_j - p(x_j)|^2,
- This problem is solved by setting up as the overdetermined matrix
- equation
+ where the :math:`w_j` are the weights. This problem is solved by
+ setting up the (typically) overdetermined matrix equation
- ``V(x)*c = y``,
+ .. math:: V(x) * c = w * y,
- where ``V`` is the Vandermonde matrix of `x`, the elements of ``c`` are
- the coefficients to be solved for, and the elements of `y` are the
+ where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the
+ coefficients to be solved for, `w` are the weights, and `y` are the
observed values. This equation is then solved using the singular value
- decomposition of ``V``.
+ decomposition of `V`.
- If some of the singular values of ``V`` are so small that they are
+ If some of the singular values of `V` are so small that they are
neglected, then a `RankWarning` will be issued. This means that the
coefficient values may be poorly determined. Using a lower order fit
will usually get rid of the warning. The `rcond` parameter can also be
set to a value smaller than its default, but the resulting fit may be
spurious and have large contributions from roundoff error.
- Fits using Hermite series are usually better conditioned than fits
- using power series, but much can depend on the distribution of the
- sample points and the smoothness of the data. If the quality of the fit
- is inadequate splines may be a good alternative.
+ Fits using Hermite series are probably most useful when the data can be
+ approximated by ``sqrt(w(x)) * p(x)``, where `w(x)` is the Hermite
+ weight. In that case the weight ``sqrt(w(x[i]))`` should be used
+ together with data values ``y[i]/sqrt(w(x[i]))``. The weight function is
+ available as `hermweight`.
References
----------
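The weighted-fit advice added to the `hermfit` Notes can be sketched as follows (illustrative only, not part of the patch): pass ``sqrt(w(x))`` through the `w` parameter, where `w(x)` is the Hermite weight `exp(-x**2)` exposed as `hermweight`.

```python
import numpy as np
from numpy.polynomial.hermite import hermfit, hermval, hermweight

# Data generated from a known degree-2 Hermite series.
x = np.linspace(-2, 2, 41)
y = hermval(x, [1.0, 0.5, 0.25])

# Weight each point by sqrt of the Hermite weight function exp(-x**2),
# as the revised Notes suggest for weight-damped data.
w = np.sqrt(hermweight(x))
c = hermfit(x, y, 2, w=w)

# The data are exactly a degree-2 series, so the weighted fit
# still recovers the original coefficients.
assert np.allclose(c, [1.0, 0.5, 0.25])
```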
diff --git a/numpy/polynomial/hermite_e.py b/numpy/polynomial/hermite_e.py
index 4f39827f9..caf9d8d80 100644
--- a/numpy/polynomial/hermite_e.py
+++ b/numpy/polynomial/hermite_e.py
@@ -1293,9 +1293,16 @@ def hermefit(x, y, deg, rcond=None, full=False, w=None):
"""
Least squares fit of HermiteE series to data.
- Fit a Hermite series ``p(x) = p[0] * P_{0}(x) + ... + p[deg] *
- P_{deg}(x)`` of degree `deg` to points `(x, y)`. Returns a vector of
- coefficients `p` that minimises the squared error.
+ Return the coefficients of a HermiteE series of degree `deg` that is
+ the least squares fit to the data values `y` given at points `x`. If
+ `y` is 1-D the returned coefficients will also be 1-D. If `y` is 2-D
+ multiple fits are done, one for each column of `y`, and the resulting
+ coefficients are stored in the corresponding columns of a 2-D return.
+ The fitted polynomial(s) are in the form
+
+ .. math:: p(x) = c_0 + c_1 * He_1(x) + ... + c_n * He_n(x),
+
+ where `n` is `deg`.
Parameters
----------
@@ -1346,41 +1353,42 @@ def hermefit(x, y, deg, rcond=None, full=False, w=None):
See Also
--------
+ chebfit, legfit, lagfit, polyfit, hermfit
hermeval : Evaluates a Hermite series.
- hermevander : Vandermonde matrix of Hermite series.
- polyfit : least squares fit using polynomials.
- chebfit : least squares fit using Chebyshev series.
+ hermevander : pseudo Vandermonde matrix of Hermite series.
+ hermeweight : HermiteE weight function.
linalg.lstsq : Computes a least-squares fit from the matrix.
scipy.interpolate.UnivariateSpline : Computes spline fits.
Notes
-----
- The solution are the coefficients ``c[i]`` of the Hermite series
- ``P(x)`` that minimizes the squared error
+ The solution is the coefficients of the HermiteE series `p` that
+ minimizes the sum of the weighted squared errors
- ``E = \\sum_j |y_j - P(x_j)|^2``.
+ .. math:: E = \\sum_j w_j^2 * |y_j - p(x_j)|^2,
- This problem is solved by setting up as the overdetermined matrix
- equation
+ where the :math:`w_j` are the weights. This problem is solved by
+ setting up the (typically) overdetermined matrix equation
- ``V(x)*c = y``,
+ .. math:: V(x) * c = w * y,
- where ``V`` is the Vandermonde matrix of `x`, the elements of ``c`` are
- the coefficients to be solved for, and the elements of `y` are the
+ where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the
+ coefficients to be solved for, `w` are the weights, and `y` are the
observed values. This equation is then solved using the singular value
- decomposition of ``V``.
+ decomposition of `V`.
- If some of the singular values of ``V`` are so small that they are
+ If some of the singular values of `V` are so small that they are
neglected, then a `RankWarning` will be issued. This means that the
coefficient values may be poorly determined. Using a lower order fit
will usually get rid of the warning. The `rcond` parameter can also be
set to a value smaller than its default, but the resulting fit may be
spurious and have large contributions from roundoff error.
- Fits using Hermite series are usually better conditioned than fits
- using power series, but much can depend on the distribution of the
- sample points and the smoothness of the data. If the quality of the fit
- is inadequate splines may be a good alternative.
+ Fits using HermiteE series are probably most useful when the data can
+ be approximated by ``sqrt(w(x)) * p(x)``, where `w(x)` is the HermiteE
+ weight. In that case the weight ``sqrt(w(x[i]))`` should be used
+ together with data values ``y[i]/sqrt(w(x[i]))``. The weight function is
+ available as `hermeweight`.
References
----------
@@ -1389,7 +1397,7 @@ def hermefit(x, y, deg, rcond=None, full=False, w=None):
Examples
--------
- >>> from numpy.polynomial.hermite_e import hermefit, hermeval
+ >>> from numpy.polynomial.hermite_e import hermefit, hermeval
>>> x = np.linspace(-10, 10)
>>> err = np.random.randn(len(x))/10
>>> y = hermeval(x, [1, 2, 3]) + err
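The Examples section of `hermefit` fits noisy data; the noiseless version of the same round trip (a sketch, not part of the patch) shows the exact recovery the docstring relies on.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermefit, hermeval

# Evaluate a known degree-2 HermiteE series on the same grid the
# docstring example uses, then fit it back without added noise.
x = np.linspace(-10, 10, 50)
y = hermeval(x, [1.0, 2.0, 3.0])
c = hermefit(x, y, 2)
assert np.allclose(c, [1.0, 2.0, 3.0])
```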
diff --git a/numpy/polynomial/laguerre.py b/numpy/polynomial/laguerre.py
index 15ea8d870..489ecb8a2 100644
--- a/numpy/polynomial/laguerre.py
+++ b/numpy/polynomial/laguerre.py
@@ -1296,9 +1296,16 @@ def lagfit(x, y, deg, rcond=None, full=False, w=None):
"""
Least squares fit of Laguerre series to data.
- Fit a Laguerre series ``p(x) = p[0] * P_{0}(x) + ... + p[deg] *
- P_{deg}(x)`` of degree `deg` to points `(x, y)`. Returns a vector of
- coefficients `p` that minimises the squared error.
+ Return the coefficients of a Laguerre series of degree `deg` that is the
+ least squares fit to the data values `y` given at points `x`. If `y` is
+ 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple
+ fits are done, one for each column of `y`, and the resulting
+ coefficients are stored in the corresponding columns of a 2-D return.
+ The fitted polynomial(s) are in the form
+
+ .. math:: p(x) = c_0 + c_1 * L_1(x) + ... + c_n * L_n(x),
+
+ where `n` is `deg`.
Parameters
----------
@@ -1349,41 +1356,42 @@ def lagfit(x, y, deg, rcond=None, full=False, w=None):
See Also
--------
+ chebfit, legfit, polyfit, hermfit, hermefit
lagval : Evaluates a Laguerre series.
- lagvander : Vandermonde matrix of Laguerre series.
- polyfit : least squares fit using polynomials.
- chebfit : least squares fit using Chebyshev series.
+ lagvander : pseudo Vandermonde matrix of Laguerre series.
+ lagweight : Laguerre weight function.
linalg.lstsq : Computes a least-squares fit from the matrix.
scipy.interpolate.UnivariateSpline : Computes spline fits.
Notes
-----
- The solution are the coefficients ``c[i]`` of the Laguerre series
- ``P(x)`` that minimizes the squared error
+ The solution is the coefficients of the Laguerre series `p` that
+ minimizes the sum of the weighted squared errors
- ``E = \\sum_j |y_j - P(x_j)|^2``.
+ .. math:: E = \\sum_j w_j^2 * |y_j - p(x_j)|^2,
- This problem is solved by setting up as the overdetermined matrix
- equation
+ where the :math:`w_j` are the weights. This problem is solved by
+ setting up the (typically) overdetermined matrix equation
- ``V(x)*c = y``,
+ .. math:: V(x) * c = w * y,
- where ``V`` is the Vandermonde matrix of `x`, the elements of ``c`` are
- the coefficients to be solved for, and the elements of `y` are the
+ where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the
+ coefficients to be solved for, `w` are the weights, and `y` are the
observed values. This equation is then solved using the singular value
- decomposition of ``V``.
+ decomposition of `V`.
- If some of the singular values of ``V`` are so small that they are
+ If some of the singular values of `V` are so small that they are
neglected, then a `RankWarning` will be issued. This means that the
coefficient values may be poorly determined. Using a lower order fit
will usually get rid of the warning. The `rcond` parameter can also be
set to a value smaller than its default, but the resulting fit may be
spurious and have large contributions from roundoff error.
- Fits using Laguerre series are usually better conditioned than fits
- using power series, but much can depend on the distribution of the
- sample points and the smoothness of the data. If the quality of the fit
- is inadequate splines may be a good alternative.
+ Fits using Laguerre series are probably most useful when the data can
+ be approximated by ``sqrt(w(x)) * p(x)``, where `w(x)` is the Laguerre
+ weight. In that case the weight ``sqrt(w(x[i]))`` should be used
+ together with data values ``y[i]/sqrt(w(x[i]))``. The weight function is
+ available as `lagweight`.
References
----------
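For `lagfit` the natural sample domain is the nonnegative half-line, where the Laguerre weight `exp(-x)` lives; a minimal sketch (illustrative only, not part of the patch):

```python
import numpy as np
from numpy.polynomial.laguerre import lagfit, lagval

# Laguerre series are naturally sampled on x >= 0.
x = np.linspace(0, 10, 41)
y = lagval(x, [1.0, 2.0, 3.0])
c = lagfit(x, y, 2)

# Exact data, so the degree-2 fit recovers the generating coefficients.
assert np.allclose(c, [1.0, 2.0, 3.0])
```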
diff --git a/numpy/polynomial/legendre.py b/numpy/polynomial/legendre.py
index 319fb505b..da2c2d846 100644
--- a/numpy/polynomial/legendre.py
+++ b/numpy/polynomial/legendre.py
@@ -1326,9 +1326,16 @@ def legfit(x, y, deg, rcond=None, full=False, w=None):
"""
Least squares fit of Legendre series to data.
- Fit a Legendre series ``p(x) = p[0] * P_{0}(x) + ... + p[deg] *
- P_{deg}(x)`` of degree `deg` to points `(x, y)`. Returns a vector of
- coefficients `p` that minimises the squared error.
+ Return the coefficients of a Legendre series of degree `deg` that is the
+ least squares fit to the data values `y` given at points `x`. If `y` is
+ 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple
+ fits are done, one for each column of `y`, and the resulting
+ coefficients are stored in the corresponding columns of a 2-D return.
+ The fitted polynomial(s) are in the form
+
+ .. math:: p(x) = c_0 + c_1 * L_1(x) + ... + c_n * L_n(x),
+
+ where `n` is `deg`.
Parameters
----------
@@ -1355,6 +1362,8 @@ def legfit(x, y, deg, rcond=None, full=False, w=None):
weights are chosen so that the errors of the products ``w[i]*y[i]``
all have the same variance. The default value is None.
+ .. versionadded:: 1.5.0
+
Returns
-------
coef : ndarray, shape (M,) or (M, K)
@@ -1379,31 +1388,31 @@ def legfit(x, y, deg, rcond=None, full=False, w=None):
See Also
--------
+ chebfit, polyfit, lagfit, hermfit, hermefit
legval : Evaluates a Legendre series.
legvander : Vandermonde matrix of Legendre series.
- polyfit : least squares fit using polynomials.
- chebfit : least squares fit using Chebyshev series.
+ legweight : Legendre weight function (= 1).
linalg.lstsq : Computes a least-squares fit from the matrix.
scipy.interpolate.UnivariateSpline : Computes spline fits.
Notes
-----
- The solution are the coefficients ``c[i]`` of the Legendre series
- ``P(x)`` that minimizes the squared error
+ The solution is the coefficients of the Legendre series `p` that
+ minimizes the sum of the weighted squared errors
- ``E = \\sum_j |y_j - P(x_j)|^2``.
+ .. math:: E = \\sum_j w_j^2 * |y_j - p(x_j)|^2,
- This problem is solved by setting up as the overdetermined matrix
- equation
+ where :math:`w_j` are the weights. This problem is solved by setting up
+ the (typically) overdetermined matrix equation
- ``V(x)*c = y``,
+ .. math:: V(x) * c = w * y,
- where ``V`` is the Vandermonde matrix of `x`, the elements of ``c`` are
- the coefficients to be solved for, and the elements of `y` are the
+ where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the
+ coefficients to be solved for, `w` are the weights, and `y` are the
observed values. This equation is then solved using the singular value
- decomposition of ``V``.
+ decomposition of `V`.
- If some of the singular values of ``V`` are so small that they are
+ If some of the singular values of `V` are so small that they are
neglected, then a `RankWarning` will be issued. This means that the
coefficient values may be poorly determined. Using a lower order fit
will usually get rid of the warning. The `rcond` parameter can also be
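The rewritten `legfit` summary notes that a 2-D `y` triggers one fit per column, with coefficients returned column-by-column; a short sketch of that behavior (illustrative only, not part of the patch):

```python
import numpy as np
from numpy.polynomial.legendre import legfit, legval

x = np.linspace(-1, 1, 30)

# Two data sets sharing the same x, one per column of y:
# column 0 is 1 + 2*P_1(x), column 1 is 3*P_2(x).
y = np.stack([legval(x, [1.0, 2.0]),
              legval(x, [0.0, 0.0, 3.0])], axis=1)

# One degree-2 fit per column; coef has shape (deg + 1, K) = (3, 2).
c = legfit(x, y, 2)
assert c.shape == (3, 2)
assert np.allclose(c[:, 0], [1.0, 2.0, 0.0])
assert np.allclose(c[:, 1], [0.0, 0.0, 3.0])
```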
diff --git a/numpy/polynomial/polynomial.py b/numpy/polynomial/polynomial.py
index 73090244c..99a555e71 100644
--- a/numpy/polynomial/polynomial.py
+++ b/numpy/polynomial/polynomial.py
@@ -1122,10 +1122,16 @@ def polyfit(x, y, deg, rcond=None, full=False, w=None):
"""
Least-squares fit of a polynomial to data.
- Fit a polynomial ``c0 + c1*x + c2*x**2 + ... + c[deg]*x**deg`` to
- points (`x`, `y`). Returns a 1-d (if `y` is 1-d) or 2-d (if `y` is 2-d)
- array of coefficients representing, from lowest order term to highest,
- the polynomial(s) which minimize the total square error.
+ Return the coefficients of a polynomial of degree `deg` that is the
+ least squares fit to the data values `y` given at points `x`. If `y` is
+ 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple
+ fits are done, one for each column of `y`, and the resulting
+ coefficients are stored in the corresponding columns of a 2-D return.
+ The fitted polynomial(s) are in the form
+
+ .. math:: p(x) = c_0 + c_1 * x + ... + c_n * x^n,
+
+ where `n` is `deg`.
Parameters
----------
@@ -1134,7 +1140,7 @@ def polyfit(x, y, deg, rcond=None, full=False, w=None):
y : array_like, shape (`M`,) or (`M`, `K`)
y-coordinates of the sample points. Several sets of sample points
sharing the same x-coordinates can be (independently) fit with one
- call to `polyfit` by passing in for `y` a 2-d array that contains
+ call to `polyfit` by passing in for `y` a 2-D array that contains
one data set per column.
deg : int
Degree of the polynomial(s) to be fit.
@@ -1154,12 +1160,13 @@ def polyfit(x, y, deg, rcond=None, full=False, w=None):
``(x[i],y[i])`` to the fit is weighted by `w[i]`. Ideally the
weights are chosen so that the errors of the products ``w[i]*y[i]``
all have the same variance. The default value is None.
+
.. versionadded:: 1.5.0
Returns
-------
coef : ndarray, shape (`deg` + 1,) or (`deg` + 1, `K`)
- Polynomial coefficients ordered from low to high. If `y` was 2-d,
+ Polynomial coefficients ordered from low to high. If `y` was 2-D,
the coefficients in column `k` of `coef` represent the polynomial
fit to the data in `y`'s `k`-th column.
@@ -1181,27 +1188,27 @@ def polyfit(x, y, deg, rcond=None, full=False, w=None):
See Also
--------
+ chebfit, legfit, lagfit, hermfit, hermefit
polyval : Evaluates a polynomial.
polyvander : Vandermonde matrix for powers.
- chebfit : least squares fit using Chebyshev series.
linalg.lstsq : Computes a least-squares fit from the matrix.
scipy.interpolate.UnivariateSpline : Computes spline fits.
Notes
-----
- The solutions are the coefficients ``c[i]`` of the polynomial ``P(x)``
- that minimizes the total squared error:
+ The solution is the coefficients of the polynomial `p` that minimizes
+ the sum of the weighted squared errors
- .. math :: E = \\sum_j (y_j - P(x_j))^2
+ .. math :: E = \\sum_j w_j^2 * |y_j - p(x_j)|^2,
- This problem is solved by setting up the (typically) over-determined
- matrix equation:
+ where the :math:`w_j` are the weights. This problem is solved by
+ setting up the (typically) over-determined matrix equation:
- .. math :: V(x)*c = y
+ .. math :: V(x) * c = w * y,
- where `V` is the Vandermonde matrix of `x`, the elements of `c` are the
- coefficients to be solved for, and the elements of `y` are the observed
- values. This equation is then solved using the singular value
+ where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the
+ coefficients to be solved for, `w` are the weights, and `y` are the
+ observed values. This equation is then solved using the singular value
decomposition of `V`.
If some of the singular values of `V` are so small that they are
@@ -1216,10 +1223,11 @@ def polyfit(x, y, deg, rcond=None, full=False, w=None):
contributions from roundoff error.
Polynomial fits using double precision tend to "fail" at about
- (polynomial) degree 20. Fits using Chebyshev series are generally
- better conditioned, but much can still depend on the distribution of
- the sample points and the smoothness of the data. If the quality of
- the fit is inadequate, splines may be a good alternative.
+ (polynomial) degree 20. Fits using Chebyshev or Legendre series are
+ generally better conditioned, but much can still depend on the
+ distribution of the sample points and the smoothness of the data. If
+ the quality of the fit is inadequate, splines may be a good
+ alternative.
Examples
--------
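The `polyfit` changes also touch the `full=True` return path, which exposes the SVD diagnostics (`resid`, `rank`, singular values, `rcond`) discussed in the Notes; a sketch (illustrative only, not part of the patch):

```python
import numpy as np
from numpy.polynomial.polynomial import polyfit, polyval

# Data from a known degree-2 polynomial 1 - 2x + 0.5x**2.
x = np.linspace(-1, 1, 25)
y = polyval(x, [1.0, -2.0, 0.5])

# full=True returns the coefficients plus lstsq diagnostics.
c, info = polyfit(x, y, 2, full=True)
resid, rank, sv, rcond = info

assert np.allclose(c, [1.0, -2.0, 0.5])
# A well-conditioned Vandermonde matrix of 25 points and 3 columns
# has full rank 3, so no RankWarning is issued.
assert rank == 3
```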