author     Charles Harris <charlesr.harris@gmail.com>    2011-12-27 17:54:09 -0700
committer  Charles Harris <charlesr.harris@gmail.com>    2012-01-09 11:09:36 -0700
commit     c462637f9b398600d25ca449aef8586d8d9d6210 (patch)
tree       82ac8cec0eb48a220f311f53ef4c2265c782b198 /numpy/polynomial/polynomial.py
parent     08c8c54051e27f80f7e30646ef367eb709b3e6cf (diff)
download   numpy-c462637f9b398600d25ca449aef8586d8d9d6210.tar.gz

DOC: Document xxxfit functions in the polynomial package modules.
Diffstat (limited to 'numpy/polynomial/polynomial.py')
-rw-r--r--  numpy/polynomial/polynomial.py | 48
1 file changed, 28 insertions(+), 20 deletions(-)
diff --git a/numpy/polynomial/polynomial.py b/numpy/polynomial/polynomial.py
index 73090244c..99a555e71 100644
--- a/numpy/polynomial/polynomial.py
+++ b/numpy/polynomial/polynomial.py
@@ -1122,10 +1122,16 @@ def polyfit(x, y, deg, rcond=None, full=False, w=None):
"""
Least-squares fit of a polynomial to data.
- Fit a polynomial ``c0 + c1*x + c2*x**2 + ... + c[deg]*x**deg`` to
- points (`x`, `y`). Returns a 1-d (if `y` is 1-d) or 2-d (if `y` is 2-d)
- array of coefficients representing, from lowest order term to highest,
- the polynomial(s) which minimize the total square error.
+ Return the coefficients of a polynomial of degree `deg` that is the
+ least squares fit to the data values `y` given at points `x`. If `y` is
+ 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple
+ fits are done, one for each column of `y`, and the resulting
+ coefficients are stored in the corresponding columns of a 2-D return.
+ The fitted polynomial(s) are in the form
+
+ .. math:: p(x) = c_0 + c_1 * x + ... + c_n * x^n,
+
+ where `n` is `deg`.
Parameters
----------
@@ -1134,7 +1140,7 @@ def polyfit(x, y, deg, rcond=None, full=False, w=None):
y : array_like, shape (`M`,) or (`M`, `K`)
y-coordinates of the sample points. Several sets of sample points
sharing the same x-coordinates can be (independently) fit with one
- call to `polyfit` by passing in for `y` a 2-d array that contains
+ call to `polyfit` by passing in for `y` a 2-D array that contains
one data set per column.
deg : int
Degree of the polynomial(s) to be fit.
@@ -1154,12 +1160,13 @@ def polyfit(x, y, deg, rcond=None, full=False, w=None):
``(x[i],y[i])`` to the fit is weighted by `w[i]`. Ideally the
weights are chosen so that the errors of the products ``w[i]*y[i]``
all have the same variance. The default value is None.
+
.. versionadded:: 1.5.0
Returns
-------
coef : ndarray, shape (`deg` + 1,) or (`deg` + 1, `K`)
- Polynomial coefficients ordered from low to high. If `y` was 2-d,
+ Polynomial coefficients ordered from low to high. If `y` was 2-D,
the coefficients in column `k` of `coef` represent the polynomial
fit to the data in `y`'s `k`-th column.
@@ -1181,27 +1188,27 @@ def polyfit(x, y, deg, rcond=None, full=False, w=None):
See Also
--------
+ chebfit, legfit, lagfit, hermfit, hermefit
polyval : Evaluates a polynomial.
polyvander : Vandermonde matrix for powers.
- chebfit : least squares fit using Chebyshev series.
linalg.lstsq : Computes a least-squares fit from the matrix.
scipy.interpolate.UnivariateSpline : Computes spline fits.
Notes
-----
- The solutions are the coefficients ``c[i]`` of the polynomial ``P(x)``
- that minimizes the total squared error:
+ The solution is the coefficients of the polynomial `p` that minimizes
+ the sum of the weighted squared errors
- .. math :: E = \\sum_j (y_j - P(x_j))^2
+ .. math :: E = \\sum_j w_j^2 * |y_j - p(x_j)|^2,
- This problem is solved by setting up the (typically) over-determined
- matrix equation:
+ where the :math:`w_j` are the weights. This problem is solved by
+ setting up the (typically) over-determined matrix equation:
- .. math :: V(x)*c = y
+ .. math :: V(x) * c = w * y,
- where `V` is the Vandermonde matrix of `x`, the elements of `c` are the
- coefficients to be solved for, and the elements of `y` are the observed
- values. This equation is then solved using the singular value
+ where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the
+ coefficients to be solved for, `w` are the weights, and `y` are the
+ observed values. This equation is then solved using the singular value
decomposition of `V`.
If some of the singular values of `V` are so small that they are
@@ -1216,10 +1223,11 @@ def polyfit(x, y, deg, rcond=None, full=False, w=None):
contributions from roundoff error.
Polynomial fits using double precision tend to "fail" at about
- (polynomial) degree 20. Fits using Chebyshev series are generally
- better conditioned, but much can still depend on the distribution of
- the sample points and the smoothness of the data. If the quality of
- the fit is inadequate, splines may be a good alternative.
+ (polynomial) degree 20. Fits using Chebyshev or Legendre series are
+ generally better conditioned, but much can still depend on the
+ distribution of the sample points and the smoothness of the data. If
+ the quality of the fit is inadequate, splines may be a good
+ alternative.
Examples
--------
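
For reference, a minimal usage sketch of the behaviour the revised docstring describes: coefficients returned from lowest to highest degree, a 2-D `y` producing one fit per column, and `w` supplying per-point weights. The data, variable names, and printed values below are illustrative assumptions, not part of the commit.

    import numpy as np
    from numpy.polynomial import polynomial as P

    # Illustrative data (not from the commit): samples of y = 1 + 2*x + 3*x**2.
    x = np.linspace(-1, 1, 11)
    y = 1 + 2*x + 3*x**2

    # Coefficients come back ordered from lowest to highest degree: [c0, c1, c2].
    coef = P.polyfit(x, y, 2)
    print(coef)              # approximately [1., 2., 3.]

    # A 2-D y fits one polynomial per column; w gives per-point weights w[i].
    y2 = np.column_stack([y, -y])
    coef2 = P.polyfit(x, y2, 2, w=np.ones_like(x))
    print(coef2.shape)       # (3, 2): deg + 1 rows, one column per data set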
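
The revised Notes describe the fit as solving the weighted matrix equation V(x) * c = w * y in the least squares sense. The sketch below sets that equation up with `polyvander` and `numpy.linalg.lstsq` (which uses the SVD); it is a rough illustration of the procedure, not polyfit's internal code, which additionally handles `rcond` and scaling. Data and weights are assumed for the example.

    import numpy as np
    from numpy.polynomial import polynomial as P

    # Illustrative setup of the equation from the Notes: minimize
    # E = sum_j w_j**2 * |y_j - p(x_j)|**2 by solving V(x)*c = w*y in the
    # least squares sense, with each row of V scaled by its weight.
    x = np.linspace(0, 2, 20)
    y = 0.5 - 1.5*x + 0.25*x**3
    w = np.ones_like(x)          # assumed uniform weights for this sketch

    deg = 3
    V = P.polyvander(x, deg)     # pseudo Vandermonde matrix, columns x**0 .. x**deg
    c, res, rank, sv = np.linalg.lstsq(V * w[:, None], w * y, rcond=None)
    print(c)                     # approximately [0.5, -1.5, 0., 0.25]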