Linear least squares/Proposed
From Wikipedia, the free encyclopedia
Linear least squares is an important computational problem that arises primarily in applications where a linear mathematical model is to be fitted to observations obtained from experiments. Mathematically, it can be stated as the problem of finding an approximate solution of an overdetermined system of equations.
Linear least squares problems admit a closed-form solution, in contrast to non-linear least squares problems, which have to be solved by an iterative procedure.
Problem statement and solutions
Consider an overdetermined system
\[\sum_{j=1}^{n}X_{ij}\beta_j=y_i,\qquad i=1,2,\dots,m,\]
of m linear equations in n unknowns, \(\beta_1,\beta_2,\dots,\beta_n\), with m > n. Such a system usually has no exact solution, and the goal is then to find the numbers \(\beta_j\) which fit the equations "best", in the sense of minimizing the sum of squares of the differences between the left- and right-hand sides of the equations.
The primary application of linear least squares is in data fitting. Given a set of m data points \((x_1,y_1),(x_2,y_2),\dots,(x_m,y_m)\), consisting of experimentally measured values \(y_i\) taken at m values \(x_i\) of an independent variable (\(x_i\) may be a scalar or a vector), it is desired to find a model function \(y=f(x,\boldsymbol\beta)\), with \(\boldsymbol\beta=(\beta_1,\beta_2,\dots,\beta_n)\), that best fits the data. The model function is assumed to be linear in the parameters \(\beta_j\), so
\[f(x,\boldsymbol\beta)=\sum_{j=1}^{n}\beta_j\varphi_j(x).\]
Here, the functions φj may be nonlinear in the variable x.
A best fit is realized when each difference between an observed value and the value calculated from the model is made as small as possible by varying the parameters. The difference \(r_i=y_i-f(x_i,\boldsymbol\beta)\) is known as a residual.
However, there are more residuals than parameters, so the parameters are overdetermined and no set of parameter values exists that can make all the residuals equal to zero. In the least squares method the criterion chosen for best fit is that the sum of squared residuals,
\[S=\sum_{i=1}^{m}r_i^2,\]
is minimized. The problem then reduces to the overdetermined linear system mentioned earlier, with \(X_{ij}=\varphi_j(x_i)\).
The justification for choosing this criterion is given in Properties of the least-squares estimators, below. Provided the model functions are linearly independent at the data points, there is a unique set of parameter values that corresponds to the minimum value of the sum of squared residuals.
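To make this setup concrete, here is a minimal sketch (not part of the article; the function, data and names are illustrative) that builds the design matrix \(X_{ij}=\varphi_j(x_i)\) column by column and minimizes the sum of squared residuals with NumPy's least-squares solver.

```python
# Minimal sketch: fit the linear-in-parameters model f(x, beta) = sum_j beta_j * phi_j(x)
# by forming the design matrix X_ij = phi_j(x_i) and solving the least-squares problem.
import numpy as np

def fit_linear_model(x, y, basis_functions):
    """Return the least-squares estimates of beta for y ~ sum_j beta_j * phi_j(x)."""
    # One column per basis function phi_j, evaluated at every x_i.
    X = np.column_stack([phi(x) for phi in basis_functions])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Illustrative usage: a quadratic model f(x) = b1 + b2*x + b3*x**2 on made-up data.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 4.2, 8.9, 17.1])
print(fit_linear_model(x, y, [lambda t: np.ones_like(t), lambda t: t, lambda t: t ** 2]))
```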
Specific solution, straight line fitting, with example
For straight-line fitting there are only two parameters. This means that a complete algebraic solution may be worked out with relative ease. For the model
\[f(x,\boldsymbol\beta)=\beta_1+\beta_2 x\]
the normal equations (for derivation see below) are
\[\beta_1 m+\beta_2\sum x_i=\sum y_i,\qquad \beta_1\sum x_i+\beta_2\sum x_i^2=\sum x_iy_i.\]
All the summations go from i = 1 to m. Each summation can be represented by a single symbol:
\[S_x=\sum x_i,\quad S_y=\sum y_i,\quad S_{xx}=\sum x_i^2,\quad S_{xy}=\sum x_iy_i.\]
In terms of these symbols the normal equations become
\[\beta_1 m+\beta_2 S_x=S_y,\qquad \beta_1 S_x+\beta_2 S_{xx}=S_{xy},\]
and the solution, by Cramer's rule, is
\[\beta_1=\frac{S_{xx}S_y-S_xS_{xy}}{mS_{xx}-S_x^2},\qquad \beta_2=\frac{mS_{xy}-S_xS_y}{mS_{xx}-S_x^2}.\]
These expressions have been used in hand calculators because each time a data point is added or removed, the five sums \(m, S_x, S_y, S_{xx}, S_{xy}\) are adjusted and the parameters are recalculated, only seven operations in all.[1] The standard deviations of the parameter estimates (often called their standard errors) are
\[\sigma_{\beta_1}=\sigma\sqrt{\frac{S_{xx}}{mS_{xx}-S_x^2}},\qquad \sigma_{\beta_2}=\sigma\sqrt{\frac{m}{mS_{xx}-S_x^2}},\qquad \sigma^2\approx\frac{S}{m-2}.\]
The correlation coefficient between the parameter estimates is
\[\rho_{\beta_1\beta_2}=\frac{-S_x}{\sqrt{mS_{xx}}}.\]
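As a sketch of the running-sum scheme just described (the class and method names below are my own, and no exact operation count is implied), the five sums can be maintained incrementally and the two parameters recomputed from them by Cramer's rule.

```python
# Sketch of an incremental straight-line fit: keep the five running sums up to date
# as points are added or removed, and recompute the parameters from them on demand.
class RunningLineFit:
    def __init__(self):
        self.m = 0
        self.Sx = self.Sy = self.Sxx = self.Sxy = 0.0

    def add(self, x, y, sign=+1):
        """Add (sign=+1) or remove (sign=-1) a data point (x, y)."""
        self.m += sign
        self.Sx += sign * x
        self.Sy += sign * y
        self.Sxx += sign * x * x
        self.Sxy += sign * x * y

    def parameters(self):
        """Return (intercept, slope) from the normal equations via Cramer's rule."""
        D = self.m * self.Sxx - self.Sx ** 2
        beta1 = (self.Sxx * self.Sy - self.Sx * self.Sxy) / D  # intercept
        beta2 = (self.m * self.Sxy - self.Sx * self.Sy) / D    # slope
        return beta1, beta2
```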
Example
With a set of observed data points y = 2, 3, 3, 4 obtained at values −1, 0, 2, 4 of the independent variable x, the sums are m = 4, \(S_x=5\), \(S_y=12\), \(S_{xx}=21\) and \(S_{xy}=20\), so that
\[\beta_1=\frac{152}{59}\approx 2.58,\qquad \beta_2=\frac{20}{59}\approx 0.34.\]
Now the residuals are calculated:
\[\mathbf r\approx(-0.24,\ 0.42,\ -0.25,\ 0.07)\]
and S = 0.305. After calculating the standard deviations (\(\sigma_{\beta_1}\approx 0.23\), \(\sigma_{\beta_2}\approx 0.10\)) the final result is obtained:
\[\beta_1=2.6\pm0.2,\qquad \beta_2=0.3\pm0.1.\]
Note that the error is only quoted to one significant digit.
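The example can be checked numerically with a short sketch (only the data and the formulas given above are used; the printed value of S should come out at about 0.305):

```python
# Reproduce the worked example: y = 2, 3, 3, 4 observed at x = -1, 0, 2, 4.
x = [-1.0, 0.0, 2.0, 4.0]
y = [2.0, 3.0, 3.0, 4.0]

m = len(x)
Sx, Sy = sum(x), sum(y)
Sxx = sum(xi * xi for xi in x)
Sxy = sum(xi * yi for xi, yi in zip(x, y))

D = m * Sxx - Sx ** 2
beta1 = (Sxx * Sy - Sx * Sxy) / D  # intercept, about 2.58
beta2 = (m * Sxy - Sx * Sy) / D    # slope, about 0.34

residuals = [yi - (beta1 + beta2 * xi) for xi, yi in zip(x, y)]
S = sum(r * r for r in residuals)
print(beta1, beta2, round(S, 3))   # S comes out at about 0.305, as quoted in the text
```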
Normal equations method
S is minimized when its gradient with respect to each parameter is equal to zero. The elements of the gradient vector are the partial derivatives of S with respect to the parameters:
\[\frac{\partial S}{\partial\beta_j}=2\sum_{i=1}^{m}r_i\frac{\partial r_i}{\partial\beta_j}=0,\qquad j=1,2,\dots,n.\]
The gradient equations are a set of n simultaneous equations in the n parameters. They are solved using the methods of linear algebra. Since \(r_i=y_i-\sum_{k=1}^{n}X_{ik}\beta_k\), the derivatives are
\[\frac{\partial r_i}{\partial\beta_j}=-X_{ij}.\]
Substitution of the expressions for the residuals and the derivatives into the gradient equations gives
\[-2\sum_{i=1}^{m}X_{ij}\left(y_i-\sum_{k=1}^{n}X_{ik}\beta_k\right)=0,\qquad j=1,2,\dots,n.\]
Upon rearrangement, the n simultaneous linear equations, the normal equations,
\[\sum_{i=1}^{m}\sum_{k=1}^{n}X_{ij}X_{ik}\hat\beta_k=\sum_{i=1}^{m}X_{ij}y_i,\qquad j=1,2,\dots,n,\]
are obtained. The normal equations are written in matrix notation as
\[\left(\mathbf X^\mathsf T\mathbf X\right)\hat{\boldsymbol\beta}=\mathbf X^\mathsf T\mathbf y.\]
Solution of the normal equations yields the least-squares estimators, \(\hat{\boldsymbol\beta}\), of the parameter values.
General solution
Although the algebraic solution of the normal equations can be written as
\[\hat{\boldsymbol\beta}=\left(\mathbf X^\mathsf T\mathbf X\right)^{-1}\mathbf X^\mathsf T\mathbf y,\]
it is not good practice to invert the normal equations matrix explicitly. An exception occurs in numerical smoothing and differentiation, where an analytical expression is required.
If the normal equations matrix, \(\mathbf X^\mathsf T\mathbf X\), is well-conditioned and positive definite, that is, it has full rank, the normal equations can be solved directly by using the Cholesky decomposition \(\mathbf X^\mathsf T\mathbf X=\mathbf R^\mathsf T\mathbf R\), where R is an upper triangular matrix, giving
\[\mathbf R^\mathsf T\mathbf R\,\hat{\boldsymbol\beta}=\mathbf X^\mathsf T\mathbf y.\]
The solution is obtained in two stages: a forward substitution, \(\mathbf R^\mathsf T\mathbf z=\mathbf X^\mathsf T\mathbf y\), followed by a backward substitution, \(\mathbf R\hat{\boldsymbol\beta}=\mathbf z\). Both substitutions are facilitated by the triangular nature of R.
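A minimal sketch of this two-stage procedure, assuming \(\mathbf X^\mathsf T\mathbf X\) has full rank (the function name is mine; SciPy's Cholesky factorization and triangular solver are used):

```python
# Solve the normal equations (X^T X) beta = X^T y via a Cholesky factorization
# X^T X = R^T R, followed by a forward and a backward triangular substitution.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def solve_normal_equations(X, y):
    A = X.T @ X                                   # normal equations matrix (assumed full rank)
    b = X.T @ y
    R = cholesky(A, lower=False)                  # upper triangular R with A = R^T R
    z = solve_triangular(R.T, b, lower=True)      # forward substitution: R^T z = X^T y
    return solve_triangular(R, z, lower=False)    # backward substitution: R beta = z
```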
See example of linear regression for a worked-out numerical example with three parameters.
Orthogonal decomposition methods
Orthogonal decomposition methods of solving the least squares problem are slower than the normal equations method but are more numerically stable.
The extra stability results from not having to form the product \(\mathbf X^\mathsf T\mathbf X\). The residuals are written in matrix notation as
\[\mathbf r=\mathbf y-\mathbf X\boldsymbol\beta.\]
The matrix X is subjected to an orthogonal decomposition; the QR decomposition will serve to illustrate the process:
\[\mathbf X=\mathbf Q\mathbf R,\]
where Q is an m×m orthogonal matrix and R is an m×n matrix which is partitioned into an n×n block, \(\mathbf R_n\), and an (m−n)×n zero block; \(\mathbf R_n\) is upper triangular.
The residual vector is left-multiplied by \(\mathbf Q^\mathsf T\):
\[\mathbf Q^\mathsf T\mathbf r=\mathbf Q^\mathsf T\mathbf y-\mathbf R\boldsymbol\beta=\begin{bmatrix}\mathbf u\\ \mathbf v\end{bmatrix},\qquad \mathbf u=\left(\mathbf Q^\mathsf T\mathbf y\right)_n-\mathbf R_n\boldsymbol\beta,\quad \mathbf v=\left(\mathbf Q^\mathsf T\mathbf y\right)_{m-n},\]
where \(\left(\mathbf Q^\mathsf T\mathbf y\right)_n\) denotes the first n elements of \(\mathbf Q^\mathsf T\mathbf y\) and \(\left(\mathbf Q^\mathsf T\mathbf y\right)_{m-n}\) the remaining m − n elements. The sum of squares of the transformed residuals, \(S=\mathbf u^\mathsf T\mathbf u+\mathbf v^\mathsf T\mathbf v\), is the same as before, because Q is orthogonal. The minimum value of S is attained when the upper block, u, is zero. Therefore the parameters are found by solving
\[\mathbf R_n\hat{\boldsymbol\beta}=\left(\mathbf Q^\mathsf T\mathbf y\right)_n.\]
These equations are easily solved, as \(\mathbf R_n\) is upper triangular.
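A sketch of this route using NumPy's reduced QR factorization (the helper name is mine; the triangular system is then solved by back substitution):

```python
# Solve the least-squares problem via X = Q R_n (reduced QR), then back substitution.
import numpy as np
from scipy.linalg import solve_triangular

def solve_by_qr(X, y):
    Q, Rn = np.linalg.qr(X, mode='reduced')            # Q: m x n orthonormal columns, Rn: n x n upper triangular
    return solve_triangular(Rn, Q.T @ y, lower=False)  # R_n beta = Q^T y
```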
An alternative decomposition of X is the singular value decomposition (SVD)[2]
\[\mathbf X=\mathbf U\boldsymbol\Sigma\mathbf V^\mathsf T,\]
where Σ is a diagonal matrix of singular values. This is effectively another kind of orthogonal decomposition, as both U and V are orthogonal. This method is the most computationally intensive, but it is particularly useful if the normal equations matrix, \(\mathbf X^\mathsf T\mathbf X\), is very ill-conditioned (i.e. if its condition number multiplied by the machine's relative round-off error is appreciably large). In that case, including the smallest singular values in the inversion merely adds numerical noise to the solution. This can be cured using the truncated SVD approach, giving a more stable and exact answer by explicitly setting to zero all singular values below a certain threshold and so ignoring them, a process closely related to factor analysis.
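A sketch of a truncated-SVD solution (the relative threshold and function name are illustrative choices, not prescriptions from the article): singular values below the threshold are dropped rather than inverted, so they cannot amplify round-off noise.

```python
# Truncated-SVD least squares: invert only the singular values above a relative threshold.
import numpy as np

def solve_by_truncated_svd(X, y, tol=1e-10):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    keep = s > tol * s[0]            # s is returned in decreasing order
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]      # singular values below the threshold are set to zero
    return Vt.T @ (s_inv * (U.T @ y))
```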
Weighted linear least squares
When the observations are not equally reliable, a weighted sum of squares,
\[S=\sum_{i=1}^{m}W_{ii}\,r_i^2,\]
may be minimized. Each element of the diagonal weight matrix, W, should ideally be equal to the reciprocal of the variance of the corresponding measurement.[3] The normal equations are then
\[\left(\mathbf X^\mathsf T\mathbf W\mathbf X\right)\hat{\boldsymbol\beta}=\mathbf X^\mathsf T\mathbf W\mathbf y.\]
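A sketch of weighted least squares with a diagonal weight matrix (the helper name is mine): scaling each row of X and y by the square root of its weight reduces the problem to an ordinary least-squares problem with the same normal equations as above.

```python
# Weighted least squares by row scaling: minimizing sum_i w_i * r_i^2 is equivalent
# to ordinary least squares on sqrt(w_i) * x_i and sqrt(w_i) * y_i.
import numpy as np

def weighted_least_squares(X, y, w):
    sw = np.sqrt(w)                  # w_i is ideally 1 / variance of observation i
    Xw = X * sw[:, None]
    yw = y * sw
    beta, _, _, _ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta
```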
Properties of the least-squares estimators
The gradient equations at the minimum can be written as
\[\mathbf X^\mathsf T\left(\mathbf y-\mathbf X\hat{\boldsymbol\beta}\right)=\mathbf 0.\]
A geometrical interpretation of these equations is that the vector of residuals, \(\hat{\mathbf r}=\mathbf y-\mathbf X\hat{\boldsymbol\beta}\), is orthogonal to the column space of X, since the dot product \(\hat{\mathbf r}^\mathsf T\mathbf X\mathbf v\) is equal to zero for any conformal vector, v. This means that \(\mathbf y-\mathbf X\hat{\boldsymbol\beta}\) is the shortest of all possible vectors \(\mathbf y-\mathbf X\boldsymbol\beta\), that is, the variance of the residuals is the minimum possible.
If the experimental errors, \(\varepsilon\), are uncorrelated, have a mean of zero and a constant variance, σ², the Gauss–Markov theorem states that the least-squares estimator, \(\hat{\boldsymbol\beta}\), has the minimum variance of all estimators that are linear combinations of the observations. In this sense it is the best, or optimal, estimator of the parameters. Note particularly that this property is independent of the statistical distribution function of the errors. In other words, the distribution function of the errors need not be a normal distribution.
For example, it is easy to show that the arithmetic mean of a set of measurements of a quantity is the least-squares estimator of the value of that quantity. If the conditions of the Gauss-Markov theorem apply, the arithmetic mean is optimal, whatever the distribution of errors of the measurements might be.
However, in the case that the experimental errors do belong to a normal distribution, the least-squares estimator is also a maximum likelihood estimator.[4]
These properties underpin the use of the method of least squares for all types of data fitting, even when the assumptions are not strictly valid.
Limitations
An assumption underlying the treatment given above is that the independent variable, x, is free of error. In practice, the errors on the measurements of the independent variable are usually much smaller than the errors on the dependent variable and can therefore be ignored. When this is not the case, total least squares, also known as the errors-in-variables model or rigorous least squares, should be used. This can be done by adjusting the weighting scheme to take into account errors on both the dependent and independent variables and then following the standard procedure.[5][6]
In some cases the (weighted) normal equations matrix is ill-conditioned; this occurs when the measurements have only a marginal effect on one or more of the estimated parameters.[7] In these cases, the least squares estimate amplifies the measurement noise and may be grossly inaccurate. Various regularization techniques can be applied in such cases, the most common of which is called Tikhonov regularization. If further information about the parameters is known, for example a range of possible values of \(\hat{\boldsymbol\beta}\), then minimax techniques can also be used to increase the stability of the solution.
Another drawback of the least squares estimator is the fact that the norm of the residuals, \(\|\mathbf y-\mathbf X\hat{\boldsymbol\beta}\|\), is minimized, whereas in some cases one is truly interested in obtaining a small error in the parameter \(\hat{\boldsymbol\beta}\), e.g., a small value of \(\|\hat{\boldsymbol\beta}-\boldsymbol\beta\|\). However, since \(\boldsymbol\beta\) is unknown, this quantity cannot be directly minimized. If a prior probability on \(\boldsymbol\beta\) is known, then a Bayes estimator can be used to minimize the mean squared error, \(E\left\{\|\hat{\boldsymbol\beta}-\boldsymbol\beta\|^2\right\}\). The least squares method is often applied when no prior is known. Surprisingly, however, better estimators can be constructed, an effect known as Stein's phenomenon. For example, if the measurement error is Gaussian, several estimators are known which dominate, or outperform, the least squares technique; the best known of these is the James–Stein estimator.
Parameter errors, correlation and confidence limits
The parameter values are linear combinations of the observed values:
\[\hat{\boldsymbol\beta}=\left(\mathbf X^\mathsf T\mathbf W\mathbf X\right)^{-1}\mathbf X^\mathsf T\mathbf W\mathbf y.\]
Therefore an expression for the errors on the parameters can be obtained by error propagation from the errors on the observations. Let the variance-covariance matrix for the observations be denoted by M and that of the parameters by \(\mathbf M^{\boldsymbol\beta}\). Then
\[\mathbf M^{\boldsymbol\beta}=\left(\mathbf X^\mathsf T\mathbf W\mathbf X\right)^{-1}\mathbf X^\mathsf T\mathbf W\,\mathbf M\,\mathbf W\mathbf X\left(\mathbf X^\mathsf T\mathbf W\mathbf X\right)^{-1}.\]
When \(\mathbf W=\mathbf M^{-1}\), this simplifies to
\[\mathbf M^{\boldsymbol\beta}=\left(\mathbf X^\mathsf T\mathbf W\mathbf X\right)^{-1}.\]
When unit weights are used (\(\mathbf W=\mathbf I\)) it is implied that the experimental errors are uncorrelated and all equal: \(\mathbf M=\sigma^2\mathbf I\), where σ² is known as the variance of an observation of unit weight and I is an identity matrix. In this case σ² is approximated by \(S/(m-n)\), where S is the minimum value of the objective function, so that
\[\mathbf M^{\boldsymbol\beta}\approx\frac{S}{m-n}\left(\mathbf X^\mathsf T\mathbf X\right)^{-1}.\]
In all cases, the variance of the parameter \(\beta_i\) is given by \(M^{\boldsymbol\beta}_{ii}\) and the covariance between parameters \(\beta_i\) and \(\beta_j\) by \(M^{\boldsymbol\beta}_{ij}\). The standard deviation is the square root of the variance, and the correlation coefficient is given by \(\rho_{ij}=M^{\boldsymbol\beta}_{ij}/(\sigma_{\beta_i}\sigma_{\beta_j})\). These error estimates reflect only random errors in the measurements. The true uncertainty in the parameters is larger due to the presence of systematic errors, which, by definition, cannot be quantified. Note that even though the observations may be uncorrelated, the parameters are always correlated.
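A sketch of these error estimates for the unit-weight case (the function name is mine): σ² is approximated by S/(m − n), and the parameter variance-covariance matrix, standard errors and correlation coefficients follow from \((\mathbf X^\mathsf T\mathbf X)^{-1}\).

```python
# Parameter standard errors and correlations for ordinary (unit-weight) least squares.
import numpy as np

def parameter_errors(X, y):
    m, n = X.shape
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    sigma2 = (r @ r) / (m - n)                   # variance of an observation of unit weight
    M_beta = sigma2 * np.linalg.inv(X.T @ X)     # variance-covariance matrix of the parameters
    std_err = np.sqrt(np.diag(M_beta))           # standard deviations (standard errors)
    corr = M_beta / np.outer(std_err, std_err)   # correlation coefficients between parameters
    return beta, std_err, corr
```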
It is often assumed, for want of any concrete evidence, that the error on a parameter belongs to a Normal distribution with a mean of zero and standard deviation σ. Under that assumption the following confidence limits can be derived.
- 68% confidence limits, \(\hat\beta_i\pm\sigma_{\beta_i}\)
- 95% confidence limits, \(\hat\beta_i\pm1.96\,\sigma_{\beta_i}\)
- 99% confidence limits, \(\hat\beta_i\pm2.58\,\sigma_{\beta_i}\)
The assumption is not unreasonable when m ≫ n. If the experimental errors are normally distributed the parameters will belong to a Student's t-distribution with m − n degrees of freedom. When m ≫ n the Student's t-distribution approximates a normal distribution. Note, however, that these confidence limits cannot take systematic error into account. Also, parameter errors should be quoted to one significant figure only, as they are subject to sampling error.[8]
When the number of observations is relatively small, Chebyshev's inequality can be used for an upper bound on probabilities, regardless of any assumptions about the distribution of experimental errors: the maximum probabilities that a parameter will be more than 1, 2 or 3 standard deviations away from its expectation value are 100%, 25% and 11% respectively.
Residual values and correlation
The residuals are related to the observations by
\[\hat{\mathbf r}=\mathbf y-\mathbf X\hat{\boldsymbol\beta}=\mathbf y-\mathbf X\left(\mathbf X^\mathsf T\mathbf X\right)^{-1}\mathbf X^\mathsf T\mathbf y.\]
The symmetric, idempotent matrix \(\mathbf X\left(\mathbf X^\mathsf T\mathbf X\right)^{-1}\mathbf X^\mathsf T\) is known in the statistics literature as the hat matrix, H. Thus,
\[\hat{\mathbf r}=\left(\mathbf I-\mathbf H\right)\mathbf y,\]
where I is an identity matrix. The variance-covariance matrix of the residuals, \(\mathbf M^{\mathbf r}\), is given by
\[\mathbf M^{\mathbf r}=\left(\mathbf I-\mathbf H\right)\mathbf M\left(\mathbf I-\mathbf H\right)^\mathsf T.\]
This shows that even though the observations may be uncorrelated, the residuals are always correlated.
If the experimental errors follow a normal distribution then, because of the linear relationship between residuals and observations, so should the residuals,[9] but since the observations are only a sample of the population of all possible observations, the residuals should belong to a Student's t-distribution. Studentized residuals are useful in making a statistical test for an outlier when a particular residual appears to be excessively large.
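A sketch of these relations for unit weights (the function name is mine; forming the hat matrix explicitly is only practical for small m):

```python
# Form the hat matrix H, the residual vector (I - H) y, and the residual
# variance-covariance matrix sigma^2 (I - H), whose off-diagonal terms show
# that the residuals are correlated even when the observations are not.
import numpy as np

def residual_analysis(X, y, sigma2):
    m = X.shape[0]
    H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix (symmetric, idempotent)
    r = (np.eye(m) - H) @ y                # residuals
    M_r = sigma2 * (np.eye(m) - H)         # residual variance-covariance matrix (M = sigma^2 I)
    return r, M_r
```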
Objective function
The objective function can be written as
\[S=\mathbf y^\mathsf T\left(\mathbf I-\mathbf H\right)^\mathsf T\left(\mathbf I-\mathbf H\right)\mathbf y=\mathbf y^\mathsf T\left(\mathbf I-\mathbf H\right)\mathbf y,\]
since \(\mathbf I-\mathbf H\) is also symmetric and idempotent. It can be shown from this[10] that the expected value of S is m − n. Note, however, that this is true only if the weights have been assigned correctly. If unit weights are assumed, the expected value of S is \((m-n)\sigma^2\), where σ² is the variance of an observation.
If it is assumed that the residuals belong to a normal distribution, the objective function, being a sum of weighted squared residuals, will belong to a chi-squared (χ²) distribution with m − n degrees of freedom. Some illustrative percentile values of χ² are given in the following table.[11]

m − n | 50% | 95% | 99%
10 | 9.34 | 18.3 | 23.2
25 | 24.3 | 37.7 | 44.3
100 | 99.3 | 124 | 136
These values can be used for a statistical criterion as to the goodness-of-fit. When unit weights are used, the numbers should be divided by the variance of an observation.
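Such a check can be sketched with SciPy's chi-squared percentiles instead of the table (the function name and the 5% significance level are my choices; the weights are assumed to be correctly assigned so that S itself follows χ² with m − n degrees of freedom):

```python
# Goodness-of-fit check: accept the fit if S is below the chosen chi-squared percentile.
from scipy.stats import chi2

def goodness_of_fit(S, m, n, alpha=0.05):
    """Return True if S is below the (1 - alpha) percentile of chi^2 with m - n degrees of freedom."""
    return S < chi2.ppf(1.0 - alpha, df=m - n)

print(goodness_of_fit(S=18.0, m=22, n=12))  # 18.0 < 18.3, the 95% value for 10 degrees of freedom
```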
Applications
- Polynomials in an independent variable, x
- Straight line: \(f(x,\boldsymbol\beta)=\beta_1+\beta_2 x\).[12]
- Quadratic: \(f(x,\boldsymbol\beta)=\beta_1+\beta_2 x+\beta_3 x^2\).
- Cubic, quartic and higher polynomials. For high-order polynomials the use of orthogonal polynomials is recommended.[7][13]
- Numerical smoothing and differentiation. This is an application of polynomial fitting.
- Multinomials in more than one independent variable, including surface fitting
- Curve fitting with B-splines [5]
- Chemometrics, Calibration curve, Standard addition, Gran plot, analysis of mixtures
Notes and references
- ^ Since and an alternative expression can be given for the slope.
- ^ C.L. Lawson and R.J. Hanson, Solving Least Squares Problems, Prentice-Hall,1974
- ^ This implies that the observations are uncorrelated. If the observations are correlated, the expression \(S=\sum_k\sum_j r_k W_{kj} r_j\) applies, where the weight matrix should, ideally, be equal to the inverse of the variance-covariance matrix of the observations.
- ^ H. Margenau and G.M. Murphy, The Mathematics of Physics and Chemistry, Van Nostrand, 1943, 1956
- ^ a b P. Gans, Data fitting in the Chemical Sciences, Wiley, 1992
- ^ W.E. Deming, Statistical adjustment of Data, Wiley, 1943
- ^ a b When fitting polynomials the normal equations matrix is a Vandermonde matrix. Vandermonde matrices become increasingly ill-conditioned as the order of the matrix increases.
- ^ J. Mandel, The Statistical Analysis of Experimental Data, Interscience, 1964
- ^ K.V. Mardia, J.T. Kent and J.M. Bibby, Multivariate analysis, Academic Press, 1979
- ^ W. C. Hamilton, Statistics in Physical Science, The Ronald Press, New York, 1964
- ^ M.R. Spiegel, Probability and Statistics, Schaum's Outline Series, McGraw-Hill 1982
- ^ F.S. Acton, Analysis of Straight-Line Data, Wiley, 1959
- ^ P.G. Guest, Numerical Methods of Curve Fitting, Cambridge University Press, 1961.
- Björck, Åke (1996). Numerical methods for least squares problems. Philadelphia: SIAM. ISBN 0-89871-360-9.