Talk:Least squares

This article is within the scope of WikiProject Statistics, which collaborates to improve Wikipedia's coverage of statistics. If you would like to participate, please visit the project page.

WikiProject Mathematics
This article is within the scope of WikiProject Mathematics, which collaborates on articles related to mathematics.
Mathematics rating: B-Class, Mid-priority. Field: Applied mathematics
One of the 500 most frequently viewed mathematics articles.

Moved from article:

(this article needs work, more content, explanation of the Gauss method)

Addition

I have moved some information that belongs in least squares instead of in the article on linear regression. The article Least-squares estimation of linear regression coefficients should also be examined and merged with this. Woollymammoth 02:14, 24 February 2007 (UTC)

political science

A number of politics articles link here; can this article be made more relevant to the political usage, please? --Gregstephens 02:20, 25 Mar 2005 (UTC)

I have no idea what the political usage is, and I am unable to find those political articles linking here (I clicked on "What links here"). Unless it's very much related to this topic, I think its inclusion would be inappropriate; it should be in a separate article titled least squares (politics) or the like. Michael Hardy 03:02, 25 Mar 2005 (UTC)
It is linked to from proportional representation; it is a measure used to figure out how disproportional an electoral system is. The formula used is:
Lsq = \sqrt{\frac{1}{2}\sum_i (V_i - S_i)^2}

Is that similar to what is used here? Or is it different? If it is, I will start an article on measures of disproportionality, as that can cover more. --Gregstephens 04:52, 25 Mar 2005 (UTC)

I find the remarks above cryptic. What, in particular, are the quantities you are calling Vi and Si? Michael Hardy 30 June 2005 21:11 (UTC)

Firstly, the formula cited above seems to be basically the same as the usual definition, save for some proportionality constant. So there is no need to involve politics here. Secondly, there is no such formula in proportional representation at this date. It was removed on the 5th of June. --HelgeStenstrom 12:37, 21 November 2005 (UTC)

The Gallagher Index is sometimes referred to as the least squares method of calculating disproportionality. --LeftyG 07:24, 6 January 2006 (UTC)
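
For concreteness, the index can be computed directly; a minimal sketch in C (matching the language of the example further down this page), assuming v and s hold vote and seat percentages:

 #include <math.h>
 #include <stddef.h>

 /* Gallagher least-squares index: the square root of half the sum of
    the squared vote-seat differences, for n parties. Sketch only. */
 double gallagher_lsq(const double *v, const double *s, size_t n) {
     double sum = 0.0;
     size_t i;

     for (i = 0; i < n; i++) {
         double d = v[i] - s[i];
         sum += d * d;
     }
     return sqrt(0.5 * sum);
 }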

Recursion

Could someone put in a short discussion of how to solve the problem using recursion? I filled a request for Recursive least squares algorithm by redirecting to this article, but perhaps whoever requested it won't find all they're looking for here.--Joel 03:26, 25 Jun 2005 (UTC)

I don't think that redirect is very helpful as the current article does not talk about solving the least squares problem recursively at all. Unfortunately, I can't add it to the article since I have no idea how the algorithm is supposed to work. -- Jitse Niesen (talk) 11:30, 25 Jun 2005 (UTC)
Isn't that at Solving linear recurrence relations in Recurrence relation? 212.102.225.147 30 June 2005 12:54 (UTC)
I am sorry, but I don't see anything there about least squares problems. Could you please be more specific? -- Jitse Niesen (talk) 2 July 2005 23:52 (UTC)
No (sorry from my part), my response was based more on a hunch than on in-depth study of the article. 212.102.225.147 16:21, 11 July 2005 (UTC)
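
For anyone landing here from the redirect: one common textbook statement of the recursive least squares update (the exponentially weighted form) is sketched below in C. This is an illustration of the usual algorithm, not a description of anything currently in the article.

 #include <stddef.h>

 /* One step of exponentially weighted recursive least squares (RLS):
    given a new input vector x and target d, update the weight vector w
    and the inverse-correlation matrix P (n x n, row-major). lam is the
    forgetting factor, typically just below 1. Sketch only. */
 void rls_update(double *w, double *P, const double *x, double d,
                 double lam, size_t n) {
     double Px[64];               /* assumes n <= 64 for this sketch */
     double e = d, denom = lam;
     size_t i, j;

     for (i = 0; i < n; i++) {    /* Px = P x;  e = d - w.x */
         Px[i] = 0.0;
         for (j = 0; j < n; j++)
             Px[i] += P[i*n + j] * x[j];
         denom += x[i] * Px[i];   /* denom = lam + x' P x */
         e -= w[i] * x[i];
     }
     for (i = 0; i < n; i++)      /* gain k = Px / denom;  w += k e */
         w[i] += (Px[i] / denom) * e;
     for (i = 0; i < n; i++)      /* P = (P - k (Px)') / lam */
         for (j = 0; j < n; j++)
             P[i*n + j] = (P[i*n + j] - Px[i] * Px[j] / denom) / lam;
 }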

Ordinate

Ah, "the residuals are measured along the ordinate (assuming no error in the x-values)" now allows me to understand what you mean by ordinate. But it wasn't at all obvious. Doesn't "between the fitted function and the data" make this clear anyway, and rather more clear? William M. Connolley 09:25:26, 2005-08-24 (UTC).

"Between the fitted function and the data" is certainly erroneous. When "distance" is measured between a point and a particular line (in this case the fitted function), the outcome will vary with the choosen direction along which you want to measure. In geometry "distance" would create no problem, because by definition the measurement is assumed to be in orthogonal direction. However, in OLS, this is certainly not the case. The objective in OLS (as well as the optimization of the underlying partial derivative problem) is to minimize the error in the Y dimension. As a consequence, in OLS the "distance" is measured along the direction of the Y-axis (= the ordinate).

A challenge: try to define a formula for Orthogonal Least Squares, and you'll understand immediately why Ordinary LS is such a popular method. User:Witger

Do you have any info about this, Witger? I am using OR at the moment (or Total Least Squares regression, to be more precise; isn't that the same?). Until now I haven't been able to find any 'basic' information about Orthogonal Regression except: SIAM Review 36 (2) 258-264. Jeroenemans 16:42, 5 December 2006 (UTC)
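
To make Witger's point concrete: for a straight line f(x) = a + bx, ordinary least squares minimizes the vertical (ordinate) offsets, while orthogonal regression minimizes perpendicular distances, which drags the slope into the denominator:

 S_{\mathrm{ordinary}} = \sum_i (y_i - a - b x_i)^2, \qquad S_{\mathrm{orthogonal}} = \sum_i \frac{(y_i - a - b x_i)^2}{1 + b^2}

The 1 + b^2 term is why the orthogonal normal equations are no longer linear in the parameters, and presumably why ordinary least squares is the more popular method.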

Nonrobustness of Least Squares and IRLS

I have noticed that outside certain fields (computer vision) there seems to be little discussion of robust statistics. Least squares is utterly crippled by any form of asymmetric outliers, and I believe that more of this article should be spent on discussing the limitations of the method and pointing to similar methods (Iteratively Reweighted Least Squares, or Least Median Squares, or consensus-based algorithms like RANSAC... interestingly Wikipedia doesn't have any articles on IRLS or Least Median Squares, despite their importance in high-noise environments... perhaps I'll write some). Anyone have any thoughts on this? - JustinWick 18:09, 5 December 2005 (UTC)
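
For reference, the IRLS scheme mentioned above alternates between computing weights from the current residuals and solving a weighted least squares problem; in the usual M-estimation form (stated here as an assumption, since no particular variant is cited):

 \boldsymbol\beta^{(t+1)} = \arg\min_{\boldsymbol\beta} \sum_i w_i^{(t)} \left( y_i - f(x_i, \boldsymbol\beta) \right)^2, \qquad w_i^{(t)} = \frac{\psi(r_i^{(t)})}{r_i^{(t)}},

where \psi is the derivative of the chosen robust loss; \psi(r) = r recovers plain least squares, while a bounded \psi down-weights outliers.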

Merge

I support the merge of Least mean squares into this page, possibly with a section on adaptive filters. --Pfafrich 23:13, 10 January 2006 (UTC)

I won't mind. But that text needs a good cleanup before that. Anybody willing to do the work? Oleg Alexandrov (talk) 00:13, 11 January 2006 (UTC)
No, LMS is a largely unrelated topic, an adaptive filter algorithm. Dicklyon (talk) 06:52, 5 December 2007 (UTC)
I oppose. The article Coefficient of determination will not be comprehensive if this article disappears. A.L.

I support the merge of RSS with "Least Squares" (this article). Could we reference the definition as a sub-section titled "RSS"? Having a reference for the term would be helpful in other articles as well. What's required before a merge is allowed? Yoderj 19:19, 14 February 2007 (UTC)

  • Merge for Explained_sum_of_squares and Total sum of squares, since both have no helpful links to them and are just silly little articles that should just be on this page. Coefficient of determination is not comprehensive as is; the important info from those other articles really should be on that page for it to be clear and understandable. Pdbailey (talk) 04:06, 5 December 2007 (UTC)
Then better start a new merge proposal instead of messing with this old one about something else. Dicklyon (talk) 06:52, 5 December 2007 (UTC)
This is listed as a proposed merger on Wikipedia:Proposed_mergers, where am I supposed to comment? Pdbailey (talk) 14:09, 5 December 2007 (UTC)
I took that 2006 item out of there, since it was totally out of process; see the top of that page about process. Start a new merge proposal if you want to. Dicklyon (talk) 15:47, 6 December 2007 (UTC)
What Dicklyon meant to say was, "Thanks for bringing up this important issue. Please note that the standard way to propose a merge is to add {{mergefrom}} and {{mergeto}} templates on the pages you think should be merged, and start a discussion in a new section on the talk page of the merge target. Take a look at WP:MERGE for more information. Thanks again, and have a nice day!" (Wasn't it, Dick? :-) ) --Zvika (talk) 20:00, 6 December 2007 (UTC)
Zvika, thanks for being kind and explaining Dicklyon's remark. I'm very familiar with the merge process and don't need any help navigating it. I'd encourage you to find the person who put the proposal there if you want to be of assistance. Thanks again for the kinder wording. Pdbailey (talk) 23:02, 6 December 2007 (UTC)
Dicklyon, do you really think that you are AGF when you do that? Maybe trying to help along the newbie, or at least telling them what you are doing when you blast what they think is a valid proposal, would be best. Pdbailey (talk) 23:02, 6 December 2007 (UTC)
I'm not following you. Zvika's over-interpretation of my terse remark is not too far off. But I'm not sure how to reconcile your familiarity with merge process with your calling this a "valid proposal". I simply said that if you want to make it a valid proposal, do so. Dicklyon (talk) 07:48, 7 December 2007 (UTC)
Dicklyon, sorry for the confusion, I had assumed that since this article was listed on Wikipedia:Proposed_mergers that meant someone tried to list it there and was paying attention. Now I understand that it was just detritus. Thanks for sticking with me. Pdbailey (talk) 03:37, 8 December 2007 (UTC)

Redirect

Should not LSQ redirect to, or be a disambiguation for, this article? —Preceding unsigned comment added by 130.238.7.41 (talk) 05:16, 5 October 2007 (UTC)

History: Fact Checking

The submission by 81.173.155.84 (diff) reads, in part:

On New Year's Day in 1801, the Italian astronomer Giuseppe Piazzi discovered the asteroid Ceres. He was able to track its path for 40 days, until Ceres disappeared behind the sun.

I'm removing the wording "until Ceres disappeared behind the sun" on the grounds of error. Modern astronomy software confirms that on 10 Feb 1801 Ceres was still more than 90 degrees away from the sun, therefore not behind the Sun, and not even lost in the Sun's glare. (Jean Meeus, in Mathematical Astronomy Morsels, p. 300, agrees.) Conjunction with the Sun did not occur until July 1801. A much more likely guess is that Ceres, which had dimmed from magnitude 7.8 to 8.5 and was also progressively lower in the western sky each evening by the end of twilight, became too dim to be seen, but the date at which this happened depends on the aperture of Piazzi's telescope. Perhaps also Ceres became lost in the star fields between the Pleiades and Hyades. My own experience is that Ceres is difficult to track with 80mm binoculars in the best of conditions, so I favor the dimness explanation. I suggest someone find Piazzi's notebook and see what he said. - JEBrown87544 16:05, 22 July 2006 (UTC)

Robert Adrain

The Robert Adrain page states,

He is chiefly remembered for his formulation of the method of least squares, published in 1808. Adrain certainly did not know of the work of C.F. Gauss on least squares (published 1809), although it is possible that he had read A.M. Legendre's article on the topic (published 1804).

Anyone know anything about this? --Salix alba (talk) 20:54, 29 September 2006 (UTC)


Wording in intro

I don't like saying LS is "a mathematical optimization technique." I would prefer to say that it is a criterion of agreement between the data and the fitted curve. Once a criterion has been decided upon, LS or whatever, one needs a mathematical optimization technique such as Levenberg-Marquardt, Gauss-Newton, etc.

Dfarrar 00:25, 2 March 2007 (UTC)

problems with "least mean squares" in intro

There are several problems with the passage in the intro on least mean squares:

"The approach is known as least mean squares (LMS) when the number of measured data is 1 and the gradient descent method is used to minimize the squared residual. LMS is known to minimize the expectation of the squared residual, with the smallest operations (per iteration). But it requires a large number of iterations to converge."

(1) For purposes of an intro, this seems to be too much on one specific application of least squares. In particular specific numerical optimization algorithms, and problems with numerical optimization, probably do not belong in an introduction. (2) I don't see why the number of observations should be 1. (3) This appears to be from signal processing methodology, which should have been stated. Dfarrar 12:43, 8 April 2007 (UTC)
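
For readers unfamiliar with the signal-processing usage: the LMS filter in that passage performs a stochastic gradient step on the instantaneous squared error, one sample at a time, which is presumably what "the number of measured data is 1" was getting at. A minimal C sketch of the standard update (an illustration, not the deleted article text):

 #include <stddef.h>

 /* One LMS step: nudge the weight vector w along the negative gradient
    of the instantaneous squared error e^2, where e = d - w.x is the
    error for the single new sample and mu is the step size. */
 void lms_update(double *w, const double *x, double d, double mu, size_t n) {
     double e = d;
     size_t i;

     for (i = 0; i < n; i++)
         e -= w[i] * x[i];        /* e = d - w.x */
     for (i = 0; i < n; i++)
         w[i] += mu * e * x[i];   /* gradient descent step */
 }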

Role of Gauss-Markov

While there may be generalizations to the nonlinear case, the basic Gauss-Markov theorem applies to the linear case. It has to do with the conditions for the least squares estimates to be best in terms of mean square error, among linear unbiased estimators. In the nonlinear case the estimator is not generally linear or unbiased. The rationale for the method needs to be developed. Part of it is computational convenience, particularly in the linear case. Also, LS estimators are maximum likelihood estimators in particular situations involving Gaussian errors. Dfarrar 14:07, 11 April 2007 (UTC)
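
For reference, the setting in which the theorem is usually stated is the linear model

 \mathbf{y} = \mathbf{X}\boldsymbol\beta + \boldsymbol\varepsilon, \qquad E(\boldsymbol\varepsilon) = \mathbf{0}, \quad \operatorname{Var}(\boldsymbol\varepsilon) = \sigma^2 \mathbf{I},

under which \hat{\boldsymbol\beta} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y} has minimum variance among linear unbiased estimators; as noted above, none of this transfers directly to the nonlinear case.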

Gauss-Markov, put it in but do it right

The article should have a correct cross-reference to Guass-Markov, integrated into a statement of statistical properties of least squares, recognizing differences between linear and nonlinear cases. Dfarrar 22:26, 13 April 2007 (UTC)

here is the stuff I deleted from intro

I think this is too technical for an intro, without further development. I think the intro should provide a pretty informal statement of what least squares is all about. The material will be unfamiliar to a lot of good statisticians. Where does it come from? Following is what I took out.

"A related method is the 'least mean squares (LMS) method. It occurs when the number of measured data is 1 and the gradient descent method is used to minimize the squared residual. LMS is known to minimize the expectation of the squared residual, with the smallest number of operations per iteration). However, it requires a large number of iterations to converge."

"Furthermore, many other types of optimization problems can be expressed in a least squares form, by either minimizing energy or maximizing entropy."

Dfarrar 19:11, 24 May 2007 (UTC)

Generalize the number of parameters?

The Least squares and regression analysis section gives a generalization where there are 2 parameters, when in practice there can be an arbitrary number of parameters to estimate. An example model using 3 parameters can be found at Linear_least_squares#Applications. I'm not sure exactly how to fix this situation, other than giving multiple examples based on different analytic forms of models.

The article becomes misleading, suggesting that it always models a line, because of the sub-sections that call beta the slope and alpha the intercept. That is correct for the model presented, but at first glance it appears that a line is the only thing that can be modelled. The Problem statement section is clear that this can estimate any number of linear parameters, but I think that explanation is too technical for people unfamiliar with the subject.

Straha 206th 21:27, 12 July 2007 (UTC)
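
One way to present the generalization (a sketch of the standard basis-function formulation, not text from the article): with p basis functions the model stays linear in the parameters however many there are,

 f(x, \vec{\beta}) = \sum_{j=1}^{p} \beta_j \phi_j(x),

so the slope/intercept case is just p = 2 with \phi_1(x) = 1 and \phi_2(x) = x, and \phi_j(x) = x^{j-1} with p = 3 gives the three-parameter example linked above.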

Perpendicular Offsets?

I see no mention of least squares to minimize perpendicular offsets, which is discussed on MathWorld. Perhaps that could be added. 155.212.242.34 13:08, 3 October 2007 (UTC)

links to applications in "see also"

Least squares is enormously useful and has thousands of applications. I do not think it makes sense to list these in the "see also" section, unless they are exceedingly well-known examples. Doing so would unnecessarily clutter the page. I have therefore removed the link to Vanicek analysis. --Zvika 20:05, 9 October 2007 (UTC)

Ambiguous math notation

 S = \sum_{i=1}^n (y_i - f(\vec{x}_i,\vec{a}))^2

Just to clear things up, which of the following C functions is the closest to the above?

 #include <math.h>
 #include <stddef.h>   /* for size_t */

 /* The model function */
 extern double f(double x, double *a);

 /* Interpretation 1: square each residual, then sum */
 double S(double *y, double *x, double *a, size_t n) {
     double result=0.0;
     size_t i;

     for(i=0; i < n; i++) {
         result += pow(y[i] - f(x[i], a), 2.0);
     }
     return result;
 }


 /* Interpretation 2: sum the residuals, then square the total */
 double S_alt(double *y, double *x, double *a, size_t n) {
     double result=0.0;
     size_t i;

     for(i=0; i < n; i++) {
         result += y[i] - f(x[i], a);
     }
     return pow(result, 2.0);
 }

216.215.128.49 (talk) 11:30, 17 November 2007 (UTC)

Hi, S is the sum of the squares, not the square of the sums. So, your first function is correct. One could add a pair of brackets to the formula to make that clear:  S = \sum_{i=1}^n ((y_i - f(\vec{x}_i,\vec{a}))^2) GluonBall (talk) 12:17, 17 November 2007 (UTC)
But the notation is perfectly standard and clear without the extra brackets, so don't. Dicklyon (talk) 16:11, 17 November 2007 (UTC)
It should also be noted that \vec{a} and \vec{x} have different lengths, because \vec{a} is a list of parameters that need to be fiddled with, while \vec{x} is a list of input values for which f() should return the corresponding value from \vec{y}. Therefore, the C program needs two size arguments, not just one, and f() needs to receive the size argument for \vec{a} to keep it from reading unallocated memory. This would be so much easier if the program were written in D:
 import std.math;
 
 double S(double[] y, double[] x, double[] a, double delegate(double, double[]) f) {
     double result=0.0;

     // D arrays carry their own lengths, so no separate size arguments are needed
     assert(y.length == x.length);

     for(size_t i=0; i < x.length; i++) {
         result += pow(y[i] - f(x[i], a), 2.0);
     }
     return result;
 }

A major proposal

Wikipedia coverage of the subject of least squares analysis is currently somewhat chaotic. I would like to propose a major re-structuring on the following basis (please respond in the relevant section).

Principal article

This one, amplified and serving as an introduction to the main articles.

Subsidiary articles

  • Regression: isotonic regression, Generalized linear model, Least-squares estimation of linear regression coefficients, Non-linear regression, Partial least squares regression, Robust regression, Segmented regression
  • Least squares: Coefficient of determination, Curve fitting, Least mean squares filter, Least-squares spectral analysis, Linear model, Moving least squares, Numerical smoothing and differentiation, Outliers, Overdetermined system, Polynomial interpolation?, Recursive least squares, Sum of squares, Trend estimation, Total least squares or errors-in-variables model
  • Others? Applications?

Articles to merge


I would agree to merge Total sum of squares and Residual sum of squares, and also Levenberg-Marquardt algorithm. I would be reluctant to merge the others. Yes, the articles will be much shorter once you delete the redundant material, but this doesn't mean that they won't be expanded in the future. I think decisions to merge should be made on the basis of the maximum potential length of the page at a reasonable point in the future. Least squares is already a rather long page and I'd hate to see it get much longer, and it would if these topics were expanded at all. Also, the material on pages like Weighted least squares is more technical, and I am a big fan of compartmentalizing technical material on separate pages that refer back to more elementary pages. Cazort (talk) 21:57, 22 January 2008 (UTC)
I don't think any of them are appropriate to merge; e.g., the Gauss-Newton algorithm is an optimization method, while non-linear regression is something that may involve an optimization/estimation method but would also involve, e.g., prediction. — fnielsen (talk) 23:49, 22 January 2008 (UTC)

Template

A template with two (or more) rows (regression and least squares) would be a helpful navigation aid, but I don't feel competent to construct one. Petergans (talk) 17:17, 22 January 2008 (UTC)

I think this is one of the few cases where I would not be inclined to include a template/navbox. Least squares connects and relates to a broad variety of different topics, and I think a template could potentially downplay alternatives to least squares. My biggest issue with how least squares is presented (both on wikipedia and in most textbooks) is that it's presented without enough question and skepticism, and I think a template could be dangerously moving in the direction of less skepticism. Cazort (talk) 21:47, 22 January 2008 (UTC)
I could see a navbox that breaks down LS type algos by assumption. But a lot of thought would have to go into this. Pdbailey (talk) 16:46, 24 January 2008 (UTC)

I've dug out some more links. There are three main strands: Maths, Statistics and applications (mainly science). The total collection now stands as follows:

Curve fitting, Errors and residuals in statistics, Gauss–Markov theorem, Gauss-Newton algorithm, Generalized linear model, Least-squares estimation of linear regression coefficients, Levenberg-Marquardt algorithm, Linear least squares, Linear model, Linear regression, Mean squared error, Mean squared prediction error, Minimum mean-square error, Nonlinear regression, Numerical smoothing and differentiation, Regression analysis, Ridge regression, Robust regression, Root mean square deviation, Segmented regression, Squared deviations, Studentized residual, Total least squares, Weighted least squares

Surely some navigation aid is desirable? This would not downplay other optimization topics, but would make it easier to access all the articles relating to least squares and regression. Petergans (talk) 14:20, 26 January 2008 (UTC)

Notation

The notation used in some articles is clearly conflicting. For example, X is used for coefficients in some articles, for parameters in others. Can a uniform notation be agreed? I also propose that the notation of linear algebra should be the main one used. Here is a table of actual usage.

Article | ind. variable | observations | parameter
Generalized linear model, linear model | X | \vec Y | \beta
Least-squares estimation of linear regression coefficients | X | \vec Y | \theta
Nonlinear regression | x | y | \theta
curve fitting | x | y | a, b
linear least squares, total least squares | A | b | x
non-linear least squares | f | S | p
regression analysis | x | y | \alpha, \beta, ...
linear regression | X | Y | \beta
numerical smoothing and differentiation | x | y | a
proposal | X | y | p

Any objections to the proposal that this notation should be used in most articles relating to least squares, that is, posing the problem as

\mathbf{X p = y}

with solution

\mathbf{\hat p = \left( X^T X \right)^{-1} X^T y}?

Petergans (talk)
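
For concreteness, in the proposed notation a straight-line fit y ≈ α + βx (purely an illustration, not taken from any of the articles above) would have

 \mathbf{X} = \begin{pmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix}, \qquad \mathbf{p} = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}, \qquad \mathbf{y} = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix},

so that \mathbf{Xp = y} is the overdetermined system and the normal equations give the fitted intercept and slope.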

I disagree slightly with the framing of the problem and slightly with the proposed solution. Some of the notational differences make sense (i.e. a scalar, vector, and matrix are often typeset differently), and there are times when it makes sense to break out the coefficients from the vector form (specifically in regression analysis); how about a proposal of one notation for scalars, one for vectors, and one for matrices? Also, p is not nearly as good as β or θ, in that the latter two are canonical. Pdbailey (talk) 16:39, 24 January 2008 (UTC)
Using X for a matrix and p for a vector is very confusing. Besides, there is no need to unify the notation. Wikipedia articles are supposed to stand on their own. As long as a given article has good and self-consistent notation, that is good enough as far as I see.
I'd suggest you focus on improving individual articles rather than make them read as one (Wikipedia is not a book). One obvious area of improvement could be adding more references to this article for example. Oleg Alexandrov (talk) 16:40, 24 January 2008 (UTC)
I can see the value of unified notation -- very often I read one article and then click on another to get further information, and having to figure out the notation anew each time is time-consuming. I have seen β, θ, and x used for the parameter in books, so I also think it should be one of those rather than p. --Zvika (talk) 18:48, 24 January 2008 (UTC)
Bear in mind that there's no way that all mathematics articles on Wikipedia can have unified notation; there are simply too many, with their own standards in their respective fields. E.g. there's frequently conflict in cross-links between engineering and physics articles: the first discipline tends to use j for \sqrt{-1} and f as the independent variable in Fourier transforms, while the other uses i and ω. There's no way that complete consistency can ever be achieved. Oli Filth(talk) 18:53, 24 January 2008 (UTC)
That is obviously true. But if we have a small group of related articles, and an editor willing and able to unify the notation in them, then I really don't see what the problem is. --Zvika (talk) 19:18, 24 January 2008 (UTC)

I agree it's impossible to be wholly consistent. That's why I originally said "most" rather than "all", since I believe that some greater consistency is desirable, as indicated by Zvika. I can accept β for the parameters, even though it clashes with stability constant (my area of specialization). I was taught upper case for matrices and lower case for vectors, bold for whole items, italics for an element. Although it won't satisfy everyone, it's also the most commonly used notation, at this time, in the least-squares subject area. Lastly I never intended a wholesale revision; it has to be applied judiciously as indicated by Pdbailey. Petergans (talk) 20:17, 24 January 2008 (UTC)

I very much disagree with using X for a matrix. The expression Xp=y is just so confusing. Use A or B for matrices. Oleg Alexandrov (talk) 04:38, 25 January 2008 (UTC)
y=Xβ is standard notation in many statistics books (e.g. Lehmann and Casella), but I agree it is confusing for people in other fields (such as myself), so I too think that it should be avoided. In electrical engineering, I believe H is often used for the matrix (e.g. the textbook by Kay), but A or B will also do. --Zvika (talk) 08:44, 25 January 2008 (UTC)

OK. \mathbf{A \beta = y} seems to be the preferred notation. However, \mathbf{X} appears to be standard in regression articles, so I won't change the notation in those articles. Petergans (talk) 09:34, 25 January 2008 (UTC)

Hmm, \mathbf{A \beta = y} just looks off to me because it is different from almost all my statistics books. I think the standard in statistics books is \mathbf{X}, but mathematicians who are not statisticians often prefer the early letters. I'd say the plurality of the articles above use \mathbf{X} and you should stick with that or not change it. Pdbailey (talk) 13:59, 25 January 2008 (UTC)

Final decision: \mathbf{X \beta = y}. The clincher for me is the fact that this is much the same notation as used in linear regression. This will help to make it clear that regression and least squares are not really different topics, but different ways of looking at the same problem. Petergans (talk) 14:20, 26 January 2008 (UTC)

Draft replacement for linear least squares

I have prepared a thorough revision of this article in User:Petergans/a/sandbox. The main points to note are

  • Use of linear algebra notation
  • Inclusion of weights right from the start
  • Shift of emphasis from pure mathematics to applications
  • Addition of sections relating to errors
  • Significant selection of references

Unless there are strong objections to the contrary, I shall replace the article by this draft as a first step towards the overall objective of rationalizing the treatment of least squares analysis in Wikipedia. Petergans (talk) 17:17, 22 January 2008 (UTC)

That article is WAAAAY ahead of the current one. I would be in favor of replacing it; we can tweak it further then! Cazort (talk) 22:51, 22 January 2008 (UTC)
The added information in the new version is definitely an improvement. However, I wonder if it is wise to begin the article with the most general case and with a long sequence of definitions. In the proposed version, the reader has to follow through 4 display equations and the definitions of the Jacobian and residual, before getting to what least-squares is about. Even then, the definition is a general case and the usefulness might not be clear. Compare this with the current version, in which the article begins with a very simple, informal explanation of the goal ("we want to find a solution \hat{\mathbf{x}} for the 'equation' A\hat{\mathbf{x}} \approx \mathbf{b}"). This goal is something that is good to keep in mind while working through the math. I would suggest having a short "Motivation" section with an informal discussion such as the one in the first sentence of the Definition section in the current version. Just my two cents... --Zvika (talk) 11:55, 23 January 2008 (UTC)

User:Petergans/a/sandbox Now in revised notation and expanded. I accept that the introduction is a bit fierce, but I regard it as essential a) to include weights from the start and b) to state the minimization problem in the context of modelling experimental data. Petergans (talk) 14:20, 26 January 2008 (UTC)

Discussion

Computations

Can I suggest that computation details might sit most naturally in two articles,

  • Methods for linear least squares
  • Methods for nonlinear least squares

the first treating Cholesky, QR and SVD as applied to least squares; the second giving an overview of nonlinear least squares via Newton's method, Gauss-Newton, Gradient descent, Conjugate gradient, and Levenberg-Marquardt? Jheald (talk) 17:50, 22 January 2008 (UTC)

Comment on proposal

I disagree with merging Gauss-Newton algorithm and Levenberg-Marquardt algorithm because they are different (albeit related) algorithms.

I am very wary of grand schemes, and I do not support the proposal above. Let's start simple. You want to rewrite linear least squares, so let's discuss that and shelve everything else for later.

In short, I feel you are proposing too many changes, I don't feel they are all justified, and it will be easier for everybody if we proceed one step at a time to avoid wasted effort later on. Oleg Alexandrov (talk) 04:36, 23 January 2008 (UTC)

Another proposal

I have prepared a draft of Non-linear Least Squares in User:Petergans/b/sandbox. I propose that this article should replace (with a rename) Gauss-Newton algorithm. Petergans (talk) 12:59, 30 January 2008 (UTC)

If not merge then what?

What is to be done about Levenberg-Marquardt algorithm? There is clearly too much repetition of basics with the proposed Non-linear least squares as it stands. I disagree with Oleg that it is a separate algorithm as the Marquardt parameter is only ever invoked in the context of a non-linear least squares refinement. After all, it is used to modify the least squares normal equations. On the other hand, Levenberg-Marquardt algorithm does contain some useful references etc. Petergans (talk) 12:59, 30 January 2008 (UTC)

The rewrite of linear least squares

See Wikipedia talk:WikiProject Mathematics#Least squares objectionable rewrite. Oleg Alexandrov (talk) 16:03, 31 January 2008 (UTC)

Proposal: least squares is linear algebra and statistics is an application

The article as it is now [1] introduces a number of concepts that will only confuse a reader who is not well versed in statistics. Yet the least squares problem is just very simple linear algebra, with no need for statistics whatsoever. The statistical interpretation and motivation should be kept separate. Thus, the following topics should be treated separately, for different audiences, and cross-referenced:

  • basic formulation and solution of least squares problem
  • use in statistics and motivation from statistics
  • advanced mathematical topics, including how to actually compute the least squares solution in a numerically stable and efficient way (which is by no means via the textbook formulas), and relations to QR, SVD, and the pseudoinverse

This could be in separate sections or separate articles. Thoughts? Jmath666 (talk) 01:58, 2 February 2008 (UTC)

I think this article is still OK, having some motivation, history, and a gentle introduction to how the problem is set up. It is the article on linear least squares which starts right away with an in-depth foray into statistical data estimation, likely to confuse anybody who is not a specialist.
And it is the article on linear least squares which is more about linear algebra than this one. So, how about starting there? :) Oleg Alexandrov (talk) 04:32, 2 February 2008 (UTC)

Indeed, what a mess. This article might be moved to Least squares (statistics) and then some parts of linear least squares split off to Least squares (linear algebra).

Actually, I find this article more confusing than linear least squares. Maybe I am too much of a specialist. Jmath666 (talk) 05:00, 2 February 2008 (UTC)

Least squares: implementation of proposal

which contain more technical details, but it has sufficient detail to stand on its own.

In addition, Gauss-Newton algorithm has been revised. The earlier article contained a serious error regarding the validity of setting second derivatives to zero. Points to notice include:

  • Adoption of a standard notation in all four articles mentioned above. This makes for easy cross-referencing. The notation also agrees with many of the articles on regression.
  • New navigation template
  • Weighted least squares should be deleted. The first section is adequately covered in Linear least squares and Non-linear least squares. The second section (Linear Algebraic Derivation) is rubbish.

This completes the first phase of restructuring of the topic of least squares analysis. From now on I envisage only minor revision of related articles. May I suggest that comments relating to more than one article be posted on talk: least squares and that comments relating to a specific article be posted on the talk page of that article. This note is being posted on all four talk pages and Wikipedia talk:WikiProject Mathematics.

Petergans (talk) 09:52, 8 February 2008 (UTC)

That's a big improvement. I appreciate your hard work on this. A couple of constructive (I hope) comments and a question:
  • I think the distinction between linear and non-linear least squares needs to be emphasised more - those new to least-squares often find it confusing that "linear" here means linear in the parameters, not the independent variables xi, so that linear least squares includes polynomial fitting in particular. (Maybe I should go ahead and add something on this myself; a worked contrast follows this comment.)
  • Some material is repeated in more than one article, in particular least squares#Differences between linear and non-linear least squares and non-linear least squares#Differences between linear and non-linear least squares. This is hard to maintain (the two will tend to drift apart) so I think a cross-reference is much preferable.
  • Those sections say both that divergence is "quite rare" in LLSQ (unlike NLLSQ) and that it's non-iterative. I grant that LLSQ is not immune from numerical issues (especially if poorly programmed) but how can a non-iterative process ever be said to diverge?
Qwfp (talk) 12:56, 8 February 2008 (UTC)
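
To spell out the first point, here is the worked contrast referred to above (the exponential model is a standard textbook illustration, not drawn from either article):

 f(x, \boldsymbol\beta) = \beta_1 + \beta_2 x + \beta_3 x^2 \quad \text{(quadratic in } x \text{ but linear in } \boldsymbol\beta \text{: LLSQ, e.g. polynomial fitting)}

 f(x, \boldsymbol\beta) = \beta_1 e^{\beta_2 x} \quad \text{(nonlinear in } \beta_2 \text{: NLLSQ, iterative methods needed)}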
I have not always been sure where to place material. Do you think it would be enough for least squares#Differences between linear and non-linear least squares to be referenced in the non-linear article? Also I moved the numerical example from least squares into linear least squares but I'm not sure if that was the right thing to do.
An earlier draft made the distinction as follows: linear: f = α + βx + γx²; non-linear: f = α + βx + γ²x. But I discarded it as too confusing!
LLSQ can diverge if the sum of squares increases. I think I have seen this happen, but obviously it would have been due to ill-conditioning and/or round-off error. Petergans (talk) 14:31, 8 February 2008 (UTC)
I removed the part about linear least squares not being iterative. "Iterative methods for linear systems" is a fairly huge section of numerical linear algebra. The Jacobi method and Gauss-Seidel method are fairly ancient examples of this. A particularly common 20th century example is Conjugate gradient method, but Lanczos methods are popular in the research community (sometimes called Krylov methods). JackSchmidt (talk) 14:55, 8 February 2008 (UTC)
I'm not sure how to smoothly add this to the article, but an incredibly important aspect of least squares is solving under-determined systems with the norm of the solution minimized. The techniques are more or less identical, but the ability to choose the solution this way is important for many other ideas and algorithms. At some point, though, perhaps the right thing to do is to find the linear solver articles and ensure that important aspects are mentioned in both places. I agree with Qwfp though that a particularly good discussion should only be in one article, with references from the others. Every article can have a sentence or two about it in their own language, though. JackSchmidt (talk) 15:04, 8 February 2008 (UTC)
Most versions of linear least squares I've seen on wikipedia jump straight to the normal equations. I admit I don't understand why they are mentioned at all on least squares, and I think they are overemphasized at linear least squares. The normal equations are nice for proofs, but so is the SVD. The benefit of a more SVD based presentation is that the geometry is more evident and the mathematical significance of the solutions is clear. By geometry, I mean the inner product which defines angles is more obviously relevant, so that for instance the "weight" matrix now can be understood as describing the cost of movement in a non-isotropic medium. This was a problem with the pre-Petergans version as well. JackSchmidt (talk) 15:26, 8 February 2008 (UTC)
A minor problem is that Linear least squares might lead someone to believe calculating the SVD is a wise way to solve least squares problems. I have no objection to this, as there are many methods where the truncated SVD provides the theoretical analysis of the algorithm, but these methods do not produce the SVD as a by-product. Some of them are just slightly-modified QR factorizations. "Incomplete" methods often fall into this category. In some sense this is how the Jacobi method variants work too. At any rate, "this method is the most computationally intensive" scares me. Perhaps it is not so far from the truth as these truncated methods are usually applied to large problems, but they are applied only because they are so amazingly cheaper computationally than the other methods. JackSchmidt (talk) 15:26, 8 February 2008 (UTC)
I agree with this. Truncation by SVD is a very useful form of data compression, but surely too technical for WP? The words "computationally intensive" are carried over from the earlier versions. I left them there because calculating the singular values is indeed a somewhat lengthier process compared with the other methods.

Petergans (talk) 20:08, 8 February 2008 (UTC)
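
On the under-determined case raised by JackSchmidt above: the usual statement is that among all minimizers of \|\mathbf{X}\boldsymbol\beta - \mathbf{y}\|, the one of minimum norm is picked out by the pseudoinverse, which the SVD makes explicit:

 \hat{\boldsymbol\beta} = \mathbf{X}^+ \mathbf{y} = \mathbf{V} \boldsymbol\Sigma^+ \mathbf{U}^T \mathbf{y}, \qquad \mathbf{X} = \mathbf{U} \boldsymbol\Sigma \mathbf{V}^T,

where \boldsymbol\Sigma^+ inverts the nonzero singular values and zeroes the rest. This makes the geometry visible in the way described above, though, as also noted, practical solvers rarely form the SVD explicitly.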

Gauss-Newton

I wrote a comment on the rewrite of that article at Talk:Gauss-Newton algorithm#Response to rewrite. Oleg Alexandrov (talk) 03:24, 9 February 2008 (UTC)

Constraints

The LASSO method is a welcome addition; it draws attention to the fact that WP has little on constrained optimization. I did consider including, in least squares, something on non-negativity constraints (applied by a change of variable such as a = b²) and equality constraints, but decided that it was too technical. Any comments? Petergans (talk) 09:22, 11 February 2008 (UTC)

Disagree. What has Lasso got to do with least squares? AFAIK it's just another ad hoc functional that some people like for particular applications; especially since it involves an L1 norm, it does not belong here. Non-negativity constraints should also be treated as a form of regularization; these topics should be moved to Regularization (mathematics or machine learning), although those sections are a bit of a disaster at present. 68.101.160.213 (talk) 07:27, 17 April 2008 (UTC)

Error?

Under the section "Solving the Least Squares", beneath the text "r_i = y_i - f(x_i, \boldsymbol\beta), the gradient equations become", I believe there is an extra r_i in the equation. It's been a while since I've studied math, so I apologize if my interpretation is incorrect. —Preceding unsigned comment added by 75.180.35.23 (talk) 05:22, 2 March 2008 (UTC)

The text is correct. Petergans (talk) 07:36, 2 March 2008 (UTC)
Yep, \frac{d}{dx} (f(x))^2 = 2f(x)f'(x). —Preceding unsigned comment added by Oleg Alexandrov (talkcontribs) 16:12, 2 March 2008 (UTC)
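
For anyone else puzzling over this: the equation in question is the gradient of S = \sum_i r_i^2, which by the chain rule is

 \frac{\partial S}{\partial \beta_j} = -2 \sum_{i=1}^n r_i \frac{\partial f(x_i, \boldsymbol\beta)}{\partial \beta_j} = 0,

so the r_i really does belong there; it plays the role of f(x) in Oleg's 2f(x)f'(x).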