Laguerre's method
In numerical analysis, Laguerre's method is a root-finding algorithm tailored to polynomials. In other words, Laguerre's method can be used to numerically solve the equation

$$p(x) = 0$$

for a given polynomial p. One of the most useful properties of this method is that, from extensive empirical study, it is very close to being a "sure-fire" method: it is almost guaranteed to converge to some root of the polynomial, no matter what initial guess is chosen.
Derivation
The fundamental theorem of algebra states that every polynomial p of degree n can be written in the form

$$p(x) = C(x - x_1)(x - x_2)\cdots(x - x_n),$$

where x_k are the roots of the polynomial. If we take the natural logarithm of the absolute value of both sides, we find that

$$\ln|p(x)| = \ln|C| + \ln|x - x_1| + \ln|x - x_2| + \cdots + \ln|x - x_n|.$$

Denote the derivative by

$$G = \frac{d}{dx}\ln|p(x)| = \frac{1}{x - x_1} + \frac{1}{x - x_2} + \cdots + \frac{1}{x - x_n} = \frac{p'(x)}{p(x)}$$

and the negative of the second derivative by

$$H = -\frac{d^2}{dx^2}\ln|p(x)| = \frac{1}{(x - x_1)^2} + \frac{1}{(x - x_2)^2} + \cdots + \frac{1}{(x - x_n)^2} = \left(\frac{p'(x)}{p(x)}\right)^2 - \frac{p''(x)}{p(x)}.$$

We then make what Acton calls a 'drastic set of assumptions': the root we are looking for, say x_1, is a certain distance away from our guess x, while all the other roots are clustered together at some other common distance. If we denote these distances by

$$a = x - x_1$$

and

$$b = x - x_k, \qquad k = 2, 3, \ldots, n,$$

then our equation for G may be written as

$$G = \frac{1}{a} + \frac{n - 1}{b}$$

and that for H becomes

$$H = \frac{1}{a^2} + \frac{n - 1}{b^2}.$$

Solving these equations for a, we find that

$$a = \frac{n}{G \pm \sqrt{(n - 1)\left(nH - G^2\right)}}.$$
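The intermediate algebra is a short elimination: using the equation for G to express (n − 1)/b and substituting it into the equation for H gives

$$(n-1)H = \frac{n-1}{a^2} + \left(G - \frac{1}{a}\right)^2 = \frac{n}{a^2} - \frac{2G}{a} + G^2,$$

that is, a quadratic in 1/a,

$$n\left(\frac{1}{a}\right)^2 - 2G\,\frac{1}{a} + \left(G^2 - (n-1)H\right) = 0,$$

and the quadratic formula then yields the expression for a given above.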
Definition
The above derivation leads to the following method:
- Choose an initial guess x_0.
- For k = 0, 1, 2, …
  - Calculate $G = \frac{p'(x_k)}{p(x_k)}$.
  - Calculate $H = G^2 - \frac{p''(x_k)}{p(x_k)}$.
  - Calculate $a = \frac{n}{G \pm \sqrt{(n-1)\left(nH - G^2\right)}}$, where the sign is chosen to give the denominator with the larger absolute value, to avoid loss of significance as the iteration proceeds.
  - Set $x_{k+1} = x_k - a$.
- Repeat until a is small enough or the maximum number of iterations has been reached.
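The steps above can be sketched in a few lines of Python. The following is a minimal illustration only; the routine name, the Horner-style simultaneous evaluation of p, p' and p'', and the stopping tolerances are incidental choices for this sketch, not part of the method itself.

```python
import cmath

def laguerre(coeffs, x0, tol=1e-12, max_iter=100):
    """Minimal sketch of Laguerre's method.

    coeffs -- polynomial coefficients [a_n, ..., a_1, a_0], highest degree first
    x0     -- initial guess (real or complex)
    Returns an approximate root, possibly complex.
    """
    n = len(coeffs) - 1                       # degree of the polynomial
    x = complex(x0)
    for _ in range(max_iter):
        # Evaluate p(x), p'(x) and p''(x) together by Horner's rule.
        p, dp, ddp = coeffs[0], 0j, 0j
        for c in coeffs[1:]:
            ddp = ddp * x + 2 * dp
            dp = dp * x + p
            p = p * x + c
        if abs(p) < tol:                      # x is already (numerically) a root
            return x
        G = dp / p
        H = G * G - ddp / p
        root = cmath.sqrt((n - 1) * (n * H - G * G))
        # Pick the sign that gives the denominator of larger magnitude,
        # to avoid loss of significance.
        denom = G + root if abs(G + root) >= abs(G - root) else G - root
        if denom == 0:                        # extremely rare; nudge the iterate
            x += 1
            continue
        a = n / denom
        x -= a
        if abs(a) < tol:                      # step small enough: accept x
            return x
    raise RuntimeError("no convergence within max_iter iterations")

# Example: find a root of p(x) = x^3 - 1 starting from x0 = 0.5.
print(laguerre([1, 0, 0, -1], 0.5))           # approximately (1+0j)
```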
Properties
If x is a simple root of the polynomial p, then Laguerre's method converges cubically whenever the initial guess x0 is close enough to the root x. On the other hand, if x is a multiple root, the convergence is only linear. This rate is obtained at the price of evaluating the polynomial and its first and second derivatives at each stage of the iteration.
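To make the contrast concrete, here is a small sketch (the specific cubics and the starting point are arbitrary choices for illustration) that repeatedly applies one Laguerre update to a polynomial with a simple root at x = 1 and to one with a double root there; the error shrinks roughly cubically in the first case and only by a roughly constant factor per step in the second.

```python
import cmath

def laguerre_step(p, dp, ddp, x, n):
    """One Laguerre update x -> x - a for a degree-n polynomial,
    given callables for p, p' and p''."""
    G = dp(x) / p(x)
    H = G * G - ddp(x) / p(x)
    root = cmath.sqrt((n - 1) * (n * H - G * G))
    denom = G + root if abs(G + root) >= abs(G - root) else G - root
    return x - n / denom

# Simple root at x = 1:  p(x) = (x - 1)(x - 2)(x - 3)
p1, dp1, ddp1 = (lambda x: (x - 1) * (x - 2) * (x - 3),
                 lambda x: 3 * x * x - 12 * x + 11,
                 lambda x: 6 * x - 12)

# Double root at x = 1:  q(x) = (x - 1)**2 * (x - 2)
p2, dp2, ddp2 = (lambda x: (x - 1) ** 2 * (x - 2),
                 lambda x: 3 * x * x - 8 * x + 5,
                 lambda x: 6 * x - 8)

for label, p, dp, ddp in [("simple root", p1, dp1, ddp1),
                          ("double root", p2, dp2, ddp2)]:
    x = complex(0.7)
    print(label)
    for _ in range(6):
        if p(x) == 0:        # exact root reached; G would be undefined
            break
        x = laguerre_step(p, dp, ddp, x, 3)
        print(f"  |x - 1| = {abs(x - 1):.3e}")
```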
A major advantage of Laguerre's method is that it is almost guaranteed to converge to some root of the polynomial no matter where the initial approximation is chosen. This is in contrast to other methods, such as the Newton-Raphson method, which may fail to converge for poorly chosen initial guesses. Laguerre's method may even converge to a complex root of the polynomial, because the square root taken in the calculation of a above may be of a negative number. This may be considered an advantage or a liability, depending on the application to which the method is applied.

Empirical evidence has shown that convergence failure is extremely rare, which makes Laguerre's method a good candidate for a general-purpose polynomial root-finding algorithm. However, given the fairly limited theoretical understanding of the algorithm, many numerical analysts are hesitant to use it as such, and prefer better-understood methods such as the Jenkins-Traub method, for which more solid theory has been developed.

Nevertheless, the algorithm is fairly simple to use compared to these other "sure-fire" methods, easy enough to be carried out by hand or with the aid of a pocket calculator when an automatic computer is unavailable. The speed at which the method converges means that one is only very rarely required to compute more than a few iterations to obtain high accuracy.
References
- Forman S. Acton, Numerical Methods that Work, Harper & Row, 1970, ISBN 0-88385-450-3.
- S. Goedecker, Remark on Algorithms to Find Roots of Polynomials, SIAM J. Sci. Comput. 15(5), 1059–1063 (September 1994).
- Wankere R. Mekwi, Iterative Methods for Roots of Polynomials, Master's thesis, University of Oxford, 2001.
- V. Y. Pan, Solving a Polynomial Equation: Some History and Recent Progress, SIAM Rev. 39(2), 187–220 (June 1997).
- Anthony Ralston and Philip Rabinowitz, A First Course in Numerical Analysis, McGraw-Hill, 1978, ISBN 0-07-051158-6.