Fixed-point iteration
In numerical analysis, fixed-point iteration is a method of computing fixed points of a function by repeatedly applying it.
More specifically, given a function $f$ defined on the real numbers with real values and given a point $x_0$ in the domain of $f$, the fixed-point iteration is
$$x_{n+1} = f(x_n), \quad n = 0, 1, 2, \dots,$$
which gives rise to the sequence $x_0, x_1, x_2, \dots$ which is hoped to converge to a point $x_{\text{fix}}$. If $f$ is continuous, then one can prove that the obtained $x_{\text{fix}}$ is a fixed point of $f$, i.e.,
$$f(x_{\text{fix}}) = x_{\text{fix}}.$$
More generally, the function $f$ can be defined on any metric space with values in that same space.
Examples
- A first simple and useful example is the Babylonian method for computing the square root of $a > 0$, which consists in taking $f(x) = \tfrac{1}{2}\left(x + \tfrac{a}{x}\right)$, i.e. the mean value of $x$ and $a/x$, to approach the limit $x = \sqrt{a}$ (from whatever starting point $x_0 \gg 0$). This is a special case of Newton's method quoted below; a code sketch appears after this list.
- The fixed-point iteration $x_{n+1} = \cos x_n$ converges to the unique fixed point of the function $f(x) = \cos x$ for any starting point $x_0$. This example does satisfy (at the latest after the first iteration step) the hypotheses of the Banach fixed-point theorem. Hence, the error after $n$ steps satisfies $|x_n - x_{\text{fix}}| \le \frac{q^n}{1-q}\,|x_1 - x_0| = C q^n$ (where we can take $q = 0.85$, if we start from $x_0 = 1$; see the numerical check after this list). When the error is less than a multiple of $q^n$ for some constant $q < 1$, we say that we have linear convergence. The Banach fixed-point theorem allows one to obtain fixed-point iterations with linear convergence.
- The fixed-point iteration $x_{n+1} = 2x_n$ will diverge unless $x_0 = 0$. We say that the fixed point of $f(x) = 2x$ is repelling.
- The requirement that $f$ is continuous is important, as the following example shows. The iteration
$$x_{n+1} = \begin{cases} \dfrac{x_n}{2}, & x_n \ne 0 \\ 1, & x_n = 0 \end{cases}$$
converges to 0 for all values of $x_0$. However, 0 is not a fixed point of the function
$$f(x) = \begin{cases} \dfrac{x}{2}, & x \ne 0 \\ 1, & x = 0 \end{cases}$$
as this function is not continuous at $x = 0$, and in fact has no fixed points.
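To make the Babylonian example concrete, here is a minimal Python sketch; the helper name `sqrt_babylonian`, the tolerance, and the stopping rule are illustrative choices, not part of the classical method:

```python
def sqrt_babylonian(a, x0=1.0, tol=1e-12, max_iter=100):
    """Fixed-point iteration with f(x) = (x + a/x) / 2, the Babylonian map."""
    x = x0
    for _ in range(max_iter):
        x_next = 0.5 * (x + a / x)
        if abs(x_next - x) < tol:   # successive iterates agree: accept
            return x_next
        x = x_next
    return x

print(sqrt_babylonian(2.0))   # ~1.4142135623730951
```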
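The linear convergence in the cosine example can likewise be checked numerically. A rough sketch: the reference value `x_fix` is itself obtained by iterating many times, which is adequate here since the iteration provably converges, and the observed errors are compared against the bound $C q^n$ with $q = 0.85$ and $C = |x_1 - x_0| / (1 - q)$:

```python
import math

# Locate the fixed point of cos to high accuracy by iterating many times.
x_fix = 1.0
for _ in range(1000):
    x_fix = math.cos(x_fix)          # converges to ~0.7390851332151607

# Check the linear-convergence bound |x_n - x_fix| <= C * q**n with q = 0.85.
q = 0.85
C = abs(math.cos(1.0) - 1.0) / (1 - q)   # C = |x_1 - x_0| / (1 - q), x_0 = 1
x = 1.0
for n in range(1, 20):
    x = math.cos(x)
    print(f"n={n:2d}  error={abs(x - x_fix):.3e}  bound={C * q ** n:.3e}")
```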
Applications
- Newton's method for finding a root of a given differentiable function $f(x)$ is $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$. If we write $g(x) = x - \frac{f(x)}{f'(x)}$, we may rewrite the Newton iteration as the fixed-point iteration $x_{n+1} = g(x_n)$. If this iteration converges to a fixed point $x$ of $g$, then $x = g(x) = x - \frac{f(x)}{f'(x)}$, so $\frac{f(x)}{f'(x)} = 0$. Since $1/f'(x)$ is nonzero wherever it is defined, it follows that $f(x) = 0$: $x$ is a root of $f$. Under the assumptions of the Banach fixed-point theorem, the Newton iteration, framed as a fixed-point method, demonstrates (at least) linear convergence. However, a more detailed analysis shows quadratic convergence, i.e., $|x_n - x| < C q^{2^n}$, under certain circumstances (a code sketch follows this list).
- Halley's method is similar to Newton's method but, when it works correctly, its error is $|x_n - x| < C q^{3^n}$ (cubic convergence). In general, it is possible to design methods that converge with speed $C q^{k^n}$ for any $k \in \mathbb{N}$. As a general rule, the higher the $k$, the less stable the method is, and the more computationally expensive it gets. For these reasons, higher-order methods are typically not used (a sketch of Halley's iteration likewise follows this list).
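As a concrete illustration of the Newton iteration framed as a fixed-point method, here is a minimal Python sketch; the helper name `newton_fixed_point` and the stopping rule are illustrative assumptions, not a standard API. It finds the root $\sqrt{2}$ of $f(x) = x^2 - 2$ by iterating $g(x) = x - f(x)/f'(x)$:

```python
def newton_fixed_point(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Iterate the fixed-point map g(x) = x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        x_next = x - f(x) / f_prime(x)   # one application of g
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Root of f(x) = x^2 - 2: the fixed point of g is sqrt(2).
print(newton_fixed_point(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))
```

Note that for this particular $f$, the map $g(x) = x - \frac{x^2 - 2}{2x} = \tfrac{1}{2}\left(x + \tfrac{2}{x}\right)$ is exactly the Babylonian map above, illustrating why the Babylonian method is a special case of Newton's method.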
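For comparison, here is a sketch of Halley's iteration $x_{n+1} = x_n - \frac{2 f(x_n) f'(x_n)}{2 f'(x_n)^2 - f(x_n) f''(x_n)}$, again on $f(x) = x^2 - 2$; the helper name `halley` is an assumption for illustration:

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's method: cubic convergence when it works correctly."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        x_next = x - 2 * fx * dfx / (2 * dfx * dfx - fx * d2fx)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(halley(lambda x: x * x - 2, lambda x: 2 * x, lambda x: 2.0, x0=1.0))
# ~1.4142135623730951, with roughly three times as many correct digits per step
```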
Properties
If a function $f$ defined on the real line with real values is Lipschitz continuous with Lipschitz constant $L < 1$, then $f$ has precisely one fixed point, and the fixed-point iteration converges towards that fixed point for any initial guess $x_0$. This theorem can be generalized to any complete metric space.
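The error estimate behind this statement comes from a short standard argument, sketched here for completeness. Since $f$ is Lipschitz with constant $L < 1$,
$$|x_{n+1} - x_n| = |f(x_n) - f(x_{n-1})| \le L\,|x_n - x_{n-1}| \le \dots \le L^n\,|x_1 - x_0|,$$
so for $m > n$ the triangle inequality gives
$$|x_m - x_n| \le \sum_{k=n}^{m-1} |x_{k+1} - x_k| \le \frac{L^n}{1-L}\,|x_1 - x_0|.$$
Hence $(x_n)$ is a Cauchy sequence, it converges to some $x_{\text{fix}}$ by completeness, and letting $m \to \infty$ yields the a priori bound $|x_{\text{fix}} - x_n| \le \frac{L^n}{1-L}\,|x_1 - x_0|$, which is the linear convergence seen in the examples above.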
The speed of convergence of the iteration sequence can be increased by using a convergence acceleration method such as Aitken's delta-squared process. The application of Aitken's method to fixed-point iteration is known as Steffensen's method, and it can be shown that Steffensen's method yields a rate of convergence that is at least quadratic.
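Under these definitions, a minimal Python sketch of Steffensen's method follows; the helper name `steffensen` and the stopping rule are illustrative assumptions. Each step performs two plain fixed-point updates and then applies Aitken's delta-squared formula to extrapolate:

```python
import math

def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Accelerate the fixed-point iteration x = f(x) with Aitken's delta-squared."""
    x = x0
    for _ in range(max_iter):
        y1 = f(x)            # one plain fixed-point step
        y2 = f(y1)           # a second plain step
        denom = y2 - 2 * y1 + x
        if denom == 0:       # already converged; Aitken's formula would divide by 0
            return y2
        x_next = x - (y1 - x) ** 2 / denom   # Aitken's delta-squared extrapolation
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(steffensen(math.cos, 1.0))  # ~0.7390851332151607, in far fewer steps than plain iteration
```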
References
- Burden, Richard L.; Faires, J. Douglas (1985). "2.2 Fixed-Point Iteration". Numerical Analysis (3rd ed.). PWS Publishers. ISBN 0-87150-857-5.