Fixed point iteration

In numerical analysis, fixed point iteration is a method of computing fixed points of iterated functions.

More specifically, given a function f defined on the real numbers with real values (or, more generally, defined on a metric space with values in itself) and given a point x0 in the domain of f, the fixed point iteration is

x_{n+1}=f(x_n), \, n=0, 1, 2, \dots

which gives rise to the sequence x_0, x_1, x_2, \dots, which is hoped to converge to a point x. If f is continuous and the sequence converges, then its limit x is a fixed point of f, i.e.,

f(x) = x.
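
In code, the iteration is nothing more than a loop that repeatedly applies f. The routine below is a minimal Python sketch, not a canonical implementation; the function name, tolerance, and stopping rule are illustrative choices:

    def fixed_point_iterate(f, x0, tol=1e-12, max_iter=1000):
        """Apply x_{n+1} = f(x_n) until successive iterates agree to within tol."""
        x = x0
        for _ in range(max_iter):
            x_next = f(x)
            if abs(x_next - x) < tol:   # successive iterates have stabilized
                return x_next
            x = x_next
        raise RuntimeError("fixed point iteration did not converge")

For example, fixed_point_iterate(lambda x: 0.5 * (2.0 / x + x), 1.0) returns an approximation of \sqrt 2 (see the Examples section below).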

Examples

  • A first simple and useful example is the Babylonian method for computing the square root of a > 0, which consists of taking f(x)=\frac 12\left(\frac ax + x\right), i.e. the mean value of x and a/x, to approach the limit x = \sqrt a (from any starting point x_0 > 0). This is a special case of Newton's method, discussed in the Applications section below.
  • The fixed point iteration x_{n+1}=\sin x_n\, with initial value x_0 = 2 converges to 0. This example does not satisfy the hypotheses of the Banach fixed point theorem, and so its speed of convergence is very slow.
  • The fixed point iteration x_{n+1}=\cos x_n\, converges to the unique fixed point of the function f(x)=\cos x\, for any starting point x_0. This example does satisfy the hypotheses of the Banach fixed point theorem. Hence, the error after n steps satisfies |x_n-x| \leq \frac{q^n}{1-q} |x_1 - x_0| = C q^n (where we can take q = 0.85 if we start from x_0 = 1). When the error is less than a multiple of q^n for some constant 0 < q < 1, we say that we have linear convergence. The Banach fixed point theorem allows one to obtain fixed point iterations with linear convergence. Both this and the Babylonian iteration are illustrated in the sketch following this list.
  • The fixed point iteration x_{n+1}=2x_n\, will diverge unless x0 = 0. We say that the fixed point of f(x)=2x\, is repelling.
  • The requirement that f is continuous is important, as the following example shows. The iteration
x_{n+1} =
\begin{cases}
\frac{x_n}{2}, & x_n \ne 0\\
1, & x_n = 0
\end{cases}

converges to 0 for all values of x_0. However, 0 is not a fixed point of the function

f(x) =
\begin{cases}
\frac{x}{2}, & x \ne 0\\
1, & x = 0
\end{cases}

since this function is not continuous at x = 0 and in fact has no fixed points.
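
As a rough numerical check of the two convergent examples above, here is a minimal Python sketch (the iteration counts are illustrative choices, not prescribed by the examples):

    import math

    # Babylonian method for sqrt(2): f(x) = (a/x + x) / 2
    a, x = 2.0, 1.0
    for _ in range(6):
        x = 0.5 * (a / x + x)
    print(x, math.sqrt(a))   # agrees with sqrt(2) to machine precision

    # Cosine iteration: converges linearly to the unique fixed point of cos
    x = 1.0
    for _ in range(90):
        x = math.cos(x)
    print(x)                 # approximately 0.7390851332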

Applications

  • Newton's method for a given differentiable function f(x) is x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}. If we write g(x)=x-\frac{f(x)}{f'(x)}, we may rewrite the Newton iteration as the fixed point iteration x_{n+1}=g(x_n). If this iteration converges to a fixed point x of g, then x=g(x)=x-\frac{f(x)}{f'(x)}, so \frac{f(x)}{f'(x)}=0. Since f'(x) is finite, this forces f(x) = 0: x is a root of f. Assuming the hypotheses of the Banach fixed point theorem are satisfied, the Newton iteration converges at least linearly. However, a more detailed analysis shows that under certain circumstances |x_n-x|<Cq^{2^n}. This is called quadratic convergence; a numerical sketch follows this list.
  • Halley's method is similar to Newton's method but, when it works correctly, its error satisfies |x_n-x|<Cq^{3^n} (cubic convergence). In general, it is possible to design methods that converge with speed Cq^{k^n} for any k\in \Bbb N. As a rule of thumb, the larger k is, the less stable the method is and the more computationally expensive each step becomes. For these reasons, higher order methods are typically not used.
  • Runge-Kutta methods and numerical ordinary differential equation (ODE) solvers in general can be viewed as fixed point iterations. Indeed, the core idea when analyzing the A-stability of ODE solvers is to start with the special case y'=ay, where a is a complex number, and to check whether the ODE solver converges to the fixed point y = 0 whenever the real part of a is negative.[1]
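
To see the quadratic error decay concretely, here is a minimal Python sketch of the Newton iteration viewed as a fixed point iteration, for the illustrative choice f(x) = x^2 - 2 (so the root is \sqrt 2):

    # Newton's method as the fixed point iteration x_{n+1} = g(x_n),
    # where g(x) = x - f(x)/f'(x).  Here f(x) = x**2 - 2, with root sqrt(2).
    def f(x):
        return x * x - 2.0

    def f_prime(x):
        return 2.0 * x

    x = 2.0
    for n in range(6):
        x = x - f(x) / f_prime(x)          # one fixed point step for g
        print(n, x, abs(x - 2.0 ** 0.5))   # error roughly squares each step

For this particular f, g(x) = x - \frac{x^2-2}{2x} = \frac 12\left(\frac 2x + x\right), so the iteration coincides with the Babylonian method from the Examples section.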

Properties

If a function f defined on the real line with real values is Lipschitz continuous with Lipschitz constant L < 1 (that is, f is a contraction), then f has precisely one fixed point, and the fixed point iteration converges towards that fixed point for any initial guess x_0. This is the Banach fixed point theorem, and it generalizes to any complete metric space.

The speed of convergence of the iteration sequence can be increased by using a convergence acceleration method such as Aitken's delta-squared process. The application of Aitken's method to fixed point iteration is known as Steffensen's method, and it can be shown that Steffensen's method yields a rate of convergence that is at least quadratic.
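
A minimal Python sketch of Steffensen's method, assuming the standard Aitken delta-squared update (the function name and the fixed step count are illustrative):

    import math

    def steffensen(f, x, steps=5):
        # Each pass takes two plain fixed point steps and combines them
        # with Aitken's delta-squared formula.
        for _ in range(steps):
            y = f(x)                        # first plain step
            z = f(y)                        # second plain step
            denom = z - 2.0 * y + x
            if denom == 0.0:                # already (numerically) converged
                return z
            x = x - (y - x) ** 2 / denom    # accelerated update
        return x

    # A handful of accelerated steps matches what roughly ninety plain
    # cosine steps achieved in the Examples section:
    print(steffensen(math.cos, 1.0))        # approximately 0.7390851332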

Notes

  1. ^ One may also consider certain iterations A-stable if the iterates stay bounded for a long time, which is beyond the scope of this article.
