Superconvergence

In numerical analysis, a superconvergent method is one that converges faster than generally expected. For example, in the finite element method approximation to Poisson's equation in two dimensions using piecewise linear elements, the average error in the gradient is first order. However, under certain conditions it is possible to recover the gradient at particular points within each element with second-order accuracy.
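A simplified one-dimensional illustration of this effect (using the piecewise-linear interpolant of a known smooth function rather than an actual finite element solve, and `sin` as an arbitrary choice of smooth function): the slope of the interpolant on each element approximates the true derivative to first order in general, but to second order at element midpoints.

```python
import math

# Simplified 1D illustration (not a full FEM solve): for a smooth u, the
# slope of its piecewise-linear interpolant on a mesh of width h matches
# the true derivative to first order in general, but to second order at
# element midpoints -- the points where the gradient superconverges.

u = math.sin   # sample smooth function (an arbitrary choice for the demo)
du = math.cos  # its exact derivative

def gradient_errors(n):
    """Max slope error at element midpoints vs. at quarter points, on [0, 1]."""
    h = 1.0 / n
    mid_err = quarter_err = 0.0
    for j in range(n):
        a, b = j * h, (j + 1) * h
        slope = (u(b) - u(a)) / h  # the interpolant's constant slope here
        mid_err = max(mid_err, abs(slope - du(a + h / 2)))
        quarter_err = max(quarter_err, abs(slope - du(a + h / 4)))
    return mid_err, quarter_err

for n in (10, 20, 40):
    m, q = gradient_errors(n)
    print(f"h = 1/{n}: midpoint error {m:.2e}, quarter-point error {q:.2e}")
```

Halving h should cut the quarter-point error roughly in half (first order) but cut the midpoint error roughly by four (second order).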

There is an alternative definition of the term, unrelated to the above. A sequence x_0, x_1, ... superconverges to 0 if, when the x_i are written in base 2, each number x_i starts with 2^i − 1 ≈ 2^i zeroes. For example, the following sequence superconverges to 0:

x_0 = 1/2     = .1
x_1 = 1/4     = .01
x_2 = 1/16    = .0001
x_3 = 1/256   = .00000001
x_4 = 1/65536 = .0000000000000001

In this case it is easy to see that the number of leading binary zeroes approximately doubles at each step.
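The sequence above is x_i = 2^(−2^i), and the doubling of leading zeroes can be checked directly from the binary expansions (a small sketch using exact rational arithmetic):

```python
from fractions import Fraction

# The example sequence is x_i = 2^(-2^i). Written in base 2, x_i should
# start with 2^i - 1 leading zeroes, so the count roughly doubles each step.

def binary_expansion(q, digits):
    """First `digits` binary digits of a rational q in (0, 1), after the point."""
    bits = []
    for _ in range(digits):
        q *= 2
        bits.append('1' if q >= 1 else '0')
        if q >= 1:
            q -= 1
    return ''.join(bits)

for i in range(5):
    x_i = Fraction(1, 2 ** (2 ** i))
    bits = binary_expansion(x_i, 2 ** i)
    zeroes = len(bits) - len(bits.lstrip('0'))
    print(f"x_{i} = .{bits}  ({zeroes} leading zeroes)")
```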

A sequence {x_i} superconverges to x if {x_i − x} superconverges to 0, and a sequence {y_i} is said to be superconvergent if there exists a y to which the sequence superconverges.

We see from the example above that a sequence superconverges to 0 if each term is roughly the square of the previous one. In other words, this definition of superconvergence is the same as quadratic convergence.

When solving an equation f(x) = 0 for a root a using Newton's method, the sequence of approximations superconverges to a if f′(a) is not zero, that is, if a is a simple root.
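This can be observed numerically. A minimal sketch, taking f(x) = x² − 2 as an example equation (its root a = √2 has f′(a) = 2√2 ≠ 0): each error should be roughly the square of the previous one.

```python
import math

# Newton's method on f(x) = x^2 - 2. The root a = sqrt(2) has
# f'(a) = 2*sqrt(2) != 0, so the iterates converge quadratically:
# each error is roughly the square of the previous one, i.e. the
# sequence of approximations superconverges to a.

def newton_iterates(f, fprime, x0, steps):
    """Return the list [x_1, ..., x_steps] of Newton iterates from x0."""
    xs = []
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
        xs.append(x)
    return xs

a = math.sqrt(2)
xs = newton_iterates(lambda x: x * x - 2, lambda x: 2 * x, 1.0, 5)
errors = [abs(x - a) for x in xs]
for n, e in enumerate(errors, start=1):
    print(f"|x_{n} - a| = {e:.3e}")
```

The printed errors shrink from about 10^-1 to machine precision in five steps, with the exponent roughly doubling each time, as in the base-2 definition above.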

This article incorporates material from superconvergence on PlanetMath, which is licensed under the GFDL.