Sequence transformations
In mathematics, a sequence transformation is a resummation of a sequence. Sequence transformations are commonly used for series acceleration, that is, for improving the rate of convergence of a slowly convergent sequence or series. Sequence transformations are also commonly used to compute the antilimit of a divergent series numerically, and are used in conjunction with extrapolation methods.
Overview
Techniques for series acceleration are often applied in numerical analysis, where they are used to improve the speed of numerical integration. Series acceleration techniques may also be used, for example, to obtain a variety of identities on special functions. Thus, the Euler transform applied to the hypergeometric series gives some of the classic, well-known hypergeometric series identities.
Two classical techniques for series acceleration are Euler's transformation of series[1] and Kummer's transformation of series[2]. A variety of much more rapidly convergent and special-case tools have been developed in the 20th century, including Richardson extrapolation, introduced by Lewis Fry Richardson in the early 20th century but also known and used by Hirotaka Takebe in 1722; the Aitken delta-squared process, introduced by Alexander Aitken in 1926 but also known and used by Takakazu Seki in the 18th century; the epsilon algorithm given by Peter Wynn in 1956; the Levin u-transform; and the Wilf–Zeilberger–Ekhad method or WZ method.
For alternating series, several powerful techniques, offering convergence rates from $5.828^{-n}$ all the way to $17.93^{-n}$ for a summation of $n$ terms, are described by Cohen et al.[3]
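A minimal MATLAB sketch of the simplest of these schemes, the one attaining the $5.828^{-n}$ rate, is given below; it assumes the coefficient recurrence stated in [3], and the function name sumalt and its calling convention are illustrative choices rather than part of the reference.
function s = sumalt(a, n)
% Estimate sum_{k>=0} (-1)^k a_k from the first n terms, following the
% linear acceleration scheme of Cohen, Rodriguez Villegas and Zagier [3].
% a(1), ..., a(n) hold a_0, ..., a_{n-1} (MATLAB indices are shifted by one).
d = (3 + sqrt(8))^n;  d = (d + 1/d)/2;
b = -1;  c = -d;  s = 0;
for k = 0:n-1
    c = b - c;
    s = s + c*a(k+1);
    b = (k + n)*(k - n)*b / ((k + 1/2)*(k + 1));
end
s = s/d;
end
For example, sumalt(1./(1:20), 20) approximates log(2) to roughly fifteen digits, whereas the plain 20-term partial sum of the alternating series is accurate to only about two digits.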
Definitions
For a given sequence
- $S = \{ s_n \}_{n \in \mathbb{N}}$,
the transformed sequence is
- $S' = \{ s'_n \}_{n \in \mathbb{N}}$,
where the members of the transformed sequence are usually computed from some finite number of members of the original sequence, i.e. there is a mapping $T$ with
- $s'_n = T(s_n, s_{n+1}, \dots, s_{n+k})$
for some finite $k$. In the simplest case, the $s_n$ and the $s'_n$ are real or complex numbers. More generally, they may be elements of some vector space or algebra.
The transformed sequence is said to converge faster than the original sequence if
- $\lim_{n \to \infty} \frac{s'_n - s}{s_n - s} = 0$,
where $s$ is the limit of $S$, assumed to be convergent. In this case, convergence acceleration is obtained. If the original sequence is divergent, the sequence transformation acts as an extrapolation method to the antilimit $s$.
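For example, if $s_n = s + n^{-1}$ and the transformation produces $s'_n = s + n^{-2}$ (an illustrative pair of sequences, not drawn from the references), then $\frac{s'_n - s}{s_n - s} = n^{-1} \to 0$, so the transformed sequence converges faster than the original one.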
If the mapping $T$ is linear in each of its arguments, i.e., if
- $s'_n = \sum_{m=0}^{k} c_m s_{n+m}$
for some constants $c_0, \dots, c_k$, the sequence transformation is called a linear sequence transformation. Sequence transformations that are not linear are called nonlinear sequence transformations.
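For instance, choosing $k = 1$ and the constant weights $c_0 = c_1 = \tfrac{1}{2}$ (a purely illustrative choice) gives the averaging transformation $s'_n = \tfrac{1}{2}(s_n + s_{n+1})$, which already damps the oscillation of the partial sums of an alternating series about their limit.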
Classical examples of linear sequence transformations are Euler's transformation of series, from the 18th century, and Kummer's transformation of series[4].
Application
Examples of nonlinear sequence transformations are Padé approximants and Levin-type sequence transformations.
Nonlinear sequence transformations in particular often provide powerful numerical methods for the summation of divergent series or asymptotic series, such as those arising in perturbation theory, and they can serve as highly effective extrapolation methods.
Example: The Aitken method (Aitken extrapolation)
A simple nonlinear sequence transformation is the Aitken method
- $S' = \mathbb{A}(S) = \{ s'_n \}_{n \in \mathbb{N}}$,
defined by the mapping
- $\mathbb{A} : (s_n, s_{n+1}, s_{n+2}) \mapsto s'_n$
with
- $s'_n = s_{n+2} - \frac{(s_{n+2} - s_{n+1})^2}{(s_{n+2} - s_{n+1}) - (s_{n+1} - s_n)}$.
Taking the sequence of partial sums
- $s_n = \sum_{j=0}^{n} q^j = \frac{1 - q^{n+1}}{1 - q}$
of the geometric series as the untransformed sequence, we obtain for the transformed sequence by simple algebra
- $s'_n = \frac{1}{1 - q}$,
independent of $n$. This, however, is the limit of the geometric series for
- $|q| < 1$.
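Explicitly, one has $s_{n+2} - s_{n+1} = q^{n+2}$ and $(s_{n+2} - s_{n+1}) - (s_{n+1} - s_n) = q^{n+2} - q^{n+1} = q^{n+1}(q - 1)$, so that
- $s'_n = \frac{1 - q^{n+3}}{1 - q} - \frac{q^{2n+4}}{q^{n+1}(q - 1)} = \frac{1 - q^{n+3}}{1 - q} + \frac{q^{n+3}}{1 - q} = \frac{1}{1 - q}$.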
Thus, the Aitken method yields the exact result by extrapolation of only three consecutive partial sums.
For
- $|q| \ge 1$,
the geometric series diverges, since it is a power series in $q$ evaluated outside its radius of convergence. But even for this divergent series, the Aitken method sums the geometric series to the value $\frac{1}{1 - q}$ of its analytic continuation to
- $\mathbb{C} \setminus \{1\}$,
i.e. the complex $q$ plane punctured at
- $q = 1$.
The Aitken method is often used iteratively by applying it again to the transformed sequence:
- $S' = \mathbb{A}(S), \quad S'' = \mathbb{A}(\mathbb{A}(S)), \quad S''' = \mathbb{A}(\mathbb{A}(\mathbb{A}(S)))$,
and so on.
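Numerically, the transformation is easy to experiment with. The following MATLAB sketch (the variable names and the test series are illustrative choices) applies the delta-squared step to the divergent geometric series with $q = 2$ and, iteratively, to the partial sums of the Leibniz series for $\pi/4$:
q = 2;  s = cumsum(q.^(0:5));           % partial sums 1, 3, 7, 15, 31, 63 of the geometric series
% Aitken delta-squared step applied to a whole sequence of partial sums at once:
aitken = @(s) s(3:end) - (s(3:end) - s(2:end-1)).^2 ./ (s(3:end) - 2*s(2:end-1) + s(1:end-2));
aitken(s)                               % every entry equals 1/(1-q) = -1, the antilimit
t = cumsum((-1).^(0:20) ./ (2*(0:20) + 1));  % slowly convergent Leibniz series for pi/4
aitken(aitken(aitken(t)))               % iterated Aitken: far closer to pi/4 than t(end)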
Example: minimum polynomial extrapolation
While Aitken's method is the most famous, it often fails for vector sequences. An effective method for vector sequences is minimum polynomial extrapolation (MPE). It is usually phrased in terms of the fixed-point iteration
- $x_{k+1} = f(x_k)$.
Given iterates $x_1, x_2, \dots, x_{k+1}$ in $\mathbb{R}^n$, one constructs the matrix $U = (x_2 - x_1, x_3 - x_2, \dots, x_k - x_{k-1})$ whose columns are the $k - 1$ differences. Then one computes the vector $c = -U^{+}(x_{k+1} - x_k)$, where $U^{+}$ denotes the Moore–Penrose pseudoinverse of $U$. The number 1 is then appended to the end of $c$, and the extrapolated limit is
- $s = \frac{X c}{\sum_i c_i}$,
where $X = (x_2, x_3, \dots, x_{k+1})$ is the matrix whose columns are the $k$ iterates starting at $x_2$.
The following four-line MATLAB code segment implements the MPE algorithm:
U = x(:,2:end-1) - x(:,1:end-2);        % columns are the differences x_2-x_1, ..., x_k-x_{k-1}
c = -pinv(U)*(x(:,end) - x(:,end-1));   % least-squares solve using the Moore-Penrose pseudoinverse
c(end+1,1) = 1;                         % append the number 1 to the end of c
s = (x(:,2:end)*c)/sum(c);              % extrapolated limit s = X*c / sum(c)
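As an illustration, one may generate the iterates of a slowly converging linear fixed-point map, store them as columns of x, and run the segment above; the particular map A, b below is a made-up example, not part of the algorithm.
A = [0.8 0.1; 0.2 0.7];  b = [1; 2];    % illustrative fixed-point map x -> A*x + b (spectral radius 0.9)
x = zeros(2, 6);  x(:,1) = [0; 0];      % successive iterates stored as columns of x
for k = 1:5
    x(:,k+1) = A*x(:,k) + b;
end
U = x(:,2:end-1) - x(:,1:end-2);  c = -pinv(U)*(x(:,end) - x(:,end-1));  c(end+1,1) = 1;
s = (x(:,2:end)*c)/sum(c)               % far closer to the fixed point (eye(2)-A)\b than x(:,end)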
Example: Euler's transform
Euler's transform of an alternating series is given by
- $\sum_{n=0}^{\infty} (-1)^n a_n = \sum_{n=0}^{\infty} (-1)^n \frac{\Delta^n a_0}{2^{n+1}}$,
where $\Delta$ is the forward difference operator:
- $\Delta^n a_0 = \sum_{k=0}^{n} (-1)^k \binom{n}{k} a_{n-k}$.
If the original series, on the left hand side, is only slowly converging, the forward differences will tend to become small quite rapidly; the additional powers of two further improve the rate at which the transformed series on the right hand side converges.
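A small MATLAB sketch makes this concrete for the alternating series $\sum_{n \ge 0} (-1)^n/(n+1) = \ln 2$; the choice of series and the variable names are illustrative.
a = 1./(1:12);                          % terms a_n = 1/(n+1) for n = 0, ..., 11
d = a;  t = 0;
for n = 0:numel(a)-1
    t = t + (-1)^n * d(1) / 2^(n+1);    % partial sum of the Euler-transformed series
    d = diff(d);                        % forward differences: d(1) becomes Delta^(n+1) a_0
end
[t - log(2), sum((-1).^(0:11).*a) - log(2)]  % transformed vs. original truncation error
With only twelve terms the transformed sum is accurate to roughly four or five decimal digits, while the plain partial sum is off already in the second decimal digit.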
A particularly efficient numerical implementation of the Euler transform is the van Wijngaarden transformation.[5]
References
- ^ Milton Abramowitz and Irene A. Stegun, eds. (1965). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover. ISBN 0-486-61272-4. (See chapter 3, eqn 3.6.27)
- ^ Milton Abramowitz and Irene A. Stegun, eds. (1965). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover. ISBN 0-486-61272-4. (See chapter 3, eqn 3.6.26)
- ^ Henri Cohen, Fernando Rodriguez Villegas, and Don Zagier, "Convergence Acceleration of Alternating Series", Experimental Mathematics, 9:1 (2000) page 3.
- ^ Milton Abramowitz and Irene A. Stegun, eds. (1965). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover. ISBN 0-486-61272-4. (See chapter 3, eqn 3.6.26)
- ^ William H. Press et al., Numerical Recipes in C, Cambridge University Press (1987). ISBN 0-521-43108-5. (See section 5.1)
- C. Brezinski and M. Redivo Zaglia, Extrapolation Methods. Theory and Practice, North-Holland, 1991.
- G. A. Baker, Jr. and P. Graves-Morris, Padé Approximants, Cambridge U.P., 1996.