Linear multistep method

In mathematics, linear multistep methods are used for the numerical solution of ordinary differential equations. One-step methods (such as Euler's method and the Runge–Kutta methods) use only one previous value to determine the current value. Multistep methods use several previous values in an effort to achieve greater accuracy. In the case of linear multistep methods, a linear combination of the previous values y_i and derivative values f(t_i, y_i) is used.

Introductory example

Linear multistep methods solve initial value problems of the form

 y' = f(t,y), \quad y(t_0) = y_0.

Consider for example the problem

 y' = y, \quad y(0) = 1.

The exact solution is y(t) = \mathrm{e}^t.

A simple numerical method is Euler's method:

 y_{n+1} = y_n + hf(t_n, y_n). \,

This method, applied to the problem y' = y with step size  h = \tfrac12 , gives the following results:

 \begin{align}
  y_1 &= y_0 + hf(y_0) = 1 + \tfrac12\cdot1 = 1.5, \\
  y_2 &= y_1 + hf(y_1) = 1.5 + \tfrac12\cdot1.5 = 2.25, \\
  y_3 &= y_2 + hf(y_2) = 2.25 + \tfrac12\cdot2.25 = 3.375, \\
  y_4 &= y_3 + hf(y_3) = 3.375 + \tfrac12\cdot3.375 = 5.0625.
\end{align}
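
As an illustration, these iterates can be reproduced with a short Python sketch (the function name euler is chosen for this example):

    # Euler's method for y' = f(t, y): y_{n+1} = y_n + h f(t_n, y_n).
    def euler(f, t0, y0, h, steps):
        t, y = t0, y0
        values = [y0]
        for _ in range(steps):
            y = y + h * f(t, y)
            t = t + h
            values.append(y)
        return values

    print(euler(lambda t, y: y, 0.0, 1.0, 0.5, 4))
    # [1.0, 1.5, 2.25, 3.375, 5.0625]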

Euler's method is a one-step method. A simple multistep method is the two-step Adams–Bashforth method

 y_{n+2} = y_{n+1} + \tfrac32 hf(t_{n+1},y_{n+1}) - \tfrac12 hf(t_n,y_n).

This method needs two values, y_{n+1} and y_n, to compute the next value, y_{n+2}. However, the initial value problem provides only one value, y_0 = 1. One possibility to resolve this issue is to use the value y_1 computed by Euler's method as the second value. With this choice, the Adams–Bashforth method yields (rounded to four digits):

 \begin{align}
  y_2 &= y_1 + \tfrac32 hf(y_1) - \tfrac12 hf(y_0) = 1.5 + \tfrac32\cdot\tfrac12\cdot1.5 - \tfrac12\cdot\tfrac12\cdot1 = 2.375, \\
  y_3 &= y_2 + \tfrac32 hf(y_2) - \tfrac12 hf(y_1) = 2.375 + \tfrac32\cdot\tfrac12\cdot2.375 - \tfrac12\cdot\tfrac12\cdot1.5 = 3.7812, \\
  y_4 &= y_3 + \tfrac32 hf(y_3) - \tfrac12 hf(y_2) = 3.7812 + \tfrac32\cdot\tfrac12\cdot3.7812 - \tfrac12\cdot\tfrac12\cdot2.375 = 6.0234.
\end{align}

The exact solution at t = t_4 = 2 is  \mathrm{e}^2 = 7.3891\ldots , so the two-step Adams–Bashforth method is more accurate than Euler's method. This is always the case if the step size is small enough.
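
The same computation in Python (again a sketch with names chosen here), bootstrapping y_1 with Euler's method as described above:

    # Two-step Adams-Bashforth method for y' = f(t, y), y_1 from Euler.
    def adams_bashforth2(f, t0, y0, h, steps):
        ts = [t0, t0 + h]
        ys = [y0, y0 + h * f(t0, y0)]  # y_1 computed by Euler's method
        for _ in range(steps - 1):
            ys.append(ys[-1] + h * (1.5 * f(ts[-1], ys[-1])
                                    - 0.5 * f(ts[-2], ys[-2])))
            ts.append(ts[-1] + h)
        return ys

    print(adams_bashforth2(lambda t, y: y, 0.0, 1.0, 0.5, 4))
    # [1.0, 1.5, 2.375, 3.78125, 6.0234375], versus e^2 = 7.3891...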

Definition

A linear multistep method is a method of the form

 \begin{align} 
& y_{n+s} + a_{s-1} y_{n+s-1} + a_{s-2} y_{n+s-2} + \cdots + a_0 y_n \\
& \qquad {} = h \bigl( b_s f(t_{n+s},y_{n+s}) + b_{s-1} f(t_{n+s-1},y_{n+s-1}) + \cdots + b_0 f(t_n,y_n) \bigr), 
\end{align}

where h denotes the step size and f the right-hand side of the differential equation. The coefficients  a_0, \ldots, a_{s-1} and  b_0, \ldots, b_s determine the method.

The method is explicit if b_s = 0. In that case, the recurrence relation can be used directly to compute y_{n+s}. If  b_s \ne 0 , then the method is implicit and the recurrence relation is an equation for y_{n+s} which has to be solved.
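
In the explicit case, one step can be written out directly. The following Python sketch illustrates this (the function name and list conventions are chosen here, with a = [a_0, ..., a_{s-1}] and b = [b_0, ..., b_{s-1}], since b_s = 0):

    # One step of an explicit linear multistep method (b_s = 0):
    # y_{n+s} = -(a_{s-1} y_{n+s-1} + ... + a_0 y_n)
    #           + h (b_{s-1} f(t_{n+s-1}, y_{n+s-1}) + ... + b_0 f(t_n, y_n))
    def explicit_lmm_step(f, ts, ys, a, b, h):
        s = len(a)  # ts and ys hold the s most recent times and values
        return (h * sum(b[k] * f(ts[k], ys[k]) for k in range(s))
                - sum(a[k] * ys[k] for k in range(s)))

For an implicit method (b_s \ne 0), the unknown y_{n+s} also appears inside f on the right-hand side, so each step requires solving an equation, for example by fixed-point iteration or Newton's method.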

Examples

Three families of linear multistep methods are commonly used: Adams–Bashforth methods, Adams–Moulton methods, and the backward differentiation formulas (BDFs).

Adams–Bashforth methods

The Adams–Bashforth methods are explicit methods. The coefficients are a_{s-1} = -1 and  a_{s-2} = \cdots = a_0 = 0 , while the b_j are chosen such that the method has order s (this determines the method uniquely).

The Adams–Bashforth methods with s = 1, 2, 3, 4 are (Hairer, Nørsett & Wanner 1993, §III.1):

  •  y_{n+1} = y_n + hf(t_n, y_n) \, — this is simply the Euler method;
  •  y_{n+2} = y_{n+1} + h\left( \tfrac32 f(t_{n+1}, y_{n+1}) - \tfrac12 f(t_n, y_n)\right);
  •  y_{n+3} = y_{n+2} + h\left( \tfrac{23}{12} f(t_{n+2}, y_{n+2}) - \tfrac43 f(t_{n+1}, y_{n+1}) + \tfrac{5}{12}f(t_n, y_n)\right);
  •  y_{n+4} = y_{n+3} + h\left( \tfrac{55}{24} f(t_{n+3}, y_{n+3}) - \tfrac{59}{24} f(t_{n+2}, y_{n+2}) + \tfrac{37}{24} f(t_{n+1}, y_{n+1}) - \tfrac38 f(t_n, y_n) \right).
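
In the (a, b) notation of the definition above, these four methods correspond to the following coefficient tables (a sketch; the dictionary name and exact-rational representation are choices made for this illustration):

    from fractions import Fraction as F

    # s-step Adams-Bashforth coefficients ([a_0..a_{s-1}], [b_0..b_{s-1}]),
    # usable with a step routine such as explicit_lmm_step above.
    ADAMS_BASHFORTH = {
        1: ([F(-1)], [F(1)]),
        2: ([F(0), F(-1)], [F(-1, 2), F(3, 2)]),
        3: ([F(0), F(0), F(-1)], [F(5, 12), F(-4, 3), F(23, 12)]),
        4: ([F(0), F(0), F(0), F(-1)],
            [F(-3, 8), F(37, 24), F(-59, 24), F(55, 24)]),
    }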

The coefficients b_j can be determined as follows. Use polynomial interpolation to find the polynomial p of degree s − 1 such that

 p(t_{n+i}) = f(t_{n+i}, y_{n+i}), \qquad \text{for } i=0,\ldots,s-1.

The Lagrange formula for polynomial interpolation yields

 p(t) = \sum_{j=0}^{s-1} \frac{(-1)^{s-j-1}f(t_{n+j}, y_{n+j})}{j!(s-j-1)!h^{s-1}} \prod_{i=0 \atop i\ne j}^{s-1} (t-t_{n+i}).

The polynomial p is locally a good approximation of the right-hand side of the differential equation y' = f(t,y) that is to be solved, so consider the equation y' = p(t) instead. This equation can be solved exactly; the solution is simply the integral of p. This suggests taking

 y_{n+s} = y_{n+s-1} + \int_{t_{n+s-1}}^{t_{n+s}} p(t)\,dt.

The Adams–Bashforth method arises when the formula for p is substituted. The coefficients b_j turn out to be given by

 b_{s-j-1} = \frac{(-1)^j}{j!(s-j-1)!} \int_0^1 \prod_{i=0 \atop i\ne j}^{s-1} (u+i) \,du, \qquad \text{for } j=0,\ldots,s-1.
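
This formula can be evaluated symbolically; the following sketch using the SymPy library (the function name is chosen here) reproduces the tabulated coefficients, for example for s = 4:

    import sympy

    # b_{s-j-1} = (-1)^j / (j! (s-j-1)!) * integral_0^1 prod_{i != j} (u+i) du
    def adams_bashforth_coeffs(s):
        u = sympy.symbols('u')
        b = [sympy.Integer(0)] * s
        for j in range(s):
            poly = sympy.prod([u + i for i in range(s) if i != j])
            b[s - j - 1] = (sympy.Integer(-1)**j
                            / (sympy.factorial(j) * sympy.factorial(s - j - 1))
                            * sympy.integrate(poly, (u, 0, 1)))
        return b  # [b_0, ..., b_{s-1}]

    print(adams_bashforth_coeffs(4))  # [-3/8, 37/24, -59/24, 55/24]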

Replacing f(t, y) by its interpolant p incurs an error of order h^s, and it follows that the s-step Adams–Bashforth method has indeed order s (Iserles 1996, §2.1).

The Adams–Bashforth methods were designed by John Couch Adams to solve a differential equation modelling capillary action due to Francis Bashforth. Bashforth (1883) published his theory and Adams' numerical method (Goldstine 1977).

Adams–Moulton methods

The Adams–Moulton methods are similar to the Adams–Bashforth methods in that they also have a_{s-1} = -1 and  a_{s-2} = \cdots = a_0 = 0 . Again the b coefficients are chosen to obtain the highest order possible. However, the Adams–Moulton methods are implicit methods. By removing the restriction that b_s = 0, an s-step Adams–Moulton method can reach order s + 1, while an s-step Adams–Bashforth method has only order s.

The Adams–Moulton methods with s = 0, 1, 2, 3 are (Hairer, Nørsett & Wanner 1993, §III.1):

  •  y_n = y_{n-1} + hf(t_n, y_n) \, — this is the backward Euler method;
  •  y_{n+1} = y_n + \tfrac12 h\bigl( f(t_{n+1}, y_{n+1}) + f(t_n, y_n)\bigr) — this is the trapezoidal rule;
  •  y_{n+2} = y_{n+1} + h\left( \tfrac{5}{12} f(t_{n+2}, y_{n+2}) + \tfrac23 f(t_{n+1}, y_{n+1}) - \tfrac{1}{12} f(t_n, y_n)\right);
  •  y_{n+3} = y_{n+2} + h\left( \tfrac38 f(t_{n+3}, y_{n+3}) + \tfrac{19}{24} f(t_{n+2}, y_{n+2}) - \tfrac{5}{24} f(t_{n+1}, y_{n+1}) + \tfrac{1}{24} f(t_n, y_n)\right).

The derivation of the Adams–Moulton methods is similar to that of the Adams–Bashforth method; however, the interpolating polynomial uses not only the points t_{n-1}, \ldots, t_{n-s}, as above, but also t_n. The coefficients are given by

 b_{s-j} = \frac{(-1)^j}{j!(s-j)!} \int_0^1 \prod_{i=0 \atop i\ne j}^{s} (u+i-1) \,du, \qquad \text{for } j=0,\ldots,s.
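
The formula can be checked in the same way as for the Adams–Bashforth coefficients (a SymPy sketch; the function name is chosen here):

    import sympy

    # b_{s-j} = (-1)^j / (j! (s-j)!) * integral_0^1 prod_{i != j} (u+i-1) du
    def adams_moulton_coeffs(s):
        u = sympy.symbols('u')
        b = [sympy.Integer(0)] * (s + 1)
        for j in range(s + 1):
            poly = sympy.prod([u + i - 1 for i in range(s + 1) if i != j])
            b[s - j] = (sympy.Integer(-1)**j
                        / (sympy.factorial(j) * sympy.factorial(s - j))
                        * sympy.integrate(poly, (u, 0, 1)))
        return b  # [b_0, ..., b_s]

    print(adams_moulton_coeffs(3))  # [1/24, -5/24, 19/24, 3/8]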

The Adams–Moulton methods are solely due to John Couch Adams, like the Adams–Bashforth methods. The name of Forest Ray Moulton became associated with these methods because he realized that they could be used in tandem with the Adams–Bashforth methods as a predictor–corrector pair (Moulton 1926); Milne (1926) had the same idea. Adams used Newton's method to solve the implicit equation (Hairer, Nørsett & Wanner 1993, §III.1).

Analysis

The central concepts in the analysis of linear multistep methods, and indeed any numerical method for differential equations, are convergence, order, and stability.

The first question is whether the method is consistent: is the difference equation

 \begin{align} 
& y_{n+s} + a_{s-1} y_{n+s-1} + a_{s-2} y_{n+s-2} + \cdots + a_0 y_n \\
& \qquad {} = h \bigl( b_s f(t_{n+s},y_{n+s}) + b_{s-1} f(t_{n+s-1},y_{n+s-1}) + \cdots + b_0 f(t_n,y_n) \bigr), 
\end{align}

a good approximation of the differential equation y' = f(t,y)? More precisely, a multistep method is consistent if the local error goes to zero faster than the step size h as h goes to zero, where the local error is defined to be the difference between the result y_{n+s} of the method, assuming that all the previous values y_{n+s-1}, \ldots, y_n are exact, and the exact solution of the equation at time t_{n+s}. A computation using Taylor series shows that a linear multistep method is consistent if and only if

 \sum_{k=0}^{s-1} a_k = -1 \quad\text{and}\quad \sum_{k=0}^s b_k = s + \sum_{k=0}^{s-1} ka_k.

All the methods mentioned above are consistent (Hairer, Nørsett & Wanner 1993, §III.2).

If the method is consistent, then the next question is how well the difference equation defining the numerical method approximates the differential equation. A multistep method is said to have order p if the local error is of order O(h^{p+1}) as h goes to zero. This is equivalent to the following condition on the coefficients of the method:

 \sum_{k=0}^{s-1} a_k = -1 \quad\text{and}\quad q \sum_{k=0}^s k^{q-1} b_k = s^q + \sum_{k=0}^{s-1} k^q a_k \quad\text{for } q=1,\ldots,p.
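
These conditions are straightforward to check mechanically; the following Python sketch (names chosen here) tests them with exact rational arithmetic and confirms that the two-step Adams–Bashforth method has order 2 but not order 3:

    from fractions import Fraction

    # a = [a_0, ..., a_{s-1}], b = [b_0, ..., b_s]; note 0**0 == 1 in Python.
    def satisfies_order(a, b, p):
        s = len(a)
        if sum(a) != -1:
            return False
        return all(q * sum(k**(q - 1) * b[k] for k in range(s + 1))
                   == s**q + sum(k**q * a[k] for k in range(s))
                   for q in range(1, p + 1))

    a = [0, -1]                                # two-step Adams-Bashforth
    b = [Fraction(-1, 2), Fraction(3, 2), 0]
    print(satisfies_order(a, b, 2), satisfies_order(a, b, 3))  # True False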

The s-step Adams–Bashforth method has order s, while the s-step Adams–Moulton method has order s + 1 (Hairer, Nørsett & Wanner 1993, §III.2).

These conditions are often formulated using the characteristic polynomials

 \rho(z) = z^s + \sum_{k=0}^{s-1} a_k z^k \quad\text{and}\quad \sigma(z) = \sum_{k=0}^s b_k z^k.

In terms of these polynomials, the above condition for the method to have order p becomes

 \rho(\mathrm{e}^h) - h\sigma(\mathrm{e}^h) = O(h^{p+1}) \quad \text{as } h\to 0.
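
For example, a series expansion in SymPy (again a sketch) confirms in this formulation that the two-step Adams–Bashforth method has order 2:

    import sympy

    h = sympy.symbols('h')
    z = sympy.exp(h)
    rho = z**2 - z                                 # rho(z) = z^2 - z
    sigma = sympy.Rational(3, 2) * z - sympy.Rational(1, 2)
    print(sympy.series(rho - h * sigma, h, 0, 4))
    # 5*h**3/12 + O(h**4), i.e. O(h^{p+1}) with p = 2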

In particular, the method is consistent if it has order at least one, which is the case if and only if ρ(1) = 0 and ρ'(1) = σ(1).

If the roots of the characteristic polynomial ρ all have modulus less than or equal to 1 and the roots of modulus 1 are simple (multiplicity 1), we say that the root condition is satisfied. A method that satisfies the root condition is called stable (or zero-stable). The method is convergent if and only if it is consistent and stable.

Furthermore, a stable method is said to be strongly stable if z = 1 is the only root of modulus 1; otherwise, it is said to be weakly stable.

Example

Consider the Adams–Bashforth three-step method

 y_{n+1} = y_n + h\left( \tfrac{23}{12} f(t_n, y_n) - \tfrac{16}{12} f(t_{n-1}, y_{n-1}) + \tfrac{5}{12} f(t_{n-2}, y_{n-2})\right).

The characteristic polynomial is thus

 \rho(z) = z^3 - z^2 = z^2(z-1), \,

which has roots z = 0, 0, 1, so the root condition is satisfied. As z = 1 is the only root of modulus 1, the method is strongly stable.
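
The root condition can also be checked numerically (a Python sketch using NumPy):

    import numpy as np

    roots = np.roots([1, -1, 0, 0])         # roots of rho(z) = z^3 - z^2
    print(roots)                            # [1. 0. 0.]
    print(np.all(np.abs(roots) <= 1.0))     # True: no root outside the unit disc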

References

  •  Bashforth, Francis (1883), An Attempt to Test the Theories of Capillary Action, with an Explanation of the Method of Integration Employed (with J. C. Adams), Cambridge University Press.
  •  Goldstine, Herman H. (1977), A History of Numerical Analysis from the 16th through the 19th Century, New York: Springer-Verlag.
  •  Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving Ordinary Differential Equations I: Nonstiff Problems (2nd ed.), Berlin: Springer-Verlag.
  •  Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press.
  •  Milne, W. E. (1926), "Numerical integration of ordinary differential equations", American Mathematical Monthly, 33: 455–460.
  •  Moulton, Forest R. (1926), New Methods in Exterior Ballistics, University of Chicago Press.
