Simpson's rule

Simpson's rule can be derived by approximating the integrand f(x) (in blue) by the quadratic interpolant P(x) (in red).

In numerical analysis, Simpson's rule is a method for numerical integration, the numerical approximation of definite integrals. Specifically, it is the following approximation:

\int_{a}^{b} f(x) \, dx \approx \frac{b-a}{6}\left[f(a) + 4f\left(\frac{a+b}{2}\right)+f(b)\right].

It is named after Thomas Simpson.[1]
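
For example, applied to the integral \int_0^1 e^x \, dx = e - 1 \approx 1.7182818, the rule gives

\frac{1-0}{6}\left[e^0 + 4e^{1/2} + e^{1}\right] \approx 1.7188612,

which differs from the true value by less than 0.0006.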

Derivation

Simpson's rule can be derived in various ways.

Quadratic interpolation

One derivation replaces the integrand f(x) by the quadratic polynomial P(x) which takes the same values as f(x) at the end points a and b and the midpoint m = (a+b) / 2. One can use Lagrange polynomial interpolation to find an expression for this polynomial,

P(x) = f(a) \frac{(x-m)(x-b)}{(a-m)(a-b)} + f(m) \frac{(x-a)(x-b)}{(m-a)(m-b)} + f(b) \frac{(x-a)(x-m)}{(b-a)(b-m)}.

An easy (albeit tedious) calculation shows that

\int_{a}^{b} P(x) \, dx =\frac{b-a}{6}\left[f(a) + 4f\left(\frac{a+b}{2}\right)+f(b)\right]. [2]
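
The tedious part of this calculation can be delegated to a computer algebra system. The following minimal sketch (assuming SymPy is available; the symbols f_a, f_m, f_b stand for the three sampled values) verifies the identity symbolically.

# Symbolic verification that the quadratic interpolant integrates
# to the Simpson formula (requires SymPy).
from sympy import symbols, integrate, simplify

a, b, x = symbols('a b x')
fa, fm, fb = symbols('f_a f_m f_b')   # f(a), f(m), f(b)
m = (a + b) / 2

# Lagrange interpolant P(x) through (a, f(a)), (m, f(m)), (b, f(b))
P = (fa * (x - m) * (x - b) / ((a - m) * (a - b))
     + fm * (x - a) * (x - b) / ((m - a) * (m - b))
     + fb * (x - a) * (x - m) / ((b - a) * (b - m)))

difference = integrate(P, (x, a, b)) - (b - a) / 6 * (fa + 4 * fm + fb)
print(simplify(difference))   # should print 0, confirming the identity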

Averaging the midpoint and the trapezium rules

Another derivation constructs Simpson's rule from two simpler approximations: the midpoint rule

M = (b-a) f \left( \frac{a+b}{2} \right)

and the trapezium rule

T = \tfrac12 (b-a) (f(a)+f(b)).

The errors in these approximations are

-\tfrac1{24} (b-a)^3 f''(a) + O((b-a)^4) \quad\text{and}\quad \tfrac1{12} (b-a)^3 f''(a) + O((b-a)^4),

respectively. It follows that the leading error term vanishes if we take the weighted average

\frac{2M+T}{3}.

This weighted average is exactly Simpson's rule.
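
This identity is easy to check numerically. The following sketch (the helper names midpoint, trapezium and simpson are only illustrative) evaluates both sides for f(x) = sin(x) on [0, 1].

# Numerical check that (2M + T)/3 reproduces Simpson's rule.
from math import sin

def midpoint(f, a, b):
    return (b - a) * f((a + b) / 2)

def trapezium(f, a, b):
    return (b - a) * (f(a) + f(b)) / 2

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

f, a, b = sin, 0.0, 1.0
M, T = midpoint(f, a, b), trapezium(f, a, b)
print((2 * M + T) / 3)       # 0.45986218...
print(simpson(f, a, b))      # same value, up to rounding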

Undetermined coefficients

The third derivation starts from the ansatz

\int_{a}^{b} f(x) \, dx \approx \alpha f(a) + \beta f\left(\frac{a+b}{2}\right) + \gamma f(b).

The coefficients α, β and γ can be fixed by requiring that this approximation be exact for all quadratic polynomials. This yields Simpson's rule.
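
For instance, on the reference interval [0, 1] with nodes 0, 1/2 and 1, exactness for f(x) = 1, x and x^2 gives the system

\alpha + \beta + \gamma = 1, \qquad \frac{\beta}{2} + \gamma = \frac{1}{2}, \qquad \frac{\beta}{4} + \gamma = \frac{1}{3},

whose unique solution is α = γ = 1/6 and β = 2/3, i.e. the weights of Simpson's rule on [0, 1].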

Error

The error in approximating an integral by Simpson's rule is

-\frac{(b-a)^5}{2880} f^{(4)}(\xi),

where ξ is some number between a and b.[3]

The error is (asymptotically) proportional to (b − a)^5. However, the above derivations suggest an error proportional to (b − a)^4. Simpson's rule gains an extra order because the points at which the integrand is evaluated are distributed symmetrically in the interval [a, b].
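
In particular, this symmetry makes Simpson's rule exact for all cubic polynomials, even though it is built from a quadratic interpolant; for example,

\int_0^1 x^3 \, dx = \frac{1}{4} \qquad\text{and}\qquad \frac{1-0}{6}\left[0^3 + 4\left(\frac{1}{2}\right)^3 + 1^3\right] = \frac{1}{6}\left[\frac{1}{2} + 1\right] = \frac{1}{4}.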

Composite Simpson's rule

If the interval of integration [a,b] is in some sense "small", then Simpson's rule will provide an adequate approximation to the exact integral. By small, what we really mean is that the function being integrated is relatively smooth over the interval [a,b]. For such a function, a smooth quadratic interpolant like the one used in Simpson's rule will give good results.

However, it is often the case that the function we are trying to integrate is not smooth over the interval. Typically, this means that either the function is highly oscillatory, or it lacks derivatives at certain points. In these cases, Simpson's rule may give very poor results. One common way of handling this problem is by breaking up the interval [a,b] into a number of small subintervals. Simpson's rule is then applied to each subinterval, with the results being summed to produce an approximation for the integral over the entire interval. This sort of approach is termed the composite Simpson's rule.

\int_a^b f(x) \, dx\approx  \frac{h}{3}\bigg[f(x_0)+2\sum_{j=1}^{n/2-1}f(x_{2j})+ 4\sum_{j=1}^{n/2}f(x_{2j-1})+f(x_n) \bigg],

where n is an even number of subintervals into which [a, b] is split, h = (b − a)/n is the length of each subinterval, and x_i = a + ih for i = 0, 1, ..., n; in particular, x_0 = a and x_n = b. Alternatively, the above can be written as:

\int_a^b f(x) \, dx\approx \frac{h}{3}\bigg[f(x_0)+4f(x_1)+2f(x_2)+4f(x_3)+2f(x_4)+\cdots+4f(x_{n-1})+f(x_n)\bigg].
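
For illustration, a direct Python transcription of this formula might look as follows (a minimal sketch; the function name composite_simpson and its signature are illustrative, not standard).

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n subintervals (n must be even)."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    x = [a + i * h for i in range(n + 1)]            # x_0 = a, ..., x_n = b
    s = f(x[0]) + f(x[n])                            # end points, weight 1
    s += 4 * sum(f(x[i]) for i in range(1, n, 2))    # odd indices, weight 4
    s += 2 * sum(f(x[i]) for i in range(2, n, 2))    # even interior indices, weight 2
    return h / 3 * s

# Example: integral of sin(x) from 0 to 1 with 10 subintervals
from math import sin
print(composite_simpson(sin, 0.0, 1.0, 10))          # close to 1 - cos(1) = 0.45969769...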

The error committed by the composite Simpson's rule is bounded (in absolute value) by

\frac{h^4}{180}(b-a) \max_{\xi\in[a,b]} |f^{(4)}(\xi)|,

where h is the "step length", given by h = (b − a)/n.[4]
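
The h^4 behaviour is easy to observe numerically. The sketch below (reusing the illustrative composite_simpson function from above) halves the step length repeatedly; each error should be roughly 16 times smaller than the previous one.

# Empirical check of the O(h^4) error of the composite rule.
from math import sin, cos

exact = 1 - cos(1)
for n in (4, 8, 16, 32, 64):
    error = abs(composite_simpson(sin, 0.0, 1.0, n) - exact)
    print(n, error)   # errors shrink by roughly a factor of 16 per doubling of n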

This formulation splits the interval [a,b] in subintervals of equal length. In practice, it is often advantageous to use subintervals of different lengths, and concentrate the efforts on the places where the integrand is less well-behaved. This leads to the adaptive Simpson's method.
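
As a rough illustration of the adaptive idea (one common textbook variant, not necessarily the exact method referred to above), each interval is split in two whenever comparing one Simpson step with two half-steps suggests that the local error exceeds its share of the tolerance.

def adaptive_simpson(f, a, b, tol=1e-8):
    """Recursive adaptive Simpson quadrature (a simple textbook variant)."""
    def simpson(lo, hi):
        return (hi - lo) / 6 * (f(lo) + 4 * f((lo + hi) / 2) + f(hi))

    def recurse(lo, hi, whole, tol):
        mid = (lo + hi) / 2
        left, right = simpson(lo, mid), simpson(mid, hi)
        # The gap between one step and two half-steps estimates the local error
        if abs(left + right - whole) < 15 * tol:
            return left + right + (left + right - whole) / 15
        return recurse(lo, mid, left, tol / 2) + recurse(mid, hi, right, tol / 2)

    return recurse(a, b, simpson(a, b), tol)

from math import sin
print(adaptive_simpson(sin, 0.0, 1.0))   # close to 1 - cos(1) = 0.45969769...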

Python implementation of Simpson's rule

Here is an implementation of Simpson's rule in Python.

def simpson_rule(f, a, b):
    """Approximate the definite integral of f from a to b by Simpson's rule."""
    c = (a + b) / 2.0
    h3 = (b - a) / 6.0
    return h3 * (f(a) + 4.0 * f(c) + f(b))

# Approximates the integral of sin(x) from 0 to 1
from math import sin
print(simpson_rule(sin, 0, 1))

Integrating sin x from 0 to 1 with this code gives 0.4598622..., whereas the true value is 1 − cos 1 = 0.45969769413....

Matlab implementation of composite Simpson's rule

% Define the function to integrate (using an anonymous function)
f = @(x) x^2;

% Set the interval of integration
a = -1;
b = 1;
% Set the (even) number of panels and their length
n = 100;
h = (b-a)/n;

% Split up the interval into subintervals
x = a:h:b;
% Note that Matlab arrays are indexed starting at 1
total = f(x(1));                % end point x_0 = a, weight 1
for i = 2:2:n                   % odd-numbered nodes x_1, x_3, ..., weight 4
    total = total + 4*f(x(i));
end
for i = 3:2:n-1                 % even-numbered interior nodes x_2, x_4, ..., weight 2
    total = total + 2*f(x(i));
end
% Display the result (the other end point x_n = b carries weight 1)
f_integrated = (h/3)*(total + f(x(n+1)))

Notes

  1. ^ Süli and Mayers, §7.2
  2. ^ Atkinson, p. 256; Süli and Mayers, §7.2
  3. ^ Atkinson, equation (5.1.15); Süli and Mayers, Theorem 7.2
  4. ^ Atkinson, pp. 257-258; Süli and Mayers, §7.5

References

Simpson's rule is mentioned in many text books in numerical analysis:

  • Atkinson, Kendall A. (1989). An Introduction to Numerical Analysis, 2nd edition, John Wiley & Sons. ISBN 0-471-50023-2.
  • Burden, Richard L. and Faires, J. Douglas (2000). Numerical Analysis, 7th edition, Brooks/Cole. ISBN 0-534-38216-9.
  • Süli, Endre and Mayers, David (2003). An Introduction to Numerical Analysis. Cambridge University Press. ISBN 0-521-81026-4 (hardback), ISBN 0-521-00794-1 (paperback).
