Simpson's rule
From Wikipedia, the free encyclopedia
In numerical analysis, Simpson's rule is a method for numerical integration, the numerical approximation of definite integrals. Specifically, it is the following approximation:
∫_a^b f(x) dx ≈ (b − a)/6 [ f(a) + 4 f((a + b)/2) + f(b) ].
It is named after Thomas Simpson.[1]
Derivation
Simpson's rule can be derived in various ways.
Quadratic interpolation
One derivation replaces the integrand f(x) by the quadratic polynomial P(x) which takes the same values as f(x) at the end points a and b and the midpoint m = (a + b)/2. Using Lagrange polynomial interpolation, this polynomial is

P(x) = f(a) (x − m)(x − b)/((a − m)(a − b)) + f(m) (x − a)(x − b)/((m − a)(m − b)) + f(b) (x − a)(x − m)/((b − a)(b − m)).
An easy (albeit tedious) calculation shows that

∫_a^b P(x) dx = (b − a)/6 [ f(a) + 4 f(m) + f(b) ].
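As a check, integrating each of the three Lagrange basis polynomials exactly recovers the Simpson weights (b − a)/6 · [1, 4, 1]. The sketch below is an illustrative verification, not part of the original derivation; the interval [0, 2] is an arbitrary choice, and exact rational arithmetic is used to avoid rounding.

```python
from fractions import Fraction as F

def poly_mul(p, q):
    # Multiply polynomials given as coefficient lists (lowest degree first)
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def poly_integral(p, a, b):
    # Exact integral of a polynomial over [a, b]
    return sum(c / (k + 1) * (b**(k + 1) - a**(k + 1)) for k, c in enumerate(p))

# Sample interval [a, b] = [0, 2] (chosen for illustration), midpoint m = 1
a, b = F(0), F(2)
m = (a + b) / 2

# Lagrange basis polynomials for the nodes a, m, b, e.g.
#   L_a(x) = (x - m)(x - b) / ((a - m)(a - b))
L_a = [c / ((a - m) * (a - b)) for c in poly_mul([-m, F(1)], [-b, F(1)])]
L_m = [c / ((m - a) * (m - b)) for c in poly_mul([-a, F(1)], [-b, F(1)])]
L_b = [c / ((b - a) * (b - m)) for c in poly_mul([-a, F(1)], [-m, F(1)])]

# Integrating each basis polynomial yields the Simpson weights
weights = [poly_integral(L, a, b) for L in (L_a, L_m, L_b)]
print(weights)  # [1/3, 4/3, 1/3], i.e. (b - a)/6 * [1, 4, 1] with b - a = 2
```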
Averaging the midpoint and the trapezium rules
Another derivation constructs Simpson's rule from two simpler approximations: the midpoint rule

M = (b − a) f((a + b)/2)

and the trapezium rule

T = (b − a) (f(a) + f(b))/2.

The errors in these approximations are

(1/24) (b − a)^3 f''(ξ)   and   −(1/12) (b − a)^3 f''(ξ),

respectively. It follows that the leading error term vanishes if we take the weighted average

(2M + T)/3.
This weighted average is exactly Simpson's rule.
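To illustrate, here is a small numerical sketch; the interval [0, 1] and the integrand sin are arbitrary choices for the check.

```python
from math import sin

def midpoint(f, a, b):
    # Midpoint rule: M = (b - a) f((a + b)/2)
    return (b - a) * f((a + b) / 2)

def trapezium(f, a, b):
    # Trapezium rule: T = (b - a)(f(a) + f(b))/2
    return (b - a) * (f(a) + f(b)) / 2

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

a, b = 0.0, 1.0
M, T = midpoint(sin, a, b), trapezium(sin, a, b)

# The weighted average (2M + T)/3 coincides with Simpson's rule
print((2 * M + T) / 3 - simpson(sin, a, b))  # 0 up to rounding error
```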
Using another approximation (for example, the trapezium rule with twice as many points), it is possible to take a suitable weighted average and eliminate another error term. This is Romberg's method.
Undetermined coefficients
The third derivation starts from the ansatz

∫_a^b f(x) dx ≈ α f(a) + β f((a + b)/2) + γ f(b).
The coefficients α, β and γ can be fixed by requiring that this approximation be exact for all quadratic polynomials. This yields Simpson's rule.
Error
The error in approximating an integral by Simpson's rule is

−(1/2880) (b − a)^5 f⁽⁴⁾(ξ),

where ξ is some number between a and b.[3]
The error is (asymptotically) proportional to (b − a)^5. However, the above derivations suggest an error proportional to (b − a)^4. Simpson's rule gains an extra order because the points at which the integrand is evaluated are distributed symmetrically in the interval [a, b].
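This fifth-power behaviour can be checked numerically. In the sketch below the integrand x⁴ is chosen for illustration because its fourth derivative is constant, so halving the interval width should shrink the error by a factor of 2⁵ = 32.

```python
def simpson(f, a, b):
    # Basic Simpson's rule on a single interval
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

f = lambda x: x**4           # f''''(x) = 24, a constant
exact = lambda h: h**5 / 5   # exact integral of x^4 over [0, h]

e1 = simpson(f, 0, 1.0) - exact(1.0)
e2 = simpson(f, 0, 0.5) - exact(0.5)

# Halving the interval width divides the error by 2^5 = 32,
# confirming the (b - a)^5 behaviour
print(e1 / e2)  # approximately 32
```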
Composite Simpson's rule
If the interval of integration [a,b] is in some sense "small", then Simpson's rule will provide an adequate approximation to the exact integral. By small, what we really mean is that the function being integrated is relatively smooth over the interval [a,b]. For such a function, a smooth quadratic interpolant like the one used in Simpson's rule will give good results.
However, it is often the case that the function we are trying to integrate is not smooth over the interval. Typically, this means that either the function is highly oscillatory, or it lacks derivatives at certain points. In these cases, Simpson's rule may give very poor results. One common way of handling this problem is by breaking up the interval [a,b] into a number of small subintervals. Simpson's rule is then applied to each subinterval, with the results being summed to produce an approximation for the integral over the entire interval. This sort of approach is termed the composite Simpson's rule.
Suppose that the interval [a, b] is split into n subintervals, with n an even number. Then the composite Simpson's rule is given by

∫_a^b f(x) dx ≈ h/3 [ f(x_0) + 4 f(x_1) + 2 f(x_2) + 4 f(x_3) + ... + 2 f(x_{n−2}) + 4 f(x_{n−1}) + f(x_n) ],

where x_i = a + ih for i = 0, 1, ..., n − 1, n with h = (b − a)/n; in particular, x_0 = a and x_n = b. The above formula can also be written as

∫_a^b f(x) dx ≈ h/3 [ f(x_0) + 2 Σ_{j=1}^{n/2−1} f(x_{2j}) + 4 Σ_{j=1}^{n/2} f(x_{2j−1}) + f(x_n) ].
The error committed by the composite Simpson's rule is bounded (in absolute value) by

(h^4/180) (b − a) max_{ξ ∈ [a, b]} |f⁽⁴⁾(ξ)|,

where h is the "step length", given by h = (b − a)/n.[4]
This formulation splits the interval [a, b] into subintervals of equal length. In practice, it is often advantageous to use subintervals of different lengths and concentrate the effort on the places where the integrand is less well-behaved. This leads to the adaptive Simpson's method.
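A minimal sketch of such an adaptive scheme follows; the function name, the halving-based stopping criterion, and the tolerance handling are illustrative choices, not a definitive implementation of the method.

```python
from math import sin, cos

def _simpson(f, a, b):
    # Basic Simpson's rule on a single interval
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def adaptive_simpson(f, a, b, tol=1e-10):
    # Subdivide until the two-half estimate agrees with the whole-interval
    # estimate to within the tolerance (a common stopping criterion).
    m = (a + b) / 2
    whole = _simpson(f, a, b)
    halves = _simpson(f, a, m) + _simpson(f, m, b)
    if abs(halves - whole) < 15 * tol:
        # Richardson-style correction: the error of `halves` is roughly
        # (halves - whole)/15
        return halves + (halves - whole) / 15
    return (adaptive_simpson(f, a, m, tol / 2) +
            adaptive_simpson(f, m, b, tol / 2))

result = adaptive_simpson(sin, 0.0, 1.0)
print(result, 1 - cos(1))  # the two values should agree to about 10 digits
```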
Python implementation of Simpson's rule
Here is an implementation of Simpson's rule in Python.
def simpson_rule(f, a, b):
    """Approximate the definite integral of f from a to b by Simpson's rule."""
    c = (a + b) / 2.0
    h3 = abs(b - a) / 6.0
    return h3 * (f(a) + 4.0 * f(c) + f(b))

# Calculates the integral of sin(x) from 0 to 1
from math import sin
print(simpson_rule(sin, 0, 1))
Here is a version of the composite Simpson's rule, also in Python. It defaults to a single partition, so it can be used equivalently to the function above.
def simpson(f, a, b, n=1):
    """Approximate the definite integral of f from a to b by the composite
    Simpson's rule, dividing the interval into n parts."""
    assert n > 0
    n = ((n + 1) >> 1) << 1  # round n up to the nearest even number
    dx = (b - a) / n
    ans = -f(a) - f(b)       # the endpoints get weight 1, not 2
    x = a
    m = 2
    for i in range(n + 1):
        ans += m * f(x)      # weights alternate 2, 4, 2, 4, ... along the nodes
        m = 6 - m
        x += dx
    return dx * ans / 3

# Calculates the integral of sin(x) from 0 to 1
from math import sin
print(simpson(sin, 0.0, 1.0, 10000))

# Show the error between 1 step and 10000 steps
print(abs(simpson(sin, 0.0, 1.0) - simpson(sin, 0.0, 1.0, 10000)))
The above two functions test the integration of sin(x) from 0 to 1, whose exact value is 1 − cos(1) = 0.45969769413... The first function gives 0.459862189871 (an error of 1.645e-4), while the second yields 0.459697694132 (an error of 1.38777878078e-15). The composite version of the rule is thus about eleven orders of magnitude more accurate in this example.
Notes
- ^ Süli and Mayers, §7.2
- ^ Atkinson, p. 256; Süli and Mayers, §7.2
- ^ Atkinson, equation (5.1.15); Süli and Mayers, Theorem 7.2
- ^ Atkinson, pp. 257–258; Süli and Mayers, §7.5
References
Simpson's rule is mentioned in many textbooks on numerical analysis:
- Atkinson, Kendall A. (1989). An Introduction to Numerical Analysis, 2nd edition, John Wiley & Sons. ISBN 0-471-50023-2.
- Burden, Richard L. and Faires, J. Douglas (2000). Numerical Analysis, 7th edition, Brooks/Cole. ISBN 0-534-38216-9.
- Süli, Endre and Mayers, David (2003). An Introduction to Numerical Analysis. Cambridge University Press. ISBN 0-521-81026-4 (hardback), ISBN 0-521-00794-1 (paperback).
External links
- Eric W. Weisstein, Simpson's Rule at MathWorld.
- Simpson's Rule for Numerical Integration
- Application of Simpson's Rule - Earthwork Excavation
- Simpson's 1/3rd rule of integration - Notes, PPT, Mathcad, Matlab, Mathematica, Maple
This article incorporates material from Code for Simpson's rule on PlanetMath, which is licensed under the GFDL.