Euler–Maclaurin formula


The formula

If n is a natural number and f(x) is a smooth (meaning: sufficiently often differentiable) function defined for all real numbers x between 0 and n, then the integral

I=\int_0^n f(x)\,dx

can be approximated by the sum


S=\frac{f(0)}{2}+f(1)+\cdots+f(n-1)+\frac{f(n)}{2}=\frac{f(0)+f(n)}{2}+\sum_{k=1}^{n-1}f(k)

(see trapezoidal rule). The Euler–Maclaurin formula provides expressions for the difference between the sum and the integral in terms of the higher derivatives f^{(k)} evaluated at the endpoints of the interval, 0 and n. For any natural number p, we have

S-I=\sum_{k=1}^p\frac{B_{k+1}}{(k+1)!}\left(f^{(k)}(n)-f^{(k)}(0)\right)+R

where B1 = −1/2, B2 = 1/6, B3 = 0, B4 = −1/30, B5 = 0, B6 = 1/42, B7 = 0, B8 = −1/30, ... are the Bernoulli numbers, and R is an error term which is normally small for suitable values of p.
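
Since B3 = B5 = B7 = 0, only every other term of the sum contributes; writing out the first few terms explicitly gives

S-I=\frac{f'(n)-f'(0)}{12}-\frac{f'''(n)-f'''(0)}{720}+\frac{f^{(5)}(n)-f^{(5)}(0)}{30240}-\cdots+R.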

By employing the substitution rule, one can also adapt this formula to functions f defined on some other interval of the real line.
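
For example, if g is a smooth function on [a, b], h = (b − a)/n and x_j = a + jh, then applying the formula above to f(x) = g(a + hx) expresses the error of the composite trapezoidal rule in powers of h:

h\left(\frac{g(a)+g(b)}{2}+\sum_{j=1}^{n-1}g(x_j)\right)-\int_a^b g(x)\,dx=\sum_{k=1}^p\frac{B_{k+1}\,h^{k+1}}{(k+1)!}\left(g^{(k)}(b)-g^{(k)}(a)\right)+h\,R,

where R is the remainder term for f as above.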

The remainder term

The remainder term R is given by

 R = (-1)^{p} \int_0^n f^{(p+1)}(x) {B_{p+1}(x-\lfloor x \rfloor) \over (p+1)!}\,dx,

where B_i(x-\lfloor x \rfloor) are the periodic Bernoulli polynomials. The remainder term can be estimated as

\left|R\right|\leq\frac{2\zeta(p+1)}{(2\pi)^{p+1}}\int_0^n\left|f^{(p+1)}(x)\right|\,dx,

where \zeta denotes the Riemann zeta function.

Applications

Sums involving a polynomial

If f is a polynomial and p is large enough, then the remainder term vanishes. For instance, if f(x) = x^3, we can choose p = 2 to obtain, after simplification,

\sum_{i=0}^n i^3=\left(\frac{n(n+1)}{2}\right)^2

(see Faulhaber's formula).
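
To spell out this computation: with f(x) = x^3 and p = 2, the B_3 term is zero and the remainder R = \int_0^n f'''(x)\,B_3(x-\lfloor x\rfloor)/3!\,dx = \int_0^n B_3(x-\lfloor x\rfloor)\,dx vanishes, because B_3 integrates to zero over [0, 1]. The formula therefore gives

\frac{f(0)+f(n)}{2}+\sum_{i=1}^{n-1}i^3-\int_0^n x^3\,dx=\frac{f'(n)-f'(0)}{12}=\frac{n^2}{4},

so that

\sum_{i=1}^{n-1}i^3=\frac{n^4}{4}+\frac{n^2}{4}-\frac{n^3}{2}=\left(\frac{n(n-1)}{2}\right)^2,

and adding n^3 yields the identity above.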

Numerical integration

The Euler–Maclaurin formula is also used for detailed error analysis in numerical quadrature; in particular, extrapolation methods depend on it.
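
For illustration, here is a minimal Python sketch (the function names are illustrative, not from any particular library) of the composite trapezoidal rule combined with the first two Euler–Maclaurin correction terms, assuming the first and third derivatives of the integrand are available:

    import math

    def corrected_trapezoid(f, df, d3f, a, b, n):
        # Composite trapezoidal rule plus the first two Euler-Maclaurin
        # correction terms (from B_2/2! = 1/12 and B_4/4! = -1/720),
        # which use the first and third derivatives of f at the endpoints.
        h = (b - a) / n
        interior = sum(f(a + j * h) for j in range(1, n))
        trap = h * (f(a) / 2 + interior + f(b) / 2)
        corr = -(h ** 2 / 12) * (df(b) - df(a)) + (h ** 4 / 720) * (d3f(b) - d3f(a))
        return trap + corr

    # Example: integrate exp(x) over [0, 1]; the exact value is e - 1.
    print(corrected_trapezoid(math.exp, math.exp, math.exp, 0.0, 1.0, 10))
    print(math.e - 1)

For smooth integrands the two correction terms typically improve the accuracy of the trapezoidal sum by several orders of magnitude; extrapolation methods such as Romberg integration exploit the same error expansion in even powers of h without requiring the derivatives explicitly.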

Asymptotic expansion of sums

In the context of computing asymptotic expansions of sums and series, usually the most useful form of the Euler–Maclaurin formula is

\sum_{n=a}^{b}f(n) \sim \int_{a}^{b} f(x)\,dx+\frac{f(a)+f(b)}{2}+\sum_{k=1}^{\infty}\,\frac{B_{2k}}{(2k)!}\left(f^{(2k-1)}(b)-f^{(2k-1)}(a)\right)\, ,

where a and b are integers. Often the expansion remains valid even after taking the limits a\to-\infty or b\to+\infty, or both. In many cases the integral on the right-hand side can be evaluated in closed form in terms of elementary functions even though the sum on the left-hand side cannot. Then all the terms in the asymptotic series can be expressed in terms of elementary functions. For example,

\sum_{k=0}^{\infty}\frac{1}{(z+k)^2} \sim \underbrace{\int_{0}^{\infty}\frac{1}{(z+k)^{2}}\,dk}_{=1/z}+\frac{1}{2z^{2}}
+\sum_{t=1}^{\infty}\frac{B_{2t}}{z^{2t+1}}\, .

Here the left-hand side is equal to \psi^{(1)}(z), the first-order polygamma function defined by \psi^{(1)}(z)=\frac{d^{2}}{dz^{2}}\ln \Gamma(z); the gamma function \Gamma(z) is equal to (z-1)! when z is a positive integer. The formula thus yields an asymptotic expansion for \psi^{(1)}(z), which in turn serves as the starting point for one of the derivations of precise error estimates for Stirling's approximation of the factorial function.
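
Another standard example is obtained by taking f(x) = 1/x, a = 1 and b = n, which gives the classical asymptotic expansion of the harmonic numbers,

\sum_{k=1}^{n}\frac{1}{k} \sim \ln n+\gamma+\frac{1}{2n}-\sum_{k=1}^{\infty}\frac{B_{2k}}{2k\,n^{2k}}=\ln n+\gamma+\frac{1}{2n}-\frac{1}{12n^{2}}+\frac{1}{120n^{4}}-\cdots,

where \gamma is the Euler–Mascheroni constant, which collects the constant terms produced by the expansion.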

Proofs

Derivation by mathematical induction

We follow the argument given by Apostol.[1]

The Bernoulli polynomials Bn(x), n = 0, 1, 2, ... may be defined recursively as follows:

B_0(x) = 1, \,
 B_n'(x) = nB_{n-1}(x)\mbox{ and }\int_0^1 B_n(x)\,dx = 0\mbox{ for }n \ge 1.
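
For example, the recursion determines B_1(x): from B_1'(x) = B_0(x) = 1 we get B_1(x) = x + c, and the normalization \int_0^1 (x+c)\,dx = \tfrac{1}{2}+c = 0 forces c = -\tfrac{1}{2}.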

The first several of these are

 B_1(x)=x-1/2, \quad B_2(x)=x^2-x+1/6,
 B_3(x) = x^3-\frac{3}{2}x^2+\frac{1}{2}x, \quad B_4(x)=x^4-2x^3+x^2-\frac{1}{30}, \dots

The values Bn(1) are the Bernoulli numbers (note that B1(1) = 1/2, whereas the convention used earlier takes B1 = −1/2; the discrepancy occurs only at n = 1). For n ≥ 2, we have Bn(0) = Bn(1).

The periodic Bernoulli functions Pn are given by

 P_n(x) = B_n(x - \lfloor x\rfloor)\mbox{ for all real }x, \,

i.e., they agree with the Bernoulli polynomials on the interval (0, 1) and are periodic with period 1.
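
In particular, P_1 is the sawtooth function

P_1(x) = x - \lfloor x\rfloor - \tfrac{1}{2},

which jumps from \tfrac{1}{2} down to -\tfrac{1}{2} at each integer.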

Consider the integral

 \int_k^{k+1} f(x)\,dx = \int u\,dv,

where

\begin{align}
u &{}= f(x), \\
du &{}= f'(x)\,dx, \\
dv &{}= P_0(x)\,dx \quad (\mbox{since }P_0(x)=1), \\
v &{}= P_1(x).
\end{align}

Integrating by parts, we get

\begin{align}
uv - \int v\,du &{}= \Big[f(x)P_1(x) \Big]_k^{k+1} - \int_k^{k+1} f'(x)P_1(x)\,dx \\  \\
&{}= {f(k) + f(k+1) \over 2} - \int_k^{k+1} f'(x)P_1(x)\,dx.
\end{align}

Summing from k = 1 to k = n − 1, we get

 \int_1^n f(x)\, dx = {f(1) \over 2} + f(2) + \cdots + f(n-1) + {f(n) \over 2} - \int_1^n f'(x) P_1(x)\,dx.

Adding {f(1) + f(n) \over 2} to both sides and rearranging, we have

 \sum_{k=1}^n f(k) = \int_1^n f(x)\,dx + {f(1) + f(n) \over 2} + \int_1^n f'(x) P_1(x)\,dx.\qquad (1)

The last two terms therefore give the error when the integral is taken to approximate the sum.

Now consider

 \int_k^{k+1} f'(x)P_1(x)\,dx = \int u\,dv,

where

\begin{align}
u &{}= f'(x), \\
du &{}= f''(x)\,dx, \\
dv &{}= P_1(x)\,dx, \\
v &{}= P_2(x)/2.
\end{align}

Integrating by parts again, we get

\begin{align}
uv - \int v\,du &{}= \left[ {f'(x)P_2(x) \over 2} \right]_k^{k+1} - {1 \over 2}\int_k^{k+1} f''(x)P_2(x)\,dx \\  \\
&{}= {f'(k+1) - f'(k) \over 12} -{1 \over 2}\int_k^{k+1} f''(x)P_2(x)\,dx.
\end{align}

Summing from k = 1 to k = n − 1, and replacing the last integral in (1) with what we have just shown to be equal to it, we have

 \sum_{k=1}^n f(k) = \int_1^n f(x)\,dx + {f(1) + f(n) \over 2} + {f'(n) - f'(1) \over 12} - {1 \over 2}\int_1^n f''(x)P_2(x)\,dx.

This process can be iterated. In this way one obtains a proof of the Euler–Maclaurin summation formula by mathematical induction, in which the induction step relies on integration by parts and on the identities for periodic Bernoulli functions.
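
The result of p such integrations by parts is

\sum_{k=1}^n f(k) = \int_1^n f(x)\,dx + \frac{f(1)+f(n)}{2} + \sum_{j=1}^{p}\frac{B_{j+1}}{(j+1)!}\left(f^{(j)}(n)-f^{(j)}(1)\right) + \frac{(-1)^{p}}{(p+1)!}\int_1^n f^{(p+1)}(x)P_{p+1}(x)\,dx,

which is the Euler–Maclaurin formula on the interval [1, n], with the remainder term in the form stated earlier.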

In order to get bounds on the size of the error when the sum is approximated by the integral, we note that the even-indexed Bernoulli polynomials attain their maximum absolute values on the interval [0, 1] at the endpoints (see D. H. Lehmer in References below), and that the value B2n(1) is the Bernoulli number B2n, so that |P2n(x)| ≤ |B2n| for all x.

Derivation by functional analysis

The Euler–Maclaurin formula can be understood as an application of ideas from Hilbert spaces and functional analysis. Let Bn(x) be the Bernoulli polynomials. A set of functions dual to the Bernoulli polynomials is given by

\tilde{B}_n(x)=\frac{(-1)^{n+1}}{n!} \left[ 
\delta^{(n-1)}(1-x) - \delta^{(n-1)}(x) \right]

where δ is the Dirac delta function. The above is a formal notation for the idea of taking derivatives at a point; thus one has

\int_0^1 \tilde{B}_n(x) f(x)\, dx = \frac{1}{n!} \left[ 
f^{(n-1)}(1) - f^{(n-1)}(0) \right]

for n > 0 and an arbitrary, sufficiently differentiable function f(x) on the unit interval. For n = 0, one defines \tilde{B}_0(x)=1. The Bernoulli polynomials, together with their duals, form a biorthogonal system on the unit interval: one has

\int_0^1 \tilde{B}_m(x) B_n(x)\, dx = \delta_{mn}

and

\sum_{n=0}^\infty B_n(x) \tilde{B}_n(y) = \delta (x-y).
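
For example, for n = 1 the first of these relations can be checked directly: \tilde{B}_1(x) = \delta(1-x)-\delta(x), so that \int_0^1 \tilde{B}_1(x)B_m(x)\,dx = B_m(1)-B_m(0), which equals 1 for m = 1 and 0 for every other m.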

The Euler–Maclaurin summation formula then follows as an integral over the second of these relations. One has

f(x)=\int_0^1 \sum_{n=0}^\infty B_n(x) \tilde{B}_n(y) f(y)\, dy
=\int_0^1 f(y)\,dy + 
\sum_{n=1}^{N} B_n(x) \frac{1}{n!} 
\left[ f^{(n-1)}(1) - f^{(n-1)}(0) \right] 
- \frac{1}{(N+1)!} \int_0^1 B_{N+1}(x-y) f^{(N)}(y)\, dy.

Then taking x = 0 and rearranging terms, one obtains the traditional formula, together with the error term. Note that the Bernoulli numbers are defined as Bn = Bn(0), and that these vanish for odd n greater than 1. This derivation assumes that f(x) is sufficiently differentiable and well behaved; specifically, that f can be approximated by polynomials, as is the case, for example, when f is a real analytic function.

The Euler–Maclaurin summation formula can thus be seen to be an outcome of the representation of functions on the unit interval by the direct product of the Bernoulli polynomials and their duals. Note, however, that the representation is not complete on the set of square-integrable functions; the expansion in terms of the Bernoulli polynomials has a non-trivial kernel. In particular, sin(2πnx) lies in the kernel: its integral over the unit interval vanishes, as does the difference of its derivatives at the endpoints.

References

  1. ^ Tom M. Apostol, "An Elementary View of Euler's Summation Formula", American Mathematical Monthly, volume 106, number 5, pages 409–418 (May 1999). doi:10.2307/2589145.
  • Pierre Gaspard, "r-adic one-dimensional maps and the Euler summation formula", Journal of Physics A, volume 25 (letter), pages L483–L485 (1992). (Describes the eigenfunctions of the transfer operator for the Bernoulli map.)
  • Xavier Gourdon and Pascal Sebah, Introduction on Bernoulli's numbers (2002).
  • D. H. Lehmer, "On the Maxima and Minima of Bernoulli Polynomials", American Mathematical Monthly, volume 47, pages 533–538 (1940).