Poisson summation formula

From Wikipedia, the free encyclopedia

The Poisson summation formula (PSF) is an equation relating the sum S(t) of a function f(t) over all of its integer translates to an equivalent sum over samples of its continuous Fourier transform. The PSF was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.

Definition

The sum S(t) of a function f(t) over all integer translates is equal to an equivalent summation over samples of its continuous Fourier transform \tilde{F}(\omega):


S(t) \ \stackrel{\mathrm{def}}{=}\   
\sum_{n=-\infty}^{\infty} f(t + n T) = 
\frac{1}{T} \sum_{m=-\infty}^{\infty} \tilde{F}(m \omega_0) \ e^{i m \omega_0 t}

where the continuous Fourier transform is defined as

 \tilde{F}(\omega) \ \stackrel{\mathrm{def}}{=}\   \int_{-\infty}^{\infty} f(t) \ e^{-i\omega t} dt

and the fundamental frequency ω0 is

 \omega_0 \ \stackrel{\mathrm{def}}{=}\   \frac{2\pi}{T}.

An alternative definition of the continuous Fourier transform (namely, the unitary convention of mathematicians) and its corresponding Poisson summation formula are given below.
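As a sanity check, the formula can be verified numerically for a Gaussian, whose transform under the convention above is known in closed form. This is an illustrative sketch, not part of the article's derivation; the values of sigma, T, and t are arbitrary choices.

```python
import cmath
import math

# Illustrative check with a Gaussian (assumed example; sigma, T, t are arbitrary).
# Under the convention F(w) = integral of f(t) exp(-i w t) dt, the Gaussian
# f(t) = exp(-t^2 / (2 sigma^2)) has F(w) = sigma * sqrt(2 pi) * exp(-(sigma w)^2 / 2).
sigma, T, t = 0.5, 1.0, 0.3
w0 = 2 * math.pi / T

def f(x):
    return math.exp(-x**2 / (2 * sigma**2))

def F(w):
    return sigma * math.sqrt(2 * math.pi) * math.exp(-(sigma * w)**2 / 2)

# Left-hand side: the periodized sum S(t).
lhs = sum(f(t + n * T) for n in range(-50, 51))
# Right-hand side: the Fourier-space sum (1/T) * sum_m F(m w0) exp(i m w0 t).
rhs = sum(F(m * w0) * cmath.exp(1j * m * w0 * t) for m in range(-50, 51)) / T
```

Both truncated sums converge rapidly for this choice of parameters, and the two sides agree to machine precision.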

Derivation of the PSF

The definition of S(t) ensures that it is a periodic function with period T. Hence it can be expanded in a Fourier series


S(t) = \sum_{m=-\infty}^{\infty} c_m e^{i m \omega_0 t}

where the fundamental frequency ω0 is defined as above and the Fourier coefficients cm are determined by


c_m \ = \   
\frac{1}{T} \int_{-T/2}^{+T/2} S(t) \ e^{-i m \omega_0 t} dt \ = 
\frac{1}{T} \sum_{n=-\infty}^{\infty} \int_{-T/2}^{+T/2} f(t + nT) \ e^{-i m \omega_0 t} \, dt.

Making a change of variables to \tau \ \stackrel{\mathrm{def}}{=}\   t + nT results in


\begin{align}
c_m & =
\frac{1}{T} \sum_{n=-\infty}^{\infty} \int_{nT - T/2}^{nT + T/2} f(\tau) \ e^{-i m \omega_{0} (\tau - nT)}  d\tau \\ & =
\frac{1}{T} \sum_{n=-\infty}^{\infty} \int_{nT - T/2}^{nT + T/2} f(\tau) \ e^{-i m \omega_{0} \tau } \underbrace{\ e^{i m n 2 \pi }}_{=1 \forall m, n} d\tau \\ & =
\frac{1}{T} \sum_{n=-\infty}^{\infty} \int_{nT - T/2}^{nT + T/2} f(\tau) \ e^{-i m \omega_{0} \tau}  d\tau \\ & = 
\frac{1}{T} \int_{-\infty}^{\infty} f(\tau) \ e^{-i m \omega_0 \tau} d\tau \ \stackrel{\mathrm{def}}{=}\   \frac{1}{T} \tilde{F}(m \omega_0).
\end{align}

Substitution of these coefficients c_m = \frac{1}{T} \tilde{F}(m \omega_0) into the Fourier series yields the Poisson summation formula.
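The coefficient identity c_m = \tilde{F}(m \omega_0)/T can itself be checked numerically, again using a Gaussian f as an assumed example, by computing the period integral for c_m with a simple midpoint rule.

```python
import cmath
import math

# Illustrative check of c_m = F(m w0)/T for a Gaussian f (assumed example);
# the closed-form transform F is the same one used in the definition above.
sigma, T = 0.5, 1.0
w0 = 2 * math.pi / T

def f(x):
    return math.exp(-x**2 / (2 * sigma**2))

def F(w):
    return sigma * math.sqrt(2 * math.pi) * math.exp(-(sigma * w)**2 / 2)

def S(t):
    # Periodized sum; this range is wide enough for machine precision here.
    return sum(f(t + n * T) for n in range(-30, 31))

m, N = 1, 2000
# Midpoint rule for (1/T) * integral over [-T/2, T/2] of S(t) exp(-i m w0 t) dt;
# for smooth periodic integrands the midpoint rule converges extremely fast.
points = [(j + 0.5) * T / N - T / 2 for j in range(N)]
cm = sum(S(t) * cmath.exp(-1j * m * w0 * t) for t in points) * (T / N) / T
```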

Applications of the PSF

At the simplest level, the PSF can be useful in evaluating integer summations such as


S \ \stackrel{\mathrm{def}}{=}\   \sum_{n=1}^{\infty} \frac{1}{n^{2}} = \frac{\pi^{2}}{6}

or


S \ \stackrel{\mathrm{def}}{=}\   -\sum_{n=1}^{\infty} \frac{(-1)^{n}}{n^{4}}= \frac{7\pi^{4}}{720}

by converting them into geometric series in Fourier space that can be summed exactly.
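These closed forms are easy to confirm by direct (if slow) numerical summation; the truncation points below are arbitrary.

```python
import math

# Direct numerical confirmation of the two sums (truncation points arbitrary).
s2 = sum(1.0 / n**2 for n in range(1, 200001))        # error of order 1/N
s4 = -sum((-1)**n / n**4 for n in range(1, 100001))   # alternating, tiny error
```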

Computationally, the PSF is useful because a slowly converging summation in real space becomes a quickly converging equivalent summation in Fourier space: a broad function in real space transforms to a narrow function in Fourier space, and vice versa. This is the essential idea behind Ewald summation.
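The tradeoff is visible with a Gaussian that is broad relative to the period (an assumed example; the parameters are arbitrary): the real-space sum needs dozens of terms, while in Fourier space the m = 0 term alone already gives the answer.

```python
import math

# Broad Gaussian relative to the period: slow in real space, fast in Fourier space.
sigma, T = 5.0, 1.0

def f(t):
    return math.exp(-t**2 / (2 * sigma**2))

def F(w):
    return sigma * math.sqrt(2 * math.pi) * math.exp(-(sigma * w)**2 / 2)

# Real space: terms matter out to |n| of several sigma/T.
real_sum = sum(f(n * T) for n in range(-30, 31))
# Fourier space: F(m w0) with w0 = 2 pi / T is negligible for every m != 0.
fourier_sum = F(0) / T
```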

The PSF using the unitary-convention continuous Fourier transform

An alternative definition of the Fourier transform is


\hat F(\omega) \ \stackrel{\mathrm{def}}{=}\  \frac1{\sqrt{2\pi}}\int_{\mathbb{R}} F(x)e^{-i\omega x}\,dx.

Using this definition, the PSF reads


\sum_{n\in\mathbb Z}F(n)={\sqrt{2\pi}}\sum_{k\in\mathbb Z}\hat F(2\pi\,k).
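A quick check (an assumed example) uses the standard Gaussian, which is its own transform under this unitary convention:

```python
import math

# F(x) = exp(-x^2 / 2) satisfies \hat F = F under the unitary convention.
def F(x):
    return math.exp(-x**2 / 2)

Fhat = F  # the Gaussian is self-dual here
lhs = sum(F(n) for n in range(-20, 21))
rhs = math.sqrt(2 * math.pi) * sum(Fhat(2 * math.pi * k) for k in range(-20, 21))
```

For this function both truncated sums are already exact to machine precision.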

Convergence conditions

Some conditions restricting F must naturally be applied to have convergence here. A useful way to get around stating those precisely is to use the language of distributions. Let δ(x) be the Dirac delta function. Then if we write

\Delta(x) \ = \  \sum_{n=-\infty}^{\infty} \delta(x - n)

summed over all integers n, we have that Δ is a distribution (a so-called Dirac comb) in good standing, because applied to any test function, which by definition decays rapidly, it yields a rapidly convergent bi-infinite sum. A neat way to restate the summation formula is then to say that

\Delta(x) is its own Fourier transform.

Again this depends on the precise normalization in the transform, but it conveys good information about how the formula behaves under scaling. For example, it is easy to see that for a constant a ≠ 0 it follows that

\Delta(ax) is, up to an overall constant depending on a, the Fourier transform of \Delta(x/a).

Therefore we can always find a spacing λ of the integers, i.e. a lattice λZ, such that placing a delta function at each of its points yields a distribution that is its own transform, and each normalization of the transform has a corresponding valid formula. It also suggests an intuitive method of proof: place a Gaussian centred at each integer, compute using the known Fourier transform of a Gaussian, and then let the width of all the Gaussians tend to zero.
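For Gaussians of fixed width this programme yields Jacobi's theta transformation law, \sum_n e^{-\pi n^2 s} = s^{-1/2} \sum_k e^{-\pi k^2 / s}, a direct instance of the PSF that can be checked numerically (the value of s below is arbitrary):

```python
import math

# Jacobi theta transformation, a direct instance of the PSF for Gaussians.
def theta(s, N=50):
    return sum(math.exp(-math.pi * n**2 * s) for n in range(-N, N + 1))

s = 0.5
lhs = theta(s)
rhs = theta(1 / s) / math.sqrt(s)
```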

Generalizations

There is a version in n dimensions that is easy to formulate. Given a lattice Λ in Rn, there is a dual lattice Λ′ (defined by vector space or Pontryagin duality, as one wishes). Then the statement is that the sums of delta functions over the points of Λ and over the points of Λ′ are Fourier transforms of each other as distributions, subject to the correct normalization.

This is applied in the theory of theta functions, and is a possible method in the geometry of numbers. In fact, in more recent work on counting lattice points in regions it is routinely used: summing the indicator function of a region D over lattice points is exactly the question, so the left-hand side of the summation formula is the quantity sought and the right-hand side is something that can be attacked by mathematical analysis.

Further generalisation to locally compact abelian groups is required in number theory. In non-commutative harmonic analysis, the idea is taken even further in the Selberg trace formula, where it takes on a much deeper character.
