Poisson summation formula
From Wikipedia, the free encyclopedia
The Poisson summation formula (PSF) is an equation relating the sum S(t) of a function f(t) over all integers to an equivalent summation of its continuous Fourier transform. The PSF was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.
Definition
The sum S(t) of a function f(t) over all integers is equal to an equivalent summation of its continuous Fourier transform:

    S(t) \equiv \sum_{n=-\infty}^{\infty} f(t + nT) = \frac{1}{T} \sum_{m=-\infty}^{\infty} F(m\omega_0)\, e^{i m \omega_0 t},

where the continuous Fourier transform is defined as

    F(\omega) \equiv \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt

and the fundamental frequency ω0 is

    \omega_0 \equiv \frac{2\pi}{T}.
An alternative definition of the continuous Fourier transform (the unitary convention common among mathematicians) and its corresponding Poisson summation formula are given below.
Derivation of the PSF
The definition of S(t) ensures that it is a periodic function with period T. Hence, it expands into a Fourier series

    S(t) = \sum_{m=-\infty}^{\infty} c_m\, e^{i m \omega_0 t},

where the fundamental frequency ω0 is defined as above and the Fourier coefficients c_m are determined by

    c_m = \frac{1}{T} \int_{0}^{T} S(t)\, e^{-i m \omega_0 t}\, dt = \frac{1}{T} \sum_{n=-\infty}^{\infty} \int_{0}^{T} f(t + nT)\, e^{-i m \omega_0 t}\, dt.

Making the change of variables u = t + nT in each term (and noting that e^{i m \omega_0 n T} = e^{2\pi i m n} = 1) results in

    c_m = \frac{1}{T} \int_{-\infty}^{\infty} f(u)\, e^{-i m \omega_0 u}\, du = \frac{1}{T}\, F(m\omega_0).
Substitution of these coefficients into the Fourier series yields the Poisson summation formula.
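The identity can also be checked numerically. The following Python sketch is an illustrative addition: the Gaussian f, its width σ, the period T, the evaluation point and the truncation limits are arbitrary choices, made only because the transform of a Gaussian is known in closed form.

import numpy as np

# Numerical check of  S(t) = sum_n f(t + n*T) = (1/T) * sum_m F(m*w0) * exp(i*m*w0*t)
# with F(w) = integral f(t) exp(-i*w*t) dt and w0 = 2*pi/T.
# f is a Gaussian, so F(w) = sigma*sqrt(2*pi)*exp(-(sigma*w)**2/2) exactly.

sigma, T = 0.7, 2.0                      # illustrative width and period
w0 = 2 * np.pi / T
f = lambda t: np.exp(-t**2 / (2 * sigma**2))
F = lambda w: sigma * np.sqrt(2 * np.pi) * np.exp(-(sigma * w)**2 / 2)

t = 0.3                                  # arbitrary evaluation point
n = np.arange(-50, 51)                   # truncation; both tails are negligible here

lhs = np.sum(f(t + n * T))                                 # real-space sum S(t)
rhs = np.sum(F(n * w0) * np.exp(1j * n * w0 * t)) / T      # Fourier-space sum

print(lhs, rhs.real)                     # the two values agree to machine precision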
Applications of the PSF
At the simplest level, the PSF can be useful in evaluating integer summations such as

    \sum_{n=-\infty}^{\infty} \frac{1}{n^{2} + a^{2}}

or

    \sum_{n=-\infty}^{\infty} \frac{1}{n^{4} + a^{4}}

by converting them into geometric series in Fourier space that can be summed exactly.
Computationally, the PSF is useful since a slowly converging summation in real space is converted into a quickly converging equivalent summation in Fourier space. (A broad function in real space becomes a narrow function in Fourier space and vice versa.) This is the essential idea behind Ewald summation.
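Both points can be illustrated on the first sum above: it can be evaluated by brute-force truncation in real space, via the geometric series that the PSF produces in Fourier space, and from the resulting closed form (π/a) coth(πa). The Python sketch below, including the choice a = 1.3 and the truncation limits, is an illustrative addition.

import numpy as np

# Evaluate  sum_{n=-inf}^{inf} 1/(n**2 + a**2)  two ways.
# With f(t) = 1/(t**2 + a**2) and F(w) = (pi/a)*exp(-a*|w|) in this article's convention,
# the PSF with T = 1, t = 0 gives a geometric series:
#   sum_n f(n) = sum_m F(2*pi*m) = (pi/a) * (1 + 2*sum_{m>=1} exp(-2*pi*a*m))
#              = (pi/a) * coth(pi*a).

a = 1.3                                   # illustrative value

n = np.arange(-10_000, 10_001)            # real-space terms decay only like 1/n^2
direct = np.sum(1.0 / (n**2 + a**2))

m = np.arange(1, 10)                      # Fourier-space terms decay like exp(-2*pi*a*m)
geometric = (np.pi / a) * (1.0 + 2.0 * np.sum(np.exp(-2 * np.pi * a * m)))

closed_form = (np.pi / a) / np.tanh(np.pi * a)

print(direct, geometric, closed_form)
# Fewer than ten Fourier-space terms already match the closed form to machine
# precision; the 20001-term real-space sum is still off by roughly 2e-4.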
The PSF using the unitary-convention continuous Fourier transform
An alternative definition of the Fourier transform is

    F(\nu) \equiv \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \nu t}\, dt.

Using this definition, the PSF reads

    \sum_{n=-\infty}^{\infty} f(t + nT) = \frac{1}{T} \sum_{k=-\infty}^{\infty} F\!\left(\frac{k}{T}\right) e^{2\pi i k t / T},

which for T = 1 and t = 0 reduces to

    \sum_{n=-\infty}^{\infty} f(n) = \sum_{k=-\infty}^{\infty} F(k).
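This last form is easy to check numerically. The sketch below is an illustrative addition; it uses the test function f(t) = exp(−t²), whose transform in this convention is F(ν) = √π · exp(−π²ν²), together with an arbitrary cutoff.

import numpy as np

# Check  sum_n f(n) = sum_k F(k)  in the unitary convention for f(t) = exp(-t**2),
# whose transform is F(nu) = sqrt(pi) * exp(-pi**2 * nu**2).

n = np.arange(-20, 21)                                    # generous truncation
lhs = np.sum(np.exp(-n**2))                               # sum of f over the integers
rhs = np.sum(np.sqrt(np.pi) * np.exp(-np.pi**2 * n**2))   # sum of F over the integers

print(lhs, rhs)                                           # the two values agree to machine precision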
Convergence conditions
Some conditions restricting f must naturally be applied to have convergence here. A useful way to get around stating those precisely is to use the language of distributions. Let δ(x) be the Dirac delta function. Then if we write

    \Delta(x) = \sum_{n=-\infty}^{\infty} \delta(x - n),

summed over all integers n, we have that Δ is a distribution (a so-called Dirac comb) in good standing, because applied to any test function we get a bi-infinite sum that has very small 'tails'. Then a neat way to restate the summation formula is to say that
- Δ is its own Fourier transform.
Again this depends on precise normalization in the transform, but it conveys good information about how the formula behaves under rescaling. For example, it is easy to see that for a constant a ≠ 0 it would follow that

    \sum_{n=-\infty}^{\infty} \delta(x - na)

- is the Fourier transform of

    \frac{1}{|a|} \sum_{n=-\infty}^{\infty} \delta\!\left(x - \frac{n}{a}\right).
Therefore we can always find some spacing λZ of the integers, such that placing a delta-function at each of those points is its own transform, and each normalization will have a corresponding valid formula. It also suggests a method of proof that is intuitive: put instead a Gaussian centred at each integer, calculate using the known Fourier transform of a Gaussian, and then let the width of all the Gaussians become small.
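A minimal numerical version of that Gaussian argument looks as follows; the widths, grid size and the unitary normalization are assumptions made only for this sketch. Each delta is replaced by a unit-mass Gaussian of width σ, and the Fourier coefficients of the resulting period-1 function visibly tend to 1, i.e. towards a unit-weight comb, as σ shrinks.

import numpy as np

# Replace each delta at an integer by a unit-mass Gaussian of width sigma:
#   g_sigma(x) = sum_n (2*pi*sigma**2)**-0.5 * exp(-(x - n)**2 / (2*sigma**2)).
# g_sigma has period 1, and its k-th Fourier coefficient equals the (unitary)
# Fourier transform of one Gaussian at frequency k, namely exp(-2*pi**2*sigma**2*k**2).
# As sigma -> 0 every coefficient tends to 1: the comb is its own transform.

def fourier_coefficient(k, sigma, n_grid=4096):
    x = np.arange(n_grid) / n_grid
    # periodize a few neighbouring Gaussians (enough for the widths used below)
    g = sum(np.exp(-(x - n)**2 / (2 * sigma**2)) for n in range(-5, 6))
    g /= np.sqrt(2 * np.pi) * sigma
    return np.mean(g * np.exp(-2j * np.pi * k * x))       # approximates c_k on [0, 1)

for sigma in (0.2, 0.1, 0.05, 0.02):
    print(sigma, [round(fourier_coefficient(k, sigma).real, 4) for k in range(4)])
# Each row matches exp(-2*pi**2*sigma**2*k**2) and creeps towards [1.0, 1.0, 1.0, 1.0].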
Generalizations
There is a version in n dimensions that is easy to formulate. Given a lattice Λ in Rn, there is a dual lattice Λ′ (defined by vector space or Pontryagin duality, as one wishes). Then the statement is that the sums of delta-functions at each point of Λ and at each point of Λ′ are again Fourier transforms of each other as distributions, subject to correct normalization.
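As a concrete sketch of the setup (the basis matrix, the exp(2πi⟨x, y⟩) pairing and the normalization comment below are illustrative choices, not notation from this article): if Λ is generated by the columns of a basis matrix B, then Λ′ is generated by the columns of the inverse transpose of B.

import numpy as np

# A lattice L in R^n generated by the columns of B; its dual L' (with the
# exp(2*pi*i*<x, y>) pairing) is generated by the columns of inv(B).T, so that
# every pairing <b_i, b'_j> is an integer.  With this convention the lattice
# form of the PSF reads  sum_{x in L} f(x) = (1/|det B|) * sum_{y in L'} F(y).

B = np.array([[2.0, 1.0],
              [0.0, 1.0]])              # basis of a lattice in R^2 (illustrative)
B_dual = np.linalg.inv(B).T             # basis of the dual lattice

print(B_dual)
print(B.T @ B_dual)                     # identity matrix: the pairing is integral
print(abs(np.linalg.det(B)))            # covolume, the normalization factor above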
This lattice form of the formula is applied in the theory of theta functions, and is a possible method in the geometry of numbers. In fact, in more recent work on counting lattice points in regions it is routinely used: summing the indicator function of a region D over lattice points is exactly the quantity in question, so that the LHS of the summation formula is what is sought and the RHS is something that can be attacked by mathematical analysis.
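For instance, applying the formula to the Gaussian t ↦ exp(−π t² x) gives the classical transformation law θ(1/x) = √x · θ(x) for the theta function θ(x) = Σ_n exp(−π n² x), which the following sketch checks numerically (the test value x = 0.37 and the cutoff are arbitrary choices):

import numpy as np

# theta(x) = sum_n exp(-pi * n**2 * x);  the PSF applied to the Gaussian gives
#   theta(1/x) = sqrt(x) * theta(x)   for x > 0.

def theta(x, n_max=200):
    n = np.arange(-n_max, n_max + 1)
    return np.sum(np.exp(-np.pi * n**2 * x))

x = 0.37
print(theta(1 / x), np.sqrt(x) * theta(x))    # the two values agree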
Further generalisation to locally compact abelian groups is required in number theory. In non-commutative harmonic analysis, the idea is taken even further in the Selberg trace formula, but takes on a much deeper character.