Poisson summation formula

In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.

Forms of the equation

For appropriate functions f, the Poisson summation formula may be stated as:

\sum_{n=-\infty}^\infty f(n)=\sum_{k=-\infty}^\infty \hat f\left(k\right),     (Eq.1)

where \hat f is the Fourier transform[note 1] of f; that is, \hat f(\nu) = \mathcal{F}\{f(x)\}.
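As a numerical illustration (not drawn from the cited sources), the following Python sketch checks Eq.1 for the Gaussian f(x) = e^{-\pi a x^2}, whose Fourier transform under the convention above is \hat f(\nu) = a^{-1/2} e^{-\pi \nu^2/a}; the value a = 2 and the truncation range are arbitrary choices.

    import numpy as np

    # Check Eq.1 for f(x) = exp(-pi*a*x^2), whose Fourier transform is
    # fhat(nu) = a**(-0.5) * exp(-pi*nu^2/a), with the convention fhat(nu) = integral f(x)*exp(-i*2*pi*nu*x) dx.
    a = 2.0
    n = np.arange(-50, 51)    # truncation; the omitted tails are far below machine precision

    lhs = np.sum(np.exp(-np.pi * a * n**2))               # sum of f(n) over the integers
    rhs = np.sum(np.exp(-np.pi * n**2 / a)) / np.sqrt(a)  # sum of fhat(k) over the integers

    print(lhs, rhs)   # the two printed values agree to machine precision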


With the substitution g(xP)\ \stackrel{\text{def}}{=}\ f(x) and the Fourier transform property \mathcal{F}\{g(x P)\} = \frac{1}{P} \cdot \hat g\left(\frac{\nu}{P}\right) (for P > 0), Eq.1 becomes:

\sum_{n=-\infty}^\infty g(nP)=\frac{1}{P}\sum_{k=-\infty}^\infty \hat g\left(\frac{k}{P}\right)     (Stein & Weiss 1971).     (Eq.2)
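This rescaled form can likewise be checked numerically (an illustrative sketch only). Here g(x) = e^{-a|x|} with \hat g(\nu) = \frac{2a}{a^2 + 4\pi^2\nu^2}; both sides also equal \coth(aP/2) in closed form, and the slow 1/k^2 decay of \hat g is why a large truncation is used on the right-hand side.

    import numpy as np

    # Check Eq.2 for g(x) = exp(-a*|x|), with ghat(nu) = 2a / (a^2 + 4*pi^2*nu^2).
    # Both sides equal coth(a*P/2), a classical identity.
    a, P = 1.0, 1.5

    n = np.arange(-2000, 2001)
    lhs = np.sum(np.exp(-a * P * np.abs(n)))                      # sum of g(nP)

    k = np.arange(-200000, 200001)
    rhs = np.sum(2 * a / (a**2 + 4 * np.pi**2 * (k / P)**2)) / P  # (1/P) * sum of ghat(k/P)

    print(lhs, rhs, 1 / np.tanh(a * P / 2))  # all three agree, rhs up to its ~1e-6 truncation error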


With another definition, s(t+x)\ \stackrel{\text{def}}{=}\ g(x), and the transform property \mathcal{F}\{s(t+x)\} = \hat s(\nu)\cdot e^{i 2\pi \nu t}, Eq.2 becomes a periodic summation (with period P) and its equivalent Fourier series:

\underbrace{\sum_{n=-\infty}^{\infty} s(t + nP)}_{s_P(t)} = \sum_{k=-\infty}^{\infty} \underbrace{\frac{1}{P}\cdot \hat s\left(\frac{k}{P}\right)}_{S[k]}\ e^{i 2\pi \frac{k}{P} t }     (Pinsky 2002; Zygmund 1968).     (Eq.3)
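The following sketch compares the two sides of Eq.3 pointwise for s(t) = e^{-\pi t^2} (so \hat s(\nu) = e^{-\pi\nu^2}); the period P = 1.5 and the truncation limits are arbitrary illustrative choices.

    import numpy as np

    # Compare the periodization s_P(t) with its Fourier series (Eq.3)
    # for s(t) = exp(-pi*t^2), whose transform is shat(nu) = exp(-pi*nu^2).
    P = 1.5
    t = np.linspace(0.0, P, 7)        # a few points in one period
    idx = np.arange(-30, 31)          # truncation for both sums

    s_P = sum(np.exp(-np.pi * (t + n * P)**2) for n in idx)                       # left-hand side
    series = sum(np.exp(-np.pi * (k / P)**2) / P * np.exp(2j * np.pi * k * t / P)
                 for k in idx)                                                    # right-hand side

    print(np.max(np.abs(s_P - series)))   # ~1e-16: the two sides agree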

Similarly, the periodic summation of a function's Fourier transform has this Fourier series equivalent:

\sum_{k=-\infty}^{\infty} \hat s(\nu + k/T) = \sum_{n=-\infty}^{\infty} T\cdot s(nT)\ e^{-i 2\pi n T \nu} \equiv \mathcal{F}\left \{ \sum_{n=-\infty}^{\infty} T\cdot s(nT)\ \delta(t-nT)\right \},     (Eq.4)

where T represents the time interval at which a function s(t) is sampled, and 1/T is the sample rate (samples per second).
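In the same spirit, this sketch compares the two sides of Eq.4 for s(t) = e^{-\pi t^2} sampled with interval T = 0.5 (an arbitrary choice): the periodized spectrum on the left equals the transform of the scaled samples on the right.

    import numpy as np

    # Compare the periodized spectrum with the transform of the sampled signal (Eq.4)
    # for s(t) = exp(-pi*t^2), shat(nu) = exp(-pi*nu^2), and sampling interval T.
    T = 0.5
    nu = np.linspace(-2.0, 2.0, 9)   # a few test frequencies

    k = np.arange(-20, 21)[:, None]
    lhs = np.sum(np.exp(-np.pi * (nu + k / T)**2), axis=0)    # sum_k shat(nu + k/T)

    n = np.arange(-40, 41)[:, None]
    rhs = np.sum(T * np.exp(-np.pi * (n * T)**2)
                 * np.exp(-2j * np.pi * n * T * nu), axis=0)  # sum_n T*s(nT)*exp(-i*2*pi*n*T*nu)

    print(np.max(np.abs(lhs - rhs)))   # ~1e-16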

Distributional formulation

These equations can be interpreted in the language of distributions (Córdoba 1988; Hörmander 1983, §7.2) for a function f whose derivatives are all rapidly decreasing (see Schwartz function). Using the Dirac comb distribution and its Fourier series:

\sum_{n=-\infty}^\infty \delta(x-nT) \equiv \sum_{k=-\infty}^\infty  \frac{1}{T}\cdot e^{i 2\pi \frac{k}{T} x} \quad\stackrel{\mathcal{F}}{\Longleftrightarrow}\quad \frac{1}{T}\cdot \sum_{k=-\infty}^{\infty} \delta (\nu-k/T).     (Eq.7)


In other words, the periodization of a Dirac delta \delta, resulting in a Dirac comb, corresponds to the discretization of its spectrum, which is constantly one. Hence, the result is again a Dirac comb, but with reciprocal increments.

Eq.1 readily follows:


\begin{align}
\sum_{k=-\infty}^\infty \hat f(k)
&= \sum_{k=-\infty}^\infty \left(\int_{-\infty}^{\infty} f(x)\ e^{-i 2\pi k x} dx \right)
= \int_{-\infty}^{\infty} f(x) \underbrace{\left(\sum_{k=-\infty}^\infty e^{-i 2\pi k x}\right)}_{\sum_{n=-\infty}^\infty \delta(x-n)} dx \\
&= \sum_{n=-\infty}^\infty  \left(\int_{-\infty}^{\infty} f(x)\ \delta(x-n)\ dx \right) = \sum_{n=-\infty}^\infty f(n).
\end{align}

Similarly:


\begin{align}
\sum_{k=-\infty}^{\infty} \hat s(\nu + k/T)
&= \sum_{k=-\infty}^{\infty} \mathcal{F}\left \{ s(t)\cdot e^{-i 2\pi\frac{k}{T}t}\right \}\\
&= \mathcal{F} \bigg \{s(t)\underbrace{\sum_{k=-\infty}^{\infty} e^{-i 2\pi\frac{k}{T}t}}_{T \sum_{n=-\infty}^{\infty} \delta(t-nT)}\bigg \}
= \mathcal{F}\left \{\sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot \delta(t-nT)\right \}\\
&= \sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot \mathcal{F}\left \{\delta(t-nT)\right \}
= \sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot e^{-i 2\pi nT \nu}.
\end{align}

Derivation

We can also prove that Eq.3 holds in the sense that if s(t) ∈ L1(R), then the right-hand side is the (possibly divergent) Fourier series of the left-hand side. This proof may be found in either (Pinsky 2002) or (Zygmund 1968). It follows from the dominated convergence theorem that s_P(t) exists and is finite for almost every t. Furthermore, s_P is integrable on the interval [0,P]. The right-hand side of Eq.3 has the form of a Fourier series, so it is sufficient to show that the Fourier series coefficients of s_P(t) are \frac{1}{P} \hat s\left(\frac{k}{P}\right). Proceeding from the definition of the Fourier coefficients we have:

\begin{align}
S[k]\ &\stackrel{\text{def}}{=}\ \frac{1}{P}\int_0^{P} s_P(t)\cdot e^{-i 2\pi \frac{k}{P} t}\, dt\\
&=\ \frac{1}{P}\int_0^{P}
     \left(\sum_{n=-\infty}^{\infty} s(t + nP)\right)
     \cdot e^{-i 2\pi\frac{k}{P} t}\, dt\\
&=\ \frac{1}{P}
     \sum_{n=-\infty}^{\infty}
        \int_0^{P} s(t + nP)\cdot e^{-i 2\pi\frac{k}{P} t}\, dt,
\end{align}
where the interchange of summation with integration is once again justified by dominated convergence. With a change of variables (τ = t + nP) this becomes:

\begin{align}
S[k] =
\frac{1}{P} \sum_{n=-\infty}^{\infty} \int_{nP}^{nP + P} s(\tau) \ e^{-i 2\pi \frac{k}{P} \tau} \ \underbrace{e^{i 2\pi k n}}_{1}\,d\tau
\ =\ \frac{1}{P} \int_{-\infty}^{\infty} s(\tau) \ e^{-i 2\pi \frac{k}{P} \tau} d\tau = \frac{1}{P}\cdot \hat s\left(\frac{k}{P}\right)
\end{align}
      QED.
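As a numerical cross-check of this computation (purely illustrative), one can build the periodization s_P on a grid, evaluate the integral defining S[k] by a Riemann sum (which is highly accurate for smooth periodic integrands), and compare the result with \frac{1}{P}\hat s\left(\frac{k}{P}\right); the Gaussian test function and the parameters below are arbitrary choices.

    import numpy as np

    # Evaluate S[k] = (1/P) * integral_0^P s_P(t) * exp(-i*2*pi*k*t/P) dt by a Riemann sum
    # and compare with (1/P) * shat(k/P), for s(t) = exp(-pi*t^2).
    P, N = 2.0, 400
    t = np.arange(N) * (P / N)                                   # grid on [0, P)
    s_P = sum(np.exp(-np.pi * (t + n * P)**2) for n in range(-25, 26))

    for k in range(6):
        S_k = np.mean(s_P * np.exp(-2j * np.pi * k * t / P))     # (1/P) * Riemann sum of the integral
        print(k, S_k.real, np.exp(-np.pi * (k / P)**2) / P)      # the two columns agree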

Applicability

Eq.3 holds provided s(t) is a continuous integrable function which satisfies

|s(t)| + |\hat{s}(t)| \le C (1+|t|)^{-1-\delta}

for some C, δ > 0 and every t (Grafakos 2004; Stein & Weiss 1971). Note that such an s(t) is uniformly continuous; this, together with the decay assumption on s, shows that the series defining s_P converges uniformly to a continuous function. Eq.3 then holds in the strong sense that both sides converge uniformly and absolutely to the same limit (Stein & Weiss 1971).

Eq.3 holds in a pointwise sense under the strictly weaker assumption that s has bounded variation and

2\cdot s(t)=\lim_{\varepsilon\to 0} s(t+\varepsilon) + \lim_{\varepsilon\to 0} s(t-\varepsilon)     (Zygmund 1968).

The Fourier series on the right-hand side of Eq.3 is then understood as a (conditionally convergent) limit of symmetric partial sums.

As shown above, Eq.3 holds under the much less restrictive assumption that s(t) is in L1(R), but then it is necessary to interpret it in the sense that the right-hand side is the (possibly divergent) Fourier series of s_P(t) (Zygmund 1968). In this case, one may extend the region where equality holds by considering summability methods such as Cesàro summability. When convergence is interpreted in this way, Eq.2 holds under the less restrictive condition that g(x) is integrable and 0 is a point of continuity of g_P(x). However, Eq.2 may fail to hold even when both g and \hat{g} are integrable and continuous, and the sums converge absolutely (Katznelson 1976).

Applications

Method of images

In partial differential equations, the Poisson summation formula provides a rigorous justification for the fundamental solution of the heat equation with absorbing rectangular boundary by the method of images. Here the heat kernel on R2 is known, and that of a rectangle is determined by taking the periodization. The Poisson summation formula similarly provides a connection between Fourier analysis on Euclidean spaces and on the tori of the corresponding dimensions (Grafakos 2004). In one dimension, the resulting solution is called a theta function.
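For the closely related periodic-boundary case (a simplified sketch, not the absorbing-boundary construction itself), the heat kernel on a circle of circumference L can be computed either as a sum of translated Gaussian images of the heat kernel on R, or as the Fourier series that Eq.3 associates with that periodization; the two computations agree and both give a theta function.

    import numpy as np

    # Heat kernel on a circle of circumference L at time tau, computed two ways:
    #  (1) method of images: periodize the heat kernel on R, K(x) = exp(-x^2/(4*tau)) / sqrt(4*pi*tau)
    #  (2) the Fourier series given by Eq.3, using Khat(nu) = exp(-4*pi^2*nu^2*tau)
    L, tau = 1.0, 0.05
    x = np.linspace(0.0, L, 9)

    images = sum(np.exp(-(x + m * L)**2 / (4 * tau)) / np.sqrt(4 * np.pi * tau)
                 for m in range(-30, 31))
    series = sum(np.exp(-4 * np.pi**2 * (k / L)**2 * tau) / L * np.exp(2j * np.pi * k * x / L)
                 for k in range(-30, 31))

    print(np.max(np.abs(images - series)))   # ~1e-15: both give the same theta-function kernel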

Sampling

In the statistical study of time-series, if  f is a function of time, then looking only at its values at equally spaced points of time is called "sampling." In applications, typically the function  f is band-limited, meaning that there is some cutoff frequency f_o such that the Fourier transform is zero for frequencies exceeding the cutoff: \hat{f}(\xi)=0 for |\xi|>f_o. For band-limited functions, choosing the sampling rate 2f_o guarantees that no information is lost: since  \hat f can be reconstructed from these sampled values, then, by Fourier inversion, so can  f. This leads to the Nyquist–Shannon sampling theorem (Pinsky 2002).
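As an illustrative sketch (with arbitrary parameters), the following reconstructs a band-limited signal from its samples via the Whittaker–Shannon interpolation formula; the match is limited only by the truncation of the sample sum.

    import numpy as np

    f0 = 1.0                # band limit (cutoff frequency), an illustrative choice
    T = 1.0 / (2 * f0)      # sampling interval at the rate 2*f0

    def s(t):
        # band-limited test signal: its spectrum is a triangle supported on [-f0, f0]
        return np.sinc(f0 * t) ** 2

    n = np.arange(-4000, 4001)          # truncated range of sample indices
    samples = s(n * T)

    def reconstruct(t):
        # Whittaker-Shannon interpolation from the samples
        return np.sum(samples * np.sinc((t - n * T) / T))

    for t in (0.3, 1.7, 2.45):
        print(s(t), reconstruct(t))     # values agree up to the truncation error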

Ewald summation

Computationally, the Poisson summation formula is useful since a slowly converging summation in real space can be converted into an equivalent, quickly converging summation in Fourier space. (A broad function in real space becomes a narrow function in Fourier space and vice versa.) This is the essential idea behind Ewald summation.
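A minimal sketch of this idea (not an actual Ewald implementation): for a broad Gaussian, the direct real-space sum needs hundreds of terms, while the equivalent Fourier-space sum given by Poisson summation is dominated by a single term.

    import numpy as np

    # f(x) = exp(-pi*x^2/sigma^2) is broad in real space for large sigma, so sum_n f(n)
    # converges slowly; its transform fhat(nu) = sigma*exp(-pi*sigma^2*nu^2) is narrow,
    # so the equivalent sum over k is dominated by the k = 0 term.
    sigma = 40.0

    n = np.arange(-200, 201)        # hundreds of terms needed in real space
    real_space = np.sum(np.exp(-np.pi * n**2 / sigma**2))

    k = np.arange(-2, 3)            # a handful of terms suffice in Fourier space
    fourier_space = np.sum(sigma * np.exp(-np.pi * sigma**2 * k**2))

    print(real_space, fourier_space)   # both ~ sigma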

Lattice points in a sphere

The Poisson summation formula may be used to derive Landau's asymptotic formula for the number of lattice points in a large Euclidean sphere. It can also be used to show that if an integrable function f and \hat f both have compact support, then f = 0 (Pinsky 2002).

Number theory

In number theory, Poisson summation can also be used to derive a variety of functional equations including the functional equation for the Riemann zeta function.[1]

One important such use of Poisson summation concerns theta functions: periodic summations of Gaussians. Put q = e^{i\pi \tau }, for \tau a complex number in the upper half plane, and define the theta function:

 \theta ( \tau) =  \sum_n q^{n^2}.

The relation between  \theta (-1/\tau) and  \theta (\tau) turns out to be important for number theory, since this kind of relation is one of the defining properties of a modular form. By choosing g = e^{-\pi x^2} in Eq.2, and using the fact that \hat g = e^{-\pi \nu^2}, one gets immediately

  \theta \left({-1\over\tau}\right) = \sqrt{ \tau \over i} \theta (\tau)

by putting 1/P = \sqrt{ \tau/i}, first for \tau on the positive imaginary axis (so that P > 0) and then for all \tau in the upper half plane by analytic continuation.
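This functional equation can be checked numerically; the sketch below (with an arbitrary τ in the upper half plane and the principal branch of the square root) evaluates both sides of the identity above.

    import numpy as np

    def theta(tau, N=40):
        # theta(tau) = sum_n q^(n^2) with q = exp(i*pi*tau), truncated at |n| <= N
        n = np.arange(-N, N + 1)
        q = np.exp(1j * np.pi * tau)
        return np.sum(q ** (n ** 2))

    tau = 0.37 + 1.2j                       # an arbitrary point in the upper half plane
    lhs = theta(-1 / tau)
    rhs = np.sqrt(tau / 1j) * theta(tau)    # principal square root gives the correct branch here

    print(lhs, rhs, abs(lhs - rhs))         # the difference is at rounding level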

It follows from this that  \theta^8 has a simple transformation property under  \tau \mapsto {-1/ \tau}, and this can be used to prove Jacobi's formula for the number of different ways to express an integer as the sum of eight squares.

Generalizations

The Poisson summation formula holds in Euclidean space of arbitrary dimension. Let Λ be the lattice in Rd consisting of points with integer coordinates; Λ may be identified with the character group, or Pontryagin dual, of the torus Rd/Λ. For a function ƒ in L1(Rd), consider the series given by summing the translates of ƒ by elements of Λ:

\sum_{\nu\in\Lambda} f(x+\nu).

Theorem. For ƒ in L1(Rd), the above series converges pointwise almost everywhere, and thus defines a periodic function Pƒ on the torus Rd/Λ. Pƒ lies in L1(Rd/Λ) with ||Pƒ||1 ≤ ||ƒ||1. Moreover, for all ν in Λ, the Fourier coefficient of Pƒ at ν (Fourier transform on Rd/Λ) equals ƒ̂(ν) (Fourier transform on Rd).

When ƒ is in addition continuous, and both ƒ and ƒ̂ decay sufficiently fast at infinity, then one can "invert" the domain back to Rd and make a stronger statement. More precisely, if

|f(x)| + |\hat{f}(x)| \le C (1+|x|)^{-d-\delta}

for some C, δ > 0, then

\sum_{\nu\in\Lambda} f(x+\nu) = \sum_{\nu\in\Lambda}\hat{f}(\nu)e^{2\pi i x\cdot\nu},     (Stein & Weiss 1971, VII §2)

where both series converge absolutely and uniformly in x. When d = 1 and x = 0, this reduces to Eq.1 above.
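This multidimensional statement can also be checked numerically; the sketch below verifies it for d = 2 with the Gaussian f(x) = e^{-\pi a |x|^2}, whose transform is \hat f(\xi) = a^{-1} e^{-\pi |\xi|^2/a}, at an arbitrarily chosen point x.

    import numpy as np

    # Verify the d = 2 Poisson summation formula for f(x) = exp(-pi*a*|x|^2),
    # fhat(xi) = (1/a) * exp(-pi*|xi|^2/a), at an arbitrary point x.
    a = 1.3
    x = np.array([0.3, -0.45])

    m1, m2 = np.meshgrid(np.arange(-15, 16), np.arange(-15, 16))   # the lattice Z^2, truncated
    lhs = np.sum(np.exp(-np.pi * a * ((x[0] + m1)**2 + (x[1] + m2)**2)))
    rhs = np.sum(np.exp(-np.pi * (m1**2 + m2**2) / a) / a
                 * np.exp(2j * np.pi * (x[0] * m1 + x[1] * m2)))

    print(lhs, rhs.real, abs(rhs.imag))   # real parts agree; the imaginary part is ~0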

More generally, a version of the statement holds if Λ is replaced by a more general lattice in Rd. The dual lattice Λ′ can be defined as a subset of the dual vector space or alternatively by Pontryagin duality. Then the statement is that the sum of delta functions at the points of Λ and the sum of delta functions at the points of Λ′ are again Fourier transforms of one another as distributions, subject to correct normalization.
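A sketch of this general-lattice version, under the common normalization in which Λ = LZ^d, the dual lattice is Λ′ = (L^{-1})^T Z^d, and the Fourier-side sum carries a factor 1/|det L|; the matrix L below is an arbitrary example, and the Gaussian e^{-\pi|x|^2} is used because it equals its own Fourier transform.

    import numpy as np

    # General-lattice Poisson summation (one common normalization):
    #   sum_{v in Lambda} f(v) = (1/|det L|) * sum_{u in Lambda'} fhat(u),
    # with Lambda = L*Z^d and dual lattice Lambda' = inv(L).T * Z^d.
    # Test function: f(x) = exp(-pi*|x|^2), which equals its own Fourier transform.
    L = np.array([[1.0, 0.5],
                  [0.0, 1.2]])
    Ldual = np.linalg.inv(L).T

    m = np.array(np.meshgrid(np.arange(-20, 21), np.arange(-20, 21))).reshape(2, -1)  # integer pairs
    gauss_sum = lambda pts: np.sum(np.exp(-np.pi * np.sum(pts**2, axis=0)))

    lhs = gauss_sum(L @ m)                                 # sum over the lattice Lambda
    rhs = gauss_sum(Ldual @ m) / abs(np.linalg.det(L))     # sum over the dual lattice, / |det L|

    print(lhs, rhs)                                        # the two values agree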

This is applied in the theory of theta functions, and is a possible method in the geometry of numbers. In more recent work on counting lattice points in regions it is routinely used: summing the indicator function of a region D over lattice points is exactly the quantity of interest, so the left-hand side of the summation formula is what is sought and the right-hand side is something that can be attacked by mathematical analysis.

Selberg trace formula

Further generalisation to locally compact abelian groups is required in number theory. In non-commutative harmonic analysis, the idea is taken even further in the Selberg trace formula, but takes on a much deeper character.

A series of mathematicians applying harmonic analysis to number theory, most notably Martin Eichler, Atle Selberg, Robert Langlands, and James Arthur, have generalised the Poisson summation formula to the Fourier transform on non-commutative locally compact reductive algebraic groups G with a discrete subgroup \Gamma such that G/\Gamma has finite volume. For example, G can be the real points of GL_n and \Gamma can be the integral points of GL_n. In this setting, G plays the role of the real number line in the classical version of Poisson summation, and \Gamma plays the role of the integers n that appear in the sum. The generalised version of Poisson summation is called the Selberg trace formula, and has played a role in proving many cases of Artin's conjecture and in Wiles's proof of Fermat's Last Theorem. The left-hand side of Eq.1 becomes a sum over irreducible unitary representations of G, and is called "the spectral side," while the right-hand side becomes a sum over conjugacy classes of \Gamma, and is called "the geometric side."

The Poisson summation formula is the archetype for vast developments in harmonic analysis and number theory.

Notes

  1. ^ The Fourier transform convention used throughout this article is \hat f(\nu)\ \stackrel{\text{def}}{=}\ \int_{-\infty}^{\infty} f(x)\ e^{-i 2\pi \nu x}\, dx, as used in the derivation of Eq.1 above.

References

  1. H. M. Edwards (1974). Riemann's Zeta Function. Academic Press. ISBN 0-486-41740-9. pp. 209–211.
