Nyquist–Shannon sampling theorem

Hypothetical spectrum of a bandlimited signal as a function of frequency

The Nyquist–Shannon sampling theorem is a fundamental result in information theory, in particular in telecommunications and signal processing. It is commonly called the Shannon sampling theorem, and is also known by combinations of the discoverers' names (Nyquist–Shannon–Kotelnikov, Whittaker–Shannon–Kotelnikov, Whittaker–Nyquist–Kotelnikov–Shannon, WKS, etc.), as well as the cardinal theorem of interpolation theory. It is often referred to as simply the sampling theorem. See the historical background section below.

Sampling is the process of converting a signal (for example, a function of continuous time or space) into a numeric sequence (a function of discrete time or space).

The theorem states that

Exact reconstruction of a continuous-time baseband signal from its samples is possible if the signal is bandlimited and the sampling frequency is greater than twice the signal bandwidth.

The theorem also leads to an effective reconstruction formula.


Introduction

A signal is bandlimited if it contains no energy at frequencies higher than some bandlimit or bandwidth B. A signal that is bandlimited is constrained in how rapidly it changes in time, and therefore in how much detail it can convey between discrete instants of time. The sampling theorem means that the uniformly spaced discrete samples are a complete representation of the signal if this bandwidth is less than half the sampling rate.

To formalize these concepts, let x(t) represent a continuous-time signal and X(f) be the continuous Fourier transform of that signal (which exists if x(t) is square-integrable):

X(f) \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} x(t)\, e^{-2\pi i f t}\, dt

The signal x(t) is bandlimited to a one-sided baseband bandwidth B if:

X(f) = 0 \quad \text{for all } |f| > B

Then the condition for exact reconstructability from samples at a uniform sampling rate f_s (in samples per unit time) is:

f_s > 2B

or equivalently:

B < f_s / 2

2B is called the Nyquist rate and is a property of the bandlimited signal, while f_s / 2 is called the Nyquist frequency and is a property of the sampling system.

The time between successive samples is referred to as the sampling interval

T \stackrel{\mathrm{def}}{=} \frac{1}{f_s}

and the samples of x(t) are denoted by:

x[n] \stackrel{\mathrm{def}}{=} x(nT), \quad n \in \mathbb{Z} \quad \text{(integers)}

The sampling theorem leads to a procedure for reconstructing the original x(t) from the samples x[n], and states sufficient conditions for such reconstruction to be exact.

The sampling process

From a signal processing perspective, the theorem describes two processes: a sampling process, in which a continuous-time signal is converted to a discrete-time signal, and a reconstruction process, in which the continuous signal is recovered from the discrete signal.

The continuous signal varies over time (or space, as in a digitized image, or another independent variable in some other application), and the sampling process is performed by simply measuring the continuous signal's value every T units of time (or space), which is called the sampling interval. In practice, for signals that are a function of time, the sampling interval is typically quite small, on the order of milliseconds, microseconds, or less. This results in a sequence of numbers, called samples, which represents the original signal. Each sample is associated with the specific point in time where it was measured. The reciprocal of the sampling interval, 1/T, is the sampling frequency, fs, measured in samples per unit time. If T is expressed in seconds, then fs is expressed in Hz.

The reconstruction process is an interpolation process that mathematically defines a continuous-time signal, x(t), from the discrete samples x[n], including at times in between the sample instants, nT.

The normalized sinc function: sin(πx) / (πx), showing the central peak at x = 0 and zero-crossings at the other integer values of x.
  • The procedure: Each sample is multiplied by the sinc function, scaled so that the zero-crossings of the sinc function occur at the sampling instants and so that the sinc function's central point is shifted to the time of that sample, nT. All of these shifted and scaled functions are then added together to recover the original signal. The scaled and time-shifted sinc functions are continuous, so their sum is also continuous, which means that the result of this operation is indeed a continuous signal. This procedure is represented by the Whittaker–Shannon interpolation formula.
  • The condition: The signal obtained from this reconstruction process will have no frequencies higher than one-half the sampling frequency. This reconstructed signal will match the original signal if the original signal contains no frequencies equal to or above half the sampling frequency; that is, if the sampling frequency exceeds twice the highest frequency in the original signal. This condition is called the Nyquist criterion or sometimes the Raabe condition.

Note that if the original signal contains a frequency component exactly equal to one-half the sampling rate, this condition is not satisfied, and the resulting reconstructed signal may have a component at that frequency but the amplitude and phase of that component will not, in general, match the original component.

This reconstruction or interpolation using sinc functions is not the only interpolation scheme, and indeed, is practically impossible because it requires summing an infinite number of terms. However, it is the interpolation method that exactly reconstructs any given bandlimited x(t) with any bandlimit B<1/(2T); any other method that does so is formally equivalent to it.
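
As a concrete illustration, the interpolation formula is easy to approximate numerically by truncating the infinite sum to the finitely many samples available. The following is a minimal sketch; the tone frequency, sample rate, and block length are illustrative assumptions, not values prescribed by the theorem:

  import numpy as np

  def sinc_reconstruct(samples, T, t):
      # Whittaker-Shannon interpolation: sum of scaled, time-shifted sinc
      # functions, truncated to the finitely many samples available.
      n = np.arange(len(samples))
      # np.sinc is the normalized sinc: sin(pi*u)/(pi*u)
      return np.array([np.sum(samples * np.sinc((ti - n * T) / T)) for ti in t])

  # Illustrative example: a 3 Hz tone sampled at fs = 10 Hz (above the 6 Hz Nyquist rate)
  fs = 10.0
  T = 1.0 / fs
  n = np.arange(64)
  x_n = np.cos(2 * np.pi * 3.0 * n * T)

  t = np.linspace(1.0, 5.0, 500)      # interior times, away from the truncation edges
  x_hat = sinc_reconstruct(x_n, T, t)
  print(np.max(np.abs(x_hat - np.cos(2 * np.pi * 3.0 * t))))  # small residual from truncation

The residual error here comes entirely from truncating the infinite sum; with more samples on either side of the evaluation interval it shrinks further.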

Practical considerations

A few consequences can be drawn from the theorem:

  • If it is known that the signal which we sample has a certain highest frequency B, the theorem gives us a lower bound on the sampling frequency to assure perfect reconstruction. This lower bound to the sampling frequency, 2B, is called the Nyquist rate.
  • If instead the sampling frequency is known, the theorem gives us an upper bound for frequency components, B<fs/2, of the signal to allow for perfect reconstruction. This upper bound is the Nyquist frequency, denoted fN.
  • Both of these cases imply that the signal to be sampled must be bandlimited; that is, any component of this signal which has a frequency above a certain bound should be zero, or at least sufficiently close to zero to allow us to neglect its influence on the resulting reconstruction. In the first case, the condition of bandlimitation of the sampled signal can be accomplished by assuming a model of the signal which can be analysed in terms of the frequency components it contains; for example, sounds that are made by a speaking human normally contain very small frequency components at or above 10 kHz and it is then sufficient to sample such an audio signal with a sampling frequency of at least 20 kHz. For the second case, we have to assure that the sampled signal is bandlimited such that frequency components at or above half of the sampling frequency can be neglected. This is usually accomplished by means of a suitable low-pass filter; for example, if it is desired to sample speech waveforms at 8 kHz, the signals should first be lowpass filtered to below 4 kHz.
  • In practice, neither of the two statements of the sampling theorem described above can be completely satisfied, and neither can the reconstruction formula be precisely implemented. The reconstruction process that involves scaled and delayed sinc functions can be described as ideal. It cannot be realized in practice, since it implies that each sample contributes to the reconstructed signal at almost all time points, requiring summing an infinite number of terms. Instead, some type of approximation of the sinc functions, finite in length, has to be used. The error that corresponds to the sinc-function approximation is referred to as interpolation error. Practical digital-to-analog converters produce neither scaled and delayed sinc functions nor ideal impulses (that, if ideally low-pass filtered, would yield the original signal), but a sequence of scaled and delayed rectangular pulses. This practical piecewise-constant output can be modeled as a zero-order hold filter driven by the sequence of scaled and delayed Dirac impulses referred to in the mathematical basis section below. A shaping filter is sometimes used after the DAC with zero-order hold to make a better overall approximation.
  • Furthermore, in practice, a sampled signal that is "time-limited", or finite length, can never be fully bandlimited. This means that even if an ideal reconstruction could be made, the reconstructed signal would not be exactly the original signal. The error that corresponds to the failure of bandlimitation is referred to as aliasing.
  • The sampling theorem does not say what happens when the conditions and procedures are not exactly met, but its proof suggests an analytical framework in which the non-ideality can be studied. A designer of a system that deals with sampling and reconstruction processes needs a thorough understanding of the signal to be sampled, in particular its frequency content, the sampling frequency, how the signal is reconstructed in terms of interpolation, and the requirement for the total reconstruction error, including aliasing and interpolation error. These properties and parameters may need to be carefully tuned in order to obtain a useful system.

Aliasing

Hypothetical spectrum of a properly sampled bandlimited signal (blue) and images (green) that do not overlap. A "brick-wall" low-pass filter can remove the images and leave the original spectrum, thus recovering the original signal from the samples.

If the sampling condition is not satisfied, then frequencies will overlap; that is, frequencies above half the sampling rate will be reconstructed as, and appear as, frequencies below half the sampling rate. The resulting distortion is called aliasing; the reconstructed signal is said to be an alias of the original signal, in the sense that it has the same set of sample values.

Top: hypothetical spectrum of an insufficiently sampled bandlimited signal (blue), X(f), where the images (green) overlap. These overlapping edges or "tails" of the images add, creating a spectrum unlike the original. Bottom: hypothetical spectrum of a marginally sufficiently sampled bandlimited signal (blue), XA(f), where the images (green) narrowly do not overlap. But the overall sampled spectrum of XA(f) is identical to the overall inadequately sampled spectrum of X(f) (top) because the sum of baseband and images is the same in both cases. The discrete sampled signals xA[n] and x[n] are also identical. It is not possible, just from examining the spectra (or the sampled signals), to tell the two situations apart. If this were an audio signal, xA[n] and x[n] would sound the same, and the presumed "properly" sampled xA[n] would be the alias of x[n], since the spectrum XA(f) masquerades as the spectrum X(f).

For a sinusoidal component of exactly half the sampling frequency, the component will in general alias to another sinusoid of the same frequency, but with a different phase and amplitude.

To prevent or reduce aliasing, two things can be done:

  1. Increase the sampling rate, to above twice some or all of the frequencies that are aliasing.
  2. Introduce an anti-aliasing filter or make the anti-aliasing filter more stringent.

The anti-aliasing filter restricts the bandwidth of the signal to satisfy the condition for proper sampling. Such a restriction works in theory, but is not precisely satisfiable in reality, because realizable filters will always allow some leakage of high frequencies. However, the leakage energy can be made small enough that the aliasing effects are negligible.
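
The alias relationship is easy to verify numerically. In this minimal sketch (the 10 Hz rate and the 3 Hz and 7 Hz tones are arbitrary illustrative choices), a tone above the Nyquist frequency produces exactly the same samples as its low-frequency alias:

  import numpy as np

  fs = 10.0                              # sampling rate in Hz
  n = np.arange(20)
  t = n / fs

  x_low = np.cos(2 * np.pi * 3.0 * t)    # 3 Hz: below fs/2 = 5 Hz
  x_high = np.cos(2 * np.pi * 7.0 * t)   # 7 Hz: above fs/2, aliases to 10 - 7 = 3 Hz

  # Identical sample sequences: any reconstruction of the 7 Hz tone
  # from these samples yields the 3 Hz alias instead.
  print(np.allclose(x_low, x_high))      # True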

Application to multivariable signals and images

Subsampled image showing a Moiré pattern

The sampling theorem is usually formulated for functions of a single variable. Consequently, the theorem is directly applicable to time-dependent signals and is normally formulated in that context. However, the sampling theorem can be extended in a straightforward way to functions of arbitrarily many variables. Grayscale images, for example, are often represented as two-dimensional arrays (or matrices) of real numbers representing the relative intensities of pixels (picture elements) located at the intersections of row and column sample locations. As a result, images require two independent variables, or indices, to specify each pixel uniquely — one for the row, and one for the column.

Color images typically consist of a composite of three separate grayscale images, one to represent each of the three primary colors — red, green, and blue, or RGB for short. Other colorspaces using 3-vectors for colors include HSV, LAB, XYZ, etc. Some colorspaces such as cyan, magenta, yellow, and black (CMYK) may represent color by four dimensions. All of these are treated as vector-valued functions over a two-dimensional sampled domain.

Similar to one-dimensional discrete-time signals, images can also suffer from aliasing if the sampling resolution, or pixel density, is inadequate. For example, a digital photograph of a striped shirt with high spatial frequencies (in other words, small distances between the stripes) can exhibit aliasing when it is sampled by the camera's image sensor. The aliasing appears as a Moiré pattern. The "solution" of higher sampling in the spatial domain in this case would be to move closer to the shirt or to use a higher-resolution sensor.

Another example is shown to the right in the brick patterns. The top image shows the effects when the sampling theorem's condition is not satisfied. When software rescales an image (the same process that creates the thumbnail shown in the lower image) it, in effect, runs the image through a low-pass filter first and then downsamples the image to result in a smaller image that does not exhibit the Moiré pattern. The top image is what happens when the image is downsampled without low-pass filtering: aliasing results.

The top image was created by zooming out in GIMP and then taking a screenshot of it. The likely reason that this causes a banding problem is that the zooming feature simply downsamples without low-pass filtering (probably for performance reasons), since the zoomed image is intended for on-screen display rather than printing or saving.
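
The effect can be reproduced with a synthetic test image; a small sketch, where the stripe period, decimation factor, and Gaussian filter width are all illustrative assumptions:

  import numpy as np
  from scipy.ndimage import gaussian_filter

  # Synthetic "striped shirt": diagonal stripes with a period of about 7 pixels
  y, x = np.mgrid[0:512, 0:512]
  img = 0.5 + 0.5 * np.sin(2 * np.pi * (x + y) / 7.0)

  naive = img[::8, ::8]                     # downsample with no filtering: Moire pattern
  blurred = gaussian_filter(img, sigma=4)   # crude low-pass filter first ...
  proper = blurred[::8, ::8]                # ... then downsample: stripes average away

  # The unfiltered result retains spurious low-frequency structure (aliasing),
  # while the filtered result is nearly the uniform local mean.
  print(naive.std(), proper.std())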

The application of the sampling theorem to images should be made with care. For example, the sampling process in any standard image sensor (CCD or CMOS camera) is relatively far from the ideal sampling which would measure the image intensity at a single point. Instead, these devices have a relatively large sensor area at each sample point in order to obtain a sufficient amount of light. Also, it is not obvious that the analog image intensity function which is sampled by the sensor device is bandlimited. Note, however, that the non-ideal sampling is itself a type of low-pass filter, although far from one that ideally removes high-frequency components. Despite these departures from the theorem's assumptions, it can still be used to describe the basics of downsampling and upsampling of images.

Downsampling

When a signal is downsampled, the sampling theorem can be invoked via the artifice of resampling a hypothetical continuous-time reconstruction. The Nyquist criterion must still be satisfied with respect to the new lower sampling frequency in order to avoid aliasing. To meet the requirements of the theorem, the signal must usually pass through a low-pass filter of appropriate cutoff frequency as part of the downsampling operation. This low-pass filter, which prevents aliasing, is called an anti-aliasing filter.
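
A brief sketch of such an operation using SciPy's decimate, which applies an anti-aliasing low-pass filter before discarding samples (the rates and tones here are illustrative choices):

  import numpy as np
  from scipy.signal import decimate

  fs = 1000.0                                # original sampling rate in Hz
  t = np.arange(0, 1, 1 / fs)
  x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)

  # Downsample by 4: new rate 250 Hz, new Nyquist frequency 125 Hz.
  # The anti-aliasing filter removes the 300 Hz component; without it,
  # that component would alias down to |300 - 250| = 50 Hz and corrupt the signal.
  y = decimate(x, 4)
  print(len(x), len(y))                      # 1000 250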

Critical frequency

A family of sinusoids at the critical frequency, all having the same sample sequences of alternating +1 and −1. That is, they are all aliases of each other, even though their frequency is not above half the sample rate.

The Nyquist rate is defined as twice the bandwidth of the continuous-time signal. The sampling frequency must be strictly greater than the Nyquist rate of the signal to achieve unambiguous representation of the signal. This constraint is equivalent to requiring that the system's Nyquist frequency (also known as the critical frequency, and equal to half the sample rate) be strictly greater than the bandwidth of the signal. If the signal contains a frequency component at precisely the Nyquist frequency, then the corresponding component of the sample values cannot carry sufficient information to reconstruct the Nyquist-frequency component in the continuous-time signal, because of phase ambiguity. In such a case, there are infinitely many different sinusoids (of varying amplitude and phase) at the Nyquist frequency that are consistent with the discrete samples.

As an example, consider this family of signals at the critical frequency:

x(t) = \frac{1}{\cos(\theta)} \cos\!\left(2\pi \frac{f_s}{2} t + \theta\right)

where the samples

x[n] \stackrel{\mathrm{def}}{=} x(nT) = \frac{\cos(\pi n + \theta)}{\cos(\theta)} = \cos(\pi n) = (-1)^n

are in every case just alternating −1 and +1, for any phase θ. There is no way to determine either the amplitude or the phase of the continuous-time sinusoid x(t) from which x[n] was sampled. This ambiguity is the reason for the strict inequality in the sampling theorem's condition.
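
A quick numerical check of this ambiguity (the θ values are arbitrary choices):

  import numpy as np

  n = np.arange(8)
  for theta in (0.0, 0.5, 1.0, -0.8):
      # Samples of (1/cos(theta)) * cos(2*pi*(fs/2)*t + theta) at t = n/fs
      x_n = np.cos(np.pi * n + theta) / np.cos(theta)
      print(theta, np.round(x_n, 12))  # always the alternating sequence 1, -1, 1, -1, ...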

Mathematical basis for the theorem

A Dirac comb, modulated by the sample values of a signal

The Nyquist–Shannon sampling theorem states that, given a bandlimited continuous-time signal x(t) that is uniformly sampled at a sufficient rate, even if all of the information in the signal between samples is discarded, there remains sufficient information in the samples that the original continuous-time signal can be mathematically reconstructed perfectly from only those discrete samples. To prove this, a different function is first constructed, conceptually, from the whole original signal, but preserving information from just the sample instants:

x_s(t) = x(t) \cdot \left( T \cdot \Delta_T(t) \right)
x(t) is the original continuous-time signal.
x_s(t) is a function that depends only on the values of x(t) at discrete moments of time.
Δ_T(t) is the sampling operator, called the Dirac comb, which, being periodic with period T, can be formally expanded in a Fourier series:
T \cdot \Delta_T(t) \stackrel{\mathrm{def}}{=} T \sum_{n=-\infty}^{\infty} \delta(t - nT)
= T \cdot \frac{1}{T} \sum_{k=-\infty}^{\infty} e^{i 2\pi k t/T}
= \sum_{k=-\infty}^{\infty} e^{i 2\pi k f_s t}
f_s = 1/T is the sampling frequency and the fundamental frequency of the periodic function Δ_T(t).
δ(t − nT) is a Dirac impulse delayed to time nT.
The (implied) limit in the Fourier summation is not in the pointwise sense but in the sense of tempered distributions; see also the Dirichlet kernel.

Since the Dirac impulse is zero except where its argument is zero, ΔT(t) is zero except for values of t at the sampling instants nT, for integer n. Therefore xs(t) is also zero for all t except the sampling instants nT. Multiplying x(t) by ΔT(t) effectively discards all of the information between sampling instants and retains information only at the sampling instants nT. xs(t) can be represented in terms of the samples:

x_s(t) = x(t) \cdot T \sum_{n=-\infty}^{\infty} \delta(t - nT)
= T \sum_{n=-\infty}^{\infty} x(t) \cdot \delta(t - nT)
= T \sum_{n=-\infty}^{\infty} x(nT) \cdot \delta(t - nT)
= T \sum_{n=-\infty}^{\infty} x[n] \cdot \delta(t - nT)

where x[n] = x(nT) are the samples. The sequence of sample impulses xs(t) can also be written in terms of the Fourier series of the Dirac comb:

x_s(t) = x(t) \cdot \sum_{k=-\infty}^{\infty} e^{i 2\pi k f_s t}
= \sum_{k=-\infty}^{\infty} x(t) \cdot e^{i 2\pi k f_s t}

Using the frequency shifting property of the continuous Fourier transform,

X_s(f) \stackrel{\mathrm{def}}{=} \mathcal{F}\left\{ x_s(t) \right\} = \int_{-\infty}^{\infty} x_s(t)\, e^{-i 2\pi f t}\, dt
= \mathcal{F}\left\{ \sum_{k=-\infty}^{\infty} x(t) \cdot e^{i 2\pi k f_s t} \right\}
= \sum_{k=-\infty}^{\infty} \mathcal{F}\left\{ x(t) \cdot e^{i 2\pi k f_s t} \right\}
= \sum_{k=-\infty}^{\infty} X(f - k f_s)

where X(f) is the Fourier transform of x(t). This says that the spectrum of the baseband signal being sampled is shifted and repeated at all integer multiples of the sampling frequency, fs. These repeated copies are called images of the original signal spectrum.
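
The images can be observed numerically by approximating continuous time with a dense grid and the Dirac comb with a spike train; a sketch, with grid sizes and frequencies chosen only for illustration:

  import numpy as np

  fs_fine = 1000.0                     # dense grid standing in for continuous time
  t = np.arange(0, 4, 1 / fs_fine)
  x = np.cos(2 * np.pi * 5.0 * t)      # 5 Hz baseband tone

  fs = 50.0                            # actual sampling rate
  step = int(fs_fine / fs)
  comb = np.zeros_like(t)
  comb[::step] = 1.0                   # crude stand-in for the Dirac comb
  xs = x * comb

  X = np.abs(np.fft.rfft(xs))
  f = np.fft.rfftfreq(len(t), 1 / fs_fine)
  # Spectral peaks appear at 5 Hz and at its images k*fs +/- 5 Hz: 45, 55, 95, 105, ...
  print(f[np.flatnonzero(X > 0.5 * X.max())])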

Now constrain x(t) to be bandlimited to B (that is, X(f) = 0 for all |f| > B), and consider what condition precludes overlap of the adjacent images X(f − kfs):

right edge of kth image of X(f) < left edge of (k+1)th image:
k f_s + B < (k+1) f_s - B = k f_s + f_s - B
B < f_s - B
2B < f_s = \frac{1}{T}

With that condition satisfied, there is no overlap of images in Xs(f), and X(f) (and thus x(t)) can be reconstructed from Xs(f) (or xs(t)) by low-pass filtering out all of the images of X(f) in Xs(f) except for the original image at the baseband. To do that, fs > 2B is required (to prevent overlap), and the frequency response of the reconstruction filter H(f) must be:

H(f) = \begin{cases}1 & |f| \le B \\ 0 & |f| \ge f_s - B \end{cases}

The reconstruction low-pass filter transition band is between B and fs-B and the filter response need not be precisely defined in that region (since there is no non-zero spectrum in that region). However, the worst case is when the bandwidth B is virtually as large as the Nyquist frequency fs/2 and in that worst case, the reconstruction filter H(f) must be:

H(f) = \mathrm{rect} \left(\frac{f}{f_s} \right) = \begin{cases}1 & |f| < \frac{f_s}{2} \\ 0 & |f| > \frac{f_s}{2} \end{cases}

where \mathrm{rect}(u) is the rectangular function.

With H(f) so defined, it is clear that

X(f) = H(f) \cdot X_s(f)

Spectrum, Xs(f), of a properly sampled bandlimited signal (blue) and images (green) that do not overlap. A "brick-wall" low-pass filter, H(f), removes the images, leaves the original spectrum, X(f), and recovers the original signal from the samples.

and the spectrum of the original signal that was sampled, X(f), is recovered from the spectrum of the sampled signal, Xs(f). This means, in the time domain, that the original signal that was sampled, x(t), is recovered from the sampled signal, xs(t).

This completes the proof of the Nyquist–Shannon sampling theorem. It says that if the sampling frequency, fs, is strictly greater than twice the bandwidth, B, of the continuous-time baseband signal, x(t), then no information is lost (or "aliased"). Following Whittaker, Shannon, and most later expositors, a reconstruction that bypasses all the frequency-domain math and specifies the reconstruction of the original signal directly from its samples is now given.

The impulse response of the reconstruction filter is the inverse Fourier transform of H(f):

h(t) = \mathcal{F}^{-1}\left\{ H(f) \right\}
= \int_{-\infty}^{\infty} H(f)\, e^{i 2\pi f t}\, df
= \int_{-\infty}^{\infty} \mathrm{rect}\left(\frac{f}{f_s}\right) e^{i 2\pi f t}\, df
= \int_{-f_s/2}^{f_s/2} e^{i 2\pi f t}\, df
= \frac{1}{i 2\pi t}\, e^{i 2\pi f t} \bigg|_{-f_s/2}^{f_s/2}
= \frac{1}{\pi t} \cdot \frac{e^{i\pi f_s t} - e^{-i\pi f_s t}}{2i}
= \frac{\sin(\pi f_s t)}{\pi t}
= f_s\, \mathrm{sinc}(f_s t) \quad in terms of the normalized sinc function.

This function is the impulse response of the reconstruction filter whose input is the sampled signal xs(t), which is just a collection of Dirac impulses, δ(t − nT), each delayed to the time of its sampling instant, nT, and weighted by a value proportional to the value of the continuous-time signal that was sampled at that instant, x[n] = x(nT). Since the reconstruction filter is a linear, time-invariant system, each impulse at time nT generates its own impulse response delayed to the same time, and the output of the reconstruction filter is the sum of the outputs driven by each weighted impulse separately. For each input impulse, the component of the output is the impulse response delayed to the time of that input impulse, h(t − nT), and weighted by the coefficient attached to that input impulse, T·x[n]. That is, the output of the reconstruction filter is:

x(t) = h(t) * x_s(t), \quad where * is the convolution operator
= h(t) * \sum_{n=-\infty}^{\infty} T \cdot x[n] \cdot \delta(t - nT)
= \sum_{n=-\infty}^{\infty} x[n] \cdot T \cdot \left[ h(t) * \delta(t - nT) \right]
= \sum_{n=-\infty}^{\infty} x[n] \cdot T \cdot h(t - nT)
= \sum_{n=-\infty}^{\infty} x[n] \cdot (T f_s)\, \mathrm{sinc}\left( f_s (t - nT) \right)
= \sum_{n=-\infty}^{\infty} x[n] \cdot \mathrm{sinc}\left( \frac{t - nT}{T} \right)

This shows explicitly how the samples x[n] are combined to reconstruct the original function x(t). This completes the reconstruction formula derivation.

Concise summary of the mathematical proof

There is no actual device that produces the infinite-valued samples implied by the Dirac comb model of sampling. The finite-valued samples, x[n], are not a function of continuous time, so their Fourier transform is undefined. To use that analysis tool, a continuous-time function is contrived conceptually (not actually or numerically) by using the samples to modulate the "teeth" of a Dirac comb function. This modulated comb does have a continuous-time Fourier transform (not within the strict definition that requires square-integrable functions, but in the generalization that allows Schwartz distributions, in the case of the original signal being square-integrable).

The transform of the (virtual) modulated comb, Xs(f), is related to the transform of the physical waveform, X(f), via a superposition of shifted copies (which is equivalent to convolution with a frequency-domain Dirac comb); this superposition viewpoint leads to an understanding of aliasing and of ways to mitigate it. When the shifted copies do not overlap, the original spectrum can be extracted by lowpass filtering, giving back the original signal.

The Fourier transform view also reveals that the sample rate can be higher than twice the highest frequency, with no ill effect, and even leaving room for a transition band in which the transfer function of the reconstruction filter is free to take intermediate values. Undersampling, which causes aliasing, is not in general a reversible operation. Oversampling may be inefficient or wasteful, but it is also reversible, meaning that no information is lost.

Shannon's original proof

The original proof presented by Shannon is elegant and quite brief, but it offers less intuitive insight into the subtleties of aliasing, both unintentional and intentional. Quoting Shannon's original paper, which uses f for the function, F for the spectrum, and W for the bandwidth limit:

Let F(ω) be the spectrum of f(t). Then
f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega) e^{i\omega t}\, d\omega
= \frac{1}{2\pi} \int_{-2\pi W}^{2\pi W} F(\omega) e^{i\omega t}\, d\omega
since F(ω) is assumed to be zero outside the band W. If we let
t = \frac{n}{2W}
where n is any positive or negative integer, we obtain
f\!\left(\frac{n}{2W}\right) = \frac{1}{2\pi} \int_{-2\pi W}^{2\pi W} F(\omega) e^{i\omega \frac{n}{2W}}\, d\omega
On the left are values of f(t) at the sampling points. The integral on the right will be recognized as essentially the nth coefficient in a Fourier-series expansion of the function F(ω), taking the interval –W to W as a fundamental period. This means that the values of the samples f(n / 2W) determine the Fourier coefficients in the series expansion of F(ω). Thus they determine F(ω), since F(ω) is zero for frequencies greater than W, and for lower frequencies F(ω) is determined if its Fourier coefficients are determined. But F(ω) determines the original function f(t) completely, since a function is determined if its spectrum is known. Therefore the original samples determine the function f(t) completely.

Shannon's proof of the theorem is complete at that point, but he goes on to discuss reconstruction via sinc functions, what we now call the Whittaker–Shannon interpolation formula as discussed above. He does not derive or prove the properties of the sinc function, but these would have been familiar to engineers reading his works at the time, since the Fourier pair relationship between rect and sinc was well known. Quoting Shannon:

Let xn be the nth sample. Then the function f(t) is represented by:
f(t) = \sum_{n=-\infty}^{\infty} x_n \frac{\sin \pi(2Wt - n)}{\pi(2Wt - n)}

As in the other proof, the existence of the Fourier transform of the original signal is assumed, so the proof does not say whether the sampling theorem extends to bandlimited stationary random processes.

Sampling of non-baseband signals

For sampling a non-baseband signal, the conditions to avoid information loss and to allow perfect reconstruction can be generalized in terms of conditions on the frequency interval of nonzero spectrum. See Sampling (signal processing) for more details and examples.

A bandpass condition is that X(f) = 0 for all nonnegative f outside the open band of frequencies

\left( \frac{N}{2} f_s, \frac{N+1}{2} f_s \right)

for some nonnegative integer N. This formulation includes the normal baseband condition as the case N = 0.

The corresponding interpolation function is the impulse response of a bandpass filter with cutoffs at the upper and lower edges of the specified band, which is the difference between a pair of lowpass impulse responses:

(N+1)\,\operatorname{sinc}\left(\frac{(N+1)t}{T}\right) - N\,\operatorname{sinc}\left(\frac{Nt}{T}\right)
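
As a sketch, the kernel is straightforward to write down in code (the comparison with the N = 0 case is just an illustrative sanity test):

  import numpy as np

  def bandpass_kernel(t, T, N):
      # Interpolation kernel for a signal confined to the band
      # (N*fs/2, (N+1)*fs/2), where fs = 1/T; N = 0 gives the baseband sinc.
      return (N + 1) * np.sinc((N + 1) * t / T) - N * np.sinc(N * t / T)

  t = np.arange(-3, 4)                        # integer multiples of T = 1
  print(bandpass_kernel(t, T=1.0, N=0))       # baseband case: 0 except 1 at t = 0
  print(bandpass_kernel(t, T=1.0, N=2))       # kernel for the band (fs, 1.5*fs)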

Other generalizations, for example to signals occupying multiple non-contiguous bands, are possible as well. Even the most generalized form of the sampling theorem does not have a provably true converse. That is, one cannot conclude that information is necessarily lost just because the conditions of the sampling theorem are not satisfied; from an engineering perspective, however, it is generally safe to assume that if the sampling theorem is not satisfied then information will most likely be lost.

Historical background

The sampling theorem was implied by the work of Harry Nyquist in 1928 ("Certain topics in telegraph transmission theory"), in which he showed that up to 2B independent pulse samples could be sent through a system of bandwidth B; but he did not explicitly consider the problem of sampling and reconstruction of continuous signals. About the same time, Karl Küpfmüller showed a similar result, and discussed the sinc-function impulse response of a band-limiting filter, via its integral, the step response Integralsinus; this bandlimiting and reconstruction filter that is so central to the sampling theorem is sometimes referred to as a Küpfmüller filter (but seldom so in English).

The sampling theorem, essentially a dual of Nyquist's result, was proved by Claude E. Shannon in 1949 ("Communication in the presence of noise"). V. A. Kotelnikov published similar results in 1933 ("On the transmission capacity of the 'ether' and of cables in electrical communications", translation from the Russian), as did the mathematician E. T. Whittaker in 1915 ("Expansions of the Interpolation-Theory", "Theorie der Kardinalfunktionen"), J. M. Whittaker in 1935 ("Interpolatory function theory"), and Gabor in 1946 ("Theory of communication").

Other discoverers

Others who have independently discovered or played roles in the development of the sampling theorem have been discussed in several historical articles, for example by Jerri[1] and by Lüke.[2] For example, Lüke points out that H. Raabe, an assistant to Küpfmüller, proved the theorem in his 1939 Ph.D. dissertation; the term Raabe condition came to be associated with the criterion for unambiguous representation (sampling rate greater than twice the bandwidth).

Meijering[3] mentions several other discoverers and names in a paragraph and a pair of footnotes, as follows:

As pointed out by Higgins [135], the sampling theorem should really be considered in two parts, as done above: the first stating the fact that a bandlimited function is completely determined by its samples, the second describing how to reconstruct the function using its samples. Both parts of the sampling theorem were given in a somewhat different form by J. M. Whittaker [350, 351, 353] and before him also by Ogura [241, 242]. They were probably not aware of the fact that the first part of the theorem had been stated as early as 1897 by Borel [25].27 As we have seen, Borel also used around that time what became known as the cardinal series. However, he appears not to have made the link [135]. In later years it became known that the sampling theorem had been presented before Shannon to the Russian communication community by Kotel'nikov [173]. In more implicit, verbal form, it had also been described in the German literature by Raabe [257]. Several authors [33, 205] have mentioned that Someya [296] introduced the theorem in the Japanese literature parallel to Shannon. In the English literature, Weston [347] introduced it independently of Shannon around the same time.28
27. Several authors, following Black [16], have claimed that this first part of the sampling theorem was stated even earlier by Cauchy, in a paper [41] published in 1841. However, the paper of Cauchy does not contain such a statement, as has been pointed out by Higgins [135].
28. As a consequence of the discovery of the several independent introductions of the sampling theorem, people started to refer to the theorem by including the names of the aforementioned authors, resulting in such catchphrases as "the Whittaker-Kotel'nikov-Shannon (WKS) sampling theorem" [155] or even "the Whittaker-Kotel'nikov-Raabe-Shannon-Someya sampling theorem" [33]. To avoid confusion, perhaps the best thing to do is to refer to it as the sampling theorem, "rather than trying to find a title that does justice to all claimants" [136].

Why Nyquist?

Exactly how, when, or why Nyquist had his name attached to the sampling theorem remains obscure. The first known use of the term Nyquist sampling theorem is in a 1965 book.[4] It had been called the Shannon Sampling Theorem as early as 1954,[5] but also just the sampling theorem by several other books in the early 1950s.

In 1958, Blackman and Tukey[6] cited Nyquist's 1928 paper as a reference for the sampling theorem of information theory, even though that paper does not treat sampling and reconstruction of continuous signals as others did. Their glossary of terms includes these entries:

Sampling theorem (of information theory)
Nyquist's result that equi-spaced data, with two or more points per cycle of highest frequency, allows reconstruction of band-limited functions. (See Cardinal theorem.)
Cardinal theorem (of interpolation theory)
A precise statement of the conditions under which values given at a doubly infinite set of equally spaced points can be interpolated (with the aid of the function (\sin (x - x_i))/(x - x_i)\, to yield a continuous band-limited function. (sic: mismatched parentheses)

Exactly what result of Nyquist they are referring to remains mysterious.

When Shannon stated and proved the sampling theorem in his 1949 paper, according to Meijering[3] "he referred to the critical sampling interval T = 1/2W as the Nyquist interval corresponding to the band W, in recognition of Nyquist’s discovery of the fundamental importance of this interval in connection with telegraphy." This explains Nyquist's name on the critical interval, but not on the theorem.

Similarly, Nyquist's name was attached to Nyquist rate in 1953 by Harold S. Black:[7]

"If the essential frequency range is limited to B cycles per second, 2B was given by Nyquist as the maximum number of code elements per second that could be unambiguously resolved, assuming the peak interference is less half a quantum step. This rate is generally referred to as signaling at the Nyquist rate and 1/2B has been termed a Nyquist interval." (bold added for emphasis; italics as in the original)

According to the OED, this may be the origin of the term Nyquist rate. In Black's usage, it is not a sampling rate, but a signaling rate.

Historical references

  1. ^ Abdul J. Jerri, "The Shannon Sampling Theorem—Its Various Extensions and Applications: A Tutorial Review", Proceedings of the IEEE, 65:1565–1595, Nov. 1977.
  2. ^ Hans Dieter Lüke, "The Origins of the Sampling Theorem", IEEE Communications Magazine, pp. 106–108, April 1999.
  3. ^ a b Erik Meijering, "A Chronology of Interpolation From Ancient Astronomy to Modern Signal and Image Processing", Proc. IEEE, 90, 2002.
  4. ^ Richard A. Roberts and Ben F. Barton, Theory of Signal Detectability: Composite Deferred Decision Theory, 1965.
  5. ^ Truman S. Gray, Applied Electronics: A First Course in Electronics, Electron Tubes, and Associated Circuits, 1954.
  6. ^ R. B. Blackman and J. W. Tukey, The Measurement of Power Spectra : From the Point of View of Communications Engineering, New York: Dover, 1958.
  7. ^ Harold S. Black, Modulation Theory, 1953.
