Sampling (signal processing)
From Wikipedia, the free encyclopedia
In signal processing, sampling is the reduction of a continuous signal to a discrete signal. A common example is the conversion of a sound wave (a continuous-time signal) to a sequence of samples (a discrete-time signal).
Theory
- See also: Nyquist–Shannon sampling theorem
For convenience, we will discuss signals which vary with time. However, the same results can be applied to signals varying in space or in any other dimension.
Let x(t) be a continuous signal that is to be sampled, and suppose that sampling is performed by measuring the value of the continuous signal every T seconds. The sampled signal x[n] is then given by
- x[n] = x(nT)
with n = 0,1,2,3,....
The sampling frequency or sampling rate fs is defined as the number of samples obtained in one second, or fs = 1 / T. The sampling rate is measured in hertz or in samples per second.
We can now ask: under what circumstances is it possible to reconstruct the original signal completely and exactly (perfect reconstruction)?
A partial answer is provided by the Nyquist–Shannon sampling theorem, which provides a sufficient (but not always necessary) condition under which perfect reconstruction is possible. The sampling theorem guarantees that bandlimited signals (i.e., signals which have a maximum frequency) can be reconstructed perfectly from their sampled version, if the sampling rate is more than twice the maximum frequency. Reconstruction in this case can be achieved using the Whittaker–Shannon interpolation formula.
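The Whittaker–Shannon formula writes x(t) = Σ x[n]·sinc((t − nT)/T). A minimal numerical sketch of this reconstruction, assuming an illustrative 3 Hz tone sampled at 10 Hz and a truncated (rather than infinite) sum:

```python
import numpy as np

# Bandlimited test signal: a 3 Hz sine, sampled at fs = 10 Hz (> 2 * 3 Hz).
f, fs = 3.0, 10.0
T = 1.0 / fs
n = np.arange(-500, 501)          # truncated sample window (the ideal sum is infinite)
x_n = np.sin(2 * np.pi * f * n * T)

def reconstruct(t):
    """Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc((t - nT)/T)."""
    return np.sum(x_n * np.sinc((t - n * T) / T))

t = 0.0437                        # an arbitrary instant between sample times
print(reconstruct(t), np.sin(2 * np.pi * f * t))  # the two values agree closely
```

Truncating the sum introduces a small error, since the sinc kernel decays only like 1/t; a practical reconstruction filter trades off this tail against complexity.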
The frequency equal to one-half of the sampling rate is therefore a bound on the highest frequency that can be unambiguously represented by the sampled signal. This frequency (half the sampling rate) is called the Nyquist frequency of the sampling system. Frequencies above the Nyquist frequency fN can be observed in the sampled signal, but their frequency is ambiguous. That is, a frequency component with frequency f cannot be distinguished from other components with frequencies Nfs + f and Nfs − f for nonzero integers N, where fs is the sampling rate. This ambiguity is called aliasing. To handle this problem as gracefully as possible, most analog signals are filtered with an anti-aliasing filter (usually a low-pass filter with cutoff near the Nyquist frequency) before conversion to the sampled discrete representation.
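Aliasing can be seen directly in the numbers: sampled at rate fs, a cosine at f and a cosine at fs − f produce identical sample sequences. A small sketch, with arbitrarily chosen illustrative frequencies:

```python
import numpy as np

fs = 10.0                      # sampling rate, Hz (Nyquist frequency = 5 Hz)
T = 1.0 / fs
n = np.arange(20)

f_low = 3.0                    # below the Nyquist frequency
f_high = fs - f_low            # 7 Hz: above the Nyquist frequency

s_low = np.cos(2 * np.pi * f_low * n * T)
s_high = np.cos(2 * np.pi * f_high * n * T)

# The two sampled sequences are indistinguishable: 7 Hz aliases onto 3 Hz.
print(np.allclose(s_low, s_high))   # True
```

This is exactly why an anti-aliasing filter must remove the 7 Hz component before sampling: once sampled, no processing can tell the two tones apart.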
A more general statement of the Nyquist–Shannon sampling theorem says, roughly, that signals with frequencies higher than the Nyquist frequency can be sampled without loss of information, provided their bandwidth (non-zero frequency band) is small enough to avoid ambiguity and the band limits are known.
Sampling interval
The sampling interval is the interval T = 1 / fs corresponding to the sampling frequency. [1]
Observation period
The observation period is the span of time during which a series of data samples are collected at regular intervals.[2] More broadly, it can refer to any specific period during which a set of data points is gathered, regardless of whether or not the data is periodic in nature. Thus a researcher might study the incidence of earthquakes and tsunamis over a particular time period, such as a year or a century.
The observation period is simply the span of time during which the data is studied, regardless of whether data so gathered represents a set of discrete events having arbitrary timing within the interval, or whether the samples are explicitly bound to specified sub-intervals.
Practical implications
In practice, the continuous signal is sampled using an analog-to-digital converter (ADC), a non-ideal device with various physical limitations. This results in deviations from the theoretically perfect reconstruction capabilities, collectively referred to as distortion.
Various types of distortion can occur, including:
- Aliasing. A precondition of the sampling theorem is that the signal be bandlimited. However, in practice, no time-limited signal can be bandlimited. Since signals of interest are almost always time-limited (e.g., at most spanning the lifetime of the sampling device in question), it follows that they are not bandlimited. However, by designing a sampler with an appropriate guard band, it is possible to obtain output that is as accurate as necessary.
- Integration effect or aperture effect. This results from the fact that the sample is obtained as a time average within a sampling region, rather than just being equal to the signal value at the sampling instant. The integration effect is readily noticeable in photography when the exposure is too long and creates a blur in the image. An ideal camera would have an exposure time of zero. In a capacitor-based sample and hold circuit, the integration effect is introduced because the capacitor cannot instantly change voltage thus requiring the sample to have non-zero width.
- Jitter or deviation from the precise sample timing intervals.
- Noise, including thermal sensor noise, analog circuit noise, etc.
- Slew rate limit error, caused by an inability for an ADC output value to change sufficiently rapidly.
- Quantization as a consequence of the finite precision of words that represent the converted values.
- Error due to other non-linear effects of the mapping of input voltage to converted output value (in addition to the effects of quantization).
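The integration (aperture) effect above can be modeled quantitatively: averaging a tone of frequency f over a window of width τ attenuates it by the factor sinc(f·τ). A minimal numerical sketch, where the frequency, aperture width, and sampling instant are arbitrary illustrative values:

```python
import numpy as np

# Aperture effect: each "sample" is the signal averaged over a window of
# width tau rather than the instantaneous value; the averaging attenuates
# a tone of frequency f by the factor sinc(f * tau).
f, tau, t0 = 1000.0, 200e-6, 0.37e-3   # illustrative values (Hz, s, s)

# Numerical time-average over the aperture window (midpoint rule)
m = 10000
tt = t0 - tau / 2 + (np.arange(m) + 0.5) * (tau / m)
averaged = np.mean(np.cos(2 * np.pi * f * tt))

instantaneous = np.cos(2 * np.pi * f * t0)
predicted = instantaneous * np.sinc(f * tau)   # np.sinc(x) = sin(pi x)/(pi x)

print(averaged, predicted)   # the numerical average matches the sinc model
```

The attenuation worsens as f·τ grows, which is why a shorter aperture (like a shorter photographic exposure) preserves more high-frequency detail.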
The conventional, practical digital-to-analog converter (DAC) does not output a sequence of Dirac impulses (which, if ideally low-pass filtered, would result in the original signal before sampling) but instead outputs a sequence of piecewise-constant values or rectangular pulses. This means that there is an inherent zero-order hold effect on the effective frequency response of the DAC, resulting in a mild roll-off of gain at the higher frequencies (a 3.9224 dB loss at the Nyquist frequency). This zero-order hold effect is a consequence of the hold action of the DAC and, contrary to a common misunderstanding, is not due to the sample-and-hold that might precede a conventional ADC. The DAC can also suffer errors from jitter, noise, slewing, and non-linear mapping of input value to output voltage.
Jitter, noise, and quantization are often analyzed by modeling them as random errors added to the sample values. Integration and zero-order hold effects can be analyzed as a form of low-pass filtering. The non-linearities of either ADC or DAC are analyzed by replacing the ideal linear function mapping with a proposed nonlinear function.
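The zero-order hold roll-off quoted above is easy to verify: holding each sample constant for a full period T multiplies the spectrum by sinc(f/fs), and evaluating at the Nyquist frequency gives 20·log10(2/π) ≈ −3.92 dB. A quick sketch:

```python
import numpy as np

# Zero-order hold: holding each sample for a full period T multiplies the
# spectrum by sinc(f / fs), giving a gentle high-frequency roll-off.
def zoh_gain_db(f, fs):
    """Magnitude response (in dB) of an ideal zero-order hold."""
    return 20 * np.log10(np.abs(np.sinc(f / fs)))

fs = 48000.0
print(zoh_gain_db(fs / 2, fs))   # loss at the Nyquist frequency: about -3.9224 dB
```

Practical DACs often compensate for this droop with an inverse-sinc equalizer in the digital domain or in the analog reconstruction filter.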
Applications
Audio sampling
Audio waveforms are commonly sampled at 44.1k samples/s (CD) or 48k samples/s (professional audio). This is usually sufficient for any practical purpose, since the human auditory system is capable of discerning sounds up to about 15-20 kHz.
The recent trend towards higher sampling rates, at two or four times this basic requirement, has not been justified theoretically, or shown to make any audible difference, even under the most critical listening conditions. Nevertheless, much 96 kHz equipment is now used in studio recording, and 'super audio' formats are being promised to consumers, mostly as a DVD option. Most articles purporting to justify a need for more than 48 kHz state that the 'dynamic range' of 16-bit audio is 96 dB, a figure commonly derived from the simple ratio of quantizing level to full-scale level, which is 2^16, or 65536. This calculation fails to take into account the fact that peak level is not the maximum permitted sine-wave signal level, and that the quantizing step size is not the rms noise level; even if it were, it would not represent loudness without the application of the ITU-R 468 noise weighting function. A proper analysis of typical programme levels throughout the audio chain reveals that the capabilities of well-engineered 16-bit material far exceed those of the very best hi-fi systems, with microphone noise and loudspeaker headroom being the real limiting factors.
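The commonly quoted "96 dB" figure is just the step-to-full-scale ratio of 16-bit quantization expressed in decibels:

```python
import math

# The often-quoted "96 dB" figure for 16-bit audio is simply the ratio of
# full-scale level to one quantizing step, expressed in decibels.
bits = 16
levels = 2 ** bits                          # 65536
naive_dynamic_range_db = 20 * math.log10(levels)
print(naive_dynamic_range_db)               # about 96.33 dB
```

As the paragraph above notes, this ratio is not by itself a meaningful loudness or noise measure without further analysis and weighting.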
Speech sampling
Speech signals, i.e., signals intended to carry only human speech, can usually be sampled at a much lower rate. For most phonemes, almost all of the energy is contained in the 0-4 kHz range, allowing a sampling rate of 8 kHz. This is the sampling rate used by nearly all telephony systems, which use the G.711 sampling and quantization specifications.
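G.711 pairs the 8 kHz sampling rate with logarithmic (companded) 8-bit quantization. The sketch below shows the continuous μ-law companding curve that underlies one of the two G.711 variants; note that the real G.711 codec uses a segmented piecewise-linear approximation of this curve, so this is only an illustration of the idea, not the exact standard:

```python
import numpy as np

# Continuous mu-law companding curve (mu = 255), the idea behind the
# logarithmic 8-bit quantization of speech in G.711 (mu-law variant).
# NOTE: the actual G.711 codec uses a segmented piecewise-linear
# approximation of this curve; this is an illustrative sketch only.
MU = 255.0

def mu_law_compress(x):
    """Map x in [-1, 1] to [-1, 1], expanding resolution near zero."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_law_expand(y):
    """Inverse of mu_law_compress."""
    return np.sign(y) * ((1 + MU) ** np.abs(y) - 1) / MU

x = np.linspace(-1, 1, 11)
print(np.allclose(mu_law_expand(mu_law_compress(x)), x))   # True: exact round trip
```

Compressing before uniform quantization allocates more quantizing levels to quiet passages, where speech spends most of its time, which is why 8 bits per sample suffice for telephony.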
Video sampling
Standard-definition television (SDTV) uses 704 by 576 pixels (UK PAL 625-line) for the visible picture area.
High-definition television (HDTV) is currently moving towards two standards referred to as 720p (progressive) and 1080i (interlaced), which all 'HD-Ready' sets will be able to display.
Video reconstruction filtering
Most TV sets do not achieve basic SDTV quality, because they do not reconstruct the vertically sampled image properly. Digital video produces a 2-dimensional set of samples of each frame, which requires a 2-dimensional 'brick-wall' reconstruction filter for proper reproduction of the image. CRT displays produce a raster scan of horizontal lines, and the digital signal is low-pass filtered along the horizontal lines, giving good resolution of vertical lines without aliasing, but reconstruction is not usually attempted vertically, so that the resulting picture contains very visible artifacts (loss of resolution, staircasing effects, fringing pattern, sampling harmonics, and other adverse effects).
Proper 2-dimensional reconstruction requires a final display with many more pixels than the signal format, and modern HDTV sets can provide this, producing much better resolution pictures than even a top studio monitor can from SDTV signals (though they are not so good regarding grey-level accuracy, especially near black level).
As with audio, this theoretical need for reconstruction is not commonly realised, though it was recognised by the BBC, which consequently backed off from broadcasting HDTV but began recording programmes in HDTV.
To display a true HDTV image would require a 'super HDTV' screen with at least twice as many pixels again (3840 x 2160), which is worth bearing in mind though not currently practical. Nevertheless, HDTV delivers a very significant increase in resolution over SDTV when both are compared on an HDTV set, the higher Nyquist frequency bringing improvements despite the fact that the image is not properly reconstructed on currently available displays.
IF/RF sampling
For sampling a non-baseband signal, such as a radio's intermediate-frequency (IF) or radio-frequency (RF) signal, the Nyquist–Shannon conditions to avoid aliasing can be restated as follows. Let 0 < fL < fH be the lower and higher boundaries of a frequency band and W = fH − fL be the bandwidth. Then there is a non-negative integer N with
- N = ⌊fL / W⌋ (the largest integer not exceeding fL / W).
In addition, we define the remainder r as
- r = fL − N·W (so that fL = N·W + r, with 0 ≤ r < W).
Any real-valued signal x(t) with a spectrum X(f) limited to this frequency band, that is with
- X(f) = 0 for |f| outside the interval (fL, fH),
is uniquely determined by its samples obtained at a sampling rate of fs, if this sampling rate satisfies one of the following conditions:
- 2·fH / (N + 1 − n) ≤ fs ≤ 2·fL / (N − n)
- for one value of n = { 0, 1, ..., N−1 }
- OR the usual Nyquist condition:
- fs ≥ 2·fH.
If N > 0, then the first condition results in what is sometimes referred to as undersampling, or using a sampling rate less than the Nyquist rate 2fH obtained from the upper bound of the spectrum. See aliasing for a simpler formulation of this Nyquist criterion that specifies the lower bound on sampling rate (but is incomplete because it does not specify the gaps above that bound, in which aliasing will occur). Alternatively, for the case of a given sampling frequency, simpler formulae for the constraints on the signal's spectral band are given below.
- Example: Consider FM radio to illustrate the idea of undersampling.
- In the US, FM radio operates on the frequency band from fL = 88 MHz to fH = 108 MHz. The bandwidth is given by
- W = fH − fL = 108 MHz − 88 MHz = 20 MHz.
- The sampling conditions are satisfied for
- 2·(108 MHz) / (5 − n) ≤ fs ≤ 2·(88 MHz) / (4 − n).
- Therefore
- N = 4, r = 8 MHz and n = 0, 1, 2, 3.
- The value n = 0 gives the lowest sampling-frequency interval, 43.2 MHz ≤ fs ≤ 44 MHz, and this is a scenario of undersampling. In this case, the signal spectrum fits between 2 and 2.5 times the sampling rate: for sampling rates in this interval, twice the sampling rate spans 86.4–88 MHz and 2.5 times the sampling rate spans 108–110 MHz, bracketing the 88–108 MHz band.
- A larger value of n (equivalently, a lower value of N − n) will also lead to a useful sampling rate. For example, using N − n = 3 (that is, n = 1), the FM band spectrum fits easily between 1.5 and 2.0 times the sampling rate, for a sampling rate near 56 MHz (multiples of the Nyquist frequency being 28, 56, 84, 112, etc.).
- When undersampling a real-world signal, the sampling circuit must be fast enough to capture the highest signal frequency of interest. Theoretically, each sample should be taken during an infinitesimally short interval, but this is not practically feasible. Instead, the sampling of the signal should be made in a short enough interval that it can represent the instantaneous value of the signal with the highest frequency. This means that in the FM radio example above, the sampling circuit must be able to capture a signal with a frequency of 108 MHz, not 43.2 MHz. Thus, the sampling frequency may be only a little bit greater than 43.2 MHz, but the input bandwidth of the system must be at least 108 MHz.
- If the sampling theorem is interpreted as requiring twice the highest frequency, then the required sampling rate would be assumed to be greater than the Nyquist rate 216 MHz. While this does satisfy the last condition on the sampling rate, it is grossly oversampled.
- Note that if a band is sampled with a nonzero N, then a band-pass filter is required for the anti-aliasing filter, instead of a lowpass filter.
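The FM example can be checked numerically. One common statement of the bandpass (undersampling) conditions, consistent with the figures in this example (N = 4, r = 8 MHz), is 2·fH/(N+1−n) ≤ fs ≤ 2·fL/(N−n) with N = ⌊fL/W⌋; the sketch below enumerates the resulting valid rate ranges for the US FM band:

```python
import math

# Valid undersampling rate ranges for a band [fL, fH], using the bandpass
# conditions 2*fH/(N+1-n) <= fs <= 2*fL/(N-n) for n = 0..N-1, where
# N = floor(fL / W). Figures are the US FM band from the example.
fL, fH = 88e6, 108e6
W = fH - fL                      # 20 MHz
N = math.floor(fL / W)           # 4
r = fL - N * W                   # 8 MHz

ranges = [(2 * fH / (N + 1 - n), 2 * fL / (N - n)) for n in range(N)]
for n, (lo, hi) in enumerate(ranges):
    print(f"n={n}: {lo/1e6:.1f} MHz <= fs <= {hi/1e6:.1f} MHz")
# n=0 gives the lowest interval, 43.2 MHz <= fs <= 44.0 MHz
```

Rates falling in the gaps between these intervals alias parts of the band onto itself, which is why the intervals (and not just a lower bound) matter.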
As we have seen, the normal baseband condition for reversible sampling is that X(f) = 0 outside the open interval (−fs/2, +fs/2), and the reconstructive interpolation function, or lowpass filter impulse response, is sinc(t/T), where sinc(x) = sin(πx)/(πx).
To accommodate undersampling, the generalized condition is that X(f) = 0 outside the union of open positive and negative frequency bands
- (−(N+1)·fs/2, −N·fs/2) ∪ (N·fs/2, (N+1)·fs/2)
- for some nonnegative integer N,
- which includes the normal baseband condition as the case N = 0 (except that where the intervals come together at 0 frequency, they can be closed).
And the corresponding interpolation function is the bandpass filter given by this difference of lowpass impulse responses:
- (N+1)·sinc((N+1)·t/T) − N·sinc(N·t/T).
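The difference-of-sincs bandpass kernel, h(t) = (N+1)·sinc((N+1)·t/T) − N·sinc(N·t/T), can be checked numerically: undersample a tone whose frequency lies in (N·fs/2, (N+1)·fs/2) and reconstruct it between sample instants. A sketch with fs = 1, N = 2 (band 1.0–1.5 Hz) and an arbitrary illustrative tone frequency, using a truncated sum:

```python
import numpy as np

# Undersample a tone in the band (N*fs/2, (N+1)*fs/2) and reconstruct it
# between sample instants using the bandpass interpolation kernel.
fs, N, f = 1.0, 2, 1.2
T = 1.0 / fs
n = np.arange(-3000, 3001)                 # truncated sample window
x_n = np.cos(2 * np.pi * f * n * T)        # undersampled tone (fs < 2*f)

def kernel(t):
    """Bandpass kernel: difference of two lowpass sinc impulse responses."""
    return (N + 1) * np.sinc((N + 1) * t / T) - N * np.sinc(N * t / T)

t = 0.5                                    # halfway between two samples
xhat = np.sum(x_n * kernel(t - n * T))
print(xhat, np.cos(2 * np.pi * f * t))     # the two values agree closely
```

With N = 0 the kernel reduces to the ordinary sinc(t/T), recovering the baseband Whittaker–Shannon formula as a special case.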
On the other hand, reconstruction is not usually the goal with sampled IF or RF signals. Rather, the sample sequence can be treated as ordinary samples of the signal frequency-shifted to near baseband, and digital demodulation can proceed on that basis.
See also
- Nyquist–Shannon sampling theorem
- Quantization (signal processing)
- Sampling rate
References
- Matt Pharr and Greg Humphreys, Physically Based Rendering: From Theory to Implementation, Morgan Kaufmann, July 2004. ISBN 0-12-553180-X. The chapter on sampling (available online) is nicely written, with diagrams, core theory and code samples.
- Nyquist, Harry, Certain topics in telegraph transmission theory, AIEE Trans., vol. 47, pp. 617–644, Jan. 1928.
- Shannon, Claude E., Communications in the presence of noise, Proc. IRE, vol. 37, pp. 10–21, Jan. 1949.