Oversampling

From Wikipedia, the free encyclopedia

In signal processing, oversampling is the process of sampling a signal with a sampling frequency significantly higher than twice the bandwidth or highest frequency of the signal being sampled. An oversampled signal is said to be oversampled by a factor of β, defined as

\beta \stackrel{\mathrm{def}}{=} \frac{f_s}{2 f_H}

or

f_s = 2 \beta f_H

where f_s is the sampling frequency and f_H is the highest frequency (bandwidth) of the signal being sampled.

There are three main reasons for performing oversampling:

  • It aids in anti-aliasing, because realizable analog anti-aliasing filters are very difficult to implement with the sharp cutoff needed to use all of the available bandwidth without exceeding the Nyquist limit. Raising the sampling rate widens the filter's transition band, so the filter's requirements can be relaxed and it can be made more simply and cheaply, at the cost of a faster sampler. Once sampled, the signal can be digitally filtered and downsampled to the desired sampling frequency. In modern integrated circuit technology, digital filters are much easier to implement than comparable analog filters of high order.
  • In practice, oversampling is implemented in order to achieve cheaper higher-resolution A/D and D/A conversion. For instance, to implement a 24-bit converter, it is sufficient to use a 20-bit converter that can run at 256 times the target sampling rate: averaging a group of 256 consecutive 20-bit samples adds 4 bits to the resolution of the average, producing a single sample with 24-bit resolution (averaging N samples reduces the noise by a factor of \sqrt{N}, so 256 samples give a factor of 16, or 4 bits). Note that this averaging helps only if the signal contains sufficient uniformly distributed noise to dither the quantizer: if the A/D is ideal and the signal's deviation from a conversion step lies below the quantization threshold, every sample returns the same code, and the averaged result is no more accurate than a single measurement by the low-resolution core A/D, so the oversampling benefits do not take effect.
  • Noise reduction/cancellation. If N samples are taken of the same quantity in the presence of uncorrelated random noise, averaging them reduces the noise by a factor of 1/\sqrt{N}. See standard error (statistics).
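The averaging described in the two points above can be sketched in a few lines of Python. This is an illustrative model only, assuming an ideal mid-tread quantizer and Gaussian dither noise of about half an LSB; the function name and parameters are hypothetical, not part of any converter API.

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample_average(true_value, n_samples, lsb=1.0, noise_rms=0.5):
    """Quantize a noisy constant input n_samples times and average the codes.

    The dither noise (roughly one LSB RMS here) is what lets the average
    resolve below the quantizer step; without it, every conversion would
    return the same code and averaging would gain nothing.
    """
    noisy = true_value + rng.normal(0.0, noise_rms * lsb, n_samples)
    codes = np.round(noisy / lsb) * lsb   # ideal mid-tread quantizer
    return codes.mean()

true_value = 0.3   # lies between two quantizer steps (0.0 and 1.0)
single = oversample_average(true_value, 1)     # one conversion: 0.0 or 1.0
averaged = oversample_average(true_value, 256) # 256x oversampling: ~4 extra bits
print(single, averaged)
```

A single conversion can only return a whole quantizer step, while the 256-sample average lands much closer to the true value, consistent with the \sqrt{N} noise reduction above.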

Certain kinds of A/D converters known as delta-sigma converters produce disproportionately more quantization noise in the upper portion of their output spectrum. By running these converters at some multiple of the target sampling rate, and low-pass filtering the oversampled signal down to half the target sampling rate, it is possible to obtain a result with less noise than the average over the entire band of the converter. Delta-sigma converters use a technique called noise shaping to move the quantization noise to the higher frequencies.
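A minimal sketch of the idea, assuming a first-order delta-sigma modulator with a 1-bit quantizer (the simplest textbook form; real converters are higher-order and differ in detail). The integrator feeds the quantization error back, which pushes its energy toward high frequencies, so a simple average (a crude low-pass filter) recovers the input:

```python
import numpy as np

def delta_sigma_1bit(x):
    """First-order delta-sigma modulator producing a +/-1 bitstream.

    v[n] = v[n-1] + x[n] - y[n-1];  y[n] = sign(v[n]).
    The quantization error is fed back through the integrator,
    shaping its spectrum toward high frequencies.
    """
    y = np.empty_like(x)
    integ = 0.0
    fb = 0.0
    for i, s in enumerate(x):
        integ += s - fb
        fb = 1.0 if integ >= 0 else -1.0
        y[i] = fb
    return y

x = np.full(6400, 0.4)        # constant input, well inside the +/-1 range
bits = delta_sigma_1bit(x)
print(bits.mean())            # close to 0.4: the error averages out
```

Although every output sample is only +1 or -1, the low-frequency content of the bitstream tracks the input; the residual error is concentrated at high frequencies, where the decimation filter removes it.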

Example

For example, consider a signal with a bandwidth or highest frequency of fH = 100 Hz. The sampling theorem states that the sampling frequency must be greater than 200 Hz. Sampling at exactly 200 Hz would give β = 1; sampling at four times that rate (β = 4) gives a sampling rate of 800 Hz. This gives the anti-aliasing filter a transition band of 600 Hz (f_s - 2 f_H = 800 - 2 \cdot 100 = 600) instead of the 0 Hz transition band it would have at a sampling frequency of 200 Hz.

An anti-aliasing filter with a transition band of 600 Hz is much more realizable than one with a transition band of 0 Hz (which would require a perfect "brick-wall" filter). Oversampling by a factor of eight instead (β = 8, f_s = 1600 Hz) widens the transition band to 1400 Hz, so the anti-aliasing filter could be made still cheaper by relaxing its requirements further.

After being sampled at 800 Hz, the signal can be digitally low-pass filtered with a sharp cutoff just above the 100 Hz band of interest, a far easier task in the digital domain, and then downsampled to a rate closer to 200 Hz.
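The arithmetic in this example can be checked with a short script (plain Python, illustrative only; the function name is ours, not a standard API):

```python
def transition_band(beta, f_H):
    """Return (sampling rate, anti-aliasing transition-band width) in Hz
    for an oversampling factor beta and highest signal frequency f_H,
    using f_s = 2*beta*f_H and width = f_s - 2*f_H from the text above."""
    f_s = 2 * beta * f_H
    return f_s, f_s - 2 * f_H

f_H = 100.0  # highest signal frequency (Hz)
for beta in (1, 4, 8):
    f_s, tb = transition_band(beta, f_H)
    print(f"beta={beta}: f_s={f_s:.0f} Hz, transition band={tb:.0f} Hz")
# beta=1: f_s=200 Hz, transition band=0 Hz
# beta=4: f_s=800 Hz, transition band=600 Hz
# beta=8: f_s=1600 Hz, transition band=1400 Hz
```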
