Dither is an intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as color banding in images. Dither is routinely used in the processing of both digital audio and digital video data, and is often one of the last stages of mastering audio to compact disc.
…one of the earliest [applications] of dither came in World War II. Airplane bombers used mechanical computers to perform navigation and bomb trajectory calculations. Curiously, these computers (boxes filled with hundreds of gears and cogs) performed more accurately when flying on board the aircraft, and less well on ground. Engineers realized that the vibration from the aircraft reduced the error from sticky moving parts. Instead of moving in short jerks, they moved more continuously. Small vibrating motors were built into the computers, and their vibration was called dither from the Middle English verb "didderen," meaning "to tremble." Today, when you tap a mechanical meter to increase its accuracy, you are applying dither, and modern dictionaries define dither as a highly nervous, confused, or agitated state. In minute quantities, dither successfully makes a digitization system a little more analog in the good sense of the word.—Ken Pohlmann, Principles of Digital Audio[1]
The term "dither" was published in books on analog computation and hydraulic controlled guns shortly after the war.[2][3] The concept of dithering to reduce quantization patterns was first applied by Lawrence G. Roberts[4] in his 1961 MIT master's thesis[5] and 1962 article[6] though he did not use the term dither. By 1964 dither was being used in the modern sense described in this article.[7]
Dither is often used in digital audio and video processing, where it is typically applied at bit-depth reductions. It is also employed in many other fields involving digital signal processing and waveform analysis, such as digital photography, seismology, radar and weather forecasting.
The premise is that quantization and re-quantization of digital data yields error. If that error is correlated to the signal, the result is repeating, cyclical and mathematically determinable. In some fields, especially where the receptor is sensitive to such artifacts, cyclical errors yield undesirable artifacts; in these fields dither results in less determinable artifacts. The field of audio is a primary example: the human ear functions much like a Fourier transform, hearing individual frequencies. The ear is therefore very sensitive to distortion, or additional frequency content that "colors" the sound, but far less sensitive to random noise spread across all frequencies.
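A minimal numpy sketch of this difference (illustrative, not from the original text; it assumes a low-level sine quantized to integer steps, with TPDF dither of one step either side): without dither, the error energy concentrates in discrete harmonics of the signal; with dither, it spreads into a flat noise floor.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 65536
sine = 4.0 * np.sin(2 * np.pi * 441 * np.arange(n) / n)  # low-level sine, amplitude 4 steps

err_plain = np.round(sine) - sine                         # quantization error without dither
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
err_dith = np.round(sine + tpdf) - sine                   # total error with TPDF dither

# Fraction of error energy in the single strongest frequency bin:
for err in (err_plain, err_dith):
    spectrum = np.abs(np.fft.rfft(err)) ** 2
    print(spectrum.max() / spectrum.sum())  # high for plain (tones), low for dithered (noise)
```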
In audio, dither can be useful to break up periodic limit cycles, which are a common problem in digital filters. Random noise is typically less objectionable than the harmonic tones produced by limit cycles.
In 1987, Lipshitz and Vanderkooy pointed out that different noise types, with different probability density functions, behave differently when used as dither signals, and suggested optimal levels of dither signals for audio.[8][9]
In an analog system, the signal is continuous, but in a PCM digital system, the amplitude of the signal out of the digital system is limited to one of a set of fixed values or numbers. This process is called quantization. Each coded value is a discrete step... if a signal is quantized without using dither, there will be quantization distortion related to the original input signal... In order to prevent this, the signal is "dithered", a process that mathematically removes the harmonics or other highly undesirable distortions entirely, and that replaces it with a constant, fixed noise level.[10]
The final version of audio that goes onto a compact disc contains only 16 bits per sample, but throughout the production process a greater number of bits are typically used to represent the sample. In the end, the digital data must be reduced to 16 bits for pressing onto a CD and distributing.
There are multiple ways to do this. One can, for example, simply discard the excess bits, a process called truncation. One can instead round to the nearest available value. Each of these methods, however, results in predictable and determinable errors in the result. Take, for example, a waveform that consists of the following values:
1 2 3 4 5 6 7 8
If we reduce our waveform by, say, 20%, then we end up with the following values:
0.8 1.6 2.4 3.2 4.0 4.8 5.6 6.4
If we truncate these values we end up with the following data:
0 1 2 3 4 4 5 6
If we instead round these values we end up with the following data:
1 2 2 3 4 5 6 6
For any original waveform, the process of reducing the waveform amplitude by 20% results in regular errors. Take for example a sine wave that, for some portion, matches the values above. Every time the sine wave's value hit "3.2," the truncated result would be off by 0.2, as in the sample data above. Every time the sine wave's value hit "4.0," there would be no error since the truncated result would be off by 0.0, also shown above. The magnitude of this error changes regularly and repeatedly throughout the sine wave's cycle. It is precisely this error which manifests itself as distortion. What the ear hears as distortion is the additional content at discrete frequencies created by the regular and repeated quantization error.
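A small sketch that makes this repetition visible (an illustrative example, not from the original article), using a sine that swings over the same 0 to 8 range as the sample data:

```python
import numpy as np

n = np.arange(200)
signal = 4.0 + 4.0 * np.sin(2 * np.pi * n / 50)  # sine swinging between 0 and 8
scaled = 0.8 * signal                            # reduce the waveform by 20%
error = scaled - np.floor(scaled)                # truncation error at each sample

# The error pattern in one cycle is identical to the next cycle:
print(np.allclose(error[:50], error[50:100]))    # True: the error repeats with the signal
```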
A plausible solution would be to take the value 4.8 and round it one direction or the other. For example, we could round it to 5 one time and then to 4 the next time. This would make the long-term average 4.5 instead of 4, so that over the long term the value is closer to its actual value. This, however, still results in determinable (though more complicated) error: every other time the value 4.8 comes up, the result is an error of 0.2, and the other times it is –0.8. This is still repeating, quantifiable error.
Another plausible solution would be to take 4.8 and round it so that the first four times out of five it rounded up to 5, and the fifth time it rounded to 4. This would average out to exactly 4.8 over the long term. Unfortunately, however, it still results in repeatable and determinable errors, and those errors still manifest themselves as distortion to the ear (though oversampling can reduce this).
This leads to the dither solution. Rather than predictably rounding up or down in a repeating pattern, what if we rounded up or down in a random pattern? If we came up with a way to randomly toggle our results between 4 and 5 so that 80% of the time it ended up on 5 then we would average 4.8 over the long run but would have random, non-repeating error in the result. This is done through dither.
We calculate a series of random numbers between 0.0 and 0.9 (e.g., 0.6, 0.1, 0.3, 0.6, 0.9) and add them to our results before truncating. Two times out of ten the result will truncate back to 4 (when 0.0 or 0.1 is added to 4.8), and the rest of the time it will truncate to 5; each occurrence has a 20% chance of yielding 4 and an 80% chance of yielding 5. Over the long haul, the results average to 4.8 and the quantization error is random, i.e. noise. This noise is less offensive to the ear than the determinable distortion that would otherwise result.
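A minimal sketch of this idea (assuming uniform random offsets in [0, 1), drawn with numpy; the names are illustrative): adding the random value before truncation makes each individual outcome unpredictable while preserving the long-run average.

```python
import numpy as np

rng = np.random.default_rng(1)
value = 4.8
dithered = np.floor(value + rng.random(100_000))  # add uniform [0, 1) noise, then truncate

print(dithered.mean())   # ~4.8: the long-run average is preserved
print(dithered[:10])     # a mix of 4s and 5s with no repeating pattern
```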
Dither should be added to any low-amplitude or highly periodic signal before any quantization or re-quantization process, in order to de-correlate the quantization noise from the input signal and to prevent non-linear behavior (distortion); the lower the bit depth, the greater the dither must be. The process still yields error, but the error is random in nature, so the resulting noise is effectively de-correlated from the intended signal. Any bit-reduction process should add dither to the waveform before the reduction is performed.
RPDF stands for "Rectangular Probability Density Function," equivalent to a roll of a die. Any number has the same random probability of surfacing.
TPDF stands for "Triangular Probability Density Function," equivalent to a roll of two dice (the sum of two independent samples of RPDF).
Gaussian PDF is equivalent to a roll of a large number of dice. The probabilities of the results follow a bell-shaped, or Gaussian, curve, typical of dither generated by analog sources such as microphone preamplifiers. If the bit depth of a recording is sufficiently great, that analog noise will be sufficient to dither the recording.
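A sketch of how these three dither signals might be generated (amplitudes are given in units of one quantization step, or LSB; the Gaussian standard deviation is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

rpdf = rng.uniform(-0.5, 0.5, n)              # RPDF: one "die", 1 LSB peak-to-peak
tpdf = (rng.uniform(-0.5, 0.5, n)
        + rng.uniform(-0.5, 0.5, n))          # TPDF: sum of two RPDFs, 2 LSB peak-to-peak
gauss = rng.normal(0.0, 0.5, n)               # Gaussian: many "dice"; std. dev. is a free choice
```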
Colored Dither is sometimes mentioned as dither that has been filtered to be different from white noise. Some dither algorithms use noise that has more energy in the higher frequencies so as to lower the energy in the critical audio band.
Noise shaping is a filtering process that shapes the spectral energy of quantization error, typically to either de-emphasise frequencies to which the ear is most sensitive or separate the signal and noise bands completely. If dither is used, its final spectrum depends on whether it is added inside or outside the feedback loop of the noise shaper: if inside, the dither is treated as part of the error signal and shaped along with actual quantization error; if outside, the dither is treated as part of the original signal and linearises quantization without being shaped itself. In this case, the final noise floor is the sum of the flat dither spectrum and the shaped quantization noise. While real-world noise shaping usually includes in-loop dithering, it is also possible to use it without adding dither at all, in which case the usual harmonic-distortion effects still appear at low signal levels.
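A minimal first-order noise-shaping sketch with in-loop TPDF dither (an illustrative implementation; practical noise shapers use higher-order, psychoacoustically weighted filters):

```python
import numpy as np

def noise_shape(samples, rng):
    """First-order noise shaping: requantize float samples (in LSB units)
    to integers, feeding each sample's error back into the next sample."""
    out = np.empty_like(samples)
    error = 0.0
    for i, x in enumerate(samples):
        v = x - error                                        # error feedback (first-order highpass)
        d = rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)  # in-loop TPDF dither
        out[i] = np.round(v + d)
        error = out[i] - v                                   # dither + quantization error, both shaped
    return out

rng = np.random.default_rng(0)
shaped = noise_shape(100.0 * np.sin(np.linspace(0, 50, 2000)), rng)
```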
If the signal being dithered is to undergo further processing, then it should be processed with TPDF dither that has an amplitude of two quantization steps (so that the dither values computed range from, say, –1 to +1, or 0 to 2).[9] This is the lowest-power ideal dither, in that it does not introduce noise modulation (the noise floor is constant) and completely eliminates the harmonic distortion from quantization. If colored dither is used at these intermediate processing stages, its frequency content can "bleed" into other, more noticeable frequency ranges and become distractingly audible.
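A sketch of this recommendation applied to a 16-bit reduction (an illustrative function; it assumes floating-point input samples in the range –1.0 to +1.0):

```python
import numpy as np

def requantize_16bit_tpdf(x, rng):
    """Reduce float samples in [-1.0, 1.0] to 16-bit integers using
    TPDF dither spanning two quantization steps (-1 to +1 LSB)."""
    scaled = x * 32767.0                          # after scaling, one LSB == 1.0
    tpdf = (rng.uniform(-0.5, 0.5, x.shape)
            + rng.uniform(-0.5, 0.5, x.shape))    # TPDF dither, 2 LSB peak-to-peak
    return np.clip(np.round(scaled + tpdf), -32768, 32767).astype(np.int16)

rng = np.random.default_rng(0)
out = requantize_16bit_tpdf(0.5 * np.sin(np.linspace(0, 100, 44100)), rng)
```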
If the signal being dithered is to undergo no further processing — it is being dithered to its final result for distribution — then colored dither or noise shaping is appropriate, and can effectively lower the audible noise level by putting most of that noise in a frequency range where it is less critical.
Dithering is a technique used in computer graphics to create the illusion of color depth in images with a limited color palette (color quantization). In a dithered image, colors not available in the palette are approximated by a diffusion of colored pixels from within the available palette. The human eye perceives the diffusion as a mixture of the colors within it (see color vision). Dithering is analogous to the halftone technique used in printing. Dithered images, particularly those with relatively few colors, can often be distinguished by a characteristic graininess, or speckled appearance.
Reducing the color depth of an image can often have significant visual side-effects. If the original image is a photograph, it is likely to have thousands, or even millions of distinct colors. The process of constraining the available colors to a specific color palette effectively throws away a certain amount of color information.
A number of factors can affect the resulting quality of a color-reduced image. Perhaps most significant is the color palette that will be used in the reduced image. For example, an original image (Figure 1) may be reduced to the 216-color "web-safe" color palette. If the original pixel colors are simply translated into the closest available color from the palette, no dithering occurs (Figure 2). Typically, this approach results in flat areas (contours) and a loss of detail, and may produce patches of color that are significantly different from the original. Shaded or gradient areas may appear as color bands, which may be distracting. The application of dithering can help to minimize such visual artifacts, and usually results in a better representation of the original (Figure 3). Dithering helps to reduce color banding and flatness.
One of the problems associated with using a fixed color palette is that many of the needed colors may not be available in the palette, and many of the available colors may not be needed; a fixed palette containing mostly shades of green would not be well-suited for images that do not contain many shades of green, for instance. The use of an optimized color palette can be of benefit in such cases. An optimized color palette is one in which the available colors are chosen based on how frequently they are used in the original source image. If the image is reduced based on an optimized palette, the result is often much closer to the original (Figure 4).
The number of colors available in the palette is also a contributing factor. If, for example, the palette is limited to only 16 colors, the resulting image could suffer from additional loss of detail, and even more pronounced problems with flatness and color banding (Figure 5). Once again, dithering can help to minimize such artifacts (Figure 6).
Display hardware, including early computer video adapters and many modern LCDs used in mobile phones and inexpensive digital cameras, show a much smaller color range than more advanced displays. One common application of dithering is to more accurately display graphics containing a greater range of colors than the hardware is capable of showing. For example, dithering might be used in order to display a photographic image containing millions of colors on video hardware that is only capable of showing 256 colors at a time. The 256 available colors would be used to generate a dithered approximation of the original image. Without dithering, the colors in the original image might simply be "rounded off" to the closest available color, resulting in a new image that is a poor representation of the original. Dithering takes advantage of the human eye's tendency to "mix" two colors in close proximity to one another.
Some LCDs may use temporal dithering to achieve a similar effect. By alternating each pixel's color value rapidly between two approximate colors in the panel's color space (also known as Frame Rate Control), a display panel which natively supports 18-bit color (6 bits per channel) can represent a 24-bit "true" color image (8 bits per channel).[11]
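A sketch of the idea (illustrative only, not an actual panel algorithm): an 8-bit channel value is shown on a 6-bit panel by alternating between the two nearest 6-bit levels, with the duty cycle chosen so the time average matches the 8-bit value.

```python
def frc_level(value_8bit, frame):
    """Approximate an 8-bit channel value on a 6-bit panel by
    alternating between two adjacent 6-bit levels across frames."""
    base, frac = divmod(value_8bit, 4)  # 6-bit level plus remainder (0-3)
    # Show the higher level on `frac` out of every 4 frames (clamped at 63):
    return min(base + (1 if frame % 4 < frac else 0), 63)

# Value 130 maps to level 32 with remainder 2, so level 33 appears on half the frames:
print([frc_level(130, f) for f in range(8)])  # [33, 33, 32, 32, 33, 33, 32, 32]
```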
Dithering such as this, in which the computer's display hardware is the primary limitation on color depth, is commonly employed in software such as web browsers. Since a web browser may be retrieving graphical elements from an external source, it may be necessary for the browser to perform dithering on images with too many colors for the available display. It was due to problems with dithering that a color palette known as the "web-safe color palette" was identified, for use in choosing colors that would not be dithered on displays with only 256 colors available.
Even when the total number of available colors in the display hardware is high enough to render full-color digital photographs, such as the 15- and 16-bit RGB highcolor modes (32,768 and 65,536 colors), banding may still be evident to the eye, especially in large areas of smooth shade transitions, even though the original image file has no banding at all. Dithering the 32 or 64 RGB levels of these modes produces a good "pseudo-truecolor" approximation that the eye cannot resolve as grainy. Furthermore, images displayed on 24-bit RGB hardware (8 bits per primary) can be dithered to simulate somewhat higher bit depth, or to minimize the loss of hues available after gamma correction. High-end still-image processing software, such as Adobe Photoshop, commonly uses these techniques for improved display.
Another useful application of dithering is for situations in which the graphic file format is the limiting factor. In particular, the commonly used GIF format is restricted to 256 or fewer colors in many graphics-editing programs. Images in other file formats, such as PNG, may also have such a restriction imposed for the sake of reduced file size. Images such as these have a fixed color palette defining all the colors that the image may use. For such situations, graphical editing software may be responsible for dithering images prior to saving them in such restrictive formats.
There are several algorithms designed to perform dithering. One of the earliest, and still one of the most popular, is the Floyd–Steinberg dithering algorithm, developed in 1975. One of the strengths of this algorithm is that it minimizes visual artifacts through an error-diffusion process; error-diffusion algorithms typically produce images that more closely represent the original than simpler dithering algorithms.[12]
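A minimal sketch of Floyd–Steinberg error diffusion, reducing an 8-bit grayscale image to pure black and white (the 7/16, 3/16, 5/16 and 1/16 weights are the algorithm's standard coefficients; the example input is illustrative):

```python
import numpy as np

def floyd_steinberg(img):
    """Dither an 8-bit grayscale image (2-D array) to values 0 and 255,
    diffusing each pixel's quantization error to unprocessed neighbors."""
    img = img.astype(float)  # work on a float copy
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0  # quantize to the nearest extreme
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16          # right
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16  # below-left
                img[y + 1, x] += err * 5 / 16          # below
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16  # below-right
    return img.astype(np.uint8)

# A horizontal gradient dithers to an interleaved mix of black and white pixels:
gradient = np.tile(np.linspace(0, 255, 64), (16, 1))
print(floyd_steinberg(gradient).mean())  # roughly 127, matching the gradient's mean
```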
Dithering methods include thresholding, random dithering, halftoning and Bayer (ordered) dithering, as well as the error-diffusion methods of Floyd–Steinberg; Jarvis, Judice & Ninke; Stucki; Burkes; Sierra; two-row Sierra; Sierra Lite; and Atkinson.
Stimulated Brillouin Scattering (SBS) is a nonlinear optical effect that limits the launched optical power in fiber optic systems. This power limit can be increased by dithering the transmit optical center frequency, typically implemented by modulating the laser's bias input.
Other well-written papers on the subject, at a more elementary level, are available from Nika Aldrich and Bob Katz, both esteemed experts in the field of digital audio; each has also written a far more comprehensive book on the topic. More recent research in the field of dither for audio was done by Lipshitz, Vanderkooy, and Wannamaker at the University of Waterloo.