Sub-band coding

From Wikipedia, the free encyclopedia

Sub-band coding (SBC) is any form of transform coding that breaks a signal into a number of different frequency bands and encodes each one independently. It is used in audio compression to remove parts of the signal that the ear cannot detect (for example, a quiet sound masked by a loud one). The remaining signal is encoded with variable bit rates, with more bits per sample used in the middle of the frequency range, where the ear is most sensitive.

For example, sub-band coding is used in MPEG-1 audio compression.

Basic Principles

SBC depends on a phenomenon of the human hearing system called masking. Normal human ears are sensitive to a wide range of frequencies. However, when a lot of signal energy is present at one frequency, the ear cannot hear lower energy at nearby frequencies. We say that the louder frequency masks the softer frequencies. The louder frequency is called the masker.
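
Masking can be illustrated with a toy calculation. The sketch below assumes a simple triangular threshold curve around the masker; the slope (15 dB per octave) and the offsets are illustrative numbers, not a calibrated psychoacoustic model.

```python
import math

def masking_threshold_db(freq_hz, masker_hz=1000.0, masker_db=80.0):
    """Approximate level (dB) a tone at freq_hz must exceed to be heard
    next to the masker; anything quieter than this is masked.

    Toy model: the threshold peaks near the masker and falls off with
    distance in octaves (a crude stand-in for the Bark scale)."""
    octaves = abs(math.log2(freq_hz / masker_hz))
    # Assume the threshold drops ~15 dB per octave away from the masker,
    # starting 10 dB below the masker's own level.
    return masker_db - 15.0 * octaves - 10.0

# A 60 dB tone at 1.1 kHz sits close to an 80 dB masker at 1 kHz,
# so the masking threshold there is still above 60 dB: the tone is masked.
print(masking_threshold_db(1100))
# The same 60 dB tone far away at 8 kHz is well clear of the masker.
print(masking_threshold_db(8000))
```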

(Strictly speaking, what we're describing here is called simultaneous masking, or masking across frequency. There are also nonsimultaneous masking (masking across time) phenomena, as well as many other phenomena of human hearing, which we're not concerned with here.)

The basic idea of SBC is to save signal bandwidth by throwing away information about frequencies which are masked. The result won't be the same as the original signal, but if the computation is done right, human ears can't hear the difference.

Encoding audio signals

The simplest way to encode audio signals is pulse-code modulation (PCM), which is used on audio CDs, DAT recordings, and so on. Like all digitization, PCM adds noise to the signal, which is generally undesirable. The fewer bits used in digitization, the more noise gets added. The way to keep this noise from being a problem is to use enough bits to ensure that the noise is always low enough to be masked either by the signal or by other sources of noise. This produces a high quality signal, but at a high bit rate (over 700 kbit/s for one channel of CD audio: 44,100 samples/s x 16 bits/sample). A lot of those bits are encoding masked portions of the signal, and are wasted.
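
The trade-off between bit depth and quantization noise can be seen in a small experiment: uniformly quantizing a full-scale sine and measuring the signal-to-noise ratio, which grows by roughly 6 dB per bit.

```python
import math

def quantize(x, bits):
    """Uniform quantization of samples in [-1, 1] to the given bit depth."""
    step = 2.0 / (2 ** bits)
    return [max(-1.0, min(1.0 - step, round(v / step) * step)) for v in x]

def snr_db(signal, quantized):
    """Signal-to-noise ratio of the quantized signal, in dB."""
    sig = sum(v * v for v in signal)
    err = sum((a - b) ** 2 for a, b in zip(signal, quantized))
    return 10.0 * math.log10(sig / err)

# Full-scale sine, 1000 samples: SNR is roughly 6.02*bits + 1.76 dB.
x = [math.sin(2 * math.pi * 7 * n / 1000) for n in range(1000)]
for bits in (8, 12, 16):
    print(bits, round(snr_db(x, quantize(x, bits)), 1))
```

Each extra bit halves the quantization step, so the noise power drops by a factor of four (about 6 dB).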

There are cleverer ways of digitizing an audio signal, which can save some of that wasted bandwidth. A classic method is nonlinear PCM, such as mu-law encoding (named after the parameter mu in its companding formula). This is like PCM on a logarithmic scale, and the effect is to add noise that is proportional to the signal strength. Sun's Au file format is a popular container for mu-law-encoded sound. Using 8-bit mu-law encoding would cut one channel of CD audio down to about 350 kbit/s, which is better but still fairly high, and is often audibly poorer than the original (this scheme doesn't really model masking effects).
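
A minimal sketch of mu-law companding, using the standard mu = 255 curve. The 8-bit packing here is a simplified illustration, not the exact G.711 codeword layout.

```python
import math

MU = 255.0  # standard mu value for 8-bit telephony companding

def mu_law_compress(x):
    """Map a sample in [-1, 1] onto a logarithmic scale in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Invert the companding curve."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def encode8(x):
    """Compress, then quantize to 8 bits (256 levels)."""
    return round((mu_law_compress(x) + 1.0) * 127.5)

def decode8(code):
    return mu_law_expand(code / 127.5 - 1.0)

# Quantization error shrinks with the signal: small inputs keep fine steps,
# so the noise stays roughly proportional to signal strength.
for x in (0.001, 0.01, 0.1, 1.0):
    err = abs(decode8(encode8(x)) - x)
    print(x, f"{err:.2e}")
```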

A basic SBC scheme

[Figure 1: structure of a basic SBC encoder]

Most SBC encoders use a structure like this. First, a time-frequency mapping (a filter bank, an FFT, or something else) decomposes the input signal into subbands. A psychoacoustic model examines these subbands as well as the original signal and determines masking thresholds. Using these thresholds, each subband sample is quantized and encoded so as to keep the quantization noise below the masking threshold. The final step is to assemble the quantized samples into frames, along with the side information (such as bit allocations) the decoder needs to unpack them.
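
The encoder stages above can be sketched as follows. This toy uses a block DFT as the time-frequency mapping and a stand-in "psychoacoustic model" that simply returns a fixed bit allocation; a real coder would derive the allocation from computed masking thresholds.

```python
import cmath
import math

def analysis(block):
    """Time-frequency mapping: DFT of one block, grouped into 4 subbands."""
    n = len(block)
    spec = [sum(block[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) / n
            for k in range(n)]
    band = n // 4
    return [spec[i * band:(i + 1) * band] for i in range(4)]

def psychoacoustic_model(subbands):
    """Stand-in model: a fixed bits-per-sample allocation per subband,
    pretending masking allows coarser coding in the higher bands."""
    return [8, 6, 4, 2]

def quantize_band(coeffs, bits):
    """Quantize real and imaginary parts with the band's step size."""
    step = 2.0 / (2 ** bits)
    return [(round(c.real / step), round(c.imag / step)) for c in coeffs]

def encode_frame(block):
    subbands = analysis(block)
    alloc = psychoacoustic_model(subbands)
    # Frame = bit allocation (side information) plus quantized samples.
    return alloc, [quantize_band(b, bits) for b, bits in zip(subbands, alloc)]

x = [math.sin(2 * math.pi * 3 * t / 32) for t in range(32)]
alloc, frame = encode_frame(x)
print(alloc, len(frame))
```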

Decoding is easier, since there is no need for a psychoacoustic model. The frames are unpacked, subband samples are decoded, and a frequency-time mapping turns them back into a single output audio signal.
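
A matching toy decoder, under an assumed frame layout (a bit-allocation list plus quantized complex subband samples): unpack the frame, dequantize each band, and apply an inverse DFT as the frequency-time mapping.

```python
import cmath
import math

def dequantize_band(codes, bits):
    """Rescale quantized (real, imag) code pairs back to coefficients."""
    step = 2.0 / (2 ** bits)
    return [complex(re * step, im * step) for re, im in codes]

def synthesis(subbands, n):
    """Frequency-time mapping: reassemble the spectrum, inverse DFT."""
    spec = [c for band in subbands for c in band]
    return [sum(spec[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real
            for t in range(n)]

def decode_frame(alloc, frame, n=32):
    subbands = [dequantize_band(codes, bits)
                for codes, bits in zip(frame, alloc)]
    return synthesis(subbands, n)

# With an all-zero frame the output is silence:
alloc = [8, 6, 4, 2]
frame = [[(0, 0)] * 8 for _ in range(4)]
print(max(abs(v) for v in decode_frame(alloc, frame)))
```

Note that no psychoacoustic model appears anywhere here: the decoder just follows the bit allocation the encoder placed in the frame.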

This is a basic, generic sketch of how SBC works. Notice that we haven't looked at how much computation it takes to do this. For practical systems that need to run in real time, computation is a major issue, and is usually the main constraint on what can be done.

SBC systems have been developed by many of the key companies and laboratories in the audio industry. Beginning in the late 1980s, a standardization body of the ISO called the Moving Picture Experts Group (MPEG) developed generic standards for coding of both audio and video.

External links

This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.
