Video coding

From Wikipedia, the free encyclopedia


Video coding is the field of electrical engineering that deals with finding efficient coding formats for digital video.

Video data usually contains not only visual information but also audio; therefore, it is often referred to as multimedia. Modern video coding standards even include other multimedia data such as synthetic computer graphics, text, and meta-information for searching/browsing and digital rights management. They also often provide mechanisms for user interaction.

Nevertheless, the most intensive parts of video data in terms of data size (memory demand, transmission bandwidth) remain the (visual) video and audio data. These parts have to be compressed. Unfortunately, this can hardly be done without loss of quality (lossy compression), because of the enormous size of a lossless video stream. Two specialized research areas deal with multimedia compression: video compression and audio compression.

Video coding has two distinct goals: the storage and the transmission of video data. These two goals have much in common; therefore, video file formats usually have the same structure as streaming video formats, with just a little header information added.

A computer program that encodes and decodes video data is called a video codec. (The part that deals only with audio data is called an audio codec.)

Almost all successful techniques developed for video coding have been integrated into the MPEG standards. The MPEG standards therefore represent a comprehensive knowledge base for video coding, and most video coding standards and commercial formats are modifications of MPEG.

Compression

Because multimedia typically derives from data sampled by a device such as a camera or a microphone, and because such data contains large amounts of random noise, traditional lossless compression algorithms tend to do a poor job compressing multimedia. Multimedia compression algorithms, traditionally known as codecs, work in a lossy fashion:

  1. Transform the data according to a model designed to reduce sample-to-sample correlation, concentrating the important signal in a few data values.
  2. Quantize the data, most of which has become noise. Some codecs use a scalar quantizer followed by run-length encoding; others use vector quantization.
  3. Use entropy coding such as Huffman coding to reduce the number of bits that the most common values use.

This method is called transform coding.
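The three-step pipeline above can be sketched in miniature. The sketch below is illustrative only: it uses a simple pairwise averaging/differencing transform (a Haar-style step) in place of a real codec's transform, a scalar quantizer, run-length encoding, and an ideal entropy estimate as a stand-in for Huffman coding; all function names are hypothetical.

```python
# Toy illustration of the transform-coding pipeline (not any real codec).
import math
from collections import Counter

def haar_transform(samples):
    """Step 1: decorrelate neighbouring samples.
    Each pair (a, b) becomes an average and a difference; for smooth
    signals the differences are near zero, so the important signal is
    concentrated in a few values (the averages)."""
    averages, differences = [], []
    for i in range(0, len(samples), 2):
        a, b = samples[i], samples[i + 1]
        averages.append((a + b) / 2)
        differences.append((a - b) / 2)
    return averages + differences

def quantize(coeffs, step):
    """Step 2: scalar quantization -- the lossy step that discards
    low-amplitude detail (much of it noise)."""
    return [round(c / step) for c in coeffs]

def run_length_encode(values):
    """Collapse runs of repeated values (mostly zeros after quantization),
    as in codecs that pair a scalar quantizer with run-length encoding."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

def entropy_bits(values):
    """Step 3 (estimate): the ideal entropy-coded size in bits, a lower
    bound that Huffman coding approaches by giving common values
    shorter bit strings."""
    counts = Counter(values)
    total = len(values)
    return sum(-n * math.log2(n / total) for n in counts.values())

# A smooth 8-sample signal: the transform concentrates it into four
# averages, the four differences quantize to zero, and run-length plus
# entropy coding then need far fewer bits than the raw samples.
signal = [10, 11, 12, 12, 13, 13, 14, 15]
quantized = quantize(haar_transform(signal), step=1)
compact = run_length_encode(quantized)
```

Decoding reverses the chain (entropy decode, dequantize, inverse transform); the quantization loss is the only step that cannot be undone, which is why the overall scheme is lossy.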

Multimedia compression has become the primary focus of compression research, driven mainly by the search for more efficient models. It is the most important component of video coding formats.
