Prefix code
A prefix code is a code, typically a variable-length code, with the "prefix property": no code word is a prefix of any other code word in the set. A code with code words {0, 10, 11} has the prefix property; a code consisting of {0, 1, 10, 11} does not, because "1" is a prefix of both "10" and "11".
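The prefix property can be checked directly by comparing every pair of code words. A minimal Python sketch (the function name is illustrative):

```python
def is_prefix_code(code_words):
    """Return True if no code word is a proper prefix of another."""
    for a in code_words:
        for b in code_words:
            if a != b and b.startswith(a):
                return False
    return True

print(is_prefix_code({"0", "10", "11"}))       # True: the set has the prefix property
print(is_prefix_code({"0", "1", "10", "11"}))  # False: "1" is a prefix of "10" and "11"
```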
Prefix codes are also known as prefix-free codes, prefix condition codes, and instantaneous codes; they are sometimes called comma-free codes, although that name properly refers to a different class of codes. Although Huffman coding is just one of many algorithms for deriving prefix codes, prefix codes are also widely referred to as "Huffman codes", even when the code was not produced by a Huffman algorithm.
Using prefix codes, a message can be transmitted as a sequence of concatenated code words, without any out-of-band markers to frame the words in the message. The recipient can decode the message unambiguously, by repeatedly finding and removing prefixes that form valid code words. This is not possible with codes that lack the prefix property, such as our example of {0, 1, 10, 11}: a receiver reading a "1" at the start of a code word would not know whether that was the complete code word "1", or merely the prefix of the code word "10" or "11".
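A minimal Python sketch of such a decoder for the prefix code {0, 10, 11} (the symbols attached to the code words are arbitrary):

```python
# Arbitrary assignment of symbols to the code words of the prefix code {0, 10, 11}.
CODE = {"0": "a", "10": "b", "11": "c"}

def decode(bits):
    """Decode a concatenation of code words by stripping one valid prefix at a time."""
    symbols = []
    while bits:
        for word, symbol in CODE.items():
            if bits.startswith(word):
                symbols.append(symbol)
                bits = bits[len(word):]
                break
        else:
            raise ValueError("input is not a concatenation of code words")
    return "".join(symbols)

print(decode("0101100"))  # "abcaa": 0 | 10 | 11 | 0 | 0
```

Because no code word is a prefix of another, at most one code word can match the front of the remaining input, so the decoder never has to backtrack.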
The variable-length Huffman codes, country calling codes, the country and publisher parts of ISBNs, and the Secondary Synchronization Codes used in the UMTS W-CDMA 3G Wireless Standard are prefix codes. Prefix codes are also a form of entropy encoding used in lossless data compression.
Prefix codes are not error-correcting codes. In actual practice, a message might first be compressed with a prefix code, and then encoded again (with an error-correcting code) before transmission.
This article is partly derived from Federal Standard 1037C, which uses the term comma-free code.
Techniques
Techniques for constructing a prefix code can be simple, or quite complicated.
If every word in the code has the same length, the code is called a fixed-length code. For example, ISO 8859-15 characters are always 8 bits long, UTF-32/UCS-4 characters are always 32 bits long, and ATM cells are always 424 bits long. Since all code words have the same length, none can be a proper prefix of another, so every fixed-length code is trivially a prefix code. Unfortunately, fixed-length encodings are inefficient in situations where some words are much more likely to be transmitted than others.
Some codes mark the end of a code word with a special "comma" symbol, different from normal data. [1] This is somewhat analogous to the period at the end of a sentence; it marks where one sentence ends and another begins. If every code word ends in a comma, and the comma does not appear elsewhere in a code word, the code is prefix-free. However, modern communication systems send everything as sequences of "1" and "0" – adding a third symbol would be expensive, and using it only at the ends of words would be inefficient. Morse code is an everyday example of a variable-length code with a comma. The long pauses between letters, and the even longer pauses between words, help people recognize where one letter (or word) ends, and the next begins. Similarly, Fibonacci coding uses a "11" to mark the end of every code word.
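As an illustration of the "11" end marker, here is a minimal Python sketch of Fibonacci coding (the function name is illustrative). It writes the Zeckendorf representation of a positive integer with the least significant Fibonacci digit first and appends a final "1"; because the Zeckendorf form never uses two consecutive Fibonacci numbers, "11" appears only at the end of each code word.

```python
def fibonacci_encode(n):
    """Fibonacci code of a positive integer n: Zeckendorf digits,
    least significant first, followed by a terminating '1'."""
    fibs = [1, 2]                            # Fibonacci numbers F(2), F(3), ...
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    digits = ["0"] * len(fibs)
    remainder = n
    for i in range(len(fibs) - 1, -1, -1):   # greedy: take the largest Fibonacci number first
        if fibs[i] <= remainder:
            digits[i] = "1"
            remainder -= fibs[i]
    while digits[-1] == "0":                 # drop unused high-order positions
        digits.pop()
    return "".join(digits) + "1"             # the appended '1' forms the '11' marker

for n in range(1, 7):
    print(n, fibonacci_encode(n))   # 1->11, 2->011, 3->0011, 4->1011, 5->00011, 6->10011
```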
Huffman coding is a more sophisticated technique for constructing variable-length prefix codes. The Huffman coding algorithm takes as input the frequencies that the code words should have, and constructs a prefix code that minimizes the weighted average of the code word lengths.
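A minimal sketch of the algorithm in Python (function and variable names are illustrative, not taken from any particular library): the two lightest subtrees are repeatedly merged, and each merge prepends one more bit to the code words in the merged subtrees.

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code(weights):
    """Build a prefix code from a dict of symbol -> weight by Huffman's algorithm."""
    tie = count()                     # tie-breaker so the heap never compares the dicts
    heap = [(w, next(tie), {symbol: ""}) for symbol, w in weights.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                # degenerate one-symbol alphabet
        return {symbol: "0" for symbol in weights}
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # two lightest subtrees
        w2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (w1 + w2, next(tie), merged))
    return heap[0][2]

weights = Counter("this is an example of a huffman tree")
print(huffman_code(weights))          # more frequent symbols receive shorter code words
```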
Kraft's inequality characterizes the sets of code word lengths that are possible in a prefix code.
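For a binary code with word lengths l1, ..., ln, the inequality requires 2^(-l1) + ... + 2^(-ln) <= 1; conversely, any set of lengths satisfying it can be realized by some prefix code. A quick numerical check of the two example sets from the introduction (a sketch; the function name is illustrative):

```python
from fractions import Fraction

def kraft_sum(lengths):
    """Kraft sum for binary code word lengths: the sum of 2**(-l) over the lengths."""
    return sum(Fraction(1, 2 ** l) for l in lengths)

print(kraft_sum([1, 2, 2]))     # 1   -> a prefix code with these lengths exists, e.g. {0, 10, 11}
print(kraft_sum([1, 1, 2, 2]))  # 3/2 -> exceeds 1, so no prefix code can have these lengths
```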
Prefix codes in use today
Examples of prefix codes include:
- country calling codes
- the country and publisher parts of ISBNs
- the Secondary Synchronization Codes used in the UMTS W-CDMA 3G Wireless Standard
- VCR Plus+ codes
- the UTF-8 system for encoding Unicode characters (see the sketch after this list)
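As a small check of the UTF-8 entry above: the lead byte of each encoded character determines how many bytes follow, so no character's byte sequence can be a prefix of another's. A sketch in Python, with arbitrarily chosen sample characters of each encoded length:

```python
# One sample character for each UTF-8 length (1 to 4 bytes); the choices are arbitrary.
samples = ["A", "é", "€", "𝄞"]
encodings = [c.encode("utf-8") for c in samples]
for a in encodings:
    for b in encodings:
        assert a == b or not b.startswith(a), "UTF-8 would be ambiguous"
print([e.hex() for e in encodings])   # ['41', 'c3a9', 'e282ac', 'f09d849e']
```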
Techniques
Commonly used techniques for constructing prefix codes include Huffman codes, the earlier Shannon–Fano codes, and universal codes such as the following (a short sketch of two of them appears after the list):
- Elias delta coding
- Elias gamma coding
- Elias omega coding
- Fibonacci coding
- Levenshtein coding
- Unary coding
- Golomb–Rice coding
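To give a concrete flavor of these codes, here is a minimal Python sketch of two of the simplest, unary coding (under the convention of n-1 ones followed by a terminating zero) and Elias gamma coding; the function names are illustrative.

```python
def unary(n):
    """Unary code of a positive integer: n-1 ones followed by a terminating zero."""
    return "1" * (n - 1) + "0"

def elias_gamma(n):
    """Elias gamma code of a positive integer: the binary form of n,
    preceded by one zero for each bit after its leading 1."""
    binary = bin(n)[2:]
    return "0" * (len(binary) - 1) + binary

for n in (1, 2, 3, 4, 9):
    print(n, unary(n), elias_gamma(n))
# 1 0 1
# 2 10 010
# 3 110 011
# 4 1110 00100
# 9 111111110 0001001
```

Both are prefix codes: in unary coding the terminating zero plays the role of the comma, and in Elias gamma coding the run of leading zeros tells the decoder exactly how many further bits to read.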
References
- P. Elias, "Universal codeword sets and representations of integers", IEEE Transactions on Information Theory, 21 (2), 1975, pp. 194–203.
- D. A. Huffman, "A method for the construction of minimum-redundancy codes" (PDF), Proceedings of the I.R.E., Sept. 1952, pp. 1098–1102 (Huffman's original article)
- "Profile: David A. Huffman", Scientific American, Sept. 1991, pp. 54–58 (background story)
- Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 16.3, pp. 385–392.
External links
- Codes, trees and the prefix property by Kona Macphee