Shannon coding

In the field of data compression, Shannon coding, named after its creator, Claude Shannon, is a lossless data compression technique for constructing a prefix code based on a set of symbols and their probabilities (estimated or measured). It is suboptimal in the sense that it does not achieve the lowest possible expected codeword length, as Huffman coding does, and it is never better than, though sometimes equal to, Shannon–Fano coding.

The method was the first of its type. The technique was used to prove Shannon's noiseless coding theorem in his 1948 article "A Mathematical Theory of Communication",[1] and is therefore a centerpiece of the information age.

This coding method gave rise to the field of information theory; without its contribution, the world would not have any of its many successors, such as Shannon–Fano coding, Huffman coding, or arithmetic coding. Much of our day-to-day life is significantly influenced by digital data, and this would not be possible without Shannon coding and the ongoing evolution of its successor coding methods.

In Shannon coding, the symbols are arranged in order from most probable to least probable, and each symbol is assigned a codeword by taking the first l_i = \left\lceil -\log_2 p_i \right\rceil bits from the binary expansion of the cumulative probability \sum_{k=1}^{i-1} p_k. Here \left\lceil x \right\rceil denotes the ceiling function, which rounds x up to the next integer value.
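
The procedure can be sketched in a few lines of Python (a minimal illustration of the construction above; the six-symbol probability distribution in the final line is an assumed example, not taken from the source):

    from math import ceil, log2

    def shannon_code(probs):
        """Assign Shannon codewords given a dict of symbol -> probability."""
        # Arrange symbols from most probable to least probable.
        symbols = sorted(probs, key=probs.get, reverse=True)
        codes = {}
        cumulative = 0.0  # sum of probabilities of all more-probable symbols
        for s in symbols:
            p = probs[s]
            length = ceil(-log2(p))  # codeword length l_i = ceil(-log2 p_i)
            # Take the first l_i bits of the binary expansion of the
            # cumulative probability.
            frac = cumulative
            bits = []
            for _ in range(length):
                frac *= 2
                bit = int(frac)
                bits.append(str(bit))
                frac -= bit
            codes[s] = "".join(bits)
            cumulative += p
        return codes

    # Example (assumed distribution):
    print(shannon_code({"a": 0.36, "b": 0.18, "c": 0.18,
                        "d": 0.12, "e": 0.09, "f": 0.07}))

For that distribution the sketch yields the codewords 00, 010, 100, 1011, 1101, and 1110; no codeword is a prefix of another, as the construction guarantees.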

References

  1. "A Mathematical Theory of Communication" http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf