Timeline of information theory
A timeline of events related to information theory, data compression, error-correcting codes, and related subjects.
- 1872 - Ludwig Boltzmann introduces his H-theorem, and with it the formula Σ pᵢ log pᵢ for the entropy of a single gas particle (see the formulas after this timeline).
- 1878 - J. Willard Gibbs introduces the Gibbs entropy, where the probabilities in the entropy formula are now taken as probabilities of the state of the whole system.
- 1924 - Harry Nyquist discusses quantifying "intelligence" and the speed at which it can be transmitted by a communication system.
- 1928 - Ralph Hartley introduces Hartley information as the logarithm of the number of possible messages, with information being communicated when the receiver can distinguish one sequence of symbols from any other, regardless of any associated meaning (see the formula after this timeline).
- 1929 - Leo Szilard analyses Maxwell's Demon, showing how a Szilard engine can sometimes transform information into the extraction of useful work.
- 1940 - Alan Turing introduces the deciban as a measure of information inferred about the German Enigma machine cypher settings by the Banburismus process.
- 1944 - Claude Shannon's theory of information is substantially complete.
- 1948 - Claude E. Shannon publishes A Mathematical Theory of Communication.
- 1949 - Claude E. Shannon publishes Communication in the Presence of Noise, establishing the Nyquist–Shannon sampling theorem and the Shannon–Hartley law (see the capacity formula after this timeline).
- 1949 - Claude E. Shannon's Communication Theory of Secrecy Systems is declassified.
- 1949 - Marcel J. E. Golay introduces Golay codes for forward error correction.
- 1950 - Richard Hamming introduces Hamming codes for forward error correction (see the sketch after this timeline).
- 1951 - David A. Huffman invents Huffman encoding, a method of finding optimal prefix codes for lossless data compression (see the sketch after this timeline).
- 1951 - Solomon Kullback and Richard Leibler introduce the Kullback–Leibler divergence (defined after this timeline).
- 1954 - Irving S. Reed and D. E. Muller propose Reed–Muller codes.
- 1955 - Peter Elias introduces convolutional codes.
- 1957 - Eugene Prange first discusses cyclic codes.
- 1959 - Alexis Hocquenghem, and independently in 1960 Raj Chandra Bose and D. K. Ray-Chaudhuri, discover BCH codes.
- 1960 - Irving S. Reed and Gustave Solomon propose Reed–Solomon codes.
- 1962 - Robert G. Gallager proposes low-density parity-check (LDPC) codes; they go unused for some 30 years because decoding them was computationally impractical on the hardware of the time.
- 1967 - Andrew Viterbi reveals the Viterbi algorithm, making decoding of convolutional codes practicable.
- 1968 - Elwyn Berlekamp invents the Berlekamp–Massey algorithm; its application to decoding BCH and Reed–Solomon codes is pointed out by J. L. Massey the following year.
- 1968 - Chris Wallace and David M. Boulton publish the first of many papers on Minimum Message Length (MML) statistical and inductive inference.
- 1973 - David Slepian and Jack Wolf discover and prove the Slepian–Wolf coding limits for distributed source coding.
- 1977 - Jorma J. Rissanen patents arithmetic coding for IBM.
- 1977 - Abraham Lempel and Jacob Ziv develop Lempel–Ziv compression (LZ77).
- 1993 - Claude Berrou, Alain Glavieux and Punya Thitimajshima introduce turbo codes.
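
The entropy formulas named in the 1872 and 1878 entries, in modern notation (sign and constant conventions vary between sources):

```latex
% Boltzmann's H-functional over the states i of a single particle:
H = \sum_i p_i \log p_i
% Gibbs entropy, with p_i now the probability of a microstate of the
% whole system and k_B Boltzmann's constant:
S = -k_B \sum_i p_i \ln p_i
```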
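Hartley's 1928 measure, for a message of ℓ symbols drawn from an alphabet of size s, is the logarithm of the number of possible messages:

```latex
H = \log s^{\ell} = \ell \log s
```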
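The 1948–1949 entries rest on two formulas: the Shannon entropy of a discrete source, and the Shannon–Hartley capacity of a channel of bandwidth B hertz with signal-to-noise ratio S/N:

```latex
H(X) = -\sum_i p_i \log_2 p_i \qquad \text{(bits per symbol)}
C = B \log_2\!\left(1 + \frac{S}{N}\right) \qquad \text{(bits per second)}
```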
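A minimal Python sketch of the (7,4) Hamming code from the 1950 entry. The generator and parity-check matrices below are one common systematic choice, not the only one, and the helper names are illustrative:

```python
import numpy as np

# One common systematic form of the (7,4) Hamming code:
# G = [I_4 | P] generates codewords, H = [P^T | I_3] checks them.
# All arithmetic is mod 2.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    """Map 4 data bits to a 7-bit codeword."""
    return np.array(data4) @ G % 2

def decode(word7):
    """Correct at most one flipped bit, then return the 4 data bits."""
    word7 = np.array(word7)
    syndrome = H @ word7 % 2
    if syndrome.any():
        # A nonzero syndrome equals the column of H at the error position.
        error_pos = next(p for p in range(7)
                         if np.array_equal(H[:, p], syndrome))
        word7[error_pos] ^= 1
    return word7[:4]

codeword = encode([1, 0, 1, 1])
codeword[2] ^= 1                          # flip one bit in transit
assert list(decode(codeword)) == [1, 0, 1, 1]
```

Every single-bit error yields a distinct nonzero syndrome, which is why one flipped bit in seven can always be located and corrected.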
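A compact sketch of Huffman's construction from the 1951 entry, assuming the usual greedy formulation: repeatedly merge the two least frequent subtrees until one tree remains. Function and variable names are illustrative:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build an optimal prefix code for the symbols in `text`."""
    # Each heap entry: (frequency, tiebreaker, {symbol: code-so-far}).
    heap = [(freq, i, {sym: ""}) for i, (sym, freq)
            in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        # Merge the two least frequent subtrees; prefix 0/1 to their codes.
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
encoded = "".join(codes[s] for s in "abracadabra")
# 'a' is the most frequent symbol, so it gets the shortest codeword.
assert len(codes["a"]) <= len(codes["b"])
```

Among all prefix codes, the resulting code minimizes the expected codeword length for the given symbol frequencies.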
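The Kullback–Leibler divergence from the 1951 entry, for discrete distributions P and Q on the same support:

```latex
D_{\mathrm{KL}}(P \parallel Q) = \sum_i p_i \log \frac{p_i}{q_i}
```

It is non-negative and zero exactly when P = Q, but it is not symmetric in P and Q.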