Timeline of information theory
From Wikipedia, the free encyclopedia
A timeline of events related to information theory, data compression, error-correcting codes, and related subjects.
- 1872 - Ludwig Boltzmann presents his H-theorem, and with it the formula Σ pᵢ log pᵢ for the entropy of a single gas particle.
- 1878 - J. Willard Gibbs defines the Gibbs entropy: the probabilities in the entropy formula are now taken as probabilities of the state of the whole system.
- 1924 - Harry Nyquist discusses quantifying "intelligence" and the speed at which it can be transmitted by a communication system.
- 1927 - John von Neumann defines the von Neumann entropy, extending the Gibbs entropy to quantum mechanics.
- 1928 - Ralph Hartley introduces Hartley information as the logarithm of the number of possible messages, with information being communicated when the receiver can distinguish one sequence of symbols from any other (regardless of any associated meaning).
- 1929 - Leo Szilard analyses Maxwell's Demon, showing how a Szilard engine can sometimes transform information into the extraction of useful work.
- 1940 - Alan Turing introduces the deciban as a measure of information inferred about the German Enigma machine cypher settings by the Banburismus process.
- 1944 - Claude Shannon's theory of information is substantially complete.
- 1948 - Claude E. Shannon publishes A Mathematical Theory of Communication.
- 1949 - Claude E. Shannon publishes Communication in the Presence of Noise, containing the Nyquist–Shannon sampling theorem and the Shannon–Hartley law.
- 1949 - Claude E. Shannon's Communication Theory of Secrecy Systems is declassified.
- 1949 - Marcel J. E. Golay introduces Golay codes for forward error correction.
- 1950 - Richard Hamming introduces Hamming codes for forward error correction.
- 1951 - David A. Huffman invents Huffman encoding, a method of finding optimal prefix codes for lossless data compression.
- 1951 - Solomon Kullback and Richard Leibler introduce the Kullback–Leibler divergence.
- 1954 - Irving S. Reed and D. E. Muller propose Reed–Muller codes.
- 1955 - Peter Elias introduces convolutional codes.
- 1957 - Eugene Prange first discusses cyclic codes.
- 1959 - Raj Chandra Bose and Dwijendra Kumar Ray-Chaudhuri, and independently the next year Alexis Hocquenghem, present BCH codes.
- 1960 - Irving S. Reed and Gustave Solomon propose Reed–Solomon codes.
- 1962 - Robert G. Gallager proposes low-density parity-check (LDPC) codes; they go unused for 30 years because decoding them was impractical on the hardware of the time.
- 1967 - Andrew Viterbi reveals the Viterbi algorithm, making decoding of convolutional codes practicable.
- 1968 - Elwyn Berlekamp invents the Berlekamp–Massey algorithm; its application to decoding BCH and Reed–Solomon codes is pointed out by James L. Massey the following year.
- 1968 - Chris Wallace and David M. Boulton publish the first of many papers on Minimum Message Length (MML) statistical and inductive inference.
- 1973 - David Slepian and Jack Wolf discover and prove the Slepian–Wolf coding limits for distributed source coding.
- 1977 - Jorma J. Rissanen patents arithmetic coding for IBM.
- 1977 - Abraham Lempel and Jacob Ziv develop Lempel–Ziv compression (LZ77).
- 1993 - Claude Berrou, Alain Glavieux and Punya Thitimajshima introduce turbo codes.
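Two of the quantities above, the entropy formula traced from Boltzmann to Shannon and the Kullback–Leibler divergence of 1951, can be computed directly from their definitions. The following sketch is illustrative (function names and the choice of base-2 logarithms, i.e. bits, are this example's conventions, not from any of the cited works):

```python
import math

def shannon_entropy(probs):
    """Entropy in bits: H = -sum(p_i * log2(p_i)); terms with p_i = 0 contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits, assuming q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A fair coin carries 1 bit per toss; a uniform 4-way choice carries 2 bits.
print(shannon_entropy([0.5, 0.5]))   # 1.0
print(shannon_entropy([0.25] * 4))   # 2.0
```

The divergence is zero exactly when the two distributions agree, which is why it serves as a measure of how far an assumed distribution q is from the true distribution p.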
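The Hamming codes of 1950 are small enough to write out in full. Below is a sketch of the classic Hamming(7,4) code, which encodes 4 data bits into 7 bits and corrects any single-bit error; the bit layout (parity bits at positions 1, 2 and 4) is the standard textbook convention, and the function names are this example's own:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword (parity at positions 1, 2, 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the error, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

The three parity checks are arranged so that the syndrome, read as a binary number, directly names the position of the flipped bit, which is what makes decoding a single table-free step.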
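Huffman's 1951 method builds an optimal prefix code by repeatedly merging the two least-frequent subtrees. A minimal sketch in Python, using a heap for the repeated minimum extraction (the function name and tie-breaking counter are this example's own choices):

```python
import heapq

def huffman_codes(freqs):
    """Build an optimal prefix code from a {symbol: frequency} mapping (Huffman, 1951)."""
    # Heap entries are (weight, tiebreak, tree); a tree is a symbol or a (left, right) pair.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:
        return {heap[0][2]: "0"}  # degenerate case: one symbol, one 1-bit code
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)  # two lightest subtrees...
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))  # ...merge into one
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes
```

Because every symbol ends at a leaf of the merge tree, no codeword is a prefix of another, so the encoded bitstream can be decoded unambiguously without separators.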