Algorithmic information theory
Algorithmic information theory is a subfield of information theory and computer science that concerns itself with the relationship between computation and information. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."[1]
Overview
Algorithmic information theory principally studies Kolmogorov complexity and other complexity measures on strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, algorithmic information theory can be used to study a wide variety of mathematical objects, including integers and real numbers.
This use of the term "information" might be a bit misleading, as it depends upon the concept of compressibility. Informally, from the point of view of algorithmic information theory, the information content of a string is equivalent to the length of the shortest possible self-contained representation of that string. A self-contained representation is essentially a program – in some fixed but otherwise irrelevant universal programming language – that, when run, outputs the original string.
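In symbols, the information content of a string x is its Kolmogorov complexity with respect to a fixed universal machine U, namely the length of the shortest program that outputs x:

    K_U(x) = \min \{\, |p| : U(p) = x \,\}

By the invariance theorem, replacing U with another universal machine changes this value by at most an additive constant independent of x, which is why the choice of language is "fixed but otherwise irrelevant".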
From this point of view, a 3,000-page encyclopedia actually contains less information than 3,000 pages of completely random letters, despite the fact that the encyclopedia is much more useful. This is because to reconstruct the entire sequence of random letters, one must know, more or less, what every single letter is. On the other hand, if every vowel were removed from the encyclopedia, someone with reasonable knowledge of the English language could reconstruct it, just as one could likely reconstruct the sentence "Ths sntnc hs lw nfrmtn cntnt" from the context and consonants present. For this reason, high-information strings and sequences are sometimes called "random". People also sometimes attempt to distinguish between "information" and "useful information" and to provide rigorous definitions for the latter, with the idea that the random letters may contain more information than the encyclopedia, but the encyclopedia contains more "useful" information.
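A rough, computable illustration of this idea (Kolmogorov complexity itself is not computable) is to compare how well a general-purpose compressor such as zlib shrinks highly repetitive text versus randomly generated letters. The following sketch uses arbitrary sample texts and variable names chosen only for this example:

    import random
    import string
    import zlib

    # zlib is only a crude, computable stand-in for the (uncomputable)
    # length of the shortest program producing a string.
    repetitive = ("the quick brown fox jumps over the lazy dog. " * 200).encode()

    random.seed(0)  # arbitrary seed, for reproducibility of the sketch
    letters = string.ascii_lowercase + " "
    random_text = "".join(random.choice(letters)
                          for _ in range(len(repetitive))).encode()

    # The repetitive text compresses to a small fraction of its size;
    # the random letters compress far less (only the small 27-symbol
    # alphabet gives the compressor anything to exploit).
    print(len(repetitive), len(zlib.compress(repetitive, 9)))
    print(len(random_text), len(zlib.compress(random_text, 9)))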
Unlike classical information theory, algorithmic information theory gives formal, rigorous definitions of a random string and a random infinite sequence that do not depend on physical or philosophical intuitions about nondeterminism or likelihood.
Some of the results of algorithmic information theory, such as Chaitin's incompleteness theorem, appear to challenge common mathematical and philosophical intuitions. Most notable among these is the construction of Chaitin's constant Ω, a real number that expresses the probability that a randomly generated program will eventually halt. Ω has numerous remarkable mathematical properties, including the fact that it is definable but not computable. Thus, although Ω is easily defined, any consistent, effectively axiomatized theory can determine only finitely many of its digits, so it is in some sense unknowable, providing an absolute limit on knowledge that is reminiscent of Gödel's incompleteness theorems. Although the digits of Ω cannot be computed, many of its general properties are known; for example, it is normal and transcendental, and in fact these properties are shared by every infinite algorithmically random sequence.
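Concretely, for a fixed prefix-free universal machine U, Ω is the halting probability

    \Omega_U = \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}

where the sum ranges over all self-delimiting programs p on which U halts. The numerical value depends on the choice of U, but the key properties (definability, uncomputability, randomness) do not.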
History
The field was developed by Andrey Kolmogorov, Ray Solomonoff and Gregory Chaitin, starting in the 1960s. There are several variants of Kolmogorov complexity or algorithmic information; the most widely used one is based on self-delimiting programs and is mainly due to Leonid Levin (1974). Per Martin-Löf also contributed significantly to the information theory of infinite sequences.
Precise definitions
A binary string is said to be random if the Kolmogorov complexity of the string is at least the length of the string. A simple counting argument shows that some strings of any given length are random, and almost all strings are very close to being random. Since Kolmogorov complexity depends on a fixed choice of universal Turing machine (informally, a fixed "description language" in which the "descriptions" are given), the collection of random strings does depend on the choice of fixed universal machine. Nevertheless, the collection of random strings, as a whole, has similar properties regardless of the fixed machine, so one can (and often does) talk about the properties of random strings as a group without having to first specify a universal machine.
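The counting argument is short: there are 2^n strings of length n, but fewer than 2^n descriptions of length less than n, so at least one string of each length must be random. More generally,

    \#\{\, x \in \{0,1\}^n : K(x) < n - k \,\} \le 2^{n-k} - 1

since each description of length less than n − k produces at most one string; hence the fraction of strings of length n whose complexity falls more than k below their length is less than 2^{-k}.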
An infinite binary sequence is said to be random if, for some constant c, the Kolmogorov complexity of every initial segment (with length n) of the sequence is at least n − c. Importantly, the complexity used here is prefix-free complexity; if plain complexity were used, there would be no random sequences. However, with this definition, it can be shown that almost every sequence (from the point of view of the standard measure, the "fair coin" or Lebesgue measure, on the space of infinite binary sequences) is random. Also, since it can be shown that the Kolmogorov complexity relative to two different universal machines differs by at most a constant, the collection of random infinite sequences does not depend on the choice of universal machine (in contrast to finite strings). This definition of randomness is usually called Martin-Löf randomness, after Per Martin-Löf, to distinguish it from other similar notions of randomness. It is also sometimes called 1-randomness to distinguish it from stronger notions of randomness (2-randomness, 3-randomness, etc.).
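In symbols, an infinite binary sequence X is 1-random (Martin-Löf random) if and only if there is a constant c such that, for every n,

    K(X \upharpoonright n) \ge n - c

where K denotes prefix-free Kolmogorov complexity and X \upharpoonright n is the initial segment consisting of the first n bits of X. The machine-independence mentioned above follows from the invariance theorem: for any two universal prefix-free machines U and V there is a constant c_{U,V} with |K_U(x) − K_V(x)| \le c_{U,V} for all strings x.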
(Related definitions can be made for alphabets other than the set {0,1}.)
See also
- Kolmogorov complexity
- Algorithmically random sequence
- Algorithmic probability
- Chaitin's constant
- Chaitin–Kolmogorov randomness
- Computationally indistinguishable
- Distribution ensemble
- Epistemology
- Invariance theorem
- Limits to knowledge
- Minimum description length
- Minimum message length
- Pseudorandom ensemble
- Pseudorandom generator
- Uniform ensemble