Turing completeness

In computability theory, several closely related terms are used to describe the "computational power" of a computational system (such as an abstract machine or programming language):

Turing completeness
A computational system that can compute every Turing-computable function is called Turing-complete (or Turing-powerful). Alternatively, such a system is one that can simulate a universal Turing machine.
Turing equivalence
A Turing-complete system is called Turing-equivalent if every function it can compute is also Turing-computable; i.e., it computes precisely the same class of functions as do Turing machines. Alternatively, a Turing-equivalent system is one that can simulate, and be simulated by, a universal Turing machine. (All known Turing-complete systems are Turing-equivalent, which adds support to the Church-Turing thesis.)
(Computational) universality
A system is called universal with respect to a class of systems if it can compute every function computable by systems in that class (or can simulate each of those systems). Typically, universality is tacitly understood to be with respect to a Turing-complete class of systems. The term weakly universal is sometimes used to distinguish a system (e.g. a cellular automaton) whose universality is achieved only by modifying the standard definition of a Turing machine so as to include unbounded input.
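
As a purely illustrative sketch (not part of any of the definitions above), the following Python program interprets a transition table for a single-tape Turing machine; the table format, the helper name run_tm and the example machine (a unary incrementer) are choices made here for brevity rather than a standard formulation.

    # Minimal single-tape Turing machine interpreter (illustrative sketch only).
    # A machine is given as a transition table:
    #   (state, symbol) -> (symbol_to_write, head_move, next_state)

    def run_tm(transitions, tape, state="start", accept="halt", max_steps=10000):
        cells = dict(enumerate(tape))   # sparse tape; unwritten cells read as '_'
        head = 0
        for _ in range(max_steps):      # safety bound for the demo only
            if state == accept:
                break
            symbol = cells.get(head, "_")
            write, move, state = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Example machine: append one '1' to a unary number.
    increment = {
        ("start", "1"): ("1", "R", "start"),  # skip over the existing 1s
        ("start", "_"): ("1", "R", "halt"),   # write a 1 on the first blank, then stop
    }

    print(run_tm(increment, "111"))  # -> 1111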

Overview

Turing completeness, named after Alan Turing, is significant in that every plausible design for a computing device advanced so far can be emulated by a universal Turing machine, an observation that has become known as the Church-Turing thesis (a thesis that is not, and cannot be, mathematically proven). Thus, a machine that can act as a universal Turing machine can, in principle, perform any calculation that any other programmable computer is capable of. Note, however, that this says nothing about the effort required to write a program for the machine, the time it may take for the machine to perform the calculation, or any abilities unrelated to computation (such as communication or randomness) that the machine may possess.

While truly Turing-complete machines are very likely physically impossible as they require unlimited storage, Turing completeness is often loosely attributed to physical machines or programming languages that would be universal if they had unlimited storage. All modern computers are Turing-complete in this sense.

Charles Babbage's analytical engine (1830s) would have been the first Turing-complete machine if it had been built at the time it was designed, but the first actual implementation appeared in 1941: the program-controlled Z3 of Konrad Zuse. The universality of the Z3 was shown by Raúl Rojas in 1998. Prior to Rojas' 1998 paper, the first machine known to be Turing-complete was ENIAC (1946).

The gap from the 1830s to the 1940s was not a period of continuous development of mechanical computers. Rather, the first formal models of computation and formal programming languages of the 1930s provided mathematical demonstrations that a wide range of problems could be solved computationally, and these results of the 1930s and 1940s justified the investment in the Turing-complete machines built in the 1940s.

It has been hypothesized that the universe is computable on a universal Turing machine, which would imply that no computer more powerful than a universal Turing machine can be physically built (see the philosophical implications discussed under the Church-Turing thesis and digital physics).

See the article on computability theory for a long list of systems that are Turing-complete, as well as several systems that are less powerful, and several theoretical systems that are even more powerful than a universal Turing machine.

Related work

One important result from computability theory is that it is impossible in general to determine whether a program written in a Turing-complete language will continue executing forever or will stop within a finite period of time (see halting problem). One common way of getting around this is to cause programs to stop executing after a fixed period of time, or to limit the power of flow-control instructions. Such systems are, by design, not Turing-complete in the strict sense.
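
As a minimal sketch of the first workaround, the following Python fragment runs a computation under a fixed step budget (a step budget standing in here for a timeout); the name run_bounded and the step-function convention are hypothetical choices for this illustration. Because every run is cut off after a bounded number of steps, the resulting system always halts and is therefore not Turing-complete.

    # Sketch of bounding execution to force termination.  A "program" is modelled
    # as a step function returning ("continue", new_state) or ("done", result).

    def run_bounded(step, state, fuel=1000):
        for _ in range(fuel):
            status, state = step(state)
            if status == "done":
                return state
        raise TimeoutError("step budget exhausted; result unknown")

    # A computation that terminates on its own: count down to zero.
    countdown = lambda n: ("done", 0) if n == 0 else ("continue", n - 1)
    print(run_bounded(countdown, 5))    # -> 0

    # A computation that never finishes; the budget stops it regardless.
    forever = lambda n: ("continue", n + 1)
    # run_bounded(forever, 0)           # raises TimeoutError after 1000 steps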

Another notable theorem from computability theory is that there are problems solvable by Turing-complete languages that cannot be solved by any language with only finite looping capabilities (i.e., a language in which every program is guaranteed to halt). This result is derived, for example, by Brainerd and Landweber using the languages PL and PL-{GOTO}.
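
The distinction can be sketched in Python (purely as an analogy to PL-{GOTO} versus PL): in the first function below the number of iterations is fixed before the loop begins, so the function always halts, while the second uses unbounded iteration of the kind that bounded-loop languages rule out.

    # Bounded ("LOOP-style") iteration: the trip count is fixed in advance,
    # so termination is guaranteed -- the discipline of a GOTO-free PL program.
    def add_bounded(x, y):
        for _ in range(y):
            x += 1
        return x

    # Unbounded iteration: whether this halts for every n is the open Collatz
    # problem -- exactly the kind of program a bounded-loop language cannot express.
    def collatz_steps(n):
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    print(add_bounded(3, 4))   # -> 7
    print(collatz_steps(27))   # -> 111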

Examples

The computational systems (algebras, calculi) that are discussed as Turing-complete systems are those studied in theoretical computer science. They are intended to be as simple as possible, so that the limits of computation are easier to understand. A few of them are discussed in the remainder of this section.

Most programming languages, conventional and unconventional, are Turing-complete.

The specific language features used to achieve Turing completeness can be quite different: FORTRAN systems would use loop constructs or possibly even GOTO statements to achieve repetition, while Haskell and Prolog, which lack looping almost entirely, would use recursion. Turing completeness is an abstract statement of capability rather than a prescription of the specific language features used to implement that capability.
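
As a small illustration, the same repetition can be expressed either with an explicit loop or with recursion; the sketch below is written entirely in Python, with the recursive version standing in for the style a Haskell or Prolog program would use.

    # The same repetition expressed two ways: an explicit loop (the route a
    # FORTRAN program would take) and recursion (the route taken by languages
    # such as Haskell or Prolog, rendered here in Python for uniformity).

    def sum_to_loop(n):
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_to_rec(n):
        return 0 if n == 0 else n + sum_to_rec(n - 1)

    assert sum_to_loop(100) == sum_to_rec(100) == 5050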

It is difficult to find examples of non-Turing-complete languages, as such languages are usually very limited (see, however, machine that always halts). Examples include some of the early versions of the pixel shader languages embedded in Direct3D and OpenGL extensions, or a series of mathematical formulae in a spreadsheet with no cycles. While it is possible to perform many interesting operations in such a system, it fails to be Turing-complete because it is impossible to form loops. Another example is the class of regular expressions, which correspond to finite automata. A more powerful but still not Turing-complete extension of finite automata is the class of pushdown automata and context-free grammars, which are commonly used to generate parse trees in an initial stage of program compilation.
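
As an illustration of the gap between the two classes, the language of balanced parentheses cannot be described by a regular expression in the formal sense, but a pushdown-automaton-style recognizer handles it easily; in the Python sketch below a single counter plays the role of the stack.

    # Recognizing balanced parentheses: beyond (formal) regular expressions,
    # but easy for a pushdown automaton.  The counter stands in for the stack.

    def balanced(s):
        depth = 0
        for ch in s:
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth < 0:        # a ')' with nothing left open
                    return False
        return depth == 0

    print(balanced("(()(()))"))  # -> True
    print(balanced("(()"))       # -> False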

There are some languages in which all functions are total and therefore must terminate, such as Charity and Epigram. Charity uses a type system and control constructs based on category theory, whereas Epigram uses dependent types.

Turing-completeness in SQL is implemented through proprietary extensions, illustrating one of the reasons why relatively powerful non-Turing-complete languages are rare: the more powerful the language is initially, the more complex are the tasks to which it is applied and the sooner its lack of completeness becomes perceived as a drawback, encouraging its extension until it is Turing-complete.

The untyped lambda calculus is Turing-complete, but many typed lambda calculi, including System F, are not. The value of typed systems lies in their ability to represent most "typical" computer programs while detecting more errors.
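
The classic witness to this difference is the self-application term Ω = (λx. x x)(λx. x x), which reduces forever in the untyped lambda calculus but cannot be typed in strongly normalizing calculi such as System F. The Python rendering below is only an analogy, since Python's lambdas are untyped: the term is expressible and simply runs until the interpreter gives up.

    # The untyped lambda-calculus term Omega = (\x. x x)(\x. x x), written with
    # Python lambdas.  The untyped calculus accepts it and it never reaches a
    # result; strongly normalizing typed calculi such as System F reject it.

    omega = lambda x: x(x)

    # Evaluating omega(omega) diverges in principle; in practice Python cuts it
    # short with a RecursionError once the call stack overflows.
    try:
        omega(omega)
    except RecursionError:
        print("diverged (call stack exhausted)")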

Rule 110 and Conway's Game of Life, both cellular automata, are Turing-complete.
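
The proofs of universality for these automata build elaborate structures (gliders and signals) on top of very simple local rules; as a hedged illustration, the Python sketch below implements only the local update rule of Rule 110 on a ring of cells, with the ring size and starting pattern chosen arbitrarily.

    # One update step of the Rule 110 cellular automaton.  The binary expansion
    # of the rule number (110 = 0b01101110) gives the next value of a cell for
    # each of the eight possible (left, centre, right) neighbourhoods.

    RULE = 110

    def step(cells):
        n = len(cells)
        return [
            (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    # Evolve a single live cell on a ring of 41 cells for a few generations.
    row = [0] * 20 + [1] + [0] * 20
    for _ in range(8):
        print("".join(".#"[c] for c in row))
        row = step(row)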

References

  • Brainerd, W. S.; Landweber, L. H. (1974). Theory of Computation. Wiley.
  • Giles, Jim (October 24, 2007). "Simplest 'universal computer' wins student $25,000". New Scientist.
  • Herken, Rolf, ed. (1995). The Universal Turing Machine: A Half-Century Survey. Springer-Verlag.
