Turing completeness

In computability theory, a system of data-manipulation rules (such as a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing complete or computationally universal if it can be used to simulate any single-tape Turing machine. The concept is named after English mathematician and computer scientist Alan Turing. A classic example is lambda calculus.

A closely related concept is that of Turing equivalence: two computers P and Q are called equivalent if P can simulate Q and Q can simulate P. The Church–Turing thesis conjectures that any function whose values can be computed by an algorithm can be computed by a Turing machine, and therefore that if any real-world computer can simulate a Turing machine, it is Turing equivalent to a Turing machine. A universal Turing machine can be used to simulate any Turing machine and by extension the computational aspects of any possible real-world computer.[NB 1]

To show that something is Turing complete, it is enough to show that it can be used to simulate some Turing complete system. For example, an imperative language is Turing complete if it has conditional branching (e.g., "if" and "goto" statements, or a "branch if zero" instruction; see one instruction set computer) and the ability to change an arbitrary amount of memory (e.g., the ability to maintain an arbitrary number of variables). Since this is almost always the case, most if not all imperative languages are Turing complete if the limitations of finite memory are ignored.
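
As a rough illustration of this criterion, the following Python sketch implements a toy counter machine whose only instructions are increment, decrement, "branch if zero", and halt, acting on an unbounded supply of registers. Counter (Minsky) machines of this kind are known to be Turing complete; the particular machine shown, which adds two registers, is only a hypothetical example for illustration.

    from collections import defaultdict

    def run(program, registers=None):
        # program: list of instructions, each one of
        #   ('inc', r)     increment register r
        #   ('dec', r)     decrement register r (never below zero)
        #   ('jz', r, t)   jump to instruction t if register r is zero
        #   ('halt',)      stop
        regs = defaultdict(int, registers or {})   # arbitrarily many registers, all starting at 0
        pc = 0                                     # program counter
        while pc < len(program):
            op = program[pc]
            if op[0] == 'inc':
                regs[op[1]] += 1
            elif op[0] == 'dec':
                regs[op[1]] = max(0, regs[op[1]] - 1)
            elif op[0] == 'jz':
                if regs[op[1]] == 0:
                    pc = op[2]
                    continue
            elif op[0] == 'halt':
                break
            pc += 1
        return dict(regs)

    # Example: add register 'a' into register 'b'; register 'z' is never touched,
    # so "jump if z is zero" acts as an unconditional jump back to the start.
    add = [('jz', 'a', 4), ('dec', 'a'), ('inc', 'b'), ('jz', 'z', 0), ('halt',)]
    print(run(add, {'a': 3, 'b': 4}))   # {'a': 0, 'b': 7, 'z': 0}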

Non-mathematical usage

In colloquial usage, the terms "Turing complete" or "Turing equivalent" are used to mean that any real-world general-purpose computer or computer language can approximately simulate the computational aspects of any other real-world general-purpose computer or computer language.

Real computers constructed so far are essentially similar to a single-tape Turing machine; thus the associated mathematics can apply by abstracting their operation far enough. However, real computers have limited physical resources, so they are only linear bounded automaton complete. In contrast, a universal computer is defined as a device with a Turing complete instruction set, infinite memory, and infinite available time.

Formal definitions

In computability theory, several closely related terms are used to describe the computational power of a computational system (such as an abstract machine or programming language):

Turing completeness
A computational system that can compute every Turing-computable function is called Turing-complete (or Turing-powerful). Alternatively, such a system is one that can simulate a universal Turing machine.
Turing equivalence
A Turing-complete system is called Turing equivalent if every function it can compute is also Turing computable; i.e., it computes precisely the same class of functions as do Turing machines. Alternatively, a Turing-equivalent system is one that can simulate, and be simulated by, a universal Turing machine. (All known Turing-complete systems are Turing equivalent, which adds support to the Church–Turing thesis.)
(Computational) universality
A system is called universal with respect to a class of systems if it can compute every function computable by systems in that class (or can simulate each of those systems). Typically, the term universality is tacitly used with respect to a Turing-complete class of systems. The term "weakly universal" is sometimes used to distinguish a system (e.g. a cellular automaton) whose universality is achieved only by modifying the standard definition of Turing machine so as to include input streams with infinitely many 1s.
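
To make the notion of simulation concrete, the following Python sketch implements a simple single-tape Turing machine simulator (with a practical step limit, since an ideal machine is unbounded in time and tape). The bit-flipping machine it runs is merely an illustrative example.

    def simulate(transitions, tape, state='start', blank='_', max_steps=10000):
        cells = dict(enumerate(tape))   # sparse tape, unbounded in both directions
        head = 0
        for _ in range(max_steps):      # practical cut-off; an ideal machine has no limit
            symbol = cells.get(head, blank)
            if (state, symbol) not in transitions:
                break                   # no applicable rule: the machine halts
            write, move, state = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == 'R' else -1
        return ''.join(cells[i] for i in sorted(cells))

    # Transition table: (state, read symbol) -> (symbol to write, head move, next state).
    # This illustrative machine flips every bit of its input and then halts.
    flip = {('start', '0'): ('1', 'R', 'start'),
            ('start', '1'): ('0', 'R', 'start')}
    print(simulate(flip, '10110'))      # prints 01001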

History

Turing completeness is significant in that every real-world design for a computing device can be simulated by a universal Turing machine. The Church–Turing thesis states that this is a law of mathematics: that a universal Turing machine can, in principle, perform any calculation that any other programmable computer can. This says nothing about the effort needed to write the program, or the time it may take for the machine to perform the calculation, or any abilities the machine may possess that have nothing to do with computation.

Charles Babbage's analytical engine (1830s) would have been the first Turing-complete machine if it had been built at the time it was designed. Babbage appreciated that the machine was capable of great feats of calculation, including primitive logical reasoning, but he did not appreciate that no other machine could do better. From the 1830s until the 1940s, mechanical calculating machines such as adders and multipliers were built and improved, but they could not perform a conditional branch and therefore were not Turing complete.

In the late 19th century, Leopold Kronecker formulated notions of computability, defining primitive recursive functions. These functions can be calculated by rote computation, but they are not enough to make a universal computer, because the instructions which compute them do not allow for an infinite loop. In the early 20th century, David Hilbert led a program to axiomatize all of mathematics with precise axioms and precise logical rules of deduction which could be performed by a machine. Soon, it became clear that a small set of deduction rules is enough to produce the consequences of any set of axioms. These rules were proved by Kurt Gödel in 1930 to be enough to produce every theorem.

The actual notion of computation was isolated soon after, starting with Gödel's incompleteness theorem. This theorem showed that axiom systems were limited when reasoning about the computation which deduces their theorems. Church and Turing independently demonstrated that Hilbert's Entscheidungsproblem (decision problem) was unsolvable,[1] thus identifying the computational core of the incompleteness theorem. This work, along with Gödel's work on general recursive functions, established that there are sets of simple instructions, which, when put together, are able to produce any computation. The work of Gödel showed that the notion of computation is essentially unique.

Computability theory

The first result of computability theory is that it is impossible in general to predict what a Turing-complete program will do over an arbitrarily long time. For example, it is impossible to determine for every program-input pair whether the program, operating on the input, will eventually stop or will continue forever (see halting problem). Likewise, it is impossible to determine in general whether a program will return "true" or whether it will return "false"; indeed, for any non-trivial characteristic of a program's eventual output, it is impossible to decide in all cases whether that characteristic will hold (Rice's theorem). This can cause problems in practice when analyzing real-world computer programs. One way to avoid this is to cause programs to stop executing after a fixed period of time (timeout), or to limit the power of flow-control instructions. Such systems are not Turing complete by design.
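
The argument behind the halting problem can be sketched in a few lines of Python. The function halts below is a hypothetical decider assumed only for the sake of the contradiction; no correct implementation of it can exist.

    def halts(f, x):
        """Hypothetical oracle: returns True iff f(x) eventually halts.
        No total, always-correct implementation can exist; this stub is only
        here to state the argument."""
        raise NotImplementedError

    def paradox(f):
        if halts(f, f):        # if f(f) would halt ...
            while True:        # ... then loop forever,
                pass
        return "done"          # otherwise halt immediately.

    # paradox(paradox) halts exactly when halts(paradox, paradox) says it does not,
    # a contradiction either way, so halts cannot exist.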

Another theorem shows that there are problems solvable by Turing-complete languages that cannot be solved by any language with only finite looping abilities (i.e., any language that guarantees that every program eventually halts). Given such a guaranteed-halting language, the total computable function produced by applying Cantor's diagonal argument to all functions computable in that language is computable, but not computable in that language.
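
A sketch of this diagonal argument, assuming hypothetical helpers nth_program and evaluate that enumerate and run the programs of such a guaranteed-halting language (both total by assumption):

    def nth_program(i):
        """Hypothetical: returns the i-th program of the guaranteed-halting language."""
        raise NotImplementedError

    def evaluate(program, n):
        """Hypothetical interpreter for that language; by assumption it always halts."""
        raise NotImplementedError

    def diagonal(n):
        # Total and intuitively computable, yet it differs from the n-th program
        # of the language on input n, so it cannot itself be written in that language.
        return evaluate(nth_program(n), n) + 1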

Turing oracles

A computer with access to an infinite tape of data may be more powerful than a Turing machine: for instance, the tape might contain the solution to the halting problem, or some other Turing-undecidable problem. Such an infinite tape of data is called a Turing oracle. Even a Turing oracle with random data is not computable (with probability 1), since there are only countably many computations but uncountably many oracles. So a computer with a random Turing oracle can compute things that a Turing machine cannot.

Digital physics

All known laws of physics have consequences that are computable by a series of approximations on a digital computer. A hypothesis called digital physics states that this is no accident, because the universe itself is computable on a universal Turing machine. This would imply that no computer more powerful than a universal Turing machine can be built physically (see Church–Turing thesis – Philosophical implications).

Examples

The computational systems (algebras, calculi) that are discussed as Turing-complete systems are those studied in theoretical computer science. They are intended to be as simple as possible, so that the limits of computation are easier to understand. Canonical examples include the Turing machine itself, the lambda calculus, combinatory logic, μ-recursive functions, and abstract register machines such as counter machines.

Most programming languages, conventional and unconventional, are Turing-complete. This includes general-purpose procedural, object-oriented, functional, and logic languages such as C, Java, Haskell, and Prolog, declarative languages such as XSLT,[2] and many esoteric programming languages.

Rewrite systems are also Turing-complete.

Turing completeness is an abstract statement of ability, rather than a prescription of specific language features used to implement that ability. The features used to achieve Turing completeness can be quite different; Fortran systems would use loop constructs or possibly even goto statements to achieve repetition; Haskell and Prolog, lacking looping almost entirely, would use recursion. Most programming languages describe computations on von Neumann architectures, which have memory (RAM and registers) and a control unit. These two elements make this architecture Turing-complete. Even pure functional languages are Turing-complete.[3]
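
As a small illustration of this point, the same repetition can be expressed either with an explicit loop or with recursion; here is a Python sketch of both:

    # The same computation written with a loop (as in imperative languages such as
    # Fortran or C) and with recursion (as in Haskell or Prolog). Either mechanism
    # is enough to provide the repetition needed for Turing completeness.
    def factorial_loop(n):
        result = 1
        while n > 1:          # explicit loop construct
            result *= n
            n -= 1
        return result

    def factorial_rec(n):
        return 1 if n <= 1 else n * factorial_rec(n - 1)   # repetition via recursion

    assert factorial_loop(10) == factorial_rec(10) == 3628800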

Turing completeness in declarative SQL is implemented through recursive common table expressions.[4] Unsurprisingly, procedural extensions to SQL (PL/SQL, etc.) are also Turing complete. This illustrates one reason why relatively powerful non-Turing-complete languages are rare: the more powerful the language is initially, the more complex are the tasks to which it is applied, and the sooner its lack of completeness comes to be perceived as a drawback, encouraging its extension until it is Turing complete.
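
As a brief illustration, a recursive common table expression can drive an iterative computation entirely within declarative SQL. The sketch below runs one through SQLite (which supports WITH RECURSIVE) using Python's standard sqlite3 module; the factorial query is only an illustrative example.

    import sqlite3

    # A recursive CTE that iterates a computation (factorials up to 10) entirely
    # in declarative SQL, using SQLite's WITH RECURSIVE support.
    conn = sqlite3.connect(":memory:")
    query = """
    WITH RECURSIVE fact(n, f) AS (
        SELECT 1, 1
        UNION ALL
        SELECT n + 1, f * (n + 1) FROM fact WHERE n < 10
    )
    SELECT n, f FROM fact;
    """
    for n, f in conn.execute(query):
        print(n, f)      # 1 1, 2 2, 3 6, ..., 10 3628800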

The untyped lambda calculus is Turing-complete, but many typed lambda calculi, including System F, are not. The value of typed systems lies in their ability to represent most typical computer programs while detecting more errors.

Rule 110 and Conway's Game of Life, both cellular automata, are Turing complete.
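
Rule 110 is simple enough that a simulator fits in a few lines. The Python sketch below runs it on a finite ring of cells, which is only a practical approximation, since universality formally requires an unbounded configuration.

    # Simulation of the elementary cellular automaton Rule 110 on a finite ring.
    RULE = 110

    def step(cells):
        n = len(cells)
        out = []
        for i in range(n):
            left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
            pattern = (left << 2) | (centre << 1) | right     # 3-cell neighbourhood as 0..7
            out.append((RULE >> pattern) & 1)                 # look up the rule's output bit
        return out

    cells = [0] * 40 + [1]            # single live cell at the right edge
    for _ in range(20):
        print(''.join('#' if c else '.' for c in cells))
        cells = step(cells)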

Unintentional Turing completeness

Some games and other software are Turing-complete by accident.

Video games: Braid,[5] Dwarf Fortress,[6][7] and infinite versions of Minesweeper[8] have been shown to support Turing-complete computation.

Card games: Magic: The Gathering.[7]

Zero-person games (simulations): Conway's Game of Life, within which working Turing machines and universal computer-constructors have been built.[9][10]


Non-Turing-complete languages

Many computational languages exist that are not Turing complete. One such example is the set of regular languages, which are generated by regular expressions and recognized by finite automata. A more powerful but still not Turing-complete extension of finite automata is the class of pushdown automata and context-free grammars, which are commonly used to generate parse trees in the initial stages of program compilation. Further examples include some of the early versions of the pixel shader languages embedded in Direct3D and OpenGL extensions.
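
For example, a deterministic finite automaton of the kind underlying regular expressions can be written down directly. The sketch below, a hypothetical two-state machine, accepts binary strings containing an even number of 1s; with only finitely many states, no such automaton can recognize, say, balanced parentheses, which requires a pushdown automaton (a stack).

    # Deterministic finite automaton: accepts binary strings with an even number of 1s.
    TRANSITIONS = {('even', '0'): 'even', ('even', '1'): 'odd',
                   ('odd',  '0'): 'odd',  ('odd',  '1'): 'even'}

    def accepts(s, start='even', accepting=('even',)):
        state = start
        for ch in s:
            state = TRANSITIONS[(state, ch)]   # one table lookup per input symbol
        return state in accepting

    assert accepts('0110') and not accepts('0111')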

In total functional programming languages, such as Charity and Epigram, all functions are total and must terminate. Charity uses a type system and control constructs based on category theory, whereas Epigram uses dependent types. The LOOP language is designed so that it computes only the functions that are primitive recursive. All of these compute proper subsets of the total computable functions, since the full set of total computable functions is not computably enumerable. Also, since all functions in these languages are total, algorithms for recursively enumerable sets cannot be written in these languages, in contrast with Turing machines.
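
The idea behind this restriction can be sketched in Python by allowing repetition only through a loop whose iteration count is fixed before it begins, mirroring the LOOP language's restriction; every such program terminates, and the functions expressible this way are the primitive recursive functions.

    # The only repetition allowed is "repeat this block exactly r times", where r
    # is read before the loop starts and cannot be changed inside it.
    def loop(times, body, state):
        for _ in range(times):     # bound fixed in advance: no while, no goto
            state = body(state)
        return state

    # Addition and multiplication built only from bounded loops and increment.
    def add(x, y):
        return loop(y, lambda acc: acc + 1, x)          # increment x, y times

    def mul(x, y):
        return loop(y, lambda acc: add(acc, x), 0)      # add x to 0, y times

    assert add(3, 4) == 7 and mul(3, 4) == 12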

Although (untyped) lambda calculus is Turing-complete, simply typed lambda calculus is not.

Data languages

The notion of Turing-completeness does not apply to languages such as XML, HTML, JSON, YAML and S-expressions, because they are typically used to represent structured data, not to describe computation. These are sometimes referred to as markup languages, or more properly as "container languages" or "data description languages". However, Rule 110, a Turing-complete cellular automaton, has been successfully implemented in CSS 3, thus demonstrating, to some extent, the Turing completeness of CSS.[11]

Notes

  1. A UTM cannot simulate non-computational aspects such as I/O.

References

  1. Hodges, Andrew (1992) [1983], Alan Turing: The Enigma, London: Burnett Books, p. 111, ISBN 0-04-510060-8
  2. "Universal Turing Machine in XSLT". unidex.com. Retrieved 2010-07-05.
  3. http://www.cs.utexas.edu/users/boyer/ftp/ics-reports/cmp37.ps (The book Parallel Programming: For Multicore and Cluster Systems by Thomas Rauber and Gudula Rünger, 2010, provides an introduction to computation models.)
  4. "Cyclic Tag System". PostgreSQL.org. Retrieved 2014-09-10.
  5. Hamilton, Linus (3 December 2014). "Braid is undecidable". Cornell University. Retrieved 14 March 2017.
  6. Cedotal, Andrew (16 April 2010). "Man Uses World’s Most Difficult Computer Game to Create … A Working Turing Machine". Retrieved 2 June 2015.
  7. Zwinkau, Andreas (20 October 2013). "Accidentally Turing-Complete". Retrieved 2 June 2015.
  8. Kaye, Richard (31 May 2007). "Infinite versions of minesweeper are Turing complete" (PDF). The University of Birmingham. Retrieved 14 March 2017.
  9. Paul Rendell (January 12, 2005). "A Turing Machine in Conway's Game of Life". Retrieved 22 June 2009.
  10. Adam P. Goucher. "Spartan universal computer-constructor". LifeWiki. Retrieved 22 June 2009.
  11. "Is CSS Turing complete?". Stack Overflow. https://stackoverflow.com/questions/2497146/is-css-turing-complete
