Theory of computation
The theory of computation is the branch of computer science that deals with whether and how efficiently problems can be solved on a computer. The field is divided into two major branches, computability theory and complexity theory, both of which deal with formal models of computation.
In order to perform a rigorous study of computation, computer scientists work with mathematical abstractions of computers called models of computation. There are several formulations in use, but the most commonly examined is the Turing machine. A Turing machine can be thought of as a desktop PC with an infinite memory capacity, though it can only access this memory in small discrete chunks. Computer scientists study the Turing machine because it is simple to formulate, can be analyzed and used to prove results, and because it represents what many consider the most powerful possible "reasonable" model of computation. While the infinite memory capacity might be considered an unphysical attribute, for any problem actually solved by a Turing machine the memory used will always be finite, so any problem that can be solved on a Turing machine could be solved on a desktop PC which has enough memory installed.
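To make the model concrete, here is a minimal simulator sketch in Python (all names are illustrative): a dictionary of transitions plays the role of the machine's finite control, and an ordinary Python dictionary stands in for the unbounded tape.

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.
    transitions: (state, symbol) -> (new_state, written_symbol, head_move),
    where head_move is -1 (left), +1 (right) or 0 (stay)."""
    cells = {i: s for i, s in enumerate(tape)}  # sparse, effectively infinite tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A toy machine that overwrites every 0 with 1, halting at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_tm(flip, "0010"))  # 1111
```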
Computability theory
Computability theory deals primarily with the question of whether a problem is solvable at all on a computer. The undecidability of the halting problem is one of the most important results in computability theory: it is an example of a concrete problem that is both easy to formulate and impossible to solve using a Turing machine. Much of computability theory builds on the halting problem result.
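The core of the proof can be stated as a short program sketch. The fragment below is an argument, not runnable code: `halts` is the assumed, and in fact impossible, decision procedure.

```python
# Suppose, for contradiction, that halts(prog, arg) always correctly
# reported whether prog(arg) eventually halts.
def paradox(prog):
    if halts(prog, prog):    # hypothetical oracle; cannot actually be written
        while True:          # ...then run forever
            pass
    # ...otherwise halt immediately.

# paradox(paradox) halts exactly when halts() says it does not:
# a contradiction, so no such halts() can exist.
```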
Computability theory is closely related to the branch of mathematical logic called recursion theory, which removes the restriction of studying only models of computation which are close to physically realizable. Many mathematicians and computational theorists who study recursion theory refer to it as computability theory. There is no real difference between the fields other than whether a researcher working in this area has his or her office in the computer science or mathematics department.
Complexity theory
Complexity theory considers not only whether a problem can be solved at all on a computer, but also how efficiently it can be solved. Two major aspects are considered: time complexity and space complexity, which measure, respectively, how many steps a computation takes and how much memory it requires.
In order to analyze how much time and space a given algorithm requires, computer scientists express the time or space required to solve the problem as a function of the size of the input problem. For example, finding a particular number in a long list of numbers becomes harder as the list of numbers grows larger. If we say there are n numbers in the list, then if the list is not sorted or indexed in any way we may have to look at every number in order to find the number we're seeking. We thus say that in order to solve this problem, the computer needs to perform a number of steps that grows linearly in the size of the problem.
To make such comparisons machine-independent, computer scientists have adopted Big O notation, which compares functions by their asymptotic behavior as problems become large, so that particular aspects of a machine's construction do not need to be considered. In our previous example we would say that the problem requires O(n) steps to solve.
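A straightforward Python version of the search just described makes the O(n) bound visible: in the worst case the loop inspects every one of the n elements.

```python
def linear_search(numbers, target):
    """Return the index of target in an unsorted list, or -1 if absent."""
    for i, x in enumerate(numbers):    # at most n iterations: O(n) steps
        if x == target:
            return i
    return -1

print(linear_search([42, 7, 19, 3], 19))  # 2
```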
Perhaps the most important open problem in all of computer science is the question of whether a certain broad class of problems denoted NP can be solved efficiently. This is discussed further at Complexity classes P and NP.
Other formal definitions of computation
Aside from the Turing machine, other equivalent models of computation are in use (see the Church-Turing thesis).
- lambda calculus
- A computation is an initial lambda expression (or two, if you want to separate the function and its input) plus a finite sequence of lambda terms, each deduced from the preceding term by one application of beta reduction (see the reduction sketch after this list).
- Combinatory logic
- is a concept which has many similarities to λ-calculus, but important differences also exist (e.g. the fixed point combinator Y has a normal form in combinatory logic but not in λ-calculus). Combinatory logic was developed with great ambitions: understanding the nature of paradoxes, making the foundations of mathematics conceptually more economical, and eliminating the notion of variables, thus clarifying their role in mathematics (see the S/K sketch after this list).
- mu-recursive functions
- a computation consists of a mu-recursive function, i.e. its defining sequence, any input value(s), and a sequence of recursive functions appearing in the defining sequence, with inputs and outputs. Thus, if the functions g(x) and h(x,y) appear in the defining sequence of a recursive function f(x), then terms of the form 'g(5)=7' or 'h(3,2)=10' might appear. Each entry in this sequence needs to be an application of a basic function or to follow from the entries above by using composition, primitive recursion or mu recursion. For instance, if f(x) = h(x,g(x)), then for 'f(5)=3' to appear, terms like 'g(5)=6' and 'h(5,6)=3' must occur above. The computation terminates only if the final term gives the value of the recursive function applied to the inputs (a sketch of the mu operator appears after this list).
- Markov algorithm
- a string rewriting system that uses grammar-like rules to operate on strings of symbols (see the interpreter sketch after this list).
- Register machine
- is a theoretically interesting idealization of a computer, with several variants. In most of them, each register can hold a natural number (of unlimited size), and the instructions are simple and few in number: for example, only decrementation (combined with conditional jump), incrementation, and halting. The lack of the infinite (or dynamically growing) external store seen in Turing machines can be compensated for by Gödel numbering techniques: because each register holds a natural number, a complicated object (e.g. a sequence, or a matrix) can be represented by an appropriately huge natural number, and the number-theoretic foundations of these techniques guarantee that both representation and interpretation are unambiguous (see the simulator sketch after this list).
- P′′
- Like Turing machines, P′′ uses an infinite tape of symbols (without random access) and a rather minimalistic set of instructions. But these instructions are very different; thus, unlike Turing machines, P′′ does not need to maintain a distinct state, because all "memory-like" functionality can be provided by the tape alone. Instead of rewriting the current symbol, it can perform a modular arithmetic incrementation on it. P′′ also has a pair of instructions for a loop, inspecting the blank symbol. Despite its minimalistic nature, it became the parental formal language of Brainfuck, an implemented programming language used mainly for entertainment (see the interpreter sketch after this list).
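The reduction sequences described under lambda calculus above can be made concrete. The following is a minimal sketch in Python (the names and tuple encoding are illustrative, not standard): terms are nested tuples, and substitution is deliberately naive, ignoring variable capture, which is safe for the closed example shown.

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", function, argument)

def subst(term, name, value):
    """Replace free occurrences of `name` in `term` by `value`.
    Naive: assumes no variable capture can occur (true for the demo below)."""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        if term[1] == name:          # bound variable shadows `name`
            return term
        return ("lam", term[1], subst(term[2], name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def step(term):
    """One leftmost (normal-order) beta-reduction step, or None if in normal form."""
    if term[0] == "app":
        f, a = term[1], term[2]
        if f[0] == "lam":
            return subst(f[2], f[1], a)   # the beta rule: (λx. M) N -> M[x := N]
        s = step(f)
        if s is not None:
            return ("app", s, a)
        s = step(a)
        if s is not None:
            return ("app", f, s)
    elif term[0] == "lam":
        s = step(term[2])
        if s is not None:
            return ("lam", term[1], s)
    return None

# (λx. λy. x) a b  reduces to  a  in two steps; print the whole sequence.
t = ("app",
     ("app", ("lam", "x", ("lam", "y", ("var", "x"))), ("var", "a")),
     ("var", "b"))
while t is not None:
    print(t)
    t = step(t)
```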
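Combinatory logic's elimination of variables can be glimpsed directly in Python, where the two classic combinators suffice to build the identity function:

```python
# K discards its second argument; S distributes an argument to two functions.
K = lambda x: lambda y: x
S = lambda f: lambda g: lambda x: f(x)(g(x))

# S K K behaves as the identity combinator I; no variable is ever mentioned
# in the combinator expression itself.
I = S(K)(K)
print(I(42))  # 42
```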
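The mu operator, the only source of partiality in the mu-recursive characterization, is simply an unbounded search; a minimal sketch:

```python
def mu(p):
    """Unbounded minimization: the least n >= 0 with p(n) == 0.
    Loops forever if no such n exists, which is exactly why
    mu-recursive functions can be partial."""
    n = 0
    while p(n) != 0:
        n += 1
    return n

def isqrt(x):
    # Integer square root as a mu-search: the least n with (n + 1)**2 > x.
    return mu(lambda n: 0 if (n + 1) ** 2 > x else 1)

print(isqrt(10))  # 3
```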
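A Markov algorithm interpreter also fits in a few lines. In this sketch (the rule format is illustrative), the first rule whose left-hand side occurs in the string is applied at its leftmost occurrence, repeatedly, until no rule applies or a terminating rule fires:

```python
def markov(rules, s, max_steps=10_000):
    """rules: ordered list of (lhs, rhs, is_terminating) triples."""
    for _ in range(max_steps):
        for lhs, rhs, terminating in rules:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)   # leftmost occurrence only
                if terminating:
                    return s
                break
        else:                                # no rule applied: halt
            return s
    raise RuntimeError("step limit exceeded")

# Unary addition: erasing the "+" turns "11+111" into "11111".
print(markov([("+", "", False)], "11+111"))  # 11111
```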
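A register machine with only incrementation, decrementation-with-conditional-jump, and halting can be simulated directly. The instruction encoding below is illustrative:

```python
def run_register_machine(program, registers):
    """program: list of instructions, addressed by position.
    ("inc", r, nxt): increment register r, continue at nxt.
    ("decjz", r, nxt, on_zero): if register r is zero jump to on_zero,
                                else decrement it and continue at nxt.
    ("halt",): stop."""
    pc = 0
    while program[pc][0] != "halt":
        op = program[pc]
        if op[0] == "inc":
            registers[op[1]] += 1
            pc = op[2]
        else:                         # "decjz"
            if registers[op[1]] == 0:
                pc = op[3]
            else:
                registers[op[1]] -= 1
                pc = op[2]
    return registers

# Addition: repeatedly move one unit from r1 to r0.
add = [
    ("decjz", 1, 1, 2),   # 0: if r1 == 0 goto 2, else r1 -= 1
    ("inc", 0, 0),        # 1: r0 += 1, loop back
    ("halt",),            # 2
]
print(run_register_machine(add, {0: 2, 1: 3}))  # {0: 5, 1: 0}
```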
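Finally, an interpreter for Brainfuck, the P′′ descendant mentioned above, is itself only a page of code. This sketch adopts the common (but not universal) convention of cells holding values modulo 256:

```python
def brainfuck(code):
    """Interpret Brainfuck commands on an unbounded tape of byte-sized
    cells (the input command ',' is omitted for brevity)."""
    jump, stack = {}, []
    for i, c in enumerate(code):          # pre-match the loop brackets
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jump[i], jump[j] = j, i
    tape, ptr, pc, out = {}, 0, 0, []
    while pc < len(code):
        c = code[pc]
        if c == ">":
            ptr += 1
        elif c == "<":
            ptr -= 1
        elif c == "+":
            tape[ptr] = (tape.get(ptr, 0) + 1) % 256  # modular incrementation
        elif c == "-":
            tape[ptr] = (tape.get(ptr, 0) - 1) % 256
        elif c == ".":
            out.append(chr(tape.get(ptr, 0)))
        elif c == "[" and tape.get(ptr, 0) == 0:
            pc = jump[pc]                 # skip the loop body
        elif c == "]" and tape.get(ptr, 0) != 0:
            pc = jump[pc]                 # repeat the loop body
        pc += 1
    return "".join(out)

print(brainfuck("++++++++[>++++++++<-]>+."))  # prints "A" (ASCII 65)
```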
In addition to the general computational models, some simpler computational models are useful for special, restricted applications. Regular expressions, for example, are used to specify string patterns in many contexts, from office productivity software to programming languages. Finite automata, a formalism mathematically equivalent to regular expressions, are used in circuit design and in some kinds of problem solving. Context-free grammars are used to specify programming language syntax; non-deterministic pushdown automata are another formalism equivalent to context-free grammars. Primitive recursive functions are a defined subclass of the recursive functions.
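For example, the regular expression (0*10*1)*0* and the two-state finite automaton sketched below (names illustrative) both describe the binary strings containing an even number of 1s:

```python
def accepts_even_ones(s):
    """Simulate a two-state DFA whose states track the parity of 1s seen."""
    state = "even"                     # start state, also the accepting state
    for ch in s:
        if ch == "1":
            state = "odd" if state == "even" else "even"
    return state == "even"

print(accepts_even_ones("1010"))  # True  (two 1s)
print(accepts_even_ones("10"))    # False (one 1)
```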
Different models of computation have the ability to do different tasks. One way to measure the power of a computational model is to study the class of formal languages that the model can generate; this leads to the Chomsky hierarchy of languages.
Further reading
- Michael Sipser (2006). Introduction to the Theory of Computation, 2nd ed. PWS Publishing. ISBN 0-534-94728-X. Part Two: Computability Theory, chapters 3–6, pp. 123–222.
- Hein, James L: Theory of Computation. Sudbury, MA: Jones & Bartlett, 1996. A gentle introduction to the field, appropriate for second-year undergraduate computer science students.
- Hopcroft, John E., and Jeffrey D. Ullman: Introduction to Automata Theory, Languages, and Computation, 2nd ed. Reading, MA: Addison-Wesley, 2001. One of the standard references in the field.
- Taylor, R. Gregory: Models of Computation. New York: Oxford University Press, 1998. An unusually readable textbook, appropriate for upper-level undergraduates or beginning graduate students.
- Hartley Rogers, Jr, Theory of Recursive Functions and Effective Computability, MIT Press, 1987, ISBN 0-262-68052-1 (paperback)
- Computability Logic: A theory of interactive computation. The main web source on this subject.
- [1] A nice (PDF) textbook by Forbes D. Lewis, covering the topics of formal languages, automata and grammars. The emphasis appears to be on presenting an overview of the results and their applications rather than providing proofs of the results.