Algorithm characterizations
The word algorithm does not have a generally accepted definition. Researchers are actively working on formalizing this term. This article presents some of the "characterizations" of the notion of "algorithm" in more detail.
This article is a supplement to the article Algorithm.
The problem of definition
There is no generally accepted definition of algorithm. Over the last 200 years the definition has become more complicated and detailed as researchers have tried to pin down the term. Indeed, there may be more than one type of "algorithm". But most agree that algorithm has something to do with defining generalized processes for the creation of "output" integers from "input" integers -- "input parameters" arbitrary and infinite in extent, or limited in extent but still variable -- by the manipulation of distinguishable symbols (counting numbers) with finite collections of rules that a person can carry out with paper and pencil.
The most common number-manipulation schemes -- both in formal mathematics and in routine life -- are: (1) the recursive functions calculated by a person with paper and pencil, and (2) the Turing machine or its Turing equivalents -- the primitive register machine or "counter machine" model, the Random Access Machine model (RAM), the Random Access Stored Program machine model (RASP) and its functional equivalent "the computer".
The reader is probably unfamiliar with the notion of a "recursive function". But when we are doing "arithmetic" we are really calculating by the use of "recursive functions" in the shorthand algorithms we learned in grade-school, for example, adding and subtracting.
The proof that every "recursive function" we can calculate by hand can be computed by machine, and vice versa -- note the usage of the words calculate versus compute -- is remarkable. But this equivalence together with the thesis (hypothesis, unproven assertion) that this includes every calculation/computation indicates why so much emphasis has been placed upon the use of Turing-equivalent machines in the definition of specific algorithms, and why the definition of "algorithm" itself often refers back to "the Turing machine". This is discussed in more detail under Stephen Kleene's characterization.
For more about the developments that led up to, and steps in, the effort to better understand and define the notion of "algorithm", see History.
The following are summaries of the more famous characterizations (Kleene, Markov, Knuth) together with those that introduce novel elements -- elements that further expand the definition or contribute to a more precise definition.
Chomsky hierarchy
There is more consensus on the "characterization" of the notion of "simple algorithm".
All algorithms need to be specified in a formal language, and the "simplicity notion" arises from the simplicity of the language. The Chomsky (1956) hierarchy is a containment hierarchy of classes of formal grammars that generate formal languages. It is used for classifying programming languages and abstract machines.
From the Chomsky hierarchy perspective, if an algorithm can be specified in a simpler language (than the unrestricted one), it can be characterized by that kind of language; otherwise it is a typical "unrestricted algorithm".
Examples: a "general purpose" macro language like M4 is unrestricted (Turing complete), but the C preprocessor macro language is not, so any algorithm expressed in the C preprocessor is a "simple algorithm".
See also Relationships between complexity classes.
Characterizations of the notion of "algorithm"
Stephen Kleene's characterization
This section is longer and more detailed than the others because of its importance to the topic: Kleene was the first to propose that all calculations/computations -- of every sort, in their totality -- can equivalently be (i) calculated by use of five "primitive recursive operators" plus one special operator called the mu-operator, or (ii) computed by the actions of a Turing machine or an equivalent model.
Furthermore, he opined that either of these would stand as a definition of algorithm.
A reader first confronting the words that follow may well be confused, so a brief explanation is in order. Calculation means done by hand; computation means done by Turing machine (or equivalent). (Sometimes an author slips and interchanges the words.) A "function" can be thought of as an "input-output box" into which a person puts natural numbers called "arguments" or "parameters" (only the counting numbers, i.e. the non-negative integers including 0) and gets out a single natural number (conventionally called "the answer"). Think of the "function-box" as a little man either calculating by hand using "general recursion" or computing by Turing machine (or an equivalent machine).
"Effectively calculable/computable" is more generic and means "calculable/computable by some procedure, method, technique ... whatever...". "General recursive" was Kleene's way of writing what today is called just "recursion"; however, "primitive recursion" -- calculation by use of the five recursive operators -- is a lesser form of recursion that lacks access to the sixth, additional, mu-operator that is needed only in rare instances. Thus most of life goes on requiring only the "primitive recursive functions."
Church's Thesis
In 1943 Kleene proposed what has come to be known as Church's thesis:
- "Thesis I. Every effectively calculable function (effectively decidable predicate) is general recursive" (First stated by Kleene in 1943 (cf page 274 in The Undecidable, appears also verbatim in Kleene (1952) p.300)
In a nutshell: to calculate any calculable function, the only operations a person needs (technically, formally) are the six operators of "general" recursion (nowadays called the operators of the mu recursive functions).
Kleene's first statement of this was under the section title "12. Algorithmic theories". He would later amplify it in his text (1952) as follows:
- "Thesis I and its converse provide the exact definition of the notion of a calculation (decision) procedure or algorithm, for the case of a function (predicate) of natural numbers" (p. 301, boldface added for emphasis)
(His use of the word "decision" and "predicate" extends the notion of calculability to the more general manipulation of symbols such as occurs in mathematical "proofs".)
This is not as daunting as it may sound -- "general" recursion is just a way of making our everyday arithmetic operations from the five "operators" of the primitive recursive functions together with the additional mu-operator as needed. Indeed, Kleene gives 13 examples of primitive recursive functions, and Boolos-Burgess-Jeffrey add some more, most of which will be familiar to the reader -- e.g. addition, subtraction, multiplication and division, exponentiation, the CASE function, concatenation, etc.; for a list see Some common primitive recursive functions.
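For instance, addition and multiplication can be built up from zero and successor alone. The following is a minimal Python sketch (ours, not Kleene's formal schemata) of how the primitive-recursion equations play out:

```python
# Illustrative sketch only: Kleene's operators act on function schemata,
# not on Python functions.

def successor(n: int) -> int:      # the successor function S(n) = n + 1
    return n + 1

def add(m: int, n: int) -> int:
    # add(m, 0) = m;  add(m, n+1) = successor(add(m, n))
    return m if n == 0 else successor(add(m, n - 1))

def multiply(m: int, n: int) -> int:
    # multiply(m, 0) = 0;  multiply(m, n+1) = add(multiply(m, n), m)
    return 0 if n == 0 else add(multiply(m, n - 1), m)

assert add(3, 2) == 5 and multiply(3, 2) == 6
```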
Why general-recursive functions rather than primitive-recursive functions?
Kleene et al. had to add a sixth recursion operator, the minimization-operator (written as μ-operator or mu-operator), because Ackermann (1925) produced a hugely-growing function -- the Ackermann function -- and Rózsa Péter (1935) produced a general method of creating recursive functions using Cantor's diagonal argument, neither of which could be described by the five primitive-recursive-function operators. With respect to the Ackermann function:
- "...in a certain sense, the length of the computation [sic] algorithm of a recursive function which is not also primitive recursive grows faster with the arguments than the value of any primitive recursive function" (Kleene (1935) p. 246 in The Undecidable, plus footnote 13 with regards to the need for an additional operator, boldface added).
But the need for the mu-operator is a rarity. As indicated above by Kleene's list of common calculations, a person goes about their life happily computing primitive recursive functions without fear of encountering the monster numbers created by Ackermann's function (e.g. super-exponentiation).
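The explosive growth is easy to exhibit. Below is a hedged Python sketch of the common two-argument (Ackermann-Péter) form of the function -- a simplification of Ackermann's original three-argument 1925 definition, shown only to illustrate why no primitive recursive function can keep up with it:

```python
def ackermann(m: int, n: int) -> int:
    """Two-argument (Ackermann-Peter) form: total and recursive,
    but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Growth is modest at first, then catastrophic:
assert ackermann(1, 2) == 4    # addition-like growth
assert ackermann(2, 2) == 7    # multiplication-like growth
assert ackermann(3, 3) == 61   # exponentiation-like growth
# ackermann(4, 2) already has 19,729 decimal digits.
```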
Turing's thesis
Turing's Thesis hypothesizes the computability of "all computable functions" by the Turing machine model and its equivalents.
To do this in an effective manner, Kleene extended the notion of "computable" by casting the net wider -- by allowing into the notion of "functions" both "total functions" and "partial functions". A total function is one that is defined for all natural numbers (the non-negative integers, i.e. the positive integers and 0). A partial function is defined for some natural numbers but not all -- the specification of "some" has to come "up front". Thus the inclusion of "partial function" extends the notion of function to "less-perfect" functions. Total and partial functions may be either calculated by hand or computed by machine.
- Examples (sketched in code below):
- "Functions": include "common subtraction m - n" and "addition m + n"
- "Partial function": "common subtraction" m - n is undefined when only natural numbers (positive integers and zero) are allowed as input -- e.g. 6 - 7 is undefined
- "Total function": "addition" m + n is defined for all positive integers and zero
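A hedged Python sketch of the distinction (the function names are ours):

```python
def addition(m: int, n: int) -> int:
    """Total: defined for every pair of natural numbers."""
    return m + n

def common_subtraction(m: int, n: int) -> int:
    """Partial over the natural numbers: no value when m < n."""
    if m < n:
        raise ValueError(f"{m} - {n} is undefined over the naturals")
    return m - n

print(addition(6, 7))             # always yields an answer: 13
try:
    common_subtraction(6, 7)      # assigns no value to the pair (6, 7)
except ValueError as err:
    print(err)
```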
We now observe Kleene's definition of "computable" in a formal sense:
- Definition: "A partial function φ is computable, if there is a machine M which computes it" (Kleene (1952) p. 360)
- "Definition 2.5. An n-ary function f(x1,... xn) is partially computable if there esists a Turing machine Z such that
- f(x1,... xn) = ΨZ(n)(x1,... xn)
- In this case we say that [machine] Z computes f. If, in addition, f(x1,... xn) is a total function, then it is called computable" (Davis (1958) p. 10)
Thus we have arrived at Turing's Thesis:
- "Every function which would naturally be regarded as computable is computable ... by one of his machines..." (Kleene (1952) p.376)
Although Kleene did not give examples of "computable functions" others have. For example, Davis (1958) gives Turing tables for the Constant, Successor and Identity functions, three of the five operators of the primitive recursive functions:
- Computable by Turing machine:
- Addition (also is the Constant function if one operand is 0)
- Increment (Successor function)
- Common subtraction (defined only if x ≥ y). Thus "x - y" is an example of a partially computable function.
- Proper subtraction x ∸ y (as defined above)
- The identity function: for each i, a function Uᵢⁿ(x₁, ..., xₙ) = xᵢ exists that plucks xᵢ out of the set of arguments (x₁, ..., xₙ)
- Multiplication
Boolos-Burgess-Jeffrey (2002) give the following as prose descriptions of Turing machines for:
- Doubling: 2*p
- Parity
- Addition
- Multiplication
With regards to the counter machine, an abstract machine model equivalent to the Turing machine:
- Examples Computable by Abacus machine (cf Boolos-Burgess-Jeffrey (2002))
- Addition
- Multiplication
- Exponentiation (a flow-chart/block diagram description of the algorithm)
Demonstrations of computability by abacus machine (Boolos-Burgess-Jeffrey (2002)) and by counter machine (Minsky 1967):
- The six recursive function operators (see the sketch after this list):
- Zero function
- Successor function
- Identity function
- Composition function
- Primitive recursion (induction)
- Minimization
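A hedged Python sketch of the six operators (a functional approximation only; the formal operators combine equation schemata, not Python callables):

```python
from typing import Callable

Func = Callable[..., int]

def zero(*_args: int) -> int:                # 1. the zero function
    return 0

def successor(n: int) -> int:                # 2. the successor function
    return n + 1

def identity(i: int) -> Func:                # 3. identity/projection: pick the i-th argument
    return lambda *args: args[i]

def compose(f: Func, *gs: Func) -> Func:     # 4. composition: f(g1(x..), ..., gk(x..))
    return lambda *args: f(*(g(*args) for g in gs))

def prim_rec(base: Func, step: Func) -> Func:
    # 5. primitive recursion: h(x.., 0) = base(x..); h(x.., n+1) = step(x.., n, h(x.., n))
    def h(*args: int) -> int:
        *xs, n = args
        acc = base(*xs)
        for k in range(n):
            acc = step(*xs, k, acc)
        return acc
    return h

def minimize(f: Func) -> Func:
    # 6. the mu-operator: least n with f(x.., n) = 0; the search may never
    # halt, which is exactly where partial functions enter the picture.
    def mu(*xs: int) -> int:
        n = 0
        while f(*xs, n) != 0:
            n += 1
        return n
    return mu

# Addition by primitive recursion; common subtraction x - y as the
# mu-search for the n with n + y = x (it loops forever when x < y):
add = prim_rec(identity(0), lambda x, k, acc: successor(acc))
subtract = minimize(lambda x, y, n: abs(add(n, y) - x))
assert add(3, 2) == 5 and subtract(7, 5) == 2
```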
The fact that the abacus/counter machine models can simulate the recursive functions provides the proof that: if a function is "machine computable" then it is "hand-calculable by partial recursion". Kleene's Theorem XXIX:
- "Theorem XXIX: "Every computable partial function φ is partial recursive..." (italics in original, p. 374).
The converse appears as his Theorem XXVIII. Together these form the proof of their equivalence, Kleene's Theorem XXX.
Church-Turing Thesis
With his Theorem XXX Kleene proves the equivalence of the two "Theses" -- the Church Thesis and the Turing Thesis. (Kleene can only hypothesize (conjecture) the truth of both theses -- these he has not proven):
- "THEOREM XXX: The following classes of partial functions ... have the same members: (a) the partial recursive functions, (b) the computable functions ..." (p. 376)
- Definition of "partial recursive function": "A partial function φ is partial recursive in [the partial functions] ψ₁, ..., ψₙ if there is a system of equations E which defines φ recursively from [partial functions] ψ₁, ..., ψₙ" (p. 326)
Thus by Kleene's Theorem XXX: either method of making numbers from input-numbers -- recursive functions calculated by hand or computed by Turing machine or equivalent -- results in an "effectively calculable/computable function". If we accept the hypothesis that every calculation/computation can be done by either method equivalently, we have accepted both Kleene's Theorem XXX (the equivalence) and the Church-Turing Thesis (the hypothesis of "every").
A note of dissent: "There's more to algorithm..." Blass and Gurevich (2003)
The notion of separating out Church's and Turing's theses from the "Church-Turing thesis" appears not only in Kleene (1952) but in Blass-Gurevich (2003) as well. But while there are agreements, there are disagreements too:
- "...we disagree with Kleene that the notion of algorithm is that well understood. In fact the notion of algorithm is richer these days than it was in Turing's days. And there are algorithms, of modern and classical varieties, not covered directly by Turing's analysis, for example, algorithms that interact with their environments, algorithms whose inputs are abstract structures, and geometric or, more generally, non-discrete algorithms" (Blass-Gurevich (2003) p. 8, boldface added)
A. A. Markov's characterization
A. A. Markov (1954) provided the following definition of algorithm:
- "1. In mathematics, "algorithm" is commonly understood to be an exact prescription, defining a computational process, leading from various initial data to the desired result...."
- "The following three features are characteristic of algorithms and determine their role in mathematics:
- "a) the precision of the prescription, leaving no place to arbitrariness, and its universal comprehensibility -- the definiteness of the algorithm;
- "b) the possibility of starting out with initial data, which may vary within given limits -- the generality of the algorithm;
- "c) the orientation of the algorithm toward obtaining some desired result, which is indeed obtained in the end with proper initial data -- the conclusiveness of the algorithm." (p.1)
He admitted that this definition "does not pretend to mathematical precision" (p. 1). His 1954 monograph was his attempt to define algorithm more accurately; he saw his resulting definition -- his "normal" algorithm -- as "equivalent to the concept of a recursive function" (p. 3). His definition included four major components (Chapter II.3 pp. 63ff; a sketch in code follows the list):
- "1. Separate elementary steps, each of which will be performed according to one of [the substitution] rules... [rules given at the outset]
- "2. ... steps of local nature ... [Thus the algorithm won't change more than a certain number of symbols to the left or right of the observed word/symbol]
- "3. Rules for the substitution formulas ... [he called the list of these "the scheme" of the algorithm]
- "4. ...a means to distinguish a "concluding substitution" [i.e. a distinguishable "terminal/final" state or states]
In his Introduction Markov observed that "the entire significance for mathematics" of efforts to define algorithm more precisely would be "in connection with the problem of a constructive foundation for mathematics" (p. 2). Ian Stewart (cf Encyclopedia Britannica) shares a similar belief: "...constructive analysis is very much in the same algorithmic spirit as computer science...". For more see constructive mathematics and Intuitionism.
The notion of "locality" first appeared with Turing (1936-1937) -- "Changes of one of the squares observed to another square within L squares of one of the previously observed squares", and it appears prominently in the work of Gurevich.
Minsky's characterization
Minsky (1967) baldly asserts that an algorithm is "an effective procedure" and declines to use the word "algorithm" further in his text; in fact his index entry makes clear what he feels about it: "Algorithm, synonym for Effective procedure" (p. 311):
- "We will use the latter term [an effective procedure] in the sequel. The terms are roughly synonymous, but there are a number of shades of meaning used in different contexts, especially for 'algorithm'" (italics in original, p. 105)
Other writers (see Knuth below) use the term "effective procedure". This leads one to wonder: What is Minsky's notion of "an effective procedure"? He starts off with:
- "...a set of rules which tell us, from moment to moment, precisely how to behave" (p. 106)
But he recognizes that this is subject to a criticism:
- "... the criticism that the interpretation of the rules is left to depend on some person or agent" (p. 106)
His refinement? To "specify, along with the statement of the rules, the details of the mechanism that is to interpret them". To avoid the "cumbersome" process of "having to do this over again for each individual procedure" he hopes to identify a "reasonably uniform family of rule-obeying mechanisms". His "formulation":
- "(1) a language in which sets of behavioral rules are to be expressed, and
- "(2) a single machine which can interpret statements in the language and thus carry out the steps of each specified process." (italics in original, all quotes this para. p. 107)
In the end, though, he still worries that "there remains a subjective aspect to the matter. Different people may not agree on whether a certain procedure should be called effective" (p. 107).
But Minsky is undeterred. He immediately introduces "Turing's Analysis of Computation Process" (his chapter 5.2). He quotes what he calls "Turing's thesis":
- "Any process which could naturally be called an effective procedure can be realized by a Turing machine" (p. 108). (Minsky comments that in a more general form this is called "Church's thesis".)
After an analysis of "Turing's Argument" (his chapter 5.3) he observes that "equivalence of many intuitive formulations" of Turing, Church, Kleene, Post, and Smullyan "...leads us to suppose that there is really here an 'objective' or 'absolute' notion. As Rogers [1959] put it:
- "In this sense, the notion of effectively computable function is one of the few 'absolute' concepts produced by modern work in the foundations of mathematics'" (Minsky p. 111 quoting Rogers, Hartley Jr (1959) The present theory of Turing machine computability, J. SIAM 7, 114-130.)
Knuth's characterization
Knuth (1968, 1973) has given a list of five properties that are widely accepted as requirements for an algorithm:
- Finiteness: "An algorithm must always terminate after a finite number of steps ... a very finite number, a reasonable number"
- Definiteness: "Each step of an algorithm must be precisely defined; the actions to be carried out must be rigorously and unambiguously specified for each case"
- Input: "...quantities which are given to it initially before the algorithm begins. These inputs are taken from specified sets of objects"
- Output: "...quantities which have a specified relation to the inputs"
- Effectiveness: "... all of the operations to be performed in the algorithm must be sufficiently basic that they can in principle be done exactly and in a finite length of time by a man using paper and pencil"
Knuth offers as an example the Euclidean algorithm for determining the greatest common divisor of two natural numbers (cf. Knuth Vol. 1 p. 2).
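Knuth's example renders directly into code; the following is our Python paraphrase of his Algorithm E, not his MIX program:

```python
def gcd(m: int, n: int) -> int:
    """Euclid's algorithm for the greatest common divisor.
    E1: divide m by n and call the remainder r.
    E2: if r = 0, the answer is n.
    E3: otherwise set m <- n, n <- r and return to E1."""
    while n != 0:
        m, n = n, m % n
    return m

assert gcd(119, 544) == 17
```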
Knuth admits that, while his description of an algorithm may be intuitively clear, it lacks formal rigor, since it is not exactly clear what "precisely defined" means, or "rigorously and unambiguously specified" means, or "sufficiently basic", and so forth. He makes an effort in this direction in his first volume where he defines in detail what he calls the "machine language" for his "mythical MIX...the world's first polyunsaturated computer" (pp. 120ff). Many of the algorithms in his books are written in the MIX language. He also uses tree diagrams, flow diagrams and state diagrams.
"Goodness" of an algorithm, "best" algorithms: Knuth states that "In practice, we not only want algorithms, we want good algorithms...." He suggests that some criteria of an algorithm's goodness are the number of steps to perform the algorithm, its "adaptability to computers, its simplicity and elegance, etc." Given a number of algorithms to perform the same computation, which one is "best"? He calls this sort of inquiry "algorithmic analysis: given an algorithm, to determine its performance characteristcis" (all quotes this paragraph: Knuth Vol. 1 p. 7)
Stone's characterization
Stone (1972) and Knuth (1968, 1973) were professors at Stanford University at the same time so it is not surprising if there are similarities in their definitions (boldface added for emphasis):
- "To summarize ... we define an algorithm to be a set of rules that precisely defines a sequence of operations such that each rule is effective and definite and such that the sequence terminates in a finite time." (boldface added, p. 8)
Stone is noteworthy because of his detailed discussion of what constitutes an "effective" rule -- his robot, or person-acting-as-robot, must have some information and abilities within them, and if not, the information and the ability must be provided in "the algorithm":
- "For people to follow the rules of an algorithm, the rules must be formulated so that they can be followed in a robot-like manner, that is, without the need for thought... however, if the instructions [to solve the quadratic equation, his example] are to be obeyed by someone who knows how to perform arithmetic operations but does not know how to extract a square root, then we must also provide a set of rules for extracting a square root in order to satisfy the definition of algorithm" (p. 4-5)
Furthermore "...not all instructions are acceptable, because they may require the robot to have abilities beyond those that we consider reasonable.” He gives the example of a robot confronted with the question is “Henry VIII a King of England?” and to print 1 if yes and 0 if no, but the robot has not been previously provided with this information. And worse, if the robot is asked if Aristotle was a King of England and the robot only had been provided with five names, it would not know how to answer. Thus:
- "an intuitive definition of an acceptable sequence of instructions is one in which each instruction is precisely defined so that the robot is guaranteed to be able to obey it" (p. 6)
After providing us with his definition, Stone introduces the Turing machine model and states that the set of five-tuples that are the machine's instructions are "an algorithm ... known as a Turing machine program" (p. 9). Immediately thereafter he goes on to say that a "computation of a Turing machine is described by stating:
- "1. The tape alphabet
- "2. The form in which the [input] parameters are presented on the tape
- "3. The initial state of the Turing machine
- "4. The form in which answers [output] will be represented on the tape when the Turing machine halts
- "5. The machine program" (italics added, p. 10)
This precise prescription of what is required for "a computation" is in the spirit of what will follow in the work of Blass and Gurevich.
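Stone's five-part prescription reads almost like a data structure; one might render it as follows (a hypothetical Python sketch -- the field names are ours, not Stone's):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TuringComputation:
    """Stone's five ingredients for describing 'a computation' (p. 10)."""
    tape_alphabet: set[str]                          # 1. the tape alphabet
    encode_input: Callable[..., str]                 # 2. how parameters appear on the tape
    initial_state: int                               # 3. the initial state
    decode_output: Callable[[str], int]              # 4. how answers are read off at halt
    program: dict[tuple[int, str], tuple[str, int]]  # 5. the machine program
```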
Blass and Gurevich's characterization
Blass and Gurevich describe their work as evolved from consideration of Turing machines and pointer machines, specifically Kolmogorov-Uspensky machines (KU machines), Schönhage Storage Modification Machines (SMM), and linking automata as defined by Knuth. The work of Gandy and Markov is also described as an influential precursor.
Gurevich offers a 'strong' definition of an algorithm (boldface added):
- "...Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine....In practice, it would be ridiculous...[Nevertheless,] [c]an one generalize Turing machines so that any algorithm, never mind how abstract, can be modeled by a generalized machine?...But suppose such generalized Turing machines exist. What would their states be?...a first-order structure ... a particular small instruction set suffices in all cases ... computation as an evolution of the state ... could be nondeterministic... can interact with their environment ... [could be] parallel and multi-agent ... [could have] dynamic semantics ... [the two underpinings of thier work are:] Turing's thesis ...[and] the notion of (first order) structure of [Tarski 1933]" (Gurevich 2000, p. 1-2)
The above phrase computation as an evolution of the state differs markedly from the definition of Knuth and Stone -- the "algorithm" as a Turing machine program. Rather, it corresponds to what Turing called the complete configuration (cf Turing's definition in Undecidable, p. 118) -- and includes both the current instruction (state) and the status of the tape. [cf Kleene (1952) p. 375 where he shows an example of a tape with 6 symbols on it -- all other squares are blank -- and how to Gödelize its combined table-tape status].
In the examples below we see the evolution of the state first-hand.
2002: Boolos-Burgess-Jeffrey specification of Turing machine calculation
- For examples of this specification-method applied to the addition algorithm "m+n" see Algorithm examples.
An example in Boolos-Burgess-Jeffrey (2002) (pp. 31-32) demonstrates the precision required in a complete specification of an algorithm, in this case to add two numbers: m+n. It is similar to the Stone requirements above.
(i) They have discussed the role of "number format" in the computation and selected the "tally notation" to represent numbers:
- "Certainly computation can be harder in practice with some notations than others... But... it is possible in principle to do in any other notation, simply by translating the data... For purposes of framing a rigourously defined notion of computability, it is convenient to use monadic or tally notation" (p. 25-26)
(ii) At the outset of their example they specify the machine to be used in the computation as a Turing machine. They have previously specified (p. 26) that the Turing-machine will be of the 4-tuple, rather than 5-tuple, variety. For more on this convention see Turing machine.
(iii) Previously the authors have specified that the tape-head's position will be indicated by a subscript to the right of the scanned symbol. For more on this convention see Turing machine. (In the following, boldface is added for emphasis):
- "We have not given an official definition of what it is for a numerical function to be computable by a Turing machine, specifying how inputs or arguments are to be represented on the machine, and how outputs or values represented. Our specifications for a k-place function from positive integers to positive integers are as follows:
- "(a) [Initial number format:] The arguments m1, ... mk, ... will be represented in monadic [unary] notation by blocks of those numbers of strokes, each block separated from the next by a single blank, on an otherwise blank tape.
- Example: 3+2, 111B11
- "(b) [Initial head location, initial state:] Initially, the machine will be scanning the leftmost 1 on the tape, and will be in its initial state, state 1.
- Example: 3+2, 1₁11B11
- "(c) [Successful computation -- number format at Halt:] If the function to be computed assigns a value n to the arguments that are represented initially on the tape, then the machine will eventually halt on a tape containing a block of strokes, and otherwise blank...
- Example: 3+2, 11111
- "(d) [Successful computation -- head location at Halt:] In this case [c] the machine will halt scanning the left-most 1 on the tape...
- Example: 3+2, 1ₙ1111
- "(e) [Unsuccessful computation -- failure to Halt or Halt with non-standard number format:] If the function that is to be computed assigns no value to the arguments that are represented initially on the tape, then the machine either will never halt, or will halt in some nonstandard configuration..."(ibid)
- Example: Bₙ11111 or B11ₙ111 or B11111ₙ
This specification is incomplete: it also requires the location of the instructions and their format in the machine --
- (iv) in the finite state machine's TABLE or, in the case of a Universal Turing machine, on the tape, and
- (v) the Table of instructions in a specified format
This latter point is important. Boolos-Burgess-Jeffrey give a demonstration (p. 36) that the predictability of the entries in the table allows one to "shrink" the table by putting the entries in sequence and omitting the input state and the symbol. Indeed the example Turing machine computation required only the 4 columns shown in the second table below (but note: these were presented to the machine in rows):
Full table of quadruples (state, scanned symbol, action, next state):

State qₖ | Scanned symbol | Action | Next state
1 | B | R | H
1 | 1 | R | 2
2 | B | P | 3
2 | 1 | R | 2
3 | B | L | 4
3 | 1 | R | 3
4 | B | L | 5
4 | 1 | E | 4
5 | B | R | H
5 | 1 | L | 5

Shrunk table (the state is implicit in the row order, the scanned symbol in the column):

State qₖ | B: action | B: next state | 1: action | 1: next state
1 | R | H | R | 2
2 | P | 3 | R | 2
3 | L | 4 | R | 3
4 | L | 5 | E | 4
5 | R | H | L | 5
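The shrunk table is enough to run the machine. Below is a hedged Python sketch (ours, not Boolos-Burgess-Jeffrey's) of a 4-tuple simulator executing the addition table on 111B11, i.e. 3+2; each printed line is a "complete configuration" in Turing's sense -- current state plus tape status -- so the evolution of the state can be seen first-hand:

```python
# 4-tuple Turing machine: (state, scanned symbol) -> (action, next state).
# Actions: 'R'/'L' move the head, 'P' prints a stroke, 'E' erases; 'H' halts.
PROGRAM = {
    (1, 'B'): ('R', 'H'), (1, '1'): ('R', 2),
    (2, 'B'): ('P', 3),   (2, '1'): ('R', 2),
    (3, 'B'): ('L', 4),   (3, '1'): ('R', 3),
    (4, 'B'): ('L', 5),   (4, '1'): ('E', 4),
    (5, 'B'): ('R', 'H'), (5, '1'): ('L', 5),
}

def run(tape: list[str], head: int = 0, state=1) -> str:
    while state != 'H':
        # The complete configuration: tape contents with the current
        # state written just before the scanned symbol (after Turing).
        print(''.join(tape[:head]) + f"[{state}]" + ''.join(tape[head:]))
        action, state = PROGRAM[(state, tape[head])]
        if action == 'P':
            tape[head] = '1'
        elif action == 'E':
            tape[head] = 'B'
        elif action == 'R':
            head += 1
            if head == len(tape):
                tape.append('B')         # extend the blank tape rightward
        elif action == 'L':
            head -= 1
            if head < 0:
                tape.insert(0, 'B')      # extend the blank tape leftward
                head = 0
    return ''.join(tape).strip('B')

print(run(list("111B11")))               # prints 11111, i.e. 3 + 2 = 5
```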
2006: Sipser's assertion and his three levels of description
- For examples of this specification-method applied to the addition algorithm "m+n" see Algorithm examples.
Sipser finishes the job that Stone and Boolos-Burgess-Jeffrey started (boldface added):
- "...our real focus from now on is on algorithms. That is, the Turing machine merely serves as a precise model for the definition of algorithm.... we need only to be comfortable enough with Turing machines to believe that they capture all algorithms" ( p. 156)"
Sipser offers us three levels of description of Turing machine algorithms (p. 157):
- High-level description: "wherein we use ... prose to describe an algorithm, ignoring the implementation details. At this level we do not need to mention how the machine manages its tape or head."
- Implementation description: "in which we use ... prose to describe the way that the Turing machine moves its head and the way that it stores data on its tape. At this level we do not give details of states or transition function."
- Formal description: "... the lowest, most detailed, level of description... that spells out in full the Turing machine's states, transition function, and so on."
References
See Algorithm for a list of references, plus:
- George Boolos, John P. Burgess, Richard Jeffrey (2002), Computability and Logic: Fourth Edition, Cambridge University Press, Cambridge, UK. ISBN 0-521-00758-5 (pbk).
- Michael Sipser (2006), Introduction to the Theory of Computation: Second Edition, Thomson Course Technology, Boston, MA. ISBN-13: 978-0-534-95097-2.