μ operator

From Wikipedia, the free encyclopedia

In computability theory, the μ operator, minimization operator, or unbounded search operator searches for the least natural number with a given property.


Definition

Suppose that R(y, x1, . . ., xk) is a fixed (k+1)-ary relation on the natural numbers. The mu operator "μy", in either its unbounded or bounded form, is a "number-theoretic function" defined from the natural numbers { 0, 1, 2, . . . } to the natural numbers. However, "μy" contains a predicate over the natural numbers, which delivers true when the predicate is satisfied and false when it is not.

The bounded mu operator appears earlier in Kleene (1952) Chapter IX Primitive Recursive Functions, §45 Predicates, prime factor representation, as:

"\mu y_{y<z} R(y). \ \ \mbox{The least} \ y<z \ \mbox{such that} \ R(y), \ \mbox{if} \ (\exists y)_{y<z} R(y); \ \mbox{otherwise}, \ z." (p. 255)

Kleene notes that any of the six inequality restrictions on the range of the variable y is permitted, i.e. "y < z", "y ≤ z", "w < y < z", "w < y ≤ z", "w ≤ y < z", "w ≤ y ≤ z". "When the indicated range contains no y such that R(y) [is "true"], the value of the "μy" expression is the cardinal number of the range" (p. 226); this is why the default "z" appears in the definition above. As shown below, the bounded mu operator "μy y<z" is defined in terms of two primitive recursive functions called the finite sum Σ and finite product Π, a predicate function that "does the test" and a representing function that converts { t, f } to { 0, 1 }.

In Chapter XI §57 General Recursive Functions, Kleene defines the unbounded μ-operator over the variable y in the following manner,

"(\exists y) \mu y R(y) = \{ \mbox{the least (natural number)}\ y \ \mbox{such that} \ R(y)\}" (p. 279, where "(\exists y)" means "There exists a y such that..."

In this instance R itself, or rather its representing function, delivers 0 when it is satisfied (i.e. when R delivers true); the operator then delivers that number y. No upper bound exists on the search for y, hence no inequality expressions appear in its definition.

For a given R(y), the unbounded mu operator μy R(y) (note that there is no requirement for "(Ey)") can result in a total function or a partial function. Kleene writes this potentially partial function in a different way (cf. p. 317):

εyR(x, y) =
  • the least y such that R(x,y) [is true], if (Ey)R(x,y)
  • 0, otherwise.
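
These definitions can be read operationally. The following is a minimal Python sketch, not taken from Kleene; the names mu_bounded and epsilon_y are chosen here, and the predicate is passed as an ordinary Boolean function. It contrasts the bounded operator, which defaults to the bound z when no y < z qualifies, with the εy variant just given, which defaults to 0:

    # Illustrative sketch only. The bounded operator returns z when no y < z
    # satisfies R (Kleene's convention); the epsilon-variant returns 0 instead.

    def mu_bounded(R, z):
        """Least y < z with R(y) true; otherwise z."""
        for y in range(z):
            if R(y):
                return y
        return z

    def epsilon_y(R, search_limit=10**6):
        """Least y with R(y) true, else 0. The search_limit is imposed here
        only so the sketch always halts; the operator itself has none."""
        for y in range(search_limit):
            if R(y):
                return y
        return 0

    # Least y < 7 with y*y > 10 is 4; no square below z = 7 exceeds 100, so default 7.
    print(mu_bounded(lambda y: y * y > 10, 7))    # -> 4
    print(mu_bounded(lambda y: y * y > 100, 7))   # -> 7
    print(epsilon_y(lambda y: y * y > 10))        # -> 4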

Properties

(i) In the context of the primitive recursive functions, where the search variable y of the μ-operator is bounded, e.g. y < z in the formula below, if the predicate R is primitive recursive (Kleene Proof #E p. 228), then

μy y<z R(y, x1, ..., xn) is a primitive recursive function.

(ii) In the context of the (total) recursive functions: where the search variable y is unbounded but guaranteed to exist for all values xi of the parameters of the total recursive predicate R,

(x1), ..., (xn) (Ey) R(y, x1, ..., xn) implies that μy R(y, x1, ..., xn) is a total recursive function.
Here (xi) means "for all xi" and (Ey) means "there exists at least one value of y such that ..." (cf. Kleene (1952) p. 279).

The five primitive recursive operators plus the unbounded-but-total μ-operator then give rise to what Kleene called the "general" recursive functions (i.e. total functions defined by the six recursion operators).

(iii) In the context of the partial recursive functions: Suppose that the relation R(y, x1, ..., xk) holds if and only if a partial recursive function converges to zero. And suppose that that partial recursive function converges (to something, not necessarily zero) whenever μy R(y, x1, ..., xk) is defined and y is μy R(y, x1, ..., xk) or smaller. Then the function μy R(y, x1, ..., xk) is also a partial recursive function.

The μ operator is used in the characterization of the computable functions as the μ recursive functions.
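
Properties (ii) and (iii) can be read operationally: the unbounded search defines a total function exactly when a witness y is guaranteed to exist for every choice of the parameters, and only a partial function otherwise. A minimal sketch in Python (illustrative only; the name mu and the example predicates are not from the sources):

    # Illustrative sketch: the unbounded mu operator as a search that simply
    # does not return when no witness exists, hence "partial" in general.

    def mu(R):
        """Least y with R(y) true; loops forever if no such y exists."""
        y = 0
        while not R(y):
            y += 1
        return y

    # Total case: some y satisfies y + y >= 9, so the search returns.
    print(mu(lambda y: y + y >= 9))      # -> 5

    # Partial case: the predicate is never true, so the call would never return.
    # print(mu(lambda y: False))         # does not terminate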

Examples

Example #1: The bounded μ-operator is a primitive recursive function

In the following, to save space, the bold-face x (i.e. x) will represent the string x1, . . ., xn.

The bounded μ-operator can be expressed rather simply in terms of two primitive recursive functions (hereafter "prf") that also are used to define the CASE function -- the product-of-terms Π and the sum-of-terms Σ (cf. Kleene #B page 224). (As needed, any bound on the variable, such as s ≤ t or t < z or 5 < x < 17 etc., is appropriate.) For example:

  • Π s≤t f(x, s) = f(x, 0) * f(x, 1) * . . . * f(x, t)
  • Σ t<z g(x, t) = g(x, 0) + g(x, 1) + . . . + g(x, z-1)

Before we proceed we need to introduce a function ψ called "the representing function" of the predicate R. The function ψ maps inputs ( t = "truth", f = "falsity" ) to outputs ( 0, 1 ) (observe the order!). In this case the input to ψ, i.e. { t, f }, comes from the output of R:

  • ψ( R = t ) = 0
  • ψ( R = f ) = 1

Kleene demonstrates that μy y<z R(y) is defined as follows; we see that the product function Π acts like a Boolean AND operator, and the sum Σ acts somewhat like a Boolean OR but produces { Σ≠0, Σ=0 } rather than just { 1, 0 }:

μy y<z R(y) = Σ t<z Π s≤t ψ( x, s ) =
  • [ ψ( x, 0 ) ] +
  • [ ψ( x, 0 ) * ψ( x, 1 ) ] +
  • [ ψ( x, 0 ) * ψ( x, 1 ) * ψ( x, 2 ) ] +
  • . . . +
  • [ ψ( x, 0 ) * ψ( x, 1 ) * . . . * ψ( x, z-1 ) ]
Σ is actually a primitive recursion with the base Σ(x, 0) = 0 and the induction step Σ(x, y+1 ) = Σ( x, y ) + Π( x, y ). The product Π is also a primitive recursion Π with base step Π( x, 0 ) = ψ( x, 0 ) and induction step Π( x, y+1 ) = Π( x, y )*ψ( x, y+1 ).

The equation is easier to grasp with an example, as given by Kleene. He simply made up the entries for the representing function ψ(R(y)), designating it χ(y) rather than ψ( x, y ):

y                        0   1   2   3   4   5   6   7=z
χ(y)                     1   1   1   0   1   0   0
π(y) = Π s≤y χ(s)        1   1   1   0   0   0   0   0
σ(y) = Σ t<y π(t)        0   1   2   3   3   3   3   3
least y<z such that R(y) is "true":  φ(y) = μy y<z R(y) = 3
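
A short Python sketch (illustrative only; the names chi, pi_ and sigma are chosen here to mirror Kleene's χ, π, σ) reproduces the rows of the table from the Σ-of-Π construction:

    # Sketch of the sum-of-products form of the bounded mu operator, using
    # the chi values from the table above (1 = "R false", 0 = "R true").

    chi = [1, 1, 1, 0, 1, 0, 0]       # representing function chi(y), y = 0..6
    z = 7

    def pi_(y):
        """pi(y) = product of chi(s) for s <= y."""
        result = 1
        for s in range(y + 1):
            result *= chi[s]
        return result

    def sigma(y):
        """sigma(y) = sum of pi(t) for t < y."""
        return sum(pi_(t) for t in range(y))

    print([pi_(y) for y in range(z)])        # -> [1, 1, 1, 0, 0, 0, 0]
    print([sigma(y) for y in range(z + 1)])  # -> [0, 1, 2, 3, 3, 3, 3, 3]
    print(sigma(z))                          # -> 3, the least y < z with R(y) true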

Example #2: The unbounded μ-operator is not primitive-recursive

The unbounded μ operator -- the function μy -- is the one commonly defined in the texts. But the reader may wonder why -- the modern texts do not state the reason -- the unbounded μ-operator searches for the function R(x, y) to yield zero, rather than some other natural number.

In a footnote Minsky does allow his operator to terminate when the function inside produces a match to the parameter "k"; this example is also useful because it shows another author's format:
" For μt[ φ(t) = k ] "(p. 210)

The reason for zero is that the unbounded operator μy will be defined in terms of the function "product" Π, with its index y allowed to "grow" as the μ operator searches. As noted in the example above, the product Π s≤y of a string of numbers ψ(x, 0) * . . . * ψ(x, y) yields zero whenever one of its factors ψ(x, i) is zero:

Π s≤y ψ(x, s) = ψ(x, 0) * . . . * ψ(x, y) = 0

if any ψ(x, i) = 0 where 0 ≤ i ≤ y. Thus Π acts like a Boolean AND.

The function μy produces as "output" a single natural number y = { 0, 1, 2, 3, ... }. However, inside the operator one of two "situations" can appear: (a) a "number-theoretic function" χ that produces a single natural number, or (b) a "predicate" R that produces either { t = true, f = false }. (And, in the context of partial recursive functions, Kleene later admits a third outcome: "μ = undecided", pp. 332ff.)

Kleene splits his definition of the unbounded μ operator to handle the two situations (a) and (b). For situation (b), before the predicate R(x, y) can serve in an arithmetic capacity in the product Π, its output { t, f } must first be "operated on" by its representing function χ to yield { 0, 1 }. And for situation (a), if a single definition is to serve for both, the number-theoretic function χ must produce zero to "satisfy" the μ operator. With this matter settled, he demonstrates with a single "Proof III" that either type, (a) or (b), together with the five primitive recursive operators yields the (total) recursive functions ... with this proviso for a total function:

That for all parameters x, a demonstration must be provided to show that a y exists that satisfies (a) μy ψ(x, y) or (b) μy R(x, y).
Kleene also admits a third situation (c) that does not require the demonstration that "for all x a y exists such that ψ(x, y)" is satisfied. He uses this in his proof that more total recursive functions exist than can be enumerated; cf. the footnote Total function demonstration.

Kleene's proof is informal and uses an example similar to the first example. But first he casts the μ-operator into a different form that uses the "product-of-terms" Π operating on a function χ that yields a natural number n, where n can be any natural number, and 0 in the instance when the μ operator's test is "satisfied".

The definition recast with the Π-function:
μy χ(y) =
  • (i): π(x, y) = Π s<y χ(x, s)
  • (ii): φ(x) = τ( π(x, y), π(x, y'), y )
  • (iii): τ(z', 0, y) = y; τ(u, v, w) is undefined for u = 0 or v > 0.

This is subtle. At first glance the equations seem to be using primitive recursion. But Kleene has not provided us with a base step and an induction step of the general form:

  • base step: φ( 0, x ) = σ( x ), for some previously defined function σ
  • induction step: φ( y', x ) = ψ( y, φ(y, x), x ), for some previously defined function ψ

What is going on? First, we have to remind ourselves that we have assigned a parameter (a natural number) to every variable xi. Second, we do see a successor-operator at work iterating y (i.e. the y'). And third, we see that the function μy χ(y, x) is just producing instances of χ(y, x), i.e. χ(0, x), χ(1, x), ..., until an instance yields 0. Fourth, when an instance χ(n, x) yields 0 it causes the middle term of τ, i.e. v = π( x, y' ), to yield 0. Finally, when the middle term v = 0, μy χ(y) executes line (iii) and "exits". Kleene's equations (ii) and (iii) have been exchanged in the presentation here to make the point that line (iii) represents an exit -- an exit taken only when the search successfully finds a y to satisfy χ(y) and the middle product-term π(x, y') is 0; the operator then terminates its search with τ(z', 0, y) = y.

τ( π(x, y), π( x, y' ), y), i.e.:
  • τ( π(x, 0), π( x, 1 ), 0),
  • τ( π(x, 1), π( x, 2 ), 1)
  • τ( π(x, 2), π( x, 3 ), 2)
  • τ( π(x, 3), π( x, 4 ), 3)
  • . . . . . until a match occurs at y=n and then:
  • τ(z', 0, y) = τ(z', 0, n) = n and the μ operator's search is done.

For the example Kleene "...consider[s] any fixed values of (x1, ..., xn)" and "write[s] simply χ(y) for χ(x1, ..., xn, y)":

y                        0   1   2   3   4   5   6   7   etc.
χ(y)                     3   1   2   0   9   0   1   5   . . .
π(y) = Π s<y χ(s)        1   3   3   6   0   0   0   0   . . .
least y such that χ(y) = 0:  φ(x) = μy χ(y) = 3
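
The same table can be reproduced with a short Python sketch of the recast definition (illustrative only; pi_, tau and phi mirror Kleene's π, τ and φ, and the search is written as an ordinary loop):

    # Sketch of the recast definition using the product pi and the selector tau,
    # with the chi values from the table above; chi(y) = 0 signals that the
    # mu operator's test is satisfied at y.

    chi = [3, 1, 2, 0, 9, 0, 1, 5]

    def pi_(y):
        """pi(y) = product of chi(s) for s < y (empty product = 1)."""
        result = 1
        for s in range(y):
            result *= chi[s]
        return result

    def tau(u, v, w):
        """tau(u, v, w) = w when u > 0 and v = 0; undefined otherwise."""
        if u > 0 and v == 0:
            return w
        raise ValueError("tau is undefined here")

    def phi():
        """Search upward in y until tau becomes defined, then exit with y."""
        y = 0
        while True:
            try:
                return tau(pi_(y), pi_(y + 1), y)
            except ValueError:
                y += 1

    print([pi_(y) for y in range(8)])   # -> [1, 3, 3, 6, 0, 0, 0, 0]
    print(phi())                        # -> 3, the least y with chi(y) = 0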


Example #3: Definition of the unbounded μ operator in terms of an abstract machine

Both Minsky (1967) p. 210 and Boolos-Burgess-Jeffrey (2002) pp. 60-61 provide definitions of the μ operator as an abstract machine; see the footnote Alternative abstract machine models of the unbounded μ operator below.

The following demonstration follows Minsky without "the peculiarity" mentioned in the footnote. The demonstration will use a "successor" counter machine model closely related to the Peano Axioms and the primitive recursive functions. The model consists of (i) a finite state machine with a TABLE of instructions and a so-called 'state register' that we will rename "the Instruction Register" (IR), (ii) a few "registers" each of which can contain only a single natural number, and (iii) an instruction set of four "commands" described in the following table:

In the following, the symbolism " [ r ] " means "the contents of", and " →r " indicates an action with respect to register r.
Instruction            Mnemonic          Action on register(s) "r"    Action on Instruction Register, IR
CLeaR register         CLR ( r )         0 → r                        [ IR ] +1 → IR
INCrement register     INC ( r )         [ r ] +1 → r                 [ IR ] +1 → IR
Jump if Equal          JE (r1, r2, z)    none                         IF [ r1 ] = [ r2 ] THEN z → IR ELSE [ IR ] +1 → IR
Halt                   H                 none                         [ IR ] → IR

The algorithm for the minimization operator μy [φ( x, y )] will, in essence, create a sequence of instances of the function φ( x, y ) as the value of parameter y (a natural number) increases; the process will continue (see Note † below) until a match occurs between the output of function φ( x, y ) and some pre-established number (usually 0). Thus the evaluation of φ(x, y) requires, at the outset, assignment of a natural number to each of its variables x and an assignment of a "match-number" (usually 0) to a register "w", and a number (usually 0) to register y.

Note †: The unbounded μ operator will continue this attempt-to-match process ad infinitum or until a match occurs. Thus the "y" register must be unbounded -- it must be able to "hold" a number of arbitrary size. Unlike a "real" computer model, abstract machine models allow this. In the case of a bounded μ operator, a lower bound would mean starting with the contents of y set to a number other than zero; an upper bound would require an additional register "ub" to contain the number that represents the upper bound, plus an additional comparison operation. An algorithm could provide for both lower and upper bounds.

In the following we are assuming that the Instruction Register (IR) encounters the μy "routine" at instruction number "n". Its first action will be to establish a number in a dedicated "w" register -- a copy of the number that the function φ( x, y ) must produce before the algorithm can terminate (classically this is the number zero, but see the footnote about the use of numbers other than zero). The algorithm's next action, at instruction "n+1", will be to clear the "y" register -- "y" will act as an "up-counter" that starts from 0. Then at instruction "n+2" the algorithm evaluates its function φ( x, y ) -- we assume this takes j instructions to accomplish -- and at the end of its evaluation φ( x, y ) deposits its output in register "φ". At the n+j+3rd instruction the algorithm compares the number in the "w" register (e.g. 0) to the number in the "φ" register -- if they are the same the algorithm has succeeded and it escapes through exit; otherwise it increments the contents of the "y" register and loops back with this new y-value to test function φ( x, y ) again.

IR        Instruction                 Action on register         Action on Instruction Register, IR
n         μy[ φ( x, y ) ]: CLR ( w )  0 → w                      [ IR ] +1 → IR
n+1       CLR ( y )                   0 → y                      [ IR ] +1 → IR
n+2       loop: φ ( x, y )            φ( [x], [y] ) → φ          [ IR ] +j+1 → IR
n+j+3     JE (φ, w, exit)             none                       IF [φ] = [w] THEN exit → IR ELSE [IR] +1 → IR
n+j+4     INC ( y )                   [ y ] +1 → y               [ IR ] +1 → IR
n+j+5     JE (0, 0, loop)             none (unconditional jump)  IF [0] = [0] THEN loop → IR ELSE loop → IR
n+j+6     exit:                       etc.
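
A direct transcription of this routine into Python may make the control flow easier to follow. This is a sketch, not a definitive implementation; the stand-in function phi(x, y) used here (truncated subtraction) is chosen only so that the search terminates, at y = x:

    # Transcription of the mu-routine above into Python (a sketch). The
    # registers are plain variables; phi stands in for the j instructions
    # that evaluate the function under test.

    def phi(x, y):
        return max(x - y, 0)          # stand-in function under test

    def mu_routine(x):
        w = 0                         # n:     CLR (w) -- the match-number
        y = 0                         # n+1:   CLR (y) -- the up-counter
        while True:
            f = phi(x, y)             # n+2:   loop: evaluate phi(x, y) into "phi"
            if f == w:                # n+j+3: JE (phi, w, exit)
                return y              # n+j+6: exit
            y += 1                    # n+j+4: INC (y)
                                      # n+j+5: JE (0, 0, loop) -- unconditional jump

    print(mu_routine(5))              # -> 5: the least y with phi(5, y) = 0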

Footnotes

Total function demonstration

What is mandatory if the function is to be a total function is a demonstration by some other method (e.g. induction) that for each and every combination of values of its parameters xi some natural number y will satisfy the μ-operator so that the algorithm that represents the calculation can terminate:

"...we must always hesitate to assume that a system of equations really defines a general-recursive [i.e. total] function. We normally require auxiliary evidence for this, e.g. in the form of an inductive proof that, for each argument value, the computation terminates with a unique value." (Minsky (1967) p. 186)
"In other words, we should not claim that a function is effectively calculable on the ground that it has been shown to be general [i.e. total] recursive, unless the demonstration that it is general recursive is effective."(Kleene (1952) p. 319)

For an example of what this means in practice see the examples at mu recursive functions -- even the simplest ("improper") subtraction algorithm "x - y = d" can yield, for the undefined cases when x < y, (1) no termination, (2) no numbers (i.e. something wrong with the format so the yield is not considered a natural number), or (3) deceit: wrong numbers in the correct format. The "proper" subtraction algorithm requires careful attention to all the "cases"

(x, y) = { (0, 0), (a, 0), (0, b), (a≥b, b), (a=b, b), (a<b, b) }.

But even when the algorithm has been shown to produce the expected output in the instances { (0, 0), (1, 0), (0, 1), (2, 1), (1, 1), (1, 2) }, we are left with an uneasy feeling until we can devise a "convincing demonstration" that the cases (x, y) = (n, m) all yield the expected results. To Kleene's point: is our "demonstration" (i.e. the algorithm that is our demonstration) convincing enough to be considered effective?
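
The point can be made concrete with a small sketch (the function names are chosen here and are not taken from Kleene or Minsky): "improper" subtraction written as an unbounded search never halts when x < y, while "proper" (truncated) subtraction is total:

    # Sketch: improper subtraction as an unbounded search never halts when
    # x < y, because no d satisfies y + d = x; truncated subtraction returns
    # 0 in that case and so is total.

    def improper_sub(x, y):
        """d = mu d [ y + d = x ]; partial -- loops forever when x < y."""
        d = 0
        while y + d != x:
            d += 1
        return d

    def proper_sub(x, y):
        """Truncated ("monus") subtraction: total on all pairs (x, y)."""
        return x - y if x >= y else 0

    print(improper_sub(7, 2))   # -> 5
    print(proper_sub(2, 7))     # -> 0
    # improper_sub(2, 7) would never return.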

Alternative abstract machine models of the unbounded μ operator from Minsky (1967) and Boolos-Burgess-Jeffrey (2002)

The unbounded μ operator is defined by Minsky (1967) p. 210 but with a peculiar flaw: the operator will not yield t = 0 when its predicate (the IF-THEN-ELSE test) is satisfied; rather, it yields t=2. In Minsky's version the counter is "t", and the function φ( t, x ) deposits its number in register φ. In the usual μ definition register w will contain 0, but Minsky observes that it can contain any number k. Minsky's instruction set is equivalent to the following where "JNE" = Jump to z if Not Equal:

{ CLR (r), INC (r), JNE (rj, rk, z) }
IR        Instruction                 Action on register         Action on Instruction Register, IR
n         μt[ φ( t, x ) ]: CLR ( w )  0 → w                      [ IR ] +1 → IR
n+1       CLR ( t )                   0 → t                      [ IR ] +1 → IR
n+2       loop: φ ( t, x )            φ( [ t ], [ x ] ) → φ      [ IR ] +j+1 → IR
n+j+3     INC ( t )                   [ t ] +1 → t               [ IR ] +1 → IR
n+j+4     JNE (φ, w, loop)            none                       IF [φ] ≠ [w] THEN loop → IR ELSE [IR] +1 → IR
n+j+5     INC ( t )                   [ t ] +1 → t               [ IR ] +1 → IR
n+j+6     exit:                       etc.
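
A brief sketch (illustrative only; minsky_mu is a name chosen here) shows the "peculiarity": because t is incremented once before the JNE test and once more after a successful test, a match at t = 0 is reported as t = 2:

    # Sketch of Minsky's variant: the counter t is incremented before the test
    # and again after a successful test, so a match at t = 0 is reported as 2.

    def minsky_mu(phi, k):
        w = k                     # n:     the match-number (0 in the usual definition)
        t = 0                     # n+1:   CLR (t)
        while True:
            f = phi(t)            # n+2:   loop: evaluate phi(t) into register "phi"
            t += 1                # n+j+3: INC (t)
            if f == w:            # n+j+4: JNE (phi, w, loop): fall through on a match
                t += 1            # n+j+5: INC (t)
                return t          # n+j+6: exit

    print(minsky_mu(lambda t: 0, 0))   # the match occurs at t = 0, yet 2 is reported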

The unbounded μ operator is also defined by Boolos-Burgess-Jeffrey (2002) p. 60-61 for a counter machine with an instruction set equivalent to the following:

{ CLR (r), INC (r), DEC (r), JZ (r, z), H }

In this version the counter "y" is called "r2", and the function f( x, r2 ) deposits its number in register "r3". Perhaps the reason Boolos-Burgess-Jeffrey clear r3 is to facilitate an unconditional jump to loop; this is often done by use of a dedicated register "0" that contains "0":

IR        Instruction                 Action on register         Action on Instruction Register, IR
n         μr2[ f( x, r2 ) ]: CLR (r2) 0 → r2                     [ IR ] +1 → IR
n+1       loop: f( x, r2 )            f( [ x ], [ r2 ] ) → r3    [ IR ] +j+1 → IR
n+j+2     JZ ( r3, exit )             none                       IF [ r3 ] = 0 THEN exit → IR ELSE [ IR ] +1 → IR
n+j+3     CLR ( r3 )                  0 → r3                     [ IR ] +1 → IR
n+j+4     INC ( r2 )                  [ r2 ] +1 → r2             [ IR ] +1 → IR
n+j+5     JZ ( r3, loop )             none                       IF [ r3 ] = 0 THEN loop → IR ELSE [ IR ] +1 → IR
n+j+6     exit:                       etc.

References

  • Stephen C. Kleene (1952), Introduction to Metamathematics, North-Holland Publishing Company, New York; 11th reprint 1971 (2nd-edition notes added on the 6th reprint).
  • Marvin L. Minsky (1967), Computation: Finite and Infinite Machines, Prentice-Hall, Inc., Englewood Cliffs, N.J. On pages 210-215 Minsky shows how to create the μ-operator using the register machine model, thus demonstrating its equivalence to the general recursive functions.