Partial function

From Wikipedia, the free encyclopedia

An example of a partial function that is not a total function.
An example of a partial function that is also a total function.

In mathematics, a partial function is a binary relation that associates each element of a set, sometimes called its domain, with at most one element of another (possibly the same) set, called its codomain. However, not every element of the domain has to be associated with an element of the codomain.

An example of a partial function with domain and codomain equal to the integers is given by

g(n) = √n,

where g(n) is defined only for those nonnegative integers that are perfect squares, that is, those equal to the square of some integer.
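This partial function can be sketched in Python. The function name g follows the text; treating "undefined" as a raised error is one conventional way of encoding partiality, not the only one:

```python
import math

def g(n: int) -> int:
    """g(n) = sqrt(n), defined only when n is a perfect square."""
    if n < 0:
        raise ValueError(f"g is undefined at {n}: negative integer")
    r = math.isqrt(n)          # integer square root (Python 3.8+)
    if r * r != n:
        raise ValueError(f"g is undefined at {n}: not a perfect square")
    return r
```

Here g(25) returns 5, while g(26) raises ValueError: 26 lies in the domain set (the integers) but outside the domain of definition.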

If every element of the domain of a binary relation f is associated to exactly one element of its codomain, then f is termed a total function, or simply a "function". Note that a total function is a partial function; this is consistent with the concept that the whole is a part of itself (see subset).

The term partial function may also refer to other meanings: In the context of multilinear maps, one speaks of the partial function with respect to the k-th argument, when all other arguments are fixed.


Domain of a partial function

There are two distinct meanings in current mathematical usage for the notion of the domain of a partial function. One, probably the less common, but favored by category theorists, is the one alluded to above: the domain of a partial function f : X → Y is X. Some authors[who?] may refer to the domain of definition as the set of values on which the function is defined, and consider it distinct from the domain.

Many other mathematicians, including recursion theorists, prefer to reserve the term "domain of f" for the set of all values x such that f(x) is defined. In this terminology, a partial function on a set S is simply a function whose domain is a subset of S.

Discussion and examples

The first diagram above represents a partial function that is not a total function since the element 1 in the left-hand set is not associated with anything in the right-hand set. Until the second half of the 20th century, only total functions were considered "well-defined".

Consider the natural logarithm function mapping the real numbers to themselves. The logarithm of a non-positive real is not a real number, so the natural logarithm function doesn't associate any real number in the codomain with any non-positive real number in the domain. Therefore, the natural logarithm function is not a total function when viewed as a function from the reals to themselves, but it is a partial function. If the domain is restricted to only include the positive reals (that is, if the natural logarithm function is viewed as a function from the positive reals to the reals), then the natural logarithm is a total function.
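The same point can be sketched in Python; math.log itself raises ValueError outside the positive reals, which is one way a programming language signals that a partial function is undefined at an argument (the name partial_ln is illustrative, not from the original):

```python
import math

def partial_ln(x: float) -> float:
    """Natural log viewed as a partial function from the reals to the reals."""
    if x <= 0:
        raise ValueError(f"ln is undefined at {x}: not a positive real")
    return math.log(x)
```

Restricting the domain to the positive reals, i.e. only ever calling partial_ln with x > 0, recovers a total function.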

A function viewed as an algorithm

A function can be viewed as an algorithm, calculated by a person or computed by a machine. Unless they are intentionally imitating the actions of machines, persons calculating by hand are using learned algorithms that can be defined by the μ recursive functions (cf. Stone (1972) pp. 3–8, his example of finding the roots of a quadratic equation). Machines "computing by the rules" can be described, in the abstract, by a Turing machine or a register machine model, the most primitive example of which is the counter machine.

Incompletely-specified look-up tables

If all its entries are completely specified, a lookup table is an example of a primitive recursive function known as the CASE operator (e.g. "If case 1 then do w, else if case 2 then do x, else if case 3 then do y, else do z as default"). An incompletely specified table is an example of a CASE operator in a μ recursive function that is missing its μ operator; it represents a partial function. Such things can happen in the state machine's TABLE of instructions for a Turing machine.
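The distinction can be sketched in Python, with a dictionary standing in for the lookup table (the case labels and return values here are illustrative, not from the original):

```python
def total_case(n):
    """Completely specified table: every input falls into some case."""
    table = {1: "w", 2: "x", 3: "y"}
    return table.get(n, "z")   # "z" is the default case, so the function is total

def partial_case(n):
    """Incompletely specified table: no default case, hence a partial function."""
    table = {1: "w", 2: "x", 3: "y"}
    return table[n]            # raises KeyError for any input outside cases 1-3
```

total_case is defined for every input; partial_case(99) simply has no value.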

Incompletely-defined Decrement

An interesting example to ponder is a counter machine with the instruction set shown below, in the eventuality that the instruction "DECrement (r)" occurs when the contents of register "r" is zero.

{ INCrement (r), DECrement (r), JZ (r, z), HALT }

Here "r" is the name of a particular "register" in the model (it can be anywhere from 1 to a boundless maximum register number). Each register holds a single natural number { 0, 1, 2, 3, ... }. (i) INCrement means "add 1 to the contents of". (ii) DECrement means "remove 1 from the contents of". (iii) JZ means "if the contents of register r is zero (i.e. register r is empty), then jump to instruction "z", else continue to the next instruction". (iv) HALT means terminate the computation. Except for a successful jump, the program always executes the next instruction in sequence.

What happens during DEC (r) when the contents of r is zero (meaning the register is empty), but the outcome has not been defined? Indeed, the function DEC (r) — by itself — represents a partial function. If we do not define what happens in this eventuality, then we must never use DEC (r) without preceding it with the test for zero, JZ (r, z). We must think of these two as working together.
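A minimal Python sketch of the point. The representation of registers as a dict, and raising an error for the undefined case, are implementation choices, not part of the model:

```python
def DEC(reg, r):
    """Decrement register r. Partial: undefined when the register is empty."""
    if reg[r] == 0:
        raise ValueError(f"DEC({r}) is undefined: register holds 0")
    reg[r] -= 1

def guarded_DEC(reg, r):
    """JZ-then-DEC: the test for zero makes the pair safe to use together.
    Returns False (the 'jump away' case) instead of decrementing zero."""
    if reg[r] == 0:            # JZ (r, z): jump away instead of decrementing
        return False
    DEC(reg, r)
    return True
```

Used alone, DEC is a partial function of the machine state; preceded by the JZ test it never reaches its undefined case.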

But DEC (r), INC (r) and JZ (r, z) represent examples of functions that have the capability of being partial in another way — by failure of specification. Each requires a pre-defined acceptable register "r" to test, and JZ requires a pre-defined instruction number "z" that does not send the algorithm to an undefined place in its TABLE. More often than not, if "mistakes are made", the algorithm will be a partial function, but not necessarily. (Question: how can we determine this, in the most general case? Answer: we cannot.)

Subtraction with a "successor" counter machine model

The following is an example of the first of the two Kleene (1952) "eventualities" for a failing partial function:

"An algorithm for calculating a function φ may, for a given x, fail to lead to a number as value of φ(x) either by not terminating (so that no matter how many steps have already been performed, the rules of the algorithm call for a next step), or by terminating but without giving a number as value." (p. 325)

The algorithm, in the form of a so-called "successor" counter machine model equipped with a 6-line program, computes the difference d = x - y, given a natural number x and a natural number y. The successor counter machine is closely akin to the Peano axioms and what Kleene (1952) called his "BASIS B" for the primitive recursive functions. In this example, this primitive machine will come equipped with only 4 "registers" { 0, 1, 2, 3 } that we rename { 0, x, y, d } to keep track of what is going on. Each "register" can contain only a single natural number { 0, 1, 2, . . . } . The machine must follow the commands of its instruction table which has only four instruction-types: (i) CLEAR contents of register r to "0", (ii) INCREMENT contents of register r, (iii) COMPARE contents of register rj to contents of register rk and JUMP, if EQUAL, to instruction z, and (iv) HALT. In all cases the program proceeds in sequence through its list of instructions unless a jump is successful. We abbreviate the instructions as follows:

{ CLR ( r ), INC ( r ), JE ( rj, rk, z ), H }

We allow the algorithm to alter the input parameters x and y; we are only after the difference d. The following table defines the program: it starts by clearing the difference-register d (#1); then, until x and y are equal (#2), it increments y (#3), increments d (#4), and jumps back to instruction #2, where x and y are again compared. If x and y are compared and found to be equal, the algorithm jumps to #6 and halts with the answer in d. This simple algorithm "operates on the assumption that" y is always less than or equal to x. (It is no coincidence that this behavior resembles the μ operator.) (In the following, the symbolism, e.g. "[ d ] + 1 → d", is to be read as "the contents of register d plus one is put into register d".)

Instruction #:   Instruction:     Action ("[ d ] + 1 → d" means "contents of d" + 1 replaces contents of d):
         1       CLR ( d )        0 → d
loop:    2       JE ( x, y, 6 )   IF x = y THEN instr #6 ELSE next instr
         3       INC ( y )        [ y ] + 1 → y
         4       INC ( d )        [ d ] + 1 → d
         5       JE ( 0, 0, 2 )   Unconditional jump to instr #2
         6       H                HALT
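The six-line program can be run directly with a small Python simulator. The encoding of instructions as tuples is an implementation choice, and a step limit stands in for "never halts":

```python
def successor_machine_subtract(x, y, max_steps=1000):
    """Run the 6-line program; return d, or None if the step limit is hit
    (which here signals the non-halting case y > x)."""
    reg = {"0": 0, "x": x, "y": y, "d": 0}
    prog = [("CLR", "d"),           # 1: 0 -> d
            ("JE", "x", "y", 6),    # 2: if [x] = [y] goto 6
            ("INC", "y"),           # 3: [y] + 1 -> y
            ("INC", "d"),           # 4: [d] + 1 -> d
            ("JE", "0", "0", 2),    # 5: unconditional jump to 2
            ("H",)]                 # 6: HALT
    pc = 1                          # instruction counter, 1-based as in the table
    for _ in range(max_steps):
        ins = prog[pc - 1]
        if ins[0] == "H":
            return reg["d"]
        if ins[0] == "CLR":
            reg[ins[1]] = 0
            pc += 1
        elif ins[0] == "INC":
            reg[ins[1]] += 1
            pc += 1
        else:                       # JE rj rk z
            pc = ins[3] if reg[ins[1]] == reg[ins[2]] else pc + 1
    return None                     # the partial function is undefined at (x, y)
```

successor_machine_subtract(3, 1) halts with 2, while successor_machine_subtract(1, 3) runs forever (here: returns None).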

This is how the program behaves when x > y. When it halts the machine yields the correct number "2" in register d:

step Inst # Instr rj rk z reg 0 reg x reg y reg d Comments
0 3 1 4 ← initially 3 - 1, but d is not cleared
1 1 CLR d 0 0 0 3 1 0 Clear register d to 0
2 2 JE x y 6 0 3 1 0 x ?= y: no
3 3 INC y 0 0 0 3 2 0 y+1
4 4 INC d 0 0 0 3 2 1 d+1
5 5 JE 0 0 2 0 3 2 1 Unconditional jump
6 2 JE x y 6 0 3 2 1 x ?= y: no
7 3 INC y 0 0 0 3 3 1 y+1→y
8 4 INC d 0 0 0 3 3 2 d+1→d
9 5 JE 0 0 2 0 3 3 2 Unconditional jump
10 2 JE x y 6 0 3 3 2 x ?= y: YES, goto HALT
11 6 H 0 0 0 0 3 3 2 HALT: d = 2

The following table shows what happens when the machine confronts the input x < y. Because the algorithm "assumes" (operates on the principle that) y is less than or equal to x, it increments y as it increments d. So x starts out, and always remains, less than y. The algorithm never halts.

step Inst # Instr rj rk z reg 0 reg x reg y reg d Comments
0 1 3 4 ← initially 1 - 3, but d is not cleared
1 1 CLR d 0 0 0 1 3 0 Clear register d to 0
2 2 JE x y 6 0 1 3 0 x ?= y: no
3 3 INC y 0 0 0 1 4 0 y+1→y
4 4 INC d 0 0 0 1 4 1 d+1→d
5 5 JE 0 0 2 0 1 4 1 Unconditional jump
6 2 JE x y 6 0 1 4 1 x ?= y: no
7 3 INC y 0 0 0 1 5 1 y+1→y
8 4 INC d 0 0 0 1 5 2 d+1→d
9 5 JE 0 0 2 0 1 5 2 Unconditional jump
10 2 JE x y 6 0 1 5 2 x ?= y: no
11 3 INC y 0 0 0 1 6 2 y+1→y
12 4 INC d 0 0 0 1 6 3 d+1→d

Subtraction with a Post–Turing machine model

Kleene fails to mention a third eventuality: that the algorithm terminates in the expected manner but with the wrong answer! One wonders: is this a case of a "partial function", or just a lousy algorithm? Whatever answer the reader chooses, the example does emphasize Minsky's point that a person should not assume that a recursive function is total. And to Kleene's point: even if a function has been shown to be recursive, the demonstration itself must be "effective"(!) (Kleene (1952) p. 319).

The next example shows that this third eventuality is a real possibility. (It is possible to modify the algorithm to provide non-standard output.)

The following example shows what happens to a Post–Turing machine when the function x - y is not defined over the entire range of its variables, i.e. what happens when x < y. The example given here is suggested by an example in Davis (1958) pp. 12–15. Davis, however, rather than using the much simpler Post–Turing machine model, used a full Turing machine, thereby rendering his example more difficult. But the outcome is the same: the machine behaves badly when its input is "improper". (His example never terminates, because he designed a test for x < y that sends his algorithm into a never-ending 'circle'.)

As before, we define the function f(x, y) = x - y as a "number-theoretic function", that is, one whose variables x and y range over the counting numbers greater than or equal to zero — the natural numbers { 0, 1, 2, ... }, and its output likewise ranges over the natural numbers { 0, 1, 2, ... }.

We design our machine to accept the numbers x and y represented as unary numbers (tally marks ■ ) printed on its tape, with "0" = Blank (B), "1" = ■, "2" = ■■, etc. A single blank square acts as a separator between the two numbers, and at least one square to the left and right of our numbers must be blank:

  • "x - y" ≡ . . . B x B y B . . .
  • "5 - 3" ≡ . . . B [■] ■ ■ ■ ■ B ■ ■ ■ B . . .
  • "0 - 3" ≡ . . . B [ B ] B ■ ■ ■ B . . .
  • "5 - 0" ≡ . . . B [■] ■ ■ ■ ■ B B B . . .
  • "0 - 0" ≡ . . . B [B] B B B . . .

The brackets [B] and [■] show where the tape head must be placed initially, both when x equals 0 and when x > 0: on the far-left mark of the left string x. This we specify as "the initial standard position as defined by the algorithm". At the end of the calculation, y will be gone, leaving x - y with the head over the left-most mark, in what we specify as "the final standard position as defined by the algorithm"; marks not in this position are not considered to be "numbers". For example, these are NOT considered "numbers":

. . . B [B] ■ ■ ■ B B B . . .
. . . B [B] B ■ ■ ■ B B B . . .
. . . B ■ B [■] ■ ■ B B B . . . (this assumes the tape was blank to begin with)
. . . B B B ■ ■ ■ [B] B B . . . (etc.)

The following table shows what the algorithm does when it encounters x and y within its defined range, the range x ≥ y, with x = 3, y = 1. In line #1, the algorithm starts in "standard position". Until y is equal to 0 (represented by Blank, B), the algorithm does the following: (#2) search right for the right-most end of y, (#3) erase the right-end mark of y, (#4) go left to the left-most mark of x, and (#5) erase the mark there. (#6) Repeat the cycle until all the y-marks are gone. Then (#7) go to the left end of what is left of x and (#8) place the head in "final standard position" with the head over the left-most mark (or over a blank if the result is 0). The following table shows all the "actions" of the head (left and right moves, and erasure):

row  x  y  tape (head square in brackets)  Action of tape head:                         Post-Turing instruction
1    3  1  . . . B [■] ■ ■ B ■ B . . .     initial "standard position"                  R
2a   3  1  . . . B ■ ■ ■ [B] ■ B . . .     right through x to separator blank           J1 *-1
2b   3  1  . . . B ■ ■ ■ B [■] B . . .     right once to where y1 should be             R
2c   3  1  . . . B ■ ■ ■ B [■] B . . .     Test y1: square is marked so y ≠ 0           J0 end
2d   3  1  . . . B ■ ■ ■ B ■ [B] . . .     right to first blank beyond right end of y   loop: R, J1 *-1
2e   3  1  . . . B ■ ■ ■ B [■] B . . .     left once to right-most mark of y            L
3    3  0  . . . B ■ ■ ■ B [B] B . . .     erase mark                                   E
4a   3  0  . . . B ■ ■ ■ [B] B B . . .     left through y to separator-blank            L, J1 *-1
4b   3  0  . . . [B] ■ ■ ■ B B B . . .     left to left end of x                        L, J1 *-1
4c   3  0  . . . B [■] ■ ■ B B B . . .     right to "standard position"                 R
5    2  0  . . . B [B] ■ ■ B B B . . .     erase mark                                   E
6b   2  0  . . . B B ■ ■ [B] B B . . .     right through x to separator blank           R, J1 *-1
6c   2  0  . . . B B ■ ■ B [B] B . . .     right once to where y1 should be             R
6d   2  0  . . . B B ■ ■ B [B] B . . .     Test y1: square is blank, so y = 0           J0 end; J1 loop
8    2  0  . . . B [B] ■ ■ B B B . . .     find left-most mark of x                     end: L, L, J1 *-1
9    2  0  . . . B B [■] ■ B B B . . .     HALT in final "standard position"            R, HALT

Now what happens to our algorithm when x < y, e.g. x = 1, y = 3? This particular algorithm halts in standard position with the final number "1": it has made an error. This is far worse than Kleene's eventuality #2: it has not, to quote Kleene (1952), "terminat[ed] but without giving a number"; it has deceived us.
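The tape-level behaviour can be checked with a short Python simulation of the program traced above. The instruction encoding and the tape padding are implementation choices, and the relative jumps "J1 *-1" have been resolved to absolute targets:

```python
def post_turing_subtract(x, y, max_steps=10_000):
    """Run the unary-tape subtraction program; returns the count of marks at
    the final head position (x, y >= 1 assumed, to stay on the padded tape)."""
    # J1 t: jump to t if the scanned square is marked; J0 t: jump if blank.
    prog = [("R",), ("J1", 1), ("R",), ("J0", 20), ("R",), ("J1", 5),
            ("L",), ("E",), ("L",), ("J1", 9), ("L",), ("J1", 11),
            ("R",), ("E",), ("R",), ("J1", 15), ("R",), ("J0", 20),
            ("J1", 5), ("L",), ("L",), ("J1", 21), ("R",), ("H",)]
    tape = [0, 0] + [1] * x + [0] + [1] * y + [0, 0, 0]   # B x B y B
    head, pc = 2, 1     # head on the leftmost mark of x; pc is 1-based
    for _ in range(max_steps):
        op = prog[pc - 1]
        if op[0] == "H":
            n = 0
            while tape[head + n] == 1:   # read the unary result rightward
                n += 1
            return n
        if op[0] in ("J1", "J0"):
            hit = (tape[head] == 1) if op[0] == "J1" else (tape[head] == 0)
            pc = op[1] if hit else pc + 1
        else:
            if op[0] == "R":
                head += 1
            elif op[0] == "L":
                head -= 1
            else:                        # "E": erase
                tape[head] = 0
            pc += 1
    raise RuntimeError("step limit exceeded")
```

post_turing_subtract(3, 1) halts with the correct 2; post_turing_subtract(1, 3) halts in standard position with the wrong answer 1.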

The following chart shows what this particular algorithm produces over a small range of its variables:

x - y    x = 0   x = 1   x = 2   x = 3   x = 4
y = 0      0       1       2       3       4
y = 1      0       0       1       2       3
y = 2      1       0       0       1       2
y = 3      2       1       0       0       1
y = 4      3       2       1       0       0

Partial subtraction that yields NOT-numbers

The following fixes the deceitful algorithm above with a few extra instructions to test for numbers x < y. The original is shown above the modified version in a way that emphasizes where the changes occurred.

Original (24 instructions; each jump instruction is shown with its jump-to address):

#1 L, #2 J1→1, #3 L, #4 J0→20, #5 L, #6 J1→5, #7 R, #8 E, #9 R, #10 J1→9, #11 R, #12 J1→11, #13 L, #14 E, #15 L, #16 J1→15, #17 L, #18 J0→20, #19 J1→5, #20 R, #21 R, #22 J1→21, #23 L, #24 H

Modified (28 instructions):

#1 J0→27, #2 L, #3 J1→2, #4 L, #5 J0→24, #6 L, #7 J1→6, #8 R, #9 E, #10 R, #11 J1→10, #12 R, #13 J1→12, #14 L, #15 E, #16 L, #17 J0→28, #18 R, #19 L, #20 J1→19, #21 L, #22 J0→24, #23 J1→6, #24 R, #25 R, #26 J1→25, #27 L, #28 H

A total function — "proper" subtraction

The following will use the "successor" counter machine model.

How do we eliminate the uncertainty of the partial function? We can design an algorithm to test whether x empties before y does, and if so, send the algorithm off to an empty d and then halt. This will yield what is known as proper subtraction:

f(x, y) ≡ d ≡ "IF x < y THEN 0 ELSE x - y".

We will define this with the help of the primitive recursive function called the CASE operator (a string of IF-THEN-ELSE tests), what Boolos-Burgess-Jeffrey (2002) call "an important process for obtaining new recursive functions from old... in more complicated examples, definition by cases makes it far easier to establish the (primitive) recursiveness of important functions" (p. 74).

Proper subtraction is not as simple as it seems. We have to provide for all six cases in a manner that our machine permits, e.g. the successor machine allows only comparisons of equality, incrementing and clearing. We just throw down the six cases in any order to ponder them (the last case #6 is called the "default" case):

d ≡ SUBTRACTION_CASE( x, y ) =
  1. IF [x] = 0 & [y] = 0 THEN 0 → d ELSE;
  2. IF [x] = 0 & [y] ≠ 0 THEN 0 → d ELSE;
  3. IF [x] ≠ 0 & [y] = 0 THEN [x] → d ELSE;
  4. IF [x] = [y] & [x] ≠ 0 THEN 0 → d ELSE;
  5. IF [x] < y THEN 0 → d ELSE;
  6. [x] - [y] → d
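The six cases transcribe directly into a chain of IF-THEN-ELSE tests. A Python sketch (using Python's own comparison and subtraction operators, which the successor machine of course lacks):

```python
def subtraction_case(x, y):
    """Proper subtraction by definition-by-cases; case 6 is the default."""
    if x == 0 and y == 0:        # case 1
        return 0
    elif x == 0 and y != 0:      # case 2
        return 0
    elif x != 0 and y == 0:      # case 3
        return x
    elif x == y and x != 0:      # case 4
        return 0
    elif x < y:                  # case 5
        return 0
    else:                        # case 6 (default): x > y > 0
        return x - y
```

Every pair (x, y) of natural numbers falls into exactly one case, so the function is total.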

The first two we can "solve" by doing an "IF-THEN-ELSE" test on x and then y; the third can follow by a test on y, given that x ≠ 0. Case #4 ([x] = [y]) is easy, too (it turns out we can get this one later). But case #5 requires us to compare the numbers [x] and [y].

CLR ( d ) ; be sure that difference d is truly empty
JE ( x, y, d_is_zero); case #4
JE ( x, 0, d_is_zero); cases #1, #2
JE ( y, 0, compute_d); case #3

Compare [x] to [y]: We clear a spare register ("s") and increment its contents from 0 to n at the same time we are comparing it to both [x] and [y]. Whichever of [x] or [y] (but not both) reaches the value of [s] is the smaller of the two. If they reach the value at the same time, then they are equal.

This requires a sequence of two tests that we should write as a CASE to be sure we've covered them all:

CASE #5:
  • (5a) [x] = [y]: IF [x] = [s] & [y] = [s] THEN 0 → d ELSE;
  • (5b) [x] < [y]: IF [x] = [s] & [y] ≠ [s] THEN 0 → d ELSE;
  • (5c) [x] > [y]: IF [s] ≠ [x] & [s] = [y] THEN compute_d ELSE;
  • (5d) [s] ≠ [x] & [s]≠[y]: u → output.

The u for "undecided" requires our algorithm to increment [s] and repeat this CASE #5 until it makes a match to either [x] or [y] (Again, that this sounds like the mu-recursive operator is no accident.):

loop_CASE#5:
JE (x, s, d_is_zero) ;cases (5a), (5b)
JE (y, s, compute_d) ;case (5c)
INC ( s ) ;default case (5d)
JE (0, 0, loop_CASE#5)

Compute_d: We are left with only case (5c), the computation [x] - [s], which we are calling "compute_d". What we do is increment [s] and [d] until [x] and [s] are equal. The difference will then be in register d:

compute_d: CASE #5c:
(5ci): IF [s] = [x] THEN [x] - [y] → d ELSE;
(5cii): [x] ≠ [s]: u → output.

Again, because this CASE yields u the algorithm must increment [s] and continue on until the match to [x] occurs.

compute_d:
JE ( x, s, done_d )
INC ( s )
JE ( 0, 0, compute_d )

We assemble the whole algorithm. It works as expected:

Instr #: Instr: Reg rj Reg rk Jump-to: Comments Case Action
1 CLR d Clear difference "d" 0 → d
2 CLR s Clear spare "s" 0 → s
3 JE x y 14 Optional: If [x] = [y] then done [x] = [y] 0 → d
4 JE x 0 14 If [x] = 0 then done [x] < [y] or [x] = [y] = 0 0 → d
5 JE y 0 10 If [y] = 0 then compute_d [y] = 0 & [x] ≠ 0 compute_d
loop_case#5: 6 JE x s 14 If [x] = [s] then done [x] = [y] or [x] < [y] 0 → d
7 JE y s 10 If [y] = [s] then compute_d [x] > [y] compute_d
8 INC s Increment spare register s [s] +1 → s
9 JE 0 0 6 unconditional jump to loop_case#5:
compute_d: 10 JE x s 14 If [x] = [s] then done [x] > [y] or [x] > 0 & [y]=0 [x] - [y] → d
11 INC d Increment difference d [d] +1 → d
12 INC s Increment spare register s [s] +1 → s
13 JE 0 0 10 unconditional jump to compute_d
done: 14 H HALT
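The assembled 14-instruction program can be checked with the same kind of simulator used above for the successor machine. The instruction encoding is an implementation choice; register "0" permanently holds 0, so JE(0, 0, z) acts as an unconditional jump:

```python
def proper_subtract(x, y):
    """Run the assembled 14-line program; always halts, with d = max(x - y, 0)."""
    reg = {"0": 0, "x": x, "y": y, "d": 0, "s": 0}
    prog = [("CLR", "d"),            #  1
            ("CLR", "s"),            #  2
            ("JE", "x", "y", 14),    #  3: [x] = [y] -> done, d = 0
            ("JE", "x", "0", 14),    #  4: [x] = 0   -> done, d = 0
            ("JE", "y", "0", 10),    #  5: [y] = 0   -> compute_d
            ("JE", "x", "s", 14),    #  6: s reached x first -> done, d = 0
            ("JE", "y", "s", 10),    #  7: s reached y first -> compute_d
            ("INC", "s"),            #  8
            ("JE", "0", "0", 6),     #  9: unconditional jump to 6
            ("JE", "x", "s", 14),    # 10: compute_d: s reached x -> done
            ("INC", "d"),            # 11
            ("INC", "s"),            # 12
            ("JE", "0", "0", 10),    # 13: unconditional jump to 10
            ("H",)]                  # 14: HALT
    pc = 1
    while True:
        ins = prog[pc - 1]
        if ins[0] == "H":
            return reg["d"]
        if ins[0] == "CLR":
            reg[ins[1]] = 0
            pc += 1
        elif ins[0] == "INC":
            reg[ins[1]] += 1
            pc += 1
        else:                        # JE rj rk z
            pc = ins[3] if reg[ins[1]] == reg[ins[2]] else pc + 1
```

Unlike the earlier 6-line program, this one is total: proper_subtract(3, 1) is 2 and proper_subtract(1, 3) halts with 0.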

If for some reason we wanted to make the algorithm 'circle' when it encountered the case x < y we could send the algorithm into a pair of instructions that jump back and forth, i.e. → 14 → 15 → 14 → 15 . . . ad infinitum. This is what Davis (1958) did in his subtraction algorithm. Kleene (1952) seems to think that this is appropriate:

"...we arrange that, when the given algorithm terminates without giving a number as value of φ(x), the new algorithm calls for additional steps which will continue ad infinitum." (p. 325)

The reader is left to ponder why one would intentionally do this.


References

  • Martin Davis (1958), Computability and Unsolvability, McGraw-Hill Book Company, Inc., New York. Republished by Dover in 1982. ISBN 0-486-61471-9.
  • Stephen Kleene (1952), Introduction to Metamathematics, North-Holland Publishing Company, Amsterdam, Netherlands, 10th printing with corrections added on 7th printing (1974). ISBN 0-7204-2103-9.
  • Harold S. Stone (1972), Introduction to Computer Organization and Data Structures, McGraw-Hill Book Company, New York.