Talk:Halting problem/Archive1


Big Trouble in Proof?

From the main page: I believe there is a big problem in its proof. We try the function halt(p,i) on trouble(s). But trouble needs an input, and whether trouble halts depends on that input. What does halt(T,T) mean? "Does trouble(s) halt with trouble(s) as input?" The answer differs depending on the input of the second trouble(), but this input isn't given, so we can't answer. If someone could explain where I'm wrong. (nico@seul.org)

The input is the function itself, not a certain instance of it - the algorithm, not its outcome. Perhaps it would be clearer if we wrote 'trouble' or 'trouble()' rather than 'trouble(s)'. The question is - does trouble halt with trouble as input? Well, if Halt(trouble,trouble) comes out with "No", it does; otherwise not. And when does Halt(trouble,trouble) come out with "No"? By definition, exactly when trouble does not halt with trouble as input - which is the contradiction.

We are in the computer world; a function is represented by a big number. So if T'=T, halt(T,T') means: "Does trouble() stop with T' as input?" OK, but what is the input of T'? We need it to answer the question! Otherwise it's nonsense; how could you know the behavior of foo(int i){while(i){}}? It depends on whether i=0 or i!=0.

T' doesn't need an input. T' is just a string. Strings don't take inputs. --LC 08:38 Aug 27, 2002 (PDT)

halt(X,Y) means that we are going to imagine what would happen if we ran the program represented by X. The program X has one input which is a string. We imagine giving it the string Y for its input. Y is just a string. You don't give inputs to a string. The string Y happens to represent a program, but that's irrelevant. We aren't going to run Y as a program. We're just going to take its string and feed it to X. That's why we don't have to worry about what inputs Y will take.

Imagine a Microsoft employee using Microsoft Word to edit a file which happens to be the source code for Microsoft Word. In this case, X is the copy of Word that is executing. Y is the source file that it reads in. We can talk about what input X takes. We can't really talk about what input Y takes. --LC
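
To make LC's point concrete, here is a minimal sketch (in Python, with names of my own choosing, not anything from the article): a program applied to a string that happens to be the program's own source text. The string is only read as data, never run, so it needs no input of its own.

    import inspect

    def count_vowels(s):
        """Take any string s and return how many vowels it contains."""
        return sum(1 for ch in s if ch.lower() in "aeiou")

    if __name__ == "__main__":
        source = inspect.getsource(count_vowels)   # this program's own text, as a string
        print(count_vowels(source))                # the string is only read, never executed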

That's what I don't like in the demonstration: halt(X,X) is nonsense by construction, so you can't use it in a proof. So, do you agree with me?

halt(X,X) makes sense, and is used correctly in the proof. I don't know how to make it any clearer. --LC 07:36 Aug 27, 2002 (PDT)

It seems that I lost some editing. In halt(X,Y), X is a program that takes data, Y. We extend Y to be a function; that is what halt(X,X) means. But the problem is: what is the input of the function X (the second one)? We could encode the input of the function inside the string X itself, but in that case X means T(T(T(T(T(T(T(...))))))) and it becomes an infinite regress. Do you see my problem?

But X in halt(X,Y) is not a function, it is a string that represents a function. You are confusing the map with the territory. -- Jan Hidders 06:27 Sep 2, 2002 (PDT)
Maybe it's easier if we use an example.
Let X be the string
function Decrease (s: string)
begin
  repeat
    remove the last character of s, if it has any
  until size(s)<20
  return(s)
end
Then if we take the algorithm represented by X with X as input, it will remove the last character of X (the 'd' of 'end'), and repeat that until fewer than 20 remain, return "function Decrease (", and then end. Thus Halt(X,X) should give Yes (and in fact Halt(X,s) should give Yes for any value of s). On the other hand, if Y were the string
function Decrease (s: string)
begin
  repeat
    remove the last character of s, if it has any
  until size(s) >1000
  return(s)
end
Then taking the algorithm represented by Y with Y as input would keep removing characters, and then idle through the loop without ever stopping, so Halt(Y,Y) should give No.
Maybe the best way to look at it is to assume that we have a function c() which, given a string describing an algorithm, gives that algorithm (for other strings it is allowed to give garbage). Such a function exists for several mathematical descriptions of algorithms. A halting algorithm Halt(X,Y) is then an algorithm on strings that, given a string X describing an algorithm with a single string as its input, answers yes if c(X) halts with Y as its input, and no otherwise. In particular, Halt(X,X) should give yes if c(X) halts with X as its input. The second X is just looked at as a string, not as a program, so it makes no sense to talk of its 'input'.
Andre Engels
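
The decoding function c() described above can be sketched as follows, assuming Python in place of the pseudocode; the decoder is realised with exec, which of course only works for strings that happen to be valid Python defining a one-argument function named decrease. X below is the Decrease example rewritten in that form.

    import textwrap

    def c(program_text):
        """Given a string describing an algorithm, return that algorithm
        (for other strings it may give garbage or raise an error)."""
        namespace = {}
        exec(textwrap.dedent(program_text), namespace)
        return namespace["decrease"]

    X = '''
    def decrease(s):
        # keep removing the last character while 20 or more remain
        while len(s) >= 20:
            s = s[:-1]
        return s
    '''

    # c(X) is the algorithm; the second X is only a string, handed to it as data,
    # so the call makes sense even though a bare string has no "input" of its own.
    print(c(X)(X))   # halts (prints the first 19 characters of X), so Halt(X, X) = yes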

In the first sentence of the indented paragraph labeled "2." near the end of the "Sketch of Proof" section of the Halting Problem article, I think that "Trouble does not halt" is incorrect and should be changed to "Trouble halts". I wanted to check with this discussion group before making the change.

Cole Kitchen

You are right. Well spotted. -- Jan Hidders 15:57 Nov 24, 2002 (UTC)


From the article:

it's very important to observe that the statement of the standard form of Gödel's First Incompleteness Theorem is completely unconcerned with the question of truth, but only concerns the issue of provability.

What are the precise assumptions of the first incompleteness theorem? Typically, one sees something like "a theory strong enough to formulate elementary arithmetic", but what does that mean? Isn't it assumed that the theory is able to prove at least some of the true statements of arithmetic? AxelBoldt 02:30 Dec 2, 2002 (UTC)

Not necessarily. If the theory is not able to prove at least some of the true statements of arithmetic, it certainly is not able to prove all of them, so Gödel's theorem holds semi-trivially. Andre Engels 09:45 Dec 2, 2002 (UTC)
I don't understand. Presburger arithmetic for instance is able to prove some, but not all, theorems of arithmetic, but Gödel's theorem does not apply to it. AxelBoldt 17:12 Dec 2, 2002 (UTC)
The reason that Gödel's theorem does not apply to Presburger arithmetic is that there are some theorems of arithmetic that cannot be described in Presburger arithmetic. For example, there is no way in Presburger arithmetic to state that there are infinitely many prime numbers. Because of this, Presburger arithmetic can be (and apparently in fact is) at the same time complete (meaning that every statement P that can be described in Presburger arithmetic can be either proven or disproven) and correct (meaning that no statement P can be both proven and disproven), in spite of Gödel's theorem. The assumption of Gödel's theorem that does not hold is that every statement of basic arithmetic can be described.
One could restate Gödel's theorem to read "Any axiomatic system of arithmetic will either have true statements of arithmetic that it cannot prove, or false statements that it can prove." Under this rewriting, it is clear where Presburger arithmetic fails: there are certain true statements of arithmetic that cannot even be written down in Presburger arithmetic; it is obvious that they also cannot be proven.
On the Wikipedia page Gödel's incompleteness theorem it is said that "in any axiomatic system sufficiently strong to allow one to do basic arithmetic, one can construct a statement that either can be neither proven nor disproven within that system or can be both proven and disproven within that system." Because in Presburger arithmetic one cannot express multiplication, it is not "sufficiently strong to allow one to do basic arithmetic", and thus Gödel's incompleteness theorem does not apply. Andre Engels 17:37 Dec 2, 2002 (UTC)

It seems to me that:

  • Any fool can prove that a program halts. Just run it until it halts.
  • Proving that a program does not halt is much harder than proving that it halts. This is because proving that it does not halt is proving a negative.
"Any fool can prove that a program halts. Just run it until it halts."
You have just proved that this particular instance of the program, with this particular input, halts. What we want to prove is that every instance of the program, with every acceptable input, will eventually halt.
-- Looxix
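
To spell out why "just run it" is only half an answer, here is a minimal sketch (Python, names mine) of what running the program actually buys you: a one-sided test that can confirm halting but can never report non-halting.

    def semi_decide_halts(program, argument):
        """If the call returns, we have learned the program halts on this input.
        If it never returns, neither does this tester, so we never learn 'no'."""
        program(argument)   # may run forever, in which case we never get past here
        return True         # reached only when program(argument) actually halted

    # Two toy programs:
    def halts(n):
        return n + 1

    def loops(n):
        while True:
            pass

    print(semi_decide_halts(halts, 5))   # prints True
    # semi_decide_halts(loops, 5)        # would never return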

One such consequence of the halting problem's undecidability is that ... there cannot be a general algorithm that decides whether a given statement about natural numbers is true or not.

What is this supposed to mean? It seems to me that I can quite easily prove, say, the statement that the empty set is a subset of the set of natural numbers. According to quote above I can't. I'm obviously misunderstanding something here. Can somebody clarify?

The quote doesn't say that you cannot do that. What it says is that there is not a fixed algorithm that decides the truth of all statements about natural numbers. (That is the meaning of "general".) That does not imply that you cannot find a proof for certain particular statements or a fixed algorithm for a certain small subset of statements. Compare it to the Halting Problem; there is no general algorithm that decides whether a program halts but it is still possible to prove for certain particular programs that they halt (or not). -- Jan Hidders 12:52, 31 Aug 2003 (UTC)

If the memory and external storage of a machine is limited, as it is for any computer which actually exists, then the halting problem for programs running on that machine can be solved with a general algorithm (albeit an extremely inefficient one).

How? Say S is the number of bits available on the machine. One way would be to keep track of another counter M of the same size, and see if it cycles back to zero. But can you run this Halt program on the same machine?

There exists a finite state machine that, given any program less than a gigabyte long, immediately returns whether the program will halt (we don't know that finite state machine, however; see below).

Where is the reference for this? -- User:MO 15 November 2003.

Question 1: Nobody said the program would run on the same machine.
Question 2: There is a finite number of programs smaller than a Gigabyte. We could simply take a list of all those programs, and write after each one 0 if it does not halt, and 1 if it halts. It's easy to make a finite state machine that returns a 0 or 1 depending on what we have written down. Andre Engels 01:52, 15 Nov 2003 (UTC)

I know this intuitive argument can be found in some books on the theory of computation, but I never liked it:

"the halting theorem itself is unsurprising. If there were a mechanical way to decide whether arbitrary programs would halt, then many apparently difficult mathematical problems would succumb to it."

It is wrong. The Perfect Number C-program is only 43 bytes long, so The Gigabyte Machine would decide it.

The halting problem for most of your brute-force search programs is solvable, computable. The existence of the Gigabyte Machine is proof of that (nonconstructive, unfortunately :-). Is this fact surprising? It shouldn't be.

The intuitive argument uses a magician's sleight of hand, replacing "computable" with "mechanical way to decide". That, whatever it means exactly, is a completely different issue. -- Jan Tångring

What is the "Gigabyte Machine"? Maybe you should write an article about it? :-) — Timwi 21:47, 5 Apr 2004 (UTC)

I am referring to the finite state machine mentioned earlier in the article (search for "gigabyte" in the text). -- Jan Tångring

Quote: "So, after centuries of work, mathematicians have yet to discover whether a simple, 43-byte C program halts."

Is C older than we thought? ;-)

I am not sure the Perfect Number C-program is really a "less than a Gb long" algorithm. Of course it can be written in less than a Gb, but can it run with less? If the type "int" that is used is the classic one as in C, I bet the outcome of the program will be evident, because the numbers will quickly grow and cycle. There will be a moment where either the condition is fulfilled, or a previously reached state of the program occurs again, proving the non-halting nature for the given input. If we don't want the numbers to cycle, then they must be arbitrarily long, so maybe needing more than a Gb to be stored ;-)
Fafner 12:40, 2 Aug 2004 (UTC)

The article asks if this theoretical C program would halt:

int main(void)
{
   BigInt i, j, k; /* assume arbitrary-precision integers */
   for(i=j=k=1;--j||k;k=j?i%j?k:k-j:(j=i+=2));
   return 0;
}

Maybe if I could read it, I could tell you. The double-nested ?: expression is the worst C idiom I've ever seen; it looks like something from an obfuscated C contest. And why cram all that into the header of the for loop? I'm going to try to clean it up, but I might get it wrong, which is another reason why this code is so horrible in the first place... code like this does not belong in Wikipedia at all except as an example of obfuscated code. - Furrykef 16:13, 9 Sep 2004 (UTC)

Done. I think I "unpacked" the for loop correctly. Though I think we might be better served with a p-code example... - Furrykef 16:35, 9 Sep 2004 (UTC)
I agree, and I think that the twin primes example that's already on the page serves the purpose just fine. The obfuscated odd-perfect-number program is superfluous, and its obfuscation obscures the real point, which is that you can't tell if an odd-perfect-number searcher will terminate, whether or not it is obfuscated to the point of unintelligibility. -- Dominus 17:06, 9 Sep 2004 (UTC)
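
For readers who share Furrykef's reaction, here is one reading of what the packed loop computes, as a sketch in Python (whose integers are arbitrary-precision, matching the BigInt comment); this is an independent unpacking, not necessarily the cleanup that went into the article.

    def search_for_odd_perfect_number():
        """One reading of the packed loop: walk i over 3, 5, 7, ... and stop at
        the first odd number equal to the sum of its proper divisors."""
        i = 3
        while True:
            if sum(j for j in range(1, i) if i % j == 0) == i:
                return i              # reached only if an odd perfect number exists
            i += 2

    # Calling search_for_odd_perfect_number() would, as far as anyone knows, never return.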

I simplified the proof of the halting problem, since Halt (or as I renamed it, Halts) really only needs to take one parameter -- the algorithm itself. Adding a second one is just confusing. Also, with modern programming languages, it would be simpler to pass functions around rather than strings; this also makes it easier to understand. I hope I didn't screw any of it up, though. - Furrykef 10:29, 12 Sep 2004 (UTC)

Oh, wait, I just realized my error. Yes, it does need the second parameter. - Furrykef 10:35, 12 Sep 2004 (UTC)

Limitations of this

I think it's important to note that the undecidability of the halting problem only holds when an infinite amount of memory is involved. Otherwise, it is decidable. A program can exist that can determine whether a program on a given computer system will ever terminate.

As an example, the perfect number problem can be answered. Eventually, the program is going to run out of memory. One of several things will happen, depending on the exact assembly language used by the compiler, the operating system, the C libraries, etc.:

  • The program causes an out-of-memory exception. Clearly, in this case, it terminates.
  • The program causes the operating system to freeze due to memory problems. Whether you define this as termination or running forever is up to you.
  • The program's BigInt library ignores the out-of-memory condition and continues running without increasing the integer size. i will be 0xFFFFFF...FFFF at this point. Adding 2 to i will result in 0x000000...0001. This causes i, j, and k to reset to the start state, which is an infinite loop.

As for the theoretical meaning of this... In Sketch of proof, assume that there is finite memory in the system. The proof essentially is the same idea as the sequence 1, -1, 1, -1 ... having no limit; the question of whether Trouble(Trouble) halts is equivalent to asking whether the "last" number in that sequence is 1 or -1. At some point, the infinite recursion that results from trying to decide whether Trouble() halts causes a stack overflow or other out-of-memory condition. At this point, the deepest copy of Trouble terminates with an error. The second to deepest copy of Trouble runs infinitely as a result. The third to deepest copy terminates, detecting this condition. This repeats up the "stack" until the top one either halts or does not halt, depending on the parity of the number of levels - there is no contradiction. (Note that there is no "stack", and that only one copy, the top copy, ever actually runs. The top copy is really just analyzing the lower ones, and in order to do that, it has to recursively analyze all subprograms called by its target.)

-- Myria 03:04, 24 Sep 2004 (UTC)

The fact that the halting problem is decidable for finite-memory machines has already been noted in the article. The practical consequences of this caveat are minimal, as the decision algorithm for finite-memory machines is fundamentally exponential-time. --Robert Merkel 03:55, 24 Sep 2004 (UTC)
  1. Not only exponential time, but exponential memory too, right? This means that if you have a finite-memory machine, you still cannot check, using this finite-memory machine, whether a program will halt or not on this finite-memory machine.
This is related to the question of why abstract machines like Turing machines are even a practical model for algorithms, if they're more powerful than real computers. The answer is that:
  1. They can describe algorithms at once over all machines, regardless of how much memory they have; there is a maximum to this now, but it can rise arbitrarily in time. Algorithm results should be timeless.
Actually, I think a consequence of one of the laws of thermodynamics is that there is a finite amount of energy in the universe and, consequently, a finite amount of information. So "it can rise arbitrarily in time" is probably true from a human perspective, but our understanding of physics suggests that this is (from a universal perspective) flawed. :) (I am not a physicist, though.) --pfunk42 (1 Jun 2005)
  2. They can describe algorithms without reference to any sort of machine-specific characteristics, such as the memory limit.
  3. They simplify the statement of algorithms; algorithms running on abstract machines are usually more general than their counterparts running on real machines, because they have arbitrary-precision datatypes available and never have to deal with unexpected out-of-memory conditions.
In short, you can certainly decide if an algorithm will halt on a given machine. It just takes a very long time, and in the end isn't worth much, because by the time it's done machines will have a million times or a billion times as much memory, and your algorithm has no hope of catching up. Derrick Coetzee 05:13, 24 Sep 2004 (UTC)
Okay. I think I understand your "fundamentally exponential time" bit, but that doesn't mean that many infinitely looping programs can't be decided as such rather quickly. In fact, it can be done proportional to the time it takes for the machine to repeat a state (which in some cases is rather fast), and with memory proportional to the memory used by the machine it's reasoning about. The "fundamentally exponential time" comes in because it can take exponential time with respect to the memory used for the machine to repeat a state, but "fundamentally exponential time" was confusing to me at first because you didn't specify what it's exponential with respect to--and it's only O(exponential) not Theta(exponential). Another interesting note: I actually implemented an infinite loop detector for Java that was quite effective for "tight" infinite loops. --pfunk42 (1 Jun 2005)
Any program which goes into a loop, literally, is easy to detect, just by remembering all its previous states and seeing if it hits one it's already hit. This means it's easy to detect if any program using bounded space enters an infinite loop. The problem is that Turing machines can use unbounded space, using more and more as they run, never entering a true loop, such as the twin primes example. A more practical problem is that trying to remember all the states of a real program so you can detect a loop requires more space than is typically available. To get around this people in model checking use Bloom filters. Deco 04:39, 27 July 2005 (UTC)
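
The state-remembering detector described above can be sketched for a toy machine whose entire configuration is a single value and whose program is a step function (the names below are illustrative, not from the article):

    def halts_on_bounded_state_machine(step, start, is_halted):
        """Run the machine, remembering every state seen.  Because the state
        space is finite, we either reach a halting state or revisit a state,
        which proves the machine loops forever."""
        seen = set()
        state = start
        while not is_halted(state):
            if state in seen:
                return False          # a repeated state means an infinite loop
            seen.add(state)
            state = step(state)
        return True

    # Example: an 8-bit counter that adds 3 each step and halts on reaching 0.
    print(halts_on_bounded_state_machine(lambda s: (s + 3) % 256, 5,
                                         lambda s: s == 0))   # prints True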

Reduction...

An anonymous user made an edit to the section of the importance of the halting problem, getting the proof by reduction part round exactly the wrong way (a common mistake). Could people check that I have got it correct? And is it too detailed for that part of the article? --Robert Merkel 10:28, 2 Nov 2004 (UTC)

Looks good. I'd suggest a concrete example, though, even a made-up one would help. Deco 23:47, 2 Nov 2004 (UTC)

Can Humans solve it?

I really don't think even humans can answer the halting problem with a definitive yes/no without being wrong, even if there is an infallible human being. First, suppose there is an infallible human being named Bob. Let's suppose we rewrite the program:

  function CheckIfHalts(program, input)
  {
     print(program, input);
     print("does this halt? (y/n)");
     return getch() == 'y';
  }

Bob sits in the back room doing nothing but analyzing the program and pressing y or n, and he's never wrong. We still have the contradiction - except that we end up presenting the contradiction to Bob instead of the Turing machine. Because we have already proven a logical contradiction, Bob has no choice but to say there is no solution.

This is all really just saying that the halting problem is a paradox with no solution, right?

--Caywen 20:00, 13 Mar 2005 (UTC)

You are correct, humans cannot solve it. Although the halting problem applies specifically to Turing machines, there are many who would argue that human beings ARE Turing machines (well, at least their brains are). It is a paradox with no solution, but a very special one: the significance is that there does not exist an algorithm which can determine whether or not other algorithms halt (or in your example, there exists no such person, and Turing definitively proved that non-existence). [no signature]
To clarify a point: the halting problem is not a paradox in the sense that an answer doesn't even exist. On the contrary, a program either halts on an input or it doesn't, and this is obvious. We just can't tell in general which it will do. There is an answer. Deco 04:35, 27 July 2005 (UTC)
Actually you forget that your function CheckIfHalts is not an algorithm that can run on a Turing machine anymore, because it includes your more powerful human Bob. So if you ask Bob whether some program that includes him halts, he will just tell you that you are trying to fool him, because the answer depends on Bob's free will. If Bob is not more powerful than a Turing machine, then of course he can't solve it. It all depends on whether humans are more powerful than Turing machines or not. (I think that question is unanswered) GoGi 12:28, 15 August 2005 (UTC)

Super halting problems

The idea is: suppose there's a magical machine that can solve the halting problem. The super halting problem is to decide whether this machine halts. It seems that this problem cannot be solved by the "standard" Turing machine or the magical one described earlier. A machine that could solve this problem could not solve its own halting problem. Again, a machine that could solve that one couldn't solve its own halting problem. This leads to an infinite series of more and more powerful machines and "harder" halting problems. See http://www.scottaaronson.com/writings/bignumbers.html for a much better explanation. (search for the phrase "super halting problem")

Should the page mention this or have a link to a page dedicated to it?

Well, deciding if that particular machine will halt is easy. The answer is yes: since every Turing machine either halts or doesn't halt on a given input, in order to be correct the halt-deciding machine must always return an answer, i.e. always halt. A more interesting question (the one you probably intended) is to ask how much "harder" the halting problem is on a Turing machine with an oracle for the halting problem. This is discussed at Oracle machine#Oracles and Halting Problems, to which we could reasonably link. Deco 00:14, 14 May 2005 (UTC)

Wrong examples?

    function findTwinPrimeAbove(int N)
    int p := N
    loop
        if p is prime and p + 2 is prime
            return
        else
            p := p + 1

I think this example is wrong. If there were a proof, we would know the answer. I think this should be a function for which we know there is no proof, because if "all proofs" are known, are there any functions left?

This is just offered as evidence for why it makes sense that the halting problem is impossible to solve: if it were solvable, we would have an automatic procedure for settling hard open problems in mathematics. There are very few problems for which we have proofs that they can be neither proven nor disproven, and it's not clear to me how a program could test any of these. Deco 22:24, 15 Jun 2005 (UTC)
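
For concreteness, the quoted pseudocode can be rendered as a runnable sketch; sympy's isprime is used here purely for convenience and is not part of the original example.

    from sympy import isprime

    def find_twin_prime_above(N):
        """Runnable rendering of the pseudocode quoted above."""
        p = N
        while True:
            if isprime(p) and isprime(p + 2):
                return p   # halts for every N exactly if there are infinitely many twin primes
            p += 1

    print(find_twin_prime_above(1000))   # prints 1019 (the pair 1019, 1021)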

Recognizing partial solutions

In this section it says:

Can we recognize a correct PHS when we see it?

I think this is intended to mean: "can we construct a Turing machine that will infallibly determine whether a program given as input is itself a correct PHS", but that meaning isn't clear from the above expression, and it took me some time analysing the following text to decide that this is what was meant.

However, if I am correct about the meaning of the question, the answer also seems trivially "no" - take any incorrect PHS and prepend code to find the first odd perfect number, for example - and it seems odd then that the section spends so long focussing on this apparently trivial question. To me, the more obviously interesting question to consider is, given that it is clearly possible to construct some Turing machines that are provably correct PHSs, how good can we make them? Hv 13:49, 12 August 2005 (UTC)

I agree with this. I'd like to see the discussion of the undecidability of this problem removed if there's consensus. We can instead include a discussion of some well-known algorithms for partial deciding, such as those based on basic blocks and dead-code elimination that are used by compilers. Deco 22:32, 12 August 2005 (UTC)
Note that Busy beaver chasers need to solve exactly this problem, so their documents would be a useful starting point for research. Hv 04:00, 13 August 2005 (UTC)

Riemann hypothesis

I ran across an interesting fact at Divisor function. It appears that the famous Riemann hypothesis, one of the million-dollar problems, is false if and only if the following very simple Turing machine terminates:

function Riemann() {
    int n := 5041
    while true
        if sumOfDivisors(n)/(n*ln(ln(n))) ≥ e^y then
            return
        n := n + 1
}

Here, sumOfDivisors computes the sum of the divisors of n, e is the number e, y is the Euler-Mascheroni constant, and computations are done with sufficient accuracy to conclusively determine whether the inequality holds (it suffices to use range arithmetic and increase accuracy until the range does not contain the constant). We have enough examples already, and this one is a bit more complicated, but it's amazing how many important open problems can be expressed in this way. Deco 04:39, 22 August 2005 (UTC)
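
A sketch of the test inside the loop for a single n, using mpmath for e^y (mpmath is assumed to be available; this uses plain high-precision floats rather than the range arithmetic described above, so it is only illustrative):

    from mpmath import mp, exp, log, euler

    mp.dps = 50   # work with 50 decimal digits

    def sum_of_divisors(n):
        """Naive sigma(n); fine for illustration, far too slow for a real search."""
        return sum(d for d in range(1, n + 1) if n % d == 0)

    def robin_inequality_violated(n):
        """True iff sigma(n) >= e^gamma * n * ln(ln(n)), the event the loop above waits for."""
        return sum_of_divisors(n) >= exp(euler) * n * log(log(n))

    print(robin_inequality_violated(5041))   # False here; RH is false iff this is ever True for some n > 5040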

Bad Proof?

This is the code from the article:

 function trouble(string s)
     if halt(s, s) = false
         return true
     else
         loop forever

but I think that the correct form of trouble is (with false replaced by true):

 function trouble(string s)
     if halt(s, s) = true
         return true
     else
         loop forever
You are mistaken. -- Dominus 20:38, 8 September 2005 (UTC)
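
To see concretely why the article's version (testing for false) is the one that works, here is a sketch in Python. Since no genuine halt() can exist, a stand-in decider is plugged in (here, one that always answers "does not halt"); the same construction defeats any candidate. The helper run_source and the stand-in are my own illustrative names, not anything from the article.

    import textwrap

    def halt(program_text, input_text):
        """Stand-in for the hypothetical decider; this one always answers
        'does not halt'.  Any other candidate could be substituted here."""
        return False

    def run_source(program_text, input_text):
        """Decode the string into the function 'trouble' it defines and run it."""
        namespace = {"halt": halt}
        exec(textwrap.dedent(program_text), namespace)
        return namespace["trouble"](input_text)

    TROUBLE = '''
    def trouble(s):
        if halt(s, s) == False:   # halt claims trouble(s) will NOT halt...
            return True           # ...so trouble promptly halts, proving halt wrong
        else:                     # halt claims trouble(s) WILL halt...
            while True:           # ...so trouble loops forever, again proving halt wrong
                pass
    '''

    print(halt(TROUBLE, TROUBLE))        # the candidate decider predicts: does not halt
    print(run_source(TROUBLE, TROUBLE))  # ...yet trouble returns True, i.e. it halts
    # With the test reversed (halt(s, s) == True), trouble would simply do whatever
    # halt predicts, and no contradiction could arise -- which is why the article's
    # version is the right one.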

/* limitations of a precise problem formulation */

Suppose we model "halt(x, i)" using an oracle with the restriction that only *one* call can be made to the oracle per program x (that is, x may do everything except encode a call to halt()). Is this oracle any different from the general halting oracle (where unlimited calls are allowed in one program)?

Assumption that Halt always halts?

On what assumption is the conclusion that halt always halts based? Hackwrench 00:50, 9 November 2005 (UTC)

In the case listed, halt can make the determination that trouble never returns, but it can never pass that information to trouble, i.e. it never returns.

I just figured it out. The assumption being made is that, to produce output, an algorithm has to halt. This is not the case.

global does_halt

halt(i, j) { does_halt = false; infinite_loop; }
Hackwrench 01:15, 9 November 2005 (UTC)

Expanded argument and counters to the refutation on kuro5hin:

http://www.kuro5hin.org/story/2005/11/8/221547/316

As you might have guessed, you're wrong. By definition, a function which solves the halting problem always returns true or false, or in other words always halts. We define it this way because a function which either decides if its input halts or never returns isn't very useful - it might only work on a few programs, and we want something that works for all programs (and the content of the theorem is that, as it turns out, that's asking too much). As the section "Recognizing partial solutions" notes, "There are programs that give correct answers for some instances of it, and run forever on all other instances." Deco 02:13, 10 November 2005 (UTC)

How is "return" a synonym for "halt"? Another issue I presented is that in order for trouble to be an algorithm, the definition of algorithm "trouble" must include the definition of "halt" so if "halt" halts, then "trouble" is immediately halted. Hackwrench 18:40, 10 November 2005 (UTC)

"halt" means "returns a result" because that's what it's defined to mean. You can use some other definition of "halt", but then you're not talking about the same thing that the article is.
A function that produces a result but that doesn't halt is useless, because the function's caller never gets a chance to examine the result. A function that halts but that doesn't produce a result is also useless; there was no reason to call it. So there is no reason to consider functions for which halting and producing results are different things. -- Dominus 18:54, 10 November 2005 (UTC)

But trouble as stated is not an algorithm. Even if we ignore the definition of algorithm on the [algorithm] page as always halting, the totality of trouble must include the portion assigned the label halt. If halt by definition always halts, then trouble always halts, and the portion beyond halt is not part of the algorithm trouble because it is unreachable. Hackwrench 00:00, 11 November 2005 (UTC)

No. When a function halts, that just means that it returns a value to its caller. When halt halts, its result is returned to trouble, which then can use the result in its own computations. -- Dominus 03:25, 11 November 2005 (UTC)

If that's the case, then trouble can never get passed to itself as the input because then the input becomes Trouble + result of halt. Hackwrench 14:45, 11 November 2005 (UTC)

function trouble(string s)

    if halt(s, s) = false
        return true
    else
        loop forever

is equivalent to

function trouble(string s)

    if s = false
        return true
    else
        loop forever

end function

trouble(halt(t,t))

It is not equivalent. Your second function doesn't even make sense, because it compares a string with a boolean. -- Dominus 16:03, 11 November 2005 (UTC)
Any boolean can be expressed as a string. My point is that the string is not a real parameter for trouble as it doesn't come into contact with trouble's inner workings.

Furthermore trouble doesn't know which state is "false" or which state is true. There are two states for trouble and there are two possible states for halt and it is a trivial exercise to show that the two map to each other. Hackwrench 14:45, 11 November 2005 (UTC)

It occurs to me that the fact that the only use that trouble makes of the result of halt is to decide whether or not to branch may be useful in this discussion, as you said that trouble can use the result of halt in its computations, but in fact it can use it in only one calculation, its branching calculation. Hackwrench 14:45, 11 November 2005 (UTC)

Consider

This is a field of Turing machines; all other Turing machines can be built from these:

0:0,0 does not return, so to find the answer, negate
0:1,1 returns: does not return
0:0,1 returns: returns
0:1,0 returns: returns
1:1,1 does not return, so to find the answer, negate
1:0,0 returns: does not return
1:0,1 returns: returns
1:1,0 returns: returns