Three Prisoners problem
From Wikipedia, the free encyclopedia
The Three Prisoners problem appeared in Martin Gardner's "Mathematical Games" column in Scientific American in 1959.[1][2] It is mathematically equivalent to the Monty Hall problem, with "car" and "goat" replaced by "freedom" and "execution" respectively.
There are three prisoners scheduled to be executed, A, B, and C, although one of them will be pardoned. A asks the warden to tell him the name of one of the others who will be executed. As the question is not directly about A's fate, the warden obliges, flipping a coin to decide which name to give if A is the one being pardoned. Assuming the warden is truthful, there are now only two possibilities for who will be pardoned: A, and whichever of B or C the warden did not name. Did A gain any information about his own fate; that is, should he change his estimate of the chances he will be pardoned? To make the analogy to the Monty Hall problem explicit: if the warden says "B will be executed" and A could switch fates with C, should he?
Solution
The answer is that he gained no information about his own fate, but he should switch with C if he can. Prisoner A, prior to hearing from the warden, estimates his chances of being pardoned as 1/3, the same as both B and C. If the warden says B will be executed, it is either because C will be pardoned (1/3 chance), or because A will be pardoned (1/3 chance) and the coin the warden flipped came up B (1/2 chance), for a combined 1/6 chance. Hence, after hearing who will be executed, A estimates his own chance of being pardoned as half that of whichever prisoner the warden did not name: his chance remains 1/3, while the unnamed prisoner's chance rises to 2/3. So if the warden says "B will be executed" and A wants to live, he is better off switching fates with C.
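The 1/3 versus 2/3 split can be checked empirically. The following Python sketch (illustrative, not part of Gardner's original column) simulates the warden's protocol many times and tallies how often A, and how often C, turns out to be the pardoned prisoner in the runs where the warden names B:

```python
import random

def simulate(trials=100_000, seed=0):
    """Estimate A's and C's pardon chances, given the warden names B."""
    rng = random.Random(seed)
    says_b = a_lives = c_lives = 0
    for _ in range(trials):
        pardoned = rng.choice("ABC")       # each prisoner pardoned with chance 1/3
        if pardoned == "A":
            named = rng.choice("BC")       # warden flips a coin between B and C
        elif pardoned == "B":
            named = "C"                    # warden cannot name A or the pardoned B
        else:
            named = "B"                    # C pardoned: only B can be named
        if named == "B":
            says_b += 1
            a_lives += pardoned == "A"
            c_lives += pardoned == "C"
    return a_lives / says_b, c_lives / says_b

p_a, p_c = simulate()
print(p_a, p_c)  # approximately 1/3 and 2/3
```

With 100,000 trials the estimates land very close to the exact values 1/3 and 2/3 derived above.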
Some may argue that this is not equivalent to the Monty Hall problem, pointing to the fact that the player in that game decides for himself which door to initially select and is given a subsequent opportunity to switch, whereas the prisoner to be pardoned has already been selected and cannot affect his fate.
However, this criticism misses the point, because A does have a choice: how to set his own expectations. Should he be more or less worried about his future after hearing the warden? Or, put another way: if he had to choose between keeping his fate as it stands after the warden's response, or switching it with that of the other prisoner whose destiny is still unknown, what should he do?
Aids to understanding
As with the Monty Hall problem, it may be useful to see this problem from alternative viewpoints for better understanding.
Bayesian analysis
An analysis using Bayesian probability theory begins by expressing the problem in terms of statements, or propositions, that may be true or false. Its goal is to compute the probability of these propositions, a number measuring our confidence in their truth. The problem at hand concerns propositions of the form:
- $x$ : "x will be pardoned", for x equal to A, B or C.
- $W_{y,z}$ : "Replying to y, the warden states that z is executed", for y and z equal to A, B or C.
For example, $A$ denotes the proposition "A will be pardoned", and $W_{A,B}$ denotes the proposition "Replying to A, the warden states that B is executed". In this context, "being pardoned" is the negation of "being executed".
The rules of the game, collectively denoted by $I$, impose the following constraints on the probability of such propositions. First, the three prisoners have, a priori, the same chance of being pardoned, hence the prior probability of $x$ is:
- $P(A \mid I) = P(B \mid I) = P(C \mid I) = \tfrac{1}{3}$.
Second, the warden is truthful, and will always name as executed a prisoner other than the one questioning him. If he has a choice of two such prisoners, they are equally likely to be named in his response. This rule determines the conditional probability of a proposition $W_{y,z}$, conditioned on a proposition $x$ as well as the game rules $I$:
- $P(W_{y,z} \mid x, I) = \begin{cases} 0 & \text{if } y = z \text{ (the warden will not tell the asking prisoner that he himself is to be executed)} \\ 0 & \text{if } z = x \text{ (the warden will not lie by naming as executed a prisoner who is, in fact, to be pardoned)} \\ \tfrac{1}{2} & \text{if } y = x \text{ (the asking prisoner is to be pardoned, and the warden names either of the others)} \\ 1 & \text{if } y \neq x \text{ and } y \neq z \text{ (prisoner } z \text{ is the only one the warden can mention in reply)} \end{cases}$
Now assume, by renaming the prisoners if necessary, that A is the one questioning the warden, and B the one the warden names as executed. After the warden's reply, A's opinion about his chances of being pardoned is expressed by the posterior probability $P(A \mid W_{A,B}, I)$. Using Bayes' theorem, this is expressed as:
- $P(A \mid W_{A,B}, I) = \dfrac{P(W_{A,B} \mid A, I)\, P(A \mid I)}{P(W_{A,B} \mid I)}$.
By the rules stated above, the numerator of the right-hand side is:
- $P(W_{A,B} \mid A, I)\, P(A \mid I) = \tfrac{1}{2} \cdot \tfrac{1}{3} = \tfrac{1}{6}$.
The normalizing constant in the denominator can be evaluated by expanding it using the definitions of marginal probability and conditional probability:
- $P(W_{A,B} \mid I) = P(W_{A,B} \mid A, I)\,P(A \mid I) + P(W_{A,B} \mid B, I)\,P(B \mid I) + P(W_{A,B} \mid C, I)\,P(C \mid I) = \tfrac{1}{2} \cdot \tfrac{1}{3} + 0 \cdot \tfrac{1}{3} + 1 \cdot \tfrac{1}{3} = \tfrac{1}{2}$.
Dividing the numerator by the normalizing constant yields:
- $P(A \mid W_{A,B}, I) = \dfrac{1/6}{1/2} = \tfrac{1}{3}$.
As the value of the posterior probability is equal to the prior one, the warden has given no additional information concerning A's fate. Further, since B is to be executed, it is $P(B \mid W_{A,B}, I) = 0$. Hence the chances that C is to be pardoned are, in A's opinion,
- $P(C \mid W_{A,B}, I) = 1 - P(A \mid W_{A,B}, I) - P(B \mid W_{A,B}, I) = 1 - \tfrac{1}{3} - 0 = \tfrac{2}{3}$.
Therefore A should judge it safer to switch his fate with C's.
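The whole derivation can be reproduced with exact rational arithmetic in a short Python sketch. The function name `warden_likelihood` is an illustrative label, not standard terminology; it encodes the conditional probability rule $P(W_{y,z} \mid x, I)$ stated above:

```python
from fractions import Fraction

def warden_likelihood(x, y, z):
    """P(W_{y,z} | x, I): probability that, asked by y, the warden names z,
    given that x is the prisoner to be pardoned."""
    if y == z or z == x:
        return Fraction(0)      # warden never names the asker or the pardoned
    if y == x:
        return Fraction(1, 2)   # coin flip between the other two prisoners
    return Fraction(1)          # only one name is allowed

prior = Fraction(1, 3)          # each prisoner is equally likely to be pardoned

# Normalizing constant P(W_{A,B} | I), by marginalizing over the pardoned prisoner.
evidence = sum(warden_likelihood(x, "A", "B") * prior for x in "ABC")

# Posterior P(x | W_{A,B}, I) for each prisoner x, via Bayes' theorem.
posterior = {x: warden_likelihood(x, "A", "B") * prior / evidence for x in "ABC"}
print(posterior)  # A: 1/3, B: 0, C: 2/3
```

Because `Fraction` arithmetic is exact, the result matches the derivation term by term: the evidence is 1/2, and the posterior distribution is 1/3, 0, 2/3 for A, B, C.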
Enumeration of possible cases
Consider the following six scenarios:
- a B C [B]
- a B C [C]
- A b C [C]
- A b C [C]
- A B c [B]
- A B c [B]
This diagram is to be read as follows: Lowercase is for the prisoner being pardoned, while a capital letter means capital punishment. Between brackets is the name of the prisoner being executed as revealed by the warden when A asks. So if A himself will be pardoned, the warden can either answer with B or C, in this case choosing at random, so that both answers have 50% chance (cases 1 and 2). When B is to be pardoned, of course he only can answer with C (cases 3 and 4). Likewise, when C is to live, the answer must be B (cases 5 and 6). Note that cases 3 and 4 are equal, and so are cases 5 and 6; they still are mentioned because in this way all 6 scenarios have an equal chance (1/6) of occurring.
It is now clear that if the warden answers B to A (which he does in 3 of the 6 cases), then in one of those cases A will live and in two of them C will live. Is C now suddenly twice as likely to live as A? Yes, if only those 3 cases are considered. But they are only half of the picture: in the other 3 cases, when the warden answers C, it is B who is twice as likely to live as A. In other words, suppose the executions were changed into torture repeated every day on two prisoners chosen at random, and A asked the warden every day. Then in half the cases (when B is the answer) he would find 1 chance for himself to avoid suffering, zero for B and 2 for C, and in the other half (when C is the answer) 1 for himself, 2 for B, and zero for C. So on average each prisoner still has 2 chances out of 6.
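This counting argument can be made concrete in a few lines of Python (an illustrative sketch; the scenario table mirrors the six equally weighted cases above):

```python
from fractions import Fraction

# The six equally likely scenarios: (pardoned prisoner, warden's answer to A).
scenarios = [
    ("A", "B"), ("A", "C"),   # coin flip when A is pardoned
    ("B", "C"), ("B", "C"),   # duplicated so every row has weight 1/6
    ("C", "B"), ("C", "B"),
]
weight = Fraction(1, 6)

# Condition on the warden answering "B".
says_b = [(p, a) for p, a in scenarios if a == "B"]
p_b_answer = len(says_b) * weight                                   # 3 rows -> 1/2
p_a_given_b = sum(weight for p, a in says_b if p == "A") / p_b_answer
p_c_given_b = sum(weight for p, a in says_b if p == "C") / p_b_answer
print(p_a_given_b, p_c_given_b)  # 1/3 and 2/3
```

Conditioning on the answer "C" instead gives the mirror-image result, 1/3 for A and 2/3 for B, which is the symmetry the paragraph above describes.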
The wording of the question matters considerably. Instead, consider having to choose to be one of the prisoners A, B or C, much as in the Monty Hall problem. Picking at the outset, each choice carries a 2/3 chance of dying. Picking after the warden has "removed" one of the choices (say B), you would still have a 2/3 chance of dying if you picked A, but only a 1/3 chance if you picked C.
Why the paradox?
People who answer 1/2 are neglecting to take into account the query that the warden was asked. Had the query been "Will B be executed?", then the warden's answer "Yes, B will be executed" would indeed leave a probability of 1/2 that A dies. Judea Pearl (1988)[3] used a variant of this example to demonstrate that belief updates must depend not merely on the facts observed but also on the experiment (i.e., the query) that led to those facts.
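Pearl's point, that the update depends on the query and not merely on the revealed fact, can be illustrated with a small Python sketch comparing the two conditionings on the same joint distribution:

```python
from fractions import Fraction

third = Fraction(1, 3)
# Joint distribution over (pardoned prisoner, warden's reply to A).
joint = {
    ("A", "B"): third / 2,  # A pardoned, coin comes up B
    ("A", "C"): third / 2,  # A pardoned, coin comes up C
    ("B", "C"): third,      # B pardoned: warden must say C
    ("C", "B"): third,      # C pardoned: warden must say B
}

# Update on the event actually observed: the warden SAID "B will be executed".
said_b = {k: v for k, v in joint.items() if k[1] == "B"}
p_a_said = said_b[("A", "B")] / sum(said_b.values())

# Update instead on the bare fact "B will be executed" (i.e. B is not pardoned).
b_executed = {k: v for k, v in joint.items() if k[0] != "B"}
p_a_fact = sum(v for k, v in b_executed.items() if k[0] == "A") / sum(b_executed.values())

print(p_a_said, p_a_fact)  # 1/3 versus 1/2
```

The two conditionings give 1/3 and 1/2 respectively: the same fact, reached through different experiments, supports different posterior beliefs.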
References
- ^ Gardner, Martin (1959a). "Mathematical Games" column, Scientific American, October 1959, pp. 180–182.
- ^ Gardner, Martin (1959b). "Mathematical Games" column, Scientific American, November 1959, p. 188.
- ^ Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, San Mateo, CA: Morgan Kaufmann Publishers, Inc., First Edition, 1988.