Newcomb's paradox

Newcomb's Paradox, also referred to as Newcomb's Problem, is a thought experiment involving a game between two players, one of whom purports to be able to predict the future. Whether the problem is actually a paradox is disputed.

Newcomb's paradox was created by William Newcomb of the University of California's Lawrence Livermore Laboratory. However, it was first analyzed in a philosophy paper by Robert Nozick in 1969, which spread it through the philosophical community, and it appeared in Martin Gardner's Scientific American column in 1974. Today it is a much-debated problem in the philosophical branch of decision theory but has received little attention from the mathematical side.

The problem

A person is playing a game operated by the Predictor, an entity somehow presented as being exceptionally skilled at predicting people's actions. The exact nature of the Predictor varies between retellings of the paradox. Some assume that the character always has a reputation for being completely infallible, and the Predictor can be presented as a psychic, as a superintelligent alien, as God, and so on. However, the original discussion by Nozick says only that the Predictor's predictions are "almost certainly" correct, and also specifies that "what you actually decide to do is not part of the explanation of why he made the prediction he made". With this original version of the problem, some of the discussion below is inapplicable.

The player of the game is presented with two opaque boxes, labeled A and B. The player may take the contents of both boxes, or of box B alone. (The option of taking only box A is ignored, for reasons soon to be obvious.) Box A contains $1,000. The contents of box B, however, are determined as follows: At some point before the start of the game, the Predictor makes a prediction as to whether the player of the game will take just box B, or both boxes. If the Predictor predicts that both boxes will be taken, then box B will contain nothing. If the Predictor predicts that only box B will be taken, then box B will contain $1,000,000.

By the time the game begins, and the player is called upon to choose which boxes to take, the prediction has already been made, and the contents of box B have already been determined. That is, box B contains either $0 or $1,000,000 before the game begins, and once the game begins even the Predictor is powerless to change the contents of the boxes. Before the game begins, the player is aware of all the rules of the game, including the two possible contents of box B, the fact that its contents are based on the Predictor's prediction, and knowledge of the Predictor's infallibility. The only information withheld from the player is what prediction the Predictor made, and thus what the contents of box B are.

Predicted choice   Actual choice   Payout
A and B            A and B         $1,000
A and B            B only          $0
B only             A and B         $1,001,000
B only             B only          $1,000,000

The problem is called a paradox because two strategies that both sound intuitively logical give conflicting answers to the question of what choice maximizes the player's payout. The first strategy argues that, regardless of what prediction the Predictor has made, taking both boxes yields more money. That is, if the prediction is for both A and B to be taken, then the player's decision becomes a matter of choosing between $1,000 (by taking A and B) and $0 (by taking just B), in which case taking both boxes is obviously preferable. But, even if the prediction is for the player to take only B, then taking both boxes yields $1,001,000, and taking only B yields only $1,000,000—the difference is slight in the latter case, but taking both boxes is still better, regardless of what prediction has been made.
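
The dominance argument can be checked mechanically against the payoff table above. The following sketch in Python (the dictionary layout and names are illustrative, not part of the problem statement) holds each possible prediction fixed and confirms that taking both boxes pays more either way:

    # Payoff table from the problem: (predicted, actual) -> payout.
    PAYOUT = {
        ("both", "both"): 1_000,
        ("both", "B only"): 0,
        ("B only", "both"): 1_001_000,
        ("B only", "B only"): 1_000_000,
    }

    # Hold each possible prediction fixed and compare the player's options.
    for predicted in ("both", "B only"):
        two_box = PAYOUT[(predicted, "both")]
        one_box = PAYOUT[(predicted, "B only")]
        print(f"Prediction {predicted!r}: both boxes pay ${two_box:,}, "
              f"box B alone pays ${one_box:,}")
        assert two_box > one_box  # two-boxing dominates in every case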

The second strategy suggests taking only B. By this strategy, we can ignore the possibilities that return $0 and $1,001,000, as they both require that the Predictor has made an incorrect prediction, and the problem states that the Predictor cannot be wrong. Thus, the choice becomes whether to receive $1,000 (both boxes) or to receive $1,000,000 (only box B)—so taking only box B is better.

In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."

Thoughts on the paradox

Many argue that the paradox is primarily a matter of conflicting decision-making models. Using the expected utility hypothesis leads one to believe that one should expect the most utility (or money) from taking only box B. However, if one uses the dominance principle of game theory, one would expect to benefit most from taking both boxes.
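
The conflict between the two models can be made concrete. In the sketch below, the accuracy parameter p is an assumption added for illustration (Nozick says only that the predictions are "almost certainly" correct); taking only box B has the higher expected payout whenever p exceeds the break-even point of 0.5005:

    def expected_payout(p: float) -> tuple[float, float]:
        # Expected payouts (one-box, two-box) when the Predictor is
        # correct with probability p (an illustrative assumption).
        one_box = p * 1_000_000                    # correct: B holds $1M
        two_box = p * 1_000 + (1 - p) * 1_001_000  # correct: B is empty
        return one_box, two_box

    # Break-even: p * 1_000_000 = 1_000 + (1 - p) * 1_000_000, so p = 0.5005.
    for p in (0.5, 0.6, 0.9, 0.99):
        one, two = expected_payout(p)
        better = "B only" if one > two else "both boxes"
        print(f"accuracy {p:.2f}: B only ${one:,.0f}, both ${two:,.0f} -> {better}")

The dominance reasoning is insensitive to p because it treats the prediction as fixed, while the expected-utility reasoning lets the probability of each outcome depend on the player's own choice.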

Some argue that Newcomb's Problem is a paradox because it leads logically to self-contradiction. Reverse causation is defined into the problem and therefore logically there can be no free will. However, free will is also defined in the problem; otherwise the chooser is not really making a choice.

Other philosophers have proposed many solutions to the problem, several of which eliminate its seemingly paradoxical nature:

Some suggest a rational person will choose both boxes and an irrational person will choose the closed one; since the Predictor cannot actually exist, rational people fare better. Others have suggested that an irrational person will do better than a rational person, and interpret this paradox as showing how people can be punished for making rational decisions. [citation needed]

The rationality of the person who chooses the closed box depends upon facts concerning the Predictor. If, as posited, the Predictor is 100% accurate and reliably puts the million dollars in the closed box, and the chooser knows this, then the only rational choice is to pick box B. If the player knows the Predictor is unreliable, then the only rational choice is to take both boxes.

Others have suggested that in a world with perfect predictors (or time machines, since a time machine could be the mechanism for making the prediction) causation can go backwards. [citation needed] If a person truly knows the future, and that knowledge affects his actions, then events in the future will be causing effects in the past. The Chooser's choice will have already caused the Predictor's action. Some have concluded that if time machines or perfect predictors can exist, then there can be no free will and the Chooser will do whatever he is fated to do. Others conclude that the paradox shows that it is impossible to ever know the future. Taken together, the paradox is a restatement of the old contention that free will and determinism are incompatible, since perfect predictors require determinism. Some philosophers argue this paradox is equivalent to the grandfather paradox.

Newcomb's paradox can also be related to the question of machine consciousness, specifically whether a perfect simulation of a person's brain will generate the consciousness of that person. [1] Suppose we take the Predictor to be a machine that arrives at its prediction by simulating the brain of the Chooser when confronted with the problem of which box to choose. If that simulation generates the consciousness of the Chooser, then the Chooser cannot tell whether he is standing in front of the boxes in the real world or in the virtual world generated by the simulation. The "virtual" Chooser would thus tell the Predictor which choice the "real" Chooser is going to make.

Glass box

Newcomb's Problem has been extended by asking how behavior would change if box B were made of glass (this draws parallels with Schrödinger's cat). What should the Chooser do now?

If he sees $1,000,000 in box B, then he might as well choose both boxes, and get both the $1,000,000 and the $1,000. If he sees box B is empty, he might be angry at being deprived of a chance at the big prize and so choose just the one box to demonstrate that the game is a fraud. Either way, his actions will be the opposite of what was predicted, which contradicts the premise that the prediction is always right.
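
The contradiction can be spelled out case by case. In this minimal sketch (assuming the Chooser reacts to the visible contents exactly as described above; the function name is illustrative), neither possible prediction survives the Chooser's reaction:

    # With a transparent box B, the choice depends on what the Chooser sees.
    def reaction(box_b_contents: int) -> str:
        return "both" if box_b_contents == 1_000_000 else "B only"

    for predicted in ("both", "B only"):
        contents = 0 if predicted == "both" else 1_000_000
        actual = reaction(contents)
        outcome = "holds" if actual == predicted else "fails"
        print(f"predicted {predicted!r}: box B holds ${contents:,}, "
              f"Chooser takes {actual!r} -> prediction {outcome}")
    # Both iterations print "fails": no self-consistent prediction exists.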

Some philosophers take the glass box version of Newcomb's paradox as proof of one or more of the following:

  1. It is impossible to know the future.
  2. Knowledge of the future is only possible in cases where the knowledge itself won't prevent that future.
  3. The universe will conspire to prevent self-contradictory causal loops (via the Novikov self-consistency principle, for example).
  4. The Chooser might accidentally make the wrong selection, or he might misunderstand the rules, or the time machine/prediction engine might break.

References

  1. Neal, R. M., "Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning," preprint.
  • Nozick, Robert (1969), "Newcomb's Problem and Two Principles of Choice," in Essays in Honor of Carl G. Hempel, ed. Nicholas Rescher, Synthese Library (Dordrecht, the Netherlands: D. Reidel), p. 115.
  • Gardner, Martin (1974), "Mathematical Games," Scientific American, March 1974, p. 102; reprinted with an addendum and annotated bibliography in his book The Colossal Book of Mathematics (ISBN 0-393-02023-1).
  • Campbell, Richmond, and Lanning Sowden, eds. (1985), Paradoxes of Rationality and Cooperation: Prisoners' Dilemma and Newcomb's Problem, Vancouver: University of British Columbia Press. (An anthology discussing Newcomb's Problem, with an extensive bibliography.)
  • Levi, Isaac (1982), "A Note on Newcombmania," Journal of Philosophy 79: 337-342. (A paper discussing the popularity of Newcomb's Problem.)
  • Collins, John (2001), "Newcomb's Problem," International Encyclopedia of the Social and Behavioral Sciences, ed. Neil Smelser and Paul Baltes, Elsevier Science.
