Talk:Newcomb's paradox

From Wikipedia, the free encyclopedia

Contents

[edit] Consistency of terminology

When adding to this page, can contributors please keep to a single terminology (currently "Box A / Box B" at the start), and not deviate to other terminologies such as "red / blue", "? / X", "Open / Closed"? Thanks! NcLean 29 May 2004

[edit] Predictor can not see the future

Pulled this paragraph because it is untrue. The two players can equally achieve their goals and reach equilibrium if Chooser always takes the closed box. Only if Predictor has a goal of minimizing payments is this true (and that is not part of Newcomb's Paradox). Rossami 21:04, 18 Mar 2004 (UTC)

A game theory analysis is straightforward. If Chooser wants to maximize profit, and Predictor wants to maximize the accuracy of the predictions, then the Nash equilibrium is for Chooser to always take 2 boxes, and for Predictor to always predict that 2 boxes will be chosen. This gives a payout of $1000 and a perfect prediction every time. If Predictor's goal is to minimize payments (rather than maximize prediction accuracy), the equilibrium is the same. If two people played this game repeatedly, they would probably settle into this equilibrium fairly quickly.

I believe that the above paragraph is correct and reinserted it. Suppose Chooser wants to maximize profit and Predictor wants to maximize prediction accuracy. Your suggestion of Chooser always taking the closed box and Predictor always predicting the closed box is not a Nash equilibrium: Chooser could improve his profit by switching strategies. AxelBoldt 18:21, 24 Mar 2004 (UTC)

If Chooser always takes the closed box and Predictor has always predicted that only the closed box will be taken, then Chooser walks away with $1,000,000 and Predictor still has a perfect prediction record. Perhaps I am misunderstanding your scenario? Rossami 23:53, 24 Mar 2004 (UTC)
That's the scenario I'm talking about. It's not a Nash equilibrium: Chooser could switch strategies, take both boxes and end up with more money. In a Nash equilibrium, none of the players can increase their profits by unilaterally changing strategy.
By contrast, the scenario where Chooser always takes both boxes and Predictor always predicts this is a Nash equilibrium: if either player unilaterally changes their strategy, they will be worse off. AxelBoldt 03:24, 25 Mar 2004 (UTC)
You're right. I was confusing Nash equilibrium with a different equilibrium concept. Thanks for correcting. Rossami
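
For anyone who wants to check the claim above mechanically, here is a minimal sketch (my own, not from any source) that enumerates the pure-strategy profiles of the one-shot game, assuming the Chooser's payoff is the money received and the Predictor's payoff is 1 for a correct prediction and 0 otherwise. It confirms that {choose both, predict both} is the only pure-strategy Nash equilibrium.

    # Hypothetical payoff specification: Chooser gets the money, Predictor gets 1 if correct.
    CHOOSER_MONEY = {          # (prediction, choice) -> dollars for the Chooser
        ("one", "one"): 1_000_000,
        ("one", "both"): 1_001_000,
        ("both", "one"): 0,
        ("both", "both"): 1_000,
    }

    def predictor_payoff(prediction, choice):
        return 1 if prediction == choice else 0

    STRATEGIES = ("one", "both")

    for prediction in STRATEGIES:
        for choice in STRATEGIES:
            # No profitable unilateral deviation for either player?
            chooser_ok = all(CHOOSER_MONEY[(prediction, choice)] >= CHOOSER_MONEY[(prediction, alt)]
                             for alt in STRATEGIES)
            predictor_ok = all(predictor_payoff(prediction, choice) >= predictor_payoff(alt, choice)
                               for alt in STRATEGIES)
            if chooser_ok and predictor_ok:
                print("Nash equilibrium: predict", prediction, "/ choose", choice)
    # Only {predict both, choose both} is printed; {predict one, choose one} fails
    # because the Chooser gains by switching to both boxes, as noted above.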

[edit] Unrelated insults

This page embodies the extremely stupid philosophies held by professors everywhere. Bensaccount 22:27, 18 Mar 2004 (UTC)

Good morning, Bensaccount. This is a real paradox that does not have a trivial answer. Please study the issue more carefully before imposing your personal interpretation on the problem. As the article mentions, it is often very difficult for those who see one solution to recognize the validity of the other interpretation. Thanks. Rossami 13:45, 19 Mar 2004 (UTC)

[edit] Hume quotation

The David Hume quote is interesting, but I don't see what its connection to this article is. Can you please make the connection more explicit? Rossami 00:57, 20 Mar 2004 (UTC)

Reverse causation is defined in the problem. The reason that people are choosing the second option is because they don't get that reverse causation (the choice in the future affects the outcome of the choice) is defined in the problem. They are arguing with something that has been defined, because the wording of the problem does not convince them thoroughly enough that the events in the future are causing the results in the past.
To sum up: They are arguing with the definition of the problem.Bensaccount 01:15, 20 Mar 2004 (UTC)
I'm not sure that interpretation is universally held. A quantum physicist, for example, might reach the same conclusion that you do - choose the closed box - from a completely different perspective. The quantum perspective would consider the closed box in a state of superposition until the point of choice. Reverse causation is unnecessary. A many-worlds cosmologist, on the other hand, might take both boxes, arguing credibly that reverse causation is an illusion - an artifact of perception. Let me take a few days to do some external research, though. Thanks. Rossami 05:04, 20 Mar 2004 (UTC)

Reverse causation is defined in the problem. This is not a paradox. Bensaccount 01:04, 22 Mar 2004 (UTC)

Wow that glass box version is confusing. Prediction with 100% certainty means that there is no free will. This is still defined in the problem. Bensaccount 03:32, 25 Mar 2004 (UTC)

I regret your removal of this relevant quotation. Do you have a reason? Bensaccount 03:55, 25 Mar 2004 (UTC)

[edit] Major bias in the old version

The old argument is extremely biased. I will show why:

  1. The names of the philosophers who support option two are not mentioned.
  2. The source of the 50-50 split result is not mentioned.
  3. The fact that 100% accuracy of prediction eliminates free will is not mentioned.
  4. Only the argument for the second version is given. (Shouldn't it explore both arguments equally to be fair?)

I have more if you want it. Bensaccount 03:48, 25 Mar 2004 (UTC)

[edit] Paradox requirements

This is not a paradox because there is only one possible outcome based on the definition of the problem. Bensaccount 03:48, 25 Mar 2004 (UTC)

A paradox leads logically to self-contradiction. This does no such thing. Only an illogical argument with the problem itself will lead to contradiction. The problem leads only to a single final outcome. Bensaccount 04:04, 25 Mar 2004 (UTC)

This is indeed a paradox, as two widely accepted principles of decision making (Expected Utility and Dominance) contradict one another as to which is the best decision.

Kaikaapro 00:18, 13 September 2006 (UTC)

[edit] Whether the paradox is real

After several days research and thought, I am firmly convinced 1) that this is a paradox with a non-trivial analysis and 2) that the original version (while imperfect) was closer to NPOV than the current version.

Bensaccount's primary complaint seems to be that because "reverse causation is defined into the problem" there is only one solution. However, free will is also "defined into the problem" - otherwise Chooser is not really making a choice. Using Bensaccount's framework, we have two mutually incompatible conclusions yet neither of the premises (free will and the ability to predict the future) can be easily or obviously dismissed as untrue.

Ok, well proven, I didn't see that before. I stand corrected. Bensaccount 00:54, 1 Apr 2004 (UTC)

[edit] Physics

Further, I remain unconvinced that Bensaccount's framework is the only framework through which this problem can be analyzed. Reverse causation is not necessarily defined into the problem. If your view of cosmology is bounded by classical physics, it is. If you extend the bounds to other views of cosmology (quantum superpositions, etc.), the problem can still be productively analyzed without resorting to reverse causation. Rossami 00:31, 31 Mar 2004 (UTC)


[edit] Calculations

Assume that the Predictor has a probability p of making a correct prediction. Then the best strategy is:

Choose one box if p > 50.05%

Choose both boxes if p <= 50.05%

The expected outcome of choosing one box is ( 1000000p ) dollars
The expected outcome of choosing both boxes is ( 1001000 - 1000000p ) dollars
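
A quick sketch of that comparison (assuming, as above, that the Predictor is right with probability p regardless of which choice is made):

    # Expected value of each choice when the prediction is correct with probability p.
    def eu_one_box(p):
        return 1_000_000 * p                    # $1,000,000 only if "one box" was predicted

    def eu_both_boxes(p):
        return 1_000 + 1_000_000 * (1 - p)      # $1,000 always, plus $1,000,000 if the prediction was wrong

    # The two are equal when 1,000,000 p = 1,001,000 - 1,000,000 p, i.e. p = 0.5005.
    for p in (0.4, 0.5005, 0.9, 1.0):
        print(p, eu_one_box(p), eu_both_boxes(p))
    # p = 0.9 gives $900,000 vs $101,000, matching the table in the next comment.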

I think the principle you are looking for is Expected Utility. Assuming the predictor has a prediction accuracy of 90%, the expected utilities look as follows:

                             90%                     10%
                             Predictor is right      Predictor is wrong     Expected utility
   You take one box          $900,000                $0                     $900,000
   You take two boxes        $900                    $100,100               $101,000


As you can see, taking one box has the highest expected utility.

Kaikaapro 00:10, 13 September 2006 (UTC)

I think the problem in this calculation lies in the Prosecutor's fallacy. If the predictor has a probability p of 90%, what it means is this: if the prediction is X, then the probability of the Decision being X is 90%. You cannot use this probability to compute the Expected Utility, which needs the probability q: the probability of the Prediction having been X, knowing that the Decision is X. If we write P=AB for the event "Prediction = 2 boxes", P=B for the event "Prediction = 1 box", and AB and B for the events "Decision = 2 boxes" and "Decision = 1 box" respectively, then using Bayes' theorem and conditional probabilities we see

P(P=X \mid X) = \frac{P(X \mid P=X) \cdot P(P=X)}{P(X)}

and

P(X | P = X) = 90%

So it all depends on the probability that the psychic predicts X and the probability that the Chooser chooses X. Can we know those values?

If we consider the case where p = 100%, the two premises (1: the Chooser has free will; 2: the decision does not affect the prediction) are obviously contradictory. Hiding the problem behind probabilities only exploits the fact that we are not used to thinking in terms of time travel.

Well, anyway...

How is p different from q here: "If the predictor has a probability p of 90%, what it means is this: if the prediction is X, then the probability of the Decision being X is 90%. You cannot use this probability to compute the Expected Utility, which needs the probability q: the probability of the Prediction having been X, knowing that the Decision is X"? How are these two different, am I missing something? Krum Stanoev 09:12, 28 September 2006 (UTC)
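
To illustrate the distinction (with made-up numbers, purely as a toy example): take any joint distribution over (Prediction, Decision) in which the prediction is right 90% of the time whichever box is predicted. The two conditional probabilities p = P(Decision = X | Prediction = X) and q = P(Prediction = X | Decision = X) then generally come out different:

    # Hypothetical joint probabilities over (prediction, decision); the predictor
    # predicts "one" with probability 0.8 and is right 90% of the time either way.
    joint = {
        ("one", "one"):   0.72,
        ("one", "both"):  0.08,
        ("both", "one"):  0.02,
        ("both", "both"): 0.18,
    }

    def conditional(value_a, index_a, value_b, index_b):
        """P(component index_a equals value_a | component index_b equals value_b)."""
        den = sum(p for k, p in joint.items() if k[index_b] == value_b)
        num = sum(p for k, p in joint.items() if k[index_a] == value_a and k[index_b] == value_b)
        return num / den

    # p: P(Decision = one box | Prediction = one box)
    print(conditional("one", 1, "one", 0))   # 0.9
    # q: P(Prediction = one box | Decision = one box)
    print(conditional("one", 0, "one", 1))   # about 0.973

So the 90% figure by itself does not pin down the probability needed for the expected-utility table; that was the point about the Prosecutor's fallacy above.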

[edit] Nash equilibria

The game theory section is quite confused, which is amusing given how straightforward the analysis claims to be.

The article can't decide whether it's talking about a single-round Nash equilibrium, or the Nash equilibrium of a repeated game. It talks about repetition, and a threatened "retaliation" by the Predictor, but it describes an equilibrium which doesn't need retaliation or a repeated game at all. The equilibrium {choose both boxes, predict both boxes} is quite stable for the single-round game, and is the only Nash equilibrium.

Both players choosing "both boxes" is also a Nash equilibrium of the repeated game. However, in a repeated game another Nash equilibrium becomes possible. If the Predictor adopts the strategy "always predict one box unless the Chooser chose two boxes last round", then the Chooser can always choose one box and has no incentive to deviate. Deviating would net an extra 1,000 for that round, but would cap the payoff at 1,000 in the next round, where it would otherwise have been 1,000,000. Hence the deviation is discouraged and the equilibrium is stable.

The real question is, is any of this germane to the article's topic? If this section could be made concise and simple, I could see including it. I don't see a way to make it concise and still correct unless you want to dismiss the possibility of playing repeatedly. Isomorphic 02:41, 12 Aug 2004 (UTC)

Correct, in the single-round game the only Nash equilibrium is {choose both, predict both}. However, both the article and the previous comment fail to state whether the game is finitely or infinitely repeated, which makes a big difference for the trigger strategy that both want to apply. In the finitely repeated case, the trigger strategy fails, since the game can be solved by backward induction starting from the last period, where the chooser would deviate for sure. In the infinitely repeated game, one needs a discount factor (or else the chooser would accumulate an infinite amount of money from a positive payoff each round). Then one can calculate the discount factor needed to sustain the "good" {choose one, predict one} equilibrium. The discount factor changes with the payoffs.
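
For one concrete specification of that calculation (my own assumptions, not from the article: infinitely repeated game, Chooser's payoff is the money received, and the Predictor plays a grim trigger - predict one box until the Chooser ever takes both, then predict both forever, a harsher version of the retaliation described above), the critical discount factor works out to be tiny:

    from fractions import Fraction

    ONE_BOX  = 1_000_000   # per-round payoff on the cooperative {choose one, predict one} path
    DEVIATE  = 1_001_000   # one-off payoff from taking both boxes while "one box" is predicted
    PUNISHED = 1_000       # per-round payoff once the Predictor switches to predicting both forever

    def cooperation_is_stable(delta):
        """Chooser prefers cooperating forever to deviating once and being punished forever."""
        cooperate = ONE_BOX / (1 - delta)
        deviate = DEVIATE + delta * PUNISHED / (1 - delta)
        return cooperate >= deviate

    # Solving 1,000,000/(1-d) >= 1,001,000 + 1,000*d/(1-d) for d:
    critical = Fraction(DEVIATE - ONE_BOX, DEVIATE - PUNISHED)
    print(critical)                        # 1/1000
    print(cooperation_is_stable(0.0005))   # False: too impatient, deviation pays
    print(cooperation_is_stable(0.01))     # True: one-boxing is sustained

As the comment says, the threshold moves with the payoffs: the smaller the gap between the two boxes' contents, the more patient the Chooser has to be.
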
So what does this have to do with the paradox? Not a whole lot, if one is interested in the time travel/free will issue (not forgetting that the predictor's payoff is not well defined). --Xeeron 18:20, 2 December 2005 (UTC)
I agree. The whole game theory section should get on a piece of fat and slide off. Kaikaapro 00:20, 13 September 2006 (UTC)
I went ahead and deleted it. Later on I'll add a section on Expected Utility. Kaikaapro 00:25, 13 September 2006 (UTC)

[edit] freaking crazy schiznap

This is the perfect kinda stuff to talk about when one is high/drunk. I love it. Anywho, just wanted to compliment you all on a good article. The bellman 01:49, 2004 Nov 25 (UTC)

[edit] No original research

This article very likely contains violations of Wikipedia's original research policy. Many of the proposed solutions to this paradox are attributed to "some philosophers" and other unnamed individuals. If you've made contributions to this article, please try to cite the sources of your information. If you've contributed theories and explanations to this article that you personally formulated yourself, please remove them, or consider moving them to other wiki projects that allow original research. I'll try to clean some of this up a bit, but it's of course easier if the original contributors help out as well. -- Schaefer 01:05, 27 Nov 2004 (UTC)

[edit] Article move

I'm proposing this article be moved from Newcomb's paradox to Newcomb's Problem. Whether this is actually a paradox is quite disputed. The second word should be capitalized as it refers to a specific problem by Newcomb. Also, it seems the original name of this really was "Newcomb's Problem". Google Scholar reports twice as many references to "Newcomb's Problem" than to "Newcomb's Paradox", and they are roughly equal on normal Google (all searches were conducted with quotes). There already exists an article at Newcomb's Problem that just redirects here, but has a few edits in the history so I'm taking this through Wikipedia:Requested moves. -- Schaefer 01:49, 27 Nov 2004 (UTC)

  • I no longer desire to move this article, after being made aware of the popularity of the term "paradox" to describe it in previous decades. I was admittedly taking the term too literally. -- Schaefer 04:51, 27 Nov 2004 (UTC)

[edit] About that Glass box...

Isn't the glass box example completely illogical, inapplicable and even unnecessary? It is like saying you have two open boxes, so why bother making it glass? Come to think of it, it is actually like having no box at all. Rather, it would be equivalent to "the predictor" holding out the one or two bills and saying:

"If I have two bills in my hand, I have predicted that you will only take the $1,000,000 bill. If I have only the $1,000 bill I have predicted you would have attempted to take them both, had I presented you with them".

This obviously defies all logic. In case the predictor is holding only one bill, he is not providing the so-called chooser with a choice at all; he is instead presenting him with the consequence of the choice he would have made had he had the chance to! But if determinism is the rule, does it not then follow that predictions of alternative futures/realities are impossible, or at least irrelevant? And if presented with the two bills, truly having a choice and being told he can take both, why wouldn't he? If we consider it a fact that one of the prerequisites of the paradox is that the chooser wants as much money as possible, it seems he once again is left without a choice. Indeed, the motive of the chooser has to be to get as much money as possible; if it weren't, he would choose box or boxes at random and it would indeed not matter if, instead of money, specks of dust were placed in the boxes.

One could argue that the difference with actually having a (glass) box is that you can still pick it even if you see it is empty, but is this relevant? Quoting the present article: "If he sees the closed box is empty, he might be angry at being deprived of a chance at the big prize and so choose just the one box to demonstrate that the game is a fraud". Certainly, that would be a possibility if the chooser regarded the $1,000 as dispensable, but would that not violate the "maximum money" prerequisite? Let me elucidate: I cannot see a reason why changing the value on the bills would violate the "paradox". Let us then say that the $1,000 bill is not a $1,000 bill at all but actually a $999,999 bill. If one suggests this alteration DOES violate the rules of the paradox, I question where exactly between $1,000 and $999,999 this violation occurs and how it can be explained. If it is not a violation and we go along with the example, then there is no reason for the chooser to be angry and choose the empty box. According to the predictor, the chooser could only have walked away with one measly dollar more even if he had been presented with both the bills.

Therefore, the only prediction that can be made is that the chooser will take what money he is presented with. Thus, the predictor can ultimately only offer one choice: the $1,000 bill (or the $999,999 if you prefer). Now, the circle is complete, he hasn't offered the chooser a choice and there is no paradox!

Doesn't this in fact hold equally true for the main "paradox", be the boxes made out of glass, some other material, or non-existent? Isn't the only difference between the glass box and the original problem that the chooser in the original will take what money he's presented with (i.e. both boxes) only if he's "rational" about it? But what if we take the rationality of the chooser for granted? In other words, what if all people were that rational, or at least informed about these circumstances before making their choice?

I realise I am not the first person to claim this is not a paradox, but I haven't seen it presented this way before. I must however admit things get slightly more complicated when working with the main example (and I'm too tired to develop it any further right now), but anyway, I think it's interesting. Basically, I guess I'm just questioning whether philosophers really use such a flawed example as the "glass box paradox" or if it's just a violation of Wikipedia's "no original research" policy. Also, excuse me for any grammatical shortcomings; I am not a native English speaker and it's 4 PM. Guess I got a little carried away! --Mackanma 02:22, 8 May 2005 (UTC)

1) "Glass box" is just a fancy way of saying "no box" or "open top box" or "box with a hole through which you can peek inside" or whatever. It's inconsequential.
2) The original situation and its outcomes appear to be somewhat counter-intuitive, but are nevertheless consistent: you pick, and whatever you choose, Predictor is always correct. Psychologically, it's convincing. It appears to be "magical", but what is foretelling the future if not magical?
3) The glass box situation is radically different. It is no longer convincing. It is self-contradictory. The paradox lies in the following question: can you accurately predict the outcome of a future event if your prediction itself affects that outcome? And that's exactly Minority Report... GregorB 19:07, Jun 2, 2005 (UTC)
That's the point of the paradox... there can be no perfect prediction of the future if there is to be free will. They're mutually exclusive, and this demonstrates it. -P.

[edit] Brain Lateralization

As a potential reason for the almost dichotomous split.

Self-oriented temporal linearity vs whatever is the other viewpoint. 24.22.227.53 22:36, 13 August 2005 (UTC)

[edit] Removed paragraph

According to Raihan's Hypothesis, there will be a third agent playing other than the Predictor and Chooser. If the Predictor is 100% correct in his prediction, then the Chooser will logically choose exactly according to the prediction, even if he decides to choose the opposite. This will be accomplished by an accident initiated by the third agent. Raihan's Hypothesis bases itself on the assumption that the past as well as the present and future are fixed, unchangeable points. The knowledge of the future cannot change the future. The necessary condition of an accurate Predictor is an accurate Protector, or law, that will ensure this immutability. This is interesting because one accurate prediction generates at least another prediction with certain variations. For example, the Predictor predicts that in some distant future the Chooser won't burn a certain piece of paper, and knowing this prediction the Chooser decides to do the opposite. What will happen? According to Raihan's hypothesis the paper won't be burnt. But the Predictor also predicts (knowing the Chooser's intention) that either the Chooser changes his decision for good, or he will die, or he won't find fire anywhere, etc.

The above paragraph lacks citation and verifiability. Google searches for "Raihan's Hypothesis" and "Raihan Hypothesis" turn up zero hits. Searching for just Raihan returns pages mostly about a music group of the same name, and of the links that aren't about the music group, none looks relevant. Wikipedia has no further information on who Raihan is that I can find. Raihan is not mentioned as an external reference in this article. Until verifiable information on who "Raihan" is and what his hypothesis is can be found, I'm moving this paragraph to talk. -- Schaefer 06:49, 9 January 2006 (UTC)

A similar concept is listed in Wikipedia as the Novikov self-consistency principle. 88.108.228.144 22:32, 14 June 2006 (UTC)

[edit] Original research

A very strict reading of the original research page will lead one to conclude that most of what is written on this page is original research and should be deleted. I think this is unreasonable. This is a simple paradox which does have a large impact on issues like free will etc.

But the simplicity of the formulation of the paradox makes it easy to fully explain in a few sentences a new way to look at it. Strictly speaking this is "original", but it is also unpublishable.

We don't uphold this view of original research for other, more technical subjects either. Take e.g. my edits to this section.

This derivation is standard second year university stuff and thus unpublishable. However, strictly speaking, I did introduce a new thing in here you don't find in textbooks to make it easier to understand (the partition function of a single mode in order to avoid infinite products over all modes).

So, let's not be pedantic and delete things that do not need more explanations than a few sentences. Count Iblis 12:53, 21 June 2006 (UTC)

Please understand that by removing what you wrote, I was not in any way disagreeing with what was written, nor was I condoning what was already in the Thoughts section. I also apologize for the glib way I threw out the original research buzzword. Nevertheless, buzzwords aside, I do think that thoughts on free will really are qualitatively different from statistical mechanics, and there ought to be a higher standard for inclusion. May we at least remove the link to your personal blog? 192.75.48.150 17:34, 21 June 2006 (UTC)
No problem! I removed the ref. to my blog. There are probably some articles on machine intelligence, the simulation argument, etc. here on wiki, so an internal link would be appropriate. The proposition that simulating the brain would generate the same consciousness as the brain itself is a hot topic in philosophy. My addition is simply that you can take the predictor to be a computer that simulates the brain under appropriate conditions. That's rather trivial. Count Iblis 18:23, 21 June 2006 (UTC)

[edit] Only one possible prediction ?

Ok, I'm new to this problem, but anyway there's a question that is not addressed here (edit: except in Mackanma's post, but he has had no answer on this subject). I'm assuming here that the predictor is always right. We know that two boxes always contain at least as much as one. The Chooser has nothing to lose by choosing both boxes: the prediction is made, the money is in the boxes. In these circumstances, there is only one rational course of action for the Chooser, so there should be only one possible prediction: Chooser takes both. The question is "What should the Chooser do to maximise his gain?" (implicitly, "What is the best rational course of action?"), so saying that maybe he is adventurous, or doesn't care, is not a valid counter-argument here. Maybe the key to the paradox is not to put free will into question, but to notice that the predictor cannot predict that a rational Chooser acts irrationally. But then even the Chooser has no free will if he is supposed to act rationally and there is only one rational solution. What do you think? CPM --194.51.20.124 16:56, 18 September 2006 (UTC)

Really? Even though you assume the predictor is ALWAYS right? Now, suppose you choose both boxes. The predictor is always right, by assumption. Therefore, he predicted you would choose both boxes. Therefore you make $1000. Suppose I choose only one box. The predictor is always right, by assumption. Therefore, he predicted I would choose only one box. Therefore, I make $1000000. By your argument, you acted to maximize your gain (rationally), and I did not. But, I made more money than you. A contradiction. 192.75.48.150 14:36, 28 September 2006 (UTC)

[edit] Ethics

I've been thinking about this problem for the last couple of days, and I believe it applies to ethics. The two main ethical choices can be described somewhere along the lines of evolution vs. utility (e.g. right wing vs. left wing, etc.). You can make an equally good argument for both causes (e.g. an entrepreneur arguing with an environmentalist over property rights), much like this paradox, yet both can be seen as simultaneously true and false. Though they both make strong arguments, they also completely contradict each other. I was wondering if any philosophers have made this observation before (which I imagine they would have), and if so, who. Richard001 05:06, 1 October 2006 (UTC)

[edit] Coin flips

Hi - I have a question on this, which may well be noob-like, but bear with me.

What about if, when the Predictor makes his prediction, I then flip a coin (or use some other reasonably random binary method to make my decision, which doesn't have to require a prop - no reason why it can't be purely mathematical) - heads I open Box A, tails I open Box B? Is this disallowed by the Thought Experiment? Presumably you would say that the Predictor still predicts the correct outcome? In which case he is predicting the outcome, not my means of arriving at it, right?

Therefore could you not say that I have free will, because even though the eventual outcome can be determined by the Predictor, my method of arriving at the decision cannot? Or can he only predict the outcome when the actual decision is made by 'me', without the use of external artifice?

Or would you argue that my means of arriving at my decision can be said to be irrelevant?

Not that it matters, but can I 'chain' my artifices, i.e. follow a binary tree, using different methodologies at each level, first using some random method to determine how many 'levels' I go down? Therefore even I cannot predict what decision I will make, right? I don't dispute that nothing is truly random to 'The Predictor', but surely that's irrelevant - the point is that *I* cannot know what I'm going to choose. By using a chained set of artifices to make my decision in such a way that I cannot determine the result I will choose myself, surely I then break the link between my decision and the Predictor's prediction, i.e. he is no longer predicting my decision?

And if you're going to suggest that the Predictor is not predicting my decision, but rather the outcome, then surely that means a priori there is no conceivable way for me to maximize my return through any means, i.e. the discussion regarding Nash equilibria becomes meaningless? Mikejstevenson 16:51, 17 October 2006 (UTC)