Talk:Envelope paradox


The following was copied from Talk:Monty Hall problem Mintguy (T) 09:45, 27 Aug 2004 (UTC)

I was talking to a friend about the Monty Hall problem today and he told me about a similar problem, and I don't think there is a Wikipedia article on it and I'm not sure how to present the solution either, but anyway here is the problem:

You are on a gameshow and the host holds out two envelopes for you to choose from A and B. So you choose an envelope (A) and it's got $2000 in it. The presenter then says that one of the envelopes has twice as much money in it as the other one and offers you the chance to switch. So you think about it this way... "If I switch I will go home with either $4000 or $1000, by not switching I will go home with $2000. There is a 50/50 chance that I will double my money by switching. A normal 50/50 bet results in me either doubling my money or losing it all, whereas here I will only lose half. Therefore this is a better than evens bet so I will make the swap." You are just about to swap envelopes when you think about the problem some more - "Surely this can't be right... ". Mintguy (T) 16:13, 15 Jul 2004 (UTC)

Interesting, but not really similar. In the Monty Hall problem, there's just one possible positive payoff, so the only concern is maximizing your chance of getting it. This problem is more complicated. It is true that if you switch, the expected value of the new envelope is (1000*0.5 + 4000*0.5) = 2,500, so if all you care about is the average amount of money you'll take home, you should switch. However, most people in the real world are risk averse, meaning that they may prefer the sure 2,000. Isomorphic 02:58, 16 Jul 2004 (UTC)
Yeah but you're wrong, you see, there is no advantage to switching. How can there be? If you didn't know how much money was in the envelope, then you might make the same analysis and switch, but then after switching the same analysis would lead you to switch again. Mintguy (T) 07:41, 16 Jul 2004 (UTC)
After I open the first envelope and choose to switch, but before I open the second, here is my position: The other envelope has $2,000. The one I'm holding has 1,000 or 4,000 with (we are assuming) equal probability. My expectation if I switch back is $500 less than if I hold. I'm holding. Dandrake 18:56, Aug 26, 2004 (UTC)

[Tired of nesting the blocks deeper and deeper, like bad C code.] Let's pause to list assumptions. My choice of an envelope is not correlated with the loading of the envelopes, so that I'm equally likely to have the good or the bad envelope. The host also is statistically unbiased: his telling me the news is not correlated with my initial choice of good or bad envelope. Oh, and he's telling the truth.

This is not to say that the expectation argument is right. This is the paradoxical part: that the argument has no apparent flaw in itself, but it gives the nonsensical result that one should choose and then change, even though no new information has come along to cause a change. The host's information is new, or seems to be, but how would my course of action be different if I had known it all along? Bottom line: the expectation argument leads to a silly result, but I don't believe that its flaw has been shown. Maybe this paradox deserves its own article. Dandrake 19:35, Aug 26, 2004 (UTC)


I think I'm not understanding this problem correctly. The way I read it, you either pick an envelope with X or 2X dollars in it with equal probability. Given the option to switch, this expands into four cases

         Picked X   Picked 2X
Keep     X          2X
Switch   2X         X

Doesn't this mean that each of these events occurs with equal probability, and that it doesn't matter? I understand the expectation argument, but I can't reconcile it with this simple grid. Cvaneg 23:13, 26 Aug 2004 (UTC)

I think that that is the point. The paradox lies in reconciling the argument with the grid or finding the flaw in the argument. No one is denying that the grid gives the correct answer. The trouble is that the expectation argument seems reasonable and yet gives the wrong answer. -- Derek Ross | Talk 16:27, 2004 Oct 5 (UTC)
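Cvaneg's grid can be checked numerically. Below is a minimal Python sketch (the function name, trial count, and seed are illustrative, not from the discussion): with a fixed pair (X, 2X) and a uniformly random pick, keeping and switching have the same average payoff.

```python
import random

def simulate(trials=100_000, x=1000, seed=0):
    """Fixed pair (x, 2x); pick one envelope uniformly at random and
    compare the average payoff of keeping versus switching."""
    rng = random.Random(seed)
    keep_total = switch_total = 0
    for _ in range(trials):
        picked_large = rng.random() < 0.5   # did we pick the 2x envelope?
        picked = 2 * x if picked_large else x
        other = x if picked_large else 2 * x
        keep_total += picked
        switch_total += other
    return keep_total / trials, switch_total / trials

keep_avg, switch_avg = simulate()
# Both averages come out near 1.5 * x: switching neither gains
# nor loses, exactly as the grid says.
```

Note that on every trial keep + switch is exactly 3x, which is why the two averages must agree in the long run.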


An argument for not swapping

It's also possible to frame an argument for sticking with the current envelope:

Suppose you open one envelope and find in it the amount of money A. You reason as follows:
  1. The other envelope contains an amount B
  2. This envelope contains an amount A which is either ½B or 2B
  3. Because the envelopes were indistinguishable, either possibility must be equally likely.
  4. So the value of the money in this envelope is ½(½B + 2B) = 1¼B.
  5. This is greater than B, so you gain by sticking to the current envelope.

Of course this argument is just as flawed as the original one but it does seem to show that the problem lies in the naming of three quantities ½X, X and 2X where there are really only two, X and 2X. -- Derek Ross | Talk 16:53, 2004 Oct 5 (UTC)

An argument which says that sticking is as good as swapping

We are not told what amounts the envelopes hold before we start, only that one contains twice as much as the other. That's important because if we knew that one contained A and the other contained 2A, it would be easy to tell whether to switch or not. So in mathematical terms we only know that one envelope contains X and the other contains 2X

Taking this into account, here's an attempt to frame an argument which comes to the correct conclusion.

You have been told that one envelope contains X and that the other envelope contains 2X.
Suppose you open one envelope and find in it the amount of money A. You reason as follows:
  1. This envelope contains an amount A which is either X or 2X
  2. Because the envelopes were indistinguishable, either possibility must be equally likely.
  3. So the expected value of the money, A, in this envelope in terms of X, is ½(X + 2X) = 1½X.
  4. The other envelope contains an amount B which is either 2X if the value in this one is X, or X if the value in this one is 2X.
  5. So the expected value of the money in the other envelope is ½(X + 2X) = 1½X = B.
  6. A is equal to B, so neither a gain nor a loss is expected on average whether you stick to this envelope or not.

I hope that these alternative arguments help. Even if they are incorrect, they seem to indicate that there is confusion in the original argument between the expected value, 1½X, to be found in an envelope on average and the actual value, A, found on a single occasion. -- Derek Ross | Talk 17:20, 2004 Oct 5 (UTC)
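Derek's corrected argument in terms of X can be verified with exact arithmetic. A small sketch (using Python's Fraction for exactness; the variable names are illustrative), working in units of X:

```python
from fractions import Fraction

# The two equally likely assignments: (this envelope, other envelope).
cases = [(Fraction(1), Fraction(2)),   # A = X,  B = 2X  (in units of X)
         (Fraction(2), Fraction(1))]   # A = 2X, B = X

half = Fraction(1, 2)
e_a = sum(half * a for a, _ in cases)  # expected value of the envelope you hold
e_b = sum(half * b for _, b in cases)  # expected value of the other envelope
# e_a == e_b == 3/2: both expectations are 1.5X, so on average
# there is neither a gain nor a loss from swapping.
```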


Useful link

http://consequently.org/papers/envelopes.pdf

The explanation they give is that the best strategy depends on how the prizes are assigned. The non-intuitive results arise because the paradox conflates two of the three mechanisms: it starts with the prizes being fixed before you choose, but when the probabilities are calculated it is as if the second prize were decided (double or half) after you have already been given the first.

Compare the third possibility they describe where the prize giver puts $y into an envelope, and then gives you an envelope with either $2y or $y/2 in it. In this case swapping is going to lose you money on average.

Their explanation is convincing in a way that our article's explanation is not. Moreover they take the additional arguments described above into account. It's a good paper. -- Derek Ross | Talk 17:38, 2004 Nov 1 (UTC)

I disagree. In mechanisms 2 and 3 in the paper the envelope-filler gets to choose which envelope you open, but in the statement of the envelope paradox, you get to choose. So the paper's argument, though admirably clear, doesn't seem to apply to this version of the paradox. And I don't see any evidence that the paper has anything to say about the second version of the paradox, where the envelope-filling mechanism is specified but the paradox still arises. Gdr 11:40, 2005 May 4 (UTC)

Proposed Addition to the resolution (paraphrased from the above link):
Since the absolute amounts are irrelevant up to factors of 0,5, 1 and 2, we will denote s the scaling factor. There does not exist a scaling factor for infinite amounts of s (homogenous distr. of s over R+), but to the player the distribution is unknown and will be ignored. Payoffs will be ,5s, 1s and 2s.
There are two different mechanisms of filling the two envelopes. The first one is filling env. 1 (E_1) with s, and E_2 with either 0,5s or 2s. In this case, Expected payoff is s for E_1 and 5/4s for E_2. If you're given E_1, switching is better, if E_2, keeping.
The second one is selecting randomly one Envelope (E_i), filling it with s and filling the other envelope, E_j with 2s. You choose randomly between E_i and E_j. Your Expecation of switching would be 0,5*s (if you have E_j) plus 0,5*2s (if you have E_i).
A good illustration, how one setting can have different prob.distributions is the (mathematical) Bertrand paradox. The situation, drawing a line on a circle and comparing it with an equilateral triangle, leads to different results, depending on what distributions are used. The same happes here.

Issues with "A Second Paradox"

The section appears to define probabilities that do not add up to 1. "e", by the way, seems to me to be a bad choice for the parameter of the distribution, because of its usual use as the base of the natural logarithm. Finally, it is not at all clear to me how to select a pair of values from this "distribution", so that one of them is twice the other. -- Wmarkham

If you like, write q = 1 − e to avoid e. Now put p_n = q^n(1 − q). Observe that \sum_{n=0}^{N} p_n = (1-q)\sum_{n=0}^{N} q^n = 1-q^{N+1} by the summation formula for geometric series, so as N → ∞ the probabilities sum up to 1. To select a pair of values from this distribution, choose a random number u uniformly between 0 and 1, and compute \frac{\log u}{\log q}. Numerator and denominator are both negative, so the fraction is positive. Let n \geq 0 be the integer part of the fraction \frac{\log u}{\log q}. Now put an amount of 2^n into the first and an amount of 2^{n+1} into the second envelope. --NeoUrfahraner 07:37, 3 May 2005 (UTC)
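NeoUrfahraner's inversion recipe above can be sketched in Python (an illustrative sketch; the function name and seed are my own, and the guard u = 1 − random() merely keeps u strictly positive so the logarithm is defined):

```python
import math
import random

def fill_envelopes(q, rng):
    """Inversion sampling: u uniform on (0, 1], n = floor(log u / log q),
    so P(n = k) = q^k (1 - q); the envelopes get 2^n and 2^(n+1)."""
    u = 1.0 - rng.random()                # in (0, 1], so log(u) is defined
    n = int(math.log(u) / math.log(q))    # both logs <= 0, so n >= 0
    return 2 ** n, 2 ** (n + 1)

rng = random.Random(1)
samples = [fill_envelopes(0.5, rng) for _ in range(20_000)]
frac_smallest = sum(1 for pair in samples if pair == (1, 2)) / len(samples)
# With q = 0.5, the pair (1, 2) (i.e. n = 0) should occur with
# probability p_0 = 1 - q = 0.5, and frac_smallest is close to that.
```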

Okay, I see now that \sum_{n=0}^{\infty} p_n = 1. I do think that the change of variable (both name and value) makes this clearer. However, I still think that there is a problem, because the conditional probabilities (once a particular envelope has been opened) are reported as e(1 − e)^{n−1} and e(1 − e)^n, which are both terms from the sequence. Clearly these cannot sum to 1, given that the entire sequence does, and all the other terms are positive. And, those values don't appear (unless I am missing something) to have been used in the calculation of expected gain. Further, the final expression for the expected gain does not appear to follow from the previous one. I try, for example, substituting n=0 and e=1/4. The first expression gives 1/24, the second gives 1/32. Should I re-check my math? -- Wmarkham 02:33, 4 May 2005 (UTC)

Yes, there's was a missing denominator R (now added to the article; I also switched the variable e to q). Thanks for spotting the mistake. Gdr 11:17, 2005 May 4 (UTC)

I notice that n=0 was not claimed to produce these values, so the substitution I did before was not valid. However, after the revision, I think the first 1/R in the expression for expected mean must be a typo.

Plus, I still do not understand where it came from. If NeoUrfahraner's process for generating the values in the envelopes is correct, then I believe that p_k gives the probability that the lower value of the two has the exponent k, which is not how I interpret the statement "Suppose that the envelopes contain the non-negative integer sums 2^n and 2^{n+1} with probability q(1 − q)^n". For notation, let P(M=m) be the probability that the exponent of the less valuable envelope is m, and let P(N=n) be the probability that I (the player) open the initial envelope to reveal an exponent of n. Is P(M=m) = q(1-q)^m? Is P(N=n) = q(1-q)^n? Both? Neither? Does anyone have an external reference that describes the problem? -- Wmarkham 17:00, 4 May 2005 (UTC)

This section of the article closely follows the treatment by Wainwright in Eureka (see the References). The statement starting "suppose" is specifying the probability distribution of pairs of envelopes. So P(M=m) = q(1-q)^m is true, but P(N=n) ≠ q(1-q)^n. To find P(N=n) requires a simple application of Bayes' theorem, which gives the probabilities in the article. Gdr 17:30, 2005 May 4 (UTC)

Applying Bayes' Theorem, though, I get P(M = n − 1 | N = n) = (1 − a_{n−1}) / (1 − a_{n−1} + q·a_n), where a_n is P(N=n|M=n). How do I get from there to q(1 − q)^{n−1} / R? What is P(N=n|M=n)?

I'll try to find a copy of Wainwright's treatment, and check myself against it, too. Thanks! -- Wmarkham 18:29, 4 May 2005 (UTC)

Bayes' theorem says that P(M=n|N=n) = P(N=n|M=n) P(M=n) / P(N=n) where
P(M=n) = q(1-q)^n
P(N=n) = (P(M=n) + P(M=n-1)) / 2
P(N=n|M=n) = 1/2
and similarly for P(M=n-1|N=n). Gdr 18:49, 2005 May 4 (UTC)

Okay. I think that the inequality switched directions with the change of variables, but my guess is that it is correct as it now stands. The final expression for expected gain is probably missing a factor of (1-q)^(n-1). -- Wmarkham 00:03, 5 May 2005 (UTC)


Deletion

I deleted this section as it was *far* more complicated than the article required, and clearly adds conditions not stated in the original problem (at the beginning of the page) in order to make the argument come out supporting the paradoxical claim. As what I've added to the article explains, the amount of money in the envelopes does not depend on your original choice. You choose the envelope with A or 2A (or with A and 1/2A if you like fractions), but the two possible pairs are mutually exclusive.

~Dwee

I restored the deleted material. Are you sure you fully understood the material you cut and its relevance to the paradox? Gdr 19:32, 2005 Jun 7 (UTC)

I am glad you restored the material, because (as you most probably suspected) I did *not* have a full understanding of the paradox in the first place. After being convinced of my errors by a friend, I was to revert the material and found you had already done so. Thank you. Now I have a question that I hope you might answer: what dependence does the profit you gain by opting to switch every time have on the distribution of A through a large number of trials? It seems that without taking this into account, one would require a *very* large number of trials for the expected gain to be realized. Indeed, it seems the expected gain itself has some dependence on the distribution of A, for as it stands, the paradox assumes a constant value of A in making the statement of expected gain, which is certainly not true and was the root of my initial misunderstanding.

~Dwee

Which version of the paradox are you talking about? In the second version with the specified distribution, the expected value of the sum in the envelope is infinite, so there's no "expected profit". No matter how many trials you make, the resulting profit and loss is dominated by the small number of trials in which the sums in the envelopes are very large, so the strong law of large numbers does not apply and there is no sense in which there's a limiting behaviour or any "expected profit". (This is what the second version of the paradox is all about: some distributions don't have a mean and some random sequences have no limiting behaviour.) Gdr 21:59, 2005 Jun 7 (UTC)
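Gdr's point that the distribution has no mean can be seen from its truncated expectations. A sketch, assuming (as in the earlier discussion) exponents distributed as P(M=n) = q(1−q)^n with amounts 2^n: when 2(1−q) > 1, e.g. q = 1/3, the partial sums of E[2^M] grow without bound.

```python
from fractions import Fraction

def truncated_mean(q, N):
    """Partial sum of E[2^M] = sum_n q (1-q)^n 2^n, up to n = N."""
    q = Fraction(q)
    return sum(q * (1 - q) ** n * 2 ** n for n in range(N + 1))

# With q = 1/3 the ratio 2(1-q) = 4/3 exceeds 1, so the terms grow
# geometrically and the partial sums diverge: the distribution has
# no finite mean, and "expected profit" is undefined.
```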

Logic

Premises:

  1. You are given two indistinguishable envelopes, each of which contains a positive sum of money. One envelope contains twice as much as the other.
  2. Suppose you open one envelope and find in it the amount of money A. You reason as follows: The other envelope may contain either ½A or 2A.

(Both of the above premises are quoted from the statement of the paradox in this article.)

Reasoning:

  • From 1, it is clear that the total in both envelopes is an unknown amount X that is greater than zero.
  • In 2, it is stated that either the first envelope contains A and the second ½A, or the first envelope contains A and the second contains 2A.
  • Therefore, if we open both envelopes, either we find A + ½A = 1½A, or we find A + 2A = 3A.
  • In either case, the total is X, so 1½A = X and 3A = X.
Why "and" and not "or"? --NeoUrfahraner 13:17, 13 July 2005 (UTC)
Premise 2 says that it is true that I can find A and that the other envelope contains ½A, and also it is equally true that I can find A and the other envelope contains 2A. (Only in the event will I experience either one or the other.) --Vibritannia 05:27, 15 July 2005 (UTC)
I still do not understand. Premise 2 says "The other envelope may contain either ½A or 2A." It does not say "The other envelope may contain ½A and 2A at the same time."
  • Therefore 1½A = 3A.
  • Subtracting 1½A from both sides and dividing both sides by 1½ gives A = 0.
  • If A = 0, both ½A and 2A are zero as well, ie. in both cases, both envelopes contain zero money.
  • Zero money in the envelopes is a contradiction to premise 1.

Conclusion:

We have deduced something that contradicts the premises. Either the first premise is a misstatement of the paradox that it is intended to describe, or the reasoning in premise 2 is erroneous, or both.

Therefore, as the situation stands, the claim of a paradox in this article is false.

--Vibritannia 09:26, 12 July 2005 (UTC)

That's very funny! (I just hope you're not serious...) Gdr 11:18, 12 July 2005 (UTC)
I think if I have to accept the first premise as true, then a correct second premise would be
2. Suppose you open one envelope and find in it the amount of money A. You reason as follows: Either A = 2Y and the other envelope contains Y, or A = Y and the other envelope contains 2Y.
Then in either case, the total in the envelopes is 3Y − ie. no contradiction − so we can formulate in Y without problems (by talking in terms of p( A = 2Y ) and p( A = Y ) and expected value in Y etc.). --Vibritannia 06:06, 15 July 2005 (UTC)
A is a constant. It's the amount you found in the envelope you opened. You can't treat it as an unknown or a random variable! Gdr 10:33, 15 July 2005 (UTC)
On the contrary, A cannot logically be a constant (which is why the original second premise leads to a contradiction). A can be defined as the amount you find in the first envelope you open, but its value will clearly depend on which envelope you open first − the one with the greater amount of money in it, or the one with the lesser amount in it. --Vibritannia 21:03, 15 July 2005 (UTC)
After you open the envelope, it's a constant. It's at that point that you can no longer treat it as an unknown. Gdr 15:24, 16 July 2005 (UTC)
If A is a constant after you open the envelope, you cannot then go on to suppose that the other envelope contains either ½A or 2A. To do so implies that A is still a variable. There are only 2 constants − the amounts in the two envelopes − but by imagining that the other envelope contains either ½A or 2A, one introduces 3 constants: A, ½A, and 2A. One is forced to consider only 2 constants at a time − either A and ½A, or A and 2A − in which case A clearly isn't constant any more. --Vibritannia 18:09, 16 July 2005 (UTC)
Suppose I give you two envelopes satisfying the condition of the problem. You open one and discover it contains $10. What can you deduce about the other envelope? Gdr 18:56, 16 July 2005 (UTC)
Premises:
  1. You are given two indistinguishable envelopes, each of which contains a positive sum of money. One envelope contains twice as much as the other.
  2. You open one and discover it contains $10.
What can you deduce about the other envelope?
Reasoning:
  • I know that one of the envelopes contains twice as much as the other, but I don't know which because I don't know the total amount of money in the envelopes.
  • It seems to me that the total is either $15 or $30, but I know that the total can't really be either $15 or $30. The total is unknown to me, but the total is not subject to chance; the amount is an objective reality, just one that I have been denied knowledge of.
  • I realize that I have to consider the two possible realities and my guesses about those realities separately.
Reality 1:
  • The total in the envelopes is $15. I have $10, so the other contains $5, but this is unknown to me.
  • If I guess that $10 is the smaller amount, I lose $5. If I guess that $10 is the larger amount, I keep $10.
Reality 2:
  • The total in the envelopes is $30. I have $10, so the other contains $20, but this is unknown to me.
  • If I guess that $10 is the smaller amount, I gain $10. If I guess that $10 is the larger amount, I keep $10.
  • So, given that the reality is that the total in both envelopes is $15, I stand to lose $5, ie. a third of the total.
  • And, given that the reality is that the total in both envelopes is $30, I stand to gain $10, ie. a third of the total.
  • Either way, it's a third.
So what can I deduce about the other envelope?
I can deduce that it contains either a third of the total more, or a third of the total less than the $10 I already have. How much is a third of the total? I don't know; it depends on whether $10 is a third of the total, or whether $10 is two thirds of the total - which I don't know that either.
Which reality is most likely? The reality that I am presented with is the most likely because it is absolutely certainly true; the reality that I am not presented with is the least likely because it is absolutely certainly false. The problem is that I don't know (until I open the other envelope) which is which.
The fact that I am equally likely to guess at either reality doesn't say anything about the likelihood of either reality. The reality was a given as soon as the envelopes were filled with money.
Conclusion:
I can deduce only that the other envelope contains either a third of the total more, or a third of the total less than the $10 I already have, the total being unknown to me. It is logically impossible for me to nail the amount down any closer than that. --Vibritannia 11:09, 17 July 2005 (UTC)
Now you're really kidding me, right? Gdr 16:43, 17 July 2005 (UTC)

An Alternative Scenario

Consider this alternative scenario.

A friend gives you an envelope; you open the envelope and find $10 inside. The friend asks how many dollars you found; you say $10. He then shows you an empty envelope and says, 'I'm going to go next door and toss a coin. If it comes up heads, I'm going to put $5 in the envelope. If it comes up tails, I'm going to put $20 in it.' A moment later, he returns from next door and says, 'Do you want to swap your envelope for this one?'
You reason as follows: The other envelope may contain either $5 or $20. In this case, you have reasoned correctly. The expected values you calculate are valid. And the conclusion that you should always take the second envelope is entirely correct.

What's the difference? --Vibritannia 10:15, 18 July 2005 (UTC)

The difference is that in this scenario you do not need a uniform distribution on the natural numbers. In contrast to the original scenario, it is true that the probability that the other envelope contains $20 is indeed 50%. In other words, on average you will get more money when you swap. --NeoUrfahraner 13:24, 18 July 2005 (UTC)
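The coin-flip scenario is easy to simulate. A minimal sketch (trial count and seed are arbitrary): you hold $10, and a fair coin decides whether the other envelope gets $5 or $20, so always swapping is worth 12.50 on average.

```python
import random

def swap_average(trials=100_000, seed=0):
    """You hold $10; a fair coin puts either $5 or $20 in the other
    envelope.  Return the average value of always swapping."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += 5 if rng.random() < 0.5 else 20
    return total / trials

avg = swap_average()
# avg is near 0.5*5 + 0.5*20 = 12.50, which beats the $10 you hold:
# in this scenario, unlike the original one, swapping really does
# gain on average.
```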

Yes, 50%. I agree.

In the original scenario, we have two possible pairs of amounts: ( A, ½A ) and ( A, 2A ). On the natural numbers, the pair ( A, ½A ) is twice as frequent as the pair ( A, 2A ) − sort of like all numbers are twice as frequent as even numbers (so a number selected at random from the natural numbers is twice as likely to be just any number than it is to be an even number).

Given that the first envelope contains A, we know that for the second envelope

  • p( ½A ) + p( 2A ) = 1 because there are no other possibilities
  • p( ½A ) = 2 × p( 2A ) because ( A, ½A ) is twice as frequent as ( A, 2A )

Therefore p( ½A ) = 2/3 and p( 2A ) = 1/3. When these probabilities are applied in step 3 of the article, the expected value comes out to be A, ie. there's no reason to switch envelopes. --Vibritannia 17:57, 18 July 2005 (UTC)

You say "On the natural numbers, the pair ( A, ½A ) is twice as frequent as the pair ( A, 2A )". What makes you think that the sums in the envelopes have to follow that distribution? I might have used the distribution {p(10,20) = 1.0}. Or the distribution {p(5,10) = 0.5; p(10,20) = 0.5}. Or one of the distibutions in section 2 of the article. Or any other distribution. Gdr 19:50, 18 July 2005 (UTC)

(I've just realized it cannot be the Natural numbers because if we got A in the first envelope and A was odd, we would know that the other envelope had 2A in it. Let's say the Rationals greater than zero, instead.)

I think it is fair to assume a uniform distribution unless we are told that the experiment is rigged. I mean, if the envelopes always contained $10 and $20 ( ie. p(10,20) = 1 ), then we could pretty quickly (after a few goes) work out only to swap envelopes if we didn't find $20 in the first envelope. Therefore p(10,20) = p(5,10) = 1/n, where n is as large as need be in order not to distort the experiment noticeably. Fortunately, because of what we can observe about the frequencies of different pairs, we don't need to worry about n.

Notice that the conditional probability p( 2A given A ) is not equal to p( A, 2A ). Also p( 2A given A ) is not equal to p( ½A given A ). In general,

p( A/n given A ) = n × p( nA given A )

which can be seen by imposing any limit on the number of dollars available for the experiment. Given A the pair ( A, A/n ) is always possible (or we couldn't have found A in the first envelope), whereas the pair ( A, nA ) isn't always possible because nA is sometimes more dollars than we have available. But we don't need a limit; we can observe the effect a limit has on the relative frequencies of the pairs regardless of what limit actually applies. (I suppose that what I mean by relative frequencies is for how many values of A a pair is possible compared to another pair, up to any limit for A.)

Given that we have already found A in the first envelope, the pair ( A, A/n ) is n times more frequent than the pair ( A, nA ). Given A in the first envelope, the pair ( A, ½A ) is twice as frequent as the pair( A, 2A ) when all possible values of A are considered. --Vibritannia 17:44, 19 July 2005 (UTC)

If you want to invent your own problem, that's fine. But in the envelope paradox you are not told the distribution of values in the envelopes. So you are not at liberty to assume a particular distribution, no matter how "fair" you think it is. Gdr 17:54, 19 July 2005 (UTC)

Assuming a uniform (ie. rectangular) distribution is the same as assuming the unknown total in the envelopes is a random amount each time we play. That hardly qualifies as inventing a new problem. --Vibritannia 18:12, 19 July 2005 (UTC)

The problem doesn't specify a distribution. So you can't assume one, let alone a uniform distribution (which doesn't exist). Gdr 18:15, 19 July 2005 (UTC)

In that case, the solution to the paradox is, 'You have unwittingly made an assumption that you will, on average, get more money by swapping envelopes; therefore you have calculated that you should always swap envelopes.'

Why mention distributions at all since one was never specified? --Vibritannia 18:31, 19 July 2005 (UTC)

The analysis that leads to the paradox involves assuming a distribution of values in the envelopes without noticing that that's what you're doing. The solution to the paradox is therefore to point this out. The fact that no such distribution exists makes the point stronger. Gdr 18:50, 19 July 2005 (UTC)

Quoting from the article, the reasoning says

  1. The other envelope may contain either ½A or 2A
  2. Because the envelopes were indistinguishable, either amount must be equally likely.

and the solution says

[Step 2] uses the unstated assumption that the pairs of amounts (½A, A) and (A, 2A) are equally likely, for all values of A. But there is no probability distribution with this property.

The reasoning proposes either amount is equally likely. That, at least, is not an unstated assumption. The reasoning does appear to make an assumption (without stating one) that, given A is found in the first envelope, the pairs ( A, ½A ) and ( A, 2A ) are also equally likely. But this isn't obviously apparent.

The solution offered doesn't point this out, nor is it exceedingly clear that that is what is meant (if it is). And it doesn't offer any reason why this is a mistake, other than to say 'there is no probability distribution with this property'. It is assumed that the statement is self-explanatory, but it isn't to me because it's not obvious why it is true or why it makes a difference.

The solution seems to assume that I know a lot that I don't. It could benefit from some explanation (definitely before going on to discuss further paradoxes). --Vibritannia 22:47, 19 July 2005 (UTC)

Another Explanation of the Error

Before we open any envelopes, it doesn't seem unreasonable to assume that if the envelopes are said to contain A and 2A (without distinguishing which envelope contains which amount), then all values of A are equally likely − because we have no better information. However, it does not then follow from this assumption that, given A is found in the first envelope, the other envelope is equally likely to contain (1/2)A or 2A for every conceivable value of A (which is something we have implied in the subsequent calculation of expected value).

Yes, this is the point. --NeoUrfahraner

Why?

Because, for any specific amount of money from which to fill the envelopes, given we have found A in the first envelope, for all values of A we can always find (1/2)A in the second envelope, whereas it is only possible to find 2A in the second envelope half the time.

See it like this: If the specific amount of money available is M say, then we can find A to have any value up to (2/3)M, but the second envelope can only contain 2A for values of A up to (1/3)M ie. half as often.

So, given A was found in the first envelope, the chance of finding (1/2)A in the second envelope is 2/3, and the chance of having 2A in the second envelope is 1/3 − not equal as we had assumed, even though all values of A are still being assumed to be equally likely. --Vibritannia 12:23, 20 July 2005 (UTC)

This explanation is not correct. Actually you cannot give exact values of the probabilities - but this is not required because when you do not assume equal probabilities, the paradox vanishes. --NeoUrfahraner 10:44, 23 July 2005 (UTC)
Correct me if I'm wrong, but I think in all of the threads above Vibritannia seems to assume that there is a bound on the sum of the amounts in the envelope whereas Gdr and NeoUrfahraner do not seem to put that constraint.
Whether we choose to put that particular constraint or not, the conclusion is that assuming 2A and A/2 are equally likely in the other envelope is wrong, so why argue about it? What's the disagreement? -- Paddu 14:17, 23 July 2005 (UTC)
OK, you are right. There is no reason to argue about the particular constraint, as long as we agree in the main point, namely that assuming 2A and A/2 are equally likely in the other envelope is wrong. --NeoUrfahraner
Hi. I'm not satisfied with this explanation. Let me label the envelopes heavy and light. After opening the first one, it must be equally likely that the other envelope is either heavier or lighter. If it was more likely that the other one is lighter, we could say that there is a tendency to pick the heavy envelope as the first choice. Then it would be better to always stick with the first envelope. Confusing. --tkalayci
It isn't equally likely that the other is either heavier or lighter. If there were only 2 things to choose from -- A and B, given one the other is equally likely to be A or B. But the amount of money in the envelope/weight of the envelope/etc. can have any value from among a continuous range. Whether we take the range to be 0-∞ or 0-<total amount of money in the world> or 0-<amount of money you think the person has to begin with> or whatever, the range of values is continuous and it isn't the case that heavier and lighter are equally possible. -- Paddu 19:18, 30 July 2005 (UTC)
Paddu, my point is: if it isn't equally likely that the other is either heavier or lighter, then it can't be equally likely that the first envelope is either the heavy or the light one. But it must be since you select the first envelope randomly. Maybe I'm missing something. --tkalayci
Probably I was confusing rather than clarifying :). Let me start afresh. For the first envelope, we know that it is equally likely to have the smaller or the larger amount.
  • Now we open and see that it has <as small an amount as you can imagine>, say $0.02. Now, do you think the other envelope is equally likely to be larger or smaller than $0.02? Since the amount in the first was so little, it is probably more likely that the other envelope has more.
  • As another case, assume we open and see that it has <as large an amount as you can imagine>, say $1 billion. Now, do you think the other envelope is equally likely to be larger or smaller than $1 billion? Since the amount in the first was so much, it is probably more likely that the other envelope has less.
For in-between amounts, the difference between the probability of the other envelope having more and the other envelope having less would probably be less marked, but without knowing the distribution beforehand we shouldn't conclude that "more than amount in 1st envelope that we've already seen" and "less than amount in 1st envelope that we've already seen" are equally possible. Note that the fact that you've seen the amount in one envelope while deciding about the other makes choosing to swap a different problem from choosing the first envelope. HTH. -- Paddu 03:33, 31 July 2005 (UTC)
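Paddu's point can be made concrete with a short exact calculation. The prior below is invented purely for illustration (the original problem specifies none): the smaller amount is 2^k with k drawn uniformly from 0..9.

```python
from fractions import Fraction

# Illustrative prior (not part of the original problem): the smaller
# amount is 2**k with k uniform in 0..9, giving the equally likely
# pairs (1,2), (2,4), ..., (512,1024).
pairs = [(2**k, 2**(k + 1)) for k in range(10)]
prior = Fraction(1, 10)

def p_other_is_larger(observed):
    """P(the other envelope holds 2*observed | we saw `observed`)."""
    # Either member of a pair is handed over with probability 1/2.
    p_seen_smaller = prior / 2 if (observed, 2 * observed) in pairs else Fraction(0)
    p_seen_larger = prior / 2 if observed % 2 == 0 and (observed // 2, observed) in pairs else Fraction(0)
    return p_seen_smaller / (p_seen_smaller + p_seen_larger)

print(p_other_is_larger(1))     # smallest possible amount: other is certainly larger
print(p_other_is_larger(16))    # interior amount: 1/2
print(p_other_is_larger(1024))  # largest possible amount: other is certainly smaller
```

Under this (or any bounded) prior, "larger" and "smaller" are only equally likely for interior amounts, which is exactly the distinction being drawn above.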
BTW you could sign your comments with 4 tildes like so: ~~~~. -- Paddu 03:36, 31 July 2005 (UTC)
Paddu, thanks, however, I'm still not convinced. In practice, in the real world, your explanation holds. But suppose a totally random number is written inside the first envelope, and then according to a flipped coin, the number is either doubled or halved for the second envelope. There would be no correlation with the actual amounts, since any real number is possible. And when switching, the chances are equal due to the flipped coin. In this case the paradox applies; I can't find the flaw in the non-Probability version. Btw, I don't mind agreeing to disagree if you don't want to bother with me. --Tkalayci 05:13, 31 July 2005 (UTC)tkalayci
So how did you pick that "totally random" number in the first envelope? Gdr 05:46:49, 2005-07-31 (UTC)
I was gonna say flip coins (or use radioactive decay or something) to produce random bits.. but we need infinite number of bits here, which will take forever.. so I guess I'll have to rest my case. Thanks. Tkalayci 13:17, 31 July 2005 (UTC)tkalayci
No, Tkalayci don't rest your case; you are absolutely right! It's trivially true that the other envelope is either 'heavier' or 'lighter' with a probability of 1/2. It follows directly from one of the axioms of probability theory. As we know that the first envelope is 'heavy' or 'light' with a probability of 1/2 (and no one disagrees here) the other envelope must be 'heavy' or 'light' with a probability of 1/2. This follows from the axiom that states that all (mutually exclusive) possibilities in a random situation must have probabilities that add up to exactly one. And 1/2 + 1/2 = 1 as ought to be known by most of us... ;-) INic 22:50, 1 September 2005 (UTC)

[edit] buhh

Ok, first off, this article is horribly written. Secondly I don't believe it's worthy of an article. In the simplest terms possible: you have a 50% chance to pick the envelope with the most money the 1st time. When asked to switch, you are still choosing between 2 envelopes, and the chance is still 50%. No new information is gathered once you are able to see what amount of money is in the one you choose first. Hardly interesting in the least. 172.165.221.252 08:59, 25 July 2005 (UTC)

That analysis is incorrect. In particular, the statement "No new information is gathered once you are able to see what amount of money is in the one you choose first." depends on the assumptions that have been made about the distribution of the pairs of numbers that might be in the envelopes. For example, suppose we additionally know that the envelopes contain $5 and $10. Then, upon opening an envelope, we have complete information about the contents of the other. --Wmarkham 22:18, 5 September 2005 (UTC)

No, the first writer is absolutely right; your analysis is incorrect though. The "distribution of the pairs of numbers that might be in the envelopes" is nowhere mentioned in the problem and is irrelevant anyway. It doesn't matter at all 'how' the money got into the envelopes. That is irrelevant, irrelevant, irrelevant.

Your example that we know that the envelopes contain $5 and $10 is strange. Sure, in that case we have "complete information" about the contents of the other envelope. But why are you talking about information here? We're not talking about information at all, we are talking about probabilities, remember? And the probability of choosing the $5 or the $10 envelope is still 1/2 each. Both before and after you've opened them. Probability is NOT the same as information, or some kind of weird measure of a person's 'ignorance'. That is plain subjectivistic mumbo jumbo. INic 00:25, 14 September 2005 (UTC)

You are right that the "distribution of the pairs of numbers that might be in the envelopes" is nowhere mentioned in the problem. This, however, does not mean that this distribution is irrelevant. Actually, as soon as you assume some particular distribution, you will understand the situation. --NeoUrfahraner 14:34, 14 September 2005 (UTC)

But what if the donor of the money happens to choose how much money he is willing to give away without letting some chance mechanism decide that for him? In that case it's just false to assign any probability distribution to that prior event. And this is the most reasonable interpretation of the initial set up. Not everything in this world is random, you know. For example I sincerely hope that your next comment in this forum doesn't reflect a random opinion.

Of course it's irrelevant how the money got into the envelopes when we want to determine the probability for choosing one of them. Let's say you are in a foreign country and you flip a coin you haven't examined before. After the coin flip someone asks you what you think the probability is that the other side would have come face up. You say it's a probability of 1/2, if the coin is unbiased. Sure, that is the correct answer. But then you are reminded of the fact that you have no idea what signs or symbols are engraved on the opposite side of the coin. Well, you realize (I hope) that that is totally irrelevant. The coin can have whatever engravings on the other side, the probability that that side would have come up is still 1/2.

If you don't agree on this I'm sorry to say you have left the field of probability theory, as this follows directly from one of the axioms of probability theory by Kolmogorov. INic 15:23, 14 September 2005 (UTC)

I agree with the coin, but that is a different story. Assume you are the donor. Make an arbitrary assumption how much money you are willing to give away. Tell me your assumption. --NeoUrfahraner 16:14, 14 September 2005 (UTC)

No, the coin example is exactly the same thing. Call either side of the coin "envelope" and the engravings on the coin "the content of the envelopes." Read the coin story again and replace the words. The axioms of probability theory are applicable irrespective of different wordings.

Well, not sure if I want to give you any money. ;-) I will happily have you for dinner at my place though. To honor you I will make two dishes of which one is twice as tasty as the other. You pick one and you discover that it's a very nice fish soup. Do you want to eat that or do you want the other dish? INic 21:38, 14 September 2005 (UTC)

The coin example is different because there are no coins with a value of 2 on one side and 4 on the other side. Both sides of the coin have the same value. You need at least two coins with different values to make your example work. With respect to the dinner, this is easy. I like fish soup, so there is no reason to swap. --NeoUrfahraner 09:32, 15 September 2005 (UTC)

I can easily make a coin where I write one number on one side and twice as much on the other side. Fortunately mathematics isn't restricted only to ordinary objects that already exist in this world. For example I can calculate the probability of winning with a roulette wheel with, say, 1000 positions. To claim that that example is absurd because there are no roulette wheels that large in this world is a very silly objection. No mathematician would say anything like that.

You asked me to give away some 'arbitrary money.' I refused and invited you for dinner instead and offered you two dishes, not randomly chosen at all by me. You have to pick one by chance though, and it happened to be the fish soup. Well, according to the ordinary reasoning in this context the expected value of the tastiness of the other dish is 25% better than the fish soup you already have. I'm flattered that you like the fish soup and that you don't want to switch, but are you rational? This example clearly shows that any 'prior probability distributions' are totally irrelevant for this problem. That is just Bayesian thinking, and that is not a good thing. INic 11:45, 15 September 2005 (UTC)

Suggest a game with a coin where you write one number on one side and twice as much on the other side, then we can discuss this game. The choice of the fish soup is in concordance with the current article, because after seeing the fish soup, I do no longer expect that there is probability 1/2 for the other meal to be twice as tasty. See the article: "Step 2 in the argument above is flawed". Before seeing the soup, there was no difference in swapping, but after seeing the soup, I got some information and was able to decide whether I like that particular meal or not. --NeoUrfahraner 14:48, 15 September 2005 (UTC)

I already have suggested a game like that! In fact I suggested the more general game where we allowed any engraving on the sides whatsoever. And that includes the numbering you suggest as a special case of course. You agreed with me before that each side must have a probability of 1/2, no matter what engravings we have. Do you still stick to that opinion or do you now want to claim that certain magical engravings follow their own special and mysterious rules of logic?

Aha so what probability do you assign to the event that the other dish is actually the better meal? Let me know how you calculate that please.

Let's see, the probability that the dish you picked is the best meal is 1/2, right? I really hope we agree so far. OK, now you claim that the probability that the other dish is the best is LESS than 1/2, say a < 1/2. You only have two options, right? The total probability that the best dish is either of them is thus 1/2 + a which is less than one, 1/2 + a < 1. But according to one of the axioms of probability theory that can't be the case. The probability that the best dish is either one of them must be exactly one.

I'm sorry to say that your opinion might be legitimate reasoning according to a subjectivist, but it certainly doesn't belong to probability theory. INic 22:28, 15 September 2005 (UTC)

I agree that the probabilty that I picked the best meal was 1/2 before I saw the meal. After seeing it, the probability changed. There is no meal that I like twice as much as fish soup, so the probability that the other dish is the best is zero. --NeoUrfahraner 05:07, 16 September 2005 (UTC)

Aha I see. So when you see the fish soup the probability mysteriously changes for both options so that it still adds up to one, right? Well, that is clearly nonsense too. Let's say I made three dishes the evening you came to my place. Two of them are the same dish and the third twice as tasty as the others. You pick one at random and find that it's the fish soup. Then I remove one of the remaining dishes, one of the bad ones, while telling you that I remove a bad dish. Now you have the option to swap to the remaining dish or stick to the fish soup you already have. What would you do?

According to your unscientific subjectivistic approach the new situation with three dishes doesn't matter to you. You are still convinced that I can't make a dish significantly better than the one you already got, right? Fortunately for mathematics (and myself as a chef) you are wrong. The probability that your fish soup is the nicest dish is 1/3, and 2/3 that the other dish is the better choice. Only a fool would stick to the first dish. INic 23:37, 16 September 2005 (UTC)
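The three-dish variant described here has the same structure as the Monty Hall problem, and the 1/3 versus 2/3 claim can be checked with a short simulation (the dish labels are invented for illustration):

```python
import random

# Simulation of the three-dish variant: two ordinary dishes and one
# best dish. The guest picks one at random; the host then removes a
# dish known to be ordinary from the two that remain.
def trial(switch):
    dishes = ["ordinary", "ordinary", "best"]
    random.shuffle(dishes)
    pick = dishes.pop(random.randrange(3))
    dishes.remove("ordinary")  # host discards a bad remaining dish
    return (dishes[0] if switch else pick) == "best"

random.seed(0)
n = 100_000
win_if_switch = sum(trial(True) for _ in range(n)) / n
win_if_stick = sum(trial(False) for _ in range(n)) / n
print(win_if_switch, win_if_stick)  # close to 2/3 and 1/3
```

Switching wins exactly when the first pick was not the best dish, which happens with probability 2/3.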

All probabilities can change when I get some new information, what is mysterious about that? With respect to your modified example: I like fish soup very much, so you cannot make a meal that is twice as tasty for me as fish soup. So in the changed example, if I pick a dish at random, the chance is 1/3 that I get the best one. After seeing the fish soup, I know that I got the best one and that the other two must be the bad ones. When you remove one dish, telling me it is a bad one, this is no new information for me because I know already that both of them are the bad ones. --NeoUrfahraner 06:30, 17 September 2005 (UTC)

No, probabilities don't change when you get new information. That is a grave misunderstanding of what probabilities are. A probability can only change when we change the experimental set-up, as I did when introducing three dishes instead of two above. As I already told you, probabilities aren't a measure of someone's 'state of knowledge' or anything like that, and can't be made to be that either. Any such attempt will lead to total ambiguity and paradoxes.

For example your subjective probability above is wrong. Not only your particular 'probability estimate' but the whole reasoning is wrong. That is easily seen when I tell you that the other dish is fish soup too, only made with a different set of spices. What you considered to be 'information' wasn't information at all. And when I changed the experimental set-up and thereby really changed the probabilities you ignored that fact, you trusted your bad intuition instead. When you understand what probabilities really are you will never make this error again. INic 16:47, 18 September 2005 (UTC)

There's more than one interpretation of the meaning of probabilities; see probability interpretations and in particular frequency probability and Bayesian probability. From the Bayesian point of view, probabilities are exactly a measure of our state of knowledge about an event.
I know there is more than one interpretation of the meaning of probabilities. But that is the way with other subjects as well, for example the study of stars: astronomy and astrology. That fact, however, doesn't mean that both interpretations are equally scientific, or equally justified.
However, for this article, it makes no difference whether you are a frequentist or a Bayesian; the paradox has the same cause and resolution. The two kinds of statistician may describe it in different ways (the frequentist would consider a large number of iterations of the experiment; the Bayesian our state of knowledge about the experiment) but the maths is the same either way. Gdr 17:52, 18 September 2005 (UTC)
No that is not true, the difference in opinion of an objectivist and a subjectivist in this case is total. For the subjectivist this is an unsolvable problem, for the objectivist it's not even a problem. INic 19:34, 18 September 2005 (UTC)
All you need for this paradox is Bayes' theorem, which is valid in both frequentist and Bayesian interpretations. Gdr 19:54, 18 September 2005 (UTC)
It's correct that Bayes theorem is derivable from the Kolmogorov axioms, and as such is part of the mathematical theory both schools share. What differs an objectivist from a subjectivist in this respect is that the latter far more often than the former allows herself to use that theorem. Hence the frequent labeling of the latter as Bayesians. In the current situation the objectivist realize that we can't use Bayes theorem, while the subjectivist seem to have no limits for when to use it. This is why this situation is a paradox for the subjectivist but not even a problem for the objectivist. INic 22:32, 18 September 2005 (UTC)

INic, I have a random number generator that generates random decimal digits (i.e., 0 ... 9). I have two envelopes. Now I generate one digit, say n. In the first envelope, I put a paper with 2^n written on it, in the second a paper with 2^(n+1). Now I generate a second decimal digit. If it is odd, I give you the first envelope, if it is even, I give you the second envelope. I tell you how I generated the papers in the envelopes, but I do not tell you the results of the random number generator. You may now open the envelope and consider the paper in it. The number on it is your score. Now you may decide whether you swap. How would you proceed to gain a score as high as possible? What is your expected score? --NeoUrfahraner 04:57, 19 September 2005 (UTC)

Well, it clearly doesn't matter what I do unless I happen to get the paper with 1 or 2^10 written on it. In the former case I'll switch, otherwise I won't. My expected gain is

(3/20)·2 + (1/10)(2^2 + 2^3 + ... + 2^9) + (1/20)·2^10

INic 11:55, 19 September 2005 (UTC)

Please check your calculation again. If the first random digit is 9 and the second is even (probability 1/20), you will find 2^10 in the envelope you open first and you will not switch. If the first random digit is 9 and the second is odd (probability 1/20), you will find 2^9 in the envelope and you will switch, finding 2^10 in the other envelope, so in the formula for the expected gain the last term must be (1/10)·2^10. But anyway, why do you make your choice dependent on the contents of the envelope you opened first? You said before "probabilities don't change when you get new information"! --NeoUrfahraner

No, my calculation is correct. Please note that I only switch if I find 1 written on the paper. It's correct that no probabilities change when I see 1 on the first paper. It's still true that the two envelopes containing 1 and 2 have a probability of 1/2 each to be chosen in this (last) experiment. I have to ask you, as a subjectivist: when you throw a die and a number comes up, say 3, does that mean that the die suddenly becomes a degenerate die where 3 has probability one and the other sides probability zero? INic 14:41, 19 September 2005 (UTC)

OK, I misinterpreted your text from 11:55. But anyway, why are you switching at all? What would be the expected gain when you never switch? What would be the expected gain when you always switch? What would be the expected gain when you switch unless you find 2^10? Where does the difference come from?

With respect to the die: Not the die becomes degenerate, but the probabilities become degenerate. After seeing the result of the throw, the probabilities for this throw have changed - in fact, the result is now determined. Clearly, this does not say anything about the next throw of a fair die. --NeoUrfahraner 15:18, 19 September 2005 (UTC)
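The expected scores asked about here can be computed exactly by enumerating all twenty equally likely cases. This sketch takes the game to be: a digit n uniform in 0..9, papers worth 2^n and 2^(n+1), and a fair coin deciding which paper is handed over:

```python
from fractions import Fraction

def expected_score(strategy):
    """Exact expected score when `strategy(seen)` returns True to swap."""
    total = Fraction(0)
    for n in range(10):                        # first random digit
        for seen, other in [(2**n, 2**(n + 1)),    # odd second digit
                            (2**(n + 1), 2**n)]:   # even second digit
            total += Fraction(1, 20) * (other if strategy(seen) else seen)
    return total

print(expected_score(lambda s: False))       # never swap:       3069/20
print(expected_score(lambda s: True))        # always swap:      3069/20
print(expected_score(lambda s: s == 1))      # swap only on 1:   3070/20
print(expected_score(lambda s: s != 2**10))  # swap unless 2^10: 3581/20
```

Blind swapping changes nothing, but strategies that condition on the observed amount near the known limits do better, which is the point under discussion.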

This is where you subjectivists go astray. You 'translate' the original problem to other problems where you can use your favorite tool, Bayes theorem, i.e., you invent some prior probability distributions. Then you investigate those new situations and try to generalize from them. If you find a pattern that holds for whatever prior you can think of you are satisfied, because then you think you have solved the problem in the most general manner.

Well, this is simply not correct. To invent a prior is to alter the original experiment in some fundamental ways. This is seen above where your prior introduces known limits, and suddenly we know in some cases what's in the other envelope. That is never the case in the original experiment.

But the most serious flaw in the reasoning is the assumption that some random mechanism must have been used to populate the envelopes with money prior to the experiment. That is certainly not the case. The example with the two dishes with fish soup clearly showed that. In spite of the fact that I explicitly stated that I didn't choose meals according to some random mechanism (how would that even be possible?) you insisted on making probability estimates of what kind of dish the unknown dish was. Not surprisingly, this led you to make decisions that were utterly wrong.

If you want to solve a problem in the most general manner you of course have to include the possibility that there is no prior at all. In the envelope as well as the dish case this is by far the most natural interpretation. INic 23:08, 19 September 2005 (UTC)

I never used Bayes theorem in this context - actually the modified problem can be analyzed completely without using Bayes theorem. If you want to solve the problem in the most general manner, you of course have to include the possibility that the envelopes have been filled the way I described. --NeoUrfahraner 04:52, 20 September 2005 (UTC)

Yeah, you introduce a prior without ever using Bayes theorem. Quite odd, but I guess anything goes. Instead you let the prior be public information so the player knows when she's on one of the prior's limits. In these cases she knows what's in the other envelope and can act accordingly. This is thus a very bad interpretation of the original problem, as this situation should never happen. It gets even worse when considering the fact that the existence of these known limits is the direct source of the conclusion you draw from this interpretation.

A much better approach is to view the contents of the envelopes as unknown and arbitrary. Exactly in the same way as the limits of your prior were arbitrarily chosen by you. They were not chosen by means of any random process, were they?

To test the two strategies (switch and not switch) we can imagine two large populations of people A and B in which everyone is presented with two envelopes. The distribution of money in the envelopes is irrelevant, as long as it is evenly distributed, i.e., if m envelope pairs with content {x, y} are presented to the people in A, then the people in B must be presented with exactly the same set of envelope pairs. Simply put, every envelope pair in A must correspond to one similar pair in B.

Note that we don't require that one envelope contains twice as much as the other in every pair. This is due to the fact that the paradox doesn't depend on that requirement. Any amounts will do. (Sure, if you want to use your prior to populate the envelopes you are free to do so.) In this particular set-up we don't even require that the amounts are unknown to the players!

All players toss a coin each, and according to the outcomes the players pick their envelopes. All players in population A now switch to the other envelope (whether they want to or not), players in B stick to their envelopes. The sum of the contents of all selected envelopes in population A is now compared to the money selected in B. We will notice no difference between the strategies, more than random fluctuations in both directions. INic 20:50, 20 September 2005 (UTC)
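The two-population test sketched here is easy to simulate; the envelope amounts below are arbitrary (here uniform on [1, 100]), since the proposal explicitly does not care how they are chosen:

```python
import random

random.seed(1)
# The same arbitrary set of envelope pairs is shown to both populations.
pairs = [(random.uniform(1, 100), random.uniform(1, 100)) for _ in range(100_000)]

def population_total(always_switch):
    """Each player coin-flips for an envelope; then A switches, B holds."""
    total = 0.0
    for x, y in pairs:
        first, second = (x, y) if random.random() < 0.5 else (y, x)
        total += second if always_switch else first
    return total

ratio = population_total(True) / population_total(False)
print(ratio)  # close to 1: blind switching makes no difference
```

As claimed, the two totals agree up to random fluctuation, because neither strategy uses the contents of the opened envelope.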

There is no need for a prior limit; the limit just makes the calculation easier. The main point is that there is an advantage when you cleverly switch depending on the contents of the envelope. There is, however, no difference between always switching and never switching.

With respect to the experiment you suggested: clearly population A will have the same expected gain as population B because their switching strategy is not based on the contents of the envelope. Let population A, however, pick a random envelope from A or B. The content of this envelope is, say, x. Now all members of A that have contents less than or equal to x switch; the members with more than x hold. In this case (and using the original requirement that one envelope contains twice as much as the other in every pair), the expected gain of A will increase. --NeoUrfahraner 05:18, 21 September 2005 (UTC)
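The threshold idea can also be simulated. This sketch assumes doubling pairs with the smaller amount 2^k, k uniform in 0..9, and a reference amount x drawn independently from the same pool of envelope values; both choices are illustrative, not part of the original problem:

```python
import random

random.seed(2)
values = [2**k for k in range(11)]  # all amounts that can occur

def trial(use_threshold):
    small = 2**random.randrange(10)
    mine, other = (small, 2 * small) if random.random() < 0.5 else (2 * small, small)
    if use_threshold:
        x = random.choice(values)  # independent reference draw
        return other if mine <= x else mine
    return mine

n = 200_000
avg_threshold = sum(trial(True) for _ in range(n)) / n
avg_hold = sum(trial(False) for _ in range(n)) / n
print(avg_threshold > avg_hold)  # content-dependent switching does better
```

The gain comes from the chance that the reference amount x falls between the two amounts in the pair, in which case the comparison with x correctly tells the player whether to switch.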

But I wonder why you failed to make a "clever switch depending on the contents" when you were offered two dishes for dinner, if the "clever switching"-thing is intended to be a universal strategy? Apparently there are some serious limits to when this "clever strategy" is applicable. In fact the whole strategy depends on features of the contents we sometimes imagine we put in the envelopes. However, those features are not central to the problem at all. The features I have in mind are that the contents be mappable to the real line and that we need a decent prior (over the real line) to be able to put anything in the envelopes at all. The fish soup story violated both these unnecessary requirements.

Another example is this. Say you will find, not money, but a die in the envelope you choose, with which you later will play a game. You don't know what game; you are only told that one die is twice as good as the other, that is, you will win twice as often with that die as with the other die in the games that follow. You pick one envelope at random and find that it contains a die with 3 on all six sides. How would you use your "clever strategy" in this case? It's totally impossible. Yet the paradox is still there.

However, when reading your latest comment I suddenly realized that I can't make sense of the subjectivist standpoint presented in the article. According to the text the subjectivist's magical ability to change a probability only by observation is both the solution to why the calculation in the problem is wrong (we don't gain 25% every time we switch) AND the reason why we, in fact, CAN gain by switching if we only switch "cleverly", that is, not every time. How can that be? To me those seem to be two contradictory standpoints.

Is the original calculation (25% gain) totally wrong or not according to a subjectivist? Is it only partially wrong? If so, how many percent do we gain if we switch "cleverly"? Say we gain x% (x > 0) by switching "cleverly," how do you avoid a paradox? And what is in that case the flaw in the other reasoning leading to no gain at all, i.e., by denoting the contents A and 2A and noting that we either gain A or lose A if we switch? INic 09:46, 22 September 2005 (UTC)

Do you agree that switching depending on the contents of the envelope may increase the expected gain? --NeoUrfahraner 10:23, 22 September 2005 (UTC)

No I don't, only in special cases as I explained. In general that is not the case. INic 10:43, 22 September 2005 (UTC)

Why does it work in special cases? --NeoUrfahraner

If you assume that the contents are mappable to the real line you can use topological properties of R that are not necessary to the problem. In the same way, if you assume that the contents are mappable to the natural numbers you can use properties of N to develop a strategy to win. INic 10:57, 22 September 2005 (UTC)

The envelope paradox as presented in this article is mappable to the real line. I am speaking about the problem presented in this article; what are you talking about? --NeoUrfahraner 11:05, 22 September 2005 (UTC)

Well, the paradox as presented in the article is confined to different amounts of money. That is most naturally mapped to a finite subset of the natural numbers. Everyone realizes that the money requirement is not an essential part of the problem. The paradox is not about the properties of different monetary systems in the world, right? What is essential is the reasoning leading to the paradox. You didn't object when I offered dishes instead of money, right? INic 11:21, 22 September 2005 (UTC)

Essential for the paradox is that it is mappable to the real line or at least to the natural numbers. In that case you agreed that it is solvable by switching depending on the contents. --NeoUrfahraner

No, not at all. I've shown by examples that it's not true that you have to confine yourself to the real line. And if you do, the paradox isn't solved by the "cleverly switching"-thing, but gets even worse for the subjectivist. INic 11:45, 22 September 2005 (UTC)

Once more: I confine myself to the paradox as presented in this article, which is obviously mappable to the real line. If you want to speak about a different paradox that is not mappable to the real line, feel free to write a separate article about that paradox. --NeoUrfahraner 11:53, 22 September 2005 (UTC)

OK, what do you think I should call it? The Unrestricted Envelope Paradox? The General Envelope Paradox? All I really want is to supplement the text in this article so that it includes the objectivist's view of this problem, not only the subjectivist's view as is the case now. I asked the author earlier if that was OK but he thought that the objectivist / subjectivist thing was a "red herring". And he said that the article must be written from "the neutral point of view". Well, that is certainly not the case as it stands. It's entirely written from a subjectivist standpoint without even mentioning that! INic 12:16, 22 September 2005 (UTC)

I do not know what you should call that article. For the current article, however, switching dependent on the contents of the envelopes may increase the expected gain independently of whether you are objectivist or subjectivist. --NeoUrfahraner 12:24, 22 September 2005 (UTC)

Sure, but as that feature of the real line, for some obscure reason, is part of the solution to the envelope paradox for the subjectivist, that is certainly not the case for the objectivist. Furthermore, for the objectivist Step 2 in the article is not flawed. INic 12:39, 22 September 2005 (UTC)

Why is Step 2 not flawed? --NeoUrfahraner 12:44, 22 September 2005 (UTC)

It follows from the definition of probability for an objectivist, and the axioms of probability. For an objectivist, probabilities are only attributed to events of random experiments. The experiment doesn't change here. Looking in one of the envelopes obviously doesn't change the experiment. If chances are 50-50 before we open an envelope it certainly must be the same after we open one (or both) of the envelopes. This is trivially true for an objectivist. For an objectivist the probabilities for the sides of a die don't change during use, as they do for the subjectivist, as you explained to me before. INic 13:01, 22 September 2005 (UTC)

I put 5 in one envelope and 10 in the other. You chose one envelope at random. It contains 10. What is the probability that the other envelope contains 20? --NeoUrfahraner 13:06, 22 September 2005 (UTC)

The probability of getting 5 is 1/2 and of getting 10 is also 1/2, even when you have 10 in your hand. The probability of getting 20 is 0. Read the definition again and you will see why this is so. INic 13:14, 22 September 2005 (UTC)

You open one envelope and find in it the amount of money A=10. Step 1 says the other envelope may contain A/2=5 or 2A=20. Step 2 says, either amount must be equally likely. So Step 2 says, the probability that the other envelope contains 20 is 1/2. --NeoUrfahraner 13:21, 22 September 2005 (UTC)

No, Step 2 says that either amount must be equally likely. And "either amount" here refers to the first and the second envelope, the only envelopes we have as far as I can see. Steps 1 and 2 together don't imply that the probability is 1/2 that the other envelope contains 20. They only say that if the other envelope contains 20, then its probability is 1/2. In fact, it doesn't matter what it contains. Its probability is 1/2 whatever it contains. INic 13:46, 22 September 2005 (UTC)

I see. I changed the text in the article to make it clearer what the reasoning in Step 2 should be. --NeoUrfahraner 13:52, 22 September 2005 (UTC)

Thanks! However, as it stands now it's trivially wrong as it relies on the principle of indifference. INic 14:09, 22 September 2005 (UTC)

Yes. I added a link. --NeoUrfahraner

The pedagogical problem now is to explain why this isn't just another principle-of-indifference paradox. Why does it require a solution of its own? INic 14:47, 22 September 2005 (UTC)

Maybe for the same reason that Bertrand's paradox (probability) has an article of its own. --NeoUrfahraner 15:09, 22 September 2005 (UTC)

In fact the envelope problem isn't just another principle-of-indifference paradox, as it can be formulated without using that principle. Hence this paradox really deserves an article of its own. INic 20:59, 22 September 2005 (UTC)

I have a better formulation of the argument, showing the differences between us more clearly:

  1. The other envelope may contain either ½A or 2A
  2. The probability that A is the larger amount is 1/2, and that it's the smaller also 1/2
  3. If A is the larger amount the other envelope contains ½A
  4. If A is the smaller amount the other envelope contains 2A
  5. Thus, the other envelope contains ½A with probability 1/2 and 2A with probability 1/2
  6. So the expected value of the money in the other envelope is ½(½A + 2A) = 1¼A
  7. This is greater than A, so you gain by swapping

Here I agree with all statements up to and including 5. INic 08:10, 23 September 2005 (UTC)
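The disagreement about which step fails can be probed numerically. A minimal simulation, assuming for illustration that the envelopes are filled with fixed amounts Y = 5 and 2Y = 10:

```python
import random

# Fill two envelopes with Y and 2Y, pick one at random, and compare the
# strategy "always swap" with "always keep".
random.seed(0)
Y = 5
trials = 100_000

keep_total = swap_total = 0
for _ in range(trials):
    envelopes = [Y, 2 * Y]
    random.shuffle(envelopes)
    keep_total += envelopes[0]   # the envelope we picked
    swap_total += envelopes[1]   # the envelope we'd get by swapping

# Both averages converge to 1.5 * Y = 7.5: swapping gains nothing, so
# the conclusion in step 6 (expected value 1.25 * A) cannot be right.
print(keep_total / trials, swap_total / trials)
```

Neither strategy beats the other, so at least one of the seven steps must fail; the discussion here is about which one.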

Really? If the envelopes were filled with 5 and 10, and you find A=10 in the envelope, the probability that A=10 is the larger amount is 1 and that A=10 is the smaller amount is 0. Also in this formulation Step 2 is wrong (and based on the principle of indifference only). --NeoUrfahraner 08:46, 23 September 2005 (UTC)

Yeah, isn't this neat? You say step 2 is wrong and I claim that step 6 is wrong. That clearly shows our differences. In your case the probability that A is 10 is 1/2 and that A is 5 is also 1/2. If I find 10 in my envelope I still found it with a probability of 1/2. As I told you before, a flipped coin doesn't disintegrate when tossed. The probabilities remain constant. INic 09:38, 23 September 2005 (UTC)

I see. And yesterday was Christmas with probability 1/365. --NeoUrfahraner 09:44, 23 September 2005 (UTC)

Haha! Well, an objectivist doesn't attribute probabilities to arbitrary statements like that, as you subjectivists do. That is nonsense to us. Tell me what your experiment is and what variable you pick at random (you know, define a sample space and all that crap) and I'll be able to answer even this question properly. INic 10:04, 23 September 2005 (UTC)

OK. I take a random number between 0 and 1, multiply it by 365, take its integer part and after that amount of days I come again. See you. --NeoUrfahraner 10:10, 23 September 2005 (UTC)

OK, cheers! And please say hello to Santa from me. INic 10:17, 23 September 2005 (UTC)

[edit] The way to deal with it.

Look in the envelope you chose.

  • If you are happy with the money there, keep it.
  • If you are unhappy, swap, (if you get less, you were already unhappy anyway). (20040302)

[edit] Not really a paradox?

I think the supposed paradox is not really a paradox. The described probability distribution is not continuous, just two discrete choices regardless of the value of A, and the mean is not defined. Taking the average is meaningless.

Consider a similar situation: Suppose there is a 60 million dollar lottery pot with a 1 in 18 million chance of any ticket winning. If I take the average payout over all tickets to estimate my expected value for each ticket I purchase, I should pat myself on the back for what good business sense it is to buy as many tickets as I can, which is nonsense. The same situation exists here. The median (or mode) is a better estimator of the expected value in the lottery case: expect your ticket to be worthless. In this two-envelope situation the mode is not defined, the median is not defined, and the mean is not defined either. You clearly have a fifty-fifty chance of making the right choice regardless of what choice you take.
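The lottery arithmetic behind this comparison is simple to check (a $1 ticket price is my assumption; the comment above doesn't state one):

```python
# Mean payout of one ticket: a 1-in-18-million chance at a $60M pot.
pot = 60_000_000
win_probability = 1 / 18_000_000
ticket_price = 1  # assumed for illustration

expected_payout = pot * win_probability
print(expected_payout)  # about 3.33: by the mean, a $1 ticket looks like a bargain

# Yet the median and modal payout of a single ticket are both 0, which
# is the point being made above: the mean can badly misrepresent what
# you should "expect" from one trial.
```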

In summary, I have trouble with this article as a whole, but perhaps some clarification is needed on what qualifies as a paradox. For instance, I was shown in school how to derive 1=2 by dividing by zero, but I doubt that would be considered to be a paradox.

According to Wikipedia, a paradox is "an apparently true statement or group of statements that seems to lead to a contradiction or to a situation that defies intuition." The envelope paradox certainly satisfies that definition. Gdr 15:26:01, 2005-07-29 (UTC)
No, it doesn't. There is no defying of intuition involved. It's a 50/50 chance to get the envelope with the most money both times you choose. No paradox. 01:49, 1 August 2005 (UTC)
The line of reasoning given in the opening section leads to the conclusion that you have a positive expectation if you choose one envelope at random and then exchange it for the other one. The conclusion is false, but the flaw in the reasoning is subtle (you can see from the discussion above that some people have trouble spotting it). So it's an apparently true set of statements that leads to a contradiction. Hence a paradox. Gdr 07:49:35, 2005-08-01 (UTC)
Then that line of reasoning is just incorrect. An incorrect line of thought does not lead to the topic at hand being subject to it. 172.146.188.57 09:49, 1 August 2005 (UTC)
Yes, the line of reasoning is incorrect. But to some people it appears (at first sight) to be correct. So it's an apparently true group of statements that (apparently) lead to a false conclusion. So it's a paradox. Gdr 10:09:39, 2005-08-01 (UTC)
More specifically, #3 of the opening is incorrect. "So the expected value of the money in the other envelope is ½(½A + 2A) = 1¼A" is not the case. The expected value is either half what you have, or twice what you have. Same odds then of picking the one with the most money as when you first made your choice. 172.146.188.57 09:53, 1 August 2005 (UTC)
"Expected value" is being used in its mathematical sense here. Gdr 10:09:39, 2005-08-01 (UTC)

[edit] Solution to third paradox

I strongly feel that this section needs reconsideration. Argument 2 doesn't say that A equals Y and 2Y at the same time; it simply says that the difference between the envelopes is constant (2Y−Y or Y−2Y), which is true. As far as I can tell, neither argument 1 nor argument 2 is plainly wrong, but both are incomplete by not taking probability distributions into account. Tkalayci 05:01, 2 August 2005 (UTC) To clarify, it is true that the two arguments are in conflict (which is why this is a paradox), but there's no reason to prefer one over the other. Argument 2 doesn't mention the found value A; it's the point of view before opening the envelope. Regards. Tkalayci 05:18, 2 August 2005 (UTC)

I think the line of reasoning in the latest edit is faulty:
Therefore the third paradox is fundamentally flawed since it does not use probabilities.
I have pondered this very form of the paradox for some time and there is a subtle point here. The core of the third paradox really has nothing to do with probabilities! In other words, we are no longer concerned with whether or not it is profitable to switch, because nobody ever gave us specific information about the probability distribution the sums follow. Without such information, we cannot possibly decide which strategy is the best. However, we do know what is to be gained, if we do gain, and what is to be lost, if we do lose.
The two arguments are concerned with the possible gain or loss and not in the least with probabilities. The actual paradox is how to reconcile the two, since the first says the amount you might gain is strictly greater than the amount you might lose, and the second says the amount you might gain is equal to the amount you might lose. We seem to have a contradiction here that holds regardless of the probability distribution (although there must be some legitimate pdf, of course).
The answer, that was already given in a previous edit, is that the two arguments do not, in fact, contradict each other, because they apply to different situations. Things do change when we pick an envelope, if not in the actual world (the contents of each envelope), at least in the set of possible strategies left. We could speak of conditional probability, but since probabilities are unknown (and irrelevant), I will use the term conditional possibility, which I take the liberty to coin right now.
Before we choose an envelope, there are four possibilities. Namely:

  • a)To choose the envelope with the smallest amount and swap,
  • b)To choose the envelope with the largest amount and swap,
  • c)To choose the envelope with the smallest amount and not swap, and
  • d)To choose the envelope with the largest amount and not swap.

Either way, if we call Y the smallest amount, if we gain we gain Y, but if we lose, we lose Y (second proposition). Notice that we do not hold amount Y in our hands. We cannot pinpoint this amount, it could be in either envelope. Therefore, we have yet to decide which one to pick.
Now, suppose we go on and pick an envelope, some way. We don't even have to open it. Given this choice, our possibilities are narrowed down to two: call X the content of our envelope, and notice that this amount is now fixed, i.e. we are holding it in our hands. The other envelope now contains either X/2 or 2X. If we gain, we gain X, but if we lose, we lose X/2 (first proposition). Notice that this by no means implies that we shall profit by swapping! Depending on the pdf, it might be profitable, unprofitable or indifferent to swap, but the possible gain or loss is fixed.
In short, going ahead and picking an envelope narrows down the possibilities - that is what I mean by conditional possibility. Therefore, the two propositions do not contradict each other, as they refer to different "stages" of the "game". --Toredid 13:59, 3 August 2005 (UTC)

I have to say I'm unhappy with the current revision. "Nobody ever gave us specific information about the probability distribution the sums follow": no, we know the rules of the game, from that we are able to calculate the pdf, and there is the precise solution which gives the correct answers. Why is the correct solution not included in the article, and why are we complicating the matter by discussing irrelevant things? Consider what happens when the contents of the other envelope is revealed. We see that the second argument still holds true (the amount to be gained or lost was equal to the constant difference between the envelopes), but the first argument was just speculation (either X/2 or 2X was never there). Therefore the first argument is worthless when you ignore the probabilities. This is generally true, otherwise you could say buying a lottery ticket is always a good idea. Toredid, if you are convinced please revert again or rewrite a better version. Regards--Tkalayci 85.101.166.181 16:20, 3 August 2005 (UTC) ok, whatever 85.101.166.181 18:10, 3 August 2005 (UTC)
I must say, it was certainly not in my intentions to cause an edit war. If I reverted changes, I did it because I sincerely think that the "intuitive" approach, despite its plausibility, is technically faulty. To be more specific, there can be no uniform pdf, either on the natural numbers or on the positive reals, because these sets have infinite "measure" and the pdf would have to tend to zero, which is unacceptable. Whoever puts the sums in the envelopes therefore has to follow some other pdf, and there can be many. Without such information, the "gambler's" knowledge is incomplete. In other words, just because the two envelopes are indistinguishable doesn't mean they are "equally likely" (in fact, we have shown they cannot be equally likely). As for the claim "On the natural numbers, the pair (A, ½A) is twice as frequent as the pair (A, 2A)", it is clearly not true for real numbers A (that is, for every A), simply because for A' = 2A the one kind of pair turns into the other. If we choose natural numbers, things are different of course, but that is a different problem altogether.
I would like to present here a more detailed analysis, showing you some possible ways to choose the sums and put them in the envelopes, so you could see that different preparations lead to different pdfs and winning strategies (much like Bertrand's paradox, actually), but I'm afraid I don't have the time right now. If someone else can do this, by all means do it and improve the article. Regards, --Toredid 18:33, 3 August 2005 (UTC)

[edit] Vote for Deletion

This article survived a Vote for Deletion. The discussion can be found here. -Splash 01:57, 18 August 2005 (UTC)

[edit] 3rd paradox

  1. Let the amount in the envelope you chose be A. Then by swapping, if you gain you gain A but if you lose you lose A/2. So the amount you might gain is strictly greater than the amount you might lose.
  2. Let the amounts in the envelopes be Y and 2Y. Now by swapping, if you gain you gain Y but if you lose you also lose Y. So the amount you might gain is equal to the amount you might lose.

The two arguments are correct but they apply to different states of knowledge. The paradox arises if you pretend that they can be applied to the same situation.

In the second argument, the values in both envelopes are known, but you don't know which one you've got. So this argument is correct before you open either envelope.

In the first argument, the amount in the envelope you opened is known, but you don't know what's in the other envelope. So this argument is correct after you open one of the envelopes.

So according to this, I should swap once I open an envelope? Yeah? Jooler 12:18, 15 September 2005 (UTC)
Yeah, this is as silly as it gets. This 'explanation' is not even close to a convincing solution to the problem. What is meant by 'different states of knowledge'? Does it mean that two different persons opening one envelope each are both entitled to reason as in the former case, and that they both gain by trading their envelopes with each other? This is just silly. To call this a solution is ridiculous. INic 13:04, 15 September 2005 (UTC)
I agree: there is no knowledge involved in the arguments. To say that "A is the amount in the envelope I chose" does not mean that anybody knows A in any sense!!--Pokipsy76 16:32, 24 October 2005 (UTC)

If you find $10 in the envelope you open, then you know that the other envelope contains $20 or $5. So if you swap, you will either gain $10 or lose $5. So the amount you could gain is strictly greater than the amount you could lose. (However, this doesn't mean that you should swap, because to know if you should swap, you have to know your expected gain if you do swap, which depends on the probabilities of the two possibilities, which are unknown.) Gdr 13:50, 18 September 2005 (UTC)

The whole point of having this third version of the paradox is that it doesn't use or depend on probabilities. It uses logic and mathematics alone (its inventor is a logician). Therefore the correct solution should depend on logic and mathematics alone. To use probability considerations when solving this version is to miss the whole point of having this version in the first place. INic 22:51, 18 September 2005 (UTC)

[edit] The Envelope Paradox in Economics

By: Big Morg

My understanding of this paradox is a little different from what I have seen posted here. The paradox I am familiar with is the Nalebuff paradox (named after an economist) and deals with choice under uncertainty and the Morgenstern/von Neumann utility function. I will explain the situation as I know it and then explain why it is a paradox. There are two envelopes (X & Y). You are given envelope X and told that it contains a positive sum of money. You are then told that you have the option of swapping X for Y. You are told that Y has either twice the amount of money as X or half the amount of money as X, and that the probabilities are equal (there is a 50% chance of getting double and a 50% chance of getting half). The question is: do you swap? There are several important factors. The first is that at no point do you open either envelope until you are done with the exercise entirely. This means that you don't know how much money is in either until it's over and you can't swap any more (that's the uncertainty part). The second factor is that you must assume that you are risk neutral, which means you operate only on the principle of expected values (this is the utility function part). So you are given X which has, say, ten dollars in it. Your expected value then is ten dollars from X. You are then given the option to swap X for Y, which has a fifty percent chance of having twenty dollars in it and a fifty percent chance of having five dollars in it. Since these probabilities are equal, the expected value is simply the average: in this case twelve and a half dollars, or 125% of X. You now have Y and are given the option of swapping back for X (you haven't opened either one, remember). Now that you have Y you know that X has either twice the amount of Y or half the amount of Y (with equal probability again). So, operating only on expected value, you switch back.
Because the expected value of whichever envelope you don't have is always higher than that of the envelope you do have, you will always switch. Over and over and over and over again. You have to assume risk neutrality and the uncertainty for this to work. It is a paradox because even though you are acting to maximize your utility, which is an axiom of economics, you are violating the axiom of rationality.
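The expected-value step that drives the endless switching can be written out explicitly. A minimal sketch of the (questionable) equal-probability reasoning described above:

```python
def expected_other(v):
    # Under the assumption that the unopened other envelope is equally
    # likely to contain 2*v or v/2, its expected value is 1.25 * v.
    return 0.5 * (2 * v) + 0.5 * (v / 2)

# Whatever amount v you currently hold, the formula values the other
# envelope at 125% of v, so a risk-neutral agent swaps forever.
for v in [10.0, 12.5, 100.0]:
    print(v, "->", expected_other(v))  # e.g. 10.0 -> 12.5
```

Since expected_other(v) > v for every positive v, the reasoning never terminates, which is the irrationality being pointed out.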

Nalebuff's envelope problem is essentially the same as the paradox described in this article; see Aaron S. Edlin, Forward Discount Bias, Nalebuff's Envelope Puzzle, and the Siegel Paradox in Foreign Exchange [1]. I doubt Nalebuff's presentation is the original one, but he certainly popularized the paradox.
However, your description of the problem is slightly too vague to generate the paradox; it leaves open possibilities in which it is indeed to our advantage to swap (for example, if the amount X was fixed and the amount Y was chosen by flipping a coin). Gdr 18:41, 18 September 2005 (UTC)

[edit] History of the paradox

This article ought to have a section on the history of the paradox. The first presentation I know of is that of Kraitchik [2] (1953, but does the paradox also appear in the 1930 first edition?), which puts it in the form of wallets rather than envelopes:

Two people, equally rich, agree to compare the contents of their wallets. Each is ignorant of the contents of both wallets. The game is as follows: the one who has the less money receives the contents of the other's wallet (in the case where the amounts are equal, nothing happens). One of the two men may reason: "Suppose I have an amount A in my wallet. That is the most I can lose. If I win (probability 0.5), the final amount in my possession will be more than 2A. So the game is favourable to me." The other man reasons in exactly the same way. Of course, by symmetry, the game is fair. Where is the error in each man's reasoning?

Martin Gardner [1], in 1982, also puts the paradox in the form of wallets. However, Nalebuff [3], in 1989, uses envelopes, and that's the form in which I first encountered it.

  1. Martin Gardner, Aha! Gotcha, W. H. Freeman and Company, New York, 1982.
  2. Maurice Kraitchik, La mathématique des jeux, second edition, Editions techniques et scientifiques, Brussels, 1953.
  3. Barry Nalebuff, "Puzzles: the other person's envelope is always greener", Journal of Economic Perspectives 3 (1989) 171–181.

Is the paradox original to Kraitchik? Perhaps it is, in the form of wallets, but [2] points out that the envelope paradox is rather similar to Pascal's wager! Gdr 19:01, 18 September 2005 (UTC)

[edit] The Most Basic Version is Lacking (!)

I just realized that the most basic version of this paradox is lacking in the article! It's the version where we don't look in the envelope we pick and only assume it contains A. The same reasoning leads to the same conclusion in this case, but the solution(s) proposed in this article can't be applied. I don't want to start an edit war (as I can't write anything here without irony ;-) so can someone else please add this version of the paradox and its proposed solution? Logically it should be placed above the other solutions I think, and the numbering of the other solutions should be increased by one, i.e., the second paradox should be renamed 'The third paradox' and so on. INic 12:15, 28 September 2005 (UTC)

You seem to have the opinion that this case is covered by the other cases. Let's say I put $5 and $10 in the envelopes, telling you what amounts I've put in them. You point at one of the envelopes at random. Let's denote the content of that envelope by A. Now I offer you the option to take the other envelope. The reasoning for a switch is still compelling. Do you still think that the flaw in the reasoning is that there is no uniform prior on the real line? Or that the prior has infinite expectation? We don't need any prior here for obvious reasons, so you can't blame them here. Please let me know your opinion and why you don't want to speak of this case in the article (as you reverted my addition of it). INic 16:03, 3 October 2005 (UTC)

The lack of a uniform distribution on the reals just emphasises the main point: that the deduction relies on the assumption that P(A/2, A) = P(A, 2A), which is not justified. (In your example, it's clearly not justified, since one of those probabilities is 1 and the other 0!) Gdr 19:14, 3 October 2005 (UTC)

I don't see this at all. I see symmetry all the way. If you point your finger on one of the envelopes it should be quite clear that it contains $5 with probability 1/2 and $10 with probability 1/2, right? It's the act of looking in one envelope that magically changes the subjective probability, remember? May I quote you "So, before looking in the envelope, it would be correct to deduce that the probability of having picked the smaller envelope is ½." So even according to yourself step 2 in the argument can't be wrong here. The question is what step is wrong instead? INic 23:19, 3 October 2005 (UTC)

[edit] Solution to 2nd paradox

[edit] Part 1

I read:

The distribution in the statement of the second paradox has an infinite mean, so before you open any envelope the expected gain from switching is ∞ − ∞, which is not defined.

And some lines above I read:

So your expected gain if you switch is
½(2^n q(1 − q)^n − 2^(n−1) q(1 − q)^(n−1))/R
= 2^(n−2) q(1 − 2q)/R

So somewhere something is false: either the gain is undefined or it is equal to the value given above, and if the second quote says something false it should be explained what the mistake was. --Pokipsy76 17:36, 24 October 2005 (UTC)

It is undefined and the computed value is false; that is exactly the explanation that solves the second paradox: at first sight, the expected gain is 2^(n−2) q(1 − 2q)/R, but on closer inspection one sees that one actually calculated ∞ − ∞, which is undefined. --NeoUrfahraner 14:23, 27 October 2005 (UTC)
But 2^(n−2) q(1 − 2q)/R is defined for any n.
Moreover I don't see why the expected gain from switching is ∞ − ∞: if we compute the expected gain from switching we find the divergent series
q/2 + (q(1 − 2q)/2) Σ_{n=1}^∞ (2(1 − q))^(n−1)
--Pokipsy76 13:18, 3 December 2005 (UTC)
It is like calculating (∞ + 1) − ∞ = 1. Clearly 1 is defined, but this does not mean that (∞ + 1) − ∞ is defined. To be more precise: When you compute the expected gain, you get a divergent series. When you try to compute the difference between the expected gains, you actually compute the difference between divergent series. When you formally reorder the terms in the two divergent series, you may get a seemingly well defined result, but the result is clearly nonsense because using a different reordering, you may get a different result. This is a well known pitfall in the theory of infinite series. A simple example is 1−1+1−1+.... You may either say the sum is 1 by grouping it as 1 + (−1+1) + (−1+1) + ... or that it is zero by grouping it as (1−1) + (1−1) + (1−1) + ... --NeoUrfahraner 16:53, 3 December 2005 (UTC)
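The regrouping pitfall mentioned here is easy to reproduce. A minimal sketch (truncating at N bracketed groups is of course only an illustration, since the true series never ends):

```python
# The series 1 - 1 + 1 - 1 + ... has no well-defined sum: a formal
# regrouping of its terms changes the result.
N = 1000  # number of bracketed groups; illustration only

# Grouped as 1 + (-1 + 1) + (-1 + 1) + ... each bracket is 0, total 1.
grouped_as_one = 1 + sum((-1 + 1) for _ in range(N))

# Grouped as (1 - 1) + (1 - 1) + ... each bracket is 0, total 0.
grouped_as_zero = sum((1 - 1) for _ in range(N))

print(grouped_as_one, grouped_as_zero)  # 1 0
```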
Ok, now I understand your way of looking at the situation but I still don't find that this makes things clear about the "paradox". The problem is that we *have proved* that for any possible n, if my envelope contains 2^n then the best thing to do is to change envelopes. We can say exactly what expected gain I receive: it is 2^(n−1)(1 − 2q)/(2 − q) (another way to express 2^(n−2) q(1 − 2q)/R). It is a well-defined positive number for any possible n. So now let's leave probability theory and use pure logic: if for any possible sum that I could find in the envelope the best thing is to change, WHY should I waste time looking at it? Let's change immediately! *This* sounds paradoxical, and this is not explained by the argument about ∞ − ∞!!--Pokipsy76 17:25, 3 December 2005 (UTC)
Let's put it in a different way: If you don't swap, the average amount you find in the envelope is ∞. If you always swap, the average amount you receive is ∞ + q/2 + (q(1 − 2q)/2) Σ_{n=1}^∞ (2(1 − q))^(n−1), which is actually the same as without swapping. If you already get infinity on average, you will not get more by swapping. --NeoUrfahraner 21:41, 3 December 2005 (UTC)
What you say is correct, but it's not a way to solve a paradox: yours is not "a solution", it is a (correct) way to look at the problem. To solve the paradox you have to show the mistake in the paradoxical argument. The argument which leads to the paradox doesn't involve in any way the "average amount I receive if I swap"; it involves the average amount *for any fixed n*, which is finite. So you didn't show where the mistake is.--Pokipsy76 09:11, 4 December 2005 (UTC)
The "average amount *for any fixed n*" is a conditional expectation. The conditional expectation, however, is only defined if the original expectation exists, see e.g. Ash, Real Analysis and Probability, Theorem 6.4.3. Although you can formally compute "the average amount *for any fixed n*", this is actually meaningless since the necessary preconditions are not fulfilled. --NeoUrfahraner 20:29, 4 December 2005 (UTC)
Can you cite the exact words of the textbook that says that the conditional expectation is meaningless if the expectation is infinite?--Pokipsy76 22:43, 4 December 2005 (UTC)

Actually the result is not as explicit as you (or I) would like to have it. I could cite the exact words of the theorem but the main item is hidden quite well. You can already find the essential information in Wikipedia (also hidden quite well). In conditional expectation you read "If X is an integrable random variable ...". Integrable means Lebesgue-integrable, which by definition means that the integral (i.e. the expectation value) is finite. In our case X is not an integrable random variable, which means that our mathematical toolbox no longer works. Although we thought we were computing a conditional expectation, the preconditions were not fulfilled and we have no mathematical theorems available that guarantee that the formally computed value actually has the properties and meaning we expected. Unfortunately I do not know of more results saying explicitly what can happen when you formally compute the conditional expectation of a non-integrable random variable; you can just consider the envelope paradox as such an example. IMHO the burden of proof is on the side of the person who says you should swap because the conditional expectation (based on formal computation without checking the necessary preconditions) is positive, not on the side of the person who says swapping makes no difference. --NeoUrfahraner 08:09, 5 December 2005 (UTC)

The simple fact that some textbooks define the conditional expectation only for those variables that have a finite mean does not imply that it would make no sense to define it for a variable that is positive and has an infinite mean: you need some argument to support the claim that it is meaningless (if it is). Is there any theorem that is valid just in the case of finite mean and that we are using in the argument of the paradox? Otherwise it is not clear what the *real* problem with the infinite expectation is. The infiniteness of the expectation can suggest that we are in a "strange" situation, but does not "solve" the paradox.--Pokipsy76 20:07, 5 December 2005 (UTC)
No, as I said already the burden of proof is on the side of the person who says you should swap. This person has to provide the necessary mathematical theorems to prove that the formally computed conditional expectation is correct in the case of infiniteness. --NeoUrfahraner 06:51, 6 December 2005 (UTC)
1) As a mathematician I really don't know what it means to prove that a formally correct computation is indeed "correct". How would you prove, for example, that the computation is correct when the expectation is finite?
This is indeed a very good question. How do we know that Andrew Wiles' proof of Fermat's Last Theorem is indeed correct? Actually several experts checked the proof, found no mistake, and then it was published. Everyone can check it. It is considered to be correct, but at least in theory it might happen that some day someone finds an error or some missing step in the proof that has previously been overlooked by the experts. --NeoUrfahraner 08:56, 6 December 2005 (UTC)
UHm.... and so...?--Pokipsy76 09:06, 6 December 2005 (UTC)
And so you can provide the necessary mathematical theorems to formally compute the conditional expectation. This is acceptable until someone finds an error (e.g. you assumed finiteness, but your random variable has infinite mean). --NeoUrfahraner 09:29, 6 December 2005 (UTC)
And what are the theorems "to formally compute the conditional expectation"? I don't know of any such theorem.--Pokipsy76 09:51, 6 December 2005 (UTC)
See Conditional expectation --NeoUrfahraner 10:03, 6 December 2005 (UTC)
There are no theorems "to formally compute the conditional expectation", indeed, and the given definition "works" also in the case of an infinite expectation.--Pokipsy76 10:23, 6 December 2005 (UTC)
Didn't you read "... and with finite first moment, the expectation is explicitly given by the infinite ..."? Although not explicitly stated, this is a theorem ("the expectation is explicitly given by"), and "finite first moment" just means finite expectation. --NeoUrfahraner 10:32, 6 December 2005 (UTC)
I take it to be the definition of conditional expectation for a discrete random variable. Otherwise what should be the definition?--Pokipsy76 10:46, 6 December 2005 (UTC)
Some of the papers mentioned in "Further reading" discuss the problem of infinite means in more detail. Maybe you should check them. I will do the same. --NeoUrfahraner 06:10, 7 December 2005 (UTC)
2) To "solve" a paradox you have to show a mistake in the argument in such way that everyone should be able to agree that it is actually a mistake. To say: "it's up to you to prove that you are right" is not to solve a paradox.--Pokipsy76 07:52, 6 December 2005 (UTC)
OK, I claim that the Riemann hypothesis is false. Prove that I am wrong in such way that everyone should be able to agree. --NeoUrfahraner 08:56, 6 December 2005 (UTC)
What has it to do with "solving a paradox"?--Pokipsy76 09:06, 6 December 2005 (UTC)
The burden of proof is on the side of the person who makes a claim. If someone computes the conditional expectation and I find an error in his computation, he cannot claim that the computation is still correct. Finding one missing link in the chain of argumentation is enough to solve the paradox. --NeoUrfahraner 09:29, 6 December 2005 (UTC)
To call yours a solution everyone should be able to agree that there is a missing link. If you keep saying that it's not up to you to prove that there is a missing link how would you hope that people can get convinced that there is actually a missing link?--Pokipsy76 09:51, 6 December 2005 (UTC)
Again see Conditional expectation. There you find "If 'X' is an integrable random variable", so there is no guarantee that the formalism works when 'X' is not integrable/has infinite mean. --NeoUrfahraner 10:03, 6 December 2005 (UTC)
The mathematics you are speaking about don't say anything about the cases when the formalism "works" in the real world, they just give definitions and properties of an abstract mathematical object (called "conditional expectation") given a set of hypothesis. If instead you give to the word "works" a mathematical meaning than I don't understand what do you mean.--Pokipsy76 10:23, 6 December 2005 (UTC)

Part 2

After some thinking let me try to explain it in a different way. Denote by A the amount in the first envelope and by B the amount in the second envelope. Before we open any envelope, the equality E(A)=E(B) must clearly hold for the expectation values. Now we open the first envelope, which contains A, and compute the amount in the second envelope by E(B)=E(A)+E(B−A|A). So if E(A) is finite and E(B−A|A) is not equal to zero, we obtain a contradiction, i.e. the paradox. Fortunately we found that there is no distribution for A such that E(A) is finite and E(B−A|A) ≠ 0.

This leaves the case when E(A) is infinite. In this case, however, the above argumentation does not yield a statistical contradiction, so this is no longer a statistical paradox and we could stop here. Anyway, you will say that it is rational to swap when E(B−A | A) > 0. But why should it be rational to swap when E(B−A | A) > 0? Actually this just means that in the long run you will get more in this case. Since you will already get infinity in the long run if you do not swap, you cannot get more when you swap. It is also rational to say "a bird in the hand is worth two in the bush". Although the amount you might win is higher than the amount you might lose when you swap, the probability that you lose is higher than the probability that you win. So for a fixed A, chances are small that you win in a single case; and in the long run it will make no difference. Actually you can consider the random variable X = B − A, which satisfies E(X) > 0 but E(|X|) = ∞. You should then find something like P(X_1 + … + X_n ≤ 0) ≥ 0.5 for every n by numerical calculation or by Monte Carlo simulation. --NeoUrfahraner 20:45, 7 December 2005 (UTC)
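The Monte Carlo check suggested above can be sketched in a few lines. This is only an illustration under an assumed prior (the geometric example used later in this thread: pairs (2^n, 2^(n+1)) with probability q(1−q)^n, q = 1/4, envelope opened at random); the function names are mine, not from the discussion.

```python
import random

def sample_swap_gain(q=0.25, rng=random):
    """One draw of X = B - A: the gain from swapping a randomly opened
    envelope, where the pair is (2**n, 2**(n+1)) with P(n) = q*(1-q)**n."""
    n = 0
    while rng.random() >= q:    # geometric draw of the pair index n
        n += 1
    small, large = 2 ** n, 2 ** (n + 1)
    if rng.random() < 0.5:      # we opened the smaller envelope
        return large - small
    return small - large        # we opened the larger envelope

def prob_sum_nonpositive(n_swaps, trials=20000, seed=1):
    """Monte Carlo estimate of P(X_1 + ... + X_n <= 0)."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if sum(sample_swap_gain(rng=rng) for _ in range(n_swaps)) <= 0
    )
    return hits / trials

for n in (1, 5, 20):
    print(n, prob_sum_nonpositive(n))
```

For n = 1 the estimate should sit near 1/2 (the opened envelope is the smaller one with probability exactly 1/2); for larger n the heavy upper tail of X keeps the median of the partial sums from drifting upward, which is the effect NeoUrfahraner describes.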

You must agree that *once an envelope has been opened* it is really rational to swap, because in the long run, *given the amount in the open envelope*, you will gain more if you always swap, because the expectation is finite in both cases and is greater in the case of swapping. And you must agree that this is true for any possible amount inside the opened envelope. So the paradox is that it seems that we can say "for any possible amount you will find in the envelope it is better to swap" but we cannot conclude that "it is better to swap *a priori*".--Pokipsy76 14:51, 8 December 2005 (UTC)
No. In the long run I will not get more because I get infinity in both cases. To create a paradox you must offer an argument that always swapping makes sense in the short/finite run. In particular, in the short run the probability that you lose money is higher than the probability that you win money by swapping. --NeoUrfahraner 07:56, 9 December 2005 (UTC)
There is nothing wrong in what I wrote. It's correct to say that in the long run you get infinity in both cases, but I'm saying a different thing: I'm saying that *given a fixed amount in an opened envelope* in the long run you don't get infinity in both cases AND you get more if you swap. If you don't believe it, try to make a statistical experiment where you have some fixed amount in your envelope (say $100) and the conditional probability of getting more or less when you swap is given by the formulas above.--Pokipsy76 18:33, 9 December 2005 (UTC)
Actually I do not think you made such an experiment yourself, otherwise you would know better. Of course you should not have a fixed amount in the envelope, because this is not the situation of the paradox. At first let us fix the distribution, e.g. the "Marcus Moore" example: the envelopes contain 2^n and 2^(n+1) with probability q(1 − q)^n where q=1/4, n=0,1,... Then fix some swapping strategies: swap never, swap always, swap when you find less than some limit L (L=0 is "swap never", L infinite is "swap always"). Now create M pairs of (closed) envelopes according to the fixed distribution. Now randomly open one of the envelopes in each pair and swap according to the different swapping strategies. At the end count which strategy wins most. Repeat this S (e.g. S=1000) times with the same values for q, M, and L. You will find that "swap always" and "swap never" perform equally but that swap-L might perform better. Actually after filling the M envelopes, you are again in the "finite mean" case; in particular, when you swap always, the amount you win from swapping the envelopes with a small amount you will lose when you swap the envelopes with the top amount. If you are told the maximum in the M envelopes, you can choose L in a way to perform best: just swap when you are below the maximum. When M increases, the maximum increases and the optimal L(M) increases. This still holds when you do not get any information about the maximum in the envelopes: if you may choose, say, 10 times from different pairs of envelopes, it might be reasonable to swap when you find less than 16 in the envelope; if you may choose 100 times, it might be reasonable to swap when you find less than 128. The fact that the conditional mean is always positive just means that there is no finite limit for L. For every finite M, however, "swap always" and "swap never" still perform equally. --NeoUrfahraner 06:26, 12 December 2005 (UTC)
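The experiment described above can be sketched concretely. The code below is only an illustration of the protocol (the helper names are mine): fill M pairs from the geometric distribution, open one envelope of each pair at random, apply every threshold strategy to the same opened envelopes, and count over S repetitions how often each threshold ties for the best total.

```python
import random

def play_games(M, q, rng):
    """Create M pairs (2**n, 2**(n+1)) with P(n) = q*(1-q)**n and open
    one envelope of each pair at random; returns (opened, other) tuples."""
    games = []
    for _ in range(M):
        n = 0
        while rng.random() >= q:
            n += 1
        small, large = 2 ** n, 2 ** (n + 1)
        games.append((small, large) if rng.random() < 0.5 else (large, small))
    return games

def total(games, L):
    """Winnings when swapping iff the opened amount is below the limit L.
    L = 0 means 'swap never'; L = float('inf') means 'swap always'."""
    return sum(other if opened < L else opened for opened, other in games)

def compare(M=10, S=10000, q=0.25, limits=(0.0, float('inf'), 1024.5), seed=2):
    """Count, over S repetitions, how often each threshold ties for best."""
    rng = random.Random(seed)
    wins = [0] * len(limits)
    for _ in range(S):
        games = play_games(M, q, rng)
        totals = [total(games, L) for L in limits]
        best = max(totals)
        for i, t in enumerate(totals):
            if t == best:
                wins[i] += 1
    return wins

print(compare())   # counts for [swap never, swap always, L = 1024.5]
```

Since every strategy sees the same envelopes and the same random openings, the comparison isolates the effect of the threshold, which is the point of the protocol.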
You have just given another formulation of the paradox: as you are saying, if L>L' then the strategy "swap when you find less than L" works better than the strategy "swap when you find less than L'". This holds for any L and L'. The natural conclusion seems to be that whatever you find, the best thing to do is to swap. But this conclusion is false and this seems (again) to be paradoxical. Finally I want to point out that your argument doesn't say anything against my argument above; yours is just another point of view that *still* leads to a paradoxical conclusion.--Pokipsy76 19:42, 12 December 2005 (UTC)
No. L will work better than L' only if L is smaller than the maximum amount you find in the M pairs of envelopes. If M is small (in particular, if M=1) and L is large (say L=2^n), L'=1.5 will perform better with probability 1-q^n, which is arbitrarily close to one. A different example: if you have the option either to put your money into a savings account where you will get 2.5% per year for sure, or to spend your money on some risky investment where you will double your money with probability 60% or lose your money completely with probability 40%, do you say it is rational to spend all your money on the risky investment? --NeoUrfahraner 20:37, 12 December 2005 (UTC)
You are completely changing the point of the discussion. However - despite your "no" - if L>L' then in the long run L does work better than L'. This is easily obtained by direct computation of the expected difference between the gains of the two strategies. This, as I was pointing out, is another way to see that there is a paradox (not a way to solve it!)--Pokipsy76 10:23, 13 December 2005 (UTC)
You said You must agree that *once an envelope has been opened* it is really rational to swap, because in the long run, *given the amount in the open envelope* you will gain more if you always swap. I changed the point of the discussion to make it clear that it is not rational to base a decision on the long run only. You might be bankrupt in the short run before the long run begins. Do you still think it is rational to look at the conditional expectation only? You said: So the paradox is that it seems that we can say "for any possible amount you will find in the envelope it is better to swap" but we cannot conclude that "it is better to swap *a priori*". Do you still see any paradox if we fix some finite M a priori? Did you make any numerical simulations that support the existence of a paradox? Here are just some of my simulation results:
M=1, S=1000: always vs. never: always swapping is better in 457 cases, never swapping is better in 543 cases.
M=1, S=1000: always vs. L=1.5: always swapping is better in 346 cases, L=1.5 is better in 517 cases, both are equal in 137 cases.
M=10, S=10000: always vs. never: always swapping is better in 5055 cases, never swapping is better in 4914, both are equal in 31 cases.
M=10, S=10000: always vs. L=1024.5: always swapping is better in 1757 cases, L=1024.5 is better in 2201, both are equal in 6042 cases.
There is no significant difference between always swapping and never swapping; it is also visible that the "optimal" L-value increases with M, but there is no paradox at all. --NeoUrfahraner 12:53, 13 December 2005 (UTC)
Just a question: suppose you own 2^n and you are given the opportunity to play a game where you can lose 2^(n−1) with probability q(1 − q)^(n−1)/R and you can win 2^n with probability q(1 − q)^n/R. Do you think it is rational to play? If your answer is "NO" then the paradox doesn't exist at the very beginning, even before you compute the total expectation and find it is infinite (it would be completely irrelevant to compute the expectation). Is your answer "NO"?--Pokipsy76 12:55, 14 December 2005 (UTC)
It depends on n. For small n I would play, for large n I would not. Depending on the actual winning chances, I might go up to approximately 1000 EUR, that is n=10. --NeoUrfahraner 13:55, 14 December 2005 (UTC)
I am not sure whether you are supposing that I own 2^n exactly and ask me whether I would risk losing half of my money by gambling. In that case, my answer is clearly "NO". I do not think it is rational to use a large percentage of your money for gambling (or for high-risk speculation), even if your chances to win are high. --NeoUrfahraner 14:04, 14 December 2005 (UTC)
If - as I understand - your point is that one cannot (in general) say that it is better to play a game with positive expected gain, then this point of view actually neutralizes the paradox at the very beginning, and there is no need to talk about the infiniteness of the expectation (it would be misleading).--Pokipsy76 19:36, 14 December 2005 (UTC)
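The side bet in Pokipsy76's question can be evaluated exactly. The little computation below is my own illustration (assuming q = 1/4 as in the example discussed above; R is just the normaliser that makes the two weights sum to one):

```python
from fractions import Fraction

def gamble_stats(n, q=Fraction(1, 4)):
    """You hold 2**n.  You lose 2**(n-1) with weight q*(1-q)**(n-1)
    and win 2**n with weight q*(1-q)**n; R normalises the weights."""
    w_lose = q * (1 - q) ** (n - 1)
    w_win = q * (1 - q) ** n
    R = w_lose + w_win
    p_lose, p_win = w_lose / R, w_win / R
    expected_gain = p_win * 2 ** n - p_lose * 2 ** (n - 1)
    return p_lose, p_win, expected_gain

for n in (1, 3, 10):
    print(n, gamble_stats(n))
```

With q = 1/4 the probabilities come out as p_lose = 4/7 and p_win = 3/7 for every n, while the expected gain 2^(n−1)·2/7 is positive and grows with n: exactly the tension the thread keeps circling, a bet that loses more often than it wins yet has positive expectation.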

Part 3

Actually I see it the opposite way: in the beginning there was the paradox that E(B−A|A)>0 but E(B)=E(A), which was solved by seeing that actually E(B−A|A) is negative for some A. If E(A) is infinite, this is no paradox, so there is no reason to lose many words about it. --NeoUrfahraner 20:42, 14 December 2005 (UTC)

But this is not the way of the article. In the article (see "A second paradox") you are told that the paradox is in the conclusion that "you should always chose the other envelope": it's about the decision to be taken, not about the mathematics involved; nobody is surprised by the mathematical computation itself: it's the conclusion about the decision that seems paradoxical.--Pokipsy76 21:33, 14 December 2005 (UTC)
When I saw the paradox for the first time, I was surprised by the mathematical computation itself. It was clear to me that there must be some error in the computation, but it took me some time to spot the particular error. Maybe you saw from the very beginning that Step 2 is wrong; I didn't. The conclusion that "you should always chose the other envelope" because the conditional expectation is higher is as natural as saying "If one liter of wine costs 4 Euro, then 100 liters cost 400 Euro". It is a natural approximation, but in real life you might get better conditions when you buy more. So the conditional expectation is a good approximation in a first step, but it does not explain the complete reality - in particular, it is paradoxical from the beginning since it does not explain why people are playing roulette in spite of the fact that their conditional expectation is negative. In the same way the conditional expectation does not work for large amounts of money, in particular when the risk is unbounded. --NeoUrfahraner 06:11, 15 December 2005 (UTC)
1) Even if you were surprised by the computation itself, the paradox is not the computation; it is the decision to be taken based on that computation.
2) I didn't see that taking a general decision based on the positiveness of the conditional expectation is "wrong"; I think that there is some abstract way to look at it that makes it correct. But if someone (like you before) wants to deny this I can't give any argument; I'm just not sure that this is the real problem of the paradox, but I must recognize that it is a good point. However it is not relevant in the discussion about the implication of the infinite mean.
3) All your argument about the conditional expectation being a "natural approximation" that fails in some real cases holds just as well in the cases where the expectation is finite. So what does it matter if the expectation is infinite? Why in the so-called "solution of the 2nd paradox" do we speak about the infinite expectation if this is not the point?--Pokipsy76 09:36, 15 December 2005 (UTC)
The article starts by saying The envelope paradox is a paradox of probability. So it is reasonable to consider a problem of probability, in particular the computation of the conditional expectation, as the main focus of the article and the decision problem as just an illustration. Anyway, if you really want to find a paradox of decision theory in the article, you are right, it is not necessary to distinguish between the finite and the infinite case; it is enough to say If you are happy with the money there, keep it. If you are unhappy, swap (if you get less, you were already unhappy anyway), as someone already wrote on this page in the section "The way to deal with it". --NeoUrfahraner 12:35, 15 December 2005 (UTC)
The problem is not to be "happy"; the problem is to say what it is rational to do!!
Now I would like to reformulate a question I asked you when this problem came up.
Let's avoid real-life situations. Suppose you are *playing a game* against me; the winner of the game is the player that gains the top "score". In this game you must try to obtain as many "points" as possible. In the context of the game you are given an envelope in a situation that is almost completely identical to the "second paradox"; the difference is that the envelopes contain "points" instead of money. You open the envelope and you find 2^n points, so you must decide if you want to swap... you compute the conditional expectation and you find it is positive (just like the "second paradox" situation). What is now the best "move" to make to win the game with greater probability?--Pokipsy76 18:40, 15 December 2005 (UTC)
This is indeed a good way to look at the problem. It is, however, not yet completely specified. Assume that we will have M pairs of envelopes each, M is known in advance, nobody knows the strategy of the other, and nobody sees the results and actions of the other until all pairs of envelopes have been played. Then I would swap whenever I find less than L(M) in my envelope, where L(M) is a finite unbounded function increasing in M. L(M) is something like the median of the maximum of the contents of the M pairs of envelopes; the exact value I would fix according to some numerical simulation. --NeoUrfahraner 05:48, 16 December 2005 (UTC)
Ok, so let's suppose that we are given just one pair of envelopes; in that case would you swap, and why?--Pokipsy76 11:09, 16 December 2005 (UTC)
In this case my simulation results (q=0.25) clearly suggest to use 1<L(M)<=2, i.e., I will swap only when I find 2^0=1 in the envelope and keep otherwise. --NeoUrfahraner 11:21, 16 December 2005 (UTC)
Ok, but there should be some mathematical justification to support this strategy. Moreover one could think that a numerical experiment cannot have real significance, since the expectation is infinite and the numerical results are (of course) "very far" away from the expected values.--Pokipsy76 18:40, 16 December 2005 (UTC)
This is easy to explain: In this game it does not matter whether you win with 1 point or with 1000 points difference. If you swap, the probability to lose points is higher than the probability that you win points. The expectation is only positive because you win much more points in the less probable case that you win by swapping. In the suggested game, however, you are counting how often you win, not how much you win - when you replace 2^n by n you still have the same game. --NeoUrfahraner 19:10, 16 December 2005 (UTC)
Good point. So we could discuss when it is the case that the score matters and when it does not. Suppose for example that the game is part of a bigger game where a lot of players of a two-player envelope game have to compare their scores, so that the final winner is the top scorer. In this case would the score matter? Would the expectation of the score be a good parameter for making a rational decision? What would be the best strategy?--Pokipsy76 20:35, 16 December 2005 (UTC)
Actually there are quite a lot of variations. In addition, even in simple cases I did not find a transitive relation for comparing strategies, so it is not clear how "best strategy" should be defined. What should be clear now, however, is that swapping dependent on the contents may lead to better results than swapping independently of the contents, so the paradox vanishes. --NeoUrfahraner 07:35, 17 December 2005 (UTC)
The paradox doesn't vanish as long as we can consider a context where the expectation can be considered a parameter to make a rational decision.--Pokipsy76 13:27, 17 December 2005 (UTC)
Maybe. But you did not provide such a context. --NeoUrfahraner 16:24, 17 December 2005 (UTC)
Some lines above I was suggesting some variations where the expectation seems to be important.--Pokipsy76 10:53, 18 December 2005 (UTC)
In addition one can say that the more pairs of envelopes you have, the more important the conditional expectation from swapping becomes. The longer the game, the more often you should swap. If you play an infinite time, you should swap always. Put it the other way round: swapping always will only be best after an infinite time, i.e. never. --NeoUrfahraner 07:58, 17 December 2005 (UTC)
What is the mathematical justification for this claim?--Pokipsy76 13:27, 17 December 2005 (UTC)
It is supported by my simulation. If you don't believe it, however, this is no problem for me. --NeoUrfahraner 16:24, 17 December 2005 (UTC)
If it is true then it would be useful for our understanding to deduce it mathematically.--Pokipsy76 10:53, 18 December 2005 (UTC)
If you are really interested in understanding you should make some simulations yourself. --NeoUrfahraner 05:58, 19 December 2005 (UTC)
Statistical analysis is not meaningful because the expectation is infinite!!--Pokipsy76 16:03, 19 December 2005 (UTC)
You have to find other statistics that can be analysed. Some lines above you were suggesting some variations where you can compare different strategies. E.g. win/not win is a binary random variable with finite moments. --NeoUrfahraner 16:32, 19 December 2005 (UTC)

What is the paradox?

Part 1

The article blames the 8th step for the paradox:

 8. Hence, you should swap whatever you see in the first envelope 
 But as the situation is symmetric that's clearly nonsense.

But it is not at all clear why that is nonsense. Moreover, as shown later in the article, it is possible to choose the numbers in the envelopes in such a way that after seeing the content, one wants to swap no matter what one sees. A necessary condition for this is that the expected amount in each envelope is infinite. So, 8 is not nonsense at all.

You're absolutely right! When we look in the envelope we've picked, the situation isn't symmetric anymore. I've tried to convince the author of this article that this and other shortcomings of the article should be fixed. But he doesn't listen to me anymore and reverts any changes I make. He thinks I'm just a troll so he ignores me. However, there's another article (Two Envelopes problem) that is a flawless mirror of this article. :-) Please feel free to contribute to that article. INic 04:29, 28 November 2005 (UTC)
Good news! You said "When we look in the envelope we've picked the situation isn't symmetric anymore." 23 Sept 2005 you said "Here I agree with all statements up to and including 5". I said "Step 2 is wrong". Step 2 was "The probability that A is the larger amount is 1/2, and that it's the smaller also 1/2". Now you obviously agree that Step 2 is wrong because "the situation isn't symmetric anymore". --NeoUrfahraner 12:21, 28 November 2005 (UTC)
Ah oh no. The story as a whole isn't symmetric anymore if we look in the envelope we pick. The punch line in the paradox is that it's rational to switch indefinitely, remember? Well, if we look in the envelope as soon as we pick one, at least I wouldn't switch indefinitely. After switching once both envelopes would be opened.
No, the punch line is "you should swap whatever you see in the first envelope". You are not allowed to switch again after opening the second envelope. --NeoUrfahraner
Your punch line doesn't lead to a paradox. It's a perfectly legitimate strategy to swap whatever you see in the first envelope, just as the strategy of never swapping whatever you see is legitimate. After all, you have to do something. As the author of this comment points out, this isn't nonsense at all.
To obtain a paradox you have to derive a contradiction. The contradiction appears if you reason without opening any envelope, just pointing at one. Then the strategy to swap leads to never-ending swapping, as the situation after the swap is exactly the same as before the swap, since the symmetry of the story is preserved. However, this is still not a contradiction. The contradiction appears when you claim that this strategy is rational. It seems more rational to open an envelope and receive some money than to swap indefinitely. This is the contradiction.
This is the original statement of the paradox and it's not even mentioned in the current article! INic
Step 2 is still clearly true. It's equivalent to the situation where we toss a coin and assert that the probability is 1/2 for heads to come up and 1/2 for tails to come up. Those probabilities don't change by the act of observing which side actually showed up. INic 06:21, 30 November 2005 (UTC)
(Discussion moved into Section "What is the paradox? Part 2")
Hi INic, where are you? I think we are very close to a common understanding of the story, but now you disappeared. --NeoUrfahraner 07:31, 2 December 2005 (UTC)
Well I doubt that we are close to a common understanding as you're a subjectivist and I'm an objectivist. ;-) INic 12:36, 2 December 2005 (UTC)
Are you interested in finding a common understanding? --NeoUrfahraner 12:47, 2 December 2005 (UTC)
Sure, that would be cool! That would mean that you became an objectivist. :-) INic 17:05, 2 December 2005 (UTC)
It's true that when (and if) you look inside the envelope there is no symmetry anymore, but that is not the situation of the paradox: in fact you *don't* look at the envelope, because you know (by logical and/or probabilistic reasoning) that for any possible sum inside, the best thing to do is to change. This is paradoxical.--Pokipsy76 09:23, 4 December 2005 (UTC)

It is possible to add step 9, which says that if one wants to swap no matter what one sees, then one doesn't really need to see anything and can switch anyway. This sounds a little more nonsensical. But I think calling it bizarre, rather than nonsense, will serve truth better.

Concluding 9 from 8 is of course flawed. It is valid only if the expected amount is finite, which, of course, is impossible.

Part 2

What does it mean that "The story as a whole isn't symmetric anymore if we look in the envelope"? What changes when we look into the envelope? What is now asymmetric that was symmetric before? --NeoUrfahraner 07:52, 30 November 2005 (UTC)
That the story isn't symmetric anymore when we look at the result doesn't mean that the coin we flipped when we chose the envelope is asymmetric. I understand that this can be confusing to a subjectivist.
You did not answer my question. I did not say anything about a coin. What does it mean "That the story isn't symmetric anymore"? --NeoUrfahraner 12:47, 2 December 2005 (UTC)
Well, I just did above. Please look a few centimeters above this sentence. Does it really matter to you if we decide which envelope to pick using a tossed coin or not?
You said "That the story isn't symmetric anymore when we look at the result doesn't mean ..." I asked what it means, not what it doesn't mean. --NeoUrfahraner 16:34, 3 December 2005 (UTC)
OK, as it's hard for you to find, I'll repeat my explanation here: "To obtain a paradox you have to derive a contradiction. The contradiction appears if you reason without opening any envelope, just pointing at one. Then the strategy to swap leads to never-ending swapping, as the situation after the swap is exactly the same as before the swap, since the symmetry of the story is preserved. However, this is still not a contradiction. The contradiction appears when you claim that this strategy is rational. It seems more rational to open an envelope and receive some money than to swap indefinitely. This is the contradiction." INic 23:15, 3 December 2005 (UTC)
28 Nov 2005 you said When we look in the envelope we've picked the situation isn't symmetric anymore . Your explanation does not explain this sentence. How do the words the symmetry of the story is preserved explain that the situation isn't symmetric anymore? --NeoUrfahraner 18:13, 4 December 2005 (UTC)
I'm not sure if I can explain this any clearer. Can you please tell me where you think the contradiction is in the original statement of the story? If this is a paradox you have to be able to derive a contradiction somewhere, right? If I open one envelope and sometimes decide to switch to the other one, is that an absurdity? If I open one envelope and decide to stick to that is that absurd? If I open one envelope and decide to switch—is that absurd? Where's the contradiction according to you? INic 03:47, 5 December 2005 (UTC)
Please stay on topic. You did not answer my question. There is a contradiction in your statements. You are saying the situation isn't symmetric anymore , then you are saying the symmetry of the story is preserved . You do not see the contradiction in your statements? As long as you are not able to solve this obvious contradiction, I get the impression that you are not able to think logically at all. --NeoUrfahraner 06:03, 5 December 2005 (UTC)
OK, to repeat, I only said that if we reason without opening any envelope then the symmetry of the story is preserved. And then I said that the story is asymmetric if we look in the envelope. To me that's two different ways to express the same thing. May I ask what kind of logic you adhere to? INic 22:26, 5 December 2005 (UTC)

I moved the discussion into a separate section to make it clearer. Now it is my turn to repeat: What does it mean that "The story as a whole isn't symmetric anymore if we look in the envelope"? What changes when we look into the envelope? What is now asymmetric that was symmetric before? Please do not use words like "doesn't mean" and "the symmetry of the story is preserved" in your answer. --NeoUrfahraner 06:40, 6 December 2005 (UTC)

I'm afraid that you're the only one on the planet that doesn't understand this. If you can't express in a better way why my previous explanation did no good for you I'm afraid it's futile to repeat it. As a way to understand how you think I reversed the question and asked you how you would derive a contradiction when you look in one envelope, but your only response was "please stay on topic". Well I was very much on topic but you didn't even realize that. That made me worried.

Maybe Pokipsy76 has better luck in explaining this to you. He made a clear statement about this (I quote it in full here so you'll find it): "It's true that when (and if) you look inside the envelope there is no symmetry anymore, but that is not the situation of the paradox: in fact you *don't* look at the envelope, because you know (by logical and/or probabilistic reasoning) that for any possible sum inside, the best thing to do is to change. This is paradoxical.--Pokipsy76 09:23, 4 December 2005 (UTC)" INic 09:05, 6 December 2005 (UTC)

OK, let's do the test: who understands what INic means when he says "The story as a whole isn't symmetric anymore if we look in the envelope"? Feel free to add understanding/not understanding below. --NeoUrfahraner 09:42, 6 December 2005 (UTC)

Terminological suggestion: "solution" -> "discussion"

It can be a matter of opinion whether a given argument is or is not a solution of a paradox. So my suggestion is: let's avoid the word "solution" in the article and just speak about "discussion". The reader will decide by himself if he finds that the discussion actually solves the paradox. It would be a much more encyclopedic style.--Pokipsy76 13:01, 4 December 2005 (UTC)

Good point! Go ahead and change it. If you're lucky the author of this article will not revert your changes. However, if you're not lucky there's another article (Two Envelopes problem) dealing with this problem that is more encyclopedic in style, where "solution" is replaced with "proposed solution" for exactly the reasons you say. Please feel free to improve that article. INic 15:13, 4 December 2005 (UTC)

Marcus Moore

Who is Marcus Moore?--Pokipsy76 19:06, 10 December 2005 (UTC)

As I understand it, he is nobody you have to know, just some friend of some contributor to Wikipedia. One could replace his example by a similar but more prominent example cited in one of the papers in the "Further reading" section. --NeoUrfahraner

Paradox of probability?

Paradox of probability? (Part 1)

The first line of the article states that "the envelope paradox is a paradox of probability." But if that is the case, shouldn't it be possible to derive the paradox from the axioms of probability? However, as far as I know, that is not possible. This means that the very first line of the article is misleading.

Moreover, further down in the article a non-probabilistic version of the paradox is presented. Doesn't this explicitly contradict the first line of the article?

These two observations clearly show that the very first line of the article is fooling the average reader. INic 00:17, 20 December 2005 (UTC)

What do you mean by "deriving the paradox from the axioms of probability"? See Paradox: A paradox is an apparently true statement or group of statements that seems to lead to a contradiction or to a situation that defies intuition. Typically, either the statements in question do not really imply the contradiction... So when you properly use the axioms of probability, you will see that there is no contradiction/no paradox (in particular, Step 2 cannot be derived from the axioms of probability). --NeoUrfahraner 06:15, 20 December 2005 (UTC)
For example Russell's paradox is a paradox of (naïve) set theory. This means that it's derivable from set theory itself, not by means of some particular interpretation of set theory. To call the envelope paradox a paradox of probability theory itself would require the same thing; that it would be derivable from the axioms of probability and nothing else (except logic). INic 22:04, 17 January 2006 (UTC)
See Paradox of probability (Part 2) --NeoUrfahraner 08:33, 18 January 2006 (UTC)
Sure, step 2 can be derived from the axioms of probability. Denote the probability that A is the larger amount by p and that A is the smaller by q. We know that p ≥ 0 and q ≥ 0 by the first axiom. Further, that p + q = 1 by a combination of the second and third axiom—here observing that the two events are mutually exclusive. As the situation is symmetric (alternatively that we pick an envelope in a random fashion) we know that the same symmetry must hold between p and q as well, i.e., p = q. Combining our relations we get the desired result: p = q = 1/2. INic 03:58, 17 January 2006 (UTC)
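The symmetric-pick claim above can be illustrated with a quick simulation (a sketch; the amounts 1000 and 2000 are arbitrary placeholders, since only which envelope is larger matters):

```python
import random

# Sketch: pick one of two distinct envelopes uniformly at random and
# estimate p, the probability that the picked envelope holds the larger
# amount.  By the symmetry of the pick, p should come out near 1/2.
random.seed(0)
trials = 100_000
larger = sum(random.choice((1000, 2000)) == 2000 for _ in range(trials))
p = larger / trials
print(p)  # very close to 0.5, i.e. p = q = 1/2
```

Note this only checks the unconditional probability of having picked the larger envelope, which is the quantity INic's derivation concerns.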
To be more precise: what is p? The unconditional probability that you picked the larger amount, or the conditional probability that you picked the larger amount under the condition that it contains A? --NeoUrfahraner 12:36, 18 January 2006 (UTC)
The story only considers one random event so far, so we are not allowed to attribute probabilities to anything other than that event. The event is '(randomly) picking one of two envelopes with different content.' p is the probability associated with one of the two possible outcomes of that event, and q denotes the probability of the other. INic 01:45, 21 January 2006 (UTC)
So your p is the unconditional probability that you picked the larger amount, ignoring its contents. Your p is indeed 1/2. When you want to derive the paradox from the axioms of probability, however, you have to go into a different direction. See Paradox of probability (Part 2) --NeoUrfahraner 07:08, 21 January 2006 (UTC)
Why would I ignore its contents? To me it doesn't matter if I look in the envelope or not. The probability is the same. I'm not a subjectivist, remember? INic 13:24, 21 January 2006 (UTC)
I agree: it is not a paradox of probability, it is a paradox of decision based on probability.--Pokipsy76 07:40, 20 December 2005 (UTC)
It should properly be classified as a puzzle within Bayesian decision theory I think. INic 02:26, 21 December 2005 (UTC)
Who is saying that "Bayesian" is important? Actually, in the literature you find The Two-Envelope Paradox is a decision-theoretic problem that is widely taken to turn on considerations of probability. (Bruce Langtry, The Classical and Maximin Versions of the Two-Envelope Paradox, August 2004) --NeoUrfahraner 07:17, 21 December 2005 (UTC)
It's important because 1) decision theory is concerned mainly with epistemological probabilities and 2) it's impossible to state the envelope paradox within the frequency interpretation of probability. INic 02:19, 3 January 2006 (UTC)
Who is saying that? --NeoUrfahraner 11:54, 3 January 2006 (UTC)
Saying what? 1) As long as the decision situation is uncertain due to lack of information on the part of the decision maker, he has to use epistemological probabilities if he wants to use probability at all. Do you need some specific authority to tell you that before you are willing to accept it? 2) All authors dealing with the envelope paradox are writing from a Bayesian viewpoint, which most of them state explicitly. Not a single author deals with the problem from an objectivist standpoint, for obvious reasons. Why should they be interested in a problem that they can't state? Do you need some specific authority to tell you that before you are willing to accept it? INic 22:00, 5 January 2006 (UTC)
Yes, I need a specific authority saying it's impossible to state the envelope paradox within the frequency interpretation of probability. Where do Franz Dietrich and Christian List, The Two-Envelope Paradox: An Axiomatic Approach leave the frequency interpretation of probability? --NeoUrfahraner 17:48, 8 January 2006 (UTC)
They never leave the frequency interpretation because they never enter it. It's obvious from the very beginning that they deal exclusively with epistemological probabilities. The second footnote on the first page is especially revealing, if you still have doubts thus far. I don't believe in authorities, so I don't know how you should go about finding yourself one. BTW, how do you know that someone is an authority? Is it because another even bigger authority has told you that? Hmm... difficult situation. INic 23:51, 9 January 2006 (UTC)
In other words, we agree that nobody is saying that Bayesian is important. --NeoUrfahraner 05:15, 10 January 2006 (UTC)
Quite the contrary, everybody is saying that subjective, epistemological, Bayesian—or whatever you want to call it—probabilities are important, explicitly or implicitly. Not everyone points out this obvious fact explicitly (Dietrich and List, for example), but who can blame them? INic 20:29, 10 January 2006 (UTC)
This is just your personal misunderstanding. Read the papers again and then give an exact citation! --NeoUrfahraner 05:15, 11 January 2006 (UTC)
Not at all. But maybe not every author, being a subjectivist, realizes that this problem doesn't appear for the objectivist. I'm not sure. I might have to write an article myself to make this clear to all. INic 02:35, 12 January 2006 (UTC)
Yes. But please observe that Wikipedia is not the place for original research. Publish the article in some other medium and afterwards one can cite it in Wikipedia. --NeoUrfahraner 05:19, 12 January 2006 (UTC)
Yes I know, although I think this is merely a very simple observation and not original research. And it's so simple that I can't take credit for the "discovery." However, I'll send a paper to Analysis and we'll see if they accept it as original research or not. I'll let you know. INic 20:42, 12 January 2006 (UTC)
If it were true it would not be a very simple observation. Also Gdr 17:52, 18 September 2005 (UTC) is saying that for this article, it makes no difference whether you are a frequentist or a Bayesian; the paradox has the same cause and resolution --NeoUrfahraner 06:17, 13 January 2006 (UTC)
Somehow you and Gdr have decided that even the frequentist would believe that looking into one envelope would change the probabilities involved. This, however, is an exclusively subjectivistic notion. Likewise, to consider a prior distribution is an exclusively subjectivistic idea. Priors aren't anything any frequentist would even talk about. Hence the—by definition—subjectivistic proposals in the article aren't anything any frequentist could accept. To anyone even slightly familiar with the differences between an objective and a subjective notion of probability this is indeed a very simple observation. INic 03:58, 17 January 2006 (UTC)
No. We have decided that after looking into one envelope one has to use the conditional probability instead of the original one. --NeoUrfahraner 13:51, 17 January 2006 (UTC)

This is interesting. What causes what here? Is it the case that the somehow inevitable shift in perspective from an ordinary probability to a conditional probability causes the sudden shift of the probabilities involved (when we look into one envelope), or is it the other way around? That is, does the basic subjectivistic notion that the probability for an event changes once the subject knows its outcome force the change of perspective from ordinary to conditional probabilities?

But anyway, the basic question remains: why do we have to use a conditional probability instead of the original one after we've looked into one envelope? Or why do we have to change the probabilities involved once we know the outcome of an event? Whichever way you look at it (i.e., what causes what) both ideas are exclusively subjectivistic in nature. As such, none of them are acceptable from a frequentist point of view. INic 13:05, 21 January 2006 (UTC)

The reason is that Step 6. "So the expected value of the money in the other envelope is ½(½A) + ½ (2A) = 1¼A" is based on conditional probabilities. You clearly see this when you translate it into formal mathematics. See Paradox of probability? (Part 2) for details. --NeoUrfahraner 05:33, 23 January 2006 (UTC)

OK, I see... So if you don't look in the first envelope and just denote its unknown content A and reason in the same way as before, you don't use conditional probabilities at step 6? You use the original probabilities in that case, and the reasoning is valid? And if you look, where do the original probabilities go? Do they just disappear into thin air? INic 12:52, 23 January 2006 (UTC)

If I do not look into the first envelope and denote the contents of the first envelope by A and the contents of the second envelope by B, then I just get
(*) E(B − A) = E(B) − E(A) = 0,
no conditional probabilities, no contradiction, no need for switching, no paradox. --NeoUrfahraner 13:23, 23 January 2006 (UTC)
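The identity (*) can be checked exactly under a proper prior (the finite prior below is an assumption of mine for illustration, not something from the thread):

```python
from fractions import Fraction

# Assumed finite prior: the pair of amounts is (2^k, 2^(k+1)) with
# probability 1/3 for k = 0, 1, 2, and each ordering of (A, B) is
# equally likely.  Under any such proper prior E(B - A) = 0 exactly.
e_diff = Fraction(0)
for k in range(3):
    small, large = Fraction(2**k), Fraction(2**(k + 1))
    w = Fraction(1, 3) * Fraction(1, 2)  # P(pair) * P(ordering)
    e_diff += w * (large - small)        # case: A holds the smaller amount
    e_diff += w * (small - large)        # case: A holds the larger amount
print(e_diff)  # 0
```

The two orderings contribute equal and opposite terms pair by pair, which is exactly the symmetry being invoked.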

And why do you think an objectivist would accept this subjectivistic reasoning? INic 15:22, 23 January 2006 (UTC)

What formula does an objectivist use to compute the expectation of swapping? --NeoUrfahraner 05:16, 24 January 2006 (UTC)

You're the one inventing formulas here, not me. Where do I find your formula (*) above? I can't find it anywhere in the paradox argument. Tell me: what step in the argument is wrong if you don't look in any envelope, only point at one containing A? Clearly step 2 is correct in this case. Even for a subjectivist. Right? INic 13:47, 24 January 2006 (UTC)

Yes. If I do not look in any envelope, I consider the unconditional expectation, i.e. P(A=n)=P(B=n) for every n, E(B-A)=E(B)-E(A)=0, with the result that swapping makes no difference, no contradiction, no need for switching, no paradox. --NeoUrfahraner 14:18, 24 January 2006 (UTC)

But please, please tell me what step (1-8) in the reasoning of the paradox is wrong in this case. I can't find E(B-A)=E(B)-E(A)=0 at any step no matter how many times you repeat it here. INic 15:16, 24 January 2006 (UTC)

It is not in steps (1-8), it is implicitly in the next sentence "But as the situation is symmetric, this cannot be the case." Steps 1-8 give you E(B)>E(A) (i.e., you should swap), which is a contradiction to E(B)=E(A) (i.e., it makes no difference whether you swap). You find E(B)=E(A) explicitly e.g. in David J. Chalmers, The Two-Envelope Paradox: A Complete Analysis? --NeoUrfahraner 08:50, 25 January 2006 (UTC)

I'm having a hard time believing that you mean what you say here. You say steps 1 through 8 are all correct but the statement that the situation is symmetric is the culprit. That is, you claim that you should always switch if you don't look in any envelope. This implies that you have to switch back and forth forever, and never open any envelope. I really hope you don't mean this. INic 22:37, 25 January 2006 (UTC)

See paradox: A paradox is an apparently true statement or group of statements that seems to lead to a contradiction or to a situation that defies intuition. Typically the statements in question do not really imply the contradiction. To make it more explicit: Steps 1-8 say E(B)>E(A), the next sentence says E(B)=E(A), so we get a contradiction. I agree that this is no real contradiction, there must be an error hidden somewhere. --NeoUrfahraner 06:09, 26 January 2006 (UTC)

OK, that's what I've always suspected too: there must be an error somewhere... I'm glad we agree so far! But my question to you is where that error is. I'd really like to know your opinion here. Please tell me, will ya? INic 10:52, 27 January 2006 (UTC)
See "Paradox of probability? (Part 3)" --NeoUrfahraner 12:55, 27 January 2006 (UTC)

Maybe I should also clarify that I do not claim that you should always switch if you don't look in any envelope. If you don't look in any envelope, I claim that E(B)=E(A), which says it makes no difference whether you swap or not. Only Buridan's ass will switch forever in that situation. --NeoUrfahraner 08:06, 26 January 2006 (UTC)

Have I understood you right that you claim that E(B)=E(A) holds while all steps 1-8 are correct? I hope you are aware that a central theme in subjectivistic theory is the notion of a consistent set of beliefs. According to the theory, every agent or subject can have whatever opinions she likes as long as the opinions don't contradict each other. To me it looks like your set of beliefs is inconsistent here. This isn't allowed even within the otherwise free boundaries of subjective probability. INic 10:52, 27 January 2006 (UTC)

No. In my real life I claim that E(B)=E(A). I only claim that Steps 1-8 are correct when I am in the role of the advocatus diaboli. --NeoUrfahraner 13:02, 27 January 2006 (UTC)

And what's your opinion if you drop that devil's mask altogether? What step, 1, 2, 3, 4, 5, 6, 7 or 8, is false in that case? (If you prefer, you may choose among the twelve steps here instead.) That's what I'm interested in. Your various devilish games don't interest me, if I may be completely honest with you. I'm sorry. INic 02:48, 28 January 2006 (UTC)
See Part 3. --NeoUrfahraner 04:35, 28 January 2006 (UTC)

Paradox of probability? (Part 2)

INic, here is a formulation of the paradox using just the axioms of probability:

Consider a pair of positive integer valued random variables (A,B). Let their distribution be symmetric:

P(A=m,B=n)=P(A=n,B=m) (* Assumption A *)

Additionally assume that one random variable is twice the other:

P(A=m,B=n)>0 implies m=2n or n=2m. (* Assumption B *)

Now consider the conditional expectation of B given A: E(B|A). By the linearity of the conditional expectation we obtain

E(B|A)=E(A|A)+E(B-A|A)

Now using that one variable is twice the other we obtain

E(B-A|A=n)=(2n-n)P(B=2n|A=n)+(n/2-n)P(B=n/2|A=n)

Because of the symmetry we have P(B=2n|A=n)=P(B=n/2|A=n)=1/2, so we have

E(B-A|A=n)=(2n-n)/2+(n/2-n)/2=n/4

In particular, we have E(B-A|A)>0 with probability one and consequently E(E(B-A|A))>0

Now from Ash, Real Analysis and Probability, Theorem 6.5.4 we know

E(E(X|Y))=EX

So this means

E(B)=E(E(B|A))=E(E(A|A))+E(E(B-A|A))=E(A)+E(E(B-A|A))>E(A)

By the symmetry, however, we obtain E(A)=E(B), so we have a contradiction. --NeoUrfahraner 08:27, 18 January 2006 (UTC)
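The step this derivation leans on can be probed concretely: under a proper prior satisfying Assumption A and Assumption B (the geometric-style prior below is my own choice for illustration, not from the thread), the intermediate claim P(B = 2n | A = n) = 1/2 fails:

```python
from fractions import Fraction

# Assumed prior: the unordered pair is (2^k, 2^(k+1)) with probability
# (1/2)^(k+1), k = 0, 1, 2, ..., and each ordering of (A, B) is equally
# likely.  This satisfies Assumption A (symmetry) and Assumption B
# (one amount twice the other), and the pair probabilities sum to 1.
def p_pair(k):
    return Fraction(1, 2) ** (k + 1)

def p_b_larger_given_a(k):
    """P(B = 2n | A = n) for n = 2^k, k >= 1."""
    a_is_smaller = p_pair(k) / 2      # n is the smaller member of pair k
    a_is_larger = p_pair(k - 1) / 2   # n is the larger member of pair k-1
    return a_is_smaller / (a_is_smaller + a_is_larger)

print(p_b_larger_given_a(1))  # 1/3, not 1/2
```

Under this prior the conditional probability of holding the smaller amount is 1/3 for every n = 2^k with k ≥ 1 (and P(B = 2 | A = 1) = 1), so the "because of the symmetry" step does not follow from Assumptions A and B alone.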

Well where did the original story go? I guess you're talking about (improper) priors here, and using theorems of probability theory outside their scope to derive a contradiction. If you think that a frequentist would make the same errors you're wrong. We don't even consider priors. In fact, to be able to derive this story from the axioms of probability alone you have to supplement the Kolmogorov axioms with an existence axiom for priors. If you do you will indeed end up having a contradictory theory. INic 13:18, 21 January 2006 (UTC)
This is what you asked for: There is no story any longer, no prior, just pure Kolmogorov. As clever frequentist that you are you should easily spot the error in the derivation above. So tell me: where is the error? --NeoUrfahraner 19:29, 21 January 2006 (UTC)
No this isn't what I asked for. Your story here is very far from the original story—so far that it can't be considered a variant of the envelope paradox. That is why I asked you where the connection to the original story was. But apparently there was no connection.
Instead your story is just another trivial example of a principle of indifference-paradox. These kinds of paradoxes have been known for a very long time and are one of the reasons the frequency interpretation of probability replaced the classical interpretation. The frequency interpretation was the concept Kolmogorov had in mind all along, which is why he wrote that "the postulational concepts of a random event and its probability seem the most suitable" just before he introduces his axioms. Thus, to call the reasoning above "pure Kolmogorov" as you do is far from correct. INic 03:59, 22 January 2006 (UTC)
So tell me: where is the error? --NeoUrfahraner 04:31, 22 January 2006 (UTC)
What's the point discussing another problem here? If you think this is a very interesting and novel variant of a principle of indifference-paradox go ahead and publish a paper about it. Then you can start a new page about this problem here at Wikipedia. I doubt it will be published, though. You still have to derive the envelope paradox from the Kolmogorov axioms, as you promised you could. That will be interesting to see. INic 15:13, 22 January 2006 (UTC)
I cannot believe that you are not smart enough to see the correspondence between the Kolmogorov model and the original text. So in other words, you are just saying that you did not yet spot the error and do not want to admit this. Do you need some additional hint? --NeoUrfahraner 05:16, 23 January 2006 (UTC)
I cannot believe that you think that this is the "formal version" of the envelope paradox and that it in addition follows from the Kolmogorov axioms! Truly amazing! When did the principle of indifference become the fourth axiom of Kolmogorov, do you think? Get real! In addition, the envelope paradox isn't just another principle of indifference paradox, as I've already told you long ago. If it were it wouldn't be interesting at all. Thus, your pseudo-formal story above does nothing to capture the essence of the envelope paradox. INic 13:24, 23 January 2006 (UTC)
Where did I use the principle of indifference? --NeoUrfahraner 13:31, 23 January 2006 (UTC)
You don't only use it once, but twice! Can't you see that (as you speak of 'the error' in singular form above)? Or do you need a hint? Here's a hint concerning the connection to the envelope paradox: in that story the principle of indifference isn't even used once... INic 14:53, 23 January 2006 (UTC)
Yes, I need a hint. --NeoUrfahraner 15:25, 23 January 2006 (UTC)
OK, watch out for sentences containing the word 'symmetry.' INic 16:34, 23 January 2006 (UTC)
"Let their distribution be symmetric." This is an assumption. Of course the probabilities are not uniquely defined by the axioms, and I can make additional assumptions for one particular distribution. So what is the problem? --NeoUrfahraner 17:05, 23 January 2006 (UTC)
No that sentence is OK. You use the word 'symmetry' in three sentences and you picked the one that is OK. INic 17:20, 23 January 2006 (UTC)

I understand. Just call the first sentence "Assumption A". Then instead of "By symmetry" just write "By Assumption A". For example, you obtain

E(A)=\sum_{m,n}mP(A=m,B=n)=\sum_{m,n}mP(A=n,B=m)=E(B).

"Symmetry" means "Assumption A", not the principle of indifference. --NeoUrfahraner 18:29, 23 January 2006 (UTC)

And yet A is twice B at every instance (or B twice A). That is, E(A) = 2E(B) (or E(B) = 2E(A)). INic 21:22, 23 January 2006 (UTC)
OK, I looked at it again and E(A) = E(B) the way you've defined things. But let's say that A can only be 1 or 2, and the same for B. This is an admissible pair of random variables according to your requirements. When A is 1, B is 2, and so on. Then P(1,2) = P(2,1) = 1 / 2 according to your definitions. But in your "proof" you suddenly have that P(B = 2 | A = 1) = P(B = 1 / 2 | A = 1) = 1 / 2 "because of symmetry." What symmetry? P(B = 1 / 2) = 0 no matter what, so I don't get what symmetry you're using here. INic 04:04, 24 January 2006 (UTC)
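The two-value example just described can be computed directly (a minimal sketch of that case):

```python
from fractions import Fraction

# The pair is always (1, 2); each ordering of (A, B) has probability 1/2.
joint = {(1, 2): Fraction(1, 2), (2, 1): Fraction(1, 2)}  # (A, B) -> prob
p_a1 = sum(p for (a, _), p in joint.items() if a == 1)
p_b2_a1 = sum(p for (a, b), p in joint.items() if a == 1 and b == 2)
print(p_b2_a1 / p_a1)  # 1: P(B = 2 | A = 1) = 1, not 1/2
```

Conditioning on A = 1 pins down the ordering completely, which is why the unconditional symmetry does not carry over to the conditional probabilities.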
"Symmetry" again refers to "By Assumption A". But you are right, symmetry is broken when we change from unconditional to conditional probabilities. So what do we know about P(B=2|A=1)? Is it possible that P(B=2|A=1)=1/2? --NeoUrfahraner 05:25, 24 January 2006 (UTC)
No, "Assumption A" is of no help here, and you can't break a symmetry that was never there. What you must use here instead is precisely the principle of indifference. You reason like this: "OK, I don't know if B<A or if B>A when A=n, but I have no reason to think that one case is more likely than the other. Therefore the cases must be equally probable, i.e., 1/2 each." We have talked about this before and you agreed then. INic 13:25, 24 January 2006 (UTC)
Yes, I remember. For good reasons, however, at 16:38, 22 September 2005 Gdr reverted the change in the article with the comment "principle of indifference is a red herring". So the question remains: what do we know about P(B=2|A=1) just using the stated assumptions and the Kolmogorov axioms without referring to the principle of indifference? --NeoUrfahraner 13:45, 24 January 2006 (UTC)
Exactly! The principle of indifference is always a red herring, in every context. That's why I try so hard to tell you that you shouldn't use it. Ever. But it's evident that you do, unfortunately. Can you or Gdr please tell me how P(B = 2n | A = n) = P(B = n / 2 | A = n) = 1 / 2 could follow from P(A = m,B = n) = P(A = n,B = m)? INic 14:12, 24 January 2006 (UTC)
From Assumption B we get P(B = 2n | A = n) + P(B = n / 2 | A = n) = 1. Now
P(B=2n|A=n)=\frac{P(A=n,B=2n)}{P(A=n)}
and from Assumption A we get
P(B=n/2|A=n)=\frac{P(A=n,B=n/2)}{P(A=n)}=\frac{P(A=n/2,B=n)}{P(A=n)}.
Unfortunately we see that P(A = n,B = 2n) = P(A = n / 2,B = n) does not follow from our assumptions. So let us add
P(A=n,B=2n)=P(A=\frac{n}{2},B=n) (* Assumption C *)
With a probability distribution satisfying Assumption A, B, and C, we still get a contradiction. --NeoUrfahraner 14:31, 24 January 2006 (UTC)
OK, let's assume that we have A, B and an n so that P(A = n,B = 2n) > 0. By iterating Assumption C k + 1 times, where k is the power of 2 in the prime factorization of n, we will have an A that is non-integer valued with positive probability. Your assumptions are contradictory. With contradictory assumptions any theory can "lead" to contradictions. No need to be alarmed by that. INic 15:09, 24 January 2006 (UTC)
Replace Assumption C by
P(A=n,B=2n)=P(A=\frac{n}{2},B=n) for n = 2^k, k\geq 1, and P(A=m,B=n)=0 else (* Assumption C1 *)
Then P(B = 2n | A = n) = P(B = n / 2 | A = n) = 1 / 2 for n = 2^k, k\geq 1, and P(B = 2 | A = 1) = 1 and P(B = 0 | A = 1) = 0. We still get
E(B − A | A = n) = (2n − n) / 2 + (n / 2 − n) / 2 = n / 4 > 0 for n = 2^k, k\geq 1,
E(B − A | A = 1) = 1, and
E(B − A | A = n) = 0 for n\neq 2^k.
So E(B)>E(A); a contradiction. --NeoUrfahraner 15:29, 24 January 2006 (UTC)
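That no proper distribution satisfies Assumption C1 can be made concrete (a sketch; the positive starting mass 1/100 is an arbitrary assumption):

```python
from fractions import Fraction

# Assumption C1 forces P(A = 2^k, B = 2^(k+1)) to be the same for every
# k >= 0.  Give the pair (1, 2) any positive mass m; propagating it over
# the first 1000 pairs already exceeds the total probability 1, so no
# distribution satisfying the Kolmogorov axioms can meet C1.
m = Fraction(1, 100)                 # assumed positive mass on the pair (1, 2)
total = sum(m for _ in range(1000))  # C1 forces the same mass m on each pair
print(total)  # 10, already far above 1
```

The only way out is m = 0 for every pair, which is no probability distribution at all; this is the "improper prior" in formal dress.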
OK, so after some false starts you have at last arrived at your impossible random variable, your improper prior that I've been anticipating from the very beginning. Wasn't this an extraordinarily complicated and cumbersome way to tell me what you're talking about? Anyway, your "formal proof" in no way shows that the Kolmogorov axiom system is inconsistent—of course. Do you really believe that yourself?
But you pinpoint a serious problem that the subjectivistic interpretations of probability have always struggled with: how to describe total lack of information as a probability distribution? Laplace naïvely used improper uniform probability distributions freely, but it led to various paradoxes. Read here for example. This is an interesting problem in itself for every serious subjectivist, but it's not the cause of the problems with the envelope paradox. INic 02:45, 25 January 2006 (UTC)
So you are saying that it is impossible to find a probability distribution that is consistent with the Kolmogorov axiom and satisfies P(B=2n|A=n)=P(B=n/2|A=n)=1/2 for every n? --NeoUrfahraner 05:19, 25 January 2006 (UTC)
Yes, one common subjectivistic "solution" to the paradox is to reduce the problem to this general problem within the subjectivistic interpretation itself, i.e., the problem of how to describe 'total lack of information' within the subjectivistic framework. However, when and if that general problem gets a satisfactory solution the envelope problem will be in need of a new subjectivistic solution. INic 10:02, 25 January 2006 (UTC)
By the way, where do you see a prior? Which probabilities satisfying the Kolmogorov axiom are frequentist ones and which ones are Bayesian? --NeoUrfahraner 05:55, 25 January 2006 (UTC)
A and B are priors and, by the way, do not satisfy the Kolmogorov axioms. Frequentists don't identify a lack of information with a (prior) distribution. Objectivists don't pull any distributions from an empty hat. That is epistemological magic—in the same way as the principle of indifference is epistemological magic—to an objectivist.
A and B are not priors, they are just a pair of random variables following some Probability distribution which is neither a frequentist one nor a Bayesian one. The Kolmogorov axioms indeed pull distributions from an empty hat; there is no additional axiom saying where the function P assigning real numbers to members of F must come from. --NeoUrfahraner 10:33, 25 January 2006 (UTC)
Well the requirements that you have on the random variables are typically subjectivistic ones. I can't figure out a single case, i.e., random experiment, when a frequentist would be led to consider a distribution like that. Can you? It would be fun to know! So I think it's fair to call A and B subjectivistic priors, in particular priors modelling 'complete ignorance' on the part of the subject. And no, the Kolmogorov axioms don't have a single existence axiom. There are no hat tricks there. INic 01:46, 26 January 2006 (UTC)
I get the impression that you are thinking in a dualistic world of good and bad, of frequentists and subjectivists. Actually there is one more world, namely the world of axiomists, on which both of the others have to be based; see Probability interpretations. So you think that if a distribution is not a frequentist's one, it must be a subjectivist's one. It is not. It is just pure mathematics. Mathematicians don't care about the real world; they pull everything out of empty hats. --NeoUrfahraner 05:18, 26 January 2006 (UTC)
There are many more interpretations of probability than that. There are logical interpretations of probability (developed by Carnap for example) and the propensity interpretation of probability by Karl Popper, to mention just two additional ones. However, the dualistic view you talk about is often advocated by the subjectivists. See here for example. INic 10:52, 27 January 2006 (UTC)
It's true that only subjectivists reason the way you do. There are different flavors of subjectivism, though. In your "formal" story above, for example, it wasn't clear from the outset if you reasoned using the classical interpretation or some Bayesian interpretation of probability. It was, however, clear from the outset that some form of subjective perspective was used. In addition, the fact that you have a hard time explaining what's wrong if you (the subject) don't look in any envelope shows—positively—that you're trapped within some subjective mode of reasoning. INic 10:52, 27 January 2006 (UTC)
However, it makes me sad that you're not aware of this yourself. You think you belong to the axiomatic school. The axiomists will accept any situation where the Kolmogorov axioms are satisfied as a valid application of the theory. But your "formal proof" above isn't within the Kolmogorov theory, as you think, and therefore nothing any axiomatist would consider. (The second axiom isn't satisfied.) INic 10:52, 27 January 2006 (UTC)
OK, I note that you couldn't derive the paradox from the Kolmogorov axioms. So can we now, at last, agree upon that "the envelope paradox is a paradox of probability" is a misleading statement in the article? INic 10:00, 25 January 2006 (UTC)
As I said at 06:15, 20 December 2005, it is a paradox in the sense that the statements in question do not really imply the contradiction. In paradox you read also that The word paradox is often used interchangeably and wrongly with contradiction; but where a contradiction by definition cannot be true, many paradoxes do allow for resolution and Still more casually, the term is sometimes used for situations that are merely surprising --NeoUrfahraner 10:33, 25 January 2006 (UTC)
Well, now you're trying to escape from your failure by blurring your goal. You have been talking about deriving a contradiction from the Kolmogorov axioms all the time. Here "so we have a contradiction. --NeoUrfahraner 08:27, 18 January 2006" and here "we still get a contradiction. --NeoUrfahraner 14:31, 24 January 2006" and here "So E(B)>E(A); a contradiction. --NeoUrfahraner 15:29, 24 January 2006." I certainly agree with you that the envelope paradox is a 'paradox', but to claim that it's possible to derive a contradiction from the Kolmogorov axioms alone (and logic) is much much stronger and would be a sensation if it was true. As you now admit you can't I think we should agree that the first sentence of the article is false. INic 01:46, 26 January 2006 (UTC)
At the very beginning I stated "So when you properly use the axioms of probability, you will see that there is no contradiction". When I say that there is a contradiction in the Kolmogorov axioms it should be clear to you that I just play the Advocatus diaboli who is looking for such contradictions in the Kolmogorov axioms. --NeoUrfahraner 05:26, 26 January 2006 (UTC)
Even if you're only playing silly games with me you should at least be honest with yourself and admit that you can't derive any contradiction within the theory itself, and therefore change the first line of the article. And no, it's not clear to me what you should think at all. That you have no idea what's wrong if you don't look in any envelope was totally unexpected to me, for example. INic 10:52, 27 January 2006 (UTC)

By the way, this assumption just means that the envelopes are indistinguishable in the original formulation. --NeoUrfahraner 18:43, 23 January 2006 (UTC)

Aha here you have your principle of indifference explicitly stated! But it's nice that you at last give some clues about the connections to the original story, so I know how it's supposed to be interpreted. INic 21:22, 23 January 2006 (UTC)

Paradox of probability? (Part 3)

To summarize: in part 2 I gave you a probability-theoretic interpretation of Step 1 to 8 which is a presumed proof that E(B)>E(A). On the other hand we obtain E(B)=E(A) (The next line after step 1 to 8), so this is a contradiction. So either there is a contradiction in the Kolmogorov axioms itself (which would be a sensation) or there is an error in the derivation (which seems more plausible). Checking the derivation again, we found that I was not able to construct a probability distribution satisfying all the Kolmogorov axioms and assumption A, assumption B, and assumption C1 at the same time. Do you agree up to here? --NeoUrfahraner 12:55, 27 January 2006 (UTC)

I'm sorry but I get the strong feeling that you're trying to escape my question by starting to make "summaries" that are not called for. Please answer my question instead. INic 03:03, 28 January 2006 (UTC)

I do not try to escape. On the contrary, my plan is to give you the number of the step as answer to your question after some additional derivation. If I give the answer "Step X" now, you will start again claiming this is not true. My answer can be derived from the summary above; if you do not agree with the summary, you will not agree with the answer. --NeoUrfahraner 04:48, 28 January 2006 (UTC)

Face it Neo, we will never agree because we have different philosophical views concerning probabilities. But that's OK, we don't have to agree. Why can't you just answer my question? I will not blame you because you don't have the same opinion as I have. The world is full of different opinions, and that's good. But if you don't dare for some reason to speak out your opinion, that's a bad thing. Try to build up some courage now and tell me what you think. INic 13:58, 28 January 2006 (UTC)

Actually the only one who has no courage to answer questions is you. In particular, you escaped from answering my questions from 12:55, 27 January 2006, from 05:19, 25 January 2006, and from 18:13, 4 December 2005. Anyway, here is my answer: Step 2 claims that P(A > B | A = n) = P(A < B | A = n) = 1 / 2 for every n. Since there is no distribution satisfying the Kolmogorov axioms, Assumption A and B, and this condition, Step 2 is wrong. --NeoUrfahraner 16:07, 28 January 2006 (UTC)

I'm the first to be sorry if you believe I didn't answer some of your questions. However, I answered your first question three times (13:18 21/1-06, 13:24 23/1-06 and 02:45 25/1-06), your second question I answered 10:02 25/1-06 and finally the third question you mention I answered numerous times in different ways and finally I gave up. Sorry for that. I'm only human you know. I'll try to do better in the future. INic 15:38, 29 January 2006 (UTC)
You wrote many words, but this does not mean that you gave an answer. --NeoUrfahraner 20:50, 29 January 2006 (UTC)
Well at least I tried. And I tried very hard. In part 5 I've gathered four questions you didn't answer at all. INic 02:35, 5 February 2006 (UTC)
Quite a lot of work ;-). You will get answers, but it may take some time. --NeoUrfahraner 20:17, 5 February 2006 (UTC)
I'm really glad that you finally answered the question I've repeated six times (13:58 28/1-06, 03:03 28/1-06, 10:52 27/1-06, 15:16 24/1-06, 13:47 24/1-06 and 12:52 23/1-06). Unfortunately your answer wasn't clarifying at all. Now you say that step 2 is the culprit even when you don't look in any envelope. Your thesis up to this point has always been that there's a fundamental difference between looking in an envelope and not looking: "We have decided that after looking into one envelope one has to use the conditional probability instead of the original one." (13:51 17/1-06) and "If I do not look into the first envelope [...] I just get [...] no conditional probabilities, no contradiction, no need for switching, no paradox." (13:23 23/1-06). You have even explicitly confirmed that step 2 is correct when directly asked about it: "Yes. If I do not look in any envelope, I consider the unconditional expectation..." (14:18 24/1-06). Your answer now therefore contradicts everything you've said so far. INic 15:38, 29 January 2006 (UTC)
Last time I convicted you having contradictory opinions you claimed that you suffered from a split personality. I wonder if I might have split your personality once more now? INic 15:38, 29 January 2006 (UTC)
Was your question what is wrong when I open one envelope or when I do not open? This article assumes that one envelope has been opened ("Suppose you open one envelope"), so I interpret steps 1 to 8 as conditional probabilities. In this case, step 2 is wrong. If no envelope is opened, i.e., if steps 1 to 8 are interpreted as unconditional probabilities/expectations, then step 6 makes no sense to me. From that point of view, your claim from 09:38, 23 September 2005 that step 6 is wrong is correct. --20:50, 29 January 2006 (UTC)
Do you really mean that repeating the same question six times isn't enough for you?
I mean that you misunderstood my answer from 16:07, 28 January 2006. I never claimed "that step 2 is the culprit even when you don't look in any envelope". --06:39, 1 February 2006 (UTC)
You're still not sure what the question is. Incredible. I'm sorry but I do not believe your different excuses for contradicting yourself. Why? I simply find it far more likely that you're an ordinary man, albeit one having severe difficulties saying "I was wrong," than that you're mentally an odd mix of Dr Jekyll, Mr Hyde and Professor Calculus—as would be the case if your different excuses so far were true. INic 20:47, 31 January 2006 (UTC)
I have also some further reading for you: Psychological projection. --06:39, 1 February 2006 (UTC)
Hmmm... Maybe you're right here: I'm just assuming you're an ordinary guy, due to projection. But in reality you're that very strange monster. Scary! INic 23:51, 1 February 2006 (UTC)
Anyway, at last you've revealed what step you think is the wrong one when no one has opened any envelope. I'm pleased to note that we share the same opinion, even though our respective reasons for this opinion probably differ. At last I can get to my point: you let the knowledge of the subject dictate where the reasoning fails. Look: step 2 is the culprit; don't look: step 6 is the culprit. That's cool to a subjectivist, but not to an objectivist. Do you realize now that your opinions only make sense within a subjectivistic framework? INic 20:47, 31 January 2006 (UTC)
My opinions make sense when one uses conditional probabilities. Are you saying that objectivists do not use conditional probabilities? --NeoUrfahraner 06:39, 1 February 2006 (UTC)
OK Professor Calculus, you seem to have forgotten even your own opinion from 20:50 29/1-06 already. To repeat, your opinion was that if no one looks in any of the envelopes you don't use conditional probabilities. Remember? Only if someone looks in an envelope you use conditional probabilities. INic 23:40, 1 February 2006 (UTC)
Anyway, how about answering my Yes/No-question with a "Yes" or a "No"? Here's my question again (if you have forgotten): "Do you realize now that your opinions only make sense within a subjectivistic framework?" INic 23:40, 1 February 2006 (UTC)
No. --NeoUrfahraner 05:27, 2 February 2006 (UTC)
But you do admit that you are a subjectivist by now, don't you? Statements like All probabilities can change when I get some new information (17/9 05) and Not the die becomes degenerated, but the probabilities become degenerated. After seeing the result of the throw, the probabilities for this throw have changed - in fact, the result is now determined. (19/9 05) can only be uttered by a subjectivist. In addition, you always identify subjective uncertainty with some prior distribution, even in cases where a corresponding random process would be impossible: The choice of the fish soup is in concordance with the current article, because after seeing the fish soup, I do no longer expect that there is probability 1/2 for the other meal to be twice as tasty. (15/9 05). INic 03:11, 5 February 2006 (UTC)
I admit that my formulation was not very precise. It is indeed not that All probabilities can change when I get some new information; the precise formulation is that When I get some new information, I switch to the conditional probability under the observed information. For example, if A denotes the result of throwing a fair die, P(A=6)=1/6, but after seeing the result, say 3, I switch to P(A=6|A=3)=0. As I understand it, this is independent of whether the unconditional probability was a subjectivist's one or a frequentist's one. You agreed already that frequentists also use conditional probabilities. When does a frequentist use a conditional probability? --NeoUrfahraner 20:17, 5 February 2006 (UTC)
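(The die example above can be checked mechanically. A minimal Python sketch, added purely for illustration, of switching from the unconditional to the conditional probability; the variable names are of course hypothetical:)

```python
from fractions import Fraction

# Sample space of one throw of a fair die
outcomes = [1, 2, 3, 4, 5, 6]

# Unconditional probability P(A = 6)
p_six = Fraction(outcomes.count(6), len(outcomes))
assert p_six == Fraction(1, 6)

# After observing the result 3, restrict attention to the event {A = 3}:
# P(A = 6 | A = 3) = P(A = 6 and A = 3) / P(A = 3) = 0
conditioned = [a for a in outcomes if a == 3]
p_six_given_three = Fraction(sum(1 for a in conditioned if a == 6),
                             len(conditioned))
assert p_six_given_three == 0
```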
This clearly shows that you're a subjectivist. You make no clear distinction between conditional probabilities and ordinary probabilities. As soon as you, the subject, get "information" you jump to a conditional probability and claim that the original probability "has changed." It hasn't. A frequentist would never say anything like that. However, I don't think you're a subjectivist out of true philosophical conviction after having pondered all the alternatives. I rather think you're a subjectivist because subjectivism is closest to the layman's view of what probability is. INic 01:37, 7 February 2006 (UTC)
Frequentists use conditional probabilities when they're part of the experiment, not otherwise. I don't know why you're fighting against frequentism when you don't know what it is. INic 01:37, 7 February 2006 (UTC)
Please give an example of a typical experiment where frequentists uses conditional probabilities. --NeoUrfahraner 05:17, 7 February 2006 (UTC)
Look in any good textbook in probability theory. Here is an example. INic 01:32, 8 February 2006 (UTC)
These should be frequentists' examples? Look at example 2, how do frequentists define "The probability that it is Friday and that a student is absent"? Anyway, today is not Friday. What is the probability that a student is absent? Since you said that the probability does not change, it still must be 15% according to the calculation in that web page. --NeoUrfahraner 09:14, 8 February 2006 (UTC)
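(For reference, the 15% figure quoted from the linked textbook example follows from the definition of conditional probability. A sketch, assuming the numbers of that example: P(Friday and absent) = 0.03 and P(Friday) = 1/5:)

```python
from fractions import Fraction

# Numbers assumed from the linked textbook example: the probability that
# it is Friday and that a student is absent is 0.03, and P(Friday) = 1/5.
p_friday_and_absent = Fraction(3, 100)
p_friday = Fraction(1, 5)

# Definition of conditional probability: P(absent | Friday)
p_absent_given_friday = p_friday_and_absent / p_friday
assert p_absent_given_friday == Fraction(15, 100)  # the 15% quoted above
```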
I just checked and today is indeed Friday. INic 06:10, 10 February 2006 (UTC)
What difference does it make whether it is Friday or not? --NeoUrfahraner 17:18, 10 February 2006 (UTC)
Ha ha. Doesn't matter at all of course. I was just joking. If you want to learn some basic probability theory this is not the place to do that. I suggest you attend a class at your local university. You can ask your teacher there all the basic questions you have. Good luck! INic 01:43, 11 February 2006 (UTC)
With respect to the fish soup: I agree that there is no frequentistic interpretation available for the fish soup. So I only have the choice between no model at all or a subjectivistic model. In that case, I clearly prefer the subjectivistic model. What would a frequentist do in that case? Starve? --NeoUrfahraner 20:17, 5 February 2006 (UTC)
Well, to "clearly prefer the subjectivistic model" was a very bad decision in this case—as in any case. It led you to stick with a dish that was the best one with probability only 1/3. What would I do? A simple frequentistic calculation shows that the other dish is twice as likely to be the best one, so I would switch. This shows that your eagerness to invent priors that were never there can be bad for you. Whereof one cannot speak, thereof one must be silent. INic 01:37, 7 February 2006 (UTC)
Please show me the "simple frequentistic calculation". --NeoUrfahraner 05:17, 7 February 2006 (UTC)
Play here for a while and you'll see what frequencies can do for you. INic 01:32, 8 February 2006 (UTC)
As you know, the Monty Hall problem is a different problem. Anyway, it is a much better example for conditional probabilities than the examples you gave above. Why do you switch in the Monty Hall problem when you don't care who has seen what? According to your words the probabilities have not changed, so all three doors still have probability 1/3 to hold the car. --NeoUrfahraner 12:12, 8 February 2006 (UTC)
No, the Monty Hall problem is exactly the same thing. Please have another look at the rules for getting the best dish above and you'll see that. And yes, the probabilities in the Monty Hall problem don't change at all. That's the point. All three doors have the same probability of holding the car: 1/3. When you pick one door, the probability of it holding the car is fixed at 1/3 whatever happens. This is precisely the simple feature of probabilities that is nevertheless unintuitive to some people, and it is why those persons have difficulties grasping the problem. They reason like this: when there are only two doors left, the probability that the first door has the car must have changed from 1/3 to 1/2. It hasn't. It's still 1/3. This is a common subjectivistic pitfall. INic 06:10, 10 February 2006 (UTC)
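(The frequentist reading of this claim can be illustrated with a quick simulation; a sketch added for illustration, not part of the original exchange. Over many rounds the relative frequency with which the first-picked door holds the car stays near 1/3, so always switching wins about 2/3 of the time:)

```python
import random

def play(switch):
    """One round of the Monty Hall game; returns True if the player gets the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither the player's pick nor the car
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

n = 100_000
stay_freq = sum(play(False) for _ in range(n)) / n
switch_freq = sum(play(True) for _ in range(n)) / n
# stay_freq hovers near 1/3 and switch_freq near 2/3
```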
Why do you switch in the Monty Hall problem when you don't care who has seen what? --NeoUrfahraner 10:58, 10 February 2006 (UTC)
I just told you that. Because the probability that the first door I pick has the car is unchanged; it's still 1/3. Why do you change? Bayes theorem? INic 01:05, 11 February 2006 (UTC)
As you remember, I did not change the fish soup. There are two more doors. What about their probabilities? --NeoUrfahraner 06:09, 11 February 2006 (UTC)
Yes I remember that. Does this mean that you don't change door when there is a car behind one of the three doors too? Or do you think in a bad way only when there is fish soup involved? If you do you are inconsistent, and if you don't you haven't understood the Monty Hall problem. In the latter case it is easy to fool you into a game where you eventually will lose all your money. Wanna play? INic 23:15, 13 February 2006 (UTC)
First I want to understand how you interpret the Monty Hall problem. There are two more doors. What about their probabilities? --NeoUrfahraner 05:06, 14 February 2006 (UTC)
My interpretation is very simple. All three doors have probability 1/3 of containing the car. When Monty removes one of the unselected doors, the probability of the chosen door is still 1/3. However, Monty changes the experiment by altering its sample space. The new experiment has only two possible outcomes. As the chosen door still has probability 1/3 of containing the car, the norming condition in the second axiom of probability implies that the other door must have probability 2/3 of containing the car. INic 23:31, 16 February 2006 (UTC)
The experiment and the sample space have to be fixed before the experiment begins. In particular, the sample space of the Monty Hall problem contains the possible places for the car, the possible first choices of the player, the possible choices of Monty and the possible final choices of the player. If the doors are denoted by 1, 2, and 3, then the sample space is the cartesian product {1,2,3}^4. E.g. the tuple (1,2,3,1) denotes the event that the car is behind door 1, the player's first choice is door 2, Monty opens door 3, and the player switches to door 1 (winning the car). It is not scientific if you change the sample space during the experiment. --NeoUrfahraner 06:10, 17 February 2006 (UTC)
I'm not changing the sample space, Monty does that. If the sample space isn't changed the probabilities don't change either. It's possible to solve the Monty Hall problem from that point of view too, but the story has to be altered slightly first to fit into that model. That's why I think that solution isn't as good as the one I provided above. INic 23:44, 19 February 2006 (UTC)
Monty does that ;-). I guess you have severe difficulties saying "I was wrong". --NeoUrfahraner 09:08, 21 February 2006 (UTC)
Why would I say I'm wrong when I'm not? INic 15:09, 21 February 2006 (UTC)
No need for that. It is enough to say you are wrong when you are wrong. --NeoUrfahraner 15:22, 21 February 2006 (UTC)
Now I want to know how you reason in the Monty Hall problem. After you've explained that, I want your account of the same problem when the 'car' is replaced with 'good fish soup' and the 'goat' with 'bad fish soup'. Do your accounts differ? If so, why? This will be very interesting. INic 23:31, 16 February 2006 (UTC)
By changing sample spaces during experiments you may obtain every result you like. What you are doing is closer to esoterics than to scientific method. By the way, on 21:38, 14 September 2005 you offered me a very nice fish soup which I decided to take on 09:32, 15 September 2005. As I understand, very nice means good, not bad, so I am still happy that I did not switch to the 'bad fish soup'. But maybe you changed the sample space again in the meanwhile. --NeoUrfahraner 09:40, 17 February 2006 (UTC)
But can you explain how you reason in the Monty Hall game please? You promised to do that as soon I've given my account. I've given my account now. Where's your account? INic 23:44, 19 February 2006 (UTC)
There are many different ways to reason. A simple (but not very elegant) way is to explicitly enumerate the sample space. In particular, for the always-switch strategy the 6 events (1,2,3,1), (1,3,2,1), (2,1,3,2), (2,3,1,2), (3,1,2,3), and (3,2,1,3) have probability 1/9, the 6 events (1,1,2,3), (1,1,3,2), (2,2,1,3), (2,2,3,1), (3,3,1,2), and (3,3,2,1) have probability 1/18, and the remaining events have probability 0. The events (A,B,C,D) with A=D will lead to the car, so the always-switch strategy leads to the car with probability 6/9=2/3. --NeoUrfahraner 15:14, 21 February 2006 (UTC)
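(The enumeration above can be transcribed mechanically; a small Python sketch of the same computation, added for illustration:)

```python
from fractions import Fraction

# Tuples (car, first choice, Monty's door, final choice) for the
# always-switch strategy, with the probabilities listed above
prob = {e: Fraction(1, 9) for e in
        [(1,2,3,1), (1,3,2,1), (2,1,3,2), (2,3,1,2), (3,1,2,3), (3,2,1,3)]}
prob.update({e: Fraction(1, 18) for e in
             [(1,1,2,3), (1,1,3,2), (2,2,1,3), (2,2,3,1), (3,3,1,2), (3,3,2,1)]})

assert sum(prob.values()) == 1  # the listed events exhaust the sample space

# The player wins the car exactly when car == final choice (A = D)
p_win = sum(p for (a, b, c, d), p in prob.items() if a == d)
assert p_win == Fraction(2, 3)
```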
OK, I agree that it wasn't that elegant or simple... But the important thing is that the conclusion is correct. Now, if you replace 'car' with 'well tasting fish soup' and 'goat' with 'bad tasting fish soup' in your account above—will that change the reasoning in a significant way? That is, will you still stick to your first choice of fish soup as you did before? INic 02:07, 22 February 2006 (UTC)
If you replace 'car' with 'well tasting fish soup' and 'goat' with 'bad tasting fish soup', the reasoning does not change in a significant way. --NeoUrfahraner 14:06, 22 February 2006 (UTC)
OK, and what's in that case your answer to my last question? That is, will you still stick to your first choice of fish soup as you did before? Despite the fact that the other fish soup will be the best choice with a probability of 2/3—as you now admit that it has? INic 01:46, 23 February 2006 (UTC)
See my answer from 10:17, 23 February 2006 --NeoUrfahraner 10:18, 23 February 2006 (UTC)
When it comes to the fish soup; you saw a nice fish soup, that's correct. However, the other two dishes were equally nice-looking fish soups. In fact, they looked exactly the same as the one you saw. They only differ in taste, remember? After looking at the soup you still don't know if it tastes good or not, even though it looks good. At 23:37, 16 September 2005 I told you about these new rules and I said that only a fool would stick to the first choice in this case, for exactly the same reasons that it's best to switch in the Monty Hall problem. You, however, happily and without hesitation announced that the foolish choice was your cup of tea. And you still do! That is truly amazing. INic 23:44, 19 February 2006 (UTC)
You changed the sample space again. On 21:38, 14 September 2005 you said "that it's a very nice fish soup". Now you say it only looks good. Or was it Monty who changed the sample space? --NeoUrfahraner 15:18, 21 February 2006 (UTC)
Of course I changed the sample space. I removed one of the bad dishes, and I told you I did. You should have seen the similarities with the Monty Hall problem here, but you didn't. And you still don't. Amazing. And yes, you did only see the fish soup. I didn't let you eat it first and then ask if you wanted to switch! I might have been more explicit on that point. Sorry about that. I didn't know you so well at that point in time. INic 00:16, 22 February 2006 (UTC)
I just checked what you said back then when your reasoning made you foolishly stick to the fish soup. At 14:48, 15 September 2005 you said The choice of the fish soup is in concordance with the current article, because after seeing the fish soup, I do no longer expect that there is probability 1/2 for the other meal to be twice as tasty. See the article: "Step 2 in the argument above is flawed". Before seeing the soup, there was no difference in swapping, but after seeing the soup, I got some information and was able to decide whether I like that particular meal or not. As you can see, you were well aware that you only saw the first dish, not eating it. Back then you thought that this was clearly enough information to make a probability estimate of the tastiness of the other dish. Have you changed your mind here too? INic 19:55, 22 February 2006 (UTC)
At that time I thought you are a person of honor who actually offers a very nice fish soup when he writes "that it's a very nice fish soup". Now I learned that I was wrong and that I better do not eat at all from any dish that you offer. --NeoUrfahraner 10:17, 23 February 2006 (UTC)
OK, at last you admit you were wrong in the dish case. You do your best to blame your own fault on me, though. I know you by now so I didn't really expect a clear "I was wrong"-statement either. However, I really did all I could to get you on the right track from the very beginning. I told you explicitly that I didn't use any prior when choosing dishes, and that it therefore would be foolish to postulate any prior. I even gave you the correct answer to my question before you had to answer it for the first time: The probability that your fish soup is the nicest dish is 1/3, and 2/3 that the other dish is the better choice. Only a fool would stick to the first dish. Despite all these hints—that almost yelled out the right choice to you—you made the wrong decision... The pseudoscience of subjectivism had a firm grip on your mind, and made you immune to rational argument. I really hope you by now realize how dangerous Bayesianism can be. INic 00:39, 28 February 2006 (UTC)

Paradox of probability? (Part 4)

I agree it is a good convention to answer Yes/No-question with a "Yes" or a "No". So let me repeat two of my Yes/No-questions and ask you to answer with "Yes" or "No":

(12:55, 27 January 2006): Do you agree with my summary? --NeoUrfahraner 05:27, 2 February 2006 (UTC)

No. (As I've told you before.) INic 16:10, 2 February 2006 (UTC)

(06:39, 1 February 2006): Are you saying that objectivists do not use conditional probabilities? --NeoUrfahraner 05:27, 2 February 2006 (UTC)

No. (Stupid rhetorical question.) INic 16:10, 2 February 2006 (UTC)

Closed/open envelope

And one new Yes/No-question: Do you agree that the considered article Envelope paradox is about the "open envelope paradox" where You may open one envelope, examine its contents, and then, without opening the other, choose which envelope to take? --NeoUrfahraner 05:27, 2 February 2006 (UTC)

No. (The last variant in the current article isn't about opened envelopes.) INic 16:10, 2 February 2006 (UTC)
Do you agree that Step 1 to 8 in the current article refer to the "open envelope paradox"? --NeoUrfahraner 16:30, 2 February 2006 (UTC)
Steps 1-8 are part of the first variant of the paradox. I said that only the last variant in the article is about closed envelopes. As there are three variants in the article, the first and the last isn't the same. What part of your personality is it that doesn't understand the difference between first and last? INic 18:05, 2 February 2006 (UTC)
Anyway, if the current article is exclusively concerned with the "open envelope paradox" I suggest that you delete the last variant as well as deleting references to papers where the "closed envelope paradox" is mentioned. INic 18:05, 2 February 2006 (UTC)

Just to make the third question clear: Two envelopes problem considers the "closed envelope paradox" where You pick one envelope at random but before you open it you're offered the possibility to take the other envelope instead. --NeoUrfahraner 09:03, 2 February 2006 (UTC)

No. (Only the first and last variants in that article are about closed envelopes.) INic 16:10, 2 February 2006 (UTC)
Do you agree with Eric Schwitzgebel and Josh Dever, Footnote 2, that It is worth distinguishing the "closed envelope" version ... from the "open envelope" version, ... which ... merits a very different treatment? --NeoUrfahraner 16:30, 2 February 2006 (UTC)
Of course not. They are subjectivists, why should I agree? Do you agree? If you do, why? Didn't you say you were merely an axiomist? Or was that only your diabolical self? Hmmmm... Your different mental states confuse me. Can't you provide me with a matrix where you plot your different personalities and opinions? That would be great! INic 18:05, 2 February 2006 (UTC)
There is no need to project anything into my question. It is enough if you simply say yes or no. Anyway, with respect to the footnote, I agree with Eric Schwitzgebel and Josh Dever. As I said already, for the closed envelope version it is appropriate to consider the unconditional probability, for the open envelope version I consider the conditional probability. --NeoUrfahraner 21:23, 2 February 2006 (UTC)
It's interesting to note that you and Gdr have different opinions here. When I added the most basic version of the paradox, i.e., the "closed envelope version," Gdr reverted it with the argument that it made no difference to the solution of the paradox. To Gdr step 2 is the culprit even in that case. Please see our discussion above The Most Basic Version is Lacking (!). Will you please add the closed version to the article? If Gdr doesn't revert that, we will improve this article greatly (and be a big step closer to a merge between the two "envelope paradox" pages). INic 03:30, 3 February 2006 (UTC)
After reading the references I agree that you are right with respect to the closed envelope version. Originally I applied the reasoning of the open envelope version also to the closed envelope version. Although Gdr is right saying that this is possible, this is wrong for the article because it is not the usual interpretation. In the literature I read, the main focus is on the closed envelope version and on Using Variables Within the Expectation Formula. When I find time, I will convert the article to a style closer to the published literature. --NeoUrfahraner 05:18, 3 February 2006 (UTC)
Great! Please feel free to be inspired by the two envelopes problem page. Can you give me a reference to a paper supporting Gdr's solution to the closed envelope paradox, that also you previously believed in? INic 11:59, 3 February 2006 (UTC)
Unfortunately the main focus of the literature seems to be on the closed envelope version. I found the open envelope version in Olav Gjelsvik, Can Two Envelopes Shake The Foundations of Decision Theory?, in Jan Poland, The Two Envelopes Paradox in a Short Story, and in David J. Chalmers, The Two-Envelope Paradox: A Complete Analysis?. Gjelsvik, however, is not very explicit, and Poland and Chalmers consider continuous distributions, which contain some mathematical pitfalls, so I am not sure whether you will be very happy with them. --NeoUrfahraner 13:44, 3 February 2006 (UTC)
No need to be sorry, my question was concerning the closed envelope problem. I wonder if someone in the literature claims that step 2 is the culprit even in the closed envelope problem, as Gdr and you did before? INic 17:17, 3 February 2006 (UTC)
Up to now I did not find this claim in the literature. --NeoUrfahraner 19:06, 3 February 2006 (UTC)
So this means that it's only you and Gdr who understand this idea so far... INic 02:21, 4 February 2006 (UTC)
At least it means that I cannot prove anything different. --NeoUrfahraner 05:02, 4 February 2006 (UTC)
May I suggest that you and Gdr publish a paper about it? BTW, when will you change the article as you promised? If you don't do it soon I will. INic 01:35, 11 February 2006 (UTC)
Look at Clark/Shackel, bottom line of first page. --NeoUrfahraner 06:19, 11 February 2006 (UTC)
What Clark and Shackel say there is not the same as what you and Gdr claim here. BTW, when will you update the article as you promised 3 February 2006?
Clark and Shackel, bottom line of first page is essentially identical to "Solution" in the article. --NeoUrfahraner 05:14, 14 February 2006 (UTC)
But we're not talking about the open envelope problem, but the closed envelope problem. Remember? Your amnesia will drive me crazy before long. INic 00:33, 17 February 2006 (UTC)
From what text in the paper do you conclude that Clark and Shackel restrict themselves to the open envelope problem? --NeoUrfahraner 08:03, 17 February 2006 (UTC)
The way I read their paper, only the first paragraph is about the closed envelope problem. However, they think that that problem is uninteresting. The only comment they have is "As it stands it is not difficult to see what is wrong with this argument." Then they move on to the more interesting open envelope problem with the comment "But it can be developed into a paradox that is not so easily resolved." INic 18:05, 20 February 2006 (UTC)
No. The paradox that is not so easily resolved is a St. Petersburg-case with E(B − A | A = n) > 0 for every n, which can occur only when E(A) is infinite. This infinite case is discussed later in their paper. --NeoUrfahraner 15:53, 21 February 2006 (UTC)
Sure, and that is a variant of the open envelope problem, right? If no envelopes are opened it's irrelevant what prior you have, or if a prior exists—it's possible to pursue the closed envelope argument anyway, correct? That's why only their first paragraph (at most) is about the closed envelope problem. Your favorite quote is from the third paragraph, however. INic 00:32, 22 February 2006 (UTC)
No. They apply the St. Petersburg-case to both the open and closed envelope problem and clearly differentiate these two cases. --NeoUrfahraner 14:09, 22 February 2006 (UTC)
They actually show that the St. Petersburg game has nothing to do with the two envelope problem. On that point they are crystal clear. However, when it comes to the open/closed issue you are actually right that they suddenly, at the end of their paper, basically say that "the problem seems to reappear when we open one envelope and look." That is quite strange, as their entire paper is about the open envelope case. In the closed variant there is no need for any particular prior distribution; we can in fact know what the envelopes contain. Their St Petersburg type of priors has no meaning at all in the closed case. Maybe you understand their reasoning here better, but it's over my head in any case. INic 16:35, 27 February 2006 (UTC)
You wrote "That is quite strange as their entire paper is about the open envelope case.". Actually this is not strange at all because, as I said on 15:53, 21 February 2006, I do not agree that the paper is restricted to the open envelope case. --NeoUrfahraner 15:18, 28 February 2006 (UTC)
OK, let's assume you are right that their paper is about the closed case up to section four: "4. Looking inside your envelope" where they consider the open envelope case. You still got the problem that you use neither their solution for the closed case nor their solution for the open case in the article. If you want to refer to Clark and Shackel for the solution in the article you have to actually respect their respective solutions. INic 15:21, 3 March 2006 (UTC)
In addition, as you agree with the authors in the literature who claim that we have to distinguish between the open and closed cases, it's very odd that you refuse to incorporate that distinction into the article in a more explicit way. I disagree personally with these authors, but I still want to add this distinction to the article just because most of the authors think it is important. You agree personally with these authors but refuse anyway, for some mysterious reason, to incorporate this distinction into the article. It's very hard to understand the way you think here. To me you're the real paradox. INic 15:21, 3 March 2006 (UTC)
You wrote "In the closed variant there is no need for any particular prior distribution; we can in fact know what the envelopes contain.". OK, what do the envelopes contain in the closed variant? --NeoUrfahraner 15:19, 28 February 2006 (UTC)
In the closed variant, as I've told you, the envelopes can have known content. Say one silver coin and one gold coin. The paradoxical argument is unaffected by this knowledge as we never open any envelope. Please tell me where the problem with infinity is when we put one silver coin and one gold coin in the envelopes. INic 15:21, 3 March 2006 (UTC)
It seems that I misunderstood the meaning of "can" in your answer. I thought you meant "In the closed variant there is no need for any particular prior distribution; we in fact know what the envelopes contain." Could it be that you, however, meant "In the closed variant there is no need for any particular prior distribution; we may assume that we know what the envelopes contain."? Do I now understand correctly? --NeoUrfahraner
I mean that we in no way destroy the paradox by knowing the contents in the envelopes if we never open them. To keep that a secret in the closed case is an overkill that adds nothing to the story. INic 03:01, 7 March 2006 (UTC)
Very interesting. What paradox do you obtain when the envelopes have "known content. Say one silver coin and one gold coin."? --NeoUrfahraner 05:38, 7 March 2006 (UTC)
With probability 1/2 I will get the gold coin, and switching before looking will give me the silver coin, which is worth half as much as the gold coin. I will lose half of what I have. If I get the silver coin I will get twice as much if I switch before looking. I will get the silver coin with probability 1/2. If we denote the value I have by A, I will on average win 0.25A by switching. INic 00:27, 18 March 2006 (UTC)
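(The 0.25A computation, and why it is paradoxical, can be put side by side; a sketch added for illustration, assuming the silver coin is worth 1 and the gold coin 2:)

```python
from fractions import Fraction

silver, gold = Fraction(1), Fraction(2)  # the gold coin is worth twice the silver

# The step-by-step argument: calling the value I hold A, switching gives
# 2A or A/2 with probability 1/2 each, for a claimed average gain of A/4.
def claimed_gain(a):
    return Fraction(1, 2) * (2*a - a) + Fraction(1, 2) * (a/2 - a)

assert claimed_gain(Fraction(1)) == Fraction(1, 4)  # "win 0.25A on average"

# The actual expected gain: I hold gold or silver with probability 1/2,
# and switching just swaps the two fixed amounts.
actual_gain = Fraction(1, 2) * (silver - gold) + Fraction(1, 2) * (gold - silver)
assert actual_gain == 0
```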
So you agree that the model must preserve the irrelevant requirement (that one coin is twice as valuable as the other), see INic 15:33, 3 March 2006 (UTC). --NeoUrfahraner 08:08, 18 March 2006 (UTC)
Of course not. INic 21:52, 18 March 2006 (UTC)
At the moment I do not expect that I can find a formulation where both of us agree; in particular, you disagree with the literature (Eric Schwitzgebel and Josh Dever, Footnote 2). It makes no sense to update the article before an agreement has been achieved. --NeoUrfahraner 05:14, 14 February 2006 (UTC)
But we do agree. At least at 05:18, 3 February 2006 we agreed that the current article doesn't explicitly enough include the closed envelope problem and its solutions as found in the literature and the references in the article. If you can't find the courage to change the article yourself as you promised to do, I'll have to change it for you. Are you afraid of Gdr or what? INic 00:33, 17 February 2006 (UTC)
I said When I find time, I will convert the article to a style closer to the published literature. In particular, I planned to distinguish the "closed envelope" version from the "open envelope" version. You, however, do not agree with this. In addition, after reading Clark and Shackel I am not sure whether any change is actually needed. --NeoUrfahraner 08:03, 17 February 2006 (UTC)
I sure do agree that the literature distinguishes between the open and closed versions of the paradox. Why do you think I'm struggling with you and Gdr to incorporate that version in the current article? And why do you think I clearly distinguish these cases on the two envelopes problem page? I'll give you a hint: I'm able to distinguish what I think personally from what I see are the most common opinions in the literature. You, however, let the current article reflect whatever happens to be the latest state of your mind. Wikipedia isn't a solipsistic project, I really do hope you know that! INic 02:23, 20 February 2006 (UTC)

Known random distribution

One more question: If the envelopes are filled using a known random distribution and if one envelope is picked by tossing a fair coin, do you see any difference between the open and the closed envelope version? --NeoUrfahraner 21:23, 2 February 2006 (UTC)

No (assuming that you still manage to derive the paradox under these new conditions). INic 03:30, 3 February 2006 (UTC)

To be more specific: If the envelopes are filled using a known random distribution and if one envelope is picked by tossing a fair coin, do you see any difference in the expected gain between the open and the closed envelope version? --NeoUrfahraner 05:18, 3 February 2006 (UTC)

No. INic 11:59, 3 February 2006 (UTC)

On 04:57, 19 September 2005, I suggested an open envelope game. On 11:55, 19 September 2005 you chose a strategy for that game and computed an expected gain of

(3/20)·2 + (1/10)·(2^2 + 2^3 + ... + 2^9) + (1/20)·2^10

Do you still say that this expected gain is correct for the chosen strategy? --NeoUrfahraner 13:44, 3 February 2006 (UTC)
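As an editorial check (not part of the original exchange), the stated expected gain can be evaluated exactly:

```python
from fractions import Fraction

# Exact evaluation of (3/20)*2 + (1/10)*(2^2 + ... + 2^9) + (1/20)*2^10
gain = (Fraction(3, 20) * 2
        + Fraction(1, 10) * sum(2**k for k in range(2, 10))
        + Fraction(1, 20) * 2**10)
print(gain)  # 307/2, i.e. 153.5
```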

This is perhaps an "envelope game" but it's not an envelope paradox. (When you open an envelope in this game the paradox disappears.) INic 17:17, 3 February 2006 (UTC)

If you prefer, we can call it "envelope game". So you still agree that your expected gain is computed correctly for the chosen strategy? --NeoUrfahraner 18:16, 3 February 2006 (UTC)

Yes. INic 00:38, 4 February 2006 (UTC)

Is there a strategy with higher expected gain? --NeoUrfahraner 05:02, 4 February 2006 (UTC)

If you want to talk about subjects other than the envelope paradox I suggest that you start a page called "various envelope games" and discuss these games there. If you think that your game is connected to an envelope paradox you have to show that first. INic 13:01, 4 February 2006 (UTC)

This is just the discussion page, not the article, so the rules are less strict. Additionally there are already two envelope articles waiting to be merged, so it makes no sense to start a third one before these articles are merged. If you are not sure about your answer, just tell me your conjecture. --NeoUrfahraner 14:16, 4 February 2006 (UTC)

I am still waiting for an answer. --NeoUrfahraner 06:29, 7 February 2006 (UTC)

We've already discussed this game of yours before and it didn't lead anywhere then. If you want to discuss it again you have to explain why. I've already said all there is to it once and I see no need to repeat that. INic 01:48, 8 February 2006 (UTC)

I just guess that you are having severe difficulties saying "I was wrong." --NeoUrfahraner 05:50, 8 February 2006 (UTC)

Not at all. We have less in common than you think. In fact I love to be wrong. Only when I'm wrong in a discussion have I learnt something, and I love to learn. But can you please tell me what I'm wrong about here? What question did I get and what did I answer? This was a little surprising, you know. You haven't even formulated your new goal for this old subject and I'm apparently already wrong about something! INic 06:24, 10 February 2006 (UTC)

I asked: "How would you proceed to gain a score as high as possible?" You answered: "It clearly doesn't matter what I do unless I happen to get the paper with one or 2^10 written on it. In the former case I'll switch otherwise I won't.". Then I asked more questions which you did not yet answer up to now:

  • Why are you switching at all?
  • What would be the expected gain when you never switch?
  • What would be the expected gain when you always switch?
  • What would be the expected gain when you switch unless you find 2^10?
  • Where does the difference come from? --NeoUrfahraner 12:36, 10 February 2006 (UTC)
OK so I'll repeat my answer: This is where you subjectivists go astray. You 'translate' the original problem into other problems where you can use your favorite tool, Bayes' theorem, i.e., you invent some prior probability distributions. Then you investigate those new situations and try to generalize from them. If you find a pattern that holds for whatever prior you can think of you are satisfied, because then you think you have solved the problem in the most general manner.
Well, this is simply not correct. To invent a prior is to alter the original experiment in some fundamental ways. This is seen above, where your prior introduces known limits, and suddenly we know in some cases what's in the other envelope. That is never the case in the original experiment. [...] This is thus a very bad interpretation of the original problem, as this situation should never happen. It gets even worse considering the fact that the existence of these known limits is the direct source of the conclusion you draw from this interpretation. INic 01:29, 11 February 2006 (UTC)
You, however, have still not explained why you want to discuss this non-paradoxical example again. INic 01:29, 11 February 2006 (UTC)

I want to discuss this example again because you love to learn. You wrote many words but did not answer my questions. Let's pose one of them again:

  • What would be the expected gain when you switch unless you find 2^10? --NeoUrfahraner 06:19, 11 February 2006 (UTC)
I did answer your questions. If you didn't understand my answer you have to tell me that. You, however, have still not explained how your example is connected to the article at hand. It's great that you wanna teach me things, but I'm afraid you have to tell me what you want to teach me before we begin class. In addition, it must be something that has something to do with the current article. If it isn't you can send me a personal message instead and inform me. This discussion page shouldn't be bloated with irrelevant topics you know. INic 23:37, 13 February 2006 (UTC)

The answer to the question

  • What would be the expected gain when you switch unless you find 2^10?

is just a single number. You did not give this number up to now, so how did you answer my question? Actually a person who loves to learn does not try to escape answering questions. I rather get the impression that you know already where your error lies. Why do you insist on a personal message? Up to now nobody else complained that we might be off-topic. Are you afraid that someone could read on the discussion page that you made a mistake? --NeoUrfahraner 05:35, 14 February 2006 (UTC)

If I made some errors it would be easy for you to show me where I made them, right? Instead you complain that I haven't answered your questions. But if I haven't given any answers, how can my answers be wrong? If they don't exist they can't be wrong, right? I have a hard time understanding your kind of "logic." I have a lot to learn about that logic, that's for sure. INic 00:33, 17 February 2006 (UTC)

You said It clearly doesn't matter what I do unless I happen to get the paper with one or 2^10 written on it. When you answer the question

  • What would be the expected gain when you switch unless you find 2^10?

you will see that it actually does matter. --NeoUrfahraner 08:07, 17 February 2006 (UTC)

Aha, so this is your great discovery? Wow! Why did it take you 5 months to tell me you didn't understand my answer? So what are your conclusions from the fact that "it does matter"? Does this tell you that it's better to switch in every case? How does this example connect to the problem at hand (in the article)? In other words, why is this an interesting example according to you? When you've answered these questions you might begin to understand the answer I gave you 5 months ago. INic 00:01, 20 February 2006 (UTC)

That's a funny way to say "I was wrong". --NeoUrfahraner 19:16, 21 February 2006 (UTC)

You still haven't explained your purpose in having this example here, neither this time nor the first time you brought it up. I just assume you want to generalize from examples like these, which have "border effects", to say something about the original problem, which lacks borders. Am I wrong about that? You say you have something great to teach me here. So please let me hear it! I really want to know. We have discussed this example of yours so long now that I really want to know why you think it's that interesting. INic 00:47, 22 February 2006 (UTC)

It shows you how you CAN gain by switching if you only switch "cleverly". Among other things, it gives you another answer to your question How can that be from 02:35, 5 February 2006. --NeoUrfahraner 14:16, 22 February 2006 (UTC)

But how can you derive a paradox in this case? If you can't derive a paradox, how can this example have any bearing on the paradoxical situation in the article? INic 02:35, 23 February 2006 (UTC)

It is the other way round: it is the solution of the paradox, showing that the statements in question do not really imply the contradiction. --NeoUrfahraner 10:24, 23 February 2006 (UTC)

Aha I see, but that is a very bad way of reasoning. Your model of the problem introduces conditions that weren't there from the beginning, and that is cool if you only clarify and don't destroy the original situation. However, your extra conditions clearly destroy the situation, as the reasoning in the paradox isn't valid anymore. This is maybe a subtle point to some but nonetheless a very important one. For example, let's say I introduce the extra condition that the player knows the two possible contents of the envelopes, say one gold and one silver coin. The player opens one envelope and is asked to switch. She knows at every instant what to do. Does this 'solve' the original two envelopes problem? It clearly doesn't. The new situation lacks the epistemological symmetry the original problem has and is thus a very bad model, especially as it's precisely the symmetry that causes the paradox. If you want a more detailed model of the original situation you have to preserve its key features in the new model. And the symmetry is a key feature, as is easily realized. INic 19:13, 27 February 2006 (UTC)

It is indeed a bad model because a gold coin is not worth twice as much as a silver coin, but anyway, it solves the problem. Similarly the problem is solved whenever you assume any specific probability distribution (with finite mean). BTW, I do not see any epistemological symmetry (whatever that should be) that has to be preserved. The only symmetry that has to be preserved is that one envelope is chosen "randomly" (e.g. using a fair coin). --NeoUrfahraner 15:32, 28 February 2006 (UTC)

This is quite funny. You think that the model must preserve the irrelevant requirement (that one coin is twice as valuable as the other) but you think it's OK if the necessary requirement (epistemological symmetry) for the paradox is dispensed with. And no, there are finite probability distributions where you can still produce the paradox. INic 15:33, 3 March 2006 (UTC)

When I speak about "symmetry", I mean Assumption A from 08:27, 18 January 2006. Up to now you were not able to explain what you mean by "symmetry", cf. my posting from 09:42, 6 December 2005. Either you are able to provide an exact definition of "epistemological symmetry" or I have to consider it as some meaningless buzzword. --NeoUrfahraner 20:10, 3 March 2006 (UTC)

Your learning curve is really steep. You had a hard time defending your assumption A when you introduced it, remember? Well, obviously you don't. Please read that again and you will see that you reluctantly had to change your mind about your assumption A. You seem to forget about everything. Gah! Ah well. What can I do? INic 03:29, 7 March 2006 (UTC)
Epistemological symmetry is the kind of symmetry we have when we apply the principle of indifference. However, that principle leads to paradoxes as it stands, which is why no one believes in that principle anymore. Epistemological symmetry must therefore be replaced with real symmetry in some model. The symmetry in the model must have the same effect as the epistemological symmetry, that is, lead to the same conclusions as the principle of indifference would have done. INic 03:29, 7 March 2006 (UTC)

You are right. You are given two indistinguishable envelopes is misleading. You could as well say "You are given a red and a black envelope. Using a fair coin you chose one of them and call it envelope A. The other envelope is envelope B." Now these randomly chosen envelopes A and B satisfy assumption A without using the principle of indifference (or any "epistemological symmetry"). --NeoUrfahraner 05:43, 7 March 2006 (UTC)

That is correct. However, the symmetry you break in your model above is a different symmetry: the symmetry that makes you expect that the envelope you haven't opened is as likely to contain twice as much as half of what you have. This is the symmetry I'm talking about. In your model the prior distribution is known to the player and has borders. Obviously, when the player hits the borders the symmetry I'm talking about is broken. This symmetry is absolutely vital for the paradox. As your model doesn't preserve this vital symmetry, your model can't shed any light on the paradox. INic 22:30, 17 March 2006 (UTC)

So you agree that when the distribution is known to the player it is no longer true that the envelope you haven't opened is as likely to contain twice as much as half of what you have? --NeoUrfahraner 10:08, 18 March 2006 (UTC)

Not at all. INic 21:54, 18 March 2006 (UTC)

Paradox of probability? (Part 5)

I've gathered four questions you never answered here:

Part 5 / Question 1

INic 09:46, 22 September 2005: According to the text the subjectivists' magical ability to change a probability only by observation is both the solution to why the calculation in the problem is wrong (we don't gain 25% every time we switch) AND the reason why we, in fact, CAN gain by switching if we only switch "cleverly", that is, not every time. How can that be? To me these seem to be two contradictory standpoints. INic 02:35, 5 February 2006 (UTC)

To say it mathematically: There is no problem that E(B − A | A = n) = 0.25n, as long as it is valid only for some n. If it were valid, however, for every n, we could derive that E(B) > E(A), which contradicts E(B) = E(A). In particular, it is sufficient to find "enough" n satisfying E(B − A | A = n) < 0. --NeoUrfahraner 10:37, 6 February 2006 (UTC)
I wanted to point out that you use the same argument (probabilities change when the subject gets information) to argue in two different directions. On the one hand the probabilities change so that the expected value calculation in the text isn't valid anymore, showing that you can't gain anything by switching. On the other hand the same probability change enables you to, in fact, gain some by switching. How can the same argument work in both directions? INic 03:27, 7 February 2006 (UTC)
To make it more clear: The condition that must hold is
Σ_n E(B − A | A = n) P(A = n) = 0.
In other words, the expected gain is zero if you always switch. This does not mean, however, that the individual conditional expectations E(B − A | A = n) have to be zero. --NeoUrfahraner 05:22, 7 February 2006 (UTC)
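The algebraic point here (the weighted sum of conditional gains vanishes even though no single conditional gain is zero) can be checked on a toy prior. This is an editorial sketch; the distribution is an assumption for illustration, not taken from the discussion:

```python
from fractions import Fraction

# Toy prior, assumed for illustration: the envelope pair is (1, 2) or (2, 4),
# each with probability 1/2, and a fair coin decides which envelope is A.
half = Fraction(1, 2)
joint = {}  # joint distribution of (A, B)
for x, y in ((1, 2), (2, 4)):
    for a, b in ((x, y), (y, x)):
        joint[(a, b)] = joint.get((a, b), 0) + half * half

p_a, gain_a = {}, {}
for (a, b), p in joint.items():
    p_a[a] = p_a.get(a, 0) + p
    gain_a[a] = gain_a.get(a, 0) + p * (b - a)

cond = {a: gain_a[a] / p_a[a] for a in p_a}  # E(B - A | A = a), nonzero for each a
total = sum(gain_a.values())                 # sum_a E(B - A | A = a) P(A = a)
print(cond, total)
```

Here the conditional gains are 1, 1/2 and −2 for A = 1, 2 and 4 respectively, yet their probability-weighted sum is exactly 0.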
But I'm only offered to play this game once. In fact, most of us would be glad to be offered this game even if it was only for a single try. As a subjectivist you should have no difficulty interpreting probabilities attributed to a single case like this. As a player I don't care about any sums over "all thinkable situations"; I want to know whether it's true that I'll gain 25% if I switch or not. And the argument that you provide is that the probabilities in the equation are wrong after I've looked in one envelope—that's why the equation is wrong and therefore I gain nothing by switching. Further, as evidence that the probabilities really change when I look in an envelope, you provide me with a strategy that lets me gain money when switching! Even in the single case. Isn't that weird? INic 04:07, 8 February 2006 (UTC)
Do you agree that
Σ_n E(B − A | A = n) P(A = n) = 0
is not equivalent to
E(B − A | A = n) = 0 for every n? --NeoUrfahraner 06:35, 8 February 2006 (UTC)
Are you saying that it's rational to switch in every case except at the boundaries of your uniform prior? INic 00:56, 11 February 2006 (UTC)
No. Anyway, how about answering my Yes/No-question with a "Yes" or a "No"? --NeoUrfahraner 06:25, 11 February 2006 (UTC)
OK, as you can't think without some prior, let's say we have a uniform distribution over a very very wide range of values, say (10^24)! integers. An integer pair (m, n) is picked from this set that differs by one, |m − n| = 1. In one envelope we write down the number 2^m and in the other 2^n. Even if we repeat this procedure every second until the universe dies we know that we will never even come close to the boundaries of this uniform distribution. We will play this only once, though. Now, as long as you don't look in any envelope the flaw in the argument is the use of a variable A with 2 different values in the expectation expression. But as soon as you look in an envelope and see A = 2^n, for example, two contradictory things happen. The subjective probabilities become P(m = n+1) ≠ P(m = n−1), which causes the expected gain for a switch to decrease from a 25% gain to zero gain. On the other hand, exactly the same observation makes it possible to increase the expected gain above zero. Even in the single case. How can the same observation both increase and decrease the expected gain? INic 00:51, 14 February 2006 (UTC)
How about answering my Yes/No-question with a "Yes" or a "No"? --NeoUrfahraner 06:13, 14 February 2006 (UTC)
Sure, but the problem is that the prior you talk about is only needed for the decrease argument; for the increase argument no prior is needed. In particular, it's possible to have E(B − A | A = n) > 0 for every n when using the strategy. Thus, the sum over your prior is > 0 too. INic 00:53, 17 February 2006 (UTC)
My question has nothing to do with priors or probabilities, it is simple algebra. To make it more clear for you:
Do you agree that for real numbers a_n, b_n, n = 1, ..., N,
Σ_n a_n b_n = 0
is not equivalent to
a_n = 0 for every n? --NeoUrfahraner 08:15, 17 February 2006 (UTC)
I already said sure above. So if this isn't a sum over a prior, why are you talking about it in the first place? I'm talking about the problem in the article, what are you talking about? Please stay on topic, even if you can't answer my questions. INic 00:08, 20 February 2006 (UTC)
You do not know what I am talking about? I am answering your question from 02:35, 5 February 2006: "How can that be?" --NeoUrfahraner 19:19, 21 February 2006 (UTC)
My question is about probabilities for sure. Your "answer" on the other hand is a rhetorical question that's only "simple algebra" that "has nothing to do with priors or probabilities." Do you see any difference in scope? INic 00:56, 22 February 2006 (UTC)
On 02:35, 5 February 2006 you said "To me that seem to be two contradictory standpoints." These are not contradictory standpoints because there is no contradiction between E(E(B − A | A)) = 0 and E(B − A | A = n) ≠ 0, which can be seen from the above simple algebraic considerations. --NeoUrfahraner 14:22, 22 February 2006 (UTC)
But you've said before that the strategy to increase the gain can always be applied. At 10:21, 31 December 2005 you said that It will, however, work in every case, even when the money in the envelopes are picked deterministically. That means that no prior needs to be postulated for increasing the probability. And that is true in every single case. However, to decrease the probability you have to consider the average over some prior. But I'm not interested in what happens "on average" as I'm only playing this game once. Is it really too mind-bending for you to imagine that situation, i.e., a single case? If it is, why isn't it mind-bending for you to imagine that when considering the increase argument? INic 03:49, 23 February 2006 (UTC)
The text you cite refers to a different strategy, namely to the strategy suggested by norpan 13:58, 30 December 2005. --NeoUrfahraner 10:28, 23 February 2006 (UTC)
No, norpan's strategy is precisely the kind of strategy I'm talking about. There are different flavors of them (there's a different one described under the Discussion heading in the article, for example) but they all share the feature we are talking about here. That is, they are all applicable to the single case. INic 19:56, 27 February 2006 (UTC)

Part 5 / Question 2

INic 09:46, 22 September 2005: Is the original calculation (25% gain) totally wrong or not according to a subjectivist? Is it only partially wrong? If so, how many percent do we gain if we switch "cleverly"? Say we gain x% (x > 0) by switching "cleverly," how do you avoid a paradox? And what is in that case the flaw in the other reasoning leading to no gain at all, i.e., by denoting the contents A and 2A and noting that we either gain A or lose A if we switch? INic 02:35, 5 February 2006 (UTC)

Answer 2a
The calculation is E(B − A | A = n) = 0.25n. For some n this is true, for other n it is not. An exact quantification of the percentage we gain by switching "cleverly" requires an exact quantification of the original probability P(A = m, B = n). I see no paradox in the fact that I can increase my expected gain by using available observations. The paradox only arises if I could increase my expected gain by ignoring all available observations. This would be the case if I could increase my expected gain by switching independently of the observation - in that case I would actually ignore the available observation. --NeoUrfahraner 08:00, 6 February 2006 (UTC)
But the subjectivistic solution to the open paradox is that the probabilities change when we look in an envelope, i.e., when we are not ignoring its contents. Now you say that the paradox only arises if you could increase your expected gain by ignoring all available observations—which is the closed paradox. That is, you say that the paradox is resolved when we look in an envelope—due to a subjectivistic probability change—but in that case the paradox can't arise anyway! Why make a big fuss about a subjectivistic solution with improper priors and all that crap if the paradox can't arise in the open case anyway? INic 03:27, 7 February 2006 (UTC)
No, I am saying that there would be a paradox if
Σ_n E(B − A | A = n) P(A = n) > 0.
In particular, this would occur if E(B − A | A = n) > 0 for every n. Fortunately, however, this is true only for some n and false for some other. --NeoUrfahraner 05:36, 7 February 2006 (UTC)
You didn't answer my question. I'll restate it. If we don't look in any envelopes the solution is that we use variables in a bad way in the equation at step 6, right? OK. If we look in an envelope the variables are OK but the probabilities in the equation change instead, right? OK. Now you say that the paradox only arises if we ignore what we saw in the envelope we opened! So why did we open any envelope at all? And if the paradox doesn't arise when we look in an envelope (and don't ignore what we see), why have a special solution for that case in the first place? A case without a problem doesn't call for a solution. Right?? INic 04:07, 8 February 2006 (UTC)
Please read again. I said The paradox only arises if I could increase my expected gain by ignoring all available observations. To increase my expected gain I have to open an envelope and use the available observation. --NeoUrfahraner 06:20, 8 February 2006 (UTC)
But what's the point of looking if you're gonna ignore what you see anyway? And how is it psychologically possible to completely ignore what you've once seen? And if you manage to completely forget what you've seen (someone hits you hard on the head, say), what's the difference between that and the case when you never look? The scope for the "strange prior" solution to the paradox seems to be, if not completely nonexistent, very small. INic 00:56, 11 February 2006 (UTC)
Forget about psychology, I just made a (maybe bad) translation from mathematics to the real world. When I do not look, switching gives E(B − A) = 0. Now I open the envelope, find the amount A = n and compute E(B − A | A = n) > 0. If this held for every n, I would compute
E(E(B − A | A)) = Σ_n E(B − A | A = n) P(A = n) > 0.
On the other hand, I know E(E(B-A|A))=E(B-A)=0, so this is a contradiction/a paradox. Fortunately the statements in question do not really imply the contradiction. I translated "E(B-A|A=n)>0" to "I could increase my expected gain" and "holds for every n" to "by ignoring all available observations". If you prefer a different translation, feel free to suggest one. --NeoUrfahraner 07:36, 11 February 2006 (UTC)
Forget about psychology?? Subjectivistic probability isn't about anything other than psychology! Anyway, how can "ignoring all available observations" be a 'translation' of "holds for every n"? To 'hold for every n' is an expression related to your imaginary prior, whereas 'all available observations' only refers to a single real observation, namely what you actually see in the first envelope you open. Either you ignore that datum or you don't. There's no need for a plural form here. Your 'translation' explanation is empty. You still haven't explained why the open envelope case is a problem at all. INic 01:24, 14 February 2006 (UTC)
As I said, feel free to suggest a different translation for "holds for every n". --NeoUrfahraner 06:17, 14 February 2006 (UTC)
OK, let's see what you mean in an actual situation. Suppose you open an envelope and find 512 monetary units. Can you increase your expected gain by using that information or not? Does 512 "hold for every n" or not? Please explain how to use your rule of thumb in this example. I don't get it. INic 01:25, 17 February 2006 (UTC)
Whether I can increase my expected gain depends on whether E(B-A|A=512)>0, E(B-A|A=512)=0, or E(B-A|A=512)<0. From the law of trichotomy, I know that exactly one of these three statements is true (although I do not know which one). --NeoUrfahraner 08:24, 17 February 2006 (UTC)
And....? Can you increase your expected gain or not? Does this observation "hold for every n"? If it does please say "Yes" and if it doesn't please say "No". Is this really too much to ask for? INic 02:31, 20 February 2006 (UTC)
Yes. --NeoUrfahraner 19:22, 21 February 2006 (UTC)
My god! I will always remember to never ask two questions in a row to a man that suffers from this kind of severe amnesia... INic 01:08, 22 February 2006 (UTC)
OK, I'll repeat only a single question at a time: Can you increase your expected gain or not? If you can please say "Yes" and if you can't please say "No". INic 03:54, 23 February 2006 (UTC)
Yes, if the envelopes are filled using a known random distribution and if one envelope is picked by tossing a fair coin I can increase my expected gain when I switch depending on the contents of the envelope. --NeoUrfahraner 10:30, 23 February 2006 (UTC)
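The claim above (with a known prior, a switching rule that depends on the observed amount beats both fixed strategies) can be sketched on a toy prior. This is an editorial illustration under an assumed distribution (the pair is (1, 2) or (2, 4) with equal probability; these numbers are not from the discussion):

```python
from fractions import Fraction

# Assumed toy prior: the pair is (1, 2) or (2, 4), each with probability 1/2;
# a fair coin decides which envelope is opened (its content is A).
half = Fraction(1, 2)
joint = {}
for x, y in ((1, 2), (2, 4)):
    for a, b in ((x, y), (y, x)):
        joint[(a, b)] = joint.get((a, b), 0) + half * half

e_keep = sum(p * a for (a, b), p in joint.items())    # never switch
e_switch = sum(p * b for (a, b), p in joint.items())  # always switch
# Content-dependent rule: switch unless you see the largest possible amount (4).
e_clever = sum(p * (b if a < 4 else a) for (a, b), p in joint.items())
print(e_keep, e_switch, e_clever)
```

Never switching and always switching both yield 9/4, while the content-dependent rule yields 11/4: the gain comes entirely from using the observation, not from switching as such.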
This is interesting: the subjectivistic idea that we always gain "information" by observing outcomes is suddenly dependent on whether we actually explicitly know the prior distribution or not! I thought you always had your prior in your head anyway... You had a prior in your head when I served you fish soup, remember? You saw your prior in your head very clearly, you reported. That self-illusion led you astray, however, but that's another story. Have you learned the lesson from the fish soup now or what? What is your answer to the question above if you don't know what prior was used? What is your answer to the same question if you explicitly know that no prior at all was used (as in the dish case)? INic 20:29, 27 February 2006 (UTC)
So you agree that if the envelopes are filled using a known random distribution and if one envelope is picked by tossing a fair coin I can increase my expected gain when I switch depending on the contents of the envelope? --NeoUrfahraner 15:36, 28 February 2006 (UTC)
Not at all. How about answering my questions now? INic 15:44, 3 March 2006 (UTC)
Yes, I can increase my expected gain even when the distribution used for filling the envelopes is unknown. If you do not understand how to increase the expected gain for a known distribution, you will not understand how to increase the expected gain in the case of an unknown distribution. So let's first finish the case of a known distribution. Specify a distribution, tell me your expected gain if you do not switch, and I will show you how to increase the expected gain. --NeoUrfahraner 18:07, 3 March 2006 (UTC)
If you have a winning algorithm even for the case of an unknown prior, it's very strange you didn't use that when you were offered dishes before, isn't it? Instead you used some strange mental algorithm that in fact decreased your expected gain significantly. If you can't act according to your own principles yourself, who can? INic 03:50, 7 March 2006 (UTC)
I have a simple example of a finite distribution (with finite expectation) where you can't increase the expected gain once you've looked into one envelope—even when you know the distribution. But I can't tell you what it is because if I do you will complain that it's "original research" and shouldn't be at Wikipedia at all. So I'll have to rest my case here, unfortunately. INic 03:50, 7 March 2006 (UTC)
In other words, you are saying that nobody in the world knows that there is such an extraordinary distribution, but great INic finally found one and does not want to publish it on this place. I find it more likely that you have severe difficulties saying "I was wrong." Anyway, so do you agree at least that if it is known that the envelopes have been filled with some "ordinary" known distribution, it is possible to increase the expected gain by switching dependent on the contents? --NeoUrfahraner 06:28, 7 March 2006 (UTC)
Yes, I discovered it just some weeks ago actually. I've been pondering a distribution like that for years without success but our discussion here must have triggered me. :-) Is it OK if I mention you in my article? If the paradox were restricted to "ordinary" distributions (whatever that would mean) your question would be relevant. However, it isn't. INic 23:11, 17 March 2006 (UTC)
Say we restrict to a distribution that "has borders", cf. your answer from 22:30, 17 March 2006. Do you agree that under this restriction it is possible to increase the expected gain by switching dependent on the contents? --NeoUrfahraner 10:19, 18 March 2006 (UTC)
If the borders are known to the player she knows what to do when/if she hits them. In the extreme case the prior has only two possible outcomes that are known to the player. In this case the player always knows what to do after opening one envelope. However, this is never the case in the (open) paradox situation. It's essential to the paradox that the player DOESN'T know what to do for certain. If the player does, you are discussing another situation altogether. INic 22:26, 18 March 2006 (UTC)
The situation here is quite like when a politician gets a tricky question from a reporter. Instead of answering the original question the politician "restates" the question as another question that is not tricky to answer at all. He then immediately gives, with a smile on his face, the obvious answer to his newly invented easy question. Of course, the restated question had very little in common with the original question, which is why the original tricky question remains unanswered by the politician. INic 22:26, 18 March 2006 (UTC)

Answer 2b
With respect to A vs. 2A: This is a formulation where one is again using variables within equations. --NeoUrfahraner 08:00, 6 February 2006 (UTC)
Oh no, not at all. We are not using variables in a bad way here, which is what you mean, I suppose. To recap, the expected gain when we switch is
E(switch) = (1/2)(2A - A) + (1/2)(A - 2A) = 0
OK. --NeoUrfahraner 06:27, 7 February 2006 (UTC)
If the probabilities change once we look in an envelope, as you claim, the equation above will show that we will always gain or lose something by switching, on average. How can that be? INic 03:27, 7 February 2006 (UTC)
I do not understand the question. What does the equation say will happen when we look into the envelope? --NeoUrfahraner 08:59, 7 February 2006 (UTC)
The probabilities in this equation (1/2 and 1/2) are exactly the same probabilities as the ones in the equation at step 6. Thus, if you change the probabilities in the equation at step 6 you have to change those above as well. And as soon as you alter the probabilities in the equation above you will end up with a net gain or a net loss from switching, on average. So my question is: what's wrong with this equation? INic 04:07, 8 February 2006 (UTC)
The exact meaning of the equation depends on the meaning of A. Is A a random variable or a real number? --NeoUrfahraner 06:50, 8 February 2006 (UTC)
Aha interesting. Please explain what happens in both cases. INic 00:56, 11 February 2006 (UTC)
As I said, I do not understand the question. Maybe A is neither a random variable nor a real number. Please make a clear formulation of your question. --NeoUrfahraner 06:30, 11 February 2006 (UTC)
What other options than a random variable and a real number do you see? Please start to explain what is wrong if A is an unknown real constant. INic 01:24, 14 February 2006 (UTC)
If A is an unknown real constant, I get (1/2)(2A - A) + (1/2)(A - 2A) = 0. What should be wrong with this result? --NeoUrfahraner 06:17, 14 February 2006 (UTC)
But after the observation you use conditional probabilities instead of 1/2 and 1/2 for the 'win' and 'lose' alternatives as you've explained before. However, in this expression above that maneuver will result in a sure win or a sure loss when switching envelopes, i.e., not zero, as the conditional probability can't be 1/2 "for every n". So what is wrong with this expression? In particular, it will never agree with the expression for the expected gain in the article. INic 01:25, 17 February 2006 (UTC)
I still do not understand. In P(B=m|A=n) the letter A just happens to be the same letter but has a completely different meaning. In particular, it is not an unknown real constant. Are you saying that because A has some particular meaning in my equation it has to have the same meaning in your equation? --NeoUrfahraner 08:34, 17 February 2006 (UTC)
Aha, so the probability shift when looking in one envelope depends on how we define the variable A? That is really amazing. I thought that the probability shift happened as soon as someone actually looked into one envelope. Now you say that the probability shift happens as soon as we define A in a certain way. Hmmm. I have to confess: I understand less and less about the subjectivistic way of thinking. INic 00:16, 20 February 2006 (UTC)
The meaning of your question depends on the meaning of A. Now you say A is a variable, earlier you said it is an unknown real constant. I still do not understand your question. --NeoUrfahraner 19:26, 21 February 2006 (UTC)
But don't the probabilities change as soon as you look in an envelope? As soon as you look in an envelope you change to conditional probabilities, right? Or does someone have to define A for you before you make the probability shift? I always thought it was the extra information you got when you saw the envelope content that did the trick. The strategy for winning in the Discussion section, for example, doesn't require that anyone define any A first, right? And that winning strategy after looking is the proof that there really is a probability shift after looking, isn't it? INic 01:08, 22 February 2006 (UTC)
Yes, I switch to the conditional probability when I look into the envelope, but I still do not understand how the equation above shows that we will always gain or lose something by switching, on average. --NeoUrfahraner 14:36, 22 February 2006 (UTC)
OK, I'll try to be more explicit. When you look in an envelope and see B (I'm using B here to not confuse it with the A in the formula above) you replace P(win) = 1/2 and P(lose) = 1/2 with P(win) = P(win | B) and P(lose) = P(lose | B), right? The latter aren't in general (over the prior) 1/2, and that's the subjectivistic solution, right? However, this probability shift causes problems if we use the formula above to determine the expected gain of switching. If we use that formula we have no paradox even if we managed to have a uniform prior probability distribution without borders. In particular, this formula will in every instance give a different value for the question "what is the expected value if we switch, given that we found B?" than the other "ordinary" formula. If the ordinary formula gives the right answer to what we can expect if we switch, what is in that case wrong with the formula above? They can't both be true. INic 04:37, 23 February 2006 (UTC)
OK, B is a random variable that takes the real value b with probability P(B = b), where Σ_b P(B = b) = 1.
Then the conditional expectation of the gain from switching is some function E(switch|B=b)=f(b). f(b) is positive for some b, negative for other b, and maybe zero for other b. --NeoUrfahraner 10:38, 23 February 2006 (UTC)
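The sign behavior of f(b) described above can be checked exactly for a small finite prior. A minimal sketch in Python, assuming a hypothetical prior (two equally likely pairs, (2, 4) and (4, 8); an illustrative choice, not one specified by either participant):

```python
from collections import defaultdict
from fractions import Fraction

# Hypothetical finite prior (illustration only): the envelope pair is
# (2, 4) or (4, 8), each with probability 1/2; the envelope we open
# is chosen uniformly from the pair.
pairs = {(2, 4): Fraction(1, 2), (4, 8): Fraction(1, 2)}

weight = defaultdict(Fraction)  # weight[b] = P(B = b)
gain = defaultdict(Fraction)    # gain[b] = E[(C - B) when B = b] * P(B = b)
for (lo, hi), p in pairs.items():
    for b, other in ((lo, hi), (hi, lo)):
        weight[b] += p * Fraction(1, 2)
        gain[b] += p * Fraction(1, 2) * (other - b)

# f(b) = E(switch | B = b), the conditional expected gain from switching.
f = {b: gain[b] / weight[b] for b in sorted(weight)}
print(f)  # f(2) = 2, f(4) = 1, f(8) = -4
```

For this prior f is positive at the lower border and in the interior but negative at the upper border, while the unconditional expectation Σ_b P(B = b) f(b) is still exactly 0, which is the coexistence of E(switch) = 0 with nonzero conditional values claimed above.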
But you haven't answered my question. My question was: which way to calculate the expectation of switching is the correct one? The formula above or the formula found in the paradoxical argument? They never agree you see, so they can't both be right. In addition, I want to know what is wrong with the one you find is the wrong one. INic 20:44, 27 February 2006 (UTC)
The calculations E(switch) = 0 and E(switch | B = b) ≠ 0 for some b are both correct and do not contradict each other. --NeoUrfahraner 15:42, 28 February 2006 (UTC)
Ok, I'll try to be as explicit as I possibly can here. Assume you've found 512 monetary units in the first envelope you opened. The other envelope can be a win or a loss depending on what it contains. The expected gain when picking the other envelope is by definition
E(switch | B = 512) = P(win | B = 512) · win + P(lose | B = 512) · lose
When we've looked into the envelope and seen 512, the (subjective) conditional probabilities above are fixed. For example, if you think that 512 monetary units is all the money I've got, the probability that you lose by switching is one and that you win is zero. To make this very clear we replace the expressions for the conditional probabilities with two letters representing your actual values in this case, p and q, say,
E(switch | B = 512) = p · win + q · lose
However, according to one way of thinking win is defined as A and lose as -A, and according to another way of thinking win is defined as B and lose as -B/2 (you can replace B with b here if you want). The problem now is that both these views can't be right, as
pA + q(-A) ≠ pB + q(-B/2)
in general. In particular, if p = q = 1 / 2 we will win by switching according to one view but win nothing when switching according to the other. So which way of computing the expected gain is the correct one? And what is wrong with the wrong one? INic 16:43, 3 March 2006 (UTC)
Maybe you should attend a better course in probability. The correct formula (see Ash, Real Analysis and Probability, Example 6.3.5a) is
E(switch | B = 512) = Σ_n n · P(switch = n | B = 512)
--NeoUrfahraner 18:02, 3 March 2006 (UTC)
But this is exactly what I wrote above. In your formula n can only take two values, say u and v (or rather, all other values are multiplied by zero and vanish anyway). This means that your formula is equivalent to
E(switch | B = 512) = uP(switch = u | B = 512) + vP(switch = v | B = 512)
Without any loss of generality we can assume that u is the best and v is the worst alternative. Replace u by win and v by lose in your formula and you will discover my formula above. Please answer my questions now. INic 04:11, 7 March 2006 (UTC)
It is very strange to me how you are able to obtain two different results from one clear formula. Let C denote the random amount in the other envelope. If C = c, your gain from switching is c - 512. So by substituting n = c - 512, the formula becomes
E(switch | B = 512) = Σ_c (c - 512) · P(C = c | B = 512).
Since only c = 256 and c = 1024 have nonzero P(C = c | B = 512), the formula gives
E(switch | B = 512) = 512P(C = 1024 | B = 512) − 256P(C = 256 | B = 512).
Where do you see any ambiguity or contradiction? --NeoUrfahraner 05:23, 7 March 2006 (UTC)
OK, let's say we denote the win "2A - A" and the loss "A - 2A", where A is an unknown constant. We get
E(switch) = P(C = win | B = 512) · (2A - A) + P(C = lose | B = 512) · (A - 2A)   (*)
(if we found 512 in our first envelope) which is zero only if
P(C = win | B = 512) = P(C = lose | B = 512) = 1/2.
However when the formula (*) is zero your expectation formula isn't zero. As a result they can't both be true. All I want to know is which formula is the correct one in the single case? If one formula says "switch" and the other says "stay" for example, which one should I follow? INic 23:54, 17 March 2006 (UTC)
I see. You lack some basic understanding of algebra. If you denote the win by "2A - A" the meaning of A is fixed. You must not use the same A in a different equation (like loss = A - 2A) without justifying that you are indeed allowed to use it. --NeoUrfahraner 08:50, 18 March 2006 (UTC)
This is interesting. The envelopes contain two and only two values, right? One is twice the other, right? Without any loss of generality I can denote the smaller of them A and the larger of them 2A, right? Now, if I win by switching I know I will win 2A - A = A, and if I lose by switching my loss will be A - 2A = -A, right? Or do I have to justify this to you in some way? At 06:17, 14 February you had no problems understanding this kind of simple algebra. What's your problem now? INic 22:47, 18 March 2006 (UTC)
It is about time you finally decided how you define A, but you have changed the definition of A again. You denote the smaller amount by A; since we found B = 512 in the envelope, A is either 256 or 512. If A = 256, one loses 2A - A = 256 by swapping; if A = 512, one wins 2A - A = 512 by swapping. In both cases the equation (1/2)(2A - A) + (1/2)(A - 2A) = 0 holds (cf. my answer from 06:17, 14 February), but in neither case does one get a contradiction. Anyway, I think it is better that you learn some algebra before we continue the discussion. --NeoUrfahraner 06:33, 20 March 2006 (UTC)
You always used to say that the probability of winning is 1/2 before we look in one envelope; after we look, the probability of winning is the conditional probability. Remember? Now you are suddenly telling me that the subjectivistic probability of winning depends on how the letter A is defined! Let's say I'm interested in learning what the probability is of winning just any amount when switching, after I've seen, say, 512 monetary units in one envelope. Are you telling me that it's impossible to talk about that probability unless I first specify what I mean by A? I have always heard that subjectivistic probabilities were applicable to all statements... Apparently there are serious limits to the use even of subjectivistic probabilities. INic 15:05, 20 March 2006 (UTC)
As you can see from my computation from 05:23, 7 March 2006, I do not need A at all. For some strange reason you are insisting on the usage of A. Anyway, if you want to get any meaningful result from using A, you have first to define what you mean by A. This is just simple algebra, but obviously too complicated for you. --NeoUrfahraner 15:41, 20 March 2006 (UTC)
I have no problem admitting that this is too complicated for me. In my world all this is much simpler. However, I try hard to understand the subjectivistic opinion here, even if it's perhaps far above my head. Anyway, have I understood you correctly when I say that you claim that we are allowed to express the expectation in two different ways here, both equally valid,
E(switch | B = 512) = 512P(C = 1024 | B = 512) − 256P(C = 256 | B = 512)
E(switch | B = 512) = (1/2)(2A - A) + (1/2)(A - 2A)
where A is an unknown constant? This despite the fact that the latter is always zero while the former in general isn't? And despite the fact that the probabilities for winning and losing are in general different in the two formulas? If that is the case, is it meaningful to ask for the probabilities p and q in the following more general formula for the expectation
E(switch | B = 512) = pW + qL
where W is what you win, if you win, and L is what you lose, if you lose? INic 21:58, 20 March 2006 (UTC)
You did not understand correctly. I never claimed that E(switch | B = 512) = (1/2)(2A - A) + (1/2)(A - 2A). --NeoUrfahraner 05:46, 21 March 2006 (UTC)
Aha you didn't? Hmmmm... OK, I'm sorry. So please tell me what is wrong with this equation. INic 17:42, 21 March 2006 (UTC)
It may be true, it may be false; it is underdetermined. I do not need this equation. If you think it is true and important, you have to provide a proof. --NeoUrfahraner 19:17, 21 March 2006 (UTC)
Aha, when is it true and when is it false? It's in fact not underdetermined at all, as its value is always zero. I have never seen a proof proving that something is "true and important", so I don't know what you mean here. Is it enough for you if I mention that it is part of the paradox we are discussing? INic 09:23, 22 March 2006 (UTC)
It is obviously true if and only if the considered distribution satisfies E(switch | B = 512) = 0. Since you provided neither evidence that the equation is important nor a proof that it is true for every distribution, there is no reason why it should be mentioned. Anyway, I think it is better that you learn some algebra before we continue the discussion. --NeoUrfahraner 11:41, 22 March 2006 (UTC)
This is quite funny. According to you I'm not allowed even to mention an equation here at the talk pages, despite the fact that it's mentioned in the very article under discussion! I have to provide an independent proof that it's "true and important" first. In addition you keep repeating that I'm too uneducated to be worthy of your attention. You here manage to violate not one but two basic rules of discussion in philosophy simultaneously. Of course I don't have to attend any special classes in order to have the right to ask questions, and I don't have to prove anything before I have the right to speak. My experience tells me that people break a basic rule of discussion only when they have run out of valid arguments completely. That's clearly the case here too. INic 20:46, 23 April 2006 (UTC)
OK, so you tell me that the equation
E(switch | B = 512) = (1/2)(2A - A) + (1/2)(A - 2A)
is correct iff
512P(C = 1024 | B = 512) − 256P(C = 256 | B = 512) = 0
Right? To begin with, this statement is like the watchmaker who was to repair a watch that had stopped completely. Instead of fixing the watch he claimed that it wasn't completely broken, as he had noticed that the watch showed the correct time exactly twice a day. Obviously, only a bad watchmaker would say something like that. The same comment holds for any mathematician claiming that a mathematical relation holds, but only accidentally.
Secondly, if we consider one such occasion when it's true according to you, we get
(1/2)(2A - A) + (1/2)(A - 2A) = 512p - 256q = 0
where p = P(C = 1024 | B = 512) and q = P(C = 256 | B = 512). From this we see that the probability of winning is 1/2 according to the first statement while it's only 1/3 according to the second. Likewise, the probability of losing is 1/2 according to the first but 2/3 according to the second. So if there are occasions when both statements are true, as you claim, which statement contains the correct probabilities? INic 20:46, 23 April 2006 (UTC)
As you see, nothing bad happens when you mention the equation E(switch | B = 512) = 0 at the talk page; it just makes no sense to mention it in the article when its relevance is not clear. Anyway, you are right that E(switch | B = 512) = 0 is true if p = P(C = 1024 | B = 512) = 1/3 and q = P(C = 256 | B = 512) = 2/3. How do you derive, however, that p = q = 1/2? --NeoUrfahraner 07:13, 24 April 2006 (UTC)
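The values p = 1/3 and q = 2/3 that make E(switch | B = 512) vanish can actually arise from a concrete prior. A sketch in Python, assuming (as an illustration only, not something stated in the thread) that the pair of amounts is (2^n, 2^(n+1)) with prior probability proportional to (1/2)^n:

```python
from fractions import Fraction

# Assumed prior (illustration only): the pair is (2**n, 2**(n + 1))
# with probability proportional to (1/2)**n, for n = 0..20.
N = 20
raw = [Fraction(1, 2) ** n for n in range(N + 1)]
total = sum(raw)
prior = [w / total for w in raw]

# B = 512 = 2**9 is consistent with exactly two pairs; either envelope
# of a pair is opened with probability 1/2.
w_small = prior[8] * Fraction(1, 2)  # pair (256, 512): the other holds 256
w_large = prior[9] * Fraction(1, 2)  # pair (512, 1024): the other holds 1024
p = w_large / (w_small + w_large)    # P(C = 1024 | B = 512)
q = w_small / (w_small + w_large)    # P(C = 256 | B = 512)
e_switch = 512 * p - 256 * q         # the formula from the thread

print(p, q, e_switch)  # 1/3 2/3 0
```

For this prior every interior amount gives the same p = 1/3, q = 2/3, so the conditional expected gain from switching is exactly zero away from the borders.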
OK, if you read the article you will discover that there are two conflicting ways to reason. That's actually what causes the paradox we are discussing here at the talk pages. It surprises me somewhat that all this is news to you. Anyway, according to one reasoning it's always possible to assume that the envelopes contain the amounts A and 2A, where A is a constant unknown to the player. As the difference between the two amounts—according to this mode of reasoning—is the constant A, we conclude that we either win A or lose A when switching to the other envelope. Since the situation is symmetric, the probability of winning must be the same as that of losing, don't you think? If you agree you by now—I hope—realize where the probability of 1/2 for each option comes from. INic 02:03, 26 April 2006 (UTC)
Yes, 1/2 is the unconditional probability. It should be no surprise, however, that the unconditional probability is different from the conditional probability. --NeoUrfahraner 06:10, 26 April 2006 (UTC)
Bingo! OK, let's say we have the situation described above where it's legitimate to use both the conditional probability as well as the unconditional probability according to you. My question then is, to repeat, what probability of winning, by switching envelope, is the correct one, do you think? Is it the conditional or is it the unconditional probability? The unconditional probability of winning is 1/2 and the conditional probability is only 1/3, as you might remember, so it really does matter for your future actions what you will answer here. INic 23:30, 29 April 2006 (UTC)
Just a moment. Is this still part of your question 2b or is this now a new topic? --NeoUrfahraner 15:09, 30 April 2006 (UTC)
Yes, I'm still trying to get an intelligible answer to my original question—as stated over seven months ago. Please don't even try to escape from answering this question now. INic 23:52, 30 April 2006 (UTC)
OK. Question 2b was And what is in that case the flaw in the other reasoning leading to no gain at all, i.e., by denoting the contents A and 2A and noting that we either gain A or lose A if we switch?. There is no flaw in the other reasoning, so you will wait forever to find the nonexistent flaw... You are just comparing apples and pears, i.e. conditional and unconditional probability. --NeoUrfahraner 05:29, 1 May 2006 (UTC)
I put on the record that you are desperately trying to escape from answering my question from 23:30, 29 April 2006 and want a fresh start on the issue instead, with initial opinions that differ considerably from those you once had. So please tell me, have I understood your current opinion correctly: that the conditional and the unconditional probability can both always be used when calculating the probability that we will win money by taking the other envelope? INic 01:16, 2 May 2006 (UTC)
Yes, it depends on what you want to compute. If you swap conditionally, e.g. dependent on the contents of the envelope, you have to use the conditional probability; if you swap unconditionally, i.e. independent from available observations, you may use the unconditional probability. Do you still think that there is a flaw "in the other reasoning"? --NeoUrfahraner 06:46, 2 May 2006 (UTC)
OK, let's say I want to compute the probability for the event that the other envelope contains more money than what I have in my first envelope. I then switch dependent on what the probability is found to be for that event. If the probability is at least 1/2 then I'll switch, otherwise I won't. Please tell me how to compute the probability I need here. INic 10:48, 2 May 2006 (UTC)
How about first answering my question? Anyway, using the assumptions of the article and denoting by B the random amount in the envelope you open and by C the random amount in the closed envelope, if you find b in the envelope, the probability that you find more in the other envelope is
P(C = 2b | B = b) = P(C = 2b and B = b) / P(B = b). --NeoUrfahraner 13:24, 2 May 2006 (UTC)
So why is it not correct to use the unconditional probability in this case? INic 16:59, 2 May 2006 (UTC)
The unconditional probability does not depend on b, so it will give the same result for every b. This does not help when you want to make a decision based on what you found in the envelope; it just tells you whether you should always switch or never switch. Now how about answering my question from 06:46, 2 May 2006? --NeoUrfahraner 18:41, 2 May 2006 (UTC)
I never said that I opened the first envelope. I might have opened it, I might not have opened it. My knowledge in this respect is not specified in my question. Now, what formula should I use in this case, according to you? INic 20:27, 2 May 2006 (UTC)
You should use the conditional probability based on the available observations. Now how about answering my question from 06:46, 2 May 2006? --NeoUrfahraner 06:30, 3 May 2006 (UTC)
Yes, I thought for a long time that you and all other subjectivists had exactly this opinion, i.e., that only when the player opens one envelope does she have to use the conditional probability; otherwise she has to use the unconditional probability. However, do you remember when we talked about the Clark and Schackel paper? You said at 15:18, 28 February 2006 that the fact that they use conditional probabilities in their reasoning doesn't by any means exclude the closed case. Now you claim that the conditional probability should only be used based upon the available observations. In the closed case there are no observations, as far as I can see. So have you changed your opinion here too? INic 08:58, 3 May 2006 (UTC)
Just a moment. Is this still part of your question 2b or is this now a different topic? And how about answering my question from 06:46, 2 May 2006? --NeoUrfahraner 09:24, 3 May 2006 (UTC)
Oh boy, this is funny! So you want another fresh start on "2b" already? Shall I put on the record that not only is my question from 23:30, 29 April 2006 too hard for you to answer, but my question from 08:58, 3 May 2006 as well? Well, go ahead, hit it one more time! Many believe that the third time you try something automatically brings you luck. You might need that. INic 10:37, 3 May 2006 (UTC)
So you are saying my answer from 06:46, 2 May 2006 does not answer your question from 23:30, 29 April 2006? Just to repeat: both probabilities of winning are correct, they are just answers to different questions. Are you now ready to answer my question from 06:46, 2 May 2006? --NeoUrfahraner 10:54, 3 May 2006 (UTC)
Aha no I didn't get that. So when given a specific question, how do I know what the correct answer is? Let's say the question is "What is the probability of winning, by switching envelope?" Does this question have two different equally correct answers, 1/2 and 1/3, or what? INic 17:12, 3 May 2006 (UTC)
The question is not yet fully specified. You additionally have to specify how you evaluate the experiment. If you count separately for every different amount in the first envelope, then in each group the relative frequency of winning will converge to the conditional probability. (In our example, the relative frequency of winning in the group where the first envelope contained 512 will converge to 1/3.) If you just count totals, ignoring the contents of the first envelope, then the correct answer is the unconditional probability. So considering all envelopes, not only the envelopes that contained 512, the relative frequency will converge to 1/2. Does this answer your question from 23:30, 29 April 2006? Are you now ready to answer my question from 06:46, 2 May 2006? --NeoUrfahraner 19:50, 3 May 2006 (UTC)
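The grouping argument can be illustrated by simulation. A sketch under an assumed prior (pairs (2^n, 2^(n+1)) with weight proportional to (1/2)^n; an illustrative choice, not specified by either participant): overall, an always-switcher wins about half the time, but within the group of trials where the opened envelope held 512 the win frequency is near 1/3.

```python
import random

random.seed(0)

# Assumed prior (illustration only): pair (2**n, 2**(n + 1)) with
# weight (1/2)**n, n = 0..20; the player always switches.
N, trials = 20, 200_000
weights = [0.5 ** n for n in range(N + 1)]

wins_total = 0
wins_512 = count_512 = 0  # the group where the first envelope held 512
for _ in range(trials):
    n = random.choices(range(N + 1), weights)[0]
    small, large = 2 ** n, 2 ** (n + 1)
    b, other = random.choice([(small, large), (large, small)])
    win = other > b
    wins_total += win
    if b == 512:
        wins_512 += win
        count_512 += 1

print(wins_total / trials)   # near 1/2: the unconditional frequency
print(wins_512 / count_512)  # near 1/3: the frequency within the group
```

Both claims from the comment above show up in the same run: the total count converges to the unconditional probability, each fixed-amount group to the conditional one.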
WOW! This is indeed very interesting! Have you transformed yourself into a frequentist all of a sudden? You never talked about experiments or frequencies before... Or do you do this here just to please me? ;-) Anyway, what is in that case your answer to the question above if no repetitions are possible, i.e., you only have the opportunity to switch once, in which case any rules for grouping experiments are meaningless? And I'm very curious now what your answer is to my question stated at 08:58, 3 May 2006, i.e., how is it possible to talk about conditional probabilities in the closed envelope case? How would you group these experiments to get the results you want? (And ah, concerning your question from 06:46, 2 May 2006: no, I don't think there is any flaw there.) INic 22:45, 3 May 2006 (UTC)
  • The law of large numbers is a mathematical law, so it objectively guarantees that the relative frequencies will converge, even if in reality no repetitions are possible (and independent from whether you are Bayesian or a frequentist).
  • With respect to your question stated at 08:58, 3 May 2006: I did not say that only when the player opens one envelope she has to use the conditional probability, otherwise she has to use the unconditional probability. and that the conditional probability should only be used based upon the available observations. I said if you swap unconditionally, i.e. independent from available observations, you may use the unconditional probability and that You should use the conditional probability based on the available observations. The conditional probability gives you information at a finer granularity; in particular, the law of total probability shows that the conditional probability also includes the information contained in the total/unconditional probability. It is just that the computation of the conditional probability is usually more work than the computation of the unconditional probability. So using the conditional probability for the closed envelope case is more work than necessary (if you do not need the additional work elsewhere), but it is possible. It is also possible that using conditional probabilities based on unobserved events could simplify some calculations, although I do not know a good example at the moment.
  • Since you agree that there is no flaw in the other reasoning, I consider question 2b as closed and will not give any additional answers in this thread. --NeoUrfahraner 07:18, 4 May 2006 (UTC)

Part 5 / Question 3

INic 13:05, 21 January 2006: Is it the case that the somehow inevitable shift in perspective from an ordinary probability to a conditional probability causes the sudden shift of the probabilities involved (when we look into one envelope), or is it the other way around? That is, does the basic subjectivistic notion that the probability for an event changes once the subject knows its outcome force the change of perspective from ordinary to conditional probabilities? INic 02:35, 5 February 2006 (UTC)

It is the case that the shift in perspective from an ordinary probability to a conditional probability causes the sudden shift of the probabilities involved. Once the outcome of an event is known, it makes sense to switch from the unconditional probability to the conditional probability under the condition that this specific outcome occurs. IMHO, however, this is in no way specific to subjectivists; AFAIK also frequentists are allowed to do that switch. --NeoUrfahraner 08:00, 6 February 2006 (UTC)
No, we are not. Let's say we have two subjects, one that observes the outcome of an experiment and one that doesn't. To a subjectivist it's not problematic that they will assign different probabilities to further events—as probabilities to a subjectivist only describe different states of mind. To an objectivist, however, this is not the case. We don't care who has seen what; the probabilities must be the same. Otherwise they wouldn't be objective. (This should be evident to anyone merely understanding the difference between the words subjective and objective.) INic 03:27, 7 February 2006 (UTC)
I would like to play poker against you with the rules that I am allowed to look into your cards and you are not allowed to look at either your cards or mine. I have a deck of cards, so we can start now. You don't care who has seen what anyway. --NeoUrfahraner 05:50, 7 February 2006 (UTC)
If you became blind tomorrow, does that mean that the world ceases to be visible? Or could it be that it's only you that doesn't happen to be able to see it? I don't think you really endorse an idealistic world view, do you? Or maybe you're a solipsist? However, I'm neither of these. Thus, to me your eagerness to cheat when playing poker reveals more about you than it reveals about the world. Fortunately. INic 04:07, 8 February 2006 (UTC)
Does it make a difference when you are allowed to look into your cards or not? --NeoUrfahraner 06:13, 8 February 2006 (UTC)
I think we've found a difference here. To me probability theory is a scientific tool, to you probability theory is all about cheating when playing cards. INic 00:56, 11 February 2006 (UTC)
How about answering my Yes/No-question with a "Yes" or a "No"? Does it make a difference when you are allowed to look into your cards? --NeoUrfahraner 11:29, 11 February 2006 (UTC)
The issue of games of chance is at the core of the early development of probability theory. The theory was all about finding strategies for how to win in, and calculations for evaluating, different games of chance. This is the context of the first subjectivistic interpretation of probability: the classical interpretation. As the paradoxes started to pop up, and as the initial promises of the classical view didn't deliver as expected, a wholly new view emerged. According to the new view the main application of probability theory was not as an aid for winning games of chance but rather as a scientific tool. Old paradoxes related to winning strategies for games were left in the dust. The new object of study was science (physics, social sciences, insurance, and so on), and the connection to games was kept only as sloppy examples in the first elementary textbooks in school. INic 02:11, 14 February 2006 (UTC)
Thus, to be able to answer your question we have to be aware of the way the sloppy examples in elementary textbooks are written. They never explicitly state what the experiment is, sometimes not even the sample space. The reader has to infer what the experiment is in every case. Your situation above is in the sloppy style of elementary textbooks, where the experiment isn't explicitly stated. In that sense it's formally impossible to answer your question correctly. As soon as you have a situation in accord with the scientific method I can answer your question. INic 02:11, 14 February 2006 (UTC)
Actually you already provided the Monty Hall problem as a situation in accord with the scientific method. I am still waiting to see how you proceed when you don't care which door the game host has opened. --NeoUrfahraner 06:22, 14 February 2006 (UTC)
Yes, I provided the Monty Hall problem exactly because it's well defined. INic 01:31, 17 February 2006 (UTC)
And obviously you did care which door Monty has opened (although your reasoning was not very scientific). --NeoUrfahraner 08:36, 17 February 2006 (UTC)
No I don't care what door Monty opens. I know he will open a door without a car, so I will act the same whatever door he opens. The door he chooses is totally irrelevant. INic 00:22, 20 February 2006 (UTC)
If you open, say, door 1 and you do not see which door Monty opens, how do you know whether you have to switch to door 2 or door 3? --NeoUrfahraner 19:27, 21 February 2006 (UTC)
I simply open the remaining door that isn't open. Simple as that. :-) You should really try to be a frequentist for just one day, and see how easy your life will be. ;-) INic 01:14, 22 February 2006 (UTC)
Assume Monty closes the door again and you did not see which door he opened. What will you do now? --NeoUrfahraner 14:25, 22 February 2006 (UTC)
If Monty doesn't remove any door he doesn't change the sample space, obviously. Every door will have a probability of 1/3 containing the car in that case. It doesn't matter what door I pick in this case. Is this case mysterious to you? INic 04:46, 23 February 2006 (UTC)
Now assume that you saw the goat behind the door before Monty closed the door again. What will you do in that case? --NeoUrfahraner 10:44, 23 February 2006 (UTC)
Aha, I think I know where you're heading. :-) If the player is a person suffering from severe amnesia she could in fact forget what door Monty opened. (Stranger things have happened.) In this case the correct solution seems to rely on psychology, precisely the way the subjectivists want to have it, and not even the frequentist ought to escape this obvious conclusion. Well, we can in fact escape this trap. The solution is the frequentistic definition of the concept of an experiment. If an experiment were understood as a single event in space and time, we would be in trouble indeed. To a frequentist, however, an experiment is something else: the complete set of instructions needed to carry out the actions that will produce the promised results. With this definition, scientific journals actually contain the experiments themselves! One purpose of experiments in scientific journals is that they should be objective, that is, the outcome should never depend on who is performing the experiment. (Compare with the situation in pseudoscience, where we often see the opposite claim.) Concerning the situation in your question, the answer is therefore not dependent on psychology: either the experiment states that the player always knows and remembers what door Monty opened, in which case she should switch to the other unopened door; or the experiment states that the player never remembers what door Monty opened, in which case it doesn't matter if she changes her mind or not. INic 21:17, 27 February 2006 (UTC)
Does it make a difference whether you saw a goat behind the door? --NeoUrfahraner 15:46, 28 February 2006 (UTC)
Please read my answer above. INic 16:53, 3 March 2006 (UTC)
Assume you remember what door Monty opened. What will you do in the case that you saw the goat behind the door before Monty closed the door again? --NeoUrfahraner 18:31, 3 March 2006 (UTC)
I need to know if Monty always shows a goat or not, that is, is this "open and see goat thing" part of an experiment or is it an outcome of another experiment? In the original Monty Hall problem this opening of doors is definitely part of the (new) experimental set up. However, your wording "in the case that" above indicates that this is just one possible outcome of an experiment. To a frequentist like myself (scientific) probabilities don't exist without reference to an experiment (as defined above). This means that the full specification of the experiment at hand is required before we can even talk about probabilities. INic 04:40, 7 March 2006 (UTC)
As stated in Monty Hall problem, the game host will always open a door with a goat. What will you do? --NeoUrfahraner 05:29, 7 March 2006 (UTC)
I've already told you what I would do in the Monty Hall problem. You, however, have had different strategies here depending on what is behind the doors(!). If it's a car behind one of the doors you would switch but if it's a nice dish made by me you wouldn't. You still have to explain why my dishes confuse your thoughts, even before you've tasted them... INic 00:04, 18 March 2006 (UTC)
So you are saying you will swap when you saw the goat behind the door before Monty closed the door again because the other door has probability 2/3 to hold the car. On 04:46, 23 February 2006 you said it doesn't matter what door you pick when you did not see which door he opened. So your decision depends on what you saw. On 03:27, 7 February 2006, however, you said you don't care who has seen what. So do you care whether you saw what door Monty opened? --NeoUrfahraner 11:34, 18 March 2006 (UTC)
To repeat: the experiment determines the probability at hand, not my vision. To me it doesn't matter how the information about the experiment reaches me. It can be via text, speech, vision, coded signals or what have you. For example Monty Hall can use different methods for informing about the changing of the sample space. Sometimes he just removes one door, sometimes he shows where one goat is, sometimes he paints something on a door with a goat, sometimes he just tells the player where a goat is, sometimes he lets one goat make a noise revealing where it is, sometimes he lets the player feel where a goat is, sometimes he lets the player smell where one goat is, sometimes he shows the player nothing and just informs the player that he will remove one goat from the remaining options, and so on. To a frequentist all these different ways of informing the player that the sample space has been reduced by one are equivalent. For Bayesians like you, however, vision seems to have a special epistemological and magical quality. I wonder if a blind man can be a Bayesian at all? Frequentism is in any case non-discriminatory. INic 23:23, 18 March 2006 (UTC)

[edit] Part 5 / Question 4

INic 12:52, 23 January 2006: If you look, where do the original probabilities go? Do they just disappear into the air like that? INic 02:35, 5 February 2006 (UTC)

The original probabilities still exist. It is only that they are no longer directly used in my model. If I compute, however, the conditional probability by the formula P(X|Y) = P(X,Y)/P(Y), I still make use of the original probabilities. --NeoUrfahraner 08:00, 6 February 2006 (UTC)
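For what it's worth, the conditional-probability formula above can be checked mechanically on the Monty Hall setup being discussed. A minimal sketch, under the usual assumptions (the player holds door 0 and the host opens a goat door uniformly at random among his legal choices; the door labels are arbitrary):

```python
from fractions import Fraction
from collections import defaultdict

# Joint distribution P(car position, door the host opens), assuming the
# player holds door 0 and the host opens a goat door uniformly among his
# legal choices (never door 0, never the car).
joint = defaultdict(Fraction)
for car in range(3):
    legal = [d for d in (1, 2) if d != car]
    for h in legal:
        joint[(car, h)] += Fraction(1, 3) / len(legal)

# P(X|Y) = P(X, Y) / P(Y) with X = "car behind door 2", Y = "host opened door 1"
p_y = sum(p for (car, h), p in joint.items() if h == 1)
p_x_given_y = joint[(2, 1)] / p_y
print(p_x_given_y)  # 2/3: switching wins with probability 2/3
```

Note that the original probability P(car = c) = 1/3 is exactly what feeds the joint distribution, which is the point being made here.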
That you throw away the original probabilities like that is a bad way of handling things. The fact that you chose the wrong dish earlier, due to your ability to ignore the original probability, shows this very clearly. INic 03:27, 7 February 2006 (UTC)
Maybe I did not choose the soup you preferred. This does not mean, however, that I did choose the wrong dish. --NeoUrfahraner 06:31, 7 February 2006 (UTC)
You chose the bad dish with a probability of 2/3. Was that really what you wanted? Anyway, I'm happy to take the other one. Maybe we've found a win-win situation for a subjectivist and a frequentist? INic 04:07, 8 February 2006 (UTC)
Yes. --NeoUrfahraner 09:25, 8 February 2006 (UTC)
It's interesting that you have to cheat to win over me in a game, but I manage to win here without cheating at all. All I did was to let you confuse yourself with your own ideas first. INic 00:56, 11 February 2006 (UTC)
It is easy to win in a win-win situation. ;-) --NeoUrfahraner 06:34, 11 February 2006 (UTC)
In reality this isn't a win-win, it's a win-lose where you lose in 2/3 of all cases. INic 02:15, 14 February 2006 (UTC)
You said it is a win-win situation. Anyway, I am still waiting to see how you distribute these 2/3 to the remaining 2 doors in the Monty Hall problem. --NeoUrfahraner 06:23, 14 February 2006 (UTC)
I didn't claim it was a win-win. I just stated an ironic, rhetorical question. I'm sorry about the irony. (I would never have used irony if I suspected that there were a minute risk you wouldn't get it.) INic 01:38, 17 February 2006 (UTC)

[edit] A strategy that gives you more than a 50% chance of picking the best envelope

Here is a practical strategy that gives you a probability of more than 0.5 to pick the best envelope. This shows that you indeed gain information when you look in one of the envelopes.

No, this doesn't show that at all. That we gain information is trivially true, at least if we define information as Shannon does. Your strategy depends heavily on the assumption you state below; without that assumption you can't use your strategy. But the paradox is still there without that assumption and you still get trivial (but useless) information when you open one envelope. INic 03:31, 3 January 2006 (UTC)
What part of the assumption is violated in the paradox? Why is the information useless although it can be used to increase the average gain? --NeoUrfahraner 11:57, 3 January 2006 (UTC)
1) That is not how it works. An extra assumption does not in general violate the original assumptions of a theory or situation. (If that is indeed the case you will end up with a contradictory theory/situation.) What in general happens is that the original theory/situation is narrowed to the cases where the extra assumption holds. See for example what happens to Euclidean geometry if the fifth postulate is removed or replaced with another postulate. 2) The information is useless because it can't be used to increase the average gain in general. Remember when you got fish soup in your first 'envelope'? Soup can't be translated to a rational number so the 'general' strategy can't be used at all. Yet the paradox is there even in this case. I'm sure you can think of many many more cases like that. INic 22:24, 5 January 2006 (UTC)
With information I mean specific information about which envelope is the largest. As for the fish soup discussion, I guess it boils down to (no pun intended) if you think that well-tastedness of fish soup cannot be encoded as a number. So, is there a set where there is a total order but the cardinality is greater than that of the real numbers? If so, then you may have a point, otherwise I just count your dishes using real numbers. norpan 23:49, 5 January 2006 (UTC)
Sure, every set—of whatever cardinality—can be well-ordered (which implies a total ordering). But what has this to do with this paradox? I don't get that. Concerning the fish soup, there is more to it than just the encoding to a number. First of all, you never know in these non-numerical cases what aspect of the 'envelope content' to encode to a number. Neo thought he could encode the fish soup to a number which led him to stick with the fish soup. However, what he didn't know was that he had encoded the wrong aspect of the dish. The other dish was fish soup as well, looking exactly the same as the first one, they only differed in taste. Even if he had tasted the soup he would still not be sure if he had encoded all relevant aspects of the dish to his unique preference number. And let's say he had figured out all relevant aspects of the dish, how would he calculate his unique magic number? I'll consider it a safe bet that a person won't be able to encode different dishes to rational numbers in a consistent way, for a number of reasons. For example, the very same dish tastes better in the beginning when you're hungry. Taste is also affected by what you've eaten before, your mood, how it is served and so on. On top of all this personal preferences are often non-transitive, as even economists have begun to realize. I hope that you by now realize that the real number line, rigid and transitive, is a bad model for personal preferences. This is why your strategy generally fails. INic 04:04, 6 January 2006 (UTC)
I actually read some of your fish soup stuff now, and it seems clear to me that you indeed say that for instance a dish can be twice as tasty as another dish. If that's the case then how do you get away with that without mapping the tastiness to numbers? But enough with the fish soup. This paradox is about numbers, so if you want the fish soup paradox, please put it on a separate page! norpan 11:05, 7 January 2006 (UTC)
Good question, but there are two good answers to that. First off, the argument doesn't require that one envelope contains something exactly twice as valuable as the other envelope. Any difference in value will do actually. The difference doesn't even have to be objective in nature but any subjective difference in value will do. If you think soup A is much better than soup B and I think soup B is just slightly better than A that is fine. The paradox is still there. The paradox doesn't require exact numerical and objective values. However, your strategy does. Secondly, values aren't in general mappable to the real line. You can easily have a situation where B is twice as valuable as A, C is twice as valuable as B, D is twice as valuable as C and yet A twice as valuable as D. Apparently real numbers can't describe this situation. The paradox doesn't care if the value-relation is transitive or not. However, your strategy requires transitivity. (And yes, I've already put up a separate page. But thanks for the advice!) INic 23:08, 7 January 2006 (UTC)
The strategy also doesn't require exact numerical and objective values. You can also select a random dish before looking at the offered dish and then switch if and only if the offered dish is less tasty than the random dish. In that case the probability to get the better dish is greater than 50%. --NeoUrfahraner 17:40, 8 January 2006 (UTC)
It's impossible to pick a random dish prima facie. You can only pick an arbitrary dish, but that is something entirely different. The only way to pick a dish in the manner the strategy here requires is to first map every real (or rational at least) number to a particular dish having a tastiness represented by that number. A Gödel numbering for dishes if you like. But that kind of Gödel numbering is impossible to do. Hence the strategy breaks down, because it depends heavily on topological properties of the real line. INic 00:30, 10 January 2006 (UTC)
So you agree that the strategy at least works for money because you can assign numbers? --NeoUrfahraner 05:18, 10 January 2006 (UTC)
If you assume a 1-1 mapping between money in one currency and real numbers the two sets are isomorphic, i.e., the same thing essentially. However, usually we think of money as not indefinitely divisible as well as having some upper limit. In that case we can't make an isomorphism. Please note that it's the property of money having numbers attached to them that makes this mapping possible, not the fact that money also is a measure of value. If you get a piece of gold instead of money in your envelope for example, the strategy breaks down. Say your strategy gave you the number 5 to compare the gold to. Should that be interpreted as meaning the weight of the piece of gold in grams? In pounds? Or should it be interpreted as the monetary value of the piece of gold? In what currency in that case? US dollar? Euro? Yen? Or maybe the number of gold atoms? INic 21:18, 10 January 2006 (UTC)

Where does the strategy require a 1-1 mapping between money in one currency and real numbers, where does it require that something is indefinitely divisible or limited? All you need is a total order relation. --NeoUrfahraner 05:12, 11 January 2006 (UTC)

If you allow for different currencies in the envelopes you can have the same numerical amount of money in both envelopes but in different currencies. The paradox is still there but the strategy fails. If you allow for a smallest unit of money there are better strategies than this. When you find an odd number of the smallest unit in an envelope you know it's the smallest without having to resort to the strategy. Actually, you're stupid if you use the strategy in this case. That will happen in half of all cases. If you also allow for an upper limit you know that if the content is more than half of the upper limit that must be the largest envelope. Even in this case you're stupid if you use the strategy. INic 21:53, 12 January 2006 (UTC)

Did anybody say this is the best strategy in every case? In your posting from 11:55, 19 September 2005 (UTC) you also did not suggest the best strategy but I did not say you're stupid. --NeoUrfahraner 05:13, 13 January 2006 (UTC)

I'm only saying that different initial conditions and constraints on the situation suggest different strategies. The strategy proposed here suggests itself when we have real numbers as the constraint for the envelope content. Other strategies suggest themselves when we have a limited set of the natural numbers as the constraint for the envelope content. None of these strategies suggests itself when we have none of these constraints, simply because they don't work without the constraints. However, the paradox isn't limited to any of these constraints. To make general ontological claims about the importance of looking in one envelope based on one of these strategies is therefore misleading at best. INic 04:11, 14 January 2006 (UTC)

What should be the paradox when you get the information that one envelope contains 1 Euro more than the other? What is the paradox in the game I suggested at 04:57, 19 September 2005 (UTC)? --NeoUrfahraner 06:33, 14 January 2006 (UTC)

No I don't see any paradoxes in these cases. Please show me.

What is your strategy if you are told that one envelope contains twice as much money as the other one, but the currencies are different? In fact, you are told that they both contain the same amount of monetary units. You pick one envelope and find that it contains m monetary units of a currency you haven't heard of before. Now, what is your winning strategy that will prove that mere looking in one envelope has that ontological import that you claim?

Please note that we are safely within the boundaries of the original formulation of the paradox in this case! However, the assumption below isn't satisfied. This shows in an explicit manner that this assumption is totally unrelated to the paradox. Both in its general (any content) and its most classic sense (only money as envelope content). INic 01:46, 15 January 2006 (UTC)

You said "the paradox isn't limited to any of these constraints". Now you are saying "I don't see any paradoxes in these cases". How can this be true at the same time?
Anyway, when there is an unknown currency in the envelope, I need some additional information (in particular, the exchange rate to a currency known to me). But why should this mean that you get trivial (but useless) information when you open one envelope? The exchange rate can easily be found out without looking into the second envelope. --NeoUrfahraner 06:15, 16 January 2006 (UTC)

That the paradox isn't restricted to having elements of R or N (or any subsets of these sets) as envelope content doesn't mean that you are allowed to alter anything and still get the same paradox. That would be quite odd.

Aha I got you!! :-) Suddenly you have to have some additional information that is not necessary for the paradox! Obviously mere looking in one envelope isn't the magic act that gives you sufficient subjective information, as you've always claimed. This implies that the subjective probability that you picked the best envelope is still the same as before the observation: 1/2. Consequently step 2 is clearly true even for the subjectivist in this case and hence the paradox is now in need of a true subjectivistic solution. INic 11:30, 16 January 2006 (UTC)

I never claimed it is sufficient for every purpose. You, however, claimed it is useless. There is, however, something between sufficient and useless. --NeoUrfahraner 05:20, 17 January 2006 (UTC)

Well you've always claimed that merely looking in an envelope is sufficient for the purpose of altering its probability measure. And as an argument supporting this claim norpan points to the "universal" strategy we discuss here, and you agree. Gdr does the same thing in the article itself under the Discussion heading, using a similar argument. But as I've shown over and over this argument is flawed. And now you admit that yourself. It's not the case that merely looking in one envelope gets anyone sufficient information for developing a winning strategy. And hence there is no objective support for the claim that merely looking in one envelope should alter its probabilities. You can only claim that while referring to the general philosophy of Bayesianism. People with other views don't have to believe that. INic 14:40, 21 January 2006 (UTC)

In addition I'd like to know what paper in the published literature the argument under the Discussion heading refers to. If there is no published paper employing this argument this is original research and should be deleted from Wikipedia. INic 14:59, 22 January 2006 (UTC)

See below. I said it already at 10:21, 31 December 2005 (UTC). It is discussed in Cover, T. (1987): Open Problems in Communication and Computation. Problem 5.1: Pick the largest number. Springer, New York. --NeoUrfahraner 05:24, 23 January 2006 (UTC)

So what author talking about the envelope paradox in the published literature refers to T. Cover? Does T. Cover talk about the envelope paradox when discussing his Problem 5.1? Admit it, no one in the published literature makes this connection. If you can't show me a single author really defending this argument seriously I'll delete that part from the article. INic 14:12, 23 January 2006 (UTC)

I think you should better read the papers you cited in "Further reading". It was not difficult for me to find one referring to T. Cover. Do you need a hint? --NeoUrfahraner 14:24, 23 January 2006 (UTC)

Wikipedia shouldn't be a guessing game. The readers of the articles ought to be confident that the arguments presented are reflected in the literature. If every reader has to read all the literature to judge the truthfulness of every article, what's the point of having an encyclopedia at all? Please provide the citation or else I'll delete it. INic 15:18, 23 January 2006 (UTC)

In other words, you did not read the papers you cited. --NeoUrfahraner 15:29, 23 January 2006 (UTC)

I've read them all, and other papers not cited here as well. Some of them I read long ago, though. INic 15:43, 23 January 2006 (UTC)

So where's the citation you promised? INic 19:47, 23 January 2006 (UTC)

I noticed that also Christensen and Utts (1992) mentions this strategy. Thanx. INic 02:25, 26 January 2006 (UTC)

[edit] Assumption

  1. The envelopes contain two non-equal real numbers

[edit] The strategy

  • pick a rational number using a probability distribution that gives non-zero probability to every positive rational number. (This is simple: say your rational number is p/q; flip a coin until it comes up tails and use the number of flips for p, then flip a coin again until it comes up tails and use the number of flips for q. It's easy to see that you will get any positive rational number with a probability greater than 0.)
  • if the number p/q you picked is larger than the number in the envelope you picked, switch.

[edit] Why it works

There are three cases, all with probability greater than 0.

  1. Your number p/q is less than both the numbers in the envelopes, with probability p1. (For every real number there are smaller rational numbers.)
  2. Your number p/q is between the numbers in the envelopes, with probability p2. (For every pair of distinct real numbers there are rational numbers between them.)
  3. Your number p/q is larger than both the numbers in the envelopes, with probability p3. (For every real number there are larger rational numbers.)
  • For case 1 and 3 you will pick the higher envelope with a probability of 0.5.
  • For case 2 you will always switch if you have the lower envelope and not switch if you have the higher. Thus you will pick the higher envelope with a probability of 1.
  • The total probability of picking the higher envelope is 0.5*p1 + 1*p2 + 0.5*p3, and since p1+p2+p3 = 1 this simplifies to 0.5*(1+p2). Since p2 is greater than 0, the total probability is greater than 0.5.

norpan 13:58, 30 December 2005 (UTC)
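The strategy above is easy to check by simulation. The sketch below follows the coin-flip recipe literally; the envelope values 1.5 and 2.5 are arbitrary choices of mine, picked so that the random threshold lands between them reasonably often and the advantage over 0.5 is visible in a modest number of trials:

```python
import random

def coin_flip_rational(rng):
    # Flip a fair coin until tails; the number of flips is geometrically
    # distributed, so every positive integer has probability > 0, and hence
    # so does every positive rational p/q.
    def flips():
        n = 1
        while rng.random() < 0.5:
            n += 1
        return n
    p, q = flips(), flips()
    return p / q

def play_once(a, b, rng):
    # Open one of the two envelopes (values a < b) at random, switch exactly
    # when the random threshold exceeds the value seen, and report whether
    # we end up holding the larger value b.
    opened, other = (a, b) if rng.random() < 0.5 else (b, a)
    threshold = coin_flip_rational(rng)
    final = other if threshold > opened else opened
    return final == b

rng = random.Random(1)
trials = 200_000
wins = sum(play_once(1.5, 2.5, rng) for _ in range(trials))
print(wins / trials)  # strictly above 0.5 for these values
```

With envelope values far from where the threshold's probability mass sits (say 1000 and 2000), p2 is tiny and the edge over 0.5, while still positive, becomes practically invisible; the strategy guarantees more than 0.5, not usefully more.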

I agree. I added this strategy to de:Umtauschparadoxon. Depending on your knowledge about the distribution of the amount in the envelopes, this might not be the best strategy. It will, however, work in every case, even when the money in the envelopes is picked deterministically. According to de:Zwei-Zettel-Spiel, this strategy (not specific to the envelope paradox) is discussed in Cover, T. (1987): Open Problems in Communication and Computation. Problem 5.1: Pick the largest number. Springer, New York. --NeoUrfahraner 10:21, 31 December 2005 (UTC)

No, it doesn't work in every case. Put the same numerical amount of money in the envelopes but of different currencies and the strategy is useless. However, the paradox is unaffected. INic 14:12, 23 January 2006 (UTC)

OK, if you prefer, I will restrict it to "works at least in every case where both envelopes contain money of the same currency" --NeoUrfahraner 14:24, 23 January 2006 (UTC)

Yes, the same currency and also at the same time. Or else you could have the same amount of the same currency in the envelopes and the only difference is when the money will arrive in your bank account. Due to inflation or deflation over time the values will in general be different, and the paradox will be present. The strategy is useless, though. INic 15:18, 23 January 2006 (UTC)

OK, I accept the additional restriction that the envelopes contain cash. --NeoUrfahraner 15:29, 23 January 2006 (UTC)

Does this mean that you also accept the statement that this strategy isn't a universally useful strategy, i.e., that it's possible to state the paradox in a way that makes this strategy useless? In particular, do you now admit that your reasoning at 06:30, 17 September 2005 was wrong? INic 00:28, 5 February 2006 (UTC)

It depends on the definition of universally. It is universal in the sense that it is independent of exactly how the money was filled into the envelopes; in particular, it even works when the envelopes are filled deterministically or with a completely unknown random mechanism. It is not universal, however, in the sense that you can change the rules arbitrarily in an unforeseen way. Feel free, however, to state the paradox in a way that makes this strategy useless; we may then discuss it in detail. --NeoUrfahraner 20:45, 5 February 2006 (UTC)

But I've shown you a lot of examples already where this strategy can't be used but the paradox is still there. Why do you pretend that you suffer from amnesia every so often? It makes it hard to talk to you when I must repeat everything I've said over and over and over again. Anyway, by universal strategy I mean a strategy that is useful whenever we have an instance of the paradox. In this sense the current strategy isn't universal, and any ontological claims about the paradox drawn from it are thus false. INic 03:54, 7 February 2006 (UTC)

In these examples you change the rules in an unforeseen way. --NeoUrfahraner 05:54, 7 February 2006 (UTC)

Not at all. The rules aren't changed a bit. The paradox only talks about money in the envelopes. Nowhere is it stated that the money must be of the same currency or be paid to the winner at the same time. Those are requirements you have made up in your mind perhaps. Face it Neo, admit that the strategy isn't in general applicable to the paradox. INic 04:24, 8 February 2006 (UTC)

[edit] New question

I have a question regarding the statement of "greater amount of money in envelope=less likely to swap". This seems economically true, but mathematically false (disregarding any economic utility theory [i.e: convexity]), if the goal is to get the highest number possible. Perhaps using money is confusing the issue? Thanks

No answer by email, but one question: What part of the article are you referring to? I did not find the sentence "greater amount of money in envelope=less likely to swap". --NeoUrfahraner 05:06, 6 June 2006 (UTC)

Discussion Step 2 seems plausible because the envelopes are identical and we chose one at random. So, before looking in the envelope, it would be correct to deduce that the probability of having picked the smaller envelope is ½. However, once we have looked in the envelope, we have new information — namely the value A — and so the conditional probability given A must be used.

It may seem that because we are ignorant of the distribution of values in the envelopes we have gained no information by opening the envelope. But that is far from the case: the lack of a uniform probability distribution on the positive real numbers means that some values of A must be less common than others, and in particular, the larger A is, the less likely we are to find it in the envelope. -I was referring to this in my question

OK. It is not true that the probability must be strictly decreasing, i.e. it is not true that if m>n then always P(A=m)<P(A=n). It might happen, for example, that small values are less probable, middle-sized values are more probable, and large values are again less probable. What must hold, however, is that for every ε > 0 there is an N0 such that P(A=n) < ε for every n > N0; otherwise you could construct a sequence n1, n2, ..., nk such that P(A=n1) + P(A=n2) + ... + P(A=nk) > 1. Does this answer your question? Should "in particular" be replaced by "simplified speaking"? --NeoUrfahraner 07:53, 7 June 2006 (UTC)
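The point that the probabilities need not be strictly decreasing, yet must eventually fall below any ε, can be illustrated with a toy distribution (the specific numbers below are made up purely for illustration):

```python
# A pmf on {1, 2, 3, ...}: small values rare, middle values common, and a
# geometric tail. It is not strictly decreasing, but only finitely many
# outcomes can each carry probability >= eps, since the total mass is 1.
probs = {1: 0.05, 2: 0.30, 3: 0.40}
p, n = 0.125, 4            # remaining 0.25 spread geometrically over n >= 4
while p > 1e-12:
    probs[n] = p
    n, p = n + 1, p / 2

assert abs(sum(probs.values()) - 1.0) < 1e-9   # (approximately) a pmf
assert probs[3] > probs[2] > probs[1]          # not strictly decreasing

eps = 0.01
n0 = max(n for n, q in probs.items() if q >= eps)
print(n0)  # beyond this index every P(A=n) is below eps
assert all(q < eps for n, q in probs.items() if n > n0)
```

Only finitely many outcomes (at most 1/ε of them) can have probability at least ε, which is exactly the argument in the preceding paragraph.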

A little technical, but yah, I think that I get the gist of it. Thanks for answering my question