Talk:Gambler's fallacy
From Wikipedia, the free encyclopedia
The example about the winner of a sports event being more likely to win the next belongs in the section on "not even", not the section on "not independent". I have moved it.
How about an explanation for the joke at the end?
- Your bomb doesn’t make other terrorists less likely to attack your plane, since no one even knows about it. Similarly, when tossing a coin, it may be unlikely that you will get ten heads in a row, but it doesn’t mean that after you have already got nine heads in a row, another head is less likely than a tail. Rafał Pocztarski 12:29, 4 Dec 2004 (UTC)
- Actually, I once ran an experiment to demonstrate it to someone who wouldn’t believe me: we were tossing a coin waiting for two heads in a row, and noting the third result after those two heads, and of course we counted more or less the same number of heads and tails. Rafał Pocztarski 03:03, 12 Dec 2004 (UTC)
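The experiment Rafał describes is easy to reproduce in software. A rough sketch (the trial count and seed are arbitrary choices, not from the discussion):

```python
import random

def third_after_two_heads(trials=100_000, seed=1):
    """Flip a coin repeatedly; whenever the two most recent flips
    were both heads, record the very next flip."""
    rng = random.Random(seed)
    recorded = {"H": 0, "T": 0}
    prev2 = []
    for _ in range(trials):
        flip = rng.choice("HT")
        if prev2 == ["H", "H"]:
            recorded[flip] += 1
        prev2 = (prev2 + [flip])[-2:]
    return recorded

# The recorded flips split roughly evenly between heads and tails.
print(third_after_two_heads())
```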
- I really don't think the joke needs an explanation - it goes along the same theories as were explained in the rest of the article. It's also really not that important, and explaining it would just ruin the joke. Oracleoftruth 09:29, May 26, 2005 (UTC)
Remove some repetition?
This article really seems to just repeat the same things in almost the same way, several times, often making it difficult to tell new concepts from repetitions of an old concept... I'll try to clean this up. Oracleoftruth 09:33, May 26, 2005 (UTC)
Reading it over more carefully, I realize that everything included is actually a different point, but it would still be good to separate them more clearly, and perhaps to fuse some more similar points together so it makes more sense. Oracleoftruth 02:05, May 27, 2005 (UTC)
Trouble understanding
Consider a slot machine that is set to a 50% probability of winning or losing. Assuming an infinite pile of money for both the slot machine and the players, if you stand at the slot machine for long enough, you will end up with the same amount of money that you started with. However, one can beat the machine by allowing someone else to quit while down and then playing until you're up, and then repeating the process, basically taking the money from those who lose. If the Gambler's Fallacy were accurate, then this would be impossible, and the machine would make money for the owner. It seems like this is a logical paradox. Even though it is extremely hard for me to accept, I understand that consecutive heads followed by a tails is just as likely as another heads, but since you're comparing heads to tails, it seems that the heads-heads-heads-heads-tails result is irrelevant while five consecutive heads in a row is not. -- Demonesque talk 18:20, 31 October 2005 (UTC)
- Slot machines don't fall into the category of "truly random events", as the results of a previous payout do affect future payouts (assuming the slot machine does occasionally award all the money it has collected). The effect you mentioned is real, in that your odds of coming out ahead are slightly better if the machine has been pumped full of money by the previous player. However, the odds are so horribly stacked against you to begin with in slots that this slight advantage is unlikely to overcome that huge disadvantage. Another way of putting it is that you are unlikely to get the big payout before you run out of funds. If you look at the odds of getting the big payout and how much you have to spend, you can figure the chances out for yourself. (If the odds are one in a million and you have a thousand coins to spend, then your chances are one in a thousand, or a bit better, if you "reinvest" small payouts in trying for the larger payout.) StuRat 18:47, 31 October 2005 (UTC)
- This is all rubbish. The machines are not (supposed to be) sensitive to how much money happens to be in their cash box. Haven't you ever seen a payout interrupted by there not being enough coins in the box? I certainly have. Similarly, sometimes play is interrupted by a machine needing its coin box emptied. I've played a lot of video poker (not the same as slot machines I admit), and they've worked out fair in that about the expected number of payoffs comes out of the machine, but you have to average over a very long time. Most people don't have the patience for this; I only recorded certain payoffs (sometimes four of a kind, usually straight flushes, and always royal flushes). One time my wife hit a royal flush after only 20 minutes of play (between the two of us). That's about 1% of the expected number of plays. She's also had a royal flush dealt. These things happen. We've also had a stretch of more than twice the expected number of plays, and they deinstalled the machine before that one paid off. These things also happen. --Mike Van Emmerik 21:25, 31 October 2005 (UTC)
- Let's say that any slot machine where the maximum win goes up with every loss will have progressively better payout odds as the losses accumulate. Now, whether a particular slot machine works that way or not, I do not know, ask the casino's operator. And whether the casino would continue to allow the machine to operate after the odds turn against them, I do not know. I suppose they might, in rare cases, for the good publicity it would generate. StuRat 21:44, 31 October 2005 (UTC)
- Ah. Perhaps you are talking about a progressive jackpot; with these, with every bet on machines linked to the jackpot's progressive meter, a small contribution is made to a special account that is used for the top prize. (Sometimes there are multiple jackpots, e.g. one for a royal flush in each of the four card suits; I'll assume only one jackpot, and only for the highest prize, let's say that's five cherries). In this case, the game does become more and more favourable for the player, although for the house the expected profit is the same (house edge for the game, less the jackpot contribution). So when the game passes the break-even point (most players will not know when this happens, but for certain games such as video poker, it can be calculated), it becomes profitable for the player, while still being profitable for the house. How is this possible? Because the house is transferring a portion of the losses made by all players to the eventual winner. Of course, if everyone waited for the jackpot to become break-even, then there would be no play, and the jackpot would remain at its reset value. So there is an opportunity for the player that knows the break-even point, and has the discipline not to play when the game is not yet profitable, provided that there are other players who are willing to play when the game is not profitable. The house is quite happy with this situation, because they still make a profit (albeit a reduced one because of the jackpot contribution), but this is considered to be a good investment because a large jackpot attracts customers, most of whom will play other games as well, purchase food, and so on. Note that the casino continues to make the same profit on the machines no matter how high the jackpot rises, since the jackpot payout is paid in advance from a proportion of past players' losses.
This is entirely different from the idea that payouts are related to the amount of money in the coin box (or otherwise related to players' past losses or wins). I don't believe that machines ever pay out more after heavier than expected losses. If they did, to maintain their expected profit, they would have to pay out less after lighter than average losses (or a certain sized win). Most players would consider that unfair. --Mike Van Emmerik 06:45, 1 November 2005 (UTC)
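Mike's break-even point can be sketched numerically. The payout figures below are invented for illustration; only the structure (ordinary payouts returning less than the bet, plus a jackpot that grows until someone wins it) comes from the discussion:

```python
def break_even_jackpot(bet, base_return, p_jackpot):
    """Jackpot size at which the player's expected value per bet is zero.

    base_return: expected fraction of the bet returned by all
    non-jackpot payouts (hypothetical figure);
    p_jackpot: per-play chance of hitting the jackpot (hypothetical).
    """
    return bet * (1.0 - base_return) / p_jackpot

# Hypothetical numbers: $1 bet, 95% returned by ordinary payouts,
# jackpot odds of 1 in 40,000 -> the game breaks even for the player
# once the jackpot reaches about $2000.
print(break_even_jackpot(1.0, 0.95, 1 / 40_000))
```

Above that jackpot value the game is positive expectation for the player, yet, as Mike says, the house has already banked its edge from the plays that built the jackpot up.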
- Yes, the progressive jackpot is what I was referring to. I don't quite follow how this is different from basing jackpot payouts on the number of coins in the coin box, however. In both cases a portion of losses seems to be transferred to the jackpot fund. What is the material difference between the two cases?
- For the coin box version, I would expect the casino to take some coins out when it gets full or replenish them if it empties beyond a certain point, so there should be a finite range of jackpot payouts. As for the odds discussion, I agree that the long run odds heavily favor the house in all cases, but think that individual "pulls" may be weighted against the house in rare cases, when the progressive jackpot is quite high. StuRat 16:42, 1 November 2005 (UTC)
- Two main differences: 1) with the progressive jackpot, the variation from normal payouts is explicit; there is a (usually large) meter clearly displaying the change of payoff(s). With your cash box sensitive idea, a player unaware of the previous history of the machine could be disadvantaged (or perhaps even advantaged, depending on the details) without knowing it. 2) Your scheme seems to require a change to the probability of certain payouts; with the progressive jackpot, the probability of all payouts remains the same, but the payoff for the jackpot increases in value. (After a jackpot win, of course, it suddenly reduces in value to the "reset" amount). As I've pointed out before, no individual game ("pull") will be weighted against the house, even with a large jackpot value, because the house already has money "in the bank" from previous losses by players. For example, the casino may make 4 cents of profit for every 1 cent contributed to the jackpot. So even for a $10,000 increase in the jackpot value, the casino already has $40,000 of profit. The jackpot won't get to $10,000 more than reset until there have been a million plays, so the house can't be at a long term disadvantage. (It can still make a short term loss if some high payoff combinations come up more frequently than usual, but this will be balanced in the long term by those combinations coming up less frequently at some time in the future.) The same (pulls will never be weighted against the house) would be true of your cash box idea, if implemented correctly; the higher probability of paying out N coins would only be triggered if at least N more than expected coins are in the box. --Mike Van Emmerik 21:53, 1 November 2005 (UTC)
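The arithmetic in the previous comment (1 cent to the jackpot and 4 cents of house profit per play, figures assumed there for the sake of the example) checks out; working in integer cents avoids floating-point surprises:

```python
contribution_per_play = 1      # cents added to the jackpot each play
profit_per_play = 4            # cents of house profit per play (assumed above)

jackpot_rise_cents = 10_000 * 100               # a $10,000 rise in the jackpot
plays_needed = jackpot_rise_cents // contribution_per_play
house_profit_dollars = plays_needed * profit_per_play // 100

print(plays_needed, house_profit_dollars)  # 1000000 40000
```

So a $10,000 jackpot rise takes a million plays, over which the house has banked $40,000, exactly as stated.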
- 1) Each slot machine certainly could display the contents of the cash box, if they so desired. The simplest way would be to make the case transparent, perhaps of bullet proof plastic, to discourage theft.
- 2) No, I was only talking about a change in payouts for certain combos, not a change in the odds of getting any particular combo.
- Your statement that no particular pull is weighted against the house is at odds with what you said. Yes, I agree that they have already made enough money to cover that pull through losses in the past, but that doesn't change the fact that, on average, they are going to lose more money than they will gain on certain pulls of the lever. That is, if they stopped the game right then, they would have more money, on average, than if they continued to allow players when the jackpot is that high. Of course, to do so would be bad publicity and also illegal, if they have promised that the jackpot will be awarded, but that's quite irrelevant to the odds and payouts. StuRat 04:31, 2 November 2005 (UTC)
- Every payoff, by itself, is negative expected value for the house. But we were talking about the long term expectation for the game, i.e. many plays.
- I wasn't talking about the long-term average, but rather each pull, where a certain amount is bet, and a certain amount is won, or not won. Some such pulls may be weighted against the house, but the majority are definitely in their favor. StuRat 21:59, 2 November 2005 (UTC)
It seems my point has been misunderstood. I'm not talking about actual slot machines, I'm talking about a hypothetical slot machine and slot machine players, all with an infinite amount of money. The Gambler's Fallacy must be a logical paradox. To bring it back to the coin example, since it's apparently easier to understand, when you have four heads in a row, tails must be more likely with every heads result, and vice versa, because given enough time, the results even out. -- Demonesque talk 03:27, 2 November 2005 (UTC)
- I just tested this, by flipping a coin and recording the results. When heads was significantly higher than tails, I bet on tails, and vice versa. By exploiting the fact that it always breaks even in the end, you win. Betting on the same result every time, of course, breaks even in the end. Demonesque talk 03:49, 2 November 2005 (UTC)
Try testing it 1000 more times, and see if it works out the same each time. It won't ! StuRat 04:00, 2 November 2005 (UTC)
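StuRat's challenge is easy to run in software. A rough sketch of the "bet on whichever side is behind" strategy described above (the stake, flip count and seed are arbitrary choices):

```python
import random

def bet_against_the_lead(flips=100_000, seed=0):
    """Bet 1 unit per flip on whichever side is currently behind
    (tails if heads leads or is tied, heads otherwise); return net winnings."""
    rng = random.Random(seed)
    heads = tails = bankroll = 0
    for _ in range(flips):
        bet_on_tails = heads >= tails
        flip_is_tails = rng.random() < 0.5
        bankroll += 1 if bet_on_tails == flip_is_tails else -1
        if flip_is_tails:
            tails += 1
        else:
            heads += 1
    return bankroll

# The net result wanders like an ordinary random walk, on the order of
# sqrt(flips) either way; the strategy has no built-in edge.
print(bet_against_the_lead())
```

Each flip is independent, so every bet has expected value zero regardless of the betting rule; the bankroll is just a plain ±1 random walk.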
Ok, let me explain this in terms of asymptotes and limits. I hope you're familiar with those terms. If not, my apologies in advance...
The graph of y = 1/x will get closer and closer to 0, but never actually reach zero, as x increases. It will approach it from the top side. Similarly the graph of y = -1/x will approach zero, but never actually reach zero, from the bottom side. So, if you have tossed heads more often in the past, then, on average, the number of heads will approach, but never reach, 50%, from the "more heads" side. Similarly, if you have tossed more tails in the past, the number of tails will approach, but never reach, 50%, from the "more tails" side. Now, unlike the 1/x graphs, the heads/tails flips are not guaranteed to be evenly distributed, so that you may very well get exactly 50% or even go past it. The "approaching but never reaching 50%" is only the average behaviour, not the actual behaviour for any one run.
So, what is happening is that the slight heads or tails advantage initially is becoming less and less significant as a percentage of the total rolls.
Let's look at the case where tails fell initially. After the first flip, the average percentage of heads is, of course, 0%:
T = 0% Ave = 0%
After the 2nd flip there are two equally likely possibilities, which would give us either 50% heads or 0% heads so far. The average of these two possibilities gives us 25% heads on average, which we could expect after the 2nd flip:
TH = 50% TT = 0% Ave = 25%
Here's the same values after the third flip:
THH = 67% THT = 33% TTH = 33% TTT = 0% Ave = 33%
And the fourth flip:
THHH = 75% THHT = 50% THTH = 50% THTT = 25% TTHH = 50% TTHT = 25% TTTH = 25% TTTT = 0% Ave = 37.5%
So, the average number of heads went from 0% to 25% to 33% to 37.5%. We are approaching, but will never quite reach, 50%. Note that the average number of heads is 1/4, 2/6, 3/8 for the 1st, 2nd, and 3rd additional flips, after the initial tails toss. We can generalize this as the formula n/(2n+2). So, after 9 flips, you would have 9/20, or 45% heads, on average. After 99 flips you would have 99/200 or 49.5% heads, on average. And, after 999 flips, you would have 999/2000, or 49.95%, heads, on average.
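The averages worked out above (25%, 33%, 37.5%) and the n/(2n+2) formula can be verified by brute-force enumeration; a quick sketch:

```python
from itertools import product

def avg_heads_fraction(n):
    """Average fraction of heads, over all 2**n equally likely ways the
    next n flips can land, for a sequence that starts with one tails."""
    total = sum(seq.count("H") / (n + 1) for seq in product("HT", repeat=n))
    return total / 2 ** n

for n in range(1, 5):
    # The enumerated average matches the closed form n/(2n+2).
    print(n, avg_heads_fraction(n), n / (2 * n + 2))
```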
- Another way to look at it: the average number of heads is a ratio: h/n, where h is the total number of heads results, and n is the number of tosses. After an unusually high number of heads (h large for the corresponding n), there are two ways to reduce the ratio back to the long term average: decrease the average h, or increase n. You seem to think that it has to be h that decreases in the short term. But h increases only by 0 or 1 with each toss, and averages an increase of 1/2. n keeps increasing, and eventually (but never completely) "drowns out" the blip from say four heads in a row. There will be other blips along the way, say five heads in a row, or 6 tails in a row. The probability of a more-than-average-heads blip will be the same as a more-than-average-tails blip of the same number of tosses. All these blips will also get drowned out by the underlying fact that in the end, about half the results are heads. There is no necessity to "correct the average" (of single tosses or blips) in the next 10 tosses, or 100 or 1000. The biggest factor is n, or time. Think very long term. If it helps, I struggled with this one for a long time, too. --Mike Van Emmerik 21:33, 2 November 2005 (UTC)
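The "drowned out by n" point is visible directly in simulation: the fraction h/n settles toward 1/2, while the absolute surplus h − n/2 keeps wandering rather than being corrected. A sketch (checkpoint values and seed are arbitrary):

```python
import random

rng = random.Random(42)
h = 0   # running count of heads
for n in range(1, 1_000_001):
    h += rng.random() < 0.5
    if n in (100, 10_000, 1_000_000):
        # The ratio h/n converges toward 0.5; the excess h - n/2 is
        # free to wander and is never "pulled back" to zero.
        print(n, h / n, h - n / 2)
```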
Help?
- "If I tell you that 1 of the 2 flips was heads then I am removing the tails-tails outcome only, leaving: heads-heads, heads-tails, and tails-heads. Among the 3 remaining possibilities, heads-heads happens 1 in 3 or 33% of the time."
Umm... what? Can someone explain why this makes sense? It seems to me it's very wrong...
Assuming the probability that the flip that was heads was the 1st coin is x%, then, obviously, the probability that the heads fell second is (100-x)%. If the first coin is heads (x/100 chance), the second coin has a 50/50 chance of being heads, or 1/2 (chance of 2 heads: x/100 * 1/2). Similarly, if the second coin is heads ((100-x)% chance), the first coin has a 1/2 chance of being heads (chance of 2 heads: (100-x)/100 * 1/2).
Adding these together, we get the probability of 2 heads:
x/100 * 1/2 + (100-x)/100 * 1/2 = x/200 + (100 - x)/200 = 100/200 = 1/2
regardless of which coin is the one revealed.
Fallacious article.
This is the kind of article and reasoning that will undermine Wikipedia. Whoever wrote it cannot differentiate between probability and degree of certainty.
Let us take the example of a coin toss. The probability is always 50% for either heads or tails. However, if we consider the Fundamental Law of Gambling:
N = log(1-DC)/log(1-p)
N = number of trials DC = degree of certainty that an event will occur p = probability that an event will occur
Anyone who can do math will see that, yes, the degree of certainty that you will get tails increases as a streak of heads goes on. After 3 heads, mathematically, the probability of either heads or tails is still 50%. It ALWAYS is. However, the degree of certainty that it will be tails as calculated by the above formula is 95%. The degree of certainty that it will be heads again is 5%.
But wait a minute, you might say. In the article, the author logically argued that the chances of 3 successive heads is an eighth, or 12.5%. Right he is, but he jumped to a different subject. He introduced erroneous logic into his thinking, and went from talking about the probability of one of two events happening, heads (H) or tails (T), to the probability of one of eight events happening (HHH), (HHT), (HTT), etc..
I would like to credit Ion Saliu for bringing these issues to light. I can be reached at ace_kevi at hotmail dot com.
- The article is fine as it stands. You need to read the article more carefully, or you need to understand this concept of "degree of certainty". The degree of certainty that you will get at least one heads in N coin flips, using the above formula (with p = 1/2), is DC = 1 − 1/2^N,
- which is correct. There are 2^N possible outcomes, and only one has no heads. If I have not yet flipped the coin - if I am about to flip a coin four times - then the probability that I will come up with at least one heads is DC = 1 − 1/2^4 = 93.75%. But, if I flip a fair coin 3 times and get 3 successive tails, the probability that the next flip will yield heads is 50%. Anyone who disagrees with this statement is a victim of the gambler's fallacy. That's exactly what the article says. Do you disagree with this statement? If you do not, then there is no problem. If you do, then you are a victim. PAR 15:27, 19 December 2005 (UTC)
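PAR's formula, rearranged to give the degree of certainty directly, is easy to compute; a minimal sketch:

```python
def degree_of_certainty(p, n):
    """Chance that an event with per-trial probability p happens
    at least once in n independent trials: DC = 1 - (1 - p)**n."""
    return 1.0 - (1.0 - p) ** n

# At least one heads in four flips of a fair coin:
print(degree_of_certainty(0.5, 4))  # 0.9375
```

Note this is a statement about a batch of trials not yet made; it says nothing about any single future flip, whose probability stays at p.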
Explanation
Of course, I do agree. I previously said that the probability never changes. It's 50%. However, the article is far from correct. In fact, it is misleading in many aspects. I will try to cover one of them briefly.
The author says, "suppose that we are in one of these states where, say, four heads have just come up in a row, and someone argues as follows: "if the next coin flipped were to come up heads, it would generate a run of five successive heads. The probability of a run of five successive heads is 0.5^5 = 0.03125; therefore, the next coin flipped only has a 1 in 32 chance of coming up heads." This is the fallacious step in the argument."
First of all, the "someone" and the author are talking about two different things. The author is talking about the probability of an event which has 32 possible outcomes (HHHHH, HHHHT, etc..) and two possible outcomes after the fourth toss, and the "someone" is talking about the probability of an event that has two possible outcomes (H or T). When the "someone" says "the next coin flipped only has a 1 in 32 chance of coming up heads," if he means probability, he is wrong, but if he means degree of certainty, as should be calculated by the equation I presented above, he is right. The degree of certainty that it will be tails the fifth time based on the previous tosses is above 95%. The probability is still 50%, as always. And yes, it DOES make sense. "Chance" is a vague term, and so is "likely/unlikely." The article should only use "probability" or "degree of certainty."
Please take the time to read my comments and understand the difference between probability and degree of certainty. The gambler does not question the probability, that's why the fundamental formula of gambling calculates the degree of certainty.
The concept is hard to grasp. You could call it the non-gambler's fallacy.
- I don't understand what your objection is. Can you give an example of a particular statement in the article that you object to and how you would reword it? PAR 04:11, 20 December 2005 (UTC)
Detailed explanation.
First of all, thank you for taking the time to read my comments and investigate my argument.
Reminder of the Fundamental Formula of Gambling:
N = log(1-DC)/log(1-p)
where
N = number of trials or events
DC = degree of certainty that an event will happen
p = probability that an event will happen
Say we toss a coin:
The probability of heads (H) is 50%, tails (T) 50%. The degree of certainty of heads is 50%, tails 50%.
It's heads. We toss a second time.
The probability of heads (H) is 50%, tails (T) 50%. The degree of certainty of heads is 25%, tails 75%.
It's heads. We toss a third time.
The probability of heads (H) is 50%, tails (T) 50%. The degree of certainty of heads is 10%, tails 90%.
It's heads. We toss a fourth time.
The probability of heads (H) is 50%, tails (T) 50%. The degree of certainty of heads is 5%, tails 95%.
"But wait!" one may cry out, "how did you get 10% at the third toss? The author logically demonstrated that it's 12.5%! You're wrong!"
Not so fast. That's where the author inadvertently introduced an error into his logic. I am talking about the degree of certainty of one of two events, namely getting HEADS again after getting 2 HEADS. Note that at this point, the probability of getting heads is still 50%. I am sticking to the topic, see. The author drifted from the main course and started talking about the probability of getting 3 HEADS after 3 tosses, which is one of eight possible outcomes for 3 tosses. He started out talking about the probability of one of two outcomes (H or T), and ended up talking about the probability of one of eight outcomes (HHH, HHT..) You cannot have two variables in the equation (p and DC). The probability must stay the same. Please look at the article and re-read my statement.
A little graphic explanation here:
The gambler, or me:
toss 1: H
toss 2: H
toss 3: ? probability of H 50%; degree of certainty 10%; hmm I better bet on T..
The author:
toss 1: H
toss 2: H
toss 3: ? probability of HHH 12.5%, HHT 12.5%, etc..
At the third toss, the probability of H is 50%, that of HHH is 12.5%. But he is comparing apples to oranges. I am sticking to H, while he jumped to a different game (HHH).
He says, "The gambler's fallacy can be illustrated by a game in which a coin is tossed over and over again. Suppose that the coin is in fact fair, so that the chances of it coming up heads are exactly 0.5 (a half). Then the chances of it coming up heads twice in succession are 0.5×0.5=0.25 (a quarter); three times in succession, they are 0.125 (an eighth) and so on. Nothing fallacious so far;.."
That's where his fallacy is. Let me explain this. The gambler doesn't care that the probability of HHH is 0.125, because he is betting on ONE TOSS, NOT THREE. The probability of HHH is irrelevant. Only the probability of H (50%) is relevant, because we are betting one toss at a time. H and HHH are two different things, with different probabilities, and if you get them mixed up, things get blurry.
Considering the probability doesn't change:
A random event is more likely to occur because it has not happened for a period of time; - Correct, the degree of certainty for that event goes up.
A random event is less likely to occur because it has not happened for a period of time; - correct, the degree of certainty for that event goes down.
What I am explaining here is VERY subtle, and evades even the most brilliant minds sometimes. I beg you, dear sir, to re-read my comments again if you do not understand at first.
- I still cannot make complete sense of what you are saying, but I may be able to if you can answer the following question:
- Suppose I have ten billion people. They each flip a fair coin three times. Now I separate out all those people who have flipped three heads in a row, and I ask them to flip one more time. What percentage of those people will flip heads and what percentage will flip tails? PAR 16:08, 20 December 2005 (UTC)
Reply.
The Short answer
10% of 10 billion will have HHH. You separate them out. That's one billion. They're going for a fourth flip. The probability of either H or T is 50%. The degree of certainty of a fourth consecutive H is 5%. 5% of the one billion will get H. 95% of them will get T. (WARNING: the other people who stopped at 3 flips do not matter at this point.)
If you thought the answer is 50% of one billion will get HHHH, and the other 50% will get HHHT, you are incorrect. You are following the author's fallacious logic by ignoring degrees of certainty.
How did I arrive at my results?
N = log (1 - DC) / log (1 - p)
At this point, your intuition is telling you I am wrong, but please read on.
The Long Answer
Say we flip a coin twice and get 2 heads. The degree of certainty of H after 2 H is 10%. So we have a 10% chance of getting HHH, versus 90% chance of getting HHT.
If we are playing the H or T (50% probability), HHH is "what is the degree of certainty that H will come up a third successive time?" the answer to that is 10%. We are at trial 3.
If we are playing the HHH or HHT or else (12.5% probability), HHH is "what is the degree of certainty that HHH will come up?" the answer to that is 12.5%. We are at trial 1.
I know exactly what is bothering you about all this. Say we're playing a game where 3 consecutive heads is a win. We got 2 heads in the first 2 tosses. "Why does it matter what game we're playing at the third toss? It's not gonna change the result." It won't, but it will tell you how to calculate DC for upcoming trials.
"why can't we consider every 3 tosses an event by itself? In that case the probability of HHH is 12.5% and the author is right!"
We can. However, in the case of 3 tosses (1/8 probability), if we have:
event1(T,H,H) event2(H,T,H), it DOES NOT count as a win (HHH). In the case of one toss at a time, 1/2 probability, it DOES. The gambler is betting ONE TOSS AT A TIME, NOT THREE. It is therefore irrelevant to talk about the probability of HHH in the article, because it skews the data. It ignores this HHH I just demonstrated. It's a totally different game and cannot be used to disprove the gambler's logic.
The author builds a very solid argument because he blurs the line between probability and degree of certainty by using the word "chance," and jumps from one game to another whenever he sees it fit. In other words, he says p=12.5%, so for instance, he is playing a game of three tosses = 1 event, where HHH wins. But when THH HTH comes up, he declares himself a winner! This is an example applied to the meaning of his words; he does not play any games in the article.
He does what I like to call data prostitution.
THE GAME WHERE [ONE EVENT = ONE TOSS] AND THE GAME WHERE [THREE TOSSES = ONE EVENT] ARE TWO DIFFERENT, SEPARATE GAMES.
Remember:
N = log (1 - DC) / log (1 - p)
Thank you for reading my comment. It took me four hours to write this comment, organize it and think of the examples to demonstrate the flaw in the article. Please take some time to read it carefully, and if there still are things you do not understand, please let me know.
- Please don't go into long discussions. My question was:
- Suppose I have ten billion people. They each flip a fair coin three times. Now I separate out all those people who have flipped three heads in a row, and I ask them to flip one more time. What percentage of those people will flip heads and what percentage will flip tails?
- Your answer was:
- 10% of 10 billion will have HHH. You separate them out. That's one billion. They're going for a fourth flip. The probability of either H or T is 50%. The degree of certainty of a fourth consecutive H is 5%. 5% of the one billion will get H. 95% of them will get T.
- This is simply wrong. We cannot continue this discussion until you realize that this is wrong. It will take some work, but you need to start flipping coins.
- Start flipping a coin, and every time you get three heads in a row, flip it again and then mark down your answer. After you have 40 entries, you will see that the number of heads is quite a bit larger than the 2 that you predict. (5% of 40=2). If you are a programmer, you can speed the process up by writing a program that uses a random number generator, so that you can get the answer quickly. PAR 03:29, 21 December 2005 (UTC)
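PAR's suggested experiment, automated. A rough sketch (the sample size and seed are arbitrary); every time three heads in a row appear, the next flip is recorded:

```python
import random

def next_flip_after_three_heads(samples=10_000, seed=7):
    """Flip a fair coin; whenever the previous three flips were all
    heads, record the next flip. Return the fraction recorded as heads."""
    rng = random.Random(seed)
    heads_recorded = recorded = run = 0
    while recorded < samples:
        is_heads = rng.random() < 0.5
        if run >= 3:                 # the last three flips were all heads
            recorded += 1
            heads_recorded += is_heads
        run = run + 1 if is_heads else 0
    return heads_recorded / samples

# Comes out near 0.5, not the 5% the "degree of certainty" argument predicts.
print(next_flip_after_three_heads())
```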
Read this..
..carefully.
The answer to your question lies here. http://www.saliu.com/Saliu2.htm#Table
If you still do not understand degrees of certainty, please let me know.
- You are misusing the "degree of certainty" concept. The oft-quoted 10% number in this table (p=1/16, DC=50%, N=10) means this: you will have to run at least 10 trials of a 1-in-16 event (such as HHHT or HHHH) before you will have a 50% or higher probability of the event happening one or more times (e.g. observing HHHH once, twice, etc but not zero times). The reason that N is not 16 is because there is a small chance of seeing two or more occurrences of the desired event (in other words, you could occasionally actually see four heads in a row twice in 10 trials, even though the chances of seeing it are 1/16 every time). For simplicity, let's talk for a moment about throwing four fair coins at once, so each event is one throw. When you add the chances of the event happening at least once (e.g. happens first throw only, second throw only, ... tenth throw only, first and second throw only, ... first and tenth throws only, second and third throws only, ... second and tenth, first, second and third, first, second and fourth, .... all ten throws) these presumably add up to a little over 50%. I've used permutations here, you could think in terms of combinations also, so add the chance of one event being observed (on the first, second, or tenth throw, it doesn't matter), two events, and so on up to ten events.
- So to analyse PAR's problem with 10 billion people: 1 in 8 of them will throw HHH, or 1.25 billion, not 1 billion. Of those 1.25 billion, half will throw another head, and half a tail, so that's 625 million each. To use the 50% degree of certainty figure, after the 10 billion have flipped four coins, divide them up into a billion groups of ten people each. Within each group, there have been 10 trials of a 1-in-16 event, say HHHH. About half of the groups, 500 million of them, will have recorded at least one result of four heads. There are 625 million people with the four heads results, but some groups have two of them, some groups have three, there are by my calculations about 954 groups on average that have five four heads results. Overall half the groups (500 million) have no four heads results. --Mike Van Emmerik 22:46, 21 December 2005 (UTC)
[edit] Problem
I have a problem with this quote from the article: "Similarly, if I flip a coin twice and tell you that at least one of the two flips was heads, and ask what the probability is that they both came up heads, you might answer that it is 50/50 (or 50%). This is incorrect: if I tell you that one of the two flips was heads then I am removing the tails-tails outcome only, leaving the following possible outcomes: heads-heads, heads-tails, and tails-heads. These are equally likely, so heads-heads happens 1 time in 3 or 33% of the time. If I had specified that the first flip was heads, then the chance that the second flip was heads too is 50%."
As far as I can see, the chance of this is in reality 50% and NOT 33%. If you say you have one head, and ask for the probability of both heads, you are eliminating two choices: either tails/tails and tails/heads (if the heads is the first one), or tails/tails and heads/tails (if the heads is the second one). Both of these leave only two possibilities, each of which has a 50% chance of occurring. As a result, I am removing the aforementioned text until someone presents a better argument for the quote.
- This is easy to resolve - get two coins, a piece of paper and a pencil. Flip the two coins.
If one of them is heads, write down on the paper what the other one was. In other words, when you get TT, don't write down anything. When you get HT or TH, write down T. When you get HH, write down H. After a while you will start to see that there are about twice as many T's as there are H's. (PS - please restore text.) PAR 03:29, 18 January 2006 (UTC)
-
- I agree, please restore the text. Only the TT choice should be eliminated by the statement that "at least one of the flips was heads". The only justification for removing the TH or HT possibilities would be if they specifically said which toss, the first or last, was heads, which they did not. StuRat 07:30, 18 January 2006 (UTC)
-
-
- The absolute absurdity of this example actually has inspired me to create a Wiki account and correct this. Each coin flip is considered an independent event in mathematics, and from the article "the coin does not have a memory," therefore order does not matter. The probability of scoring two heads, given that one of the tosses already is a head, can be represented by the equation P = x/y,
- where x is equal to the number of ways to score two "heads" given that one head has already been turned up, and y is the total number of possible outcomes. x must equal 1, because the only way to get two heads is if the unknown coin is a head, and y is equal to 2 because the unknown coin can be either a tail or a head.
- ∴ P = 1/2,
- or 50%. http://www.mathpages.com/home/kmath309.htm I am removing this example. Alex W 15:46, 10 March 2006 (UTC)
-
PLEASE PERFORM THIS EXPERIMENT: Get two coins, a piece of paper and a pencil. Flip the two coins. If one of them is heads, write down on the paper what the other one was. In other words, when you get TT, don't write down anything. When you get HT or TH, write down T. When you get HH, write down H. After a while you will start to see that the number of T's approaches two thirds of all entries. In other words, there will be twice as many T's on the paper as H's.
Please, do not talk about what the results will be by analyzing the situation. DO THE EXPERIMENT. We can go around and around forever arguing about what the outcome will be based on logical analysis of the situation, but if you do the experiment, you will see that the example is correct, and it should be restored. Then we can talk about how to analyze the situation. PAR 16:18, 10 March 2006 (UTC)
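For those who can't be bothered with actual coins, PAR's experiment is also easy to simulate; this is just a sketch (the seed and trial count are my own choices), tallying the companion coin whenever at least one head shows:

```python
import random

random.seed(42)
tally = {"H": 0, "T": 0}
for _ in range(30_000):
    a, b = random.choice("HT"), random.choice("HT")
    if a == "H" or b == "H":             # skip TT, as in the experiment
        other = b if a == "H" else a     # for HH, "the other" coin is also H
        tally[other] += 1
print(tally)  # roughly twice as many T's as H's
```

Of the three equally likely surviving outcomes (HH, HT, TH), two put a T on the paper, which is why the 2:1 ratio appears.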
- Your experiment is correct. When coin flipping, you will find that 50% of the time, the result will be one head and one tail. This is represented by the formula P = x/y,
- where x is the number of combinations with one tail and one head, and y is the total number of possible outcomes. x is equal to 2, because the following outcomes with one head and one tail are possible: either H-T, or T-H. The total number of possible combinations is 4: T-T, H-H, H-T, T-H.
- ∴ P = 2/4 = 1/2,
- Therefore, you are correct in stating the H-T / T-H results will outnumber the H-H results by a ratio of 2:1. You are twice as likely to have an outcome of H-T or T-H than a result of H-H. Unfortunately, this experiment cannot be applied to this example because it fails to take into account the fact that the outcome of one of the flips is already known.
-
-
-
- Some of you seem to be ignoring the fact that "at least one heads" also includes both heads. So at least one heads means HH, HT, or TH, all equally likely. Saying that the ordering doesn't matter doesn't help much; you have to remember that one head and one tail has twice the chance of two heads. Here is a new way to calculate it, using Bayes theorem:
-
-
Pr(A | B) = Pr(A & B) / Pr(B) = Pr(B|A)·Pr(A) / Pr(B), where | means "given" and & means "and".
Let A = two heads; B = at least one head.
Pr(B|A) = Pr(at least one head, given 2 heads) = 1; Pr(A) = 1/4. Or use Pr(A & B) = Pr(A) here (whenever you have two heads, you always have at least one head) = 1/4. Pr(B) = 3/4 (all 4 equally likely possibilities except TT).
So Pr(A | B) = (1/4) / (3/4) = 1/3.
You can also use an alternative form of Bayes theorem:
Pr(A | B) = Pr(B|A)·Pr(A) / (Pr(B|A)·Pr(A) + Pr(B|~A)·Pr(~A)), where | and & are as above, · means multiply, and ~ means "not".
Pr(B|A) = 1; Pr(A) = 1/4; Pr(B|~A) = Pr(at least 1 head, given not 2 heads) = (all but 1 of three equally likely events) = 2/3; Pr(~A) = Pr(not 2 heads) = 3/4.
Pr(A|B) = 1·(1/4) / (1·(1/4) + (2/3)·(3/4)) = (1/4) / ((1/4) + (1/2)) = (1/4) / (3/4) = 1/3.
--Mike Van Emmerik 23:50, 2 May 2006 (UTC)
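The Bayes calculation above can also be checked by brute-force enumeration of the four equally likely outcomes; a minimal sketch:

```python
from itertools import product

outcomes = list(product("HT", repeat=2))               # HH, HT, TH, TT
at_least_one_head = [o for o in outcomes if "H" in o]  # removes only TT
both_heads = [o for o in at_least_one_head if o == ("H", "H")]
print(len(both_heads), "/", len(at_least_one_head))    # 1 / 3
```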
[edit] Possible Problem
In the article, it says:
Sometimes, gamblers argue, "I just lost four times. Since the coin is fair and therefore in the long run everything has to even out, if I just keep playing, I will eventually win my money back." However, it is irrational to look at things "in the long run" starting from before he started playing; he ought to consider that in the long run from where he is now, he could expect everything to even out to his current point, which is four losses down.
Is this a typo or something? Shouldn't it be:
Sometimes, gamblers argue, "I just lost four times. Since the coin is fair and therefore in the long run everything has to even out, if I just keep playing, I will eventually win my money back." However, it is irrational to look at things "in the long run" starting from after four tosses; he ought to consider that in the long run from where he is now, he could expect everything to even out to his current point, which is four losses down.
Just wondering, because it doesn't seem right. 203.122.192.233 12:29, 22 May 2006 (UTC)
- It sounds right to me. That is, you can't consider past tosses when expecting everything to even out in the long run. StuRat 12:54, 22 May 2006 (UTC)
- Also, can you guys replace the "even out" expression with something else? What does "even out" mean? If it means that the total number of heads (or tails) will converge to half the number of flips (and thus his money will converge to 0), then that's wrong. If it means that the percentage of total flips that are heads (or tails) will converge to 50%, then that's correct, but I doubt this is relevant for the gambler. --Spoon! 07:53, 1 September 2006 (UTC)
-
- The gambler's fallacy is a rather vague feeling that "things should eventually even out", nothing more specific than that. StuRat 01:54, 3 September 2006 (UTC)
[edit] Probability and real life
"The most important questions of life are, for the most part, really only problems of probability." (Pierre Simon de Laplace, "Théorie Analytique des Probabilités")
Finally, I got here. There have been quite a few referrals from this URL. My name is Ion Saliu, the author of the Fundamental Formula of Gambling (FFG). I mean the special approach to gambling relying on a formula started by de Moivre some twenty and a half decades ago. I believe that de Moivre did not finalize his formula out of fear. FFG proves the absurdity of the God concept. It was a very dangerous thought back then. It is still dangerous today, depending on what side of the desert you live on.
I wrote the FFG article in 1996. My English is now better. We all evolve and thus prove Darwin's theory. Darwin is another human who was absolutely frightened by his ideas. He had bad dreams very often. He dreamed of being hanged because of his (r)evolutionary theory.
I developed my probability theory to significantly deeper levels. Please read at least two of my newer articles:
Theory of Probability: Best introduction, formulae, software, algorithms
Caveats in Theory of Probability
Yes, the constant p = n/N generates a variety of real–life outcomes. The constant p constantly changes everything. It even generates paradoxes, such as 'Ion Saliu's paradox of N trials'. We were taught that if p = 1/N the probability would be p=1 (or 100%) if we perform N trials. In reality (or paradoxically) the degree of certainty is not 1 but 1 – 1/e (approximately 0.632).
Ion Saliu, Probably At-Large
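The "paradox of N trials" mentioned above is standard probability: with success chance p = 1/N per trial, the chance of at least one success in N trials is 1 − (1 − 1/N)^N, which tends to 1 − 1/e ≈ 0.632 rather than 1. A quick sketch of that check:

```python
import math

def degree_of_certainty(N):
    """Chance of at least one success in N trials, each with p = 1/N."""
    return 1 - (1 - 1 / N) ** N

for N in (10, 100, 10_000):
    print(N, round(degree_of_certainty(N), 4))

print("limit:", round(1 - 1 / math.e, 4))  # ≈ 0.6321
```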
[edit] Paragraphs deleted from the article
I don't see a reason for why they were deleted, but i don't want to put them back either. I just want to save them in this talk page for easy access (i don't want to look through the article's history for them!), if i ever need them.
- Although the gambler's fallacy can apply to any form of gambling, it is easiest to illustrate by considering coin-tossing; its rebuttal can be summarised with the phrase "the coin doesn't have a memory".
- Sporting events and races are also not even, in that some entrants have better odds of winning than others. Presumably, the winner of one such event is more likely to win the next event than the loser.
- Mathematically, the probability that gains will eventually equal losses is equal to one, and a gambler will return to his starting point; however, the expected number of times he has to play is infinite, and so is the expected amount of capital he will need!
-- Jokes Free4Me 09:28, 20 June 2006 (UTC)
- The only one I deleted was number 3, because it's misleading and/or wrong. Maybe it could be reinstated if written more clearly. I mean you could say that the probability that the gambler will eventually win a billion times the single-game bet is equal to one. That's true too. Same for losing a billion times the bet. Same for any amount you wish to name. Moreover, the expected number of times he has to play to do it is not infinite, it's finite. The number of times he has to play in order for it to be a certainty is infinite. PAR 15:53, 20 June 2006 (UTC)
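The distinction PAR draws — return to the starting point is certain, but the waiting time can be arbitrarily long — shows up clearly in a simulated fair random walk; a sketch with cap and sample sizes of my own choosing:

```python
import random

def return_time(max_steps=100_000):
    """Steps until a fair +1/-1 walk first returns to 0 (None if capped)."""
    pos = 0
    for t in range(1, max_steps + 1):
        pos += random.choice((-1, 1))
        if pos == 0:
            return t
    return None

random.seed(3)
times = [return_time() for _ in range(500)]
returned = sorted(t for t in times if t is not None)
# Almost every walk returns, and half of them return within a couple of
# steps, but rare huge excursions make the *average* waiting time blow up.
print(len(returned), returned[len(returned) // 2], returned[-1])
```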
[edit] Traditional logic won't solve this debate
The Gambler's Fallacy is in direct argument with the Law of Averages or the Law of Large Numbers. It seems reasonable to say that the probability of any individual coin toss is 50%, but it is also reasonable to assume that as you get more heads, SOME TIME DOWN THE ROAD, you will HAVE to start seeing TAILS. As time goes on, therefore, it is logical to assume that the probability of getting a head must progressively fall. There is an illusion going on in this discussion. LOGIC prescribes a continued 50/50 chance of a head while EXPERIENCE demonstrates that heads become less & less likely as you continue to flip successive heads. This is an example where the idea of LOGIC doesn't work in real-life. Another example of LOGIC not working is on the topic of astrology and metaphysical influences. LOGIC can not explain nor conceive the idea of an astrological influence, and it also can not assimilate the illogical discrepancy between the law of probability, and our real-life experience of probability. Therefore, the theory of probability fails to explain reality, similar to how logic fails to explain an astrological influence. Logic works wonders in other areas, but it is the Law of Large Numbers which it must concede to because this has been proven in real-life and more aptly predicts the future. The criterion for a theory's truth has to do with predictive validity, and I believe the law of large numbers predicts the future of a head more reliably than the theory of probability. Any idiot will SEE that the chance of a head decreases as you flip despite the fact that it is not logical based upon the THEORY of probability. In theory, a lot of things work, but in REALITY, they end up not working after all. What's most important is you pay attention to the outside world and real-life results/experiments as opposed to what may seem LOGICAL in one's mind. A lot of things in the world are counterintuitive.
Like, for example, why are the pedal threads reversed on the left side of a bicycle? It may not seem logical, but that's the way it is. Logic can work sometimes and can fail other times, it is not a perfect philosophy... and this discussion is a good example of this. Again, what ultimately matters is one's empirical findings, not whether or not those findings appear logical or not. It's like Fritz Perls once said, "You need to lose your mind and come to your SENSES!" Our theories must be formulated by our EXPERIENCE, not the other way around.
The only philosophical problem regarding the increased chance of a head as you continue to get successive heads has to do with the birth of the coin and ignorance of its history. Theoretically speaking, the second that coin comes off the press, its history begins. This, of course, is only in theory; it has not been verified by experience. Human beings have very little experience of the true life of a coin from its birth, so we can only try to use reason in order to theorize about coin flipping behavior. Since we are all ignorant about the sum total of heads that were flipped since its birth, our predictions for a future flip are like trying to come into a conversation in the middle and start arguing. You can't scan back and view its history, so, for all you know, that coin had just flipped 1000 consecutive tails.... then you happen upon it in your dining room, flip it and get 10 consecutive heads and believe that it MUST mean the next is a tail. Obviously, since it had an abnormal amount of tails before you picked it up, the actual probabilities are quite unknown to you. The 50/50 theory is just a generalization, and, in fact, the chance of a head may be a lot more likely than you believed once you consider a coin's entire history.
Another philosophical problem relating to the history of a coin has to do with the logic behind how history can affect a coin's future. Is it really logical to believe that throwing a coin up & down in space and having it hit the floor and fall to rest has ANY effect on its future behavior? This is completely illogical. I agree, but I also suggested above that logic is not the be-all and end-all of human wisdom. What ultimately matters is experience and observational consensus. I agree that not many human beings are going to devote their lives to following the behavior of coins from their birth (so we have very little data), but this is really the only SURE way of validating a theory of coin flipping. Of course, for all humans know, all the pennies in the world could be metaphysically connected in some way to the point where even following individual coins might not help a lot. In other words, the fact that my sister flipped heads in one room might be illogically affecting the outcome of my brother flipping his own coin in the next room. Again, illogical and difficult to conceive of how there would be a connection, but the idea of astrology is just as illogical but appears to have compelling enough evidence to warrant a statistical investigation. Remember, it didn't make sense to anyone that the world was round. That was viewed as illogical at one time, so logic can be a highly subjective way of trying to predict reality. Logic is also merely a tool to assess reality, not a religion to try to protect. If logic doesn't seem to be predicting reality very well, one needs to begin to reevaluate logic itself, or at the very least one's own logic. Dave925 22:46, 22 July 2006 (UTC)
- Sorry, but that's all pretty much BS. You just accept things like the accuracy of astrology as a given, then use that to "prove" that logic in wrong (astrology fails miserably when put to any real test, it's just a psychological issue that people tend to see themselves in any vague statement, like "there has been a crisis in your life"). The only way a coin toss will not have a 50-50 chance is if it isn't a fair coin (weighted unevenly, for example). And I don't think the learned people ever thought the world was flat, only the same common folk who now believe in the gamblers fallacy. StuRat 21:52, 10 August 2006 (UTC)
[edit] Real-life Evidence exists for Gambler's Fallacy with a NORMAL fair coin
By the way, I just programmed my computer to bet on its microsecond digit, in order to disprove the Gambler's Fallacy. In doing so, I essentially proved that it does exist. The table below shows the percentage of time it was able to predict the digit based upon having thousands of trials of history to learn from. Every 100 trials, I programmed the computer to stop, evaluate the data and bet on the underdog. Then I just programmed it to pick one number and always bet on that. For all practical purposes, 15,000 trials is enough for the average person to realize it doesn't work (at least in the short-run). It's still possible/reasonable that you could have an advantage in the extreme long-run, but, then again, the "history" idea associated with a coin could (for all you know) be infinite, which would be like saying it does not exist. To have an infinite history would make history a nonissue in betting because it would be impossible to know enough of the history to make any difference whatsoever in your coin flipping predictions. So, either coins have infinite history, no history, or they have such a large history that, for all practical purposes, the average Joe would not benefit in his lifetime betting on the underdog. This is only for FAIR coins, of course. A completely random event which is not unfairly influenced by another phenomenon which can be learned. The short history that one has access to when flipping a fair coin (compared to its infinite history) is essentially like no history at all.... which brings every coin flip back down to a 50/50 chance. Again, normal fair coin is the operative word. In cases where you are flipping a coin for 15 minutes and keep getting heads.... this will never happen for a normal fair coin. Or, the chances of that happening are so astronomical that it's not worth talking about.
In a world where you could flip a coin and get heads for 15 minutes straight, yes, I would imagine that you could have an advantage by betting on a tail. Of course, if you already flipped it for 15 minutes and didn't get a tail, you would already assume that there must be something wrong with the coin or you are high on drugs. If, in a strange new world, a fair normal coin could be flipped for 15 minutes without getting a tail, the history idea could become a pertinent advantage... but, again, with normal fair coins, you will never get far enough off the bell-curve to use your distribution history as a predictive tool. The results of the experiment are below. Dave925 08:44, 23 July 2006 (UTC)
15,000 trials = 11.69% bet on 4
15,000 trials = 11.39% bet on 1
15,000 trials = 11.39% bet on 9
15,000 trials = 11.30% bet on 2
15,000 trials = 11.16% bet on 4
15,000 trials = 11.12% bet on 4
15,000 trials = 11.11% bet on 4
15,000 trials = 10.90% bet on 4
15,000 trials = 10.79% bet on 0
15,000 trials = 10.62% bet on underdog
15,000 trials = 10.60% bet on 5
15,000 trials = 10.59% bet on 5
15,000 trials = 10.49% bet on underdog
15,000 trials = 10.48% bet on 3
15,000 trials = 10.43% bet on underdog
15,000 trials = 10.41% bet on underdog
15,000 trials = 10.39% bet on underdog
15,000 trials = 10.35% bet on 5
15,000 trials = 10.15% bet on underdog
15,000 trials = 10.01% bet on 1
15,000 trials = 10.00% bet on 5
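For what it's worth, the experiment above is easy to reproduce with any pseudo-random digit stream. In this sketch (the seed, trial count, and exact "underdog" rule are my own assumptions, since the original program isn't shown), both betting on a fixed digit and betting on the least-seen digit hover around the no-skill rate of 10%:

```python
import random
from collections import Counter

random.seed(1)
digits = [random.randrange(10) for _ in range(15_000)]

# Strategy 1: always bet on one fixed digit.
fixed_hits = sum(d == 4 for d in digits)

# Strategy 2: bet on the digit seen least often so far ("the underdog").
counts = Counter()
underdog_hits = 0
for d in digits:
    bet = min(range(10), key=lambda k: counts[k])
    underdog_hits += (bet == d)
    counts[d] += 1

print(fixed_hits / len(digits), underdog_hits / len(digits))  # both ≈ 0.10
```

Since each digit is drawn independently of the running tallies, no history-based strategy can beat 10% here.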
- The problem here is that you are using a computer to generate the numbers, which are pseudo-random numbers, not truly random. Some digits may very well come up more often in computer generated numbers. However, if you used something truly random, like radioactive decay events, this defect would be eliminated. StuRat 21:39, 10 August 2006 (UTC)
-
- While there are issues with some naive RNG, it would not be likely that using one would cause the results seen. I prefer to use algorithmic ones rather than hardware ones like that because I have access to their properties and weaknesses. But the microsecond hardware one here appears to be fine for this application. Baccyak4H 03:28, 22 August 2006 (UTC)
[edit] The Proof is in the Slope
Theoretically speaking, however, the chance of getting a tail should begin to increase after a certain point (say, after 10 heads). Yet, the chance of a tail may only be a micropercent which wouldn't help much with prediction. If you talk in terms of 'strings,' a string of heads with a tail at the end will always be THEORETICALLY more likely than a string of the same number of heads with another head added. Again, the percentage between these two distributions may be such an astronomically small amount that it doesn't make sense to make a distinction. It's like splitting hairs here talking about the theoretical influence of ONE coin flip on a string of flips which could reach from here to the moon. The fact is, there are two sides to the coin, so no matter how infinitesimal the percentage is, there has got to be a difference in the distribution. What I'm saying is that under the normal distribution of a coin flip, the string 50 heads+1 tail will happen more often than 51 heads. If this wasn't the case, the distribution CURVE wouldn't be a CURVE, right? Think of the distribution of coin flip strings and what the curve looks like. It should look like a normal bell curve and slope downward. It is the 51 heads which makes it slope downward slightly. And, each head you add to the string, the curve slopes down further and further into virtual impossibility. So, one could redefine the Gambler's Fallacy as attributing a SIGNIFICANTLY greater chance to the outcome of a random event due to the history of that event. It's not significant. In fact, it may even be grossly insignificant to the point where it wouldn't make any practical difference to make a distinction. But, the fact is, the curve SLOPES. And it slopes DOWNWARD. If successive head flips don't cause it to slope, what does? Dave925 19:11, 10 August 2006 (UTC)
- I'm not following you: " ... the string 50 heads+1 tail will happen more than 51 heads..." do you mean "50 heads followed by one tail, compared to 50 heads followed by one head"? If so, I fear you're wrong, because those two will happen exactly as often as each other with a fair coin, nothing theoretical or infinitesimal about it. Maybe I'm misreading you, though. - DavidWBrooks 20:47, 10 August 2006 (UTC)
-
- I agree. StuRat 21:40, 10 August 2006 (UTC)
- Looking at your comments again, I think I understand where you're wrong; I apologize if I've misread it.
- If I am about to flip a coin 51 times in a row, it is slightly more likely that I'll get one tail among the flips than no tails - that's the probability curve. But if I have already flipped the coin 50 times and am preparing to flip it a 51st time, then a tail is no more likely than a head. (In this case the probability "curve" is, as you correctly said, flat.)
- The gambler's fallacy is to think that the probability curve which existed before any coins were flipped still holds sway after some of the flips have happened. In fact, the curve is "re-calculated" (so to speak) after every single flip, and it only applies to events that have not yet happened.
- Not sure if that helps ... - DavidWBrooks 12:02, 15 August 2006 (UTC)
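The point about exact strings can be made precise with a tiny calculation (exact fractions, so no simulation noise): "50 heads then a tail" and "51 heads" are equally likely sequences, and conditioning on the first 50 heads leaves the last flip at even odds:

```python
from fractions import Fraction

p = Fraction(1, 2)           # fair coin
p_50H_then_T = p**50 * p     # the exact sequence HH...HT
p_51H = p**51                # the exact sequence HH...HH
assert p_50H_then_T == p_51H # identical probabilities

# Given that the first 50 flips were heads, the 51st is still 50/50:
p_given = p_51H / p**50
print(p_given)  # 1/2
```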
-
- You're right. History can be used to even out odds, but not gain odds. For example, if I asked you to wager on the odds of flipping 4 heads in a row versus flipping 3 heads and 1 tail, you would always bet on the latter. However, if you already knew that I flipped 3 heads, the chance that the next flip is a head is the same because there are only two possible outcomes. The reason why you bet on the latter in the previous wager is because you have 4 chances of success versus 1: THHH, HTHH, HHTH, HHHT. Once you already know the outcomes of the first 3 events, the odds of one single flip are always the same. You can always gain chances when you group two flips together because your chance of getting a head always increases with the number of times you're able to try. However, if you flip the coin 10 times and get 10 tails, you have already lost all your chances... and the next flip is just a 50/50 shot. You can use the knowledge of previous events to even out odds, but not to gain odds. So, yes, history can be used for something. It can be used to show how unlucky you are when you see your odds of a string of flips reduce back down to a single 50/50 shot. This is ever apparent on the show "Deal or No Deal." The banker recalculates the odds after each case is opened, for good reason. They have a 1 in 26 chance of winning a million dollars, but once you're down to 2 cases and one has the million in it, the banker knows it's a 50/50 shot now... not 1 in 26 like when the game started. Each case which is opened is used to come closer to predicting the amount in their case. It's similar to playing the game "Clue." You don't solve the crime on your 1st move, you eliminate the other possibilities first, and once done, you have a lot better chance of making a successful accusation. Dave925 2/25/2008
[edit] Gender ratios
"A couple has nine daughters. What is the probability that the next child is another daughter? (Answer: 0.5, assuming that children's genders are independent.)" This is a poor example because various factors can make one couple more likely than another to produce girls (see e.g. Kanazawa: Big and tall parents have more sons: Further generalizations of the Trivers–Willard hypothesis, Journal of Theoretical Biology 235 (2005) 583–590) and so a couple that's already produced a long run of daughters is more likely to produce another one (although the effect is probably not as strong as the gambler's fallacy might lead people to believe). Modifying accordingly. --Calair 01:15, 31 December 2006 (UTC)
The statement "Many people lose money while gambling due to their erroneous belief in this fallacy" seems false. A win or loose in gambling is random and is not effected by what a player believes? Marvinciccone 22:13, 9 January 2007 (UTC)
- People who believe in this fallacy are more likely to gamble (or keep gambling), and the more you gamble the more likely you are to lose money. But I'll see if I can make the wording a little less ambiguous. --Calair 01:01, 10 January 2007 (UTC)
[edit] The Mathematicians' Fallacy
As a practical gambler, I came to the conclusion that any talk about any length of a trial procedure which is outside of the humanly performable is but metaphysics. I spent many years on finding the Rosetta Stone of Gambling (unsuccessfully), but found the following relation in Binary Gambling which I never was able to exploit: "THEORETICALLY" (which notion in these inverted commas is outside of mathematical rigor) if I have 'N' trials, there will be 'N/2' outcomes of EVENT1 and of EVENT2. Those outcomes will form GROUPS. A GROUP consists of only similar EVENTs. It can have one member (E1; for argument's sake) or two members (E1,E1;) or three members (E1,E1,E1) or ... or 'n' members (E1,E1, .... E1 - total of 'n' members)
Through practical observation only, I found that their (e.g. the GROUPS') numerosity corresponds to the following relations:
If "N" trials then "N/2" EVENT1 - (the following is true to EVENT2, with its own deviations)
The generated GROUPS in an EVENT FIELD: GROUPNUMBERS = (EVENTi/2). The number of single and multiple GROUPs will be: GROUPNUMBERS = (GROUPS/2). This means that in "N" trials we shall have SINGLEGROUPNUMBERS = (N/2/2/2) and MULTIPLEGROUPNUMBERS = (N/2/2/2), from which follows: SINGLEGROUPNUMBERS = MULTIPLEGROUPNUMBERS;
Further it follows that we shall have: TWOLONGGROUPS = (MULTIPLEGROUPNUMBERS/2); THREELONGGROUPS = (TWOLONGGROUPS/2); ... nLONGGROUPS = ([n-1]LONGGROUPS/2)
In a numeric representation, from
1000 trials will produce 500 EVENT1, which form 250 GROUPS in the formation of 125 ONELONGGROUP and 125 MULTIPLELONGGROUP; in the 125 MULTIPLELONGGROUP there are {say} 62 TWOLONGGROUP, {say} 31 THREELONGGROUP, {say} 15 4LONGGROUP, {say} 8 5LONGGROUP, {say} 4 6LONGGROUP, {say} 2 7LONGGROUP, {say} 1 8LONGGROUP, {say} 1 otherLONGGROUP also happen ... (as will);
Naturally, nothing like that exists in this form, but all of my experiences tend to bring this same "theoretical" result.
My arguments with the Coin Toss example - which I call BINARY GAME - are as follows :
0./ All arguments shall relate to fixed length structured play of hazard; e.g. RATIONAL PLAY
1./ There is no such EXPERIENCE as infinite numbers of trials;
2./ There are only two results in a series of BINARY TRIALs: i.) a single outcome of an EVENT, or ii.) a multiple outcome of an EVENT
3./ However the results are unpredictable, there is no probability whatsoever that any character of a GROUP (single, multiple) will continue after a certain number of occurrences - like HTHTHTHTHTHT... ad infinitum;
4./ While it cannot be predicted with any certainty, the 'longer' events occur with statistical regularity in a series of structured plays, and in certain types of hazard plays they could even be exploited;
5./ Because the length of the occurrences is dependent on the trial numbers in consideration, the maximum length experienced in a BINARY GAME (like Roulette, where 43 blacks came out in a row, to my knowledge) is but an example of the importance of human limitations in practical gambling. (We, as homo ludens, might not have enough collective trials to have other results)
Tamaslevy 03:24, 12 February 2007 (UTC)
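The halving pattern of GROUP lengths described above matches the standard geometric distribution of run lengths in a fair binary sequence; this simulation (sizes and seed are my own choices) shows each run length roughly half as frequent as the one before:

```python
import random
from collections import Counter
from itertools import groupby

random.seed(0)
flips = [random.choice("HT") for _ in range(100_000)]

# Count maximal runs ("GROUPS") of identical outcomes, by length.
run_lengths = Counter(len(list(g)) for _, g in groupby(flips))
for n in range(1, 8):
    print(n, run_lengths[n])
# Roughly 25000, 12500, 6250, 3125, ... each about half the one before.
```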
[edit] Clearing up intro
"A truly random event is by definition one whose outcome cannot be predicted - on the basis of previous outcomes or anything else."
I removed this because it's inaccurate. For one thing, an 'event' is an outcome - 'throwing a die' is a random process, 'rolling a 6' is an event.
For another, under this definition there would be no such thing as a 'truly random' event or process. To see why, note that "rolling a 6" is one possible event when rolling a fair die... but so is "rolling a 5 or 6". Quite clearly the former predicts the latter, and the latter greatly improves the odds of the former.
Alternately, take two fair dice (one red, one black), and define three random variables:
R is the result when the red die is thrown. B is the result when the black die is thrown. RB is the sum of R and B.
Each of these is random (although RB has a different probability distribution from the other two). But if you know R, you have a much better idea what RB will be. This is a situation where you can use one random event to predict another. The important thing in the gambler's fallacy is not just that the events are random, but that they're independent. (Indeed, part of the reason humans are susceptible to the gambler's fallacy is that they're used to dealing with non-independent random events.) --Calair 02:13, 4 March 2007 (UTC)
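The dice example is easy to check numerically; in this sketch (sample size and seed are my own choices), knowing R visibly shifts the conditional distribution of RB even though all three variables are random:

```python
import random

random.seed(0)
n = 100_000
r = [random.randint(1, 6) for _ in range(n)]   # red die
b = [random.randint(1, 6) for _ in range(n)]   # black die
rb = [x + y for x, y in zip(r, b)]             # their sum

print(sum(rb) / n)                    # unconditional mean of RB, about 7
given_r6 = [s for x, s in zip(r, rb) if x == 6]
print(sum(given_r6) / len(given_r6))  # conditional on R = 6, about 9.5
```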
[edit] Lottery
Are you more likely to win the lottery jackpot by choosing the same numbers every time or by choosing different numbers every time? (Answer: Either strategy is equally likely to win.)
Of course, choosing random numbers is the better option. If you always play the same numbers but miss a draw and your numbers come up... R'win 12:12, 22 September 2007 (UTC)
- No set of numbers is any more or any less likely to match those drawn than any other set. HOWEVER you can to a very small degree minimise your risk of having to share a jackpot by picking a unique sequence of numbers. Human nature means that the numbers 1-31 are commonly picked (birthdays etc) and strings of consecutive numbers often avoided as people irrationally believe they're unlikely to win. For example 32-33-34-45-46-47 has no special "powers" but if it came up you MAY be less likely to share your jackpot than if you'd picked 1-7-15-22-25-30. It's still a mug's game, however! 77.96.239.229 (talk) 15:46, 6 February 2008 (UTC)
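The "no set of numbers is any more likely" point is easy to verify for a hypothetical 6-from-49 lottery (the format is an assumption; the comment above doesn't name one). Every specific ticket is one of the same number of equally likely draws:

```python
from math import comb

# Hypothetical 6-from-49 lottery: the draw is one of comb(49, 6)
# equally likely combinations, so every ticket, consecutive or
# scattered, matches the draw with the same probability.
total_draws = comb(49, 6)
consecutive = {32, 33, 34, 45, 46, 47}
scattered = {1, 7, 15, 22, 25, 30}

print(total_draws)      # 13983816
print(1 / total_draws)  # win probability for either ticket
```

The only asymmetry, as the comment notes, is in the expected payout: a less popular combination is shared with fewer people if it does come up, but its chance of coming up is unchanged.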
Formal fallacy
The article introduction labels this as a formal fallacy, while the box in the bottom of the article places this as an informal fallacy. Either the current box layout is misleading or there is a contradiction here, I think.
I'm also adding a cleanup (inappropriate tone) tag to the An Example section, which seems completely non-encyclopedic. - Roma_emu (talk) 00:53, 18 December 2007 (UTC)
-
- I'm going to edit the top part to say Informal instead of Formal. Regarding my second note, it seems that the inappropriate part was in fact a copyright violation, which was removed by McGeddon. Unfortunately an anonymous user re-added it, and when I came to this page today that's where it stood, so I edited it to give it a more encyclopedic tone. I thought nothing had been done since my post here; I only looked at the history and noticed the section was a copyvio after the edit. I then reverted to McGeddon's version. -Roma_emu (talk) 01:43, 22 December 2007 (UTC)
Related: sample-size fallacy, human-generated sequences?
I've read that the gambler's fallacy is a special case of the sample-size fallacy (aka small-sample fallacy), which seems to jibe with assertions here on the talk page that yes, you might reasonably expect the coin flips to regress to the mean, but you'd have to consider the infinite past of that coin (or your coin, or you, or whatever) and the infinite future to expect the regression to happen with certainty. Or something like that. Intuitively, that sounds right, but given the counter-intuitive nature of fallacies, I'm not expert enough to be bold and edit. It's not like I learned math and logic at school or anything!
Also, many have written about how bad humans are at generating strings of "random" numbers; we avoid even numbers, we avoid sequences, etc. (Naturally, at the moment, I can't find a single web page on the topic to cite.) Is this a corollary of the gambler's fallacy and/or sample-size fallacy, or does it have its own name? Either way, something here should link to it, as they seem connected. --JayLevitt (talk) 15:25, 30 December 2007 (UTC)
Would this also relate to genetics?
Let's say, for example, that two heterozygous parents both have genotype Tt. They produce three offspring, and all are tall (dominant T). Would there be a 100% chance of the 4th being short (recessive t)? Or would the 3:1 dominant : recessive ratio still apply to every single offspring, including the 4th? So the 4th would still have a 1/4 chance of showing the recessive trait? In genetics, is this still a fallacy? 64.131.253.168 (talk) 03:20, 18 April 2008 (UTC) Havoc
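The question above can be worked through directly. Each offspring of a Tt x Tt cross independently receives one allele from each parent, so prior siblings change nothing; a small sketch of the Punnett-square arithmetic:

```python
from fractions import Fraction
from itertools import product

# Punnett square for a Tt x Tt cross: each parent passes T or t with
# probability 1/2, independently for every offspring.
genotypes = [a + b for a, b in product("Tt", repeat=2)]  # TT, Tt, tT, tt
p_short = Fraction(sum(g == "tt" for g in genotypes), len(genotypes))
print(p_short)  # 1/4

# Offspring are independent, so three tall siblings in a row change
# nothing: P(4th is short | first three tall) is still p_short.
# The unconditional chance of that whole four-child pattern:
p_three_tall_then_short = (1 - p_short) ** 3 * p_short
print(p_three_tall_then_short)  # 27/256
```

So yes, the gambler's fallacy applies here too: the 4th offspring has a 1/4 chance of being short regardless of the first three, because (for independently assorting alleles) each fertilization is an independent trial.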