Talk:Ludic fallacy

From Wikipedia, the free encyclopedia

This article is within the scope of WikiProject Philosophy, which collaborates on articles related to philosophy. To participate, you can edit this article or visit the project page for more details.
This article has not yet received a rating on the quality scale.
This article has not yet received an importance rating on the importance scale.

Notability.

References for the article etc. will be here in a few days.

I added references. More on their way. IdeasLover 07:16, 13 January 2007 (UTC)

"Uncertainty of the nerd"? Where's that coming from? Is that actually Taleb's words as well? --Khazar 13:48, 13 May 2007 (UTC)

Taleb's wording indeed. YechezkelZilber 02:56, 14 May 2007 (UTC)

What the heck is GIF? Frankly, I don't believe this belongs here. Does this entry meet the criteria for a fallacy? It seems too vague as it is currently worded. It is self-evident that unknown factors cannot influence utility calculations. This has nothing to do with statistical theory per se.

GIF is an acronym for "Great Intellectual Fraud"; see the text.
The issue has nothing to do with the theoretical side of probability theory. It is about the practical, all-too-common use of statistical theory and its application where it is irrelevant (or very inexact and practically off the mark). Your claim about the unknown and utility calculations is exactly this point: one cannot compute utilities etc. based on the fantasy that one knows everything. Utility theory is good when used with caution or for theoretical purposes. But in practice one needs to know actual utilities, not twice-differentiable Pratt-Arrow curves! (I suspect I am not clear enough, so I will clarify if you think I did not make my point.) YechezkelZilber 22:54, 3 June 2007 (UTC)

The definition here needs to be more clearly explained. Although Taleb has a definition on his Black Swan Glossary and in his book, you need to read the whole chapter (or at least the first 4 pages of it) to simply get the definition.


In Taleb's words, ludic fallacy (or "uncertainty of the nerd") is the "manifestation of the Platonic fallacy in the study of uncertainty; basing studies of chance on the narrow world of games and dice. A platonic randomness has an additional layer of uncertainty concerning the rules of the game of real life."

The second paragraph has quotes from NNT's book and glossary, but that doesn't help explain the idea. "The uncertainty of the nerd" is meaningless unless you give the narrative of Dr. John and Fat Tony. The "ludic fallacy" is simply treating our current culture's idea of games and gambling as equivalent to randomness as seen in real life. We are guilty of the ludic fallacy when we equate the chances of an event that has occurred to us with those of flipping a coin. This is simply because we know the odds of getting heads or tails (a 50 percent chance), whereas we can't begin to compute the odds of missing the train (all we REALLY KNOW is that the odds change depending on other events, i.e. strikes, earthquakes, etc. This leads to Popper, falsification, and the idea that it's easier to say what I don't know than what I do). When we compare the train event to flipping a coin, what we are saying is that there was only a 50% chance of missing the train. We fall for this fallacy because "we like known schemas and well organized knowledge - to the point of blindness to reality."

I've re-factored the example section in the vein of the train example, as an example should be more straightforward for the viewer who hasn't read the book.--Herda050 06:49, 6 September 2007 (UTC)

I am not sure the example is appropriate. Anyone saying "it was a coin flip" about the train story does not mean a technical 50/50 gamble. An example where people take the technical stuff seriously would be more appropriate (maybe the one I gave originally). One may look in the book and take the example from there (should be chapter 9 or so, "The Ludic Fallacy"). YechezkelZilber 02:04, 7 September 2007 (UTC)
YechezkelZilber the autodidact? Sorry to blatantly rewrite your example, as there was nothing incorrect about it. I thought it might be too dense for someone who hasn't read the book, though. I know the example I provide is oversimplified (which in itself is a problem), but the article should be geared toward people who haven't read the book. Don't you think he was indicting our all-too-human instinct to link metaphorically the probabilities we see in gambling with everyday instances that occur in life ("we use the example of games, which probability theory was successful at tracking, and claim this is a general case")? I'm not sure who you're referring to by "people who take the technical stuff seriously". If you mean people who were trained in statistics, I didn't get that he was indicting just those people. He says "we automatically, spontaneously associate chance with these platonified games." I read that he was indicting the human race ("experts" included) as falling for the ludic fallacy.
I have the US printing of the book, and he actually doesn't give a clear example in that chapter. He uses the narrative style he employs throughout the book, telling the story of his attendance at a brainstorming session at the Las Vegas hotel. The story he tells illustrates the ludic fallacy, but it is too long for the article and copyrighted.--Herda050 06:23, 7 September 2007 (UTC)
I looked at the other fallacy pages and re-added your example as number 2. Guess I "focused" too much to contemplate that having two examples would be fine.--Herda050 06:44, 7 September 2007 (UTC)
It is me. No need to apologize, I enjoy being equal. Thanks for re-using my text, hope it is actually useful. I admit the heaviness of the example. The UK/US versions of the book are the same. Nice editing. YechezkelZilber 15:19, 7 September 2007 (UTC)

Sorry to be posting this remark anonymously, but I don't yet have a Wikipedia account. The entry makes, I think, a strange and wrong argument. It remarks,

"The young job seeker here forgets that real life is more complex than the theory of statistics. Even with a low probability of success, a really good job may be worth taking the interview. Will he enjoy the process of the interview? What strategic value might he win or lose in the process?"

But... there's nothing in those considerations that is outside the realm of basic expected utility theory and the model of decision analysis that this example is given as a way to refute. In particular, the observation that the low chance of success ought not deter a job seeker if the reward is great enough is the bread and butter of expected utility calculations (e.g. drawing for an inside straight in poker is a long shot and usually a bad decision, but it might be worthwhile if the pot is large enough in comparison to the bet required to stay in a hand--the expected utility is positive, even if the chance of success is low). - John (a prof in the social sciences)
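The pot-odds arithmetic behind the poker analogy can be made concrete. A minimal sketch, with all numbers (hit probability, pot size, bet size) invented purely for illustration:

```python
# Expected-utility check for a long-shot draw, as in the poker analogy above.
# All numbers here are hypothetical, chosen only to illustrate the arithmetic.

p_hit = 4 / 47        # chance of completing an inside straight on the next card
pot = 500             # chips already in the pot
bet = 20              # cost to stay in the hand

# Expected value of calling: win the pot with probability p_hit,
# lose the bet otherwise.
ev_call = p_hit * pot - (1 - p_hit) * bet

print(f"EV of calling: {ev_call:.2f} chips")
```

Even though the draw fails about 91% of the time, the expected value is positive because the pot is large relative to the bet, which is exactly John's point about low-probability, high-reward decisions.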

Thanks for the comment. The idea was about complexity. Even when starting from the assumptions of expected utility, it is not simple to nail down the details. Second-order effects (joy of the interview, non-linear effects of the process, etc.) make the picture even fuzzier. There is much less knowledge about the parameters than it feels at first glance. These points should be clarified in the article, though. YechezkelZilber 15:00, 9 November 2007 (UTC)
John, to add to YechezkelZilber's point, the ludic fallacy highlights our human tendency to simplify the complexities of life, specifically in the form of using games and gambling (dice, poker, etc.) as the metaphor. However, unlike in such games, the probabilities in life are dynamic and the rules constantly change. For poker to be an accurate metaphor, you would need to randomly alter the rules (adding cards to the deck, changing the order of hands, etc.). Although all those considerations (listed in the example) may lie within the realm of basic utility theory, real life provides results that are beyond our ability to consider and therefore can't be measured for their utility until after they occur.Herda050 00:50, 13 November 2007 (UTC)

The formulation of this "fallacy" is erroneous -- it's a pragmatic fallacy, not a logical fallacy. I'm not sure if the examples are poorly interpreted or recounted, but neither is a logical problem. In the first example, we are told to _assume_ that the coin is fair (i.e., 50/50 heads/tails) for the purposes of the thought experiment. To then, at the end of the experiment, call into question that premise is contrary to the very point of the thought experiment. If the premise is true, then the good doctor is right; if it's false, then we really have no way of evaluating the truth or falsity of either the doctor's or Fat Tony's statements. Regardless, I don't think anyone is really fooled by that sort of fallacy -- if a real-life coin DID come up heads 99 times in a row, I would be suspicious of its fairness, but is there any idiot who wouldn't?
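That suspicion can be put in numbers. A hedged sketch, assuming a made-up prior (one-in-a-million chance the coin is rigged) and a made-up rigged-coin bias (99% heads), shows why almost any prior doubt about fairness swamps the fair-coin hypothesis after 99 straight heads:

```python
# How strongly 99 consecutive heads favours a rigged coin over a fair one.
# The rigged coin's heads probability (0.99) and the prior (one-in-a-million
# chance of rigging) are assumptions invented for this sketch.

p_data_fair = 0.5 ** 99     # P(99 heads | fair coin): astronomically small
p_data_rigged = 0.99 ** 99  # P(99 heads | heavily biased coin)

prior_rigged = 1e-6
prior_fair = 1 - prior_rigged

# Bayes' rule: posterior = prior * likelihood, normalised over both hypotheses.
posterior_rigged = (prior_rigged * p_data_rigged) / (
    prior_rigged * p_data_rigged + prior_fair * p_data_fair
)

print(f"P(rigged | 99 heads) = {posterior_rigged}")
```

Even starting from near-certainty that the coin is fair, the posterior probability of rigging ends up essentially 1, which is Fat Tony's intuition made formal.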

In the second example, it's another bit of argumentative trickery to try and persuade us that we can't fit the model to life. Yes, there are unknowns and complexities in real life. That has nothing to do with a model and its applicability to real life. A good portion of modeling theory is dedicated to just this problem: estimating the scope and effect of uncertain circumstances that may affect the model. Simply because his model might suck is no reason to doubt the applicability of a model in general. Similarly, we have well-understood models for things like, say, paths of projectiles in normal earth conditions. But oh no! We neglect to take into account complexities of the gravitational effects of the moon, or a nearby comet! Regardless, any competent physicist (modeler) could give you a reasonably accurate idea of where your projectile will land, given some basic initial conditions and measurements. And in game theory, it's the same situation -- there are always going to be factors that affect probabilities chaotically -- the bit of grease on the Ace of Spades, or a slight ridge in your coin-flipping thumbnail. We just happen to be reasonably comfortable with making guesses based on inductive models.

And, of course, it is induction that is at the heart of this argument (as the author undoubtedly recognizes, with a title like "Black Swan"). If the Ludic fallacy is a false belief that games or models apply to real life, and our models are merely inductive representations of the real thing, then the exact same critique can be applied to any inductive (i.e., synthetic) knowledge. Unfortunately for this "fallacy", David Hume discovered that some time ago.

76.19.65.187 (talk) 22:50, 8 April 2008 (UTC)

I haven't read the book yet, nor do I know very much about philosophy, so I could be completely off here. However, the first example seems to completely ignore Bayesian statistics. We are told to assume that the coin is fair; that is, we can assume we have a prior distribution of 0.5 heads and 0.5 tails. Laplace's rule of succession offers a mathematically rigorous way to update that prior based on the outcomes of the 99 flips; it turns out that the posterior probability of the hundredth flip coming up heads will be 100/101. The reason Dr. John doesn't agree with Fat Tony is not because he is committing the ludic fallacy - it's because he should be using a better model. Stochastic (talk) 03:53, 1 June 2008 (UTC)
Whoops, I mean we can assume a uniform prior for the probability of getting heads. Stochastic (talk) 16:53, 1 June 2008 (UTC)
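The 100/101 figure quoted above can be checked directly. A small sketch of Laplace's rule of succession (the function name is mine, not standard library API): with a uniform Beta(1,1) prior on the heads probability, the predictive probability of heads after h heads in n flips is (h + 1) / (n + 2).

```python
from fractions import Fraction

# Laplace's rule of succession: with a uniform (Beta(1,1)) prior on the
# heads probability, after observing `heads` heads in `flips` flips, the
# predictive probability of heads on the next flip is (heads + 1) / (flips + 2).

def rule_of_succession(heads, flips):
    return Fraction(heads + 1, flips + 2)

p_next = rule_of_succession(99, 99)  # 99 heads observed in 99 flips
print(p_next)
# → 100/101
```

This matches Stochastic's posterior of 100/101 for the hundredth flip coming up heads.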

Update

Hi, I've had a go at improving this page, taking some information from the book to help.

I realise that using the Red/Black question is not ideal, as the idea of the ludic fallacy is that casino logic doesn't apply to real life, but I thought it would help explain the idea of the coin being loaded —Preceding unsigned comment added by Horrisgoesskiing (talk • contribs) 11:50, 14 March 2008 (UTC)


The text currently contains the following:

"By utilizing your colleague's analogy going forward, you don't understand that there could be a far greater or far lesser chance of making the train, but you think you know what your chances of making the train are, and in reality you now have a far greater or lesser chance of getting home on time. The future unknown risks involve the consequences of consistently getting home later than expected."

There's no referent present for this "colleague's analogy", etc. I don't know the "train story" (which this seems to relate to) so I can't fix things.

Apologies for the anon post. —Preceding unsigned comment added by 75.28.162.152 (talk) 03:25, 3 April 2008 (UTC)

I too became confused about "colleague's analogy" and trains. I don't know the "train story" so I can't fix this orphaned reference. —Preceding unsigned comment added by 67.102.38.38 (talk) 16:44, 23 May 2008 (UTC)