Talk:Statistical independence
From Wikipedia, the free encyclopedia
[edit] What does this notation mean?
In the definition of independent rv's, the notations [X ≤ a] and [Y ≤ b] are used. What do they mean? I don't think I've seen them. - Taxman 14:18, Apr 28, 2005 (UTC)
- It refers to the event of the random variable X taking on a value less than a. --MarkSweep 16:39, 28 Apr 2005 (UTC)
- I am puzzled. If you didn't understand this notation, what did you think was the definition of independence of random variables? Michael Hardy 19:20, 28 Apr 2005 (UTC)
- Well actually that meant I didn't understand it. I had learned it in the past and was familiar with the general concept only. I think we always used P(X≤ a) for consistency, but I could be wrong. In any case the switch from the P notation in the section before to this notation could use some explanation for the village idiots like me. :) I'll see what I can do without adding clutter. - Taxman 20:26, Apr 28, 2005 (UTC)
- It's not a switch in notation at all: it's not two different notations for the same thing; it's different notations for different things. The notation [X ≤ a] does not mean the probability that X ≤ a; it just means the event that X ≤ a. The probability that X ≤ a would still be denoted P(X ≤ a) or P[X ≤ a] or Pr(X ≤ a) or the like. Michael Hardy 20:51, 28 Apr 2005 (UTC)
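The distinction being made here can be shown concretely in code. A small sketch (a fair die, my own illustration, not from the article): the event [X ≤ a] is a set of outcomes, while the probability P(X ≤ a) is a number that the measure assigns to that set:

```python
from fractions import Fraction

# Sample space of one fair die roll, with the uniform measure.
omega = {1, 2, 3, 4, 5, 6}

def P(event):
    """Probability of an event under the uniform measure."""
    return Fraction(len(event), len(omega))

a = 3
event = {x for x in omega if x <= a}  # the event [X <= a]: a SET of outcomes
prob = P(event)                       # the probability P(X <= a): a NUMBER
print(sorted(event))  # [1, 2, 3]
print(prob)           # 1/2
```

So `event` and `prob` are objects of entirely different types, which is exactly the point of the comment above.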
I just wrote an edit summary that says:
- No wonder User:taxman was confused: the statement here was simply incorrect. I've fixed it by changing the word "probability" to "event". Michael Hardy 20:58, 28 Apr 2005 (UTC)
Then I looked at the edit history and saw that Taxman was the one who inserted the incorrect language. So that really doesn't explain it. Michael Hardy 20:56, 28 Apr 2005 (UTC)
- ... and it does not make sense to speak of two probabilities being independent of each other. Events can be independent of each other, and random variables can be independent of each other, but probabilities cannot. Michael Hardy 20:58, 28 Apr 2005 (UTC)
[edit] On the role of the probability measure in the independence of events
The definition of independence of events is given in probability theory only relative to a fixed probability measure. This means that two events can be independent with respect to one probability measure and dependent with respect to another. We illustrate this possibility with a simple example based on the Bernoulli scheme, i.e. repeated independent trials, each of which has two outcomes, S and F ('success' and 'failure'). Assume that , where ; then . Conduct three independent trials and consider the events .
It is obvious that
Hence,
It is now easy to see that the equality
holds only in the trivial cases and also at p = 1/2, i.e. in the symmetric case.
Note the conclusion: this means that the "same" events A and B are independent only at , while for other values of the probability of success they are not independent.
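The formulas above are missing, so here is a minimal sketch of the same phenomenon with a standard textbook pair of events (my choice, not necessarily the ones intended above): in two Bernoulli(p) trials, let A = "success on the first trial" and B = "both trials give the same outcome". A direct check shows A and B are independent exactly at p ∈ {0, 1/2, 1}:

```python
from fractions import Fraction
from itertools import product

def is_independent(p):
    """In two Bernoulli(p) trials, test whether A = 'success on trial 1'
    and B = 'both trials agree' satisfy P(A & B) == P(A) * P(B)."""
    q = 1 - p
    # probability of each outcome (w1, w2), where 1 = success, 0 = failure
    weight = {w: (p if w[0] else q) * (p if w[1] else q)
              for w in product((0, 1), repeat=2)}
    A = {w for w in weight if w[0] == 1}       # success on trial 1
    B = {w for w in weight if w[0] == w[1]}    # both trials agree
    P = lambda E: sum(weight[w] for w in E)
    return P(A & B) == P(A) * P(B)

print(is_independent(Fraction(1, 2)))  # True:  independent at p = 1/2
print(is_independent(Fraction(1, 3)))  # False: dependent at p = 1/3
```

Algebraically: P(A) = p, P(B) = p² + (1−p)², P(A ∩ B) = p², and the product rule holds only at the trivial values and the symmetric case p = 1/2, matching the conclusion above.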
[edit] Examples of the misconception
Misunderstanding of this fact gives rise to the misconception that the probability-theoretic concept of independence of events contains something more than the trivial relation (1). One does not have to look far for examples of this misconception. It suffices to open the page Statistical independence and look at its preamble:
- "In probability theory, to say that two events are independent intuitively means that knowing whether one of them occurs makes it neither more probable nor less probable that the other occurs. For example, the event of getting a "1" when a die is thrown and the event of getting a "1" the second time it is thrown are independent. Similarly, when we assert that two random variables are independent, we intuitively mean that knowing something about the value of one of them does not yield any information about the value of the other. For example, the number appearing on the upward face of a die the first time it is thrown and that appearing the second time are independent."
From the point of view of common sense this all seems correct, but not from the point of view of probability theory, in which independence of two events means nothing beyond relation (1). This is probably the original of the Russian version. The original differs favourably from its Russian copy in one detail: the word "intuitively". But this pleasant detail cannot save the position: in probability theory nothing more than relation (1) is put into the concept of independence of events, even at an intuitive level.
The same story holds for almost all of the other-language mirrors of this English page. The Russian page ru:Независимость (теория вероятностей) is a direct copy of the English original. Here is the preamble of the Russian page:
- "In probability theory, random events are called independent if knowing whether one of them has occurred carries no additional information about the other. Similarly, random variables are called independent if knowing the value taken by one of them gives no additional information about the possible values of the other." (translated from the Russian)
Of the eight "mirrors" in different languages, only two have avoided this misconception: the Italian and the Polish ones, and only because they wrote very short articles containing nothing but relation (1), thereby deftly and successfully avoiding commentary.
[edit] References
Stoyanov, Jordan (1999). Counterexamples in Probability Theory. Factorial Publishing House. 288 pp. ISBN 5-88688-041-0 (pp. 29–30).
- Helgus 23:58, 25 May 2006 (UTC)
The only thing that confuses me in the text of the definition is one detail: the use of the terms "knowledge", "knowing" and "information". These terms do not belong to the concepts of probability theory; they are not defined in that theory. In my humble opinion, independence of events is better defined without these terms, like so: "two events are independent intuitively means that the occurrence of one of them [say A] makes it neither more probable nor less probable that the other [say B] occurs". And independence of random variables is better defined so: "two random variables are independent intuitively means that the value of one of them [say xi] makes no value of the other [say xi'] either more probable or less probable". - Helgus 23:49, 26 May 2006 (UTC)
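For what it is worth, the intuitive phrasing proposed above matches the formal definition exactly. Presumably relation (1) discussed in this thread is the usual product rule P(A ∩ B) = P(A)P(B); whenever P(A) > 0, "the occurrence of A makes B neither more nor less probable" is equivalent to it:

```latex
P(B \mid A) \;=\; \frac{P(A \cap B)}{P(A)} \;=\; P(B)
\quad\Longleftrightarrow\quad
P(A \cap B) \;=\; P(A)\,P(B).
```

The product-rule form has the advantage of being symmetric in A and B and of requiring no condition on P(A), which is presumably why it is taken as the definition.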
[edit] Correction ?
Reading the article, I feel that the paragraphs for the continuous and discrete cases are swapped in the section "Conditionally independent random variables": the formula with inequalities pertains to the continuous case, while the formula with equality pertains to the discrete case. 134.157.16.38 08:03, 19 June 2007 (UTC) Francis Dalaudier.
[edit] independent vs. uncorrelated
We had better explain the relationship between these two concepts here; I often find them confusing. Jackzhp (talk) 14:17, 17 April 2008 (UTC)
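As a starting point for such an explanation: uncorrelated is strictly weaker than independent. A minimal sketch of the standard textbook counterexample (my construction, not from the article): take X uniform on {−1, 0, 1} and Y = X², then Cov(X, Y) = 0 even though Y is a function of X:

```python
from fractions import Fraction

# X uniform on {-1, 0, 1} and Y = X**2: the standard example of random
# variables that are uncorrelated yet NOT independent.
support = [-1, 0, 1]
p = Fraction(1, 3)  # probability of each value of X

E = lambda f: sum(p * f(x) for x in support)  # expectation of f(X)

# Covariance of X and Y = X**2:  E[X * X**2] - E[X] * E[X**2]
cov = E(lambda x: x**3) - E(lambda x: x) * E(lambda x: x**2)
print(cov)  # 0  -> uncorrelated

# But not independent: X = 1 together with Y = 0 is impossible,
# while P(X = 1) and P(Y = 0) are each 1/3.
P_X1 = p                # P(X = 1)
P_Y0 = p                # P(Y = 0) = P(X = 0)
P_both = Fraction(0)    # P(X = 1 and Y = 0)
print(P_both == P_X1 * P_Y0)  # False -> dependent
```

The converse direction does hold: independence implies zero correlation (when the moments exist), and for jointly Gaussian variables the two notions coincide.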