Law of total variance
From Wikipedia, the free encyclopedia
In probability theory, the law of total variance or variance decomposition formula states that if X and Y are random variables on the same probability space, and the variance of X is finite, then

Var(X) = E[ Var(X | Y) ] + Var( E(X | Y) ).
In language perhaps better known to statisticians than to probabilists, the two terms are the "unexplained component" and the "explained component" of the variance (cf. explained variation).
The nomenclature in this article's title parallels the phrase law of total probability. Some writers on probability call this the "conditional variance formula" or use other names.
(The conditional expected value E( X | Y ) is a random variable in its own right, whose value depends on the value of Y. Notice that the conditional expected value of X given the event Y = y is a function of y (this is where adherence to the conventional rigidly case-sensitive notation of probability theory becomes important!). If we write E( X | Y = y) = g(y) then the random variable E( X | Y ) is just g(Y). Similar comments apply to the conditional variance.)
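The decomposition can be checked numerically on a small discrete joint distribution. The distribution below is a hypothetical example chosen for illustration, not one taken from the article; the same check works for any finite pmf.

```python
# A minimal numerical check of Var(X) = E[Var(X|Y)] + Var(E[X|Y]) on a small
# discrete joint distribution (a hypothetical example, not from the article).
pmf = {  # keys are (x, y) pairs, values are joint probabilities
    (0, 0): 0.25, (2, 0): 0.25,    # given Y = 0, X is 0 or 2, equally likely
    (10, 1): 0.25, (14, 1): 0.25,  # given Y = 1, X is 10 or 14, equally likely
}

# Total variance of X, computed from the marginal distribution of X
EX = sum(p * x for (x, y), p in pmf.items())
var_X = sum(p * (x - EX) ** 2 for (x, y), p in pmf.items())

# P(Y = y), conditional mean E(X | Y = y) and conditional variance Var(X | Y = y)
def cond_stats(y0):
    w = {x: p for (x, y), p in pmf.items() if y == y0}
    total = sum(w.values())                      # P(Y = y0)
    m = sum(x * p for x, p in w.items()) / total
    v = sum((x - m) ** 2 * p for x, p in w.items()) / total
    return total, m, v

stats = [cond_stats(y) for y in {y for (_, y) in pmf}]
E_var = sum(py * v for py, m, v in stats)                  # E[Var(X|Y)]
E_mean = sum(py * m for py, m, v in stats)                 # E[E(X|Y)] = E[X]
var_E = sum(py * (m - E_mean) ** 2 for py, m, v in stats)  # Var(E[X|Y])

print(var_X, E_var + var_E)  # both sides equal 32.75 here
```

Note that E(X | Y), being a function of Y, really is a random variable here: it takes the value 1 when Y = 0 and 12 when Y = 1, and its variance is the "explained" term.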
Proof
The law of total variance can be proved using the law of total expectation. First,

Var(X) = E[X²] − (E[X])²

from the definition of variance. Then we apply the law of total expectation to each term by conditioning on the random variable Y:

Var(X) = E[ E(X² | Y) ] − (E[ E(X | Y) ])².

Now we rewrite the conditional second moment of X in terms of its variance and first moment:

Var(X) = E[ Var(X | Y) + (E(X | Y))² ] − (E[ E(X | Y) ])².

Since the expectation of a sum is the sum of expectations, we can now regroup the terms:

Var(X) = E[ Var(X | Y) ] + ( E[ (E(X | Y))² ] − (E[ E(X | Y) ])² ).

Finally, we recognize the terms in parentheses as the variance of the conditional expectation E(X | Y):

Var(X) = E[ Var(X | Y) ] + Var( E(X | Y) ).
The square of the correlation
In case the graph of the conditional expected value is a straight line in Y, i.e., if we have

E(X | Y) = a + bY,

then the explained component of the variance divided by the total variance is just the square of the correlation between X and Y, i.e., in that case,

Var( E(X | Y) ) / Var(X) = Corr(X, Y)².
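The linear case can also be checked numerically. The construction below is a hypothetical example (not from the article): taking X = 2Y + e with noise e independent of Y forces the conditional mean E(X | Y) = 2Y to be exactly a straight line, so the explained-variance ratio should coincide with the squared correlation.

```python
from itertools import product

# Hypothetical linear case: X = 2*Y + e with independent noise e = ±1, so the
# conditional mean E(X | Y) = 2Y is a straight line in Y, and the ratio
# Var(E(X|Y)) / Var(X) should equal Corr(X, Y)^2.
ys, es = [-1, 0, 1], [-1, 1]
joint = [(2 * y + e, y) for y, e in product(ys, es)]  # six equally likely points
n = len(joint)

EX = sum(x for x, _ in joint) / n
EY = sum(y for _, y in joint) / n
var_X = sum((x - EX) ** 2 for x, _ in joint) / n
var_Y = sum((y - EY) ** 2 for _, y in joint) / n
cov_XY = sum((x - EX) * (y - EY) for x, y in joint) / n
rho2 = cov_XY ** 2 / (var_X * var_Y)  # squared correlation of X and Y

# Explained component: variance of the conditional mean E(X | Y) = 2Y
var_cond_mean = sum((2 * y - EX) ** 2 for _, y in joint) / n

print(rho2, var_cond_mean / var_X)  # both equal 8/11 here
```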
Higher moments
A similar law for the third central moment μ3 says

μ3(X) = E[ μ3(X | Y) ] + μ3( E(X | Y) ) + 3 Cov( E(X | Y), Var(X | Y) ).
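The third-moment identity can be verified in the same way as the variance decomposition. The joint pmf below is again a hypothetical example chosen so that the conditional distributions have differing means, variances, and skewness.

```python
# A numerical check of the third-central-moment analogue,
#   mu3(X) = E[mu3(X|Y)] + mu3(E(X|Y)) + 3 Cov(E(X|Y), Var(X|Y)),
# on a small hypothetical discrete joint pmf (not from the article).
pmf = {(0, 0): 1/3, (3, 0): 1/6, (5, 1): 1/4, (8, 1): 1/4}  # keys: (x, y)

def moments(weights):
    """Mean, variance and third central moment of a {value: prob} dict."""
    total = sum(weights.values())
    m = sum(x * p for x, p in weights.items()) / total
    v = sum((x - m) ** 2 * p for x, p in weights.items()) / total
    mu3 = sum((x - m) ** 3 * p for x, p in weights.items()) / total
    return m, v, mu3

# Left-hand side: mu3 of the marginal distribution of X
marg_x = {}
for (x, y), p in pmf.items():
    marg_x[x] = marg_x.get(x, 0.0) + p
_, _, mu3_X = moments(marg_x)

# P(Y = y) and the conditional mean, variance, mu3 for each value of Y
py, cstats = {}, {}
for y in {y for (_, y) in pmf}:
    w = {x: p for (x, yy), p in pmf.items() if yy == y}
    py[y] = sum(w.values())
    cstats[y] = moments(w)

E_mu3 = sum(py[y] * cstats[y][2] for y in py)                # E[mu3(X|Y)]
_, _, mu3_mean = moments({cstats[y][0]: py[y] for y in py})  # mu3(E(X|Y))
Em = sum(py[y] * cstats[y][0] for y in py)
Ev = sum(py[y] * cstats[y][1] for y in py)
cov_mv = sum(py[y] * (cstats[y][0] - Em) * (cstats[y][1] - Ev) for y in py)

print(mu3_X, E_mu3 + mu3_mean + 3 * cov_mv)  # both equal 2.03125 here
```

Unlike the variance case, a covariance term between the conditional mean and the conditional variance appears, which is why the numeric check needs both conditional statistics.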
For higher cumulants, a simple and elegant generalization exists. See law of total cumulance.
References
- Billingsley, Patrick (1995). Probability and Measure. New York: John Wiley & Sons. ISBN 0-471-00710-2. (Problem 34.10(b))
- Weiss, Neil (2005). A Course in Probability. Addison Wesley. ISBN 0-201-77471-2.