Kolmogorov's inequality
In probability theory, Kolmogorov's inequality is a so-called "maximal inequality" that bounds the probability that the partial sums of a finite collection of independent random variables exceed some specified value. The inequality is named after the Russian mathematician Andrey Kolmogorov.
Statement of the inequality
Let X1, ..., Xn : Ω → R be independent random variables defined on a common probability space (Ω, F, Pr), with expected value E[Xk] = 0 and variance Var[Xk] < +∞ for k = 1, ..., n. Then, for each λ > 0,

$$\Pr\left( \max_{1 \le k \le n} |S_k| \ge \lambda \right) \le \frac{1}{\lambda^2} \operatorname{Var}[S_n] = \frac{1}{\lambda^2} \sum_{k=1}^n \operatorname{Var}[X_k],$$

where Sk = X1 + ... + Xk.
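For intuition: Chebyshev's inequality bounds only Pr(|Sn| ≥ λ), while Kolmogorov's inequality bounds the probability that any of the partial sums gets that large, at the same cost of Var[Sn]/λ². The following Monte Carlo sketch (Python; the uniform distribution, the sample sizes, and the value of λ are illustrative choices, not part of the theorem) estimates the left-hand side and compares it with the bound:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, lam = 50, 100_000, 8.0

# X_k ~ Uniform(-1, 1): mean 0, variance 1/3, so Var[S_n] = n/3.
X = rng.uniform(-1.0, 1.0, size=(trials, n))
S = np.cumsum(X, axis=1)                       # partial sums S_1, ..., S_n per trial

p_max = np.mean(np.abs(S).max(axis=1) >= lam)  # estimate of Pr(max_k |S_k| >= lambda)
bound = (n / 3.0) / lam**2                     # Kolmogorov bound: Var[S_n] / lambda^2

print(f"estimated Pr(max |S_k| >= {lam}): {p_max:.4f}")
print(f"Kolmogorov bound Var[S_n]/lambda^2: {bound:.4f}")
```

With these (arbitrary) parameters the empirical probability stays below the bound of roughly 0.26, as the inequality guarantees.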
Proof
The following argument is due to Kareem Amin and employs discrete martingales. As argued in the discussion of Doob's martingale inequality, the sequence S_1, S_2, ..., S_n is a martingale. Set S_0 = 0, so that E[S_i] = 0 for all i. Define (Z_i)_{i=0}^n as follows. Let Z_0 = 0, and

$$Z_{i+1} = \begin{cases} S_{i+1} & \text{if } \displaystyle\max_{1 \le j \le i} |S_j| < \lambda, \\ Z_i & \text{otherwise,} \end{cases}$$

for all i; in words, (Z_i) follows (S_i) until the first time a partial sum reaches λ in absolute value, and is frozen at that value afterwards. Then (Z_i)_{i=0}^n is also a martingale. Since E[S_i] = E[S_{i-1}] for all i and E[E[X | Y]] = E[X] by the law of total expectation, we have E[S_i S_{i-1}] = E[S_{i-1} E[S_i | S_{i-1}]] = E[S_{i-1}^2], and therefore

$$\sum_{i=1}^n E\big[(S_i - S_{i-1})^2\big] = \sum_{i=1}^n \Big( E[S_i^2] - 2\,E[S_i S_{i-1}] + E[S_{i-1}^2] \Big) = \sum_{i=1}^n \Big( E[S_i^2] - E[S_{i-1}^2] \Big) = E[S_n^2] - E[S_0^2] = E[S_n^2].$$

The same is true for (Z_i)_{i=0}^n, so E[Z_n^2] equals the sum of E[(Z_i − Z_{i−1})^2]. Each increment of Z is either the corresponding increment of S or zero, so (Z_i − Z_{i−1})^2 ≤ (S_i − S_{i−1})^2 pointwise. Moreover, the events {max_{1≤i≤n} |S_i| ≥ λ} and {|Z_n| ≥ λ} coincide: if some partial sum reaches λ in absolute value, Z_n is frozen at the first such value, and otherwise Z_n = S_n with |S_n| < λ. Thus, since E[Z_n] = Z_0 = 0, Chebyshev's inequality gives

$$\Pr\Big( \max_{1 \le i \le n} |S_i| \ge \lambda \Big) = \Pr\big( |Z_n| \ge \lambda \big) \le \frac{1}{\lambda^2} E[Z_n^2] = \frac{1}{\lambda^2} \sum_{i=1}^n E\big[(Z_i - Z_{i-1})^2\big] \le \frac{1}{\lambda^2} \sum_{i=1}^n E\big[(S_i - S_{i-1})^2\big] = \frac{1}{\lambda^2} E[S_n^2] = \frac{1}{\lambda^2} \operatorname{Var}[S_n].$$
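The freezing construction in the proof is easy to test numerically. This sketch (same assumed simulation setup as above; all variable names are mine, not from the proof) builds Z_n from simulated paths and checks the two facts the argument uses: the events {max_i |S_i| ≥ λ} and {|Z_n| ≥ λ} coincide, and freezing can only shrink the second moment, E[Z_n^2] ≤ E[S_n^2]:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, lam = 50, 100_000, 8.0

X = rng.uniform(-1.0, 1.0, size=(trials, n))
S = np.cumsum(X, axis=1)

# Z follows S until |S| first reaches lam, then stays frozen at that value.
crossed = np.maximum.accumulate(np.abs(S) >= lam, axis=1)
first = np.argmax(crossed, axis=1)             # index of first crossing (0 if none)
Z_n = np.where(crossed[:, -1], S[np.arange(trials), first], S[:, -1])

# The two events coincide on every sample path ...
same = (np.abs(S).max(axis=1) >= lam) == (np.abs(Z_n) >= lam)
print("events agree on all paths:", same.all())

# ... and the frozen martingale has a smaller (empirical) second moment.
print(f"E[Z_n^2] ~ {np.mean(Z_n**2):.3f} <= E[S_n^2] ~ {np.mean(S[:, -1]**2):.3f}")
```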
See also
- Chebyshev's inequality
- Doob's martingale inequality
- Etemadi's inequality
- Landau–Kolmogorov inequality
- Markov's inequality
References
- Billingsley, Patrick (1995). Probability and Measure. New York: John Wiley & Sons. ISBN 0-471-00710-2. (Theorem 22.4)
- Feller, William (1968) [1950]. An Introduction to Probability Theory and Its Applications, Vol. 1 (3rd ed.). New York: John Wiley & Sons. xviii+509 pp. ISBN 0-471-25708-7.
This article incorporates material from Kolmogorov's inequality on PlanetMath, which is licensed under the GFDL.