Kolmogorov's inequality

In probability theory, Kolmogorov's inequality is a so-called "maximal inequality" that bounds the probability that the running maximum of the partial sums of a finite collection of independent random variables exceeds some specified level. The inequality is named after the Russian mathematician Andrey Kolmogorov.

Statement of the inequality

Let X1, ..., Xn : Ω → R be independent random variables defined on a common probability space (Ω, F, Pr), with expected value E[Xk] = 0 and variance Var[Xk] < +∞ for k = 1, ..., n. Then, for each λ > 0,

\Pr \left(\max_{1\leq k\leq n} | S_k |\geq\lambda\right)\leq \frac{1}{\lambda^2} \operatorname{Var} [S_n] \equiv \frac{1}{\lambda^2}\sum_{k=1}^n \operatorname{Var}[X_k],

where Sk = X1 + ... + Xk, and the identity Var[Sn] = Var[X1] + ... + Var[Xn] uses the independence of the Xk.
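
The inequality is easy to probe numerically. The following is a minimal Monte Carlo sketch (illustrative, not part of the original article); it assumes independent standard normal summands, for which Var[Sn] = n, and all variable names are ours.

    import numpy as np

    # Monte Carlo check of Kolmogorov's inequality:
    #   Pr(max_{1<=k<=n} |S_k| >= lambda)  <=  Var[S_n] / lambda^2
    # for i.i.d. standard normal X_k (so Var[S_n] = n).
    rng = np.random.default_rng(0)
    n, trials, lam = 50, 100_000, 10.0

    X = rng.standard_normal((trials, n))   # independent, mean 0, variance 1
    S = np.cumsum(X, axis=1)               # partial sums S_1, ..., S_n
    max_abs_S = np.abs(S).max(axis=1)      # max_k |S_k|, one value per trial

    empirical = (max_abs_S >= lam).mean()  # left-hand side, estimated
    bound = n / lam**2                     # right-hand side: Var[S_n]/lambda^2
    print(empirical, bound)                # the empirical probability stays below the bound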

Proof

The following argument is due to Kareem Amin and employs discrete martingales. As argued in the discussion of Doob's martingale inequality, the sequence S_1, S_2, \dots, S_n is a martingale. Set S_0 = 0, and define (Z_i)_{i=0}^n as follows. Let Z_0 = 0, and

Z_{i+1} = \left\{ \begin{array}{ll}
S_{i+1} & \text{ if } \displaystyle \max_{1 \leq j \leq i} |S_j| < \lambda \\ Z_i & \text{ otherwise}
\end{array}
\right.

for all i. Then (Z_i)_{i=0}^n is also a martingale: it agrees with (S_i) until the first time |S_j| reaches λ, and is constant from then on. Since E[S_i | S_{i−1}] = S_{i−1} by the martingale property, and E[E[X | Y]] = E[X] by the law of total expectation,

\begin{align}
\sum_{i=1}^n \text{E}[ (S_i - S_{i-1})^2] &= \sum_{i=1}^n \text{E}[ S_i^2 - 2 S_i S_{i-1} + S_{i-1}^2 ] \\
&= \sum_{i=1}^n \text{E}\left[ S_i^2 - 2 \text{E}[ S_i S_{i-1} | S_{i-1} ]  + \text{E}[S_{i-1}^2 | S_{i-1}] \right] \\
&= \sum_{i=1}^n \text{E}\left[ S_i^2 - 2 \text{E}[ S^2_{i-1} | S_{i-1} ]  + \text{E}[S_{i-1}^2 | S_{i-1}] \right] \\
&= \sum_{i=1}^n \left( \text{E}[S_i^2] - \text{E}[S_{i-1}^2] \right) \\
&= \text{E}[S_n^2] - \text{E}[S_0^2] = \text{E}[S_n^2].
\end{align}
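
The telescoping in the last step reflects the orthogonality of martingale increments, which is worth recording explicitly: for i < j,

\text{E}[(S_i - S_{i-1})(S_j - S_{j-1})] = \text{E}\left[ (S_i - S_{i-1}) \, \text{E}[S_j - S_{j-1} \mid S_1, \dots, S_{j-1}] \right] = 0,

so all cross terms vanish when \text{E}[S_n^2] is expanded as \text{E}\left[ \left( \sum_{i=1}^n (S_i - S_{i-1}) \right)^2 \right].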

The same identity holds for the martingale (Z_i)_{i=0}^n. Thus

\begin{align}
\text{Pr}\left( \max_{1 \leq i \leq n} |S_i| \geq \lambda\right) &=
\text{Pr}[|Z_n| \geq \lambda] \\
&\leq \frac{1}{\lambda^2} \text{E}[Z_n^2]
=\frac{1}{\lambda^2} \sum_{i=1}^n \text{E}[(Z_i - Z_{i-1})^2] \\
&\leq \frac{1}{\lambda^2} \sum_{i=1}^n \text{E}[(S_i - S_{i-1})^2]
=\frac{1}{\lambda^2} \text{E}[S_n^2] = \frac{1}{\lambda^2} \text{Var}[S_n].
\end{align}

The first equality holds because (Z_i) tracks (S_i) until the first index at which |S_j| reaches λ and is frozen from then on, so |Z_n| ≥ λ exactly when max_{1 ≤ i ≤ n} |S_i| ≥ λ. The first inequality is Chebyshev's inequality applied to Z_n, which has mean zero because (Z_i) is a martingale started at Z_0 = 0. The second inequality holds because each increment Z_i − Z_{i−1} is either S_i − S_{i−1} or 0.
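
The stopped sequence (Z_i) is also easy to simulate. The sketch below (again illustrative, with our own variable names and standard normal summands as before) constructs Z path by path and checks both the event identity {|Z_n| ≥ λ} = {max_i |S_i| ≥ λ} and, up to Monte Carlo error, the increment identity E[Z_n^2] = Σ_i E[(Z_i − Z_{i−1})^2].

    import numpy as np

    # Build the stopped martingale Z from the proof:
    #   Z_0 = 0;  Z_{i+1} = S_{i+1} while max_{1<=j<=i} |S_j| < lambda,
    #   and Z is frozen once some |S_j| reaches lambda.
    rng = np.random.default_rng(1)
    n, trials, lam = 20, 200_000, 5.0

    X = rng.standard_normal((trials, n))
    S = np.cumsum(X, axis=1)               # S[:, k-1] holds S_k

    Z = np.zeros((trials, n + 1))          # Z[:, i] holds Z_i; Z_0 = 0
    running_max = np.zeros(trials)         # max_{1<=j<=i} |S_j| seen so far
    for i in range(n):
        frozen = running_max >= lam
        Z[:, i + 1] = np.where(frozen, Z[:, i], S[:, i])
        running_max = np.maximum(running_max, np.abs(S[:, i]))

    # The two events coincide exactly, path by path:
    assert np.array_equal(np.abs(S).max(axis=1) >= lam, np.abs(Z[:, n]) >= lam)

    # Orthogonality of increments: the two means agree up to Monte Carlo error.
    inc = np.diff(Z, axis=1)               # Z_i - Z_{i-1}, i = 1, ..., n
    print((inc ** 2).sum(axis=1).mean(), (Z[:, n] ** 2).mean())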

References

  • Billingsley, Patrick (1995). Probability and Measure. New York: John Wiley & Sons. ISBN 0-471-00710-2. (Theorem 22.4)
  • Feller, William (1968) [1950]. An Introduction to Probability Theory and Its Applications, Vol. 1 (3rd ed.). New York: John Wiley & Sons. xviii+509 pp. ISBN 0-471-25708-7.

This article incorporates material from Kolmogorov's inequality on PlanetMath, which is licensed under the GFDL.
