Kolmogorov-Smirnov test

In statistics, the Kolmogorov-Smirnov test (often called the K-S test) is a goodness-of-fit test used to determine whether two underlying one-dimensional probability distributions differ, or whether an underlying probability distribution differs from a hypothesized distribution, in either case based on finite samples.

The one-sample KS test compares the empirical distribution function with the cumulative distribution function specified by the null hypothesis. The main applications are testing goodness of fit with the normal and uniform distributions. For normality testing, minor improvements made by Lilliefors lead to the Lilliefors test. In general, the Shapiro-Wilk and Anderson-Darling tests are more powerful alternatives to the Lilliefors test for testing normality[citation needed].

The two-sample KS test is one of the most useful and general nonparametric methods for comparing two samples, as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples.

Kolmogorov-Smirnov statistic

The empirical distribution function Fn for n iid observations Xi is defined as

F_n(x)={1 \over n}\sum_{i=1}^n I_{X_i\leq x}

where I_{X_i\leq x} is the indicator function.
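
As a concrete illustration, here is a minimal Python sketch of this definition (the function name ecdf is ours, chosen for illustration):

    import numpy as np

    def ecdf(sample, x):
        """Empirical distribution function F_n(x): the fraction of the
        n observations X_i satisfying X_i <= x."""
        sample = np.asarray(sample)
        return np.mean(sample <= x)

    print(ecdf([3.0, 1.0, 2.0], 2.5))  # 2 of the 3 observations are <= 2.5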

The Kolmogorov-Smirnov statistic for a given cumulative distribution function F(x) is

D_n=\sup_x |F_n(x)-F(x)|,

where sup S is the supremum of the set S. By the Glivenko-Cantelli theorem, if the sample comes from distribution F(x), then Dn converges to 0 almost surely. Kolmogorov strengthened this result by effectively providing the rate of this convergence (see below); Donsker's theorem provides a yet stronger result.
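
Since Fn jumps only at the observed values, the supremum over all x can be computed from the order statistics alone. A minimal Python sketch, assuming a continuous hypothesized CDF (the function name ks_statistic is ours):

    import numpy as np
    from scipy.stats import norm

    def ks_statistic(sample, cdf):
        """D_n = sup_x |F_n(x) - F(x)| for a continuous CDF F. Since F_n
        jumps only at the order statistics, the supremum is attained at
        (or just before) one of them."""
        x = np.sort(np.asarray(sample))
        n = len(x)
        f = cdf(x)                 # F evaluated at the order statistics
        i = np.arange(1, n + 1)
        return max(np.max(i / n - f), np.max(f - (i - 1) / n))

    # Example: a standard-normal sample tested against its true CDF.
    rng = np.random.default_rng(0)
    print(ks_statistic(rng.standard_normal(100), norm.cdf))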

Kolmogorov distribution

The Kolmogorov distribution is the distribution of the random variable

K=\sup_{t\in[0,1]}|B(t)|,

where B(t) is the Brownian bridge. The cumulative distribution function of K is given by

\operatorname{Pr}(K\leq x)=1-2\sum_{i=1}^\infty (-1)^{i-1} e^{-2i^2 x^2}=\frac{\sqrt{2\pi}}{x}\sum_{i=1}^\infty e^{-(2i-1)^2\pi^2/(8x^2)}.
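
The alternating series converges very quickly, so Pr(K ≤ x) is easy to evaluate numerically. A short Python sketch (kolmogorov_cdf is our name; scipy.special.kolmogorov, used as a cross-check, returns the survival function Pr(K > x)):

    import numpy as np
    from scipy.special import kolmogorov

    def kolmogorov_cdf(x, terms=100):
        """Pr(K <= x) via the alternating series above; truncating at a
        moderate number of terms is ample except very near x = 0."""
        if x <= 0:
            return 0.0
        i = np.arange(1, terms + 1)
        return 1.0 - 2.0 * np.sum((-1.0) ** (i - 1) * np.exp(-2.0 * i**2 * x**2))

    print(kolmogorov_cdf(1.0), 1.0 - kolmogorov(1.0))  # the two should agree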

Kolmogorov-Smirnov test

Under the null hypothesis that the sample comes from the hypothesized distribution F(x),

\sqrt{n}D_n\xrightarrow{n\to\infty}\sup_t |B(F(t))|

in distribution, where B(t) is the Brownian bridge.

If F is continuous then, under the null hypothesis, \sqrt{n}D_n converges in distribution to the Kolmogorov distribution, which does not depend on F. This result may also be known as the Kolmogorov theorem; see Kolmogorov's theorem for disambiguation.

The goodness-of-fit test, or Kolmogorov-Smirnov test, is constructed by using the critical values of the Kolmogorov distribution.

The null hypothesis is rejected at level α if

\sqrt{n}D_n>K_\alpha,\,

where Kα is found from

\operatorname{Pr}(K\leq K_\alpha)=1-\alpha.

The asymptotic power of this test is 1. If either the form or the parameters of F(x) are determined from the data Xi, the critical values determined in this way are invalid. In this case, Monte Carlo or other methods are required to determine the rejection level α.

A more familiar, equivalent form of the test, found in many references, rejects the null hypothesis when

D_n> \frac{K_\alpha}{\sqrt{n}}.
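
In practice the whole procedure is available in SciPy; a minimal sketch of the decision rule (scipy.stats.kstest returns Dn and a p-value, which for small samples may be computed exactly rather than from the asymptotic Kolmogorov distribution, so the two p-values below can differ slightly):

    import numpy as np
    from scipy.stats import kstest
    from scipy.special import kolmogorov

    rng = np.random.default_rng(1)
    sample = rng.standard_normal(200)
    n = len(sample)

    d_n, p_value = kstest(sample, "norm")   # test against the standard normal

    # Asymptotic p-value Pr(K > sqrt(n) * D_n); kolmogorov() is the
    # survival function of the Kolmogorov distribution.
    p_asymptotic = kolmogorov(np.sqrt(n) * d_n)

    alpha = 0.05
    print(d_n, p_value, p_asymptotic)
    print("reject H0" if p_value < alpha else "fail to reject H0")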


The Kolmogorov-Smirnov test may also be used to test whether two underlying one-dimensional probability distributions differ. In this case, the Kolmogorov-Smirnov statistic is

D_{n,n'}=\sup_x |F_n(x)-F_{n'}(x)|,

and the null hypothesis is rejected at level α if

\sqrt{\frac{n n'}{n + n'}}D_{n,n'}>K_\alpha.
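
A minimal sketch of the two-sample test using SciPy's implementation (scipy.stats.ks_2samp computes Dn,n' and its p-value; the samples and sizes here are arbitrary):

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(2)
    x = rng.standard_normal(100)        # sample of size n from N(0, 1)
    y = rng.normal(loc=0.5, size=120)   # sample of size n' from N(0.5, 1)

    d, p = ks_2samp(x, y)   # D_{n,n'} and its p-value
    print(d, p)             # reject at level alpha if p < alpha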

Setting confidence limits for the shape of a distribution function

While the Kolmogorov-Smirnov test is usually used to test whether a given F(x) is the underlying probability distribution of Fn(x), the procedure may be inverted to give confidence limits on F(x) itself. If one chooses a critical value of the test statistic Dα such that P(Dn > Dα) = α, then a band of width ±Dα around Fn(x) will entirely contain F(x) with probability 1 − α.
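
A minimal Python sketch of this inversion, using the asymptotic critical value Dα = Kα/√n (the function name ks_confidence_band is ours; scipy.special.kolmogi inverts the survival function of the Kolmogorov distribution):

    import numpy as np
    from scipy.special import kolmogi

    def ks_confidence_band(sample, alpha=0.05):
        """Asymptotic 1 - alpha confidence band for the true CDF:
        F_n(x) +/- D_alpha, with D_alpha = K_alpha / sqrt(n) and K_alpha
        chosen so that Pr(K > K_alpha) = alpha."""
        x = np.sort(np.asarray(sample))
        n = len(x)
        f_n = np.arange(1, n + 1) / n           # F_n at the order statistics
        d_alpha = kolmogi(alpha) / np.sqrt(n)   # half-width of the band
        return x, np.clip(f_n - d_alpha, 0, 1), np.clip(f_n + d_alpha, 0, 1)

    rng = np.random.default_rng(3)
    x, lower, upper = ks_confidence_band(rng.standard_normal(500))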
