Kolmogorov–Smirnov test
In statistics, the Kolmogorov–Smirnov test (K–S test or KS test) is a nonparametric test of the equality of continuous, one-dimensional probability distributions that can be used to compare a sample with a reference probability distribution (one-sample K–S test), or to compare two samples (two-sample K–S test). It is named after Andrey Kolmogorov and Nikolai Smirnov.
The Kolmogorov–Smirnov statistic quantifies a distance between the empirical distribution function of the sample and the cumulative distribution function of the reference distribution, or between the empirical distribution functions of two samples. The null distribution of this statistic is calculated under the null hypothesis that the sample is drawn from the reference distribution (in the one-sample case) or that the samples are drawn from the same distribution (in the two-sample case). In each case, the distributions considered under the null hypothesis are continuous distributions but are otherwise unrestricted.
The two-sample K–S test is one of the most useful and general nonparametric methods for comparing two samples, as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples.
The Kolmogorov–Smirnov test can be modified to serve as a goodness of fit test. In the special case of testing for normality of the distribution, samples are standardized and compared with a standard normal distribution. This is equivalent to setting the mean and variance of the reference distribution equal to the sample estimates, and it is known that using these to define the specific reference distribution changes the null distribution of the test statistic: see below. Various studies have found that, even in this corrected form, the test is less powerful for testing normality than the Shapiro–Wilk test or Anderson–Darling test.[1] However, these other tests have their own disadvantages. For instance, the Shapiro–Wilk test is known not to work well in samples with many identical values.
Kolmogorov–Smirnov statistic
The empirical distribution function F_n for n iid observations X_i is defined as

$$F_n(x) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}_{[X_i \le x]}$$

where $\mathbf{1}_{[X_i \le x]}$ is the indicator function, equal to 1 if $X_i \le x$ and equal to 0 otherwise.
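To make the definition concrete, here is a minimal Python sketch of the empirical distribution function; the function name ecdf is illustrative, not part of any standard library.

```python
import bisect

def ecdf(sample):
    """Return the empirical distribution function F_n of a sample."""
    xs = sorted(sample)
    n = len(xs)
    def F(x):
        # Count observations <= x via binary search, then divide by n.
        return bisect.bisect_right(xs, x) / n
    return F

F = ecdf([3.1, 1.4, 2.7, 2.7, 5.0])
print(F(2.7))  # 0.6: three of five observations are <= 2.7
```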
The Kolmogorov–Smirnov statistic for a given cumulative distribution function F(x) is

$$D_n = \sup_x |F_n(x) - F(x)|$$

where $\sup_x$ is the supremum of the set of distances. By the Glivenko–Cantelli theorem, if the sample comes from distribution F(x), then D_n converges to 0 almost surely in the limit when n goes to infinity. Kolmogorov strengthened this result by effectively providing the rate of this convergence (see below). Donsker's theorem provides a yet stronger result.
In practice, the statistic requires a relatively large number of data points to properly reject the null hypothesis.
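Because F_n is a step function that jumps only at the ordered data points while F is nondecreasing, the supremum can be evaluated at the data points alone. A minimal sketch of this computation (the function name is illustrative):

```python
def ks_statistic(sample, cdf):
    """One-sample K-S statistic D_n = sup_x |F_n(x) - F(x)|.

    The supremum is attained at a data point, comparing F against the
    ECDF value just before and just after each jump.
    """
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        fx = cdf(x)
        d = max(d, i / n - fx, fx - (i - 1) / n)
    return d

# Example: test draws from U(0,1) against the uniform CDF F(x) = x.
print(ks_statistic([0.1, 0.3, 0.5, 0.7, 0.9], lambda x: x))  # 0.1
```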
Kolmogorov distribution
The Kolmogorov distribution is the distribution of the random variable

$$K = \sup_{t \in [0,1]} |B(t)|$$

where B(t) is the Brownian bridge. The cumulative distribution function of K is given by[2]

$$\Pr(K \le x) = 1 - 2\sum_{k=1}^{\infty} (-1)^{k-1} e^{-2k^2 x^2} = \frac{\sqrt{2\pi}}{x} \sum_{k=1}^{\infty} e^{-(2k-1)^2 \pi^2 / (8x^2)},$$

which can also be expressed in terms of the Jacobi theta function. Both the form of the Kolmogorov–Smirnov test statistic and its asymptotic distribution under the null hypothesis were published by Andrey Kolmogorov,[3] while a table of the distribution was published by Nikolai Vasilyevich Smirnov.[4] Recurrence relations for the distribution of the test statistic in finite samples are available.[3]
Under the null hypothesis that the sample comes from the hypothesized distribution F(x),

$$\sqrt{n}\, D_n \xrightarrow{n \to \infty} \sup_t |B(F(t))|$$

in distribution, where B(t) is the Brownian bridge. If F is continuous, then under the null hypothesis $\sqrt{n}\, D_n$ converges to the Kolmogorov distribution, which does not depend on F. This result may also be known as the Kolmogorov theorem.
The goodness-of-fit test or the Kolmogorov–Smirnov test is constructed by using the critical values of the Kolmogorov distribution. The null hypothesis is rejected at level α if

$$\sqrt{n}\, D_n > K_\alpha,$$

where K_α is found from

$$\Pr(K \le K_\alpha) = 1 - \alpha.$$
The asymptotic power of this test is 1.
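As an illustration, K_α can be obtained numerically by inverting the series for the Kolmogorov CDF with bisection. A sketch, truncating the series after a fixed number of terms (ample for the accuracy needed here):

```python
import math

def kolmogorov_cdf(x, terms=100):
    """Pr(K <= x) = 1 - 2 * sum_{k>=1} (-1)^(k-1) * exp(-2 k^2 x^2)."""
    if x <= 0:
        return 0.0
    s = sum((-1) ** (k - 1) * math.exp(-2 * k * k * x * x)
            for k in range(1, terms + 1))
    return 1.0 - 2.0 * s

def k_alpha(alpha, lo=0.0, hi=3.0, tol=1e-10):
    """Solve Pr(K <= K_alpha) = 1 - alpha by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kolmogorov_cdf(mid) < 1.0 - alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(k_alpha(0.05), 3))  # approx 1.358
```

At level α = 0.05 this gives K_α ≈ 1.358, so the null hypothesis is rejected when $\sqrt{n}\, D_n$ exceeds that value.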
Test with estimated parameters
If either the form or the parameters of F(x) are determined from the data X_i, the critical values determined in this way are invalid. In such cases, Monte Carlo or other methods may be required, but tables have been prepared for some cases. Details of the required modifications to the test statistic and of the critical values for the normal distribution and the exponential distribution have been published,[5] and later publications also include the Gumbel distribution.[6] The Lilliefors test represents a special case of this for the normal distribution. A logarithm transformation may help to overcome cases where the data do not seem to fit the assumption that they came from the normal distribution.
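For the normal case with estimated mean and variance (the Lilliefors setting), the corrected critical value can be approximated by Monte Carlo: simulate normal samples, standardize each with its own sample estimates, and take the appropriate quantile of the resulting statistics. A sketch, assuming NumPy and SciPy are available (the function name is illustrative):

```python
import numpy as np
from scipy.stats import norm

def lilliefors_critical_value(n, alpha=0.05, reps=5000, seed=0):
    """Monte Carlo critical value for the K-S statistic when the normal
    mean and variance are estimated from the same sample."""
    rng = np.random.default_rng(seed)
    i = np.arange(1, n + 1)
    stats = np.empty(reps)
    for r in range(reps):
        x = np.sort(rng.standard_normal(n))
        z = (x - x.mean()) / x.std(ddof=1)   # standardize with sample estimates
        f = norm.cdf(z)
        stats[r] = max((i / n - f).max(), (f - (i - 1) / n).max())
    return np.quantile(stats, 1 - alpha)

print(lilliefors_critical_value(50))  # noticeably smaller than 1.358 / sqrt(50)
```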
Discrete null distribution
The Kolmogorov–Smirnov test must be adapted for discrete variables.[7] The form of the test statistic remains the same as in the continuous case, but the calculation of its value is more subtle. We can see this if we consider computing the test statistic between a continuous distribution and a step function that has a discontinuity at some point $x_1$: the limit of the step function as $x \to x_1$, if it exists, is different from its value at $x_1$. Thus, when computing the statistic

$$\sup_x |F_n(x) - F(x)|,$$

it is unclear how to replace the limit, unless we know the limiting value of the underlying distribution.
In SAS, the Kolmogorov–Smirnov test is implemented in PROC NPAR1WAY.[8] The discretized KS test is implemented in the ks.test() function in the dgof package of the R project for statistical computing.[7] In Stata, the command ksmirnov performs a Kolmogorov–Smirnov test.[9]
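In Python, SciPy provides comparable routines: scipy.stats.kstest for the one-sample test against a fully specified distribution, and scipy.stats.ks_2samp for the two-sample test. For example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=200)

# One-sample test against the standard normal distribution.
print(stats.kstest(x, "norm"))

# Two-sample test against an independent uniform sample.
y = rng.uniform(size=150)
print(stats.ks_2samp(x, y))
```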
Two-sample Kolmogorov–Smirnov test
The Kolmogorov–Smirnov test may also be used to test whether two underlying one-dimensional probability distributions differ. In this case, the Kolmogorov–Smirnov statistic is

$$D_{n,m} = \sup_x |F_{1,n}(x) - F_{2,m}(x)|,$$

where $F_{1,n}$ and $F_{2,m}$ are the empirical distribution functions of the first and the second sample respectively, and $\sup$ is the supremum function.
The null hypothesis is rejected at level α if

$$D_{n,m} > c(\alpha) \sqrt{\frac{n + m}{n\, m}},$$

where n and m are the sizes of the first and second sample respectively. The value of c(α) is given in the table below for the most common levels of α:[10]
| α    | 0.10 | 0.05 | 0.025 | 0.01 | 0.005 | 0.001 |
| c(α) | 1.22 | 1.36 | 1.48  | 1.63 | 1.73  | 1.95  |
and in general by

$$c(\alpha) = \sqrt{-\tfrac{1}{2} \ln \tfrac{\alpha}{2}}.$$
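A minimal sketch of the two-sample statistic and the large-sample decision rule above (function names are illustrative):

```python
import math
import numpy as np

def ks_two_sample(x, y, alpha=0.05):
    """Return D_{n,m} and whether the null is rejected at level alpha
    using the rule D > c(alpha) * sqrt((n + m) / (n * m))."""
    x, y = np.sort(x), np.sort(y)
    n, m = len(x), len(y)
    pts = np.concatenate([x, y])                      # candidate supremum points
    fx = np.searchsorted(x, pts, side="right") / n    # F_{1,n} at each point
    fy = np.searchsorted(y, pts, side="right") / m    # F_{2,m} at each point
    d = np.abs(fx - fy).max()
    c = math.sqrt(-0.5 * math.log(alpha / 2))         # c(alpha)
    return d, d > c * math.sqrt((n + m) / (n * m))

rng = np.random.default_rng(0)
print(ks_two_sample(rng.normal(size=100), rng.normal(1.0, 1.0, size=120)))
```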
Note that the two-sample test checks whether the two data samples come from the same distribution. This does not specify what that common distribution is (e.g. whether it is normal or not). Again, tables of critical values have been published.[5][10] These critical values have one thing in common with those of the Anderson–Darling and chi-squared tests, namely that higher values of the statistic are rarer under the null hypothesis.[11]
Setting confidence limits for the shape of a distribution function
While the Kolmogorov–Smirnov test is usually used to test whether a given F(x) is the underlying probability distribution of Fn(x), the procedure may be inverted to give confidence limits on F(x) itself. If one chooses a critical value of the test statistic Dα such that P(Dn > Dα) = α, then a band of width ±Dα around Fn(x) will entirely contain F(x) with probability 1 − α.
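A sketch of this inversion, using the one-term asymptotic approximation $D_\alpha \approx \sqrt{-\tfrac{1}{2}\ln\tfrac{\alpha}{2}} / \sqrt{n}$ to the Kolmogorov quantile (exact finite-sample values would differ slightly; the function name is illustrative):

```python
import math

def ecdf_confidence_band(sample, alpha=0.05):
    """Asymptotic 1 - alpha band for the true CDF: F_n(x) +/- D_alpha,
    clipped to [0, 1]."""
    xs = sorted(sample)
    n = len(xs)
    d_alpha = math.sqrt(-0.5 * math.log(alpha / 2)) / math.sqrt(n)
    band = []
    for i, x in enumerate(xs, start=1):
        fn = i / n                        # ECDF value at and just after x
        band.append((x, max(0.0, fn - d_alpha), min(1.0, fn + d_alpha)))
    return band

for x, lo, hi in ecdf_confidence_band([1.2, 0.7, 3.4, 2.2, 1.9])[:3]:
    print(f"x={x:.1f}  band=[{lo:.3f}, {hi:.3f}]")
```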
The Kolmogorov–Smirnov statistic in more than one dimension
A distribution-free multivariate Kolmogorov–Smirnov goodness of fit test has been proposed by Justel, Peña and Zamar (1997).[12] The test uses a statistic which is built using Rosenblatt's transformation, and an algorithm is developed to compute it in the bivariate case. An approximate test that can be easily computed in any dimension is also presented.
The Kolmogorov–Smirnov test statistic needs to be modified if a similar test is to be applied to multivariate data. This is not straightforward because the maximum difference between two joint cumulative distribution functions is not generally the same as the maximum difference of any of the complementary distribution functions. Thus the maximum difference will differ depending on which of $\Pr(X < x \wedge Y < y)$ or $\Pr(X < x \wedge Y > y)$ or either of the other two possible arrangements is used. One might require that the result of the test used should not depend on which choice is made.
One approach to generalizing the Kolmogorov–Smirnov statistic to higher dimensions which meets the above concern is to compare the cdfs of the two samples with all possible orderings, and take the largest of the set of resulting K–S statistics. In d dimensions, there are $2^d - 1$ such orderings. One such variation is due to Peacock[13] and another to Fasano and Franceschini[14] (see Lopes et al. for a comparison and computational details).[15] Critical values for the test statistic can be obtained by simulations, but depend on the dependence structure in the joint distribution.
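As a rough illustration of the Fasano and Franceschini idea in two dimensions, the following sketch takes the maximum over both samples' points and the four quadrant orderings; this is a simplified sketch, not their exact published procedure:

```python
import numpy as np

def ks_2d_statistic(a, b):
    """Max, over the points of both samples and all four quadrant
    orderings, of the difference in the fraction of each sample
    falling in the quadrant anchored at that point."""
    d = 0.0
    signs = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    for pts in (a, b):
        for x0, y0 in pts:
            for sx, sy in signs:
                fa = np.mean((sx * (a[:, 0] - x0) <= 0) & (sy * (a[:, 1] - y0) <= 0))
                fb = np.mean((sx * (b[:, 0] - x0) <= 0) & (sy * (b[:, 1] - y0) <= 0))
                d = max(d, abs(fa - fb))
    return d

rng = np.random.default_rng(1)
a = rng.normal(size=(100, 2))
b = rng.normal(0.5, 1.0, size=(120, 2))
print(ks_2d_statistic(a, b))
```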
References
1. Stephens, M. A. (1974). "EDF Statistics for Goodness of Fit and Some Comparisons". Journal of the American Statistical Association. 69 (347): 730–737. doi:10.2307/2286009. JSTOR 2286009.
2. Marsaglia, G.; Tsang, W. W.; Wang, J. (2003). "Evaluating Kolmogorov's Distribution". Journal of Statistical Software. 8 (18): 1–4.
3. Kolmogorov, A. (1933). "Sulla determinazione empirica di una legge di distribuzione". G. Ist. Ital. Attuari. 4: 83–91.
4. Smirnov, N. (1948). "Table for estimating the goodness of fit of empirical distributions". Annals of Mathematical Statistics. 19: 279–281. doi:10.1214/aoms/1177730256.
5. Pearson, E. S.; Hartley, H. O., eds. (1972). Biometrika Tables for Statisticians. Vol. 2. Cambridge University Press. pp. 117–123, Tables 54, 55. ISBN 0-521-06937-8.
6. Shorack, Galen R.; Wellner, Jon A. (1986). Empirical Processes with Applications to Statistics. Wiley. p. 239. ISBN 047186725X.
7. Arnold, Taylor B.; Emerson, John W. (2011). "Nonparametric Goodness-of-Fit Tests for Discrete Null Distributions" (PDF). The R Journal. 3 (2): 34–39.
8. https://support.sas.com/documentation/cdl/en/statug/68162/HTML/default/viewer.htm#statug_npar1way_toc.htm
9. ksmirnov – Kolmogorov–Smirnov equality-of-distributions test
10. Table of critical values for the two-sample test
11. Mehta, S. (2014). Statistics Topics. ISBN 978-1499273533.
12. Justel, A.; Peña, D.; Zamar, R. (1997). "A multivariate Kolmogorov–Smirnov test of goodness of fit". Statistics & Probability Letters. 35 (3): 251–259. doi:10.1016/S0167-7152(97)00020-5.
13. Peacock, J. A. (1983). "Two-dimensional goodness-of-fit testing in astronomy". Monthly Notices of the Royal Astronomical Society. 202: 615–627. Bibcode:1983MNRAS.202..615P. doi:10.1093/mnras/202.3.615.
14. Fasano, G.; Franceschini, A. (1987). "A multidimensional version of the Kolmogorov–Smirnov test". Monthly Notices of the Royal Astronomical Society. 225: 155–170. Bibcode:1987MNRAS.225..155F. ISSN 0035-8711. doi:10.1093/mnras/225.1.155.
15. Lopes, R. H. C.; Reid, I.; Hobson, P. R. (April 23–27, 2007). The two-dimensional Kolmogorov–Smirnov test (PDF). XI International Workshop on Advanced Computing and Analysis Techniques in Physics Research. Amsterdam, the Netherlands.
Further reading
- Daniel, Wayne W. (1990). "Kolmogorov–Smirnov one-sample test". Applied Nonparametric Statistics (2nd ed.). Boston: PWS-Kent. pp. 319–330. ISBN 0-534-91976-6.
- Eadie, W. T.; D. Drijard; F. E. James; M. Roos; B. Sadoulet (1971). Statistical Methods in Experimental Physics. Amsterdam: North-Holland. pp. 269–271. ISBN 0-444-10117-9.
- Stuart, Alan; Ord, Keith; Arnold, Steven [F.] (1999). Classical Inference and the Linear Model. Kendall's Advanced Theory of Statistics. 2A (Sixth ed.). London: Arnold. pp. 25.37–25.43. ISBN 0-340-66230-1. MR 1687411.
- Corder, G. W.; Foreman, D. I. (2014). Nonparametric Statistics: A Step-by-Step Approach. Wiley. ISBN 978-1118840313.
- Stephens, M. A. (1979). "Test of fit for the logistic distribution based on the empirical distribution function". Biometrika. 66 (3): 591–595. doi:10.1093/biomet/66.3.591.
External links
- Hazewinkel, Michiel, ed. (2001) [1994], "Kolmogorov–Smirnov test", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
- Short introduction
- KS test explanation
- JavaScript implementation of one- and two-sided tests
- Online calculator with the KS test
- Open-source C++ code to compute the Kolmogorov distribution and perform the KS test
- Paper on Evaluating Kolmogorov's Distribution; contains C implementation. This is the method used in Matlab.
- Paper powerlaw: A Python Package for Analysis of Heavy-Tailed Distributions; Jeff Alstott, Ed Bullmore, Dietmar Plenz. Among others, it also performs the Kolmogorov–Smirnov test. Source code and installers of powerlaw package are available at PyPi.