Kolmogorov–Smirnov test

Intuitively, the test provides a method to qualitatively answer the question "How likely is it that we would see a collection of samples like this if they were drawn from that probability distribution?" or, in the second case, "How likely is it that we would see two sets of samples like this if they were drawn from the same (but unknown) probability distribution?"

The two-sample K–S test is one of the most useful and general nonparametric methods for comparing two samples, as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples.

The empirical distribution function $F_n$ for $n$ independent and identically distributed ordered observations $X_i$ is defined as

$$F_n(x) = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}_{(-\infty,x]}(X_i),$$

where $\mathbf{1}_{(-\infty,x]}(X_i)$ is the indicator function, equal to 1 if $X_i \le x$ and equal to 0 otherwise.

The Kolmogorov–Smirnov statistic for a given cumulative distribution function $F(x)$ is

$$D_n = \sup_x |F_n(x) - F(x)|,$$

where $\sup_x$ is the supremum of the set of distances.
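For illustration (not part of the original article), a minimal Python sketch of this computation, assuming a standard normal null distribution and checked against SciPy's built-in test, might look as follows:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=200))        # ordered observations X_i
F = stats.norm.cdf                       # hypothesized CDF F(x)

n = len(x)
ecdf_hi = np.arange(1, n + 1) / n        # F_n just after each X_i
ecdf_lo = np.arange(0, n) / n            # F_n just before each X_i

# The supremum of |F_n(x) - F(x)| is attained at a data point, approached
# from above or below, so both one-sided gaps are checked.
d_plus = np.max(ecdf_hi - F(x))
d_minus = np.max(F(x) - ecdf_lo)
d_n = max(d_plus, d_minus)

print(d_n)
print(stats.kstest(x, "norm").statistic)  # should agree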

Intuitively, the statistic takes the largest absolute difference between the two distribution functions across all x values.

By the Glivenko–Cantelli theorem, if the sample comes from distribution $F(x)$, then $D_n$ converges to 0 almost surely in the limit as $n$ goes to infinity.
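A quick numerical illustration of this convergence (my own sketch, assuming standard normal samples):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
for n in (10, 100, 1000, 10000):
    x = rng.normal(size=n)
    d_n = stats.kstest(x, "norm").statistic
    print(n, round(d_n, 4), round(d_n * np.sqrt(n), 3))
# D_n decreases roughly like 1/sqrt(n), while sqrt(n)*D_n stays O(1),
# consistent with the Kolmogorov limiting distribution.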

In practice, the statistic requires a relatively large number of data points (in comparison to other goodness of fit criteria such as the Anderson–Darling test statistic) to properly reject the null hypothesis.

Both the form of the Kolmogorov–Smirnov test statistic and its asymptotic distribution under the null hypothesis were published by Andrey Kolmogorov,[3] while a table of the distribution was published by Nikolai Smirnov.[4] Recurrence relations for the distribution of the test statistic in finite samples are available.

A simple expedient of replacing $x$ by $x + \frac{1}{6\sqrt{n}}$ in the argument of the Jacobi theta function reduces the errors of the asymptotic approximation to the finite-sample distribution to a level negligible for practical purposes.
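For illustration (my own sketch, not from the article; the sample size and statistic value are assumed), the asymptotic Kolmogorov cdf can be written as a rapidly converging alternating series, equivalent to a Jacobi theta function, and evaluated with and without this argument shift:

import numpy as np
from scipy import stats

def kolmogorov_cdf(x, terms=100):
    # P(sup_t |B(t)| <= x) = 1 - 2 * sum_{k>=1} (-1)^(k-1) exp(-2 k^2 x^2)
    k = np.arange(1, terms + 1)
    return 1 - 2 * np.sum((-1) ** (k - 1) * np.exp(-2 * k**2 * x**2))

n, d_n = 100, 0.12                               # hypothetical n and D_n
x = np.sqrt(n) * d_n
print(kolmogorov_cdf(x))                          # plain asymptotic approximation
print(kolmogorov_cdf(x + 1 / (6 * np.sqrt(n))))   # shifted-argument version
print(stats.kstwobign.cdf(x))                     # SciPy's asymptotic law, for comparison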

When the form or parameters of $F(x)$ are estimated from the data, the critical values of the test are no longer valid and modified versions are required.[11] The Lilliefors test represents a special case of this for the normal distribution.

A logarithmic transformation of the data may help to overcome cases where the data do not seem to fit the assumption that they came from the normal distribution.

With estimated parameters, the question arises which estimation method should be used. Usually this would be the maximum likelihood method, but for the normal distribution, for example, the MLE of sigma has a large bias error.

If we need to decide, using the KS test, whether data from a Student's t distribution with df = 2 could be normal, then an ML estimate based on H0 (the data are normal, so the sample standard deviation is used for scale) would give a much larger KS distance than a fit with minimum KS.

In this case we should indeed reject H0, which is often the outcome with MLE, because the sample standard deviation can be very large for t(2) data; with KS minimization, however, the resulting KS distance may still be too low to reject H0.
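The contrast described above can be illustrated with a short Python sketch (a hedged illustration, not from the article; function names, sample size, and the optimizer choice are my assumptions):

import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
x = stats.t.rvs(df=2, size=500, random_state=rng)

# "MLE-style" fit under H0 (normality): sample mean and standard deviation.
mu_ml, sigma_ml = x.mean(), x.std(ddof=0)
d_ml = stats.kstest(x, "norm", args=(mu_ml, sigma_ml)).statistic

# Fit by minimizing the KS distance over (mu, sigma) instead.
def ks_distance(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    return stats.kstest(x, "norm", args=(mu, sigma)).statistic

res = optimize.minimize(ks_distance, x0=[mu_ml, sigma_ml], method="Nelder-Mead")
print(d_ml, res.fun)  # the min-KS fit typically yields a noticeably smaller distance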

Therefore, a fast and accurate method has been developed to compute the exact and asymptotic distribution of $D_n$ when $F(x)$ is purely discrete or mixed,[8] implemented in C++ and in the KSgeneral package[9] of the R language.

The functions disc_ks_test(), mixed_ks_test() and cont_ks_test() also compute the KS test statistic and p-values for purely discrete, mixed or continuous null distributions and arbitrary sample sizes.

The KS test and its p-values for discrete null distributions and small sample sizes are also computed in [12] as part of the dgof package of the R language.
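For intuition (this is my own sketch, not the code of the KSgeneral or dgof packages), the statistic against a purely discrete null can be computed by checking every jump point from both sides, since both CDFs are step functions; the binomial null below is an assumed example:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.binomial(n=10, p=0.5, size=50)       # data on {0, ..., 10}
support = np.arange(0, 11)
F = stats.binom.cdf(support, 10, 0.5)         # null cdf at the jump points
F_left = np.concatenate(([0.0], F[:-1]))      # its left limits

n = len(x)
xs = np.sort(x)
Fn = np.searchsorted(xs, support, side="right") / n   # ecdf at support points
Fn_left = np.searchsorted(xs, support, side="left") / n

d_n = max(np.max(np.abs(Fn - F)), np.max(np.abs(Fn_left - F_left)))
print(d_n)
# A p-value based on the continuous-case distribution (as in scipy.stats.kstest)
# would be conservative here; KSgeneral/dgof compute the exact discrete-null law.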

Major statistical packages, among which SAS PROC NPAR1WAY[13] and Stata ksmirnov,[14] implement the KS test under the assumption that $F(x)$ is continuous.

For large samples, the null hypothesis is rejected at level $\alpha$ if

$$D_{n,m} = \sup_x |F_{1,n}(x) - F_{2,m}(x)| > c(\alpha)\sqrt{\frac{n+m}{nm}}, \qquad c(\alpha) = \sqrt{-\tfrac{1}{2}\ln\tfrac{\alpha}{2}},$$

where $F_{1,n}$ and $F_{2,m}$ are the empirical distribution functions of the first and second sample, and $n$ and $m$ are the respective sample sizes. The larger the sample sizes, the more sensitive the minimal bound: for a given ratio of sample sizes (e.g. $m = n$), the minimal bound scales in the size of either of the samples according to its inverse square root.
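In Python, for example, the two-sample statistic and this large-sample bound can be checked with SciPy (the sample sizes and distributions below are illustrative assumptions):

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
a = rng.normal(loc=0.0, size=300)             # n = 300
b = rng.normal(loc=0.3, size=200)             # m = 200, shifted location

res = stats.ks_2samp(a, b)
n, m, alpha = len(a), len(b), 0.05
c_alpha = np.sqrt(-0.5 * np.log(alpha / 2))   # c(alpha) from the formula above
bound = c_alpha * np.sqrt((n + m) / (n * m))

print(res.statistic, res.pvalue)
print(res.statistic > bound)                  # large-sample rejection at level alpha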

Note that the two-sample test checks whether the two data samples come from the same distribution.

A shortcoming of the univariate Kolmogorov–Smirnov test is that it is not very powerful, because it is devised to be sensitive to all possible types of differences between two distribution functions.

Two-sample KS tests have been applied in economics to detect asymmetric effects and to study natural experiments.

A distribution-free multivariate Kolmogorov–Smirnov goodness-of-fit test has been proposed by Justel, Peña and Zamar (1997).[22] The test uses a statistic which is built using Rosenblatt's transformation, and an algorithm is developed to compute it in the bivariate case.

An approximate test that can be easily computed in any dimension is also presented.

One approach to generalizing the Kolmogorov–Smirnov statistic to higher dimensions, which addresses the fact that the maximum difference between cumulative distribution functions depends on which ordering of the coordinates is used, is to compare the cdfs of the two samples with all possible orderings, and take the largest of the set of resulting KS statistics.

One such variation is due to Peacock[23] (see also Gosset[24] for a 3D version) and another to Fasano and Franceschini[25] (see Lopes et al. for a comparison and computational details).
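A rough sketch of this idea in two dimensions, in the spirit of the Fasano–Franceschini variant (a simplified illustration under my own assumptions, not the published algorithm):

import numpy as np

def ks_2d_2samp(xy1, xy2):
    # At each data point of either sample, compare the fraction of each
    # sample falling in each of the four quadrants and keep the largest gap.
    d = 0.0
    for pts in (xy1, xy2):
        for (px, py) in pts:
            for sx in (np.less, np.greater_equal):
                for sy in (np.less, np.greater_equal):
                    f1 = np.mean(sx(xy1[:, 0], px) & sy(xy1[:, 1], py))
                    f2 = np.mean(sx(xy2[:, 0], px) & sy(xy2[:, 1], py))
                    d = max(d, abs(f1 - f2))
    return d

rng = np.random.default_rng(5)
a = rng.normal(size=(100, 2))
b = rng.normal(size=(100, 2)) + [0.5, 0.0]   # shifted in the first coordinate
print(ks_2d_2samp(a, b))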

Illustration of the Kolmogorov–Smirnov statistic: the red line is a model CDF, the blue line is an empirical CDF, and the black arrow is the KS statistic.
Illustration of the Kolmogorov distribution's PDF.
Illustration of the two-sample Kolmogorov–Smirnov statistic: red and blue lines each correspond to an empirical distribution function, and the black arrow is the two-sample KS statistic.