Uniform convergence in probability

Uniform convergence in probability is a form of convergence in probability in statistical asymptotic theory and probability theory.

It means that, under certain conditions, the empirical frequencies of all events in a certain event-family converge to their theoretical probabilities.

Uniform convergence in probability has applications to statistics as well as machine learning as part of statistical learning theory.

The law of large numbers says that, for each single event $A$, its empirical frequency in a sequence of independent trials converges (with high probability) to its theoretical probability.
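
As a hedged illustration (the event and distribution below are arbitrary choices made for this sketch, not part of the source), a short simulation shows the empirical frequency of a single event approaching its theoretical probability:

```python
import random

random.seed(0)
P_TRUE = 0.3  # theoretical probability of the event {X <= 0.3} under Uniform[0, 1]

for m in [100, 1_000, 10_000, 100_000]:
    sample = [random.random() for _ in range(m)]
    # empirical frequency: fraction of the m independent trials in which the event occurred
    freq = sum(x <= 0.3 for x in sample) / m
    print(f"m = {m:6d}   empirical = {freq:.4f}   |error| = {abs(freq - P_TRUE):.4f}")
```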

In many applications, however, the need arises to judge simultaneously the probabilities of events of an entire class $S$ from one and the same sample. Moreover, it is required that the relative frequency of the events converge to the probability uniformly over the entire class of events $S$. The Uniform Convergence Theorem gives a sufficient condition for this convergence to hold.

Roughly, if the event-family is sufficiently simple (its VC dimension is sufficiently small) then uniform convergence holds.
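
To illustrate the uniform version of the statement, the following sketch (an example constructed here; the threshold family is an assumption of this illustration, not from the source) tracks the worst-case deviation over the whole family of events $\{X \leq t\}$, $t \in [0,1]$, a family of VC dimension 1; the supremum over the family still shrinks as the sample grows:

```python
import random

random.seed(1)

def sup_deviation(sample):
    """Largest |empirical frequency - true probability| over all events {X <= t},
    for X ~ Uniform[0, 1]; the supremum is attained at (or just below) sample points."""
    xs = sorted(sample)
    m = len(xs)
    worst = 0.0
    for i, x in enumerate(xs):
        # empirical frequency of {X <= t} jumps from i/m to (i+1)/m at t = x
        worst = max(worst, abs((i + 1) / m - x), abs(i / m - x))
    return worst

for m in [100, 1_000, 10_000]:
    sample = [random.random() for _ in range(m)]
    print(f"m = {m:6d}   sup over all thresholds = {sup_deviation(sample):.4f}")
```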

The Uniform Convergence Theorem states, roughly, that if $H$ is "simple" and we draw samples independently (with replacement) from $X$ according to any distribution $P$, then with high probability, the empirical frequency will be close to its expected value, which is the theoretical probability.[2] Here "simple" means that the Vapnik–Chervonenkis dimension of the class $H$ is small relative to the size of the sample.

In other words, a sufficiently simple collection of functions behaves roughly the same on a small random sample as it does on the distribution as a whole.
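
A hedged sketch of what "behaves roughly the same" rests on (the threshold class here is again an illustrative assumption): on a sample of size $m$, a simple class induces only a small number of distinct labelings rather than all $2^m$ possibilities:

```python
import random

random.seed(2)
m = 20
sample = [random.random() for _ in range(m)]

# one representative threshold per distinct behaviour of h_t(x) = 1 iff x <= t
thresholds = [-1.0] + sorted(sample)
labelings = {tuple(x <= t for x in sample) for t in thresholds}

print(f"distinct behaviours on the sample: {len(labelings)}")  # m + 1 = 21
print(f"conceivable labelings:             {2 ** m}")          # 1,048,576
```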

The Uniform Convergence Theorem was first proved by Vapnik and Chervonenkis[1] using the concept of growth function.

The statement of the uniform convergence theorem is as follows:[3]

If $H$ is a set of $\{0,1\}$-valued functions defined on a set $X$ and $P$ is a probability distribution on $X$, then for $\varepsilon > 0$ and $m$ a positive integer, we have:

$$P^m\{\,|Q_x(h) - Q_P(h)| \geq \varepsilon \ \text{for some}\ h \in H\,\} \leq 4\,\Pi_H(2m)\,e^{-\varepsilon^2 m/8},$$

where, for any $x \in X^m$, the empirical frequency and the theoretical probability of $h \in H$ are

$$Q_x(h) = \frac{1}{m}\,\bigl|\{\,i : 1 \leq i \leq m,\ h(x_i) = 1\,\}\bigr| \qquad \text{and} \qquad Q_P(h) = P\{\,y \in X : h(y) = 1\,\},$$

and $P^m$ indicates that the probability is taken over $x$ consisting of $m$ i.i.d. draws from the distribution $P$.

And for any natural number $m$, the shattering number $\Pi_H(m)$ is defined as:

$$\Pi_H(m) = \max\bigl\{\,|\{\,h \cap D : h \in H\,\}| : D \subseteq X,\ |D| = m\,\bigr\},$$

where each $h$ is identified with the subset of $X$ on which it takes the value 1. From the point of view of Learning Theory one can consider $H$ to be the Concept/Hypothesis class defined over the instance set $X$.

The Sauer–Shelah lemma[4] relates the shattering number $\Pi_H(m)$ to the VC dimension:

$$\Pi_H(m) \leq \sum_{i=0}^{d} \binom{m}{i} \leq \left(\frac{em}{d}\right)^d,$$

where $d$ is the VC Dimension of the concept class $H$.
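
As a numerical sanity check of the lemma (a sketch; the interval class and the helper `shatter_count_intervals` are constructions of this example, not from the source), the class of intervals $[a, b]$ on the line has VC dimension $d = 2$ and attains the binomial-sum bound exactly:

```python
from itertools import combinations
from math import comb, e

def shatter_count_intervals(m):
    """Pi_H(m) for H = intervals [a, b] on the line, computed by brute force
    on m distinct points: count the distinct {0,1}-labelings intervals induce."""
    labelings = {tuple([False] * m)}                      # empty intersection
    for i in range(m):                                    # singletons
        labelings.add(tuple(k == i for k in range(m)))
    for i, j in combinations(range(m), 2):                # runs from i to j
        labelings.add(tuple(i <= k <= j for k in range(m)))
    return len(labelings)

d = 2  # VC dimension of intervals on the real line
for m in [5, 10, 20]:
    pi = shatter_count_intervals(m)
    sauer = sum(comb(m, i) for i in range(d + 1))
    print(f"m = {m:2d}   Pi_H(m) = {pi:4d}   sum_i C(m,i) = {sauer:4d}   (em/d)^d = {(e * m / d) ** d:8.1f}")
```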

Before we get into the details of the proof of the Uniform Convergence Theorem, we present a high-level overview of the proof.

1. Symmetrization: We transform the problem of analysing $|Q_x(h) - Q_P(h)| \geq \varepsilon$ into the problem of analysing $|Q_r(h) - Q_s(h)| \geq \varepsilon/2$, where $r$ and $s$ are i.i.d. half-samples of size $m$ each, drawn according to the distribution $P$.

2. Permutation: Since $r$ and $s$ are picked i.i.d., swapping elements between them does not change the joint distribution of $(r, s)$. So we bound the probability that $|Q_r(h) - Q_s(h)| \geq \varepsilon/2$ for some $h \in H$ by considering the effect of random permutations of the joint sample $x = r \,\|\, s$ that swap $x_i$ and $x_{m+i}$.

3. Reduction to a finite class: Restricted to a fixed joint sample, the function class $H$ realizes only finitely many behaviours; if $H$ has finite VC dimension, the problem reduces to one involving a finite function class.

We present the technical details of the proof.

Symmetrization: Let $V = \{\,x \in X^m : |Q_x(h) - Q_P(h)| \geq \varepsilon \ \text{for some}\ h \in H\,\}$ and $R = \{\,(r,s) \in X^m \times X^m : |Q_r(h) - Q_s(h)| \geq \varepsilon/2 \ \text{for some}\ h \in H\,\}$. Then for $m \geq 2/\varepsilon^2$, $P^m(V) \leq 2\,P^{2m}(R)$.

To see this, fix $r \in V$, so there is some $h \in H$ with $|Q_r(h) - Q_P(h)| \geq \varepsilon$. By the triangle inequality, if also $|Q_s(h) - Q_P(h)| \leq \varepsilon/2$, then $|Q_r(h) - Q_s(h)| \geq \varepsilon/2$ and hence $(r,s) \in R$. Now, $m\,Q_s(h)$ is a binomial random variable with expectation $m\,Q_P(h)$ and variance $m\,Q_P(h)(1 - Q_P(h))$. By Chebyshev's inequality we get

$$P^m\{\,|Q_s(h) - Q_P(h)| > \varepsilon/2\,\} \leq \frac{m\,Q_P(h)(1 - Q_P(h))}{(\varepsilon m/2)^2} \leq \frac{1}{\varepsilon^2 m} \leq \frac{1}{2}$$

for the mentioned bound on $m$, since $Q_P(h)(1 - Q_P(h)) \leq 1/4$. Hence $P^{2m}(R) \geq \tfrac{1}{2}\,P^m(V)$, and hence we perform the first step of our high level idea.
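
This Chebyshev step can be checked by simulation (a sketch; $p$ and $\varepsilon$ below are arbitrary choices, and $m$ is set to the smallest value allowed by the lemma):

```python
import random
from math import ceil

random.seed(3)
eps, p = 0.2, 0.35
m = ceil(2 / eps ** 2)  # the bound on m assumed in the lemma: m >= 2/eps^2
trials = 100_000

# estimate P[|Q_s(h) - Q_P(h)| > eps/2], where m * Q_s(h) ~ Binomial(m, p)
bad = sum(
    abs(sum(random.random() < p for _ in range(m)) / m - p) > eps / 2
    for _ in range(trials)
)
print(f"m = {m}")
print(f"estimated probability = {bad / trials:.4f}")
print(f"Chebyshev bound 1/(eps^2 m) = {1 / (eps ** 2 * m):.4f}")  # equals 1/2 here
```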

Permutation: Let $\Gamma_m$ be the set of all permutations of $\{1, 2, \ldots, 2m\}$ that swap $i$ and $m+i$ for $i$ in some subset of $\{1, 2, \ldots, m\}$. For any $R \subseteq X^{2m}$ and any $\sigma \in \Gamma_m$ we have $P^{2m}(R) = P^{2m}\{\,x : \sigma(x) \in R\,\}$ (since coordinate permutations preserve the product distribution $P^{2m}$). Averaging over a uniformly random $\sigma \in \Gamma_m$ therefore gives

$$P^{2m}(R) = \mathbb{E}_{x \sim P^{2m}}\bigl[\Pr_{\sigma}[\sigma(x) \in R]\bigr] \leq \max_{x \in X^{2m}} \Pr_{\sigma}[\sigma(x) \in R].$$

The maximum is guaranteed to exist since there is only a finite set of values that the probability under a random permutation can take.
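
The distributional invariance used above can be verified empirically (a sketch; the event $R$ below is an arbitrary choice, and `in_R` is a helper invented for this illustration):

```python
import random

random.seed(4)
m, trials = 10, 50_000

def in_R(x, eps=0.5):
    """An example event R in X^(2m): the two half-sample means differ by >= eps/2."""
    r, s = x[:m], x[m:]
    return abs(sum(r) / m - sum(s) / m) >= eps / 2

hits_plain = hits_swapped = 0
for _ in range(trials):
    x = [random.random() for _ in range(2 * m)]
    hits_plain += in_R(x)
    # a uniformly random element of Gamma_m: swap x_i and x_{m+i} for a random subset
    sigma_x = list(x)
    for i in range(m):
        if random.random() < 0.5:
            sigma_x[i], sigma_x[m + i] = sigma_x[m + i], sigma_x[i]
    hits_swapped += in_R(sigma_x)

print(f"P(x in R)        ~ {hits_plain / trials:.4f}")
print(f"P(sigma(x) in R) ~ {hits_swapped / trials:.4f}")  # agrees up to noise
```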

Reduction to a finite class: Now fix a joint sample $x \in X^{2m}$ and write $\sigma(x) = (r, s)$ for the two half-samples induced by a permutation $\sigma \in \Gamma_m$. Restricted to $x$, the functions in $H$ realize at most $\Pi_H(2m)$ distinct behaviours, so by union bound we get

$$\Pr_{\sigma}\bigl[\,|Q_r(h) - Q_s(h)| \geq \varepsilon/2 \ \text{for some}\ h \in H\,\bigr] \leq \Pi_H(2m)\,\max_{h \in H} \Pr_{\sigma}\bigl[\,|Q_r(h) - Q_s(h)| \geq \varepsilon/2\,\bigr].$$

Since the distribution over the permutations $\sigma$ is uniform on $\Gamma_m$, each swap is made independently with probability $1/2$, so for a fixed $h$ the difference $Q_r(h) - Q_s(h)$ equals $\frac{1}{m}\sum_{i=1}^m \sigma_i\,(h(x_i) - h(x_{m+i}))$ with independent uniform signs $\sigma_i \in \{-1, +1\}$. By Hoeffding's inequality, each such probability is at most $2e^{-\varepsilon^2 m/8}$.
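
The per-function tail bound can likewise be checked numerically (a sketch; the fixed differences $a_i = h(x_i) - h(x_{m+i})$ are drawn arbitrarily from $\{-1, 0, 1\}$ for the illustration):

```python
import random
from math import exp

random.seed(5)
m, eps, trials = 200, 0.4, 100_000
# fixed differences a_i = h(x_i) - h(x_{m+i}) for one fixed h and joint sample x
a = [random.choice([-1, 0, 1]) for _ in range(m)]

bad = 0
for _ in range(trials):
    # each swap in the random permutation flips the sign of a_i with probability 1/2
    diff = sum(ai if random.random() < 0.5 else -ai for ai in a) / m
    bad += abs(diff) >= eps / 2
print(f"estimated probability = {bad / trials:.5f}")
print(f"Hoeffding bound 2*exp(-eps^2*m/8) = {2 * exp(-eps ** 2 * m / 8):.5f}")
```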

Finally, combining all the three parts of the proof we get the Uniform Convergence Theorem: $P^m(V) \leq 2\,P^{2m}(R) \leq 2\,\Pi_H(2m) \cdot 2e^{-\varepsilon^2 m/8} = 4\,\Pi_H(2m)\,e^{-\varepsilon^2 m/8}$.
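
To get a feel for the final bound (a sketch; the values of $\varepsilon$ and $d$ are arbitrary, and $\Pi_H(2m)$ is replaced by its Sauer–Shelah estimate $(2em/d)^d$), one can evaluate the right-hand side and watch it become nontrivial once $m$ is large relative to $d$:

```python
from math import e, exp

def uc_bound(m, eps, d):
    """4 * Pi_H(2m) * exp(-eps^2 m / 8), with Pi_H(2m) <= (e * 2m / d)^d."""
    growth = (2 * e * m / d) ** d
    return 4 * growth * exp(-eps ** 2 * m / 8)

eps, d = 0.1, 3
for m in [5_000, 20_000, 40_000, 80_000]:
    print(f"m = {m:6d}   bound = {uc_bound(m, eps, d):.3e}")
```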