Concentration inequality

In probability theory, concentration inequalities provide mathematical bounds on the probability of a random variable deviating from some value (typically, its expected value).

The deviation, or any other function of the random variable, can itself be thought of as a secondary random variable. The simplest example of the concentration of such a secondary random variable is the CDF of the first random variable, which concentrates the probability to unity.

If an analytic form of the CDF is available, this provides a concentration equality that gives the exact probability of concentration.

It is precisely when the CDF is difficult to calculate, or when even the exact form of the first random variable is unknown, that concentration inequalities provide useful insight.

Another almost universal example of a secondary random variable is provided by the law of large numbers of classical probability theory, which states that sums of independent random variables, under mild conditions, concentrate around their expectation with high probability.

Such sums are the most basic examples of random variables concentrated around their mean.

Concentration inequalities can be sorted according to how much information about the random variable is needed in order to use them.

Markov's inequality requires only the expectation of a non-negative random variable X: for every constant a > 0,

\Pr(X \ge a) \le \frac{\operatorname{E}[X]}{a}.

More generally, if Φ is a strictly increasing and non-negative function, then

\Pr(X \ge a) = \Pr\bigl(\Phi(X) \ge \Phi(a)\bigr) \le \frac{\operatorname{E}[\Phi(X)]}{\Phi(a)}.

Chebyshev's inequality requires the following information on a random variable X: its expected value E[X] is finite and its variance Var[X] = E[(X − E[X])²] is finite. Then, for every constant a > 0,

\Pr\bigl(|X - \operatorname{E}[X]| \ge a\bigr) \le \frac{\operatorname{Var}[X]}{a^2}.

Chebyshev's inequality can be seen as a special case of the generalized Markov's inequality applied to the random variable |X − E[X]| with Φ(x) = x².
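Written out (a one-line worked derivation added for illustration), applying the generalized Markov inequality to the non-negative variable |X − E[X]| with Φ(x) = x² gives

\Pr\bigl(|X - \operatorname{E}[X]| \ge a\bigr) = \Pr\bigl((X - \operatorname{E}[X])^2 \ge a^2\bigr) \le \frac{\operatorname{E}\bigl[(X - \operatorname{E}[X])^2\bigr]}{a^2} = \frac{\operatorname{Var}[X]}{a^2},

which is exactly Chebyshev's inequality.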

The Vysochanskij–Petunin inequality requires, in addition, that the distribution be unimodal. Let X be a random variable with a unimodal distribution, mean μ and finite, non-zero variance σ². Then, for any λ > √(8/3) = 1.63299…,

\Pr(|X - \mu| \ge \lambda\sigma) \le \frac{4}{9\lambda^2}.
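For a concrete comparison (a worked example added for illustration), taking λ = 3 in the bound above gives

\Pr(|X - \mu| \ge 3\sigma) \le \frac{4}{9 \cdot 3^2} = \frac{4}{81} \approx 0.049,

whereas Chebyshev's inequality alone gives only 1/3² = 1/9 ≈ 0.111.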

The generic Chernoff bound[3]: 63–65 requires the moment generating function of X, defined as M_X(t) := E[e^{tX}]. Based on Markov's inequality applied to e^{tX}, for every t > 0,

\Pr(X \ge a) = \Pr\bigl(e^{tX} \ge e^{ta}\bigr) \le e^{-ta}\operatorname{E}\bigl[e^{tX}\bigr],

and for every t < 0 the same argument bounds the lower tail \Pr(X \le a) \le e^{-ta}\operatorname{E}\bigl[e^{tX}\bigr]. Optimizing over t gives the Chernoff bound \Pr(X \ge a) \le \inf_{t>0} e^{-ta} M_X(t).
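The following Python sketch (an illustration added here, not part of the article; the binomial example, the grid search and the function names are choices of this sketch) evaluates the optimized generic bound for X ~ Binomial(100, 0.5), whose moment generating function is known in closed form, and compares it with the exact tail probability.

```python
import math

# Numerical illustration of the generic Chernoff bound
#   Pr(X >= a) <= inf_{t>0} exp(-t*a) * M_X(t)
# for X ~ Binomial(n, p), with M_X(t) = (1 - p + p*exp(t))**n.

def chernoff_bound(n, p, a, grid=2000, t_max=5.0):
    """Minimize exp(-t*a) * M_X(t) over a grid of t values in (0, t_max]."""
    best = 1.0                      # Pr <= 1 always, so 1.0 is a valid starting bound
    for i in range(1, grid + 1):
        t = t_max * i / grid
        best = min(best, math.exp(-t * a) * (1 - p + p * math.exp(t)) ** n)
    return best

def exact_tail(n, p, a):
    """Exact Pr(X >= a) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(a, n + 1))

if __name__ == "__main__":
    n, p, a = 100, 0.5, 65
    print("Chernoff bound:", chernoff_bound(n, p, a))   # roughly 0.010
    print("exact tail:    ", exact_tail(n, p, a))       # roughly 0.0018
```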

Let X_1, …, X_n be independent random variables such that a_i ≤ X_i ≤ b_i almost surely for all i. Let S_n = X_1 + ⋯ + X_n be their sum, E_n = E[S_n] its expected value and V_n = Var(S_n) its variance. It is often interesting to bound the difference between the sum and its expected value; several inequalities can be used (a numerical comparison of some of them is sketched after this list).

1. Hoeffding's inequality says that

\Pr\bigl(|S_n - E_n| \ge t\bigr) \le 2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n} (b_i - a_i)^2}\right).

2. The random variable S_n − E_n is a special case of a martingale. Hence, the general form of Azuma's inequality can also be used and it yields a similar bound:

\Pr\bigl(|S_n - E_n| \ge t\bigr) \le 2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n} (b_i - a_i)^2}\right).

This is a generalization of Hoeffding's since it can handle other types of martingales, as well as supermartingales and submartingales.[5]

Note that if the simpler form of Azuma's inequality is used, the exponent in the bound is worse by a factor of 4.

3. The sum function f(x_1, …, x_n) = ∑_i x_i is a special case of a function that changes in a bounded way: changing the i-th coordinate changes the value of f by at most b_i − a_i. Hence, McDiarmid's inequality can also be used and it yields a similar bound:

\Pr\bigl(|S_n - E_n| \ge t\bigr) \le 2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n} (b_i - a_i)^2}\right).

This is a different generalization of Hoeffding's since it can handle other functions besides the sum function, as long as they change in a bounded way.

4. Bennett's inequality offers some improvement over Hoeffding's when the variances of the summands are small compared to their almost-sure bounds C, that is, when |X_i − E[X_i]| ≤ C almost surely. It says that

\Pr\bigl(|S_n - E_n| \ge t\bigr) \le 2\exp\!\left(-\frac{V_n}{C^2}\, h\!\left(\frac{Ct}{V_n}\right)\right), \qquad \text{where } h(u) = (1+u)\log(1+u) - u.

5. The first of Bernstein's inequalities says that

\Pr\bigl(|S_n - E_n| \ge t\bigr) \le 2\exp\!\left(-\frac{t^2}{2V_n + \tfrac{2}{3}Ct}\right).

6. Chernoff bounds have a particularly simple form in the case of a sum of independent variables, since the moment generating function factorizes:

\operatorname{E}\bigl[e^{t S_n}\bigr] = \prod_{i=1}^{n} \operatorname{E}\bigl[e^{t X_i}\bigr],

so the bound can be computed from the individual moment generating functions.[7]
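The following Python sketch (an illustration added here, not part of the article; the Uniform(0, 1) summands and the values n = 100, t = 10 are arbitrary choices of this sketch) evaluates the Hoeffding, Bennett and Bernstein bounds stated above and compares them with a Monte Carlo estimate of the actual tail probability.

```python
import math
import random

# Numerical comparison of the Hoeffding, Bennett and Bernstein bounds for
# S_n = X_1 + ... + X_n with X_i ~ Uniform(0, 1) i.i.d.  Here a_i = 0, b_i = 1,
# |X_i - E[X_i]| <= C = 1/2, E_n = n/2 and V_n = n/12.

def h(u):
    return (1 + u) * math.log(1 + u) - u

def bounds(n, t):
    C, V = 0.5, n / 12.0
    hoeffding = 2 * math.exp(-2 * t**2 / n)                    # sum of (b_i - a_i)^2 is n
    bennett = 2 * math.exp(-(V / C**2) * h(C * t / V))
    bernstein = 2 * math.exp(-t**2 / (2 * V + (2.0 / 3.0) * C * t))
    return hoeffding, bennett, bernstein

def empirical_tail(n, t, trials=100_000, seed=1):
    rng = random.Random(seed)
    hits = sum(abs(sum(rng.random() for _ in range(n)) - n / 2.0) > t
               for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    n, t = 100, 10.0
    print("Hoeffding, Bennett, Bernstein:", bounds(n, t))        # roughly 0.27, 0.013, 0.013
    print("Monte Carlo estimate:         ", empirical_tail(n, t))  # roughly 5e-4
```

In this regime the summands' variance 1/12 is small relative to the squared range, so Bennett's and Bernstein's bounds come out markedly tighter than Hoeffding's, although all three remain well above the true tail probability.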

The Bretagnolle–Huber–Carol inequality bounds the difference between a vector of multinomially distributed random variables and a vector of expected values.[8][9]

A simple proof appears in [10] (Appendix Section).

If a random vector (Z_1, …, Z_k) is multinomially distributed with parameters (p_1, …, p_k) and satisfies Z_1 + ⋯ + Z_k = M, then

\Pr\!\left( \sum_{i=1}^{k} |Z_i - M p_i| \ge 2\sqrt{M}\,\varepsilon \right) \le 2^k e^{-2\varepsilon^2}.

This inequality is used to bound the total variation distance.
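As a quick consistency check of the statement above (a worked special case added for illustration), take k = 2, so that Z_2 = M − Z_1 and p_2 = 1 − p_1. Then ∑_i |Z_i − Mp_i| = 2|Z_1 − Mp_1| and the inequality becomes

\Pr\bigl(|Z_1 - Mp_1| \ge \sqrt{M}\,\varepsilon\bigr) \le 4\,e^{-2\varepsilon^2},

which agrees, up to the constant factor 2^k = 4, with Hoeffding's inequality applied to the binomial variable Z_1.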

The Mason and van Zwet inequality[11] for multinomial random vectors concerns a slight modification of the classical chi-square statistic.

The Dvoretzky–Kiefer–Wolfowitz inequality bounds the difference between the real and the empirical cumulative distribution function. Given a natural number n, let X_1, X_2, …, X_n be real-valued independent and identically distributed random variables with cumulative distribution function F(·), and let F_n denote the associated empirical distribution function defined by

F_n(x) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{X_i \le x\}, \qquad x \in \mathbb{R}.

So F(x) is the probability that a single random variable X is smaller than x, and F_n(x) is the fraction of the sample that is smaller than x. Then

\Pr\Bigl(\sup_{x \in \mathbb{R}} \bigl(F_n(x) - F(x)\bigr) > \varepsilon\Bigr) \le e^{-2n\varepsilon^2} \quad \text{for every } \varepsilon \ge \sqrt{\tfrac{1}{2n}\ln 2},

and the two-sided form holds with an additional factor of 2:

\Pr\Bigl(\sup_{x \in \mathbb{R}} \bigl|F_n(x) - F(x)\bigr| > \varepsilon\Bigr) \le 2e^{-2n\varepsilon^2} \quad \text{for every } \varepsilon > 0.
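A minimal Python sketch of the most common use of this inequality, a simultaneous confidence band for an unknown CDF (added for illustration; the Exp(1) sample, n = 500 and α = 0.05 are assumptions of this sketch, not from the article):

```python
import math
import random

# Using the two-sided Dvoretzky-Kiefer-Wolfowitz bound
#   Pr(sup_x |F_n(x) - F(x)| > eps) <= 2*exp(-2*n*eps^2)
# to build a simultaneous confidence band around the empirical CDF.

def dkw_epsilon(n, alpha=0.05):
    """Half-width eps such that the band F_n(x) +/- eps contains F everywhere,
    simultaneously, with probability at least 1 - alpha
    (solve 2*exp(-2*n*eps^2) = alpha for eps)."""
    return math.sqrt(math.log(2.0 / alpha) / (2.0 * n))

def sup_deviation(sorted_xs, true_cdf):
    """sup_x |F_n(x) - F(x)|, checking both sides of each jump of the empirical CDF."""
    n = len(sorted_xs)
    dev = 0.0
    for i, x in enumerate(sorted_xs):
        F = true_cdf(x)
        dev = max(dev, abs((i + 1) / n - F), abs(i / n - F))
    return dev

if __name__ == "__main__":
    rng = random.Random(0)
    n = 500
    xs = sorted(rng.expovariate(1.0) for _ in range(n))   # sample from Exp(1)
    eps = dkw_epsilon(n)
    dev = sup_deviation(xs, lambda x: 1.0 - math.exp(-x))  # true CDF of Exp(1)
    print(f"observed sup deviation {dev:.4f} vs DKW band half-width {eps:.4f}")
```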

Anti-concentration inequalities, on the other hand, provide an upper bound on how much a random variable can concentrate, either on a specific value or range of values.

A concrete example is that if you flip a fair coin n times, the probability that any given number of heads appears will be less than 1/\sqrt{n}. This idea can be greatly generalized: for example, a result of Rao and Yehudayoff[12] implies anti-concentration bounds of this kind in considerably more general settings.
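A short Python check of the coin-flipping example (added for illustration): the most likely outcome is k = ⌊n/2⌋ heads, and even its probability stays below 1/√n, so every individual count does too.

```python
import math

# For n fair coin flips, compare the probability of the single most likely
# count (k = n // 2 heads) with the anti-concentration threshold 1 / sqrt(n).
for n in (10, 100, 1000, 10000):
    p_mode = math.comb(n, n // 2) / 2**n          # exact probability of the modal count
    print(n, round(p_mode, 6), round(1 / math.sqrt(n), 6), p_mode < 1 / math.sqrt(n))
```

By Stirling's formula this modal probability is approximately √(2/(πn)) ≈ 0.8/√n, so the 1/√n bound is essentially tight.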

Such inequalities are of importance in several fields, including communication complexity (e.g., in proofs of the gap Hamming problem[13]) and graph theory.[14]

An interesting anti-concentration inequality for weighted sums of independent Rademacher random variables can be obtained using the Paley–Zygmund and the Khintchine inequalities.
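A sketch of the standard argument (added here for illustration; the normalization ‖a‖_2 = 1 is an assumption of the sketch): let S = ∑_i a_i ε_i with independent Rademacher signs ε_i and ∑_i a_i² = 1, so that E[S²] = 1. Expanding the fourth moment gives E[S⁴] = 3(∑_i a_i²)² − 2∑_i a_i⁴ ≤ 3, a case of Khintchine's inequality. The Paley–Zygmund inequality applied to Z = S² then yields, for any 0 ≤ θ ≤ 1,

\Pr\bigl(|S| > \sqrt{\theta}\bigr) = \Pr\bigl(S^2 > \theta\,\operatorname{E}[S^2]\bigr) \ge (1-\theta)^2\,\frac{\operatorname{E}[S^2]^2}{\operatorname{E}[S^4]} \ge \frac{(1-\theta)^2}{3},

so the weighted sum places probability at least (1 − θ)²/3 outside the interval [−√θ, √θ], a lower bound on spread that complements the concentration bounds above.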