In the mathematical theory of probability, Janson's inequality is a collection of related inequalities giving an exponential bound on the probability of many related events happening simultaneously, in terms of their pairwise dependence.
Informally, Janson's inequality involves taking a sample of many independent random binary variables and a family of subsets of those variables, and bounding the probability that the sample contains any of those subsets in terms of their pairwise correlations.
Let $\Gamma$ be our set of variables. We intend to sample these variables according to probabilities $p = (p_i)_{i \in \Gamma}$. Let $\Gamma_p \subseteq \Gamma$ be the random variable of the subset of $\Gamma$ that includes each $i \in \Gamma$ with probability $p_i$. That is, independently for every $i \in \Gamma$,

$\Pr[i \in \Gamma_p] = p_i.$

Let $S$ be a family of subsets of $\Gamma$. We want to bound the probability that any $A \in S$ is a subset of $\Gamma_p$. We will bound it using the expectation of the number of $A \in S$ with $A \subseteq \Gamma_p$, and a term from the pairwise probability of two such sets both being contained in $\Gamma_p$.

For $A \in S$, let $I_A$ be the random variable that is one if $A \subseteq \Gamma_p$ and zero otherwise, and let $X$ be the random variable counting the number of sets in $S$ that are subsets of $\Gamma_p$:

$X = \sum_{A \in S} I_A.$

Then we define the following variables:

$\lambda = \operatorname{E}[X] = \sum_{A \in S} \operatorname{E}[I_A],$

$\Delta = \frac{1}{2} \sum_{\substack{A, B \in S,\ A \neq B \\ A \cap B \neq \emptyset}} \operatorname{E}[I_A I_B],$

$\bar{\Delta} = \lambda + 2\Delta.$

Then the Janson inequality is:

$\Pr[X = 0] \leq e^{-\lambda + \Delta}$

and

$\Pr[X = 0] \leq e^{-\lambda^2 / \bar{\Delta}}.$

Janson later extended this result to give a tail bound on the probability of only a few sets being subsets.
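As a sanity check, the quantities above can be computed exactly for a small toy example and compared against a Monte Carlo estimate of $\Pr[X = 0]$. The ground set, the choice $p_i = 0.1$ for all $i$, and the family of all 3-element subsets are illustrative assumptions, not part of the original statement. A minimal sketch in Python:

```python
import itertools
import math
import random

# Hypothetical toy example: ground set Gamma = {0, ..., 5}, each element
# kept independently with probability P, and S = all 3-element subsets.
GAMMA = range(6)
P = 0.1
S = [frozenset(c) for c in itertools.combinations(GAMMA, 3)]

# lambda = E[X] = sum over A in S of Pr[A subset of Gamma_p] = sum of P^|A|
lam = sum(P ** len(A) for A in S)

# Delta = (1/2) * sum over ordered pairs A != B with A ∩ B nonempty of
# E[I_A I_B]; here E[I_A I_B] = Pr[A ∪ B ⊆ Gamma_p] = P^|A ∪ B|
delta = 0.5 * sum(
    P ** len(A | B)
    for A in S for B in S
    if A != B and A & B
)

# Janson bound: Pr[X = 0] <= exp(-lambda + Delta)
janson_bound = math.exp(-lam + delta)

# Monte Carlo estimate of Pr[X = 0]
random.seed(0)
trials = 20000
zero_count = 0
for _ in range(trials):
    sample = {i for i in GAMMA if random.random() < P}
    if not any(A <= sample for A in S):
        zero_count += 1
empirical = zero_count / trials

print(f"lambda = {lam:.4f}, Delta = {delta:.4f}")
print(f"Janson bound on Pr[X=0]: {janson_bound:.4f}")
print(f"empirical Pr[X=0]:       {empirical:.4f}")
```

For these parameters $\lambda = 20 \cdot 0.1^3 = 0.02$ and $\Delta = 0.0099$, so the bound $e^{-\lambda + \Delta} \approx 0.99$ is consistent with the empirical frequency of no subset appearing.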
Let $0 \leq t \leq \lambda$ give the distance from the expected number of subsets, and let

$\varphi(x) = (1 + x)\ln(1 + x) - x.$

Then we have

$\Pr[X \leq \lambda - t] \leq e^{-\varphi(-t/\lambda)\,\lambda^2/(\lambda + 2\Delta)} \leq e^{-t^2/(2(\lambda + 2\Delta))}.$

Janson's inequality has been used in pseudorandomness for bounds on constant-depth circuits.
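The tail bound can be exercised numerically. The values of $\lambda$, $\Delta$, and $t$ below are illustrative assumptions chosen only to evaluate the formulas, not taken from any specific model:

```python
import math

def phi(x):
    """phi(x) = (1 + x) ln(1 + x) - x, defined for x >= -1 (phi(-1) = 1)."""
    if x == -1:
        return 1.0
    return (1 + x) * math.log(1 + x) - x

lam = 10.0    # expected number of subsets (hypothetical value)
delta = 2.0   # pairwise-overlap term Delta (hypothetical value)
t = 5.0       # distance below the mean, with 0 <= t <= lam

# Lower-tail bound: Pr[X <= lam - t] <= exp(-phi(-t/lam) * lam^2 / (lam + 2*Delta))
tail_bound = math.exp(-phi(-t / lam) * lam**2 / (lam + 2 * delta))

# The weaker but simpler consequence exp(-t^2 / (2 * (lam + 2*Delta)))
simple_bound = math.exp(-t**2 / (2 * (lam + 2 * delta)))

print(f"phi(-t/lam)  = {phi(-t / lam):.4f}")
print(f"tail bound   = {tail_bound:.4f}")
print(f"simple bound = {simple_bound:.4f}")
```

Since $\varphi(x) \geq x^2/2$ for $x \leq 0$, the first bound is always at least as strong as the second, as the printed values illustrate.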
[1] Research leading to these inequalities was originally motivated by estimating chromatic numbers of random graphs.