Independence (probability theory)

Two events are independent (also called statistically independent or stochastically independent) if, informally speaking, the occurrence of one does not affect the probability of occurrence of the other. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other.

When dealing with collections of more than two events, two notions of independence need to be distinguished: the events are called pairwise independent if any two events in the collection are independent of each other, while mutual independence means that each event is independent of any combination of the other events in the collection.

A similar notion exists for collections of random variables.

In the standard literature of probability theory, statistics, and stochastic processes, independence without further qualification usually refers to mutual independence.
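As an added illustration (the specific events are chosen for illustration, not taken from the article), the classic example of three events that are pairwise independent but not mutually independent uses two fair coin flips: A = "the first flip is heads", B = "the second flip is heads", and C = "the two flips differ". The Python sketch below verifies this by enumerating the four equally likely outcomes, using the product criterion defined formally below.

```python
from itertools import product
from fractions import Fraction

# Sample space: two fair coin flips (0 or 1); each of the 4 outcomes has probability 1/4.
space = list(product([0, 1], repeat=2))

def prob(event):
    # exact probability of an event given as a predicate on outcomes
    return Fraction(sum(1 for w in space if event(w)), len(space))

A = lambda w: w[0] == 1          # first flip is heads
B = lambda w: w[1] == 1          # second flip is heads
C = lambda w: w[0] != w[1]       # the two flips differ

pairs = [(A, B), (A, C), (B, C)]
# Each pair satisfies P(E and F) = P(E) * P(F) ...
assert all(prob(lambda w: E(w) and F(w)) == prob(E) * prob(F) for E, F in pairs)
# ... but the three events together do not:
p_all = prob(lambda w: A(w) and B(w) and C(w))       # = 0
assert p_all != prob(A) * prob(B) * prob(C)          # 0 != 1/8
```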

Two events $A$ and $B$ are independent if and only if their joint probability equals the product of their probabilities:
$$\mathrm{P}(A \cap B) = \mathrm{P}(A)\,\mathrm{P}(B).$$

Why this defines independence is made clear by rewriting with conditional probabilities $\mathrm{P}(A \mid B) = \frac{\mathrm{P}(A \cap B)}{\mathrm{P}(B)}$: the events are independent if and only if $\mathrm{P}(A \mid B) = \mathrm{P}(A)$ and, likewise, $\mathrm{P}(B \mid A) = \mathrm{P}(B)$.

Although the derived expressions may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined if $\mathrm{P}(A)$ or $\mathrm{P}(B)$ is $0$.

Furthermore, the preferred definition makes clear by symmetry that when $A$ is independent of $B$, $B$ is also independent of $A$.

The equality $\mathrm{P}(A \cap B) = \mathrm{P}(A)\,\mathrm{P}(B)$ is called the multiplication rule for independent events.
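To make the multiplication rule concrete, here is a small worked check (the specific events are illustrative choices, not taken from the article): for one roll of a fair six-sided die, the event A = "the result is even" has probability 1/2, the event B = "the result is at most 2" has probability 1/3, and P(A ∩ B) = P({2}) = 1/6 = P(A) P(B), so A and B are independent. The Python fragment below verifies this by exact enumeration.

```python
from fractions import Fraction

outcomes = range(1, 7)                    # one roll of a fair six-sided die
A = {x for x in outcomes if x % 2 == 0}   # "result is even"
B = {x for x in outcomes if x <= 2}       # "result is at most 2"

def prob(event):
    # uniform probability over the six equally likely outcomes
    return Fraction(len(event), 6)

# multiplication rule: A and B are independent iff P(A ∩ B) = P(A) * P(B)
assert prob(A & B) == prob(A) * prob(B)
print(prob(A), prob(B), prob(A & B))      # 1/2 1/3 1/6
```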

Two random variables $X$ and $Y$, with cumulative distribution functions $F_X(x)$ and $F_Y(y)$, are independent iff the combined random variable $(X, Y)$ has a joint cumulative distribution function[3]: p. 15
$$F_{X,Y}(x, y) = F_X(x)\,F_Y(y) \quad \text{for all } x, y,$$
or equivalently, if the probability densities $f_X(x)$ and $f_Y(y)$ and the joint probability density $f_{X,Y}(x, y)$ exist,
$$f_{X,Y}(x, y) = f_X(x)\,f_Y(y) \quad \text{for all } x, y.$$

A finite set of $n$ random variables $\{X_1, \ldots, X_n\}$ is mutually independent if and only if, for any sequence of numbers $\{x_1, \ldots, x_n\}$, the events $\{X_1 \le x_1\}, \ldots, \{X_n \le x_n\}$ are mutually independent events. This is equivalent to the following condition on the joint cumulative distribution function: the set is mutually independent if and only if[3]: p. 16
$$F_{X_1,\ldots,X_n}(x_1, \ldots, x_n) = F_{X_1}(x_1) \cdots F_{X_n}(x_n) \quad \text{for all } x_1, \ldots, x_n.$$
It is not necessary here to require that the probability distribution factorizes for all possible $k$-element subsets, as in the case for $n$ events; this is not required because, for example, $F_{X_1,X_2,X_3} = F_{X_1} F_{X_2} F_{X_3}$ implies $F_{X_1,X_3} = F_{X_1} F_{X_3}$.
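A minimal exact check of this factorization condition, assuming three independent fair coin-flip variables (an illustrative construction, not from the article): the joint cumulative distribution function equals the product of the marginal ones at every point.

```python
from fractions import Fraction
from itertools import product

# Three independent fair coins with values 0 or 1; each joint outcome has probability 1/8.
space = list(product([0, 1], repeat=3))

def cdf_joint(x1, x2, x3):
    return Fraction(sum(1 for w in space if w[0] <= x1 and w[1] <= x2 and w[2] <= x3),
                    len(space))

def cdf_marginal(x):
    return Fraction(sum(1 for v in [0, 1] if v <= x), 2)

# mutual independence: the joint CDF factorizes into the product of the marginal CDFs
for x1, x2, x3 in product([-1, 0, 1], repeat=3):
    assert cdf_joint(x1, x2, x3) == cdf_marginal(x1) * cdf_marginal(x2) * cdf_marginal(x3)
print("joint CDF factorizes at every point checked")
```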

The measure-theoretically inclined reader may prefer to substitute events $\{X \in A\}$ for events $\{X \le x\}$ in the above definition, where $A$ is any Borel set.

That definition is exactly equivalent to the one above when the values of the random variables are real numbers.

It has the advantage of working also for complex-valued random variables or for random variables taking values in any measurable space (which includes topological spaces endowed with appropriate σ-algebras).

Two random vectors $\mathbf{X} = (X_1, \ldots, X_m)^{\mathsf T}$ and $\mathbf{Y} = (Y_1, \ldots, Y_n)^{\mathsf T}$ are called independent if
$$F_{\mathbf{X},\mathbf{Y}}(\mathbf{x}, \mathbf{y}) = F_{\mathbf{X}}(\mathbf{x})\,F_{\mathbf{Y}}(\mathbf{y}) \quad \text{for all } \mathbf{x}, \mathbf{y},$$
where $F_{\mathbf{X}}(\mathbf{x})$ and $F_{\mathbf{Y}}(\mathbf{y})$ denote the cumulative distribution functions of $\mathbf{X}$ and $\mathbf{Y}$, and $F_{\mathbf{X},\mathbf{Y}}(\mathbf{x}, \mathbf{y})$ denotes their joint cumulative distribution function.

Note that an event is independent of itself if and only if $\mathrm{P}(A) = \mathrm{P}(A \cap A) = \mathrm{P}(A)\,\mathrm{P}(A)$, that is, if and only if $\mathrm{P}(A) = 0$ or $\mathrm{P}(A) = 1$. Thus an event is independent of itself if and only if it almost surely occurs or its complement almost surely occurs; this fact is useful when proving zero–one laws.

If $X$ and $Y$ are statistically independent random variables, then the expectation operator $\operatorname{E}$ has the property
$$\operatorname{E}[X Y] = \operatorname{E}[X]\,\operatorname{E}[Y],$$
and the covariance $\operatorname{cov}[X, Y] = \operatorname{E}[X Y] - \operatorname{E}[X]\,\operatorname{E}[Y]$ is zero.

The converse does not hold: if two random variables have a covariance of 0, they may still fail to be independent.
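A brief simulation illustrating both directions (a sketch with assumed distributions, not part of the source): for independent draws the product rule for expectations holds up to sampling error, while $Y = X^2$ has essentially zero sample covariance with a standard normal $X$ even though $Y$ is a deterministic function of $X$, so the two are clearly dependent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Independent draws: E[XY] = E[X]E[Y] holds up to sampling error.
x = rng.normal(size=n)
y = rng.normal(size=n)
print(np.mean(x * y), np.mean(x) * np.mean(y))    # both approximately 0

# Zero covariance without independence: Y = X**2 is determined by X,
# yet cov(X, Y) = E[X**3] - E[X] E[X**2] = 0 for a symmetric X.
y2 = x ** 2
print(np.cov(x, y2)[0, 1])                        # approximately 0

# Dependence is visible by conditioning: knowing |X| > 1 changes the distribution of Y.
print(np.mean(y2), np.mean(y2[np.abs(x) > 1]))    # about 1.0 vs about 2.5
```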

Two random variables $X$ and $Y$ are independent if and only if the characteristic function of the random vector $(X, Y)$ satisfies
$$\varphi_{(X,Y)}(t, s) = \varphi_X(t)\,\varphi_Y(s).$$
In particular, the characteristic function of their sum is then the product of their marginal characteristic functions,
$$\varphi_{X+Y}(t) = \varphi_X(t)\,\varphi_Y(t),$$
though the reverse implication is not true.

Random variables that satisfy the latter condition are called subindependent.
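As a rough empirical check of the characteristic-function factorization (a sketch with arbitrarily chosen distributions and evaluation points, not from the article), the joint characteristic function of independent $X$ and $Y$, estimated from samples, matches the product of the estimated marginal characteristic functions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(size=n)
y = rng.exponential(size=n)        # drawn independently of x

def ecf(sample, t):
    # empirical characteristic function: estimate of E[exp(i t Z)]
    return np.mean(np.exp(1j * t * sample))

t, s = 0.7, -1.3
joint = np.mean(np.exp(1j * (t * x + s * y)))    # characteristic function of (X, Y) at (t, s)
print(abs(joint - ecf(x, t) * ecf(y, s)))        # small: factorization holds for independent X, Y
```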

If $X$ and $Y$ are discrete random variables, they are independent if and only if $\mathrm{P}(X = x,\, Y = y) = \mathrm{P}(X = x)\,\mathrm{P}(Y = y)$ for all $x$ and $y$. On the other hand, if the random variables are continuous and have a joint probability density function $f_{X,Y}(x, y)$, then $X$ and $Y$ are independent if and only if $f_{X,Y}(x, y) = f_X(x)\,f_Y(y)$ for all $x$ and $y$.

A similar equation holds for the conditional probability density functions in the continuous case: $f_{X \mid Y}(x \mid y) = f_X(x)$ wherever $f_Y(y) > 0$.
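A quick numerical illustration of the conditional statement (assumed distributions, purely illustrative): for independent $X$ and $Y$, conditioning on an event defined through $Y$ leaves the sample distribution of $X$ essentially unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
x = rng.normal(size=n)
y = rng.uniform(size=n)            # drawn independently of x

# Conditioning on Y should not change the distribution of X:
# compare mean and standard deviation of X overall and given Y < 0.1.
mask = y < 0.1
print(x.mean(), x.std())                  # ~0, ~1
print(x[mask].mean(), x[mask].std())      # also ~0, ~1 (up to sampling error)
```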

Independence can be seen as a special kind of conditional independence, since probability can be seen as a kind of conditional probability given no events.

Before 1933, independence in probability theory was defined in a verbal manner.

Apparently, there was the conviction that the multiplication rule for independent events was a consequence of this verbal definition.

Of course, a proof of this assertion cannot work without further, more formal tacit assumptions.

Kolmogorov credited this definition to S. N. Bernstein, and quoted a publication which had appeared in Russian in 1927.[14]

Unfortunately, neither Bernstein nor Kolmogorov was aware of the work of Georg Bohlmann.

More about his work can be found in "On the contributions of Georg Bohlmann to probability theory" by Ulrich Krengel.

[Figure: Pairwise independent, but not mutually independent, events.]
[Figure: Mutually independent events.]