Weakly dependent random variables

In probability, weak dependence of random variables is a generalization of independence that is weaker than the concept of a martingale[citation needed].

A (time) sequence of random variables is weakly dependent if the covariance between distinct blocks of the sequence decays to 0 as the blocks are separated further in time.
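
As a concrete illustration (a simulation sketch with illustrative parameters, not drawn from the sources cited here), a stationary AR(1) process $X_{t+1} = \rho X_t + \varepsilon_t$ has $\operatorname{Cov}(X_t, X_{t+r}) = \rho^r \operatorname{Var}(X_t)$, so the covariance between separated blocks decays geometrically in the gap $r$:

```python
import numpy as np

# Simulate a stationary AR(1) process X_{t+1} = rho * X_t + eps_t.
# (Illustrative parameters; rho controls the strength of dependence.)
rng = np.random.default_rng(0)
rho, n = 0.9, 200_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0] / np.sqrt(1 - rho**2)   # start in the stationary distribution
for t in range(1, n):
    x[t] = rho * x[t - 1] + eps[t]

def lag_cov(x, r):
    """Empirical covariance between the sequence and itself shifted by r."""
    return np.cov(x[:-r], x[r:])[0, 1]

# The empirical covariance decays roughly like rho**r * Var(X) as r grows.
for r in (1, 10, 30):
    print(r, lag_cov(x, r), rho**r / (1 - rho**2))
```

Here the theoretical lag-$r$ covariance is $\rho^r/(1-\rho^2)$ for unit-variance innovations, and the empirical estimates track it closely.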

Weak dependence primarily appears as a technical condition in various probabilistic limit theorems.

Fix a set $S$, a sequence of sets of measurable functions $\mathcal{F} = \{\mathcal{F}_n\}_{n \in \mathbb{N}}$ with each $\mathcal{F}_n$ a set of functions from $S^n$ to $\mathbb{R}$, a decreasing sequence $\theta = \{\theta_r\}$ with $\theta_r \to 0$, and a function $\psi$. A sequence $\{X_n\}$ of random variables valued in $S$ is $\theta$-weakly dependent iff, for all $d, e \in \mathbb{N}$, all gaps $r \in \mathbb{N}$, all indices $i_1 \le \cdots \le i_d < i_d + r \le j_1 \le \cdots \le j_e$, and all $f \in \mathcal{F}_d$, $g \in \mathcal{F}_e$, we have[1]: 315

$$\left| \operatorname{Cov}\big( f(X_{i_1}, \ldots, X_{i_d}),\; g(X_{j_1}, \ldots, X_{j_e}) \big) \right| \le \psi(d, e, f, g)\, \theta_r.$$

Note that the covariance does not decay to 0 uniformly in $d$ and $e$.[2]: 9

Weak dependence is a sufficiently weak condition that many natural instances of stochastic processes exhibit it.[2]: 9 In particular, weak dependence is a natural condition for the ergodic theory of random functions.[3]

Weak dependence is a sufficient substitute for independence in the Lindeberg–Lévy central limit theorem.[1]: 315 For this reason, specializations often appear in the probability literature on limit theorems.[2]: 153–197 These include Withers' condition for strong mixing,[1][4] Tran's "absolute regularity in the locally transitive sense",[5] and Birkel's "asymptotic quadrant independence".[6]

Weak dependence also functions as a substitute for strong mixing.[7] Again, generalizations of the latter are specializations of the former; an example is Rosenblatt's mixing condition.[8]

Other uses include a generalization of the Marcinkiewicz–Zygmund inequality and of the Rosenthal inequalities.[1]: 314, 319

Martingales are weakly dependent,[citation needed] so many results about martingales also hold true for weakly dependent sequences.

An example is Bernstein's bound on higher moments, which can be relaxed to only require[9][10]
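
To illustrate the central-limit behavior discussed above (a simulation sketch with illustrative parameters, not a construction from the cited sources), the sample mean of a weakly dependent AR(1) sequence $X_{t+1} = \rho X_t + \varepsilon_t$ is approximately normal once standardized by the long-run standard deviation $\sigma_\varepsilon/(1-\rho)$ rather than the i.i.d. one:

```python
import numpy as np

# CLT under weak dependence: standardized sample means of an AR(1) process
# X_{t+1} = rho * X_t + eps_t are approximately standard normal when scaled
# by the long-run standard deviation sigma_eps / (1 - rho).
# (Illustrative parameters, not taken from the cited sources.)
rng = np.random.default_rng(1)
rho, n, reps = 0.5, 2_000, 1_000
long_run_sd = 1.0 / (1.0 - rho)      # sigma_eps = 1

z = np.empty(reps)
for k in range(reps):
    eps = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = eps[0] / np.sqrt(1 - rho**2)   # stationary start
    for t in range(1, n):
        x[t] = rho * x[t - 1] + eps[t]
    z[k] = np.sqrt(n) * x.mean() / long_run_sd

# z should look standard normal: mean near 0, standard deviation near 1,
# and roughly 95% of its mass inside [-1.96, 1.96].
print(z.mean(), z.std(), np.mean(np.abs(z) < 1.96))
```

The design point is the scaling: dependence inflates the variance of the sample mean, so the naive i.i.d. standardization would produce a distribution wider than standard normal.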