Comonotonicity

In probability theory, comonotonicity mainly refers to the perfect positive dependence between the components of a random vector, essentially saying that they can be represented as increasing functions of a single random variable.

In two dimensions it is also possible to consider perfect negative dependence, which is called countermonotonicity.
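A minimal numerical illustration of the two notions (a sketch assuming NumPy; the particular transformations and variable names are arbitrary choices, not from the source): a comonotonic pair consists of two increasing functions of one uniform variable, while a countermonotonic pair combines an increasing and a decreasing function of it.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=10_000)   # a single driving uniform random variable U

# Comonotonic pair: both coordinates are increasing functions of the same U.
x_co, y_co = u, np.exp(u)

# Countermonotonic pair (specific to two dimensions): one coordinate
# increases in U while the other decreases in U.
x_ct, y_ct = u, 1.0 - u

# Pearson correlations: strongly positive for the comonotonic pair,
# exactly -1 here because the countermonotonic pair is linear in U.
print(np.corrcoef(x_co, y_co)[0, 1])
print(np.corrcoef(x_ct, y_ct)[0, 1])
```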

In particular, the sum of the components X1 + X2 + · · · + Xn is the riskiest if the joint probability distribution of the random vector (X1, X2, . . ., Xn) is comonotonic.[3][4] In practical risk-management terms, this means that there is minimal (or even no) variance reduction from diversification.
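The absence of a diversification benefit can be checked numerically (a sketch assuming NumPy; the exponential marginals and names are our choice). In this example the two marginals are exponentials of the same type, so the comonotonic pair is perfectly linearly dependent and the standard deviation of the sum equals the sum of the standard deviations; an independent coupling with the same marginals has strictly smaller spread.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Comonotonic positions: both are increasing functions of ONE uniform U,
# constructed via the inverse-CDF transform of two exponential marginals.
u = rng.uniform(size=n)
x1 = -1.0 * np.log(1.0 - u)   # exponential with mean 1
x2 = -2.0 * np.log(1.0 - u)   # exponential with mean 2, comonotonic with x1

# Independent positions with the same marginals, for comparison.
y1 = -1.0 * np.log(1.0 - rng.uniform(size=n))
y2 = -2.0 * np.log(1.0 - rng.uniform(size=n))

# Comonotonic sum: standard deviations add (here x2 = 2*x1 exactly),
# so pooling the two positions reduces no risk at all.
print(np.std(x1 + x2), np.std(x1) + np.std(x2))
# Independent sum: strictly smaller standard deviation.
print(np.std(y1 + y2))
```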

For extensions of comonotonicity, see Jouini & Napp (2004) and Puccetti & Scarsini (2010).

Let μ be a probability measure on the n-dimensional Euclidean space Rn and let F denote its multivariate cumulative distribution function, that is

F(x_1, \ldots, x_n) = \mu\bigl(\{(y_1, \ldots, y_n) \in \mathbb{R}^n : y_1 \le x_1, \ldots, y_n \le x_n\}\bigr).

Furthermore, let F1, . . ., Fn denote the cumulative distribution functions of the n one-dimensional marginal distributions of μ. A random vector (X1, . . ., Xn) is comonotonic if and only if it can be represented as

(X_1, \ldots, X_n) =_{\mathrm{d}} \bigl(F_{X_1}^{-1}(U), \ldots, F_{X_n}^{-1}(U)\bigr),

where =d stands for equality in distribution, the functions F_{X_i}^{-1} on the right-hand side are the left-continuous generalized inverses[8] of the cumulative distribution functions FX1, . . ., FXn, given by F_{X_i}^{-1}(u) = \inf\{x \in \mathbb{R} : F_{X_i}(x) \ge u\}, and U is a uniformly distributed random variable on the unit interval.
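As a sketch of this representation (assuming NumPy and SciPy; the normal and exponential marginals are our choice for illustration), each component is obtained by applying its marginal quantile function, i.e. the generalized inverse of its CDF, to the same uniform variable U.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
u = rng.uniform(size=200_000)   # the single uniform driver U

# Comonotonic vector: each component is the quantile function (inverse CDF)
# of its own marginal, evaluated at the SAME U.
x1 = stats.norm.ppf(u)               # standard normal marginal
x2 = stats.expon.ppf(u, scale=2.0)   # exponential marginal with mean 2

# Both components are increasing functions of U, so the pair is comonotonic:
# sorting the sample by one component also sorts the other.
assert np.all(np.diff(x2[np.argsort(x1)]) >= 0)
```

Using one shared U for every component is exactly what creates the perfect positive dependence; drawing a fresh uniform per component would instead give an independent vector with the same marginals.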

Let (X, Y) be a bivariate random vector such that the expected values of X, Y and the product XY exist.

Let (X*, Y*) be a comonotonic bivariate random vector with the same one-dimensional marginal distributions as (X, Y).

[note 1] Then it follows from Höffding's formula for the covariance[10]

\operatorname{Cov}(X, Y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \bigl( F_{(X,Y)}(x, y) - F_X(x)\,F_Y(y) \bigr) \, dx \, dy

and the upper Fréchet–Hoeffding bound F_{(X,Y)}(x, y) \le \min\{F_X(x), F_Y(y)\} that

\operatorname{Cov}(X, Y) \le \operatorname{Cov}(X^*, Y^*)

and, correspondingly,

\operatorname{E}[XY] \le \operatorname{E}[X^* Y^*],

with equality if and only if (X, Y) is comonotonic.
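An empirical sanity check of the covariance inequality (a sketch assuming NumPy; the marginals and the independent starting coupling are our choice). Sorting both coordinate samples pairs the k-th smallest x with the k-th smallest y, which is the empirical comonotonic coupling: it leaves both marginals unchanged and, by the rearrangement inequality, maximizes the sample covariance.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Some joint distribution with normal and exponential marginals;
# here the two coordinates are drawn independently.
x = rng.normal(size=n)
y = rng.exponential(scale=2.0, size=n)

# Comonotonic rearrangement (X*, Y*): same marginals, maximal dependence.
x_star, y_star = np.sort(x), np.sort(y)

cov = np.cov(x, y)[0, 1]            # near zero for the independent pair
cov_star = np.cov(x_star, y_star)[0, 1]
print(cov, cov_star)
assert cov <= cov_star
```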