In information geometry, a divergence is a kind of statistical distance: a binary function which establishes the separation from one probability distribution to another on a statistical manifold.
Given a differentiable manifold M, a divergence on M is a C²-function D: M × M → [0, ∞) satisfying: (1) D(p, q) ≥ 0 for all p, q ∈ M; (2) D(p, q) = 0 if and only if p = q; (3) at every point p ∈ M, D(p, p + dp) is a positive-definite quadratic form for infinitesimal displacements dp from p. In applications to statistics, the manifold M is typically the space of parameters of a parametric family of probability distributions.
Dimensional analysis of condition 3 shows that divergence has the dimension of squared distance.
The dual divergence D* is defined by D*(p, q) = D(q, p). Given any divergence D, its symmetrized version is obtained by averaging it with its dual divergence:[3]
\[ D_S(p, q) = \tfrac{1}{2}\bigl(D(p, q) + D(q, p)\bigr). \]
Unlike metrics, divergences are not required to be symmetric, and the asymmetry is important in applications.
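As an illustration (not part of the original article), the following Python sketch computes the Kullback–Leibler divergence of two discrete distributions together with its dual and symmetrized versions; the function names and example distributions are chosen here purely for demonstration.

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p, q) = sum_i p_i * log(p_i / q_i)
    for strictly positive discrete distributions p and q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

def dual_divergence(p, q):
    """Dual divergence D*(p, q) = D(q, p): the same function with arguments swapped."""
    return kl_divergence(q, p)

def symmetrized_divergence(p, q):
    """Symmetrized version D_S(p, q) = (D(p, q) + D(q, p)) / 2
    (the Jeffreys divergence, up to a constant factor)."""
    return 0.5 * (kl_divergence(p, q) + dual_divergence(p, q))

p = [0.6, 0.3, 0.1]
q = [0.5, 0.25, 0.25]
print(kl_divergence(p, q))           # asymmetric: generally differs from kl_divergence(q, p)
print(dual_divergence(p, q))
print(symmetrized_divergence(p, q))  # symmetric in p and q
```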
In general statistics and probability, "divergence" generally refers to any kind of function D(p, q), where p, q are probability distributions or other objects under consideration, such that conditions 1 and 2 are satisfied.
Condition 3 is required for "divergence" as used in information geometry.
As an example, the total variation distance, a commonly used statistical divergence, does not satisfy condition 3.
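This failure can be checked numerically. In the sketch below (added here for illustration, using two nearby Bernoulli distributions as an assumed example), the total variation distance shrinks only linearly in the size of the displacement, whereas the KL divergence, which does satisfy condition 3, shrinks quadratically.

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * float(np.sum(np.abs(np.asarray(p) - np.asarray(q))))

def kl(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

theta = 0.3
for eps in [1e-1, 1e-2, 1e-3]:
    p = [theta, 1 - theta]
    q = [theta + eps, 1 - theta - eps]
    # Total variation is of order eps (linear in the displacement), so it is not
    # a quadratic form for infinitesimal displacements; KL is of order eps**2.
    print(eps, total_variation(p, q), kl(p, q))
```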
Notation for divergences varies significantly between fields, though there are some conventions.
Divergences are generally notated with an uppercase 'D', as in D(x, y), to distinguish them from metric distances, which are notated with a lowercase 'd'. When multiple divergences are in use, they are commonly distinguished with subscripts, as in D_KL for the Kullback–Leibler divergence.
Often a different separator between the parameters is used, particularly to emphasize the asymmetry. In information theory, a double bar is commonly used: D(p ‖ q); this is similar to, but distinct from, the notation for conditional probability, P(A | B), and emphasizes interpreting the divergence as a relative measurement, as in relative entropy; this notation is common for the KL divergence. A colon may be used instead, as in D(p : q); this emphasizes the relative information supporting the two distributions.
The notation for the parameters varies as well. Uppercase P, Q interprets the parameters as probability distributions, while lowercase p, q or x, y interprets them geometrically as points in a space, and μ1, μ2 or m1, m2 interprets them as measures.
Many properties of divergences can be derived if we restrict S to be a statistical manifold, meaning that it can be parametrized with a finite-dimensional coordinate system θ, so that for a distribution p ∈ S we can write p = p(θ).
For a pair of points p, q ∈ S with coordinates θp and θq, denote the partial derivatives of D(p, q) as
\[ D\bigl((\partial_i)_p, q\bigr) = \frac{\partial}{\partial\theta_p^i} D(p, q), \qquad D\bigl((\partial_i\partial_j)_p, (\partial_k)_q\bigr) = \frac{\partial}{\partial\theta_p^i}\frac{\partial}{\partial\theta_p^j}\frac{\partial}{\partial\theta_q^k} D(p, q), \quad \text{etc.} \]
Now we restrict these functions to the diagonal p = q, and denote[4]
\[ D[\partial_i, \cdot] \colon p \mapsto D\bigl((\partial_i)_p, p\bigr), \qquad D[\partial_i, \partial_j] \colon p \mapsto D\bigl((\partial_i)_p, (\partial_j)_p\bigr), \quad \text{etc.} \]
By definition, the function D(p, q) is minimized at p = q, and therefore
\[ D[\partial_i, \cdot] = D[\cdot, \partial_i] = 0, \qquad D[\partial_i\partial_j, \cdot] = D[\cdot, \partial_i\partial_j] = -D[\partial_i, \partial_j] \equiv g_{ij}^{(D)}, \]
where the matrix g^(D) is positive semi-definite and defines a unique Riemannian metric on the manifold S. The divergence D(·, ·) also defines a unique torsion-free affine connection ∇^(D) with coefficients
\[ \Gamma_{ij,k}^{(D)} = -D[\partial_i\partial_j, \partial_k], \]
and the dual connection ∇* is generated by the dual divergence D*.
Thus, a divergence D(·, ·) generates on a statistical manifold a unique dualistic structure (g^(D), ∇^(D), ∇^(D*)).
The converse is also true: every torsion-free dualistic structure on a statistical manifold is induced from some globally defined divergence function (which however need not be unique).
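To make the construction of the induced metric concrete, here is a small symbolic sketch (an added illustration, using sympy and the KL divergence on the Bernoulli family as an assumed example). It evaluates g^(D) = −D[∂_i, ∂_j] on the diagonal and recovers the Fisher information 1/(θ(1 − θ)); the corresponding connection coefficient is computed the same way.

```python
import sympy as sp

theta_p, theta_q = sp.symbols('theta_p theta_q', positive=True)

# KL divergence between Bernoulli(theta_p) and Bernoulli(theta_q)
D = (theta_p * sp.log(theta_p / theta_q)
     + (1 - theta_p) * sp.log((1 - theta_p) / (1 - theta_q)))

theta = sp.symbols('theta', positive=True)

# Induced metric: g = -d^2 D / (d theta_p d theta_q), restricted to the diagonal
g = -sp.diff(D, theta_p, theta_q).subs({theta_p: theta, theta_q: theta})
print(sp.simplify(g))   # equals 1/(theta*(1 - theta)), the Fisher information

# Connection coefficient Gamma_{11,1} = -D[d_1 d_1, d_1] for the single coordinate
Gamma = -sp.diff(D, theta_p, theta_p, theta_q).subs({theta_p: theta, theta_q: theta})
print(sp.simplify(Gamma))  # prints 0 for this one-parameter family in this parametrization
```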
The two most important divergences are the relative entropy (Kullback–Leibler divergence, KL divergence), which is central to information theory and statistics, and the squared Euclidean distance (SED). Minimizing these two divergences is the main way that linear inverse problems are solved, via the principle of maximum entropy and least squares, notably in logistic regression and linear regression.
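As a rough sketch of that connection (added here for illustration, with synthetic data and plain gradient descent rather than any particular solver), ordinary least squares minimizes a sum of squared Euclidean divergences between observations and predictions, while logistic regression minimizes a mean KL divergence (equivalently, cross-entropy) between the observed 0/1 labels, viewed as degenerate Bernoulli distributions, and the predicted Bernoulli distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear regression: minimize the mean squared Euclidean divergence (y_i - w*x_i)^2.
x = rng.normal(size=100)
y = 2.0 * x + 0.1 * rng.normal(size=100)
w = 0.0
for _ in range(500):
    grad = -2.0 * np.mean(x * (y - w * x))   # gradient of the mean squared divergence
    w -= 0.1 * grad
print("least-squares slope:", w)             # close to 2.0

# Logistic regression: minimize the mean KL divergence between the 0/1 labels
# (degenerate Bernoulli distributions) and the predicted Bernoulli(p_i).
labels = (rng.uniform(size=100) < 1.0 / (1.0 + np.exp(-1.5 * x))).astype(float)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-b * x))         # predicted Bernoulli parameter
    grad = np.mean((p - labels) * x)         # gradient of the mean cross-entropy / KL
    b -= 0.5 * grad
print("logistic slope:", b)                  # roughly the 1.5 used to generate the labels
```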
A Bregman divergence is generated by a strictly convex, continuously differentiable function F (the Bregman generator) via D_F(p, q) = F(p) − F(q) − ⟨∇F(q), p − q⟩. For example, for the squared Euclidean distance the generator is F(x) = ‖x‖², while for the relative entropy the generator is the negative entropy F(x) = Σ_i x_i log x_i.
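These generators can be checked numerically; the sketch below (added here as an illustration, with arbitrary example vectors) evaluates the Bregman formula for both choices of F and compares the results against the squared Euclidean distance and the relative entropy computed directly.

```python
import numpy as np

def bregman(F, gradF, p, q):
    """Bregman divergence D_F(p, q) = F(p) - F(q) - <gradF(q), p - q>."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return F(p) - F(q) - np.dot(gradF(q), p - q)

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.5, 0.25, 0.25])

# Generator F(x) = ||x||^2 gives the squared Euclidean distance.
sq = lambda x: np.dot(x, x)
grad_sq = lambda x: 2 * x
print(bregman(sq, grad_sq, p, q), np.sum((p - q) ** 2))                # equal

# Generator F(x) = sum_i x_i log x_i (negative entropy) gives the relative entropy;
# for probability vectors the linear correction terms cancel.
negent = lambda x: np.sum(x * np.log(x))
grad_negent = lambda x: np.log(x) + 1
print(bregman(negent, grad_negent, p, q), np.sum(p * np.log(p / q)))   # equal
```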
The use of the term "divergence" – both what functions it refers to, and what various statistical distances are called – has varied significantly over time, but by c. 2000 had settled on the current usage within information geometry, notably in the textbook Amari & Nagaoka (2000).
[1] The term "divergence" for a statistical distance was used informally in various contexts from c. 1910 to c. 1940.
The term "divergence" was used generally by Ali & Silvey (1966) for statistically distances.
Numerous references to earlier uses of statistical distances are given in Adhikari & Joshi (1956) and Kullback (1959, pp.
Kullback & Leibler (1951) actually used "divergence" to refer to the symmetrized divergence (this function had already been defined and used by Harold Jeffreys in 1948[9]), referring to the asymmetric function as "the mean information for discrimination ... per observation",[10] while Kullback (1959) referred to the asymmetric function as the "directed divergence".
[11] Ali & Silvey (1966) referred generally to such a function as a "coefficient of divergence", and showed that many existing functions could be expressed as f-divergences, referring to Jeffreys' function as "Jeffreys' measure of divergence" (today "Jeffreys divergence"), and Kullback–Leibler's asymmetric function (in each direction) as "Kullback's and Leibler's measures of discriminatory information" (today "Kullback–Leibler divergence").
[12] The information geometry definition of divergence (the subject of this article) was initially referred to by alternative terms, including "quasi-distance" Amari (1982, p. 369) and "contrast function" Eguchi (1985), though "divergence" was used in Amari (1985) for the α-divergence, and has become standard for the general class.
Notationally, Kullback & Leibler (1951) denoted their asymmetric function as I(1:2), while Ali & Silvey (1966) denoted their functions with a lowercase 'd', as in d(P1, P2).