In probability theory, the total variation distance is a statistical distance between probability distributions, and is sometimes called the statistical distance, statistical difference or variational distance.
The total variation distance between probability measures $P$ and $Q$ defined on a measurable space $(\Omega, \mathcal{F})$ is defined as[1]

$$\delta(P, Q) = \sup_{A \in \mathcal{F}} \left| P(A) - Q(A) \right|.$$

This is the largest absolute difference between the probabilities that the two probability distributions assign to the same event.
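As a minimal illustration of the definition (a sketch with hypothetical distributions, not drawn from the references), the supremum can be computed by brute force on a small finite sample space by enumerating every event:

```python
# Brute-force the definition delta(P, Q) = sup_A |P(A) - Q(A)|
# on a three-point sample space by enumerating all events A.
from itertools import chain, combinations

P = {"a": 0.5, "b": 0.3, "c": 0.2}   # hypothetical pmf
Q = {"a": 0.4, "b": 0.4, "c": 0.2}   # hypothetical pmf

omega = list(P)
events = chain.from_iterable(combinations(omega, r) for r in range(len(omega) + 1))

tv = max(abs(sum(P[x] for x in A) - sum(Q[x] for x in A)) for A in events)
print(tv)  # 0.1, attained e.g. at the event A = {"a"}
```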
The total variation distance is an f-divergence and an integral probability metric.
The total variation distance is related to the Kullback–Leibler divergence by Pinsker's inequality:

$$\delta(P, Q) \le \sqrt{\tfrac{1}{2} D_{\mathrm{KL}}(P \parallel Q)}.$$

One also has the following inequality, due to Bretagnolle and Huber[2] (see also [3]), which has the advantage of providing a non-vacuous bound even when $D_{\mathrm{KL}}(P \parallel Q) > 2$, in which case Pinsker's bound exceeds 1 and carries no information:

$$\delta(P, Q) \le \sqrt{1 - e^{-D_{\mathrm{KL}}(P \parallel Q)}}.$$
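For a concrete check of these bounds (a hedged numerical sketch; the Bernoulli parameters are arbitrary), one can compare the total variation distance between two Bernoulli distributions with Pinsker's bound and the Bretagnolle–Huber bound:

```python
import math

# Bernoulli(p) vs Bernoulli(q); parameters chosen so that D_KL > 2,
# making Pinsker's bound vacuous while Bretagnolle-Huber stays below 1.
p, q = 0.99, 0.01

tv = abs(p - q)                                                   # = 0.98
kl = p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))  # ~ 4.50

print(math.sqrt(kl / 2))             # Pinsker bound: ~1.50 (vacuous, exceeds 1)
print(math.sqrt(1 - math.exp(-kl)))  # Bretagnolle-Huber bound: ~0.99
print(tv)                            # actual distance: 0.98
```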
The total variation distance is half of the L1 distance between the probability functions: on discrete domains, this is the distance between the probability mass functions[4]

$$\delta(P, Q) = \tfrac{1}{2} \sum_{x} \left| P(x) - Q(x) \right|,$$

and when the distributions have standard probability density functions $p$ and $q$,[5]

$$\delta(P, Q) = \tfrac{1}{2} \int \left| p(x) - q(x) \right| \, \mathrm{d}x$$

(or the analogous distance between Radon–Nikodym derivatives with any common dominating measure).
This result can be shown by noticing that the supremum in the definition is achieved exactly at the set where one distribution dominates the other, namely $B = \{x : p(x) \ge q(x)\}$, on which $P(B) - Q(B) = \tfrac{1}{2} \int |p(x) - q(x)| \, \mathrm{d}x$.[6]
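This identity can be checked on a small discrete example (a sketch with hypothetical mass functions): half the L1 distance coincides with the probability gap on the set where one mass function dominates the other:

```python
# Hypothetical pmfs on a three-point space.
P = {"a": 0.5, "b": 0.3, "c": 0.2}
Q = {"a": 0.2, "b": 0.3, "c": 0.5}

half_l1 = 0.5 * sum(abs(P[x] - Q[x]) for x in P)

B = [x for x in P if P[x] >= Q[x]]               # set where P dominates Q
gap_on_B = sum(P[x] for x in B) - sum(Q[x] for x in B)

print(half_l1, gap_on_B)   # both equal 0.3
```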
The total variation distance is related to the Hellinger distance $H(P, Q)$ as follows:

$$H^2(P, Q) \le \delta(P, Q) \le \sqrt{2}\, H(P, Q),$$

where the Hellinger distance is taken with the convention $H^2(P, Q) = \tfrac{1}{2} \int \left( \sqrt{p(x)} - \sqrt{q(x)} \right)^2 \, \mathrm{d}x$.
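With that convention for the Hellinger distance, the sandwich can again be verified numerically (a sketch using the same hypothetical mass functions as above):

```python
import math

P = {"a": 0.5, "b": 0.3, "c": 0.2}
Q = {"a": 0.2, "b": 0.3, "c": 0.5}

tv = 0.5 * sum(abs(P[x] - Q[x]) for x in P)        # total variation distance
h2 = 1.0 - sum(math.sqrt(P[x] * Q[x]) for x in P)  # squared Hellinger distance

print(h2, tv, math.sqrt(2 * h2))   # ~0.07 <= 0.3 <= ~0.37
```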
The total variation distance (or half the $L^1$ norm) arises as the optimal transportation cost when the cost function is $c(x, y) = \mathbf{1}_{x \neq y}$, that is,

$$\tfrac{1}{2} \| P - Q \|_1 = \delta(P, Q) = \inf \left\{ \Pr(X \neq Y) : \operatorname{Law}(X) = P,\ \operatorname{Law}(Y) = Q \right\} = \inf_{\pi} \operatorname{E}_{\pi}\!\left[ \mathbf{1}_{x \neq y} \right],$$

where the expectation is taken with respect to the probability measure $\pi$ on the product space, and the infimum ranges over all joint probability measures $\pi$ (couplings) with marginals $P$ and $Q$.
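The coupling formulation can be illustrated with a maximal coupling of two finite distributions (a hedged sketch; the masses are hypothetical): keeping the overlap $\min(P(x), Q(x))$ on the diagonal leaves exactly the total variation distance as the probability of disagreement:

```python
# Maximal coupling: place mass min(P(x), Q(x)) on pairs (x, x); the remaining
# mass must be moved, so Pr(X != Y) = 1 - sum_x min(P(x), Q(x)) = delta(P, Q).
P = {"a": 0.5, "b": 0.3, "c": 0.2}
Q = {"a": 0.2, "b": 0.3, "c": 0.5}

overlap = sum(min(P[x], Q[x]) for x in P)
tv = 0.5 * sum(abs(P[x] - Q[x]) for x in P)

print(1 - overlap, tv)   # both 0.3: the optimal transport cost equals the TV distance
```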