Probabilistic metric space

In mathematics, probabilistic metric spaces are a generalization of metric spaces where the distance no longer takes values in the non-negative real numbers R ≥ 0, but in distribution functions.[1]

Let D+ be the set of all probability distribution functions F such that F(0) = 0 (F is a nondecreasing, left-continuous mapping from R into [0, 1] such that max(F) = 1).

Then, given a non-empty set S and a function F: S × S → D+, where we denote F(p, q) by Fp,q for every (p, q) ∈ S × S, the ordered pair (S, F) is said to be a probabilistic metric space if:

- For all u and v in S, u = v if and only if Fu,v(x) = 1 for all x > 0.
- For all u and v in S, Fu,v = Fv,u.
- For all u, v and w in S, Fu,v(x) = 1 and Fv,w(y) = 1 implies Fu,w(x + y) = 1 for all x, y > 0.
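As a minimal sketch of these conditions (assuming the three conditions as listed above): any ordinary metric space induces a probabilistic metric space by taking Fp,q to be the unit step distribution placed at d(p, q). The Python below checks the conditions at sample points for the metric |p − q| on the real line; the function names are illustrative only.

```python
def step_distribution(d_pq):
    """F_{p,q} in D+: a left-continuous step, 0 up to and at d(p,q), 1 after."""
    return lambda x: 1.0 if x > d_pq else 0.0

def F(p, q):
    return step_distribution(abs(p - q))  # underlying metric: |p - q| on R

# Condition checks at sample points:
assert all(F(2.0, 2.0)(x) == 1.0 for x in (0.1, 1.0, 10.0))  # u = v gives F = 1 for x > 0
assert F(1.0, 3.0)(1.5) == 0.0                               # u != v fails it for small x
assert F(1.0, 3.0)(1.5) == F(3.0, 1.0)(1.5)                  # symmetry

# Triangle condition: F_{u,v}(x) = F_{v,w}(y) = 1 implies F_{u,w}(x + y) = 1
u, v, w, x, y = 0.0, 2.0, 3.0, 2.5, 1.5
if F(u, v)(x) == 1.0 and F(v, w)(y) == 1.0:
    assert F(u, w)(x + y) == 1.0
```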

Probabilistic metric spaces were initially introduced by Menger, who termed them statistical metrics.[3] Shortly after, Wald criticized the generalized triangle inequality and proposed an alternative one.[4]

However, both authors came to the conclusion that in some respects the Wald inequality was too stringent a requirement to impose on all probabilistic metric spaces; this view is partly reflected in the work of Schweizer and Sklar.[5]

Later, probabilistic metric spaces were found to be very suitable for use with fuzzy sets[6] and were subsequently called fuzzy metric spaces.[7]

A probability metric D between two random variables X and Y may be defined, for example, as

{\displaystyle D(X,Y)=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }|x-y|F(x,y)\,dx\,dy}

where F(x, y) denotes the joint probability density function of the random variables X and Y.

If X and Y are independent of each other, then the equation above becomes

{\displaystyle D(X,Y)=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }|x-y|f(x)g(y)\,dx\,dy}

where f(x) and g(y) are the probability density functions of X and Y respectively.
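As a minimal numerical sketch (assuming SciPy; the helper name prob_metric_indep is invented for illustration), the independent-case integral can be evaluated directly with adaptive quadrature. For two independent uniform U(0, 1) variables the integral equals 1/3:

```python
from scipy import integrate

def prob_metric_indep(f, g, ax, bx, ay, by):
    """D(X, Y) for independent X, Y with densities f, g supported
    on [ax, bx] and [ay, by]."""
    # dblquad integrates func(y, x) over y first, then x
    val, _ = integrate.dblquad(lambda y, x: abs(x - y) * f(x) * g(y),
                               ax, bx, lambda x: ay, lambda x: by)
    return val

uniform = lambda t: 1.0  # density of U(0, 1) on [0, 1]
print(prob_metric_indep(uniform, uniform, 0, 1, 0, 1))  # ~ 1/3
# Applying the same formula to two copies of U(0, 1) still gives 1/3 > 0,
# which already hints that the first metric axiom fails (see below).
```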

One may easily show that such probability metrics do not satisfy the first metric axiom, or satisfy it if, and only if, both arguments X and Y are certain events described by Dirac delta probability density functions. In that case, since

{\displaystyle D_{\delta \delta }(X,Y)=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }|x-y|\delta (x-\mu _{x})\delta (y-\mu _{y})\,dx\,dy=|\mu _{x}-\mu _{y}|,}

the probability metric simply transforms into the metric between the expected values μx and μy of the variables X and Y.
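A quick Monte Carlo check of this degenerate case (a sketch assuming NumPy; the shrinking-σ construction is an illustrative stand-in for the Dirac deltas): approximating the delta densities by ever-narrower normals makes D(X, Y) = E|X − Y| approach |μx − μy|.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_x, mu_y = 1.0, 4.0   # illustrative means, so |mu_x - mu_y| = 3

# Approximate Dirac delta densities by normals with shrinking sigma and
# estimate D(X, Y) = E|X - Y| (independent case) by Monte Carlo.
for sigma in (1.0, 0.1, 0.001):
    x = rng.normal(mu_x, sigma, 200_000)
    y = rng.normal(mu_y, sigma, 200_000)
    print(sigma, np.abs(x - y).mean())   # -> 3.0 as sigma -> 0
```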

For all other random variables X, Y the probability metric does not satisfy the identity of indiscernibles condition required of a metric, that is:

{\displaystyle D(X,X)>0.}

For example, if both probability distribution functions of random variables X and Y are normal distributions (N) having the same standard deviation σ, integrating D(X, Y) yields:

{\displaystyle D_{NN}(X,Y)=\mu _{xy}+{\frac {2\sigma }{\sqrt {\pi }}}\exp \left(-{\frac {\mu _{xy}^{2}}{4\sigma ^{2}}}\right)-\mu _{xy}\operatorname {erfc} \left({\frac {\mu _{xy}}{2\sigma }}\right)}

where μxy = |μx − μy| is the distance between the means of X and Y, and erfc(x) is the complementary error function.

In this case,

{\displaystyle \lim _{\mu _{xy}\to 0}D_{NN}(X,Y)=D_{NN}(X,X)={\frac {2\sigma }{\sqrt {\pi }}}.}
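The closed form and its limit can be cross-checked numerically; the following is a sketch assuming SciPy and the standard-library math.erfc (the names d_nn_closed_form and d_numeric are invented for the example):

```python
import math
from scipy import integrate, stats

def d_nn_closed_form(mu_xy, sigma):
    """The D_NN formula above for two normals with common standard deviation."""
    return (mu_xy
            + (2 * sigma / math.sqrt(math.pi)) * math.exp(-mu_xy**2 / (4 * sigma**2))
            - mu_xy * math.erfc(mu_xy / (2 * sigma)))

def d_numeric(mu_xy, sigma):
    """D(X, Y) = E|X - Y| for independent N(0, sigma^2) and N(mu_xy, sigma^2),
    by direct double integration over +/- 8 sigma around each mean."""
    f = stats.norm(0.0, sigma).pdf
    g = stats.norm(mu_xy, sigma).pdf
    val, _ = integrate.dblquad(lambda y, x: abs(x - y) * f(x) * g(y),
                               -8 * sigma, 8 * sigma,
                               lambda x: mu_xy - 8 * sigma,
                               lambda x: mu_xy + 8 * sigma)
    return val

sigma = 1.0
for mu_xy in (0.0, 0.5, 2.0):
    print(mu_xy, d_numeric(mu_xy, sigma), d_nn_closed_form(mu_xy, sigma))
print(2 * sigma / math.sqrt(math.pi))  # the mu_xy -> 0 limit, ~ 1.1284
```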

The probability metric of random variables may be extended into a metric D(X, Y) of random vectors X, Y by substituting |x − y| with any metric operator d(x, y):

{\displaystyle D(\mathbf {X} ,\mathbf {Y} )=\int _{-\infty }^{\infty }\cdots \int _{-\infty }^{\infty }d(\mathbf {x} ,\mathbf {y} )F(\mathbf {x} ,\mathbf {y} )\,d\mathbf {x} \,d\mathbf {y} }

where F(X, Y) is the joint probability density function of random vectors X and Y.

For example, substituting d(x, y) with the Euclidean metric and providing that the vectors X and Y are mutually independent yields

{\displaystyle D(\mathbf {X} ,\mathbf {Y} )=\int _{-\infty }^{\infty }\cdots \int _{-\infty }^{\infty }{\sqrt {\sum _{i}(x_{i}-y_{i})^{2}}}\,F(\mathbf {x} )G(\mathbf {y} )\,d\mathbf {x} \,d\mathbf {y} }

where F(x) and G(y) are the probability density functions of X and Y respectively.
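For random vectors a closed form is rarely available, so a Monte Carlo estimate is the natural sketch (assuming NumPy; the 3-dimensional Gaussian vectors are an illustrative choice, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
n, dim = 500_000, 3

# Independent Gaussian random vectors X and Y; D(X, Y) with the Euclidean
# metric is E||X - Y||, estimated here by averaging over samples.
X = rng.normal(0.0, 1.0, size=(n, dim))
Y = rng.normal(2.0, 1.0, size=(n, dim))
print(np.linalg.norm(X - Y, axis=1).mean())  # Monte Carlo estimate of D(X, Y)
```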

[Figure] Probability metric between two random variables X and Y, both having normal distributions and the same standard deviation σ, for increasing values of σ (beginning with the bottom curve); μxy denotes the distance between the means of X and Y.