Distribution of the product of two random variables

Many of these distributions are described in Melvin D. Springer's 1979 book The Algebra of Random Variables.

Suppose X and Y are two independent, continuous random variables, described by probability density functions f_X and f_Y, and let Z = XY. We first compute the cumulative distribution function of Z, starting with its definition:

    F_Z(z) = P(Z \le z) = \int_0^\infty f_X(x) \int_{-\infty}^{z/x} f_Y(y)\, dy\, dx + \int_{-\infty}^0 f_X(x) \int_{z/x}^{\infty} f_Y(y)\, dy\, dx .

We find the desired probability density function by taking the derivative of both sides with respect to z. Since z appears only in the integration limits, the derivative is easily performed using the fundamental theorem of calculus and the chain rule:

    f_Z(z) = \int_0^\infty f_X(x) f_Y(z/x) \frac{1}{x}\, dx - \int_{-\infty}^0 f_X(x) f_Y(z/x) \frac{1}{x}\, dx = \int_{-\infty}^\infty f_X(x) f_Y(z/x) \frac{1}{|x|}\, dx .

(Note the negative sign that is needed when the variable occurs in the lower limit of the integration.)

A faster, more compact proof[3] begins with the same step of writing the cumulative distribution of Z = XY, but using the Heaviside step function:

    F_Z(z) = \int_{-\infty}^\infty \int_{-\infty}^\infty f_X(x) f_Y(y)\, u(z - xy)\, dy\, dx ,

where u(\cdot) is the Heaviside step function and serves to limit the region of integration to values of x and y satisfying xy \le z.

We find the desired probability density function by taking the derivative of both sides with respect to z:

    f_Z(z) = \int_{-\infty}^\infty \int_{-\infty}^\infty f_X(x) f_Y(y)\, \delta(z - xy)\, dy\, dx = \int_{-\infty}^\infty f_X(x) f_Y(z/x) \frac{1}{|x|}\, dx ,

where we utilize the translation and scaling properties of the Dirac delta function \delta, namely \delta(z - xy) = \frac{1}{|x|}\, \delta\!\left(y - \frac{z}{x}\right).
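The resulting formula f_Z(z) = \int f_X(x) f_Y(z/x) \frac{1}{|x|} dx can be checked numerically. The sketch below is illustrative only: it assumes X, Y ~ Exp(1) (an assumption of this example, not the source), for which the integral has the known closed form 2 K_0(2\sqrt{z}).

```python
import numpy as np
from scipy import integrate, special

def f_exp(x):
    """PDF of the Exp(1) distribution."""
    return np.exp(-x) * (x > 0)

def product_pdf(z):
    """Density of Z = X*Y via the product-density integral."""
    val, _ = integrate.quad(lambda x: f_exp(x) * f_exp(z / x) / x, 0, np.inf)
    return val

# For Exp(1) factors the integral equals 2*K0(2*sqrt(z)),
# a known closed form (modified Bessel function of the second kind).
for z in (0.25, 1.0, 2.5):
    closed_form = 2.0 * special.k0(2.0 * np.sqrt(z))
    assert abs(product_pdf(z) - closed_form) < 1e-6
```

Any other pair of densities can be substituted for `f_exp`; the 1/|x| factor is what the delta-function scaling property contributes.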

If Z = XY, then \ln Z = \ln X + \ln Y, so the density of \ln Z is the convolution of the densities of \ln X and \ln Y; transforming back then recovers the distribution of Z. This type of result is universally true, since for bivariate independent variables the joint density factors as f_{X,Y}(x,y) = f_X(x) f_Y(y), and taking logarithms converts the product into a sum of independent terms. However, this approach is only useful where the logarithms of the components of the product are in some standard families of distributions.
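A minimal sketch of this logarithm route, assuming lognormal factors (an assumption of this example, chosen because ln X and ln Y are then normal, a standard family closed under convolution):

```python
import numpy as np

# If X ~ LogNormal(mu1, s1) and Y ~ LogNormal(mu2, s2) are independent,
# then ln(XY) = ln X + ln Y is N(mu1 + mu2, s1^2 + s2^2),
# so the product XY is again lognormal.
rng = np.random.default_rng(0)
mu1, s1, mu2, s2 = 0.5, 0.8, -0.2, 0.6

x = rng.lognormal(mu1, s1, size=200_000)
y = rng.lognormal(mu2, s2, size=200_000)
logs = np.log(x * y)

assert abs(logs.mean() - (mu1 + mu2)) < 0.01   # mean adds
assert abs(logs.var() - (s1**2 + s2**2)) < 0.02  # variance adds
```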

yielding the distribution

    f_Z(z) = \frac{1}{\pi \sigma_X \sigma_Y}\, K_0\!\left(\frac{|z|}{\sigma_X \sigma_Y}\right),

where K_0 is the modified Bessel function of the second kind. For the product of multiple (> 2) independent samples the characteristic function route is favorable.
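A Monte Carlo sketch of the two-sample central-normal case, assuming unit variances (so the density of the product of two independent N(0,1) samples reduces to K_0(|z|)/\pi):

```python
import numpy as np
from scipy import integrate, special

rng = np.random.default_rng(3)
z = rng.standard_normal(500_000) * rng.standard_normal(500_000)

def cdf(t):
    """P(Z <= t) for t >= 0, using the symmetry of K0(|z|)/pi about 0."""
    half, _ = integrate.quad(lambda u: special.k0(u) / np.pi, 0, t)
    return 0.5 + half

for t in (0.25, 1.0):
    assert abs((z <= t).mean() - cdf(t)) < 0.005
```

The density has a logarithmic singularity at z = 0 but integrates to 1, since \int_0^\infty K_0(u)\,du = \pi/2.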

to extract the PDF of the product of the n samples:

    f_{Z_n}(z) = \frac{(-\ln z)^{n-1}}{(n-1)!}, \qquad 0 < z \le 1 .

The following, more conventional, derivation from Stackexchange[7] is consistent with this result.
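The n-sample formula can be checked by simulation. The sketch below is an illustration rather than the cited derivation; it assumes Uniform(0,1) factors and uses the closed-form CDF F_n(t) = t \sum_{k=0}^{n-1} (-\ln t)^k / k!, obtained by integrating the pdf (the sum telescopes under differentiation).

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Product of n independent Uniform(0,1) samples.
z = rng.random((500_000, n)).prod(axis=1)

def cdf(t, n):
    """Closed-form CDF of the product of n uniforms, 0 < t <= 1."""
    return t * sum((-math.log(t)) ** k / math.factorial(k) for k in range(n))

for t in (0.05, 0.2, 0.5):
    assert abs((z <= t).mean() - cdf(t, n)) < 0.005
```

Differentiating `cdf` term by term leaves only (-\ln t)^{n-1}/(n-1)!, matching the pdf above.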

Multiplying by a third independent sample gives distribution function

    F_{Z_3}(z) = \frac{z\left((\ln z)^2 - 2\ln z + 2\right)}{2}, \qquad 0 < z \le 1 .

Taking the derivative yields f_{Z_3}(z) = \frac{(\ln z)^2}{2}.

The area of the selection within the unit square and below the curve xy = z represents the CDF of z. This area splits into two parts. The first part, for 0 < x \le z, spans the full height of the square and has area z. The second part lies below the curve y = z/x for z < x \le 1, has y-height z/x, and incremental area (z/x)\,dx; integrating gives -z \ln z. Hence F_Z(z) = z - z\ln z.
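A short numerical check of this geometric argument, together with the moments of the resulting density f(z) = -\ln z (using the identity Var(XY) = E[X^2]E[Y^2] - (E[X]E[Y])^2 for independent factors):

```python
import numpy as np
from scipy import integrate

rng = np.random.default_rng(2)
x, y = rng.random(400_000), rng.random(400_000)

# CDF from the two areas: z (the strip 0 < x <= z) plus
# the integral of z/x from z to 1, which is -z ln z.
for z in (0.1, 0.3, 0.7):
    area = z - z * np.log(z)
    assert abs((x * y <= z).mean() - area) < 0.005

# Moments of the pdf f(z) = -ln z on (0, 1]:
m1, _ = integrate.quad(lambda z: z * -np.log(z), 0, 1)      # E[Z]   = 1/4
m2, _ = integrate.quad(lambda z: z**2 * -np.log(z), 0, 1)   # E[Z^2] = 1/9
assert abs((m2 - m1**2) - 7 / 144) < 1e-7                   # Var(Z) = 7/144
```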

The variance of this distribution could be determined, in principle, by a definite integral from Gradshteyn and Ryzhik.[8]

The product of correlated normal samples was recently addressed by Nadarajah and Pogány.

Let X and Y be zero-mean, unit-variance, normally distributed variates with correlation coefficient \rho.

The approximate distribution of a correlation coefficient can be found via the Fisher transformation.

The distribution of the product of correlated non-central normal samples was derived by Cui et al.[11] and takes the form of an infinite series of modified Bessel functions of the first kind.

If X and Y are central correlated variables, this is the simplest bivariate case of the multivariate normal moment problem described by Kan.[12]

Suppose z_1 and z_2 are independent zero-mean complex normal samples with circular symmetry. The squared modulus of each is clearly chi-squared with two degrees of freedom and has PDF f(x) = \tfrac{1}{2} e^{-x/2}. Wells et al.[13] show that the density function of the modulus of the product, |z_1 z_2|, can be given in closed form.
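A small check of the chi-squared claim. This is a sketch assuming the normalization in which the real and imaginary parts each have variance 1/2 (an assumption of this example), so the squared modulus is a scaled chi-squared with two degrees of freedom, i.e. exponential with unit mean:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400_000
# Circularly symmetric complex normal: independent N(0, 1/2) parts.
z1 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

m = np.abs(z1) ** 2
# Exp(1): mean 1, and P(m <= 1) = 1 - e^{-1}.
assert abs(m.mean() - 1.0) < 0.01
assert abs((m <= 1.0).mean() - (1 - np.exp(-1))) < 0.005
```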

Suppose again that z_1 and z_2 are independent zero-mean complex normal samples with circular symmetry. For the product z = z_1 z_2, Heliot et al.[14] show that the joint density function of the real and imaginary parts of z can be obtained in closed form.

The product of non-central independent complex Gaussians is described by O’Donoughue and Moura[15] and forms a double infinite series of modified Bessel functions of the first and second kinds.

The density of the product then follows.[16] Nagar et al.[17] define a correlated bivariate beta distribution; the pdf of Z = XY is then given in terms of the Gauss hypergeometric function.

Here {}_2F_1 is the Gauss hypergeometric function, defined by the Euler integral

    {}_2F_1(a,b;c;z) = \frac{\Gamma(c)}{\Gamma(b)\,\Gamma(c-b)} \int_0^1 t^{b-1} (1-t)^{c-b-1} (1-zt)^{-a}\, dt, \qquad c > b > 0 .

Note that multivariate distributions are not generally unique, apart from the Gaussian case, and there may be alternatives.
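The Euler integral can be checked directly against SciPy's implementation of {}_2F_1; the parameter values below are arbitrary choices satisfying c > b > 0:

```python
from math import gamma
from scipy import integrate, special

def hyp2f1_euler(a, b, c, z):
    """Evaluate 2F1(a,b;c;z) via the Euler integral (valid for c > b > 0)."""
    coef = gamma(c) / (gamma(b) * gamma(c - b))
    val, _ = integrate.quad(
        lambda t: t ** (b - 1) * (1 - t) ** (c - b - 1) * (1 - z * t) ** (-a),
        0, 1)
    return coef * val

a, b, c, z = 0.5, 1.5, 2.5, 0.3
assert abs(hyp2f1_euler(a, b, c, z) - special.hyp2f1(a, b, c, z)) < 1e-8
```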

The product of n Gamma and m Pareto independent samples was derived by Nadarajah.

Diagram to illustrate the product distribution of two variables.
The geometry of the product distribution of two random variables in the unit square.