Taylor expansions for the moments of functions of random variables

In probability theory, it is possible to approximate the moments of a function f of a random variable X using Taylor expansions, provided that f is sufficiently differentiable and that the moments of X are finite.

A simulation-based alternative to this approximation is the use of Monte Carlo simulation.

First moment

Given μ_X and σ_X^2, the mean and the variance of X, respectively,[1] a Taylor expansion of the expected value of f(X) can be found via

{\displaystyle {\begin{aligned}\operatorname {E} \left[f(X)\right]&=\operatorname {E} \left[f\left(\mu _{X}+\left(X-\mu _{X}\right)\right)\right]\\&\approx \operatorname {E} \left[f(\mu _{X})+f'(\mu _{X})\left(X-\mu _{X}\right)+{\frac {1}{2}}f''(\mu _{X})\left(X-\mu _{X}\right)^{2}\right]\\&=f(\mu _{X})+f'(\mu _{X})\operatorname {E} \left[X-\mu _{X}\right]+{\frac {1}{2}}f''(\mu _{X})\operatorname {E} \left[\left(X-\mu _{X}\right)^{2}\right]\end{aligned}}}

Since E[X − μ_X] = 0, the second term vanishes, and therefore

{\displaystyle \operatorname {E} \left[f(X)\right]\approx f(\mu _{X})+{\frac {f''(\mu _{X})}{2}}\sigma _{X}^{2}}
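As an illustrative check (this sketch, with f(x) = eˣ and a normal X, is an assumed example, not part of the article), the second-order approximation of E[f(X)] can be compared against a Monte Carlo estimate and, for this particular f, the exact lognormal mean:

```python
import math
import random

# Hypothetical example: second-order Taylor approximation of E[f(X)],
#   E[f(X)] ~ f(mu) + f''(mu)/2 * sigma^2,
# for f(x) = exp(x) and X ~ Normal(mu, sigma^2), compared with a Monte
# Carlo estimate and the exact lognormal mean exp(mu + sigma^2/2).

mu, sigma = 0.1, 0.2
approx = math.exp(mu) + 0.5 * math.exp(mu) * sigma ** 2  # f = f'' = exp
exact = math.exp(mu + sigma ** 2 / 2)                    # lognormal mean

rng = random.Random(42)
n = 200_000
mc_mean = sum(math.exp(rng.gauss(mu, sigma)) for _ in range(n)) / n
print(approx, exact, mc_mean)
```

For this mildly non-linear f and small σ, the Taylor value, the exact value, and the simulation estimate agree to about three decimal places.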

It is possible to generalize this to functions of more than one variable using multivariate Taylor expansions.

For example,[2]

{\displaystyle \operatorname {E} \left[{\frac {X}{Y}}\right]\approx {\frac {\operatorname {E} \left[X\right]}{\operatorname {E} \left[Y\right]}}-{\frac {\operatorname {cov} \left[X,Y\right]}{\operatorname {E} \left[Y\right]^{2}}}+{\frac {\operatorname {E} \left[X\right]}{\operatorname {E} \left[Y\right]^{3}}}\operatorname {var} \left[Y\right]}

Second moment

Similarly,[1]

{\displaystyle \operatorname {var} \left[f(X)\right]\approx \left(f'(\operatorname {E} \left[X\right])\right)^{2}\operatorname {var} \left[X\right]=\left(f'(\mu _{X})\right)^{2}\sigma _{X}^{2}}

The above is obtained using a second order approximation, following the method used in estimating the first moment.

It will be a poor approximation in cases where f(X) is highly non-linear.

This is a special case of the delta method.
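A numerical sketch of this first-order (delta-method) variance approximation follows; the function f(x) = ln x and the distribution of X are assumptions chosen for illustration, not taken from the article:

```python
import math
import random

# Hypothetical example: the delta method gives
#   var[f(X)] ~ (f'(mu))^2 * sigma^2.
# Checked by Monte Carlo for f(x) = log(x), X ~ Normal(5, 0.5^2),
# where f is nearly linear over the bulk of the distribution.

mu, sigma = 5.0, 0.5
rng = random.Random(0)
ys = [math.log(rng.gauss(mu, sigma)) for _ in range(200_000)]
mean_y = sum(ys) / len(ys)
mc_var = sum((y - mean_y) ** 2 for y in ys) / len(ys)

delta_var = (1.0 / mu) ** 2 * sigma ** 2  # f'(x) = 1/x
print(delta_var, mc_var)
```

Because ln is close to linear on the interval where X concentrates, the simulated variance lands within about a percent of the delta-method value; a strongly curved f would widen this gap, as noted above.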

The variance is then computed using the formula

{\displaystyle \operatorname {var} \left[Y\right]=\operatorname {E} \left[Y^{2}\right]-\mu _{Y}^{2}}

where Y = f(X).
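For illustration (f(x) = eˣ and a normal X are an assumed example, not from the article), this identity can be applied to Monte Carlo samples and compared with the exact lognormal variance and with the second-order approximation for normal X stated below:

```python
import math
import random

# Hypothetical example: estimate var[Y] for Y = f(X), f(x) = exp(x),
# X ~ Normal(0, 0.2^2), via the identity var[Y] = E[Y^2] - mu_Y^2,
# and compare with the exact lognormal variance and the second-order
# approximation for normal X:
#   var[f(X)] ~ (f'(E[X]))^2 var[X] + (f''(E[X]))^2 / 2 * var[X]^2.

mu, sigma = 0.0, 0.2
s2 = sigma ** 2
rng = random.Random(1)
ys = [math.exp(rng.gauss(mu, sigma)) for _ in range(200_000)]
e_y = sum(ys) / len(ys)
e_y2 = sum(y * y for y in ys) / len(ys)
var_identity = e_y2 - e_y ** 2                      # var via the identity

exact = (math.exp(s2) - 1) * math.exp(2 * mu + s2)  # lognormal variance
second_order = math.exp(mu) ** 2 * s2 + math.exp(mu) ** 2 / 2 * s2 ** 2
print(var_identity, exact, second_order)
```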

An example is[2]

{\displaystyle \operatorname {var} \left[{\frac {X}{Y}}\right]\approx {\frac {\operatorname {var} \left[X\right]}{\operatorname {E} \left[Y\right]^{2}}}-{\frac {2\operatorname {E} \left[X\right]}{\operatorname {E} \left[Y\right]^{3}}}\operatorname {cov} \left[X,Y\right]+{\frac {\operatorname {E} \left[X\right]^{2}}{\operatorname {E} \left[Y\right]^{4}}}\operatorname {var} \left[Y\right]}

The second order approximation, when X follows a normal distribution, is:[3]

{\displaystyle \operatorname {var} \left[f(X)\right]\approx \left(f'(\operatorname {E} \left[X\right])\right)^{2}\operatorname {var} \left[X\right]+{\frac {\left(f''(\operatorname {E} \left[X\right])\right)^{2}}{2}}\left(\operatorname {var} \left[X\right]\right)^{2}}

First product moment

To find a second-order approximation for the covariance of functions of two random variables (with the same function applied to both), one can proceed as follows.

{\displaystyle \operatorname {cov} \left[f(X),f(Y)\right]=\operatorname {E} \left[f(X)f(Y)\right]-\operatorname {E} \left[f(X)\right]\operatorname {E} \left[f(Y)\right]}

Since a second-order expansion for E[f(X)] has already been derived above, it only remains to find E[f(X)f(Y)]. Treating f(X)f(Y) as a two-variable function, the second-order Taylor expansion is as follows:

{\displaystyle {\begin{aligned}f(X)f(Y)&\approx f(\mu _{X})f(\mu _{Y})+(X-\mu _{X})f'(\mu _{X})f(\mu _{Y})+(Y-\mu _{Y})f(\mu _{X})f'(\mu _{Y})\\&\quad +{\frac {1}{2}}\left[(X-\mu _{X})^{2}f''(\mu _{X})f(\mu _{Y})+2(X-\mu _{X})(Y-\mu _{Y})f'(\mu _{X})f'(\mu _{Y})+(Y-\mu _{Y})^{2}f(\mu _{X})f''(\mu _{Y})\right]\end{aligned}}}

Taking expectation of the above and simplifying, making use of the identities

{\displaystyle \operatorname {E} (X^{2})=\operatorname {var} (X)+\left[\operatorname {E} (X)\right]^{2}}

and

{\displaystyle \operatorname {E} (XY)=\operatorname {cov} (X,Y)+\left[\operatorname {E} (X)\right]\left[\operatorname {E} (Y)\right]}

leads to

{\displaystyle \operatorname {E} \left[f(X)f(Y)\right]\approx f(\mu _{X})f(\mu _{Y})+f'(\mu _{X})f'(\mu _{Y})\operatorname {cov} (X,Y)+{\frac {1}{2}}f''(\mu _{X})f(\mu _{Y})\operatorname {var} (X)+{\frac {1}{2}}f(\mu _{X})f''(\mu _{Y})\operatorname {var} (Y)}
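The expansion of E[f(X)f(Y)] just above can be sketched numerically (the choice f = exp and the bivariate normal (X, Y) are an assumed example, not part of the article), since for exponentials of jointly normal variables the exact product moment is available in closed form:

```python
import math

# Hypothetical example: check the second-order expansion of E[f(X)f(Y)]
# for f = exp and (X, Y) bivariate normal, using the exact identity
#   E[exp(X)exp(Y)] = exp(mu_X + mu_Y + (var_X + var_Y + 2*cov_XY)/2).

mu_x = mu_y = 0.0
var_x = var_y = 0.01        # sigma = 0.1 for each variable
cov_xy = 0.005              # correlation 0.5

# For f(t) = exp(t), f = f' = f'', so every coefficient is exp(mu).
fx, fy = math.exp(mu_x), math.exp(mu_y)
approx = (fx * fy + fx * fy * cov_xy
          + 0.5 * fx * fy * var_x + 0.5 * fx * fy * var_y)
exact = math.exp(mu_x + mu_y + (var_x + var_y + 2 * cov_xy) / 2)
print(approx, exact)
```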

Hence,

{\displaystyle \operatorname {cov} \left[f(X),f(Y)\right]\approx f'(\mu _{X})f'(\mu _{Y})\operatorname {cov} (X,Y)-{\frac {1}{4}}f''(\mu _{X})f''(\mu _{Y})\operatorname {var} (X)\operatorname {var} (Y)}

Random vectors

If X is a random vector, the approximations for the mean and variance of f(X) are

{\displaystyle \operatorname {E} \left[f(\mathbf {X} )\right]\approx f(\mu _{\mathbf {X} })+{\frac {1}{2}}\operatorname {trace} \left(H_{f}(\mu _{\mathbf {X} })\Sigma _{\mathbf {X} }\right)}

{\displaystyle \operatorname {var} \left[f(\mathbf {X} )\right]\approx \nabla f(\mu _{\mathbf {X} })^{\mathsf {T}}\Sigma _{\mathbf {X} }\nabla f(\mu _{\mathbf {X} })+{\frac {1}{2}}\operatorname {trace} \left(H_{f}(\mu _{\mathbf {X} })\Sigma _{\mathbf {X} }H_{f}(\mu _{\mathbf {X} })\Sigma _{\mathbf {X} }\right)}

where ∇f and H_f denote the gradient and the Hessian matrix respectively, and Σ_X is the covariance matrix of X.
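A sketch of the vector case (the quadratic f(x1, x2) = x1² + x1·x2 and the 2×2 covariance matrix are assumed here for illustration; for a quadratic f of a Gaussian vector both approximations are in fact exact):

```python
import math
import random

# Hypothetical example: for a random vector X the approximations are
#   E[f(X)]   ~ f(mu) + 1/2 * trace(H Sigma)
#   var[f(X)] ~ grad f(mu)^T Sigma grad f(mu) + 1/2 * trace(H Sigma H Sigma)
# For a quadratic f of Gaussian X both are exact, so a Monte Carlo
# estimate should agree closely.  Here f(x1, x2) = x1^2 + x1*x2.

def f(x1, x2):
    return x1 ** 2 + x1 * x2

mu = (1.0, 2.0)
sigma = ((0.04, 0.01), (0.01, 0.09))       # covariance matrix of X
grad = (2 * mu[0] + mu[1], mu[0])          # gradient of f at mu
hess = ((2.0, 1.0), (1.0, 0.0))            # (constant) Hessian of f

hs = [[sum(hess[i][k] * sigma[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]                   # H @ Sigma
mean_approx = f(*mu) + 0.5 * (hs[0][0] + hs[1][1])
var_approx = (sum(grad[i] * sigma[i][j] * grad[j]
                  for i in range(2) for j in range(2))
              + 0.5 * sum(hs[i][k] * hs[k][i]
                          for i in range(2) for k in range(2)))

# Monte Carlo check, sampling X via the Cholesky factor of Sigma
l11, l21 = 0.2, 0.05
l22 = math.sqrt(sigma[1][1] - l21 ** 2)
rng = random.Random(7)
vals = []
for _ in range(200_000):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    vals.append(f(mu[0] + l11 * z1, mu[1] + l21 * z1 + l22 * z2))
mc_mean = sum(vals) / len(vals)
mc_var = sum((v - mc_mean) ** 2 for v in vals) / len(vals)
print(mean_approx, mc_mean, var_approx, mc_var)
```

The 1/2·trace(HΣ) term is the multivariate analogue of the f″(μ)σ²/2 correction in the scalar case, with the Hessian playing the role of the second derivative.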