Monte Carlo integration is a particular Monte Carlo method that numerically computes a definite integral.
In numerical integration, methods such as the trapezoidal rule use a deterministic approach.
Monte Carlo integration, on the other hand, employs a non-deterministic approach: each realization provides a different outcome.
In Monte Carlo, the final outcome is an approximation of the correct value with associated error bars, and the correct value is likely to lie within those error bars.
Given N uniform samples x_1, …, x_N drawn from a domain Ω of volume V, the integral I = ∫_Ω f(x) dx is estimated by Q_N = (V/N) Σ_i f(x_i), with variance Var(Q_N) = V² Var(f)/N. Since the sequence of sample variances {σ_1², σ_2², σ_3², …} is bounded, being identically equal to Var(f), this variance decreases asymptotically to zero as 1/N, as long as Var(f) is assumed finite. The estimated error of Q_N is thus δQ_N ≈ √Var(Q_N) = V √(Var(f)/N), which decreases as 1/√N.
This result does not depend on the number of dimensions of the integral, which is the promised advantage of Monte Carlo integration against most deterministic methods that depend exponentially on the dimension.
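To make the estimator concrete, here is a minimal sketch in Python with NumPy; the function name mc_integrate, the seed, and the test integrand are illustrative choices rather than any library's API:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def mc_integrate(f, lo, hi, n=100_000):
    """Plain Monte Carlo estimate of the integral of f over the box [lo, hi].

    Returns (estimate, error), where the error is the one-sigma
    estimate V * sqrt(Var(f) / N) described above.
    """
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    volume = np.prod(hi - lo)                     # V = volume of the domain
    x = rng.uniform(lo, hi, size=(n, lo.size))    # N uniform samples in the box
    fx = f(x)
    estimate = volume * fx.mean()                 # Q_N = V <f>
    error = volume * fx.std(ddof=1) / np.sqrt(n)  # dQ_N = V sqrt(Var(f)/N)
    return estimate, error

# Example: integrate f(x, y) = x^2 + y^2 over [0, 1]^2 (exact value 2/3).
est, err = mc_integrate(lambda x: x[:, 0]**2 + x[:, 1]**2, [0.0, 0.0], [1.0, 1.0])
print(f"{est:.4f} +/- {err:.4f}")
```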
With an appropriate sample distribution it is possible to exploit the fact that almost all higher-dimensional integrands are very localized and only a small subspace contributes notably to the integral.
[6] A large part of the Monte Carlo literature is dedicated to developing strategies to improve the error estimates.
In particular, stratified sampling (dividing the region into sub-domains) and importance sampling (sampling from non-uniform distributions) are two examples of such techniques.
A paradigmatic example of a Monte Carlo integration is the estimation of π.
Consider the function H(x, y) that equals 1 if x² + y² ≤ 1 and 0 otherwise, defined on the square Ω = [−1, 1] × [−1, 1], whose volume is V = 4; the integral of H over Ω is exactly π. Thus, a crude way of calculating the value of π with Monte Carlo integration is to pick N random points on Ω and compute

Q_N = (4/N) Σ_i H(x_i, y_i).
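A short sketch of this crude estimator in Python with NumPy (the variable names and the sample count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

N = 1_000_000
# N uniform points on the square Omega = [-1, 1] x [-1, 1], with V = 4
x, y = rng.uniform(-1.0, 1.0, size=(2, N))
inside = x**2 + y**2 <= 1.0          # H(x, y): 1 inside the unit disc, 0 outside
q_n = 4.0 * inside.mean()            # Q_N = V * <H>
err = 4.0 * inside.std() / np.sqrt(N)
print(f"pi ~ {q_n:.4f} +/- {err:.4f}")
```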
Recursive stratified sampling is a generalization of one-dimensional adaptive quadratures to multi-dimensional integrals.
On each recursion step the integral and the error are estimated using a plain Monte Carlo algorithm.
The ordinary 'dividing by two' strategy does not work in multiple dimensions, as the number of sub-volumes grows far too quickly to keep track of.
The popular MISER routine implements a similar algorithm.
The MISER algorithm is based on recursive stratified sampling.
[7] The idea of stratified sampling begins with the observation that for two disjoint regions a and b with Monte Carlo estimates of the integral E_a(f) and E_b(f) and variances σ_a²(f) and σ_b²(f), the variance Var(f) of the combined estimate E(f) = (1/2)(E_a(f) + E_b(f)) is given by

Var(f) = σ_a²(f)/(4N_a) + σ_b²(f)/(4N_b),

where N_a and N_b are the numbers of points sampled in a and b. It can be shown that this variance is minimized by distributing the points such that

N_a/(N_a + N_b) = σ_a/(σ_a + σ_b).
Hence the smallest error estimate is obtained by allocating sample points in proportion to the standard deviation of the function in each sub-region.
The MISER algorithm proceeds by bisecting the integration region along one coordinate axis to give two sub-regions at each step.
The direction is chosen by examining all d possible bisections and selecting the one which will minimize the combined variance of the two sub-regions.
The variance in the sub-regions is estimated by sampling with a fraction of the total number of points available to the current step.
The remaining sample points are allocated to the sub-regions using the formula for N_a and N_b above.
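The following is a simplified sketch of such a recursion in Python with NumPy. It is not GSL's MISER implementation: the function name miser, the exploration fraction, and the fallback thresholds are assumptions made for illustration. The exploration samples are used only to choose the bisection and to allocate the remaining budget by the N_a/(N_a + N_b) rule above:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def miser(f, lo, hi, n, min_points=64, explore_frac=0.1):
    """Simplified MISER-style recursive stratified sampling over the box [lo, hi]."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    volume = np.prod(hi - lo)
    if n < min_points:  # budget too small: fall back to plain Monte Carlo
        x = rng.uniform(lo, hi, size=(n, lo.size))
        return volume * f(x).mean()

    # Exploration: use a fraction of the budget to estimate the standard
    # deviation of f in both halves of every candidate bisection.
    n_explore = min(max(int(explore_frac * n), 16 * lo.size), n // 2)
    x = rng.uniform(lo, hi, size=(n_explore, lo.size))
    fx = f(x)
    mid = 0.5 * (lo + hi)

    best = None
    for axis in range(lo.size):
        in_a = x[:, axis] < mid[axis]
        if in_a.sum() < 2 or (~in_a).sum() < 2:
            continue
        s_a, s_b = fx[in_a].std(ddof=1), fx[~in_a].std(ddof=1)
        # Under the optimal allocation the combined variance is proportional
        # to (s_a + s_b)^2, so minimizing s_a + s_b minimizes the variance.
        if best is None or s_a + s_b < best[0]:
            best = (s_a + s_b, axis, s_a, s_b)

    if best is None:  # too few points on one side of every bisection
        return volume * fx.mean()

    _, axis, s_a, s_b = best
    hi_a, lo_b = hi.copy(), lo.copy()
    hi_a[axis] = mid[axis]  # sub-region a: lower half along the chosen axis
    lo_b[axis] = mid[axis]  # sub-region b: upper half along the chosen axis

    # Allocate the remaining points by N_a/(N_a + N_b) = s_a/(s_a + s_b).
    n_rest = n - n_explore
    frac_a = s_a / (s_a + s_b) if s_a + s_b > 0 else 0.5
    n_a = int(np.clip(round(frac_a * n_rest), 1, n_rest - 1))
    return miser(f, lo, hi_a, n_a) + miser(f, lo_b, hi, n_rest - n_a)

# Example: a sharp Gaussian peak in the unit square (exact ~ pi/50 ~ 0.06283).
f = lambda x: np.exp(-50.0 * ((x[:, 0] - 0.5)**2 + (x[:, 1] - 0.5)**2))
print(miser(f, [0.0, 0.0], [1.0, 1.0], n=200_000))
```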
The uniform sampling described above is a particular case of a more generic choice, in which the samples are drawn from any distribution p(x). The idea is that p(x) can be chosen to decrease the variance of the measurement Q_N.
Consider the following example, where one would like to numerically integrate a Gaussian function centered at 0, with σ = 1, from −1000 to 1000.
Naturally, if the samples are drawn uniformly on the interval [−1000, 1000], only a very small part of them would be significant to the integral.
Given samples drawn from a distribution p(x), the estimator becomes

Q_N = (1/N) Σ_i f(x_i)/p(x_i).

This estimator is naturally valid for uniform sampling, the case where p(x) is constant.
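A sketch of the uniform-versus-importance comparison in Python with NumPy. Choosing p equal to the integrand's own normal density is an idealized assumption (p proportional to f yields a zero-variance estimator); in practice p can only approximate the integrand:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
N = 10_000

def f(x):
    """Gaussian integrand, sigma = 1, centered at 0 (exact integral ~ 1)."""
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

# Uniform sampling on [-1000, 1000]: almost every sample lands where f ~ 0.
u = rng.uniform(-1000.0, 1000.0, N)
q_uniform = 2000.0 * f(u).mean()           # Q_N = V <f>, large variance

# Importance sampling: draw from p(x) = standard normal density, which
# concentrates samples where the integrand is significant, and weight
# each sample by f(x)/p(x); f counts as 0 outside [-1000, 1000].
x = rng.normal(0.0, 1.0, N)
p = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
w = np.where(np.abs(x) <= 1000.0, f(x) / p, 0.0)
q_importance = w.mean()                    # Q_N = <f/p>

print(f"uniform:    {q_uniform:.6f}")
print(f"importance: {q_importance:.6f}")   # ~ 1, essentially zero variance here
```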
The VEGAS algorithm approximates the optimal sampling distribution, which is proportional to |f|, by making a number of passes over the integration region that build a histogram of the function f. Each histogram is used to define a sampling distribution for the next pass.
[9] In order to avoid the number of histogram bins growing like K^d with the dimension d, the probability distribution is approximated by a separable function,

g(x_1, x_2, …) = g_1(x_1) g_2(x_2) ⋯,

so that the number of bins required is only K·d.
This is equivalent to locating the peaks of the function from the projections of the integrand onto the coordinate axes.
If an integrand can be rewritten in a form that is approximately separable, this will increase the efficiency of integration with VEGAS.
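A VEGAS-flavored sketch in Python with NumPy, reduced to a single adaptation pass on the unit square. The real algorithm iterates, adapting variable-width bins from pass to pass; the name vegas_like and all parameters here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=3)

def vegas_like(f, d=2, bins=50, n_pass=20_000, n_final=100_000):
    """VEGAS-flavored sketch on [0, 1]^d: approximate the optimal density by a
    separable product of per-axis histograms g_1(x_1) * ... * g_d(x_d)."""
    # Pass 1: uniform samples; project |f| onto each coordinate axis.
    x = rng.uniform(size=(n_pass, d))
    fx = np.abs(f(x))
    g = np.empty((d, bins))
    for axis in range(d):
        idx = np.minimum((x[:, axis] * bins).astype(int), bins - 1)
        g[axis] = np.bincount(idx, weights=fx, minlength=bins) + 1e-12
        g[axis] *= bins / g[axis].sum()  # normalize to a density (bin width 1/bins)

    # Pass 2: sample each coordinate from its histogram, weight by 1/g(x).
    y = np.empty((n_final, d))
    p = np.ones(n_final)
    for axis in range(d):
        probs = g[axis] / bins                        # per-bin probabilities
        b = rng.choice(bins, size=n_final, p=probs)   # pick a bin per sample
        y[:, axis] = (b + rng.uniform(size=n_final)) / bins  # uniform inside bin
        p *= g[axis][b]                               # separable density g(y)
    return (f(y) / p).mean()                          # Q_N = <f/p>

# Example: a sharp peak at the center of the unit square (exact ~ pi/200).
f = lambda x: np.exp(-200.0 * ((x[:, 0] - 0.5)**2 + (x[:, 1] - 0.5)**2))
print(vegas_like(f))
```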