Gaussian integral

The Gaussian integral, also known as the Euler–Poisson integral, is the integral of the Gaussian function f(x) = e−x² over the entire real line. Named after the German mathematician Carl Friedrich Gauss, the integral is

\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.

Abraham de Moivre originally discovered this type of integral in 1733, while Gauss published the precise integral in 1809,[1] attributing its discovery to Laplace.

The same integral with finite limits is closely related to both the error function and the cumulative distribution function of the normal distribution.
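Explicitly, these standard definitions relate the finite-limit integral to the error function erf and the standard normal cumulative distribution function Φ:

```latex
\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,dt,
\qquad
\Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right].
```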

In physics this type of integral appears frequently, for example, in quantum mechanics, to find the probability density of the ground state of the harmonic oscillator.

This integral is also used in the path integral formulation, to find the propagator of the harmonic oscillator, and in statistical mechanics, to find its partition function.

Although the error function itself is not an elementary function, as can be proven by the Risch algorithm,[2] the Gaussian integral can be evaluated analytically through the methods of multivariable calculus.

A standard way to compute the Gaussian integral, the idea of which goes back to Poisson,[3] is to make use of the property that:

\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2
= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy
= \int_0^{2\pi}\int_0^{\infty} e^{-r^2}\,r\,dr\,d\theta = \pi,

where the factor of r is the Jacobian determinant that appears because of the transformation to polar coordinates (r dr dθ being the standard measure on the plane, expressed in polar coordinates), and the substitution involves taking s = −r², so that ds = −2r dr.
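As a numerical sanity check of the value √π (not part of the derivation; the quadrature scheme and truncation bound below are arbitrary choices):

```python
import math

# Numerical sanity check: approximate the Gaussian integral with the
# trapezoidal rule on [-8, 8], outside of which the integrand e^(-x^2)
# is negligibly small (< 1e-27).
def gaussian_integral(steps: int = 100_000, bound: float = 8.0) -> float:
    h = 2 * bound / steps
    total = math.exp(-bound * bound)   # the two endpoint half-weights combined
    for i in range(1, steps):
        x = -bound + i * h
        total += math.exp(-x * x)
    return total * h

print(gaussian_integral(), math.sqrt(math.pi))
```

The trapezoidal rule converges extremely fast for smooth, rapidly decaying integrands like this one, so the approximation agrees with √π far beyond the tolerance one would expect from the step size alone.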

To justify the improper double integrals and the equating of the two expressions, we begin with an approximating function

I(a) = \int_{-a}^{a} e^{-x^2}\,dx,

so that the Gaussian integral is the limit of I(a) as a → ∞.

If the double integral

\iint_{\mathbb{R}^2} e^{-(x^2+y^2)}\,dx\,dy

were absolutely convergent, we would have that its Cauchy principal value, that is, the limit

\lim_{a\to\infty} I(a)^2,

would coincide with it. To see that this is the case, note that the integral of e−(x²+y²) over the square with vertices (±a, ±a) is exactly I(a)².

Since the exponential function is greater than 0 for all real numbers, it then follows that the integral taken over the square's incircle must be less than I(a)², and similarly the integral taken over the square's circumcircle must be greater than I(a)². The integrals over the two disks are easily computed by switching from Cartesian to polar coordinates.
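Writing I(a) for the truncated integral of e−x² over [−a, a], the disk integrals evaluate in closed form: in polar coordinates, the integral of e−(x²+y²) over a disk of radius R equals π(1 − e−R²). With the incircle of radius a and the circumcircle of radius a√2, this gives the squeeze

```latex
\pi\left(1 - e^{-a^2}\right) \;<\; I(a)^2 \;<\; \pi\left(1 - e^{-2a^2}\right),
```

so by the squeeze theorem I(a)² → π as a → ∞, and the Gaussian integral equals √π.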

Since the limits on s as y → ±∞ depend on the sign of x, it simplifies the calculation to use the fact that e−x² is an even function, and, therefore, the integral over all real numbers is just twice the integral from zero to infinity.

In Laplace approximation, we deal only with terms up to second order in the Taylor expansion, so we consider

e^{-x^2} \approx 1 - x^2 \approx \left(1 + x^2\right)^{-1}.

In fact, since (1 + t) \le e^{t} for all t, we have the exact bounds

1 - x^2 \;\le\; e^{-x^2} \;\le\; \left(1 + x^2\right)^{-1},

and hence, raising to the n-th power and integrating,

\int_{-1}^{1} \left(1 - x^2\right)^n dx \;\le\; \int_{-\infty}^{\infty} e^{-n x^2}\,dx \;\le\; \int_{-\infty}^{\infty} \left(1 + x^2\right)^{-n} dx.

By trigonometric substitution (x = sin θ on the left, x = tan θ on the right), we exactly compute those two bounds:

\int_{-1}^{1} \left(1 - x^2\right)^n dx = 2 \cdot \frac{(2n)!!}{(2n+1)!!},
\qquad
\int_{-\infty}^{\infty} \left(1 + x^2\right)^{-n} dx = \pi \cdot \frac{(2n-3)!!}{(2n-2)!!}.

By taking the square root of the Wallis formula,

\frac{\pi}{2} = \prod_{n=1}^{\infty} \frac{(2n)^2}{(2n-1)(2n+1)},

both bounds converge to √π, the desired result.
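As a quick numerical illustration (not part of the proof), the partial products of the Wallis formula converge, slowly, to π/2:

```python
import math

# Partial products of the Wallis formula: pi/2 = prod (2n)^2 / ((2n-1)(2n+1)).
# Convergence is slow (error roughly pi/(8N)), so many factors are needed.
p = 1.0
for n in range(1, 100_001):
    p *= (2 * n) ** 2 / ((2 * n - 1) * (2 * n + 1))
print(p, math.pi / 2)
```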

Conversely, if we first compute the integral with one of the other methods above, we would obtain a proof of the Wallis formula.

This shows why the factorial of a half-integer is a rational multiple of √π.
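This can be checked directly through the gamma function, since z! is interpreted as Γ(z + 1) (a quick sketch using Python's standard library):

```python
import math

# Half-integer factorials via the gamma function: z! = gamma(z + 1).
# Each value is a rational multiple of sqrt(pi), as claimed above.
sqrt_pi = math.sqrt(math.pi)
assert math.isclose(math.gamma(0.5), sqrt_pi)           # Gamma(1/2) = sqrt(pi)
assert math.isclose(math.gamma(1.5), sqrt_pi / 2)       # (1/2)! = (1/2) sqrt(pi)
assert math.isclose(math.gamma(2.5), 3 * sqrt_pi / 4)   # (3/2)! = (3/4) sqrt(pi)
assert math.isclose(math.gamma(3.5), 15 * sqrt_pi / 8)  # (5/2)! = (15/8) sqrt(pi)
print("ok")
```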

This fact, namely that

\int_{\mathbb{R}^n} e^{-\frac{1}{2}\mathbf{x}^{\mathsf T} A \mathbf{x}}\,d^n x = \sqrt{\frac{(2\pi)^n}{\det A}}

for a symmetric positive-definite matrix A, is applied in the study of the multivariate normal distribution.
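As a numerical sanity check of the n-dimensional formula, the integral of exp(−xᵀAx/2) over ℝⁿ for symmetric positive-definite A equals √((2π)ⁿ/det A); the 2×2 matrix and quadrature grid below are arbitrary illustrative choices:

```python
import math

# Check: integral over R^2 of exp(-x^T A x / 2) = sqrt((2 pi)^2 / det A)
# for an arbitrary symmetric positive-definite A.
a11, a12, a22 = 2.0, 0.5, 1.0     # A = [[2.0, 0.5], [0.5, 1.0]], SPD
det_A = a11 * a22 - a12 * a12     # = 1.75
expected = math.sqrt((2 * math.pi) ** 2 / det_A)

h, n = 0.02, 800                  # midpoint rule on [-8, 8]^2
total = 0.0
for i in range(n):
    x = -8.0 + (i + 0.5) * h
    for j in range(n):
        y = -8.0 + (j + 0.5) * h
        q = a11 * x * x + 2 * a12 * x * y + a22 * y * y
        total += math.exp(-0.5 * q)
total *= h * h
print(total, expected)
```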

where σ is a permutation of {1, …, 2N} and the extra factor on the right-hand side is the sum over all combinatorial pairings of {1, …, 2N} of N copies of A⁻¹.

for some analytic function f, provided it satisfies some appropriate bounds on its growth and some other technical criteria.

The exponential over a differential operator is understood as a power series.
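A minimal illustration: the exponential of the derivative operator acts as a shift on analytic functions, since its power series applied to f is exactly the Taylor series of f,

```latex
e^{a\,\partial_x} f(x) \;=\; \sum_{n=0}^{\infty} \frac{a^n}{n!}\,\frac{d^n f}{dx^n}(x) \;=\; f(x+a).
```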

While functional integrals have no rigorous definition (or even a nonrigorous computational one in most cases), we can define a Gaussian functional integral in analogy to the finite-dimensional case.

This can be taken care of if we only consider ratios. In the DeWitt notation, the equation looks identical to the finite-dimensional case.

If A is again a symmetric positive-definite matrix, then (assuming all are column vectors)

\int_{\mathbb{R}^n} e^{-\frac{1}{2}\mathbf{x}^{\mathsf T} A \mathbf{x} + \mathbf{b}^{\mathsf T}\mathbf{x}}\,d^n x
= \sqrt{\frac{(2\pi)^n}{\det A}}\; e^{\frac{1}{2}\mathbf{b}^{\mathsf T} A^{-1} \mathbf{b}},

which follows by completing the square.
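A one-dimensional numerical sanity check of this completed-square identity (the values of a and b below are arbitrary illustrative choices):

```python
import math

# 1-D check of the completed-square identity
#   integral of exp(-a x^2 / 2 + b x) over R = sqrt(2 pi / a) * exp(b^2 / (2a)),
# the scalar case of the matrix formula above.
a, b = 1.7, 0.9
expected = math.sqrt(2 * math.pi / a) * math.exp(b * b / (2 * a))

h, steps = 0.001, 40_000          # midpoint rule on [-20, 20]
total = h * sum(math.exp(-0.5 * a * (x := -20.0 + (i + 0.5) * h) ** 2 + b * x)
                for i in range(steps))
print(total, expected)
```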

where n is a positive integer. An easy way to derive these is by differentiating under the integral sign:

\int_{-\infty}^{\infty} x^{2n} e^{-\alpha x^2}\,dx
= (-1)^n \frac{d^n}{d\alpha^n} \int_{-\infty}^{\infty} e^{-\alpha x^2}\,dx
= (-1)^n \frac{d^n}{d\alpha^n} \sqrt{\frac{\pi}{\alpha}}
= \sqrt{\frac{\pi}{\alpha}}\,\frac{(2n-1)!!}{(2\alpha)^n}.

One could also integrate by parts and find a recurrence relation to solve this.

Applying a linear change of basis shows that the integral of the exponential of a homogeneous polynomial in n variables may depend only on SL(n)-invariants of the polynomial.

One such invariant is the discriminant, zeros of which mark the singularities of the integral.

For example, the solution to the integral of the exponential of a quartic polynomial is[citation needed]

These integrals turn up in subjects such as quantum field theory.

A graph of the function f(x) = e−x² and the area between it and the x-axis (i.e. over the entire real line), which is equal to √π.