In theoretical physics, stochastic quantization is a method for modeling quantum mechanics, introduced by Edward Nelson in 1966,[1][2][3] and streamlined by Giorgio Parisi and Yong-Shi Wu in 1981.
It is used in numerical applications, notably simulations of gauge theories with fermions, where it serves to address the problem of fermion doubling that usually occurs in such calculations.
Stochastic quantization takes advantage of the fact that a Euclidean quantum field theory can be modeled as the equilibrium limit of a statistical mechanical system coupled to a heat bath.
In particular, in the path integral representation of a Euclidean quantum field theory, the path integral measure is closely related to the Boltzmann distribution of a statistical mechanical system in equilibrium.
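Schematically, and in standard notation rather than notation taken from the cited references, the correspondence reads:

```latex
% Schematic correspondence between the Euclidean path integral
% measure and the Boltzmann distribution of an equilibrium system:
\[
  \langle \mathcal{O} \rangle
    = \frac{1}{Z} \int \mathcal{D}\varphi \; \mathcal{O}[\varphi]\,
      e^{-S_E[\varphi]/\hbar}
  \qquad \longleftrightarrow \qquad
  \langle \mathcal{O} \rangle_{\mathrm{eq}}
    = \frac{1}{Z} \sum_{\text{states}} \mathcal{O}\, e^{-H/k_B T}
\]
% The Euclidean action S_E plays the role of the energy H,
% and \hbar plays the role of the temperature k_B T.
```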
Via the ergodic hypothesis, the equilibrium state of such a statistical mechanical system can in turn be modeled as the stationary distribution of a stochastic process.
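In the Parisi–Wu formulation, this stochastic process is a Langevin equation in a fictitious extra time, whose drift is the (functional) gradient of the Euclidean action and whose stationary distribution is the path integral measure. The sketch below is an illustrative toy model, not an implementation from the cited references: a zero-dimensional "field" with the assumed quartic action S(φ) = μφ²/2 + λφ⁴/4 and arbitrary parameter choices. It integrates the discretized Langevin equation with the Euler–Maruyama method and checks the sampled ⟨φ²⟩ against direct numerical integration of the Boltzmann weight.

```python
import numpy as np

def langevin_sample(mu=1.0, lam=0.5, dt=1e-3,
                    n_steps=500_000, burn_in=50_000, seed=0):
    """Sample exp(-S) for the toy action S(phi) = mu*phi**2/2 + lam*phi**4/4.

    Euler-Maruyama integration of the Parisi-Wu Langevin equation in
    fictitious time tau:  dphi/dtau = -dS/dphi + eta(tau),  with Gaussian
    noise satisfying <eta(tau) eta(tau')> = 2 delta(tau - tau').  The
    stationary distribution of this process is proportional to exp(-S).
    """
    rng = np.random.default_rng(seed)
    phi = 0.0
    samples = np.empty(n_steps - burn_in)
    for step in range(n_steps):
        drift = -(mu * phi + lam * phi**3)              # -dS/dphi
        phi += drift * dt + rng.normal(0.0, np.sqrt(2.0 * dt))
        if step >= burn_in:
            samples[step - burn_in] = phi
    return samples

# Compare <phi^2> from the Langevin chain against direct numerical
# integration of the Boltzmann weight exp(-S) on a grid (dx cancels).
samples = langevin_sample()
x = np.linspace(-6.0, 6.0, 4001)
w = np.exp(-(0.5 * x**2 + 0.125 * x**4))    # S with mu=1.0, lam=0.5
print("Langevin   <phi^2> =", np.mean(samples**2))
print("Quadrature <phi^2> =", np.sum(x**2 * w) / np.sum(w))
```

The finite step size dt introduces a systematic bias of order dt in the sampled distribution; in practical applications this is controlled by extrapolating dt → 0 or by using higher-order integration schemes.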