Mean-field particle methods are a broad class of interacting type Monte Carlo algorithms for simulating from a sequence of probability distributions satisfying a nonlinear evolution equation.
The terminology "mean field" reflects the fact that each of the samples (a.k.a. particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes. In other words, starting with a chaotic configuration based on independent copies of the initial state of the nonlinear Markov chain model, the chaos propagates at any time horizon as the size of the system tends to infinity; that is, finite blocks of particles reduce to independent copies of the nonlinear Markov process.[5][6][7]
The terminology "propagation of chaos" originated with the work of Mark Kac in 1976 on a colliding mean-field kinetic gas model.[8] The theory of mean-field interacting particle models had certainly started by the mid-1960s, with the work of Henry P. McKean Jr. on Markov interpretations of a class of nonlinear parabolic partial differential equations arising in fluid mechanics.[5][9]
The mathematical foundations of these classes of models were developed from the mid-1980s to the mid-1990s by several mathematicians, including Werner Braun, Klaus Hepp,[10] Karl Oelschläger,[11][12][13] Gérard Ben Arous and Marc Brunaud,[14] Donald Dawson, Jean Vaillancourt[15] and Jürgen Gärtner,[16][17] Christian Léonard,[18] Sylvie Méléard, Sylvie Roelly,[6] Alain-Sol Sznitman[7][19] and Hiroshi Tanaka[20] for diffusion type models; F. Alberto Grünbaum,[21] Tokuzo Shiga, Hiroshi Tanaka,[22] Sylvie Méléard and Carl Graham[23][24][25] for general classes of interacting jump-diffusion processes.
An earlier pioneering article by Theodore E. Harris and Herman Kahn, published in 1951, already used mean-field but heuristic-like genetic methods for estimating particle transmission energies.
The origins of these mean-field computational techniques can be traced to 1950 and 1954 with the work of Alan Turing on genetic type mutation-selection learning machines[27] and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey.[28][29] The Australian geneticist Alex Fraser also published in 1957 a series of papers on the genetic type simulation of artificial selection of organisms.
The first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984.[35] In molecular chemistry, the use of genetic heuristic-like particle methods (a.k.a. pruning and enrichment strategies) can be traced back to the mid-1950s with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.[41] Particle filters were also developed in signal processing during the years 1989-1992 by P. Del Moral, J.C. Noyer, G. Rigal, and G. Salut at the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales) and the IT company DIGILOG, on RADAR/SONAR and GPS signal processing problems.[42][43][44][45][46][47] The foundations and the first rigorous analysis of the convergence of genetic type models and mean field Feynman-Kac particle methods are due to Pierre Del Moral[48][49] in 1996.
Branching type particle methods with varying population sizes were also developed at the end of the 1990s by Dan Crisan, Jessica Gaines and Terry Lyons,[50][51][52] and by Dan Crisan, Pierre Del Moral and Terry Lyons.[53] The first uniform convergence results with respect to the time parameter for mean field particle models were developed at the end of the 1990s by Pierre Del Moral and Alice Guionnet[54][55] for interacting jump type processes, and by Florent Malrieu for nonlinear diffusion type processes.
The macroscopic behavior of these many-body particle systems is encapsulated in the limiting model obtained when the size of the population tends to infinity.
In quantum mechanics, the solution of the imaginary time Schrödinger equation (a.k.a. the heat equation) is given by a Feynman-Kac distribution associated with a free evolution Markov process (often represented by Brownian motions) in the set of electronic or macromolecular configurations and some potential energy function.
The long time behavior of these nonlinear semigroups is related to top eigenvalues and ground state energies of Schrödinger's operators.
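Concretely, these Feynman-Kac distributions can be written as follows; this is a standard formulation, sketched here with a free evolution Markov process $(X_t)_{t\ge 0}$ and a potential energy function $V$, symbols introduced only for this sketch:

$$
\eta_t(f)\;=\;\frac{\mathbb{E}\!\left[f(X_t)\,\exp\!\left(-\int_0^{t}V(X_s)\,ds\right)\right]}{\mathbb{E}\!\left[\exp\!\left(-\int_0^{t}V(X_s)\,ds\right)\right]},
\qquad
\lambda_0\;=\;-\lim_{t\to\infty}\frac{1}{t}\,\log\mathbb{E}\!\left[\exp\!\left(-\int_0^{t}V(X_s)\,ds\right)\right],
$$

where, under suitable regularity conditions, $\lambda_0$ is the ground state energy of the associated Schrödinger operator.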
During the mutation transition, the walkers evolve randomly and independently in a potential energy landscape on particle configurations.
The mean field selection transition (a.k.a. quantum teleportation, population reconfiguration, resampled transition) is associated with a fitness function that reflects the particle absorption in an energy well.
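As a concrete illustration of one mutation-selection cycle, the following Python sketch is in the spirit of a diffusion Monte Carlo scheme; the harmonic potential, time step, and population size are illustrative assumptions rather than quantities taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def V(x):
    """Toy harmonic potential (illustrative choice); its ground state energy is 0.5."""
    return 0.5 * x ** 2

def mutation(walkers, dt):
    """Free evolution: every walker performs an independent Brownian step."""
    return walkers + np.sqrt(dt) * rng.standard_normal(walkers.shape)

def selection(walkers, weights):
    """Reconfiguration: resample the population in proportion to the fitness weights,
    so walkers trapped in energy wells multiply and high-energy walkers are killed."""
    p = weights / weights.sum()
    return walkers[rng.choice(len(walkers), size=len(walkers), p=p)]

def ground_state_estimate(N=5_000, steps=2_000, dt=0.01):
    walkers = rng.standard_normal(N)            # independent initial copies
    log_norm = 0.0
    for _ in range(steps):
        walkers = mutation(walkers, dt)
        weights = np.exp(-dt * V(walkers))      # absorption weights over one time step
        log_norm += np.log(weights.mean())      # accumulates the log normalizing constants
        walkers = selection(walkers, weights)
    # -(1/t) log E[exp(-int_0^t V)] converges to the ground state energy
    return -log_norm / (steps * dt)

print(ground_state_estimate())                  # approximately 0.5
```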
In molecular chemistry and statistical physics, mean field particle methods are also used to sample Boltzmann-Gibbs measures associated with some cooling schedule, and to compute their normalizing constants (a.k.a. free energies, or partition functions).[71][72]
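A simple identity underlying such computations, stated here as a sketch for a hypothetical energy function $H$, reference measure $\nu$, and cooling schedule $\beta_0<\beta_1<\cdots$ (none of which are specified above), is

$$
\pi_\beta(dx)\;=\;\frac{1}{Z_\beta}\,e^{-\beta H(x)}\,\nu(dx),
\qquad
\frac{Z_{\beta_{n+1}}}{Z_{\beta_n}}\;=\;\mathbb{E}_{\pi_{\beta_n}}\!\left[e^{-(\beta_{n+1}-\beta_n)\,H(X)}\right],
$$

so that replacing $\pi_{\beta_n}$ by the empirical measure of an interacting particle approximation yields consistent estimates of the successive ratios of partition functions, and hence of the free energies, along the cooling schedule.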
The mean field genetic type approximation of these flows offers a fixed population size interpretation of these branching processes.[2][3][54][55][66][79] In computer science, and more particularly in artificial intelligence, these mean field type genetic algorithms are used as random search heuristics that mimic the process of evolution to generate useful solutions to complex optimization problems.
The idea is to propagate a population of feasible candidate solutions using mutation and selection mechanisms.
The limiting model as the number of agents tends to infinity is sometimes called the continuum model of agents.[91] In information theory, and more specifically in statistical machine learning and signal processing, mean field particle methods are used to sample sequentially from the conditional distributions of some random process with respect to a sequence of observations or a cascade of rare events.
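The following Python sketch shows one minimal form of such a sequential conditional sampler, a bootstrap-type particle filter for a hypothetical scalar state-space model; the dynamics, noise levels, and observation model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(observations, N=1_000, a=0.9, sigma_x=0.5, sigma_y=0.3):
    """Bootstrap-type particle filter: mutation = sampling the assumed state dynamics,
    selection = resampling in proportion to the likelihood of the new observation.
    Returns the estimated filtering means E[X_n | Y_0, ..., Y_n]."""
    particles = rng.standard_normal(N)                  # samples from a prior on X_0
    means = []
    for y in observations:
        # mutation: propagate each particle with the (assumed) dynamics X_{n+1} = a X_n + noise
        particles = a * particles + sigma_x * rng.standard_normal(N)
        # selection weights: likelihood of observing y given each particle (Y_n = X_n + noise)
        log_w = -0.5 * ((y - particles) / sigma_y) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        means.append(np.sum(w * particles))             # weighted empirical filtering mean
        particles = particles[rng.choice(N, size=N, p=w)]   # resampling step
    return np.array(means)

# Synthetic observations drawn from the same assumed model, for illustration only.
x, ys = 0.0, []
for _ in range(50):
    x = 0.9 * x + 0.5 * rng.standard_normal()
    ys.append(x + 0.3 * rng.standard_normal())
print(particle_filter(np.array(ys))[:5])
```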
Subset simulation and Monte Carlo splitting[99] techniques are particular instances of genetic particle schemes and Feynman-Kac particle models equipped with Markov chain Monte Carlo mutation transitions.[67][100][101] To motivate the mean field simulation algorithm, we start with S a finite or countable state space and let P(S) denote the set of all probability measures on S. Consider a sequence of probability distributions $(\eta_n)_{n\ge 0}$ on S satisfying an evolution equation of the form

$$
\eta_{n+1} \;=\; \Phi(\eta_n) \qquad (1)
$$

for some, possibly nonlinear, mapping $\Phi : P(S) \to P(S)$. Identifying a probability measure on S with the vector of its point masses, $\Phi$ is a mapping from the $(s-1)$-unit simplex into itself, where s stands for the cardinality of the set S. When s is too large, solving equation (1) is intractable or computationally very costly.
One natural way to approximate these evolution equations is to reduce sequentially the state space using a mean field particle model.
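In the simplest such scheme (a standard formulation sketched here for concreteness, with the notation $\xi_n=(\xi_n^1,\dots,\xi_n^N)$ and $m(\xi_n)$ introduced only for this sketch), one evolves N particles by substituting their occupation measure for the unknown distribution in equation (1):

$$
\xi_{n+1}^i \;\sim\; \Phi\big(m(\xi_n)\big),\qquad i=1,\dots,N,
\qquad\text{with}\qquad
m(\xi_n)\;:=\;\frac{1}{N}\sum_{j=1}^{N}\delta_{\xi_n^j},
$$

the states $\xi_{n+1}^1,\dots,\xi_{n+1}^N$ being sampled conditionally independently given the current configuration $\xi_n$.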
For any bounded function f on S, we then have the almost sure convergence

$$
\frac{1}{N}\sum_{i=1}^{N} f\big(\xi_n^i\big)\;\xrightarrow[N\to\infty]{}\;\eta_n(f)\;=\;\sum_{x\in S}\eta_n(x)\,f(x).
$$

These nonlinear Markov processes and their mean field particle interpretation can be extended to time non-homogeneous models on general measurable state spaces.
To illustrate these abstract models, we consider a sequence of independent standard Gaussian random variables, a positive parameter σ, and some given functions; these data define the recursion of a nonlinear Markov chain on the real line whose transition at each time step depends on the distribution of the current state.
The Markov transition of the chain is given, for any bounded measurable function f, by a conditional expectation formula that involves the distribution of the current state. Using the tower property of conditional expectations, one proves that the probability distributions $\eta_n$ of the random states satisfy a nonlinear evolution equation of the same form as (1).
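For illustration, the Python sketch below assumes a McKean-Vlasov-type recursion $X_{n+1} = a(X_n) + b(\mathbb{E}[X_n]) + \sigma W_{n+1}$, with hypothetical drift functions a and b chosen only for this example (not necessarily the chain discussed above). The mean field particle model replaces the unknown expectation $\mathbb{E}[X_n]$ by the empirical mean of N interacting particles.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative (assumed) ingredients of the recursion: a contracting drift,
# a mean-reverting coupling through the expectation, and a unit noise level.
a = lambda x: 0.5 * x
b = lambda m: -0.25 * m
sigma = 1.0

def mean_field_particle_chain(N=100_000, steps=50):
    """N-particle approximation of X_{n+1} = a(X_n) + b(E[X_n]) + sigma * W_{n+1}:
    the unknown expectation E[X_n] is replaced by the empirical mean of the particles,
    so the particles interact only through their empirical measure."""
    xi = rng.standard_normal(N)                          # independent initial copies
    for _ in range(steps):
        m = xi.mean()                                    # empirical approximation of E[X_n]
        xi = a(xi) + b(m) + sigma * rng.standard_normal(N)
    return xi

xi = mean_field_particle_chain()
# Empirical averages of test functions approximate eta_n(f) as N tends to infinity.
print(xi.mean(), np.mean(np.cos(xi)))
```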