Particle filter

The objective is to compute the posterior distributions of the states of a Markov process, given noisy and partial observations.

Several adaptive resampling criteria can be used, including the variance of the weights and the relative entropy with respect to the uniform distribution.[14][15][16]
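The weight-variance criterion is commonly monitored through the effective sample size, which shrinks as the variance of the normalized weights grows. A minimal sketch of both criteria in Python/NumPy follows; the thresholds (half the particle count for the effective sample size, log 2 for the relative entropy) are illustrative defaults of this sketch, not values fixed by the cited references:

```python
import numpy as np

def effective_sample_size(weights):
    """Effective sample size of normalized weights; a small value
    signals high weight variance (weight degeneracy)."""
    return 1.0 / np.sum(weights ** 2)

def entropy_criterion(weights):
    """Relative entropy (KL divergence) of the normalized weights with
    respect to the uniform distribution; large values signal degeneracy."""
    n = len(weights)
    w = weights[weights > 0.0]          # avoid log(0) terms
    return np.sum(w * np.log(n * w))

def should_resample(weights, ess_frac=0.5, kl_max=np.log(2.0)):
    """Adaptive rule: trigger resampling when either criterion crosses
    its (illustrative) threshold."""
    n = len(weights)
    return (effective_sample_size(weights) < ess_frac * n
            or entropy_criterion(weights) > kl_max)
```

Both quantities are computed from the normalized weights at each time step, so an adaptive test of this kind adds negligible cost to the filter loop.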

Feynman-Kac interacting particle methods are also strongly related to mutation-selection genetic algorithms currently used in evolutionary computation to solve complex optimization problems.[18]

Various other numerical methods based on fixed grid approximations, Markov chain Monte Carlo techniques, conventional linearization, extended Kalman filters, or determining the best linear system (in the expected cost-error sense) are unable to cope with large-scale systems, unstable processes, or insufficiently smooth nonlinearities.

Particle filters and Feynman-Kac particle methodologies find application in signal and image processing, Bayesian inference, machine learning, risk analysis and rare event sampling, engineering and robotics, artificial intelligence, bioinformatics,[19] phylogenetics, computational science, economics and mathematical finance, molecular chemistry, computational physics, pharmacokinetics, quantitative risk and insurance[20][21] and other fields.

In evolutionary computing, mean-field genetic type particle methodologies are often used as heuristic and natural search algorithms (a.k.a. metaheuristics).

The origins of mean-field type evolutionary computational techniques can be traced back to 1950 and 1954 with Alan Turing's work on genetic type mutation-selection learning machines[22] and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey.

In 1963, Nils Aall Barricelli simulated a genetic type algorithm to mimic the ability of individuals to play a simple game.[26]

In the evolutionary computing literature, genetic-type mutation-selection algorithms became popular through the seminal work of John Holland in the early 1970s, particularly his book[27] published in 1975.[28]

The computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970)[29] and Crosby (1973).

The use of mean-field genetic-type particle methodologies in quantum Monte Carlo (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984.

In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.

In January 1993, Genshiro Kitagawa developed a "Monte Carlo filter";[35] a slightly modified version of this article appeared in 1996.[36]

In April 1993, Neil J. Gordon et al. published in their seminal work[37] an application of genetic-type algorithms in Bayesian statistical inference.

Particle filters were also developed in signal processing during 1989-1992 by P. Del Moral, J.C. Noyer, G. Rigal, and G. Salut at LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems), in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales) and the IT company DIGILOG, on RADAR/SONAR and GPS signal processing problems.

The mathematical foundations and the first rigorous analysis of these particle algorithms are due to Pierre Del Moral[2][4] in 1996.

The article[2] also contains a proof of the unbiasedness properties of a particle approximation of likelihood functions and unnormalized conditional probability measures.

The unbiased particle estimator of the likelihood functions presented in this article is used today in Bayesian statistical inference.

The first uniform convergence results concerning the time parameter for particle filters were developed at the end of the 1990s by Pierre Del Moral and Alice Guionnet.

A generic particle filter estimates the posterior distribution of the hidden states using the observation measurement process.

In contrast, the Markov chain Monte Carlo or importance sampling approach would model the full posterior distribution \(p(x_0,\ldots,x_k \mid y_0,\ldots,y_k)\).
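To make the sequential structure concrete, here is a hedged sketch of a generic bootstrap-type particle filter for a simple one-dimensional model. The linear-Gaussian transition, the Gaussian observation density, and the particle count are illustrative choices of this sketch; only the mutation-weighting-selection loop is generic:

```python
import numpy as np

rng = np.random.default_rng(0)

def transition(x):
    """Sample the (illustrative) Markov transition x_k -> x_{k+1}."""
    return 0.9 * x + 0.5 * rng.standard_normal(x.shape)

def likelihood(y, x):
    """Observation density p(y_k | x_k) for the (illustrative) model
    y_k = x_k + unit-variance Gaussian noise."""
    return np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2.0 * np.pi)

def bootstrap_filter(ys, n_particles=1000):
    """Return, for each step k, a particle estimate of E[X_k | y_0..y_k]."""
    x = rng.standard_normal(n_particles)       # illustrative initial law
    means = []
    for y in ys:
        x = transition(x)                      # prediction (mutation) step
        w = likelihood(y, x)                   # weighting by the observation
        w /= w.sum()
        means.append(np.sum(w * x))            # filtered-mean estimate
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x = x[idx]                             # multinomial resampling (selection)
    return np.array(means)
```

Note that the loop only ever samples the transition and evaluates the observation likelihood pointwise; this is what allows the density assumptions discussed next to be relaxed.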

The assumption that the initial distribution and the transitions of the Markov chain are absolutely continuous with respect to the Lebesgue measure can be relaxed.

To design a particle filter one simply needs to be able to sample the transitions of the Markov chain and to evaluate the likelihood functions of the observations; the absolute-continuity assumption is only used to derive, in an informal (and rather abusive) way, different formulae between posterior distributions using Bayes' rule for conditional densities.

In certain problems, the conditional distribution of the observations given the random states of the signal may fail to have a density, or that density may be impossible or too complex to compute.

When the likelihood functions are indicator functions \(G_n(x_n) = 1_A(x_n)\) of some subset \(A\) of the state space, they represent the conditional distribution of a Markov chain given it stays in a given tube; that is, we have

\[
\mathbb{P}\big((X_0,\ldots,X_n) \in d(x_0,\ldots,x_n) \mid X_k \in A,\ k=0,\ldots,n\big)
= \frac{1}{Z_n}\,\Big\{\prod_{k=0}^{n} 1_A(x_k)\Big\}\;\mathbb{P}\big((X_0,\ldots,X_n) \in d(x_0,\ldots,x_n)\big)
\]

with \(Z_n = \mathbb{P}(X_k \in A,\ k=0,\ldots,n)\), as soon as the normalizing constant \(Z_n\) is strictly positive.
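Under such indicator potentials, the fraction of particles surviving each selection step yields a particle estimate of the normalizing constant \(Z_n\), i.e., of the (typically small) probability that the chain stays in the tube. A minimal sketch under assumed toy dynamics (the Gaussian random walk and the tube \(A = [-1, 1]\) are illustrative choices of this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

def tube_probability(n_steps, n_particles=10_000):
    """Particle estimate of Z_n = P(X_k in A for all k <= n) for an
    illustrative Gaussian random walk and A = [-1, 1]."""
    x = np.zeros(n_particles)                  # X_0 = 0 lies inside A
    z_hat = 1.0
    for _ in range(n_steps):
        x = x + 0.5 * rng.standard_normal(n_particles)   # mutation
        alive = np.abs(x) <= 1.0               # indicator potential 1_A
        frac = alive.mean()
        if frac == 0.0:
            return 0.0                         # every particle left the tube
        z_hat *= frac                          # accumulate survival fractions
        x = rng.choice(x[alive], size=n_particles)  # resample the survivors
    return z_hat
```

The running product of survival fractions is the particle approximation of \(Z_n\), while the surviving, resampled particles approximate the conditional law of the chain given that it has stayed in the tube.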

Tracing back in time the ancestral lines \((\xi^i_{0,k}, \xi^i_{1,k}, \ldots, \xi^i_{k,k})\) of the particles \(\xi^i_k\) at every level \(l = 0,\ldots,k\), we have the approximation formula

\[
p\big(d(x_0,\ldots,x_k) \mid y_0,\ldots,y_k\big) \;\approx\; \widehat{p}\big(d(x_0,\ldots,x_k)\big)
\]

with the empirical measure

\[
\widehat{p}\big(d(x_0,\ldots,x_k)\big) := \frac{1}{N} \sum_{i=1}^{N} \delta_{(\xi^i_{0,k},\ldots,\xi^i_{k,k})}\big(d(x_0,\ldots,x_k)\big),
\]

in the sense that integrals \(\int F \, d\widehat{p}\) approximate \(\mathbb{E}\big(F(X_0,\ldots,X_k) \mid y_0,\ldots,y_k\big)\). Here F stands for any bounded function on the path space of the signal.[10][5]
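In practice the ancestral lines are obtained by storing, at each selection step, the index of the ancestor each offspring particle descends from. A brief illustrative sketch of that bookkeeping (the function name and array layout are assumptions of this sketch, not a prescribed interface):

```python
import numpy as np

rng = np.random.default_rng(2)

def resample_with_ancestry(particles, weights, paths):
    """One selection step that also carries the ancestral lines along.

    `particles` holds the N current states, `weights` their normalized
    weights, and `paths` (shape (k+1, N)) the ancestral lines: row l
    holds the level-l states of the lines ending at the current particles."""
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)     # sampled ancestor indices
    # Each offspring inherits the full ancestral line of its ancestor;
    # the N columns of the returned `paths` then form the empirical
    # measure on path space described above.
    return particles[idx], paths[:, idx]
```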

The nonlinear filtering evolution can be interpreted as a dynamical system in the set of probability measures of the form

\[
\eta_{n+1} = \Phi_{n+1}(\eta_n),
\]

where \(\Phi_{n+1}\) stands for a mapping from the set of probability measures into itself.[48][49][50][51][52][68][69]

More recent developments can be found in the books.[10][5] When the filtering equation is stable (in the sense that it corrects any erroneous initial condition), the bias and the variance of the particle estimates

\[
\widehat{I}_n(f) := \frac{1}{N} \sum_{i=1}^{N} f\big(\xi^i_n\big)
\quad \text{of} \quad
I_n(f) := \mathbb{E}\big(f(X_n) \mid y_0,\ldots,y_n\big)
\]

are controlled by the non-asymptotic uniform estimates

\[
\sup_{n \ge 0} \Big| \mathbb{E}\big(\widehat{I}_n(f)\big) - I_n(f) \Big| \le \frac{c_1}{N},
\qquad
\sup_{n \ge 0} \mathbb{E}\Big( \big(\widehat{I}_n(f) - I_n(f)\big)^2 \Big) \le \frac{c_2}{N},
\]

for any function \(f\) bounded by 1, and for some finite constants \(c_1, c_2\).

We shall assume that the filtering equation is stable, in the sense that it corrects any erroneous initial condition.

In this situation, the particle approximations of the likelihood functions are unbiased and the relative variance is controlled by

\[
\mathbb{E}\left(\left(\frac{\widehat{Z}_n}{Z_n} - 1\right)^2\right) \le \frac{c\,n}{N},
\]

for some finite constant \(c\), where \(Z_n\) stands for the likelihood function of the observation sequence and \(\widehat{Z}_n\) for its \(N\)-particle approximation.
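In a bootstrap-type implementation such as the sketch above, this unbiased likelihood approximation is the product over time of the averages of the unnormalized weights. A hedged sketch of that bookkeeping, reusing illustrative transition and likelihood callables like those in the earlier sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_likelihood_estimate(ys, transition, likelihood, n_particles=1000):
    """Particle estimate of log Z_n = log p(y_0, ..., y_n): the sum of
    the logs of the mean unnormalized weights. The exponential of this
    quantity (the product of the means) is the unbiased estimate of Z_n."""
    x = rng.standard_normal(n_particles)       # illustrative initial law
    log_z = 0.0
    for y in ys:
        x = transition(x)                      # mutation step
        w = likelihood(y, x)                   # unnormalized weights
        log_z += np.log(w.mean())              # running product of means
        w = w / w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x = x[idx]                             # selection step
    return log_z
```

Working in log space avoids numerical underflow of the product of weight averages over long observation records, which is why such estimators are usually accumulated as log-likelihoods in practice.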