Filtering problem (stochastic processes)

In the theory of stochastic processes, filtering describes the problem of determining the state of a system from an incomplete and potentially noisy set of observations.

While originally motivated by problems in engineering, filtering has found applications in many fields, from signal processing to finance.

The problem of optimal non-linear filtering (even for the non-stationary case) was solved by Ruslan L. Stratonovich (1959,[1] 1960[2]); see also the works of Harold J. Kushner [3] and of Moshe Zakai, who introduced a simplified dynamics for the unnormalized conditional law of the filter,[4] known as the Zakai equation.

In general, if the separation principle applies, then filtering also arises as part of the solution of an optimal control problem.

Consider a probability space (Ω, Σ, P), and suppose that the (random) state Yt in n-dimensional Euclidean space Rn of a system of interest at time t is a random variable given by the solution to an Itō stochastic differential equation of the form

dYt = b(t, Yt) dt + σ(t, Yt) dBt,

where B denotes standard p-dimensional Brownian motion, b : [0, +∞) × Rn → Rn the drift field, and σ : [0, +∞) × Rn → Rn×p the diffusion field. It is assumed that observations Ht in Rm (note that m and n may, in general, be unequal) are taken for each time t according to

Ht = c(t, Yt) + γ(t, Yt) · noise.

Adopting the Itō interpretation of the stochastic differential and setting

Zt = ∫0t Hs ds,

this gives the following stochastic integral representation for the observations Zt:

dZt = c(t, Yt) dt + γ(t, Yt) dWt,   Z0 = 0,

where W denotes standard r-dimensional Brownian motion, independent of B and the initial condition Y0, and c : [0, +∞) × Rn → Rm and γ : [0, +∞) × Rn → Rm×r satisfy

|c(t, x)| + |γ(t, x)| ≤ C(1 + |x|)

for all t and x and some constant C.

The filtering problem is the following: given observations Zs for 0 ≤ s ≤ t, what is the best estimate Ŷt of the true state Yt of the system based on those observations?
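The signal and observation processes above can be simulated on a time grid with the Euler-Maruyama scheme. The following is a minimal one-dimensional sketch; the particular coefficient choices (b(t, y) = −y, constant σ, c(t, y) = y, constant γ) are illustrative assumptions, not fixed by the text.

```python
import numpy as np

def simulate(T=1.0, n_steps=1000, seed=0):
    """Euler-Maruyama simulation of a 1-d signal dY = b(t,Y) dt + sigma dB
    and its observation process dZ = c(t,Y) dt + gamma dW, with B and W
    independent Brownian motions."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    b = lambda t, y: -y          # illustrative drift field
    sigma = 0.5                  # illustrative (constant) diffusion
    c = lambda t, y: y           # illustrative observation function
    gamma = 0.2                  # illustrative observation noise intensity
    Y = np.empty(n_steps + 1)
    Z = np.empty(n_steps + 1)
    Y[0], Z[0] = 1.0, 0.0        # Z starts at 0 by construction
    for k in range(n_steps):
        t = k * dt
        dB = rng.normal(0.0, np.sqrt(dt))   # increment of B
        dW = rng.normal(0.0, np.sqrt(dt))   # independent increment of W
        Y[k + 1] = Y[k] + b(t, Y[k]) * dt + sigma * dB
        Z[k + 1] = Z[k] + c(t, Y[k]) * dt + gamma * dW
    return Y, Z
```

The returned path Z is the data available to the filter; the path Y is the hidden truth against which an estimate Ŷt would be judged.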

By "based on those observations" it is meant that Ŷt is measurable with respect to the σ-algebra Gt generated by the observations Zs, 0 ≤ s ≤ t. Denote by K = K(Z, t) the collection of all Rn-valued random variables Y that are square-integrable and Gt-measurable:

K(Z, t) = { Y : Ω → Rn | Y is Gt-measurable and E[|Y|2] < +∞ }.

By "best estimate", it is meant that Ŷt minimizes the mean-square distance between Yt and all candidates in K:

E[|Yt − Ŷt|2] = inf { E[|Yt − Y|2] : Y ∈ K(Z, t) }.   (M)

The space K(Z, t) of candidates is a Hilbert space, and the general theory of Hilbert spaces implies that the solution Ŷt of the minimization problem (M) is given by

Ŷt = PK(Z,t)(Yt),

where PK(Z,t) denotes the orthogonal projection of L2(Ω, Σ, P; Rn) onto the linear subspace K(Z, t) = L2(Ω, Gt, P; Rn).

Furthermore, it is a general fact about conditional expectations that if F is any sub-σ-algebra of Σ, then the orthogonal projection of L2(Ω, Σ, P; Rn) onto L2(Ω, F, P; Rn) is exactly the conditional expectation operator E[·|F], i.e.,

PL2(Ω,F,P;Rn)(X) = E[X | F].

Hence,

Ŷt = PK(Z,t)(Yt) = E[Yt | Gt].

This elementary result is the basis for the general Fujisaki-Kallianpur-Kunita equation of filtering theory.
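The identification of the best mean-square estimate with a conditional expectation can be checked by Monte Carlo in a toy static case (an illustrative assumption, not from the text): for jointly Gaussian Y and Z = Y + W with unit variances, E[Y | Z] = Z/2, and no other function of Z achieves a smaller mean-square error.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
Y = rng.normal(0.0, 1.0, n)   # hidden state, Var(Y) = 1
W = rng.normal(0.0, 1.0, n)   # observation noise, Var(W) = 1
Z = Y + W                     # observation

# For jointly Gaussian (Y, Z): E[Y | Z] = Cov(Y, Z) / Var(Z) * Z = Z / 2.
cond_exp = 0.5 * Z
mse_cond = np.mean((Y - cond_exp) ** 2)   # close to the optimal value 1/2

# Other G-measurable candidates (functions of Z) do no better:
for candidate in (Z, 0.4 * Z, np.tanh(Z)):
    assert np.mean((Y - candidate) ** 2) >= mse_cond
```

The same projection property is what the dynamic filtering equations below exploit, with Z replaced by an entire observation path.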

The complete knowledge of the filter at a time t would be given by the probability law of the signal Yt conditional on the sigma-field Gt generated by observations Z up to time t. If this probability law admits a density, informally

P(Yt ∈ dy | Gt) = pt(y) dy,

then under some regularity assumptions the density pt(y) satisfies a non-linear stochastic partial differential equation (SPDE) driven by dZt, called the Kushner-Stratonovich equation, or an unnormalized version qt(y) of the density pt(y) satisfies a linear SPDE called the Zakai equation.[10]

These equations can be formulated for the above system, but to simplify the exposition one can assume that the unobserved signal Y and the partially observed noisy signal Z satisfy the equations

dYt = b(t, Yt) dt + σ(t, Yt) dBt,
dZt = c(t, Yt) dt + dWt.

In other terms, the system is simplified by assuming that the observation noise W is not state dependent. For this particular system, the Kushner-Stratonovich SPDE for the density pt reads

dpt = Lt* pt dt + pt [c(t, ·) − Ept(c(t, ·))]T [dZt − Ept(c(t, ·)) dt],

where T denotes transposition, Ept denotes the expectation with respect to the density pt, and the forward diffusion operator Lt* is given by

Lt* pt(y) = − Σi ∂/∂yi [bi(t, y) pt(y)] + (1/2) Σi,j ∂2/∂yi∂yj [aij(t, y) pt(y)],

where a = σσT. If one chooses the unnormalized density qt(y), the Zakai SPDE for the same system reads

dqt = Lt* qt dt + qt c(t, ·)T dZt.

These SPDEs for p and q are written in Ito calculus form.

It is possible to write them in Stratonovich calculus form as well. For example, the Kushner-Stratonovich equation written in Stratonovich calculus reads

dpt = Lt* pt dt − (1/2) pt [ |c(t, ·)|2 − Ept(|c(t, ·)|2) ] dt + pt [c(t, ·) − Ept(c(t, ·))]T ∘ dZt.

From any of the densities p and q one can calculate all statistics of the signal Yt conditional on the sigma-field generated by observations Z up to time t, so that the densities give complete knowledge of the filter.
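As a numerical illustration (not part of the original exposition), the linear Zakai SPDE can be approximated on a one-dimensional grid by operator splitting: an explicit finite-difference step for the forward operator Lt*, followed by a pointwise multiplication playing the role of the qt c dZt correction term. The specific scheme and the idea of clipping negative round-off are implementation assumptions.

```python
import numpy as np

def zakai_step(q, y, dt, dZ, b, sig, c):
    """One splitting step for a 1-d Zakai equation
        dq = L* q dt + q c(y) dZ   (scalar observation, unit obs. noise),
    with L* q = -(b q)' + 0.5 sig^2 q''.
    Prediction: explicit central finite differences (zero at boundary).
    Correction: pointwise multiplication by exp(c dZ - 0.5 c^2 dt)."""
    dy = y[1] - y[0]
    bq = b(y) * q
    dbq = np.zeros_like(q)
    dbq[1:-1] = (bq[2:] - bq[:-2]) / (2 * dy)          # (b q)'
    d2q = np.zeros_like(q)
    d2q[1:-1] = (q[2:] - 2 * q[1:-1] + q[:-2]) / dy**2  # q''
    q = q + dt * (-dbq + 0.5 * sig**2 * d2q)            # prediction
    q = q * np.exp(c(y) * dZ - 0.5 * c(y)**2 * dt)      # correction
    return np.clip(q, 0.0, None)   # remove negative round-off
```

Normalizing q at any time yields an approximation of the conditional density p; the explicit step requires dt to be small relative to dy²/σ² for stability.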

Under the particular linear-constant assumptions with respect to Y, where the system coefficients b and c are linear functions of Y and where σ and γ do not depend on Y, with the initial condition for the signal Y being Gaussian or deterministic, the density pt(y) is Gaussian and it can be characterized by its mean and variance-covariance matrix, whose evolution is described by the Kalman-Bucy filter, which is finite dimensional.
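As an illustration of this finite-dimensionality, here is a minimal Euler-discretized scalar Kalman-Bucy filter; the linear system dY = aY dt + σ dB, dZ = cY dt + dW with unit observation noise, and all parameter values, are illustrative assumptions.

```python
import numpy as np

def kalman_bucy(Z, dt, a=-1.0, sig=0.5, c=1.0, m0=0.0, P0=1.0):
    """Scalar Kalman-Bucy filter for dY = a*Y dt + sig dB,
    dZ = c*Y dt + dW (unit observation noise), Euler-discretized.
    Z is the observed path sampled on a uniform time grid of step dt.
    Returns the conditional mean m_t and variance P_t along the grid."""
    n = len(Z)
    m = np.empty(n)
    P = np.empty(n)
    m[0], P[0] = m0, P0
    for k in range(n - 1):
        dZ = Z[k + 1] - Z[k]
        K = P[k] * c                                  # Kalman gain
        m[k + 1] = m[k] + a * m[k] * dt + K * (dZ - c * m[k] * dt)
        # Riccati equation for the conditional variance:
        P[k + 1] = P[k] + (2 * a * P[k] + sig**2 - c**2 * P[k]**2) * dt
    return m, P
```

Note that the variance P evolves deterministically (it does not depend on the observations), and converges to the positive root of the algebraic Riccati equation 2aP + σ2 − c2P2 = 0; only the mean m is driven by the data Z.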