In the case of well-defined transition models, the EKF has been considered[1] the de facto standard in the theory of nonlinear state estimation, navigation systems, and GPS.[2] The papers establishing the mathematical foundations of Kalman-type filters were published between 1959 and 1961.[6][7] The EKF adapted techniques from calculus, namely multivariate Taylor series expansions, to linearize a model about a working point.
If the system model (as described below) is not well known or is inaccurate, then Monte Carlo methods, especially particle filters, are employed for estimation.
Monte Carlo techniques predate the existence of the EKF but are more computationally expensive for any moderately dimensioned state-space.
In the EKF, the state transition and observation models need not be linear functions of the state but must be differentiable: x_k = f(x_{k−1}, u_k) + w_k and z_k = h(x_k) + v_k, where w_k and v_k are the process and observation noises. The notation x̂_{n|m} represents the estimate of the state at time n given observations up to and including time m ≤ n, and the state transition and observation matrices are defined to be the Jacobians F_k = ∂f/∂x evaluated at x̂_{k−1|k−1} and H_k = ∂h/∂x evaluated at x̂_{k|k−1}. Unlike its linear counterpart, the extended Kalman filter in general is not an optimal estimator (it is optimal if the measurement and the state transition model are both linear, as in that case the extended Kalman filter is identical to the regular one).
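As a concrete illustration, a single predict/update cycle of this kind can be sketched in Python with NumPy. The function name and arguments below are illustrative, not from the source; the filter linearizes f and h through their Jacobians exactly as described above.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of a discrete-time EKF.

    f, h are the (possibly nonlinear) transition and observation
    functions; F_jac, H_jac return their Jacobians, evaluated at the
    point the filter linearizes about.
    """
    # Predict: propagate the mean through f, the covariance through
    # the Jacobian F_k = df/dx evaluated at the previous estimate.
    F = F_jac(x, u)
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q

    # Update: linearize h about the predicted state x_pred.
    H = H_jac(x_pred)
    y = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

When f and h are linear, F_jac and H_jac return constant matrices and the step reduces to the regular Kalman filter, which is exactly the optimality case noted above.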
In addition, if the initial estimate of the state is wrong, or if the process is modeled incorrectly, the filter may quickly diverge, owing to its linearization.
The extended Kalman filter may also perform poorly even for very simple one-dimensional systems such as the cubic sensor,[9] where the optimal filter can be bimodal:[10] having such a rich structure, it cannot be effectively represented by a single mean-and-variance estimator. The quadratic sensor behaves similarly.
That said, the extended Kalman filter can give reasonable performance, and is arguably the de facto standard in navigation systems and GPS.[13] Most physical systems are represented as continuous-time models, while discrete-time measurements are frequently taken for state estimation via a digital processor.
Higher-order EKFs may be obtained by retaining more terms of the Taylor series expansions.[14] However, higher-order EKFs tend to provide performance benefits only when the measurement noise is small.
The typical formulation of the EKF involves the assumption of additive process and measurement noise.
When the noise enters the model non-additively, as x_k = f(x_{k−1}, u_k, w_k) and z_k = h(x_k, v_k), the conventional extended Kalman filter can be applied with the following substitutions:[16][17] Q_k is replaced by L_k Q_k L_kᵀ and R_k by M_k R_k M_kᵀ, where L_k = ∂f/∂w and M_k = ∂h/∂v are the Jacobians with respect to the noise terms, evaluated at the current estimate. Here the original observation covariance matrix R_k is transformed through the noise Jacobian M_k.
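The substitution itself is a pair of congruence transforms; a minimal sketch (the function name and example matrices are illustrative, not from the source):

```python
import numpy as np

def effective_covariances(L, Q, M, R):
    # Non-additive noise: x_k = f(x_{k-1}, u_k, w_k), z_k = h(x_k, v_k).
    # With L_k = df/dw and M_k = dh/dv, the additive-noise EKF equations
    # apply with Q_k -> L Q L^T and R_k -> M R M^T.
    return L @ Q @ L.T, M @ R @ M.T
```

For additive noise, L and M are identity matrices and the substitutions leave Q and R unchanged, recovering the typical formulation.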
This attempts to produce a locally optimal filter; however, it is not necessarily stable because the solutions of the underlying Riccati equation are not guaranteed to be positive definite. One way of improving performance is the faux algebraic Riccati technique,[18] which trades off optimality for stability.
The familiar structure of the extended Kalman filter is retained but stability is achieved by selecting a positive definite solution to a faux algebraic Riccati equation for the gain design.
Another way of improving extended Kalman filter performance is to employ the H-infinity results from robust control.
Robust filters are obtained by adding a positive definite term to the design Riccati equation.[19] The additional term is parametrized by a scalar which the designer may tune to achieve a trade-off between mean-square-error and peak-error performance criteria.
In the UKF, the probability density is approximated by a deterministic sampling of points (so-called sigma points) which represent the underlying distribution as a Gaussian.
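A minimal sketch of this deterministic sampling, using the scaled sigma-point construction common in UKF implementations (the alpha/beta/kappa parameter names follow that convention; the defaults here are illustrative):

```python
import numpy as np

def sigma_points(x, P, alpha=0.1, beta=2.0, kappa=0.0):
    """Return 2n+1 sigma points and mean/covariance weights for (x, P)."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    # Columns of the scaled matrix square root spread the points around x.
    S = np.linalg.cholesky((n + lam) * P)
    pts = np.empty((2 * n + 1, n))
    pts[0] = x
    for i in range(n):
        pts[1 + i] = x + S[:, i]
        pts[1 + n + i] = x - S[:, i]
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return pts, Wm, Wc
```

Each point is propagated through the nonlinear function, and the weighted sample mean and covariance of the transformed points stand in for the Jacobian-based linearization; by construction the weighted points reproduce the original mean and covariance exactly.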
"The extended Kalman filter (EKF) is probably the most widely used estimation algorithm for nonlinear systems."[20] The SOEKF predates the UKF by approximately 35 years, with the moment dynamics first described by Bass et al.[21] The difficulty in implementing any Kalman-type filter for nonlinear state transitions stems from the numerical stability issues required for precision;[22] however, the UKF does not escape this difficulty in that it too uses linearization, namely linear regression.
The UKF was in fact predated by the Ensemble Kalman filter, invented by Evensen in 1994.
It has the advantage over the UKF that the number of ensemble members used can be much smaller than the state dimension, allowing for applications in very high-dimensional systems, such as weather prediction, with state-space sizes of a billion or more.
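A minimal sketch of the stochastic (perturbed-observation) EnKF analysis step, assuming a linear observation operator H for brevity; the function and argument names are illustrative, not from the source:

```python
import numpy as np

def enkf_update(ensemble, z, H, R, rng):
    """Stochastic EnKF analysis step.

    ensemble: (N, n) array of N state members; z: observation of
    dimension m; H: (m, n) observation operator; R: (m, m) covariance.
    """
    N = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)   # ensemble anomalies
    HA = A @ H.T                           # anomalies in observation space
    Pxz = A.T @ HA / (N - 1)               # state-observation cross covariance
    Pzz = HA.T @ HA / (N - 1) + R          # innovation covariance
    K = Pxz @ np.linalg.inv(Pzz)           # sample Kalman gain
    # Each member assimilates its own perturbed copy of the observation.
    zs = z + rng.multivariate_normal(np.zeros(len(z)), R, size=N)
    return ensemble + (zs - ensemble @ H.T) @ K.T
```

Note that all covariances here are sample statistics of the N members, which is why N can be far smaller than the state dimension n in large-scale applications such as weather prediction.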