It can operate in real time, using only the present input measurements and the state calculated previously and its uncertainty matrix; no additional past information is required.
It is a common misconception (perpetuated in the literature) that the Kalman filter cannot be rigorously applied unless all noise processes are assumed to be Gaussian.
[13] The filtering method is named for Hungarian émigré Rudolf E. Kálmán, although Thorvald Nicolai Thiele[14][15] and Peter Swerling developed a similar algorithm earlier.
The Kalman filter deals effectively with the uncertainty due to noisy sensor data and, to some extent, with random external factors.
This allows for a representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances.
Typically, the dead reckoning will provide a very smooth estimate of the truck's position, but it will drift over time as small errors accumulate.
In the prediction phase, the truck's old position will be modified according to the physical laws of motion (the dynamic or "state transition" model).
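The prediction phase described above can be sketched as follows for the truck example, using an assumed constant-velocity transition model (the timestep, noise levels, and initial values are purely illustrative):

```python
import numpy as np

def predict(x, P, F, Q):
    """Prediction phase: propagate the state estimate through the
    state-transition model F and inflate the uncertainty by the
    process-noise covariance Q (this is where drift accumulates)."""
    x_pred = F @ x            # new state from the laws of motion
    P_pred = F @ P @ F.T + Q  # uncertainty grows over time
    return x_pred, P_pred

# Truck with state [position, velocity]; illustrative values.
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])      # constant-velocity transition model
Q = 0.01 * np.eye(2)            # assumed process-noise covariance
x = np.array([0.0, 2.0])        # at position 0, moving at 2 m/s
P = np.eye(2)

x, P = predict(x, P, F, Q)      # predicted position is 2.0 m
```

Note that the predicted covariance is strictly larger than the prior one: prediction alone never reduces uncertainty, which is why the measurement update phase is needed.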
In fact, unmodeled dynamics can seriously degrade the filter's performance, even though it is supposed to work with unknown stochastic signals as inputs.
[31][32] The formula for the updated (a posteriori) estimate covariance above is valid for the optimal Kk gain that minimizes the residual error, in which form it is most widely used in applications.
Practical implementation of a Kalman filter is often difficult because good estimates of the noise covariance matrices Qk and Rk are hard to obtain.
[35] The Field Kalman Filter (FKF), a Bayesian algorithm that allows simultaneous estimation of the state, the parameters, and the noise covariance, has been proposed.
Another approach is the Optimized Kalman Filter (OKF), which treats the covariance matrices not as representations of the noise but as parameters tuned to achieve the most accurate state estimation.
More generally, if the model assumptions do not match the real system perfectly, then optimal state estimation is not necessarily obtained by setting Qk and Rk to the covariances of the noise.
Instead, in that case, the parameters Qk and Rk may be set to explicitly optimize the state estimation,[37] e.g., using standard supervised learning.
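A minimal sketch of this idea follows: a scalar filter is run against simulated ground truth, and the process-noise parameter q is chosen to minimize the estimation error rather than being set to the nominal noise variance. The system, noise values, and the simple grid search (standing in for a full supervised-learning procedure) are all illustrative assumptions:

```python
import numpy as np

def run_filter(zs, q, r):
    """Scalar Kalman filter; returns state estimates for candidate
    process-noise parameter q and measurement-noise parameter r."""
    x, p, est = 0.0, 1.0, []
    for z in zs:
        p = p + q                    # predict
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with measurement
        p = (1 - k) * p
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0.0, 0.5, 200))   # true random-walk state
zs = truth + rng.normal(0.0, 1.0, 200)         # noisy measurements

# Supervised tuning: pick the q that minimizes error against ground
# truth, instead of assuming it equals the modeled noise variance.
candidates = [0.01, 0.05, 0.25, 1.0]
best_q = min(candidates,
             key=lambda q: np.mean((run_filter(zs, q, 1.0) - truth) ** 2))
```

In practice the ground truth is available only in simulation or from a reference sensor, which is exactly the supervised-learning setting the text refers to.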
[39] If the noise terms are non-Gaussian, methods that use probability inequalities or large-sample theory to assess the performance of the filter estimate are known in the literature.
We assume that between timesteps k − 1 and k, uncontrolled forces cause a constant acceleration ak that is normally distributed with mean 0 and standard deviation σa.
Another way to express this, avoiding explicit degenerate distributions, is to write the process noise directly as wk = G·ak, where G maps the scalar acceleration into the state vector and ak ~ N(0, σa²). At each time step, a noisy measurement of the true position of the truck is made.
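The constant-acceleration process noise described above can be constructed as follows; the timestep and noise level are assumed values, and G is the standard vector mapping an acceleration into a [position, velocity] state:

```python
import numpy as np

dt = 1.0
sigma_a = 0.5   # std-dev of the random acceleration (assumed value)

# State [position, velocity]; a constant acceleration ak over the
# timestep enters the state through the vector G.
F = np.array([[1.0, dt],
              [0.0, 1.0]])
G = np.array([[0.5 * dt ** 2],
              [dt]])

# Process-noise covariance: wk = G * ak with ak ~ N(0, sigma_a^2),
# so Q = G * sigma_a^2 * G^T -- a rank-one ("degenerate") covariance.
Q = G @ G.T * sigma_a ** 2
```

The resulting Q has rank one, reflecting the fact that a single scalar noise source drives both position and velocity.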
The formula used to calculate the a posteriori error covariance can be simplified when the Kalman gain equals the optimal value derived above.
If arithmetic precision is unusually low causing problems with numerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; the a posteriori error covariance formula as derived above (Joseph form) must be used.
(Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions,[46]: 69 while on 21st-century computers they are only slightly more expensive.)
[46][47] The L·D·LT decomposition of the innovation covariance matrix Sk is the basis for another type of numerically efficient and robust square root filter.
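As a minimal sketch of the decomposition itself (not a full square-root filter), an L·D·LT factorization avoids square roots entirely, which is part of its appeal; the matrix values below are illustrative:

```python
import numpy as np

def ldl(S):
    """Plain L*D*L^T factorization (no square roots) of a symmetric
    positive-definite matrix such as the innovation covariance Sk."""
    n = S.shape[0]
    L = np.eye(n)
    D = np.zeros(n)
    for j in range(n):
        # Diagonal entry, with contributions of earlier columns removed.
        D[j] = S[j, j] - (L[j, :j] ** 2) @ D[:j]
        for i in range(j + 1, n):
            L[i, j] = (S[i, j] - (L[i, :j] * L[j, :j]) @ D[:j]) / D[j]
    return L, D

S = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L, D = ldl(S)
# S reconstructs exactly as L @ np.diag(D) @ L.T
```

Production square-root filters use carefully pivoted versions of this factorization; the sketch above only shows the basic identity S = L·D·LT.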
The Kalman filter calculates estimates of the true values of states recursively over time using incoming measurements and a mathematical process model.
The remaining probability density functions are p(xk | xk−1) = N(Fk xk−1, Qk), p(zk | xk) = N(Hk xk, Rk), and p(xk−1 | Zk−1) = N(x̂k−1|k−1, Pk−1|k−1). The PDF at the previous timestep is assumed inductively to be the estimated state and covariance.
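One full recursive cycle of this Gaussian propagation can be sketched as follows; the model matrices and measurement values are assumed for illustration:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One recursive cycle: the prior N(x, P) is propagated through
    the transition density N(F x, Q), then conditioned on the
    measurement via the likelihood N(H x, R); the posterior is
    again Gaussian, closing the induction."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Illustrative 1-D tracking of a roughly constant signal:
F = np.eye(1)
H = np.eye(1)
Q = np.array([[0.01]])
R = np.array([[1.0]])
x, P = np.zeros(1), np.eye(1)
for z in [1.1, 0.9, 1.0, 1.05]:
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
```

After a few measurements the estimate moves toward the true level while the covariance shrinks, exactly the recursive behavior described in the text.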
In cases where the dimension of the observation vector y is larger than the dimension of the state-space vector x, the information filter can avoid inverting the larger matrix in the Kalman gain calculation at the price of inverting a smaller matrix in the prediction step, thus saving computing time.
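The underlying identity can be sketched numerically: by the matrix inversion lemma, the gain computed via the m×m innovation covariance equals the gain computed in information form, which only requires n×n inversions (the dimensions and random measurement matrix below are illustrative, and R is taken diagonal so its inverse is cheap, as is typical in information-filter settings):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2, 5                      # state dimension 2, observation dimension 5
P = np.eye(n)
H = rng.normal(size=(m, n))
R = np.eye(m)                    # diagonal, so inv(R) is trivial

# Standard gain: inverts the m x m innovation covariance.
K_std = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Information-form gain: only n x n inversions of the information
# matrix P^-1 + H^T R^-1 H (cheaper when m > n).
info = np.linalg.inv(P) + H.T @ np.linalg.inv(R) @ H
K_info = np.linalg.inv(info) @ H.T @ np.linalg.inv(R)
# K_std and K_info are equal by the matrix inversion lemma.
```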
[60] The minimum-variance smoother can attain the best-possible error performance, provided that the models are linear, their parameters and the noise statistics are known precisely.
[62][63] Expectation–maximization algorithms may be employed to calculate approximate maximum likelihood estimates of unknown state-space parameters within minimum-variance filters and smoothers.
Their work led to a standard way of weighting measured sound levels within investigations of industrial noise and hearing loss.
Most physical systems are represented by continuous-time models, while discrete-time measurements are frequently taken for state estimation via a digital processor.
The predicted state and covariance are calculated respectively by solving a set of differential equations with the initial value equal to the estimate at the previous step.
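A minimal sketch of this continuous-time prediction step, using a simple Euler integration of the state and covariance differential equations dx/dt = A x and dP/dt = A P + P Aᵀ + Qc (the model, noise intensity, and step count are assumed values):

```python
import numpy as np

def predict_continuous(x, P, A, Qc, T, steps=100):
    """Continuous-time prediction: integrate the state ODE
    dx/dt = A x and the covariance ODE dP/dt = A P + P A^T + Qc
    over the interval T between measurements (Euler scheme)."""
    h = T / steps
    for _ in range(steps):
        x = x + h * (A @ x)
        P = P + h * (A @ P + P @ A.T + Qc)
    return x, P

# Constant-velocity model: position integrates velocity.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
Qc = np.diag([0.0, 0.1])          # continuous process-noise intensity (assumed)
x0 = np.array([0.0, 1.0])         # starting at 0, moving at 1 m/s
x, P = predict_continuous(x0, np.eye(2), A, Qc, T=1.0)
```

In practice a higher-order integrator (or the exact matrix-exponential solution for linear models) would replace the Euler steps; the sketch only shows the structure of the two initial-value problems.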
Recent works[81][82][83] utilize notions from the theory of compressed sensing/sampling, such as the restricted isometry property and related probabilistic recovery arguments, for sequentially estimating the sparse state in intrinsically low-dimensional systems.