Linear prediction

Linear prediction is a mathematical operation where future values of a discrete-time signal are estimated as a linear function of previous samples.

In digital signal processing, linear prediction is often called linear predictive coding (LPC) and can thus be viewed as a subset of filter theory.

In system analysis, a subfield of mathematics, linear prediction can be viewed as a part of mathematical modelling or optimization.

The most common representation is

$\hat{x}(n) = \sum_{i=1}^{p} a_i x(n-i),$

where $\hat{x}(n)$ is the predicted signal value, $x(n-i)$ the previous observed values, with $p \le n$, and $a_i$ the predictor coefficients. The error generated by this estimate is

$e(n) = x(n) - \hat{x}(n),$

where $x(n)$ is the true signal value.

These equations are valid for all types of (one-dimensional) linear prediction.

The differences are found in the way the predictor coefficients $a_i$ are chosen.
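To make the indexing concrete, here is a minimal Python sketch of this one-step predictor and its error signal (the function name and the example coefficients are illustrative choices, not taken from the article):

```python
import numpy as np

def lp_predict(x, a):
    """One-step linear prediction: x_hat(n) = sum_{i=1}^{p} a[i-1] * x[n-i].

    Returns the predictions x_hat(n) and the errors e(n) = x(n) - x_hat(n)
    for n = p .. len(x)-1, where p = len(a).
    """
    x = np.asarray(x, dtype=float)
    p = len(a)
    x_hat = np.array([sum(a[i - 1] * x[n - i] for i in range(1, p + 1))
                      for n in range(p, len(x))])
    e = x[p:] - x_hat
    return x_hat, e

# Example: the 2-tap predictor 2x(n-1) - x(n-2) predicts a straight line exactly
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x_hat, e = lp_predict(x, a=[2.0, -1.0])
print(x_hat)  # [3. 4. 5.]
print(e)      # [0. 0. 0.]
```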

For multi-dimensional signals the error metric is often defined as

$e(n) = \|x(n) - \hat{x}(n)\|,$

where $\|\cdot\|$ is a suitably chosen vector norm.

Predictions such as $\hat{x}(n)$ are routinely used within Kalman filters and smoothers to estimate current and past signal values, respectively, from noisy measurements.[1]

The most common choice in optimization of the parameters $a_i$ is the root mean square criterion, which is also called the autocorrelation criterion.

In this method we minimize the expected value of the squared error $E[e^2(n)]$, which yields the equation

$\sum_{i=1}^{p} a_i R(j-i) = R(j)$ for 1 ≤ j ≤ p,

where $R$ is the autocorrelation of the signal $x_n$, defined as

$R(i) = E\{x(n)\,x(n-i)\},$

and $E$ is the expected value.

In the multi-dimensional case this corresponds to minimizing the $L_2$ norm. The above equations are called the normal equations or Yule–Walker equations; in matrix form they can equivalently be written as $\mathbf{R}\mathbf{a} = \mathbf{r}$, where the autocorrelation matrix $\mathbf{R}$ is a symmetric $p \times p$ Toeplitz matrix with elements $r_{ij} = R(i-j)$, the vector $\mathbf{r}$ is the autocorrelation vector $r_j = R(j)$, and $\mathbf{a}$ is the vector of predictor coefficients.
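As a sketch of how these equations can be solved numerically, the following Python fragment estimates the autocorrelation from data with the usual biased estimator and solves the normal equations with a plain linear solve (names are illustrative; the Levinson recursion discussed below is the standard fast solver):

```python
import numpy as np

def autocorrelation_lpc(x, p):
    """Autocorrelation method: solve sum_i a_i R(j-i) = R(j), j = 1..p.

    R is estimated from the samples with the biased estimator
    R(i) ~= (1/N) sum_n x(n) x(n-i); the Toeplitz system is then
    solved directly (the Levinson recursion would be faster).
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    R = np.array([np.dot(x[: N - i], x[i:]) / N for i in range(p + 1)])
    R_matrix = np.array([[R[abs(j - i)] for i in range(p)] for j in range(p)])
    return np.linalg.solve(R_matrix, R[1 : p + 1])

# Example: a sinusoid x(n) = sin(0.1 n) obeys x(n) = 2cos(0.1)x(n-1) - x(n-2)
n = np.arange(1000)
a = autocorrelation_lpc(np.sin(0.1 * n), p=2)
print(a)  # approximately [1.990, -1.0], i.e. [2*cos(0.1), -1]
```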

Another, more general, approach is to minimize the sum of squares of the errors defined in the form

$e(n) = x(n) - \hat{x}(n) = x(n) - \sum_{i=1}^{p} a_i x(n-i) = -\sum_{i=0}^{p} a_i x(n-i),$

where the optimisation problem searching over all $a_i$ must now be constrained with $a_0 = -1$.
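A minimal least-squares sketch of this formulation, assuming the squared errors are minimized directly over the observed samples (this data-domain variant is commonly known as the covariance method; the helper name is illustrative):

```python
import numpy as np

def covariance_lpc(x, p):
    """Minimize sum_n (x(n) - sum_i a_i x(n-i))^2 over the observed samples.

    Builds one row [x(n-1), ..., x(n-p)] per predictable sample and solves
    the resulting least-squares problem; no autocorrelation estimate is used.
    """
    x = np.asarray(x, dtype=float)
    X = np.array([x[n - p : n][::-1] for n in range(p, len(x))])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a
```

On stationary data this yields coefficients close to those of the autocorrelation method; the two differ mainly in windowing and edge handling.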

On the other hand, if the mean square prediction error is constrained to be unity and the prediction error equation is included on top of the normal equations, the augmented set of equations is obtained as

$\mathbf{R}\mathbf{a} = [1, 0, \dots, 0]^{\mathrm{T}},$

where the index $i$ runs from 0 to $p$, and $\mathbf{R}$ is a $(p+1) \times (p+1)$ matrix.

Specification of the parameters of the linear predictor is a wide topic and a large number of other approaches have been proposed.

In fact, the autocorrelation method is the most common[2] and it is used, for example, for speech coding in the GSM standard.

Solution of the matrix equation $\mathbf{R}\mathbf{a} = \mathbf{r}$ is computationally a relatively expensive process. Gaussian elimination for matrix inversion is probably the oldest solution, but this approach does not efficiently use the symmetry of $\mathbf{R}$.

A faster algorithm is the Levinson recursion proposed by Norman Levinson in 1947, which recursively calculates the solution.[citation needed] In particular, the autocorrelation equations above may be more efficiently solved by the Durbin algorithm.[3]
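A hedged Python sketch of the Levinson–Durbin recursion for the autocorrelation equations above; the sign conventions and names follow this article's formulation rather than any particular reference:

```python
import numpy as np

def levinson_durbin(R, p):
    """Solve sum_i a_i R(j-i) = R(j), j = 1..p, in O(p^2) operations.

    R holds the autocorrelation values R(0), ..., R(p). At each order m a
    reflection coefficient k is computed and the order-(m-1) solution is
    updated, exploiting the Toeplitz symmetry that plain Gaussian
    elimination ignores.
    """
    a = np.zeros(p)
    E = R[0]                       # prediction error power at order 0
    for m in range(1, p + 1):
        k = (R[m] - np.dot(a[: m - 1], R[m - 1 : 0 : -1])) / E
        a_prev = a.copy()
        a[m - 1] = k
        a[: m - 1] = a_prev[: m - 1] - k * a_prev[m - 2 :: -1]
        E *= 1.0 - k * k           # error power shrinks at each order
    return a

# Agrees with a direct solve of the same normal equations:
R = np.array([1.0, 0.5, 0.1])
print(levinson_durbin(R, p=2))                            # [ 0.6 -0.2]
print(np.linalg.solve([[1.0, 0.5], [0.5, 1.0]], R[1:]))   # same
```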

In 1986, Philippe Delsarte and Y. V. Genin proposed an improvement to this algorithm called the split Levinson recursion, which requires about half the number of multiplications and divisions.[4] It uses a special symmetrical property of parameter vectors on subsequent recursion levels.

That is, calculations for the optimal predictor containing $p$ terms make use of similar calculations for the optimal predictor containing $p-1$ terms.

Another way of identifying model parameters is to iteratively calculate state estimates using Kalman filters and to obtain maximum likelihood estimates within expectation–maximization algorithms.

If the discrete-time signal is estimated to obey a polynomial of degree $p-1$, then the predictor coefficients $a_i$ are given by the corresponding row of the triangle of binomial transform coefficients.

This estimate might be suitable for a slowly varying signal with low noise.
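For instance, with $p = 2$ the coefficients are $(2, -1)$, giving the linear extrapolation $\hat{x}(n) = 2x(n-1) - x(n-2)$. A short Python sketch of the construction (the function name is illustrative):

```python
import numpy as np
from math import comb

def binomial_predictor(p):
    """Row p of the binomial-transform triangle: a_i = (-1)**(i+1) * C(p, i).

    Produces [1], [2, -1], [3, -3, 1], [4, -6, 4, -1], ... which predict a
    polynomial of degree p-1 exactly, since its p-th finite difference is 0.
    """
    return np.array([(-1) ** (i + 1) * comb(p, i) for i in range(1, p + 1)])

# A quadratic (degree 2) is predicted exactly with p = 3:
n = np.arange(10)
x = 2.0 * n**2 - 3.0 * n + 1.0
a = binomial_predictor(3)                 # [ 3 -3  1]
x_hat = a @ x[2::-1]                      # predict x[3] from x[2], x[1], x[0]
print(x_hat, x[3])                        # 10.0 10.0
```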