In comparison to these, Levinson recursion (particularly split Levinson recursion) tends to be computationally faster, but more sensitive to numerical inaccuracies such as round-off errors.
The Bareiss algorithm for Toeplitz matrices (not to be confused with the general Bareiss algorithm) runs about as fast as Levinson recursion, but it uses O(n²) space, whereas Levinson recursion uses only O(n) space.
The Bareiss algorithm, though, is numerically stable,[1][2] whereas Levinson recursion is at best only weakly stable (i.e. it exhibits numerical stability for well-conditioned linear systems).
Levinson recursion remains popular for several reasons: for one, it is relatively easy to understand in comparison with the superfast algorithms; for another, it can be faster than a superfast algorithm for small n (usually n < 256).
[7] Matrix equations follow the form

  M x = y.

The Levinson–Durbin algorithm may be used for any such equation, as long as M is a known Toeplitz matrix with a nonzero main diagonal. Here y is a known vector, and x is an unknown vector of numbers x_i yet to be determined.
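A Toeplitz matrix is constant along each diagonal, so it is fully determined by its first column and first row. A minimal sketch of building one (the helper name `toeplitz` and the list-of-lists representation are illustrative, not part of the algorithm):

```python
def toeplitz(c, r):
    """Build an N x N Toeplitz matrix from its first column c and its
    first row r (c[0] and r[0] must agree).  Entry (i, j) depends only
    on the index offset i - j."""
    n = len(c)
    return [[c[i - j] if i >= j else r[j - i] for j in range(n)]
            for i in range(n)]
```

For example, `toeplitz([1, 5, 6], [1, 2, 3])` gives `[[1, 2, 3], [5, 1, 2], [6, 5, 1]]`, whose diagonals are each constant.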
For the sake of this article, ê_i is a vector made up entirely of zeroes, except for its i-th place, which holds the value one.
Finally, in this article, superscripts refer to an inductive index, whereas subscripts denote indices. For example (and definition), the matrix T^n is an n × n matrix that copies the upper-left n × n block of M; that is, T^n_{ij} = M_{ij}.
T^n is also a Toeplitz matrix, meaning that its entries depend only on the difference of their indices, so it can be written as

  T^n_{ij} = t_{i−j},  for 1 ≤ i, j ≤ n.

The algorithm proceeds in two steps. In the first step, two sets of vectors, called the forward and backward vectors, are established.
The backwards vectors are necessary for the second step, where they are used to build the solution desired.
Levinson–Durbin recursion defines the nth "forward vector", denoted f^n, as the vector of length n which satisfies

  T^n f^n = ê_1.

The nth "backward vector", denoted b^n, is defined similarly; it is the vector of length n which satisfies

  T^n b^n = ê_n.

An important simplification can occur when M is a symmetric matrix; then the two vectors are related by b^n_i = f^n_{n+1−i}; that is, they are row-reversals of each other. This can save some extra computation in that special case.
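This reversal property can be checked numerically on a small case. A minimal sketch, assuming a 2 × 2 symmetric Toeplitz matrix with t_0 = 4 and t_1 = 1 (values chosen purely for illustration), with both vectors obtained by Cramer's rule:

```python
# Symmetric 2x2 Toeplitz matrix T = [[t0, t1], [t1, t0]]
t0, t1 = 4.0, 1.0
det = t0 * t0 - t1 * t1

# Forward vector: solves T f = (1, 0)^T, by Cramer's rule
f = [t0 / det, -t1 / det]
# Backward vector: solves T b = (0, 1)^T
b = [-t1 / det, t0 / det]

# In the symmetric case, b is the row-reversal of f
assert b == list(reversed(f))
```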
First, the forward vector may be extended with a zero to obtain

  T^n [f^{n−1} ; 0]^T = [1, 0, …, 0, ε_f^n]^T.

In going from T^{n−1} to T^n, the extra column added to the matrix does not perturb the solution when a zero is used to extend the forward vector.
However, the extra row added to the matrix has perturbed the solution, creating an unwanted error term ε_f^n, which occurs in the last place.
The above equation gives it the value

  ε_f^n = Σ_{i=1}^{n−1} M_{n,i} f_i^{n−1} = Σ_{i=1}^{n−1} t_{n−i} f_i^{n−1}.

This error will be returned to shortly and eliminated from the new forward vector; but first, the backward vector must be extended in a similar (albeit reversed) fashion:

  T^n [0 ; b^{n−1}]^T = [ε_b^n, 0, …, 0, 1]^T,  where ε_b^n = Σ_{i=2}^{n} M_{1,i} b_{i−1}^{n−1} = Σ_{i=2}^{n} t_{1−i} b_{i−1}^{n−1}.

Using the linearity of matrix multiplication, the following identity holds for all (α, β):

  T^n ( α [f^{n−1} ; 0]^T + β [0 ; b^{n−1}]^T ) = α [1, 0, …, 0, ε_f^n]^T + β [ε_b^n, 0, …, 0, 1]^T.

If α and β are chosen so that the right-hand side yields ê_1 or ê_n, then the quantity in the parentheses will fulfill the definition of the nth forward or backward vector, respectively.
With those α and β chosen, the vector sum in the parentheses is simple and yields the desired result.
Setting the right-hand side equal to ê_1 and to ê_n in turn, one gets the following equations:

  α [1, 0, …, 0, ε_f^n]^T + β [ε_b^n, 0, …, 0, 1]^T = ê_1   (forward),
  α [1, 0, …, 0, ε_f^n]^T + β [ε_b^n, 0, …, 0, 1]^T = ê_n   (backward).

Now, all the zeroes in the middle of the two vectors above being disregarded and collapsed, only the following equations are left:

  α + ε_b^n β = 1,  ε_f^n α + β = 0   (forward),
  α + ε_b^n β = 0,  ε_f^n α + β = 1   (backward).

With these solved for (by using the Cramer 2×2 matrix inverse formula), the new forward and backward vectors are

  f^n = ( [f^{n−1} ; 0]^T − ε_f^n [0 ; b^{n−1}]^T ) / (1 − ε_b^n ε_f^n),
  b^n = ( [0 ; b^{n−1}]^T − ε_b^n [f^{n−1} ; 0]^T ) / (1 − ε_b^n ε_f^n).

Performing these vector summations, then, gives the nth forward and backward vectors from the prior ones.
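One step of this first-stage update can be sketched as follows. The sketch assumes the Toeplitz matrix is given by its first column `c` (so `c[k]` = t_k) and first row `r` (so `r[k]` = t_{−k}), both 0-indexed; the function name `extend` is illustrative:

```python
def extend(f, b, c, r):
    """Given forward/backward vectors of length n-1 for a Toeplitz
    matrix, return the length-n vectors via the 2x2 elimination."""
    n = len(f) + 1
    # Error terms picked up in the last / first place of the extensions
    eps_f = sum(c[n - 1 - i] * f[i] for i in range(n - 1))
    eps_b = sum(r[i + 1] * b[i] for i in range(n - 1))
    d = 1.0 - eps_f * eps_b          # determinant of the 2x2 system
    fe, be = f + [0.0], [0.0] + b    # zero-extended vectors
    f_new = [(fe[i] - eps_f * be[i]) / d for i in range(n)]
    b_new = [(be[i] - eps_b * fe[i]) / d for i in range(n)]
    return f_new, b_new
```

For instance, with `c = [4, 1]` and `r = [4, 2]` (so T² = [[4, 2], [1, 4]]) and the order-1 vectors f¹ = b¹ = [0.25], one can check that the returned vectors satisfy T² f² = ê_1 and T² b² = ê_2.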
The solution is then built recursively by noticing that if x^{n−1} satisfies

  T^{n−1} x^{n−1} = [y_1, y_2, …, y_{n−1}]^T,

then, extending with a zero again, and defining an error constant where necessary:

  T^n [x^{n−1} ; 0]^T = [y_1, y_2, …, y_{n−1}, ε_x^n]^T,  where ε_x^n = Σ_{i=1}^{n−1} M_{n,i} x_i^{n−1}.

We can then use the nth backward vector to eliminate the error term and replace it with the desired entry y_n as follows:

  x^n = [x^{n−1} ; 0]^T + (y_n − ε_x^n) b^n.

Extending this method until n = N yields the solution x = x^N.
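Putting both steps together, the whole recursion can be sketched in pure, unoptimized Python. The matrix is again assumed to be given by its first column `c` (t_0 … t_{N−1}) and first row `r` (t_0 … t_{−(N−1)}), 0-indexed; the function name `levinson` is an assumption, not a standard API:

```python
def levinson(c, r, y):
    """Solve T x = y for the Toeplitz matrix T with first column c and
    first row r (c[0] == r[0] != 0), in O(N^2) time and O(N) space."""
    N = len(y)
    f = [1.0 / c[0]]           # order-1 forward vector
    b = [1.0 / c[0]]           # order-1 backward vector
    x = [y[0] / c[0]]          # order-1 solution
    for n in range(2, N + 1):
        # Step 1: error terms from zero-extending the order-(n-1) vectors
        eps_f = sum(c[n - 1 - i] * f[i] for i in range(n - 1))
        eps_b = sum(r[i + 1] * b[i] for i in range(n - 1))
        d = 1.0 - eps_f * eps_b
        fe, be = f + [0.0], [0.0] + b
        f = [(fe[i] - eps_f * be[i]) / d for i in range(n)]
        b = [(be[i] - eps_b * fe[i]) / d for i in range(n)]
        # Step 2: extend the solution and cancel its error term with
        # the new backward vector
        eps_x = sum(c[n - 1 - i] * x[i] for i in range(n - 1))
        xe = x + [0.0]
        x = [xe[i] + (y[n - 1] - eps_x) * b[i] for i in range(n)]
    return x
```

As a usage check: for the 3 × 3 Toeplitz system with `c = [4, 1, 0.5]`, `r = [4, 2, 0.3]`, and right-hand side `y = [8.9, 15, 14.5]` (chosen so that the exact solution is (1, 2, 3)), the recursion recovers the solution up to round-off.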