Recursive least squares filter

Recursive least squares (RLS) is an adaptive filter algorithm that recursively finds the coefficients that minimize a weighted linear least squares cost function relating to the input signals.

Compared to most of its competitors, the RLS exhibits extremely fast convergence.

The intent of the RLS filter is to recover the desired signal $d(n)$ by use of a $p+1$-tap FIR filter with coefficient vector $\mathbf{w}_n$, producing the estimate

$\hat{d}(n) = \sum_{k=0}^{p} w_n(k)\, x(n-k) = \mathbf{w}_n^{\mathsf{T}} \mathbf{x}_n,$

where $\mathbf{x}_n = [x(n),\, x(n-1),\, \ldots,\, x(n-p)]^{\mathsf{T}}$ is the column vector containing the $p+1$ most recent samples of the input signal $x(n)$.
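As an illustration of this filtering step, here is a minimal NumPy sketch of the estimate $\hat{d}(n) = \mathbf{w}_n^{\mathsf{T}} \mathbf{x}_n$; the function name and the assumption that the window is fully populated ($n \ge p$) are illustrative choices, not part of the original text.

```python
import numpy as np

def fir_estimate(w, x, n):
    """Estimate d_hat(n) = w^T x_n from the p+1 most recent input samples.

    w : (p+1,) NumPy array of filter coefficients
    x : 1-D NumPy array holding the input signal
    n : current time index (assumed n >= p)
    """
    p = len(w) - 1
    x_n = x[n - p:n + 1][::-1]   # x_n = [x(n), x(n-1), ..., x(n-p)]
    return w @ x_n
```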

As time evolves, it is desired to avoid completely redoing the least squares algorithm to find the new estimate for $\mathbf{w}_{n+1}$ in terms of $\mathbf{w}_n$.

The benefit of the RLS algorithm is that there is no need to invert matrices, thereby saving computational cost.

The idea behind RLS filters is to minimize a cost function $C$ by appropriately selecting the filter coefficients $\mathbf{w}_n$, updating the filter as new data arrive.

The error signal $e(n) = d(n) - \hat{d}(n)$ implicitly depends on the filter coefficients through the estimate $\hat{d}(n)$.

The weighted least squares cost function to be minimized is therefore

$C(\mathbf{w}_n) = \sum_{i=0}^{n} \lambda^{n-i} e^{2}(i),$

where $0 < \lambda \le 1$ is the "forgetting factor" which gives exponentially less weight to older error samples.

The cost function is minimized by taking the partial derivatives with respect to all entries $k$ of the coefficient vector $\mathbf{w}_n$ and setting the results to zero:

$\frac{\partial C(\mathbf{w}_n)}{\partial w_n(k)} = \sum_{i=0}^{n} 2\,\lambda^{n-i} e(i)\, \frac{\partial e(i)}{\partial w_n(k)} = -\sum_{i=0}^{n} 2\,\lambda^{n-i} e(i)\, x(i-k) = 0, \qquad k = 0, \ldots, p.$

Next, replace $e(i)$ with the definition of the error signal:

$\sum_{i=0}^{n} \lambda^{n-i} \left[ d(i) - \sum_{l=0}^{p} w_n(l)\, x(i-l) \right] x(i-k) = 0, \qquad k = 0, \ldots, p.$

Rearranging the equation yields

$\sum_{l=0}^{p} w_n(l) \left[ \sum_{i=0}^{n} \lambda^{n-i} x(i-l)\, x(i-k) \right] = \sum_{i=0}^{n} \lambda^{n-i} d(i)\, x(i-k), \qquad k = 0, \ldots, p.$

This form can be expressed in terms of matrices as

$\mathbf{R}_x(n)\, \mathbf{w}_n = \mathbf{r}_{dx}(n),$

where $\mathbf{R}_x(n)$ is the weighted sample covariance matrix of $x(n)$ and $\mathbf{r}_{dx}(n)$ is the equivalent estimate of the cross-covariance between $d(n)$ and $x(n)$.

Based on this expression we find the coefficients which minimize the cost function as

$\mathbf{w}_n = \mathbf{R}_x^{-1}(n)\, \mathbf{r}_{dx}(n).$

This is the main result of the discussion.
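The following NumPy sketch solves this normal equation directly at a single time $n$; it is the non-recursive approach that the recursive algorithm below avoids repeating at every step. The function name and argument conventions are illustrative.

```python
import numpy as np

def batch_wls_coefficients(x, d, p, lam):
    """Solve R_x(n) w_n = r_dx(n) for the exponentially weighted least squares filter."""
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    n = len(x) - 1
    R = np.zeros((p + 1, p + 1))
    r = np.zeros(p + 1)
    for i in range(p, n + 1):
        x_i = x[i - p:i + 1][::-1]        # [x(i), x(i-1), ..., x(i-p)]
        wt = lam ** (n - i)               # forgetting-factor weight lam^(n-i)
        R += wt * np.outer(x_i, x_i)      # weighted sample covariance R_x(n)
        r += wt * d[i] * x_i              # weighted cross-covariance r_dx(n)
    return np.linalg.solve(R, r)          # w_n = R_x^{-1}(n) r_dx(n)
```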

The smaller $\lambda$ is, the smaller is the contribution of previous samples to the covariance matrix.

The $\lambda = 1$ case is referred to as the growing window RLS algorithm.[2]

The discussion resulted in a single equation to determine a coefficient vector which minimizes the cost function.

In this section we want to derive a recursive solution of the form

$\mathbf{w}_n = \mathbf{w}_{n-1} + \Delta\mathbf{w}_{n-1},$

where $\Delta\mathbf{w}_{n-1}$ is a correction factor at time $n-1$.

We start the derivation of the recursive algorithm by expressing the cross covariance $\mathbf{r}_{dx}(n)$ in terms of $\mathbf{r}_{dx}(n-1)$:

$\mathbf{r}_{dx}(n) = \sum_{i=0}^{n} \lambda^{n-i} d(i)\, \mathbf{x}(i) = \lambda\, \mathbf{r}_{dx}(n-1) + d(n)\, \mathbf{x}(n),$

where $\mathbf{x}(i)$ is the $p+1$ dimensional data vector $\mathbf{x}(i) = [x(i),\, x(i-1),\, \ldots,\, x(i-p)]^{\mathsf{T}}$. Similarly we express $\mathbf{R}_x(n)$ in terms of $\mathbf{R}_x(n-1)$ by

$\mathbf{R}_x(n) = \lambda\, \mathbf{R}_x(n-1) + \mathbf{x}(n)\, \mathbf{x}^{\mathsf{T}}(n).$
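These two rank-one recursions are straightforward to implement; a minimal NumPy sketch (function and argument names assumed) is:

```python
import numpy as np

def update_covariances(R_prev, r_prev, x_n, d_n, lam):
    """One step of R_x(n) = lam*R_x(n-1) + x(n)x(n)^T and r_dx(n) = lam*r_dx(n-1) + d(n)x(n).

    x_n is the (p+1,) data vector [x(n), x(n-1), ..., x(n-p)].
    """
    R_new = lam * R_prev + np.outer(x_n, x_n)
    r_new = lam * r_prev + d_n * x_n
    return R_new, r_new
```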

In order to generate the coefficient vector we are interested in the inverse of the deterministic auto-covariance matrix.

For that task the Woodbury matrix identity comes in handy.

With $\mathbf{A} = \lambda\, \mathbf{R}_x(n-1)$, $\mathbf{U} = \mathbf{x}(n)$, $\mathbf{V} = \mathbf{x}^{\mathsf{T}}(n)$ and $\mathbf{C} = 1$, the Woodbury matrix identity gives

$\mathbf{R}_x^{-1}(n) = \lambda^{-1} \mathbf{R}_x^{-1}(n-1) - \lambda^{-1} \mathbf{R}_x^{-1}(n-1)\, \mathbf{x}(n) \left\{ 1 + \mathbf{x}^{\mathsf{T}}(n)\, \lambda^{-1} \mathbf{R}_x^{-1}(n-1)\, \mathbf{x}(n) \right\}^{-1} \mathbf{x}^{\mathsf{T}}(n)\, \lambda^{-1} \mathbf{R}_x^{-1}(n-1).$

To come in line with the standard literature, we define

$\mathbf{P}(n) = \mathbf{R}_x^{-1}(n) = \lambda^{-1} \mathbf{P}(n-1) - \mathbf{g}(n)\, \mathbf{x}^{\mathsf{T}}(n)\, \lambda^{-1} \mathbf{P}(n-1),$

where the gain vector $\mathbf{g}(n)$ is

$\mathbf{g}(n) = \lambda^{-1} \mathbf{P}(n-1)\, \mathbf{x}(n) \left\{ 1 + \mathbf{x}^{\mathsf{T}}(n)\, \lambda^{-1} \mathbf{P}(n-1)\, \mathbf{x}(n) \right\}^{-1} = \mathbf{P}(n-1)\, \mathbf{x}(n) \left\{ \lambda + \mathbf{x}^{\mathsf{T}}(n)\, \mathbf{P}(n-1)\, \mathbf{x}(n) \right\}^{-1}.$

Before we move on, it is necessary to bring $\mathbf{g}(n)$ into another form:

$\mathbf{g}(n) \left\{ 1 + \mathbf{x}^{\mathsf{T}}(n)\, \lambda^{-1} \mathbf{P}(n-1)\, \mathbf{x}(n) \right\} = \lambda^{-1} \mathbf{P}(n-1)\, \mathbf{x}(n).$

Subtracting the second term on the left side yields

$\mathbf{g}(n) = \lambda^{-1} \mathbf{P}(n-1)\, \mathbf{x}(n) - \mathbf{g}(n)\, \mathbf{x}^{\mathsf{T}}(n)\, \lambda^{-1} \mathbf{P}(n-1)\, \mathbf{x}(n) = \lambda^{-1} \left[ \mathbf{P}(n-1) - \mathbf{g}(n)\, \mathbf{x}^{\mathsf{T}}(n)\, \mathbf{P}(n-1) \right] \mathbf{x}(n).$

With the recursive definition of $\mathbf{P}(n)$ the desired form follows: $\mathbf{g}(n) = \mathbf{P}(n)\, \mathbf{x}(n)$.
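As a quick numerical sanity check of the Woodbury step, the sketch below (all names and sizes illustrative, assuming NumPy) compares the rank-one update of $\mathbf{P}(n)$ with a direct inversion of $\mathbf{R}_x(n)$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, p = 0.98, 3
A = np.eye(p + 1) + 0.1 * rng.standard_normal((p + 1, p + 1))
R_prev = A @ A.T                                 # a positive-definite stand-in for R_x(n-1)
P_prev = np.linalg.inv(R_prev)                   # P(n-1)
x_n = rng.standard_normal(p + 1)                 # data vector x(n)

g = P_prev @ x_n / (lam + x_n @ P_prev @ x_n)    # gain vector g(n)
P_new = (P_prev - np.outer(g, x_n) @ P_prev) / lam   # P(n) via the Woodbury identity

P_direct = np.linalg.inv(lam * R_prev + np.outer(x_n, x_n))  # invert R_x(n) directly
assert np.allclose(P_new, P_direct)
```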

We are now ready to complete the recursion. Substituting the recursive definitions of $\mathbf{r}_{dx}(n)$ and $\mathbf{P}(n)$ together with the alternate form of $\mathbf{g}(n)$, and using $\mathbf{w}_{n-1} = \mathbf{P}(n-1)\, \mathbf{r}_{dx}(n-1)$, we arrive at the update equation

$\mathbf{w}_n = \mathbf{w}_{n-1} + \mathbf{g}(n)\, \alpha(n), \qquad \alpha(n) = d(n) - \mathbf{x}^{\mathsf{T}}(n)\, \mathbf{w}_{n-1},$

where $\alpha(n)$ is the a priori error, calculated before the filter is updated. Compare this with the a posteriori error, the error calculated after the filter is updated:

$e(n) = d(n) - \mathbf{x}^{\mathsf{T}}(n)\, \mathbf{w}_n.$

That means we found the correction factor

$\Delta\mathbf{w}_{n-1} = \mathbf{g}(n)\, \alpha(n).$

This intuitively satisfying result indicates that the correction factor is directly proportional to both the error and the gain vector, which controls how much sensitivity is desired through the weighting factor $\lambda$.

The recursion for $\mathbf{P}(n)$ follows an algebraic Riccati equation and thus draws parallels to the Kalman filter.[3]
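Putting the recursion together, here is a compact NumPy sketch of the complete algorithm; the function name, the regularized initialization $\mathbf{P}(0) = \delta \mathbf{I}$, and the default parameter values are illustrative choices rather than prescriptions from the text.

```python
import numpy as np

def rls_filter(x, d, p=4, lam=0.99, delta=100.0):
    """Exponentially weighted RLS update loop.

    x, d  : input and desired signals (1-D arrays of equal length)
    p     : filter order (p+1 taps)
    lam   : forgetting factor, 0 < lam <= 1
    delta : scale of the initial inverse covariance P(0) = delta * I
    """
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    w = np.zeros(p + 1)              # initial coefficient vector
    P = delta * np.eye(p + 1)        # P(0) = R_x^{-1}(0), large to reflect low confidence
    d_hat = np.zeros(len(x))         # a posteriori estimates of d(n)
    for n in range(p, len(x)):
        x_n = x[n - p:n + 1][::-1]                   # [x(n), x(n-1), ..., x(n-p)]
        alpha = d[n] - x_n @ w                       # a priori error alpha(n)
        g = P @ x_n / (lam + x_n @ P @ x_n)          # gain vector g(n)
        P = (P - np.outer(g, x_n) @ P) / lam         # P(n) via the Woodbury identity
        w = w + g * alpha                            # w_n = w_{n-1} + g(n) alpha(n)
        d_hat[n] = x_n @ w                           # estimate after the update
    return w, d_hat
```

For instance, if `d` is a noisy FIR-filtered version of `x`, `w` should adapt toward the unknown filter's taps, with `lam` close to 1 trading tracking speed for smoother coefficient estimates.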

The lattice recursive least squares adaptive filter is related to the standard RLS except that it requires fewer arithmetic operations (order $N$).[4]

It offers additional advantages over conventional LMS algorithms such as faster convergence rates, modular structure, and insensitivity to variations in eigenvalue spread of the input correlation matrix.

The LRLS algorithm described is based on a posteriori errors and includes the normalized form.

The derivation is similar to the standard RLS algorithm and is based on the definition of the desired signal $d(k)$. In the forward prediction case, $d(k) = x(k)$, with the input signal $x(k-1)$ as the most up-to-date sample. In the backward prediction case, $d(k) = x(k-i-1)$, where $i$ is the index of the sample in the past we want to predict, and the input signal $x(k)$ is the most recent sample.
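The two choices of desired signal can be made concrete with a trivial helper (purely illustrative; the function name and indexing convention are assumptions, and the lattice recursions themselves are not shown):

```python
def lrls_desired_signals(x, k, i):
    """Desired signals d(k) used in the LRLS derivation, for a signal array x."""
    d_forward = x[k]           # forward prediction: predict x(k), newest usable input is x(k-1)
    d_backward = x[k - i - 1]  # backward prediction: predict the sample i+1 steps in the past
    return d_forward, d_backward
```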

The normalized form of the LRLS has fewer recursions and variables. It can be calculated by applying a normalization to the internal variables of the algorithm, which keeps their magnitude bounded by one.

This form is generally not used in real-time applications because of the number of division and square-root operations, which comes with a high computational load.