Recursive least squares filter

The recursive least squares (RLS) algorithm is used in adaptive filters to find the filter coefficients that recursively minimize a least squares cost, the sum of the squared magnitudes of the error signal (the difference between the desired and the actual signal). This is in contrast to other algorithms that aim to reduce the mean square error. The difference is that RLS filters depend on the signals themselves, whereas MSE filters depend on their statistics (specifically, the autocorrelation of the input and the cross-correlation of the input and desired signals). If these statistics are known, an MSE filter with fixed coefficients (i.e., independent of the incoming data) can be built.

Motivation

Suppose that a signal \mathbf{d} is transmitted over an echoey, noisy channel that causes it to be received as

x(n)=\sum_{k=0}^q b_n(k) d(n-k)+v(n)

where v(n) represents white noise. We will attempt to recover the desired signal \mathbf{d} by means of an FIR filter with p+1 taps, \mathbf{w}:

\hat{d}(n) = y(n) = \mathbf{w}_n^\mathit{T} \mathbf{x}(n)
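
As a concrete illustration of this setup, the following sketch simulates a short transmission through a made-up three-tap echo channel with additive white noise and evaluates the output of one fixed candidate FIR filter at a single time step. The channel taps, noise level, and filter coefficients are arbitrary, and the channel is taken to be time-invariant for simplicity even though the model above allows time-varying taps b_n(k):

import numpy as np

rng = np.random.default_rng(0)
N, p = 200, 2

d = rng.standard_normal(N)                   # transmitted (desired) signal d(n)
b = np.array([1.0, 0.5, 0.25])               # hypothetical echo-channel taps
v = 0.1 * rng.standard_normal(N)             # white noise v(n)
x = np.convolve(d, b)[:N] + v                # received signal x(n) = sum_k b(k) d(n-k) + v(n)

w = np.array([0.9, -0.4, 0.1])               # arbitrary candidate filter coefficients w_n
n = 50
x_vec = x[n:n - p - 1:-1]                    # data vector [x(n), x(n-1), ..., x(n-p)]
d_hat = w @ x_vec                            # filter output y(n) = w_n^T x(n), the estimate of d(n)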

Our goal is to estimate the parameters of the filter \mathbf{w}, and at each time n we refer to the current least squares estimate as \mathbf{w}_n. As time evolves, we would like to avoid completely redoing the least squares computation and instead express the new estimate \mathbf{w}_{n+1} in terms of the previous estimate \mathbf{w}_n.

The benefit of the RLS algorithm is that there is no need to invert the autocorrelation matrix directly at each step, thereby saving computational cost. Another advantage is that it provides intuition behind such results as the Kalman filter.

Discussion

The idea behind RLS filters is to minimize a cost function C by appropriately selecting the filter coefficients \mathbf{w}_n, updating the filter as new data arrive. The error signal e(n) is obtained by subtracting the filter's estimate from the desired signal d(n) in a negative feedback arrangement.

The error implicitly depends on the filter coefficients through the estimate \hat{d}(n):

e(n)=d(n)-\hat{d}(n)

The weighted least squares error function C—the cost function we desire to minimize—being a function of e(n) is therefore also dependent on the filter coefficients:

C(\mathbf{\mathbf{w}_n})=\sum_{i=0}^{n}\lambda^{n-i}|e(i)|^{2}=\sum_{i=0}^{n}\lambda^{n-i}e(i)\,e^{*}(i)

where 0<\lambda\le 1 is an exponential weighting factor, commonly called the forgetting factor, which effectively limits the number of past input samples that contribute significantly to the cost function.

The cost function is minimized by taking the partial derivatives for all entries k of the coefficient vector \mathbf{w}_{n} and setting the results to zero

\frac{\partial C(\mathbf{w}_{n})}{\partial w^{*}_{n}(k)}=\sum_{i=0}^{n}\lambda^{n-i}e(i)\,\frac{\partial e^{*}(i)}{\partial w^{*}_{n}(k)}=-\sum_{i=0}^{n}\lambda^{n-i}e(i)\,x^{*}(i-k)=0
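
Here the inner derivative follows from writing out the conjugated error, e^{*}(i)=d^{*}(i)-\sum_{l=0}^{p}w^{*}_{n}(l)\,x^{*}(i-l), which is linear in the conjugated coefficients, so that

\frac{\partial e^{*}(i)}{\partial w^{*}_{n}(k)}=-x^{*}(i-k)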

Next, replace e(i) with the definition of the error signal

\sum_{i=0}^{n}\lambda^{n-i}\left[d(i)-\sum_{l=0}^{p}w_{n}(l)x(i-l)\right]x^{*}(i-k)= 0

Rearranging the equation yields

\sum_{l=0}^{p}w_{n}(l)\left[\sum_{i=0}^{n}\lambda^{n-i}\,x(i-l)x^{*}(i-k)\right]= \sum_{i=0}^{n}\lambda^{n-i}d(i)x^{*}(i-k)

This form can be expressed in terms of matrices

\mathbf{R}_{x}(n)\,\mathbf{w}_{n}=\mathbf{r}_{dx}(n)

where \mathbf{R}_{x}(n) is the weighted autocorrelation matrix for x(n) and \mathbf{r}_{dx}(n) is the cross-correlation between d(n) and x(n). Based on this expression we find the coefficients which minimize the cost function as

\mathbf{w}_{n}=\mathbf{R}_{x}^{-1}(n)\,\mathbf{r}_{dx}(n)

This is the main result of the discussion.
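
To make this batch solution concrete before deriving the recursion, the following sketch (real-valued signals assumed; the function and variable names are illustrative, not taken from any standard library) forms the weighted normal equations at a single time index and solves them directly:

import numpy as np

def batch_rls_weights(x, d, p, lam):
    """Solve R_x(n) w_n = r_dx(n) directly at the final time index n = len(x) - 1.

    x, d : real-valued input and desired signals of equal length
    p    : filter order (p + 1 coefficients)
    lam  : exponential weighting factor, 0 < lam <= 1
    """
    n = len(x) - 1
    R = np.zeros((p + 1, p + 1))
    r = np.zeros(p + 1)
    for i in range(n + 1):
        # data vector [x(i), x(i-1), ..., x(i-p)], zero for negative indices
        xi = np.array([x[i - k] if i - k >= 0 else 0.0 for k in range(p + 1)])
        weight = lam ** (n - i)
        R += weight * np.outer(xi, xi)       # weighted autocorrelation matrix R_x(n)
        r += weight * d[i] * xi              # weighted cross-correlation vector r_dx(n)
    # assumes enough samples have been observed for R to be invertible
    return np.linalg.solve(R, r)

Re-solving this (p+1)-by-(p+1) system from scratch at every new sample is exactly the cost that the recursive algorithm below avoids.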

Choosing λ

The smaller λ is, the smaller the contribution of previous samples to the cost function. This makes the filter more sensitive to recent samples, which means more fluctuation in the filter coefficients. The λ = 1 case is referred to as the growing window RLS algorithm.
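
A rough rule of thumb follows from the total weight that the cost function assigns to past samples: since

\sum_{k=0}^{\infty}\lambda^{k}=\frac{1}{1-\lambda}

the filter behaves approximately as if it averaged over an effective window of 1/(1-\lambda) samples. For example, \lambda=0.98 corresponds to a memory of roughly 50 samples, while \lambda=1 weights all past samples equally.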

Recursive algorithm

The discussion resulted in a single equation to determine a coefficient vector which minimizes the cost function. In this section we want to derive a recursive solution of the form

\mathbf{w}_{n}=\mathbf{w}_{n-1}+\Delta\mathbf{w}_{n-1}

where \Delta\mathbf{w}_{n-1} is a correction factor at time n-1. We start the derivation of the recursive algorithm by expressing the cross correlation \mathbf{r}_{dx}(n) in terms of \mathbf{r}_{dx}(n-1)

\mathbf{r}_{dx}(n) =\sum_{i=0}^{n}\lambda^{n-i}d(i)\mathbf{x}^{*}(i)
=\sum_{i=0}^{n-1}\lambda^{n-i}d(i)\mathbf{x}^{*}(i)+\lambda^{0}d(n)\mathbf{x}^{*}(n)
=\lambda\mathbf{r}_{dx}(n-1)+d(n)\mathbf{x}^{*}(n)

where \mathbf{x}(i) is the p+1 dimensional data vector

\mathbf{x}(i)=[x(i), x(i-1), \dots , x(i-p) ]^{T}

Similarly we express \mathbf{R}_{x}(n) in terms of \mathbf{R}_{x}(n-1) by

\mathbf{R}_{x}(n) =\sum_{i=0}^{n}\lambda^{n-i}\mathbf{x}^{*}(i)\mathbf{x}^{T}(i)
=\lambda\mathbf{R}_{x}(n-1)+\mathbf{x}^{*}(n)\mathbf{x}^{T}(n)

In order to generate the coefficient vector we are interested in the inverse of the deterministic autocorrelation matrix. For that task the Woodbury matrix identity comes in handy. With

A =\lambda\mathbf{R}_{x}(n-1) is (p+1)-by-(p+1)
U =\mathbf{x}^{*}(n) is (p+1)-by-1
V =\mathbf{x}^{T}(n) is 1-by-(p+1)
C = I_{1} is the 1-by-1 identity matrix

The Woodbury matrix identity follows

\mathbf{R}_{x}^{-1}(n) = \left[\lambda\mathbf{R}_{x}(n-1)+\mathbf{x}^{*}(n)\mathbf{x}^{T}(n)\right]^{-1}
= \lambda^{-1}\mathbf{R}_{x}^{-1}(n-1)
-\lambda^{-1}\mathbf{R}_{x}^{-1}(n-1)\mathbf{x}^{*}(n)
\left\{1+\mathbf{x}^{T}(n)\lambda^{-1}\mathbf{R}_{x}^{-1}(n-1)\mathbf{x}^{*}(n)\right\}^{-1} \mathbf{x}^{T}(n)\lambda^{-1}\mathbf{R}_{x}^{-1}(n-1)
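
As a quick numerical sanity check of this rank-one update, the following sketch compares it against a direct inversion; the well-conditioned matrix standing in for \mathbf{R}_{x}(n-1) and the data vector are made up, and real-valued data are assumed so the conjugations disappear:

import numpy as np

rng = np.random.default_rng(1)
p, lam = 3, 0.99

M = rng.standard_normal((p + 1, p + 1))
R_prev = M @ M.T + np.eye(p + 1)        # stand-in for R_x(n-1): symmetric positive definite
x_n = rng.standard_normal(p + 1)        # new data vector x(n)

A_inv = np.linalg.inv(lam * R_prev)     # inverse of lambda * R_x(n-1)

# Rank-one (Sherman-Morrison / Woodbury) update of the inverse
denom = 1.0 + x_n @ A_inv @ x_n
R_inv_updated = A_inv - np.outer(A_inv @ x_n, x_n @ A_inv) / denom

# Direct inversion of lambda * R_x(n-1) + x(n) x(n)^T for comparison
R_inv_direct = np.linalg.inv(lam * R_prev + np.outer(x_n, x_n))
assert np.allclose(R_inv_updated, R_inv_direct)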

To come in line with the standard literature, we define

\mathbf{P}(n) =\mathbf{R}_{x}^{-1}(n)
=\lambda^{-1}\mathbf{P}(n-1)-\mathbf{g}(n)\mathbf{x}^{T}(n)\lambda^{-1}\mathbf{P}(n-1)

where the gain vector g(n) is

\mathbf{g}(n) =\lambda^{-1}\mathbf{P}(n-1)\mathbf{x}^{*}(n)\left\{1+\mathbf{x}^{T}(n)\lambda^{-1}\mathbf{P}(n-1)\mathbf{x}^{*}(n)\right\}^{-1}
=\mathbf{P}(n-1)\mathbf{x}^{*}(n)\left\{\lambda+\mathbf{x}^{T}(n)\mathbf{P}(n-1)\mathbf{x}^{*}(n)\right\}^{-1}

Before we move on, it is necessary to bring \mathbf{g}(n) into another form

\mathbf{g}(n)\left\{1+\mathbf{x}^{T}(n)\lambda^{-1}\mathbf{P}(n-1)\mathbf{x}^{*}(n)\right\} =\lambda^{-1}\mathbf{P}(n-1)\mathbf{x}^{*}(n)
\mathbf{g}(n)+\mathbf{g}(n)\mathbf{x}^{T}(n)\lambda^{-1}\mathbf{P}(n-1)\mathbf{x}^{*}(n) =\lambda^{-1}\mathbf{P}(n-1)\mathbf{x}^{*}(n)

Subtracting the second term on the left side yields

\mathbf{g}(n) =\lambda^{-1}\mathbf{P}(n-1)\mathbf{x}^{*}(n)-\mathbf{g}(n)\mathbf{x}^{T}(n)\lambda^{-1}\mathbf{P}(n-1)\mathbf{x}^{*}(n)
=\lambda^{-1}\left[\mathbf{P}(n-1)-\mathbf{g}(n)\mathbf{x}^{T}(n)\mathbf{P}(n-1)\right]\mathbf{x}^{*}(n)

With the recursive definition of \mathbf{P}(n) the desired form follows

\mathbf{g}(n)=\mathbf{P}(n)\mathbf{x}^{*}(n)

Now we are ready to complete the recursion. As discussed

\mathbf{w}_{n} =\mathbf{P}(n)\,\mathbf{r}_{dx}(n)
=\lambda\mathbf{P}(n)\,\mathbf{r}_{dx}(n-1)+d(n)\mathbf{P}(n)\,\mathbf{x}^{*}(n)

The second step follows from the recursive definition of \mathbf{r}_{dx}(n). Next we incorporate the recursive definition of \mathbf{P}(n) together with the alternate form of \mathbf{g}(n) and get

\mathbf{w}_{n} =\lambda\left[\lambda^{-1}\mathbf{P}(n-1)-\mathbf{g}(n)\mathbf{x}^{T}(n)\lambda^{-1}\mathbf{P}(n-1)\right]\mathbf{r}_{dx}(n-1)+d(n)\mathbf{g}(n)
=\mathbf{P}(n-1)\mathbf{r}_{dx}(n-1)-\mathbf{g}(n)\mathbf{x}^{T}(n)\mathbf{P}(n-1)\mathbf{r}_{dx}(n-1)+d(n)\mathbf{g}(n)
=\mathbf{P}(n-1)\mathbf{r}_{dx}(n-1)+\mathbf{g}(n)\left[d(n)-\mathbf{x}^{T}(n)\mathbf{P}(n-1)\mathbf{r}_{dx}(n-1)\right]

With \mathbf{w}_{n-1}=\mathbf{P}(n-1)\mathbf{r}_{dx}(n-1) we arrive at the update equation

\mathbf{w}_{n} =\mathbf{w}_{n-1}+\mathbf{g}(n)\left[d(n)-\mathbf{x}^{T}(n)\mathbf{w}_{n-1}\right]
=\mathbf{w}_{n-1}+\mathbf{g}(n)\alpha(n)

where \alpha(n)=d(n)-\mathbf{x}^{T}(n)\mathbf{w}_{n-1} is the a priori error, computed before the filter is updated. Compare this with the a posteriori error, the error calculated after the filter is updated:

e(n)=d(n)-\mathbf{x}^{T}(n)\mathbf{w}_n

That means we found the correction factor

\Delta\mathbf{w}_{n-1}=\mathbf{g}(n)\alpha(n)

This intuitively satisfying result indicates that the correction factor is directly proportional to both the a priori error and the gain vector, which controls how much sensitivity is desired through the weighting factor λ.

RLS algorithm summary

The RLS algorithm for a p-th order RLS filter can be summarized as

Parameters: p = filter order
λ = forgetting factor
δ = value to initialize \mathbf{P}(0)
Initialization: \mathbf{w}(0)=0
\mathbf{P}(0)=\delta^{-1}I where I is the (p + 1)-by-(p + 1) identity matrix
Computation: For n=0,1,2,\dots

 \mathbf{x}(n) = [x(n), x(n-1), \dots , x(n-p)]^{T}

 \alpha(n) = d(n)-\mathbf{w}(n-1)^{T}\mathbf{x}(n)
\mathbf{g}(n)=\mathbf{P}(n-1)\mathbf{x}^{*}(n)\left\{\lambda+\mathbf{x}^{T}(n)\mathbf{P}(n-1)\mathbf{x}^{*}(n)\right\}^{-1}
\mathbf{P}(n)=\lambda^{-1}\mathbf{P}(n-1)-\mathbf{g}(n)\mathbf{x}^{T}(n)\lambda^{-1}\mathbf{P}(n-1)
 \mathbf{w}(n) = \mathbf{w}(n-1)+\,\alpha(n)\mathbf{g}(n).

Note that the recursion for P follows a Riccati equation and thus draws parallels to the Kalman filter.
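
A minimal transcription of this summary into code might look as follows. This is a sketch for the real-valued case (the conjugations above are dropped), and the test signal, channel taps, and parameter values in the example are arbitrary choices for illustration only:

import numpy as np

def rls_filter(x, d, p, lam, delta):
    """Recursive least squares following the summary above (real-valued signals)."""
    w = np.zeros(p + 1)                              # w(0) = 0
    P = np.eye(p + 1) / delta                        # P(0) = delta^{-1} I
    for n in range(len(x)):
        # x(n) = [x(n), x(n-1), ..., x(n-p)]^T, zero-padded for n < p
        x_n = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(p + 1)])
        alpha = d[n] - w @ x_n                       # a priori error alpha(n)
        g = P @ x_n / (lam + x_n @ P @ x_n)          # gain vector g(n)
        P = (P - np.outer(g, x_n) @ P) / lam         # P(n) (Riccati-type update)
        w = w + alpha * g                            # w(n) = w(n-1) + alpha(n) g(n)
    return w

# Example: the filter input is the received signal and the desired response is
# the transmitted signal, as in the Motivation section, so the adapted
# coefficients act as an approximate equalizer for the hypothetical echo channel.
rng = np.random.default_rng(0)
d_sig = rng.standard_normal(1000)                    # transmitted signal d(n)
b = np.array([1.0, 0.5, 0.25])                       # made-up channel taps
x_sig = np.convolve(d_sig, b)[:1000] + 0.01 * rng.standard_normal(1000)
w_hat = rls_filter(x_sig, d_sig, p=4, lam=0.99, delta=0.01)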
