Recursive least squares filter
The recursive least squares (RLS) algorithm is used in adaptive filters to find the filter coefficients that recursively minimize a weighted least squares cost function of the error signal (the difference between the desired and the actual signal).
Motivation
Consider the linear time series model
- $y(n+1) = w\,x(n) + e(n)$
where $e(n) \sim N(0,1)$ is white noise. We wish to estimate the parameter $w$ via least squares, and at each time $n$ we refer to the new least squares estimate by $\hat{w}_n$. As time evolves, we would like to avoid completely redoing the least squares algorithm, and instead express the new estimate $\hat{w}_{n+1}$ in terms of $\hat{w}_n$ and update it with a low-cost correction.
The benefit of the RLS algorithm is that there is no need to invert extremely large matrices, thereby saving computing power. Another advantage is that it provides intuition behind such results as the Kalman filter.
We shall now describe these techniques.
Discussion
The idea behind RLS filters is to minimize a weighted least squares error function. To stay within the adaptive filter terminology, the weighted least squares error function is the cost function defined as

- $C(\mathbf{w}_n) = \sum_{i=0}^{n} \lambda^{n-i}\, e^2(i)$

where $0 < \lambda \le 1$ is an exponential weighting factor (the forgetting factor) which effectively limits the number of input samples on which the cost function is based. The error signal $e(n) = d(n) - \hat{d}(n)$ is defined in the block diagram section of the adaptive filter page, where $\hat{d}(n) = \sum_{\ell=0}^{p} w_n(\ell)\,x(n-\ell)$ is the filter output. The cost function is minimized by taking the partial derivatives for all entries $k$ of the coefficient vector $\mathbf{w}_n$ and setting the results to zero

- $\dfrac{\partial C(\mathbf{w}_n)}{\partial w_n(k)} = \sum_{i=0}^{n} 2\lambda^{n-i}\, e(i)\,\dfrac{\partial e(i)}{\partial w_n(k)} = -\sum_{i=0}^{n} 2\lambda^{n-i}\, e(i)\, x(i-k) = 0, \qquad k = 0,\dots,p$
Next, replace $e(n)$ with the definition of the error signal

- $\sum_{i=0}^{n} \lambda^{n-i} \left[ d(i) - \sum_{\ell=0}^{p} w_n(\ell)\, x(i-\ell) \right] x(i-k) = 0, \qquad k = 0,\dots,p$
Rearranging the equation yields

- $\sum_{\ell=0}^{p} w_n(\ell) \left[ \sum_{i=0}^{n} \lambda^{n-i}\, x(i-\ell)\, x(i-k) \right] = \sum_{i=0}^{n} \lambda^{n-i}\, d(i)\, x(i-k)$
This form can be expressed in terms of matrices

- $\mathbf{R}_x(n)\,\mathbf{w}_n = \mathbf{r}_{dx}(n)$

where $\mathbf{R}_x(n)$ is the weighted autocorrelation matrix for $x(n)$ and $\mathbf{r}_{dx}(n)$ is the cross-correlation between $d(n)$ and $x(n)$. Based on this expression we find the coefficients which minimize the cost function as

- $\mathbf{w}_n = \mathbf{R}_x^{-1}(n)\,\mathbf{r}_{dx}(n)$
This is the main result of the discussion.
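As a concrete illustration, this closed-form solution can be evaluated directly. The following Python sketch is an addition for illustration, not part of the original article: it builds the weighted autocorrelation matrix $\mathbf{R}_x(n)$ and cross-correlation vector $\mathbf{r}_{dx}(n)$ by brute force and solves for $\mathbf{w}_n$; the helper name batch_wls and the toy signals are assumptions of this sketch.

import numpy as np

def batch_wls(x, d, p, lam):
    """Illustrative helper: solve R_x(n) w_n = r_dx(n) directly.

    x   : input signal samples x[0..n] (x(i) = 0 assumed for i < 0)
    d   : desired signal, same length as x
    p   : filter order (w has p + 1 taps)
    lam : exponential forgetting factor, 0 < lam <= 1
    """
    n = len(x) - 1
    R = np.zeros((p + 1, p + 1))   # weighted autocorrelation matrix R_x(n)
    r = np.zeros(p + 1)            # weighted cross-correlation vector r_dx(n)
    for i in range(n + 1):
        # data vector x(i) = [x(i), x(i-1), ..., x(i-p)]^T, zero before time 0
        xi = np.array([x[i - k] if i - k >= 0 else 0.0 for k in range(p + 1)])
        R += lam ** (n - i) * np.outer(xi, xi)
        r += lam ** (n - i) * d[i] * xi
    return np.linalg.solve(R, r)   # w_n = R_x^{-1}(n) r_dx(n)

# Toy usage: identify a 2-tap FIR system from noisy observations.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
d = 0.8 * x + 0.4 * np.concatenate(([0.0], x[:-1])) + 0.01 * rng.standard_normal(500)
print(batch_wls(x, d, p=1, lam=0.99))   # approximately [0.8, 0.4]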
Recursive algorithm
The discussion resulted in a single equation to determine a coefficient vector which minimizes the cost function. In this section we want to derive a recursive solution of the form

- $\mathbf{w}_n = \mathbf{w}_{n-1} + \Delta\mathbf{w}_{n-1}$

where $\Delta\mathbf{w}_{n-1}$ is a correction factor at time $n-1$. We start the derivation of the recursive algorithm by expressing the cross-correlation $\mathbf{r}_{dx}(n)$ in terms of $\mathbf{r}_{dx}(n-1)$

- $\mathbf{r}_{dx}(n) = \sum_{i=0}^{n} \lambda^{n-i}\, d(i)\,\mathbf{x}(i) = \lambda\,\mathbf{r}_{dx}(n-1) + d(n)\,\mathbf{x}(n)$

where $\mathbf{x}(n)$ is the $(p+1)$-dimensional data vector

- $\mathbf{x}(n) = [\,x(n),\; x(n-1),\; \dots,\; x(n-p)\,]^T$

Similarly we express $\mathbf{R}_x(n)$ in terms of $\mathbf{R}_x(n-1)$ by

- $\mathbf{R}_x(n) = \lambda\,\mathbf{R}_x(n-1) + \mathbf{x}(n)\,\mathbf{x}^T(n)$
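These rank-one recursions are easy to sanity-check numerically. The short sketch below is an illustration added here (the data and helper data_vec are arbitrary assumptions); it verifies that the recursive updates reproduce the explicit weighted sums.

import numpy as np

def data_vec(x, n, p):
    # x(n) = [x(n), x(n-1), ..., x(n-p)]^T with x(i) = 0 for i < 0
    return np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(p + 1)])

rng = np.random.default_rng(1)
p, lam, N = 2, 0.95, 50
x = rng.standard_normal(N)
d = rng.standard_normal(N)

R_rec, r_rec = np.zeros((p + 1, p + 1)), np.zeros(p + 1)
for n in range(N):
    xn = data_vec(x, n, p)
    R_rec = lam * R_rec + np.outer(xn, xn)  # R_x(n) = lam R_x(n-1) + x(n) x(n)^T
    r_rec = lam * r_rec + d[n] * xn         # r_dx(n) = lam r_dx(n-1) + d(n) x(n)

# Compare against the explicit weighted sums at the final time n = N-1.
R_sum = sum(lam ** (N - 1 - i) * np.outer(data_vec(x, i, p), data_vec(x, i, p))
            for i in range(N))
r_sum = sum(lam ** (N - 1 - i) * d[i] * data_vec(x, i, p) for i in range(N))
print(np.allclose(R_rec, R_sum), np.allclose(r_rec, r_sum))  # True True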
In order to generate the coefficient vector we are interested in the inverse of the deterministic autocorrelation matrix. For that task the Woodbury matrix identity comes in handy. With

- $A = \lambda\,\mathbf{R}_x(n-1)$ is $(p+1)$-by-$(p+1)$
- $U = \mathbf{x}(n)$ is $(p+1)$-by-1
- $V = \mathbf{x}^T(n)$ is 1-by-$(p+1)$
- $C = I = 1$ is the 1-by-1 identity matrix

the Woodbury matrix identity follows

- $\mathbf{R}_x^{-1}(n) = \lambda^{-1}\mathbf{R}_x^{-1}(n-1) - \dfrac{\lambda^{-1}\mathbf{R}_x^{-1}(n-1)\,\mathbf{x}(n)\,\mathbf{x}^T(n)\,\lambda^{-1}\mathbf{R}_x^{-1}(n-1)}{1 + \mathbf{x}^T(n)\,\lambda^{-1}\mathbf{R}_x^{-1}(n-1)\,\mathbf{x}(n)}$
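To make the identity concrete, here is a small numerical check, added as an illustration with arbitrary test data: the rank-one Woodbury update of the inverse reproduces a direct matrix inversion of $\mathbf{R}_x(n)$.

import numpy as np

rng = np.random.default_rng(2)
p, lam = 3, 0.9

# Arbitrary symmetric positive definite R_x(n-1) and data vector x(n).
M = rng.standard_normal((p + 1, p + 1))
R_prev = M @ M.T + np.eye(p + 1)
xn = rng.standard_normal(p + 1)

P_prev = np.linalg.inv(R_prev)          # R_x^{-1}(n-1)

# Woodbury rank-one update of the inverse, as in the text.
num = (P_prev / lam) @ np.outer(xn, xn) @ (P_prev / lam)
den = 1.0 + xn @ (P_prev / lam) @ xn
P_new = P_prev / lam - num / den        # R_x^{-1}(n) via Woodbury

# Direct inversion of R_x(n) = lam R_x(n-1) + x(n) x(n)^T for comparison.
print(np.allclose(P_new, np.linalg.inv(lam * R_prev + np.outer(xn, xn))))  # True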
To come in line with the standard literature, we define

- $\mathbf{P}(n) = \mathbf{R}_x^{-1}(n) = \lambda^{-1}\mathbf{P}(n-1) - \mathbf{g}(n)\,\mathbf{x}^T(n)\,\lambda^{-1}\mathbf{P}(n-1)$

where the gain vector $\mathbf{g}(n)$ is

- $\mathbf{g}(n) = \lambda^{-1}\mathbf{P}(n-1)\,\mathbf{x}(n)\,\{1 + \mathbf{x}^T(n)\,\lambda^{-1}\mathbf{P}(n-1)\,\mathbf{x}(n)\}^{-1}$
Before we move on, it is necessary to bring $\mathbf{g}(n)$ into another form

- $\mathbf{g}(n)\,\{1 + \mathbf{x}^T(n)\,\lambda^{-1}\mathbf{P}(n-1)\,\mathbf{x}(n)\} = \lambda^{-1}\mathbf{P}(n-1)\,\mathbf{x}(n)$
- $\mathbf{g}(n) + \mathbf{g}(n)\,\mathbf{x}^T(n)\,\lambda^{-1}\mathbf{P}(n-1)\,\mathbf{x}(n) = \lambda^{-1}\mathbf{P}(n-1)\,\mathbf{x}(n)$

Subtracting the second term on the left side yields

- $\mathbf{g}(n) = \lambda^{-1}\mathbf{P}(n-1)\,\mathbf{x}(n) - \mathbf{g}(n)\,\mathbf{x}^T(n)\,\lambda^{-1}\mathbf{P}(n-1)\,\mathbf{x}(n) = \lambda^{-1}\left[\mathbf{P}(n-1) - \mathbf{g}(n)\,\mathbf{x}^T(n)\,\mathbf{P}(n-1)\right]\mathbf{x}(n)$

With the recursive definition of $\mathbf{P}(n)$ the desired form follows

- $\mathbf{g}(n) = \mathbf{P}(n)\,\mathbf{x}(n)$
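The equivalence $\mathbf{g}(n) = \mathbf{P}(n)\,\mathbf{x}(n)$ can likewise be confirmed numerically. The sketch below, added for illustration with arbitrary values, computes the gain from its defining formula and compares it against $\mathbf{P}(n)\,\mathbf{x}(n)$.

import numpy as np

rng = np.random.default_rng(3)
p, lam = 3, 0.9

M = rng.standard_normal((p + 1, p + 1))
P_prev = np.linalg.inv(M @ M.T + np.eye(p + 1))   # P(n-1), symmetric pos. def.
xn = rng.standard_normal(p + 1)

# Gain from its definition: g(n) = lam^{-1} P(n-1) x(n) / (1 + x^T lam^{-1} P(n-1) x)
g = (P_prev @ xn / lam) / (1.0 + xn @ P_prev @ xn / lam)

# P(n) from the recursion, then the alternate form g(n) = P(n) x(n).
P_new = P_prev / lam - np.outer(g, xn) @ P_prev / lam
print(np.allclose(g, P_new @ xn))   # True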
Now we are ready to complete the recursion. As discussed

- $\mathbf{w}_n = \mathbf{P}(n)\,\mathbf{r}_{dx}(n) = \lambda\,\mathbf{P}(n)\,\mathbf{r}_{dx}(n-1) + d(n)\,\mathbf{P}(n)\,\mathbf{x}(n)$

The second step follows from the recursive definition of $\mathbf{r}_{dx}(n)$. Next we incorporate the recursive definition of $\mathbf{P}(n)$ together with the alternate form of $\mathbf{g}(n)$ and get

- $\mathbf{w}_n = \lambda\left[\lambda^{-1}\mathbf{P}(n-1) - \mathbf{g}(n)\,\mathbf{x}^T(n)\,\lambda^{-1}\mathbf{P}(n-1)\right]\mathbf{r}_{dx}(n-1) + d(n)\,\mathbf{g}(n)$
- $\quad = \mathbf{P}(n-1)\,\mathbf{r}_{dx}(n-1) - \mathbf{g}(n)\,\mathbf{x}^T(n)\,\mathbf{P}(n-1)\,\mathbf{r}_{dx}(n-1) + d(n)\,\mathbf{g}(n)$
- $\quad = \mathbf{P}(n-1)\,\mathbf{r}_{dx}(n-1) + \mathbf{g}(n)\left[d(n) - \mathbf{x}^T(n)\,\mathbf{P}(n-1)\,\mathbf{r}_{dx}(n-1)\right]$
With $\mathbf{w}_{n-1} = \mathbf{P}(n-1)\,\mathbf{r}_{dx}(n-1)$ we arrive at the update equation

- $\mathbf{w}_n = \mathbf{w}_{n-1} + \mathbf{g}(n)\,\alpha(n)$

where

- $\alpha(n) = d(n) - \mathbf{x}^T(n)\,\mathbf{w}_{n-1}$

is the a priori error, i.e. the error computed with the old coefficient vector. That means we found the correction factor

- $\Delta\mathbf{w}_{n-1} = \mathbf{g}(n)\,\alpha(n)$
RLS algorithm summary
The RLS algorithm for a pth order RLS filter can be summarized as

Parameters:
- $p$ = filter order
- $\lambda$ = forgetting factor
- $\delta$ = value used to initialize $\mathbf{P}(0)$

Initialisation:
- $\mathbf{w}_0 = 0$
- $\mathbf{P}(0) = \delta^{-1} I$, where $I$ is the $(p+1)$-by-$(p+1)$ identity matrix

Computation: for $n = 1, 2, \dots$
- $\mathbf{x}(n) = [\,x(n),\; x(n-1),\; \dots,\; x(n-p)\,]^T$
- $\alpha(n) = d(n) - \mathbf{x}^T(n)\,\mathbf{w}_{n-1}$
- $\mathbf{g}(n) = \mathbf{P}(n-1)\,\mathbf{x}(n)\,\{\lambda + \mathbf{x}^T(n)\,\mathbf{P}(n-1)\,\mathbf{x}(n)\}^{-1}$
- $\mathbf{P}(n) = \lambda^{-1}\mathbf{P}(n-1) - \mathbf{g}(n)\,\mathbf{x}^T(n)\,\lambda^{-1}\mathbf{P}(n-1)$
- $\mathbf{w}_n = \mathbf{w}_{n-1} + \alpha(n)\,\mathbf{g}(n)$
Note that the recursion for $\mathbf{P}$ follows a Riccati equation and thus draws parallels to the Kalman filter.
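A direct transcription of this summary into code may be helpful. The following Python sketch is an illustration added here, not part of the original article; the class name RLSFilter and its interface are assumptions, and nothing beyond the standard $\delta$ initialization is included.

import numpy as np

class RLSFilter:
    """Exponentially weighted RLS filter, following the summary above."""

    def __init__(self, p, lam=0.99, delta=0.01):
        self.p = p                            # filter order
        self.lam = lam                        # forgetting factor
        self.w = np.zeros(p + 1)              # coefficient vector w_0 = 0
        self.P = np.eye(p + 1) / delta        # P(0) = delta^{-1} I
        self.xbuf = np.zeros(p + 1)           # data vector x(n), newest first

    def update(self, x_n, d_n):
        """One iteration of the summary; returns the a priori error alpha(n)."""
        self.xbuf = np.concatenate(([x_n], self.xbuf[:-1]))  # shift in x(n)
        x = self.xbuf
        alpha = d_n - x @ self.w                         # a priori error alpha(n)
        g = self.P @ x / (self.lam + x @ self.P @ x)     # gain vector g(n)
        self.P = (self.P - np.outer(g, x) @ self.P) / self.lam  # P(n) update
        self.w = self.w + alpha * g                      # w_n = w_{n-1} + alpha g
        return alpha

# Usage: identify an unknown system d(n) = 0.8 x(n) + 0.4 x(n-1) + noise.
rng = np.random.default_rng(4)
f = RLSFilter(p=1, lam=0.99, delta=0.01)
x_prev = 0.0
for _ in range(500):
    x_n = rng.standard_normal()
    d_n = 0.8 * x_n + 0.4 * x_prev + 0.01 * rng.standard_normal()
    f.update(x_n, d_n)
    x_prev = x_n
print(f.w)   # approximately [0.8, 0.4]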