Least mean squares filter

Least mean squares (LMS) algorithms are used in adaptive filters to find the filter coefficients that minimize the mean square of the error signal, i.e. the difference between the desired signal and the actual output of the filter.

Idea

The idea behind LMS filters is to use the method of steepest descent to find a coefficient vector \mathbf{w}_{n} which minimizes a cost function. We start the discussion by defining the cost function as

C(n) = E\left\{|e(n)|^{2}\right\}

where e(n) is the error signal, as defined in the block diagram section of the general adaptive filter, and E{·} denotes the expected value. Applying the steepest descent method means taking the partial derivatives with respect to the individual entries of the filter coefficient vector

\nabla C(n) = \nabla E\left\{e(n) \, e^{*}(n)\right\}=E\left\{\nabla e(n) \, e^{*}(n)\right\}

where \nabla is the gradient operator, and with \nabla e(n)= -\mathbf{x}(n) it follows that

\nabla C(n) = -E\left\{\mathbf{x}(n) \, e^{*}(n)\right\}

Now, \nabla C(n) is a vector which points towards the steepest ascent of the cost function. To find the minimum of the cost function we need to take a step in the opposite direction of \nabla C(n). Expressed in mathematical terms:

\mathbf{w}_{n+1}=\mathbf{w}_{n}-\mu \nabla C(n)=\mathbf{w}_{n}+\mu \, E\left\{\mathbf{x}(n) \, e^{*}(n)\right\}

where μ is the step size. We have thus found a sequential update algorithm which minimizes the cost function. Unfortunately, this algorithm is not realizable as long as the expectation E\left\{\mathbf{x}(n) \, e^{*}(n)\right\} is unknown.
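
As a clarifying aside (not part of the original derivation), this expectation can be written out explicitly. Using e(n) = d(n)-\mathbf{w}_{n}^{H}\mathbf{x}(n) (as in the summary below) together with the usual definitions \mathbf{R}=E\left\{\mathbf{x}(n)\,\mathbf{x}^{H}(n)\right\} (input autocorrelation matrix) and \mathbf{p}=E\left\{\mathbf{x}(n)\,d^{*}(n)\right\} (cross-correlation vector), neither of which is introduced elsewhere in this article, one obtains

E\left\{\mathbf{x}(n) \, e^{*}(n)\right\}=E\left\{\mathbf{x}(n)\left(d^{*}(n)-\mathbf{x}^{H}(n)\mathbf{w}_{n}\right)\right\}=\mathbf{p}-\mathbf{R}\,\mathbf{w}_{n}

Setting \nabla C(n)=0 therefore recovers the Wiener–Hopf equations \mathbf{R}\mathbf{w}=\mathbf{p}; evaluating the expectation exactly would require knowledge of these second-order statistics, which motivates the simplifications below.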

Simplifications

For most systems the expectation E\left\{\mathbf{x}(n) \, e^{*}(n)\right\} must be approximated. This can be done with the following unbiased estimator

\hat{E}\left\{\mathbf{x}(n) \, e^{*}(n)\right\}=\frac{1}{N}\sum_{i=0}^{N-1}\mathbf{x}(n-i) \, e^{*}(n-i)

where N indicates the number of samples we use for that estimate. The simplest case is N = 1

\hat{E}\left\{\mathbf{x}(n) \, e^{*}(n)\right\}=\mathbf{x}(n) \, e^{*}(n)

For this simple case the update algorithm becomes

\mathbf{w}_{n+1}=\mathbf{w}_{n}+\mu \mathbf{x}(n) \, e^{*}(n)

Indeed, this constitutes the update algorithm for the LMS filter.
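
As an illustration, a minimal sketch of this single update step might look as follows in Python (assuming real-valued signals, so that e^{*}(n) = e(n); the function name lms_update is purely illustrative and not part of any library):

    import numpy as np

    def lms_update(w, x_vec, d, mu):
        """One LMS iteration for real-valued signals.

        w      -- current coefficient vector, shape (p+1,)
        x_vec  -- input vector [x(n), x(n-1), ..., x(n-p)], shape (p+1,)
        d      -- desired sample d(n)
        mu     -- step size
        """
        e = d - np.dot(w, x_vec)   # e(n) = d(n) - w^T x(n)
        w = w + mu * e * x_vec     # w_{n+1} = w_n + mu e(n) x(n)
        return w, e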

LMS algorithm summary

The LMS algorithm for a pth-order filter can be summarized as

Parameters: p = filter order
μ = step size
Initialisation: \mathbf{w}_{0}=0
Computation: For n = 0,1,2,...

\mathbf{x}(n) =  \left[ \begin{matrix} x(n)\\ x(n-1)\\ \vdots\\ x(n-p) \end{matrix} \right]

e(n) = d(n)-\mathbf{w}_{n}^{H}\mathbf{x}(n)
\mathbf{w}_{n+1} = \mathbf{w}_{n}+\mu\,e^{*}(n)\mathbf{x}(n)
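
A self-contained sketch of this recursion in Python, written for complex-valued signals so that the conjugate e^{*}(n) appears explicitly; the function name lms_filter and the zero-padding of samples before n = 0 are illustrative choices rather than part of the summary above:

    import numpy as np

    def lms_filter(x, d, p, mu):
        """Run the LMS recursion over the input x(n) and desired signal d(n).

        x, d -- sequences of equal length
        p    -- filter order (p+1 coefficients)
        mu   -- step size
        Returns the final coefficient vector and the error sequence e(n).
        """
        w = np.zeros(p + 1, dtype=complex)      # initialisation: w_0 = 0
        e = np.zeros(len(x), dtype=complex)
        for n in range(len(x)):
            # x(n) = [x(n), x(n-1), ..., x(n-p)]^T, samples before n = 0 taken as zero
            xn = np.array([x[n - i] if n >= i else 0.0 for i in range(p + 1)],
                          dtype=complex)
            e[n] = d[n] - np.vdot(w, xn)        # e(n) = d(n) - w^H x(n)
            w = w + mu * np.conj(e[n]) * xn     # w_{n+1} = w_n + mu e*(n) x(n)
        return w, e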

Example: Plant Identification

The goal of a plant identification structure is to match the properties of an unknown system (plant) with an adaptive filter. The following figure shows the block diagram of a plant identification system; the Matlab source code is PlantIdent.

Figure: block diagram of the plant identification system

In general any adaptive filter can be used for plant identification; in this example, however, we use an LMS structure. The example allows us to discuss the merits of the step size factor μ. The following figure shows the error signal for a near-optimal value of μ; clearly, the error decreases as the algorithm progresses.

Figure: e(n) for μ = 0.01

The next figure shows a case where the step size is chosen too small. It takes a long time, i.e. many iterations, for the algorithm to settle.

Figure: e(n) for a step size that is too small (image missing)

The last figure shows the error signal for a value of μ that is chosen too large. The algorithm is not able to settle; in terms of the steepest descent method, we keep overstepping the minimum.

Figure: e(n) for μ = 0.08
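
For the plant identification example, a self-contained Python sketch along these lines could look as follows. It is not the PlantIdent Matlab code mentioned above, and the plant coefficients, signal length, and step sizes are arbitrary illustrative choices; which values of μ are "too small" or "too large" depends on the input power and the filter length.

    import numpy as np

    # Unknown plant: a short FIR system that the adaptive filter should identify.
    plant = np.array([0.5, -0.3, 0.2, 0.1])    # illustrative plant coefficients
    rng = np.random.default_rng(0)
    x = rng.standard_normal(2000)               # white input driving plant and filter
    d = np.convolve(x, plant)[:len(x)]          # plant output serves as desired signal d(n)

    p = len(plant) - 1                          # adaptive filter order matches the plant
    for mu in (0.002, 0.02, 0.2):               # small / moderate / large step sizes
        w = np.zeros(p + 1)
        e = np.zeros(len(x))
        for n in range(p, len(x)):
            xn = x[n - p:n + 1][::-1]           # [x(n), x(n-1), ..., x(n-p)]
            e[n] = d[n] - w @ xn                # e(n) = d(n) - w^T x(n)
            w = w + mu * e[n] * xn              # real-valued LMS update
        print(f"mu={mu}: mean |e(n)|^2 over the last 200 samples = {np.mean(e[-200:]**2):.3e}")

Plotting e(n) for each run (rather than printing a summary) gives curves analogous to the figures above.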

References

  • Monson H. Hayes: Statistical Digital Signal Processing and Modeling, Wiley, 1996, ISBN 0-471-59431-8
  • Simon Haykin: Adaptive Filter Theory, Prentice Hall, 2002, ISBN 0-13-048434-2
  • Simon S. Haykin, Bernard Widrow (editor): Least-Mean-Square Adaptive Filters, Wiley, 2003, ISBN 0-471-21570-8
  • Bernard Widrow, Samuel D. Stearns: Adaptive Signal Processing, Prentice Hall, 1985, ISBN 0-13-004029-0
