Least mean squares filter


Least mean squares (LMS) algorithms are used in adaptive filters to find the filter coefficients that produce the least mean square of the error signal (the difference between the desired and the actual signal). LMS is a stochastic gradient descent method in that the filter is adapted based only on the error at the current time.


Problem Formulation

Figure: LMS filter block diagram

Most linear adaptive filtering problems can be formulated using the block diagram above. That is, an unknown system \mathbf{h}(n) is to be identified, and the adaptive filter attempts to adapt its coefficients \hat{\mathbf{h}}(n) to be as close as possible to \mathbf{h}(n), using only the observable signals x(n), d(n) and e(n); the signals y(n), v(n) and \mathbf{h}(n) are not directly observable.
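
To make the roles of the signals concrete, here is a minimal simulation sketch of this setup, assuming a real-valued FIR system and white Gaussian excitation; all names and sizes are illustrative and not taken from the article:

```python
# Hypothetical identification setup: an unknown FIR system h produces y(n),
# interference v(n) is added, and only x(n), d(n) = y(n) + v(n) and the error
# e(n) of the adaptive filter are available to the adaptation algorithm.
import numpy as np

rng = np.random.default_rng(0)
p = 8                                        # length of the unknown system (assumed)
n_samples = 1000

h_true = rng.standard_normal(p)              # unknown impulse response h (not observable)
x = rng.standard_normal(n_samples)           # observable input x(n)
y = np.convolve(x, h_true)[:n_samples]       # system output y(n) (not observable)
v = 0.01 * rng.standard_normal(n_samples)    # interference v(n) (not observable)
d = y + v                                    # observable desired signal d(n)
```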

Idea

The idea behind LMS filters is to use the method of steepest descent to find the filter coefficients \hat{\mathbf{h}}(n) which minimize a cost function. We start the discussion by defining the cost function as

C(n) = E\left\{|e(n)|^{2}\right\}

where e(n) is the error signal shown in the block diagram above and E\left\{\cdot\right\} denotes the expected value. Applying the steepest descent method means taking the partial derivatives with respect to the individual entries of the filter coefficient vector

\nabla C(n) = \nabla E\left\{e(n) \, e^{*}(n)\right\}=E\left\{\nabla e(n) \, e^{*}(n)\right\}

where \nabla is the gradient operator. With \nabla e(n)= -\mathbf{x}(n), it follows that

\nabla C(n) = -E\left\{\mathbf{x}(n) \, e^{*}(n)\right\}

Now, \nabla C(n) is a vector which points towards the steepest ascent of the cost function. To find the minimum of the cost function we need to take a step in the opposite direction of \nabla C(n). To express that in mathematical terms

\hat{\mathbf{h}}(n+1)=\hat{\mathbf{h}}(n)-\mu \nabla C(n)=\hat{\mathbf{h}}(n)+\mu \, E\left\{\mathbf{x}(n) \, e^{*}(n)\right\}

where μ is the step size. This yields a sequential update algorithm which minimizes the cost function. Unfortunately, the algorithm is not realizable as long as E\left\{\mathbf{x}(n) \, e^{*}(n)\right\} is unknown.
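
Substituting e(n) = d(n)-\hat{\mathbf{h}}^{H}(n)\mathbf{x}(n) shows that E\left\{\mathbf{x}(n)\, e^{*}(n)\right\} = \mathbf{p}_{xd} - \mathbf{R}\hat{\mathbf{h}}(n), with \mathbf{R} = E\left\{\mathbf{x}(n)\mathbf{x}^{H}(n)\right\} and \mathbf{p}_{xd} = E\left\{\mathbf{x}(n)d^{*}(n)\right\}, so the recursion requires the second-order statistics of the signals. The sketch below illustrates this (unrealizable) recursion for the idealized case in which those statistics are known, assuming real-valued, unit-power white input; all names are illustrative:

```python
# Idealized steepest descent with known statistics R = E[x x^T] and p_xd = E[x d]
# (real-valued signals, so the conjugates drop out). Not realizable in practice,
# because R and p_xd are usually unknown.
import numpy as np

def steepest_descent(R, p_xd, mu, iterations, order):
    """Iterate h_hat <- h_hat + mu * (p_xd - R h_hat), i.e. mu * E[x(n) e(n)]."""
    h_hat = np.zeros(order)
    for _ in range(iterations):
        h_hat = h_hat + mu * (p_xd - R @ h_hat)
    return h_hat

# For unit-power white input, R is the identity and p_xd equals the true impulse
# response, so the recursion converges to the true coefficients.
order = 8
h_true = np.random.default_rng(1).standard_normal(order)
R = np.eye(order)
p_xd = R @ h_true
print(np.allclose(steepest_descent(R, p_xd, 0.1, 200, order), h_true))  # True
```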

Simplifications

For most systems the expectation E\left\{\mathbf{x}(n) \, e^{*}(n)\right\} must be approximated. This can be done with the following unbiased estimator

\hat{E}\left\{\mathbf{x}(n) \, e^{*}(n)\right\}=\frac{1}{N}\sum_{i=0}^{N-1}\mathbf{x}(n-i) \, e^{*}(n-i)

where N indicates the number of samples we use for that estimate. The simplest case is N = 1

\hat{E}\left\{\mathbf{x}(n) \, e^{*}(n)\right\}=\mathbf{x}(n) \, e^{*}(n)

For that simple case the update algorithm follows as

\hat{\mathbf{h}}(n+1)=\hat{\mathbf{h}}(n)+\mu \mathbf{x}(n) \, e^{*}(n)

Indeed this constitutes the update algorithm for the LMS filter.
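
A minimal sketch of the averaged gradient estimate, for real-valued signals (so e^{*}(n) is just e(n)); the helper name is illustrative and not from the article:

```python
# Average of x(n-i) * e(n-i) over the last N samples.
import numpy as np

def gradient_estimate(x_vectors, errors):
    """x_vectors: the N most recent input vectors x(n), x(n-1), ...
    errors:       the corresponding errors e(n), e(n-1), ..."""
    N = len(errors)
    return sum(x * e for x, e in zip(x_vectors, errors)) / N

# With N = 1 the estimate reduces to x(n) * e(n), which is exactly what the
# LMS update above uses.
```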

LMS algorithm summary

The LMS algorithm for a pth-order filter can be summarized as follows; a short code sketch is given after the summary.

Parameters: p = filter order
μ = step size
Initialisation: \hat{\mathbf{h}}(0)=0
Computation: For n = 0,1,2,...

\mathbf{x}(n) = \left[x(n), x(n-1), \dots, x(n-p+1)\right]^T

e(n) = d(n)-\hat{\mathbf{h}}^{H}(n)\mathbf{x}(n)
\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n)+\mu\,e^{*}(n)\mathbf{x}(n)
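
For concreteness, here is a minimal sketch of the summary above for real-valued signals (so \hat{\mathbf{h}}^{H}\mathbf{x} becomes a plain dot product and e^{*}(n) = e(n)); the function and variable names are illustrative:

```python
# LMS adaptation of a length-p filter, following the summary above.
import numpy as np

def lms_filter(x, d, p, mu):
    """Adapt h_hat so that h_hat . x(n) tracks the desired signal d(n)."""
    h_hat = np.zeros(p)                   # initialisation: h_hat(0) = 0
    e = np.zeros(len(x))
    for n in range(p - 1, len(x)):
        x_n = x[n - p + 1:n + 1][::-1]    # x(n) = [x(n), x(n-1), ..., x(n-p+1)]^T
        e[n] = d[n] - h_hat @ x_n         # error e(n) = d(n) - h_hat^T x(n)
        h_hat += mu * e[n] * x_n          # update h_hat(n+1) = h_hat(n) + mu e(n) x(n)
    return h_hat, e
```

With the signals x and d from the earlier identification sketch, lms_filter(x, d, p=8, mu=0.01) should drive h_hat towards h_true, provided μ is small enough for the input power at hand.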

Normalised least mean squares filter (NLMS)

The main drawback of the "pure" LMS algorithm is that it is sensitive to the scaling of its input x(n). This makes it very hard (if not impossible) to choose a learning rate μ that guarantees stability of the algorithm. The normalised least mean squares (NLMS) filter is a variant of the LMS algorithm that solves this problem by normalising the update with the power of the input. The NLMS algorithm can be summarised as follows; a code sketch is given after the summary.

Parameters: p = filter order
μ = step size
Initialisation: \hat{\mathbf{h}}(0)=0
Computation: For n = 0,1,2,...

\mathbf{x}(n) = \left[x(n), x(n-1), \dots, x(n-p+1)\right]^T

e(n) = d(n)-\hat{\mathbf{h}}^{H}(n)\mathbf{x}(n)
\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n)+\frac{\mu\,e^{*}(n)\mathbf{x}(n)}{\mathbf{x}^H(n)\mathbf{x}(n)}
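
A corresponding sketch of the NLMS update, again for real-valued signals. The small constant eps is a common practical safeguard against division by zero when the input is momentarily silent; it is not part of the summary above:

```python
# NLMS adaptation: same loop as LMS, but the step is normalised by the
# instantaneous input power x(n)^T x(n).
import numpy as np

def nlms_filter(x, d, p, mu, eps=1e-8):
    h_hat = np.zeros(p)
    e = np.zeros(len(x))
    for n in range(p - 1, len(x)):
        x_n = x[n - p + 1:n + 1][::-1]
        e[n] = d[n] - h_hat @ x_n
        h_hat += mu * e[n] * x_n / (x_n @ x_n + eps)  # normalised step
    return h_hat, e
```

Because of the normalisation, μ = 1 already gives fast, stable adaptation when the interference is weak, consistent with the optimal learning rate discussed in the next section.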

Optimal learning rate

It can be shown that if there is no interference (v(n) = 0), then the optimal learning rate for the NLMS algorithm is

\mu_{opt} = 1

and is independent of the input x(n) and of the real (unknown) impulse response \mathbf{h}(n). In the general case with interference (v(n) \neq 0), the optimal learning rate is

\mu_{opt}=\frac{E\left[\left|y(n)-\hat{y}(n)\right|^2\right]}{E\left[|e(n)|^2\right]}

The results above assume that the signals v(n) and x(n) are uncorrelated with each other, which is generally the case in practice.

Proof

Let the filter misalignment be defined as \Lambda(n) = \left| \mathbf{h}(n) - \hat{\mathbf{h}}(n) \right|^2. The expected misalignment for the next sample can then be derived as:

E\left[ \Lambda(n+1) \right] = E\left[ \left| \hat{\mathbf{h}}(n) + \frac{\mu\,e^{*}(n)\mathbf{x}(n)}{\mathbf{x}^H(n)\mathbf{x}(n)} - \mathbf{h}(n) \right|^2 \right]
E\left[ \Lambda(n+1) \right] = E\left[ \left| \hat{\mathbf{h}}(n) + \frac{\mu\, \left(  v^*(n)+y^*(n)-\hat{y}^*(n)  \right) \mathbf{x}(n)}{\mathbf{x}^H(n)\mathbf{x}(n)} - \mathbf{h}(n) \right|^2 \right]

Let \mathbf{\delta}(n)=\hat{\mathbf{h}}(n)-\mathbf{h}(n) and r(n) = \hat{y}(n)-y(n). Then:

E\left[ \Lambda(n+1) \right] = E\left[ \left| \mathbf{\delta}(n) + \frac{\mu\, \left(  v^{*}(n)-r^{*}(n) \right) \mathbf{x}(n)}{\mathbf{x}^H(n)\mathbf{x}(n)} \right|^2 \right]
E\left[ \Lambda(n+1) \right] = E\left[ \left( \mathbf{\delta}(n) + \frac{\mu\, \left(  v^{*}(n)-r^{*}(n) \right) \mathbf{x}(n)}{\mathbf{x}^H(n)\mathbf{x}(n)} \right)^H \left( \mathbf{\delta}(n) + \frac{\mu\, \left(  v^{*}(n)-r^{*}(n) \right) \mathbf{x}(n)}{\mathbf{x}^H(n)\mathbf{x}(n)} \right)  \right]

Expanding the product and using \mathbf{\delta}^{H}(n)\mathbf{x}(n) = \hat{y}(n)-y(n) = r(n), together with the assumption that v(n) is independent of \mathbf{x}(n) (and therefore of r(n)) and has zero mean, the cross terms reduce to -2\mu E\left[|r(n)|^2\right]/\left(\mathbf{x}^H(n)\mathbf{x}(n)\right), and we have:

E\left[ \Lambda(n+1) \right] = E\left[\Lambda(n)\right] + E\left[ \left( \frac{\mu\, \left(  v^{*}(n)-r^{*}(n) \right) \mathbf{x}(n)}{\mathbf{x}^H(n)\mathbf{x}(n)} \right)^H \left( \frac{\mu\, \left(  v^{*}(n)-r^{*}(n) \right) \mathbf{x}(n)}{\mathbf{x}^H(n)\mathbf{x}(n)} \right)  \right] - \frac{2 \mu E\left[|r(n)|^2\right]}{\mathbf{x}^H(n)\mathbf{x}(n)}
E\left[ \Lambda(n+1) \right] = E\left[\Lambda(n)\right] + \frac{\mu^2\, E\left[|e(n)|^2\right]}{\mathbf{x}^H(n)\mathbf{x}(n)} - \frac{2 \mu\, E\left[|r(n)|^2\right]}{\mathbf{x}^H(n)\mathbf{x}(n)}

The optimal learning rate is found at \frac{dE\left[ \Lambda(n+1) \right]}{d\mu} = 0, which leads to:

2 \mu E\left[|e(n)|^2\right] - 2 E\left[|r(n)|^2\right] = 0
\mu_{opt} = \frac{E\left[|r(n)|^2\right]}{E\left[|e(n)|^2\right]} = \frac{E\left[\left|y(n)-\hat{y}(n)\right|^2\right]}{E\left[|e(n)|^2\right]}
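
As a quick sanity check, the quadratic in μ obtained above can be minimised numerically; the values below are arbitrary placeholders, not taken from the article:

```python
# Check that E[Lambda(n+1)] = E[Lambda(n)] + mu^2 A - 2 mu B, with
# A = E[|e|^2]/(x^H x) and B = E[|r|^2]/(x^H x), is minimised at
# mu = B/A = E[|r|^2]/E[|e|^2].
import numpy as np

E_e2, E_r2, x_power, Lambda_n = 2.0, 0.5, 4.0, 1.0   # arbitrary example values
A, B = E_e2 / x_power, E_r2 / x_power

mus = np.linspace(0.0, 1.0, 10001)
expected_misalignment = Lambda_n + mus**2 * A - 2 * mus * B
print(mus[np.argmin(expected_misalignment)])   # ~0.25
print(E_r2 / E_e2)                             # 0.25
```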

Example: Plant Identification

The goal of a plant identification structure is to match the properties of an unknown system (the plant) with an adaptive filter. The following figure shows the block diagram of a plant identification system; the Matlab source code is PlantIdent.

Figure: plant identification block diagram

In general, any adaptive filter can be used for plant identification; in this example, however, we use an LMS structure. The example allows us to discuss the influence of the step size μ. The following figure shows the error signal for a near-optimal value of μ: the error is clearly minimized as the algorithm progresses.

Figure: e(n) for μ = 0.01

The next figure shows a case in which the step size is chosen too small. It takes a long time, i.e. many iterations, for the algorithm to settle.

Figure (missing): e(n) for a step size chosen too small

The last figure shows the error signal when μ is selected too large. The algorithm is unable to settle down; in terms of the steepest descent method, we keep overstepping the minimum.

Figure: e(n) for μ = 0.08
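
Since the original Matlab code (PlantIdent) is not included here, the following sketch reproduces the spirit of the experiment in Python; the plant, noise level and the smallest step size are illustrative, while 0.01 and 0.08 are taken from the figure captions above:

```python
# Plant identification with LMS for three step sizes: too small (slow
# convergence), near optimal, and aggressive (fast but noisier adaptation).
import numpy as np

rng = np.random.default_rng(0)
p = 8
n_samples = 2000
h_plant = rng.standard_normal(p)                 # unknown plant
x = rng.standard_normal(n_samples)               # excitation signal
d = np.convolve(x, h_plant)[:n_samples] + 0.1 * rng.standard_normal(n_samples)

for mu in (0.001, 0.01, 0.08):
    h_hat = np.zeros(p)
    e = np.zeros(n_samples)
    for n in range(p - 1, n_samples):
        x_n = x[n - p + 1:n + 1][::-1]
        e[n] = d[n] - h_hat @ x_n
        h_hat += mu * e[n] * x_n
    print(f"mu = {mu:5.3f}: mean |e| over last 200 samples = "
          f"{np.mean(np.abs(e[-200:])):.3f}, "
          f"coefficient error = {np.linalg.norm(h_plant - h_hat):.3f}")
```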
