Least mean squares filter
Least mean squares (LMS) algorithms are used in adaptive filters to find the filter coefficients that produce the least mean square of the error signal (the difference between the desired and the actual signal). It is a stochastic gradient descent method in that the filter is adapted based only on the error at the current time.
Problem Formulation
Most linear adaptive filtering problems can be formulated using the block diagram above. That is, an unknown system h(n) is to be identified, and the adaptive filter attempts to adapt the filter ĥ(n) to make it as close as possible to h(n), while using only the observable signals x(n), d(n) and e(n); the signals y(n), v(n) and h(n) are not directly observable.
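For reference, the relations implied by that block diagram can be written as follows (a standard formulation; modelling the unknown system as a length-p FIR filter with coefficient vector h is an assumption made here for concreteness):

    y(n) = h^H x(n)
    d(n) = y(n) + v(n)
    e(n) = d(n) - ĥ^H(n) x(n)

where x(n) = [x(n), x(n-1), ..., x(n-p+1)]^T is the vector of the p most recent input samples and v(n) is additive interference.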
Idea
The idea behind LMS filters is to use the method of steepest descent to find a coefficient vector ĥ(n) which minimizes a cost function. We start the discussion by defining the cost function as

    C(n) = E{|e(n)|^2}

where e(n) is the error defined in the block diagram section of the general adaptive filter and E{·} denotes the expected value. Applying the steepest descent method means taking the partial derivatives with respect to the individual entries of the filter coefficient vector:

    ∇_ĥ C(n) = ∇_ĥ E{e(n) e*(n)} = 2 E{∇_ĥ(e(n)) e*(n)}

where ∇ is the gradient operator, and with

    ∇_ĥ(e(n)) = ∇_ĥ(d(n) - ĥ^H(n) x(n)) = -x(n)

it follows that

    ∇C(n) = -2 E{x(n) e*(n)}

Now, ∇C(n) is a vector which points towards the steepest ascent of the cost function. To find the minimum of the cost function we need to take a step in the opposite direction of ∇C(n). To express that in mathematical terms:

    ĥ(n+1) = ĥ(n) + μ E{x(n) e*(n)}

where μ is the step size (the factor of 2 from the gradient is absorbed into μ). That means we have found a sequential update algorithm which minimizes the cost function. Unfortunately, this algorithm is not realizable until we know E{x(n) e*(n)}.
Simplifications
For most systems the expectation function must be approximated. This can be done with the following unbiased estimator

    Ê{x(n) e*(n)} = (1/N) Σ_{i=0}^{N-1} x(n-i) e*(n-i)

where N indicates the number of samples we use for that estimate. The simplest case is N = 1:

    Ê{x(n) e*(n)} = x(n) e*(n)

For that simple case the update algorithm follows as

    ĥ(n+1) = ĥ(n) + μ x(n) e*(n)
Indeed this constitutes the update algorithm for the LMS filter.
LMS algorithm summary
The LMS algorithm for a pth-order filter can be summarized as
Parameters:     p = filter order
                μ = step size
Initialisation: ĥ(0) = zeros(p)
Computation:    For n = 0, 1, 2, ...
                  x(n) = [x(n), x(n-1), ..., x(n-p+1)]^T
                  e(n) = d(n) - ĥ^H(n) x(n)
                  ĥ(n+1) = ĥ(n) + μ e*(n) x(n)
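As an illustration, here is a minimal NumPy sketch of this summary (the function name lms_filter and the assumption of real-valued signals are mine, not part of the article; for complex signals the conjugate e*(n) would be needed):

```python
import numpy as np

def lms_filter(x, d, p, mu):
    """p-tap LMS filter: adapts h_hat so that h_hat^T x(n) tracks d(n).

    Returns the error signal e and the final coefficient estimate h_hat.
    Real-valued signals are assumed, so conjugates are omitted.
    """
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    h_hat = np.zeros(p)                      # initialisation: h_hat(0) = zeros(p)
    e = np.zeros(len(x))
    for n in range(p - 1, len(x)):
        x_vec = x[n - p + 1:n + 1][::-1]     # x(n) = [x(n), x(n-1), ..., x(n-p+1)]^T
        y = h_hat @ x_vec                    # adaptive filter output y(n)
        e[n] = d[n] - y                      # error e(n) = d(n) - y(n)
        h_hat = h_hat + mu * e[n] * x_vec    # LMS coefficient update
    return e, h_hat
```

A small μ gives slow but stable convergence, while a large μ adapts faster at the risk of instability; the plant identification example below illustrates both effects.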
Normalised least mean squares filter (NLMS)
The main drawback of the "pure" LMS algorithm is that it is sensitive to the scaling of its input x(n). This makes it very hard (if not impossible) to choose a learning rate μ that guarantees stability of the algorithm. The Normalised least mean squares filter (NLMS) is a variant of the LMS algorithm that solves this problem by normalising with the power of the input. The NLMS algorithm can be summarised as:
Parameters:     p = filter order
                μ = step size
Initialisation: ĥ(0) = zeros(p)
Computation:    For n = 0, 1, 2, ...
                  x(n) = [x(n), x(n-1), ..., x(n-p+1)]^T
                  e(n) = d(n) - ĥ^H(n) x(n)
                  ĥ(n+1) = ĥ(n) + (μ e*(n) x(n)) / (x^H(n) x(n))
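Relative to the LMS sketch above, only the coefficient update line changes. A hedged sketch of that change (the small constant eps is my own addition to guard against division by zero and is not part of the summary above):

```python
# Inside the same loop as the LMS sketch above:
eps = 1e-12                                   # tiny regulariser (not in the summary)
norm = x_vec @ x_vec + eps                    # input power x^H(n) x(n)
h_hat = h_hat + mu * e[n] * x_vec / norm      # NLMS coefficient update
```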
Optimal learning rate
It can be shown that if there is no interference (v(n) = 0), then the optimal learning rate for the NLMS algorithm is

    μ_opt = 1

and is independent of the input x(n) and the real (unknown) impulse response h(n). In the general case with interference (v(n) ≠ 0), the optimal learning rate is

    μ_opt = E{|y(n) - ŷ(n)|^2} / E{|e(n)|^2}

where ŷ(n) = ĥ^H(n) x(n) denotes the output of the adaptive filter.
The results above assume that the signals v(n) and x(n) are uncorrelated with each other, which is generally the case in practice.
Proof
Let the filter misalignment be defined as Λ(n) = |h - ĥ(n)|^2. We can derive the expected misalignment for the next sample as:

    E{Λ(n+1)} = E{ |ĥ(n) + (μ e*(n) x(n)) / (x^H(n) x(n)) - h|^2 }

Let δ(n) = ĥ(n) - h and r(n) = ŷ(n) - y(n) = δ^H(n) x(n), so that e(n) = v(n) - r(n).

Assuming independence of v(n) from x(n) and r(n), we have:

    E{Λ(n+1)} = E{Λ(n)} - 2μ E{ |r(n)|^2 / (x^H(n) x(n)) } + μ^2 E{ |e(n)|^2 / (x^H(n) x(n)) }

The optimal learning rate is found at ∂E{Λ(n+1)}/∂μ = 0, which (treating the input power x^H(n) x(n) as common to both expectations) leads to:

    μ_opt = E{|r(n)|^2} / E{|e(n)|^2} = E{|y(n) - ŷ(n)|^2} / E{|e(n)|^2}
Example: Plant Identification
The goal for a plant identification structure is to match the properties of an unknown system (plant) with an adaptive filter. The following figure shows the block diagram of a plant identification system. The Matlab source code is PlantIdent.
In general, any adaptive filter can be used for plant identification; however, in this example we use an LMS structure. This example allows us to discuss the merits of the step size factor μ. The following figure shows the error signal for a near-optimal value of μ. Clearly the error is minimized as the algorithm progresses.
The next figure shows a case when the step size is chosen too small. It takes a long time, i.e. a lot of iterations, for the algorithm to settle down.
[Figure missing: error signal when the step size is chosen too small]
The last figure shows the error function when μ is selected too large. The algorithm is not able to settle down; in terms of the steepest descent method, we are always overstepping the minimum.
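The article's PlantIdent Matlab source is not reproduced here, so the following is a self-contained NumPy sketch of the same kind of experiment (the plant coefficients, signal length, noise level and the three step sizes are illustrative choices of mine, not values from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown plant h(n): an illustrative 8-tap FIR system (not from the article)
h = np.array([0.7, -0.3, 0.2, 0.1, -0.05, 0.05, -0.02, 0.01])
p = len(h)

x = rng.standard_normal(5000)                 # white input signal x(n)
y = np.convolve(x, h)[:len(x)]                # plant output y(n)
d = y + 0.01 * rng.standard_normal(len(x))    # desired signal d(n) = y(n) + v(n)

# Near-optimal, too-small and too-large step sizes (illustrative values)
for mu in (0.05, 0.001, 0.6):
    h_hat = np.zeros(p)
    e = np.zeros(len(x))
    for n in range(p - 1, len(x)):
        x_vec = x[n - p + 1:n + 1][::-1]      # p most recent input samples
        e[n] = d[n] - h_hat @ x_vec           # error signal e(n)
        h_hat = h_hat + mu * e[n] * x_vec     # LMS update (may diverge if mu too large)
    print(f"mu={mu}: mean |e| over last 500 samples = {np.mean(np.abs(e[-500:])):.4f}, "
          f"misalignment |h - h_hat| = {np.linalg.norm(h - h_hat):.4f}")
```

With these illustrative values, μ = 0.05 should settle quickly, μ = 0.001 should need many more iterations, and μ = 0.6 should fail to settle, mirroring the three figures described in this section.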
See also
- Adaptive filter
- Recursive least squares
- For statistical techniques relevant to the LMS filter, see Least squares.