Similarities between Wiener and LMS
From Wikipedia, the free encyclopedia
The least mean squares (LMS) filter solution converges to the Wiener filter solution, assuming that the unknown system is LTI and the noise is stationary. Both filters can be used to identify the impulse response of an unknown system, knowing only the original input signal and the output of the unknown system. By relaxing the error criterion to minimize only the current-sample error instead of the total error over all n, the LMS algorithm can be derived from the Wiener filter.
Derivation of the Wiener filter for system identification
Given a known input signal s[n], the output of an unknown LTI system x[n] can be expressed as:

    x[n] = \sum_{k=0}^{N-1} h_k s[n-k] + w[n]

where h_k are the unknown filter tap coefficients and w[n] is noise.
The model of the system, \hat{x}[n], using a Wiener filter solution of order N, can be expressed as:

    \hat{x}[n] = \sum_{k=0}^{N-1} \hat{h}_k s[n-k]

where \hat{h}_k are the filter tap coefficients to be determined.
The error between the model and the unknown system can be expressed as:

    e[n] = x[n] - \hat{x}[n]
The total error E can be expressed as:

    E = \sum_{n=-\infty}^{\infty} e^2[n]
Use the MMSE criterion over all n by setting the gradient of E to zero:

    \nabla E = 0

which is

    \frac{\partial E}{\partial \hat{h}_i} = 0 \qquad \text{for all } i = 0, 1, 2, \ldots, N-1
Substitute the definition of e[n]:

    \frac{\partial}{\partial \hat{h}_i} \sum_n \left[ x[n] - \sum_{k=0}^{N-1} \hat{h}_k s[n-k] \right]^2 = 0
Distribute the partial derivative:

    \sum_n 2 \left[ x[n] - \sum_{k=0}^{N-1} \hat{h}_k s[n-k] \right] \bigl( -s[n-i] \bigr) = 0
Using the definition of discrete cross-correlation, R_{xy}(i) = \sum_n x[n]\, y[n-i]:

    -R_{xs}(i) + \sum_{k=0}^{N-1} \hat{h}_k R_{ss}(i-k) = 0
Rearrange the terms:

    \sum_{k=0}^{N-1} \hat{h}_k R_{ss}(i-k) = R_{xs}(i) \qquad \text{for all } i = 0, 1, 2, \ldots, N-1
This system of N equations in the N unknowns \hat{h}_k can then be solved for the Wiener filter coefficients.
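As a sketch of how these normal equations can be solved numerically, the following builds the empirical correlations and solves the resulting linear system. The unknown system h_true, the signal lengths, and the noise level are illustrative assumptions, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown FIR system and signals (illustrative values).
h_true = np.array([0.5, -0.3, 0.1])            # unknown taps h_k
N = len(h_true)                                # model order
s = rng.standard_normal(10_000)                # known input s[n]
x = np.convolve(s, h_true)[: len(s)]           # unknown-system output x[n]
x = x + 0.01 * rng.standard_normal(len(s))     # plus observation noise w[n]

def corr(a, b, lag):
    """Empirical correlation sum_n a[n] * b[n - lag] over valid n."""
    if lag < 0:
        return corr(b, a, -lag)
    return np.dot(a[lag:], b[: len(b) - lag])

# Wiener-Hopf normal equations: sum_k h_hat[k] * R_ss(i - k) = R_xs(i)
R_ss = np.array([[corr(s, s, i - k) for k in range(N)] for i in range(N)])
R_xs = np.array([corr(x, s, i) for i in range(N)])

h_hat = np.linalg.solve(R_ss, R_xs)
print(h_hat)  # close to h_true for white input and light noise
```

For white input, R_ss is close to a scaled identity matrix, so the solve is well conditioned and h_hat lands near h_true.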
Derivation of the LMS algorithm
By relaxing the infinite sum of the Wiener filter to just the error at time n, the LMS algorithm can be derived.
The squared error can be expressed as:

    E = e^2[n] = \bigl( x[n] - \hat{x}[n] \bigr)^2
Using the minimum mean-square error criterion, take the gradient:

    \frac{\partial E}{\partial \hat{h}_i} = \frac{\partial}{\partial \hat{h}_i} \bigl( x[n] - \hat{x}[n] \bigr)^2
Apply the chain rule and substitute the definition of \hat{x}[n]:

    \frac{\partial E}{\partial \hat{h}_i} = 2 \bigl( x[n] - \hat{x}[n] \bigr) \frac{\partial}{\partial \hat{h}_i} \bigl( x[n] - \hat{x}[n] \bigr) = -2\, e[n]\, s[n-i]
From this equation, we can derive an update equation for each \hat{h}_i at every new n using gradient descent and a step size \mu:

    \hat{h}_i[n+1] = \hat{h}_i[n] - \frac{\mu}{2} \frac{\partial E}{\partial \hat{h}_i}

which becomes, for i = 0, 1, \ldots, N-1,

    \hat{h}_i[n+1] = \hat{h}_i[n] + \mu\, e[n]\, s[n-i]
This is the LMS update equation.
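The update equation above can be sketched as a sample-by-sample loop. As before, the unknown system h_true, the step size, the signal lengths, and the noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Same illustrative setup: unknown FIR system, known input, noisy output.
h_true = np.array([0.5, -0.3, 0.1])
N = len(h_true)
s = rng.standard_normal(20_000)
x = np.convolve(s, h_true)[: len(s)] + 0.01 * rng.standard_normal(len(s))

mu = 0.01                 # step size (must be small enough for stability)
h_hat = np.zeros(N)       # tap estimates, refined at every sample

for n in range(N - 1, len(s)):
    window = s[n - N + 1 : n + 1][::-1]  # s[n], s[n-1], ..., s[n-N+1]
    e = x[n] - h_hat @ window            # instantaneous error e[n]
    h_hat += mu * e * window             # h_i <- h_i + mu * e[n] * s[n-i]

print(h_hat)  # approaches the Wiener solution (and h_true here)
```

Unlike the Wiener solution, no correlation matrix is formed or inverted; each sample costs only O(N) work, at the price of gradual convergence governed by \mu.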
References
- J.G. Proakis and D.G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applications, Prentice-Hall, 4th ed., 2007.