Matched filter

A matched filter is obtained by correlating a known signal, or template, with an unknown signal to detect the presence of the template in the unknown signal. This is equivalent to convolving the unknown signal with a time-reversed version of the template (cf. convolution). The matched filter is the optimal linear filter for maximizing the signal-to-noise ratio (SNR) in the presence of additive stochastic noise. Matched filters are commonly used in radar, in which a known signal is sent out and the reflected signals are examined for components similar to what was transmitted. Pulse compression is an example of matched filtering. Two-dimensional matched filters are commonly used in image processing, e.g., to improve the SNR of X-ray images.
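
As a rough illustration of this correlation view, the following sketch (Python with NumPy; the template shape, pulse location, and noise level are assumed purely for illustration) detects a known pulse buried in white noise by correlating the noisy recording with the template:

import numpy as np

# Minimal sketch: detect a known template in a noisy recording by matched
# filtering. With white noise, the matched filter is simply the time-reversed
# (conjugate) template, so correlating with the template does the job.
rng = np.random.default_rng(0)

template = np.sin(2 * np.pi * 0.05 * np.arange(64))      # known pulse shape (assumed)
signal = np.zeros(512)
signal[200:264] += template                              # pulse hidden starting at sample 200
noisy = signal + 0.5 * rng.standard_normal(signal.size)  # additive white noise

# Correlating with the template == convolving with its time-reversed conjugate.
output = np.correlate(noisy, template, mode="valid")

print("estimated pulse location:", np.argmax(np.abs(output)))  # peak expected near 200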

Derivation of the matched filter

The matched filter is the linear filter, h, that maximizes the output signal-to-noise ratio.

\ y[n] = \sum_{k=-\infty}^{\infty} h[n-k] x[k].

Though we most often express filters as the impulse response of convolution systems, as above (see LTI system theory), it is easiest to think of the matched filter in the context of the inner product, which we will see shortly.

We can derive the linear filter that maximizes the output signal-to-noise ratio by invoking a geometric argument. The intuition behind the matched filter relies on correlating the received signal (a vector) with a filter (another vector) that is parallel to the signal, maximizing the inner product. This enhances the signal. When we consider the additive stochastic noise, we have the additional challenge of minimizing the output due to noise by choosing a filter that is orthogonal to the noise.

Let us formally define the problem. We seek a filter, h, such that we maximize the output signal-to-noise ratio, where the output is the inner product of the filter and the observed signal x.

Our observed signal consists of the desirable signal s and additive noise v:

\ x=s+v.\,

Let us define the covariance matrix of the noise, reminding ourselves that this matrix has Hermitian symmetry, a property that will become useful in the derivation:

\ R_v=E\{ vv^H \}\,

where the superscript H denotes the Hermitian (conjugate) transpose. Let us call our output, y, the inner product of our filter and the observed signal such that

\ y = \sum_{k=-\infty}^{\infty} h^*[k] x[k] = h^Hx = h^Hs + h^Hv = y_s + y_v.

We now define the signal-to-noise ratio, which is our objective function, to be the ratio of the power of the output due to the desired signal to the power of the output due to the noise:

\ SNR = \frac{|y_s|^2}{E\{|y_v|^2\}}.

We rewrite the above:

\ SNR = \frac{|h^Hs|^2}{E\{|h^Hv|^2\}}.

We wish to maximize this quantity by choosing h. Expanding the denominator of our objective function, we have

\ E\{ |h^Hv|^2 \} = E\{ (h^Hv){(h^Hv)}^H \} = h^H E\{vv^H\} h = h^HR_vh.\,

Now, our SNR becomes

\ SNR = \frac{ |h^Hs|^2 }{ h^HR_vh }.
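
As a quick numerical sanity check of this expansion (the covariance matrix and filter below are toy values assumed only for illustration), the simulated output noise power closely matches h^H R_v h:

import numpy as np

# Monte Carlo check that E{|h^H v|^2} equals h^H R_v h for correlated noise.
rng = np.random.default_rng(1)

R_v = np.array([[2.0, 0.5],
                [0.5, 1.0]])                 # assumed noise covariance
h = np.array([0.3, -1.2])                    # an arbitrary real filter

L = np.linalg.cholesky(R_v)                  # draw noise with covariance R_v
v = L @ rng.standard_normal((2, 200_000))

empirical = np.mean(np.abs(h @ v) ** 2)
print(empirical, h @ R_v @ h)                # the two values should nearly agree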

We will rewrite this expression with some matrix manipulation. The reason for this seemingly counterproductive measure will become evident shortly. Exploiting the Hermitian symmetry of the covariance matrix R_v, we can write

\ SNR = \frac{ | {(R_v^{1/2}h)}^H (R_v^{-1/2}s) |^2 }{ {(R_v^{1/2}h)}^H (R_v^{1/2}h) }.

We would like to find an upper bound on this expression. To do so, we first recognize a form of the Cauchy-Schwarz inequality:

\ |a^Hb|^2 \leq (a^Ha)(b^Hb),\,

which is to say that the square of the inner product of two vectors can only be as large as the product of the inner products of each vector with itself (that is, the product of their squared norms). This concept returns to the intuition behind the matched filter: the upper bound is achieved when the two vectors a and b are parallel. We resume our derivation by expressing the upper bound on our SNR in light of the geometric inequality above:

\ SNR = \frac{ | {(R_v^{1/2}h)}^H (R_v^{-1/2}s) |^2 }{ {(R_v^{1/2}h)}^H (R_v^{1/2}h) } \leq \frac{ \left[ {(R_v^{1/2}h)}^H (R_v^{1/2}h) \right] \left[ {(R_v^{-1/2}s)}^H (R_v^{-1/2}s) \right] }{ {(R_v^{1/2}h)}^H (R_v^{1/2}h) }.

Our valiant matrix manipulation has now paid off. We see that the expression for our upper bound can be greatly simplified:

\ SNR = \frac{ | {(R_v^{1/2}h)}^H (R_v^{-1/2}s) |^2 }{ {(R_v^{1/2}h)}^H (R_v^{1/2}h) } \leq s^H R_v^{-1} s.
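
A small numerical illustration (again with assumed toy values for R_v and s) shows that no choice of filter exceeds this bound:

import numpy as np

# Random filters never exceed the SNR bound s^H R_v^{-1} s.
rng = np.random.default_rng(2)

R_v = np.array([[2.0, 0.5],
                [0.5, 1.0]])
s = np.array([1.0, 2.0])
bound = s @ np.linalg.solve(R_v, s)          # s^H R_v^{-1} s

def snr(h):
    return np.abs(h @ s) ** 2 / (h @ R_v @ h)

best_random = max(snr(rng.standard_normal(2)) for _ in range(10_000))
print(best_random <= bound)                  # True: the bound is never exceeded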

We can achieve this upper bound if we choose

\ R_v^{1/2}h = \alpha R_v^{-1/2}s

where α is an arbitrary nonzero real number. To verify this, we plug into our expression for the output SNR:

\ SNR = \frac{ | {(R_v^{1/2}h)}^H (R_v^{-1/2}s) |^2 }{ {(R_v^{1/2}h)}^H (R_v^{1/2}h) } = \frac{ \alpha^2 | {(R_v^{-1/2}s)}^H (R_v^{-1/2}s) |^2 }{ \alpha^2 {(R_v^{-1/2}s)}^H (R_v^{-1/2}s) } = \frac{ | s^H R_v^{-1} s |^2 }{ s^H R_v^{-1} s } = s^H R_v^{-1} s.

Thus, our optimal matched filter is

\ h = \alpha R_v^{-1}s.

We often choose to normalize the expected value of the power of the filter output due to the noise to unity. That is, we constrain

\ E\{ |y_v|^2 \} = 1.\,

This constraint implies a value of α, for which we can solve:

\ E\{ |y_v|^2 \} = h^H R_v h = \alpha^2 s^H R_v^{-1} s = 1,

yielding

\ \alpha = \frac{1}{\sqrt{s^H R_v^{-1} s}},

giving us our normalized filter,

\ h = \frac{1}{\sqrt{s^H R_v^{-1} s}} R_v^{-1}s.
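
As a quick check of this result (toy values for R_v and s assumed), the normalized filter below has unit output noise power and still attains the SNR bound:

import numpy as np

# Verify that h = R_v^{-1} s / sqrt(s^H R_v^{-1} s) gives h^H R_v h = 1
# while achieving SNR = s^H R_v^{-1} s.
R_v = np.array([[2.0, 0.5],
                [0.5, 1.0]])
s = np.array([1.0, 2.0])

Rinv_s = np.linalg.solve(R_v, s)             # R_v^{-1} s
h = Rinv_s / np.sqrt(s @ Rinv_s)

print(np.isclose(h @ R_v @ h, 1.0))                               # unit output noise power
print(np.isclose(np.abs(h @ s) ** 2 / (h @ R_v @ h), s @ Rinv_s)) # SNR equals the bound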

If we care to write the impulse response of the filter for the convolution system, it is simply the complex conjugate time reversal of h.

Though we have derived the matched filter in discrete time, we can extend the concept to continuous-time systems if we replace R_v with the continuous-time autocorrelation function of the noise, assuming a continuous signal s(t), continuous noise v(t), and a continuous filter h(t).

Alternate derivation of the matched filter

Alternatively, we may solve for the matched filter by solving our maximization problem with a Lagrangian. Again, the matched filter endeavors to maximize the output signal-to-noise ratio (SNR) of a filtered deterministic signal in stochastic additive noise. The observed sequence, again, is

\ x = s + v,\,

with the noise covariance matrix,

\ R_v = E\{vv^H\}.\,

The signal-to-noise ratio is

\ SNR = \frac{|y_s|^2}{ E\{|y_v|^2\} }.

Evaluating the expression in the numerator, we have

\ |y_s|^2 = {y_s}^H y_s = h^H s s^H h,\,

and in the denominator,

\ E\{|y_v|^2\} = E\{ {y_v}^H y_v \} = E\{ h^H v v^H h \} = h^H R_v h.\,

The signal-to-noise ratio becomes

\ SNR = \frac{h^H s s^H h}{ h^H R_v h }.

If we now constrain the denominator to be 1, the problem of maximizing SNR is reduced to maximizing the numerator. We can then formulate the problem using a Lagrange multiplier:

\ h^H R_v h = 1
\ \mathcal{L} = h^H s s^H h + \lambda (1 - h^H R_v h )
\ \nabla_{h^*} \mathcal{L} = s s^H h - \lambda R_v h = 0
\ (s s^H) h = \lambda R_v h

which we recognize as a generalized eigenvalue problem. Left-multiplying by h^H and applying the constraint h^H R_v h = 1 shows that the achieved SNR is simply the eigenvalue:

\ h^H (s s^H) h = \lambda h^H R_v h = \lambda.

Since s s^H is of unit rank, it has only one nonzero eigenvalue, which is therefore the maximum. Substituting the candidate eigenvector h = R_v^{-1}s into the eigenvalue equation gives s (s^H R_v^{-1} s) = λ R_v R_v^{-1} s = λ s, so this eigenvalue equals

\ \lambda_{\max} = s^H R_v^{-1} s,

yielding the following optimal matched filter

\ h = \frac{1}{\sqrt{s^H R_v^{-1} s}} R_v^{-1} s.

This is the same result found in the previous section.
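
The eigenvalue argument can also be checked numerically (with assumed toy values): R_v^{-1} s s^H has a single nonzero eigenvalue equal to s^H R_v^{-1} s, and h proportional to R_v^{-1} s satisfies the generalized eigenvalue equation:

import numpy as np

# Numerical check of the generalized eigenvalue problem (s s^H) h = lambda R_v h.
R_v = np.array([[2.0, 0.5],
                [0.5, 1.0]])
s = np.array([1.0, 2.0])

Rinv_s = np.linalg.solve(R_v, s)             # candidate eigenvector R_v^{-1} s
lam = s @ Rinv_s                             # claimed maximal eigenvalue s^H R_v^{-1} s

# h = R_v^{-1} s satisfies (s s^H) h = lam * R_v h ...
print(np.allclose(np.outer(s, s) @ Rinv_s, lam * (R_v @ Rinv_s)))
# ... and R_v^{-1} s s^H has eigenvalues {lam, 0}.
print(np.allclose(sorted(np.linalg.eigvals(np.outer(Rinv_s, s)).real), [0.0, lam]))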

Example of matched filter in radar and sonar

Matched filters are often used in signal detection (see detection theory). As an example, suppose that we wish to judge the distance of an object by reflecting a signal off it. We may choose to transmit a pure-tone sinusoid at 1 Hz. We assume that our received signal is an attenuated and phase-shifted form of the transmitted signal with added noise.

To judge the distance of the object, we correlate the received signal with a matched filter, which, in the case of white (uncorrelated) noise, is another pure-tone 1-Hz sinusoid. When the output of the matched-filter system exceeds a certain threshold, we conclude with high probability that the received signal has been reflected off the object. Using the speed of propagation and the time at which we first observe the reflected signal, we can estimate the distance of the object. If we change the shape of the pulse in a specially designed way, the signal-to-noise ratio and the distance resolution can even be improved after matched filtering: this technique is known as pulse compression.

Additionally, matched filters can be used in parameter estimation problems (see estimation theory). To return to our previous example, we may desire to estimate the speed of the object, in addition to its position. To exploit the Doppler effect, we would like to estimate the frequency of the received signal. To do so, we may correlate the received signal with several matched filters of sinusoids at varying frequencies. The matched filter with the highest output will reveal, with high probability, the frequency of the reflected signal and help us determine the speed of the object. This method is, in fact, a simple version of the discrete Fourier transform (DFT). The DFT takes an N-valued complex input and correlates it with N matched filters, corresponding to complex exponentials at N different frequencies, to yield N complex-valued numbers corresponding to the relative amplitudes and phases of the sinusoidal components.
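
The sketch below illustrates this frequency-estimation idea (sample rate, noise level, and the candidate frequency grid are all assumed for illustration): the received tone is correlated with a bank of complex-exponential templates, and the frequency whose matched-filter output is largest is taken as the estimate.

import numpy as np

# Estimate the frequency of a noisy tone with a bank of sinusoidal matched filters.
rng = np.random.default_rng(3)

fs = 1000.0                                   # sample rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
true_freq = 37.0                              # the "unknown" Doppler-shifted frequency
received = np.cos(2 * np.pi * true_freq * t + 0.7) + 1.5 * rng.standard_normal(t.size)

candidates = np.arange(10.0, 60.0, 1.0)       # frequencies of the filter bank
templates = np.exp(2j * np.pi * np.outer(candidates, t))    # complex exponentials
outputs = np.abs(templates.conj() @ received)               # |inner products|

print("estimated frequency:", candidates[np.argmax(outputs)])  # peak expected at ~37 Hz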

Example of matched filter in digital communications

The matched filter is also used in communications. In the context of a communication system that sends binary messages from the transmitter to the receiver across a noisy channel, a matched filter can be used to detect the transmitted pulses in the noisy received signal.

[Image: total_system.jpg — block diagram of the communication system]

Imagine that we want to send the sequence "11011000100", coded in polar return-to-zero (RZ) code, through a certain channel.

Mathematically, a sequence in RZ code can be described as a sequence of unit pulses or shifted rect functions, each pulse being weighted by +1 if the bit is "1" and -1 if the bit is "0". Formally, the scaling factor for the kth bit is,

\ a_k = \begin{cases} 1, & \mbox{if bit } k \mbox{ is 1}, \\ -1, & \mbox{if bit } k \mbox{ is 0}. \end{cases}

We can represent our message, M(t), as the sum of shifted unit pulses:

\ M(t) = \sum_{k=-\infty}^\infty a_k \times \Pi \left( \frac{2(t-kT)}{T} \right),

where T is the time length of one bit. Specifically, the bit is asserted for time T / 2 and remains zero for an equal amount of time.
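
As a concrete sketch of this construction (the sampling rate and number of samples per bit are assumed purely for illustration), the message waveform can be built by concatenating the weighted pulses:

import numpy as np

# Build the polar RZ waveform for the bits "11011000100": each bit period holds
# a +1/-1 pulse for its first half and zero for its second half.
bits = "11011000100"
samples_per_bit = 20                          # samples in one bit period T (assumed)
half = samples_per_bit // 2

M = np.concatenate([
    np.r_[np.full(half, 1.0 if b == "1" else -1.0), np.zeros(half)]
    for b in bits
])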

Thus, the signal to be sent by the transmitter is

[Image: RZ_tx_signal.jpg — the transmitted RZ signal]

If we model our noisy channel as an AWGN channel, white Gaussian noise is added to the signal. At the receiver end, this may look like:

[Image: rx_signal.jpg — the received signal after the noisy channel]

A first glance will not reveal the original transmitted sequence. The power of the noise is high relative to the power of the desired signal (i.e., the signal-to-noise ratio is low). If the receiver were to sample this signal at the correct moments, the resulting binary message would likely contradict the original transmitted one.

To increase our signal-to-noise ratio, we pass the received signal through a matched filter. In this case, the filter should be matched to an RZ pulse (equivalent to a "1" coded in RZ code). Precisely, the impulse response of the ideal matched filter, assuming white (uncorrelated) noise, is a time-reversed, complex-conjugated, and scaled version of the signal that we are seeking. We choose

\ h(t) = \Pi\left( \frac{2t}{T} \right).

In this case, due to symmetry, the time-reversed complex conjugate of h(t) is in fact h(t), allowing us to call h(t) the impulse response of our matched filter convolution system.

After convolving with the correct matched filter, the resulting signal, M_filtered(t), is

\ M_\mathrm{filtered}(t) = M(t) * h(t)

[Image: filtered_signal.jpg — the matched-filter output]

The filtered signal can now be safely sampled by the receiver at the correct sampling instants, resulting in a correct interpretation of the binary message.

[Image: sampled_signal.jpg — the filtered signal sampled at the correct instants]

Since the matched filter maximizes the signal-to-noise ratio, it can be shown that it also minimizes the bit error ratio (BER), the fraction of transmitted bits that the receiver interprets incorrectly.
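
Putting the pieces of this example together, a self-contained sketch (noise level, bit duration, and sampling instants all assumed for illustration) generates the RZ waveform, adds white Gaussian noise, applies the matched filter, and samples the output to recover the bits:

import numpy as np

# End-to-end sketch of the RZ example: transmit, add AWGN, matched-filter, sample.
rng = np.random.default_rng(4)

bits = "11011000100"
spb = 20                                       # samples per bit period T (assumed)
half = spb // 2

pulse = np.ones(half)                          # RZ pulse, asserted for T/2
M = np.concatenate([
    np.r_[(1.0 if b == "1" else -1.0) * pulse, np.zeros(half)] for b in bits
])
received = M + 1.0 * rng.standard_normal(M.size)   # AWGN channel (assumed noise level)

h = pulse                                      # matched filter; the pulse is symmetric,
filtered = np.convolve(received, h)            # so no time reversal is needed

# Full overlap between the filter and bit k's pulse occurs at output index k*spb + half - 1.
samples = filtered[np.arange(len(bits)) * spb + half - 1]
decoded = "".join("1" if sample > 0 else "0" for sample in samples)

print(decoded == bits)                         # with moderate noise this should be True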
