Hilbert transform

From Wikipedia, the free encyclopedia

Figure: The Hilbert transform, in red, of a square wave, in blue.

In mathematics and in signal processing, the Hilbert transform, here denoted \mathcal{H}, of a real-valued function s(t) is obtained by convolving s(t) with 1/(\pi t) to obtain \widehat s(t). The Hilbert transform \widehat s(t) can therefore be interpreted as the output of a linear time-invariant system whose input is s(t) and whose impulse response is 1/(\pi t). It is a useful mathematical tool for describing the complex envelope of a real-valued, carrier-modulated signal in communication theory (see the applications below).

The Hilbert transform is named after the mathematician David Hilbert.


Definition

The definition of the Hilbert transform is as follows:

\widehat s(t) = \mathcal{H}\{s\}(t) = (h*s)(t) = \int_{-\infty}^{\infty} s(\tau)\, h(t-\tau)\, d\tau = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{s(\tau)}{t-\tau}\, d\tau,

where

h(t) = \frac{1}{\pi t}\,

and the integral is taken as a Cauchy principal value (which avoids the singularity at \tau = t and gives meaning to the improper limits \tau=\pm \infty). It can be shown that if s\in L^p(\mathbb{R}), then \mathcal{H}(s) is defined and belongs to L^p(\mathbb{R}) for 1<p<\infty.
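
As a rough numerical illustration of this principal-value definition (a sketch added here, not part of the original article; the function name hilbert_pv and all numerical parameters are arbitrary choices), one can approximate \widehat s(t) at a single point by integrating on a grid that is symmetric about the singularity:

    import numpy as np

    def hilbert_pv(s, t, half_width=200.0, n=400001, eps=1e-9):
        """Crude numerical Hilbert transform of s at a single time t.

        Approximates the Cauchy principal value of
        (1/pi) * integral of s(tau)/(t - tau) d(tau)
        on [t - half_width, t + half_width], using a grid symmetric
        about tau = t so the singular contributions cancel in pairs."""
        tau = t + np.linspace(-half_width, half_width, n)
        d = t - tau
        mask = np.abs(d) > eps              # drop the singular sample at tau = t
        integrand = np.zeros_like(tau)
        integrand[mask] = s(tau[mask]) / d[mask]
        dtau = tau[1] - tau[0]
        return integrand.sum() * dtau / np.pi

    # The examples table below lists H{cos}(t) = sin(t); the two values should agree closely.
    print(hilbert_pv(np.cos, 1.0), np.sin(1.0))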

See also: Lp space

Frequency response

The Hilbert transform has a frequency response given by the Fourier transform:

H(\omega) = \mathcal{F}\{h\}(\omega) = -i\cdot \sgn(\omega),

where \sgn(\omega) is the signum function and i is the imaginary unit. Since

\mathcal{F}\{\widehat s\}(\omega) = H(\omega )\cdot \mathcal{F}\{s\}(\omega),

the Hilbert transform has the effect of shifting the negative frequency components of s(t) by +90° and the positive frequency components by −90°.
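
As a quick check of this phase-shift interpretation (a standard calculation added here for illustration, using the convention \mathcal{F}\{s\}(\omega)=\int s(t)\,e^{-i\omega t}\,dt), consider s(t) = \cos(\omega_0 t) with \omega_0 > 0. Its Fourier transform is

\mathcal{F}\{\cos(\omega_0 t)\}(\omega) = \pi\left[\delta(\omega-\omega_0) + \delta(\omega+\omega_0)\right],

and multiplying by H(\omega) = -i\cdot \sgn(\omega) gives

-i\pi\,\delta(\omega-\omega_0) + i\pi\,\delta(\omega+\omega_0) = \mathcal{F}\{\sin(\omega_0 t)\}(\omega),

so the Hilbert transform of \cos(\omega_0 t) is \sin(\omega_0 t): the positive-frequency component has been delayed by 90°, consistent with the table of examples below.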

Inverse Hilbert transform

Note also that H^2(\omega) = -1 for \omega \ne 0. So multiplying the above equation by -H(\omega) gives

\mathcal{F}\{s\}(\omega) = -H(\omega )\cdot \mathcal{F}\{\widehat s\}(\omega)

from which the inverse Hilbert transform is apparent:

s(t) = -(h * \widehat s)(t) = -\mathcal{H}\{\widehat s\}(t).\,

Hilbert transform examples

Note: some authors (e.g., Bracewell) use -\mathcal{H} (as defined here) as their definition of the forward transform; with that convention, the right column of this table would be negated.

Signal s(t)                               Hilbert transform \mathcal{H}\{s\}(t)
\sin(t)                                   -\cos(t)
\cos(t)                                   \sin(t)
\frac{1}{t^2 + 1}                         \frac{t}{t^2 + 1}
\frac{\sin(t)}{t} (sinc function)         \frac{1 - \cos(t)}{t}
\sqcap(t) (rectangular function)          \frac{1}{\pi}\ln\left|\frac{t + \tfrac{1}{2}}{t - \tfrac{1}{2}}\right|
\delta(t) (Dirac delta function)          \frac{1}{\pi t}

Narrowband model

Many signals can be accurately modeled as the product of a bandlimited "message" waveform, s_m(t)\,, and a sinusoidal "carrier":

s(t) = s_m(t) \cdot \cos(\omega t + \varphi)\,

When s_m(t)\, has no frequency content above the carrier frequency, \frac{\omega}{2\pi} Hz, then:

\widehat{s}(t) = s_m(t) \cdot \sin(\omega t + \varphi)
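
This relation can be checked directly for a single-tone message (a standard trigonometric calculation, added here for illustration): take s_m(t) = \cos(\omega_m t) with 0 < \omega_m < \omega. Then

s(t) = \cos(\omega_m t)\cos(\omega t + \varphi) = \tfrac{1}{2}\cos\left((\omega-\omega_m)t + \varphi\right) + \tfrac{1}{2}\cos\left((\omega+\omega_m)t + \varphi\right),

and since both frequencies \omega \pm \omega_m are positive, each cosine is shifted by −90°:

\widehat s(t) = \tfrac{1}{2}\sin\left((\omega-\omega_m)t + \varphi\right) + \tfrac{1}{2}\sin\left((\omega+\omega_m)t + \varphi\right) = \cos(\omega_m t)\sin(\omega t + \varphi),

which is s_m(t)\cdot\sin(\omega t + \varphi), as claimed.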

So, the Hilbert transform may be as simple as a circuit that produces a 90° phase shift at the carrier frequency. Furthermore:

(\omega t + \varphi)_{\mathrm{mod}\, 2 \pi} = \arctan\left({\widehat s(t) \over s(t)}\right)

(interpreting \arctan as the four-quadrant inverse tangent and assuming s_m(t) \ge 0), from which one can reconstruct the carrier waveform. The message can then be extracted from s(t) by coherent demodulation.

Analytic representation

The analytic representation of a signal is defined in terms of the Hilbert transform:

s_a(t) = s(t) + i\cdot \widehat s(t)\,

For example, for the narrowband model above, the analytic representation is:

s_a(t)\, = s_m(t) \cdot \cos(\omega t + \varphi) + i\cdot s_m(t) \cdot \sin(\omega t + \varphi)\,
= s_m(t) \cdot \left[\cos(\omega t + \varphi) + i\cdot \sin(\omega t + \varphi)\right]\,
= s_m(t) \cdot e^{i(\omega t + \varphi)}\,   (by Euler's formula)

This complex heterodyne operation shifts all the frequency components of s_m(t) above 0 Hz. In that case, the imaginary part of the result is the Hilbert transform of the real part, so the analytic representation provides an indirect way to compute Hilbert transforms.
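
In discrete time this is a common way to obtain \widehat s[n] in practice. The sketch below (illustrative only; the signal, sampling grid, and function name analytic_signal are arbitrary choices, and scipy.signal.hilbert provides the same construction) builds the analytic signal by zeroing the negative-frequency bins of the DFT:

    import numpy as np

    def analytic_signal(x):
        """Analytic representation of a real sequence x via the DFT:
        zero the negative-frequency bins, double the positive ones,
        and keep the DC (and Nyquist) bins unchanged."""
        N = len(x)
        X = np.fft.fft(x)
        w = np.zeros(N)
        w[0] = 1.0
        if N % 2 == 0:
            w[N // 2] = 1.0
            w[1:N // 2] = 2.0
        else:
            w[1:(N + 1) // 2] = 2.0
        return np.fft.ifft(X * w)

    # Narrowband AM example: the imaginary part approximates the Hilbert transform,
    # and the magnitude recovers the message envelope s_m(t) (here s_m > 0).
    t = np.arange(4096) / 4096.0
    s_m = 1.0 + 0.5 * np.cos(2 * np.pi * 5 * t)       # slowly varying message
    s = s_m * np.cos(2 * np.pi * 200 * t)             # carrier at 200 cycles
    s_a = analytic_signal(s)
    print(np.max(np.abs(np.abs(s_a) - s_m)))          # ~1e-13 for this leakage-free example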

While the analytic representation of a signal is not necessarily an analytic function, there is a connection to analytic functions here, which is in fact how the Hilbert transform arose historically. The idea is as follows. Starting with a function

f:\mathbb{R}\to\mathbb{R}

one may extend this to a harmonic function on \mathbb{R}^2_+, the upper half plane, by convolving with the Poisson kernel. Every harmonic function is the real part of some analytic function. We then consider the imaginary part of this analytic function, specifically its values along the boundary. It turns out that the boundary values are \mathcal{H}(f). It then follows that the analytic function may be described as the Poisson integral of f+i\mathcal{H}(f).
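
Concretely (a standard formulation, added here for completeness), the harmonic extension is u(x,y) = (P_y * f)(x), where

P_y(x) = \frac{1}{\pi}\,\frac{y}{x^2 + y^2}

is the Poisson kernel, and the corresponding analytic function on the upper half plane is

F(x+iy) = (P_y * f)(x) + i\,(Q_y * f)(x), \qquad Q_y(x) = \frac{1}{\pi}\,\frac{x}{x^2 + y^2}.

As y \to 0^+, the conjugate kernel Q_y(x) tends to the Hilbert kernel 1/(\pi x), and (Q_y * f)(x) \to \mathcal{H}(f)(x), which are exactly the boundary values described above.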

Practical considerations

The function h with h(t) = 1/(π t) is a non-causal filter and therefore cannot be implemented as is if s is a time-dependent signal. If s is a function of a non-temporal variable (e.g., a spatial one), the non-causality might not be a problem. The filter also has infinite support, which may be a problem in certain applications. Another issue relates to the behavior at zero frequency (DC), which can be avoided by ensuring that s contains no DC component.

In many cases a practical implementation approximates the computation with a finite-support filter, which in addition is made causal by a suitable delay. The approximation may also mean that only a limited frequency range receives the characteristic phase shift of the Hilbert transform. See also quadrature filter.

Discrete Hilbert transform

If the signal s(t)\, is bandlimited, then \widehat s(t) is bandlimited in the same way. Consequently, both these signals can be sampled according to the sampling theorem, resulting in the discrete signals s[n]\, and \widehat{s}[n]. The relation between the two discrete signals is then given by the convolution:

\widehat{s}[n] = h[n] * s[n]\,

where

h[n]= \begin{cases} 0, & \mbox{for }n\mbox{ even},\\ \frac{2}{\pi n}, & \mbox{for }n\mbox{ odd}, \end{cases}

which is non-causal and has infinite duration. In practice, a shortened and time-shifted approximation is used; the usual filter-design tradeoffs apply (e.g., filter order and latency vs. frequency response). Note also that h[n] is not simply a sampled version of the continuous-time filter h(t) defined above. Rather, it is the sequence with this discrete-time Fourier transform:

H(e^{i\omega}) =  \begin{cases} +i, & -\pi \leq \omega < 0 \\ -i, & 0 \leq \omega < \pi \end{cases}
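
As an illustration of the shortened, time-shifted approximation mentioned above (a sketch only; the filter length, the Hamming window, and the test signal are arbitrary choices not taken from the article), one can truncate h[n] to 2M+1 taps, taper it with a window, and delay it by M samples to make it causal:

    import numpy as np

    def hilbert_fir(M):
        """Causal FIR approximation of the ideal filter h[n]:
        truncate to n = -M..M, taper with a Hamming window, delay by M samples."""
        n = np.arange(-M, M + 1)
        h = np.where(n % 2 == 0, 0.0, 2.0 / (np.pi * np.where(n == 0, 1, n)))
        return h * np.hamming(2 * M + 1)      # array index k corresponds to n = k - M

    # Applied to a cosine in the filter's passband, the output approximates
    # a sine delayed by M samples (after the initial transient).
    M = 31
    fs, f0 = 1000.0, 100.0
    t = np.arange(1024) / fs
    x = np.cos(2 * np.pi * f0 * t)
    y = np.convolve(x, hilbert_fir(M))[:len(x)]   # y[k] ~ sin(2*pi*f0*(t[k] - M/fs))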

We note that a sequence similar to h[n] can be generated by sampling H(e^{i\omega}) and computing the inverse discrete Fourier transform. The larger the transform (i.e., the more samples per radian), the better the agreement (for a given value of the abscissa, n). The figure shows the comparison for a 512-point transform. (Due to odd symmetry, only half the sequence is actually plotted.)
But that is not the main point, because it is easier and more accurate to generate h[n] directly from the formula. The point is that many applications choose to avoid the convolution altogether by performing the equivalent frequency-domain operation: simple multiplication of the signal transform by H(e^{i\omega}), made even easier by the fact that its real and imaginary components are 0 and ±1, respectively. That approach only becomes practical when the actual Fourier transforms are replaced by samples of them, i.e., by the DFT, which is an approximation and introduces some distortion. Thus, after transforming back to the time domain, those applications have indirectly generated (and convolved with) not h[n], but the DFT approximation to it, which is shown in the figure.
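
A minimal sketch of that frequency-domain route (illustrative only; the function name hilbert_via_dft and the test tone are arbitrary choices, and the segmentation issues listed below still apply to long signals):

    import numpy as np

    def hilbert_via_dft(x):
        """Multiply the DFT of x by samples of H(e^{i*omega}) = -i*sgn(omega).
        Equivalent to circular convolution with the DFT approximation of h[n].
        The DC (and, for even lengths, Nyquist) bins are left at zero so the
        result stays real."""
        N = len(x)
        X = np.fft.fft(x)
        H = np.zeros(N, dtype=complex)
        if N % 2 == 0:
            H[1:N // 2] = -1j          # 0 < omega < pi
            H[N // 2 + 1:] = 1j        # -pi < omega < 0
        else:
            H[1:(N + 1) // 2] = -1j
            H[(N + 1) // 2:] = 1j
        return np.fft.ifft(X * H).real

    # Sanity check: for a tone with an integer number of cycles in the window,
    # the circular convolution coincides with the ideal result cos -> sin.
    N, k = 256, 8
    n = np.arange(N)
    print(np.max(np.abs(hilbert_via_dft(np.cos(2 * np.pi * k * n / N))
                        - np.sin(2 * np.pi * k * n / N))))   # ~1e-15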

Notes on fast convolution:

  • Implied in the technique described above is the concept of dividing a long signal into segments of arbitrary size. The signal is filtered piecewise, and the outputs are subsequently pieced back together.
  • The segment size is an important factor in controlling the amount of distortion. As the size increases, the DFT becomes more dense and is a better approximation to the underlying Fourier transform. In the time-domain, the same distortion is manifested as "aliasing", which results in a type of convolution called circular. It is as if the same segment is repeated periodically and filtered, resulting in distortion that is worst at either or both edges of the original segment. Increasing the segment size reduces the number of edges in the pieced-together result and therefore reduces overall distortion.
  • Another mitigation strategy is simply to discard the most badly distorted output samples; data loss can be avoided by overlapping the input segments (a sketch of this overlap-save approach follows this list). When the filter's impulse response is shorter than the segment length, this produces a distortion-free (non-circular) convolution. That of course requires an FIR filter, which the Hilbert transform is not, so yet another technique is to design an FIR approximation to a Hilbert transform filter. That moves the source of distortion from the convolution to the filter, where it can readily be characterized in terms of imperfections in the frequency response.
  • Failure to appreciate or correctly apply these concepts is probably one of the most common mistakes made by non-experts in the digital signal processing field.
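
A minimal overlap-save sketch along these lines (illustrative only; the segment length, the FIR length, the Hamming window, and the test tone are arbitrary choices, and the FIR taps are the same windowed approximation sketched earlier):

    import numpy as np

    def overlap_save(x, h, L):
        """Filter x with FIR taps h using overlap-save segments of FFT size L.
        Each segment's first len(h)-1 outputs are circularly distorted and
        are discarded; the remaining outputs match the linear convolution."""
        M = len(h)
        step = L - (M - 1)                     # new output samples per segment
        H = np.fft.fft(h, L)
        pad_back = (-len(x)) % step
        xp = np.concatenate([np.zeros(M - 1), x, np.zeros(pad_back)])
        y = []
        for start in range(0, len(xp) - (M - 1), step):
            seg = xp[start:start + L]
            out = np.fft.ifft(np.fft.fft(seg) * H).real
            y.append(out[M - 1:])              # keep only the undistorted samples
        return np.concatenate(y)[:len(x)]

    # 63-tap windowed FIR Hilbert approximation (group delay of 31 samples).
    n = np.arange(-31, 32)
    h = np.where(n % 2 == 0, 0.0, 2.0 / (np.pi * np.where(n == 0, 1, n))) * np.hamming(63)

    x = np.cos(2 * np.pi * 0.1 * np.arange(2000))
    y = overlap_save(x, h, L=256)   # ~ sin(2*pi*0.1*(k - 31)) once past the transient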

See also

References

  • Bracewell, R. (1986). The Fourier Transform and Its Applications, 2nd ed. McGraw-Hill.
  • Carlson, A. B., Crilly, P. B., and Rutledge, J. C. (2002). Communication Systems, 4th ed. McGraw-Hill.

External links
