Spectral density estimation

In statistical signal processing, the goal of spectral density estimation (SDE) is to estimate the spectral density (also known as the power spectral density) of a random signal from a sequence of time samples of the signal. Intuitively speaking, the spectral density characterizes the frequency content of the signal. One purpose of estimating the spectral density is to detect any periodicities in the data, by observing peaks at the frequencies corresponding to these periodicities.

SDE should be distinguished from the field of frequency estimation, which assumes that a signal is composed of a limited (usually small) number of generating frequencies plus noise and seeks to find the location and intensity of those frequencies. SDE makes no assumption on the number of components and seeks to estimate the whole generating spectrum.

Techniques

Techniques for spectrum estimation can generally be divided into parametric and non-parametric methods. The parametric approaches assume that the underlying stationary stochastic process has a certain structure which can be described using a small number of parameters (for example, using an auto-regressive or moving average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. By contrast, non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure.
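As a concrete illustration of the non-parametric approach, the following minimal Python sketch (assuming NumPy and SciPy are available; the 50 Hz sine, noise level, and sampling rate are illustrative assumptions) estimates the power spectral density directly from samples with a periodogram, without assuming any model:

    import numpy as np
    from scipy.signal import periodogram

    fs = 1000.0                          # assumed sampling rate, Hz
    t = np.arange(0, 1.0, 1.0 / fs)      # one second of samples
    x = np.sin(2 * np.pi * 50.0 * t) + 0.5 * np.random.randn(t.size)

    f, Pxx = periodogram(x, fs=fs)       # frequency grid and PSD estimate
    print(f[np.argmax(Pxx)])             # the peak should fall near 50 Hz

A peak in Pxx near 50 Hz is exactly the kind of periodicity detection described in the introduction.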

Techniques in common use include the periodogram and its averaged and smoothed variants (such as Bartlett's and Welch's methods), the multitaper method, and the parametric model-fitting approaches described below.

Parametric estimation

In parametric spectral estimation, one assumes that the signal is modeled by a stationary process which has a spectral density function (SDF) S(f;a_1,\ldots,a_p) that is a function of the frequency f and p parameters a_1,\ldots,a_p.[1] The estimation problem then becomes one of estimating these parameters.

The most common form of parametric SDF estimate uses as a model an autoregressive model AR(p) of order p.[1]:392 A signal sequence \{Y_t\} obeying a zero mean AR(p) process satisfies the equation

Y_t = \phi_1Y_{t-1} + \phi_2Y_{t-2} + \cdots + \phi_pY_{t-p} + \epsilon_t,

where the \phi_1,\ldots,\phi_p are fixed coefficients and \epsilon_t is a white noise process with zero mean and innovation variance \sigma^2_p. The SDF for this process is

S(f;\phi_1,\ldots,\phi_p,\sigma^2_p) 
= \frac{\sigma^2_p\Delta t}{\left| 1 - \sum_{k=1}^p \phi_k e^{-2i\pi f k \Delta t}\right|^2} \qquad |f| < f_N,

with \Delta t the sampling time interval and f_N the Nyquist frequency.
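Once the parameters are known, this SDF is straightforward to evaluate numerically. A minimal sketch in Python follows; the AR(2) coefficients, innovation variance, and sampling interval are illustrative assumptions, not values from the text:

    import numpy as np

    def ar_sdf(f, phi, sigma2, dt=1.0):
        # S(f) = sigma2 * dt / |1 - sum_k phi_k exp(-2i pi f k dt)|^2
        f = np.asarray(f, dtype=float)
        k = np.arange(1, len(phi) + 1)
        denom = 1.0 - np.exp(-2j * np.pi * np.outer(f, k)) @ np.asarray(phi)
        return sigma2 * dt / np.abs(denom) ** 2

    # Evaluate an assumed AR(2) model on a grid up to Nyquist, f_N = 1/(2 dt).
    dt = 1.0
    freqs = np.linspace(0.0, 0.5 / dt, 256, endpoint=False)
    S = ar_sdf(freqs, [0.5, -0.25], 1.0, dt)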

There are a number of approaches to estimating the parameters \phi_1,\ldots,\phi_p,\sigma^2_p of the AR(p) process and thus the spectral density:[1]:452-453

- The Yule–Walker estimators are found by recursively solving the Yule–Walker equations for an AR(p) process.
- The Burg estimators are found by treating the Yule–Walker equations as a form of ordinary least squares problem. They are generally considered superior to the Yule–Walker estimators,[1]:452 and Burg associated them with maximum entropy spectral estimation.[2]
- The forward-backward least-squares estimators treat the AR(p) process as a regression problem and solve it by the forward-backward method. They are competitive with the Burg estimators.
- The maximum likelihood estimators assume the white noise is a Gaussian process and estimate the parameters by maximizing the likelihood, which requires a nonlinear optimization and is more involved than the first three.
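As a sketch of the first of these approaches, the Yule–Walker estimates can be computed from biased sample autocovariances with a Toeplitz solve. Everything below (series length, order, simulated coefficients) is an illustrative assumption:

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def yule_walker(y, p):
        # Solve R phi = r for the AR(p) coefficients, where R is the
        # Toeplitz matrix built from autocovariances r[0..p-1] and the
        # right-hand side is r[1..p].
        y = np.asarray(y, dtype=float) - np.mean(y)
        n = y.size
        r = np.array([y[: n - k] @ y[k:] for k in range(p + 1)]) / n
        phi = solve_toeplitz(r[:p], r[1 : p + 1])
        sigma2 = r[0] - phi @ r[1 : p + 1]    # innovation variance estimate
        return phi, sigma2

    # Example on a simulated AR(2) series with assumed coefficients:
    rng = np.random.default_rng(0)
    y = np.zeros(5000)
    for t in range(2, y.size):
        y[t] = 0.5 * y[t - 1] - 0.25 * y[t - 2] + rng.standard_normal()
    phi_hat, s2_hat = yule_walker(y, 2)   # should land near (0.5, -0.25) and 1.0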

Alternative parametric methods include fitting to a moving average model (MA) and to a full autoregressive moving average model (ARMA).

Frequency estimation

Frequency estimation is the process of estimating the complex frequency components of a signal in the presence of noise, given assumptions about the number of components.[3] This contrasts with the general methods above, which do not make prior assumptions about the components.

Finite number of tones

A typical model for a signal x(n) consists of a sum of p complex exponentials in the presence of white noise, w(n)

x(n) = \sum_{i=1}^p A_i e^{j n \omega_i} + w(n).

The power spectral density of x(n) is composed of p impulse functions in addition to the spectral density function due to noise.
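For concreteness, this model is easy to simulate. In the sketch below the tone frequencies, amplitudes, and noise level are all illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    n = np.arange(128)
    omegas = np.array([0.3 * np.pi, 0.7 * np.pi])   # assumed tone frequencies
    amps = np.array([1.0, 0.8])                     # assumed amplitudes
    w = 0.1 * (rng.standard_normal(n.size)
               + 1j * rng.standard_normal(n.size))  # complex white noise
    x = (amps * np.exp(1j * np.outer(n, omegas))).sum(axis=1) + w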

The most common methods for frequency estimation involve identifying the noise subspace to extract these components. These methods are based on an eigendecomposition of the autocorrelation matrix into a signal subspace and a noise subspace. After these subspaces are identified, a frequency estimation function is used to find the component frequencies from the noise subspace. The most popular methods of noise-subspace frequency estimation are Pisarenko's method, the multiple signal classification (MUSIC) method, the eigenvector method, and the minimum norm method. In the estimation functions below, \mathbf{e} = [1, e^{j\omega}, \ldots, e^{j(M-1)\omega}]^T is the complex sinusoid vector, \mathbf{v}_i and \lambda_i are the eigenvectors and eigenvalues of the M \times M autocorrelation matrix (indexed so that \mathbf{v}_{p+1},\ldots,\mathbf{v}_M span the noise subspace), \mathbf{v}_{min} is the eigenvector with the smallest eigenvalue, \mathbf{P}_n is the projection matrix onto the noise subspace, and \mathbf{u}_1 = [1, 0, \ldots, 0]^T.

Pisarenko's method

\hat P_{PHD}(e^{j \omega}) = \frac{1}{|\mathbf{e}^{H}\mathbf{v}_{min}|^2}

MUSIC

\hat P_{MU}(e^{j \omega}) = \frac{1}{\sum_{i=p+1}^{M} |\mathbf{e}^{H} \mathbf{v}_i|^2},

Eigenvector method

\hat P_{EV}(e^{j \omega}) = \frac{1}{\sum_{i=p+1}^{M}\frac{1}{\lambda_i} |\mathbf{e}^H \mathbf{v}_i|^2}

Minimum norm method

\hat P_{MN}(e^{j \omega}) = \frac{1}{|\mathbf{e}^H \mathbf{a}|^2}, \qquad \mathbf{a} = \lambda \mathbf{P}_n \mathbf{u}_1
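As a sketch of one of these estimators, the MUSIC pseudospectrum can be computed by eigendecomposing a sample autocorrelation matrix and summing projections onto the noise eigenvectors. The snapshot length M, model order p, and test signal below are illustrative assumptions:

    import numpy as np

    def music_pseudospectrum(x, p, M, n_freq=512):
        # Build an M x M sample autocorrelation matrix from overlapping
        # length-M snapshots, then evaluate 1 / sum_i |e^H v_i|^2 over
        # the M - p noise eigenvectors v_i on a frequency grid.
        x = np.asarray(x, dtype=complex)
        snaps = np.array([x[i : i + M] for i in range(x.size - M + 1)])
        R = snaps.conj().T @ snaps / snaps.shape[0]
        eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
        noise = eigvecs[:, : M - p]                # smallest M - p eigenvectors
        omega = np.linspace(-np.pi, np.pi, n_freq)
        e = np.exp(1j * np.outer(np.arange(M), omega))   # steering vectors
        denom = np.sum(np.abs(noise.conj().T @ e) ** 2, axis=0)
        return omega, 1.0 / denom

    # Usage on two assumed tones in light complex white noise:
    rng = np.random.default_rng(0)
    n = np.arange(256)
    x = np.exp(1j * 0.3 * np.pi * n) + 0.8 * np.exp(1j * 0.7 * np.pi * n)
    x = x + 0.1 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size))
    omega, P = music_pseudospectrum(x, p=2, M=8)
    # P should peak near omega = 0.3*pi and 0.7*pi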

Single tone

If one only wants to estimate the single loudest frequency, one can use a pitch detection algorithm. If the dominant frequency changes over time, then the problem becomes the estimation of the instantaneous frequency as defined in the time–frequency representation. Methods for instantaneous frequency estimation include those based on the Wigner-Ville distribution and higher order ambiguity functions.[4]

If one wants to know all the (possibly complex) frequency components of a received signal (including transmitted signal and noise), one uses a discrete Fourier transform or some other Fourier-related transform.
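A minimal single-tone sketch along these lines simply picks the largest DFT bin. The sampling rate, tone, and duration below are illustrative assumptions, and the resolution of this estimate is limited to fs / len(x):

    import numpy as np

    fs = 8000.0                              # assumed sampling rate, Hz
    t = np.arange(0, 0.1, 1.0 / fs)          # 100 ms of samples
    x = np.sin(2 * np.pi * 440.0 * t) + 0.1 * np.random.randn(t.size)

    X = np.fft.rfft(x)                       # DFT of the real signal
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    print(freqs[np.argmax(np.abs(X))])       # dominant frequency, near 440 Hz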

References

  1. Percival, Donald B.; Walden, Andrew T. (1993). Spectral Analysis for Physical Applications. Cambridge University Press. ISBN 9780521435413.
  2. Burg, J.P. (1967). "Maximum Entropy Spectral Analysis". Proceedings of the 37th Meeting of the Society of Exploration Geophysicists. Oklahoma City, Oklahoma.
  3. Hayes, Monson H. (1996). Statistical Digital Signal Processing and Modeling. John Wiley & Sons. ISBN 0-471-59431-8.
  4. Lerga, Jonatan. "Overview of Signal Instantaneous Frequency Estimation Methods". University of Rijeka. Retrieved 22 March 2014.