Autocovariance

From Wikipedia, the free encyclopedia

In statistics, given a real-valued stochastic process X(t), the autocovariance is the covariance of the process against a time-shifted version of itself. If each state of the series has a mean, E[X_t] = \mu_t, then the autocovariance is given by

K_\mathrm{XX}(t,s) = E[(X_t - \mu_t)(X_s - \mu_s)] = E[X_t \cdot X_s] - \mu_t \cdot \mu_s,

where E is the expectation operator.
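For a finite sample, the autocovariance at a given lag is usually estimated from deviations about the sample mean. A minimal sketch in Python (the function name is our own illustration; it assumes a single observed realisation of the process):

```python
import numpy as np

def sample_autocovariance(x, lag):
    """Biased sample estimate of K_XX(lag); an illustrative helper."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mu = x.mean()
    # Average the lagged products of deviations from the sample mean; dividing
    # by n (rather than n - lag) gives the conventional biased estimator.
    return np.sum((x[:n - lag] - mu) * (x[lag:] - mu)) / n

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k0 = sample_autocovariance(x, 0)  # lag 0 recovers the (biased) sample variance: 2.0
k1 = sample_autocovariance(x, 1)  # 0.8
```

At lag 0 the estimate reduces to the sample variance, consistent with K_XX(0) = E[(X_t - \mu)^2].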



Stationarity

If X(t) is wide-sense stationary, then the following conditions hold:

\mu_t = \mu_s = \mu \quad \text{for all } t, s

and

K_\mathrm{XX}(t,s) = K_\mathrm{XX}(s-t) = K_\mathrm{XX}(\tau)

where

\tau = s - t

is the lag time, i.e. the amount of time by which the signal has been shifted.

As a result, the autocovariance becomes

K_\mathrm{XX}(\tau) = E[(X(t) - \mu)(X(t+\tau) - \mu)] = E[X(t) \cdot X(t+\tau)] - \mu^2 = R_\mathrm{XX}(\tau) - \mu^2,

where R_\mathrm{XX} denotes the autocorrelation.
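For a wide-sense stationary sample, the identity K_XX(\tau) = R_XX(\tau) - \mu^2 can be checked numerically. A sketch in Python; the synthetic series and the estimators are our own assumptions (i.i.d. Gaussian draws form a trivially stationary process):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=10_000)  # stationary series, mu = 2
n, tau = len(x), 3
mu = x.mean()

# Raw autocorrelation estimate: R_XX(tau) = E[X(t) * X(t + tau)]
r = np.sum(x[:n - tau] * x[tau:]) / n
# Autocovariance estimate: K_XX(tau) = E[(X(t) - mu)(X(t + tau) - mu)]
k = np.sum((x[:n - tau] - mu) * (x[tau:] - mu)) / n

# Up to edge effects of order tau/n, the two estimates differ by mu^2.
diff = abs(k - (r - mu**2))
```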

Normalization

When normalized by dividing by the variance σ², the autocovariance becomes the autocorrelation coefficient ρ. That is,

\rho_\mathrm{XX}(\tau) = \frac{K_\mathrm{XX}(\tau)}{\sigma^2}.

Note, however, that some disciplines use the terms autocovariance and autocorrelation interchangeably.

The autocovariance can be thought of as a measure of how similar a signal is to a time-shifted version of itself, with an autocovariance of σ² indicating perfect correlation at that lag. Normalization by the variance puts this measure into the range [−1, 1].
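The normalization above can be sketched in Python; the helper name is our own, and it reuses the biased sample estimates (dividing by n) for both numerator and denominator so that the lag-0 coefficient is exactly 1:

```python
import numpy as np

def autocorr_coefficient(x, lag):
    """rho_XX(lag) = K_XX(lag) / sigma^2, using biased sample estimates.

    Illustrative helper, not a standard library function.
    """
    x = np.asarray(x, dtype=float)
    n, mu = len(x), x.mean()
    k = np.sum((x[:n - lag] - mu) * (x[lag:] - mu)) / n
    return k / x.var()  # ndarray.var is the biased (divide-by-n) variance

x = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
rho0 = autocorr_coefficient(x, 0)  # 1.0: any signal is perfectly correlated with itself
rho1 = autocorr_coefficient(x, 1)  # negative: the series alternates at lag 1
```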

