Autocovariance
In statistics, given a time series or continuous signal X_t, the autocovariance is the covariance of the signal against a time-shifted version of itself. If the process has mean E[X_t] = μ_t at each time t, then the autocovariance is given by

\gamma(i,j) = E[(X_i - \mu_i)(X_j - \mu_j)] = E[X_i X_j] - \mu_i \mu_j,

where E is the expectation operator. If X_t is second-order stationary, the definition simplifies to the more familiar form

\gamma(k) = E[(X_i - \mu)(X_{i-k} - \mu)] = E[X_i X_{i-k}] - \mu^2,

with μ = μ_i = μ_j for all i, j, because second-order stationarity implies a constant mean.
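
As a minimal sketch of how the stationary-case formula can be estimated from a finite sample (an illustration, not part of the original article; the function name sample_autocovariance and the use of Python with NumPy are assumptions):

    import numpy as np

    def sample_autocovariance(x, k):
        # Estimate gamma(k) = E[(X_i - mu)(X_{i-k} - mu)] under
        # second-order stationarity, so a single mean mu is used.
        x = np.asarray(x, dtype=float)
        n = len(x)
        mu = x.mean()
        # Average the products of mean-centred values k steps apart.
        return np.sum((x[k:] - mu) * (x[:n - k] - mu)) / n

Dividing by n rather than n − k is a common convention that keeps the full set of estimates positive semi-definite, at the cost of a slight bias.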

Here k is the amount by which the signal has been shifted, usually referred to as the lag. When the autocovariance is normalised by dividing by the variance σ², it becomes the autocorrelation R(k). That is,

R(k) = \frac{\gamma(k)}{\sigma^2}.
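
Since the variance equals the autocovariance at lag zero, σ² = γ(0), this normalisation can be carried out by dividing through by the lag-zero estimate. A sketch, reusing the hypothetical sample_autocovariance above:

    def sample_autocorrelation(x, k):
        # R(k) = gamma(k) / sigma^2, where sigma^2 = gamma(0).
        return sample_autocovariance(x, k) / sample_autocovariance(x, 0)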

Note, however, that some disciplines use the terms autocovariance and autocorrelation interchangeably.

The autocovariance can be thought of as a measure of how similar a signal is to a time-shifted version of itself, with an autocovariance of σ² indicating perfect correlation at that lag. Normalising by the variance puts the autocorrelation into the range [−1, 1].
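
As an illustrative check (again a sketch under the assumptions above, not from the article), a periodic signal should resemble itself after one full period and be anti-correlated after half a period:

    import numpy as np

    t = np.arange(1000)
    x = np.sin(2 * np.pi * t / 50)          # sinusoid with a period of 50 samples
    print(sample_autocorrelation(x, 50))    # close to +1: one full period shift
    print(sample_autocorrelation(x, 25))    # close to -1: half-period shift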
