Autocovariance

In statistics, given a real stochastic process X(t), the autocovariance is the covariance of the process with a time-shifted version of itself, regarded as a function of the two time points. If the process has the mean E[X_t] = μ_t, then the autocovariance is given by

C_{XX}(t,s) = E[(X_t - \mu_t)(X_s - \mu_s)] = E[X_t X_s] - \mu_t \mu_s,

where E is the expectation operator.
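The equivalence of the two forms above can be checked numerically. The following is a minimal sketch (the random-walk process, the chosen time points, and all variable names are arbitrary illustrations, not part of any standard API): it estimates C(t,s) across many simulated realizations, once from the definition and once as E[X_t X_s] − μ_t μ_s.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate many independent realizations of a simple example process:
# a random walk observed at integer times (an arbitrary choice for
# illustration; any process with finite variance would do).
n_realizations, n_steps = 100_000, 10
steps = rng.normal(size=(n_realizations, n_steps))
X = np.cumsum(steps, axis=1)          # X[:, t] holds X_t across realizations

t, s = 3, 7
mu_t, mu_s = X[:, t].mean(), X[:, s].mean()

# Autocovariance from the definition E[(X_t - mu_t)(X_s - mu_s)] ...
c_def = np.mean((X[:, t] - mu_t) * (X[:, s] - mu_s))

# ... and from the equivalent form E[X_t X_s] - mu_t mu_s.
c_alt = np.mean(X[:, t] * X[:, s]) - mu_t * mu_s

print(c_def, c_alt)   # the two estimates agree
```

For this random walk the true value is C(3,7) = min(3,7) + 1 = 4 (the variance of the shared increments), which the estimates approach as the number of realizations grows.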


Stationarity

If X(t) is a stationary process, then the following conditions hold:

\mu_t = \mu_s = \mu for all t, s

and

C_{XX}(t,s) = C_{XX}(s-t) = C_{XX}(\tau)

where

\tau = s - t

is the lag time, or the amount of time by which the signal has been shifted.

As a result, the autocovariance becomes

C_{XX}(\tau) = E[(X(t) - \mu)(X(t+\tau) - \mu)]
 = E[X(t) X(t+\tau)] - \mu^2
 = R_{XX}(\tau) - \mu^2,

where R_{XX} denotes the autocorrelation in the signal processing sense.

Normalization

When normalized by dividing by the variance σ², the autocovariance C becomes the autocorrelation coefficient function c,[1]

c_{XX}(\tau) = \frac{C_{XX}(\tau)}{\sigma^2}.

The autocovariance function is itself a version of the autocorrelation function with the mean level removed. If the signal has a mean of 0, the autocovariance and autocorrelation functions are identical.[1]

However, often the autocovariance is called autocorrelation even if this normalization has not been performed.

The autocovariance can be thought of as a measure of how similar a signal is to a time-shifted version of itself, with an autocovariance of σ² indicating perfect correlation at that lag. Normalization by the variance puts this measure into the range [−1, 1].
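A short sketch of this normalization (the smoothed-noise example signal and the helper names are illustrative assumptions). It uses the biased sample estimator, which divides by n rather than n − τ; that choice makes the estimated sequence positive semidefinite, so the normalized coefficients stay inside [−1, 1]:

```python
import numpy as np

rng = np.random.default_rng(2)

# An example signal with visible short-range correlation: white noise
# smoothed by a 5-point moving average (an arbitrary choice).
noise = rng.normal(size=100_000)
x = np.convolve(noise, np.ones(5) / 5, mode="valid")

def autocov_biased(z, tau):
    """Biased sample autocovariance: divide by n, not n - tau.

    The biased estimator yields a positive-semidefinite sequence,
    which keeps the normalized coefficients inside [-1, 1].
    """
    m = z.mean()
    return np.sum((z[:len(z) - tau] - m) * (z[tau:] - m)) / len(z)

variance = autocov_biased(x, 0)        # C(0) is the variance sigma^2
c = [autocov_biased(x, tau) / variance for tau in range(10)]

print(c[0])                            # 1.0 by construction: C(0) / sigma^2
print(max(abs(v) for v in c))          # never exceeds 1
```

For this 5-point moving average the theoretical lag-1 coefficient is 4/5 = 0.8, which the estimate approaches for long signals.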

Properties

The autocovariance of a linearly filtered process Y_t,

Y_t = \sum_{k=-\infty}^\infty a_k X_{t+k},

is

C_{YY}(\tau) = \sum_{k,l=-\infty}^\infty a_k a^*_l C_{XX}(\tau+k-l).
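This filtering property can be checked in the simplest special case: for white-noise input, C_XX(τ) is σ² at τ = 0 and zero elsewhere, so the double sum collapses to C_YY(τ) = σ² Σ_k a_k a_{k+τ}. The sketch below (the particular filter taps and variable names are illustrative assumptions) compares that prediction against the sample autocovariance of a filtered signal:

```python
import numpy as np

rng = np.random.default_rng(3)

# White-noise input: C_XX(tau) = sigma^2 at tau = 0, zero otherwise,
# so the property predicts C_YY(tau) = sigma^2 * sum_k a_k a_{k+tau}.
n, sigma = 500_000, 1.0
x = rng.normal(scale=sigma, size=n)

a = np.array([1.0, 0.5, 0.25])         # a short, arbitrary real filter

# Y_t = sum_k a_k X_{t+k}; reversing a turns 'valid' convolution
# into exactly this sum over k = 0, 1, 2.
y = np.convolve(x, a[::-1], mode="valid")

def autocov(z, tau):
    """Sample autocovariance, subtracting the sample mean."""
    m = z.mean()
    return np.mean((z[:len(z) - tau] - m) * (z[tau:] - m))

for tau in range(3):
    predicted = sigma**2 * sum(a[k] * a[k + tau] for k in range(len(a) - tau))
    print(tau, autocov(y, tau), predicted)
```

For these taps the predicted values are C_YY(0) = 1.3125, C_YY(1) = 0.625, and C_YY(2) = 0.25, and the sample estimates approach them as n grows.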


References

  1. Westwick, David T. (2003). Identification of Nonlinear Physiological Systems. IEEE Press. pp. 17–18. ISBN 0-471-27456-9.