In statistics, given a real stochastic process X(t), the autocovariance is the covariance of the process with a time-shifted version of itself. If the process has the mean E[X_t] = μ_t, then the autocovariance is given by

C_{XX}(t,s) = \mathrm{E}[(X_t - \mu_t)(X_s - \mu_s)] = \mathrm{E}[X_t X_s] - \mu_t \mu_s,
where E is the expectation operator.
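As a minimal illustration of this definition, here is a Python sketch (using NumPy; the function name sample_autocovariance is chosen for this example) that estimates the autocovariance of a sampled signal at a given lag, assuming the process is stationary and ergodic so that a time average approximates the ensemble expectation:

```python
import numpy as np

def sample_autocovariance(x, lag):
    """Estimate C_XX(lag) from a sample x of a stationary, ergodic process."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mu = x.mean()  # sample mean estimates the process mean
    # Average of (x[t] - mu)(x[t + lag] - mu) over all valid t;
    # dividing by n gives the common biased estimator
    return np.sum((x[:n - lag] - mu) * (x[lag:] - mu)) / n

# Example: white noise should have near-zero autocovariance at nonzero lags
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
print(sample_autocovariance(x, 0))  # ~1.0 (the variance)
print(sample_autocovariance(x, 5))  # ~0.0
```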
If X(t) is a stationary process, then the following conditions are true:

\mu_t = \mu_s = \mu \quad \text{for all } t, s

and

C_{XX}(t,s) = C_{XX}(s - t) = C_{XX}(\tau),

where

\tau = s - t

is the lag time, or the amount of time by which the signal has been shifted.

As a result, the autocovariance becomes

C_{XX}(\tau) = \mathrm{E}[(X_t - \mu)(X_{t+\tau} - \mu)] = \mathrm{E}[X_t X_{t+\tau}] - \mu^2 = R_{XX}(\tau) - \mu^2,

where R_{XX} represents the autocorrelation in the signal processing sense.
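The identity C_XX(τ) = R_XX(τ) − μ² can be checked numerically; the following is a small sketch under the same stationarity and ergodicity assumptions as above (the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, size=50_000)  # stationary white signal with mean mu = 3
n, lag = len(x), 4
mu = x.mean()

# R_XX(lag): autocorrelation in the signal processing sense, E[X_t X_{t+lag}]
r = np.sum(x[:n - lag] * x[lag:]) / n
# C_XX(lag): the autocovariance at the same lag
c = np.sum((x[:n - lag] - mu) * (x[lag:] - mu)) / n

print(c, r - mu**2)  # nearly equal; they differ only by edge effects of order lag/n
```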
When normalized by dividing by the variance σ², the autocovariance C becomes the autocorrelation coefficient function c [1],

c_{XX}(\tau) = \frac{C_{XX}(\tau)}{\sigma^2}.
The autocovariance function is itself a version of the autocorrelation function with the mean level removed. If the signal has a mean of 0, the autocovariance and autocorrelation functions are identical [1].
However, the autocovariance is often called the autocorrelation even if this normalization has not been performed.
The autocovariance can be thought of as a measure of how similar a signal is to a time-shifted version of itself, with an autocovariance of σ² indicating perfect correlation at that lag. Normalization with the variance puts this measure into the range [−1, 1].
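A short sketch of this interpretation, using a hypothetical helper autocorr_coefficient (NumPy; the biased /n estimator is used so the result is guaranteed to stay in [−1, 1], and the test signal is an arbitrary choice for the example):

```python
import numpy as np

def autocorr_coefficient(x, lag):
    """Normalized autocovariance c(lag) = C(lag) / sigma^2; lies in [-1, 1]."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mu = x.mean()
    c = np.sum((x[:n - lag] - mu) * (x[lag:] - mu)) / n
    return c / x.var()  # dividing by n (not n - lag) keeps the result in [-1, 1]

t = np.arange(2000)
rng = np.random.default_rng(2)
x = np.sin(0.1 * t) + 0.3 * rng.normal(size=t.size)  # noisy periodic signal
print(autocorr_coefficient(x, 0))   # exactly 1: a signal is identical to itself
print(autocorr_coefficient(x, 63))  # strongly positive: shift of one full period
print(autocorr_coefficient(x, 31))  # strongly negative: shift of half a period
```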
The autocovariance of a linearly filtered process

Y_t = \sum_{k=-\infty}^{\infty} a_k X_{t+k}

is

C_{YY}(\tau) = \sum_{k,l=-\infty}^{\infty} a_k a_l C_{XX}(\tau + k - l).
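As a numerical check of this formula (the filter taps a_k below are arbitrary choices for the example), one can filter white noise, whose autocovariance is C_XX(τ) = σ²δ(τ), so the double sum collapses to C_YY(τ) = σ² Σ_k a_k a_{k+τ}:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2 = 2.0
n = 200_000
x = rng.normal(scale=np.sqrt(sigma2), size=n)  # white noise: C_XX(tau) = sigma2 * delta(tau)
a = np.array([0.5, 1.0, -0.3])                 # example filter taps a_0, a_1, a_2

# Linearly filtered process Y_t = sum_k a_k X_{t+k}
m = n - len(a) + 1
y = sum(a[k] * x[k:k + m] for k in range(len(a)))

# Empirical autocovariance of Y at lag tau
tau = 1
mu = y.mean()
c_emp = np.sum((y[:m - tau] - mu) * (y[tau:] - mu)) / m

# Theory: for white noise the double sum collapses to sigma2 * sum_k a_k a_{k+tau}
c_theory = sigma2 * np.sum(a[:-tau] * a[tau:])
print(c_emp, c_theory)  # agree up to sampling error (~0.4 here)
```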