Autocovariance
From Wikipedia, the free encyclopedia
In statistics, given a real stochastic process X(t), the autocovariance is the covariance of the signal against a time-shifted version of itself. If each state of the series has a mean E[X_t] = μ_t, then the autocovariance is given by

C_XX(t, s) = E[(X_t − μ_t)(X_s − μ_s)] = E[X_t X_s] − μ_t μ_s,

where E is the expectation operator.
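As an illustrative sketch (not part of the original article), for a single realization of a series the autocovariance at a given lag can be estimated by averaging products of mean-removed values; the noisy sine wave below is an arbitrary assumption for demonstration:

```python
import numpy as np

# Illustrative data: a sine wave with period 50 samples plus noise
# (arbitrary choices, used only to have something to estimate from).
rng = np.random.default_rng(0)
t = np.arange(500)
x = np.sin(2 * np.pi * t / 50) + 0.5 * rng.standard_normal(500)

def autocovariance(x, lag):
    """Sample autocovariance at a given lag:
    the average of (X_t - mu)(X_{t+lag} - mu) over the overlapping samples."""
    n = len(x)
    mu = x.mean()
    # Pair each sample with the one `lag` steps later, remove the mean, average.
    return np.mean((x[:n - lag] - mu) * (x[lag:] - mu))

print(autocovariance(x, 0))   # at lag 0 this is the (biased) sample variance
print(autocovariance(x, 25))  # negative: a half-period shift inverts the sine
```

At lag 0 the estimator reduces to the sample variance, which matches the σ² appearing later in the article.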
Stationarity
If X(t) is wide-sense stationary, then the following conditions hold:

μ_t = μ_s = μ for all t, s

and

C_XX(t, s) = C_XX(s − t) = C_XX(τ),

where

τ = s − t

is the lag time, or the amount of time by which the signal has been shifted.

As a result, the autocovariance becomes

C_XX(τ) = E[(X(t) − μ)(X(t + τ) − μ)] = E[X(t) X(t + τ)] − μ² = R_XX(τ) − μ²,

where R_XX represents the autocorrelation.
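The identity C_XX(τ) = R_XX(τ) − μ² (with R_XX taken in the signal-processing sense, E[X(t)X(t+τ)]) can be checked numerically. The sketch below assumes an AR(1) process with a nonzero mean; the parameters are arbitrary choices, not from the article:

```python
import numpy as np

# Simulate a stationary AR(1) series with mean mu = 3.0 and coefficient 0.8.
rng = np.random.default_rng(1)
n, phi, mu = 100_000, 0.8, 3.0
x = np.empty(n)
x[0] = mu
for i in range(1, n):
    x[i] = mu + phi * (x[i - 1] - mu) + rng.standard_normal()

tau = 5
m = x.mean()
# Autocorrelation in the signal-processing sense: R(tau) = E[X(t) X(t+tau)]
R = np.mean(x[:-tau] * x[tau:])
# Autocovariance: C(tau) = E[(X(t) - mu)(X(t+tau) - mu)]
C = np.mean((x[:-tau] - m) * (x[tau:] - m))
print(C, R - m * m)  # the two estimates agree up to small edge effects
```

For this process the theoretical value is C(5) = φ⁵ / (1 − φ²) ≈ 0.91, so both printed numbers should be close to that.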
Normalization
When normalized by dividing by the variance σ², the autocovariance becomes the autocorrelation coefficient ρ. That is,

ρ(τ) = C_XX(τ) / σ².
Note, however, that some disciplines use the terms autocovariance and autocorrelation interchangeably.
The autocovariance can be thought of as a measure of how similar a signal is to a time-shifted version of itself, with an autocovariance of σ² indicating perfect correlation at that lag. Normalization by the variance puts this into the range [−1, 1].
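As a small sketch (the signal is an arbitrary assumption), dividing the autocovariance by the variance yields a coefficient of 1 at zero lag, and −1 at a lag where the signal is exactly inverted:

```python
import numpy as np

# Illustrative signal: a pure sine wave with a period of 100 samples.
t = np.arange(1000)
x = np.sin(2 * np.pi * t / 100)

def rho(x, lag):
    """Autocorrelation coefficient: autocovariance at `lag` divided by the variance."""
    mu, var = x.mean(), x.var()
    c = np.mean((x[:len(x) - lag] - mu) * (x[lag:] - mu))
    return c / var

print(rho(x, 0))   # 1 at zero lag: the signal is perfectly correlated with itself
print(rho(x, 50))  # -1: a half-period shift inverts the sine exactly
```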