Autocovariance
In statistics, given a real stochastic process X(t), the autocovariance is the covariance of the variable against a time-shifted version of itself. If the process has mean μ_t = E[X_t], then the autocovariance is given by

C_XX(t, s) = E[(X_t − μ_t)(X_s − μ_s)] = E[X_t X_s] − μ_t μ_s,

where E is the expectation operator.
Autocovariance is closely related to the more commonly used autocorrelation: dividing the autocovariance by the variance of the variable in question gives the autocorrelation coefficient (see Normalization below).
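For illustration, the definition can be turned into a numerical estimate directly. The following is a minimal sketch (the function name and the biased 1/N normalization are illustrative choices, not part of the definition): it subtracts the sample mean and averages products of the series with a lag-shifted copy of itself.

```python
import numpy as np

def autocovariance(x, lag):
    """Sample autocovariance at a given lag, following
    C(t, s) = E[(X_t - mu_t)(X_s - mu_s)] for a single observed series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xm = x - x.mean()                     # estimate and remove the mean mu
    # average of products of the series with its lag-shifted version
    return np.dot(xm[: n - lag], xm[lag:]) / n

x = np.random.default_rng(0).standard_normal(1000)
print(autocovariance(x, 0))   # roughly the variance of x
print(autocovariance(x, 5))   # roughly 0 for white noise
```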
Stationarity
If X(t) is a stationary process, then the following are true:

μ_t = μ_s = μ for all t, s

and

C_XX(t, s) = C_XX(s − t) = C_XX(τ),

where

τ = s − t

is the lag time, or the amount of time by which the signal has been shifted.

As a result, the autocovariance becomes

C_XX(τ) = E[(X(t) − μ)(X(t + τ) − μ)] = E[X(t) X(t + τ)] − μ²,

and in particular C_XX(0) is the variance σ².
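As a concrete check of this reduction, one can simulate many independent realizations of a stationary process and estimate Cov(X_t, X_s) across the ensemble: pairs (t, s) with the same lag should give the same value. The sketch below uses an AR(1) process; the coefficient, burn-in length, and ensemble size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
phi, n_real, burn, n_steps = 0.8, 10000, 200, 60
x = np.zeros((n_real, burn + n_steps))
eps = rng.standard_normal((n_real, burn + n_steps))
for t in range(1, burn + n_steps):
    x[:, t] = phi * x[:, t - 1] + eps[:, t]   # AR(1): X_t = phi X_{t-1} + noise
x = x[:, burn:]                               # discard the transient

def ensemble_cov(t, s):
    """Cov(X_t, X_s) estimated across realizations."""
    return np.mean((x[:, t] - x[:, t].mean()) * (x[:, s] - x[:, s].mean()))

# Same lag tau = 5 at different times t: the estimates agree,
# i.e. the autocovariance depends only on the lag.
print(ensemble_cov(10, 15), ensemble_cov(30, 35), ensemble_cov(50, 55))
# Theoretical value for this AR(1): phi**5 / (1 - phi**2)
print(phi ** 5 / (1 - phi ** 2))
```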
Normalization
When normalized by dividing by the variance σ², the autocovariance C becomes the autocorrelation coefficient function c,[1]

c(τ) = C(τ) / σ².

However, the autocovariance is often called the autocorrelation even when this normalization has not been performed.
The autocovariance can be thought of as a measure of how similar a signal is to a time-shifted version of itself, with an autocovariance of σ² indicating perfect correlation at that lag. Normalizing by the variance puts this into the range [−1, 1].
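In code, the normalization simply divides the autocovariance at each lag by its value at lag 0, which estimates σ². A brief sketch, reusing the biased sample estimator from above:

```python
import numpy as np

def autocovariance(x, lag):
    x = np.asarray(x, dtype=float)
    xm = x - x.mean()
    return np.dot(xm[: len(x) - lag], xm[lag:]) / len(x)

def autocorrelation(x, lag):
    """Autocorrelation coefficient c(tau) = C(tau) / sigma^2 = C(tau) / C(0)."""
    return autocovariance(x, lag) / autocovariance(x, 0)

x = np.cumsum(np.random.default_rng(2).standard_normal(500))  # a strongly correlated signal
print(autocorrelation(x, 0))   # exactly 1 at lag 0
print(autocorrelation(x, 1))   # close to 1: the shifted signal is very similar
print(autocorrelation(x, 50))  # smaller, but still within [-1, 1]
```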
Properties
The autocovariance of a linearly filtered process Y_t,

Y_t = Σ_k a_k X_{t+k},

is

C_YY(τ) = Σ_k Σ_l a_k a_l C_XX(τ + k − l).
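This property can be checked numerically. In the sketch below (the filter coefficients and sample size are illustrative), X is white noise with unit variance, so C_XX(τ + k − l) vanishes unless l = k + τ and the double sum collapses to Σ_k a_k a_{k+τ}.

```python
import numpy as np

rng = np.random.default_rng(3)
a = np.array([0.5, 1.0, -0.3])          # filter coefficients a_k (illustrative)
x = rng.standard_normal(200_000)        # white noise: C_XX(0) = 1, C_XX(m) = 0 otherwise

# Y_t = sum_k a_k X_{t+k}; np.correlate with mode='valid' computes exactly this sum.
y = np.correlate(x, a, mode="valid")

def sample_autocov(z, lag):
    zm = z - z.mean()
    return np.dot(zm[: len(z) - lag], zm[lag:]) / len(z)

for tau in range(3):
    # theoretical C_YY(tau) for unit-variance white noise input
    theory = sum(a[k] * a[k + tau] for k in range(len(a) - tau))
    print(tau, round(sample_autocov(y, tau), 3), theory)
```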
See also
References
- P. G. Hoel, Mathematical Statistics, Wiley, New York, 1984.
- Lecture notes on autocovariance from WHOI
- [1] Westwick, David T. (2003). Identification of Nonlinear Physiological Systems. IEEE Press. pp. 17–18. ISBN 0-471-27456-9.