Dickey-Fuller test

In statistics, the Dickey–Fuller test tests the null hypothesis that a unit root is present in an autoregressive model. It is named after the statisticians D. A. Dickey and W. A. Fuller, who developed the test in 1979.

Explanation

A simple AR(1) model is y_t = ρ y_{t−1} + u_t, where y_t is the variable of interest, t is the time index, ρ is a coefficient, and u_t is the error term. A unit root is present if |ρ| = 1.
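As an illustration, the following Python sketch simulates the AR(1) recursion above for a stationary case (|ρ| < 1) and a unit-root case (ρ = 1); the helper name simulate_ar1, the coefficient values, and the sample length are arbitrary choices for the example, not part of the test itself.

```python
import numpy as np

def simulate_ar1(rho, n=500, sigma=1.0, seed=0):
    """Simulate y_t = rho * y_{t-1} + u_t with i.i.d. normal errors u_t."""
    rng = np.random.default_rng(seed)
    u = rng.normal(scale=sigma, size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + u[t]
    return y

y_stationary = simulate_ar1(rho=0.5)  # |rho| < 1: mean-reverting series
y_unit_root = simulate_ar1(rho=1.0)   # rho = 1: random walk (unit root)
```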

The regression model can be written as Δy_t = (ρ − 1) y_{t−1} + u_t = δ y_{t−1} + u_t, where Δ is the first-difference operator. This model can be estimated, and testing for a unit root is equivalent to testing δ = 0 (where δ = ρ − 1). Because the test is carried out on the residual term rather than the raw data, the standard t-distribution cannot be used to provide critical values. Instead, the test statistic τ has a specific distribution, known simply as the Dickey–Fuller table.
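A minimal sketch of this test regression, assuming the simplest form with no constant and no trend: regress Δy_t on y_{t−1} by ordinary least squares and form τ as the estimate of δ divided by its standard error. The function name dickey_fuller_tau is hypothetical, and the resulting statistic must still be compared against the Dickey–Fuller table rather than standard t critical values.

```python
import numpy as np

def dickey_fuller_tau(y):
    """OLS estimate of delta in  dy_t = delta * y_{t-1} + u_t  and the tau statistic."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)                         # dependent variable: first differences
    x = y[:-1]                              # regressor: lagged level y_{t-1}
    delta_hat = (x @ dy) / (x @ x)          # OLS slope, no intercept
    resid = dy - delta_hat * x
    dof = len(dy) - 1                       # one estimated parameter
    s2 = (resid @ resid) / dof              # residual variance
    se = np.sqrt(s2 / (x @ x))              # standard error of delta_hat
    return delta_hat / se                   # tau: compare to Dickey-Fuller critical values

# Example (using y_unit_root from the sketch above): a value near zero
# fails to reject the unit-root null.
# tau = dickey_fuller_tau(y_unit_root)
```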

There is also an extension called the augmented Dickey–Fuller (ADF) test, which removes all the structural effects (autocorrelation) in the time series and then tests using the same procedure.
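In practice the augmented test is usually run with library support. A sketch using the adfuller function from statsmodels (assuming that package is installed), which chooses the lag order automatically and returns the test statistic, p-value, and critical values from the Dickey–Fuller distribution:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=500))  # random walk: a series with a unit root

stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, regression="c", autolag="AIC")
print(f"ADF statistic: {stat:.3f}, p-value: {pvalue:.3f}")
print("Critical values:", crit)
# A statistic above the critical values (large p-value) fails to reject the unit-root null.
```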

References

Dickey, D. A. and Fuller, W. A. (1979), "Distribution of the Estimators for Autoregressive Time Series with a Unit Root," Journal of the American Statistical Association, 74, pp. 427–431.
