In statistics, the Dickey–Fuller test tests the null hypothesis that a unit root is present in an autoregressive (AR) time series model.
The alternative hypothesis varies depending on which version of the test is used, but is usually stationarity or trend-stationarity.
The test is named after the statisticians David Dickey and Wayne Fuller, who developed it in 1979.
Since the test is done over the residual term rather than the raw data, it is not possible to use the standard t-distribution to provide critical values. The test statistic therefore has its own distribution, with critical values tabulated by Dickey and Fuller.
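As a minimal sketch of how the statistic is formed (our own illustration, assuming NumPy is available; the helper name dickey_fuller_tau is hypothetical), the simplest Dickey–Fuller regression Δy_t = δy_{t−1} + u_t can be estimated by ordinary least squares and the t-ratio for δ compared against Dickey–Fuller critical values rather than the Student's t table:

```python
# A minimal sketch (not the original authors' implementation): estimate the
# simplest Dickey-Fuller regression  Δy_t = δ·y_{t-1} + u_t  by OLS and form
# the t-ratio for δ.  The resulting τ statistic must be compared with
# Dickey-Fuller critical values, not with the standard t-distribution.
import numpy as np

def dickey_fuller_tau(y):
    """τ statistic for the no-constant Dickey-Fuller regression (hypothetical helper)."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)          # Δy_t
    y_lag = y[:-1]           # y_{t-1}
    delta_hat = y_lag @ dy / (y_lag @ y_lag)       # OLS slope, no intercept
    resid = dy - delta_hat * y_lag
    sigma2 = resid @ resid / (len(dy) - 1)         # residual variance, one parameter
    se = np.sqrt(sigma2 / (y_lag @ y_lag))         # standard error of delta_hat
    return delta_hat / se

rng = np.random.default_rng(0)
random_walk = np.cumsum(rng.standard_normal(200))
# For a true unit root, τ usually lies above the roughly -1.95 Dickey-Fuller
# 5% critical value, so the null of a unit root is not rejected.
print(dickey_fuller_tau(random_walk))
```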
There are three main versions of the test:

1. Test for a unit root:
   Δy_t = δy_{t−1} + u_t
2. Test for a unit root with drift (constant):
   Δy_t = a_0 + δy_{t−1} + u_t
3. Test for a unit root with constant and deterministic time trend:
   Δy_t = a_0 + a_1·t + δy_{t−1} + u_t

Each version of the test has its own critical value, which depends on the size of the sample. In each case, the null hypothesis is that there is a unit root, δ = 0. The tests have low statistical power in that they often cannot distinguish between true unit-root processes (δ = 0) and near unit-root processes (δ close to zero).
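A hedged sketch of how the three versions can be run in practice, assuming the statsmodels library is available (its adfuller function reduces to the plain Dickey–Fuller test when no lagged differences are included):

```python
# A sketch using statsmodels' adfuller (assumed installed): with maxlag=0
# and autolag=None the augmented test contains no lagged differences, so it
# reduces to the plain Dickey-Fuller test.  The 'regression' argument
# selects the version: 'n' = no constant ('nc' in older statsmodels),
# 'c' = constant, 'ct' = constant and trend.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(250))          # a pure random walk (unit root)

for spec in ("n", "c", "ct"):
    result = adfuller(y, maxlag=0, regression=spec, autolag=None)
    stat, pvalue, crit = result[0], result[1], result[4]
    # Each version has its own critical values; the unit-root null should
    # usually not be rejected for this series.
    print(spec, round(stat, 3), round(pvalue, 3), crit)
```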
The intuition behind the test is as follows. If the series is stationary (or trend-stationary), then it has a tendency to return to a constant (or deterministically trending) mean.
Accordingly, the level of the series will be a significant predictor of next period's change, and will have a negative coefficient.
If, on the other hand, the series is integrated, then positive changes and negative changes will occur with probabilities that do not depend on the current level of the series; in a random walk, where you are now does not affect which way you will go next.
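This intuition can be checked with a small simulation (our own example; the helper name level_on_change_slope is hypothetical): regressing the change Δy_t on the lagged level y_{t−1} gives a clearly negative slope for a stationary AR(1) series and a slope near zero for a random walk.

```python
# Illustrative simulation of the intuition above.  The slope on the lagged
# level is close to ρ - 1 = -0.5 for a stationary AR(1) with ρ = 0.5, and
# close to zero for a random walk.
import numpy as np

def level_on_change_slope(y):
    dy, y_lag = np.diff(y), y[:-1]
    x = np.column_stack([np.ones_like(y_lag), y_lag])   # intercept + lagged level
    beta, *_ = np.linalg.lstsq(x, dy, rcond=None)
    return beta[1]                                       # coefficient on y_{t-1}

rng = np.random.default_rng(2)
e = rng.standard_normal(500)
stationary = np.zeros(500)
for t in range(1, 500):                                  # AR(1): y_t = 0.5*y_{t-1} + e_t
    stationary[t] = 0.5 * stationary[t - 1] + e[t]
random_walk = np.cumsum(e)

print(level_on_change_slope(stationary))   # markedly negative, roughly -0.5
print(level_on_change_slope(random_walk))  # close to zero
```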
Inappropriate exclusion of the intercept or deterministic time trend term leads to bias in the coefficient estimate for δ, leading to the actual size for the unit root test not matching the reported one.
If the time trend term is inappropriately excluded while the constant term a_0 is estimated, then the power of the unit root test can be substantially reduced, as a trend may be captured through the random walk with drift model.[3] On the other hand, inappropriate inclusion of the intercept or time trend term reduces the power of the unit root test, and sometimes that reduced power can be substantial.
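The power loss from an unnecessary deterministic term can be illustrated with a small Monte Carlo sketch (our own illustration with assumed parameter values, not a result from the cited references): the data below are stationary with a constant but no trend, so the correctly specified constant-only test typically rejects the unit-root null more often than the test that also includes a trend.

```python
# Monte Carlo sketch: stationary AR(1) data with a constant but no
# deterministic trend.  Including an unnecessary trend term ('ct') typically
# lowers the rejection rate (power) relative to the constant-only test ('c').
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
n, reps = 100, 500
rejections = {"c": 0, "ct": 0}

for _ in range(reps):
    e = rng.standard_normal(n)
    y = np.empty(n)
    y[0] = 10.0                                    # start near the long-run mean 1/(1 - 0.9)
    for t in range(1, n):
        y[t] = 1.0 + 0.9 * y[t - 1] + e[t]         # stationary: ρ = 0.9, no trend
    for spec in rejections:
        pvalue = adfuller(y, maxlag=0, regression=spec, autolag=None)[1]
        if pvalue < 0.05:
            rejections[spec] += 1

for spec, count in rejections.items():
    print(spec, count / reps)                      # empirical power at the 5% level
```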
Use of prior knowledge about whether the intercept and deterministic time trend should be included is of course ideal but not always possible.
When such prior knowledge is unavailable, various testing strategies (series of ordered tests) have been suggested, e.g. by Dolado, Jenkinson, and Sosvilla-Rivero (1990)[4] and by Enders (2004), often with the ADF extension to remove autocorrelation.
Elder and Kennedy (2001) present a simple testing strategy that avoids double and triple testing for the unit root that can occur with other testing strategies, and discuss how to use prior knowledge about the existence or not of long-run growth (or shrinkage) in y.[5]
Hacker and Hatemi-J (2010) provide simulation results on these matters,[6] including simulations covering the Enders (2004) and Elder and Kennedy (2001) unit-root testing strategies.
Simulation results presented in Hacker (2010) indicate that using an information criterion such as the Schwarz information criterion may be useful in determining unit root and trend status within a Dickey–Fuller framework.
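A rough sketch of that information-criterion idea follows (our own illustration; the exact procedure in Hacker (2010) may differ, and the candidate set below is an assumption): Dickey–Fuller-style regressions with and without the unit-root restriction δ = 0, and with and without a deterministic trend, are fitted by least squares, and the specification with the lowest Schwarz/Bayesian information criterion is preferred.

```python
# Each candidate below is a Dickey-Fuller-style regression for Δy_t; the
# unit-root candidate imposes δ = 0 by omitting the lagged level.  The
# lowest BIC (Schwarz criterion) indicates the preferred unit-root/trend
# combination for the series at hand.
import numpy as np
import statsmodels.api as sm

def bic_table(y):
    dy, y_lag = np.diff(y), y[:-1]
    t = np.arange(1, len(y), dtype=float)
    const = np.ones_like(t)
    candidates = {
        "unit root with drift":       np.column_stack([const]),
        "stationary around constant": np.column_stack([const, y_lag]),
        "trend-stationary":           np.column_stack([const, t, y_lag]),
    }
    return {name: sm.OLS(dy, x).fit().bic for name, x in candidates.items()}

rng = np.random.default_rng(4)
y = np.cumsum(0.1 + rng.standard_normal(300))     # random walk with drift
for name, bic in sorted(bic_table(y).items(), key=lambda kv: kv[1]):
    print(f"{bic:10.2f}  {name}")                 # the drift model usually ranks first here
```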