Delta method
From Wikipedia, the free encyclopedia
The delta method is a technique for deriving an approximate probability distribution for a function of a statistical estimator from knowledge of the limiting distribution of that estimator. In many cases the limiting distribution of the initial estimator is a normal distribution with mean zero, so it suffices to obtain the variance of the function of this estimator. If B is an estimator for β, then the variance of a function h(B) is

\operatorname{Var}\bigl(h(B)\bigr) \approx \nabla h(\beta)^\top \operatorname{Var}(B)\, \nabla h(\beta).
Derivation
We know that a consistent estimator converges in probability to its true value:

\operatorname{plim}_{n \to \infty} B = \beta,
where n is the number of observations. Suppose we want to estimate the variance of a function h of the estimator B. Keeping only the first two terms of the Taylor series, and using vector notation for the gradient, we can estimate h(B) as

h(B) \approx h(\beta) + \nabla h(\beta)^\top (B - \beta),
which implies the variance of h(B) is approximately

\operatorname{Var}\bigl(h(B)\bigr) \approx \operatorname{Var}\bigl(h(\beta) + \nabla h(\beta)^\top (B - \beta)\bigr)
= \operatorname{Var}\bigl(h(\beta) + \nabla h(\beta)^\top B - \nabla h(\beta)^\top \beta\bigr)
= \operatorname{Var}\bigl(\nabla h(\beta)^\top B\bigr)
= \nabla h(\beta)^\top \operatorname{Var}(B)\, \nabla h(\beta),
where the last two lines are achieved by recalling, for constants α and η and variable χ, the identity
\operatorname{Var}(\eta + \alpha\chi)
= \operatorname{E}\bigl[(\eta + \alpha\chi - \operatorname{E}(\eta + \alpha\chi))^2\bigr]
= \operatorname{E}\bigl[(\eta + \alpha\chi - \eta - \alpha\operatorname{E}(\chi))^2\bigr]
= \operatorname{E}\bigl[\alpha^2(\chi - \operatorname{E}(\chi))^2\bigr]
= \alpha^2\,\operatorname{E}\bigl[(\chi - \operatorname{E}(\chi))^2\bigr]
= \alpha^2\,\operatorname{Var}(\chi),
and noting that β is a constant.
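This shift-and-scale identity can be checked numerically; a minimal sketch with NumPy, where the values of α and η and the simulated sample for χ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
chi = rng.normal(size=100_000)  # the random variable chi
alpha, eta = 3.0, 5.0           # the constants alpha and eta

# Var(eta + alpha*chi) computed directly on the transformed sample
lhs = np.var(eta + alpha * chi)
# alpha^2 * Var(chi), the right-hand side of the identity
rhs = alpha ** 2 * np.var(chi)

print(np.isclose(lhs, rhs))
```

The additive constant η drops out of the variance entirely, while the multiplicative constant α comes out squared.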
The delta method therefore implies that

\operatorname{Var}\bigl(h(B)\bigr) \approx \nabla h(\beta)^\top \operatorname{Var}(B)\, \nabla h(\beta),
or in univariate terms,

\operatorname{Var}\bigl(h(B)\bigr) \approx \bigl(h'(\beta)\bigr)^2 \operatorname{Var}(B).
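The univariate formula can be sanity-checked by Monte Carlo simulation; a minimal sketch assuming h(B) = exp(B), where B is the mean of n standard normal observations (all specific values here are illustrative, not part of the method):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n = 0.0, 1.0, 1_000

# B is the sample mean, so Var(B) = sigma^2 / n
var_B = sigma ** 2 / n
# Delta method: Var(h(B)) ~ (h'(mu))^2 Var(B), with h(x) = exp(x), h'(x) = exp(x)
delta_var = np.exp(mu) ** 2 * var_B

# Monte Carlo check: simulate many sample means, then take Var(exp(B))
reps = 200_000
B = rng.normal(mu, sigma / np.sqrt(n), size=reps)
mc_var = np.var(np.exp(B))

print(delta_var, mc_var)
```

The two variances agree closely here because Var(B) is small, so the first-order Taylor truncation is accurate; for larger Var(B) the approximation degrades.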
Note
The delta method is nearly identical to the formulae presented in Klein (1953, p. 258):

\operatorname{Var}(h_r) = \sum_i \sum_j \frac{\partial h_r}{\partial B_i} \frac{\partial h_r}{\partial B_j} \operatorname{Cov}(B_i, B_j),
\qquad
\operatorname{Cov}(h_r, h_s) = \sum_i \sum_j \frac{\partial h_r}{\partial B_i} \frac{\partial h_s}{\partial B_j} \operatorname{Cov}(B_i, B_j),
where hr is the rth element of h(B) and Bi is the ith element of B. The only difference is that Klein stated these as identities, whereas they are actually approximations.
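Klein's element-wise sums are exactly the entries of the matrix form ∇h(β)ᵀ Var(B) ∇h(β). A minimal numerical sketch, using a hypothetical two-parameter example h(B) = (B₁B₂, B₁ + B₂) and an assumed covariance matrix for B:

```python
import numpy as np

# Hypothetical example: beta and Var(B) are assumed values, not from any dataset
beta = np.array([2.0, 3.0])
Sigma = np.array([[0.5, 0.1],
                  [0.1, 0.3]])      # Var(B), the covariance matrix of the estimator

# Jacobian J[r, i] = dh_r / dB_i evaluated at beta, for h(B) = (B1*B2, B1 + B2)
J = np.array([[beta[1], beta[0]],   # d(B1*B2)/dB1 = B2,  d(B1*B2)/dB2 = B1
              [1.0,     1.0]])      # d(B1+B2)/dB1 = 1,   d(B1+B2)/dB2 = 1

# Matrix form of the delta method: Cov(h(B)) ~ J Sigma J^T
cov_matrix_form = J @ Sigma @ J.T

# Klein's element-wise form: Cov(h_r, h_s) ~ sum_i sum_j J[r,i] J[s,j] Cov(B_i, B_j)
cov_klein = np.zeros((2, 2))
for r in range(2):
    for s in range(2):
        for i in range(2):
            for j in range(2):
                cov_klein[r, s] += J[r, i] * J[s, j] * Sigma[i, j]

print(np.allclose(cov_matrix_form, cov_klein))
```

The diagonal of either result gives the approximate variances of h₁(B) and h₂(B), and the off-diagonal entries their approximate covariance.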
References
- Greene, W. H. (2003), Econometric Analysis, 5th ed., pp. 913f.
- Klein, L. R. (1953), A Textbook of Econometrics, p. 258.