Cook's distance

In statistics, Cook's distance or Cook's D is a commonly used estimate of the influence of a data point when performing a least-squares regression analysis.[1] In a practical ordinary least squares analysis, Cook's distance can be used in several ways: to indicate influential data points that are particularly worth checking for validity, or to indicate regions of the design space where it would be good to obtain more data points. It is named after the American statistician R. Dennis Cook, who introduced the concept in 1977.[2][3]

Definition

Cook's distance measures the effect of deleting a given observation. Data points with large residuals (outliers) and/or high leverage may distort the outcome and accuracy of a regression. Points with a large Cook's distance are considered to merit closer examination in the analysis. For the algebraic expression, first define

\mathbf{H} \equiv \mathbf{X} ( \mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top}

as the (n \times n) hat matrix (or projection matrix), where \mathbf{X} is the design matrix whose n rows contain the values of the explanatory variables for each observation. Then let \hat{\beta}^{(-i)} be the OLS estimate of \beta that results from omitting the i-th observation (i = 1, 2, \dots, n). Then we have[4]

\hat{\beta} - \hat{\beta}^{(-i)} = \left( \frac{1}{1-h_{ii}} \right) ( \mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{x}_{i} e_{i}

where e_i is the residual (i.e., the difference between the observed value and the value fitted by the proposed model), and h_{ii}, defined as

h_{ii} \equiv \mathbf{x}_i^{\top} ( \mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{x}_{i}

is the leverage, i.e., the i-th diagonal element of \mathbf{H}. With this, we can define Cook's distance as

D_i = \frac{e_i^2}{k \ \mathrm{MSE}}\left[\frac{h_{ii}}{(1-h_{ii})^2}\right],

where k is the number of fitted parameters and \mathrm{MSE} is the mean squared error of the regression model.
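The leverage form lends itself to direct computation. The following is a minimal NumPy sketch of the formula above, using synthetic data (the data and variable names are illustrative, not from any particular source):

```python
import numpy as np

# Synthetic data for illustration; X includes an intercept column.
rng = np.random.default_rng(0)
n, k = 50, 3                                    # n observations, k fitted parameters
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)    # OLS estimate of beta
e = y - X @ beta_hat                            # residuals e_i
h = np.diag(X @ np.linalg.solve(X.T @ X, X.T))  # leverages h_ii (diagonal of H)
mse = e @ e / (n - k)                           # MSE = e'e / (n - k)

# Cook's distance: D_i = e_i^2 / (k * MSE) * h_ii / (1 - h_ii)^2
D = e**2 / (k * mse) * h / (1 - h) ** 2
```

In routine work the same quantity is available from standard software; for example, statsmodels exposes it through the influence diagnostics of a fitted OLS model (sm.OLS(y, X).fit().get_influence().cooks_distance), which avoids building the full hat matrix. An algebraically equivalent expression, written in terms of the change in the estimated coefficients, is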

D_i = \frac{ (\hat{\beta} - \hat{\beta}^{(-i)})^{\top} \mathbf{X}^{\top} \mathbf{X} (\hat{\beta} - \hat{\beta}^{(-i)}) } {k \, s^2},

where s^{2} is the OLS estimate of the variance of the error term (so that s^2 = \mathrm{MSE} above), defined as

s^{2} \equiv \frac{\mathbf{e}^{\top} \mathbf{e} }{n - k}
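Continuing the sketch above, this equivalence can be verified numerically by refitting the model with each observation deleted in turn (reusing X, y, D, beta_hat, mse, n, and k from before):

```python
# Leave-one-out estimates beta_hat^(-i), one row per deleted observation.
beta_loo = np.empty((n, k))
for i in range(n):
    mask = np.arange(n) != i
    beta_loo[i] = np.linalg.solve(X[mask].T @ X[mask], X[mask].T @ y[mask])

# Quadratic form (beta_hat - beta_loo)' X'X (beta_hat - beta_loo) / (k s^2),
# with s^2 = MSE as defined in the text.
d = beta_hat - beta_loo
D_quad = np.einsum('ij,jk,ik->i', d, X.T @ X, d) / (k * mse)
assert np.allclose(D, D_quad)                   # matches the leverage formula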

A third equivalent expression is

D_i = \frac{ \sum_{j=1}^n (\hat Y_j\ - \hat Y_{j(i)})^2 }{k \ \mathrm{MSE}},

where:

\hat Y_j is the prediction from the full regression model for observation j;
\hat Y_{j(i)} is the prediction for observation j from a refitted regression model in which observation i has been omitted.
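This form can be checked with the same leave-one-out estimates computed in the sketch above (again reusing the earlier variables):

```python
# Cook's distance as the summed squared shift in fitted values
# when observation i is omitted from the fit.
y_hat = X @ beta_hat                            # predictions for all j from the full model
D_pred = np.array([np.sum((y_hat - X @ b) ** 2) for b in beta_loo]) / (k * mse)
assert np.allclose(D, D_pred)                   # agrees with both forms above
```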

Detecting highly influential observations

There are different opinions regarding what cut-off values to use for spotting highly influential points. A simple operational guideline of D_i>1 has been suggested.[5] Others have indicated that D_i>4/n, where n is the number of observations, might be used.[6]

A conservative approach relies on the fact that Cook's distance has the form W/p, where p is the number of fitted parameters (the k above) and W is formally identical to the Wald statistic for testing H_0\colon \beta = \beta_0, with \hat{\beta}^{(-i)} playing the role of the estimate. Recalling that W/p has an F_{p,\,n-p} distribution (with p and n-p degrees of freedom), we see that Cook's distance is equivalent to the F statistic for testing this hypothesis, and we can thus use F_{p,\,n-p,\,1-\alpha} as a threshold.[7]
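As an illustration, the three cut-offs can be applied to the D computed in the sketches above (the choice of alpha is the analyst's; the median, alpha = 0.5, is a commonly cited conservative choice):

```python
from scipy.stats import f

# Heuristic cut-offs for flagging influential observations.
flagged_simple = D > 1                          # D_i > 1
flagged_scaled = D > 4 / n                      # D_i > 4/n
# Conservative F-based threshold F_{p, n-p, 1-alpha}, with p = k here.
alpha = 0.5
flagged_f = D > f.ppf(1 - alpha, k, n - k)
```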

Interpretation

D_i can be interpreted as the distance the parameter estimates move within the confidence ellipsoid that represents a region of plausible values for the parameters when the i-th observation is deleted. This follows from the alternative but equivalent representation of Cook's distance given above, in terms of the change in the estimated regression parameters between the cases where the particular observation is included in or excluded from the regression analysis.

References

  1. Mendenhall, William; Sincich, Terry (1996). A Second Course in Statistics: Regression Analysis (5th ed.). Upper Saddle River, NJ: Prentice-Hall. p. 422. ISBN 0-13-396821-9. A measure of overall influence an outlying observation has on the estimated \beta coefficients was proposed by R. D. Cook (1979). Cook's distance, Di, is calculated...
  2. Cook, R. Dennis (February 1977). "Detection of Influential Observations in Linear Regression". Technometrics (American Statistical Association) 19 (1): 15–18. doi:10.2307/1268249. JSTOR 1268249. MR 0436478.
  3. Cook, R. Dennis (March 1979). "Influential Observations in Linear Regression". Journal of the American Statistical Association (American Statistical Association) 74 (365): 169–174. doi:10.2307/2286747. JSTOR 2286747. MR 0529533.
  4. Hayashi, Fumio (2000). Econometrics. Princeton University Press. pp. 21–23.
  5. Cook, R. Dennis; Weisberg, Sanford (1982). Residuals and Influence in Regression. New York, NY: Chapman & Hall. ISBN 0-412-24280-X.
  6. Bollen, Kenneth A.; Jackman, Robert W. (1990). Fox, John; Long, J. Scott, eds. Modern Methods of Data Analysis. Newbury Park, CA: Sage. pp. 257–91. ISBN 0-8039-3366-5.
  7. Aguinis, Herman; Gottfredson, Ryan K.; Joo, Harry (2013). "Best-Practice Recommendations for Defining, Identifying, and Handling Outliers" (PDF). Organizational Research Methods (Sage) 16 (2): 270–301. doi:10.1177/1094428112470848. Retrieved 4 December 2015.
