Mallows' Cp


In statistics, Mallows' Cp, named after Colin Mallows, is often used as a stopping rule for various forms of stepwise regression. This was not Mallows' intention: he proposed the statistic only as a way of facilitating comparisons among many alternative subset regressions, and warned against its use as a decision rule.

Putting too many regressors into a regression model commonly produces collinearity: many of the nominally independent variables have effects that are highly correlated and cannot be separately estimated. A model that includes too many regressors (variables whose coefficients must be estimated) is said to be "over-fit." In the worst case, the number of parameters to be estimated exceeds the number of observations, so that some effects cannot be estimated at all. The Cp statistic can be used as a subsetting criterion for selecting a reduced model that avoids such problems.

If P regressors are selected from a set of K > P, Cp is defined as

 C_p = \frac{SSE_P}{S^2} - N + 2P,

where

SSE_P = \sum_{i=1}^{N} (Y_i - \hat{Y}_{Pi})^2

is the error sum of squares for the model with P regressors, \hat{Y}_{Pi} being the ith predicted value of Y from the P regressors; S^2 is the residual mean square after regression on the complete set of K regressors; and N is the sample size. If the model used to form S^2 fits without bias, then S^2 C_p is an unbiased estimator of the mean squared prediction error (MSPE).
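
As an illustration, Cp can be obtained from two ordinary-least-squares fits: one on the full set of K regressors (giving S^2) and one on the P selected regressors (giving SSE_P). The following Python sketch is one possible implementation under those definitions; the helper names sse and mallows_cp and the use of NumPy are illustrative assumptions, not a standard interface.

    # Illustrative sketch: Mallows' Cp from two ordinary-least-squares fits.
    # The names sse and mallows_cp are assumptions made for this example.
    import numpy as np

    def sse(X, y):
        # Error sum of squares from an OLS fit of y on the columns of X.
        beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)

    def mallows_cp(X_full, y, subset):
        # Cp = SSE_P / S^2 - N + 2P, where S^2 is the residual mean square
        # of the full model containing all K regressors.
        N, K = X_full.shape
        S2 = sse(X_full, y) / (N - K)      # residual mean square, full model
        P = len(subset)                    # number of regressors retained
        return sse(X_full[:, subset], y) / S2 - N + 2 * P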

Practical use

A common misconception is that the "best" model is the one that minimizes Cp. While it is true that, for independent Gaussian errors of constant variance, the model minimizing the MSPE is in some sense optimal, the same is not necessarily true of the model minimizing Cp. Rather, because Cp is a random variable, it is important to consider its distribution; for example, one may form confidence intervals for Cp under its null distribution, that is, when the bias is zero.

Cp is similar to the Akaike information criterion and, as a reliable measure of the "goodness of fit" of a model, tends to be less dependent than R² on the number of effects in the model. Hence, Cp tends to find the best subset that includes only the important predictors of the dependent variable. Under a model not suffering from appreciable lack of fit (bias), Cp has expectation nearly equal to P; otherwise the expectation is roughly P plus a positive bias term. Nevertheless, even though its expectation is greater than or equal to P, nothing prevents Cp < P, or even Cp < 0, in extreme cases. Similarly, it is a common misconception that one should simply choose a subset with Cp approximately equal to P.
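
To illustrate how Cp is tabulated against P across candidate subsets (so that the values can be compared rather than used as a single decision rule), the following sketch evaluates every two-regressor subset of a small simulated data set. The simulated data and the mallows_cp function from the sketch above are illustrative assumptions.

    # Illustrative continuation: tabulate Cp against P for candidate subsets,
    # using simulated data (an assumption) and mallows_cp from the sketch above.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 50, 4
    X = np.column_stack([np.ones(N), rng.normal(size=(N, K))])    # intercept + 4 regressors
    y = 1.0 + 2.0 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(size=N)  # only two regressors matter

    for pair in itertools.combinations(range(1, K + 1), 2):
        cols = [0] + list(pair)            # always keep the intercept column
        P = len(cols)
        print(f"regressors {pair}: P = {P}, Cp = {mallows_cp(X, y, cols):.2f}")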

References

  • Mallows, C. L. (1973). "Some Comments on Cp". Technometrics, 15, 661–675.
  • Hocking, R. R. (1976). "The Analysis and Selection of Variables in Linear Regression". Biometrics, 32, 1–50.
  • Daniel, C. and Wood, F. (1980). Fitting Equations to Data, Rev. Ed. New York: Wiley & Sons.