Bayesian information criterion


In statistics, the Bayesian information criterion (BIC) is a criterion for selecting among a set of estimated models. It is also known as the Schwarz criterion, or Schwarz information criterion (SIC), because Schwarz (1978) gave a Bayesian argument for adopting it.

Let:

  • n = the number of observations, or equivalently, the sample size;
  • k = the number of free parameters to be estimated;
  • RSS = the residual sum of squares from the estimated model;
  • L = the maximized value of the likelihood function for the estimated model.

The formula for the BIC is:

\mathrm{BIC} = -2\ln L + k\ln(n).
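As a minimal sketch (the function name and signature are illustrative, not part of the article), the criterion can be computed directly from a model's maximized log-likelihood:

    import math

    def bic(log_likelihood, k, n):
        # log_likelihood: ln(L), the maximized log-likelihood of the model
        # k: number of free parameters; n: number of observations
        return -2.0 * log_likelihood + k * math.log(n)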

Under the assumption that the model errors or disturbances are normally distributed, this becomes the following, up to an additive constant that depends only on n and is therefore the same for every model being compared:

\mathrm{BIC} = n\ln\left(\frac{\mathrm{RSS}}{n}\right) + k\ln(n).
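A corresponding sketch for the Gaussian-error case (again with illustrative names), computing the criterion from the residual sum of squares:

    import math

    def bic_gaussian(rss, k, n):
        # rss: residual sum of squares of the fitted model
        # k: number of free parameters; n: number of observations
        return n * math.log(rss / n) + k * math.log(n)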

Given any two estimated models, the model with the lower value of BIC is the one to be preferred. The BIC is an increasing function of RSS (a worse fit raises the criterion) and an increasing function of k, so it rewards goodness of fit while penalizing model complexity. The BIC penalizes free parameters more strongly than the Akaike information criterion does.
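As a sketch of such a comparison (the synthetic data and the convention of counting only the fitted coefficients in k are assumptions for illustration; also counting the error variance would add the same ln(n) to every model and leave the ranking unchanged), NumPy's polyfit can be used to fit polynomials of increasing degree, keeping the degree with the lowest BIC:

    import numpy as np

    def bic_gaussian(rss, k, n):
        return n * np.log(rss / n) + k * np.log(n)

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 50)
    y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=x.size)  # data are truly linear

    for degree in (1, 2, 3):
        coeffs = np.polyfit(x, y, degree)
        rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
        print(degree, bic_gaussian(rss, k=degree + 1, n=x.size))
    # The degree-1 model should typically attain the lowest BIC: the
    # higher-degree fits reduce RSS slightly but pay k*ln(n) for it.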

The BIC can be used to compare estimated models only when the numerical values of the dependent variable are identical for all the estimates being compared. Unlike model comparison with an F-test or a likelihood-ratio test, the models being compared need not be nested.


References

  • McQuarrie, A. D. R., and Tsai, C.-L. (1998). Regression and Time Series Model Selection. World Scientific.
  • Schwarz, Gideon (1978). "Estimating the dimension of a model". Annals of Statistics 6(2): 461–464.