Bayesian information criterion
In statistics, the Bayesian information criterion (BIC) is a criterion for model selection. It is also known as the Schwarz criterion or Schwarz information criterion (SIC), because Schwarz (1978) gave a Bayesian argument for adopting it.
Let:
- n = the number of observations, equivalently, the sample size;
- k = the number of free parameters to be estimated. If the estimated model is a linear regression, k is the number of regressors, including the constant;
- RSS = the residual sum of squares from the estimated model;
- L = the maximized value of the likelihood function for the estimated model.
The formula for the BIC is:

BIC = -2 ln(L) + k ln(n)
Under the assumption that the model errors or disturbances are normally distributed, this becomes (up to an additive constant, which depends only on n and not on the model):

BIC = n ln(RSS/n) + k ln(n)
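As an illustrative sketch (not part of the original article), the following Python functions compute the BIC from a maximized log-likelihood and from the residual sum of squares under the normal-error assumption; the function names and arguments are chosen here purely for illustration.

```python
import numpy as np

def bic_from_loglik(log_likelihood, k, n):
    """General form: BIC = -2 ln(L) + k ln(n), with L the maximized likelihood."""
    return -2.0 * log_likelihood + k * np.log(n)

def bic_gaussian(rss, k, n):
    """Normal-error form, up to an additive constant: BIC = n ln(RSS/n) + k ln(n)."""
    return n * np.log(rss / n) + k * np.log(n)
```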
Given any two estimated models, the model with the lower value of BIC is the one to be preferred. The BIC is an increasing function of RSS (that is, a decreasing function of the goodness of fit) and an increasing function of k: unexplained variation in the dependent variable and additional explanatory variables both raise the BIC. The BIC penalizes free parameters more strongly than does the Akaike information criterion, since its penalty term is k ln(n) rather than 2k, making it stricter whenever ln(n) > 2, i.e. for n of about 8 or more.
It is important to keep in mind that the BIC can be used to compare estimated models only when the numerical values of the dependent variable are identical for all models being compared. The models being compared need not be nested, unlike the case when models are compared using an F-test or a likelihood-ratio test.
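As a usage sketch under invented data (the sample, the two model specifications, and the use of NumPy's least-squares routine are assumptions for illustration, not part of the article), the comparison below fits two linear regressions to the same dependent variable and reports the normal-error BIC for each; the model with the lower BIC is the one the criterion prefers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)       # the same dependent variable for every model compared

# Candidate design matrices: intercept only (k = 1) versus intercept plus x (k = 2)
designs = {1: np.ones((n, 1)), 2: np.column_stack([np.ones(n), x])}

for k, X in designs.items():
    _, residuals, _, _ = np.linalg.lstsq(X, y, rcond=None)
    rss = residuals[0]                        # residual sum of squares from the fit
    bic = n * np.log(rss / n) + k * np.log(n)
    print(f"k = {k}: BIC = {bic:.1f}")        # the lower BIC indicates the preferred model
```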
See also
- Akaike information criterion
- Bayesian model comparison
- Deviance information criterion
- Jensen-Shannon divergence
- Kullback-Leibler divergence
- Model selection
References
- McQuarrie, A. D. R., and Tsai, C.-L., 1998. Regression and Time Series Model Selection. World Scientific.
- Schwarz, Gideon, 1978. "Estimating the dimension of a model". Annals of Statistics 6(2):461-464.