Schwarz criterion
From Wikipedia, the free encyclopedia
- Schwartz criterion redirects here. For the term in voting theory, see Schwartz set.
In statistics, the Schwarz criterion (short for Schwarz information criterion, abbreviated SIC) is an information criterion for model selection. The SIC is often called the Bayesian information criterion (BIC), because Schwarz (1978) gave a Bayesian argument for adopting it.
Let:
- n = the number of observations, equivalently, the sample size;
- k = the number of free parameters to be estimated. If the estimated model is a linear regression, k is the number of regressors, including the constant;
- RSS = the residual sum of squares from the estimated model;
- L = the maximized value of the likelihood function for the estimated model.
The formula for the SIC is:

\mathrm{SIC} = -2 \ln L + k \ln n

Under the assumption that the model errors or disturbances are normally distributed, this becomes:

\mathrm{SIC} = n \ln\!\left(\frac{\mathrm{RSS}}{n}\right) + k \ln n
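As a minimal illustration, the following Python sketch computes the SIC in both forms; the function and parameter names are illustrative, not part of the article:

```python
import math

def sic_from_loglik(log_l: float, k: int, n: int) -> float:
    # General form: SIC = -2 ln(L) + k ln(n), where log_l is the
    # maximized log-likelihood of the estimated model.
    return -2.0 * log_l + k * math.log(n)

def sic_from_rss(rss: float, k: int, n: int) -> float:
    # Normal-errors form: SIC = n ln(RSS/n) + k ln(n), obtained by
    # concentrating the Gaussian likelihood and dropping additive
    # constants that are the same for every model being compared.
    return n * math.log(rss / n) + k * math.log(n)
```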
Given any two estimated models, the model with the lower value of SIC is the one to be preferred. The SIC is an increasing function of RSS (that is, a decreasing function of the goodness of fit) and an increasing function of k; it thus penalizes free parameters more strongly than the Akaike information criterion does.
The SIC can be used to compare estimated models only when the numerical values of the dependent variable are identical for all estimates being compared. Unlike comparisons based on an F-test or a likelihood ratio test, however, the models being compared need not be nested.
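To see the comparison in action, here is a hypothetical example in Python that fits polynomial models of increasing degree to the same response values and scores each with the normal-errors form of the SIC (the data and settings are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(0.0, 1.0, n)
# Simulated data: the true relationship is linear in x.
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=n)

for degree in (1, 2, 3):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    k = degree + 1  # polynomial coefficients, including the constant
    sic = n * np.log(rss / n) + k * np.log(n)
    print(f"degree={degree}  k={k}  RSS={rss:7.3f}  SIC={sic:8.3f}")
```

Because the penalty term k ln(n) grows with the sample size (it exceeds the AIC's penalty of 2k whenever n > e^2, i.e. roughly 7.4), the higher-degree fits reduce RSS only slightly while paying a larger penalty, so the linear model will typically attain the lowest SIC here.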
See also
- Akaike information criterion
- Jensen–Shannon divergence
- Kullback–Leibler divergence
- Deviance information criterion
References
- McQuarrie, A. D. R., and Tsai, C.-L., 1998. Regression and Time Series Model Selection. World Scientific.
- Schwarz, Gideon, 1978. "Estimating the dimension of a model". Annals of Statistics 6(2):461-464.