Schwarz criterion

From Wikipedia, the free encyclopedia

Schwartz criterion redirects here. For the term in voting theory, see Schwartz set.

In statistics, the Schwarz criterion (short for Schwarz information criterion, abbreviated SIC) is a criterion for selecting among statistical models. The SIC is also known as the Bayesian information criterion (BIC), because Schwarz (1978) gave a Bayesian argument for adopting it.

Let:

  • L = the maximized value of the likelihood function for the estimated model;
  • k = the number of free parameters to be estimated;
  • n = the number of observations, i.e. the sample size;
  • RSS = the residual sum of squares from the estimated model.

The formula for the SIC is:

\mathrm{SIC} = -2 \ln L + k \ln(n).
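As a minimal sketch (the function name, argument names, and example numbers are invented here, not from the source), the general formula translates directly into code:

```python
import math

def sic(log_likelihood, k, n):
    """Schwarz information criterion: -2*ln(L) + k*ln(n).

    log_likelihood is the maximized log-likelihood ln(L) of the
    estimated model, k the number of free parameters, n the sample size.
    """
    return -2.0 * log_likelihood + k * math.log(n)

# Example: a hypothetical model with maximized log-likelihood -120.0,
# 3 free parameters, fitted to 50 observations (illustrative only).
print(sic(-120.0, 3, 50))
```

Note that, with the sample size n fixed, adding a parameter always raises the `k * ln(n)` penalty term, which is the sense in which the SIC penalizes model complexity.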

Under the assumption that the model errors or disturbances are normally distributed, this becomes:

\mathrm{SIC} = n \ln\left(\frac{\mathrm{RSS}}{n}\right) + k \ln(n).

Given any two estimated models, the model with the lower value of SIC is preferred. The SIC is an increasing function of RSS (the worse the fit, the larger the criterion) and an increasing function of k. It penalizes free parameters more strongly than the Akaike information criterion does.

The SIC can be used to compare estimated models only when the numerical values of the dependent variable are identical for all the estimates being compared. Unlike an F-test or a likelihood-ratio test, however, it does not require the models being compared to be nested.
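For instance, the RSS form above can be used to score two non-nested least-squares models fitted to the same dependent variable. The following is only an illustrative sketch, assuming normally distributed errors; the data, model names, and helper function are invented here:

```python
import numpy as np

def sic(rss, n, k):
    """SIC under normal errors: n*ln(RSS/n) + k*ln(n)."""
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(1)
n = 200
x = np.linspace(0.0, 2.0, n)
# Data actually generated from the exponential-regressor model.
y = 0.5 + 1.5 * np.exp(x) + rng.normal(scale=0.3, size=n)

# Two non-nested models, same dependent variable y: intercept plus
# either x or exp(x) as the single regressor (k = 2 in both cases).
scores = {}
for name, regressor in [("linear in x", x), ("linear in exp(x)", np.exp(x))]:
    X = np.column_stack([np.ones(n), regressor])
    _, (rss,), *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit
    scores[name] = sic(rss, n, 2)
    print(name, scores[name])

best = min(scores, key=scores.get)  # lower SIC is preferred
```

Because both models estimate the same number of parameters, the penalty terms coincide and the comparison reduces to the fit terms; here the exponential-regressor model, which matches the data-generating process, attains the lower SIC.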


References

  • McQuarrie, A. D. R., and Tsai, C.-L., 1998. Regression and Time Series Model Selection. World Scientific.
  • Schwarz, Gideon, 1978. "Estimating the dimension of a model". Annals of Statistics 6(2):461–464.