Deviance information criterion
The deviance information criterion (DIC) is a hierarchical modeling generalization of the AIC (Akaike information criterion) and BIC (Bayesian information criterion, also known as the Schwarz criterion). It is particularly useful in Bayesian model selection problems where the posterior distributions of the models have been obtained by Markov chain Monte Carlo (MCMC) simulation. Like AIC and BIC, it is an asymptotic approximation as the sample size becomes large. It is only valid when the posterior distribution is approximately multivariate normal.
Define the deviance as $D(\theta) = -2 \log\big(p(y \mid \theta)\big) + C$, where $y$ are the data, $\theta$ are the unknown parameters of the model and $p(y \mid \theta)$ is the likelihood function. $C$ is a constant that cancels out in all calculations that compare different models, and which therefore does not need to be known.
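As a concrete illustration, the deviance can be written as a short function. This is a minimal sketch, assuming a hypothetical normal model for the data with unknown mean $\theta$ and known unit variance; the constant $C$ is dropped, since it cancels when comparing models:

```python
import numpy as np
from scipy import stats

def deviance(y, theta):
    """D(theta) = -2 * log p(y | theta) for a hypothetical normal model
    with unknown mean `theta` and known unit variance; the additive
    constant C is dropped because it cancels between models."""
    log_lik = stats.norm.logpdf(y, loc=theta, scale=1.0).sum()
    return -2.0 * log_lik
```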
The expectation $\bar{D} = \mathrm{E}^{\theta}[D(\theta)]$ is a measure of how well the model fits the data; the larger this is, the worse the fit.
The effective number of parameters of the model is computed as $p_D = \bar{D} - D(\bar{\theta})$, where $\bar{\theta}$ is the expectation of $\theta$. The larger this is, the easier it is for the model to fit the data.
The deviance information criterion is calculated as
$$\mathit{DIC} = p_D + \bar{D}.$$
The idea is that models with smaller DIC should be preferred to models with larger DIC. Models are penalized both by the value of $\bar{D}$, which favors a good fit, but also (in common with AIC and BIC) by the effective number of parameters $p_D$. Since $\bar{D}$ will decrease as the number of parameters in a model increases, the $p_D$ term compensates for this effect by favoring models with a smaller number of parameters.
The advantage of DIC over other criteria, for Bayesian model selection, is that the DIC is easily calculated from the samples generated by a Markov chain Monte Carlo simulation. AIC and BIC require calculating the likelihood at its maximum over $\theta$, which is not readily available from the MCMC simulation. But to calculate DIC, simply compute $\bar{D}$ as the average of $D(\theta)$ over the samples of $\theta$, and $D(\bar{\theta})$ as the value of $D$ evaluated at the average of the samples of $\theta$. Then the DIC follows directly from these approximations.
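A minimal sketch of this calculation in Python, using the same hypothetical normal model as the sketch above; the data, the posterior draws, and the `deviance` helper are all stand-ins for the output of an actual MCMC run:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=1.0, size=50)  # stand-in for observed data

# Stand-in for MCMC draws of theta: here, draws from the analytic posterior
# of a normal mean with known unit variance and a flat prior.
theta_samples = rng.normal(loc=y.mean(), scale=1.0 / np.sqrt(len(y)), size=5000)

def deviance(theta):
    # D(theta) = -2 log p(y | theta), dropping the constant C
    return -2.0 * stats.norm.logpdf(y, loc=theta, scale=1.0).sum()

D_bar = np.mean([deviance(t) for t in theta_samples])  # average of D(theta) over the samples
D_hat = deviance(theta_samples.mean())                 # D at the average of the samples
p_D = D_bar - D_hat                                    # effective number of parameters
DIC = p_D + D_bar

print(f"p_D = {p_D:.2f}, DIC = {DIC:.2f}")
```

With one free parameter and an approximately normal posterior, $p_D$ should come out close to 1 in this toy example.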
See also
- Akaike information criterion
- Bayesian information criterion
- Kullback-Leibler divergence
- Jensen-Shannon divergence