Talk:Akaike information criterion
Fitting statistical models
When analyzing data, a general question arises about both the absolute and the relative goodness of fit of a given model. In the general case, we are fitting a model with K parameters to the observed data x_1, x_2, ..., x_N. When fitting AR, MA, ARMA, or ARIMA models, the question we are concerned with is what K should be, i.e. how many parameters to include in the model.
The parameters are routinely estimated by minimizing the residual sum of squares or by maximizing the log-likelihood of the data. For normally distributed errors, the least-squares and maximum-likelihood estimates coincide.
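To make that equivalence explicit (a worked note of my own, not part of the article text): for independent Gaussian errors with variance σ², the log-likelihood of the data is

\ln L = -\frac{N}{2}\ln\left(2\pi\sigma^{2}\right) - \frac{1}{2\sigma^{2}}\sum_{i=1}^{N}\left(x_i - \hat{x}_i\right)^{2}

so, for any fixed σ², maximizing ln L over the model parameters is the same as minimizing the residual sum of squares Σ(x_i − x̂_i)².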
These techniques are, however, unusable for estimating the optimal K, since adding parameters never worsens the in-sample fit. For that we use information criteria, which also justify the use of the log-likelihood above.
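As a concrete sketch of the selection step (my own illustration, not from the article; the AR-fitting helper, the simulated series, and the candidate orders are all made up for the example): fit AR(k) models by least squares, plug the residual sum of squares into the Gaussian log-likelihood above, and keep the order with the smallest AIC = 2K − 2 ln L.

import numpy as np

def fit_ar_least_squares(x, k):
    """Fit x_t = a_1 x_{t-1} + ... + a_k x_{t-k} + c + e_t by ordinary least squares.
    Returns (coefficients, residual sum of squares, number of usable observations)."""
    n = len(x)
    # Design matrix: one column per lag, plus an intercept column.
    X = np.column_stack([x[k - j - 1 : n - j - 1] for j in range(k)] + [np.ones(n - k)])
    y = x[k:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ coef) ** 2))
    return coef, rss, len(y)

def aic_gaussian(rss, n_obs, n_params):
    """AIC = 2K - 2 ln L, with the Gaussian log-likelihood evaluated at sigma^2 = RSS / n."""
    sigma2 = rss / n_obs
    log_lik = -0.5 * n_obs * (np.log(2 * np.pi * sigma2) + 1.0)
    return 2 * n_params - 2 * log_lik

# Toy data: an AR(2) process with made-up coefficients.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

# Score candidate orders; here K counts the k AR coefficients, the intercept,
# and the noise variance (parameter-counting conventions vary).
for k in range(1, 6):
    _, rss, n_obs = fit_ar_least_squares(x, k)
    print(k, round(aic_gaussian(rss, n_obs, n_params=k + 2), 1))

One caveat with this sketch: each candidate order uses a slightly different effective sample size (n − k usable observations), so a careful comparison would evaluate every order on the same set of observations.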
More TBA...
More TBA... Note: The entropy link should be changed to Information entropy
Deviance Information Criterion
I've written a short article on DIC, please look it over and edit. Bill Jefferys 22:55, 7 December 2005 (UTC)
- Great, thanks very much. I've edited a bit for style, no major changes though. Cheers, --MarkSweep (call me collect) 23:18, 7 December 2005 (UTC)
Justification/derivation
Is AIC *derived* from anything, or is it just a hack? BIC is at least derivable from some postulate. Why would you ever use AIC over BIC or, better, cross-validation?
There is a link on the page ([1]) which shows a proof that AIC can be derived from the same postulate as BIC, and vice versa. Cross-validation is good but computationally expensive compared to AIC/BIC, a problem for large-scale optimisations. The actual discussion over BIC vs. AIC as the weapon of choice seems to be long, immensely technical/theoretical, and not a little boring. 128.240.229.7 12:37, 28 February 2007 (UTC)
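For readers weighing the two penalties (a side note of mine, not part of the proof linked above): with k estimated parameters, maximized likelihood L, and n observations,

\mathrm{AIC} = 2k - 2\ln L, \qquad \mathrm{BIC} = k\ln n - 2\ln L

so the two criteria share the same fit term and differ only in how heavily extra parameters are penalized; the BIC penalty exceeds the AIC penalty once n > e² ≈ 7.4.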
Origin of Name
Akaike said that AIC stands for "An Information Criterion", not "Akaike Information Criterion". Yoderj 19:39, 16 February 2007 (UTC)