Monday, August 1, 2011

AIC vs. BIC

AIC and BIC are two similar, but not identical, statistical concepts. They stand for the Akaike Information Criterion and the Bayesian Information Criterion, respectively, and are two methods of determining which of two or more competing models is probably the best to use.

These criteria do not tell you whether any of the models is actually good at predicting reality, only which of the candidates is relatively better or worse.

Definitions in math:
AIC = 2k - 2*logL
BIC = k*ln(N) - 2*logL
where k is the number of parameters in the model, logL is the maximized log-likelihood, and N is the sample size.
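
In code, both criteria are one-liners. Here's a minimal sketch; the names aic, bic, k, log_l, and n are mine, not from any particular library:

    # A minimal sketch of the two formulas above
    import math

    def aic(k, log_l):
        # AIC = 2k - 2*logL: reward fit, charge 2 per parameter
        return 2 * k - 2 * log_l

    def bic(k, log_l, n):
        # BIC = k*ln(N) - 2*logL: the charge per parameter grows with N
        return k * math.log(n) - 2 * log_l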
Again, they are similar and based on the maximum likelihood estimate, but they penalize the number of parameters in different ways. Since you should almost always have a sample size of at least 8 (and ln(8) ≈ 2.08 > 2), the penalty per parameter is greater under the BIC.
(Adding parameters to a model makes it easier to tweak the model to fit the data; it's a bit like cheating if you don't have a good reason to add them. That's why both criteria penalize you for having additional parameters.)
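
To see the difference in penalties concretely, here is a comparison of two hypothetical models fit to the same N = 50 data points (all numbers are invented for illustration):

    import math

    n = 50
    # (number of parameters, maximized log-likelihood), both made up
    models = {
        "2-parameter": (2, -120.0),
        "3-parameter": (3, -118.5),
    }

    for name, (k, log_l) in models.items():
        aic = 2 * k - 2 * log_l
        bic = k * math.log(n) - 2 * log_l
        print(f"{name}: AIC = {aic:.2f}, BIC = {bic:.2f}")

The 3-parameter model fits a bit better, so AIC prefers it (243.00 vs 244.00), but since ln(50) ≈ 3.9 > 2, BIC charges more per parameter and prefers the 2-parameter model (247.82 vs 248.74). Lower is better for both criteria, and they can disagree near the margin.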

In the paper I am currently reading, both AIC and BIC are based on the Yuan-Bentler T2* statistic and a Chi-squared distribution. The T2* statistic is a test statistic: a function that combines many aspects of the data (such as the mean, standard deviation, or number of samples) into one number. A distribution is the set of possible values of the test statistic and how likely each of them is. The Chi-squared distribution is a particularly common distribution of known form, used under the assumption that the errors in the sample are independent and normally distributed about zero.
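
I won't attempt the Yuan-Bentler computation here, but as a generic illustration of the last step, comparing a test statistic against a Chi-squared distribution looks like this (the statistic value and degrees of freedom are made up, and scipy is assumed to be available):

    from scipy.stats import chi2

    t_stat = 12.6  # hypothetical test statistic computed from the data
    df = 5         # hypothetical degrees of freedom

    # Probability of seeing a statistic at least this large if the
    # model's assumptions hold (the survival function, 1 - CDF)
    p_value = chi2.sf(t_stat, df)
    print(f"p-value: {p_value:.4f}")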
