1
Information criteria
What function fits best? The more free parameters a model has, the higher its R² will be. The more parsimonious a model is, the lower its bias towards type I errors. We have to find a compromise between goodness of fit and bias!
[Figure: explained variance and bias plotted against the number of model parameters (few to many); the optimal number of parameters lies where the two curves balance.]
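A minimal sketch of this trade-off, using hypothetical data and polynomial models of increasing degree (not an example from the slides): R² can only grow as parameters are added, while an information criterion such as AIC penalizes the extra parameters and typically reaches a minimum at a moderate model size.

```python
import numpy as np

# Hypothetical data: a truly linear signal plus noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 40)
y = 2.0 + 1.5 * x + rng.normal(scale=3.0, size=x.size)
n = y.size

for degree in range(1, 7):
    coeffs = np.polyfit(x, y, degree)        # least-squares polynomial fit
    resid = y - np.polyval(coeffs, x)
    rss = np.sum(resid ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - rss / tss                     # always increases with degree
    k = (degree + 1) + 1                     # coefficients plus one for the error variance
    aic = n * np.log(rss / n) + 2 * k        # penalizes the extra parameters
    print(f"degree {degree}: R^2 = {r2:.4f}, AIC = {aic:.2f}")
```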
2
The Akaike criterion of model choice
AIC = 2k − 2 ln(L)
k: number of model parameters + 1; L: maximum likelihood estimate of the model.
If the parameter errors are normal and independent we get
AIC = n ln(RSS / n) + 2k
n: number of data points; RSS: residual sum of squares.
If we fit using χ²: AIC = χ² + 2k.
If we fit using R²: AIC = n ln(1 − R²) + 2k (up to an additive constant that is the same for all models fitted to the same data).
At small sample sizes we should use the following correction:
AICc = AIC + 2k(k + 1) / (n − k − 1)
The preferred model is the one with the lowest AIC.
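A minimal sketch of the least-squares form of these formulas, assuming normally distributed, independent errors. The function names and the example RSS values are illustrative, not taken from the slides.

```python
import math

def aic(rss: float, n: int, k: int) -> float:
    """AIC for a least-squares fit; k counts the model parameters plus one
    (the error variance)."""
    return n * math.log(rss / n) + 2 * k

def aic_corrected(rss: float, n: int, k: int) -> float:
    """Small-sample correction: AICc = AIC + 2k(k+1)/(n - k - 1)."""
    return aic(rss, n, k) + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical example: two candidate models fitted to the same n = 25 data points.
n = 25
models = {"linear (k=3)": (120.0, 3), "quadratic (k=4)": (100.0, 4)}
for name, (rss, k) in models.items():
    print(name, "AICc =", round(aic_corrected(rss, n, k), 2))
# The preferred model is the one with the lowest AIC (or AICc).
```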
3
We get the surprising result that the seemingly worst-fitting model appears to be the preferred one. A single outlier makes the difference: the single high residual makes the exponential fit worse.
4
Significant difference in model fit
Approximately, a difference in AIC is statistically significant in favour of the model with the smaller AIC at the 5% error benchmark if |ΔAIC| > 2. The last model is not significantly different (at the 5% level) from the second model.
AIC model selection serves to find the best descriptor of the observed structure. It is a hypothesis-generating method; it does not test for significance. Model selection using significance levels is a hypothesis-testing method. Significance levels and AIC must not be used together.
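A minimal sketch of the |ΔAIC| > 2 rule of thumb from this slide. The model names and AIC values are hypothetical; only the comparison logic matters.

```python
# Hypothetical AIC values for three candidate models.
aics = {"model A": 54.1, "model B": 52.8, "model C": 52.3}

best_name = min(aics, key=aics.get)
best_aic = aics[best_name]
for name, value in sorted(aics.items(), key=lambda item: item[1]):
    delta = value - best_aic
    if name == best_name:
        print(f"{name}: AIC = {value} (lowest, preferred)")
    elif delta > 2:
        print(f"{name}: dAIC = {delta:.1f} -> significantly worse at the 5% benchmark")
    else:
        print(f"{name}: dAIC = {delta:.1f} -> not significantly different from {best_name}")
```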
5
Literature