6.5. USING AIC

R code 6.23
compare( m6.11 , m6.12 , m6.13 , m6.14 , nobs=nrow(d) )

        k   AICc  w.AICc  dAICc
m6.14   4 -14.02    0.93   0.00
m6.11   2  -7.60    0.04   6.42
m6.13   3  -6.89    0.03   7.13
m6.12   3  -5.03    0.01   8.99

The function compare takes fit models as input, as well as the number of observations (nobs) the models were fit to. It returns a table in which models are ranked from best to worst, with four columns.

(1) k is the number of free parameters in each model.
(2) AICc is the small-sample adjusted AIC value for each model. Smaller is better. Notice here that the values are actually negative. This is fine and not unusual with Gaussian linear models. The most negative AICc is best.
(3) w.AICc is the AKAIKE WEIGHT for each model. These values are transformed AICc values. I'll explain them below.
(4) dAICc is the difference between each AICc and the lowest AICc.

These last two columns, w.AICc and dAICc, are there to ease interpretation of the AICc values. Recall that the absolute magnitude of any information criterion contains little information. It's only the differences between models that provide estimates of relative accuracy. So these two columns transform AICc into measures of relative difference.

dAICc, usually called delta-AICc or δAICc, is just a difference. So it provides a quick guide to how quickly accuracy degrades as you move down the model ranking. This is useful, because often the best model is barely better than the second best. Sometimes several models have nearly identical AICc values. The differences between AICc values just make it easier to diagnose such situations. In this case, model m6.14 is more than 6 units of deviance better than the second model. From there, differences between models are much smaller.

How big of a difference is 6? Well, it's equivalent to adding 3 meaningless and uncorrelated predictor variables. But honestly it's hard to judge on this scale how much more accurate this makes model m6.14. This is where the Akaike weights help.
The weight for a model i in a set of m models is given by:

    w_i = exp(−0.5 δAICc_i) / Σ_{j=1}^{m} exp(−0.5 δAICc_j)

This example uses AICc, but the formula is the same for any other information criterion. To compute these weights yourself, without the convenience of compare:

# compute AICc values for all four models
aicc
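As a sanity check on the formula, the w.AICc column in the table above can be reproduced from the AICc values alone. The sketch below does this in Python rather than R, purely for illustration; the model names and AICc numbers are taken from the compare output above:

```python
import math

# AICc values from the compare() table above
aicc = {"m6.14": -14.02, "m6.11": -7.60, "m6.13": -6.89, "m6.12": -5.03}

best = min(aicc.values())
daicc = {m: a - best for m, a in aicc.items()}            # delta-AICc
raw = {m: math.exp(-0.5 * d) for m, d in daicc.items()}   # exp(-0.5 * dAICc)
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}          # normalize to sum to 1

for m in sorted(aicc, key=aicc.get):
    print(m, round(daicc[m], 2), round(weights[m], 2))
```

Rounded to two decimals, the computed weights match the w.AICc column (0.93, 0.04, 0.03, 0.01), and they necessarily sum to 1, which is what makes them easy to read as relative plausibilities.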
