The attitude this book encourages is to retain and present all models, no matter how big or small the differences in AIC (or another criterion). The more information in your summary, the more information for peer review, and the more potential for the scholarly community to accumulate information. And keep in mind that averaging models often produces better results than selecting any single model, obviating the “significance” question.

Rethinking: AIC metaphors. Here are two metaphors used to help explain the concepts behind using AIC (or another information criterion) to compare models.

Think of models as race horses. In any particular race, the best horse may not win. But it’s more likely to win than is the worst horse. And when the winning horse finishes in half the time of the second-place horse, you can be pretty sure the winning horse is also the best. But if instead it’s a photo-finish, with a near tie between first and second place, then it is much harder to be confident about which is the best horse. AIC values are analogous to these race times: smaller values are better, and the distances between the horses/models are informative. Akaike weights transform differences in finishing time into probabilities of being the best model/horse on future data/races (a small numerical sketch of these weights follows this box). But if the track conditions or jockey changes, these probabilities may mislead. Forecasting future racing/prediction based upon a single race/fit carries no guarantees.

Think of models as stones thrown to skip on a pond. No stone will ever reach the other side (perfect prediction), but some sorts of stones make it farther than others, on average (make better test predictions). But on any individual throw, lots of unique conditions prevail: the wind might pick up or change direction, a duck could surface to intercept the stone, or the thrower’s grip might slip. So which stone will go farthest is not certain. Still, the relative distances reached by each stone provide information about which stone will do best on average. But we can’t be too confident about any individual stone, unless the distances between stones are very large.

Of course neither metaphor is perfect. Metaphors never are. But many people find these to be helpful in interpreting information criteria.
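To make the weights in the horse-race metaphor concrete, here is a minimal R sketch of how a vector of AIC values can be converted into Akaike weights. The AIC values are invented for illustration and do not come from any model fit in this chapter.

# invented AIC values for three hypothetical models (illustration only)
aic <- c( 110.2 , 112.5 , 118.9 )

# differences from the smallest (best) AIC
d_aic <- aic - min(aic)

# Akaike weights: rescale exp(-0.5*difference) so the weights sum to 1
w <- exp( -0.5*d_aic ) / sum( exp( -0.5*d_aic ) )
round( w , 2 )   # approximately 0.75 0.24 0.01 for these values

A model’s weight can be read, heuristically, as the probability that it will make the best predictions on new data, among the models being compared.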
6.5.1.2. Comparing estimates. In addition to comparing models on the basis of expected test deviance, it is nearly always useful to compare parameter estimates among models. Comparing estimates helps in at least two major ways. First, it is useful to understand why a particular model or models have lower AIC values. Changes in posterior densities, across models, provide useful hints. Second, regardless of AIC values, we often want to know whether some parameter’s posterior density is stable across models. For example, scholars often ask whether a predictor remains important as other predictors are added and subtracted from the model. To address that kind of question, one typically looks for a parameter’s posterior density to remain stable across models, as well as for all models that contain that parameter to have lower AICc (or DIC) than those models without it.

In the primate milk example, comparing estimates confirms what you already learned in the previous chapter: the model with both predictors does much better, because each predictor masks the other. In order to demonstrate that in the previous chapter, we actually did fit three of the models currently at hand. Looking at a consolidated table of the MAP estimates makes the comparison a lot easier. The coeftab function takes a series of fit models as input and builds such a table:

R code 6.25
coeftab(m6.11,m6.12,m6.13,m6.14)

m6.11 m6.12 m6.13 m6.14
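The coeftab table can also be displayed graphically. The rethinking package provides a plot method for coeftab output; the call below is a usage sketch and assumes the four models above have already been fit in the current session.

plot( coeftab(m6.11,m6.12,m6.13,m6.14) )

The resulting plot shows each model’s estimate of each parameter, with an interval, which makes it easier to spot estimates that shift as predictors are added or removed.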
