4.4. ADDING A PREDICTOR

Before fitting this model, though, consider whether this zero-centered prior really makes sense. Do you think there's just as much chance that the relationship between height and weight is negative as that it is positive? Of course you don't. In this context, such a silly prior is harmless, because there is a lot of data. But in other contexts, your golem may need a little nudge in the right direction.

Rethinking: What's the correct prior? People commonly ask what the correct prior is for a given analysis. The question sometimes implies that for any given set of data, there is a uniquely correct prior that must be used, or else the analysis will be invalid. This is a mistake. There is no more a uniquely correct prior than there is a uniquely correct likelihood. Instead, statistical models are machines for inference. Many machines will work, but some work better than others. Priors can be wrong, but only in the same sense that a kind of hammer can be wrong for building a table.

In choosing priors, there are simple guidelines to get you started. Priors encode states of information before seeing data. So priors allow us to explore the consequences of beginning with different information. In cases in which we have good prior information that discounts the plausibility of some parameter values, like negative associations between height and weight, we can encode that information directly into priors. When we don't have such information, we still usually know enough about the plausible range of values. And you can vary the priors and repeat the analysis in order to study how different states of initial information influence inference. Frequently, there are many reasonable choices for a prior, and all of them produce the same inference.

Making choices tends to make novices nervous. There's an illusion sometimes that default procedures are more objective than procedures that require user choice, such as choosing priors.
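The contrast between a zero-centered prior and one that encodes a positive height-weight association can be made concrete by sampling from each prior. A minimal base-R sketch, not from the text; the means and standard deviations here are illustrative assumptions, not the book's priors:

```r
# Sample from two candidate priors for the slope b and compare how much
# probability each assigns to negative associations.
set.seed(1)
b_flat  <- rnorm(1e4, mean = 0, sd = 10)  # zero-centered: half the mass is negative
b_shift <- rnorm(1e4, mean = 2, sd = 1)   # encodes belief that the slope is positive

mean(b_flat < 0)   # about 0.5
mean(b_shift < 0)  # small, about 0.02
```

With plenty of data, both priors lead to essentially the same inference; the second simply starts the golem with the information that a negative slope is implausible.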
If that's true, then all "objective" means is that everyone does the same thing. It carries no guarantees of realism or accuracy. Furthermore, non-Bayesian procedures make all manner of automatic choices, some of which are equivalent to choosing particular priors. So it isn't the case that Bayesian models make more assumptions than non-Bayesian models; they just make it easier to notice the assumptions.

4.4.2. Fitting the model. The code needed to fit this model via quadratic approximation is a straightforward modification of the kind of code you've already seen. All we have to do is incorporate our new model for the mean into the model specification inside map and be sure to add our new parameters to the start list. Let's repeat the model definition, now with the corresponding R code on the righthand side:

    h_i ∼ Normal(µ_i, σ)        height ~ dnorm(mu, sigma)
    µ_i = α + βx_i              mu <- a + b*x
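What quadratic approximation actually does can be sketched in base R, without the rethinking package: find the posterior mode with optim(), then use the Hessian at the mode to build a Gaussian approximation of the posterior. This is a sketch under stated assumptions, not the package's implementation; the simulated data, the starting values, and the priors on a and b are illustrative choices that mirror the model definition above:

```r
# Quadratic approximation by hand: optimize the log posterior, then
# approximate the posterior as Gaussian using the Hessian at the mode.
set.seed(2)
x <- runif(50, 30, 60)            # hypothetical predictor values
h <- rnorm(50, 100 + 0.9 * x, 5)  # hypothetical heights simulated from a known line

neg_log_post <- function(p) {
  a <- p[1]; b <- p[2]; sigma <- exp(p[3])  # optimize log(sigma) to keep sigma > 0
  -(sum(dnorm(h, a + b * x, sigma, log = TRUE)) +  # likelihood
      dnorm(a, 178, 100, log = TRUE) +             # assumed weak prior on a
      dnorm(b, 0, 10, log = TRUE))                 # assumed weak prior on b
}

fit <- optim(c(100, 1, log(5)), neg_log_post, method = "BFGS", hessian = TRUE)
map_est <- fit$par             # posterior mode: a, b, log(sigma)
vcov_q  <- solve(fit$hessian)  # covariance of the Gaussian approximation
```

The slope recovered in map_est lands near the value used to simulate the data, and vcov_q plays the role of the variance-covariance matrix a fitted map model reports.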
