called probability mass. You can usually ignore all these density/mass details while doing computational work. But it's good to be aware of the distinctions, if only passively.

The Gaussian distribution is routinely seen without σ but with another parameter, τ. The parameter τ in this context is usually called precision and defined as τ = 1/σ². This change of parameters gives us the equivalent formula (just substitute σ = 1/√τ):

    Pr(y|µ, τ) = √(τ/2π) exp(−½ τ (y − µ)²)

This form is common in Bayesian data analysis, and Bayesian model fitting software, such as BUGS or JAGS, sometimes requires using τ rather than σ.

4.2. A language for describing models

This book adopts a standard language for describing and coding statistical models. You find this language in many statistical texts and in nearly all statistical journals, as it is general to both Bayesian and non-Bayesian modeling. Scientists increasingly use this same language to describe their statistical methods, as well. So learning this language is an investment, no matter where you are headed next.

Here's the approach, in abstract. There will be many examples later. These numbered items describe the choices that will be encoded in your model description.

(1) First, we recognize a set of measurements that we hope to predict or understand, the outcome variable or variables.

(2) For each of these outcome variables, we define a likelihood distribution that defines the plausibility of individual observations. In linear regression, this distribution is always Gaussian.

(3) Then we recognize a set of other measurements that we hope to use to predict or understand the outcome.
Call these predictor variables.

(4) We relate the exact shape of the likelihood distribution—its precise location and variance and other aspects of its shape, if it has them—to the predictor variables. In choosing a way to relate the predictors to the outcomes, we are forced to name and define all of the parameters of the model.

(5) Finally, we choose priors for all of the parameters in the model. These priors define the initial information state of the model, before seeing the data.

After all these decisions are made—and most of them will come to seem automatic to you before long—we summarize the model with something mathy like:

    outcome_i ∼ Normal(µ_i, σ)
    µ_i = β × predictor_i
    β ∼ Normal(0, 10)
    σ ∼ Cauchy(0, 1)

If that doesn't make much sense, good. That indicates that you are holding the right textbook, since this book teaches you how to read and write these mathematical model descriptions. We won't do any mathematical manipulation of them. Instead, they provide an unambiguous way to define and communicate our models. Once you get comfortable with their grammar, when you start reading these mathematical descriptions in other books or in scientific journals, you'll find them less obtuse.
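One way to see that such a model summary is a complete recipe is to simulate from it before seeing any data: draw the parameters from their priors, compute each µ_i, then draw outcomes. The following is a minimal Python sketch (not the book's own code) of that prior predictive simulation for the example model above; the specific predictor values are made up for illustration, and the Cauchy draw for σ is folded to its absolute value, since a scale parameter must be positive.

```python
import math
import random

random.seed(1)

def sample_cauchy(loc=0.0, scale=1.0):
    # Inverse-CDF sampler for the Cauchy distribution (no stdlib sampler exists)
    return loc + scale * math.tan(math.pi * (random.random() - 0.5))

# Draw one set of parameters from the priors
beta = random.gauss(0, 10)        # beta ~ Normal(0, 10)
sigma = abs(sample_cauchy(0, 1))  # sigma ~ Cauchy(0, 1), folded positive

# Made-up predictor values, purely for illustration
predictors = [1.0, 2.0, 3.0]

# mu_i = beta * predictor_i, then outcome_i ~ Normal(mu_i, sigma)
outcomes = [random.gauss(beta * x, sigma) for x in predictors]
```

Running this repeatedly shows what kinds of data the priors alone consider plausible, which is exactly the "initial information state" that step (5) refers to.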
