We're going to be using HMC on and off for the remainder of this book. You won't have to implement it yourself. But understanding some of the concepts behind it will help you grasp how it outperforms Metropolis and Gibbs sampling, and also why it is not a universal solution to all MCMC problems.

Suppose King Markov's cousin Monty is King on the mainland. Monty's kingdom is not a discrete set of islands. Instead, it is a continuous territory stretched out along a narrow valley. But the King has a similar obligation: to visit his citizens in proportion to their local density. Like Markov, Monty doesn't wish to bother with schedules and calculations. So likewise he's not going to take a full census and solve for some optimal travel schedule.

Also like Markov, Monty has a highly educated and mathematically gifted advisor. His name is Hamilton. Hamilton realized that a much more efficient way to visit the citizens in the continuous kingdom is to travel back and forth along its length. In order to spend more time in densely settled areas, they should slow the royal vehicle down when houses grow more dense. Likewise, they should speed up when houses grow more sparse. This strategy requires knowing how quickly population density is changing at their current location. But it doesn't require remembering where they've been or knowing the population distribution anyplace else. And a major benefit of this strategy, compared to that of Metropolis, is that the King makes a full sweep of the kingdom before revisiting anyone.

This story is analogous to how Hamiltonian Monte Carlo works. In statistical applications, the royal vehicle is the current vector of parameter values. Let's consider the single-parameter case, just to keep things simple. In that case, the log-posterior is like a bowl, with the MAP at its nadir.
Then the job is to sweep across the surface of the bowl, adjusting speed in proportion to how high up we are.

HMC really does run a physics simulation, pretending the vector of parameters gives the position of a little frictionless particle. The log-posterior provides a surface for this particle to glide across. When the log-posterior is very flat, because there isn't much information in the likelihood and the priors are rather flat, then the particle can glide for a long time before the slope (gradient) makes it turn around. When instead the log-posterior is very steep, because either the likelihood is very concentrated or the priors are, then the particle doesn't get far before turning around.

8.3. Easy HMC: map2stan

The rethinking package provides a convenient interface, map2stan, to compile lists of formulas, like the lists you've been using so far to construct map estimates, into Stan HMC code. A little more housekeeping is needed to use map2stan: you need to preprocess any variable transformations, and you need to construct a clean data frame with only the variables you will use. But otherwise, installing Stan on your computer is the hardest part. And once you get comfortable with interpreting samples produced in this way, you can go peek inside and see exactly how the model formulas you already understand correspond to the code that drives the Markov chain.

To see how it's done, let's revisit the terrain ruggedness example from Chapter 7.

library(rethinking)
data(rugged)
d <- rugged
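Before moving on to map2stan, it may help to see the physics simulation written out. Below is a minimal, illustrative HMC sampler in plain R for a single parameter with a standard normal posterior, using leapfrog integration. This is a toy sketch to make the "gliding particle" picture concrete, not the implementation Stan or map2stan actually uses; the step size, number of leapfrog steps, and target distribution are all arbitrary choices for the example.

```r
# Toy HMC for one parameter with a standard normal posterior.
# U is the negative log-posterior ("potential energy"); grad_U its gradient.
U      <- function(q) 0.5 * q^2
grad_U <- function(q) q

hmc_step <- function(q, eps = 0.1, L = 20) {
    p <- rnorm(1)                  # random momentum: flick the particle
    q_new <- q
    p_new <- p - eps * grad_U(q_new) / 2   # leapfrog: initial half step
    for (i in 1:L) {
        q_new <- q_new + eps * p_new       # glide along the surface
        if (i < L) p_new <- p_new - eps * grad_U(q_new)
    }
    p_new <- p_new - eps * grad_U(q_new) / 2  # final half step
    # accept/reject corrects for numerical error in the simulation
    if (log(runif(1)) < U(q) + 0.5 * p^2 - U(q_new) - 0.5 * p_new^2) {
        q_new
    } else {
        q
    }
}

set.seed(1)
samples <- numeric(2000)
q <- 0
for (s in 1:2000) {
    q <- hmc_step(q)
    samples[s] <- q
}
mean(samples)   # should be near 0
sd(samples)     # should be near 1
```

Note how the only posterior-specific ingredients are U and grad_U: like Hamilton's travel plan, the sampler needs the local slope but never the whole population map.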
