[Figure 8.4: two panels plotting sigma (8–14) against mu (146–154); left panel "Gibbs sampling", right panel "Metropolis-Hastings".]

FIGURE 8.4. The first 500 samples from Gibbs sampling (left) and fixed-step Metropolis-Hastings (right). The black plus signs in each plot locate the target high density region. Gibbs sampling gets nearby in exactly two steps.

…allow you to define models in much the same style as the map models you've been using so far in this book. If you use conjugate priors, BUGS and JAGS models can sample very efficiently. If you don't use conjugate priors, they fall back on Metropolis sampling.

But there are some practical limitations to Gibbs sampling. First, maybe we don't want to use conjugate priors. Some conjugate priors seem silly, and choosing a prior so that the model fits efficiently isn't really a strong argument from a scientific perspective. Second, as models become more complex and contain hundreds or thousands or tens of thousands of parameters, Gibbs sampling can become shockingly inefficient. In those cases, there are other algorithms. (A minimal sketch of conjugate Gibbs updates appears at the end of this section.)

8.2.2. Hamiltonian Monte Carlo.

It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought. – E. T. Jaynes 101

The Metropolis algorithm and Gibbs sampling are both highly random procedures. They try out new parameter values and see how good they are, compared to the current values. But Gibbs sampling gains efficiency by reducing this randomness and exploiting knowledge of the target distribution. This seems to fit Jaynes' suggestion, quoted above, that when there is a random way of accomplishing some calculation, there is probably a less random way that is better.

Another important MCMC algorithm, HAMILTONIAN MONTE CARLO (or Hybrid Monte Carlo, HMC), pushes Jaynes' principle further. HMC is much more computationally costly at each step than Metropolis or Gibbs sampling. But its proposals are typically much more efficient than even Gibbs sampling. As a result, it doesn't need as many samples to describe the posterior distribution. And as models become more complex, with thousands or tens of thousands of parameters, HMC can really outshine other algorithms.
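Two short sketches in R may help fix ideas. First, the kind of conjugate Gibbs updates behind the left panel of Figure 8.4. This is a minimal sketch, not BUGS or JAGS code: the simulated data and prior constants are illustrative assumptions, with a normal likelihood, a conjugate Normal prior on mu, and an Inverse-Gamma prior on sigma^2.

set.seed(7)
y <- rnorm(100, mean = 154, sd = 7)      # simulated heights; illustrative only
n <- length(y)
mu0 <- 178; tau0 <- 20                   # assumed prior: mu ~ Normal(mu0, tau0)
a <- 2; b <- 100                         # assumed prior: sigma^2 ~ Inv-Gamma(a, b)
mu <- 0; sigma2 <- 1                     # deliberately poor start values
draws <- matrix(NA, 500, 2)
for (s in 1:500) {
    # full conditional for mu is Normal: precision-weighted blend of prior and data
    prec <- 1/tau0^2 + n/sigma2
    m <- (mu0/tau0^2 + sum(y)/sigma2) / prec
    mu <- rnorm(1, m, sqrt(1/prec))
    # full conditional for sigma^2 is Inverse-Gamma
    sigma2 <- 1/rgamma(1, shape = a + n/2, rate = b + sum((y - mu)^2)/2)
    draws[s, ] <- c(mu, sqrt(sigma2))
}

Each pass draws mu and then sigma^2 directly from its full conditional distribution. There are no proposals to reject, which is why the left panel of Figure 8.4 homes in on the high density region so quickly.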

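Second, a single HMC transition, to show where the extra per-step cost goes. This is a bare-bones sketch assuming a standard bivariate normal target; the step size eps and path length L are illustrative, untuned values, and this is not the implementation used later in the book.

U <- function(q) sum(q^2)/2              # negative log density of the target
grad_U <- function(q) q                  # its gradient
hmc_step <- function(q, eps = 0.1, L = 20) {
    p <- rnorm(length(q))                # resample momentum
    q_new <- q
    p_new <- p - eps*grad_U(q_new)/2     # half step for momentum
    for (i in 1:L) {
        q_new <- q_new + eps*p_new       # full step for position
        if (i < L) p_new <- p_new - eps*grad_U(q_new)
    }
    p_new <- p_new - eps*grad_U(q_new)/2 # final half step for momentum
    # accept or reject on total energy (potential plus kinetic)
    H0 <- U(q) + sum(p^2)/2
    H1 <- U(q_new) + sum(p_new^2)/2
    if (runif(1) < exp(H0 - H1)) q_new else q
}
samples <- matrix(NA, 500, 2)
q <- c(-2, 2)                            # a deliberately poor start
for (s in 1:500) { q <- hmc_step(q); samples[s, ] <- q }

Each proposal runs L leapfrog steps, and every leapfrog step evaluates the gradient of the log density. That gradient work is the "more thought" in Jaynes' principle, traded for proposals that travel far across the posterior and are rarely rejected.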