Copyright Cambridge University Press 2003. On-screen viewing permitted. Printing not permitted. http://www.cambridge.org/0521642981 You can buy this book for 30 pounds or $50. See http://www.inference.phy.cam.ac.uk/mackay/itila/ for links.

374                                                   29 — Monte Carlo Methods

Detailed balance

Many useful transition probabilities satisfy the detailed balance property:

    T(x_a; x_b) P(x_b) = T(x_b; x_a) P(x_a),   for all x_b and x_a.      (29.46)

This equation says that if we pick (by magic) a state from the target density P and make a transition under T to another state, it is just as likely that we will pick x_b and go from x_b to x_a as it is that we will pick x_a and go from x_a to x_b. Markov chains that satisfy detailed balance are also called reversible Markov chains. The reason why the detailed-balance property is of interest is that detailed balance implies invariance of the distribution P(x) under the Markov chain T, which is a necessary condition for the key property that we want from our MCMC simulation – that the probability distribution of the chain should converge to P(x).

⊲ Exercise 29.7. [2] Prove that detailed balance implies invariance of the distribution P(x) under the Markov chain T.

Proving that detailed balance holds is often a key step when proving that a Markov chain Monte Carlo simulation will converge to the desired distribution. The Metropolis method satisfies detailed balance, for example. Detailed balance is not an essential condition, however, and we will see later that irreversible Markov chains can be useful in practice, because they may have different random walk properties.

⊲ Exercise 29.8. [2] Show that, if we concatenate two base transitions B_1 and B_2 that satisfy detailed balance, it is not necessarily the case that the T thus defined (29.45) satisfies detailed balance.

Exercise 29.9.
[2] Does Gibbs sampling, with several variables all updated in a deterministic sequence, satisfy detailed balance?

29.7 Slice sampling

Slice sampling (Neal, 1997a; Neal, 2003) is a Markov chain Monte Carlo method that has similarities to rejection sampling, Gibbs sampling and the Metropolis method. It can be applied wherever the Metropolis method can be applied, that is, to any system for which the target density P*(x) can be evaluated at any point x; it has the advantage over simple Metropolis methods that it is more robust to the choice of parameters like step sizes. The simplest version of slice sampling is similar to Gibbs sampling in that it consists of one-dimensional transitions in the state space; however there is no requirement that the one-dimensional conditional distributions be easy to sample from, nor that they have any convexity properties such as are required for adaptive rejection sampling. And slice sampling is similar to rejection sampling in that it is a method that asymptotically draws samples from the volume under the curve described by P*(x); but there is no requirement for an upper-bounding function.

I will describe slice sampling by giving a sketch of a one-dimensional sampling algorithm, then giving a pictorial description that includes the details that make the method valid.
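Before developing slice sampling further, the detailed-balance discussion above can be made concrete numerically. The sketch below builds the Metropolis transition matrix for a small discrete toy target (the three-state P and the symmetric proposal Q are illustrative assumptions, not from the text) and checks both the detailed-balance property (29.46) and the invariance of P that it implies:

```python
import numpy as np

# Hypothetical 3-state target distribution; any strictly positive P works.
P = np.array([0.2, 0.5, 0.3])

# Symmetric proposal: from state b, propose one of the other two states
# uniformly, so Q[a, b] = Q[b, a].
Q = (np.ones((3, 3)) - np.eye(3)) / 2.0

# Metropolis transition matrix: T[a, b] = probability of moving b -> a,
# i.e. propose a with Q[a, b], accept with min(1, P(a)/P(b)).
T = np.zeros((3, 3))
for b in range(3):
    for a in range(3):
        if a != b:
            T[a, b] = Q[a, b] * min(1.0, P[a] / P[b])
    T[b, b] = 1.0 - T[:, b].sum()   # rejected proposals keep the chain at b

# Detailed balance (29.46): T(x_a; x_b) P(x_b) = T(x_b; x_a) P(x_a),
# i.e. the matrix of probability flows is symmetric.
flow = T * P[np.newaxis, :]          # flow[a, b] = T(a; b) P(b)
assert np.allclose(flow, flow.T)

# Detailed balance implies invariance of P under T: sum_b T(a; b) P(b) = P(a).
assert np.allclose(T @ P, P)
```

Summing each row of the symmetric `flow` matrix is exactly the calculation asked for in exercise 29.7: the columns of T sum to one, so symmetry of the flows forces T P = P.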
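The one-dimensional algorithm just mentioned can be sketched in code. This is a minimal illustration using the stepping-out and shrinkage procedures of Neal (2003), not the book's own presentation; the function name `slice_sample_1d` and the bracket-width parameter `w` are illustrative choices:

```python
import math
import random

def slice_sample_1d(p_star, x, w=1.0, n_steps=100):
    """Sketch of one-dimensional slice sampling (after Neal, 2003).

    p_star: unnormalized target density P*(x), evaluable at any point x.
    x: current state; w: initial bracket width (a step-size guess only --
    the method remains valid for any w, which is its robustness advantage).
    """
    samples = []
    for _ in range(n_steps):
        # 1. Draw an auxiliary height u uniformly under the curve at x,
        #    defining the slice {x' : P*(x') > u}.
        u = random.uniform(0.0, p_star(x))
        # 2. Place a bracket of width w randomly around x and "step out"
        #    until both ends lie outside the slice.
        left = x - w * random.random()
        right = left + w
        while p_star(left) > u:
            left -= w
        while p_star(right) > u:
            right += w
        # 3. Sample uniformly within the bracket, shrinking it toward x on
        #    each rejection; no upper-bounding function is ever needed.
        while True:
            x_new = random.uniform(left, right)
            if p_star(x_new) > u:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        samples.append(x)
    return samples

# Usage: sample from an unnormalized Gaussian centred at 3.
random.seed(0)
xs = slice_sample_1d(lambda t: math.exp(-0.5 * (t - 3.0) ** 2),
                     x=0.0, n_steps=2000)
```

The pictorial description that follows explains why the shrinkage step in part 3 leaves the target distribution invariant.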
