the quantity $\operatorname{var}(E)$ tending to a constant at high temperatures. This $1/T^2$ behaviour of the heat capacity of finite systems at high temperatures is thus very general.

The $1/T^2$ factor can be viewed as an accident of history. If only temperature scales had been defined using $\beta = \frac{1}{k_B T}$, then the definition of heat capacity would be
\[
C^{(\beta)} \equiv -\frac{\partial \bar{E}}{\partial \beta} = \operatorname{var}(E), \tag{31.12}
\]
and heat capacity and fluctuations would be identical quantities.

⊲ Exercise 31.1.[2] [We will call the entropy of a physical system $S$ rather than $H$ while we are in a statistical physics chapter; we set $k_B = 1$.]

The entropy of a system whose states are $x$, at temperature $T = 1/\beta$, is
\[
S = \sum_x p(x) \ln \frac{1}{p(x)}, \tag{31.13}
\]
where
\[
p(x) = \frac{1}{Z(\beta)} \exp[-\beta E(x)]. \tag{31.14}
\]

(a) Show that
\[
S = \ln Z(\beta) + \beta \bar{E}(\beta), \tag{31.15}
\]
where $\bar{E}(\beta)$ is the mean energy of the system.

(b) Show that
\[
S = -\frac{\partial F}{\partial T}, \tag{31.16}
\]
where the free energy $F = -kT \ln Z$ and $kT = 1/\beta$.

31.1 Ising models – Monte Carlo simulation

In this section we study two-dimensional planar Ising models using a simple Gibbs-sampling method. Starting from some initial state, a spin $n$ is selected at random, and the probability that it should be $+1$, given the state of the other spins and the temperature, is computed:
\[
P(+1 \mid b_n) = \frac{1}{1 + \exp(-2\beta b_n)}, \tag{31.17}
\]
where $\beta = 1/k_B T$ and $b_n$ is the local field
\[
b_n = \sum_{m:(m,n)\in\mathcal{N}} J x_m + H. \tag{31.18}
\]
[The factor of 2 appears in equation (31.17) because the two spin states are $\{+1, -1\}$ rather than $\{+1, 0\}$.] Spin $n$ is set to $+1$ with that probability, and otherwise to $-1$; then the next spin to update is selected at random. After sufficiently many iterations, this procedure converges to the equilibrium distribution (31.2). An alternative to the Gibbs sampling formula (31.17) is the Metropolis algorithm, in which we consider the change in energy that results from flipping the chosen spin from its current state $x_n$,
\[
\Delta E = 2 x_n b_n, \tag{31.19}
\]
and adopt this change in configuration with probability
\[
P(\text{accept}; \Delta E, \beta) =
\begin{cases}
1 & \Delta E \le 0 \\
\exp(-\beta \Delta E) & \Delta E > 0.
\end{cases} \tag{31.20}
\]
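The identity asserted in (31.12) can be checked directly from the definitions; the following short working is mine, not part of the original text. With $Z(\beta) = \sum_x \exp[-\beta E(x)]$,
\[
\bar{E} = -\frac{\partial \ln Z}{\partial \beta},
\qquad
\frac{\partial^2 \ln Z}{\partial \beta^2} = \langle E^2 \rangle - \langle E \rangle^2 = \operatorname{var}(E),
\]
so $-\partial \bar{E}/\partial \beta = \operatorname{var}(E)$. The $1/T^2$ factor then reappears on changing variables back to $T$: since $\mathrm{d}\beta/\mathrm{d}T = -1/(k_B T^2)$,
\[
C \equiv \frac{\partial \bar{E}}{\partial T}
= -\frac{\partial \bar{E}}{\partial \beta}\,\frac{1}{k_B T^2}
= \frac{\operatorname{var}(E)}{k_B T^2}.
\]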
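The two update rules (31.17) and (31.20) are straightforward to put into code. The sketch below is a minimal illustration, not the book's own implementation: the function names (local_field, gibbs_update, metropolis_update), the periodic boundary conditions, and the use of numpy are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_field(x, r, c, J=1.0, H=0.0):
    """b_n = J * (sum of the four neighbouring spins) + H, eq. (31.18).
    Periodic boundaries are an assumption; the book's N is the set of
    neighbouring spin pairs."""
    R, C = x.shape
    return J * (x[(r - 1) % R, c] + x[(r + 1) % R, c]
                + x[r, (c - 1) % C] + x[r, (c + 1) % C]) + H

def gibbs_update(x, beta, J=1.0, H=0.0):
    """Pick one spin at random; set it to +1 with probability (31.17)."""
    r, c = rng.integers(x.shape[0]), rng.integers(x.shape[1])
    b = local_field(x, r, c, J, H)
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * b))   # eq. (31.17)
    x[r, c] = +1 if rng.random() < p_plus else -1

def metropolis_update(x, beta, J=1.0, H=0.0):
    """Pick one spin at random; flip it with the acceptance rule (31.20)."""
    r, c = rng.integers(x.shape[0]), rng.integers(x.shape[1])
    dE = 2.0 * x[r, c] * local_field(x, r, c, J, H)  # eq. (31.19)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        x[r, c] = -x[r, c]

# Usage: equilibrate a 32x32 lattice at beta = 0.5, then report magnetization.
x = rng.choice(np.array([-1, 1]), size=(32, 32))
for _ in range(200_000):
    gibbs_update(x, beta=0.5)
print("mean magnetization:", x.mean())
```

Both samplers leave the equilibrium distribution (31.2) invariant; in practice one "sweep" of the lattice means as many single-spin updates as there are spins.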
