
Copyright Cambridge University Press 2003. On-screen viewing permitted. Printing not permitted. http://www.cambridge.org/0521642981
You can buy this book for 30 pounds or $50. See http://www.inference.phy.cam.ac.uk/mackay/itila/ for links.

402   31 — Ising Models

the quantity var(E) tending to a constant at high temperatures. This 1/T² behaviour of the heat capacity of finite systems at high temperatures is thus very general.

The 1/T² factor can be viewed as an accident of history. If only temperature scales had been defined using β = 1/(k_B T), then the definition of heat capacity would be

    C^(β) ≡ −∂Ē/∂β = var(E),                        (31.12)

and heat capacity and fluctuations would be identical quantities.
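The connection between the two definitions of heat capacity can be made explicit by differentiating ln Z; a short sketch of the standard manipulation:

```latex
\bar{E} = -\frac{\partial \ln Z}{\partial \beta},
\qquad
-\frac{\partial \bar{E}}{\partial \beta}
  = \frac{\partial^{2} \ln Z}{\partial \beta^{2}}
  = \langle E^{2} \rangle - \langle E \rangle^{2}
  = \operatorname{var}(E),
```

so that, using dβ/dT = −1/(k_B T²),

```latex
C \equiv \frac{\partial \bar{E}}{\partial T}
  = \frac{\partial \bar{E}}{\partial \beta}\,\frac{d\beta}{dT}
  = \frac{\operatorname{var}(E)}{k_{B} T^{2}},
```

which is the 1/T² factor discussed above.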

⊲ Exercise 31.1.[2] [We will call the entropy of a physical system S rather than H, while we are in a statistical physics chapter; we set k_B = 1.]

The entropy of a system whose states are x, at temperature T = 1/β, is

    S = Σ_x p(x) ln[1/p(x)],                        (31.13)

where

    p(x) = (1/Z(β)) exp[−βE(x)].                    (31.14)

(a) Show that

    S = ln Z(β) + βĒ(β),                            (31.15)

where Ē(β) is the mean energy of the system.

(b) Show that

    S = −∂F/∂T,                                     (31.16)

where the free energy F = −kT ln Z and kT = 1/β.
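One route through the algebra, using (31.13)–(31.14) for part (a) and F = −kT ln Z for part (b), might run as follows (an outline, not the only derivation):

```latex
\ln\frac{1}{p(x)} = \ln Z(\beta) + \beta E(x)
\;\Longrightarrow\;
S = \sum_x p(x)\bigl[\ln Z(\beta) + \beta E(x)\bigr]
  = \ln Z(\beta) + \beta\bar{E}(\beta),
```

since Σ_x p(x) = 1 and Σ_x p(x)E(x) = Ē. For part (b), differentiating F = −kT ln Z with respect to T and using ∂ ln Z/∂T = Ē/(kT²),

```latex
-\frac{\partial F}{\partial T}
  = k\ln Z + kT\,\frac{\partial \ln Z}{\partial T}
  = k\ln Z + \frac{\bar{E}}{T}
  = k\bigl(\ln Z + \beta\bar{E}\bigr) = kS,
```

which reduces to S with k = 1.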

31.1 Ising models – Monte Carlo simulation<br />

In this section we study two-dimensional planar Ising models using a simple Gibbs-sampling method. Starting from some initial state, a spin n is selected at random, and the probability that it should be +1 given the state of the other spins and the temperature is computed,

    P(+1 | b_n) = 1 / (1 + exp(−2βb_n)),            (31.17)

where β = 1/k_B T and b_n is the local field

    b_n = Σ_{m:(m,n)∈N} J x_m + H.                  (31.18)

[The factor of 2 appears in equation (31.17) because the two spin states are {+1, −1} rather than {+1, 0}.] Spin n is set to +1 with that probability, and otherwise to −1; then the next spin to update is selected at random. After sufficiently many iterations, this procedure converges to the equilibrium distribution (31.2). An alternative to the Gibbs sampling formula (31.17) is the Metropolis algorithm, in which we consider the change in energy that results from flipping the chosen spin from its current state x_n,

    ∆E = 2x_n b_n,                                  (31.19)

and adopt this change in configuration with probability

    P(accept; ∆E, β) = { 1,           ∆E ≤ 0
                         exp(−β∆E),   ∆E > 0.       (31.20)
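The update rules (31.17)–(31.20) can be sketched in a few lines of Python. This is a minimal illustration, not the book's own code: it assumes a square L×L lattice with four nearest neighbours and periodic boundary conditions, a uniform coupling J, and an applied field H; the function names are ours.

```python
import math
import random

def local_field(spins, i, j, J=1.0, H=0.0):
    """Local field b_n of equation (31.18): the sum of J*x_m over the four
    nearest neighbours of site (i, j), plus the applied field H.
    Periodic boundaries are an assumption of this sketch."""
    L = len(spins)
    neighbours = [((i + 1) % L, j), ((i - 1) % L, j),
                  (i, (j + 1) % L), (i, (j - 1) % L)]
    return sum(J * spins[a][b] for a, b in neighbours) + H

def gibbs_update(spins, i, j, beta, J=1.0, H=0.0):
    """Gibbs sampling, equation (31.17): set spin (i, j) to +1 with
    probability 1/(1 + exp(-2*beta*b_n)), otherwise to -1."""
    b = local_field(spins, i, j, J, H)
    p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * b))
    spins[i][j] = +1 if random.random() < p_up else -1

def metropolis_update(spins, i, j, beta, J=1.0, H=0.0):
    """Metropolis, equations (31.19)-(31.20): flip spin (i, j) with
    probability 1 if dE <= 0, and exp(-beta*dE) otherwise."""
    dE = 2.0 * spins[i][j] * local_field(spins, i, j, J, H)
    if dE <= 0.0 or random.random() < math.exp(-beta * dE):
        spins[i][j] = -spins[i][j]

def simulate(L=8, beta=0.5, sweeps=100, update=gibbs_update,
             J=1.0, H=0.0, seed=0):
    """Run `sweeps` sweeps of L*L single-spin updates, each applied to a
    spin selected at random, starting from a random configuration."""
    random.seed(seed)
    spins = [[random.choice([-1, +1]) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = random.randrange(L), random.randrange(L)
        update(spins, i, j, beta, J, H)
    return spins
```

Swapping `update=gibbs_update` for `update=metropolis_update` changes only the single-spin rule; both chains converge to the same equilibrium distribution. At low temperature (large β) with a positive field H, the sample magnetizes in the +1 direction, as expected.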
