

sequence with $[hn]$ zeros and ones, then we have $2^{[hn]}$ possible words to use. One should then have
$$\binom{n}{[ns]} \le 2^{[hn]}.$$

Using Stirling's formula (or directly applying (1.1)) one obtains that the minimal possible $h$ is
$$h(s) = -s \log_2 s - (1-s) \log_2 (1-s) = 1 - \frac{I_{1/2}(s)}{\log 2}\,,$$
with $I_{1/2}$ given in (1.1).

Note that $h(0) = h(1) = 0$. This makes sense, since if we know $s = 0$ (respectively, $s = 1$) then we know we will only get zeros (respectively, ones). Thus, there is nothing to encode. This is the case of complete order. On the other hand, $h(1/2) = 1$. This too makes sense, since there is no information one can extract from a sequence of fair coin tosses and one needs all $n$ bits to encode the sequence. This is the case of complete disorder. For $s \in (0, 1/2)$, one knows that a 1 is less likely to occur than a 0, and hence one should be able to conserve and use fewer than $n$ bits to encode the sequence. However, the above formula says that one cannot do better than $h(s)n$. This, of course, does not tell us what the best encoding algorithm is.
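
To see the bound numerically, here is a minimal Python sketch (the code is ours, not from the text; the names n, s, and h mirror the discussion above) comparing the exact count $\log_2 \binom{n}{[ns]}$ with $n\,h(s)$:

```python
from math import comb, log2

def h(s):
    """Binary entropy: h(s) = -s log2 s - (1-s) log2(1-s), with h(0) = h(1) = 0."""
    if s in (0.0, 1.0):
        return 0.0
    return -s * log2(s) - (1 - s) * log2(1 - s)

n = 1000
for s in (0.0, 0.1, 0.25, 0.5):
    k = int(n * s)                  # [ns] ones among the n bits
    exact = log2(comb(n, k))        # bits needed to index all such sequences
    print(f"s = {s:4}:  log2 C(n,[ns]) = {exact:7.1f},   n h(s) = {n * h(s):7.1f}")
```

In each row the exact count stays at or just below $n h(s)$, and the gap per bit vanishes as $n$ grows, as Stirling's formula predicts.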

1.2. Thermodynamic entropy<br />

Once again, for the sake of illustration, we will describe an oversimplified system. This section is inspired by Schrödinger's course [34].

Consider a physical system of $n$ independent identical components. By independent we mean that the components do not communicate with each other. By identical we mean that each of them has the same "mechanism" attached to it, screws, pistons, and what not. Each component can be at an energy level from the set $\{\varepsilon_\ell : \ell \in \mathbb{N}\}$. We submit the system to a heat bath at a fixed absolute temperature $T$, which causes it to have total energy $E$. Let $a_\ell$ be the number of components in state $\varepsilon_\ell$. The system tries to maximize its disorder by choosing the $a_\ell$'s so that the number of possible configurations $\frac{n!}{a_1! \cdots a_\ell! \cdots}$ is as large as possible, subject to the constraints $\sum_\ell a_\ell = n$ and $\sum_\ell a_\ell \varepsilon_\ell = E$.
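
Before doing any calculus, the maximization can be seen concretely by brute force. The following Python sketch (our illustration; the three energy levels and the values $n = 10$, $E = 8$ are made up) enumerates all admissible occupation numbers and reports the one with the most configurations:

```python
from math import factorial
from itertools import product

levels = [0, 1, 2]           # hypothetical energy levels eps_1, eps_2, eps_3
n, E = 10, 8                 # number of components and total energy

def configurations(a):
    """Multinomial coefficient n! / (a_1! a_2! ...): ways to realize counts a."""
    count = factorial(n)
    for a_l in a:
        count //= factorial(a_l)
    return count

admissible = [
    a for a in product(range(n + 1), repeat=len(levels))
    if sum(a) == n and sum(a_l * e for a_l, e in zip(a, levels)) == E
]
best = max(admissible, key=configurations)
print(best, configurations(best))    # -> (4, 4, 2) 3150
```

The maximizer $(4, 4, 2)$ already places fewer components in higher energy levels, anticipating the exponential weights that the Lagrange computation below produces.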

Equivalently, one can maximize the logarithm of the quantity in question and use Lagrange multipliers to achieve this optimization task; see page 266 of Bartle's textbook [3]. First, one sets the gradient of
$$\log \frac{n!}{a_1! \cdots a_\ell! \cdots} - \alpha \sum_\ell a_\ell - \beta \sum_\ell a_\ell \varepsilon_\ell$$
equal to zero.
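
Treating the $a_\ell$ as continuous and using Stirling's approximation $\log a! \approx a \log a - a$, the gradient condition reads $-\log a_\ell - \alpha - \beta \varepsilon_\ell = 0$, i.e. $a_\ell \propto e^{-\beta \varepsilon_\ell}$, with $\alpha$ and $\beta$ fixed by the two constraints. As a sketch of how $\beta$ can be pinned down numerically (reusing the hypothetical levels and $E/n = 0.8$ from the brute-force example; the bisection is our choice of method):

```python
from math import exp

levels = [0, 1, 2]
mean_energy = 0.8            # E / n from the example above

def gibbs(beta):
    """Weights proportional to exp(-beta * eps_l), normalized to sum to 1."""
    w = [exp(-beta * e) for e in levels]
    z = sum(w)
    return [x / z for x in w]

def mean(beta):
    return sum(p * e for p, e in zip(gibbs(beta), levels))

# mean(beta) decreases from 1.0 at beta = 0 toward 0, so bisect for the root.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mean(mid) > mean_energy else (lo, mid)

beta = (lo + hi) / 2
print(beta, gibbs(beta))     # beta ~ 0.30, weights ~ (0.44, 0.32, 0.24)
```

The resulting weights roughly match the exact small-$n$ maximizer $(0.4, 0.4, 0.2)$ found above, and the agreement sharpens as $n$ grows.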
