A Course on Large Deviations with an Introduction to Gibbs Measures.

1. Introduction

Example 1.1. Let us consider coin tosses. Let $\{X_n\}$ be an i.i.d. sequence of Bernoulli random variables with success probability $p$ (i.e. each $X_n = 1$ with probability $p$ and $0$ otherwise). Denote the partial sum by $S_n = X_1 + \cdots + X_n$. The law of large numbers ([15] or page 73 of [26]) says that $S_n/n$ converges to $p$ almost surely. But at any given $n$ there is a chance of $p^n$ that we get all heads ($S_n = n$) and also a chance of $(1-p)^n$ that we get all tails ($S_n = 0$). In fact, for any $s \in (0,1)$ there is always a chance that one gets a fraction of heads close to $s$. Let us compute this probability.

Let us write $[x]$ for the integral part of $x \in \mathbb{R}$, i.e. the largest integer less than or equal to $x$. Write

\[
P\{S_n = [ns]\} = \frac{n!}{[ns]!\,(n-[ns])!}\, p^{[ns]} (1-p)^{n-[ns]}
\sim \frac{n^n}{[ns]^{[ns]}\,(n-[ns])^{n-[ns]}}\, p^{[ns]} (1-p)^{n-[ns]} \sqrt{\frac{n}{2\pi [ns](n-[ns])}}\,,
\]

where we have used Stirling's formula $n! \sim e^{-n} n^n \sqrt{2\pi n}$; see, for example, page 21 of Khoshnevisan's textbook [26] or page 52 of Feller's Vol. I [17]. (We say that $a_n \sim b_n$, or $a_n$ is equivalent to $b_n$, when $a_n/b_n \to 1$.) Let us abbreviate
\[
\beta_n = \sqrt{\frac{n}{2\pi [ns](n-[ns])}}\,, \qquad
\gamma_n = \frac{(ns)^{ns}\,(n-ns)^{n-ns}}{[ns]^{[ns]}\,(n-[ns])^{n-[ns]}} \cdot \frac{p^{[ns]} (1-p)^{n-[ns]}}{p^{ns} (1-p)^{n-ns}}\,.
\]

Then $P\{S_n = [ns]\}$ is equivalent to
\[
\beta_n \gamma_n \exp\bigl\{ n \log n - ns \log(ns) - n(1-s) \log\bigl(n(1-s)\bigr) + ns \log p + n(1-s) \log(1-p) \bigr\}.
\]
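The asymptotic equivalence above is easy to test numerically. The sketch below (not part of the text; the function names are ours) compares the exact binomial probability with the Stirling-based expression, both in log scale to avoid underflow; the log-difference should tend to $0$ as $n$ grows.

```python
import math

def log_exact_prob(n, s, p):
    # log P{S_n = [ns]} for S_n ~ Binomial(n, p), computed exactly
    k = math.floor(n * s)
    return math.log(math.comb(n, k)) + k * math.log(p) + (n - k) * math.log(1 - p)

def log_stirling_approx(n, s, p):
    # log of: n^n / ([ns]^[ns] (n-[ns])^(n-[ns])) * p^[ns] (1-p)^(n-[ns])
    #         * sqrt(n / (2 pi [ns] (n-[ns])))
    k = math.floor(n * s)
    return (n * math.log(n) - k * math.log(k) - (n - k) * math.log(n - k)
            + k * math.log(p) + (n - k) * math.log(1 - p)
            + 0.5 * math.log(n / (2 * math.pi * k * (n - k))))

# a_n ~ b_n means a_n/b_n -> 1, i.e. the log-difference tends to 0
for n in (100, 1000, 10000):
    print(n, log_exact_prob(n, 0.3, 0.5) - log_stirling_approx(n, 0.3, 0.5))
```

The discrepancy comes only from the error terms in Stirling's formula (of order $1/(12n)$ for each factorial), so it shrinks quickly.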

∗Exercise 1.2. Show that there exists a constant $C$ such that $\frac{1}{C\sqrt{n}} \le \beta_n \le C$ and $\frac{1}{Cn} \le \gamma_n \le Cn$ for large enough $n$.

One then has
\[
\lim_{n\to\infty} \frac{1}{n} \log P\{S_n = [ns]\} = -I_p(s), \quad \text{with} \quad
I_p(s) = s \log \frac{s}{p} + (1-s) \log \frac{1-s}{1-p}. \tag{1.1}
\]
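The limit (1.1) can also be observed numerically. A small sketch (ours, not the text's): compute $-\frac{1}{n}\log P\{S_n=[ns]\}$ exactly and watch it approach $I_p(s)$; the gap is of order $\frac{\log n}{n}$ because of the polynomial prefactor $\beta_n\gamma_n$.

```python
import math

def rate_Ip(s, p):
    # I_p(s) = s log(s/p) + (1-s) log((1-s)/(1-p))
    return s * math.log(s / p) + (1 - s) * math.log((1 - s) / (1 - p))

def neg_log_prob_over_n(n, s, p):
    # -(1/n) log P{S_n = [ns]}, from the exact binomial probability
    k = math.floor(n * s)
    logp = math.log(math.comb(n, k)) + k * math.log(p) + (n - k) * math.log(1 - p)
    return -logp / n

# convergence to the rate function as n grows
for n in (100, 1000, 10000):
    print(n, neg_log_prob_over_n(n, 0.4, 0.25), rate_Ip(0.4, 0.25))
```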

The function $I_p$ is continuous on $(0,1)$ and its limits at $0$ and $1$ are exactly what we predicted earlier: $I_p(1) = \log \frac{1}{p}$ and $I_p(0) = \log \frac{1}{1-p}$. For $s \notin [0,1]$ it is natural to set $I_p(s) = \infty$. Figure 1.1 shows what this function looks like.
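The endpoint values can be checked by evaluating $I_p$ just inside $(0,1)$, since the boundary terms involve the limit $s\log s \to 0$. A quick check (our sketch):

```python
import math

def rate_Ip(s, p):
    # I_p(s) = s log(s/p) + (1-s) log((1-s)/(1-p)), defined for s in (0,1)
    return s * math.log(s / p) + (1 - s) * math.log((1 - s) / (1 - p))

p = 0.3
# limits at the endpoints match log(1/p) and log(1/(1-p))
print(rate_Ip(1 - 1e-12, p), math.log(1 / p))
print(rate_Ip(1e-12, p), math.log(1 / (1 - p)))
```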

The function $I_p$ in (1.1) is called a large deviation rate function. $I_p(s)$ is also called the entropy of the coin yielding heads with probability $s$ relative
