7.5. TWO ZERO-ONE LAWS
Corollary 7.23 For each $d \ge 1$ there exists $p_c \in [0, 1]$ such that
\[
P_p(E_\infty) =
\begin{cases}
0, & \text{if } p < p_c,\\
1, & \text{if } p > p_c.
\end{cases}
\tag{7.43}
\]
At $p = p_c$ we have $P_{p_c}(E_\infty) \in \{0, 1\}$.
Proof. We have already shown that $E_\infty \in \mathcal{T}$, and so $P_p(E_\infty) \in \{0, 1\}$ for all $p$. It is also clear that the larger $p$ is, the more open edges there are, and so $p \mapsto P_p(E_\infty)$ should be increasing. However, proving this requires a bit of work.
The key idea is that we can actually realize percolation for all $p$'s on the same probability space. Consider i.i.d. random variables $U = (U_b)_{b \in B}$ whose law is uniform on $[0, 1]$ and define
\[
\eta_b^{(p)} = 1_{\{U_b \le p\}}. \tag{7.44}
\]
Clearly, $\eta^{(p)} = (\eta_b^{(p)})$ are i.i.d. Bernoulli($p$), so they realize a sample of percolation at parameter $p$. Now let $E_\infty(p) = \{U \colon \eta^{(p)} \text{ contains an infinite connected component}\}$.
Since $p \mapsto \eta_b^{(p)}$ is increasing, if $E_\infty(p)$ occurs, then so does $E_\infty(p')$ for $p' > p$. In other words,
\[
p' > p \;\Rightarrow\; E_\infty(p) \subset E_\infty(p'). \tag{7.45}
\]
This implies
\[
P_p(E_\infty) = P\bigl(E_\infty(p)\bigr) \le P\bigl(E_\infty(p')\bigr) = P_{p'}(E_\infty), \tag{7.46}
\]
i.e., $p \mapsto P_p(E_\infty)$ is non-decreasing. Since it takes only the values zero and one, there exists a unique point where $P_p(E_\infty)$ jumps from zero to one. This defines $p_c$.
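The coupling in the proof is easy to simulate. The sketch below (Python; the finite stand-in for the edge set and all names are ours, chosen for illustration) draws one family of uniforms and thresholds it at two parameters, so the monotone coupling of (7.45) appears as set containment.

```python
import random

# One family of uniforms U_b drives percolation at EVERY parameter p.
# A finite index set stands in for the edge set B of Z^d.
random.seed(0)
B = range(1000)                      # hypothetical finite edge set
U = {b: random.random() for b in B}  # U_b ~ Uniform[0, 1], i.i.d.

def eta(p):
    """Set of open edges: b is open iff U_b <= p (eta_b^{(p)} = 1)."""
    return {b for b in B if U[b] <= p}

# Monotonicity: p < p' implies eta(p) is contained in eta(p'),
# which is exactly why E_infty(p) implies E_infty(p').
assert eta(0.3) <= eta(0.6)
```

Because the same uniforms are reused at every parameter, increasing $p$ can only open edges, never close them; that is the entire content of the coupling.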
Let us remark on what really happens: it is known that
\[
p_c
\begin{cases}
= 1, & d = 1,\\
\in (0, 1), & d \ge 2.
\end{cases}
\tag{7.47}
\]
At $d = 2$ it is known that $p_c = 1/2$ (Kesten's Theorem), but the values for $d \ge 3$ are not known explicitly (and presumably are not of any special form). Concerning $P_{p_c}(E_\infty)$, it is widely believed that this probability is zero, i.e., that there is no infinite cluster at the critical point, but the proof exists only in $d = 2$ and $d \ge 19$.
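Although the infinite-cluster event itself cannot be sampled, the sharp change at $p_c = 1/2$ on $\mathbb{Z}^2$ can be glimpsed in finite volume. A standard finite-size proxy is the left-right open crossing of an $n \times n$ box; the sketch below (Python, with our own helper names and modest box and trial sizes) estimates the crossing frequency below and above $1/2$.

```python
import random
from collections import deque

def crosses(n, p, rng):
    """Is there a left-right open path in bond percolation on an n x n box?

    Finite-volume proxy only; the infinite-cluster event lives on all of Z^2.
    """
    # i.i.d. open/closed bonds at parameter p
    h = [[rng.random() < p for _ in range(n - 1)] for _ in range(n)]  # (x,y)-(x+1,y)
    v = [[rng.random() < p for _ in range(n)] for _ in range(n - 1)]  # (x,y)-(x,y+1)
    seen = {(0, y) for y in range(n)}   # seed BFS from the whole left column
    queue = deque(seen)
    while queue:
        x, y = queue.popleft()
        if x == n - 1:                  # reached the right side
            return True
        nbrs = []
        if x + 1 < n and h[y][x]:
            nbrs.append((x + 1, y))
        if x > 0 and h[y][x - 1]:
            nbrs.append((x - 1, y))
        if y + 1 < n and v[y][x]:
            nbrs.append((x, y + 1))
        if y > 0 and v[y - 1][x]:
            nbrs.append((x, y - 1))
        for s in nbrs:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return False

rng = random.Random(1)
freq = {p: sum(crosses(30, p, rng) for _ in range(200)) / 200
        for p in (0.3, 0.5, 0.7)}
print(freq)   # crossing frequency: near 0 below 1/2, near 1 above it
```

Even at this modest box size the crossing frequency is essentially $0$ at $p = 0.3$ and essentially $1$ at $p = 0.7$, a finite-volume shadow of the zero-one dichotomy in (7.43).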
Our next object of interest is the random variable
\[
N = \text{number of infinite connected components in } \eta. \tag{7.48}
\]
The values $N$ can take lie in $\{0, 1, \dots\} \cup \{\infty\}$. The random variable $N$ genuinely depends on finite sets of edges (for instance, closing a single edge can split one infinite cluster into two), and so $N$ is not $\mathcal{T}$-measurable. (The reason we care is that if it were tail measurable, it would have to be constant a.s.) However, $N$ is clearly translation invariant in the following sense:
Definition 7.24 Let $\tau_x \colon \{0,1\}^B \to \{0,1\}^B$ be the map defined by
\[
(\tau_x \eta)_b = \eta_{b+x}. \tag{7.49}
\]
We call $\tau_x$ the translation by $x$. An event $A$ is translation invariant if $\tau_x^{-1}(A) = A$ for all $x \in \mathbb{Z}^d$. A random variable $N$ is translation invariant if $N \circ \tau_x = N$ for all $x \in \mathbb{Z}^d$.
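The $N$ of (7.48) counts infinite clusters on all of $\mathbb{Z}^d$ and cannot be computed from any finite window, but its finite-volume analogue, the number of open-bond clusters in a box, is easy to compute. The sketch below (Python; a union-find over the sites of an $n \times n$ box, all names ours) illustrates this.

```python
import random

def count_components(n, p, rng):
    """Number of open-bond clusters among the n*n sites of a box.

    Finite analogue only: the N of (7.48) counts the *infinite*
    clusters of eta on Z^d, which no finite box can detect.
    """
    parent = list(range(n * n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for y in range(n):
        for x in range(n):
            s = y * n + x
            if x + 1 < n and rng.random() < p:   # open bond to the right
                union(s, s + 1)
            if y + 1 < n and rng.random() < p:   # open bond downward
                union(s, s + n)
    return len({find(i) for i in range(n * n)})

print(count_components(20, 0.5, random.Random(0)))
```

At $p = 0$ every site is its own cluster and at $p = 1$ the whole box is one cluster; in between the count fluctuates with the sample, which is precisely why tail measurability fails for the edge-dependent quantity $N$.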