arbres aléatoires, conditionnement et cartes planaires - DMA - Ens
As $v$ is continuous, it follows that $P(Y^n_\delta = 0) \to \exp(-v(\delta))$ as $n \to \infty$, implying (2.3.13).

We recall that the law of $Y^n$ under the probability measure $P(\cdot \mid X^{\eta_n}_0 = n)$ converges to the law of $(L_t, t \ge 0)$. Then, thanks to (2.3.13), we can apply Theorem 2.2.3 to get that, for every $a > 0$, the law of the $\mathbb{R}$-tree $m_n^{-1} \mathcal{T}_\theta$ under $\Pi_{\mu_n}(\cdot \mid H(\theta) \ge [a m_n])$ converges to the probability measure $\Theta_\psi(\cdot \mid H(\mathcal{T}) > a)$ in the sense of weak convergence of measures in the space $\mathbb{T}$. As $m_n^{-1} \eta_n \to 1$ as $n \to \infty$, we get the desired result. □
We can now complete the proof of Theorem 2.1.1. Indeed, thanks to Lemmas 2.2.2 and 2.3.3, we can construct on the same probability space $(\Omega, P)$ a sequence of $\mathbb{T}$-valued random variables $(\mathcal{T}_n)_{n \ge 1}$ distributed according to $\Theta(\cdot \mid H(\mathcal{T}) > ([a m_n] + 1)\eta_n)$ and a sequence of $\mathbb{A}$-valued random variables $(\theta_n)_{n \ge 1}$ distributed according to $\Pi_{\mu_n}(\cdot \mid H(\theta) \ge [a m_n])$ such that for every $n \ge 1$, $P$-a.s.,
$$d_{GH}\big(\mathcal{T}_n, \eta_n \mathcal{T}_{\theta_n}\big) \le 4 \eta_n.$$
Then, using Lemma 2.3.13, we have $\Theta(\cdot \mid H(\mathcal{T}) > ([a m_n] + 1)\eta_n) \to \Theta_\psi(\cdot \mid H(\mathcal{T}) > a)$ as $n \to \infty$ in the sense of weak convergence of measures on the space $\mathbb{T}$. So we get
$$\Theta(\cdot \mid H(\mathcal{T}) > a) = \Theta_\psi(\cdot \mid H(\mathcal{T}) > a)$$
for every $a > 0$, and thus $\Theta = \Theta_\psi$.
2.4. Proof of Theorem 2.1.2
Let $\Theta$ be a probability measure on $(\mathbb{T}, d_{GH})$ satisfying the assumptions of Theorem 2.1.2. In this case, we define $v : [0, \infty) \longrightarrow (0, \infty)$ by $v(t) = \Theta(H(\mathcal{T}) > t)$ for every $t \ge 0$. Note that $v(0) = 1$, so $v$ is well defined at $t = 0$ here. For every $t > 0$, we denote by $\Theta_t$ the probability measure $\Theta(\cdot \mid H(\mathcal{T}) > t)$. The following two results are proved in a similar way to Lemma 2.3.1 and Lemma 2.3.2.
Lemma 2.4.1. The function $v$ is nonincreasing, continuous, and tends to $0$ as $t \to \infty$.
Lemma 2.4.2. For every $t > 0$ and $0 < a < b$, the conditional law of the random variable $Z(t, t+b)$, under the probability measure $\Theta_t$ and given $Z(t, t+a)$, is a binomial distribution with parameters $Z(t, t+a)$ and $v(b)/v(a)$.
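The binomial law in Lemma 2.4.2 can be read as a thinning: each of the $Z(t, t+a)$ subtrees above level $t+a$ reaches level $t+b$ independently with probability $v(b)/v(a)$. A minimal simulation sketch of this mechanism (the numerical values of $v(a)$ and $v(b)$ below are hypothetical placeholders, not the $v$ derived from $\Theta$):

```python
import random

def thin(z_a: int, p: float, rng: random.Random) -> int:
    """Keep each of z_a individuals independently with probability p.

    The result is Binomial(z_a, p), matching the conditional law of
    Z(t, t+b) given Z(t, t+a) = z_a, with p = v(b)/v(a).
    """
    return sum(1 for _ in range(z_a) if rng.random() < p)

# Hypothetical placeholder values for v(a) and v(b), a < b (v nonincreasing).
v_a, v_b = 0.8, 0.5
p = v_b / v_a                        # survival probability v(b)/v(a)

rng = random.Random(0)
samples = [thin(10, p, rng) for _ in range(10_000)]
mean = sum(samples) / len(samples)   # empirical mean, close to 10 * p
```

The empirical mean of the thinned counts concentrates around $z_a \cdot v(b)/v(a)$, consistent with the binomial parameters in the lemma.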
2.4.1. The DSBP derived from Θ. We will follow the same strategy as in Section 2.3, but instead of a CSBP we will now construct an integer-valued branching process.
2.4.1.1. A family of Galton-Watson trees. We recall that $\mu_\varepsilon$ denotes the law of $Z(\varepsilon, 2\varepsilon)$ under the probability measure $\Theta_\varepsilon$, and that $(\theta_\xi, \xi \in \mathbb{A})$ is a sequence of independent $\mathbb{A}$-valued random variables defined on a probability space $(\Omega', P')$ such that for every $\xi \in \mathbb{A}$, $\theta_\xi$ is distributed uniformly over the preimage of $\xi$. The following lemma is proved in the same way as Lemma 2.3.3.
Lemma 2.4.3. Let us define, for every $\varepsilon > 0$, a mapping $\theta^{(\varepsilon)}$ from $\mathbb{T}^{(\varepsilon)} \times \Omega'$ into $\mathbb{A}$ by
$$\theta^{(\varepsilon)}(\mathcal{T}, \omega) = \theta_{\xi_\varepsilon(\mathcal{T})}(\omega).$$
Then for every positive integer $p$, the law of the random variable $\theta^{(\varepsilon)}$ under the probability measure $\Theta_{p\varepsilon} \otimes P'$ is $\Pi_{\mu_\varepsilon}(\cdot \mid H(\theta) \ge p-1)$.
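Lemma 2.4.3 rests on a two-stage sampling scheme: first draw the skeleton $\xi_\varepsilon(\mathcal{T})$ of a conditioned tree, then draw a uniform element of its preimage. A toy finite version of the scheme (the sets, projection, and stage-one weights below are entirely hypothetical, chosen only to exhibit the mechanism):

```python
import random
from collections import Counter

# Hypothetical finite stand-ins: objects in A, and a projection onto skeletons.
A = ["a1", "a2", "b1", "b2", "b3"]
proj = {"a1": "a", "a2": "a", "b1": "b", "b2": "b", "b3": "b"}

# Group A by fiber: fibers[xi] is the preimage of xi under proj.
fibers: dict[str, list[str]] = {}
for x in A:
    fibers.setdefault(proj[x], []).append(x)

def two_stage(rng: random.Random) -> str:
    # Stage 1: sample a skeleton (hypothetical weights: 'a' w.p. 0.4, 'b' w.p. 0.6).
    xi = "a" if rng.random() < 0.4 else "b"
    # Stage 2: sample uniformly over the fiber proj^{-1}(xi).
    return rng.choice(fibers[xi])

rng = random.Random(1)
counts = Counter(two_stage(rng) for _ in range(50_000))
# Law of the output on A: P(x) = P(proj(x)) / |fiber of proj(x)|,
# e.g. P("a1") = 0.4 / 2 = 0.2 and P("b1") = 0.6 / 3 = 0.2.
```

The point is that the law of the two-stage sample on the whole space is determined by the law of the skeleton together with the uniform choice in each fiber, which is exactly how the mixed law $\Theta_{p\varepsilon} \otimes P'$ is computed in the lemma.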
For every $\varepsilon > 0$, we define a process $X^\varepsilon = (X^\varepsilon_k, k \ge 0)$ on $\mathbb{T}$ by the formula
$$X^\varepsilon_k = Z(k\varepsilon, (k+1)\varepsilon), \qquad k \ge 0.$$
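The process $X^\varepsilon$ records the population of the tree at the successive levels $k\varepsilon$, and the section treats it as an integer-valued branching process. A minimal simulation sketch of such a process, with a hypothetical critical offspring law standing in for $\mu_\varepsilon$ (which is derived from $\Theta$, not specified here):

```python
import random

def branching_path(offspring, x0: int, steps: int,
                   rng: random.Random) -> list[int]:
    """Simulate (X_k, 0 <= k <= steps): each of the X_k individuals at
    generation k branches independently with the given offspring law."""
    path = [x0]
    for _ in range(steps):
        path.append(sum(offspring(rng) for _ in range(path[-1])))
    return path

# Hypothetical critical offspring law: 0 or 2 children, each w.p. 1/2.
offspring = lambda rng: 2 * (rng.random() < 0.5)

path = branching_path(offspring, x0=5, steps=20, rng=random.Random(3))
```

Note that $0$ is absorbing: once the population dies out at some level, it stays extinct, mirroring the fact that $Z(k\varepsilon, (k+1)\varepsilon) = 0$ forces all later level counts to vanish.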
We show the following two results in the same way as Propositions 2.3.4 and 2.3.5.