

Theorem 3.3.3 (Theorem 5.5.3 of [25], Lemma 17 of [28]) For an aperiodic and irreducible Markov chain {x_t}, every petite set is small.

A maximal irreducibility measure ψ is an irreducibility measure such that for all other irreducibility measures φ, we have ψ(B) = 0 ⇒ φ(B) = 0 for any B ∈ B(X). Define B^+(X) = {A ∈ B(X) : ψ(A) > 0}, where ψ is a maximal irreducibility measure.

Theorem 3.3.4 For an aperiodic and irreducible Markov chain, every petite set is petite with respect to a maximal irreducibility measure and a sampling distribution with finite mean.

Nummelin’s Splitting Technique<br />

The results on recurrence apply to uncountable chains with no atom, provided there is a small set or a petite set. Suppose a set A is 1-small, so that P(x, B) ≥ ν(B) for all x ∈ A and all B ∈ B(X), for some non-trivial positive measure ν. Define a process z_t = (x_t, a_t), z_t ∈ X × {0, 1}; that is, we enlarge the probability space. Suppose that when x_t ∉ A, (a_t, x_t) evolve independently from each other. However, when x_t ∈ A, we pick a Bernoulli random variable a_t: with probability δ the state visits A × {1}, and with probability 1 − δ it visits A × {0}.

From A × {1}, the transition to the next time stage is given by ν(dx_{t+1})/δ, and from A × {0}, it visits the future time stage with probability

(P(dx_{t+1} | x_t) − ν(dx_{t+1})) / (1 − δ).

Now, pick δ = ν(X). In this case, A × {1} is an accessible atom, and one can verify that the marginal distribution of the original Markov process {x_t} has not been altered.
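To make the construction concrete, here is a minimal simulation sketch of the split chain, assuming a hypothetical three-state kernel P and taking A = {0, 1} with the minorizing measure ν(·) = min_{x ∈ A} P(x, ·); none of these choices come from the notes. The x-marginal of the split chain should behave exactly like the original chain.

```python
# Sketch of Nummelin's splitting on a hypothetical 3-state chain.
import numpy as np

rng = np.random.default_rng(0)

# Original transition kernel P on X = {0, 1, 2} (illustrative choice).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

A = [0, 1]                        # candidate 1-small set
nu = P[A].min(axis=0)             # nu(B) <= P(x, B) for every x in A
delta = nu.sum()                  # delta = nu(X)

def split_step(x):
    """One step of the split chain z_t = (x_t, a_t)."""
    if x in A:
        if rng.random() < delta:  # enter the atom A x {1}
            return rng.choice(3, p=nu / delta), 1
        # stay in A x {0}: residual kernel (P - nu) / (1 - delta)
        return rng.choice(3, p=(P[x] - nu) / (1 - delta)), 0
    # outside A the x-component moves exactly as before
    return rng.choice(3, p=P[x]), 0

# Simulate and compare the x-marginal with the stationary law of P.
T, x = 200_000, 0
counts = np.zeros(3)
for _ in range(T):
    x, a = split_step(x)
    counts[x] += 1

w, v = np.linalg.eig(P.T)                       # stationary distribution of P
pi = np.real(v[:, np.argmax(np.real(w))]); pi /= pi.sum()
print("empirical x-frequencies:", counts / T)
print("stationary pi of P:     ", pi)
```

Averaging over the Bernoulli variable gives δ · ν(·)/δ + (1 − δ) · (P(x, ·) − ν(·))/(1 − δ) = P(x, ·), which is exactly why the marginal law of {x_t} is unchanged.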

The following can be established using the above construction.

Proposition 3.3.1 If

sup_{x ∈ A} E[min(t > 0 : x_t ∈ A) | x_0 = x] < ∞,

then

sup_{z ∈ A × {1}} E[min(t > 0 : z_t ∈ A × {1}) | z_0 = z] < ∞.

3.3.3 Existence of an Invariant Distribution

We state the final theorem on invariant distributions for Markov chains:

Theorem 3.3.5 (Meyn-Tweedie) Consider a Markov process {x_t} taking values in X. If there is a Harris recurrent set A which is also a µ-petite set for some positive measure µ, and if the set satisfies

sup_{x ∈ A} E[min(t > 0 : x_t ∈ A) | x_0 = x] < ∞,

then the Markov chain is positive Harris recurrent and it admits a unique invariant distribution.
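The return-time condition in Theorem 3.3.5 can often be checked by a drift argument, but it can also be probed numerically. The following is a small Monte Carlo sketch, assuming a hypothetical AR(1) chain x_{t+1} = 0.5 x_t + w_t with standard Gaussian noise w_t and the set A = [−1, 1]; the model and all numerical choices are illustrative, not from the notes.

```python
# Estimate E[min(t > 0 : x_t in A) | x_0 = x] over starting points x in A = [-1, 1]
# for the AR(1) chain x_{t+1} = 0.5 x_t + w_t, w_t ~ N(0, 1) (hypothetical example).
import numpy as np

rng = np.random.default_rng(1)

def return_time(x0, a=0.5, half_width=1.0, t_max=10_000):
    """Sample the first return time to A = [-half_width, half_width] from x_0 = x0."""
    x = x0
    for t in range(1, t_max + 1):
        x = a * x + rng.standard_normal()
        if abs(x) <= half_width:
            return t
    return t_max                      # truncation guard; essentially never hit here

grid = np.linspace(-1.0, 1.0, 5)      # starting points spread over A
estimates = [np.mean([return_time(x0) for _ in range(5_000)]) for x0 in grid]
for x0, est in zip(grid, estimates):
    print(f"x0 = {x0:+.2f}   estimated mean return time = {est:.3f}")
```

Bounded estimates across the starting points are consistent with the hypothesis of the theorem; a simulation of course only suggests the bound, it does not prove it.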

In this case, the invariant measure satisfies the following, which is a generalization of Kac's Lemma [14]:

Theorem 3.3.6 For a µ-irreducible Markov chain with a unique invariant probability measure π, the invariant probability measure satisfies

π(A) = ∫_C π(dx) E_x[ ∑_{k=0}^{τ_C − 1} 1_{x_k ∈ A} ],   ∀A ∈ B(X), µ(A) > 0, π(C) > 0,

where τ_C = min(t > 0 : x_t ∈ C) is the first return time to C.
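To see the cycle formula in action, here is a small sanity check by Monte Carlo, assuming a hypothetical three-state kernel and taking C = {0} and A = {1}; the kernel and the choice of sets are illustrative assumptions only.

```python
# Check pi(A) = int_C pi(dx) E_x[ sum_{k=0}^{tau_C - 1} 1{x_k in A} ]
# on a hypothetical 3-state chain with C = {0}, A = {1}.
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

# Invariant distribution pi from the left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))]); pi /= pi.sum()

def visits_per_cycle(c=0, a=1):
    """Sample sum_{k=0}^{tau_C - 1} 1{x_k = a} for one excursion started at x_0 = c."""
    x, count = c, (1 if c == a else 0)     # k = 0 term
    while True:
        x = rng.choice(3, p=P[x])
        if x == c:                         # tau_C reached; k = tau_C is not counted
            return count
        if x == a:
            count += 1

mc = np.mean([visits_per_cycle() for _ in range(100_000)])
print("pi(C) * E[visits to A per cycle] =", pi[0] * mc)
print("pi(A)                            =", pi[1])
```

With C a single state, the right-hand side reduces to π(C) times the expected number of visits to A per excursion from C, which is the classical form of Kac's lemma.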
