
Definition 3.3.3 A Markov chain is called recurrent if it is µ-irreducible and
\[
E_x\left[\sum_{t=1}^{\infty} 1_{\{x_t \in A\}}\right] = \sum_{t=1}^{\infty} P^t(x, A) = \infty, \qquad \forall x \in X,
\]
whenever µ(A) > 0 and A ∈ B(X).
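To make the series condition in Definition 3.3.3 concrete, the following sketch computes the partial sums of ∑_t P^t(x, A) for a small finite chain; the transition matrix, the set A, and the starting state are made up purely for illustration and are not taken from the notes. For an irreducible finite chain the partial sums grow roughly linearly in the horizon, reflecting the divergence required in the definition.

```python
import numpy as np

# A small irreducible transition matrix, made up for illustration only.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.2, 0.3, 0.5],
    [0.4, 0.0, 0.6],
])

A = [2]   # the set A = {2} (third state), chosen arbitrarily
x = 0     # starting state

# Accumulate sum_{t <= T} P^t(x, A); for a recurrent chain these partial
# sums diverge, which is the series condition in Definition 3.3.3.
Pt = np.eye(3)
partial_sum = 0.0
for t in range(1, 2001):
    Pt = Pt @ P
    partial_sum += Pt[x, A].sum()
    if t in (10, 100, 1000, 2000):
        print(f"T = {t:4d}: partial sum = {partial_sum:.2f}")
```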

When studying chains on an infinite state space, we use a stronger notion of recurrence: a set A ∈ B(X) is Harris recurrent if the Markov chain visits A infinitely often with probability 1 whenever the process starts in A:

Definition 3.3.4 A set A ∈ B(X) is Harris recurrent if
\[
P_x(\eta_A = \infty) = 1, \qquad \forall x \in A, \tag{3.11}
\]
where η_A = ∑_{t≥1} 1_{x_t ∈ A} denotes the number of visits to A.

Theorem 3.3.1 Harris recurrence of a set A is equivalent to
\[
P_x[\tau_A < \infty] = 1, \qquad \forall x \in A.
\]

Proof: We prove one direction, that P_x[τ_A < ∞] = 1 for all x ∈ A implies (3.11), following [Meyn-Tweedie, Chapter 9]. Let τ_A(1) be the first time the state hits A, and let τ_A(2), τ_A(3), . . . be the successive hitting times thereafter. By the Strong Markov Property, the Markov chain sampled at the times τ_A(1), τ_A(2), and so on is also a Markov chain. Let Q be the transition kernel of this sampled Markov chain (a simulation sketch of the sampled chain is given after the proof). Now, for x ∈ A, the probability of τ_A(2) < ∞ can be computed recursively as

\[
P_x(\tau_A(2) < \infty) = \int_A Q(x, dy)\, P_y(\tau_A(1) < \infty) = \int_A Q(x, dy) = 1, \tag{3.12}
\]
since P_y(τ_A(1) < ∞) = 1 for every y ∈ A by hypothesis and Q(x, A) = P_x(τ_A(1) < ∞) = 1.

By induction, for every n ∈ Z_+,
\[
P_x(\tau_A(n+1) < \infty) = \int_A Q(x, dy)\, P_y(\tau_A(n) < \infty) = 1. \tag{3.13}
\]
Now,
\[
P_x(\eta_A \geq k) = P_x(\tau_A(k) < \infty),
\]

since visiting the set at least k times requires returning to the set k times when the initial state x is in the set. As such,
\[
P_x(\eta_A \geq k) = 1, \qquad \forall k \in Z_+.
\]
Define B_k = {ω ∈ Ω : η_A(ω) ≥ k}, so that B_{k+1} ⊂ B_k. By the continuity of probability, P_x(∩_k B_k) = lim_{k→∞} P_x(B_k) = 1, and it follows that P_x(η_A = ∞) = 1.

The proof of the other direction of the equivalence is left as an exercise to the reader.

⊓⊔
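The sampled chain used in the proof can be illustrated by simulation. The sketch below is only a toy example and is not taken from the notes: it runs a reflected random walk on {0, 1, . . . , 20}, records the successive hitting times τ_A(1), τ_A(2), . . . of a set A, and collects the states visited at those times; these states form one trajectory of the sampled chain whose kernel the proof calls Q.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x):
    # One step of a simple reflected random walk on {0, 1, ..., 20};
    # the chain and the set A below are chosen purely for illustration.
    if x == 0:
        return 1
    if x == 20:
        return 19
    return x + rng.choice([-1, 1])

A = {0, 1, 2}   # the set whose successive hitting times we record
x = 10          # initial state
hit_times, hit_states = [], []

t = 0
while len(hit_times) < 5:      # record tau_A(1), ..., tau_A(5)
    x = step(x)
    t += 1
    if x in A:
        hit_times.append(t)    # tau_A(k): k-th time the chain is in A
        hit_states.append(x)   # x_{tau_A(k)}: k-th state of the sampled chain

print("hitting times tau_A(k):", hit_times)
print("sampled chain states  :", hit_states)
```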

Definition 3.3.5 A Markov chain is Harris recurrent if the chain is µ-irreducible and every set A ∈ B(X) with µ(A) > 0 is Harris recurrent. If, in addition, the chain admits an invariant probability measure, then the chain is called positive Harris recurrent.
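As a toy illustration of the invariant probability measure appearing in this definition (not taken from the notes, and using the same made-up finite transition matrix as the earlier sketch), the code below finds a left eigenvector of P with eigenvalue 1 and normalizes it. On a finite state space an irreducible chain is automatically positive Harris recurrent, so this only shows how such an invariant measure can be exhibited.

```python
import numpy as np

# The same illustrative 3-state transition matrix as in the earlier sketch.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.2, 0.3, 0.5],
    [0.4, 0.0, 0.6],
])

# An invariant probability measure pi satisfies pi P = pi, i.e. pi is a
# left eigenvector of P for eigenvalue 1, normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(P.T)
k = int(np.argmin(np.abs(eigvals - 1.0)))
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()

print("invariant probability measure pi:", pi)
print("pi P = pi holds:", np.allclose(pi @ P, pi))
```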

Remark: Harris recurrence is stronger than recurrence: recurrence only requires the expected number of visits to a set to be infinite, whereas Harris recurrence requires the set to be visited infinitely often with probability one. Consider the following example: let P(1, 1) = 1 and, for x ≥ 2, P(x, x + 1) = 1 − 1/x² and P(x, 1) = 1/x². Then P_x(τ_1 = ∞) = ∏_{t≥x}(1 − 1/t²) > 0 for every x ≥ 2, so the chain is not Harris recurrent. It is π-irreducible for the measure π with π(A) = 1 if 1 ∈ A and π(A) = 0 otherwise; moreover, since P_x(τ_1 < ∞) > 0 and state 1 is absorbing, the expected number of visits to {1} is infinite from every x. Hence, this chain is recurrent but not Harris recurrent (a simulation sketch of this example follows the remark).

⊓⊔
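The example in the remark is easy to check numerically. The sketch below estimates P_x(τ_1 = ∞) for x = 2 by Monte Carlo; the simulation horizon and sample size are arbitrary choices, and the closed form (x − 1)/x used as a reference comes from telescoping the product and is not stated in the notes.

```python
import numpy as np

rng = np.random.default_rng(0)

def never_hits_one(x0, horizon=1_000):
    # Simulate the remark's chain: P(1, 1) = 1 and, for x >= 2,
    # P(x, x + 1) = 1 - 1/x^2, P(x, 1) = 1/x^2.  The finite horizon
    # truncates the simulation; the neglected absorption probability is
    # at most sum_{t > horizon} 1/t^2, which is negligible here.
    x = x0
    for _ in range(horizon):
        if x == 1:
            return False           # the chain has reached (and stays at) 1
        if rng.random() < 1.0 / x**2:
            x = 1
        else:
            x += 1
    return x != 1                  # 1 was never reached within the horizon

x0 = 2
trials = 5_000
escape = np.mean([never_hits_one(x0) for _ in range(trials)])

print(f"Monte Carlo estimate of P_x(tau_1 = infinity), x = {x0}: {escape:.3f}")
print(f"Telescoped product (x0 - 1)/x0:                          {(x0 - 1) / x0:.3f}")
```

Both numbers come out close to 1/2: from x = 2 the chain escapes to infinity with probability about one half, yet the expected number of visits to state 1 is still infinite, since absorption at 1 occurs with the complementary probability.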
