CHAPTER 2. CONTROLLED MARKOV CHAINS

b) Consider the model above. Further, suppose policy D is adopted by the manager. Consider the Markov chain induced by this policy, that is, a Markov chain with the dynamics

L_{t+1} = L_t + A_t − ⌈λ + 0.1⌉N · 1_{(L_t ≥ ⌈λ+0.1⌉N)},

with A_t as described in the earlier question. Is this Markov chain irreducible? Is there an absorbing set in this Markov chain? If so, what is it?
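To experiment with this exercise, one can simulate the induced chain. The sketch below is a hypothetical instantiation: the arrival law of A_t is specified in the earlier question, so here we simply assume Poisson arrivals with rate lam, and the parameter values lam = 2.0 and N = 5 are illustrative, not taken from the text.

```python
import math
import random

# ASSUMED parameters: the arrival law of A_t and the values of lam and N
# are placeholders; the actual model is given in the earlier question.
lam, N = 2.0, 5
batch = math.ceil(lam + 0.1) * N  # amount served when the queue is long enough

def poisson(rate):
    # Knuth's multiplicative sampler for a Poisson(rate) random variable.
    L, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def step(L_t):
    # L_{t+1} = L_t + A_t - batch * 1{L_t >= batch}
    A_t = poisson(lam)
    return L_t + A_t - (batch if L_t >= batch else 0)

random.seed(0)
L_t = 0
for _ in range(1000):
    L_t = step(L_t)
    assert L_t >= 0  # service is applied only when the queue can cover it
```

Running such a simulation from several initial states and watching where the trajectories settle is a useful sanity check for one's answers about irreducibility and absorbing sets, though it does not replace the analytical argument.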
Chapter 3

Classification of Markov Chains

3.1 Countable State Space Markov Chains

In this chapter, we first review Markov chains whose state takes values in a finite or countable space. A Markov chain {x_t} on a countable state space is a collection of random variables {x_t, t ∈ Z_+}, where each variable takes values in a countable set X. As such, we will call the collection of such variables a chain Φ which takes values in the (one-sided or two-sided) infinite-product space X^∞. The Borel σ-field on this infinite-product space is the one generated by the finite-dimensional sets of the form

{x ∈ X^∞ : x_[m,n] = (a_m, a_{m+1}, ..., a_n), a_k ∈ X, m ≤ k ≤ n, m, n ∈ Z_+}.

We assume that ν_0 is the initial distribution of the Markov chain, that is, the distribution of x_0. As such, the process Φ = {x_0, x_1, ..., x_n, ...} is a (time-homogeneous) Markov chain and satisfies

P_{ν_0}(x_0 = a_0, x_1 = a_1, x_2 = a_2, ..., x_n = a_n)
   = ν_0(x_0 = a_0) P(x_1 = a_1 | x_0 = a_0) P(x_2 = a_2 | x_1 = a_1) · · · P(x_n = a_n | x_{n−1} = a_{n−1}).   (3.1)

If the initial condition is known to be x_0, we write P_{x_0}(· · ·) in place of P_{ν_0}(· · ·).

We can also write the evolution in terms of a matrix:

P(i, j) = Pr(x_{t+1} = j | x_t = i) ≥ 0,   ∀ i, j ∈ X.

Here P(·, ·) is also a probability transition kernel; that is, for every i ∈ X, P(i, ·) is a probability measure on X, so that ∑_j P(i, j) = 1. By induction, one can verify that

P^k(i, j) := P(x_{t+k} = j | x_t = i) = ∑_{m∈X} P(i, m) P^{k−1}(m, j).

In the following, we characterize Markov chains based on transience, recurrence and communication. We then consider the existence of an invariant distribution. Later, we will extend the analysis to uncountable state space Markov chains.
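For a finite state space, the recursion P^k(i, j) = ∑_m P(i, m) P^{k−1}(m, j) is exactly matrix multiplication, so k-step transition probabilities are the k-th power of the transition matrix. A minimal sketch, using an illustrative 3-state transition matrix of our own choosing:

```python
import numpy as np

# An illustrative transition matrix on X = {0, 1, 2}; each row sums to 1.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.2, 0.5, 0.3],
    [0.0, 0.4, 0.6],
])

def k_step(P, k):
    # Apply the recursion P^k(i, j) = sum_m P(i, m) P^{k-1}(m, j),
    # i.e., multiply by P a total of k times starting from the identity.
    Pk = np.eye(P.shape[0])
    for _ in range(k):
        Pk = Pk @ P
    return Pk

Pk = k_step(P, 4)
# The recursion agrees with the direct matrix power ...
assert np.allclose(Pk, np.linalg.matrix_power(P, 4))
# ... and each row of P^k is again a probability measure on X.
assert np.allclose(Pk.sum(axis=1), 1.0)
```

The row-sum check reflects the kernel property: P^k(i, ·) remains a probability measure on X for every i and k.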
Let us consider a discrete-time, time-homogeneous Markov process living in a countable state space X.

Communication. If there exists an integer k such that P(x_{t+k} = j | x_t = i) = P^k(i, j) > 0, and another integer l such that P(x_{t+l} = i | x_t = j) = P^l(j, i) > 0, then state i communicates with state j.
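For a finite chain, this definition can be checked directly by scanning powers of the transition matrix for positive entries. A minimal sketch, reusing an illustrative 3-state matrix (for an n-state chain, powers up to n suffice to detect reachability):

```python
import numpy as np

# Illustrative transition matrix; states 0 and 2 only interact through state 1.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.2, 0.5, 0.3],
    [0.0, 0.4, 0.6],
])

def communicates(P, i, j):
    """True if P^k(i, j) > 0 and P^l(j, i) > 0 for some integers k, l >= 1."""
    n = P.shape[0]
    def reach(a, b):
        # Positive probability of going from a to b in at most n steps.
        return any(np.linalg.matrix_power(P, k)[a, b] > 0
                   for k in range(1, n + 1))
    return reach(i, j) and reach(j, i)

print(communicates(P, 0, 2))  # → True: 0 reaches 2 (and back) via state 1
```

Here P(0, 2) = 0 and P(2, 0) = 0, yet the two states communicate because P^2(0, 2) > 0 and P^2(2, 0) > 0, which is why the definition quantifies over arbitrary integers k and l rather than a single step.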