Diffusion Processes with Hidden States from ... - FU Berlin, FB MI
3.2 Markov Chains, Markov Processes and Markov Property

From now on we are able to derive all properties of Markov chains (see [25, p. 333]). Let the state space $S$ of the chain be the set of non-negative integers $0, 1, 2, 3, 4, \ldots$; furthermore, one can define the probability $P[X_n = j \mid X_{n-1} = i]$ as the transition probability from state $i$ to state $j$. The chain is called time-homogeneous if

\[
P[X_n = j \mid X_{n-1} = i] = P[X_{n+m} = j \mid X_{(n+m)-1} = i], \qquad n = 1, 2, \ldots,\; m \ge 0,\; i, j \in S. \tag{3.6}
\]

Thus we can write

\[
p_{i,j} \equiv P[X_n = j \mid X_{n-1} = i], \tag{3.7}
\]

and define the matrix of transition probabilities by

\[
T = \begin{pmatrix}
p_{0,0} & p_{0,1} & \cdots & p_{0,j} & \cdots \\
p_{1,0} & p_{1,1} & \cdots & p_{1,j} & \cdots \\
\vdots  & \vdots  &        & \vdots  &        \\
p_{i,0} & p_{i,1} & \cdots & p_{i,j} & \cdots \\
\vdots  & \vdots  &        & \vdots  &
\end{pmatrix}. \tag{3.8}
\]

We will write $i \mapsto j$ if $p_{i,j} > 0$, which means that the chain can jump directly from $i$ to $j$.

We can think of the operation of the chain as follows: the chain starts at time $0$ in some state, say $i_0 \in S$. At the next time step the chain jumps to a neighboring state $i_1$ with probability $p_{i_0, i_1}$, provided that $i_0 \mapsto i_1$. It may be the case that this jump is immediately back to the state itself, that is $i_0 = i_1$; we call such an occurrence a self-loop. This procedure is repeated, so that at step $n$ the chain is in some state $i_n$, where

\[
i_0 \mapsto i_1 \mapsto i_2 \mapsto \cdots \mapsto i_{n-1} \mapsto i_n. \tag{3.9}
\]

A sequence of states as in (3.9) is called a path. Given an initial state, there is a set of possible paths that can be taken by the Markov chain; this is called the set of sample paths. One particular sample path taken by the chain is denoted by

\[
i_0, i_1, i_2, \ldots, i_{k-1}, i_k, \ldots. \tag{3.10}
\]

A graphical representation of such a sample path (sample sequence) with $N$ elements, based on the simple three-state Markov chain shown in Figure 3.1, is given by Figure 3.2.
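The construction above can be sketched in a few lines of code: represent the transition matrix $T$ of (3.8) as rows of probabilities and repeatedly jump from the current state $i$ to a next state $j$ with probability $p_{i,j}$, producing a sample path as in (3.10). The concrete three-state matrix below is purely illustrative (it is not the chain of Figure 3.1); note the self-loop at state 0, since $p_{0,0} > 0$.

```python
import random

# Illustrative three-state chain, S = {0, 1, 2}; T[i][j] = p_{i,j}.
# State 0 has a self-loop (p_{0,0} > 0).
T = [
    [0.5, 0.5, 0.0],
    [0.2, 0.3, 0.5],
    [0.0, 0.4, 0.6],
]

# Each row of T must be a probability distribution over the next state.
for row in T:
    assert abs(sum(row) - 1.0) < 1e-12

def sample_path(T, i0, n):
    """Simulate one sample path i_0 -> i_1 -> ... -> i_n of the chain."""
    path = [i0]
    for _ in range(n):
        i = path[-1]
        # Jump to j with probability p_{i,j}; only states j with
        # p_{i,j} > 0 (i.e. i |-> j) are reachable in one step.
        j = random.choices(range(len(T)), weights=T[i])[0]
        path.append(j)
    return path

random.seed(0)
path = sample_path(T, i0=0, n=10)
print(path)  # one sample path of length n + 1, starting at i_0 = 0
```

Because time-homogeneity (3.6) holds, the same matrix `T` is used at every step; every consecutive pair $(i_k, i_{k+1})$ in the printed path satisfies $p_{i_k, i_{k+1}} > 0$.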