
IEOR 263B, Homework 2 Solution

Due February 5, 2009

1 Problems 2.22, 2.24

See solutions in the textbook.

2 Problem 4.30

Let Z_i = X_i − Y_i. We have P(Z_i = 1) = P_1(1 − P_2) =: p, P(Z_i = −1) = P_2(1 − P_1) =: q, and P(Z_i = 0) = 1 − p − q. Let S_n = Σ_{i=1}^n Z_i. Finding the probability of error is analogous to the gambler's ruin problem, even though p + q < 1: a transition that stays in the same state does not affect whether M is reached before −M; it only affects the number of steps required. Hence

    P(error) = P(S_n hits −M before M) = (1 − (p/q)^M) / (1 − (p/q)^{2M}) = 1 / (1 + (p/q)^M) = 1 / (1 + λ^M),

where λ = p/q = P_1(1 − P_2) / (P_2(1 − P_1)).

Also, by Wald's identity (N is a stopping time),

    E[N] = E[S_N] / E[Z_i]
         = [ (1/(1 + λ^M))·(−M) + (λ^M/(1 + λ^M))·M ] / (p − q)
         = M(λ^M − 1) / [ (P_1 − P_2)(λ^M + 1) ],

since p − q = P_1(1 − P_2) − P_2(1 − P_1) = P_1 − P_2.
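The two closed forms can be sanity-checked by simulation; the values P_1 = 0.7, P_2 = 0.4, M = 3 below are illustrative, not from the problem statement.

```python
# Monte Carlo sanity check of P(error) = 1/(1 + lambda^M) and of E[N] from
# Wald's identity.  P1, P2, M are illustrative values, not from the problem.
import random

random.seed(0)
P1, P2, M = 0.7, 0.4, 3
p = P1 * (1 - P2)   # P(Z_i = +1)
q = P2 * (1 - P1)   # P(Z_i = -1)
lam = p / q         # lambda

def run_once():
    """Walk S_n until it hits +M or -M; return (hit -M first, steps used)."""
    s, n = 0, 0
    while abs(s) < M:
        u = random.random()
        if u < p:
            s += 1
        elif u < p + q:
            s -= 1
        n += 1          # a step that stays in the same state still costs a step
    return s == -M, n

trials = 200_000
errors = steps = 0
for _ in range(trials):
    err, n = run_once()
    errors += err
    steps += n

print("P(error): simulated", errors / trials, "formula", 1 / (1 + lam ** M))
print("E[N]:     simulated", steps / trials,
      "formula", M * (lam ** M - 1) / ((P1 - P2) * (lam ** M + 1)))
```

Both estimates agree with the closed forms to well within Monte Carlo error; the agreement of E[N] also confirms that self-transitions inflate only the step count, not the exit probabilities.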

3 Problem 4.31

The three states are: A (F in 1, S in 2), B (F in 2, S in 1), and C (hunting ends). The transition matrix is given by

    P = [ 0.28  0.18  0.54
          0.18  0.28  0.54
          0     0     1    ] .        (1)

(a) The required probability is given by P^(n)(1, 1), i.e. the (1, 1) entry of the matrix P^n. To get the explicit value, we first diagonalize the matrix P = Γ^{-1} Λ Γ, where

    Γ^{-1} = [ 1  −1.5  −2.5
               1   1.5  −2.5
               1   0     0   ] ,   Λ = diag(1, 0.1, 0.46),        (2)

    Γ = (Γ^{-1})^{-1} = [  0     0    1
                          −1/3  1/3   0
                          −0.2 −0.2  0.4 ]

(the columns of Γ^{-1} are right eigenvectors of P for the eigenvalues 1, 0.1, 0.46). Hence P^(n) = Γ^{-1} Λ^n Γ, which is easy to compute. The (1, 1) entry of the computed matrix is (0.1^n + 0.46^n)/2.
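A quick NumPy check of the decomposition (2) and of the closed form for P^(n)(1, 1):

```python
# Verify the diagonalization P = Gamma^{-1} Lambda Gamma and the closed form
# P^n(1,1) = (0.1^n + 0.46^n) / 2 for the hunting chain.
import numpy as np

P = np.array([[0.28, 0.18, 0.54],
              [0.18, 0.28, 0.54],
              [0.00, 0.00, 1.00]])

Ginv = np.array([[1.0, -1.5, -2.5],    # columns are right eigenvectors of P
                 [1.0,  1.5, -2.5],
                 [1.0,  0.0,  0.0]])
Lam = np.diag([1.0, 0.1, 0.46])        # corresponding eigenvalues
G = np.linalg.inv(Ginv)

assert np.allclose(Ginv @ Lam @ G, P)  # the decomposition in (2)

for n in range(1, 8):
    entry = np.linalg.matrix_power(P, n)[0, 0]
    assert abs(entry - (0.1 ** n + 0.46 ** n) / 2) < 1e-12
print("P^n(1,1) matches (0.1^n + 0.46^n)/2 for n = 1..7")
```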

(b) Notice that in both states A and B, the probability of ending is 0.54. Hence the hunting time is ∼ Geom(0.54), and the average time is thus 1/0.54 ≈ 1.85.



4 Problem 4.38

We show that the Markov chain is time reversible with stationary probabilities π_i = a_i/A, where A = Σ_i a_i. Indeed,

    π_i P_ij = a_i a_j q_ij / (A(a_i + a_j)) = π_j P_ji,

since the middle expression is symmetric in i and j (using the symmetry q_ij = q_ji).
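The identity can be checked numerically. The sketch below assumes P_ij = q_ij a_j/(a_i + a_j) for i ≠ j, which is the form the middle expression implies; the a_i and the symmetric q are arbitrary illustrative values.

```python
# Numerical check that pi_i P_ij = a_i a_j q_ij / (A (a_i + a_j)) is symmetric
# in i and j, hence pi_i P_ij = pi_j P_ji.  Assumes P_ij = q_ij a_j/(a_i + a_j)
# for i != j; the a's and q below are arbitrary illustrative values.
import random

random.seed(1)
n = 5
a = [random.uniform(0.5, 2.0) for _ in range(n)]
A = sum(a)
pi = [ai / A for ai in a]

# symmetric q with small entries, so each row of P sums to at most 1
# (the leftover mass would sit in the self-transition P_ii, which
# detailed balance does not involve)
q = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        q[i][j] = q[j][i] = random.uniform(0.0, 0.2)

P = [[q[i][j] * a[j] / (a[i] + a[j]) if i != j else 0.0 for j in range(n)]
     for i in range(n)]

for i in range(n):
    for j in range(n):
        if i != j:
            assert abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12
print("detailed balance holds for every pair (i, j)")
```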

5 Problem 4.39

Conditioning on the number of individuals that die in a given period, it is easy to see that

    P_ij = Σ_{k=0}^{i} C(i, k) p^k (1 − p)^{i−k} a_{j−i+k},

where C(i, k) denotes the binomial coefficient, a_i = 0 if i < 0, and a_i = e^{−λ} λ^i / i! if i ≥ 0.

The stationary probabilities, as given in Example 4.3(D), are

    π_i = e^{−λ/p} (λ/p)^i / i!,   i = 0, 1, . . .

For time reversibility to hold we need π_i P_ij = π_j P_ji, i.e.

    (e^{−λ/p} (λ/p)^i / i!) Σ_{k=0}^{i} C(i, k) p^k (1 − p)^{i−k} a_{j−i+k}
        = (e^{−λ/p} (λ/p)^j / j!) Σ_{k=0}^{j} C(j, k) p^k (1 − p)^{j−k} a_{i−j+k}

    ⟺ (λ/p)^{i−j} (j!/i!) Σ_{k=0}^{i} C(i, k) p^k (1 − p)^{i−k} a_{j−i+k}
        = Σ_{k=0}^{j} C(j, k) p^k (1 − p)^{j−k} a_{i−j+k}.

Without loss of generality assume j > i. Then the R.H.S. of the above equation is

    Σ_{k=0}^{j} C(j, k) p^k (1 − p)^{j−k} a_{i−j+k}
        = Σ_{k=0}^{j−i−1} C(j, k) p^k (1 − p)^{j−k} · 0
            + Σ_{k=j−i}^{j} C(j, k) p^k (1 − p)^{j−k} a_{i−j+k}
        = Σ_{k=j−i}^{j} C(j, k) p^k (1 − p)^{j−k} a_{i−j+k}.

The L.H.S. is

    (λ/p)^{i−j} (j!/i!) Σ_{k=0}^{i} C(i, k) p^k (1 − p)^{i−k} a_{j−i+k}
        = (λ/p)^{i−j} (j!/i!) Σ_{k=0}^{i} (i! / (k!(i−k)!)) p^k (1 − p)^{i−k} e^{−λ} λ^{j−i+k} / (j−i+k)!
        = Σ_{k=0}^{i} (j! / (k!(i−k)!(j−i+k)!)) p^{j−i+k} (1 − p)^{i−k} e^{−λ} λ^k
        = Σ_{k=0}^{i} C(j, j−i+k) p^{j−i+k} (1 − p)^{i−k} e^{−λ} λ^k / k!.

Changing the variable to y = j − i + k, we get

    Σ_{y=j−i}^{j} C(j, y) p^y (1 − p)^{j−y} e^{−λ} λ^{i−j+y} / (i−j+y)!
        = Σ_{y=j−i}^{j} C(j, y) p^y (1 − p)^{j−y} a_{i−j+y},

which is equal to the R.H.S. Hence the chain is time reversible.
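The identity proved above can also be confirmed numerically; λ = 1.3 and p = 0.6 below are illustrative values.

```python
# Numerical check of time reversibility for the chain of Problem 4.39:
# pi_i P_ij should equal pi_j P_ji for all i, j.  lam and p are illustrative.
from math import comb, exp, factorial

lam, p = 1.3, 0.6

def a(m):
    """Poisson(lam) pmf, extended by 0 for negative arguments."""
    return 0.0 if m < 0 else exp(-lam) * lam ** m / factorial(m)

def P(i, j):
    """Transition probability: k of the i individuals die, j-i+k arrive."""
    return sum(comb(i, k) * p ** k * (1 - p) ** (i - k) * a(j - i + k)
               for k in range(i + 1))

def pi(i):
    """Stationary distribution: Poisson(lam/p)."""
    return exp(-lam / p) * (lam / p) ** i / factorial(i)

for i in range(8):
    for j in range(8):
        assert abs(pi(i) * P(i, j) - pi(j) * P(j, i)) < 1e-12
print("pi_i P_ij == pi_j P_ji verified for 0 <= i, j <= 7")
```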



6 Problem 4.41

(a) By symmetry (or by noting that the chain is doubly stochastic), π_j = 1/n, j = 1, . . . , n. Hence the transition probabilities of the reversed chain are

    P*_ij = π_j P_ji / π_i = P_ji = { p,      j = i − 1
                                    { 1 − p,  j = i + 1
                                    { 0,      otherwise,

where state "0" refers to state n, and state n + 1 refers to state 1.

(b) The Markov chain is not time reversible unless p = 0.5.
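A numeric sketch of both parts, assuming the forward chain is the cyclic random walk with P_{i,i+1} = p and P_{i,i−1} = 1 − p (indices mod n); n = 6 and p = 0.7 are illustrative values.

```python
# Check that pi is uniform and that the reversed chain of the cyclic walk
# steps down with probability p and up with probability 1 - p.
# Assumes P_{i,i+1} = p, P_{i,i-1} = 1 - p (indices mod n); n, p illustrative.
import numpy as np

n, p = 6, 0.7
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = p
    P[i, (i - 1) % n] = 1 - p

pi = np.full(n, 1.0 / n)
assert np.allclose(pi @ P, pi)            # uniform pi is stationary

# P*_ij = pi_j P_ji / pi_i, which here is just the transpose of P
Pstar = (P * pi[:, None] / pi[None, :]).T
for i in range(n):
    assert abs(Pstar[i, (i - 1) % n] - p) < 1e-12
    assert abs(Pstar[i, (i + 1) % n] - (1 - p)) < 1e-12

# part (b): for p != 0.5 the reversed chain differs from the original
assert not np.allclose(Pstar, P)
print("reversed chain: down with prob p, up with prob 1 - p")
```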

7 Problem 4.44

All we need to show is that

    π̄_i P̄_ij = π̄_j P̄_ji,   for all i ≠ j.

Substituting and simplifying, this reduces to π_i P_ij = π_j P_ji, which holds since the original untruncated Markov chain is time reversible.

