

Topic: Markov Chains

Here is a simple game. A player bets on coin tosses, a dollar each time, and the game ends either when the player has no money left or is up to five dollars. If the player starts with three dollars, what is the chance that the game takes at least five flips? Twenty-five flips?
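Before setting up any machinery, one way to get a feel for the answer is to simulate the game directly. Here is a minimal Monte Carlo sketch in Python; the function name and trial count are illustrative choices, not part of this Topic.

```python
import random

def flip_counts(start=3, top=5, trials=100_000):
    """Play the game many times; count how often it lasts 5+ and 25+ flips."""
    at_least_5 = at_least_25 = 0
    for _ in range(trials):
        money, flips = start, 0
        while 0 < money < top:              # game ends at $0 or at $top
            money += 1 if random.random() < 0.5 else -1
            flips += 1
        at_least_5 += (flips >= 5)
        at_least_25 += (flips >= 25)
    return at_least_5 / trials, at_least_25 / trials

print(flip_counts())  # roughly (0.5, 0.007); the exact values appear below
```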

At any point in the game, this player has either \$0, or \$1, \dots, or \$5. We say that the player is in the state $s_0$, $s_1$, \dots, or $s_5$. A game consists of moves, with, for instance, a player in state $s_3$ having on the next flip a $.5$ chance of moving to state $s_2$ and a $.5$ chance of moving to $s_4$. Once in either state $s_0$ or state $s_5$, the player never leaves that state. Writing $p_{i,n}$ for the probability that the player is in state $s_i$ after $n$ flips, this equation summarizes.

\[
\begin{pmatrix}
1 & .5 & 0  & 0  & 0  & 0 \\
0 & 0  & .5 & 0  & 0  & 0 \\
0 & .5 & 0  & .5 & 0  & 0 \\
0 & 0  & .5 & 0  & .5 & 0 \\
0 & 0  & 0  & .5 & 0  & 0 \\
0 & 0  & 0  & 0  & .5 & 1
\end{pmatrix}
\begin{pmatrix} p_{0,n} \\ p_{1,n} \\ p_{2,n} \\ p_{3,n} \\ p_{4,n} \\ p_{5,n} \end{pmatrix}
=
\begin{pmatrix} p_{0,n+1} \\ p_{1,n+1} \\ p_{2,n+1} \\ p_{3,n+1} \\ p_{4,n+1} \\ p_{5,n+1} \end{pmatrix}
\]

For instance, the probability of being in state $s_0$ after flip $n+1$ is $p_{0,n+1} = p_{0,n} + 0.5\cdot p_{1,n}$. With the initial condition that the player starts with three dollars, calculation gives this.

\[
\begin{array}{ccccccc}
n=0 & n=1 & n=2 & n=3 & n=4 & \cdots & n=24 \\[2pt]
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} &
\begin{pmatrix} 0 \\ 0 \\ .5 \\ 0 \\ .5 \\ 0 \end{pmatrix} &
\begin{pmatrix} 0 \\ .25 \\ 0 \\ .5 \\ 0 \\ .25 \end{pmatrix} &
\begin{pmatrix} .125 \\ 0 \\ .375 \\ 0 \\ .25 \\ .25 \end{pmatrix} &
\begin{pmatrix} .125 \\ .1875 \\ 0 \\ .3125 \\ 0 \\ .375 \end{pmatrix} &
\cdots &
\begin{pmatrix} .39600 \\ .00276 \\ 0 \\ .00447 \\ 0 \\ .59676 \end{pmatrix}
\end{array}
\]
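These columns are easy to reproduce by machine. Here is a sketch using Python with NumPy (an assumption; any matrix package will do) that iterates $p_{n+1} = T\,p_n$ from the $n=0$ vector and also answers the two opening questions, since the game lasts at least $k+1$ flips exactly when it is not yet absorbed after $k$ flips.

```python
import numpy as np

# Transition matrix: column j is the next-flip distribution for a player
# in state s_j; states s_0 and s_5 are absorbing.
T = np.array([
    [1, .5,  0,  0,  0, 0],
    [0,  0, .5,  0,  0, 0],
    [0, .5,  0, .5,  0, 0],
    [0,  0, .5,  0, .5, 0],
    [0,  0,  0, .5,  0, 0],
    [0,  0,  0,  0, .5, 1],
])

p = np.array([0, 0, 0, 1, 0, 0], dtype=float)   # n = 0: all mass on s_3
for n in range(24):
    p = T @ p                                   # p_{n+1} = T p_n
print(p.round(5))  # -> [0.396  0.00276  0.  0.00447  0.  0.59676]

p4 = np.linalg.matrix_power(T, 4) @ np.array([0, 0, 0, 1, 0, 0], float)
print(1 - p4[0] - p4[5])   # at least five flips: 0.5
print(1 - p[0] - p[5])     # at least twenty-five flips: about 0.00724
```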

For instance, after the fourth flip there is a probability of $0.50$ that the game is already over: the player either has no money left or has won five dollars. As this computational exploration suggests, the game is not likely to go on for long, with the player quickly ending in either state $s_0$ or state $s_5$. (Because a player who enters either of these two states never leaves, they are said to be absorbing. An argument that involves taking the limit as $n$ goes to infinity will show that when the player starts with \$3, there is a probability of $0.60$ that the player eventually ends with \$5 and consequently a probability of $0.40$ that the player ends the game with \$0. That argument is beyond the scope of this Topic, however; here we will just look at a few computations for applications.)
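Though the limit argument itself is beyond this Topic, the limit is easy to observe numerically. Here is a sketch under the same setup as the previous one; the power 1000 is an arbitrary large choice standing in for the limit.

```python
import numpy as np

# Same chain, built from its rule: s_0 and s_5 absorb, and each interior
# state moves up or down with chance .5.
T = np.zeros((6, 6))
T[0, 0] = T[5, 5] = 1.0
for j in range(1, 5):
    T[j - 1, j] = T[j + 1, j] = 0.5

p_limit = np.linalg.matrix_power(T, 1000) @ np.array([0, 0, 0, 1, 0, 0], float)
print(p_limit.round(6))  # -> [0.4  0.  0.  0.  0.  0.6]
```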

This game is an example of a Markov chain, named for work by A.A. Markov at the start of the twentieth century. The vectors of $p$'s are probability vectors. The matrix is a transition matrix. A Markov chain is historyless in that, with a fixed transition matrix, the next state depends only on the current state and not on any states that came before. Thus a player, say, who starts in state $s_3$ and arrives at state $s_2$ has exactly the same chance of moving next to $s_1$ or to $s_3$ as a player who arrives at $s_2$ by any other route.
