B.7.4 Random Walks

Random walks are special cases of Markov chain processes. A simple random walk is just a Markov chain process whose state space is a finite or infinite subset of consecutive integers 0, 1, ..., c, in which the process, if it is in state k, can either stay in k or move to one of the neighboring states k − 1 and k + 1. The states 0 and c are often absorbing states.

A classic example of a random walk process involves a gambler. Suppose a gambler who initially has k dollars plays a series of games. In each game, he has probability p of winning one dollar and probability 1 − p = q of losing one dollar. The amount of money Y_n that he has after n games forms a random walk process.

In a general random walk, the process may stay or move to one of m > 2 nearby states. Although random walks can be analyzed as Markov chains, the special features of a random walk allow simple methods of analysis. For example, the moment-generating approach is a powerful tool for the analysis of simple random walks.
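To make the gambler's example concrete, the following is a minimal Python sketch (the function name gamblers_walk and the particular values of k, c, and p are illustrative choices, not taken from the text) that simulates the fortune Y_n with absorbing states 0 and c and estimates the probability of ruin by Monte Carlo.

```python
import random

def gamblers_walk(k, c, p, max_steps=100_000):
    """Simulate the gambler's fortune Y_n starting from k dollars.

    In each game the gambler wins one dollar with probability p or loses
    one with probability q = 1 - p; the walk stops on hitting one of the
    absorbing states 0 (ruin) or c.
    """
    y = k
    for _ in range(max_steps):
        if y in (0, c):
            break
        y += 1 if random.random() < p else -1
    return y

# Monte Carlo estimate of the ruin probability Pr[the walk is absorbed at 0].
k, c, p, trials = 3, 10, 0.48, 20_000
ruined = sum(gamblers_walk(k, c, p) == 0 for _ in range(trials))
print(f"estimated ruin probability: {ruined / trials:.3f}")
```

For p ≠ q, such an estimate can be compared with the classical gambler's-ruin probability ((q/p)^k − (q/p)^c) / (1 − (q/p)^c).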

B.7.5 High-Order Markov Chains

Markov chain processes satisfy the memoryless and time-homogeneity properties. In bioinformatics, more general Markov chain processes are used to model gene-coding sequences. High-order Markov chains relax the memoryless property. A discrete stochastic process is a kth-order Markov chain if

\Pr[X_{t+1} = E \mid X_0 = E_0, \dots, X_{t-1} = E_{t-1}, X_t = E_t]
    = \Pr[X_{t+1} = E \mid X_t = E_t, X_{t-1} = E_{t-1}, \dots, X_{t-k+1} = E_{t-k+1}]    (B.43)

for each time point t and all states E, E_0, E_1, ..., E_t. In other words, the probability that the process is in a given state at the next time point depends only on the last k states of its past history.

It is not hard to see that a kth-order Markov chain is completely defined by the initial distribution and the transition probabilities of the form (B.43).
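As an illustration of how such a chain is specified by an initial distribution together with transition probabilities of the form (B.43), here is a minimal Python sketch of a hypothetical second-order (k = 2) chain over the DNA alphabet. The table transitions, its made-up probabilities, the uniform fallback for unseen contexts, and the fixed seed context are all assumptions made only for this example.

```python
import random

# Hypothetical 2nd-order (k = 2) Markov chain over the DNA alphabet:
# the next letter depends only on the last two letters.  The transition
# probabilities below are invented purely for illustration.
ALPHABET = "ACGT"
transitions = {
    ("A", "C"): [0.1, 0.4, 0.4, 0.1],   # Pr[next = A, C, G, T | last two = "AC"]
    # ... one entry per pair of letters; unseen pairs fall back to uniform.
}

def sample_sequence(length, k=2, seed_context="AC"):
    """Generate a sequence from the kth-order chain defined by `transitions`."""
    seq = list(seed_context)             # initial distribution: a fixed seed context
    for _ in range(length - k):
        context = tuple(seq[-k:])        # only the last k states matter
        probs = transitions.get(context, [0.25] * 4)
        seq.append(random.choices(ALPHABET, weights=probs)[0])
    return "".join(seq)

print(sample_sequence(30))
```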

B.8 Recurrent Events and the Renewal Theorem

Discrete renewal theory is a major branch of classical probability theory. It concerns recurrent events occurring in repeated trials. In this section, we state two basic theorems of renewal theory. The interested reader is referred to the book of Feller [68] for their proofs.

Consider an infinite sequence of repeated trials with possible outcomes X_i (i = 1, 2, ...). Let E be an event defined by an attribute of finite sequences of possible outcomes X_i. We say that E occurs at the ith trial if the outcomes X_1, X_2, ..., X_i have the defining attribute.
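As a concrete illustration (not an example from the text), equalization of heads and tails in repeated coin tossing is a classical recurrent event: it occurs at the ith trial exactly when the outcomes X_1, ..., X_i contain equally many heads and tails. The Python sketch below, with the hypothetical helper occurrence_times, records the trials at which this event occurs.

```python
import random

def occurrence_times(n_trials, p=0.5, seed=None):
    """Record the trials at which the recurrent event
    E = "heads and tails have occurred equally often so far"
    occurs in repeated coin tosses with head probability p."""
    rng = random.Random(seed)
    balance, times = 0, []
    for i in range(1, n_trials + 1):
        balance += 1 if rng.random() < p else -1   # head = +1, tail = -1
        if balance == 0:
            times.append(i)                        # E occurs at trial i
    return times

print(occurrence_times(50, seed=1))
```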
