
P<br />

( m)<br />

= P<br />

( r )<br />

⋅ P<br />

( m−r<br />

)<br />

with<br />

P<br />

( m)<br />

=<br />

( m)<br />

( p ) m = 1,2, K<br />

ij<br />

Formula 5-17: Formula of Chapman-Kolmogorov (Simplified Version)<br />

This simplified Formula 5-17 supports the argument that every Markov chain is completely described by just a starting distribution at step 0 and a transition matrix.⁵⁵
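To make this concrete, the following sketch verifies Formula 5-17 numerically. The transition matrix $P$ and the starting distribution are invented for illustration and are not taken from the case study; any row-stochastic matrix would do.

```python
import numpy as np

# Assumed one-step transition matrix (illustrative, not from the source).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

m, r = 5, 2
lhs = np.linalg.matrix_power(P, m)                                  # P^(m)
rhs = np.linalg.matrix_power(P, r) @ np.linalg.matrix_power(P, m - r)
assert np.allclose(lhs, rhs)                                        # Chapman-Kolmogorov

# A starting distribution at step 0 plus P determines the whole chain:
pi0 = np.array([1.0, 0.0])
print(pi0 @ lhs)                                   # distribution after m steps
```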

As mentioned above, a Markov chain is often used to predict breakdowns. For this purpose, the existing states have to be classified as critical and uncritical. In general, states $i \in I$ with $p_{ii} = 1$ are critical; they are called absorbing states. Figure 5-5 contains two absorbing states, because once state 0 or state 6 is entered, all following states remain the same. Markov chains can now be used to determine the probability that a critical state is reached. If an absorbing state is reached with probability one, the mean number of steps after which it is reached can also be determined. ([Waldmann04], p. 18)
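The mean number of steps until absorption can be computed with the standard fundamental-matrix method. Since Figure 5-5 is not reproduced here, the sketch below assumes a plausible stand-in: a symmetric random walk on the states 0 to 6 in which states 0 and 6 absorb ($p_{ii} = 1$), matching the description above.

```python
import numpy as np

# Assumed chain: random walk on states 0..6, states 0 and 6 absorbing.
n = 7
P = np.zeros((n, n))
P[0, 0] = P[6, 6] = 1.0                    # absorbing states: p_ii = 1
for i in range(1, 6):
    P[i, i - 1] = P[i, i + 1] = 0.5        # interior states move left/right

absorbing = [i for i in range(n) if P[i, i] == 1.0]
transient = [i for i in range(n) if i not in absorbing]

Q = P[np.ix_(transient, transient)]        # transient-to-transient block
R = P[np.ix_(transient, absorbing)]        # transient-to-absorbing block
N = np.linalg.inv(np.eye(len(transient)) - Q)   # fundamental matrix

print(N @ np.ones(len(transient)))   # mean steps to absorption (state 3: 9)
print(N @ R)                         # absorption probabilities per start state
```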

This determination can be done by calculating $P^{(m)}$ for $m = 1, 2, \dots, \infty$. Markov chains often converge to a stationary distribution, so that the probability of an absorbing state can be given for an infinite number of state changes. Formula 5-18 introduces a counter-example that does not converge. Hence, the probability that an absorbing state is reached during the whole running time of the Markov chain cannot be obtained in every case, but it can in many. ([Waldmann04], p. 40)

$P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$

Formula 5-18: Permutation Matrix as an Example of a Non-Converging Markov Chain
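A few matrix powers make the non-convergence of Formula 5-18 visible: the chain jumps back and forth between the two states, so $P^{(m)}$ oscillates with period 2 instead of approaching a limit.

```python
import numpy as np

# Transition matrix from Formula 5-18: a deterministic swap of two states.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

for m in range(1, 5):
    print(m, np.linalg.matrix_power(P, m).ravel())
# odd m: [0 1 1 0], even m: [1 0 0 1] -- P^(m) alternates and has no limit
```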

The Markov property can also be transferred to the setting of time-continuous stochastic processes. The result is called a Markov process. The biggest difference from Markov chains is that the state-probability calculations described above no longer apply: the results can no longer be determined by simply multiplying matrices but require solving differential equations. To ease the calculations, the underlying process is often

55 See ([Beichelt97], p. 148) for details.
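As a sketch of the time-continuous case, the state probabilities $p(t)$ of a Markov process solve the Kolmogorov forward equations $dp/dt = p(t) \, Q$, where $Q$ is a generator matrix whose rows sum to zero. The two-state generator below is an assumed illustration, not taken from the source.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed generator matrix: off-diagonal entries are transition rates,
# each diagonal entry makes its row sum to zero.
Q = np.array([[-0.5,  0.5],
              [ 0.2, -0.2]])

def forward(t, p):
    return p @ Q                      # Kolmogorov forward equation dp/dt = p Q

p0 = np.array([1.0, 0.0])             # start in state 0 with probability 1
sol = solve_ivp(forward, (0.0, 20.0), p0, t_eval=[20.0])
print(sol.y[:, -1])                   # approaches the stationary [2/7, 5/7]
```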
