
\[
\text{Availability} = \frac{\text{MTBF}}{\text{MTBF} + \text{MTTR}}
\]

Formula 5-10: The Definition of Availability [Masing88]
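A minimal sketch of this calculation in Python, assuming hypothetical MTBF and MTTR figures for a cooling device (neither value comes from the thesis or from a manufacturer):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability according to Formula 5-10:
    MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)


if __name__ == "__main__":
    # Assumed example figures: mean time between failures of 8,000 hours,
    # mean time to repair of 4 hours.
    print(f"Availability: {availability(8000.0, 4.0):.4%}")
```

With the assumed figures (MTBF = 8,000 h, MTTR = 4 h) the formula yields an availability of roughly 99.95 %.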

The general idea of calculating the estimated availability for a specified period seems promising. However, this method of failure and availability ratios faces a major problem when applied to the setting of sensor-based temperature monitoring: most manufacturers of cooling devices do not publish ratios such as MTTF [Nijmegen06]. Since cooling devices are long-life products, determining these measures empirically is not feasible either. Hence, failure and availability ratios cannot be applied in the setting of sensor-based temperature monitoring.

5.7 Markov Chains

Another approach to predicting breakdowns is the use of Markov chains. These chains are simple time-discrete stochastic processes $(X_n)_{n \in \mathbb{N}_0}$ with a countable state space $I$ that satisfy the following Formula 5-11 for all points in time $n \in \mathbb{N}_0$ and all states $i_0, \ldots, i_{n-1}, i_n, i_{n+1} \in I$ ([Waldmann04], p. 11):

\[
P(X_{n+1} = i_{n+1} \mid X_0 = i_0, \ldots, X_{n-1} = i_{n-1}, X_n = i_n) = P(X_{n+1} = i_{n+1} \mid X_n = i_n)
\]

Formula 5-11: The Markov Property

This Markov property is the defining characteristic of Markov chains. It states that the probability of changing to another state is influenced only by the last observed state and not by prior ones. Hence, the probability that $X_{n+1}$ takes the value $i_{n+1}$ is influenced only by $i_n \in I$ and not by $i_0, \ldots, i_{n-1} \in I$ ([Waldmann04], p. 11).
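To make the property tangible, the following Python sketch samples the next state of a hypothetical cooling-device condition chain; the state space ("ok", "degraded", "failed") and the probabilities are illustrative assumptions, not values from the thesis. The sampling function receives only the current state $i_n$, so the recorded history has no influence on the outcome.

```python
import random

# Hypothetical state space and transition probabilities for a cooling device;
# the figures are illustrative assumptions only.
TRANSITIONS = {
    "ok":       {"ok": 0.95, "degraded": 0.04, "failed": 0.01},
    "degraded": {"ok": 0.10, "degraded": 0.80, "failed": 0.10},
    "failed":   {"ok": 0.00, "degraded": 0.00, "failed": 1.00},
}


def next_state(current: str) -> str:
    """Sample X_{n+1} given only X_n = current; earlier states
    i_0, ..., i_{n-1} play no role (the Markov property)."""
    probs = TRANSITIONS[current]
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]


if __name__ == "__main__":
    state, history = "ok", []
    for _ in range(10):
        history.append(state)
        state = next_state(state)  # depends on `state` alone, not on `history`
    print(" -> ".join(history))
```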

The conditional probability $P(X_{n+1} = i_{n+1} \mid X_n = i_n)$ is called the process's transition probability. If this transition probability is independent of the point in time $n$, the Markov chain is called homogeneous; otherwise it is called inhomogeneous ([Waldmann04], p. 11). In the following, this thesis focuses on homogeneous Markov chains. To improve readability, they will simply be called Markov chains.
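The distinction can be illustrated with a small Python sketch that steps a state distribution forward with a transition matrix (as introduced in the next paragraph): a homogeneous chain applies the same matrix at every point in time $n$, whereas an inhomogeneous chain would let the matrix depend on $n$. The matrix entries and the ageing factor are illustrative assumptions.

```python
import numpy as np

# Illustrative transition matrix P with entries p_ij over the assumed states
# ("ok", "degraded", "failed"); every row sums to 1.
P = np.array([
    [0.95, 0.04, 0.01],
    [0.10, 0.80, 0.10],
    [0.00, 0.00, 1.00],
])


def homogeneous_step(dist: np.ndarray, n: int) -> np.ndarray:
    """Homogeneous chain: the same P is applied at every time step n."""
    return dist @ P


def inhomogeneous_step(dist: np.ndarray, n: int) -> np.ndarray:
    """Inhomogeneous chain (for contrast): the matrix depends on n, here via
    an assumed ageing effect that shifts probability towards 'failed'."""
    ageing = min(0.05 * n, 0.50)
    P_n = P.copy()
    P_n[0] = [0.95 - ageing, 0.04, 0.01 + ageing]  # row still sums to 1
    return dist @ P_n


if __name__ == "__main__":
    dist = np.array([1.0, 0.0, 0.0])  # start in state "ok" with certainty
    for n in range(5):
        dist = homogeneous_step(dist, n)
    print("Distribution after 5 homogeneous steps:", dist.round(4))
```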

In the majority of cases the transition probability is written as a matrix $P$. It contains the probabilities $p_{ij}$ of all possible changes between old state $i$ and new state $j$ as

