
Advanced Topics 271

action. When executing an action, the strategy may record history. The selection of actions is always performed on controllable actions, because the strategy is really the tester's strategy. By knowing that certain actions are observable, the strategy can learn from the behavior of the IUT and use that knowledge to improve its selection of controllable actions. The following two situations are particularly interesting.

IUT is deterministic. A typical situation is that the behavior of the IUT is known to be deterministic. This means that if an observable action happens in a certain state, then the same observable action will happen again when that same state is visited. Assume that the model program is written in such a way that its states correspond to the states of the IUT, so that there are no “hidden” states. The model program may still allow multiple choices, because it is not known what the particular implementation is, only that it is deterministic. Suppose also that all states of the model program are either passive or active.

The strategy can record in a map P, for each reached passive state, the observable action that occurred in that state. Say that a state is partially explored if it is active and there exists an unexplored controllable action that is enabled in that state. Suppose that the goal of the strategy is to maximize transition coverage. When computing which controllable action to select, a possible algorithm is to select an action that has not been explored, or to select an action that takes the strategy closer to a partially explored state. Such an algorithm should be incremental. In this calculation, P is used to determine which transitions take place from passive states.
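One way to make this concrete is the following Python sketch (all names and data structures are invented for illustration, not taken from the chapter): prefer an unexplored controllable action in the current state, and otherwise search breadth-first for the nearest partially explored state, using the map P to step deterministically through passive states.

```python
from collections import deque

def choose_action(state, controllable, P, step, explored):
    """Select a controllable action in active state `state` (sketch).

    controllable[s] -- controllable actions enabled in active state s
    P[s]            -- observable action recorded in passive state s
    step(s, a)      -- successor state of s under action a
    explored        -- set of (state, action) transitions already covered
    """
    def resolve(s):
        # Follow recorded observations through passive states; under the
        # determinism assumption, P fixes the unique transition taken.
        while s in P:
            s = step(s, P[s])
        return s

    # 1. Prefer an unexplored controllable action enabled right here.
    for a in controllable.get(state, ()):
        if (state, a) not in explored:
            return a

    # 2. Otherwise search breadth-first for a partially explored state,
    #    remembering the first action taken on each shortest path.
    seen = {state}
    frontier = deque()
    for a in controllable.get(state, ()):
        t = resolve(step(state, a))
        if t not in seen:
            seen.add(t)
            frontier.append((t, a))
    while frontier:
        s, first = frontier.popleft()
        if any((s, b) not in explored for b in controllable.get(s, ())):
            return first
        for b in controllable.get(s, ()):
            t = resolve(step(s, b))
            if t not in seen:
                seen.add(t)
                frontier.append((t, first))
    return None  # every reachable transition is already covered
```

The sketch is incremental in the intended sense: `explored` and `P` grow as the test run proceeds, and each call uses only the knowledge accumulated so far.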

Suppose, for example, that the bag implementation in Figure 12.1 is extended so that it implements the draw operation deterministically, by picking the lexicographically least element from the bag. This would imply that, during test execution, each time state 6 is visited in Figure 16.3, the observable action is Draw Finish("a").
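For illustration, such a deterministic draw might look like the following Python sketch (a hypothetical rendering; the actual bag implementation of Figure 12.1 is not reproduced here):

```python
class Bag:
    """Multiset with a deterministic draw (illustrative sketch only)."""

    def __init__(self):
        self.counts = {}  # element -> multiplicity

    def add(self, e):
        self.counts[e] = self.counts.get(e, 0) + 1

    def draw(self):
        # Deterministic choice: always the lexicographically least element.
        e = min(self.counts)
        self.counts[e] -= 1
        if self.counts[e] == 0:
            del self.counts[e]
        return e
```

With elements "a" and "b" in the bag, `draw` always returns "a" first, matching the Draw Finish("a") observation described above.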

IUT is random. Another situation is that the behavior of the IUT is random. In this case the notion of partially explored states can be applied to all states, including passive states. If the probabilities of the different observable actions are known, the strategy can select a controllable action that is either unexplored or leads, with high probability, closer to a partially explored state.
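One simple way to realize this variant is a one-step lookahead that scores each controllable action by the probability of reaching a partially explored state; the Python sketch below uses invented names and assumes that passive states do not form cycles:

```python
def choose_action_random(state, controllable, obs_prob, step, explored):
    """Action selection for a randomized IUT (illustrative sketch).

    controllable[s] -- controllable actions enabled in active state s
    obs_prob[s]     -- dict mapping each observable action enabled in
                       passive state s to its known probability
    step(s, a)      -- successor state of s under action a
    explored        -- set of (state, action) transitions already covered
    """
    # Unexplored actions in the current state come first.
    for a in controllable.get(state, ()):
        if (state, a) not in explored:
            return a

    def p_partial(s):
        # Probability that s resolves to a partially explored state,
        # weighting passive successors by their known probabilities.
        # (Assumes the passive part of the graph is acyclic.)
        if s in obs_prob:
            return sum(p * p_partial(step(s, o))
                       for o, p in obs_prob[s].items())
        return 1.0 if any((s, b) not in explored
                          for b in controllable.get(s, ())) else 0.0

    # Otherwise pick the action most likely to land near unexplored work.
    return max(controllable.get(state, ()), default=None,
               key=lambda a: p_partial(step(state, a)))
```

A production version would search more than one step ahead, but the scoring idea — weight each outcome by its known probability — is the same.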

Suppose, for example, that the bag implementation in Figure 12.1 is extended so that it implements the draw operation in such a way that each element in the bag is equally likely to be drawn. This would imply that, during test execution, each time state 6 is visited in Figure 16.3(a), both Draw Finish("a") and Draw Finish("b") are equally likely to occur. If state 6 is visited multiple times, it is very unlikely that the same element is chosen each time. In fact, it follows under the given assumptions that the probability that both "a" and "b" were chosen at least once approaches 1 exponentially fast in the number of times that state 6 is visited.
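Concretely, with two equally likely elements, the probability that some element is never drawn in n visits is 2 · (1/2)^n, so the probability that both are drawn at least once is 1 − 2^(1−n):

```python
def p_both_seen(n):
    # Probability that n independent, fair draws from {"a", "b"}
    # include both elements at least once: 1 - P(all "a") - P(all "b").
    return 1.0 - 2.0 * 0.5 ** n
```

For example, `p_both_seen(2)` is 0.5, while `p_both_seen(10)` already exceeds 0.998 — the residual probability halves with every additional visit.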
