
Lecture Notes in Computer Science 3472



10 Methodological Issues in Model-Based Testing 287

Testing always involves some kind of redundancy: the intended and the actual behaviors. When a single model is chosen for both code generation and test case generation, this redundancy is lacking. In a sense, the code (or model) would be tested against itself. This is why no automatic verdicts are possible.

On the other hand, what can be automatically tested are the code generator and the environment assumptions that are explicitly given, or implicitly encoded in the model. This can be regarded as problematic or not. In case the code generator works correctly and the model is valid, which is what we have presupposed, tests of the adequacy of environment assumptions are the only task necessary to ensure a proper functioning of the actual (sub-)system. This is where formal verification technology and testing seem to blend smoothly: formal verification of the model is done to make sure the model does what it is supposed to. Possibly inadequate environment assumptions can be identified when (selected) traces of the model are compared to traces of the system. Note that this adds a slightly different flavor to our current understanding of model-based testing. Rather than testing a system, we are now checking the adequacy of environment assumptions. This is likely to be influential w.r.t. the choice of test cases.
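The trace comparison just described can be sketched as follows. This is only an illustration of the idea, not a particular tool's interface: the trace format and the `system_trace_of` adapter are assumptions made for the example.

```python
# Sketch: checking environment assumptions by comparing selected model
# traces against traces observed on the running system. The trace format
# (a list of (input, output) pairs) is an illustrative assumption.

def diverging_traces(model_traces, system_trace_of):
    """Return the model traces whose observed system behavior differs.

    model_traces:    iterable of (input, output) sequences predicted
                     by the model.
    system_trace_of: function mapping an input sequence to the trace
                     actually observed on the system under test.
    """
    divergences = []
    for trace in model_traces:
        inputs = [step[0] for step in trace]
        observed = system_trace_of(inputs)
        if observed != trace:
            # Since the code was generated from the same model, a mismatch
            # more likely points to an inadequate environment assumption
            # encoded in the model than to a coding error.
            divergences.append((trace, observed))
    return divergences
```

A divergence reported here does not yield a pass/fail verdict on the code; it flags a model assumption that deserves scrutiny.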

Depending on which parts of a model are used for which purpose, this scenario usually restricts the possible abstractions to those that involve a loss of information that can be coped with by means of macro expansion (Sec. 10.2).
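As a small sketch of what such a recoverable abstraction looks like, each abstract model action can be mechanically expanded into a fixed sequence of concrete steps. The action names and the expansion table below are invented for illustration:

```python
# Sketch of macro expansion: an abstract action of the model expands into
# a fixed sequence of concrete system-level steps, so the information lost
# by the abstraction is mechanically recoverable. All names are invented.

MACROS = {
    "authenticate": ["open_session", "send_credentials", "await_ack"],
    "transfer":     ["lock_account", "move_funds", "unlock_account"],
}

def expand(abstract_trace):
    """Expand an abstract test case into a concrete one."""
    concrete = []
    for action in abstract_trace:
        # Actions without a macro are taken to be concrete already.
        concrete.extend(MACROS.get(action, [action]))
    return concrete
```

Abstractions that cannot be bridged by such a table (e.g., ones requiring unbounded search to concretize) fall outside what this scenario permits.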

10.3.2 Automatic Model Extraction<br />

Our second scenario is concerned with extracting models from an existing system (Fig. 10.4). The process of building the system is conventional: somehow, a specification is built, and then the system is hand coded. Once the system is built, one creates a model manually or automatically, and this model is then used for test case generation.

[Figure] Fig. 10.4. A model is automatically extracted from code

Automatically extracting abstractions from code or more concrete models is a rather active branch of computer science [Hol01, GS97, SA99] which we will not discuss here. The abstractions should be created in a way such that at least some (and identifiable) statements about them should carry over to
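A minimal sketch of one such extraction technique, under two strong assumptions made only for illustration: the system exposes a step interface, and its relevant state is directly observable and finite once abstracted. Real extraction tools work on source code or richer models.

```python
from collections import deque

# Sketch: extracting a finite-state abstraction from an implementation by
# breadth-first exploration of its reachable (abstracted) states. The
# interface (initial state, step function, input alphabet) is assumed.

def extract_model(initial, step, inputs):
    """Explore reachable states; return the transition relation.

    initial: the abstracted initial state (must be hashable).
    step:    function (state, input) -> successor state.
    inputs:  the finite input alphabet to explore.
    Returns a dict mapping (state, input) -> successor state.
    """
    transitions = {}
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for i in inputs:
            succ = step(state, i)
            transitions[(state, i)] = succ
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return transitions
```

For example, a counter modulo 3 with the single input `1` yields the three-state cycle `0 -> 1 -> 2 -> 0`; statements proved about this extracted model (e.g., "state 0 is always re-reachable") are then candidates to carry over to the implementation.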
