...being used during the testing process. It may have little or no connection to the likelihood of errors manifesting once the system is delivered or to the total number of errors present in the software.

Another common approach to testing is based on requirements analysis. A requirements specification is converted into test cases, which are then executed so that testing verifies system behavior for at least one test case within the scope of each requirement. Although this approach is an important part of a comprehensive testing effort, it is certainly not a complete solution. Even setting aside the fact that requirements documents are notoriously error-prone, requirements are written at a much higher level of abstraction than code. This means that there is much more detail in the code than the requirement, so a test case developed from a requirement tends to exercise only a small fraction of the software that implements that requirement. Testing only at the requirements level may miss many sources of error in the software itself.
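
To make the abstraction gap concrete, consider a small sketch in C. The requirement, the function, and the test value below are hypothetical illustrations, not material from this document; the point is that one natural requirement-derived test passes while leaving most of the implementation unexecuted.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical requirement: "The program shall report whether
       sides a, b, c form a valid triangle." */
    bool is_triangle(int a, int b, int c)
    {
        if (a <= 0 || b <= 0 || c <= 0)
            return false;               /* reject non-positive sides */
        if (a + b <= c || a + c <= b || b + c <= a)
            return false;               /* reject inequality violations */
        return true;
    }

    int main(void)
    {
        /* A natural test derived from the requirement.  It verifies
           the requirement for this input, but neither "return false"
           statement ever executes: most of the implementation detail
           goes untested. */
        printf("%d\n", is_triangle(3, 4, 5));   /* prints 1 */
        return 0;
    }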

The structured testing methodology falls into another category, the white box (or code-based, or glass box) testing approach. In white box testing, the software implementation itself is used to guide testing. A common white box testing criterion is to execute every executable statement during testing, and verify that the output is correct for all tests. In the more rigorous branch coverage approach, every decision outcome must be executed during testing. Structured testing is still more rigorous, requiring that each decision outcome be tested independently. A fundamental strength that all white box testing strategies share is that the entire software implementation is taken into account during testing, which facilitates error detection even when the software specification is vague or incomplete. A corresponding weakness is that if the software does not implement one or more requirements, white box testing may not detect the resultant errors of omission. Therefore, both white box and requirements-based testing are important to an effective testing process. The rest of this document deals exclusively with white box testing, concentrating on the structured testing methodology.
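
The difference between the statement and branch criteria can be seen in another hypothetical C sketch (again, the function and test values are illustrative assumptions, not examples from this document). A single test executes every statement, yet branch coverage demands a second test for the decision's false outcome.

    #include <stdio.h>

    /* Clamp negative values to zero. */
    int clamp_nonnegative(int x)
    {
        int result = x;
        if (x < 0)          /* one decision, two outcomes */
            result = 0;
        return result;
    }

    int main(void)
    {
        /* Statement coverage: the single test x = -5 executes every
           statement, because the "if" body is entered. */
        printf("%d\n", clamp_nonnegative(-5));  /* prints 0 */

        /* Branch coverage: the false outcome of "x < 0" has not yet
           executed, so a second test with x >= 0 is also required. */
        printf("%d\n", clamp_nonnegative(7));   /* prints 7 */
        return 0;
    }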

1.2 Software complexity measurement

Software complexity is one branch of software metrics that is focused on direct measurement of software attributes, as opposed to indirect software measures such as project milestone status and reported system failures. There are hundreds of software complexity measures [ZUSE], ranging from the simple, such as source lines of code, to the esoteric, such as the number of variable definition/usage associations.
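
As a rough illustration of that range, the hypothetical C function below is annotated with the kind of result each of the two measures just named would give; the counts are informal, by-inspection figures, not the output of any particular tool.

    /* A small function viewed through two different complexity
       measures; the function itself is purely illustrative. */
    int sum_positive(const int *v, int n)
    {
        int sum = 0;                    /* definition of sum */
        for (int i = 0; i < n; i++)     /* definition and uses of i */
            if (v[i] > 0)
                sum += v[i];            /* use and redefinition of sum */
        return sum;                     /* final use of sum */
    }
    /* Source lines of code: 8 physical lines, counted directly.
       Definition/usage associations: every pairing of a definition of
       "sum" or "i" with a statement that may use that value, a count
       that requires data-flow analysis rather than simple counting. */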

An important criterion for metrics selection is uniformity of application, also known as “open reengineering.” The reason “open systems” are so popular for commercial software applications is that the user is guaranteed a certain level of interoperability: the applications work together in a common framework, and applications can be ported across hardware platforms with minimal impact. The open reengineering concept is similar in that the abstract models used to represent software systems should be as independent as possible of implementation characteristics such as source code formatting and programming language.
