SEKE 2012 Proceedings - Knowledge Systems Institute

Figure 3. The PF and PTS results for the subject program

Figure 4. The APFD and APFD_R results for the subject program

4.2 Results

First, we examine the effectiveness of test case selection. Figure 3 shows the PF and PTS values for the subject program during its evolution. From the PF results, we see that our approach could select the test cases covering most of the faults in the program: the percentage of identified faults reaches above 80% in most cases. In addition, Figure 3 shows the percentage of test cases that were selected for each version of the subject. It illustrates that the percentage of selected test cases varies widely, from about 30% to 80%, which is similar to the results of other studies evaluating test case selection on procedural programs [13]. The results do not show any obvious difference peculiar to the object-oriented paradigm. Hence, our approach can select a small set of test cases that identifies most of the faults.
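The two selection metrics above can be sketched directly from their descriptions: PF as the percentage of faults identified by the selected tests, and PTS as the percentage of the original suite that was selected. The following is a minimal illustration, assuming a per-test fault-detection mapping is available; the test and fault identifiers are illustrative, not taken from the study.

```python
def pf_and_pts(original_suite, selected_suite, faults_detected, num_faults):
    """Return (PF, PTS) as percentages.

    original_suite: list of test IDs in the original suite.
    selected_suite: list of test IDs chosen by the selection technique.
    faults_detected: dict mapping test ID -> set of fault IDs it exposes.
    num_faults: total number of (seeded) faults.
    """
    # Union of all faults exposed by the selected tests.
    covered = set()
    for test in selected_suite:
        covered |= faults_detected.get(test, set())
    pf = 100.0 * len(covered) / num_faults          # percentage of faults found
    pts = 100.0 * len(selected_suite) / len(original_suite)  # suite fraction
    return pf, pts
```

For example, selecting 2 of 4 tests that together expose 3 of 4 seeded faults gives PF = 75% and PTS = 50%.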

Second, we examine the effectiveness of test case prioritization. To evaluate this, we took the newly prioritized test suite T′ and randomly reordered it to obtain another test suite T′′ based on T′. We then computed the APFD value for T′′ (denoted APFD_R). Figure 4 shows the APFD and APFD_R values obtained by our prioritization approach and the random prioritization approach, respectively. The data illustrates that our regression testing prioritization approach yields higher APFD values than the random prioritization approach in all cases. Moreover, in most cases, the APFD values of our approach are 10% (or more) higher than those of random prioritization. This shows that our prioritization approach is more effective than random prioritization because of its better ability to detect faults early.
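The APFD metric used in this comparison can be computed with the standard formula APFD = 1 − (TF₁ + … + TF_m)/(n·m) + 1/(2n), where n is the number of tests, m the number of faults, and TF_i the position of the first test that exposes fault i. The sketch below assumes every fault is exposed by at least one test in the ordering; the data-structure shape is an illustrative assumption, not the paper's implementation.

```python
def apfd(order, faults_detected, num_faults):
    """Compute APFD for one test ordering.

    order: list of test IDs in execution order (n tests).
    faults_detected: dict mapping test ID -> set of fault IDs it exposes.
    num_faults: total number of faults m (each assumed detectable).
    """
    n = len(order)
    # TF_i: 1-based position of the first test that exposes fault i.
    first_pos = {}
    for pos, test in enumerate(order, start=1):
        for fault in faults_detected.get(test, set()):
            first_pos.setdefault(fault, pos)
    tf_sum = sum(first_pos.values())
    return 1 - tf_sum / (n * num_faults) + 1 / (2 * n)
```

An ordering that exposes faults earlier yields a larger TF-weighted reduction and hence a higher APFD, which is why the comparison in Figure 4 uses it as the rate-of-fault-detection measure.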

4.3 Threats to Validity

In this section, we discuss some threats to the validity of our empirical study. The main external threat is the representativeness of our subjects and mutation faults. The subject program is small, so we cannot guarantee that the results generalize to more complex or arbitrary programs. However, it is a real-world program widely used in our lab [14]. A second concern is the threat to construct validity. The goal of test case prioritization is to maximize some predefined criterion by executing the test cases in a certain order. Here, our focus is to maximize the rate of fault detection, and we used APFD to measure it. However, APFD is not the only possible measure of fault detection rate; other measures may yield different results. Finally, we consider the threat to internal validity. In our experiment, we utilized mutation faults to study test case prioritization. Other methods of generating faults might be better suited to evaluating our regression testing approach. However, mutation faults are widely used by the academic community in evaluating test case prioritization [4, 5].

5 Related Work

A number of approaches have been studied to facilitate the regression testing process. Test case selection and prioritization are two effective techniques for conducting regression testing.

Test case selection attempts to reduce the size of the original test suite and focuses on identifying the modified parts of the program. To date, many regression test selection techniques have been proposed. Rothermel et al. proposed a safe and efficient regression test selection technique based on comparing differences in the control flow graph [13]. In addition, regression testing based on slicing techniques has attracted much attention for a long time, and Binkley summarized the application of slicing techniques to support regression testing [1]. Orso et al. also proposed a regression testing approach similar to ours, which used the impact results from the CIA process to support regression testing [12]. They leveraged

