
nets. Due to limited space, this paper will not review the work on code-based testing or test execution frameworks of concurrent programs.

A. Specification-based Testing of Concurrent Programs

Carver and Tai [6][7] have developed a specification-based approach to testing concurrent programs. Sequencing constraints on succeeding and preceding events (CSPE) are used to specify restrictions on the allowed sequences of synchronization events. They described how to achieve coverage and detect violations of CSPE constraints based on a combination of deterministic and nondeterministic testing. Chung et al. [8] developed a specification-based approach to testing concurrent programs against Message Sequence Charts (MSCs) with partial and nondeterministic semantics. Based on the sequencing constraints on the execution of a concurrent program captured by MSCs, this approach verifies the conformance relations between the concurrent program and the MSC specification. Seo et al. [9] presented an approach to generating representative test sequences from statecharts that model the behaviors of a concurrent program. Representative test sequences are a subset of all possible interleavings of concurrent events; they are used as seeds to generate automata that accept equivalent sequences. Wong and Lei [10] have presented four methods for generating test sequences that cover all the nodes in a given reachability graph, which is typically constructed from the design of a concurrent program. Test sequences are generated based on hot-spot prioritization or topological sort. Their method does not address automated generation of data input or the mapping of the abstract SYN-sequences derived from the reachability graph to concrete implementation sequences. Different from the above work, our approach can generate executable multi-threaded test code from test models.
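
To make this concrete, the sketch below shows, by way of illustration only, the general shape of an executable multi-threaded pthread test driver; the SUT (a mutex-protected shared counter), the operation name counter_increment, and the expected-value oracle are assumptions introduced for this example rather than output of the tool described in this paper.

/*
 * Illustrative sketch only: a hand-written example of the kind of
 * executable pthread test driver that model-based generation aims to
 * produce.  The SUT here is a hypothetical mutex-protected counter;
 * the names counter_increment and EXPECTED are assumptions for this
 * example.  Compile with: cc -pthread driver.c
 */
#include <assert.h>
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define INCREMENTS_PER_THREAD 1000
#define EXPECTED (NUM_THREADS * INCREMENTS_PER_THREAD)

static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

/* SUT operation under test: a mutex-protected increment. */
static void counter_increment(void) {
    pthread_mutex_lock(&counter_lock);
    counter++;
    pthread_mutex_unlock(&counter_lock);
}

/* Each test thread exercises the SUT operation repeatedly. */
static void *test_thread(void *arg) {
    (void)arg;
    for (int i = 0; i < INCREMENTS_PER_THREAD; i++)
        counter_increment();
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_THREADS];

    /* Spawn the test threads and wait for all of them to finish. */
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, test_thread, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);

    /* Test oracle: the final counter value must match the expected total. */
    assert(counter == EXPECTED);
    printf("test passed: counter = %ld\n", counter);
    return 0;
}

A generated driver of this kind typically combines thread creation, invocation of SUT operations in the order prescribed by the test model, and an oracle check after the threads are joined.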

B. Testing with Petri Nets

Zhu and He [11] have proposed a methodology for testing high-level Petri nets, but it does not address how tests can be generated to meet the criteria. Lucio et al. [12] proposed a semi-automatic approach to test case generation from CO-OPN specifications. CO-OPN is a formal object-oriented specification language based on abstract data types and Petri nets. This approach transforms a CO-OPN specification into a Prolog program for test generation purposes.

VII. CONCLUSIONS

We have presented an approach to automated testing of concurrent pthread programs. It can automatically generate (non)deterministic test sequences as well as multi-threaded test code. These functions, together with the existing MISTA tool, provide a useful toolkit for testing concurrent programs and offer two important benefits. First, software testing would not be effective without a good understanding of the SUT. Using our approach, testers improve their understanding of the SUT while documenting their thinking through the creation of test models. Test models can clearly capture test requirements for software assurance purposes. Second, automated generation of test code from test models can improve productivity and reduce cost. When concurrent software needs many tests, the tests can be generated automatically from the test models. This makes it easier to cope with frequent changes of requirements and design features because only the test models, not the test code, need to be updated. The savings on the manual development of test code also allow testers to focus on the intellectually challenging aspect of testing: understanding what needs to be tested in order to achieve a high level of assurance. Our case studies, based on small programs, primarily demonstrate technical feasibility. We expect to apply our approach to real-world applications that require a large number of concurrent tests in order to reach a high level of quality assurance.

Our future work will adapt the approach to automated generation of concurrent code in other languages and thread libraries, e.g., Java, C#, and C++. In addition, the effectiveness of the test cases generated from test models essentially depends on the test models and the test generation strategies. We plan to evaluate the effectiveness of our approach by investigating what types of bugs in concurrent programs can be revealed by the test cases generated from the test models. To this end, we will first define a fault model of concurrent programs in a given target language (e.g., C/pthread), create mutants by deliberately injecting faults into the subject programs, and test the mutants with the test code generated from the test models. A mutant is said to be killed if the test code reports a failure. Usually the percentage of mutants killed by the tests is a good indicator of the fault detection capability of the test cases.
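
As a hypothetical illustration of this planned mutation analysis (not an example taken from the paper's case studies), a common concurrency fault to inject in a C/pthread subject program is an omitted lock acquisition, which turns a protected update into a data race. Reusing the counter and counter_lock declarations from the illustrative driver sketched earlier:

/*
 * Hypothetical "omitted lock" mutant (illustration only): the mutex
 * acquisition around the shared update is removed, injecting a data race.
 * This reuses the counter and counter_lock globals from the earlier
 * illustrative test driver.
 */
static void counter_increment_mutant(void) {
    /* pthread_mutex_lock(&counter_lock);   -- injected fault: lock omitted */
    counter++;   /* now an unsynchronized read-modify-write on shared state */
    /* pthread_mutex_unlock(&counter_lock); */
}

Such a mutant is killed only when the generated tests drive an interleaving in which two threads actually race on the shared update, which is one reason both deterministic sequencing and repeated nondeterministic runs are useful; the mutation score is then the fraction of injected mutants killed by the test suite.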

REFERENCES

[1] H. J. Genrich, "Predicate/Transition Nets," in: W. Brauer, W. Reisig, and G. Rozenberg (eds.), Petri Nets: Central Models and Their Properties, Springer-Verlag, London, 1987, pp. 207-247.
[2] T. Murata, "Petri Nets: Properties, Analysis and Applications," Proc. of the IEEE, vol. 77, no. 4, 1989, pp. 541-580.
[3] D. Xu, "A Tool for Automated Test Code Generation from High-Level Petri Nets," Proc. of the 32nd International Conf. on Application and Theory of Petri Nets and Concurrency (Petri Nets 2011), LNCS 6709, Newcastle, UK, June 2011, pp. 308-317.
[4] R. H. Carver and K. C. Tai, Modern Multithreading: Implementing, Testing, and Debugging Multithreaded Java and C++/Pthreads/Win32 Programs, Wiley, 2006.
[5] D. Buttlar, J. Farrell, and B. Nichols, PThreads Programming: A POSIX Standard for Better Multiprocessing, O'Reilly Media, Sept. 1996.
[6] R. H. Carver and K. C. Tai, "Use of Sequencing Constraints for Specification-based Testing of Concurrent Programs," IEEE Trans. on Software Engineering, vol. 24, no. 6, 1998, pp. 471-490.
[7] R. H. Carver and K. C. Tai, "Test Sequence Generation from Formal Specifications of Distributed Programs," Proc. of the 15th IEEE International Conference on Distributed Computing Systems (ICDCS'95), 1995, pp. 360-367.
[8] I. S. Chung, H. S. Kim, H. S. Bae, Y. R. Kwon, and B. S. Lee, "Testing of Concurrent Programs based on Message Sequence Charts," Proc. of the International Symposium on Software Engineering for Parallel and Distributed Systems (PDSE '99), pp. 72-82.
[9] H. S. Seo, I. S. Chung, and Y. R. Kwon, "Generating Test Sequences from Statecharts for Concurrent Program Testing," IEICE Trans. Inf. Syst., vol. E89-D, no. 4, 2006, pp. 1459-1469.
[10] W. E. Wong and Y. Lei, "Reachability Graph-Based Test Sequence Generation for Concurrent Programs," International Journal of Software Engineering and Knowledge Engineering, vol. 18, no. 6, 2008, pp. 803-822.
[11] H. Zhu and X. He, "A Methodology for Testing High-Level Petri Nets," Information and Software Technology, vol. 44, 2002, pp. 473-489.
[12] L. Lucio, L. Pedro, and D. Buchs, "Semi-Automatic Test Case Generation from CO-OPN Specifications," Proc. of the Workshop on Model-Based Testing and Object-Oriented Systems, 2006, pp. 19-26.

