Comparative Assessment of Testing and Model Checking Using Program Mutation

by Jeremy S. Bradbury, James R. Cordy, Juergen Dingel

Abstract

Developing correct concurrent code is more difficult than developing correct sequential code. This difficulty is due in part to the many different, possibly unexpected, executions of the program, and it motivates special quality assurance techniques for concurrent programs such as randomized testing and state space exploration. In this paper we present an approach that uses metrics to assess the effectiveness and efficiency of testing and formal analysis tools at finding concurrency bugs. Using program mutation, the assessment method creates a range of faulty versions of a program and then evaluates the ability of various testing and formal analysis tools to detect these faults. The approach is implemented and automated in an experimental mutation analysis framework (ExMAn), which allows results to be more easily reproduced. To demonstrate the approach, we present the results of a comparison of testing using the IBM tool ConTest and model checking using the NASA tool Java PathFinder (JPF).
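To make the mutation-based assessment concrete, the sketch below shows one plausible kind of concurrency mutant in Java: a class whose synchronized keyword has been removed, introducing a data race. The class names and the specific mutation operator are illustrative assumptions, not taken from the paper; a tool under assessment (a randomized tester such as ConTest, or a model checker such as JPF) would be judged effective on this mutant if it can expose the resulting lost update.

    // Illustrative sketch only: a hand-written concurrency mutant of the kind a
    // mutation-based assessment might generate. Class names and the operator
    // (removing "synchronized") are assumptions for illustration.
    public class CounterMutantDemo {

        // Original version: increments are serialized by the intrinsic lock.
        static class Counter {
            private int value = 0;
            synchronized void increment() { value++; }
            synchronized int get() { return value; }
        }

        // Mutant: "synchronized" removed, creating an unsynchronized
        // read-modify-write and therefore a data race.
        static class CounterMutant {
            private int value = 0;
            void increment() { value++; }
            int get() { return value; }
        }

        public static void main(String[] args) throws InterruptedException {
            CounterMutant c = new CounterMutant();
            Runnable work = () -> { for (int i = 0; i < 100_000; i++) c.increment(); };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join(); t2.join();
            // On some interleavings the final count is less than 200000;
            // observing such a lost update "kills" the mutant.
            System.out.println("final count = " + c.get() + " (expected 200000)");
        }
    }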

Bibliographic Information [Bibtex format]

@inproceedings{BCD07,
Author = {Jeremy S. Bradbury and James R. Cordy and Juergen Dingel},
Booktitle = {Proc. of the 3rd Workshop on Mutation Analysis (Mutation 2007)},
Month = {September},
Pages = {210--219},
Title = {Comparative Assessment of Testing and Model Checking Using Program Mutation},
Year = {2007}}

Paper: [PDF]   Presentation: [PDF]