Sim, Easterbrook, and Holt define a theory of benchmarking aimed at guiding the development of new benchmarks in software engineering. The Sim et al. benchmarking theory places particular emphasis on the scope and definition of benchmarks. With respect to scope, Sim et al. focus on “…benchmarks that are created and used by a technical research community”. They describe the definition of a benchmark as consisting of three components:
- Creation of a motivating comparison. The comparison should clearly outline the purpose of the benchmark and motivate its usage.
- Development of a task sample. The task sample should be representative of the possible tasks that the technique or tool may encounter during actual usage.
- Identification or development of performance measures. The performance measures ensure that we can assess the fitness of a technique or tool when used with the task sample.
In this project we aim to evaluate existing concurrency benchmarks using the Sim et al. theory of benchmarking. Furthermore, if necessary, we will propose criteria for the development of a new benchmark. Our focus is on the evaluation and development of a benchmark consisting of Java programs that can be used to evaluate quality assurance techniques for concurrency.
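To make the notion of a task sample concrete, the following is a minimal sketch (with hypothetical class and method names, not taken from any existing benchmark) of the kind of small Java program such a benchmark might contain: a shared counter with an unsynchronized increment that produces a data race, alongside a synchronized variant that fixes it. A concurrency bug detection tool would be assessed on whether it flags the race in the buggy version.

```java
// Hypothetical task-sample program: a data race on a shared counter.
public class CounterTask {
    static class BuggyCounter {
        int count = 0;
        // Unsynchronized read-modify-write: two threads can interleave
        // between the read and the write, losing updates (a data race).
        void increment() { count++; }
    }

    static class SafeCounter {
        int count = 0;
        // Mutual exclusion makes the read-modify-write atomic.
        synchronized void increment() { count++; }
    }

    // Runs two threads that each increment the chosen counter 100,000 times.
    static int run(boolean safe) throws InterruptedException {
        BuggyCounter buggy = new BuggyCounter();
        SafeCounter good = new SafeCounter();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                if (safe) good.increment(); else buggy.increment();
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return safe ? good.count : buggy.count;
    }

    public static void main(String[] args) throws InterruptedException {
        // The safe variant always totals 200,000; the buggy variant may
        // fall short under an unlucky thread interleaving.
        System.out.println("safe total  = " + run(true));
        System.out.println("buggy total = " + run(false));
    }
}
```

A performance measure for this task might then record whether a given tool reports the race in `BuggyCounter` and stays silent on `SafeCounter`.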
- Jeremy S. Bradbury, Itai Segall (Bell Labs), Eitan Farchi (IBM Haifa Lab), Kevin Jalbert, David Kelk, Kristina Glinos
- Jeremy S. Bradbury, Itai Segall, Eitan Farchi, Kevin Jalbert, David Kelk. “Using Combinatorial Benchmark Construction to Improve the Assessment of Concurrency Bug Detection Tools,” Proc. of the 10th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging (PADTAD 2012), pages 25-35, Minneapolis, Minnesota, Jul. 2012.
- S. E. Sim, S. Easterbrook, and R. C. Holt. “Using Benchmarking to Advance Research: A Challenge to Software Engineering,” Proc. of the 25th International Conference on Software Engineering (ICSE 2003), pages 74–83, May 2003.