Safety, Testing and Self-Driving Cars

Tesla S Autopilot [Photo credit: Marco Verch, used under CC BY 2.0]

Any system where erroneous behaviour can lead to serious injury or a potential loss of life is classified as a safety-critical system. This is true for self-driving or autonomous vehicles, where a vehicle malfunction can lead to the injury or death of the driver, passengers or others outside the vehicle. The potential for injury or death is why it is paramount that the developers of self-driving vehicles ensure the systems work safely before deploying them to users on public roads. In the field of self-driving vehicles, it is not clear if this best practice is always being followed. While self-driving vehicles are tested extensively using computer simulation and closed-circuit test tracks, they are also tested on public roads. For example, driver assistance systems like Tesla’s Autopilot have been beta-tested by real users. Fully autonomous vehicles such as Uber’s self-driving car have also been tested outside of controlled settings on public roads. In cases where testing occurs in public, the vehicle under test is surrounded by pedestrians and drivers who may be completely unaware that their interaction is helping to test and improve an autonomous vehicle. This was the case on March 18, 2018, in Tempe, Arizona, when Uber’s self-driving car, with a human driver present, hit and killed a pedestrian (see SFGate).

Continue reading

Interesting Quotes from AMD Canada Event


On March 7th of last year I attended an OCE event called the “AMD HSA and Heterogeneous Computing Research Showcase.” I recently came across my notes from the event and I thought a few quotes from the keynote speaker, Phil Rogers from AMD Canada, were worth sharing.

Phil Rogers on AMD’s commitment to open industry standards:

“open standards always win over time.” 

Phil Rogers on programming with threads:

“An expert can get two threads right. An expert can sometimes get three threads right… but cannot get all of the test cases right… doesn’t scale (to 100s of threads).”

Predicting Mutation Scores

Last week my MSc student, Kevin Jalbert, presented his early thesis results at the Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE 2012). The workshop took place in Zurich, Switzerland, and was co-located with ICSE 2012. The title of the presentation (and the paper that appears in the proceedings) was “Predicting Mutation Score Using Source Code and Test Suite Metrics.” The paper received the Best Paper Award at the workshop.

Mutation testing can be used to evaluate the effectiveness of test suites and can also be used as an oracle during the creation or improvement of test suites. Mutation testing works by creating many versions of a program, each with a single syntactic fault. These program versions are created using mutation operators, which are based on an existing fault taxonomy (i.e., a set of known fault types that we are trying to find during testing). One mutation operator, Relational Operator Replacement (ROR), could create a new mutant version of the program in which one instance of a relational operator (e.g., <) is replaced with a different operator. For example, line 3 of the following Java source code:

Continue reading
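The post’s actual code example sits behind the read-more link above. As a minimal sketch of what an ROR mutation does (the class, method, and choice of operator below are my own illustrative assumptions, not the example from the paper), the operator might replace `<` with `<=`:

```java
// Illustrative only: a tiny method and the mutant that ROR could produce from it.
public class BoundsCheck {

    // Original: true only while index is strictly inside the array bounds.
    public static boolean inBounds(int index, int[] values) {
        return index < values.length;    // original relational operator
    }

    // ROR mutant: `<` replaced with `<=`, so index == values.length is wrongly accepted.
    public static boolean inBoundsMutant(int index, int[] values) {
        return index <= values.length;   // mutated relational operator
    }
}
```

A test suite only kills this mutant if it includes a case where `index == values.length`, the single input on which the original program and the mutant disagree, which is exactly the kind of test-suite weakness mutation testing is designed to reveal.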

Software Testing Tools for Concurrency

As the use of concurrency in software increases, more software testing tools are being developed in both industry and academia. Below is a fairly comprehensive list of tools (mainly for Java); a short sketch of the noise-injection idea behind tools like ConTest follows the list:
  • ConTest – IBM concurrent testing tool that inserts random delays into Java bytecode
  • ConAn – a thread testing tool for Java
  • CHESS – a Microsoft tool for concurrent testing in Visual Studio
  • CalFuzzer – an active testing framework
  • ConMAn – mutation testing tool for concurrent programs written in Java

Continue reading
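As mentioned above, noise-injection tools such as ConTest work by adding extra delays around shared-memory accesses and synchronization points so that rare thread interleavings are exercised more often. The sketch below illustrates that idea by hand (it is not ConTest’s actual instrumentation or API): a deliberately inserted sleep widens the read-modify-write window on an unsynchronized counter, making the lost-update bug appear in almost every run.

```java
// Illustrative only: a racy counter plus hand-inserted "noise" that makes the
// lost-update bug far more likely to show up during a test run.
public class RacyCounter {
    private int count = 0;

    public void increment() {
        int current = count;        // read
        try {
            Thread.sleep(1);        // injected noise: widens the read-modify-write window
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        count = current + 1;        // write (may overwrite another thread's update)
    }

    public static void main(String[] args) throws InterruptedException {
        RacyCounter counter = new RacyCounter();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 100; i++) counter.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 100; i++) counter.increment(); });
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Expected 200; with the injected delay the observed value is almost always lower.
        System.out.println("count = " + counter.count);
    }
}
```

A real noise-injection tool inserts and randomizes these delays automatically, typically at the bytecode level, rather than requiring the developer to place them by hand.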