Safety, Testing and Self-Driving Cars

Tesla S Autopilot [Photo credit: Marco Verch, used under CC BY 2.0]

Any system whose erroneous behaviour can lead to serious injury or loss of life is classified as a safety-critical system. Self-driving or autonomous vehicles fall squarely into this category: a vehicle malfunction can injure or kill the driver, passengers, or people outside the vehicle. Because the stakes are so high, it is paramount that developers of self-driving vehicles ensure their systems work safely before deploying them to users on public roads.

In the field of self-driving vehicles, it is not clear that this best practice is always followed. While self-driving vehicles are tested extensively using computer simulation and closed-circuit test tracks, they are also tested on public roads. For example, driver-assistance systems like Tesla's Autopilot have been beta-tested by real users, and fully autonomous vehicles such as Uber's self-driving car have been tested outside of controlled settings on public roads. When testing occurs in public, the vehicle under test is surrounded by pedestrians and drivers who may be completely unaware that their interactions are helping to test and improve an autonomous vehicle. This was the case on March 18, 2018, in Tempe, Arizona, when Uber's self-driving car, with a human driver present, hit and killed a pedestrian (see SFGate).
