Safety, Testing and Self-Driving Cars

Tesla S Autopilot [Photo credit: Marco Verch, used under CC BY 2.0]

Any system whose erroneous behaviour can lead to serious injury or a potential loss of life is classified as a safety-critical system. This is true for self-driving or autonomous vehicles, where a vehicle malfunction can lead to the injury or death of the driver, passengers or others outside the vehicle. That potential for injury or death is why it is paramount that developers of self-driving vehicles ensure the system works safely before deploying it to users on public roads. In the field of self-driving vehicles, it is not clear that this best practice is always being followed. While self-driving vehicles are tested extensively using computer simulation and closed-circuit test tracks, they are also tested on public roads. For example, driver-assistance systems like Tesla’s Autopilot have been beta-tested by real users, and fully autonomous vehicles such as Uber’s self-driving car have also been tested outside of controlled settings on public roads. When testing occurs in public, the vehicle under test is surrounded by pedestrians and drivers who may be completely unaware that their interactions are helping to test and improve an autonomous vehicle. This was the case on March 18, 2018, in Tempe, Arizona, when Uber’s self-driving car, with a human driver present, hit and killed a pedestrian (see SFGate).

More recently, news reports highlighting crashes of Tesla vehicles with semi-autonomous and autonomous capabilities have again brought the issues of safety and testing to the forefront. “Three crashes involving Teslas that killed three people have increased scrutiny of the company’s Autopilot driving system, just months before CEO Elon Musk has planned to put fully self-driving cars on the streets” (see CBC). It is important to note that, unlike Uber’s self-driving car, a Tesla with Autopilot is not a fully autonomous driving system – “…Autopilot is a hands-on driver assistance system that is intended to be used only with a fully attentive driver. It does not turn a Tesla into a self-driving car nor does it make a car autonomous” (see Tesla website). The two main features of Autopilot are Traffic-Aware Cruise Control, for matching the vehicle’s speed to traffic, and Autosteer, for steering within a specific lane. Navigate on Autopilot is a Tesla beta feature that autonomously handles lane changes and the navigation of highway interchanges and ramps.

After reviewing the available details on the recent Tesla crashes from a software development perspective, two immediate questions come to mind:

  1. Does the Autopilot system design sufficiently engage the driver to pay attention while Autopilot is in use? According to the Tesla website, “Before enabling Autopilot, you must agree to ‘keep your hands on the steering wheel at all times’ and to always ‘maintain control and responsibility for your car.’ Once engaged, Autopilot will also deliver an escalating series of visual and audio warnings, reminding you to place your hands on the wheel if insufficient torque is applied. If you repeatedly ignore these warnings, you will be locked out from using Autopilot during that trip.” Does having one’s hands on the wheel equate to paying attention to the road and surrounding environment? I would argue the answer is no. However, it is important to assess this question using data and not just intuition, which is why it would be interesting to understand how Tesla tests that the Autopilot warning system actually maintains driver attention. (A toy sketch of this kind of escalating warning policy appears after this list.)
  2. Assuming Autopilot was engaged at the time of the recent crashes, why weren’t the vehicles able to avoid a collision? One of the recent crashes involved a collision with a parked fire truck and another involved a collision with a police cruiser. It’s possible that these are uncommon driving scenarios (edge cases) that are infrequently encountered during normal vehicle operation. Is a failure to avoid such a collision the result of a missed edge-case test, a technical limitation or something else? (A sketch of a scenario-style test for a stationary-vehicle edge case also follows this list.)
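To make the first question concrete, here is a minimal Python sketch of how an escalating hands-on-wheel warning policy of the kind Tesla describes might be modelled and exercised in tests. The class, the alert stages and the timing thresholds are all hypothetical illustrations, not Tesla’s actual design or values.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Alert(Enum):
    NONE = auto()
    VISUAL = auto()   # on-screen reminder
    AUDIO = auto()    # chime plus on-screen reminder
    LOCKOUT = auto()  # Autopilot disabled for the rest of the trip


@dataclass
class HandsOnWheelMonitor:
    """Hypothetical model of an escalating hands-on-wheel warning policy.

    The thresholds (seconds without detected steering torque) are invented
    for illustration; they are not Tesla's actual values.
    """
    visual_after_s: float = 15.0
    audio_after_s: float = 30.0
    lockout_after_s: float = 45.0
    _seconds_without_torque: float = field(default=0.0, init=False)
    _locked_out: bool = field(default=False, init=False)

    def update(self, torque_detected: bool, dt_s: float) -> Alert:
        """Advance the monitor by dt_s seconds and return the current alert level."""
        if self._locked_out:
            return Alert.LOCKOUT
        if torque_detected:
            self._seconds_without_torque = 0.0
            return Alert.NONE
        self._seconds_without_torque += dt_s
        if self._seconds_without_torque >= self.lockout_after_s:
            self._locked_out = True
            return Alert.LOCKOUT
        if self._seconds_without_torque >= self.audio_after_s:
            return Alert.AUDIO
        if self._seconds_without_torque >= self.visual_after_s:
            return Alert.VISUAL
        return Alert.NONE


if __name__ == "__main__":
    monitor = HandsOnWheelMonitor()
    # Simulate a driver who keeps their hands off the wheel for 50 seconds.
    for _ in range(50):
        alert = monitor.update(torque_detected=False, dt_s=1.0)
    print(alert)  # Alert.LOCKOUT
```

Driving such a model with recorded torque traces would let a test suite verify the escalation and lockout behaviour, but as noted above, torque on the wheel is only a proxy for attention; validating that proxy requires data on where drivers are actually looking.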
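For the second question, a scenario-based regression test might look something like the sketch below. It checks a deliberately simplified braking rule (stopping distance v²/2a compared to the range of a stationary obstacle) against the kinds of edge cases mentioned above. The functions, the deceleration constant and the scenarios are invented for illustration and bear no relation to any manufacturer’s actual planning stack.

```python
import unittest

# Assumed maximum braking deceleration for this simplified check (m/s^2).
MAX_DECEL_MPS2 = 6.0


def stopping_distance_m(speed_mps: float, decel_mps2: float = MAX_DECEL_MPS2) -> float:
    """Distance needed to brake to a stop at constant deceleration (v^2 / 2a)."""
    return speed_mps ** 2 / (2.0 * decel_mps2)


def must_brake(speed_mps: float, obstacle_range_m: float) -> bool:
    """Return True if a stationary obstacle lies within the vehicle's stopping distance."""
    return obstacle_range_m <= stopping_distance_m(speed_mps)


class StationaryVehicleEdgeCases(unittest.TestCase):
    """Scenario-style tests for stationary obstacles (parked fire truck, police cruiser)."""

    def test_parked_truck_inside_stopping_distance_triggers_braking(self):
        # 29 m/s is roughly 105 km/h; the stopping distance is about 70 m,
        # so an obstacle at 60 m must trigger braking.
        self.assertTrue(must_brake(speed_mps=29.0, obstacle_range_m=60.0))

    def test_distant_stationary_vehicle_does_not_trigger_braking(self):
        # The same speed with the obstacle 200 m away leaves ample margin.
        self.assertFalse(must_brake(speed_mps=29.0, obstacle_range_m=200.0))


if __name__ == "__main__":
    unittest.main()
```

A real edge-case suite would replay full sensor logs in simulation rather than check a closed-form formula, but even a toy test like this makes the question auditable: was the parked-fire-truck scenario in the suite or not?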

Without all of the information it is impossible to fully understand the circumstances of these tragic crashes. However, it is important to reflect on these vehicle malfunctions in order to make self-driving vehicles safer. It’s also important for software developers and autonomous vehicle manufacturers to engage in discussions regarding how to improve best practices for autonomous vehicle development and testing.
