Any system where erroneous behaviour can lead to serious injury or a potential loss of life is classified as a safety-critical system. This is true of self-driving, or autonomous, vehicles, where a vehicle malfunction can lead to the injury or death of the driver, passengers, or others outside the vehicle. The potential for injury or death is why it is paramount that the developers of self-driving vehicles ensure their systems work safely before deploying them to users on public roads.

In the field of self-driving vehicles, it is not clear that this best practice is always being followed. While self-driving vehicles are tested extensively using computer simulation and closed-circuit test tracks, they are also tested on public roads. For example, driver-assistance systems like Tesla’s Autopilot have been beta-tested by real users. Fully autonomous vehicles such as Uber’s self-driving car have also been tested outside of controlled settings on public roads. In cases where testing occurs in public, the vehicle under test is surrounded by pedestrians and drivers who may be completely unaware that their interactions are helping to test and improve an autonomous vehicle. This was the case on March 18, 2018, in Tempe, Arizona, when Uber’s self-driving car, with a human driver present, hit and killed a pedestrian (see SFGate
“Y2K is a great case study that we can use to talk about best practices for how we develop software today.”
Prof. Jeremy Bradbury on Global News Durham [photo credit: Aaron Streck; copyright: Global News]
SQR Lab leader Prof. Jeremy Bradbury was recently interviewed by Aaron Streck of Global News Durham for the 20th anniversary of the Y2K (Millennium) bug. You can watch the news segment and read the article here.
This week I gave a research seminar at Dalhousie University and at Mount Allison University on “Automating Software Development Using Artificial Intelligence (AI).” The intersection of AI and Software Engineering is an active research area and has led to a number of effective and novel applications of machine learning, metaheuristic algorithms, and deep learning. Many of these applications of AI to software development can be categorized as:
- Automation of software development activities including the creation of software artifacts (e.g., software test generation)
- Recommendation systems that help software developers improve their performance (e.g., recommending code for review)
Not all Software Engineering research problems can be suitably addressed by AI techniques. A good first step in determining whether a given software development problem can be addressed with AI is to see if it can be re-framed in terms of optimization, classification, prediction, etc. That is, can it be re-framed as one of the types of problems that AI methods are effective at solving?
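As a toy illustration of this re-framing, consider defect prediction: given a few metrics about a module, we can treat "is this module defect-prone?" as a binary classification problem. The data, metrics, and nearest-neighbour rule below are all hypothetical simplifications, not a method from the talk:

```python
# Toy sketch (hypothetical data): re-framing defect prediction as
# binary classification. Each module is described by two metrics
# (lines of code, cyclomatic complexity); the "classifier" is a
# 1-nearest-neighbour lookup over previously labelled modules.
from math import dist

LABELLED = [
    ((120, 4), "clean"),
    ((95, 3), "clean"),
    ((900, 25), "defect-prone"),
    ((750, 30), "defect-prone"),
]

def classify(metrics):
    # Predict the label of the closest previously seen module.
    return min(LABELLED, key=lambda item: dist(metrics, item[0]))[1]

print(classify((110, 5)))   # -> clean
print(classify((800, 28)))  # -> defect-prone
```

Once a problem is expressed this way, the toy classifier could be swapped for any standard machine learning model; the hard part is usually the re-framing, not the model choice.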
To find out more about the Software Quality Research Lab’s work in this area, please see the abstract and slides from my talk below.
What is the Heartbleed Bug?
The Heartbleed bug is a recently identified bug in the OpenSSL security protocol toolkit. OpenSSL is widely used on web servers to encrypt user data. In general, software bugs are computer program errors that cause the software to behave in an unexpected way (e.g., crash, produce incorrect output). Security bugs are a special kind of bug that can lead to a security vulnerability, which allows the software system, or the data stored in the system, to be accessed in a way that was unintended.
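The flaw behind Heartbleed was a missing bounds check: the server echoed back as many bytes as the client *claimed* to have sent, rather than as many as it actually sent, leaking adjacent memory. The sketch below is a simplified Python simulation of that pattern, not OpenSSL code; the buffer contents and function names are hypothetical:

```python
# Hypothetical simulation of a Heartbleed-style over-read: the server
# trusts a client-supplied length instead of the real payload size.
# Server memory: the heartbeat payload followed by unrelated secret data.
SERVER_MEMORY = b"PAYLOADsecret-session-key"

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    # BUG: echoes claimed_len bytes without checking it against
    # len(payload), so it can return bytes beyond the payload.
    return SERVER_MEMORY[:claimed_len]

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # FIX: silently discard requests whose claimed length exceeds
    # the actual payload length (as the OpenSSL patch did).
    if claimed_len > len(payload):
        return b""
    return SERVER_MEMORY[:claimed_len]

print(heartbeat_vulnerable(b"PAYLOAD", 25))  # leaks the secret bytes
print(heartbeat_fixed(b"PAYLOAD", 25))       # -> b""
```

The fixed version shows why the patch was so small: a single length comparison is enough to stop the leak, which is typical of how a minor bug can become a serious security vulnerability.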