by Professor Simon Burton
In 2017, when Volvo started testing its automated driving systems in Australia, it encountered a situation the Swedish designers had not necessarily anticipated — kangaroos.
Although the system had been “trained” to accurately recognise and predict the path of large mammals such as deer and elk crossing the road ahead, it was stumped by the movements of the marsupials responsible for 90% of animal/vehicle collisions in Australia.
Since then, there have been other incidents of automated driving vehicles misinterpreting their surroundings, with fatal consequences for the vehicle occupants and pedestrians.
Increased component reliability and resilience against cyber-attacks are essential prerequisites for the safety of highly automated driving systems. Yet Volvo’s and others’ experiences demonstrate that arguably the greatest barrier to safe automated driving is functional sufficiency — ensuring that, at a functional level, the system correctly interprets its environmental conditions and makes the decisions that ensure the safety of vehicle occupants and other road users under all possible circumstances and in a legally compliant manner.
Levels of uncertainty
The difficulty of demonstrating the functional sufficiency of an automated driving system lies in the inherent complexity and unpredictability of the ever-changing environment in which it operates.
For example, other road users behave in unexpected ways, the same object may appear differently depending on weather conditions, and road signs may be missing, become damaged or replaced with new designs.
To compound the problem, the system observes this complex, unpredictable environment using sensors that themselves have inherent inaccuracies due to the physical limitations of their sensing approaches.
Thus, the understanding and decision-making components of the system are presented with noisy, incomplete data about the current situation. This uncertainty is typically countered by using multiple sensing channels and algorithms that make use of heuristics or machine learning to interpret the sensing data. However, these algorithms are themselves inherently imprecise and introduce an additional level of uncertainty (see this blog post).
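To make the idea of countering sensor uncertainty with multiple channels concrete, here is a minimal sketch of inverse-variance (Kalman-style) fusion of two noisy estimates of the same quantity. The sensor names and numeric values are purely illustrative assumptions, not taken from the post:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance fusion of two noisy estimates of the same
    quantity (e.g. an obstacle's distance, measured independently
    by a radar channel and a camera channel).

    Returns the fused estimate and its variance; the fused variance
    is always smaller than either input variance, but the residual
    uncertainty never disappears entirely."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Hypothetical readings: radar reports 42.0 m (variance 0.25 m^2),
# the camera-based estimate reports 43.0 m (variance 1.0 m^2).
dist, var = fuse(42.0, 0.25, 43.0, 1.0)
```

Note that fusion weights the more precise channel more heavily and shrinks the overall variance, but the fused estimate remains uncertain — which is exactly why the downstream decision-making must still cope with imprecise inputs.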
Complex decision-making based on uncertain inputs
Based on this imprecise information, the decision-making procedures must make complex judgements regarding the best course of action in possibly ambiguous situations with no clear path to a minimal level of risk. In addition, these decisions must demonstrate compliance with applicable road laws, which at times may conflict with each other or with other notions of safety.
These decisions are put into action by the vehicle systems: the effect of the decision may, in turn, be dependent on a great many vehicle and environmental parameters (e.g. braking distances are dependent on speed, weather and road surface).
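As a simple illustration of how the same decision depends on vehicle and environmental parameters, the standard stopping-distance relation combines reaction distance with the physical braking distance v²/(2μg). The friction coefficients below are rough, indicative values assumed for illustration:

```python
def stopping_distance(speed_mps, mu, reaction_time=1.0, g=9.81):
    """Total stopping distance: distance covered during the system's
    reaction time plus the braking distance v^2 / (2 * mu * g),
    where mu is the tyre-road friction coefficient (roughly 0.7 on
    dry asphalt, around 0.35 on a wet road -- illustrative values)."""
    reaction_dist = speed_mps * reaction_time
    braking_dist = speed_mps ** 2 / (2 * mu * g)
    return reaction_dist + braking_dist

# The same "brake now" decision at ~100 km/h (27.8 m/s) has very
# different outcomes depending on the road surface:
dry = stopping_distance(27.8, mu=0.7)    # dry asphalt
wet = stopping_distance(27.8, mu=0.35)   # wet road: much longer
```

Halving the friction coefficient roughly doubles the braking component of the distance, so a decision that is safe in one set of conditions may be unsafe in another.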
The unpredictable nature of the impact of the vehicle’s actions on its environment (e.g. the reactions of other drivers and road users) closes the cycle to the complex environment to be interpreted by the vehicle.
In other words, a function running on failure-free hardware and operating according to its specification could still cause serious safety hazards if the complexity and uncertainties inherent in the driving tasks are not adequately managed.
Sources of uncertainty and their propagation throughout an automated driving system
A framework for safety assurance
Through a series of blog posts, I will give an insight into approaches for arguing the functional sufficiency of highly automated driving, a pre-requisite for the safe introduction of this technology into our daily lives.
The approaches will be presented according to a framework summarised in the figure below. At its core is a top-level definition of an acceptable level of safety. The overall aim is to develop an assurance case that argues that the level of residual risk associated with the system is commensurate to societal and legal expectations.
A framework for the safety assurance of highly automated driving
A systematic domain analysis forms the basis of an understanding of the environment in which the system should operate. If either the complexity of the domain poses too great a challenge to identifying the safety-relevant properties commensurate with the desired level of residual risk, or no convincing argument can be made that the system fulfils its safety properties for all possible scenarios in the target environment, then the scope of the operational design domain must be restricted until such an argument can be made.
The task of system design is to find a functional and technical design for the system — in terms of its sensing technologies, decision algorithms, computing platforms and actuator principles — that is inherently capable of ensuring safe driving behaviour within the chosen operating domain. The complexity of the system demands rigorous approaches to system design and analysis, as well as an iterative process in which assumptions made regarding the environment and system components are explicitly recorded and questioned at all stages.
A verification and validation (V&V) strategy is needed to collect the necessary evidence to demonstrate that not only is the implementation of the system safe (verification) within its defined operating domain but also that the understanding of the domain itself (validation) was sufficient.
The activities within the framework are continuously iterated, as the use of the systems in the field leads to a better understanding of the operational design domain as well as of the system’s inherent technical limitations, driving subsequent improvements.
This series of blog posts will conclude by considering how the technical arguments outlined in the framework must form part of a wider ethical and legal discussion, and how such a dialogue could be achieved.
Each blog post will contain links to relevant literature published by myself or colleagues in this area, as well as to the AAIP Body of Knowledge.
You can also download a free introductory guide to assuring the safety of highly automated driving: essential reading for anyone working in the automotive field.
Professor Simon Burton
Director Vehicle Systems Safety
Robert Bosch GmbH
Simon is also a Programme Fellow on the Assuring Autonomy International Programme, contributing to the strategic development of the Programme.