by Professor Simon Burton
My previous post discussed the task of deriving a set of safety requirements for automated driving systems and explained how these were sensitive to the chosen Operational Design Domain (ODD). The next component of my proposed framework for assuring automated driving systems addresses the design of the system itself.
The goal is to design a system that is inherently capable of maintaining its safety goals while, at the same time, developing a deep understanding of its technical limitations.
The first part of this statement is fairly obvious; the second part perhaps less so. We need to accept that we will not be able to engineer 100% perfect systems for all possible conditions in every environment anytime soon.
What we can do, though, is understand where the system's limitations lie, do our best to mitigate them during system design, and restrict the system's use to ODDs in which we have a good level of confidence that it will operate safely. This also means that the better our design (and our confidence in it), the larger the scope of the ODD and the greater the ultimate utility of the system. We should also consider that an overly restrictive ODD carries risks of its own, related to the availability of possibly essential mobility services and the need for frequent handovers to manual operation.
A “good” design for automated driving
Let’s revisit the simple system model from my first post in this series, which also highlights the challenges of each step of the sense, understand, decide, act chain:
Looking at each functional component in turn:
- Sense: This involves choosing a suitable combination of sensors that cover all environmental conditions within the chosen ODD. Each sensor modality (e.g. camera, radar, lidar) will vary in range and sensitivity to edge cases. A suitable combination, optimised for the target set of operational scenarios, is therefore required to reach an acceptable level of safety.
- Understand: Based on the sensor inputs over time, the system must calculate a model of the situation the vehicle is in at any given moment, and predict how this is likely to develop in the next few moments. This involves the task of sensor fusion to form a coherent model of the surroundings that also takes into account the inaccuracies of each sensing modality. Different plausibility checks can be applied to determine whether the information provided by the sensors is feasible. This may include making use of digital maps or information provided by infrastructure external to the vehicle.
- Decide: A safe trajectory for the vehicle must be calculated based on the current and predicted situational model. This component must find the right balance between conservative driving and enough permissiveness that the ultimate destination can be reached within an acceptable period of time, whilst not disrupting overall traffic flow, which would itself introduce additional risks.
- Act: These components ensure that the manoeuvres decided upon by the automated driving function are accurately executed by the vehicle propulsion, steering and braking systems. The lack of a “human backup” increases the reliability and availability requirements on the vehicle systems. This includes the ability to ensure continued operation in case of component failures (e.g. through the use of redundancy), as well as the ability to monitor the performance of the systems so that the behavioural planning components can ensure a minimal risk condition (e.g. a controlled stop at the side of the road) in case of failures.
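To make the “understand” step more concrete, here is a minimal sketch of one kind of plausibility check mentioned above: a camera distance estimate is only fused with a radar estimate when the two agree within their combined error bounds. All function names, tolerances and weights are illustrative assumptions for this post, not a real product interface.

```python
# Hypothetical cross-sensor plausibility check for the "understand" stage.
# Tolerances and fusion weights are invented for illustration only.
from typing import Optional

def plausible(camera_dist_m: float, radar_dist_m: float,
              camera_err_m: float = 2.0, radar_err_m: float = 0.5) -> bool:
    """True if the two independent estimates overlap within their
    combined error bounds."""
    return abs(camera_dist_m - radar_dist_m) <= camera_err_m + radar_err_m

def fuse_distance(camera_dist_m: float, radar_dist_m: float) -> Optional[float]:
    """Error-weighted fused estimate, or None when the readings are
    mutually implausible and must be escalated (e.g. to a degraded mode)."""
    if not plausible(camera_dist_m, radar_dist_m):
        return None
    # Weight the (assumed) more accurate radar reading more heavily.
    return 0.2 * camera_dist_m + 0.8 * radar_dist_m
```

The important design point is the `None` path: an implausible reading is not silently averaged away but surfaced, so a later component can react, for instance by triggering a minimal risk manoeuvre.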
There will remain inherent uncertainties and limitations in each component of the system. A holistic approach to the design of the entire system is therefore required in order to ensure that functional insufficiencies in one part of the system do not propagate to the next, eventually leading to system failure.
Developing a robust automated driving system will require combining and extending a number of safety analysis and design approaches. Contract-based design techniques are a means of breaking the system design down into individual components. A contract specifies what each system or component expects from its system context (assumptions) and what promises it makes to its context in turn (guarantees). Uncertainties in each component (see the figure above) would be encoded in these contracts.
As the examples in the diagram illustrate, design contracts allow for a compositional argument to be made for properties at the system level while allowing for each component to be considered as an independently verifiable “black box”. This can reduce verification effort and nevertheless allow for statements to be made about the system as a whole.
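As a rough illustration of the assume/guarantee idea, the following sketch wraps a component in an executable contract. The `Contract` wrapper, the perception example and its predicates are all hypothetical constructions for this post; real contract-based design frameworks are considerably richer.

```python
# Illustrative assume/guarantee contract wrapper (not a real framework).
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Contract:
    assumption: Callable[[Any], bool]       # what the component expects of its context
    guarantee: Callable[[Any, Any], bool]   # what it promises, given a valid input

def run_with_contract(component: Callable[[Any], Any],
                      contract: Contract, inp: Any) -> Any:
    """Execute a component, checking its contract at runtime."""
    if not contract.assumption(inp):
        raise ValueError("context violated the component's assumption")
    out = component(inp)
    if not contract.guarantee(inp, out):
        raise RuntimeError("component violated its guarantee")
    return out

# Hypothetical example: a perception component assumes non-negative
# distance readings and guarantees a confidence value in [0, 1].
perception_contract = Contract(
    assumption=lambda dist: dist >= 0.0,
    guarantee=lambda dist, conf: 0.0 <= conf <= 1.0,
)

confidence = run_with_contract(lambda d: min(1.0, 10.0 / (d + 1.0)),
                               perception_contract, 4.0)
```

Because each component is only verified against its own assumptions and guarantees, it can be treated as an independently verifiable “black box”, which is exactly what enables the compositional system-level argument described above.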
System-level safety analysis is also required to determine failure modes that could lead to system failures. Techniques currently applied in the design of automotive control systems such as fault tree and failure modes and effects analyses (FTAs and FMEAs) have their limitations. Such techniques require an explicit model of the system, its safety goals and fault propagation behaviour. There are uncertainties in all of these areas due to:
- the difficulty of expressing a complete and consistent set of safety goals
- the complexity of the function and its technical realisation over numerous components
- the need to model not only random hardware failures or software “bugs” but also inaccuracies and limitations of the individual components.
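To see why an explicit fault propagation model is both necessary and limiting, consider a toy fault tree expressed in code: the top event “loss of braking” occurs if both redundant brake channels fail (an AND gate) or a shared power supply fails (an OR gate). The structure, event names and failure probabilities are invented purely to illustrate the technique; anything not captured in such a model, such as a performance limitation of a perception component, is invisible to the analysis.

```python
# Toy fault tree: top event = (channel A AND channel B) OR power supply.
# Structure and probabilities are invented for illustration only.

def loss_of_braking(ch_a_failed: bool, ch_b_failed: bool,
                    psu_failed: bool) -> bool:
    """Boolean evaluation of the tree for one concrete fault state."""
    return (ch_a_failed and ch_b_failed) or psu_failed

def loss_of_braking_prob(p_a: float, p_b: float, p_psu: float) -> float:
    """Top-event probability assuming independent basic events."""
    p_and = p_a * p_b                      # both redundant channels fail
    return p_and + p_psu - p_and * p_psu   # OR of independent events
```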
Addressing these uncertainties will require model-based systems engineering and an extension of current safety analysis techniques to allow for a tighter level of collaboration between the various suppliers, manufacturers and operators involved in the automated driving ecosystem.
Using a combination of contract-based design and extended system safety analyses, we can build robust systems in which failures at the component level do not propagate into system-level failures. However, we can only analyse and model what we know, and there remain many “unknown unknowns” in the form of edge cases that are not detected by the sensors, changes in the environment, and unpredictable behaviour of other traffic participants.
We need to design systems that are not only robust against known sources of failures and performance limitations but that are also resilient against unknown and potentially unknowable perturbations.
At a practical system design level, we can address resilience by applying a layered approach to system monitoring and diagnostics, as shown in the figure below. The use of a “self-awareness” layer to monitor the performance of the system against a set of high-level safety rules and a defined set of ODD assumptions, however, underscores the need for clarity about what these actually are (see my previous post).
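A self-awareness layer of this kind might, in its simplest form, continuously compare current conditions against an explicit, machine-readable statement of the ODD assumptions. The parameters and state names below are illustrative assumptions invented for this sketch, not a specification of any real system.

```python
# Hypothetical ODD monitor for a "self-awareness" layer.
# All parameter values and state names are invented for illustration.

ODD_ASSUMPTIONS = {
    "max_speed_kph": 60,          # e.g. an urban shuttle ODD
    "min_visibility_m": 100,
    "allowed_weather": {"clear", "light_rain"},
}

def monitor_odd(speed_kph: float, visibility_m: float, weather: str) -> str:
    """Return 'nominal' while inside the ODD; otherwise request a
    minimal risk manoeuvre (e.g. a controlled stop)."""
    inside = (speed_kph <= ODD_ASSUMPTIONS["max_speed_kph"]
              and visibility_m >= ODD_ASSUMPTIONS["min_visibility_m"]
              and weather in ODD_ASSUMPTIONS["allowed_weather"])
    return "nominal" if inside else "minimal_risk_manoeuvre"
```

The value of writing the ODD down explicitly like this is that the monitor can only be as good as the assumptions it checks, which is precisely why the clarity demanded in the previous post matters.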
Other measures for increasing safety will also be required; these may include the use of traffic infrastructure, vehicle-to-X (V2X) communications and changes to the expectations and behaviour of other road users. The performance of the system will need to be continuously evaluated in the field, and this information used to refine the design and make rapid updates to the system (e.g. using over-the-air software updates) without compromising existing safety properties.
You can download a free introductory guide to assuring the safety of highly automated driving: essential reading for anyone working in the automotive field.
Professor Simon Burton
Director Vehicle Systems Safety
Robert Bosch GmbH
Simon is also a Programme Fellow on the Assuring Autonomy International Programme. Contribute to the strategic development of the Programme as a Fellow.