Automated driving and safety — a broader perspective

by Professor Simon Burton

In my previous blog posts, I outlined a framework for building an assurance argument for the safety of automated driving systems. The framework combines many different approaches that are currently under development. Much work is still needed, in both research and industry, before we will see level 4 automated driving functions safely deployed in the market at a large scale.

First of all, there is the inherent technical complexity of a system capable of safely navigating such a varied and dynamic environment. The implemented system will always exhibit residual performance insufficiencies in at least some situations.

Secondly, our methods for determining whether the system is safe enough, beginning with the definition of what “safe enough” actually means, are themselves subject to considerable uncertainty. This uncertainty stems in part from dependencies on non-technical issues, such as assumptions about the behaviour of other road users and societal acceptance of the residual risk of such systems.

Technically perfect automated driving systems won’t be here any time soon. Safe deployment will therefore depend heavily on measures and trade-offs at the operational, management, and governance levels.

This pattern of uncertainty in ensuring the safety of complex systems is not unique to automated driving. In general, the infrastructure and sociotechnical systems that keep us healthy and allow the economy to flourish are becoming ever more complex and interconnected. Think of the impact that the COVID-19 pandemic has had on an already stretched health service, as well as the knock-on effects on the economy, education, and most other walks of life.

Safer complex systems

It is against this background of ever-increasing complexity that the Royal Academy of Engineering, in collaboration with Lloyd’s Register Foundation, established the “Safer Complex Systems” programme.

As part of this programme, colleagues at the University of York and I undertook an initial study examining case studies and stakeholder feedback from aviation, mobility, healthcare, supply chains, and other domains. This work informed the development of a new framework that provides conceptual clarity about what is meant by safer complex systems and outlines emerging challenges and opportunities for addressing the safety of such systems.

A layered model

In order to capture the complex interdependencies between technical and non-technical views on system safety we have structured the framework in three layers:

  • Governance — the incentives and requirements for organisations to adhere to best practice, whether through direct regulation, so-called ‘soft law’ approaches, or consensus in the form of national and international standards. In formulating these standards and regulations, governments and authorities represent societal expectations on the acceptable level of residual risk associated with the systems.
  • Management — coordination of the tasks involved in the design, operation, and maintenance of the systems, enabling: risk management and informed design trade-offs across corporate boundaries; control over intellectual property and liability; management of supply chain dynamics; and the sustainment of long-term institutional knowledge for long-lived, evolving systems.
  • Task and technical — the technical design and safety analysis process that allows systems to be deployed at an acceptable level of risk, then actively monitored to ensure deviations or gaps between predicted and actual activity can be identified and rectified. This layer includes the technological components and the tasks performed by the users, operators, and stakeholders within a socio-technical context. In some cases, “users” may be unwilling or unknowing participants in the system who are nevertheless impacted by risk.

To examine more systematically the impact of complexity at each layer and the interactions between the layers, we use the following model to identify and analyse the effectiveness of both design-time measures and operational controls for increasing safety in complex systems.

Recreated from “Safer Complex Systems: An Initial Framework” report

For example, causes of system complexity at the governance layer could include multiple jurisdictions and politicised decision-making. This could lead to conflicting objectives in regulation, ultimately allowing for unsafe systems to be deployed with insufficient accountability. At a task level, the mentally unstimulating task of supervising an automated system could lead to so-called “automation complacency” where the operator ultimately fails to react when the system goes wrong.

Automated driving as a complex system

Automated driving systems exhibit properties of complex systems. Understanding the causes and effects of system complexity is therefore key to managing the safety of the overall system.

I propose that an understanding of the factors causing system complexity, their consequences in the system, and how these can lead to system failures (across the governance, management, and task and technical layers) is essential if automated driving is to be deployed in a manner considered acceptably safe from both a legal and an ethical perspective.

In addition to the safety assurance methodology proposed in my previous posts, this leads to several recommendations for the automotive industry to consider, some of which are summarised as follows:

  • Definition of ‘safe’ for automated driving and inter-connected mobility services — we need industry-wide consensus and regulation on safety targets for automated driving. These should consider both quantitative measures (e.g. based on accident statistics) and qualitative approaches (based on engineering practices and operation-time controls) for achieving acceptable levels of risk.
  • Informed, outcome-based, agile regulation — traditional approaches to standards development cannot keep pace with the rapid technological changes driving the transformation of the mobility sector. We propose outcome-based regulation that stipulates “what” must be argued rather than “how” to argue it. This includes taking a systems-oriented view, with additional focus on arguing the effectiveness of controls for reducing risk arising from emerging system complexity.
  • Operation-time controls and continuous assurance — it is unrealistic to expect that an adequate level of safety can be fully established before a highly automated driving system is deployed and then simply maintained over the vehicle’s lifetime. More focus must be placed on operation-time controls for maintaining safety, including at the operational management and governance layers, as well as on continuous evaluation of the assurance case, refined based on field experience and changing expectations of the system.
  • Holistic safety analysis and risk management — industry must support the development and adoption of systematic risk analysis methods at a system (of systems) level that include the vehicle, supporting infrastructure, and its environment. This must take into account the impact of complexity at the task and technical as well as management and operations layers.
  • Manage the complexity of automated driving in line with confidence in the safety arguments — the capabilities required to safely deploy automated driving systems will need to be developed and confirmed over time, limiting the speed at which such systems can be introduced into the market. These capabilities include appropriate systems-safety competencies, development and validation tool-chains, and proven-in-use technical components. We also need to confirm the validity of the safety assurance methodology itself, as well as the effectiveness of current approaches to regulation.
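The quantitative side of the first recommendation can be made concrete with a rough, widely cited statistical sketch. If failures are modelled as a Poisson process, then demonstrating from failure-free operation alone that the true failure rate lies below a target requires roughly 3/target units of exposure at 95% confidence. The target figure used below (about one fatality per 100 million miles, in the region of reported human-driven baselines) is an illustrative assumption for this sketch, not a figure from this post:

```python
import math

def miles_required(target_rate_per_mile: float, confidence: float = 0.95) -> float:
    """Failure-free miles needed to show, at the given confidence level,
    that the true failure rate is below the target.

    Assumes failures follow a Poisson process and zero failures are
    observed; the exposure n must then satisfy exp(-rate * n) <= 1 - confidence.
    """
    return -math.log(1.0 - confidence) / target_rate_per_mile

# Illustrative target: roughly one fatality per 100 million miles.
target = 1.0 / 100_000_000
print(f"{miles_required(target):.3e} failure-free miles needed")
```

For a 95% confidence level, -ln(0.05) is approximately 3, so the sketch implies on the order of 300 million failure-free miles — which is why the recommendations above argue that pre-deployment testing alone cannot carry the safety case, and operation-time monitoring and qualitative engineering arguments are needed as well.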

“Safer Complex Systems: An Initial Framework” is published today, 15 July, by the Royal Academy of Engineering. As well as a more in-depth description of the framework, those working in the automotive industry will find section 5.2 on connected and automated vehicles particularly useful.

Professor Simon Burton
Research Division Director
Fraunhofer IKS

Simon is also a Programme Fellow on the Assuring Autonomy International Programme. Contribute to the strategic development of the Programme as a Fellow.



Assuring Autonomy International Programme

A £12M partnership between @LR_Foundation and @UniOfYork to guide the safe development of autonomous systems worldwide.