How to achieve safe, trusted, and resilient autonomous systems: answering the regulators' questions
How the AAIP’s research is answering the questions about how to develop and regulate safe autonomous systems
By Simon Smith
It’s becoming clear that autonomous systems, such as self-driving cars, medical devices that decide on drug dosages, and robots working alongside humans, pose specific challenges to the way in which regulatory frameworks are traditionally developed and applied.
We previously described these challenges and showed how they are pushing the boundaries of six trends that respond to the need to safely develop increasingly complex systems. Each of these trends, in some way, acknowledges that safety assurance must cope with systems (in the widest sense of the term) that cannot be reduced to mature, completely understood components.
In this post, we highlight some of the questions that need to be answered to progress these trends further. These range from how policy is set, and how to best engage with society on this, to the technical mechanisms for assessing and accepting autonomous systems into operation. We also point to some of the work to define and address these questions being carried out by the AAIP, its funder Lloyd’s Register Foundation, and their communities and stakeholders.
1. The impetus for regulation
From a reactive, technology-led approach to regulation to a proactive, policy-led one.
- How do we set policy under uncertainty and what problems arise if we get this wrong?
- How can we translate these policies into actionable engineering processes?
- How should we account for the wide-ranging societal aspects that robotics and autonomous systems (RAS) demand, which might be hard to quantify or measure?
Practical starting points are getting the right people talking with each other across different disciplines (e.g. ethics workshop report), developing the concepts that help to frame the problems (e.g. framework for safer complex systems or a legal framework for autonomous marine operations), and identifying the right regulatory tools that can be applied (e.g. LRF Foresight Review of the Future of Regulatory Systems).
2. Societal engagement in regulation
From regulation that is invisible to the public to one in which the public both needs and expects to be actively engaged.
- How do we develop trust both in the systems being developed and in the behaviour of the organisations involved?
- How do we broaden the regulatory development process beyond the traditional functional and economic concerns?
- How do we go from informing the public on matters of personal or physical harm, to stimulating and managing debate on psychological harms, discrimination, threats to personal autonomy (e.g. invasions of privacy), and environmental damage?
Engagement by regulators has traditionally taken the form of public outreach or consultation with expert bodies. What is needed now is for the fundamental processes these organisations run to reflect these changes: for example, acknowledging shifts in the distribution of benefits and harms as part of understanding and assessing safety, or examining how the concept of responsibility applies when humans are replaced by systems.
Underlying all of this engagement is the concept of trust: that systems function as people expect, and not just from a technical perspective, but also from a legal, ethical and social perspective.
3. Industry cooperation with regulation
From low-key industry acceptance of regulation as a necessary cost of doing business to industry proactively acting as part of an assurance ecosystem that generates new opportunities.
- How do industry and regulators work together to evolve and improve regulations?
- How can regulation stimulate innovation and safety in industry?
- How does safety become part of how systems are naturally developed and even a competitive feature?
AAIP funds demonstrator projects that engage industry: for example, applying and assessing how regulation works in practice for drones, and convening industry and regulators in manufacturing around data-driven techniques that improve many commercial metrics alongside safety.
As well as making progress in their own fields, these projects contribute to an open, accessible Body of Knowledge. This provides evolving guidance, grounded in practical experience, with which a wider community of regulators and industry practitioners can freely engage.
4. The safety objective
From preventing harm when things break to the ongoing management of safe behaviour during normal system operation.
- How can we develop regulatory frameworks that ensure the absence of harm arising from ongoing system operations?
- How can we develop systems that exhibit the necessary traits of adaptation and resilience?
- How do we manage the many kinds of uncertainty that arise from data-driven systems operating in complex environments?
A fundamental need addressed by AAIP is for safety objectives to focus on the ongoing operational behaviour that preserves safety, such as the ability of a RAS to maintain ‘situational awareness’. This shift of attention to operational behaviour is also reflected in projects that bring multiple disciplines together to tackle open challenges, such as the UKRI TAS Node in Resilience.
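To make this shift concrete, here is a minimal Python sketch of how a safety objective like 'maintain situational awareness' might be expressed as a continuously evaluated runtime condition rather than a one-off design-time check. The field names and thresholds are entirely hypothetical, not drawn from any AAIP guidance:

```python
from dataclasses import dataclass

@dataclass
class PerceptionSnapshot:
    """One frame of the system's view of its surroundings (hypothetical fields)."""
    tracked_objects: int    # objects the perception stack currently tracks
    expected_objects: int   # objects predicted from the previous frame
    sensor_health: float    # 0.0 (failed) .. 1.0 (nominal)

def awareness_intact(snap: PerceptionSnapshot,
                     min_track_ratio: float = 0.9,
                     min_sensor_health: float = 0.8) -> bool:
    """Check, every control cycle, that situational awareness looks intact."""
    if snap.sensor_health < min_sensor_health:
        return False
    if snap.expected_objects > 0:
        if snap.tracked_objects / snap.expected_objects < min_track_ratio:
            return False
    return True

# Degraded awareness should trigger a safe fallback (slow down, hand over),
# which is exactly the kind of operational behaviour regulators can assess.
print(awareness_intact(PerceptionSnapshot(9, 10, 0.95)))  # True
print(awareness_intact(PerceptionSnapshot(5, 10, 0.95)))  # False
```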
5. The safety management process
From identifying individual hazards and the controls that help prevent them to developing insights into how systems continuously interact with and interpret the world around them.
- What methods and tools can we use to identify how systems use data-driven techniques to make increasingly significant decisions, and how doing so in open and unpredictable environments could lead to a range of different kinds of harm?
- How do we draw on a wider range of inputs and data from increasingly automated systems to support these methods and tools?
- How can these techniques be evaluated such that we build confidence in their ability to meet the needs of regulation?
Autonomous systems implement their decision-making in complex environments through ‘Sense / Understand / Decide / Act’ loops, building and acting on models of their environment. These novel architectures are challenging for the traditional tools of safety management, such as hazard identification and failure mode analysis. What is required is a more explicit analysis of the extent to which a system ‘understands’ its environment and how its interactions with that environment can lead to harm.
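For readers unfamiliar with the pattern, a minimal Python sketch of such a loop follows (all names and stubbed behaviours are hypothetical). The point it illustrates is that harm need not come from a component 'failing': it can arise whenever the model built in understand() diverges from the real environment, even while every stage behaves as designed:

```python
class AutonomousAgent:
    """Illustrative Sense / Understand / Decide / Act loop (all stubs hypothetical)."""

    def sense(self) -> dict:
        # Gather raw observations from sensors (stubbed).
        return {"lidar_points": [], "camera_frame": None}

    def understand(self, observations: dict) -> dict:
        # Build an internal model of the environment; this is where the
        # system's 'understanding' can silently diverge from reality.
        return {"obstacles": [], "model_confidence": 1.0}

    def decide(self, world_model: dict) -> str:
        # Choose an action based on the internal model, not on the world itself.
        return "stop" if world_model["obstacles"] else "proceed"

    def act(self, action: str) -> None:
        # Command the actuators (stubbed).
        print(f"executing: {action}")

    def run(self, cycles: int = 3) -> None:
        for _ in range(cycles):
            observations = self.sense()
            world_model = self.understand(observations)
            self.act(self.decide(world_model))

AutonomousAgent().run()
```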
The AAIP community is working on these techniques, making greater use of explicit models and data in understanding hazards for autonomous systems (e.g. A modular digital twinning framework for safety assurance of collaborative robotics), and also informing what safe system behaviour and interaction can look like, for example by showing how uncertainty can be accounted for in decision-making.
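As a toy illustration of what 'accounting for uncertainty in decision-making' can mean in practice (this is not the method used in the linked work; the probability and costs below are invented), an agent can weigh a perception probability against asymmetric costs of harm:

```python
def choose_action(p_obstacle: float,
                  cost_stop: float = 1.0,
                  cost_collision: float = 1000.0) -> str:
    """Pick the action with the lower expected cost under perception uncertainty.

    p_obstacle is the perception stack's estimated probability that an
    obstacle is present; all costs are invented for illustration.
    """
    expected_cost_proceed = p_obstacle * cost_collision
    return "stop" if cost_stop < expected_cost_proceed else "proceed"

# With these costs the agent stops whenever p_obstacle > 0.001: a highly
# asymmetric cost of harm makes even small residual uncertainty decisive.
print(choose_action(0.002))   # stop
print(choose_action(0.0005))  # proceed
```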
6. The safety assurance process
From trying to exhaustively specify and test system behaviours to continually assessing evidence and risk.
- How does the concept of a safety case apply to robotics and autonomous systems?
- How does assurance operate across the full lifecycle and operation of a system, drawing on the many professions and roles that should be contributing?
- How does assurance get integrated into the increasingly automated tooling and workflow of system development and operation?
The AAIP's research is organised into pillars that not only address the six trends above but also bring results together in a practical, usable form as safety case patterns. These give a template for laying out how, for example, the results of a certain kind of test should lead to a certain level of confidence in the safety of a system. As well as being applied and evolved in industry as part of system development, they can also act as a tool to realise the intent of standards and regulations.
The focus is on the areas that are particularly important for autonomous systems, such as machine learning and ethics, and the resulting templates can be used to create a credible and compelling assurance case for an autonomous system.
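As a sketch of what such a template can look like in machine-readable form, the fragment below assumes a simple goal-structured argument; the claim wording and evidence identifiers are hypothetical, not taken from the AAIP patterns themselves:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A node in a simple goal-structured safety case (illustrative schema)."""
    statement: str
    evidence: list[str] = field(default_factory=list)   # e.g. test reports
    subclaims: list["Claim"] = field(default_factory=list)

    def supported(self) -> bool:
        """A claim is supported by direct evidence, or by all of its subclaims."""
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.supported() for c in self.subclaims)

# A pattern instantiated for a machine-learnt perception component:
top = Claim(
    "The ML-based obstacle detector is acceptably safe in its operating domain",
    subclaims=[
        Claim("Training data is representative of the operating domain",
              evidence=["data-coverage-report-v2"]),
        Claim("Detection performance meets the required threshold",
              evidence=["test-campaign-results-2024"]),
    ],
)
print(top.supported())  # True once both legs of the argument carry evidence
```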
The future of regulation and assurance
Regulators and assurers face the increasingly urgent question of how best to safeguard the interests of the public, employees, and society as autonomous systems become part of our everyday lives.
Safety assurance, technically, is about having confidence that enough is being done to reasonably prevent harm. Adding autonomy into the mix multiplies the kinds of harm we need to consider, as well as the range of stakeholders who must be involved in the process. Not only this, but the potential for harm now evolves through the lifetime of the systems themselves. This has implications not just for the way in which regulatory frameworks are applied, but also for the way in which they are developed in the first place.
By accelerating the state of practice along the six trends we’ve outlined, we believe we will achieve a future where:
a) industry has the tools and methods to safely develop autonomous systems
b) regulators and assurance agencies have the tools and methods to confidently set and assess suitably challenging thresholds of acceptability
c) this is all carried out transparently, to the satisfaction of the many stakeholders now implicated in these processes — a question not of absolute safety and predictability, but of trust and resilience in the face of ever-present change.
Simon Smith
Chief Architect
CACI Limited UK
Simon is also a Programme Fellow on the Assuring Autonomy International Programme.
assuring-autonomy@york.ac.uk
www.york.ac.uk/assuring-autonomy