Up and down the Pyramid of Predictability


The Pyramid of Predictability

by Simon Smith

Autonomous systems, from robotic warehouse pickers to medical devices that administer our drugs, are approaching widespread use amid intense investment and high expectations of benefits and returns for many stakeholders.

It is also becoming clear that they have the potential to cause harm in ways that regulators and designers have not had to deal with before: not only physical injury (e.g. a robot accidentally injuring its human co-worker) but also what we might term psychological, societal and environmental harms (e.g. a medical device recommending drugs with a bias towards certain patient groups).

As we start, even unwittingly, to depend on autonomous systems in many aspects of our lives, regulators and assurers face the increasingly urgent question of how best to safeguard the interests of the public, employees and society.

To help understand how best to answer this question, it’s worth stepping back and looking at how regulatory frameworks have traditionally been developed and applied, and the way in which the characteristics of autonomous systems are challenging these ways of working.

The aim of a regulator — setting a threshold of acceptable risk

For a regulator in a highly safety-critical environment, such as nuclear energy, the simplest approach to developing a regulatory framework is to demand a very high level of safety with absolute certainty. This pinnacle of predictability would come with:

● mature processes to govern the operation of a system

● changes to system operation verified and validated under all conditions

● tightly specified procedures rolled out for trained operators and bystanders

● no remaining open questions over responsibility and liability.

Just to make sure, all of these requirements would be written into legal obligations.

The rising complexity of how systems and their software are developed and operated has meant that regulators, even those facing strong societal and industry demands for safety, have had to backtrack from this ideal as simply not achievable.

The threshold of acceptability on the Pyramid of Predictability

Instead, they have been pursuing an alternative approach — endeavouring to set a threshold of acceptable risk that can realistically be cleared by industry. This has stimulated a largely technical community to apply engineering techniques for the reduction of uncertainty, such as:

● breaking down systems into individually testable bits

● ensuring that the system operates in a well-known environment

● identifying risks through seeking out all the failure modes and hazardous states that could occur

● implementing controls across systems and procedures to mitigate these risks (a minimal sketch of this step follows below)
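
To make the last two points concrete, here is a minimal sketch in Python of how a hazard log might pair identified failure modes with mitigating controls and a simple severity-times-likelihood risk score. The hazards, scores and controls are invented for illustration, not taken from any real analysis.

```python
# Illustrative hazard log entries: (description, severity 1-5, likelihood 1-5, controls).
# All values are invented for illustration.
hazard_log = [
    ("Pump delivers dose above prescribed rate", 5, 2,
     ["Hard upper limit in firmware", "Independent flow-rate alarm"]),
    ("Operator mis-keys dosage on keypad", 4, 3,
     ["Confirmation prompt for out-of-range entries", "Operator training"]),
]

RISK_THRESHOLD = 8  # agreed maximum tolerable severity x likelihood score

for description, severity, likelihood, controls in hazard_log:
    score = severity * likelihood
    verdict = "needs further mitigation" if score > RISK_THRESHOLD else "tolerable"
    print(f"{description}: risk score {score} ({verdict}); controls: {', '.join(controls)}")
```

In practice this is the territory of established techniques such as FMEA and HAZOP; the point here is simply that each identified failure mode is traced to explicit controls.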

A technique that is increasingly being applied across industries is the safety case — an explicit argument that assembles evidence from testing and certification procedures to justify a claim that the system has reached a particular threshold. The system is deployed into operation, and the public, employees and society accept the residual risk.
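
As a rough illustration of that structure: safety cases are commonly organised as a tree of claims supported by argument and evidence (Goal Structuring Notation is one widely used format). The sketch below models such a tree in Python; the claims and evidence items are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A node in a claim-argument-evidence safety case tree."""
    statement: str
    evidence: list = field(default_factory=list)   # test reports, certificates, ...
    subclaims: list = field(default_factory=list)

    def is_supported(self) -> bool:
        # A claim holds if it cites direct evidence, or if all sub-claims hold.
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.is_supported() for c in self.subclaims)

top = Claim(
    "The system is acceptably safe to operate in its intended environment",
    subclaims=[
        Claim("All identified hazards are mitigated",
              evidence=["Hazard log review", "Mitigation test report"]),
        Claim("Operating procedures are defined and trained",
              evidence=["Training records audit"]),
    ],
)
print(top.is_supported())  # True: every leaf claim cites evidence
```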

Up and down the pyramid

However, crossing the threshold is not the end of the process. Once deployed, there are things that can drag a system back across the line to an unacceptable level of risk.

Across the threshold of acceptable risk on the Pyramid of Predictability

Accidents, for one, give cause to re-examine a system for undetected faults or unstated assumptions that were violated in operation.

More proactively, we might recognise that we are about to deploy a certified system into a novel environment where prior legal or societal assumptions are unlikely to hold. Or we might acknowledge that by integrating our system with others, technical assumptions such as the timing of expected inputs and outputs may differ from what has previously been designed for.

The infusion pump, a medical device that delivers fluids and medication into a patient’s bloodstream, is an example of a system that has gone up and down the pyramid as increasingly sophisticated software has been developed to carry out previously manual or mechanical functions.

To take a specific episode, 56,000 incidents involving infusion pumps, including injuries and deaths, were recorded in the US alone from 2005 to 2009, a clear sign that something wasn’t working.

In response, the regulator — in this case the US FDA — made changes to how the regulatory framework was to be applied, such as:

● suggestions for additional techniques such as the validation of software

● explicit safety assurance cases to strengthen the argument for acceptability

They also made changes to how the regulatory framework itself was developed, and where to set the threshold, including:

● consultation between the regulator and industry earlier in the product lifecycle to see how those engineering techniques could influence decisions made during product design

● a programme of interaction between the regulator and the user community, albeit principally one of the regulator attempting to raise user awareness of potential issues.

The challenges raised by autonomous systems

Autonomous systems have three characteristics that suggest that the engineering techniques we have relied on in the past, such as system decomposition or tighter performance specifications, may not be as much help as they used to be. Moreover, the things that push a system below the threshold will, in extremis, need to be actioned and resolved in something closer to real time, rather than once in a decade.

The first characteristic helps to define what we mean by an autonomous system:

1. Transfer of decision-making capability and authority
Autonomous systems involve a transfer of non-trivial decision-making capability and authority from people to a system. One challenge this poses is that we may not understand everything that this previous reliance on people entailed technically, legally and ethically.
For example, infusion pumps are starting to incorporate functionality that would previously have required the judgement of a clinician, such as deciding on whether a particular requested medication dosage is dangerous or not, rather than simply staying within pre-set limits.
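
For contrast, here is a minimal sketch of the traditional pre-set limit check that this judgement-based functionality goes beyond; the drug names and limits are purely illustrative, not clinical guidance.

```python
# Illustrative hard dosage limits per drug (mg per hour); values are invented.
DOSE_LIMITS_MG_PER_H = {
    "morphine": (0.5, 10.0),
    "heparin": (100.0, 2500.0),
}

def dose_within_limits(drug: str, dose_mg_per_h: float) -> bool:
    """The classical check: accept a requested dose only if it falls inside
    the pre-set band. No clinical context (patient weight, history,
    co-medication) is taken into account."""
    low, high = DOSE_LIMITS_MG_PER_H[drug]
    return low <= dose_mg_per_h <= high

print(dose_within_limits("morphine", 4.0))   # True: inside the band
print(dose_within_limits("morphine", 25.0))  # False: above the hard limit
```

Everything a clinician would weigh up beyond that fixed band is exactly the decision-making capability being transferred to the system.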

Three characteristics of the challenges raised by the introduction of autonomous systems

An additional two characteristics compound the issues:

2. Increased use of data-driven techniques
To implement this decision-making in a system, we need to use novel, often data-driven techniques such as model-based and machine learning architectures comprising sensing, understanding, deciding and acting (‘SUDA’), with potential for uncertainty and bias at many points in the development process.
For example, instead of using an absolute rule to decide whether a specific dosage is dangerous, machine learning is used to find patterns in data from prior dosages that determine what a safe dosage should be for each new case.
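
A minimal sketch of that data-driven alternative, assuming scikit-learn and a synthetic set of historical dosage records; the features, the labelling rule and the model choice are all illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic historical records: [patient weight (kg), age (years), dose (mg/h)].
X = rng.uniform([40, 18, 0.5], [120, 90, 12.0], size=(500, 3))
# Illustrative label: a dose was flagged unsafe if it exceeded 0.08 mg/h per kg.
y = (X[:, 2] > 0.08 * X[:, 0]).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# For a new case, the model estimates a probability that the dose is unsafe,
# rather than applying a single absolute rule.
new_case = np.array([[70.0, 45.0, 6.0]])
p_unsafe = model.predict_proba(new_case)[0, 1]
print(f"Estimated probability dose is unsafe: {p_unsafe:.2f}")
```

The uncertainty and bias mentioned above enter at every step here: which records were collected, how ‘unsafe’ was labelled, and what the model does with cases unlike its training data.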

3. Open and unpredictable operating environments
To achieve the benefits of autonomy in many domains, we want to deploy into situations that, despite attempts to restrict them, are inherently open and unpredictable. We may only properly understand the functioning of the system once it is interacting with other systems and people in the real world.
For example, a patient may be receiving treatment and medication from a number of infusion pumps and other clinicians, adding up to a novel environment that is not represented in the data that the infusion pump has previously been exposed to, again introducing uncertainties.
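
One way to make that uncertainty visible is a crude guard that flags inputs falling outside the envelope of the training data, so the system defers rather than extrapolates. This is a simplistic stand-in for real out-of-distribution detection, with invented ranges:

```python
import numpy as np

# Feature ranges seen in the (synthetic) training data of the previous sketch:
# [patient weight (kg), age (years), requested dose (mg/h)]. Values invented.
TRAIN_MIN = np.array([40.0, 18.0, 0.5])
TRAIN_MAX = np.array([120.0, 90.0, 12.0])

def in_training_envelope(case: np.ndarray) -> bool:
    """Crude novelty check: every feature must lie inside the observed range.

    Real systems would use richer out-of-distribution detection, but the
    principle is the same: outside the envelope, the model's output is
    not to be trusted."""
    return bool(np.all((case >= TRAIN_MIN) & (case <= TRAIN_MAX)))

# A paediatric patient on several concurrent pumps: unlike anything seen before.
novel_case = np.array([18.0, 6.0, 2.0])
if not in_training_envelope(novel_case):
    print("Novel situation detected: defer the decision to a clinician")
```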

Implications for regulators

In the second post in this series, we look in more detail at trends in safety assurance, from societal engagement in regulation through to the evolution of risk and evidence-based approaches, and how close these come to helping to address the issues raised by autonomous systems.

The Pyramid of Predictability

But it seems likely that regulators, assurers and industry will need to recognise that regulatory frameworks will be about living between the foothills of uncertainty and the pinnacle of predictability: an equilibrium of adaptability, in which dynamic and adaptable processes run continuously to assure the safety of systems as they adapt to their environments. By also keeping that equilibrium at a high threshold of acceptability, regulators can play a role in driving the innovation required in the industry to generate the new techniques and approaches that will enable the delivery of truly autonomous, and safe, systems.

Simon Smith
Chief Architect
CACI Limited UK

Simon is also a Programme Fellow on the Assuring Autonomy International Programme.

assuring-autonomy@york.ac.uk
www.york.ac.uk/assuring-autonomy
