Inside the Pyramid: trends in regulating complex systems
Six trends that signpost the direction for regulating autonomous systems
By Simon Smith
In a previous post we talked about how the three characteristics of autonomous systems were challenging the way in which regulatory frameworks are traditionally developed and applied:
1. the transfer of decision-making capability and authority from humans to systems
2. the increased use of data-driven techniques and the associated complexity of their ‘sense and act’ architectures
3. the open and unpredictable environments into which we intend to deploy these systems to fully realise their benefits
The complexity that these characteristics bring is not amenable to traditional engineering techniques for managing uncertainty, such as breaking down a system into simpler components or simply writing tighter specifications.
Moreover, they make it hard to reach a single point-in-time judgement on the acceptability of these systems. Rather, we need to find ways to assess them more continuously, maintaining an equilibrium between stability and adaptability.
In this post, we dig deeper into trends in how regulatory frameworks are developed and applied that have emerged in response to uncertainty and the increasingly complex systems towards the bottom of the pyramid. We also consider how well these trends signpost the direction that regulation of autonomous systems should take.
We use some running examples throughout:
● propellers in aviation — a demanding domain with regard to safety
● anti-lock braking (ABS) in automotive — a mature component that has been in operation for a considerable period of time
● infusion pumps in healthcare — machine learning is starting to enter studies for ‘next generation’ products
● automated lane-keeping systems (ALKS) — these integrate multiple automotive systems and are on the cusp of being truly autonomous systems
● air traffic management (ATM) — comprising a whole ecosystem of capabilities
Six trends in developing and applying regulatory frameworks for complex systems
We can identify six trends across how regulatory frameworks are developed and applied.
1. The impetus for regulation
For relatively simple systems, regulations have been shaped incrementally in a technology-led approach (e.g. an incident with a propeller leads to revised guidance on how to address the issue). More complex systems require new policy ideas — for example starting with an anticipated concept of a single integrated air traffic management system across Europe, and setting an expectation on how systems should operate together to achieve the goal.
The UK Government’s BEIS, in “Goals-based and rules-based approaches to regulation”, frames the choice facing regulators as one between setting prescriptive rules that mandate or prohibit specific actions (‘do not exceed 70mph’) and setting non-prescriptive goals that define principles or expected outcomes (‘drive prudently’).
2. Societal engagement in regulation
For many simple systems, the public is unaware of the regulatory process: they simply need systems and services to work. But where systems have a wider impact, across a broader range of stakeholders, such as ALKS, there is a need for more extensive public engagement. The Law Commissions in England and Scotland, for example, flag concerns about unfair risk allocation with ALKS, and the implications of harmed individuals not having a human to blame in the event of an incident, in “A regulatory framework for automated vehicles”.
3. Industry cooperation with regulation
For a propeller, regulatory requirements for safety are applied as a necessary cost of doing business, with a legislative level of clarity over what needs to be done to achieve certification of, for example, a ‘variable-pitch propeller’, as reported in “Easy access rules for propellers”. More complex systems hold considerably more ambiguity with regard to what constitutes ‘safe enough’.
Viewed positively, this allows greater scope for innovation in system development to achieve a high threshold of safety; viewed more negatively, it allows greater latitude for lobbying and the advancement of commercial positions, or even the monopolistic ‘capture’ of a regulatory framework by industry. What is clear is that all of these situations require explicit management on the part of the regulator, who must be able to convene an effective ‘assurance ecosystem’ of stakeholders.
4. The safety objective
Traditionally, safety is defined as the absence of harm. The focus is on establishing a chain of cause and effect from failure to harm and then minimising the human behaviours or technical issues that cause the failure, either in design or in operation.
For more complex systems and changing circumstances, there is an increasing interest in understanding how a system and the ecosystem around it remain safe from day to day. We are interested in how to maximise its capacity for ongoing resilience, so that it keeps delivering safe outcomes as circumstances change. ATM, for example, “cannot be decomposed in a meaningful way, where system functions are not bimodal, but rather where everyday performance is (and must be) variable and flexible”.
5. The safety management process
Identifying hazards is a fundamental part of the safety management process for handling safety issues in design and operation. For a propeller, a set of well-defined and independent ‘Hazardous Propeller Effects’ forms the basis for generating evidence of safety: showing that a suitable control mechanism ensures that the failure of any component of the propeller or propeller control system will not lead to one of these effects.
There is no such simple list for ALKS, a technology on the cusp of autonomy. Rather, there is a need for the explicit modelling of the design intent and process of various components, their interactions in operation, interactions with the environment, and the role of enabling processes and systems, to expose how hazardous states might occur.
6. The safety assurance process
For simple, well-understood systems a prescriptive approach has long been followed: conformance of a representative instance of a system (‘a vehicle taken from the production line’) to a quantified specification (‘required braking distance’) is tested, using rules that result in a pass or a fail, prior to deployment. We employ post-deployment monitoring and checks, and there is opportunity for sanctions such as a recall to be imposed.
However, a ‘pass or fail’ approach that incorporates tests for every aspect of system behaviour of interest with regard to safety is arguably infeasible for more complex systems. Regulators of these systems have adopted, to various degrees, a more consensual, risk-based approach in which tests are just one kind of evidence supporting a justification (‘safety case’) of why a system is safe to deploy. Alternatives might include the concept of ‘probation in the field’, intended to increase social acceptance of self-driving vehicles throughout the product lifecycle. In a way, we are facing the need to develop and apply mature processes, in the sense of capability maturity, to systems that are inherently technically immature.
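To make the contrast concrete, here is a minimal sketch of the two styles of judgement. The thresholds, evidence types, and weights are entirely hypothetical, invented for illustration; they are not drawn from any real standard or regulatory document.

```python
# Prescriptive style: a single quantified test with a binary outcome.
def conformance_check(braking_distance_m: float, limit_m: float = 40.0) -> bool:
    """Pass/fail: a representative vehicle must stop within the limit.
    (Hypothetical limit, for illustration only.)"""
    return braking_distance_m <= limit_m


# Risk-based style: tests are just one strand of evidence in a safety case;
# the judgement weighs several kinds of evidence against a safety claim.
def safety_case_judgement(evidence: dict[str, float], threshold: float = 0.8) -> bool:
    """Accept the claim if the weighted body of evidence is strong enough.
    Evidence scores in [0, 1]; weights and threshold are illustrative."""
    weights = {"test_results": 0.4, "simulation": 0.3,
               "field_monitoring": 0.2, "process_audit": 0.1}
    confidence = sum(weights[k] * evidence.get(k, 0.0) for k in weights)
    return confidence >= threshold


print(conformance_check(38.5))  # True: within the (hypothetical) limit
print(safety_case_judgement({"test_results": 0.9, "simulation": 0.85,
                             "field_monitoring": 0.7, "process_audit": 0.9}))  # True
```

The point of the sketch is the shape of the decision, not the numbers: the first function yields a rule-based pass or fail before deployment, while the second admits diverse, partial evidence and a threshold that a regulator and developer must agree on.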
Autonomous systems — drawing on and accentuating the trends
Autonomous systems exert something of a force-multiplier effect on each of these trends, in large part down to the three characteristics noted earlier. Each demands the development of better techniques and ways of working.
Although separated out into six trends, these different aspects of how regulatory frameworks are developed and applied are not independent: choices made in how to respond to one trend will limit options for how to respond to others. However, regulators do have choices in how they respond to the pressure to progress along each trend, from near-term actions that involve little disruption to business processes to longer-term change across the whole regulatory ecosystem. Examples of these choices, from near-term to longer-term, include:
1. Changing the way that regulatory requirements are framed
A change in emphasis towards principle and policy, to encourage acceptably safe autonomous system behaviour.
This is a simple change for the regulator, but potentially with widespread impact on the level of innovation required by the ecosystem. For example, take a requirement that is easy to state (e.g. that systems and their developers take responsibility for the safety of that system and those in its environment). This leads to a need for innovation in perception and understanding (e.g. the attempt to understand pedestrian intent by Humanising Autonomy). That, in turn, leads to a need to develop and evaluate assurance techniques that can assess that level of understanding.
2. Changing the way that safety management and safety assurance processes are run
A change to reflect the overall industry trend towards more dynamic and adaptable processes.
A regulator might encourage rapid iterations of ‘evaluate and learn’ activities by authorising processes that run through-life, and that involve greater exploitation of data and automation over manual inspection and audit. We can draw the analogy with emerging ‘DevSecOps’ practices (e.g. see an overview from Atlassian, a provider of tools for software development), in which tooling and automation for security are more integrated into the software development and deployment lifecycle.
Applied to safety, this would evolve safety cases through time, accommodate diverse perspectives and concerns, and be able to assess safety against multiple thresholds for different functions at different times during operation.
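The idea of assessing safety against multiple thresholds for different functions at different times can be sketched as follows. All function names, operational phases, and performance figures here are hypothetical, chosen only to illustrate the mechanism of a through-life, monitored assessment.

```python
from dataclasses import dataclass


@dataclass
class FunctionThreshold:
    function: str         # e.g. "lane_keeping" (illustrative name)
    phase: str            # operational phase in which this threshold applies
    min_performance: float


# Different functions face different thresholds in different phases.
THRESHOLDS = [
    FunctionThreshold("lane_keeping", "motorway", 0.99),
    FunctionThreshold("lane_keeping", "roadworks", 0.999),  # stricter here
    FunctionThreshold("driver_handover", "any", 0.95),
]


def assess(metrics: dict[str, float], phase: str) -> list[str]:
    """Return the functions whose monitored performance falls below the
    threshold that applies in the current operational phase."""
    breaches = []
    for t in THRESHOLDS:
        if t.phase in (phase, "any"):
            if metrics.get(t.function, 0.0) < t.min_performance:
                breaches.append(t.function)
    return breaches


# A through-life process would run this continuously on monitoring data,
# feeding breaches back into an evolving safety case rather than producing
# a one-off pass/fail verdict.
print(assess({"lane_keeping": 0.995, "driver_handover": 0.97}, "roadworks"))
```

Here lane-keeping performance that is acceptable on an ordinary motorway is flagged in the roadworks phase, which is the kind of phase-dependent, function-by-function judgement a dynamic safety case would need to support.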
Ultimately this approach might comprehensively address some of the concerns raised about ‘shelf-ware’ safety cases in the often-cited Haddon-Cave review of the loss of a UK military aircraft in 2006. This is a change that requires a great deal more flexibility in how regulators work alongside suppliers and manufacturers, with the skills and resources to mutually evaluate the effectiveness of assurance techniques.
3. Changing the way that regulatory frameworks are evolved
A change to how regulatory frameworks themselves are developed, with greater clarity on how they are motivated, negotiated and defined, and greater fluidity in how they are applied and modified.
This is a far-reaching change. It requires different behaviours, processes and skills across the ecosystem. Stakeholders must be willing to engage on technical, regulatory, and ethical considerations in the open. Agreement is needed on the role that regulation plays in the autonomy market as a stimulus for the innovation and trust that are enablers of the adoption of autonomous systems, and the realisation of their benefits.
Significantly, all of these require a different range of skills and competencies, especially in machine learning and AI, to understand and manage the inherent complexity of new systems and ecosystems.
Regulators need to be able to know what questions to ask regarding the extent of a system’s understanding of its environment, or the extent of its resilience in the face of change, and be able to interpret and challenge the answers.
We’ll revisit these trends and options in a final post that looks at how the research threads of the Assuring Autonomy International Programme are defining and tackling new directions in the assurance of autonomous systems in collaboration with its global community of researchers and practitioners.
Simon Smith
Chief Architect
CACI Limited UK
Simon is also a Programme Fellow on the Assuring Autonomy International Programme.
assuring-autonomy@york.ac.uk
www.york.ac.uk/assuring-autonomy