By Rob Ashmore
A key aspect of Machine Learning (ML) is that desired functionality is encoded within training data. Indeed, this provides one of the advantages of the approach: it can be used in situations where examples of desired behaviour can be collected (or produced), but where they cannot be precisely described.
This contrasts with traditional approaches to the development of safety-related software. These include formal, traceable, hierarchical analysis of requirements, resulting in a detailed design that can be directly coded against. Suppose, for example, that in aviation we have a high-level function that says “engine thrust reversers should only be available whilst the aircraft is on the ground”. We cannot code the “on the ground” function without some further analysis and, more particularly, requirements decomposition.
It is important that the intent of the requirements is maintained throughout this process. So, in our example, we want to have high confidence in our “on ground” declaration. Consequently, we may choose to equip the aircraft with two Weight On Wheels (WOW) switches, one on each main undercarriage strut, and report “on ground” if both are giving a positive signal.
But, that’s not enough.
The intent is that the requirement applies in all system states, including failure conditions. One such circumstance involves damage of a WOW switch. To protect against this possibility, we could add an extra condition, so that we’ll declare an “on ground” state if there is a signal from one WOW switch and the associated wheel is rotating above a threshold speed.
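The decision rule at this stage can be sketched as a few lines of code. This is purely illustrative: the function name, signal names and the speed threshold are invented for this sketch, and a real avionics implementation would look nothing like it.

```python
def on_ground(wow_left: bool, wow_right: bool,
              wheel_speed_left: float, wheel_speed_right: float,
              speed_threshold: float = 20.0) -> bool:
    """Illustrative 'on ground' logic: both WOW switches agree, or one
    switch is positive and its associated wheel is rotating above a
    threshold speed (protecting against a single damaged switch)."""
    if wow_left and wow_right:
        return True
    if wow_left and wheel_speed_left > speed_threshold:
        return True
    if wow_right and wheel_speed_right > speed_threshold:
        return True
    return False
```

For example, a positive left switch with the left wheel spinning at 50 units would declare "on ground" even if the right switch has failed; a positive switch alone, with no wheel rotation, would not.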
But, that’s still not enough.
The intent is that the requirement applies in all meteorological conditions. So, we need to think about the effect of crosswinds, which can result in banked landings with weight on a single undercarriage strut. We also need to think about rain, which can lead to wheels aquaplaning (rather than rotating) on a wet runway.
And so on.
The purpose of this discussion is not to illustrate that deciding whether an aircraft is on the ground is a difficult problem, although it is. The point is to demonstrate the benefit of thinking about the intent behind a requirement, even if this does not involve formal, traceable, hierarchical decomposition.
Building on the example above, thinking about a requirement’s intent helps us identify specific cases that are important. These could, for example, involve different meteorological conditions (or, more widely, different environmental circumstances) and different system states (including failure conditions).
We can use this list of cases to ask ourselves two important questions. Firstly, how are these cases represented in our training data? Secondly, how are they addressed in our verification activities? These questions can be difficult to answer with confidence: for example, it may be difficult (or, in some cases, unethical) to obtain data associated with particular system failure conditions.
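The first of these questions can be made concrete with a simple coverage check: count how often each identified case appears in the training data. The records, field names and cases below are invented for this sketch; a real check would use the project's own case taxonomy and dataset.

```python
from collections import Counter

# Invented training records for illustration only.
training_data = [
    {"surface": "dry", "crosswind": False},
    {"surface": "dry", "crosswind": True},
    {"surface": "wet", "crosswind": False},
    {"surface": "dry", "crosswind": False},
]

# Cases identified by analysing the requirement's intent.
cases_of_interest = [
    ("wet", True),    # wet runway, banked crosswind landing
    ("wet", False),   # wet runway, aquaplaning risk
    ("dry", True),    # dry runway, banked crosswind landing
]

counts = Counter((r["surface"], r["crosswind"]) for r in training_data)
for case in cases_of_interest:
    print(case, counts.get(case, 0))
```

Here the wet-runway crosswind case appears zero times, flagging exactly the kind of gap that either further data collection or a targeted verification activity would need to address.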
Nevertheless, in answering these questions, and in analysing the intent of our requirements, we gather valuable information to help us provide a compelling assurance argument for an algorithm developed using machine learning.
Rob is also Programme Fellow on the Assuring Autonomy International Programme.
This article is an overview of UK MOD sponsored research and is released for informational purposes only. The contents of this article represent the views of the author; they should not be interpreted as representing the views of the UK MOD, nor should it be assumed that they reflect any current or future UK MOD policy.
© Crown Copyright 2019. Published under the Open Government Licence.