Controllability in a highly automated driving context

by Helen Monkhouse

The notion of controllability has always been part of the automotive functional safety risk model. Considered alongside severity and probability of exposure, it’s the way automotive system safety engineers make subjective assessments about the potential accident risk of vehicle system malfunctions.
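To make the interplay of the three factors concrete, here is a small sketch of how severity (S1–S3), probability of exposure (E1–E4) and controllability (C1–C3) combine into an Automotive Safety Integrity Level under ISO 26262. The lookup in ISO 26262-3 follows a simple additive pattern; the helper function itself is illustrative and not part of any real tool.

```python
def asil(severity: int, exposure: int, controllability: int) -> str:
    """Map S (1-3), E (1-4), C (1-3) to an ASIL, following the
    ISO 26262-3 risk-classification table."""
    if severity not in (1, 2, 3) or exposure not in (1, 2, 3, 4) \
            or controllability not in (1, 2, 3):
        raise ValueError("risk parameter out of range")
    total = severity + exposure + controllability
    # The standard's table follows an additive pattern:
    # sum of 10 -> ASIL D, 9 -> C, 8 -> B, 7 -> A, 6 or below -> QM
    return {10: "ASIL D", 9: "ASIL C", 8: "ASIL B", 7: "ASIL A"}.get(total, "QM")

# Controllability directly moderates the risk classification: the same
# severe, frequently encountered hazard drops two integrity levels if
# most drivers are judged able to control it.
print(asil(3, 4, 3))  # -> ASIL D (difficult to control or uncontrollable)
print(asil(3, 4, 1))  # -> ASIL B (simply controllable)
```

This is exactly why debates about "how likely is the driver to control this situation" matter so much: the answer changes the integrity level, and with it the rigour demanded of the development process.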

This idea that the driver is integral to vehicle control has been inherent in our thinking, and over the years I have been involved in many lively debates with colleagues about how likely it was that the driver would be able to control a given hazardous situation and prevent an accident.

This paradigm of the driver in control and responsible for safety is the basis on which automotive safety practice is built.

But wait a minute, what about highly automated driving (HAD) systems, such as highway drive, that potentially allow the driver to remove their hands and feet from the controls and even involve themselves in a non-driving related task? It was this very question that finally persuaded me to take the plunge and start my part-time PhD research journey at the University of York.

The introduction of autonomy

Mixed-mode driving challenges will no doubt exist when self-driving and manually driven vehicles share our roads, but even today's highly automated systems begin to shift responsibility for the driving task away from the driver, and this challenges how we approach safety analysis.

When analysing HAD systems, not only do we need to consider how systems such as adaptive cruise control or traffic jam drive might perceive the world differently to the human driver, but we must also consider what the driver understands about the system's probable behaviour, and how the driver's interaction with the system might change with the circumstances.

Such situations might include the late cut-in of another vehicle, reflection from roadside infrastructure influencing the behaviour of radar-based adaptive cruise control, graffiti on roadside signage affecting camera-based vehicle speed management, or a driver’s attentiveness and ability to regain control being affected because they have been “out-of-the-loop” for some time.

A model of vehicle control

To help explore such issues I have developed a conceptual vehicle control model (VCM). Based on the MISRA VCM, and informed by driver modelling and joint cognitive systems research, the enhanced vehicle control model helps illustrate hazard causes that might exist for HAD systems being developed today.

This model provides a new basis from which to analyse hazards and hazard causes of such automated driving functions. Using Michon’s Hierarchical Control Model to replace the ‘driving control’ element in the original MISRA VCM introduces the notion of distributed control. Reviewing the model from a psychological perspective led to the introduction of a feedback loop with a ‘perception / models’ element and an ‘error’ comparator.

Importantly, this represents the differences in environmental perception and understanding that may exist between the human and machine.

Such differences could lead to system behaviour being unexpected to the human, which in itself could be the cause of a hazard.
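The feedback-loop idea can be illustrated with a toy closed-loop controller. This is not the enhanced VCM itself (which is defined in the paper), just a minimal sketch of the pattern: a 'perception / models' element produces the machine's view of the world, an 'error' comparator compares it with the desired state, and the control action closes the loop. All names and gains here are my own illustrative choices.

```python
def perceive(true_speed: float, sensor_bias: float = 0.0) -> float:
    """'Perception / models' element: the machine's view of the world,
    which may differ from reality (modelled here as a simple bias)."""
    return true_speed + sensor_bias

def control_step(true_speed: float, target_speed: float,
                 sensor_bias: float = 0.0, gain: float = 0.5) -> float:
    perceived = perceive(true_speed, sensor_bias)
    error = target_speed - perceived   # 'error' comparator
    return true_speed + gain * error   # control action closes the loop

# With accurate perception, the loop settles on the target the
# human expects (25), so machine behaviour matches expectation.
speed = 20.0
for _ in range(20):
    speed = control_step(speed, target_speed=25.0)

# With biased perception, the loop settles elsewhere (near 23):
# the machine is behaving "correctly" by its own model of the world,
# yet the result is unexpected to the human - a potential hazard cause.
biased_speed = 20.0
for _ in range(20):
    biased_speed = control_step(biased_speed, target_speed=25.0, sensor_bias=2.0)
```

The point of the sketch is the second loop: nothing has malfunctioned in the classical sense, but a mismatch between the human's and the machine's models of the environment produces behaviour the human did not anticipate.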

The enhanced VCM also defines and explains each element of the model in more detail, because the complexity of the automated driving environment makes the meaning of each element less clear than it was for the manual driving context for which the MISRA VCM was developed.

A practical methodology

With an initial evaluation of the model complete, I'm now working on an accompanying methodology. The aim is for the model and methodology together to serve as an effective hazard analysis tool for automotive safety engineers developing HAD systems.

Helen Monkhouse
HORIBA MIRA Ltd
Chief Engineer — Functional Safety & PhD student at University of York

Acknowledgement

This blog post is based on work by Helen Monkhouse (HORIBA MIRA Ltd and University of York), Ibrahim Habli (University of York), and John McDermid (University of York), to appear in Reliability Engineering and System Safety 2020.


Assuring Autonomy International Programme

A £12M partnership between @LR_Foundation and @UniOfYork to guide the safe development of autonomous systems worldwide. https://twitter.com/AAIP_York