Optimising the machine and the human

The role of human factors in the safe design and use of AI in healthcare

News headlines and research studies extol the virtues of artificial intelligence (AI), claiming that it can outperform a human clinician in tasks such as breast cancer screening and the treatment of sepsis.

But a study in the British Medical Journal earlier this year found that such claims were exaggerated. Too few of the studies involved randomised clinical trials, testing in a real-world clinical setting, or tracking participants over time.

In essence, AI algorithms are being developed and tested outside their context of use. So what can we do to turn this around and ensure that AI fulfils the promises so frequently proclaimed?

A conventional approach

A traditional engineering approach is about developing a machine or product that does what we need as effectively and reliably as possible: we design the machine first and then fit the human to it. With this approach, we could develop an autonomous infusion pump, such as the one used in the SAM demonstrator project, by using historical data to teach the system what dose of insulin is needed, and when.
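To make that concrete, here is a minimal sketch of what "teaching the system from historical data" might look like. Everything in it (the features, the synthetic records, the model choice) is an illustrative assumption, not the actual SAM implementation.

```python
# A hypothetical sketch of the conventional approach: fit a supervised
# model to past dosing records so it can suggest an insulin dose.
# Features, data, and model are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Stand-in "historical" records: blood glucose (mmol/L), body weight (kg),
# carbohydrate intake (g). In reality these would come from clinical data.
n = 5000
glucose = rng.normal(9.0, 2.5, n)
weight = rng.normal(80.0, 15.0, n)
carbs = rng.uniform(0, 100, n)

# Synthetic past dosing decisions (units), with clinician variability as noise.
dose = 0.6 * (glucose - 5.5) + 0.05 * weight + 0.1 * carbs + rng.normal(0, 1.0, n)
dose = np.clip(dose, 0, None)

X = np.column_stack([glucose, weight, carbs])
X_train, X_test, y_train, y_test = train_test_split(X, dose, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")

# The pump would then query the model for each new patient state:
suggested = model.predict([[12.0, 75.0, 40.0]])[0]
print(f"Suggested dose: {suggested:.1f} units")
```

Note that such a model can only act on the features it was trained on; everything else about the patient and the ward is invisible to it, which is exactly what the following scenarios highlight.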

Consider this AI tool, developed in isolation using historical data, in use on a fast-paced intensive care unit.

  • Holistic care — caring for patients is more than giving medication for a specific condition. The nurse usually interacts with the patient, building up an understanding of their physical and emotional needs. They pick up subtle signs (e.g. the patient looking paler than normal) that might indicate the patient needs a different dose of insulin from the one the AI predicts. The AI infusion pump does not pick up these signs; it does not have the bigger picture. This is particularly relevant where the patient has multiple illnesses and might receive as many as ten infusions concurrently.
  • Trust — the clinician in charge usually makes a dynamic trade-off based on who is working on the ward: they trust the nurse they’ve worked with for ten years, but with a new starter they may decide to double-check the dosage being given and provide some additional teaching at the same time. Do they now trust the AI, or do they double-check? Can they build trust in the AI without the reassurance that it will be aware of the same things a clinician would be aware of?
  • Clinical management — while in theory taking insulin management off clinicians’ list of duties should give them more time to look after their patients, in reality, economic pressures may mean they are assigned other tasks instead. This may take them away from the bedside, leaving them to supervise the AI system remotely: there’s a danger that instead of giving clinicians back the gift of time, we turn them into carers for AI rather than carers for people.
  • Teaming — what happens if the patient doesn’t respond as anticipated? Do the clinical team now know enough about the situation to be able to safely step in and take over from the machine? In an autonomous vehicle, a safety driver must remain vigilant to take over from the autopilot in an emergency — but we have seen this kind of human-machine setup fail catastrophically in practice. In this hospital ward scenario, the clinicians can’t take back control meaningfully unless the infusion pump has a way of communicating to them what it’s doing in a timely and understandable fashion. Clinicians will have to remain active in the loop, and the AI needs to be designed to be part of the clinical team.

Out of the lab and in a busy intensive care unit this “autonomous” AI isn’t really autonomous in the way imagined by technology developers — it’s one actor in a complex, highly interconnected clinical system made up of people, machines and environment. We have to understand these interactions and the context in which the AI will work in order to assure the safety of the overall clinical system. This is human factors.

The HF/E approach

In reality, for the SAM project, we used a human factors approach. Human factors (or ergonomics; often abbreviated as HF/E) is a scientific discipline concerned with the understanding of interactions among people and other elements of a system.

It is a profession that applies scientific theory, principles and methods to the design of systems in order to optimise human wellbeing and overall system performance (sometimes referred to as the “twin aims” of HF/E). In the UK the Chartered Institute of Ergonomics and Human Factors (CIEHF) is the professional body for human factors.

With a human factors approach we design a system that puts the human at the centre, not the machine. We study and understand all of the interactions as we design and develop tools (such as an autonomous infusion pump) that will be part of a clinical system. It’s about the interactions between the people, the tools (including AI) and the environment; in reality, none of them works fully autonomously.

For the SAM demonstrator project, the starting point was therefore not the narrow technical challenge of regulating blood sugar levels via a data-driven algorithm, but establishing what the clinical system looks like and what the different stakeholders need and expect.

Observations on the ward and interviews with a broad range of people (patients, nurses, doctors, educators, medical device specialists, technology developers, regulators, etc.) are essential data collection methods for eliciting this information.

To support the design of the autonomous infusion pump we modelled the current and the future system using the Functional Resonance Analysis Method (FRAM). FRAM is an approach for exploring and representing variability and interactions in socio-technical systems.
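As a rough illustration (and not the actual SAM model), the sketch below shows how a FRAM model can be represented in code. FRAM characterises each function by six aspects (Input, Output, Precondition, Resource, Time, Control), and functions become coupled wherever one function's Output appears as an aspect of another; the two example functions and their aspects here are assumptions for illustration.

```python
# A minimal sketch of a FRAM model in code. The six aspects per function
# are standard FRAM; the two example functions are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Function:
    name: str
    inputs: set = field(default_factory=set)
    outputs: set = field(default_factory=set)
    preconditions: set = field(default_factory=set)
    resources: set = field(default_factory=set)
    time: set = field(default_factory=set)
    control: set = field(default_factory=set)

def couplings(functions):
    """Yield (upstream, downstream, aspect, item) wherever one function's
    Output feeds an aspect of another function."""
    for up in functions:
        for down in functions:
            if up is down:
                continue
            for aspect in ("inputs", "preconditions", "resources", "time", "control"):
                for item in up.outputs & getattr(down, aspect):
                    yield up.name, down.name, aspect, item

monitor = Function(
    name="Monitor blood glucose",
    inputs={"blood sample"},
    outputs={"glucose reading"},
    resources={"glucose sensor"},
)
administer = Function(
    name="Administer insulin",
    inputs={"glucose reading"},
    outputs={"insulin delivered"},
    control={"prescription"},
    resources={"infusion pump"},
)

for up, down, aspect, item in couplings([monitor, administer]):
    print(f"{up} -> {down} via {aspect}: {item}")
```

Walking the couplings like this, once for the current system and again for the future system with the AI inserted, makes explicit where variability in one function can propagate to another.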

This consideration of the whole socio-technical system — the context in which the AI tool will function and the interactions between it and other actors in the system — leads to a safer design, and ultimately to safer use.

The crucial point for any developer is that a human factors approach optimises the machine and the human.

Mark Sujan
Managing Director
Human Factors Everywhere
@MarkSujan

Further reading

Sujan, M., Furniss, D., Grundy, K., Grundy, H., Nelson, D., Elliott, M., White, S., Habli, I. and Reynolds, N., 2019. Human factors challenges for the safe use of artificial intelligence in patient care. BMJ Health & Care Informatics, 26(1)


Written by Assuring Autonomy International Programme

A £12M partnership between @LR_Foundation and @UniOfYork to guide the safe development of autonomous systems worldwide. https://twitter.com/AAIP_York
