Optimising the machine and the human

The role of human factors in the safe design and use of AI in healthcare

News headlines and research studies extol the virtues of artificial intelligence (AI), claiming that it can outperform a human clinician in tasks such as breast cancer screening and the treatment of sepsis.

But a study in the British Medical Journal earlier this year found that such claims were exaggerated. Too few of the studies involved randomised clinical trials, testing in a real-world clinical setting, or tracking participants over time.

In essence, the AI algorithms are being developed and tested outside their context of use. So what can we do to turn this around and ensure AI fulfils the promises so frequently proclaimed?

A conventional approach

A traditional engineering approach is about developing a machine or product that does what we need as effectively and reliably as possible: we design a machine and then fit the human to the task. With this approach, we could develop an autonomous infusion pump, such as the one used in the SAM demonstrator project, by using historical data to teach the system what dose of insulin is needed, and when.
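To make the idea concrete, here is a deliberately minimal sketch of what "learning a dose from historical data" could look like. This is not the SAM system: the variables, numbers and the linear model are all invented for illustration, and a real clinical system would involve far more features, validation and safeguards.

```python
import numpy as np

# Hypothetical historical records: blood glucose readings (mmol/L)
# and the insulin doses (units/hour) that clinicians actually gave.
glucose = np.array([6.0, 8.0, 10.0, 12.0, 14.0])
dose = np.array([0.5, 1.0, 1.5, 2.0, 2.5])

# Fit a simple linear model dose = a * glucose + b by least squares.
X = np.column_stack([glucose, np.ones_like(glucose)])
a, b = np.linalg.lstsq(X, dose, rcond=None)[0]

def predict_dose(reading):
    """Predicted insulin dose (units/hour) for a glucose reading."""
    return a * reading + b
```

The point of the sketch is its narrowness: the model sees only the numbers it was trained on, which is exactly the limitation the next section explores.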

Consider this AI tool, developed in isolation using historical data, in use on a fast-paced intensive care unit.

  • Holistic care — caring for patients is more than giving medication for a specific condition. The nurse usually interacts with the patient, building up an understanding of their physical and emotional needs. They pick up subtle signs (e.g. the patient looking paler than normal) that might indicate a different dose of insulin is required than the one predicted by the AI. The AI infusion pump misses these signs; it does not have the bigger picture. This is particularly relevant where the patient has multiple illnesses and might receive as many as ten infusions concurrently.

Out of the lab and in a busy intensive care unit this “autonomous” AI isn’t really autonomous in the way imagined by technology developers — it’s one actor in a complex, highly interconnected clinical system made up of people, machines and environment. We have to understand these interactions and the context in which the AI will work in order to assure the safety of the overall clinical system. This is human factors.

The HF/E approach

In reality, for the SAM project, we used a human factors approach. Human factors (or ergonomics; often abbreviated as HF/E) is a scientific discipline concerned with the understanding of interactions among people and other elements of a system.

It is a profession that applies scientific theory, principles and methods to the design of systems in order to optimise human wellbeing and overall system performance (sometimes referred to as the “twin aims” of HF/E). In the UK the Chartered Institute of Ergonomics and Human Factors (CIEHF) is the professional body for human factors.

With a human factors approach we design a system that puts the human at the centre, not the machine. We study and understand all of the interactions as we design and develop tools (such as an autonomous infusion pump) that will be part of a clinical system. It’s about the interactions between the people, the tools (including AI) and environments — in reality, none of them works fully autonomously.

For the SAM demonstrator project, the starting point was, therefore, not the narrow technical challenge of how to regulate blood sugar levels via a data-driven algorithm, but to establish what the clinical system looks like and what the needs and expectations of the different stakeholders are.

Observations on the ward and interviews with a broad range of people (patients, nurses, doctors, educators, medical device specialists, technology developers, regulators and so on) are essential data collection methods for eliciting this information.

To support the design of the autonomous infusion pump we modelled the current and the future system using the Functional Resonance Analysis Method (FRAM). FRAM is an approach for exploring and representing variability and interactions in socio-technical systems.
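As a rough illustration of what a FRAM model captures, each function in the system is described by six aspects (Input, Output, Precondition, Resource, Control, Time), and functions are coupled where the output of one feeds an aspect of another. The sketch below is illustrative only: the function names and aspects are invented, not taken from the actual SAM model.

```python
from dataclasses import dataclass, field

@dataclass
class FramFunction:
    """A FRAM function described by its six aspects."""
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    preconditions: list = field(default_factory=list)
    resources: list = field(default_factory=list)
    controls: list = field(default_factory=list)
    time: list = field(default_factory=list)

def couplings(upstream, downstream):
    """Outputs of one function that feed the inputs of another."""
    return [o for o in upstream.outputs if o in downstream.inputs]

# Two illustrative functions from an intensive care setting.
monitor = FramFunction(
    name="Monitor blood glucose",
    outputs=["Blood glucose reading"],
    resources=["Nurse", "Glucose monitor"],
)
administer = FramFunction(
    name="Administer insulin infusion",
    inputs=["Blood glucose reading"],
    outputs=["Insulin delivered"],
    preconditions=["Pump primed and connected"],
    resources=["Nurse", "Infusion pump"],
    controls=["Local dosing protocol"],
    time=["Hourly review"],
)
```

Tracing the couplings between such functions is how the method surfaces the interactions and variability that a purely technical view of the infusion pump would miss.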

This consideration of the whole socio-technical system — the context in which the AI tool will function and the interactions between it and other actors in the system — leads to a safer design, and ultimately to safer use.

The crucial point for any developer is that a human factors approach optimises the machine and the human.

Mark Sujan
Managing Director
Human Factors Everywhere
@MarkSujan

Further reading

Sujan, M., Furniss, D., Grundy, K., Grundy, H., Nelson, D., Elliott, M., White, S., Habli, I. and Reynolds, N., 2019. Human factors challenges for the safe use of artificial intelligence in patient care. BMJ Health & Care Informatics, 26(1).

A £12M partnership between @LR_Foundation and @UniOfYork to assure the safety of robotics and autonomous systems worldwide. https://twitter.com/AAIP_York
