By Dr Ibrahim Habli
Consider an intensive care unit (ICU) where clinicians are treating patients for sepsis. They must review multiple informational inputs (patient factors, disease stage, bacteria, and other influences) in their diagnosis and treatment.
If the patient is treated incorrectly, who do you hold responsible? The clinician? The hospital? What if an aspect of the treatment was undertaken by a system using Artificial Intelligence (AI)? Who (or what) is responsible then?
Healthcare applications using AI have the potential to outperform clinicians at certain tasks. But they raise an important socio-technical issue: moral responsibility for harm to patients.
Moral responsibility concerns accountability for one’s actions. In everyday life, we hold someone morally responsible for an action only if two conditions are met: the control condition and the epistemic condition. The use of AI in a clinical setting challenges both.
The control condition: having control over, and ownership of, the action taken.
Back in the ICU, a clinician interprets the data available to them and decides to give the patient more fluids as part of their treatment. They have considered every piece of information available to them: they have control over that decision and action, and they own it.
If a machine using AI has interpreted that data and advised the clinician to give more fluids to the patient, does the clinician still control and own the action? A key issue is that a system’s design cannot fully capture the clinical intentions behind complex treatments, since it is simply not feasible to specify them completely.
The control condition is weakened.
The epistemic condition: having sufficient knowledge and understanding of the action and its likely consequences.
The clinician in the ICU made the decision about giving additional fluids knowing and understanding the full patient and environmental background and the current situation: they were aware of any recent changes in the clinical setting and could understand the potential consequences of their decision.
It is very difficult to formalise the complexity of sepsis treatment in order to design a system that can fully reflect the changing needs of the setting and the patient. So the informational inputs given to an AI system may be more limited than those available to a human clinician: the AI system does not have sufficient knowledge. Moreover, the decision-making of the system is often hidden, so it cannot explain post hoc the decision it has taken.
The epistemic condition is compromised.
Tipping the balance
The introduction of AI weakens both the control and epistemic conditions, which leaves us with a moral responsibility gap.
We must work to better understand, and then reduce, this gap: collecting data and experiences, updating safety risks, and assessing how clinical practice has been influenced by the AI functionality. From this we must decide what counts as an acceptable moral responsibility gap, and then continue to monitor and adapt the system, iteratively reducing the gap as we progress.
Dr Ibrahim Habli
Associate Professor of Safety-Critical Systems, University of York @IHabli
This blog post is based on work by Ibrahim Habli (University of York), Tom Lawton (Bradford Teaching Hospitals NHS Foundation Trust), and Zoe Porter (University of York Department of Philosophy), published in the Bulletin of the World Health Organization, April 2020.