The Debate on the Ethics of AI in Health Care

We conduct a mapping review of the literature on the ethics of artificial intelligence (AI) in health care, summarising current debates and identifying open questions for future research.

Essentially, this paper addresses the following question: ‘How can the primary ethical risks presented by AI-health be categorised, and what issues must policymakers, regulators and developers consider in order to be “ethically mindful”?’

The discussion is particularly challenging because it is hard to analyse AI in isolation: it is entangled with connected issues such as data sharing, data access, data privacy, surveillance and nudging, consent, ownership of health data, and evidence of efficacy.

Nonetheless, in this paper we isolate the ethical concerns specific to AI-health and reach several substantive results. We find that ethical issues can be (a) epistemic, related to misguided, inconclusive or inscrutable evidence; (b) normative, related to unfair outcomes and transformative effects; or (c) related to traceability. We further find that these ethical issues arise at five levels of abstraction: individual, interpersonal, group, institutional, and societal or sectoral. Finally, we outline a number of considerations for policymakers and regulators, mapping these to the existing literature, categorising each as epistemic, normative or traceability-related, and placing each at the relevant level of abstraction.

Our article contributes to the debate on AI in health care by offering a comprehensive analysis of the relevant literature, focusing on the ethical implications for individuals, interpersonal relationships, groups, institutions, societies and the health sector as a whole. Our goal is to inform policymakers, regulators and developers of what they must consider if they are to enable health and care systems to capitalise on the dual advantage of ethical AI: maximising the opportunities to cut costs, improve care and increase the efficiency of health and care systems, whilst proactively avoiding the potential harms.