With the advent of AI in healthcare, the expertise and experience accrued through ethics oversight of other controversial fields of biomedical research, such as genetics and genomics, can provide Research Ethics Committees (RECs) with valuable tools for mitigating the risks of new disruptive technologies.
The landscape of healthcare is currently facing a profound shift with the development and integration of artificial intelligence (AI) technologies. AI tools offer a wide variety of benefits, including applications aimed at monitoring population health such as surveillance and prediction, individual healthcare services for prevention, diagnosis, and treatment of disease, and improving healthcare systems through the use of electronic health records.
At the same time, the introduction of AI into healthcare and clinical research raises a host of ethical issues, including, but not limited to: data privacy and security for patients and research participants; bias and discrimination in data sets; the need for responsible behavior of autonomous technology; users' and stakeholders' trust in novel AI tools; justice considerations regarding access to beneficial AI technology; and challenges for regulation and governance. Many of these issues are not exclusive to AI research and can be compared to challenges in other fields of biomedical research, e.g. genetics and genomics, or overlap with problems raised by the use of big data. However, this context raises the following question: what makes these ethical issues different, specific, or more challenging in the context of AI introduction into healthcare? One potential answer relates to the nature of the data that AI requires to function well (public data, social media, data from wearable devices), which challenges our understanding of "traditional" and familiar research methods that, until today, have relied on a different mode of ethics scrutiny. For instance, while publicly available data did not, and still do not, require an ethics review, this stance may need to be revisited when those same data are collected to identify individuals with mental health problems through facial recognition without their consent. Such a setting raises particular ethical challenges, complicates notions of consent and data privacy, and requires a new perspective on what constitutes a human research participant. While AI technology development, research, and integration into healthcare are evolving at a rapid pace, discussions of ethical guidelines and regulatory oversight are lagging behind.
Therefore, with this digital transformation of healthcare settings, it becomes clear that diverse layers of oversight are required, ranging from governmental oversight to review by research ethics committees (RECs). RECs provide an independent ethical review of research protocols, a crucial mechanism for protecting research participants by reducing harms through careful assessment of a study's diverse risks and potential benefits. If we ask what makes AI oversight in healthcare different from the existing oversight surrounding gene therapy or stem cell research, we find that there is no single reason. In fact, every health research study involving AI will bring its own ethical complexities while still sharing common challenges with other studies that RECs currently face. For example, what information should be discussed during the informed consent process when smart devices are used? How can expertise gaps on REC boards be filled so that research protocols are better assessed? To answer these and other questions, various authors are calling for a revision and update of ethics review processes to better address the new concerns raised by health research that employs AI tools.
AI research raises concerns common to other research fields, such as genetic research. For instance, in genetic research involving the collection of genetic information, issues of informed consent, privacy, confidentiality, and the return of results constitute the main ethical concerns that RECs face when reviewing such studies. It would therefore be worthwhile to look back at some of these areas, especially at how RECs have handled the review of these studies, to inform the assessment of health research protocols involving AI.
While it is beyond the scope of this blog to exhaustively analyze how the challenges genetic research has raised for RECs could inform those of AI in healthcare, a first point of departure would be empirical research exploring how REC members have handled the ethical issues raised by genetic research, together with a revisiting of existing local (such as the Tri-Council Policy Statement 2, TCPS2, in Canada) and international research guidelines on genetic research. These could inform new policies and guidelines for research ethics practices concerning AI in the healthcare context. In that vein, one suggestion is to develop a new chapter on AI research as an addition to the TCPS2 that can guide RECs in assessing AI-related studies.
AI is becoming ubiquitous in healthcare. Building on lessons learned from existing fields of research that raise health ethics issues similar to those posed by the development and integration of new AI tools, it will be crucial to consider the key role of diverse stakeholders, including RECs. REC members must be equipped with the skills and knowledge to apply appropriate ethical scrutiny to research protocols involving AI in healthcare and to better evaluate the types of harms that AI research could cause, both for their own research practice and for research participants.
Hazar Haidar is an Assistant Professor in the Ethics programs at the Université du Québec à Rimouski (UQAR). Her research interests include, among other fields, reproductive ethics, ethics and genetics, and AI in healthcare.