Human in the loop requirement and AI healthcare applications in low-resource settings: A narrative review
Abstract
Background. Artificial intelligence (AI) applications in healthcare provision have the potential to universalise access to the right to health, particularly in low-resource settings such as rural and remote regions, where AI is deployed to fill gaps in medical expertise. However, a dominant theme in evolving regulatory approaches is the human-in-the-loop (HITL) requirement for AI healthcare applications, intended to ensure safety and the protection of human rights.
Objective. To review HITL requirements in AI healthcare applications and to inform how best to regulate such applications in low-resource settings.
Method. We conducted a narrative review of HITL requirements in AI healthcare applications to assess their practicality in low-resource settings.
Results. HITL requirements are impractical in low-resource settings, where AI applications are deployed precisely because medical experts are in short supply.
Conclusion. There is a need for a shift in regulatory approaches, from a primarily risk-based approach to one that also supports the accessibility of AI healthcare applications in low-resource settings. An approach anchored in the human right to science ensures both the safety of AI systems and access to their benefits in healthcare provision.
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
The SAJBL is published under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) licence. Under this licence, authors agree to make articles available to users, without permission or fees, for any lawful, non-commercial purpose. Users may read, copy, or re-use published content as long as the author and original place of publication are properly cited.
Exceptions to this licence model are allowed for UKRI-funded research and for research funded by organisations that require open-access publication without embargo, under a CC BY licence. As per the journal's archiving policy, authors are permitted to self-archive the author-accepted manuscript (AAM) in a repository.