
DOI: 10.1055/a-2702-1843
Explainable AI: Ethical Frameworks, Bias, and the Necessity for Benchmarks
Funding Information: This work was supported by the For Wis(h)dom Foundation (Project 9; February 2, 2022).

Abstract
Artificial intelligence (AI) is increasingly integrated into pediatric healthcare, offering opportunities to improve diagnostic accuracy and clinical decision-making. However, the complexity and opacity of many AI models raise concerns about trust, transparency, and safety, especially in vulnerable pediatric populations. Explainable AI (XAI) aims to make AI-driven decisions more interpretable and accountable. This review outlines the role of XAI in pediatric surgery, emphasizing challenges related to bias, the importance of ethical frameworks, and the need for standardized benchmarks. Addressing these aspects is essential to developing fair, safe, and effective AI applications for children. Finally, we provide recommendations for future research and implementation to guide the development of robust and ethically sound XAI solutions.
Publication History
Received: 23 July 2025
Accepted: 13 September 2025
Article published online: 23 September 2025
© 2025. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution License, permitting unrestricted use, distribution, and reproduction so long as the original work is properly cited. (https://creativecommons.org/licenses/by/4.0/)
Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany