Facial Plast Surg
DOI: 10.1055/a-2413-3529
Letter to the Editor

The Use of Large Language Models Such as ChatGPT in Delivering Patient Information Relating to Surgery

1   The Royal National ENT Hospital, University College London Hospitals NHS Foundation Trust, London, United Kingdom
Mustafa Jaafar
2   UCL Artificial Intelligence Centre for Doctoral Training, London, United Kingdom
3   Royal Surrey County Hospital, Guildford, United Kingdom
Tsz Ki Ko
4   Royal Stoke University Hospital, Stoke-on-Trent, United Kingdom
James Schuster-Bruce
5   Department of ENT, Kings College Hospital Foundation Trust, St George's University Hospital, London, United Kingdom
Nicholas Eynon-Lewis
6   Department of ENT, Barts Health NHS Trust, London, United Kingdom
Peter Andrews
1   The Royal National ENT Hospital, University College London Hospitals NHS Foundation Trust, London, United Kingdom

Rhinology and facial plastic surgery disorders can be complex and carry significant risk and morbidity. Patients can find it challenging to understand the intricacies of their conditions, treatment options, and postoperative care. Large language models (LLMs) can potentially bridge this gap by providing clear, up-to-date, accessible, and personalized information. These include the increasingly popular commercially available models, such as ChatGPT, Gemini, and Claude. Open-source alternatives, such as Bloom (licensed under OpenRAIL-M) and Mixtral (licensed under Apache 2.0), further lower the barriers to utilizing LLMs. Artificial intelligence and natural language processing have already had a significant impact on communication, and a more calibrated and personalized approach is crucial in otorhinolaryngology subspecialties such as rhinology and facial plastic surgery.

All of this is heavily caveated by the authors: at this early stage of technological development, medical validation studies and strong clinical oversight are essential. These tools should currently be reviewed for evaluation purposes only and are not recommended for clinical use.

Patients can access information through ChatGPT at any time, allowing them to gain a better understanding of their condition. The evidence base suggests that surgeons underestimate patients' desire for preoperative information, and as far back as 2009 approximately one-third of patients undergoing routine surgery were known to search the internet for additional information despite being given an information leaflet. LLMs could close this accessibility gap via appropriately trained, preexisting platforms that are readily available and often already used by patients. LLMs could also improve the process of obtaining informed consent for surgery. Consent should reflect a continuous thought process that enables patients to crystallize their thoughts and reach a full understanding; yet patients can feel disempowered by consent when it is reduced to the bureaucracy of our processes. LLMs can improve the informed consent process by providing tailored responses based on a patient's specific condition and treatment plan. This personalized guidance empowers patients to make informed decisions and can lead to better outcomes. Personalization on a broader scale may be best suited to LLMs developed with this narrow use case in mind.

LLMs can offer guidance on postoperative management, including wound care, pain management, and medication use. When surveyed, LLMs were found to provide “accurate and sufficient” or “partially accurate and sufficient” postoperative advice when validated by domain experts in ophthalmological surgery.

Surgeons should have increased awareness of patient factors that affect understanding, such as age and education level. On an individual basis, these factors can be gauged during clinic consultations so that information can be delivered in an appropriate format. Our otorhinolaryngology information leaflets and educational materials do not always reflect this; LLMs therefore represent an interesting paradigm whereby patient educational material can be improved by pitching content at an age- and education-specific level. Furthermore, high-quality educational materials help address concerns long after the consultation has ended and can reveal to surgeons underlying patient concerns or anxieties that were not apparent during the clinic consultation.
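As a minimal illustration of the kind of tailoring described above, the sketch below assembles an LLM prompt that pitches patient information at a target reading age and folds in the patient's stated concerns. The function name, parameters, and wording are hypothetical illustrations, not part of the letter or any validated clinical tool; any real deployment would require clinical validation and oversight as the authors caveat.

```python
# Hypothetical sketch: build_patient_prompt and its parameters are illustrative
# assumptions, not a validated clinical tool or the authors' method.
def build_patient_prompt(condition: str, reading_age: int, concerns: list[str]) -> str:
    """Assemble an LLM prompt that pitches patient information at a target reading age."""
    concern_lines = "\n".join(f"- {c}" for c in concerns)
    return (
        f"Explain {condition} and its postoperative care for a patient "
        f"with a reading age of about {reading_age} years.\n"
        "Address these specific concerns:\n"
        f"{concern_lines}\n"
        "Use short sentences and avoid unexplained medical jargon."
    )

# Example: a prompt for a 12-year reading age, covering two common concerns.
prompt = build_patient_prompt("septorhinoplasty", 12, ["pain control", "wound care"])
print(prompt)
```

The prompt string would then be sent to whichever model is in use; critically, the model's response would still need review by a health care professional before reaching the patient.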

To utilize these novel tools appropriately, clinicians should not only appreciate their potential to improve practice but also understand their shortcomings. As many have noted, despite continuous improvements, misinformation and hallucination remain a risk because of the quality of the models' training data. Quality control and critical evaluation by health care professionals are therefore a critical component of ensuring patients receive the most unbiased and accurate information on which to base informed decisions.

It is a question of “when,” not “if,” artificial intelligence such as LLMs will come into use within health care; this letter highlights the potential benefits that can be considered and explored from a patient education perspective.



Publication History

Accepted Manuscript online:
September 11, 2024

Article published online:
October 22, 2024

© 2024. Thieme. All rights reserved.

Thieme Medical Publishers, Inc.
333 Seventh Avenue, 18th Floor, New York, NY 10001, USA