Appl Clin Inform 2024; 15(03): 600-611
DOI: 10.1055/a-2327-4121
Research Article

Evaluation of a Digital Scribe: Conversation Summarization for Emergency Department Consultation Calls

Emre Sezgin
1   Center for Biobehavioral Health, The Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, Ohio, United States
2   The Ohio State University College of Medicine, Columbus, Ohio, United States
,
Joseph W. Sirrianni
3   IT Research and Innovation, The Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, Ohio, United States
,
Kelly Kranz
4   Physician Consult and Transfer Center, Nationwide Children's Hospital, Columbus, Ohio, United States

Funding The project described was supported by Award Number UM1TR004548 from the National Center for Advancing Translational Sciences. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Center for Advancing Translational Sciences or the National Institutes of Health.

Abstract

Objectives We present a proof-of-concept digital scribe system, a clinical conversation summarization pipeline for emergency department (ED) consultation calls, designed to support clinical documentation, and we report its performance.

Methods We use four pretrained large language models to build the digital scribe system: T5-small, T5-base, PEGASUS-PubMed, and BART-Large-CNN, applied via zero-shot and fine-tuning approaches. Our dataset includes 100 referral conversations among ED clinicians together with the corresponding medical records. We report ROUGE-1, ROUGE-2, and ROUGE-L scores to compare model performance. In addition, we annotated the transcriptions to assess the quality of the generated summaries.
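For illustration, the following is a minimal sketch of the summarization and ROUGE-scoring steps described above, assuming the public Hugging Face transformers and Google rouge_score Python packages. The checkpoint facebook/bart-large-cnn is the public BART-Large-CNN release; the transcript and reference summary are hypothetical placeholders, not study data.

# A minimal sketch of the summarization and scoring pipeline, assuming the
# public "transformers" and "rouge_score" packages. The transcript and
# reference summary below are hypothetical, not drawn from the study dataset.
from transformers import pipeline
from rouge_score import rouge_scorer

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Hypothetical consultation-call excerpt.
transcript = (
    "Referring clinician: Six-year-old with two days of fever and right "
    "lower quadrant pain, poor oral intake. ED clinician: Accepting the "
    "transfer; we will order labs and an ultrasound on arrival."
)
# Hypothetical reference (human-written) summary.
reference = (
    "Six-year-old with fever and RLQ pain accepted for ED transfer; labs "
    "and ultrasound planned on arrival."
)

# Generate a zero-shot summary of the transcript.
generated = summarizer(transcript, max_length=60, min_length=10,
                       do_sample=False)[0]["summary_text"]

# Compute ROUGE-1, ROUGE-2, and ROUGE-L F1 scores, as reported in Results.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
scores = scorer.score(reference, generated)
for name, score in scores.items():
    print(f"{name}: F1 = {score.fmeasure:.2f}")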

Results The fine-tuned BART-Large-CNN model demonstrates the strongest summarization performance, with the highest ROUGE scores (ROUGE-1 F1 = 0.49, ROUGE-2 F1 = 0.23, ROUGE-L F1 = 0.35). In contrast, PEGASUS-PubMed lags notably (ROUGE-1 F1 = 0.28, ROUGE-2 F1 = 0.11, ROUGE-L F1 = 0.22). BART-Large-CNN's performance decreases by more than 50% under the zero-shot approach. Annotations show that BART-Large-CNN achieves 71.4% recall in identifying key information and a 67.7% accuracy rate.
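For reference, each reported F1 value is the harmonic mean of n-gram precision and recall. For ROUGE-N,

P = \frac{\text{overlapping } n\text{-grams}}{n\text{-grams in candidate}}, \qquad R = \frac{\text{overlapping } n\text{-grams}}{n\text{-grams in reference}}, \qquad F_1 = \frac{2PR}{P + R}.

Thus the fine-tuned BART-Large-CNN's ROUGE-1 F1 of 0.49 means the harmonic mean of unigram precision and recall against the reference summaries is 0.49.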

Conclusion The BART-Large-CNN model demonstrates a high level of understanding of clinical dialogue structure, indicated by its performance both with and without fine-tuning. Despite some instances of high recall, the model's performance is variable, particularly in achieving consistent correctness, suggesting room for refinement, and its recall varies across information categories. The study provides evidence of the potential of artificial intelligence-assisted tools to support clinical documentation. Future work should expand the research scope with additional language models, hybrid approaches, and comparative analyses to measure documentation burden and human factors.

Protection of Human and Animal Subjects

No human subjects were involved in the study.



Publication History

Received: 08 January 2024

Accepted: 14 May 2024

Accepted Manuscript online: 15 May 2024

Article published online: 24 July 2024

© 2024. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany