Appl Clin Inform 2025; 16(04): 1325-1331
DOI: 10.1055/a-2617-6572
Research Article

Summarize-then-Prompt: A Novel Prompt Engineering Strategy for Generating High-Quality Discharge Summaries

Authors

  • Eyal Klang

    1   The Charles Bronfman Institute for Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, United States
    2   Division of Data Driven and Digital Medicine, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • Jaskirat Gill

    3   Institute for Critical Care Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • Aniket Sharma

    3   Institute for Critical Care Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • Evan Leibner

    3   Institute for Critical Care Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • Moein Sabounchi

    1   The Charles Bronfman Institute for Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • Robert Freeman

    4   Institute for Healthcare Delivery Science, Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • Roopa Kohli-Seth

    3   Institute for Critical Care Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • Patricia Kovatch

    5   Scientific Computing, Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • Alexander W. Charney

    1   The Charles Bronfman Institute for Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • Lisa Stump

    6   Mount Sinai Health System and Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • David L. Reich

    7   Department of Anesthesiology, Perioperative, and Pain Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • Girish N. Nadkarni*

    1   The Charles Bronfman Institute for Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, United States
    2   Division of Data Driven and Digital Medicine, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, United States
    8   Division of Nephrology, Department of Medicine, West Virginia University, Morgantown, West Virginia, United States
  • Ankit Sakhuja*

    1   The Charles Bronfman Institute for Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, United States
    2   Division of Data Driven and Digital Medicine, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, United States
    3   Institute for Critical Care Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, United States

Funding The study was funded by the U.S. Department of Health and Human Services, National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases (fund no.: K08DK131286).

Abstract

Background

Accurate discharge summaries are essential for effective communication between hospital and outpatient providers, but generating them is labor-intensive. Large language models (LLMs), such as GPT-4, have shown promise in automating this process, potentially reducing clinician workload and improving documentation quality. A recent study that used GPT-4 to generate discharge summaries from concatenated clinical notes found that, while the summaries were concise and coherent, they often lacked comprehensiveness and contained errors. To address this, we evaluated a structured prompting strategy, summarize-then-prompt, which first generates concise summaries of individual clinical notes and then combines them to create a more focused input for the LLM.

Objectives

The objective of this study was to assess the effectiveness of a novel prompting strategy, summarize-then-prompt, in generating discharge summaries that are more complete, accurate, and concise compared with an approach that simply concatenates clinical notes.

Methods

We conducted a retrospective study comparing two prompting strategies: direct concatenation (M1) and summarize-then-prompt (M2). A random sample of 50 hospital stays was selected from a large hospital system. Three attending physicians independently evaluated the generated hospital course summaries for completeness, correctness, and conciseness using a 5-point Likert scale.

Results

The summarize-then-prompt strategy outperformed the direct concatenation strategy in both completeness (4.28 ± 0.63 vs. 4.01 ± 0.69, p < 0.001) and correctness (4.37 ± 0.54 vs. 4.17 ± 0.57, p = 0.002) of the hospital course summaries. However, the two strategies showed no significant difference in conciseness (p = 0.308).

Conclusion

Summarizing individual notes before concatenation improves LLM-generated discharge summaries, enhancing their completeness and accuracy without sacrificing conciseness. This approach may facilitate the integration of LLMs into clinical workflows, offering a promising strategy for automating discharge summary generation and reducing clinician burden.

Protection of Human and Animal Subjects

Human and animal subjects were not included in the project.


* Equal contribution as senior authors.


Supplementary Material



Publication History

Received: 15 February 2025

Accepted: 20 May 2025

Accepted Manuscript online:
21 May 2025

Article published online:
10 October 2025

© 2025. Thieme. All rights reserved.

Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany