Appl Clin Inform 2019; 10(04): 655-669
DOI: 10.1055/s-0039-1695791
Research Article
Georg Thieme Verlag KG Stuttgart · New York

Interactive NLP in Clinical Care: Identifying Incidental Findings in Radiology Reports

Gaurav Trivedi
1  Intelligent Systems Program, University of Pittsburgh, Pittsburgh, Pennsylvania, United States

Esmaeel R. Dadashzadeh
2  Department of Surgery and Biomedical Informatics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States

Robert M. Handzel
3  Department of Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States

Wendy W. Chapman
4  Department of Biomedical Informatics, University of Utah, Salt Lake City, Utah, United States

Shyam Visweswaran
1  Intelligent Systems Program, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
5  Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States

Harry Hochheiser
1  Intelligent Systems Program, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
5  Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
Funding The research reported in this publication was supported by the National Library of Medicine of the National Institutes of Health under award number R01LM012095 and a Provost’s Fellowship in Intelligent Systems at the University of Pittsburgh (awarded to G.T.). The content of the paper is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or the University of Pittsburgh.

Publication History

Received: 25 April 2019
Accepted: 09 July 2019

Publication Date: 04 September 2019 (online)

Abstract

Background Despite advances in natural language processing (NLP), extracting information from clinical text is expensive. Interactive tools that are capable of easing the construction, review, and revision of NLP models can reduce this cost and improve the utility of clinical reports for clinical and secondary use.

Objectives We present the design and implementation of an interactive NLP tool for identifying incidental findings in radiology reports, along with a user study evaluating the performance and usability of the tool.

Methods Expert reviewers provided gold standard annotations for 130 patient encounters (694 reports) at the sentence, section, and report levels. We performed a user study with 15 physicians to evaluate the accuracy and usability of our tool. Participants reviewed encounters split between an intervention condition (with model predictions) and a control condition (without predictions). We measured changes in model performance, the time spent, and the number of user actions needed. The System Usability Scale (SUS) and an open-ended questionnaire were used to assess usability.
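
For context on the usability measure, SUS produces a 0 to 100 score from ten 1-to-5 Likert items using Brooke's standard scoring rule. The Python snippet below is a small illustrative sketch of that standard rule with an invented set of responses; it is not the instrument or analysis code used in this study.

# Standard SUS scoring (Brooke, 1996): odd-numbered (positively worded) items
# contribute (response - 1), even-numbered (negatively worded) items contribute
# (5 - response); the resulting 0-40 sum is scaled to the 0-100 range.
def sus_score(responses):
    """responses: list of ten integers in [1, 5], in questionnaire order."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Invented example responses for a single participant:
print(sus_score([4, 2, 4, 1, 4, 2, 5, 2, 4, 2]))  # 80.0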

Results Starting from bootstrapped models trained on 6 patient encounters, we observed an average increase in F1 score from 0.31 to 0.75 for reports, from 0.32 to 0.68 for sections, and from 0.22 to 0.60 for sentences on a held-out test data set over the course of an hour-long study session. We found that the tool significantly reduced the time spent reviewing encounters (134.30 vs. 148.44 seconds in the intervention and control conditions, respectively), while maintaining the overall quality of labels as measured against the gold standard. The tool was well received by the study participants, with a very good overall SUS score of 78.67.
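
The F1 scores above are the standard harmonic mean of precision and recall, computed separately at the report, section, and sentence levels. The Python snippet below is a minimal sketch of that standard computation over hypothetical label sets; it is not the study's evaluation code, and the identifiers are invented for the example.

# Standard F1 over binary labels (e.g., "contains an incidental finding"),
# shown for a single granularity such as reports; illustrative only.
def f1_score(predicted, gold):
    """predicted, gold: sets of item identifiers labeled positive."""
    true_pos = len(predicted & gold)
    if true_pos == 0:
        return 0.0
    precision = true_pos / len(predicted)  # fraction of predictions that are correct
    recall = true_pos / len(gold)          # fraction of gold positives recovered
    return 2 * precision * recall / (precision + recall)

# Hypothetical report identifiers: 2 true positives, 1 false positive, 2 false negatives.
print(f1_score({"r1", "r2", "r3"}, {"r2", "r3", "r4", "r5"}))  # ~0.57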

Conclusion The user study demonstrated successful use of the tool by physicians for identifying incidental findings. These results support the viability of adopting interactive NLP tools in clinical care settings for a wider range of clinical applications.

Protection of Human and Animal Subjects

Our data collection and user-study protocols were approved by the University of Pittsburgh's Institutional Review Board (PRO17030447 and PRO18070517).

