Methods Inf Med 2021; 60(03/04): 110-115
DOI: 10.1055/s-0041-1736311
Short Paper

Using Machine Learning to Capture Quality Metrics from Natural Language: A Case Study of Diabetic Eye Exams

Allan Fong
1  National Center for Human Factors in Healthcare, MedStar Health, Washington, District of Columbia, United States
,
Nicholas Scoulios
2  Department of Hospital Medicine, Internal Medicine, Stanford University School of Medicine, Stanford, California, United States
,
H. Joseph Blumenthal
1  National Center for Human Factors in Healthcare, MedStar Health, Washington, District of Columbia, United States
,
Ryan E. Anderson
3  Division of General Internal Medicine, Department of Medicine, MedStar Georgetown University Hospital, Washington, District of Columbia, United States
4  MedStar Institute for Quality and Safety, MedStar Health Research Institute, MedStar Health, Washington, District of Columbia, United States
Funding None.

Abstract

Background and Objective The prevalence of value-based payment models has led to increased use of the electronic health record (EHR) to capture quality measures, imposing additional documentation requirements on providers.

Methods This case study uses text mining and natural language processing techniques to identify the timely completion of diabetic eye exams (DEEs) from 26,203 unique clinician notes for reporting as an electronic clinical quality measure (eCQM). Logistic regression and support vector machine (SVM) classifiers, trained on unbalanced and balanced datasets (the latter balanced with the synthetic minority over-sampling technique, SMOTE), were evaluated on precision, recall, specificity, and F1-score for classifying records positive for DEE. We then integrated a high-precision DEE model to evaluate free-text clinical narratives from our EHR system.

Results Logistic regression and SVM models had comparable F1-score and specificity metrics, with the models trained and validated without oversampling favoring precision over recall. SVM with and without oversampling achieved the best precision (0.96) and the best recall (0.85), respectively. These two SVM models were applied to the 31,585 unannotated text segments, representing 24,823 unique records and 13,714 unique patients. The number of records classified as positive for DEE by the SVM models ranged from 667 to 8,935 (2.7% to 36% of 24,823). The proportion of unique patients classified as positive for DEE ranged from 3.5% to 41.8%, highlighting the potential utility of these models.

Discussion We believe the effect of oversampling on SVM performance is likely due to overfitting of the SVM SMOTE model to the synthesized data and to the data synthesis process itself. However, the specificities of SVM with and without SMOTE were comparable, suggesting both models were confident in their negative predictions. By prioritizing precision over sensitivity (recall) in choosing the SVM model to implement for categorizing DEEs, we can provide a highly reliable pool of results that can be documented through automation, reducing the burden of secondary review. Although this work focused on completed DEEs, the method could be applied to other documentation requirements by extracting information from the natural language of clinician notes.
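The deployment choice described above, trading recall for precision so that positive classifications are reliable enough to document automatically, can be illustrated by tuning a decision threshold. This is a hypothetical sketch, not the paper's implementation; the data, the 0.95 precision target, and all variable names are illustrative assumptions.

```python
# Hypothetical sketch: pick the SVM decision threshold that meets a
# precision target, and auto-flag only segments scoring above it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import precision_recall_curve

X, y = make_classification(n_samples=2000, n_features=50,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = SVC().fit(X_tr, y_tr)
scores = clf.decision_function(X_te)
precision, recall, thresholds = precision_recall_curve(y_te, scores)

# Lowest threshold whose precision meets the target; everything scoring
# above it is trusted enough to document without secondary review.
target = 0.95
ok = precision[:-1] >= target  # final precision point has no threshold
chosen = thresholds[ok][0] if ok.any() else thresholds[-1]
auto_flag = scores >= chosen
print(f"threshold={chosen:.2f}, flagged {auto_flag.mean():.1%} of segments")
```

A higher target shrinks the auto-documented pool but raises confidence in it, which is the trade-off the discussion argues for.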

Conclusion By enabling the capture of data for eCQMs from documentation generated by usual clinical practice, this work represents a case study in how such techniques can be leveraged to drive quality without increasing clinician work.



Publication History

Received: 09 April 2021

Accepted: 25 August 2021

Publication Date:
01 October 2021 (online)

© 2021. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany