Appl Clin Inform 2021; 12(01): 170-178
DOI: 10.1055/s-0041-1723024
Research Article

Extracting Medical Information from Paper COVID-19 Assessment Forms

Colin G. White-Dzuro*,1 Jacob D. Schultz*,1 Cheng Ye,2 Joseph R. Coco,2 Janet M. Myers,3 Claude Shackelford,3 S. Trent Rosenbloom,1,2 Daniel Fabbri2

1  Vanderbilt University School of Medicine, Nashville, Tennessee, United States
2  Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, United States
3  Department of Medicine, Vanderbilt University Medical Center, Nashville, Tennessee, United States
Funding None.

Abstract

Objective This study examines the validity of optical mark recognition, a novel user interface, and crowdsourced data validation to rapidly digitize and extract data from paper COVID-19 assessment forms at a large medical center.

Methods An optical mark recognition/optical character recognition (OMR/OCR) system was developed to identify fields that were selected on 2,814 paper assessment forms, each containing 141 fields used to assess potential COVID-19 infection. To ease data validation, a novel user interface (UI) displayed mirrored forms: the scanned assessment form with OMR results superimposed on the left and an editable web form on the right. Crowdsourced participants validated the results of the OMR system. The overall error rate and the time taken to validate were calculated. A subset of forms was validated by multiple participants to calculate interobserver agreement.
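The core OMR step described above, deciding whether a checkbox field on a scanned form was selected, can be reduced to measuring the fraction of dark pixels inside the field's known template location. The sketch below is illustrative only and assumes a pre-binarized page image and template coordinates; the function names and the fill threshold are not taken from the authors' system.

```python
# Minimal OMR sketch: classify one checkbox field on a binarized page.
# image: 2-D list of 0/1 pixel values (1 = dark); box: (x0, y0, x1, y1)
# field coordinates taken from the blank-form template. The 0.2 fill
# threshold is an illustrative assumption, not a published parameter.

def fill_ratio(image, box):
    """Fraction of dark pixels inside a field's bounding box."""
    x0, y0, x1, y1 = box
    pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(pixels) / len(pixels)

def is_marked(image, box, threshold=0.2):
    """A field counts as selected when enough of its area is filled in."""
    return fill_ratio(image, box) >= threshold
```

In practice a system like the one described would first deskew and binarize the scan so that template coordinates line up with the printed form before applying a per-field decision such as this.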

Results The OMR/OCR tools correctly extracted data from scanned form fields with an average accuracy of 70% and a median accuracy of 78% when the OMR/OCR results were compared with the results from crowd validation. Scanned forms were crowd-validated at a mean rate of 157 seconds per document and a volume of approximately 108 documents per day. A randomly selected subset of documents was reviewed by multiple participants, producing an interobserver agreement of 97% when narrative-text fields were included and 98% when only Boolean and multiple-choice fields were considered.
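The interobserver agreement figures reported above are simple percent agreement: the share of fields on which two validators recorded the same value. A minimal sketch, with hypothetical field values rather than study data:

```python
# Field-level percent agreement between two validators of the same form.
# responses_a / responses_b: parallel lists of field values (Boolean,
# multiple-choice, or text) entered by each validator. Illustrative only.

def percent_agreement(responses_a, responses_b):
    """Fraction of fields on which the two validators agree."""
    if len(responses_a) != len(responses_b):
        raise ValueError("validators must review the same fields")
    matches = sum(a == b for a, b in zip(responses_a, responses_b))
    return matches / len(responses_a)
```

Restricting the input lists to Boolean and multiple-choice fields, as in the 98% figure, simply excludes the free-text fields where exact string matches are harder to achieve.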

Conclusion During the COVID-19 pandemic, health care workers wearing personal protective equipment may find it challenging to interact with electronic health records. The combination of OMR/OCR technology, a novel UI, and a crowdsourced data-validation process allowed data to be efficiently extracted from a large volume of paper medical documents produced during the pandemic.

Protection of Human and Animal Subjects

None.


* Authors contributed equally to this study.


Publication History

Received: 10 September 2020

Accepted: 25 December 2020

Publication Date:
10 March 2021 (online)

© 2021. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany