Asynchronous Speech Recognition Affects Physician Editing of Notes

Funding: This study was supported by grant number R21HS023631 from the Agency for Healthcare Research and Quality. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality.
26 January 2018
26 August 2018
17 October 2018 (online)
Objective Clinician progress notes are an important record for care and communication, but there is a perception that electronic notes take too long to write and may not accurately reflect the patient encounter, threatening quality of care. Automatic speech recognition (ASR) has the potential to improve the clinical documentation process; however, ASR inaccuracy and editing time are barriers to wider use. We hypothesized that automatic text processing technologies could decrease editing time and improve note quality. To inform the development of these technologies, we studied how physicians create clinical notes using ASR and analyzed note content that is revised or added during asynchronous editing.
Materials and Methods We analyzed a corpus of 649 dictated clinical notes from 9 physicians. Notes were dictated during rounds to portable devices, automatically transcribed, and edited later at the physician's convenience. Comparing ASR transcripts and the final edited notes, we identified the word sequences edited by physicians and categorized the edits by length and content.
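The transcript-versus-final-note comparison described above can be sketched as a word-level sequence alignment. A minimal illustration follows, using Python's standard-library `difflib.SequenceMatcher` (which implements Ratcliff-Obershelp gestalt pattern matching); the sample sentences are invented for illustration and are not drawn from the study corpus, and the study's actual alignment procedure may differ.

```python
import difflib

def word_edits(asr_text, final_text):
    """Align an ASR transcript and the final note at word level,
    returning the spans the physician edited."""
    asr_words = asr_text.split()
    final_words = final_text.split()
    sm = difflib.SequenceMatcher(a=asr_words, b=final_words, autojunk=False)
    edits = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag != "equal":  # 'replace', 'delete', or 'insert'
            edits.append((tag, asr_words[i1:i2], final_words[j1:j2]))
    return edits

# Invented example: a misrecognized term is corrected and a plan sentence added.
asr = "patient has high retention stable on meds"
final = "patient has hypertension stable on meds continue lisinopril daily"
for tag, before, after in word_edits(asr, final):
    print(tag, before, "->", after)
```

Each returned tuple pairs the ASR words that were removed with the final-note words that replaced them, so empty "before" spans correspond to pure insertions during editing.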
Results We found that 40% of the words in the final notes were added by physicians while editing: 6% corresponded to short edits associated with error correction and format changes, and 34% were associated with longer edits. Short error correction edits that affect note accuracy are estimated to be less than 3% of the words in the dictated notes. Longer edits primarily involved insertion of material associated with clinical data or assessment and plans. The longer edits improve note completeness; some could be handled with verbalized commands in dictation.
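The short/long distinction above can be sketched as a simple rule over edit span length. In this sketch the three-word cutoff and the sample edit tuples are assumptions chosen for illustration; the study's actual categorization criteria may differ.

```python
# Invented edit tuples: (operation, ASR words removed, final-note words added).
edits = [
    ("replace", ["high", "retention"], ["hypertension"]),              # error correction
    ("insert", [], ["continue", "lisinopril", "40", "mg", "daily"]),   # plan content added
]

def categorize_edits(edits, short_max=3):
    """Split edits into short spans (error correction, formatting) and
    long spans (content insertion). The cutoff is illustrative only."""
    short, long_ = [], []
    for tag, before, after in edits:
        if max(len(before), len(after)) <= short_max:
            short.append((tag, before, after))
        else:
            long_.append((tag, before, after))
    return short, long_

short, long_ = categorize_edits(edits)
print(len(short), "short edits;", len(long_), "long edits")
```

Under such a rule, the corpus-level percentages reported above follow from summing the words in each category over all notes.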
Conclusion Process interventions to reduce ASR documentation burden, whether related to technology or the dictation/editing workflow, should apply a portfolio of solutions to address all categories of required edits. Improved processes could reduce an important barrier to broader use of ASR by clinicians and improve note quality.
Keywords: electronic health records and systems - clinical documentation and communications - natural language processing - notes - workflow
Protection of Human and Animal Subjects
The University of Washington Human Subjects Division approved the VGEENS study, and this work was performed in compliance with the approved study design and procedures.