Appl Clin Inform 2018; 09(01): 122-128
DOI: 10.1055/s-0038-1626725
Research Article
Schattauer GmbH Stuttgart

Development and Validation of a Natural Language Processing Tool to Identify Patients Treated for Pneumonia across VA Emergency Departments

B. E. Jones
B. R. South
Y. Shao
C. C. Lu
J. Leng
B. C. Sauer
A. V. Gundlapalli
M. H. Samore
Q. Zeng
Funding: Dr. Jones is funded by a career development award from the Veterans Affairs Health Services Research & Development (#IK2HX001908).
Publication History

Received: 09 September 2017
Accepted: 31 December 2017
Publication Date: 21 February 2018 (online)


Background Identifying pneumonia using diagnosis codes alone may be insufficient for research on clinical decision making. Natural language processing (NLP) may enable the inclusion of cases missed by diagnosis codes.

Objectives This article (1) develops an NLP tool that identifies the clinical assertion of pneumonia in physician emergency department (ED) notes, and (2) compares classification using diagnosis codes versus NLP against a gold standard of manual chart review for identifying patients initially treated for pneumonia.

Methods Among a national population of ED visits occurring between 2006 and 2012 across the Veterans Affairs health system, we extracted 811 physician documents containing search terms for pneumonia for training, and 100 random documents for validation. Two reviewers annotated span- and document-level classifications of the clinical assertion of pneumonia. An NLP tool using a support vector machine was trained on the enriched documents. We extracted diagnosis codes assigned in the ED and upon hospital discharge and calculated performance characteristics for diagnosis codes, NLP, and NLP plus diagnosis codes against manual review in training and validation sets.
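The document-level classification step described above can be sketched as a linear SVM over lexical features of each note. The study trained on LIBSVM with expert annotations; the snippet below uses scikit-learn's LinearSVC as an analogous setup, and every note, label, and feature choice in it is illustrative rather than the study's actual data or configuration.

```python
# Hedged sketch: document-level classification of the clinical assertion of
# pneumonia with a linear SVM over TF-IDF unigram/bigram features.
# All notes and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training notes (hypothetical), labeled 1 if pneumonia is asserted.
notes = [
    "chest xray shows right lower lobe infiltrate consistent with pneumonia",
    "assessment: community-acquired pneumonia, start ceftriaxone",
    "no infiltrate on imaging, pneumonia ruled out",
    "cough likely due to viral bronchitis, no evidence of pneumonia",
]
labels = [1, 1, 0, 0]

# Vectorize text and fit the linear SVM in one pipeline.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(notes, labels)

# Classify an unseen (also hypothetical) ED note.
pred = clf.predict(["impression: infiltrate on xray, treat pneumonia with ceftriaxone"])[0]
```

Note that simple keyword matching would misclassify the negated notes ("pneumonia ruled out"); learning weights over unigrams and bigrams lets the model pick up such negation cues, which is one motivation for an assertion classifier over raw search terms.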

Results Among the training documents, 51% contained clinical assertions of pneumonia; in the validation set, 9% of documents were classified as pneumonia, all of which contained pneumonia search terms. After enriching with search terms, the NLP system alone demonstrated a recall/sensitivity of 0.72 (training) and 0.55 (validation), and a precision/positive predictive value (PPV) of 0.89 (training) and 0.71 (validation). ED-assigned diagnostic codes demonstrated lower recall/sensitivity (0.48 and 0.44) but higher precision/PPV (0.95 in training, 1.0 in validation); the NLP system identified more “possible-treated” cases than diagnostic coding. An approach combining NLP and ED-assigned diagnostic coding achieved the best performance (sensitivity 0.89 and PPV 0.80).
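The metrics above follow the standard definitions against the manual-review gold standard, and the combined approach flags a visit when either the NLP system or the ED diagnosis code flags it. A minimal sketch, using invented counts rather than the study's data, shows both the metric definitions and why the union rule trades some precision for sensitivity:

```python
# Hedged sketch: recall/sensitivity and precision/PPV against a manual-review
# gold standard, plus a combined rule that flags a visit if either the NLP
# classifier or the ED diagnosis code flags it. Labels are illustrative.

def recall_precision(pred, gold):
    """Return (recall/sensitivity, precision/PPV) for binary label lists."""
    tp = sum(p and g for p, g in zip(pred, gold))          # true positives
    fn = sum((not p) and g for p, g in zip(pred, gold))    # false negatives
    fp = sum(p and (not g) for p, g in zip(pred, gold))    # false positives
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision

gold = [1, 1, 1, 1, 0, 0, 0, 0]  # manual chart review (gold standard)
nlp  = [1, 1, 0, 1, 1, 0, 0, 0]  # NLP assertion classifier
code = [1, 0, 1, 1, 0, 0, 0, 0]  # ED-assigned diagnosis code

combined = [n or c for n, c in zip(nlp, code)]  # union: either method fires

print(recall_precision(nlp, gold))       # (0.75, 0.75)
print(recall_precision(code, gold))      # (0.75, 1.0)
print(recall_precision(combined, gold))  # (1.0, 0.8)
```

As in the study, the union rule catches cases each single method misses (higher sensitivity) at the cost of admitting the false positives of both (lower PPV than codes alone).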

Conclusion System-wide application of NLP to clinical text can increase capture of initial diagnostic hypotheses, an important inclusion when studying diagnosis and clinical decision-making under uncertainty.


The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government.

Protection of Human and Animal Subjects

The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects, and was reviewed and approved by the University of Utah and VA SLC Institutional Review Boards (IRB_00065268).
