CC BY 4.0 · ACI Open 2019; 03(02): e88-e97
DOI: 10.1055/s-0039-1697907
Original Article
Georg Thieme Verlag KG Stuttgart · New York

Patient-Specific Explanations for Predictions of Clinical Outcomes

Mohammadamin Tajgardoon¹, Malarkodi J. Samayamuthu², Luca Calzoni², Shyam Visweswaran¹,²

1 Intelligent Systems Program, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
2 Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
Funding The research reported in this publication was supported by the National Library of Medicine of the National Institutes of Health under award number R01LM012095. The content of the paper is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or the University of Pittsburgh.

Publication History

Received: 31 March 2018
Accepted: 07 August 2019
Publication Date: 10 November 2019 (online)

  

Abstract

Background Machine learning models that predict clinical outcomes can be made more useful by augmenting each prediction with a simple and reliable patient-specific explanation.

Objectives This article uses physician reviewers to evaluate the quality of explanations of predictions. The predictions are obtained from a machine learning model developed to predict dire outcomes (severe complications, including death) in patients with community-acquired pneumonia (CAP).

Methods Using a dataset of patients diagnosed with CAP, we developed a model to predict dire outcomes. On a set of 40 patients who were predicted to be at either very high or very low risk of developing a dire outcome, we applied an explanation method to generate patient-specific explanations. Three physician reviewers independently evaluated each explanatory feature in the context of the patient's data and were instructed to disagree with a feature if they did not agree with the magnitude of support, the direction of support (supportive versus contradictory), or both.
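The abstract does not specify the explanation method itself. As a minimal illustration of the idea described above — assigning each feature a direction (supportive versus contradictory) and a magnitude of support — a naive Bayes-style model yields per-feature log-likelihood-ratio contributions that can serve as a patient-specific explanation. All feature names and probabilities in this sketch are hypothetical and are not taken from the study's CAP dataset.

```python
import math

# Hypothetical per-feature likelihoods P(feature value | outcome);
# NOT estimated from the study's data.
likelihoods = {
    "elevated_BUN":  {"dire": 0.60, "benign": 0.20},
    "age_over_80":   {"dire": 0.40, "benign": 0.15},
    "normal_vitals": {"dire": 0.10, "benign": 0.50},
}

def explain(patient_features):
    """Per-feature log-likelihood-ratio contributions: positive values
    support a dire outcome, negative values contradict it, and the
    absolute value gives the magnitude of support."""
    return {
        f: math.log(likelihoods[f]["dire"] / likelihoods[f]["benign"])
        for f in patient_features
    }

for feature, weight in explain(["elevated_BUN", "normal_vitals"]).items():
    print(f"{feature}: {weight:+.2f}")
```

Under this sketch, a reviewer would "agree" with a feature if both the sign and the rough size of its contribution match clinical judgment for that patient.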

Results The model used for generating predictions achieved an F1 score of 0.43 and an area under the receiver operating characteristic curve (AUROC) of 0.84 (95% confidence interval [CI]: 0.81–0.87). Interreviewer agreement was strong between two of the reviewers (Cohen's kappa coefficient = 0.87) and fair to moderate between the third reviewer and the others (Cohen's kappa coefficients = 0.49 and 0.33). Agreement rates between reviewers and generated explanations—defined as the proportion of explanatory features with which a majority of reviewers agreed—were 0.78 for actual explanations and 0.52 for fabricated explanations, and the difference between the two agreement rates was statistically significant (chi-square = 19.76, p-value < 0.01).
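The interreviewer-agreement statistic reported above, Cohen's kappa, corrects the observed agreement rate for the agreement expected by chance. It can be computed from two reviewers' per-feature ratings as sketched below; the agree (1) / disagree (0) labels are hypothetical, not the study's actual ratings.

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters over the same items:
    (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected from each rater's label frequencies."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    labels = set(r1) | set(r2)
    p_e = sum(c1[l] * c2[l] for l in labels) / n ** 2      # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical agree(1)/disagree(0) ratings for ten explanatory features.
reviewer_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
reviewer_b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohen_kappa(reviewer_a, reviewer_b), 2))  # 0.52
```

Here the observed agreement is 0.8, but because both raters label "agree" frequently, chance agreement is 0.58, so kappa is only 0.52 — illustrating why kappa is a stricter measure than raw agreement rate.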

Conclusion There was good agreement among physician reviewers on patient-specific explanations that were generated to augment predictions of clinical outcomes. Such explanations can be useful in interpreting predictions of clinical outcomes.

Protection of Human and Animal Subjects

All research activities reported in this publication were reviewed and approved by the University of Pittsburgh’s Institutional Review Board.