Methods Inf Med 1993; 32(02): 137-145
DOI: 10.1055/s-0038-1634907
Original Article
Schattauer GmbH

Evaluating Consensus Among Physicians in Medical Knowledge Base Construction

N. B. Giuse (1), D. A. Giuse (1, 2), R. A. Miller (1), R. A. Bankowitz (1), J. E. Janosky (3), F. Davidoff (4), B. E. Hillner (5), G. Hripcsak (6), M. J. Lincoln (7), B. Middleton (8), J. G. Peden Jr. (9)

1   Section of Medical Informatics, Department of Medicine, University of Pittsburgh
2   Robotics Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh
3   Dept. of Clinical Epidemiology and Preventive Medicine, University of Pittsburgh
4   American College of Physicians, Philadelphia
5   School of Medicine, Medical College of Virginia, Richmond, VA
6   Center for Medical Informatics, Columbia-Presbyterian Medical Center, New York
7   Salt Lake City Veterans Hospital and Dept. of Medical Informatics, University of Utah School of Medicine, Salt Lake City
8   Section of Medical Informatics, Division of General Internal Medicine, Stanford University Medical Center, Stanford
9   Section of General Internal Medicine, East Carolina University, Greenville, USA

Publication History

Publication Date:
08 February 2018 (online)


Abstract:

This study evaluates inter-author variability in medical knowledge base construction. Seven board-certified internists independently profiled “acute perinephric abscess,” using a set of 109 peer-reviewed articles as reference material. Each participant created a list of findings associated with the disease, estimated the predictive value and sensitivity of each finding, and assessed the pertinence of each article to each judgment. Agreement in finding selection differed significantly from chance: the same finding was selected by all seven, by six, and by five participants 78.6, 9.8, and 1.6 times more often, respectively, than predicted by chance. Findings with the highest sensitivity were the most likely to be included by all participants. The selection of supporting evidence from the medical literature was significantly related to each physician’s agreement with the majority. The study shows that, with appropriate guidance, physicians can reproducibly extract information from the medical literature, thus establishing a foundation for multi-author knowledge base construction.
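To make concrete what an observed-to-chance agreement ratio of this kind could look like, the following is a minimal Python sketch. It assumes a simple binomial chance model in which each of n = 7 raters independently selects a given finding with probability p; both this model and the value of p are illustrative assumptions, and the paper’s actual chance calculation may differ.

    from math import comb

    def chance_prob(k: int, n: int = 7, p: float = 0.5) -> float:
        """Probability that exactly k of n independent raters select the
        same finding, under an assumed binomial chance model with
        per-rater selection probability p (illustrative only; not the
        paper's stated model)."""
        return comb(n, k) * p ** k * (1 - p) ** (n - k)

    def agreement_ratio(observed_freq: float, k: int,
                        n: int = 7, p: float = 0.5) -> float:
        """Observed-to-chance ratio for k-way agreement: the observed
        frequency of findings chosen by exactly k raters, divided by
        the chance-expected frequency. Ratios well above 1 (e.g., 78.6
        for k = 7 in the abstract) indicate agreement far beyond
        chance."""
        return observed_freq / chance_prob(k, n, p)

Under this toy model with p = 0.5, chance_prob(7) is about 0.0078 (1/128), so even a modest observed frequency of seven-way agreement produces a large ratio.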