Methods Inf Med 1993; 32(02): 137-145
DOI: 10.1055/s-0038-1634907
Original Article
Schattauer GmbH

Evaluating Consensus Among Physicians in Medical Knowledge Base Construction

N. B. Giuse (1), D. A. Giuse (1, 2), R. A. Miller (1), R. A. Bankowitz (1), J. E. Janosky (3), F. Davidoff (4), B. E. Hillner (5), G. Hripcsak (6), M. J. Lincoln (7), B. Middleton (8), J. G. Peden Jr. (9)

1   Section of Medical Informatics, Department of Medicine, University of Pittsburgh
2   Robotics Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh
3   Dept. of Clinical Epidemiology and Preventive Medicine, University of Pittsburgh
4   American College of Physicians, Philadelphia
5   School of Medicine, Medical College of Virginia, Richmond, VA
6   Center for Medical Informatics, Columbia-Presbyterian Medical Center, New York
7   Salt Lake City Veterans Hospital and Dept. of Medical Informatics, University of Utah School of Medicine, Salt Lake City
8   Section of Medical Informatics, Division of General Internal Medicine, Stanford University Medical Center, Stanford
9   Section of General Internal Medicine, East Carolina University, Greenville, USA
Publication Date: 08 February 2018 (online)
Abstract:

This study evaluates inter-author variability in knowledge base construction. Seven board-certified internists independently profiled “acute perinephric abscess”, using as reference material a set of 109 peer-reviewed articles. Each participant created a list of findings associated with the disease, estimated the predictive value and sensitivity of each finding, and assessed the pertinence of each article to each judgment. Agreement in finding selection differed significantly from chance: the same finding was selected by all seven participants 78.6 times more often than chance would predict, by six participants 9.8 times more often, and by five participants 1.6 times more often. Findings with the highest sensitivity were the most likely to be included by all participants. Each physician’s selection of supporting evidence from the medical literature was significantly related to his or her agreement with the majority. The study shows that, with appropriate guidance, physicians can reproducibly extract information from the medical literature, and it thus establishes a foundation for multi-author knowledge base construction.
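To make the chance comparison concrete, the sketch below (in Python) shows one simple way such ratios can be computed: under a null hypothesis that each physician includes findings independently at his or her own empirical rate, the number of physicians selecting any given finding follows a Poisson binomial distribution, and the observed count of k-way agreements can be divided by its chance expectation. This is an illustration only; the function names and all numbers are hypothetical placeholders, not data or the statistical procedure actually used in the study.

def poisson_binomial_pmf(probs):
    """P(exactly k of len(probs) independent events occur), k = 0..n.

    Computed by the standard dynamic-programming convolution, so each
    physician may have a different inclusion rate.
    """
    pmf = [1.0]
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, v in enumerate(pmf):
            nxt[k] += v * (1.0 - p)   # physician omits the finding
            nxt[k + 1] += v * p       # physician includes the finding
        pmf = nxt
    return pmf

def agreement_ratios(inclusion_rates, observed_counts, n_candidates):
    """Observed / expected number of findings chosen by exactly k physicians."""
    pmf = poisson_binomial_pmf(inclusion_rates)
    expected = [n_candidates * p for p in pmf]
    return [obs / exp if exp > 0 else float("inf")
            for obs, exp in zip(observed_counts, expected)]

if __name__ == "__main__":
    # Hypothetical setup: 7 physicians, a pool of 200 candidate findings,
    # each physician including roughly a third of the pool.
    rates = [0.35, 0.30, 0.33, 0.28, 0.32, 0.31, 0.29]
    # observed[k] = number of findings selected by exactly k physicians
    observed = [90, 30, 20, 15, 12, 11, 10, 12]   # indices k = 0..7
    for k, ratio in enumerate(agreement_ratios(rates, observed, 200)):
        print(f"{k} physicians agree: {ratio:.1f}x chance expectation")

With these placeholder numbers, the high-agreement counts come out far above their chance expectation, mirroring the direction (though not the magnitude) of the ratios reported in the abstract.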

 