Methods Inf Med 2016; 55(03): 292-298
DOI: 10.3414/ME15-01-0131
Original Articles
Schattauer GmbH

Is There a Consensus when Physicians Evaluate the Relevance of Retrieved Systematic Reviews?[*]

Dympna O’Sullivan
1   School of Mathematics, Computer Science and Engineering, City University London, London, United Kingdom
2   MET Research Group, Telfer School of Management, University of Ottawa, Ottawa, Ontario, Canada
Szymon Wilk
2   MET Research Group, Telfer School of Management, University of Ottawa, Ottawa, Ontario, Canada
3   Institute of Computing Science, Poznan University of Technology, Poznan, Poland
Craig Kuziemsky
2   MET Research Group, Telfer School of Management, University of Ottawa, Ottawa, Ontario, Canada
Wojtek Michalowski
2   MET Research Group, Telfer School of Management, University of Ottawa, Ottawa, Ontario, Canada
Ken Farion
2   MET Research Group, Telfer School of Management, University of Ottawa, Ottawa, Ontario, Canada
4   Division of Emergency Medicine, Children’s Hospital of Eastern Ontario, Ottawa, Ontario, Canada
5   Departments of Pediatrics and Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
Bartosz Kukawka
3   Institute of Computing Science, Poznan University of Technology, Poznan, Poland

Publication History

received: 09 October 2015

accepted: 07 February 2016

Publication Date:
08 January 2018 (online)

Summary

Background: A significant challenge in practicing evidence-based medicine is providing physicians with relevant clinical information when it is needed. At the same time, the notion of relevance appears to be subjective, and its perception is affected by a number of contextual factors.

Objectives: To assess to what extent physicians agree on the relevance of evidence, in the form of systematic reviews, for a common set of patient cases, and to identify possible contextual factors that influence their perception of relevance.

Methods: A web-based survey was used in which pediatric emergency physicians from multiple academic centers across Canada were asked to evaluate the relevance of systematic reviews retrieved automatically for 14 written case vignettes (paper patients). The vignettes were derived from prospective data describing pediatric patients presenting to the emergency department with asthma exacerbations. To limit the cognitive burden on respondents, the number of reviews associated with each vignette was limited to three.
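The abstract does not specify how systematic reviews were matched to the vignettes, only that retrieval was automatic and capped at three reviews per vignette. The sketch below is an illustration only, not the authors' method: it ranks candidate reviews against a vignette by TF-IDF cosine similarity and keeps the top three. The function and variable names (top_reviews, vignette_text, candidate_reviews) and the use of scikit-learn are assumptions made for this example.

# Illustrative sketch only: rank systematic reviews for a case vignette by
# TF-IDF cosine similarity and keep the three best matches. The retrieval
# approach actually used in the study is not described in this abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def top_reviews(vignette_text, candidate_reviews, k=3):
    """Return the k candidate reviews most similar to the vignette text."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on the reviews plus the vignette so both share one vocabulary.
    matrix = vectorizer.fit_transform(candidate_reviews + [vignette_text])
    review_vectors, vignette_vector = matrix[:-1], matrix[-1]
    scores = cosine_similarity(vignette_vector, review_vectors).ravel()
    ranked = sorted(zip(scores, candidate_reviews),
                    key=lambda pair: pair[0], reverse=True)
    return [review for _, review in ranked[:k]]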

Results: Twenty-two academic emergency physicians with varying years of clinical practice completed the survey. There was no consensus in their evaluation of the relevance of the retrieved reviews: assessments ranged from very relevant to irrelevant, with the majority falling somewhere in the middle. This indicates that the study participants did not share a uniform notion of relevance. Further analysis of the commentaries provided by the physicians identified three possible contextual factors: the expected specificity of evidence (acute vs. chronic condition), the terminology used in the systematic reviews, and the microenvironment of the clinical setting.
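The abstract does not name the agreement statistic behind the conclusion that there was no consensus. As a minimal sketch of one standard option, the code below computes Fleiss' kappa over a matrix of ordinal relevance ratings (one row per vignette-review pair, one column per physician); the three-point rating scale and data layout are assumptions for illustration, not the study's actual analysis.

# Minimal sketch, assuming ratings coded 0 = irrelevant, 1 = somewhat
# relevant, 2 = very relevant, with one row per item and one column per rater.
import numpy as np

def fleiss_kappa(ratings, n_categories):
    """Fleiss' kappa for an (n_items, n_raters) integer array of category codes."""
    n_items, n_raters = ratings.shape
    # Count how many raters chose each category for each item.
    counts = np.zeros((n_items, n_categories))
    for c in range(n_categories):
        counts[:, c] = (ratings == c).sum(axis=1)
    # Observed agreement per item, then averaged.
    p_item = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_item.mean()
    # Chance agreement from the marginal category proportions.
    p_cat = counts.sum(axis=0) / (n_items * n_raters)
    p_e = np.square(p_cat).sum()
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 3 physicians rating 4 review-vignette pairs.
example = np.array([[2, 1, 0],
                    [1, 1, 2],
                    [0, 0, 1],
                    [2, 2, 2]])
print(fleiss_kappa(example, n_categories=3))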

Conclusions: There is no consensus among physicians with regard to what constitutes relevant clinical evidence for a given patient case. Consequently, this finding suggests that evidence retrieval systems should allow for deep customization to individual physicians' preferences and contextual factors, including differences in the microenvironment of each clinical setting.

* Supplementary material published on our website http://dx.doi.org/10.3414/ME15-01-0131


 