Methods Inf Med 2006; 45(01): 67-72
DOI: 10.1055/s-0038-1634039
Original Article
Schattauer GmbH

Lest Formalisms Impede Insight and Success: Evaluation in Health Informatics

A Case Study
J. R. Moehr 1, C. Anglin 2, J. Schaafsma 3, S. Pantazi 1, N. Grimm 1

1   School of Health Information Science, University of Victoria, Victoria, British Columbia, Canada
2   Consultant, Victoria, British Columbia, Canada
3   Department of Economics, University of Victoria, Victoria, British Columbia, Canada
Publication History

Publication Date:
06 February 2018 (online)
Summary

Objectives: To illustrate the advantages of an open-ended, formative evaluation approach using a project-specific selection of methods over the controlled trial approach in the evaluation of health information systems, and to identify factors leading to success, and others impeding it, in a telehealth project.

Methods: The methods and results of an evaluation of the BC Telehealth Program are summarized.

Results: The evaluation gave a comprehensive picture of the project, including an assessment of the effects of an array of telehealth applications and of their economic impact. Factors leading to success, and others preventing it, are identified at levels ranging from overall program management to project specifics. The results include unanticipated effects and explanations of why they occurred. Neither this comprehensiveness of information nor its timeliness was achieved in a related project that used a controlled trial approach.

Conclusions: Not all types of health information system projects can be evaluated with the controlled trial approach. This approach may impede important insights, and it is also usually much less efficient. Funding agencies and journal editors need to take this into account when selecting projects for funding and submissions for publication.

  • 22 Holbrook A, Keshavje K, Troyan S. Does Electronic Monitoring and Advice Improve Diabetes Care? The Compete II Randomized Trial. In: e-health 2004. 2004, Canadian Institute for Health Information COACH, Canada’s Health Informatics Association:: Victoria, B.C., Canada.. http://www.e-healthconference.com/downloads/5–10_keshavjee.pdf