Appl Clin Inform 2010; 01(03): 268-285
DOI: 10.4338/ACI-2010-03-RA-0020
Research Article
Schattauer GmbH

Measurement of CPOE end-user satisfaction among ICU physicians and nurses

P.L.T. Hoonakker
1   Center for Quality and Productivity Improvement, University of Wisconsin-Madison, Madison, Wisconsin
,
P. Carayon
1   Center for Quality and Productivity Improvement, University of Wisconsin-Madison, Madison, Wisconsin
2   Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, Wisconsin
,
J.M. Walker
3   Geisinger Health System, Danville, Pennsylvania

Publication History

received: 18 March 2010

accepted: 25 July 2010

Publication Date:
16 December 2017 (online)

Summary

Background: Implementation of Computerized Provider Order Entry (CPOE) can fail or meet high levels of user resistance for a variety of reasons, including lack of attention to users’ needs and the significant workflow changes induced and required by CPOE. End-user satisfaction is a critical factor in IT implementation.

Objective: The goal of this study was to identify criteria to select a valid and reliable questionnaire to measure end-user satisfaction with CPOE.

Methods: We developed seven criteria that can be used to select valid and reliable questionnaires. We applied the selection criteria to existing end-user satisfaction questionnaires.

Results: Most of the questionnaires used to measure end-user satisfaction have been tested for reliability and validity; most show reasonable reliability and some form of validity. However, only one questionnaire, the Physician Order Entry User Satisfaction and Usage Survey (POESUS), met most of the other criteria we developed for selecting a questionnaire to evaluate CPOE implementation. We used the POESUS in our study and compared the results with those of other studies. Results show that users are moderately satisfied with CPOE.

Conclusion: Using the seven criteria we developed, it is possible to select reliable and valid questionnaires. We hope that this will lead to an increasing number of studies using the same questionnaires, which will improve the possibilities for comparing the results of one study to another (benchmarking).
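As an illustration of the reliability testing discussed in the Results, internal consistency of a questionnaire scale is commonly summarized with Cronbach's alpha, computed from respondents' item scores. The sketch below is not taken from the paper; the scale and scores are invented for demonstration only.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: one list of scores per questionnaire item,
    each list holding one score per respondent.
    """
    k = len(items)           # number of items in the scale
    n = len(items[0])        # number of respondents

    def var(xs):
        # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var(col) for col in items]
    # total scale score per respondent
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(item_vars) / var(totals))

# Hypothetical 3-item satisfaction scale, 4 respondents, scores 1-5
scores = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 2, 4],
]
alpha = cronbach_alpha(scores)  # approximately 0.82
```

Values of alpha above about 0.70 are conventionally taken to indicate acceptable internal consistency (Nunnally, Psychometric Theory), which is the kind of threshold a reliability criterion for questionnaire selection would draw on.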
