Appl Clin Inform 2024; 15(05): 1039-1048
DOI: 10.1055/a-2404-2129
State of the Art/Best Practice Paper

Realizing the Full Potential of Clinical Decision Support: Translating Usability Testing into Routine Practice in Health Care Operations

Authors

  • Swaminathan Kandaswamy

    1   Department of Pediatrics, Emory University School of Medicine, Atlanta, Georgia, United States
  • Herbert Williams

    2   Division of Information Systems and Technology, Children's Healthcare of Atlanta, Atlanta, Georgia, United States
  • Sarah Anne Thompson

    2   Division of Information Systems and Technology, Children's Healthcare of Atlanta, Atlanta, Georgia, United States
  • Thomas Elijah Dawson

    2   Division of Information Systems and Technology, Children's Healthcare of Atlanta, Atlanta, Georgia, United States
  • Naveen Muthu

    1   Department of Pediatrics, Emory University School of Medicine, Atlanta, Georgia, United States
    2   Division of Information Systems and Technology, Children's Healthcare of Atlanta, Atlanta, Georgia, United States
    3   Division of Hospital Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia, United States
  • Evan William Orenstein

    1   Department of Pediatrics, Emory University School of Medicine, Atlanta, Georgia, United States
    2   Division of Information Systems and Technology, Children's Healthcare of Atlanta, Atlanta, Georgia, United States
    3   Division of Hospital Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia, United States
 

Abstract

Background Clinical Decision Support (CDS) tools have a mixed record of effectiveness, often due to inadequate alignment with clinical workflows and poor usability. While there is a consensus that usability testing methods address these issues, in practice, usability testing is generally only used for selected projects (such as funded research studies). There is a critical need for CDS operations to apply usability testing to all CDS implementations.

Objectives In this State of the Art/Best Practice paper, we share challenges with scaling usability in health care operations and alternative methods and CDS governance structures to enable usability testing as a routine practice.

Methods We synthesize our experience, pairing each challenge with a proposed solution, and report the results of applying guerilla in situ usability testing to over 20 projects in a 1-year period.

Results We demonstrate the feasibility of adopting “guerilla in situ usability testing” in operations and its effectiveness in incorporating user feedback and improving design.

Conclusion Although some methodological rigor was relaxed to accommodate operational speed, the benefits outweighed the limitations. Broader adoption of usability testing may transform CDS implementation and improve health outcomes.


Background and Significance

Usability is the extent to which the user interaction with a system, product, or interface is effective, efficient, satisfactory, learnable, and memorable.[1] [2] [3] Electronic health records (EHRs) suffer from multiple usability issues such as poor visual displays, cluttered information, inappropriate defaults, unnecessary hard stops, and many other representation challenges.[4] [5] [6] There exist numerous usability testing approaches applicable to EHRs, including lab-based testing, A/B testing, task analysis, focus groups, interviews, card sorting, eye tracking, keystroke analysis, screen recording, heuristic evaluations, cognitive walk-throughs, function analysis, sequential pattern analysis, guerilla testing, and failure mode and effects analysis.[7] Nonetheless, poor usability of the EHR continues to contribute to patient safety issues, clinician burnout, and added costs for health systems.[8] [9] [10] [11] [12] [13] [14]

Ineffective clinical decision support (CDS) has been associated with ambiguous, inaccurate, or poorly timed alerts and other CDS formats that do not conform to clinical workflows.[15] CDS and related alarms are most often implemented as complex systems in sociotechnical settings that require a deep understanding of the work system to improve outcomes.[16] There exist many published examples of using human factors engineering principles, participatory design, and human-centered design methods to improve design, development, adoption of CDS, and associated outcomes.[17] [18] [19] [20] [21] [22] [23] [24] [25] [26] [27] [28] [29] [30] [31] [32] The involvement and input of end users in design and evaluation are recognized as a critical success factor for health information technology.[33] Many health systems engage in some form of participatory design of CDS with relevant clinical experts but do not consistently perform usability testing.[17] [34] While necessary, user preference alone is often insufficient to determine the optimal design for a specific goal.[35] Usability testing methods are generally accepted as best practices for CDS development to maximize effectiveness and minimize unintended consequences. However, most usability studies for CDS initiatives are done as one-off efforts or as part of research projects and are not applied to the majority of CDS that health systems put into production.[34]

Usability maturity models have been adapted to health care and focus on moving organizations from Phase 1 or “Unrecognized” need for usability up to Phase 5 or “Strategic” incorporation of usability into the evaluation of errors and implementation of new designs.[36] The Joint Commission and others have emphasized the significance of usability. However, organizations face significant challenges when transitioning from Phase 2 (“Preliminary”) to Phases 3 (“Implemented”) and 4 (“Integrated”). Challenges for scaling usability into routine practice in clinical operations include (1) difficulty recruiting representative users,[20] (2) inadequate resources (e.g., space, equipment, software) to conduct usability studies,[37] (3) time constraints and operational pressures that urge organizations to move on to the next project,[4] and (4) lack of human factors expertise and difficulty integrating human factors engineers within health care environments.[34]

To bridge this gap, Mann et al have proposed a hybrid approach aimed at satisfying both pragmatic and academic objectives with usability testing performed in operational contexts.[38] By relaxing certain standards of rigor (e.g., reducing the number of subjects to be tested, real-time analysis of errors vs. deeper review of audio and screen recordings) while preserving the core methods, the pragmatic approach should be feasible to perform with fewer resources and at a faster pace. However, even the case studies from Mann et al come from federally funded research studies. While these methods are critical for developing highly novel interfaces for complex problems, these approaches remain beyond the resource capabilities of most health systems, which have generally not invested in usability labs. It remains unknown whether such methods can be feasibly applied to most CDS that a health system implements and what the influence of such testing is on CDS design and the outcomes that are achieved.

There is a critical need to apply usability testing to all CDS implementations, not only research-funded or select use cases. In this State of the Art/Best Practice paper, we describe the challenges of making usability testing a routine practice in CDS operations and share lessons learned through initiatives at our institution to scale usability testing to most new CDS implementations.


Setting

This work was done in a tertiary care academic pediatric health system in the Southeastern United States with a single enterprise implementation of Epic Systems© as its EHR. Annually, this health system has approximately 1.1 million patient visits, including over 27,000 hospital discharges, 41,000 surgical patients (inpatient and outpatient), and 218,000 emergency department visits. The health system consists of three pediatric hospitals, a center for ambulatory care, an Autism Center, 18 neighborhood locations (including 8 urgent care centers), and 22 cardiology clinics. The work described in this State of the Art/Best Practice paper was primarily done in the three pediatric hospitals and the center for ambulatory care.


Challenges and Lessons Learned

Challenge Number 1: Engaging Representative Frontline Users

When performing usability testing, we want users to be a representative sample of the future user group, not only experienced or tech-savvy users who are more often part of technology design committees. Operational challenges such as high patient volumes, time constraints to participate in studies outside of clinical hours, and staffing issues limit recruitment, particularly for full-time busy clinicians who are likely the most important user base.

Solution Number 1: Perform Guerilla In Situ Usability Testing

Usability testing carried out locally within health care organizations that have purchased vendor systems and products is usually referred to as “in situ.”[39] Guerilla testing accelerates recruitment by approaching users in public spaces instead of scheduling usability sessions ahead of time. We combine the two: we physically approach clinicians in the clinical setting where the work of interest is actually done, selecting those who do not appear too busy while on service (e.g., in the clinical touchdown spaces where providers usually sit between seeing patients in the emergency department or clinic where the CDS will be shown).



Challenge Number 2: Unwillingness of Users to Participate

Many frontline users hesitate to participate in usability studies due to time constraints, lack of compensation, unfamiliarity with the usability team, and fear of being tested on their skill and/or clinical abilities.

Solution Number 2: Leverage Local Champions

We include members from our health system's EHR support on-site team in usability studies. In their regular jobs, this team helps frontline staff use and troubleshoot issues with the EHR on a day-to-day basis. They are, therefore, well known by physicians, nurses, and other clinicians. In our experience, recruitment rates for usability testing studies are much higher when these team members can identify a good time when prospective participants are likely to be available (e.g., immediately after rounds), call out individuals who do not appear busy, provide psychological safety since recruitment comes from a familiar face, and deliver a warm handoff to the usability team. Over time, the goodwill of these local champions rubs off on the usability team, who come to be seen as “insiders.”



Challenge Number 3: Longer Usability Testing Sessions

Typical usability sessions take approximately 1 hour. In our experience, this duration dissuades many clinicians from participating.

Solution Number 3: Truncated Experimental Design

Extrapolating from Nielsen Norman Group's description of discount usability testing,[40] we designed guerilla in situ usability sessions to last a maximum of 10 minutes. This approach requires compromising on some rigor, including (1) limiting the number of scenarios to one to two per participant, (2) using verbal member checking[41] of insights instead of recording sessions or using eye tracking or screen capture for deeper analysis later, and (3) prioritizing and testing only the most important design questions to reduce the number of testing tasks. While this approach risks incorrect or insufficient insights compared with longer sessions, we believe the insights that can be gained in shorter sessions are substantially better than gaining no insights at all from doing no usability testing.
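The sample-size tradeoff in point (1) can be grounded in the problem-discovery model popularized by Nielsen and Landauer, on which discount usability testing rests. The short sketch below is purely illustrative: it assumes the commonly cited average per-participant detection probability (λ ≈ 0.31), which in practice varies by interface, task, and user population.

```python
# Illustrative sketch of the Nielsen-Landauer problem-discovery model:
# the expected share of usability problems found by n participants is
# P(n) = 1 - (1 - lam)^n, where lam is the probability that a single
# participant reveals a given problem. lam = 0.31 is the commonly cited
# average, assumed here for illustration; real values vary widely.

def share_of_problems_found(n_participants: int, lam: float = 0.31) -> float:
    return 1.0 - (1.0 - lam) ** n_participants

for n in (1, 2, 3, 5, 10):
    print(f"{n:2d} participants -> ~{share_of_problems_found(n):.0%} of problems")
# With lam = 0.31, five participants surface roughly 85% of problems,
# which is why a handful of short sessions per iteration can be informative.
```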



Challenge Number 4: Longer Design Cycle Times

Many CDS systems are developed in response to critical safety events or poor outcomes with strong operational pressures to implement new systems immediately. By contrast, many traditional usability testing approaches require thorough evaluation between design changes. Each design cycle might take a few weeks as human factors engineers must organize insights for a separate team of engineers or analysts to implement.

Solution Number 4: Rapid Prototyping by Involving Builders and System Developers in the Testing Process

We cross-train human factors engineers in EHR build capabilities through formal training classes with vendors and involve EHR analysts in usability testing where they learn through observation and practice. As new lessons are learned or new hypotheses are generated in real-time during testing, these cross-trained usability team members can alter the CDS prototype even between participants within a single half-day session. This approach accelerates the number of prototypes tested per unit time compared to gathering insights from a series of participants in one testing session and awaiting a new build before being able to schedule and conduct the next testing session.
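To see why in-session iteration compounds, consider a back-of-the-envelope comparison; all numbers below are hypothetical, chosen only to illustrate the cycle-time arithmetic rather than to report measured values.

```python
# Hypothetical cycle-time arithmetic (illustrative numbers, not measurements).
# Traditional flow: insights are handed off to a separate build team, so each
# design iteration spans a full analyze -> rebuild -> reschedule cycle.
design_window_weeks = 8          # assume a two-month design window
traditional_cycle_weeks = 2      # assumed hand-off and rebuild time per cycle
traditional_iterations = design_window_weeks // traditional_cycle_weeks  # 4

# Cross-trained flow: a builder on the testing team revises the prototype
# between participants, so each half-day session can test several variants.
sessions_in_window = 4           # assume one guerilla session every 2 weeks
variants_per_session = 3         # assumed in-session revisions
in_situ_iterations = sessions_in_window * variants_per_session  # 12

print(f"Traditional: {traditional_iterations} design iterations in "
      f"{design_window_weeks} weeks")
print(f"Cross-trained in situ: {in_situ_iterations} design iterations")
```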



Challenge Number 5: Wait Times for Regulatory Institutional Review Board Approvals

Institutional Review Board (IRB) review of studies helps ensure that studies comply with regulations, ethical standards, and institutional policies. It also helps ensure that participants are adequately protected from study-related risks. While IRB approval is critical and helps protect the rights and welfare of human subjects, completing IRB protocols is often time-consuming, making it difficult to keep pace with operational requests.

Solution Number 5: Consider Usability Testing as a Part of Quality Improvement

IRB requirements largely apply to research projects. In research, the primary beneficiaries are other researchers, scholars, and practitioners in the field of study; dissemination of the results is intended to inform the field, and the results are expected to generalize to a larger population beyond the site of data collection and/or to be replicated in other settings. Unlike research, the primary intent of usability testing for CDS development is local improvement to benefit patients, families, and clinicians. In discussion with our local IRB, CDS development was deemed part of operational work. Thus, we consider most of the usability testing we do to be quality improvement and not human subjects research. However, this approach risks investigators inappropriately claiming their work as quality improvement to reduce administrative overhead. Thus, at the conceptual stage of each CDS project, our team determines whether we are likely to leverage the use case for research. When there is ambiguity, we create a brief protocol for the IRB to review for a non-human-subjects research determination. This approach requires much less work from the CDS team than a full protocol but allows the IRB to determine if a full protocol is necessary.
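As a summary of this triage, the decision logic we follow can be sketched as below; the categories and wording are our own simplification for illustration, not institutional policy or regulatory guidance.

```python
# Simplified sketch of the QI-vs-research triage described above.
# Categories and wording are illustrative, not institutional policy.
def irb_triage(intent_is_local_improvement: bool,
               likely_research_use: bool,
               ambiguous: bool) -> str:
    if likely_research_use:
        return "submit a full IRB protocol before testing"
    if ambiguous:
        return "submit a brief protocol for non-human-subjects determination"
    if intent_is_local_improvement:
        return "proceed as quality improvement (no IRB protocol required)"
    return "discuss with the IRB"

print(irb_triage(intent_is_local_improvement=True,
                 likely_research_use=False,
                 ambiguous=False))
```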




Summary of Our Approach

To address the issues identified above, we employ guerilla in situ usability testing, that is, directly in the clinical setting where the CDS will be used such as inpatient floors, clinic touchdown spaces, and other locations where clinicians interact with the EHR ([Fig. 1]). This approach reduces the back-and-forth of scheduling, recruits representative frontline users since those are the clinicians in the spaces of interest, and captures the interruptions and ambient environments of real clinical settings that can affect how users interact with CDS. While this approach reduces the ability to collect some sophisticated usability metrics (e.g., eye tracking, click counts, audio recordings), it nonetheless preserves key concepts like task completion, time on task, satisfaction, and qualitative feedback from representative users. With an interdisciplinary team ([Table 1]), we have operationalized this approach through the following:

Table 1

Guerilla in situ usability testing team composition[a]

  • Human factors engineer or other team member who has gained sufficient experience with in situ usability testing.

  • Clinical champion or locally known stakeholder (e.g., EHR support on-site team) who will be familiar to staff at the clinical site of interest.

  • Clinical subject matter expert who can design test scenarios and specify the right thing to do.

  • EHR analyst/builder who can set up test patients and adjust the design.

Abbreviation: EHR, electronic health record.

a Of note, a single team member (e.g., physician clinical informatician) may fulfill multiple roles.


Fig. 1 Guerilla in situ usability testing process. CDS, clinical decision support; EHR, electronic health record.
  1. Perform a basic user and task analysis through an informal focus group with the CDS requestors.

    • If at the end of stakeholder interviews (requestors, clinical experts, potential frontline users), the CDS team does not have a clear understanding of the tasks and current workflows (including specific EHR screens where the work is done), then a workflow observation should be performed and described using a swim lane workflow diagram or Systems Engineering Initiative for Patient Safety (SEIPS) representation.[42]

  2. Build candidate design in a test EHR environment.

  3. Develop testing scenarios based on feedback from stakeholders, safety reports, simplified failure modes and effects analysis, pre-identified heuristic problems, and/or common clinical use cases.

  4. Set up a test patient who fits the CDS criteria in the test EHR environment.

  5. Go to the clinical space of interest with a familiar stakeholder (e.g., EHR support on-site team) to find prospective participants who do not appear to be very busy and introduce the study team.

  6. Describe the think-aloud protocol[43] and provide participants with psychological safety. Specifically, we first let participants know that we are testing the interface and not their clinical knowledge or skills. We then ask participants to talk about what they are looking at, verbalize their thought process and the activities they are doing or want to do as part of the simulation, and note areas of confusion.

  7. Introduce the scenario and ask participants to use the new EHR interface to work through the use case.

  8. Observe and note down participants' perceptions and comprehension of information and actions within the EHR.

  9. Debrief at the end by specifying the design intent, eliciting participants' feedback, and member-checking notes (recording in clinical settings is generally impractical).

  10. If the design was unsuccessful, discuss potential alternatives with the participant.

  11. If required, make design changes in the test environment before additional testing.

  12. Iterate until there are no new practical, implementable learnings.
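For teams adopting the steps above, a lightweight structured note helps standardize what is captured in steps 6 to 10 and member-checked in step 9. The sketch below is one possible format; the field names are our own illustration, not a published standard.

```python
# A minimal sketch of a structured note for one guerilla in situ session.
# Field names are illustrative, not a published standard.
from dataclasses import dataclass, field

@dataclass
class SessionNote:
    project: str                      # e.g., "Duplicate PRN"
    scenario: str                     # the clinical use case tested
    participant_role: str             # e.g., "hospitalist", "bedside nurse"
    task_completed: bool              # did the participant finish the task?
    time_on_task_seconds: int | None  # rough timing, if captured
    confusion_points: list[str] = field(default_factory=list)
    participant_suggestions: list[str] = field(default_factory=list)
    member_checked: bool = False      # notes read back and confirmed (step 9)
    design_change_queued: str = ""    # change to make before next participant

note = SessionNote(
    project="Duplicate PRN",
    scenario="Order acetaminophen PRN with an existing ibuprofen PRN order",
    participant_role="bedside nurse",
    task_completed=True,
    time_on_task_seconds=140,
    confusion_points=["unclear which PRN to give first"],
    participant_suggestions=["state prioritization in the order question"],
    member_checked=True,
    design_change_queued="add prioritization language to order questions",
)
```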


Results from Using This Approach at One Institution

Our team performed its first usability tests in December 2018. While these methods were applied to a series of ad hoc projects after early successes, efforts slowed substantially during the pandemic. In June 2022, we updated our CDS governance process to mandate usability testing for all new CDS requests prior to implementation except for extremely urgent cases (e.g., new drug shortages). In 1 year (June 15, 2022–June 15, 2023), our CDS team recruited 219 participants in formative and/or summative usability testing for 20 unique projects employing this methodology ([Table 2]). Session lengths with each participant were generally 5 to 15 minutes. The duration from the initial CDS prototype to the final design was generally 1 to 2 months.

Table 2

Guerilla in situ usability testing projects

Project | Project aim | Number of users | Month and year | Any changes? | Cosmetic changes only | Structural changes | Associated publications/presentations
--- | --- | --- | --- | --- | --- | --- | ---
Blood Orders 2019 | Decrease blood product ordering error; improve ordering efficiency | 42 | December 2018 | Yes | No | Yes | [45] [48]
Ketogenic Diet | Reduce inappropriate prescription of carbohydrate-containing medications in hospitalized children on ketogenic diet | 25 | April 2019 | Yes | Yes | No | [58]
Influenza Vaccine v1 | Increase uptake of influenza vaccine in pediatric inpatient setting | 6 | June 2019 | Yes | Yes | No | [50]
Integrated Admission Order Set | Improve adoption of guideline order sets by admitting providers | 23 | January 2020 | Yes | No | Yes | [51] [59]
Metabolic Diseases in Emergency Department | Enable early recognition of patients with metabolic disease in the ED at risk of decompensation and aid disease-specific workups and labs required for patients | 6 | February 2020 | Yes | No | Yes | [54]
Central Venous Access Device | Improve documentation and recognition of key properties for appropriate care, maintenance, and removal of central venous access devices | 26 | May 2020 | Yes | No | Yes | [53]
Influenza Vaccine v2 | Increase uptake of influenza vaccine in pediatric inpatient setting | 6 | August 2020 | Yes | No | Yes | [60]
Blood Culture Volume | Improve collection of appropriate minimal volume for blood cultures | 3 | October 2020 | Yes | Yes | No | —
Status Epilepticus | Improve identification of benzodiazepine-resistant status epilepticus (BRSE) | 3 | February 2021 | Yes | No | Yes | [61]
Delayed Hemolytic Transfusion Reaction (DHTR) | Aid in early recognition and subsequent diagnosis of DHTRs in sickle cell disease patients | 5 | June 2022 | Yes | Yes | No | —
Duplicate PRN | Reduce therapeutic duplication in inpatient medication orders | 2 | June 2022 | Yes | No | Yes | [44]
E-Consent for Blood | Enable and improve adoption of electronic consent instead of paper forms | 15 | June 2022 | Yes | No | Yes | —
Elopement | Enable identification of patients at risk for elopement and improve situation awareness so that measures can be taken to prevent elopement | 15 | June 2022 | Yes | No | Yes | [62]
Blood Orders 2023 | Decrease blood product ordering error; improve ordering efficiency | 42 | July 2022 | Yes | No | Yes | [47]
Non-Accidental Trauma | Improve recognition of nonaccidental trauma and standardize subsequent evaluation | 11 | July 2022 | Yes | No | Yes | —
Peanut Allergy | Improve early peanut introduction during well-child visit and increase anticipatory guidance | 6 | July 2022 | Yes | No | Yes | [63] [64]
Discharge Subcutaneous Medication | Increase the number of patients discharged appropriately with syringes (and vials if appropriate) to measure correct dose at home | 6 | August 2022 | Yes | No | Yes | [65]
Dosing Weight | Improve documentation of dosing weight in patients with >10% difference between regular and dosing weight to reduce medication dosing errors | 6 | August 2022 | Yes | No | Yes | [66] [68]
Renal Dosing | Enable recognition of patients with renal insufficiency and improve dose adjustments for renal-impaired patients | 22 | November 2022 | Yes | No | Yes | [67]
Intravenous Promethazine | Reduce the use of IV promethazine where an appropriate alternative exists and improve safety of IV promethazine administration | 5 | November 2022 | Yes | Yes | No | —
Total Parenteral Nutrition Administration | Improve rate of total parenteral nutrition administration per guidelines | 4 | November 2022 | Yes | Yes | No | —
Keppra Dosing | Improve timeliness and appropriate dosing of antiseizure medication administration in patients with BRSE | 3 | January 2023 | Yes | No | Yes | —
ED Boarder | Improve recognition of boarder patients (patients admitted to the floor but waiting in the ED due to lack of a bed) and reduce delays in order release and patient care | 5 | January 2023 | Yes | No | Yes | —
Nothing by Mouth (NPO) Time | Reduce preprocedural fasting times without aspiration events or cancelled procedures | 13 | February 2023 | Yes | No | Yes | —
Sickle Cell Disease Pain Plan | Improve perception of and adherence to individualized pain plans in sickle cell disease patients | 6 | February 2023 | Yes | No | Yes | —
HIV Opt-Out Testing | Improve testing for HIV in eligible patients using an opt-out strategy | 10 | March 2023 | Yes | No | Yes | —
Contraception | Increase contraception counseling rates and prescriptions provided at discharge for adolescents | 10 | April 2023 | Yes | No | Yes | —
Code Status | Improve order and documentation accuracy for code status changes | 17 | May 2023 | Yes | Yes | No | —
Enoxaparin | Improve appropriate dosing of enoxaparin | 1 | May 2023 | Yes | No | Yes | —
Human Milk | Reduce wrong-patient human milk exposures | 10 | May 2023 | Yes | No | Yes | —
Medication Readiness for Discharge | Improve time to discharge patients as soon as medically and logistically feasible | 5 | June 2023 | Yes | No | Yes | —
Of the 30 projects in which we employed this methodology from 2018 through June 2023, at least one CDS design change was made in all cases. In 7/30 (23%), only cosmetic changes were made, that is, edits to the wording, font size, color, layout, images, or acknowledgment buttons in the CDS. However, in 23/30 (77%) of cases, structural changes were made, such as new CDS artifacts or changes in the CDS channel, target users, patient population, timing, branching logic, or underlying workflows. For example, a CDS request was made due to regulatory concerns about duplicate pro re nata (PRN) indications, particularly for acetaminophen and ibuprofen. The initial CDS design involved alerts that fired when multiple such orders were present, without text indicating how nursing should prioritize them. After usability testing, the format of this CDS changed substantially from an alert to in-line order questions for common PRN indications with additional language to help with prioritization.[44] Similarly, the initial order set design by a committee of relevant stakeholders for blood products was found in usability testing to lead to many ordering errors, ultimately requiring a complete overhaul that has been highlighted in separate publications.[45] [46] [47]
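The split between cosmetic and structural changes reported above is a simple tally over the projects in [Table 2]; the snippet below merely reproduces that arithmetic for transparency.

```python
# Reproducing the change-type tally reported above (counts from Table 2).
# Cosmetic = wording, font size, color, layout, images, or acknowledgment
# buttons; structural = new artifacts or changes to CDS channel, target
# users, patient population, timing, branching logic, or workflows.
cosmetic_only = 7
structural = 23
total = cosmetic_only + structural  # 30 projects, 2018 through June 2023

print(f"Cosmetic only: {cosmetic_only}/{total} = {cosmetic_only / total:.0%}")
print(f"Structural:    {structural}/{total} = {structural / total:.0%}")
# Cosmetic only: 7/30 = 23%;  Structural: 23/30 = 77%
```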

In addition to the operational and quality improvement impact, this work has also resulted in journal publications and conference presentations as well as serving as preliminary data for federally funded grants, further aligning the academic and operational missions of the organization.[44] [45] [46] [47] [48] [49] [50] [51] [52] [53] [54] [55] [56] [57] [58] [59] [60] [61] [62] [63] [64] [65] [66] [67] [68]


Discussion

The guerilla in situ usability testing approach, combined with CDS governance processes requiring usability testing for all but the most urgent projects, has uniformly led to CDS design changes and substantially improved the effectiveness of CDS at our institution. Our project duration with in situ usability testing (1–2 months) was substantially shorter than the time required for traditional approaches (11–16 months) for two similar use cases in Mann et al.[38] While we have not compared the production performance of most of these CDS designs before and after usability testing, we have shown in simulation that many of our post-usability CDS designs led to fewer errors and better adherence to evidence-based practices.[45] [52] [58] [63] In the prior operational model based on expert CDS design alone, all of these usability problems would have been detected only after going live, if they were detected at all. Without the insights gained through this testing, many more of these CDS implementations would likely not have changed behavior, failing to meet the potential of CDS to improve outcomes and potentially worsening clinician experience and/or patient safety through unnecessary alert fatigue. While none of the specific approaches described in this paper are novel to the usability research literature, the combination of guerilla recruiting, in situ testing, and training EHR analysts to participate in usability testing and rapidly iterate between participants has allowed our team to scale the application of usability methods to a larger fraction of CDS implementations.

There were several organizational and strategic process changes required to enable us to scale usability studies for operational projects. First, we needed strong commitment from leadership to go through a process that might slow down project rollout initially but would be beneficial in the long run. We needed to alter CDS build and implementation processes and bake in enough time to do at least formative usability testing. Second, we needed the right team ([Table 1]) including interdisciplinary expertise in CDS development, human factors personnel integrated into the CDS operations team, and well-known or familiar stakeholders who could bridge clinical and usability teams. Finally, we needed documentation workflows to quickly gather insights and forums to share lessons learned and crowd-source design ideas based on insights from usability testing.

We relaxed certain elements of methodological rigor. Specifically, we did not utilize audio or video recordings or follow rigorous qualitative analytic processes such as transcribing participant feedback and performing thematic analyses.[69] We aimed to compensate by member-checking notes at the end of each testing session with participants. The inability to test many scenarios with a single participant also limits the use of randomized block designs and can risk premature closure based on a small sample size between iterations. However, we believe this risk is mitigated as subsequent designs are also tested, and we stop testing only when no new insight is gained. While some risks introduced by designs may be missed if representative scenarios are not used, we believe usability testing with a small sample of scenarios is better than no usability testing at all,[40] which remains the default for most CDS implementations in health care.

Researchers have long advocated for incorporating usability evaluation into system development to positively influence health care processes and outcomes.[70] [71] [72] However, many descriptions of these evaluations require rigorous, resource-intensive approaches that are difficult to implement routinely without dedicated funding.[73] The modified approach described in this paper improved our own adoption of usability testing methods for operational projects.

To clarify, comprehensive, well-controlled research projects remain the standard for creating generalizable knowledge and should be employed whenever possible, while the modified, more feasible approach we describe in this paper is appropriate for ensuring that usability testing is applied to a larger fraction of the EHR changes put into production by health systems. We also believe that adopting this approach in operational work may improve participant recruitment and diversity as more frontline users are exposed to and participate in such studies.

In a review of usability studies on health IT, researchers found that most evaluations are done late in the system development life cycle (SDLC), for example, during integration of the system into the environment or routine use, rather than in earlier stages such as system specification, where many barriers can be addressed more easily.[7] Our approach incorporates usability evaluation throughout the SDLC, including workflow analysis, prototype development, and iterative development through formative testing. In situ testing bridges the gap between naturalistic approaches (i.e., unobtrusive observations and ethnographic studies that capture realistic behaviors but do not compare design effectiveness) and more controlled experimental studies (i.e., simulations in artificial laboratory environments that can explicitly identify superior designs).[71] Our approach provides a high degree of experimental control while preserving a high degree of realism for participants during testing, allowing the investigator to observe the influence of real-world factors such as time pressure, environment, and interruptions.

While we have delineated the expertise requirements to perform guerilla in situ usability testing ([Table 1]), many health care settings, particularly those outside academic and hospital-based contexts, lack these resources. Nonetheless, we believe that practices with some control over EHR configuration can apply these principles within their sphere of control for more rapid improvements in user experience. EHR vendors could also employ these techniques to impact clinical users more broadly, even within health care settings that lack their own informatics or human factors expertise.


Conclusion

CDS can improve guideline adherence and use of evidence-based practices that help achieve better patient outcomes, improved experience for patients and clinicians, reduced costs, and health equity. However, inappropriately designed CDS remains the norm in most health systems. Our results show the feasibility of performing usability testing at scale in health care operations using guerilla in situ usability testing described in this State of the Art/Best Practice paper. Broader uptake of usability testing has the potential to change the course of CDS and ultimately health outcomes through the efficient application of human factors methods not only for select use cases but for every CDS implementation and update.


Clinical Relevance Statement

CDS often fails due to inadequate alignment with clinical workflows and poor usability. Usability testing can improve CDS design. However, these methods are mostly adopted in research contexts and rarely in operational projects. This article describes an approach, guerilla in situ usability testing, to address challenges with adopting usability testing in health care operations. Broader uptake of usability testing has the potential to change the course of CDS in health care.


Multiple-Choice Questions

  1. Which of these aspects are evaluation goals for usability?

    a. Effectiveness

    b. Efficiency

    c. Satisfaction

    d. All of the above.

    Correct Answer: The correct answer is option d. All of the above. Per the ISO usability standard (ISO 9241, Part 11), usability is the intersection of effectiveness, efficiency, and satisfaction in a context of use.

  2. Apart from human factors/usability expert, what roles are required for guerilla in situ usability testing?

    a. Locally known stakeholder

    b. Clinical subject matter expert

    c. EHR analyst/builder

    d. All of the above.

    Correct Answer: The correct answer is option d. All of the above. We need a locally known stakeholder who can act as a liaison with clinicians, provide a warm handoff to the usability testing team, and help with recruitment; they also know good times to conduct testing in situ. Clinical subject matter experts are required to design test scenarios and specify the right thing to do. We also need an EHR analyst/builder who can set up test patients and adjust the design as needed.



Conflict of Interest

E.O. and N.M. are the cofounders and have equity in Phrase Health, a CDS analytics company. They are the Investigators on an R42 grant with Phrase Health from the National Library of Medicine (NLM) and the National Center for Advancing Translational Science (NCATS). Both of them receive salary support from the NLM and NCATS but no direct revenue from Phrase Health. Other authors have nothing to disclose.

Protection of Human Subjects

No human subjects were involved in this perspective. In discussion with the Children's Healthcare of Atlanta IRB, projects applying guerilla in situ usability testing were deemed as quality improvement projects and therefore nonhuman subjects research.



Address for correspondence

Swaminathan Kandaswamy
PhD
Department of Pediatrics, Emory University School of Medicine
2015 Uppergate Dr, Atlanta, GA 30322
United States   

Publication History

Received: 03 June 2024

Accepted: 25 August 2024

Accepted Manuscript online:
27 August 2024

Article published online:
04 December 2024

© 2024. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

