DOI: 10.1055/a-2404-2129
Realizing the Full Potential of Clinical Decision Support: Translating Usability Testing into Routine Practice in Health Care Operations
Abstract
Background Clinical Decision Support (CDS) tools have a mixed record of effectiveness, often due to inadequate alignment with clinical workflows and poor usability. While there is a consensus that usability testing methods address these issues, in practice, usability testing is generally only used for selected projects (such as funded research studies). There is a critical need for CDS operations to apply usability testing to all CDS implementations.
Objectives In this State of the Art/Best Practice paper, we share challenges with scaling usability in health care operations and alternative methods and CDS governance structures to enable usability testing as a routine practice.
Methods We present our proposed solution alongside our experience and the results of applying guerilla in situ usability testing to over 20 projects in a 1-year period.
Results We demonstrate the feasibility of adopting "guerilla in situ usability testing" in operations and its effectiveness in incorporating user feedback and improving design.
Conclusion Although some methodological rigor was relaxed to accommodate operational speed, the benefits outweighed the limitations. Broader adoption of usability testing may transform CDS implementation and improve health outcomes.
Background and Significance
Usability is the extent to which the user interaction with a system, product, or interface is effective, efficient, satisfactory, learnable, and memorable.[1] [2] [3] Electronic health records (EHRs) suffer from multiple usability issues such as poor visual displays, cluttered information, inappropriate defaults, unnecessary hard stops, and many other representation challenges.[4] [5] [6] There exist numerous usability testing approaches applicable to the EHRs including lab-based testing, A/B testing, task analysis, focus groups, interviews, card sorting, eye tracking, keystroke analysis, screen recording, heuristic evaluations, cognitive walk-throughs, function analysis, sequential pattern analysis, guerilla testing, and failure mode and effects analysis.[7] Nonetheless, poor usability of the EHR continues to contribute to patient safety issues, clinician burnout, and added costs for the health systems.[8] [9] [10] [11] [12] [13] [14]
Ineffective clinical decision support (CDS) has been associated with ambiguous, inaccurate, or poorly timed alerts and other CDS formats that do not conform to clinical workflows.[15] CDS and related alarms are most often implemented as complex systems in sociotechnical settings that require a deep understanding of the work system to improve outcomes.[16] There exist many published examples of using human factors engineering principles, participatory design, and human-centered design methods to improve design, development, adoption of CDS, and associated outcomes.[17] [18] [19] [20] [21] [22] [23] [24] [25] [26] [27] [28] [29] [30] [31] [32] Involvement and input of end users in the design and evaluation is recognized as a critical success factor for health information technology.[33] Many health systems engage in some form of participatory design of CDS with relevant clinical experts but do not consistently perform usability testing.[17] [34] While necessary, user preference is often insufficient to determine the optimal design for a specific goal.[35] Usability testing methods are generally accepted as best practices for CDS development to maximize effectiveness and minimize unintended consequences. However, most usability studies for CDS initiatives are done as one-off efforts or as part of research projects and are not applied to the majority of CDS that health systems put into production.[34]
Usability maturity models have been adapted to health care and focus on moving organizations from Phase 1 or “Unrecognized” need for usability up to Phase 5 or “Strategic” incorporation of usability into the evaluation of errors and implementation of new designs.[36] The Joint Commission and others have emphasized the significance of usability. However, organizations face significant challenges when transitioning from Phase 2 (“Preliminary”) to Phases 3 (“Implemented”) and 4 (“Integrated”). Challenges for scaling usability into routine practice in clinical operations include (1) difficulty recruiting representative users,[20] (2) inadequate resources (e.g., space, equipment, software) to conduct usability studies,[37] (3) time constraints and operational pressures that urge organizations to move on to the next project,[4] and (4) lack of human factors expertise and difficulty integrating human factors engineers within health care environments.[34]
To bridge this gap, Mann et al have proposed a hybrid approach aimed at satisfying both pragmatic and academic objectives with usability testing performed in operational contexts.[38] By relaxing certain standards of rigor (e.g., reducing the number of subjects to be tested, real-time analysis of errors vs. deeper review of audio and screen recordings) while preserving the core methods, the pragmatic approach should be feasible to perform with fewer resources and at a faster pace. However, even the case studies from Mann et al come from federally funded research studies. While these methods are critical for developing highly novel interfaces for complex problems, these approaches remain beyond the resource capabilities of most health systems, which have generally not invested in usability labs. It remains unknown whether such methods can be feasibly applied to most CDS that a health system implements and what the influence of such testing is on CDS design and the outcomes that are achieved.
There is a critical need to apply usability testing to all CDS implementations, not only research-funded or select use cases. In this State of the Art/Best Practice paper, we describe the challenges of making usability testing a routine practice in CDS operations and share lessons learned through initiatives at our institution to scale usability testing to most new CDS implementations.
Setting
This work was done in a tertiary care academic pediatric health system in the Southeastern United States with a single enterprise implementation of Epic Systems© as its EHR. Annually, this health system has approximately 1.1 million patient visits including over 27,000 hospital discharges, 41,000 surgical patients (inpatient and outpatient), and 218,000 emergency department visits. The health system consists of three pediatric hospitals, a center for ambulatory care, an Autism Center, and 18 neighborhood locations including 8 urgent care centers and 22 cardiology clinics. The work described in this State of the Art/Best Practice paper was primarily done in the three pediatric hospitals and the center for ambulatory care.
Challenges and Lessons Learned
Challenge Number 1: Engaging Representative Frontline Users
When performing usability testing, we want users to be a representative sample of the future user group, not only the experienced or tech-savvy users who more often sit on technology design committees. Operational challenges such as high patient volumes, time constraints that preclude participation outside of clinical hours, and staffing issues limit recruitment, particularly of busy full-time clinicians, who are likely the most important user base.
Solution Number 1: Perform Guerilla In Situ Usability Testing
Usability testing carried out locally within health care organizations that have purchased vendor systems and products is usually referred to as "in situ."[39] Guerilla testing accelerates recruitment by approaching users in public spaces instead of scheduling usability sessions ahead of time. We combine these two approaches, physically approaching clinicians who do not appear too busy while on service, in the clinical setting where the work of interest is actually done (e.g., the clinical touchdown spaces where providers usually sit between seeing patients in the emergency department or clinic where the CDS will be shown).
Challenge Number 2: Unwillingness of Users to Participate
Many frontline users hesitate to participate in usability studies due to time constraints, lack of compensation, unfamiliarity with the usability team, and fear of being tested on their skill and/or clinical abilities.
Solution Number 2: Leverage Local Champions
We include members of our health system's EHR support on-site team in usability studies. In their regular jobs, this team helps frontline staff on a day-to-day basis to use and troubleshoot issues with the EHR. They are, therefore, well known to physicians, nurses, and other clinicians. In our experience, recruitment rates into usability testing studies are much higher when these team members can identify a good time when prospective participants are likely to be available (e.g., immediately after rounds), call out individuals who do not appear busy, provide psychological safety since recruitment comes from a familiar face, and deliver a warm handoff to the usability team. Over time, the goodwill of these local champions rubs off on the usability team, who come to be seen as "insiders."
Challenge Number 3: Longer Usability Testing Sessions
Typical usability sessions take approximately 1 hour. In our experience, this duration dissuades many clinicians from participating.
Solution Number 3: Truncated Experimental Design
Extrapolating from the Nielsen Norman Group's description of discount usability testing,[40] we designed guerilla in situ usability sessions to last a maximum of 10 minutes. This approach requires compromising on some rigor: (1) limiting the number of scenarios to one or two per participant; (2) using verbal member checking[41] of insights instead of recording sessions or using eye tracking or screen capture for deeper later analysis; and (3) prioritizing and testing only the most important design questions to reduce the number of testing tasks. While this approach risks incorrect or insufficient insights compared with longer sessions, we believe the insights that can be gained in shorter sessions are substantially better than gaining no insights at all from doing no usability testing.
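The trade-off behind discount usability testing can be made concrete with the problem-discovery model popularized by Nielsen and Landauer: the expected fraction of distinct usability problems found by n participants is 1 − (1 − p)^n, where p is the probability that a single participant encounters a given problem (roughly 0.31 in their aggregate data, though it varies widely by study). The sketch below is illustrative only and is not taken from this paper:

```python
def problems_found(n_users: int, p: float = 0.31) -> float:
    """Expected fraction of distinct usability problems uncovered by n users,
    per the problem-discovery model 1 - (1 - p)^n (Nielsen & Landauer).
    The default p = 0.31 is their aggregate estimate, not a universal constant."""
    return 1.0 - (1.0 - p) ** n_users

if __name__ == "__main__":
    # Diminishing returns: a handful of users already uncovers most problems.
    for n in (1, 2, 3, 5, 10):
        print(f"{n:>2} users -> {problems_found(n):.0%} of problems found")
```

Under these assumptions, five users are expected to uncover roughly 84% of problems, which is the usual argument for many short sessions over one exhaustive study.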
Challenge Number 4: Longer Design Cycle Times
Many CDS systems are developed in response to critical safety events or poor outcomes with strong operational pressures to implement new systems immediately. By contrast, many traditional usability testing approaches require thorough evaluation between design changes. Each design cycle might take a few weeks as human factors engineers must organize insights for a separate team of engineers or analysts to implement.
Solution Number 4: Rapid Prototyping by Involving Builders and System Developers in the Testing Process
We cross-train human factors engineers in EHR build capabilities through formal training classes with vendors and involve EHR analysts in usability testing where they learn through observation and practice. As new lessons are learned or new hypotheses are generated in real-time during testing, these cross-trained usability team members can alter the CDS prototype even between participants within a single half-day session. This approach accelerates the number of prototypes tested per unit time compared to gathering insights from a series of participants in one testing session and awaiting a new build before being able to schedule and conduct the next testing session.
Challenge Number 5: Wait Times for Regulatory Institutional Review Board Approvals
Institutional Review Board (IRB) review of studies helps ensure that studies comply with regulations, ethical standards, and institutional policies. It also helps ensure that participants are adequately protected from study-related risks. While IRB approval is critical and helps protect the rights and welfare of human subjects, completing IRB protocols is often time-consuming, making it difficult to keep pace with operational requests.
Solution Number 5: Consider Usability Testing as a Part of Quality Improvement
IRB requirements largely apply to research projects. In research work, the primary beneficiaries of the research are other researchers, scholars, and practitioners in the field of study; dissemination of the results is intended to inform the field of study and the results are expected to be generalized to a larger population beyond the site of data collection and/or to be replicated in other settings. Unlike research, the primary intent of usability testing for CDS development is local improvement to benefit patients, families, and clinicians. In discussion with our local IRB, CDS development was deemed to be part of operational work. Thus, we consider most of the usability testing we do as Quality Improvement and not human subjects research. However, this approach risks investigators inappropriately claiming their work as quality improvement to reduce administrative overhead. Thus, at the conceptual stages of each CDS project, our team determines if we are likely to leverage the use case for research. When there is ambiguity, we create a brief protocol for the IRB to review for a nonhuman subjects research determination. This approach requires much less work from the CDS team than a full protocol but allows the IRB to determine if a full protocol is necessary.
Summary of Our Approach
To address the issues identified above, we employ guerilla in situ usability testing, that is, directly in the clinical setting where the CDS will be used such as inpatient floors, clinic touchdown spaces, and other locations where clinicians interact with the EHR ([Fig. 1]). This approach reduces the back-and-forth of scheduling, recruits representative frontline users since those are the clinicians in the spaces of interest, and captures the interruptions and ambient environments of real clinical settings that can affect how users interact with CDS. While this approach reduces the ability to collect some sophisticated usability metrics (e.g., eye tracking, click counts, audio recordings), it nonetheless preserves key concepts like task completion, time on task, satisfaction, and qualitative feedback from representative users. With an interdisciplinary team ([Table 1]), we have operationalized this approach through the following:
Abbreviation: EHR, electronic health record.
a Of note, a single team member (e.g., a physician clinical informatician) may fulfill multiple roles.
- Perform a basic user and task analysis through an informal focus group with the CDS requestors.
- If at the end of stakeholder interviews (requestors, clinical experts, potential frontline users), the CDS team does not have a clear understanding of the tasks and current workflows (including specific EHR screens where the work is done), then a workflow observation should be performed and described using a swim lane workflow diagram or Systems Engineering Initiative for Patient Safety (SEIPS) representation.[42]
- Build a candidate design in a test EHR environment.
- Develop testing scenarios based on feedback from stakeholders, safety reports, simplified failure modes and effects analysis, pre-identified heuristic problems, and/or common clinical use cases.
- Set up a test patient who fits the CDS criteria in the test EHR environment.
- Go to the clinical space of interest with a familiar stakeholder (e.g., EHR support on-site team) to find prospective participants who do not appear to be very busy and introduce the study team.
- Describe the think-aloud protocol[43] and provide participants with psychological safety. Specifically, we first let participants know that we are testing the interface and not their clinical knowledge or skills. We then ask participants to talk about what they are looking at, to verbalize their thought process and the activities they are doing or want to do as part of the simulation, and to note areas of confusion.
- Introduce the scenario and ask participants to use the new EHR interface to work through the use case.
- Observe and note down participants' perceptions and comprehension of information and actions within the EHR.
- Debrief at the end by specifying the design intent, eliciting participants' feedback, and member-checking notes (recording in clinical settings is generally impractical).
- If the design was unsuccessful, discuss potential alternatives with the participant.
- If required, make design changes in the test environment before additional testing.
- Iterate until there are no new practical, implementable learnings.
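The iterative core of the steps above can be sketched as a simple loop. All function names here (`recruit`, `test_session`, `revise`) are hypothetical placeholders standing in for the manual activities described in this paper, not an API the authors use:

```python
def run_usability_cycle(prototype, scenarios, recruit, test_session, revise):
    """Iterate short in situ sessions until no new actionable insight emerges.

    Hypothetical stand-ins for the manual steps described above:
      recruit      -- warm handoff from a local champion to a willing clinician
      test_session -- a <=10-minute think-aloud session, returning observed insights
      revise       -- rapid in-session rebuild by a cross-trained analyst
    """
    insights = set()
    while True:
        participant = recruit()
        found = test_session(participant, prototype, scenarios)
        actionable = found - insights
        if not actionable:
            # Stop when a session yields no new practical, implementable learnings.
            return prototype, insights
        insights |= actionable
        prototype = revise(prototype, actionable)
```

The key design choice the paper describes is that `revise` can run between participants within a single half-day session, because the builders are part of the testing team.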
Results from Using This Approach at One Institution
Our team performed its first usability tests in December 2018. While these methods were applied to a series of ad hoc projects after early successes, efforts slowed substantially during the pandemic. In June 2022, we updated our CDS governance process to mandate usability testing for all new CDS requests prior to implementation except for extremely urgent cases (e.g., new drug shortages). In 1 year (June 15, 2022–June 15, 2023), our CDS team recruited 219 participants in formative and/or summative usability testing for 20 unique projects employing this methodology ([Table 2]). Session lengths with each participant were generally 5 to 15 minutes. The duration from the initial CDS prototype to the final design was generally 1 to 2 months.
| Project | Project aim | Number of users | Month and year | Any changes? | Cosmetic changes only | Structural changes | Associated publications/presentations |
|---|---|---|---|---|---|---|---|
| Blood Orders 2019 | Decrease blood product ordering error; improve ordering efficiency | 42 | December 2018 | Yes | No | Yes | |
| Ketogenic Diet | Reduce inappropriate prescription of carbohydrate-containing medications in hospitalized children on ketogenic diet | 25 | April 2019 | Yes | Yes | No | [58] |
| Influenza Vaccine v1 | Increase uptake of influenza vaccine in pediatric inpatient setting | 6 | June 2019 | Yes | Yes | No | [50] |
| Integrated Admission Order Set | Improve adoption of guideline order sets by admitting providers | 23 | January 2020 | Yes | No | Yes | |
| Metabolic Diseases in Emergency Department | Enable early recognition of patients with metabolic disease in the ED at risk of decompensation and aid disease-specific workups and labs required for patients | 6 | February 2020 | Yes | No | Yes | [54] |
| Central Venous Access Device | Improve documentation and recognition of key properties for appropriate care, maintenance, and removal of central venous access devices | 26 | May 2020 | Yes | No | Yes | [53] |
| Influenza Vaccine v2 | Increase uptake of influenza vaccine in pediatric inpatient setting | 6 | August 2020 | Yes | No | Yes | [60] |
| Blood Culture Volume | Improve collection of appropriate minimal volume for blood cultures | 3 | October 2020 | Yes | Yes | No | – |
| Status Epilepticus | Improve identification of benzodiazepine-resistant status epilepticus (BRSE) | 3 | February 2021 | Yes | No | Yes | [61] |
| Delayed Hemolytic Transfusion Reaction (DHTR) | Aid in early recognition and subsequent diagnosis of DHTRs in sickle cell disease patients | 5 | June 2022 | Yes | Yes | No | – |
| Duplicate PRN | Reduce therapeutic duplication in inpatient medication orders | 2 | June 2022 | Yes | No | Yes | [44] |
| E-Consent for Blood | Enable and improve adoption of electronic consent instead of paper forms | 15 | June 2022 | Yes | No | Yes | – |
| Elopement | Enable identification of patients at risk for elopement and improve situation awareness so that measures can be taken to prevent elopement | 15 | June 2022 | Yes | No | Yes | [62] |
| Blood Orders 2023 | Decrease blood product ordering error; improve ordering efficiency | 42 | July 2022 | Yes | No | Yes | [47] |
| Non-Accidental Trauma | Improve recognition of nonaccidental trauma and standardize subsequent evaluation | 11 | July 2022 | Yes | No | Yes | – |
| Peanut Allergy | Improve early peanut introduction during well-child visit and increase anticipatory guidance | 6 | July 2022 | Yes | No | Yes | |
| Discharge Subcutaneous Medication | Increase the number of patients discharged appropriately with syringes (and vials if appropriate) to measure correct dose at home | 6 | August 2022 | Yes | No | Yes | [65] |
| Dosing Weight | Improve documentation of dosing weight in patients with >10% difference between regular and dosing weight to reduce medication dosing errors | 6 | August 2022 | Yes | No | Yes | |
| Renal Dosing | Enable recognition of patients with renal insufficiency and improve dose adjustments for renal-impaired patients | 22 | November 2022 | Yes | No | Yes | [67] |
| Intravenous Promethazine | Reduce the use of IV promethazine where an appropriate alternative exists and improve safety of the IV promethazine administration | 5 | November 2022 | Yes | Yes | No | – |
| Total Parenteral Nutrition Administration | Improve rate of total parenteral nutrition administration per guidelines | 4 | November 2022 | Yes | Yes | No | – |
| Keppra Dosing | Improve timeliness and appropriate dosing of antiseizure medication administration in patients with BRSE | 3 | January 2023 | Yes | No | Yes | – |
| ED Boarder | Improve recognition of Boarder patients (patients admitted to floor but waiting in the ED due to lack of bed) and reduce delays in order release and patient care | 5 | January 2023 | Yes | No | Yes | – |
| Nothing per mouth (NPO) time | Reduce pre-procedural fasting times without aspiration events or cancelled procedures | 13 | February 2023 | Yes | No | Yes | – |
| Sickle Cell Disease Pain Plan | Improve perception and adherence to individualized pain plan in sickle cell disease patients | 6 | February 2023 | Yes | No | Yes | – |
| HIV Opt-Out Testing | Improve testing for HIV in eligible patients using opt-out strategy | 10 | March 2023 | Yes | No | Yes | – |
| Contraception | Increase contraception counseling rates and prescriptions provided at discharge for adolescents | 10 | April 2023 | Yes | No | Yes | – |
| Code Status | Improve order and documentation accuracy for code status changes | 17 | May 2023 | Yes | Yes | No | – |
| Enoxaparin | Improve appropriate dosing of enoxaparin | 1 | May 2023 | Yes | No | Yes | – |
| Human Milk | Reduce wrong-patient human milk exposures | 10 | May 2023 | Yes | No | Yes | – |
| Medication Readiness for Discharge | Improve time to discharge patients as soon as medically and logistically feasible | 5 | June 2023 | Yes | No | Yes | – |
Of the 30 projects in which we employed this methodology from 2018 through June 2023, at least one CDS design change was made in all cases. In 7/30 (23%), only cosmetic changes were made, that is, edits to the wording, font size, color, layout, images, or acknowledgment buttons in the CDS. However, in 23/30 (77%) of cases, structural changes were made such as new CDS artifacts or changes in the CDS channel, target users, patient population, timing, branching logic, or underlying workflows. For example, a CDS request was made due to regulatory concerns about duplicate pro re nata (PRN) indications, particularly for acetaminophen and ibuprofen. The initial CDS design involved alerts created when multiple such orders were present without text indicating how they should be prioritized by nursing. After usability testing, the format of this CDS changed substantially from an alert to in-line order questions for common PRN indications with additional language to help with prioritization.[44] Similarly, the initial order set design by a committee of relevant stakeholders for blood products was found to lead to many ordering errors in usability testing, ultimately requiring a complete overhaul that has been highlighted in separate publications.[45] [46] [47]
In addition to the operational and quality improvement impact, this work has also resulted in journal publications and conference presentations as well as serving as preliminary data for federally funded grants, further aligning the academic and operational missions of the organization.[44] [45] [46] [47] [48] [49] [50] [51] [52] [53] [54] [55] [56] [57] [58] [59] [60] [61] [62] [63] [64] [65] [66] [67] [68]
Discussion
The guerilla in situ usability testing approach combined with CDS governance processes requiring usability testing for all but the most urgent projects has uniformly led to CDS design changes and substantially improved the effectiveness of CDS at our institution. Our project duration with in situ usability testing (1–2 months) was substantially shorter than the time required for traditional approaches (11–16 months) for two similar use cases in Mann et al.[38] While we have not compared the performance in production of most of these CDS designs before and after usability testing, we have shown in simulation that many of our post-usability CDS designs led to fewer errors and better adherence to evidence-based practices.[45] [52] [58] [63] In the prior operational model based on expert CDS design alone, all of these usability problems would only have been detected after going live, if they were detected at all. In the absence of insights gained via this testing, many more of these CDS implementations would likely not have changed behavior, failing to meet the potential of CDS to improve outcomes and potentially worsening clinician experience and/or patient safety through unnecessary alert fatigue. While none of the specific approaches described in this paper are novel to the usability research literature, the combination of guerilla recruiting, in situ testing, and training EHR analysts to participate in the usability testing and rapidly iterate between participants has allowed our team to scale the application of usability methods to a larger fraction of CDS implementations.
There were several organizational and strategic process changes required to enable us to scale usability studies for operational projects. First, we needed strong commitment from leadership to go through a process that might slow down project rollout initially but would be beneficial in the long run. We needed to alter CDS build and implementation processes and bake in enough time to do at least formative usability testing. Second, we needed the right team ([Table 1]) including interdisciplinary expertise in CDS development, human factors personnel integrated into the CDS operations team, and well-known or familiar stakeholders who could bridge clinical and usability teams. Finally, we needed documentation workflows to quickly gather insights and forums to share lessons learned and crowd-source design ideas based on insights from usability testing.
We relaxed certain elements of methodological rigor. Specifically, we did not utilize audio or video recordings or follow rigorous qualitative analytic processes such as transcribing participant feedback and performing thematic analyses.[69] We aimed to compensate by member-checking notes at the end of each testing session with participants. The inability to test many scenarios with a single participant also limits the use of randomized block designs and can risk premature closure based on a small sample size between iterations. However, we believe this risk is mitigated as subsequent designs are also tested, and we stop testing only when no new insight is gained. While some risks introduced by designs may be missed if representative scenarios are not used, we believe usability testing with a small sample of scenarios is better than no usability testing at all,[40] which remains the default for most CDS implementations in health care.
Researchers have long advocated for incorporating usability evaluation into system development to effectively influence health care processes and outcomes positively.[70] [71] [72] However, many descriptions of these evaluations would require rigorous and resource-intensive approaches that are difficult to implement routinely without sufficient funding.[73] The modified approach described in this paper improved our own adoption of usability testing methods for operational projects.
To clarify, comprehensive, well-controlled research projects remain the standard for creating generalizable knowledge and should be employed whenever possible, while the modified, more feasible approach we describe in this paper is appropriate for ensuring that usability testing is applied to a larger fraction of the EHR changes put into production by health systems. We also believe that adoption of this approach in operational work may improve participant recruitment and diversity as more frontline users are exposed to and participate in such studies.
In a review of usability studies on health IT, researchers have found that most evaluations are done late in the system development life cycle (SDLC), for example, during the integration of the system into the environment or routine use, rather than earlier stages such as system specification where many barriers can be more easily addressed.[7] Our approach incorporates usability evaluation throughout the SDLC including workflow analysis, prototype development, and iterative development through formative testing. In situ testing bridges the gap between naturalistic approaches (i.e., unobtrusive observations and ethnographic studies that capture realistic behaviors but do not compare design effectiveness) and more controlled experimental studies (i.e., conducting simulations in artificial laboratory environments that can explicitly identify superior designs).[71] Our approach provides a high degree of experimental control while also preserving a high degree of realism to participants during testing allowing the investigator to see the influence of some real-world factors like time pressure, environment, and interruption.
While we have delineated the expertise required to perform guerilla in situ usability testing ([Table 1]), many health care settings, particularly outside academic and hospital-based contexts, lack these resources. Nonetheless, we believe that practices with some control over EHR configuration can apply these principles within their sphere of control for more rapid improvements in user experience. EHR vendors could also employ these techniques to impact clinical users more broadly, even within health care settings that lack their own informatics or human factors expertise.
Conclusion
CDS can improve guideline adherence and use of evidence-based practices that help achieve better patient outcomes, improved experience for patients and clinicians, reduced costs, and health equity. However, inappropriately designed CDS remains the norm in most health systems. Our results show the feasibility of performing usability testing at scale in health care operations using guerilla in situ usability testing described in this State of the Art/Best Practice paper. Broader uptake of usability testing has the potential to change the course of CDS and ultimately health outcomes through the efficient application of human factors methods not only for select use cases but for every CDS implementation and update.
Clinical Relevance Statement
CDS often fails due to inadequate alignment with clinical workflows and poor usability. Usability testing can improve CDS design. However, these methods are mostly adopted in research contexts and rarely in operational projects. This article describes an approach, guerilla in situ usability testing, to address the challenges of adopting usability testing in health care operations. Broader uptake of usability testing has the potential to change the course of CDS in health care.
Multiple-Choice Questions
-
Which of these aspects are evaluation goals for usability?
-
Effectiveness
-
Efficiency
-
Satisfaction
-
All of the above.
Correct Answer: The correct answer is option d. All of the above. Per ISO standard of usability (ISO 9241 pt. 11), usability is the intersection between effectiveness, efficiency, and satisfaction in a context of use.
2. Apart from a human factors/usability expert, what roles are required for guerilla in situ usability testing?
   a. Locally known stakeholder
   b. Clinical subject matter expert
   c. EHR analyst/builder
   d. All of the above
Correct Answer: The correct answer is option d, all of the above. A locally known stakeholder can act as a liaison with clinicians, introduce them to the usability testing team, and help with recruitment; they also know good times to conduct testing in situ. Clinical subject matter experts are required to design test scenarios and specify the correct course of action. An EHR analyst/builder is needed to set up test patients and adjust the design as needed.
Conflict of Interest
E.O. and N.M. are the cofounders of and have equity in Phrase Health, a CDS analytics company. They are the investigators on an R42 grant with Phrase Health from the National Library of Medicine (NLM) and the National Center for Advancing Translational Science (NCATS). Both receive salary support from the NLM and NCATS but no direct revenue from Phrase Health. The other authors have nothing to disclose.
Protection of Human Subjects
No human subjects were involved in this perspective. In discussion with the Children's Healthcare of Atlanta IRB, projects applying guerilla in situ usability testing were deemed as quality improvement projects and therefore nonhuman subjects research.
References
- 1 Shultz S, Hand MW. Usability: a concept analysis. J Theory Constr Test 2015;19(02)
- 2 Department of Health and Human Services. Usability Evaluation Basics. 2013 . Accessed September 29, 2022 at: https://www.usability.gov/what-and-why/usability-evaluation.html
- 3 Nielsen Norman Group. Usability 101: Introduction to Usability. Accessed June 8, 2023 at: https://www.nngroup.com/articles/usability-101-introduction-to-usability/
- 4 Pruitt Z, Howe JL, Krevat SA, Khairat S, Ratwani RM. Development and pilot evaluation of an electronic health record usability and safety self-assessment tool. JAMIA Open 2022; 5 (03) ooac070
- 5 Park S, Marquard J, Austin R, Pieczkiewicz D, Delaney C. Usability of electronic health records from nurses' perspectives: a systematic review. Paper presented at: 2022 IEEE 10th International Conference on Healthcare Informatics (ICHI). 2022: 511-512
- 6 Howe JL, Adams KT, Hettinger AZ, Ratwani RM. Electronic health record usability issues and potential contribution to patient harm. JAMA 2018; 319 (12) 1276-1278
- 7 Yen PY, Bakken S. Review of health information technology usability study methodologies. J Am Med Inform Assoc 2012; 19 (03) 413-422
- 8 Melnick ER, Dyrbye LN, Sinsky CA. et al. The association between perceived electronic health record usability and professional burnout among US physicians. Mayo Clin Proc 2020; 95 (03) 476-487
- 9 Khairat S, Burke G, Archambault H, Schwartz T, Larson J, Ratwani RM. Perceived burden of EHRs on physicians at different stages of their career. Appl Clin Inform 2018; 9 (02) 336-347
- 10 Gomes KM, Ratwani RM. Evaluating improvements and shortcomings in clinician satisfaction with electronic health record usability. JAMA Netw Open 2019; 2 (12) e1916651
- 11 Koppel R, Metlay JP, Cohen A. et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005; 293 (10) 1197-1203
- 12 Ratwani R, Fairbanks T, Savage E. et al. Mind the Gap. A systematic review to identify usability and safety challenges and practices during electronic health record implementation. Appl Clin Inform 2016; 7 (04) 1069-1087
- 13 Dixit RA, Boxley CL, Samuel S, Mohan V, Ratwani RM, Gold JA. Electronic health record use issues and diagnostic error: a scoping review and framework. J Patient Saf 2023; 19 (01) e25-e30
- 14 Kariotis TC, Prictor M, Chang S, Gray K. Impact of electronic health records on information practices in mental health contexts: scoping review. J Med Internet Res 2022; 24 (05) e30405
- 15 Li RC, Wang JK, Sharp C, Chen JH. When order sets do not align with clinician workflow: assessing practice patterns in the electronic health record. BMJ Qual Saf 2019; 28 (12) 987-996
- 16 Reese TJ, Liu S, Steitz B. et al. Conceptualizing clinical decision support as complex interventions: a meta-analysis of comparative effectiveness trials. J Am Med Inform Assoc 2022; 29 (10) 1744-1756
- 17 Orenstein EW, Weitkamp AO, Rosenau P, Mallozzi C, Tobias MC. Towards a maturity model for clinical decision support operations: an interactive panel. Paper presented at: American Medical Informatics Association Clinical Informatics Conference. Atlanta, GA; 2019
- 18 Kushniruk A, Borycki E, Kuo MH, Kuwata S. Integrating technology-centric and user-centric system testing methods: ensuring healthcare system usability and safety. In: Information Technology in Health Care: Socio-Technical Approaches 2010. IOS Press; 2010: 181-186 . Accessed November 8, 2023 at: https://ebooks.iospress.nl/doi/10.3233/978-1-60750-569-3-181
- 19 Desai AV, Michael CL, Kuperman GJ. et al. A novel patient values tab for the electronic health record: a user-centered design approach. J Med Internet Res 2021; 23 (02) e21615
- 20 Blanes-Selva V, Asensio-Cuesta S, Doñate-Martínez A, Pereira Mesquita F, García-Gómez JM. User-centred design of a clinical decision support system for palliative care: Insights from healthcare professionals. Digit Health 2023; 9: 20 552076221150735
- 21 Rudin RS, Perez S, Rodriguez JA. et al. User-centered design of a scalable, electronic health record-integrated remote symptom monitoring intervention for patients with asthma and providers in primary care. J Am Med Inform Assoc 2021; 28 (11) 2433-2444
- 22 Horsky J, Ramelson HZ. Development of a cognitive framework of patient record summary review in the formative phase of user-centered design. J Biomed Inform 2016; 64: 147-157
- 23 Chokshi SK, Belli HM, Troxel AB. et al. Designing for implementation: user-centered development and pilot testing of a behavioral economic-inspired electronic health record clinical decision support module. Pilot Feasibility Stud 2019; 5 (01) 28
- 24 Thursky KA, Mahemoff M. User-centered design techniques for a computerised antibiotic decision support system in an intensive care unit. Int J Med Inform 2007; 76 (10) 760-768
- 25 Nguyen KA, Patel H, Haggstrom DA, Zillich AJ, Imperiale TF, Russ AL. Utilizing a user-centered approach to develop and assess pharmacogenomic clinical decision support for thiopurine methyltransferase. BMC Med Inform Decis Mak 2019; 19 (01) 194
- 26 Miller SD, Murphy Z, Gray JH. et al. Human-centered design of a clinical decision support for anemia screening in children with inflammatory bowel disease. Appl Clin Inform 2023; 14 (02) 345-353
- 27 Molloy MJ, Zackoff M, Gifford A. et al. Usability testing of situation awareness clinical decision support in the intensive care unit. Appl Clin Inform 2024; 15 (02) 327-334
- 28 Shear K, Rice H, Garabedian PM. et al. Usability testing of an interoperable computerized clinical decision support tool for fall risk management in primary care. Appl Clin Inform 2023; 14 (02) 212-226
- 29 McGonagle EA, Karavite DJ, Grundmeier RW. et al. Evaluation of an antimicrobial stewardship decision support for pediatric infections. Appl Clin Inform 2023; 14 (01) 108-118
- 30 McNab D, Freestone J, Black C, Carson-Stevens A, Bowie P. Participatory design of an improvement intervention for the primary care management of possible sepsis using the Functional Resonance Analysis Method. BMC Med 2018; 16 (01) 174
- 31 Beerlage-de Jong N, Wentzel J, Hendrix R, van Gemert-Pijnen L. The value of participatory development to support antimicrobial stewardship with a clinical decision support system. Am J Infect Control 2017; 45 (04) 365-371
- 32 van Leeuwen D, Mittelman M, Fabian L, Lomotan EA. Nothing for me or about me, without me: codesign of clinical decision support. Appl Clin Inform 2022; 13 (03) 641-646
- 33 Kushniruk A, Nohr C. Participatory design, user involvement and health IT evaluation. In: Evidence-Based Health Informatics. IOS Press; 2016. :139–151. Accessed July 30, 2024 at: https://ebooks.iospress.nl/doi/10.3233/978-1-61499-635-4-139
- 34 Perry SJ, Catchpole K, Rivera AJ, Henrickson Parker S, Gosbee J. 'Strangers in a strange land': Understanding professional challenges for human factors/ergonomics and healthcare. Appl Ergon 2021; 94: 103040
- 35 Nielsen J, Levy J. Measuring usability: preference vs. performance. Commun ACM 1994; 37 (04) 66-75
- 36 Staggers N, Rodney M. Promoting Usability in Organizations with a New Health Usability Model: Implications for Nursing Informatics. NI 2012: 11th International Congress on Nursing Informatics; June 23–27, 2012; Montreal, Canada. 2012: 396 . Accessed June 8, 2023 at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3799150/
- 37 Shah T, Kitts AB, Gold JA. et al. Electronic health record optimization and clinician well-being: a potential roadmap toward action. NAM Perspect 2020; 2020
- 38 Mann DM, Chokshi SK, Kushniruk A. Bridging the gap between academic research and pragmatic needs in usability: a hybrid approach to usability evaluation of health care information systems. JMIR Hum Factors 2018; 5 (04) e10721
- 39 Kushniruk AW, Borycki EM, Kannry J. Commercial versus in-situ usability testing of healthcare information systems: towards “public” usability testing in healthcare organizations. Stud Health Technol Inform 2013; 183: 157-161
- 40 Nielsen Norman Group. Discount Usability: 20 Years. Accessed May 14, 2024 at: https://www.nngroup.com/articles/discount-usability-20-years/
- 41 Motulsky SL. Is member checking the gold standard of quality in qualitative research?. Qual Psychol 2021; 8 (03) 389-406
- 42 Holden RJ, Carayon P. SEIPS 101 and seven simple SEIPS tools. BMJ Qual Saf 2021; 30 (11) 901-910
- 43 McDonald S, Edwards HM, Zhao T. Exploring Think-Alouds in Usability Testing: an international survey. IEEE Trans Prof Commun 2012; 55 (01) 2-19
- 44 Dawson TE, Beus J, Orenstein EW, Umontuen U, McNeill D, Kandaswamy S. Reducing therapeutic duplication in inpatient medication orders. Appl Clin Inform 2023; 14 (03) 538-543
- 45 Orenstein EW, Boudreaux J, Rollins M. et al. Formative usability testing reduces severe blood product ordering errors. Appl Clin Inform 2019; 10 (05) 981-990
- 46 Orenstein EW, Rollins M, Jones J. et al. Influence of user-centered clinical decision support on pediatric blood product ordering errors. Blood Transfus Trasfus Sangue 2023; 21: 3-12
- 47 Thompson SA, Williams H, Rzewnicki D. et al. Avoiding Unintended Consequences of Pediatric Blood Order Set Updates through In Situ Usability Testing. Appl Clin Inform. Accessed July 23, 2024 at: https://www.thieme-connect.com/products/ejournals/abstract/10.1055/a-2351-9642
- 48 Kandaswamy S, Jones J, Orenstein EW. User Center Design of Blood Orders in the EHR to reduce risk of over-transfusion in a pediatric healthcare system. Paper presented at : AABB Annual Meeting. Virtual Meeting: Wiley Online Library; 2020
- 49 Orenstein EW, Rasooly IR, Mai MV. et al. Influence of simulation on electronic health record use patterns among pediatric residents. J Am Med Inform Assoc 2018; 25 (11) 1501-1506
- 50 Orenstein EW, ElSayed-Ali O, Kandaswamy S. et al. Evaluation of a clinical decision support strategy to increase seasonal influenza vaccination among hospitalized children before inpatient discharge. JAMA Netw Open 2021; 4 (07) e2117809-e2117809
- 51 Mrosak J, Kandaswamy S, Stokes C. et al. The influence of integrating clinical practice guideline order bundles into a general admission order set on guideline adoption. JAMIA Open 2021; 4 (04) ooab087
- 52 Mrosak J, Kandaswamy S, Stokes C, Roth D, Orenstein E. Formative usability testing of an admission order set with modular disease-specific order groups to improve guideline order set usage. Paper presented at: Pediatric Hospital Medicine Conference. 2020. ; Florida
- 53 Kandaswamy S, Gill A, Wood S. et al. User-centered design of central venous access device documentation. JAMIA Open 2022; 5 (01) ooac011
- 54 Kandaswamy S, Jain S, Dwight C. et al. Improving outcomes for pediatric patients with metabolic conditions in the ED using user centered design for order sets. Paper presented at: AMIA 2021 Annual Symposium. November 30, 2021 ; San Diego.
- 55 Bennett B, Kurzen E, Orenstein E, Kandaswamy S, Shin H. Clinical decision support to reduce nephrotoxic medication-associated acute kidney injury in non-critically ill hospitalized children. Paper presented at: AMIA Annual Meeting 2022. November 5, 2022 ; Washington, D.C.
- 56 Rowland AF, Nguyen TH, Cunha PP. et al. Implementing a clinical decision support tool to increase early peanut introduction guidance. J Allergy Clin Immunol 2024; S0091 -6749(24)00712-7
- 57 Thompson SA, Kandaswamy S, Orenstein E. CIC 2023: A discount approach to reducing nursing alert burden. Appl Clin Inform 2024; 15: 727-732
- 58 Siegel BI, Johnson M, Dawson TE. et al. Reducing prescribing errors in hospitalized children on the ketogenic diet. Pediatr Neurol 2021; 115: 42-47
- 59 Mrosak J, Kandaswamy S, Stokes C. et al. The effect of implementation of guideline order bundles into a general admission order set on clinical practice guideline adoption: quasi-experimental study. JMIR Med Inform 2023; 11: e42736
- 60 Kandaswamy S, Masterson E, Blanco R. et al. Barriers to seasonal influenza vaccine uptake in a pediatric inpatient healthcare setting after implementation of clinical decision support. Stud Health Technol Inform 2022; 290: 452-456
- 61 Siegel B, Shahnawaz M, Elkins K, Kheder A, Orenstein E. Validation of a semi-automated method for tracking quality metrics in benzodiazepine-resistant status epilepticus (S7.007). Neurology 2022;98(18 Supplement). Accessed June 13, 2023 at: https://n.neurology.org/content/98/18_Supplement/706
- 62 Thompson S, Mckay L, Ray E, Kandaswamy S, Orenstein EW. Improve or remove: challenges in refining ineffective alerts – the case of elopement prevention. Paper presented at: AMIA 2023 Clinical Informatics Conference. May 23, 2023 ; Chicago, IL.
- 63 Nguyen TH, Cunha PP, Rowland AF, Orenstein E, Lee T, Kandaswamy S. User-centered design and evaluation of clinical decision support to improve early peanut introduction: formative study. JMIR Form Res 2023; 7 (01) e47574
- 64 Cunha P, Nguyen T, Pham T, Kandaswamy S, Lee T. Implementing early peanut introduction recommendations by pediatric residents through a clinical decision support system. J Allergy Clin Immunol 2023; 151 (02) AB44 . Accessed March 10, 2023 at: https://www.jacionline.org/article/S0091-6749(22)01792-4/fulltext
- 65 Dawson TE, Beus JM, Orenstein E, Kurzen E, Kandaswamy S. Decision support to improve safety of discharge prescriptions of subcutaneous medications. Paper presented at: AMIA 2023 Clinical Informatics Conference. May 23, 2023 ; Chicago, IL.
- 66 Kandaswamy S, Thompson S, Orenstein E. Accurate dosing weight: when the 10% really matters. Stud Health Technol Inform 2024; 310: 354-358
- 67 Kandaswamy S, Dawson TE, Shin H, Orenstein EW. Non-interruptive alerts for reduced renal function: does anyone see them?. Paper presented at: AMIA 2023 Clinical Informatics Conference. May 23, 2023 ; Chicago, IL.
- 68 Thompson S, Orenstein EW, Kandaswamy S. Accurate dosing weight: when the 10% really matters. Paper presented at: AMIA 2023 Clinical Informatics Conference. May 23, 2023 ; Chicago, IL.
- 69 Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006; 3: 77-101 Accessed May 14, 2024 at: https://www.tandfonline.com/doi/abs/10.1191/1478088706qp063oa
- 70 Stead WW, Haynes RB, Fuller S. et al. Designing medical informatics research and library–resource projects to increase what is learned. J Am Med Inform Assoc 1994; 1 (01) 28-33
- 71 Kushniruk A. Evaluation in the design of health information systems: application of approaches emerging from usability engineering. Comput Biol Med 2002; 32 (03) 141-149
- 72 Kaufman D, Roberts WD, Merrill J, Lai TY, Bakken S. Applying an evaluation framework for health information system design, development, and implementation. Nurs Res 2006; 55 (2 Suppl): S37-S42
- 73 Ammenwerth E, Gräber S, Herrmann G, Bürkle T, König J. Evaluation of health information systems-problems and challenges. Int J Med Inform 2003; 71 (2–3): 125-135
Address for correspondence
Publication History
Received: 03 June 2024
Accepted: 25 August 2024
Accepted Manuscript online:
27 August 2024
Article published online:
04 December 2024
© 2024. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany