DOI: 10.1055/a-2272-6184
Usability Testing of Situation Awareness Clinical Decision Support in the Intensive Care Unit
Funding Dr. Dewan receives career development and research support from the Agency for Healthcare Research and Quality (K08-HS026975), which supported this study.
Abstract
Objective Our objective was to evaluate the usability of an automated clinical decision support (CDS) tool previously implemented in the pediatric intensive care unit (PICU) to promote shared situation awareness among the medical team to prevent serious safety events within children's hospitals.
Methods We conducted a mixed-methods usability evaluation of a CDS tool in a PICU at a large, urban, quaternary, free-standing children's hospital in the Midwest. Quantitative assessment was done using the system usability scale (SUS), while qualitative assessment involved think-aloud usability testing. The SUS was scored according to survey guidelines. For think-aloud testing, task times were calculated, and means and standard deviations were determined, stratified by role. Qualitative feedback from participants and moderator observations were summarized.
Results Fifty-one PICU staff members, including physicians, advanced practice providers, nurses, and respiratory therapists, completed the SUS, while ten participants underwent think-aloud usability testing. The overall median usability score was 87.5 (interquartile range: 80–95), with over 96% rating the tool's usability as “good” or “excellent.” Task completion times ranged from 2 to 92 seconds, with the quickest completion for reviewing high-risk criteria and the slowest for adding to high-risk criteria. Observations and participant responses from think-aloud testing highlighted strengths, including learnability and a clear, easily accessed display of complex information, as well as opportunities to improve the tool's integration into clinical workflows.
Conclusion The PICU Warning Tool demonstrates good usability in the critical care setting. This study demonstrates the value of postimplementation usability testing in identifying opportunities for continued improvement of CDS tools.
Background & Significance
The most common contributing factor to serious safety events within children's hospitals is a lack of situation awareness.[1] Situation awareness is the degree to which each team member possesses a common mental model synthesizing all of the incoming data of the clinical environment.[2] To optimize situation awareness and improve outcomes, clinical decision support (CDS) systems must combine, summarize, and present the vast amount of clinical data to the care team in an understandable and usable manner.[3] While well-designed CDS tools can enhance patient care processes, they can also contribute to harm if they function poorly, contain incorrect information, or are not properly integrated into workflows.[4] [5] [6] To evaluate CDS tools, teams can perform formal usability testing using widely accepted methods.[7] Usability refers to the system or tool's ability to enable users to safely, effectively, and efficiently carry out their tasks.[8] Usability testing can improve both CDS tool design and outcomes targeted by CDS and can be conducted at different points in the CDS lifecycle.[9] [10]
In the intensive care unit (ICU) setting, CDS tools should facilitate the efficient review of patient data and enable clinicians to prioritize multiple patients and tasks.[11] [12] Currently available CDS tools generally do not enhance clinician adoption of desired practices,[13] and automated tools are not widely implemented or considered beneficial by clinicians within the pediatric intensive care unit (PICU).[14] We hypothesize that poor usability contributes to these findings. Available recommendations for CDS design in ICU settings have limited scope and are not specific to situation awareness tools or to pediatric patients.[11] [15] Mindful of these recommendations and limitations, we developed, refined, and implemented the PICU Warning Tool, an automated CDS tool aimed at aiding the identification of patients at high risk of clinical deterioration and enhancing shared situation awareness among our PICU care teams.[16] [17] [18] The PICU Warning Tool serves as a means to support and sustain our unit-wide system for identifying and caring for high-risk patients and has contributed to reduced cardiopulmonary resuscitation events. In this study, we sought to further improve the PICU Warning Tool through formal postimplementation usability testing.
Objectives
We conducted a usability evaluation of the PICU Warning Tool using a mixed-methods approach. Our objective was to evaluate the tool's usability among clinicians who encounter it in practice and to identify opportunities for tool improvement.
Methods
Clinical Setting
We conducted our study in the PICU of a large, urban, quaternary, free-standing children's hospital in the Midwest. Our PICU is a closed, noncardiac, medical-surgical unit with 48 beds and over 2,800 annual admissions.
Clinical Decision Support Tool
The PICU Warning Tool (called the “eWatcher” tool locally) is an embedded CDS tool that aims to identify pediatric ICU patients at high risk of clinical deterioration and enhance shared situation awareness by summarizing patient-level high-risk criteria.[16] [17] [18] Thirteen high-risk criteria are continuously evaluated based on automated electronic health record (EHR) rules, and users can manually add five additional criteria. The PICU Warning Tool presents the patient's level of risk (low, medium, or high) on the user's patient list view through passive alerting, visually emphasizing important data without interrupting EHR workflows. By hovering over the column, users can view the high-risk criteria that the patient meets ([Fig. 1]). Double-clicking the icon allows the user to add or remove manual risk criteria. The presence of the tool on the patient list view gives the user situation awareness of which, and how many, patients in the PICU are high risk. Any user in our system, including physicians, advanced practice providers (APPs), registered nurses (RNs), and respiratory therapists (RTs), can view and update the PICU Warning Tool. The color choices (green, yellow, and red) used to indicate the level of risk align with local standards and interface design best practices.[19] A second passive alert is displayed within the individual patient chart as a banner to notify clinicians about risk while the chart is already open and to optimize presentation within different workflows. The banner contains a link to a report showing the high-risk criteria met by the patient. Additionally, an alert icon is displayed on the nursing patient view when a patient meets any high-risk criteria, but this icon lacks the additional functionality (color, hover display of criteria, and ability to add/remove criteria) of the patient list column described above and only serves to visually indicate that a patient is high risk. Our tool has undergone prospective validation within our single center and, when combined with huddles and mitigation planning, has demonstrated a reduction in in-hospital pediatric cardiac arrest within the PICU.[18]
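As an illustration of the risk display described above, the following Python sketch derives a risk level from a combination of automated and user-added criteria and maps it onto the green/yellow/red color scheme. The criterion names, the cutoff rules, and the PatientRisk class are hypothetical, chosen only to make the example concrete; the production tool is implemented as EHR vendor rules, and the article does not specify how criteria map to the three risk levels.

```python
# Hypothetical sketch of the risk display described above; the real tool is
# built from EHR vendor rules, and the level cutoffs below are illustrative.
from dataclasses import dataclass, field

RISK_COLORS = {"low": "green", "medium": "yellow", "high": "red"}

@dataclass
class PatientRisk:
    automated_criteria_met: set = field(default_factory=set)  # subset of the 13 EHR-evaluated rules
    manual_criteria_met: set = field(default_factory=set)     # subset of the 5 user-added criteria

    @property
    def all_criteria(self) -> set:
        return self.automated_criteria_met | self.manual_criteria_met

    def level(self) -> str:
        # Illustrative cutoffs: any user-added flag or two or more automated
        # criteria -> high; exactly one criterion -> medium; none -> low.
        if self.manual_criteria_met or len(self.all_criteria) >= 2:
            return "high"
        return "medium" if self.all_criteria else "low"

# Example: one automated rule plus one provider-added criterion -> high (red)
patient = PatientRisk(
    automated_criteria_met={"escalating respiratory support"},
    manual_criteria_met={"provider intuition"},
)
print(patient.level(), RISK_COLORS[patient.level()])
```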
In our PICU, the PICU Warning Tool is used daily by providers and nurses and is an integral part of our situation awareness model.[18] Providers have the column in their patient list, and a patient's status (high-risk vs. not high-risk) is displayed on the nursing patient view, as noted above. In addition, high-risk patients are discussed at twice-daily safety huddles.
Study Design
We performed a mixed-methods usability evaluation of the PICU Warning Tool. The tool was first assessed using the system usability scale (SUS), a previously validated and widely used measure.[20] Additionally, we conducted formal think-aloud usability testing of the PICU Warning Tool to observe efficiency of tool use and obtain qualitative feedback from users.[21]
Participant Recruitment
We sought input on usability from PICU clinicians who view and interact with the PICU Warning Tool. We approached PICU clinical staff members during their clinical shift, including physicians, APPs, RNs, and RTs, and invited them to complete the SUS. Our target was to enroll 50 SUS respondents. For the think-aloud usability assessment, we separately recruited physicians and RNs via email, aiming to enroll five physicians and five RNs. Experts recommend a minimum of five subjects for usability testing,[22] and groups of 10 subjects can identify over 95% of problems during testing.[23] All participants were provided with an information sheet containing the principal investigator's contact information and an explanation of their rights.
Data Collection
We administered the 10-question SUS[24] ([Supplement A], available in the online version) to users of the PICU Warning Tool after providing a brief review of the tool. Participants reported their role, years of experience using our EHR (Epic Systems Corporation), and self-assigned level of computer skill (high, medium, or low).
We performed think-aloud usability testing[21] with participants using realistic patient scenarios in a nonproduction EHR environment on a laptop computer. The think-aloud sessions were completed in an office for physicians and in a conference room for RNs; locations were chosen to be convenient for participants. We recorded the screen and captured the audio of participants. This methodology has previously proven effective in assessing usability and enhancing user interfaces.[25] A member of our study team moderated the think-aloud testing, asked qualitative questions, and served as the primary observer during the sessions. Participants reported their roles, years in current positions, and years of experience using our EHR. The moderator explained the think-aloud technique and instructed participants to vocalize their thoughts as they completed each task. The moderator provided guidance only if a participant became stuck or completed a task incorrectly.
We developed two simulated scenarios ([Supplement B], available in the online version) modeled on actual PICU patients to assess key features of the PICU Warning Tool, such as the identification of high-risk factors for patient deterioration and the addition and removal of user-defined high-risk criteria. While the wording of scenarios varied slightly for physicians and RNs to reflect their respective roles, the tested tasks and overall scenarios remained the same. The order of the scenarios was consistent for all participants. We evaluated the successful completion and time taken to complete the following tasks: (1) reviewing the reasons why a patient meets high-risk criteria; (2) adding high-risk criteria for a patient; and (3) removing high-risk criteria for a patient. The moderator used a structured observation form ([Supplement C], available in the online version) to track task completion and note any difficulties encountered by users. Task time was measured using screen recordings, measured from the time each participant finished reading the task directions aloud until they completed or gave up on the task.
All participants were asked three questions ([Supplement B], available in the online version) following completion of the scenarios regarding (1) general feedback on the tool; (2) what they liked about the tool and suggestions for improvement; and (3) any missing features they felt should be included in the tool. Nursing participants were also asked about potential enhancements to better integrate the tool into their workflow. Responses were audio recorded.
Analysis
We followed survey guidelines to assess the SUS scores.[17] [21] We considered any score above 70 as “passing” with “good” usability, while scores below this threshold were regarded as “not passing” with “poor” usability. We considered a score >80 as “excellent” usability. We computed means and standard deviations for task times, stratified by role.
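For readers unfamiliar with the SUS, the short Python sketch below converts a respondent's ten 1-to-5 Likert ratings to the 0–100 SUS score using Brooke's standard formula (odd-numbered items contribute the rating minus 1, even-numbered items contribute 5 minus the rating, and the sum is multiplied by 2.5) and then applies the cutoffs described above. The two response vectors are fabricated for illustration only and are not study data.

```python
# A minimal sketch of SUS scoring plus the usability cutoffs used in this study.
import statistics

def sus_score(responses):
    """Convert ten 1-5 Likert ratings (item 1 first) to a 0-100 SUS score."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0, 2, ... = odd-numbered items
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

def usability_category(score):
    # Thresholds from the analysis plan: >80 excellent, >70 passing/good, else poor.
    if score > 80:
        return "excellent"
    return "good" if score > 70 else "poor"

# Fabricated example respondents (not study data)
respondents = [
    [5, 1, 5, 2, 4, 1, 5, 1, 5, 2],
    [4, 2, 4, 2, 5, 1, 4, 2, 4, 1],
]
scores = [sus_score(r) for r in respondents]
print(statistics.median(scores), [usability_category(s) for s in scores])
```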
We performed a content analysis that included a review of audio-recorded responses and moderator observations from think-aloud usability testing. Main themes were identified within three a priori groups: (1) tool features that work well; (2) tool features that do not work well; and (3) desired enhancements. The moderator summarized their observations and categorized them into the first two groups. We then grouped the responses and observations within each category according to the PICU Warning Tool feature they addressed. The moderator and principal investigator reviewed these groupings individually and reached consensus.
Results
Participant Characteristics
Fifty-one PICU staff members completed the SUS, including 22 providers (attending physicians, fellows, and APPs), 23 RNs (including 6 leadership nurses), and 12 RTs (including 4 leadership RTs). Among the participants, the median number of years using Epic was 7.5 years (range 1.5–16 years), with 39% (n = 20) reporting high computer skills, 53% (n = 27) reporting medium computer skills, and 8% (n = 4) reporting low computer skills.
Ten PICU staff, including 5 providers (3 attending physicians and 2 fellows) and 5 RNs completed think-aloud usability testing. Among the participants, the median number of years in the current role was 3.5 (range 1 to 24 years) and the median number of years using Epic was 11.5 (range 4 to 16 years).
System Usability Scale
The overall median usability score across all participants was 87.5 (interquartile range [IQR]: 80–95), with the majority of responses being strongly agree or agree ([Fig. 2]). Almost all participants (>96%) rated the usability of the tool as “good” or “excellent” ([Fig. 3]). Median usability scores were similar across roles: 86.3 (IQR: 80–97.5) for providers (physicians/APPs), 87.5 (IQR: 79–98) for fellows, 85 (IQR: 79–98) for attending physicians, 88.8 (IQR: 86–94) for APPs, 90 (IQR: 80–95) for bedside RNs, 95 (IQR: 84–98) for bedside RTs, 87.5 (IQR: 80–95) for leadership RNs, and 90 (IQR: 81.9–95) for leadership RTs. There did not appear to be an association between median usability score and self-reported computer skill level: 87.5 (IQR: 78.75–95) for low, 82.5 (IQR: 76.25–95) for medium, and 92.5 (IQR: 80–97.5) for high.
Think-Aloud Testing Task Completion
We measured success and time to completion for the following key tasks: (1) reviewing why a patient meets high-risk criteria, (2) adding to the patient's high-risk criteria, and (3) removing a patient's high-risk criteria. All participants (10 out of 10) completed the task of reviewing why a patient meets high-risk criteria. The task of adding to the patient's high-risk criteria was completed 75% of the time (15 out of 20 attempts). During scenario one, five participants (one physician and four RNs) failed to complete the task. However, all participants were able to complete the task during scenario two. The task of removing a patient's high-risk criteria was completed by 80% of participants (8 of 10), with two RNs failing to complete the task. There did not appear to be an association between task failure and the number of years using Epic.
[Table 1] shows the means and standard deviations of time to task completion, both in aggregate and by role. Time to task completion ranged from 2 to 92 seconds, with the “Reviewing high-risk criteria” task completed the fastest and the “Adding to high-risk criteria” task completed the slowest. Physicians and RNs had similar completion times for the “Adding to high-risk criteria” task but differed in completion time for the other two tasks. The total time spent on think-aloud testing sessions (including responses to qualitative questions) ranged from 11 to 16 minutes per participant.
[Table 1. Mean (SD) time to task completion, in aggregate and by role. Abbreviation: SD, standard deviation.]
Qualitative Findings from Think-Aloud Testing
Tool Features that Work Well
Participants in the study identified three effective features of the PICU Warning Tool that met their requirements. They praised the visual indicators on the patient list, which enabled them to quickly identify high-risk patients. The “hover to discover” feature was also appreciated as it allowed users to view the high-risk criteria met by a patient without needing to access the patient's chart. The observations confirmed that participants easily identified high-risk patients through the visual indicators on the patient list and utilized the “hover to discover” feature to evaluate high-risk status. Additionally, two participants emphasized the importance of real-time updates in the tool to enhance situation awareness of high-risk patients in the PICU.
Based on the results of the think-aloud testing, we found that participants quickly learned how to use the tool. During testing, we evaluated the task of adding high-risk criteria in both scenarios. All users demonstrated improved ease of use, navigating to the pop-up faster and completing the task more quickly in the second scenario. As a result, all participants completed the task successfully in the second scenario.
Tool Features that Do Not Work Well
Participant responses and the moderator observations identified three tool features that did not function optimally. All nursing participants noted that the PICU Warning Tool with full functionality did not appear in their normal clinical workflow. Most nurses do not use the patient list in their daily work in our PICU; instead, they use a nursing-specific patient view within the EHR. The nursing-specific patient view displays an icon to indicate a patient is high-risk but does not have the hover functionality or the ability to manually add criteria. Observation of nursing participants during the think-aloud process confirmed this, as all nurses started with the nursing-specific patient view and required guidance to navigate to the patient list.
Furthermore, there were discrepancies in the selection of high-risk criteria among user roles, which may indicate a lack of specificity in the user-added criteria. Several users commented on this lack of specificity. In the first scenario, participants were presented with a patient whose high-volume urine output was contributing to vital sign instability. All physician users chose “Other” as the high-risk criterion, while all nurse users selected “Provider Intuition” for this scenario. Providers explained that they chose “Other” because there was a specific reason for considering the patient high risk. Some users opted to add a comment within the tool to explain their selection, while others did not. Adding a comment increased the time required to complete the task, although three users noted that without a comment it was unclear why a specific high-risk criterion had been selected.
Finally, when users click on the PICU Warning Tool icon to remove a high-risk criterion (the third task we tested), the tool presents a blank form and does not visually indicate the previously selected criteria. To remove a criterion, a user must select “No” for the previously chosen high-risk criterion. Multiple participants paused at this point in the workflow and openly expressed uncertainty about what to do next. This feature led to two RNs failing to complete the task. Several users described this aspect of the tool as nonintuitive, with one physician user remarking, “I know that you have to click ‘No’ here because it's messed me up before. Definitely not very intuitive.”
Moderator observation alone uncovered a problematic aspect of the tool. Although many users found the “hover to discover” feature helpful, it caused confusion when users tried to add to the high-risk criteria. During observation, more than half of the users clicked on the pop-up screen that appears when hovering over the icon, instead of clicking on the icon itself (the correct workflow). Six participants (60%) expressed their confusion with comments such as “Where do I click?”
Desired Enhancements
Participants offered several recommendations to enhance the usability and effectiveness of the PICU Warning Tool. Three physicians and three RNs expressed difficulties in comprehending the reason behind certain high-risk criteria assigned to patients. For instance, they pointed out that the “other” user-entered criterion lacks helpful information unless accompanied by an additional comment. However, comments are not mandatory and are not consistently provided. To address this issue, the participants suggested making comments mandatory for these nonspecific, user-selected high-risk criteria. Additionally, multiple providers recommended considering the inclusion of additional user-added criteria, such as “intubation watcher.”
Other improvements focused on the tool's placement in the workflow and the mechanics of interaction. All RN participants mentioned that the tool would be more usable and more frequently accessed if it were integrated into their regular workflow with full functionality. Some users had difficulty understanding where to click to add criteria and recommended including additional instruction in the pop-up, such as “Double-click icon to modify criteria” to alleviate confusion. Lastly, nearly all users expressed a desire for the previously selected criteria to remain displayed as selected when revisiting the tool. For example, if a user selected “Yes” to the “High Risk Intubation” criterion, it should remain visibly selected on the form until another user selects “No” at a later time.
Discussion
We conducted mixed-methods usability testing to evaluate the usability of the PICU Warning Tool, a CDS tool designed to improve situation awareness and reduce in-hospital cardiac arrest for critically ill patients in the PICU.[16] While we believe this tool is an example of effective ICU-based CDS, as it has demonstrated impact on patient outcomes with a reasonable alert burden, our aim was to identify opportunities to further improve usability postimplementation. Overall, users rated the tool highly on the SUS, indicating good to excellent usability. However, through observation and participant responses during think-aloud testing, we identified several pain points and opportunities for improvement, including better integration of the PICU Warning Tool into nursing workflow and redesign of nonintuitive features. Although most participants successfully completed their assigned tasks, our evaluation highlighted the need for changes in user interaction with the tool to enhance usability and effectiveness. These findings underscore the importance of a multimodal approach to usability testing, which has been demonstrated in other studies.[9] [26] [27]
Our study highlights the benefits and drawbacks of postimplementation usability testing. Postimplementation usability testing can occur in situ with users familiar with the tool, yielding helpful feedback informed by repeated use. Postimplementation testing may also be less resource intensive (e.g., easier recruitment of users, no need for a testing laboratory) and is not subject to the time pressure inherent in the development lifecycle of health care IT projects. However, a drawback of postimplementation usability testing is the potentially delayed recognition of suboptimal design features that might have been identified during development, before a tool was implemented.
CDS solutions that enhance human activities and improve people-based work processes are an important focus, leading to improved outcomes.[14] [28] While examples of effective, automated CDS tools in the PICU setting exist,[29] [30] the implementation and perceived benefits of currently available automated PICU CDS tools are limited.[14] It is crucial to note that these limitations extend beyond the PICU, as predictive algorithms and advanced CDS tools often fall short in successful implementation and in effectively improving patient outcomes.[31] [32] Several contextual factors influence the critical care team's response to CDS in the ICU setting. For instance, operational and patient-related factors within the ICU, such as the severity of illness and admission rates, can impact the cognitive function and decision-making of the critical care team when utilizing CDS.[33] These factors introduce additional complexities and cognitive load that may hinder the integration and acceptance of CDS into the workflow. Furthermore, the design, implementation, and optimization of CDS tools within the ICU are subject to limitations imposed by the EHR vendor. The dynamic and high-risk nature of the ICU environment, coupled with these limitations of CDS, poses significant challenges to the effective use of CDS within the ICU. Overcoming these limitations requires rigorous usability testing aimed at improving the integration of these tools into workflows, and our research findings highlight the need for this rigorous testing to be multimodal.
Our multimodal usability evaluation highlights both the effective features of the PICU Warning Tool and areas that could be improved. The tool demonstrates ease of access without interrupting the workflow, clear display of complex information, and learnability, qualities that are valuable for informaticians and intensivists involved in developing ICU-based CDS. However, there are notable limitations to our study. We included participants with varying levels of experience with the PICU Warning Tool, which may have produced conclusions that are not broadly applicable to a new audience. Furthermore, because this tool uses passive alerting without interrupting workflows, the findings may not generalize to interruptive alerts, which may be more common and more concerning to ICU staff. Some of the proposed enhancements may be difficult to implement because of vendor-specific constraints. The resources required to perform this multimodal usability evaluation, which included time spent collecting SUS questionnaires, time spent preparing for and conducting think-aloud testing, and a study member familiar with think-aloud usability testing, may limit generalizability of the methods used. However, these methods can be reproduced at minimal cost.
Conclusion
In our postimplementation usability study within the critical care environment, the PICU Warning Tool, aimed at enhancing situation awareness, demonstrated good to excellent usability. The tool exhibited important features that enhanced usability, including learnability and clear display of complex information that is easily accessible. Opportunities for improvement were identified, including better integration into nursing workflow. Our study underscores the value of usability testing in the evaluation of CDS.
Clinical Relevance Statement
Lack of shared situation awareness is the most common contributing factor to serious safety events in children's hospitals. We describe a mixed-methods usability evaluation of a clinical decision support tool designed to enhance shared situation awareness in the critical care setting. The results from our evaluation provide important insights to pediatricians seeking to improve shared situation awareness through clinical decision support tools.
Multiple Choice Questions
What is the most common contributing factor to serious safety events in children's hospitals?
- A. Error in medical decision-making
- B. Lack of shared situation awareness
- C. Insufficient staffing
- D. Failure to communicate effectively
Answer: The most common contributing factor to serious safety events in children's hospitals is a lack of shared situation awareness. While errors in medical decision-making, communication failures, and staffing insufficiency can contribute to serious safety events, they are less common. Targeting improved shared situation awareness has the potential to improve patient safety in children's hospitals.
Which of the following was identified through usability testing as a desired enhancement in the PICU Warning Tool?
- A. Making comments mandatory for all user-selected high-risk criteria
- B. Reducing the number of user-selected high-risk criteria
- C. Improved integration of the tool into routine nursing workflow
- D. Improved integration of the tool into routine physician workflow
Answer: Mixed-methods usability testing of the PICU Warning Tool identified improved integration of the tool into routine nursing workflow as a desired enhancement. The CDS tool is integrated into the patient list, which is part of the workflow for physicians, advanced practice providers, and respiratory therapists. Bedside nursing tends to use a different screen for their routine workflow in the electronic health record. Integration into workflow is critical for the success of any CDS tool. Participants also noted a desire for additional user-selected criteria and for making comments mandatory for the nonspecific user-selected criteria “Other” and “Provider Intuition.”
Conflict of Interest
None declared.
Ethical Approval
Our study was approved by the local Institutional Review Board (2020-0202) and was determined to be nonhuman subjects research.
References
- 1 Burrus S, Hall M, Tooley E, Conrad K, Bettenhausen JL, Kemper C. Factors related to serious safety events in a children's hospital patient safety collaborative. Pediatrics 2021; 148 (03) e2020030346
- 2 Endsley MR. Measurement of situation awareness in dynamic systems. Hum Factors 1995; 37: 65-84
- 3 Endsley MR. Designing for situation awareness in complex systems. Proceedings of the 2nd International Workshop on Symbiosis of Humans, Artifacts and Environment: 2001
- 4 Bates DW, Kuperman GJ, Wang S. et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc 2003; 10 (06) 523-530
- 5 Kushniruk A, Triola M, Stein B, Borycki E, Kannry J. The relationship of usability to medical error: an evaluation of errors associated with usability problems in the use of a handheld application for prescribing medications. Stud Health Technol Inform 2004; 107 (Pt 2): 1073-1076
- 6 Kushniruk AW, Triola MM, Borycki EM, Stein B, Kannry JL. Technology induced error and usability: the relationship between usability problems and prescription errors when using a handheld application. Int J Med Inform 2005; 74 (7-8): 519-526
- 7 Norman DA. The Design of Everyday Things. New York: Basic Books; 2013
- 8 Preece J, Rogers Y, Sharp H. et al. Human–computer interaction. New York: Addison-Wesley Publishing Company; 1994
- 9 Orenstein EW, Boudreaux J, Rollins M. et al. Formative usability testing reduces severe blood product ordering errors. Appl Clin Inform 2019; 10 (05) 981-990
- 10 Akhloufi H, Verhaegh SJC, Jaspers MWM, Melles DC, van der Sijs H, Verbon A. A usability study to improve a clinical decision support system for the prescription of antibiotic drugs. PLoS One 2019; 14 (09) e0223073
- 11 Wright MC, Dunbar S, Macpherson BC. et al. Toward designing information display to support critical care. A qualitative contextual evaluation and visioning effort. Appl Clin Inform 2016; 7 (04) 912-929
- 12 Bode L, Schamer S, Böhnke J. et al; ELISE Study Group. Tracing the progression of sepsis in critically ill children: clinical decision support for detection of hematologic dysfunction. Appl Clin Inform 2022; 13 (05) 1002-1014
- 13 Ronan CE, Crable EL, Drainoni M-L, Walkey AJ. The impact of clinical decision support systems on provider behavior in the inpatient setting: a systematic review and meta-analysis. J Hosp Med 2022; 17 (05) 368-383
- 14 Dziorny AC, Heneghan JA, Bhat MA. et al; Pediatric Data Science and Analytics (PEDAL) Subgroup of the Pediatric Acute Lung Injury and Sepsis Investigators (PALISI) Network. Clinical decision support in the PICU: implications for design and evaluation. Pediatr Crit Care Med 2022; 23 (08) e392-e396
- 15 Wright A, Phansalkar S, Bloomrosen M. et al. Best practices in clinical decision support: the case of preventive care reminders. Appl Clin Inform 2010; 1 (03) 331-345
- 16 Shelov E, Muthu N, Wolfe H. et al. Design and implementation of a pediatric ICU acuity scoring tool as clinical decision support. Appl Clin Inform 2018; 9 (03) 576-587
- 17 Dewan M, Muthu N, Shelov E. et al. Performance of a clinical decision support tool to identify PICU patients at high risk for clinical deterioration. Pediatr Crit Care Med 2020; 21 (02) 129-135
- 18 Dewan M, Soberano B, Sosa T. et al. Assessment of a situation awareness quality improvement intervention to reduce cardiac arrests in the PICU. Pediatr Crit Care Med 2022; 23 (01) 4-12
- 19 Horsky J, Schiff GD, Johnston D, Mercincavage L, Bell D, Middleton B. Interface design principles for usable decision support: a targeted review of best practices for clinical prescribing interventions. J Biomed Inform 2012; 45 (06) 1202-1216
- 20 Brooke J. SUS: a retrospective. J Usability Stud 2013; 8 (02) 29-40
- 21 Press A, McCullagh L, Khan S, Schachter A, Pardo S, McGinn T. Usability testing of a complex clinical decision support tool in the emergency department: lessons learned. JMIR Human Factors 2015; 2 (02) e14
- 22 Nielsen J, Landauer TK. A mathematical model of the finding of usability problems. In: Proceedings of the INTERCHI'93 Conference on Human Factors in Computing Systems. Amsterdam, The Netherlands, The Netherlands: IOS Press 1993: 206-213
- 23 Faulkner L. Beyond the five-user assumption: benefits of increased sample sizes in usability testing. Behav Res Methods Instrum Comput 2003; 35 (03) 379-383
- 24 Brooke J. SUS—a quick and dirty usability scale. In: Jordan PW, Thomas B, McClelland IL, Weerdmeester B, eds. Usability Evaluation in Industry. England: Routledge; 1996
- 25 Britto MT, Jimison HB, Munafo JK, Wissman J, Rogers ML, Hersh W. Usability testing finds problems for novice users of pediatric portals. J Am Med Inform Assoc 2009; 16 (05) 660-669
- 26 Nguyen TH, Cunha PP, Rowland AF, Orenstein E, Lee T, Kandaswamy S. User-centered design and evaluation of clinical decision support to improve early peanut introduction: formative study. JMIR Form Res 2023; 7: e47574
- 27 Kushniruk AW, Borycki EM, Kuwata S, Kannry J. Emerging approaches to usability evaluation of health information systems: towards in-situ analysis of complex healthcare systems and environments. Stud Health Technol Inform 2011; 169: 915-919
- 28 Patel VL, Kannampallil TG. Cognitive informatics in biomedicine and healthcare. J Biomed Inform 2015; 53: 3-14
- 29 Dewan M, Vidrine R, Zackoff M. et al. Design, implementation, and validation of a pediatric ICU sepsis prediction tool as clinical decision support. Appl Clin Inform 2020; 11 (02) 218-225
- 30 Martin B, Mulhern B, Majors M. et al. Improving pediatric intensive care unit discharge timeliness of infants with bronchiolitis using clinical decision support. Appl Clin Inform 2023; 14 (02) 392-399
- 31 Wulff A, Montag S, Marschollek M, Jack T. Clinical decision-support systems for detection of systemic inflammatory response syndrome, sepsis, and septic shock in critically ill patients: a systematic review. Methods Inf Med 2019; 58 (S 02): e43-e57
- 32 Chen JH, Asch SM. Machine learning and prediction in medicine—beyond the peak of inflated expectations. N Engl J Med 2017; 376 (26) 2507-2509
- 33 Park J, Zhong X, Dong Y, Barwise A, Pickering BW. Investigating the cognitive capacity constraints of an ICU care team using a systems engineering approach. BMC Anesthesiol 2022; 22 (01) 10
Publication History
Received: 14 September 2023
Accepted: 18 February 2024
Accepted Manuscript online: 20 February 2024
Article published online: 01 May 2024
© 2024. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany