DOI: 10.1055/s-0042-1757923
Real-Time User Feedback to Support Clinical Decision Support System Improvement
- Abstract
- Background and Significance
- Objectives
- Methods
- Results
- Discussion
- Limitations
- Conclusion
- Clinical Relevance Statement
- Multiple Choice Questions
- References
Abstract
Objectives To improve clinical decision support (CDS) by allowing users to provide real-time feedback when they interact with CDS tools and by creating processes for responding to and acting on this feedback.
Methods Two organizations implemented similar real-time feedback tools and processes in their electronic health records and gathered data over a 30-month period. At both sites, users could provide feedback by using Likert-scale feedback links embedded in all end-user-facing alerts, with results stored outside the electronic health record, and by leaving a comment when they overrode an alert. Both feedback streams were monitored daily by clinical informatics teams.
Results The two sites received 2,639 Likert feedback comments and 623,270 override comments over a 30-month period. Through four case studies, we describe how end-user feedback allowed us to rapidly respond to build errors and to identify inaccurate knowledge management, user-interface issues, and unique workflows.
Conclusion Feedback on CDS tools can be solicited in multiple ways, and it contains valuable and actionable suggestions to improve CDS alerts. Additionally, end users appreciate knowing their feedback is being received and may also make other suggestions to improve the electronic health record. Incorporation of end-user feedback into CDS monitoring, evaluation, and remediation is a way to improve CDS.
Background and Significance
Clinical decision support (CDS) systems can reduce medication errors, increase guideline adherence, and improve patient care.[1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] For these benefits to be realized, CDS needs to function correctly and provide accurate recommendations, but prior work has shown that the majority of CDS alerts are overridden.[12] Multiple studies have shown that CDS can be associated with unintended consequences and patient harm when not working as intended or when CDS is poorly conceived.[13] [14] These malfunctions of CDS have myriad causes, including issues with design, usability, build, and changing clinical practice.[15] [16] Monitoring and evaluation of user feedback and override comments may help informatics teams identify and rapidly remedy CDS malfunctions.
CDS malfunctions can be classified in different ways, ranging from the failure to meet the traditional “Five Rights” of decision support to wastes introduced by CDS in the Lean framework.[17] A variety of approaches have been proposed to prevent and retroactively remedy these malfunctions, including the use of deliberate evidence-based design,[18] [19] [20] [21] [22] extensive prerelease testing,[22] [23] transition to customizable CDS,[11] and usability testing.[24] Yet, even with sophisticated CDS teams and formalized processes adherent to these principles, CDS malfunctions still occur.[25] Postrelease monitoring for CDS appropriateness and malfunctions is critical to successful implementation and maintenance.[23] [26] [27]
One mechanism for postrelease monitoring is direct end-user feedback,[28] [29] which in a survey of 29 chief medical information officers was a highly prevalent source of notification for CDS system (CDSS) malfunctions,[25] and can be considered an integral stage in CDS maturity.[30] [31] Feedback mechanisms around CDS have previously been used in simulations,[32] with predetermined categories of feedback[12] [32] and/or with dedicated tools to solicit feedback from users.[29] Prior work has quantified provider- and patient-level factors that contribute to alert overrides, and described how evaluation of “cranky comments” can identify CDS malfunctions.[15] [33] Chaparro et al described the use of user-entered CDS feedback in a quality-improvement process to reduce overall alert burden.[29] Yet, while descriptive studies of override rates and CDS malfunctions are prevalent in the literature, real-time notification of user feedback on CDS alerts to informatics teams has not been extensively described.
In this brief report, we discuss our experience analyzing user feedback of nonmedication-related CDS via two mechanisms in the electronic health record (EHR): (1) embedded direct user feedback tools during the postrelease phase of the alert lifecycle and (2) monitoring comments left by users when they are overriding alerts.
Objectives
The objective of this retrospective cross-sectional study was to quantify user feedback and override comments entered on nonmedication rule-based CDS alerts. Case reports describe examples of how user feedback prompted rapid-cycle build changes in CDS alerts.
Methods
Study Setting
The study was conducted between January 1, 2019 and June 1, 2022 at Mass General Brigham and Vanderbilt University Medical Center, two large health care systems located in Boston, Massachusetts and Nashville, Tennessee, United States, respectively. Both systems include a mix of academic and community hospitals, with large ambulatory care practices. The study was considered a quality-improvement project and exempted from institutional review board review.
Each health care system implemented two mechanisms through which users can provide feedback on CDS: (1) user feedback utilizing the Likert scale with the ability to leave additional text comments and (2) review of comments left within alerts (“override comments”). These feedback mechanisms comprise one component of the overall CDS monitoring process, which also includes CDS dashboards, anomaly detection of firing rates, and automated daily emails that highlight daily build changes in CDS (including display text, dependencies, and logic statements). This study included analysis of rule-based passive and interruptive CDS in all care settings, but excluded medication warnings (such as drug–drug interactions) and other forms of CDS such as order sets, predictive models, and clinical calculators where equivalent mechanisms to capture user feedback have not yet been implemented.
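The firing-rate anomaly detection mentioned above can be illustrated with a minimal sketch. The example below is not either site's production implementation: it assumes a table of daily firing counts per alert (the `alert_id`, `date`, and `firings` column names and the z-score threshold are hypothetical) and flags alert-days that deviate sharply from a trailing baseline.

```python
# Minimal sketch of firing-rate anomaly detection, one component of the
# monitoring process described above. Column names and thresholds are
# illustrative assumptions, not the production configuration.
import pandas as pd

def flag_firing_anomalies(daily_counts: pd.DataFrame,
                          window: int = 28,
                          z_threshold: float = 3.0) -> pd.DataFrame:
    """Flag alert-days whose firing count deviates sharply from a trailing baseline."""
    df = daily_counts.sort_values(["alert_id", "date"]).copy()
    grouped = df.groupby("alert_id")["firings"]
    # Trailing mean/std exclude the current day so a spike does not mask itself.
    df["baseline_mean"] = grouped.transform(
        lambda s: s.shift(1).rolling(window, min_periods=7).mean())
    df["baseline_std"] = grouped.transform(
        lambda s: s.shift(1).rolling(window, min_periods=7).std())
    df["z_score"] = (df["firings"] - df["baseline_mean"]) / df["baseline_std"]
    return df[df["z_score"].abs() >= z_threshold]
```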
The CDS team at Mass General Brigham comprises an internal medicine hospitalist and an emergency medicine physician (both with Epic build certification and clinical informatics board certification), as well as four senior application coordinators and two data scientists. At Vanderbilt University Medical Center, the user feedback is reviewed by two informaticians (both with doctorates in informatics and fellows of the American Medical Informatics Association), as well as by physician builders and subject-matter experts.
Likert User Feedback
On all end-user-facing alerts (called BestPractice Advisories [BPAs]) in our Epic EHR (Epic Systems Inc., Verona, Wisconsin, United States), we allow users to submit Likert feedback ([Fig. 1]). The Likert-scale feedback links display as a smiling face, a neutral face, and a frowning face. When a user clicks on a face, a web-browser window is launched from the EHR, where the user can enter free-text feedback. The user does not need to navigate to a separate application or enter any details about the patient or encounter, as this context is captured automatically in the URL of the alert-specific dynamic hyperlink. At Mass General Brigham, the data are stored in an SQL database, external to the EHR, which is accessible through an internally developed web application front-end that allows the informatics team to record comments and actions taken in response to the feedback; if no comments are left, the Likert score is still recorded. At Vanderbilt University Medical Center, a similar implementation stores the information in a REDCap database, similar to what has been previously described[29]; in this set-up, the Likert score is recorded only when comments are left. All user feedback that includes a text comment is also emailed to the CDS team in real time, with the content of the feedback as well as the encounter number and the name of the user who submitted it.
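As a rough illustration of this mechanism, the sketch below shows how an alert-specific hyperlink could carry the Likert score and encounter context in its query string, and how a small external service could record the submission. The host name, route, parameter names, and SQLite storage are illustrative assumptions; this is not either site's actual service (Mass General Brigham uses an internally developed SQL-backed web application and Vanderbilt University Medical Center uses REDCap).

```python
# Minimal sketch of an alert-specific Likert feedback hyperlink and a small
# external service that records submissions outside the EHR. All names and
# storage choices are illustrative assumptions.
import sqlite3
from urllib.parse import urlencode
from flask import Flask, request

FEEDBACK_BASE_URL = "https://cds-feedback.example.org/likert"  # hypothetical host

def build_feedback_url(alert_id: str, encounter_id: str, user_id: str, score: str) -> str:
    """Compose the dynamic hyperlink embedded in the BPA for one face (positive/neutral/negative)."""
    return f"{FEEDBACK_BASE_URL}?" + urlencode({
        "alert_id": alert_id,
        "encounter_id": encounter_id,
        "user_id": user_id,
        "score": score,
    })

app = Flask(__name__)

@app.route("/likert", methods=["GET", "POST"])
def record_feedback():
    # GET would render the free-text comment form (omitted here); POST stores
    # the score, context, and any comment in a database outside the EHR.
    row = (request.values.get("alert_id"), request.values.get("encounter_id"),
           request.values.get("user_id"), request.values.get("score"),
           request.values.get("comment", ""))
    with sqlite3.connect("cds_feedback.db") as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS likert_feedback "
            "(alert_id TEXT, encounter_id TEXT, user_id TEXT, score TEXT, comment TEXT)")
        conn.execute("INSERT INTO likert_feedback VALUES (?, ?, ?, ?, ?)", row)
    return "Thank you - your feedback was recorded."
```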


Our CDS teams investigate all feedback with text comments to determine whether the alert fired appropriately in the clinical scenario and was targeted correctly to the proper clinician and workflow step. Likert feedback without text comments is not investigated, as it lacks adequate clinical context to identify a specific improvement. The feedback review typically involves chart review by the informaticians on the CDS team to determine the clinical context of the alert firing, as well as testing of the alert's logic and properties. As needed, we reach out to the user who left the comment for additional clinical context or to elicit suggestions on how to improve the alert. When additional clinical expertise is required, our teams work with physician, nursing, and allied health subject-matter experts to reach a consensus. After review and potential remediation, we attempt to respond to each user with the results of our investigation and an explanation of what we are doing to address the issue raised. During the review process, we did not formally document the type of CDS malfunction for each feedback comment, but we adhered to the principles of alert appropriateness, clinical utility, and proper targeting.
Override Comments
Complementary to this feedback mechanism, we also look at override comments left when a clinician decides not to accept the recommended action from an alert ([Fig. 1]). These comments are captured through the native functionality within our EHR and are viewable by most users of the system. Clinicians typically use the field to provide clinical context or documentation, so not all comments necessarily reflect feedback on the appropriateness of the alert.
Each night, we run an SQL query against our EHR's data warehouse that extracts all override comments from the prior day. The override comments are processed by a “Cranky Comment Algorithm”[28] and a daily automated email compiling these comments is sent to the informatics team along with the relevant metadata for each alert ([Fig. 2]). The comments are reviewed by data analysts and clinical leads (as described above) on the CDS team each business day. Analysis of override comments often involves chart review to better understand the context of the alert's firing. We investigate all newly raised issues and respond to clinicians if additional information is needed or to clarify the intent of the alert.
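A minimal sketch of such a nightly pipeline is shown below: pull the prior day's override comments from the data warehouse, screen them with a simple keyword filter in the spirit of the published "cranky comment" approach,[28] and email the flagged comments to the CDS team. The table and column names, keyword list, and mail configuration are illustrative assumptions, not our actual queries or the published algorithm's term list.

```python
# Minimal sketch of a nightly override-comment pipeline. All table/column
# names, keywords, and mail settings are illustrative assumptions.
import smtplib
from datetime import date, timedelta
from email.message import EmailMessage
import pandas as pd
import sqlalchemy

CRANKY_TERMS = ["stop", "annoying", "wrong patient", "not my patient", "broken"]  # illustrative only

def fetch_yesterdays_overrides(engine: sqlalchemy.engine.Engine) -> pd.DataFrame:
    """Pull the prior day's non-empty override comments from the data warehouse."""
    yesterday = date.today() - timedelta(days=1)
    query = sqlalchemy.text(
        "SELECT alert_id, alert_name, user_name, encounter_id, override_comment "
        "FROM alert_overrides WHERE override_date = :d AND override_comment <> ''")
    return pd.read_sql(query, engine, params={"d": yesterday})

def screen_cranky(df: pd.DataFrame) -> pd.DataFrame:
    """Keep comments matching any keyword (case-insensitive substring match)."""
    pattern = "|".join(CRANKY_TERMS)
    return df[df["override_comment"].str.contains(pattern, case=False, na=False)]

def email_digest(df: pd.DataFrame, recipients: list[str]) -> None:
    """Send the flagged comments and their metadata to the CDS team."""
    msg = EmailMessage()
    msg["Subject"] = f"CDS override comments for review ({len(df)} flagged)"
    msg["From"] = "cds-monitoring@example.org"
    msg["To"] = ", ".join(recipients)
    msg.set_content(df.to_string(index=False))
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)
```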


Analysis
First, to describe the breadth of CDS implementation at the two study sites, we quantified the total number of unique BPAs and the alert firings per BPA, and reported the number and proportion of Likert user feedback and alert override comments. Second, to classify the CDS malfunctions identified by these two mechanisms, we randomly selected 100 Likert user feedback comments and 100 alert override comments, and two authors grouped them into failures of the "Five Rights" of decision support.[34] The Five Rights principles state that CDS should deliver the right information to the right person in the right intervention format through the right channel at the right time in the workflow to support an appropriate clinical decision. Finally, we describe four case reports where user feedback identified a CDS malfunction and led to a rapid-cycle remediation.
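For illustration, the sampling and tallying step could look like the sketch below, which draws a fixed-size random sample of comments and counts reviewer-assigned categories against the Five Rights. The `category` column name and the "Other" bucket are assumptions made for the example.

```python
# Minimal sketch of the sampling and tallying step for the Five Rights
# classification. Column names and labels are illustrative assumptions.
import pandas as pd

FIVE_RIGHTS = ["Right Information", "Right Person", "Right Format",
               "Right Channel", "Right Time"]

def sample_for_review(comments: pd.DataFrame, n: int = 100, seed: int = 42) -> pd.DataFrame:
    """Draw a reproducible random sample of comments for dual review."""
    return comments.sample(n=n, random_state=seed)

def tally_classifications(reviewed: pd.DataFrame) -> pd.Series:
    """Count reviewer-assigned categories; anything outside the Five Rights
    (e.g., general alert-fatigue or positive comments) is grouped as 'Other'."""
    return (reviewed["category"]
            .where(reviewed["category"].isin(FIVE_RIGHTS), "Other")
            .value_counts())
```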
Results
Quantitative
From January 1, 2019 to June 1, 2022, there were 21,334,937 firings of 876 unique BPAs that displayed to end users across the Mass General Brigham enterprise. There were 2,241,994 unique patients who had an alert show, and Hospital Encounters were responsible for most of the alerts (70.7%), followed by Office Visits (15.4%) and Orders Only encounters (3.10%). There were 6,028 (0.03% of all firings) Likert scores recorded with 2,033 (0.009%) comments entered by 1,128 unique users on 397 unique BPAs ([Table 1]). The Likert scores were 1,575 (26.1%) positive, 910 (15.1%) neutral, and 3,543 (58.8%) negative. There were 393,444 alert override comments entered by 35,832 unique users during the study period.
Over this same period, at Vanderbilt University Medical Center, there were 26,127,233 firings of 726 unique BPAs. There were 1,335,957 unique patients who had an alert show, and Hospital Encounters were responsible for most of the alerts (42.2%), followed by Communication encounters (22.1%) and Office Visits (17.0%). There were 767 (0.003% of all firings) Likert scores recorded with 606 (0.002%) comments entered by 360 unique users on 165 unique BPAs ([Table 1]). The Likert scores were 111 (14.5%) positive, 40 (5.2%) neutral, and 616 (80.3%) negative. There were 229,826 alert override comments entered by 14,628 unique users during the study period.
A random sample of feedback entered in the Likert user feedback and alert override comments was classified into failures of the "Five Rights" of decision support (see [Table 2]). Most failures were in the Right Information category, primarily because the alert fired inappropriately or did not provide adequate contextual information for the user to make a clinical decision. The second most common failure was improper targeting of alerts, including alerts that fired for the wrong user group (e.g., physician vs. nurse), for users not on the primary treatment team (e.g., consultants), or for users in clinical specialties unlikely to act on a specific alert. Many user feedback comments described frustration with alerts in general, highlighting how CDS malfunctions can contribute to alert fatigue. Finally, a small proportion of comments represented positive feedback.
Abbreviations: BP, blood pressure; BPA, BestPractice Advisory.
Qualitative Feedback
Four case reports describe how user feedback helped our informatics teams identify CDS malfunctions and deploy rapid remediation. These are illustrative examples and do not reflect the totality of the impact of the user feedback mechanisms described.
Case 1: Monoamine Oxidase Inhibitor Alert (Incorrect Logic Moved to Production)
One site developed a CDSS to warn anesthesiologists of potential drug interactions when a patient taking a monoamine oxidase inhibitor is scheduled to undergo anesthesia. As part of preparing an informational document for this alert, the logic was adjusted so that the alert would fire for every patient over the age of 18 years, to make capturing a screenshot easier. This logic was inadvertently moved to the production environment, where the alert began firing for every patient over the age of 18 years in the system. Within minutes of clinics opening, we began receiving negative Likert feedback that this was an inappropriate alert, and we were able to investigate and fix the problem within 4 hours of the initial move to production. During those 4 hours, the alert showed for 2,042 unique encounters; since the fix, it shows for an average of 1.5 unique encounters per day. Although we eventually received notification of the same error through the traditional information technology help desk process, the Likert feedback reached the team much more quickly, allowing us to make an immediate correction rather than waiting hours or an entire day for the help desk report to reach the informatics team.
Case 2: Preoperative Beta Blocker Alert (Inaccurate Medication Value Set)
One site designed an alert to warn anesthesiologists that a patient having an isolated coronary artery bypass graft had not received a β-blocker within 24 hours of the surgery. A user left an override comment stating that their patient “already received beta blocker.” On review, we found that the patient had received carvedilol twice in the prior 24 hours and realized that the β-blocker medication grouper did not contain carvedilol (a frequent issue with β-blocker value sets[35]). The grouper was changed to include carvedilol and the BPA firing rate fell by 97%.
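A simple automated check suggested by this case is to compare a medication grouper's contents against a reference list of ingredients it is expected to contain. The sketch below is illustrative only: the expected-ingredient set is an assumption for the example and not a complete β-blocker value set.

```python
# Minimal sketch of a value-set completeness check prompted by this case.
# The reference ingredient list is an illustrative assumption, not a
# complete or authoritative beta-blocker value set.
EXPECTED_BETA_BLOCKERS = {"metoprolol", "atenolol", "carvedilol", "labetalol",
                          "propranolol", "bisoprolol", "nadolol", "esmolol"}

def missing_ingredients(grouper_contents: set[str],
                        expected: set[str] = EXPECTED_BETA_BLOCKERS) -> set[str]:
    """Return expected ingredients absent from the grouper (carvedilol in this case)."""
    return expected - {m.lower() for m in grouper_contents}
```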
Case 3: Sepsis Alert (Insufficient Information Shown to Users)
One site developed a suite of eight BPAs that attempt to identify when a patient is developing or has developed sepsis and warn end users (both providers and nurses). These alerts have complicated criteria and preconditions built into their logic that were determined by an enterprise-wide sepsis committee. When we initially turned on the alerts, we received constant feedback from end users that their patients were not septic and that they did not understand why the alerts were firing. On further analysis, we determined the primary issue was a lack of concise information in the alert describing why it had triggered. We revamped the entire suite of BPAs with new dynamic text that showed why the alert was firing for the patient, including pertinent risk factors, vital signs, and laboratory values. This did not decrease the rate at which the alerts fired, but clinicians anecdotally felt it was an improvement.
Case 4: Patient Care Team Assignment (Unique Site-Specific Workflows)
When patients are admitted to the hospital, they are assigned a “Primary Team” to help with census management. If this team is removed, there is an alert to the provider asking them to add back the “Primary Team.” After this alert went live, we received feedback from a user at a smaller community hospital in our network that the alert was not helpful. We reached out to the user and discovered that given the small size of the hospital, patients were not assigned to a primary team. As a result, this alert had been firing for all patients and was not useful for the providers at this hospital. We discussed this with hospital leadership, and with their support, the alert was turned off for this hospital.
Discussion
In this report, we described a framework for incorporating user feedback and alert override comment review in the post-live CDS monitoring process. We classified a random sample of user feedback into malfunctions of the “Five Rights” of decision support and found that most failures were related to inappropriate alert firing, inadequate contextual information within the alert, and improper targeting of the alert to the correct clinical user. We described the importance of real-time notification of user feedback to informatics teams to quickly identify build errors and provided examples of CDS malfunctions including errors in logic, improperly worded alerts, unanticipated workflows, and ontological aberrations.
Prior work has demonstrated the importance of reviewing user feedback to maintain and improve CDSSs.[12] [25] [28] [29] [30] [31] [32] [33] We add to the existing literature by providing specific examples of how user feedback helped us identify and remediate CDS malfunctions, and how real-time notification of user feedback comments shortened the time between detecting a problem and fixing it. While most prior work in this domain described feedback on medication warnings, our report focuses on nonmedication-related rule-based CDS.
Comparing the two methods, the Likert user feedback links have some advantages over review of alert override comments. Overall, the Likert feedback was used much less frequently than the override comments, which we hypothesize is because leaving override comments is more integrated into the workflow and because users are primed by the selected acknowledgment reason. However, because the Likert hyperlinks are clearly labeled as "feedback," they have a better signal-to-noise ratio for changes that need to be made to the alerts. Additionally, the Likert user feedback is immediately transmitted to our CDS team, enabling us to respond to users within minutes if needed and to obtain a more accurate understanding of the clinical context and workflow. Of note, Likert feedback left without additional comments (i.e., the user clicks a frowning face but does not explain why) has not been impactful, as there is not enough information to understand why the user was unhappy, making it impossible to differentiate specific issues with the CDS from more general feelings about the EHR.
There are other advantages to these feedback mechanisms beyond improving our CDS. End users are often grateful that their feedback is received and that there are clinicians and dedicated analysts reading their feedback and thinking about the CDS that is added to the system ([Fig. 3]). Additionally, when we reach out to clinicians, we often hear about other frustrations with the EHR that are not directly related to CDS, and we can work to address the issue or route the information to the appropriate informatics members. Finally, when the clinical members of our CDS team are seeing patients, they find the links to be a helpful way to communicate with the rest of the team about the alerts as they appear in the true clinical workflow.


One unexpected result of our system has been that users sometimes inadvertently enter inappropriate text in the comments field. We noticed that some users would enter their password in the override comment field, perhaps mistaking it for a second sign-in password field. To address this, we implemented an automated screening process to suppress likely passwords from being sent to the informatics team. Similarly, clinicians sometimes leave clinical information in the alert's override comment field, presumably assuming that it will be acted upon. For example, for an alert that displays when a patient with a metallic implant has magnetic resonance imaging (MRI) ordered, we occasionally receive override comments such as: "implant is MRI compatible" or "please discuss with patient." This is problematic because the comments are not transmitted to the MRI technologists or radiologists. This type of feedback is left almost exclusively in the override comments and so does not reach the informatics teams until the following day; on the Likert feedback web site, where we control the user interface, we explicitly warn users not to leave this kind of feedback, as we do not have the resources to redirect those comments to the correct clinician. This assumption about how text boxes function in the EHR is not unique to alerts and poses dangers in other areas as well.[36]
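The password screen can be illustrated with a simple heuristic like the sketch below; the thresholds and character-class rules shown here are illustrative assumptions, not the screen either site actually runs.

```python
# Minimal sketch of a screen that suppresses likely passwords from the
# override-comment digest. The heuristics are illustrative assumptions.
import re

def looks_like_password(comment: str) -> bool:
    """Heuristic: a single short token with several character classes and no spaces."""
    token = comment.strip()
    if " " in token or not (6 <= len(token) <= 20):
        return False
    classes = [re.search(p, token) is not None
               for p in (r"[a-z]", r"[A-Z]", r"\d", r"[^A-Za-z0-9]")]
    return sum(classes) >= 3

def redact_suspected_passwords(comments: list[str]) -> list[str]:
    """Replace suspected passwords before the digest is emailed to the team."""
    return ["[suppressed - possible password]" if looks_like_password(c) else c
            for c in comments]
```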
Given the benefits realized with alerts, we have expanded this Likert hyperlink feedback mechanism to other aspects of the EHR, including documentation tools, chart review tools, and order sets. Based on our and others' experiences, Epic has developed a native version of this feedback tool embedded in the EHR for CDS. The Epic tool has a more integrated workflow (i.e., there is no additional window with a web interface to collect the feedback) and works well with native Epic reporting tools. While we have not received negative feedback about the additional window from clinicians, we anticipate a more seamless method of feedback would be preferred. Yet, we have continued to use our implementation as we feel that it has several advantages: (1) we use the same framework to collect feedback about other aspects of the EHR, which is not currently possible with Epic's solution; (2) it does not store the information directly in the EHR; and (3) it allows users who do not typically spend time in the production environment (e.g., technical analysts and developers) to also monitor and respond to the feedback.
Limitations
This study describes two mechanisms for using user feedback to improve CDSSs, but we do not compare these methods with others such as pilot testing, user focus groups, or emailing users to elicit feedback on specific alerts. Our CDS development and monitoring program includes these other methods, but we have found the two mechanisms described here easier to scale to all alerts and faster at surfacing issues. As described above, monitoring user feedback is one component of our overall post-deployment surveillance program, which also includes interactive dashboards, anomaly detection algorithms, and automated email notifications of build changes.
We recognize that review of user feedback comments and override comments requires a significant investment of time and effort from the informatics teams. Our ability to respond to user feedback depends on a robust process for analyzing and acting on the data in a timely fashion and relies heavily on a central team of data analysts and informaticists who are intimately familiar with the CDS build. For example, the CDS team at Mass General Brigham spends an average of an hour per business day reviewing override comments, with an additional 30 minutes per day for more in-depth clinical review as needed. Other health systems that are not similarly resourced may find our process difficult to replicate.
Conclusion
End-user feedback is an efficient and powerful tool to quickly identify and remediate CDS malfunctions and should be considered as part of a comprehensive post-live monitoring process.
Clinical Relevance Statement
This study adds to the body of research on monitoring and addressing CDS malfunctions in the postrelease phase of the CDS lifecycle. It describes a straightforward mechanism for eliciting and responding to feedback from EHR users, and shows how that feedback can be used to make meaningful changes.
Multiple Choice Questions
1. At what point in the CDSS lifecycle should malfunctions be identified and remedied?

   a. Conception
   b. Initial build and validation
   c. Post-release
   d. All of the above

   Correct Answer: The correct answer is option d, all of the above. As referenced in the text, several studies have examined mechanisms for identifying and/or preventing issues with CDSSs that lead to their malfunction. These include evidence-based design during conception, usability testing during build and validation, and postrelease monitoring after CDSSs have been implemented.

2. Which of the following is a significant challenge/limitation associated with end-user feedback monitoring?

   a. Need for end-user education to engage with the system
   b. Dedicated resources to review the feedback
   c. Technical implementation of the feedback mechanism
   d. Identifying change from the feedback

   Correct Answer: The correct answer is option b, dedicated resources to review the feedback. In our institutions, we found that end users often left feedback without any additional prompting or education, and the implementation of the feedback system is not technically complicated. We were able to identify meaningful changes to make to the system from the feedback. However, reviewing and responding to the feedback that came in daily required a significant investment of time from the team.
Conflict of Interest
None declared.
Protection of Human and Animal Subjects
The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects. This project was undertaken as a quality improvement initiative at Mass General Brigham, and as such was not formally supervised by the Institutional Review Board per their policies.
References
- 1 Smith DH, Perrin N, Feldstein A. et al. The impact of prescribing safety alerts for elderly persons in an electronic medical record: an interrupted time series evaluation. Arch Intern Med 2006; 166 (10) 1098-1104
- 2 Bright TJ, Wong A, Dhurjati R. et al. Effect of clinical decision-support systems: a systematic review. Ann Intern Med 2012; 157 (01) 29-43
- 3 Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 2003; 163 (12) 1409-1416
- 4 McGinn TG, McCullagh L, Kannry J. et al. Efficacy of an evidence-based clinical decision support in primary care practices: a randomized clinical trial. JAMA Intern Med 2013; 173 (17) 1584-1591
- 5 Chaudhry B, Wang J, Wu S. et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 2006; 144 (10) 742-752
- 6 Wright A, Sittig DF, Ash JS. et al. Lessons learned from implementing service-oriented clinical decision support at four sites: a qualitative study. Int J Med Inform 2015; 84 (11) 901-911
- 7 Bates DW, Cohen M, Leape LL, Overhage JM, Shabot MM, Sheridan T. Reducing the frequency of errors in medicine using information technology. J Am Med Inform Assoc 2001; 8 (04) 299-308
- 8 Bates DW, Gawande AA. Improving safety with information technology. N Engl J Med 2003; 348 (25) 2526-2534
- 9 Kwan JL, Lo L, Ferguson J. et al. Computerised clinical decision support systems and absolute improvements in care: meta-analysis of controlled clinical trials. BMJ 2020; 370: m3216
- 10 Stephens AB, Wynn CS, Hofstetter AM. et al. Effect of electronic health record reminders for routine immunizations and immunizations needed for chronic medical conditions. Appl Clin Inform 2021; 12 (05) 1101-1109
- 11 Stettner S, Adie S, Hanigan S, Thomas M, Pogue K, Zimmerman C. Effect of replacing vendor QTc alerts with a custom QTc risk alert in inpatients. Appl Clin Inform 2022; 13 (01) 19-29
- 12 Nanji KC, Seger DL, Slight SP. et al. Medication-related clinical decision support alert overrides in inpatients. J Am Med Inform Assoc 2018; 25 (05) 476-481
- 13 Stone EG. Unintended adverse consequences of a clinical decision support system: two cases. J Am Med Inform Assoc 2018; 25 (05) 564-567
- 14 Ash JS, Sittig DF, Campbell EM, Guappone KP, Dykstra RH. Some unintended consequences of clinical decision support systems. AMIA Annu Symp Proc 2007; 26-30
- 15 Wright A, Ai A, Ash J. et al. Clinical decision support alert malfunctions: analysis and empirically derived taxonomy. J Am Med Inform Assoc 2018; 25 (05) 496-506
- 16 Kassakian SZ, Yackel TR, Gorman PN, Dorr DA. Clinical decisions support malfunctions in a commercial electronic health record. Appl Clin Inform 2017; 8 (03) 910-923
- 17 Olakotan OO, Yusof MM. Evaluating the alert appropriateness of clinical decision support systems in supporting clinical workflow. J Biomed Inform 2020; 106 (103453): 103453
- 18 Bates DW, Kuperman GJ, Wang S. et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc 2003; 10 (06) 523-530
- 19 Payne TH, Hines LE, Chan RC. et al. Recommendations to improve the usability of drug-drug interaction clinical decision support alerts. J Am Med Inform Assoc 2015; 22 (06) 1243-1250
- 20 Kastner M, Lottridge D, Marquez C, Newton D, Straus SE. Usability evaluation of a clinical decision support tool for osteoporosis disease management. Implement Sci 2010; 5: 96
- 21 Horsky J, Schiff GD, Johnston D, Mercincavage L, Bell D, Middleton B. Interface design principles for usable decision support: a targeted review of best practices for clinical prescribing interventions. J Biomed Inform 2012; 45 (06) 1202-1216
- 22 Wright A, Ash JS, Aaron S. et al. Best practices for preventing malfunctions in rule-based clinical decision support alerts and reminders: results of a Delphi study. Int J Med Inform 2018; 118: 78-85
- 23 Yoshida E, Fei S, Bavuso K, Lagor C, Maviglia S. The value of monitoring clinical decision support interventions. Appl Clin Inform 2018; 9 (01) 163-173
- 24 Richardson S, Feldstein D, McGinn T. et al. Live usability testing of two complex clinical decision support tools: observational study. JMIR Human Factors 2019; 6 (02) e12471
- 25 Wright A, Hickman TTT, McEvoy D. et al. Analysis of clinical decision support system malfunctions: a case series and survey. J Am Med Inform Assoc 2016; 23 (06) 1068-1076
- 26 Lam JH, Ng O. Monitoring clinical decision support in the electronic health record. Am J Health Syst Pharm 2017; 74 (15) 1130-1133
- 27 Liu S, Wright A, Hauskrecht M. Change-point detection method for clinical decision support system rule monitoring. Artif Intell Med 2018; 91: 49-56
- 28 Aaron S, McEvoy DS, Ray S, Hickman TTT, Wright A. Cranky comments: detecting clinical decision support malfunctions through free-text override reasons. J Am Med Inform Assoc 2019; 26 (01) 37-43
- 29 Chaparro JD, Hussain C, Lee JA, Hehmeyer J, Nguyen M, Hoffman J. Reducing interruptive alert burden using quality improvement methodology. Appl Clin Inform 2020; 11 (01) 46-58
- 30 Orenstein EW, Muthu N, Weitkamp AO. et al. Towards a maturity model for clinical decision support operations. Appl Clin Inform 2019; 10 (05) 810-819
- 31 Jones BE, Collingridge DS, Vines CG. et al. CDS in a learning health care system: Identifying physicians' reasons for rejection of best-practice recommendations in pneumonia through computerized clinical decision support. Appl Clin Inform 2019; 10 (01) 1-9
- 32 Pfistermeister B, Sedlmayr B, Patapovas A. et al. Development of a standardized rating tool for drug alerts to reduce information overload. Methods Inf Med 2016; 55 (06) 507-515
- 33 Yoo J, Lee J, Rhee PL. et al. Alert override patterns with a medication clinical decision support system in an academic emergency department: retrospective descriptive study. JMIR Med Inform 2020; 8 (11) e23351
- 34 Osheroff J, Teich J, Levick D. et al. Improving Outcomes with Clinical Decision Support. 2nd ed. Chicago, IL: HIMSS Publishing; 2012
- 35 Wright A, Wright AP, Aaron S, Sittig DF. Smashing the strict hierarchy: three cases of clinical decision support malfunctions involving carvedilol. J Am Med Inform Assoc 2018; 25 (11) 1552-1555
- 36 Ai A, Wong A, Amato M, Wright A. Communication failure: analysis of prescribers' use of an internal free-text field on electronic prescriptions. J Am Med Inform Assoc 2018; 25 (06) 709-714
Publication History
Received: 28 March 2022
Accepted: 13 September 2022
Article published online:
26 October 2022
© 2022. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany





