Keywords
electronic health records - patient safety - informatics - information systems
Background and Significance
Clinical information systems (CISs) are imperative to delivering safe, quality care
in the 21st century.[1] As patients engage with their local health care systems at various touch-points
across the care continuum, CISs connect their health care journey, affording clinicians
a holistic view of their treatment. Despite the numerous benefits of CISs, they can
also introduce unintended patient care consequences if not configured and used safely.
It is estimated that one-third of patient safety incidents following a CIS implementation are caused by its configuration and use.[2] [3]
For example, a recent issue was encountered at our organization where medication dosing bands were configured for a medication that was intended to be dosed by body weight. This caused several patients to receive a higher-than-recommended dose until the error was caught and corrected. In another case, an intravenous medication was ordered without a duration or end date, and the system was not configured with a maximum safe dosing alert. This resulted in the patient receiving an unsafe excess of the drug. Many other failed checks in the care process contributed, but the system, if configured correctly, could have easily prevented these events.
These examples illustrate the importance of building safety and risk management into
the specification and design of CISs.[4] The eSafety checklist developed by Dhillon-Chattha and colleagues provides a detailed
listing of system-agnostic, evidence-based configuration recommendations that improve
CIS safety.[5] [6] It contains 642 items organized into 10 common CIS capabilities: global settings; patient identification; clinical documentation; order management; clinical decision support; medication management; referral management; results management; clinical communication; and patient portal ([Fig. 1]). Items are intended to identify system strengths and areas for improvement, which
can then be prioritized and actioned by organizations according to their own patient
safety policies. The tool can be used by organizations to support configuration of
new CIS or to optimize existing systems. This article describes how the eSafety checklist
was adapted and used by Alberta Health Services (AHS) to support safe configuration
of their new system-wide CIS, Connect Care (core application developed by Epic Systems,
Verona, Wisconsin, United States).
Fig. 1 eSafety checklist: an evidence-based, system-agnostic tool.
AHS is Canada's largest province-wide, fully integrated health system, responsible for delivering publicly funded health services to over 4.3 million residents. Connect Care is currently being implemented across AHS in nine
phases (“waves”) with the first having launched in November 2019. Upon complete implementation,
Connect Care will replace almost 1,300 disparate legacy software systems. A province-wide
CIS will significantly improve continuity of patient care and provide a more holistic
patient record for improved decision-making. Due to the size, scope, and complexity
of this implementation, safety was identified as a key priority by AHS throughout
the life cycle of this project. Thus, the eSafety checklist was used to identify high-priority
safe configuration practices for consistent application across the various Connect
Care modules, to be recommended to leadership and implemented prior to launch.
Objective
The objective of this project was to evaluate each of the 13 modules in the Connect
Care application and identify misaligned system configurations according to the eSafety
checklist, which could then be recommended for remediation. A secondary objective
was to measure compliance of each module with the eSafety checklist and quantify patient
safety risk of the identified misaligned system configurations.
The project consisted of three phases: (1) adaptation of the eSafety checklist to
an abbreviated, high-priority listing, (2) a recorded video demonstration of the workflows
and components in each Connect Care module, and (3) independent evaluation of each
recording by two eSafety evaluators using the abbreviated eSafety checklist. An overview
of the process is depicted in [Fig. 2].
Fig. 2 Flowchart of the process for applying the eSafety checklist to Connect Care configuration
including three phases: (1) abbreviated checklist development, (2) module demonstration,
and (3) module evaluation.
Methods
Development of an Abbreviated Checklist
Due to project timelines, a comprehensive evaluation of Connect Care configurations
using the entire eSafety checklist was not a viable option prior to implementation.
Therefore, an abbreviated version of the eSafety checklist was developed and used
for evaluation prior to launching the first wave of Connect Care. Abbreviated checklist
items were selected by AHS Human Factors specialists by prioritizing items that could
be assessed through a general video demonstration and had high potential for human
error and/or patient harm. The abbreviated eSafety checklist focuses on 169 items
while still providing evaluators the ability to assess additional items, if deemed
relevant to a given module. Evaluators were trained on use of the abbreviated checklist
and were asked to familiarize themselves with it and the entire eSafety checklist
prior to module demonstrations.
For reference, the abbreviated checklist is included in [Supplementary Appendix A] (available in the online version; abbreviated eSafety checklist).
Module Demonstrations
Module demonstrations were provided by each of the module configuration leads to 13
eSafety evaluators, including 5 eQuality and eSafety specialists and 8 Human Factors
specialists. Due to system security and access policy limitations during prelaunch
of Connect Care, eSafety evaluators did not have access to a sandbox environment.
Two evaluators were assigned to each Connect Care module and demonstrations were provided
for each of the following modules: ambulatory (out-patient); anesthesia; cardiology;
diagnostic imaging; emergency; in-patient documentation; in-patient orders; laboratory;
obstetrics; oncology; patient portal; pharmacy; and surgery.
During each demonstration, evaluators were shown commonly performed clinical activities by a member of the build team and were given the opportunity to ask questions. Demonstrations were recorded via Skype so that evaluators could conduct further detailed review.
The abbreviated eSafety checklist was completed primarily in reference to the recorded
demonstration videos. Build teams remained accessible for questions and clarifications
throughout the evaluation process.
Module Evaluations
Module evaluations were conducted in April 2019. The abbreviated eSafety checklist
was applied independently by at least two evaluators per Connect Care module, from
which configuration compliance and patient safety risks were identified. The recommended
checklist configurations were first rated based on compliance. A technical description
of the compliance ratings can be found in the “Instructions & Scoring” tab of the
eSafety checklist ([Supplementary Appendix A], available in the online version). However, for the purposes of this evaluation,
the following compliance ratings were used:
- Yes: the recommended practice was demonstrated and correctly configured.
- No: the recommended practice was demonstrated but was incorrectly configured.
- Partial: the recommended practice was demonstrated but was inconsistently configured.
- Not applicable: the recommended practice was not demonstrated.
Based on the above definitions, items were assigned a configuration compliance score of 1 for “yes,” 0 for “no,” and 0.5 for “partial” compliance. Items that could not be rated based on the available information were excluded so that they did not affect the percent compliance calculations. A minimal sketch of this arithmetic appears below.
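As an illustration only (not part of the project tooling), the scoring rule described above can be expressed in a few lines of Python; the function name and data layout are assumptions for this sketch:

```python
# Illustrative sketch of the compliance scoring described above.
# Scores: "yes" = 1, "partial" = 0.5, "no" = 0; "not applicable"
# items are excluded from the denominator entirely.

SCORES = {"yes": 1.0, "partial": 0.5, "no": 0.0}

def percent_compliance(ratings):
    """Compute percent compliance from a list of rating strings."""
    rated = [r for r in ratings if r in SCORES]  # drop "not applicable"
    if not rated:
        return None  # nothing assessable
    return 100.0 * sum(SCORES[r] for r in rated) / len(rated)

# Example: 2 compliant, 1 partial, 1 noncompliant, 1 excluded
print(percent_compliance(["yes", "yes", "partial", "no", "not applicable"]))
# -> 62.5
```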
The likelihood of occurrence and impact on patient safety was also considered. Patient
safety risk ratings were as follows:
- Minor: consequences are trivial. Alternatives may depend on personal preferences.
- Medium: consequences involve significant user frustration and operational impact.
- Major: consequences involve high patient safety risk or potential to impact more than one module.
- Red flag: relates to high-frequency and/or high-consequence patient safety risk that cannot be mitigated through design, training, or a workaround of some kind.
Completed checklists were compared across raters for each module. Any identified discrepancies in ratings were noted and re-evaluated in order to reach consensus amongst evaluators. All partial and noncompliant items, along with evaluator rationale, ratings, and notable examples, were compiled into a table of Connect Care Application Opportunities (provided in [Supplementary Appendix B], available in the online version). Lastly, to further support build efforts, the identified opportunities were prioritized for implementation using the following definitions (modeled in the sketch after this list):
- Urgent: item of high-frequency and/or high-consequence patient safety risk, or impacting multiple modules and requiring immediate action.
- Quick-win: patient safety risk item expected to be easily resolved.
- Postlaunch: unrelated to patient safety; to be addressed in future waves.
- Vendor review: issues requiring additional build review, content support, and/or training.
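Purely as an illustration of how the three rating dimensions (compliance, patient safety risk, and implementation priority) fit together, each opportunity record can be modeled as below; the type names and example content are assumptions for this sketch, not project artifacts:

```python
# Illustrative model of an identified opportunity; names are assumed
# for this sketch and are not part of the Connect Care tooling.
from dataclasses import dataclass
from enum import Enum

class Compliance(Enum):
    YES = "yes"; NO = "no"; PARTIAL = "partial"; NOT_APPLICABLE = "n/a"

class Risk(Enum):
    MINOR = "minor"; MEDIUM = "medium"; MAJOR = "major"; RED_FLAG = "red flag"

class Priority(Enum):
    URGENT = "urgent"; QUICK_WIN = "quick-win"
    POSTLAUNCH = "postlaunch"; VENDOR_REVIEW = "vendor review"

@dataclass
class Opportunity:
    module: str            # e.g., "Pharmacy (Willow)"
    checklist_item: str    # e.g., "1.1.9"
    compliance: Compliance
    risk: Risk
    priority: Priority
    rationale: str         # evaluator notes and examples

# Hypothetical example record
opp = Opportunity("Pharmacy (Willow)", "1.1.9", Compliance.PARTIAL,
                  Risk.MAJOR, Priority.URGENT,
                  "Date format inconsistent across order screens.")
```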
Results
The number of checklist items that were applicable and could be reviewed varied from one module to the next. Of the 169 items, Ambulatory offered the most, with over 80% of the abbreviated checklist items able to be assessed for compliance ([Table 1]). Some checklist items were not applicable during the review due to the current state of the module's build; because such items were excluded from the compliance calculations, this limitation did not reduce the validity of the eSafety checklist results.
Table 1
Number of checklist items reviewed and resulting classification per module

| Module | No. of items reviewed and assessed for compliance | No. of items compliant | No. of items partially compliant | No. of items noncompliant | No. of items reviewed and deemed not applicable (excluded) |
|---|---|---|---|---|---|
| Outpatient (Ambulatory) | 145 | 125 | 16 | 4 | 24 |
| Obstetrics (Stork) | 142 | 119 | 15 | 8 | 27 |
| Emergency (ASAP) | 134 | 112 | 14 | 8 | 35 |
| Oncology (Beacon) | 126 | 94 | 20 | 12 | 43 |
| Inpatient documentation (ClinDoc) | 115 | 100 | 9 | 6 | 54 |
| Inpatient orders (IP Orders) | 110 | 88 | 16 | 6 | 59 |
| Pharmacy (Willow) | 102 | 74 | 14 | 14 | 67 |
| Diagnostic imaging (Radiant) | 95 | 73 | 13 | 9 | 74 |
| Surgery (OpTime) | 91 | 73 | 10 | 8 | 78 |
| Patient portal (MyChart) | 86 | 57 | 10 | 19 | 83 |
| Cardiology (Cupid) | 85 | 65 | 8 | 12 | 84 |
| Anesthesia | 79 | 64 | 6 | 9 | 90 |
| Laboratory (Beaker) | 69 | 51 | 10 | 8 | 100 |
Overall Compliance
Of the modules reviewed, all were at least 72% compliant, and on average modules achieved 84% configuration compliance. Ambulatory (92%), Obstetrics (89%), and Emergency (89%) received the highest configuration compliance, while Patient Portal (72%), Pharmacy (79%), and Oncology (81%) had the lowest ([Fig. 3]).
Fig. 3 Connect Care configuration compliance with the eSafety checklist.
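As a worked check of the scoring described in the Methods (assuming partial items count at 0.5 in the numerator and excluded items are dropped from the denominator), Ambulatory's compliance from Table 1 is (125 + 0.5 × 16) / 145 = 133/145 ≈ 92%, and Patient Portal's is (57 + 0.5 × 10) / 86 = 62/86 ≈ 72%, matching the reported figures.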
Identified Opportunities
Across the 13 modules demonstrated, 273 opportunities for improvement were identified.
Given the overall compliance scores, it is unsurprising that Oncology, Patient Portal,
and Pharmacy also had the greatest proportion of configuration opportunities ([Fig. 3]).
Major Areas of Inconsistency
An analysis of the eSafety checklist results identified four major areas of inconsistency. These areas cover noncompliant items that were observed across multiple modules and were identified as a “major” patient safety risk. The four areas are: (1) inconsistent date and time, (2) unclear patient identification, (3) ineffective alert systems, and (4) insufficient decision support.
These areas are discussed in detail below, with supporting examples. Where applicable,
checklist items are noted in parentheses.
Inconsistent Date and Time
Date and time were not expressed in a consistent format and placement (eSafety checklist items 1.1.7 and 1.1.8). Additionally, the month was often not spelled out or abbreviated alphabetically, contrary to the recommended format of 2-digit day, abbreviated alphabetical month, and 4-digit year (1.1.9). For example, patient date of birth was displayed with a 1-digit month and day and a 4-digit year, while orders were dated with a 2-digit month and day and a 2-digit year ([Supplementary Fig. S1], available in the online version). There were also inconsistencies in the use of 12-hour (hh:mm) as opposed to 24-hour (hhmm) times ([Supplementary Fig. S2], available in the online version). [Supplementary Fig. S2] (available in the online version) shows an example of the correct alphanumeric dating format (green box); however, elsewhere in the system dates were displayed inconsistently, with a single-digit day or a nonabbreviated month ([Supplementary Fig. S3], available in the online version).
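For illustration, the recommended convention (2-digit day, abbreviated alphabetical month, 4-digit year, and 24-hour time) corresponds to a single shared format string; the helper below is a hypothetical sketch, not Connect Care configuration:

```python
# Hypothetical sketch: one shared date/time convention applied everywhere,
# matching the checklist's recommended format (e.g., "23-May-1993 1435").
from datetime import datetime

DISPLAY_FORMAT = "%d-%b-%Y %H%M"  # 2-digit day, abbrev. month, 4-digit year, 24-h time

def display(dt: datetime) -> str:
    """Render any timestamp in the single agreed display format."""
    return dt.strftime(DISPLAY_FORMAT)

print(display(datetime(1993, 5, 23, 14, 35)))  # -> "23-May-1993 1435"
```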
Unclear Patient Identification
Patient names appeared truncated (2.1.11) and information required to accurately identify
a patient was not always clearly displayed or consistently placed on screens where
patient care documentation occurs (2.3.1). An example of name truncation is provided
in [Supplementary Fig. S4] (available in the online version). In another example, color contrast (grey on black)
resulted in poor legibility of relevant patient information ([Supplementary Fig. S5], available in the online version).
The location of patient information was also inconsistent across modules: some had a “patient storyboard” (sidebar), while others did not. Additionally, minimal alerting was provided when accessing the records of individuals with similar or sound-alike names (2.6.2). An alert does appear for patients with similar or sound-alike names, but only as a mouse-over label, and it lacks the distinctive visual features consistent with warnings ([Supplementary Fig. S6], available in the online version).
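Sound-alike name matching of the kind the checklist calls for is commonly implemented with phonetic codes; the short Soundex-style sketch below is illustrative only and says nothing about how Epic implements its check:

```python
# Illustrative Soundex encoding: two names that share a code are
# candidates for a "sound-alike" warning. Not Epic's actual algorithm.
CODES = {c: d for d, letters in enumerate(
    ["BFPV", "CGJKQSXZ", "DT", "L", "MN", "R"], start=1) for c in letters}

def soundex(name: str) -> str:
    name = name.upper()
    encoded = name[0]                 # keep the first letter as-is
    prev = CODES.get(name[0], 0)
    for c in name[1:]:
        code = CODES.get(c, 0)
        if code and code != prev:     # skip repeats of the same code
            encoded += str(code)
        if c not in "HW":             # H/W do not reset the previous code
            prev = code
    return (encoded + "000")[:4]      # pad/truncate to 4 characters

print(soundex("Smith"), soundex("Smyth"))  # -> S530 S530 (sound-alike)
```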
Ineffective Alert Systems
Within modules, standard rules were not used for the colors of severity warnings and alerts (1.1.26). Additionally, color, shape, and size were not always manipulated to make visual alerts distinct from one another (5.2.7), nor did alerts always offer potentially appropriate actions in response (5.3.10). For example, color was used inconsistently to indicate high-severity warnings and alerts within the same module: sometimes red was used, other times orange ([Supplementary Fig. S7], available in the online version).
System-generated alerts were triggered at the time of order submission rather than at order entry (4.5.1). An example is shown in [Supplementary Fig. S8] (available in the online version), where a drug–drug interaction alert appears at the time of signing as opposed to the time of entry.
Medication warnings were also found to be embedded in text with no visual hierarchy, making them inconspicuous and easily missed ([Supplementary Fig. S9], available in the online version). Some alerts did not provide any options to quickly resolve the issue, requiring the user to backtrack their actions ([Supplementary Fig. S10], available in the online version).
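One simple way to enforce the consistency the checklist asks for is a single severity-to-color mapping shared by every module; the mapping below is a hypothetical sketch, not the project's actual palette:

```python
# Hypothetical single source of truth for alert severity colors.
# Every module renders from this table, so "critical" is always red.
from enum import Enum

class Severity(Enum):
    INFO = "info"; WARNING = "warning"; HIGH = "high"; CRITICAL = "critical"

ALERT_COLORS = {
    Severity.INFO: "blue",
    Severity.WARNING: "yellow",
    Severity.HIGH: "orange",
    Severity.CRITICAL: "red",   # red reserved for the most severe alerts
}

def alert_color(severity: Severity) -> str:
    """Look up the one agreed color for a given alert severity."""
    return ALERT_COLORS[severity]

print(alert_color(Severity.CRITICAL))  # -> "red"
```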
Insufficient Decision Support
Clinical decision support was highly variable across modules. In some instances, users were required to remember important information from one page to use on another (1.6.2). As shown in [Supplementary Fig. S11] (available in the online version), the user was required to manually re-enter information from a previous point within the workflow.
The system also failed to generate normal reference ranges (3.7.5), alert the user of critical laboratory values (5.7.1), or consistently emphasize the medication name so that it stands out from the rest of the medication order (6.1.5). While the system may provide an indication/alert of abnormal values, it does not indicate the normal range ([Supplementary Fig. S12], available in the online version). In another example, a urinalysis was flagged as an abnormal result, but the system did not specify which of the many individual component values were abnormal, requiring the user to identify them manually ([Supplementary Fig. S13], available in the online version).
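The component-level flagging the checklist expects amounts to comparing each result against its reference range; the snippet below is an illustrative sketch with made-up reference values, not laboratory content from Connect Care:

```python
# Illustrative sketch: flag which individual components of a panel fall
# outside their reference ranges. Ranges here are placeholders only.
REFERENCE_RANGES = {           # component: (low, high) - hypothetical values
    "pH": (4.5, 8.0),
    "specific_gravity": (1.005, 1.030),
    "glucose_mg_dL": (0.0, 15.0),
}

def abnormal_components(panel: dict) -> dict:
    """Return each out-of-range component with its reference range."""
    flagged = {}
    for name, value in panel.items():
        low, high = REFERENCE_RANGES[name]
        if not (low <= value <= high):
            flagged[name] = (value, f"reference {low}-{high}")
    return flagged

print(abnormal_components(
    {"pH": 8.7, "specific_gravity": 1.012, "glucose_mg_dL": 40.0}))
# -> {'pH': (8.7, 'reference 4.5-8.0'), 'glucose_mg_dL': (40.0, 'reference 0.0-15.0')}
```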
Additionally, automated systems were not used to detect typographical errors (1.12.8),
as shown in [Supplementary Fig. S14] (available in the online version).
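Basic typographical checking of this kind can be approximated with fuzzy matching against a known vocabulary; the sketch below uses Python's standard difflib and a made-up word list, purely as an illustration:

```python
# Illustrative sketch: suggest corrections for likely typos by fuzzy
# matching against a known vocabulary (here, a tiny made-up formulary).
import difflib

VOCABULARY = ["metoprolol", "metformin", "morphine", "warfarin"]

def suggest(word: str, cutoff: float = 0.8):
    """Return close vocabulary matches for a possibly misspelled word."""
    return difflib.get_close_matches(word.lower(), VOCABULARY,
                                     n=3, cutoff=cutoff)

print(suggest("metroprolol"))  # -> ['metoprolol']
```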
Discussion
The eSafety checklist was used to assess the Connect Care application and identified several opportunities for better alignment with CIS best practices. To our knowledge, this is the first time the eSafety checklist has been used in a clinical system evaluation, and the largest formal eSafety evaluation of a CIS in terms of size and scope. An ambulatory
CIS evaluation tool was piloted by Co et al, but this tool mostly concerns medication
decision support and was tested on several smaller ambulatory clinics.[7] On the other hand, individual U.S. hospitals are now required to do yearly safety
assessments using the SAFER Guides, a 146-item checklist-based risk assessment tool.[8] The SAFER Guides were considered in the development of the eSafety checklist and
are designed to be used at the program level, whereas the eSafety checklist is more
comprehensive with 642 items and is designed for configuration interventions for front-line
informatics professionals. Although the recommended process for applying the SAFER
Guides is similar to how we applied the eSafety checklist, there were several key
recommendations that should be noted, namely the multidisciplinary nature of the assessment
team and the need for annual re-assessment.[9]
Overall compliance with eSafety practices was good (84% on average) but still leaves ample room for improvement as Connect Care continues to be configured, implemented, and refined. Accordingly, all opportunities identified for each module were provided to the relevant build team for further consideration and action.
The four main areas of inconsistency identified (inconsistent date/time, unclear patient identification, ineffective alert systems, and insufficient decision support) are a key takeaway from this project and should be considered in existing and future CIS builds.
Inconsistencies in date and time conventions can create confusion, introduce miscommunication amongst clinicians, and enable inaccurate documentation, potentially resulting in poor and/or unsafe patient care.[10] Accordingly, these types of misconfigurations were rated as major risks with urgent implementation priority ([Supplementary Appendix B]: application opportunities; available in the online version).
Accurate patient identification is key to ensuring the correct patient receives appropriate
and timely care. Use of truncated patient names may seem minor, since column width can be adjusted by the user; however, users cannot be relied upon to do so, and at best the truncation adds fatigue and workload for the user. User fatigue is a key factor in patient safety errors.[11]
With respect to ineffective alert systems, unintuitive alerts and inconsistent color-coding conventions increase the likelihood and continuation of potentially harmful practices. A lack of visually distinctive features consistent with warnings is detrimental, as it may result in users bypassing or missing the notification. Red is the most natural severe-alert color, and orange likely implies a lower level of severity,[12] [13] but this was not consistent in the system. Several instances of inconsistent color of severity warnings (for example, “high” being orange in some instances and red in others) were flagged in the system and rated as major risks. Alerting systems that do not provide meaning or guidance will over time become a source of annoyance and disruption; users may learn to ignore alerts or develop alert fatigue. Unacknowledged alerts can have significant impacts on patient care.[14] [15]
Another key area for improvement was insufficient or missing decision support. Reference ranges should always be provided in alerts; their absence presents a major risk, as results could be misinterpreted. When presenting multi-component alerts, the critical or abnormal component should be identified.[16] Additionally, automated systems were not used to detect typographical errors, which could result in miscommunication or add workload for staff who must fix mistakes.
Several limitations of this work should be noted. The eSafety checklist was applied by two reviewers per module and, while necessary from a resource standpoint, this creates potential for inconsistency in application. However, the checklist was designed to be objective and has high agreement between reviewers.[5] Additionally, in this exercise, one reviewer was consistently present at all observation sessions to introduce and set the stage for each.
The Connect Care configuration was incomplete when the review occurred, 6 months before the first launch of the application. The review was intentionally timed before launch to allow time to act on identified opportunities for improvement, though some deficiencies may have been remedied through the natural course of continued configuration. Nonetheless, applying the checklist at this point ensured key deficiencies were clearly highlighted for attention.
Due to the strict build timelines, module analysts facilitating the demonstrations
were limited to a 1- to 2-hour session with the evaluators. As a result, it is possible
that not all key areas of each module were revealed to the evaluators to assess and
rate for adherence to the checklist.
Although the full checklist could not be applied due to project timelines, the abbreviated checklist acted as a prompt for evaluators to attend to the items in the shortened list. Evaluators were familiar with the full checklist and could reference it when an observation aligned with an item outside the subset. In addition, familiarity with human factors and eSafety best practices helped evaluators locate checklist items in a given category to consider for noncompliance.
Some limitations of the checklist could also carry over into its application. For example, the original checklist includes all relevant configuration practices irrespective of evidence strength; ideally, both evidence strength and potential patient impact would be indicated to help builders prioritize system changes when recommendations are provided.
Conclusion and Future Directions
This study provides further validation of the eSafety checklist as a tool for configuration of electronic health records. Applied to one of the largest CIS implementations in Canada, the tool demonstrated clinical applicability in identifying gaps in CIS configuration. After improvements are made by build teams, this assessment will be replicated to confirm improvements to the system.
We encourage utilization and adaptation of the eSafety checklist at institutions across
Canada and North America for future CIS implementations. An important future development
for the checklist is the establishment of a formal process for keeping it updated
with current evidence and best practice, which will evolve over time.
Clinical Relevance Statement
The eSafety checklist provides early identification of system configuration that is inconsistent with best practices. Such configuration can lead to implementation of a system prone to user errors. The eSafety approach embodied in this tool aims to prevent patient harm in the design, implementation, and use of clinical information systems.
Multiple-Choice Questions
- What does CIS stand for?
a. Clinical Informatics Software
b. Clinical Informatics System
c. Clinical Information Software
d. Clinical Information System
Correct Answer: The correct answer is option d.
- Which is a correct example of the best practice for expression of date in an electronic health record?
a. 23-May-1993
b. 23-05-1993
c. 05-23-1993
d. May 23, 1993
Correct Answer: The correct answer is option a. An alphabetical month abbreviation and four-digit year ensure there can be no confusion between day, month, and year.
- Which is not a major area of inconsistency identified by the eSafety checklist in this research?
a. Inconsistent date and time
b. Unclear patient identification
c. Ineffective alert system
d. Insufficient decision support
Correct Answer: The correct answer is option d.
Erratum: The article has been updated as per Erratum (Doi: 10.1055/s-0044-1779302) published
on January 31, 2024.