Appl Clin Inform 2019; 10(05): 804-809
DOI: 10.1055/s-0039-1697599
Research Article
Georg Thieme Verlag KG Stuttgart · New York

A Comparison of One- and Four-Open-Chart Access: No Change in Computerized Provider Order Entry Error Rates

Paula Scariati
1   Health Informatics, Dignity Health, San Francisco, California, United States
,
Herschel Knapp
2   Nursing Research & Analytics, Dignity Health, Phoenix, Arizona, United States
,
Stuart Gray
2   Nursing Research & Analytics, Dignity Health, Phoenix, Arizona, United States

Address for correspondence

Paula Scariati, DO, MPH, MS
Dignity Health
185 Berry Street, Suite No. 300, San Francisco, CA 94107
United States   

Publication History

Received: 01 March 2019

Accepted: 09 August 2019

Publication Date:
23 October 2019 (online)

 

Abstract

Objective To assess changes in computerized provider order entry error rates among providers who, with less than 24 hours' notice, were switched from four-chart access to one-chart-only access.

Methods An interrupted time series analysis of emergency medicine providers, hospitalists, and maternal child health providers was performed, with pairwise comparison of computerized provider order entry error rates within and between specialties. This retrospective snapshot consisted of four phases. Phase 1 was the 2-week baseline during which providers were privileged to work with up to four charts open. Phase 2 was the 2-week period when providers were limited to one-chart access. Phase 3 was the 2-week period when providers were returned to four-chart access. Phase 4 was a 2-week period beginning 3 months after the end of phase 3.

Results Analysis of the overall and specialty-stratified cohorts revealed no statistically significant differences in median computerized provider order entry error rates across the four phases (Wilcoxon signed-rank test, α = 0.05). However, statistically significant differences in median computerized provider order entry error rates were detected between the three specialties within each phase of the study (Kruskal–Wallis, p < 0.001).

Conclusion Allowing providers in select specialties to have access to four charts simultaneously does not increase their computerized provider order entry error rates. Significant differences in error rates between specialties suggest the need for further study of the use of standardized order sets, charting, and workflow variations.



Background and Significance

Working with multiple charts open simultaneously improves the workflow and electronic health record (EHR) satisfaction of providers who must manage multiple patients concurrently. Within the hospital setting this is most often seen in emergency medicine (EM), hospital medicine (HM), and maternal child health (MCH). Many office-based practitioners also find the multiple-chart function convenient, if not necessary. Citing concerns about increased computerized provider order entry (CPOE) errors, but in the absence of consistent, compelling data for or against multiple open charts, the Office of the National Coordinator for Health Information Technology, Centers for Medicare and Medicaid Services, The Joint Commission, and EHR vendors recommend that one chart be open at a time.[1] [2] [3] [4] [5] Nevertheless, color-coded tabs, alerts, and patient photos are several of the available EHR features designed to improve patient identification when multiple charts are open.[6] [7] [8] [9]

Dignity Health, the nation's fifth largest health care system, is a 22-state network of over 9,000 physicians with 39 acute care hospitals across California, Nevada, and Arizona. While the organization has consistently taken a conservative position on multiple-chart access within the EHR, several physician specialty groups have lobbied aggressively for four-chart access, arguing that limiting chart access forces workarounds and impairs their ability to provide safe, effective patient care. MCH physicians have always been provisioned to work with four charts open to accommodate women delivering multiple babies. EM physicians were provisioned with four-chart access in March 2014 to meet the needs of a workflow that involves significant task stacking and task resumption. Likewise, HM physicians were provisioned with four-chart capability in May 2017 after successfully persuading medical leadership that their workflows look much like those of their EM colleagues and that the consequences of restricted access were similar.

On March 7, 2018, our EHR vendor notified all of its clients that it had uncovered a software defect, summarized as follows: when multiple patient charts are open, the active chart can unexpectedly change from one patient to another. The potential patient-safety implications of the defect left no option but to limit providers to single-chart access until a software fix was found.

With less than 24 hours' notice, providers, acknowledging the issue and embracing a shared commitment to patient care, altered their workflows to accommodate the change. Two weeks later, when the concern was resolved, their previous multiple-chart privileges were restored. This unexpected and short-lived change presented an organic opportunity for a retrospective analysis of these events to answer one question: does the number of charts a provider can open simultaneously impact CPOE error rates? Specifically, H0: the number of charts a provider can open simultaneously has no impact on CPOE error rates; Ha: the number of charts a provider can open simultaneously impacts CPOE error rates.



Objective

To assess changes in CPOE error rates among providers who were, with less than 24 hours' notice, switched from working with up to four charts open simultaneously to one-chart-only access.



Methods

This study was designed as a four-phase analysis of providers whose database preferences allowed four patient EHRs (charts) to be open simultaneously. Providers were identified within our EHR database by assigned position and by the position's “MAXIMUM_CHARTS” preference permitting four simultaneously open charts.

Each of the four phases was 2 weeks long. In the first 2 weeks, EM, HM, and MCH providers had the ability to keep four charts open simultaneously. In the second 2 weeks, these providers were limited to one chart open at a time. During the third 2-week period, these providers were reverted to each specialty's original four-chart preference. The first three phases of the time series analysis excluded the day the database preference was changed from four open charts to one and the day the preference was reverted from one open chart back to four. The fourth phase of this interrupted time series analysis occurred 3 months after the end of the third phase; this window was chosen to confirm that the return to baseline was sustained over time.
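
A minimal sketch of this phase windowing follows. Only the March 7, 2018 change to one-chart access is dated in the text, so every other boundary in the snippet is a hypothetical placeholder; the exclusion of transition days mirrors the rule described above.

```python
from datetime import date, timedelta

# Hypothetical reconstruction: only the 2018-03-07 switch to one-chart
# access is reported; all other boundaries are illustrative placeholders.
to_one_chart = date(2018, 3, 7)                       # four charts -> one chart
to_four_charts = to_one_chart + timedelta(days=15)    # one chart -> four charts (assumed)
phase4_start = to_four_charts + timedelta(days=105)   # ~3 months later (assumed)

# Days when the MAXIMUM_CHARTS preference was being changed are excluded.
transition_days = {to_one_chart, to_four_charts}

def phase_of(order_date: date):
    """Map an order date to study phase 1-4, or None if outside all windows."""
    if order_date in transition_days:
        return None
    if to_one_chart - timedelta(days=14) <= order_date < to_one_chart:
        return 1
    if to_one_chart < order_date < to_four_charts:
        return 2
    if to_four_charts < order_date <= to_four_charts + timedelta(days=14):
        return 3
    if phase4_start <= order_date < phase4_start + timedelta(days=14):
        return 4
    return None
```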

This study was submitted to the Dignity Health Research Integrity Office (electronic institutional review board, eIRB) and deemed a continuous quality improvement project sponsored by Dignity Health and conducted by Health Informatics. As such, it was exempt from IRB review.



Results

An initial analysis was performed to determine the amount of time that typically elapsed between a provider placing an order and then discontinuing, voiding, or canceling that same order. We queried orders placed by providers over a 15-hour window (enough time to cover a typical 12-hour shift) and computed the time difference between each order and any discontinued, voided, or canceled order by the same provider on the same patient. The results indicated that when an order was discontinued, canceled, or voided by the ordering provider, this nearly always occurred within 9 hours of the original order ([Fig. 1]).

Fig. 1 Time to correct charting error.

Thus, an order was considered erroneous when it was discontinued, voided, or canceled within 9 hours of the original order and the exact same order was then placed on a different patient by the same provider within 10 minutes. The 10-minute reorder window was chosen to parallel the “wrong-patient retract-and-reorder” methodology used in two randomized clinical trials, validated to indicate an order truly placed in error 76.2% of the time.[6] [10]
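
To make this operational definition concrete, here is a minimal sketch of the matching logic, assuming a simple order log with hypothetical column names (provider_id, patient_id, order_code, placed_at, retracted_at); the vendor's actual data model is not described here, and "within 10 minutes" is read as within 10 minutes after the retraction.

```python
import pandas as pd

RETRACT_WINDOW = pd.Timedelta(hours=9)     # retraction within 9 h of placement
REORDER_WINDOW = pd.Timedelta(minutes=10)  # identical reorder within 10 min

def flag_retract_and_reorder(orders: pd.DataFrame) -> pd.DataFrame:
    """Flag wrong-patient retract-and-reorder events in an order log.

    Expected (hypothetical) columns: provider_id, patient_id, order_code,
    placed_at, retracted_at (NaT if never discontinued, voided, or canceled).
    """
    # Step 1: orders retracted by the ordering provider within 9 hours.
    retracted = orders[
        orders["retracted_at"].notna()
        & (orders["retracted_at"] - orders["placed_at"] <= RETRACT_WINDOW)
    ]
    # Step 2: pair each retraction with every identical order by the same provider.
    pairs = retracted.merge(
        orders, on=["provider_id", "order_code"], suffixes=("_old", "_new")
    )
    # Step 3: keep pairs where the identical order was re-placed on a
    # *different* patient within 10 minutes of the retraction.
    delta = pairs["placed_at_new"] - pairs["retracted_at_old"]
    return pairs[
        (pairs["patient_id_old"] != pairs["patient_id_new"])
        & (delta >= pd.Timedelta(0))
        & (delta <= REORDER_WINDOW)
    ]
```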

Using our vendor's native application, two separate queries of the EHR database were performed. The first query (Query 1) identified all orders placed by EM, HM, and MCH providers during each 2-week phase, excluding test patients and patients considered inactive in the system. The second query (Query 2) identified any order from Query 1 that was discontinued, canceled, or voided within 9 hours (540 minutes) by the same provider who placed it. The results of both queries were copied into Microsoft Excel and imported as tables into Microsoft Access.

To identify erroneous orders, a Microsoft Access query (Query 3) compared the table of discontinued, canceled, and voided orders (Query 2) against the table of all orders (Query 1), flagging cases where the identical order was placed on a different patient by the same provider within 10 minutes of the discontinued, canceled, or voided order. The datasets from all queries were then imported into SPSS version 18 for statistical analysis.
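
Expressed as SQL, the Query 3 join described above might look roughly like the sketch below, run here against an in-memory SQLite database; the table and column names are hypothetical stand-ins for the Access tables, whose schema is not reproduced in this paper.

```python
import sqlite3

# Hypothetical schema mirroring the two imported Access tables:
# all_orders (Query 1) and retracted_orders (Query 2).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE all_orders (
    provider_id TEXT, patient_id TEXT, order_code TEXT, placed_at TEXT);
CREATE TABLE retracted_orders (
    provider_id TEXT, patient_id TEXT, order_code TEXT,
    placed_at TEXT, retracted_at TEXT);
""")

# Query 3: the identical order, same provider, different patient,
# re-placed within 10 minutes of the retraction.
QUERY_3 = """
SELECT r.provider_id,
       r.order_code,
       r.patient_id AS retracted_patient,
       o.patient_id AS reordered_patient,
       o.placed_at  AS reordered_at
FROM retracted_orders AS r
JOIN all_orders AS o
  ON o.provider_id = r.provider_id
 AND o.order_code  = r.order_code
 AND o.patient_id <> r.patient_id
 AND o.placed_at BETWEEN r.retracted_at
                     AND datetime(r.retracted_at, '+10 minutes')
"""

erroneous_orders = conn.execute(QUERY_3).fetchall()
```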

Our longitudinal analysis of CPOE error rates focused on 1,132 providers: 381 emergency medicine providers, 453 hospitalists, and 298 maternal child health providers. Of the more than 3.4 million orders placed by these providers over the four phases of this study, EM providers submitted 36.0%, HM providers submitted 53.2%, and MCH providers submitted 10.8% ([Table 1]).

Table 1

Computerized provider order entry (CPOE) by provider specialty and phase

Provider (n)                        Phase 1     Phase 2     Phase 3     Phase 4     Total        Total %
Emergency medicine (381)     Σ      310,750     310,231     306,217     308,586     1,235,784    36.0%
                             µ        815.6       814.3       803.7       809.9         810.9
                             SD       509.5       533.1       509.7       535.0         521.5
Hospital medicine (453)      Σ      470,375     476,446     455,512     426,421     1,828,754    53.2%
                             µ      1,038.4     1,051.8     1,005.6       941.3       1,009.6
                             SD     1,003.4     1,027.7       982.3       939.4         988.8
Maternal child health (298)  Σ       95,248      87,846      90,947      97,814       371,855    10.8%
                             µ        319.6       294.8       305.2       328.2         312.0
                             SD       303.8       276.1       297.1       303.6         295.3
Total (1,132)                Σ      876,373     874,523     852,676     832,821     3,436,393    100%
                             µ        774.2       772.6       753.6       735.7         758.9
                             SD       772.5       793.4       758.5       761.9         764.3

Abbreviations: SD, standard deviation; Σ, total orders; µ, mean orders per provider.


A repeated measures analysis of variance (ANOVA) was planned to detect changes in overall CPOE error rates from phase to phase. However, the data failed the necessary statistical assumption (Mauchly's test of sphericity), so the nonparametric equivalent, the Wilcoxon signed-rank test, was used instead. Given the consistent positive skew of the data and the nonparametric approach, we report results as medians rather than means. The findings consistently revealed no statistically significant change in overall CPOE error rates between the four phases: phase 1 (median = 0.000%, interquartile range [IQR] = 0.000–0.073) vs. phase 2 (median = 0.000%, IQR = 0.000–0.046), p = 0.585; phase 2 (median = 0.000%, IQR = 0.000–0.046) vs. phase 3 (median = 0.000%, IQR = 0.000–0.066), p = 0.406; and phase 3 (median = 0.000%, IQR = 0.000–0.066) vs. phase 4 (median = 0.000%, IQR = 0.000–0.059), p = 0.856 ([Table 2]).

Table 2

Longitudinal comparison of median CPOE error corrections by phase

Overall time point comparison                                                                    p-Value
Phase 1 (median = 0.000%, IQR = 0.000–0.073) vs. phase 2 (median = 0.000%, IQR = 0.000–0.046)    0.585
Phase 2 (median = 0.000%, IQR = 0.000–0.046) vs. phase 3 (median = 0.000%, IQR = 0.000–0.066)    0.406
Phase 3 (median = 0.000%, IQR = 0.000–0.066) vs. phase 4 (median = 0.000%, IQR = 0.000–0.059)    0.856

Abbreviations: CPOE, computerized provider order entry; IQR, interquartile range.
Note: Wilcoxon signed-rank test, α = 0.05.
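
A minimal sketch of such a paired comparison is shown below, with synthetic, zero-inflated data standing in for the per-provider error rates, which are not published.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_providers = 1132

# Synthetic per-provider CPOE error rates (%) for two phases: mostly zeros
# with a right-skewed tail, roughly mimicking the distributions reported.
phase1 = np.where(rng.random(n_providers) < 0.75, 0.0,
                  rng.exponential(0.08, n_providers))
phase2 = np.where(rng.random(n_providers) < 0.75, 0.0,
                  rng.exponential(0.08, n_providers))

# Paired nonparametric test, used because the data violated the sphericity
# assumption required by repeated-measures ANOVA.
stat, p = stats.wilcoxon(phase1, phase2)
print(f"Wilcoxon signed-rank: statistic = {stat:.1f}, p = {p:.3f}")
```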


Next, we stratified the data by specialty (EM, HM, and MCH) to assess the CPOE error rates across the four phases for each specialty group. We detected some small variation in the mean CPOE error rates within each group from phase to phase; however, these minor fluctuations were not statistically significant (Wilcoxon signed-rank test, α = 0.05). For EM providers, phase 1 vs. phase 2, p = 0.926; phase 2 vs. phase 3, p = 0.478; and phase 3 vs. phase 4, p = 0.822. For HM providers, phase 1 vs. phase 2, p = 0.922; phase 2 vs. phase 3, p = 0.921; and phase 3 vs. phase 4, p = 0.964. For MCH providers, phase 1 vs. phase 2, p = 0.080; phase 2 vs. phase 3, p = 0.322; and phase 3 vs. phase 4, p = 0.819 ([Table 3]).

Table 3

Longitudinal comparison of median CPOE error corrections by specialty provider group and phase

Provider                Median comparison                                                            p-Value
Emergency medicine      Phase 1 = 0.000% (IQR = 0.000–0.203) vs. phase 2 = 0.000% (IQR = 0.000–0.195)    0.926
                        Phase 2 = 0.000% (IQR = 0.000–0.195) vs. phase 3 = 0.000% (IQR = 0.000–0.197)    0.478
                        Phase 3 = 0.000% (IQR = 0.000–0.197) vs. phase 4 = 0.000% (IQR = 0.000–0.207)    0.822
Hospital medicine       Phase 1 = 0.000% (IQR = 0.000–0.052) vs. phase 2 = 0.000% (IQR = 0.000–0.044)    0.922
                        Phase 2 = 0.000% (IQR = 0.000–0.044) vs. phase 3 = 0.000% (IQR = 0.000–0.045)    0.921
                        Phase 3 = 0.000% (IQR = 0.000–0.045) vs. phase 4 = 0.000% (IQR = 0.000–0.009)    0.964
Maternal child health   Phase 1 = 0.000% (IQR = 0.000–0.000) vs. phase 2 = 0.000% (IQR = 0.000–0.000)    0.080
                        Phase 2 = 0.000% (IQR = 0.000–0.000) vs. phase 3 = 0.000% (IQR = 0.000–0.000)    0.322
                        Phase 3 = 0.000% (IQR = 0.000–0.000) vs. phase 4 = 0.000% (IQR = 0.000–0.000)    0.819

Abbreviations: CPOE, computerized provider order entry; IQR, interquartile range.
Note: Wilcoxon signed-rank test, α = 0.05.


Finally, the Kruskal–Wallis test was applied to assess possible differences in charting errors between the three specialty groups within each phase. We detected statistically significant differences in CPOE error rates among the three groups in every phase (p < 0.001). EM providers had a significantly higher percentage of CPOE errors than HM and MCH providers in each phase. Comparing the percentage of charting errors between HM and MCH providers in each phase produced mixed results: in phases 1 and 3, MCH providers had a statistically significantly higher percentage of charting errors than HM providers, whereas the opposite was observed in phases 2 and 4 ([Table 4]).

Table 4

Comparison of median CPOE error percentages between specialty provider groups within each phase

Comparison (all group medians = 0.000%)   Phase 1 p-Value   Phase 2 p-Value   Phase 3 p-Value   Phase 4 p-Value
E vs. H                                   <0.001[a]         <0.001[a]         <0.001[a]         <0.001[a]
H vs. M                                   <0.001[a]         <0.001[a]         <0.001[a]         <0.001[a]
E vs. M                                   <0.001[a]         <0.001[a]         <0.001[a]         <0.001[a]

Abbreviation: CPOE, computerized provider order entry.
Note: E = emergency medicine, H = hospital medicine, M = maternal child health. Median CPOE error percentages were 0.000% for every group in every phase.

a Statistically significant difference using Kruskal–Wallis test, α = 0.05.
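
A minimal sketch of this omnibus comparison follows, again with synthetic data sized to the three cohorts in place of the unpublished per-provider rates.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic within-phase error rates (%) for the three independent groups,
# sized to match the study cohorts; the spreads are illustrative only.
em  = rng.exponential(0.10, 381)   # emergency medicine
hm  = rng.exponential(0.03, 453)   # hospital medicine
mch = rng.exponential(0.01, 298)   # maternal child health

# Kruskal-Wallis tests whether at least one group's distribution differs.
h_stat, p = stats.kruskal(em, hm, mch)
print(f"Kruskal-Wallis: H = {h_stat:.1f}, p = {p:.3g}")
```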




Discussion

The safety expert's job is to safeguard patients from potential harm, which is essential in providing quality health care. While it is prudent to be concerned about the possibility of multiple-chart access propagating CPOE errors, the data in support of that concern are limited.[11] [12] [13] [14] [15] [16] [17] [18] [19] This study revealed no significant change in CPOE errors within specialty groups more likely to use multiple-chart open functionality (EM, HM, and MCH) despite an unanticipated reduction in the number of charts accessible to them simultaneously.

This is not to imply that patient identification safeguards, such as displaying patient charts with uniquely colored tabs or patient photos, should be discontinued, as these features may be instrumental in reducing charting errors further.[6] [7] [8] Routine audit reports and dashboards are also prudent means of surveilling for anomalous increases in charting errors. Such awareness may suggest the need for global or selective retraining, and possibly for limiting the number of open charts for clinical domains or providers with higher error rates.
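
As one illustrative form such surveillance could take (a sketch, not a description of the authors' tooling), an audit job might flag any week whose error rate exceeds a baseline mean by three standard deviations.

```python
import statistics

def flag_anomalous_weeks(weekly_rates, baseline_weeks=12, z=3.0):
    """Return indices of weeks whose CPOE error rate exceeds baseline + z*SD.

    `weekly_rates` is an ordered sequence of weekly error rates; the first
    `baseline_weeks` values define the baseline (an illustrative rule).
    """
    baseline = weekly_rates[:baseline_weeks]
    threshold = statistics.mean(baseline) + z * statistics.stdev(baseline)
    return [i for i, rate in enumerate(weekly_rates[baseline_weeks:],
                                       start=baseline_weeks)
            if rate > threshold]

# Example: a spike in week 14 would be flagged for review.
rates = [0.05, 0.06, 0.04, 0.05, 0.06, 0.05,
         0.04, 0.05, 0.06, 0.05, 0.04, 0.05,
         0.06, 0.05, 0.40]
print(flag_anomalous_weeks(rates))  # -> [14]
```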

The consistent and significant differences in CPOE error rates between our three specialty groups are an invitation to explore the impact of other factors on erroneous order entry. Specifically, MCH providers had significantly fewer error corrections than EM providers across all four phases. These providers placed the highest number of orders per provider, per day, with the highest use of prepared order sets. Perhaps this suggests that order-set-driven CPOE reduces the likelihood of charting errors.

The higher rate of charting errors observed among EM providers is thought to be associated with environmental and workflow factors: EM providers regularly shift their attention among an ever-changing array of patients in a busy setting punctuated by intermittent distractions.[19] EM providers may also be more vigilant about detecting and correcting erroneous orders than MCH providers. Further research might involve surveys or unobtrusive observation of chart access practices among these groups of providers.

Limitations and Bias

This study had several limitations, the impact of which is unknown. Although we can account for the maximum number of charts the system allowed providers to have open simultaneously, the system does not journal the actual number of charts a provider had open at any given time, limiting our precision. We believe this limitation is offset by the decision to restrict our study cohort to specialty providers who actively lobbied to obtain four-chart access (EM and HM) and to MCH providers, who are provisioned with four-chart access to manage women delivering multiple babies. This, together with a sample size of 1,132 providers, should have afforded us sufficient power to detect a difference in median error rates, if one existed. However, the methodology for directly calculating power in a time series analysis remains a matter of differing opinions.[20] [21] [22] A point of consensus seems to be the need for a minimum sample size of 50 for each point in the time series, which this study greatly exceeds.

Additionally, under our operational definition of a CPOE error (an order entered for patient A, canceled, then entered for patient B), a provider who makes and corrects such an error would paradoxically appear to make more errors than a provider who erroneously enters an order meant for another patient and never corrects it. Also, our query searched for incidents in which a provider entered an order in one patient's chart and, within 15 hours, canceled that order and entered it into a different patient's chart. While we found few outliers around 14 hours, and none at 15 hours, it is possible that some such charting errors (and corrections) span beyond 15 hours, perhaps days later.

Crossover, while considered rare, is another potential source of bias: a provider could discontinue, void, or cancel an order and place that same exact order on another patient's chart within 10 minutes as a legitimate new order rather than as the correction of a charting error.

Finally, the 2-week window during which providers were restricted to one-chart access is a relatively short period. We wondered whether the results would have been sustained over a longer period, once the newness of the change wore off, vigilance dropped, and complacency set in. Since the timing of the events in this study was beyond our control, it is not possible to answer that question. Replication with a larger sample size would be valuable in furthering our understanding of this phenomenon.

On a positive note, a bias more commonly associated with trials, the Hawthorne effect (behavior change in response to known study participation), was absent here. The providers in this study were given less than 24 hours' notice of the reduction in chart access and were unaware that data on error rates might be collected and reported.



Conclusion

Based on our findings, we did not reject the null hypothesis (H0: the number of charts a provider can open simultaneously has no impact on CPOE error rates). EM, HM, and MCH providers did not experience a change in CPOE error rates when switched between one-chart-only and four-chart simultaneous access. Our findings therefore suggest that there is no basis for constraining the number of charts these providers can access simultaneously to fewer than four. Significant differences in CPOE error rates between specialties suggest that other factors, such as the use of standardized order sets, charting methods, and workflow variations, should be further assessed.



Clinical Relevance Statement

The recommendation to restrict physicians to single-chart access within the EHR is a conservative position steeped more in opinion than in data. It gives little weight to the danger inherent in restricting access when a physician's workflow entails managing multiple patients with the risk of interruption during critical tasks. The decision to allow one, two, three, or four charts to be open simultaneously should be informed by data, with the goal of balancing provider workflow needs against the prevention of erroneous order entry.



Multiple Choice Questions

  1. The decision to allow multiple-chart access should be based on a balance between:

    a. Provider location and clinical decision support.

    b. Provider workflow and error prevention.

    c. Legal claims and provider specialty.

    d. Scribe usage and speech recognition software.

    Correct Answer: The correct answer is option b.

  2. In this study the association between the number of charts a provider can open simultaneously and their rate of erroneous order entry is best described as:

    a. Highly correlated.

    b. Inversely correlated.

    c. Not correlated.

    d. Weakly correlated.

    Correct Answer: The correct answer is option c.



Conflict of Interest

None declared.

Acknowledgments

The authors wish to express gratitude to Shez Partovi, MD, CM, for encouraging the analysis of the data from this event and Tim Melden, RN, MS, for providing analytical resources and coordinating the initial team meetings for this project.

Protection of Human and Animal Subjects

This study was submitted to the Dignity Health Research Integrity Office (eIRB) and deemed a continuous quality improvement project sponsored by Dignity Health and conducted by Health Informatics. As such, it was exempt from IRB review.


