Appl Clin Inform 2022; 13(03): 656-664
DOI: 10.1055/a-1854-4253
Research Article

Displaying Cost and Completion Time for Reference Laboratory Test Orders—A Randomized Controlled Trial

Shohei Ikoma (1), Logan Pierce (2), Douglas S. Bell (3), Eric M. Cheng (4), Thomas Drake (5), Rong Guo (6), Alyssa Ziman (7)

1 Department of Pathology, Keck School of Medicine, University of Southern California, Los Angeles, California, United States
2 Division of Hospital Medicine, University of California San Francisco, San Francisco, California, United States
3 Division of General Internal Medicine, Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, California, United States
4 Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, California, United States
5 Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, California, United States
6 Department of Medicine Statistics Core, David Geffen School of Medicine, University of California, Los Angeles, California, United States
7 Wing-Kwai and Alice Lee-Tsing Chung Transfusion Service, Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, California, United States
 

Abstract

Objectives Reduction in unnecessary services is one strategy for increasing the value of health care. Reference laboratory, or send-out, tests are associated with considerable costs. We investigated whether displaying cost and turnaround time (TAT), or time-to-result, for reference laboratory tests at the time of order entry in the electronic health record (EHR) system would impact provider ordering practices.

Methods Reference laboratory test cost and TAT data were randomized prior to the study and only displayed for the intervention group. A 24-month dataset composed of 12 months each for baseline and study periods was extracted from the clinical data mart. A difference-in-differences (DID) analysis was conducted using a linear mixed-effects model to estimate the association between the intervention and changes in test-ordering patterns.

Results In the inpatient setting, the DIDs of aggregate test-order costs and volume did not differ between the control and intervention groups (p = 0.31 and p = 0.26, respectively). In the ambulatory setting, the DIDs of aggregate test-order costs and volume likewise did not differ between the control and intervention groups (p = 0.82 and p = 0.51, respectively). For both inpatient and ambulatory settings, no significant difference was observed in the DIDs of aggregate test-order costs and volumes calculated with respect to stratified relative cost and TAT groups (p > 0.05).

Conclusion Lack of alternative tests, test orders placed at a late step in patient management, and orders facilitated by trainees or mid-level providers may have limited the efficacy of the intervention. Our randomized study demonstrated no significant association between cost or TAT display and ordering frequency.



Background and Significance

Clinical decision support (CDS) tools expand the capability of electronic health record (EHR) systems and enable health organizations to deliver dynamic, efficient, patient-centered care. Fueled by the increasing demand for cost-effective care, CDS tools implemented through EHR customization have become increasingly common as a means of optimizing test utilization. These tools have proven effective in encouraging judicious use of diagnostic tests by combining education, audit, and feedback.[1]

Increased cost transparency, however, has shown minimal or mixed effects. A 2016 systematic review by Silvestri et al found that 8 of 12 studies implementing laboratory test cost display showed a statistically significant reduction in aggregate order costs and/or order volume.[2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] Of the eight studies that showed significant reductions, only three were randomized, either at the level of individual tests or clustered by clinic session or provider team.[4] [7] [13] In 2017, the PRICE randomized clinical trial addressed the limitations of the studies identified in Silvestri et al's review: analyzing the effect of cost display on 60 common inpatient tests performed for over 140,000 hospital admissions, it demonstrated that EHR-enabled display of Medicare-allowable reimbursement costs failed to alter the test-ordering behavior of providers.[14] Schmidt et al subsequently published a similar 2-year randomized controlled trial encompassing both inpatient and ambulatory settings, implementing charge display for 254 laboratory tests (97% of the available tests); again, charge display showed no significant effect on test-ordering volume.[15]

We have designed a randomized study that specifically targets laboratory tests that are sent to an external reference laboratory. We investigated whether displaying cost and turnaround time (TAT), or time-to-result, at the time of order entry in the EHR system would impact provider ordering practices.



Methods

Study Setting

The study was conducted at the University of California, Los Angeles (UCLA) Health System, which comprises two academic teaching hospitals and over 200 medical practices throughout Southern California. Our study team consisted of physician informaticists and laboratory leadership. The project was approved by the health information technology (IT) governance groups, and the institutional review board approved it as an exempt human subjects research study because it was viewed as a quality-improvement project.

All clinical personnel use UCLA's Epic EHR system (Epic Systems, Verona, Wisconsin, United States). Reference laboratory test orders are received and processed by the central laboratory. In the calendar year 2018 prior to the study period, there were 184,525 orders for 981 unique reference laboratory tests, resulting in $14 million in costs to patients and the health system.

We chose a symmetrical 24-month dataset composed of 12 months each for baseline and study periods to account for possible seasonal and temporal variations, e.g., influx of new clinical providers in July, in test-ordering patterns.



Implementation

In the baseline period, reference laboratory test orders were not identified as such, nor was any information about cost or TAT readily available in the EHR at the time of order entry. We obtained the fee schedule and the corresponding TAT data for the contracted external laboratory agencies. The naming convention of the intervention (display change) group was modified to include: the phrase “send-out,” to indicate reference laboratory testing; relative cost, categorized into four quartile-based groups and expressed in dollar signs ranging from “$” to “$$$$” ($ = $3.50–49.84, $$ = $50.00–149.60, $$$ = $150.00–442.00, $$$$ = $460.00–3,500.00); and TAT in days, in both inpatient and ambulatory settings. Absolute cost was also included in the inpatient setting. Only relative cost was shown in the ambulatory setting because a patient's insurance plan may designate a contracted laboratory that differs from the vendor contracted with UCLA Health. For statistical analysis, we collapsed TAT into three groups (G1 = 1–2 days, G2 = 3–5 days, G3 > 5 days). For example, the order for a meningoencephalitis PCR panel was changed from “Meningoencephalitis, CSF” to “Meningoencephalitis, CSF (Send-out, $X, ($$$$), 10 days),” where “X” represented the contracted price ([Fig. 1]). The display change was implemented on department and facility preference lists of orders as well as in order sets. Of note, users can create their own personal preference lists and rename any entry on them. The display of cost and TAT was suppressed from the after-visit summary report given to patients after an encounter.
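To make the renaming rule concrete, the following is a minimal sketch in Python of the mapping from contracted cost and TAT to the displayed order name. The function names and the threshold handling at $50/$150/$450 are our illustration of the quartile boundaries above, not the production Epic configuration.

```python
# Minimal sketch of the display-name rule described above; function names
# and exact threshold handling are illustrative, not the production Epic build.

def cost_tier(cost: float) -> str:
    """Map a contracted cost to its quartile label ($ to $$$$)."""
    if cost < 50.00:       # $3.50-$49.84
        return "$"
    if cost < 150.00:      # $50.00-$149.60
        return "$$"
    if cost < 450.00:      # $150.00-$442.00 (no test priced between $442 and $460)
        return "$$$"
    return "$$$$"          # $460.00-$3,500.00

def display_name(base_name: str, cost: float, tat_days: int, inpatient: bool) -> str:
    """Build the intervention-arm order name; absolute cost is shown only inpatient."""
    if inpatient:
        return f"{base_name} (Send-out, ${cost:,.2f}, ({cost_tier(cost)}), {tat_days} days)"
    return f"{base_name} (Send-out, {cost_tier(cost)}, {tat_days} days)"

# Example mirroring Fig. 1:
print(display_name("Meningoencephalitis, CSF", 2435.00, 10, inpatient=True))
```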

Fig. 1 Order window displaying reference laboratory cost and turnaround time. (Meningoencephalitis, CSF panel, 2021; Reproduced with permission from Epic Systems Corporation.)

We adopted the randomization method from the PRICE trial to assign tests to either the intervention or control group.[14] First, we examined the test menu to identify redundant tests. For example, we consolidated allergen-specific immunoglobulin E tests that are separately listed by antigen type within the computerized physician order entry (CPOE) interface. In addition, we grouped similar laboratory tests performed for the same indication, such as a mercury test performed on different specimen types (e.g., urine and serum). The resulting 667 tests were stratified by order volume and cost. We used a random number generator to assign the tests to either control or intervention groups in three steps: (1) randomize the top quartile of high-volume tests; (2) of the remaining high-volume tests, randomize the ones that belong to the top quartile of the high-cost tests; and (3) randomize the remaining tests (a sketch of this procedure follows). The intervention and control groups consisted of 331 and 336 tests, respectively. Test distribution by dollar signs and TAT is summarized in [Table 1]. A complete list of included tests is found in [Supplementary Table S1] (available in the online version).
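The three-step assignment can be sketched as follows. This is a hypothetical re-implementation: the paper does not publish its generator code, and the field names, seed, and quartile handling are assumptions for illustration.

```python
# Hypothetical sketch of the three-step stratified randomization described
# above; field names, seed, and quartile handling are illustrative assumptions.
import random

random.seed(42)  # assumed seed for reproducibility; not from the study

def assign(tests):
    """tests: list of dicts with 'name', 'volume', and 'cost' keys.
    Returns (intervention, control) lists following the three strata."""
    by_volume = sorted(tests, key=lambda t: t["volume"], reverse=True)
    n = len(by_volume)
    step1 = by_volume[: n // 4]          # step 1: top quartile by order volume
    rest = by_volume[n // 4:]
    cost_cut = sorted((t["cost"] for t in tests), reverse=True)[n // 4]
    step2 = [t for t in rest if t["cost"] >= cost_cut]   # step 2: remaining high-cost tests
    step3 = [t for t in rest if t["cost"] < cost_cut]    # step 3: everything else
    intervention, control = [], []
    for stratum in (step1, step2, step3):
        random.shuffle(stratum)          # random 1:1 split within each stratum
        half = len(stratum) // 2
        intervention.extend(stratum[:half])
        control.extend(stratum[half:])
    return intervention, control
```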

Table 1 Configuration of test randomization and assignment, by cost and TAT group

| Cost group | Range | Control group, assigned tests | Study group, assigned tests |
| $ | $4–50 ($4 min.) | 214 | 202 |
| $$ | $51–150 | 68 | 78 |
| $$$ | $151–450 | 40 | 36 |
| $$$$ | $451+ ($3,500 max.) | 14 | 13 |

| TAT group | Range | Control group, assigned tests | Study group, assigned tests |
| G1 | 1–2 days | 131 | 110 |
| G2 | 3–5 days | 130 | 114 |
| G3 | >5 days | 75 | 105 |

Abbreviation: TAT, turnaround time.


The intervention was implemented in the first week of November 2018, and the display name change was announced to all providers by e-mail from the Chief Medical and Quality Officer on November 8, 2018. No training or further information on the modified CPOE interface was provided. Multiple stakeholders from clinical and IT departments were involved in the planning and implementation of the project. This project was considered a pilot, intended to gauge the potential effect on reference laboratory tests alone and to determine the usability and appropriateness of the display in preparation for system-wide implementation of cost and TAT data on all available laboratory tests.



Data Extraction

Test-ordering data were extracted from the clinical data mart. The following variables associated with test orders were obtained: order date and time, order setting (ambulatory or inpatient), order number, accession number, ordering physician, authorizing physician, and patient age and sex. Extracted data excluded tests that had been performed at reference laboratories but later brought in-house during the study period. Examples included “Folate, RBC” and “Folate, serum.” Tests that were not discrete orderables during the entire study period were also excluded.
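In pandas terms, the exclusion filters might look like the following sketch. The file, table, and column names are illustrative assumptions, not the actual data mart schema, and the final filter is only a rough proxy for orderable status.

```python
# Hypothetical sketch of the extraction filters described above; the file
# and column names are illustrative assumptions, not the data mart schema.
import pandas as pd

orders = pd.read_csv("data_mart_orders.csv", parse_dates=["order_datetime"])

# Exclude tests brought in-house during the study period,
# e.g., "Folate, RBC" and "Folate, serum".
brought_in_house = {"Folate, RBC", "Folate, serum"}
orders = orders[~orders["test_name"].isin(brought_in_house)]

# Rough proxy for "discrete orderable during the entire study period":
# keep tests with at least one order in both the baseline and study windows.
orders["period"] = orders["order_datetime"].lt("2018-11-01").map(
    {True: "baseline", False: "study"})
counts = orders.groupby(["test_name", "period"]).size().unstack(fill_value=0)
kept = counts[(counts["baseline"] > 0) & (counts["study"] > 0)].index
orders = orders[orders["test_name"].isin(kept)]
```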



Outcome Measurements and Statistical Analysis

The baseline and study periods ran from November 1, 2017 to October 31, 2018 and from November 1, 2018 to October 31, 2019, respectively. A difference-in-differences (DID) analysis was conducted using a linear mixed-effects (LME) model to examine the association between the intervention and changes in test-ordering patterns. We treated individual tests as the experimental units. For each test, order volume and aggregate cost were calculated for the baseline and study periods. The LME model was selected specifically to account for the correlation between the baseline and postintervention measurements of individual tests. Fixed effects were group assignment (control vs. intervention), intervention status (baseline vs. intervention), and the two-way interaction between group assignment and intervention status. Inpatient and ambulatory data were analyzed separately. The primary and secondary outcomes were changes in aggregate order costs and order volume, respectively.
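As a sketch of this model in code: the study's analyses were run in SAS 9.4, so the Python/statsmodels analogue below, including its data layout and column names, is our approximation rather than the authors' program.

```python
# Approximate statsmodels analogue of the DID linear mixed-effects model;
# the study's analysis was run in SAS 9.4, and this data layout is assumed.
import pandas as pd
import statsmodels.formula.api as smf

# One row per test per period: test_id, group ("control"/"intervention"),
# period ("baseline"/"study"), and the per-period volume and aggregate cost.
df = pd.read_csv("orders_by_test_period.csv")  # hypothetical extract

# The random intercept per test captures the baseline/post correlation;
# the group:period interaction coefficient is the DID estimate.
model = smf.mixedlm("cost ~ group * period", data=df, groups=df["test_id"])
result = model.fit()
print(result.summary())  # the group:period row carries the DID and its p-value
```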

We modified the LME model by adding a three-way interaction term to determine whether the DID differed among relative cost and TAT groups. In other words, the three-way interaction term allowed us to estimate the differential effect of the intervention on the two outcomes by relative cost and TAT group. We dichotomized the relative cost groups into $ and $$–$$$$; because approximately 75% of tests fell in the $ group, the $$–$$$$ groups were consolidated to equalize the cost distribution. TAT was dichotomized at 5 days into G1–G2 and G3 to mirror the configuration used by Fang et al, who examined the association between the TAT display of send-out tests and the test-ordering behavior of providers in the inpatient setting.[3] Costs were held constant in our analysis so that price changes occurring over the course of the study could not account for potential differences.

We fit two separate modified LME models for each outcome in the inpatient and ambulatory settings. Both models included all fixed and random effects in addition to the pairwise two-way interactions and a three-way interaction. The first LME model included test status (control vs. intervention), study phase (baseline vs. intervention), and dichotomized cost group ($ vs. $$–$$$$) as independent variables. The second included test status, study phase, and dichotomized TAT group (G1–G2 vs. G3) as independent variables.
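Continuing the hypothetical layout from the earlier sketch, the two modified models add the dichotomized factor and its interactions, with the three-way term carrying the differential effect. The column names here are assumptions, not the study's variables.

```python
# Sketch of the modified models; cost_tier2 ("$" vs. "$$-$$$$") and
# tat2 ("G1-G2" vs. "G3") are assumed column names, not the study's variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("orders_by_test_period.csv")  # hypothetical extract, as above
m_cost = smf.mixedlm("cost ~ group * period * cost_tier2",
                     data=df, groups=df["test_id"]).fit()
m_tat = smf.mixedlm("volume ~ group * period * tat2",
                    data=df, groups=df["test_id"]).fit()
# In patsy formulas, a*b*c expands to all main effects, every pairwise
# interaction, and the three-way term; a nonsignificant three-way coefficient
# means the intervention effect did not differ across the dichotomized groups.
```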

All analyses were conducted with SAS software version 9.4 (SAS Institute, Inc., Cary, North Carolina, United States). A p-value of ≤0.05 was considered statistically significant.



Results

A total of 46,249 inpatient (baseline: 24,454; intervention: 21,795) and 254,826 ambulatory (baseline: 125,774; intervention: 129,052) tests were performed by a reference laboratory during the study period. No significant association was observed between the intervention and test-ordering patterns.

For the inpatient population, the aggregate test-order volume decreased from 11,259 to 9,538 (decrease of 15.29%) for the intervention group while that for the control group decreased from 13,195 to 12,257 (decrease of 7.11%). The DID was −8.18%. Aggregate order costs decreased from $965,561 to $755,753 (decrease of 21.73%) for the intervention group while that for the control group decreased from $563,361 to $513,134 (decrease of 8.92%). The DID was −12.81%. The result is summarized in [Fig. 2].
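These aggregate percentages can be verified with simple arithmetic, as in the back-of-the-envelope check below; note that the study's inference operates on per-test data through the LME models, not on these raw totals.

```python
# Back-of-the-envelope check of the reported inpatient aggregates; the
# study's inference uses per-test LME models, not these raw totals.
def pct_change(before: float, after: float) -> float:
    return (after - before) / before * 100

vol_did = pct_change(11_259, 9_538) - pct_change(13_195, 12_257)
cost_did = pct_change(965_561, 755_753) - pct_change(563_361, 513_134)
print(f"volume DID: {vol_did:+.2f}%")   # -> -8.18%
print(f"cost DID:   {cost_did:+.2f}%")  # -> -12.81%
```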

Fig. 2 Aggregate effect of intervention on reference laboratory test-order volume and cost, inpatient, baseline, and study periods.

In the ambulatory setting, aggregate test-order volume increased from 67,348 to 69,823 (increase of 3.67%) for the intervention group and from 58,426 to 59,229 (increase of 1.37%) for the control group. The DID was 2.30%. Aggregate order costs increased from $2,787,218 to $2,802,222 (increase of 0.54%) for the intervention group and from $1,998,472 to $2,039,968 (increase of 2.08%) for the control group. The DID was −1.54%. The result is summarized in [Fig. 3].

Fig. 3 Aggregate effect of intervention on reference laboratory test-order volume and cost, ambulatory, baseline, and study periods.

In the inpatient setting, the DIDs of aggregate test-order cost and volume were not significant, showing decreases of $570.87 (95% confidence interval [CI]: −1,667.77 to 526.02; p = 0.31) and 2.57 orders (95% CI: −7.02 to 1.88; p = 0.26), respectively. In the ambulatory setting, the DIDs of aggregate test-order cost and volume were likewise not significant, showing a decrease of $87.21 (95% CI: −841.91 to 667.49; p = 0.82) and an increase of 5.36 orders (95% CI: −10.55 to 21.27; p = 0.51), respectively.

None of the three-way interaction terms of the modified LME models were significant (p > 0.05). The test-order volume and aggregate order cost distributions by dollar sign and TAT group, as well as the results of the LME model analysis, are summarized in [Tables 2] and [3]. Patient demographics are summarized in [Supplementary Table S2] (available in the online version). The 10 tests with the largest aggregate order costs and volumes in each study period are presented in [Supplementary Table S3] (available in the online version).

Table 2 Change in aggregate test-order cost by cost and TAT groups

Inpatient (aggregate order cost, $)

| Cost group | Control, baseline | Control, study | Change, % | Intervention, baseline | Intervention, study | Change, % | Difference-in-differences, % |
| $ | 128,641.83 | 120,351.62 | −6.44 | 165,546.38 | 143,067.09 | −13.58 | −7.13 |
| $$ | 182,142.73 | 181,496.03 | −0.36 | 157,526.52 | 102,392.33 | −35.00 | −34.64 |
| $$$ | 157,956.14 | 153,894.88 | −2.57 | 169,873.79 | 159,549.81 | −6.08 | −3.51 |
| $$$$ | 94,620.66 | 57,391.16 | −39.35 | 472,614.34 | 350,743.47 | −25.79 | 13.56 |

| TAT group | Control, baseline | Control, study | Change, % | Intervention, baseline | Intervention, study | Change, % | Difference-in-differences, % |
| G1 | 44,280.34 | 48,281.94 | 9.04 | 82,667.26 | 70,303.84 | −14.96 | −23.99 |
| G2 | 234,858.27 | 230,443.81 | −1.88 | 238,400.40 | 210,176.27 | −11.84 | −9.96 |
| G3 | 284,222.75 | 234,407.94 | −17.53 | 644,493.37 | 475,272.59 | −26.26 | −8.73 |

| Total | 563,361.36 | 513,133.69 | −8.92 | 965,561.03 | 755,752.70 | −21.73 | −12.81 |

Inpatient total difference-in-differences, dollars (95% CI): −570.87 (−1,667.77 to 526.02); p-Value: 0.31

Ambulatory (aggregate order cost, $)

| Cost group | Control, baseline | Control, study | Change, % | Intervention, baseline | Intervention, study | Change, % | Difference-in-differences, % |
| $ | 640,906.94 | 680,482.12 | 6.17 | 988,836.25 | 1,046,588.57 | 5.84 | −0.33 |
| $$ | 740,666.62 | 738,755.39 | −0.26 | 582,989.90 | 615,184.59 | 5.52 | 5.78 |
| $$$ | 530,828.11 | 532,387.60 | 0.29 | 471,131.72 | 475,416.62 | 0.91 | 0.62 |
| $$$$ | 86,070.57 | 88,343.29 | 2.64 | 744,260.34 | 665,032.70 | −10.65 | −13.29 |

| TAT group | Control, baseline | Control, study | Change, % | Intervention, baseline | Intervention, study | Change, % | Difference-in-differences, % |
| G1 | 454,480.51 | 457,130.62 | 0.58 | 383,783.20 | 419,360.57 | 9.27 | 7.36 |
| G2 | 834,069.75 | 868,609.60 | 4.14 | 778,481.54 | 854,568.82 | 9.77 | 5.63 |
| G3 | 709,921.98 | 714,228.18 | 0.61 | 1,624,953.47 | 1,528,293.09 | −5.95 | −6.56 |

| Total | 1,998,472.24 | 2,039,968.40 | 2.08 | 2,787,218.21 | 2,802,222.48 | 0.54 | −1.54 |

Ambulatory total difference-in-differences, dollars (95% CI): −87.21 (−841.91 to 667.49); p-Value: 0.82

Abbreviations: CI, confidence interval; TAT, turnaround time.


Table 3 Difference-in-differences between dichotomized relative cost and turnaround time (TAT) groups

Inpatient

| Comparison | Mean order volume difference, No. (95% CI) | p-Value |
| $ vs. $$–$$$$ | −5.65 (−14.77 to 3.47) | 0.22 |
| G1–G2 vs. G3 | −3.63 (−13.62 to 6.35) | 0.48 |

| Comparison | Mean aggregate cost difference, dollars (95% CI) | p-Value |
| $ vs. $$–$$$$ | −1,255.82 (−3,498.14 to 986.50) | 0.27 |
| G1–G2 vs. G3 | −938.30 (−3,390.01 to 1,513.41) | 0.45 |

Ambulatory

| Comparison | Mean order volume difference, No. (95% CI) | p-Value |
| $ vs. $$–$$$$ | 0.70 (−32.86 to 34.26) | 0.97 |
| G1–G2 vs. G3 | −8.57 (−44.88 to 27.74) | 0.64 |

| Comparison | Mean aggregate cost difference, dollars (95% CI) | p-Value |
| $ vs. $$–$$$$ | −497.86 (−2,088.03 to 1,092.32) | 0.54 |
| G1–G2 vs. G3 | −1,442.34 (−3,159.57 to 274.90) | 0.10 |

Abbreviation: CI, confidence interval.




Discussion

Our study of randomized tests evaluated the effect of displaying cost and TAT data on provider ordering practices. A slight reduction in both test-order volumes and aggregate order costs was observed in the inpatient setting, but not in the ambulatory setting. Despite these changes, the DIDs of all outcomes, regardless of clinical setting, were not statistically significant. In addition, there was no significant difference in the DID among relative cost or TAT groups.

Our aim was to study the possible effect of cost and TAT display on reference laboratory test-ordering patterns because these tests tend to be discretionary, especially in the inpatient setting, when compared with routine laboratory tests performed by on-site laboratories. Reference laboratory tests are high-complexity, esoteric, low-volume tests that are sent to an external laboratory agency, which charges a contracted fee for every sample. Compared with routine laboratory tests performed on-site, these tests often have longer TATs because of increased transport time to the external agency, lower volume, and batch testing. They may also be more costly, as they often require highly specialized instruments and analytic techniques. TAT is defined as the time elapsed between receipt of a specimen in the laboratory and availability of the result. The effects of TAT display on physician ordering, especially in the ambulatory setting, have not been well documented; to date, only one nonrandomized study has examined the effect of both cost and TAT display on reference laboratory test orders in the inpatient setting.[3] Our study also contrasts with Schmidt et al's charge transparency study, a long-term randomized controlled trial.[15] Although their study encompassed 97% of available tests in both inpatient and ambulatory settings, it did not distinguish send-out from in-house laboratory tests. Because the use of send-out tests tends to be discretionary, we hypothesized that their order metrics would respond differently to a CPOE modification than those of in-house tests. Accordingly, we designed our study to focus exclusively on send-out laboratory tests.

In addition, the number of send-out tests has risen significantly in recent years with the advent of highly complex microbiology, genetic, and molecular tests. An 8-year longitudinal study by MacMillan et al showed that reference laboratory expenses increased 4.2-fold, reaching 12.4% of the total laboratory budget by the end of the study period.[16] The same study noted that the average unit cost of send-out tests was approximately 13 times greater than that of in-house tests. Krasowski et al required attending physician approval for 170 send-out tests; orders decreased by 23%, yielding savings of $600,000 annually.[17] Given these data, we speculated that reference laboratory testing may be overutilized and that an intervention targeting send-out tests would lead to cost savings.

Our study differed from previous studies in several respects. Most distinctively, it was a randomized controlled trial that evaluated the differential effect of the display modification between the control and intervention groups. In addition, it encompassed the entire library of reference laboratory tests and analyzed ordering patterns in both inpatient and ambulatory settings. Compared with Fang et al's study, which had a design similar to ours, we demonstrated a contrasting result; we speculate that the following conditions may account for the difference.[3] First, Fang et al's analysis excluded tests whose costs and TAT exceeded $300 and 40 days, respectively, while ours did not. Second, our cost and TAT were displayed in the order execution screen after the provider selected the desired test from the list of candidates in the CPOE search box. Although this timing of the cost and TAT display was arguably better positioned to facilitate behavioral change than Fang et al's, our negative result further reinforces the observations of large randomized studies such as the PRICE trial.[14] A future intervention might deploy the data display in an alternate form of CDS and quantify the resulting cancellation rate. Third, Fang et al displayed the cost and TAT data as general ranges, while our inpatient display presented both exact costs and TATs. Of the tests that were not excluded from Fang et al's analysis, some showed TAT ranges exceeding 1 week (e.g., 3 to 12 days); providers may have opted to cancel their orders after seeing the upper limits of the TAT.[3]

Several factors may have contributed to the lack of a statistically significant difference between the DIDs of the intervention and control groups. First, test ordering occurs at a late step in providers' management of patients; the course of subsequent clinical actions hinges on the results of laboratory tests, and ordering physicians may not change their decisions based on the displayed information. Second, no alternatives to the reference laboratory tests were presented. Because providers were shown only two options, accepting or abandoning the original test order, most may have chosen simply to accept. Third, the clinical situation may not have permitted providers to modify their orders. In the inpatient setting, test orders are often entered by a resident physician or mid-level provider based on an attending physician's instructions, and these ordering providers are less likely to make modifications even when cost and TAT data are presented. Fourth, a few tests made disproportionate contributions to order volumes and aggregate costs despite the multistep randomization that enabled balanced assignment of tests. For example, we expended $720,804 (baseline: $431,021; study: $289,782) on the meningoencephalitis CSF antibody test ($2,435 per order) alone. In the inpatient setting, this test accounted for 42% of the intervention group's aggregate order costs and for 60% of that group's aggregate monthly cost in November 2017. Likewise, vitamin B1 and zinc (serum or plasma) accounted for 27% of the control group's total test-order volume in the inpatient setting. No control measure for such outliers was implemented; instead, our study accounted for the potential impact of these high-cost, high-volume tests by analyzing both volume and aggregate costs. We included these tests because their presence is inherent in a standard laboratory testing menu.

CDS tools offer significant potential for optimizing the efficiency and safety of care delivery; they can improve adherence to evidence-based guidelines and provide cognitive assistance that reduces errors.[18] [19] [20] Despite the accumulating evidence, the weak result of our intervention highlights the challenges of designing and implementing an effective CDS tool that adequately addresses the five rights of CDS.[21] To the best of our knowledge, the superiority of an interruptive versus a noninterruptive intervention has not been well established. Hendrickson et al built an interruptive alert system that aimed to suppress unnecessary ordering of the 25-hydroxyvitamin D assay by searching for appropriate diagnoses in patient charts.[22] Although there are extensive data on successful implementations of CDS tools for curbing inappropriate vitamin D testing, nearly 90% of the displayed alerts were overridden. The study attributed this result to providers' ability to add diagnoses after laboratory test orders are placed and to a lack of force: the alerts were a soft stop with an educational message rather than a warning. In contrast, multiple sources have demonstrated the superiority of noninterruptive designs because they prevent the “alert fatigue” that leads to overriding.[23] [24] Escovedo et al implemented a silent CDS tool that reduced unnecessary ordering of respiratory viral panels for general populations of patients with respiratory tract infections; the number of orders significantly decreased after select synonyms were removed and appropriate indications appended to the test display in the CPOE search window.[24] These independent observations support the notion that an effective CDS tool requires a directive, action-oriented element. The timing of CDS activation is also critical to achieving the intended result; these studies emphasize that providers are unlikely to follow a recommendation that is not delivered at the right time.

Another design attribute that should be incorporated into a CDS is a feature that leverages context-sensitive EHR data. Kurant et al demonstrated that such data are especially useful for laboratory-based test utilization programs.[25] These programs often examine the data stored in laboratory information systems to identify concerning trends or issues but the granularity of data is often limited.[26] Because EHRs include multiple pathways for laboratory test ordering, such as CPOE, preference lists, and order sets, it is important to understand the context in which the orders in question were placed to devise an optimal strategy.[25] By determining whether orders originated from preference lists or CPOE's search box, Kurant's group was able to devise tailored interventions. Our future work will include a close analysis of order context data to test the validity of our initial hypothesis that ordering of reference laboratory tests is discretionary.

A novel approach that is not yet widely adopted but should be strongly considered for CDS development is formative usability testing. In Orenstein et al's study, existing blood product order sets were first modified by a multidisciplinary expert committee and then revised through extensive usability testing with providers using scenario-based feedback.[27] Persistent failures were still detected after the expert modification, but the adoption of user-centered design enabled a significant reduction in user errors. Accordingly, collecting and integrating user feedback is critical to the success of CDS tools. In our opinion, inclusion of the above features is necessary for future work on CDS tools for optimizing test utilization.

One challenge we faced during CPOE customization, which has not been well discussed in prior studies, was devising a design that maintains the TAT and cost data within the CPOE while shielding this information from patient charts, particularly patient-facing elements. Instead of altering the original test names, we circumvented this issue by entering the information into the institutional preference lists, which contain frequently used department-specific orderables. Although most providers use the default preference lists, we were unable to extend the changes to those who use custom preference lists because of limitations of the EHR system.

Our study adds to the growing evidence on the effect of cost and TAT display on laboratory test-ordering practices among clinical providers. However, work remains in helping patients make informed, cost-effective choices in their care. We have not addressed the difference between the cost displayed in the CPOE and what patients will actually pay; given the complex nature of the health care reimbursement system and insurance, quantifying true cost savings will present a formidable challenge. In addition, developing informational tools that enable shared decision making between providers and patients when ordering laboratory tests, especially in the ambulatory setting, would be beneficial.

This study has several limitations. First, there may be a contaminating effect due to randomization by test: providers exposed to cost data may adopt a more cost-conscious general ordering practice, and they may seek out cost and TAT information for control tests after seeing the intervention. Second, providers were not given feedback on their ordering practices or the resulting cost of care. Third, we did not assess changes in ordering practices by provider type. Because significant heterogeneity exists among ordering providers, it is difficult to ascertain whether a provider type-specific analysis would have led to a different result; a cursory comparison of ordering provider characteristics did not reveal distinct changes between the baseline and study periods. Fourth, the study did not account or control for changes in ordering patterns that may have resulted from changes to diagnostic, screening, or treatment guidelines. Fifth, we did not systematically assess for corollary phenomena driven by the intervention; it is unclear whether the reductions seen among tests in the intervention group caused concurrent increases in other tests, e.g., similar on-site tests. Sixth, the study did not adjust for patient comorbidities and diagnoses, and differences in demographic composition may have skewed the results. For future studies, adjustment for these variables using International Classification of Diseases codes may be desirable.



Conclusion

The presence of cost and TAT information at the time of reference laboratory test-order entry did not change test-order volume or aggregate cost in either the inpatient or the ambulatory setting, and a causal relationship between the display modification and test-order patterns could not be established. Accordingly, embedding cost and TAT data within reference laboratory test names in CPOE systems may, by itself, be of limited use for promoting patient-centered, cost-conscious care and improving resource utilization in either setting.



Clinical Relevance Statement

We designed a novel study targeting laboratory tests that are sent to an external reference laboratory. We specifically investigated whether displaying cost and TAT (time-to-result) at the time of order entry in the EHR system would impact provider ordering practices, because these tests tend to be discretionary, especially in the inpatient setting, when compared with routine laboratory tests performed by on-site laboratories. To the best of our knowledge, a cost transparency study of this scale and rigor has not previously been conducted, and it should enable readers to make an informed decision about implementing cost and TAT display in their own EHR systems.



Multiple Choice Questions

  1. Which of the following is the reason for displaying only the relative costs of reference laboratory tests ordered in the ambulatory setting?

    • Request from contracted reference laboratories.

    • Contractual obligation by patient's insurance plan.

    • Centers for Medicare & Medicaid Services (CMS) regulation.

    • Frequency of reference laboratory test orders in the ambulatory setting.

    Correct Answer: The correct answer is option b. Absolute cost was not included in the ambulatory setting because a patient's insurance plan may designate a contracted laboratory that differs from the vendor contracted with the study site.

  2. For which of the following EHR modules was the intervention deployed?

    • Secure provider messaging system.

    • Log-in splash screen.

    • CPOE interface.

    • Patient laboratory result.

    Correct Answer: The correct answer is option c. The cost and TAT data of reference laboratory tests were implemented in the CPOE interface.



Conflict of Interest

None declared.

Acknowledgment

We thank Kevin Baldwin from the Department of Health Information Technology; So Hee Kim and Shital Pandya from the Office of Health Informatics and Analytics (OHIA); and Jren Armon and Miriam Ramirez from the Beaker clinical pathology analyst team at Information Services & Solutions at UCLA Health.

Protection of Human and Animal Subjects

The institutional review board approved the project as an exempt human subjects research study because it was viewed as a quality-improvement project.


Supplementary Material

Supplementary Tables S1–S3 are available in the online version of this article.

References

  • 1 Eaton KP, Levy K, Soong C, et al. Evidence-based guidelines to eliminate repetitive laboratory testing. JAMA Intern Med 2017; 177 (12) 1833-1839
  • 2 Silvestri MT, Bongiovanni TR, Glover JG, Gross CP. Impact of price display on provider ordering: a systematic review. J Hosp Med 2016; 11 (01) 65-76
  • 3 Fang DZ, Sran G, Gessner D, et al. Cost and turn-around time display decreases inpatient ordering of reference laboratory tests: a time series. BMJ Qual Saf 2014; 23 (12) 994-1000
  • 4 Tierney WM, Miller ME, Overhage JM, McDonald CJ. Physician inpatient order writing on microcomputer workstations. Effects on resource utilization. JAMA 1993; 269 (03) 379-383
  • 5 Everett GD, deBlois CS, Chang P-F, Holets T. Effect of cost education, cost audits, and faculty chart review on the use of laboratory services. Arch Intern Med 1983; 143 (05) 942-944
  • 6 Nougon G, Muschart X, Gérard V, et al. Does offering pricing information to resident physicians in the emergency department potentially reduce laboratory and radiology costs? Eur J Emerg Med 2015; 22 (04) 247-252
  • 7 Feldman LS, Shihab HM, Thiemann D, et al. Impact of providing fee data on laboratory test ordering: a controlled clinical trial. JAMA Intern Med 2013; 173 (10) 903-908
  • 8 Horn DM, Koplan KE, Senese MD, Orav EJ, Sequist TD. The impact of cost displays on primary care physician laboratory test ordering. J Gen Intern Med 2014; 29 (05) 708-714
  • 9 Ellemdin S, Rheeder P, Soma P. Providing clinicians with information on laboratory test costs leads to reduction in hospital expenditure. S Afr Med J 2011; 101 (10) 746-748
  • 10 Schilling UM. Cutting costs: the impact of price lists on the cost development at the emergency department. Eur J Emerg Med 2010; 17 (06) 337-339
  • 11 Seguin P, Bleichner JP, Grolier J, Guillou YM, Mallédant Y. Effects of price information on test ordering in an intensive care unit. Intensive Care Med 2002; 28 (03) 332-335
  • 12 Bates DW, Kuperman GJ, Jha A, et al. Does the computerized display of charges affect inpatient ancillary test utilization? Arch Intern Med 1997; 157 (21) 2501-2508
  • 13 Tierney WM, Miller ME, McDonald CJ. The effect on test ordering of informing physicians of the charges for outpatient diagnostic tests. N Engl J Med 1990; 322 (21) 1499-1504
  • 14 Sedrak MS, Myers JS, Small DS, et al. Effect of a price transparency intervention in the electronic health record on clinician ordering of inpatient laboratory tests: the PRICE randomized clinical trial. JAMA Intern Med 2017; 177 (07) 939-945
  • 15 Schmidt RL, Colbert-Getz JM, Milne CK, et al. Impact of laboratory charge display within the electronic health record across an entire academic medical center: results of a randomized controlled trial. Am J Clin Pathol 2017; 148 (06) 513-522
  • 16 MacMillan D, Lewandrowski E, Lewandrowski K. An analysis of reference laboratory (send out) testing: an 8-year experience in a large academic medical center. Clin Leadersh Manag Rev 2004; 18 (04) 216-219
  • 17 Krasowski MD, Chudzik D, Dolezal A, et al. Promoting improved utilization of laboratory testing through changes in an electronic medical record: experience at an academic medical center. BMC Med Inform Decis Mak 2015; 15: 11
  • 18 Chan AJ, Chan J, Cafazzo JA, et al. Order sets in health care: a systematic review of their effects. Int J Technol Assess Health Care 2012; 28 (03) 235-240
  • 19 Ahmadian L, Khajouei R. Impact of computerized order sets on practitioner performance. Stud Health Technol Inform 2012; 180: 1129-1131
  • 20 Gartner D, Zhang Y, Padman R. Cognitive workload reduction in hospital information systems: decision support for order set optimization. Health Care Manage Sci 2018; 21 (02) 224-243
  • 21 Osheroff JA, Teich JM, Levick D, et al. Improving Outcomes with Clinical Decision Support: An Implementer's Guide. Chicago, IL: HIMSS Publishing; 2012
  • 22 Hendrickson CD, McLemore MF, Dahir KM, et al. Is the climb worth the view? The savings/alert ratio for reducing vitamin D testing. Appl Clin Inform 2020; 11 (01) 160-165
  • 23 Kawamoto K, Lobach DF. Clinical decision support provided within physician order entry systems: a systematic review of features effective for changing clinician behavior. AMIA Annu Symp Proc 2003; 2003: 361-365
  • 24 Escovedo C, Bell D, Cheng E, et al. Noninterruptive clinical decision support decreases ordering of respiratory viral panels during influenza season. Appl Clin Inform 2020; 11 (02) 315-322
  • 25 Kurant DE, Baron JM, Strazimiri G, Lewandrowski KB, Rudolf JW, Dighe AS. Creation and use of an electronic health record reporting database to improve a laboratory test utilization program. Appl Clin Inform 2018; 9 (03) 519-527
  • 26 Baron JM, Dighe AS. The role of informatics and decision support in utilization management. Clin Chim Acta 2014; 427: 196-201
  • 27 Orenstein EW, Boudreaux J, Rollins M, et al. Formative usability testing reduces severe blood product ordering errors. Appl Clin Inform 2019; 10 (05) 981-990

Address for correspondence

Alyssa Ziman, MD
757 Westwood Plaza, B403M RRMC, Los Angeles, CA 90095
United States   

Publication History

Received: 16 January 2022

Accepted: 11 May 2022

Accepted Manuscript online:
17 May 2022

Article published online:
06 July 2022

© 2022. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany
