Appl Clin Inform 2022; 13(04): 874-879
DOI: 10.1055/a-1913-4158
CIC 2021

Dig Deeper: A Case Report of Finding (and Fixing) the Root Cause of Add-On Laboratory Failures

Tyler Anstett
1   Division of Hospital Medicine, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States
,
Chris Smith
2   Division of Hospital Medicine, University of New Mexico School of Medicine, Albuquerque, New Mexico, United States
,
Kaitlyn Hess
3   UCHealth, Aurora, Colorado, United States
,
Luke Patten
4   Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States
,
Sharon Pincus
5   University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States
,
Chen-Tan Lin
6   Department of General Internal Medicine, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States
,
P. Michael Ho
7   Department of Medicine, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States
Funding None.
 

Abstract

Background Venipunctures and the testing they facilitate are clinically necessary, particularly for hospitalized patients. However, excess venipunctures lead to patient harm, decreased patient satisfaction, and waste.

Objectives We sought to identify contributors to excess venipunctures at our institution, focusing on electronic health record (EHR)-related factors. We then implemented and evaluated the impact of an intervention targeting one of the contributing factors.

Methods We employed quality improvement (QI) methodology to find sources of excess venipunctures, specifically targeting add-on failures. Once an error was identified, we deployed an EHR-based intervention, which was evaluated with a retrospective pre- and postintervention analysis.

Results We identified an error in how the EHR evaluated the ability of laboratories across a health system to perform add-on tests on existing blood specimens. A review of 195,263 add-on orders placed prior to the intervention showed that 165,118 were successful and 30,145 failed, a failure rate of 15.4% (95% confidence interval [CI]: 15.1–15.6). We implemented an EHR-based modification that changed the criteria for add-on testing from a health-system-wide query of laboratory capabilities to one that incorporated only the capabilities of laboratories with feasible access to existing patient samples. In the 6 months following the intervention, a review of 87,333 add-on orders showed that 77,310 were successful and 10,023 failed, resulting in a postintervention failure rate of 11.4% (95% CI: 11.1–11.8) (p < 0.001).

Conclusion EHR features such as the ability to identify possible add-on tests are designed to reduce venipunctures but may produce unforeseen negative effects on downstream processes, particularly as hospitals merge into health systems using a single EHR. This case report describes the successful identification and correction of one cause of add-on laboratory failures. QI methodology can yield important insights that reveal simple interventions for improvement.



Background and Significance

Venipunctures are a necessary part of clinical care. However, excessive and/or unnecessary venipunctures can reduce the quality of care by negatively affecting patient experience, potentially introducing patient harm (e.g., blood loss or infection), and diverting resources from patients who need them. The overuse of phlebotomy has been associated with increased risk of hematomas, bacteremia, and hospital-acquired anemia.[1] A study by Salisbury et al found that, of 17,676 patients admitted with acute myocardial infarction to 57 different hospitals, approximately 20% developed moderate to severe anemia secondary to diagnostic blood draws; for every 50 mL of blood taken, the risk of developing anemia increased by 18%.[2] Further, Thavendiranathan et al found that every 1 mL of blood taken was associated with an average 0.070 g/dL decrease in hemoglobin and 0.019% decrease in hematocrit among adult internal medicine patients.[3] Ultimately, excessive venipuncture can negatively affect a patient's perception of received care: a multi-institution survey of phlebotomy experiences showed a clear correlation between patient satisfaction and the need for only one needlestick to obtain an appropriate specimen.[4]

We sought to explore contributors to excess venipunctures in hospitalized patients at a single academic medical center. The University of Colorado Hospital is a 700-bed academic medical and quaternary referral center. This hospital is part of a larger 11-hospital system, UCHealth. Prior to the formation of UCHealth, each of the hospitals was either independent or part of smaller health systems and utilized several different electronic health records (EHRs). The merger of these smaller hospitals and systems included migration to a common EHR system.

Although EHR systems can be used to improve the value of care,[5] EHR design has been found to contribute to ordering errors.[6] Thus, we hypothesized that there may be features or processes within the EHR at our hospital that resulted in unintended venipunctures and patient harm. Based on preliminary work, we hypothesized that failures in our add-on process (laboratory tests performed on previously collected patient samples) were contributing to unintended venipunctures.



Objective

We sought to identify contributors to excess venipunctures at our institution focusing on EHR-related factors related to add-on laboratory failures. Once identified through quality improvement (QI) methodology, we assessed the impact of an EHR-based programming change on one of the contributing factors.



Methods

Setting and Participants

This project was conducted at a single academic, urban, 700-bed hospital within an 11-hospital health system. Included patients were adults (aged 18 years or older) under inpatient or observation status from January 1, 2018 to May 30, 2020.



Quality Improvement Methodology

Gemba/direct observation/process mapping: one of the core tenets of QI is to directly observe processes as they occur, at the location where they occur, and with the people directly involved. First described in Lean methodology, this is known as “going to Gemba,” with Gemba being the Japanese term for “where the work happens” or “the scene of the crime,” both relevant to this work. We directly observed and spoke with personnel working in the clinical laboratory. Using our observations, we created a process map which detailed the entire laboratory add-on process from order entry to result reporting, which allowed us to define the many ways an add-on test could ultimately fail to produce a result. These failure modes were then correlated to EHR processes.

Pareto chart: a Pareto chart, a cause-analysis tool that helps identify which factors contribute most to an outcome,[7] was created to reveal which laboratory tests were responsible for the most add-on failures. Once we identified our major sources of add-on failures, root cause analysis ultimately revealed a fault in the EHR logic that determined whether an add-on order was selectable by clinicians at the point of order entry.
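To illustrate the technique, a Pareto analysis simply ranks categories by frequency and tracks each category's cumulative share of the total. The sketch below uses hypothetical counts (not the study's data); the test names are drawn from the tests discussed in this report.

```python
# Pareto analysis sketch: rank failure categories and compute each
# category's cumulative share. Counts are hypothetical, not study data.
failed_addons = {
    "haptoglobin": 4200,
    "cortisol": 1900,
    "complement C3": 1100,
    "complement C4": 1050,
    "vitamin D": 900,
}

def pareto(counts):
    """Return (name, count, cumulative percent) rows, sorted by descending count."""
    total = sum(counts.values())
    rows, running = [], 0
    for name, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        running += n
        rows.append((name, n, round(100 * running / total, 1)))
    return rows

for name, n, cum in pareto(failed_addons):
    print(f"{name:14s} {n:5d}  {cum:5.1f}%")
```

The ranked list plus the cumulative column is all a Pareto chart plots; a dominant first row (here, haptoglobin) is what directs attention toward a single root cause.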



Intervention and Evaluation

On September 20, 2019, laboratory information systems personnel modified how the add-on suggestion was displayed to the ordering provider. Instead of searching the entire health system for available add-on capabilities, the new programming logic required that one of two criteria be met for the add-on option to be displayed to the clinician: (1) the add-on order could be performed at the laboratory where the specimen was currently located, or (2) the add-on order could be performed at one of the contracted courier laboratories for that site if the test was not typically performed at the site where the specimen was located. If neither criterion was met, the provider was not offered the add-on suggestion. As described in [Fig. 1(A)], if a provider at Hospital A elected to order a haptoglobin test, instead of looking at the entire health system's laboratory capabilities, the EHR logic looked only at the capabilities of Hospital A's laboratory. If the performing laboratory did not have the ability to add a haptoglobin test to an existing specimen, the EHR would not display the add-on option to the ordering provider. This eliminated a potential add-on failure by preventing clinicians from choosing the add-on option in the first place for tests that could not in fact be performed by their hospital's laboratory.
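The revised display rule can be sketched as follows. All site names, tests, and tube colors below are illustrative only (they are not the actual Epic/Beaker configuration); the logic mirrors the two criteria above, showing the add-on option only if the laboratory holding the specimen, or a contracted courier laboratory, can run the test on the existing tube type.

```python
# Sketch of the post-intervention add-on display logic.
# Site names, tests, and tube colors are illustrative, not the
# health system's actual configuration.
CAPABILITIES = {
    # site -> test -> tube colors from which the test can be run
    "Hospital A": {"haptoglobin": {"red", "yellow"}},
    "Hospital C": {"haptoglobin": {"red", "yellow", "green", "purple"}},
}
COURIER_PARTNERS = {
    # site -> contracted courier laboratories for that site
    "Hospital B": ["Hospital C"],
}

def offer_addon(site, test, tube_color):
    """Display the add-on option only if the laboratory holding the
    specimen, or one of its courier partners, can run the test on
    this tube type."""
    candidate_sites = [site] + COURIER_PARTNERS.get(site, [])
    return any(
        tube_color in CAPABILITIES.get(s, {}).get(test, set())
        for s in candidate_sites
    )
```

Under this rule, a purple-topped specimen sitting at Hospital A no longer triggers a haptoglobin add-on suggestion, while a specimen at a site with no local testing can still be offered when a courier partner accepts that tube type.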

Fig. 1 (A) This graphic demonstrates that different hospitals in our health system have different specimen testing capabilities. The Accepted Specimen (per Epic) column shows the colors of specimen tubes from which haptoglobin could be tested. Prior to our intervention, a haptoglobin test would be offered to providers as an add-on if Hospital A had yellow-, red-, green-, or purple-topped specimens in the laboratory, even though add-on testing was only possible for yellow- and red-topped specimens. (B) Pareto chart demonstrating the 15 most frequently failed add-on laboratory tests prior to the intervention.

The intervention was assessed using a retrospective pre- and postintervention evaluation. Data were collected from the electronic health record, Epic (Epic Systems Corporation, Verona, Wisconsin, United States), including the Beaker module, which manages laboratory specimen acquisitioning.



Results

Add-on tests were defined as any new laboratory test order(s) on blood samples already collected. A successful add-on test occurs when the test is performed on the previously collected blood specimen. A failed add-on is defined as an inability to perform the newly requested test on the previously collected blood specimen, resulting in an automatic order for a new venipuncture.

From January 1, 2018 to September 19, 2019, of 195,263 add-on orders placed by providers, 165,118 were successful and 30,145 failed resulting in a preintervention failure rate of 15.4% (95% confidence interval [CI]: 15.1–15.6).

Gemba/process map: our Gemba observation of the clinical laboratory included a group of personnel assigned to review the in-basket—a collection of new laboratory orders sent electronically by clinicians to be added to existing specimens already in the laboratory. While most laboratory orders are routed directly to phlebotomists who collect specimens, add-on orders are instead routed to laboratory personnel who review the digital in-basket and manually evaluate whether an appropriate specimen exists in the laboratory to process each request. During our observation of the in-basket management routine, laboratory personnel described a preidentified list of add-on laboratory requests that consistently failed. The in-basket managers informed us that certain tests, such as haptoglobin, were almost never in the correct tube type for the local laboratory's equipment to perform as an add-on. In fact, the in-basket managers kept a paper list of commonly failed add-on requests taped next to their workstation, with haptoglobin at the top. This correlated with the Pareto chart ([Fig. 1B]) of the most frequently failed add-on tests, which confirmed haptoglobin as the most commonly failed add-on request. Cortisol, complements (C3 and C4), and vitamin D were also on their printed list and captured in our Pareto chart. We observed and analyzed the entire laboratory add-on process from order to completion, identifying all potential ways an add-on test could fail. We then captured each of these failure modes in the EHR and produced a table ([Table 1]) showing the failure types and their frequencies. In-basket processing by the laboratory technicians was the most common source of add-on failures.

Table 1 Listing of the different ways an add-on failure could occur

  Event category for failed add-ons                   Count (n = 40,168)
  Test moved to other specimen by laboratory staff        434
  Canceled by laboratory/beaker                         1,095
  Canceled by background process                        1,368
  Sent for new collection by laboratory staff           2,263
  Other/unknown                                         3,851
  Canceled by care team/other                           4,862
  Sent for new collection from in-basket               25,295

Note: This project targeted the "sent for new collection from in-basket" failure mechanism, as this was attributed to the laboratory specimen availability logic.


Root cause: in this instance, evidence suggested that many add-on tests failed due to incorrect acquisitioning logic within the EHR, which incorrectly offered the add-on option to ordering clinicians. This subset of add-on requests was then canceled by laboratory staff at the in-basket management step, resulting in a new collection order sent to phlebotomists and, potentially, a new venipuncture.

In looking at these commonly failed laboratory tests, we identified the root cause of many of these automatic failures. Due to different equipment and processes, every hospital laboratory in the health system has different add-on sample processing capabilities. For example, as noted in [Fig. 1(A)], due to different processing equipment, Hospital A was able to perform haptoglobin tests from either red- or yellow-top specimen tubes, whereas Hospital C could process haptoglobin tests on red-, yellow-, green-, or purple-top specimen tubes. The error occurred in the laboratory specimen acquisitioning logic: instead of querying the laboratory that would perform the add-on sample processing, the EHR searched the entire health system ([Fig. 1A]). The health-system-integrated EHR allowed add-ons based on the capabilities of any laboratory in the system rather than those of the specific hospital where the specimen would be processed, resulting in add-on test processing failures.
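The flaw reduces to a one-line difference in where the capability lookup happens. In the sketch below (illustrative data, not the health system's actual configuration), the system-wide query answers "yes" for a purple-topped specimen even though the hospital actually holding it cannot run the test.

```python
# Root cause sketch: system-wide vs. local capability lookup.
# Data are illustrative, not the health system's actual configuration.
CAPABILITIES = {
    "Hospital A": {"haptoglobin": {"red", "yellow"}},
    "Hospital C": {"haptoglobin": {"red", "yellow", "green", "purple"}},
}

def eligible_systemwide(test, tube_color):
    """Pre-intervention logic: any site's capability counts (the bug)."""
    return any(tube_color in caps.get(test, set())
               for caps in CAPABILITIES.values())

def eligible_local(site, test, tube_color):
    """Post-intervention logic: only the site holding the specimen counts."""
    return tube_color in CAPABILITIES.get(site, {}).get(test, set())
```

A purple-topped specimen at Hospital A illustrates the failure mode: the system-wide check passes because Hospital C could run the test, so the add-on was offered to the clinician, yet the local check fails and the order was sent for a new collection.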

On September 20, 2019, we changed the EHR acquisitioning logic to stop searching for add-on capability across the health system and instead query only the local laboratory where the specimen would be processed, or those reachable by courier.

Following the intervention, from September 20, 2019 until May 30, 2020, 87,333 add-on orders were placed. Of these, 77,310 were successful and 10,023 failed, resulting in a postintervention failure rate of 11.4% (95% CI: 11.1–11.8) (p < 0.001) ([Table 2] and [Fig. 2]).
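The reported rates and p-value can be checked directly from the counts above. The sketch below uses a simple Wald interval and a pooled two-proportion z-test; the paper does not state its exact interval-estimation method, so small discrepancies in the interval bounds are expected.

```python
from math import sqrt
from statistics import NormalDist

def rate_ci(failures, total, z=1.96):
    """Failure rate with an approximate 95% Wald confidence interval."""
    p = failures / total
    half = z * sqrt(p * (1 - p) / total)
    return p, p - half, p + half

def two_proportion_p(f1, n1, f2, n2):
    """Two-sided p-value for H0: equal failure proportions (pooled z-test)."""
    pooled = (f1 + f2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (f1 / n1 - f2 / n2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

pre_rate, pre_lo, pre_hi = rate_ci(30_145, 195_263)      # about 15.4%
post_rate, post_lo, post_hi = rate_ci(10_023, 87_333)    # about 11.5%
p_value = two_proportion_p(30_145, 195_263, 10_023, 87_333)  # far below 0.001
```

With samples this large, the z statistic is enormous and the two-sided p-value underflows to effectively zero, consistent with the reported p < 0.001.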

Fig. 2 Add-on failure rates per month from January 2018 to May 2020. The intervention changing the processing logic went live on September 20, 2019.
Table 2 Pre- and postintervention add-on failures and success rates

                            Preintervention                  Postintervention
                            (Jan 1, 2018–Sep 19, 2019)       (Sep 20, 2019–May 30, 2020)
  Add-on failed                  30,145                          10,023
  Add-on successful             165,118                          77,310
  Total add-ons requested       195,263                          87,333
  Failure rate                   15.44%                          11.48%


Discussion

In the present study, using QI methodologies and principles, our team investigated sources of excess venipunctures and identified add-on failures as a contributor. We then identified multiple sources of add-on failures and ultimately uncovered a root cause: an error in the EHR logic for acquisitioning laboratory specimens. We implemented a successful fix to the EHR logic to display the add-on option to providers only when the performing laboratory had the ability to perform the requested test on an existing specimen. This intervention resulted in a significant decline in add-on test failures.

Ultimately, the root cause of the error was the migration to a single, common EHR across the health system. The EHR, programmed to limit venipunctures, scanned the entire health system and suggested an add-on if any site performed the test on the collected specimen type. While this worked well for a single hospital, the logic failed when applied to a large, multi-state health system: some add-ons were suggested at laboratories that had neither the capability to run the sample nor a courier agreement with a capable laboratory for that specimen, resulting in repeated failed add-on attempts.

The major limitation of this study is that an add-on failure does not necessarily result in an additional venipuncture. Although preintervention data identified an excess of 30,000 add-on failures over this period, we were not able to determine whether each of these add-on failures resulted in new venipunctures. However, since add-on failures are automatically sent for new collection, and we know from prior work that phlebotomists at our institution collect blood samples shortly after orders are placed, it is reasonable to assume that a portion of these add-on failures resulted in unintended venipunctures.

Importantly, add-on testing contributes to significant amounts of work for clinical laboratories.[8] Accordingly, clinical laboratories have implemented interventions to reduce the manual effort associated with add-ons, such as robotic specimen retrieval as described by Nelson et al.[8] By reducing add-on requests that could not be feasibly performed, we reduced one source of work for laboratory personnel at our institution at the level of in-basket management.

This work highlights that finding critical failure points requires not only evaluation of EHR data but also other methodologies to identify root causes. Without speaking directly with laboratory personnel, we would never have known about the consistently failing laboratory tests that led us to identify the acquisitioning error. Furthermore, by identifying the root causes of failures, interventions emerged with higher chances of success, and future errors may be avoided.

Despite the success demonstrated above, our work identified multiple other sources of add-on failures including a nonintuitive user interface and difficulty tracking the amount of specimen available to perform additional testing. There are also unavoidable reasons for add-on failure including expired specimens, manual error, and insufficient specimen available to perform the add-on tests. The persistence of these other factors likely explains the postintervention failure rate of 11.4%. Further, our intervention only targeted failure at the point of the in-basket management. Although beyond the scope of this summary, QI methodology could be employed to address these remaining sources of add-on failures.



Lessons Learned

  • Add-on test failure creates extra work for laboratory personnel and may result in excess and unintended venipunctures.

  • Combining quality improvement methodologies with EHR data analysis can reveal the root causes of complex problems and help to identify simple solutions.

  • When individual hospitals with different processes and capabilities are joined into a single health system, it is important to recognize and account for differences during EHR integration.



Conclusion

Interventions targeted at reducing venipuncture waste may have unintended consequences, particularly as hospitals and health systems merge. This case report describes the successful identification and correction of one cause of add-on laboratory failures. The use of QI tools such as root cause analysis, process mapping, direct observation, and Pareto charts can yield important insights that reveal simple interventions for improvement.



Clinical Relevance Statement

Venipunctures are a necessary part of clinical care but also contribute to patient harm and discomfort. The ability of an EHR to add new laboratory orders to existing samples ideally allows clinicians to reduce unintended venipunctures for their patients. However, a thorough review of the process at one large health system revealed a startling number of add-on laboratory failures caused in part by differences in laboratory processing between hospitals. Unbeknownst to the ordering clinicians, these failed add-on laboratory orders were routed to phlebotomists to obtain a new sample, thus resulting in excess and unintended venipunctures. An intervention changing EHR handling of these add-on samples resulted in an immediate reduction in add-on laboratory failures.



Multiple Choice Questions

  1. What is the definition of an add-on laboratory test?

    • A laboratory test performed on a different specimen from the same patient to validate the results of a prior test.

    • A laboratory test performed on a previously collected specimen from the same patient to make a diagnosis or clinical decision.

    • A laboratory test performed on a different specimen from a different patient to validate the laboratory testing instrument.

    • A laboratory test performed on a previously collected specimen from the same patient to validate the laboratory testing instrument.

    Correct Answer: The correct answer is option b. An add-on laboratory test is a test performed on a previously collected specimen from the same patient to make a diagnosis or clinical decision.

  2. How do excess venipunctures lower the value of care provided to individual patients?

    • Improve clinicians' ability to monitor the effect of treatments.

    • Create more work for laboratory personnel.

    • Increase diagnostic accuracy.

    • Contribute to iatrogenic anemia, pain, and increased risk of infection.

    Correct Answer: The correct answer is option d. Excess venipunctures contribute to iatrogenic anemia, pain, and increased risk of infection which lowers the value of care. Add-on tests might improve clinicians' ability to monitor the effects of treatment and definitely create more work for laboratory personnel, but these do not necessarily increase the value of care for the patient.

  3. How did the electronic health record and laboratory information system contribute to add-on laboratory failures?

    • Erroneously sent the test to the wrong performing laboratory where the specimen did not exist.

    • Erroneously displayed the option to add-on to an existing specimen to ordering clinicians when an acceptable sample was not available to add-on additional tests.

    • Erroneously displayed the option to add-on to an existing specimen to ordering clinicians when an expired sample was available to add-on additional tests.

    • Erroneously displayed the option to add-on to an existing specimen to ordering clinicians when the sample was contaminated.

    Correct Answer: The correct answer is option b. We identified that the EHR was offering ordering clinicians the option to add-on tests to samples that were not available at the performing laboratory because they were collected in a noncompatible tube type for that specific laboratory's testing capabilities.



Conflict of Interest

None declared.

Acknowledgments

The authors thank Amber Stokes, MLS for her immense help in finding data, clarifying processes, and performing analyses.

Protection of Human and Animal Subjects

This study was designated as Quality Improvement and thus nonhuman subject research by the Colorado Multiple Institutional Review Board (COMIRB).


  • References

  • 1 Dale JC, Pruett SK. Phlebotomy–a minimalist approach. Mayo Clin Proc 1993; 68 (03) 249-255
  • 2 Salisbury AC, Reid KJ, Alexander KP. et al. Diagnostic blood loss from phlebotomy and hospital-acquired anemia during acute myocardial infarction. Arch Intern Med 2011; 171 (18) 1646-1653
  • 3 Thavendiranathan P, Bagai A, Ebidia A, Detsky AS, Choudhry NK. Do blood tests cause anemia in hospitalized patients? The effect of diagnostic phlebotomy on hemoglobin and hematocrit levels. J Gen Intern Med 2005; 20 (06) 520-524
  • 4 Dale JC, Howanitz PJ. Patient satisfaction in phlebotomy: a College of American Pathologists′ Q-Probes study. Lab Med 1996; 27 (03) 188-192
  • 5 Rudin RS, Friedberg MW, Shekelle P, Shah N, Bates DW. Getting value from electronic health records: research needed to improve practice. Ann Intern Med 2020; 172 (11, Suppl): S130-S136
  • 6 Orenstein EW, Boudreaux J, Rollins M. et al. Formative usability testing reduces severe blood product ordering errors. Appl Clin Inform 2019; 10 (05) 981-990
  • 7 What is a Pareto Chart? Analysis & Diagram | ASQ. (2022). Accessed May 4, 2022 at: https://asq.org/quality-resources/pareto
  • 8 Nelson LS, Davis SR, Humble RM, Kulhavy J, Aman DR, Krasowski MD. Impact of add-on laboratory testing at an academic medical center: a five year retrospective study. BMC Clin Pathol 2015; 15: 11

Address for correspondence

Tyler Anstett, DO
Division of Hospital Medicine, University of Colorado Anschutz Medical Campus
Leprino Building, 4th Floor, 12401 East 17th Avenue, Mailstop F-782, Aurora, Colorado 80045
United States   

Publication History

Received: 31 December 2021

Accepted: 28 July 2022

Accepted Manuscript online:
29 July 2022

Article published online:
21 September 2022

© 2022. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

