Appl Clin Inform 2020; 11(05): 692-698
DOI: 10.1055/s-0040-1716537
Case Report

Application of Human Factors Methods to Understand Missed Follow-up of Abnormal Test Results

Deevakar Rogith
1   The University of Texas Health Science Center at Houston School of Biomedical Informatics, Houston, Texas, United States
,
Tyler Satterly
2   Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veteran Affairs Medical Center, Houston, Texas, United States
3   Department of Medicine, Section of Health Services Research, Baylor College of Medicine, Houston, Texas, United States
,
Hardeep Singh
2   Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veteran Affairs Medical Center, Houston, Texas, United States
3   Department of Medicine, Section of Health Services Research, Baylor College of Medicine, Houston, Texas, United States
,
Dean F. Sittig
1   The University of Texas Health Science Center at Houston School of Biomedical Informatics, Houston, Texas, United States
4   UT-Memorial Hermann Center for Healthcare Quality and Safety, Houston, Texas, United States
,
Elise Russo
5   Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, United States
,
Michael W. Smith
6   Department of Industrial and Mechanical Engineering, Universidad de las Americas Puebla, Cholula, Mexico
,
Don Roosan
7   Department of Pharmacy Practice and Administration, College of Pharmacy Western University of Health Sciences, Pomona, California, United States
,
Viraj Bhise
8   Department of Medicine, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, United States
,
Daniel R. Murphy
2   Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veteran Affairs Medical Center, Houston, Texas, United States
3   Department of Medicine, Section of Health Services Research, Baylor College of Medicine, Houston, Texas, United States
Funding This project is funded by the Agency for Health Care Research and Quality (R01HS022087) and partially funded by the Houston VA HSR&D Center for Innovations in Quality, Effectiveness and Safety (CIN 13–413). D.R.M. is additionally funded by an Agency for Healthcare Research and Quality Mentored Career Development Award (K08-HS022901) and H.S. is additionally supported by the VA Health Services Research and Development Service (CRE 12–033; Presidential Early Career Award for Scientists and Engineers USA 14–274), the VA National Center for Patient Safety, and the Agency for Health Care Research and Quality (R01HS022087). These funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript. There are no conflicts of interest for any authors.
 

Abstract

Objective This study demonstrates the application of human factors methods for understanding causes of lack of timely follow-up of abnormal test results (“missed results”) in outpatient settings.

Methods We identified 30 cases of missed test results by querying electronic health record data, developed a critical decision method (CDM)-based interview guide to understand decision-making processes, and interviewed physicians who ordered these tests. We analyzed transcribed responses using a contextual inquiry (CI)-based methodology to identify contextual factors contributing to missed results. We then developed a CI-based flow model and conducted a fault tree analysis (FTA) to identify hierarchical relationships between factors that delayed action.

Results The flow model highlighted barriers in information flow and decision making, and the hierarchical model identified relationships between contributing factors for delayed action. Key findings included underdeveloped methods to track follow-up, as well as mismatches in communication channels, timeframes, and expectations between patients and physicians.

Conclusion This case report illustrates how human factors–based approaches can enable analysis of contributing factors that lead to missed results, thus informing development of preventive strategies to address them.



Background and Significance

Lack of timely follow-up and missed follow-up of abnormal test results (henceforth, “missed results”) is a recognized patient safety concern.[1] [2] In ambulatory settings, the incidence of missed results can be as high as 65% and can lead to delayed diagnosis or treatment.[2] [3] Knowledge of individual-level (vs. system-level) factors contributing to missed results is still evolving. For example, it is imperative to understand an individual's intent at the point of care and the decision making that contributes to missed results, especially within the complex sociotechnical context of electronic health record (EHR)-enabled health systems. Human factors methods could advance understanding of physician decision-making processes and uncover the individual-level factors involved.

Prior research on understanding causes of missed results is limited and has used retrospective chart reviews,[2] EHR activity logs,[4] [5] focus groups,[6] [7] cognitive task analyses,[8] aggregated root cause analysis data,[9] and safety event reporting.[10] Our objective was to use human factors methods to further illustrate workflow and process issues related to missed results, with a focus on contributing factors. By using the human factors methods described in this study, we gathered additional detail about physicians' decision-making processes, illustrating factors that contribute to missed results.

We applied three human factors methods, namely critical decision method (CDM)-based interviews, contextual inquiry (CI)-based analysis, and fault tree analysis (FTA), to understand factors contributing to missed results. Missed follow-up of abnormal laboratory results involves complex human–system interaction factors; to uncover the related decision-making processes, we chose CDM-based interviews. Building on the decision-making process, we applied CI and FTA to illustrate contributing factors and the interactions among factors leading to missed follow-up. Such human factors methods can enhance understanding of where to focus strategies to reduce or mitigate negative outcomes.

Critical Decision Method–Based Interviews

CDM is a cognitive task analysis technique used to describe naturalistic decision making; it improves understanding of situational awareness, mental models, and decision points in particular situations.[11] [12] [13] CDM involves gathering information about a personally experienced incident via focused interviews with task experts and identifying timelines, key decision points, and factors influencing decision making, such as clinical decision making in critical care.[14] [15] [16] One limitation of the CDM method is the delay between the incident and the interview.[17] By using near-real-time detection of incidents, we reduced the problem of memory decay.



Contextual Inquiry–Based Analysis

The CI methodology helps in understanding the context of actions, such as physicians' responses to abnormal test results. CI is a structured methodology for modeling work domains and identifying user needs, guiding both interviews and analysis. It focuses on four principles: (1) identifying the context of participants' work, (2) partnering with participants to observe and discuss work, (3) interpreting insights and relaying them back to the participant, and (4) using the research question to guide the interactions. CI-based analysis helps generate models that represent different aspects of how work functions: communication flow and coordination, culture, task sequences, physical environment, and artifacts. CI has been used to develop tools and implement new workflows in health care.[18] [19] [20]



Fault Tree Analysis

FTA[21] is a form of root cause analysis used to illustrate and analyze complex interacting pathways leading to process failures[21] and is used for developing error prevention, monitoring, and intervention strategies.[22] [23] [24] FTA models an outcome as a hierarchy of interacting contributing factors[21] [25] connected by Boolean logic operators (“AND” and “OR”). Constructing a fault tree requires describing the top-level outcome and resolving it into intermediate events (immediate causes that are resolved further) and basic events (primary initiating events that are not resolved further).[26] This method enables visual analytics and probabilistic modeling of factors contributing to an outcome and has been applied in clinical use cases for studying factors related to adverse events.[27] [28] [29]
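To make the gate logic concrete, the following minimal sketch (not drawn from the study's data; event names and probabilities are invented, and independence of events is assumed) shows how basic-event probabilities propagate through “AND” and “OR” gates to a top event:

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class BasicEvent:
    name: str
    probability: float  # probability that the initiating event occurs

@dataclass
class Gate:
    kind: str  # "AND" or "OR"
    children: List[Union["Gate", BasicEvent]] = field(default_factory=list)

def probability(node) -> float:
    """Propagate probabilities up the tree, assuming independent events."""
    if isinstance(node, BasicEvent):
        return node.probability
    child_ps = [probability(c) for c in node.children]
    if node.kind == "AND":              # all children must occur
        p = 1.0
        for cp in child_ps:
            p *= cp
        return p
    p_none = 1.0                         # OR gate: at least one child occurs
    for cp in child_ps:
        p_none *= (1.0 - cp)
    return 1.0 - p_none

# Hypothetical tree: a result is missed if it is never viewed, OR if it is
# viewed but no action is taken AND the patient is not notified.
tree = Gate("OR", [
    BasicEvent("result never viewed", 0.05),
    Gate("AND", [
        BasicEvent("no action after viewing", 0.10),
        BasicEvent("patient not notified", 0.20),
    ]),
])
print(f"Top-event probability: {probability(tree):.3f}")
```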

A combination of these human factors methods could allow in-depth identification of causes for missed results and inform targeted solutions to improve decision-making processes.



Case Report

Setting

This study was performed at three large primary care clinics in Texas after Institutional Review Board approval. Each clinic used EHRs and included trainees.



Case Selection

We queried the clinical data repository at each site from January 1, 2015 to September 30, 2015 to identify abnormal imaging and laboratory results ([Table 1]). A reviewer (V.B.) manually reviewed records to identify missed results, defined as lack of documented action (repeat or subsequent testing, referral placement, medication change, or patient notification) within 14 days. We then invited 30 physicians who ordered the respective tests for interviews. We used maximum variation sampling techniques to maximize heterogeneity in clinic site and test types with each subsequent interview.
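A minimal sketch of this case-identification step is shown below; the data frames, column names, and values are hypothetical stand-ins for the sites' clinical data repositories, and flagged results would still require manual chart review as described above.

```python
import pandas as pd

# Hypothetical extracts from a clinical data repository; real tables, column
# names, and queries would differ by site and EHR.
results = pd.DataFrame({
    "patient_id": [101, 102],
    "test_name": ["TSH", "Urine micro"],
    "result_date": pd.to_datetime(["2015-02-01", "2015-03-10"]),
    "abnormal_flag": [True, True],
})
actions = pd.DataFrame({
    "patient_id": [101],
    # repeat/subsequent testing, referral placement, medication change, or patient notification
    "action_type": ["repeat_test"],
    "action_date": pd.to_datetime(["2015-02-05"]),
})

def no_action_within_window(row, actions, window_days=14):
    """Return True if no documented action occurred within window_days of the result."""
    window_end = row["result_date"] + pd.Timedelta(days=window_days)
    acted = actions[
        (actions["patient_id"] == row["patient_id"])
        & (actions["action_date"] >= row["result_date"])
        & (actions["action_date"] <= window_end)
    ]
    return acted.empty

abnormal = results[results["abnormal_flag"]].copy()
abnormal["potentially_missed"] = abnormal.apply(
    no_action_within_window, axis=1, actions=actions
)
# Flagged rows are candidates for manual review, not confirmed missed results.
print(abnormal[abnormal["potentially_missed"]][["patient_id", "test_name", "result_date"]])
```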

Table 1 List of abnormal laboratory result cases not followed up

Test             Real   Vignette   Total
Chest X-ray      1      –          1
EKG              1      3          4
Hemoglobin       1      –          1
TSH              –      10         10
Urine albumin    2      –          2
Urine culture    2      –          2
Urine micro      8      2          10
Total            15     15         30

Abbreviations: EKG, electrocardiogram; TSH, thyroid stimulating hormone.




Interview

We created a CDM-based interview guide ([Appendix A]) to understand follow-up in the context of a physician's own missed result cases, including reasons for the miss. Questions identified factors delaying follow-up, not necessarily in the same case. Questions both elicited specific factors contributing to missed results and identified relationships between work system factors and individual decision-making factors. Interviews were audio recorded and transcribed.

Three investigators (M.W.S., D.F.S., and D. Roosan) performed semistructured interviews with physicians using the CDM-based interview guide, and data were analyzed using the four CI principles. For the first 15 cases, interviewees were aware that they had missed the follow-up (delay case interviews). However, this contributed to some reluctance in responding to questions about causes of missed results. To ensure responses were not constrained by recognition of their own potential oversight and to enable more open discussion, we modified the method: the remaining 15 cases were not traditional CDM interviews about the participants' own incidents but instead vignettes similar to those experienced by their patients (vignette case interviews). The vignettes were generated by removing identifying information about the patient, treating physician, and clinic.



Data Analysis

Two other independent reviewers (D. Rogith and T.S.) with human factors expertise analyzed data using CI-based and FTA methods. We adapted the CI to include only the flow diagram analysis. We adapted the FTA to consider all logical operators as “AND” operators.[26] The sociotechnical model[30] guided identification of factors contributing to missed results.

Reviewers first analyzed transcripts to identify underlying factors contributing to missed results. Because we aimed to identify information flow breakdowns related to missed results, reviewers used the CI-based analysis methodology to develop flow models of communication and coordination in result management decision making. Thus, the reviewers independently reviewed transcripts and identified discussion about information flow and workflow decisions related to managing the test result. These were represented in the model in terms of information flow for both people (e.g., physicians and patients) and data sources (e.g., laboratory results and EHR systems). Each reviewer then independently combined their 30 flow models into a single model before collaboratively reconciling into one final model ([Fig. 1]).

Fig. 1 Contextual inquiry flow model of follow-up of abnormal laboratory results. EHR, electronic health record.

Reviewers then performed FTAs to identify events leading to inaction for each case and used deductive reasoning to identify basic events from the interviews. To generate fault trees based on actual events, we chose only the 15 cases where interviewees were aware that they had missed the follow-up, and both reviewers (D. Rogith and T.S.) independently conducted FTAs for each case to identify basic events. Basic events were then grouped into intermediate events based on the flow model described above. An interdisciplinary team discussed findings and consolidated intermediate events to generate a cumulative fault tree ([Fig. 2]). Using this process, the basic and intermediate events were grouped into four categories: patient, clinical condition, physician, and EHR.

Fig. 2 Fault tree analysis of events leading to no follow-up of abnormal laboratory results. EHR, electronic health record.
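As a rough illustration of how such a cumulative fault tree could be assembled (the case data, event names, and the basic-to-intermediate mapping below are invented for illustration and are not the study's coded events), each case's basic events are mapped to intermediate events within the four categories and then tallied across cases:

```python
from collections import Counter

# Illustrative only: basic events coded from three hypothetical delay-case
# transcripts (the study coded 15 delay cases from the interviews).
case_basic_events = {
    "case_01": ["assumed specialist would follow up", "result arrived after visit"],
    "case_02": ["patient unreachable by phone", "patient not enrolled in portal"],
    "case_03": ["assumed specialist would follow up"],
}

# Hypothetical grouping of basic events into intermediate events, organized by
# the four categories used in the study (patient, clinical condition, physician, EHR).
intermediate_of = {
    "assumed specialist would follow up": ("physician", "responsibility assumed to lie elsewhere"),
    "result arrived after visit": ("clinical condition", "result timing outside the visit"),
    "patient unreachable by phone": ("patient", "communication channel mismatch"),
    "patient not enrolled in portal": ("EHR", "communication channel mismatch"),
}

# Under the all-"AND" adaptation, every basic event in a case is treated as a
# necessary contributor, so each case adds one count to every intermediate
# event it touches (de-duplicated within the case).
counts = Counter()
for events in case_basic_events.values():
    counts.update({intermediate_of[e] for e in events})

for (category, intermediate), n in counts.most_common():
    print(f"{category}: {intermediate} ({n} of {len(case_basic_events)} cases)")
```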


Results

Contributory causes identified from interview data are listed in [Table 2]. During delay case interviews, workflow issues were predominant (e.g., forgetting to notify patients about therapy changes based on results, diffusion of responsibility between referring physicians and residents for results follow-up, and language barriers). However, in vignette case interviews, EHR issues were more prominent. For example, limited patient portal usage led physicians to not send messages about results.

Table 2 List of reasons for not following up an abnormal laboratory result

Test results
  Delay case interviews:
  • Similar abnormal laboratory results in past
  • Results deemed abnormal but not clinically serious
  • Results arrive after patient's visit
  Vignette case interviews:
  • Laboratory results are scanned or faxed so not available in structured data tabs
  • Noise in color coding of abnormal laboratory results

Physician actions
  Delay case interviews:
  • Specialist expected to follow up
  • Follow-up deemed to be resident physician's responsibility
  • Communication breakdown when delegating follow-up action to be relayed by staff
  • No feedback from staff that abnormal results and follow-up actions were communicated to patient
  Vignette case interviews:
  • Forgetfulness
  • No dedicated staff for follow-up
  • Follow-up deemed an unbillable activity
  • Communication breakdown when delegating follow-up action to be relayed by staff
  • No feedback from staff that abnormal results and follow-up actions were communicated to patient

Clinical actions
  Delay case interviews:
  • Action taken in form of adding a clinical note or updating prescription without communication to patient
  • No treatment modifications necessary
  Vignette case interviews:
  • Need to explain results in detail

EHR system
  Delay case interviews:
  • Patient does not use portal
  • Unable to confirm whether patient accessed result via portal
  Vignette case interviews:
  • Patient does not use portal
  • Unable to confirm whether patient accessed result via portal
  • Multiple clinics, multiple EHR systems; forgets to act on time
  • Time limit for automatic notification deemed too short (10 business days)
  • Not sure of difference between the “communication” and “release” EHR features for abnormal laboratory results

Other communications
  Delay case interviews:
  • Inability to reach patient via phone
  • Physician prefers not to call patients
  • Language barriers
  • Mail: no feedback on status of mailing
  Vignette case interviews:
  • Inability to reach patient via phone; no patient contact information available
  • Preference for direct communication (via SMS-like technology)
  • Mail: no feedback on status of mailing

Patient factors
  Delay case interviews:
  • Patient deemed responsible for follow-up
  • Patient has another appointment within 2 weeks, so follow-up delayed
  Vignette case interviews:
  • Patient has another appointment within 2 weeks, so follow-up delayed

Abbreviation: EHR, electronic health record.

Note: Factors listed under both interview types were common to delay (identified) cases and vignette cases.


We developed a CI flow model describing physicians' processes for managing abnormal results ([Fig. 1]). The flow model shows four different paths in physicians' actions after test results: (1) identifying abnormal results, (2) tracking follow-up, (3) delegating follow-up, and (4) conducting follow-up. For each path, we identified barriers in the follow-up process. The model displays how physicians interact with abnormal results, their expectations for managing these results, and user requirements for completing follow-up tasks. Key findings from the flow model included physicians' lack of methods to track follow-up and mismatches in communication channels, timeframes, and expectations between patients and physicians.

Several physicians described unwillingness to send notifications through the EHR patient portal to communicate results because they felt patients may not use the portal. Some physicians reported that if a result was not acted on by a physician within a specific timeframe (e.g., 10 days), the EHR automatically released the result without a physician interpretation. This removed the item from the physician's to-do list, limiting prompts to act. Furthermore, some physicians preferred only in-person communication of abnormal results at patients' next appointments, which may occur beyond the autorelease timeframe.

[Fig. 2] displays the FTA-based hierarchical model of factors contributing to missed test results, along with the frequency of each occurrence among the 15 delay case interviews. The most common factor was physicians' assumption that ordering physicians are responsible for follow-up (5 of 15 cases). While most institutions designate responsibility for result follow-up to the ordering physician, physicians reported not being notified when resident-ordered tests returned, adding delays to follow-up. In specialty referrals, some physicians reportedly assumed that the referred specialist would manage the results of tests ordered by the referring physician because results would arrive at the time of the specialist's appointment. The FTA hierarchical model ([Fig. 2]) also confirms a mismatch in communication channels for follow-up (5 of 15 cases). We found that some physicians shifted responsibility for follow-up to patients, such as by releasing results electronically only when patients had signed up for the organization's patient portal and/or expecting patients to schedule follow-up visits to discuss results.

These intermediate events provide high-level causes for the individual basic events. Some intermediate events highlight professionalism issues, such as reliance on patient appointments and poor communication etiquette. One physician asserted that patients are responsible for scheduling follow-up visits within 2 weeks. In other cases, physicians preferred to hold nonurgent results for the patient's next visit if it was scheduled within 2 weeks. However, several physicians felt that the follow-up discussion was often forgotten if patients missed or canceled visits.



Discussion

Using three complementary human factors methods, we identified causes for lack of follow-up of abnormal laboratory results. Reasons identified included physicians' expectations that patients are responsible for scheduling follow-up, mismatches in physician–patient communication preferences, and difficulties with managing abnormal results in EHR systems. These first-person accounts obtained using human factors methods differ from prior studies in that individual decision-making, workflow-related, and technology-related findings were prominent, allowing identification of contributing factors and barriers to action.

Combining CI-based and FTA analysis methods with CDM-based interviews permitted uncovering of contributing factors for missed results. For example, CDM-based interviews allowed identification of multiple causes for inaction on abnormal results, while adding CI-based analysis uncovered contextual information such as information exchange between physician and staff ([Fig. 1]). This helped categorize causes for missed results ([Table 2]). Application of FTA then yielded a hierarchical model of missed results. Identifying basic contributory causes can assist in designing systems to manage abnormal test results, implementing results follow-up policies, and training clinicians to reduce breakdowns. Interestingly, we obtained richer detail with vignettes than with delay cases, suggesting that physicians remain hesitant to discuss care breakdowns in which they were involved; this observation provides guidance for future work in this area.



Limitations

Several limitations should be noted. First, our findings may be limited by social desirability bias given the potentially sensitive topic of missed results. In vignette cases, physicians were unable to refer to their own experiences with the cases as they would in traditional CDM interviews. Additionally, findings from these three sites may not be generalizable to different practice settings and EHRs. Second, the CI-based analysis was performed by reviewers who did not perform the interviews; however, this offers a more independent assessment of findings that might not be apparent during interviews. The initial interviews were conducted to understand decision making using CDM, so the secondary analysis using CI- and FTA-based methods relied on independent reviews. Third, CI methodology involves both observation and analysis; however, in clinical practice it is impractical to directly observe rare events such as those under study in this case report. Nevertheless, CI-based analysis allowed useful information to be gleaned from postevent interviews. Finally, we did not aim to identify specific actions to improve the efficiency of test result management; however, our findings help inform future work to identify and test solutions.



Conclusion

We illustrate our application of diverse human factors methods (CDM, CI, and FTA) to understand factors in abnormal test result follow-up. Our methods identified multiple factors contributing to missed follow-up, such as provider–patient communication channel mismatch and diffusion of responsibility. We focused on identifying barriers to successful follow-up and pathways leading to inaction. Future directions include expanding these methods to facilitate the design of information systems and the implementation of preventive strategies to reduce missed test results.



Clinical Relevance Statement

Adverse events and care delays can occur when physicians miss taking action on abnormal test results. However, the individual decision-making factors surrounding such events are less well understood. The combination of human factors methods described herein can identify key contributory factors that guide development of preventive interventions.



Multiple Choice Questions

1. Which method is useful in understanding information flow in a decision-making process?

  a. Critical decision method

  b. Contextual inquiry

  c. Process mining

Correct Answer: The correct answer is option b.

2. Which method is useful in understanding the sequence of events leading to an adverse outcome?

  a. Fault tree analysis

  b. Process mining

  c. Critical decision method

Correct Answer: The correct answer is option a.



Appendix A



Critical Decision Method Interview Protocol

Explanation of Study and Description of Interview

We are interested in how system factors affect the process of follow-up to abnormal test results.

We've talked with leadership, IT, laboratory, and some providers to get a high-level overview of the general process. But because so much of what happens depends on specific details, to fully understand all the factors we also need to get concrete and look in depth at a sample of cases.

We are looking at a quasirandom sample of recent abnormal test results, sampling a range of follow-up patterns.

Patient ___ and test results ____ on ____ is one we want to explore. We are interested in the various system and other factors that played a role in the follow-up of this case. We'd like to speak to you because you are most knowledgeable about this case and what factors played a role.

I'd like to first walk through the case and get your description of things. Then I'll quickly review it with you to make sure I've got the picture. Then I'll ask some more questions to get a richer picture of what elements played a role in this particular case.

Do you remember the case?

Do you want to pull up the chart?



Provider Account

Can you walk me through it? Your initial impressions during the appointment, what your concerns were and what decisions you made ….

Starting at the visit where you ordered the test or before that if you want.

Timeline

Draw timeline

Include

  • Test ordered

  • PCP's expectations

  • Test performed

  • Results accessed by PCP

  • Any orders in response to test result

  • Notification to patient

  • Any response from down-stream services

Review with provider



Deepening

In our sampling we've seen similar results that had a more involved follow-up, and also ones with a less involved follow-up. We've seen similar results that had faster resolution and ones that took more time to be resolved. Do you think there were any factors that led to this test result being addressed in the way it was? Why wasn't it slower or faster, or a more involved or less involved follow-up?

Probes for factors:

  • Busy day, staffing level

  • CPOE for this test, this patient, via this exam room/office

  • Patient engagement

  • Patient access to laboratory, radiology

  • Reviewing results this test, this patient, via this exam room/office

  • Confidence versus Uncertainty about patient trajectory

Probe questions:

  • What led you to order test?

  • Training regarding ordering this test via EHR?

  • What were your expectations? What led you to expect that?

  • In what context did you access the results—morning at work, evening at home …

  • What did the results mean to you?

  • What other clinical information did you use to make that assessment?

  • What was your main concern at that point?

  • Did you consider different options? How did you decide to take the next step?

  • What were your expectations about the patient's response? About how the next step would occur? About the clinical trajectory of the patient?



Conflict of Interest

None declared.

Protection of Human and Animal Subjects

The study was approved by Baylor College of Medicine Institutional Review board. Informed consent was obtained from the physicians prior to the interview.


References

  • 1 Wynia MK, Classen DC. Improving ambulatory patient safety: learning from the last decade, moving ahead in the next. JAMA 2011; 306 (22) 2504-2505
  • 2 Callen JL, Westbrook JI, Georgiou A, Li J. Failure to follow-up test results for ambulatory patients: a systematic review. J Gen Intern Med 2012; 27 (10) 1334-1348
  • 3 Callen J, Georgiou A, Li J, Westbrook JI. The safety implications of missed test results for hospitalised patients: a systematic review. BMJ Qual Saf 2011; 20 (02) 194-199
  • 4 Singh H, Thomas EJ, Sittig DF. et al. Notification of abnormal lab test results in an electronic medical record: do any safety concerns remain?. Am J Med 2010; 123 (03) 238-244
  • 5 Singh H, Thomas EJ, Mani S. et al. Timely follow-up of abnormal diagnostic imaging test results in an outpatient setting: are electronic medical records achieving their potential?. Arch Intern Med 2009; 169 (17) 1578-1586
  • 6 Hysong SJ, Sawhney MK, Wilson L. et al. Understanding the management of electronic test result notifications in the outpatient setting. BMC Med Inform Decis Mak 2011; 11 (01) 22
  • 7 Singh H, Vij MS. Eight recommendations for policies for communicating abnormal test results. Jt Comm J Qual Patient Saf 2010; 36 (05) 226-232
  • 8 Hysong SJ, Sawhney MK, Wilson L. et al. Provider management strategies of abnormal test result alerts: a cognitive task analysis. J Am Med Inform Assoc 2010; 17 (01) 71-77
  • 9 Powell L, Sittig DF, Chrouser K, Singh H. Assessment of health information technology-related outpatient diagnostic delays in the US veterans affairs health care system: a qualitative study of aggregated root cause analysis data. JAMA Netw Open 2020; 3 (06) e206752
  • 10 Lacson R, Cochon L, Ip I. et al. Classifying safety events related to diagnostic imaging from a safety reporting system using a human factors framework. J Am Coll Radiol 2019; 16 (03) 282-288
  • 11 Hoffman RR, Crandall B, Shadbolt N. Use of the critical decision method to elicit expert knowledge: a case study in the methodology of cognitive task analysis. Hum Factors 1998; 40 (02) 254-276
  • 12 Pascual R, Henderson S. Evidence of naturalistic decision making in military command and control. Naturalistic Decision Making 1997; 217-26
  • 13 Crandall B, Getchell-Reiter K. Critical decision method: a technique for eliciting concrete assessment indicators from the intuition of NICU nurses. ANS Adv Nurs Sci 1993; 16 (01) 42-51
  • 14 Patterson MD, Militello LG, Bunger A. et al. Leveraging the critical decision method to develop simulation-based training for early recognition of sepsis. J Cogn Eng Decis Mak 2016; 10 (01) 36-56
  • 15 Power N, Baldwin J, Plummer NR, Laha S. Critical care decision making: a pilot study to explore and compare the decision making processes used by critical care and non-critical care doctors when referring patients for admission. 2017 :255. Presented at 13th International Conference on Naturalistic Decision; Bath, UK
  • 16 Schnittker R, Marshall SD, Horberry T, Young K. Decision-centred design in healthcare: The process of identifying a decision support tool for airway management. Appl Ergon 2019; 77: 70-82
  • 17 Klein GA, Calderwood R, Macgregor D. Critical decision method for eliciting knowledge. IEEE Trans Syst Man Cybern 1989; 19 (03) 462-472
  • 18 Coble J, Maffitt J, Orland M, Kahn M. Contextual inquiry: discovering physicians' true needs. Proc Annu Symp Comput Appl Med Care 1995; 469-473
  • 19 Ho J, Aridor O, Glinski DW. et al. Needs and workflow assessment prior to implementation of a digital pathology infrastructure for the US Air Force Medical Service. J Pathol Inform 2013; 4: 32
  • 20 Liang J, Yan B, Yang C, Yang H. The design of hemiplegia rehabilitation equipment based on contextual inquiry. IOP Conf. Ser.: Mater. Sci. Eng 2019; 520: 012020 . Doi: 10.1088/1757-899X/520/1/012020
  • 21 Vesely WE, Roberts N. Fault Tree Handbook. Washington, D.C.: Systems and Reliability Research, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission; 1981
  • 22 Hessian R, Salter BB, Goodwin EF. Fault-tree analysis for system design, development, modification, and verification. IEEE Trans Reliab 1990; 39 (01) 87-91
  • 23 Rao KD, Gopika V, Rao VS, Kushwaha H, Verma AK, Srividya A. Dynamic fault tree analysis using Monte Carlo simulation in probabilistic safety assessment. Reliab Eng Syst Saf 2009; 94 (04) 872-883
  • 24 Hauptmanns U. Semi-quantitative fault tree analysis for process plant safety using frequency and probability ranges. J Loss Prev Process Ind 2004; 17 (05) 339-345
  • 25 Lee W-S, Grosh D, Tillman FA, Lie CH. Fault tree analysis, methods, and applications: a review. IEEE Trans Reliab 1985; 34 (03) 194-203
  • 26 Rogith D, Iyengar MS, Singh H. Using fault trees to advance understanding of diagnostic errors. Jt Comm J Qual Patient Saf 2017; 43 (11) 598-605
  • 27 Jonas JA, Devon EP, Ronan JC. et al. Determining preventability of pediatric readmissions using fault tree analysis. J Hosp Med 2016; 11 (05) 329-335
  • 28 McElroy LM, Khorzad R, Rowe TA. et al. Fault tree analysis: assessing the adequacy of reporting efforts to reduce postoperative bloodstream infection. Am J Med Qual 2017; 32 (01) 80-86
  • 29 Li X-Z, Gao J-M, Zhao Y-Q, Chen Y-J. Infusion pump system safety analysis based on fault tree and Markov analysis. In: Wei J. , ed. Mechanical Engineering and Control Systems: Proceedings of the 2016 International Conference on Mechanical Engineering and Control System (MECS2016). New Jersey, NJ: World Scientific Publishing Co. Pte. Ltd.; 2017
  • 30 Sittig DF, Singh H. A new socio-technical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care 2010; 19 (Suppl. 03) i68-i74

Address for correspondence

Deevakar Rogith, MBBS, PhD
7000 Fannin Street Suite 600, Houston, TX 77030
United States   

Publication History

Received: 04 March 2020

Accepted: 01 August 2020

Article published online:
21 October 2020

Georg Thieme Verlag KG
Stuttgart · New York

