Appl Clin Inform 2022; 13(03): 681-691
DOI: 10.1055/s-0042-1751092
Research Article

Providers Electing to Receive Electronic Result Notifications: Demographics and Motivation

Benjamin H. Slovis
1   Department of Emergency Medicine, Thomas Jefferson University, Philadelphia, Pennsylvania, United States
2   Office of Clinical Informatics, Jefferson Health, Philadelphia, Pennsylvania, United States
,
William J.K. Vervilles
3   Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania, United States
,
David K. Vawdrey
4   Office of the Chief Data and Informatics Officer, Geisinger Health, Danville, Pennsylvania, United States
,
Jordan L. Swartz
5   Ronald O. Perelman Department of Emergency Medicine, NYU Langone Health, New York, New York, United States
,
Catherine Winans
6   Information Services and Technology, Jefferson Health, Philadelphia, Pennsylvania, United States
,
John C. Kairys
2   Office of Clinical Informatics, Jefferson Health, Philadelphia, Pennsylvania, United States
7   Department of Surgery, Thomas Jefferson University, Philadelphia, Pennsylvania, United States
,
Jeffrey M. Riggio
2   Office of Clinical Informatics, Jefferson Health, Philadelphia, Pennsylvania, United States
8   Department of Medicine, Thomas Jefferson University, Philadelphia, Pennsylvania, United States
Funding None.
 

Abstract

Background Automated electronic result notifications can alert health care providers of important clinical results. In contrast to historical notification systems, which were predominantly focused on critical laboratory abnormalities and often not very customizable, modern electronic health records provide capabilities for subscription-based electronic notification. This capability has not been well studied.

Objectives The purpose of this study was to develop an understanding of when and how a provider decides to use a subscription-based electronic notification. Better appreciation for the factors that contribute to selecting such notifications could aid in improving the functionality of these tools.

Methods We performed an 8-month quantitative assessment of 3,291 notifications and a qualitative survey assessment of 73 providers who utilized an elective notification tool in our electronic health record.

Results We found that most notifications were requested by attending physicians (∼60%) and that the most represented specialty was internal medicine (∼25%). Most providers requested only a few notifications, while a small minority (nearly 5%) requested 10 or more in the study period. The largest share (nearly 30%) of requests was for chemistry laboratory tests. Survey respondents reported using the tool predominantly for important or time-sensitive laboratory tests. Overall opinions of the tool were positive (median = 7 out of 10, 95% confidence interval: 6–9), with 40% of eligible respondents reporting the tool improved quality of care. Reported examples included reduced time to result review, monitoring of heparin drips, and reviewing pathology results.

Conclusion Developing an understanding of when and how providers decide to be notified of clinical results can aid in the design and improvement of clinical tools, such as improved elective notifications. These tools may reduce time to result review, which could in turn improve clinical care quality.



Background and Significance

Ubiquitous electronic health records (EHRs) make valuable information available for delivering effective health care.[1] However, health care providers are at times inundated with too much information.[2] It is well documented that physicians and other care providers spend considerable time reviewing clinical data.[3] [4]

Automated electronic result notifications can alert health care providers of important clinical results, and notification capabilities have matured alongside EHRs.[5] Historically, notification systems predominantly focused on critical abnormalities and used alphanumeric pages or short message service. They were designed to reduce time-to-awareness of abnormalities, and several studies of these systems reported clinical improvements.[5] [6] [7] [8] [9] [10] [11] A limitation of most prior work in this area is that it relied on custom-developed technology not easily deployed within today's predominant commercially available EHR systems.[12]

A second limitation of most previous work on automated electronic result notifications is that the notifications were mandatory, or preset, rather than elective and user-configurable. Elective notifications are chosen by the health care provider prior to the test result and allow the provider to subscribe to a notification when the result is available. Although the fundamental theorem of informatics proposed by Friedman[13] is based on a synergy between humans and technology, in practice, care providers are increasingly beholden to the EHR.[14] By enhancing provider autonomy, giving providers the ability to select if and when they choose to be notified, elective notifications hold the potential to reduce alert fatigue.[5] [15] Poor EHR usability is associated with burnout and dissatisfaction,[16] and elective notifications may be one way to improve end-user experience. Poon et al created a subscription-based system allowing the end-user to request notification of a laboratory test result via an alphanumeric page,[15] but the system was not designed in a modern vendor-based EHR. Koziatek et al studied a vendor-based elective solution in their emergency department (ED) and demonstrated a reduction in time between the test result and decision-making, though the tool was not routinely used.[17]

Our hospital's vendor-based EHR has an elective option for notification of clinical results regardless of whether values are abnormal or critical. Users can elect to receive a notification at the time of order entry or afterward, delivered to their smartphone or smartwatch.

Understanding when and how a provider decides to use a subscription-based electronic notification could improve the functionality of these tools, as well as enhance their implementation and associated training in a clinical production environment. The purpose of this study was to perform a quantitative and qualitative assessment of providers who utilized the elective notification tool available in our EHR. We hypothesized that users would have specific clinical scenarios in which elective notifications would be most utilized and found most useful, and that users may report experiences where notifications improved time to result review or clinical intervention. By better understanding when and how providers decide to be notified of clinical results, the lessons learned in this study could influence future tool design. Improved function and utilization of such tools could potentially improve provider interaction with the EHR, reduce time to result review, and thereby improve clinical care quality.



Methods

Notification System

On March 25, 2018, our enterprise EHR (Epic Systems, Verona, Wisconsin, United States) was upgraded to include asynchronous elective notifications, allowing end-users to select laboratory, microbiology, procedure, or radiology orders for which they would receive electronic notifications when the results became available. To be eligible for a notification, the end-user needed to have the EHR smartphone app on their personal mobile device and to have signed into it within the previous 30 days. To receive a notification, the user would click a bell icon (labeled "Notify Me") next to the order in the computerized provider order entry system, which generated a single alert for each order selected. Users could also review previously ordered studies with pending results and elect to receive notifications for them via the bell icon. When an order's result became available in the EHR, the end-user received a notification on their mobile device and, optionally, on their smartwatch if applicable. Users were educated through email and via a tip-sheet made available through the hospital intranet.
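
Conceptually, this workflow is a publish/subscribe pattern: clicking the bell registers a subscription on a single order, and the arrival of that order's result triggers one push per subscriber. The sketch below is a minimal, hypothetical illustration of that pattern only; it is not Epic's architecture or API, and all class, function, and identifier names are invented for this example.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Order:
    order_id: str
    description: str
    resulted: bool = False


@dataclass
class NotificationRegistry:
    """Tracks which users asked to be notified ('Notify Me') for which orders."""
    subscriptions: dict = field(default_factory=dict)  # order_id -> set of user ids

    def notify_me(self, user_id: str, order: Order) -> None:
        # Clicking the bell icon subscribes the user to this single order.
        self.subscriptions.setdefault(order.order_id, set()).add(user_id)

    def on_result_available(self, order: Order, push: Callable[[str, str], None]) -> None:
        # When the order's result arrives, push one alert to each subscribed user.
        order.resulted = True
        for user_id in self.subscriptions.pop(order.order_id, set()):
            push(user_id, f"Result available: {order.description}")


def push_to_mobile(user_id: str, message: str) -> None:
    # Stand-in for delivery to the user's smartphone/smartwatch app.
    print(f"[push -> {user_id}] {message}")


if __name__ == "__main__":
    registry = NotificationRegistry()
    cbc = Order("ORD-1", "Complete blood count with differential")
    registry.notify_me("dr_a", cbc)        # selected at order entry
    registry.notify_me("resident_b", cbc)  # selected later, from the pending-results list
    registry.on_result_available(cbc, push_to_mobile)
```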



Design and Setting

We performed a retrospective chart review of our EHR to determine characteristics of individuals who elected to utilize the notification tool. We then surveyed those individuals who utilized the tool to better understand what influenced their use of the system. Finally, a focus group was conducted to assess ways in which the tool could be improved.

The Thomas Jefferson University enterprise had 908 acute care beds and over 2,700 physicians and practitioners caring for more than 1.4 million people annually throughout inpatient, outpatient, and ED settings.[18] This study was performed at the Center City division, which included an urban tertiary-care hospital and a smaller community hospital within the same region, as well as associated ambulatory clinics. All tests ordered at these locations were included in the analysis. The organization's institutional review board approved the human subjects research involved in the study.



Retrospective Review

Users of the notification tool were identified via a retrospective query of the EHR over an 8-month period (March 25, 2018–December 7, 2018) for all instances when a user requested to be notified of a result within the system. We collected variables including the date and time of the request for notification, the type of study for which the notification was requested, the type of provider who requested the notification (e.g., attending, fellow, resident, advanced practice provider [APP; nurse practitioner or physician assistant], transplant nurse coordinator, or nurse) and the requester's primary specialty as documented in our EHR. Individual orders were grouped based on the type of laboratory test, procedure, or imaging modality. We computed descriptive statistics for provider and order characteristics.



Survey Instrument

To our knowledge, there is no externally validated instrument to assess an individual's interest in and use of result notifications. Therefore, we created a novel survey instrument. Questions investigated respondents' level of appreciation for the notification system, its perceived ease of use, and its impact on patient care, workflow improvement, and patient harm. Results were recorded using categorical scales (1–10), yes/no answers, multiple-choice questions, and free-text responses. The full survey is included in Appendix A.



Survey

From the providers identified during the retrospective chart review, we generated a list of all previously identified users, excluding those who had left the institution, and sent an email inviting those individuals to participate in the survey described above, administered using Qualtrics survey software (Qualtrics, Provo, Utah, United States). Providers who consented to the survey submitted demographic information including their medical specialty and provider type. Participants spent approximately 15 minutes completing the survey and did not receive compensation or incentives for participating. Descriptive statistics were calculated for survey responses. If some questions were answered but the entire survey was not completed, the partial answers were included in the analysis.



Focus Group

After analyzing the survey responses, individuals who consented to participate in the survey were again contacted via email to establish interest and consent to participate in a focus group to better understand their utilization of the system. We employed a modified grounded theory[19] approach to conduct the focus group. After obtaining informed consent, participants took part in open-ended discussions that were allowed to develop naturally while the interviewer took notes. Codes, concepts, and categories were identified from the thematic analysis of the focus group discussions.



Statistics

All statistics were calculated using standard methods with R statistical software (R Core Team, 2020).
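
The specific R procedures are not described beyond "standard methods." As an illustration only, the sketch below (in Python rather than R, with the Wilson interval assumed for proportions since the interval type is not stated) shows the kinds of summaries reported in the Results: a median with interquartile range and a 95% confidence interval for a category proportion.

```python
import numpy as np
from statsmodels.stats.proportion import proportion_confint

# Median and interquartile range for a set of times (toy values, in minutes)
times_to_read = np.array([5, 16, 40, 120, 274, 600, 1756, 2400])
median = np.median(times_to_read)
q1, q3 = np.percentile(times_to_read, [25, 75])
print(f"median = {median:.0f} min (IQR: {q1:.0f}-{q3:.0f})")

# 95% CI for a proportion, e.g., chemistry orders among all notified orders
count, nobs = 682, 2294
low, high = proportion_confint(count, nobs, alpha=0.05, method="wilson")
print(f"proportion = {count / nobs:.2%} (95% CI: {low:.2%}-{high:.2%})")
```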



Results

During the study period, 1,846,911 laboratory, microbiology, procedure, or radiology orders were placed that could have been chosen for elective notification, provided the end-user met the criteria described above. Of these, 3,291 notification requests were placed for 2,294 unique orders (0.12% of the total). About 17% (391 of 2,294) of orders had more than one user request a notification. The 2,294 orders were placed by 646 unique providers, who comprised 15.76% of the 4,098 providers who could potentially place an order during our study period. When comparing the median number of orders per provider with and without notifications, the estimated proportion of orders with notifications requested was 3.06% (95% confidence interval [CI]: 2.14–3.97%) per provider per week.

Encounter Demographics

Of those encounters where an order associated with a notification was placed, 97.90% (3,222/3,291) were hospital-level encounters, while 0.82% (27/3,291) were ambulatory encounters. In addition, 1.25% (41/3,291) were laboratory or “orders-only” encounters and 1 (0.03%) was an ancillary procedure encounter.



User Demographics

Of the users who placed orders associated with notifications, the majority (60.37%, 390/646) were attendings, while 29.10% (188/646) were residents, 8.05% (52/646) were nurse practitioners, 1.55% (10/646) were fellows, 0.46% (3/646) were physician assistants, and 0.46% (3/646) were transplant nurse coordinators.

Of the 646 providers, 128 (19.81%) carried more than one specialty association (with a maximum of four specialties), resulting in 802 total provider-to-specialty relationships. The largest represented specialty regarding elective notifications was internal medicine with 200 providers (24.94%). The number of individuals by specialty and provider type for those specialties who ordered more than five notifications is represented in [Fig. 1].

Fig. 1 Breakdown of the number of providers based on provider type for each specialty where that specialty ordered more than five notifications in the study period.

Most providers placed two or fewer notification requests in the study period (412/646, 63.78%), while 31 providers (4.80%) placed 10 or more. The maximum was 612 notifications by one provider, representing almost 19% (612/3,291) of total notifications requested; the second-highest number from a single provider was 404, accounting for 12.27% of the total.



Order Demographics

There were 249 unique orderable tests, divided among 19 groups (i.e., chemistry, X-ray, pathology, etc.), that comprised the 2,294 orders associated with notifications in our results. The most frequently notified order group was chemistry tests, making up 29.73% (682/2,294 [95% CI: 27.87–31.65%]) of orders. The breakdown of each group of orderable tests and the percentage of notifications associated with that group are displayed in [Table 1]. The most frequently requested individual test overall was the "complete blood count with differential" with 303 requests (13.21% of 2,294 [95% CI: 11.86–14.68%]), followed by the "chem 7 panel" with 249 requests (10.85% of 2,294 [95% CI: 9.63–12.23%]). It should be noted that there are variations between individual orders (i.e., complete blood count with differential versus complete blood count) that make reporting frequencies of individual orders less reliable than reporting the group within which these orders belong (i.e., chemistry, hematology, etc.).

Table 1 Breakdown of orderable tests including frequency, percentage of the total, and confidence interval for each category

Order category        N      %      95% CI
Chemistry             682    29.73  27.87–31.65
Hematology            572    24.93  23.19–26.77
Ultrasound            433    18.88  17.31–20.55
CT                    281    12.25  10.95–13.68
Coagulation           103    4.49   3.70–5.44
MRI                   71     3.10   2.44–3.91
X-ray                 67     2.92   2.29–3.72
Other                 21     0.92   0.58–1.42
Urine                 14     0.61   0.35–1.05
Body fluids           9      0.39   0.19–0.77
Infectious disease    9      0.39   0.19–0.77
Blood gas             8      0.35   0.16–0.72
Nuclear medicine      8      0.35   0.16–0.72
Immunology            7      0.31   0.13–0.66
Endocrine             3      0.13   0.03–0.42
Procedure             2      0.09   0.02–0.35
Echo                  2      0.09   0.02–0.35
ECG                   1      0.04   0.002–0.28
Pathology             1      0.04   0.002–0.28
Total                 2,294  100

Abbreviations: CI, confidence interval; CT, computed tomography; MRI, magnetic resonance imaging.




Utilization

An analysis of variance demonstrated significant variability in the weekly utilization of the tool during our 29-week study period (p < 0.001), with a Tukey procedure demonstrating 16 pairwise comparisons with statistically significant differences (p < 0.05) in utilization. Many of these significant pairwise comparisons included study week 9, where a maximum of 74 notifications were requested in a single day and a single provider accounted for 47 (63.51%) of those. [Fig. 2] presents boxplots of the frequency of daily notifications by week.

Fig. 2 Boxplots of the frequency of the daily notifications in each week.
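
For readers unfamiliar with the analysis described above, the following sketch illustrates a one-way ANOVA across study weeks followed by Tukey's HSD for pairwise weekly comparisons. It uses invented daily counts and Python libraries (scipy, statsmodels) rather than the authors' R code, so it reproduces the method in outline only.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
daily = pd.DataFrame({
    "week": np.repeat(np.arange(1, 30), 7),            # 29 study weeks x 7 days
    "notifications": rng.poisson(lam=16, size=29 * 7),  # hypothetical daily counts
})

# Overall test for differences in mean daily counts across weeks
groups = [g["notifications"].to_numpy() for _, g in daily.groupby("week")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Tukey procedure: which pairs of weeks differ at p < 0.05
tukey = pairwise_tukeyhsd(daily["notifications"], daily["week"], alpha=0.05)
print(f"Significant pairwise weekly differences: {int(tukey.reject.sum())}")
```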


Result Review

Of the 3,291 requested notifications, 2,393 (72.71%) were reviewed by the requesting provider via the notification on their personal device, while the remaining 898 (27.29%) were not reviewed via the notification, though the results may have been made available and reviewed through alternative means such as patient list icons or trackboard notifications. Of the reviewed notifications, the median time from message sent to message read was 274 minutes (interquartile range: 16–1,756 minutes). This is the time at which a provider opened the notification message on their device; however, health care providers did not have to mark the message itself as read to review the result of the order in the patient's chart.



Survey Results Demographics

There were 141 individual providers (21.8% of 646) who interacted with our electronic survey. Of these, 124 (87.94% of 141) consented to participate, and 73 (58.87% of 124, 51.77% of 141, and 11.30% of 646) completed the survey. Of the respondents, 39 (52.60%) identified as attending physicians, while 30 (41.10%) were residents, 6 (8.23%) were APPs, and 1 (1.37%) was a nurse. The largest proportion of respondents came from internal medicine (16/73, 21.98%) (see [Table 2]).

Table 2 The number of respondents from the survey study broken down by specialty and provider type with percentages

Specialty                Attending     In training   APP        Nurse      Total (%)
Internal medicine        9             6             1          0          16 (21.62%)
Emergency medicine       7             3             0          0          10 (13.51%)
General surgery          0             5             2          0          7 (9.46%)
Medical oncology         2             0             1          0          3 (4.05%)
Anesthesiology           0             2             0          1          3 (4.05%)
Cardiothoracic surgery   2             0             0          0          2 (2.70%)
Family medicine          1             1             0          0          2 (2.70%)
Hematology/oncology      1             0             1          0          2 (2.70%)
Neurology                0             1             1          0          2 (2.70%)
Otolaryngology           2             0             0          0          2 (2.70%)
Pulmonary                1             1             0          0          2 (2.70%)
Cardiology               1             0             0          0          1 (1.35%)
Endocrinology            1             0             0          0          1 (1.35%)
Gastroenterology         1             0             0          0          1 (1.35%)
Geriatric medicine       1             0             0          0          1 (1.35%)
OBGYN                    0             1             0          0          1 (1.35%)
Psychiatry               0             1             0          0          1 (1.35%)
Radiation oncology       0             1             0          0          1 (1.35%)
Trauma surgery           1             0             0          0          1 (1.35%)
Urology                  1             0             0          0          1 (1.35%)
Vascular medicine        1             0             0          0          1 (1.35%)
No reported specialty    7             6             0          0          13 (17.57%)
Total                    39 (52.60%)   28 (37.84%)   6 (8.11%)  1 (1.35%)  74 (100%)

Abbreviations: APP, advanced practice provider; OBGYN, obstetrics and gynecology.




Survey Results

In total, 21 respondents (28.77% of 73, 95% CI: 19.07–40.72%) reported they had never used the tool, despite our inclusion criteria. Seventeen (23.29%, 95% CI: 14.52–34.91%) stated they rarely used the tool, while 14 (19.18%, 95% CI: 11.25–30.42%), 12 (16.44%, 95% CI: 9.14–27.35%), and 9 (12.33%, 95% CI: 6.14–22.61%) stated they used the tool monthly, 4 to 6 times per week, and daily, respectively. Of the 9 who reported daily use, 5 reported 5 to 10 uses per week, 3 reported 10 to 15, and 1 reported 15 or more.

The majority of respondents reported requesting notifications during (26/73 [35.62%], 95% CI: 25.00–47.76%) or after (19/73 [26.03%], 95% CI: 16.77–37.84%) patient encounters. The main themes identified for why they requested notifications at these times were importance (9/24 [37.5%], 95% CI: 19.55–59.24%) and time sensitivity (5/24 [20.83%], 95% CI: 7.94–42.71%). In addition, 20 of the 52 respondents who answered this question (38.46%, 95% CI: 25.63–52.99%) reported occasional unintentional use of the tool. We asked providers what percentage of the time they selected notifications at the time of order entry versus after laboratory tests had been ordered and were awaiting results. The median percentage of the time selecting orders for notification at the time of order entry was 50% (95% CI: 10–75%), and the median percentage of the time selecting notifications after order entry was completed was 50% (95% CI: 20–90%). Our survey also included Likert-style questions on an ascending scale (1 = negative opinion, 10 = positive opinion). Responses to Likert-style questions are presented in [Table 3].

Table 3 Likert-scale questions (1 = low, 10 = high) with median response and 95% confidence interval (CI), N = 66

Question                                                                    Median   95% CI
How much do you like the bell result notification tool?                     7        6–9
How easy is it to use?                                                      7        6–8
How likely are you to recommend using this tool to a colleague?             7        6–8
How often do you remember to use it?                                        4        3–5
How likely is it to help patients?                                          7        6–8
How likely is it to speed up workflow?                                      7        6–8
How likely is it to save lives?                                             5        4–5
How likely is it to allow you to spend less time on workstations?           5        4–5
How likely is it to allow you to spend more time with direct patient care?  5        5–5
How likely is it to allow the computer to work more for you?                7        6–9
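
The paper does not state how the 95% confidence intervals for these medians were derived. A bootstrap percentile interval is one common choice; the sketch below applies it to invented Likert ratings and is an assumption about the approach, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical 1-10 Likert ratings for one survey item
ratings = np.array([7, 8, 6, 9, 7, 5, 7, 8, 6, 7, 9, 4, 7, 8, 6])

# Bootstrap percentile 95% CI for the median
boot_medians = np.array([
    np.median(rng.choice(ratings, size=ratings.size, replace=True))
    for _ in range(10_000)
])
low, high = np.percentile(boot_medians, [2.5, 97.5])
print(f"median = {np.median(ratings):.0f} (bootstrap 95% CI: {low:.0f}-{high:.0f})")
```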

Of those respondents who answered the following questions, 21 of 52 (40.38%, 95% CI: 27.31–54.87%) answered "yes" when asked whether there had been an event where the tool improved patient care. Of these, seven reported specific details about how and why they felt the tool improved patient care. Generally, these responses indicated that providers felt the tool improved result review time and, in some cases, made them aware of unanticipated results. Users also cited specific clinical situations such as managing heparin drips or following up surgical pathology results. The specific responses from end-users are presented in [Table 4].

Table 4

Written responses provided by respondents to times when the tool improved patient care

Yes, prompt awareness of communication of results supports patients engaging in their care.

I received the results earlier than if I had just been routinely reviewing the laboratories.

Alerted me to an ultrasound result that was positive that I was unaware of.

When following heparin drips or other tests I am waiting on, instead of mindlessly clicking in epic and continually logging in to check if it's back I just wait for my phone to go off saving me time and making sure that I get the result in real time. It's important not to add the alert to too many tests because then it would become overwhelming with phone alerts, specifically when you have multiple patients with alerts that are set.

I used the bell notification for surgical pathology. As a resident, the OR nurses place the pathology order under the attending's name, so the result (which can take 3–7 days) only goes to their inbox.

Discharged patient with pending laboratories. Followed up appropriately.

I wouldn't say it has improved them drastically, but I do see results a little earlier sometimes (maybe by 30 min at the most?). Usually, it just saves me from having to keep checking.

Allows for quick action.

A single respondent reported an experience where the notification caused “harm to a patient” but did not elaborate in the survey.



Focus Group

Four providers who responded to our survey agreed to participate in a follow-up focus group. The small number of focus group participants meant that saturation was not reached, limiting the viability of grounded theory results. However, several principal categories were compiled: (1) respondents confirmed themes of improved patient care and provider satisfaction, with specific statements of improved response to time-sensitive results that allowed for clinical decisions; (2) respondents stated the tool reduced cognitive load and allowed for prioritization when performing high-volume tasks; and (3) respondents highlighted effective workflow integration of the tool when they were away from direct interaction with the EHR interface. Respondents appreciated notifications on their personal devices, including smartwatches, despite some initial barriers such as required security software.



Discussion

Automated notification of critical results has been demonstrated to improve aspects of patient care.[7] [10] [20] [21] [22] [23] Previous literature reviews have demonstrated that there is significant variability in research on this topic, though many systems that have been studied are of home-grown design and not vendor-based.[5] While most notification systems are automated, little is known about elective notifications, where the provider chooses which results they are notified of, especially with vendor-based systems. Poor data organization in a wealth of clinical alerts is an established cause of nonelectronic workarounds (i.e., paper lists),[24] and elective notifications may allow self-prioritization without resorting to workarounds. To our knowledge, there is only one study (by Koziatek et al) that examines the effect of vendor-based elective electronic notifications on patient care, and it is limited to the ED setting.[17] Even less is known about what motivates a provider to be notified at the time of elective notification.

In this study, we utilized multiple methods to assess the utilization of a vendor-based elective electronic notification system and to characterize providers' assessments and opinions of it. We retrospectively analyzed the frequency of use of the tool and characteristics of the orders associated with its use, as well as the demographics of the users. We then surveyed those users and performed a focus group interview to learn how and why they used the tool. Overall, a small proportion (0.12%) of orders was associated with the use of the tool. We examined use at the order level, but low rates have been previously reported by Koziatek et al, who reported that only 2.7% of ED encounters used a similar style of notification.[17] Interestingly, our results showed approximately one in six results (17.04%, 391/2,294) had more than one notification request associated with it, implying that multiple providers felt that the result warranted notification and may have been especially important to patient care. Despite a relatively small number of our overall orders having notifications, more than 15% of providers utilized the tool during the study period, implying at least some perceived value in it, with a predominance toward use in the hospital setting. Most (over 70%) of the requested notifications were reviewed, with a median time to reviewing a result message of approximately 4.5 hours. Result notifications in our system are a form of message that is presented to a provider and can be reviewed in their "in-basket." It is important to note that the message itself does not need to be reviewed to review the order result. A provider can receive a notification that a laboratory is complete, and instead of reviewing it on their personal device as a message, they can go to the workstation and review it in the patient's chart. Providers may also be made aware of new results via icons on patient lists or trackboards. Only later might they go into their in-basket and mark the message as reviewed. This could potentially lead to the prolonged time to review we observed in our results.

The tool was mostly used by attending physicians and residents (over 89%), with the highest frequency of use coming from internal medicine (∼25%) and the highest-frequency order class being chemistry laboratories (∼30%). This finding is similar to what Poon et al reported, i.e., that chemistry tests were some of the most frequently chosen tests for elective notification systems based on alphanumeric pages.[15] However, Koziatek et al observed that residents made up the largest proportion of users in their study, though it was limited to the ED.

Since its implementation, the frequency of use of our tool has waxed and waned weekly. This variation implies there are likely many variables contributing to the use of the tool. For instance, a user cited heparin drips as a reason for its use, so variability in the need for heparin drips could influence utilization. Seasonal disease processes (such as influenza) and associated testing may have also influenced variability in the utilization of the tool, as could transitions in training (i.e., utilization drops when new medical residents who are unfamiliar with the tool begin practicing at the start of their residency). Further research on associated diagnosis codes and association with training level could explore this hypothesis. In addition to patient-level characteristics (i.e., observed pathologies), provider characteristics may have influenced our observations. For instance, periods where high-frequency utilizers are working clinically may influence the decision to use the tool. We identified a single user during week 9 who contributed over 60% of the notifications in 1 day, and two providers contributed more than 30% of all notifications in our study period. Given the potential for provider preference to play such a critical role in utilization trends, exploring ways to reinforce this preference could impact the tool's overall use. It is also unclear how cognizant providers were of the availability of the tool, and if the educational initiative was adequate to generate awareness. Modifications to initial and subsequent training programs could promote utilization of this and other resources available in the EHR.

In addition to heparin drips, providers cited surgical pathology, radiology results, and discharge follow-up planning as cases where the tool improved patient care (see [Table 4]). An additional unanticipated value of the tool was for trainees to become alerted to tests their attending had ordered, when these results would normally only be sent to the attending. Our assumption was that ordering providers would be the primary utilizers of this tool to alert themselves of results. Additional providers subscribing to results that would otherwise not be automatically reported to them demonstrates such a tool's malleability, expands upon its original intent, and should be studied in the future.

Despite over 600 users of the tool, only approximately 11% completed our survey, though over 50% of those who interacted with the survey completed it. Consistent with overall use of the tool, the majority of respondents were physicians from internal medicine. Surprisingly, despite our survey only being sent to those individuals who had previously used the tool, a large proportion (28.8%) reported they had never used it. It is possible they used the tool accidentally and that the notification is too easy to activate unintentionally. Developing a feedback process to confirm that providers intended to activate the alerts could improve usability and intentional requests for notifications. Alternatively, our survey might not have been explicit enough in describing the intervention (the tool) in question, and respondents were unclear to which functionality we were referring. Future studies could examine the frequency of unintentional use of this tool and whether it impacts end-users' future awareness and use, as well as consider a sample of those users who do not use the tool to understand what influences their decision to avoid it.

There was variability in the frequency of use, but most respondents reported using the tool during or right after patient encounters and cited the priority of results as a reason for its use. Overall, respondents appeared to like the tool and felt it helped patients, though they did not always remember to use it. Some of the proposed value propositions of notifications are reduced time spent performing result review and increased time in direct patient care. However, the perceived impact of this tool was underwhelming for each of these measurements.

Our focus group was under-saturated and may be biased by participation of those respondents who had strong positive opinions about the tool. However, several themes emerged that warrant deeper investigation. These include: (1) the prioritization of patient care as a reason for using the tool, (2) a reduction in cognitive labor that allows for multitasking and rapid responses, and (3) novelty of workflow integration despite some barriers. These themes fit many of the ideas associated with our survey results, including benefiting patient care and workflow improvements. However, our small number of participants makes it difficult to rely on these results to construct a theory from our data. The perceived capacity to self-prioritize patient care with a concurrent reduction in cognitive load may demonstrate one of the value propositions of elective notifications: increased physician autonomy. Promoting the idea that the system works for the health care provider, and not vice versa, holds the potential to reduce provider burnout.



Conclusion

Implementing a vendor-based elective notification system demonstrated variable weekly use, most frequently by internal medicine physicians and with a preference for chemistry laboratories. Survey results of end-users demonstrated overall approval of the tool and perceived benefit to patient care, though the supposed value propositions of reduced time at workstations and improved patient care were not measurably demonstrated. Themes of improved patient care, reduced cognitive labor, and moderate success of workflow integration with limited barriers were identified in a focus group. Some of these themes may indicate ways to relieve factors that contribute to health care provider burnout. Our study suggests that EHR vendors without a similar notification system may consider developing one, and that vendors with existing tools should consider provider-level metrics, such as frequency of utilization and time to result review, to determine the clinical value of these elective alerts and potentially increase their utilization and optimization. Further research on not only how elective notifications impact patient care, but also how health care providers choose to be notified, holds the potential to optimize asynchronous clinical decision support tools and improve quality of care. Specifically, the efficacy of these tools would be substantiated by future studies demonstrating definitive clinical situations in which elective notifications in a vendor-based system shortened time to result review, reduced cognitive load through fewer manual checks of the EHR, or reduced the need for automated alerts.



Clinical Relevance Statement

Our research on vendor-based elective notification systems adds to the understanding of when and how providers decide to be notified of clinical results. The ability to self-prioritize notifications, and thus possibly reduce time to review of results already deemed clinically relevant, may improve quality of care.



Multiple Choice Questions

  1. With regards to EHR-based clinical notification systems,

    a. The majority studied were developed in home-grown EHRs.

    b. The majority studied were developed in vendor-based EHRs.

    c. None have been previously developed.

    d. Nearly all notifications are subscription-based.

    Correct Answer: The correct answer is option a. Most of the research surrounding EHR-based notification systems comes from home-grown systems that were mandatory in nature, meant to reduce time to awareness for health care providers of important results. Little is known about subscription-based notifications and how they are used in vendor-based EHRs.

  2. Which of the following classes made up the majority of orders associated with notifications in this study?

    a. Hematology.

    b. Chemistry.

    c. Ultrasound.

    d. CT.

    Correct Answer: The correct answer is option b. Chemistry tests were the most frequently ordered class making up nearly 30% of all notifications. This was followed by hematology, then ultrasound, and then CT.


Appendix A

Survey Instrument

  • 1) What service are you from?

  • 2) Are you in training or an attending?

  • 3) What PGY year are you if in training?

  • 4) How long is your program?

  • 5) On a scale of 1–10 how much do you like the tool?

  • 6) On a scale of 1–10 how much would you recommend it?

  • 7) On a scale of 1–10 how easy is it to use?

  • 8) On a scale of 1–10 do you always remember to use it?

  • 9) On a scale of 1–10 how likely is it to help patients?

  • 10) On a scale of 1–10 how likely is it to speed up workflow?

  • 11) On a scale of 1–10 how likely is it to save lives?

  • 12) On a scale of 1–10 how likely is it to allow you to spend less time on workstations?

  • 13) On a scale of 1–10 how likely is it to allow you to spend more time with direct patient care?

  • 14) On a scale of 1–10 how likely is it to allow the computer to work more for you?

  • 15) When do you usually order clinical tests in your workflow?

  • 16) How often do you select to be notified of these results via “bell” notifications?: daily, 4–6 times per week, once a month, rarely, never.

  • 17) How frequently do you use it per week?: 5–10, 10–15, 15+ per week.

  • 18) What prompts you to be notified?

  • 19) Do you always use this tool?

  • 20) Are there times when you prefer versus times when you don't?

  • 21) Have you ever unintentionally used or not used the notification tool?

  • 22) Have you had any experiences where it improved patient outcome or caused harm?

  • 23) Any other comments?



Conflict of Interest

None declared.

Protection of Human and Animal Subjects

This study was approved by the Institutional Review Board of Thomas Jefferson University and determined to be of minimal risk to participants.

  • References

  • 1 Henry J, Pylypchuk Y, Searcy T. et al. Adoption of electronic health record systems among US non-federal acute care hospitals: 2008–2015. ONC Data Brief 2016; 35: 1-9
  • 2 Salmasian H, Landman AB, Morris C. An electronic notification system for improving patient flow in the emergency department. AMIA Jt Summits Transl Sci Proc 2019; 2019: 242-247
  • 3 Mamykina L, Vawdrey DK, Hripcsak G. How do residents spend their shift time? A time and motion study with a particular focus on the use of computers. Acad Med 2016; 91 (06) 827-832
  • 4 Sinsky C, Colligan L, Li L. et al. Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties. Ann Intern Med 2016; 165 (11) 753-760
  • 5 Slovis BH, Nahass TA, Salmasian H, Kuperman G, Vawdrey DK. Asynchronous automated electronic laboratory result notifications: a systematic review. J Am Med Inform Assoc 2017; 24 (06) 1173-1183
  • 6 Samal L, Stavroudis T, Miller R, Lehmann H, Lehmann C. Effect of a laboratory result pager on provider behavior in a neonatal intensive care unit. Appl Clin Inform 2011; 2 (03) 384-394
  • 7 Kuperman GJ, Teich JM, Bates DW. et al. Detecting alerts, notifying the physician, and offering action items: a comprehensive alerting system. Proc AMIA Annu Fall Symp 1996; 704-708
  • 8 Tate KE, Gardner RM, Scherting K. Nurses, pagers, and patient-specific criteria: three keys to improved critical value reporting. Proc Annu Symp Comput Appl Med Care 1995; 164-168
  • 9 Etchells E, Adhikari NK, Cheung C. et al. Real-time clinical alerting: effect of an automated paging system on response time to critical laboratory values–a randomised controlled trial. Qual Saf Health Care 2010; 19 (02) 99-102
  • 10 Kuperman GJ, Teich JM, Tanasijevic MJ. et al. Improving response to critical laboratory results with automation: results of a randomized controlled trial. J Am Med Inform Assoc 1999; 6 (06) 512-522
  • 11 O'Connor SD, Khorasani R, Pochebit SM, Lacson R, Andriole KP, Dalal AK. Semiautomated system for nonurgent, clinically significant pathology results. Appl Clin Inform 2018; 9 (02) 411-421
  • 12 ONC. 2015 Edition Market Readiness for Hospitals and Clinicians. Health IT Quick-Stat #55. Office of the National Coordinator for Health Information Technology; March 2019. Accessed June 12, 2022 at: https://www.healthit.gov/data/quickstats/2015-edition-market-readiness-hospitals-and-clinicians
  • 13 Friedman CP. A “fundamental theorem” of biomedical informatics. J Am Med Inform Assoc 2009; 16 (02) 169-170
  • 14 Arndt BG, Beasley JW, Watkinson MD. et al. Tethered to the EHR: primary care physician workload assessment using EHR event log data and time-motion observations. Ann Fam Med 2017; 15 (05) 419-426
  • 15 Poon EG, Kuperman GJ, Fiskio J, Bates DW. Real-time notification of laboratory data requested by users through alphanumeric pagers. J Am Med Inform Assoc 2002; 9 (03) 217-222
  • 16 Melnick ER, Dyrbye LN, Sinsky CA. et al. The association between perceived electronic health record usability and professional burnout among US physicians. Mayo Clin Proc 2020; 95 (03) 476-487
  • 17 Koziatek C, Swartz J, Iturrate E, Levy-Lambert D, Testa P. Decreasing the lag between result availability and decision-making in the emergency department using push notifications. West J Emerg Med 2019; 20 (04) 666-671
  • 18 About Us. Thomas Jefferson University, 2021. Accessed June 16 2022 at: https://hospitals.jefferson.edu/about-us.html.
  • 19 De Chesnay M. Nursing Research Using Grounded Theory: Qualitative Designs and Methods in Nursing. New York, NY: Springer Publishing Company, LLC; 2015
  • 20 Piva E, Sciacovelli L, Zaninotto M, Laposata M, Plebani M. Evaluation of effectiveness of a computerized notification system for reporting critical values. Am J Clin Pathol 2009; 131 (03) 432-441
  • 21 Chen TC, Lin WR, Lu PL. et al. Computer laboratory notification system via short message service to reduce health care delays in management of tuberculosis in Taiwan. Am J Infect Control 2011; 39 (05) 426-430
  • 22 Liebow EB, Derzon JH, Fontanesi J. et al. Effectiveness of automated notification and customer service call centers for timely and accurate reporting of critical values: a laboratory medicine best practices systematic review and meta-analysis. Clin Biochem 2012; 45 (13–14): 979-987
  • 23 Verma A, Wang AS, Feldman MJ, Hefferon DA, Kiss A, Lee JS. Push-alert notification of troponin results to physician smartphones reduces the time to discharge emergency department patients: a randomized controlled trial. Ann Emerg Med 2017; 70 (03) 348-356
  • 24 Menon S, Murphy DR, Singh H, Meyer AN, Sittig DF. Workarounds and test results follow-up in electronic health record-based primary care. Appl Clin Inform 2016; 7 (02) 543-559

Address for correspondence

Benjamin H. Slovis, MD, MA
Department of Emergency Medicine, Thomas Jefferson University
1020 Sansom Street, Thompson Building, Room 239, Philadelphia, PA 19107
United States   

Publication History

Received: 11 January 2022

Accepted: 25 May 2022

Article published online:
13 July 2022

© 2022. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany
