CC BY-NC-ND 4.0 · Appl Clin Inform 2019; 10(02): 295-306
DOI: 10.1055/s-0039-1688478
Research Article
Georg Thieme Verlag KG Stuttgart · New York

Diffusing an Innovation: Clinician Perceptions of Continuous Predictive Analytics Monitoring in Intensive Care

Rebecca R. Kitzmiller
1   School of Nursing, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States
,
Ashley Vaughan
1   School of Nursing, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States
,
Angela Skeeles-Worley
2   Curry School of Education and Human Development, University of Virginia, Charlottesville, Virginia, United States
,
Jessica Keim-Malpass
3   School of Nursing, University of Virginia, Charlottesville, Virginia, United States
,
Tracey L. Yap
4   School of Nursing, Duke University, Durham, North Carolina, United States
,
Curt Lindberg
5   Billings Clinic, Billings, Montana, United States
,
Susan Kennerly
6   College of Nursing, East Carolina University, Greenville, North Carolina, United States
,
Claire Mitchell
2   Curry School of Education and Human Development, University of Virginia, Charlottesville, Virginia, United States
,
Robert Tai
2   Curry School of Education and Human Development, University of Virginia, Charlottesville, Virginia, United States
,
Brynne A. Sullivan
7   Division of Neonatology, University of Virginia, Charlottesville, Virginia, United States
,
Ruth Anderson
1   School of Nursing, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States
,
Joseph R. Moorman
8   Departments of Cardiology and Biomedical Engineering, University of Virginia, Charlottesville, Virginia, United States
9   Center for Advanced Medical Analytics, University of Virginia, Charlottesville, Virginia, United States
Funding This study was funded by the National Center for Advancing Translational Sciences (Grant/Award Number: ‘KL2TR001109’), Mitre Corporation (Grant/Award Number: ‘Contract No 109140-Phase 1 & 2’), and University of Virginia (Grant/Award Number: ‘THRIV’).

Address for correspondence

Rebecca R. Kitzmiller, PhD, MHR, RN, BC
University of North Carolina at Chapel Hill
4108 Carrington Hall, CB 7460, Chapel Hill, NC 27599
United States   

Publication History

Received: 29 October 2018

Accepted: 18 March 2019

Publication Date:
01 May 2019 (online)

 

Abstract

Background The purpose of this article is to describe neonatal intensive care unit clinician perceptions of a continuous predictive analytics technology and how those perceptions influenced clinician adoption. Adopting and integrating new technology into care is notoriously slow and difficult, and realizing expected gains remains a challenge.

Methods Semistructured interviews with a cross-section of neonatal physicians (n = 14) and nurses (n = 8) from a single U.S. medical center were conducted 18 months following the conclusion of the predictive monitoring technology randomized controlled trial. Following qualitative descriptive analysis, innovation attributes from Diffusion of Innovation Theory guided thematic development.

Results Results suggest that the combination of physical location and lack of integration into workflow or into methods of using data in care decision making may have delayed clinicians' routine attention to the data. Once data were routinely collected, documented, and reported during patient rounds and patient handoffs, clinicians came to view the data as another vital sign. Through observation of senior physicians and nurses and ongoing dialogue about data trends and patient status, clinicians learned how to integrate these data into care decision making (e.g., differential diagnosis) and came to value the technology as beneficial to care delivery.

Discussion The use of newly created predictive technologies that provide early warning of illness may require implementation strategies that acknowledge the risk–benefit balance clinicians must weigh in treatment decisions and that take advantage of existing clinician training methods.



Background and Significance

New technologies, such as predictive analytics, hold the potential to dramatically improve knowledge about illnesses and their effective treatment.[1] [2] [3] [4] Predictive technologies are designed to assess for and warn of patient risk hours to days in advance of clinical signs, toward the goals of early clinical intervention and improved patient outcomes.[5] [6] [7] [8] However, risk of illness does not guarantee that a patient will develop that illness; thus, clinicians must balance the benefits of early intervention (e.g., resolving infection prior to sepsis onset) with the negative consequences of delayed or unnecessary treatment (e.g., developing antibiotic-resistant organisms). To date, predictive analytics studies have primarily focused on statistical model development and accuracy, rather than on clinicians' acceptance and adoption of predictive technologies into care.[9] [10] [11] [12] [13] [14] Among studies that did evaluate clinician use, results underscore clinicians' difficulty in translating risk prediction into medically actionable interventions.[6] [15] Known technology implementation challenges (e.g., poor design,[16] misalignment between system design and care processes,[17] [18] [19] [20] [21] [22] [23] [24] changes to communication and care processes[25]) may further complicate use of these emerging innovations, negatively impact efficacy trials,[26] [27] and delay systematic adoption.[6] This article describes neonatal intensive care unit (NICU) clinician perceptions of a continuous predictive analytics technology and how those perceptions influenced clinician adoption.

Continuous Predictive Analytics: Heart Rate Observation

Using heart rate data streamed from bedside electrocardiogram (ECG) monitors (RR intervals and derived measures: sample asymmetry, standard deviation, and sample entropy), a University of Virginia (UVA) interdisciplinary team (neonatology, cardiology, statistics, biomedical engineering) developed mathematical algorithms to discriminate neonatal sepsis and sepsis-like illness.[28] The team then created heart rate observation (HeRO), a predictive analytics technology that visualizes the algorithm's output on a dedicated monitor, and refined its functions in response to neonatologist feedback ([Fig. 1]). HeRO calculated and displayed a neonate's fold-increased risk of developing sepsis in the next 24 hours, where a score of 1 represents the baseline sepsis risk among all neonates. This score was known as the heart rate characteristic (HRC) index. Updated hourly, the monitor was designed to display a 5-day trend ([Fig. 1]: orange, top) and indicate the highest HRC ([Fig. 1]: yellow vertical line) with corresponding raw heart rate data ([Fig. 1]: green, bottom), with controls allowing users to scroll back through time.[29]

Fig. 1 Heart rate observation (HeRO) monitor-visualizing heart rate characteristics index, corresponding heart rate pattern, and controls to scroll forward and backward in time.
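To make the algorithm's inputs concrete, the sketch below computes the three heart rate characteristics named above (standard deviation, sample asymmetry, and sample entropy) from a window of RR intervals. It is a minimal illustration under stated assumptions, not the HeRO algorithm itself: the published HRC index combines such features in a regression model that is not reproduced here, the sample-asymmetry calculation shown (squared deviations above vs. below the median RR interval) is a simplified stand-in for the published measure, and all function and variable names are our own.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy (SampEn) of a 1-D series; lower values indicate more regularity."""
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)
    n = len(x)

    def count_matches(k):
        # Overlapping templates of length k; count pairs within tolerance r (Chebyshev distance)
        templates = np.array([x[i:i + k] for i in range(n - k)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Exclude self-matches on the diagonal, count each pair once
        return (np.sum(dist <= r) - len(templates)) / 2.0

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def hrc_features(rr_intervals_ms):
    """Toy feature set over a window of RR intervals (milliseconds)."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    dev = rr - np.median(rr)
    return {
        "sd": np.std(rr),
        # Simplified sample asymmetry: decelerations (longer RR) vs. accelerations
        "sample_asymmetry": np.sum(dev[dev > 0] ** 2) / max(np.sum(dev[dev < 0] ** 2), 1e-9),
        "sample_entropy": sample_entropy(rr),
    }

# Example: a simulated window of RR intervals around 400 ms (roughly 150 beats/min)
rng = np.random.default_rng(0)
window = 400 + rng.normal(0, 10, size=300)
print(hrc_features(window))
```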

Following Food and Drug Administration approval, the HeRO research team conducted a parallel, two-group, individually randomized controlled trial (RCT) (NCT00307333) among 3,003 very low birth weight (< 1,500 g) neonates from 9 U.S. NICUs to determine whether HeRO improved neonatal sepsis outcomes.[29] At UVA, a single HeRO monitor ([Fig. 1]) was mounted in a central location in each of six pods to maximize visibility from the 6 to 9 beds contained in each pod.

At the beginning of the clinical trial (April 2004), the research team provided NICU clinicians with information regarding how the score was calculated and advised that a rising score might indicate the need to assess the patient and, as needed, to test or treat as appropriate.[29] The research team did not use other implementation strategies (e.g., programmatic training, decision aids, treatment protocols) to promote or improve provider engagement with HeRO or to influence the use of HeRO data in clinical care, due to concerns about unnecessary sepsis evaluations or overuse of blood cultures and antibiotics.[29] Concluded in May 2010, the RCT demonstrated a significant reduction in sepsis-related mortality among very low birth weight neonates (22%; hazard ratio = 0.78; 95% confidence interval [CI], 0.61–0.99; p = 0.04) and among extremely low birth weight neonates (26%; hazard ratio = 0.74; 95% CI, 0.57–0.95; p = 0.02).[29] Monitored infants experienced a slight increase in drawn blood cultures (10%) and days on antibiotics (5%), but the difference from nonmonitored infants was not significant (p = 0.31).[29] HeRO remains in use and has been associated with early recognition of other significant neonatal illnesses such as necrotizing enterocolitis[30] and respiratory decompensation.[30] [31] [32] HeRO represents an early example of predictive analytics using continuously available bedside ECG monitor data and may be the first to be routinely used in care delivery. Members of the UVA NICU were the first to use HeRO and represented an ideal user group from which to understand user perceptions of newly developed predictive monitoring technology.
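As a quick arithmetic check, under the usual reading of a proportional-hazards estimate the relative reduction quoted above is approximately one minus the hazard ratio:

```latex
\[
\text{relative reduction} \approx 1 - \mathrm{HR}:
\qquad 1 - 0.78 = 0.22 \;(22\%), \qquad 1 - 0.74 = 0.26 \;(26\%).
\]
```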



Guiding Framework: Diffusion of Innovation

Diffusion of Innovations (DOI) served as the theoretical lens specifically because this theory considers clinical team members' needs, motivation, values and goals, skills, learning style, and networks as core components influencing adoption of new practices.[19] The theory has proven useful for understanding adoption of care cueing,[33] surgical checklists,[34] after-visit summaries,[35] and technology in geriatric care.[36] DOI research notes that an innovation's attributes (i.e., complexity, compatibility, trialability, observability, relative advantage) influence whether and how quickly an innovation will be adopted ([Table 1]).[33] [34] [36] [37] [38]

Table 1

Innovation characteristics defined[a]

Complexity

Degree to which an innovation is perceived as difficult to understand and use; complexity can be reduced by practical experience and demonstration, or the innovation can be adopted piecemeal (p. 596)

Compatibility

Degree to which an innovation is perceived as consistent with existing values, past experiences, and needs of potential adopters (p. 596)

Trialability

Degree to which an innovation may be experimented with on a limited basis; reduces risk (p. 596)

Observability

Degree to which the results of an innovation are visible to others; visible results stimulate peer discussion of a new idea (p. 596)

Relative advantage

Degree to which an innovation is perceived as better than the one it supersedes; can be measured in economic terms, social prestige, convenience, satisfaction (p. 595)

a Rogers, E. Diffusion of Innovations. 5th ed. New York: Free Press; 2003.


If an innovation is perceived by clinicians as difficult to use, poorly integrated with existing workflow, or lacking an advantage over existing practices, clinicians are less likely to adopt it.[16] [27] [34] [39] [40] Thus, careful attention to users' perceptions of and reactions to an innovation's attributes may lead to better design and improved integration into care delivery.

An innovation's diffusion within a social system, such as a patient care unit, is influenced by communication among members.[37] Providers' negative perceptions may create implementation issues, a significant challenge when newly developed technologies undergo clinical trial effectiveness evaluation. For example, Kappen et al's study of a newly developed predictive screening tool recommending a preemptive approach to postanesthesia nausea and vomiting found that clinicians' preference for their usual, trusted treatment led to failure to use the screening tool data, actions that may have contributed to the study's null findings.[26] [27] Among studies of surgical safety checklists and innovative surgical procedures, conflict between new and existing processes led clinicians to deviate from best practice[16] [34] or abandon the new surgical procedure altogether.[41] Although clinicians' willingness to trial a newly developed innovation is a critical component of efficacy studies, predominant technology implementation strategies (classroom training, protocols) may be inappropriate because the technology lacks evidence of best use in practice.



Methods

Study Design

This study employed a cross-sectional qualitative descriptive design using individual interviews collected from an academic NICU in central Virginia.[42] Participants were recruited through a convenience sampling strategy that included any point-of-care clinician (registered nurse, respiratory therapist, nurse practitioner, attending physician) who worked in the unit and was exposed to the HeRO display monitor for any period of time. There were no exclusion criteria.



Setting

UVA Health System is a regional academic medical center located in Charlottesville, Virginia, United States. UVA's NICU was involved in the HeRO clinical trial from April 2004 to May 2010. At the time of the trial, this 45-bed, level IV NICU admitted approximately 600 neonates annually and was organized into 8 sections, or pods of 6 to 9 beds each.



Participants

Following permission from the UVA NICU medical director, NICU members (nurses, nurse practitioners, and resident, fellow, and attending physicians) were contacted by email inviting participation in a qualitative study of medical decision making using HeRO. A follow-up email was sent 2 weeks later. Respondents were scheduled for in-person or telephonic semistructured interviews during January and February 2012. Consent was obtained from each participant at the time of the interview. The study received ethics approval (UVA Institutional Review Board-SBS #2015–0352).



Data Collection

Following an interview guide ([Appendix A]), semistructured interviews with open-ended questions were conducted in person and telephonically, audio recorded, transcribed verbatim, and imported into ATLAS.ti (Scientific Software Development GmbH, Berlin, Germany). Due to study staff turnover, these data were not analyzed at the time of collection.

Appendix A

Semistructured interview guide

Date:

Profession

MD-Attending; MD-Fellow; MD-Resident; Medical Student;

Nurse Practitioner; Registered Nurse

How long have you been a [physician, nurse, student]?

How long have you worked in the neonatal intensive care unit (NICU)?

Please describe your routine on a typical day in the NICU.

 When you walk up to a baby, what do you do to assess the situation? What's the very first thing that you physically do?

There are a lot of devices and data displayed in the NICU, which ones do you pay the most attention to? What kind of information do you get from them?

If a new piece of equipment arrived in the NICU, how would you integrate a new approach into your practice of taking care of infants?

Do you feel there are any personal obstacles that could prevent you from incorporating a new approach into your practice?

Can you tell me a little bit about when and how you were introduced to the HeRO monitor?

 Do you remember when you were introduced to the HeRO monitor and how that took place?

 How did you learn to use the data available through the HeRO monitor?

 Since you were first introduced to the HeRO monitor, how do new members of the care team learn about HeRO?

How does HeRO play a role in your practice?

 What information do you get from HeRO?

 What role does HeRO play in terms of all the other monitor information you use?

Does the literature on the HeRO monitor currently, or when you first were introduced to it, inform your practice?

Is there anything else you would like to tell me about your practice when it comes to the HeRO monitor?

Would you share your observations about how other members of the NICU healthcare team use HeRO?

 Is there a difference in how they use it?

 Do you find that certain members of the team look more at the trend compared with the absolute value?

 Are HeRO data presented or used differently by the nurses compared with nurse practitioners?

 Do parents provide any information about HeRO scores?



Data Analysis

Because these data were collected to answer a different research question, R.A., R.K., C.L., and J.M. read five interviews (registered nurse, nurse practitioner, resident, fellow, and attending) to determine if appropriate data existed to answer study aims.[43] [44] Due to the open-ended nature of the questions and the topics covered in the interview guide, this preliminary review found descriptions of participants' perceptions of HeRO, interactions with HeRO, and use of HeRO data in practice.

Informed by the desire to understand NICU clinicians' perceptions of HeRO and the relationship between perceptions and subsequent adoption of HeRO, the team developed four broad a priori codes to explore how clinicians first became aware of HeRO (awareness); learned to interpret HeRO data (interpretation); used HeRO (use); and how HeRO data guided care decisions and actions (decision-making and action).[45] The team then read a subset of interviews and identified additional codes to capture contextual information about professional roles, routines, and care responsibilities. Following agreement on codes and code definitions, the team was divided into pairs to conduct descriptive analysis using the full code book. Each interview was read in its entirety by both coders, and then, in a second reading, codes were applied to text segments. Coding pairs conducted code agreement meetings at two separate times during the coding process. The entire team met on a weekly basis to review code-level text segments, evolving memos, and emerging themes. Guided by Miles et al,[46] text segments were abstracted for each code into matrices, which facilitate sorting and grouping segments to identify themes informed by DOI's five innovation attributes. The team frequently returned to the original material to uncover assumptions and explore alternate hypotheses. The team used four strategies to enhance trustworthiness: (1) multiple team members coded the same interview data with cross-validation of code use; (2) assumptions and questions about the data were captured in memos and reviewed with the team; (3) all aspects of the study design were open for review by the members of the research team; and (4) all members used ATLAS.ti to provide an audit trail.[46]
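As an illustration of the matrix technique cited from Miles et al, the snippet below arranges hypothetical coded text segments into a code-by-role matrix so segments can be sorted and grouped during theme development. The a priori code names and roles come from the methods above; the excerpts and the layout are invented placeholders, not study data or the team's actual ATLAS.ti output.

```python
import pandas as pd

# Hypothetical coded segments: (a priori code, participant role, invented excerpt)
segments = [
    ("awareness", "RN", "A research team member gave a brief presentation when the monitors arrived."),
    ("interpretation", "Fellow", "I learned what a rising score meant by listening to the attendings on rounds."),
    ("use", "Attending", "Fellows reported the overnight HeRO trend as part of their presentation."),
    ("decision-making and action", "NP", "If the score rose and symptoms correlated, we drew cultures."),
]
df = pd.DataFrame(segments, columns=["code", "role", "segment"])

# Matrix display: rows = codes, columns = roles, cells = grouped excerpts,
# which supports sorting and grouping segments while developing themes
matrix = (df.groupby(["code", "role"])["segment"]
            .apply(" | ".join)
            .unstack(fill_value=""))
print(matrix)
```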



Results

The 22 participants represented a cross-section of healthcare professionals: registered nurses (n = 3), nurse practitioners (n = 3), resident physicians (n = 3), neonatology fellows (n = 4), and neonatology attending physicians (n = 7). Participants ranged in professional experience from 2 to 30 years and worked in the UVA NICU between 1 month and 15 years. Seven participants were employed at the inception of the clinical trial, seven joined during the clinical trial, and six joined after the trial concluded. No participants were members of the UVA research team. Participants' perceptions of HeRO are organized according to the five DOI innovation attributes (complexity, compatibility, trialability, observability, relative advantage) ([Table 2]).[37]

Table 2

Select quotations related to DOI innovation attributes

Complexity

 Location

“It's very different pod by pod. I think that it sometimes can be kind of hard to see since especially if you're going to a—you go to a bedside, you hear all the things and you say, “Oh, what's the HeRO score?” Either they haven't looked or you're trying to look and sometimes if you're [in] the middle of B pod you're trying to stand on tippy toes and look through the glass or around the corner or something and see, and count of beds so that you can kind of see what the HeRO score is from a distance and it's kind of hard to do.” (Attending)

“The nurses weren't focusing on it. The residents really didn't know much about it. [Research team] realized that there were times when the HeRO score was just getting ignored.” (Attending)

 Understanding

“Until somebody says to you and takes you by the hand…this is how to approach these screens; these are the questions you can answer with this technology, I won't use it just cause it's there.” (Fellow)

“As long as I understand what it's there for and I understand how it works, that it's been well-taught and that there's some evidence behind its use, then I'm all for it. I think if it's been well-explained I can latch onto things pretty quickly as long as I get to play with it a little bit before it has to be on a real patient, if I get to really sort of see it in action or whatever.” (RN) **cotheme: Trialability

“Cause we've seen the results. That frequently the blood cultures will come back positive. The baby did have an infection. Particularly when the study was going on and you'd have babies that were blinded that weren't on the Hero you wished, when they showed signs of infection. You wished you could've seen what the Hero was doing there, because you just knew it would've gone up. So, I feel like we have certainly seen that it does seem have a predictive value.” (RN)

“For example, like I said, the fact that Hero is on a wall and I can scan nine patients and pretty much instantly know what's going on, that, to me, is a testament to the power of the graphic, versus the fact that I can go into Epic and find it but the visualization process is different.” (Fellow)

Compatibility

 Care tasks

“Initially, our physician group wanted nursing staff to begin to just document that number. I (said) You are delegating a responsibility of observing a numerical trend, but you have not provided any direction as to what constitutes need for a response. Until you can articulate that to the nursing staff, the nursing staff cannot assume accountability without knowing what your response algorithm is. I think they, as a team, determined what those parameters would be for response.” (RN)

“Then the nurses were told if the HeRO score goes up by a certain amount they need to alert a clinician—a nurse practitioner, or a resident, or a fellow, or an attending.” (Attending)

“…routine care involves vital signs every so many hours, depending upon your patient population, and the HeRO score is a part of vital signs monitoring.” (RN)

“It actually now appears automatically. There's a way to automatically get it put into the progress note. Even just in the last six months when I've been rounding with Epic, I've noticed that the residents are much more aware of what the HeRO score is and what it means than they were three years ago.” (Attending)

 Communication

“We actually had the fellows responsible for reviewing the HeRO trending overnight so they would know what might have transpired with the baby's monitoring overnight. So that would be part of their presentation. Even if it's the matter of the HeRO remains below two they were looking at it.” (Attending)

“You know, I'm trying to think of who doesn't use it. We're a pretty—it's pretty engrained in our practice at this point that everybody, even in our report as nurses when we hand off, will make a comment; HeRO stable or HeRO went up overnight, but this is what we're doing about it. So I really can't talk to very many instances where it hasn't come up.” (RN)

Trialability

 Clinical reasoning

“…especially at the beginning when it was over two we were doing a full blown workup, I felt like there was a lot of unnecessary workups. That just in itself predisposes the baby to—you stick in a catheter in their urethra, you probably—a little bit more prone to bladder infections then.” (Nurse Practitioner)

“On call at night when there's a kid that's not doing well, we have some suspicion of sepsis, the nurse practitioners or the fellows would take a look at the HeRO monitors. Then we'd talk a little bit about what the significance of those numbers were and whether or not that push[es] us one way or another in our decision making.” (Resident)

“It's something that if I go to them [physicians] with the information and say, there's been a change in the HeRO score that draws their attention to it. They'll look on that and try and incorporate that as one more piece of the puzzle in trying to make their decisions.” (RN)

“Then there are some babies where they might do one little odd thing or something that's maybe a little bit concerning but it's only the one event. Then you go back and you look at the HeRO score and say, well, did it go up?” (Attending)

“What [HeRO's] really done is shown me that I think putting both together, using HeRO score and something else is a lot more predictive, or it guides my care and my decision making more so.” (Attending)

“It has shown that it [HeRO] makes a difference, and we obviously believe in it strongly here, so we pay attention when the HeRO goes above two. We don't necessarily-and if it's just a HeRO, then we get a CBC, but if there's more clinical symptoms that are correlating with the HeRO then we go ahead and do blood and urine and potentially start antibiotics” (Nurse Practitioner)

Observability

“We've all learned about it from the same attendings and fellows and nurse practitioners, so if you learn something from the same people, I think your practice with it tends to be at least similar.” (Resident)

“[Attendings] spent some time explaining to me what it is, how it works, how you can look at it…so, just learning in which clinical aspects would you do this versus that I've learned from the attendings.” (Fellow)

“Nobody really explained it. I learned about it from just the routine of once in a while people would go and check on it (HeRO score), or a nurse would say, oh, the HeRO score's up, and I'd be like, ah, what does that mean? I don't know. What's a HeRO score? Then just from being there, gradually I picked up that it was about heart rate variability.” (Fellow)

“The HeRO score is a part of vital signs monitoring. So the ability to critically analyze that for a new hire is supported by an experienced nurse helping [them] along the way to interpret that.” (RN)

“When nurses are brought into our unit, if they're a novice new grad, they get six months of a precepted orientation. So that means, they are paired with a person. At the very beginning, routine care involves vital signs every so many hours, depending upon your patient population, and the HeRO score is a part of vital signs monitoring.” (RN)

Relative advantage

 Supports clinical judgment

“The vital signs of the baby, as far as monitors go. Hero is helpful sometimes. But lots of times I feel that the baby tells you first. Especially after having a good bit of experience, the Hero can kinda help back up your feeling that the baby's getting sick. But at this point I can kind of get a feeling.” (RN)

“…it can always kind of help out my case I think. If I think that a baby who … is becoming ill, or he needs respiratory support further than what he's already on. I can kind of grab the docs and be like; this is what I'm seeing. Oh by the way the Hero score is up. Then that kind of helps them say, oh okay well let's go ahead and get septic work, or whatever needs to be done.” (RN)

“I think it's just another thing to add to their [RN] story to get me concerned…I think that's reasonable but then that prompts us to go in and investigate it.” (Fellow)

 Surveillance

“…one of the things that it's [HeRO Score] done is both shown how something that's non-invasive that can be active all the time can be helpful.” (Attending)

“…I'm giving tours at the NICU to families and they're worried about their baby hooked up to these monitors—what's reassuring to some of them is saying “Here is the monitor, here are all the numbers we're looking at and we're getting data on your baby. I'm not even touching your baby. I'm not poking or prodding your baby. I can see what the heart rate is, I can see what the respiratory pattern is, see what the blood pressure is, … we have all these methods of evaluating your baby without having to wake the baby up and take a blood sample. I think the HeRO is one of those ways that we can assess the baby without hurting the baby so to speak. An advantage is not hurting the baby.” (RN)

“There really was not anything like Hero that they used before in terms of its predictive quality. The only close comparison is a human caregiver having an instinct that something MIGHT happen.” (Fellow)

 Evidence base

“I know there has been a recent review in [journal] by [doctor]. I have not had a chance to read that. But we talk about HeRO all the time. Before they present some data in big national meetings.” (Attending)

“[The published study] has shown that it makes a difference, and we obviously believe in it strongly here, so we pay attention when the HeRO goes above two.” (Nurse Practitioner)

“We're always proud to say it's the biggest randomized clinical trial of very low birth weight infants ever with 3,000 patients. The fact that that showed mortality reduction, I mean there's really not much that reduces mortality in preemies.” (Attending)

Abbreviations: CBC, complete blood count; DOI, Diffusion of Innovation; HeRO, heart rate observation; NICU, neonatal intensive care unit; RN, Registered Nurse.


Complexity

Innovations that are easily understood and used, can be learned incrementally, or can be experimented with are more likely to be adopted.[37] At clinical trial inception, participants (n = 9) reported receiving a brief presentation provided by a member of the HeRO research team. This presentation described the monitor's function and the score's meaning: the fold-increased risk that a neonate will develop sepsis in the next 24 hours. Although participants received initial information about the display and score, participants did not know whether a changing score should drive clinical decision making or whether it should merely contribute to overall clinical impressions. A neonatology fellow reflected on the introduction of technology into care, suggesting that use may take more than simply understanding how predictive data are calculated and what the data represent: “Until somebody says to you and takes you by the hand…this is how to approach these screens; these are the questions you can answer with this technology, I won't use it just cause it's there.”

Prior to HeRO's introduction, users were accustomed to making care decisions based on physiologic data that provided information on the neonate's current status (e.g., respiratory rate, heart rate, laboratory values). Because this was the first application of HeRO in a clinical environment, participants had no experience with HeRO scores or their association with patient symptoms. Neither could participants rely on other members of the care team to help them learn about or use HeRO. This lack of experience may have negatively affected initial engagement with HeRO data. In fact, the lack of use was pervasive across the entire care team. It appears that participants' initial engagement with the data was not influenced by knowledge about HeRO provided when the monitors were installed or by the monitors' presence on the unit.



Compatibility

Innovations that align with users' values, needs, or past experiences are more likely to be adopted.[33] [34] [37] Clinicians noted that the location of HeRO differed from the location of other devices they used in day-to-day care delivery. Physiologic monitors that displayed heart and respiratory rates, oxygenation, blood pressure, and other parameters resided at each neonate's bedside. The single HeRO display was centrally located in each pod. To see the data, clinicians described the need to move away from the bedside, stand on “tippy toes,” or walk to the monitor. Thus, physical location may have deterred routine engagement with HeRO.

The research team and unit managers agreed to undertake initial steps to increase participant attention to the HeRO score, yet refrained from requesting specific care interventions in response to the data. Participants described strategies that align with or were compatible with routine care practices. Nurses were instructed to record the score every 4 hours and alert nurse practitioners and physicians if the score reached two or increased by two. Fellows were required to observe and report on score trends and care actions during morning patient rounds. These strategies appear to have influenced participant behavior as noted by one nurse practitioner, “Having the nurses writing it down was key to us being successful in reacting appropriately to the spikes and so forth as they happen. I think that was a turning point.”
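Expressed as pseudologic, the documentation and call-out routine participants described reduces to a simple threshold rule. The function below is a hypothetical encoding for illustration only; the unit's actual protocol was communicated verbally rather than implemented in software, and the names and thresholds shown simply restate the rule in the paragraph above.

```python
def needs_callout(current_score: float, last_documented_score: float,
                  absolute_threshold: float = 2.0, rise_threshold: float = 2.0) -> bool:
    """Hypothetical encoding of the call-out rule clinicians described:
    alert a provider when the HRC score reaches two or has risen by two
    since the previous 4-hourly documented value."""
    rose_sharply = (current_score - last_documented_score) >= rise_threshold
    return current_score >= absolute_threshold or rose_sharply

# Example: a score that climbed from 1.1 to 3.4 between documentation points
# would trigger a call to the nurse practitioner, resident, fellow, or attending.
print(needs_callout(3.4, 1.1))  # True
```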

At first, nurses documented HeRO data on the paper vital signs flow sheet, then later in the electronic health record. The similarity between HeRO and vital sign collection patterns (every 4 hours), and the documentation of HeRO in close proximity to vital signs data, eventually led nurses to view HeRO as a vital sign. Prior to these requirements, HeRO data did not have a place in routine assessment, documentation, or daily conversations about patients. Over time, data collection and communication became embedded in care routines and team interactions.



Trialability

The trialability, or the ability of participants to experiment[37] with HeRO during the RCT, allowed a more nuanced understanding of the application to develop over the course of the study. Several attendings and nurse practitioners observed that initial reactions to rising HeRO scores may have led to unnecessary testing due to inexperience. “…especially at the beginning when it [HeRO Score] was over two we were doing a full blown workup, I felt like there was a lot of unnecessary workups” (nurse practitioner). They attributed this perception to inexperience with HeRO as well as clinical inexperience among some team members.

The clinical team eventually learned that a score of two or a rise of two did not necessarily mean that a neonate had sepsis. Through documentation and data presentation during patient rounds, HeRO data became integrated as a component of the overall dataset routinely used in care decision making. Further, participants eventually learned that not all neonates with rising HeRO scores would develop sepsis. As they gained experience, participants developed critical judgment about the relationship between HeRO scores and signs to guide when to undertake diagnostic testing and treatment. Over time, participants came to rely on the score to help them understand uncertain emerging symptoms and used HeRO in ongoing communication and decision making about next care interventions.



Observability

The more readily a user can see or observe the results of using an innovation, the more likely it will be adopted.[37] Members of the NICU also learned how to interpret and react to HeRO data by observing the practices of more experienced clinicians. Less experienced clinicians observed if and how senior participants used HeRO data. “[Attending] spent some time explaining to me what it is, how it works, how you can look at it…so, just learning in which clinical aspects would you do this versus that I've learned from the attendings” (fellow). Less experienced participants appear to have benefited most when senior members shared how they used HeRO data in care decision making. Both less experienced and new members of the NICU reported observing the practices of experienced participants to figure out how to interpret and use HeRO data in care delivery.



Relative Advantage

Observable, substantiated advantage of an innovative and newly introduced technology is seen as a pivotal attribute influencing its adoption.[37] In the case of HeRO, there was no evidence base, nor were there experienced participants upon whom the NICU team could rely. Through interaction with the data and observation of neonatal symptoms and outcomes, HeRO data came to serve different purposes for different types of participants. For example, the data confirmed nurses' emerging clinical impressions and helped nurses determine when to share their observations with other members of the care team. Physicians came to expect nurses to use HeRO data when communicating about a patient's status. The inclusion of HeRO data in nurses' communication about patients seemed to serve as a trigger for physician team members, prompting patient assessment or closer examination of patient data. Over time, participants recognized the benefit of noninvasive, continuous monitoring.

As evidence from the RCT emerged, NICU clinicians identified themselves as contributors to a significant improvement in neonatal care delivery. RCT findings may also have reinforced that participants made the correct decision to use HeRO in care delivery, implying that its use had scientific merit.



Discussion

This study examined NICU clinicians' perceptions of a predictive analytics monitoring technology following the conclusion of the RCT establishing its efficacy. In light of the novel nature of HeRO, evidence of effectiveness as well as guidance for its application in care delivery was limited.[28] [47] Use of prediction in care delivery often requires a balance between the benefits and risks of taking action. Consistent with DOI research, study results suggest that HeRO's attributes were key to influencing its use in the NICU. The findings highlight participants' initial reaction to HeRO, the effect of minimal prompts on participant engagement with HeRO data, how the care team learned to interpret and use HeRO to guide care decisions, and how the benefits, or relative advantage, of HeRO data emerged over time with experience.

Reduce Complexity: Provide Simple Guidelines for Engaging with HeRO

Knowledge about the usefulness of an innovation may not be sufficient to promote an innovation's use. Although sepsis remains a significant cause of death for neonates, the presence of HeRO in relative proximity to the bedside was insufficient to promote use among study participants. Prior research indicates that if an innovation is difficult to use or understand, adoption may occur slowly or not at all.[26] [48] Because HeRO provided an early alert for the increasing potential for sepsis, but was not a definitive test for sepsis, participants may have had difficulty knowing if and when an increasing HeRO score warranted medical action. Combined with the physical location of HeRO, difficulty interpreting the score in the context of care delivery may have been a contributing factor for the initial lack of attention. Simple, mandated interaction with the data, such as documenting and communicating, increased both written and verbal visibility and was seen by study participants as a turning point. Further, guidelines for when to report score changes, a “call out” procedure, likely reduced nurse uncertainty or worry about raising false alarms and engaged several types of care providers in the evaluation of HeRO trends and patient status. Call out procedures, a type of decision aid, are associated with effective clinician communication, early care intervention,[49] and reduced mortality among hospitalized patients.[50] However, several participants expressed concern that initial reactions to HeRO led to unnecessary sepsis work-ups. The RCT did note a nonsignificant increase in blood cultures and antibiotics[29]; it may be that the concern voiced by more experienced clinicians actually curtailed overreaction to rising HeRO scores. Decision aids, such as the one described in our study, may promote engagement, while avoiding mandated care actions and may provide a more effective means of introducing predictive analytics technologies into complex healthcare settings.



Enhance Compatibility: Align HeRO-Related Tasks with Existing Clinician Experience

Studies note that congruence, or compatibility, with user norms, values, and experiences increases the likelihood of adoption.[36] [51] In this study, nurses were asked to document the HeRO score every 4 hours, a pattern and task nurses routinely perform. Fellows were tasked with observing HeRO trends and reporting their observations as a component of their daily patient presentation to the care team. Through documentation, HeRO data appeared in the same medical record location as other relevant clinical data (e.g., heart and respiratory rate), which in turn may have influenced clinician perspectives, perhaps because it overcame the misalignment between HeRO's physical location on the unit and that of other physiologic devices. Over time, HeRO data gained credibility and were eventually viewed as another vital sign that became a component of care communication and decision making. Thus, defining HeRO as a vital sign connected the innovation to existing clinician practice and understanding of physiologic data.[52]



Foster Trialability: Promote Observation and Association

Users desire the opportunity to trial an innovation because it lessens uncertainty, promotes trust, and may confirm the benefits of using an innovation.[37] [53] Further, seeking feedback from users' trial experiences may provide opportunities to improve functionality.[54] HeRO may be a particularly difficult technology with which to experiment because scores rise as much as 24 hours in advance of symptom presentation.[55] Thus, an early forecast of sepsis may be incongruent with clinical assessment. Yet, it appears that clinicians engaged in a form of trialability. Through active evaluation of HeRO trends, emerging signs, and neonates' responses to care actions, clinicians made sense of HeRO and developed judgment about when to wait, undertake further testing, or initiate treatment.[56] This finding suggests that developing learning cases may provide opportunities to trial clinical decision making by allowing clinicians to look back over time and explore patterns, associated care actions, and neonates' responses.[57] This may be particularly important for patients who exhibit a high degree of score variability and present an uncertain clinical picture. Further, deliberation about HeRO data in conjunction with patient signs often took place among a few collaborating clinicians; thus, the larger NICU community missed the opportunity for collective learning, a situation that may be improved by consistent use of learning cases.



Increase Observability: Respected Leaders Provide Meaningful Examples

During their training, residents work with attendings from different clinical specialties. Study attendings integrated HeRO into their education practices, including one-on-one mentoring, conducting patient rounds, demonstrating clinical reasoning, and providing formalized classes. Resident and fellow participants noted that they valued “hearing” attendings' cognitive processing of HeRO data and indicated that they followed attendings' examples. In addition to participation in patient rounds, nurses' learning took the form of orientation, one-on-one work with a nurse preceptor, and protocolized tasks. Direct observation of respected, successful others' innovation use is associated with increased likelihood of adoption.[53] Although not planned, attendings served as champions, key individuals who supported the innovation through observable HeRO use and verbalization about HeRO within clinical reasoning. Formally assigning HeRO data collection and communication tasks to fellows and nurses may have implied that attendings and senior nurses valued HeRO data.[53] Unlike technology training methods that move clinicians to the classroom and separate the professions, senior clinicians served as role models and provided HeRO training and learning within the context of care.[58]



Demonstrate Relative Advantage: Experience and Evidence

Through continued use, HeRO was eventually viewed as advantageous to clinicians' care communication and decision making. Studies identify relative advantage, defined as an innovation's benefit to the user, as an essential innovation attribute linked to adoption.[53] [59] In the absence of firm evidence of benefit (e.g., monetary, quality, efficiency, satisfaction), uptake of innovations is prolonged.[60] In the case of HeRO, relative advantage had not yet been established, thus clinicians had to discover the advantage as they worked with and learned about HeRO. Fitzgerald et al suggest that in medical contexts ambiguous new scientific knowledge is socially mediated, meaning that an innovation's benefits are established through use and ongoing dialog between clinicians.[61] In this study, social mediation occurred through the use of call out procedures, daily patient rounds, and clinicians' day-to-day collaborative clinical reasoning. HeRO data provided evidence to support emerging clinical impressions that nurses communicated to providers. Physicians and nurse practitioners came to expect nurses to use HeRO data as evidence of clinical concern. Clinicians came to value the noninvasive nature of HeRO as it provided continuous monitoring without the pain or risk of infection associated with laboratory testing. Positive clinical trial results may have served as a form of affirmation. Since evidence of effectiveness influences clinician willingness to use prediction in practice, frequent review of case examples may provide sufficient evidence to promote initial clinician engagement with new innovations such as HeRO.



Strengths and Limitations

Study limitations include limited generalizability, strong interest in HeRO at UVA, small sample size, and recall bias. While selecting participants from a single hospital unit limits generalizability, this unit's extensive experience with HeRO allowed us to explore the experience of implementing a newly developed technology. Our participants were the first to be recruited into the RCT, the first to encounter HeRO data and knowledge in the care environment, and, therefore, had the greatest experiences to share with the study team. Although our sample was small, it represented a cross-section of professions, neonatal experience, tenure on the unit, and experience with HeRO. Due to the wide variation in experience with HeRO, we achieved thematic saturation only in terms of HeRO use in clinical decision making: HeRO was a “piece of the puzzle,” one data point among many considered when developing a medical course of action. Because participant interviews took place approximately 18 months after the conclusion of the clinical trial, participant data were at risk for recall bias. However, we found consistent descriptions across participants of the original implementation, the strategies unit managers first established, and how members learned through role modeling and dialogue. A next step to understanding the use of HeRO in hospitals might include direct observation of care team actions. Finally, this study examined prediction of a single condition, sepsis, a particularly persistent, devastating illness. Future research efforts should consider evaluating other types of negative patient events, such as hemorrhage, where early intervention is associated with improved survival.



Conclusion

Aside from a handful of studies, prediction research has focused largely on model development and validation; therefore, there is little evidence to guide integrating predictive data into providers' care routines.[11] [27] Tools such as the Acute Physiology and Chronic Health Evaluation are typically used to benchmark ICU quality, not to alert providers to patient decline.[11] Further, there is little evidence to guide interventions to promote clinician acceptance and use of predictive technologies, as the majority of studies focus on model development and accuracy, not on providers' acceptance of prediction as an element of care decision making.[9] [11] [14] [62] [63] Because the success of predictive technologies such as HeRO relies on the human system for interpretation and action,[26] [49] [64] processes to build human capacity to interpret predictive data in the context of clinical reasoning are essential. Simple strategies designed to engage clinicians' attention and promote data communication may be foundational to helping clinicians learn how to effectively use new tools in care delivery.



Multiple Choice Questions

  1. What attributes of an innovation influence user adoption?

    • Relative advantage, trialability, observability, accountability, complexity.

    • Observability, relative advantage, sustainability, complexity, compatibility.

    • Complexity, compatibility, observability, trialability, relative advantage.

    • Affordability, relative advantage, observability, trialability, complexity.

    Correct Answer: The correct answer is option c. Although there is no order or ranking, the five DOI theory innovation attributes are complexity, compatibility, observability, trialability, and relative advantage.

  2. When implementing new technology into health care settings, what strategy would most likely promote its use in care delivery?

    • Training classes scheduled to meet the needs of varying shifts.

    • Reminder emails that include best practice materials.

    • Integration into existing data collection, documentation, and communication.

    • Vendor-provided online tutorials that include case examples.

    Correct Answer: The correct answer is option c. In general, integration into workflow is essential for implementation of any kind of technology, evidence-based practice, or care protocol. While the other options provide knowledge about an innovation, integration provides opportunities to develop skills.



Conflict of Interest

J.R. Moorman has equity in and is Chief Medical Officer of Advanced Medical Predictive Devices, Diagnostics and Displays, which has licensed technology from the University of Virginia related to predictive monitoring. Funding support (R. Anderson, C. Lindberg, and R. Kitzmiller) for the project was provided by the MITRE Corporation. The MITRE Corporation operates the Centers for Medicare & Medicaid Services' (CMS's) Alliance to Modernize Healthcare (CAMH), a federally funded research and development center (FFRDC) dedicated to strengthening the nation's health care system. The MITRE Corporation operates CAMH in partnership with CMS and the Department of Health and Human Services. Additional funding support for R. Kitzmiller's salary came from the National Center for Advancing Translational Sciences, National Institutes of Health (KL2TR001109), and for J. Keim-Malpass' salary from a University of Virginia Translational Health Research Institute of Virginia (THRIV) Scholars award. The remaining authors declare that they have no competing interests.

Acknowledgments

We would like to thank Dr. Beth Black, PhD, RN, FAAN, for her careful editing and Stephanie Wragg, PhD, for study design and data collection contributions.

Authors' Contributions

C.L., J.R.M., R.A., R.K., and R.T. developed the study design. R.T. conducted participant interviews. A.V., C.L., R.A., R.K., S.K., and T.Y. led the application of the theoretical framework. Data analysis was conducted by A.S.-W., A.V., C.K., C.M., J.K.-M., R.A., R.K., S.K., and T.Y. Drafts of the manuscript were prepared by A.S.-W., A.V., B.S., C.L., J.K.-M., J.R.M., R.A., R.K., S.K., and T.Y. All authors read and approved the final manuscript.


Protection of Human and Animal Subjects

Ethics approval was obtained from the University of Virginia IRB-SBS #2015–0352. Procedures included participant consent to participate.


  • References

  • 1 Agency for Healthcare Research and Quality. AHRQ Issue Brief: Harnessing the Power of Data. Agency for Healthcare Research and Quality; August 2015
  • 2 Seely A, Newman K, Herry C. Monitoring variability and complexity at the bedside. In: Sturmberg JP. , ed. The Value of Systems and Complexity Sciences for Healthcare. New York: Springer; 2016
  • 3 Green GC, Bradley B, Bravi A, Seely AJE. Continuous multiorgan variability analysis to track severity of organ failure in critically ill patients. J Crit Care 2013; 28 (05) 879.e1-879.e11
  • 4 Henry KE, Hager DN, Pronovost PJ, Saria S. A targeted real-time early warning score (TREWScore) for septic shock. Sci Transl Med 2015; 7 (299) 299ra122
  • 5 Kappen TH, van Klei WA, van Wolfswinkel L, Kalkman CJ, Vergouwe Y, Moons KGM. Evaluating the impact of prediction models: lessons learned, challenges, and recommendations. Diagnostic and Prognostic Research 2018; 2 (01) 11
  • 6 Kennedy G, Gallego B. Clinical prediction rules: a systematic review of healthcare provider opinions and preferences. Int J Med Inform 2019; 123: 1-10
  • 7 Nakamura F, Nakai M. Prediction models - why are they used or not used? Circ J 2017; 81 (12) 1766-1767
  • 8 Shelov E, Muthu N, Wolfe H. , et al. Design and implementation of a pediatric ICU acuity scoring tool as clinical decision support. Appl Clin Inform 2018; 9 (03) 576-587
  • 9 Chalmers JD, Singanayagam A, Akram AR. , et al. Severity assessment tools for predicting mortality in hospitalised patients with community-acquired pneumonia. Systematic review and meta-analysis. Thorax 2010; 65 (10) 878-883
  • 10 Hosein FS, Bobrovitz N, Berthelot S, Zygun D, Ghali WA, Stelfox HT. A systematic review of tools for predicting severe adverse events following patient discharge from intensive care units. Crit Care 2013; 17 (03) R102
  • 11 Breslow MJ, Badawi O. Severity scoring in the critically ill: part 1--interpretation and accuracy of outcome prediction scoring systems. Chest 2012; 141 (01) 245-252
  • 12 van den Boogaard M, Schoonhoven L, Maseda E. , et al. Recalibration of the delirium prediction model for ICU patients (PRE-DELIRIC): a multinational observational study. Intensive Care Med 2014; 40 (03) 361-369
  • 13 Wassenaar A, van den Boogaard M, van Achterberg T. , et al. Multinational development and validation of an early prediction model for delirium in ICU patients. Intensive Care Med 2015; 41 (06) 1048-1056
  • 14 Seely AJ, Bravi A, Herry C. , et al; Canadian Critical Care Trials Group (CCCTG). Do heart and respiratory rate variability improve prediction of extubation outcomes in critically ill patients?. Crit Care 2014; 18 (02) R65
  • 15 Müller-Riemenschneider F, Holmberg C, Rieckmann N. , et al. Barriers to routine risk-score use for healthy primary care patients: survey and qualitative study. Arch Intern Med 2010; 170 (08) 719-724
  • 16 Koppel R, Wetterneck T, Telles JL, Karsh BT. Workarounds to barcode medication administration systems: their occurrences, causes, and threats to patient safety. J Am Med Inform Assoc 2008; 15 (04) 408-423
  • 17 Barber N, Cornford T, Klecun E. Qualitative evaluation of an electronic prescribing and administration system. Qual Saf Health Care 2007; 16 (04) 271-278
  • 18 Bar-Lev S, Harrison MI. Negotiating time scripts during implementation of an electronic medical record. Health Care Manage Rev 2006; 31 (01) 11-17
  • 19 Doolin B. Power and resistance in the implementation of a medical management information system. Inf Syst J 2004; 14 (04) 343-362
  • 20 Lapointe L, Rivard S. A multilevel model of resistance to information technology implementation. Manage Inf Syst Q 2005; 29 (03) 461-491
  • 21 Lau F, Penn A, Wilson D, Noseworthy T, Vincent D, Doze S. The diffusion of an evidence-based disease guidance system for managing stroke. Int J Med Inform 1998; 51 (2-3): 107-116
  • 22 Littlejohns P, Wyatt JC, Garvican L. Evaluating computerised health information systems: hard lessons still to be learnt. BMJ 2003; 326 (7394): 860-863
  • 23 Koppel R, Metlay JP, Cohen A. , et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005; 293 (10) 1197-1203
  • 24 Novak L, Brooks J, Gadd C, Anders S, Lorenzi N. Mediating the intersections of organizational routines during the introduction of a health IT system. Eur J Inf Syst 2012; 21 (05) 552-569
  • 25 Edmondson AC, Bohmer RM, Pisano GP. Disrupted routines: team learning and new technology implementation in hospitals. Adm Sci Q 2001; 46 (04) 685
  • 26 Kappen TH, Moons KG, van Wolfswinkel L, Kalkman CJ, Vergouwe Y, van Klei WA. Impact of risk assessments on prophylactic antiemetic prescription and the incidence of postoperative nausea and vomiting: a cluster-randomized trial. Anesthesiology 2014; 120 (02) 343-354
  • 27 Kappen TH, van Loon K, Kappen MA. , et al. Barriers and facilitators perceived by physicians when using prediction models in practice. J Clin Epidemiol 2016; 70: 136-145
  • 28 Griffin MP, Moorman JR. Toward the early diagnosis of neonatal sepsis and sepsis-like illness using novel heart rate analysis. Pediatrics 2001; 107 (01) 97-104
  • 29 Moorman JR, Carlo WA, Kattwinkel J. , et al. Mortality reduction by heart rate characteristic monitoring in very low birth weight neonates: a randomized trial. J Pediatr 2011; 159 (06) 900-906
  • 30 Sullivan BA, Grice SM, Lake DE, Moorman JR, Fairchild KD. Infection and other clinical correlates of abnormal heart rate characteristics in preterm infants. J Pediatr 2014; 164 (04) 775-780
  • 31 Goel N, Chakraborty M, Watkins WJ, Banerjee S. Predicting extubation outcomes-a model incorporating heart rate characteristics index. J Pediatr 2018; 195: 53-58
  • 32 Clark MT, Vergales BD, Paget-Brown AO. , et al. Predictive monitoring for respiratory decompensation leading to urgent unplanned intubation in the neonatal intensive care unit. Pediatr Res 2013; 73 (01) 104-110
  • 33 Yap TL, Kennerly S, Corazzini K, Porter K, Toles M, Anderson RA. Evaluation of cueing innovation for pressure ulcer prevention using staff focus groups. Paper presented at: Healthcare 2014
  • 34 Dharampal N, Cameron C, Dixon E, Ghali W, Quan ML. Attitudes and beliefs about the surgical safety checklist: just another tick box?. Can J Surg 2016; 59 (04) 268-275
  • 35 Federman AD, Sanchez-Munoz A, Jandorf L, Salmon C, Wolf MS, Kannry J. Patient and clinician perspectives on the outpatient after-visit summary: a qualitative study to inform improvements in visit summary design. J Am Med Inform Assoc 2017; 24 (e1): e61-e68
  • 36 Vedel I, Akhlaghpour S, Vaghefi I, Bergman H, Lapointe L. Health information technologies in geriatrics and gerontology: a mixed systematic review. J Am Med Inform Assoc 2013; 20 (06) 1109-1119
  • 37 Rogers E. Diffusion of Innovations. 5th ed. New York: Free Press; 2003
  • 38 Simunovic M, Coates A, Smith A, Thabane L, Goldsmith CH, Levine MN. Uptake of an innovation in surgery: observations from the cluster-randomized Quality Initiative in Rectal Cancer trial. Can J Surg 2013; 56 (06) 415-421
  • 39 Mikkelsen ME, Gaieski DF, Goyal M. , et al. Factors associated with nonadherence to early goal-directed therapy in the ED. Chest 2010; 138 (03) 551-558
  • 40 Bunkenborg G, Poulsen I, Samuelson K, Ladelund S, Åkeson J. Mandatory early warning scoring--implementation evaluated with a mixed-methods approach. Appl Nurs Res 2016; 29: 168-176
  • 41 Pisano GP, Bohmer RM, Edmondson AC. Organizational differences in rates of learning: evidence from the adoption of minimally invasive cardiac surgery. Manage Sci 2001; 47 (06) 752-768
  • 42 Sandelowski M. Whatever happened to qualitative description?. Res Nurs Health 2000; 23 (04) 334-340
  • 43 Hinds PS, Vogel RJ, Clarke-Steffen L. The possibilities and pitfalls of doing a secondary analysis of a qualitative data set. Qual Health Res 1997; 7 (03) 408-424
  • 44 Szabo V, Strang VR. Secondary analysis of qualitative data. ANS Adv Nurs Sci 1997; 20 (02) 66-74
  • 45 Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res 2005; 15 (09) 1277-1288
  • 46 Miles MB, Huberman AM, Saldaña J. Qualitative Data Analysis: A Methods Sourcebook. Thousand Oaks, CA: SAGE Publications, Incorporated; 2013
  • 47 Griffin MP, Lake DE, Bissonette EA, Harrell Jr FE, O'Shea TM, Moorman JR. Heart rate characteristics: novel physiomarkers to predict neonatal infection and death. Pediatrics 2005; 116 (05) 1070-1074
  • 48 Ford EW, McAlearney AS, Phillips MT, Menachemi N, Rudolph B. Predicting computerized physician order entry system adoption in US hospitals: can the federal mandate be met?. Int J Med Inform 2008; 77 (08) 539-545
  • 49 Gardner-Thorpe J, Love N, Wrightson J, Walsh S, Keeling N. The value of Modified Early Warning Score (MEWS) in surgical in-patients: a prospective observational study. Ann R Coll Surg Engl 2006; 88 (06) 571-575
  • 50 Priestley G, Watson W, Rashidian A. , et al. Introducing critical care outreach: a ward-randomised trial of phased introduction in a general hospital. Intensive Care Med 2004; 30 (07) 1398-1404
  • 51 Guilbert ER, Robitaille J, Guilbert AC, Morin D. Determinants of the implementation of a new practice in hormonal contraception by Quebec nurses. Can J Hum Sex 2014; 23 (01) 34-48
  • 52 Weick KE, Sutcliffe KM, Obstfeld D. Organizing and the process of sensemaking. Organ Sci 2005; 16 (04) 409
  • 53 Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q 2004; 82 (04) 581-629
  • 54 Rehr CA, Wong A, Seger DL, Bates DW. Determining inappropriate medication alerts from “inaccurate warning” overrides in the intensive care unit. Appl Clin Inform 2018; 9 (02) 268-274
  • 55 Moss TJ, Lake DE, Calland JF. , et al. Signatures of subacute potentially catastrophic illness in the ICU: model development and validation. Crit Care Med 2016; 44 (09) 1639-1648
  • 56 Weick KE. Sensemaking in Organizations. London: Sage; 1995
  • 57 Norman G. Research in clinical reasoning: past history and current trends. Med Educ 2005; 39 (04) 418-427
  • 58 Cruess SR, Cruess RL, Steinert Y. Role modelling--making the most of a powerful teaching strategy. BMJ 2008; 336 (7646): 718-721
  • 59 Masterson Creber RM, Dayan PS, Kuppermann N. , et al; Pediatric Emergency Care Applied Research Network (PECARN) and the Clinical Research on Emergency Services and Treatments (CREST) Network. Applying the RE-AIM framework for the evaluation of a clinical decision support tool for pediatric head trauma: a mixed-methods study. Appl Clin Inform 2018; 9 (03) 693-703
  • 60 Menachemi N, Burke DE, Ayers DJ. Factors affecting the adoption of telemedicine--a multiple adopter perspective. J Med Syst 2004; 28 (06) 617-632
  • 61 Fitzgerald L, Ferlie E, Wood M, Hawkins C. Interlocking interactions, the diffusion of innovations in health care. Hum Relat 2002; 55 (12) 1429
  • 62 Subbe CP, Kruger M, Rutherford P, Gemmel L. Validation of a modified Early Warning Score in medical admissions. QJM 2001; 94 (10) 521-526
  • 63 Heitz CR, Gaillard JP, Blumstein H, Case D, Messick C, Miller CD. Performance of the maximum modified early warning score to predict the need for higher care utilization among admitted emergency department patients. J Hosp Med 2010; 5 (01) E46-E52
  • 64 Keim-Malpass J, Kitzmiller RR, Skeeles-Worley A. , et al. Advancing continuous predictive analytics monitoring: Moving from implementation to clinical action in a learning health system. Crit Care Nurs Clin North Am 2018; 30 (02) 273-287

Fig. 1 Heart rate observation (HeRO) monitor visualizing the heart rate characteristics index, the corresponding heart rate pattern, and controls to scroll forward and backward in time.
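
For readers unfamiliar with displays like the one in Fig. 1, the sketch below plots a synthetic heart rate characteristics (HRC) index trend over time. It is illustrative only: the data series, the 2.0 "attention" threshold, and the axis labels are assumptions made for this example, and the code does not implement the HeRO scoring model described in the cited work; it merely approximates the kind of continuously updated risk-trend view clinicians reviewed during rounds.

```python
# Minimal sketch of an HRC-index trend display (synthetic data; NOT the HeRO algorithm).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
hours = np.arange(72)  # three days of hourly index values (assumed cadence)
# Random-walk values clipped to a plausible "fold increase in risk" range (assumption).
hrc_index = np.clip(1.0 + np.cumsum(rng.normal(0.0, 0.15, hours.size)), 0.1, 7.0)

fig, ax = plt.subplots(figsize=(8, 3))
ax.plot(hours, hrc_index, lw=2)
ax.axhline(2.0, ls="--", color="gray")  # illustrative attention threshold, not from the source
ax.set_xlabel("Time (hours)")
ax.set_ylabel("HRC index (fold increase in risk)")
ax.set_title("Continuous risk-trend display (synthetic data)")
fig.tight_layout()
plt.show()
```

A bedside implementation would differ in obvious ways (streaming data, clinician-configurable time windows, and scroll controls as shown in Fig. 1), but the core idea is the same: a continuously updated trend that can be read at a glance and discussed during rounds and handoffs.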