Appl Clin Inform 2022; 13(02): 380-390
DOI: 10.1055/s-0042-1744388
CIC 2021

Design, Usability, and Acceptability of a Needs-Based, Automated Dashboard to Provide Individualized Patient-Care Data to Pediatric Residents

Julia K.W. Yarahuan
1   Division of Pediatric Hospital Medicine, Department of Pediatrics, Boston Children's Hospital, Boston, Massachusetts, United States
,
Huay-Ying Lo
2   Section of Pediatric Hospital Medicine, Department of Pediatrics, Baylor College of Medicine/Texas Children's Hospital, Houston, Texas, United States
,
Lanessa Bass
2   Section of Pediatric Hospital Medicine, Department of Pediatrics, Baylor College of Medicine/Texas Children's Hospital, Houston, Texas, United States
,
Jeff Wright
3   Information Services, Texas Children's Hospital, Houston, Texas, United States
,
Lauren M. Hess
2   Section of Pediatric Hospital Medicine, Department of Pediatrics, Baylor College of Medicine/Texas Children's Hospital, Houston, Texas, United States
Funding None.
 

Abstract

Background and Objectives Pediatric residency programs are required by the Accreditation Council for Graduate Medical Education to provide residents with patient-care and quality metrics to facilitate self-identification of knowledge gaps to prioritize improvement efforts. Trainees are interested in receiving these data, but this remains a largely unmet need. Our objectives were to (1) design and implement an automated dashboard providing individualized data to residents, and (2) examine the usability and acceptability of the dashboard among pediatric residents.

Methods We developed a dashboard containing individualized patient-care data for pediatric residents with emphasis on needs identified by residents and residency leadership. To build the dashboard, we created a connection from a clinical data warehouse to data visualization software. We allocated patients to residents based on note authorship and created individualized reports with masked identities that preserved anonymity. After development, we conducted usability and acceptability testing with 11 resident users utilizing a mixed-methods approach. We conducted interviews and anonymous surveys that evaluated technical features of the application, ease of use, and users' attitudes toward using the dashboard. Categories and subcategories from usability interviews were identified using a content analysis approach.

Results Our dashboard provides individualized metrics including diagnosis exposure counts, procedure counts, efficiency metrics, and quality metrics. In content analysis of the usability testing interviews, the most frequently mentioned use of the dashboard was to aid a resident's self-directed learning. Residents had few concerns about the dashboard overall. Surveyed residents found the dashboard easy to use and expressed intention to use the dashboard in the future.

Conclusion Automated dashboards may be a solution to the current challenge of providing trainees with individualized patient-care data. Our usability testing revealed that residents found our dashboard to be useful and that they intended to use this tool to facilitate development of self-directed learning plans.



Background and Significance

The Accreditation Council for Graduate Medical Education (ACGME) requires programs to provide residents with patient-care and quality metrics so that residents can self-reflect and identify areas for improvement.[1] [2] [3] The inclusion of competency milestones that stress iterative improvement reflects a growing emphasis on objective quality metrics among health care organizations worldwide. Increasingly, health care institutions are using quality dashboards that allow providers to track their performance on key quality metrics.[4] These types of dashboards have been shown to improve adherence to quality guidelines and patient outcomes.[4]

Previous work has shown that trainees are interested in receiving patient-care data in the form of individualized case logs and other rotation-specific quality metrics.[5] [6] Despite accrediting body requirements, increasing prevalence of institutional quality dashboards, and trainee desire for personalized performance data, only a few studies exist among procedural and radiological specialties which discuss dashboard development for automated case-logging and tracking.[6] [7] [8] [9] Even fewer studies describe the creation of dashboards that provide quality metrics for trainees.[7] [8] While there are two studies about the use of automated case logs in pediatrics (one for aggregate pediatric residency data and the other for pediatric emergency medicine fellows),[9] [10] to our knowledge there are no studies or descriptions of a dashboard that provides individualized, rotation-specific automated case logs and quality metrics to pediatric residents.



Objectives

We aimed to (1) design and implement a real-time automated dashboard providing meaningful individualized patient-care data to pediatric residents, and (2) examine the usability and acceptability of the dashboard among pediatric residents.



Methods

Study Design

This was a mixed methods study of an educational innovation conducted at a pediatric tertiary care center from February 2020 to April 2021. The educational innovation consisted of the development of a real-time automated dashboard containing individualized patient-care data. After design and development of the dashboard, we conducted preliminary validation followed by formal usability and acceptability testing with resident users. Our institutional review board reviewed and approved this study.



Study Setting and Participants

The newly developed dashboard provides residents with patient-care data from their time on the pediatric hospital medicine (PHM) rotation. The PHM inpatient service is a core requirement of pediatric residency training, and typically provides residents with broad general pediatrics exposure to common inpatient diagnoses (e.g., asthma, pneumonia, bronchiolitis, etc.).[11] [12] Pediatric residents at our institution typically complete three 4-week blocks on the PHM service in their intern year (postgraduate year 1 [PGY1]), as well as one supervisory block during their third year of residency (PGY3). Eleven pediatric residents participated in usability and acceptability testing.



Dashboard Design and Data Sources

The dashboard design team consisted of a database programmer and four pediatric hospitalists with expertise in dashboard design, quality metrics, and the electronic health record (EHR), one of whom is an Associate Program Director of the pediatric residency program. These team members were involved in all parts of the dashboard development, including conceptualization, metric selection, visualization design, and the study of the dashboard after implementation. The project team collaborated regularly with pediatric residency program leadership throughout the development process.

Development of the dashboard was informed by an institutional needs assessment, which consisted of an anonymous, voluntary survey distributed to all residents. The survey elicited resident attitudes toward the currently provided types of feedback and patient-care data and asked residents to identify which types of data would be most meaningful to them for engaging in critical self-reflection on their patient-care practices. Dashboard metrics that residents were most interested in included counts of rotation-specific "core-competency" diagnoses (e.g., asthma, bronchiolitis, pneumonia), procedure counts (e.g., counts of procedures performed by that trainee), basic quality metrics (e.g., adherence to guidelines, length of stay, readmission rates), and efficiency metrics (e.g., count of patient encounters per shift).

To build the dashboard, we queried an enterprise data warehouse (Health Catalyst, Salt Lake City, Utah, United States) populated with data from our EHR (Epic, Epic Systems Corporation, Verona, Wisconsin, United States). We created a real-time connection to the visual analytics software QlikSense (Qlik Technologies Inc, King of Prussia, Pennsylvania, United States).[13] We allocated patients to trainees based on note authorship. We counted any resident who signed a note as an author of that note, which means that if both the intern and the upper-level resident signed a note, then that note, and thus that patient encounter, was attributed to both of them. Standard practice at our institution is for residents to sign notes only if they have physically examined the patient and have thus been directly involved in their care. Different metrics used different note types for attribution, based upon the authors' consensus on the most clinically relevant operational definition of each metric ([Table 1]). For example, readmission rates were only calculated for patients discharged by each individual resident (i.e., patients for whom that resident signed the discharge summary).

Table 1

Quality metrics and notes used for attribution

Metric | Description | Notes used for attribution to resident
Antibiotic use in bronchiolitis | Patients with a diagnosis of bronchiolitis who have an antibiotic order after an admission order is placed | History and physical note
Broad spectrum antibiotic use in community-acquired pneumonia | Distribution of class of antibiotic orders (e.g., penicillins, cephalosporins) after an admission order is placed in patients admitted with uncomplicated community-acquired pneumonia | History and physical note
Length of stay | Length of stay by broad diagnostic categories (e.g., gastroenteritis, sepsis) compared with PHM median for each category | Discharge summary
Readmission rate | 30-day readmission rate compared with PHM median | Discharge summary
Rapid response by diagnosis | Rapid response count by broad diagnostic categories (e.g., gastroenteritis, sepsis) | All notes[a]
Rapid response by month | Rapid response count compared with the resident's total count of patient encounters by month | All notes[a]

Abbreviation: PHM, pediatric hospital medicine.


Note: This table describes the quality metrics and operational definitions that were displayed on the dashboard for each resident. We also listed which notes were used to attribute these metrics to residents, recognizing that no attribution system is perfect.


a The resident may not have been personally involved with the rapid response, but we felt that if they had written a note on the patient, this represented a significant level of involvement in that patient's care.

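To make the attribution rule concrete, the following minimal Python sketch illustrates how encounters could be credited to every resident who signed a note of the relevant type, as described above and in [Table 1]. The table, column, and identifier names are hypothetical and do not reflect the actual Health Catalyst or Epic schema behind the dashboard.

```python
import pandas as pd

# Minimal sketch of the note-authorship attribution rule from the Methods.
# Column values and identifiers below are hypothetical placeholders.
notes = pd.DataFrame({
    "encounter_id": [101, 101, 102, 103],
    "note_type": ["H&P", "H&P", "Discharge summary", "Progress note"],
    "author_id": ["intern_a", "senior_b", "intern_a", "intern_c"],
})

def attribute(notes_df, note_types):
    """Credit every resident who signed a note of the given types with that encounter."""
    relevant = notes_df[notes_df["note_type"].isin(note_types)]
    return relevant[["author_id", "encounter_id"]].drop_duplicates()

# H&P notes drive admission-based metrics (e.g., antibiotic choice in pneumonia);
# discharge summaries drive discharge-based metrics (e.g., readmission rate), per Table 1.
admissions = attribute(notes, ["H&P"])          # encounter 101 is credited to both signers
discharges = attribute(notes, ["Discharge summary"])
print(admissions)
print(discharges)
```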

To build the core-competency diagnostic counts identified as a priority in our needs assessment, we used core-competency diagnoses derived from a previously published list of PHM Core Competencies that is endorsed by the Society of Hospital Medicine and the Academic Pediatric Association.[14] We slightly modified these core competencies to reflect the patient populations cared for by our institution's PHM teams. We created individualized core-competency diagnostic counts for PHM by assigning all relevant International Classification of Diseases, Tenth Revision (ICD-10) codes to each core-competency diagnosis.[15] For example, we identified all ICD-10 codes that referred to pneumonia and mapped them to a "Pneumonia" core competency.
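
As an illustration of this mapping, the short Python sketch below assigns ICD-10 codes to core-competency labels by code prefix and tallies exposures. The prefixes shown are a small illustrative subset, not the full mapping used in our dashboard, and the example encounter codes are hypothetical.

```python
from collections import Counter

# Illustrative subset of an ICD-10 prefix to core-competency mapping.
CORE_COMPETENCY_PREFIXES = {
    "J18": "Pneumonia",      # pneumonia, unspecified organism
    "J21": "Bronchiolitis",  # acute bronchiolitis
    "J45": "Asthma",
}

def to_core_competency(icd10_code):
    """Return the mapped core-competency label for an ICD-10 code, or None if unmapped."""
    for prefix, label in CORE_COMPETENCY_PREFIXES.items():
        if icd10_code.upper().startswith(prefix):
            return label
    return None

# Hypothetical encounter diagnoses for one resident; unmapped codes are ignored.
encounter_codes = ["J18.9", "J21.0", "J45.909", "K52.9"]
counts = Counter(filter(None, (to_core_competency(code) for code in encounter_codes)))
print(counts)  # Counter({'Pneumonia': 1, 'Bronchiolitis': 1, 'Asthma': 1})
```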

The needs assessment additionally identified that pediatric residents were most interested in metrics for which they felt they had sufficient decision-making responsibility. Quality-of-care metrics were selected based upon previously proposed PHM-related quality indicators, with careful emphasis placed on metrics for which pediatric residents would likely feel a sense of ownership.[16] [17]

When planning this dashboard with residency leadership, a key objective was to preserve resident anonymity while still providing individualized data to each resident. Without such safeguards, residents would be able to view each other's metrics. To prevent this, we created fictional character names that were linked to each resident provider. For example, resident Jane Doe would be assigned the character name Frodo Baggins. In the dashboard, all of her data would appear under Frodo Baggins, but only Jane Doe would know that this represents her patient-care data.
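
A minimal sketch of this anonymization scheme, under the assumption of a simple one-to-one linkage table, is shown below. The resident identifiers and character names are hypothetical examples.

```python
import random

# Minimal sketch of the pseudonymization described above. Resident identifiers
# and character names are hypothetical; in practice the linkage table would be
# kept confidential so that each resident learns only their own pseudonym.
RESIDENTS = ["jane.doe", "john.smith", "ana.garcia"]
CHARACTERS = ["Frodo Baggins", "Hermione Granger", "Atticus Finch"]

def build_pseudonym_map(residents, characters, seed=None):
    """Randomly pair each resident with a unique fictional character name."""
    rng = random.Random(seed)
    shuffled = list(characters)
    rng.shuffle(shuffled)
    return dict(zip(residents, shuffled))

pseudonyms = build_pseudonym_map(RESIDENTS, CHARACTERS, seed=42)
# The dashboard displays metrics under the character name only, so peers
# cannot tell whose data they are viewing.
print(pseudonyms)
```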

After preliminary design of the dashboard, initial alpha testing was conducted by the project team and one outside user with expertise in dashboard development. Dashboard data were validated against manual queries from the enterprise data warehouse. Preliminary data validation was conducted by two authors (J.Y. and J.W.) and included sampling charts from 30 randomly selected residents, distributed as 10 residents per year over 3 years. For each selected resident, at least three patient records were reviewed, for a total of approximately 90 patient records. Additional data validation was conducted by manually reviewing 50 randomly selected patient charts to ensure that notes and procedures were attributed appropriately to residents and that no notes or procedures were missed. Errors were identified both in the attribution logic (e.g., a type of discharge summary was not included in our initial query) and in the diagnostic mapping to core-competency counts (e.g., Streptococcus meningitis mapping to community-acquired pneumonia). These errors were fixed at the time of identification, and data validation then continued as previously described.
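
As a rough illustration of the preliminary validation sampling, the sketch below draws 10 residents from each of 3 years and 3 charts per resident (approximately 90 charts). The rosters and patient lists are hypothetical placeholders standing in for warehouse queries.

```python
import random

# Rough sketch of the validation sampling strategy described above.
rng = random.Random(2021)
residents_by_year = {
    year: [f"{year}_res_{i}" for i in range(25)] for year in ("year1", "year2", "year3")
}

def charts_for(resident):
    """Stand-in for a warehouse query returning a resident's attributed charts."""
    return [f"{resident}_chart_{i}" for i in range(20)]

sample = []
for year, roster in residents_by_year.items():
    for resident in rng.sample(roster, 10):                  # 10 residents per year
        sample.extend(rng.sample(charts_for(resident), 3))   # at least 3 charts each

print(len(sample))  # 90 charts to compare against the dashboard's displayed values
```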



Usability and Acceptability Testing

Usability and acceptability testing was conducted with a small group of volunteer resident users using a mixed methods approach. Informed consent was obtained from all participants prior to participation. Think-aloud interviews ([Supplementary Appendix A], available in the online version) and anonymous surveys ([Supplementary Appendix B], available in the online version) were conducted, which asked residents about technical features of the dashboard application, ease of use, and their attitudes toward and intention to use the dashboard in the future.

In think-aloud interviews, users were instructed to verbalize their experience as they navigated through the dashboard. One author (J.Y.) continuously monitored the screen throughout each user's session. Users' sessions lasted between 30 minutes and 1 hour. The users were assigned three tasks that encompassed three primary functionalities of the dashboard: (1) identifying the user's three least frequent core-competency diagnoses, (2) identifying the user's average number of encounters per day worked, and (3) identifying the user's most commonly prescribed antibiotic for patients they admitted with community-acquired pneumonia (i.e., patients for whom they wrote a History and Physical note). Once the participants had completed these three tasks, they were asked to summarize their attitudes and perceptions of the dashboard application. This portion of the interview used a guide developed by the research team. Questions assessed users' perceptions of usefulness, ease of use, attitude toward using, and intention to use the dashboard, and the interviewer prompted further explanation as needed. Interview times ranged between 30 and 45 minutes. All interviews were conducted by the same team member (J.Y.) to ensure standardization of the interview process. All interviews were audio recorded and transcribed verbatim. Interviews were conducted until thematic saturation was achieved during data analysis.

Participants were additionally asked to complete an anonymous survey based on the Technology Acceptance Model.[18] Participants were emailed a link to the survey after the interview and were informed that participation in the survey was voluntary. The survey consisted of 15 Likert-scaled items assessing the perceived usefulness, ease of use, attitude toward using, and intention to use the dashboard. Questions were very similar to those in the semistructured interview but provided participants anonymity to minimize bias.



Analysis

Categories from usability interviews were identified using a content analysis approach. Two authors (J.Y. and L.H.) independently coded participant responses and subdivided responses into categories and subcategories. Disagreement was rare, but when it occurred, the authors referred to the original transcript to clarify the participants' meaning. After initial categorization, the authors confirmed that the selected quotes were most representative of each category. Commonly mentioned suggestions for improvement were identified, and changes were made to the dashboard design. All survey data were analyzed using Microsoft Excel.[19]
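
For illustration, the brief Python sketch below reproduces the kind of per-item summary reported in [Table 3] (mean response with the [minimum, maximum] range). The responses shown are hypothetical values chosen to match two of the published item summaries; the actual analysis was performed in Microsoft Excel.

```python
from statistics import mean

# Hypothetical Likert responses (1 = strongly disagree ... 4 = strongly agree)
# for 11 residents, summarized the same way as Table 3.
responses = {
    "I find the dashboard useful": [4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4],
    "I find it easy to use the dashboard": [4, 3, 4, 2, 4, 4, 3, 4, 4, 3, 4],
}

for item, scores in responses.items():
    print(f"{item}: {mean(scores):.1f} [{min(scores)}, {max(scores)}]")
```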



Results

Tool Description

In response to resident survey results, we developed a dynamic, automated dashboard that provides individualized, resident-specific patient-care and quality metrics. The dashboard is refreshed with new data nightly. Every visualization in the tool is interactive, so users can manipulate visualizations to explore them in further detail. For example, if a resident is interested in which specific ICD-10 codes are captured in a particular diagnostic category as shown in [Fig. 1], they can select that category and a tree-map will filter to show which specific diagnoses are included and their relative frequency. The dashboard comprises four pages: Core-Competency Counts, Demographics, Quality Metrics, and Productivity and Efficiency Metrics. The Core-Competency Counts page ([Fig. 1]) provides individualized core-competency diagnosis counts compared with the rolling average for pediatric interns over the last three academic years. The Demographics page ([Fig. 2]) provides a basic overview of the age, ethnicity, language, and home city and country of all patients a resident has cared for. The Quality Metrics page ([Fig. 3]) includes the rate of antibiotic prescriptions in patients admitted with a diagnosis of bronchiolitis (a viral illness for which antibiotics are typically not indicated), the most frequently prescribed antibiotics in patients admitted for community-acquired pneumonia, the frequency of rapid response calls (a mechanism for intensive care evaluation and transfer), length of stay by diagnosis, and readmission rates. Finally, the Productivity and Efficiency Metrics page ([Fig. 4]) includes a resident's total number of patient encounters, unique patient encounters, average patient encounters per day, counts of different note types, and counts of procedure notes. All of these are compared with the average pediatric intern over the last three academic years, as previously described.

Fig. 1 Pediatric hospital medicine core-competency counts. This figure is a screenshot of a visualization from the interactive dashboard which provides core-competency diagnostic counts for pediatric hospital medicine (PHM) for an individual pediatric intern or postgraduate year (PGY) 1 resident. It includes comparisons to average counts for PGY1s from the three prior full academic years. The resident in this example could choose to focus on those diagnoses where he is significantly below average, like bronchiolitis, or other diagnoses like poisoning/ingestion which he has never cared for during his PHM rotations. This was the most requested piece of data that the residents mentioned during our needs assessment.
Fig. 2 Demographics summary page view. This screenshot of the interactive dashboard displays a broad overview of patient demographics for patients cared for by this resident. In the upper left corner, there is a histogram of age distributions. On the right side of the screen, there is a breakdown of patient ethnicity and primary language spoken. In the lower left corner, there is a world map with patients' home cities.
Fig. 3 Example quality metric: antibiotics in community-acquired pneumonia (CAP). This screenshot of the interactive dashboard visualization shows an example of a resident-specific quality metric: the types of antibiotics prescribed for patients admitted with a diagnosis of community-acquired pneumonia. This figure only includes patients that this resident admitted (patients for whom he wrote a History and Physical note).
Fig. 4 Example Descriptive and Productivity Metrics. This screenshot of the interactive dashboard illustrates descriptive and productivity metrics for a pediatric intern and provides comparisons to postgraduate year (PGY) 1 averages over the last three academic years. We initially designed this view without comparisons, but comparisons were frequently requested in our usability surveys.


Usability and Acceptability Testing Results

Eleven resident users were selected to participate on a first-come, first-served basis. Of these resident users, four were pediatric interns (PGY1), four were second-year residents, and three were third-year residents. Several changes were made based on resident input to improve the ease of use and comprehensibility of the dashboard. First, many residents had trouble navigating between pages of the dashboard. To address this, we emphasized page navigation in a brief introductory video that is linked from the main page of the dashboard. Second, many residents wanted more information about how each metric was calculated. To quickly orient them, we added explanatory text to most visualizations and created a detailed documentation page containing in-depth explanations of how specific metrics were calculated. Finally, residents repeatedly asked for more peer comparison data on many of the visualizations, so we added this type of comparative data wherever possible (see [Fig. 4] for an example).



Content Analysis Results

In semistructured interviews, the most frequently mentioned proposed use of the dashboard was to support residents' self-directed learning ([Table 2]). Specifically, 10 of 11 residents mentioned that they would like to use the core-competency diagnostic counts to review their current diagnostic exposure and seek out learning opportunities for less frequently encountered diagnoses. The most commonly suggested changes were to add more peer comparison data for the productivity and efficiency metrics and to increase the amount of patient-level data provided for quality metrics. Residents overall had few concerns or fears about use or implementation of the dashboard, but 3 of 11 residents felt that certain quality metrics are not reflective of decisions made by the individual resident. For example, length of stay may reflect attendings' decisions regarding discharge timing more than the actions of the individual discharging resident. Finally, residents' preferred setting and frequency of dashboard use varied slightly. Most residents (7 of 11) indicated that they would likely refer to the dashboard once or twice per PHM rotation. Similarly, most (8 of 11) felt comfortable reviewing the dashboard with a residency leader or advisor, and most (9 of 11) would feel comfortable sharing the dashboard results with peers, upper-level residents, or mentors.

Table 2

Semistructured interview content analysis

Category

Subcategory

Illustrative Quote

Dashboard content

Self-directed learning

Guides patient selection with deficiencies identified through patient counts (10/11 mentioned)

U3: “I love the core competency assessment - identifying patients I need to pick up in the future is incredibly helpful”

U6: “In the beginning of intern year, I was […] keeping a list, like on my phone, […], but it very quickly became too much to handle it […] so this was literally what I would have wanted to have or do in the beginning”

Allows reflection on quality metrics (5/11)

U3: “The quality metrics I think are really helpful, like antibiotics in bronchiolitis makes me more conscious of that in a way I'm not sure I was before”

U7: “Quality metrics are really helpful […] those are like very specific, actionable things”

Helps with overall personal achievement review (4/11)

U3: “I think it's really helpful in terms of addressing milestones, especially for yourself [… which is] one of the things you're expected to do. […] So this is a tool. I imagine if I had as an intern in my third or second PHM, I can say one of my goals for this week is to pick up as many XYZ patients as possible or to take a look at how many chest x-rays I'm ordering”

U11: “Personally, just for overall progress as a resident, I think it would be a really nice sort of objective way to evaluate where you are”

Objectivity

Fills gap of current lack of objective feedback (3/11)

U4: “This dashboard is the best form of objective feedback that I've received in the two years of residency so far. I feel like feedback is usually positive and I would like to be able to look at myself in a more realistic light and see what I can be working on”

Countering self-doubt with objective data (3/11)

U9: “I think personally, this is really helpful because I know that I can get into imposter syndrome mood, or just very negative headspace about my performance and having actual data that can either verify what I'm saying, or counter it is helpful”

U10: “I think it would be a really cool opportunity as an intern, for you to see that number of patients you saw and be like, 'well, dang, I really did work hard and I saw all these patients and here's what I learned'”

Avoiding bias (1/11)

U5: “If a reviewer is giving feedback to someone there's not a lot of ways to counteract biases. So this also could be used. If I was faculty and I'm giving feedback, are my data and feedback different. If you're a female, is my feedback different, if you are a minority or, or person of color. So I feel like this could be used to build accountability as well”

Concerns

Fear of negative peer comparisons (1/11)

U9: “One thing that I would be nervous about is people using it to belittle themselves or compare themselves to other people”

Metrics not relevant to residents (3/11)

U2: “Some of the metrics I don't feel responsible for (readmissions, length of stay) are less helpful”

Usability

Learning to use

Overall ease-of-use (3/11)

U4: “I mean, this is very user-friendly in general”

Need for video (1/11)

U1: “I definitely wouldn't have been able to figure out much without [the introductory video]”

Need for increased user documentation (1/11)

U7: “Maybe if there was a disclaimer or an interpretation of how to use the data, it would be less intimidating”

Preferred use of dashboard

Frequency of use (11/11)

Preferred frequency varied: weekly when on PHM (2), once or twice per PHM block (7), biannually (2)

Sharing with peers/upper-levels/mentors (9/11)

9/11 users felt comfortable sharing with the aforementioned groups

U8: “I think I would feel comfortable sharing it with almost anyone. Cause I think it can be a really good way to identify areas where you're pretty competent and areas where you need some more work so that then you can work together as a team to kind of gain exposures to maybe things you haven't seen”

Formal review with residency leadership/advisors/attendings (8/11)

8/11 users preferred formal review of the dashboard with residency leaders or career advisors

U10: “This seems a lot more personalized obviously. And so if someone was really trying to think about like your goals and like how the gaps reflected by this dashboard where the strengths are affected by this dashboard influence your decisions in like the actual clinical setting, then I think it could be a really cool way to grow”

Suggested improvements

Suggested additions or changes

Increased peer comparison data (4/11)

U3: “I think the comparison is what helps me more than anything else, to be honest”

Increased patient level data (4/11)

U6: “The current limitation [is] not being able to link patient diagnoses to their MRNs”

Suggested uses

Other ideas for use

Tailoring teaching topics (1/11)

U10: “As a PHM upper level to have this data for my interns because then I can kind of tailor my education to what they are doing and have done”

Identifying struggling learners or gaps in programs (1/11)

U2: “I think it would be most useful for people who are struggling or people, you know, or in situations like now with COVID feel like residents aren't getting the experience they need even the most, most helpful”

Use in recruitment of new residents (2/11)

U3: “It could be a way that [our program] could show off their residency program to applicants and [say] we see this much volume on average in our residency and this many of this type of patients”

Use for job/fellowship applications (1/11)

U9: “I wonder if it would even have implications to future employers being able to say, I have seen X number of this diagnosis, I've been interested in this subspecialty, I've gone out of my way to seek out these patients as well”

Abbreviations: COVID, coronavirus disease; MRN, medical record number; PHM, pediatric hospital medicine.


Note: This table categorizes resident responses in the semistructured interviews (n = 11). The subcategories are sorted from most frequently mentioned to least frequently mentioned. Illustrative quotes for each subcategory are included, which have been edited for brevity and clarity.




Survey Results

All 11 residents who participated in usability and acceptability testing also completed the anonymous survey. Surveyed users overall found the dashboard useful and easy to use, had a positive attitude toward using it, and expressed intention to use the dashboard in the future ([Table 3]). Most encouragingly, 100% of surveyed users "strongly agreed" that the dashboard was useful. When asked how likely they were to recommend the dashboard to a coresident, the average response was 96 on a scale of 0 to 100. Comments in the survey closely aligned with those expressed verbally during the semistructured interviews.

Table 3

Technology acceptance model dashboard survey (Likert response scale 1–4)

Domain | Question | Average response [Min, Max]
PU | I find the dashboard useful | 4.0 [4, 4]
PU | Using the dashboard makes it easier to identify knowledge gaps and areas for improvement | 3.9 [3, 4]
PU | Using the dashboard makes it easier to review my clinical exposures (e.g., counts of core-competency diagnoses) | 3.9 [3, 4]
PU | Using the dashboard would help me review the clinical exposures (e.g., counts of core-competency diagnoses) of interns I am supervising | 3.7 [3, 4]
PU | Using the dashboard would help me identify knowledge gaps and areas for improvement of interns I am supervising | 3.7 [2, 4]
PEU | It is easy to become skillful at using the dashboard | 3.5 [2, 4]
PEU | I find it easy to use the dashboard | 3.5 [2, 4]
ATU | My experience using the dashboard is favorable | 3.8 [3, 4]
ATU | I think it is valuable to use the dashboard | 3.8 [3, 4]
IU | I plan to use the dashboard to review my own personal clinical exposures (e.g., counts of core-competency diagnoses) | 3.9 [3, 4]
IU | I plan to use the dashboard to identify knowledge gaps and areas for improvement for myself | 3.9 [3, 4]
IU | I will review my personal clinical exposure data (e.g., counts of core-competency diagnoses) more frequently by using the dashboard | 3.9 [3, 4]
IU | I will self-identify knowledge gaps and areas for improvement more frequently by using the dashboard | 3.9 [3, 4]
– | How likely are you to recommend using the dashboard to a coresident? (Scale of 0–100) | 95.8 [73, 100]

Abbreviations: ATU, attitude toward using; IU, intention to use; PEU, perceived ease of use; PU, perceived usefulness.


Note: This table summarizes resident responses to an anonymous survey based on the technology acceptance model. Eleven resident users completed the survey.


4 = strongly agree, 3 = agree, 2 = disagree, 1 = strongly disagree.




Discussion

The purpose of this study was to assess the feasibility, acceptability, and usability of an automated dashboard that provides pediatric residents with individualized patient-care and quality metrics. We believe this is the first study to describe the creation of a dashboard that provides pediatric resident users with these types of metrics. Not only is the provision of patient-care and quality metrics required by accrediting bodies, but residents themselves desire more objective data, as evidenced by the results of our needs assessment and prior studies.[1] [2] [6] In our semistructured interviews, residents repeatedly mentioned that this type of patient-care data would allow them to critically review their practice patterns and would be helpful in developing their individualized learning plans. Nearly all surveyed residents indicated a desire to use the core-competency diagnostic counts to help prioritize their learning efforts, especially with regard to directing their future patient-care encounters or electives. Several residents mentioned that this is the first objective data they have been provided by the residency program. Interestingly, some residents also commented on finding the metrics overall reassuring and described this type of data as being useful to combat "imposter syndrome." Imposter syndrome, a common phenomenon among residents in which one has a persistent fear of being inadequate, has been shown to be a major contributor to burnout among physicians and trainees.[20] [21] Provision of this type of objective data may also help overcome the previously well-documented racial and gender bias in performance evaluation in medicine,[22] [23] [24] which was also mentioned by one resident tester.

Regarding usability and acceptability, survey and interview results indicate that residents overall had very positive experiences when using this dashboard. Residents rated the dashboard highly regarding ease-of-use and usefulness, with a positive attitude toward using it. They universally indicated that they would like to use the dashboard regularly in the future and would strongly recommend use of the dashboard to their coresidents. Prior studies have described barriers in developing dashboards for use by trainees, including challenges with patient attribution which can lead trainees to feel that metrics are not as meaningful.[5] [25] [26] [27] [28] In our study, resident users seemed overall to understand the limitations of the dashboard, but similarly reported that some metrics were less meaningful on an individual basis due to patient attribution limitations. Resident comments indicated that they felt some metrics were more reflective of decisions made by the care team rather than an individual, which is consistent with findings of other studies regarding the challenges of creating resident-specific performance metrics.[5] [26] [29] Another study has prioritized a list of resident-specific quality metrics which could mitigate this issue, but these metrics primarily focused on content captured within resident documentation (e.g., work of breathing or response to therapy documentation).[30] While these metrics would be very specific to the work of an individual resident, these data are very challenging to integrate into an automated tool without sophisticated natural language processing, so we were not able to include these metrics in this iteration of our dashboard.

Limitations

There are several limitations of this study. First, this was a pilot study, so our dashboard only provides data for patient encounters during residents' PHM rotation, which accounts for 4 months in a typical 36-month residency. Furthermore, our patient attribution was based exclusively on note writing, which may not perfectly reflect all patients cared for by a resident.[25] [28] A resident may have participated meaningfully in the care of a patient, but if no note was written (for example, if care occurred overnight) then this would not be captured by the dashboard. Additionally, at our institution upper-level residents (PGY2 and above) typically sign fewer notes than interns on this rotation, which makes the results of some parts of the dashboard less relevant as residents advance in their training.

In terms of core-competency diagnoses, we used ICD-10 codes to quantify diagnoses, which may not always accurately reflect the true diagnosis of a patient for several reasons including: the patient has many diagnoses and not every diagnosis was entered into the EHR by the care team, no appropriate ICD-10 code exists, or the patient was admitted with generic symptoms (e.g., fever) and after a diagnosis was made the ICD-10 code was not updated.

With regard to usability testing, we conducted this testing with 11 resident users distributed across years of training, but this is subject to sampling bias since we invited interested volunteers to pilot our tool. Additionally, social desirability bias may have affected user responses during usability testing and interviews because participants were being observed by a member of the development team. The subsequent anonymous survey was administered to attempt to minimize this bias, and survey results were very similar to responses given during the usability interviews.

Finally, creating this type of dashboard is labor intensive and requires institutional technical support and significant technical expertise, which may make it challenging to implement a similar tool at a smaller program or a program with fewer resources.



Future Directions

Following the successful implementation of the dashboard for the PHM rotation at our institution, we plan to expand the included resident rotations to capture the broader experience of pediatric residents. The most commonly requested areas for expansion were the emergency department and the intensive care unit. Once additional pediatric residency rotations are included in the dashboard, we would like to incorporate routinely scheduled review of dashboard metrics into residency feedback and mentorship sessions. We believe that this type of patient-care and quality data could not only be used by residents to direct their learning efforts, but could in the future also be utilized by program directors in designing and evaluating their residency structure and by the ACGME for program oversight to ensure that trainees are receiving the breadth and depth of experience needed for adequate training. More study is needed to determine whether such dashboards will have an impact on resident quality-of-care metrics or breadth of clinical exposure.



Conclusion

We describe a unique solution to currently existing gaps in pediatric residency programs' ability to provide personalized, objective, and readily available patient-care and quality data to residents. By capitalizing on EHR and analytics capabilities, residency programs can develop automated dashboards capable of providing trainees with meaningful data regarding their patient care.



Clinical Relevance Statement

Despite the ACGME's requirements to provide residents with individualized performance data and quality metrics, there is very little research on this topic. Our article describes a way to create and display important individualized patient-care data to pediatric residents in an automated manner. Our results will help guide other residency training programs as they consider the types of data that they wish to provide to pediatric residents.



Multiple Choice Questions

  1. Which type of testing is conducted by the development team prior to testing by end users?

    a. Stress testing

    b. Performance testing

    c. Beta testing

    d. Alpha testing

    Correct Answer: The correct answer is option d. Alpha testing occurs when the internal development team tests the product before either usability testing or other testing by end users.

  2. What type of testing occurs when you are asking users to try and complete typical tasks of a newly developed tool while observers watch and take notes?

    a. Sanity testing

    b. Usability testing

    c. Integration testing

    d. Acceptance testing

    Correct Answer: The correct answer is option b. Usability testing is described here where the goal is to have end users walk through typical use scenarios and observers collect information to identify any usability problems prior to full deployment.



Conflict of Interest

None declared.

Acknowledgments

We would like to acknowledge the Texas Children's Hospital Information Services department for their generous support of this project with both technical resources and technician time and guidance; without their support, this project would not have been possible.

Protection of Human and Animal Subjects

Our institutional review board reviewed and approved this study.


Supplementary Material

References

  • 1 Accreditation Council for Graduate Medical Education. Common Program Requirements. Accessed January 13, 2021 at: https://www.acgme.org/What-We-Do/Accreditation/Common-Program-Requirements
  • 2 Swing SR. The ACGME outcome project: retrospective and prospective. Med Teach 2007; 29 (07) 648-654
  • 3 Accreditation Council for Graduate Medical Education. Pediatrics Milestones. Accessed November 23, 2020 at: https://www.acgme.org/portals/0/pdfs/milestones/pediatricsmilestones.pdf
  • 4 Dowding D, Randell R, Gardner P. et al. Dashboards for improving patient care: review of the literature. Int J Med Inform 2015; 84 (02) 87-100
  • 5 Rosenbluth G, Tong MS, Condor Montes SY, Boscardin C. Trainee and program director perspectives on meaningful patient attribution and clinical outcomes data. J Grad Med Educ 2020; 12 (03) 295-302
  • 6 Wright SM, Durbin P, Barker LR. When should learning about hospitalized patients end? Providing housestaff with post-discharge follow-up information. Acad Med 2000; 75 (04) 380-383
  • 7 Ehrenfeld JM, McEvoy MD, Furman WR, Snyder D, Sandberg WS. Automated near-real-time clinical performance feedback for anesthesiology residents: one piece of the milestones puzzle. Anesthesiology 2014; 120 (01) 172-184
  • 8 Wheeler K, Baxter A, Boet S, Pysyk C, Bryson GL. Performance feedback in anesthesia: a post-implementation survey. Can J Anaesth 2017; 64 (06) 681-682
  • 9 Levin JC, Hron J. Automated reporting of trainee metrics using electronic clinical systems. J Grad Med Educ 2017; 9 (03) 361-365
  • 10 Bachur RG, Nagler J. Use of an automated electronic case log to assess fellowship training: tracking the pediatric emergency medicine experience. Pediatr Emerg Care 2008; 24 (02) 75-82
  • 11 Accreditation Council for Graduate Medical Education (ACGME). ACGME Program Requirements for Graduate Medical Education in Pediatrics. Published online July 1, 2019. Available at: https://www.acgme.org/globalassets/pfassets/programrequirements/320_pediatrics_2021v2.pdf
  • 12 Leyenaar JK, Ralston SL, Shieh MS, Pekow PS, Mangione-Smith R, Lindenauer PK. Epidemiology of pediatric hospitalizations at general hospitals and freestanding children's hospitals in the United States. J Hosp Med 2016; 11 (11) 743-749
  • 13 Qlik Sense [Computer Software]. Version 3.1. King of Prussia, PA: Qlik; 2020
  • 14 Stucky ER, Ottolini MC, Maniscalco J. Pediatric hospital medicine core competencies: development and methodology. J Hosp Med 2010; 5 (06) 339-343
  • 15 World Health Organization. ICD-10: International Statistical Classification of Diseases and Related Health Problems: Tenth Revision. World Health Organization; 2004. Accessed February 19, 2021 at: https://apps.who.int/iris/handle/10665/42980
  • 16 Shen MW, Percelay J. Quality measures in pediatric hospital medicine: Moneyball or looking for Fabio?. Hosp Pediatr 2012; 2 (03) 121-125
  • 17 Parikh K, Hall M, Mittal V. et al. Establishing benchmarks for the hospitalized care of children with asthma, bronchiolitis, and pneumonia. Pediatrics 2014; 134 (03) 555-562
  • 18 Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. Manage Inf Syst Q 1989; 13 (03) 319-340
  • 19 Microsoft Excel for Mac [Computer Software]. Version 16.57. Redmond, WA: Microsoft Corporation; 2020
  • 20 Mullangi S, Jagsi R. Imposter syndrome: treat the cause, not the symptom. JAMA 2019; 322 (05) 403-404
  • 21 Gottlieb M, Chung A, Battaglioli N, Sebok-Syer SS, Kalantari A. Impostor syndrome among physicians and physicians in training: a scoping review. Med Educ 2020; 54 (02) 116-124
  • 22 Liebschutz JM, Darko GO, Finley EP, Cawse JM, Bharel M, Orlander JD. In the minority: black physicians in residency and their experiences. J Natl Med Assoc 2006; 98 (09) 1441-1448
  • 23 Nunez-Smith M, Ciarleglio MM, Sandoval-Schaefer T. et al. Institutional variation in the promotion of racial/ethnic minority faculty at US medical schools. Am J Public Health 2012; 102 (05) 852-858
  • 24 Dayal A, O'Connor DM, Qadri U, Arora VM. Comparison of male vs female resident milestone evaluations by faculty during emergency medicine residency training. JAMA Intern Med 2017; 177 (05) 651-657
  • 25 Schumacher DJ, Wu DTY, Meganathan K. et al. A feasibility study to attribute patients to primary interns on inpatient ward teams using electronic health record data. Acad Med 2019; 94 (09) 1376-1383
  • 26 Smirnova A, Sebok-Syer SS, Chahine S. et al. Defining and adopting clinical performance measures in graduate medical education: where are we now and where are we going?. Acad Med 2019; 94 (05) 671-677
  • 27 Epstein JA, Noronha C, Berkenblit G. Smarter screen time: integrating clinical dashboards into graduate medical education. J Grad Med Educ 2020; 12 (01) 19-24
  • 28 Mai MV, Orenstein EW, Manning JD, Luberti AA, Dziorny AC. Attributing patients to pediatric residents using electronic health record features augmented with audit logs. Appl Clin Inform 2020; 11 (03) 442-451
  • 29 Sebok-Syer SS, Pack R, Shepherd L. et al. Elucidating system-level interdependence in electronic health record data: what are the ramifications for trainee assessment?. Med Educ 2020; 54 (08) 738-747
  • 30 Schumacher DJ, Holmboe ES, van der Vleuten C, Busari JO, Carraccio C. Developing resident-sensitive quality measures: a model from pediatric emergency medicine. Acad Med 2018; 93 (07) 1071-1078

Address for correspondence

Julia Yarahuan, MD
Department of General Pediatrics, Boston Children's Hospital
300 Longwood Avenue, Boston, MA 02115
United States   

Publication History

Received: 03 October 2021

Accepted: 05 February 2022

Article published online:
16 March 2022

© 2022. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany
