Appl Clin Inform 2018; 09(04): 791-802
DOI: 10.1055/s-0038-1675179
Case Report
Georg Thieme Verlag KG Stuttgart · New York

User Testing an Information Foraging Tool for Ambulatory Surgical Site Infection Surveillance

Dean J. Karavite
1   Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
,
Matthew W. Miller
1   Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
,
Mark J. Ramos
1   Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
,
Susan L. Rettig
2   Department of Infection Prevention and Control, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
,
Rachael K. Ross
3   Division of Infectious Disease, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
,
Rui Xiao
4   Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States
,
Naveen Muthu
1   Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
5   Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, United States
,
A. Russell Localio
4   Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States
,
Jeffrey S. Gerber
3   Division of Infectious Disease, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
5   Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, United States
,
Susan E. Coffin
3   Division of Infectious Disease, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
5   Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, United States
,
Robert W. Grundmeier
1   Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
5   Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, United States
Funding This project was supported by grant R01HS020921 (Electronic Surveillance for Wound Infections after Ambulatory Pediatric Surgery) from the Agency for Healthcare Research and Quality. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality.

Address for correspondence

Dean J. Karavite, MSI
Department of Biomedical and Health Informatics (DBHi)
Children's Hospital of Philadelphia, 2716 South Street, Philadelphia, PA 19146
United States   

Publication History

Received: 02 May 2018

Accepted: 04 September 2018

Publication Date: 24 October 2018 (online)

Abstract

Background Surveillance for surgical site infections (SSIs) after ambulatory surgery in children requires a detailed manual chart review to assess criteria defined by the National Healthcare Safety Network (NHSN). Electronic health records (EHRs) impose an inefficient search process in which infection preventionists must manually review every postsurgical encounter within 30 days of surgery. Using text mining and business intelligence software, we developed an information foraging application, the SSI Workbench, to visually present which postsurgical encounters included SSI-related terms and synonyms, antibiotic orders, and culture orders.

Objective This article compares the Workbench and EHR on four dimensions: (1) effectiveness, (2) efficiency, (3) workload, and (4) usability.

Methods Comparative usability test of Workbench and EHR. Objective test metrics are time per case, encounters reviewed per case, time per encounter, and retrieval of information meeting NHSN definitions. Subjective measures are cognitive load using the National Aeronautics and Space Administration (NASA) Task Load Index (NASA TLX), and a questionnaire on system usability and utility.

Results Eight infection preventionists participated in the test. There was no difference in effectiveness as subjects retrieved information from all cases, using both systems, to meet the NHSN criteria. There was no difference in efficiency in time per case between the Workbench and EHR (8.58 vs. 7.39 minutes, p = 0.36). However, with the Workbench subjects opened fewer encounters per case (3.0 vs. 7.5, p = 0.002), spent more time per encounter (2.23 vs. 0.92 minutes, p = 0.002), rated the Workbench lower in cognitive load (NASA TLX, 24 vs. 33, p = 0.02), and significantly higher in measures of usability.

Conclusion Compared with the EHR, the Workbench was more usable and reduced cognitive load. In overall efficiency, the Workbench did not save time, but demonstrated a shift from between-encounter foraging to within-encounter foraging and was rated as significantly more efficient. Our results suggest that infection surveillance can be better supported by systems applying information foraging theory.



Background and Significance

Surgical site infections (SSIs) are the second most commonly reported healthcare-associated infection.[1] Ambulatory procedures account for an estimated 75% of all surgeries, yet little is known about SSI in pediatric patients undergoing ambulatory surgery.[2] Strategies to prevent SSI depend upon robust and efficient surveillance processes to ensure data are accurate and actionable.[3] [4] While reporting is not yet mandatory in all states, established criteria defining SSI are published by the Centers for Disease Control and Prevention's National Healthcare Safety Network (NHSN).[5]

Superficial SSIs, the most common type of SSI after ambulatory surgery, are defined by the NHSN as occurring within 30 days of surgery, involving only skin and subcutaneous tissue, and including at least one of the following: purulent drainage; a positive culture; a reopened incision with pain/tenderness, swelling, or erythema (redness); or diagnosis by a surgeon, physician, or other designee.[5] In performing SSI surveillance, the infection preventionist (IP) must perform a highly detailed chart review to determine whether a case meets the NHSN criteria.
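As a rough illustration, the conditional logic of the superficial SSI definition above can be sketched as a simple check. This is a hypothetical sketch only; the function and finding labels are ours, not NHSN terminology, and skin/subcutaneous involvement is assumed already established:

```python
from datetime import date

# Hypothetical sketch of the NHSN superficial SSI logic described above.
# Finding labels are illustrative, not official NHSN terms.
QUALIFYING_FINDINGS = {
    "purulent_drainage",
    "positive_culture",
    "reopened_incision_with_symptoms",  # pain/tenderness, swelling, or erythema
    "clinician_diagnosis",
}

def meets_superficial_ssi(surgery_date: date, event_date: date,
                          findings: set) -> bool:
    """True if the event falls within 30 days of surgery and at least one
    qualifying finding is present (skin/subcutaneous involvement assumed)."""
    within_window = 0 <= (event_date - surgery_date).days <= 30
    return within_window and bool(findings & QUALIFYING_FINDINGS)

# Example: purulent drainage 12 days after surgery meets the definition;
# the same finding 45 days after surgery does not.
print(meets_superficial_ssi(date(2018, 5, 1), date(2018, 5, 13),
                            {"purulent_drainage"}))   # True
print(meets_superficial_ssi(date(2018, 5, 1), date(2018, 6, 15),
                            {"purulent_drainage"}))   # False
```

The chart review described in the text exists precisely because the `findings` input to a check like this is buried in unstructured notes.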

Electronic health record (EHR) data have the potential to increase the accuracy and efficiency of SSI identification.[5] [6] Automated SSI surveillance based on querying discrete EHR data has been evaluated for adult inpatient surgical procedures.[7] However, the NHSN criteria for superficial SSIs from ambulatory surgery are often contained in unstructured notes, and fully automated surveillance is difficult, necessitating a semiautomated approach.[8] Information foraging theory provides a framework to design systems that support the search and retrieval tasks required in semiautomated surveillance.[9]



Objective

This article describes a study to determine whether an EHR-embedded tool, built with business intelligence software, improves the efficiency and reduces the workload of case reviews for the IP performing ambulatory SSI surveillance.



Workbench Development

We observed three IPs performing SSI chart reviews and performed a cognitive task analysis.[10] The case review task flow is summarized in seven steps: (1) identify date of surgery; (2) identify all clinical encounters within 30 days of surgery; (3) review encounter for SSI relevant information; (4) record/memorize any SSI information discovered; (5) repeat steps 3 to 4 for all encounters; (6) compile SSI information into a summary or “patient narrative”; and (7) compare narrative to NHSN definitions to make the determination. The steps are shown in [Fig. 1].

Fig. 1 Task flow comparison between electronic health record (EHR) and EHR with surgical site infection (SSI) Workbench.

In reviewing a single patient case, the EHR provides only a high-level table view of postsurgical encounters, so the IP is forced to review all encounters to identify which contain information relevant to SSI determination, even though many encounters may be unrelated to the surgery or a potential infection. Steps 3 to 5, a repetitive and imprecise search and retrieval task, therefore impose an extraneous cognitive load[11] secondary to the primary goal of the case review and present a clear opportunity for computer assistance. We developed a design strategy to offload this work to the computer ([Fig. 1]) and determined that we could meet the functional requirements using commercially available business intelligence software (Qlik, Malvern, Pennsylvania, United States) with vendor-supported EHR integration (Epic Systems, Verona, Wisconsin, United States).

The SSI Workbench was designed to support SSI surveillance by applying concepts from information foraging theory to provide visual indicators of high-yield encounters or “patches” of SSI-related information.[9] [12] [13] Similar to the EHR, the Workbench displays a table presenting all encounters within 30 days of the surgical procedure, but adds four columns displaying the presence of SSI-related information: (1) SSI keywords, the presence of any of 70 SSI terms and synonyms, plus a count of the occurrences of each; (2) culture orders; (3) infection diagnoses; and (4) antibiotic orders. The SSI keywords are defined in a separate file of regular expressions that can be updated with new terms, abbreviations, or even misspellings ([Appendix A]). Any cell in the encounter table containing SSI-relevant information is highlighted in red, and all encounters provide a hyperlink to the encounter note in the EHR ([Fig. 2]). This design approach indicates the high-yield information patches but does not limit the user's ability to review other encounters of interest. For example, IPs almost always review the initial surgical encounter and any follow-up surgical encounter.

Appendix A

SSI Terms and Regular Expressions: Terms/synonyms and corresponding regular expressions used by the SSI Workbench

SSI_TERM                  REGEXP
drainage                  (\s|^)drainage(\s|\z|\W|$)
infected                  (\s|^|\?)infected(\s|\z|\W|$)
infection                 (\s|^)infection(\s|\z|\W|$)
cultured                  (\s|^)cultured(\s|\z|\W|$)
culture                   (\s|^)culture(\s|\z|\W|$)
positive culture          (\s|^)positive culture(\s|\z|\W|$)
culture positive          (\s|^)culture positive(\s|\z|\W|$)
negative culture          (\s|^)negative culture(\s|\z|\W|$)
culture negative          (\s|^)culture negative(\s|\z|\W|$)
wound reopened            (\s|^)wound reopened(\s|\z|\W|$)
incision reopened         (\s|^)incision reopened(\s|\z|\W|$)
reopened                  (\s|^)reopened(\s|\z|\W|$)
irrigation                (\s|^)irrigation(\s|\z|\W|$)
debridement               (\s|^)debridement(\s|\z|\W|$)
I&D                       (\s|^)I\s?\&\s?D(\s|\z|\W|$)
incision and drainage     (\s|^)incision and drainage(\s|\z|\W|$)
lance                     (\s|^)lance(\s|\z|\W|$)
purulent                  (\s|^)purulent(\s|\z|\W|$)
purulence                 (\s|^)purulence(\s|\z|\W|$)
purulent drainage         (\s|^)purulent drainage(\s|\z|\W|$)
thick drainage            (\s|^)thick drainage(\s|\z|\W|$)
foul drainage             (\s|^)foul drainage(\s|\z|\W|$)
foul smelling drainage    (\s|^)foul smelling drainage(\s|\z|\W|$)
opaque drainage           (\s|^)opaque drainage(\s|\z|\W|$)
pus                       (\s|^)pus(\s|\z|\W|$)
seroma                    (\s|^)seroma(\s|\z|\W|$)
pain                      (\s|^)pain(\s|\z|\W|$)
painful                   (\s|^)painful(\s|\z|\W|$)
tender                    (\s|^)tender(\s|\z|\W|$)
tenderness                (\s|^)tenderness(\s|\z|\W|$)
localized swelling        (\s|^)localized swelling(\s|\z|\W|$)
edema                     (\s|^)edema(\s|\z|\W|$)
swelling                  (\s|^)swelling(\s|\z|\W|$)
swollen                   (\s|^)swollen(\s|\z|\W|$)
locally swelling          (\s|^)locally swelling(\s|\z|\W|$)
erythema                  (\s|^)erythema(\s|\z|\W|$)
red                       (\s|^)red(\s|\z|\W|$)
redness                   (\s|^)redness(\s|\z|\W|$)
reddish                   (\s|^)reddish(\s|\z|\W|$)
ruddy                     (\s|^)ruddy(\s|\z|\W|$)
inflammation              (\s|^)inflammation(\s|\z|\W|$)
inflamed                  (\s|^)inflamed(\s|\z|\W|$)
injected                  (\s|^)injected(\s|\z|\W|$)
heat                      (\s|^)heat(\s|\z|\W|$)
warm                      (\s|^)warm(\s|\z|\W|$)
warmth                    (\s|^)warmth(\s|\z|\W|$)
clinda                    (\s|^)clinda
ceph                      (\s|^)ceph
vanc                      (\s|^)vanc
zithro                    (\s|^)zithro
zpack                     (\s|^)z.?pack
azithro                   (\s|^)azithro
z-pack                    (\s|^)z-pack(\s|\z|\W|$)
antibiotic                (\s|^)antibiotic(\s|\z|\W|$)
febrile                   (\s|^)febrile(\s|\z|\W|$)
elevated temp             (\s|^)elevated temp(\s|\z|\W|$)
elevated temperature      (\s|^)elevated temperature(\s|\z|\W|$)
tactile temp              (\s|^)tactile temp(\s|\z|\W|$)
tactile temperature       (\s|^)tactile temperature(\s|\z|\W|$)
dehis                     (\s|^)dehis(\s|\z|\W|$)
dehisc                    (\s|^)dehisc(\s|\z|\W|$)
dehiscence                (\s|^)dehiscence(\s|\z|\W|$)
pin site                  (\s|^)pin site(\s|\z|\W|$)
celluliti(c|s)            (\s|^)cellulitic(\s|\z|\W|$)
suture abscess            (\s|^)suture abscess(\s|\z|\W|$)
stab wound                (\s|^)stab wound(\s|\z|\W|$)
trocar                    (\s|^)trocar(\s|\z|\W|$)
trocar site               (\s|^)trocar site(\s|\z|\W|$)
burn                      (\s|^)burn(\s|\z|\W|$)
burn wound                (\s|^)burn wound(\s|\z|\W|$)
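As a minimal sketch of how a term file like the one above might be applied to an encounter note, the following scans for a small illustrative subset of the terms. The subset, function name, and note text are ours; note also that Python's `re` uses `\Z` where the table's Perl-style patterns use `\z`:

```python
import re

# Illustrative subset of the Appendix A term file (\Z substituted for \z).
SSI_TERMS = {
    "drainage": r"(\s|^)drainage(\s|\Z|\W|$)",
    "purulent": r"(\s|^)purulent(\s|\Z|\W|$)",
    "clinda":   r"(\s|^)clinda",
}

def keyword_counts(note_text: str) -> dict:
    """Return {term: occurrence count} for SSI terms found in a note,
    mirroring the per-term counts displayed in the Workbench keyword column."""
    counts = {}
    for term, pattern in SSI_TERMS.items():
        hits = re.findall(pattern, note_text, flags=re.IGNORECASE)
        if hits:
            counts[term] = len(hits)
    return counts

note = "Purulent drainage at the incision; started clindamycin."
print(keyword_counts(note))  # {'drainage': 1, 'purulent': 1, 'clinda': 1}
```

As the test instructions in Appendix B caution, a hit only flags a high-yield encounter; a phrase such as "no purulent drainage" still matches and must be resolved by reading the note.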

Fig. 2 The surgical site infection (SSI) Workbench displaying all medical encounters experienced by a single patient within 30 days after a surgical procedure. Highlighted cells indicate SSI-relevant data present in an encounter. Of the 22 postsurgical encounters for this patient, only 5 have potentially relevant SSI information.

We developed scenario-based mockups of the Workbench (Axure, San Diego, California, United States) for exploratory testing[14] using pluralistic walkthroughs[15] with two IPs. The walkthroughs revealed no task errors, and the IPs reported a high level of satisfaction with system utility and usability. These results led us to develop the Workbench and plan for summative user testing.



Methods

We performed comparative user testing[16] between the EHR with Workbench and the EHR alone, collecting a mix of objective and subjective data to assess efficiency, workload, usability, and utility. This study was reviewed and approved by the Children's Hospital of Philadelphia Institutional Review Board.

Study Setting and Participants

The study was performed within an academic pediatric healthcare network that includes a main hospital, 31 primary care practices, 6 multispecialty centers, and 3 ambulatory surgical centers. Annual ambulatory surgical volume exceeds 18,000 cases. All sites use the same EHR (Epic Systems) and infection surveillance is conducted by a single department of infection prevention and control, which includes 10 certified IPs and a full-time medical director. Excluding the IPs who participated in the Workbench design, test participants represented the entire staff of hospital IPs, with each having experience in SSI surveillance and NHSN criteria.



Study Methods and Data Collection

To support the review of real patient cases without compromising confidentiality, medical record integrity, or the hospital EHR, the study was performed using an EHR test environment that included all patient data.

A usability test plan was developed to compare the two systems in performing SSI surveillance: EHR with Workbench and EHR alone.[15] [16] To make the most of the limited number of available participants, we applied a semibalanced incomplete block design. Fourteen SSI cases meeting the NHSN definition for superficial SSI were identified by an IP and a member of the research team. Only SSI-positive cases were selected for three reasons: (1) to reflect hospital processes that actively monitor EHR data via a predictive algorithm for SSI, reducing the need for IPs to rule out negative cases[17]; (2) to maximize the limited number of available participants; and (3) to keep testing sessions under 2 hours.

The 14 cases were randomized and ordered so that each participant would review 7 cases in total, 3 or 4 with each system. As a result, each of the 14 cases was reviewed four times, twice with each system[18] ([Fig. 3]). While all participants were experienced IPs familiar with ambulatory SSI surveillance, we chose to test the Workbench first so that any potential learning effect would bias results against it.

Fig. 3 Case and participant randomization.
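The counting constraints of the design just described can be verified in a few lines; this sketch checks only the arithmetic of the block design, not the actual randomization procedure:

```python
# Review-count arithmetic of the incomplete block design described above:
# 8 participants x 7 cases each must equal 14 cases x 4 reviews each,
# and splitting reviews evenly between systems yields 2 reviews per case
# per system.
participants = 8
cases = 14
cases_per_participant = 7
reviews_per_case = 4

total_reviews = participants * cases_per_participant
assert total_reviews == cases * reviews_per_case   # 56 reviews in total
assert total_reviews // 2 == cases * 2             # 28 reviews per system
print(total_reviews)  # 56
```

Any change to the number of participants or cases would need to preserve these equalities for the design to stay balanced in the same way.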

Participants were consented and then filled out a demographic questionnaire. Each received a printout with a test outline and an instructional overview of the Workbench (Appendix B). Due to the novelty of the EHR-embedded Workbench, a minimal amount of instruction was required. For each case, participants were given a worksheet listing eight signs/symptoms from the NHSN definitions (Appendix C). Participants were instructed to check off any sign/symptom they discovered until they determined whether the case was a reportable SSI. All participants had a full understanding of the conditional logic of the NHSN definitions; the purpose of the checklist was to record findings and to support a posttest comparison of how well each system supported finding the signs/symptoms.



Usability Measures

Participant interactions with both systems were recorded using Morae (TechSmith Corporation, Okemos, Michigan, United States). Morae was configured to produce a series of objective measures including encounters reviewed per case, time per case, and time per encounter. The system also recorded clicks and keystrokes. Participant comments were encouraged via the think aloud protocol.[19] Subjective data were collected via six questionnaires: (1) pretest demographics questionnaire; (2) post-Workbench review National Aeronautics and Space Administration (NASA) Task Load Index (TLX) (raw score method)[20] [21]; (3) post-Workbench review usability/utility questionnaire; (4) post-EHR review NASA TLX; (5) post-EHR review usability/utility questionnaire; and (6) SSI surveillance method preference/adoption questionnaire (Appendix D). After the test session, the facilitator engaged the participants in a discussion on Workbench and EHR functionality.
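The raw score method of the NASA TLX used here is simply the unweighted mean of the six subscale ratings (0–100). As a sketch (function name ours), applying it to the Workbench subscale means later reported in [Table 1] reproduces the published overall score:

```python
# Raw (unweighted) NASA TLX: the mean of the six subscale ratings (0-100),
# as opposed to the original weighted pairwise-comparison method.
def raw_tlx(ratings):
    assert len(ratings) == 6, "one rating per TLX subscale"
    return sum(ratings) / 6.0

# Workbench subscale means from Table 1: mental demand, physical demand,
# temporal demand, effort, frustration, performance.
workbench = [43, 5, 22, 37, 19, 16]
print(round(raw_tlx(workbench)))  # 24, matching the reported overall score
```

The raw method drops the TLX's pairwise weighting step, which shortens administration, one reason it is commonly used in usability testing.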



Data Management and Analysis

All questionnaire responses were entered into REDCap (Vanderbilt University, Nashville, Tennessee, United States). Morae data required a review to verify and, when necessary, correct the accuracy of all time-based markers. Data were organized in Excel (Microsoft, Redmond, Washington, United States) and then analyzed in R version 3.3.3.[22] Analyses were primarily descriptive (mean, standard deviation [SD], and range). Student's t-test was used to test for significant differences in continuous outcomes between systems (EHR review vs. Workbench). When a distribution was skewed, the median was reported and the Wilcoxon signed-rank test was used to test significance.



Results

Eight IPs participated in the test over a 2-week period. Seven of the eight participants (88%) were female, and all were certified IPs who had worked in infection prevention for a mean of 8.3 years (range, 1–32 years).

The mean duration of each case review was similar for the Workbench and EHR review (8.58 vs. 7.39 minutes; SD = 4.66, p = 0.36) ([Fig. 4]). However, participants viewed significantly fewer encounters per case when using the Workbench (median, 3 vs. 7.5, p = 0.002). The mean time spent per encounter was higher with the Workbench (2.23 vs. 0.92 minutes, SD = 2.48, p = 0.002). An analysis of clicks and keystrokes per encounter showed no significant difference. Participants successfully identified all cases as a reportable SSI using both systems. Participants commented on the lack of time saving. For example, “It's not about time, it's about being confident you found everything in the kid's chart,” and “Time is not as important as knowing you caught everything.”

Fig. 4 User test objective performance measures for Workbench and electronic health record (EHR).

In assessing cognitive load, the Workbench raw NASA TLX score was significantly lower than the EHR (24 vs. 33, p = 0.02). All six individual measures received a lower mean score for the Workbench with a significantly lower mean score for Effort (37 vs. 52, p = 0.02) ([Table 1]). Participant comments addressing this difference include, “This [Workbench] lets me be more focused on what happened instead of trying to find out what happened,” and “With [the EHR], you always worry about what you might be missing.”

Table 1

NASA TLX results (raw scores) for Workbench and EHR

NASA TLX category     SSI Workbench    Standard EHR    p-Value
Mental demand         43               54              0.03
Physical demand       5                11              0.14
Temporal demand       22               18              0.64
Effort                37               52              0.02
Frustration           19               32              0.17
Performance           16               32              0.09
Overall               24               33              0.02

Abbreviations: EHR, electronic health record; NASA TLX, National Aeronautics and Space Administration Task Load Index; SSI, surgical site infection.


In subjective usability and utility measures, the Workbench received higher ratings on all seven measures of usability and utility, with six of the seven having a significantly higher mean rating ([Table 2]). Participant comments on usability include, “[The EHR] doesn't give me any clue which encounters are important. With this I know exactly where to go,” and “Seeing what is in an encounter before opening it is so helpful.”

Table 2

Usability/utility questionnaire responses (mean ± standard deviation) for Workbench and EHR

Usability/Utility[a]                                                              SSI Workbench    EHR            p-Value
This system assisted me in finding SSI-related information                        6.38 ± 0.52      4.00 ± 1.51    0.02
This system helped me feel confident I was finding all SSI-related information    5.50 ± 1.51      4.25 ± 1.75    0.06
This system helped me be efficient in finding SSI-related information             5.63 ± 0.74      2.63 ± 0.74    0.02
I felt productive using this system                                               5.63 ± 0.74      3.13 ± 1.13    0.02
This system supports infection surveillance work                                  6.13 ± 0.64      3.86 ± 1.25    0.02
I am satisfied with how easy it is to use this system                             5.13 ± 1.36      2.86 ± 0.83    0.04
Overall, I am satisfied with this system                                          5.75 ± 0.70      3.36 ± 1.06    0.02

Abbreviations: EHR, electronic health record; SSI, surgical site infection.

a 7-point Likert-type scale: 1 = strongly disagree to 7 = strongly agree.




Discussion

We developed an EHR-embedded information foraging tool to assist IPs in performing ambulatory SSI surveillance. In a comparative user test of the Workbench and EHR, we measured four dimensions of usability: (1) effectiveness, the ability to retrieve SSI information; (2) efficiency, the time to review a case; (3) workload, using the NASA TLX; and (4) usability and utility, using a questionnaire. In comparing the EHR and Workbench results, there was no difference in effectiveness as participants were able to retrieve SSI information using both systems. In comparing efficiency, there was no difference in overall time on task. However, when using the Workbench participants reviewed fewer encounters per case and spent more time per encounter. Comparisons of the other two dimensions, workload and satisfaction, revealed a significant difference where participants rated the Workbench lower in cognitive load and higher in usability and utility.

Our findings are consistent with the literature that suggests that for complex tasks, effectiveness, efficiency, and satisfaction may represent independent aspects of usability that are not necessarily correlated.[23] By identifying case encounters with relevant SSI information, the Workbench offloaded a portion of the search and retrieval work to the computer and reduced cognitive load by assisting in tasks that otherwise imposed an extraneous cognitive load.[24] [25]

Card et al describe information foraging within a “patchy structure” with the goal of finding high-yield patches.[13] In this patchy environment, the forager is faced with a time allocation decision of “between-patch” versus “within-patch” foraging.[9] Participants repeatedly commented that the Workbench gave them more confidence in finding SSI information and allowed them to be more focused on those findings. While there was no overall time savings using the Workbench, results of fewer encounters plus more time per encounter reflect a difference in time allocation between the two systems: between-encounter foraging using the EHR and within-encounter foraging using the Workbench. Participant comments and survey responses indicate a significant preference for within-encounter foraging in SSI surveillance.

Participant comments suggested an additional benefit of the Workbench; that the simple data visualization helped them form a high-level understanding of the patient narrative by essentially presenting a timeline of the patient's SSI-related care.[26] Timelines have been demonstrated to support pattern recognition in structured clinical data.[27] Although beyond the scope of this work, these comments suggest opportunities for developing time-based visualizations for unstructured clinical data.

Finally, most IPs made little use of the EHR's search, navigation, and filter functions. This suggests that, even with the Workbench, IPs could benefit from additional EHR training in performing complex information foraging tasks.

Limitations

Our study has the following limitations:

  1. The study took place at a single institution with a single EHR.

  2. Although our study focused on pediatric patients, which limits generalizability to adult populations, the NHSN definitions apply to both children and adults. Additional evaluation is required to determine if our tool will offer similar benefits for adult patients.

  3. Our institution has an extensive care network, and as a result many patients have all postsurgical encounters within our system. The Workbench does not address the challenges that may arise where patient records are distributed among healthcare organizations.

  4. The think aloud protocol combined with the novelty of the Workbench may have influenced case review times.

  5. Only positive SSI cases were reviewed.

  6. System order was not randomized. The observed benefit of the Workbench might have differed in a randomized design, although a randomized experiment may also have revealed greater benefit for the Workbench.



Conclusion

This work suggests that EHR functionality based on information foraging theory can be beneficial in infection surveillance. In the absence of more advanced EHR search and retrieval functionality, the Workbench demonstrates a feasible approach of using business intelligence software integrated with the EHR to improve infection surveillance.



Multiple Choice Questions

  1. Surveillance for superficial surgical site infections (SSIs) from ambulatory surgery in children can be supported by using data from the electronic health record. Which approach is currently the most feasible?

    • Fully automated data analysis to identify a superficial SSI.

    • A semiautomated approach where data analysis supports more effective manual search and retrieval tasks.

    • The use of diagnostic codes to identify a superficial SSI.

    • Culture orders and results.

    Correct Answer: The correct answer is option b, a semiautomated approach where data analysis supports more effective manual search and retrieval tasks.

  2. A comparative usability test between two systems should be planned to collect which types of data?

    • Objective results, such as task completion, time on task, clicks, and keystrokes.

    • Subjective responses such as the NASA Task Load Index and usability questionnaires.

    • Think aloud responses and other participant comments.

    • All of the above.

    Correct Answer: The correct answer is option d, all of the above.


Appendix B Test Instruction Sheet

SSI User Test Overview


Thank you very much for agreeing to participate in our research study. You will be helping us test a prototype of a new system, the “SSI Workbench,” designed to assist infection preventionists in the surveillance and reporting of superficial incisional surgical site infections. We believe the test will take up to 90 minutes to complete. The outline for the test is as follows:

  1. We will review the test consent, format and instructions

  2. You will fill out a short pre-test questionnaire

  3. We will provide a brief overview of the SSI Workbench

  4. You will perform a chart review of 3–4 cases using the SSI Workbench

    • You will fill out a questionnaire on your experience with the SSI Workbench

  5. You will perform a chart review of 3–4 cases using the standard EHR (Epic)

    • You will fill out a questionnaire on your experience with the EHR


The chart reviews (#4 and #5 listed above) will be slightly modified from your typical SSI surveillance work. Each case that you will be reviewing is a known reportable SSI. For each case, with both the EHR and SSI Workbench, we will display all encounters within 30 days of surgery. Your task will be to search through these encounters and identify any clear indications of specific NHSN criteria; basically, the evidence you would use to build the case that a child met NHSN criteria for an SSI. You don't need to find every mention of each criterion, just enough evidence for you to conclude that the child met a specific criterion. We will provide a checklist to help you keep track of your findings, but the criteria we are looking for are:

  1. Purulent drainage

  2. Pain or tenderness

  3. Localized swelling

  4. Erythema

  5. Heat

  6. Positive culture

  7. Incision reopened by surgeon

  8. Diagnosis of SSI by clinician


Instructions for SSI Workbench


The SSI Workbench is designed to facilitate the chart review process of SSI surveillance. The Workbench displays all case encounters within the 30-day observation period in a simple table. Table columns include information about each encounter, such as date, department, and chief complaint. Additional columns present results from an automated chart search for information that could suggest an SSI. For example, the Infection Dx, Antibiotic, and Culture columns will display information if any are present in that encounter.


The column, “SSI Keywords” is a little different. This column returns a list of many terms related to an SSI that occur anywhere within each encounter. These terms include all the NHSN terms listed above, as well as common variations or synonyms, and also include additional terms such as infection, dehiscence, drainage… and many others. Of course, the purpose is to assist the IP in identifying encounters of interest, but it is important to understand that the presence of a term in the SSI Keyword column does not automatically equate to a “clear indication” of that finding. For example, “purulent drainage” may appear as a finding in SSI Keywords, but in the actual encounter note the provider may have written, “there was no purulent drainage.”


[Fig. A1] below is a screenshot of the SSI Workbench as it appears within Epic. This case has a total of 13 encounters within the 30-day observation period. Of the 13 encounters, the system has indicated 4 as having potential SSI information by highlighting the appropriate cells in red and listing the findings. Any of the 13 encounters can be opened by clicking the date in the “Contact Date” column or the “Open Encounter” link in the Action column.

Fig. A1 SSI Workbench.
Appendix C SSI Worksheet
  • System: ___ SSI Workbench __ EHR

  • Case Order #: ___

  • Study Case ID: ___

Instructions

Please mark (check the box) all the evidence for Superficial Incisional SSI you find in the chart

Evidence (check if present):

  • Purulent drainage
  • Pain or tenderness
  • Localized swelling
  • Erythema
  • Heat
  • Positive culture
  • Incision reopened by surgeon
  • Diagnosis of SSI by clinician

Appendix D User Test Questionnaire


Conflict of Interest

None.

Acknowledgments

The authors thank the Department of Infection Prevention & Control, Children's Hospital of Philadelphia, for all the time, effort, and expertise they contributed to this project, and the information services EHR support team, especially Jim Gay and Linda Tague, for critical work in implementing the Workbench.

Authors' Contributions

D.K. designed the Workbench, developed and facilitated the test, analyzed data, and authored and revised the manuscript. M.M. contributed to literature searches, design of the Workbench, and manuscript editing. M.R. developed the Workbench and contributed to generating study data. S.R. contributed to the design and testing of the Workbench and manuscript editing. R.R. contributed to methods, study design, participant randomization, data analysis, and manuscript editing. R.X. contributed to study design, data analysis, and manuscript editing. N.M. contributed to study design and manuscript editing. R.L. contributed to study design and manuscript editing. J.G. was project co-PI, contributed to literature searches, study design, and manuscript editing. S.C. was the project PI, led the team in study design, analysis, and manuscript authoring and editing. R.G. contributed to Workbench development, study design, and manuscript editing.


Protection of Human and Animal Subjects

This study was reviewed by the Children's Hospital of Philadelphia Institutional Review Board.


  • References

  • 1 Klevens RM, Edwards JR, Richards Jr CL. , et al. Estimating health care-associated infections and deaths in U.S. hospitals, 2002. Public Health Rep 2007; 122 (02) 160-166
  • 2 Ambulatory Surgery Centers. A Positive Trend in Healthcare. Ambulatory Surgical Center Coalition. Available at: http://www.ascassociation.org . Accessed 2017
  • 3 Haley RW. The scientific basis for using surveillance and risk factor data to reduce nosocomial infection rates. J Hosp Infect 1995; 30 (Suppl): 3-14
  • 4 Consensus paper on the surveillance of surgical wound infections. The Society for Hospital Epidemiology of America; The Association for Practitioners in Infection Control; The Centers for Disease Control; The Surgical Infection Society. Infect Control Hosp Epidemiol 1992; 13 (10) 599-605
  • 5 Surgical Site Infection (SSI) Event, Procedure-associated module. Centers for Disease Control and Prevention. Available at: https://www.cdc.gov/nhsn/pdfs/pscmanual/9pscssicurrent.pdf . Accessed 2017
  • 6 Grammatico-Guillon L, Baron S, Gaborit C, Rusch E, Astagneau P. Quality assessment of hospital discharge database for routine surveillance of hip and knee arthroplasty-related infections. Infect Control Hosp Epidemiol 2014; 35 (06) 646-651
  • 7 van Mourik MS, Troelstra A, van Solinge WW, Moons KG, Bonten MJ. Automated surveillance for healthcare-associated infections: opportunities for improvement. Clin Infect Dis 2013; 57 (01) 85-93
  • 8 Woeltje KF, Lin MY, Klompas M, Wright MO, Zuccotti G, Trick WE. Data requirements for electronic surveillance of healthcare-associated infections. Infect Control Hosp Epidemiol 2014; 35 (09) 1083-1091
  • 9 Pirolli P, Card S. Information foraging. Psychol Rev 1999; 106 (04) 643
  • 10 Schraagen JM, Chipman SF, Shalin VL. Cognitive Task Analysis. Psychology Press; 2000
  • 11 Paas F, Renkl A, Sweller J. Cognitive load theory and instructional design: recent developments. Educ Psychol 2003; 38 (01) 1-4
  • 12 Chi EH, Pirolli P, Chen K. , et al. Using information scent to model user information needs and actions on the Web. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM; 2001: 490–497
  • 13 Card SK, Pirolli P, Van Der Wege M. , et al. Information scent as a driver of Web behavior graphs: results of a protocol analysis method for Web usability. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM; 2001 :498–505
  • 14 Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform 2004; 37 (01) 56-76
  • 15 Nielsen J. Usability inspection methods. In Conference Companion on Human Factors in Computing Systems. ACM; 1999: 413-414
  • 16 Rubin J, Chisnell D. Handbook of Usability Testing: How to Plan, Design and Conduct Effective Tests. Indianapolis, IN: John Wiley & Sons; 2008
  • 17 Grundmeier RW, Xiao R, Ross RK. , et al. Identifying surgical site infections in electronic health data using predictive models. J Am Med Inform Assoc 2018; 25 (09) 1160-1166
  • 18 Cox GM, Cochran WG. Experimental Designs. New York: Wiley; 1957
  • 19 Ericsson K, Simon H. Verbal reports as data. Psychol Rev 1980; 87 (03) 215-251
  • 20 Hart SG, Staveland LE. Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. Adv Psychol 1988; 52: 139-183
  • 21 Hart SG. NASA-task load index (NASA-TLX); 20 years later. Proc Hum Factors Ergon Soc Annu Meet 2006; 50 (09) 904-908
  • 22 R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing;2017. Available at: http://www.r-project.org/ . Accessed March 2, 2018
  • 23 Frøkjær E, Hertzum M, Hornbæk K. Measuring usability: are effectiveness, efficiency, and satisfaction really correlated? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM; 2000: 345–352
  • 24 Sweller J. Cognitive load theory, learning difficulty, and instructional design. Learn Instr 1994; 4 (04) 295-312
  • 25 Ahmed A, Chandra S, Herasevich V, Gajic O, Pickering BW. The effect of two different electronic health record user interfaces on intensive care provider task load, errors of cognition, and performance. Crit Care Med 2011; 39 (07) 1626-1634
  • 26 Plaisant C, Milash B, Rose A. , et al. LifeLines: visualizing personal histories. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM;1996:221–227
  • 27 Card SK, Mackinlay JD, Shneiderman B. Information Visualization: Using Vision to Think. San Francisco, CA: Morgan-Kaufmann; 1999

Address for correspondence

Dean J. Karavite, MSI
Department of Biomedical and Health Informatics (DBHi)
Children's Hospital of Philadelphia, 2716 South Street, Philadelphia, PA 19146
United States   


Fig. 1 Task flow comparison between electronic health record (EHR) and EHR with surgical site infection (SSI) Workbench.
Fig. 2 The surgical site infection (SSI) Workbench displaying all medical encounters experienced by a single patient within 30 days after a surgical procedure. Highlighted cells indicate SSI-relevant data present in an encounter. Of the 22 postsurgical encounters for this patient, only 5 have potentially relevant SSI information.
Fig. 3 Case and participant randomization.
Fig. 4 User test objective performance measures for Workbench and electronic health record (EHR).
Fig. A1 SSI Workbench.