Appl Clin Inform 2018; 09(03): 565-575
DOI: 10.1055/s-0038-1667041
Research Article
Georg Thieme Verlag KG Stuttgart · New York

Leveraging Patient-Reported Outcomes Using Data Visualization

Lisa V. Grossman
1   Department of Biomedical Informatics, Columbia University, New York, New York, United States
2   College of Physicians and Surgeons, Columbia University, New York, New York, United States
,
Steven K. Feiner
3   Department of Computer Science, Columbia University, New York, New York, United States
,
Elliot G. Mitchell
1   Department of Biomedical Informatics, Columbia University, New York, New York, United States
,
Ruth M. Masterson Creber
4   School of Nursing, Columbia University, New York, New York, United States
Funding This study is supported by the National Institute of Nursing Research (K99NR016275, PI: Masterson Creber).

Address for correspondence

Lisa V. Grossman
Department of Biomedical Informatics, Columbia University
622 W 168th Street, PH-20, New York, NY 10032
United States   

Publication History

21 March 2018

04 June 2018

Publication Date:
01 August 2018 (online)

 

Abstract

Background Health care organizations increasingly use patient-reported outcomes (PROs) to capture patients' health status. Although federal policy mandates PRO collection, challenges remain in engaging patients with PRO surveys and ensuring that patients comprehend the surveys and their results.

Objective This article identifies design requirements for an interface that assists patients with PRO survey completion and interpretation, then describes the development and evaluation of that interface.

Methods We employed a user-centered design process that consisted of three stages. First, we conducted qualitative interviews and surveys with 13 patients and 11 health care providers to understand their perceptions of the value and challenges associated with the use of PRO measures. Second, we used the results to identify design requirements for an interface that collects PROs, and designed the interface. Third, we conducted usability testing with 12 additional patients in a hospital setting.

Results In interviews, patients and providers reported that PRO surveys help patients to reflect on their symptoms, potentially identifying new opportunities for improved care. However, 6 out of 13 patients reported significant difficulty in understanding PRO survey questions, answer choices, and results. Therefore, we identified aiding comprehension as a key design requirement and incorporated visualizations into the interface to meet it. In usability testing, patients found the interface highly usable.

Conclusion Future interfaces designed to collect PROs may benefit from employing strategies such as visualization to aid comprehension and engage patients with surveys.



Background and Significance

Health care organizations increasingly collect patient-reported outcomes (PROs) to monitor patients' health status and determine perceived effectiveness of care.[1] [2] [3] As defined by the U.S. Food and Drug Administration, a PRO is any report on a patient's health status that comes directly from the patient, without additional interpretation from the clinician or anyone else. When collecting PROs, health care organizations use standardized survey questions to understand patient experiences and symptoms.[1] Organizations use various methods to convey PRO data to providers, and the results may inform each patient's care plan individually.

The U.S. federal financial incentive policy Meaningful Use mandates that health care organizations collect and assess PROs.[2] [3] [4] Studies have demonstrated that PROs improve patient outcomes such as quality of life and survival.[5] [6] [7] [8] [9] The impact of PROs, coupled with policies encouraging their use, may explain their recent increase in popularity and use across multiple medical domains.[1] Recently, the National Institutes of Health (NIH) created the Patient-Reported Outcomes Measurement Information System (PROMIS) measures.[10] [11] [12] [13] PROMIS is a validated and standardized set of PRO measures applicable to a range of chronic conditions. The PROMIS scales map directly to so-called “legacy” instruments such as the Patient Health Questionnaire-9 depression questionnaire.

Promising initiatives to design electronic PRO (e-PRO) collection systems exist,[14] including efforts to incorporate PROs into patient portals.[1] [2] [3] [14] [15] The two largest U.S. electronic health record (EHR) vendors, Epic and Cerner, both recently incorporated PROs into their portals.[14] Beyond portals, many institutions support independent e-PRO systems for cardiology, oncology, dermatology, orthopedics, and other specialized care.[15] [16] [17] A recent review found that institutions often develop and use these systems locally.[16]

Despite the increasing popularity of PROs and e-PRO systems, several knowledge gaps and challenges remain. First, few studies have assessed user perceptions of PRO surveys and e-PRO systems. Investigating user perceptions should inform strategies to increase patients' engagement with surveys and systems. Second, low health literacy and low technology literacy may prevent patients from completing PRO surveys precisely and efficiently.[18] Many e-PRO interfaces lack contextual information to help patients interpret survey questions and answer choices.[16] Such limitations may inhibit patients from communicating PROs accurately and decrease clinicians' confidence in PROs.[19] Third, current e-PRO interfaces do not provide the contextual information needed for patients to understand and act on their PRO survey results, if the patient receives their results at all.[16] Because periodically repeated surveys can capture symptom progression over time, helping patients interpret their results may facilitate self-management. Receiving results may also increase the survey's value to patients and better engage patients in their own care. Previously studied strategies to display results for patients primarily used graphs.[20] [21] [22] [23] Additional strategies to aid patients' comprehension of their PRO data remain to be explored.

In this article, we present a user interface to increase heart failure patients' engagement with PROs using data visualization, called mi.Symptoms. We use the heart failure population to demonstrate our interface's utility for multiple reasons. Heart failure affects more than 26 million people worldwide[24] and 6.5 million people in the United States,[25] and is the leading cause of 30-day hospital readmissions in the United States.[25] Ineffective symptom management is the primary cause of repeated hospital readmissions in heart failure.[26] Clinicians typically spend the majority of each outpatient visit discussing symptoms, and time constraints may prevent the full elucidation of clinically relevant symptoms. As such, PROs hold the potential to transform clinical decision making and improve symptom management in heart failure.



Objective

In this article, we present results from an iterative, user-centered design process to develop mi.Symptoms, an e-PRO interface that uses visualizations to engage patients. First, we conducted semistructured interviews to assess patients' and providers' perceptions of the value and challenges associated with PROs. Using our interview data, we identified design requirements for mi.Symptoms. Then, we developed and evaluated the mi.Symptoms interface, including visualizations, to help patients interpret survey questions, answer choices, and results.



Methods

To develop the mi.Symptoms interface, we used an iterative, user-centered design approach with three stages. In stage 1, we conducted semistructured interviews with heart failure patients and their health care providers to understand their perceptions of the value and challenges associated with using PRO measures. Directly after the interview, participants completed the Health IT Usability Evaluation Scale (Health-ITUES)[27] to evaluate a mi.Symptoms mockup. In stage 2, we identified design requirements based on our interview results, and designed the mi.Symptoms interface. In stage 3, we conducted usability testing of the interface. The Columbia University Medical Center Institutional Review Board approved the study.

Participants and Recruitment

Patients: Using purposeful sampling on age and race, we identified study participants from a cardiac inpatient unit and an ambulatory cardiac clinic at an urban academic medical center. We recruited at both locations for stage 1 and at the inpatient unit only for stage 3. We included adult patients with a confirmed diagnosis of heart failure who could read 5th-grade English. We excluded patients with severe cognitive impairment, major psychiatric illness, or concomitant terminal illness. The research coordinator invited patients to participate during face-to-face encounters at the clinic or hospital. If the patient agreed to participate, the coordinator obtained written informed consent and collected data during the same encounter. Interviews were audio-recorded.

Providers: Using snowball sampling, we identified health care providers whose primary responsibility is managing inpatient or outpatient heart failure patients. The coordinator invited providers to participate through personal emails. Providers agreed to complete an audio-recorded interview and the Health-ITUES after providing written informed consent.



Qualitative Interviews and Health-ITUES (Stage 1)

Measurements and Materials: A content expert developed the semistructured interview guides based on personal experience and incorporated feedback from each study team member. The interview topics for both patients and providers included the application's general usefulness, its usefulness for patient–provider communication, helpful and unhelpful features, and suggested changes. To collect patient demographics, assess previous technology use, and determine health literacy, we used our previously described patient characteristics survey.[28] The patient characteristics survey relies on a three-item health literacy screening questionnaire to determine whether health literacy is inadequate.[29]

To assess perceived usefulness and ease of use of mi.Symptoms, we adapted the Health-ITUES for our setting.[27] We created two adaptations, one for patients and one for providers. We added three items to elucidate mi.Symptoms' usefulness for patient–provider communication:

  1. I think mi.Symptoms could improve how [I / my patients] report [my / their] physical symptoms to [my health care provider / me].

  2. I think mi.Symptoms could improve how [I / my patients] report [my / their] psychological symptoms to [my health care provider / me].

  3. I think mi.Symptoms could improve how [I / my patients] report social limitations that are a result of [my / their] symptoms to [my health care provider / me].
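The bracketed alternatives in the items above act as a simple patient/provider template. As an illustration only (this helper is not part of the study materials), the expansion of such a template can be sketched in JavaScript, the interface's front-end language:

```javascript
// Expand "[patient variant / provider variant]" brackets in a Health-ITUES
// item template into the wording for one adaptation. Illustrative only.
function renderItem(template, role) {
  return template.replace(/\[(.+?) \/ (.+?)\]/g, (match, patientText, providerText) =>
    role === "patient" ? patientText : providerText
  );
}

const item1 =
  "I think mi.Symptoms could improve how [I / my patients] report " +
  "[my / their] physical symptoms to [my health care provider / me].";

// Patient adaptation reads:
// "I think mi.Symptoms could improve how I report my physical symptoms
//  to my health care provider."
```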

To familiarize the patient or provider participant with mi.Symptoms' purpose and content before the interview and Health-ITUES survey, we created an HTML/CSS mockup of mi.Symptoms ([Fig. 1]). The mockup displayed sample PRO survey questions and a single sample results page. The mockup consisted primarily of text in a basic layout, and lacked visualizations. We included PROs from PROMIS[10] [11] [12] [13] and the Heart Failure Somatic Perception Scale (HFSPS).[30] [31] [32] We included the HFSPS in addition to PROMIS because it assesses pertinent acute physical symptoms of heart failure.

Fig. 1 Sample mockup of mi.Symptoms used in interviews.

Data Collection: The coordinator guided the patient or provider participant through the mockup, conducted the interview, and completed the Health-ITUES. In addition, the coordinator completed the patient survey with patient participants. Recruitment and analysis continued until thematic saturation was achieved, meaning that no new themes emerged after the last three interviews.

Data Analysis: We conducted a standard descriptive analysis of patient characteristics and Health-ITUES data in R version 3.3.3. A professional service transcribed the audio recordings of patient and provider interviews. We performed qualitative descriptive analysis of the transcripts in NVivo Version 11. First, two authors with training in qualitative methods independently read transcripts, and defined codes in a dictionary for the remaining analysis. In addition to codes that emerged from the data, the dictionary included a priori codes based on the research questions, interview guide, and literature. Then, the researchers independently coded the transcripts using the dictionary-defined codes. Kappa scores ranged from 0.76 to 0.96 for individual questions. The researchers met to resolve conflicts through discussion and identify themes. To enhance confirmability, we used member checks and peer debriefing among the data collection team.[33] [34] [35] In addition, we shared summaries of the coded data with two patients and two providers and asked for their confirmation or revisions to interpretation. The two patients and two providers confirmed the interview findings.
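Interrater reliability of the kind reported above (kappa 0.76–0.96 per question) is conventionally Cohen's kappa over the two coders' code assignments. A minimal sketch of that computation (illustrative; the study's analysis used NVivo, not custom code):

```javascript
// Cohen's kappa: chance-corrected agreement between two coders who each
// assigned one categorical code per transcript segment.
function cohensKappa(coder1, coder2) {
  if (coder1.length !== coder2.length || coder1.length === 0) {
    throw new Error("coders must rate the same nonempty set of segments");
  }
  const n = coder1.length;

  // Observed agreement: fraction of segments with identical codes.
  let agree = 0;
  for (let i = 0; i < n; i++) {
    if (coder1[i] === coder2[i]) agree++;
  }
  const po = agree / n;

  // Expected chance agreement, from each coder's marginal code frequencies.
  const categories = new Set([...coder1, ...coder2]);
  let pe = 0;
  for (const c of categories) {
    const p1 = coder1.filter((x) => x === c).length / n;
    const p2 = coder2.filter((x) => x === c).length / n;
    pe += p1 * p2;
  }

  return (po - pe) / (1 - pe);
}
```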



Design Requirements and Process (Stage 2)

We drew on our interview and Health-ITUES results to identify design requirements for mi.Symptoms. With the requirements in mind, we used storyboarding techniques to detail patients' interactions with the interface and determine the necessary components. We further refined mi.Symptoms through team discussions and informal review with providers, producing a fully functioning interface, consistent with our design requirements, that ran on an Apple iPad Pro. We used Bootstrap 3 as our front-end framework and jQuery 1.12.4 as our JavaScript library; we chose jQuery 1.12.4 to support older versions of Internet Explorer, an important consideration in our population. We used ASP.NET as our back-end framework for compatibility with our hospital's EHR system.[36]
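To make the stack concrete, the kind of client-side validation such a front end might perform before posting a completed survey to the ASP.NET back end can be sketched as follows. The field names (itemId, value) and the 1 to 5 response range are assumptions for illustration; mi.Symptoms' actual schema is not published here:

```javascript
// Validate and package one completed PRO survey before submission.
// The field names (itemId, value) and the 1-5 response range are
// illustrative assumptions, not mi.Symptoms' actual schema.
function buildSubmission(mrn, answers) {
  if (!/^\d+$/.test(String(mrn))) {
    throw new Error("invalid medical record number");
  }
  for (const a of answers) {
    const ok =
      typeof a.itemId === "string" &&
      Number.isInteger(a.value) &&
      a.value >= 1 &&
      a.value <= 5;
    if (!ok) throw new Error("invalid answer for item " + a.itemId);
  }
  // Return a plain object ready to be serialized and POSTed.
  return { mrn: String(mrn), answers: answers.slice() };
}
```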



Usability Study (Stage 3)

For the usability study, we recruited only new patients who did not participate in stage 1. The coordinator guided these patient participants through a previously described patient characteristics survey[28] and a three-part usability assessment protocol. The coordinator video-recorded the participant's screen using QuickTime Player and encouraged concurrent think-aloud. In the first part, the coordinator asked the participant to complete tasks using mi.Symptoms, such as answering survey questions and interpreting survey results. In the second part, we asked the participant to state preferences among three strategies for visualizing PRO survey results. In the third part, we asked the participant to complete the eight-item Standardized User Experience Percentile Rank Questionnaire (SUPR-Q)[37] to assess the usability, credibility, loyalty, appearance, and overall quality of mi.Symptoms.

Data Analysis: We analyzed patient characteristics in R and SUPR-Q data in a proprietary Microsoft Excel calculator. Based on video data, a researcher assigned binary (pass/fail) performance indicators to each task for each participant, and computed summary performance statistics in Excel. The researcher transcribed relevant qualitative commentary into Microsoft Word and coded the commentary by hand. The research team met to discuss themes, and confirmed themes using peer debriefing among the data collection team.
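The binary pass/fail indicators described above reduce to per-task completion rates. A sketch of that summary computation (illustrative; the study computed these statistics in Excel, and the task names below are hypothetical):

```javascript
// Summarize pass/fail usability results: for each task, the fraction of
// participants who completed it successfully.
function taskPassRates(results) {
  // results: array of per-participant objects mapping task name -> true/false.
  const rates = {};
  const tasks = Object.keys(results[0]);
  for (const task of tasks) {
    const passes = results.filter((r) => r[task] === true).length;
    rates[task] = passes / results.length;
  }
  return rates;
}
```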



Results

Study Population

In stage 1, we consented 13 heart failure patients and 11 health care providers to participate in interviews and complete the Health-ITUES. In stage 3, we consented 12 additional heart failure patients to participate in the usability study. On average, patient participants were 55 years old, and over half reported inadequate health literacy ([Table 1]). Provider participants included 3 cardiology attending physicians, 3 cardiology fellows, 2 cardiology residents, and 3 nurse practitioners.

Table 1 Characteristics of patient participants

Characteristic                                Stage 1 (n = 13)   Stage 3 (n = 12)
Age in years: median (range)                  53 (30–78)         57 (35–72)
Female sex: n (%)                             8 (61.5)           2 (16.7)
Race: n (%)
 White                                        5 (38.5)           7 (58.3)
 Black or African American                    6 (46.2)           3 (25.0)
 Asian or Pacific Islander                    1 (7.7)            2 (16.7)
 Other                                        1 (7.7)            0 (0.0)
Latino ethnicity: n (%)                       3 (23.1)           2 (16.7)
Education: n (%)
 High school graduate                         4 (30.8)           3 (25.0)
 Some college/Trade school                    3 (23.0)           5 (41.7)
 Associate's degree                           2 (15.4)           1 (8.3)
 Bachelor's degree                            4 (30.8)           1 (8.3)
 Graduate degree                              0 (0.0)            2 (16.7)
Inadequate health literacy: n (%)             6 (46.2)           10 (83.3)
Looks up health information online: n (%)     9 (69.2)           10 (83.3)



Qualitative Interviews and Health-ITUES (Stage 1)

Interviews with Patients: [Table 2] describes the main themes from patient interviews. Several patient participants (n = 5) described completing PRO surveys using mi.Symptoms as a reflective experience that prepared them to interact with their physicians. Participants felt that mi.Symptoms reminded them what to say in potentially intimidating or time-limited visits with their physician. Participants spoke about mi.Symptoms' usefulness as a reminder:

Table 2 Patients' perceptions of the value and challenges associated with using PROs

Value
 Encourages reflection on symptom experience
  “If you have a bunch of things that you can't really keep track of, they won't come up until you're asked … questions.” [Pt. 11]
 Prepares patient for interaction with provider
  “mi.Symptoms reminds the person what they want to talk about [with the physician].” [Pt. 1]
 Facilitates personal tracking and monitoring
  “I'd like to use this system at home … to keep track of my symptoms.” [Pt. 4]
 Connects symptoms with disease
  “Symptoms come up that, before you saw them, you didn't realized they were connected with [your disease].” [Pt. 4]
 Prompts positive emotions
  “Answering these questions about my symptoms lifts me up, and I feel like I'm not alone.” [Pt. 2]

Challenges
 Trouble understanding PRO surveys
  “The way the questions were worded was not straightforward. Could the questions just be more clear?” [Pt. 12]
 Lack of unstructured communication
  “At the end, if you want make [a] comment you should be able to add one.” [Pt. 8]
 Low technology literacy
  “If I knew how to use it [the tablet] I could use [the PRO survey].” [Pt. 3]
 Concern that providers will not use PROs
  “I'll do [the PRO survey] if you promise to read it. That's the problem, [the doctors] don't read.” [Pt. 1]

Abbreviation: PRO, patient-reported outcome.


  • “If you have a bunch of things that you can't really keep track of, they won't come up until you're asked the specific [mi.Symptoms] questions. If you're not asked the specific questions, you're not going to remember all the symptoms that you're having.” [Pt. 11]

  • “[mi.Symptoms] reminds the person what they want to talk about. It helps me say [to the physician]: ‘in the 10 minutes we have together, let's make sure that we've addressed these things.’” [Pt. 1]

  • “I think it would be really good [to] have [mi.Symptoms] as a conversation starter, because a lot of times, the doctor will ask you many questions, and … when the doctor asks them some people feel intimidated.” [Pt. 4]

Several participants (n = 4) viewed mi.Symptoms as a potential tool to monitor their symptoms and identify the symptoms associated with their disease. Two participants requested the ability to use mi.Symptoms at home for “personal tracking.” Three participants described not associating certain symptoms with heart disease until asked about them. Participants explained how mi.Symptoms helped them view their symptoms differently:

  • “It made me broaden my thinking, and it kind of brought together what [my doctors, nurses, and nutritionist] keep telling me. It gave a different perspective.” [Pt. 7]

  • “Sometimes symptoms come up that, before you saw them, you didn't realized they were connected with [your disease]. Some people don't associate that atrial flutter might cause your appetite to slow down.” [Pt. 4]

Several participants (n = 6) reported difficulty comprehending the PRO survey questions and answer choices. One participant reported that he “didn't understand the words” and asked for audio. Another participant wanted more “understandable” questions that contained less medical jargon. Said a third:

  • “The way the questions were worded was not straightforward. Could the questions just be more clear?” [Pt. 12]

Several participants (n = 4) felt that unstructured communication, such as electronic messaging, should supplement the PRO survey. Participants wanted to use messaging to ask their physician questions or add comments to their survey results. Multiple participants (n = 3) reported concerns about adapting to or learning to use the interface. One participant reported his poor eyesight made using the iPad difficult. Another participant described his reluctance:

  • “I have a tablet at home, but I don't use it right now… If I knew how to use it [the tablet] I could use [the survey].” [Pt. 3]

Multiple participants (n = 3) described the positive emotions associated with completing the PRO survey. These participants felt that the survey helped them normalize their symptoms by implying that others experienced the same symptoms. As one participant explained:

  • “Answering these questions about my symptoms lifts me up, and I feel like I'm not alone.” [Pt. 2]

Multiple participants (n = 3) reported reasons mi.Symptoms may lack value, including worry that the physician would not read or use the PROs, a preference for communicating with the physician face-to-face, and a belief that PROs would not impact care. As one participant explained:

  • “I'll do [mi.Symptoms] if you promise to read it. That's the problem, [the doctors] don't read. You tell one doctor one thing and then the next set of doctors come and you have to start all over again.” [Pt. 1]

Interviews with Providers: Several providers (n = 4) discussed how mi.Symptoms might help patients recognize symptoms or associate symptoms with heart failure:

  • “Some patients don't recognize that abdominal bloating is fluid retention, or that nocturnal cough is orthopnea. They don't link their diet to gaining weight. When people put those together they can better self-manage.” [Pr. 3]

Two providers reported that PROs might reduce their cognitive load. Providers thought survey results might supplement verbal information collected in one-on-one visits, and more easily identify the specific timing of symptoms, progression of symptoms, or the most important symptoms. One provider explained:

  • “Some patients, regardless of their literacy, are just not good at communicating what's going on. They want to tell you a million different things. You have to be the interpreter and say: ‘From all these things you want to tell me, it really seems like everything has to do with fatigue.’ I think [mi.Symptoms] may help with this.” [Pr. 7]

Two providers felt mi.Symptoms might identify missed opportunities for medical intervention, by prompting patients to thoroughly report symptoms. One provider described how mi.Symptoms might enhance symptom review:

  • “[Some heart failure patients] get this upper gastrointestinal gas buildup. A lot of them have complained to me: ‘Nobody ever asked me about that. Nobody ever does anything about it.’ That's something I would definitely ask on [mi.Symptoms].” [Pr. 6]

Health-ITUES with Patients and Providers: Overall, both patients and providers perceived the mi.Symptoms mockup as useful and easy to use ([Table 3]).

Table 3 Health-ITUES scores for the mi.Symptoms mockup (scores range from 1 to 5, lowest to highest)

Construct                           Patient score   Provider score
Perceived usefulness: mean (±SD)    4.2 (±1.1)      4.1 (±0.8)
 Productiveness                     4.1 (±1.2)      4.0 (±0.9)
 General usefulness                 4.4 (±1.1)      3.8 (±0.9)
 General satisfaction               4.1 (±1.0)      4.1 (±0.7)
 Performance speed                  4.4 (±0.8)      4.0 (±0.9)
 Communication                      4.3 (±1.1)      4.4 (±0.7)
Perceived ease-of-use: mean (±SD)   4.4 (±1.0)      4.0 (±0.9)
 Competency                         4.4 (±1.0)      4.1 (±1.0)
 Learnability                       4.3 (±1.1)      3.8 (±0.8)
 Ease-of-use                        4.5 (±1.1)      4.0 (±0.9)

Abbreviations: Health-ITUES, Health IT Usability Evaluation Scale; SD, standard deviation.




Design Requirements and Process (Stage 2)

Requirement 1: Design to aid comprehension. Although the NIH developed PROMIS with attention to low literacy, 6 of our 13 patient participants reported difficulty understanding questions or answer choices. Poor comprehension may inhibit the patient's ability to reflect on their symptoms and report them accurately. By requirement, the interface should incorporate strategies to aid patient interpretation of questions and answer choices. In mi.Symptoms, we satisfied this requirement by providing previously tested infographics within questions and answer choices ([Fig. 2], panels 1–4).[23] [38] [39] [40] With validated surveys like PROMIS and HFSPS, changing the question text is impossible, but visualizations may aid comprehension. Additional strategies include providing definitions of potentially unclear jargon using links or pop-ups. Notably, interpretation aids may impact the validity, reliability, and overall performance of PRO measures, and future work must explore this. In the context of individual patient care, however, such interpretation aids may help patients better report outcomes and improve patient–provider communication.

Fig. 2 Patient-reported outcome (PRO) questions with visualizations, educational information, and unstructured messaging in mi.Symptoms.

Requirement 2: Design to aid interpretation. Both patients and providers wished to track symptoms over time to aid management and identify missed opportunities for intervention. By requirement, the interface should convey PRO survey results to patients. In mi.Symptoms, we satisfied this requirement by visualizing survey results for patients, using three strategies ([Fig. 3]):

Fig. 3 Sample strategies for visualizing patient-reported outcome (PRO) survey results. (Left: small cards and graph; Right: large cards).
  1. Small cards: A set of cards, each containing a short sentence describing a severe symptom, which when clicked on provides textual educational information.

  2. Graph: A bar graph that lists the patient's symptoms from most to least severe and displays each symptom's severity score.

  3. Large cards: A set of cards, each displaying a symptom name, its two-sentence description, a visual representation of its severity, and a link to textual educational information.

By requirement, the interface should also track PROs longitudinally. Although the current version of mi.Symptoms does not display PRO data over time, future versions should.
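The graph strategy's most-to-least-severe ordering, and the severity wording the card strategies might display, amount to a simple transform over scored symptoms. A sketch under assumed conventions (the 0 to 100 score scale and the mild/moderate/severe cutoffs are illustrative, not mi.Symptoms' actual values):

```javascript
// Order symptoms from most to least severe, as in the bar-graph view, and
// attach a qualitative label for card-style displays. The 0-100 scale and
// the mild/moderate/severe cutoffs are illustrative assumptions.
function summarizeSymptoms(symptoms) {
  const label = (score) =>
    score >= 70 ? "severe" : score >= 40 ? "moderate" : "mild";
  return [...symptoms]
    .sort((a, b) => b.score - a.score)
    .map((s) => ({ name: s.name, score: s.score, severity: label(s.score) }));
}
```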

Requirement 3: Design to support education. Participants described how PROs helped patients connect their symptoms with their disease. By requirement, the interface should incorporate strategies to help patients strengthen this connection. In mi.Symptoms, we satisfied this requirement by adding American Heart Association education materials and in-application links that define medical terms ([Fig. 2], panel 5). Educational information conveyed alongside survey results may help patients make connections more explicitly and improve their health literacy.

Requirement 4: Design to promote communication. In interviews, patients described wanting unstructured messaging features to ask their physician questions and communicate further regarding their symptoms. By requirement, the interface should include options for unstructured messaging. In mi.Symptoms, we satisfied this requirement by including an unstructured messaging feature ([Fig. 2], panel 6).

Use Case Scenario: When a heart failure patient arrives at the clinic, the receptionist enters the patient's medical record number (MRN) into mi.Symptoms, installed on an iPad ([Fig. 4]). The MRN triggers mi.Symptoms to display the patient's identifying information, which the receptionist verifies. Then, the patient completes the PRO questions and views the summary visualization of their results while in the waiting room. When the patient presses a “submit” button, the application prompts the patient to return the iPad.

Fig. 4 mi.Symptoms interface for the clinic receptionist.


Usability Study (Stage 3)

Participants preferred the buttons and cards to the graph when visualizing survey results. Half of the participants failed to interpret the graph correctly, and even participants who could read it often required multiple attempts. Participants perceived that the buttons and cards provided more information than the graph. However, two users preferred the graph because it summarized the results in one location. Participants appreciated design elements they perceived as “fun,” such as icons in colorful bubbles. Participants preferred lighter colors, which they reported enhanced the survey's positive emotional effect. All participants understood tapping or clicking, but two participants did not understand scrolling. These participants could navigate through the application using “next” and “back” buttons and complete the survey without assistance, but never viewed educational content available via scrolling.

SUPR-Q Results: Overall, mi.Symptoms scored highly on all four dimensions of the eight-item SUPR-Q compared with well-known applications ([Table 4]).

Table 4 SUPR-Q scores for mi.Symptoms (scores range from 0 to 1, lowest to highest)

Domain                Score
Overall: mean (±SD)   0.94 (±0.10)
 Usability            0.93 (±0.10)
 Credibility          0.92 (±0.10)
 Loyalty              0.92 (±0.13)
 Appearance           0.97 (±0.08)

Abbreviations: SD, standard deviation; SUPR-Q, Standardized User Experience Percentile Rank Questionnaire.




Discussion

Engaging patients with PROs is challenging, and technology is an important tool to help patients report and interpret PROs. In this study, we aimed to understand both patients' and providers' perceptions of the value and challenges associated with PROs. We also aimed to improve upon current e-PRO systems, using visualizations to enhance the patient experience. We developed a user interface, mi.Symptoms, that met a series of design requirements, and conducted usability testing with it. We reported on the design process and discussed generalizable insights about designing e-PRO systems.

Our interview findings suggested that patients perceive the value of PROs. PROs encouraged patients to reflect on their symptom experience and recognize the symptoms associated with their disease. Two participants, unprompted, asked for mi.Symptoms access at home to aid self-tracking of symptoms. Providers reported that PROs could potentially support their patients' self-management. As per the situation-specific theory of heart failure self-care,[41] [42] [43] reporting PROs may increase patients' symptom perception by helping them recognize the range of symptoms associated with their disease. Viewing visualized PRO data and educational information may increase symptom perception by helping patients attribute meaning to their symptom experience.

Patients reported difficulty understanding PRO survey questions, answer choices, and results. Their confusion emphasizes the need to contextualize PROs, using visualizations or explanations, to aid comprehension. Future work should explore the impact of different visualizations on comprehension, and how visualizations affect the validity, reliability, and overall performance of PRO measures. Visualized PRO measures may perform differently, ideally better, but may lack comparability with nonvisualized versions. For individualized patient care, comparability is less of a concern than ensuring accurate patient–provider communication. However, comparability over time is necessary for many administrative and research tasks. The tension between using comprehension aids to assist low-literacy patients and ensuring comparability of PRO measures requires further exploration.

Our usability study found that patients preferred visualizations with brief text descriptions. This is consistent with previous research showing that brief text descriptions combined with visualizations perform better than either alone.[23] [38] [40] Additionally, most patients in our study failed to comprehend graphs, consistent with previous research showing that inadequate graph literacy is prevalent, affecting roughly 40% of the U.S. population.[44] Notably, graph literacy and numeracy vary widely even in populations with high health literacy,[45] and low graph literacy is correlated with low patient portal use.[46] [47] Previous studies have primarily displayed PRO data for patients with graphs.[20] [21] [22] [23] Given our results and the prevalence of inadequate graph literacy, exploring visualization strategies beyond graphs is warranted. When displaying PRO data over time, avoiding graphs may be difficult; in that case, strategies to help patients understand graphs, or brief text descriptions of each graph's interpretation, may be necessary. Future work should evaluate comprehension of graphs and of additional visualization strategies in large samples.

Information engagement is a two-way process involving exchanges between patients and providers. In our interviews, participants occasionally expressed concern that providers would not use PRO data. Such concerns might stem from health system skepticism[48] or from recent, high-profile ransomware attacks against EHR systems and press coverage of cybersecurity vulnerabilities in general.[49] [50] To encourage patients to continue providing PROs, future tools should consider indicating when medical staff have seen and used PRO data, or educating patients on how PRO data are used.

Patient–provider communication through portals may be described as unstructured or structured. In unstructured communication, such as secure electronic messaging, the patient selects the topic of communication. In structured communication, such as PROs, the interface prompts patients with topics or questions to communicate about. Our interview results suggest that patients want unstructured communication channels to supplement structured ones. In previous studies, providers have expressed concerns that unstructured messaging may overwhelm them with patient contact.[51] Future work should address the potential burden of unstructured messaging on providers.
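The structured/unstructured distinction can be illustrated with a minimal, hypothetical data model (all class and field names here are our own illustration, not part of mi.Symptoms or any EHR standard):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StructuredResponse:
    """An answer to a system-prompted PRO survey item (structured channel)."""
    survey_id: str  # e.g., a PRO instrument identifier
    item_id: str    # the specific question the interface prompted
    answer: int     # coded answer choice

@dataclass
class UnstructuredMessage:
    """Free-text communication where the patient chooses the topic (unstructured channel)."""
    text: str
    related_item_id: Optional[str] = None  # optional link back to a survey item

# A patient may supplement a structured answer with a free-text note,
# as our participants requested.
response = StructuredResponse(survey_id="HF-symptoms", item_id="dyspnea-1", answer=3)
note = UnstructuredMessage(
    text="Shortness of breath is worse when climbing stairs",
    related_item_id=response.item_id,
)
```

In this sketch, the optional link between a free-text note and a survey item is one way to let unstructured messages supplement, rather than replace, structured reporting.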

Our findings echo previous research on e-PRO systems in several ways. First, as our interviews suggested, e-PRO systems offer an opportunity to educate patients and improve health literacy. Previous systems have incorporated disease management tips, education modules, and data interpretation guidelines.[16] Future work should explore additional opportunities for patient education directly within the e-PRO interface. Second, our provider interviews echoed previous research on the value of PROs. Like others, our providers thought PROs could save time, enhance symptom review, and identify missed opportunities for medical intervention.[15] [17] Finally, as our interviews suggested, heart failure patients anticipate using PROs for personal tracking and management, similar to surgical and oncology patients.[17] Future work should explore how disease type impacts patients' perceived value of PROs.

We chose to use an external, or “wraparound,” e-PRO system rather than building a system within the EHR itself. External systems offer several strengths over internal systems, including the easy customization necessary to create our visualizations. However, external systems also have weaknesses. First, although our system does not require a username or password, the clinic receptionist must link it directly to the EHR using the patient's medical record number (MRN). Second, our external system uses a unique protocol to communicate with our EHR, which does not generalize easily to other EHR systems.
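As a rough sketch of the linking step described above (the function names, MRN format check, and payload shape are all hypothetical; the actual EHR protocol is institution-specific):

```python
# Hypothetical sketch of linking a wraparound e-PRO session to the EHR.
# The patient never enters credentials; the receptionist supplies the MRN.

sessions: dict[str, str] = {}  # session_id -> MRN, standing in for the external system's store

def start_session(session_id: str, mrn: str) -> None:
    """Receptionist links a new survey session directly to the patient's MRN."""
    if not mrn.isdigit():  # illustrative validity check only
        raise ValueError("MRN must be numeric")
    sessions[session_id] = mrn

def export_to_ehr(session_id: str, results: dict) -> dict:
    """Package PRO results with the linked MRN for the EHR-specific protocol."""
    mrn = sessions[session_id]
    return {"mrn": mrn, "pro_results": results}

start_session("kiosk-01", "12345678")
payload = export_to_ehr("kiosk-01", {"symptom_score": 42})
```

The sketch shows why the approach is hard to generalize: everything downstream of `export_to_ehr` depends on one institution's EHR interface.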

We chose to implement mi.Symptoms in the waiting room for five key reasons. First, although our institution offers an online patient portal, monthly use among patients with access lies below 2%, consistent with the low adoption of portals across the United States.[52] Second, physicians can immediately act on information collected in the waiting room, decreasing legal liability. For example, if a patient screens positive for depression in the waiting room, the physician can immediately start an antidepressant trial. Third, because every patient reports PROs from the waiting room, the provider can integrate PROs into their standard clinic workflow, which may increase PRO utilization. As such, we do not need to prompt providers with notifications to view PROs, which may prevent alert fatigue. Fourth, the clinic receptionist can assist patients with the application in the waiting room, potentially increasing reliability and therefore clinician trust in the patient-reported information. Fifth, restricting the frequency of PRO collection to clinic visits reduces the burden of reporting on patients. Although collecting PROs in the waiting room has many strengths, one weakness is the inability to monitor symptoms between visits.

Limitations

Our patient participants represent a wide range of demographics. However, a broader sample representing multiple chronic conditions, including but not limited to heart failure, might expand our understanding of user perceptions and needs. Including patients from other countries or institutions would also expand our understanding. Future work should offer patients a broader variety of visualizations and strategies for interpreting results, and evaluate their comprehension and preferences on a larger scale. Older adults experience more chronic illness than younger populations, and future work could target this age group. Future work could also target high-risk populations, such as cognitively impaired and terminally ill patients, whom we excluded from our study. The sample size for our usability study is small, and future work should explore mi.Symptoms' usability in a larger sample. In this study, we explored three strategies for visualizing a single survey's results, but we did not explore strategies for visualizing multiple survey results over time. Future work should evaluate strategies for longitudinal visualization of results for both patients and providers.



Conclusion

Engaging patients in PRO collection may improve symptom reporting and illness understanding. In this article, we described results from an iterative, user-centered design process to develop an electronic user interface to collect PROs. Patients reported difficulty understanding PRO survey questions and results, prompting the integration of visualizations into the interface. Future interfaces designed to collect PROs may benefit from the lessons learned in our design process and from employing similar strategies to engage patients with PROs.



Clinical Relevance Statement

This study advances the knowledge of design requirements for an interface that collects patient-reported outcomes (PROs). Our results contain generalizable knowledge to inform the development of PRO collection systems and to improve symptom monitoring in chronic illness for practitioners and consumers.



Multiple Choice Question

What do patients perceive as the value of using patient-reported outcomes?

  • a. Encourages reflection on symptom experience

  • b. Facilitates personal tracking and monitoring

  • c. Connects symptoms with disease

  • d. All of the above

Correct Answer: The correct answer is option d, all of the above. In our study, patients reported all three as valuable effects of using patient-reported outcomes (PROs). Patients used PROs as a reflective experience that prepared them to interact with their physician. Patients reported interest in using PROs for personal tracking of symptoms longitudinally. Finally, PROs helped patients associate symptoms with disease, such as connecting gastrointestinal symptoms with heart disease.



Conflict of Interest

None.

Acknowledgments

The authors appreciate the American Medical Informatics Association Student Design Competition Committee, who provided excellent feedback on the early stages of this project.

Protection of Human and Animal Subjects

The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects, and was reviewed by the Columbia University Medical Center Institutional Review Board.


  • References

  • 1 Ali J, Basch E, Baumhauer J. , et al. Users' Guide to Integrating Patient-Reported Outcomes in Electronic Health Records. Report. Baltimore, MD: Johns Hopkins University; 2017
  • 2 Wu A, Jensen R, Salzberg C, Snyder C. Advances in the use of patient reported outcome measures in electronic health records. PCORI National Workshop to Advance the Use of PRO Measures in Electronic Health Records; 2013: 1-82. Available at: http://www.pcori.org/assets/2013/11/PCORI-PRO-Workshop-EHR-Landscape-Review-111913.pdf . Accessed June 21, 2018
  • 3 Wu AW, Kharrazi H, Boulware LE, Snyder CF. Measure once, cut twice–adding patient-reported outcome measures to the electronic health record for comparative effectiveness research. J Clin Epidemiol 2013; 66 (8, Suppl): S12-S20
  • 4 Wagle N. Implementing Patient-Reported Outcome Measures. NEJM Catal [Internet]; 2016 . Available at: http://catalyst.nejm.org/implementing-proms-patient-reported-outcome-measures/ . Accessed June 21, 2018
  • 5 Kotronoulas G, Kearney N, Maguire R. , et al. What is the value of the routine use of patient-reported outcome measures toward improvement of patient outcomes, processes of care, and health service outcomes in cancer care? A systematic review of controlled trials. J Clin Oncol 2014; 32 (14) 1480-1501
  • 6 Basch E, Deal AM, Kris MG. , et al. Symptom monitoring with patient-reported outcomes during routine cancer treatment: a randomized controlled trial. J Clin Oncol 2016; 34 (06) 557-565
  • 7 Valderas JM, Kotzeva A, Espallargues M. , et al. The impact of measuring patient-reported outcomes in clinical practice: a systematic review of the literature. Qual Life Res 2008; 17 (02) 179-193
  • 8 Marshall S, Haywood K, Fitzpatrick R. Impact of patient-reported outcome measures on routine practice: a structured review. J Eval Clin Pract 2006; 12 (05) 559-568
  • 9 Greenhalgh J, Meadows K. The effectiveness of the use of patient-based measures of health in routine practice in improving the process and outcomes of patient care: a literature review. J Eval Clin Pract 1999; 5 (04) 401-416
  • 10 Flynn KE, Dew MA, Lin L. , et al. Reliability and construct validity of PROMIS® measures for patients with heart failure who undergo heart transplant. Qual Life Res 2015; 24 (11) 2591-2599
  • 11 Cella D, Riley W, Stone A. , et al. Initial Adult Health Item Banks and First Wave Testing of the Patient-Reported Outcomes Measurement Information System (PROMIS) Network: 2005–2008. J Clin Epidemiol 2010; 63 (11) 1179-1194
  • 12 Bevans M, Ross A, Cella D. Patient-Reported Outcomes Measurement Information System (PROMIS): efficient, standardized tools to measure self-reported health and quality of life. Nurs Outlook 2009; 62 (05) 339-345
  • 13 Cella D, Yount S, Rothrock N. , et al. The Patient-Reported Outcomes Measurement Information System (PROMIS): progress of an NIH Roadmap cooperative group during its first two years. Med Care 2007; 45 (05) (Suppl. 01) S3-S11
  • 14 Paul M. Northwestern leads consortium to integrate patient-reported outcomes into electronic health records; 2016. Available at: http://www.healthmeasures.net/images/applications/EASIPRO_PressRelease_102016.pdf . Accessed June 21, 2018
  • 15 Van Cranenburgh OD, Ter Stege JA, De Korte J, De Rie MA, Sprangers MAG, Smets EMA. Patient-reported outcome measurement in clinical dermatological practice: relevance and feasibility of a web-based portal. Dermatology 2016; 232 (01) 64-70
  • 16 Jensen RE, Snyder CF, Abernethy AP. , et al. Review of electronic patient-reported outcomes systems used in cancer clinical care. J Oncol Pract 2014; 10 (04) e215-e222
  • 17 Bennett AV, Jensen RE, Basch E. Electronic patient-reported outcome systems in oncology clinical practice. CA Cancer J Clin 2012; 62 (05) 337-347
  • 18 Wynia MK, Osborn CY. Health literacy and communication quality in health care organizations. J Health Commun 2010; 15 (Suppl. 02) 102-115
  • 19 Huba N, Zhang Y. Designing patient-centered personal health records (PHRs): health care professionals' perspective on patient-generated data. J Med Syst 2012; 36 (06) 3893-3905
  • 20 Snyder CF, Smith KC, Bantug ET. , et al. What do these scores mean? Presenting patient-reported outcomes data to patients and clinicians to improve interpretability. Cancer 2017; 123 (10) 1848-1859
  • 21 Brundage MD, Smith KC, Little EA, Bantug ET, Snyder CF. , PRO Data Presentation Stakeholder Advisory Board. Communicating patient-reported outcome scores using graphic formats: results from a mixed-methods evaluation. Qual Life Res 2015; 24 (10) 2457-2472
  • 22 Smith KC, Brundage MD, Tolbert E. , et al. Engaging stakeholders to improve presentation of patient-reported outcomes data in clinical practice. Support Care Cancer 2016; 24 (10) 4149-4157
  • 23 Arcia A, Woollen J, Bakken S. A systematic method for exploring data attributes in preparation for designing tailored infographics of patient reported outcomes. EGEMS (Wash DC) 2018; 6 (01) 1-9
  • 24 Ambrosy AP, Fonarow GC, Butler J. , et al. The global health and economic burden of hospitalizations for heart failure: lessons learned from hospitalized heart failure registries. J Am Coll Cardiol 2014; 63 (12) 1123-1133
  • 25 Benjamin EJ, Blaha MJ, Chiuve SE. , et al; American Heart Association Statistics Committee and Stroke Statistics Subcommittee. Heart Disease and Stroke Statistics–2017 Update: a report from the American Heart Association. Circulation 2017; 135 (10) e146-e603
  • 26 Goldberg RJ, Spencer FA, Szklo-Coxe M. , et al. Symptom presentation in patients hospitalized with acute heart failure. Clin Cardiol 2010; 33 (06) E73-E80
  • 27 Yen P-Y, Wantland D, Bakken S. Development of a customizable Health IT Usability Evaluation Scale. AMIA Annu Symp Proc 2010; 2010: 917-921
  • 28 Masterson Creber R, Prey J, Ryan B. , et al. Engaging hospitalized patients in clinical care: Study protocol for a pragmatic randomized controlled trial. Contemp Clin Trials 2016; 47: 165-171
  • 29 Chew L, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Health (London) 2004; 11: 12
  • 30 Jurgens CY, Fain JA, Riegel B. Psychometric testing of the heart failure somatic awareness scale. J Cardiovasc Nurs 2006; 21 (02) 95-102
  • 31 Jurgens CY, Lee CS, Reitano JM, Riegel B. Heart failure symptom monitoring and response training. Heart Lung 2013; 42 (04) 273-280
  • 32 Jurgens CY, Lee CS, Riegel B. Psychometric analysis of the heart failure somatic perception scale as a measure of patient symptom perception. J Cardiovasc Nurs 2017; 32 (02) 140-147
  • 33 Shenton AK. Strategies for ensuring trustworthiness in qualitative research projects. Educ Inf 2004; 22 (February): 63-75
  • 34 Lincoln YS. Emerging criteria for quality in qualitative and interpretive research. Qual Inq 1995; 1 (03) 275-289
  • 35 Tobin GA, Begley CM. Methodological rigour within a qualitative framework. J Adv Nurs 2004; 48 (04) 388-396
  • 36 Grossman LV, Mitchel EG. Visualizing the patient-reported outcomes measurement information system (PROMIS) measures for clinicians and patients. AMIA Annu Symp Proc 2017; 2289-2293
  • 37 Sauro J. SUPR-Q: a comprehensive measure of the quality of the website user experience. J Usability Stud 2015; 10 (02) 68-86
  • 38 Arcia A, Suero-Tejeda N, Bales ME. , et al. Sometimes more is more: iterative participatory design of infographics for engagement of community members with varying levels of health literacy. J Am Med Inform Assoc 2016; 23 (01) 174-183
  • 39 Arcia A, Velez M, Bakken S. Style guide: an interdisciplinary communication tool to support the process of generating tailored infographics from electronic health data using EnTICE3. EGEMS (Wash DC) 2015; 3 (01) 1120
  • 40 Arcia A, Bales ME, Brown W. , et al. Method for the development of data visualizations for community members with varying levels of health literacy. AMIA 2013; 2013: 51-60
  • 41 Riegel B, Dickson VV. A situation-specific theory of heart failure self-care. J Cardiovasc Nurs 2008; 23 (03) 190-196
  • 42 Riegel B, Carlson B, Moser DK, Sebern M, Hicks FD, Roland V. Psychometric testing of the self-care of heart failure index. J Card Fail 2004; 10 (04) 350-360
  • 43 Riegel B, Lee CS, Dickson VV. Self-care in patients with chronic heart failure. Nat Rev Cardiol 2011; 8 (11) 644-654
  • 44 Galesic M, Garcia-Retamero R. Graph literacy: a cross-cultural comparison. Med Decis Making 2011; 31 (03) 444-457
  • 45 Nayak JG, Hartzler AL, Macleod LC, Izard JP, Dalkin BM, Gore JL. Relevance of graph literacy in the development of patient-centered communication tools. Patient Educ Couns 2016; 99 (03) 448-454
  • 46 Sharit J, Lisigurski M, Andrade AD, Karanam C, Nazi KM, Lewis JR, Ruiz JG. The roles of health literacy, numeracy, and graph literacy on the usability of the VA's personal health record by veterans. J Usability Stud 2014; 9 (04) 173-193
  • 47 Ruiz JG, Andrade AD, Hogue C. , et al. The association of graph literacy with use of and skills using an online personal health record in outpatient veterans. J Health Commun 2016; 21 (sup2): 83-90
  • 48 Fiscella K, Franks P, Clancy CM. Skepticism toward medical care and health care utilization. Med Care 1998; 36 (02) 180-189. Available at: http://www.jstor.org/stable/3767180 . Accessed June 21, 2018
  • 49 Sittig DF, Singh H. A socio-technical approach to preventing, mitigating, and recovering from ransomware attacks. Appl Clin Inform 2016; 7 (02) 624-632
  • 50 Harkins M, Freed AM. The ransomware assault on the healthcare sector. J Law Cyber Warf 2018; 6 (02) 148-164
  • 51 Dalal AK, Dykes P, Mcnally K. , et al. Engaging Patients, Providers, and Institutional Stakeholders in Developing a Patient-centered Microblog. AMIA 2014 Annual Symposium. Washington, DC; 2014
  • 52 Kaelber D, Jha A, Johnston D, Middleton B, Bates D. A research agenda for personal health records (PHRs). J Am Med Inform Assoc 2008; 15 (06) 729-736


Fig. 1 Sample mockup of mi.Symptoms used in interviews.
Fig. 2 Patient-reported outcome (PRO) questions with visualizations, educational information, and unstructured messaging in mi.Symptoms.
Fig. 3 Sample strategies for visualizing patient-reported outcome (PRO) survey results. (Left: small cards and graph; Right: large cards).
Fig. 4 mi.Symptoms interface for the clinic receptionist.