Appl Clin Inform 2024; 15(02): 250-264
DOI: 10.1055/a-2269-0995
Research Article

HistoriView: Implementation and Evaluation of a Novel Approach to Review a Patient Using a Scalable Space-Efficient Timeline without Zoom Interactions

Heekyong Park
1   Department of Research Information Science and Computing, Mass General Brigham, Somerville, Massachusetts, United States
,
Taowei David Wang
1   Department of Research Information Science and Computing, Mass General Brigham, Somerville, Massachusetts, United States
2   Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, United States
,
Nich Wattanasin
1   Department of Research Information Science and Computing, Mass General Brigham, Somerville, Massachusetts, United States
,
Victor M. Castro
1   Department of Research Information Science and Computing, Mass General Brigham, Somerville, Massachusetts, United States
,
Vivian Gainer
1   Department of Research Information Science and Computing, Mass General Brigham, Somerville, Massachusetts, United States
,
Shawn Murphy
1   Department of Research Information Science and Computing, Mass General Brigham, Somerville, Massachusetts, United States
2   Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, United States
 

Abstract

Background Timelines have been used for patient review. Although maintaining a compact overview is important, the merged event representations caused by intricate and voluminous patient data introduce event recognition, access ambiguity, and inefficient interaction problems. Handling large patient datasets efficiently is another challenge.

Objective This study aims to develop a scalable, efficient timeline to enhance patient review for research purposes. The focus is on addressing the challenges presented by the intricate and voluminous patient data.

Methods We propose a high-throughput, space-efficient HistoriView timeline for an individual patient. For a compact overview, it uses nonstacking event representation. An overlay detection algorithm, y-shift visualization, and popup-based interaction facilitate comprehensive analysis of overlapping datasets. An i2b2 HistoriView plugin was deployed, using split query and event reduction approaches, delivering the entire history efficiently without losing information. For evaluation, 11 participants completed a usability survey and a preference survey, followed by qualitative feedback. To evaluate scalability, the plugin was tested on 100 randomly selected patients over 60 years old and compared with a baseline visualization.

Results Most participants found that HistoriView was easy to use and learn and delivered information clearly without zooming. All preferred HistoriView over a stacked timeline. They expressed satisfaction on display, ease of learning and use, and efficiency. However, challenges and suggestions for improvement were also identified. In the performance test, the largest patient had 32,630 records, which exceeds the baseline limit. HistoriView reduced it to 2,019 visual artifacts. All patients were pulled and visualized within 45.40 seconds. Visualization took less than 3 seconds for all.

Discussion and Conclusion HistoriView allows complete data exploration without exhaustive interactions in a compact overview. It is useful for dense data or iterative comparisons. However, issues in exploring subconcept records were reported. HistoriView handles large patient data in a reasonable time while preserving the original information.


Introduction

Improving the efficiency of the patient review process has significant impacts on clinical and research settings. For example, clinicians review patient records to make proper treatment plans, and researchers review patients to validate phenotype algorithms. These tasks require reviewers to examine a patient's history at the overview and detail levels. Because of the importance of temporality in patient records, timelines are a popular tool employed for these tasks.[1] [2] [3] [4] [5]

Timeline adoption in the medical field has evolved from simple time series plots of quantitative data to more sophisticated systems incorporating higher-level analysis and categorical data. Early systems focused on scaling raw numeric values,[6] [7] [8] [9] [10] [11] [12] [13] [14] [15] while more recent systems have added features such as interpretation of quantitative data (e.g., critical fever) or inference of higher-level abstractions.[16] [17] [18] [19] [20] Another group of visualization systems focuses on the existence of categorical data along the time axis.[21] [22] [23] [24] [25] [26] [27] [28] [29] [30] Users can select data points to reveal their value or content. Each data point can be associated with a different data type, unit, or provenance. Furthermore, there are also studies to visualize information extracted from clinical text in various ways.[31] [32] [33] [34] [35]

In the Mass General Brigham (MGB) Research Information Science and Computing (RISC) department, we have been using a categorical data timeline for phenotype validation. We learned from experience that naively applying timelines can cause many problems and negate their benefits.[36] The complexity of health care data poses challenges. For example, diseases may be acute or chronic, and treatment can vary in length and intensity. To fully understand a patient's history, one must maintain the contexts of all long-term and short-term problems and their interrelationships while also investigating details of specific data points. This dynamic necessitates a multilevel exploration process characterized by exhaustive zooming interactions.

One way to help users understand contextual information is to provide a compact timeline. However, if a timeline separates visually overlaying events (i.e., events whose visual representations overlap regardless of their temporal overlap relation) by stacking them vertically, it will be difficult to render a patient history on one screen. Therefore, we consider only a space-efficient, nonstacking timeline representation. However, preserving the compact nature of an overview through merged event representation creates cognition and interaction problems. We identified the following challenges:

  • (1) Event recognition problem: Merged events can mislead users by obscuring the position, duration, and counts of other nearby data points. For example, a single dot on a timeline may actually represent multiple dots, and a long bar may actually represent many densely populated data points. Visual overlays are common in patient timelines. In addition to temporally overlapping events, dense periods of data points and lengthy timelines create the overlaid representation.

  • (2) Access ambiguity problem: Merged events prevent users from hovering over or selecting a specific data point on the timeline, making access to the data point's detail difficult. When users select an overlaying data point, the system cannot tell which data point the user intends to select. Typically, only the data point on the highest display layer is selected, but the user has no way of recognizing the ambiguity, and without proper provision there is no way to let users choose the correct one.

  • (3) Inefficient interaction problem: Merged events require excessive zooming and panning to move between a merged representation and its individual data points. By separating the data points through zooming, users may be able to get around the recognition and access ambiguity issues. However, zoom and pan can be inefficient, as users may have to zoom and pan repeatedly to view individual data points or to backtrack to their original context. This can cause users to lose their temporal context and increases their mental load. Additionally, zooming does not help when one data point completely overlays others.

Furthermore, scalability is another challenge. Visualizing a patient's whole history helps reviewers. However, retrieving the data is often limited by caps on the number of returnable records. Moreover, loading lifelong data can cause severe performance degradation in visualization and interaction time. In our preliminary analysis, it took 50.46 seconds to visualize 10,000 records using the prototype timeline tool developed for this study. Including the response time of data queries, the total turnaround time would be much longer. Such a long wait would discourage users from using a timeline, even if it were tremendously helpful.

Previous timeline studies mainly focused on novel ways to represent,[3] [7] [22] [33] search,[3] [20] [22] [24] [37] or aggregate[23] [24] [38] patterns of patient data. Cockburn et al[39] categorize visualization techniques that provide individual data details along with the context of the data-scape into four strategies: zooming, overview + detail, focus + context, and cue-based techniques. Although timeline research in medicine has not explicitly addressed the challenges presented above, there are examples that fall into these categories.

Zooming allows users to explore the data at different levels of detail. The majority of categorical patient data visualization systems implement this method. Unfortunately, the two levels are not shown concurrently, which adds cognitive and interaction burdens. Overview + detail, on the other hand, separates overview and detail by space. CHRONOS[40] and Lifelines2[22] are examples that use overview + detail. Focus + context techniques use distortions such as a fisheye lens[26] to present context and detail in the same view. CareVis[27] is a timeline system that uses a fisheye view. However, many studies have shown that focus + context techniques create target acquisition problems for users and impair users' relative spatial judgments.[41] [42] [43] [44] Lastly, cue-based techniques selectively highlight or suppress items within the information space to provide a focused view. For example, Harvest[32] uses a shape to display the selected time range.

This study aims to develop a novel timeline visualization tool specifically designed to enhance the review of a patient's electronic health records (EHRs) for research purposes. The principal objective is to overcome the challenges presented by the intricate and voluminous nature of EHR data: solving the event recognition, access ambiguity, and inefficient interaction problems and visualizing a patient's entire history in one timeline within a reasonable time. In this paper, we propose HistoriView, a timeline visualization and interaction technique that helps users identify and access overlaid data points without zooming or panning. We also describe how we implemented a high-throughput HistoriView plugin.


Methods

HistoriView

HistoriView is an interactive and scalable visual exploratory tool for longitudinal patient data review. It was built on the representational frame of LifeLines,[19] which adds concepts vertically and displays record items along the horizontal timeline inside their concept group rows. We chose LifeLines because it provides a general framework that can display a patient history for any disease and helps users see the relationships between different types of data.

HistoriView visualizes the existence of categorical event data on a timeline. It renders events as flat (i.e., nonstacking), transparent, color-coded ranged bars grouped by concepts. The translucency effect gives users a visual cue on the density of the data. Details of each event are delivered through popups interactively ([Fig. 1] and [Supplementary Video], available in the online version).

Supplementary Video

Fig. 1 Demo patient data rendered in the HistoriView timeline. In the HistoriView timeline, events are represented as ranged transparent color bars. This timeline overlays event bars in the same concept group category when there are visually overlaying events (i.e., events whose visual representations share common time-axis coordinates, regardless of their temporal overlap relation), instead of stacking them. Therefore, a darker area means more events are overlaid at that location. When a user mouses over a data bar, the Overlay Detector detects all visually overlaid events and sends the information to the Overlay Marker and the Interactive List Controller. Then, the Overlay Marker changes the identified events' representation. The Interactive List Controller prompts popups to give record details and provides visual cues to help maintain context. In this example, at the mouseover position, nine events were identified as visually overlaying by the Overlay Detector. Their representation on the timeline is transformed into orange bars and shifted upward by the Overlay Marker to distinguish them from the other events. The list of events and preview values is shown in a List Popup (a). When a user clicks on the third row, another popup with more detailed information pops out (b) (Detail Popup). The mouse-hovered row in the List Popup is highlighted in sky blue (c) and its corresponding data bar boundary is delineated with a red line (d). Bold font in the List Popup means its Detail Popup is currently open. This captured image visualizes fictitious patient data.

We devised the Overlay Detector (OD), Overlay Marker (OM), and Interactive List Controller (ILC) to control the interaction, presentation, and delivery of the events. Identifying merged events is crucial, since unrecognized merging can cause serious navigational problems when exploring timeline data. When users mouse over a data bar, the OD detects occluded events at the hover location within a preset radius of the mouse cursor ([Fig. 2a–i]) and sends the information to the OM and the ILC. The occluded events are then delivered to users as a list in a popup so that they can be examined one by one.

Fig. 2 Mouseover events detection in the Overlay Detector. (a) Overlay detection methods. Two different overlay detection methods we examined for HistoriView. The first row shows the flat representation displayed to users, and the second row is a stacked representation of the same data. We set 2 pixels for both the mouse radius (r) and an ignorable margin (δ) between two endpoints. The radius obviates the need for users to adjust the mouse location minutely to hover over the exact data bar or a hidden data bar. (i) Location-based overlay detection method. HistoriView's Overlay Detector currently uses this location-based method. In this method, E1 and E2 are detected because their x-coordinates overlay the mouse cursor, whose radius is r. (ii) Event-based overlay detection method. The event-based approach recursively identifies all events visually overlaying the mouseover event(s). It first identifies E1 and E2, which are mouseover events. Then, it additionally identifies E3 and E4 because they both overlay with E2. It subsequently identifies E5 because it overlays with E4 within the tolerance δ. Since E5 does not overlay any other new events, the process returns the overlaid set {E1, E2, E3, E4, E5} and terminates. (b) A timeline rendered by the event-based detection method. An example of a HistoriView visualization using the event-based visual overlay detection algorithm (this image was captured during HistoriView's development, so the visual output differs from our final representation). In this timeline, several closely located (within the margin δ) event clusters are detected and colored in purple. The number and the range of the detected visual overlays include the majority of data bars for that concept because of some long-term events. As a result, users must scroll down the long event list in the popup to search for a specific event of interest. This is almost identical to reviewing a raw set of data.

We initially used a different strategy ([Fig. 2a–ii]) that recursively detects all events visually overlapping the mouseover events. However, after we applied this method to real patient data, we found problematic cases, as shown in [Fig. 2b]. When there are many durational events (e.g., medications), the event-based strategy picks up almost all the events, making the data review no better than looking at a spreadsheet. Therefore, we refined the algorithm to detect only the events overlaying the mouse location.
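The location-based detection the paper settles on reduces to a simple interval intersection test per concept row. The sketch below is illustrative, not the actual HistoriView implementation: the event fields (`xStart`/`xEnd`, precomputed pixel coordinates of a rendered bar) and function name are our assumptions.

```javascript
// Location-based overlay detection sketch: find every event in the
// hovered concept row whose rendered bar intersects the cursor's
// x-range [mouseX - r, mouseX + r].
const MOUSE_RADIUS = 2; // pixels, matching the radius used in Fig. 2

function detectOverlaidEvents(rowEvents, mouseX, r = MOUSE_RADIUS) {
  // A bar [xStart, xEnd] intersects [mouseX - r, mouseX + r] iff
  // xStart <= mouseX + r and xEnd >= mouseX - r.
  return rowEvents.filter(
    (e) => e.xStart <= mouseX + r && e.xEnd >= mouseX - r
  );
}
```

Unlike the event-based variant, the result set never grows transitively through long durational bars, which is what keeps the popup list short.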

The OM then transforms the identified events' representation to show which data bars are the focused ones, so there will be no mistakes in interpreting the visualization ([Table 1]). The downside of the location-based overlay detection method is that there may be instances of hidden data, such as E3 and part of E4 in [Fig. 2], that are wholly or partially covered by a set of overlaid events. To resolve this issue, OM highlights the events and shifts them upwards slightly (y-shift) to distinguish them from the other bars. For cleaner and more readable representation, it places the focused events on the top layer and makes the background nontransparent.
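The OM's marking step can be pictured as a pure style transformation applied to each detected event. All concrete values below (CSS property names, the 4-pixel shift, the z-index) are illustrative assumptions; only the qualitative behavior (recolor, y-shift, opaque, top layer) comes from the text.

```javascript
// Sketch of the Overlay Marker transformation: recolor the focused
// bars, shift them slightly upwards (y-shift), make them opaque, and
// raise them to the top display layer. Concrete values are assumptions.
function markOverlaidEvent(event) {
  return {
    ...event,
    focused: true,
    style: {
      background: "orange",          // focus color, as in Fig. 1
      opacity: 1,                    // nontranslucent while focused
      transform: "translateY(-4px)", // slight upward y-shift
      zIndex: 10                     // top layer, above unfocused bars
    }
  };
}
```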

Table 1

Overlay marking scheme

Overlay marking process

Simplified example

Real-world example

(a) Transparent, flat representation

(b) Adding color change, y-shift

(c) Smoothing uneven borderlines

(d) Adding a nontranslucent white background

(e) Stacked representation

Note: This figure illustrates the visual overlay marking scheme of the Overlay Marker (OM) (a–d) and its corresponding stacked representation (e). The left column illustrates the process with a simplified example and the right column shows a case from real patient data. The events are cardiovascular agents prescribed during a 2-month period (the two gray vertical lines in (e) are month grid lines and the red dashed line is the mouse cursor location mark). The stacked representation (e) is displayed at reduced size due to the page limitation. When the OM receives the mouseover event list from the Overlay Detector, it transforms their representation. (a) Starting from the transparent, flat representation, (b) the OM changes the color of the visually overlaid events under the mouse cursor and shifts them slightly upwards (y-shift). (c) For clearer visualization, it flattens uneven borders between the focused data bars and the hidden ones and (d) adds a nontranslucent white background behind the detected events.


The ILC provides access to any event without the need to zoom, allowing users to review detailed information in connection with the whole patient history. It prompts a summary of all the detected events, one row per event (List Popup). Mousing over a row in the List Popup triggers an event boundary mark on the timeline, conveying the temporality of that specific event and supporting intuitive analysis of temporal relationships among events. Clicking on a row brings up further information for that event (Detail Popup).

Additional features are designed for efficient navigation. For example, functions are designed to work in place. Popups appear right around a mouse position and can be pinned by clicking on the originating data bar without moving the mouse away from it. Moreover, popups can be kept pinned and moved, so reviewers do not need to go back to find a data location to recall previous information.

The HistoriView library was implemented in JavaScript, based on the open-source vis.js.[45] It accepts its inputs (groups and items) in JavaScript Object Notation (JSON) format. Each JSON input contains a list of groups (the concept list) and a list of items (the events). Each item has a start and end time and concept group information that is used to render the timeline; it also carries detailed information that is delivered through popups. The visualization is independent of the underlying system, since it communicates through data provided in JSON format.
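A minimal input in the groups/items shape described above might look as follows. The `id`/`content`/`start`/`end`/`group` field names are modeled on vis.js conventions, and the `detail` slot for popup content is our assumption; the paper does not publish the exact schema.

```javascript
// Illustrative HistoriView JSON input: a concept list (groups) and an
// event list (items). Field names are assumptions modeled on vis.js.
const input = {
  groups: [
    { id: 0, content: "Glucocorticoids" }, // one concept row per group
    { id: 1, content: "Lab Tests" }
  ],
  items: [
    {
      id: "E1",
      group: 0,               // concept group row this bar belongs to
      start: "2020-11-02",    // bar start on the time axis
      end: "2020-11-16",      // bar end (ranged event)
      detail: { name: "Dexamethasone" } // delivered via the Detail Popup
    }
  ]
};
```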


Scalable HistoriView Timeline Plugin on Integrating Biology and the Bedside

An initial plugin was implemented on our local Integrating Biology and the Bedside (i2b2)[46] instance to explore the feasibility of HistoriView for incorporating comprehensive clinical histories into research analyses ([Fig. 3]). The i2b2 data repository houses EHR data extracted from the MGB Research Patient Data Registry (RPDR),[47] spanning records up to 2013. This plugin enables investigators to enter a patient number, which triggers the retrieval of the patient's entire history from the i2b2 repository and subsequently visualizes it using the HistoriView library.

Fig. 3 Scalable HistoriView Timeline Plugin workflow diagram. When reviewers enter a patient number, the HistoriView plugin makes queries to get the patient's whole history. Considering limitations in the queryable data size and a reasonable turnaround time, it issues one query per top-most-level ontology concept. The split queries are sent and received in parallel. The data in the returned XML messages are converted to JSON format. Since the number of data points affects the visualization time, the plugin reduces the number of representation nodes without loss of information. With the processed JSON events, the HistoriView library visualizes an interactive patient timeline.

We developed a PHP module that runs i2b2 queries and manages the data delivery to the HistoriView library. Considering limitations in the queryable data size and a reasonable turnaround time, it creates multiple i2b2 queries, one per top-level ontology concept. It sends the queries concurrently and ensures that results are received for all of them. All the data from i2b2's XML response messages are extracted and converted into JSON groups and events.
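The split-query strategy (one concurrent request per top-level concept, waiting for all results) can be sketched as below. The actual plugin does this in a PHP module against i2b2; here `queryConcept` is a hypothetical stand-in for that per-concept request, and the function name is ours.

```javascript
// Split-query sketch: fan out one request per top-level ontology
// concept, await all of them, and merge the per-concept event lists.
async function fetchWholeHistory(patientNum, topLevelConcepts, queryConcept) {
  // Promise.all preserves input order and rejects if any query fails,
  // which matches "ensures that results are received for all of them".
  const perConcept = await Promise.all(
    topLevelConcepts.map((concept) => queryConcept(patientNum, concept))
  );
  return perConcept.flat(); // one combined event list for the timeline
}
```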

To avoid the visualization and interaction time delay caused by the large number of returned records, we developed a method that reduces the number of JSON events ([Fig. 4]). Events that occurred in the same month and belong to the same group are condensed into a single event, whose start time is the earliest start time and whose end time is the latest end time among them. However, all the original individual event information is kept in the detail slot so that visual cues and popup data remain accurate and uncompromised. The HistoriView library was updated to support reduced JSON event files. Once the event reduction is complete, HistoriView visualizes the result.

Fig. 4 Reduced JSON data. This example shows how the JSON event reduction works. The three events R1, R2, and R3 in the same concept group (0) are reduced into one RR1 event as they all start in the same month, November 2020. The earliest start time and the latest end time of the original events become the start and end times of the reduced event. In this case, R1's start time and R3's end time become RR1's time range (highlighted). Each original event id and its detailed information, which includes start and end time, is stored as an array in the new event's detailed information. As a result, the reduced event JSON would have only one event RR1 but know R1, R2, and R3's ids and time information to render the red focus rectangle, and their individual detailed information to create their detail popups.
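The reduction illustrated in Fig. 4 amounts to bucketing events by concept group and start month, then collapsing each bucket into one event while retaining the originals in the detail slot. The sketch below assumes ISO date strings (so lexicographic order equals chronological order); the function and field names are illustrative, not the plugin's actual code.

```javascript
// Event reduction sketch: condense same-group events that start in the
// same month into one event, preserving the originals in `detail`.
function reduceEvents(events) {
  const buckets = new Map();
  for (const e of events) {
    const key = `${e.group}|${e.start.slice(0, 7)}`; // group + "YYYY-MM"
    if (!buckets.has(key)) buckets.set(key, []);
    buckets.get(key).push(e);
  }
  let n = 0;
  return [...buckets.values()].map((bucket) => ({
    id: `RR${++n}`,
    group: bucket[0].group,
    start: bucket.map((e) => e.start).sort()[0],       // earliest start
    end: bucket.map((e) => e.end).sort().slice(-1)[0], // latest end
    // Keep every original event so focus rectangles and popups stay exact.
    detail: bucket.map(({ id, start, end }) => ({ id, start, end }))
  }));
}
```

Because `detail` keeps each original id and time range, the reduced file loses no information: popups and boundary marks can still be rendered per original event.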


Evaluation

Usability Evaluation

A user study was conducted to evaluate the representation and navigational methods of HistoriView. Only members of the MGB RISC department with proper institutional review board (IRB) approval to view patient data were recruited as study participants. Furthermore, none of the participants were involved in the design, discussion, or implementation of HistoriView. We sent a study invitation email to the eligible group. The participation was strictly voluntary, and there was no compensation for it.

The entire evaluation was online and self-guided. To acquaint participants with our new tool, a tutorial video, a user manual, and an evaluation guideline were provided. Participants were asked to use a real-patient HistoriView timeline to answer information-seeking questions. The tasks were designed to guide them through various aspects of HistoriView's features before forming opinions ([Table 2b]). The participants then answered a System Usability Scale (SUS)[48]-based, 5-point Likert scale usability survey to assess the user interaction experience, usefulness, and learnability of HistoriView ([Table 2c]). To assess user preference for HistoriView's overlapping event representation over a stacked event representation, an additional survey was administered ([Table 2d]).

Table 2

Usability evaluation design

(a) Evaluation data selection requirements

ID

Requirements

R1

At least one concept must have dense event occurrences, and therefore be conducive to many visually overlaying relations.

R2

At least one concept must contain events with duration.

R3

At least one concept must contain hidden data. Specifically, the visual representation of the hidden data bar should be located in the middle of other interval event(s) that are longer than the hidden one. The target hidden event's subconcept name should be unique and occur only once throughout the patient's history.

R4

At least one concept must contain at least one pair of events in during or after temporal relations that can be captured intuitively from the timeline display.

R5

These conditions may occur jointly in one single concept.

(b) Information-seeking questions and related timeline features

Task

Timeline features

HistoriView-specific features

Conventional timeline features

Q1

On what date did the patient have a high WBC count (flagged as H or High)?

● List Popup

 - Access data without zooming

 - Preview value

● Occurrence time of a point event

Q2

How long did the patient receive Dexamethasone therapy?

● Detail Popup

 - Get start time and end time

● Duration of a ranged event

Q3

When did the patient have Tobradex ointment after the second bone marrow or hematopoietic stem cell transplant?

(Hint: find in Glucocorticoids concept row)

● Hidden data

 - Recognize and access hidden data

● Detail Popup

 - Get a sub-concept name

● Keep context while focusing details

● Temporal intuitiveness

 - after relation

 - nth occurrence

Q4

Did the patient have Filgrastim while taking Dexamethasone?

● Temporal intuitiveness

 - during relation

(c) Usability survey

ID

Question

SQ1

I thought the timeline was easy to use

SQ2

I would imagine that most people would learn to use this timeline very quickly.

SQ3

I felt very confident finding the answers using the timeline.

SQ4

I thought the timeline was easy to learn

SQ5

I could answer the questions without having to zoom

(Note: 'Zoom' means a user's behavior to zoom in or zoom out a timeline. It does not mean looking at a detail pop up.)

(d) A patient history in two different representation schemes that are provided for preference survey

(i) HistoriView timeline

(ii) Stacked timeline

Note: (b) The goal of these questions was to put study participants in a setting where they must utilize HistoriView's features to accomplish realistic tasks using a realistic dataset. We expected this immersion exercise would help them try all of HistoriView's major features and form opinions on what worked and what did not. Each task was designed to use multiple features to minimize the number of questions.


(c) To assess the usability, usefulness, and learnability of HistoriView, we created a SUS-inspired survey with a 5-point Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree). Participants marked their level of agreement with each statement.


(d) We provided images that capture two different timeline representations of the same data. One was the (i) HistoriView representation, with translucency and visually overlaid events merged in compact spaces. The other was a (ii) conventional stacked timeline visualization created by vis.js. It showed events with no translucency that were vertically stacked to avoid merging. Participants were asked to choose a preferred timeline representation. For fair comparison, we selected an example that demonstrated both pros and cons of the two timeline methods. For example, we captured a time range that contained concepts with a lot of visually overlaying events, durational events with during relations, and two bars that look like a single tick, but one is overlaid events, and the other is not.


Finally, we collected the participants' feedback. Individuals were asked to submit any feedback in free text. For those who did not return written feedback or answered negatively to the usability survey, semistructured interviews were conducted. These interviews adopted an open-ended conversational format while incorporating focused prompts to elicit specific insights, such as overall impressions of HistoriView, motivations behind negative survey responses, and suggestions for improvement. All interviews were audio-recorded, and key content was subsequently transcribed for analysis.

Information-seeking task results (e.g., accuracy and response time) were not used as evaluation metrics, since the design was not intended to score individual features and, to allow flexibility, it was a no-reward experiment without time restriction. Moreover, the number of multifeature questions was too limited to draw generalizable insights. The MGB IRB approved a waiver of patient informed consent.


Scalability Evaluation

We performed an evaluation to assess the scalability of the HistoriView timeline plugin, focusing on its performance in handling large datasets and its ability to efficiently visualize patient data. To facilitate a feasible evaluation within the constraints of manual patient number entry, we restricted the test patient sampling pool. Prioritizing data abundance and complexity, 100 individuals were randomly selected from our local i2b2 repository based on two criteria: being over 60 years old and having at least one note for each note type. For each patient, the following metrics were recorded: total number of records, number of JSON events reduced, i2b2 query response time, visualization time, and total turnaround time.

Due to the limitation of returnable data size by a single query and the long visualization time, it was impractical to iterate comparison with a baseline HistoriView plugin (i.e., without using the proposed split query and reduced-event methods) using the same test patients. Therefore, to establish the baseline, we measured HistoriView's visualization time using synthetic patient JSON files, progressively increasing the number of records by 5,000 up to 40,000.



Results

Usability Evaluation Result

Usability Evaluation Participants

Eleven people volunteered and completed the evaluation. The participants included one physician and chart reviewer, one RPDR specialist who educates and interacts with users, two software quality assurance (QA) specialists, one data analyst, and researchers and developers with many years of experience developing biomedical information research applications.


Usability Survey and Preference Survey Results

The survey results are summarized in [Fig. 5]. Most respondents had positive sentiments toward HistoriView. Ten out of eleven responded that our timeline was easy to use (SQ1), and nine answered that it was easy to learn (SQ4). Eight participants agreed or strongly agreed that people would learn to use the timeline very quickly (SQ2). Eight participants indicated they were confident in finding the answers from the timeline (SQ3), and eight replied that they did not need to zoom the timeline to find the answers (SQ5). In the survey asking about their preference for the HistoriView representation over the stacked timeline, all eleven participants preferred the HistoriView representation.

Fig. 5 Usability survey results. (a) Usability survey result. This graph illustrates the survey results with neutral opinions aligned in the middle. The numbers denote how many subjects gave each answer. For the usability questions, which asked about the usability, learnability, and usefulness of HistoriView in a positive framing, most of the submitted answers show agreement or strong agreement. (b) Preference for the HistoriView timeline over a stacked timeline. In the survey asking about their preference for the HistoriView representation over a stacked timeline, all eleven participants preferred the HistoriView representation.

Qualitative Feedback

In the qualitative assessment, nearly all participants expressed favorable impressions. The opinions converged on satisfaction with the display, ease of learning and use, and efficiency. Notably, popups were the feature most frequently mentioned for facilitating efficient navigation of patient history and convenient data comparison. However, challenges were also identified, including overly sensitive popup prompts occasionally interfering with interaction with the current popup, the lack of visual cues, and difficulties in navigating subconcept data from the concept group name. Minor critiques and improvement suggestions were also received. Furthermore, negative usability survey responses were attributed to domain knowledge gaps and confusion regarding the list popup functionality. Notably, one participant, who is a clinician, envisioned its potential for clinical use. [Table 3] provides a detailed summary of the qualitative feedback.

Table 3

Qualitative feedback results

Finding

Comments

A. Positive aspects:

  (i) Positive impression

(P11) “amazing … everything is excellent … I would highly recommend physicians to use it.”

(P6) “The functionality is awesome. It is so important, it's gonna be so helpful and [I am] so excited about it,”

(P5) “Excellent job […] well thought-out and from users' stand-point very well designed.”

(P10) “It is obvious that a lot of hard work went into this tool and it looks and feels great.”

(P2) “Loved the timeline features, nicely done”

(P1) “I've understood, appreciated about the timeline”

(P8) “Overall, very nicely done!”

  (ii) Display

(P4) “I think it's a great way to show a lot of information about a subject in a small amount of space, ...”

(P6) “There is no criticism about the timeline look”

(P5) “Graphics and overall look are very clean and easy to follow. All of which helps prevent eye strain…”

(P9) “much prefer the in-line display over the stacked display”

(P5) “[HistoriView] is more concise and easier to read.”

(P8) “while I like [HistoriView], I can also find details in [the stacked representation] somewhat useful in certain circumstances.”

  (iii) Ease to learn and use

(P1) “[I] learned it quicker than I thought […] I've understood […] and I think it was really easy to learn.”

(P6) “I thought it's very easy to use, easy to hover over.”

(P5) “Regarding the functionality of [HistoriView], it is so much easier to use than the old timeline especially for its intended target audience.”

(P4) “Future users will get used to [HistoriView] quickly, and it's a lot easier than the raw data they are currently working with … I think they'll appreciate that once they start using it.”

(iv) Efficiency

(P4) “I think it's a great way to show a lot of information about a subject in a small amount of space and to easily navigate concepts or ideas that you want to see”

(P4) “easier to navigate than from a datasheet”

(P11) “I found it very interesting to review patients”

(P4) “it's a great way to easily navigate concepts or ideas that you want to see without having to run a lot of subqueries, and complicated queries”

(P11) “It can help physicians to better treat the patients who have lengthy past medical history. It can make it easy for physicians to try new treatment options for critical care patients.”

(P4) “I think [popups] are useful features. […] You can keep the popups open and set. So, you don't have to find that tick or remember what tick it was in again. And if you're trying to compare different spots on the timeline, it's nice, such as how much time was in between.”

(P9) “I like the window that shows you multiple events in one shot.”

B. Negative aspects:

  (i) Fast popup prompts

(P7) “If you move your cursor around (don't hover) you get a lot of popups. I wanted to pop up something in the middle and moved my cursor in from the upper right. Took me a few tries of closing popups in the upper right before I could get to the one in the middle. I found it to be annoying.”

(P1) “Sometimes, when you move your mouse around, it's a little bit distracting to see though, the popups are everywhere.”

  (ii) Lack of visual cues

(P1) “when viewing [HistoriView], there are no horizontal scroll bars, … So I can't tell if there are more data points that are [beyond the scope of the timeline].”

(P2) “[The ability to] scrolling to any direction of the timeline is not very clear, maybe a scroll bar at the bottom may be helpful.”

(P9) “I knew I could use the scroll bar to zoom into the timeline from previous displays, but there is absolutely no other indication that the functionality exists. Similarly, [there is] no indication that I can drag the timeline to the left and to the right.”

(P7) “When you have a wide timeline and a lot of rows it's sometimes difficult to know what is what without hovering on a tick mark”

  (iii) Difficulties in navigating subconcept records

(P10) “It was difficult to identify Tobradex without the hint that it might be a part of the patient record. Drilling through this level of detail can be challenging.”

(P1) “sorting by value is not available”

  (iv) Other negative aspects

(P9) “P icon for pinning was non-intuitive”

(P9) “I can't see “unpin” a window once I pin it. I guess I could just close it at that point, but it feels disconcerting to have an option that I can't take back.”

(P9) “Weird to me that I can close all popups through a drop down on one popup”

C. Suggestions

  (i) User guide support

(P11) “But one thing I would like to say that if there should be some training module for physicians like there is one Training Module for RPDR in Health Stream. And If there would be some detailed video, it would be extremely helpful.”

(P3) “Provide info icons to popup screens on what they are displaying”

  (ii) Visual cue

(P7) “Wondering if it would help if the concept names were color coded with their tick marks. When you have a wide timeline and a lot of rows its sometimes difficult to know what is what without hovering on a tick mark. On the other hand the popups identify this tick marks so this is not that important.”

  (iii) Concept hierarchy information

(P3) “Provide concept path for the concepts in the popup screens”

  (iv) A stationary screen space for popup contents

(P5) “Maybe my option to say, don't follow mouse movement.

Or, maybe have that window that lives stationary just to have that change in one spot, distracting, that's just my own preference.”

D. Negative usability survey answers explained:

  (i) Disagree to SQ3

    - had to rely on the hints provided, due to the lack of relevant domain knowledge

(P2) “I just kept getting confused. What are they really asking? What am I supposed to be looking at? Is this just this one concept? Or is it a group of concepts? If it's the only tick that applies to?

Because they mostly are getting raw data, and they have specific questions in mind, so when they go to the timeline, and they start to navigate, they'll know what they are looking for. A lot easier for them.”

  (ii) Disagree to SQ5

    - Lack of understanding about List Popup

(P3) “So, in order to view the details, I had to do that way. I had to know exactly where it falls on the timeline. […] I thought maybe you have to first zoom in, in order to see how many bars are there. Because they may all be [clumped] together, you wouldn't know if there are multiple events, wouldn't know which one will be the one we are looking at. Because I was under the impression that each tick maybe having one popup.”

[It turned out P3 did not check the tutorial materials provided, and understood how a list popup works after explanation.]

    - Was able to use List Popup properly, but did not understand it the first time through

(P9) “I didn't understand that you could view multiple events in one window the first time through – it would likely change my answer to question 5, but… the fact that I didn't understand that first time through is probably worth noting.”

E. Potential for clinical use

(P11) “It can help physicians to better treat the patients who have lengthy past medical history. It can make it easy for physicians to try new treatment options for critical care patients.”



Scalability Evaluation Result

[Fig. 6] shows the scalability evaluation results. A patient had 12,353 records on average, and the maximum case had 32,630 records. These exceed the maximum number of returnable records (the default limit is 7,500 records) of the baseline system. HistoriView successfully pulled the entire records of the test patients. Our event reduction method reduced the average number to 852 and the maximum to 2,019 (a 93.81% reduction). The total turnaround time from the user request to the visualization, which includes data querying time, averaged 31.97 seconds, and the maximum took 45.40 seconds.

Fig. 6 Scalability evaluation results.

Most of the turnaround time was spent getting patient data from the i2b2 repository. The visualization process completed in less than 3 seconds for all study patients, and the average was 1.03 seconds. This is a significant improvement compared with our preliminary analysis, in which we measured visualization time by incrementing demo data size by 5,000 records. For example, for the patient with the highest number of records (i.e., 32,630), the visualization time was 2.54 seconds. Without the event reduction process, based on our simulation with synthetic patient data, it is expected to take between 459.45 and 638.07 seconds, the response times measured when visualizing 30,000 and 35,000 events, respectively.



Discussion

HistoriView uses zooming, cue-based, and z-based overview + detail techniques. While most overview + detail techniques separate the detail from the context on the x–y plane, our approach uses z-separation. For example, CHRONOS displays the entire timeline in a small separate space when the timeline is zoomed in, while Lifelines2 displays the details of a selected data point in a detail panel (e.g., sonogram). Unlike these systems, which take up valuable screen space, HistoriView displays a detailed view as a popup over the contextual timeline.

Similar to ours, Harvest[32]'s timeline component uses a cue-based technique to mark the user's interest and a z-based overview + detail technique for detailed information. However, HistoriView differs in many ways. Harvest reacts to a user-set time range and displays all the visit details of the focus area directly, using information bubbles. Each visit's information is contained in one information bubble, so multiple bubbles can pop up at the same time in the selected time frame. This is possible because Harvest visualizes patient visit information only, so dealing with merged events is much easier than in our use case.

In contrast, we do not limit the visualization target to specific data types, and the interaction is more dynamic. While users need to manipulate a time slider to set focus in Harvest, we immediately get an event list triggered by the mouseover interaction. The target is automatically set to the overlaying events rather than the whole set of events in a selected time range. Due to the complexity and the large volume of data we visualize, we cannot show all the details of focused events at the same time. Therefore, we deliver event details at two levels, from preview to full detail on user click. Comparing multiple records is made more convenient by pinning popups and reorganizing the popup windows' layout.
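The two-level detail delivery and popup pinning described above can be sketched as a small state model (a minimal illustration under our own naming assumptions; `PopupManager` and its methods are hypothetical, not HistoriView's actual code):

```python
class PopupManager:
    """Two-level detail delivery: a List Popup previews overlaid events,
    and clicking a row opens a Detail Popup that can be pinned."""

    def __init__(self):
        self.details = {}  # event id -> {"pinned": bool}

    def open_detail(self, event_id):
        # Clicking a List Popup row opens that event's Detail Popup.
        self.details[event_id] = {"pinned": False}

    def pin(self, event_id):
        # A pinned popup persists on screen, which makes side-by-side
        # comparison of records convenient.
        if event_id in self.details:
            self.details[event_id]["pinned"] = True

    def close_unpinned(self):
        # Unpinned popups are transient; pinned ones survive.
        self.details = {k: v for k, v in self.details.items() if v["pinned"]}


mgr = PopupManager()
mgr.open_detail("E1")
mgr.open_detail("E2")
mgr.pin("E1")
mgr.close_unpinned()
print(sorted(mgr.details))  # ['E1']
```

Keeping pinned popups in a plain dictionary keyed by event id mirrors the idea that each record's detail view is independent and can be laid out freely by the user.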

Contrary to our concern that HistoriView's new features might be difficult to learn, the evaluation results were promising. In the usability survey, at least eight out of eleven participants expressed agreement or strong agreement with the usability questions asking about positive aspects. Only one or two participants showed disagreement with two questions (SQ3 and SQ5), while no strong disagreement was recorded. Regarding negative survey answers, P2 anticipated that the domain knowledge gap would not hinder intended users, as the tool will be used for specific goals by specialists. Feedback from P3 and P9 suggests the presence of a learning curve for using our tool, highlighting the need for user guide support as emphasized by P3 and P11.

While the compact HistoriView design has substantial benefits for patient review, qualitative feedback reveals challenges with these dense timeline rows hosting diverse data types. Users struggled to locate specific subconcept data among numerous entries (P2, P10). Value sorting is impossible (P1) due to heterogeneous data and mixed units within a list popup. As implied by P3's suggestion for concept hierarchy information, it may be difficult to understand the context of a concept from the current popups. However, we believe these issues can be addressed by integrating filter-by-subconcept and search functions and enriching popups with ontological context within the existing timeline framework. This approach would enhance subconcept discoverability and comprehension while upholding the core design principles.

For popups, we considered the balance between convenience and disturbance. If popup generation is triggered by a user's action (e.g., mouse click), it would be inconvenient when reviewing many similar-looking data bars. Conversely, autoprompting popups following the mouse cursor can be annoying when a user gets unwanted popups. To solve this problem, we added two user options for controlling popups. The first lets the user pin a popup; once pinned, a popup stays open until the user unpins it. The second disables automatic popups. Additionally, we set the freeze duration (i.e., the amount of time that HistoriView waits before opening a new popup) to 600 ms. We expected the freeze duration would nicely balance the tradeoffs. However, some users found this to be too short. We remediated it by increasing the duration to 1,500 ms.
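The freeze-duration behavior amounts to a dwell check before each automatic popup opens. A minimal sketch, assuming a hypothetical `PopupGate` class (not HistoriView's implementation):

```python
class PopupGate:
    """Open an automatic popup only after the cursor has dwelled for
    `freeze_ms` milliseconds, and only if automatic popups are enabled."""

    def __init__(self, freeze_ms=1500, auto_enabled=True):
        self.freeze_ms = freeze_ms
        self.auto_enabled = auto_enabled
        self.last_move_ms = None

    def on_mouse_move(self, now_ms):
        # Every cursor movement restarts the dwell timer, so sweeping the
        # mouse across the timeline does not spray popups along the way.
        self.last_move_ms = now_ms

    def should_open(self, now_ms):
        # A popup opens only when the cursor has rested long enough.
        if not self.auto_enabled or self.last_move_ms is None:
            return False
        return now_ms - self.last_move_ms >= self.freeze_ms


gate = PopupGate(freeze_ms=1500)
gate.on_mouse_move(0)
print(gate.should_open(600))    # False: 600 ms dwell is below the threshold
print(gate.should_open(1600))   # True: the cursor rested past the freeze duration
```

With this structure, the original 600 ms setting and the remediated 1,500 ms setting differ only in the `freeze_ms` constructor argument, which is what makes the tradeoff easy to tune.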

Despite the limited sample size, our study offers valuable insights into the usability of HistoriView as a patient review tool for research. To address the challenges of a restricted recruitment pool and uncompensated participants, we minimized the time commitment required for participation. The entire evaluation was online and self-guided with no time limit. The information-seeking tasks were simplified and guided participants to use multiple features. All questions were multiple-choice.

Additionally, the number of usability survey questions was also reduced to half of the original SUS questionnaire. This reduction compromised one underlying principle of alternating positive and negative questions to derive a balanced SUS score. Therefore, we opted against calculating a final score. Nevertheless, adopting only part of the questions retains value due to their validated efficacy in measuring usability. Moreover, our questionnaire aligns with the latest ISO 9241-110 standard,[49] which suggests usability measures should cover effectiveness, efficiency, and satisfaction.

To mitigate potential bias, the evaluation was carefully designed to replicate real-world scenarios. Initially, a rigorous patient selection process was implemented to ensure that HistoriView's effectiveness as a patient review tool could be comprehensively assessed. This involved identifying a patient with clinical data aligned with predefined requirements ([Table 2a]) by meticulously searching clinical trials on ClinicalTrials.gov.[50] The i2b2 database was queried using diagnoses and medical concepts extracted from the eligibility criteria of clinical trials. The results were scrutinized to find a patient exhibiting all the queried concepts and their patterns, ensuring compliance with the data requirements. Subsequently, a patient diagnosed with acute lymphoblastic leukemia was chosen and visualized for evaluation.

Next, for information-seeking tasks, we constructed clinically relevant research questions ([Table 2b]). These questions were designed to probe concepts drawn from real-world clinical trials and to investigate temporality. The temporal constraints employed in the questions were based on well-established temporal patterns observed in clinical trial eligibility criteria.[51] Importantly, the participants were highly qualified to provide valuable feedback on the system. They were either experienced in working with patient data or possessed a deep understanding of user needs in this domain. Recognizing the potential deficit in domain knowledge among participants lacking clinical backgrounds, we provided a hint for task Q3, mirroring the level of domain knowledge typically exhibited by users in actual research scenarios. We encouraged both positive and negative feedback.

The scalability of HistoriView was achieved by splitting data queries and using an event reduction method. This approach dramatically improved the responsiveness of the system. The reduction strategy reduces the number of visual artifacts by combining chunks of data at a monthly level. This has the limitation that if one zooms in the timeline at the daily scale, the overview of individual data distribution may not be accurate at first sight. However, users can still get accurate information by interacting through HistoriView popups. In addition, in a timeline that spans many years of lifelong history, a 1-month duration looks like a tick, and users would not notice the difference in representation generated by reduced events.
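The monthly reduction strategy can be sketched as grouping a concept's events by start month and collapsing each group into one display node that keeps every original record for the popups (an illustrative sketch; the field names are assumptions loosely modeled on Fig. 4):

```python
from collections import defaultdict

def reduce_events(events):
    """Collapse events of the same concept group that start in the same
    month into one display node, preserving all originals for the popups."""
    groups = defaultdict(list)
    for ev in events:
        # Key: concept group + start month (the YYYY-MM prefix of the date).
        groups[(ev["concept"], ev["start"][:7])].append(ev)

    reduced = []
    for (concept, _month), members in groups.items():
        reduced.append({
            "concept": concept,
            # Earliest start and latest end span the whole group.
            "start": min(m["start"] for m in members),
            "end": max(m["end"] for m in members),
            # Originals are kept, so no information is lost for the popups.
            "details": members,
        })
    return reduced

# Mirroring Fig. 4: three November 2020 events collapse into one node.
events = [
    {"id": "R1", "concept": 0, "start": "2020-11-03", "end": "2020-11-05"},
    {"id": "R2", "concept": 0, "start": "2020-11-10", "end": "2020-11-12"},
    {"id": "R3", "concept": 0, "start": "2020-11-20", "end": "2020-11-28"},
]
rr = reduce_events(events)
print(len(rr), rr[0]["start"], rr[0]["end"])  # 1 2020-11-03 2020-11-28
```

Because the display layer only ever renders the reduced nodes, the number of visual elements shrinks by an order of magnitude, while the `details` array lets the popups reconstruct every original record exactly.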


Conclusion

We propose HistoriView, an interactive timeline visualization designed to support patient history review at the atomic data level. The combined approach of its overlay detection algorithm, visual design, and interaction architecture solves the event recognition, access ambiguity, and inefficient interaction problems. Handling large patient data was another challenge to consider. Querying and loading large lifelong data causes serious performance degradation. We deployed an i2b2 HistoriView timeline plugin, using a split query scheme and reduced-event modeling.

The usability study of HistoriView was promising. Most participants agreed or strongly agreed that HistoriView is easy to learn and use, does not require zooming for navigation, and delivers the right information clearly. All participants preferred HistoriView representation over a stacked timeline. In addition, the scalability evaluation results showed that it can visualize a patient's entire history without loss of original information in a reasonably short amount of time.

HistoriView's main contribution is to allow complete data exploration without zooming while maintaining a space-efficient overview. This is particularly valuable for timelines with numerous merged events or iterative data comparisons. While primarily targeted toward research use cases, one participant commented on its potential utility in clinical settings. However, its compactness, while advantageous, also presents a new challenge: difficulties in exploring subconcept records. Our immediate focus lies in addressing qualitative feedback to optimize HistoriView based on participant insights and in integrating diverse data types (e.g., images, notes) into the detail popup, which is currently limited to structured detail information. Ultimately, our goal is to build an application that uses HistoriView to support patient review within research contexts.


Clinical Relevance Statement

This study proposes HistoriView: an interactive, scalable timeline for patient review that supports efficient data navigation and full data exploration at any time-granular view. It resolves the event recognition, access ambiguity, and inefficient interaction problems caused by the complexity of patient data, which are the main barriers to using timelines as a patient review tool. HistoriView is expected to greatly improve the time-consuming and laborious patient review process and to impact patient care and medical research.


Multiple Choice Questions

  1. Which problems may occur from overlaid representations in a conventional timeline?

    • At the overview level, users may think they are looking at one data point, but at the finer granularity, there can be multiple nearby data.

    • When a user mouses over or clicks a merged event representation, only the event rendered at the top-most layer will be selected for further information.

    • To review individual data while maintaining all the long-term and short-term context and their interrelationships, users need to zoom and pan laboriously.

    • All a, b, and c

    Correct Answer: The correct answer is option d. Merged event representations are common in patient timelines, but they are the main barriers to using timelines for patient review. We defined three problems (i.e., the event recognition problem, access ambiguity problem, and inefficient interaction problem) that can be caused by such representations.

    First, they can mislead users by obscuring the position, duration, and counts of other nearby data points. For example, users may think that they are looking at one data point when there are multiple data points at the same position. Alternatively, users may see a ranged bar and assume it corresponds to a few durational events, but when zoomed in, there may be no durational event at all, only many densely populated data points (event recognition problem).

    Second, they prevent users from hovering over or selecting a specific data point on the timeline. When a user selects an overlaying point, the system cannot tell which data the user intends to select, and there is no way to let users choose the right one (access ambiguity problem).

    Third, getting individual data points from a merged one or vice versa generally depends on zooming and panning interactions. Reviewers must maintain all the long-term and short-term context and their interrelationships. However, repetitive zooming and panning can make users lose their temporal context. Also, backtracking to their original context may be difficult and increases mental load (inefficient interaction problem).

  2. What is not true about HistoriView?

    • Users can delve into individual data details from the overview level display.

    • Users can clearly see the visual boundary of an individual data range in an overlaid event representation.

    • Users can pin popups to keep interesting data information popups on the screen.

    • As the HistoriView plugin implementation uses reduced JSON strategy for scalability, there are missing events that are not represented in the patient timeline.

    Correct Answer: The correct answer is option b. HistoriView's popups provide individual data access without having to zoom. Mousing over an event in a List Popup displays the event's boundary cue in the timeline. Popups can be pinned for further analysis. For the scalable timeline implementation, we use reduced JSON as the data source. This method reduces the number of display nodes, but all the records are preserved and delivered through popups. Therefore, there is no missing data in the timeline, and reviewers can review the entire dataset.



Conflict of Interest

None declared.

Acknowledgments

The authors wish to thank all the volunteers from the MGB RISC Department for participation and valuable feedback.

Protection of Human and Animal Subjects

This study was approved by the Institutional Review Board (IRB) of Mass General Brigham under the Mass General Brigham Biobank protocol (2009P002312). The Mass General Brigham Institutional Review Board approved a waiver of patient informed consent.



Address for correspondence

Heekyong Park, PhD
Mass General Brigham, 399 Revolution Drive, Somerville, MA 02145
United States   

Publication History

Received: 08 September 2023

Accepted: 08 November 2023

Accepted Manuscript online:
15 February 2024

Article published online:
03 April 2024

© 2024. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany


Fig. 1 Demo patient data rendered in the HistoriView timeline. In the HistoriView timeline, events are represented using ranged transparent color bars. This timeline overlays event bars in the same concept group category when there are visually overlaying events (i.e., events whose visual representations share common time-axis coordinates, regardless of their temporal overlap relation), instead of stacking them. Therefore, a darker area means more events are overlaid at that location. When a user mouses over a data bar, the Overlay Detector detects all visually overlaid events and sends the information to the Overlay Marker and the Interactive List Controller. Then, the Overlay Marker changes the identified events' representation. The Interactive List Controller prompts popups to give record details and provides visual cues to help maintain context. In this example, at the mouseover position, nine events were identified as visually overlaying by the Overlay Detector. Their representation on the timeline is transformed into orange bars and shifted upward by the Overlay Marker to distinguish them from the other events. The list of events and preview values are shown in a List Popup (a). When a user clicks on the third entry, another popup with more detailed information pops up (b) (Detail Popup). The mouse-hovered row in the List Popup is highlighted in sky blue (c) and its corresponding data bar boundary is delineated with a red line (d). Bold font in the List Popup means its Detail Popup is currently open. This captured image visualizes fictitious patient data.
Fig. 2 Mouseover event detection in the Overlay Detector. (a) Overlay detection methods. Two different overlay detection methods we examined for HistoriView. The first row shows the flat representation displayed to users, and the second row is a stacked representation of the same data. We set 2 pixels for both the mouse radius (r) and an ignorable margin (δ) between two endpoints. The radius obviates the need for users to adjust the mouse location minutely to mouse over the exact data bar or a hidden data bar. (i) Location-based overlay detection method. HistoriView's Overlay Detector currently uses this location-based overlay method. In this method, E1 and E2 are detected as their x-coordinates overlay the mouse cursor whose radius is r. (ii) Event-based overlay detection method. The event-based approach recursively identifies all events visually overlaying the mouseover event(s). It first identifies E1 and E2, which are mouseover events. Then, it additionally identifies E3 and E4 because they both overlay with E2. It would subsequently identify E5 because it overlays with E4 within the tolerance of δ. Since E5 is not overlaid with any other new events, the process returns the overlaid set {E1, E2, E3, E4, E5} and terminates. (b) A timeline rendered by the event-based detection method. An example of a HistoriView visualization using the event-based visual overlay detection algorithm (this image was captured during HistoriView's development, so the visual output differs from our final representation). In this timeline, several closely located (within the margin δ) event clusters are detected and colored in purple. The number and the range of the detected visual overlays include the majority of data bars for that concept because of some long-term events. As a result, users must scroll down the long event list in the popup to search for a specific event of interest. This is almost identical to reviewing a raw set of data.
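The location-based method in panel (a-i) can be sketched as a simple interval intersection between each event's rendered x-range and the mouse cursor expanded by the radius r (a minimal sketch with hypothetical field names, not the plugin's actual code):

```python
def detect_overlays(events, cursor_x, r=2):
    """Location-based overlay detection: return every event whose rendered
    x-range [x0, x1] intersects the cursor position expanded by radius r."""
    lo, hi = cursor_x - r, cursor_x + r
    # Two intervals intersect iff each one starts before the other ends.
    return [e for e in events if e["x0"] <= hi and e["x1"] >= lo]


events = [
    {"id": "E1", "x0": 10, "x1": 14},
    {"id": "E2", "x0": 13, "x1": 20},
    {"id": "E3", "x0": 30, "x1": 35},
]
hits = detect_overlays(events, cursor_x=14, r=2)
print([e["id"] for e in hits])  # ['E1', 'E2']
```

Unlike the rejected event-based variant, which chains outward through every transitively overlapping bar, this check stops at the cursor's immediate neighborhood, which keeps the resulting List Popup short and focused.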
Fig. 3 Scalable HistoriView Timeline Plugin workflow diagram. When reviewers enter a patient number, the HistoriView plugin issues queries to retrieve the patient's whole history. Considering the limits on queryable data size and the need for a reasonable turnaround time, each query uses one top-most-level ontology concept. The split queries are sent and received in parallel. The data in the returned XML messages are converted to JSON format. Since the number of data points affects the visualization time, the plugin reduces the number of representation nodes without loss of information. With the processed JSON events, the HistoriView library visualizes an interactive patient timeline.
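The split-query step can be sketched with a thread pool that issues one query per top-most-level ontology concept and merges the results (a hypothetical sketch; `fetch_concept` stands in for a real i2b2 query-and-XML-parse call, which is not shown):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_concept(patient_id, concept):
    # Placeholder for one i2b2 query scoped to a single top-level ontology
    # concept; a real implementation would send the query, receive the XML
    # message, and convert it to JSON events.
    return [{"patient": patient_id, "concept": concept}]

def fetch_patient_history(patient_id, top_concepts):
    """Issue one query per top-level concept in parallel and merge the
    returned event lists into the patient's whole history."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        parts = pool.map(lambda c: fetch_concept(patient_id, c), top_concepts)
        # Flatten the per-concept result lists into a single event list.
        return [ev for part in parts for ev in part]


history = fetch_patient_history("p001", ["Diagnoses", "Medications", "Labs"])
print(len(history))  # 3
```

Splitting by top-level concept keeps each individual query under the repository's returnable-size limit, while issuing them in parallel keeps the total turnaround time close to that of the slowest single query.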
Fig. 4 Reduced JSON data. This example shows how the JSON event reduction works. The three events R1, R2, and R3 in the same concept group (0) are reduced into one RR1 event as they all start in the same month, November 2020. The earliest start time and the latest end time of the original events become the start and end times of the reduced event. In this case, R1's start time and R3's end time become RR1's time range (highlighted). Each original event id and its detailed information, which includes start and end time, is stored as an array in the new event's detailed information. As a result, the reduced event JSON would have only one event RR1 but know R1, R2, and R3's ids and time information to render the red focus rectangle, and their individual detailed information to create their detail popups.