Appl Clin Inform 2017; 08(04): 1095-1105
DOI: 10.4338/ACI-2017-04-RA-0067
Research Article
Schattauer GmbH Stuttgart

Usability Evaluation of Electronic Health Record System around Clinical Notes Usage–An Ethnographic Study

Rubina F. Rizvi, Jenna L. Marquard, Gretchen M. Hultman, Terrence J. Adam, Kathleen A. Harder, Genevieve B. Melton

Address for correspondence

Genevieve B. Melton, MD, PhD
Institute for Health Informatics and Department of Surgery, University of Minnesota
420 Delaware Street SE, Mayo Mail Code 450, Minneapolis, MN 55455
United States   

Publication History: 24 April 2017; 20 September 2017
Publication Date: 14 December 2017 (online)

 

Abstract

Background A substantial gap exists between current Electronic Health Record (EHR) usability and potential optimal usability. One of the fundamental reasons for this discrepancy is poor incorporation of a User-Centered Design (UCD) approach during the Graphical User Interface (GUI) development process.

Objective To evaluate usability strengths and weaknesses of two widely implemented EHR GUIs for critical clinical notes usage tasks.

Methods Twelve Internal Medicine resident physicians interacting with one of the two EHR systems (System-1 at Location-A and System-2 at Location-B) were observed by two usability evaluators employing an ethnographic approach. User comments and observer findings were analyzed for two critical tasks: (1) clinical notes entry and (2) related information-seeking tasks. Data were analyzed from two standpoints: (1) usability references categorized by usability evaluators as positive, negative, or equivocal and (2) usability impact of each feature measured through a 7-point severity rating scale. Findings were also validated by user responses to a postobservation questionnaire.

Results For clinical notes entry, System-1 surpassed System-2 with more positive (26% vs. 12%) and fewer negative (12% vs. 34%) usability references. The features with the greatest impact on EHR usability for clinical notes entry (severity score for each feature in parentheses) were: autopopulation (6), screen options (5.5), communication (5), copy pasting (4.5), error prevention (4.5), edit ability (4), and dictation and transcription (3.5). Both systems performed equally well on information-seeking tasks, for which the features with the greatest impact on EHR usability were navigation for notes (7) and others (e.g., looking for ancillary data; 5.5). Ethnographic observations were supported by follow-up questionnaire responses.

Conclusion This study provides usability-specific insights to inform future, improved EHR interfaces that are better aligned with a UCD approach.



Background and Significance

Health information technologies (HITs), such as Electronic Health Record (EHR) systems, are considered critical factors in transforming the health care industry.[1] Despite high EHR adoption rates, substantial gaps exist between the current state of EHRs and their potential usefulness.[2]

Recently, the HIT end-user community and EHR experts have pointed specifically to the cognitive challenges resulting from poor EHR usability as one of the key reasons for this gap.[2] In addition, substantial disparities in how HIT usage and its possible outcomes are perceived among users with a wide range of technology skills[3] [4] further confound the situation. A well-designed EHR graphical user interface (GUI) could help address these challenges by improving system usability, leading to improvements in health care delivery.[5]

Usability has been defined in various ways and typically encompasses a set of evaluation methods to understand user experiences for the purpose of creating more desirable, usable, and useful products.[6] The International Organization for Standardization (ISO) defines usability as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.”[7] Nielsen defines usability as “a quality attribute that assesses how easy user interfaces are to use” and describes five basic principles (i.e., easy to learn, easy to remember, efficient, having minimal errors, and with greater user satisfaction).[8] [9] An essential approach to account for and resolve usability problems is User-Centered Design (UCD), which is guided by the philosophy that “the final product should suit the users, rather than making the users suit the product.”[10]

To date, several EHR usability studies employing various methodological approaches have been conducted in diverse contexts, such as clinical decision support systems and dental EHR systems.[2] [11] [12] [13] [14] Among these methods, ethnography is one of the earliest techniques, in which subjects are observed in a naturalistic setting. Ethnography has also been employed in the software development cycle for evaluating information systems.[15] This approach to data collection provides a rich, realistic, and holistic view of user behavior during task completion and can capture detailed information that users sometimes fail to communicate during more controlled (e.g., laboratory-based) methodological approaches. Similar observational study methodologies have been used widely in health care research.[16] [17] [18]

There is a growing body of literature providing guidelines and recommendations that could help improve EHR usability and ultimately enhance patient safety and quality of care.[19] [20] [21] For a comprehensive usability evaluation, a multimethod approach is preferred.[22] [23] [24] Despite these recommendations, only a limited number of studies have assessed HIT usability using more than one methodological approach. A few examples of such multimethod studies are a dental EHR evaluation employing user testing along with observations, interviews, and Goals, Operators, Methods, and Selection (GOMS) modeling techniques[25]; a computerized provider order entry system assessment using two different sets of heuristics along with usability testing[26]; and a diabetes mHealth system evaluation employing a combination of user testing with semistructured interviews and questionnaires about patients' experiences using the system.[22] Furthermore, few research studies present usability comparisons from the viewpoints of people with diverse perspectives, e.g., expert users versus novice users,[27] physicians versus patients,[28] and users versus usability experts.[29]

One specific area needing attention is the design and functionality offered by EHR GUIs around clinical notes usage. Clinical notes present several usage challenges: they may be difficult to find, time consuming to enter, contain poorly formatted information that is difficult to read, incorporate erroneous or out-of-date information, or lack standardized content display within EHR systems.[30] [31] Despite these known usability problems, EHR clinical notes remain essential resources for clinicians, who use them to communicate, summarize, and synthesize patient care information for decision making. Physicians and other clinicians are challenged both when entering information into and when retrieving information from clinical notes, as current EHRs may not sufficiently support these tasks. To date, few studies have examined the usability of user interfaces pertaining to clinical notes. Among more recent examples, time-and-motion studies have reported that note documentation should be treated as synthesis rather than composition and that the documentation process would be best supported by incorporating search tools that facilitate note construction,[32] while eye-tracking studies of physicians' visual attention while reading electronic progress notes revealed that most time was spent slowly reading the “impression and plan” section, with minimal time spent on sections such as “medications,” “vital signs,” and “laboratory results,” even when these sections contained additional information.[33]

A comprehensive understanding of currently employed EHR systems, in terms of the functionality and design elements offered by their GUIs, is an essential initial step toward redesigning future, user-centered EHR systems.



Objective

This research study was conducted to answer the following questions: What design and functionality features pertaining to clinical notes usage are offered by the GUIs of two existing EHR systems, and how could these features influence EHR usability, as ascertained from usability evaluators' and users' viewpoints? The insights derived from user observations and comments would provide interface designers an initial platform to help generate future EHR clinical notes interfaces that are better aligned with users' needs, usability evaluators' suggestions, and usability guidelines.



Methods

General Description and Setting

An ethnographic field study,[34] [35] supplemented by a postobservation questionnaire, was performed to collect data about the daily activities of EHR users in their naturalistic settings. Participant observation was performed by immersing observers in physicians' routine daily activities and collecting rich data about their interaction with EHRs while they performed clinical documentation tasks. Participant physicians were briefed about the project goals, the methodology employed to collect data, and the traditional, concurrent think-aloud method (i.e., sharing their thoughts audibly about clinical notes usage while interacting in “real time” with the GUI of a particular EHR system). Observers also conversed informally with physicians to gain an understanding of any emerging issues. Field notes were documented on an electronic tablet using a time-stamped application (Timestamped Field Notes Application 3.0).[36]

Internal Medicine resident physicians were observed interacting with one of two different EHR systems in the inpatient environment of two tertiary care centers (System-1, a commercial vendor system at Location-A, and System-2, an open source system at Location-B). Because the residents who participated in this study spent most of their time interacting with EHRs in workrooms, particularly for clinical notes usage-related tasks, the majority of observations were performed in physician workrooms. Each resident was observed on different days of the week (4–5 days) and during various sections of the day (e.g., prerounding, rounding, and postrounding). In general, Location-A had a more diverse patient population needing treatment for more complex medical and surgical conditions, whereas at Location-B, patients were older, predominantly male, and presented mainly for treatment of chronic medical conditions and psychiatric diseases.



Study Sample

A total of 12 (6 per system) mid- and senior-level resident physicians, in their second through fourth years and enrolled in Internal Medicine Categorical or Internal Medicine Combined programs, were recruited for the study. Interns, medical students, advanced practice providers, attending physicians, and other clinicians (e.g., nurses, physician assistants) were excluded. The characteristics of the participants, summarized in [Table 1], were similar across the two sites. Study participants were given a $50 gift certificate as an incentive for their participation.

Table 1 Characteristics of resident participants

                          Location-A    Location-B
Mean age (y)              31 (±3.6)     29.5 (±1.6)
Mean years in training    2.8 (±0.4)    3 (±0.6)
Gender
  Female, n (%)           4 (66.6)      3 (50)
  Male, n (%)             2 (33.3)      3 (50)

Because of the complexities associated with evaluating EHR system usage, employing usability evaluators with dual domain knowledge (both usability experience and health care knowledge) was crucial. Two of the authors (R.F.R., a health informatician and physician, and G.M.H., a health informatician and clinical researcher with a Master of Public Health) were assigned this role.



Data Collection

Data regarding the usability and functionality of each EHR's clinical notes were collected at both sites by R.F.R. and G.M.H. As noted earlier, the majority of data collection was done in the residents' workrooms. To ensure representative sampling of different activities for each EHR system, each resident was observed on various days of the week (e.g., on-call and off-call days [referring to admitting and nonadmitting days, respectively], weekends, and inpatient sections of clinic days) for a total of 4 to 5 days. Observation times fell approximately between 7:00 a.m. and 6:00 p.m., with each resident individually observed for 2.0 to 2.5 hours/day and during various sections of the day (e.g., prerounding, rounding, and postrounding). On average, each participant was observed for 9 hours (±2.5) at Location-A and 9.6 hours (±1.9) at Location-B, with a total of over 110 hours spent on observation. The total time included time spent on note documentation, order entry, chart review, and other tasks. Note documentation consumed an average of 20 to 30% of the total time, a proportion that aligns with findings from previous time-motion studies.[37]

Observation data were further supplemented by a postobservation questionnaire. Both closed- and open-ended questions were employed to collect residents' subjective responses from two standpoints: clinical notes entry and information-seeking tasks (sample questions from the questionnaire appear in the [Appendix]).

Appendix

Sample questions from the postobservation questionnaire

Q1. How much time on average do you think you spend entering a specific note type (e.g., H&P progress note, discharge summary)?
Q2. How do you work around templates of various note types (e.g., H&P progress note, discharge summary)?
Q3. What style do you prefer while entering a specific note type, i.e., chronological order of various sections of different notes (e.g., H&P progress note, discharge summary)?
Q4. What style do you prefer while reading a specific note type, i.e., chronological order of various sections of different notes (e.g., H&P progress note, discharge summary)?
Q5. What are the major limitations of the EHR's GUI in terms of note entry/note retrieval tasks?
Q6. How do you think these limitations can be rectified?
Q7. What are the major strengths of the EHR's GUI in terms of note entry/retrieval tasks?
Q8. How do you think these strengths can be further improved?



Data Analysis

An Ethnographic Content Analysis (ECA)[38] of qualitative data was performed on the observational notes documented as “field notes,” employing an integrated qualitative–quantitative research design.[39] These field notes consisted of information on clinical documentation tasks (e.g., clinical notes entry or related information-seeking tasks) noted while physicians were interacting with EHRs and were a combination of direct observations by the observers and comments volunteered by the resident physicians. These raw data were later dissected into groups of words or phrases (the meaning unit, referred to as “usability references” in this study). Each usability reference pertaining to the study “theme,” i.e., functionality and design elements around clinical documentation tasks, was coded in terms of the EHR system it referred to (e.g., System-1 or System-2) and its perceived impact on usability (positive [P], negative [N], or equivocal [E]). Usability was coded as positive, negative, or equivocal if the usability evaluators considered the EHR features to be desirable, undesirable, or ambivalent, respectively. NVivo (version 10.1.3),[40] a qualitative data analysis tool, was used in this study.

A screen shot of the field observers' data collection tool with nodes is shown in [Fig. 1]. The coding schema pertaining to functionality and design elements around clinical documentation tasks (i.e., clinical notes entry or related information-seeking, [Fig. 2]) was generated through an iterative process of brainstorming and refinement among research team members. The team included health informaticians (R.F.R., G.M.H., T.J.A., G.B.M., J.L.M.), physicians (R.F.R., T.J.A., G.B.M.), and usability evaluators (R.F.R., G.M.H., J.L.M., K.A.H.), with the latter two members having additional industrial engineering and experimental cognitive psychology expertise, respectively. Conflicts were iteratively addressed and resolved.

Fig. 1 Screen shot of the data collection tool and nodes generated.
Fig. 2 Visual depiction of coding scheme used in content analysis.

Two team members (primarily R.F.R. and G.M.H.) coded the notes through repetitive and comprehensive scanning of the field notes and brainstorming among other coauthors, ensuring that the final coding schema represented the majority of the source domain and not merely a small nonrepresentative slice. Intercoder agreement was 98%, with a kappa value of 0.8. Any remaining coding discrepancies were discussed and resolved through a consensus process.
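For reference, the agreement statistics reported above can be reproduced from two coders' label sequences. The sketch below implements Cohen's kappa over hypothetical positive/negative/equivocal (P/N/E) codes; the labels are illustrative, not the study's actual coded data.

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa for two coders' nominal labels (e.g., P/N/E usability codes)."""
    assert len(coder1) == len(coder2) and coder1
    n = len(coder1)
    # Observed agreement: fraction of items both coders labeled identically.
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Expected chance agreement from each coder's marginal label frequencies.
    c1, c2 = Counter(coder1), Counter(coder2)
    expected = sum(c1[label] * c2[label] for label in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: the coders agree on 3 of 4 references.
kappa = cohens_kappa(["P", "P", "N", "N"], ["P", "N", "N", "N"])  # 0.5
```

Kappa discounts the raw percentage agreement (98% here) by the agreement expected from chance alone, which is why the two figures differ.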

Data were analyzed and presented at three hierarchical levels: (1) at the higher level of subthemes, (2) at the more granular level of categories within those subthemes, and (3) at the deepest level of codes within those categories. We analyzed the usability reference data for the various usability features from two standpoints: (1) the frequency (percentage) of references evaluated as positive, negative, or equivocal under each subtheme, category, or code and (2) the impact of each feature on usability, measured by gauging the references denoting that feature. The references were gauged by assigning weights against a severity impact scale by two evaluators (coauthors), R.F.R. and T.J.A., both physicians and health informaticians with expertise in EHR usability evaluation. The 7-point severity rating scale employed was based on three variables: (1) the percentage frequency of total references, (2) the perceived impact on user interaction/performance, i.e., a subjective assessment of the usability feature's effect on user interaction/performance, and (3) the usage (sporadic or recurrent) of that particular usability feature. The score for each feature was averaged between the two evaluators and categorized into three levels: high impact (>5), medium impact (3–5), and low impact (<3). The results were further validated by analyzing the physicians' responses to the postobservation questionnaires.



Results

In total, there were more usability references specific to clinical notes use for System-1 (347) than for System-2 (132). Both systems (1 and 2) had a greater number of references under note entry (276 and 103, respectively) than under information-seeking tasks (71 and 29). Usability references were dissected at three levels of granularity, i.e., subthemes, categories, and codes ([Figs. 3] [4] [5]), cataloged as positive, negative, or equivocal, and reported as percentage frequencies.
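As a concrete sketch of this frequency analysis, coded references can be tallied into percentage frequencies per system and subtheme. The records below are hypothetical stand-ins, not the study's coded corpus.

```python
from collections import Counter

# Hypothetical coded usability references: (system, subtheme, polarity),
# where polarity is "P" (positive), "N" (negative), or "E" (equivocal).
references = [
    ("System-1", "note entry", "P"),
    ("System-1", "note entry", "P"),
    ("System-1", "note entry", "N"),
    ("System-2", "information-seeking", "E"),
    ("System-2", "information-seeking", "N"),
]

def polarity_percentages(refs, system, subtheme):
    """Percentage frequency of P/N/E references for one system and subtheme."""
    tally = Counter(p for s, t, p in refs if s == system and t == subtheme)
    total = sum(tally.values())
    return {p: 100 * tally[p] / total for p in ("P", "N", "E")}
```

Applying `polarity_percentages` to each (system, subtheme) pair yields the positive/negative/equivocal percentages of the kind reported in this section.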

Zoom Image
Fig. 3 Frequency analysis of usability references at the level of subthemes. SY, system.
Zoom Image
Fig. 4 Frequency analysis of usability references at the level of categories. EC, error control; NS, navigation and search ability; SY, system; UF, user control and freedom; WA, workflow accelerators.
Zoom Image
Fig. 5 Frequency analysis of usability references at the level of codes. AP, auto population; CM, communication; CP, copy pasting; DT, dictation and transcription; ED, editability; EP, error prevention; FO, formatting; NN, navigating for notes; NT, navigating for templates; OH, online help; OT, others; SC, spell check; SO, screen options; SY, system.

Systems Comparison at the Level of Subthemes

Analysis at the level of subthemes ([Fig. 3]) revealed that System-1, as compared with System-2, excelled in note entry features, having a higher percentage of positive usability references (P = 26% vs. 12%) and substantially fewer negative references (N = 12% vs. 34%). Inconclusive results were obtained for information-seeking tasks, as System-1, in comparison to System-2, had lower percentages of both positive (P = 14% vs. 28%) and negative references (N = 34% vs. 41%).



Systems Comparison at the Level of Categories

More granular analysis at the level of categories ([Fig. 4]) showed similar results, i.e., System-1 surpassed System-2 in note entry, with a higher percentage of positive and a lower percentage of negative usability references, specifically with respect to error control, user control and freedom, and workflow accelerators. Inconclusive results were obtained for information-seeking tasks related to navigation and search ability: System-1, as compared with System-2, showed lower percentages of both positive and negative usability references.



Systems Comparison at the Level of Codes

Analysis at the deepest level of codes ([Fig. 5]) further detailed the note entry features with a higher percentage of positive and a lower percentage of negative usability references under System-1 as compared with System-2, for example, error prevention and spell check, edit ability and formatting, dictation and transcription, screen options, autopopulation, and communication, with the exception of copy pasting. With respect to information-seeking tasks related to navigation and search ability, the percentages of positive and negative references under System-1 versus System-2 were inconclusive under all four codes, i.e., navigating for notes, navigating for templates, online help, and others. Overall, at all three levels, a greater percentage of references were coded as equivocal for System-1 than for System-2 under both note entry and information-seeking tasks, owing to the coders' uncertainty surrounding particular usability items, which warrants further study.



Severity Impact Rating

The data on usability references denoting a specific usability feature were further analyzed by assigning an overall severity score. The references were gauged by two coauthors (R.F.R. and T.J.A., both physicians and health informaticians), who scored each feature against the severity impact scale described earlier, based on the percentage frequency of total references, the feature's perceived impact on user interaction/performance (positive, negative, or equivocal), and its usage (sporadic or recurrent); the scale is presented as the mean of their individual ratings. The scores fell into three groups: high impact (>5), e.g., navigating for notes (score = 7), autopopulation (score = 6), screen options (score = 5.5), and others (score = 5.5); medium impact (3–5), e.g., communication (score = 5), error prevention (score = 4.5), copy pasting (score = 4.5), edit ability (score = 4), and dictation and transcription (score = 3.5); and low impact (<3), e.g., spell check (score = 2.5), formatting (score = 2.5), navigating for templates (score = 2.5), and online help (score = 2.5; [Fig. 6]). The results were further validated by consolidating residents' quotes collected during observation and from the questionnaire administered to physicians ([Tables 2] and [3]).
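The categorization of mean severity scores follows directly from the stated thresholds. A minimal sketch, using the mean scores reported in this section (the binning function is illustrative, not the authors' instrument):

```python
# Mean severity scores (averaged across the two evaluators) as reported above.
severity_scores = {
    "navigating for notes": 7.0, "autopopulation": 6.0, "screen options": 5.5,
    "others": 5.5, "communication": 5.0, "error prevention": 4.5,
    "copy pasting": 4.5, "edit ability": 4.0, "dictation and transcription": 3.5,
    "spell check": 2.5, "formatting": 2.5, "navigating for templates": 2.5,
    "online help": 2.5,
}

def impact_level(score):
    """Bin a mean severity score: high (>5), medium (3-5), low (<3)."""
    if score > 5:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

impact = {feature: impact_level(s) for feature, s in severity_scores.items()}
```

Note the strict upper boundary: communication (score = 5) falls into the medium-impact band, while screen options (score = 5.5) is high impact, matching the groupings reported above.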

Table 2

Representative sample of quotes from users

Users-System-1

Negative:
–“Screens with too many options/tabs that are not needed or used.”
–“Too many ways to perform the same task adds confusion.”
–“Autopopulation introduces tons of junk and nobody wants to look at this crap.”
–“The autopopulated data are not accurate always.”
–“It can be overwhelming at times, because there are so many options to do the same thing.”
–“Filters are cumbersome.”

Equivocal:
–“Probably, we spend similar amount of time interacting with EHRs at both locations, i.e., System-1 at Location-A (more complicated patients, but also more efficient system) and System-2 at Location-B (less complicated and less efficient system).”
–“Notes comes last, patient care comes first.”

Positive:
–“Summary tab is very useful. I can customize it the way I want.”
–“It has much more reliability/support to have notes/data from outside uploaded in the charts. I know that if something was given to the records department, it will be there.”
–“Best thing in it is the short-cut templated phrases!”
–“I can create a well-organized note with different fonts/colors stressing importance.”
–“Note entry is way better!”

Users-System-2

Negative:
–“To multitask is one of its biggest limitations, and the ability to open multiple patient charts (in one instance) would greatly simplify this.”
–“I feel that the biggest challenge is multitasking, as we can only work on one patient at a time without being able to look at multiple data (split screen), very frustrating when entering notes on a complex patient.”
–“It is quite slow at retrieving large number of notes, which is necessary for complex patients to be able to look further back into their history.”

Equivocal:
–“I find it challenging to retrieve records from outside location-B. The ability to find records from nationwide is certainly a strength, although it can be rather challenging to actually find what you're looking for.”
–“Notes documentation is the least important chores for the day.”

Positive:
–“I like it's black and white, simplistic interface.”
–“Retrieving notes is awesome, the reason why we love this system.”
–“Retrieving notes function is pretty good.”
–“Consistency in finding documents is one of the strengths of System-2.”
–“I like its simplicity, since there is only one way to find most data points you would like to see.”

Table 3

Innovative ideas from users

Users-System-1
–“If the physician entered a term BNP in the notes, it should pull up the most recent BNP lab results of that particular patient.”
–“Other encounters and clinician notes (telephone encounters/nurses' notes), crowd provider notes. There should be separate tabs for these.”
–“Limited search function could be improved if it had a Google-type search engine for notes, labs, orders.”

Users-System-2
–“I think System-2 would most likely benefit from the ability to have multiple charts open at the same time and from use of sidebar similar to System-1.”
–“If we could better understand/billing requirements for note entry, we can have more structured/standardized notes.”
–“In order to address the variability issue in notes structure, we should have standard templates.”
–“What if the current problems get blown in and then you can actually click on the problem, which takes you to the relevant previous notes.”

Abbreviation: BNP, brain natriuretic peptide.


Fig. 6 Frequency comparison of total usability references under Systems 1 and 2. AP, autopopulation; CM, communication; CP, copy pasting; DT, dictation and transcription; ED, editability; EP, error prevention; FO, formatting; NN, navigating for notes; NT, navigating for templates; OH, online help; OT, others; SC, spell check; SO, screen options.


Analysis

Usability evaluation was performed on two widely implemented EHR GUIs around the critical tasks of clinical notes usage, using data collected through ethnographic observation along with postobservation questionnaires. Each EHR system was appraised in terms of the percentages of its usability references perceived and cataloged by the usability evaluators as positive, negative, or equivocal. Results were later validated by analyzing physicians' responses.

EHR Usability Pertaining to Note Entry

Under note entry, System-1 had considerably more positive and comparatively less negative feedback. The most desirable note entry-related features were autopopulation and screen options, classified as high impact. Autopopulation functionality, executed through smart phrases, served as a catalyst in the note writing process and was thought to improve user efficiency during task performance. Conversely, it was also considered a source of inaccurate, repetitive, dated, and redundant information leading to lengthy notes, as quoted by various users ([Table 2]). Similarly, the ability to have various screen display options (e.g., split panes, floating screens) was also considered a strength, because these features allowed information-seeking tasks to run concurrently with note entry-related tasks. On the contrary, the inability to multitask was considered one of the least favorable aspects of the system, despite the fact that multitasking could be associated with increased chances of errors. For instance, users were not allowed to open more than one patient's chart at a time (an error prevention feature) or to view previous notes/data within the same window of the same patient's chart to inform the content of the current note, thus hindering timely access to relevant patient information.

Ease of communication between other clinicians and EHRs with regard to interoperability, error prevention through screen alerts, the ability to copy paste with easy edit options, and proficient dictation and transcription services were a few of the other medium-impact usability strengths pertaining to the note entry task that were repeatedly praised by the respective systems' users. The formatting and spell check features, despite having a low impact on usability, were also frequently praised because they gave users the freedom to customize their notes with different font styles/sizes/colors.



EHR Usability Pertaining to Information-Seeking Tasks

Under information-seeking tasks, System-2 had a greater percentage of both positive and negative observations, and ease of navigating for notes was its most favorable feature, having the greatest impact on usability. The likely explanation for the positive feedback was the simplistic GUI design with an intuitive default notes listing display (e.g., notes from previous encounters were cataloged according to specialty, with better consistency and ease of finding desired notes). This was in contrast to the frustration users expressed with the extensive list of notes containing several options to perform the same tasks (overfunctionality) and the perception that note filters, offered as a feature, were cumbersome to use. Hence, a sense of information overload negatively affected intuitiveness and ease of use. Similarly, “others,” corresponding to the ease of locating ancillary data (e.g., laboratories, imaging), was considered another important aspect of the GUI that could substantially impact its usability. Having ancillary data accessible through various screens rather than through a sole homepage, along with a search box to find specific information, were among the favorable features that could enhance EHR usability pertaining to clinical notes usage. In addition, navigating for templates and online help were also considered desirable features despite their low impact on usability.



Equivocal Results

Under both subthemes for the two systems, i.e., note entry and information-seeking tasks, a considerable portion of the data (more under System-1 than System-2) was coded into the equivocal category because of its uncertain effect on usability. These items would require more in-depth, individual study of each feature to understand their influence on usability. We expect, however, that such analysis could yield interesting additional findings about these systems.



Discussion

Suboptimal EHR usability, resulting from poor incorporation of a UCD approach in the Systems Development Life Cycle (SDLC) process, is one of the primary factors leading to ineffective and inefficient task performance (e.g., poor quality or missing data, increased error rates, challenges with care coordination, compromised patient safety), dissatisfaction among users (providers), and ultimately poor health care delivery.

This research study explored two existing EHR systems in terms of their design and functionality features pertaining to critical tasks centered on clinical notes usage. Data were collected using a multimethod approach and analyzed from both users' and usability evaluators' perspectives, employing both qualitative and quantitative methods.

We discovered that the GUI of each EHR system being evaluated offered a varied set of design and functionality features pertaining to clinical notes usage. Each of these features could potentially influence EHR usability positively, negatively, or equivocally, as ascertained from the usability evaluators' and users' viewpoints, and each could also be assigned a "usability impact score" measured on a 7-point severity rating scale.

Systems Comparison and Its Implications

We discovered that, overall, System-1 surpassed System-2 in clinical notes usage specific to note entry-related tasks, while both systems performed equally well on information-seeking tasks associated with clinical notes usage. Usability features scored as "high impact" were autopopulation, screen options, navigating for notes, and others; as "medium impact" were communication, error prevention, copy pasting, edit ability, and dictation and transcription; and as "low impact" were spell check, navigating for templates, and online help.

An in-depth understanding of the desirable and undesirable usability features offered by existing EHR GUIs could serve as an initial platform to help redesign future EHR interfaces. More efficient and effective task performance, associated with greater user satisfaction, could ultimately result in enhanced health care delivery and better health outcomes.



Comments and Innovative Ideas by Users

We also solicited several suggestions from users of both systems that could help in designing a new and improved GUI with better overall usability. One user recommended incorporating advanced technologies, such as login via fingerprint or iris scan, to enhance EHR usability, whereas a "Google"-like search engine was a common suggestion received from several users. According to some users, standardizing the structure of the templates used for different note types and establishing a structured curriculum for medical students/residents about the coding/billing requirements of note writing could result in more standardized note entry, potentially decreasing note format and content variability. According to one user, linking the name of a laboratory test with its most recently reported result would enhance user efficiency. With respect to improving usability for information-seeking tasks associated with clinical notes usage, users offered several suggestions, such as reducing the crowding of notes by incorporating separate locations/tabs based on encounter type and author, and enhancing user efficiency by entering current problems automatically and retrieving the relevant data pertinent to those problems (e.g., notes, laboratories, imaging results) with a click.



Study Limitations

Several limitations are associated with this study, including a small sample size and restriction to users from one specialty. All users were second- to fourth-year residents working in an academic setting, with similar ages, training experience, and technology skills. Also, the field studies were limited to the inpatient setting; EHR use in other patient care areas was not studied. Because of limited resources, we employed two authors as evaluators rather than recruiting evaluators from outside the study team. Our findings are limited by a lack of robust statistical analysis, owing to our small sample size and the qualitative nature of our data. Also, we did not employ a validated instrument for measuring the severity impact rating. In addition, there are potential biases linked with qualitative data collection and analysis methods, which could introduce variability into how the results were presented.



Future Work

In the future, comparative analyses of the usability features embedded in various other competing EHR systems could be performed by employing different usability evaluation methods (e.g., heuristic evaluation, cognitive walkthrough, formal usability testing). To enhance the generalizability of our findings, EHR usability could also be evaluated with larger and more varied sets of clinicians (e.g., attending physicians, specialists, nurses) and usability evaluators, and in diverse settings (e.g., ambulatory, urgent care, emergency department). Time-motion studies could also be performed to gauge the efficiency of performing a particular task and to report more precise time-on-task data. In addition, further studies are warranted to understand the observed discrepancies between user and usability evaluator feedback about the impact of various features on usability.



Conclusion

This study helps to illuminate the strengths and weaknesses of the varied sets of usability features related to clinical notes usage offered by two widely implemented EHRs. By incorporating the desired usability elements and eliminating the undesired ones in the future EHR design process, we could generate an ideal system that is better aligned with users' needs and usability guidelines.



Clinical Relevance Statement

Insufficient EHR usability, resulting from the lack of a UCD approach, is a leading cause of the gap between the current state of EHRs and their potential usefulness. This study provides an in-depth analysis of the usability strengths and weaknesses of two widely implemented EHR GUIs with respect to critical tasks around clinical notes usage, through analysis of data collected from real users observed in their actual work environment. The knowledge gained could serve as a guide in designing future EHR interfaces that are better aligned with a user-centered approach, ultimately resulting in improved end-user clinical notes usage.



Multiple Choice Question

In the system development life cycle (SDLC) process, which of the following personnel are often neglected, resulting in suboptimal system usability?

  • Software developers

  • Programmers

  • Users

  • Usability experts

Correct Answer: The correct answer is C, users. Despite the reported benefits ensuing from the meaningful use of Electronic Health Record (EHR) systems, there exists a substantial gap between the current state of their use and their perceived potential. One of the fundamental reasons for this discrepancy is the lack of incorporation of a "User-Centered Design" (UCD) approach during the EHR SDLC process.



Conflict of Interest

None.

Acknowledgments

The authors would like to thank the following: Elizabeth Lindemann for proofreading the manuscript; chief residents Drs. Jessica Voight and Kate Gillen for helping with subject recruitment; all residents for their participation and valuable feedback; and both hospitals for allowing us to collect data at their facilities.

Note

The content is solely the responsibility of the authors and does not represent the official views of the National Science Foundation or Agency for Healthcare Research and Quality.


Protection of Human and Animal Subjects

Residents interacting with two different EHR systems in their respective hospitals were studied following approvals from, and in compliance with the standards of, the Institutional Review Board (IRB no. 1308E41121) and the Research and Development (R&D) committee (no. R140720X).


Funding

This work was supported by National Science Foundation Award no. CMMI-1150057 (J.L.M.) and the Agency for Healthcare Research and Quality Award no. R01HS022085 (G.B.M.).


  • References

  • 1 Buntin MB, Burke MF, Hoaglin MC, Blumenthal D. The benefits of health information technology: a review of the recent literature shows predominantly positive results. Health Aff (Millwood) 2011; 30 (03) 464-471
  • 2 Zhang J, Walji MF. TURF: toward a unified framework of EHR usability. J Biomed Inform 2011; 44 (06) 1056-1067
  • 3 Holden RJ. Physicians' beliefs about using EMR and CPOE: in pursuit of a contextualized understanding of health IT use behavior. Int J Med Inform 2010; 79 (02) 71-80
  • 4 Kossman SP, Scheidenhelm SL. Nurses' perceptions of the impact of electronic health records on work and patient outcomes. Comput Inform Nurs 2008; 26 (02) 69-77
  • 5 Rodriguez NJ, Murillo V, Borges JA, Ortiz J, Sands DZ. A usability study of physicians interaction with a paper-based patient record system and a graphical-based electronic patient record system. Proc AMIA Symp 2002; 667-671
  • 6 Barnum CM. Usability Testing Essentials: Ready, Set…Test! Burlington, MA: Morgan Kaufmann; 2010
  • 7 ISO. ISO 9241-11:1998 Ergonomic requirements for office work with visual display terminals (VDTs)–Part 11: guidance on usability. ISO 1998; 45: 22
  • 8 Nielsen J. Usability Engineering. Boston: Academic Press; 1993
  • 9 Nielsen J. Usability 101: Introduction to Usability 2003. Available at: http://www.nngroup.com/articles/usability-101-introduction-to-usability/
  • 10 Baxter K. Understanding Your Users: A Practical Guide to User Research Methods. 2nd ed. Waltham, MA: Morgan Kaufmann; 2015
  • 11 Jones S, Donelle L. Assessment of electronic health record usability with undergraduate nursing students. Int J Nurs Educ Scholarsh 2011; 8 (01) 24
  • 12 Friedberg MW, Chen PG, Van Busum KR. , et al. Factors affecting physician professional satisfaction and their implications for patient care, health systems, and health policy. Rand Health Q 2014; 3 (04) 1
  • 13 Carayon P, Cartmill R, Blosky MA. , et al. ICU nurses' acceptance of electronic health records. J Am Med Inform Assoc 2011; 18 (06) 812-819
  • 14 Ratwani RM, Fairbanks RJ, Hettinger AZ, Benda NC. Electronic health record usability: analysis of the user-centered design processes of eleven electronic health record vendors. J Am Med Inform Assoc 2015; 22 (06) 1179-1182
  • 15 Beynon-Davies P. Ethnography and information systems development: ethnography of, for and within is development. Inf Softw Technol 1997; 39 (08) 531-540
  • 16 Saleem JJ, Plew WR, Speir RC. , et al. Understanding barriers and facilitators to the use of clinical information systems for intensive care units and anesthesia record keeping: a rapid ethnography. Int J Med Inform 2015; 84 (07) 500-511
  • 17 McMullen CK, Ash JS, Sittig DF. , et al. Rapid assessment of clinical information systems in the healthcare setting: an efficient method for time-pressed evaluation. Methods Inf Med 2011; 50 (04) 299-307
  • 18 Kopp BJ, Erstad BL, Allen ME, Theodorou AA, Priestley G. Medication errors and adverse drug events in an intensive care unit: direct observation approach for detection. Crit Care Med 2006; 34 (02) 415-425
  • 19 Middleton B, Bloomrosen M, Dente MA. , et al; American Medical Informatics Association. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA. J Am Med Inform Assoc 2013; 20 (e1): e2-e8
  • 20 Kaipio J, Lääveri T, Hyppönen H. , et al. Usability problems do not heal by themselves: National survey on physicians' experiences with EHRs in Finland. Int J Med Inform 2017; 97: 266-281
  • 21 Zahabi M, Kaber DB, Swangnetr M. Usability and safety in electronic medical records interface design: a review of recent literature and guideline formulation. Hum Factors 2015; 57 (05) 805-834
  • 22 Georgsson M, Staggers N. An evaluation of patients' experienced usability of a diabetes mHealth system using a multi-method approach. J Biomed Inform 2016; 59: 115-129
  • 23 Khajouei R, Hasman A, Jaspers MW. Determination of the effectiveness of two methods for usability evaluation using a CPOE medication ordering system. Int J Med Inform 2011; 80 (05) 341-350
  • 24 Walji MF, Kalenderian E, Piotrowski M. , et al. Are three methods better than one? A comparative assessment of usability evaluation methods in an EHR. Int J Med Inform 2014; 83 (05) 361-367
  • 25 Walji MF, Kalenderian E, Tran D. , et al. Detection and characterization of usability problems in structured data entry interfaces in dentistry. Int J Med Inform 2013; 82 (02) 128-138
  • 26 Devine EB, Lee CJ, Overby CL. , et al. Usability evaluation of pharmacogenomics clinical decision support aids and clinical knowledge resources in a computerized provider order entry system: a mixed methods approach. Int J Med Inform 2014; 83 (07) 473-483
  • 27 Clarke MA, Belden JL, Kim MS. Determining differences in user performance between expert and novice primary care doctors when using an electronic health record (EHR). J Eval Clin Pract 2014; 20 (06) 1153-1161
  • 28 Ant Ozok A, Wu H, Garrido M, Pronovost PJ, Gurses AP. Usability and perceived usefulness of Personal Health Records for preventive health care: a case study focusing on patients' and primary care providers' perspectives. Appl Ergon 2014; 45 (03) 613-628
  • 29 Corrao NJ, Robinson AG, Swiernik MA, Naeim A. Importance of testing for usability when selecting and implementing an electronic health or medical record system. J Oncol Pract 2010; 6 (03) 120-124
  • 30 Rizvi RF, Harder KA, Hultman GM. , et al. A comparative observational study of inpatient clinical note-entry and reading/retrieval styles adopted by physicians. Int J Med Inform 2016; 90: 1-11
  • 31 Rosenbloom ST, Denny JC, Xu H, Lorenzi N, Stead WW, Johnson KB. Data from clinical notes: a perspective on the tension between structure and flexible documentation. J Am Med Inform Assoc 2011; 18 (02) 181-186
  • 32 Mamykina L, Vawdrey DK, Stetson PD, Zheng K, Hripcsak G. Clinical documentation: composition or synthesis?. J Am Med Inform Assoc 2012; 19 (06) 1025-1031
  • 33 Brown PJ, Marquard JL, Amster B. , et al. What do physicians read (and ignore) in electronic progress notes?. Appl Clin Inform 2014; 5 (02) 430-444
  • 34 Reeves S, Kuper A, Hodges BD. Qualitative research methodologies: ethnography. BMJ 2008; 337: a1020
  • 35 Berg BL, Lune H. Qualitative Research Methods for the Social Sciences. Boston, MA: Pearson; 2004
  • 36 Timestamped field notes. 2015 ; Available at: http://www.neukadye.com/mobile-applications/timestamped-field-notes/
  • 37 Chen L, Guo U, Illipparambil LC. , et al. Racing against the clock: internal medicine residents' time spent on electronic health records. J Grad Med Educ 2016; 8 (01) 39-44
  • 38 Altheide DL. Reflections: ethnographic content analysis. Qual Sociol 1987; 10 (01) 65-77
  • 39 Srnka KJ, Koeszegi ST. From words to numbers: how to transform qualitative data into meaningful quantitative results. Schmalenbach Business Review 2007; 59 (01) 29-57
  • 40 NVivo-QSR International. 2015 ; Available at: http://www.qsrinternational.com/products_nvivo.aspx

Fig. 1 Screen shot of the data collection tool and nodes generated.
Fig. 2 Visual depiction of coding scheme used in content analysis.
Fig. 3 Frequency analysis of usability references at the level of subthemes. SY, system.
Fig. 4 Frequency analysis of usability references at the level of categories. EC, error control; NS, navigation and search ability; SY, system; UF, user control and freedom; WA, workflow accelerators.
Fig. 5 Frequency analysis of usability references at the level of codes. AP, auto population; CM, communication; CP, copy pasting; DT, dictation and transcription; ED, editability; EP, error prevention; FO, formatting; NN, navigating for notes; NT, navigating for templates; OH, online help; OT, others; SC, spell check; SO, screen options; SY, system.
Fig. 6 Frequency comparison of total usability references under System-1 and 2. AP, autopopulation; CM, communication; CP, copy pasting; DT, dictation and transcription; ED, editability; EP, error prevention; FO, formatting; NN, navigating for notes; NT, navigating for templates; OH, help; OT, others; SC, spell check; SO, screen options.