DOI: 10.1055/a-2006-4936
Usability Testing of an Interoperable Computerized Clinical Decision Support Tool for Fall Risk Management in Primary Care
- Abstract
- Background and Significance
- Objective
- Methods
- Results
- Discussion
- Limitations
- Conclusion
- Clinical Relevance Statement
- Multiple-Choice Questions
- References
Abstract
Background Falls are a widespread and persistent problem for community-dwelling older adults. Use of fall prevention guidelines in the primary care setting has been suboptimal. Interoperable computerized clinical decision support systems have the potential to increase engagement with fall risk management at scale. To support fall risk management across organizations, our team developed the ASPIRE tool for use in differing primary care clinics using interoperable standards.
Objectives Usability testing of ASPIRE was conducted to measure ease of access, overall usability, learnability, and acceptability prior to pilot testing.
Methods Participants were recruited using purposive sampling from two sites with different electronic health records and different clinical organizations. Formative testing rooted in user-centered design was followed by summative testing using a simulation approach. During summative testing, participants used ASPIRE across two clinical scenarios and were randomized to determine which scenario they saw first. The Single Ease Question and System Usability Scale were used, in addition to analysis of recorded sessions in NVivo.
Results All 14 participants rated the usability of ASPIRE as above average based on usability benchmarks for the System Usability Scale metric. Time on task decreased significantly between the first and second scenarios, indicating good learnability. However, acceptability data were more mixed, with some recommendations consistently accepted while others were adopted less frequently.
Conclusion This study described the usability testing of the ASPIRE system within two different organizations using different electronic health records. Overall, the system was rated well, and further pilot testing should be done to validate that these positive results translate into clinical practice. Due to its interoperable design, ASPIRE could be integrated into diverse organizations allowing a tailored implementation without the need to build a new system for each organization. This distinction makes ASPIRE well positioned to impact the challenge of falls at scale.
Background and Significance
Among community-dwelling older adults, falls are a leading cause of disability and independence loss.[1] They are widespread and persistent, impacting 29.5% of rural older adults and 27% of urban older adults.[2] The importance of preventing falls in the community has been well established in the literature. However, the role of computerized clinical decision support (CCDS) in the adoption of fall prevention guidelines in primary care practice represents a gap. CCDS is defined as technology that provides timely, patient-specific information to improve care quality, typically by invoking automation at the point of care.[3] [4] A recent randomized controlled trial found that CCDS systems that leverage decision support best practices (e.g., user-centered design) typically have higher adoption rates and affect clinical behavior more than off-the-shelf solutions.[5]
Despite use of fall risk screening tools, guideline adherence for fall prevention continues to be suboptimal. The 2017 Medicare Health Outcomes Survey found only 51.5% of those screened at risk for falls received any intervention.[6] [7] Providers have reported a lack of skills to address fall risk, suggesting that more robust CCDS that goes beyond screening would be useful.[8] We are aware of only one study published since 2012 that addresses fall prevention in primary care using CCDS; it consisted of a screening reminder with no recommended actions.[9] Common barriers to use of prevention guidelines include time pressure, competing clinical priorities, lack of agreement with or awareness of guidelines, and lack of training.[10] [11] [12] A primary care provider responsible for 2,500 patients would need 1,773 hours per year to provide all U.S. Preventive Services Task Force grade A and B recommendations, including fall prevention.[13] High workloads, including management of chronic conditions, can leave little time or cognitive bandwidth to address preventive services.[14]
CCDS tools have been shown to increase adherence with preventive services.[15] Many CCDS tools are developed for use in one location, limiting generalizability.[16] If they were interoperable, however, the potential for scalability would be greater. CCDS using data exchange standards (e.g., Fast Healthcare Interoperability Resources [FHIR]) can be integrated into any electronic health record (EHR) and can consolidate fragmented clinical information into a concise, comprehensive picture.[13]
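For illustration, the sketch below shows the kind of standards-based query a FHIR-enabled CCDS can issue against any conformant EHR to retrieve a patient's active medication orders. This is a minimal example under assumed conditions, not ASPIRE's implementation; the server URL and patient ID are hypothetical.

```python
import requests

# Hypothetical FHIR endpoint and patient; any FHIR-conformant EHR exposes
# the same RESTful resource interface, which is what enables portability.
FHIR_BASE = "https://ehr.example.org/fhir"
PATIENT_ID = "example-patient-1"

def get_active_medications(base_url: str, patient_id: str) -> list[str]:
    """Return display names of a patient's active medication orders."""
    response = requests.get(
        f"{base_url}/MedicationRequest",
        params={"patient": patient_id, "status": "active"},
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    response.raise_for_status()
    bundle = response.json()
    return [
        entry["resource"].get("medicationCodeableConcept", {}).get("text", "unknown")
        for entry in bundle.get("entry", [])
    ]

print(get_active_medications(FHIR_BASE, PATIENT_ID))
```

Because the query targets the FHIR specification rather than a vendor-specific API, the same client code can run against different EHRs, which is the property that makes a tool like ASPIRE portable across organizations.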
Before new user-facing CCDS can be implemented, it should be tested to ensure usability. The concept of usability is multidimensional and includes attributes such as efficiency, learnability, errors, and satisfaction.[17] Measures of usability evaluate a system's ability to allow users to complete intended tasks safely, effectively, efficiently, and enjoyably.[18] In addition to usability, acceptability, that is, users' willingness to engage with the system and accept its recommendations, may be even more important.[17] If users are unwilling or unable to engage with a system, it will not matter how easy it is to use. Usability testing can be done before or after system implementation and can use quantitative or qualitative approaches.[19] Preimplementation testing is typically done in a laboratory and often has quantitative components (e.g., time on task, error rates). However, usability evaluation through observation in real-world settings can provide richer data than laboratory-only studies. Simulation is one approach that combines the contextual information gained from observation with the quantitative data more typical of laboratory-based testing.[20] Simulation recreates the context within which the CCDS is intended to be used and primes participants to behave more like they would in practice compared with think-aloud protocols. Simulation also allows for more control over testing scenarios and limits potential confounding factors compared with postimplementation observation studies.
#
Objective
To support wider engagement with the fall prevention process, our team developed Advancing Fall ASsessment and Prevention PatIent-Centered Outcomes REsearch (ASPIRE), an interoperable CCDS. ASPIRE provides tailored fall prevention recommendations in the context of primary care visits for those screened at risk. Summative testing was conducted using simulation to measure ease of access, overall usability, learnability, and acceptability of ASPIRE prior to pilot testing; see [Table 1] for operational definitions. While the focus of this article is the summative evaluation, a brief description of system development and formative testing is included for context.
[Table 1: operational definitions of the summative testing measures. Abbreviation: SUS, System Usability Scale.]
Methods
System Development
ASPIRE is middleware that enables primary care providers across disparate EHR systems to launch the tool from within the EHR and develop actionable fall prevention plans with older adults who screened positive for fall risk. By providing tailored recommendations based on EHR data, ASPIRE helps providers engage with patients to determine a mutually agreed-upon plan. ASPIRE used FHIR standards wherever local EHRs supported them and fell back on EHR-specific resources where they did not. Because each EHR supports a different set of services, and for security reasons, each site ran a separate instance of the ASPIRE logic. ASPIRE was developed based on user-centered design principles, and information on user requirements is reported elsewhere.[21] ASPIRE's design is based on fall prevention evidence, user requirements from primary care staff and patients, and input from usability experts to ensure compliance with heuristics.[17] [22] These sources, combined with prior research experience, resulted in a design focused on three risk factors: (1) mobility/exercise, (2) fall risk increasing drugs (FRIDs), and (3) bone health. ASPIRE pulls patient information related to these risk factors from the EHR and displays it for the provider to validate with the patient. Based on selections in the first step, recommendations are made in the second step, resulting in a fall prevention plan that can be sent back to the EHR in the third step. Recommendations depend on patient data and may include referral to physical therapy, exercise handouts, medication-deprescribing handouts, and information on osteoporosis including bisphosphonates. Because fall prevention exercises have the strongest supporting evidence, they were provided to all patients and varied in difficulty.[23] All exercise handouts were based on Otago exercises and developed by a physical therapist specializing in fall prevention.[24]
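As a rough sketch of this three-step flow, the rule table below maps validated risk factors to candidate recommendations. The drug classes, handout names, and function signature are simplified stand-ins chosen for illustration; ASPIRE's actual logic is more extensive and is not reproduced here.

```python
# Illustrative rule table in the spirit of ASPIRE's logic (names hypothetical).
FRID_CLASS = {
    "diazepam": "benzodiazepine",
    "lorazepam": "benzodiazepine",
    "gabapentin": "gabapentinoid",
    "furosemide": "loop diuretic",
}

DEPRESCRIBING_HANDOUTS = {
    "benzodiazepine": "benzodiazepine deprescribing handout and tapering calendar",
    "gabapentinoid": "gabapentin deprescribing handout",
    # Loop diuretics intentionally map to no handout; see the Discussion section.
}

def recommend(selected_frids, has_bone_health_risk, exercise_level):
    """Turn risk-factor selections (step 1) into a recommendation list (step 2)."""
    # Exercise is recommended for everyone; only the difficulty level varies.
    recs = [f"fall prevention exercise handout (level: {exercise_level})"]
    for drug in selected_frids:
        handout = DEPRESCRIBING_HANDOUTS.get(FRID_CLASS.get(drug.lower(), ""))
        if handout:
            recs.append(handout)  # recommendations are grouped by drug class
    if has_bone_health_risk:
        recs.append("bone health information, including bisphosphonates")
    return recs

print(recommend(["Diazepam", "Furosemide"], True, "moderate"))
```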
Formative testing, or iterative design feedback, was conducted using a think-aloud approach and was grounded in principles of user-centered design.[25] Formative testing is a small-sample qualitative approach to find and fix usability problems.[26] The first three sessions utilized a static prototype. The following 14 sessions were conducted with a clickable prototype in Figma, an interface design and testing program. Following completion of formative testing, the prototype was revised and integrated into the EHR at two sites: one urban site using Epic and one rural site using Athena Practice.
Approach
After integration into the respective EHRs, but before summative testing, the research team tested the system using 12 different test patients developed specifically for this project. Summative testing evaluates system usability and typically reports metrics including time on task, errors, and user satisfaction.[26] Time on task and errors are part of the learnability concept, while user satisfaction is linked to overall usability.
Setting and Recruitment
Participants were recruited using purposive sampling from two sites with differing EHR systems, enabling us to account for system-specific issues and providing the environment necessary to develop and test an interoperable solution. The first was a large urban health care system serving the Boston area. The second was a federally designated rural health clinic associated with a nearby academic medical center in north-central Florida. Staff were eligible to participate if they were primary care providers whose patient population included older adults. Sample estimation for usability studies balances cost and time against finding the most errors possible.[26] This study planned for 20 summative testing participants, which would uncover an estimated 95% of errors.[27] However, due to challenges in recruiting, the final sample size was 14 (10 urban, 4 rural), which should reveal approximately 90% of errors.[27]
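For context on these error-discovery estimates, usability sample sizes are often reasoned about with the classic problem-discovery model, in which each participant independently reveals a given problem with probability p. The sketch below uses that model with the commonly cited p = 0.31; note that the 90% and 95% figures in this study come from Faulkner's empirical data[27] rather than from this formula, so the model is shown only as an approximation.

```python
# Classic problem-discovery model: expected share of usability problems
# found by n participants, each revealing a problem with probability p.
def share_found(n: int, p: float = 0.31) -> float:
    return 1 - (1 - p) ** n

for n in (5, 14, 20):
    print(f"n = {n}: {share_found(n):.1%} of problems expected to be found")
```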
Procedure
Summative testing was conducted via secure video conferencing with audio and video recordings, similar to a virtual visit, and involved a participant, a facilitator, and a patient-actor. The facilitator and patient-actor were members of the research team. At the beginning of the session, the facilitator provided a brief introduction to ASPIRE and gave the participant remote control of the screen. Participants were advised that questions about the tool would be answered after the session and were asked to complete tasks to the best of their abilities by engaging with the patient-actor. The use of the patient-actor was intended to simulate near real-world use.

Each session included two scenarios, and participants were randomized to determine which they would see first. Each scenario included information about age, gender, chronic diseases, general activity levels, assistive devices, medications, and history of osteoporosis or osteopenia. Scenarios covered a variety of clinical situations including differences in ability to access physical therapy, differences in mobility, osteoporosis, osteopenia, and a variety of FRIDs. During both scenarios the patient-actor used predetermined personas that tried to anticipate participant questions. If an unanticipated question was asked, the patient-actor answered "I don't know" for consistency.

Each participant used ASPIRE across four steps: (1) launch and landing, (2) risk factors, (3) recommendations, and (4) document and print. The facilitator only gave hints related to system use if participants were unable to complete a step or if an error would have prevented completion of the subsequent step. During launch and landing, the user was expected to navigate to a button integrated into their EHR and review the landing page (see [Fig. 1]). During the second step, the provider was presented with risk factors identified by ASPIRE from the EHR ([Fig. 2]), which they were expected to validate with the patient. ASPIRE preselected mobility and bone health risk factors based on EHR data and pulled any actively prescribed FRIDs. Based on feedback from providers during formative testing, FRIDs were not preselected because most providers reported wanting to change only one medication at a time. During the third step, the provider was presented with recommendations, including talking points, based on previous selections (see [Fig. 3]). Recommendations were grouped by risk factor, with exercise first, FRIDs second, and bone health third. During this step, the provider was presented a recommended exercise level but could choose a different level based on clinical judgment. They could also preview handouts and deselect any items they did not want to use. When the participant hovered over the different exercise levels, a description of the intended recipients appeared (see [Fig. 4]). In the medication section, recommendations were based on drug class; for example, if the participant selected diazepam, recommendations for deprescribing benzodiazepines were provided. In the document and print step ([Fig. 5]), the provider had a summary of resources to print, recommended orders, and a prepopulated note summarizing the fall prevention plan. All items in this step were the result of selections made in previous steps. The prepopulated note could be sent to the EHR, reducing documentation time.
Metrics
This study used the Single Ease Question (SEQ) and System Usability Scale (SUS). The SEQ is a post-task measure of perceived difficulty and was asked at the completion of each task during the first scenario using a 7-point scale.[28] It was used to provide insight into potential differences in usability between tasks and to inform interpretation of global usability results. Global usability was measured using the SUS, a validated posttest measure of subjective overall usability and satisfaction.[26] After completion of the second scenario, the SUS was administered using the video conferencing platform's polling feature, allowing participants to answer items without being asked verbally in an attempt to minimize social desirability response bias.
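For reference, the SUS is scored by converting its 10 five-point items into a 0 to 100 scale. The sketch below implements the standard scoring rule; the response set is illustrative, not data from a participant.

```python
def sus_score(responses: list[int]) -> float:
    """Standard SUS scoring: 10 items rated 1-5 yield a 0-100 score.

    Odd-numbered items are positively worded and contribute (rating - 1);
    even-numbered items are negatively worded and contribute (5 - rating);
    the summed contributions are scaled by 2.5.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(
        (r - 1) if i % 2 == 1 else (5 - r)
        for i, r in enumerate(responses, start=1)
    )
    return total * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # 80.0, an illustrative set
```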
Learnability refers to the ease with which a novice user can reach a reasonable level of proficiency in a short period of time.[17] To assess learnability, time on task, hints, and errors were compared for each user between the first and second scenarios. Users were randomized to determine which scenario they saw first to ensure that learnability measures were not influenced by scenario order. Some studies have suggested that learnability can be calculated as a subscale of the SUS.[29] However, subsequent studies have shown that the factor structure of the SUS depends on how much experience participants have with the system being tested.[30] For new users the SUS has a single-factor structure, whereas studies of more experienced users show a two-factor structure. Based on this information, it would have been inappropriate to use SUS subscales to evaluate a tool after two uses.
Acceptability of health care interventions is a prerequisite to effectiveness and requires thoughtful design that allows the best possible outcomes with available resources.[31] This study measured acceptability by comparing the number of recommendations presented to each participant with the number of recommendations included in the fall prevention plan. If there was a conflict between the content of the written plan and the verbalized intent of the participant, the verbalized intent was counted. For example, if a participant clearly verbalized to the patient-actor that they were not going to start bisphosphonates, but did not edit the note or deselect this recommendation, the verbalized intent to not start bisphosphonates was used in the acceptability calculation.
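A minimal sketch of this counting rule, assuming each observation is reduced to sets of recommendation labels (the labels and function are hypothetical), is shown below. Verbalized intent overrides the written plan before the accepted proportion is computed.

```python
def acceptability(presented: set, in_plan: set, verbally_declined: set) -> float:
    """Share of presented recommendations accepted, after applying the
    rule that verbalized intent overrides the written plan."""
    accepted = (in_plan - verbally_declined) & presented
    return len(accepted) / len(presented)

presented = {"exercise", "benzodiazepine taper", "bisphosphonates"}
in_plan = {"exercise", "benzodiazepine taper", "bisphosphonates"}
# The participant told the patient-actor they would not start bisphosphonates
# but left the prepopulated note unedited, so the spoken intent wins:
print(acceptability(presented, in_plan, {"bisphosphonates"}))  # 0.666...
```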
Upon completion of the SUS questionnaire, participants were asked open-ended questions by the facilitator. These questions included what participants liked and did not like about the tool, if they preferred it to their current fall prevention practices, if they would recommend it to others, and if the tool should do anything else. Participants completed a demographic form and received a $50 gift card.
Analysis
Recorded sessions were analyzed using NVivo 12 to describe and quantify usability data. Iterative content analysis of field notes and recordings was used to analyze responses to the interview questions from the end of each session. An a priori code book was developed by the lead author in consultation with a usability expert and reviewed by the team; analysis was done by the lead author and shared with the usability expert and the team. The code book, available as Supplementary Material, was developed to allow analysis of the summative testing measures in [Table 1]. Descriptive statistics of time on task, errors, hints, and acceptability were calculated for the total sample, each site, and each scenario order. A paired t-test assessing within-subject time on task between scenarios was completed after verifying normality. The Wilcoxon rank-sum test was used to compare recommendations seen by participants with recommendations included in the fall prevention plan, due to the nonnormal distribution of results. Statistics were calculated using RStudio.
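The analyses were run in RStudio; the sketch below mirrors them in Python with made-up numbers purely to show the shape of each test. For the recommendation comparison we use SciPy's paired Wilcoxon signed-rank test because the observations are paired within participants; the article reports a Wilcoxon test without code, so treat this pairing choice as our assumption.

```python
import numpy as np
from scipy import stats

# Made-up per-participant times (minutes) for scenarios 1 and 2.
time_s1 = np.array([12.1, 14.0, 13.5, 9.8, 15.2, 11.7])
time_s2 = np.array([8.9, 10.2, 9.5, 7.1, 11.0, 8.3])

# Verify normality of within-subject differences, then run the paired t-test.
print(stats.shapiro(time_s1 - time_s2))
print(stats.ttest_rel(time_s1, time_s2))

# Recommendations presented vs. accepted per observation (nonnormal counts).
presented = np.array([5, 4, 6, 5, 3, 5])
accepted = np.array([4, 3, 4, 3, 2, 4])
print(stats.wilcoxon(presented, accepted))
```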
Results
A total of 14 summative usability sessions were conducted: 10 at the urban site and 4 at the rural site. Participants included physicians (n = 7), nurse practitioners (n = 5), and physician assistants (n = 2). Most participants reported caring for some older adults (n = 9), while the rest reported caring mostly for older adults (n = 5). On average, participants had 5.8 years of experience working with their respective EHR, and nearly all reported an intermediate skill level with technology in general (n = 12).
Usability and Ease of Access
Mean SUS score was 77.3 (median = 80, range = 30–92.5). To interpret SUS results, scores were converted to percentiles using published tables.[26] The mean percentile score was 80.9 (median = 90, range = 2–100), indicating above-average usability. This is further supported by 13 of 14 participants stating that they would recommend ASPIRE to a colleague and that they preferred ASPIRE to their current practice. Participants rated tasks as relatively easy, with mean SEQ scores ranging between 5.1 and 5.5 across the four steps. Providers found the tool accessible, with a mean SEQ for the launch and landing task of 5.5 (median = 6, range = 3–7).
Learnability
Mean total time on task was 12.9 minutes for the first scenario (median = 13.5, range = 5.4–16.8) and 9.4 minutes for the second (median = 8.9, range = 4.6–17.3). This represents a statistically significant reduction in total time on task (p = 0.0001), indicating that users were able to quickly become familiar with the tool. Reduction in total time on task was also evaluated by site and scenario order, with all groups showing statistically significant reductions in total time (p < 0.05).
On average, participants required 2.6 hints during the first scenario (median = 2, range = 1–5), decreasing to an average of 0.14 hints during the second scenario (median = 0, range = 0–2). Reductions in hints were seen uniformly across sites and scenario order. However, urban participants required more hints during the first scenario (mean = 1.5) compared with rural participants (mean = 1). Based on review of field notes and recordings, this difference was due to button location in the EHR and not the ASPIRE system itself. Urban participants had difficulty finding the fall risk icon in Epic, whereas rural participants received hints to click "let's begin" to complete the task.
Error rates remained consistent between scenarios, with 0.1 fewer errors on average during the second scenario (mean = 2.79) compared with the first (mean = 2.86). Error analysis showed users made errors of commission and omission. Errors of commission occurred when users did something unintended (e.g., accidentally clicking on an element), while errors of omission occurred when a user failed to take an action (e.g., not editing the prepopulated note to reflect the verbalized plan). See [Table 2] for a description of errors with associated steps, frequencies, severity scores, and mitigating factors.
[Table 2: errors with associated steps, frequencies, severity scores, and mitigating factors. Abbreviations: EHR, electronic health record; PT, physical therapy. Error severity: 1 = cosmetic, 2 = minor, 3 = major, 4 = catastrophic.]
Acceptability was calculated for each scenario, for a total of 28 observations across 14 participants. Recommendations provided varied by scenario and participant based on the selections made during the risk factor task. For example, if only one FRID was selected, then only recommendations for that selection were shown. Total recommendations seen by participants varied from three to six (mean = 4.9, median = 5). Total acceptability was based on the number of recommendations provided compared with the number accepted. Recommendations accepted ranged from one to six (mean = 3.6, median = 3.5). There was a statistically significant difference between the number of recommendations provided and the number accepted (p < 0.001). The most accepted recommendation was exercise: in 22 of the 28 observations, the recommended level of exercise was included in the final fall prevention plan, and in five of the remaining six observations a higher level of exercise was selected instead. Only one participant in one scenario felt that exercise was not applicable. Acceptance of FRID-related recommendations was mixed. Benzodiazepine handouts and the tapering calendar were accepted in all but one instance (n = 11), and gabapentin was addressed each time it was selected (n = 14). However, loop diuretics were addressed in only 7 of the 20 times they were selected. The least accepted recommendation was the prescription of bisphosphonates: it was provided in 26 observations but accepted in only 14.
Discussion
By designing our study to include simulation and open-ended questions, we were able to evaluate the ASPIRE system using quantitative metrics while gaining important contextual insight into those data through qualitative means. If we had relied only on quantitative metrics like the SUS and SEQ, we would not have had insight into why participants chose the scores they did. Interview questions also provided insight into what value users saw in the system; see [Table 3] for themes and representative responses. Overall, participants found ASPIRE easy to use and preferred it over current practice. This is further supported by comparing our SUS scores to a large SUS database: compared with the 446 studies in the database, ASPIRE is in the 80th percentile and receives a grade of "B."[26] To our knowledge, ASPIRE is the only tailored fall prevention tool designed for primary care. One other CCDS study focused on fall prevention in primary care was found, but it did not include SUS scores or quantitative usability metrics.[9] That study redesigned a clinical reminder to conduct fall risk screening within a specific EHR; the CCDS did not include tailored recommendations, nor was it available for integration into other EHRs. This makes ASPIRE a novel approach to addressing fall prevention in primary care because it provides actionable recommendations that could overcome previously reported skill gaps. The ASPIRE system contains the four features identified as critical to impacting practice: being computer based, providing recommendations, integrating into workflow, and supporting the decision process.[32] Furthermore, on completion of the ongoing pilot study, ASPIRE will be available for integration into any EHR from the CDS Connect repository.[33]
[Table 3: themes and representative responses. Abbreviations: EHR, electronic health record; PT, physical therapy.]
While providers rated the tool easy to access based on SEQ ratings, most urban participants required at least one hint to complete the launch and landing task during the first scenario. Several providers from the urban site verbalized something like “now that I know where the button is it will be easy to do again.” It is not uncommon for participants to rate a task favorably on the SEQ if they believe it will be easily repeatable.[26] This was further supported by participants quickly launching the tool during the second scenario without prompting.
Our results also showed that ASPIRE was easy to learn, with significantly less time required during the second scenario. However, on average, participants spent 9.4 minutes using ASPIRE during the second scenario, which could represent a substantial portion of a typical primary care visit. Preliminary data from the ongoing pilot suggest that providers are spending an average of 4 minutes with the tool, suggesting that time in the system is further reduced when the provider has a relationship with the patient and has received training. ASPIRE could be implemented in the context of an annual wellness visit, which can allow for more time than a regular follow-up appointment; even within shorter appointments, the tool could be used when falls are a significant concern. Some participants also commented that the tool would save them time on other tasks, such as documentation, so time spent with the tool may be partially offset by time saved on other visit-related tasks.
While error rates remained consistent between scenarios, errors could be mitigated by user interface (UI) adjustments and by providing training prior to implementation. In this study, no initial system training or corrective instruction between scenarios was provided, suggesting that training could be beneficial. Many participants commented on errors and implied that increased familiarity with the tool would prevent future errors and decrease the time required to use the system. Training should cover where the tool is accessed, walk through the steps, review the available handouts, and provide references to the evidence used to develop the logic. An explanation of how the system preselects items is also recommended.

To further reduce errors, adjustments to the UI were made based on these results, EHR limitations, and budget. Changes included adjusting the color of buttons not related to prepopulated data. In the prototype tested, the navigation buttons and the buttons that send information back to the EHR used the same dark blue that indicated a selection was based on patient data from the EHR, and some participants interpreted the dark blue as meaning that the information, including the recommended orders, had been sent to the EHR automatically. Due to technical limitations, automatic order entry was not feasible, so an instruction was added alerting users that orders must be entered manually. Because there was confusion over the location and visibility of the edit button for the prepopulated note, the UI was updated so that the note defaults to an editable state. Lastly, a resource library of all handouts and supporting evidence was created. This will enable providers to access materials that the CCDS did not automatically recommend and to print more than one level of exercise handout, a recommendation from testing: several providers requested the ability to provide multiple levels of exercise so patients could progress to higher levels without returning to clinic.
Acceptability of the recommendations tested varied widely. Exercise was the most accepted recommendation; it was based on guidelines specific to primary care and the research team's experience from prior fall prevention studies in primary care. Loop diuretics were often selected in the risk factor step but not addressed in the final plan. This divergence may be due to differences in logic for diuretics compared with other medication classes: during development, the team anticipated that providers would not consult a specialist nor require a handout to address diuretics, so information regarding diuretics is not displayed in the recommendation task. One participant commented that it was interesting that the diuretic selected in step one was not included in step two. Based on these results and participants' comments, future versions of the CCDS should consider revising step two to include information about diuretics. When developing bisphosphonate recommendations, guidelines from the American College of Endocrinology, which recommend prescription of bisphosphonates for osteopenia and osteoporosis, were used.[34] Our participants agreed with bisphosphonate use to treat osteoporosis, but most felt it was inappropriate for osteopenia. This may represent a difference in clinical practice between primary and specialty care providers.
Limitations
The use of simulation attempted to mimic real-world use but could not replicate the patient–provider relationship vital to primary care practice.[35] Several participants verbalized that the system would be easier to use with patients they know. Participants remotely controlled ASPIRE during testing, which may have contributed to some lag and increased time on task. Our study also fell short of its initial recruiting target. The limited number of eligible participants at the rural site, coupled with anecdotal reports of coronavirus disease 2019 (COVID-19)-related burnout at the urban site, may have contributed to this challenge. Lastly, due to limited time and budget, this study included only two clinical scenarios, which covered only some of the recommendations ASPIRE can produce. Further pilot testing should be done to evaluate the acceptability of all possible recommendations and the ability of ASPIRE to integrate into clinical workflows. Future studies should also measure the ability of ASPIRE to influence clinical practice and patient outcomes.
Conclusion
Usability data suggest that ASPIRE represents an improvement over current practice in both rural and urban clinics with different available resources. Our results highlight the importance of using guidelines already acceptable to the target end user when developing CCDS and support previous findings that workflow integration is important to successful CCDS. Due to its interoperable design, ASPIRE has the potential for broad impact across organizations; however, limited support for some FHIR services could add to implementation burden in some EHRs. Pilot testing is needed to validate that our favorable usability results translate into clinical practice workflows and to determine how the tool's use impacts utilization of fall prevention guidelines.
Clinical Relevance Statement
This study demonstrates that it is possible to develop an interoperable computerized clinical decision support tool that targets the fall prevention process with better-than-average usability. Due to its interoperable design and focus on three risk factors, it has the potential to integrate into any electronic health record and to be used in time-constrained primary care environments.
Multiple-Choice Questions
1. Factors associated with successful clinical decision support systems include which of the following?
a. Workflow integration
b. Computer-based
c. Point-of-care support
d. Actionable recommendations
e. All of the above
Correct Answer: The correct answer is option e. Systems should be integrated into workflow; otherwise, users are far less likely to access them. Computer-based systems can make provision of recommendations more timely, which is important when making decisions at the point of care. Systems that provide actionable recommendations are more likely to change behavior when compared with systems that only flag risks without providing ways to address that risk.
2. When should summative usability testing be done?
a. During system development
b. Before full implementation
c. After full implementation
Correct Answer: The correct answer is option b. Summative testing provides input on what to fix, and its metrics can be used as a baseline for posttest design changes. It is important to fix any issues that would have a significant negative impact on usability and on users' willingness to adopt a new system before full implementation. If a system has too many flaws at implementation, users will lose trust and may not be willing to adopt it, even if improvements are made later.
Conflict of Interest
None declared.
Protection of Human and Animal Subjects
IRB approval was received for protocol numbers: Site 1: 2020P002075; Site 2: CED000000426.
Note
P.C.D. and R.L. contributed equally to the development of the manuscript as co–senior investigators.
The views expressed herein are those of the author(s) and do not reflect the official policy or position of Brooke Army Medical Center, the Department of Defense, or any agencies under the U.S. Government.
References
- 1 Florence CS, Bergen G, Atherly A, Burns E, Stevens J, Drake C. Medical costs of fatal and nonfatal falls in older adults. J Am Geriatr Soc 2018; 66 (04) 693-698
- 2 Moreland B, Kakara R, Henry A. Trends in nonfatal falls and fall-related injuries among adults aged ≥65 years - United States, 2012-2018. MMWR Morb Mortal Wkly Rep 2020; 69 (27) 875-881
- 3 Clinical Decision Support. The Office of the National Coordinator for Health Information Technology. Accessed March 7, 2021 at: https://www.healthit.gov/topic/safety/clinical-decision-support
- 4 Bryan C, Boren SA. The use and effectiveness of electronic clinical decision support tools in the ambulatory/primary care setting: a systematic review of the literature. Inform Prim Care 2008; 16 (02) 79-91
- 5 Trinkley KE, Kroehl ME, Kahn MG. et al. Applying clinical decision support design best practices with the practical robust implementation and sustainability model versus reliance on commercially available clinical decision support tools: randomized controlled trial. JMIR Med Inform 2021; 9 (03) e24359
- 6 Taylor SF, Coogle CL, Cotter JJ, Welleford EA, Copolillo A. Community-dwelling older adults' adherence to environmental fall prevention recommendations. J Appl Gerontol 2019; 38 (06) 755-774
- 7 National Committee for Quality Assurance. Fall risk management. Accessed April 25, 2022 at: https://www.ncqa.org/hedis/measures/fall-risk-management/
- 8 Howland J, Hackman H, Taylor A, O'Hara K, Liu J, Brusch J. Older adult fall prevention practices among primary care providers at accountable care organizations: a pilot study. PLoS One 2018; 13 (10) e0205279
- 9 Spears GV, Roth CP, Miake-Lye IM, Saliba D, Shekelle PG, Ganz DA. Redesign of an electronic clinical reminder to prevent falls in older adults. Med Care 2013; 51 (3, suppl 1): S37-S43
- 10 Zheng MY, Suneja A, Chou AL, Arya M. Physician barriers to successful implementation of US Preventive Services Task Force routine HIV testing recommendations. J Int Assoc Provid AIDS Care 2014; 13 (03) 200-205
- 11 Hoskins KF, Tejeda S, Vijayasiri G. et al. A feasibility study of breast cancer genetic risk assessment in a federally qualified health center. Cancer 2018; 124 (18) 3733-3741
- 12 Kurth AE, Krist AH, Borsky AE. et al. U.S. Preventive Services Task Force methods to communicate and disseminate clinical preventive services recommendations. Am J Prev Med 2018; 54 (1S1): S81-S87
- 13 Braunstein ML. Health Informatics on FHIR: How HL7's New API is Transforming Healthcare. Cham: Springer; 2018: 292
- 14 Barnett ML, Bitton A, Souza J, Landon BE. Trends in outpatient care for medicare beneficiaries and implications for primary care, 2000 to 2019. Ann Intern Med 2021; 174 (12) 1658-1665
- 15 Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med 2020; 3: 17
- 16 Bright TJ, Wong A, Dhurjati R. et al. Effect of clinical decision-support systems: a systematic review. Ann Intern Med 2012; 157 (01) 29-43
- 17 Nielsen J. Chapter 2 - What is usability? In: Nielsen J, ed. Usability Engineering. Burlington, MA: Morgan Kaufmann; 1993: 23-48
- 18 Peute LW, Spithoven R, Bakker PJ, Jaspers MW. Usability studies on interactive health information systems; where do we stand?. Stud Health Technol Inform 2008; 136: 327-332
- 19 Nielsen J, Mack RL. Usability Inspection Methods. Morristown, NJ: Wiley; 1994
- 20 Li AC, Kannry JL, Kushniruk A. et al. Integrating usability testing and think-aloud protocol analysis with “near-live” clinical simulations in evaluating clinical decision support. Int J Med Inform 2012; 81 (11) 761-772
- 21 Rice H, Garabedian PM, Shear K. et al. Clinical decision support for fall prevention: defining end-user needs. Appl Clin Inform 2022; 13 (03) 647-655
- 22 Rice HG, Garabedian P, Shear K. et al. Computerized clinical decision support for fall prevention: defining end-user requirements for primary care staff and patients. Paper presented at: 2022 AMIA Informatics Summit; March 21–24, 2022; Chicago, IL, United States. Accessed January 18, 2023 at: https://knowledge.amia.org/75287-amia-1.4633888/t005-1.4635399/t005-1.4635400/2082-1.4635455/2082-1.4635456
- 23 Grossman DC, Curry SJ, Owens DK. et al; US Preventive Services Task Force. Interventions to prevent falls in community-dwelling older adults: US Preventive Services Task Force recommendation statement. JAMA 2018; 319 (16) 1696-1704
- 24 Shubert TE, Smith ML, Jiang L, Ory MG. Disseminating the Otago Exercise Program in the United States: perceived and actual physical performance improvements from participants. J Appl Gerontol 2018; 37 (01) 79-98
- 25 De Vito Dabbs A, Myers BA, Mc Curry KR. et al. User-centered design and interactive health technologies for patients. Comput Inform Nurs 2009; 27 (03) 175-183
- 26 Sauro J, Lewis JR. Quantifying the User Experience: Practical Statistics for User Research. 2nd ed. Amsterdam: Elsevier; 2016: 350
- 27 Faulkner L. Beyond the five-user assumption: benefits of increased sample sizes in usability testing. Behav Res Methods Instrum Comput 2003; 35 (03) 379-383
- 28 Albert W, Tullis TS. Chapter 5 - Self-reported metrics. In: Albert W, Tullis TS, eds. Measuring the User Experience. 3rd ed. Burlington, MA: Morgan Kaufmann; 2023: 109-151
- 29 Lewis JR, Sauro J. The Factor Structure of the System Usability Scale. Berlin, Heidelberg: Springer; 2009: 94-103
- 30 Borsci S, Federici S, Bacci S, Gnaldi M, Bartolucci F. Assessing user satisfaction in the era of user experience: comparison of the SUS, UMUX, and UMUX-LITE as a function of product experience. Int J Hum Comput Interact 2015; 31 (08) 484-495
- 31 Sekhon M, Cartwright M, Francis JJ. Acceptability of healthcare interventions: an overview of reviews and development of a theoretical framework. BMC Health Serv Res 2017; 17 (01) 88
- 32 Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005; 330 (7494): 765
- 33 CDS Connect Repository. Agency for Healthcare Research and Quality. Accessed April 29, 2022 at: https://cds.ahrq.gov/cdsconnect/repository
- 34 Camacho PM, Petak SM, Binkley N. et al. American Association of Clinical Endocrinologists/American College of Endocrinology Clinical Practice Guidelines for the Diagnosis and Treatment of Postmenopausal Osteoporosis-2020 Update. Endocr Pract 2020; 26 (suppl 1): 1-46
- 35 Weinfeld JM, Gorman PN. Primary care physician designation and response to clinical decision support reminders: a cross-sectional study. Appl Clin Inform 2016; 7 (02) 248-259
Publication History
Received: 07 July 2022
Accepted: 02 January 2023
Accepted Manuscript online: 04 January 2023
Article published online: 15 March 2023
© 2023. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany