Appl Clin Inform 2017; 08(02): 369-380
DOI: 10.4338/ACI-2016-09-RA-0149
Research Article
Schattauer GmbH

Automating Clinical Score Calculation within the Electronic Health Record

A Feasibility Assessment
Christopher Aakre (1), Mikhail Dziadzko (2), Mark T. Keegan (2), Vitaly Herasevich (2, 3)

1 Division of General Internal Medicine, Department of Internal Medicine, Mayo Clinic, Rochester, MN, USA
2 Department of Anesthesiology, Division of Critical Care, Mayo Clinic, Rochester, MN, USA
3 Multidisciplinary Epidemiology and Translation Research in Intensive Care (METRIC), Mayo Clinic, Rochester, MN, USA
Funding This publication was made possible by CTSA Grant Number UL1 TR000135 from the National Center for Advancing Translational Sciences (NCATS), a component of the National Institutes of Health (NIH). Its contents are solely the responsibility of the authors and do not necessarily represent the official view of NIH.

Publication History
Received: 01 September 2016
Accepted: 07 February 2017
Publication Date: 21 December 2017 (online)

Summary

Objectives: Evidence-based clinical scores are used frequently in clinical practice, but data collection and data entry can be time-consuming and hinder their use. We investigated the programmability of 168 common clinical calculators for automation within electronic health records.

Methods: We manually reviewed and categorized variables from 168 clinical calculators as being extractable from structured data, unstructured data, or both. Advanced data retrieval methods from unstructured data sources were tabulated for diagnoses, non-laboratory test results, clinical history, and examination findings.
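
As an illustration of this categorization step, the following is a minimal Python sketch, not the study's actual workflow; the variables and their assigned source categories are hypothetical examples.

    # Minimal sketch of variable categorization; the variables and their
    # assigned source categories below are hypothetical examples.
    from collections import Counter

    VARIABLE_SOURCES = {
        "serum_creatinine": "structured",         # laboratory result
        "systolic_bp": "structured",              # flowsheet vital sign
        "heart_failure_history": "both",          # problem list or clinical notes
        "altered_mental_status": "unstructured",  # examination finding in notes
    }

    counts = Counter(VARIABLE_SOURCES.values())
    print(counts)  # e.g. Counter({'structured': 2, 'both': 1, 'unstructured': 1})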

Results: We identified 534 unique variables, of which 203/534 (38.0%) were extractable from structured data and 269/534 (50.4%) were potentially extractable using advanced techniques. Nearly half (265/534, 49.6%) of all variables were not retrievable. Only 26/168 (15.5%) of the scores were completely programmable using only structured data, and 43/168 (25.6%) could potentially be programmable using widely available advanced information retrieval techniques. Scores relying on clinical examination findings or clinical judgments were most often not completely programmable.

Conclusion: Complete automation is not possible for most clinical scores because of the high prevalence of variables requiring clinical examination findings or clinical judgment; for these scores, partial automation is the most that can be achieved. The effect of fully or partially automated score calculation on clinical efficiency and clinical guideline adherence requires further study.
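
To make the notion of partial automation concrete, here is a minimal Python sketch, not taken from the paper; the field names are hypothetical, and the thresholds follow the commonly published CURB-65 definition. The structured-data components are scored automatically, while the examination-dependent component is flagged for manual assessment.

    # Illustrative sketch of partial automation; field names are hypothetical.
    # CURB-65 components: confusion, urea > 7 mmol/L, respiratory rate >= 30/min,
    # systolic BP < 90 or diastolic BP <= 60 mmHg, age >= 65 (1 point each).

    def partial_curb65(structured):
        """Score components available from structured data; flag the rest."""
        score = 0
        score += int(structured["urea_mmol_l"] > 7)
        score += int(structured["resp_rate"] >= 30)
        score += int(structured["sbp"] < 90 or structured["dbp"] <= 60)
        score += int(structured["age"] >= 65)
        # Confusion is an examination finding / clinical judgment, so it is
        # returned as an item still requiring manual assessment.
        return score, ["confusion"]

    auto_score, needs_manual = partial_curb65(
        {"urea_mmol_l": 8.2, "resp_rate": 24, "sbp": 104, "dbp": 58, "age": 71}
    )
    print(auto_score, needs_manual)  # 3 ['confusion']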

Citation: Aakre C, Dziadzko M, Keegan MT, Herasevich V. Automating clinical score calculation within the electronic health record: A feasibility assessment. Appl Clin Inform 2017; 8: 369–380 https://doi.org/10.4338/ACI-2016-09-RA-0149

Clinical Relevance Statement

Automated calculation of commonly used clinical scores within the EHR could reduce cognitive workload, improve practice efficiency, and facilitate clinical guideline adherence.


Human Subjects Protections

Human subjects were not included in this project.