CC BY-NC-ND 4.0 · Yearb Med Inform 2018; 27(01): 127-128
DOI: 10.1055/s-0038-1667090
Section 5: Decision Support
Georg Thieme Verlag KG Stuttgart

Best Paper Selection

Publication Date: 29 August 2018 (online)

 

Chen JH, Alagappan M, Goldstein MK, Asch SM, Altman RB. Decaying relevance of clinical data towards future decisions in data-driven inpatient clinical order sets. Int J Med Inform 2017 Jun;102:71-9 https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/28495350/

Ebadi A, Tighe PJ, Zhang L, Rashidi P. DisTeam: A decision support tool for surgical team selection. Artif Intell Med 2017 Feb;76:16-26 https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/28363285/

Fung KW, Kapusnik-Uner J, Cunningham J, Higby-Baker S, Bodenreider O. Comparison of three commercial knowledge bases for detection of drug-drug interactions in clinical decision support. J Am Med Inform Assoc 2017 Jul 1;24(4):806-12 https://academic.oup.com/jamia/article-lookup/doi/10.1093/jamia/ocx010

Mikalsen KØ, Soguero-Ruiz C, Jensen K, Hindberg K, Gran M, Revhaug A, Lindsetmo RO, Skrøvseth SO, Godtliebsen F, Jenssen R. Using anchors from free text in electronic health records to diagnose postoperative delirium. Comput Methods Programs Biomed 2017 Dec;152:105-14 https://linkinghub.elsevier.com/retrieve/pii/S0169-2607(17)31154-9



Appendix: Content Summaries of Selected Best Papers for the 2018 IMIA Yearbook, Section Decision Support

Chen JH, Alagappan M, Goldstein MK, Asch SM, Altman RB

Decaying relevance of clinical data towards future decisions in data-driven inpatient clinical order sets

Int J Med Inform 2017 Jun;102:71-9

Taking note that current knowledge-based decision support approaches to promoting best practice are limited by classical trial-based clinical research and by human authoring, Chen et al. advocate data-driven approaches that take advantage of the data accumulated in EHRs to predict clinical practice patterns and then offer automated decision support. A clinical order recommender system, analogous to Netflix's or Amazon's product recommenders, was used to predict admission orders in a tertiary academic hospital based on admission diagnoses. The objective of the study was to assess how the accuracy of decision prediction varied with the historical dataset used to train the clinical order recommender system, and to estimate the decay rate of the relevance of prior data. The training sets were built from data available from 2009 to 2012, over periods varying in duration, from one to 48 months, and in starting year. Predicted orders and human-authored order sets were compared against actual 2013 data. Results showed that the accuracy of predicted decisions for the reference period (2013) was significantly better when the system was trained on just one month of recent data (2012) than on one year of older data (2009). Interestingly, using more data from a longer period (2009-2012) was not better than using only the most recent data (2012), except when a decaying weighting scheme was applied. In this context, the half-life of data relevance was estimated at four months. Whatever the training set, the decisions predicted by data mining were more accurate than the predefined, human-authored order sets. The authors conclude that data-driven models predict decisions better when trained on small, recent datasets than on larger sets augmented with older data; adding older training data may degrade predictions unless a decaying weighting function is used.
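
To make the decaying weighting idea concrete, the following minimal Python sketch assumes a simple exponential decay with the reported four-month half-life; the weighting form, order names, and counting logic are illustrative assumptions, not the authors' implementation.

    import math

    HALF_LIFE_MONTHS = 4.0  # half-life of data relevance reported by Chen et al.

    def relevance_weight(age_months: float) -> float:
        """Weight of a historical clinical order that is `age_months` old,
        assuming exponential decay with a four-month half-life."""
        return 0.5 ** (age_months / HALF_LIFE_MONTHS)

    # Illustrative only: co-occurrence counts between an admission diagnosis and
    # an order, down-weighted by the age of each historical encounter.
    encounters = [
        {"order": "CBC", "age_months": 2},
        {"order": "CBC", "age_months": 40},
        {"order": "MRI brain", "age_months": 1},
    ]

    weighted_counts = {}
    for enc in encounters:
        w = relevance_weight(enc["age_months"])
        weighted_counts[enc["order"]] = weighted_counts.get(enc["order"], 0.0) + w

    print(weighted_counts)  # recent orders contribute more to the recommender's scores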

Ebadi A, Tighe PJ, Zhang L, Rashidi P

DisTeam: A decision support tool for surgical team selection

Artif Intell Med 2017 Feb;76:16-26

DisTeam is a novel decision support tool for surgical team selection that aims to reduce conflicts, improve coordination, and improve patient outcomes. While many studies have addressed optimal human resource allocation in the hospital environment, DisTeam relies on a matchmaking framework for optimal surgical team selection among individual healthcare professionals (surgeons, anesthesiologists, and nurse circulators). Taking as input historical data about surgical teams (i.e., surgical complications associated with teams and their members, as well as their teamwork history) and individual patient characteristics (e.g., age, body mass index, and Charlson comorbidity index score), Ebadi et al. introduce with DisTeam a metaheuristic framework for the objective evaluation of surgical teams, in order to identify the optimal team for a given patient. Relying on a genetic algorithm, DisTeam generates a ranked list of possible surgical teams, and its evaluation suggested high effectiveness: DisTeam converges quickly to the optimal solution, providing the best surgical team as well as additional teams that could be employed as alternatives. DisTeam was evaluated using intra-operative data from 6,065 unique orthopedic surgery cases, involving 60 distinct surgeons, 157 anesthesiologists, and 223 circulators. Compared to the current state of the art, DisTeam extends existing scheduling software by considering team structure and history as well as the patient's specific characteristics, beyond personnel preferences (regarding days, shifts, units, etc.) and regulatory and union requirements.
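
As an illustration of the genetic-algorithm idea only, the Python sketch below searches over (surgeon, anesthesiologist, circulator) triples and ranks them by a toy risk score; the rosters, scores, and parameters are invented placeholders, not DisTeam's fitness function or data.

    import random

    # Hypothetical rosters; in DisTeam the score would come from historical
    # complication and teamwork data plus the patient's characteristics.
    surgeons = ["S1", "S2", "S3"]
    anesthesiologists = ["A1", "A2", "A3", "A4"]
    circulators = ["C1", "C2", "C3", "C4", "C5"]

    random.seed(0)
    risk = {(s, a, c): random.random()
            for s in surgeons for a in anesthesiologists for c in circulators}

    def fitness(team):
        return -risk[team]  # higher fitness = lower estimated risk

    def mutate(team):
        s, a, c = team
        slot = random.randrange(3)
        if slot == 0:
            s = random.choice(surgeons)
        elif slot == 1:
            a = random.choice(anesthesiologists)
        else:
            c = random.choice(circulators)
        return (s, a, c)

    def crossover(t1, t2):
        return tuple(random.choice(pair) for pair in zip(t1, t2))

    # Simple generational genetic algorithm over candidate teams.
    population = [(random.choice(surgeons), random.choice(anesthesiologists),
                   random.choice(circulators)) for _ in range(20)]
    for _ in range(50):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(10)]
        population = parents + children

    ranked = sorted(set(population), key=fitness, reverse=True)
    print(ranked[:3])  # ranked list of suggested teams, best first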

Fung KW, Kapusnik-Uner J, Cunningham J, Higby-Baker S, Bodenreider O

Comparison of three commercial knowledge bases for detection of drug-drug interactions in clinical decision support

J Am Med Inform Assoc 2017 Jul 1;24(4):806-12

Medication safety is of primary importance to healthcare delivery, and known drug-drug interactions (DDIs) may be among the most preventable adverse events. The authors compared three commercial drug knowledge bases (KBs) widely used in the US for automated decision support in computerized provider order entry (CPOE) systems that generate DDI alerts. In a normalization phase, all drug resources were mapped to RxNorm to allow for comparisons. The contents of the KBs were statically compared to assess how they overlapped. It was also checked whether the KBs covered a reference list of highly significant DDIs from the Office of the National Coordinator for Health Information Technology (ONC). Finally, each KB and the ONC list were applied to an actual dataset of 14 million prescriptions to simulate their effect as clinical decision support. Results showed that the number of drug-drug pairs listed in each KB varied by a factor of three. Among the 8.6 million unique pairs, 79% were present in only one KB and 5% in all three KBs. Content analysis showed more agreement than disagreement in the severity ranking of the DDIs for the common pairs, especially for contraindications. The selected DDIs of the ONC list were covered at 99% or more by the three KBs. Applied to the prescription dataset, the number of alerts varied according to the KB. However, ONC alerts were all covered by the KBs, though differences in severity ranking were observed. Notably, two statins and QT-prolonging agents were responsible for more than 97% of all ONC alerts. The observed variations in size and contents call for better standardization of drug KBs. Despite this, the KBs largely cover the reference ONC DDIs. The authors suggest that other contraindicated DDIs shared by all KBs might complement the current ONC list.
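
The static content comparison amounts to set operations over normalized drug pairs. The following minimal Python sketch illustrates the idea, assuming each KB has already been mapped to RxNorm-normalized ingredient pairs; the KB contents shown are invented placeholders, not the study data.

    # Each KB is represented as a set of unordered, normalized ingredient pairs.
    def pair(a, b):
        return tuple(sorted((a, b)))

    kb_a = {pair("warfarin", "aspirin"), pair("simvastatin", "amiodarone")}
    kb_b = {pair("warfarin", "aspirin"), pair("sildenafil", "nitroglycerin")}
    kb_c = {pair("warfarin", "aspirin"), pair("simvastatin", "amiodarone"),
            pair("methotrexate", "trimethoprim")}

    all_pairs = kb_a | kb_b | kb_c
    in_all_three = kb_a & kb_b & kb_c
    in_exactly_one = {p for p in all_pairs
                      if sum(p in kb for kb in (kb_a, kb_b, kb_c)) == 1}

    # Proportions of pairs shared by all three KBs vs. present in only one KB.
    print(len(all_pairs), len(in_all_three), len(in_exactly_one))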

Mikalsen KØ, Soguero-Ruiz C, Jensen K, Hindberg K, Gran M, Revhaug A, Lindsetmo RO, Skrøvseth SO, Godtliebsen F, Jenssen R

Using anchors from free text in electronic health records to diagnose postoperative delirium

Comput Methods Programs Biomed 2017 Dec;152:105-14

Although it is a common complication after major surgery with serious consequences, especially in the elderly population, postoperative delirium often remains undetected. To address this problem, Mikalsen et al. exploited the free-text parts of EHRs (a rich information resource, given that nurses monitor the patient's health status after surgery and report on it three times a day) to construct data-driven clinical decision support systems (CDSSs). However, this task typically relies on labeled training data, which is both time-consuming and expensive to produce. Mikalsen et al. addressed this shortcoming by adopting an anchor-based learning framework that turns key observations contained in the free text (i.e., the anchors) into labels. Learning with anchors is a method for efficiently learning statistically driven phenotypes with minimal manual intervention, under the assumption that the presence of an anchor variable implies the presence of the latent label of interest. To address the problem of specifying reliable anchors, Mikalsen et al. developed a problem-specific method (based on domain knowledge and on exploratory data analysis using clustering and visualization techniques) and employed elastic-net-based classification, which enforces sparsity and has been shown to be robust in settings where the dimensionality exceeds the sample size. To assess the proposed framework for the detection of postoperative delirium, Mikalsen et al. used EHR data on 7,741 patients from a Norwegian university hospital and observed an increase in the AUC-PR from 0.51 to 0.96 compared with baselines. Overall, the study concluded that the proposed method could be applied successfully to problems where no obvious anchors exist, as well as to other application domains, such as the preoperative identification of malnourished patients and the prediction of patients at risk of postoperative complications.
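
The anchor-and-classify idea can be illustrated with a minimal Python sketch; the notes, anchor stems, and model settings below are invented for illustration and greatly simplify the authors' pipeline, which is not reproduced here.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy free-text nursing notes; the stems below stand in for clinician-chosen anchors.
    notes = [
        "patient confused and disoriented during the night",
        "calm oriented mobilised without problems",
        "acute confusion pulling at lines delirium suspected",
        "slept well no pain oriented to time and place",
    ]
    ANCHORS = ("confus", "disorient", "delirium")  # hypothetical anchor stems

    # Anchor-based labels: the presence of an anchor is treated as a (noisy) positive label.
    y = np.array([int(any(a in note for a in ANCHORS)) for note in notes])

    def censor_anchors(note):
        """Drop anchor terms so the classifier must generalise from the remaining text."""
        return " ".join(w for w in note.split() if not any(a in w for a in ANCHORS))

    X = TfidfVectorizer().fit_transform([censor_anchors(n) for n in notes])

    # Elastic-net-penalised logistic regression: sparse, and usable when features >> samples.
    clf = LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5, max_iter=5000)
    clf.fit(X, y)
    print(clf.predict(X))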

