CC BY-NC-ND 4.0 · Yearb Med Inform 2021; 30(01): 239-244
DOI: 10.1055/s-0041-1726522
Section 10: Natural Language Processing
Survey

A Review of Recent Work in Transfer Learning and Domain Adaptation for Natural Language Processing of Electronic Health Records

Egoitz Laparra
1   School of Information, University of Arizona, Tucson, USA
Aurelie Mascio
2   Department of Biostatistics and Health Informatics, King's College London, London, United Kingdom
Sumithra Velupillai
3   Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
Timothy Miller
4   Computational Health Informatics Program, Boston Children's Hospital, Boston, USA
5   Department of Pediatrics, Harvard Medical School, Boston, USA

Summary

Objectives: We survey recent work in biomedical NLP on building more adaptable or generalizable models, with a focus on work dealing with electronic health record (EHR) texts, to better understand recent trends in this area and identify opportunities for future research.

Methods: We searched PubMed, the Institute of Electrical and Electronics Engineers (IEEE), the Association for Computational Linguistics (ACL) Anthology, the Association for the Advancement of Artificial Intelligence (AAAI) proceedings, and Google Scholar for the years 2018–2020. We reviewed abstracts to identify the most relevant and impactful work, and manually extracted data points from each of these papers to characterize the types of methods and tasks that were studied, in which clinical domains, and current state-of-the-art results.

Results: The ubiquity of pre-trained transformers in clinical NLP research has contributed to an increase in work on domain adaptation and generalization that uses these models as the key component. Most recently, work has started to train biomedical transformers and to extend the fine-tuning process with additional domain adaptation techniques. We also highlight recent research in cross-lingual adaptation as a special case of domain adaptation.
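The fine-tuning pattern described above can be sketched in miniature. The code below is a deliberately simplified, library-free illustration, not any system surveyed here: a frozen "pre-trained" encoder (a fixed random projection standing in for BERT-style contextual features) paired with a small task head that is trained on target-domain labels. All data, dimensions, and function names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pre-trained" encoder: a fixed projection standing in for a
# transformer's feature extractor (hypothetical; for illustration only).
W_frozen = rng.normal(size=(20, 8))

def encode(X):
    """Frozen feature extractor; its weights are never updated."""
    return np.tanh(X @ W_frozen)

def fine_tune_head(X, y, lr=0.5, steps=200):
    """Train only a logistic-regression head on top of frozen features,
    mimicking the fine-tune-a-task-head pattern in transfer learning."""
    H = encode(X)
    w = np.zeros(H.shape[1])
    b = 0.0
    losses = []
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(H @ w + b)))          # sigmoid predictions
        losses.append(-np.mean(y * np.log(p + 1e-9)
                               + (1 - y) * np.log(1 - p + 1e-9)))
        grad = p - y                                     # dL/dlogits
        w -= lr * (H.T @ grad) / len(y)                  # update head only
        b -= lr * grad.mean()
    return w, b, losses

# Tiny synthetic "target domain" task: label depends on one input feature.
X = rng.normal(size=(100, 20))
y = (X[:, 0] > 0).astype(float)
w, b, losses = fine_tune_head(X, y)
```

In real clinical systems the encoder would be a transformer pre-trained on general or biomedical text, and the surveyed domain adaptation techniques go further than this sketch, for example by continuing masked-language-model pre-training on in-domain notes before fine-tuning.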

Conclusions: While pre-trained transformer models have led to some large performance improvements, general-domain pre-training does not always transfer adequately to the clinical domain because of its highly specialized language. There is also much work to be done in showing that the gains obtained by pre-trained transformers are beneficial in real-world use cases. The amount of work in domain adaptation and transfer learning is limited by dataset availability, and creating datasets for new domains is challenging. The growing body of research in languages other than English is encouraging, and more collaboration between researchers across the language divide would likely accelerate progress in non-English clinical NLP.



Publication History

Article published online:
03 September 2021

© 2021. IMIA and Thieme. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

 