DOI: 10.1055/s-0041-1730032
Lessons Learned for Identifying and Annotating Permissions in Clinical Consent Forms
Funding
E. Umberfield was supported in part by the Robert Wood Johnson Foundation Future of Nursing Scholars predoctoral training program. She is presently funded as a Postdoctoral Research Fellow in Public and Population Health Informatics at the Fairbanks School of Public Health and the Regenstrief Institute, supported by the National Library of Medicine of the National Institutes of Health under award number 5T15LM012502-04. The study was further supported by the National Human Genome Research Institute of the National Institutes of Health under award number 5U01HG009454-03, a Rackham Graduate Student Research Grant, and the University of Michigan Institute for Data Science. The content of this publication is solely the responsibility of the authors and does not necessarily represent the official views of the funding entities.
Abstract
Background The lack of machine-interpretable representations of consent permissions precludes development of tools that act upon permissions across information ecosystems, at scale.
Objectives To report the process, results, and lessons learned while annotating permissions in clinical consent forms.
Methods We conducted a retrospective analysis of clinical consent forms. We developed an annotation scheme following the MAMA (Model-Annotate-Model-Annotate) cycle and evaluated interannotator agreement (IAA) using observed agreement (A_o), weighted kappa (κ_w), and Krippendorff's α.
Results The final dataset included 6,399 sentences from 134 clinical consent forms. Complete agreement was achieved for 5,871 sentences, of which 211 were positively and 5,660 negatively identified as permission-sentences by all three annotators (A_o = 0.944, Krippendorff's α = 0.599). These values reflect moderate to substantial IAA. Although permission-sentences share a common vocabulary and structure, disagreements between annotators are largely explained by lexical variability and ambiguity of sentence meaning.
Conclusion Our findings point to the complexity of identifying permission-sentences within clinical consent forms. We present our results in light of lessons learned, which may serve as a starting point for developing tools for automated permission extraction.
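All three agreement statistics are straightforward to compute directly. Below is a minimal, self-contained Python sketch, our own illustration rather than the study's code: the function names and toy labels are invented for demonstration. It computes average pairwise observed agreement (A_o), mean pairwise Cohen's κ (to which weighted κ reduces when labels are binary, as here), and Krippendorff's α for nominal data with no missing labels.

```python
from collections import Counter
from itertools import combinations


def observed_agreement(units):
    """Average pairwise observed agreement (A_o) across units."""
    total = 0.0
    for labels in units:
        pairs = list(combinations(labels, 2))
        total += sum(a == b for a, b in pairs) / len(pairs)
    return total / len(units)


def mean_pairwise_kappa(units):
    """Cohen's kappa averaged over annotator pairs.

    With binary labels, weighted kappa reduces to unweighted Cohen's
    kappa, so this stands in for the kappa_w named in the Methods.
    """
    kappas = []
    for i, j in combinations(range(len(units[0])), 2):
        a = [u[i] for u in units]
        b = [u[j] for u in units]
        n = len(units)
        p_o = sum(x == y for x, y in zip(a, b)) / n
        labels = set(a) | set(b)
        p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in labels)
        kappas.append(1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e))
    return sum(kappas) / len(kappas)


def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data with no missing labels."""
    n_c = Counter()   # marginal label counts over all pairable values
    disagree = 0.0    # off-diagonal coincidence mass (observed disagreement)
    n = 0             # total number of pairable values
    for labels in units:
        m = len(labels)
        if m < 2:
            continue
        counts = Counter(labels)
        n += m
        n_c.update(counts)
        # Off-diagonal coincidences within a unit: (m^2 - sum n_uc^2) / (m - 1)
        disagree += (m * m - sum(v * v for v in counts.values())) / (m - 1)
    d_o = disagree / n
    d_e = (n * n - sum(v * v for v in n_c.values())) / (n * (n - 1))
    return 1.0 - d_o / d_e


if __name__ == "__main__":
    # Toy data: six sentences, three annotators, 1 = permission-sentence.
    toy = [[1, 1, 1], [0, 0, 0], [0, 0, 0], [1, 1, 0], [0, 0, 0], [0, 0, 1]]
    print(f"A_o   = {observed_agreement(toy):.3f}")          # 0.778
    print(f"kappa = {mean_pairwise_kappa(toy):.3f}")         # 0.500
    print(f"alpha = {krippendorff_alpha_nominal(toy):.3f}")  # 0.528
```

As a sanity check on the reported figures: with binary labels and three annotators, every disagreement sentence has exactly one agreeing pair out of three, so average pairwise agreement over the 6,399 sentences works out to (5,871 + 528/3)/6,399 ≈ 0.94, consistent with the reported A_o.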
Protection of Human and Animal Subjects
Institutional Review Board review was not required because human subjects were not involved. Only blank consent forms were collected and analyzed.
Publication History
Received: 29 December 2020
Accepted: 31 March 2021
Article published online: 23 June 2021
© 2021. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany