CC BY-NC-ND 4.0 · Appl Clin Inform 2019; 10(02): 189-198
DOI: 10.1055/s-0039-1679927
Research Article
Georg Thieme Verlag KG Stuttgart · New York

Can Automated Retrieval of Data from Emergency Department Physician Notes Enhance the Imaging Order Entry Process?

Justin F. Rousseau
1   Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
2   Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
3   Department of Population Health, Dell Medical School, Austin, Texas, United States
4   Department of Neurology, Dell Medical School, Austin, Texas, United States
,
Ivan K. Ip
1   Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
2   Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
5   Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
,
Ali S. Raja
1   Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
2   Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
,
Vladimir I. Valtchinov
1   Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
2   Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
6   Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States
,
Laila Cochon
1   Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
2   Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
,
Jeremiah D. Schuur
7   Department of Emergency Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
,
Ramin Khorasani
1   Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
2   Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
Funding This study was funded in part by the Boston-Area Research Training Program in Biomedical Informatics grant T15LM007092 from the National Library of Medicine. The authors thank Shawn Murphy, Henry Chueh, and the study institution's Research Patient Data Registry group for facilitating use of their database, and Laura E. Peterson for editorial assistance with the manuscript.
Further Information

Address for correspondence

Justin F. Rousseau, MD, MMSc
Departments of Population Health and Neurology, Dell Medical School
1701 Trinity Street, Stop Z0500
Austin, TX 78712
United States   

Publication History

Received: 24 August 2018

Accepted: 18 January 2019

Publication Date:
20 March 2019 (online)

 

Abstract

Background When ordering physicians communicate little clinical information to radiologists at the time of radiology order entry, suboptimal imaging interpretation and patient care may result.

Objectives To compare relevant clinical information documented in the electronic health record (EHR) provider note with that in the computed tomography (CT) order requisition at the time of head CT ordering for emergency department (ED) patients presenting with headache.

Methods In this institutional review board-approved retrospective observational study performed between April 1, 2013 and September 30, 2014 at an adult quaternary academic hospital, we reviewed data from 666 consecutive ED encounters for patients with headaches who received head CT. The primary outcome was the number of concept unique identifiers (CUIs) relating to headache extracted via ontology-based natural language processing from the history of present illness (HPI) section in ED notes compared with the number of concepts obtained from the imaging order requisition.

Results Our analysis was conducted on cases where the HPI note section was completed prior to image order entry, which occurred in 23.1% (154/666) of encounters. For these 154 encounters, the number of CUIs specific to headache per note extracted from the HPI (median = 3, interquartile range [IQR]: 2–4) was significantly greater than the number of CUIs per encounter obtained from the imaging order requisition (median = 1, IQR: 1–2; Wilcoxon signed rank p < 0.0001). Extracted concepts from notes were distinct from order requisition indications in 92.9% (143/154) of cases.

Conclusion EHR provider notes are a valuable source of relevant clinical information at the time of imaging test ordering. Automated extraction of clinical information from notes to prepopulate imaging order requisitions may improve communication between ordering physicians and radiologists, enhance the efficiency of the ordering process by reducing redundant data entry, and improve the clinical relevance of clinical decision support at the time of order entry, potentially reducing provider burnout from extraneous alerts.



Background and Significance

Despite the broad proliferation of electronic health records (EHRs), opportunities to optimize health information technology (IT) tools remain. Among eight types of health IT-related sentinel events recently identified by The Joint Commission, 24% were due to workflow and communication issues.[1] Computerized physician order entry (CPOE) contributes to these challenges because physicians often must enter redundant data. For instance, a physician documents the patient assessment and plan in an EHR note; to then request an imaging study, in most commercial EHR implementations the physician must re-enter the same clinical information in the EHR's order entry module. This imaging order requisition consists of free text and/or structured forms that are completed independently and do not automatically populate from data in the note. Even when customized structured forms are presented, workflow limitations often lead providers to select only the minimum number of indications required. Physicians may also enter incomplete or conflicting information, adversely affecting communication to radiologists, interpretation quality, and quality of patient care.[2] [3] Although redundancy may add reliability and safety in collaborative work,[4] reducing redundant data entry is a goal of efforts to improve the usability, efficiency, accuracy, and safety of order entry and clinical decision support (CDS) tools.[5] [6] [7]

Despite this duplicate data entry, 72% of radiologists in one survey reported not receiving enough clinical information about patients, and 87% reported that more information could change their interpretation of a study.[8] However, time and workflow limitations prevent radiologists from consistently searching the EHR for additional information.[8] Emergency radiology is particularly vulnerable to these communication gaps because emergency medicine physicians are pressured to complete orders before documenting the encounter.

Headache is a common complaint in the emergency department (ED),[9] where imaging and the communication of clinical information are critical to identifying life-threatening pathology. However, the Choosing Wisely campaign has identified head computed tomography (CT) in patients with headache as a target for reducing potentially wasteful or unnecessary testing, making it a high-yield scenario for implementing CDS.[10] Although studies have evaluated the lack of information documented in notes prior to imaging examination completion and radiology interpretation,[11] the quantity and quality of information in the order requisition at the time of head CT order entry remain unclear. It is also unknown how often clinical documentation in the EHR contains additional information that could improve image interpretation.

The decision to order an imaging study of the head/brain for headache is more complex than simply ordering a CT scan with a “reason for study” containing words relevant to the patient's history and condition. The clinician must determine what study to order, if any, and whether a contrast agent is needed, based on the suspected diagnoses and the acuity of the problem. In the United States, payer rules for reimbursement may also affect the clinician's decision. Ideally, an expert clinician such as a neurologist would be present to assist clinicians ordering head/brain imaging studies in the ED; pragmatically, this is not feasible. Computerized CDS tools, based on expert clinical opinion or published evidence, may thus improve ordering providers' decision-making. The clinical relevance and usefulness of CDS depend on the accurate capture of clinically relevant coded or “structured” indications in the CPOE module of the EHR. A major challenge of CPOE is the efficient capture of relevant clinical information, which often requires ordering providers to re-enter the same clinical information in provider notes (typically as free text) and in imaging order requisitions (typically in coded form when CDS is implemented). With CT ordering for headache in the ED as the use case, we assessed whether relevant clinical information exists in provider notes that could augment the information in order requisitions.



Objectives

We sought to compare the number of concept unique identifiers (CUIs) relevant to headache extracted from EHR physician notes that were present at the time of image order entry with the number of concepts contained in the CPOE order requisition in ED patients presenting with headache who received a head CT. We hypothesized that a significant amount of clinical data is present in the EHR notes that could potentially be used to augment or prepopulate indications provided in the image order requisition.



Methods

Setting and Population

The requirement to obtain informed consent was waived by the institutional review board for this retrospective observational study, conducted in the ED of a 793-bed, quaternary care, level 1 trauma center teaching hospital with approximately 60,000 annual visits and emergency medicine resident training. We evaluated all encounters for adults presenting to the ED between April 1, 2013 and September 30, 2014 with a chief complaint (CC) of headache who received a head CT.



Cohort Identification and Data Collection

In the ED documentation, physicians separately documented narrative note sections such as the CC, history of present illness (HPI), and initial assessment and plan (A/P). An unusual feature of ED documentation in our EHR was that note sections could be updated and signed independently of one another: a separate timestamp was attached to each version of each note section when it was submitted and when the fully completed section was signed. We extracted note sections of interest and their submission timestamps from eligible patient encounters during the study period, including the CC, HPI, review of systems, past medical history, physical examination, initial A/P, updates to the ED course, and attending notes. Due to limitations of the ED discharge data tool, correctly paired timestamps and note text were available only for the final submission of each note section, obtained by manually reviewing and abstracting these data from patients' notes in the EHR. We therefore conducted our analysis only on note sections whose final version was submitted prior to head CT order entry.
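A minimal sketch of this inclusion step (hypothetical field names; not the authors' actual pipeline): keep only encounters whose final HPI was signed before the head CT order timestamp.

```python
from datetime import datetime

# Hypothetical encounter records; field names are illustrative only.
encounters = [
    {"mrn": "001", "hpi_signed": datetime(2013, 5, 1, 10, 15),
     "ct_ordered": datetime(2013, 5, 1, 11, 0)},
    {"mrn": "002", "hpi_signed": datetime(2013, 5, 2, 14, 30),
     "ct_ordered": datetime(2013, 5, 2, 13, 45)},
]

# Keep only encounters where the final HPI signature precedes order entry,
# mirroring the study's inclusion criterion.
analyzable = [e for e in encounters if e["hpi_signed"] < e["ct_ordered"]]
print(f"{len(analyzable)} of {len(encounters)} encounters analyzable")
```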

We then isolated encounters for patients who presented with a CC of headache who received a head CT. We queried the ED CC fields for the terms “headache,” “head ache,” “head pain,” and “HA,” excluding words containing the letters “ha” (e.g., “change”). We queried the institution's radiology CPOE system (Percipio; Medicalis, San Francisco, California, United States) and radiology information system (RIS) (IDXrad; GE Healthcare, Burlington, Vermont, United States) data warehouse for orders of completed head CT studies placed in the ED where the timestamp of the order was within the ED admission and discharge time. We joined the ED documentation data with the RIS data using the patient medical record number. In the case of multiple head CTs, we used the first for analysis. We extracted study order details including timestamps and indications from the order requisition.
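The CC term query can be illustrated with a word-boundary pattern that matches the abbreviation “HA” as a standalone token while excluding words that merely contain the letters “ha,” such as “change.” This is a sketch of the matching logic, not the exact query used in the study.

```python
import re

# Whole-word, case-insensitive match for the headache terms listed above;
# \b boundaries ensure "HA" matches but substrings like "cHAnge" do not.
HEADACHE_CC = re.compile(r"\b(headache|head ache|head pain|ha)\b", re.IGNORECASE)

for cc in ["HA x 3 days", "mental status change", "Head pain", "haze"]:
    print(f"{cc!r} -> {bool(HEADACHE_CC.search(cc))}")
# 'HA x 3 days' -> True, 'mental status change' -> False,
# 'Head pain' -> True, 'haze' -> False
```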

An open source natural language processing (NLP) tool, Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) version 3.01, which includes YTEX (Yale cTAKES Extensions),[12] [13] was used to mine the note sections for CUIs, including a polarity for each concept, where “1” indicates a positive concept and “−1” a negated concept. For example, in the sentence “the patient has nausea, but no vomiting,” nausea is a positive concept (“1”) and vomiting a negated one (“−1”). cTAKES was customized with ontologies of clinical terms from the latest releases of the SNOMED-CT (Systematized Nomenclature of Medicine—Clinical Terms) vocabulary files, using the National Cancer Institute-supported Knowledge Representation languages' resource description framework (RDF) and process definitions from the MetamorphoSys sub-setting utility,[14] and of radiology terms from RadLex.[15] Custom components were developed to allow cTAKES to take its input (JdbcCollectionReader) from a structured data source (a table in Microsoft SQL [Structured Query Language] Server) and to write its output (CasConsumer) to the YTEX-defined schema. CUIs were extracted from the YTEX schema using a SQL query with multiple joins on the unique batch name of the job, yielding a table in which each row contained the CUI and the ID of the source text. The cTAKES implementation environment is described in [Supplementary Table S1] (available in the online version). We monitored and validated the NLP results for consistency with the requisition indications and for conflicting information within the same note, manually reviewing cases of inconsistency or conflict.
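To make the polarity convention concrete, the sketch below shows a simplified representation of the per-row output described above (note section ID, CUI, polarity). The schema is illustrative, not the actual YTEX tables, though the UMLS CUIs for nausea and vomiting are real codes.

```python
# Simplified stand-in for the NLP output table: one row per extracted
# concept, with polarity 1 (affirmed) or -1 (negated). For the sentence
# "the patient has nausea, but no vomiting":
extracted = [
    {"note_id": "hpi-123", "cui": "C0027497", "term": "nausea", "polarity": 1},
    {"note_id": "hpi-123", "cui": "C0042963", "term": "vomiting", "polarity": -1},
]

for row in extracted:
    sign = "positive" if row["polarity"] == 1 else "negated"
    print(f"{row['cui']} ({row['term']}): {sign}")
```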

Demographic and clinical data including timing of ED admission and discharge were collected from the institution's Research Patient Data Registry.



Concepts Relevant to Headache

We created a list of 115 concepts and corresponding CUIs, based on literature and expert opinion, relevant in determining those at increased risk of intracranial pathology among patients presenting with headaches.[10] [16] [17] [18] [19] [20] [21] [22] [23] [24] [25] The list was reviewed by a panel of physician experts in radiology, internal medicine, emergency medicine, and neurology. We queried the results of the NLP extraction against this concept list to identify the number of concepts relevant to headache present in the note section. The CUI and concept terms are provided in [Supplementary Table S2] (available in the online version).
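Querying the NLP output against the concept list reduces to a set-membership test per extracted CUI. A minimal sketch, with placeholder CUIs standing in for the 115-concept list:

```python
# Placeholder subset of the headache-relevant concept list (Supplementary
# Table S2); real use would load all 115 CUIs.
HEADACHE_CUIS = {"C0027497", "C0042963", "C0030554"}

# All CUIs extracted from one HPI section (illustrative values).
extracted_cuis = ["C0027497", "C0011168", "C0042963"]

relevant = [c for c in extracted_cuis if c in HEADACHE_CUIS]
print(f"{len(relevant)} of {len(extracted_cuis)} extracted CUIs relevant to headache")
```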

Imaging order requisitions in our EHR consisted of a combination of free text and/or structured data, including signs and symptoms, relevant history, and suspected differential diagnoses, customized to the imaging study being ordered. For head CT, there were 72 signs and symptoms and 230 relevant history items or diagnoses to choose from, in addition to the option of free-text entry. The structured entries and free text were manually reviewed for indications relevant to headache, and relevant indications were added to the CUI list if not already present. However, a few indications could not be represented with a CUI, such as “headache, rapidly increasing frequency” or “prior imaging abnormal/normal/nondiagnostic.” In total, 73/115 (63.5%) of the CUIs on the developed headache-relevant list were not represented on the order requisition.
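The manual mapping of requisition indications to CUIs can be represented as a lookup in which unmappable indications are flagged rather than forced into a code; the CUIs below are illustrative examples, not the study's actual mapping.

```python
# Hypothetical indication-to-CUI mapping; None marks indications that,
# as noted above, could not be represented with a CUI.
INDICATION_TO_CUI = {
    "head trauma": "C0018674",
    "loss of consciousness": "C0041657",
    "headache, rapidly increasing frequency": None,
    "prior imaging abnormal/normal/nondiagnostic": None,
}

for indication, cui in INDICATION_TO_CUI.items():
    print(f"{indication} -> {cui if cui else 'no CUI available'}")
```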

A wide variety of signs, symptoms, and elements of relevant history, and combinations thereof, contribute to appropriate indications for obtaining a head CT in the setting of headache, as reflected in the concept list in [Supplementary Table S2] (available in the online version) and in the extensive options for signs, symptoms, and relevant history on our institution's customized head CT order requisition. We therefore deemed it unreasonable to develop a standardized algorithm to classify the appropriateness of head CT; instead, we determined that this task would best be performed by human clinical experts who, primed with a review of the relevant literature, evaluated combinations of individual concepts to infer appropriateness.

A neurologist and an internist reviewed the literature on appropriate indications for obtaining imaging studies for headache in the ED.[16] For each encounter in which the HPI was signed prior to image order entry, they evaluated solely the concepts extracted from the HPI section of the notes and, separately, solely the indications from the order requisition. For each set of concepts or indications, the physicians independently graded head CT appropriateness based on their clinical judgment and the literature reviewed; any differences were reconciled through discussion until consensus was achieved. For this study, we used “appropriateness” as a proxy for the value of the information communicated from the ordering provider to the radiologist. For example, “acute headache” and “nausea” are insufficient to identify an appropriate indication, as these symptoms can be present with a typical migraine; “unilateral paresis” alone, however, is enough to communicate an appropriate indication for head CT.



NLP Performance

A physician researcher reviewed HPI portions of notes of each encounter where the HPI was signed prior to image order entry and identified presence or absence of each of the 115 concepts determined to be relevant to headache. Mentions of concepts identified by manual review were compared with NLP extraction of concepts for each concept. True positives (TPs) were defined as NLP successfully extracting the appropriate polarity of the concept when there was mention of the concept in the notes. In false positives (FPs), NLP erroneously detected a concept as present when there was no mention of that concept in the notes. In false negatives (FNs), NLP failed to extract a concept or extracted the incorrect polarity of a concept when there was mention of the concept in the notes. In true negatives (TNs), NLP did not detect a concept when it was not mentioned in the notes. Precision (TP/[TP + FP]), recall (TP/[TP + FN]), accuracy ([TP + TN]/[TP + TN + FP + FN]), and F-measure (2 ×  precision × recall/[precision + recall]) were calculated for each concept.
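These definitions translate directly into a per-concept metric computation; a minimal sketch with made-up counts:

```python
def concept_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Per-concept NLP performance metrics as defined above."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f_measure = (2 * precision * recall / (precision + recall)
                 if (precision + recall) else 0.0)
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f_measure": f_measure}

# Illustrative counts for a single concept (not study data):
print(concept_metrics(tp=40, fp=1, fn=20, tn=93))
```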



Outcome Measures and Statistical Analyses

The primary outcome was the number of CUIs related to headache extracted from the HPI compared with the number of concepts obtained from the imaging order requisition. Secondary outcome measures included rates of note sections submitted as incomplete versions and signed as completed versions prior to the time of imaging order entry, the total number of positive and negated CUIs extracted from the HPI, comparisons of total and headache-relevant CUIs extracted from HPIs signed before versus after image ordering, and the percentage of head CTs graded as appropriate based on extracted concepts compared with requisition indications.

Data transformation and comparisons of timing (orders vs. documentation) were performed using Microsoft Access 2007 (Microsoft, Redmond, Washington, United States) and R version 3.2.2 software (R Project for Statistical Computing, Vienna, Austria).

Analyses comparing numbers of extracted CUIs to requisition indications, evaluating NLP performance, and comparing percentages of head CTs graded as appropriate were performed using Microsoft Excel 2007 (Microsoft, Redmond, Washington, United States) and JMP Pro v.10 (SAS Institute, Cary, North Carolina, United States). Because the distributions of requisition indications and extracted CUIs were nonparametric, we used the Wilcoxon signed rank test to compare paired samples and the Wilcoxon rank-sum test to compare independent samples. A two-tailed p-value <0.05 was considered statistically significant. We created word clouds (using wordle.net) reflecting the relative frequency of each concept, adjusted by LN(count + 1), and frequency tables of extracted CUIs and requisition indications to depict the concepts found in each.
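The study's statistical comparisons were run in JMP and Excel; for reference, the same tests are available in open-source tools. A sketch using SciPy on made-up counts, with the word-cloud weighting shown for one count:

```python
import math
from scipy.stats import wilcoxon, ranksums

# Paired comparison: per-encounter headache CUIs from the HPI vs.
# indications from the requisition (illustrative counts, not study data).
hpi_counts = [3, 4, 2, 5, 3, 2, 4, 3]
req_counts = [1, 2, 1, 1, 2, 1, 1, 1]
print("Wilcoxon signed rank p =", wilcoxon(hpi_counts, req_counts).pvalue)

# Independent comparison: total CUIs in HPIs signed before vs. after ordering.
before, after = [29, 31, 27, 30], [28, 26, 29, 27]
print("Wilcoxon rank sum p =", ranksums(before, after).pvalue)

# Word-cloud weight used in Figs. 5 and 6: LN(count + 1).
print("weight for a concept seen 10 times:", math.log(10 + 1))
```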



Results

Study Population

We identified all encounters for patients who presented with a CC of headache (2,787 encounters for 2,490 patients) and all encounters in which at least one head CT was performed (6,084 encounters for 5,355 patients) among the 85,916 consecutive encounters during the 18-month study period ([Fig. 1]). There were 666 encounters (626 unique patients) with both a CC of headache and a head CT performed, representing 23.9% (666/2,787) of encounters with a CC of headache. In total, 63.4% of these patients were female, consistent with the female predominance of patients presenting to the ED for headache in previous studies,[26] [27] and the average age was 51.5 years (range: 18–96; standard deviation: 18.2).

Fig. 1 Study design diagram for patient selection.


Note Section Entry and Completion

Rates of note section entry and completion prior to head CT ordering are provided in [Table 1]. Among encounters with a head CT, an initial HPI, A/P, or attending note was submitted prior to imaging study ordering in 39.2% (261/666) of encounters, and at least one of these note sections was completed and signed in 24.6% (164/666). A fully completed and signed HPI was present in 23.1% (154/666) of encounters.

Table 1

Summary of note sections submitted and signed prior to the time of image order entry

| Note section | Entry submitted prior to CT order, N (%) | Signed prior to CT order, N (%) |
|---|---|---|
| CC = headache | 666 | 666 |
| ED chief complaint | 458 (68.8%) | – |
| ED history of present illness (HPI) | 226 (33.9%) | 154 (23.1%) |
| ED initial assessment and plan (A/P) | 70 (10.5%) | 46 (6.9%) |
| ED attending note | 38 (5.7%) | 0 (0%) |
| HPI or A/P or attending note | 261 (39.2%) | 164 (24.6%) |

Abbreviations: CC, chief complaint; CT, computed tomography; ED, emergency department.




Concepts Relevant to Headache

In the 154 encounters with a fully completed and signed HPI, the number of NLP-extracted CUIs specific to headache per HPI was significantly greater than the number of indications per encounter identified in the image order requisition (median: 3 vs. 1; Wilcoxon signed rank p < 0.0001) ([Fig. 2]).

Fig. 2 Box plots of count of concepts extracted from imaging requisition indications compared with extracted concept unique identifiers (CUIs) in history of present illness notes completed and signed prior to order entry. Box edges represent first and third quartiles, center line represents median value, whiskers represent range of counts. Median = 1 for Indications above.

An average of 28.3 total CUIs was extracted per HPI note, of which an average of 3.1 were relevant to headache. There was no significant difference in either the total number of NLP-extracted CUIs or the number of headache-relevant CUIs per encounter between HPI notes completed and signed prior to image ordering and those completed and signed after ordering (total: median 29 vs. 28 [Wilcoxon rank sum p = 0.29]; relevant to headache: median 3 vs. 3 [Wilcoxon rank sum p = 0.07]), as displayed in [Figs. 3] and [4], respectively.

Fig. 3 Box plots comparing total extracted concepts from history of present illness notes completed and signed after and before image order entry. Box edges represent first and third quartiles, center line represents median value, whiskers represent range of counts. CUI, concept unique identifier.
Fig. 4 Box plots comparing extracted concepts relevant to headache from history of present illness notes completed and signed after and before image order entry. Box edges represent first and third quartiles, center line represents median value, whiskers represent range of counts. CUI, concept unique identifier.


NLP Performance

In total, 75 of the 115 concepts identified as relevant to headache were mentioned in at least one of the 154 HPI note sections signed prior to image ordering. Precision, recall, accuracy, and F-measure are listed for each concept in [Supplementary Table S3] (available in the online version). Across all concepts extracted via NLP in at least one encounter, precision = 0.99, recall = 0.67, and accuracy = 0.97, with an F-measure of 0.80. Across all concepts, including those not mentioned in notes or not extracted via NLP, precision = 0.99, recall = 0.53, and accuracy = 0.98, with an F-measure of 0.69. In addition, for seven concepts, NLP correctly extracted a concept from a note that manual review had incorrectly judged to contain no mention of it. There were also 25 cases in which the reviewer initially judged the NLP extraction to be incorrect, either in marking the concept as present or in assigning its polarity, but further review found the extraction to be correct.



Differences in Concepts between Sources

All NLP results were reviewed manually for consistency with the notes and with the image order requisition. In 143/154 (92.9%) encounters with a completed and signed HPI prior to image ordering, the extracted CUIs provided new concepts not present in the imaging requisition indications, with an average of 3.03 new concepts (1.03 positive, 2.01 negated) per encounter. In 27/154 (17.5%) encounters, at least one extracted concept was also present in the requisition indications, and in 14/154 (9.1%) encounters there were conflicting concepts, either among the extracted CUIs within the same note (11/14; 78.6%) or between the extracted CUIs and the requisition indications (3/14; 21.4%). Most conflicts arose from positive and negated mentions of related concepts within the note narrative (e.g., a patient reporting a current headache but no prior headaches). The few conflicts between indications and extracted concepts stemmed from rare artifacts: NLP failing to recognize an abbreviation that would have cancelled a negation word (e.g., “patient with no medical problems p/w [presents with] fever”), the indications input allowing only “nausea and vomiting” to be selected when the note stated the patient had nausea but no vomiting, and, in one case, an indication erroneously selected when the note specified that the patient did not have the symptom. The differences in concepts found between the extracted CUIs and the requisition indications are depicted in [Figs. 5] and [6], respectively. Extracted concepts have an additional dimension in that they may be positive (blue) or negated (red), whereas the requisition indications are limited to concepts that are present; however, the requisition concept “normal neurologic examination” serves as a proxy for negated findings.
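The consistency checks described above amount to comparing polarities per CUI. A sketch (illustrative CUIs; C0018681 is the UMLS code for headache) flagging both kinds of conflict:

```python
from collections import defaultdict

# Mentions extracted from one note: (CUI, polarity) pairs.
note_mentions = [("C0018681", 1), ("C0018681", -1), ("C0042963", -1)]
# CUIs selected as indications on the order requisition.
requisition_cuis = {"C0042963"}

by_cui = defaultdict(set)
for cui, polarity in note_mentions:
    by_cui[cui].add(polarity)

# Conflict type 1: a CUI asserted and negated within the same note.
intra_note = [c for c, pols in by_cui.items() if {1, -1} <= pols]
# Conflict type 2: a requisition indication negated in the note.
note_vs_req = [c for c in requisition_cuis if by_cui.get(c) == {-1}]
print("conflicts within note:", intra_note)
print("note vs. requisition:", note_vs_req)
```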

Fig. 5 Word cloud of extracted concepts from history of present illness notes completed and signed before image order entry. Blue: positive concepts; red: negated concepts; size: relative frequency of concept adjusted by LN(count + 1). Frequency counts listed in [Supplementary Table S4] (available in the online version).
Fig. 6 Word cloud of indications from image order requisitions. Size: relative frequency of concept adjusted by LN(count +1). Frequency counts listed in [Supplementary Table S5] (available in the online version).

To evaluate the clinical significance of these findings, we used the physician reviewers' consensus grading, considering all concepts or all indications present in an encounter, to determine whether a head CT was appropriate. The head CT was graded as appropriate based on extracted CUIs in 84/154 (54.5%) encounters, compared with 87/154 (56.5%) based on the imaging requisition indications (McNemar's test p = 0.73). Despite no statistical difference in the overall rate of appropriateness, in 35/154 (22.7%) encounters the CUIs extracted from notes added value: the indications alone were insufficient to grade the head CT as appropriate, but the extracted CUIs showed that it was. Combining these two sources of information would have improved the rate of appropriateness from 56.5% (87/154) to 79.2% (122/154). The appropriateness of encounters determined via indications or extracted CUIs, with their combined value, is represented in [Fig. 7]. For example, one order requisition had a single indication, “acute,” whereas the extracted concepts revealed that the patient had an infectious disease disorder, associated neck pain, numbness, and vomiting. In another case, where the requisition was sufficient to show that a head CT was appropriate but the extracted concepts added new information that could have improved interpretation, the requisition listed indications of “head trauma, loss of consciousness, and acute”; the only relevant extracted concept, “warfarin sodium (+1),” would have been very helpful for the radiologist to know.
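The combined rate in [Fig. 7] follows from a per-encounter logical OR of the two appropriateness judgments; a sketch on illustrative booleans:

```python
# Per-encounter appropriateness judgments (illustrative, not study data).
by_indications = [True, False, True, False]
by_cuis = [True, True, False, False]

combined = [a or b for a, b in zip(by_indications, by_cuis)]
added = sum(1 for a, b in zip(by_indications, by_cuis) if b and not a)
print(f"appropriate by indications: {sum(by_indications)}/{len(combined)}")
print(f"added by extracted CUIs alone: {added}")
print(f"combined: {sum(combined)}/{len(combined)}")
```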

Fig. 7 Number and rate of adjudicated appropriateness of encounters determined via concepts obtained from imaging ordering requisition indications or via natural language processing (NLP)-extracted concept unique identifiers (CUIs) from the history of present illness (HPI) section of emergency department (ED) notes with their combined value. The box with the dotted border is the added value of including extracted CUIs from the notes in addition to indications alone.


Discussion

In this study, we identified unique, relevant clinical information in unstructured EHR physician notes, submitted or finalized by the time of image ordering, that, if harvested in real time, could augment communication from the ordering physician to the radiologist at this critical patient handoff while reducing errors from incomplete or incorrect order requisitions in the EHR. HPIs signed prior to image ordering contained significantly more relevant, unique concepts than were provided in the image order requisition.

Regarding the complex decision to order an imaging study of the head/brain for headache, there is a large and growing body of CDS rules and appropriateness criteria for multiple imaging modalities, including appropriateness ratings for imaging studies for 16 clinical variants of headache[28] [29] and recommendations for appropriate head imaging studies for multiple conditions.[30] Some have a validated impact on clinicians' ordering behavior,[29] [30] including for mild traumatic brain injury.[31] [32] However, all rely on the input of structured indications, placing the burden on ordering physicians to provide these additional, redundant data when interacting with these CDS tools. All of these examples could benefit from harvesting data from unstructured notes to prepopulate their required structured, coded indications. We have demonstrated that a significant amount of useful and unique information is available in the EHR at the point of order entry that may be combined with other structured EHR data (e.g., medications and laboratory values) to augment order requisitions and improve communication between the ordering physician and the radiologist, without additional burden on the ordering physician.

A significantly greater number of CUIs specific to headache were extracted from HPIs present prior to image ordering than the number of indications per encounter identified in the order requisition. The low number of indications per requisition is consistent with previous research showing that image order requisitions provide inadequate communication from the orderer to the radiologist.[8] Radiologists are responsible for reviewing the imaging order requisition to specify the examination protocol and inform interpretation. In some practices, radiologists can access the clinical notes in the EHR, but even then they do so in only a minority of cases due to time constraints.[8] We are not aware of any lawsuits arising from radiologists not accessing or reviewing the entirety of the medical record, nor of any regulations requiring radiologists to access parts of the EHR beyond the order requisition. Although previous work has used NLP in real-time clinical scenarios, such as using radiology reports as part of a pneumonia screening tool,[33] our study describes a novel approach of using NLP to identify discrete data in clinical notes present at the time of the decision to order an imaging study.

The CUIs from the notes provided new relevant information beyond the requisition in 92.9% of encounters. There was no difference in the rates of adjudicated appropriateness of head CT based on the concepts from the notes versus the requisitions, but in the 22.7% of cases where the CT was not graded appropriate based on the requisition indications alone, the concepts in the HPI justified its appropriateness; combining the two sources would improve the rate of appropriateness from 56.5 to 79.2% ([Fig. 7]). We used the graded appropriateness of the study based on the concepts from the two sources as a proxy for the usefulness of those data. However, we expect that the uniqueness of the data also contributes to usefulness, as it will likely lead to improved radiology interpretation, particularly for examinations whose indications are even more heterogeneous than those of head CT, such as abdominal CT with its complex combinations of indications spanning multiple organ systems.

The NLP performance—high precision at the expense of recall—gives assurance that when the NLP tool detected a concept in a note, the concept was reliably present and of the correct polarity. This is an acceptable tradeoff given that our focus is to identify enough data to support action, and to surface useful data that are currently neglected, as an incremental step toward enhancing the imaging order entry process. We have also illustrated imperfections in NLP-extracted data, with conflicting extracted concepts within notes arising from the nature of the narrative. There are likewise imperfections in data entered in the order requisition, with rare conflicts between the indications and the extracted concepts; these are likely unintentional and potentially due to the fatigue of redundant data entry and to clicking on indications rather than entering them manually.

Factors such as context-specific physician workflow, patient needs, and access to workstations limit the amount of data documented in the EHR by the time of image order entry, particularly in the ED. Although clinicians likely prefer to document as near to the encounter as possible, that goal is often unattainable. Although we showed no statistically significant differences in headache-relevant or overall CUIs between HPI sections signed before versus after image ordering, unmeasured differences between these scenarios may remain. It is nonetheless encouraging that in 23.1% of cases, data were available at the time of order entry, and the significant amount of useful data present in the EHR at the time of imaging order entry suggests that favorable rates could be found at other institutions. Additionally, as documentation becomes more mobile and moves closer to the point of care, including through the use of medical scribes and voice recognition, we expect useful data to be increasingly available in the EHR prior to image ordering. Moreover, if processes such as ordering imaging studies are linked to the documentation workflow and are made easier by reusing already documented data, this may encourage more, and earlier, documentation by providers or their proxies.

Our findings may have important implications in the context of new federal regulations: implementing sections of the U.S. Protecting Access to Medicare Act (PAMA) of 2014 to promote evidence-based care will require ordering providers to be exposed to specific appropriate use criteria through certified CDS mechanisms for certain high-cost ambulatory imaging services (including in the ED) as a requirement for payment.[34] For example, extracting relevant CUIs from EHR notes may reduce the ordering provider's burden of re-entering needed clinical information or diminish the number of required CDS interactions. Celi et al describe a future-state optimal data system[35] in which EHR data, including clinician documentation, are fully integrated and provide real-time “bidirectional data streams” to inform downstream processes such as image ordering or CDS. Demner-Fushman et al describe applications in which NLP can drive CDS when integrated with the EHR.[36] This study takes a step toward that goal by recognizing that there is valuable, underutilized information in the living document of a clinical encounter. Further work is needed to integrate and automate the extraction of these data to populate the order requisition and enhance the ordering process. These efforts, if successful, would free the ordering physician from entering redundant data likely already present in the EHR and reduce errors from incomplete or incorrect data entry, which become more frequent as structured data entry forms proliferate.[37] They may also help reduce the physician burnout that many attribute in part to the volume of data entry and interactions with the EHR.[38] [39] [40] As the socio-technical environment progresses and the ability to document clinical notes moves closer to the bedside, we expect more data to become available to inform and drive downstream processes in patient care, enhancing the meaningful use and efficiency of the EHR.

Our study has limitations. It was conducted in a single academic setting, making generalizability unclear. Physician practice likely varies considerably in the timing of note documentation, also affecting generalizability. Clinical documentation is likely clustered by individual clinician practice, but because we could not link authors to note sections, and because a relatively small number of physicians were involved and multiple physicians edit individual notes, we did not include clustering in our analysis. Our study examined only encounters for patients with a CC of headache, making generalizability to other complaints unclear; however, rates of HPIs with data submitted prior to image order entry were similar for all (nonheadache) encounters during the study period ([Table 2]) and for the five most frequent CCs among patients for whom head CTs were obtained. Although clinical notes may contain several potential reasons for conducting a study that are not specific to headache, we filtered for CUIs relevant to the CC of headache. We did not account for conversations between ordering physicians and radiologists, which may occur in difficult cases, because there is no consistent means of documenting these conversations in most practices.

Table 2

History of present illness (HPI) note sections with documentation submitted prior to image order entry for most frequent chief complaints (CCs) in patients for whom head CTs were ordered

| CC | HPI submitted prior to order entry, N | Total, N | Percent |
|---|---|---|---|
| Total | 1,693 | 6,084 | 27.8% |
| CC = fall | 183 | 888 | 20.6% |
| CC = headache | 226 | 666 | 33.9% |
| CC = seizure | 75 | 275 | 27.3% |
| CC = dizzy | 95 | 265 | 35.8% |
| CC = syncope | 65 | 216 | 30.1% |

Future work is needed to evaluate whether retrieval of clinical concepts from free-form physician notes can inform CDS, and to assess the impact on the decision to order an imaging study of the head/brain for headache. Clinical documentation, whether completed and signed or not, is by nature an imperfect representation of the true state of a patient's health, falling short of capturing all pertinent positives and negatives relevant to the CC. These imperfect data and the proposed methods are not intended to replace order requisitions or CDS; rather, they may augment and potentially autopopulate the order requisition, enhancing the clinical relevance of CDS tools. Future work is also needed to improve the performance, specifically the recall, of the condition-specific NLP. In future work, we must remain cognizant of the potential unintended consequences of integrating other data sources, maintain the intention of reducing physician documentation burden, and avoid adding a requirement to review NLP-extracted concepts or to document notes prior to ordering an imaging study.



Conclusion

Clinician documentation in the EHR is a valuable source of information present in a significant percentage of encounters at the point of image ordering. If leveraged in a timely, automated fashion, it could improve communication among members of the health care team when an imaging study is ordered, improving quality of care and efficiency.



Clinical Relevance Statement

CPOE in the EHR produces suboptimal communication between ordering providers and radiologists and can contribute to alert fatigue and ultimately physician burnout. The results of this study show that clinical notes contain relevant data that may be used to populate imaging order requisitions, lessening the clerical data entry burden on ordering providers while improving communication between ordering providers and radiologists.



Multiple Choice Questions

  1. Imaging order requisition forms rely mostly on what resource to communicate clinical data about the reason for ordering an imaging study?

    a. The problem list in the EHR.

    b. Clinical notes in the EHR.

    c. The ordering provider.

    d. Structured laboratory and diagnostic study results.

    Correct Answer: The correct answer is option c, the ordering provider. In most commercial EHR implementations, the ordering provider must enter redundant clinical information in the EHR's order entry module. This imaging order requisition consists of free text and/or structured forms that are entered independently and do not automatically populate from data in the note, the problem list, or laboratory or diagnostic results.

  2. When attempting to make use of data in clinical notes in the EHR, what informatics tool would be the most effective to use?

    a. Optical character recognition.

    b. Natural language processing.

    c. Health information exchange.

    d. Clinical decision support.

    Correct Answer: The correct answer is option b, natural language processing (NLP). NLP would be the optimal tool to extract structured data from unstructured notes. Optical character recognition may be helpful in converting scanned handwritten or typed notes into machine-readable text, but then one would still need to use NLP to extract structured data from the text. Clinical decision support tools have been used in lieu of clinical notes but require additional data entry by the ordering provider, and health information exchange is relevant for sharing clinical data across institutions.



Conflict of Interest

None declared.

Protection of Human and Animal Subjects

This study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects, and was reviewed by Brigham and Women's Hospital Institutional Review Board protocol 2015P002169.


Supplementary Material

References

  • 1 Sentinel Event Alert. 2015. Available at: www.jointcommission.org . Accessed February 11, 2016
  • 2 Obara P, Sevenster M, Travis A, Qian Y, Westin C, Chang PJ. Evaluating the referring physician's clinical history and indication as a means for communicating chronic conditions that are pertinent at the point of radiologic interpretation. J Digit Imaging 2015; 28 (03) 272-282
  • 3 Leslie A, Jones AJ, Goddard PR. The influence of clinical information on the reporting of CT by radiologists. Br J Radiol 2000; 73 (874) 1052-1055
  • 4 Munkvold G, Ellingsen G, Koksvik H. Formalising work - reallocating redundancy. Paper presented at: Proceedings of the ACM 2006 conference on CSCW. Banff, Alberta, Canada; 2006:59–68. Available at: http://www.idi.ntnu.no/grupper/su/publ/glenn/4_2006_CSCW.pdf . Accessed December 5, 2016
  • 5 Gupta A, Raja AS, Khorasani R. Examining clinical decision support integrity: is clinician self-reported data entry accurate?. J Am Med Inform Assoc 2014; 21 (01) 23-26
  • 6 Bates DW, Kuperman GJ, Wang S. , et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc 2003; 10 (06) 523-530
  • 7 Khorasani R, Hentel K, Darer J. , et al. Ten commandments for effective clinical decision support for imaging: enabling evidence-based practice to improve quality and reduce waste. Am J Roentgenol 2014; 203 (05) 945-951
  • 8 Boonn WW, Langlotz CP. Radiologist use of and perceived need for patient data access. J Digit Imaging 2009; 22 (04) 357-362
  • 9 Edlow JA, Panagos PD, Godwin SA, Thomas TL, Decker WW. ; American College of Emergency Physicians. Clinical policy: critical issues in the evaluation and management of adult patients presenting to the emergency department with acute headache. Ann Emerg Med 2008; 52 (04) 407-436
  • 10 Choosing Wisely: An Initiative of the ABIM Foundation. Available at: http://choosingwisely.org . Accessed February 11, 2016
  • 11 Hanna TN, Rohatgi S, Shekhani HN, Dave IA, Johnson JO. Clinical information available during emergency department imaging order entry and radiologist interpretation. Emerg Radiol 2017; 24 (04) 361-367
  • 12 Garla VN, Brandt C. Ontology-guided feature engineering for clinical text classification. J Biomed Inform 2012; 45 (05) 992-998
  • 13 Garla V, Lo Re III V, Dorey-Stein Z. , et al. The Yale cTAKES extensions for document classification: architecture and application. J Am Med Inform Assoc 2011; 18 (05) 614-620
  • 14 MetamorphoSys Help. 2009. Available at: https://www.nlm.nih.gov/research/umls/implementation_resources/metamorphosys/help.html . Accessed February 8, 2016
  • 15 National Cancer Informatics Program. Terminology. Available at: https://cbiit.nci.nih.gov/ncip/biomedical-informatics-resources/interoperability-and-semantics/terminology . Accessed February 4, 2016
  • 16 Cutrer FM. Evaluation of the adult with nontraumatic headache in the emergency department. UpToDate, Post TW (Ed). Available at: https://www.uptodate.com/contents/evaluation-of-the-adult-with-nontraumatic-headache-in-the-emergency-department . Accessed February 4, 2019
  • 17 Rothman RE, Keyl PM, McArthur JC, Beauchamp Jr NJ, Danyluk T, Kelen GD. A decision guideline for emergency department utilization of noncontrast head computed tomography in HIV-infected patients. Acad Emerg Med 1999; 6 (10) 1010-1019
  • 18 Edlow JA, Caplan LR. Avoiding pitfalls in the diagnosis of subarachnoid hemorrhage. N Engl J Med 2000; 342 (01) 29-36
  • 19 Friedman BW, Lipton RB. Headache emergencies: diagnosis and management. Neurol Clin 2012; 30 (01) 43-59 , vii
  • 20 Swadron SP. Pitfalls in the management of headache in the emergency department. Emerg Med Clin North Am 2010; 28 (01) 127-147
  • 21 Practice parameter: the utility of neuroimaging in the evaluation of headache in patients with normal neurologic examinations (summary statement). Report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology 1994; 44 (07) 1353-1354
  • 22 Ramirez-Lassepas M, Espinosa CE, Cicero JJ, Johnston KL, Cipolle RJ, Barber DL. Predictors of intracranial pathologic findings in patients who seek emergency care because of headache. Arch Neurol 1997; 54 (12) 1506-1509
  • 23 Locker TE, Thompson C, Rylance J, Mason SM. The utility of clinical features in patients presenting with nontraumatic headache: an investigation of adult patients attending an emergency department. Headache 2006; 46 (06) 954-961
  • 24 Perry JJ, Stiell IG, Sivilotti MLA. , et al. High risk clinical characteristics for subarachnoid haemorrhage in patients with acute headache: prospective cohort study. BMJ 2010; 341: c5204-c5204
  • 25 Perry JJ, Stiell IG, Sivilotti MLA. , et al. Clinical decision rules to rule out subarachnoid hemorrhage for acute headache. JAMA 2013; 310 (12) 1248-1255
  • 26 Morgenstern LB, Huber JC, Luna-Gonzales H. , et al. Headache in the emergency department. Headache 2001; 41 (06) 537-541
  • 27 Goldstein JN, Camargo Jr CA, Pelletier AJ, Edlow JA. Headache in United States emergency departments: demographics, work-up and frequency of pathological diagnoses. Cephalalgia 2006; 26 (06) 684-690
  • 28 Douglas AC, Franz WJ, Broderick DF. , et al. American College of Radiology ACR Appropriateness Criteria®. 1996. Available at: https://acsearch.acr.org/docs/69482/Narrative/ . Accessed October 26, 2018
  • 29 Huber TC, Krishnaraj A, Patrie J, Gaskin CM. Impact of a commercially available clinical decision support program on provider ordering habits. J Am Coll Radiol 2018; 15 (07) 951-957
  • 30 Sanders DL, Miller RA. The effects on clinician ordering patterns of a computerized decision support system for neuroradiology imaging studies. Proc AMIA Symp 2001; 583-587
  • 31 Gupta A, Ip IK, Raja AS, Andruchow JE, Sodickson A, Khorasani R. Effect of clinical decision support on documented guideline adherence for head CT in emergency department patients with mild traumatic brain injury. J Am Med Inform Assoc 2014; 21 (e2): e347-e351
  • 32 Ip IK, Raja AS, Gupta A, Andruchow J, Sodickson A, Khorasani R. Impact of clinical decision support on head computed tomography use in patients with mild traumatic brain injury in the ED. Am J Emerg Med 2015; 33 (03) 320-325
  • 33 Dean NC, Jones BE, Ferraro JP, Vines CG, Haug PJ. Performance and utilization of an emergency department electronic screening tool for pneumonia. JAMA Intern Med 2013; 173 (08) 699-701
  • 34 Protecting Access to Medicare Act of. 2014 (PL 113–93) Section 218(b). 2014. Available at: https://www.congress.gov/113/plaws/publ93/PLAW-113publ93.pdf . Accessed May 2, 2016
  • 35 Celi LA, Marshall JD, Lai Y, Stone DJ. Disrupting electronic health records systems: the next generation. JMIR Med Inform 2015; 3 (04) e34
  • 36 Demner-Fushman D, Chapman WW, McDonald CJ. What can natural language processing do for clinical decision support?. J Biomed Inform 2009; 42 (05) 760-772
  • 37 Kummer BR, Lerario MP, Navi BB. , et al. Clinical information systems integration in New York City's first mobile stroke unit. Appl Clin Inform 2018; 9 (01) 89-98
  • 38 Tai-Seale M, Olson CW, Li J. , et al. Electronic health record logs indicate that physicians split time evenly between seeing patients and desktop medicine. Health Aff (Millwood) 2017; 36 (04) 655-662
  • 39 Zulman DM, Shah NH, Verghese A. Evolutionary pressures on the electronic health record: caring for complexity. JAMA 2016; 316 (09) 923-924
  • 40 Babbott S, Manwell LB, Brown R. , et al. Electronic medical records and physician stress in primary care: results from the MEMO Study. J Am Med Inform Assoc 2014; 21 (e1): e100-e106


