Appl Clin Inform 2018; 09(01): 185-198
DOI: 10.1055/s-0038-1636508
Research Article
Schattauer GmbH Stuttgart

Validation and Refinement of a Pain Information Model from EHR Flowsheet Data

Bonnie L. Westra, Steven G. Johnson, Samira Ali, Karen M. Bavuso, Christopher A. Cruz, Sarah Collins, Meg Furukawa, Mary L. Hook, Anne LaFlamme, Kay Lytle, Lisiane Pruinelli, Tari Rajchel, Theresa Tess Settergren, Kathryn F. Westman, Luann Whittenburg

Address for correspondence

Bonnie L. Westra, PhD, RN, FAAN, FACMI
School of Nursing, University of Minnesota
308 Harvard Street SE, Minneapolis, MN 55455
United States   

Publication History

Received: 14 September 2017

Accepted: 15 January 2018

Publication Date: 14 March 2018 (online)

Abstract

Background Secondary use of electronic health record (EHR) data can reduce the costs of research and quality reporting. However, EHR data must be consistent within and across organizations. Flowsheet data provide a rich source of interprofessional data and represent a high volume of documentation; however, the content is not standardized. Health care organizations design and implement customized content for different care areas, creating duplicative, noncomparable data. In a prior study, 10 information models (IMs) were derived from an EHR that included 2.4 million patients. There was a need to evaluate the generalizability of the models across organizations. The pain IM was selected for evaluation and refinement because pain is a commonly occurring problem associated with high costs for pain management.

Objective The purpose of our study was to validate and further refine a pain IM from EHR flowsheet data that standardizes pain concepts, definitions, and associated value sets for assessments, goals, interventions, and outcomes.

Methods A retrospective observational study was conducted using an iterative consensus-based approach to map, analyze, and evaluate data from 10 organizations.

Results The aggregated metadata from the EHRs of 8 large health care organizations, together with the design builds of 2 additional organizations, represented flowsheet data from 6.6 million patients, 27 million encounters, and 683 million observations. The final pain IM has 30 concepts, 4 panels (classes), and 396 value set items. The results build on Logical Observation Identifiers Names and Codes (LOINC) pain assessment terms and identify additional terms needed to support interoperability.

Conclusion The resulting pain IM is a consensus model based on actual EHR documentation in the participating health systems. The IM captures the most important concepts related to pain.



Background and Significance

The widespread implementation of electronic health records (EHRs) provides health care organizations the opportunity to capture, use, and share data for evaluation, benchmarking, quality improvement, and research to improve the effectiveness, efficiency, and outcomes of patient care. Secondary use and sharing, however, require data to be represented using recognized terminologies and descriptors that are consistent, understood, and effectively formatted for comparison. These requirements suggest that concepts must be standardized, formally modeled, and mapped into the EHR for optimal use. An “information model” (IM) is an organized structure to represent knowledge about a clinical condition or concept, including data elements, their relationships, and the applicable data standards, independent of implementation in EHRs.[1] IMs can be mapped to EHR data to identify semantic similarities[2] and, more importantly, to enable researchers to understand and normalize differences when they occur to improve data sharing.
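For illustration only, the structural pieces named in this definition (panels, concepts, definitions, terminology bindings, and value sets) could be represented relationally. The sketch below uses hypothetical table and column names; it is not a schema from this study or from any EHR vendor.

```sql
-- A minimal, hypothetical relational sketch of an information model:
-- panels group concepts; each concept has a definition, an optional
-- terminology binding, and a value set. Not a schema from this study.
CREATE TABLE im_panel (
    panel_id   INTEGER PRIMARY KEY,
    panel_name TEXT NOT NULL              -- e.g., 'Pain Scale Panel'
);

CREATE TABLE im_concept (
    concept_id   INTEGER PRIMARY KEY,
    panel_id     INTEGER REFERENCES im_panel (panel_id),
    concept_name TEXT NOT NULL,           -- e.g., 'Pain Duration'
    definition   TEXT,
    loinc_code   TEXT                     -- standard terminology binding, if any
);

CREATE TABLE im_value_set_item (
    concept_id INTEGER REFERENCES im_concept (concept_id),
    item_text  TEXT NOT NULL              -- one allowed answer for the concept
);
```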

The American Recovery and Reinvestment Act of 2009 (ARRA) provided incentives for creating a national health care technology infrastructure and accelerating the adoption and meaningful use of enterprise-wide, vendor-based EHR systems with a key focus on physician-based data capture. Vendors provide generic content and guide organizations in using consensus-based approaches to configure the bulk of their system to meet the clinical requirements of the organization. Much of the documentation, however, is captured in flowsheet format, using nonstandardized semistructured data in a matrix format for patient assessments, goals, problems, interventions, and outcomes of care. Limited resources and rapid deployment timelines provide little time for organizations to identify and adopt standardized terminologies and use IMs to design flowsheets for future data sharing. Further, informaticians are required to choose from multiple terminologies[3] [4] with limited reference standards to guide flowsheet builds. These conditions allow organizations to continue to design and implement customized content, creating flowsheet rows (unique identifiers [IDs]) for different care areas, e.g., intensive care units, emergency departments, or medical–surgical units,[5] with varied choice options for documentation in flowsheet rows with the same or similar names. As organizations move beyond deployment, it is time to reevaluate the extensive clinical information captured in flowsheets and consider how to optimize and manage data better in the future. Furthermore, analyzing existing content may inform the development of standardized terminologies and IMs for representing the essential nursing and interprofessional assessments and interventions needed to achieve the best patient outcomes.

Nursing informatics leaders have successfully utilized methods for developing generalizable domain-specific IMs based on documentation artifacts captured in the EHR.[6] [7] These investigators used consensus-based, data-driven methods for analyzing EHR data elements embedded by large multisite health care systems to develop a skin inspection and pressure ulcer IM for standardizing and coding concepts. Both groups identified that existing EHR systems contain heterogeneous data with limited interoperability. They recommended ongoing efforts to create common IMs based on best evidence, clinical expertise, and standardized terminology beyond skin and pressure ulcer prevention.

Similarly, Westra et al[5] utilized EHR data to develop a Reference Information Model for the concept of pain. These researchers selected the concept of pain because it is a commonly occurring problem, assessed and managed by all professional nurses and those who specialize in pain management.[8] About 126 million adults in the United States, more than half (56%), reported some level of pain within a 3-month period.[9] The estimated total national economic cost (direct and indirect) attributed to pain in 2010 ranged from $560 to $635 billion.[10] The concept of pain remains an important aspect of hospital-based patient care, with additional regulatory focus on conducting pain assessments consistent with age, condition, and ability to understand, and an increased focus on patient involvement and the effective use of nonpharmacological interventions.[11] The Pain Reference IM was developed by extracting the metadata from a clinical data repository (CDR) of one large integrated health care system representing over 2.4 million patients. Validation of the Pain Reference IM with other health care organizations was needed to increase the generalizability of the model.



Objective

The purpose of our study was to validate and refine a Pain Reference IM from EHR flowsheet data that standardizes pain concepts, definitions, and associated value sets for assessments, goals, interventions, and outcomes.



Methods

This study is a retrospective observational study using an iterative consensus-based approach to map, analyze, and evaluate EHR pain data across several organizations to validate and refine the Pain Reference IM.[5] A convenience sample of nursing informatics researchers who were active in the Nursing Knowledge Big Data Science Initiative[12] was invited to represent their organizations as participants in the study. One researcher was a pain management specialist; others consulted pain experts in their organizations or pain resources (i.e., pain society guidelines or studies). The researchers represented medium- to large-sized multihospital health care systems, with the majority of the group using the Epic EHR (see [Table 1]). Of the 10 participating organizations, 8 shared metadata for mapping their EHRs to the Pain Reference IM, and 2 additional organizations, which were just going live, shared how they built their systems. The shared metadata included all flowsheet data, but only general pain concepts from inpatient and outpatient settings, including the emergency department, were analyzed for this project. Specialized cardiac/chest pain assessments were excluded because the focus was on general pain.

Table 1

Data source for validation of the pain information model

| Organization | Organization type | Data source | Number of beds | Dates represented by data |
| --- | --- | --- | --- | --- |
| Allina Health | Hospitals, medical centers, clinics, rehabilitation, hospice, homecare, retail pharmacy | 13 hospitals, 90+ clinics | 1,775 | 2005–2016 |
| Aurora Health Care | Private, not-for-profit, integrated health care system with 16 hospitals including behavioral health, rehab, and hospice | 1 hospital; quaternary medical center | 710 | CY 2016 |
| Bumrungrad International Hospital[a] | Hospital | 1 hospital | 580 | 2013–2016 |
| Cedars Sinai | Academic medical center and health system | 1 hospital, 40 clinics | 886 | 2009–2016 |
| Duke University Health System | Health system | 3 hospitals, 400 clinics | 1,512 | 2012–2016 |
| Fairview Health Services | Hospitals, academic health center, clinics, senior housing, retail pharmacy | 7 hospitals, 40+ clinics | 2,530 | 2011–2016 |
| Kaiser Permanente | Health system, hospitals, academic hospitals (graduate medical education), clinics, ambulatory care centers, acute rehab, inpatient psychiatry | Northern California region only: 21 hospitals, 233 medical office buildings, 203 ambulatory care centers | 3,922 | 2005–2016 |
| North Memorial Medical Center | Hospitals, specialty and primary care clinics, home care, medical transportation | 2 hospitals | 355 | 2016 |
| Partners Healthcare[a] | Integrated health system | 9 hospitals, many clinics | 2,825 | 2016 |
| UCLA Health | Health system | 4 hospitals | 861 | 2013–2016 |

Abbreviations: CY, calendar year; EHR, electronic health record; UCLA, University of California, Los Angeles.

a Organizations that provided information about their EHR build only.


Organizations were asked to extract metadata about the flowsheet documentation contained in their EHR. The metadata consisted of a unique identifier for each flowsheet data row representing assessments, interventions, goals, or outcomes; the internal description and the name used to display the flowsheet row; the name of the template (data entry screen) that was used to collect the data (and the grouping of pain concepts within the screen); the number of observations, encounters, and patients; and the dates of first and last use. This metadata represented actual documentation by clinicians at each organization. [Fig. 1] shows an example of how the pain flowsheet data are documented and their relationship to the metadata. Within each organization, the EHR data were transferred to the organization's Clarity relational database. A Structured Query Language (SQL) script was developed that allowed each of the organizations to extract the metadata in exactly the same manner. Depending on each organization's resources for data extraction, there was variation in the time frames selected and in the specific hospitals or practices included in the metadata extractions.

Fig. 1 Example of documenting pain on flowsheets. The orange template is a screen view that shows the Adult Assessment which includes multiple groups of related questions shown in light green on the left. The group called “Pain” shows examples of specific questions/flowsheet measures displayed to the clinician. The clinician selects answers from the value sets with actual documentation shown in blue for documentation that occurred at specific dates/times.
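The extraction step can be pictured as a single aggregate query. The sketch below assumes a simplified, hypothetical Clarity-like schema (all table and column names are invented for illustration); the study's actual SQL script is not publicly available.

```sql
-- Sketch of a per-row metadata extraction against a simplified,
-- hypothetical Clarity-like schema. One output row per flowsheet row,
-- with usage counts and dates of first and last documentation.
SELECT
    fm.row_id,                                    -- unique flowsheet row ID
    fm.internal_name,                             -- internal description
    fm.display_name,                              -- name shown to clinicians
    tp.template_name,                             -- data entry screen
    gp.group_name,                                -- concept grouping within the screen
    COUNT(*)                        AS n_observations,
    COUNT(DISTINCT ob.encounter_id) AS n_encounters,
    COUNT(DISTINCT ob.patient_id)   AS n_patients,
    MIN(ob.recorded_time)           AS first_use,
    MAX(ob.recorded_time)           AS last_use
FROM flowsheet_observation ob
JOIN flowsheet_row      fm ON fm.row_id      = ob.row_id
JOIN flowsheet_group    gp ON gp.group_id    = fm.group_id
JOIN flowsheet_template tp ON tp.template_id = gp.template_id
GROUP BY fm.row_id, fm.internal_name, fm.display_name,
         tp.template_name, gp.group_name;
```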

Each organization next mapped its metadata to the concepts in the Pain Reference IM. This was accomplished using software (FloMap) that allowed the metadata to be imported from each organization. FloMap was developed by one of the researchers (S.J.) and is not currently publicly available. A researcher from each organization used FloMap to search for pain-related flowsheet rows in the organization's metadata and map them to the appropriate concept in the Pain Reference IM. FloMap allows sophisticated searching using Boolean logic, making it easy to find local data that match the pain concepts. Flowsheet rows related to exclusion criteria (i.e., cardiac/chest pain) or rows that had fewer than 10 observations were not mapped. [Fig. 2A] demonstrates the mapping process. This example shows how FloMap finds all flowsheet rows that contain “pain” and one of the additional terms. Users can see the value sets, which help to determine whether the flowsheet rows represent a similar concept. They then select the flowsheet rows that map to the concept and click on “Add items to concept.” After these flowsheet rows are added to the concept, they are displayed and included in reports for comparison (see [Fig. 2B]).

Fig. 2 (A) Example of Boolean searching FloMap for mapping flowsheet rows to “Factors that Aggravate Pain.” (B) Display of flowsheet measures mapped to the concept of “Factors that Aggravate Pain.”
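A FloMap-style Boolean search can be approximated with ordinary SQL predicates over the extracted metadata. The following sketch uses hypothetical table and column names and illustrative search terms to show how rows for a concept such as “Factors that Aggravate Pain” might be found.

```sql
-- Sketch of a Boolean metadata search (hypothetical names): flowsheet
-- rows whose display names contain "pain" plus at least one
-- aggravation-related term; rows with fewer than 10 observations were
-- excluded from mapping, per the study's rule.
SELECT row_id, display_name, n_observations
FROM flowsheet_metadata
WHERE LOWER(display_name) LIKE '%pain%'
  AND (   LOWER(display_name) LIKE '%aggravat%'
       OR LOWER(display_name) LIKE '%exacerbat%'
       OR LOWER(display_name) LIKE '%worse%')
  AND n_observations >= 10;
```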

After the local flowsheet data were mapped to concepts in the Pain Reference IM, the group met biweekly to evaluate the concept mappings across all of the organizations. A FloMap-generated report was used by the group to decide which concepts to keep, combine, remove, or add. Concepts were retained when all researchers agreed that the concepts represented essential questions for the majority of patients. The group discussed differences in use based on the population, such as age (pediatric vs. adult), type of unit (e.g., intensive care unit vs. a medical–surgical unit), or the patient's capability (e.g., ability to verbalize pain). The group developed a definition and discussed the use of each concept to inform decisions about the concept and its associated value sets. A value set represents the list of all possible values (answers) associated with a specific concept (question). Value set response counts varied by concept and ranged from a few (3–4) to many (> 100). For example, the concept of “Body Site” had 507 different response choices across the 10 organizations. Some response values were not useful (e.g., misspelled or incomplete words like “a,” “ac,” “acu,” “acut” for a value choice of “acute”) or were clearly inappropriate, such as “…,” “/,” or “+++.” After the inappropriate responses were removed, several concepts with multiple diverse value sets remained for evaluation.
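As an illustration of this cleanup step, simple rules can flag many junk entries automatically; the sketch below uses hypothetical table and column names, and longer fragments such as “acu” or “acut” would still require the group's manual review.

```sql
-- Sketch of flagging clearly inappropriate value set entries for review
-- (hypothetical names). These rules catch symbol-only entries and very
-- short fragments; they are illustrative, not the study's actual method.
SELECT concept_name, item_text
FROM value_set_item
WHERE item_text IN ('…', '/', '+++')          -- symbol-only entries
   OR LENGTH(TRIM(item_text)) <= 2;           -- fragments such as 'a' or 'ac'
```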

To support the group in evaluating diverse response values, a FloMap “survey” feature was developed. The FloMap survey aggregated all of the response values for a particular concept into a single list while retaining details about which organizations used each choice. Researchers from each organization received a survey via email with 1 to 2 concepts and, for each concept, a list of the value set choices from all of the organizations. The email contained a secure link to the survey. Survey participants were asked to select values that were considered generalizable across organizations even if their organization did not currently include that value. The results were then discussed during the biweekly calls. Value set items that received 50% or more of the votes were automatically retained in the Pain Reference IM. Items that received less than 30% were automatically removed from the model, and those between 30% and 50% were discussed by the group. The group chose these thresholds to reduce the amount of discussion needed to reach consensus. The results were compared with pain concepts included in Logical Observation Identifiers Names and Codes (LOINC). Some concepts were then renamed to match those in the Nursing Physiologic Assessment Panel in LOINC or other LOINC locations.
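The voting thresholds are simple enough to express directly; a sketch of how they might be applied to tallied survey results (table and column names are hypothetical):

```sql
-- Sketch of the consensus thresholds applied to tallied survey votes
-- (hypothetical names): >= 50% of votes retains an item automatically,
-- < 30% removes it, and anything in between goes to group discussion.
SELECT concept_name,
       item_text,
       100.0 * votes / total_voters AS vote_pct,
       CASE
           WHEN 100.0 * votes / total_voters >= 50 THEN 'retain'
           WHEN 100.0 * votes / total_voters <  30 THEN 'remove'
           ELSE 'discuss'
       END AS decision
FROM survey_result;
```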



Results

The aggregated metadata from the 8 large health care organizations that contributed metadata represented flowsheet data from 6.6 million patients, 27 million encounters, and 683 million observations. A high-level diagram of the resulting pain IM concepts is shown in [Fig. 3]; the red font indicates newly added panels and concepts. [Table 2] shows a comparison of the original Pain Reference IM and the final consensus regarding which concepts were retained with or without revision, removed, or added. The new model consists of 30 concepts grouped into 4 panels with 396 value set items. The in-depth analysis revealed that 24 concepts were retained, 6 added, and 59 removed compared with the concepts in the original Pain Reference IM. Since some scales require copyright permission to use, we retained only the scale score for each of the pain scales for consistency. The Supplemental Digital Content (SDC) 1 includes a detailed list of the retained concepts, definitions, and their value sets. [Table 3] lists the information for each concept that was part of the final model: the minimum (Min) and maximum (Max) number of flowsheet rows per organization mapped to a concept, as well as the average (Avg) number of flowsheet rows across organizations. On average, organizations mapped 9 flowsheet rows to a single concept in the model. In fact, one organization had 81 unique flowsheet rows for recording “Numeric Pain Rating 0–10 Score.” [Table 3] also includes statistics for the percent of organizations using a particular concept, the number and percent of patients for which a concept was documented, and the total number of observations documented. Some concepts, such as “Numeric Pain Rating 0–10 Score,” are documented on 100% of patients. Finally, [Table 3] also includes the number of value set choices for each concept in the original and final models. The number of items in a value set ranged from 4 items for “Pain Duration” to 91 items for “Body Site.” The concepts that remained in the model (not newly added) are documented on average for 16% of patients. [Table 4] lists the 13 concepts from the final pain IM that are currently mapped to LOINC and 17 new concepts needed in LOINC. Additionally, there were pain concepts in LOINC that were not found in the organizations' data.

Table 2

Pain reference IM concepts with validation decisions

Abbreviations: FLACC score, Face, Legs, Activity, Cry, Consolability score; IM, information model; LOINC, Logical Observation Identifiers Names and Codes; NICU, neonatal intensive care unit; TENS, transcutaneous electrical nerve stimulation.


Table 3

Descriptive statistics and LOINC code for validated concepts in the final pain IM

| Concept name | Rows mapped (Min) | Rows mapped (Avg) | Rows mapped (Max) | Organizations (%) | Patients (n) | Patients (%) | Observations (n) | Value set items (original) | Value set items (final) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Current Pain | 1 | 18 | 55 | 75 | 869,380 | 13 | 28,284,945 | 9 | 4 |
| Pain Type | 7 | 12 | 32 | 88 | 980,287 | 15 | 17,895,687 | 11 | 5 |
| Context of Pain Rating | 1 | 11 | 30 | 38 | 557,998 | 9 | 16,296,187 | 4 | 5 |
| Pain Quality | 4 | 5 | 6 | 25 | 187,313 | 3 | 2,082,821 | 30 | 41 |
| Nonverbal Pain Indicators | 1 | 7 | 16 | 50 | 56,254 | 1 | 703,598 | 27 | 37 |
| Pain Exacerbating Factors | 1 | 3 | 4 | 38 | 139,179 | 2 | 1,248,679 | 26 | 38 |
| Pain Alleviating Factors | 2 | 23 | 61 | 38 | 40,658 | 1 | 4,063,022 | 28 | 37 |
| Pain Pattern Panel |  |  |  |  |  |  |  |  |  |
| – Speed of Pain Onset | 1 | 5 | 8 | 88 | 831,081 | 13 | 12,355,821 | 11 | 5 |
| – Pain Duration | 2 | 7 | 19 | 88 | 2,806,582 | 43 | 29,006,751 | 4 | 4 |
| – Pain Frequency | 1 | 11 | 31 | 63 | 262,094 | 4 | 2,610,203 | 7 | 5 |
| – Pain Course | 1 | 4 | 8 | 38 | 126,076 | 2 | 823,074 | 4 |  |
| Pain Location Panel |  |  |  |  |  |  |  |  |  |
| – Body Site | 6 | 28 | 104 | 88 | 3,749,894 | 57 | 46,098,215 | 84 | 91 |
| – Body Location Qualifier | 6 | 14 | 32 | 63 | 763,380 | 12 | 10,969,940 | NA | NA |
| – Body Laterality | 0 | 0 | 0 | 0 | 0 |  |  | NA | NA |
| Pain Scale Panel |  |  |  |  |  |  |  |  |  |
| – Checklist of Nonverbal Pain Indicators (CNPI) Score |  |  |  | 20 |  |  |  |  | 9 |
| – CRIES Score |  |  |  | 20 |  |  |  |  | 6 |
| – Critical-care Pain Observation Tool (CPOT) Score |  |  |  | 50 |  |  |  |  | 5 |
| – FACES (Wong–Baker) Rating Scale Score | 1 | 13 | 27 | 38 | 73,525 | 1 | 1,806,270 | 6 | 6 |
| – Faces Pain Scale–Revised (FPS-R Scale) Score |  |  |  | 30 |  |  |  | 6 | 6 |
| – FLACC Pain Assessment Score | 1 | 7 | 20 | 88 | 277,682 | 4 | 4,670,837 | 6 | 6 |
| – Neonatal Infant Pain Scale (NIPS) Score | 1 | 1 | 1 | 63 | 258,254 | 4 | 2,723,492 | 7 | 7 |
| – Neonatal Pain, Agitation & Sedation Scale (N-PASS) Score | 1 | 2 | 2 | 50 | 111,323 | 2 | 4,781,129 | 6 | 6 |
| – Numeric Pain Rating 0–10 Score | 4 | 23 | 81 | 100 | 6,561,150 | 100 | 153,340,448 | 11 | 11 |
| – PAIN Advanced Dementia (PAINAD) Score | 2 | 8 | 20 | 38 | 34,380 | 1 | 897,546 | 5 | 5 |
| – Premature Infant Pain Profile (PIPP) Score |  |  |  | 20 |  |  |  |  | 8 |
| – Revised FLACC Pain Assessment (rFLACC) Score | 2 | 2 | 2 | 13 | 1,664 | 0 | 47,532 | 6 | 6 |
| Pain Goals Panel |  |  |  |  |  |  |  |  |  |
| – Acceptable Comfort Level (numeric) | 1 | 3 | 11 | 63 | 2,716,906 | 41 | 65,625,525 | 11 | 11 |
| – Acceptable Comfort Level (nominal) | 1 | 4 | 7 | 75 | 388,199 | 6 | 4,017,356 | 14 | 5 |
| Pain Interventions | 1 | 13 | 44 | 100 | 2,639,964 | 40 | 43,971,596 | 66 | 67 |
| Pain Outcome Description | 1 | 4 | 9 | 50 | 115,769 | 2 | 2,180,853 | 5 | 5 |

Abbreviations: Avg, average; FLACC score, Face, Legs, Activity, Cry, Consolability score; IM, information model; LOINC, Logical Observation Identifiers Names and Codes; Max, maximum; Min, minimum; NA, not applicable.


Table 4

Comparison of validated pain information model with LOINC nursing physiological assessment panel[a]

Concepts in pain IM and LOINC Nursing Physiological Assessment (n = 13)

32419–4 Pain Quality

38209–3 Pain Exacerbating Factors

38210–1 Pain Alleviating Factors

38203–6 Speed of Pain Onset

38207–7 Pain Duration

38206–9 Pain Course

39111–0 Body Site

39112–8 Body Location Qualifier[a]

20228–3 Body Laterality[a]

80316–3 Pain Scales

38221–8 FACES (Wong–Baker)[a]

38208–5 Pain Rating 0–10 Scale

38213–5 FLACC Pain Assessment

Concepts in LOINC Nursing Physiological Assessment not in Pain Reference IM (n = 6)

38201–0 Pain Onset [Date and Time] – Reported

38202–8 Pain Onset [Hours Ago] – Reported

38204–4 Pain Primary Location – Reported

38205–1 Pain Radiation

38211–9 Pain Initiating Event Narrative – Reported

80317–1 Pain Assessment [Interpretation]

New concepts not in the Nursing Physiological Assessment (n = 17)

Current Pain

Pain Type

Context of Pain Rating

Nonverbal Pain Indicators

Pain Frequency

Checklist of Nonverbal Pain Indicators (CNPI) Score

CRIES Score

Critical-care Pain Observation Tool (CPOT) Score

Neonatal Pain, Agitation & Sedation Scale (N-PASS) Score

Neonatal Infant Pain Scale (NIPS) Score

PAIN Advanced Dementia (PAINAD) Score

Premature Infant Pain Profile (PIPP) Score

Faces Pain Scale – Revised (FPS-R Scale) Score

Revised FLACC Pain Assessment (rFLACC) Score

Acceptable Comfort Level (numeric)

Acceptable Comfort Level (nominal)

Pain Outcome Description

Abbreviations: FLACC score, Face, Legs, Activity, Cry, Consolability score; IM, information model; LOINC, Logical Observation Identifiers Names and Codes.


a Concepts in LOINC but not in Nursing Physiological Assessment Panel.


Fig. 3 Concepts retained in the pain information model (IM) through a data-driven consensus process.


Discussion

The purpose of our study was to validate and refine a pain IM from EHR flowsheet data that standardizes pain concepts, definitions, and associated value sets for assessments, goals, interventions, and outcomes. The data-driven consensus process among 10 organizations resulted in a considerable reduction of concepts, panels (classes), and value set items compared with the original Pain Reference IM, which included 84 concepts grouped into 14 panels and 599 value set items.[5] The new model consists of 30 concepts grouped into 4 panels with 396 value set items. The consensus process helped eliminate concepts from the original model, mainly because of limited use across organizations and the need for consistency in representing pain assessment scales. However, some infrequently occurring concepts were retained in the model because they simplify documentation, such as a one-item question to assess nonverbal pain indicators versus a 5- to 9-item observational pain scale. We found that organizations combined some concepts for ease of documentation, such as body orientation, which included value items from both body location qualifier and body laterality. The pain IM separated body orientation into these two concepts to be consistent with LOINC standards.

One of the strengths of our study was extracting all flowsheet rows related to pain and mapping them to the Pain Reference IM. EHRs become unwieldy over time as multiple people build the system and upgrades occur. Mapping all semantically comparable flowsheet rows to a concept demonstrated the redundancy in EHRs. While Harris et al[7] used a similar consensus process for developing a pressure ulcer model, our study goes beyond their process by including the ability to find and map data throughout the EHR. A custom query provides a method for extracting comparable data for interoperability and cross-organization pain research.

Using real-world evidence from large data sets is an increasing trend in research because of the potential cost savings.[13] However, a study evaluating patients' pain that used only one of the multiple flowsheet rows mapped to a concept such as “Numeric Pain Rating 0–10 Score” could reach false conclusions about the effectiveness of medications or nursing pain interventions, because the pain ratings documented in the other flowsheet rows mapped to that concept would be missing. Implementation of an IM can reduce redundancy and increase the usefulness of the data.
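The remedy implied here is to query through the concept mapping rather than through any single row ID; a minimal sketch with hypothetical table and column names:

```sql
-- Sketch: retrieve every pain rating for a concept through the IM
-- mapping (hypothetical names), rather than querying a single flowsheet
-- row ID and silently dropping the redundant rows.
SELECT ob.patient_id,
       ob.recorded_time,
       ob.meas_value AS pain_score
FROM flowsheet_observation ob
JOIN concept_mapping cm ON cm.row_id = ob.row_id
WHERE cm.concept_name = 'Numeric Pain Rating 0-10 Score';
```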

While it might be ideal to have a single pain scale, we retained 12 unique pain assessment scales, which include both self-report and observational assessments. Nurses and other clinicians need to select the appropriate tool based on age, setting, and clinical condition. The pain assessment scales identified address this wide variety of circumstances. One essential point, however, is the importance of consistent use of the same scale over time to evaluate a patient's progress.

The results of our study extend the concepts needed in LOINC for interoperability.[14] For example, some important pain concepts are missing from LOINC, such as “Current Pain,” “Pain Type,” and “Acceptable Comfort Level (numeric).” Conversely, there are LOINC concepts not found in our study, such as “Pain Onset,” which is a date and time stamp. Another LOINC term that was not found in our organizations' data was “Pain Primary Location.” This is likely because patients can have multiple pain locations, each with its own assessment, so a single primary location is not documented in practice.

There are several future steps planned, including adding standard terminology mappings to the concepts, validating the pain IM with additional organizations and settings, and applying the process to validate IMs for other clinical areas. The terminology standards include LOINC for assessments and some outcomes and Systematized Nomenclature of Medicine–Clinical Terms (SNOMED CT) for value sets associated with assessments, problems, and interventions.[14] Additional terms will need to be submitted to LOINC and SNOMED CT when codes do not exist. Once this work is completed, broad dissemination is needed. The Nursing Knowledge Big Data Science Initiative is developing an open source repository for sharing work such as the pain IM.[15] Additional research is needed in several areas. Validation of the IM with a broader set of stakeholders, including home care, hospice, long-term care, and others, would be beneficial. The IM could also be validated with multiple clinical experts specific to pain using a Delphi technique or another consensus approach. Further IM mapping is needed for other nurse-sensitive measures such as falls, catheter-associated urinary tract infections (CAUTI), and central line–associated bloodstream infections (CLABSI). Once the models have been validated, research is needed on implementation of the models and the coded data elements in EHRs to understand what worked, what problems and issues were uncovered, how documentation is affected, and whether any differences exist based on vendor solutions. Ultimately, we want to know whether the IMs and the use of coded key data elements increase interoperability and our ability to enable large, multicenter research, including comparative effectiveness research.

Our work has several limitations. A volunteer sample of organizations participated, and thus the model may not be generalizable to all organizations. While this convenience sample can limit the generalizability of findings, the geographic locations, population size, and variety of practices provide a foundation for a generalizable pain IM that can be used to support research. The pain IM is a beginning, and it is anticipated that it will evolve over time. In particular, additional concepts are needed for cardiac services, and pain clinics may have more specialized assessments and interventions. There was variability in the data extraction approaches and criteria used at each participating organization that could influence the results. No attempt was made to dictate how to implement the pain IM in an EHR, so organizations need to determine the best practice for doing this. While the researchers consulted their pain experts, a more deliberate effort is needed in future work to include domain experts. Another limitation is that neither FloMap nor the SQL script for data extraction is yet publicly available. If others are interested in their use, they can contact S.J., one of the authors of this article.



Conclusion

The purpose of our research was to validate and refine a pain IM by using a data-driven approach across multiple health systems. The resulting pain IM is a consensus model based on actual EHR documentation in the participating health systems. The pain IM captures the most important concepts related to pain.



Clinical Relevance Statement

Secondary use of EHR data must be standardized for comparison within and across organizations. Our study resulted in 30 concepts, definitions, and associated value sets agreed upon by 10 organizations as useful for building or optimizing an EHR. Our methods also allowed agencies to map their flowsheet data to these concepts to support future research.



Multiple Choice Question

Variation in flowsheet data is often due to

  • The content and guidelines provided by vendors

  • Professional guidelines that influence content

  • Limited resources and rapid deployment of EHRs

  • Available guidelines from terminologies on how to build EHRs

  • All of the above

    Correct Answer: The correct answer is e, all of the above. Vendors provide generic content and guide organizations in using consensus-based approaches to configure the bulk of their system to meet the clinical requirements of the organization. Much of the documentation, however, is captured in flowsheet format, using nonstandardized, semistructured data in a matrix format for patient assessments, goals, problems, interventions, and outcomes of care. Limited resources and rapid deployment timelines provide little time for organizations to identify and adopt standardized terminologies and use IMs to design flowsheets for future data sharing. Further, informaticians are required to choose from multiple terminologies[3] [4] with limited reference standards to guide flowsheet builds.



Conflict of Interest

None.

Acknowledgment

We would like to acknowledge the organizations that were willing to share their data and committed their staff time to collaborate on this project over an extended period of time.

Protection of Human and Animal Subjects

The data were considered “metadata” and represented descriptions of how each organization's EHR was designed, along with aggregated counts of frequency of use; no patient-identifiable data were included. Each participant consulted with their organization to determine whether Institutional Review Board (IRB) approval was needed. If IRB approval was required, it was obtained prior to data extraction and transmission to a secure database at the University of Minnesota.


  • References

  • 1 Goossen W, Goossen-Baremans A, van der Zel M. Detailed clinical models: a review. Healthc Inform Res 2010; 16 (04) 201-214
  • 2 Johnson SG, Byrne MD, Christie B, et al. Modeling flowsheet data for clinical research. AMIA Jt Summits Transl Sci Proc 2015; 2015: 77-81
  • 3 Rutherford MA. Standardized nursing language: what does it mean for nursing practice?. Online J Issues Nurs 2008; 13 (01) 1-7
  • 4 Westra BL, Delaney CW, Konicek D, Keenan G. Nursing standards to support the electronic health record. Nurs Outlook 2008; 56 (05) 258-266.e1
  • 5 Westra BL, Christie B, Johnson SG, et al. Modeling flowsheet data to support secondary use. Comput Inform Nurs 2017; 35 (09) 452-458
  • 6 Chow M, Beene M, O'Brien A, et al. A nursing information model process for interoperability. J Am Med Inform Assoc 2015; 22 (03) 608-614
  • 7 Harris MR, Langford LH, Miller H, Hook M, Dykes PC, Matney SA. Harmonizing and extending standards from a domain-specific and bottom-up approach: an example from development through use in clinical applications. J Am Med Inform Assoc 2015; 22 (03) 545-552
  • 8 American Nurses Association & the American Society for Pain Management Nursing. Pain Management Nursing: Scope and Standards of Practice. 2nd ed. Silver Spring, MD: American Nurses Association; 2016
  • 9 Nahin RL. Estimates of pain prevalence and severity in adults: United States, 2012. J Pain 2015; 16 (08) 769-780
  • 10 Gaskin DJ, Richard P. The economic costs of pain in the United States. J Pain 2012; 13 (08) 715-724
  • 11 The Joint Commission. Joint Commission Enhances Pain Assessment and Management Requirements for Accredited Hospitals. The Joint Commission Perspectives 2017; 37 (07) 1-3. Available at: https://www.jointcommission.org/assets/1/18/Joint_Commission_Enhances_Pain_Assessment_and_Management_Requirements_for_Accredited_Hospitals1.PDF
  • 12 Delaney CW, Pruinelli L, Alexander S, Westra BL. 2016 Nursing Knowledge Big Data Science Initiative. Comput Inform Nurs 2016; 34 (09) 384-386
  • 13 Jarow JP, LaVange L, Woodcock J. Multidimensional evidence generation and FDA regulatory decision making: defining and using “real-world” data. JAMA 2017; 318 (08) 703-704
  • 14 Matney SA, Settergren TT, Carrington JM, Richesson RL, Sheide A, Westra BL. Standardizing physiologic assessment data to enable big data analytics. West J Nurs Res 2016; 39 (01) 63-77
  • 15 Carter-Templeton H, Effken J, Weaver C, Cochran K, Androwich I, O'Brien A. Toward a central repository for sharing nursing informatics' best practices. Comput Inform Nurs 2016; 34 (06) 245-246


