Evid Based Spine Care J 2014; 05(01): 002-005
DOI: 10.1055/s-0034-1371445
Science in Spine
Georg Thieme Verlag KG Stuttgart · New York

Credibility Matters: Mind the Gap

Andrea C. Skelly
1   Spectrum Research, Inc., Tacoma, Washington, United States

Publication History

Publication Date:
28 March 2014 (online)

 

Introduction

Clinicians, policy makers, and patients need to be able to rely on high-quality scientific research to make informed decisions about health care options and policy. Frustration ensues at all levels when confidence in the quality and integrity of available research on spine care is low. When research quality is low and its reporting is poor, clinicians and patients may be left confused about the best health care options, and policy makers may decline to reimburse treatments or diagnostic modalities deemed ineffective based on the available evidence. At the most basic level, all parties want the same thing: to do what “works.” Yet all suffer when confidence in the evidence is low.

A credibility gap spans all aspects of medical research, from study planning to study reporting, to the availability of data for verification, to final publication. Several studies provide empirical evidence on publication and related biases, showing how conclusions may differ depending on what is and is not reported, and how.[1] [2] [3] One example of publication-related bias is the recent controversy surrounding results from the Yale Open Data Access (YODA) studies[4] [5] as compared with the original trial publications on bone morphogenetic protein. A primary conclusion drawn from these reports was a call for timely and complete transparency in data reporting.[6] [7]

Subsequently, media and scientific circles alike have reiterated strong calls to reduce bias in study analysis and reporting.[8] [9] [10] [11]

Outcome reporting bias, one type of publication-related bias, is an under-recognized problem.[12] [13] It occurs when some outcomes are reported selectively and others are not, possibly depending on the nature and direction of the findings. Beyond the ethical concerns such selective reporting raises, the reported results can be misleading. One example of its impact is an analysis of 283 Cochrane reviews: Kirkham et al reported that 34% of the reviews contained at least one trial with high suspicion of outcome reporting bias for the primary outcome.[12] Sensitivity analysis revealed that the treatment effect was reduced by 20% or more in 23% of the reviews. After adjustment for outcome reporting bias, 19% of meta-analyses with a statistically significant result became nonsignificant, and 26% would have overestimated the treatment effect by 20% or more. Such distortion can affect policy making and clinical decision making and potentially result in harm to patients.
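To see the mechanism behind these numbers, consider a minimal simulation, a hypothetical illustration of our own and not drawn from the Kirkham analysis: many trials estimate the same true effect with random error, but only "statistically significant" positive results make it into the reported literature.

```python
import random
import statistics

random.seed(42)

def simulate_reporting_bias(n_trials=1000, true_effect=0.2, se=0.15):
    """Each trial observes the true effect plus random error; 'selective'
    reporting keeps only results whose 95% CI excludes zero."""
    observed = [random.gauss(true_effect, se) for _ in range(n_trials)]
    reported = [x for x in observed if x - 1.96 * se > 0]  # significant positives only
    return statistics.mean(observed), statistics.mean(reported)

mean_all, mean_reported = simulate_reporting_bias()
print(f"true effect: 0.20 | all trials: {mean_all:.2f} | "
      f"selectively reported: {mean_reported:.2f}")  # reported mean is inflated
```

The selectively reported mean overstates the true effect, which is exactly the distortion a reader of the published literature, or a meta-analyst pooling it, would inherit.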

Transparency and attention to detail in research design, specification of outcomes, analysis, reporting, and dissemination are critical to “minding the gap,” regardless of study design or the level and type of funding. This article (like previous Science in Spine articles) describes key components of such transparency in conducting research, with a focus on outcome reporting.



Where Does It Start?

The credibility gap must be considered and addressed at all stages of planning, reporting, and publication, regardless of study design. It starts with:

  • Fully formulating a focused and answerable study question, as described in the previous Science in Spine article.[14]

  • Creating specific study aims and testable hypotheses that are objectively stated a priori.

  • Using a structured approach to specify the study question and to guide research design and execution. A Patients, Intervention, Comparison, and Outcomes (PICO) table for treatment and diagnostic studies, or a Patients, Prognostic factors, and Outcomes (PPO) table for prognostic studies, is one method of providing the blueprint for conceptualizing, operationalizing, and reporting the results of your study.

  • Using your PICO/PPO to stay on track. The PICO/PPO framework, or another organizing framework, can enhance the quality of your study and of its reporting by decreasing ambiguity, clarifying objectives, and identifying and focusing attention on the aspects of primary importance. Every reported outcome should trace back to your PICO/PPO, study question, and specific aims (see the sketch following this list).
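As one way to make the blueprint concrete, the sketch below records the PICOTS elements in a small data structure whose fields double as a protocol checklist. This is a hypothetical illustration; the class name, fields, and example entries are ours, not part of any published standard.

```python
from dataclasses import dataclass

@dataclass
class PicotsSpec:
    """A priori study blueprint; fields mirror the PICOTS elements."""
    patients: str
    intervention: str
    comparator: str
    outcomes: list[str]   # all prespecified outcomes, primary outcome first
    timing: list[str]     # follow-up points at which outcomes are assessed
    setting: str

    def checklist(self) -> list[str]:
        """One line per element, usable as a protocol and reporting checklist."""
        return [f"{name}: {value}" for name, value in vars(self).items()]

# Hypothetical example for a lumbar fusion study
spec = PicotsSpec(
    patients="Adults with single-level degenerative lumbar spondylolisthesis",
    intervention="Decompression with instrumented fusion",
    comparator="Decompression alone",
    outcomes=["ODI", "leg pain (VAS)", "reoperation", "adverse events"],
    timing=["6 weeks", "12 months", "24 months"],
    setting="Academic spine centers",
)
print("\n".join(spec.checklist()))
```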

The value of using the PICO format was highlighted in a study of 89 RCT reports.[15] Rios et al created a score based on the PICO elements, examined the extent to which each report stated the PICO elements of a structured research question, and correlated that score with an overall reporting quality score based on the Consolidated Standards of Reporting Trials (CONSORT) guidelines. The result: the PICO-related score was independently associated with the overall reporting quality score. The implication is that when care is taken to specify the question and to use a framework for study design and execution, reporting of quality-related study components improves, as does the perceived quality of the study. Such a framework also helps prevent biases such as outcome reporting bias. In the era of evidence-based practice, attention to the quality of study design, execution, and reporting matters to policy makers and others.



The Components and Applying the Concept

[Table 1] provides an overview of PICO/PPO. In many instances, it is logical to add two components, one for Timing and one for Setting, extending the acronyms to PICOTS/PPOTS. The “S” may also be used to denote study design.

Table 1 Overview of PICOTS and PPOTS

| Element | PICOTS: Therapeutic | PICOTS: Diagnostic | PPOTS: Prognostic |
|---|---|---|---|
| Patients | What patient group? | What patient group? | What patient group? |
| Intervention / Prognostic factors | In what surgical treatment, procedure, or implants are you interested? | What diagnostic procedure? | What primary prognostic (risk) factor might influence outcome? |
| Comparison | What is the comparison (control) treatment? | Is there a gold or suitable reference standard? | What other factors might influence outcome? |
| Outcomes | In what outcomes are you interested (e.g., pain)? | Are you interested in validity (e.g., sensitivity/specificity) and/or reliability (e.g., inter-/intrarater reproducibility)? | In what outcome(s) are you interested (e.g., nonunion)? |
| Timing | What follow-up times are important? | Does timing of the test influence the outcome? | What timing of follow-up or outcome assessment is important (e.g., perioperative)? |
| Setting (or study design) | In what setting(s) is treatment performed (e.g., the emergency department)? | Under what conditions or in what locations (setting) is the diagnostic test performed? | What conditions or settings are important to consider? |

  • Patients: A homogeneous patient population is best. Define the patient population in terms of all factors relevant to the condition of interest: patient demographic features (e.g., age, gender), behaviors (e.g., smoking), medical history, medications that may influence outcomes (e.g., steroids, NSAIDs), general health factors, comorbidities, factors that may be associated with treatment selection (e.g., location/severity of the condition), and anything else that may be relevant to treatment selection or may influence outcomes. Are patients with previous surgical interventions to be included or excluded? Are specific pathologies to be excluded?

  • Intervention: This may be a newer or novel treatment that is to be compared with a more standard treatment (the comparator).

  • Comparator: This is your “control” group, consisting of those receiving the alternative, standard, or “other” treatment with which the intervention will be compared. All comparative studies have a control/comparator group, but some questions have none, such as when you are interested only in the safety or handling characteristics of a new implant or procedure.

  • Outcome(s): What is the primary outcome of importance? Be specific and aim for the most important outcomes. Conceptual examples include patient-reported outcomes such as pain, function, and quality of life, as well as more clinical outcomes such as nonunion, major complications, repeat surgery, or death. It is best to use validated outcome measures and to measure clinically meaningful outcomes as well as harms. Future articles will discuss operationalizing and measuring your outcomes.

Resources with additional details on applying this framework to diagnostic and prognostic studies (PPOTS) can be found in the SMART Handbook for Spine Clinical Research[16] and in Agency for Healthcare Research and Quality methods publications.[17] [18]



How Does This Help? Returning to Our Focus on Outcomes Reporting

Choosing not to evaluate or report an outcome (particularly a harm) may reduce the credibility and applicability of your findings. The PICOTS is the start of your research game plan. Specifying and defining a priori which outcomes will be measured, and how and when they will be measured, and then sticking to that plan helps you avoid ambiguity and misreported results. You are committing to measuring, analyzing, and reporting on those outcomes, including those related to harm, regardless of their statistical significance. This enhances the transparency and credibility of your study report and provides a sound basis for drawing objective conclusions.
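One simple way to audit a draft report against the plan is to cross-check the outcome lists, flagging prespecified outcomes that went unreported and reported outcomes that were never prespecified. The function and outcome names below are hypothetical, offered only as a sketch of that check.

```python
def reporting_gaps(prespecified, reported):
    """Compare outcome lists; both kinds of mismatch signal a risk of
    outcome reporting bias in the written report."""
    pre, rep = set(prespecified), set(reported)
    return {
        "unreported": sorted(pre - rep),  # planned but missing from the report
        "unplanned": sorted(rep - pre),   # reported but never prespecified
    }

gaps = reporting_gaps(
    prespecified=["ODI", "leg pain (VAS)", "reoperation", "adverse events"],
    reported=["ODI", "leg pain (VAS)", "fusion rate"],
)
print(gaps)
# {'unreported': ['adverse events', 'reoperation'], 'unplanned': ['fusion rate']}
```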



Why Does It Matter?

Empirical evidence of outcome reporting bias (particularly related to treatment harms) accumulated over the past decade has led to calls for the registration of clinical trials and the publication of protocols prior to trial completion to ensure transparency.[3] [12] [19] [20] Increasingly, researchers are also being called on to publish the protocols of nonrandomized studies, and there is growing interest in examining how consistent a study's published results are with its aims and prespecified protocol. Study credibility is at stake, even if yours is not an RCT.

Regardless of study design, using the PICOTS/PPOTS framework as part of your prespecified protocol helps you stay on track as you plan and execute your study and is an important first step toward transparency. It can form the basis of a checklist for verifying that you have followed the basic game plan. Keeping it in mind as you write up results (negative as well as positive findings) helps you avoid selective reporting and other reporting biases, which in turn enhances the credibility of your study within and outside your field.

Consider how the reporting of your results may affect future studies that build on yours and how your data may be used in syntheses across studies, such as meta-analyses. The accuracy and completeness of your report may affect the credibility of the overall body of evidence.



Summary

The purpose of the Science in Spine articles in EBSJ is to assist surgeons in understanding research, to facilitate critical thinking about research beyond “statistical significance,” and to help enhance the quality of the research they report. Decisions by clinicians, patients, and policy makers rest on the quality and integrity of reported research. To avoid biased study reporting:

  • It is important to specify primary study features a priori using a framework such as PICOTS/PPOTS.

  • It is important to report on all study results/outcomes regardless of statistical significance.

  • It is important to consider the potential for various types of reporting and publication bias when critically appraising studies and systematic reviews.

It is in the best interest of all to “mind the gap” and to actively improve the value and reporting of research (regardless of study design or funding source) by following these basic practices to ensure quality.


References

  • 1 Song F, Eastwood AJ, Gilbody S, Duley L, Sutton AJ. Publication and related biases. Health Technol Assess 2000; 4 (10): 1-115
  • 2 Song F, Parekh S, Hooper L, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess 2010; 14 (8): iii, ix-xi, 1-193
  • 3 Dwan K, Altman DG, Cresswell L, Blundell M, Gamble CL, Williamson PR. Comparison of protocols and registry entries to published reports for randomised controlled trials. Cochrane Database Syst Rev 2011; (1): MR000031
  • 4 Fu R, Selph S, McDonagh M, et al. Effectiveness and harms of recombinant human bone morphogenetic protein-2 in spine fusion: a systematic review and meta-analysis. Ann Intern Med 2013; 158 (12): 890-902
  • 5 Simmonds MC, Brown JV, Heirs MK, et al. Safety and effectiveness of recombinant human bone morphogenetic protein-2 for spinal fusion: a meta-analysis of individual-participant data. Ann Intern Med 2013; 158 (12): 877-889
  • 6 Kuntz RE. The changing structure of industry-sponsored clinical research: pioneering data sharing and transparency. Ann Intern Med 2013; 158 (12): 914-915
  • 7 Resnick D, Bozic KJ. Meta-analysis of trials of recombinant human bone morphogenetic protein-2: what should spine surgeons and their patients do with this information? Ann Intern Med 2013; 158 (12): 912-913
  • 8 AllTrials. All Trials Registered, All Trials Reported; 2013. Available at: http://www.alltrials.net/blog/
  • 9 Ross JS, Gross CP, Krumholz HM. Promoting transparency in pharmaceutical industry-sponsored research. Am J Public Health 2012; 102 (1): 72-80
  • 10 Ross JS, Krumholz HM. Ushering in a new era of open science through data sharing: the wall must come down. JAMA 2013; 309 (13): 1355-1356
  • 11 Zarin D. Benefits of Sharing Clinical Research Data. Presented at the Institute of Medicine Workshop on Sharing Clinical Research Data; Washington, DC; October 4-5, 2012. Available at: www.iom.edu/Reports/2013/Sharing-Clinical-Research-Data.aspx
  • 12 Kirkham JJ, Dwan KM, Altman DG, et al. The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ 2010; 340: c365
  • 13 Dwan K, Gamble C, Williamson PR, Kirkham JJ; Reporting Bias Group. Systematic review of the empirical evidence of study publication bias and outcome reporting bias - an updated review. PLoS ONE 2013; 8 (7): e66844
  • 14 Raich AL, Skelly AC. Asking the right question: specifying your study question. Evid Based Spine Care J 2013; 4 (2): 68-71
  • 15 Rios LP, Ye C, Thabane L. Association between framing of the research question using the PICOT format and reporting quality of randomized controlled trials. BMC Med Res Methodol 2010; 10: 11
  • 16 Lee MJ, Norvell DC, Dettori JR, Skelly AC, Chapman JR, eds. SMART Handbook for Spine Clinical Research. New York: Thieme; 2013
  • 17 Methods Guide for Effectiveness and Comparative Effectiveness Reviews. AHRQ Publication No. 10(12)-EHC063-EF. Rockville, MD; 2012. Available at: www.effectivehealthcare.ahrq.gov
  • 18 Whitlock EP, Lopez SA, Chang S, Helfand M, Eder M, Floyd N. Identifying, selecting, and refining topics. Methods Guide for Effectiveness and Comparative Effectiveness Reviews. Rockville, MD: Agency for Healthcare Research and Quality; 2009. Available at: http://www.effectivehealthcare.ahrq.gov/index.cfm/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productid=318
  • 19 Chan AW, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA 2004; 291 (20): 2457-2465
  • 20 Dwan K, Gamble C, Kolamunnage-Dona R, Mohammed S, Powell C, Williamson PR. Assessing the potential for outcome reporting bias in a review: a tutorial. Trials 2010; 11: 52

Address for correspondence

Andrea C. Skelly, PhD
Spectrum Research, Inc., Atrium Court
705 S. 9th Street, Tacoma, WA 98405
United States   
