Evid Based Spine Care J 2014; 05(01): 002-005
DOI: 10.1055/s-0034-1371445
Science in Spine
Georg Thieme Verlag KG Stuttgart · New York

Credibility Matters: Mind the Gap

Andrea C. Skelly
1   Spectrum Research, Inc., Tacoma, Washington, United States

Publication History

Publication Date:
March 28, 2014 (online)


Introduction

Clinicians, policy makers, and patients need to be able to rely on high-quality scientific research to make informed decisions about health care options and policy. Frustration ensues at all levels when confidence in the quality and integrity of available research on spine care is low. When research quality is poor and reporting is incomplete, clinicians and patients may be left confused about the best health care options. Policy makers may decline to reimburse treatments or diagnostic modalities deemed ineffective based on the available evidence. At the most basic level, all parties want the same thing: to do what “works.” Yet all suffer when confidence in the evidence is low.

There is a credibility gap that spans all phases of medical research, from study planning through reporting, availability of data for verification, and final publication. Several studies provide empirical evidence of publication and related biases, showing how conclusions may differ depending on what is reported, what is withheld, and how findings are presented.[1] [2] [3] One example of publication-related bias is the recent controversy surrounding results from the Yale Open Data Access (YODA) studies[4] [5] as compared with the original trial publications on bone morphogenetic protein. A primary conclusion drawn from these reports was a call for timely and complete transparency in data reporting.[6] [7]

Subsequently, both the media and scientific circles have reiterated strong calls to reduce bias in study analysis and reporting.[8] [9] [10] [11]

Outcome reporting bias is one type of publication-related bias and is an under-recognized problem.[12] [13] It occurs when some outcomes are selectively reported and others are not, possibly depending on the nature and direction of the findings. Beyond the ethical concerns such selective reporting raises, the reported results can be misleading. One example of its impact comes from an analysis of 283 Cochrane reviews. Kirkham et al reported that 34% of reviews contained at least one trial with high suspicion of outcome reporting bias for the primary outcome.[12] Sensitivity analysis of these reviews revealed that the treatment effect was reduced by 20% or more in 23% of them. After adjustment for outcome reporting bias, 19% of meta-analyses with a statistically significant result became non-significant, and 26% would have overestimated the treatment effect by 20% or more. Such distortions can affect policy making and clinical decision making and potentially result in harm to patients.

Transparency and attention to detail in research design, specification of outcomes, analysis, reporting, and dissemination are critical to “minding the gap,” regardless of study design or the level and type of funding. This article, like previous Science in Spine articles, describes key components of such transparency in the conduct of research, with a focus on outcome reporting.