Homœopathic Links 2018; 31(04): 221-222
DOI: 10.1055/s-0039-1677673
Editorial
Thieme Medical and Scientific Publishers Private Ltd.

Publication Bias—A Threat to Scientific Validity

Bindu Sharma
Scientist-IV, Central Council for Research in Homoeopathy, under Ministry of AYUSH, New Delhi, India

Publication Date: 26 February 2019 (online)

The problem now known as ‘publication bias’ was first described by statistician Theodore Sterling in 1959, in reference to fields in which ‘successful’ research is more likely to be published. It is a bias that arises when published academic research is systematically unrepresentative of the population of completed studies; as a result, conclusions drawn from the literature of such a field are liable to be false.[1] Publication bias is sometimes called the ‘file drawer effect’, because results that do not support researchers' hypotheses often end up in file drawers, biasing the published record.[2] That term was coined by Rosenthal in 1979.[3]

Publication bias is a potential threat in all areas of research, including qualitative research, primary quantitative studies, narrative reviews and quantitative reviews, that is, meta-analyses.[4] Its presence in the literature has been studied most extensively in biomedical research.[5] [6] [7] Where publication bias is present, published studies are no longer a representative sample of the available evidence. This distorts the results of meta-analyses and systematic reviews, which is especially troubling because evidence-based medicine relies increasingly on meta-analysis to assess evidence.[8]

Publication bias has come to prominence in recent years largely with the introduction and widespread adoption of systematic review and meta-analytic methods to summarise research. As reviewing methods have become more scientific, systematic and quantitative, it has become possible to demonstrate the existence of publication bias empirically and to quantify its impact.[4]

Publication bias occurs when the publication of research results depends not just on the quality of the research but also on the hypothesis tested and on the significance and direction of the effects detected.[9] Publishing only results that show a significant finding disturbs the balance of the evidence, and in some cases fosters adverse consequences, as when an ineffective or dangerous treatment is falsely viewed as safe and effective.[4] The most common cause of publication bias is nonpublication of results that fail to reject the null hypothesis, whether because investigators assume such results are a mistake, regard them as a failure to support a known finding, lose interest in the topic or anticipate that others will be uninterested in null results.[10] Studies with positive results are more likely to be accepted by journals and are published faster than studies with negative results: studies with significant results have a shorter median time to publication (4–7 years), whereas those with nonsignificant results have a median time of 8 years. Any meta-analysis or literature review based on the published data will therefore be biased.

Positive-result bias occurs when authors are more likely to submit, or editors more likely to accept, positive results than negative or inconclusive ones.[11] Reporting bias occurs when multiple outcomes are measured and analysed but the reporting of those outcomes depends on the strength and direction of their results. A generic term coined for such post hoc choices is HARKing (‘Hypothesising After the Results are Known’).[12]

Publication bias matters because literature reviews assessing support for a hypothesis can themselves be biased if the original literature is contaminated by publication bias. When the research that is readily available differs in its results from all the research that has been done in an area, readers and reviewers of that research are in danger of drawing the wrong conclusion about what that body of research shows.[4]

A number of strategies have been proposed to detect and control publication bias.[4] Some journals ask for preregistration of a study prior to data collection and analysis, which helps to spot redundant publication, to challenge nonpublication and to highlight selective reporting and post hoc analysis. Other strategies include maintaining trial registers and linking protocols to publications, P-curve analysis[13] (a minimal sketch of the idea follows below) and disfavouring small and nonrandomised studies because of their demonstrated high susceptibility to error and bias.[10]
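
P-curve analysis examines only the statistically significant p-values reported for an effect: if a true effect exists, these should be right-skewed (clustered near zero), whereas selective reporting of null effects produces p-values bunched just under 0.05. The Python sketch below is a simplified rendering of the binomial variant of the test of Simonsohn et al;[13] the function name and the example p-values are illustrative assumptions, not data from any actual review.

```python
# A minimal sketch of the binomial variant of P-curve analysis.
# The example p-values below are hypothetical.
from scipy.stats import binomtest

def p_curve_binomial(p_values, alpha=0.05):
    """Test whether significant p-values are right-skewed.

    Under a true effect, significant p-values pile up near zero, so
    more than half should fall below alpha/2; under selective
    reporting of null effects, they cluster just under alpha.
    """
    significant = [p for p in p_values if p < alpha]
    if not significant:
        raise ValueError("no significant p-values to analyse")
    low = sum(1 for p in significant if p < alpha / 2)
    # One-sided binomial test: is the share below alpha/2 above 50%?
    result = binomtest(low, n=len(significant), p=0.5, alternative="greater")
    return low, len(significant), result.pvalue

# Hypothetical p-values gathered from published studies of one effect
reported = [0.001, 0.003, 0.012, 0.021, 0.032, 0.041, 0.044, 0.048]
print(p_curve_binomial(reported))  # 4 of 8 below 0.025: no right-skew evidence
```

The full method also applies continuous tests to the exact p-values, but the binomial version above captures the core logic: a flat or left-skewed p-curve suggests the published record lacks evidential value.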

Publication bias can be contained through better powered studies, enhanced research standards and careful consideration of whether a true or an untrue relationship is being tested.[14] Better powered studies are large studies that deliver definitive results, or that test major concepts, and thus support low-bias meta-analysis. Enhanced research standards include the preregistration of protocols, the registration of data collections and adherence to established protocols. To avoid false-positive results, experimenters must weigh the chances that the relationship they are testing is true. This can be done by properly assessing the false-positive report probability in light of the statistical power of the test[15] and by reconfirming (whenever ethically acceptable) established findings of prior studies known to have minimal bias.
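
As a concrete illustration of that assessment, the false-positive report probability of Wacholder et al[15] can be written as FPRP = α(1 − π) / [α(1 − π) + (1 − β)π], where π is the pre-study probability that the tested relationship is true, α the significance level and 1 − β the statistical power. A minimal sketch, with an assumed (illustrative) prior and power:

```python
# A minimal sketch of the false-positive report probability (FPRP)
# of Wacholder et al. The inputs in the example are illustrative
# assumptions, not values taken from the editorial.
def fprp(alpha, power, prior):
    """FPRP = alpha*(1 - prior) / (alpha*(1 - prior) + power*prior).

    alpha : significance level used to declare a result positive
    power : probability of detecting the effect if it is real (1 - beta)
    prior : pre-study probability that the tested relationship is true
    """
    false_positives = alpha * (1.0 - prior)
    true_positives = power * prior
    return false_positives / (false_positives + true_positives)

# An underpowered test of an unlikely hypothesis: even at p < 0.05,
# roughly 96% of 'positive' findings would be false.
print(f"{fprp(alpha=0.05, power=0.20, prior=0.01):.2f}")
```

The example makes the point quantitative: with low power and a low prior, a nominally significant result is far more likely to be false than true.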

 
References

1. Sterling TD. Publication decisions and their possible effects on inferences drawn from tests of significance—or vice versa. J Am Stat Assoc 1959; 54 (285): 30-34
2. Scargle JD. Publication bias: the “file-drawer problem” in scientific inference. J Sci Explor 2000; 14 (02): 94-106
3. Rosenthal R. The file drawer problem and tolerance for null results. Psychol Bull 1979; 86: 638-641
4. Rothstein HR, Sutton AJ, Borenstein M, eds. Publication Bias in Meta-Analysis—Prevention, Assessment and Adjustments. West Sussex, England: John Wiley & Sons; 2005
5. Dickersin K, Min YI. NIH clinical trials and publication bias. Online J Curr Clin Trials 1993; Doc No 50
6. Decullier E, Lhéritier V, Chapuis F. Fate of biomedical research protocols and publication bias in France: retrospective cohort study. BMJ 2005; 331 (7507): 19-22
7. Song F, Parekh-Bhurke S, Hooper L, et al. Extent of publication bias in different categories of research cohorts: a meta-analysis of empirical studies. BMC Med Res Methodol 2009; 9: 79
8. Joober R, Schmitz N, Annable L, Boksa P. Publication bias: what are the challenges and can they be overcome? J Psychiatry Neurosci 2012; 37 (03): 149-152
9. Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA 1990; 263 (10): 1385-1389
10. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet 1991; 337 (8746): 867-872
11. Sackett DL. Bias in analytic research. J Chronic Dis 1979; 32 (1-2): 51-63
12. Kerr NL. HARKing: hypothesizing after the results are known. Pers Soc Psychol Rev 1998; 2 (03): 196-217
13. Simonsohn U, Nelson LD, Simmons JP. P-curve: a key to the file-drawer. J Exp Psychol Gen 2014; 143 (02): 534-547
14. Ioannidis JP. Why most published research findings are false. PLoS Med 2005; 2 (08): e124
15. Wacholder S, Chanock S, Garcia-Closas M, El Ghormli L, Rothman N. Assessing the probability that a positive report is false: an approach for molecular epidemiology studies. J Natl Cancer Inst 2004; 96 (06): 434-442