Homeopathy 2006; 95(01): 55
DOI: 10.1016/j.homp.2005.11.004
Letter to the Editor
Copyright © The Faculty of Homeopathy 2005

Sir? Is that bias?

David Reilly


Publication Date: 02 January 2018 (online)

The meta-analysis of clinical trials of homeopathy by Shang et al. is a fascinating lesson in bias, with useful examples for students.[ 1 ] Shang et al. advise us to counter the biases of small-scale meta-analyses by ‘borrowing strength’ from their larger context. So, before commenting on their small-scale data sets and interpretations, let me apply their teaching by summarizing their ‘big picture’: in 110 trials each of conventional drugs and homoeopathic drugs, ‘most odds ratios indicated a beneficial intervention’, i.e. both worked better than placebo. This fits previous large-scale meta-analyses of the limited homoeopathic trial data set, it fits conventional research, and it fits the impressions of many with hands-on experience of both systems.

Next, the paper offers two key theoretical contributions about bias, with useful practical demonstrations. The first piece of theory: ‘detection of bias is difficult when meta-analyses are based on small numbers of trials’. The first demonstration: two meta-analyses based on small numbers of trials; will the student spot the biases? Is it biased to reject the very positive one (eight respiratory trials) because, well, because it is so positive, and so it ‘might promote the conclusion that the results cannot be trusted’? Let us see: Shang et al. teach that small-scale conclusions should ‘borrow strength’ from the bigger surrounding picture and its confounding biases. ‘Please Sir, is it a bias that their a priori assumption was that homoeopathic effects are due to non-specific artefacts, and conventional effects are not?’ ‘Quiet boy!’

The second lesson in interpreting small-number meta-analyses illustrates their second theoretical contribution: that smaller trials, and those of lower quality, produce more beneficial effects. So they take two larger samples of trials, selected by a process and criteria of their own choosing (‘Sir, is that bias?’): one of 110 from the 200 or so trials available, the other of 110 from a third of a million (‘Sir? Is…’ ‘Quiet boy!’). These differing contexts of varying ‘borrowed strength’ then yield samples of different characteristics, of most interest for this demonstration of the ravages of bias: 19% of the trials of one therapy (let us call it X) are of higher quality vs 8% of those of the other (let us say Y). Shang et al.'s teaching helps us predict that analysis of X will yield less beneficial effects than that of Y. This says nothing about X or Y, just about trial biases. So, now for their crunch: will their theory work? Select down to 8 trials of X (from, say, 200) and 6 of Y (from, say, a third of a million), and they find that both work (i.e. odds ratio less than 1), but X shows less treatment effect than Y. Bravo, the theory works! Oh, in passing they happen to point out that X is homoeopathy, so it can't work, so it doesn't. And the Lancet anonymously announces the ‘End of Homoeopathy’. ‘Sir? Is that…’ ‘Quiet boy!’
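For readers unfamiliar with the convention invoked above, that ‘both work (i.e. odds ratio less than 1)’, the following is a minimal Python sketch of how an odds ratio is computed from a single trial's 2×2 table. The numbers are entirely hypothetical and are not drawn from Shang et al.; the sketch assumes, as in that paper's convention, that the outcome counted is unfavourable, so an odds ratio below 1 favours the treatment.

```python
def odds_ratio(events_treat, total_treat, events_placebo, total_placebo):
    """Odds of the (unfavourable) outcome under treatment,
    divided by the odds of the same outcome under placebo."""
    odds_treat = events_treat / (total_treat - events_treat)
    odds_placebo = events_placebo / (total_placebo - events_placebo)
    return odds_treat / odds_placebo

# Hypothetical illustration: 20/100 poor outcomes on treatment
# vs 35/100 on placebo.
or_value = odds_ratio(20, 100, 35, 100)
print(round(or_value, 3))  # → 0.464, i.e. below 1: treatment looks beneficial
```

With this coding of outcomes, a pooled odds ratio under 1 for both therapy X and therapy Y is what licenses the letter's summary that ‘both work’, with the ratio closer to 1 indicating the smaller treatment effect.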