Homeopathy 2009; 98(02): 129
DOI: 10.1016/j.homp.2009.01.002
Letter to the Editor
Copyright © The Faculty of Homeopathy 2009

Reply to Wilson

A.L.B. Rutten, C.F. Stolper


Wilson's main objection is to our suspicion of post-hoc hypothesising by Shang et al. Wilson quotes Shang et al.'s definition, “Trials with SE [standard error] in the lowest quartile were defined as larger trials”, and argues that ‘larger trials’ were therefore predefined. We agree with Wilson that this is indeed a strange way of defining ‘larger trials’, but it is perfectly possible to define larger studies a priori by sample size, in terms such as ‘above the median’, as we suggested in our paper. Shang et al did not mention how sensitive the result is to this choice of cut-off value: if the median sample size is chosen (including 14 trials), homeopathy has the best (significantly positive) result; if eight trials are selected, homeopathy has the worst result. In the post-publication data they mentioned sample sizes but not standard errors. Is it not odd that the authors did not mention that homeopathy appears effective under a fully plausible definition of ‘larger’ trials, yet stated that it is not effective under a strange definition of ‘larger’, and that this could not be seen because of the missing data?
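The following is a minimal, purely illustrative sketch with invented numbers (it uses neither Shang et al.'s data nor their code). It only shows the general point that a rule based on the lowest standard-error quartile and a rule based on sample size above the median can select different subsets of trials and therefore support different pooled estimates.

```python
# Illustrative sketch only: hypothetical trial data, not the Shang et al. dataset.
# Two definitions of "larger trials" -- SE in the lowest quartile versus sample
# size above the median -- can pick different subsets and hence different results.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trials: sample size n, standard error roughly ~ 1/sqrt(n), log odds ratio.
n = rng.integers(30, 600, size=21)
se = 2.0 / np.sqrt(n) * rng.uniform(0.8, 1.3, size=21)   # SE is not a pure function of n
log_or = rng.normal(-0.2, 0.3, size=21)

def larger_by_se_quartile(se):
    """The stated rule: trials with SE in the lowest quartile."""
    return se <= np.quantile(se, 0.25)

def larger_by_median_n(n):
    """The alternative definition: sample size above the median."""
    return n > np.median(n)

def pooled_log_or(log_or, se, mask):
    """Fixed-effect (inverse-variance) pooled estimate for the selected trials."""
    w = 1.0 / se[mask] ** 2
    return np.sum(w * log_or[mask]) / np.sum(w)

sel_se = larger_by_se_quartile(se)
sel_n = larger_by_median_n(n)
print("trials selected by SE quartile:", int(sel_se.sum()))
print("trials selected by median n:   ", int(sel_n.sum()))
print("same subset selected?          ", bool(np.array_equal(sel_se, sel_n)))
print("pooled OR (SE-quartile rule):  ", round(float(np.exp(pooled_log_or(log_or, se, sel_se))), 2))
print("pooled OR (median-n rule):     ", round(float(np.exp(pooled_log_or(log_or, se, sel_n))), 2))
```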

We questioned the possibility of true pre-specification of hypotheses regarding homeopathy in our introduction. The director of Shang et al.'s analysis, Egger, analysed Linde's data in a paper published in 2001, introducing the hypothesis that low quality, especially in smaller trials, could bias results in homeopathy.[ 1 ] On the other hand, smaller good-quality trials can produce stronger effects through better selection of patients. In that case the asymmetry of the funnel plot is not caused by bias, and extrapolation towards the largest trials by meta-regression analysis is questionable. We think the authors should have referenced this paper in the introduction rather than in the methods section, because it was the logical starting point for Shang's analysis.
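A minimal simulation sketch, again with invented numbers, may help to illustrate this point: if the smaller trials address a condition with a genuinely larger treatment effect, an Egger-style regression of effect size on standard error shows funnel-plot “asymmetry” even though no bias of any kind is present.

```python
# Illustrative sketch only (hypothetical numbers): unbiased trials can still
# produce an asymmetric funnel plot when the smaller trials were run in a
# condition with a genuinely larger treatment effect.
import numpy as np

rng = np.random.default_rng(1)

# Small trials in a high-effect condition, large trials in a low-effect condition.
se_small = rng.uniform(0.30, 0.50, 15)      # small trials -> large standard errors
se_large = rng.uniform(0.05, 0.15, 15)      # large trials -> small standard errors
theta_small = rng.normal(-0.8, se_small)    # true log OR = -0.8, no bias
theta_large = rng.normal(-0.2, se_large)    # true log OR = -0.2, no bias

se = np.concatenate([se_small, se_large])
theta = np.concatenate([theta_small, theta_large])

# Egger-style asymmetry check: weighted regression of effect on SE.
# A non-zero slope is often read as "small-study bias", yet here none exists.
w = 1.0 / se**2
X = np.column_stack([np.ones_like(se), se])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * theta))
print("intercept (extrapolated effect at SE -> 0):", round(float(beta[0]), 2))
print("slope (apparent funnel-plot asymmetry):    ", round(float(beta[1]), 2))
```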

We were indeed amazed that no matching trial could be found for Wiesenauer's homeopathic trial on chronic polyarthritis. Shang et al. did not specify criteria for matching trials. We would expect the authors to explain this exclusion, because Wiesenauer's trial would have made a difference in the meta-regression analysis and possibly also in the selection of the eight larger good-quality trials.

Wilson's remark about prominent homeopaths choosing muscle soreness as an indication is not relevant. Using a marathon as the starting point for a trial is understandable from an organisational point of view, although its external validity is open to doubt. Publishing negative trials in alternative medicine journals is correct behaviour. There is, however, strong evidence that homeopathic Arnica is not effective after long-distance running, and homeopathy as a method should not be judged by that outcome.[ 2 ]

In the Homeopathy paper we did mention that meta-regression analysis showed no difference between homeopathy and placebo and referred to our objections to this method of meta-regression in the Journal of Clinical Epidemiology,[ 3 ] which were: “First, the asymmetry of funnel-plots is not necessarily a result of bias. It can also occur when smaller studies show larger effect just because they were done in a condition with high treatment effects, and thus requiring smaller patient numbers. Moreover, meta-regression predicts the OR at an extreme value (the minimum standard deviation observed).

It is well known from mathematical statistics, that these predictions are imprecise, especially when the number of observations is small and the estimate of the regression line is unstable. This is the case in our analyses. For example, the funnel plot of the four largest trials was negatively skewed (AC = 0.13), whereas it was positively skewed for the five largest trials (AC = 1.97). Thus, our meta-regression analyses generally suffer from low statistical power”.
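The instability referred to in this quotation can be sketched with a small, purely hypothetical example: when only a handful of trials is available, the odds ratio “predicted” by a weighted regression at the smallest observed standard error swings noticeably when a single trial is added or removed.

```python
# Illustrative sketch only (hypothetical numbers): with few trials, the
# meta-regression line is unstable, so the OR predicted at the smallest
# observed SE changes markedly under leave-one-out perturbation.
import numpy as np

rng = np.random.default_rng(2)

se = np.sort(rng.uniform(0.05, 0.45, 8))    # eight hypothetical trials
log_or = rng.normal(-0.3, se)               # unbiased effects around OR ~ 0.74

def predicted_or_at_min_se(se, log_or):
    """Weighted regression of log OR on SE, evaluated at the smallest SE."""
    w = 1.0 / se**2
    X = np.column_stack([np.ones_like(se), se])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * log_or))
    return float(np.exp(beta[0] + beta[1] * se.min()))

print("all 8 trials:", round(predicted_or_at_min_se(se, log_or), 2))
for i in range(len(se)):                    # leave-one-out sensitivity
    keep = np.arange(len(se)) != i
    print(f"without trial {i + 1}:", round(predicted_or_at_min_se(se[keep], log_or[keep]), 2))
```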

Furthermore, Shang et al. disregarded safety. The fact that the meta-regression for conventional medicine did not extrapolate to ineffectiveness was strongly influenced by three therapies that are not available because of serious adverse effects. As stated above, the comparative meta-regression analysis was questionable because of the difference between homeopathy and conventional medicine in the quality of the smaller trials. This difference was not mentioned in Shang's paper.

As stated in our paper, the difference in quality and sample size between the homeopathy and conventional trials on muscle soreness was an important factor in the different outcomes for homeopathy and conventional medicine. One ineffective indication, covering four trials, was included for homeopathy but excluded for conventional medicine. Further influences on the different outcomes are differences in quality, safety and publication bias.

The hypotheses stated by Shang et al. in their introduction were not as clear as the hypothesis posed by Sterne, Egger and Davey Smith in 2001.[ 1 ] What hypothesis other than that quality in homeopathy trials is worse than in conventional trials could be tested by this comparison with a matched set of conventional trials? We showed that the matching was lost once this hypothesis about quality was abandoned in order to compare effects. The conclusion that homeopathy is a placebo effect and that conventional medicine is not was therefore not, as the authors state, based on a comparative analysis of carefully matched trials.

 
  • References

  • 1 Sterne J.A.C., Egger M., Smith G.D. Investigating and dealing with publication and other biases in meta-analysis. BMJ 2001; 323: 101-105.
  • 2 Ernst E., Barnes J. Are homoeopathic remedies effective for delayed onset muscle soreness? A systematic review of placebo-controlled trials. Perfusion 1998; 11: 4-8.
  • 3 Lüdtke R., Rutten A.L. The conclusions on the effectiveness of homeopathy highly depend on the set of analyzed trials. J Clin Epidemiol 2008; 61: 1197-1204. doi:10.1016/j.jclinepi.2008.06.015