Our results do neither prove that homeopathic medicines are superior to placebo nor do they prove the opposite...it occurs that Shang’s conclusions are not so definite as they have been reported and discussed.
What Ludtke and Rutten have done is actually quite interesting, though, so I'd like to explore it in a bit more detail. They have taken the 21 trials of homeopathy that Shang et al. considered to be of "higher quality", and then performed various analyses on this subset to see what happens if you exclude trials based on their size or other parameters.
The authors plotted a funnel plot of odds ratio (a measure of the size of the effect of the intervention: values below 1 indicate a benefit over placebo) against standard error (which depends on trial size). For all 21 trials, they found substantial (but not statistically significant) asymmetry in the plot; if a funnel plot is asymmetrical, biases are present in the data, whether publication bias, small-study effects, or a whole host of other things. They also note that no evidence of asymmetry was found for the 8 largest of the 21 trials. This actually reiterates one of the main points of the Shang analysis: that a large number of trials is needed to detect asymmetry, and therefore to get an idea of the bias in the dataset. That is why Shang et al. looked at all 110 trials that matched their inclusion criteria; that enabled them to identify a highly significant asymmetry in their funnel plot. This was important because it showed that the smaller the study, the larger the apparent effect.
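For anyone curious what an asymmetry test actually involves, here's a minimal sketch in Python using Egger's regression test, one standard way of checking a funnel plot for asymmetry. The numbers are invented purely for illustration; they are not the trial data from Shang et al. or from Ludtke and Rutten.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative (made-up) data: log odds ratios and standard errors for 8 trials.
# Values below 0 on the log scale correspond to odds ratios below 1 (a benefit).
log_or = np.array([-0.80, -0.55, -0.60, -0.30, -0.40, -0.12, -0.05, 0.02])
se = np.array([0.45, 0.40, 0.35, 0.30, 0.25, 0.15, 0.10, 0.08])

# Egger's test: regress the standardised effect (logOR / SE) on precision (1 / SE).
# An intercept that differs significantly from zero suggests funnel-plot asymmetry
# (small-study effects, publication bias, and so on).
z = log_or / se
precision = 1.0 / se
fit = sm.OLS(z, sm.add_constant(precision)).fit()
intercept, p_value = fit.params[0], fit.pvalues[0]
print(f"Egger intercept = {intercept:.2f}, p = {p_value:.3f}")
```

With only a handful of trials this test has very little power, which is exactly why Shang et al. needed their full set of 110 trials to demonstrate asymmetry convincingly.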
The thing in the paper that homeopaths will probably emphasise is that for the set of 21 higher quality trials, the pooled odds ratio (from a random effects meta-analysis) was 0.76, suggesting a benefit over placebo. But wait! What are the 95% confidence intervals? 0.59-0.99. That spans anything from a modest benefit to a clinically negligible one, with the upper limit only just below 1. In other words, it's a deeply uninspiring result, but homeopaths will be trying to tell you that it shows that homeopathic remedies do something.
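To make the "pooled odds ratio" less mysterious, here's a minimal sketch of DerSimonian-Laird random effects pooling, continuing with the made-up log_or and se arrays from the previous snippet. It's the textbook version of the method, not the actual code behind the paper's figures.

```python
import numpy as np

def dersimonian_laird(log_or, se):
    """Minimal DerSimonian-Laird random-effects pooling of log odds ratios.
    Returns the pooled OR and its 95% confidence interval."""
    w = 1.0 / se ** 2                                # inverse-variance (fixed-effect) weights
    y_fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - y_fixed) ** 2)          # Cochran's Q (heterogeneity)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_or) - 1)) / c)     # between-trial variance estimate
    w_star = 1.0 / (se ** 2 + tau2)                  # random-effects weights
    pooled = np.sum(w_star * log_or) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return np.exp(pooled), (np.exp(lo), np.exp(hi))

pooled_or, (ci_lo, ci_hi) = dersimonian_laird(log_or, se)
print(f"Pooled OR = {pooled_or:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
```

If the upper confidence limit creeps above 1 the result is not statistically significant; if it sits just below 1, you get exactly the sort of "significant but underwhelming" result described above.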
The interesting thing the authors then did was to take the two largest trials and look at what happens to the results as you add the remaining trials in descending order of patient numbers (Figure 2 of the paper). Once you've included the 14 largest trials, the pooled odds ratio stays significantly below 1 as further trials are added (except in the case of 17 trials). This is interesting in a way, but all it really does is demonstrate what Shang et al. said: that smaller trials are more likely to give positive results. So the more trials you add, the more positive but less reliable the results become; with 14 or more trials you might just about scrape a statistically significant benefit, but that result is not as reliable as the analysis restricted to the 8 largest trials. It's also worth noting that the upper limits of the confidence intervals in Figure 2 are always close to 1, showing that any benefit is likely to be clinically insignificant. They perform a similar analysis in their Figure 3, except using a meta-regression rather than a random effects meta-analysis, and that shows no statistically significant benefit no matter how many studies are included.
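The cumulative analysis in Figure 2 is easy to mimic with the same helper: sort the trials by size, start with the largest two, and re-pool after each addition. The patient numbers below are invented, purely to show the mechanics; this continues from the previous sketches (same log_or, se and dersimonian_laird).

```python
# Hypothetical patient numbers for the made-up trials above.
n_patients = np.array([40, 55, 60, 90, 120, 250, 400, 600])

order = np.argsort(-n_patients)                  # indices, largest trial first
for k in range(2, len(order) + 1):
    idx = order[:k]
    or_k, (lo, hi) = dersimonian_laird(log_or[idx], se[idx])
    sig = "significant" if hi < 1.0 else "not significant"
    print(f"{k:2d} largest trials: OR = {or_k:.2f} (95% CI {lo:.2f}-{hi:.2f}) -> {sig}")
```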
Another thing that homeopaths will probably jump on is that if one very negative trial (on Arnica for muscle soreness) is omitted from the set of 21 trials, the results look more positive (odds ratio 0.73, 95% confidence interval 0.56-0.93) when a random effects meta-analysis is used. There are a number of other trials whose removal gives apparently positive results, but only under the random effects analysis: a meta-regression shows no statistically significant benefit no matter which study you remove. And when the same exercise is carried out on the 8 large, higher quality trials originally identified by Shang et al., no statistically significant benefit is found whichever trial you remove. Again, note that the 8 largest trials are intrinsically more reliable than the smaller ones.
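The "omit the Arnica trial" result is just one row of a leave-one-out sensitivity analysis, which looks something like this (still using the invented data and the helper from the sketches above):

```python
# Leave-one-out sensitivity analysis: drop each trial in turn and re-pool,
# to see how much any single trial drives the overall result.
for i in range(len(log_or)):
    keep = np.arange(len(log_or)) != i
    or_i, (lo, hi) = dersimonian_laird(log_or[keep], se[keep])
    sig = "significant" if hi < 1.0 else "not significant"
    print(f"Trial {i} omitted: OR = {or_i:.2f} (95% CI {lo:.2f}-{hi:.2f}) -> {sig}")
```

If the conclusion flips depending on which single trial you drop, that tells you more about the fragility of the dataset than about homeopathy.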
All the way through the paper, it is noticeable that the meta-regression analysis gives more negative results than the random effects meta-analysis. In fact, the authors point out that in their meta-regression analysis "no single predicted OR [odds ratio] could be shown to differ significantly from unity". So which should be used? The authors write "... there is no guideline which tells a researcher when to prefer meta-regression to random effects metaanalysis or vice versa. As the statistical test for asymmetry only has a small power, Egger suggests to perform metaregressions when the respective P-value falls below 0.10. Applying this criterion there seemed to be no need to perform a meta-regression in most of the subsets we analyzed". But that conclusion is based on the restricted analysis of 21 higher quality trials: Shang et al.'s original analysis of 110 trials of homeopathy showed asymmetry with p < 0.0001, suggesting that a meta-regression analysis would be the more appropriate choice.
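For comparison, here's what a simple meta-regression of effect size on standard error looks like: predict the odds ratio for a hypothetical trial with SE = 0 (i.e. an infinitely large trial). This is a simplified weighted least squares sketch of the general idea, continuing from the snippets above; the model actually fitted in the paper may differ in its details.

```python
import statsmodels.api as sm

# Meta-regression sketch: regress logOR on SE, weighting by inverse variance,
# and read off the predicted odds ratio at SE = 0.
# Not necessarily the exact model used by Ludtke and Rutten.
X = sm.add_constant(se)
fit = sm.WLS(log_or, X, weights=1.0 / se ** 2).fit()
pred_log_or = fit.params[0]                      # intercept = predicted logOR at SE = 0
ci_lo, ci_hi = fit.conf_int()[0]                 # 95% CI for the intercept
print(f"Predicted OR at SE = 0: {np.exp(pred_log_or):.2f} "
      f"(95% CI {np.exp(ci_lo):.2f}-{np.exp(ci_hi):.2f})")
```

Because the regression explicitly models the relationship between effect size and trial size, it discounts exactly the small-study inflation that a plain random effects pooling soaks up, which is why it keeps coming out more negative.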
So, the upshot is that the paper's title is misleading. If an appropriate test is used, the conclusions on the effectiveness of homeopathy do not depend strongly on the set of analyzed trials. Asymmetry cannot be adequately identified in the restricted dataset simply because too few trials are included. And even if you can convince yourself that you can wring a statistically significant benefit out of the numbers, the actual clinical benefit is negligible. In some ways, the paper actually reinforces the conclusions of Shang et al., and it certainly doesn't show that homeopathic medicines work.