Monday, 20 October 2008

More meta-analysis delight

Having whined and bullshitted about the 2005 meta-analysis (Shang et al.) that was published in the Lancet (which showed that larger, high-quality trials demonstrated no effect for homeopathy beyond placebo), the homeopaths are now trying a bit of number-crunching. This is an improvement, although they don't seem to be any better at number-crunching than they are at logic, as we saw with the recent Ludtke and Rutten paper in the Journal of Clinical Epidemiology.

In the Journal of Clinical Epidemiology, there is a proper review process, and the authors couldn't get away with writing just any arrant nonsense that came into their heads. This led to a paper that broadly supported the original conclusions of the Shang study, which obviously wasn't that much use. Luckily, Homeopathy, the in-house fanzine of the Faculty of Homeopathy, has no such qualms about publishing more or less any old rubbish (and I should know, I've been published in it myself). Hence a paper by Rutten and Stolper in the October issue of Homeopathy criticising the Shang paper, entitled "The 2005 meta-analysis of homeopathy: the importance of post-publication data". I'm not going to analyse the whole thing (there comes a point when life really is too short), but I do want to pick up on one blatant error that suggests they haven't carefully read the paper they are criticising.

One thing that Shang did was look at papers that were of good quality, based on how well they were randomised, how well they were blinded, and so on. They also sought to look at the "larger" trials, because their analysis showed that smaller trials with higher standard errors tended to be more positive. Rutten and Stolper claim that the cut-off size for larger trials was different between the homeopathy trials and conventional trials "without plausible reason". They write:

In the homeopathy group the cut-off value was n = 98, including eight trials (38% of the higher quality trials). The cut-off value for larger conventional studies in this analysis was n = 146, including six trials (66% of the higher quality trials). These cut-off values were considerably above the median sample size of 65. There were 31 homeopathy trials larger than the homeopathy cut-off value and 24 conventional trials larger than the conventional cutoff value. We can think of no criterion that could be common to the two cut-off values. This suggests that this choice was post-hoc.


What Rutten and Stolper are doing here is essentially accusing Shang and colleagues of fiddling the results so that homeopathy looked less effective than it really is. That is a serious accusation, and you might expect that someone making it would have carefully checked the original paper to see what the authors said their criterion for larger trials was. If you read the original Shang paper, on page 728 you will find this definition: "Trials with SE in the lowest quartile were defined as larger trials". There is nothing particularly opaque about this, and yet the idea that what was meant by "larger" trials was somehow unclear comes up time and time again.

Picking trials "with SE in the lowest quartile" may seem like an odd way of choosing the "larger" trials. Why not just pick a number, like 100, which Rutten and Stolper seem to want to do? Well, the quartile approach is the sensible one if you want to avoid post-hoc analysis, because you know in advance that a reasonable number of "larger" trials will be available to work with. If you just pick a number, you might find that almost none, or almost all, of your trials qualify as "larger", since you don't know the distribution of trial sizes until you have done your analysis.
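To make the quartile business concrete, here's a quick Python sketch of how such a cut-off works; the standard errors are made up for illustration, not taken from the Shang dataset:

```python
import numpy as np

# Made-up standard errors for twelve hypothetical trials (not the
# actual Shang et al. data).
standard_errors = np.array([0.55, 0.48, 0.40, 0.35, 0.30, 0.28,
                            0.25, 0.22, 0.18, 0.15, 0.12, 0.10])

# "Larger" trials are those with SE in the lowest quartile.
cutoff = np.percentile(standard_errors, 25)
larger = standard_errors[standard_errors <= cutoff]

# By construction, roughly a quarter of the trials qualify as "larger",
# whatever the distribution of trial sizes turns out to be.
print(f"SE cut-off: {cutoff:.2f}; number of 'larger' trials: {len(larger)}")
```

Note, too, that because standard error depends on trial size (and, for binary outcomes, on the number of events), the same "lowest quartile of SE" rule will naturally translate into different sample-size cut-offs in different sets of trials. That, presumably, is where the n = 98 and n = 146 figures that so puzzled Rutten and Stolper come from: the criterion common to both cut-off values is precisely the one stated on page 728.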

Another point: in a comparison between homeopathy and conventional medicine, the discrepancy in cut-off sizes actually gives homeopathy an advantage. Smaller homeopathy trials that would have missed the cut applied to the conventional trials are included, and these are the trials most likely to show spurious positive results. So if the difference in cut-off sizes really were due to the authors' bias, then they were biased in favour of homeopathy.

So, Rutten and Stolper have made an erroneous accusation of biased post-hoc analysis because they did not bother to read properly, or think carefully about, the paper they are criticising. In my opinion they have made themselves look rather silly. Sillier still, their entire paper is an exercise in post-hoc analysis, as they try to find ways of torturing the data to get the result they want, i.e. that homeopathy works. This is how things go in the wonderful world of homeopathy, where all kinds of pseudoscience can be justified as long as they give you the right answer.

Friday, 10 October 2008

Result!

I've just had the first paper from my post-doctoral work at Manchester accepted by the Journal of Structural Geology, pending minor revisions.

Beeeeerrrrrssssss!

Wednesday, 8 October 2008

Shang study remains firmly in the water

In the comments to a post at Respectful Insolence, my favourite homeopath Dana Ullman weighs in with the suggestion that the Shang et al. meta-analysis of trials of homeopathy and conventional medicine (which has been written about extensively by me and by apgaylard) has been "blown out of the water". Ullman makes this assertion based on a new paper by Ludtke and Rutten, entitled "The conclusions on the effectiveness of homeopathy highly depend on the set of analyzed trials", that has been accepted by the Journal of Clinical Epidemiology. It's nice to see that this paper does exist after all. So does it really blow Shang out of the water? A quick look at the conclusions tells us that the answer is no:

Our results do neither prove that homeopathic medicines are superior to placebo nor do they prove the opposite...it occurs that Shang’s conclusions are not so definite as they have been reported and discussed.


What Ludtke and Rutten have done is actually quite interesting, though, so I'd like to explore it in a bit more detail. They have taken the 21 trials of homeopathy that Shang et al. considered to be of "higher quality", and performed various analyses on this subset to see what happens when trials are excluded on the basis of their size or other parameters.

The authors constructed a funnel plot of odds ratio (a measure of the size of the effect of the intervention: values below 1 indicate a benefit over placebo) versus standard error (which depends on trial size). For the full set of 21 trials, they found substantial (but not statistically significant) asymmetry in the plot; an asymmetrical funnel plot indicates that biases are present in the data, which might be publication bias, small-study effects, or a whole host of other things. They also note that no evidence of asymmetry was found for the 8 largest of the 21 trials. This actually reiterates one of the main points of the Shang analysis: that a large number of trials is needed to identify asymmetry, and therefore to get an idea of bias in the dataset. That is why Shang et al. looked at all 110 trials that matched their inclusion criteria; that enabled them to identify a highly significant asymmetry in their funnel plot, which was important because it showed that the smaller the study, the larger the apparent effect.
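If you want to test for funnel plot asymmetry on your own numbers, the standard tool is Egger's regression test. Here's a minimal sketch in Python (using the statsmodels library); this is my own illustration, not code from the paper, and it assumes you have log odds ratios and standard errors in NumPy arrays:

```python
import numpy as np
import statsmodels.api as sm

def egger_test(log_or, se):
    """Egger's regression test for funnel plot asymmetry.

    Regress the standardised effect (log OR / SE) on precision (1 / SE);
    an intercept significantly different from zero suggests asymmetry,
    i.e. small-study effects or publication bias.
    """
    y = log_or / se
    X = sm.add_constant(1.0 / se)         # intercept + precision
    fit = sm.OLS(y, X).fit()
    return fit.params[0], fit.pvalues[0]  # intercept and its p-value
```

With only eight trials, a test like this has very little power, which is exactly why detecting asymmetry needs a lot of trials.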

The thing in the paper that homeopaths will probably emphasise is that for the set of 21 higher quality trials, the pooled odds ratio (from a random effects meta-analysis) was 0.76, suggesting a benefit over placebo. But wait! What are the 95% confidence intervals? 0.59-0.99. That covers anything from an unimpressive benefit to a clinically negligible one. In other words, it's a deeply uninspiring result, but homeopaths will be trying to tell you that it shows that homeopathic remedies do something.
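For anyone who wants to see where such a pooled odds ratio comes from, here's a sketch of the standard DerSimonian-Laird random effects calculation. Again, this is my own illustrative code, not the authors':

```python
import numpy as np

def dersimonian_laird(log_or, se):
    """Pool log odds ratios with a DerSimonian-Laird random effects model.

    Returns the pooled odds ratio and its 95% confidence interval.
    """
    w = 1.0 / se**2                              # inverse-variance weights
    fixed = np.sum(w * log_or) / np.sum(w)       # fixed-effect estimate
    q = np.sum(w * (log_or - fixed)**2)          # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_or) - 1)) / c) # between-trial variance
    w_re = 1.0 / (se**2 + tau2)                  # random-effects weights
    pooled = np.sum(w_re * log_or) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    ci = (np.exp(pooled - 1.96 * se_pooled),
          np.exp(pooled + 1.96 * se_pooled))
    return np.exp(pooled), ci
```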

The interesting thing that the authors then did was to take the two largest trials and look at what happens to the results as trials are added in descending order of patient numbers (Figure 2 of the paper). Once the 14 largest trials have been included, the resulting odds ratio is always significantly less than 1 (except in the case of 17 trials). This is interesting in a way, but all it really does is demonstrate what Shang et al. said: that smaller trials are more likely to give positive results. So the more trials you add, the more positive but less reliable the results become; with 14 or more trials you might just about scrape a statistically significant benefit, but that result is not as reliable as the analysis restricted to the eight largest trials. It's also worth noting that the upper limits of the confidence intervals in Figure 2 are always close to 1, showing that any benefit is likely to be clinically insignificant. The authors perform a similar analysis in their Figure 3, except using a meta-regression rather than a random effects meta-analysis, and there they find no statistically significant benefit no matter how many studies are included.
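A cumulative analysis of the Figure 2 sort is easy to sketch, reusing the dersimonian_laird function above. The log odds ratios and standard errors below are invented to mimic "smaller trials look more positive"; they are not the real trial data:

```python
import numpy as np

# Invented data for eight hypothetical trials, ordered arbitrarily.
log_or = np.array([-0.05, -0.10, -0.20, -0.15, -0.30, -0.45, -0.40, -0.60])
se     = np.array([ 0.10,  0.12,  0.15,  0.18,  0.22,  0.28,  0.30,  0.35])

order = np.argsort(se)                 # lowest SE ~ largest trial first
for k in range(2, len(log_or) + 1):
    idx = order[:k]
    pooled, (lo, hi) = dersimonian_laird(log_or[idx], se[idx])
    flag = " *" if hi < 1.0 else ""    # * = upper CI limit below 1
    print(f"{k} trials: OR {pooled:.2f} ({lo:.2f}-{hi:.2f}){flag}")
```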

Another thing that homeopaths will probably jump on is that if one very negative trial (on Arnica for muscle soreness) is omitted from the set of 21 trials, the results look more positive (odds ratio 0.73, 95% confidence interval 0.56-0.93) when a random effects meta-analysis is used. There are a number of other trials whose removal from the dataset gives apparently positive results, but only under a random effects analysis: a meta-regression shows no statistically significant benefit whichever study you remove. And when the same exercise is performed on the 8 larger, higher quality trials originally identified by Shang et al., no statistically significant benefit is found whichever trial you decide to remove. Again, note that the 8 largest trials are intrinsically more reliable than the smaller ones.
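The leave-one-out exercise is just as simple to reproduce in outline, again reusing dersimonian_laird and the invented data above:

```python
# Re-pool with each trial removed in turn, to see whether any single
# trial drives the overall result.
for i in range(len(log_or)):
    mask = np.arange(len(log_or)) != i
    pooled, (lo, hi) = dersimonian_laird(log_or[mask], se[mask])
    print(f"without trial {i + 1}: OR {pooled:.2f} ({lo:.2f}-{hi:.2f})")
```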

All the way through the paper, it is noticeable that meta-regression analysis shows more negative results than a random effects meta-analysis. In fact, the authors point out that in their meta-regression analysis "no single predicted OR [odds ratio] could be shown to differ significantly from unity". So which should be used? The authors write "... there is no guideline which tells a researcher when to prefer meta-regression to random effects meta-analysis or vice versa. As the statistical test for asymmetry only has a small power, Egger suggests to perform meta-regressions when the respective P-value falls below 0.10. Applying this criterion there seemed to be no need to perform a meta-regression in most of the subsets we analyzed". But this conclusion is based on the restricted analysis of 21 higher quality trials. Shang et al.'s original analysis of 110 trials of homeopathy showed asymmetry with p<0.0001, suggesting that a meta-regression analysis would be more appropriate.
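For completeness, here's what an Egger-style meta-regression looks like in the same vein: regress log OR on SE, weighted by inverse variance, and read the intercept as the predicted effect of an idealised infinitely large trial. As before, this uses the invented log_or and se arrays from the earlier sketch, not the paper's data:

```python
import numpy as np
import statsmodels.api as sm

X = sm.add_constant(se)                    # intercept + SE as predictor
fit = sm.WLS(log_or, X, weights=1.0 / se**2).fit()
bias_adjusted_or = np.exp(fit.params[0])   # predicted OR as SE -> 0
print(f"bias-adjusted OR: {bias_adjusted_or:.2f}; "
      f"slope p-value: {fit.pvalues[1]:.3f}")
```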

So, the upshot is that the paper's title is misleading. The conclusions on the effectiveness of homeopathy do not highly depend on the set of analyzed trials, if an appropriate test is used. Asymmetry is not adequately identified in the dataset because too few trials are used. And, even if you can convince yourself that you can get a statistically significant benefit by playing around with the numbers, the actual clinical benefit is negligible. In some ways, the paper actually reinforces the conclusions of Shang et al., and it certainly doesn't show that homeopathic medicines work.

Friday, 3 October 2008

Sayed Gooda: a tribute


There was sad news for me this week, and for more or less everyone who has done geological fieldwork in Egypt. Sayed Gooda, our driver on our trips to Sinai, has passed away.

Without Sayed, our work would have been a lot more difficult. Sayed didn't just drive us, over bad roads and difficult terrain. He looked after us, kept us out of trouble, made us delicious sandwiches for lunch (the highlight of the Sinai field day), and tried valiantly to teach us Arabic. Practically everyone who has worked on the Gulf of Suez owes something to Sayed. I will always remember arriving in a hot, noisy and fume-choked Cairo for the first time, nerves jangling from an overnight flight from Amsterdam, and being put at ease right away by the big friendly man in the big friendly green jeep.

Sayed was in Saudi Arabia when he passed away, and has been buried there. In the e-mail telling us of his death, his friend Tarek Moustafa wrote "I think he died Happy as this would have been a dream come true to him to die in the Holy lands". We can only hope so. Thanks for everything, Sayed.