Monday 20 October 2008

More meta-analysis delight

Having whined and bullshitted about the 2005 meta-analysis (Shang et al.) that was published in the Lancet (which showed that larger, high-quality trials demonstrated no effect for homeopathy beyond placebo), the homeopaths are now trying a bit of number-crunching. This is an improvement, although they don't seem to be any better at number-crunching than they are at logic, as we saw with the recent Ludtke and Rutten paper in the Journal of Clinical Epidemiology.

In the Journal of Clinical Epidemiology, there is a proper review process, and the authors couldn't get away with writing just any arrant nonsense that came into their heads. This led to a paper that broadly supported the original conclusions of the Shang study, which obviously wasn't much use to them. Luckily, Homeopathy, the in-house fanzine of the Faculty of Homeopathy, has no such qualms about publishing more or less any old rubbish (and I should know, I've been published in it myself). Hence a paper by Rutten and Stolper in the October issue of Homeopathy criticising the Shang paper, entitled "The 2005 meta-analysis of homeopathy: the importance of post-publication data". I'm not going to analyse the whole thing (there comes a point when life really is too short), but I do want to pick up on one blatant error that suggests they haven't carefully read the paper they are criticising.

One thing that Shang did was look at papers that were of good quality, based on how well they were randomised, how well they were blinded, and so on. They also sought to look at the "larger" trials, because their analysis showed that smaller trials with higher standard errors tended to be more positive. Rutten and Stolper claim that the cut-off size for larger trials was different between the homeopathy trials and conventional trials "without plausible reason". They write:

In the homeopathy group the cut-off value was n = 98, including eight trials (38% of the higher quality trials). The cut-off value for larger conventional studies in this analysis was n = 146, including six trials (66% of the higher quality trials). These cut-off values were considerably above the median sample size of 65. There were 31 homeopathy trials larger than the homeopathy cut-off value and 24 conventional trials larger than the conventional cutoff value. We can think of no criterion that could be common to the two cut-off values. This suggests that this choice was post-hoc.


What Rutten and Stolper are doing here is essentially accusing Shang and colleagues of fiddling the results so that homeopathy looked less effective than it really is. That is a serious accusation, and you might expect that someone making it would have carefully checked the original paper to see what the authors said their criterion for larger trials was. If you read the original Shang paper, on page 728 you will find this definition: "Trials with SE in the lowest quartile were defined as larger trials". That is the common criterion Rutten and Stolper say they cannot think of: the same "lowest quartile of standard error" rule, applied to two sets of trials with different size distributions, naturally produces different sample-size cut-offs. There is nothing particularly opaque about this, and yet the idea that what was meant by "larger" trials was somehow unclear comes up time and time again.

Picking trials "with SE in the lowest quartile" may seem like an odd way of choosing the "larger" trials. Why not just pick a number, like 100, which is what Rutten and Stolper seem to want to do? Well, if you want to avoid post-hoc analysis it is a sensible thing to do, because you know in advance that you will have a reasonable number of "larger" trials to work with. If you just pick a number, you might find that almost none, or almost all, of your trials qualify as "larger", since you don't know what the distribution of trial sizes is until you do the analysis.
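To make that concrete, here is a minimal sketch in Python (entirely my own illustration, with made-up trial sizes and a simple 1/sqrt(n) stand-in for the standard error; nothing here is Shang et al.'s data or code) of how a "lowest quartile of SE" rule behaves compared with a fixed sample-size cut-off:

import numpy as np

rng = np.random.default_rng(0)

def larger_trials_by_se(sample_sizes):
    # Flag trials whose standard error falls in the lowest quartile.
    # SE is approximated here as 1/sqrt(n), which preserves the ranking
    # for a given outcome variance; the ranking is all that matters.
    se = 1.0 / np.sqrt(sample_sizes)
    cutoff = np.quantile(se, 0.25)
    return sample_sizes[se <= cutoff]

# Two hypothetical sets of trials with different size distributions.
homeopathy_n = rng.integers(20, 250, size=110)     # mostly smallish trials
conventional_n = rng.integers(40, 600, size=110)   # generally larger trials

for label, sizes in [("homeopathy", homeopathy_n), ("conventional", conventional_n)]:
    larger = larger_trials_by_se(sizes)
    print(f"{label}: {len(larger)} 'larger' trials (always about a quarter), "
          f"smallest qualifying n = {larger.min()}")

# The same quartile rule yields different sample-size cut-offs in the two
# groups simply because their size distributions differ, whereas a fixed
# cut-off such as n > 100 catches very different proportions of each set:
print("n > 100 in homeopathy set:", np.sum(homeopathy_n > 100))
print("n > 100 in conventional set:", np.sum(conventional_n > 100))

The point is that the quartile rule can be written down before you have looked at a single trial, and it always gives you a quarter of each set to work with; the sample-size cut-offs that fall out of it (the n = 98 and n = 146 that so puzzled Rutten and Stolper) are consequences of the rule, not separate post-hoc choices.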

Another point is that, in a comparison between homeopathy and conventional medicine, the discrepancy in the sample-size cut-offs actually gives homeopathy an advantage. This is because smaller homeopathy trials, which would have missed the cut had it been set at the conventional value, are included, and these are exactly the trials most likely to show spurious positive results. So, if the difference in cut-offs really were due to the authors' bias, then they were biased in favour of homeopathy.
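If you want to see how that plays out, here is another toy simulation (again entirely my own illustration, with an arbitrary selection model; it is not anything from Shang et al. or Rutten and Stolper). Assume a treatment with no effect at all, and suppose that trials whose point estimate happens to favour the treatment are always written up while the rest only sometimes are. The noisier small trials then look distinctly "positive" on average, while the large ones stay close to the truth:

import numpy as np

rng = np.random.default_rng(1)

def mean_published_effect(n_per_arm, n_trials=100_000):
    # Average published effect estimate when the true effect is exactly zero.
    se = np.sqrt(2.0 / n_per_arm)                 # SE of a mean difference, SD = 1 per arm
    effects = rng.normal(0.0, se, size=n_trials)  # what each simulated trial observes
    # Toy selection rule (an arbitrary assumption): positive-looking trials
    # are always published, the rest only 30% of the time.
    published = (effects > 0) | (rng.random(n_trials) < 0.3)
    return effects[published].mean()

for n in (25, 50, 100, 400):
    print(f"n = {n:4d} per arm: mean published effect = {mean_published_effect(n):+.3f}")

# Smaller trials end up with visibly inflated average effects even though the
# true effect is zero; the larger the trial, the smaller the distortion.

Exactly how much distortion you get depends on the selection model, which is made up here, but the direction is the point: letting smaller trials into the "larger trials" analysis can only flatter homeopathy.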

So, Rutten and Stolper have made an erroneous accusation of biased post-hoc analysis because they did not bother to properly read, or carefully think about, the paper they are criticising. In my opinion they have made themselves look rather silly. Sillier still, their entire paper is an exercise in post-hoc analysis, as they try to find ways of torturing the data to get the result they want, i.e. that homeopathy works. This is how things go in the wonderful world of homeopathy, where all kinds of pseudoscience can be justified as long as they give you the right answer.

9 comments:

pj said...

Presumably they are also unaware of the concept of standard error, hence their comments concentrating on sample size.

Anonymous said...

Out of interest - in the 'background' section they refer to "a discrepancy between the outcome of a meta-analysis published in 1997 of 89 trials of homeopathy by Linde et al and an analysis of 110 trials by Shang et al published in 2005". Didn't Linde et al, though, (a) find no evidence that hpathy worked for any particular condition; and (b) later conclude, in another paper, that the conclusions of their meta-analysis were weakened by their more recent work looking at trial quality?

If Rutten and Stolper were worried about this discrepancy and this is their main reason for publishing in Homeopathy, could it be the case that their efforts are in vain? It seems to me that Linde et al may have already provided a convincing explanation of this discrepancy.

Paul Wilson said...

jdc:

Yes, that's a good point. Homeopaths seem obsessed with the idea that Shang was somehow biased and/or fraudulent. This prevents them from actually looking at the paper objectively. In fact, it's deeply unsurprising that they should have come to different conclusions to those of Linde et al., for the reasons you point out.

Paul Wilson said...

pj:

To be fair, Shang is perhaps a little confusing on this point. Although they talk about "larger" trials throughout the paper, they're actually looking at "lower SE" trials. Still, if you're going to criticise the way they picked their "larger" trials, you should probably read the paper to find out how they did it first...

Anonymous said...

jdc wrote, "Didn't Linde et al ... later conclude in another paper that the conclusions of their meta analysis were weakened by their more recent work that looked at trial quality?"

Yes. Not only did they conclude that the evidence of bias they detected by their reanalysis of the same data "weakens the findings of our original meta-analysis", but also that an update to their analysis of studies of classical homoeopathy, and the fact that a number of new high-quality trials (not included in their analysis) had negative results, "seem to confirm the finding that more rigorous trials have less-promising results." They concluded from all this that their 1997 paper "at least overestimated the effects of homeopathic treatments".

Linde K, Scholz M, Ramirez G, Clausius N, Melchart D, and Jonas W. Impact of Study Quality on Outcome in Placebo-Controlled Trials of Homeopathy. J. Clin. Epidemiol. 52 (7) 631-636 (1999)

Linde and Jonas also commented, in the course of a letter to the Lancet critiquing the Shang paper, that "our 1997 meta-analysis has unfortunately been misused by homoeopaths as evidence that their therapy is proven."

Not that this has had any effect on the homoeopaths, of course.

Neuroskeptic said...

That's really very silly. Although not as silly as believing in homeopathy in the first place.

Speaking of post-hoc data dredging, Lynn McTaggart of Intention Experiment fame (and friend of homeopathy) has recently surpassed herself in this arena. She's managed to take data showing that her latest Intention Experiment killed dozens of people and claim that it's a success.

I hereby shamelessly plug my new blog:

Neuroskeptic

where I have written about this... it makes her previous experiments with Rustram Roy look like Nature material.

Paul Wilson said...

Neuroskeptic: cheers for that, and thanks for the link to your blog. Excellent first post. I'll bung up a permanent link in the sidebar.

Neuroskeptic said...

Thanks. You are now my first link...