Wednesday, 31 December 2008

Pediatrics publishes a critique of an "anti-vaccine" book

I've tended not to write anything about vaccines, partly because it's a long way from my field of expertise, and partly because it brings out the internet loons in greater numbers than almost anything else. Still, I ran across this via Ben Goldacre's miniblog, and it overlaps with my interest in scientific publishing: Pediatrics, the official journal of the American Academy of Pediatrics, has published an article by Paul Offit and Charlotte Moser. The article is highly critical of "The Vaccine Book: Making the Right Decision for Your Child", by Dr. Robert Sears, also known in media-friendly mode as Dr. Bob in his role as one part of "America's family of pediatricians". This is something that I haven't seen before: the journal of a learned society publishing a rebuttal of a popular health book. The article goes through Dr. Sears's book, and picks out misleading statements, flawed logic and questionable assumptions, particularly concentrating on the alternative vaccine schedules suggested in the book.

It seems unlikely that worried parents will pick up this article; they are much more likely to read the book. But perhaps by putting these arguments in one place in Pediatrics, it makes it easier for pediatricians themselves to counter the arguments against vaccines that they hear from parents. It might even be possible to give parents the article to look through. And many of the arguments in Dr. Bob's book ARE arguments against vaccines, in spite of the protestations of Dr. Bob himself. The paper is not written in overly technical language, although it does contain a lot of references that are unlikely to be easily available to the average concerned parent. It gives pediatricians a chance to point out that mainstream scientific opinion does not, in fact, agree with Dr. Bob. This is unlikely to do much to sway those who've drunk the anti-vaccine kool-aid and think that Dr. Offit is an unscrupulous shill of big pharma, willing to cause lasting harm to untold numbers of children in order to make money (just do a Google search to see what I mean). But it might sway sensible people who have inevitably picked up doubts from the unbelievably incompetent reporting of the science surrounding vaccines in the media.

Perhaps it would be worth putting a similar article debunking the MMR myths into something like the BMJ or the Lancet? Or perhaps something similar already exists and I've missed it?

Friday, 12 December 2008

News on latest Homeopathy submission

A while ago I submitted a comment to the journal Homeopathy, discussing a paper by Rutten and Stolper published in that journal, which claimed to find a number of serious flaws in the Shang et al. meta-analysis that appeared in the Lancet in 2005. I had a slightly cryptic e-mail back from the journal last week:

Dear Dr Wilson,

Thank you for sending us this article. We have sent the article to the authors for comment.

We will be in touch again shortly.

I assume that means that the comment will be published, with a reply by the authors. I'll be interested in what they have to say, because at this point some of the errors in their paper seem to be indefensible.

Post #100, and what Wilson is up to

The previous post on Manchester's congestion charge was the hundredth post on Hawk/Handsaw. I'd celebrate with a beer, if I hadn't come back from fieldwork in Egypt with some friendly bacteria that are currently restricting me to soup and toast.

For those who are interested in what I'm doing, I've just completed my first post-doctoral project at the University of Manchester. The first paper from that work now has a DOI and is available as an "accepted manuscript" at the Journal of Structural Geology website (behind a paywall, I'm afraid: if anyone is desperately interested and can't access it, e-mail me and I can bung you a PDF).

I will be starting a new project in January, still based at Manchester, working on sub-surface data from the North Sea. I'll be sponsored by StatoilHydro, so I'll be doing a fair bit of work at various StatoilHydro offices in Norway, starting in Harstad in late January. It's a three-year contract, so I have a bit of stability. The plan is to get more papers out, and try for lectureships towards the end of the project.

Manchester says No!

The results of Manchester's referendum on congestion charging and improvements to public transport are in, and they could hardly be clearer. According to the Guardian, 79% of respondents said no, on the back of a 53% turnout. That's close to a general election level of turnout around here. Although the Yes campaign claimed that 9 out of 10 people would not pay the charge, around 4 out of 10 of the entire electorate turned out to vote against it.

I have distinctly mixed feelings about this. Mostly, I'm disappointed; after all, I did vote yes myself. That's easy for me: I don't own a car, walk to work, and rely on public transport to get around Greater Manchester. The congestion charge would cost me precisely nothing. According to the No campaign, it would work out at £1200 a year for those who would have to pay the maximum amount. It's clearly a regressive tax, and that is not a small amount of money for people who are already struggling to make ends meet. Of course, that's the point: the charge wouldn't work if everyone could easily afford it.

For me, the most important thing about the proposals was the improvements to public transport. There is no doubt that these are needed. Some of them, such as extensions of the Metrolink tram system to Oldham and Rochdale, will happen anyway. But I suspect we will be stuck with an unreliable and expensive (deregulated) bus service for a long time to come.

The problem is that, in the words of the Yes campaign, "there is no plan B". That being the case, it seems crazy to put the question to a referendum. Saying that you either agree with these proposals or you get nothing is hardly a democratic way of posing a referendum question; it's a pretence of democracy. It would have been more sensible, and democratic, to either have our local representatives on the Manchester Council decide to implement the proposals come what may, or have a referendum in which there was a genuine choice of competing proposals.

Improvements in public transport are a good thing, but on their own they will not reduce car use. Driving will always be more convenient than taking the bus, unless there are clear disincentives for car use. It seems that congestion charging is a political impossibility in Manchester, at least right now. An alternative might be to make driving in the city less convenient, by restricting parking, restricting access to certain streets, etc. This would have the advantage of not stinging the poor with another regressive tax. What is clear is that doing nothing is not an option in the long term: it's back to the drawing board for Manchester's transport policy.

Wednesday, 19 November 2008

Science by press release (epic fail)

While I was messing about at the University of Google, doing some research for the thing that I've just sent to Homeopathy, I came across this [pdf]. Yes, the homeopaths (in the form of the International Homeopathic Medical League and the European Committee for Homeopathy) have put together a press release, based on the study in Homeopathy by Rutten and Stolper, and the study by Ludtke and Rutten in the Journal of Clinical Epidemiology. Both studies criticise (mostly wrongly) a perfectly good meta-analysis of homeopathy from the Lancet. The headline is "New evidence for homeopathy". No question mark, no caveats. You might guess that the International Homeopathic Medical League and the European Committee for Homeopathy would not be entirely unbiased sources of information. In fact, these studies are very far from being evidence for homeopathy, as I've tried to point out (and not just me). I suppose two studies sounds better than one, but the truth is that these are essentially the same study, with extra nonsense in the part that got published in Homeopathy. The studies apply a post-hoc analysis to the data from the Lancet article, and claim to be able to produce a positive result for homeopathy if you squint at the data and stand on one leg. Then they accuse the Lancet paper of data dredging. What a laugh riot.

Unfortunately, lots of scientific studies get reported based on their press releases. This means that the people who read the reports get no sense of the flaws in the study and the caveats that should be applied to its conclusions; these come from independent scientific scrutiny of the study once it has been peer-reviewed and published. Pushing press releases on your research may be a good way to get brownie points from your university, but it's not usually a good way of fostering an improved understanding of the scientific process. To be fair, it's by no means only homeopaths and their ilk who do this, as Ben Goldacre has pointed out.

Still, the good thing about this is that a search on Google News for this release shows that, at the time of writing, it has only been picked up by a few woo-ish magazines: mainstream western news seems to have ignored this more or less completely. Maybe the press has got fed up of this particular manufactured controversy, at least for now.

I know I said life was too short...

I wrote a little about a paper by Rutten and Stolper recently published in the amusing pseudo-journal Homeopathy. The paper performed the usual homeopath party trick of throwing incorrect allegations of research misconduct at the Shang et al. meta-analysis of homeopathy that was published in the Lancet, while also engaging in dubious statistical analysis. I've now had a little time to put together something a bit more meaty, with proper references and everything, and send it off as a letter to the editor of Homeopathy. I reproduce the text below.

Rutten and Stolper [1] have conducted a re-analysis of the data used in the landmark Lancet meta-analysis (Shang et al.) [2] of trials of homeopathy and conventional medicine. However, their approach to this work seems to have been influenced by a belief that the Shang analysis was deliberately skewed against homeopathy, and in favour of conventional medicine. I argue here that the evidence does not support that contention, and that the re-analysis by Rutten and Stolper does not show that the Shang et al. study was invalid.


Rationale for the re-analysis

In the abstract of their paper, Rutten and Stolper state “There is a discrepancy between the outcome of a meta-analysis published in 1997 of 89 trials of homeopathy by Linde et al and an analysis of 110 trials by Shang et al published in 2005, these reached opposite conclusions”, and on page 170 they write “The contradiction between Linde's conclusion based on 89 trials, and Shang et al's conclusion, based on 110 trials seems odd”. But there is nothing particularly surprising about this discrepancy. The Linde paper referred to was published in the Lancet in 1997 [3]. The same team re-analysed the data in a paper published in 1999 [4]. They concluded that because trials of higher methodological quality had smaller effect sizes, and because a number of newly published high-quality trials showed negative results for homeopathy, their meta-analysis had over-estimated the effectiveness of homeopathy. Hence there is no reason to see the discrepancy between Shang et al. and Linde et al. (1997) as being particularly “odd”.


Trial quality

Rutten and Stolper make statements about the “pre-specified hypotheses” of the Shang et al. study, but these are not consistent throughout the paper. In the introduction, they state:

The hypotheses predefined mentioned in the introduction of Shang et al's paper were: ‘Bias in conduct and reporting of trials is a possible explanation for positive findings of placebo-controlled trials of both homeopathy and allopathy (conventional medicine)’; and: ‘These biases are more likely to affect small than large studies; the smaller a study, the larger the treatment effect necessary for the results to be statistically significant, whereas large studies are more likely to be of high methodological quality and published even if their results are negative’.


Yet, in Rutten and Stolper’s section on “Pre-specified hypotheses” they include “quality in homeopathy is worse than in conventional medicine” as a hypothesis of Shang et al., and say that this hypothesis was falsified in the Shang et al. study. This is a straw man: it is not a hypothesis that was discussed in the Shang et al. paper, and Rutten and Stolper have missed the point of including a matched set of trials of conventional medicine. As Rutten and Stolper state (p. 170) “Pooling of results is…questionable if homeopathy works for some conditions and not for others”. This is a reasonable point. However, it is clear that some experimental conventional treatments work and some do not. The results of the analysis of conventional medicine were not consistent with the placebo hypothesis, showing that it is possible to obtain a positive result using the methods of Shang et al., even when there is considerable heterogeneity in the results [5].


Post-hoc analysis?

Rutten and Stolper make the claim that the sub-sets of larger, higher quality studies were chosen post-hoc, presumably to make homeopathy appear less effective than it really is. In their paper, Rutten and Stolper state [p. 172-173]:


Cut-off values for sample size were not mentioned or explained in Shang el al's [sic] analysis. Why were eight homeopathy trials compared with six conventional trials? Was this choice predefined or post-hoc? Post-publication data showed that cut-off values for larger higher quality studies differed between the two groups. In the homeopathy group the cut-off value was n = 98, including eight trials (38% of the higher quality trials). The cut-off value for larger conventional studies in this analysis was n = 146, including six trials (66% of the higher quality trials). These cut-off values were considerably above the median sample size of 65. There were 31 homeopathy trials larger than the homeopathy cut-off value and 24 conventional trials larger than the conventional cut-off value. We can think of no criterion that could be common to the two cut-off values. This suggests that this choice was post-hoc.


The first thing to note is that it is not true that cut-off values for sample size were not mentioned or explained in the Shang et al. analysis. In the original Shang paper, on page 728, it is stated that “Trials with SE [standard error] in the lowest quartile were defined as larger trials”. In other words, the cut-off was not defined in terms of numbers of subjects, but in terms of standard error. It might be argued that this is a strange way of defining “larger” trials (and perhaps it should have been phrased as “lower standard error”). But it makes sense when criteria must be stated a priori. If a number of subjects were stated as a cut-off value, there would be no way of knowing how many studies would meet that criterion before looking at the data. You might find that a very large or very small number of studies met the criterion, making further analysis difficult. So, there is no mystery as to why the “cut-off values” were different between trials of homeopathy and trials of conventional medicine: it is because the distribution of standard errors is different between the two populations. This could be discovered simply by reading the original paper, and the conclusion that the groups were chosen post-hoc cannot be sustained.


A further point here is that the group of “larger” homeopathy trials contains smaller trials that would not have made the cut for “larger” trials in the conventional medicine group. Those smaller trials are more likely to show spurious positive results. It follows that had the authors engineered the groups to get the result they wanted, they would have engineered them in favour of homeopathy.


Another paragraph in Rutten and Stolper states “We did not further investigate possible selection bias by excluding trials, but we were surprised by the exclusion of Wiesenauer's trial on chronic polyarthritis. This was a larger trial (n = 176), of good quality according to Linde, with positive results. This trial would have contributed positively to the outcome of the larger higher quality trials. Shang excluded this trial because no matching trial could be found” (page 171). Since the trial was excluded on the basis of the clearly stated, pre-specified exclusion criteria, what is surprising about its exclusion? Including it would have violated those criteria and made a nonsense of the design of the study; that would have been a gross error.


Another possible outcome?

Rutten and Stolper conduct a sensitivity analysis, but, as they note, the decisions they make in this analysis are highly subjective. They decide to exclude all trials of homeopathy for muscle soreness [6-9], on the grounds that “treatment of healthy individuals is very rare in homeopathic practice [and] this outcome has low external validity to judge the effect of homeopathy as a method” (page 173). Yet, the trials were conducted with the participation of prominent homeopaths, and some were published in homeopathic or alternative medicine journals [8, 9], so at least some homeopaths seem to be of the opinion that there is enough external validity for it to be worth conducting a trial. So how can the external validity of the trials be judged in a transparent way? In a meta-analysis based on clear, pre-specified criteria, there could be no justification for omitting the studies.


It is also notable that one of the authors was a co-author of another re-analysis published in the Journal of Clinical Epidemiology [10]. That analysis showed that if random-effects meta-analysis is used, it is possible to add smaller trials to Shang’s set of “larger, higher quality” trials of homeopathy, and get a statistically significant (although clinically unimpressive) benefit for homeopathy. All this really shows is that a finding in favour of homeopathy is not robust, and as Shang et al. showed, including smaller trials also decreases the reliability of the findings. The re-analysis also showed that the benefit for homeopathy was statistically insignificant when a meta-regression analysis was used: this negative finding was strangely not mentioned in the Homeopathy paper. Because the results differed between meta-regression and random-effects analyses, and because Shang et al. showed highly significant evidence of funnel-plot asymmetry in their complete dataset of 110 trials of homeopathy, it is arguable that meta-regression analysis is a more appropriate choice.


Overall, it is clear that “another outcome” (i.e. one favourable to homeopathy) is possible, as long as negative studies are excluded without good reason, smaller and less reliable studies are included, and a particular method of statistical analysis is used. In a paper that (wrongly) criticises a study for analysing data based on criteria established post-hoc, this seems like an odd point to make.


Conclusion

The analysis by Rutten and Stolper contains misconceptions about Shang et al. and some important errors, and does not show that the Shang et al. study was an invalid analysis. In particular, there is no evidence that the Shang et al. study involved post-hoc choice of subgroups. The results of meta-analyses can be debated, but scientists should not be accused of research misconduct on the basis of no evidence, or by critics who have failed to read their work properly.


References

1. Rutten ALB, Stolper CF. The 2005 meta-analysis of homeopathy: the importance of post-publication data. Homeopathy 2008; 97: 169–177.

2. Shang A, Huwiler-Müntener K, Nartey L, et al. Are the clinical effects of homeopathy placebo effects? Comparative study of placebo-controlled trials of homeopathy and allopathy. Lancet 2005; 366: 726–732.

3. Linde K, Clausius N, Ramirez G, et al. Are the clinical effects of homeopathy placebo effects? A meta-analysis of placebo-controlled trials. Lancet 1997; 350: 834–843.

4. Linde K, Scholz M, Ramirez G, Clausius N, Melchart D, Jonas WB. Impact of study quality on outcome in placebo-controlled trials of homeopathy. J Clin Epidemiol 1999; 52: 631–636.

5. Shang A, Jüni P, Sterne JAC, Huwiler-Müntener K, Egger M. Are the clinical effects of homeopathy placebo effects? A meta-analysis of placebo-controlled trials: Author’s reply. Lancet 2005; 366: 2083–2084.

6. Vickers AJ, Fisher P, Wyllie SE, Rees R. Homeopathic Arnica 30X is ineffective for muscle soreness after long-distance running: a randomized, double-blind, placebo-controlled trial. Clin J Pain 1998; 14: 227–231.

7. Vickers AJ, Fisher P, Smith C, Wyllie SE, Lewith GT. Homoeopathy for delayed onset muscle soreness: a randomised double blind placebo controlled trial. Brit J Sports Med 1997; 31: 304–307.

8. Jawara N, Lewith GT, Vickers AJ, Mullee MA, Smith C. Homoeopathic Arnica and Rhus toxicodendron for delayed onset muscle soreness: a pilot for a randomized, double-blind, placebo-controlled trial. Brit Hom J 1997; 86: 10–15.

9. Tveiten D, Bruset S, Borchgrevink CF, Norseth J. Effects of the homeopathic remedy Arnica D30 on marathon runners: a randomized, double-blind study during the 1995 Oslo Marathon. Complement Ther Med 1998; 6(2): 71–74.

10. Lüdtke R, Rutten ALB. The conclusions on the effectiveness of homeopathy highly depend on the set of analyzed trials. J Clin Epidemiol 2008; 61: 1197–1204.


Thursday, 6 November 2008

Journal of Structural Geology paper

If anyone is interested in the paper I recently got accepted in the Journal of Structural Geology, there is a short summary of it here...

Hazel Blears talks rubbish about the blogosphere

I was struck by this article in yesterday's Grauniad. Hazel Blears, the MP for Salford and communities minister (who I incidentally saw having a swift half in Manchester's City Arms the other day), weighs in on the culture of "corrosive cynicism" which is supposedly damaging political discourse in the UK. This, of course, is all the fault of the media, and in particular the blogosphere. The following quote is from an address that Blears is giving today at a Hansard Society conference on growing political disengagement in Britain:

"Perhaps because of the nature of the technology, there is a tendency for political blogs to have a 'Samizdat' style. The most popular blogs are rightwing, ranging from the considered Tory views of Iain Dale, to the vicious nihilism of Guido Fawkes. Perhaps this is simply anti-establishment. Blogs have only existed under a Labour government. Perhaps if there was a Tory government, all the leading blogs would be left-of-centre? But mostly, political blogs are written by people with disdain for the political system and politicians, who see their function as unearthing scandals, conspiracies and perceived hypocrisy. Until political blogging 'adds value' to our political culture, by allowing new voices, ideas and legitimate protest and challenge, and until the mainstream media reports politics in a calmer, more responsible manner, it will continue to fuel a culture of cynicism and despair."
Now, I don't want to claim that all of what Blears says here is total nonsense. Clearly the media in general does have a lot to answer for. But perhaps there could be other reasons for political disengagement in the UK? I can think of a handful off the top of my head.

1. The growing reliance of the Labour party on rich donors, which has led to a number of scandals, including the "cash for honours" affair. The corollary of this is a decrease in party membership, an erosion of internal party democracy, a lack of connection between the party grassroots and the government, and a perception amongst the electorate that the government is corrupt.

2. The Iraq war, opposed by a large proportion of the population, and launched on the basis of statements that were not true (whether or not they were strictly "lies"), has now led to the deaths of more than 175 British soldiers and serious injuries to many more. This is aside from the civilian death toll in Iraq, the best that can be said of which is that we don't know what it is, but it is a hell of a lot.

3. A failure to apply appropriate regulation to financial markets allowed the inflation of a credit and asset bubble which has now burst, and will lead to perhaps hundreds of thousands of people losing their jobs and their homes.

4. A massive proliferation of anti-terror laws, which, despite reassurances when they were brought in, are now being used to suppress legitimate protest and freeze the assets of Icelandic banks.

No doubt you can think of a few more. So perhaps this corrosive cynicism has as much to do with the cynicism and incompetence of the government as that of the media and the blogosphere?

Quite apart from that, it is interesting to look at what Blears says about the blogosphere. She says that political bloggers "see their function as unearthing scandals, conspiracies and perceived hypocrisy", and then adds that "until political blogging 'adds value' to our political culture...it will continue to fuel a culture of cynicism and despair." To me, unearthing scandals does add value to our political culture. Or should government wrong-doing just be hidden? An important aspect of democratic government is that those who govern us can be held to account. That can't happen if no-one knows what they're up to.

It's obviously true that a large number of blogs are dreadful and useless. But it's equally true that many are valuable. By indulging in a rant against the media and the blogosphere, without addressing the contribution of government corruption and incompetence, Blears is not doing much to further the debate on political disengagement.

Monday, 20 October 2008

More meta-analysis delight

Having whined and bullshitted about the 2005 meta-analysis (Shang et al.) that was published in the Lancet (which showed that larger, high-quality trials demonstrated no effect for homeopathy beyond placebo), the homeopaths are now trying a bit of number-crunching. This is an improvement, although they don't seem to be any better at number crunching than they are at logic, as we saw with the recent Ludtke and Rutten paper in the Journal of Clinical Epidemiology.

In the Journal of Clinical Epidemiology, there is a proper review process, and the authors couldn't get away with writing just any arrant nonsense that came into their heads. This led to a paper that broadly supported the original conclusions of the Shang study, which obviously wasn't that much use. Luckily, Homeopathy, the in-house fanzine of the Faculty of Homeopathy, has no such qualms about publishing more or less any old rubbish (and I should know, I've been published in it myself). Hence a paper by Rutten and Stolper in the October issue of Homeopathy criticising the Shang paper, entitled "The 2005 meta-analysis of homeopathy: the importance of post-publication data". I'm not going to analyse the whole thing - there comes a point when life really is too short - but I do want to pick up on one blatant error that seems to suggest they haven't carefully read the paper they are criticising.

One thing that Shang did was look at papers that were of good quality, based on how well they were randomised, how well they were blinded, and so on. They also sought to look at the "larger" trials, because their analysis showed that smaller trials with higher standard errors tended to be more positive. Rutten and Stolper claim that the cut-off size for larger trials was different between the homeopathy trials and conventional trials "without plausible reason". They write:

In the homeopathy group the cut-off value was n = 98, including eight trials (38% of the higher quality trials). The cut-off value for larger conventional studies in this analysis was n = 146, including six trials (66% of the higher quality trials). These cut-off values were considerably above the median sample size of 65. There were 31 homeopathy trials larger than the homeopathy cut-off value and 24 conventional trials larger than the conventional cutoff value. We can think of no criterion that could be common to the two cut-off values. This suggests that this choice was post-hoc.


What Rutten and Stolper are doing here is essentially accusing Shang and colleagues of fiddling the results so that homeopathy looked less effective than it really is. That is a serious accusation, and you might expect that someone making it would have carefully checked the original paper to see what the authors said their criterion for larger trials was. If you read the original Shang paper, on page 728 you will find this definition: "Trials with SE in the lowest quartile were defined as larger trials". There is nothing particularly opaque about this, and yet the idea that what was meant by "larger" trials was somehow unclear comes up time and time again.

Picking trials "with SE in the lowest quartile" may seem like an odd way of choosing the "larger" trials. Why not just pick a number, like 100, which Rutten and Stolper seem to want to do? Well, if you want to avoid post-hoc analysis it is a sensible thing to do, as you know you will have a reasonable number of "larger" trials to work with. If you just pick out a number, you might find that almost none, or almost all, of your trials qualify as being "larger", since you don't know what the distribution of trial sizes is until you do your analysis.

Of course, another thing is that in a comparison between homeopathy and conventional medicine, the discrepancy in the cut-off values actually gives homeopathy an advantage. This is because smaller trials of homeopathy, which would have missed the cut had the conventional cut-off applied, are included, and these are more likely to show spurious positive results. So, if the difference in cut-offs really were due to the authors' bias, then they were biased in favour of homeopathy.

So, Rutten and Stolper have made an erroneous accusation of biased post-hoc analysis, based on not bothering to properly read, or carefully think about, the paper they are criticising. In my opinion they have made themselves look rather silly. Sillier still, their entire paper is an exercise in post-hoc analysis, as they try to find ways of torturing the data to get the result they want, i.e. that homeopathy works. This is how things go in the wonderful world of homeopathy, where all kinds of pseudoscience can be justified as long as they give you the right answer.

Friday, 10 October 2008

Result!

I've just had the first paper from my post-doctoral work at Manchester accepted by the Journal of Structural Geology, pending minor revisions.

Beeeeerrrrrssssss!

Wednesday, 8 October 2008

Shang study remains firmly in the water

In the comments to a post at Respectful Insolence, my favourite homeopath Dana Ullman weighs in with the suggestion that the Shang et al. meta-analysis of trials of homeopathy and conventional medicine (which has been written about extensively by me and apgaylard), had been "blown out of the water". Ullman makes this assertion based on a new paper by Ludtke and Rutten, entitled "The conclusions on the effectiveness of homeopathy highly depend on the set of analyzed trials", that has been accepted by the Journal of Clinical Epidemiology. It's nice to see that this paper does exist after all. So does the article really blow Shang out of the water? A quick look at the conclusions tells us that the answer is no:

Our results do neither prove that homeopathic medicines are superior to placebo nor do they prove the opposite...it occurs that Shang’s conclusions are not so definite as they have been reported and discussed.


What Ludtke and Rutten have done is actually quite interesting, though, so I'd like to explore it in a bit more detail. They have taken the 21 trials of homeopathy that Shang et al. considered to be of "higher quality", and performed various analyses on this subset to see what happens if you exclude trials based on their size or other parameters.

The authors constructed a funnel plot of odds ratio (a measure of the size of the effect of the intervention: values below 1 indicate a benefit over placebo) against standard error (which depends on trial size). For all of the 21 trials, they found that there was substantial (but not statistically significant) asymmetry in the plot (if the funnel plot is asymmetrical, then biases are present in the data: these might be publication bias, small study effects, or a whole host of other effects). They also note that no evidence for asymmetry was found for the 8 largest of the 21 trials. This actually reiterates one of the main points of the Shang analysis: that a large number of trials is needed to identify asymmetry, and therefore to get an idea of bias in the dataset. That is why Shang et al. looked at all 110 trials that matched their inclusion criteria; that enabled them to identify a highly significant asymmetry in their funnel plot. This was important because it showed that the smaller the study size, the larger the apparent effect.
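If you want a feel for how that asymmetry is actually tested, here's a toy simulation in Python (invented data, not the actual trials): an Egger-style regression of each trial's effect, standardised by its SE, against its precision. With no bias the intercept should sit near zero; with small-study bias it doesn't.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Simulate trials of a treatment with no real effect (true log OR = 0),
    # where small trials only get "published" if they look positive
    # (log OR below zero = apparent benefit). This mimics small-study bias.
    n = rng.integers(20, 400, size=400)
    se = 2.0 / np.sqrt(n)
    log_or = rng.normal(0.0, se)
    keep = (n > 150) | (log_or < 0)
    log_or, se = log_or[keep][:110], se[keep][:110]

    # Egger's test: regress the standardised effect (log OR / SE) on
    # precision (1 / SE); an intercept far from zero signals asymmetry
    x, y = 1.0 / se, log_or / se
    m = len(x)
    sxx = np.sum((x - x.mean()) ** 2)
    slope = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    intercept = y.mean() - slope * x.mean()
    resid = y - (intercept + slope * x)
    se_int = np.sqrt(np.sum(resid**2) / (m - 2) * (1 / m + x.mean() ** 2 / sxx))
    p = 2 * stats.t.sf(abs(intercept / se_int), m - 2)
    print(f"Egger intercept = {intercept:.2f}, p = {p:.4g}")

Run it with only the first 8 simulated trials instead of 110 and the test will usually fail to reach significance, which is the point about needing a large set of trials.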

The thing in the paper that homeopaths will probably emphasise is that for the set of 21 higher quality trials, the pooled odds ratio (from random effects meta-analysis) was 0.76, suggesting a benefit over placebo. But wait! What are the 95% confidence intervals? 0.59-0.99. This indicates anything from an unimpressive benefit to a clinically negligible one. In other words, it's a deeply uninspiring result, but homeopaths will be trying to tell you that it shows that homeopathic remedies do something.
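For the curious, this is roughly how a random effects pool works. A minimal DerSimonian-Laird sketch in Python, with invented log odds ratios rather than the real trial data:

    import numpy as np

    # Invented per-trial log odds ratios and standard errors
    log_or = np.array([-0.60, -0.45, -0.30, -0.20, -0.10, -0.05, 0.02, 0.10])
    se = np.array([0.45, 0.40, 0.30, 0.25, 0.15, 0.12, 0.10, 0.08])

    # DerSimonian-Laird: estimate the between-trial variance tau^2 from
    # Cochran's Q, then pool with weights 1 / (SE^2 + tau^2)
    w = 1.0 / se**2
    fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - fixed) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_or) - 1)) / c)

    w2 = 1.0 / (se**2 + tau2)
    pooled = np.sum(w2 * log_or) / np.sum(w2)
    se_pool = np.sqrt(1.0 / np.sum(w2))
    lo, hi = pooled - 1.96 * se_pool, pooled + 1.96 * se_pool
    print(f"pooled OR = {np.exp(pooled):.2f} "
          f"(95% CI {np.exp(lo):.2f} to {np.exp(hi):.2f})")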

The interesting thing that the authors then did was to start with the 2 largest trials, and look at what happens to the results as you add the remaining trials in descending order of patient numbers (Figure 2 of the paper). Once you get to the point where you've included the 14 largest trials, the pooled odds ratio is statistically significantly less than 1 (except for the case of 17 trials). This is interesting in a way, but all it really does is demonstrate what Shang et al. said: that smaller trials are more likely to give positive results. So the more trials you add, the more positive but less reliable the results are; with 14 or more trials you might just about scrape a statistically significant benefit, but that result is not as reliable as the analysis restricted to the eight largest trials. It's also worth noting that the upper limits of the confidence intervals in Figure 2 are always close to 1, showing that any benefit is likely to be clinically insignificant. They perform a similar analysis in their Figure 3, except they use a meta-regression analysis rather than a random effects meta-analysis, and for that they show no statistically significant benefit no matter how many studies they include.
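The same pooling logic makes the cumulative exercise easy to picture. Here is a sketch, again with invented numbers, that starts from the two largest trials and keeps adding smaller, more "positive" ones; the pooled OR drifts downwards as the less reliable trials pile in:

    import numpy as np

    def dl_pool(log_or, se):
        # DerSimonian-Laird random effects pool, as in the sketch above
        w = 1.0 / se**2
        fixed = np.sum(w * log_or) / np.sum(w)
        q = np.sum(w * (log_or - fixed) ** 2)
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (q - (len(log_or) - 1)) / c)
        w2 = 1.0 / (se**2 + tau2)
        mu = np.sum(w2 * log_or) / np.sum(w2)
        s = np.sqrt(1.0 / np.sum(w2))
        return np.exp(mu), np.exp(mu - 1.96 * s), np.exp(mu + 1.96 * s)

    # Invented trials sorted largest first: the big trials sit near
    # OR = 1, the small ones look more "positive"
    log_or = np.array([-0.02, -0.05, -0.10, -0.20, -0.35, -0.50, -0.65])
    se = np.array([0.08, 0.10, 0.15, 0.25, 0.35, 0.45, 0.55])

    for k in range(2, len(log_or) + 1):  # cumulative meta-analysis
        pooled, lo, hi = dl_pool(log_or[:k], se[:k])
        print(f"{k} trials: OR = {pooled:.2f} (95% CI {lo:.2f}-{hi:.2f})")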

Another thing that homeopaths will probably jump on is that if one very negative trial (on Arnica for muscle soreness) is omitted from the set of 21 trials, the results appear more positive (odds ratio 0.73, 95% confidence interval 0.56-0.93) when a random effects meta-analysis is used. There are a number of other trials that can be removed from the dataset to give apparently positive results, but only when random effects analysis is used: a meta-regression analysis shows that there is no statistically significant benefit no matter which study you remove. Also, when performing a similar analysis on the 8 large, higher quality trials originally identified by Shang et al., no statistically significant benefit is found whichever trial you decide to remove. Again, note that the 8 largest trials are intrinsically more reliable than the smaller ones.

All the way through the paper, it is noticeable that meta-regression analysis shows more negative results than a random effects meta-analysis. In fact, the authors point out that in their meta-regression analysis "no single predicted OR [odds ratio] could be shown to differ significantly from unity". So which should be used? The authors write "... there is no guideline which tells a researcher when to prefer meta-regression to random effects metaanalysis or vice versa. As the statistical test for asymmetry only has a small power, Egger suggests to perform metaregressions when the respective P-value falls below 0.10. Applying this criterion there seemed to be no need to perform a meta-regression in most of the subsets we analyzed". But this conclusion is based on the restricted analysis of 21 higher quality trials. Shang et al.'s original analysis of 110 trials of homeopathy showed asymmetry with p<0.0001, suggesting that a meta-regression analysis would be more appropriate.
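By way of illustration, here's what the meta-regression alternative amounts to in this context: regress the log odds ratio on its standard error, weighting by inverse variance, and read off the predicted effect for a hypothetically infinite trial (SE = 0). Invented numbers again, not the actual dataset:

    import numpy as np

    # Invented trials: the effect drifts towards zero as SE shrinks,
    # which is the signature of small-study bias
    log_or = np.array([-0.65, -0.50, -0.35, -0.20, -0.10, -0.05, -0.02])
    se = np.array([0.55, 0.45, 0.35, 0.25, 0.15, 0.10, 0.08])

    # Weighted least squares: log OR = a + b * SE, weights 1 / SE^2
    w = 1.0 / se**2
    X = np.column_stack([np.ones_like(se), se])
    a, b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * log_or))

    # The intercept a is the predicted log OR at SE = 0, i.e. the effect
    # expected in an infinitely large (and hence least biased) trial
    print(f"predicted OR for an infinitely large trial: {np.exp(a):.2f}")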

So, the upshot is that the paper's title is misleading. The conclusions on the effectiveness of homeopathy do not highly depend on the set of analyzed trials, if an appropriate test is used. Asymmetry is not adequately identified in the dataset because too few trials are used. And, even if you can convince yourself that you can get a statistically significant benefit by playing around with the numbers, the actual clinical benefit is negligible. In some ways, the paper actually reinforces the conclusions of Shang et al., and it certainly doesn't show that homeopathic medicines work.

Friday, 3 October 2008

Sayed Gooda: a tribute


There was sad news for me this week, and for more or less everyone who has done geological fieldwork in Egypt. Sayed Gooda, our driver on our trips to Sinai, has passed away.

Without Sayed, our work would have been a lot more difficult. Sayed didn't just drive us, over bad roads and difficult terrain. He looked after us, kept us out of trouble, made us delicious sandwiches for lunch (the highlight of the Sinai field day), and tried valiantly to teach us Arabic. Practically everyone who has worked on the Gulf of Suez owes something to Sayed. I will always remember arriving in a hot, noisy and fume-choked Cairo for the first time, nerves jangling from an overnight flight from Amsterdam, and being put at ease right away by the big friendly man in the big friendly green jeep.

Sayed was in Saudi Arabia when he passed away, and has been buried there. In the e-mail telling us of his death, his friend Tarek Moustafa wrote "I think he died Happy as this would have been a dream come true to him to die in the Holy lands". We can only hope so. Thanks for everything, Sayed.

Friday, 19 September 2008

If I was Prof Michael Reiss, I'd be annoyed...

Poor Prof Reiss: not only has he been forced to resign for making a generally sensible speech that was leapt on and distorted by anti-religious Fellows of the Royal Society and the media, but the media (in the shape of the Guardian) has actually admitted that it had a part in his downfall. The admission came in a column titled "In praise of Prof Michael Reiss": nice touch.

Quotes:

The subtlety of Prof Reiss's position was lost in some media reports, while the headlines in many newspapers - including this one - did not convey the nuance of his message. This appears to have cost him his job.

and:

But Prof Reiss's shabby treatment smacks of an organisation that is frightened of a debate about how creationist views should be tackled by teachers.


You can bet that this is the line that will be taken by those who would wish to pollute science with creationist nonsense. This seems like a spectacular own goal to me.

Monday, 15 September 2008

Creationism in schools: a manufactured controversy

I wrote briefly about the comments of Prof. Michael Reiss, director of education at the Royal Society, that creationism should be taught in schools in certain circumstances. The reaction to Reiss's comments has been somewhat alarming, with calls for him to be sacked [UPDATE: Professor Reiss has now resigned].

Here's Sir Harry Kroto:

I warned the president of the Royal Society that his [Reiss] was a dangerous appointment a year ago. I did not realise just how dangerous it would turn out to be.


And Sir Richard Roberts:

I think it is outrageous that this man is suggesting that creationism should be discussed in a science classroom. It is an incredible idea and I am drafting a letter to other Nobel laureates - which would be sent to the Royal Society - to ask that Reiss be made to stand down.


And, inevitably, Richard Dawkins:

A clergyman in charge of education for the country's leading scientific organisation - it's a Monty Python sketch.


To me these comments seem to be taking things too far, especially in the light of a letter to today's Guardian from Prof. Reiss himself:

Your headline (Teach creationism, says top scientist, September 12) misrepresents the views of myself and the Royal Society. The society believes that if a young person raises the issue of creationism in a science class, a teacher should be in a position to examine why it does not stand up to scientific investigation. This does not put it on a par with evolution, which is recognised as the best explanation for the history of life on Earth from its beginnings and for the diversity of species.

Evolution is rightly taught as an essential part of biology and science courses in schools, colleges and universities across the world. Creationism, which has no scientific validity, can be discussed in a science class if it is raised by a pupil, but should in no way be seen as comparable to evolution or any other scientific theory which is backed up with evidence.

If that is what Reiss is saying, it seems fairly difficult to argue with, and it's essentially what I was getting at in my previous post. But it's a bit of a tangle, and there are several interlocking issues. A number of people seem to believe that Reiss's position as an ordained church minister makes him ineligible for his post, as is made most clear by Dawkins. I would have to disagree with that. Just because Reiss is a committed Christian, that clearly doesn't mean he has to believe in the literal truth of the Genesis account of creation. Plenty of perfectly sensible people take it as an allegory or a fable. There are creationists and creationists; if Reiss were a young Earth creationist, arguing that the Earth was 6,000 years old and was created in seven days, his position would be untenable. But that is clearly not what he is saying, as a reading of what he actually said will show. The controversy seems to have stemmed from the way the Guardian quoted him, when they wrote that Reiss thought that "science teachers should not see creationism as a 'misconception' but as an alternative 'world view'". What he actually said was:

Creationism can profitably be seen not as a simple misconception that careful science teaching can correct. Rather, a student who believes in creationism has a non-scientific way of seeing the world, and one very rarely changes one's world view as a result of a 50-minute lesson, however well taught.

That should be seen in the context of what Reiss said about discussion of creationism:

If questions or issues about creationism and intelligent design arise during science lessons they can be used to illustrate a number of aspects of how science works.


This is exactly my view on the matter, and I would say that it is a more nuanced and sensible point than the Guardian's precis would suggest. I can't be too scathing about that, as I wrote my previous post based on the Guardian's construction of Reiss's comments. Which just goes to show that you should always go back to original sources.

For me the grave difficulty here is that scientists are seen to be saying that there is no place for the religious in science. A religious man is being attacked for religious views that, on closer inspection, he doesn't actually hold. In some cases he is being attacked simply for being religious. That is wrong. There have always been religious people in science, and many of them have been exceptionally capable. What Reiss has said is actually reasonably sensible, and a lot of the opprobrium seems to have come from reading the press, rather than what Reiss actually said.

You lose, quack

Another fantastic result for the mighty Ben Goldacre: a nutritionist by the name of Matthias Rath was suing Dr Goldacre, and the Guardian newspaper, which publishes his Saturday column. Dr Goldacre had criticised Rath for his advice that nutritional supplements could reverse the course of AIDS, and that patients should stop taking anti-retroviral drugs (ARVDs). Rath has now dropped the case, and been ordered to pay costs. Hopefully this will generate enough publicity for people to realise just how dangerous bad, unevidenced health advice can be.

We know that ARVDs work, and we know that nutritional supplements do not work, for treating people with AIDS. So what Rath was doing was advising people not to take drugs that work, and instead to take nutritional supplements which could have had no effect on the course of their illness. It is difficult to describe this as anything other than murderous quackery. There seems to be little doubt that people have died because of this advice.

You may think it is fair to say that Rath probably doesn't fall into the 'mainstream' of nutritionists. But I am not so sure. Patrick Holford, for example, a prominent UK nutritionist, has said that "AZT, the first prescribable anti-HIV drug, is potentially harmful and proving less effective than vitamin C". This is superficially based on an actual scientific study. But it goes far, far beyond a reasonable interpretation of the study, which looked at what happens to some cells in a dish on a lab bench when you put some vitamin C on them. It is simply impossible to take that kind of bench research and apply it to what happens in something hugely complex like the human body. Holford is not a marginal figure: he is at the top of his profession. At least he doesn't suggest that ARVDs don't save lives.

Rath is clearly barking. But he thrives in the environment of denigration and misunderstanding of mainstream science that obtains in the world of nutritionism. The difference is that Rath, in taking his supplement quackery to somewhere that desperately needed drugs, not vitamins, was able to cause a hell of a lot of damage.

As Ben Goldacre points out, the title "Nutritionist" is not protected, so anyone can call themselves a nutritionist. It follows that I am a nutritionist too. As your nutritionist, I would suggest that if you feel that you need nutritional advice, you should seek out a registered dietitian, who will actually be a qualified health professional.

Friday, 12 September 2008

Creationism in schools redux

I suppose the debate as to whether you should teach creationism in science class will always be with us. Here's Prof Michael Reiss, director of education at the Royal Society, wading into the morass in today's Guardian. My opinion on this, for what it's worth, is that what needs to be taught is what science is, how it is done, and what it's useful for. Without that context, students don't have the tools to evaluate the arguments, and you're basically engaged in indoctrination, whether you're teaching evolution or creationism. In my view, the evolution versus creationism debate is a perfect opportunity to provide that context; simply pretending that there's no discussion is not helpful.

Unfortunately, this does not seem to be the position that Reiss is taking. From the article:

Reiss said he used to be an "evangelist" for evolution in the classroom, but that the approach had backfired. "I realised that simply banging on about evolution and natural selection didn't lead some pupils to change their minds at all. Now I would be more content simply for them to understand it as one way of understanding the universe," he said.

Reiss, who is an ordained Church of England minister, told the British Association Festival of Science in Liverpool that science teachers should not see creationism as a "misconception" but as an alternative "world view".


This seems to be taking the rather wishy-washy view that all world views have equal explanatory power. To me, creationism is not a way of 'understanding' the universe: it's a way of refusing to understand it. For all I know, there could be a creator of some kind. But that is not a scientific hypothesis that enables me to understand anything I observe. The place of creationism in science class is as an example of what is not science, and where the limits of scientific enquiry might lie.


Wednesday, 27 August 2008

Heavy metals in "safe", natural, alternative medicines...why we need evidence based medicine

I saw this in today's Guardian. A team led by Robert Saper of the Boston Medical Center ordered 230 different ayurvedic medicines from websites, and analysed the 193 they actually received for metal content [interesting that they never received 37 of them...]. For those interested, the paper is in JAMA 300 (8): 915-923, but it's behind a paywall...

What is Ayurvedic medicine? It's not something I know a lot about, but it's a traditional form of Indian medicine. Medicines are made up of herbs or mixtures of herbs, which can be combined with metals, minerals or animal products. The medicines that include metals, minerals or gems are known as rasa shastra medicines. As is typical for any form of alternative medicine, Ayurveda is described as being safe and natural by the various people who sell it: try a simple Google search to see what I mean. But how safe are they really?

Well, according to Saper and colleagues, a good proportion of them are stuffed full of heavy metals. Of the 193 medicines they analysed, 20.7% contained detectable levels of lead, mercury or arsenic. Unsurprisingly, metals were more prevalent in the rasa shastra medicines, of which 40.6% had detectable levels of nasty metals. The authors then took the recommended doses for each of the metal-containing medicines, and calculated what the daily intake of heavy metals would be for someone taking the medicine according to the stated dose. They found that for ALL of the medicines that contained detectable metal levels, a person taking the recommended dose would exceed standards for the ingestion of heavy metals. For some of the medicines manufactured in India, ingestion of heavy metals exceeded the limits by up to 10,000 times.
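The arithmetic behind that kind of claim is simple enough. A back-of-envelope sketch in Python, with invented concentrations, doses and limits (not Saper's data):

    # Invented numbers, not from Saper et al.: daily intake is just the
    # measured concentration times the label's recommended daily dose
    lead_ug_per_g = 120.0      # lead concentration in the product
    dose_g_per_day = 3.0       # recommended daily dose from the label
    limit_ug_per_day = 6.0     # example regulatory intake limit

    intake = lead_ug_per_g * dose_g_per_day
    print(f"lead intake: {intake:.0f} micrograms/day, "
          f"{intake / limit_ug_per_day:.0f}x the limit")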

Pretty scary, eh? There are some caveats here. Firstly, these medicines were bought online: the authors concede that the results may not be similar for medicines given in consultation with an Ayurveda practitioner, or over the counter in western pharmacies. Even so, a previous study of medicines sold in Boston by Saper and colleagues showed that 20% contained lead, mercury or arsenic. So be careful: it is clear that while Ayurveda may be a 'natural' alternative to conventional medicine, it is not necessarily 'safe', as these case reports also testify. This is why conventional medicines are tested for safety before they can be sold. While there is no doubt that the evidence-based process of efficacy and safety testing can sometimes go wrong, it should be supported; the alternative is taking medicine of no proven benefit and with no understanding of the risks. This is too often what happens with inadequately regulated alternative treatments.

Tuesday, 26 August 2008

Renaissance for t'other blog

As some of you may know, I have another blog that I use to write about my research, and other things of interest in the geology of sedimentary basins and petroleum geology. Or at least, that was what I was supposed to do with it. Of course, it's far too much fun to go homeopath baiting, and so I've tended to neglect it.

I've now got a plan to try and write something for that blog once a week. The first of these posts is up now, and addresses the question of abiogenic oil. I hope at least some of you will be vaguely interested enough to have a look. Future stuff will include some neat bits and pieces from my own research.

Pride and Prejudice

I was in Manchester city centre for the Pride parade on Saturday. It was a wonderful sunny day, and everyone had a smile on their face; you don't see that in Manchester every day. You've never seen so many loons in one place in your life. Cracking stuff.

The parade is a pretty big event these days. It takes about an hour and a half to go past. Half the city seems to be in it, with the other half watching. My better half was sprinting around the parade route with her running club, the Manchester Frontrunners, so I was in town to watch.

As you might imagine, there was a small group of Christian fruitcakes just by the Town Hall, holding placards about how God destroyed Sodom and Gomorrah, and so on. This was never going to be popular, and they were surrounded by a phalanx of Manchester's finest. There was occasionally a chorus of boos as the more militant sections of the parade went past, but they were mainly ignored or just treated as part of the entertainment. Once the parade had gone past, the Christian loon-in-chief was shouting about how further Pride events would bring destruction upon Manchester. After all, God had punished New Orleans for its Mardi Gras celebrations. A small crowd of people were stood around, roaring with laughter at every new piece of nonsense, before gradually filtering away to various of Manchester's hostelries.

It seems like poetic justice that of all the people in Manchester on Saturday, the Christian loons were the only ones not having a good time.

Tuesday, 19 August 2008

A mystery paper...

Just a little bit more on the interview with Dana Ullman that I wrote about here.

Ullman claims that a re-analysis of Shang et al. has been accepted for publication in the Journal of Clinical Epidemiology. The only reference I can find to this study is this, where a study dated 2007, entitled "The conclusions on the effectiveness of homeopathy highly depend on the set of analysed trials" by R Ludtke and ALB Rutten is listed as being 'in press' in the Journal of Clinical Epidemiology.

Here's the list of articles in press in the Journal of Clinical Epidemiology. There is no sign of any such paper. Various searches fail to find any similar papers published anywhere else, or in earlier issues of the Journal of Clinical Epidemiology. The only thing I can find is a paper in Homeopathy called "‘Proof’ against homeopathy in fact supports Homeopathy", in which one Lex Rutten is credited as the first author. Whether this is the same Rutten I cannot say. The main point of the paper seems to be that if you add four positive trials to the Shang dataset, the result would be more positive. And they accuse Shang of cherry-picking. Two of the trials complained about were excluded [PDF] from the Shang meta-analysis: the Fisher et al. paper because it had an ineligible study design, and the Wiesenauer and Gaus paper because no matching conventional trial could be found. Of the other two, one by Arnal-Laserre appears to be a French thesis of some description [EDIT: This is a French thesis: it was mentioned in the Cochrane review of "Homoeopathy for the induction of labour". Apparently, the reviewers could not obtain a copy of the thesis, which perhaps explains why Shang et al. did not include it], and the other by Maiwald et al. was not a placebo-controlled trial.

So, does this re-analysis exist, or is it just another figment of the collective homeopathic imagination? And if it ever does get published, is it likely that it will have anything useful to say?

Monday, 18 August 2008

Dana Ullman says the thing that is not...

...no surprise there, then.

Here's Ullman, a US-based homeopath, in an interview published on the website of Sue Young, a London-based homeopath. There's all kinds of drivel here, but there is an exchange on the Shang et al. meta-analysis of homeopathy (published in the Lancet in 2005) that particularly caught my eye, because it's full of absolute nonsense. Not only that, but Ullman has had his misconceptions about this paper explained to him in numerous places on the internet, including on this very blog. Here's what he and his interviewer, one Louise Mclean of the Zeus information service, had to say:

DANA: In fact there is a new study that is coming out shortly which is a re-analysis of the 2005 Lancet review of Shang. The researchers got it accepted in a major international journal of research. What they have finally done is what Shang didn’t do. He didn’t review ALL of the high calibre research but only a small part of it. He ignored comprehensive analysis entirely. I think he knew exactly what it was but he didn’t want to report on it, as it was too positive. Instead he only reported on trials with very large numbers of subjects because when you do that, most of those studies use one remedy for everybody without any degree of individuality.

LOUISE: We individualise.

DANA: We do individualise but sometimes the single remedy or the formulas will work for a broad number of people.

LOUISE: Like Mixed Pollen for hayfever.

DANA: That’s right or Oscillococcinum. But for some reason they did not include any of David Reilly’s research. http://www.bmj.com/cgi/content/abstract/321/7259/471

I don’t know why they ignored it.

LOUISE: It was too positive.

DANA: In fact they had a remark in the Shang article published in the Lancet, where they specifically made reference to trials on respiratory ailments and that the results were robust, but they said they couldn’t trust them because there were only 8 studies. But then again they based their entire analysis on 8 homeopathic studies and 6 conventional ones. So they can’t have it both ways and this new journal article in the Journal of Clinical Epidemiology which is ranked as one of the top international journals of reviews of research, has accepted the new studies.


Sigh. Why is this nonsense? Let me count the ways.

1. Shang et al. did in fact analyse ALL of the trials of homeopathy that met their inclusion criteria. This allowed them to establish, using statistical methods, that smaller trials and those of less robust methodology showed better results for homeopathy, because of bias. The good quality, large studies showed that homeopathy had no effect. This is the pattern you would expect to see if homeopathy is a placebo (see the sketch after this list for a toy demonstration).

2. Ah, individualisation. In fact, a number of the trials in the Shang study were of individualised homeopathy (including two of those that were considered large and of high quality). There was no evidence that individualised homeopathy was better than any other type of homeopathy (p=0.636). In any case, individualisation is only important when it suits Ullman, as seen when he says "We do individualise but sometimes the single remedy or the formulas will work for a broad number of people".

3. The meta-analysis not only included the Reilly paper in the BMJ that is linked to, but two other Reilly papers, as can be seen from the additional material [PDF] to the paper that is available online. This is contrary to Ullman's assertion that "for some reason they did not include any of David Reilly’s research".

4. The point that Shang et al. make about the 8 studies of respiratory ailments is that 8 studies was too few for a meta-analysis restricted to those studies to detect the bias that is revealed by an analysis of the complete dataset. The eight studies of homeopathy that Ullman wrongly claims Shang et al. "based their entire analysis on" were identified as the studies most likely to be free of bias, based on an analysis of the entire dataset. So the authors are not trying to have it both ways at all, and Ullman is comparing apples with oranges.
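
As a footnote to point 1, here's a toy simulation of the small-trial effect, in Python. I should stress that this is my own illustration of one bias mechanism (selective reporting of flattering small trials), not a reproduction of Shang et al.'s actual analysis: the point is simply that even a treatment with zero effect looks good if small positive trials are preferentially reported, while large trials are harder to hide.

```python
# Toy simulation (my own, not Shang et al.'s method): the true
# treatment effect is zero throughout, yet the small, selectively
# reported trials come out looking positive.
import numpy as np

rng = np.random.default_rng(42)

def observed_effect(n):
    """Effect estimate from one null trial; the noise shrinks as n grows."""
    return rng.normal(loc=0.0, scale=1.0 / np.sqrt(n))

# Small trials: suppose only the flattering ones see the light of day.
small = [e for e in (observed_effect(20) for _ in range(200)) if e > 0.1]
# Large trials: too conspicuous to bury, so all of them get reported.
large = [observed_effect(1000) for _ in range(20)]

print(f"small trials, selectively reported: mean effect {np.mean(small):+.2f}")
print(f"large trials, all reported:         mean effect {np.mean(large):+.2f}")
```

The small trials come out clearly positive and the large ones hover around zero, which is exactly the pattern Shang et al. found in the homeopathy literature.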

What I find particularly annoying about this is that Ullman and Mclean are essentially accusing Shang and his co-workers of research misconduct. What do they base this very serious accusation on? On a total misunderstanding of their paper, and a flat-out lie that they omitted research that was 'too positive', when that research was in fact included in the analysis. I am not a statistician, but the paper is not that difficult to understand, if you read it. Followers of Dana Ullman's career will not be surprised by his disingenuousness on this, I'm sure.

It seems that no matter how often I (and others, notably apgaylard) write about the persistent misrepresentation of the Shang paper, the homeopaths carry on regardless.

Friday, 15 August 2008

More bad homeopathic water science

More from the new issue of comedy journal Homeopathy. Homeopaths continue to pursue research on 'high dilutions' (i.e. dilutions such that it is highly unlikely that any of the original solute remains) in search of a vaguely plausible mechanism for homeopathy to work. Since the evidence is that homeopathy doesn't work, this is unlikely to be a fruitful endeavour, but it does provide entertainment for connoisseurs of pseudoscience.

The latest offering is a paper by one R. Assumpção, entitled "Electrical impedance and HV plasma images of high dilutions of sodium chloride" and an accompanying editorial by Cyril Smith. Two sets of observations are presented: a set of Kirlian photographs and a set of measurements of the impedance of various homeopathic dilutions of NaCl.

What is Kirlian photography? When an object resting on a photographic plate is connected to a source of high voltage, an image is formed on the plate. Semyon Kirlian, whom the technique is named after, thought that such images might be compared to a human 'aura'. You can see how this would be attractive to various kinds of energy medicine fruitcakes. In the paper, Assumpção provides Kirlian photographs of homeopathic dilutions of an NaCl solution (6c, 12c, 24c and 30c) and of succussed (shaken) water without dilution. The images are certainly pretty, but what do they show? Not a lot, as far as I can tell. It isn't clear how such images should be interpreted, and all Assumpção really concludes is that different dilutions look different. This, in itself, is not a particularly exciting finding, in the absence of any hypothesis as to why that is and what it might mean.

What of the impedance data? Assumpção finds that the impedance of the samples increases with dilution, up to about 12c (which is the point where Avogadro's constant suggests that there is unlikely to be any of the original NaCl remaining), and then decreases again slightly with further dilution. "This phenomenon is inexplicable in terms of conventional chemical theory" concludes Assumpção, a statement that really should be followed by "OMG!!!!11!1!1!!1!!!!!1".
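
The 12c threshold, by the way, is just arithmetic. Here's the back-of-envelope version (my own check, nothing from the paper):

```python
# Each centesimal (c) step dilutes by a factor of 100, so 12c dilutes
# by 100**12 = 1e24, which exceeds Avogadro's number (~6.022e23).
AVOGADRO = 6.022e23

moles_of_nacl = 1.0            # a generous mole of NaCl to start with
dilution = 100 ** 12           # 12 centesimal steps
survivors = moles_of_nacl * AVOGADRO / dilution
print(f"expected NaCl units left at 12c: {survivors:.2f}")  # about 0.60
```

In other words, beyond 12c you expect, on average, less than one unit of the original solute to survive, even starting from a whole mole.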

Well, I think I have a reasonable explanation for the results that causes no problems for conventional chemical theory. As expected, dilution of the NaCl solution causes an increase in impedance as ions are removed from the solution. But there is also a competing increase in the concentration of dissolved ions caused by the succussion (agitation) process, as atmospheric gases and other impurities are dissolved in the water. So the small decrease in impedance beyond 12c could result from this process. I'm more puzzled as to why the impedance of the distilled water increases when it is shaken, suggesting a decrease in the concentration of dissolved ions. I suspect that there is a conventional explanation, but Assumpção doesn't seem to be particularly curious about this. Is there a change in the ionic species in the water as it is shaken? Does shaking facilitate outgassing of the dissolved gases that remain in the distilled water? We don't know. This continues the homeopathic water research tradition of publishing things without any thought as to what factors (other than the magic of water) might have led to the results.
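
To show that this explanation at least hangs together, here's a toy model in Python. Every number in it is invented, chosen purely to reproduce the qualitative shape; the point is only that a dwindling solute plus a slowly accumulating contaminant gives you a rise-then-fall in impedance with no new chemistry required.

```python
# Toy model of my proposed explanation (all numbers invented): each
# centesimal dilution divides the NaCl concentration by 100, while each
# succussion step adds a small fixed dose of contaminant ions
# (dissolved atmospheric gases, leached glass, and so on). Impedance is
# taken as simply proportional to 1/(total ion concentration).

nacl = 1.0            # starting NaCl concentration, arbitrary units
contaminant = 0.0     # ions introduced by succussion and handling
DOSE = 1e-24          # assumed contaminant added per succussion step

for step in range(1, 31):                 # 1c .. 30c
    nacl /= 100.0                         # centesimal dilution
    contaminant += DOSE                   # succussion adds impurities
    impedance = 1.0 / (nacl + contaminant)
    if step in (6, 12, 24, 30):           # the dilutions in the paper
        print(f"{step}c: impedance ~ {impedance:.2e} (arbitrary units)")
```

Run this and the impedance climbs steeply up to about 12c, where the accumulating contaminant overtakes the vanishing NaCl, and then drifts gently downwards, which is the shape Assumpção reports.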

Of course, the other thing to note is that the paper only looks at one solute, NaCl. For homeopathy, what would be really interesting is if you could show that there was a difference between two different homeopathic substances at dilutions greater than 12c. Previous attempts to do this have ended in some embarrassment, as Smith acknowledges in the accompanying editorial, when he states "in the light of the controversy which has attended previous claims in this field, caution, and independent repetition of these results is required". There seems to be little sign of any such caution so far.

Finally, in Figure 5 of the paper, Assumpção omitted to label the impedance curves to show which curve belongs to which dilution. The solution? Simply write the labels in, with a pen, afterwards. It is truly amateur hour.

Thursday, 14 August 2008

Are the homeopaths getting there?

The new issue of Homeopathy (the in-house shitrag of the Faculty of Homeopathy) is out, which is always a joy. To my surprise, there's actually a paper in there that seems to be groping towards a vague sort of sense. The authors (Clare Relton and Alicia O'Cathain of the University of Sheffield, and Kate J. Thomas of the University of Leeds) seem to be on the cusp of imagining what homeopathy could actually be useful for. They don't say outright that the principles of homeopathy make no sense and that it doesn't work, but they do seem to be getting close to an idea of what might make homeopathy useful in the absence of specific effects from the wee sugar pills they give you. As we know, the evidence from the largest and best quality trials tells us that those pills don't do anything over and above the placebo effect. But what can we do with the placebo effect? In some ways, the paper is a missed opportunity, but there are at least signs of movement.

In the paper, the authors distinguish between several meanings of "homeopathy". Does it refer to the medicines? To consultations with a homeopath? To the principles of homeopathy? Or to the homeopathic therapeutic system as a whole? To be honest, I don't really care. But then the authors look at the definitions of 'efficacy' and 'effectiveness':

In clinical epidemiology ‘efficacy’ refers to ‘the extent to which a specific intervention, procedure, drug, regimen, or service produces a beneficial result under ideal conditions’. Evidence of efficacy is usually interpreted as requiring evidence from placebo-controlled randomised controlled trials (RCTs). Whereas ‘effectiveness’ is a ‘measure of the extent to which a specific intervention, procedure, drug, regimen or service, when deployed in the field in routine circumstances does what it is intended to do for a specific population’. Evidence of effectiveness requires pragmatic RCTs and well conducted observational studies of routine clinical practice.


The authors then argue that what NICE requires for interventions provided on the NHS is evidence of cost-effectiveness, based on the clinical effectiveness of the intervention, not evidence of efficacy. Then they go on to look at the evidence.

The authors divide the evidence into experimental and observational evidence, dealing first with the experimental evidence, by which they mean randomised, double-blind, placebo-controlled trials. This is where things go off the rails a bit. The authors suggest that looking at the 'active ingredient' of homeopathy as being the medicine given is far too reductionist; perhaps the active ingredient should be thought of as including "Six putative active ingredients... : patient's openness to the mind body connection, consultational empathy, in depth enquiry into bodily complaints, disclosure, the remedy matching process, homeopathic remedies". Not only that, but "Other authors have discussed the difficulties of separating out the effects of the homeopathic medicine from the consultation effects".

Now, this is interesting. It seems that what the authors are saying here is that much of the benefit that patients get from homeopathy is unrelated to the medicine they are given; it is related to the relationship between the homeopath and the patient, and to the consultation process. In medical science, there's a term for this: it's called the placebo effect. That's why you conduct placebo-controlled trials; because those effects exist, and you therefore need to separate them from the effect of the actual medicine that is given to the patients. That placebo-controlled trials are specifically designed to address this problem seems to have passed the authors by. Placebo-controlled trials of individualised homeopathy (i.e. the whole homeopathic package, including consultation) have been carried out, and show no better results than trials that did not include the consultation. But at least there is a recognition here that non-specific effects play a very large role in homeopathy.

The authors go on to look at what they call 'observational evidence', consisting of observational studies and case reports. The authors admit that this sort of evidence is "prone to many types of bias, it is regarded as weaker than experimental evidence, and is generally disregarded in systematic reviews of evidence". But they say "However, this type of evidence does acknowledge the homeopath i.e., focuses on treatment by a homeopath in contrast to the experimental evidence which focuses only on the homeopathic medicine".

This seems to be just the usual special pleading that 'my woo can't be tested by your cold-hearted science'. Then again, the authors do seem to be suggesting that the pills containing nothing are not as important as the context in which those pills are delivered. So where next? Unfortunately, the authors miss this opportunity by simply making some weak recommendations about terminology. But if the homeopathic community could admit that their drugs don't work, and sit down to figure out how they could strip out all the pseudo-science about energy medicine, water memory, quantum entanglement and so on, and how they could harness the power of the placebo effect, then what remained could conceivably be of some use.

That the homeopaths haven't managed to figure this out for themselves yet is a little surprising, given that this is what Ben Goldacre has been arguing for some time. It's also interesting to reproduce the last paragraph of the famous Shang et al. meta-analysis, which was so damning of homeopathy:

"We emphasise that our study, and the trials we examined, exclusively addressed the narrow question of whether homoeopathic remedies have specific effects. Context effects can influence the effects of interventions, and the relationship between patient and carer might be an important pathway mediating such effects. Practitioners of homoeopathy can form powerful alliances with their patients, because patients and carers commonly share strong beliefs about the treatment’s effectiveness, and other cultural beliefs, which might be both empowering and restorative. For some people, therefore, homoeopathy could be another tool that complements conventional medicine, whereas others might see it as purposeful and antiscientific deception of patients, which has no place in modern health care. Clearly, rather than doing further placebo-controlled trials of homoeopathy, future research efforts should focus on the nature of context effects and on the place of homoeopathy in health-care systems."

Could the homeopathic and conventional medicine communities be moving towards common ground?

Friday, 18 July 2008

Some good news...

According to this article in Pulse, the number of prescriptions written for homeopathic 'remedies' in the UK fell significantly between 2005 and 2007: 83,000 prescriptions were written in 2005, 62,700 in 2006 and 49,300 in 2007, a fall of roughly 40% over two years.

Naturally, if you're a homeopath, this is the result of a 'hate campaign' conducted against homeopathy by Prof Edzard Ernst, the Laing Professor of Complementary Medicine at the Peninsula Medical School (Universities of Exeter and Plymouth). At least, this is what one Dr Tim Robinson says in the Pulse article. This is fairly typical of the homeopathic community, who inevitably confuse a desire to rigorously assess the evidence base with a 'hate campaign'. Dr Robinson writes "Patients are not asking for [homeopathy] because of what has been written in the press..." This ignores woefully uncritical articles like this one that appear in the press, and columns for homeopaths in local papers that amount to free advertising, like this in the Manchester Evening News. If it were left to the press, people could be forgiven for thinking that there was decent evidence that homeopathy works.

Perhaps the message is getting out: that in an NHS with scarce funding, there's no room for funding things that don't work.

Monday, 7 July 2008

The homeopathic memory hole: not much progress since 1861...

A recent conference on the current state of research in homeopathy once again exposes the field as an intellectual shambles.

The bloggers have already written a good deal about some of the presentations at the conference. AP Gaylard points out that homeopaths are still "saying the thing that is not" about a meta-analysis (Shang et al.) that was published in 2005. The Quackometer goes into great detail about why Lionel Milgrom is talking rubbish; something that is also ably demonstrated in several excellent posts by shpalman. And Gimpy explains why Dr Alex Tournier PhD's assessment of the evidence is fundamentally flawed.

What I find difficult to understand is how the homeopaths can justify their selective quoting of evidence. The repeated instances of saying the thing that is not about the Shang paper are only one example. A materials science paper by Rao et al. is constantly wheeled out as evidence that homeopathy could work, for all the world as if it hadn't been demolished by a critical letter to the editor in a subsequent issue of the journal (I should know, I was a co-author of the letter). This letter has gone down the homeopathic memory hole, never to be mentioned again, while the original (execrably bad) paper refuses to die. There is also an identifiable pattern to how homeopaths discuss meta-analyses. Apparently positive ones that take no account of trial quality are always mentioned, but crucial caveats are always omitted (for example, Linde et al. 1997 said they could find no evidence that homeopathy worked for any particular condition: this is never mentioned). A later paper by Linde et al. looked at trial quality, and concluded that the results weakened the positive conclusions of their earlier paper: this is never mentioned. The Shang et al. paper is only mentioned to make erroneous criticisms of its methodology, to incorrectly state that critical information was missing from the paper, or to accuse the authors of scientific misconduct based on a total misunderstanding of the paper.

A bit of googling the other day turned up this. Back in 1861, a former editor of the North American Journal of Homeopathy, one Dr Peters, publicly renounced homeopathy. I recommend having a quick read of it. The points made are just as valid today as they were in 1861. And yet since then, homeopathy has persisted in heading into an intellectual cul-de-sac. Perhaps this is because homeopaths only retain the bits of evidence that superficially seem to be in favour of homeopathy: everything else goes down the memory hole, even when it is published in homeopathic journals.

For some time, I've been considering putting together a summary of the evidence on homeopathy, and sending it off to Homeopathy (the in-house comic of the Faculty of Homeopathy) for publication. It seems clear that there's no point, even if the journal would print it. Firstly, homeopaths will pretend it never existed. And secondly, there are plenty of decent books on the subject for those who are genuinely curious about the evidence. The homeopaths will continue to be deluded.

Wednesday, 25 June 2008

Blogging is a transferable skill...

I've just returned from Aberdeen, where I was being interviewed for a job at the university (fingers crossed). In my limited experience of these things, it's always a bit nerve-wracking, and you can never quite predict what questions you'll be asked (although there are some that always come up: where do you see yourself in five years?). One of the five-member panel asked me if, apart from the research-related stuff on my CV, there were any other skills or things that I would like the panel to know about. I was unprepared for that, and I figured that saying 'no' would sound like 'no, I have nothing of interest to say about myself whatsoever', and couldn't be a good thing. Although I hadn't planned to, I talked a little bit about this blog, and explained how I used it to try and explain scientific issues in an accessible way. To my surprise, it went down very well with the panel, and I was asked how my blogging experience might help in university teaching. Given my relative lack of teaching experience, having the opportunity to talk about that can't have done my chances any harm.

So, if you're a blogger doing interviews, it's probably worth bringing up your blog. After all, you use it to distil complex issues down to their essentials, don't you? That's definitely a transferable skill.

More intention experiment: the ghost of Benveniste

I should probably avoid the Intention Experiment website, but it's such a fabulous compendium of nonsense and loony ideas that it's hard to leave alone. Now they're channelling the ghost of Jacques Benveniste. As many will know, Benveniste caused a storm of controversy when his lab published a paper in Nature that appeared to show that homeopathic concentrations of a certain type of antibody could have a biological effect, even though the chances of the solution containing any actual molecules of the antibody were tiny. This is the basis of the idea of 'memory of water' in homeopathy. The Nature paper was published with an unprecedented "editorial reservation", and a team assembled by Nature visited the Benveniste lab to look into the results. The results of the investigation were damning, showing that the design of the experiments was poor, and an article outlining the problems was published in a later issue of Nature. Ever since, homeopaths and other brands of quacks have been convinced that Benveniste's results were suppressed by the scientific establishment on behalf of Big Pharma, and so on.

Benveniste, however, was not put off by this setback, and continued in his work, going on to suggest that not only did the 'memory of water' effect exist, but that it could be transmitted digitally, down phone lines or over the internet. Brilliantly, this only gave 'positive' results when the equipment was being run by a particular researcher. Benveniste doesn't seem to have reflected too hard on why that might have been the case. Benveniste called this "digital biology", which would have "immense consequences on medical diagnostic procedures and the agro-food industry, with huge technological and commercial impact", and was only being held back because scientists are "opposed to the evolution of science". You recognise the narrative here, don't you?

Why do I mention all this? Because the Intention Experiment blog carries news of an experiment into "healing by e-mail". Apparently, a "Francesca McCarney, Ph.D., teacher of professional intuitives at the Academy of Intuitive Studies and Intuition Medicine" conducted the experiment, where 88 people were each sent 2 e-mails. One of these e-mails had "healing energy" "encapsulated" into it, and the other one did not. Apparently, the e-mails were identified correctly 31.9% of the time, against a 25% probability of getting them right by chance. This seems like a deeply unimpressive result, with no confidence interval to give us an idea of how likely it would be for such a result to occur by chance, but we're told that "scientists would consider it highly significant result" [sic]. We also don't know whether there might inadvertently have been clues in the text of the e-mails.
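
Since no confidence interval is given, here's a quick back-of-envelope significance check. Note the assumption, which the report doesn't confirm: I'm treating this as 88 independent guesses, one per participant.

```python
# Rough significance check (my own back-of-envelope; n = 88 independent
# guesses is an assumption). How likely is 31.9% correct when chance
# alone gives 25%?
from scipy.stats import binom

n = 88                  # assumed number of independent guesses
p_chance = 0.25         # chance rate quoted in the report
k = round(0.319 * n)    # about 28 correct identifications

# One-sided p-value: probability of k or more successes by luck alone.
p_value = binom.sf(k - 1, n, p_chance)
print(f"{k}/{n} correct: one-sided p = {p_value:.3f}")
# Comes out around 0.09: not significant at the conventional 0.05 level.
```

On that reading, a result this size or bigger turns up by luck alone roughly one time in eleven, which is not what most scientists would call "highly significant".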

It seems that loony ideas never die; they just re-appear periodically in a slightly different form.

Friday, 20 June 2008

Another study for homeopathy awareness week...

It's homeopathy awareness week (until tomorrow), with a focus on homeopathic treatment for allergies.

In the spirit of awareness, here's a systematic review of complementary therapies in the treatment of allergic rhinitis and asthma in the Journal of Allergy and Clinical Immunology, with a summary at the NHS National Library for Health here.

For homeopathy, the authors found that "Some positive results were described with homeopathy in good-quality trials in rhinitis, but a number of negative studies were also found. Therefore it is not possible to provide evidence-based recommendations for homeopathy in the treatment of allergic rhinitis, and further trials are needed".

Fun with trials, for homeopathy awareness week...

Occasionally, people sympathetic to homeopathy try to discuss scientific studies. It's usually fun when this happens, because I get the chance to do some paper deconstruction. What can I say, I'm a geek. I usually learn something from doing it, even if only that I'm talking rubbish.

In the comments to this post, a lady (I assume) named Vicky posted three studies that she claimed support homeopathy. Of these, the Graunke et al. paper on tadpoles has some glaring problems which are discussed here, and the Oberbaum et al. paper simply contains too few subjects (15 in each group) to come to any reliable conclusions. The third paper, by Bernstein et al., was a bit of an oddity, and I'd like to take a slightly closer look at it, in honour of homeopathy awareness week.

Firstly, the paper is published in the American Journal of Therapeutics. This journal does seem to be peer-reviewed, but is not listed in the ISI Journal Citation Reports, which suggests that it is not a top-of-the-line journal. But that in itself doesn't necessarily mean the findings are wrong.

The paper studies the effect of a proprietary homeopathic remedy, Relieva, on psoriasis. First red flag: psoriasis is a condition where patients experience flare-ups and remissions. So any improvement could be a result of this natural cyclicity rather than any treatment effect.
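
One statistical face of that cyclicity is regression to the mean: patients tend to enrol in a trial when their psoriasis is flaring, so their next measurement looks like improvement even if nothing is done. A toy sketch (my own invented numbers, nothing to do with the trial's data):

```python
# Regression-to-the-mean toy (invented numbers): severity fluctuates
# around a personal baseline; only patients caught in a flare enrol, so
# the follow-up measurement drops back towards the baseline untreated.
import numpy as np

rng = np.random.default_rng(7)
baseline, swing = 6.0, 3.0     # assumed PASI baseline and fluctuation

screening = rng.normal(baseline, swing, 10_000)   # severity at screening
enrolled = screening[screening > 8.0]             # only flaring patients enrol
followup = rng.normal(baseline, swing, enrolled.size)  # later, independent draw

print(f"mean PASI at enrolment: {enrolled.mean():.2f}")   # well above 8
print(f"mean PASI at follow-up: {followup.mean():.2f}")   # back near 6
```

With no treatment at all, the enrolled group 'improves' by several PASI points, which is why the placebo arm matters so much in a condition like this.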

Relieva is an unusually concentrated substance in homeopathic terms. The study describes the active ingredient as "a highly concentrated, proprietary extract of M. aquifolium 10% known as Psorberine". This remedy actually contains some active ingredient, unlike many homeopathic remedies, so there is at least a chance of it working without needing to overturn practically everything we know about pharmacology.

The study is described as a randomised, double-blind, placebo-controlled study, and in general it seems to be well conducted. There were a total of 200 subjects, 100 in each group, suggesting the trial is large enough to get decent results. The authors conducted an 'intention to treat' analysis. In this analysis, all subjects enrolled in the trial are considered, rather than just those who completed it. The authors 'impute' a worst-case score (12 points) for Psoriasis Area Severity Index (PASI), a measure of psoriasis severity, to all the patients who dropped out of the trial. I'm not sure why they did this: they might have a sound reason for doing that, but I don't see it explained in the paper. Notably, 3 subjects dropped out of the treatment group, compared to 26 in the placebo group. This is a large difference, and it would be expected to introduce a bias into the results of the intention to treat analysis (see the sketch below). Also, since a whopping 26% of the placebo arm dropped out of the trial, the results are at least of questionable reliability. The authors suggest that the high drop-out rate in the placebo group compared to the treatment group might be an indication of the efficacy of the treatment. However, their analysis of the subjects that completed the trial suggests that the placebo group experienced some improvement in their psoriasis (a PASI point reduction of 2.22 compared to 3.58 in the treatment group), so I'm not sure that explanation stacks up. The maximum PASI increase was +3 for the treatment group and +4 for the placebo group. It's also worth noting that this big difference in drop-out rates has the potential to break the blinding of the trial.
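
To see why the combination of worst-case imputation and lopsided drop-out worries me, here's a toy calculation (my numbers, not the trial's). Even if the completers in both arms ended up with identical scores, the imputation rule alone would make the placebo arm look substantially worse:

```python
# Toy ITT calculation (invented completer scores, not Bernstein et
# al.'s data): assigning every dropout the worst-case PASI of 12
# penalises the arm with more dropouts, whatever the treatment did.

def itt_mean_pasi(n_total, n_dropout, completer_pasi, worst_case=12.0):
    """Mean final PASI when each dropout is assigned the worst-case score."""
    n_completers = n_total - n_dropout
    return (n_completers * completer_pasi + n_dropout * worst_case) / n_total

# Suppose completers in both arms finish at exactly the same PASI of 5.0:
treatment = itt_mean_pasi(100, 3, 5.0)    # 3 dropouts, as in the trial
placebo = itt_mean_pasi(100, 26, 5.0)     # 26 dropouts, as in the trial

print(f"treatment arm ITT mean PASI: {treatment:.2f}")   # 5.21
print(f"placebo arm ITT mean PASI:   {placebo:.2f}")     # 6.82
```

A difference of over 1.5 PASI points appears out of thin air, purely from the bookkeeping.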

The acknowledgments of the paper are very odd, at least to me. They say "The authors thank Global Clinicals, Inc (www.GlobalClinicals.com) and its staff for performing this study, Missy Magill for the statistical evaluation, and Andrea Gwosdow (PhD) for preparing this manuscript". It's possible I'm being naive here, but this raises the question of what the authors actually did to earn the right to be credited as authors.

Who then are Global Clinicals, Inc? Here's their webpage. They undertake clinical research on behalf of their clients, so there is at least the appearance of a conflict of interest here; clients are not going to be happy if their treatment is shown to be no good. This is the sort of thing that Big Pharma are (rightly) criticised for.

So overall, I would suggest that the evidence in the trial does not unequivocally show that Relieva works. There are also some shenanigans surrounding the authorship of the paper that would be highly criticised by homeopaths if they occurred in a trial of conventional medicine. Finally, even if you accept that the trial provides evidence in favour of Relieva, it provides no general evidence in favour of the basic tenets of homeopathy ('like cures like' and 'potentisation'). For that, you would need to show that Relieva causes psoriasis in healthy volunteers in a 'proving', and show that its effects increased with increasing dilution. If it does work, it's not because it's homeopathic.