Friday, 30 November 2007
What's wrong with Shang et al.?
What did the paper do? The authors set out to test the hypothesis that homeopathic treatment effects can be attributed to the placebo effect. If that were the case, then any positive trial results for homeopathy would have to result from poor study design and/or bias. The authors tested this proposition by identifying 105 papers reporting 110 trials of homeopathy, and matching them with 110 trials of 'allopathy', or conventional treatments, on the basis of disorder treated and type of outcome measured. The authors then assessed the methodological quality of the papers, based on factors such as whether the trial was adequately blinded and whether it was adequately randomised. When all the homeopathic trials were analysed together, there was an effect beyond placebo. However, when the trials of low methodological quality and/or with small sample sizes were stripped out of the analysis, the remaining eight trials showed no effect beyond placebo. On the other hand, when the same procedure was followed for the conventional medicine trials, the six remaining trials did show an effect beyond placebo.
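To make that filtering step concrete, here is a minimal sketch of how trials might be screened on size and methodological quality. The records and the size threshold below are invented purely for illustration; Shang et al. applied their own published criteria for what counted as 'larger' and 'higher quality' trials.

```python
# Hypothetical trial records: (name, sample size, adequately randomised, adequately blinded).
# Both the records and MIN_N are invented for illustration, not Shang et al.'s data or thresholds.
trials = [
    ("trial A", 250, True, True),
    ("trial B", 40, True, False),
    ("trial C", 300, False, True),
    ("trial D", 220, True, True),
]

MIN_N = 100  # assumed size threshold, purely illustrative

def is_large_and_high_quality(n, randomised, blinded):
    """A trial survives the filter only if it is both large enough and well conducted."""
    return n >= MIN_N and randomised and blinded

best = [name for name, n, r, b in trials if is_large_and_high_quality(n, r, b)]
print(best)  # -> ['trial A', 'trial D']
```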
So far, so good. What have been the criticisms of the paper?
One criticism has been that the trials deemed to be large and of higher quality were not identified, and that the reporting of the meta-analysis was inadequate. This criticism does carry some weight, and the reporting in the original paper was not good enough. However, the authors recognised the problem, and rectified it by identifying the trials in a reply to published criticisms that appeared in the Lancet (Lancet 366: 2083). You can find all the details of the study via apgaylard's blog here. So, this criticism is no longer valid.
Another criticism has been that the meta-analysis only uses 8 papers out of 105 to conclude that homeopathic remedies are no better than placebo. This seems to miss the point of the study entirely. For one thing, it's a meta-analysis, so it pools studies in order to achieve greater statistical power than any single study could have. The eight studies of homeopathy have a total n of 1,923, which is quite respectable. Also, Shang et al. have not employed some sort of sleight of hand to dismiss the other 97 papers: they filtered them out because they were of inadequate methodological quality and/or size, based on clearly stated criteria. This allows the authors to compare the results from all the studies with the results from the best studies. When you use only the best studies, there is no longer any benefit for homeopathy beyond placebo. In contrast, using the best studies of conventional treatments, there is still an effect beyond placebo. Again, this is the whole point of the study, and criticising it on the basis that it seeks to use the best-quality studies seems somewhat misguided.
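For readers unfamiliar with how pooling works, here is a minimal sketch of inverse-variance (fixed-effect) pooling of log odds ratios, a standard way of combining trials in a meta-analysis. The numbers are invented purely for illustration and are not the Shang et al. data; the point is simply that the pooled estimate comes with a narrower confidence interval than any single trial on its own.

```python
# Minimal fixed-effect meta-analysis sketch with invented numbers (not Shang et al.'s data).
import math

# Hypothetical (log odds ratio, standard error) pairs for a few trials.
trials = [
    (-0.10, 0.30),  # a small trial: wide confidence interval on its own
    (0.05, 0.25),
    (-0.02, 0.20),
]

# Inverse-variance weights: larger, more precise trials count for more.
weights = [1.0 / se ** 2 for _, se in trials]
pooled_log_or = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled odds ratio.
lo = math.exp(pooled_log_or - 1.96 * pooled_se)
hi = math.exp(pooled_log_or + 1.96 * pooled_se)
print(f"Pooled OR = {math.exp(pooled_log_or):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The pooled standard error here (about 0.14) is smaller than that of any individual trial, which is exactly why combining the eight best trials tells you more than looking at each one in isolation.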
Another common criticism from homeopaths is that the study doesn't test 'real' homeopathy. Shang et al. split studies of homeopathy into four types:
1. Classical homeopathy: individualised treatment based on homeopathic history-taking
2. Clinical homeopathy: no history-taking involved, each patient gets the same remedy
3. Complex homeopathy: patients take a mixture of several different remedies
4. Isopathy: the agent judged to be the cause of the disorder is used as the remedy
The criticism usually amounts to a claim that some of these categories are not really homeopathy at all. For example, here's a website where they state flat out that there is no such thing as clinical homeopathy. This would be news to anyone who has wandered into Boots and seen the homeopathic remedies on sale there. More commonly, the criticism is that only 'classical homeopathy' is really homeopathy, and the other types don't count. Even if we allow this criticism, the fact is that 18 of the included trials were of 'classical homeopathy' as defined by the authors, and two of those made it into the group of eight large, high-quality trials. The statistical analysis also showed that there was little evidence that effects differed between the different types of homeopathy. So, not only did the study include trials of individualised homeopathy, it showed that these were no more effective than the other forms of homeopathy.
So, on the whole, it seems to me that the methodology of Shang et al. is reasonable, and the conclusions justified. I think it's probably true that no study is entirely without flaws, and I'm willing to be corrected on this. But so far I've seen no good criticism of the Shang et al. study that invalidates its conclusions.
Edit: Just as an aside, it's interesting to read the penultimate paragraph of Shang et al., where they discuss the place of homeopathy in treatment systems. I take the liberty of reproducing the paragraph below:
"We emphasise that our study, and the trials we examined, exclusively addressed the narrow question of whether homoeopathic remedies have specific effects. Context effects can influence the effects of interventions, and the relationship between patient and carer might be an important pathway mediating such effects. Practitioners of homoeopathy can form powerful alliances with their patients, because patients and carers commonly share strong beliefs about the treatment’s effectiveness, and other cultural beliefs, which might be both empowering and restorative. For some people, therefore, homoeopathy could be another tool that complements conventional medicine, whereas others might see it as purposeful and antiscientific deception of patients, which has no place in modern health care. Clearly, rather than doing further placebo-controlled trials of homoeopathy, future research efforts should focus on the nature of context effects and on the place of homoeopathy in health-care systems."
This seems to be entirely reasonable, and suggests that Shang et al. have no particular bias against homeopathy.
Wednesday, 21 November 2007
Something to prove
In the January issue of Homeopathy, Dantas et al. published a review of homeopathic provings, or Homeopathic Pathogenetic Trials (HPTs), as the authors prefer to call them. The authors defined HPTs as being "clinical trials designed to investigate the effects of the exposure of human volunteers, in good health, to potentially toxic or pathogenetic substances, diluted and serially agitated according to homeopathic pharmacopoeial methods, with a view to providing data to inform their use as homeopathic medicines". The idea is that symptoms caused by the homeopathic preparations can be cured by the same preparations, under the principle of 'like cures like'. There is no good evidence that this principle can be applied as a general rule, but even so it has become one of the foundation stones of homeopathy. One problem is that the symptoms in an HPT are recorded by the volunteers who take part in the proving. No quantitative data is collected about symptoms, and there are well-known problems with such self-reporting studies. Also, in many cases there is no way of telling whether the symptoms occurred as a result of the homeopathic preparation, or for some other reason, because such trials are not always placebo-controlled (Dantas et al. claim that 58% of the trials in their review were placebo-controlled).
The review by Dantas et al. concluded that "Most studies had design flaws, particularly absence of proper randomization, blinding, placebo control and criteria for analysis of outcomes", and went so far as to state that "The central question of whether homeopathic medicines in high dilutions can provoke effects in healthy volunteers has not yet been definitively answered, because of methodological weaknesses of the reports". Their central point is that while provings often turn up all kinds of symptoms, methodological flaws mean that you can't tell whether the symptoms were caused by the homeopathic preparation or not. The authors recommend that improved methodology should be adopted for future HPTs.
This is interesting stuff, and suggests that there are at least some homeopaths who question the value of HPTs, and on perfectly reasonable scientific grounds. It all starts to go a bit wrong in the responses to the article, which were published in the current issue of Homeopathy.
Sherr and Quirk's response is probably the most fun, and I suggest you track it down for yourself (but only if you've got time for such nonsense). Their point of view can be summarised by a paragraph towards the end of the paper, where they state "Eliminating the majority of symptoms or characteristic single symptoms due to over scientific vigour or a concern about statistical significance or background noise, risks throwing out the baby with the bathwater. It is important to remember the proof of provings is first and foremost their clinical usability and efficiency". Over-scientific vigour or a concern over statistical significance, indeed. This is pre-enlightenment thinking if ever I saw it.
They also say that "A good proving is not about producing every possible symptom. It is about producing enough symptoms of quality so that the intelligent homoeopath can perceive a meaningful totality". I take this to mean that you don't have to worry about using the best possible methodology, because the homeopath has some magic way of 'perceiving a meaningful totality'. Also, the object of the proving is to produce 'enough symptoms', not the ones actually caused by the preparation. (Here I gloss over the fact that homeopathic preparations tend to contain no active ingredient, so will in all likelihood produce no symptoms at all). This is illustrated by a proving of hydrogen mentioned in Dantas et al., where the original trial produced 50 times more symptoms than a subsequent trial with improved methodology. According to Sherr and Quirk, the problem here is not with the original trial, but with the improved one, which produced too few symptoms to constitute a usable proving.
Dantas et al. respond with a paper entitled 'We must distinguish symptoms caused by the medicine from other symptoms'. In this case, the title is probably an adequate response on its own.
Then Harald Walach has a paper in response to Sherr and Quirk, entitled "Potential nonlocal mechanisms make placebo controls in pathogenetic trials difficult". This, once again, is quantum gibberish being used to claim that placebo-controlled trials can't work for homeopathy, because of 'entanglement' between patient, practitioner and remedy. Not to put too fine a point on it, this is bollocks, because entanglement has not been observed for systems containing more than a few particles. This is just homeopaths trying to find a way out of all the negative placebo-controlled trials of homeopathy. The attempts by those sympathetic to homeopathy to explain it via quantum mechanics are taken apart in some detail on shpalman's blog here and also here. My favourite bit of Walach's response is this: "It is a well-known lore of homeopathic proving that those in control groups, relatives, or even the pet dog may develop proving symptoms although they have not taken the remedy. This lore, although anecdotal and not scientific evidence at all, is valuable since it suggests that placebo controls might not be adequate". So, although this 'lore' is 'not scientific evidence at all' it is still valuable as evidence that placebo controls may be inadequate. Hm. Perhaps another explanation is that the proving symptoms recorded in the trial had nothing whatever to do with the homeopathic preparation being trialled, and so could be expected to be found in people (or dogs) not taking the preparation? That's why you do a placebo-controlled trial in the first place, and that's why quantitative data (as opposed to self-reporting) on symptoms are so important.
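To make the point about placebo controls concrete, here is a toy simulation of my own, with an invented background symptom rate, not anything taken from the papers discussed above. If 'proving symptoms' are just background noise, volunteers report them at much the same rate whether they take the remedy or a placebo, and only a controlled comparison exposes that.

```python
# Toy simulation with an assumed background symptom rate (invented for illustration).
import random

random.seed(1)
BACKGROUND_RATE = 0.3   # assumed chance that anyone reports a symptom anyway
N_PER_ARM = 100

def count_reports(n, rate):
    """Number of volunteers who report at least one symptom."""
    return sum(random.random() < rate for _ in range(n))

# Both arms draw from the same background rate, i.e. the remedy does nothing.
remedy_reports = count_reports(N_PER_ARM, BACKGROUND_RATE)
placebo_reports = count_reports(N_PER_ARM, BACKGROUND_RATE)

# Without a placebo arm, every report in the remedy group looks like a proving symptom.
print(f"Uncontrolled proving would log {remedy_reports} symptomatic volunteers.")
# With a placebo arm, the comparison shows there is nothing to explain.
print(f"Remedy arm: {remedy_reports}/{N_PER_ARM}, placebo arm: {placebo_reports}/{N_PER_ARM}")
```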
At the end of it all, you have to wonder what would happen if relatively sceptical authors such as those responsible for Dantas et al. started to address the results from meta-analyses that persistently show that homeopathic preparations have no benefit beyond placebo. Unfortunately, there seems to be no sign of this happening, as the authors conclude their paper by saying "As evidence accumulates for the efficacy and safety of homeopathy from rigorous clinical trials, there is an increasing need to investigate and develop valid methodologies for the experimental pillar of homeopathy—the homeopathic pathogenetic trial". Still, this drive towards better methodology may have unintended consequences. As we know from the Shang et al. meta-analysis in the Lancet, the better the methodology of your study, the more likely it is to show no effect beyond placebo for homeopathy.
References
Dantas, F., Fisher, P., Walach, H., Wieland, F., Rastogi, D.P., Teixeira, H., Koster, D., Jansen, J.P., Eizayaga, J., Alvarez, M.E.P., Marim, M., Belon, P. and Weckx, L.L.M. 2007. A systematic review of the quality of homeopathic pathogenetic trials published from 1945 to 1995. Homeopathy, 96: 4-16.
Dantas, F., Fisher, P., Rastogi, D.P., Teixeira, H., Eizayaga, J., Alvarez, M.E.P., Belon, P. and Weckx, L.L.M. 2007. Authors' response: we must distinguish symptoms caused by the medicine from other symptoms. Homeopathy, 96: 275-276.
Shang, A., Huwiler-Müntener, K., Nartey, L., Jüni, P., Dörig, S., Sterne, J.A.C., Pewsner, D. and Egger, M. 2005. Are the clinical effects of homoeopathy placebo effects? Comparative study of placebo-controlled trials of homoeopathy and allopathy. Lancet, 366: 726-732.
Sherr, J. and Quirk, T. 2007. Systematic review of homeopathic pathogenetic trials: an excess of rigour? Homeopathy, 96: 273-275.
Walach, H. 2007. Response: potential nonlocal mechanisms make placebo controls in pathogenetic trials difficult. Homeopathy, 96: 278.
Tuesday, 13 November 2007
Still more on memory of water
This morning I heard that my comments on Martin F Chaplin's article have been accepted for publication. The letter critiquing the Rao et al. paper that was jointly drafted by contributors to the JREF forums (I'm the third author) has also been accepted. Both should appear in the January issue of Homeopathy. So, to give the journal its due, it has not shied away from robust debate. This has doubled my publication record overnight, but I'm not sure if I should include these on my CV...
I expect that the authors of the original articles will have a reply published in the same issue. It will be interesting to see what they have to say.
Incidentally, an erratum to the Rao et al. paper has been published in the latest issue of Homeopathy. It deals with a referencing mistake, and is really the least of the problems with the paper, but it's something.
Monday, 12 November 2007
Sinai fieldwork diary, October/November 2007
LiDAR is a laser-based system for collecting what are essentially high-resolution digital elevation models (DEMs) of rock outcrops (it's actually designed for architecture and surveying work, but it works pretty well for us too). It's heavy and cumbersome, and we have to carry it around the desert. But we do get to look at some pretty fantastic geology too...
Despite having an enormous quantity of gear, the trip to
Now I need some logs, so that’s what I’m mostly going to be doing for the next four days.