Wednesday, 27 August 2008
What is Ayurvedic medicine? It's not something I know a lot about, but it's a traditional form of Indian medicine. Medicines are made up of herbs or mixtures of herbs, which can be combined with metals, minerals or animal products. The medicines that include metals, minerals or gems are known as rasa shastra medicines. As is typical for any form of alternative medicine, Ayurveda is described as being safe and natural by the various people who sell it: try a simple Google search to see what I mean. But how safe are these medicines really?
Well, according to Saper and colleagues, a good proportion of them are stuffed full of heavy metals. Of the 193 medicines they analysed, 20.7% contained detectable levels of lead, mercury or arsenic. Unsurprisingly, metals were more prevalent in the rasa shastra medicines, of which 40.6% had detectable levels of nasty metals. The authors then took the recommended doses for each of the metal-containing medicines, and calculated what the daily intake of heavy metals would be for someone taking the medicine according to the stated dose. They found that for ALL of the medicines that contained detectable metal levels, a person taking the recommended dose would exceed standards for the ingestion of heavy metals. For some of the medicines manufactured in India, ingestion of heavy metals exceeded the limits by up to 10,000 times.
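The intake calculation the authors performed is simple enough to sketch. The figures below are hypothetical illustrations of the arithmetic, not values taken from the Saper et al. paper, and the limit shown is an invented placeholder rather than any particular regulatory standard:

```python
# Hedged sketch: estimating daily heavy-metal intake from a medicine
# taken at its stated dose, and comparing it against a limit.
# All numbers here are hypothetical, for illustration only.

LEAD_LIMIT_UG_PER_DAY = 75.0  # hypothetical regulatory limit, micrograms/day

def daily_intake_ug(metal_concentration_ug_per_g, dose_g_per_day):
    """Daily metal intake (micrograms) from a medicine taken at the stated dose."""
    return metal_concentration_ug_per_g * dose_g_per_day

intake = daily_intake_ug(metal_concentration_ug_per_g=2000.0, dose_g_per_day=3.0)
times_over = intake / LEAD_LIMIT_UG_PER_DAY
print(f"Estimated intake: {intake:.0f} ug/day ({times_over:.0f}x the limit)")
```

The point is just that once you know the metal concentration and the recommended dose, exceeding an ingestion standard (in some cases by orders of magnitude) falls straight out of the multiplication.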
Pretty scary, eh? There are some caveats here. Firstly, these medicines were bought online: the authors concede that the results may not be similar for medicines given in consultation with an Ayurveda practitioner, or over the counter in western pharmacies. Even so, a previous study of medicines sold in Boston by Saper and colleagues showed that 20% contained lead, mercury or arsenic. So be careful: it is clear that while Ayurveda may be a 'natural' alternative to conventional medicine, it is not necessarily 'safe', as these case reports also testify. This is why conventional medicines are tested for safety before they can be sold. While there is no doubt that the evidence-based process of efficacy and safety testing can sometimes go wrong, it should be supported; the alternative is taking medicine of no proven benefit and with no understanding of the risks. This is too often what happens with inadequately regulated alternative treatments.
Tuesday, 26 August 2008
I've now got a plan to try and write something for that blog once a week. The first of these posts is up now, and addresses the question of abiogenic oil. I hope at least some of you will be vaguely interested enough to have a look. Future stuff will include some neat bits and pieces from my own research.
The parade is a pretty big event these days. It takes about an hour and a half to go past. Half the city seems to be in it, with the other half watching. My better half was sprinting around the parade route with her running club, the Manchester Frontrunners, so I was in town to watch.
As you might imagine, there was a small group of Christian fruitcakes just by the Town Hall, holding placards about how God destroyed Sodom and Gomorrah, and so on. This was never going to be popular, and they were surrounded by a phalanx of Manchester's finest. There was occasionally a chorus of boos as the more militant sections of the parade went past, but they were mainly ignored or just treated as part of the entertainment. Once the parade had gone past, the Christian loon-in-chief was shouting about how further Pride events would bring destruction upon Manchester. After all, God had punished New Orleans for its Mardi Gras celebrations. A small crowd of people were stood around, roaring with laughter at every new piece of nonsense, before gradually filtering away to various of Manchester's hostelries.
It seems like poetic justice that of all the people in Manchester on Saturday, the Christian loons were the only ones not having a good time.
Tuesday, 19 August 2008
Ullman claims that a re-analysis of Shang et al. has been accepted for publication in the Journal of Clinical Epidemiology. The only reference I can find to this study is this, where a study dated 2007, entitled "The conclusions on the effectiveness of homeopathy highly depend on the set of analysed trials" by R Ludtke and ALB Rutten is listed as being 'in press' in the Journal of Clinical Epidemiology.
Here's the list of articles in press in the Journal of Clinical Epidemiology. There is no sign of any such paper. Various searches fail to find any similar papers published anywhere else, or in earlier issues of the Journal of Clinical Epidemiology. The only thing I can find is a paper in Homeopathy called "‘Proof’ against homeopathy in fact supports Homeopathy", in which one Lex Rutten is credited as the first author. Whether this is the same Rutten I cannot say. The main point of the paper seems to be that if you add four positive trials to the Shang dataset, the result would be more positive. And they accuse Shang of cherry-picking. Two of the trials complained about were excluded [PDF] from the Shang meta-analysis: the Fisher et al. paper because it had an ineligible study design, and the Weisenauer and Gaus paper because no matching conventional trial could be found. Of the other two, one by Arnal-Laserre appears to be a French thesis of some description [EDIT: This is a French thesis: it was mentioned in the Cochrane review of "Homoeopathy for the induction of labour". Apparently, the reviewers could not obtain a copy of the thesis, which perhaps explains why Shang et al. did not include it], and the other by Maiwald et al. was not a placebo-controlled trial.
So, does this re-analysis exist, or is it just another figment of the collective homeopathic imagination? And if it ever does get published, is it likely that it will have anything useful to say?
Monday, 18 August 2008
Here's Ullman, a US-based homeopath, in an interview published on the website of Sue Young, a London-based homeopath. There's all kinds of drivel here, but there is an exchange on the Shang et al. meta-analysis of homeopathy (published in the Lancet in 2005) that particularly caught my eye, because it's full of absolute nonsense. Not only that, but Ullman has had his misconceptions about this paper explained to him in numerous places on the internet, including on this very blog. Here's what he and his interviewer, one Louise Mclean of the Zeus information service, had to say:
DANA: In fact there is a new study that is coming out shortly which is a re-analysis of the 2005 Lancet review of Shang. The researchers got it accepted in a major international journal of research. What they have finally done is what Shang didn’t do. He didn’t review ALL of the high calibre research but only a small part of it. He ignored comprehensive analysis entirely. I think he knew exactly what it was but he didn’t want to report on it, as it was too positive. Instead he only reported on trials with very large numbers of subjects because when you do that, most of those studies use one remedy for everybody without any degree of individuality.
LOUISE: We individualise.
DANA: We do individualise but sometimes the single remedy or the formulas will work for a broad number of people.
LOUISE: Like Mixed Pollen for hayfever.
DANA: That’s right or Oscillococcinum. But for some reason they did not include any of David Reilly’s research. http://www.bmj.com/cgi/content/abstract/321/7259/471
I don’t know why they ignored it.
LOUISE: It was too positive.
DANA: In fact they had a remark in the Shang article published in the Lancet, where they specifically made reference to trials on respiratory ailments and that the results were robust, but they said they couldn’t trust them because there were only 8 studies. But then again they based their entire analysis on 8 homeopathic studies and 6 conventional ones. So they can’t have it both ways and this new journal article in the Journal of Clinical Epidemiology which is ranked as one of the top international journals of reviews of research, has accepted the new studies.
Sigh. Why is this nonsense? Let me count the ways.
1. Shang et al. did in fact analyse ALL of the trials of homeopathy that met their inclusion criteria. This allowed them to establish, using statistical methods, that smaller trials and those of less robust methodology showed better results for homeopathy, because of bias. The good quality, large studies showed that homeopathy had no effect. This is the pattern you would expect to see if homeopathy is a placebo.
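The pattern described, smaller and weaker trials showing better results, is easy to demonstrate with a toy simulation. This is purely illustrative and is not a reconstruction of Shang et al.'s actual analysis: it just shows that if a treatment has no real effect but small unflattering trials tend not to get published, the published record makes small trials look positive while large ones hover around zero:

```python
# Toy simulation of small-study bias: true effect is zero, but small
# trials (n < 100) are 'published' only if they happen to look positive.
# Illustrative only; not Shang et al.'s method.
import random
import statistics

random.seed(42)

def simulated_trial(n):
    """Observed mean effect in a trial of size n when the true effect is zero."""
    return statistics.mean(random.gauss(0.0, 1.0) for _ in range(n))

def published_effects(n, trials=500):
    """Mean effect across the trials of size n that make it into print."""
    results = [simulated_trial(n) for _ in range(trials)]
    if n < 100:
        results = [r for r in results if r > 0]  # selective publication
    return statistics.mean(results) if results else 0.0

for n in (10, 50, 400):
    print(f"n={n:4d}: mean published effect = {published_effects(n):+.3f}")
```

Run this and the small trials show a distinctly positive mean effect while the large trials sit near zero, which is exactly the placebo signature the meta-regression in Shang et al. was designed to detect.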
2. Ah, individualisation. In fact, a number of the trials in the Shang study were of individualised homeopathy (including two of those that were considered large and of high quality). There was no evidence that individualised homeopathy was better than any other type of homeopathy (p=0.636). In any case, individualisation is only important when it suits Ullman, as seen when he says "We do individualise but sometimes the single remedy or the formulas will work for a broad number of people".
3. The meta-analysis not only included the Reilly paper in the BMJ that is linked to, but two other Reilly papers, as can be seen from the additional material [PDF] to the paper that is available online. This is contrary to Ullman's assertion that "for some reason they did not include any of David Reilly’s research".
4. The point that Shang et al. make about the 8 studies of respiratory ailments is that 8 studies was too few for a meta-analysis restricted to those studies to detect the bias that is revealed by an analysis of the complete dataset. The eight studies of homeopathy that Ullman wrongly claims Shang et al. "based their entire analysis on" were identified as the studies most likely to be free of bias, based on an analysis of the entire dataset. So the authors are not trying to have it both ways at all, and Ullman is comparing apples with oranges.
What I find particularly annoying about this is that Ullman and Mclean are essentially accusing Shang and his co-workers of research misconduct. What do they base this very serious accusation on? On a total misunderstanding of their paper, and a flat-out lie that they omitted research that was 'too positive', when that research was in fact included in the analysis. I am not a statistician, but the paper is not that difficult to understand, if you read it. Followers of Dana Ullman's career will not be surprised by his disingenuousness on this, I'm sure.
It seems that no matter how often I (and others, notably apgaylard) write about the persistent misrepresentation of the Shang paper, the homeopaths carry on regardless.
Friday, 15 August 2008
The latest offering is a paper by one R. Assumpção, entitled "Electrical impedance and HV plasma images of high dilutions of sodium chloride" and an accompanying editorial by Cyril Smith. Two sets of observations are presented: a set of Kirlian photographs and a set of measurements of the impedance of various homeopathic dilutions of NaCl.
What is Kirlian photography? When an object resting on a photographic plate is connected to a source of high voltage, an image is formed on the plate. Semyon Kirlian, whom the technique is named after, thought that such images might be compared to a human 'aura'. You can see how this would be attractive to various kinds of energy medicine fruitcakes. In the paper, Assumpção provides Kirlian photographs of homeopathic dilutions of an NaCl solution (6c, 12c, 24c and 30c) and of succussed (shaken) water without dilution. The images are certainly pretty, but what do they show? Not a lot, as far as I can tell. It isn't clear how such images should be interpreted, and all Assumpção really concludes is that different dilutions look different. This, in itself, is not a particularly exciting finding, in the absence of any hypothesis as to why that is and what it might mean.
What of the impedance data? Assumpção finds that impedance of the samples increases with dilution, up to about 12c (which is the point where Avogadro's constant suggests that there is unlikely to be any of the original NaCl remaining), and then decreases again slightly with further dilution. "This phenomenon is inexplicable in terms of conventional chemical theory" concludes Assumpção, a statement that really should be followed by "OMG!!!!11!1!1!!1!!!!!1".
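The "nothing left beyond about 12c" claim is simple arithmetic. Assuming, purely for illustration, that the mother tincture is a 1 mol/L NaCl solution, each centesimal (1:100) dilution cuts the solute concentration a hundredfold, and by 12c the expected number of formula units per litre drops below one:

```python
# Back-of-envelope check of why ~12c marks the point beyond which no
# solute is expected to remain. Assumes a 1 mol/L starting solution
# (an assumption for illustration, not a figure from the paper).

AVOGADRO = 6.022e23  # formula units per mole

def expected_molecules_per_litre(start_molar, c_dilutions):
    """Expected solute formula units per litre after n centesimal (1:100) dilutions."""
    return start_molar * AVOGADRO * (100.0 ** -c_dilutions)

for c in (6, 12, 24, 30):
    print(f"{c:2d}c: {expected_molecules_per_litre(1.0, c):.3g} molecules/L")
```

At 12c the expectation is about 0.6 molecules per litre, and at 24c or 30c it is vanishingly small, so any measured difference between those dilutions cannot be down to the NaCl itself.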
Well, I think I have a reasonable explanation for the results that causes no problems for conventional chemical theory. As expected, dilution of the NaCl solution causes an increase in impedance as ions are removed from the solution. But there is also a competing increase in the concentration of dissolved ions caused by the succussion (agitation) process, as atmospheric gases and other impurities are dissolved in the water. So the small decrease in impedance beyond 12c could result from this process. I'm more puzzled as to why the impedance of the distilled water increases when it is shaken, suggesting a decrease in the concentration of dissolved ions. I suspect that there is a conventional explanation here too, but Assumpção doesn't seem to be particularly curious about this. Is there a change in the ionic species in the water as it is shaken? Does shaking facilitate outgassing of the dissolved gases that remain in the distilled water? We don't know. This continues the homeopathic water research tradition of publishing things without any thought as to what factors (other than the magic of water) might have led to the results.
Of course, the other thing to note is that the paper only looks at one solute, NaCl. For homeopathy, what would be really interesting is if you could show that there was a difference between two different homeopathic substances at dilutions greater than 12c. Previous attempts to do this have ended in some embarrassment, as Smith acknowledges in the accompanying editorial, when he states "in the light of the controversy which has attended previous claims in this field, caution, and independent repetition of these results is required". There seems to be little sign of any such caution so far.
Finally, in Figure 5 of the paper, Assumpção omitted to label the impedance curves, showing which curve belongs to which dilution. The solution? Simply write the labels in, with a pen, afterwards. It is truly amateur hour.
Thursday, 14 August 2008
In the paper, the authors distinguish between several meanings of "homeopathy". Does it refer to the medicines? To consultations with a homeopath? To the principles of homeopathy? Or to the homeopathic therapeutic system as a whole? To be honest, I don't really care. But then the authors look at the definitions of 'efficacy' and 'effectiveness':
In clinical epidemiology ‘efficacy’ refers to ‘the extent to which a specific intervention, procedure, drug, regimen, or service produces a beneficial result under ideal conditions’. Evidence of efficacy is usually interpreted as requiring evidence from placebo-controlled randomised controlled trials (RCTs). Whereas ‘effectiveness’ is a ‘measure of the extent to which a specific intervention, procedure, drug, regimen or service, when deployed in the field in routine circumstances does what it is intended to do for a specific population’. Evidence of effectiveness requires pragmatic RCTs and well conducted observational studies of routine clinical practice.
The authors then argue that what NICE requires for interventions provided by the NHS is evidence of cost effectiveness, based on the clinical effectiveness of the intervention, rather than evidence of efficacy. Then they go on to look at the evidence.
The authors divide the evidence into experimental and observational evidence, where by experimental evidence they mean randomised, double-blind, placebo-controlled trials. This is where things go off the rails a bit. The authors suggest that looking at the 'active ingredient' of homeopathy as being the medicine given is far too reductionist; perhaps the active ingredient should be thought of as including "Six putative active ingredients... : patient's openness to the mind body connection, consultational empathy, in depth enquiry into bodily complaints, disclosure, the remedy matching process, homeopathic remedies". Not only that, but "Other authors have discussed the difficulties of separating out the effects of the homeopathic medicine from the consultation effects".
Now, this is interesting. It seems that what the authors are saying here is that much of the benefit that patients get from homeopathy is unrelated to the medicine they are given; it is related to the relationship between the homeopath and the patient, and to the consultation process. In medical science, there's a term for this: it's called the placebo effect. That's why you conduct placebo-controlled trials; because those effects exist, and you therefore need to separate them from the effect of the actual medicine that is given to the patients. That placebo-controlled trials are specifically designed to address this problem seems to have passed the authors by. Placebo-controlled trials of individualised homeopathy (i.e. the whole homeopathic package, including consultation) have been carried out, and show no better results than trials that did not include the consultation. But at least there is a recognition here that non-specific effects play a very large role in homeopathy.
The authors go on to look at what they call 'observational evidence', consisting of observational studies and case reports. The authors admit that this sort of evidence is "prone to many types of bias, it is regarded as weaker than experimental evidence, and is generally disregarded in systematic reviews of evidence". But they say "However, this type of evidence does acknowledge the homeopath i.e., focuses on treatment by a homeopath in contrast to the experimental evidence which focuses only on the homeopathic medicine".
This seems to be just the usual special pleading that 'my woo can't be tested by your cold-hearted science'. Then again, the authors do seem to be suggesting that the pills containing nothing are not as important as the context in which those pills are delivered. So where next? Unfortunately, the authors miss this opportunity by simply making some weak recommendations about terminology. But if the homeopathic community could admit that their drugs don't work, and sit down to figure out how they could strip out all the pseudo-science about energy medicine, water memory, quantum entanglement and so on, and how they could harness the power of the placebo effect, then what remained could conceivably be of some use.
Of course, that the homeopaths haven't managed to figure this out for themselves yet is a little surprising, given that this is what Ben Goldacre has been arguing for some time. It's also interesting to reproduce the last paragraph of the famous Shang et al. meta-analysis which was so damning of homeopathy:
"We emphasise that our study, and the trials we examined, exclusively addressed the narrow question of whether homoeopathic remedies have specific effects. Context effects can influence the effects of interventions, and the relationship between patient and carer might be an important pathway mediating such effects. Practitioners of homoeopathy can form powerful alliances with their patients, because patients and carers commonly share strong beliefs about the treatment’s effectiveness, and other cultural beliefs, which might be both empowering and restorative. For some people, therefore, homoeopathy could be another tool that complements conventional medicine, whereas others might see it as purposeful and antiscientific deception of patients, which has no place in modern health care. Clearly, rather than doing further placebo-controlled trials of homoeopathy, future research efforts should focus on the nature of context effects and on the place of homoeopathy in health-care systems."
Could the homeopathic and conventional medicine communities be moving towards common ground?