Wednesday 29 July 2009

Simon Singh: "Beware the spinal trap"

As many of you will no doubt be aware, science writer Simon Singh is being sued by the British Chiropractic Association (BCA). The BCA objected to a piece that Singh published in the Guardian, in which he pointed out that chiropractic has little or no evidence base, particularly when used to treat anything other than back problems, and that it can be dangerous. After Singh was sued, the Guardian took down the article pending the legal proceedings. As part of its campaign to "Keep the libel laws out of science", Sense About Science is asking bloggers to re-publish an edited version of the article, because scientific arguments should not be settled in the courtroom. I'm glad to do my bit.

Beware the spinal trap

Some practitioners claim it is a cure-all, but the research suggests chiropractic therapy has mixed results - and can even be lethal, says Simon Singh.

You might be surprised to know that the founder of chiropractic therapy, Daniel David Palmer, wrote that '99% of all diseases are caused by displaced vertebrae'. In the 1860s, Palmer began to develop his theory that the spine was involved in almost every illness because the spinal cord connects the brain to the rest of the body. Therefore any misalignment could cause a problem in distant parts of the body.

In fact, Palmer's first chiropractic intervention supposedly cured a man who had been profoundly deaf for 17 years. His second treatment was equally strange, because he claimed that he treated a patient with heart trouble by correcting a displaced vertebra.

You might think that modern chiropractors restrict themselves to treating back problems, but in fact some still possess quite wacky ideas. The fundamentalists argue that they can cure anything, including helping treat children with colic, sleeping and feeding problems, frequent ear infections, asthma and prolonged crying - even though there is not a jot of evidence.

I can confidently label these assertions as utter nonsense because I have co-authored a book about alternative medicine with the world's first professor of complementary medicine, Edzard Ernst. He learned chiropractic techniques himself and used them as a doctor. This is when he began to see the need for some critical evaluation. Among other projects, he examined the evidence from 70 trials exploring the benefits of chiropractic therapy in conditions unrelated to the back. He found no evidence to suggest that chiropractors could treat any such conditions.

But what about chiropractic in the context of treating back problems? Manipulating the spine can cure some problems, but results are mixed. To be fair, conventional approaches, such as physiotherapy, also struggle to treat back problems with any consistency. Nevertheless, conventional therapy is still preferable because of the serious dangers associated with chiropractic.

In 2001, a systematic review of five studies revealed that roughly half of all chiropractic patients experience temporary adverse effects, such as pain, numbness, stiffness, dizziness and headaches. These are relatively minor effects, but the frequency is very high, and this has to be weighed against the limited benefit offered by chiropractors.

More worryingly, the hallmark technique of the chiropractor, known as high-velocity, low-amplitude thrust, carries much more significant risks. This involves pushing joints beyond their natural range of motion by applying a short, sharp force. Although this is a safe procedure for most patients, others can suffer dislocations and fractures.

Worse still, manipulation of the neck can damage the vertebral arteries, which supply blood to the brain. So-called vertebral dissection can ultimately cut off the blood supply, which in turn can lead to a stroke and even death. Because there is usually a delay between the vertebral dissection and the blockage of blood to the brain, the link between chiropractic and strokes went unnoticed for many years. Recently, however, it has been possible to identify cases where spinal manipulation has certainly been the cause of vertebral dissection.

Laurie Mathiason was a 20-year-old Canadian waitress who visited a chiropractor 21 times between 1997 and 1998 to relieve her low-back pain. On her penultimate visit she complained of stiffness in her neck. That evening she began dropping plates at the restaurant, so she returned to the chiropractor. As the chiropractor manipulated her neck, Mathiason began to cry, her eyes started to roll, she foamed at the mouth and her body began to convulse. She was rushed to hospital, slipped into a coma and died three days later. At the inquest, the coroner declared: 'Laurie died of a ruptured vertebral artery, which occurred in association with a chiropractic manipulation of the neck.'

This case is not unique. In Canada alone there have been several other women who have died after receiving chiropractic therapy, and Edzard Ernst has identified about 700 cases of serious complications among the medical literature. This should be a major concern for health officials, particularly as under-reporting will mean that the actual number of cases is much higher. If spinal manipulation were a drug with such serious adverse effects and so little demonstrable benefit, then it would almost certainly have been taken off the market.

Simon Singh is a science writer in London and the co-author, with Edzard Ernst, of Trick or Treatment? Alternative Medicine on Trial. This is an edited version of an article published in The Guardian for which Singh is being personally sued for libel by the British Chiropractic Association.

Friday 24 July 2009

On peer review

If you spend a lot of time hanging around at sceptical websites, you'll be aware that when evidence is demanded in support of a claim, it is usually demanded in the form of a peer-reviewed journal article. The suggestion is that peer review provides some sort of check on quality: any peer reviewed paper should meet a certain minimum standard. But is this really the case?

First of all, what does it mean to say that an article is peer reviewed? It means that the article has been checked over by scientists who work in a field close to the article's subject. The article will usually be sent to two or three reviewers, who will each read the paper and submit a report to the journal editor. The editor then decides whether the article merits publication.

On the face of it, this would seem to imply that certain standards are being met. But there is some evidence that this isn't necessarily the case. For example, this article (found via Ben Goldacre's miniblog) suggests that in Chinese journals, only 5.9-7.7% of supposed randomised controlled trials reported in peer-reviewed articles had adequate procedures for randomisation. A lack of adequate randomisation means there is a good chance of introducing bias into a trial, and it ought to be one of the first things a reviewer checks. While the article specifically addresses trials published in Chinese journals, I don't think there's any compelling reason to think that things are dramatically different in what we laughingly call the west. Anecdotally, anyone who spends time wading through journals as part of their day job will be able to come up with several examples of utterly dreadful papers that should never have been published. And this is without looking at pseudojournals, such as those that concentrate on complementary and alternative medicine, where articles on quackery are peer reviewed by other quacks.
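Incidentally, adequate randomisation is not hard to achieve: the usual standard is a computer-generated allocation sequence, concealed from the people recruiting participants. Just as an illustration (this is my own sketch in Python, not taken from any of the trials discussed), a blocked allocation for a two-arm trial takes only a few lines:

    # Illustrative sketch only: a computer-generated allocation sequence
    # for a two-arm trial, using randomly ordered blocks so the arms stay
    # balanced. A real trial would also conceal this sequence from
    # whoever recruits the participants.
    import random

    def block_randomise(n_participants, block_size=4, seed=None):
        """Return an allocation list ('treatment'/'control') of the requested length."""
        rng = random.Random(seed)
        allocation = []
        while len(allocation) < n_participants:
            block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
            rng.shuffle(block)  # random order within each block
            allocation.extend(block)
        return allocation[:n_participants]

    print(block_randomise(12, seed=42))

The point is not the code itself, but that "randomised" should mean something like this, rather than alternating patients or allocating them by day of the week.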

So, if peer review can't tell you whether a trial described as randomised is in fact randomised, what can it tell you? Does it really act as any kind of guarantee of minimum quality? I would suggest not.

That is not to say that peer review is useless as it stands. In my fairly limited experience, papers that I have submitted have always been significantly improved by peer review. But surely there's a way of making peer review "fit for purpose", to use the current jargon?

This post was prompted by a discussion at the Bad Science forum, where the idea of applying industrial-style quality assurance to journal articles was raised. This would mean giving reviewers some sort of checklist to work through, together with a check that it had actually been completed. Doing this informally would not be much use; there would need to be some formal way of doing it.

In fact, this is not too far from what already happens in many cases. I've just got a review back in which the reviewers have answered a series of yes/no questions (in addition to their more detailed written comments). For example, "Are the interpretations and conclusions sound and supported by the evidence provided?", and "Is the paper logically organised?". For the example of trials published in Chinese journals above, there could be a question like "Is the methodology appropriate for testing the specified hypotheses?". Again, there would have to be some check that this had been done adequately, and this is really what the journal editor should be doing. At present, I think the role of the editor is often too weak: they do little more than send out and receive reviews. This is probably not surprising, given that editors usually work more or less voluntarily and tend to have plenty of other things to do. It is not always the case, of course: there are many excellent editors who make a real effort to engage with the papers they are handling and with the reviewers' comments on them. But if the role of the editor were beefed up, so that editors spent time formally checking that reviews had been carried out adequately, then peer review might actually provide the quality guarantee that we seem to think it should.
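To make the idea concrete, here is a toy sketch (my own invention, not any journal's actual system) of how such a checklist might be formalised so that an editor, or the submission system, could verify it had actually been completed. The questions are the ones discussed above:

    # Toy sketch of a formalised reviewer checklist; not any journal's
    # actual system. The questions are those mentioned in the text.
    REVIEW_CHECKLIST = [
        "Is the methodology appropriate for testing the specified hypotheses?",
        "Are the interpretations and conclusions sound and supported by the evidence provided?",
        "Is the paper logically organised?",
    ]

    def checklist_complete(answers):
        """True only if every question has an explicit yes/no answer."""
        return all(answers.get(q) in ("yes", "no") for q in REVIEW_CHECKLIST)

    # An incomplete review would be flagged back to the reviewer or editor.
    review = {
        "Is the methodology appropriate for testing the specified hypotheses?": "no",
        "Is the paper logically organised?": "yes",
    }
    print(checklist_complete(review))  # False: one question left unanswered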

That might require actually paying editors and reviewers for their time. This would be a fairly radical step, but if it led to a professionalisation of the journal reviewing and editing process it would probably be a good thing. And if it led to a reduction in the number, and an increase in the quality, of papers published, that would not be a bad thing either.

Wednesday 1 July 2009

I get my name in the Veterinary Record

This is somewhat old news, but I haven't had the chance to write about it before. To add to the publications I have in Homeopathy, I now have one (as third author) in the Veterinary Record. This is starting to get silly; I'm supposed to be a geologist.

Perhaps unsurprisingly, this is related to a terrible homeopathy study [Hill et al., The Veterinary Record 164:364-370], this time on the treatment of skin conditions in dogs. It's another example of homeopaths continuing to do small, badly designed studies, when plenty of large, properly conducted studies, and systematic reviews and meta-analyses of those studies, show that homeopathy doesn't work. The letter I am involved in is one of three published criticising the study: they can be found, with the authors' reply, at The Veterinary Record 164:634-636 [apologies for the lack of links: there's no DOI for these that I can find]. There is also an excellent discussion of the paper, and some of the responses to it, over at JREF.

The design in this study is truly extraordinary. Initially, 20 dogs with skin problems were recruited to the study. All were treated with individualised remedies by a homeopath. In 15 cases, the dog owners reported no improvement. In 5 cases, the owners reported a significant improvement. Not looking good for homeopathy so far. Still, the five improved dogs were said to have responded well to homeopathy, and went on to phase 2, which was a proper randomised and blinded placebo-controlled trial. Unfortunately, one dog had to be euthanased before the trial could happen, and another dog's skin problems had resolved completely after the first stage, leaving only three dogs in phase 2. Supposedly, those dogs did better with homeopathy than with placebo, thus justifying, as ever, "further research".

This is possibly the easiest study to criticise that I've ever seen. Put simply, the first phase lacks a control group, so improvements cannot be attributed to homeopathy. There is simply no evidence that the five dogs recruited to phase 2 actually responded to homeopathy, rather than just improving spontaneously. The second phase of the trial then included only three dogs; there is no way to interpret the results of such a tiny, underpowered study. Those are the main problems, but there are others. For example, all the dogs were on some kind of conventional medication, so that cannot be ruled out as contributing to any improvement.
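To put a rough number on just how underpowered the second phase is: with only three dogs, even a perfect result cannot reach conventional statistical significance. This is my own back-of-the-envelope illustration, not the analysis used in the paper, and it assumes each dog is simply scored as doing better or worse on homeopathy than on placebo, analysed with an exact two-sided sign test:

    # Back-of-the-envelope illustration only, not the analysis in Hill et al.:
    # suppose each of the three dogs is scored as doing better or worse on
    # homeopathy than on placebo, and the result is analysed with an exact
    # two-sided sign test. Even 3 out of 3 favouring homeopathy cannot give
    # p < 0.05.
    from math import comb

    def sign_test_p(successes, n):
        """Exact two-sided sign test p-value under a 50:50 null (double the smaller tail)."""
        lower = sum(comb(n, k) for k in range(0, successes + 1)) * 0.5 ** n
        upper = sum(comb(n, k) for k in range(successes, n + 1)) * 0.5 ** n
        return min(1.0, 2 * min(lower, upper))

    print(sign_test_p(3, 3))  # 0.25 -- the smallest p-value this design can give

Whatever the details of the actual analysis, three dogs simply cannot support any conclusion.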

The only reasonable conclusion from the study is that there is no strong evidence that homeopathy did anything for the dogs in the trial. But the paper concludes that the improvement seen in the five dogs (which, again, cannot be attributed to homeopathy on the basis of this study) is enough to justify further research. No doubt the paper will also be spammed all over the internet by the likes of Dana Ullman, as proof positive that homeopathy works. Hopefully the letter I'm a co-author on, along with the two other letters critical of the study that were published, will go some way to addressing that. The signs are not good, though. The original Hill et al. paper included the statement that "Different homeopathic remedies and different dilutions of the same remedy have been distinguished from each other using Raman and infrared spectroscopy, even though all should contain nothing but water", with a reference to "Rao and others, 2007" [in fact, Rao et al. did not even claim that infrared spectroscopy showed any difference]. Regular readers will know that Rao and colleagues did nothing of the sort, and that to describe their paper as "discredited" would be something of an understatement. In the world of homeopathy, discredited papers never die. They are just recycled for use with audiences who don't know that they've been discredited. I suspect that this one will be no different.

As an aside, my favourite part of this study is that "constitutional signs" of each of the dogs, as used by the homeopath to pick a remedy, are listed [Table 2 of the paper]. For dog number 16, these are listed as:

Affectionate
Fears thunderstorms
Clairvoyant
Grief
Desires chicken; oranges aggravate


A clairvoyant dog! And this was published in a respected veterinary journal.