Sunday 18 January 2009

Jeremy Sherr and the ethics of CAM

As a geologist, I don't tend to have to worry about the ethics of research involving humans; I usually only deal with rocks. But in medical research, ethics are crucially important. In CAM research, the ethical dimension often seems to be lacking, which brings me to the case of Jeremy Sherr.

Jeremy Sherr is a prominent homeopath who claims to treat AIDS patients in Tanzania with homeopathy, based on the usual poor-quality anecdotal evidence and wishful thinking. He recently caused a stir in the badscience blogosphere with a blog entry in which he mused about conducting a trial of homeopathy in AIDS patients. The mooted trial is transparently unethical, as pointed out by a number of sceptical bloggers (notably in the comments on Sherr's blog, and at Gimpy's blog, here and here, and The Lay Scientist, here and here), and as we'll see below. Now, Sherr likes to edit his blogposts and delete comments, but let's have a look at what he had to say about the trial.

I am happy to go for a simple trial initially, treating AIDS patients who are not taking ARVs. There is no shortage of patients who, although they have been offered ARVs, have chosen not to take them, usually because of the serious and debilitating side effects. There are plenty of statistics on ARV treatment and patients with no treatment at all that we can compare to.


Why is this unethical? I would say for three reasons. Firstly, it is a general principle of medical ethics that patients in a clinical trial should not be denied proven treatments for their condition. Clearly, in this trial AIDS patients would be denied ARVs. Sherr seems to think that this is OK because his patients have decided not to take ARVs themselves, but this is, I think, irrelevant. You would still be running a trial in which the subjects are not receiving the best possible standard of care. The issue of informed consent is also critical here; patients would need to be informed that not taking ARVs could be severely damaging to their health.

Secondly, the trial as mooted will not provide any useful information, because there is no control group. Whatever happens in the trial, it would be impossible to say whether it happened because of homeopathy or because of bias in the trial design. Since the trial could not provide any usable information, it would be unethical.

Thirdly, I would say we have enough evidence and knowledge about homeopathy to say that it is not going to cure AIDS. Given that there is no likelihood of a true positive result, it is unethical to involve patients in a clinical trial. Informed consent comes into play again here: patients in the trial ought to be told that the current evidence shows that there's essentially no chance of homeopathy having any beneficial effects beyond placebo.

Sherr says, in the recent blog post in which he calls the waaaaambulance over the criticism he has received, that "Any research I may undertake will be subject to rigorous ethical review of the highest standard". Hopefully that will in fact happen, in which case the mooted trial will surely not go ahead. What is disturbing is that Sherr has stated in the past, referring to research protocols and ethics review, that "You have to find willing partners and get a protocol through an ethics committee, and you need to talk their language. I hope it will work but if not, I will just go and do it on a small scale myself - I am determined to do that". This is the most telling comment, I think: it makes it clear that Sherr is not really interested in medical ethics, except as a hoop he must reluctantly jump through in order to experiment on terminally ill patients. And if he can't get ethics approval, he'll just do it anyway.

So much for Sherr. Disturbingly, however, he seems not to be anomalous in CAM circles in his total lack of any sense of ethics. A review of a book "Complementary and Alternative Medicine: Ethics, the Patient, and the Physician" has just been posted on the Science Based Medicine blog. The reviewer comments that "We do not read a word about how to approach a patient who has suffered damage due to CAM, or how to approach those who have stopped their regular treatment" [emphasis mine]. One would have thought that this would be a key issue for any book purporting to address ethics in CAM.

As one of the commenters on the Science Based Medicine piece astutely points out, CAM is a "deprofessionalization phenomenon". Researchers in the field of CAM seem to have no idea about research ethics, and no idea about the linked issue of how to conduct good research. If a text on ethics in CAM is so careless of these important ethical questions, how can we expect CAM practitioners to be any more careful?

UPDATE: There is now an excellent and comprehensive post on the Sherr saga at Respectful Insolence...

Saturday 10 January 2009

Yet one more radically pointless homeopathy study

Generally, when the efficacy of a medical intervention is being assessed, you expect the quality of the trials and evidence available to increase through time. You might start out with uncontrolled case reports and observational studies, proceed through small-scale pilot studies, to large randomised placebo-controlled trials, and finally to systematic reviews and meta-analyses of the literature.

In CAM research this pattern is often not followed. Once systematic reviews and meta-analyses start to show that there is no evidence that the CAM treatment works, more small trials of poor methodology are conducted, many of which inevitably have (spurious) positive results. This allows CAM advocates to claim that there is lots of evidence in favour of their intervention, because they don't bother to account for study size and quality.
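To see why ignoring study size and quality matters, here is a toy illustration in Python. The numbers are invented purely for illustration: a simple average treats every trial equally, whereas the inverse-variance weighting used in a proper meta-analysis lets a large, precise trial dominate a handful of small, noisy ones.

    # Toy illustration with invented numbers: effect estimates from three trials,
    # where "se" is the standard error (smaller = larger, more precise study).
    studies = [
        {"name": "large RCT",     "effect": 0.02, "se": 0.05},
        {"name": "small trial A", "effect": 0.60, "se": 0.40},
        {"name": "small trial B", "effect": 0.55, "se": 0.45},
    ]

    # Naive pooling: every study counts equally, so the small positive trials
    # drag the average well away from zero.
    unweighted = sum(s["effect"] for s in studies) / len(studies)

    # Fixed-effect (inverse-variance) pooling: each study is weighted by 1/se^2,
    # so the large, precise trial dominates and the pooled effect stays near zero.
    weights = [1 / s["se"] ** 2 for s in studies]
    weighted = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)

    print(f"unweighted mean effect:         {unweighted:.2f}")
    print(f"inverse-variance pooled effect: {weighted:.2f}")

A string of small, biased trials can therefore look impressive if you simply count positive results, while adding essentially nothing to the properly weighted evidence.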

There is a fine example of this CAM tradition in the latest issue of Homeopathy. A systematic review, published in 2006, of homeopathy for treatment of allergic rhinitis concluded that "Some positive results were described in rhinitis with homeopathy in good-quality trials, but an equal number of negative studies counterbalance the positive ones. Therefore it is not possible to provide evidence-based recommendations for the use of homeopathy to treat allergic rhinitis, and further randomized controlled trials are needed". Well, perhaps: some would argue that the prior probability (close to nil) and currently existing evidence are enough to conclude that homeopathy does not work for allergic rhinitis (or, indeed, anything else). Be that as it may, it should be clear that the only useful new evidence would come from large and well-conducted RCTs. So what do Maria Goossens and a football team of colleagues do in the latest issue of Homeopathy? Why, publish a "prospective, open, non-comparative study" on homeopathy and allergic rhinitis, of course.

The methodology of the study consists of "treating" some patients suffering symptoms of allergic rhinitis with homeopathy, and getting them to fill in a quality of life questionnaire at the start of the study and again after three weeks and four weeks. The physicians involved also assessed the severity of symptoms at baseline, three weeks, and four weeks. Unsurprisingly, the study found that people felt better with homeopathic treatment. But the methodological problems with this study are straightforward to point out. There is no control group. As a result, there can be no randomisation or blinding. Don't take my word for it; here's what the authors say in the discussion of their paper:

We did not distinguish between intermittent and persistent allergic rhinitis. All patients with intermittent allergic rhinitis (symptoms present less than four consecutive weeks a year) will be better after four weeks without any treatment. Patients who consult a homeopathic physician for allergic rhinitis usually have been suffering for a long time and from severe symptoms as the high level of the RQLQ score at baseline indicates. This study cannot be conclusive because there is no control group. Neither the physician, nor the patient was blinded. We cannot conclude that the degree of certainty of the physician about the appropriateness of the homeopathic prescription of a homeopathic remedy and the physician’s impression whether he had sufficient information about the patient’s condition influenced the outcome...it is not possible to draw a conclusion on the effect of the homeopathic treatment. This would require an RCT. To evaluate the effect of homeopathic treatment for allergic rhinitis an RCT should be performed.


So there you have it. The study cannot come to any useful conclusions. And, in the introduction to the paper, the authors write "This study was originally considered as a preliminary to a Randomized Clinical Trial (RCT) comparing standard conventional therapy with homeopathy (non-inferiority study). The RCT was never performed because sponsorship was withdrawn".

OK, that's life. Sometimes planned research funding fails to come off. These things happen. But why then publish the pilot study? Methodologically, it is useless, and it could never have added anything to the previously existing evidence from RCTs and systematic reviews. This study would never have been published anywhere other than a CAM journal, where scientific usefulness can take a back seat to an ideological desire to publish any evidence that looks as though it is in favour of homeopathy, no matter how methodologically weak it is, and in defiance of the higher level evidence that already exists.

Well, no-one is going to die from allergic rhinitis, so how much does it matter? The problem is that homeopaths don't stop at self-limiting conditions like hayfever. Some insist that homeopathy is a complete system of medicine and it can cure anything, including AIDS and malaria. Ben Goldacre's miniblog points to Jeremy Sherr's blog, for example, where Sherr is preparing to begin an unethical experiment on AIDS sufferers. This is a long road of madness, to be sure, but it begins where people believe they can cure hayfever through the use of magic sugar pills.

Friday 9 January 2009

Does the REF add up to good science?

The results of the 2008 RAE (Research Assessment Exercise) were published back in December. You might have noticed this from the number of university websites that could be found frantically spinning the results. My very own University of Manchester, for example, is claiming that Manchester has broken into the “golden triangle” of UK research, that is, Oxford, Cambridge and institutions based in London. It seems that, depending on the measure you pick, we’re anywhere between third and sixth place in the UK. Clearly these are excellent results, but whether we’re really up there with the Oxfords, Cambridges, Imperials and UCLs of the world I’m not sure.

In any case, that was the last ever RAE. It was a fairly cumbersome process, involving expert peer review of the research output of institutions, and it placed a real burden on the academics who had to administer it. I’m sure there are few who will mourn its passing. Now the world of English academia is waiting, like so many rats in an experimental maze, to find out what will replace the RAE. The replacement will be a thing called the Research Excellence Framework, or REF, and at this stage the details of exactly what it will involve are fairly sketchy. However, it will be based on the use of bibliometrics (statistical indicators that are usually based on how much published work is cited in other publications) and “light-touch peer review”.

What kind of bibliometric indicators are we talking about? Last year HEFCE (the Higher Education Funding Council for England, the body that evaluates research and decides who gets scarce research funding) published a “Scoping study on the use of bibliometric analysis to measure the quality of research in UK higher education institutions”, produced by the Centre for Science and Technology Studies at the University of Leiden, Netherlands. I’ve spent a fair amount of time reading through this, and in some ways I was encouraged. It’s clear that some thought has gone into creating bibliometric indicators that are as sensible as possible: I was dreading a crude approach based around impact factors, which have already done so much damage to the pursuit of good science.

The authors of the “scoping study” came up with an “internationally standardised impact indicator”, which I will abbreviate as ISII for concision. The ISII takes the average number of citations per publication for the academic unit you are interested in (this might be a research group, an academic department or an entire university), and divides it by a weighted, field-specific international reference level. The reference level is calculated by taking the average number of citations for all publications in a specific field; if a publication falls under more than one field (as many will in practice), the reference level can be calculated as a weighted average across all the fields in question. So, if the ISII for your research group comes out at 1, you’re average; if above 1, better than average; and if below 1, worse than average. The authors of the scoping study say that they regard the ISII as “the most appropriate research performance indicator”, and suggest that a value of >1.5 indicates a scientifically strong institution. They also suggest a threshold of 3.0 to identify research excellence.

It seems that HEFCE is expecting to adopt the ISII as the main research performance indicator, according to their FAQs, where they say “We propose to measure the number of citations received by each paper in a defined period, relative to worldwide norms. The number of citations received by a paper will be 'normalised' for the particular field in which it was published, for the year in which it was published, and for the type of output”. However, they are still deciding what thresholds they will use to decide which institutions are producing high-quality research.
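To make the arithmetic concrete, here is a minimal sketch of the calculation as I understand it from the scoping study. This is my own illustration, not anything published by HEFCE or the study's authors; the function name and the numbers are invented, and the real indicator also normalises for publication year and output type, which I've left out.

    # Minimal sketch of the field-normalised indicator described above.
    # Function name and numbers are invented for illustration only.
    def isii(unit_citations, field_means, field_weights=None):
        """unit_citations: citation counts for the unit's papers.
        field_means: world-average citations per paper for each relevant field.
        field_weights: optional weights if the unit publishes across several fields."""
        unit_mean = sum(unit_citations) / len(unit_citations)
        if field_weights is None:
            field_weights = [1.0] * len(field_means)
        reference = sum(w * m for w, m in zip(field_weights, field_means)) / sum(field_weights)
        return unit_mean / reference  # 1.0 = world average; >1.5 "strong"; >3.0 "excellent"

    # A hypothetical group averaging 12 citations per paper, in a field whose
    # world average is 8 citations per paper, scores 12 / 8 = 1.5.
    print(isii([10, 14, 12], [8.0]))  # 1.5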

All well and good. If you insist that bibliometric indicators are necessary, this is probably as good a way as any of generating those data. However, there are some problems here, as well as philosophical difficulties with the entire approach.

Firstly, what is it we are trying to measure? In theory, what HEFCE wants to do is evaluate research quality. But the ISII does not directly measure research quality. Like any indicator based on citation rates, it measures the “impact” of the research: how many other researchers published papers that cited it. It ought to be clear that while this should reflect quality to some degree, there are significant confounding factors. For example, research done in a highly active topic is likely to be cited more than research in a topic where fewer groups are working. This does not mean that work in less active topics is of intrinsically lower quality, or even that it is less useful.

Secondly, there is an assumption that the be-all and end-all of scientific research is publication in peer-reviewed journals that are indexed in the Web of Science citation database published by Thomson Scientific. This is a proprietary database that lists articles in the journals that it indexes, and also tracks citations. Criteria for journals to be included are not in the public domain (although the scoping report suggests these are picked based on their citation impact, p. 43). A number of journals that I would not consider to be scientifically reputable are included. For example, under the heading of Integrative and Complementary Medicine, the 2007 Journal Citation Reports (a database that compiles bibliometric statistics for journals in the citation database) includes 12 journals, including Evidence Based Complementary and Alternative Medicine (impact factor 2.535!) and the Journal of Alternative and Complementary Medicine (impact factor 1.526). This reinforces the point made above: it would be possible to publish outright quackery in either of these journals, have it cited by other quacks in the quackery that they publish, and get a respectable rating on the ISII. The ISII can’t tell you that this is a vortex of nonsense: it only sees that other authors have cited the work.

It is also true that not all journals are included in the citation index: for example, in my own field the Bulletin of Canadian Petroleum Geology fails to make the cut, although it has always published good-quality research. Although the authors of the scoping report make clear that it is possible to expand bibliometrics beyond the citation database, this will take much more effort and it seems that HEFCE will not take this route. So we will be relying on a proprietary and opaque database to make decisions on future research funding. A further point is that it is not clear how open access publications will be incorporated in the citation index: in principle there is no reason that this can’t happen, but can we be sure it will?

Thirdly, there is the assumption that research output can only be evaluated in terms of published articles in peer-reviewed journals. I’m not sure that this accurately reflects the actual research output of many scientists. For example, most of us put a lot of effort into presentations at scientific conferences, chapters in books, or government reports that will never make it into a citation database. This has become a problem for things like, in my own field, the special publications of the Geological Society of London. These are volumes that collect recent research on specific topics, and they generally contain excellent research. But they aren’t included in citation databases and they have no impact factor. This has led to a lack of interest in publishing results in these special publications, because they don’t tick the right boxes in terms of publication metrics. This is surely a bad thing. A similar problem occurs with things like government open-file reports. These are not, in general, pieces of world-class, cutting edge research. But that does not mean that they are useless or that they have no value. For example, good regional geological work can allow mineral exploration to be better targeted, benefiting the local economy. Yet that kind of work is ignored in a framework that only considers journal articles: HEFCE says only that “We accept that citation impact provides only a limited reflection of the quality of applied research, or its value to users. We invite proposals for additional indicators that could capture this”. To me, research quality and value cannot be measured by bibliometric indicators. They can only be evaluated by reading the research, understanding its context within the totality of pre-existing research, and seeing how it contributes to new understanding. That is, they can only be evaluated through peer review.

Which brings me to my fourth point: there are some questions about the role of peer review within the REF. HEFCE says that “the scoping study recommends that experts with subject knowledge should be involved in interpreting the data. It does not recommend that primary peer review (reading papers) is needed in order to produce robust indicators that are suitable for the purposes of the REF”. However, I’m not convinced that this accurately summarises what is written in the scoping report, which says “In the application of indicators, no matter how advanced, it remains of the utmost importance to know the limitations of the method and to guard against misuse, exaggerated expectations of non-expert users, and undesired manipulations by scientists themselves…Therefore, as a general principle we state that optimal research evaluation is realised through a combination of metrics and peer review. Metrics, particularly advanced analysis, provides the tools to keep the peer review process objective and transparent. Metrics and peer review both have their strengths and limits. The challenge is to combine the two methodologies in such a way that the strengths of one compensates for the limitations of the other”.

Finally, there is a hint of conflict of interest in the preparation of the scoping report by the Centre for Science and Technology Studies: according to their website, the centre is involved in selling "products" based on its research and development in the area of bibliometric indicators. Their report in favour of bibliometric indicators might allow them to drum up significant business from HEFCE.

At present, the proposals for the REF are at a fairly early stage, but the use of bibliometric indicators seems to be entrenched, and there will be a pilot exercise on bibliometric indicators this year. However, this is based on “expert advice” that consists of a single report from an organisation that makes money by creating bibliometric indicators. While academia in general might welcome the proposals on the grounds that they will be less burdensome than the RAE and give everyone more time to do research, I don’t think many academics will be kidding themselves that the bibliometric indicators involved actually tell us much about research quality and usefulness.