tag:blogger.com,1999:blog-44773853420662758972024-03-11T03:23:09.193+00:00Hawk/Handsaw"I am but mad north-north-west: when the wind is southerly I know a hawk from a handsaw." --Hamlet, Act II, scene ii.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.comBlogger139125tag:blogger.com,1999:blog-4477385342066275897.post-88402266471938636262010-08-25T14:47:00.004+01:002010-08-25T15:28:30.216+01:00Infinite costs wouldn't be NiceThere's been a bit of a furore over the last couple of days about a decision not to approve a drug for treating advanced bowel cancer, Avastin, on the grounds that it would not be cost-effective. This has led to a huge amount of criticism, and headlines ranging from the <a href="http://www.guardian.co.uk/society/2010/aug/24/avastin-too-expensive-for-patients">relatively</a> <a href="http://www.bbc.co.uk/news/health-11060968">sober</a> to the Mail's <a href="http://www.dailymail.co.uk/news/article-1305858/NICE-accused-talking-Avastins-power-save-lives-cancer-patients.html">hysterical</a> "Betrayal of the cancer patients: Rationing watchdog accused of talking down wonder drug's power to save lives".<br /><br />The decision (or guidance: in fact, the final decision has not yet been made, with the guidance subject to consultation and appeal) was made by the National Institute for Health and Clinical Excellence (Nice), which decides on the cost-effectiveness of treatments and whether they should be available to patients on the NHS. According to trial data submitted by the drug manufacturer, Roche, the drug can extend the mean lifespan of people diagnosed with advanced bowel cancer from 19.9 months on chemotherapy alone, to 21.3 months with chemotherapy and Avastin. This is by no means a miracle cure or wonder drug. It is a six-week gain, and it comes at a cost: up to £21,000 per patient.<br /><br />Barbara Moss, a survivor of bowel cancer, appears in both the <a href="http://www.dailymail.co.uk/news/article-1305858/NICE-accused-talking-Avastins-power-save-lives-cancer-patients.html">BBC</a> and <a href="http://www.guardian.co.uk/society/2010/aug/24/avastin-too-expensive-for-patients">Guardian</a> articles. The Guardian says:<br /><blockquote>Mrs Moss said the National Institute for Health and Clinical Excellence (Nice) had put a "value on life" after it said the price was too high for the extra benefit it gives patients</blockquote>While Mrs Moss is quoted in both pieces as saying:<br /><blockquote>It seems immoral to me that, as a result of negative NICE decisions like this one, people's choice of living or dying depends on whether they can afford a drug, because it isn't available to them on the NHS.</blockquote>It's difficult not to have some sympathy for this point of view. But someone has to put a "value on life" (or in fact, a thing called a <a href="http://en.wikipedia.org/wiki/Quality-adjusted_life_year">Quality-Adjusted Life Year</a>, QALY). Otherwise Roche could come up with a life-saving drug that cost £1 trillion per patient, and the NHS would have to fund it because you "can't put a value on life". Ultimately, the question is not just about Avastin. There is only a limited amount of money for the NHS to spend on drugs, so if Avastin gets funded something else (with probably greater benefit) doesn't get funded.
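<br /><br />It's worth putting rough numbers on this. Here's a back-of-the-envelope calculation using only the figures quoted above, taking the upper cost of £21,000 per patient. It's a crude sketch of mine: a proper QALY analysis would also weight the extra months by quality of life, which I haven't attempted.<br /><pre>
# Back-of-the-envelope cost per life-year gained, using the figures quoted above
# (19.9 vs 21.3 months mean survival; up to 21,000 GBP per patient). This is
# cruder than a real QALY calculation because it ignores quality-of-life weighting.
months_chemo_only = 19.9
months_with_avastin = 21.3
cost_per_patient_gbp = 21000

extra_years = (months_with_avastin - months_chemo_only) / 12.0
cost_per_life_year = cost_per_patient_gbp / extra_years

print("Extra survival: %.2f years" % extra_years)                    # about 0.12 years
print("Cost per life-year gained: %.0f GBP" % cost_per_life_year)    # roughly 180,000 GBP
</pre>That works out at very roughly £180,000 per life-year gained, before any quality-of-life adjustment, against the £20,000-£30,000 per QALY that Nice is usually reported to regard as good value.<br /><br />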
These are clearly not easy decisions to make, which is why we have Nice to make them in the first place.<br /><br />My point here is not that Nice have necessarily got this right (I'm not qualified to judge), or even that the QALY approach is the best possible one. The point is that Nice is doing a difficult, unpopular and necessary job. Accusing them of betraying cancer patients, when what they're actually trying to do is get the greatest benefit from a limited budget, is not really helpful to anyone.<br />Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com4tag:blogger.com,1999:blog-4477385342066275897.post-73291098681200378652010-08-24T11:28:00.003+01:002010-08-24T12:10:34.254+01:00Bad cycling: I'm the champion (but I'm not very fast)Last night was the Stockport Clarion club hill-climb, on the infamous <a href="http://en.wikipedia.org/wiki/Cat_and_Fiddle_Road">Cat and Fiddle road</a> between Macclesfield and Buxton. I <a href="http://hawk-handsaw.blogspot.com/2009/08/long-trip-to-closed-pub.html">rode</a> in this event last year, and had a good ride on a clear and dry day with a good tailwind. I had high hopes of bettering the 26:00 I did last year, but things didn't go exactly to plan.<br /><br />There's a <a href="http://www.maccinfo.com/cat/">webcam</a> attached to the side of the Cat and Fiddle pub, which looks over the finish of the course at the road summit. I had a look at it yesterday afternoon, and there wasn't much to see except cloud and massive wagons heading over to Buxton with their fog lights on. It didn't look like a lot of fun. Once I got home, I was having serious second thoughts about the whole enterprise as I got ready for the 18-mile ride to Macclesfield, with the rain rattling the windows in my flat. But I made myself head out, leaving in absolutely filthy weather and gnarly traffic for a fun rush-hour jaunt through Stockport. The weather gradually cleared up as I got towards Macclesfield, and I arrived with about ten minutes to spare before the 7pm start.<br /><br />Unfortunately, the start had been put back to 7:30, so I ended up waiting around in the cold for 40 minutes. Will, the timekeeper, didn't make it until just before 7:30 because he had to pick someone up from the airport. So, dispensing with numbers, the four riders who had shown up got going as soon as possible. The weather was now dry, but cool and windy. I decided to ride with lights in case it was cloudy at the summit. I went off number 2, and it was one of those days when you know it's going to be rough as soon as you set off. I hadn't had enough of a warm-up, and my legs just didn't want to have any of it. I didn't really get settled in until I was past Walker Barn and had done the most difficult bit. By then, the two guys behind me had already gone past, but I had caught up with my minute man. Emerging into the open country beyond Walker Barn, the gale-force tailwind came into play, and the middle section of the course was very fast. I even had to touch the brakes on some of the corners. Then the last climb onto the top: I almost got blown off the road on a cross-wind section, and struggled past the timekeeper in 26:36, 36 seconds slower than last year. But, with the only other Stockport Clarion rider being my minute man, who I had passed about two and a half miles earlier, I was the club hill-climb champion. 
This is nice, especially as I was several times hill-climb champion at my former club, Birdwell Wheelers, but there's no pretending that I was particularly quick. Several riders in the club would have put minutes into me had they ridden, but like the lottery, you have to be in it to win it. Jolan managed to capture a webcam image of us at the top: I'm the guy in the red shirt at left...<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFKdjrO5NAcLvora7EQhSsA1h2BTJaS8IaBAGIwr5sTqLjvRfgf_GMMW6ezXwEFakaHwKJ8sgzMis6tyr1k5hgJ_y3Djl0tAaa9GNfznanIiGAg21894HYeA7NniA2YMki1C9zQ7dHnhF1/s1600/Cat_fiddle.jpg"><img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 400px; height: 300px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFKdjrO5NAcLvora7EQhSsA1h2BTJaS8IaBAGIwr5sTqLjvRfgf_GMMW6ezXwEFakaHwKJ8sgzMis6tyr1k5hgJ_y3Djl0tAaa9GNfznanIiGAg21894HYeA7NniA2YMki1C9zQ7dHnhF1/s400/Cat_fiddle.jpg" alt="" id="BLOGGER_PHOTO_ID_5508929617446463186" border="0" /></a><br />My reward was a long, windy ride home, mainly in darkness, but I got to see the sun go down over Fiddlers Ferry power station as I headed back down into Macclesfield.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com1tag:blogger.com,1999:blog-4477385342066275897.post-4752738885009515772010-08-17T11:47:00.003+01:002010-08-17T12:07:00.889+01:00Bad cycling: I get round in 27:45Following my <a href="http://hawk-handsaw.blogspot.com/2010/01/bad-cycling-new-years-day-10.html">New Year's Day adventures</a>, I haven't done as much bike racing as I hoped I would this year. I was away in Norway all of June for work, and then the <a href="http://www.stockport-clarion.org.uk/index.html">Stockport Clarion</a> evening time-trials took a break during July. However, the last three Monday evenings I've been out on the Chelford 10-mile course, in an attempt to get some race fitness for next week's hill climb on the Cat and Fiddle. My aim was to get under 28 minutes for 10 miles, which is pretty modest. After a horrible day in the first event, where I was recovering from a migraine and should probably have stayed at home, I clocked 29:30 and wasn't optimistic. But the following week was better with 28:24, and last night I cracked it, just about, with 27:45. For the first time in these 10s, there were actually a couple of people slower than me. It was a good night, dry and sunny with a light headwind on the slightly longer outward leg, and times were generally fast-ish. There was a 21-minute ride, which is pretty quick for such a slow course.<br /><br />Coming into Chelford on the homeward leg, there was an electronic sign that gives a read-out of your speed. I was grovelling up the slight incline at a bare 21 mph, so the days when I could get under 25 minutes seem long ago. One of the other riders went through at 27 mph. 
Still, I felt good, and I'm looking forward to the Cat and Fiddle next week.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com5tag:blogger.com,1999:blog-4477385342066275897.post-38700704571498632882010-08-17T11:09:00.002+01:002010-08-17T11:46:05.254+01:00"Dr" Nancy Malik is spamming my blog again...In the comments to a <a href="http://hawk-handsaw.blogspot.com/2010/01/observer-ignores-evidence-on-homeopathy.html">couple</a> of <a href="http://hawk-handsaw.blogspot.com/2009/01/yet-one-more-radically-pointless.html">old</a> posts, homeopathic apologist and internet numpty <span style="font-style: italic;">extraordinaire</span> Nancy Malik says the following:<br /><blockquote>Studies in support of Homeopathy published in reputed journals<br /><br />1. Scientific World Journal<br />http://www.ncbi.nlm.nih.gov/pubmed/17982565<br /><br />2. Lancet<br />http://www.ncbi.nlm.nih.gov/pubmed/9310601<br /><br />3. Neuro Psycho Pharmacology<br />http://www.nature.com/npp/journal/v27/n2/abs/1395862a.html // Bacopa Monnieri for memory </blockquote>I thought I'd have a quick look and explain why they're nonsense. Unfortunately, this hasn't proven to afford much in the way of intellectual exercise.<br /><br />The <a href="http://www.ncbi.nlm.nih.gov/pubmed/17982565">first</a> paper is by Graunke et al., and concerns, I kid you not, the treatment of tadpoles with homeopathic thyroxin. This is a well-known bad homeopathy paper. The tadpoles in the treatment group were more developed than those in the control group at the start of the experiment, so it wasn't much of a surprise that they were more developed at the end too. There is more discussion of this dreadful rubbish <a href="http://forums.randi.org/showthread.php?t=99751&page=7">here</a>.<br /><br />The <a href="http://www.ncbi.nlm.nih.gov/pubmed/9310601">second</a> paper is the famed Linde et al. meta-analysis, published in 1997. While this paper does say "<span>The results of our meta-analysis are not compatible with the hypothesis that the clinical effects of homeopathy are completely due to placebo", there are some other things to bear in mind:<br /><br />1. The paper also says "</span><span>However, we found insufficient evidence from these studies that homeopathy is clearly efficacious for any single clinical condition".<br /><br />2. A <a href="http://www.ncbi.nlm.nih.gov/pubmed/10391656">1999 paper</a> by the same authors, using improved methodology and including new trials, states that "It seems...likely that our meta-analysis at least overestimated the effects of homeopathic treatments".<br /><br />3. A subsequent meta-analysis by <a href="http://www.ncbi.nlm.nih.gov/pubmed/16125589">Shang et al.</a>, published in the Lancet in 2005, using further improved methodology concluded that the results were compatible with homeopathy being a placebo.<br /><br />Finally, the <a href="http://www.nature.com/npp/journal/v27/n2/full/1395862a.html">third study</a>, by Roodenrys et al. in the journal <span style="font-style: italic;">Neuropsychopharmacology</span>, is not about homeopathy at all, but rather about herbal medicine. In homeopathy, remedies are typically diluted such that it is very unlikely that they contain any of the original material: there is no active ingredient. 
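<br /><br />The arithmetic behind "no active ingredient" is worth spelling out. Here's a rough sketch for a 30C potency, which has been diluted 1:100 thirty times over (30C is just a typical potency I've picked for illustration, not one specified by any of the papers above):<br /><pre>
# Rough arithmetic behind "there is no active ingredient" in a homeopathic remedy.
# A 30C potency has been diluted 1:100 thirty times, a factor of 10^60. Even if
# you started with a whole mole of the original substance, the expected number of
# molecules surviving into the final remedy is effectively zero.
AVOGADRO = 6.022e23          # molecules in one mole
dilution_factor = 100 ** 30  # 1e60 for a 30C potency
expected_molecules = AVOGADRO / dilution_factor
print("Expected molecules remaining: %.1e" % expected_molecules)  # about 6e-37
</pre>In other words, you would need something like 10^36 doses before you could expect to find a single molecule of the starting material.<br /><br />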
In the Roodenrys study, what is being tested is brahmi, an Indian herb, of which the paper says: </span><br /><blockquote>Studies have shown that the herb contains many active constituents, including a number of alkaloids and saponins, however, the major constituents are the steroidal saponins, Bacosides A and B.</blockquote>So it isn't entirely surprising that brahmi might have some effect.<br /><br />From this fairly cursory glance at the studies provided by Nancy Malik, it's clear that she is from the Dana Ullman school of evaluating journal articles. This involves finding some papers that superficially appear to support your position, and then spamming them all over the internet. Luckily, for this approach there is no need to understand the articles, or even to read them. For people who think that magic water is medicine, that would be rather too much to expect.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com65tag:blogger.com,1999:blog-4477385342066275897.post-75177583174026022852010-02-03T21:18:00.002+00:002010-02-03T21:38:45.425+00:00Private Eye still believes in Wakefield...As discussed in my <a href="http://hawk-handsaw.blogspot.com/2010/02/its-literature-integrity-week.html">previous post</a>, and <a href="http://scienceblogs.com/insolence/2010/02/the_martyrdom_of_st_andy.php">many</a> <a href="http://www.badscience.net/2010/01/the-wakefield-mmr-verdict/"></a><a href="http://www.badscience.net/2010/01/the-wakefield-mmr-verdict/">other</a> <a href="http://www.timesonline.co.uk/tol/news/uk/health/article7009882.ece">places</a> all over the internet, the GMC ruling against Andrew Wakefield was damning. But not so much if you write for Private Eye, who have been defending Wakefield for years. So, in the light of the GMC findings that Wakefield behaved unethically, with "callous disregard" for the wellbeing of the children involved, and that parts of his research were fraudulent, is it time for the Eye to admit it was wrong?<br /><br />Apparently not. In an "In the Back" piece, the Eye had the following to say:<br /><blockquote>Although the GMC said the hearing was not about vaccination and autism, it is fairly clear that the two and a half years of disciplinary proceedings were to bring the MMR debate to a conclusion. The three doctors were to some degree being accused (and found guilty) of causing a public health scare which led to a fall in vaccination rates, so damaging "herd immunity", particularly with regard to measles.</blockquote>Nope. They were found guilty of conducting invasive diagnostic procedures, against the children's interests and without ethical approval; of not disclosing serious conflicts of interest; and of presenting false information in the publication of their research (which finally led to the Lancet retracting the paper).<br /><blockquote>Several parents of the children who featured in the team's original research paper, which was at the centre of the GMC case, stormed out of the hearing in angry protest at the findings - particularly the suggestion that their children's tests were not clinically necessary...They say they would have told the GMC that the treatment they received at the hospital helped their children's symptoms - but they were never called to testify.</blockquote>Eh? We're not talking about treatment here, we're talking about invasive diagnostic tests. These were done without ethical approval, without the required expertise, and against the interests of the children. 
Perhaps we should just get rid of ethical approval, as it gets in the way of heroic doctors doing whatever they need to do to vulnerable patients.<br /><br />And finally:<br /><blockquote>None of this debate about conduct, however, changes the fact that no subsequent research has supported Wakefield's thesis of a possible link between the MMR vaccine and autism. Nor does it change the fact that despite previous attempts to justify the <span style="font-style: italic;">Eye's</span> coverage at the time, some readers are still deeply critical of the magazine's reporting of the MMR debate between 2001 and 2007, when we wrote about the concerns of Wakefield, the families and their lawyers, and endorsed calls for more research (see Letters).</blockquote>Well, indeed. So why is the Eye still apparently defending the indefensible? I still buy the Eye, for all the other good that they do, but they've made a mistake over MMR and they ought to be big enough to admit it. They would surely ask the same of any of the politicians and industry figures they regularly lampoon.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com6tag:blogger.com,1999:blog-4477385342066275897.post-67344808165871519772010-02-03T20:19:00.003+00:002010-02-03T21:39:34.684+00:00It's literature integrity week...The integrity of the scientific literature is at issue in not one but two news stories this week. Firstly, the 1998 Lancet <a href="http://dx.doi.org/doi:10.1016/S0140-6736%2897%2911096-0">paper</a> by Andrew Wakefield and several co-authors, which purported to show a link between autism and the MMR vaccination, was <a href="http://download.thelancet.com/flatcontentassets/pdfs/S0140673610601754.pdf">retracted</a> [PDF] by the journal. Meanwhile, there is an <a href="http://www.guardian.co.uk/environment/2010/feb/01/leaked-emails-climate-jones-chinese">ongoing brouhaha</a> about a 1990 Nature paper that has some implications for climate change research.<br /><br />I'm only going to deal with Wakefield here because, you know, I have other things to do. The retraction by the Lancet is really the final nail in the coffin of Wakefield's paper, which has been heavily criticised for numerous reasons. Of the thirteen authors, ten had <a href="http://dx.doi.org/doi:10.1016/S0140-6736%2804%2915715-2">previously retracted</a> the interpretation that there was any link between autism associated with gastro-intestinal problems and the MMR vaccine. Only Wakefield himself and one P Harvey refused to sign up to this: the remaining author could not be contacted. It has also been known for some time that the methodology of the paper was flawed. So what suddenly changed this week?<br /><br />For some time the General Medical Council (GMC) has been investigating Wakefield and two colleagues, Simon Murch and John Walker-Smith, over allegations relating to their research on autism. The GMC findings were released on January 28th, and can only be described as devastating. You can find the whole thing <a href="http://www.gmc-uk.org/static/documents/content/Wakefield__Smith_Murch.pdf">here</a> [PDF], or there's a good summary at <a href="http://scienceblogs.com/insolence/2010/02/the_martyrdom_of_st_andy.php">Respectful Insolence</a>.<br /><br />The GMC did not look at whether the research findings were right or wrong: rather they looked at the research methodology. Essentially, the major problems were these:<br /><br />1. 
Wakefield had an undisclosed conflict of interest, because he was being paid by lawyers whose clients believed their children had been harmed by the MMR vaccine.<br /><br />2. Wakefield ordered invasive diagnostic tests, including colonoscopies and lumbar punctures, that were unnecessary and not in the children's interests, and he had neither the required ethical approval nor the requisite expertise to order those tests.<br /><br />3. Wakefield obtained blood samples at his son's birthday party by paying children £5 each. The GMC panel described Wakefield's actions as comprising "callous disregard for the distress and pain the children might suffer".<br /><br />4. The 12 children were described as having been "consecutively referred", but this was not true.<br /><br />Here's what the Lancet <a href="http://download.thelancet.com/flatcontentassets/pdfs/S0140673610601754.pdf">had to say</a>:<br /><blockquote>Following the judgment of the UK General Medical Council’s Fitness to Practise Panel on Jan 28, 2010, it has become clear that several elements of the 1998 paper by Wakefield et al1 are incorrect, contrary to the findings of an earlier investigation.2 In particular, the claims in the original paper that children were “consecutively referred” and that investigations were “approved” by the local ethics committee have been proven to be false. Therefore we fully retract this paper from the published record.</blockquote>So that's that. The problem with the paper is not that it was wrong; that has been known for some time. In fact, simply being wrong would not be a reason to retract the paper. Science often progresses by building on papers that were not quite right. No, the problem with the Wakefield Lancet paper was that it was fraudulent, unethical and incompetent, as well as being wrong. As a result, the take-up of MMR has fallen below the ~95% level at which herd immunity is maintained, and measles has once again been declared endemic in the UK. That's why you should try to avoid publishing fraudulent and unethical research. We'll let you off if your research is merely wrong, as long as it was honestly and competently wrong.<br /><br />Wakefield, eh, what a hero? Unfortunately, as we'll see in my <a href="http://hawk-handsaw.blogspot.com/2010/02/private-eye-still-believes-in-wakefield.html">next post</a>, some people in the most unexpected places still believe in the Cult of Andy...Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com1tag:blogger.com,1999:blog-4477385342066275897.post-76628297625283534492010-01-31T15:42:00.003+00:002010-01-31T16:34:57.965+00:00The Observer ignores the evidence on homeopathyHomeopathy is in the news once again, following a <a href="http://www.1023.org.uk/">campaign</a> set up by Merseyside Skeptics in which skeptics took an overdose of homeopathic pills, demonstrating that there's nothing in them other than lactose, and then went to the pub for some non-homeopathic beer. The campaign was mainly aimed at high-street pharmacist Boots, whose professional standards director admitted to the science and technology select committee of the House of Commons that there was no evidence that homeopathy worked, but they were very happy to continue charging people money for it. 
The campaign, and a forthcoming report of the select committee on whether the NHS should fund homeopathy, gave the Observer an excuse to publish a feature <a href="http://www.guardian.co.uk/lifeandstyle/2010/jan/31/homeopathic-remedies-nhs">article</a>, in which Anushka Asthana and Robin McKie examine the supposed controversy about whether homeopathy works or not.<br /><br />Of course, scientifically speaking there is no controversy over homeopathy. It's perfectly clear that there is no scientific reason why it ought to work, and that when it is tested in properly conducted trials it works no better than placebo. I grant you that this probably wouldn't make for a very satisfying Sunday newspaper feature, and it would certainly be rather too short to fill up all of page 30 of the Observer. But you might expect there to be some discussion of the actual evidence. Not really, though. Edzard Ernst gets to say that "I have now published more than 100 papers on homeopathy and I am quite clear about its efficacy: you may as well take a glass of water than a homeopathic medicine". The evidence of Jane Lawrence of the Royal Pharmaceutical Society to the House of Commons select committee on science and technology that "There is no basis for [homeopathic remedies] being effective" is also quoted. But then Cristal Sumner of the British Homeopathic Association gets away with saying "Homeopathy helps patients and is not a placebo effect", despite all the evidence showing exactly the opposite.<br /><br />This is typical of how the press treats scientific issues; they are presented in a superficially even-handed way, but crucially there is no attempt to weigh the evidence. Again, there is no scientific controversy over homeopathy, but by reading the Observer article you could be forgiven for thinking there was considerable room for doubt.<br /><br />The worst things about the article, though, are the two inset boxes (these only appear in the printed version of the article, not in the online version, as far as I can tell). The first box is headed "Common Remedies" and lists "Popular homeopathic remedies for sale in Britain", including Arnica for clearing up bruises, and mixed pollen for treating hayfever. As these are homeopathic, they contain no Arnica or pollen, and there is no evidence they do anything except lighten your wallet, but this is not mentioned in the box. The second box is headed "Case Study", and recounts the story of one Helen Llewelyn, who claims that homeopathy helped control her <a href="http://www.nhs.uk/conditions/endometriosis/Pages/Introduction.aspx">endometriosis</a>. This is an anecdote; it tells us that Llewelyn feels better, but it doesn't tell us anything about why, especially as we know nothing of what real medicines she might have been taking. From <a href="http://www.endometriosis-uk.org/community/personalstories/helen-llewelyn.html">this</a> website, it looks as though Llewelyn's case history is rather complex, and she is presently using homeopathy in conjunction with several kinds of real medicine. For medical treatments, you need much better evidence than this to show that they work. In the case of homeopathy, the evidence exists and it shows that homeopathy doesn't work. But there isn't an inset box anywhere that sums up the useful evidence: just one uncontrolled case report.<br /><br />Given that there is genuinely a debate about whether the NHS should fund homeopathy, it would be good for this debate to be informed by the best currently available evidence. 
That is exactly what the Observer fails to do: all we get is "he said, she said" appeals to various forms of authority, and an anecdote. No wonder the newspapers are struggling.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com12tag:blogger.com,1999:blog-4477385342066275897.post-5748029530744375012010-01-16T14:57:00.005+00:002010-01-17T18:48:49.489+00:00Medical Hypotheses row resurfacesLast year, publishers Elsevier got into trouble with HIV-AIDS researchers, after Medical Hypotheses (an Elsevier journal) published two papers on the subject of AIDS: one by Peter Duesberg claiming that the AIDS epidemic in South Africa was overhyped, and another by Marco Ruggiero suggesting that the Italian health ministry did not believe that HIV was the sole cause of AIDS (blog posts at <a href="http://www.badscience.net/2009/09/medical-hypotheses-fails-the-aids-test/">Bad Science</a> and <a href="http://scienceblogs.com/insolence/2009/09/pity_poor_peter_duesberg_even_medical_hy.php">Respectful Insolence</a>). The papers were deeply flawed, and were retracted by Elsevier pending an investigation into how they were published. The story has now <a href="http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=409997&c=2">resurfaced</a> in the Times Higher Education Supplement (THES), because:<br /><blockquote>Prominent Aids researchers contacted Elsevier to object to the article and wrote to the US National Library of Medicine requesting that Medical Hypotheses be removed from the Medline citation database - an act that would exclude it from the mainstream scientific-communication network.</blockquote>Elsevier have now convened an expert panel to decide on the future of Medical Hypotheses, with conclusions due by the end of 2010.<br /><br />In fact, there is no great mystery as to how these flawed papers came to be published. Medical Hypotheses is not peer reviewed: instead, decisions on publication are taken solely by the journal's editor, Prof Bruce Charlton. Articles are often accepted within days, or even hours, of being submitted, suggesting there is little or no quality control on what gets published. Prof Charlton defends this process on two grounds: firstly, that there ought to be some outlet for speculative and bizarre ideas that will not be published by mainstream journals. Secondly, that Medical Hypotheses is a successful and influential journal. Here's what he has to say on the comments following the <a href="http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=409997&c=2">THES</a> article:<br /><blockquote>The basic facts are that Medical Hypotheses - www.elsevier.com/locate/mehy - is explicitly and proudly editorially-reviewed (i.e. by me - not peer reviewed); aims to publish radical and revolutionary scientific ideas; and it is objectively a successful journal. It makes a profit, the Thomson ISI Impact Factor is 1.416 (much better than average, and rising), and I know from internal sources that there are half a million papers downloaded per year - which is equivalent download usage to the prestigious Journal of Theoretical Biology. Clearly, in spite or because of our policy to publish bold and sometimes bizarre ideas, Medical Hypotheses plays a significant role in medical science. Fact; not opinion. 
The editorial advisory board currently includes such respected figures as Nobelist Arvid Carlsson http://en.wikipedia.org/wiki/Arvid_Carlsson; Sir Roy Calne http://en.wikipedia.org/wiki/Roy_Calne; Antonio Damasio http://en.wikipedia.org/wiki/Antonio_Damasio and V.S. Ramachandran http://en.wikipedia.org/wiki/Vilayanur_S._Ramachandran . Past editorial advisors have included Sir Karl Popper and Nobelist Sir James Black. *** There are only two possible legitimate outcomes to the current process. Either: 1. Medical Hypotheses could continue as an influential, profitable and well-known editorially-reviewed journal with a radical mission. Or else: 2. The journal could be closed-down altogether, and the title abolished. But it would obviously not be ethically acceptable to launch a new ‘imposter’ journal - with utterly different editorial aims, procedures and personnel; yet retaining the 34 year established title of Medical Hypotheses.</blockquote>As I keep saying, the impact factor of a journal tells you nothing about its quality. For example, here are three peer-reviewed <a href="http://hawk-handsaw.blogspot.com/2007/08/cracking-example-of-pseudojournal.html">pseudojournals</a> that repeatedly publish abject nonsense and pseudoscience, with their impact factors according to Journal Citation Reports:<br /><ul><li>Homeopathy: 1.041</li><li>Evidence-based Complementary and Alternative Medicine: 1.954</li><li>Journal of Alternative and Complementary Medicine: 1.628</li></ul>The articles in these journals are typically written by quacks, and are cited by other quacks writing in quack journals, giving a high-ish but meaningless impact factor. Perhaps Medical Hypotheses is also highly influential among pseudoscientists?<br /><br />But the main point here is about radical and controversial hypotheses. I think most people would agree that these have their place in scientific discourse, and there ought to be somewhere to publish them. However, this isn't really what the argument is about. In this case, two fatally flawed papers were published with little or no scrutiny: these papers have potential global health implications. In the case of the Duesberg paper, <a href="http://denyingaids.blogspot.com/2009/09/peer-reviewing-peter-duesberg.html">reviews</a> posted on the Denying AIDS blog show the major problems with the paper. There's a difference between publishing provocative ideas that might inspire new research, and ones that are just demonstrably wrong. While the likes of Peter Duesberg have the right to say what they like, they don't have the right to say it in a <a href="http://en.wikipedia.org/wiki/MEDLINE">MEDLINE</a>-indexed journal. This is not an argument about free speech, it's an argument about the integrity of the scientific literature. There may be a place for journals such as Medical Hypotheses, but there has to be some level of quality control. Otherwise, why should anyone take them seriously?Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com6tag:blogger.com,1999:blog-4477385342066275897.post-1270949761845115392010-01-04T11:17:00.004+00:002010-01-04T13:48:24.465+00:00Bad cycling: New Years Day '10'Not so much bad science, as bad cycling. Last year I returned to bike racing after more than 10 years away, riding a few club events organised by my club, <a href="http://www.stockport-clarion.org.uk/">Stockport Clarion</a>. 
The triumph of my season was my <a href="https://www.blogger.com/comment.g?blogID=4477385342066275897&postID=9201899593893325175">not-too-bad performance</a> in the club hill climb up the Cat and Fiddle. Since then, I've not really been on the bike too much, because it's dark and cold and I'm a bit of a wuss like that. I did have an ill-fated ride before Christmas, where I fell on a patch of ice just before the Great Stone Road roundabout in Stretford, and then almost got run over by the gritter that was following me. But Stockport Clarion traditionally organises a 10-mile time trial on New Year's Day, and I thought it would be a good way to kick-start 2010.<br /><br />It was an 11am start at Chelford, so I reckoned I had to leave about 9:30. This seemed less and less like a good idea as I headed off into the -4 weather, wearing pretty much every piece of bike kit I own. At least the roads were quiet. There was virtually no wind, but riding at about 15 mph was enough to freeze my hands within a few miles, despite my winter gloves. There was a bloke staggering about and yelling as I went through the edge of Didsbury. Hopefully he was on his way home. There was hardly anyone else about. I rattled through a deserted Alderley Edge and made it to the sign-on about 10:45, just as light snow began to fall. Dunc, the timekeeper, had a good story about the previous year's race, when it had also snowed. Apparently a rider behind him noticed that he was leaving two tyre tracks instead of one: he'd twisted his forks in a crash a few days before.<br /><br />By the time 11:00 came around, there were still only two riders. It was now -3, and the news was that both Snake Pass and the Cat and Fiddle were closed because of drifting snow. One rider from Poynton had taken one look at the thermometer (-6) and decided to go back to bed. Having come this far, I thought we might as well have a ride anyway. Then the perennially late Will arrived, on a full time-trial machine, to make it three riders.<br /><br />The Chelford course is a bit of a horror, as time trial courses go. But there are just too many traffic lights around Manchester for a selection of courses, so Chelford is what you get. It's hardly a dragstrip, but not really a sporting course either. It's flattish, narrow and typically busy with fast traffic, but being New Year's Day it was relatively quiet. The road surface is pretty heavy and there's a couple of lumps to keep you honest, the main one being the railway bridge in Chelford. That probably doesn't sound too bad, but it's amazing how much effort it costs to maintain speed over the damned thing, with only about a mile to go to the finish.<br /><br />I started number 2, and within a few hundred yards my feet were frozen. There's a roundabout after about a mile, where Ric the marshal was shivering and pacing about to keep warm, and then the turn is just outside Knutsford. By then I'd already been passed by Will at number 3, and was clearly losing time to Dan at number 1. By the time I got to the turn, I was starting to warm up, but my legs just wouldn't have it over the last 4 miles and I finished in 32:53. That's the slowest '10' I've ever done, a minute slower than the first one I ever did riding as a schoolboy under the banner of Penistone Grammar School. 
I rode back to the timekeeper's car, and took a drink from my bottle: there was ice in it.<br /><br />Still, the <a href="http://www.stockport-clarion.org.uk/TimeTrials/NYD10-2010.html">winning time</a> was 28:39, from a rider who would be doing 24-minute rides in summer, so I couldn't feel too bad. In any case, I didn't have time: to avoid hypothermia, I jumped back on the bike and set off home ASAP.<br /><br />Hopefully there'll be more bad science in the year ahead (there's certainly no shortage of it about), and perhaps even some good science of my own. Whatever else happens, though, I'm certain that there'll be more bad cycling.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com7tag:blogger.com,1999:blog-4477385342066275897.post-55849170661708793642009-11-21T17:58:00.002+00:002009-11-21T18:09:16.246+00:00Fighting parliamentary committees...with your mindYou might remember me <a href="http://hawk-handsaw.blogspot.com/search?q=intention+experiment">writing</a> about the Intention Experiment in the past. The Intention Experiment hopes to harness the power of positive thinking to influence events. In one <a href="http://hawk-handsaw.blogspot.com/2008/05/lesson-in-how-to-get-false-positive.html">memorable example</a>, they tried to influence the structure of water through the collected brainpower of a large collection of gullible numpties. Despite the experiment having a design likely to massively increase the chance of getting a false positive result, nothing astonishing happened.<br /><br /><a href="http://gimpyblog.wordpress.com/">Gimpy</a> has now made my day by posting <a href="http://gimpyblog.wordpress.com/2009/11/19/homeopaths-use-witchcraft-to-influence-parliament/#comment-6784">this story</a> about homeopaths concerned that the British parliament's Science and Technology Committee will conclude that there is no evidence in favour of homeopathy. Instead of marshalling the evidence and trying to make a scientific case, the homeopaths are trying an intention experiment to influence the committee in favour of homeopathy.<br /><br />There's just nothing to add to this: it's sheer barking mad lunacy, and you have to think that maybe these people are their own worst enemies.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com2tag:blogger.com,1999:blog-4477385342066275897.post-65425849018523019842009-10-26T16:11:00.003+00:002009-10-26T16:36:59.804+00:00Tuesday is the wettest day of the week...?It was a bit disappointing to get back from a week in the field in Sinai, to find my employers at the University of Manchester advertising the Manchester Science Festival with <a href="http://www.manchester.ac.uk/aboutus/news/display/?id=5171">this</a> story. Apparently, Tuesday is the wettest day of the week in Manchester.<br /><br />Well, every day is rainy in Manchester. Perhaps predictably, if you look at the <a href="http://www.manchesterrain.com/summary.html">data</a> this conclusion is based on, you can see that there isn't exactly an astounding difference in rainfall between different days of the week. Certainly, statistically significant differences have not been demonstrated. The research has some interesting things to say about how rainfall patterns seem to have changed over the last 30 years or so: Manchester is somehow managing to get wetter. This is consistent with warmer temperatures, as more water vapour can be moved around when temperatures are higher. 
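<br /><br />As an aside, if the daily totals behind the press release were available, checking the weekday claim would be straightforward. A minimal sketch, assuming a CSV of daily rainfall with made-up column names "date" and "rain_mm" (the file name is hypothetical too):<br /><pre>
# Sketch of a day-of-week significance check on daily rainfall totals.
# Column and file names are hypothetical. A Kruskal-Wallis test is used because
# daily rainfall totals are nothing like normally distributed.
import pandas as pd
from scipy.stats import kruskal

df = pd.read_csv("manchester_rain.csv", parse_dates=["date"])
df["weekday"] = df["date"].dt.day_name()

groups = [g["rain_mm"].values for _, g in df.groupby("weekday")]
stat, p = kruskal(*groups)
print("Kruskal-Wallis H = %.2f, p = %.3f" % (stat, p))
# A large p-value means the day-to-day differences look like noise.
</pre>I would be surprised if a test like that found anything, which is rather the point.<br /><br />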
This is interesting stuff, and excellent for illustrating local changes in climate for a science festival. So why emphasise that Tuesday is supposedly the wettest day of the week, when the data surely don't convincingly support that? I suppose the university press office thinks that wet Tuesdays are a more interesting story than local climate changes, but I don't think I would agree.<br /><br />A quick google shows that the story has been picked up by the <a href="http://www.express.co.uk/posts/view/135683/Why-Tuesday-is-the-day-you-ll-most-likely-need-your-umbrella">Express</a> (Why Tuesday is the day you'll most likely need your umbrella) and the <a href="http://www.telegraph.co.uk/topics/weather/6413555/Tuesday-is-the-rainiest-day.html">Telegraph</a> (Tuesday is the rainiest day), which are clearly based around the press release. I think a better press release might have been headlined "Manchester getting rainier", which is interesting and also has the benefit of being supported by the data.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com2tag:blogger.com,1999:blog-4477385342066275897.post-80194611559595013112009-09-23T10:51:00.001+01:002009-09-23T10:54:37.331+01:00REF consultation document publishedFor anyone interested in how research funding is allocated (fascinating stuff, I know), a consultation document on the Research Excellence Framework (REF) is now available <a href="http://www.hefce.ac.uk/pubs/hefce/2009/09_38/">here</a>. REF is the mooted replacement for the old Research Assessment Exercise (RAE), the last one of which was conducted in 2008. Enjoy...Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com6tag:blogger.com,1999:blog-4477385342066275897.post-80079273983616557052009-09-18T13:32:00.005+01:002009-09-18T15:18:07.487+01:00Playing the game: the impact of research assessmentYesterday I was sent this <a href="http://www.rin.ac.uk/communicating-knowledge">report</a>, produced by the <a href="http://www.rin.ac.uk/">Research Information Network</a>, in conjunction with the <a href="http://www.jisc.ac.uk/">Joint Information Systems Committee</a>, and entitled "Communicating knowledge: How and why UK researchers publish and disseminate their findings". The report used a literature review, bibliometric analysis, an online survey of UK researchers, and focus groups or interviews with researchers to look at how and why researchers put information into the public domain. Being an early-career researcher, I'm interested in this sort of thing: I know why I'm publishing and disseminating information, but it's interesting to see why everyone else is doing it. It's also interesting to see the extent to which research assessment in the UK - until recently the <a href="http://www.hefce.ac.uk/research/ref/reform/">Research Assessment Exercise (RAE)</a> and in future the mysterious <a href="http://www.hefce.ac.uk/Research/ref/">Research Excellence Framework (REF)</a> - influence the decisions that researchers make. What particularly struck me about the report was the number of times researchers talked about "playing games": the framework of research assessment is seen as a game to be played, with the needs of research being subordinated to the need to put in a good performance. 
This has important implications for the REF, in which <a href="http://hawk-handsaw.blogspot.com/2009/01/does-ref-add-up-to-good-science.html">bibliometric indicators</a> are likely to play an important role.<br /><br />The key point of the report is that there is some confusion among researchers about what exactly it is they're supposed to be doing. There are conflicting and unclear messages from different bodies about what sort of research contributions are valued. The perception is that the only thing that really counts in terms of research assessment is peer-reviewed journal articles. Other contributions, such as conference proceedings, books, book chapters, monographs, government reports and so on are not valued. As a result, the proportion of journal articles compared to other outputs increased significantly between 2003 and 2008. A couple of comments by researchers quoted in the report (p.15):<br /><br /><blockquote>[There is] much more emphasis on peer reviewed journals …Conferences, working papers and book chapters are pretty much a waste of time … Books and monographs are worth concentrating on if they help one demarcate a particular piece of intellectual territory.</blockquote><br /><br /><blockquote>There is a strong disincentive to publish edited works and chapters in edited works, even though these are actually widely used by researchers and educators in my field, and by our students.</blockquote><br /><br />This is certainly the impression I get from my own field. In fact, I have been advised by senior colleagues to target high-impact journals, rather than, for example, special publications. I have never received any formal guidance on what research outputs are expected of me, but the prevailing atmosphere gives the impression that it's all about journal articles. After publishing a couple of things from my PhD, it took another three years to publish anything from my first post-doc. I worried about that: it seemed that the numerous conferences and internal company reports and presentations I produced over that time counted for nothing career-wise.<br /><br />The report makes it clear that, in the case of the RAE, it is perceptions more than the reality that are causing the problem: the RAE rules meant that most outputs were admissible, and all would be treated equally. But it's perceptions that drive the way researchers respond to research assessment. Clearer guidance is needed.<br /><br />An interesting point brought up by the report is how, when there is more than one author for a journal article, the list of authors is arranged. In my field, authors are typically listed in order of contribution, so I was surprised to find that this is by no means always the case. In some fields, especially in the humanities and social sciences, authors are commonly listed alphabetically. In some cases, the leader of the research group is listed first, in other cases last. And there are various mixtures of listing by contribution, grant-holding and alphabetic order. There is even a significant minority where papers based on work done by students have the student's supervisor as first author! This means that there is no straightforward way of apportioning credit to multiple authors of a paper, something that <a href="http://www.dcscience.net/?p=182">David Colquhoun</a> has already pointed out. This is a huge problem for any system of assessment based on bibliometrics.
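<br /><br />To see how much hangs on the ordering convention, here's a toy illustration (mine, not the report's) of three of the standard ways bibliometricians split credit for a single four-author paper:<br /><pre>
# Toy illustration (not from the report): three ways of splitting credit for one
# paper among its authors. Which is "right" depends entirely on what the author
# order means in a given field -- which is exactly the problem.

def whole_counting(n):
    return [1.0] * n                      # everyone gets full credit

def fractional_counting(n):
    return [1.0 / n] * n                  # credit split equally

def harmonic_counting(n):
    weights = [1.0 / (i + 1) for i in range(n)]
    total = sum(weights)
    return [w / total for w in weights]   # first author weighted most

for scheme in (whole_counting, fractional_counting, harmonic_counting):
    print(scheme.__name__, [round(c, 2) for c in scheme(4)])
</pre>Harmonic counting gives the first author of a four-author paper about four times the credit of the last author, which is sensible if the order reflects contribution, and meaningless if the order is alphabetical or the group leader goes last.<br /><br />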
The report also examines how researchers cite the work of other people. Other researchers' work should be cited because it forms part of the background of the new research, because it supports a statement made in the new paper, or as part of a discussion of how the new paper fits into the context of previous research. Crucially, this includes citing work with which the authors disagree, or that is refuted or cast into doubt in the light of the new work (p.30):<br /><br /><blockquote>Citing somebody often indicates opposition / disagreement, rather than esteem and I am as likely to cite and critique work that I do not rate highly as work I value.<br /></blockquote><br />So any system that relies on bibliometric indicators is likely to reward controversial science as much as good science (not that those categories are mutually exclusive, but they don't completely overlap either).<br /><br />Researchers are perfectly clear that a system based on bibliometrics will cause them to change their publication behaviour: 22% will try to produce more publications, 33% will submit more work to high-status journals, 38% will cite their collaborators' work more often, while 6% will cite their competitors' work less often. This will lead to more journal articles of poorer quality, the decline of perfectly good journals that have low "impact", and corruption in citation behaviour. In general, researchers aren't daft, and they've clearly identified the incentives that would be created by such a system.<br /><br />The report presents a worrying picture of research, and scientific literature, distorted by the perverse incentives created by poorly thought-out and opaque forms of research assessment. It can be argued that scientists who allow their behaviour to be distorted by these incentives are acting unprofessionally: I wouldn't disagree. But for individuals playing the game, the stakes are high. Perhaps we ought to be thinking about whether research is the place for playing games. It surely can't lead to good science.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com10tag:blogger.com,1999:blog-4477385342066275897.post-116917193431550452009-09-16T16:23:00.002+01:002009-09-16T16:26:20.989+01:00I get e-mailGot this today, sent out to academic and academic-related staff in my department:<br /><br /><blockquote>Dear All, <br /><br />Please find attached NSS results by Faculty, School and JACS Level 3 subjects. Also included is a mapping document to accompany the JACS report to assist you in understanding which programmes of study are included under each heading. The Word document, 'APPENDIX 06-Surveys - NSS Table EPS.doc' shows the data that will be included in the OPR documentation. <br /><br />Please note that the data is FOR INTERNAL USE ONLY.</blockquote><br /><br />I have no idea what NSS, JACS or OPR mean, so this e-mail makes no sense to me whatsoever. 
I seem to be getting an increasing number of these things, all with acronyms I've never heard of.<br /><div><p class="MsoNormal"><span style="font-family:Arial;font-size:85%;"><span style="font-size: 10pt; font-family: Arial;" lang="EN-GB"></span></span><span lang="EN-GB"><o:p></o:p></span></p></div>Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com7tag:blogger.com,1999:blog-4477385342066275897.post-86056845413403974602009-09-16T14:43:00.005+01:002009-09-16T16:00:19.105+01:00What happens when you don't have peer reviewNormally, when a scientific paper is submitted, it is subjected to scrutiny by two or more scientists working in a similar field. Only if the paper gets through this peer review process, and if corrections required by the reviewers have been made, does the paper actually get published. This process is by no means perfect: bad papers slip through, and good papers get blocked by over-zealous reviewers. But there are two examples this week of what can go wrong when papers are not peer reviewed.<br /><br />Firstly, <a href="http://www.badscience.net/2009/09/medical-hypotheses-fails-the-aids-test/">Ben Goldacre</a> and <a href="http://scienceblogs.com/insolence/2009/09/pity_poor_peter_duesberg_even_medical_hy.php">Respectful Insolence</a> discuss the case of two papers, recently published in Medical Hypotheses, that were so bad they <a href="http://www.ncbi.nlm.nih.gov/pubmed/19586724">were</a> <a href="http://www.ncbi.nlm.nih.gov/pubmed/19619953?ordinalpos=1&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DiscoveryPanel.Pubmed_Discovery_RA&linkpos=1&log$=relatedarticles&logdbfrom=pubmed">withdrawn</a> by publishers Elsevier. Given that Elsevier happily publishes <span style="font-style: italic;">Homeopathy</span>, the fanzine of the Faculty of Homeopathy, this should give pause for thought. Medical Hypotheses is a bit of an oddity: it does not send papers out for peer review. Rather, they are approved solely by the editor of the journal, one Bruce Charlton. It appears that many papers are approved within days, sometimes hours, of being submitted, suggesting that there is very little scrutiny of the papers.<br /><br />The two papers are one by <a href="http://www.ncbi.nlm.nih.gov/pubmed/19619953?ordinalpos=1&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DiscoveryPanel.Pubmed_Discovery_RA&linkpos=1&log$=relatedarticles&logdbfrom=pubmed">Duesberg et al</a>., and one by <a href="http://www.ncbi.nlm.nih.gov/pubmed/19586724">Ruggiero et al.</a>, both of which seek to deny the magnitude of the AIDS crisis. Seth Kalichman of the Denying Aids blog did an <a href="http://denyingaids.blogspot.com/2009/09/peer-reviewing-peter-duesberg.html">experiment</a> by sending the manuscript out for blind peer review. All three "reviewers" rejected the manuscript on the basis that it was filled with logical flaws and mis-representations of the published literature.<br /><br /><br />Elsevier says:<br /><br /><blockquote>This Article-in-Press has been withdrawn pending the results of an investigation. The editorial policy of Medical Hypotheses makes it clear that the journal considers "radical, speculative, and non-mainstream scientific ideas", and articles will only be acceptable if they are "coherent and clearly expressed." 
However, we have received serious expressions of concern about the quality of this article, which contains highly controversial opinions about the causes of AIDS, opinions that could potentially be damaging to global public health. Concern has also been expressed that the article contains potentially libelous material. Given these important signals of concern, we judge it correct to investigate the circumstances in which this article came to be published online. When the investigation and review have been completed we will issue a further statement. Until that time, the article has been removed from all Elsevier databases. The Publisher apologizes for any inconvenience this may cause. The full Elsevier Policy on Article Withdrawal can be found at http://www.elsevier.com/locate/withdrawalpolicy.</blockquote><br /><a href="http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=408113&c=2">The second example</a> is a paper published in Proceedings of the National Academy of Sciences, amusingly known as PNAS. This is a venerable and respected journal, but it has a little-known wrinkle: members of the National Academy of Sciences are allowed to bypass formal peer review by "communicating" papers for other researchers. This is how the <a href="http://www.pnas.org/site/misc/iforc.shtml">PNAS "Information for Authors"</a> page describes the process:<br /><br /><blockquote>An Academy member may “communicate” for others up to 2 manuscripts per year that are within the member's area of expertise. Before submission to PNAS, the member obtains reviews of the paper from at least 2 qualified referees, each from a different institution and not from the authors' or member's institutions. Referees should be asked to evaluate revised manuscripts to ensure that their concerns have been adequately addressed. The names and contact information, including e-mails, of referees who reviewed the paper, along with the reviews and the authors' response, must be included. Reviews must be submitted on the PNAS review form, and the identity of the referees must not be revealed to the authors. The member must include a brief statement endorsing publication in PNAS along with all of the referee reports received for each round of review. Members should follow National Science Foundation (NSF) guidelines to avoid conflict of interest between referees and authors (see Section iii). Members must verify that referees are free of conflicts of interest, or must disclose any conflicts and explain their choice of referees. These papers are published as “Communicated by” the responsible editor. </blockquote>The paper in question was submitted via this communication process. It was written by Donald Williamson, a retired academic from the University of Liverpool, and <a href="http://dx.doi.org/doi:10.1073/pnas.0908357106">suggests that butterflies and caterpillars originated as different species</a>:<br /><br /><blockquote>I reject the Darwinian assumption that larvae and their adults evolved from a single common ancestor. Rather I posit that, in animals that metamorphose, the basic types of larvae originated as adults of different lineages, i.e., larvae were transferred when, through hybridization, their genomes were acquired by distantly related animals.</blockquote><br /><br />The paper has been criticised on the basis that it contains no supporting data for what is, after all, a fairly extraordinary hypothesis. 
Not only that, but it turns out that it had previously been rejected by seven different journals.<br /><br />In both Medical Hypotheses and PNAS, the defence seems to be that there needs to be some mechanism by which speculative ideas that go against current mainstream opinion can be presented and discussed. This seems fair enough, but is anything gained by publishing hypotheses that are not supported by any data, or papers that are logically flawed and contain mis-representations? In both these cases, it seems that the papers would not have been published had they been reviewed properly.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com3tag:blogger.com,1999:blog-4477385342066275897.post-23374761741627123732009-08-26T11:35:00.005+01:002009-08-26T12:01:51.987+01:00Soviet ManchesterThere's an <a href="http://www.library.manchester.ac.uk/specialcollections/exhibitions/current/">exhibition</a> of maps of Manchester at the <a href="http://www.library.manchester.ac.uk/specialcollections/">John Rylands Library</a> just now, and I'm hoping to have a look at it next week when I take a much-needed break. One of the exhibits in particular is attracting a lot of attention: a <a href="http://www.manchester.ac.uk/aboutus/news/display/?id=4983">map of the city</a> prepared by the Soviets, for use in the event of an invasion of the UK.<br /><br /><blockquote>It shows the roads - familiar to many Mancunians - which the Soviets felt were wide enough to carry tanks including Washway Road, the Mancunian Way, and Princess Road.</blockquote><br />I biked down Washway Road last night on my way home from a training ride, and it's quite hard to imagine columns of Soviet tanks clanking along it towards Stretford. Though I suppose if there ever had been a Soviet invasion, Washway Road would be even more of a post-nuclear wasteland than it currently appears. Apparently, this map is only 35 years old, and it's very strange to think that so recently there were plans for a Soviet Manchester.<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgynZQIDXni1wmX8HliFoIH-NwnzyrjtP94Y4aSYcNbipMIrStGL_9bVj-YQsgIa71fFHN_Pb-Ip3exguBO9iXibhRboVHm4DwawLMATmZEoThHcO6WqJkcXq6bLKi0_hnbTf62KX1o5MJ_/s1600-h/tank.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 306px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgynZQIDXni1wmX8HliFoIH-NwnzyrjtP94Y4aSYcNbipMIrStGL_9bVj-YQsgIa71fFHN_Pb-Ip3exguBO9iXibhRboVHm4DwawLMATmZEoThHcO6WqJkcXq6bLKi0_hnbTf62KX1o5MJ_/s400/tank.gif" alt="" id="BLOGGER_PHOTO_ID_5374225893836615410" border="0" /></a>Washway Road, yesterday<br /></div>Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com4tag:blogger.com,1999:blog-4477385342066275897.post-51898386110033948202009-08-21T21:23:00.006+01:002009-08-21T21:53:39.446+01:00Society of Homeopaths defends ineffective treatment for condition that kills 2 million children a year......surprise!<br /><br />I'm not going to write a huge amount about this, because it is <a href="http://scienceblogs.com/insolence/2009/08/the_world_health_organization_disses_hom.php">ably</a> covered <a href="http://www.quackometer.net/blog/2009/08/society-of-homeopaths-are-shambles-and.html">elsewhere</a>. 
But the <a href="http://www.who.int/en/">World Health Organisation</a> finally came out today, and <a href="http://news.bbc.co.uk/1/hi/health/8211925.stm">clearly stated</a> that homeopathy should not be used for life-threatening conditions such as AIDS, malaria, TB and childhood diarrhoea.<br /><br />The Society of Homeopaths (SoH) <a href="http://www.homeopathy-soh.org/whats-new/press-releases.aspx">respond</a> by saying that treating AIDS, malaria and TB would contravene their ethical guidelines. These guidelines are a joke, as you can see by perusing the <a href="http://www.quackometer.net/blog/">Quackometer</a> and <a href="http://gimpyblog.wordpress.com/">Gimpy's blog</a>. But apart from that, they say:<br /><br /><blockquote>The Society of Homeopaths, the UK’s largest body of registered homeopaths, is concerned to learn, in an online article by the BBC (“WHO warns against homeopathy use’), that the World Health Organisation (WHO) has issued caution against the use of homeopathy for childhood diarrhoea following a letter by the charity Sense About Science.</blockquote><br />They go on to <a href="http://www.quackometer.net/blog/2009/08/society-of-homeopaths-are-shambles-and.html">cherry-pick and distort</a> the research that has actually been conducted on homeopathy for childhood diarrhoea.<br /><br />Well, who's surprised by that? Regular readers will be aware of just a few of the ways that homeopaths try to distort the evidence that homeopathy doesn't work. The point here is that the SoH, a supposedly professional organisation, is trying to defend a totally ineffective treatment for a disease that kills <a href="http://www.who.int/mediacentre/news/releases/2009/diarrhoea_research_20090310/en/index.html">2 million children every year</a>. Homeopathy can't do anything to help, and using it instead of proper medical treatment could cost lives. I don't really mind if homeopaths sell ineffective sugar pills to the worried well in Alderley Edge, but this press release is delusional and irresponsible, and reasonable people should have no truck with this sort of dangerous nonsense.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com2tag:blogger.com,1999:blog-4477385342066275897.post-92018995938933251752009-08-18T11:09:00.003+01:002009-08-18T14:33:37.517+01:00A long trip to a closed pubSeveral years ago, I used to do a lot of <a href="http://en.wikipedia.org/wiki/Individual_time_trial">time trialling</a>. Given that, at my peak fitness, I weighed 8 and a half stone and looked like a bundle of pipe cleaners tied together, my favourite races were always <a href="http://en.wikipedia.org/wiki/Hillclimbing_%28cycling%29">hill climbs</a>. Hill climbs are essentially short time-trials up a hill, and they usually happen at the end of the season in September and October. In a typical season I would ride several around Sheffield, Monsal Head and Curbar Edge being particular favourites. Then in 1999 I went to Canada to study for my PhD, and I didn't race again until this year.<br /><br />This year I joined <a href="http://www.stockport-clarion.org.uk/">Stockport Clarion CC</a>, and I've been riding the odd weekday 10-mile time-trial with no great success. But the event I was really looking forward to was the club hill climb championships. Hill climb courses are typically between several hundred yards and a couple of miles long, and they're usually steep, with gradients of 10-20%. 
Just to be different, ours is on the <a href="http://en.wikipedia.org/wiki/Cat_and_Fiddle_Road">Cat and Fiddle road</a>, between Macclesfield and the <a href="http://en.wikipedia.org/wiki/Cat_and_Fiddle_Inn">eponymous pub</a> that stands, hemmed in by peat bog, at the road summit. That's about 6.5 miles of climbing, but at an average gradient of less than 4%. There are steeper railway bridges around here, but there's still about 335 m of height gain involved. The <a href="http://website.lineone.net/%7Ejim.henderson/cycling/hc/cat.html">course</a> starts in the outskirts of Macclesfield, opposite a bus stop, and the first 3 miles or so is a fairly steady 4.8%. The next mile and a half is very bendy, contains some short downhill sections, and is nearly flat on average. The final mile and a half takes you up onto Axe Edge Moor proper, climbing at 3.5% or so.<br /><br />I had never ridden up the climb before, but I had a gameplan of sorts. The first section was the bit where my light weight would give me an advantage, so I would ride hard, but not flat out: the key is not to overdo it, what with there still being 3 miles to go. I would then use the fast section to take a bit of a breather, riding at slightly less than 10-mile pace, since you don't gain much time there anyway. Then it would be eyeballs out over the last mile and a half to the summit. What gears would I use? Hell if I knew. I would figure it out as I went along.<br /><br />The race was last night, and I actually really enjoyed it. The conditions could scarcely have been better. It was clear and dry, about 18 C, and there was a moderate tailwind. There were only 8 riders, and I was off Number 2, with a slower rider starting one minute before me. I started well, settling in very quickly, and once I had emerged from the trees at the bottom of the climb I could see my minute man almost all of the time. I was clearly gaining on him, and that gave me a psychological boost, as I finally reeled him in on a sharp right-hand bend about a mile and a half in, just before Walker Barn. I was first on the road now, a nice feeling, and my legs were holding up just fine. After Walker Barn, into the fast part of the course, I switched into the big ring, and tried to maintain a slightly-slower-than-10-mile-time-trial pace. My biggest problem here was gear selection. The gradient changes so often and there are so many sharp curves that I felt like I was using nearly every gear on the bike, and I was changing front rings too often: I should have picked one and stuck to it. But I still felt good, and I knew I had something left for the last mile and a half. No strategy involved here; just eyeballs out until the summit. There was a guy out for a training ride in front, and that gave me something to chase. Round the last bend, and there was the welcome sight of the Cat and Fiddle Inn. One last leg-breaking effort, and I was past the timekeeper in 26 minutes dead, gasping like a freshly landed fish. It was about a minute faster than I'd hoped for, so I couldn't have been happier.<br /><br />It was glorious at the summit, with views across the Cheshire Plain and the Peak District. Unfortunately, the pub is closed on Monday nights, so there was no chance of a swift half before the 25 mile ride back home. 
At least the first seven or so was downhill.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com5tag:blogger.com,1999:blog-4477385342066275897.post-73785781907099008872009-08-07T20:31:00.006+01:002009-08-07T21:39:51.284+01:00Publication bias: medical science is looking for it. Are you?Via the mighty <a href="http://www.badscience.net/">Ben Goldacre</a>'s miniblog, <a href="http://www.bmj.com/cgi/content/full/bmj.b2981">here</a>'s a study that's just been published in the BMJ on the subject of <a href="http://en.wikipedia.org/wiki/Publication_bias">publication bias</a>. Publication bias typically occurs when studies with positive results are preferentially published over studies with negative results. There are many reasons why this might happen; drug companies don't want to publish negative trials of their drugs for obvious reasons, but there is also a tendency for journals to be uninterested in publishing negative trials, and for busy authors to be reluctant to write them up. There's nothing sexy about a negative trial.<br /><br />In the BMJ study, Santiago Moreno and colleagues look at anti-depressants. This is a good area to look at, because of the availability of data that was submitted to the Food and Drug Administration (FDA) in the United States. Legal requirements enforce submission of ALL data to the FDA, so the authors consider the FDA dataset to be unbiased (although not necessarily complete). This unbiased dataset can then be compared to the data that is available in published journal articles.<br /><br />The comparison is done with our old friend the <a href="http://en.wikipedia.org/wiki/Funnel_plot">funnel plot</a>. This plots the standard error for a trial against the size of the effect that the trial found. The authors of the BMJ study added a new twist by contouring the funnel plot for statistical significance: at a glance, you can see where each study falls relative to the conventional significance thresholds.<br /><br />The results are dramatic. Of the 74 trials registered with the FDA, 23 were not published. In the FDA data, there is a wide spread of results across the contours marking 1%, 5% and 10% levels of significance. When only the published data are considered, there is a clear cut-off at the contour for the 5% significance level, which is typically used in clinical trials to establish statistical significance. That strongly suggests that publication bias is a serious problem in the set of published trials: trials with statistically non-significant results have been systematically excluded.<br /><br />What this means is that the published literature over-estimates the benefit of anti-depressants. It doesn't show that anti-depressants don't work: meta-analysis of the FDA data still shows a beneficial effect. The point is that the real benefit (as shown by the FDA data) is less than the benefit you would expect if you looked only at the published literature. Anti-depressants work less well than you might think, but they still work.<br /><br />The problem is, of course, that most of the time we only have the published literature to work with. So the BMJ paper adds a useful visual technique for identifying publication bias as a likely problem, even if we don't have access to a bias-free dataset for comparison. There is no doubt that biases exist in published data; the response from medical science, as seen here, is to try to identify and account for these biases. 
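<br /><br />To make the contour-enhanced funnel plot a bit more concrete, here is a minimal sketch in Python with matplotlib. To be clear, this is not the Moreno et al. code, and the effect sizes and standard errors below are invented purely for illustration: the point is the shading, which marks the nested regions in which a trial would fail to reach the 10%, 5% and 1% two-sided significance levels.<br /><br /><pre>
import numpy as np
import matplotlib.pyplot as plt

# Invented example data: one point per trial.
# effects = estimated treatment effects, se = their standard errors.
effects = np.array([0.35, 0.10, 0.42, 0.28, 0.05, 0.51, 0.22, 0.38])
se = np.array([0.10, 0.25, 0.08, 0.15, 0.30, 0.12, 0.20, 0.09])

se_grid = np.linspace(0.001, se.max() * 1.1, 200)

fig, ax = plt.subplots()

# Shade the nested "non-significance" regions: a trial whose effect is
# smaller in magnitude than z * se misses the corresponding two-sided
# significance level (z = 2.576 for 1%, 1.960 for 5%, 1.645 for 10%).
# The widest region is drawn first so the narrower ones sit on top.
for z, shade in [(2.576, "0.75"), (1.960, "0.85"), (1.645, "0.95")]:
    ax.fill_betweenx(se_grid, -z * se_grid, z * se_grid, color=shade, lw=0)

ax.scatter(effects, se, color="black", zorder=3)
ax.axvline(0.0, color="black", linestyle="--", linewidth=0.8)
ax.invert_yaxis()  # convention: the most precise (small-se) trials at the top
ax.set_xlabel("Effect size")
ax.set_ylabel("Standard error")
ax.set_title("Contour-enhanced funnel plot (made-up data)")
plt.show()
</pre><br />If the published trials all cluster just outside the 5% contour while the non-significant region sits suspiciously empty, publication bias is the obvious suspect; that is essentially the pattern the BMJ authors found.<br /><br />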
Regular readers <a href="http://hawk-handsaw.blogspot.com/2009/04/homeopathy-paper-published.html">will know</a> that the response of CAM research is to manipulate the data in order to pretend that the problem doesn't exist.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com5tag:blogger.com,1999:blog-4477385342066275897.post-90664369081613326362009-08-05T13:34:00.003+01:002009-08-05T14:21:04.278+01:00I have a bad feeling about this...I was idly browsing the front page of the Guardian's website, when I really ought to be doing increasingly tedious corrections to a paper, and I saw <a href="http://www.guardian.co.uk/lifeandstyle/2009/aug/05/homeopathy-eczema-cure">this</a>. Eczema sufferer Peter Beech, after many years of decreasingly effective steroid treatments, is trying homeopathy. Beech also claims to be a sceptic of alternative medicine.<br /><br />First things first. Is there any evidence that homeopathy is effective for treatment of eczema? A quick search on Pubmed showed <a href="http://www.ncbi.nlm.nih.gov/pubmed/12919110?ordinalpos=3&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DefaultReportPanel.Pubmed_RVDocSum">this</a> to be the most recent (2003) relevant review. Incidentally, Pubmed is a freely accessible service, unlike many of the journal databases used by academics, and anyone with an internet connection can do the same thing as me and come up with the presently available evidence in a couple of minutes. Here's what the summary says:<br /><blockquote>Alternative methods are commonly used in patients with dermatologic diseases, with homeopathy being one of the most common. Homeopathy was developed by Samuel Hahnemann (1755–1843) and is based on the law of similars and the law of infinitesimals. It is a regulatory therapy where high dilutions of particular compounds are thought to induce a counterreaction in the organism. In dermatology, homeopathy is often used in atopic dermatitis, other forms of eczema, psoriasis, and many other conditions. To date, however, there is no convincing evidence for a therapeutic effect. There are only a few controlled trials, most of them with negative results. The few studies with positive results have not been reproduced. Acceptance by the patient seems largely based on counseling and emotional care rather than on objective responses to the homeopathic drugs.</blockquote>Not particularly convincing, is it?<br /><br />Beech is at least unlikely to do himself any harm by trying homeopathy. However, real harm can result when people use homeopathy and avoid real medicine, as <a href="http://www.canberratimes.com.au/news/national/national/general/parents-guilty-of-manslaughter-over-daughters-eczema-death/1533293.aspx">this tragic case</a> in Australia shows. Nine-month old Gloria Thomas died after her eczema allowed an infection to get out of control. Her parents had been treating her with homeopathy instead of real medicine: they were later convicted of manslaughter by gross criminal negligence.<br /><br />But the main reason that I have a bad feeling about this, is that even if Beech gets better, it won't prove anything about homeopathy (and the same goes if he doesn't see any improvement, to be fair). This is essentially an uncontrolled case report, and there would be no way of showing that any improvement resulted from the homeopathy, rather than, say, the natural cyclicity of eczema, or the placebo effect. 
Whatever happens to Beech, it isn't going to trump the evidence from the most recent systematic review of the evidence from clinical trials. So Beech's experiment cannot add anything to what we laughingly call the "debate" over the efficacy of homeopathy.<br /><br />Still, good luck to him. I hope he does get some relief from his symptoms, but if he does the chances of it having anything to do with homeopathy are vanishingly small.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com3tag:blogger.com,1999:blog-4477385342066275897.post-53797275411073001402009-07-29T10:53:00.003+01:002009-07-29T11:04:02.345+01:00Simon Singh: "Beware the spinal trap"<h1></h1>As many of you will no doubt be aware, science writer Simon Singh is being sued by the British Chiropractic Association (BCA). The BCA objected to a piece that Singh published in the Guardian, pointing out that chiropractic has little or no evidence base, particularly when used to treat anything other than back problems, and that it can be dangerous. After Singh was sued, the Guardian took down the article pending the legal proceedings. As part of their campaign to "Keep the libel laws out of science", <a href="http://www.senseaboutscience.org.uk/index.php/site/project/333/">Sense About Science</a> is asking bloggers to re-publish an edited version of the article, because scientific arguments should not be settled in the courtroom. I'm glad to do my bit.<br /><br /><h1>Beware the spinal trap</h1> <h2>Some practitioners claim it is a cure-all, but the research suggests chiropractic therapy has mixed results - and can even be lethal, says <b>Simon Singh</b>.</h2> <p>You might be surprised to know that the founder of chiropractic therapy, Daniel David Palmer, wrote that '99% of all diseases are caused by displaced vertebrae'. In the 1860s, Palmer began to develop his theory that the spine was involved in almost every illness because the spinal cord connects the brain to the rest of the body. Therefore any misalignment could cause a problem in distant parts of the body.</p> <p>In fact, Palmer's first chiropractic intervention supposedly cured a man who had been profoundly deaf for 17 years. His second treatment was equally strange, because he claimed that he treated a patient with heart trouble by correcting a displaced vertebra.</p> <p>You might think that modern chiropractors restrict themselves to treating back problems, but in fact some still possess quite wacky ideas. The fundamentalists argue that they can cure anything, including helping treat children with colic, sleeping and feeding problems, frequent ear infections, asthma and prolonged crying - even though there is not a jot of evidence. </p> <p>I can confidently label these assertions as utter nonsense because I have co-authored a book about alternative medicine with the world's first professor of complementary medicine, Edzard Ernst. He learned chiropractic techniques himself and used them as a doctor. This is when he began to see the need for some critical evaluation. Among other projects, he examined the evidence from 70 trials exploring the benefits of chiropractic therapy in conditions unrelated to the back. He found no evidence to suggest that chiropractors could treat any such conditions.</p> <p>But what about chiropractic in the context of treating back problems? Manipulating the spine can cure some problems, but results are mixed. 
To be fair, conventional approaches, such as physiotherapy, also struggle to treat back problems with any consistency. Nevertheless, conventional therapy is still preferable because of the serious dangers associated with chiropractic.</p> <p>In 2001, a systematic review of five studies revealed that roughly half of all chiropractic patients experience temporary adverse effects, such as pain, numbness, stiffness, dizziness and headaches. These are relatively minor effects, but the frequency is very high, and this has to be weighed against the limited benefit offered by chiropractors.</p> <p>More worryingly, the hallmark technique of the chiropractor, known as high-velocity, low-amplitude thrust, carries much more significant risks. This involves pushing joints beyond their natural range of motion by applying a short, sharp force. Although this is a safe procedure for most patients, others can suffer dislocations and fractures.</p> <p>Worse still, manipulation of the neck can damage the vertebral arteries, which supply blood to the brain. So-called vertebral dissection can ultimately cut off the blood supply, which in turn can lead to a stroke and even death. Because there is usually a delay between the vertebral dissection and the blockage of blood to the brain, the link between chiropractic and strokes went unnoticed for many years. Recently, however, it has been possible to identify cases where spinal manipulation has certainly been the cause of vertebral dissection.</p> <p>Laurie Mathiason was a 20-year-old Canadian waitress who visited a chiropractor 21 times between 1997 and 1998 to relieve her low-back pain. On her penultimate visit she complained of stiffness in her neck. That evening she began dropping plates at the restaurant, so she returned to the chiropractor. As the chiropractor manipulated her neck, Mathiason began to cry, her eyes started to roll, she foamed at the mouth and her body began to convulse. She was rushed to hospital, slipped into a coma and died three days later. At the inquest, the coroner declared: 'Laurie died of a ruptured vertebral artery, which occurred in association with a chiropractic manipulation of the neck.'</p> <p>This case is not unique. In Canada alone there have been several other women who have died after receiving chiropractic therapy, and Edzard Ernst has identified about 700 cases of serious complications among the medical literature. This should be a major concern for health officials, particularly as under-reporting will mean that the actual number of cases is much higher. If spinal manipulation were a drug with such serious adverse effects and so little demonstrable benefit, then it would almost certainly have been taken off the market.</p> <p><b>Simon Singh</b> is a science writer in London and the co-author, with Edzard Ernst, of <i>Trick or Treatment? Alternative Medicine on Trial</i>. This is an edited version of an article published in <i>The Guardian</i> for which Singh is being personally sued for libel by the British Chiropractic Association.</p>Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com4tag:blogger.com,1999:blog-4477385342066275897.post-34024534603658325872009-07-24T09:28:00.003+01:002009-07-24T10:36:56.039+01:00On peer reviewIf you spend a lot of time hanging around at sceptical websites, you'll be aware that when evidence is demanded in support of a claim, it is usually demanded in the form of a peer-reviewed journal article. 
The suggestion is that peer review provides some sort of check on quality: any peer reviewed paper should meet a certain minimum standard. But is this really the case?<br /><br />First of all, what does it mean to say that an article is peer reviewed? If an article is peer reviewed, it has been checked over by scientists who work in a similar field to the submitted article. A submitted article will usually be sent to two or three reviewers, who will each read the paper and submit a report to the journal editor. The editor will then decide whether the article merits publication.<br /><br />On the face of it, this would seem to imply that certain standards are being met. But there is some evidence that this isn't necessarily the case. For example, this <a href="http://www.trialsjournal.com/content/10/1/46">article</a> (found via Ben Goldacre's miniblog) suggests that in Chinese journals, only 5.9-7.7% of supposed randomised controlled trials reported in peer reviewed articles had adequate procedures for randomisation. A lack of adequate randomisation means that there is a good chance of introducing bias into your trial, and it ought to be one of the first things a reviewer would check. While the article specifically addresses trials published in Chinese journals, I don't think there's any compelling reason to think that things are dramatically different in what we laughingly call the west. Anecdotally, anyone who spends time wading through journals as part of their day job will be able to come up with several examples of utterly dreadful papers that should never have been published. This is without looking at pseudojournals, such as those that concentrate on complementary and alternative medicine, where articles on quackery are peer reviewed by other quacks.<br /><br />So, if peer review can't tell you whether a trial described as randomised is in fact randomised, what can it tell you? Does it really act as any kind of guarantee of minimum quality? I would suggest not.<br /><br />That is not to say that peer review is useless as it stands. In my fairly limited experience, papers that I have submitted have always been significantly improved by peer review. But surely there's a way of making peer review "fit for purpose", to use the current jargon?<br /><br />This post was prompted by a <a href="http://www.badscience.net/forum/viewtopic.php?f=3&t=9892">discussion</a> at the Bad Science forum, where the idea of applying industrial-style quality assurance to journal articles was raised. This would mean that there would be some sort of checklist that a reviewer would have to go through, and this would be checked to make sure it had been done. It would not be much use to do this informally; there would need to be some formal way of doing it.<br /><br />In fact, this is not too far from what already happens, in many cases. I've just got a review back in which the reviewers have answered a series of yes/no questions (in addition to their more detailed written comments). For example, "Are the interpretations and conclusions sound and supported by the interpretation provided?", and "Is the paper logically organised?". For the example of trials published in Chinese journals above, there could be a question like "Is the methodology appropriate for testing the specified hypotheses?". Again, there would have to be some checks that this had been adequately done; this is really what the journal editor should be doing. At present, I think the role of the editor is often too weak. 
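<br /><br />For what it's worth, the "formal check" part is trivial to automate once the review form is structured data rather than free text. Here's a toy sketch in Python; the questions are paraphrased examples of the kind quoted above, not any journal's actual checklist.<br /><br /><pre>
# A toy machine-checkable review form. The items are paraphrased examples,
# not any real journal's checklist.
YES_NO_ITEMS = [
    "methodology_appropriate",   # e.g. is the randomisation adequate?
    "conclusions_supported",
    "logically_organised",
]

def review_is_complete(review):
    """The editor's formal check: every yes/no item needs an explicit answer."""
    return all(review.get(item) in (True, False) for item in YES_NO_ITEMS)

submitted = {
    "methodology_appropriate": True,
    "conclusions_supported": False,   # reviewer has flagged a problem
    "comments": "The statistics in section 3 need reworking.",
}
print(review_is_complete(submitted))  # False: 'logically_organised' was never answered
</pre><br />The hard part is not the checking; it's giving editors the time and the mandate to act on it.<br /><br />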
They do little more than send out and receive reviews. This is probably not surprising, given that editors are usually working more or less voluntarily and tend to have plenty of other things that they need to do. And it is not always the case: there are many excellent editors who make a lot of effort to engage with the papers they are handling, and the reviewer's comments on them. But if the role of editors were beefed up, such that they spend time formally checking that reviews have been carried out adequately, then peer review might actually provide the quality guarantee that we seem to think it should.<br /><br />That might require actually paying editors and reviewers for their time. This would be a fairly radical step, but if it led to a professionalisation of the journal reviewing and editing process it would probably be a good thing. And if it led to a reduction in the number, and an increase in the quality, of papers published, that would not be a bad thing either.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com5tag:blogger.com,1999:blog-4477385342066275897.post-53126840887213093652009-07-01T14:46:00.004+01:002009-07-02T15:54:46.353+01:00I get my name in the Veterinary RecordThis is somewhat old news, but I haven't had chance to write about it before. To add to the publications I have in <span style="font-style: italic;">Homeopathy</span>, I now have one (as third author) in the <a href="http://veterinaryrecord.bvapublications.com/"><span style="font-style: italic;">Veterinary Record</span></a>. This is starting to get silly; I'm supposed to be a geologist.<br /><br />Perhaps unsurprisingly, this is related to a terrible homeopathy study [Hill et al., <span style=""><em></em></span><span style="font-style: italic;">The Veterinary Record</span> 164:364-370], this time on the treatment of skin conditions in dogs. It's another example of homeopaths continuing to do small, badly designed studies, when plenty of large and properly conducted studies, and systematic reviews and meta-analyses of those studies, show that homeopathy doesn't work. The letter I am involved in is one of three letters that were published criticising the study: they can be found, with the author's reply, at <span style="font-style: italic;">The Veterinary Record</span> 164: 634-636 [apologies for the lack of links: there's no DOI for these that I can find]. There is also an excellent discussion of the paper, and some of the responses to it, over at <a href="http://forums.randi.org/showthread.php?t=138044">JREF</a>.<br /><br />The design in this study is truly extraordinary. Initially, 20 dogs with skin problems were recruited to the study. All were treated with individualised remedies by a homeopath. In 15 cases, the dog owners reported no improvement. In 5 cases, the owners reported a significant improvement. Not looking good for homeopathy so far. Still, the five improved dogs were said to have responded well to homeopathy, and went on to phase 2, which was a proper randomised and blinded placebo-controlled trial. Unfortunately, one dog had to be euthanased before the trial could happen, and another dog's skin problems had resolved completely after the first stage, leaving only three dogs in phase 2. Supposedly, those dogs did better with homeopathy than with placebo, thus justifying, as ever, "further research".<br /><br />This is possibly the easiest study to criticise that I've ever seen. 
Put simply, the first phase lacks a control group, so improvements cannot be attributed to homeopathy. There is simply no evidence that the five dogs recruited to phase 2 actually responded to homeopathy, rather than just improved spontaneously. Then the second phase of the trial includes only three dogs. There is no way to interpret the results of such a tiny, underpowered study. Those are the main problems, but there are others. For example, all the dogs were on some kind of conventional medication, so that cannot be ruled out as contributing to any improvement.<br /><br />The only reasonable conclusion from the study is that there is no strong evidence that homeopathy did anything for the dogs in the trial. But the paper concludes that the improvement seen in the five dogs (which again cannot be attributed to homeopathy on the basis of this study) is enough to justify further research. No doubt the paper will also be spammed all over the internet by the likes of Dana Ullman, as proof positive that homeopathy works. Hopefully the letter I'm a co-author on, along with the two other letters critical of the study that were published, will go some way to addressing that. The signs are not good, though. The original Hill et al. paper included the statement that "Different homeopathic remedies and different dilutions of the same remedy have been distinguished from each other using Raman and infrared spectroscopy, even though all should contain nothing but water", with a reference to "Rao and others, 2007" [In fact, Rao et al. did not even claim that infrared spectroscopy showed any difference]. Regular readers <a href="http://hawk-handsaw.blogspot.com/2008/01/answer-came-there-none.html">will know</a> that Rao and colleagues did nothing of the sort, and that to describe their paper as "discredited" would be something of an understatement. In the world of homeopathy, discredited papers never die. They are just recycled for use with audiences who don't know that they've been discredited. I suspect that this one will be no different.<br /><br />As an aside, my favourite part of this study is that the "constitutional signs" of each of the dogs, as used by the homeopath to pick a remedy, are listed [Table 2 of the paper]. For dog number 16, these are listed as:<br /><br /><blockquote>Affectionate<br />Fears thunderstorms<br />Clairvoyant<br />Grief<br />Desires chicken; oranges aggravate</blockquote><br /><br />A clairvoyant dog! And this was published in a respected veterinary journal.Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com11tag:blogger.com,1999:blog-4477385342066275897.post-12441341984035152182009-06-24T11:24:00.009+01:002009-06-25T13:38:04.679+01:00What do bibliometrics actually add to research evaluation?Firstly, the reason that I haven't posted in an age is that I've been in Norway, interpreting seismic data for the new project I'm working on. Hopefully I can now post a bit more regularly, as I should actually be in Manchester for a few consecutive weeks, for the first time this year.<br /><br />Regular readers will know that I <a href="http://hawk-handsaw.blogspot.com/2009/01/does-ref-add-up-to-good-science.html">like</a> to <a href="http://hawk-handsaw.blogspot.com/2007/06/metrication.html">whinge</a> about the increasing use of statistical indicators (<a href="http://en.wikipedia.org/wiki/Bibliometrics">bibliometrics</a>) to evaluate research performance. 
Previously in England, research performance has been evaluated by the Research Assessment Exercise, a cumbersome and involved system based around expert peer review of research. Currently, HEFCE (the body that decides how scarce research funding is allocated to English universities) is looking into replacing this with a cumbersome and involved system based around bibliometrics and "light-touch" peer review. To this end, a pilot exercise using bibliometrics and including 22 universities has been underway. An <a href="http://www.hefce.ac.uk/pubs/rdreports/2009/rd13_09/">interim report</a> on the pilot is now available.<br /><br />Essentially, three approaches have been evaluated:<br /><br />i) Based on institutional addresses: here papers are assigned to a university based on the addresses of the authors, as stated in the paper. This would be cheap to do, as it would need no input from the universities.<br /><br />ii) Based on all papers published by authors. In this approach, all papers written by staff selected for the 2008 RAE were identified. This requires a lot of data to be collected.<br /><br />iii) Based on selected papers published by authors. Again, this approach used all staff selected for the 2008 RAE, but only used the most cited papers.<br /><br />For each approach, the exercise was conducted twice: once using the <a href="http://en.wikipedia.org/wiki/Web_of_science">Web of Science</a> (WoS) database, and once using <a href="http://en.wikipedia.org/wiki/Scopus">Scopus</a>. The results were then compared with those from the 2008 RAE.<br /><br />Well, the results are interesting, if you like this sort of thing. It is clear that the results can be very different from those provided by the RAE, whichever method was used, although the "selected papers" method tends to give the closest results. It is also notable that the two different databases give different results, sometimes radically so; Scopus seems to consistently give higher values than WoS. Workers in some fields complained that they made more use of other databases, such as the <a href="http://en.wikipedia.org/wiki/ArXiv">arXiv</a> or <a href="http://en.wikipedia.org/wiki/Google_Scholar">Google Scholar</a> (it's worth noting that the favoured databases are proprietary, while the arXiv and Google Scholar are publicly accessible).<br /><br />In general, the institutions involved in the pilot preferred the "selected papers" method, but it seems that none of the methods produced particularly convincing results. According to the report (paras 66 and 67):<br /><br /><blockquote>In many disciplines (particularly in medicine, biological and physical sciences and psychology), members reported that the ‘top 6’ model (which looked at the most highly cited papers only) generally produced reasonable results, but with a number of significant discrepancies. In other disciplines (particularly in the social sciences and mathematics) the results were less credible, and in some disciplines (such as health sciences, engineering and computer science) there was a more mixed picture. Members generally reported that the other two models (which looked at ‘all papers’) did not generally produce credible results or provide sufficient differentiation.</blockquote><br />One of the questions here is what is meant by "reasonable" or "credible" results. The institutions involved in the pilot seem to assume that the best results are the ones that most closely match those of the RAE. 
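<br /><br />To make the competing models concrete, here is a toy sketch with invented citation counts. It is emphatically not HEFCE's methodology (the real exercise pulls citation data from Web of Science or Scopus and involves a lot of normalisation and cleaning that is not attempted here); it just shows what "all papers" versus "selected papers" boils down to for a handful of hypothetical researchers.<br /><br /><pre>
from statistics import mean

# Invented citation counts, one list per hypothetical researcher.
citations = {
    "researcher_a": [45, 30, 22, 18, 12, 9, 5, 2],
    "researcher_b": [120, 8, 6, 3, 1],
    "researcher_c": [15, 14, 13, 12, 11, 10, 9, 8, 7],
}

def selected_papers_score(counts, n=6):
    """Mean citations over the n most-cited papers (the 'top 6' model)."""
    return mean(sorted(counts, reverse=True)[:n])

def all_papers_score(counts):
    """Mean citations over everything the researcher published."""
    return mean(counts)

for name, counts in citations.items():
    print(f"{name}: top-6 = {selected_papers_score(counts):.1f}, "
          f"all papers = {all_papers_score(counts):.1f}")

# A department's indicator would then be some aggregate of its staff scores.
print("unit top-6 average:",
      round(mean(selected_papers_score(c) for c in citations.values()), 1))
</pre><br />Even at this toy scale you can see why the models disagree: researcher_b's single highly cited paper dominates the top-6 score but is diluted in the all-papers one. None of which settles the question the pilot institutions were really asking, namely how closely the numbers reproduce the RAE rankings.<br /><br />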
I suspect this is because the large universities that currently receive the lion's share of research funding are not going to support any system that significantly changes the status quo.<br /><br />The institutions involved in the pilot seem to think that bibliometrics would be most useful when used in conjunction with expert peer review. From the report:<br /><br /><blockquote>Members discussed whether the benefits of using bibliometrics would outweigh the costs. Some found this difficult to answer given limited knowledge about the costs. Nevertheless there was broad agreement that overall the benefits would outweigh the costs – assuming a selective approach. For institutions this would involve a similar level of burden to the RAE and any additional cost of using bibliometrics would be largely absorbed by internal management within institutions. For panels, some members felt that bibliometrics might involve additional work (for example in resolving differences between panel judgements and citation scores); others felt that they could be used to increase sampling and reduce panels’ workloads.</blockquote><br />According to the interim report, the "best" results (i.e. those most closely matching the results of the RAE) were obtained using a methodology that will have a similar administrative burden to the RAE. Even then the results had "significant discrepancies". So, if the aim of the pilot was to get similar results to the RAE with a lesser administrative burden, it seems that the pilot exercise has failed on both counts. And if bibliometrics don't seem to add much to the process, it's worth considering what they might take away. For which, see my <a href="http://hawk-handsaw.blogspot.com/2009/01/does-ref-add-up-to-good-science.html">previous post</a>...Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com2tag:blogger.com,1999:blog-4477385342066275897.post-66905743563702106942009-05-05T12:45:00.004+01:002009-05-05T13:02:51.359+01:00The usual excuse for not postingYes, I've been hanging about in Egypt again, looking at rocks for my day job. 
In the absence of any bad science related stuff, here are some pretty pictures.<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVKuN58WZrXUfmtBWrOKi_M4CSh1Evi0y2w03woXN5ZgJIplut1-bAfWi0oIMAw0jGGv33T_U-Aqu9KEu1SvNDrGELIZbX7jEHRJxVNXNMOPpn85k7WMH7xqXYCVQpaAq7NQ7LjvLGxuFH/s1600-h/DSCF1080.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 300px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVKuN58WZrXUfmtBWrOKi_M4CSh1Evi0y2w03woXN5ZgJIplut1-bAfWi0oIMAw0jGGv33T_U-Aqu9KEu1SvNDrGELIZbX7jEHRJxVNXNMOPpn85k7WMH7xqXYCVQpaAq7NQ7LjvLGxuFH/s400/DSCF1080.JPG" alt="" id="BLOGGER_PHOTO_ID_5332306309910399170" border="0" /></a>El Tor, the town where we stayed, at sunset.<br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizzHkcY8Y0mBk_92qerZqv2Kli-pShkUSbX8DzQCRjn35BwACqeg1VU3IMYKMl6T9yk4sf5Fu0LgnbSjh3XkKm2DE-8qCU5tydNnvZW-Xvqx43buVxFDpUjE81Tp2HEHqB7c-zS1gVkE2X/s1600-h/DSCF1434.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 300px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizzHkcY8Y0mBk_92qerZqv2Kli-pShkUSbX8DzQCRjn35BwACqeg1VU3IMYKMl6T9yk4sf5Fu0LgnbSjh3XkKm2DE-8qCU5tydNnvZW-Xvqx43buVxFDpUjE81Tp2HEHqB7c-zS1gVkE2X/s400/DSCF1434.JPG" alt="" id="BLOGGER_PHOTO_ID_5332306307071826514" border="0" /></a>Downtown El Tor.<br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcsWF5HD77zrzXQwRMh_vZrr3PgUFP_o04x7qIYmbl6rQm0p7oM8A9KlLVB5PBpBBOl3HfZowyoILnSM06XoQLQBRimOQfcNm5DtiC48gNeFMN4U1BqV50SWAIqfewuJysHSHJBD8NfIlQ/s1600-h/DSCF1357.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 300px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcsWF5HD77zrzXQwRMh_vZrr3PgUFP_o04x7qIYmbl6rQm0p7oM8A9KlLVB5PBpBBOl3HfZowyoILnSM06XoQLQBRimOQfcNm5DtiC48gNeFMN4U1BqV50SWAIqfewuJysHSHJBD8NfIlQ/s400/DSCF1357.JPG" alt="" id="BLOGGER_PHOTO_ID_5332306313998284274" border="0" /></a>Fossilised burrows in Miocene syn-rift rocks. There's a lot of this in the study area, which usually means that structures that would help to understand the depositional environment are obscured.<br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgklZrFyC7KpzvxXjl6Xe4XVEHe5wvxK0js0hF9HZNHySghEeN2dTEXpUKYF_imjlKidTaXv-oOICT5VaSK7NVGMRA3gcy3SBVTVkBUNfP5haTHXxNlMK9_rrhmlBPR3m_YzcCw1HPU0oYV/s1600-h/DSCF1421.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 300px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgklZrFyC7KpzvxXjl6Xe4XVEHe5wvxK0js0hF9HZNHySghEeN2dTEXpUKYF_imjlKidTaXv-oOICT5VaSK7NVGMRA3gcy3SBVTVkBUNfP5haTHXxNlMK9_rrhmlBPR3m_YzcCw1HPU0oYV/s400/DSCF1421.JPG" alt="" id="BLOGGER_PHOTO_ID_5332306301696179698" border="0" /></a>Part of the field area. To the right are rocks of the Precambrian basement. 
In the foreground, a major normal fault separates those Precambrian rocks from Nubian sandstone, Eocene carbonate units, and Miocene syn-rift calc-arenites.<br /><br /><br /></div>Paul Wilsonhttp://www.blogger.com/profile/18101626906004768474noreply@blogger.com5