Friday, 24 July 2009

On peer review

If you spend a lot of time hanging around at sceptical websites, you'll be aware that when evidence is demanded in support of a claim, it is usually demanded in the form of a peer-reviewed journal article. The suggestion is that peer review provides some sort of check on quality: any peer reviewed paper should meet a certain minimum standard. But is this really the case?

First of all, what does it mean to say that an article is peer reviewed? If an article is peer reviewed, it has been checked over by scientists who work in a similar field to the submitted article. A submitted article will usually be sent to two or three reviewers, who will each read the paper and submit a report to the journal editor. The editor will then decide whether the article merits publication.

On the face of it, this would seem to imply that certain standards are being met. But there is some evidence that this isn't necessarily the case. For example, this article (found via Ben Goldacre's miniblog) suggests that in Chinese journals, only 5.9-7.7% of supposed randomised controlled trials reported in peer reviewed articles had adequate procedures for randomisation. A lack of adequate randomisation means that there is a good chance of introducing bias into your trial, and it ought to be one of the first things a reviewer would check. While the article specifically addresses trials published in Chinese journals, I don't think there's any compelling reason to think that things are dramatically different in what we laughingly call the west. Anecdotally, anyone who spends time wading through journals as part of their day job will be able to come up with several examples of utterly dreadful papers that should never have been published. This is without looking at pseudojournals, such as those that concentrate on complementary and alternative medicine, where articles on quackery are peer reviewed by other quacks.

So, if peer review can't tell you whether a trial described as randomised is in fact randomised, what can it tell you? Does it really act as any kind of guarantee of minimum quality? I would suggest not.

That is not to say that peer review is useless as it stands. In my fairly limited experience, papers that I have submitted have always been significantly improved by peer review. But surely there's a way of making peer review "fit for purpose", to use the current jargon?

This post was prompted by a discussion at the Bad Science forum, where the idea of applying industrial-style quality assurance to journal articles was raised. Reviewers would work through a checklist, and the journal would then verify that the checklist had been completed. Doing this informally would be of little use; there would need to be some formal procedure.

In fact, this is not too far from what already happens, in many cases. I've just got a review back in which the reviewers have answered a series of yes/no questions (in addition to their more detailed written comments). For example, "Are the interpretations and conclusions sound and supported by the interpretation provided?", and "Is the paper logically organised?". For the example of trials published in Chinese journals above, there could be a question like "Is the methodology appropriate for testing the specified hypotheses?". Again, there would have to be some checks that this had been adequately done; this is really what the journal editor should be doing.

At present, I think the role of the editor is often too weak. They do little more than send out and receive reviews. This is probably not surprising, given that editors are usually working more or less voluntarily and tend to have plenty of other things that they need to do. And it is not always the case: there are many excellent editors who make a lot of effort to engage with the papers they are handling, and the reviewers' comments on them. But if the role of editors were beefed up, such that they spend time formally checking that reviews have been carried out adequately, then peer review might actually provide the quality guarantee that we seem to think it should.

That might require actually paying editors and reviewers for their time. This would be a fairly radical step, but if it led to a professionalisation of the journal reviewing and editing process it would probably be a good thing. And if it led to a reduction in the number, and an increase in the quality, of papers published, that would not be a bad thing either.

5 comments:

Dr Aust said...

Interesting point, Paul.

You might be interested in a somewhat different view that recently appeared in the THE.

David Colquhoun said...

Yes interesting, but the problem seems to me that just about any paper, however poor, gets published in some 'peer reviewed' journal. Much time is wasted by people gradually moving down the hierarchy of journals until they find one that will take it. At the moment I'm about half way to persuading myself that we don't need journals at all. Put the results, and raw data, on a web site and open the comments. It would save an enormous amount of time and money. Could it work?

Paul Wilson said...

Dr Aust, cheers for that link. I should probably subscribe to the THE, or at least read it more regularly.

Some good points there, but it seems that Richard Bateman's experience is somewhat different to mine, perhaps because he is publishing in more prestigious journals where editing is taken more seriously. I have never found editing to be obtrusive: it has always either been useful, or almost non-existent. Perhaps this will change as I continue in my career...

Also true that a box-ticking approach is to be avoided. But I would argue that where Bateman says "Allies say 'yea' and detractors say 'nay', typically with minimal justification", that's actually an argument for stronger editors who can prevail upon reviewers to properly justify their reviews.

Of course, what I've suggested is really a very modest proposal for tweaking the current system. David's idea is much more radical. But I don't see why it couldn't work. You would need website space, but research institutions would take care of that. Perhaps you could then have aggregator sites that allow people to find relevant research. It's all still going to have to be indexed somehow so that we can find stuff. So, there's still going to have to be a decision at some level as to what is worth indexing, and so on.

In a similar vein, I'd like to see all journals have a web comments form for all their articles, similar to what the BMJ does with its rapid responses. There could be some level of moderation so that they aren't totally swamped with drivel.

Dr Aust said...

Agree with you about the comments thread, Paul. Apparently all the Nature group journals are going to this system, though I don't know if they have actually implemented it yet.

Joe D said...

david: yes, it almost certainly can work. perhaps not exactly just dumping the raw data "on a web site", but by things like open lab notebooks, mandatory database deposition of raw data, and so on. obviously, there is no technological barrier to it, just a psychological one. the world has changed and people are going to take a few years to catch up yet.

no doubt this will, once again, be a hot topic at the science online conference in august.