Via the mighty Ben Goldacre's miniblog, here's a study that's just been published in the BMJ on the subject of publication bias. Publication bias typically occurs when studies with positive results are preferentially published over studies with negative results. There are many reasons why this might happen; drug companies don't want to publish negative trials of their drugs for obvious reasons, but there is also a tendency for journals to be uninterested in publishing negative trials, and for busy authors to be reluctant to write them up. There's nothing sexy about a negative trial.
In the BMJ study, Santiago Moreno and colleagues look at anti-depressants. This is a good area to look at, because of the availability of the data submitted to the US Food and Drug Administration (FDA). Drug companies are legally required to submit ALL their trial data to the FDA, so the authors treat the FDA dataset as unbiased (although not necessarily complete). This unbiased dataset can then be compared with the data available in published journal articles.
The comparison is done with our old friend the funnel plot. This plots the standard error of each trial against the size of the effect that the trial found. The authors of the BMJ study added a new twist by overlaying contours of statistical significance on the funnel plot, so that you can see at a glance where each study falls.
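To make that concrete, here's a rough sketch in Python (numpy and matplotlib) of how you might draw a contour-enhanced funnel plot. The effect sizes and standard errors below are invented for illustration; they are not the trial data from the paper.

    import numpy as np
    import matplotlib.pyplot as plt

    effects = np.array([0.35, 0.20, 0.48, 0.15, 0.40, 0.30])  # hypothetical effect sizes
    ses = np.array([0.10, 0.15, 0.20, 0.08, 0.12, 0.18])      # hypothetical standard errors

    se_grid = np.linspace(0.001, ses.max() * 1.2, 100)
    fig, ax = plt.subplots()

    # A trial is significant at level alpha when |effect| > z * SE, so the
    # significance contours are straight lines through the origin. Shade the
    # bands between them: darker = further from significance.
    for z, shade in [(2.576, '0.9'), (1.96, '0.75'), (1.645, '0.6')]:  # 1%, 5%, 10%
        ax.fill_betweenx(se_grid, -z * se_grid, z * se_grid, color=shade)

    ax.scatter(effects, ses, color='black')
    ax.invert_yaxis()  # convention: the most precise trials sit at the top
    ax.set_xlabel('Effect size')
    ax.set_ylabel('Standard error')
    plt.show()

If publication bias is at work, the points cluster on the significant side of the 5% contour instead of filling the funnel symmetrically.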
The results are dramatic. Of the 74 trials registered with the FDA, 23 were not published. In the FDA data, there is a wide spread of results across the contours marking 1%, 5% and 10% levels of significance. When only the published data are considered, there is a clear cut-off at the contour for the 5% significance level, which is typically used in clinical trials to establish statistical significance. That strongly suggests that publication bias is a serious problem in the set of published trials: trials with statistically non-significant results have been systematically excluded.
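(As a worked example: a trial whose effect estimate has a standard error of 0.10 sits exactly on the two-sided 5% contour when the effect is 1.96 × 0.10 ≈ 0.20; any smaller effect falls on the non-significant side of the line.)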
What this means is that the published literature over-estimates the benefit of anti-depressants. It doesn't show that anti-depressants don't work: meta-analysis of the FDA data still shows a beneficial effect. The point is that the real benefit (as shown by the FDA data) is less than the benefit you would expect if you looked only at the published literature. Anti-depressants work less well than you might think, but they still work.
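For anyone who hasn't met it, the pooling behind such a meta-analysis is simple: weight each trial by the inverse of its variance, so that precise trials count for more. Here's a minimal fixed-effect sketch in the same vein, again with invented numbers rather than the FDA data:

    import numpy as np

    effects = np.array([0.35, 0.20, 0.48, 0.15])  # hypothetical per-trial effects
    ses = np.array([0.10, 0.15, 0.20, 0.08])      # hypothetical standard errors

    weights = 1.0 / ses**2                         # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))

    # 95% confidence interval for the pooled effect
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"pooled effect = {pooled:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")

Drop the smaller, non-significant effects from the arrays and the pooled estimate climbs: exactly the over-estimate that publication bias produces.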
The problem is, of course, that most of the time we only have the published literature to work with. So the BMJ paper adds a useful visual technique for identifying publication bias as a likely problem, even when we don't have access to a bias-free dataset for comparison. There is no doubt that biases exist in published data; the response from medical science, as seen here, is to try to identify and account for them. Regular readers will know that the response of CAM research is to manipulate the data in order to pretend that the problem doesn't exist.
5 comments:
The odd thing about antidepressants is that the effects seem to get smaller with every new trial. It's already accepted that they are ineffective in mild/moderate depression, and even severe depression is looking dodgier, as in Kirsch et al.
I don't suppose 'depression' is a well-defined homogeneous condition, but not much can be done about that at the moment.
It may well be that they are not helpful at all. Not one of pharmacology's greatest moments.
David brings up the elephant in the room for psychiatric research. Without illnesses that are reasonably well defined biologically, it is impossible to know whether studies are comparable, or even whether individuals within the same study are comparable. Given the heterogeneous nature of depressive illness in particular, it's quite possible that any effect arises only in a subset of the clinical population. It would be helpful if researchers studied 'symptom sets' rather than diagnoses, as has been suggested for schizophrenia. That way there is at least a hope that the treatment is being tested on individuals with the same underlying syndrome.
Actually, Kirsch et al found that antidepressants do work in "severe" depression - nothing dodgy about that. Except for their attempt to put a skeptical spin on it by claiming that the larger effect in severe depression was merely down to a "reduced placebo effect" with increasing severity - they were universally derided for that.
The BMJ paper picks up on the fact that we use an arbitrary p=0.05 threshold for "positive results" and we can use this to examine publication bias. This is a point so obvious that it hasn't been noticed until now - excellent work.
The pretty obvious conclusion for clinical research is that drug companies should not only be legally required to submit details of all trials to the regulator (and all the pertinent data - I believe Kirsch et al still had trouble using all the FDA-submitted trials), but that the data should be made publicly available.
Public interest outweighs commercial concerns about secrecy because otherwise publication bias will systematically misrepresent trial outcomes. It is a simple proposal that is simply politically unpalatable. Markets such as Europe and the US are essential to pharmaceutical companies and they would have to comply - but it is easier for them to lobby and pay off politicians.
Interestingly - when we re-analysed Kirsch's data we found that Kirsch's dodgy analysis method had underestimated the effect of antidepressants more than publication bias had overestimated it.