Via the mighty Ben Goldacre's miniblog, here's a study that's just been published in the BMJ on the subject of publication bias. Publication bias typically occurs when studies with positive results are preferentially published over studies with negative results. There are many reasons why this might happen; drug companies don't want to publish negative trials of their drugs for obvious reasons, but there is also a tendency for journals to be uninterested in publishing negative trials, and for busy authors to be reluctant to write them up. There's nothing sexy about a negative trial.
In the BMJ study, Santiago Moreno and colleagues look at anti-depressants. This is a good area to look at, because of the availability of the data that were submitted to the Food and Drug Administration (FDA) in the United States. Legal requirements enforce submission of ALL data to the FDA, so the authors consider the FDA dataset to be unbiased (although not necessarily complete). This unbiased dataset can then be compared with the data available in published journal articles.
The comparison is done with our old friend the funnel plot, which plots the standard error of each trial against the size of the effect that the trial found. The authors of the BMJ study added a new twist by overlaying contours of statistical significance on the plot, so you can see at a glance where each study falls relative to the conventional significance thresholds.
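To make the idea concrete, here is a minimal sketch of a contour-enhanced funnel plot in Python. This is not the authors' code, and the effect sizes and standard errors are simulated purely for illustration; the point is just how the significance contours are drawn (a trial is significant at level alpha when its effect is more than z standard errors from zero).

```python
# A minimal sketch (not the BMJ authors' code) of a contour-enhanced funnel plot,
# using simulated effect sizes and standard errors purely for illustration.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
se = rng.uniform(0.05, 0.4, size=40)      # standard error of each simulated trial
effect = rng.normal(loc=0.3, scale=se)    # observed effect; assumed true effect ~0.3

fig, ax = plt.subplots()

# Shade the non-significant region for each threshold, widest band first so the
# narrower bands overlay it: a trial is significant at level alpha (two-sided,
# null effect of zero) when |effect| > z_{1-alpha/2} * SE.
se_grid = np.linspace(0, se.max() * 1.1, 100)
for alpha, shade in [(0.01, "0.75"), (0.05, "0.85"), (0.10, "0.95")]:
    z = stats.norm.ppf(1 - alpha / 2)
    ax.fill_betweenx(se_grid, -z * se_grid, z * se_grid, color=shade, zorder=0)

ax.scatter(effect, se, s=20, color="black", zorder=1)
ax.invert_yaxis()                         # most precise trials sit at the top
ax.set_xlabel("Effect size")
ax.set_ylabel("Standard error")
ax.set_title("Contour-enhanced funnel plot (simulated data)")
plt.show()
```

In an unbiased set of trials the points scatter symmetrically across the shaded bands; if points are conspicuously missing from the non-significant region, publication bias is the obvious suspect.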
The results are dramatic. Of the 74 trials registered with the FDA, 23 were not published. In the FDA data, there is a wide spread of results across the contours marking 1%, 5% and 10% levels of significance. When only the published data are considered, there is a clear cut-off at the contour for the 5% significance level, which is typically used in clinical trials to establish statistical significance. That strongly suggests that publication bias is a serious problem in the set of published trials: trials with statistically non-significant results have been systematically excluded.
What this means is that the published literature over-estimates the benefit of anti-depressants. It doesn't show that anti-depressants don't work: meta-analysis of the FDA data still shows a beneficial effect. The point is that the real benefit (as shown by the FDA data) is less than the benefit you would expect if you looked only at the published literature. Anti-depressants work less well than you might think, but they still work.
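A toy simulation makes the over-estimation easy to see. This is not the paper's data: the true effect, the number of trials and the standard errors are all made up. It just shows how a fixed-effect, inverse-variance pooled estimate shifts upwards when the non-significant trials are censored.

```python
# A toy simulation (not the BMJ paper's data) illustrating how dropping
# non-significant trials inflates a fixed-effect, inverse-variance pooled estimate.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.3                          # assumed true treatment effect
se = rng.uniform(0.05, 0.4, size=74)       # one standard error per simulated trial
effect = rng.normal(true_effect, se)       # observed effect in each trial

def pooled(effects, ses):
    """Fixed-effect (inverse-variance) pooled estimate."""
    w = 1.0 / ses**2
    return np.sum(w * effects) / np.sum(w)

significant = np.abs(effect / se) > 1.96   # two-sided p < 0.05

print(f"All trials:                {pooled(effect, se):.3f}")
print(f"'Published' (p<0.05) only: {pooled(effect[significant], se[significant]):.3f}")
# The censored estimate comes out systematically larger than the full-data
# estimate, which is exactly what the funnel-plot asymmetry is picking up.
```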
The problem is, of course, that most of the time we only have the published literature to work with. So the BMJ paper adds a useful visual technique for identifying publication bias as a likely problem, even when we don't have access to a bias-free dataset for comparison. There is no doubt that biases exist in published data; the response from medical science, as seen here, is to try to identify and account for these biases. Regular readers will know that the response of CAM research is to manipulate the data in order to pretend that the problem doesn't exist.