There are so many reasons why we can get a misleading, and often overly positive, view of the impact of treatments.
One reason is publication bias: studies reporting “positive” findings – that is, findings other than a null effect (explained in more detail here by the Cochrane Collaboration) – are more likely to be published, both by journals and then by the media, for a whole host of reasons.
According to an international collaboration of researchers who have been investigating the extent of publication bias in animal studies, most of what we know about this problem comes from studies of clinical interventions (which is why we now have clinical trial registration systems).
They say their study, just published here in PLoS Biology (Public Library of Science), is the first to show that publication bias is prevalent in basic research.
They estimate from their modelling that systematic reviews of the published results of interventions in animal models of stroke overstate their efficacy by around one-third, and that around one-sixth of experiments in this area remain unpublished.
Why does this matter? The reasons they suggest include:
• ethical concerns, first because the animals used have not contributed to the sum of human knowledge, and second because participants in clinical trials may be put at unnecessary risk if efficacy in animals has been overstated.
• distorted evidence, because if experiments have been conducted but their results are not available to reviewers, and those unpublished results differ as a group from the published ones, then both narrative and systematic reviews – and the expert opinion and public understanding that flow from them – will be biased.
The study introduced Croakey to a new term – the “file drawer problem”. At its most extreme, say the researchers, this occurs when the 95% of studies that were truly neutral (that is, which reported no significant effects) remain in the files of the investigators, the 5% of experiments that were falsely positive are published, and reviewers conclude—falsely—that the literature represents biological truth.
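To make the arithmetic of that scenario concrete, here is a minimal simulation sketch in Python (the sample sizes and the 10,000-experiment count are illustrative assumptions of mine, not figures from the study). It runs many two-group experiments in which the treatment truly does nothing, “publishes” only those that reach p < 0.05, and counts how many chance positives end up in the published record:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_experiments = 10_000   # hypothetical number of independent studies
n_per_group = 20         # hypothetical animals per group in each study

published = 0
for _ in range(n_experiments):
    # The treatment has NO true effect: both groups come from the same distribution
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treated = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p = stats.ttest_ind(control, treated)
    if p < 0.05:          # only "significant" results leave the file drawer
        published += 1

print(f"Falsely positive (and published): {published} of {n_experiments} "
      f"({100 * published / n_experiments:.1f}%)")
print("Share of published studies reporting an effect: 100%, "
      "even though the true effect is zero.")
```

By design, roughly 5% of the runs cross the significance threshold by chance alone; if only those leave the file drawer, a reviewer reading the literature sees nothing but “positive” results for a treatment that does nothing.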
One of the researchers, Professor David Howells from the Florey Neuroscience Institutes, said in a statement: “When papers go unpublished, they don’t contribute to the collective professional wisdom about whether particular medical interventions work and can even create false myths. These myths are then ‘learnt’ by undergraduate medical students and read by practising GPs who ultimately go on to treat patients accordingly. Researchers also rely on published papers to devise and conduct their own research, so may be undertaking risky or unnecessary clinical trials as a result. I can’t speak for areas other than stroke research, but our findings probably hold true for other medical research involving animal experiments and more broadly in the life sciences.”
A factor that I think contributes to this is the peer review process. Journals are often more willing to publish null or negative findings than authors might imagine. But it isn’t only authors’ pessimism that keeps such studies in the drawer – reviewers often express dissatisfaction with null findings.
In a sense you can understand why. They’ve given up their time to review a paper and it lacks the “so what?” factor. Even a negative finding provides a partial “so what?”, so this may be more of a problem for studies with no significant findings at all. In my limited experience of guest editing, reviewers’ approaches vary widely. Some limit their role to assessing whether the approach was sound and the conclusions justified by the data; others take issue with the nature of the arguments presented; others examine the legitimacy of the overall approach or the issue being studied; others interest themselves in defending their turf. In light of all this, null or negative findings can have a rough time getting published.
And you can understand why editors go along with this. They’ve had to search and beg for reviewers, and they often don’t have the time to appraise the articles in much detail themselves. They rely heavily on reviewers to determine whether an article should be accepted, revised or rejected. So even if they’re willing to publish null findings, they might not receive reviews helpful enough to say “this is a reasonably sound study, but its findings aren’t exciting”. The problems with a paper may instead be presented to them as more fundamental.
This probably isn’t the main reason for the “file drawer problem”. It’s just one of a number of factors that combine to make it daunting for people to publish their less exciting research findings.
Ben – maybe there’s scope for the Journal of Null Findings? Sounds a bit Pythonesque praps..
Or there need to be more places to publish outside of the journals, like in mathematics, where there is… oh, something – where they all publish even if it’s not in journals. So the findings are still out there, even if not picked up by the majors, and able to be shared and then worked from.
Where’s my mathematician partner when I need him?
@Croakey – there are actually several journals of non-significant findings. There’s the Journal of Articles in Support of the Null Hypothesis (http://www.jasnh.com/) in psychology, the Journal of Negative Results in Education (http://www.jnre.org/) and, perhaps most pertinent to health, the Journal of Negative Results in BioMedicine (http://www.jnrbm.com/). The last of these even has an “impact factor”.
Fantastic – thanks for this. Many a true word said in jest..