There are many reasons why we can end up with a misleading, and often overly positive, view of the impact of treatments.
One reason is publication bias: studies reporting "positive" findings, i.e. a non-null effect (explained in more detail here by the Cochrane Collaboration), are more likely to be published, both by journals and then by the media, for a whole host of reasons.
According to an international collaboration of researchers who have been investigating the extent of publication bias in animal studies, most of what we know about this problem comes from studies of clinical interventions, which is why we now have clinical trial registration systems.
They say their study, just published here by the Public Library of Science (PLoS) Biology, is the first to show that publication bias is prevalent in basic research.
They estimate from their modelling that systematic reviews of the published results of interventions in animal models of stroke overstate their efficacy by around one-third, and that around one-sixth of experiments in this area remain unpublished.
Why does this matter? The reasons they suggest include:
• ethical concerns: first, because the animals used have not contributed to the sum of human knowledge; and second, because participants in clinical trials may be put at unnecessary risk if efficacy in animals has been overstated.
• if experiments have been conducted but their results are not available to reviewers, and if those unpublished results differ as a group from the published results, then both narrative and systematic reviews, and the resulting expert opinion and public understanding, will be biased.
The study introduced Croakey to a new term – the “file drawer problem”. At its most extreme, say the researchers, this occurs when the 95% of studies that were truly neutral (that is, which reported no significant effects) remain in the files of the investigators, the 5% of experiments that were falsely positive are published, and reviewers conclude—falsely—that the literature represents biological truth.
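The extreme scenario the researchers describe can be sketched numerically. The short simulation below is purely illustrative (the function name, experiment count, and random seed are assumptions, not from the study): it models many experiments of treatments with no true effect, where each has the conventional 5% chance of a false-positive result, and only those false positives are "published".

```python
import random

def simulate_file_drawer(n_experiments=10_000, alpha=0.05, seed=1):
    """Monte Carlo sketch of the 'file drawer problem' described above.

    Assumes every experiment tests a treatment with NO true effect, so each
    has probability `alpha` (the conventional 5% significance threshold) of
    producing a false-positive result. Only false positives get published;
    the truly neutral results stay in the investigators' file drawers.
    """
    random.seed(seed)
    published = sum(1 for _ in range(n_experiments) if random.random() < alpha)
    filed_away = n_experiments - published
    return published, filed_away

published, filed = simulate_file_drawer()
# Roughly 5% of experiments come out "positive" by chance alone, yet the
# published literature consists entirely of these false positives.
print(f"published (false positives): {published}")
print(f"left in the file drawer:     {filed}")
```

A reviewer who sees only the published half of this ledger would conclude the treatment works, which is exactly the bias the researchers warn about.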
One of the researchers, Professor David Howells from the Florey Neuroscience Institutes, said in a statement: “When papers go unpublished, they don’t contribute to the collective professional wisdom about whether particular medical interventions work and can even create false myths. These myths are then ‘learnt’ by undergraduate medical students and read by practising GPs who ultimately go on to treat patients accordingly. Researchers also rely on published papers to devise and conduct their own research, so may be undertaking risky or unnecessary clinical trials as a result. I can’t speak for areas other than stroke research, but our findings probably hold true for other medical research involving animal experiments and more broadly in the life sciences.”