Anyone who has ever been involved in applying for academic or research grants knows how time-consuming preparing the application can be. In this thought-provoking piece Nicholas Graves (QUT), Adrian Barnett (QUT) and Philip Clarke (Uni of Syd) speculate on whether those thousands of hours of highly trained academics’ time could be put to better use than estimating the exact number of ball-point pens required in the third year of the research project and providing this information in 12-point Arial, in triplicate, etc. This article originally appeared in The Conversation.
Many researchers suspect that chance plays a part when their applications for scientific research funding are assessed and discussed in peer review.
Now this hunch has been supported by an analysis of the National Health and Medical Research Council (NHMRC) assessments used to determine funding outcomes for the 2009 project grant scheme worth over $350 million.
The research paper – Funding grant proposals for scientific research: retrospective analysis of scores by members of grant review panel – published in the British Medical Journal (BMJ) today, set out to quantify the randomness and cost of the funding process. Both were found to be significant and raise questions about the efficacy of how research funding is allocated in the current system.
Project grants submitted to the NHMRC go through an arduous peer review process before they are finally funded or rejected.
Each proposal is given a score by members of a grant review panel and the average of these scores falls either above or below a funding line.
The study found that when variability among panel members’ scores is accounted for, only the top 9% of applications always score above the line, the bottom 61% never reach the line and 29% are sometimes above and sometimes below.
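The logic of this kind of analysis can be sketched with a small bootstrap simulation. Everything here is invented for illustration – the scores, the seven-member panels, the 1–7 scale and the funding line are assumptions, not the NHMRC data or the BMJ paper’s actual method:

```python
import random

random.seed(1)

# Hypothetical proposals, each scored by 7 panel members on a 1-7 scale.
# These numbers are made up for illustration only.
proposals = {
    "strong":     [6.5, 6.8, 6.2, 6.9, 6.6, 6.4, 6.7],
    "borderline": [5.1, 4.6, 5.4, 4.8, 5.2, 4.5, 5.0],
    "weak":       [3.2, 2.9, 3.5, 3.0, 2.8, 3.3, 3.1],
}
FUNDING_LINE = 5.0   # assumed cut-off on the mean score
N_RESAMPLES = 10_000

def classify(scores):
    """Resample the panel's scores and see how often the mean clears the line."""
    above = 0
    for _ in range(N_RESAMPLES):
        sample = [random.choice(scores) for _ in scores]
        if sum(sample) / len(sample) >= FUNDING_LINE:
            above += 1
    share = above / N_RESAMPLES
    if share == 1.0:
        return "always funded"
    if share == 0.0:
        return "never funded"
    return "sometimes funded"

for name, scores in proposals.items():
    print(name, "->", classify(scores))
```

The “sometimes funded” category falls out naturally: once panel-member variability is resampled, a borderline proposal’s mean score lands above the line on some draws and below it on others.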
So what influences variation in the scores among panel members? A US journalist observing a peer review panel for the American Cancer Society reported that discussions about grants were affected by individual beliefs about whether the research was important.
He found there was pressure to find any flaw in a grant proposal in order to exclude it from further consideration, and a small number of researchers could influence others by either supporting or talking down a proposal.
A delicate balance
In an effort to reduce bias, the NHMRC excludes panel members who know the applicants from any discussion of the grant. But these people tend to be experts in the area and those left behind to assess the proposals’ quality might be relatively unfamiliar with the science.
It’s likely that both very strong and unfundable applications will be identified reliably, but many proposals occupy a congested middle ground and reliably ranking these grants is much harder.
These difficult-to-separate proposals are very sensitive to a small change in score, which could be enough to tip them over or push them under the funding line.
A further analysis found the proportion of “sometimes funded” grants was different across the 45 discipline-based panels. The most reliable panel allocated 19% and the least reliable panel allocated 49% of proposals to the “sometimes funded” category.
The total cost of the project grant process – from application to allocation – is estimated at $49 million, with 85% of the cost incurred by the applicants themselves.
Researchers are required to provide up to 70 pages of information and include a nine-page research plan, which is the meat of the peer review process.
The median time spent preparing the paperwork was 22 days with a maximum of 65 days. In total, 180 years of research time was used up preparing grant applications in 2009.
But the process is not just onerous for the applicants. Members of grant review panels process up to 100 proposals in four days.
Not only are long applications costly to prepare, they may also reduce the quality of peer review because panel members are overwhelmed by the volume of paperwork.
But the variation between panel members’ scores addresses just one aspect of the grant allocation process.
As grants are discussed by the panels before scores are awarded, it would be nice to know how these discussions influence the scoring process.
We are planning further research to compare one set of grants assessed by two independent panels to estimate inter-panel variability. This will then be compared with the assessment of another panel that receives a shorter version of the grant proposal.
A journal style system where applicants submit a proposal to one of many expert sub-editors who reject outright or send applications for peer review might be tested.
The sub-editors could then rank the proposals taking into account the peer review comments. Randomness may be difficult to reduce but a lower cost system with similar reliability might be easier to achieve.
Another approach is not to try to minimise randomness, but explicitly accept it.
This could be achieved by first identifying the very strong and the unfundable proposals and then entering the hard-to-discriminate grants into a ballot.
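A minimal sketch of such a two-stage process might look like the following. The score thresholds, the budget and the example scores are all invented for illustration; nothing here reflects actual NHMRC rules:

```python
import random

random.seed(42)

# Invented mean scores on a 1-7 scale for seven hypothetical proposals.
scores = {"A": 6.8, "B": 6.4, "C": 5.2, "D": 5.0, "E": 4.9, "F": 3.1, "G": 2.4}
CLEARLY_FUNDABLE = 6.0    # assumed: fund outright at or above this score
CLEARLY_UNFUNDABLE = 4.0  # assumed: reject outright below this score
BUDGET = 4                # assumed: total number of grants that can be funded

# Stage 1: identify the very strong and the unfundable proposals.
funded = [p for p, s in scores.items() if s >= CLEARLY_FUNDABLE]
rejected = [p for p, s in scores.items() if s < CLEARLY_UNFUNDABLE]
middle = [p for p in scores if p not in funded and p not in rejected]

# Stage 2: remaining places go to the hard-to-discriminate group by ballot.
places_left = BUDGET - len(funded)
funded += random.sample(middle, min(places_left, len(middle)))

print("funded:", sorted(funded))
```

The point of the ballot is not to pretend the middle group can be ranked reliably, but to make the role of chance explicit rather than hiding it inside small score differences.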
Regulation will be needed to stop massive increases in the number of applications made.
A radical reform would be to reward research teams with funding based on their actual performance rather than promises made in grant applications.
There would have to be some seed funding to get junior researchers started, but anecdotal evidence suggests this already happens, with successful groups writing applications for research they’ve already completed.
It is important to fund the best research possible and translate new knowledge into health services. So finding a funding system that is reliable, fair and cost-effective is a useful field of enquiry.