The National Health and Medical Research Council (NHMRC) funds most of the health and medical research conducted in Australia – and hence pays the salaries of most of our health and medical researchers.
Recently, a decline in the level of NHMRC funding has meant that fewer grant applications are being funded. This, together with an assessment system that allows bias and chance to influence the outcomes of applications, is leading to a demoralised and depleted medical research workforce, according to Professor Tony Blakely, from the Melbourne School of Population and Global Health at the University of Melbourne.
In the article below, originally published in The Conversation, Blakely discusses how the NHMRC could improve the way it awards grants, in order to support our health and medical research workforce and improve the value that their work delivers to the Australian community.
Tony Blakely writes:
Most health research in Australia is funded by the National Health and Medical Research Council (NHMRC), which distributes around $800 million each year through competitive grant schemes. An additional $650 million a year is funded via the Medical Research Future Fund, but this focuses more on big-picture “missions” than researcher-initiated projects.
Ten years ago, around 20 percent of applications for NHMRC funding were successful. Now, only about 10–15 percent are approved.
Over the same ten-year period, NHMRC funding has stayed flat while prices and population have increased. In inflation-adjusted and per capita terms, the NHMRC funding available has fallen by 30 percent.
As growing numbers of researchers compete for dwindling real NHMRC funding, research risks becoming “a high-status gig economy”. To fix it, we need to spend more on research – and we need to spend it smarter.
Increased funding needed
To keep pace with other countries, and to keep health research a viable career, Australia first of all needs to increase the total amount of research funding.
Between 2008 and 2010, Australia matched the average among OECD countries of investing 2.2 percent of GDP in research and development. More recently, Australia’s spending has fallen to 1.8 percent, while the OECD average has risen to 2.7 percent.
When as few as one in ten applications is funded, there is a big element of chance in who succeeds.
Think of it like this: applications are ranked in order from best to worst, and then funded from the top down. If a successful application’s ranking is within, say, five percentage points of the funding cut-off, it might well have missed out if the assessment process were run again – because the process is always somewhat subjective and will never produce exactly the same results twice.
So five percent of the applications are “lucky” to get funding. When only 10 percent of applications get funding, that means half of the successful ones were lucky. But if there is more money to go around and 20 percent of applicants are funded, the lucky five percent are only a quarter of the successful applicants.
This is a simplistic explanation, but you can see that the lower the percentage of grants funded, the more of a lottery it becomes.
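To make this concrete, here is a rough Python simulation of the argument: applications with an underlying “true” quality are scored with some random error, the top slice is funded, and we check how many of the funded grants would be funded again if the process were re-run. The number of applications, the noise level and the code itself are illustrative assumptions, not NHMRC figures.

```python
import numpy as np

# Hypothetical illustration only: 1,000 applications, each with an
# unobservable "true score", assessed with random noise.
rng = np.random.default_rng(0)
n_apps = 1000
true_quality = rng.normal(size=n_apps)

def funded_set(funding_rate, noise_sd=1.0):
    """One assessment round: observed score = true score + noise; fund the top slice."""
    observed = true_quality + rng.normal(scale=noise_sd, size=n_apps)
    n_funded = int(funding_rate * n_apps)
    return set(np.argsort(observed)[-n_funded:])

for rate in (0.10, 0.20):
    run_a, run_b = funded_set(rate), funded_set(rate)
    overlap = len(run_a & run_b) / len(run_a)
    print(f"funding rate {rate:.0%}: {overlap:.0%} of funded grants funded in both runs")
```

Under assumptions like these, the overlap between two runs is noticeably lower when only 10 percent of applications are funded than when 20 percent are.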
This increasing element of “luck” is demoralising for Australia’s research workforce, depleting the ranks of academics and fuelling a brain drain.
The ‘application-centric’ model
As well as increasing total funding, we need to look at how the NHMRC allocates these precious funds.
In the past five years, the NHMRC has moved to a system called “application-centric” funding. Five (or so) reviewers are selected for each grant and asked to score it independently.
There are usually no panels to discuss and score applications, as there were under the previous system.
The advantages of application-centric assessment include (hopefully) getting the best experts on a particular grant to assess it, and a less logistically challenging task for the NHMRC (convening panels is hard work and time-consuming).
Disadvantages of application-centric assessment
However, application-centric assessment has disadvantages.
First, assessor reviews are not subject to any scrutiny. In a panel system, differences of opinion and errors can be managed through discussion.
Second, many assessors will be working in a “grey zone”. If you are expert in the area of a proposal, and not already working with the applicants, you are likely to be competing with them for funding. This may result in unconscious bias or even deliberate manipulation of scores.
And third, there is simple “noise”. Imagine each score an assessor gives is made up of two components: the “true score” an application would receive on some unobservable gold standard assessment, plus or minus some “noise” or random error. That noise is probably half or more of the current variation between assessor scores.
So how do we reduce the influence of both assessor bias and simple “noise”?
First, assessor scores need to be “standardised” or “normalised”. This means rescaling all assessors’ scores to have the same mean (standardisation) or same mean and standard deviation (normalisation).
This is a no-brainer. You can use a pretty simple Excel model (I have done it) to show that this would substantially reduce the noise.
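As a rough sketch of what this rescaling looks like in practice – the assessor names, scores and common scale below are made up for illustration, and the original modelling was done in a simple spreadsheet rather than in code:

```python
import numpy as np

# Each assessor's raw scores across the applications they reviewed (hypothetical data).
raw_scores = {
    "assessor_A": np.array([6.0, 5.5, 4.0, 6.5]),   # a generous scorer
    "assessor_B": np.array([3.0, 2.5, 4.0, 3.5]),   # a harsh scorer
}

target_mean, target_sd = 4.0, 1.0   # common scale chosen for illustration

def standardise(scores):
    """Shift scores so every assessor has the same mean."""
    return scores - scores.mean() + target_mean

def normalise(scores):
    """Shift and rescale so every assessor has the same mean and standard deviation."""
    return (scores - scores.mean()) / scores.std() * target_sd + target_mean

for name, scores in raw_scores.items():
    print(name,
          "standardised:", standardise(scores).round(2),
          "normalised:", normalise(scores).round(2))
```

After rescaling, a generous scorer and a harsh scorer no longer drag applications up or down simply because of who happened to assess them.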
Second, the NHMRC could use other statistical tools to reduce both bias and noise.
One method would be to take the average ranking of applications across five methods:
- with the raw scores (i.e. as done now)
- with standardised scores
- with normalised scores
- dropping the lowest score for each application
- dropping the highest score for each application.
The last two “drop one score” methods aim to remove the influence of potentially biased assessors.
The applications that make the cutoff under all five methods are funded. Those that fall below the threshold on every method are not funded.
Applications that make the cut under some methods but not others could be sent out for further scrutiny – or the NHMRC could judge them by their average rank across the five methods.
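To illustrate how averaging ranks across the five methods could work, here is a minimal sketch using a small, made-up score matrix. It is an illustration of the idea only, not the NHMRC’s actual procedure, and it assumes every assessor scores every application, which is a simplification.

```python
import numpy as np

# Hypothetical scores: rows are applications, columns are assessors.
scores = np.array([
    [6.0, 5.0, 5.5, 4.5, 6.0],
    [4.0, 6.5, 5.0, 5.5, 5.0],
    [5.5, 4.5, 6.0, 5.0, 4.0],
    [3.5, 4.0, 4.5, 3.0, 5.5],
])

def rank_desc(values):
    """Rank applications from best (1) to worst by their summary score."""
    order = np.argsort(-values)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(values) + 1)
    return ranks

def standardise(cols):
    return cols - cols.mean(axis=0)          # same mean for every assessor

def normalise(cols):
    return (cols - cols.mean(axis=0)) / cols.std(axis=0)   # same mean and spread

n_assessors = scores.shape[1]
methods = {
    "raw":          rank_desc(scores.mean(axis=1)),
    "standardised": rank_desc(standardise(scores).mean(axis=1)),
    "normalised":   rank_desc(normalise(scores).mean(axis=1)),
    "drop lowest":  rank_desc((scores.sum(axis=1) - scores.min(axis=1)) / (n_assessors - 1)),
    "drop highest": rank_desc((scores.sum(axis=1) - scores.max(axis=1)) / (n_assessors - 1)),
}

average_rank = np.mean(list(methods.values()), axis=0)
print("average rank across the five methods:", average_rank)
```

Applications ranked above the cutoff under every method are clear fund decisions; those below it under every method are clear rejections; the rest are the borderline cases that warrant a closer look.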
This proposal won’t fix the problem with the total amount of funding available, but it would make the system fairer and less open to game-playing.
A fairer system
Researchers know any funding system contains an element of chance. One study of Australian researchers found they would be happy with a funding system that, if run twice in parallel, would see at least 75 percent of the funded grants funded in both runs.
I strongly suspect (and have modelled) that the current NHMRC system is achieving well below this 75 percent repeatability target.
Further improvements to the NHMRC system are possible and needed. Assessors could provide comments, as well as scores, to applicants. Better training for assessors would also help. And the biggest interdisciplinary grants should really be assessed by panels.
No funding system will be perfect. And when funding rates are low, those imperfections stand out more. But, at the moment, we are neither making the system as robust as we can nor sufficiently guarding against wayward scoring that goes under the radar.
See here for Croakey’s archive of stories on medical research