Funding decisions in the UK are currently guided by the Haldane Principle, set out in the early 1900s. Under the principle, researchers themselves are best placed to decide how research funds are allocated, a judgement exercised through peer review.
From an expertise and independence perspective, this appears to be a compelling argument and one that feels intuitively right. However, there is very little empirical evidence to support it.
Peer review in grant funding is effective at differentiating between good and bad research proposals. It isn’t as effective at teasing apart the really good proposals from the relatively good ones.
Who gets funded depends on who happens to be making the decision at the time, so the outcomes can appear arbitrary. And if we look at borderline cases – researchers who just obtained funding versus those who just missed out – there is no clear relationship between funding outcome and future success.
In some instances, there is no difference in scientific achievement (for example, publications, independence and leadership) between researchers who received funding and those who didn't. In other cases, researchers who weren't funded initially went on to do better than those who were.
This raises questions for funders, including:
- how do you effectively choose the ‘best’ research ideas?
- how do you pick the right reviewers?
- are reviewers being asked to make predictions about the future, and are they qualified to do so?