‘Unfortunately, we have to inform you that after thorough review [YOUR FAVORITE FUNDING ORGANISATION] must reject your application.’ Most of us know this sentence all too well, as most rejection letters for our grant applications contain it in some form. From a purely statistical point of view, we receive such letters quite frequently: in German biomedicine, funding rates are between 5 and 25%, depending on funder and program. Upon receiving a rejection we often feel personally offended. After all, we put our best ideas on paper, often included preliminary results, proposed experiments we had in part already conducted, embellished the document with a lot of prose, flattered the most important potential reviewers with strategically placed citations, and so on. And then the rejection! So we have to start over from the beginning, rewrite everything, and submit again, perhaps to another funding agency. This is how we spend a substantial fraction of our days at the office, when we are not reviewing the applications of our colleagues. By some estimates, scientists spend 40% of their time writing or reviewing applications.
We are all familiar with the failings of the grant reviewing system. In short: it drains the resources of all parties involved (including the funding institutions); the reviewers are often only marginally qualified; mainstream projects have a higher chance of being selected than innovative, high-risk projects; we completely lack criteria for predicting the future success of projects; old boys’ networks and conflicts of interest abound; the Matthew effect leads to funding of the already funded; preference is given to established researchers; and so on. But we scientists behave like sheep. In spite of omnipresent criticism, especially among colleagues after a few beers, we keep turning the hamster wheel. ‘That is how the system works, and we do not have a better one.’
True, but there are a number of alternatives. While they are very plausible, none has been tested on a large scale. But can they really be more inefficient and wasteful than current procedures? There is room for experimentation! Isn’t experimentation the core business of scientists? Why don’t we apply our skills to the way we distribute funding? I wrote about one such alternative some time ago: basic funding for all scientists, combined with peer-to-peer funding, in which scientists pass on a certain share of their basic funding to other researchers of their choice. This system makes a lot of sense, but it is a rather radical solution and will probably never be put to the test. Another idea has a greater chance of being implemented, although it sounds even crazier: the funding lottery!
At least three major funding agencies worldwide are currently experimenting with schemes in which randomness plays a key role in the funding decision: the ‘Explorer Grants’ of the Health Research Council of New Zealand, the ‘Seed Projects’ of New Zealand’s Science for Technological Innovation and, you might be surprised, in Germany the Volkswagen Foundation with its ‘Experiment!’ funding line. True to the motto: ‘Research funding is already a lottery, so let’s make it a proper one right away.’
This is less crazy than it sounds, and the idea has a history dating back to the 19th century. There are also solid theory and simulations suggesting that the scheme might work. But how does such a funding lottery work? In its purest form, quite simply: scientists submit applications, which are subjected to an initial check. Whatever obviously makes no sense or does not meet the formal specifications is triaged right away. Everything that is left then goes into the bowl, and as many applications as can be funded are drawn. This can, of course, be further refined: for example, with criteria for preselection. Here, a minimum set of requirements is applied that covers both the applicant’s scientific track record and the hypotheses and methods pursued. However, these should be kept relatively broad and open, otherwise unorthodox ideas and hidden skills of the applicants would be eliminated at the outset. One could also directly fund the top-ranked proposals of the initial review and submit only the ‘middle range’ to the lottery. Alternatively, a weighted lottery is conceivable: the higher an application was ranked in the initial review, the more ‘tickets’ it receives, thus increasing its chance of being drawn. In any case, a lottery can make do with relatively short applications. The lottery should be staged publicly, the details of the process made transparent, and the criteria as well as the results of the initial review made openly accessible.
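The weighted variant is easy to make precise. Here is a minimal sketch in Python, purely for illustration: the proposal names, ticket counts, and budget are invented, and real funders would of course add auditability and a public, verifiable source of randomness.

```python
import random

def weighted_lottery(proposals, budget):
    """Draw as many proposals as the budget allows, without replacement.

    `proposals` is a list of (name, tickets) pairs: the higher a proposal
    was ranked in the initial review, the more tickets it holds, and the
    higher its chance of being drawn.
    """
    pool = dict(proposals)
    funded = []
    while pool and len(funded) < budget:
        names = list(pool)
        weights = [pool[name] for name in names]
        winner = random.choices(names, weights=weights, k=1)[0]
        funded.append(winner)
        del pool[winner]  # each proposal can be funded at most once
    return funded

# Hypothetical review outcome: ranks 1-5 translated into 5 down to 1 tickets.
proposals = [("A", 5), ("B", 4), ("C", 3), ("D", 2), ("E", 1)]
print(weighted_lottery(proposals, budget=2))
```

Setting all ticket counts equal recovers the pure lottery; skewing them more strongly approaches the ordinary ranked funding line.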
But what’s the point, really? Paradoxically, a fairer selection of funding recipients than peer review! Ranking in regular peer review arbitrarily draws a line between funded and rejected projects. Every member of a review panel knows the process: we rank the applications according to the grades given in the review, then draw a line below which no more funding is available, because we have spent all the money on the projects above the line. Do the projects below and above the line really differ in their potential to be successful? Often the members of the review panel talk their heads off to push proposals up or down. In fact, after unanimously selecting a top few and triaging the applications with the lowest scores, reviewers spend most of their time discussing the proposals between those two categories.
Afterwards, reviewers leave the panel with the feeling that they have done a good and fair job. But there is no evidence whatsoever that, despite their honest efforts, they have produced a ‘better’ selection, that is, chosen proposals that will fare better than the ones they rejected. Meta-research has unequivocally shown that the peer review process, no matter what kind, is neither predictive nor consistent. It does not predict the potential and future success of funded projects. This is not so much because the reviewers often do not spend enough time on the proposals, lack the expertise to assess them, or harbour prejudices or conflicts of interest. The real problem is that there are no reliable criteria for the future success of applications. Moreover, the peer review process is not sufficiently consistent: a repetition with different but equally qualified experts does not lead to nearly the same results. Here, too, we have solid evidence from meta-research. This is why the lottery is fairer: it treats applicants equally in the absence of reliable criteria for discriminating between them in a predictive fashion. The lottery is also fair in that it gives all qualified applicants a chance, irrespective of gender, age, prestige, institution, or network. In addition, the lottery massively reduces the cumulative effort of the application and selection process, both on the part of the applicants (shorter applications) and, of course, especially on the part of the reviewers and the administrators of the funding agencies. This makes it more efficient and frees up time and resources for real science.
The most interesting advantage of the lottery, however, is that it promotes diversity and innovation. Many mainstream projects would certainly also be drawn in such a lottery, simply because there are many mainstream projects in the bowl, as most of us write mainstream applications. That is the nature of the mainstream. But because the mainstream is not positively selected in a random process, as it is in regular peer review, rare breakthrough and disruptive research has a chance of being a winner. An additional bonus is that there would be more applications in such a system. In particular, applications might be submitted which currently never see the light of day, because the applicants, anticipating rejection, do not bother to write them up in the first place.
But aren’t there numerous concerns? Wouldn’t the introduction of such a system lead to an outcry, if not among scientists, then in the public? ‘Check this out, science has turned into a casino, with our tax money! Are they out of their minds?’ And wouldn’t such a system lead to a massive increase in low-quality applications, quick-and-dirty submissions made just to have a ticket in the lottery? And wouldn’t the money won then be squandered on nonsensical activities? First of all, we should remember that the current system also produces a lot of waste, supported by public funds, and that it does so while investing considerable resources in selecting the waste. We could easily solve this problem by excluding scientists who flood the process with inferior applications. And since the process is completely transparent: who would want to make a fool of themselves by submitting bad science?
Interestingly, where the funding lottery is being tested, no outcry has occurred. The Volkswagen Foundation began testing such a lottery in 2017, and it deserves the highest praise for this! The foundation does what we scientists do: if we have a plausible and relevant hypothesis, we conduct an experiment to refute it or to gain evidence for its correctness. This is exactly what the Volkswagen Foundation does in the Experiment! program. Grants are awarded in a partially randomised procedure via a lottery, and all of this is evaluated by means of accompanying research: do projects selected by peer review differ in their course and success from those drawn in the lottery? The Deutsche Forschungsgemeinschaft (DFG), Germany’s main and most prestigious funder, on the other hand, invests its billions in project funding and has relied exclusively on the peer review procedure since its inception, without making any effort to provide evidence for its efficiency, and despite the obvious shortcomings of the current procedure. Why doesn’t the DFG test alternative selection procedures and funding formats, even on a tiny scale? Quite simply: the DFG is the self-governing organisation of science funding in Germany. We scientists ARE the DFG. And we are sheep!
A German version of this post has been published as part of my monthly column in the Laborjournal: http://www.laborjournal-archiv.de/epaper/LJ_19_04/18/index.html
Volkswagen Experiment! https://www.volkswagenstiftung.de/en/funding/our-funding-portfolio-at-a-glance/experiment
Dorothy Bishop’s blog regarding the VW-Funding Lottery: https://deevybee.blogspot.com/2018/04/should-research-funding-be-allocated-at.html
More literature on research funding lotteries:
Avin S. Policy Considerations for Random Allocation of Research Funds. RT. A Journal on Research Policy & Evaluation 1 (2018). doi:10.13130/2282-5398/8626. https://riviste.unimi.it/index.php/roars/article/view/8626
Avin S. Funding Science by Lottery. 2015 In: Mäki U., Votsis I., Ruphy S., Schurz G. (eds) Recent Developments in the Philosophy of Science: EPSA13 Helsinki. European Studies in Philosophy of Science, vol 1. Springer, Cham. https://link.springer.com/chapter/10.1007%2F978-3-319-23015-3_9
Avin S. Centralised Funding and Epistemic Exploration. Penultimate draft. Final draft forthcoming in The British Journal for the Philosophy of Science (BJPS). http://philsci-archive.pitt.edu/13369/7/Centralised-Funding-and-Epistemic-Exploration-Final-Revision-for-web.pdf
Egger M. Random selection for science funding: not such a crazy idea. 2018. Horizons https://www.horizons-mag.ch/2018/06/05/random-selection-for-science-funding-not-such-a-crazy-idea/
Fang FC, Casadevall A. 2016. Research funding: the case for a modified lottery. mBio 7(2):e00422-16. doi:10.1128/mBio.00422-16. https://mbio.asm.org/content/7/2/e00422-16.abstract
Firpo T, Smith L. A random approach to innovation. 2018. NESTA https://www.nesta.org.uk/feature/ten-predictions-2019/random-approach-innovation/
Klaus B, del Alamo D. Talent Identification at the Limits of Peer Review: an analysis of the EMBO Postdoctoral Fellowships Selection Process. bioRxiv. 2018. https://www.biorxiv.org/content/10.1101/481655v2