It’s noon on Saturday, the sun is shining, and I am evaluating nine applications submitted to a call by a German science ministry (each approximately 50 pages). Fortunately, last weekend I already managed to finish evaluating four applications for an international foundation (each approximately 60 pages). Just to relax, I am also working on a proposal of my own for the Deutsche Forschungsgemeinschaft (DFG), and on one for the European Union. I have lost track of how many manuscript reviews I have agreed to do but not yet delivered. But tomorrow is Sunday; I can still get a few done. Does this agenda ring a bell? Are you one of those average scientists who, according to various independent statistics, spend 40% of their work time reviewing papers or proposals? No problem: there are 24 hours in every day, and then there are the nights, too, for doing your research.
I don’t want to complain, though, but rather make a suggestion for how to win back time for research. Interested? Careful, it is only for those with nerves of steel. I want to make the case for a scattergun approach and whet your appetite for this idea: we do not allot research money per application, but give it to everyone as basic support, with one tiny modification: a fraction of the funds received must be passed on to other researchers. You think it sounds completely crazy, like something out of the NFG (North Korean Research Community)?
First, have a look at the current system. Research money is given upon application; the selection is made by peer review. This has somehow stood the test of time, but we all know its weaknesses, the most important of which has already been implied: it devours unbelievable resources. Writing the applications, reviewing them, administrating the process. Even under favorable conditions, much more than half of all applications are rejected, and often the approval rate is significantly under 10%. When an application is rejected, the resources used up to that point are wasted. Gone are the times when Otto Warburg could write his legendary one-liner to the Emergency Association of German Science, the precursor of the DFG: “Need 10,000 Reichsmarks”. That was the complete application, and it was surely funded. Today we work away at our applications for weeks or months, proceeding tactically, partly proposing projects that have already been completed, and then dolling them up with a bit of originality. Reviewers (“peers”) then go to work on the application, which distracts them to a considerable degree from their own work. And it all has a very unpredictable end. Above all, when someone applies with something truly original, i.e. something “risky”, the application can easily fall by the wayside, and with it the innovation. The achievable gets funded, not the possible. Preference goes to the mediocre, the conventional, the incremental, compounded by the Matthew principle (“He who has, gets”, Mt 25:29). That explains why the average age of DFG applicants keeps rising. From almost every Nobel laureate you hear: “The research that brought me to Stockholm would no longer be funded today.” Newcomers and people with a truly new idea have a tough time; big shots with large work groups have it easier.
Not to mention conflicts of interest, insider relationships, or even feuds among colleagues, which can play a role in a review. We all know these problems; we like to bluster about them with colleagues over a beer, particularly when yet another of our applications has been turned down. The gut feeling one has about these rejections is described comprehensively in the literature and backed up with empirical evidence that confirms our worst fears. But could it be done any other way?
For some time now, a different process for allocating research funds has been under discussion. It works so radically differently that you could take it for a joke; indeed, you may already hear the jester’s bells jingling. But think it through, and you can appreciate the immense charm it holds.
Fundamentally, the process draws on the famous PageRank algorithm of Page and Brin, with which Google ranks websites. The idea goes like this: each scientist in the system receives “basic funding”, for example 100,000 € per year, with no conditions attached. In the USA the money could come from the NIH, in Germany from the DFG. Recipients, however, must pass on a certain portion of the grant, let us say half of it, to one or several other scientists in the system, anonymously of course, via the organization that provided the basic funding. Who would you pass the funds on to? You set your own criteria, but you would naturally have originality, quality, relevance, etc. in mind: all the things we (should) take as the basis when peer reviewing. The standard rules would apply, i.e. the recipient could not be someone from your own institution, or someone with whom you share authorship, etc. Whoever receives additional funds in this manner must in turn pass on a certain portion (again, say, 50 percent) to others. How would funds spread through such a system? In analogy with Google’s PageRank, more funds would accumulate with the scientists considered by their peers to be the most promising, wildest, most terrific, best, etc. The money saved could at least partially be forwarded to the institutions of the recipients of the basic support, which have to finance core facilities. For the funded researchers and their work groups, this would create a research infrastructure such as we can only dream of, particularly those of us who work at a university…
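The forwarding mechanism described above can be sketched in a few lines of Python. All the numbers here (20 scientists, a 50% pass-on quota, three randomly chosen beneficiaries per person) are purely illustrative assumptions; in reality each researcher would pick recipients by their own criteria:

```python
import random

random.seed(1)  # reproducible toy example

N = 20            # number of scientists (illustrative)
BASIC = 100_000   # unconditional basic funding per scientist per year
PASS_ON = 0.5     # fraction of all money received that must be forwarded

# Whom each scientist would fund. Here chosen at random; in reality the
# choice would reflect originality, quality, relevance, etc.
favorites = {
    i: random.sample([j for j in range(N) if j != i], k=3) for i in range(N)
}

kept = [BASIC * (1 - PASS_ON)] * N   # everyone keeps half the basic grant...
to_forward = [BASIC * PASS_ON] * N   # ...and must forward the other half

# Forwarded money must itself be partially forwarded, so the amounts in
# motion shrink geometrically; iterate until they are negligible.
while sum(to_forward) > 1e-6:
    incoming = [0.0] * N
    for i, amount in enumerate(to_forward):
        share = amount / len(favorites[i])
        for j in favorites[i]:
            incoming[j] += share
    kept = [k + m * (1 - PASS_ON) for k, m in zip(kept, incoming)]
    to_forward = [m * PASS_ON for m in incoming]

print(sorted(round(k) for k in kept))  # funds concentrate on popular peers
```

The total in the system stays at N × 100,000 €, but it ends up unevenly distributed, concentrated on the scientists named most often by their peers; that is the PageRank analogy.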
The entire system would be peer-to-peer funding; scientists would be supported, not projects. Bollen and colleagues have given us a very clear description of the system. These authors have also “tested” it in a simulation, namely by applying the principle to the database listing all grants and grantees of the NIH. The authors chose citations as a surrogate, since the database obviously does not contain information on how researchers would forward funding. The underlying principle: an author who is cited frequently is considered important; you would give that author money. Bollen et al. chose basic funding of 100,000 USD, roughly the average support that the NIH allots per researcher and year. The result: in the simulation, peer-to-peer funding produced a distribution of funds very similar to the one the NIH achieves via peer review! The same result, but without applications, review process, and the whole organizational burden of the current system! And if funds are distributed not by citations but on the basis of esteem for the research of others, the system would not only save massive resources but would probably also promote innovative and relevant research. The process is moreover very flexible and tunable, above all through the size of the basic funding and the quota to be forwarded to others. But wouldn’t the system lead to inside arrangements, “old boys” clubs, etc.? As if we didn’t have that baggage already! It would also be much easier in this system to detect “gaming”, because conspicuous patterns in the flow of funds could be readily identified.
Is the introduction of peer-to-peer funding realistic? Would a big funder risk swapping the current system — flawed and extremely resource-intensive as it is, but functional — for something radical and untested? In other words: Am I serious? Yes, because you can, in fact, experiment with this system parallel to the existing one. Play around with the parameters, start small and scale up slowly. That would make a fine application to the DFG! But before I do that, I still have to finish reviewing the nine applications.
A German version of this post has been published as part of my monthly column in Laborjournal.
Anyone who has read this far and is starting to wonder whether there might be something to the idea is heartily invited to look at the original papers:
This paper describes the system in full detail:
Bollen J, Crandall D, Junk D, Ding Y, Börner K. From funding agencies to scientific agency: Collective allocation of science funding as an alternative to peer review. EMBO Rep. 2014 Feb;15(2):131-3.
This paper “tests” the system in a simulation:
Bollen J, Crandall D, Junk D, et al. An efficient system to fund science: from proposal review to peer-to-peer distributions. Scientometrics. 2017;110:521. doi:10.1007/s11192-016-2110-3
This is what Retraction Watch says about peer-to-peer funding: https://www.evernote.com/shard/s21/nl/2303872/2592c704-d95b-41b6-aba9-00b4565aeab6/
NIH funding goes to the mainstream; innovative papers were not funded or found support elsewhere: Nicholson JM, Ioannidis JP. Research grants: Conform and be funded. Nature. 2012 Dec 6;492(7427):34-6. https://www.nature.com/nature/journal/v492/n7427/full/492034a.html
‘Peer review of grant proposals, far from being a reasonable way of ensuring quality by the allocation of funds, is a near disaster.’: Horrobin DF. Peer review of grant applications: a harbinger for mediocrity in clinical research? Lancet. 1996 Nov 9;348(9037):1293-5 http://www.sciencedirect.com/science/article/pii/S0140673696080294
Nicholson, J. M. (2012), Collegiality and careerism trump critical questions and bold new ideas: A student’s perspective and solution. http://onlinelibrary.wiley.com/doi/10.1002/bies.201200001/epdf
Of historic interest: a facsimile of Otto Warburg’s “research application” to the Emergency Association of German Science (Notgemeinschaft der Deutschen Wissenschaft), 1921. Those were the days! https://dirnagl.com/?s=warburg