Excellence vs. Soundness: If we want breakthrough science, we need to fund ‘normal science’

I recently read “Excellence R Us”: University research and the fetishisation of excellence by Samuel Moore, Cameron Neylon, Martin Paul Eve, Daniel Paul O’Donnell & Damian Pattinson. This excellent (!) article, together with Germany’s upcoming third round of the ‘Excellence Strategy’, prompted the following remarks on ‘Excellence’.

So much has already been written on excellence. (In)famously, in German, there is Richard Münch’s 500-page tome on the “academic elite”. In it he characterized the concept of excellence as a social construct for the distribution of research funds and derided the verbiage produced under its banner. He castigated each and every holy cow in Germany’s scientific landscape, including the Deutsche Forschungsgemeinschaft (DFG, Germany’s leading funding organization). Published in 2007, shortly after the German Excellence Initiative was launched for the first time, Münch’s book filled the representatives of what he disparaged as ‘cartels, monopolies and oligarchies’ with indignation, and a mighty flurry rustled through the feuilletons of the republic.

Today, on the eve of the third round of the Excellence Initiative (now: Excellence Strategy), only the old-timers remember that, and that is precisely why I am going to tackle the topic once again, and fundamentally. And because I believe there is a direct connection between the much-touted crisis of the (life) sciences and the rhetoric of excellence.

What do we mean by ‘excellence’? Isn’t that an easy question? It means the pinnacle, the elite, the extraordinary, the exceptional, etc. A closer look, however, reveals that the concept is void of content. In the world of science we find excellent biologists, physicists, experts in German studies, sociologists. That they are excellent or extraordinary means only that they are far better than others in their fields, but by what measure? We only learn that they are the few at the extreme of a Gaussian distribution. These few are considered worthy of reward: with a professorship, with more research funds, or with entire initiatives. And that is not only a German phenomenon. The English, for example, have their Research Excellence Framework (REF). Whole universities receive their means relative to their scientific excellence. And you will say: Quite rightly! And I will say: Think again!

The first question to ask is who actually sorts researchers, projects and universities into excellent and non-excellent. And according to what criteria could this be done? Jack Stilgoe formulated it in the Guardian: “’Excellence’ is an old-fashioned word appealing to an old-fashioned ideal. ‘Excellence’ tells us nothing about how important the science is, and everything about who makes the selection.” Because this is how it goes: the search for excellence will succeed according to the criteria that were set for the search. In biomedicine, these criteria are fixed: publications in a handful of select journals. Or, in more practical terms, the most abstract of all metrics, the Journal Impact Factor (JIF). What is excellent? Publications in journals with a high impact factor. How do we select excellent researchers and their projects? By counting publications with a high JIF. How does the excellence of a project manifest itself? Through publications with a high JIF. For those of you who find this self-referential loop too simplistic: you can add a few more criteria, and the loop will just get bigger. What is excellent? Plenty of external support, preferably from the DFG (or the NIH, MRC, etc.). How do you get a lot of third-party funding? By publishing in journals with a high JIF, and so on and so forth.

But isn’t a top publication a good predictor of future pioneering results? Unfortunately not, because we, the scientists who rated the paper as publishable in peer review, find it difficult to judge the significance and future relevance of research. Many studies have demonstrated this. For example, the evaluation of NIH applications (to be exact, the “percentile” score) correlates very poorly with the relevance of the funded projects as extrapolated from citations. [Just a footnote here: with DFG applications you cannot investigate this connection at all, because the DFG does not give access to the relevant information.] Most striking is our inability to recognize projects or publications of truly high relevance. A considerable number of papers with “rejected” histories have been awarded the Nobel Prize years later. “Breakthrough findings” are not advertised in funding programs or hyped up to excellence. Usually they just happen, when “chance favors the prepared mind”, as Louis Pasteur put it.
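The consequence of such a weak score–impact correlation can be made concrete with a toy simulation. All numbers below are assumptions for illustration (the noise level is chosen so that scores correlate only weakly with merit; it is not a measured value):

```python
import random

random.seed(1)

# Hypothetical model: each proposal has a true "relevance" (its later
# impact), which reviewers see only through heavy noise.
N = 10_000       # proposals submitted
FUNDED = 1_000   # payline: the top 10% by review score get funded

true_merit = [random.gauss(0, 1) for _ in range(N)]
noise_sd = 3.0   # assumed: review noise dwarfs the signal
scores = [m + random.gauss(0, noise_sd) for m in true_merit]

# Fund the top-scoring 10% of proposals
ranked = sorted(range(N), key=lambda i: scores[i], reverse=True)
funded = set(ranked[:FUNDED])

# How many of the truly top-10% proposals did review actually catch?
top_merit = set(sorted(range(N), key=lambda i: true_merit[i], reverse=True)[:FUNDED])
hits = len(funded & top_merit)
print(f"truly-top proposals funded: {hits} of {FUNDED}")
print(f"random funding would catch: ~{FUNDED * FUNDED // N}")
```

Under these assumed parameters, score-based selection catches only a modest fraction of the genuinely best proposals: better than a pure lottery, but nowhere near the sure-footed identification of excellence that the rhetoric implies.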

Thus it seems that the sensitivity and specificity of reviewing and assessing top research are exceedingly low. Some of the false negatives will perhaps be discovered years later; the false positives will just drain system resources. Moreover, the rhetoric around excellence has other corrosive effects. It promotes narratives that exaggerate the importance and effect sizes of one’s own results. It rewards “shortcuts”, in the form of “more flexible” analysis and publication, on the way to supposedly spectacular results. This explains the inflation of significant p-values and effect sizes, the allegedly imminent breakthroughs in the therapy of diseases, etc. Some researchers even fall prey to the temptation of misconduct to achieve guaranteed and immediate excellent results.

When the drive for excellence entices researchers into questionable scientific practice, it obstructs “normal science”. Normal science, for Thomas Samuel Kuhn, means the daily, unspectacular theorizing, observing and experimenting by researchers which leads to knowledge and its consolidation. Normal science is simply given an occasional thorough shaking up by a “paradigm shift” and then set up anew. Normal science does not lead to spectacular findings (“stories”); it is based on competent methods, high rigor and transparency. It is replicable. In a word: everything that might be swept under the table in the quest for excellence. At the same time, normal science is the very substrate of “breakthrough science”, the paradigm shift. The latter, however, cannot be called up at will; it happens serendipitously and cannot be smoked out with a call for applications. So even if it sounds paradoxical: if we want top research, we must fund normal science! Anyone who funds ‘excellence’ only gets excellence, with all its effects and side effects. That includes, of course, top publications, which in and of themselves constitute no value… apart from boosting researchers, initiatives, universities, and state excellence ratings to the top.

Selection according to excellence criteria also leads to funder homophilia: the tendency to support scientists doing research similar to one’s own. And it leads to a concentration of resources (the Matthew effect: “he who has, gets”), usually to the disadvantage of non-excellent areas of research, such as normal science. The rhetoric of excellence is inherently regressive: it bases decisions on past excellence. That reduces the chance of funding something that is really new, while rigor, creativity and diversity fall through the net.

The rhetoric of excellence does, however, have one essential function that at first glance seems irreplaceable. It provides science with a criterion for the distribution of scarce state research funds, and with arguments for increasing research funding that the man on the street can understand. Have a look: “With us you are supporting excellence!” And when the politicians hear those golden words, they’ll jump on stage too. How drab it would be to call for funding for “normal science”!

In Germany the stage is set again for an ‘Excellence Show’, but wouldn’t it be time to change the production, or at least the set? We could ever so gently slip in some “sound science” rhetoric to stand beside the excellence rhetoric. For this, the English notion of “soundness” is well suited, for it denotes conclusiveness, validity, reliability, dependability. A more pluralistic starting point for the distribution of research funds would be to fund sound science. It incorporates the many qualities that constitute (good) science. Can we evaluate “soundness”, or is that just as “empty” a “signifier” as “excellence”? Team science and cooperation, open science, transparency, adherence to scientific and ethical standards, replicability: all this and more can not only be named, but to a certain extent even quantified. These would then be the criteria for broad funding. No additional finances are required, because less excellence would be funded, with the side effect that the funders would be buying more “tickets” in the lottery; and funding research without predictive criteria for breakthrough science is indeed a lottery. She who has more tickets wins more often. This resolves the paradox of why funding less excellence can produce more of it: the top research, the new therapies, the paradigm shifts rise out of a larger number of high-quality projects in normal science. But would only a fool think that feasible?
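The lottery argument can be sketched numerically. A minimal simulation, under assumed parameters (a fixed budget, an equal breakthrough chance for every sound project, and zero predictive power for the selectors):

```python
import random

random.seed(7)

# Toy model of the "more lottery tickets" argument. All numbers are
# assumptions for illustration, not empirical estimates.
BREAKTHROUGH_P = 0.01   # chance that any one sound project yields a breakthrough
TRIALS = 20_000         # simulated funding rounds

def breakthroughs(n_projects: int) -> float:
    """Average number of breakthroughs per round when the budget funds n projects."""
    total = 0
    for _ in range(TRIALS):
        total += sum(random.random() < BREAKTHROUGH_P for _ in range(n_projects))
    return total / TRIALS

# Same budget, two strategies: 10 big "excellence" projects at 10 units
# each, versus 100 small sound-science projects at 1 unit each. Since
# breakthroughs are unpredictable, selection adds nothing.
few = breakthroughs(10)
many = breakthroughs(100)

print(f"10 big projects:   {few:.2f} breakthroughs per round on average")
print(f"100 small projects: {many:.2f} breakthroughs per round on average")
```

With no way to predict the winners, expected breakthroughs simply scale with the number of tickets bought: spreading the same budget over ten times as many sound projects yields, on average, ten times the breakthroughs.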

 

A German version of this post has been published as part of my monthly column in Laborjournal: http://www.laborjournal-archiv.de/epaper/LJ_17_09/22/
