Excellence vs. Soundness: If we want breakthrough science, we need to fund ‘normal science’

I recently read “Excellence R Us”: University research and the fetishisation of excellence by Samuel Moore, Cameron Neylon, Martin Paul Eve, Daniel Paul O’Donnell & Damian Pattinson. This excellent (!) article, together with Germany’s upcoming third round of the ‘Excellence Strategy’, prompted the following remarks on ‘Excellence’.

So much has already been written on Excellence. (In)famously, in German, there is Richard Münch’s 500-page tome on the “Academic elite”. In it he characterized the concept of excellence as a social construct for the distribution of research funds and derided the text bubbles produced under that label. He castigated every holy cow in Germany’s scientific landscape, including the Deutsche Forschungsgemeinschaft (DFG, Germany’s leading funding organization). Published in 2007, shortly after the German Excellence Initiative was first launched, Münch’s book filled the representatives of what he disparaged as ‘cartels, monopolies and oligarchies’ with indignation, and a mighty flurry rustled through the feuilletons of the republic.

Today, on the eve of the third round of the Excellence Initiative (now: Excellence Strategy), only the old-timers remember that, which is precisely why I am going to tackle the topic once again, and fundamentally so. And because I believe there is a direct connection between the much-touted crisis of the (life) sciences and the rhetoric of excellence.

What do we mean by ‘Excellence’? Isn’t that an easy question? It means the pinnacle, the elite, the extraordinary, something exceptional, etc. A closer look will tell you that the concept is void of content. In the world of science, we find excellent biologists, physicists, experts in German studies, sociologists. That they are excellent or extraordinary means only that they are far better than others in their fields, but by what measure? We only learn that they are the few at the extreme of a Gaussian distribution. These few are considered worthy of reward: with a professorship, with more research funds, or with entire initiatives. And that is not only a German phenomenon. The British, for example, have their Research Excellence Framework (REF). Whole universities receive their funds according to their scientific excellence. And you will say: Quite rightly! And I will say: Think again!

The first question to ask is who actually sorts researchers, projects and universities into excellent and non-excellent. And according to what criteria could this be done? Jack Stilgoe put it this way in the Guardian: “’Excellence’ is an old-fashioned word appealing to an old-fashioned ideal. ‘Excellence’ tells us nothing about how important the science is, and everything about who makes the selection.” Because this is how it goes: the search for excellence will be successful according to the criteria that were set for the search. In biomedicine, these criteria are fixed: publications in a handful of select journals. Or in more practical terms, the most abstract of all metrics, the Journal Impact Factor (JIF). What is excellent? Publications in journals with a high impact factor. How do we select excellent researchers and their projects? By counting publications with a high JIF. How does the excellence of a project manifest itself? Through publications with a high JIF. For those of you who find this self-referential loop too simplistic: you can add a few more criteria, and the loop will just get bigger. What is excellent? Plenty of third-party funding, preferably from the DFG (or the NIH, MRC, etc.). How do you get a lot of third-party funding? By publishing in journals with a high JIF, and so on and so forth.

But isn’t a top publication a good predictor of future pioneering results? Unfortunately not, because we, the scientists who rated the paper as publishable in peer review, find it difficult to judge the significance and future relevance of research. Many studies have demonstrated this. For example: the evaluation of NIH applications (to be exact, the “percentile” score) correlates very poorly with the relevance of the funded projects as extrapolated from citations. [Just a footnote here: with DFG applications you cannot investigate this connection at all, because the DFG does not give access to the relevant information.] What is most striking is our inability to recognize projects or publications of high relevance. Quite a number of papers with a “rejected” history went on to earn their authors a Nobel Prize years later. “Breakthrough findings” cannot be ordered up in funding programs or hyped into excellence. Usually, they just happen when “chance favors the prepared mind”, as Louis Pasteur put it.
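As an aside, here is a minimal sketch of how such a claim can be checked in principle. The dataset and column names below are made up for illustration; the point is simply that a rank correlation between review scores and later citations can be computed directly:

```python
# Hypothetical check of whether review scores predict later citation impact.
# "funded_grants.csv", "percentile_score" and "citations_5yr" are assumed names.
import pandas as pd
from scipy.stats import spearmanr

grants = pd.read_csv("funded_grants.csv")          # hypothetical dataset
rho, p = spearmanr(grants["percentile_score"], grants["citations_5yr"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")   # a rho near zero means poor predictive value
```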

Thus it seems that the sensitivity and specificity of reviewing and assessing top research are exceedingly low. Some of the false negatives will perhaps be discovered years later; the false positives simply drain resources from the system. Moreover, the rhetoric around excellence has other corrosive effects. It promotes narratives that exaggerate the importance and effect sizes of one’s own results. It rewards “shortcuts” in the form of “more flexible” analysis and publication on the way to supposedly spectacular results. This explains the inflation of significant p-values and effect sizes, the allegedly imminent breakthroughs in the therapy of diseases, etc. Some researchers even fall prey to the temptation of misconduct to achieve guaranteed and immediate excellent results. Where the drive for excellence entices researchers into questionable research practices, it obstructs “normal science”. Normal science, for Thomas Samuel Kuhn, means the daily, unspectacular theorizing, observing and experimenting by researchers, which produces and consolidates knowledge. Normal science is only occasionally given a thorough shaking up by a “paradigm shift” and then set up anew. Normal science does not lead to spectacular findings (“stories”); it is based on competent methods, high rigor and transparency. It is replicable. In a word, everything that might be swept under the table in the quest for excellence. At the same time, normal science is the very substrate for “breakthrough science”, the paradigm shift. The latter, however, cannot be called up at will; it happens serendipitously and cannot be smoked out with a call for applications. So even if it sounds paradoxical: if we want top research, we must fund normal science! Anyone who funds ‘excellence’ only gets excellence, with all its effects and side effects. That includes, of course, top publications, which in and of themselves do not constitute value… apart from boosting researchers, initiatives, universities, and state excellence ratings to the top.

Selection according to excellence criteria also leads to funder homophilia, the tendency to support scientists doing research similar to one’s own. And it leads to a concentration of resources (the Matthew effect: “to him who has, more will be given”), usually to the disadvantage of supposedly non-excellent areas of research such as normal science. The rhetoric of excellence is inherently backward-looking: it bases decisions on past excellence. That reduces the chance of funding something that is really new, while rigor, creativity and diversity fall through the net.

The rhetoric of excellence does, however, have one essential function that at first glance seems irreplaceable. It gives science a criterion for distributing scarce public research funds, and arguments for increasing research budgets that the man on the street can understand. Look: “With us you are supporting Excellence!” And when the politicians hear those golden words, they will jump onstage too. How drab it would be to call for funding of “normal science”!

In Germany the stage is set once again for an ‘Excellence Show’, but wouldn’t it be time to change the production, or at least the set? We could ever so gently slip in some “sound science” rhetoric to stand beside the rhetoric of excellence. The English notion of “soundness” is well suited for this, encompassing conclusiveness, validity, solidity and dependability. A more pluralistic starting point for distributing research funds would be to fund sound science, which incorporates the many qualities that constitute (good) science. Can we evaluate “soundness”, or is it just as “empty” a “signifier” as “Excellence”? Team science and cooperation, open science, transparency, adherence to scientific and ethical standards, replicability: all this and more can not only be named, but to a certain extent even quantified. These would then be the criteria for broad-based funding. No additional money is required, because less excellence would be funded. The side effect is that the funders would be buying more “tickets” in the lottery, and funding research without predictive criteria for breakthrough science is indeed a lottery. She who has more tickets wins more often. This resolves the apparent paradox that funding less excellence can produce more of it: top research, new therapies and paradigm shifts arise from a larger number of high-quality projects in normal science. But surely only a fool would think that feasible?
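The lottery argument can be made concrete with a toy simulation. Everything below is an illustrative assumption (the numbers, the weak link between review scores and breakthroughs, the two funding strategies); the point is only that when reviewers cannot reliably predict breakthroughs, spreading the same budget over more sound projects tends to yield more of them than concentrating it on a few ‘excellent’ ones:

```python
import numpy as np

rng = np.random.default_rng(42)

n_proposals = 10_000      # proposals submitted (made-up number)
p_breakthrough = 0.02     # baseline chance that a project yields a breakthrough (assumed)

# Latent "true quality" of each proposal; review scores track it only weakly,
# reflecting the poor predictive validity of excellence rankings.
true_quality = rng.random(n_proposals)
review_score = 0.2 * true_quality + 0.8 * rng.random(n_proposals)
breakthrough = rng.random(n_proposals) < p_breakthrough * (0.5 + true_quality)

# Strategy A ("excellence"): the budget concentrated on the 25 top-ranked
# proposals, each receiving a grant four times the normal size.
excellent = np.argsort(review_score)[-25:]

# Strategy B ("sound science"): 100 normal-sized grants to proposals that pass a
# basic soundness check, chosen by lottery.
sound = rng.choice(np.where(true_quality > 0.2)[0], size=100, replace=False)

print("breakthroughs under the excellence strategy:", int(breakthrough[excellent].sum()))
print("breakthroughs under the broad funding strategy:", int(breakthrough[sound].sum()))
```

With these arbitrary settings the broad strategy typically finds more breakthroughs simply because it holds more tickets; the effect vanishes only if review scores are assumed to predict breakthroughs well.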

 

A German version of this post has been published as part of my monthly column in Laborjournal: http://www.laborjournal-archiv.de/epaper/LJ_17_09/22/

Start your own funding organization!

It’s noon on Saturday, and the sun is shining. I am evaluating nine applications from a call for proposals by a German science ministry (each approximately 50 pages). Fortunately, last weekend I already managed to finish evaluating four applications for an international foundation (each approximately 60 pages). Just to relax, I am working on a proposal of my own for the Deutsche Forschungsgemeinschaft (DFG) and on one for the European Union. I have lost track of how many manuscript reviews I have agreed to do but not yet delivered. But tomorrow is Sunday, so I can still get a few done. Does this agenda ring a bell? Are you one of those average scientists who, according to various independent statistics, spend 40% of their working time reviewing papers or proposals? No problem: there are 24 hours in every day, and then there are still the nights for doing your research.

I don’t want to complain, though, but rather make a suggestion as to how to get more time for research. Interested? Careful, it is only for those with nerves of steel. I want to break a lance for a scattergun approach and whet your appetite for this idea: we allot research money not per proposal, but give it to everyone as basic support, with the tiny modification that a fraction of the funds received must be passed on to other researchers. You think that sounds completely crazy, like something from the NFG (North Korean Research Community)?
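For the curious, here is a minimal sketch of how such a scheme could be simulated. All figures and the random choice of recipients are assumptions for illustration; in a real scheme the recipients would reflect whose work colleagues value:

```python
import numpy as np

rng = np.random.default_rng(0)

n_researchers = 1_000
basic_grant = 100_000     # yearly basic support per researcher (assumed figure)
donate_fraction = 0.5     # fraction that must be passed on to peers (assumed)

# Each researcher keeps part of the basic grant ...
kept = np.full(n_researchers, (1 - donate_fraction) * basic_grant)

# ... and must give the rest to one colleague of their choosing.
# Here the choice is random; peer esteem would steer it in practice.
recipients = rng.integers(0, n_researchers, size=n_researchers)
received = np.bincount(recipients, minlength=n_researchers) * donate_fraction * basic_grant

budget = kept + received
print(f"median yearly budget: {np.median(budget):,.0f}")
print(f"budget of the best-endowed 1%: at least {np.quantile(budget, 0.99):,.0f}")
```

Iterating this over the years, and letting the donations follow reputation rather than chance, turns the toy model into a research question: does the money then concentrate in the same hands as under today’s system, or does it spread more broadly?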

Continue reading

For risks and side effects consult your librarian

Scarcely noticed by the scientific community in Germany, an astounding development is taking place: the alliance of German scientific organizations and the German Rectors’ Conference (the DEAL consortium) is flexing its muscles at the publishing houses. We are witnessing the beginning of the end of the current business model of scientific publishing: an exodus from institutional library subscriptions to journals towards open access (OA) to the scientific literature for everyone, financed by one-off article publishing charges (APCs). The motive for this move is convincing: knowledge financed by society must be freely accessible to society, and the costs of accessing scientific publications have risen immensely, increasing by over 5% every year (at that rate they double roughly every 14 years) and all but devouring the last resources of the universities.

The big publishing houses are merrily pocketing fantastic returns on research that is financed by taxes and produced, curated, formatted and peer reviewed by us. These returns run at a whopping 20 to 40%, margins that could hardly be achieved legally in any other line of business. At the bottom of all this lies a bizarre exchange: with our tax money we buy back our own product, scientific knowledge in manuscript form, after having handed it over to the publishers up front. It gets even wilder: the publishing houses give us back our product on loan only, with limited access and without any rights over the articles. The taxpayers, who paid for it all, cannot access it, which means that not only Joe Blow the taxpayer is left out in the cold, but with him practicing doctors and clinicians, and scientists outside the universities.
Continue reading

How original are your scientific hypotheses really?

Have you ever wondered what percentage of your scientific hypotheses are actually correct? I do not mean the rate of statistically significant results you get when you dive into new experiments. I mean the proportion of hypotheses that were later confirmed by others, or that postulated a drug which actually proved effective in other labs or even in patients. Nowadays, unfortunately, very few studies are independently repeated (more on that later), and even long-established therapies are often withdrawn from the market as ineffective or even harmful. One can only estimate such a “success rate” approximately, and that is exactly what I will now attempt. You may wonder why I am posing this apparently esoteric question. It is because knowing, even roughly, what percentage of hypotheses actually prove to be correct has far-reaching consequences for evaluating research results, your own as well as those of others. The question has an astonishingly direct relevance to the discussion of the current crisis in biomedical science. Indeed, a spectre is haunting biomedical research!
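To see why this prior rate matters so much, consider a back-of-the-envelope calculation of the probability that a ‘significant’ finding reflects a true hypothesis. The sketch below uses illustrative numbers; they are assumptions, not figures from this post:

```python
def ppv(prior: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Probability that a statistically significant finding reflects a true hypothesis,
    given the pre-study probability that the hypothesis is correct."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# If only 10% of tested hypotheses are correct, a significant result at 80% power
# is true about two times out of three; at 20% power, a figure often cited for
# small preclinical studies, most significant findings are false positives.
print(f"prior 10%, power 80%: PPV = {ppv(0.10):.2f}")              # ~0.64
print(f"prior 10%, power 20%: PPV = {ppv(0.10, power=0.20):.2f}")  # ~0.31
```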
Continue reading

And now for something completely different: Sequential designs in preclinical research

Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. The aim of our recent article in PLOS Biology is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulated data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even with the very small sample sizes currently prevalent in preclinical experimental biomedicine. We argue that these savings should be reinvested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness of this research domain.

PLoS Biol. 2017 Mar 10;15(3):e2001307. doi: 10.1371/journal.pbio.2001307.
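To give a flavour of the idea, here is a toy two-stage group sequential design in code. It is not the design or the boundaries from the paper; the sample sizes, effect size and stopping thresholds are illustrative assumptions only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_per_stage = 5                 # animals per group and stage (assumed)
effect_size = 1.0               # true standardized group difference (assumed)
t_interim, t_final = 2.8, 2.1   # illustrative stopping boundaries, not formally derived

def one_experiment():
    """Simulate one two-stage experiment; return (significant, animals used)."""
    treated = rng.normal(effect_size, 1.0, 2 * n_per_stage)
    control = rng.normal(0.0, 1.0, 2 * n_per_stage)
    # Interim analysis after the first stage: stop early if the effect is already clear.
    t1 = stats.ttest_ind(treated[:n_per_stage], control[:n_per_stage]).statistic
    if abs(t1) >= t_interim:
        return True, 2 * n_per_stage
    # Otherwise continue to the final analysis with all animals.
    t2 = stats.ttest_ind(treated, control).statistic
    return abs(t2) >= t_final, 4 * n_per_stage

results = [one_experiment() for _ in range(5_000)]
print(f"empirical power: {np.mean([sig for sig, _ in results]):.2f}")
print(f"mean animals per experiment: {np.mean([n for _, n in results]):.1f}")
```

The saving comes from the experiments that stop at the interim look, which is why the mean number of animals per experiment falls below that of a fixed design of the same maximum size.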

 

Lost access to Elsevier journals?

The negotiation of a new Germany-wide arrangement with Elsevier and other publishers for online access to journals has stalled, as Elsevier did not agree to the terms proposed by a consortium of many German universities. Meanwhile, many of these universities have cancelled their subscriptions with Elsevier. Scientists trying to access Elsevier journals now need to keep their credit cards on standby to access (their own) articles! Frustrating as this may be, I find it good news that the struggle of scientists for open access to their work is coming to a head. Many researchers are grasping the business model behind scientific publishing for the first time! They naively thought ‘open access’ meant that, working on their computers within the intranet of their universities, a simple click would give them free access to any paper they want. They might now join the still small band of ‘activists’ fighting for novel publishing models. Two of those activists, Romain Brette and Björn Brembs, have recently provided very useful resources for concerned researchers (So Your Institute Went Cold Turkey On Publisher X. What Now?), as well as a vision of the post-journal world and thoughtful suggestions on how we can ‘help move science to the post-journal world‘.

 

Could a neuroscientist understand a computer?

… is the question asked by Jonas and Kording in a recent article (“Could a Neuroscientist Understand a Microprocessor?”), published in PLOS Computational Biology and featured in the Economist. The Economist summarizes their findings by stating that ‘testing the methods of neuroscience on computer chips suggests they are wanting’, and on the magazine cover labels neuroscience’s toolkit as ‘faulty’.

Jonas and Kording took a simple microchip (one used in ‘prehistoric’ game computers like the Atari) and asked whether the chip could be ‘understood’ by applying the same approaches used by the large-scale human brain projects. These multi-billion consortia work under the premise that the human brain works like a supercomputer: doesn’t it process information and use electrical currents? So if you understood the wiring diagram (the ‘connectome’) and the firing of electrical signals through it, you could model its working principles. All you need is lots of data and heavy computing. Jonas and Kording applied this approach and checked whether it allowed them to understand how the game chip works. Since we already know how it works (because it was engineered in the first place), we can test how far the approach takes us. They even threw in ‘interventions’, very much like the way modern neuroscience started, when neurologists like Paul Broca used structural lesions (e.g. after infarction) in their patients’ brains to make inferences about the functions of specific brain regions. So what happens to Donkey Kong if you tinker with a few transistors on the chip, and what does that tell you about their function? If you follow Jonas and Kording, not much. They conclude that current analytic approaches in neuroscience ‘may fall short of producing meaningful understanding of neural systems, regardless of the amount of data’.

So the methods used by the BRAIN Initiative or the Human Brain Project may be wanting. But what if it is even worse, and their basic tenet (‘the human brain is a computer’) is wrong, so that the hype around those projects is not only methodologically but also conceptually flawed? In a recent post in Aeon, Robert Epstein argues that ‘your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer’. Click here to follow his argument why it is silly to believe that brains must be information processors just because computers are information processors.