Do you have a fitness tracker? Are you on Twitter or Facebook and count your likes and followers? Do you know your ResearchGate score? Do you pay attention to Gault Millau toques and Michelin stars when you visit restaurants? Then you’re in good company, because you’re doing reputation management on a wide variety of levels with quantitative indicators. Just like universities and research sponsors. Except that you do it privately and entirely voluntarily!
On these pages I have recently discussed (here, and here) why in academia today we hardly judge research on the basis of its originality, quality and true scientific or societal impact. Instead, we use quantitative indicators such as Journal Impact Factor (JIF) or third-party funding, and distribute grants or academic titles based on these indicators. I also pondered a few foolish ideas on how to turn the wheel back a bit, in the direction of a content-based evaluation of research achievements. But these considerations still failed to take into account that institutions and funding agencies are in good company – namely ours – when they foster competition with simple, abstract metrics. This makes things easier for them. And at the same time, harder for us to change the system. Because we may have to change ourselves.
As long as the majority of the population is not immune to SARS-CoV-2, the health care system must be protected from collapse under the load of COVID patients. Therefore, for the past year, we have been testing measures ranging from increased hand washing to total lockdown, with some success. Measures are introduced, tightened, relaxed, or abolished, only to be reintroduced, … and so it goes. Politicians justify their actions with incidence values, hospital utilization, model calculations and the advice of experts (see also here in my blog). Undeniably, many of these (anti-)Corona measures have enormous plausibility. It is also trivial to realize that a total lockdown can severely limit the spread of a virus. However, this cannot be sustained forever. Therefore, the question of which of the measures in the black box ‘lockdown’ have an effect, and for which the harm outweighs the benefit, is immensely relevant. With this knowledge, one might put together an evidence-based package of Corona measures that is less drastic than a lockdown, but just as effective. And perhaps in this way persuade some skeptics to participate. This is why the question of which evidence is available for the effectiveness of individual measures is so important. But beware. Continue reading
Science gobbles up massive amounts of societal resources, not just financial ones. For academic research in particular, which is self-governing and likes to invoke the freedom of research (which in Germany is even enshrined in the constitution), this raises the question of how it allocates the resources made available to it by society. There is no natural limit to how much research can be done – but there is certainly a limit to the resources that society can and will allocate to research. So which research should be funded, which scientists should be supported?
Mechanisms for answering these questions, which are central to the academic enterprise, have evolved over many decades. However, these mechanisms control not only the distribution of funds among researchers and within as well as between institutions, but ultimately also the content and quality of research. The mechanisms by which research funds and tenure are evaluated and allocated, and the metrics used in these processes, determine scientists’ daily routines and the way they do research more than the literature they read, their views through a microscope, or their presentations at conferences. Continue reading
In this post I’ll be looking at the question of why scientific careers today depend so much on the Journal Impact Factor (JIF) and on the acquisition of as much third-party funding as possible. Or, more generally, why the content, originality and reliability of research results are often a secondary matter when commissions talk their heads off about whom to include in their own ranks. And whom not. Or which grant applications deserve to be funded. In short, follow me on a brief and incomplete history of how and why we ended up judging the quality of science through proxies such as the JIF and the amount of third-party funding. Perhaps a historical perspective will also yield clues as to how we can overcome this mess. But I am getting ahead of myself. Let’s start where it all began, with the founding fathers of modern science. Continue reading
We all have been there: After a long wait and mounting tension, an email arrives from the editorial office. With a racing heart we open it. ‘Thank you for submitting your manuscript to our Journal. Your manuscript was sent for external peer review, which is now complete. Based on our evaluation and the comments of external reviewers, your manuscript did not achieve a sufficient priority for further consideration, and we have decided not to pursue your manuscript for publication. While we understand you may be disappointed with this decision, we hope the reviewer comments will help you revise your manuscript and submit it to another journal. Thank you for the privilege of reviewing your work.’ After the initial shock, we take a look at the reviews in the attachment. Reviewer 1 found the work quite good, some minor issues, all fixable, with well-meaning suggestions. But Reviewer 2! Has he even read the article? Did he confuse it with another paper on his desk? In any case, this unknown ‘expert’ was totally incompetent, but still dared to pour several pages of slurry over three years of our hard work and its highly relevant results.
Despite the fact that case numbers are now rising sharply again and we have entered a lockdown ‘light’, we in Germany are rightly pleased that we have so far come through the Corona crisis much better than many of our neighbors or the USA. Was the ‘German way’ perhaps so successful because politicians in Germany had an open ear for science and therefore prescribed the right measures based on evidence?
‘Translation’ – from mouse to human and back – the mantra and eternal quest of university medicine! Where else than in academic medical centers can you find all this under one roof: basic biomedical and clinical research, the patients necessary for clinical trials, government funding, as well as motivated and excellently trained personnel! ‘Translation’ is as old as academic medicine itself – but the term for it was only coined in the 1980s, and since then it has adorned the websites and mission statements of all university hospitals worldwide. Translation has certainly been a model of success – just think of the treatment of chronic neurological disorders like epilepsy, Parkinson’s disease or multiple sclerosis, therapies for some forms of cancer, or HIV. Continue reading
Recently I have been a guest on a few podcasts and in some longer interviews; for audiophiles, here are the links:
(Update August 2020)
On March 17th, just as many countries were taking draconian measures to contain the SARS-CoV-2 pandemic, the Greek-American meta-researcher and epidemiologist John Ioannidis, whom I often quote in my posts, proclaimed ‘a fiasco in the making’! With strong language and a few ad hoc estimations of COVID fatality rates he warned that, based on poor data or no evidence at all, politicians might inflict incalculable damage on society, possibly much worse than what a virus, putatively as dangerous as influenza, could cause. As one of the most highly cited researchers in the world and a vocal critic of quality problems in biomedicine, his COVID-related interviews, opinion pieces and articles since then have received a great deal of attention: in the scientific community, in the lay press, and especially among his worldwide fan base. Continue reading
Research using animals is a sensitive issue. Anyone who does animal experiments, like myself, is reluctant to talk about it, at least outside our natural habitat, the laboratory or scientific conferences. Institutions where animal experiments are carried out are also quite shy about the topic. Recently, the Max Planck Society left Nikos Logothetis (MPI Tübingen) standing in the rain when he was targeted by a media campaign. Now he and some of his laboratory staff are off to Shanghai… The websites of prominent research institutes feature all kinds of colorful illustrations: immunohistochemistry slides, doctors and students in white coats with pipettes in their hands, sitting at computers or their microscopes. But rats and mice are conspicuously missing! These institutes proudly display their research activities and enthusiastically advertise (future) research breakthroughs towards completely new and effective therapies. But no reference is made to the animal experiments on campus!