Boost your score: Digital narcissism in the competition of scientists

Do you have a fitness tracker? Are you on Twitter or Facebook, counting your likes and followers? Do you know your ResearchGate score? Do you pay attention to Gault Millau toques and Michelin stars when you visit restaurants? Then you’re in good company, because you are practicing reputation management with quantitative indicators on a wide variety of levels. Just like universities and research sponsors. Except that you do it privately and entirely voluntarily!
On these pages I have recently discussed (here, and here) why in academia today we hardly judge research on the basis of its originality, quality, and true scientific or societal impact. Instead, we use quantitative indicators such as the Journal Impact Factor (JIF) or third-party funding, and distribute grants and academic titles based on them. I also pondered a few foolish ideas on how to turn the wheel back a bit, towards a content-based evaluation of research achievements. But these considerations failed to take into account that institutions and funding agencies are in good company – namely ours – when they foster competition with simple, abstract metrics. This makes things easier for them, and at the same time harder for us to change the system. Because we may have to change ourselves.
So what about the individual side of quantitative performance evaluation? Scientific performance evaluation is just a specialized form, and a mirror, of a quantification cult in society that has not even stopped at the private sphere. It is no longer only in the professional realm that quantification serves to create a market in which performance is measured and spurred on through competition with numbers. At the level of the individual researcher, too, everything is a matter of status and reputation. The need to publish papers in renowned journals (or, more simply, in journals with a high JIF) in order to stay in the academic system, or even to move up, develops into the management of personal scientific status: ‘He/she wrote two Nature papers last year!’, or ‘My h-index is over 50’, and so on. Objective as well as subjective uncertainty in the competition among scientists only increases the desire for status, and for information that quantifies it. This has led to a fetishization of self-presentation and public image, which is lived out, among other things, in the cultivation and maintenance of one’s curriculum vitae, one’s professional website, or one’s Twitter account. The motto here is ‘looking good’ rather than ‘being good’. Meanwhile, graduate programs offer their students seminars in the art of this professional self-presentation and self-optimization. We train the next generation in status competition, and award prizes to status seekers.
The majority of young people, far from rebelling, would in fact like to see further training in this art form. Of course, none of this is surprising, since reputation management via quantitative indicators is now fully established in private life as well. Science, however, is particularly affected by such quantification excesses, because scientists may have a heightened need for validation and a craving for recognition. Titles, top publications, awards, always being first: scientists are born competitors. Moreover, scientists are naturally receptive to the quantification logic of competition. What is measurable and can be expressed in numbers appears transparent, comprehensible, evidence-based, rational, neutral, precise, simple, immediate, and objectively comparable. Measurement is part of the basic repertoire of the scientific method. Seen this way, counting JIF points and h-indices, but also Gault Millau toques or Twitter followers, is not far removed from scientific practice.
But isn’t that harmless? Even useful, if scientists who goad themselves and each other in this way go on to perform great research deeds? I fear not. Quantification simplifies by abstraction. A quality (‘what’) is transformed into a quantity (‘how much’). Incomparable things suddenly become comparable, even the proverbial apples and oranges! There is now a common standard for different things. Dr. Maier and Dr. Müller can now compare themselves directly: via the cumulative JIF, the h-index, the ResearchGate Score, or the Altmetric Score. The last two are wonderful, but also sad, examples of the core of the problem. Impact is understood as attention. What becomes essential is not original hypotheses, new findings, or even scientific or societal benefit, but visibility and popularity. ResearchGate calls on its users to ‘Boost your score’, yet does not disclose how the score is calculated, whether it is reproducible, or what it is actually supposed to mean. But that doesn’t matter, because it produces a number that can be used to compare and compete. As a result, our thinking and judgments are increasingly geared to such indicators, and professional standards and content are superseded in the process.
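How much such an abstraction discards can be made concrete with the h-index, which reduces an entire publication record to a single integer: a researcher has index h if h of their papers have been cited at least h times each. A minimal sketch in Python, using hypothetical citation counts purely for illustration, shows how two very different records collapse onto the same number:

```python
def h_index(citations):
    """A researcher has h-index h if h of their papers
    have at least h citations each (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers with >= `rank` citations
        else:
            break
    return h

# Hypothetical records: Dr. Maier wrote a few highly cited papers,
# Dr. Müller many modestly cited ones.
maier   = [900, 450, 120, 2, 1, 0]
mueller = [4, 4, 4, 3, 3, 3, 3, 3]

print(h_index(maier), h_index(mueller))  # both print: 3
```

The arithmetic is trivial; the point is what it throws away. Once both records read h = 3, the difference between them is invisible to anyone who ranks by the number.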
Those who criticize the hypercompetition in the system and the ‘publish or perish’ culture, and who call on institutions to change course, must therefore also reflect on their own practice. We participate voluntarily, and with great zeal, in a multitude of (partly) private games of competition, fought out by means of meager, abstract numbers. What is more, we get really upset when we are not satisfied with our own numbers, i.e. our ranking. After all, satisfaction with an indicator and the way it is calculated correlates very well with one’s own ranking: if the ranking is poor, the indicator is presumably unsuitable. Under those circumstances, correct statements about the JIF or third-party funding can suddenly be heard even from colleagues with no record of criticizing the system. Or they criticize the algorithm: isn’t there a formula that would rank me better?
We have internalized the quantitative, abstract status logic imposed on us by the institutions, and made it an important variable of our self-esteem. We have voluntarily adopted the indicators and benchmarks, and because the institutions and our colleagues attach great value to them, we ourselves do so with all the greater conviction: Conform and perform!
Steffen Mau, who analyzes these developments in detail from a social sciences perspective in his excellent book ‘Das metrische Wir – Über die Quantifizierung des Sozialen’ (The Metric We – On the Quantification of the Social), points out at the beginning of his treatise that the German word ‘vermessen’ already carries a premonition of bad things: as a verb, ‘vermessen’ means not only to measure against a yardstick but also to measure wrongly; as an adjective, it means ‘arrogant’ or ‘presumptuous’. The wrong yardstick, the fixation on reputation, and finally the setting of wrong incentives: the German language anticipates all of this!
A German version of this post appeared earlier as part of my monthly column in Laborjournal: https://www.laborjournal.de/editorials/2218.php