Predators in the (paper) forest
It struck at the end of July. A ‘scandal’ in science shook the Republic. Research by NDR (Norddeutscher Rundfunk), WDR (Westdeutscher Rundfunk) and the Süddeutsche Zeitung revealed that German scientists are involved in a “worldwide scandal”: more than 5,000 scientists at German universities, institutes and federal authorities had, with public funds, published their work with online pseudoscientific publishers that do not comply with the basic rules for assuring scientific quality. The public, and not just a few scientists, heard for the first time about “predatory publishers” and “predatory journals”.
Predatory publishers, whose phishing e-mails look quite professional, offer scientists Open Access (OA) publication of their studies for a fee, implying that the papers will be peer reviewed. No peer review is carried out, and the articles are published on the websites of these “publishers”, which, however, are not indexed in the usual databases such as PubMed. Every scientist in Germany finds several such invitations per day in his or her inbox. If you are a scientist and receive none, you should be worried.
Now, one might think that “predatory publishers” refers to Elsevier and company. They earn a profit margin of over 30% by selling the fruits of our labor, and then we even have to wait in trepidation for them to accept our “gift”, i.e. our articles! Taxpayers, who have financed all this, and other unhappy souls who do not enjoy the benefits of access to an expensive institutional library, can reach the fruits of this research only with their credit cards. But wait, not so fast! Elsevier is not a predator, for there you do (usually) get a proper review process.
And this is where it gets interesting, even complicated. I too am convinced that a good review process can improve scientific studies. Often, however, it does nothing of the kind. The process devours massive resources, yet there is no scientific evidence that it “works”: it is slow, expensive, erratic and poor at detecting errors. It is often abused, and its results are potentially anti-innovative. We all know the problem. Articles are often written with potential reviewers already in mind; frequently nothing is gained in revision other than that a particular reviewer is silenced. Instead of hunting down new discoveries, scientists spend far too much time delivering precisely the results the reviewer wants to see. Of course, the odd irredeemably bad paper is dumped. But only temporarily, for it will be published later somewhere else, after a cascade of submissions to journals of decreasing impact factor. If need be, in a predatory journal! Making the review process fair and productive will require a good deal of effort and innovative approaches, as seen for example in the OA journals of EMBO or F1000Research.
So how come established scientists publish in journals whose names they have never heard before? Often it is because, after a series of absolutely nerve-racking unsuccessful submissions, they lose their nerve. The paper might even have been very good, but not spectacular enough: a negative finding or even a null result. Or the reviewers’ demands were too costly to fulfill, or the postgrad was no longer available. The authors then succumb to temptation and pay the fee to see the fruit of their labor in a journal with a super-sounding name and to have another paper on their publication list. And here lies the core of the problem: a reward and career system oriented toward simple quantitative indicators. For example, at least 10 original peer-reviewed papers with first or last authorship are required of ‘Habilitanden’ (candidates for the Habilitation, a strange but important German academic degree that is impossible to explain) at the Charité. The same goes for most German faculties.
So predatory publishers are actually exploiting a systemic problem in our academic system. The victims of predatory publishers are also perpetrators! We often judge scientists by quantitative, easily measurable metrics (number and impact factor of publications, third-party funding acquired) that have little to do with the quality of the science or its scientific or social relevance. The content and significance of the research are often given short shrift for lack of time. Moreover, studies are judged essentially by whether they have a positive result, not by whether they were performed well and deliver a reliable result. That is why we read every day in the newspaper about the next new cure for Alzheimer’s, cancer, etc., without ever seeing it arrive.
Speaking of promised cures: it has been claimed that the predators pose the danger that “fake science” will be published and then put into clinical use, endangering patients. This may have happened in isolated cases. However, fake science is also published in serious scientific journals, and when that happens in the Lancet, hundreds of thousands are immediately affected: Andrew Wakefield and the anti-vaxxers send their greetings. Nor should we forget that about 50% of all completed clinical studies are never published at all, partly because results that are ambiguous, or that fail to confirm the study hypothesis, are harder to publish. Thus we fetishize the “positive” and the spectacular. Viewed in this light, you could even say the predators publish evidence that would not otherwise be available!
It is also clear that too many (positive) studies are published, whether by Elsevier et al. or by the predators. Why? Simple: because we reward publications, but we do not reward asking important questions and exploring them with good methodology. Commissions can no longer examine our publications, let alone read them. That also holds, by the way, for the scientists themselves. In the last year alone, PubMed listed almost 1.3 million newly published articles. More than 90% of the published literature (note: not in predatory journals) is not even read! Approximately 50% of it is nevertheless cited at least once; that is, papers are often cited unread! So we simply count publications, no matter what is in them. Or we add up their impact factors, regardless of whether the impact factor says anything about the article, for it measures only the average citation frequency of the journal.
It is also tragic that the affair over the predatory publishers brings very good open access journals, and the principle behind them, into disrepute. The “pay per article” principle is equated with bad quality, even though there is no connection between the two. Open access is further stigmatized by the practice of many publishers of offering authors, after rejection of their paper by a “normal” journal, to “forward” it to an open access journal of the same publisher (usually one with a lower impact factor). Everything gets even more complicated when, for some journals, you cannot tell whether they are predators at all. Some journals that today count as predatory used to be listed in MEDLINE and PubMed and had decent impact factors (e.g. Oncotarget).
Thus, it isn’t really about predatory publishers at all; they merely exploit an error in our system. What can, what must we do? Stigmatizing those who have published proper studies with predatory publishers will not help. We need to clarify the situation promptly, because for many colleagues it is not at all clear what a “predatory journal” is and how it can be recognized, or how the institutional subscription model is financed. Many a colleague still believes that the seemingly “free” downloading of any scientific article is Open Access! Taking decisive action, however, means changing the academic incentive and career system so that, alongside quantitative factors, more qualitative, quality-oriented indicators are introduced. Besides the question of how innovative research is, criteria such as scientific care, transparency, availability of the data to other scientists, and the involvement of patients in the planning of clinical studies (“participation”) must be evaluated and “rewarded”. Then we will find out whether innovative formats such as preprint servers (e.g. bioRxiv), post-publication review, and registered reports might enable high-value publication and scientific discourse far more effectively.
A German version of this post has been published as part of my monthly column in the Laborjournal: http://www.laborjournal-archiv.de/epaper/LJ_18_09/22/index.html
References:
Björn Brembs’ critical analysis of OA: http://bjoern.brembs.net/2017/03/please-address-these-concerns-before-we-can-accept-your-oa-proposal/
Björn Brembs warns that OA may create more problems than it solves: http://bjoern.brembs.net/2016/04/how-gold-open-access-may-make-things-worse/
Sci-Hub: radical but illegal OA, with over 60 million articles for download: https://dirnagl.com/?s=sci-hub
The “Fake Science Scandal” – an opinion! Blog post by Wolfgang Nellen (in German): https://www.laborjournal.de/blog/?p=9840
Rip-off journals, data analysis part 1: methods, country comparison, total numbers. Blog post by Markus Pössel (in German): https://scilogs.spektrum.de/relativ-einfach/abzock-zeitschriften-den-daten-auf-der-spur/
Hi Uli,
Very interesting and enlightening! And you are so right about finding other “metrics” to “reward” scientists than the number of publications and impact factor… the number of citations is one, as I very much doubt that a poor paper will ever be highly cited, whereas a good one, even in an average-IF journal, will be well cited!
Best,
Herve Boutin