Man errs as long as he strives…
In 2010, the highly respected Harvard economists Carmen Reinhart and Kenneth Rogoff published an article entitled “Growth in a Time of Debt”. It dealt with the relationship between national economic growth and national debt. They reported the discovery of an astonishing, globally observable correlation: as national debt rises, a nation’s economic growth initially rises as well. If, however, national debt exceeds 90% of gross domestic product, this relationship reverses quite abruptly. Growth turns into contraction, and economic output declines as debt rises further. The discovery of a “90% debt threshold” hit like a bomb. Some suspect that the article was the basis for the European austerity policy after the 2008 financial crisis. What is certain, however, is that the paper was enthusiastically used by Western politicians to justify their restrictive fiscal policies.

In 2013, Thomas Herndon, a graduate student, reanalyzed the data of the Reinhart-Rogoff paper as part of a semester assignment. After some back and forth, the authors had given him the original Excel spreadsheet. And lo and behold, within minutes he found a number of serious errors in it! After correction, the debt threshold disappeared: growth remained positive even at high debt levels, declining only gradually across the entire range.

What do we learn from this? Apart from the fact that the fundamental error of Reinhart and Rogoff was of course the confusion of correlation with causation: Excel is not suitable for the analysis of complex scientific data. Even more importantly, scientists make mistakes, and those mistakes can have serious consequences.
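To make the mechanics of such a spreadsheet slip tangible, here is a toy sketch in Python with invented numbers (explicitly not the Reinhart-Rogoff dataset): a selection range that silently misses a few rows shifts the “headline” average without any warning.

```python
# Toy illustration with invented numbers (NOT the Reinhart-Rogoff data):
# an Excel-style range error -- a selection that misses a few rows --
# changes an average without raising any error.

growth_high_debt = {   # hypothetical growth rates (% p.a.), debt > 90% bracket
    "Country A": 3.8,
    "Country B": 2.6,
    "Country C": 2.2,
    "Country D": 2.9,
    "Country E": 0.7,
    "Country F": 2.4,
    "Country G": -2.0,
}

values = list(growth_high_debt.values())

full_mean = sum(values) / len(values)             # all rows included
truncated = values[2:]                            # first two rows missed,
truncated_mean = sum(truncated) / len(truncated)  # as in a mis-dragged range

print(f"correct mean:   {full_mean:+.2f}%")       # +1.80%
print(f"truncated mean: {truncated_mean:+.2f}%")  # +1.24%
```

A script makes the included rows explicit and reviewable; a spreadsheet cell hides them behind a formula that nobody re-reads.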
To err is human, at least that’s what they say. This has been recognized in many areas of society, especially where errors can lead directly to smaller or larger catastrophes, for example in nuclear power plants, in air traffic control, or in hospitals. In these domains, a professional handling of errors, aimed at preventing their repetition, is legally prescribed in the form of professional error management systems. Interestingly, biomedical research is not familiar with such systems. There, we only hear about errors from the “errata” occasionally published in journals and listed in PubMed. Most often the “error” was that the name of an author’s institute was misspelled or, horribile dictu, an author was listed in the wrong place!
Have you ever heard of a scientist or technical assistant presenting a mistake he or she made, which was then analysed and discussed in the group? Probably not, because that rarely happens. In hospitals, this is standard. In so-called “morbidity and mortality conferences”, unusual courses of treatment, adverse events, deaths, etc. are dealt with systematically. The aim is to identify errors and weaknesses in a multi-professional environment – especially in clinical processes – and to derive and implement improvement measures.
Could it be that we don’t need this in biomedical research? Because we rarely make mistakes? And if they happen, are they without impact on our results or their interpretation? Do scientific mistakes not recur? Of course we make mistakes, a lot of them, and sometimes even serious ones! And these mistakes may well recur. Errors in basic biomedical science can indirectly harm patients (see previous post), rob doctoral students of years of their youth, contribute to the unnecessary death or suffering of laboratory animals, or lead to a massive waste of resources.
What sources of error are there in the laboratory’s complex working environment? Systematic errors of devices such as pipettes, scales, or plate readers can lead to a waste of resources. There is a plethora of machinery in the laboratory, most of it more complex than a pipette. And even pipettes can be mishandled, degrading their calibration and consequently jeopardizing the experiments of subsequent users. Speaking of device calibration: without it, or with it done wrongly, every output of a device may spoil a lot of work (a simple calibration check is sketched below). Another common source of errors are deviations from protocols. Most important, however, are probably simple ‘human errors’: leaving the freezer door open so that reagents or samples thaw, mislabeling samples, calculation errors, flawed documentation, or mistakes in the use of a device or of analysis software (Excel!). But that’s not all, because such errors can creep into protocols or even publications and cause problems for others. To complicate matters, most labs are buzzing with people with very different levels of professionalism, education, motivation, and training.
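As an example of how easily such checks can be routinized, here is a hedged sketch of a gravimetric pipette check in Python (function names and the acceptance limit are illustrative assumptions, not a metrological standard): water is dispensed onto a balance, and the measured mass is compared with the nominal volume.

```python
# Sketch of a gravimetric pipette check (illustrative only; real acceptance
# limits come from standards such as ISO 8655, not from this script).

WATER_DENSITY_MG_PER_UL = 0.998  # approx. density of water at ~21 degrees C

def pipette_error_percent(nominal_ul: float, measured_mg: list[float]) -> float:
    """Mean systematic error of a pipette, in percent of nominal volume."""
    mean_volume_ul = (sum(measured_mg) / len(measured_mg)) / WATER_DENSITY_MG_PER_UL
    return 100.0 * (mean_volume_ul - nominal_ul) / nominal_ul

# Ten replicate weighings of a nominally 100 uL dispense (invented values):
weighings_mg = [99.1, 98.7, 99.4, 98.9, 99.2, 99.0, 98.8, 99.3, 99.1, 98.9]

error = pipette_error_percent(100.0, weighings_mg)
print(f"systematic error: {error:+.2f}%")
if abs(error) > 0.8:  # example threshold; set per the applicable standard
    print("pipette out of tolerance -- recalibrate before use")
```

Ten minutes of weighing per pipette and quarter can save weeks of experiments done with a drifting instrument.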
I suspect that an important reason why biomedical researchers seem to make so few mistakes is the fear of reporting or admitting them. One could be held accountable, labelled a dummy, or draw the wrath of those who may or may not have been affected by the mistake. These fears are diffuse, but they work. Many researchers will say: “We talk openly about it, we don’t punish anyone for their mistakes.” But do you know how many mistakes are made, and how many are reported, in your laboratory? How do you communicate errors? How can others learn from your mistakes? Are there safeguards to ensure that errors are not repeated?
This can only happen in an environment with an “error culture”. Error culture has a lot to do with attitude, but also with structures. The attitude is to see mistakes as an opportunity. It requires a crystal-clear statement from the laboratory or group leadership that mistakes, unless they are intentional, must never lead to “punishment” or discrimination of any kind. Structural prerequisites include formats for reporting errors, which can also be used anonymously. In the simplest case, this is something like an “error box” in which you can post a note. But that is only half the battle, because the error must also be analysed, communicated to others, and, if necessary, measures introduced so that it cannot be repeated. This could, for example, be a regular agenda item at the lab meeting. If you are interested in a more systematic handling of errors in the laboratory, I recommend our article (PLoS Biol. 2016 Dec 1;14(12):e2000705) and the open source software LabCIRS presented there. The Laboratory Critical Incident Reporting System (LabCIRS) facilitates the handling of errors, especially in the context of basic research. In LabCIRS, errors can be reported anonymously, which is especially important in the initial phase of a proactive approach to errors.
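Even the simplest digital equivalent of the error box is only a few lines of code. The following is a minimal sketch (file name and categories are invented for illustration; LabCIRS itself is a full web application described in the paper above): reports carry a date but no author, so they can be raised anonymously at the next lab meeting.

```python
# Minimal "error box" sketch (illustrative, NOT LabCIRS): anonymous reports
# are appended to a shared file; no user name is ever recorded.

from datetime import date
from pathlib import Path

ERROR_BOX = Path("error_box.txt")  # hypothetical shared location

def report_error(description: str, category: str = "general") -> None:
    """Append an anonymous error report with a date stamp only."""
    entry = f"{date.today().isoformat()} | {category} | {description}\n"
    with ERROR_BOX.open("a", encoding="utf-8") as box:
        box.write(entry)

report_error("Freezer door left ajar overnight; samples in rack 3 thawed.",
             category="storage")
```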
However, it is often not even possible to report an error anonymously in the lab, because its description alone betrays the originator. So it is very much about the mindset: everyone makes mistakes, we can learn from our own mistakes and those of others, and admitting mistakes and ensuring that they are not repeated is an expression of professionalism. But because making mistakes is taboo and fraught with anxiety, one should try to give the subject a positive turn. Sounds crazy, but why not award the ‘best’ mistakes every year? In this way, reporting errors that were useful to others is rewarded, and the right mindset can thrive. Medical conferences sometimes feature a session called ‘My worst mistake’! Often this is the best-attended session of the whole conference. Why not have such sessions at research conferences?
Last but not least, if mistakes are discovered after a study has been published, action must be taken. This applies to wrong information in the methods section (wrong dosage, wrong reference value, etc.) as well as to results that need to be corrected (errors in the evaluation, in a graph, etc.), and also to erroneous conclusions. Sounds obvious, but if you systematically search the errata in the research literature, you will very rarely find anything of this kind. Is this because errors in need of correction are rarely discovered after publication? My guess is rather that most authors greatly fear the stigma of an erratum or a retraction. Ask yourself: is your own conscience clear in this regard? Have you never been in a situation where you should have corrected something you had already published?
The much-read and much-appreciated blog retractionwatch.com, which unfortunately also lives a little off our malice and voyeurism, therefore features ‘good’ retractions after ‘honest error’ in a special category: “Doing the right thing”. In any case, I have the greatest respect for scientists who use an honest error as an opportunity to correct their publication!
A German version of this post has been published as part of my monthly column in the Laborjournal: http://www.laborjournal-archiv.de/epaper/LJ_19_01/22/index.html