Errare humanum est – to err is human. Biomedical research, a human enterprise, is no exception. Ever more sophisticated methodologies probing how complex organisms function in health and disease invite errors at all levels, from the design of experiments and studies to the collection of data and the reporting of results. The stakes are high, both in terms of the resources spent and the professional rewards to be gained by individuals.
Recent concerns about the reliability and reproducibility of biomedical research have focused on weaknesses in planning, conducting, analysing, and reporting research. Clearly, the discussion revolves around factors that degrade the quality of research, and that may be remedied by structured measures to improve research quality. However, the potential contribution of errors to the disappointingly low reproducibility and predictiveness of biomedical research, and how scientists deal with these errors, has not yet been considered.
In a PLOS Biology article that appeared this week we propose the implementation of a simple and effective method to enhance the quality of basic and preclinical academic research: critical incident reporting (CIR). CIR has become a standard in clinical medicine but, to our knowledge, has never been implemented in the context of academic basic research. We provide a simple, free, open-source software tool for implementing a CIR system in research groups, laboratories, or large institutions (LabCIRS). LabCIRS was developed, tested, and implemented in our multidisciplinary and multiprofessional neuroscience research department. It is accepted by all members of the department, has fostered a mature error culture, and has made the laboratory a safer and more communicative environment. Initial concerns that such a measure might create a “surveillance culture” that would stifle scientific creativity turned out to be unfounded.
A demo version and the source code of LabCIRS can be found via the supplement of the article.
It took a while until this next post, and now it is a book again, in German no less, and written for lay readers… [Sorry, this is about a book, and unfortunately it is in German…]. But writing it with Jochen Müller also took a while. It was Jochen’s idea to explain brain function through the functional deficits that occur in neurological diseases. And that is what we do, using headache, multiple sclerosis, stroke, Parkinson’s disease, Alzheimer’s disease, and epilepsy. A book to read about how the brain works, at least as far as we currently know. Naturally, the book also covers these diseases and their treatment. The goal, however, was not to write a textbook or a patient guide, but a book that is fun for lay readers. Jochen, who did a postdoc in my Department of Experimental Neurology, holds a PhD in neurobiology but is also a successful blogger, author, and above all a semi-professional science slammer. Accordingly, he slams quite vigorously in the book, and my role is to keep him grounded (in the neuroscience). It was great fun. Published by Droemer, in print and as a Kindle edition.
Review of the book by Prof. Dr. Arno Villringer in Neuroforum
From the preface: Despite major advances in prevention, acute treatment, and rehabilitation, stroke remains a major burden on patients, relatives, and economies. The role and potential benefits of experimental models of stroke (i.e., focal cerebral ischemia) in rodents have recently been debated. Critics argue that numerous treatment strategies were tested successfully in models only to prove dismal failures when tested in controlled clinical trials.
When methods of systematic review and meta-analysis are applied, however, it turns out that experimental models did in fact faithfully predict the negative outcomes of clinical trials. For example, thrombolysis with tissue plasminogen activator (t-PA), the only clinically effective pharmacological treatment of acute ischemic stroke, was first demonstrated and evaluated in an experimental model of stroke. Many other examples document the positive predictive value of rodent stroke models even beyond the brain, such as changes in the immune system and susceptibility to infection after stroke; these were first described in rodents and can be faithfully modeled in them.
Radio feature (5 min) by Hellmuth Norwig on SWR about the unequal distribution of the sexes in animal experiments. The animals used are predominantly male; only rarely is the sex chosen to match the sex distribution of the disease in humans.
Based on research, mainly in rodents, tremendous progress has been made in our basic understanding of the pathophysiology of stroke. After many failures, however, few scientists today deny that bench-to-bedside translation in stroke has a disappointing track record. Here I summarize measures to improve the predictiveness of preclinical stroke research, some of which are currently in various stages of implementation: We must reduce preventable (detrimental) attrition. Key measures for this revolve around improving preclinical study design. Internal validity must be improved by reducing bias; external validity will improve by including aged, comorbid rodents of both sexes in our modeling. False positives and inflated effect sizes can be reduced by increasing statistical power, which necessitates increasing group sizes. Compliance with reporting guidelines and checklists needs to be enforced by journals and funders. Customizing study designs to exploratory and confirmatory studies will leverage the complementary strengths of both modes of investigation. All studies should publish their full data sets. On the other hand, we should embrace inevitable null results. This entails planning experiments in such a way that they produce high-quality evidence when null results are obtained, and making these available to the community. A collaborative effort is needed to implement some of these recommendations. Just as in clinical medicine, multicenter approaches help to obtain sufficient group sizes and robust results. Translational stroke research is not broken, but its engine needs an overhaul to deliver more predictive results.
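The point about statistical power and group sizes can be made concrete with a quick sample-size calculation. The sketch below uses the standard normal approximation for a two-sided two-sample comparison; the effect sizes and thresholds are illustrative, not taken from any particular stroke study:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate animals needed per group for a two-sided two-sample t-test.

    Normal approximation: slightly underestimates the exact t-based answer,
    but good enough to show how n scales with the expected effect size.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Even a "large" effect (Cohen's d = 0.8) requires ~25 animals per group;
# halving the expected effect size roughly quadruples the requirement.
print(n_per_group(0.8))  # 25
print(n_per_group(0.4))  # 99
```

Typical preclinical stroke studies use far fewer animals per group, which is one reason multicenter preclinical trials are attractive: they pool group sizes that no single laboratory could achieve.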
Read the full article at the publisher’s site (STROKE/AHA). If your library does not have a subscription, here is the author’s manuscript (Stroke/AHA did not allow me even to pay for open access, as it is ‘a special article…’).
Recently, NIH scientist B. Ian Hutchins and colleagues (pre)published “The Relative Citation Ratio (RCR): A new metric that uses citation rates to measure influence at the article level”. [Note added 9.9.2016: a peer-reviewed version of the article has now appeared in PLOS Biology.] Like Stefano Bertuzzi, the Executive Director of the American Society for Cell Biology, I am enthusiastic about the RCR. It appears to be a viable alternative to the widely (ab)used Journal Impact Factor (JIF).
The RCR has recently been discussed in several blogs and editorials (e.g. “NIH metric that assesses article impact stirs debate”; “NIH’s new citation metric: A step forward in quantifying scientific impact?”). At a recent workshop organized by the National Library of Medicine (NLM) I learned that the NIH plans to use the RCR widely in its own grant assessments as an antidote to the JIF, raw article citation counts, h-indices, and other highly problematic or outright flawed metrics.
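To make the idea behind the metric tangible: at its core, the RCR divides an article’s citation rate by the citation rate expected for its field. The sketch below is a deliberate simplification, not the authors’ actual algorithm; the real method derives the field citation rate from the article’s co-citation network and calibrates it against a benchmark set of NIH-funded papers, and all numbers here are made up for illustration:

```python
# Toy illustration of the Relative Citation Ratio (RCR) concept.
# Simplified: the published method computes the field citation rate
# from the article's co-citation network and benchmarks it so that
# the median NIH-funded paper scores around 1.0.

def citation_rate(citations: int, years_since_publication: int) -> float:
    """Citations per year since publication."""
    return citations / max(years_since_publication, 1)

def relative_citation_ratio(article_rate: float, field_rate: float) -> float:
    """Article citation rate relative to its field's expected rate.

    1.0 means the article is cited at the field-average rate;
    2.0 means twice the field average, regardless of journal.
    """
    return article_rate / field_rate

acr = citation_rate(citations=60, years_since_publication=4)  # 15.0 per year
fcr = 10.0  # assumed expected citation rate of the article's field
print(relative_citation_ratio(acr, fcr))  # 1.5
```

The key design point, and the reason it works at the article level, is the denominator: because the field rate comes from each paper’s own citation neighborhood rather than from the journal it appeared in, a highly influential paper in a small field is not penalized the way journal-level metrics like the JIF penalize it.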