Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. The aim of our recent article in PLOS Biology is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulated data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even with the very small sample sizes currently prevalent in preclinical experimental biomedicine. We argue that these savings should be reinvested to increase sample sizes and hence statistical power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness of this research domain.
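The basic idea can be sketched in a few lines of Python. This is a toy illustration, not the simulation from the article: the two-sided z-test, the single interim look at half the sample, and the Pocock-style per-look alpha of 0.0294 are all assumptions made here for brevity.

```python
import math
import random

def z_test_p(a, b):
    """Two-sided p-value from a two-sample z-test (normal approximation)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_trial(n_max, effect, alpha_interim, alpha_final):
    """One experiment with a single interim look at n_max // 2 per group.
    Returns (null_rejected, animals_used)."""
    a = [random.gauss(effect, 1) for _ in range(n_max)]  # treated group
    b = [random.gauss(0, 1) for _ in range(n_max)]       # control group
    half = n_max // 2
    if z_test_p(a[:half], b[:half]) < alpha_interim:
        return True, 2 * half                            # stopped early for efficacy
    return z_test_p(a, b) < alpha_final, 2 * n_max

random.seed(1)
n_sims, n_max, effect = 2000, 10, 1.0
# Pocock-style boundaries for one interim plus final look (~0.05 overall)
results = [run_trial(n_max, effect, 0.0294, 0.0294) for _ in range(n_sims)]
power = sum(r for r, _ in results) / n_sims
mean_n = sum(n for _, n in results) / n_sims
print(f"power ~ {power:.2f}, mean animals used ~ {mean_n:.1f} (fixed design: {2 * n_max})")
```

Because some experiments stop at the interim analysis, the average number of animals used falls below the fixed-design total, which is the efficiency gain the article quantifies properly with calibrated stopping boundaries.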
PLoS Biol. 2017 Mar 10;15(3):e2001307. doi: 10.1371/journal.pbio.2001307.
The negotiation of a new Germany-wide arrangement with Elsevier and other publishers for online access to journals has stalled, as Elsevier did not agree to the terms proposed by a consortium of many German universities. Meanwhile, many of these universities have cancelled their subscriptions with Elsevier. Scientists trying to access Elsevier journals now need to keep their credit cards on standby to read (their own!) articles. Frustrating as this may be, I find it good news that the struggle of scientists for open access to their work is coming to a head. For the first time, many researchers are starting to grasp the business model behind scientific publishing. They naively thought ‘open access’ means that, while working on their computers within the intranet of their universities, a simple click gives them free access to any paper they want. They might now join the still small band of ‘activists’ fighting for novel publishing models. Two of those activists, Romain Brette and Björn Brembs, have recently provided very useful resources for concerned researchers (So Your Institute Went Cold Turkey On Publisher X. What Now?), as well as a vision of the post-journal world and thoughtful suggestions on how we can ‘help move science to the post-journal world‘.
… is the title of a recent article by Jonas and Kording, published in PLOS Computational Biology and featured in The Economist. The Economist summarizes their findings by stating that ‘testing the methods of neuroscience on computer chips suggests they are wanting’, and on the magazine cover labels neuroscience’s toolkit as ‘faulty’.
Jonas and Kording used a simple microchip (one used in ‘prehistoric’ game computers like the Atari) and asked whether the chip could be ‘understood’ by applying the same approaches used by the large-scale human brain projects. These multibillion-dollar consortia work under the premise that the human brain works like a supercomputer – doesn’t it process information and use electrical currents? So if you understand the wiring diagram (‘the connectome’) and the firing of electrical signals through it, you should be able to model its working principle. What you need is just lots of data, and heavy computing. Jonas and Kording used this approach and checked whether it allowed them to understand how the game chip works. Since we already know how it works (because it was engineered in the first place), we can test how far this approach takes us. They even threw in ‘interventions’, very similar to how modern neuroscience started, when neurologists like Paul Broca used structural lesions (e.g. after infarction) in their patients’ brains to make inferences about the functions of specific brain regions. So what happens to Donkey Kong if you tinker with a few transistors on the chip, and what does it tell you about their function? If you follow Jonas and Kording, not much. They conclude that current analytic approaches in neuroscience ‘may fall short of producing meaningful understanding of neural systems, regardless of the amount of data’.
So the methods used by the BRAIN Initiative or the Human Brain Project may be wanting – but what if it is even worse, and their basic tenet (‘The human brain is a computer’) is wrong, and the hype around those projects is not only methodologically but also conceptually flawed? In a recent post in AEON, Robert Epstein argues that ‘your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer’. Click here to follow his argument why it is silly to believe that brains must be information processors just because computers are information processors.
Errare humanum est – To err is human. Biomedical research, a human enterprise, is no exception in this regard. Ever more sophisticated methodologies probing how complex organisms function in health and disease invite errors on all levels – from designing experiments and studies to collecting data and reporting results. The stakes are high, in terms of resources spent and professional rewards to be gained by individuals.
Recent concerns about the reliability and reproducibility of biomedical research have focused on weaknesses in planning, conducting, analysing, and reporting research. Clearly, the discussion revolves around factors that negatively impact the quality of research – and that may be remedied by structured measures to improve research quality. However, the potential contribution of errors to the disappointingly low level of reproducibility and predictiveness of biomedical research, and how scientists deal with these errors, has not yet been considered.
In a PLOS Biology article which appeared this week we propose the implementation of a simple and effective method to enhance the quality of basic and preclinical academic research: critical incident reporting (CIR). CIR has become a standard in clinical medicine but to our knowledge has never been implemented in the context of academic basic research. We provide a simple, free, open-source software tool for implementing a CIR system in research groups, laboratories, or large institutions (LabCIRS). LabCIRS was developed, tested, and implemented in our multidisciplinary and multiprofessional neuroscience research department. It is accepted by all members of the department, has led to the emergence of a mature error culture, and has made the laboratory a safer and more communicative environment. Initial concerns that implementation of such a measure might lead to a “surveillance culture” that would stifle scientific creativity turned out to be unfounded.
A demo version and source code of LabCIRS can be found via the supplement of the article.
It took a while until this next post, and now it is another book, in German again, and for a lay audience at that… [Sorry, this is about a book, and unfortunately it is in German…]. But then, writing it with Jochen Müller also took a while. It was Jochen’s idea to explain brain function through the functional deficits that occur in neurological diseases. And that is what we do, using headache, multiple sclerosis, stroke, Parkinson’s and Alzheimer’s disease, and epilepsy. A book to read about how the brain works – at least as far as we currently know. The book naturally also deals with these diseases and their treatment. The goal, however, was not to write a textbook or a patient guide, but a book for lay readers that is fun to read. Jochen, who did a postdoc in my Department of Experimental Neurology, holds a PhD in neurobiology, but is also a successful blogger, author, and above all a semi-professional science slammer. Accordingly, he slams quite vigorously in the book, and my role is to keep him grounded (in the neuroscience). It was great fun. Available from Droemer, in print and for Kindle.
Review of the book by Prof. Dr. Arno Villringer in Neuroforum
From the preface: Despite major advances in prevention, acute treatment, and rehabilitation, stroke remains a major burden on patients, relatives, and economies. The role and potential benefits of experimental models of stroke (i.e. focal cerebral ischemia) in rodents have recently been debated. Critics argue that numerous treatment strategies have been tested successfully in models, only to fail dismally in controlled clinical trials.
When methods of systematic review and meta-analysis are applied, however, it turns out that experimental models actually did faithfully predict the negative outcomes of clinical trials. For example, thrombolysis with tissue plasminogen activator (t-PA), the only clinically effective pharmacological treatment of acute ischemic stroke, was first demonstrated and evaluated in an experimental model of stroke. Many other examples document the positive predictive value of rodent stroke models, even beyond the brain, such as changes in the immune system and susceptibility to infection after stroke, which were first described in rodents and can be faithfully modeled in them.
Radio feature (5 min) by Hellmuth Norwig on SWR about the unequal distribution of the sexes in animal experiments: the animals used are predominantly male, and only in very few cases is the sex chosen to reflect the sex distribution of the disease in humans.