Research using animals is a sensitive issue. Anyone who does animal experiments, as I do, is reluctant to talk about it, at least outside our natural habitat, the laboratory or scientific conferences. Institutions where animal experiments are carried out are also quite shy about the topic. Recently, the Max Planck Society left Nikos Logothetis (MPI Tübingen) out in the cold when he was targeted by a media campaign. Now he and some of his laboratory staff are off to Shanghai… The websites of prominent research institutes feature all kinds of colorful illustrations: immunohistochemistry slides, doctors and students in white coats with pipettes in hand, sitting at computers or microscopes. But rats and mice are conspicuously missing! The institutes proudly display their research activities and enthusiastically advertise (future) breakthroughs promising completely new and effective therapies. But no reference is made to the animal experiments on campus!
Spektrum der Wissenschaft
AUS FORSCHUNG WIRD GESUNDHEIT
How good is biomedical research?
Today we ask: How good is biomedical research? Is it true, as John Ioannidis of Stanford University has claimed, that half of all scientific articles are wrong? Professor Ulrich Dirnagl can answer this question for me. At the Berlin Institute of Health he heads the BIH QUEST Center, which studies quality and ethics in science. He has invited John Ioannidis to the BIH to work with him as an Einstein BIH Visiting Fellow.
Fiction and Truth in Research
Ulrich Dirnagl is Professor of Neurology at the Charité – and a “Wissenschaftsnarr” (“science jester”), under which moniker he regularly writes a column in “Laborjournal”. With Thomas Prinzler he talks about quality and ethics in biomedical research – because all too often, results are omitted or even falsified.
Where Professor Chance Reigns
Too few subjects, poorly designed experiments, no replication of the study: many biomedical studies have weaknesses – weaknesses so great that one might just as well toss a coin instead, says neurologist Ulrich Dirnagl.
Podcast Spektrum – Wirkstoffradio (André Lampe and Bernd Rupp)
Podcast Kritisches Denken (Philip Barth, Andreas Blessing)
Episode 25 – Quality in Research
In the first part of our conversation with Prof. Ulrich Dirnagl of the Charité Berlin, we talk about structural problems in the research landscape, the reproducibility crisis, and the p-value. Details on the episode
Podcast Kritisches Denken (Philip Barth, Andreas Blessing)
Episode 26 – Microbiome Research and Other Hype Cycles
In part 2 of our conversation with Prof. Ulrich Dirnagl, we talk about microbiome research and how hype cycles play out in science.
Podcast Gesundheit Macht Politik
Ulrich Dirnagl | Research: This is an intergalactic emergency
And here’s a video cast from the European Academy of Neurology
You have got to see this YouTube video! Hectically cut sequences of busy young scientists in high-tech laboratories wearing lab coats; nerdy-looking guys soldering electronic circuits and staring into oscilloscopes; a roller-coaster ride through an animated brain chock-full of tangled nerve cells. And in between all this, on stage at the California Academy of Sciences, car and rocket manufacturer Elon Musk announces his latest vision in a messianic pose: the symbiosis of the human brain with artificial intelligence (AI)! This time his plan to save mankind does not involve mass evacuation to Mars, but is to be realized by a revolutionary brain-machine interface (BMI), designed and manufactured by his company Neuralink. You may have guessed it: this caused a tremendous media hype all over the world. The verdict in the press and on the net was: “Musk at his best, a bit over the edge, but if HE announces a breakthrough like that, there must be something to it.” The more cautious asked: “But couldn’t this be dangerous for mankind? Do we need a new ethics for things like this?” Continue reading
Recently, in a train-station bookshop, I stood gaping in astonishment in front of a thematically highly specialized book display: the gut-brain table. The books piled up on it promised enlightenment about how the gut, and in particular its contents, influence us – indeed, how they verily steer our emotions. A selection of titles: “Shit-Wise – How a Healthy Intestinal Flora Keeps Us Fit”; “Gut Heals Brain Heals Body”; “Happiness Begins in the Gut”; or “The Second Brain – How the Gut Influences Our Mood, Our Decisions and Our Sense of Wellbeing”. Newspapers, magazines and the internet tell us the same: the wrong gut bacteria make us depressive – but the right ones make us happy … which is why yogurt helps against depression. Continue reading
U.S. economist Robin Hanson posed this question in the title of an article published in 1995, in which he suggested replacing the classic review process with a market-based alternative. Instead of peer review, bets could decide which projects are funded or which scientific questions are prioritized. In these so-called prediction markets, individuals stake bets on a particular result or outcome. The more people trade on the marketplace, the more precise the predicted outcome becomes, based as it is on the aggregated information of the participants. The prediction market thus taps the wisdom of the crowd. We know this from sports betting and election forecasts. But in science? It sounds totally crazy, but it isn’t: prediction markets are just now making their entry into various branches of science. How do they function, and what do they have going for them? Continue reading
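To make the aggregation mechanism concrete: one widely used market maker for such markets is Hanson’s logarithmic market scoring rule (LMSR). The sketch below is a minimal, illustrative implementation only – the outcome labels and the liquidity parameter `b` are invented for the example, and real scientific prediction markets (e.g. replication markets) add many practical details on top.

```python
import math

def lmsr_prices(shares, b=100.0):
    """Current market prices (interpretable as probabilities) under the
    Logarithmic Market Scoring Rule: p_i = exp(q_i/b) / sum_j exp(q_j/b)."""
    exps = [math.exp(q / b) for q in shares]
    total = sum(exps)
    return [e / total for e in exps]

def lmsr_cost(shares, b=100.0):
    """LMSR cost function C(q) = b * log(sum_j exp(q_j/b)).
    A trader buying delta shares pays C(q + delta) - C(q)."""
    return b * math.log(sum(math.exp(q / b) for q in shares))

# Two outcomes for a hypothetical study: ["replicates", "does not replicate"].
shares = [0.0, 0.0]                     # no bets yet -> market sits at 0.5 / 0.5
print(lmsr_prices(shares))

# A trader who believes in replication buys 50 "replicates" shares.
cost_before = lmsr_cost(shares)
shares[0] += 50.0
print(lmsr_cost(shares) - cost_before)  # what this trade costs the trader
print(lmsr_prices(shares))              # price of "replicates" has risen above 0.5
```

The point of the mechanism: each trade moves the price, and the price of an outcome is the market’s aggregated probability estimate for it – exactly the quantity that, in Hanson’s proposal, would stand in for a reviewer panel’s judgment.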
- Let’s get this out of the way: Reproducibility is a cornerstone of science: Bacon, Boyle, Popper, Rheinberger
- A ‘lexicon’ of reproducibility: Goodman et al.
- What do we mean by ‘reproducible’? Open Science collaboration, Psychology replication
- Reproducible – non reproducible – A false dichotomy: Sizeless science, almost as bad as ‘significant vs non-significant’
- The emptiness of failed replication? How informative is non-replication?
- Hidden moderators – Contextual sensitivity – Tacit knowledge
- “Standardization fallacy”: Low external validity, poor reproducibility
- The stigma of non-replication (‘incompetence’) – the stigma of the replicator (‘boring science’)
- How likely is strict replication?
- Non-reproducibility must occur at the scientific frontier: Low base rate (prior probability), low hanging fruit already picked: Many false positives – non-reproducibility
- Confirmation – weeding out the false positives of exploration
- Reward the replicators and the replicated – fund replications. Do not stigmatize non-replication, or the replicators.
- Resolving the tension: The Siamese Twins of discovery & replication
- Conclusion: No scientific progress without non-reproducibility: essential non-reproducibility vs. detrimental non-reproducibility
- Further reading
There is a lot of thinking going on today about how research can be made more efficient, more robust, and more reproducible. At the top of the list are measures for improving internal validity (for example randomizing and blinding, prespecified inclusion and exclusion criteria etc.), measures for increasing sample sizes and thus statistical power, putting an end to the fetishization of the p-value, and open access to original data (open science). Funders and journals are raising the bar for applicants and authors by demanding measures to safeguard the validity of the research submitted to them.
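To give a feel for what “increasing sample sizes and thus statistical power” means in numbers, here is a small sketch using the standard normal-approximation formula for the sample size of a two-sided, two-group comparison of means. The effect sizes are illustrative, not taken from any particular study, and the approximation slightly undershoots exact t-test calculations.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided two-sample
    comparison of means: n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2,
    where d is the standardized effect size (Cohen's d)."""
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size ** 2
    return math.ceil(n)

print(n_per_group(0.8))  # "large" effect (d = 0.8)    -> 25 per group
print(n_per_group(0.5))  # "moderate" effect (d = 0.5) -> 63 per group
```

Even a “large” standardized effect requires about 25 subjects per group for 80% power at α = 0.05 – far larger than the group sizes typical of many preclinical experiments, which is why underpowered studies are so common.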
Students and young researchers have taken note, too. I teach, among other things, statistics, good scientific practice and experimental design, and I am impressed every time by the enthusiasm of the students and young postdocs, and by how they leap into the adventure of their scientific projects with the unbending will to “do it right”. They soak up suggestions for improving the reproducibility and robustness of their research projects like a dry sponge soaks up water. Often, however, the discussion ends unsatisfyingly, especially when we discuss the students’ own experiments and approaches to research work. I often hear: “That’s all well and good, but it won’t fly with my group leader.” Group leaders tell them: “That is the way we have always done it, and it got us published in Nature and Science”; “If we do it the way you suggest, it won’t get through the review process”; or “Then we could only get it published in PLOS One (or PeerJ, F1000Research, etc.), and such a paper will contaminate your CV.”
I often wish that not only the students were sitting in the seminar room, but their supervisors along with them! Continue reading