Category: Statistics

Diet: Is nutrition science a more reliable source of advice than your grandmother?

Meat consumption is bad for your health. It gives you cancer, heart attacks, strokes, you name it. So says nutrition science. And they must know; after all, it’s a science. Or is it, really?

A few years ago, Jonathan Schoenfeld and John Ioannidis took a standard cookbook and randomly selected 50 frequently occurring ingredients (sugar, coffee, salt, etc.). They then carried out a systematic literature search, asking whether there were epidemiological studies that had investigated the cancer risk of these ingredients. And they found what they were looking for: for 80% of the ingredients at least one study existed, for many even several. Of 264 of these studies, 103 found that the ingredient investigated increased the risk of cancer, while 88 found that it reduced the risk! So Joe Jackson was right after all: ‘Everything gives you cancer’! But wait a minute: milk? Veal? Orange juice? Continue reading
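This flood of contradictory findings is easy to reproduce in silico. The sketch below is not the authors’ method; it simply assumes many independent, naive significance tests of ingredients that in truth have no effect at all, and counts how often chance alone produces a “risk” or a “protective” verdict:

```python
import random

random.seed(1)

def simulate_studies(n_studies, true_effect=0.0):
    """Simulate naive study outcomes under a null (or tiny) effect.

    Each 'study' observes the true effect plus standard-normal noise
    and declares significance if it lands in either 2.5% tail
    (the conventional two-sided alpha = 0.05).
    """
    increased = decreased = null = 0
    for _ in range(n_studies):
        observed = random.gauss(true_effect, 1.0)
        if observed > 1.96:        # 'ingredient increases cancer risk'
            increased += 1
        elif observed < -1.96:     # 'ingredient reduces cancer risk'
            decreased += 1
        else:
            null += 1
    return increased, decreased, null

inc, dec, null = simulate_studies(10_000)
print(inc, dec, null)  # roughly 250 / 250 / 9500 under a true null
```

Even with no real effects anywhere, about 5% of studies will report something, split between “causes cancer” and “prevents cancer”; selective publication of only the significant ones then yields exactly the contradictory literature Schoenfeld and Ioannidis documented.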

Podcast mania

Recently I have been a guest on a number of podcasts and in longer interviews. For audiophiles, here are the links:


Spektrum der Wissenschaft

AUS FORSCHUNG WIRD GESUNDHEIT (“From Research to Health”)
How good is biomedical research?
Stefanie Seltmann

Today we ask: how good is biomedical research? Is it true, as John Ioannidis of Stanford University has claimed, that half of all scientific articles are wrong? Professor Ulrich Dirnagl can answer this question for me. At the Berlin Institute of Health he heads the BIH QUEST Center, which studies quality and ethics in science. He has invited John Ioannidis to the BIH to work with him as an Einstein BIH Visiting Fellow.

https://www.spektrum.de/podcast/wie-gut-ist-die-biomedizinische-forschung/1702044

14 min


Inforadio Berlin

Fiction and Truth in Research

Ulrich Dirnagl is Professor of Neurology at the Charité, and a “Wissenschaftsnarr” (science jester), under which name he writes a regular column in Laborjournal. With Thomas Prinzler he talks about quality and ethics in biomedical research, because all too often results are omitted or even falsified.

https://www.inforadio.de/programm/schema/sendungen/wissenswerte/202002/09/wissenschaft-forschung-medizin-ethik-faelschung-betrug.html

15 min


Deutschlandfunk Kultur

Where Professor Chance Reigns

Too few test subjects, sloppily planned experiments, no replication of the findings: many biomedical studies have weaknesses. Weaknesses so great, says neurologist Ulrich Dirnagl, that one might just as well toss a coin.

https://www.deutschlandfunkkultur.de/biomedizinische-studien-wo-professor-zufall-regiert.976.de.html?dram:article_id=458680#

7 min


Podcast Spektrum – Wirkstoffradio (André Lampe und Bernd Rupp)

WSR019 Stroke, Stroke Units and the Responsibility of Research

150 min!


Podcast Kritisches Denken (Philip Barth, Andreas Blessing)

Episode 25 – Quality in Research

In the first part of our conversation with Prof. Ulrich Dirnagl of the Charité Berlin, we talk about structural problems in the research landscape, the reproducibility crisis, and the p-value. Episode details

https://kritisches-denken-podcast.de/episode-25-qualitaet-in-der-forschung/

40 min


Podcast Kritisches Denken (Philip Barth, Andreas Blessing)

Episode 26 – Microbiome Research and Other Hype Cycles

In part 2 of the conversation with Prof. Ulrich Dirnagl, we talk about microbiome research and how hype cycles play out in science.

https://kritisches-denken-podcast.de/episode-26-mikrobiomforschung-und-andere-hype-zyklen/

60 min


Podcast Gesundheit Macht Politik

Ulrich Dirnagl | Forschung: This is an intergalactic emergency

https://gmp-podcast.de/blog/gmp053/


And here’s a video cast from the European Academy of Neurology

EAN 2019: Charles Edouard Brown-Séquard Lecture – Interview with Prof. Ulrich Dirnagl

Could gambling save science?

U.S. economist Robin Hanson posed this question in the title of an article published in 1995. In it he suggested replacing the classic review process with a market-based alternative: instead of peer review, bets would decide which projects get supported or which scientific questions are prioritized. In these so-called “prediction markets”, individuals stake bets on a particular result or outcome. The more people trade on the marketplace, the more precise the predicted outcome becomes, because the market price aggregates the information of all participants. The prediction market thus harnesses the wisdom of the crowd. We know this from sports betting and election forecasts. But in science? It sounds totally crazy, but it isn’t: prediction markets are currently making their entry into various branches of science. How do they function, and what do they have going for them? Continue reading
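The mechanics can be made concrete with Hanson’s own logarithmic market scoring rule (LMSR). The class name, the two-outcome framing (“replicates” vs. “does not replicate”) and the liquidity parameter are illustrative assumptions, but the cost and price formulas are the standard LMSR ones:

```python
import math

class LMSRMarket:
    """Minimal sketch of Hanson's logarithmic market scoring rule (LMSR).

    Two outcomes, e.g. 'study replicates' vs. 'study does not replicate'.
    Traders buy shares in an outcome; the current prices can be read off
    directly as probabilities. The liquidity parameter b controls how
    strongly a single trade moves the price.
    """

    def __init__(self, b=100.0):
        self.b = b
        self.shares = [0.0, 0.0]  # outstanding shares per outcome

    def _cost(self, shares):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(q / self.b) for q in shares))

    def price(self, outcome):
        """Current market probability of an outcome."""
        exps = [math.exp(q / self.b) for q in self.shares]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, amount):
        """Buy `amount` shares of an outcome; returns the trader's cost."""
        new = list(self.shares)
        new[outcome] += amount
        cost = self._cost(new) - self._cost(self.shares)
        self.shares = new
        return cost

market = LMSRMarket()
print(market.price(0))   # even odds (0.5) before any trades
market.buy(0, 50)        # a trader bets on successful replication
print(market.price(0))   # price, i.e. the market's probability, rises
```

Because every trade has a well-defined cost, participants are rewarded exactly in proportion to how much information their bet adds, which is what makes the aggregated price a usable forecast.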

Your Lab is Closer to the Bedside Than You Think


With a half-page article written about him and his study, a previously unknown Israeli radiologist made it into the New York Times (NYT 2009). Dr. Yehonatan Turner presented computed tomography (CT) scans to radiologists and asked them to make a diagnosis. The catch: along with the CT, a current portrait photograph of the patient was presented to the physicians. Remember, radiologists very often do not see their patients; they make their diagnosis in a dark room, staring at a screen. In his study Dr. Turner used a smart cross-over design: he first showed the CT together with a portrait photograph of the patient to one group of radiologists. Three months later the same group had to make a diagnosis from the same CT, but without the photo. Another group of radiologists was first given only the CT and then, three months later, the CT with the photo. A further control group examined only the CTs, as in routine practice. The hypothesis: when a radiologist is exposed to the individual patient, and not only to an anatomical finding on a scan, she will be more conscious of her own responsibility; hence her findings will be more thorough and her diagnosis more accurate. And that is in fact what he found. The radiologists reported that they had more empathy with the patient and that they “felt like doctors”. And they spotted more irregularities and pathological findings when they had the CT and photo in front of them than when they were looking at the CT alone (Turner and Hadas-Halpern 2008).

So how about showing researchers in basic and preclinical biomedicine photos of patients with the disease they are currently investigating in a model of the disease? Continue reading

No scientific progress without non-reproducibility?

Slides of my talk at the FENS satellite event ‘How reproducible are your data?’, Freie Universität Berlin, 6 July 2018


  1. Let’s get this out of the way: Reproducibility is a cornerstone of science: Bacon, Boyle, Popper, Rheinberger
  2. A ‘lexicon’ of reproducibility: Goodman et al.
  3. What do we mean by ‘reproducible’? Open Science collaboration, Psychology replication
  4. Reproducible – non reproducible – A false dichotomy: Sizeless science, almost as bad as ‘significant vs non-significant’
  5. The emptiness of failed replication? How informative is non-replication?
  6. Hidden moderators – Contextual sensitivity – Tacit knowledge
  7. “Standardization fallacy”: Low external validity, poor reproducibility
  8. The stigma of non-replication (‘incompetence’) – the stigma of the replicator (‘boring science’)
  9. How likely is strict replication?
  10. Non-reproducibility must occur at the scientific frontier: Low base rate (prior probability), low hanging fruit already picked: Many false positives – non-reproducibility
  11. Confirmation – weeding out the false positives of exploration
  12. Reward the replicators and the replicated – fund replications. Do not stigmatize non-replication, or the replicators.
  13. Resolving the tension: The Siamese Twins of discovery & replication
  14. Conclusion: No scientific progress without non-reproducibility: essential non-reproducibility vs. detrimental non-reproducibility
  15. Further reading

Open Science Collaboration, Psychology Replication, Science. 2015 ;349(6251):aac4716

Goodman et al. Sci Transl Med. 2016;8:341ps12.

https://dirnagl.com/2018/05/16/can-non-replication-be-a-sin/

https://dirnagl.com/2017/04/13/how-original-are-your-scientific-hypotheses-really
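Slide 10’s claim, that a low base rate at the scientific frontier guarantees many false positives, can be made concrete with the standard positive-predictive-value calculation popularized by Ioannidis. The specific numbers below are illustrative assumptions, not figures from the talk:

```python
def ppv(prior, power=0.8, alpha=0.05):
    """Positive predictive value: probability a 'significant' finding is true.

    prior: pre-study probability that the tested hypothesis is true
    power: probability of detecting a true effect (1 - beta)
    alpha: false-positive rate of the test
    """
    true_pos = power * prior          # true effects correctly detected
    false_pos = alpha * (1 - prior)   # null effects falsely flagged
    return true_pos / (true_pos + false_pos)

# At the exploratory frontier priors are low, so most positives are false:
print(round(ppv(0.05), 2))   # ~0.46: fewer than half of 'discoveries' are true
# Confirmatory research starts from a strong prior:
print(round(ppv(0.5), 2))    # ~0.94: replication weeds out the false positives
```

This is the arithmetic behind slides 10–13: non-reproducibility is an expected feature of exploration, and confirmation is the step that raises the predictive value.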

Train PIs and Professors!

There is a lot of thinking going on today about how research can be made more efficient, more robust, and more reproducible. At the top of the list are measures for improving internal validity (for example randomizing and blinding, prespecified inclusion and exclusion criteria etc.), measures for increasing sample sizes and thus statistical power, putting an end to the fetishization of the p-value, and open access to original data (open science). Funders and journals are raising the bar for applicants and authors by demanding measures to safeguard the validity of the research submitted to them.
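The point about sample sizes and statistical power is easy to demonstrate by simulation. This is a generic sketch (effect size, group sizes and the z-style test are my assumptions, not figures from any particular study):

```python
import random
import statistics

random.seed(2)

def power_sim(n, effect=0.5, sims=2000, alpha_z=1.96):
    """Estimate the power of a two-group comparison by simulation.

    Draws two groups of size n whose means differ by `effect` (in
    standard-deviation units) and counts how often a simple z-style
    test on the difference of means exceeds the 5% threshold.
    """
    hits = 0
    for _ in range(sims):
        a = [random.gauss(0.0, 1.0) for _ in range(n)]
        b = [random.gauss(effect, 1.0) for _ in range(n)]
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        z = (statistics.mean(b) - statistics.mean(a)) / se
        if abs(z) > alpha_z:
            hits += 1
    return hits / sims

print(power_sim(10))   # small groups: power far below the usual 80% target
print(power_sim(64))   # ~64 per group: close to 80% for a medium effect
```

With ten animals or subjects per group, a realistic medium effect is detected only a small fraction of the time; the many resulting “negative” and chance-positive experiments are exactly what fuels the reproducibility problem the measures above address.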

Students and young researchers have taken note, too. I teach, among other things, statistics, good scientific practice and experimental design, and I am impressed every time by the enthusiasm of the students and young postdocs, and by how they leap into the adventure of their scientific projects with the unbending will to “do it right”. They soak up suggestions for improving the reproducibility and robustness of their research projects like a dry sponge soaks up water. Often, however, the discussion ends unsatisfyingly, especially when we discuss the students’ own experiments and approaches to research work. I often hear: “That’s all very good and fine, but my group leader won’t go along with it.” Group leaders would tell them: “That is the way we have always done it, and it got us published in Nature and Science”, “If we do it the way you suggest, it won’t get through the review process”, or “Then we could only get it published in PLOS One (or Peer J, F1000 Research etc.), and the paper will contaminate your CV”, etc.

I often wish that not only the students would be sitting in the seminar room, but also their supervisors with them! Continue reading

Can (Non)-Replication be a Sin?

I failed to reproduce the results of my experiments! Some of us are haunted by this horror vision. The scientific academies, the journals, and by now the funders themselves are all calling for reproducibility, replicability and robustness of research. A movement for “reproducible science” has developed, and funding programs for the replication of research papers are now in the works. In some branches of science, especially in psychology but also in fields like cancer research, results are now being systematically replicated… or not, which is why we now find ourselves in the throes of a “reproducibility crisis”.
Now Daniele Fanelli, a scientist who until now could be expected to side with the supporters of the reproducible-science movement, has raised a warning voice. In the prestigious Proceedings of the National Academy of Sciences he asked rhetorically: “Is science really facing a reproducibility crisis, and do we need it to?” So today, on the eve, perhaps, of a budding oppositional movement, I want to have a look at some of the objections to the “reproducible science” mantra. Is reproducibility of results really the foundation of the scientific method? Continue reading