Category: Meta-analysis
“The Ioannidis Affair”
On March 17th, just as many countries were taking draconian measures to contain the SARS-CoV-2 pandemic, the Greek-American meta-researcher and epidemiologist John Ioannidis, whom I often quote in my posts, proclaimed a “fiasco in the making”! With strong language and a few ad hoc estimates of COVID fatality rates, he warned that, based on poor data or no evidence at all, politicians might inflict incalculable damage on society, possibly much worse than what a virus, putatively as dangerous as influenza, could cause. As one of the most highly cited researchers in the world and a vocal critic of quality problems in biomedicine, Ioannidis has since received a great deal of attention for his COVID-related interviews, opinion pieces and articles: in the scientific community, in the lay press, and especially among his worldwide fan base. Continue reading
Diet: Is nutrition science a more reliable source of advice than your grandmother?
Meat consumption is bad for your health. It gives you cancer, heart attacks, stroke, you name it. Says nutrition science. And they must know. After all, it’s a science. Is it, really?
A few years ago, Jonathan Schoenfeld and John Ioannidis took a standard cookbook and randomly selected 50 frequently occurring ingredients (sugar, coffee, salt, etc.). They then carried out a systematic literature search, asking whether there were epidemiological studies that had investigated the cancer risk of these ingredients. And they found what they were looking for: for 80% of the ingredients at least one study existed, for many even several. Of 264 of these studies, 103 found that the ingredient investigated increased the risk of cancer, while 88 found that it reduced the risk! So Joe Jackson was right after all: ‘Everything gives you cancer’! But wait a minute: Milk? Veal? Orange juice? Continue reading
Microbe Mania with Melancholic Microbes
Recently, in a train station bookshop, I stood gaping in astonishment in front of a thematically highly specialized book display. It was the bowel–brain table. The books piled up on it promised enlightenment about how the bowel, and in particular its contents, influence us – yes, how they verily steer our emotions. A selection of book titles: “Shit-Wise – How a Healthy Intestinal Flora Keeps us Fit”; “Bowels heal brain heal body”; “Happiness begins in the bowels”, or “The second brain – How the bowels influence our mood, our decisions and our feeling of wellbeing”. Newspapers, magazines and the internet tell us the same. The wrong bowel bacteria make us depressive – but the right ones make us happy … which is why yogurt helps against depression. Continue reading
No scientific progress without non-reproducibility?
Slides of my talk at the FENS Satellite Event ‘How reproducible are your data?’, Freie Universität Berlin, 6 July 2018
- Let’s get this out of the way: Reproducibility is a cornerstone of science: Bacon, Boyle, Popper, Rheinberger
- A ‘lexicon’ of reproducibility: Goodman et al.
- What do we mean by ‘reproducible’? Open Science collaboration, Psychology replication
- Reproducible – non reproducible – A false dichotomy: Sizeless science, almost as bad as ‘significant vs non-significant’
- The emptiness of failed replication? How informative is non-replication?
- Hidden moderators – Contextual sensitivity – Tacit knowledge
- “Standardization fallacy”: Low external validity, poor reproducibility
- The stigma of non-replication (‘incompetence’) – the stigma of the replicator (‘boring science’)
- How likely is strict replication?
- Non-reproducibility must occur at the scientific frontier: Low base rate (prior probability), low hanging fruit already picked: Many false positives – non-reproducibility
- Confirmation – weeding out the false positives of exploration
- Reward the replicators and the replicated – fund replications. Do not stigmatize non-replication, or the replicators.
- Resolving the tension: The Siamese Twins of discovery & replication
- Conclusion: No scientific progress without non-reproducibility: Essential non-reproducibility vs. detrimental non-reproducibility
- Further reading
Open Science Collaboration, Psychology Replication. Science. 2015;349(6251):aac4716.
Goodman et al. Sci Transl Med. 2016;8:341ps12.
https://dirnagl.com/2018/05/16/can-non-replication-be-a-sin/
https://dirnagl.com/2017/04/13/how-original-are-your-scientific-hypotheses-really
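The base-rate argument from the slides above (“low prior probability at the scientific frontier → many false positives”) can be put in numbers with Bayes’ rule. A minimal sketch; the alpha, power and prior values are illustrative assumptions, not figures from the talk:

```python
# Positive predictive value (PPV) of a "significant" finding, as a
# function of the prior probability that a tested hypothesis is true.

def ppv(prior, alpha=0.05, power=0.8):
    """P(hypothesis true | significant result), by Bayes' rule."""
    true_pos = prior * power          # true effects correctly detected
    false_pos = (1 - prior) * alpha   # null effects flagged by chance
    return true_pos / (true_pos + false_pos)

# At the frontier, priors are low, so most positives are false:
for prior in (0.5, 0.1, 0.01):
    print(f"prior={prior:.2f}  PPV={ppv(prior):.2f}")
# prior=0.50 -> PPV ~0.94; prior=0.10 -> PPV 0.64; prior=0.01 -> PPV ~0.14
```

With a fifty-fifty prior, a significant result is probably real; at the exploratory frontier, where perhaps one hypothesis in a hundred is true, most “discoveries” are false positives – which is why non-reproducibility there is expected, not scandalous.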
Can (Non)-Replication be a Sin?
I failed to reproduce the results of my experiments! Some of us are haunted by this horror vision. The scientific academies, the journals, and in the meantime the funders themselves are all calling for reproducibility, replicability and robustness of research. A movement for “reproducible science” has developed, and sponsorship programs for the replication of research papers are now in the works. In some branches of science, especially in psychology, but also in fields like cancer research, results are now being systematically replicated… or not; thus we are now in the throes of a “reproducibility crisis”.
Now Daniele Fanelli, a scientist who up to now could be expected to side with the supporters of the reproducible science movement, has raised a warning voice. In the prestigious Proceedings of the National Academy of Sciences he asked rhetorically: “Is science really facing a reproducibility crisis, and do we need it to?” So today, on the eve, perhaps, of a budding oppositional movement, I want to have a look at some of the objections to the “reproducible science” mantra. Is reproducibility of results really the foundation of the scientific method? Continue reading
Of Mice, Macaques and Men
Tuberculosis kills far more than a million people worldwide per year. The situation is particularly problematic in southern Africa, eastern Europe and Central Asia. There is no truly effective vaccination against tuberculosis (TB). In countries with a high incidence, a live vaccination is carried out with the attenuated vaccine strain Bacillus Calmette-Guérin (BCG), but BCG gives very little protection against tuberculosis of the lungs, and in any case its effect is highly variable and unpredictable. For years, a worldwide search has been going on for a better TB vaccine.
Recently, the British Medical Journal published an investigation raising serious charges against researchers and their universities: conflicts of interest, animal experiments of questionable quality, selective use of data, deception of grant-givers and ethics commissions, all the way up to endangerment of study participants. There was also a whistleblower… who had to pack his bags. It all happened in Oxford, at one of the most prestigious vaccine research institutes on earth, and the study on humans was carried out on infants from the most destitute sections of the population. Let’s have a closer look at this explosive mix, for we have much to learn from it about
- the ethical dimension of preclinical research and the dire consequences that low quality in animal experiments and selective reporting can have;
- the important role of systematic reviews of preclinical research, and finally also about
- the selective (non-)availability and scrutiny of preclinical evidence when commissions and authorities decide on clinical studies.
The Relative Citation Ratio: It won’t do your laundry, but can it exorcise the journal impact factor?
Recently, NIH scientists B. Ian Hutchins and colleagues (pre)published “The Relative Citation Ratio (RCR): A new metric that uses citation rates to measure influence at the article level”. [Note added 9.9.2016: A peer-reviewed version of the article has now appeared in PLOS Biol]. Like Stefano Bertuzzi, the Executive Director of the American Society for Cell Biology, I am enthusiastic about the RCR. It appears to be a viable alternative to the widely (ab)used Journal Impact Factor (JIF).
The RCR has been discussed recently in several blogs and editorials (e.g. NIH metric that assesses article impact stirs debate; NIH’s new citation metric: A step forward in quantifying scientific impact?). At a recent workshop organized by the National Library of Medicine (NLM) I learned that the NIH is planning to use the RCR widely in its own grant assessments as an antidote to the JIF, raw article citations, h-factors, and other highly problematic or outright flawed metrics. Continue reading
Where have all the rodents gone?
Using meta-analysis and computer simulation, we studied the effects of attrition in experimental research on cancer and stroke. The results were published this week in the new meta-research section of PLOS Biology. Given the small sample sizes of preclinical experimentation, it is not surprising that loss of animals can dramatically alter results; however, the extent to which attrition distorts results was unknown. We used a simulation study to analyze the effects of random and biased attrition. As expected, random loss of samples decreased statistical power, but biased removal, including that of outliers, dramatically increased the probability of false-positive results. Next, we performed a meta-analysis of animal reporting and attrition in stroke and cancer studies. Most papers did not adequately report attrition, and extrapolating from the simulation results, we suggest that their effect sizes were likely overestimated. Continue reading
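The core mechanism – biased dropping of “inconvenient” animals inflating the false-positive rate – can be reproduced in a toy Monte Carlo. This is a minimal sketch in the spirit of the study, not the published simulation; group sizes, the number of runs, and the drop rule are illustrative assumptions:

```python
# Toy simulation: under the null (no true treatment effect), compare the
# false-positive rate of a t-test with unbiased data vs. after biased
# removal of the two control animals that most oppose a "treatment effect".
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n, runs, alpha = 10, 2000, 0.05

def false_positive_rate(biased_drop):
    hits = 0
    for _ in range(runs):
        a = rng.normal(size=n)   # treatment group, true effect = 0
        b = rng.normal(size=n)   # control group
        if biased_drop:
            # drop the two highest control values ("outlier removal")
            b = np.sort(b)[:-2]
        if ttest_ind(a, b, equal_var=False).pvalue < alpha:
            hits += 1
    return hits / runs

print("no attrition:     ", false_positive_rate(False))  # close to nominal 5%
print("biased attrition: ", false_positive_rate(True))   # clearly above 5%
```

Purely random loss of animals would instead leave the false-positive rate near 5% and merely cost power – which is exactly the asymmetry the post describes.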
Replication crisis, continued
Biomedicine currently suffers from a ‘replication crisis’: numerous articles from academia and industry prove John Ioannidis’ prescient theoretical 2005 paper ‘Why most published research findings are false’ to be true. On the positive side, however, the academic community appears to have taken up the challenge, and we are witnessing a surge in international collaborations to replicate key findings of biomedical and psychological research. Three important articles appeared over the last weeks which on the one hand further demonstrate that the replication crisis is real, but on the other hand suggest remedies for it:
Two consortia have pioneered the concept of preclinical randomized controlled trials, very much inspired by how clinical trials minimize bias (prespecification of a primary endpoint, randomization, blinding, etc.), and with much improved statistical power compared to single-laboratory studies. One of them (Llovera et al.) replicated the effect of a neuroprotectant (CD49 antibody) in one, but not another, model of stroke, while the study by Kleikers et al. failed to reproduce previous findings claiming that NOX2 inhibition is neuroprotective in experimental stroke. In psychology, the Open Science Collaboration conducted replications of 100 experimental and correlational studies published in three psychology journals, using high-powered designs and original materials where available. Disappointingly, but not surprisingly, replication rates were low, and studies that did replicate did so with much reduced effect sizes.
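The power gain such consortia get from pooling animals across laboratories can be sketched with a standard normal-approximation power calculation for a two-sample comparison. A rough sketch; the effect size and group sizes are illustrative assumptions, not values from the cited studies:

```python
# Approximate power of a two-sided, two-sample z-test (sd = 1),
# illustrating why pooling across labs helps underpowered single-lab studies.
from scipy.stats import norm

def power(n_per_group, effect_size, alpha=0.05):
    se = (2 / n_per_group) ** 0.5          # SE of the mean difference
    z_crit = norm.ppf(1 - alpha / 2)       # two-sided critical value
    z = effect_size / se                   # noncentrality
    return norm.sf(z_crit - z) + norm.cdf(-z_crit - z)

# A typical single lab (n = 10 per group) vs. six labs pooled (n = 60),
# for a moderate standardized effect of 0.5:
print(f"single lab: {power(10, 0.5):.2f}")   # ~0.20
print(f"pooled:     {power(60, 0.5):.2f}")   # ~0.78
```

A single lab with 10 animals per group detects a moderate effect only about one time in five; pooling six such labs lifts power close to the 80% conventionally demanded of clinical trials.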
See also:
10 years after: Ioannidis revisits his classic paper
In 2005, PLOS Medicine published John Ioannidis’ paper ‘Why most published research findings are false’. The article was a wake-up call for many and is now probably the most influential publication in biomedicine of the last decade (>1.14 million views on the PLOS Medicine website, thousands of citations in the scientific and lay press, features in numerous blog posts, etc.). Its title has never been refuted; if anything, it has been replicated – for examples, see some of the posts of this blog. Almost 10 years on, Ioannidis now revisits his paper, and the more constructive title ‘How to make more published research true’ (PLoS Med. 2014 Oct 21;11(10):e1001747. doi: 10.1371/journal.pmed.1001747) already indicates that the thrust this time is more forward-looking. The article contains numerous suggestions for improving the research enterprise, some subtle and evolutionary, some disruptive and revolutionary, but all of them make a lot of sense. A must-read for scientists, funders, journal editors, university administrators, professionals in the health industry – in other words: all stakeholders in the system!