Category: Translation

The art of motorcycle maintenance

‘Scientific rigor and the art of motorcycle maintenance’ was another recent fine analysis of reliability issues in current biomedicine (Munafo et al., Nat Biotechnol. 32:871-3). If you only want to read one article, this may be it. It nicely sums up the problems and suggests all the relevant measures (see most of my previous posts). But besides the reference to Robert Pirsig’s 1974 novel, what is new in the article is the comparison of the scientific enterprise to the automobile industry, which successfully responded to quality problems with structured quality control (for a more thorough treatment, see the previous post on trust and auditing). Here is their conclusion:

‘Science is conducted on the principle that it is self-correcting, but the extent to which this is true is an empirical question. The more that quality control becomes integrated into the scientific process itself, the more the whole process becomes one of continual improvement. Implementing this at the level of production implies a culture of incentivizing, educating and empowering those responsible for production, rather than policing quality after the fact with ‘quality inspectors’ (i.e., peer reviewers) or, even more distally, requiring attempts at replication. We think this insight, applied successfully to automobile manufacturing in the 1970s, can also be profitably applied to the practice of scientific research to build a more solid foundation of knowledge and accelerate the research endeavor.’

It is time to act!

Replication and post-publication review: Four best practice examples

Many of the concrete measures proposed to improve the quality and robustness of biomedical research are greeted with great skepticism: ‘Good idea, but how can we implement it, and will it work?’ So here are a few recent best practice examples regarding two key areas: replication, and the review process. Continue reading

Journals unite for reproducibility

SORBETTO/ISTOCKPHOTO.COM (DOI: 10.1126/science.aaa1724)

Amidst what has been termed the ‘reproducibility crisis’ (see also a number of previous posts), in June 2014 the National Institutes of Health and Nature Publishing Group convened a workshop on the rigour and reproducibility of preclinical biomedicine. As a result, last week the NIH published ‘Principles and Guidelines for Reporting Preclinical Research’, and Nature as well as Science ran editorials on it. More than 30 journals, including the Journal of Cerebral Blood Flow and Metabolism, are endorsing the guidelines. The guidelines cover rigour in statistical analysis, transparent reporting and standards (including randomization and blinding as well as sample size justification), and data mining and sharing. This is an important step forward, but implementation has to be enforced and monitored: the ARRIVE guidelines (many items of which reappear in the NIH guidelines) have not been widely adopted yet (see previous post). In this context I highly recommend the article by Henderson et al. in PLOS Medicine, in which they systematically review existing guidelines for in vivo animal experiments. From this the STREAM collaboration distilled a checklist on internal, external, and construct validity which I found more comprehensive and relevant than the one now published by the NIH. In the end, however, what matters is not so much which guideline (ARRIVE, NIH, STREAM, etc.) researchers, reviewers, editors, or funders comply with, but whether they use one at all!


Note added 12/15/2014: Check out the PubPeer postpublication discussion on the NIH/Nature/Science initiative (click here)!

10 years after: Ioannidis revisits his classic paper

In 2005 PLOS Medicine published John Ioannidis’ paper ‘Why most published research findings are false’. The article was a wake-up call for many and is now probably the most influential publication in biomedicine of the last decade (>1.14 million views on the PLOS Medicine website, thousands of citations in the scientific and lay press, features in numerous blog posts, etc.). Its central claim has never been refuted; if anything, it has been replicated (for examples, see some of the posts of this blog). Almost 10 years later, Ioannidis now revisits his paper, and the more constructive title ‘How to make more published research true’ (PLoS Med. 2014 Oct 21;11(10):e1001747. doi: 10.1371/journal.pmed.1001747) already indicates that the thrust this time is more forward-looking. The article contains numerous suggestions to improve the research enterprise, some subtle and evolutionary, some disruptive and revolutionary, but all of them make a lot of sense. A must-read for scientists, funders, journal editors, university administrators, professionals in the health industry, in other words: all stakeholders in the system!

Pick one: Genomic responses in mouse models POORLY/GREATLY mimic human inflammatory diseases

About a year ago Seok et al. shocked the biomedical world with the verdict that mice are not humans, or more specifically, that blood genomic responses in various inflammatory conditions do not correlate at all between human patients and the corresponding disease models (see this previous post, as well as this one). Now another paper, by Takao et al. and also published in PNAS, concludes the exact opposite: that there is a near-perfect correlation between blood genomic responses in mouse and man.

Meanwhile, the initial publication is among the most cited medical publications of the last year, and hundreds of newspapers and blogs (including this one) have covered it. It will be interesting to see how much media coverage the Takao paper receives; probably much less. So what happened, and which paper should we believe?

Continue reading

Are most published research findings actually false? (‘Sind die meisten Forschungsergebnisse tatsächlich falsch?’)

In July, Laborjournal (‘Lab Times’), a free German monthly for life scientists (sort of a hybrid between the Economist and the British tabloid The Sun), celebrated its 20th anniversary with a special issue. I was asked to contribute an article. In it I try to answer the question whether most published research findings are false, as John Ioannidis rhetorically asked in 2005.

To find out, you have to be able to read German: click here for a pdf of the article (in German).


Can mice be trusted?

Katharina Frisch ‘Mouse and man’

I started this blog with a post on a PNAS paper which at the time had received a lot of attention in the scientific community and the lay press. In this article, Seok et al. argued that ‘genomic responses in mouse models poorly mimic human inflammatory diseases’. With this post I am returning to that article, as I was recently asked by the journal Stroke to contribute to its ‘Controversies in Stroke’ series. The Seok article had disturbed the stroke community, so a pro/con discussion seemed timely. In the upcoming issue of Stroke, Sharp and Jickling will argue that ‘the peripheral inflammatory response in rodent ischemic stroke models is different than in human stroke. Given the important role of the immune system in stroke, this could be a major handicap in translating results in rodent stroke models to clinical trials in patients with stroke.’ This is of course true! Nevertheless, I counter by providing some examples of translational successes regarding stroke and the immune system, and conclude that ‘the physiology and pathophysiology of rodents is sufficiently similar to humans to make them a highly relevant model organism but also sufficiently different to mandate an awareness of potential resulting pitfalls. In any case, before hastily discarding highly relevant past, present, and future findings, experimental stroke research needs to improve dramatically its internal and external validity to overcome its apparent translational failures.’ For an in-depth treatment, follow the debate:

Article: Dirnagl: Can mice be trusted

Article: Sharp Jickling: Differences between mice and humans