Category: Translation

Pick one: Genomic responses in mouse models POORLY/GREATLY mimic human inflammatory diseases

About a year ago Seok et al. shocked the biomedical world with the verdict that mice are not humans, or, more specifically, that the blood genomic responses in various inflammatory conditions do not correlate at all between human patients and the corresponding mouse disease models (see previous post, as well as this one). Now another paper, by Takao et al. and also published in PNAS, concludes the exact opposite: that there is a near-perfect correlation between the blood genomic responses of mouse and man.

Meanwhile, the initial publication is among the most cited medical publications of the past year, and hundreds of newspapers and blogs (including this one) have covered it. It will be interesting to see how much media coverage the Takao paper receives; probably much less. But what happened, and which paper should we believe?


Sind die meisten Forschungsergebnisse tatsächlich falsch? (Are most research findings actually false?)

In July, Laborjournal (‘LabTimes’), a free German monthly for life scientists (something of a hybrid between the Economist and the British tabloid The Sun), celebrated its 20th anniversary with a special issue. I was asked to contribute an article, in which I try to answer the question of whether most published research findings are false, as John Ioannidis rhetorically asked in 2005.

To find out, you have to be able to read German; click here for a PDF of the article.
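
For those who cannot read the German article, the gist of Ioannidis' 2005 argument fits into a few lines of arithmetic: the probability that a statistically significant (‘positive’) finding is actually true (the positive predictive value, PPV) depends on the pre-study odds that the tested hypothesis is real, the significance threshold alpha, and the statistical power. The sketch below illustrates this with purely hypothetical input values; it is not drawn from the Laborjournal article.

```python
# Minimal sketch of the positive predictive value (PPV) argument from
# Ioannidis (2005). All input values below are illustrative assumptions.

def ppv(prior_odds, alpha=0.05, power=0.8):
    """Probability that a statistically significant finding is actually true.

    prior_odds -- pre-study odds R that a tested relationship is real
    alpha      -- significance threshold (type I error rate)
    power      -- 1 - beta, the chance of detecting a true effect
    """
    true_positives = power * prior_odds   # true relationships that reach significance
    false_positives = alpha               # null relationships that reach significance
    return true_positives / (true_positives + false_positives)

# Well-powered study, 1 in 10 tested hypotheses true: most positives are real.
print(round(ppv(prior_odds=0.1, power=0.8), 2))    # ~0.62

# Underpowered, exploratory setting, 1 in 20 hypotheses true:
# most 'positive findings' are false.
print(round(ppv(prior_odds=0.05, power=0.2), 2))   # ~0.17
```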


Can mice be trusted?

Katharina Frisch ‘Mouse and man’

I started this blog with a post on a PNAS paper which at that time had received a lot of attention in the scientific community and the lay press. In this article, Seok et al. argued that ‘genomic responses in mouse models poorly mimic human inflammatory diseases‘. With this post I am returning to that article, as I was recently asked by the journal STROKE to contribute to its ‘Controversies in Stroke’ series. The Seok article had disturbed the stroke community, so a pro/con discussion seemed timely. In the upcoming issue of STROKE, Sharp and Jickling will argue that ‘the peripheral inflammatory response in rodent ischemic stroke models is different than in human stroke. Given the important role of the immune system in stroke, this could be a major handicap in translating results in rodent stroke models to clinical trials in patients with stroke.‘ This is of course true! Nevertheless, I counter by providing some examples of translational successes regarding stroke and the immune system, and conclude that ‘the physiology and pathophysiology of rodents is sufficiently similar to humans to make them a highly relevant model organism but also sufficiently different to mandate an awareness of potential resulting pitfalls. In any case, before hastily discarding highly relevant past, present, and future findings, experimental stroke research needs to improve dramatically its internal and external validity to overcome its apparent translational failures.’ For an in-depth treatment, follow the debate:

Article: Dirnagl: Can mice be trusted

Article: Sharp Jickling: Differences between mice and humans

Quality assurance and management in experimental research


A worldwide discussion among stakeholders of the biomedical research enterprise currently revolves around the realization that a significant proportion of the resources spent on medical research are wasted, and around potential actions to increase its value. The reproducibility of results in experimental biomedicine is generally low, and the vast majority of medical interventions introduced into clinical testing after successful preclinical development prove unsafe or ineffective. One prominent explanation for these problems is flawed preclinical research. There is consensus that the quality of biomedical research needs to be improved. ‘Quality’ is a broad and generic term, and it is clear that a plethora of factors together determine the robustness and predictiveness of basic and preclinical research results.

Against this background, the experimental laboratories of the Center for Stroke Research Berlin (CSB, Dept. of Experimental Neurology) have decided to take a systematic approach and implement a structured quality management system. In a process extending over more than a year and involving all members of the department, from students and technicians to postdocs and group leaders, we have established standard operating procedures, defined common goals and indicators, improved communication structures and document management, implemented error management, and are developing an electronic laboratory notebook, among other measures. On July 3rd, 2014, this quality management system passed an ISO 9001 certification process (Certificate 12 100 48301 TMS). The auditors were impressed by the quality-oriented ‘spirit’ of all members of the department, and by the fact that, to their knowledge, the CSB is the first academic institution worldwide to have established a structured quality management system of this standard and reach in experimental research. The CSB is fully aware that implementing a certified quality management system does not by itself guarantee translational success. However, we believe that innovation will only improve patient outcomes if it is built on the highest possible standards of quality. Certification renders our standards transparent and verifiable to the research community, and serves as a first step towards a preclinical medicine in which research conduct and results can be monitored and audited by peers.


Exploratory and confirmatory preclinical research

In the current issue of PLOS Biology, Kimmelman, Mogil, and Dirnagl argue that distinguishing between exploratory and confirmatory preclinical research will improve translation: ‘Preclinical researchers confront two overarching agendas related to drug development: selecting interventions amid a vast field of candidates, and producing rigorous evidence of clinical promise for a small number of interventions.’ They suggest that each challenge is best met by a different, complementary mode of investigation. In the first (exploratory investigation), researchers should aim at generating robust pathophysiological theories of disease. In the second (confirmatory investigation), researchers should aim at demonstrating strong and reproducible treatment effects in relevant animal models. Each mode entails different study designs, confronts different validity threats, and supports different kinds of inferences. Research policies should seek to disentangle the two modes and leverage their complementarity. In particular, policies should discourage the common use of exploratory studies to support confirmatory inferences, promote a greater volume of confirmatory investigation, and customize design and reporting guidelines for each mode.

For the full article, click here.

Systemic flaws of the biomedical research ecosystem

In the current issue of the Proceedings of the National Academy of Sciences (USA), four heavyweights, Bruce Alberts, Marc W. Kirschner, Shirley Tilghman, and Harold Varmus, provide a fundamental criticism of the US biomedical research system and offer ideas for ‘Rescuing US biomedical research from its systemic flaws’. Their main point is that ‘The long-held but erroneous assumption of never-ending rapid growth in biomedical science has created an unsustainable hypercompetitive system that is discouraging even the most outstanding prospective students from entering our profession—and making it difficult for seasoned investigators to produce their best work. This is a recipe for long-term decline, and the problems cannot be solved with simplistic approaches.’ Most of the issues they raise apply equally to European biomedical research. Full article: PNAS-2014-Alberts-5773-7


Have the ARRIVE guidelines been implemented?


Research on animals generally lacks transparent reporting of study design, implementation, and results. As a consequence of poor reporting, we face problems in replicating published findings, the publication of underpowered studies with excessive false positives or false negatives, publication bias, and, as a result, difficulties in translating promising preclinical results into effective therapies for human disease. To improve the situation, the ARRIVE guidelines for the reporting of animal research (www.nc3rs.org.uk/ARRIVEpdf) were formulated in 2010 and adopted by over 300 scientific journals, including the Journal of Cerebral Blood Flow and Metabolism (www.nature.com/jcbfm). Four years on, Baker et al. (PLoS Biol 12(1): e1001756. doi:10.1371/journal.pbio.1001756) have systematically investigated the effect of the ARRIVE guidelines on the reporting of in vivo research, with a particular focus on the multiple sclerosis field. The results are highly disappointing:

‘86%–87% of experimental articles do not give any indication that the animals in the study were properly randomized, and 95% do not demonstrate that their study had a sample size sufficient to detect an effect of the treatment were there to be one. Moreover, they show that 13% of studies of rodents with experimental autoimmune encephalomyelitis (an animal model of multiple sclerosis) failed to report any statistical analyses at all, and 55% included inappropriate statistics. And while you might expect that publications in “higher ranked” journals would have better reporting and a more rigorous methodology, Baker et al. reveal that higher ranked journals (with an impact factor greater than ten) are twice as likely to report either no or inappropriate statistics’ (Editorial by Eisen et al., PLoS Biol 12(1): e1001757. doi:10.1371/journal.pbio.1001757).

It is highly likely that other fields in biomedicine have a similarly dismal record. Clearly, journal editors and publishers need to enforce the ARRIVE guidelines and to monitor their implementation!
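
To make concrete what ‘a sample size sufficient to detect an effect’ means, here is a minimal sketch of a prospective power calculation for a simple two-group comparison, using the statsmodels Python library. The effect size, significance level, and group sizes are illustrative assumptions, not figures from Baker et al.

```python
# Minimal sketch of a prospective power calculation for a two-group
# animal experiment (illustrative numbers, not taken from Baker et al.).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Animals needed per group to detect a 'large' effect (Cohen's d = 0.8)
# with a two-sided t-test at alpha = 0.05 and 80% power.
n_per_group = analysis.solve_power(effect_size=0.8, alpha=0.05,
                                   power=0.8, alternative='two-sided')
print(f"Animals needed per group: {n_per_group:.1f}")   # ~26

# Conversely: the power actually achieved with a group size
# typical of many preclinical studies (e.g. n = 8 per group).
achieved_power = analysis.solve_power(effect_size=0.8, nobs1=8,
                                      alpha=0.05, alternative='two-sided')
print(f"Power with n = 8 per group: {achieved_power:.2f}")   # ~0.31
```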


Found in translation

Lost or found in translation? Stroke is a major cause of global morbidity and mortality, yet therapeutic options are very limited. Numerous preclinical studies promised highly effective novel treatments, none of which have made it into practice despite a plethora of clinical trials. This failure to bridge the gap between bench and bedside deeply frustrates researchers, clinicians, the pharmaceutical industry, and patients. Dirnagl and Endres argue that despite the apparent translational failures in neuroprotection research, and counter to the current nihilism, basic and preclinical stroke research has in fact been able to predict human pathophysiology, clinical phenotypes, and therapeutic outcomes. The understanding of stroke pathobiology achieved through basic research has led to changes in stroke care whose value can be demonstrated. Preclinical investigations have informed the clinical realm even in the absence of intermediary phase 2 or phase 3 trials. Their arguments rest on examples of successful bench-to-bedside translation, in which experimental studies preceded human trials and successfully predicted outcomes or phenotypes, as well as on examples of successful ‘back-translation’, where studies in animals recapitulated what we already knew to be true in human beings. An analysis of the reasons for the apparent (or only perceived) translational failures further strengthens their proposition, and suggests measures to improve the positive predictive value of preclinical stroke research. Researchers, funding agencies, academic institutions, publishers, and professional societies should work together to harness the tremendous potential of basic and preclinical research, in stroke as well as in other fields of medicine.

Ulrich Dirnagl and Matthias Endres. Found in Translation: Preclinical Stroke Research Predicts Human Pathophysiology, Clinical Phenotypes, and Therapeutic Outcomes. Stroke. 2014;45:1510-1518.

METRICS

The Economist reported that John Ioannidis, together with Steven Goodman, will open the Meta-Research Innovation Center at Stanford University (METRICS) later this month. Generously supported by the Buck Foundation, it will fight bad science, bias, and lack of evidence in all areas of biomedicine. The institute’s motto is to ‘Identify and minimise persistent threats to medical research quality’. Those who have followed the work of Ioannidis and Goodman know that this is good news indeed! A concise overview of Ioannidis’ research can be found in this online article at Maclean’s.