Category: Translation

Of Mice, Macaques and Men

Tuberculosis kills far more than a million people worldwide every year. The situation is particularly dire in southern Africa, eastern Europe, and Central Asia. There is no truly effective vaccine against tuberculosis (TB). In countries with a high incidence, live vaccination is carried out with the attenuated vaccine strain Bacillus Calmette-Guérin (BCG), but BCG gives very little protection against tuberculosis of the lungs, and in any case its protection is highly variable and unpredictable. For years, a worldwide search has been under way for a better TB vaccine.

Recently, the British Medical Journal published an investigation raising serious charges against researchers and their universities: conflicts of interest, animal experiments of questionable quality, selective use of data, deception of grant-givers and ethics commissions, all the way up to endangerment of study participants. There was also a whistleblower… who had to pack his bags. It all happened in Oxford, at one of the most prestigious vaccine research institutes on earth, and the study on humans was carried out on infants from the most destitute layers of the population. Let’s look at this explosive mix in more detail, for we have much to learn from it about

  • the ethical dimension of preclinical research and the dire consequences that low quality in animal experiments and selective reporting can have;
  • the important role of systematic reviews of preclinical research, and finally also about
  • the selective (or non) availability and scrutiny of preclinical evidence when commissions and authorities decide on clinical studies.

Continue reading

Is Translational Stroke Research Broken, and if So, How Can We Fix It?

 

Based on research, mainly in rodents, tremendous progress has been made in our basic understanding of the pathophysiology of stroke. After many failures, however, few scientists today deny that bench-to-bedside translation in stroke has a disappointing track record. I here summarize a number of measures to improve the predictiveness of preclinical stroke research, some of which are currently in various stages of implementation: We must reduce preventable (detrimental) attrition. Key measures for this revolve around improving preclinical study design. Internal validity must be improved by reducing bias; external validity will improve by including aged, comorbid rodents of both sexes in our modeling. False positives and inflated effect sizes can be reduced by increasing statistical power, which necessitates increasing group sizes. Compliance with reporting guidelines and checklists needs to be enforced by journals and funders. Customizing study designs to exploratory and confirmatory modes of investigation will leverage the complementary strengths of both. All studies should publish their full data sets. On the other hand, we should embrace inevitable NULL results. This entails planning experiments in such a way that they produce high-quality evidence when NULL results are obtained, and making these available to the community. A collaborative effort is needed to implement some of these recommendations. Just as in clinical medicine, multicenter approaches help to obtain sufficient group sizes and robust results. Translational stroke research is not broken, but its engine needs an overhaul to render more predictive results.
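To make the power argument concrete, here is a minimal sketch (Python, using statsmodels) of how the required group size grows as the detectable effect shrinks or the power target rises. The effect sizes and power targets below are illustrative assumptions, not values from the article:

```python
# A minimal sketch, assuming illustrative effect sizes and power targets:
# required per-group n for a two-sample t-test at alpha = 0.05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for d in (0.5, 0.8, 1.2):          # assumed standardized effect sizes (Cohen's d)
    for power in (0.80, 0.95):     # conventional and stringent power targets
        n = analysis.solve_power(effect_size=d, alpha=0.05, power=power)
        print(f"d = {d:.1f}, power = {power:.2f} -> {n:5.1f} animals per group")
```

At 80% power and a standardized effect of d = 0.5, roughly 64 animals per group are needed, far more than the group sizes of around 10 that are typical of single-laboratory studies.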

Read the full article at the publisher’s site (Stroke/AHA). If your library does not have a subscription, here is the author’s manuscript (Stroke/AHA did not allow me to even pay for open access, as it is ‘a special article…’).

Where have all the rodents gone?

Using meta-analysis and computer simulation, we studied the effects of attrition in experimental research on cancer and stroke. The results were published this week in the new meta-research section of PLOS Biology. Not surprisingly, given the small sample sizes of preclinical experimentation, loss of animals in experiments can dramatically alter results. How strongly attrition distorts results, however, was unknown. We used a simulation study to analyze the effects of random and biased attrition. As expected, random loss of samples decreased statistical power, but biased removal, including that of outliers, dramatically increased the probability of false positive results. Next, we performed a meta-analysis of animal reporting and attrition in stroke and cancer studies. Most papers did not adequately report attrition, and extrapolating from the results of the simulation, we suggest that their effect sizes were likely overestimated.
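The logic of the simulation can be re-created in a few lines. This is a minimal sketch (Python/NumPy), not the published study’s exact setup; group size, the number of animals dropped, and the biased drop rule are illustrative assumptions:

```python
# Sketch: false positive rates under random vs. biased attrition
# when there is NO true treatment effect. Parameters are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_group, n_dropped, n_sim, alpha = 10, 2, 10_000, 0.05

def false_positive_rate(biased: bool) -> float:
    hits = 0
    for _ in range(n_sim):
        control = rng.normal(0, 1, n_per_group)   # no true effect:
        treated = rng.normal(0, 1, n_per_group)   # both groups are identical
        if biased:
            # drop the treated animals least favorable to the hypothesis
            treated = np.sort(treated)[n_dropped:]
        else:
            # drop animals at random (e.g. unrelated deaths)
            treated = rng.permutation(treated)[n_dropped:]
        hits += stats.ttest_ind(treated, control).pvalue < alpha
    return hits / n_sim

print(f"random attrition: {false_positive_rate(False):.3f}")  # stays near 0.05
print(f"biased attrition: {false_positive_rate(True):.3f}")   # well above 0.05
```

With no true effect, random attrition keeps the false positive rate near the nominal 5%, while selectively dropping ‘unfavorable’ animals pushes it far above that.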

And these were our recommendations: Attrition of animals is often unforeseen and does not reflect willful bias. However, there are several simple steps that the scientific community can take to diminish inferential threats due to animal attrition. First, we recommend that authors prespecify inclusion and exclusion criteria, as well as reasons for exclusion of animals. For example, the use of flowcharts to track animals from initial allocation until analysis, with attrition noted, improves the transparency of preclinical reporting. An added benefit of this approach lies in the ability to track systemic issues with experimental design or harmful side effects of treatment. Journal referees can also encourage such practices by demanding them in study reports. Finally, many simple statistical tools used in medicine could be adopted to properly impute (and report) missing data. Overall, compliance with the ARRIVE guidelines will address most, if not all, of the issues inherent to missing data in preclinical research and help establish a better standard for animal use and reporting. (Click here to access the full article, and here for Bob Siegerink’s blog post about it.)
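A flowchart of this kind is easy to keep as a running record during the experiment. The sketch below (Python) is a hypothetical illustration of the idea; the class, its field names, and the exclusion reasons are invented for this example, not taken from any published tool:

```python
# Hypothetical sketch: track each animal from allocation to analysis,
# allowing only prespecified exclusion reasons, for transparent reporting.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class AttritionFlowchart:
    allocated: dict[str, int]                            # group -> n allocated
    excluded: Counter = field(default_factory=Counter)   # (group, reason) -> n

    PRESPECIFIED_REASONS = {"died before endpoint", "failed surgery",
                            "prespecified outlier rule", "technical failure"}

    def exclude(self, group: str, reason: str) -> None:
        # refuse ad hoc reasons: exclusions must be prespecified
        assert reason in self.PRESPECIFIED_REASONS, f"unlisted reason: {reason}"
        self.excluded[(group, reason)] += 1

    def report(self) -> None:
        # CONSORT-style flow summary: allocated -> excluded -> analyzed
        for group, n in self.allocated.items():
            lost = sum(c for (g, _), c in self.excluded.items() if g == group)
            print(f"{group}: allocated {n}, excluded {lost}, analyzed {n - lost}")
            for (g, reason), c in self.excluded.items():
                if g == group:
                    print(f"  - {reason}: {c}")

flow = AttritionFlowchart(allocated={"control": 10, "treatment": 10})
flow.exclude("treatment", "died before endpoint")
flow.exclude("treatment", "failed surgery")
flow.report()
```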

Nature ran a feature on the article, which was also widely covered by the lay press, in interviews, and in blogs. For example:

Süddeutsche Zeitung

http://www.sueddeutsche.de/wissen/biomedizin-schlechter-als-wuerfeln-1.2805259

Deutsche Welle

http://www.dw.com/en/most-medical-studies-misreport-animal-testing/a-18958271

The Australian

http://www.theaustralian.com.au/news/health-science/vanishing-laboratory-animals-bolster-false-research-results/news-story/065ee17d3e752974f22d9533118a382a

here is a Hungarian one

http://www.hirado.hu/2016/01/05/gyakran-hianyosak-es-ellenorizhetetlenek-az-orvosi-alaptanulmanyok/

and a French one

http://www.lapresse.ca/sciences/medecine/201601/04/01-4936408-la-credibilite-des-etudes-biomedicales-ecornee.php

and a Brazilian one (in Portuguese)

http://zh.clicrbs.com.br/rs/noticias/noticia/2016/01/credibilidade-de-estudos-biomedicos-e-questionada-por-nova-pesquisa-4944049.html

and a Romanian one:

http://www.agerpres.ro/sci-tech/2016/01/05/credibilitatea-studiilor-biomedicale-a-fost-analizata-de-doua-grupuri-de-cercetatori-plos-biology–15-19-18

or a Swiss one (in Portuguese)

http://www.swissinfo.ch/por/credibilidade-de-estudos-biom%C3%A9dicos-%C3%A9-questionada-por-nova-pesquisa/41874540

view Altmetrics

BIAS!


This has been a week chock-full of bias! First Nature ran a cover story on it, with an editorial and a very nice introduction to the subject by Regina Nuzzo. Then Malcolm Macleod and colleagues published a perspective in PLOS Biology demonstrating limited reporting of measures to reduce the risk of bias in life sciences publications, and that there may be an inverse correlation between journal rank, or the prestige of the university from which the research originated, and the presence of measures to prevent bias. At the same time, Jonathan Kimmelman’s group came out with a report in eLife in which they meta-analytically explored preclinical studies of an anticancer drug (sunitinib) to demonstrate that only a fraction of drugs that show promise in animals end up proving safe and effective in humans, partly because of design flaws, such as a lack of measures to prevent bias, and partly due to positive publication bias. Both articles resulted in a worldwide media frenzy, including coverage by Nature and the lay press; here is an example from the Guardian. Retraction Watch interviewed Jonathan, while Malcolm spoke on BBC4.

When research does not deliver what it promises

A broadcast by Deutschlandfunk (aired 20 September 2015) by Martin Hubert. From the announcement: ‘Biomedical researchers search in their laboratories for substances against cancer or stroke, among other things. They experiment with cell cultures and laboratory animals, testing intended effects and probing unintended ones. Recent studies show, however, that up to 80 percent of these preclinical studies cannot be reproduced.’ Here is the link to the audio stream and to the transcript.

(German only, sorry!)

Trust but verify: Institutions must do their part for reproducibility

The crisis in scientific reproducibility has crystallized as it has become increasingly clear that the majority of high-profile scientific reports rest on little foundation, and that the societal burden of low reproducibility is enormous. In today’s issue of Nature, C. Glenn Begley, Alastair Buchan, and I suggest measures by which academic institutions can improve the quality and value of their research. To read the article, click here.

Our main point is that research institutions that receive public funding should be required to demonstrate standards and behaviors that comply with “Good Institutional Practice”. Here is a selection of potential measures, the implementation of which should be verified, certified, and approved by major funding agencies.

Compliance with agreed guidelines: Ensure compliance with established guidelines such as ARRIVE, MIAME, and data access requirements (as mandated by the National Science Foundation and the National Institutes of Health, USA).

Full access to the institution’s research results: Foster open access and open data; preregistration of preclinical study designs.

Electronic laboratory notebooks: Provide electronic record keeping compliant with the FDA Code of Federal Regulations Title 21 (21 CFR Part 11). Electronic laboratory notebooks allow data and project sharing, supervision, time stamping, and version control, and directly link records to original data.

Institutional Standard for Experimental Research Conduct (ISERC): Establish an ISERC (e.g. blinding, inclusion of controls, replicates and repeats, etc.); ensure dissemination, training, and compliance with the ISERC.

Quality management: Organize regular and random audits of laboratories and departments, with reviews of record keeping and of measures to prevent bias, such as randomization and blinding (a minimal allocation sketch follows this list).

Critical incident reporting: Implement a system that allows the anonymous reporting of critical incidents during research. Organize regular critical incident conferences in which such ‘never events’ are discussed to prevent them in the future and to create a culture of research rigor and accountability.

Incentives and disincentives: Develop and implement novel indices to appraise and reward research of high quality. Honor robustness and mentoring as well as originality of research. Define appropriate penalties for substandard research conduct or noncompliance with guidelines; these might include decreased laboratory space, reduced access to trainees, or reduced access to core facilities.

Training:  Establish mandatory programs to train academic clinicians and basic researchers at all professional levels in experimental design, data analysis and interpretation, as well as reporting standards.

Research quality mainstreaming: Bundle established performance measures with novel, institution-specific measures to create a flexible, institution-focused algorithm that can serve as the basis for competitive funding applications.

Research review meetings: Create a forum for the routine assessment of institutional publications with a focus on robust methods: the process rather than the result.
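As referenced under ‘Quality management’ above, randomization and blinding are the two workhorse measures against bias. Here is a minimal sketch (Python) of randomized group allocation with blinded animal codes, where the unblinding key is kept away from the experimenter until outcome assessment is complete. All names, the file layout, and the group sizes are illustrative assumptions:

```python
# Sketch: randomized, blinded allocation of animals to groups.
# The experimenter works only with the blinded codes; the key file
# is held by a third party until analysis is finished.
import csv
import random

def randomize_and_blind(animal_ids, groups, key_file, seed=None):
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)                        # random, balanced allocation
    allocation = {aid: groups[i % len(groups)] for i, aid in enumerate(ids)}
    # assign codes in sorted-ID order so code order does not reveal the group
    codes = {aid: f"X{idx:03d}" for idx, aid in enumerate(sorted(allocation))}
    with open(key_file, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["code", "animal_id", "group"])
        for aid, code in codes.items():
            writer.writerow([code, aid, allocation[aid]])
    return sorted(codes.values())           # the experimenter sees only these

blinded = randomize_and_blind([f"rat{i}" for i in range(1, 13)],
                              ["control", "treatment"],
                              "unblinding_key.csv", seed=1)
print(blinded)
```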

Continue reading

Replication crisis, continued

Biomedicine currently suffers a ‘replication crisis’: Numerous articles from academia and industry prove John Ioannidis’ prescient theoretical 2005 paper ‘Why most published research findings are false’ to be true. On the positive side, however, the academic community appears to have taken up the challenge, and we are witnessing a surge in international collaborations to replicate key findings of biomedical and psychological research. Three important articles appeared over the last few weeks which further demonstrate that the replication crisis is real, but also suggest remedies for it:

Two consortia have pioneered the concept of preclinical randomized controlled trials, very much inspired by how clinical trials minimize bias (prespecification of a primary endpoint, randomization, blinding, etc.), and with much improved statistical power compared to single-laboratory studies. One of them (Llovera et al.) replicated the effect of a neuroprotectant (CD49 antibody) in one, but not another, model of stroke, while the study by Kleikers et al. failed to reproduce previous findings claiming that NOX2 inhibition is neuroprotective in experimental stroke. In psychology, the Open Science Collaboration conducted replications of 100 experimental and correlational studies published in three psychology journals, using high-powered designs and original materials when available. Disappointingly but not surprisingly, replication rates were low, and studies that did replicate did so with much reduced effect sizes.
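The shrinkage of effect sizes upon replication is exactly what the ‘winner’s curse’ of underpowered research predicts: when groups are small, only chance overestimates of the effect cross the significance threshold. A minimal simulation sketch (Python/NumPy; the true effect size and group size are illustrative assumptions) makes this concrete:

```python
# Sketch: with small groups, studies reaching p < 0.05 systematically
# overestimate the true effect ("winner's curse"). Parameters are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n, n_sim = 0.3, 15, 20_000    # assumed true effect and group size

estimates, significant = [], []
for _ in range(n_sim):
    a = rng.normal(0.0, 1.0, n)       # control group
    b = rng.normal(true_d, 1.0, n)    # treated group, true effect d = 0.3
    # estimated standardized effect (Cohen's d with pooled variance)
    d_hat = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    estimates.append(d_hat)
    significant.append(stats.ttest_ind(b, a).pvalue < 0.05)

estimates, significant = np.array(estimates), np.array(significant)
print(f"true effect:                        d = {true_d}")
print(f"mean estimate, all studies:         d = {estimates.mean():.2f}")
print(f"mean estimate, 'published' p<0.05:  d = {estimates[significant].mean():.2f}")
```

In this setup the average estimate across all simulated studies is unbiased, but the average among the ‘publishable’ significant studies is roughly two to three times the true effect, so a faithful replication should be expected to find much less.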

See also:

http://www.theguardian.com/commentisfree/2015/aug/28/psychology-experiments-failing-replication-test-findings-science

http://www.theguardian.com/science/2015/aug/27/study-delivers-bleak-verdict-on-validity-of-psychology-experiment-results