Using meta-analysis and computer simulation, we studied the effects of attrition in experimental cancer and stroke research. The results were published this week in the new meta-research section of PLOS Biology. Not surprisingly, given the small sample sizes of preclinical experimentation, the loss of animals in experiments can dramatically alter results. However, the effects of attrition on the distortion of results were unknown. We used a simulation study to analyze the effects of random and biased attrition. As expected, random loss of samples decreased statistical power, but biased removal, including that of outliers, dramatically increased the probability of false positive results. Next, we performed a meta-analysis of animal reporting and attrition in stroke and cancer studies. Most papers did not adequately report attrition, and extrapolating from the simulation results, we suggest that their effect sizes were likely overestimated.
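The published simulation code is not reproduced here, but the mechanism is easy to sketch. The following pure-Python toy simulation (all parameters hypothetical: groups of 10 animals, 2 lost per group, a permutation test on the group means) contrasts random attrition with biased attrition under the null hypothesis of no true treatment effect:

```python
import random
import statistics

random.seed(42)

def mean_diff(a, b):
    return statistics.fmean(a) - statistics.fmean(b)

def p_value(a, b, n_perm=200):
    """Two-sided permutation test on the difference in group means."""
    observed = abs(mean_diff(a, b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        if abs(mean_diff(pooled[:len(a)], pooled[len(a):])) >= observed:
            hits += 1
    return hits / n_perm

def false_positive_rate(attrition, n=10, n_sim=300, alpha=0.05):
    """Share of 'significant' results under the null (no true effect)."""
    positives = 0
    for _ in range(n_sim):
        treated = [random.gauss(0, 1) for _ in range(n)]
        control = [random.gauss(0, 1) for _ in range(n)]
        treated, control = attrition(treated, control)
        if p_value(treated, control) < alpha:
            positives += 1
    return positives / n_sim

def random_loss(treated, control, k=2):
    """Drop k animals per group at random."""
    random.shuffle(treated)
    random.shuffle(control)
    return treated[k:], control[k:]

def biased_loss(treated, control, k=2):
    """Drop the k animals per group most 'inconvenient' for a
    hoped-for benefit (higher outcome = better): the lowest
    treated values and the highest control values."""
    return sorted(treated)[k:], sorted(control)[:-k]

print('false positives, random attrition:', false_positive_rate(random_loss))
print('false positives, biased attrition:', false_positive_rate(biased_loss))
```

Random loss leaves the false-positive rate near the nominal 5% (it merely shrinks the sample, costing power), whereas selectively dropping "outliers" that run against the hoped-for effect inflates it several-fold, even though no treatment effect exists.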
A broadcast by Deutschlandfunk (aired 20 September 2015) by Martin Hubert. From the announcement: 'Biomedical researchers search in their laboratories for substances against cancer or stroke, among other things. They experiment with cell cultures and laboratory animals, testing intended effects and probing unintended ones. Recent studies, however, show that up to 80 percent of these preclinical studies cannot be reproduced.' Here is the link to the audio stream and to the transcript.
(German only, sorry!)
The crisis in scientific reproducibility has crystallized as it has become increasingly clear that the majority of high-profile scientific reports rest on little foundation, and that the societal burden of low reproducibility is enormous. In today's issue of Nature, C. Glenn Begley, Alastair Buchan, and I suggest measures by which academic institutions can improve the quality and value of their research. To read the article, click here.
Our main point is that research institutions that receive public funding should be required to demonstrate standards and behaviors that comply with “Good Institutional Practice”. Here is a selection of potential measures, the implementation of which should be verified, certified, and approved by major funding agencies.
Compliance with agreed guidelines: Ensure compliance with established guidelines such as ARRIVE, MIAME, and data access policies (as required by the National Science Foundation and the National Institutes of Health, USA).
Full access to the institution’s research results: Foster open access and open data; preregistration of preclinical study designs.
Electronic laboratory notebooks: Provide electronic record keeping compliant with FDA Code of Federal Regulations Title 21 (CFR Title 21 part 11). Electronic laboratory notebooks allow data and project sharing, supervision, time stamping, version control, and directly link records and original data.
Institutional Standard for Experimental Research Conduct (ISERC): Establish ISERC (e.g. blinding, inclusion of controls, replicates and repeats, etc.); ensure dissemination, training, and compliance with ISERC.
Quality management: Organize regular and random audits of laboratories and departments with reviews of record keeping and measures to prevent bias (such as randomization and blinding).
Critical incident reporting: Implement a system that allows the anonymous reporting of critical incidents during research. Organize regular critical incident conferences in which such ‘never events’ are discussed to prevent them in the future and to create a culture of research rigor and accountability.
Incentives and disincentives: Develop and implement novel indices to appraise and reward research of high quality. Honor robustness and mentoring as well as originality of research. Define appropriate penalties for substandard research conduct or noncompliance with guidelines. These might include reduced laboratory space, restricted access to trainees, or reduced access to core facilities.
Training: Establish mandatory programs to train academic clinicians and basic researchers at all professional levels in experimental design, data analysis and interpretation, as well as reporting standards.
Research quality mainstreaming: Combine established performance measures with novel, institution-specific measures to allow a flexible, institution-focused algorithm that can serve as the basis for competitive funding applications.
Research review meetings: Create a forum for the routine assessment of institutional publications with a focus on robust methods: the process rather than the result.
Biomedicine currently suffers a ‘replication crisis’: numerous articles from academia and industry prove John Ioannidis’ prescient theoretical 2005 paper ‘Why most published research findings are false’ to be true. On the positive side, however, the academic community appears to have taken up the challenge, and we are witnessing a surge in international collaborations to replicate key findings of biomedical and psychological research. Three important articles appeared over the last weeks which, on the one hand, further demonstrated that the replication crisis is real, but on the other hand suggested remedies for it:
Two consortia have pioneered the concept of preclinical randomized controlled trials, very much inspired by how clinical trials minimize bias (prespecification of a primary endpoint, randomization, blinding, etc.), and with much improved statistical power compared to single-laboratory studies. One of them (Llovera et al.) replicated the effect of a neuroprotectant (CD49 antibody) in one, but not another, model of stroke, while the study by Kleikers et al. failed to reproduce previous findings claiming that NOX2 inhibition is neuroprotective in experimental stroke. In psychology, the Open Science Collaboration conducted replications of 100 experimental and correlational studies published in three psychology journals, using high-powered designs and original materials when available. Disappointingly, but not surprisingly, replication rates were low, and studies that did replicate did so with much reduced effect sizes.
In 2009, Chalmers and Glasziou investigated sources of avoidable waste in biomedical research and estimated that its cumulative effect was that about 85% of research investment is wasted (Lancet 2009; 374: 86–89). Critical voices have since questioned the exceedingly high number (85%), or argued that, because of non-linearities and idiosyncrasies of the biomedical research process, a large number of failures is needed to produce a comparatively small number of breakthroughs, and have therefore hailed the remaining 15%. Waste is defined as ‘resources consumed by inefficient or non-essential activities’. Does progress really thrive on waste?
MEDLINE currently indexes 5,642 journals. PubMed comprises more than 24 million citations for biomedical literature from MEDLINE. My field is stroke research. Close to 30,000 articles were published in 2014 on the topic ‘Stroke’ (clinical and experimental), more than 20,000 of them peer-reviewed original articles in the English language (Web of Science). That amounts to more than 50 articles every day. In 2014, 1,700 of them were rodent studies, a mere 5 per day. Does (can) anyone read them? And should we read them? Do researchers worldwide every day produce knowledge worth publishing in 50 articles?
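The per-day figures follow directly from the yearly counts quoted above; as a quick sanity check:

```python
peer_reviewed = 20_000  # peer-reviewed original stroke articles, 2014 (Web of Science)
rodent = 1_700          # rodent stroke studies among them, 2014

print(round(peer_reviewed / 365))  # 55 -> "more than 50 articles every day"
print(round(rodent / 365, 1))      # 4.7 -> "a mere 5 per day"
```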
‘Scientific rigor and the art of motorcycle maintenance’ was another recent fine analysis of reliability issues in current biomedicine (Munafo et al. Nat Biotechnol. 32:871-3). If you only want to read one article, this may be it. It nicely sums up the problems and suggests all the relevant measures (see most of my previous posts). But besides the reference to Robert Pirsig’s 1974 novel, what is new in the article is the comparison of the scientific enterprise to the automobile industry, which successfully responded to quality problems with structured quality control (for a more thorough treatment, see the previous post on trust and auditing). Here is their conclusion:
‘Science is conducted on the principle that it is self-correcting, but the extent to which this is true is an empirical question. The more that quality control becomes integrated into the scientific process itself, the more the whole process becomes one of continual improvement. Implementing this at the level of production implies a culture of incentivizing, educating and empowering those responsible for production, rather than policing quality after the fact with ‘quality inspectors’ (i.e., peer reviewers) or, even more distally, requiring attempts at replication. We think this insight, applied successfully to automobile manufacturing in the 1970s, can also be profitably applied to the practice of scientific research to build a more solid foundation of knowledge and accelerate the research endeavor.’
It is time to act!