Category: Research

Train PIs and Professors!

There is a lot of thinking going on today about how research can be made more efficient, more robust, and more reproducible. At the top of the list are measures for improving internal validity (for example randomization and blinding, or prespecified inclusion and exclusion criteria), measures for increasing sample sizes and thus statistical power, an end to the fetishization of the p-value, and open access to original data (open science). Funders and journals are raising the bar for applicants and authors by demanding measures to safeguard the validity of the research submitted to them.

Students and young researchers have taken note, too. I teach, among other things, statistics, good scientific practice and experimental design, and am impressed every time by the enthusiasm of the students and young postdocs, and by how they leap into the adventure of their scientific projects with the unbending will to “do it right”. They soak up suggestions for improving the reproducibility and robustness of their research projects like a dry sponge soaks up water. Often, however, the discussion ends on an unsatisfying note, especially when we discuss the students’ own experiments and approaches to their research. I often hear: “That’s all very good and fine, but it won’t fly with my group leader.” Group leaders, they report, tell them: “That is the way we have always done it, and it got us published in Nature and Science”, “If we do it the way you suggest, it won’t get through the review process”, or “We could then only get it published in PLOS One (or PeerJ, F1000Research, etc.), and the paper will contaminate your CV”.

I often wish that not only the students were sitting in the seminar room, but their supervisors right beside them! Continue reading

Can (Non)-Replication be a Sin?

I failed to reproduce the results of my experiments! Some of us are haunted by this horror vision. The scientific academies, the journals, and by now the funders themselves are all calling for reproducibility, replicability and robustness of research. A movement for “reproducible science” has developed, and funding programs for the replication of research findings are now in the works. In some branches of science, especially in psychology, but also in fields like cancer research, results are now being systematically replicated… or not, which is why we are said to be in the throes of a “reproducibility crisis”.
Now Daniele Fanelli, a scientist who until now could be expected to side with the supporters of the reproducible science movement, has raised a warning voice. In the prestigious Proceedings of the National Academy of Sciences he asked rhetorically: “Is science really facing a reproducibility crisis, and if so, do we need it?” So today, on the eve, perhaps, of a budding oppositional movement, I want to have a look at some of the objections to the “reproducible science” mantra. Is reproducibility of results really the foundation of the scientific method? Continue reading

When you come to a fork in the road: Take it

It is for good reason that researchers are the object of envy. When not stuck with bothersome tasks such as grant applications, reviews, or preparing lectures, they actually get paid to pursue their wildest ideas! To boldly go where no human has gone before! We poke about through the scientific literature and carry out pilot experiments that, surprisingly, almost always succeed. Then we do a series of carefully planned and costly experiments. Sometimes they turn out well, often not, but they do lead us into the unknown. This is how ideas become hypotheses; one hypothesis leads to those that follow, and, lo and behold, we confirm them! In the end, sometimes only after several years and considerable wear and tear on personnel and material, we manage to weave a “story” out of them (see also). Through a complex chain of results the story closes with a “happy ending”, perhaps in the form of a new biological mechanism, but at least as a little piece that fits into the puzzle, and it is always presented to the world by means of a publication. Sometimes even in one of the top journals. Continue reading

Is Translational Stroke Research Broken, and if So, How Can We Fix It?


Based on research, mainly in rodents, tremendous progress has been made in our basic understanding of the pathophysiology of stroke. After many failures, however, few scientists today deny that bench-to-bedside translation in stroke has a disappointing track record. Here I summarize a number of measures to improve the predictiveness of preclinical stroke research, some of which are currently in various stages of implementation: We must reduce preventable (detrimental) attrition. Key measures for this revolve around improving preclinical study design. Internal validity must be improved by reducing bias; external validity will improve by including aged, comorbid rodents of both sexes in our modeling. False positives and inflated effect sizes can be reduced by increasing statistical power, which necessitates increasing group sizes. Compliance with reporting guidelines and checklists needs to be enforced by journals and funders. Customizing study designs to exploratory and confirmatory studies will leverage the complementary strengths of both modes of investigation. All studies should publish their full data sets. On the other hand, we should embrace inevitable NULL results. This entails planning experiments in such a way that they produce high-quality evidence when NULL results are obtained, and making these available to the community. A collaborative effort is needed to implement some of these recommendations. Just as in clinical medicine, multicenter approaches help to obtain sufficient group sizes and robust results. Translational stroke research is not broken, but its engine needs an overhaul to render more predictive results.

Read the full article at the publisher’s site (Stroke/AHA). If your library does not have a subscription, here is the author’s manuscript (Stroke/AHA did not allow me even to pay for open access, as it is ‘a special article…’).
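One point from the abstract deserves a back-of-the-envelope illustration: the link between expected effect size and the group sizes needed for adequate statistical power. Below is a minimal sketch, not taken from the article, which assumes a simple two-group comparison analyzed with a two-sided t-test and uses the power calculator in statsmodels.

```python
# Minimal sketch: animals per group needed for a two-sample t-test
# at 80 % power and alpha = 0.05, across a range of standardized effect sizes.
# Illustrates the power argument only; it is not the article's method.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (1.2, 0.8, 0.5, 0.3):  # Cohen's d, from very large to modest effects
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8,
                             alternative='two-sided')
    print(f"d = {d}: ~{round(n)} animals per group")
```

The trend is evident even without running it: the required group size scales roughly with 1/d², so halving the expected effect size roughly quadruples the number of animals needed per group.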

The Relative Citation Ratio: It won’t do your laundry, but can it exorcise the journal impact factor?

Recently, NIH scientists B. Ian Hutchins and colleagues have (pre)published “The Relative Citation Ratio (RCR): A new metric that uses citation rates to measure influence at the article level”. [Note added 9.9.2016: A peer-reviewed version of the article has now appeared in PLOS Biology.] Like Stefano Bertuzzi, the Executive Director of the American Society for Cell Biology, I am enthusiastic about the RCR. The RCR appears to be a viable alternative to the widely (ab)used Journal Impact Factor (JIF).
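For readers who want a feel for how the metric works, here is a toy sketch of the general idea as I understand it from the preprint: an article’s citations per year (its article citation rate) are divided by an expected citation rate derived from the article’s co-citation network (its field citation rate), and the ratio is benchmarked against a reference cohort of NIH R01-funded papers so that a typical benchmark paper scores about 1.0. The numbers below are invented, and the simple median benchmark is only a crude stand-in for the regression-based benchmarking of the actual RCR.

```python
# Toy illustration of the Relative Citation Ratio idea -- not the NIH implementation.
# ACR: the article's citations per year since publication.
# FCR: an expected citations-per-year rate estimated from the article's
#      co-citation network (its "field").
# The ACR/FCR ratio is then scaled against a benchmark cohort (NIH R01 papers),
# so that a typical benchmark paper ends up with an RCR of about 1.0.
import statistics

def toy_rcr(article_citations, years_since_publication, field_rates, benchmark_ratios):
    acr = article_citations / years_since_publication
    fcr = statistics.mean(field_rates)                # crude field citation rate
    benchmark = statistics.median(benchmark_ratios)   # crude stand-in for the regression benchmark
    return (acr / fcr) / benchmark

# Hypothetical article: 60 citations over 4 years, in a field averaging ~5 citations/year,
# benchmarked against a cohort whose ACR/FCR ratios hover around 1.0.
print(toy_rcr(60, 4, field_rates=[4.0, 5.0, 5.5, 5.5], benchmark_ratios=[0.9, 1.0, 1.1]))  # -> 3.0
```

In this toy reading, an RCR of 3 simply means the article is cited about three times as often as would be expected for its field and the benchmark cohort.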

The RCR has recently been discussed in several blogs and editorials (e.g. NIH metric that assesses article impact stirs debate; NIH’s new citation metric: A step forward in quantifying scientific impact?). At a recent workshop organized by the National Library of Medicine (NLM) I learned that the NIH is planning to use the RCR widely in its own grant assessments, as an antidote to the JIF, raw article citations, h-factors, and other highly problematic or outright flawed metrics. Continue reading

Where have all the rodents gone?

Using meta-analysis and computer simulation, we studied the effects of attrition in experimental research on cancer and stroke. The results were published this week in the new meta-research section of PLOS Biology. Not surprisingly, given the small sample sizes of preclinical experimentation, the loss of animals in experiments can dramatically alter results. However, how strongly attrition distorts results had not been examined. We used a simulation study to analyze the effects of random and biased attrition. As expected, random loss of samples decreased statistical power, but biased removal, including the removal of outliers, dramatically increased the probability of false-positive results. Next, we performed a meta-analysis of animal reporting and attrition in stroke and cancer studies. Most papers did not adequately report attrition, and extrapolating from the simulation results, we suggest that their effect sizes were likely overestimated. Continue reading
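The published simulations are more elaborate, but the mechanism can be sketched in a few lines. In the hypothetical setup below, both groups are drawn from the same distribution, so any “effect” is spurious; dropping the animals that most contradict the hoped-for effect nevertheless drives the false-positive rate well above the nominal 5 %.

```python
# Minimal sketch of how biased attrition inflates false positives.
# Both groups come from the SAME distribution (no true effect).
# "Random" attrition drops animals irrespective of their values;
# "biased" attrition drops the treated animals that weaken the hoped-for increase.
# Parameters are hypothetical, not those of the published simulation.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_per_group, n_dropped, n_sim, alpha = 10, 2, 10_000, 0.05
false_pos = {"random": 0, "biased": 0}

for _ in range(n_sim):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(0.0, 1.0, n_per_group)   # same distribution as control

    # Random attrition: drop two treated animals at random.
    random_kept = rng.choice(treated, n_per_group - n_dropped, replace=False)
    # Biased attrition: drop the two lowest treated values ("outliers" that spoil the effect).
    biased_kept = np.sort(treated)[n_dropped:]

    false_pos["random"] += ttest_ind(control, random_kept).pvalue < alpha
    false_pos["biased"] += ttest_ind(control, biased_kept).pvalue < alpha

for mode, hits in false_pos.items():
    print(f"{mode} attrition: false-positive rate ~ {hits / n_sim:.3f}")
```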

BIAS!


This has been a week chock-full of bias! First Nature ran a cover story on it, with an editorial and a very nice introduction to the subject by Regina Nuzzo. Then Malcolm Macleod and colleagues published a perspective in PLOS Biology demonstrating that measures to reduce the risk of bias are reported in only a minority of life-science publications, and that journal rank and the prestige of the university from which the research originated may even correlate inversely with the presence of such measures. At the same time, Jonathan Kimmelman’s group came out with a report in eLife in which they meta-analytically explored preclinical studies of an anticancer drug (sunitinib) and demonstrated that only a fraction of drugs that show promise in animals end up proving safe and effective in humans, partly because of design flaws, such as a lack of measures to prevent bias, and partly because of positive publication bias. Both articles resulted in a worldwide media frenzy, including coverage by Nature and the lay press; here is an example from the Guardian. Retraction Watch interviewed Jonathan, while Malcolm spoke on BBC4.

Wenn Forschung nicht hält, was sie verspricht

A radio feature on Deutschlandfunk (broadcast 20 September 2015) by Martin Hubert. From the announcement: ‘Biomedical researchers are expected to search their laboratories for substances against cancer or stroke, among other things. They experiment with cell cultures and laboratory animals, testing intended effects and probing unintended ones. Recent studies show, however, that up to 80 percent of these preclinical studies cannot be reproduced.’ Here is the link to the audio stream and to the transcript.

(German only, sorry!)

Trust but verify: Institutions must do their part for reproducibility

The crisis in scientific reproducibility has crystallized as it has become increasingly clear that the majority of high-profile scientific reports rest on little foundation, and that the societal burden of low reproducibility is enormous. In today’s issue of Nature, C. Glenn Begley, Alastair Buchan, and I suggest measures by which academic institutions can improve the quality and value of their research. To read the article, click here.

Our main point is that research institutions that receive public funding should be required to demonstrate standards and behaviors that comply with “Good Institutional Practice”. Here is a selection of potential measures, the implementation of which should be verified, certified and approved by the major funding agencies.

Compliance with agreed guidelines: Ensure compliance with established guidelines such as ARRIVE, MIAME, and data-access policies (as required by the National Science Foundation and the National Institutes of Health, USA).

Full access to the institution’s research results: Foster open access, open data, and the preregistration of preclinical study designs.

Electronic laboratory notebooks: Provide electronic record keeping compliant with the FDA Code of Federal Regulations Title 21 (CFR Title 21 Part 11). Electronic laboratory notebooks allow data and project sharing, supervision, time stamping and version control, and directly link records to original data.

Institutional Standard for Experimental Research Conduct (ISERC): Establish an ISERC (covering e.g. blinding, inclusion of controls, replicates and repeats); ensure dissemination of, training in, and compliance with the ISERC.

Quality management: Organize regular and random audits of laboratories and departments with reviews of record keeping and measures to prevent bias (such as randomization and blinding).

Critical incident reporting: Implement a system to allow the anonymous reporting of critical incidents during research. Organize regular critical-incident conferences in which such ‘never events’ are discussed to prevent them in the future and to create a culture of research rigor and accountability.

Incentives and disincentives: Develop and implement novel indices to appraise and reward research of high quality. Honor robustness and mentoring as well as originality of research. Define appropriate penalties for substandard research conduct or noncompliance with guidelines; these might include reduced laboratory space, restricted access to trainees, or reduced access to core facilities.

Training: Establish mandatory programs to train academic clinicians and basic researchers at all professional levels in experimental design, data analysis and interpretation, as well as reporting standards.

Research quality mainstreaming: Bundle established performance measures with novel, institution-specific measures into a flexible, institution-focused algorithm that can serve as the basis for competitive funding applications.

Research review meetings: Create a forum for the routine assessment of institutional publications with a focus on robust methods: the process rather than the result.

Continue reading

The “Broken Science” (aka “waste”) debate: A personal cheat sheet


On March 17, 2015, five panelists from cognitive neuroscience and psychology (Sam Schwarzkopf, Chris Chambers, Sophie Scott, Dorothy Bishop, and Neuroskeptic) publicly debated “Is science broken? If so, how can we fix it?”. The event was organized by Experimental Psychology, UCL Division of Psychology and Language Sciences / Faculty of Brain Sciences in London.

The debate revolved around the ‘reproducibility crisis’ and covered false-positive rates, replication, faulty statistics, lack of power, publication bias, study preregistration, data sharing, peer review, you name it. Understandably, the event caused a stir in the press, journals, and the blogosphere (Nature, BioMed Central, Aidan’s Aviary, The Psychologist, etc…).

Remarkably, some of the panelists (notably Sam Schwarzkopf) respectfully opposed the current ‘crusade for true science’ (to which, I must confess, I subscribe) by arguing that science is not broken at all, but rather that by trying to fix it we run the risk of wrecking it for good. Already a few days before the official debate, he and Neuroskeptic had started to exchange arguments on Neuroskeptic’s blog. While both parties appear to agree that science can be improved, they completely disagree in their analysis of the current status of the scientific enterprise, and consequently also on action points.

This pre-debate argument directed my attention to a blog which Sam Schwarzkopf, or rather his alter ego, the ‘Devil’s Neuroscientist’, ran for a short but very productive period. Curiously, the Devil’s Neuroscientist retired from blogging the night before the debate, threatening that there would be no future posts! This is a pity, because the Devil’s Neuroscientist tried, somewhat aggressively but very much to the point, to debunk the theses that there is any reproducibility crisis, that science is not self-correcting, that studies should be preregistered, and so on. In other words, he was arguing against most of the issues raised, and remedies suggested, on these pages as well. In passing, he provided a lot of interesting links to proponents on either side of the fence. Although I do not agree with many of his conclusions, his is by far the most thoughtful treatment of the subject. Most of the time, when I discuss with fellow scientists who dismiss the problems of the current model of biomedical research, I get rather unreflective comments. They usually simply celebrate the status quo as the best of all possible worlds and don’t get beyond the statement that there may be a few glitches, but that the model has evolved over centuries of undeniable progress. “If it’s not broken, don’t fix it.”

The Devil’s blog stimulated me to produce a short summary of the key arguments in the current debate, both to organize my own thoughts and as a courtesy to the busy reader. Continue reading