Is Translational Stroke Research Broken, and if So, How Can We Fix It?


Based on research, mainly in rodents, tremendous progress has been made in our basic understanding of the pathophysiology of stroke. After many failures, however, few scientists today deny that bench-to-bedside translation in stroke has a disappointing track record. Here I summarize a number of measures to improve the predictiveness of preclinical stroke research, some of which are currently in various stages of implementation: We must reduce preventable (detrimental) attrition. Key measures for this revolve around improving preclinical study design. Internal validity must be improved by reducing bias; external validity will improve by including aged, comorbid rodents of both sexes in our modeling. False positives and inflated effect sizes can be reduced by increasing statistical power, which necessitates increasing group sizes. Compliance with reporting guidelines and checklists needs to be enforced by journals and funders. Customizing study designs to exploratory and confirmatory studies will leverage the complementary strengths of both modes of investigation. All studies should publish their full data sets. On the other hand, we should embrace inevitable NULL results. This entails planning experiments in such a way that they produce high-quality evidence when NULL results are obtained, and making these available to the community. A collaborative effort is needed to implement some of these recommendations. Just as in clinical medicine, multicenter approaches help to obtain sufficient group sizes and robust results. Translational stroke research is not broken, but its engine needs an overhaul to produce more predictive results.

Read the full article at the publisher's site (Stroke/AHA). If your library does not have a subscription, here is the author's manuscript (Stroke/AHA did not even allow me to pay for open access, as it is ‘a special article…’).
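To make the power argument from the abstract concrete, here is a rough back-of-the-envelope sketch (normal approximation; exact t-test values are slightly higher) of how many animals per group a simple two-group comparison needs, depending on the true effect size:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate animals per group for a two-sided, two-sample comparison
    with standardized effect size d, using the normal approximation."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

# Detecting anything but a very large effect takes far more animals
# than typical preclinical stroke studies enroll:
for d in (1.5, 1.0, 0.5):
    print(f"effect size d = {d}: ~{n_per_group(d)} animals per group")
```

With typical group sizes of around 8 to 10, only very large effects (d well above 1) can be detected reliably, which is one driver of the inflated effect sizes mentioned above.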

The Relative Citation Ratio: It won’t do your laundry, but can it exorcise the journal impact factor?

Recently, NIH scientists B. Ian Hutchins and colleagues (pre)published “The Relative Citation Ratio (RCR): A new metric that uses citation rates to measure influence at the article level”. [Note added 9.9.2016: A peer reviewed version of the article has now appeared in PLOS Biol]. Like Stefano Bertuzzi, the Executive Director of the American Society for Cell Biology, I am enthusiastic about the RCR. The RCR appears to be a viable alternative to the widely (ab)used Journal Impact Factor (JIF).

The RCR has been recently discussed in several blogs and editorials (e.g. NIH metric that assesses article impact stirs debate; NIH’s new citation metric: A step forward in quantifying scientific impact?). At a recent workshop organized by the National Library of Medicine (NLM) I learned that the NIH is planning to widely use the RCR in its own grant assessments as an antidote to JIF, raw article citations, h-factors, and other highly problematic or outright flawed metrics.
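In essence, the RCR divides an article's citation rate by the expected citation rate of its field, where the field is defined by the article's co-citation network, and then scales the result so that a benchmark cohort of NIH-funded papers averages 1.0. A toy sketch of that idea (deliberately simplified; the numbers and the mean-ratio normalization are illustrative, not the actual NIH algorithm):

```python
def toy_rcr(article_cites_per_year, cocited_journal_rates, benchmark_ratios):
    """Toy version of the Relative Citation Ratio (illustrative only).

    article_cites_per_year: citations/year of the article of interest
    cocited_journal_rates:  citation rates of the journals of the papers
                            co-cited with the article (its 'field')
    benchmark_ratios:       article/field ratios of a benchmark cohort
                            (NIH R01-funded papers in the real RCR)
    """
    field_rate = sum(cocited_journal_rates) / len(cocited_journal_rates)
    benchmark = sum(benchmark_ratios) / len(benchmark_ratios)
    return (article_cites_per_year / field_rate) / benchmark

# An article cited 12x/year in a field whose co-cited journals average
# 4 citations/year, against a benchmark cohort whose ratios average 1.5:
print(toy_rcr(12, [3.0, 4.0, 5.0], [1.5, 1.5]))  # 2.0, i.e. twice the benchmark
```

The key design choice, and the reason the RCR is attractive, is that the denominator is article-specific: each paper is compared to its own field rather than to whatever journal it happened to appear in.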

Gut microbiota impact on stroke outcome: Fad or fact?

The microbiota and its contribution to brain function and disease have become a hot topic in neuroscience. In this article in the Journal of Cerebral Blood Flow and Metabolism, we discuss the emerging role of commensal bacteria in the course of stroke. Further, we review potential pitfalls in microbiota research and their impact on how we interpret the available evidence and emerging results, and on how we design future studies.


SCI-HUB: The beginning of the end of publishing as we know it?

Unbeknownst to many academics working in large universities or research institutes, SCI-HUB, ‘the first pirate website in the world to provide mass and public access to tens of millions of research papers’, has made almost every research paper worldwide accessible for free. While spoiled academics in rich countries retrieve almost any paper they are interested in with a simple mouse click via their institution, an illegal operation is serving an even greater selection of articles to those less fortunate, who otherwise might have to pay $20 or more per article. This act of piracy threatens the publishing industry to its very foundation – not unlike Napster or Pirate Bay threatened the music industry. Meanwhile, the publishing industry is still dancing on the volcano. Double dipping in the transition phase from institutional subscriptions to a per-article-fee-based Open Access business model (i.e., charging authors AND libraries for access to the same article), the publishing giants still generate profit margins of 20–40%. But what if a growing number of scientists use SCI-HUB as their portal for downloading articles, and courts can’t stop it? Two recent blog posts nicely explain how SCI-HUB works and what it could mean for you and the publishers: one from an industry perspective, the other celebrating Alexandra Elbakyan, the founder of SCI-HUB, as the Robin Hood of science. If what happened in the music industry is predictive for publishing, it is quite likely that, despite an eventual victory of the establishment over the pirates, the industry's business model will have to change dramatically – with unforeseeable consequences for how we will publish our research, and how (or even whether) we will be charged for access to publications.



A pocket guide to electronic laboratory notebooks

Every professional doing active research in the life sciences is required to keep a laboratory notebook. However, while science has changed dramatically over the last centuries, laboratory notebooks have remained essentially unchanged since pre-modern times. In an article published in F1000Research, an open access platform with immediate publication and post-publication peer review, we argue that the implementation of electronic laboratory notebooks (eLN) in academic research is overdue, and we provide researchers and their institutions with the background and practical knowledge to select and initiate the implementation of an eLN in their laboratories. In addition, we present data from a survey of biomedical researchers and technicians on which hypothetical features and functionalities they hope to see implemented in an eLN, and which they regard as less important. We also present data on the acceptance of and satisfaction with eLNs among those who have recently switched from a paper laboratory notebook. We thus provide answers to the following questions: What does an electronic laboratory notebook afford a biomedical researcher, what does it require, and how should one go about implementing it? Read the full article

Where have all the rodents gone?

Using meta-analysis and computer simulation, we studied the effects of attrition in experimental research on cancer and stroke. The results were published this week in the new meta-research section of PLOS Biology. Not surprisingly, given the small sample sizes of preclinical experimentation, the loss of animals in experiments can dramatically alter results. However, the effects of attrition on the distortion of results were unknown. We used a simulation study to analyze the effects of random and biased attrition. As expected, random loss of samples decreased statistical power, but biased removal, including that of outliers, dramatically increased the probability of false positive results. Next, we performed a meta-analysis of animal reporting and attrition in stroke and cancer studies. Most papers did not adequately report attrition, and, extrapolating from the simulation results, we suggest that their effect sizes were likely overestimated.
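The simulation logic can be sketched in a few lines (a minimal re-implementation of the idea, not our published code; group sizes, effect sizes, and the critical value are illustrative): random attrition merely costs power, while biased attrition, here dropping the lowest values from the treatment group, manufactures false positives even when there is no true effect.

```python
import math
import random

random.seed(1)

def t_stat(a, b):
    """Pooled two-sample t statistic."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

CRIT = 2.1  # approx. two-sided 5% critical value for 14-18 df

def sig_rate(effect, sims=2000, drop_random=0, drop_lowest=0):
    """Fraction of simulated experiments (n = 10 per group) declared 'significant'."""
    hits = 0
    for _ in range(sims):
        control = [random.gauss(0, 1) for _ in range(10)]
        treated = [random.gauss(effect, 1) for _ in range(10)]
        if drop_random:                      # unbiased attrition: random loss
            treated = random.sample(treated, 10 - drop_random)
        if drop_lowest:                      # biased attrition: drop 'outliers'
            treated = sorted(treated)[drop_lowest:]
        hits += abs(t_stat(treated, control)) > CRIT
    return hits / sims

print("power, full groups:       ", sig_rate(effect=1.0))
print("power, 4 animals lost:    ", sig_rate(effect=1.0, drop_random=4))
print("false positives, no loss: ", sig_rate(effect=0.0))
print("false positives, biased:  ", sig_rate(effect=0.0, drop_lowest=2))
```

With these settings, random loss noticeably reduces power, while the biased removal of just two 'outliers' per experiment pushes the false positive rate well above the nominal 5%.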

And these were our recommendations: Attrition of animals is often unforeseen and does not reflect willful bias. However, there are several simple steps the scientific community can take to diminish inferential threats due to animal attrition. First, we recommend that authors prespecify inclusion and exclusion criteria, as well as reasons for the exclusion of animals. For example, the use of flowcharts to track animals from initial allocation until analysis, with attrition noted, improves the transparency of preclinical reporting. An added benefit of this approach lies in the ability to spot systematic issues with experimental design or harmful side effects of treatment. Journal referees can also encourage such practices by demanding them in study reports. Finally, many simple statistical tools used in clinical medicine could be adopted to properly impute (and report) missing data. Overall, compliance with the ARRIVE guidelines will help address most, if not all, of the issues inherent to missing data in preclinical research and help establish a better standard for animal use and reporting. (Click here to access the full article, and here for Bob Siegerink’s blog post about it.)
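A flowchart of this kind is, at its core, just disciplined bookkeeping. A minimal sketch (field names and exclusion reasons are made up for illustration) of tracking every animal from allocation to analysis:

```python
from collections import Counter

# Hypothetical study records: every allocated animal gets an entry,
# and every exclusion carries a prespecified reason.
animals = [
    {"id": 1, "group": "treatment", "excluded": None},
    {"id": 2, "group": "treatment", "excluded": "died during surgery"},
    {"id": 3, "group": "treatment", "excluded": None},
    {"id": 4, "group": "control",   "excluded": None},
    {"id": 5, "group": "control",   "excluded": "humane endpoint reached"},
    {"id": 6, "group": "control",   "excluded": None},
]

allocated = Counter(a["group"] for a in animals)
analyzed = Counter(a["group"] for a in animals if a["excluded"] is None)
reasons = Counter(a["excluded"] for a in animals if a["excluded"])

# The flowchart in text form: allocated -> excluded (with reason) -> analyzed
for group in sorted(allocated):
    print(f"{group}: allocated {allocated[group]}, analyzed {analyzed[group]}")
for reason, n in sorted(reasons.items()):
    print(f"  excluded, {reason}: {n}")
```

Because the exclusion reasons are recorded per animal, the same records that produce the flowchart also reveal systematic problems, for instance if one reason clusters in a single treatment arm.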

Nature ran a feature on the article, which received wide coverage in the lay press, in interviews, and in blogs. For example:

Süddeutsche Zeitung

Deutsche Welle


here is a Hungarian one

and French

and a Spanish

and an Argentinian one

or a Swiss (Italian)

view Altmetrics