Category: Research

“Translational research may be most successful when it fails”: The efficiency of translation in biomedicine

In 2009, Chalmers and Glasziou investigated sources of avoidable waste in biomedical research and estimated that their cumulative effect is that about 85% of research investment is wasted (Lancet 2009; 374: 86–89). Critical voices have since questioned this exceedingly high number (85%), or claimed that, because of the non-linearities and idiosyncrasies of the biomedical research process, a large number of failures is needed to produce a comparably small number of breakthroughs, and have therefore hailed the remaining 15%. Waste is defined as ‘resources consumed by inefficient or non-essential activities’. Does progress really thrive on waste?
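
To see how a headline figure like 85% can arise at all, it helps to remember that losses at successive stages of the research pipeline multiply. Here is a minimal back-of-the-envelope sketch in Python, using purely illustrative per-stage losses (the Lancet paper’s own stage-wise estimates differ in detail):

```python
# Illustrative only: how losses at successive stages of the research
# pipeline compound. The 50% figures are toy assumptions, not the exact
# stage-wise estimates from Chalmers & Glasziou (Lancet 2009).

stage_losses = [
    0.50,  # questions or designs with avoidable flaws
    0.50,  # results never fully published
    0.50,  # published reports too poorly reported to be usable
]

usable = 1.0
for loss in stage_losses:
    usable *= (1.0 - loss)

print(f"usable: {usable:.1%}")       # 12.5% with these toy numbers
print(f"wasted: {1 - usable:.1%}")   # 87.5%, in the ballpark of the 85% estimate
```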

Continue reading

Are scientific papers actually read?

MEDLINE currently indexes 5,642 journals. PubMed comprises more than 24 million citations for biomedical literature from MEDLINE. My field is stroke research. Close to 30,000 articles were published in 2014 on the topic ‘Stroke’ (clinical and experimental); more than 20,000 of them were peer-reviewed original articles in the English language (Web of Science). That amounts to more than 50 articles every day. In 2014, 1,700 of them were rodent studies, a mere 5 per day. Does (can) anyone read them? And should we read them? Do researchers worldwide really produce, every day, knowledge worth publishing in 50 articles?
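
For those who like to check the arithmetic, the daily rates quoted above follow directly from the annual counts. A minimal sketch (the 2014 counts are those cited from Web of Science):

```python
# Back-of-the-envelope check of the publication rates quoted above
# (2014 counts for the topic 'Stroke', as cited from Web of Science).

stroke_articles_2014 = 20_000   # peer-reviewed English-language original articles
rodent_studies_2014 = 1_700     # rodent stroke studies

DAYS_PER_YEAR = 365

print(f"all stroke articles:  {stroke_articles_2014 / DAYS_PER_YEAR:.0f} per day")   # ~55
print(f"rodent stroke papers: {rodent_studies_2014 / DAYS_PER_YEAR:.1f} per day")    # ~4.7
```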

Continue reading

The art of motorcycle maintenance

‘Scientific rigor and the art of motorcycle maintenance’ was another recent fine analysis of reliability issues in current biomedicine (Munafo et al., Nat Biotechnol. 32:871-3). If you only want to read one article, this may be it. It nicely sums up the problems and suggests all the relevant measures (see most of my previous posts). But besides the reference to Robert Pirsig’s 1974 novel, what is new in the article is the comparison of the scientific enterprise to the automobile industry, which successfully responded to quality problems with structured quality control (for a more thorough treatment, see the previous post on trust and auditing). Here is their conclusion:

‘Science is conducted on the principle that it is self-correcting, but the extent to which this is true is an empirical question. The more that quality control becomes integrated into the scientific process itself, the more the whole process becomes one of continual improvement. Implementing this at the level of production implies a culture of incentivizing, educating and empowering those responsible for production, rather than policing quality after the fact with ‘quality inspectors’ (i.e., peer reviewers) or, even more distally, requiring attempts at replication. We think this insight, applied successfully to automobile manufacturing in the 1970s, can also be profitably applied to the practice of scientific research to build a more solid foundation of knowledge and accelerate the research endeavor.’

It is time to act!

Replication and post-publication review: Four best practice examples

Many of the concrete measures proposed to improve the quality and robustness of biomedical research are greeted with great skepticism: ‘Good idea, but how can we implement it, and will it work?’ So here are a few recent best-practice examples regarding two key areas: replication, and the review process. Continue reading

Should we trust scientists? The case for peer auditing

Robert Boyle

Since the 17th century, when gentlemen scientists were typically seen as trustworthy sources for the truth about humankind and the natural order, the tenet that ‘science is based on trust’ has been generally accepted. This refers to trust between scientists: they build on each other’s data and may question a hypothesis or a conclusion, but not the quality of the scientific method applied or the faithfulness of the report, such as a publication. But it also refers to the trust of the public in the scientists whom societies support via tax-funded academic systems. Consistently, scientists (in particular in biomedicine) score highest among all professions in ‘trustworthiness’ ratings. Yet despite often questioning the trustworthiness of their competitors when chatting over a beer or two, scientists publicly and vehemently argue against any measure proposed to underpin confidence in their work through any form of scrutiny (e.g. auditing). Instead, they swiftly invoke Orwellian visions of a ‘science police’ and point out that scrutiny would undermine trust and jeopardize the creativity and ingenuity inherent to the scientific process. I find this quite remarkable. Why should science be exempt from scrutiny and control, when other areas of public and private life sport numerous checks and balances? Science may indeed be the only publicly funded domain in society that gets away with strictly rejecting accountability. So why do we trust scientists, but not bankers?

Continue reading

Journals unite for reproducibility

SORBETTO/ISTOCKPHOTO.COM (DOI: 10.1126/science.aaa1724)

Amidst what has been termed the ‘reproducibility crisis’ (see also a number of previous posts), in June 2014 the National Institutes of Health and Nature Publishing Group convened a workshop on the rigour and reproducibility of preclinical biomedicine. As a result, last week the NIH published ‘Principles and Guidelines for Reporting Preclinical Research‘, and Nature as well as Science ran editorials on it. More than 30 journals, including the Journal of Cerebral Blood Flow and Metabolism, are endorsing the guidelines. The guidelines cover rigour in statistical analysis, transparent reporting and standards (including randomization and blinding as well as sample size justification), and data mining and sharing. This is an important step forward, but implementation has to be enforced and monitored: the ARRIVE guidelines (many items of which reappear in the NIH guidelines) have not been adopted widely yet (see previous post). In this context I highly recommend the article by Henderson et al. in PLoS Medicine, in which they systematically review existing guidelines for in vivo animal experiments. From this, the STREAM collaboration distilled a checklist on internal, external, and construct validity which I found more comprehensive and relevant than the one now published by the NIH. In the end, however, it is not so relevant with which guideline (ARRIVE, NIH, STREAM, etc.) researchers, reviewers, editors, or funders comply, but rather whether they use one at all!


Note added 12/15/2014: Check out the PubPeer postpublication discussion on the NIH/Nature/Science initiative (click here)!

Appraising and rewarding biomedical research

PQRST

In academic biomedicine, in most countries and research environments, grants, performance-based funding, positions, etc. are appraised and rewarded based on very simple quantitative metrics: the impact factor (IF) of previous publications, and the amount of third-party funding. For example, at the Charité in Berlin researchers receive ‘bonus’ funding (to be spent only on research, of course!) which is calculated by adding the IFs of the journals in which the researcher has published over the last 3 years, and the cumulative third-party funding during that period (weighted depending on whether the source was the Deutsche Forschungsgemeinschaft (DFG, x3), the ministry of science (BMBF), the EU, foundations (x2), or others (x1)). IF and funding contribute 50% each to the bonus. In 2014, this resulted in a bonus of 108 € per IF point, and 8 € per 1000 € of funding (weighted).
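
As a minimal sketch of how such a bonus formula works in practice, here is the calculation in Python. The 2014 rates and the DFG/foundation/other weights are taken from the description above; the weights assumed for BMBF and the EU, and the example researcher’s numbers, are invented for illustration:

```python
# A minimal sketch of the bonus formula described above, with invented inputs.
# The 2014 rates (108 EUR per IF point, 8 EUR per 1,000 EUR of weighted funding)
# and the DFG (x3) / foundation (x2) / other (x1) weights are from the post;
# the weights assumed here for BMBF and the EU are not stated there.

FUNDING_WEIGHTS = {"DFG": 3.0, "BMBF": 1.0, "EU": 1.0, "foundation": 2.0, "other": 1.0}

EUR_PER_IF_POINT = 108.0          # 2014 rate
EUR_PER_1000_EUR_FUNDING = 8.0    # 2014 rate, applied to weighted funding

def bonus(impact_factors, funding_by_source):
    """IF component plus weighted third-party-funding component."""
    if_points = sum(impact_factors)  # journal IFs, summed over the last 3 years
    weighted_funding = sum(FUNDING_WEIGHTS[source] * amount
                           for source, amount in funding_by_source.items())
    return (if_points * EUR_PER_IF_POINT
            + weighted_funding / 1000.0 * EUR_PER_1000_EUR_FUNDING)

example = bonus(
    impact_factors=[25.0, 10.0, 5.0],                          # 40 IF points
    funding_by_source={"DFG": 300_000, "foundation": 100_000}  # over 3 years
)
print(f"bonus: {example:.0f} EUR")   # 13120 EUR with these invented inputs
```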

Admittedly, IF and third-party funding are quantitative, hence easily comparable, and using them is easy and ‘just’. But is it smart to select candidates or grants based on a metric that measures the average number of citations to recent articles published in a journal? In other words, a metric of a specific journal, and not of the authors and their research findings. Similar issues concern third-party funding as an indicator: it reflects the past, not the present or future, and is affected by numerous factors that are only loosely dependent on, or even independent of, the quality and potential of a researcher or his/her project. But it is a number, and it can be objectively measured, down to the penny! Despite widespread criticism of these conventional metrics, they remain the backbone of almost all assessment exercises. Most researchers and research administrators admit that this approach is far from perfect, but they posit that it is the least bad of the available solutions. In addition, they lament that there are no alternatives. To those I recommend John Ioannidis’ and Muin Khoury’s recent opinion article in JAMA 2014 Aug 6;312(5):483-4. [Sorry for continuing to feature articles by John Ioannidis, but he keeps pulling brilliant ideas out of his hat.]

They propose the ‘PQRST index’ for assessing value in biomedical research. What is it?

Continue reading