Category: Neuroscience

The “Broken Science” (aka “waste”) debate: A personal cheat sheet


On March 17, 2015, five panelists from cognitive neuroscience and psychology (Sam Schwarzkopf, Chris Chambers, Sophie Scott, Dorothy Bishop, and Neuroskeptic) publicly debated “Is science broken? If so, how can we fix it?”. The event was organized by Experimental Psychology, UCL Division of Psychology and Language Sciences / Faculty of Brain Sciences in London.

The debate revolved around the ‘reproducibility crisis’ and covered false positive rates, replication, faulty statistics, lack of power, publication bias, study preregistration, data sharing, peer review, you name it. Understandably, the event caused a stir in the press, journals, and the blogosphere (Nature, BioMed Central, Aidan’s Aviary, The Psychologist, etc.).

Remarkably, some of the panelists (notably Sam Schwarzkopf) respectfully opposed the current ‘crusade for true science’ (to which I must confess I subscribe) by arguing that science is not broken at all, but rather that by trying to fix it we run the risk of wrecking it for good. Already a few days before the official debate, he and Neuroskeptic had started to exchange arguments on Neuroskeptic’s blog. While both parties appear to agree that science can be improved, they completely disagree in their analysis of the current status of the scientific enterprise, and consequently also on action points.

This pre-debate argument directed my attention to a blog which Sam Schwarzkopf, or rather his alter ego the ‘Devil’s Neuroscientist’, ran for a short but very productive period. Curiously, the Devil’s Neuroscientist retired from blogging the night before the debate, threatening that there would be no future posts! This is sad, because the Devil’s Neuroscientist, albeit somewhat aggressively but very much to the point, tried to debunk the theses that there is a reproducibility crisis, that science is not self-correcting, that studies should be preregistered, and so on. In other words, he argued against most of the issues raised and remedies suggested also on my pages. In passing, he provided a lot of interesting links to proponents on either side of the fence. Although I do not agree with many of his conclusions, his is by far the most thoughtful treatment of the subject. Most of the time, when I discuss with fellow scientists who dismiss the problems of the current model of biomedical research, I get rather unreflective comments: they simply celebrate the status quo as the best of all possible worlds and do not get beyond the statement that there may be a few glitches, but that the model has evolved over centuries of undeniable progress. “If it’s not broken, don’t fix it.”

The Devil’s blog stimulated me to produce a short summary of the key arguments of the current debate, both to organize my own thoughts and as a courtesy to the busy reader. Continue reading

“Translational research may be most successful when it fails”: The efficiency of translation in biomedicine

In 2009, Chalmers and Glasziou investigated sources of avoidable waste in biomedical research and estimated that, cumulatively, about 85% of research investment is wasted (Lancet 2009;374:86–89). Critical voices have since questioned this exceedingly high number, or claimed that, because of the non-linearities and idiosyncrasies of the biomedical research process, a large number of failures is needed to produce a comparably small number of breakthroughs, and have therefore hailed the remaining 15%. Waste is defined as ‘resources consumed by inefficient or non-essential activities’. Does progress really thrive on waste?
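The arithmetic behind such a cumulative estimate is simple compounding of stage-wise losses. Here is a minimal sketch; the stage labels and the uniform 50% loss rates are illustrative assumptions for the sake of the example, not the exact figures from the Lancet paper:

```python
# Toy model: losses at successive stages of the research pipeline multiply.
# The stages and 50% loss rates below are illustrative assumptions only.
stage_losses = {
    "flawed design or methods": 0.50,
    "never (fully) published": 0.50,
    "unusable or biased reporting": 0.50,
}

usable = 1.0
for stage, loss in stage_losses.items():
    usable *= 1.0 - loss  # fraction surviving this stage

print(f"usable fraction of research investment: {usable:.3f}")  # 0.125
print(f"wasted fraction: {1.0 - usable:.3f}")                   # 0.875
```

Even moderate losses at a handful of stages compound into a wasted fraction in the region of 85–90%.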

Continue reading

Are scientific papers actually read?

MEDLINE currently indexes 5,642 journals; PubMed comprises more than 24 million citations for biomedical literature from MEDLINE. My field is stroke research. Close to 30,000 articles were published in 2014 on the topic ‘stroke’ (clinical and experimental), more than 20,000 of them peer-reviewed original articles in the English language (Web of Science). That amounts to more than 50 articles every day. In 2014, 1,700 of them were rodent studies, a mere 5 per day. Does (can) anyone read them? And should we read them? Do researchers worldwide every day produce knowledge worth publishing in 50 articles?
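For those who like to check the throughput arithmetic, it is trivial (figures as quoted above from Web of Science):

```python
# Back-of-the-envelope publication throughput in stroke research, 2014.
peer_reviewed_originals = 20_000  # English-language peer-reviewed originals
rodent_studies = 1_700            # rodent studies among the 2014 stroke articles

print(f"{peer_reviewed_originals / 365:.0f} articles per day")  # ~55, i.e. 'more than 50'
print(f"{rodent_studies / 365:.1f} rodent studies per day")     # ~4.7, i.e. 'a mere 5'
```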

Continue reading

The art of motorcycle maintenance

‘Scientific rigor and the art of motorcycle maintenance‘ is another recent fine analysis of reliability issues in current biomedicine (Munafò et al., Nat Biotechnol 2014;32:871–3). If you only want to read one article, this may be it. It nicely sums up the problems and suggests all the relevant measures (see most of my previous posts). But besides the reference to Robert Pirsig’s 1974 novel, what is new in the article is the comparison of the scientific enterprise to the automobile industry, which successfully responded to quality problems with structured quality control (for a more thorough treatment, see the previous post on trust and auditing). Here is their conclusion:

‘Science is conducted on the principle that it is self-correcting, but the extent to which this is true is an empirical question. The more that quality control becomes integrated into the scientific process itself, the more the whole process becomes one of continual improvement. Implementing this at the level of production implies a culture of incentivizing, educating and empowering those responsible for production, rather than policing quality after the fact with ‘quality inspectors’ (i.e., peer reviewers) or, even more distally, requiring attempts at replication. We think this insight, applied successfully to automobile manufacturing in the 1970s, can also be profitably applied to the practice of scientific research to build a more solid foundation of knowledge and accelerate the research endeavor.’

It is time to act!

Replication and post-publication review: Four best practice examples

Many of the concrete measures proposed to improve the quality and robustness of biomedical research are greeted with great skepticism: ‘Good idea, but how can we implement it, and will it work?’ So here are a few recent best practice examples regarding two key areas: replication and the review process. Continue reading

Should we trust scientists? The case for peer auditing


Robert Boyle

Since the 17th century, when gentleman scientists were typically seen as trustworthy sources for the truth about humankind and the natural order, the tenet that ‘science is based on trust‘ has been generally accepted. This refers to trust between scientists: they build on each other’s data and may question a hypothesis or a conclusion, but not the quality of the scientific method applied or the faithfulness of the report, such as a publication. But it also refers to the trust of the public in the scientists whom societies support via tax-funded academic systems. Consistently, scientists (in particular in biomedicine) score highest among all professions in ‘trustworthiness’ ratings. Yet despite often questioning the trustworthiness of their competitors when chatting over a beer or two, scientists publicly and vehemently argue against any measure proposed to underpin confidence in their work through any form of scrutiny (e.g. auditing). Instead, they swiftly invoke Orwellian visions of a ‘science police’ and point out that scrutiny would undermine trust and jeopardize the creativity and ingenuity inherent in the scientific process. I find this quite remarkable. Why should science be exempt from scrutiny and control when other areas of public and private life sport numerous checks and balances? Science may indeed be the only domain in society which is funded by the public and gets away with strictly rejecting accountability. So why do we trust scientists, but not bankers?

Continue reading

10 years after: Ioannidis revisits his classic paper

In 2005, PLOS Medicine published John Ioannidis’ paper ‘Why most published research findings are false’. The article was a wake-up call for many and is now probably the most influential publication in biomedicine of the last decade (>1.14 million views on the PLOS Medicine website, thousands of citations in the scientific and lay press, features in numerous blog posts, etc.). Its title has never been refuted; if anything, it has been replicated (for examples, see some of the posts on this blog). Almost 10 years later, Ioannidis now revisits his paper, and the more constructive title ‘How to make more published research true’ (PLoS Med 2014;11(10):e1001747) already indicates that the thrust this time is more forward-looking. The article contains numerous suggestions for improving the research enterprise, some subtle and evolutionary, some disruptive and revolutionary, but all of them make a lot of sense. A must-read for scientists, funders, journal editors, university administrators, professionals in the health industry, in other words: all stakeholders within the system!
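The core of Ioannidis’ argument is a simple positive-predictive-value calculation: with prior odds R that a probed hypothesis is true, power 1−β, and significance level α, the post-study probability that a ‘significant’ finding is actually true is PPV = (1−β)R / ((1−β)R + α). A minimal sketch; the example numbers (prior odds, power levels) are my own illustrative assumptions:

```python
def ppv(prior_odds: float, power: float, alpha: float = 0.05) -> float:
    """Post-study probability that a nominally significant finding is true
    (Ioannidis, PLoS Med 2005), ignoring bias and multiple teams:
    PPV = (1 - beta) * R / ((1 - beta) * R + alpha)."""
    return power * prior_odds / (power * prior_odds + alpha)

# Illustrative assumption: 1 in 11 probed hypotheses is true (prior odds R = 0.1).
print(f"well powered (80%): PPV = {ppv(0.1, 0.8):.2f}")  # ~0.62
print(f"underpowered (20%): PPV = {ppv(0.1, 0.2):.2f}")  # ~0.29
```

With typical exploratory priors and the chronically low power documented in biomedicine, the PPV drops below 0.5, which is the formal sense in which ‘most published research findings are false’.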

Are most research findings actually false? (Sind die meisten Forschungsergebnisse tatsächlich falsch?)

In July, Laborjournal (‘LabTimes’), a free German monthly for life scientists (sort of a hybrid between the Economist and the British tabloid The Sun), celebrated its 20th anniversary with a special issue. I was asked to contribute an article. In it I try to answer the question of whether most published research findings are false, as John Ioannidis rhetorically asked in 2005.

To find out, you have to be able to read German: click here for a pdf of the article (in German).

Higgs’ boson and the certainty of knowledge

Peter Higgs

“Five sigma” is the gold standard for statistical significance in physics for particle discovery. When the New Scientist reported on the putative confirmation of the Higgs boson, they wrote:

‘Five-sigma corresponds to a p-value, or probability, of 3×10⁻⁷, or about 1 in 3.5 million. There’s a 5-in-10 million chance that the Higgs is a fluke.’

Does that mean that p-values can tell us the probability of being correct about our hypotheses? Can we use p-values to decide about the truth (correctness) of hypotheses? Does p<0.05 mean that there is a less than 5% chance that an experimental hypothesis is wrong?
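To make the distinction concrete, here is a minimal sketch (Python with scipy) of what five sigma does and does not tell us; the comment at the end is the crux of this post:

```python
from scipy.stats import norm

# Probability of an excess of at least 5 sigma under the null hypothesis
# (one-tailed), i.e. p(data at least this extreme | H0 is true).
p = norm.sf(5.0)  # survival function, 1 - CDF
print(f"p = {p:.2e}")                           # 2.87e-07
print(f"about 1 in {1 / p / 1e6:.1f} million")  # ~3.5 million

# Crucially, this is NOT p(H0 is true | data), i.e. not 'the chance that the
# Higgs is a fluke'. Converting one into the other requires a prior
# probability for H0 via Bayes' theorem, which the p-value alone cannot supply.
```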

Continue reading

Can mice be trusted?

Katharina Frisch ‘Mouse and man’

I started this blog with a post on a PNAS paper which at that time had received a lot of attention in the scientific community and lay press. In this article, Seok et al. argued that ‘genomic responses in mouse models poorly mimic human inflammatory diseases‘. With this post I am returning to this article, as I was recently asked by the journal STROKE to contribute to their ‘Controversies in Stroke’ series. The Seok article had disturbed the stroke community, so a pro/con discussion seemed timely. In the upcoming issue of STROKE, Sharp and Jickling will argue that ‘the peripheral inflammatory response in rodent ischemic stroke models is different than in human stroke. Given the important role of the immune system in stroke, this could be a major handicap in translating results in rodent stroke models to clinical trials in patients with stroke.‘ This is of course true! Nevertheless, I counter by providing some examples of translational successes regarding stroke and the immune system, and conclude that ‘the physiology and pathophysiology of rodents is sufficiently similar to humans to make them a highly relevant model organism but also sufficiently different to mandate an awareness of potential resulting pitfalls. In any case, before hastily discarding highly relevant past, present, and future findings, experimental stroke research needs to improve dramatically its internal and external validity to overcome its apparent translational failures.’ For an in-depth treatment, follow the debate:

Article: Dirnagl: Can mice be trusted

Article: Sharp Jickling: Differences between mice and humans