Replication and post-publication review: Four best practice examples
Many of the concrete measures proposed to improve the quality and robustness of biomedical research are greeted with great skepticism: ‘Good idea, but how can we implement it, and will it work?’ So here are a few recent best practice examples regarding two key areas: replication, and the review process.
In the ‘Reproducibility Project’, cancer biologists are replicating selected experiments from a set of 50 research papers in an effort to estimate the rate of reproducibility in preclinical cancer research. Much has been written about it, and I believe that this approach may be applicable to other fields as well. eLife is now reviewing and publishing the various replication protocols, which guarantees expert input even before the actual experiments start, as well as publication regardless of the results.
Another remarkable set of replications is currently in press at Cortex, a journal which also has a ‘Registered Reports’ track: scientists submit the study protocol for review before they start the experiment, and upon acceptance publish the results regardless of the outcome with respect to the tested hypothesis (negative, neutral, or positive). Boekel et al. failed to replicate 17 distinct correlations between human brain structure and behavior. The study was preregistered, used a smart Bayesian approach (sketched below), and sports a very thorough and thoughtful discussion of what this failed replication means, and what it does not mean. Neuroskeptic has a nice summary of the story. I bring up the Cancer Reproducibility Project and the Cortex paper here as antidotes to the notion that reproducing the work of others is not rewarding, is logistically impossible, or does not lead to relevant progress.
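For readers curious what such a Bayesian replication test looks like in practice, here is a minimal sketch of the default Bayesian correlation test of Wetzels & Wagenmakers (2012), the kind of test Boekel et al. used to quantify evidence for or against a brain–behavior correlation. To be clear, this is not the authors’ actual analysis code, and the values of r and n are made up for illustration:

```python
# Minimal sketch of the default Bayesian correlation test
# (Wetzels & Wagenmakers, 2012); r and n below are hypothetical.
import numpy as np
from scipy.integrate import quad

def jzs_corr_bf10(r, n):
    """Bayes factor BF10 for an observed Pearson correlation r at sample size n."""
    integrand = lambda g: ((1 + g) ** ((n - 2) / 2)
                           * (1 + (1 - r ** 2) * g) ** (-(n - 1) / 2)
                           * g ** (-1.5)
                           * np.exp(-n / (2 * g)))
    integral, _ = quad(integrand, 0, np.inf)  # integrate out the g-prior
    return np.sqrt(n / (2 * np.pi)) * integral

bf10 = jzs_corr_bf10(r=0.1, n=36)  # hypothetical replication sample
print(f"BF10 = {bf10:.2f}, BF01 = {1 / bf10:.2f}")  # BF01 > 3 is read as evidence FOR the null
```

The point of this machinery is that, unlike a non-significant p-value, a Bayes factor can express positive evidence for the null hypothesis, which is what makes a ‘failed replication’ an informative result rather than a mere absence of significance.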
Anonymous pre-publication review (the ‘standard model’) is often labelled the least bad of all solutions for safeguarding the quality of published work (see previous post). Non-anonymous post-publication review is the alternative approach. For those not familiar with it, let’s look at an example from my field, published in F1000Research, a journal which continues to push the envelope of scientific publishing. The article ‘Secretomes of apoptotic mononuclear cells ameliorate neurological damage in rats with focal ischemia’ was published after revision 2, alongside the comments of the two named expert reviewers (Boltze, Chopp), who spent a lot of time with the manuscript and gave constructive advice which the authors used to improve it. The article is open access, all communications and versions of the manuscript are accessible, and the original data is just a click away (on figshare). Even if you are not interested in the subject, here you can peek into the future of publishing.
Another interesting initiative in this regard is PubPeer, although I still have my bets on PubMed Commons (see previous post). How this works can be studied with a recent publication in Nature entitled ‘Modulation of the proteoglycan receptor PTPσ promotes recovery after spinal cord injury’. In an (unfortunately) anonymous discussion, a post-publication review of this paper unfolds (and is in fact still ongoing…). This is how science should work. My only criticisms are that the scientists don’t look each other in the eye, and that if you read the paper in Nature you will not find out that issues around it are currently being debated. The latter is not the fault of PubPeer (but rather of Nature), and it is why I think PubMed Commons is an even better venue for such discourse: there the discussion is permanently linked to the article on PubMed.