Category: Research
Have the ARRIVE guidelines been implemented?
Research on animals generally lacks transparent reporting of study design, implementation, and results. As a consequence of poor reporting, we face problems replicating published findings, publication of underpowered studies, excessive false positives or false negatives, publication bias, and, as a result, difficulties in translating promising preclinical results into effective therapies for human disease. To improve the situation, the ARRIVE guidelines for the reporting of animal research (www.nc3rs.org.uk/ARRIVEpdf) were formulated in 2010 and adopted by over 300 scientific journals, including the Journal of Cerebral Blood Flow and Metabolism (www.nature.com/jcbfm). Four years later, Baker et al. (PLoS Biol 12(1): e1001756. doi:10.1371/journal.pbio.1001756) systematically investigated the effect of the implementation of the ARRIVE guidelines on the reporting of in vivo research, with a particular focus on the multiple sclerosis field. The results are highly disappointing:
‘86%–87% of experimental articles do not give any indication that the animals in the study were properly randomized, and 95% do not demonstrate that their study had a sample size sufficient to detect an effect of the treatment were there to be one. Moreover, they show that 13% of studies of rodents with experimental autoimmune encephalomyelitis (an animal model of multiple sclerosis) failed to report any statistical analyses at all, and 55% included inappropriate statistics. And while you might expect that publications in ‘‘higher ranked’’ journals would have better reporting and a more rigorous methodology, Baker et al. reveal that higher ranked journals (with an impact factor greater than ten) are twice as likely to report either no or inappropriate statistics’ (Editorial by Eisen et al., PLoS Biol 12(1): e1001757. doi:10.1371/journal.pbio.1001757).
It is highly likely that other fields in biomedicine have a similarly dismal record. Clearly, there is a need for journal editors and publishers to enforce the ARRIVE guidelines and to monitor their implementation!
Impossible to reproduce the recipe

Issue 3 of Laborjournal contains a very useful article on the (non-)reproducibility of results. Sorry, in German only….
METRICS
The Economist reported that John Ioannidis, together with Steven Goodman, will open the Meta-Research Innovation Center at Stanford University (METRICS) later this month. Generously supported by the Buck Foundation, it will fight bad science, bias, and lack of evidence in all areas of biomedicine. The institute’s motto is to ‘Identify and minimise persistent threats to medical research quality’. Those who have followed the work of Ioannidis and Goodman know that this is good news indeed! A concise overview of Ioannidis’ research can be found in this online article at Maclean’s.
The probability of replicating ‘true’ findings is low…
Due to small group sizes and the presence of substantial bias, experimental medicine produces a large number of false positive results (see previous post). It has been claimed that 50–90% of all results may be false (see previous post). These claims are supported by the staggeringly low number of experiments that can be replicated. But what are the chances of reproducing a finding that is actually true?
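For a finding that is genuinely true, the probability that a replication attempt reaches statistical significance is simply the statistical power of the replication study, which depends on the true effect size and the group size. The following is a minimal sketch of this calculation (not from the post itself): it uses a normal approximation to the power of a two-sided, two-sample test; `replication_power` is an illustrative helper name, and the effect and sample sizes are assumed values for the example.

```python
from statistics import NormalDist

def replication_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test
    under a normal approximation.

    d           : true standardized effect size (Cohen's d)
    n_per_group : subjects (e.g. animals) per group
    alpha       : significance threshold
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    # non-centrality: how many standard errors the true effect
    # lies away from zero
    delta = d * (n_per_group / 2) ** 0.5
    # probability that the test statistic exceeds the critical value
    return 1 - z.cdf(z_crit - delta)

# A true medium-sized effect (d = 0.5) replicated with the small
# group sizes typical of experimental medicine (n = 10 per group):
print(replication_power(0.5, 10))
```

With these assumed numbers the chance of a ‘successful’ replication is only about one in five, even though the finding is true; raising the group size to 50 per group lifts it above two in three. Small studies thus fail to replicate true findings most of the time.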
Open evaluation of scientific papers
Scientific publishing should be based on open access and open evaluation. While open access is on its way, open evaluation (OE) is still controversial and only slowly seeping into the system. Kriegeskorte, Walther, and Deca have edited a whole issue of Frontiers in Computational Neuroscience devoted to this topic, with some very scholarly and thoughtful discussions of the pros and cons of OE. I highly recommend the editorial (An emerging consensus for open evaluation), which tries to synthesize the arguments into ‘18 visions’. The beauty of their blueprint for the future of scientific publication (which was already published a year ago) is that the current system can gradually be evolved into a full-blown OE system, while checking along the way whether the different measures deliver on their promises.
Otto Warburg’s research grant
Otto Warburg (1883–1970), the famous German biochemist and Nobel laureate, is often credited with having written the perfect, ideal research grant. I just stumbled upon a ‘reconstruction’ of his classic application to the Notgemeinschaft der Deutschen Wissenschaft (Emergency Association of German Science, the forerunner of the Deutsche Forschungsgemeinschaft), probably from 1921. The application consisted of a single sentence, ‘I require 10,000 marks’, and was fully funded. It was (re)printed in a review in Nature Reviews Cancer entitled ‘Otto Warburg’s contributions to current concepts of cancer metabolism’ by Koppenol et al. Thanks! Those were the days!
