Category: Publishing

Too good to be true: Excess significance in experimental neuroscience

In a massive meta-analysis of animal studies of six neurological diseases (EAE/MS, Parkinson’s disease, ischemic stroke, spinal cord injury, intracerebral hemorrhage, Alzheimer’s disease), Tsilidis et al. have demonstrated that the published literature in these fields contains an excess of statistically significant results due to biases in reporting (PLoS Biol. 2013 Jul;11(7):e1001609). By including more than 4000 datasets (from more than 1000 individual studies!), which they synthesized in 160 meta-analyses, they impressively substantiate that there are far too many ‘positive’ results in the literature! The underlying reasons are reporting bias, including study publication bias, selective outcome reporting bias (where null results are omitted), and selective analysis bias (where data are analysed with different methods that favour ‘positive’ results). Study size was low (mean 16 animals), less than one third of the studies randomized or evaluated outcome in a blinded fashion, and only 39 of 4140 studies performed sample size calculations!
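The logic behind such an ‘excess significance’ test is simple: if the typical power of the individual studies is modest, only a correspondingly modest fraction of them should come out ‘positive’. Here is a minimal sketch of that comparison in Python; all numbers are invented for illustration, not taken from the Tsilidis et al. analysis.

```python
# Hedged sketch of an excess-significance check in the spirit of the
# approach described above; the numbers are invented for illustration,
# not taken from the Tsilidis et al. meta-analysis.
from scipy.stats import binomtest

n_studies = 100            # hypothetical number of studies in a field
observed_significant = 70  # hypothetical count of nominally 'positive' studies
estimated_power = 0.35     # hypothetical average power of the individual studies

expected_significant = estimated_power * n_studies

# If 'positive' findings appear far more often than the estimated power
# allows, that is a hint of reporting or analysis bias.
result = binomtest(observed_significant, n_studies, estimated_power,
                   alternative='greater')
print(f"expected ~{expected_significant:.0f} significant studies, "
      f"observed {observed_significant}, p = {result.pvalue:.2g}")
```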

Overcoming negative (positive) publication bias

F1000 Research starts an initiative to overcome ‘positive publication bias’ (aka ‘negative publication bias’). Until the end of August, publication fees are waived for submissions of null results.

Only data that are available via publications—and, to a certain extent, via presentations at conferences—can contribute to progress in the life sciences. However, it has long been known that a strong publication bias exists, in particular against the publication of data that do not reproduce previously published material or that refute the investigators’ initial hypothesis. The latter type of contradictory evidence is commonly known as ‘negative data.’ This slightly derogatory term reflects the bias against studies in which investigators were unable to reject their null hypothesis (H0), a tool of frequentist statistics that states that there is no difference between experimental groups.

Researchers are well aware of this bias, as journals are usually not keen to publish the nonexistence of a phenomenon or treatment effect. They know that editors have little interest in publishing data that refute, or do not reproduce, previously published work—with the exception of spectacular cases that guarantee the attention of the scientific community, as well as garner extra citations (Ioannidis and Trikalinos, 2005). The authors of negative results are required to provide evidence for failure to reject the null hypothesis under numerous conditions (e.g., dosages, assays, outcome parameters, additional species or cell types), whereas a positive result would be considered worthwhile under any of these conditions. Indeed, there is a dilemma: one can never prove the absence of an effect, because, as Altman and Bland (1995) remind us, ‘absence of evidence is not evidence of absence’.

Several journals have already opened their pages to ‘negative’ results. For example, the Journal of Cerebral Blood Flow and Metabolism (‘Fighting publication bias: introducing the Negative Results section’) publishes such studies as a one-page summary (maximum 500 words, two figures) in the print edition of the journal, with the accompanying full paper online.

Impact factor mania


The San Francisco Declaration on Research Assessment has stimulated a flurry of editorials (Science, eLife, etc.) and received a lot of praise. While the Declaration, as well as the editorials, provides a concise and critical view of the current status of research evaluation by funding agencies and academic institutions, none of the arguments is new. In fact, all of them have been voiced many times before, even by Eugene Garfield, the ‘spiritual father’ of the impact factor (IF). The IF appears to be irreversibly ingrained in the brains of committee members and grant reviewers. Besides being highly convenient (one just needs to press the sort button in a spreadsheet to create shortlists for positions…), it is the water that turns the mill of the current publishing model. Declarations and editorials (in high-IF journals…) will not change this. The only option is to switch to a publication model which prevents hierarchies of journals and which uses other metrics, such as citations, and post-publication review, which could generate other, peer-based metrics (e.g. in analogy to the Internet Movie Database, IMDb; see the highly stimulating article by Witten and Tibshirani: ‘Scientific research in the age of omics: the good, the bad, and the sloppy’).

First, shaky steps are being taken by PLOS ONE, F1000 Research, and others. Radically changing the publication model would not only cure ‘impact factor mania’, but also ameliorate a number of other issues, such as negative publication bias, fragmentation of publications, or heavily biased publications (due to the need to report ‘black and white’). Besides, it would save a lot of time (for authors and reviewers) and money (for the taxpayer).

Cherry-pick your h-index!

Scientometrics are increasingly used to evaluate scientists for positions, etc. A while ago, citation numbers for individuals (and derived parameters, such as the h-index) could only be obtained via Thomson Reuters’ ISI Web of Science. Then came Elsevier’s Scopus, and now we also have Google Scholar Citations. Most researchers use them without thinking much about them, and quite often without referencing the specific source they used to obtain their personal metrics. However, the citation counts and h-indices calculated by these three services may be very different.
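The underlying arithmetic is simple: the h-index is the largest h for which at least h of one’s papers have at least h citations each, so whichever service indexes more citing documents will report a higher value. A toy sketch with invented citation counts:

```python
# Toy illustration: the h-index is the largest h such that at least h of
# one's papers have at least h citations each. All citation counts below
# are invented; the point is only that different databases index
# different citing documents and therefore yield different values.
def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# The same publication list as it might appear in three services
# that each find a different subset of citations (made-up numbers).
web_of_science = [45, 30, 22, 10, 8, 6, 3, 1]
scopus         = [50, 33, 25, 12, 9, 8, 7, 2]
google_scholar = [80, 52, 40, 21, 15, 11, 9, 8, 6]

for name, counts in [("Web of Science", web_of_science),
                     ("Scopus", scopus),
                     ("Google Scholar", google_scholar)]:
    print(f"{name}: h-index = {h_index(counts)}")
```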


Nature Neuroscience initiative to improve reporting

Nature Neuroscience is currently undertaking an initiative to improve statistics and methods reporting.

Making methods clearer: Nature Neuroscience (Nature Publishing Group).

Following an initiative of the NIH/NINDS (Landis et al.), Nature Neuroscience is testing a new scheme to improve reporting, and consequently to reduce bias and faulty statistics in work published in their journal. Authors have to fill out a very detailed checklist (stats checksheet nat neurosci source file ud), which is sent to the reviewers. Other journals had checklists before, but they were pro forma and mostly ignored. This checklist looks very bureaucratic, and authors will hate it, but it contains all the relevant questions (stats including power; bias including blinding and randomization; ethics; detailed reporting of strains, animal husbandry, and materials; design of functional imaging studies, etc.). It forces authors to think about these issues, and if they haven’t done their homework before designing the experiments they either have to cancel submission or fake entries in the sheet, which would put them in clear violation of good scientific practice, as opposed to simply not reporting that they used an underpowered design without blinding and randomization (which is the current practice)…
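To put the ‘stats including power’ item into perspective, an a priori sample size calculation takes only a few lines. The sketch below uses the standard normal-approximation formula for comparing two group means; the effect size, alpha and power chosen here are illustrative assumptions, not values prescribed by the checklist.

```python
# Minimal sketch of an a priori sample size calculation for comparing two
# group means, using the standard normal-approximation formula. Effect
# size, alpha and power are illustrative assumptions, not checklist values.
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate animals per group for a two-sided two-sample comparison."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Even a 'large' standardized effect (Cohen's d = 0.8) needs roughly
# 25 animals per group at 80% power, well above the mean study size
# of 16 animals reported in the meta-analysis discussed above.
print(n_per_group(effect_size=0.8))   # -> 25
```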

Nature Neuroscience should be commended for this initiative. Other journals will hopefully adopt this policy, and reviewers will take the time to study the checklists, requesting clarification from the authors or pulling the plug on the publication of flawed papers.

Does negative publication bias exist?

Positively negative – are editors to blame for publication bias? | Discussions – F1000 Research.

In this interview Stephen Senn correctly points out that the argument, recently voiced by Ben Goldacre in ‘Bad Pharma’, that there is no negative publication bias is flawed. “Senn explains […] that the flaw in the argument of ‘no editorial bias’ is the assumption that the papers submitted to any given journal are more or less of the same quality, regardless of whether they are positive or negative. What we’re apparently overlooking here are the papers that aren’t being submitted.”

 Read the full article at F1000research.com:

http://f1000research.com/articles/1-59/v1