Category: Research

Replication and post-publication review: Four best practice examples

Many of the concrete measures proposed to improve the quality and robustness of biomedical research are greeted with great skepticism: ‘Good idea, but how can we implement it, and will it work?’ So here are a few recent best practice examples from two key areas: replication and the review process. Continue reading

Should we trust scientists? The case for peer auditing

Robert Boyle

Since the 17th century, when gentlemen scientists were typically seen as trustworthy sources for the truth about humankind and the natural order, the tenet that ‘science is based on trust’ has been generally accepted. This refers to trust between scientists: they build on each other’s data and may question a hypothesis or a conclusion, but not the quality of the scientific method applied or the faithfulness of the report, such as a publication. But it also refers to the trust of the public in the scientists whom societies support via tax-funded academic systems. Consistently, scientists (in particular in biomedicine) score highest among all professions in ‘trustworthiness’ ratings. Despite often questioning the trustworthiness of their competitors when chatting over a beer or two, they publicly and vehemently argue against any measure proposed to underpin confidence in their work through any form of scrutiny (e.g. auditing). Instead they swiftly invoke Orwellian visions of a ‘science police’ and point out that scrutiny would undermine trust and jeopardize the creativity and ingenuity inherent to the scientific process. I find this quite remarkable. Why should science be exempt from scrutiny and control, when other areas of public and private life sport numerous checks and balances? Science may indeed be the only domain in society which is funded by the public and gets away with strictly rejecting accountability. So why do we trust scientists, but not bankers?

Continue reading

Journals unite for reproducibility

SORBETTO/ISTOCKPHOTO.COM (DOI: 10.1126/science.aaa1724)

Amidst what has been termed the ‘reproducibility crisis’ (see also a number of previous posts), the National Institutes of Health and Nature Publishing Group convened a workshop in June 2014 on the rigour and reproducibility of preclinical biomedicine. As a result, last week the NIH published ‘Principles and Guidelines for Reporting Preclinical Research‘, and Nature as well as Science ran editorials on it. More than 30 journals, including the Journal of Cerebral Blood Flow and Metabolism, are endorsing the guidelines. The guidelines cover rigour in statistical analysis, transparent reporting and standards (including randomization and blinding as well as sample size justification), and data mining and sharing. This is an important step forward, but implementation has to be enforced and monitored: the ARRIVE guidelines (many items of which reappear in the NIH guidelines) have not been widely adopted yet (see previous post). In this context I highly recommend the article by Henderson et al. in PLOS Medicine, in which they systematically review existing guidelines for in vivo animal experiments. From this the STREAM collaboration distilled a checklist on internal, external, and construct validity which I found more comprehensive and relevant than the one now published by the NIH. In the end, however, it is not so relevant with which guideline (ARRIVE, NIH, STREAM, etc.) researchers, reviewers, editors, or funders comply, but rather whether they use one at all!

 

Note added 12/15/2014: Check out the PubPeer post-publication discussion of the NIH/Nature/Science initiative (click here)!

Appraising and rewarding biomedical research

PQRST

In academic biomedicine, in most countries and research environments, grants, performance-based funding, positions, etc., are appraised and rewarded on the basis of very simple quantitative metrics: the impact factor (IF) of previous publications, and the amount of third-party funding. For example, at the Charité in Berlin researchers receive ‘bonus’ funding (to be spent only on research, of course!) which is calculated by adding up the IFs of the journals in which the researcher has published over the last 3 years, and the cumulative third-party funding during that period (weighted depending on whether the source was the Deutsche Forschungsgemeinschaft (DFG, x3), the ministry of science (BMBF), the EU, foundations (x2), or others (x1)). IF and funding contribute 50% each to the bonus. In 2014, this resulted in a bonus of 108 € per IF point, and 8 € per 1000 € funding (weighted).
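To make the arithmetic concrete, here is a minimal sketch (in Python) of how such a bonus might be computed. It is an illustration, not the Charité’s actual formula: the per-point rates are the 2014 figures quoted above, but the weights for BMBF and EU funding are not spelled out in the text, so those entries are assumptions.

```python
# Hypothetical sketch of the bonus calculation described above, not the
# Charité's actual formula. Per-point rates are the 2014 figures quoted in
# the text; the weights for BMBF and EU funding are assumptions.

EUR_PER_IF_POINT = 108          # 2014 figure from the text
EUR_PER_1000_EUR_FUNDING = 8    # 2014 figure from the text

FUNDER_WEIGHTS = {
    "DFG": 3,          # stated in the text
    "foundation": 2,   # stated in the text
    "BMBF": 2,         # assumption: weight not spelled out in the text
    "EU": 2,           # assumption: weight not spelled out in the text
    "other": 1,        # stated in the text
}

def bonus(impact_factors, grants):
    """impact_factors: IFs of the journals of the last 3 years' publications.
    grants: iterable of (funder, amount_in_eur) over the same period."""
    if_component = sum(impact_factors) * EUR_PER_IF_POINT
    weighted_funding = sum(FUNDER_WEIGHTS.get(funder, 1) * amount
                           for funder, amount in grants)
    funding_component = weighted_funding / 1000 * EUR_PER_1000_EUR_FUNDING
    return if_component + funding_component

# Example: three papers (IF 5.2, 3.1, 10.0) and one DFG grant of 200,000 EUR
print(round(bonus([5.2, 3.1, 10.0], [("DFG", 200_000)]), 2))  # 6776.4
```

Note that the 50/50 split mentioned above is baked into the 2014 per-point rates rather than enforced explicitly in this sketch.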

Admittedly, IF and third-party funding are quantitative, hence easily comparable, and using them is easy and ‘just’. But is it smart to select candidates or grants based on a metric that measures the average number of citations to recent articles published in a journal? In other words, a metric of a specific journal, and not of authors and their research findings. Similar issues concern third-party funding as an indicator: it reflects the past, not the present or future, and is affected by numerous factors that are only loosely dependent on, or even independent of, the quality and potential of a researcher or his/her project. But it is a number, and it can be objectively measured, down to the penny! Despite widespread criticism of these conventional metrics, they remain the backbone of almost all assessment exercises. Most researchers and research administrators admit that this approach is far from perfect, but they posit that it is the best of all the bad solutions. In addition, they lament that there are no alternatives. To those I recommend John Ioannidis’ and Muin Khoury’s recent opinion article in JAMA (2014 Aug 6;312(5):483-4). [Sorry for continuing to feature articles by John Ioannidis, but he keeps pulling brilliant ideas out of his hat]

They propose the ‘PQRST index’ for assessing value in biomedical research. What is it?

Continue reading

10 years after: Ioannidis revisits his classic paper

In 2005 PLOS Medicine published John Ioannidis’ paper ‘Why most published research findings are false’. The article was a wake-up call for many, and is now probably the most influential publication in biomedicine of the last decade (>1.14 million views on the PLOS Medicine website, thousands of citations in the scientific and lay press, featured in numerous blog posts, etc.). Its title has never been refuted; if anything, it has been replicated, for examples see some of the posts of this blog. Almost 10 years later, Ioannidis now revisits his paper, and the more constructive title ‘How to make more published research true’ (PLoS Med. 2014 Oct 21;11(10):e1001747. doi: 10.1371/journal.pmed.1001747) already indicates that the thrust this time is more forward-looking. The article contains numerous suggestions for improving the research enterprise, some subtle and evolutionary, some disruptive and revolutionary, but all of them make a lot of sense. A must-read for scientists, funders, journal editors, university administrators, professionals in the health industry, in other words: all stakeholders within the system!

p-value vs. positive predictive value


Riddle me this:

What does it mean if a result is reported as significant at p < 0.05?

A If we were to repeat the analysis many times, using new data each time, and if the null hypothesis were really true, then on only 5% of those occasions would we (falsely) reject it.

B Without knowing the statistical power of the experiment and the prior probability of the hypothesis, I cannot estimate the probability that a significant research finding (p < 0.05) reflects a true effect.

C The probability that the result is a fluke (the hypothesis was wrong, the drug doesn’t work, etc.) is below 5 %. In other words, there is a less than 5 % chance that my results are due to chance.

(solution at the end of this post)

Be honest: although it doesn’t sound very sophisticated (as opposed to A and B), you were tempted to choose C, since it makes a lot of sense and represents your own interpretation of the p-value when reading and writing papers. You are in good company. But is C really the correct answer?
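As a side note, the relationship between the significance threshold, statistical power, and the prior probability of the tested hypothesis that option B alludes to can be written down in a few lines. Here is a minimal sketch in Python, with made-up numbers, of how the positive predictive value of a ‘significant’ finding follows from these three quantities.

```python
# Minimal sketch: positive predictive value (PPV) of a 'significant' finding,
# given the significance threshold alpha, the statistical power of the study,
# and the prior probability that the tested hypothesis is true.
# All numbers below are made up for illustration.

def positive_predictive_value(alpha, power, prior):
    """Probability that a result significant at p < alpha reflects a true effect."""
    true_positives = power * prior          # true hypotheses that are detected
    false_positives = alpha * (1 - prior)   # false hypotheses that pass anyway
    return true_positives / (true_positives + false_positives)

# Example: alpha = 0.05, a well-powered study (80%), but only 1 in 10
# tested hypotheses is actually true.
print(positive_predictive_value(alpha=0.05, power=0.80, prior=0.10))  # ~0.64
```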

Continue reading

Sind die meisten Forschungsergebnisse tatsächlich falsch? (Are most research findings actually false?)

In July, Laborjournal (‘Lab Times’), a free German monthly for life scientists (sort of a hybrid between the Economist and the British tabloid The Sun), celebrated its 20th anniversary with a special issue. I was asked to contribute an article. In it I try to answer the question whether most published research findings are false, as John Ioannidis rhetorically asked in 2005.

To find out, you have to be able to read German, and click here for a pdf of the article (in German).

 

Higgs’ boson and the certainty of knowledge

Peter Higgs

“Five sigma” is the gold standard for statistical significance for particle discovery in physics. When the New Scientist reported on the putative confirmation of the Higgs boson, they wrote:

‘Five-sigma corresponds to a p-value, or probability, of 3×10⁻⁷, or about 1 in 3.5 million. There’s a 5-in-10 million chance that the Higgs is a fluke.’

Does that mean that p-values can tell us the probability of being correct about our hypotheses? Can we use p-values to decide about the truth (correctness) of hypotheses? Does p < 0.05 mean that there is a smaller than 5 % chance that an experimental hypothesis is wrong?
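As a quick sanity check of the numbers quoted above, a sigma threshold can be converted into a one-sided tail probability of the standard normal distribution. Here is a minimal sketch; the use of scipy is my choice for illustration, not something the post prescribes.

```python
# Minimal sketch: convert a sigma threshold into a one-sided tail probability
# of the standard normal distribution. scipy is my choice for illustration.
from scipy.stats import norm

for sigma in (2, 3, 5):
    p = norm.sf(sigma)  # probability of exceeding 'sigma' standard deviations
    print(f"{sigma} sigma -> p ≈ {p:.2e}")

# 5 sigma -> p ≈ 2.87e-07, i.e. roughly the 3×10⁻⁷ ('1 in 3.5 million') quoted
# above. This is a statement about the data under the null hypothesis, not the
# probability that the hypothesis itself is true or false.
```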

Continue reading

Can mice be trusted?

Katharina Frisch ‘Mouse and man’

I started this blog with a post on a PNAS paper which at that time had received a lot of attention in the scientific community and the lay press. In this article, Seok et al. argued that ‘genomic responses in mouse models poorly mimic human inflammatory diseases‘. With this post I am returning to that article, as I was recently asked by the journal STROKE to contribute to their ‘Controversies in Stroke’ series. The Seok article had disturbed the stroke community, so a pro/con discussion seemed timely. In the upcoming issue of STROKE, Sharp and Jickling will argue that ‘the peripheral inflammatory response in rodent ischemic stroke models is different than in human stroke. Given the important role of the immune system in stroke, this could be a major handicap in translating results in rodent stroke models to clinical trials in patients with stroke.‘ This is of course true! Nevertheless, I counter by providing some examples of translational successes regarding stroke and the immune system, and conclude that ‘the physiology and pathophysiology of rodents is sufficiently similar to humans to make them a highly relevant model organism but also sufficiently different to mandate an awareness of potential resulting pitfalls. In any case, before hastily discarding highly relevant past, present, and future findings, experimental stroke research needs to improve dramatically its internal and external validity to overcome its apparent translational failures.’ For an in-depth treatment, follow the debate:

Article: Dirnagl: Can mice be trusted

Article: Sharp Jickling: Differences between mice and humans

Quality assurance and management in experimental research

TÜV SÜD

Currently, a worldwide discussion among stakeholders of the biomedical research enterprise revolves around the recent realization that a significant proportion of the resources spent on medical research is wasted, as well as around potential actions to increase its value. The reproducibility of results in experimental biomedicine is generally low, and the vast majority of medical interventions introduced into clinical testing after successful preclinical development prove unsafe or ineffective. One prominent explanation for these problems is flawed preclinical research. There is consensus that the quality of biomedical research needs to be improved. ‘Quality’ is a broad and generic term, and it is clear that a plethora of factors together determine the robustness and predictiveness of basic and preclinical research results. Against this background, the experimental laboratories of the Center for Stroke Research Berlin (CSB, Dept. of Experimental Neurology) have decided to take a systematic approach and to implement a structured quality management system. In a process involving all members of the department, from student to technician, postdoc, and group leader, over more than one year we have established standard operating procedures, defined common goals and indicators, improved communication structures and document management, implemented error management, and are developing an electronic laboratory notebook, among other measures. On July 3rd 2014 this quality management system successfully passed an ISO 9001 certification process (Certificate 12 100 48301 TMS). The auditors were impressed by the quality-oriented ‘spirit’ of all members of the Department, and by the fact that, to their knowledge, the CSB is the first academic institution worldwide to have established a structured quality management system in experimental research of this standard and reach. The CSB is fully aware that the mere implementation of a certified quality management system does not guarantee translational success. However, we believe that innovation will only have an impact on improving patient outcomes if it thrives on the highest possible standards of quality. Certification of our standards renders them transparent and verifiable to the research community, and serves as a first step towards a preclinical medicine in which research conduct and results can be monitored and audited by peers.