On March 17th, just as many countries were taking draconian measures to contain the SARS-CoV-2 pandemic, the Greek-American meta-researcher and epidemiologist John Ioannidis, whom I often quote in my posts, proclaimed “a fiasco in the making”! With strong language and a few ad hoc estimates of COVID fatality rates, he warned that, based on poor data or no evidence at all, politicians might inflict incalculable damage on society, possibly much worse than anything a virus putatively as dangerous as influenza could cause. As one of the most highly cited researchers in the world and a vocal critic of quality problems in biomedicine, Ioannidis has since received a great deal of attention for his COVID-related interviews, opinion pieces and articles, in the scientific community, in the lay press, and especially among his worldwide fan base. Continue reading
Meat consumption is bad for your health. It gives you cancer, heart attacks, stroke, you name it. Says nutrition science. And they must know. After all, it’s a science. Is it, really?
A few years ago, Jonathan Schoenfeld and John Ioannidis took a standard cookbook and randomly selected 50 frequently occurring ingredients (sugar, coffee, salt, etc.). They then carried out a systematic literature search, asking whether there were epidemiological studies that had investigated the cancer risk of these ingredients. And they found what they were looking for. For 80% of the ingredients at least one study existed, for many even several. Of the 264 studies found, 103 concluded that the ingredient investigated increased the risk of cancer, while 88 concluded that it reduced the risk! So after all, Joe Jackson was right: ‘Everything gives you cancer’! But wait a minute: Milk? Veal? Orange juice? Continue reading
You’ve got to see this YouTube video! Hectically cut sequences of busy young scientists in lab coats in high-tech laboratories; nerdy-looking guys soldering electronic circuits and staring into oscilloscopes; a roller-coaster ride through an animated brain chock-full of tangled nerve cells. And in between all this, on stage at the California Academy of Sciences, car and rocket manufacturer Elon Musk announces his latest vision in a messianic pose: the symbiosis of the human brain with artificial intelligence (AI)! This time his plan to save mankind does not involve mass evacuation to Mars; it will be realized by a revolutionary Brain Machine Interface (BMI), designed and manufactured by his company Neuralink. You may have guessed it: this has caused a tremendous media hype all over the world. The verdict in the press and on the net was: “Musk at his best, a bit over the edge, but if HE announces a breakthrough like that, there must be something to it”. The more cautious asked: “But couldn’t this be dangerous for mankind? Do we need a new ethic for stuff like this?” Continue reading
U.S. economist Robin Hanson posed this question in the title of an article published in 1995, in which he suggested replacing the classic review process with a market-based alternative. Instead of peer review, bets would decide which projects are supported or which scientific questions prioritized. In these so-called “prediction markets”, individuals stake “bets” on a particular result or outcome. The more people trade on the marketplace, the more precise the prediction becomes, based as it is on the aggregated information of the participants. The prediction market thus taps the wisdom of the crowd. We know this from sports betting and election forecasts. But in science? It sounds totally crazy, but it isn’t: prediction markets are just now making their entry into various branches of science. How do they function, and what do they have going for them? Continue reading
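The aggregation mechanism behind such markets can be made concrete with a toy example. Hanson’s own later proposal for subsidized prediction markets uses a logarithmic market scoring rule (LMSR), in which the market maker’s cost function turns accumulated bets into a probability. The sketch below is a minimal, illustrative implementation of that idea, not of any particular science prediction market; the class name, the liquidity value, and the yes/no framing are my own assumptions.

```python
import math

class LMSRMarket:
    """Minimal Hanson-style logarithmic market scoring rule (LMSR)
    for a binary scientific claim, e.g. 'this study will replicate'.
    Purely illustrative; parameter choices are arbitrary."""

    def __init__(self, b=100.0):
        self.b = b            # liquidity: larger b = prices move more slowly
        self.q = [0.0, 0.0]   # shares sold so far for [yes, no]

    def _cost(self, q):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        # Instantaneous price = the market's aggregate probability estimate
        e = [math.exp(x / self.b) for x in self.q]
        return e[outcome] / sum(e)

    def buy(self, outcome, shares):
        # A trader pays the cost difference; each share pays out 1
        # if the outcome occurs. Buying moves the price (the consensus).
        new_q = list(self.q)
        new_q[outcome] += shares
        fee = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return fee

m = LMSRMarket()
print(m.price(0))   # starts at 0.5: no information yet
m.buy(0, 50)        # an optimistic trader bets on 'yes'
print(m.price(0))   # price (= crowd probability) has risen above 0.5
```

The key property is that each trade both rewards information (profitable if you beat the consensus) and updates the public probability, which is exactly the aggregation Hanson had in mind.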
With a half-page article written about him and his study, an Israeli radiologist unknown until then made it into the New York Times (NYT 2009). Dr. Yehonatan Turner presented computed tomography (CT) scans to radiologists and asked them to make a diagnosis. The catch: along with the CT, a current portrait photograph of the patient was presented to the physicians. Remember, radiologists very often do not see their patients; they make their diagnosis in a dark room staring at a screen. Dr. Turner used a smart cross-over design in his study: he first showed the CT together with a portrait photograph of the patient to one group of radiologists. Three months later the same group had to make a diagnosis using the same CT, but without the photo. Another group of radiologists was first given only the CT and then, three months later, the CT with photo. A further control group examined only the CTs, as in routine practice. The hypothesis: when a radiologist is exposed to the individual patient, and not only to an anatomical finding on a scan, she will be more conscious of her own responsibility; hence findings will be more thorough and diagnosis more accurate. And indeed, this is what he found. The radiologists reported that they had more empathy with the patient, and that they “felt like doctors”. And they spotted more irregularities and pathological findings when they had the CT and photo in front of them than when they were only looking at the CT (Turner and Hadas-Halpern 2008).
So how about showing researchers in basic and preclinical biomedicine photos of patients with the disease they are currently investigating in a model of the disease? Continue reading
It struck at the end of July. A ‘scandal’ in science shook the Republic. Research by the NDR (Norddeutscher Rundfunk), the WDR (Westdeutscher Rundfunk) and the Süddeutsche Zeitung revealed that German scientists are involved in a “worldwide scandal”. More than 5,000 scientists at German universities, institutes and federal authorities had, with public funds, published their work with online pseudoscientific publishers that do not comply with the basic rules for assuring scientific quality. The public, and not just a few scientists, heard for the first time about “predatory publishers” and “predatory journals”.
Predatory publishers, whose phishing e-mails look quite professional, offer scientists Open Access (OA) publication of their scientific studies for a fee, implying that the papers will be peer reviewed. In fact, no peer review is carried out, and the articles are simply posted on the websites of these “publishers”, which, however, are not indexed in the usual databases such as PubMed. Every scientist in Germany finds several such invitations per day in his or her inbox. If you are a scientist and receive none, you should be worried. Continue reading
- Let’s get this out of the way: Reproducibility is a cornerstone of science: Bacon, Boyle, Popper, Rheinberger
- A ‘lexicon’ of reproducibility: Goodman et al.
- What do we mean by ‘reproducible’? Open Science collaboration, Psychology replication
- Reproducible vs. non-reproducible – a false dichotomy: Sizeless science, almost as bad as ‘significant vs. non-significant’
- The emptiness of failed replication? How informative is non-replication?
- Hidden moderators – Contextual sensitivity – Tacit knowledge
- “Standardization fallacy”: Low external validity, poor reproducibility
- The stigma of non-replication (‘incompetence’) – the stigma of the replicator (‘boring science’)
- How likely is strict replication?
- Non-reproducibility must occur at the scientific frontier: Low base rate (prior probability), low-hanging fruit already picked: Many false positives – non-reproducibility
- Confirmation – weeding out the false positives of exploration
- Reward the replicators and the replicated – fund replications. Do not stigmatize non-replication, or the replicators.
- Resolving the tension: The Siamese Twins of discovery & replication
- Conclusion: No scientific progress without non-reproducibility: Essential vs. detrimental non-reproducibility
- Further reading
Tuberculosis kills far more than a million people worldwide per year. The situation is particularly problematic in southern Africa, eastern Europe and Central Asia. There is no truly effective vaccine against tuberculosis (TB). In countries with a high incidence, a live vaccination is carried out with the attenuated vaccine strain Bacillus Calmette-Guérin (BCG), but BCG gives very little protection against tuberculosis of the lungs, and in any case the protection it confers is highly variable and unpredictable. For years, a worldwide search has been going on for a better TB vaccine.
Recently, the British Medical Journal published an investigation raising serious charges against researchers and their universities: conflicts of interest, animal experiments of questionable quality, selective use of data, deception of grant-givers and ethics commissions, all the way up to the endangerment of study participants. There was also a whistleblower… who had to pack his bags. It all happened in Oxford, at one of the most prestigious virological institutes on earth, and the study on humans was carried out on infants of the most destitute layers of the population. Let’s take a closer look at this explosive mix, for we have much to learn from it about
- the ethical dimension of preclinical research and the dire consequences that low quality in animal experiments and selective reporting can have;
- the important role of systematic reviews of preclinical research, and finally also about
- the selective (or non) availability and scrutiny of preclinical evidence when commissions and authorities decide on clinical studies.
Recently, NIH scientists B. Ian Hutchins and colleagues (pre)published “The Relative Citation Ratio (RCR): A new metric that uses citation rates to measure influence at the article level”. [Note added 9.9.2016: a peer-reviewed version of the article has now appeared in PLOS Biology.] Like Stefano Bertuzzi, the Executive Director of the American Society for Cell Biology, I am enthusiastic about the RCR. It appears to be a viable alternative to the widely (ab)used Journal Impact Factor (JIF).
The RCR has recently been discussed in several blogs and editorials (e.g. NIH metric that assesses article impact stirs debate; NIH’s new citation metric: A step forward in quantifying scientific impact?). At a recent workshop organized by the National Library of Medicine (NLM) I learned that the NIH is planning to use the RCR widely in its own grant assessments as an antidote to the JIF, raw article citations, h-indices, and other highly problematic or outright flawed metrics. Continue reading
Using meta-analysis and computer simulation, we studied the effects of attrition in experimental research on cancer and stroke. The results were published this week in the new meta-research section of PLOS Biology. Not surprisingly, given the small sample sizes of preclinical experimentation, the loss of animals in experiments can dramatically alter results. However, the effects of attrition on the distortion of results were unknown. We used a simulation study to analyze the effects of random and biased attrition. As expected, random loss of samples decreased statistical power, but biased removal, including the removal of outliers, dramatically increased the probability of false-positive results. Next, we performed a meta-analysis of the reporting of animal numbers and attrition in stroke and cancer studies. Most papers did not adequately report attrition, and extrapolating from the simulation results, we suggest that their effect sizes were likely overestimated. Continue reading
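The core of the simulation argument is easy to reproduce in a few lines. The sketch below is a minimal toy version, not our published simulation code: two groups are drawn from the same distribution (so there is no true effect), and attrition removes animals either at random or in a biased way (dropping the values least favorable to the hypothesized effect). Group sizes, the number of animals dropped, and the approximate critical value are illustrative assumptions.

```python
import random
import statistics

def t_stat(a, b):
    # Welch's t statistic for two independent samples
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / (va / len(a) + vb / len(b)) ** 0.5

def simulate(n_sims=2000, n=10, drop=2, biased=False, seed=42):
    """Fraction of nominally 'significant' experiments when there is
    NO true effect. Attrition removes `drop` animals per group, either
    at random or biased toward inflating the apparent effect."""
    rng = random.Random(seed)
    crit = 2.12  # rough two-sided 5% critical value for ~16 df
    hits = 0
    for _ in range(n_sims):
        treat = [rng.gauss(0, 1) for _ in range(n)]
        ctrl = [rng.gauss(0, 1) for _ in range(n)]
        if biased:
            # Drop the lowest treatment values and the highest control
            # values, as if excluding inconvenient 'outliers'
            treat = sorted(treat)[drop:]
            ctrl = sorted(ctrl)[:n - drop]
        else:
            # Random attrition: animals lost for reasons unrelated to outcome
            treat = rng.sample(treat, n - drop)
            ctrl = rng.sample(ctrl, n - drop)
        if abs(t_stat(treat, ctrl)) > crit:
            hits += 1
    return hits / n_sims

print(simulate(biased=False))  # stays near the nominal 5% error rate
print(simulate(biased=True))   # false positives inflated several-fold
```

Even this crude version shows the asymmetry reported above: random loss merely costs power, while outcome-dependent exclusion of a couple of animals per group turns a null experiment into a "positive" one alarmingly often.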