Anyone who follows this blog must get the impression that I am a nag and a misanthrope. Nothing and nobody seems to please me. I always find the sample sizes too small, the statistics too lazy, the data hand-picked, or the results too positive and the conclusions drawn from them too exaggerated. I also find the peer review system unreliable, not to mention the funding agencies' support of mainstream research and their habit of giving to those who already have (the 'Matthew effect'). I even dared to criticize the Nobel Prize as an atavistic instrument celebrating the lonely, white male researcher and genius. Artificial intelligence I find stupid, and the academic career system the root of all these evils. To name just a few examples.
But that is far from the truth! I am a science enthusiast! I am convinced that science is the best thing that the 1500 grams of protein and fat encapsulated in our skulls have ever produced. Yes, I am a science nut. So with this entry, at the beginning of the new decade, let's start with a proper hymn to biomedical science. Continue reading
Meat consumption is bad for your health. It gives you cancer, heart attacks, stroke, you name it. Says nutrition science. And they must know. After all, it’s a science. Is it, really?
A few years ago, Jonathan Schoenfeld and John Ioannidis took a standard cookbook and randomly selected 50 frequently occurring ingredients (sugar, coffee, salt, etc.). They then carried out a systematic literature search, asking whether there were epidemiological studies that had investigated the cancer risk of these ingredients. And they found what they were looking for: for 80 % of the ingredients at least one study existed, for many even several. Of the 264 studies, 103 found that the ingredient investigated increased the risk of cancer, while 88 found that it reduced the risk! So Joe Jackson was right after all: 'Everything gives you cancer'! But wait a minute: Milk? Veal? Orange juice? Continue reading
The societal acceptance of the results of our daily work as scientists is dire. The majority of the US population does not explain evolution with Darwin, but with Holy Scripture. Measles is on the rise again worldwide, because vaccination opponents smell a conspiracy by the pharmaceutical industry to make children autistic. A substantial proportion of the population does not believe that climate change is man-made. They believe that if you fear climate change you are hysterical, and manipulated by interested scientists competing for funding and fame. Homeopaths treat disease with sugar pills, while the health insurers, with our money, foot the bill.
A popular recipe against this increasing rejection of relevant scientific findings is to provide more and better science education in schools and the media. Inspired by a lecture by the American sociologist and historian of science Steven Shapin, I respectfully disagree.
You've got to see this YouTube video! Hectically cut sequences of busy young scientists in lab coats in high-tech laboratories; nerdy-looking guys soldering electronic circuits and staring into oscilloscopes; a roller-coaster ride through an animated brain chock-full of tangled nerve cells. And in between all this, on stage at the California Academy of Sciences, car and rocket manufacturer Elon Musk announces his latest vision in a messianic pose: the symbiosis of the human brain with artificial intelligence (AI)! This time his plan to save mankind does not involve mass evacuation to Mars, but will be realized by a revolutionary Brain Machine Interface (BMI), designed and manufactured by his company Neuralink. As you may have guessed, this caused a tremendous media hype all over the world. The verdict in the press and on the net was: "Musk at his best, a bit over the edge, but if HE announces a breakthrough like that, there must be something to it." The more cautious asked: "But couldn't this be dangerous for mankind? Do we need a new ethic for stuff like this?" Continue reading
Damn! What an effort: generation of a knockout mouse line, backcrossing into the background strain and littermates, all the genotyping. Followed by a plethora of experiments in a disease model: surgery, magnetic resonance imaging, histology, behavioral studies, and so on. Finally the result: no phenotype! The knockout mouse appears to be a mouse like any other, not different from the wild type background strain. But wait, we rather need to phrase it like this: we did not find a statistically significant difference between knockout and wild type. So we cannot even conclude that wild type mice are like knockout mice, but rather: if there is a difference, it might be smaller than the detectable effect size, which depends on sample size, error levels (alpha and beta) and the variance of our results. But we had planned our experiments well: the sample size was determined a priori, and chosen so that we would have been able to detect a difference on the order of one standard deviation. This is what statisticians call a Cohen's d of 1, which is considered a substantial effect. We could not have used more animals than the 34 (!) we had, because of limited resources, the duration of the PhD thesis, and the timing of the grant. But what now? Write a paper reporting a NULL result? What would that look like in a resume? Besides, who cares about NULL results, and which reputable journal would publish them at all? Continue reading
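The a-priori calculation behind such a design fits in a few lines. A minimal sketch using the normal approximation to the two-sided, two-sample t-test (the standard textbook formula; the exact t-based calculation gives slightly larger numbers):

```python
import math
from statistics import NormalDist  # Python 3.8+ standard library

def n_per_group(d, alpha=0.05, power=0.8):
    """Approximate sample size per group to detect a standardized
    effect of Cohen's d in a two-sided two-sample comparison,
    via the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A 'substantial' effect of one standard deviation (Cohen's d = 1):
print(n_per_group(1.0))   # 16 per group, i.e. ~32 animals in total
# Halving the detectable effect size roughly quadruples the required n:
print(n_per_group(0.5))   # 63 per group
```

Note how quickly the animal numbers explode for smaller, but still biologically plausible, effects; this is why the study above was stuck at d = 1.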
A study in this week's Nature (Vrselja et al.) has created an immediate media frenzy. Nature puts it like this: 'Pig brains kept alive outside body for hours after death' and 'Revival of disembodied organs raises slew of ethical and legal questions about the nature of death and consciousness.' The New York Times: 'In a study that raises profound questions about the line between life and death, researchers have restored some cellular activity to brains removed from slaughtered pigs.' STAT: 'The pigs were dead. But four hours later, scientists restored cellular functions in their brains', etc.
That sounds spectacular. But if one reads the study (and the commentaries), it is easy to spot two main deficiencies: 1) the study lacks novelty, and 2) the assertion that it presents a relevant step towards restoring brain function after a prolonged interruption of cerebral blood flow is not only exaggerated, but simply wrong. Continue reading
'Unfortunately, we have to inform you that after thorough review [YOUR FAVORITE FUNDING ORGANISATION] must reject your application.' Most of us know this sentence all too well, as most rejection letters for our grant applications contain it in some form. From a purely statistical point of view, we receive such letters quite frequently: in German biomedicine, funding rates are between 5 and 25 %, depending on funder and program. Upon receiving a rejection we often feel personally offended. After all, we put down our best ideas, often included preliminary results and proposed experiments we had already conducted, beautified the document with a lot of prose, and flattered the most important potential reviewers with strategically placed citations, etc. And then the rejection! So we have to start over from the beginning, rewrite everything, and submit it again, perhaps to another funding agency. This is how we spend a substantial fraction of our days at the office, when we are not reviewing the applications of our colleagues. On average, scientists spend 40 % of their time writing or reviewing applications. Continue reading
Triangulation! The Egyptians used it to build their pyramids. The Greeks developed a branch of mathematics out of it. Until the 19th century, whole countries were charted this way. Far into the 20th century, ships determined their position with it. To determine your position by triangulation, you only need a set square and a protractor (surveyors combine them in a theodolite), as well as the coordinates of two visible landmarks. It's that simple!
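The ship's version of this, a position fix by cross bearings, really is that simple: the ship lies on the back-bearing line through each landmark, and the fix is where the two lines intersect. A minimal sketch (the landmarks and numbers are made up for illustration):

```python
import math

def triangulate(l1, b1, l2, b2):
    """Position fix from two landmarks l1, l2 = (x, y) and the compass
    bearings b1, b2 (degrees, clockwise from north) measured from the
    unknown position towards each landmark."""
    # The observer lies on the back-bearing ray from each landmark.
    d1 = (-math.sin(math.radians(b1)), -math.cos(math.radians(b1)))
    d2 = (-math.sin(math.radians(b2)), -math.cos(math.radians(b2)))
    # Solve l1 + t1*d1 = l2 + t2*d2 for t1 (Cramer's rule on a 2x2 system).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel, no unique fix")
    rx, ry = l2[0] - l1[0], l2[1] - l1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (l1[0] + t1 * d1[0], l1[1] + t1 * d1[1])

# An observer at the origin sees a tower due north (bearing 0) at (0, 5)
# and a church due east (bearing 90) at (4, 0):
print(triangulate((0, 5), 0, (4, 0), 90))  # numerically (0, 0)
```

With two bearings the fix is a single point; navigators prefer three, so that the size of the resulting error triangle reveals how good the fix is.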
Could it be that triangulation is also an important methodological approach in biology? A cure, even, for the replication crisis? Munafo and Smith recently postulated this in a commentary in Nature. Sociologists speak of triangulation when they use two or more different methods to investigate one particular research question. If the results converge, i.e. lead to the same answer, this increases validity and credibility. Don't we do this routinely in the experimental life sciences? Does the knock-out mouse have the same phenotype as one in which the signalling pathway was pharmacologically blocked? Do transcript and protein expression correlate with the phenotype?
Thus, basic biomedical research is familiar with 'targeting' a goal with different methods grounded in already established knowledge (the landmarks of the surveyor!). Are the results converging? Bingo, we have located the biological mechanism! It therefore leaves many of us cold if spoilsports with grad-school statistics argue that most studies in biomedicine must be false positives despite significant p-values. Because we don't just rely on ONE result. Instead we triangulate by means of different approaches! For validating results, this might even be superior to replication: if something is simply repeated, it is not unlikely that a systematic error will be repeated too. That would make the result reproducible, but still not correct.
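For what it's worth, the spoilsports' argument fits in a few lines. With illustrative (not measured) assumptions about how many of the hypotheses we test are actually true, Bayes' rule gives the probability that a significant result reflects a real effect:

```python
def ppv(prior, alpha=0.05, power=0.8):
    """Positive predictive value of a 'significant' result:
    P(hypothesis is true | p < alpha), by Bayes' rule."""
    true_pos = power * prior          # true hypotheses that reach significance
    false_pos = alpha * (1 - prior)   # false hypotheses that do so by chance
    return true_pos / (true_pos + false_pos)

# If only 1 in 10 tested hypotheses is true, a significant finding
# is real only about two times out of three:
print(round(ppv(0.10), 2))  # 0.64
```

The lower the prior plausibility of the hypotheses being tested (and the lower the power of the studies), the worse this gets, which is exactly why converging, independent lines of evidence matter.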
Were the skeptics wrong when calling out a crisis in biomedical research? Are we already doing the right thing? Continue reading
An article entitled "Growth in a Time of Debt" was published in 2010 by the highly respected Harvard economists Carmen Reinhart and Kenneth Rogoff. It dealt with the relationship between national economic growth and national debt. They reported the discovery of an astonishing, globally observable correlation: as national debt rises, the economic growth of a nation initially also rises. If, however, the national debt exceeds 90 % of gross domestic product, this relationship reverses quite abruptly: growth turns into contraction, and economic output declines as debt rises further. The discovery of a "90 % debt threshold" hit like a bomb. Some suspect that the article was the basis for the European austerity policy after the 2008 financial crisis. What is certain, however, is that the paper was enthusiastically used by Western politicians to justify restrictive fiscal policies. In 2013, Thomas Herndon, a graduate student, reanalyzed the data of the Reinhart-Rogoff paper as part of a semester assignment. After some back and forth, the authors gave him their original Excel spreadsheet. And lo and behold, within minutes he found a number of serious errors in it! After correction, the debt threshold disappeared, and the data now appeared to show the opposite: a steady, positive relationship between government debt and growth across the entire range! What do we learn from this? Apart from the fact that the fundamental error of Reinhart and Rogoff is, of course, the confusion of correlation with causation: Excel is not suitable for the analysis of complex scientific data. Even more importantly, scientists make mistakes, and those mistakes can have serious consequences. Continue reading