Train PIs and Professors!

There is a lot of thinking going on today about how research can be made more efficient, more robust, and more reproducible. At the top of the list are measures for improving internal validity (for example randomization and blinding, or prespecified inclusion and exclusion criteria), measures for increasing sample sizes and thus statistical power, putting an end to the fetishization of the p-value, and open access to original data (open science). Funders and journals are raising the bar for applicants and authors by demanding measures to safeguard the validity of the research submitted to them.

Students and young researchers have taken note, too. I teach, among other things, statistics, good scientific practice and experimental design, and am impressed every time by the enthusiasm of the students and young postdocs, and by how they leap into the adventure of their scientific projects with an unbending will to “do it right”. They soak up suggestions for improving the reproducibility and robustness of their research projects like a dry sponge soaks up water. Often, however, the discussion ends unsatisfyingly, especially when we discuss the students’ own experiments and approaches to research work. I often hear: “That’s all well and good, but it won’t fly with my group leader.” Group leaders tell them: “That is the way we have always done it, and it got us published in Nature and Science”, “If we do it the way you suggest, it won’t get through the review process”, or “Then we could only get it published in PLOS One (or PeerJ, F1000Research, etc.), and the paper will contaminate your CV”.

I often wish that not only the students were sitting in the seminar room, but their supervisors alongside them! Continue reading

Can (Non)-Replication be a Sin?

I failed to reproduce the results of my experiments! Some of us are haunted by this nightmare. The scientific academies, the journals, and by now the funders themselves are all calling for reproducibility, replicability and robustness of research. A movement for “reproducible science” has developed, and funding programs for the replication of research papers are now in the works. In some branches of science, especially in psychology, but also in fields like cancer research, results are now being systematically replicated… or not, hence we are now in the throes of a “reproducibility crisis”.
Now Daniele Fanelli, a scientist who up to now could be expected to side with the supporters of the reproducible science movement, has raised a warning voice. In the prestigious Proceedings of the National Academy of Sciences he asked rhetorically: “Is science really facing a reproducibility crisis, and do we need it to?” So today, on the eve, perhaps, of a budding oppositional movement, I want to have a look at some of the objections to the “reproducible science” mantra. Is reproducibility of results really the foundation of the scientific method? Continue reading

When you come to a fork in the road: Take it

It is for good reason that researchers are the object of envy. When not stuck with bothersome tasks such as grant applications, reviews, or preparing lectures, they actually get paid to pursue their wildest ideas! To boldly go where no human has gone before! We poke about through the scientific literature and carry out pilot experiments that, surprisingly, almost always succeed. Then we do a series of carefully planned and costly experiments. Sometimes they turn out well, often not, but they do lead us into the unknown. This is how ideas become hypotheses; one hypothesis leads to those that follow, and voilà, lo and behold, we confirm them! In the end, sometimes only after several years and considerable wear and tear on personnel and material, we manage to weave a “story” out of them (see also). Through a complex chain of results the story closes with a “happy end”, perhaps in the form of a new biological mechanism, but at least as a little piece that fits the puzzle, and it is always presented to the world by means of a publication. Sometimes even in one of the top journals. Continue reading

Of Mice, Macaques and Men

Tuberculosis kills more than a million people worldwide per year. The situation is particularly problematic in southern Africa, eastern Europe and Central Asia. There is no truly effective vaccine against tuberculosis (TB). In countries with a high incidence, a live vaccine, the attenuated strain Bacillus Calmette-Guérin (BCG), is administered, but BCG gives very little protection against tuberculosis of the lungs, and in any case the protection it confers is highly variable and unpredictable. For years, a worldwide search has been going on for a better TB vaccine.

Recently, the British Medical Journal published an investigation raising serious charges against researchers and their universities: conflicts of interest, animal experiments of questionable quality, selective use of data, deception of grant-givers and ethics commissions, all the way up to endangerment of study participants. There was also a whistleblower… who had to pack his bags. It all happened in Oxford, at one of the most prestigious vaccine research institutes on earth, and the study on humans was carried out on infants from the most destitute layers of the population. Let’s have a closer look at this explosive mix, for we have much to learn from it about

  • the ethical dimension of preclinical research and the dire consequences that low quality in animal experiments and selective reporting can have;
  • the important role of systematic reviews of preclinical research, and finally also about
  • the selective (or non) availability and scrutiny of preclinical evidence when commissions and authorities decide on clinical studies.

Continue reading

Believe it or not!

Medicine is full of myths. Sometimes you even get the impression that it is actually based mostly on myths. Many are so plausible that you would have to be a fool not to believe in them. So today let us take a closer look at the placebo effect. In doing so we will run into a surprisingly little-known phenomenon: regression to the mean. This also has implications for experimenters.

Hardly anyone doubts the almost magical power of the placebo effect, so perhaps it will surprise you to hear that hard evidence for its existence is rather weak, and that there are some important arguments against its efficacy. Cochrane reviews, after all the gold standard for systematic reviews, did not find convincing evidence for its effectiveness. They suggest that placebos might have an effect on patient-reported outcomes, particularly pain and nausea. But the effects, should there be any at all, are not that impressive. For so-called “observer-reported outcomes”, i.e. whenever the study doctors did the measuring, no effect was found at all.

Since you probably consider the placebo effect to be one of the foundations of medicine, and me to be a fool, you might just shake your head and push this post aside. Or you can allow me to proffer a few arguments as to why the placebo effect is a clearly overrated phenomenon. You would then also learn something about regression to the mean. And this might even be relevant to your own research.
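Regression to the mean is easy to demonstrate with a small simulation (a minimal sketch in Python; the pain scores, cutoff, and noise levels are invented for illustration): enroll “patients” only if their baseline symptom score is high, remeasure them later, and watch the group mean drop although nobody received any treatment at all.

```python
import random

random.seed(42)

# Each simulated patient has a stable true pain level plus
# day-to-day measurement noise (all numbers are made up).
N = 100_000
TRUE_MEAN, TRUE_SD = 5.0, 1.0   # distribution of true pain levels
NOISE_SD = 1.5                  # measurement-to-measurement noise
CUTOFF = 7.0                    # inclusion criterion: "severe" pain

baseline_scores, followup_scores = [], []
for _ in range(N):
    true_pain = random.gauss(TRUE_MEAN, TRUE_SD)
    baseline = true_pain + random.gauss(0, NOISE_SD)
    if baseline > CUTOFF:       # enroll only patients measuring "severe"
        # remeasure later, with NO treatment whatsoever
        followup = true_pain + random.gauss(0, NOISE_SD)
        baseline_scores.append(baseline)
        followup_scores.append(followup)

mean_baseline = sum(baseline_scores) / len(baseline_scores)
mean_followup = sum(followup_scores) / len(followup_scores)
print(f"baseline mean:  {mean_baseline:.2f}")
print(f"follow-up mean: {mean_followup:.2f}  (untreated!)")
```

The follow-up mean comes out well over a point lower than the baseline mean, purely because patients selected for an extreme measurement were partly extreme by chance. In a placebo arm (or in any before–after comparison without controls), this spontaneous “improvement” is easily mistaken for an effect.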

Continue reading

And the Moral of the Story is: Don’t believe your p-values!

In my previous post I had a look at the culture of science in physics and found much that we life scientists might want to copy. Physics itself, and especially particle physics, presents a goldmine of lessons to be learned, two of which I would like to discuss with you today.

Some of you will remember: in 2011 the results of a large international experiment convulsed not only the field of physics; it shook the whole world. On September 22nd the New York Times ran it on the front page: “Einstein Roll Over? Tiny neutrinos may have broken cosmic speed limit”! What had happened? Continue reading

Don’t ask what your Experiment can do for You: Ask what You can do for your Experiment!

I was planning to highlight physics as a veritable model, a champion of publication culture and team science from which we in the life sciences could learn so much. And then this: the Nobel Prize in physics went to Rainer Weiss, Barry Barish and Kip Thorne for the “experimental evidence” of the gravitational waves predicted in 1916 by Albert Einstein. Published in a paper with more than 3,000 authors!

Once again the Nobel Prize is being criticized: that it is always awarded to “old white men” at American universities, or that good old Mr. Nobel actually stipulated that only one person per field be awarded, and only for a discovery made in the preceding year. Oh, well… I find it more distressing that the Nobel Prize is once again perpetuating an absolutely antiquated image of science: the lone research geniuses, of whom there are so few, or more precisely, a maximum of three per field (medicine, chemistry, physics), have “achieved the greatest benefit to humankind”. Awarded with a spectacle that would do honor to a Eurovision Song Contest or the Oscars. It doesn’t surprise me that this is received enthusiastically by the public. This cartoon-like image of science has been around since Albert Einstein at the latest. And from Newton up to World War II, before the industrialization and professionalization of research, this image of science was justified. What disturbs me is that the scientific community partakes so fulsomely in this anachronism. You will ask why the Fool is getting so worked up again; it’s harmless at worst, you say, and the winners are almost always worthy of the prize? And surely a bit of PR can do no harm in these post-factual times, where the opponents of vaccination and the climate-change deniers are on the rise? Continue reading