Believe it or not!
Medicine is full of myths. Sometimes you even get the impression that it is actually based mostly on myths. Many are so plausible that you would have to be a fool not to believe in them. So today let us take a closer look at the placebo effect. In doing so we will run into a surprisingly little-known phenomenon: regression to the mean. This also has implications for experimenters.
Hardly anyone doubts the almost magical effects of the placebo effect, so perhaps it will surprise you to hear that hard evidence for its existence is rather weak, and that there are some important arguments against its efficacy. Cochrane reviews, after all the gold standard for systematic reviews, did not find convincing evidence for its effectiveness. They show that placebos might be effective for patient-reported outcomes, particularly pain and nausea. But the effects, if there are any at all, are not that impressive. And for so-called "observer-reported outcomes", i.e. whenever the study doctors did the measuring, no effectiveness was found at all.
Since you probably consider the placebo effect to be one of the foundations of medicine, and me to be a fool, you might just shake your head and push this post aside. Or you could allow me to proffer a few arguments as to why the placebo effect is a clearly overrated phenomenon. You would then also learn something about regression to the mean. And this might even be relevant to your own research.
A measurement that returns a value deviating from the mean will, when repeated, tend to be followed by a measurement closer to the average. Trivial, is it not? Expressed more simply: the further a measured value deviates from the mean, the less likely it is. The natural scientist and scientific jack-of-all-trades Francis Galton (1822-1911) was the first to recognize this, and gave the phenomenon its name: 'regression to the mean'. In 1886 he used population registries to compare the height of parents to that of their fully grown children (i.e. in adulthood) and found that the children, on average, were closer to the average height than their parents were. It is only superficially paradoxical that a tall child usually has parents who are shorter than he or she is (more on this, and on the subject in general, in an excellent article by Stephen Senn). But what does that have to do with the placebo effect?
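If you prefer to see this with your own eyes, here is a minimal simulation sketch (not from Galton, and all numbers are invented for illustration): each subject has a stable true value, every measurement adds independent noise, and we look at subjects whose first measurement was extreme.

```python
import random

random.seed(1)

# Each subject has a stable "true" value; every measurement adds
# independent noise (all parameters are assumptions for illustration).
n = 100_000
true_vals = [random.gauss(100, 10) for _ in range(n)]
first = [t + random.gauss(0, 10) for t in true_vals]
second = [t + random.gauss(0, 10) for t in true_vals]

# Select only subjects whose FIRST measurement was extreme (>= 120) ...
extreme = [(f, s) for f, s in zip(first, second) if f >= 120]
mean_first = sum(f for f, _ in extreme) / len(extreme)
mean_second = sum(s for _, s in extreme) / len(extreme)

# ... and look at their second measurement: it sits closer to the
# population mean of 100, although nothing about the subjects changed.
print(f"first: {mean_first:.1f}, second: {mean_second:.1f}")
```

Nothing "happened" to these subjects between the two measurements; the selection on an extreme first value alone guarantees that the second value regresses toward the mean.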
A person becomes a patient when she has symptoms of disease. She goes to a doctor when she no longer can or wants to bear those symptoms. The doctor does something, and after a while she feels better, seemingly thanks to the arts of medicine. Or she does not go to the doctor but "just knows", or knows from a magazine, which medicine is best for her (e.g. aroma therapy or ibuprofen). After taking it, things feel better within a few days, and after a few weeks she is back to normal. Voltaire (1694-1778) put it this way: "The art of medicine consists of amusing the patient while nature cures the disease". And here, snuggled in close to Voltaire's explanation of the apparent effectiveness of homeopathy, we find the crux of the matter with the placebo effect. We describe it as an improvement of symptoms brought about by a sugar pill or a sham procedure. In good studies, this procedure is randomized, controlled, blinded and compared with the real medication or principle (the 'verum'). But wouldn't you know it: almost none of the randomized controlled studies had a real control group! By that I mean a control group that received absolutely no treatment! Only in comparison to such a group can we speak of a placebo effect at all. By not treating some study participants, a research group could clarify how the disease develops naturally, i.e. without treatment, and whether the course in the verum and placebo groups deviates from it. But receiving no treatment when you visit a doctor is not a very popular design!
Fortunately, such studies have been carried out. From these we know that the natural course of most diseases fluctuates, and that symptoms are usually treated when they are at their maximum, which is exactly the point at which they start to improve naturally. Placebo treatment is barely more effective (pain, nausea, mood) than the natural course of the disease. Expressed somewhat more generally: a comparison within a group can show whether a patient is feeling better or worse, but not whether and to what extent this can be ascribed to the treatment.
The plot thickens. Regression to the mean hides in almost all clinical studies and leads to overestimation of the treatment effect, regardless of whether verum or placebo is given. Let us take as an example a study testing a blood pressure-lowering medication. Patients are included in the study if their blood pressure exceeds a certain value. Purely because of statistical fluctuation, some subjects will always be included who happened to have an elevated blood pressure when it was measured but who are not hypertensive. When measured again, their blood pressure is normal: regression to the mean! These subjects, however, were enrolled as study participants, and their pre-treatment blood pressure values enter the calculation of the mean for the entire group. Then treatment starts, the measurement is repeated, and the mean of the entire group is now lower than before drug treatment. The effect of treatment is thus overestimated, since the "patients" who are not hypertensive are measured again too, and their values have now regressed toward the mean. A detailed example with figures can be found in Senn's article. None of this would be a problem if there were a genuine control group, i.e. of untreated subjects! Unfortunately this comparison is available in only very few studies. In the few studies that did include an untreated group, no placebo effect was found, or only a very weak effect on subjective symptoms such as pain. Not on blood pressure!
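This enrollment artifact is easy to reproduce in a few lines. The following sketch (all pressures and cut-offs are assumed values, not from any real trial) enrols subjects whose single screening measurement exceeds 140 mmHg and then gives them a completely inert "treatment":

```python
import random

random.seed(2)

# True systolic pressures in a population; each measurement adds
# independent noise (assumed values, for illustration only).
n = 200_000
true_bp = [random.gauss(130, 15) for _ in range(n)]
screen = [t + random.gauss(0, 10) for t in true_bp]

# Enrol only subjects whose single screening measurement exceeded 140 mmHg.
cohort = [(s, t) for s, t in zip(screen, true_bp) if s > 140]

# Follow-up after an inert "treatment": just a fresh measurement of the
# same true value -- the drug does nothing at all in this simulation.
baseline = sum(s for s, _ in cohort) / len(cohort)
followup = sum(t + random.gauss(0, 10) for _, t in cohort) / len(cohort)

print(f"baseline: {baseline:.1f} mmHg, follow-up: {followup:.1f} mmHg")
```

The group mean drops between baseline and follow-up although the treatment has zero effect, which is exactly what an untreated control group would reveal and a placebo-only comparison cannot.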
Perhaps you work with cell cultures or with rats, and will therefore say: interesting, but happily I always have a group without treatment, so this is not relevant to me. To which I reply: careful! Regression to the mean applies not only to individual values, but also to the results of entire studies, especially those based on low sample sizes, which therefore have high variance as well as low statistical power, and which use significance levels of 5 %. In other words, most studies.
Just imagine you are doing an experiment. With n=8, as always. You find an effect; it is statistically significant, let's say p<0.03. You're happy. You do a few more experiments for the study and then write up the paper describing the effect. Congratulations! But what if the 'significant' effect was a false positive? And what if a replication of the experiment had shifted the mean back toward a null effect, that is, if it had regressed to the mean? When we fetishize positive and, in particular, spectacular findings (i.e. a priori improbable ones), we run a high risk of producing false positives which then enter the publication record. The problem would be easy to fix, but the solution is not very popular: larger sample sizes, sufficient power, stringent significance levels, replications, and publication of negative and neutral results. Bye bye, Nature paper! (See LJ 4/2017)
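How bad is it really with n=8? A quick sketch, using a standard two-sample t test on simulated groups where the true effect is exactly zero (the group sizes, number of runs and critical value are my assumptions, not from the column):

```python
import random
import statistics

random.seed(3)

def t_stat(a, b):
    # Two-sample t statistic with pooled variance, equal group sizes.
    n = len(a)
    sp2 = (statistics.variance(a) + statistics.variance(b)) / 2
    return (statistics.mean(a) - statistics.mean(b)) / (2 * sp2 / n) ** 0.5

# Many n=8-per-group experiments in which the TRUE effect is zero.
n, runs, t_crit = 8, 5000, 2.145   # t_crit: two-sided 5 %, df = 14
sig_effects = []
for _ in range(runs):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if abs(t_stat(a, b)) > t_crit:
        sig_effects.append(abs(statistics.mean(a) - statistics.mean(b)))

false_pos_rate = len(sig_effects) / runs
print(f"false positives: {false_pos_rate:.1%}")
print(f"mean 'significant' effect: {statistics.mean(sig_effects):.2f} SD")
```

As expected, about 5 % of the experiments come out "significant" despite a true effect of zero, and the effect sizes in exactly those experiments are enormous (around one standard deviation), because with n=8 only an extreme chance deviation can cross the significance threshold. Publish those, and a replication has nowhere to go but back toward zero.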
If you feel sufficiently upset (or stimulated) by the above, please have a look at the following articles:
As always, David Colquhoun brilliantly summarizes the issue in his blog DC’s Improbable Science:
Colquhoun D. Placebo effects are weak: regression to the mean is the main reason ineffective treatments appear to work. http://www.dcscience.net/2015/12/11/placebo-effects-are-weak-regression-to-the-mean-is-the-main-reason-ineffective-treatments-appear-to-work/
Highly recommended. Stephen Senn tackles the issue with an historical excursion and somewhat more serious statistics. Contains references to full statistical treatments of regression to the mean:
Senn S (2011) Francis Galton and regression to the mean. Significance 8:124-126 http://onlinelibrary.wiley.com/doi/10.1111/j.1740-9713.2011.00509.x/full
When the first large Cochrane review was published, concluding that there is little evidence to support the existence of the placebo effect, the New England Journal ran this very readable editorial:
Bailar JC (2001) The Powerful Placebo and the Wizard of Oz. N Engl J Med 344:1630-1632 http://www.nejm.org/doi/full/10.1056/NEJM200105243442111
Most recent Cochrane review on placebo interventions:
Hróbjartsson A, Gøtzsche PC (2010) Placebo interventions for all clinical conditions. Cochrane Database Syst Rev (1):CD003974. doi: 10.1002/14651858.CD003974.pub3 http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD003974.pub3/abstract
This narrative review presents the classic view on placebo effects. The authors use only one citation (to another narrative review by the same authors) to claim the existence of the placebo effect:
“Moreover, recent clinical research into placebo effects has provided compelling evidence that these effects are genuine biopsychosocial phenomena that represent more than simply spontaneous remission, normal symptom fluctuations, and regression to the mean.”
It would have been nice to see reference to evidence from original studies or systematic reviews.
Kaptchuk TJ, Miller FG (2015) Placebo Effects in Medicine. N Engl J Med 373:8-9 http://www.nejm.org/doi/full/10.1056/NEJMp1504023
A German version of this post has been published as part of my monthly column in the Laborjournal: http://www.laborjournal-archiv.de/epaper/LJ_18_01/28/