Feb 22 2016
The Evidence Says – Homeopathy Does Not Work
In a recent blog post for the BMJ, Paul Glasziou wrote about the Australian review of homeopathic remedies of which he was head:
…I lost interest after looking at the 57 systematic reviews (on 68 conditions) which contained 176 individual studies and finding no discernible convincing effects beyond placebo.
He is not the first person to look at the totality of clinical evidence for homeopathy and find it wanting. Glasziou was chair of the working party that produced the 2015 NHMRC report on homeopathy, which concluded:
Based on the assessment of the evidence of effectiveness of homeopathy, NHMRC concludes that there are no health conditions for which there is reliable evidence that homeopathy is effective.
So, after more than two centuries, and thousands of studies in total, no homeopathic treatment has crossed the line of what would generally be considered sufficient evidence to prove that it works. That is very telling. I liken the evidence to that for other dubious claims, such as ESP. After a century of research and thousands of studies there is no clear evidence that ESP is real.
For both homeopathy and ESP there is a great deal of noise, but no clear signal. There are many flawed or small studies, but no repeatable high quality studies.
This lack of convincing evidence also has to be looked at in the context of the scientific plausibility of homeopathy (or ESP). Homeopathy’s scientific plausibility approaches zero.
To quickly summarize: homeopathic potions are diluted to the point where there is little to no “active ingredient” remaining. Homeopaths have to argue that the original substances leave behind their “essence,” which is just covering one highly implausible claim with another.
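To put that dilution into perspective, here is a minimal back-of-the-envelope calculation in Python. The 30C potency (30 serial 1:100 dilutions) is an illustrative assumption on my part; it is a commonly sold potency, but the reviews above cover many different preparations:

AVOGADRO = 6.022e23              # molecules in one mole
dilution_factor = 100.0 ** 30    # 30C = 30 serial 1:100 dilutions, a total factor of 10^60

# Generously assume we start with a full mole of the "active ingredient"
starting_molecules = 1.0 * AVOGADRO
expected_remaining = starting_molecules / dilution_factor

print(f"Expected molecules of original substance left: {expected_remaining:.1e}")
# roughly 6e-37, far less than one molecule; the final product is almost certainly just water

In other words, long before the 30th dilution step the odds of even a single molecule of the starting material surviving have dropped to essentially zero.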
Further, the “active ingredients” themselves are based upon pure magical thinking, including the false notion that like cures like, which is little more than sympathetic magic, and even more fanciful notions about the relationship between personality and illness. I like to say that homeopathic treatments are essentially fairy dust diluted into non-existence. Others have called homeopathy the air-guitar of medicine, which is also an apt analogy.
When we combine maximal scientific implausibility with weak evidence, in fact convincing evidence for lack of efficacy, there is only one reasonable conclusion to reach – homeopathy does not work, and in fact, as far as we can tell, it cannot work.
Despite this, Glasziou, who is a professor of evidence based medicine (EBM), went into his review with an open mind. I find that interesting, as a premise of EBM is to not consider prior plausibility and just look at the evidence. So he did a real EBM review – and found that there isn’t a single indication for which there is sufficient evidence to claim that homeopathy works.
Unsurprisingly, Dana Ullman made an appearance in the comments to Glasziou’s blog post in order to represent the “science denial” position of the die-hard homeopath (as he often does). Ullman had two points to make: 1) the threshold for evidence in the NHMRC review was too high, and 2) mainstream medicine does not fare well under the same strict standards. Both of these claims are dubious.
When someone is arguing for lowering the standards of evidence, that is a huge red flag. That is rarely a strong position. Ullman complains that the NHMRC report set as its threshold three independent studies, each with at least 150 subjects. To me, that is a perfectly reasonable standard.
Independent replication is the key to science. There are numerous biases which can affect the outcome of studies – researcher bias, publication bias, citation bias, researcher degrees of freedom, and even occasional fraud. We only know if something is really real if the effect can be independently replicated. This point cannot be emphasized enough.
I have written before, in fact, that before a new phenomenon should be deemed probably real we should expect evidence that simultaneously meets the following thresholds:
1 – Rigorous trial design
2 – Statistically significant positive results
3 – Adequate signal to noise ratio
4 – Independent replication
Three out of four are not enough. You can get three out of four with a fake treatment or a phenomenon that is not real. It is difficult to get all four unless the phenomenon is real.
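To illustrate why independent, repeatable results matter so much, here is a minimal simulation sketch in Python (my own illustration, not from the reviews discussed above): it runs many small trials of a treatment with zero real effect and counts how many come out “statistically significant” anyway.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_studies, n_per_arm = 200, 25   # many small trials of a treatment with no true effect

false_positives = 0
for _ in range(n_studies):
    treatment = rng.normal(0.0, 1.0, n_per_arm)  # outcomes drawn from the same
    placebo = rng.normal(0.0, 1.0, n_per_arm)    # distribution as the placebo arm
    _, p = ttest_ind(treatment, placebo)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_studies} null studies came out 'positive' (p < 0.05)")

Roughly 5% of these null studies come out “positive” by chance alone, and that is before researcher bias, publication bias, and researcher degrees of freedom inflate the count further. Scattered positive studies are exactly the noise you expect from a worthless treatment; a consistent, independently replicated signal is not.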
Further, these are general principles. Where exactly we draw the line is a judgment call. It is partly based on an understanding of the history of science – in the past, where is the line that has reliably indicated that a phenomenon is real? It is also partly based on scientific plausibility. The bar is higher for claims that represent a greater dissonance with existing evidence. Extraordinary claims require extraordinary evidence.
These rules reflect a genuine desire to know what is real, rather than a genuine desire to prove that one’s pet theory is true.
The NHMRC review and other reviews have concluded that no homeopathic remedy has crossed this line for any indication. You can quibble about exactly where to draw the line, but honestly homeopathy is not even close. You would have to abandon all reasonable standards of evidence in order to argue that the evidence supports homeopathy – which is exactly what Ullman does.
Ullman’s second claim is that if you apply the same standards to mainstream medicine, it also does not fare well. He cites as his primary source for this claim a BMJ Clinical Evidence review, which Ullman summarizes:
In fact, when the BMJ’s “Clinical Evidence” analyzed common medical treatments to evaluate which are supported by sufficient reliable evidence, they reviewed approximately 3,000 treatments and found only 11% were found to be beneficial.
Let’s take a closer look at this review. They found that 11% of treatments fell into the highest category of “beneficial,” but another 24% were in the “likely beneficial” category. Further, 7% were in the “trade-off between benefits and harms” category. This does not mean those treatments are unproven, just that they have side effects that represent a significant trade-off.
Only 8% were unlikely to be beneficial or likely to be ineffective. That is the category, by the way, in which I would place homeopathy.
But here is Ullman’s biggest omission in summarizing this review:
‘Unknown effectiveness’ is perhaps a hard categorisation to explain. Included within it are many treatments that come under the description of complementary medicine (e.g., acupuncture for low back pain and echinacea for the common cold), but also many psychological, surgical, and medical interventions, such as CBT for depression in children, thermal balloon ablation for fibroids, and corticosteroids for wheezing in infants.
In the biggest category, “unknown effectiveness” (50%), the review included CAM treatments like herbs and acupuncture. It is therefore incredibly dishonest to portray these numbers as representing mainstream medicine.
Even further, the authors report that these percentages reflect just the number of treatments they looked at, not the frequency with which they are used. In other words, it is possible that most practitioners largely stick within the 42% of treatments (11% beneficial, plus 24% likely beneficial, plus 7% trade-off; higher still if you exclude CAM therapies) for which there is reasonable plausibility and evidence of efficacy.
In fact, that is standard practice – to follow a hierarchy of evidence, starting with the safest and most evidence-based treatments first, and then working your way down only when such treatments have failed.
Ullman is also cherry picking. One review found that on average such reviews conclude that 76% of treatments are reasonably evidence-based. This is admittedly a complex question and you can address it in many ways. How often do practitioners follow strict published practice guidelines, for example?
Keep in mind, those of us at Science-Based Medicine and others in the medical community are arguing that we need to be raising our baseline threshold for evidence. We need to more thoroughly consider plausibility, and if anything raise the bar for accepting new treatments. There is increasing recognition of the systemic biases in published medical research, and more than ever we need to dedicate our profession to rigorous standards of evidence.
In the midst of this Ullman and others want to lower the standards of evidence, all so that they can admit their preferred unscientific treatment.
Conclusion
The evidence still shows that homeopathy does not work. Even if you take a strict EBM approach and do not consider scientific plausibility, the evidence does not support the conclusion that homeopathy works for anything. There is not one indication for which any homeopathic treatment has been reliably shown to work, if you use a fair and reasonable threshold of evidence.
And of course, homeopathy is as scientifically implausible as you can get. It is essentially magic. Homeopathic products are magic potions that have far more in common with a witch’s brew than a scientific remedy.