Mar 11 2013

Revenge of the Woo

Sometimes the targets of our skeptical analysis notice, and they usually are not pleased with the attention.

Last year the Acupuncture Trialists Collaboration published a meta-analysis of acupuncture trials in which they claimed, “The results favoured acupuncture.” The report was widely criticized among those of us who pay attention to such things. In my analysis I focused on the conclusions the authors drew rather than their methods, while others also raised concerns about the methods used.

The authors did not appreciate the criticism and went so far as to publish a response, in which they grossly mischaracterize their critics and manage to completely avoid the substance of our criticism.

To review, the original meta-analysis concluded:

Acupuncture is effective for the treatment of chronic pain and is therefore a reasonable referral option. Significant differences between true and sham acupuncture indicate that acupuncture is more than a placebo. However, these differences are relatively modest, suggesting that factors in addition to the specific effects of needling are important contributors to the therapeutic effects of acupuncture.

In my critique I pointed out that the results do not show that acupuncture is effective, nor that it is a reasonable referral option. What they characterize as “modest” differences were, rather, not clinically significant. Further, such tiny differences are most parsimoniously explained as the result of researcher and publication bias, two phenomena that are well established in general and specifically within the acupuncture literature. Unblinding alone would be sufficient to explain these results.

What they call “factors in addition to the specific effects of needling” the rest of the scientific community would call “placebo effects,” which are not an indication that a treatment works, but rather the result of bias, noise, and statistical illusions. These results are due to unblinded comparisons with untreated groups in clinical trials – they are not evidence of any kind of efficacy.

Their conclusions are part of a pattern visible within the acupuncture community – attempting to parlay placebo effects into the mirage of a real effect from acupuncture. I commented in my original article that such a conclusion was evidence of pro-acupuncture bias in the authors.

In their response, the authors write:

Although there was little argument about the findings in the scientific press, a controversy played out in blog posts and the lay press.

Only one substantive critique of the paper has appeared in a scientific forum.

We find that there is little argument in the scientific press because most scientists pay little attention to what they consider fringe practices. That is precisely why it is left to those of us who do care and pay attention to fringe medicine to provide a detailed analysis and point out the flaws in reasoning used by proponents.

In fact we did submit a letter in critique of the study, in a traditional scientific forum, but it was not published. Only the brief letter by David Colquhoun was.

This represents a typical strategy by proponents of dubious fringe medicine – interpret lack of resistance from mainstream scientists as acceptance. When they do encounter resistance, they try to minimize it as irrational – as Vickers et al. have done here.

They continue:

This controversy was characterised by ad hominem remarks, anonymous criticism, phony expertise and the use of opinion to contradict data, predominantly by self-proclaimed sceptics.

This is a remarkable exercise in cherry picking and distortion. Their example of “ad hominem” remarks was my article in Science-Based Medicine (linked above) in which I said their conclusions were not justified and were therefore evidence of pro-acupuncture bias. This was followed by a substantive critique of their analysis, demonstrating the bias.

The majority of the criticism was not anonymous. All the usual players (myself, David Colquhoun, Edzard Ernst, Mark Crislip, Andy Lewis) posted articles or comments under our names. There are a few medical bloggers (like Orac) who prefer to remain anonymous (although they also blog under their real names) so as to preserve their rhetorical freedom and minimize professional harassment. To characterize the criticism as “anonymous criticism” is extremely unfair.

Under “phony expertise” they explain:

Many blog posters threw around methodological concepts such as I² or funnel plots, or made claims about the nature of chronic pain or acupuncture placebo techniques. At the same time, many admitted to not having read the paper,[4] and none appear to have published scientific research on pain, acupuncture or meta-analysis.

The reference is to Orac’s criticism. I reread it, and nowhere do I see a statement that Orac did not read the study. In fact he not only read the study but also analyzed the individual studies included in the meta-analysis. Perhaps they meant to reference another article.

They also deride the concept of “Science-Based Medicine” as if that is a strange concept. What they fail to realize is that our collective expertise is in the distinction between science and pseudoscience, and the various mechanisms of self-deception. Most of us are also physicians, and we share our respective specialty expertise when collectively analyzing such studies. The original article, and this response, are excellent evidence of why such expertise is desperately needed in medicine, especially when dealing with unusual claims, such as acupuncture.

In response to my article they wrote:

One blogger asserted that acupuncture ‘has an effect size that is very small and, in my opinion, overlaps with no effect at all’.[3] It is simply bizarre to dismiss years of careful statistical analysis on the grounds that results ‘might’ change; similarly, it should go without saying that whether an effect size overlaps with no effect is not a matter of opinion but of CIs.

Wrong and wrong. I did not simply substitute my opinion. My criticism was also not based on confidence intervals. We are not talking about statistical analysis, but systematic bias. I specifically cited the paper on “researcher degrees of freedom” to document this point. I further cited the authors in admitting that unblinding is a source of bias.

The point is that you can generate a small statistically significant result even when a treatment has zero effect. The authors falsely and naively assume that statistical significance equals a real effect, and they retreat to this position as if that counters the meat of our criticism. But it is simply not true. This is the point that those who are not aware of the principles of science-based medicine often miss. This is precisely why we advocate using a Bayesian analysis rather than p-values to assess clinical data.
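To illustrate the Bayesian point, here is a minimal sketch in Python (my own illustration, not from the original post or the paper), using the well-known Sellke-Bayarri-Berger calibration, which converts a p-value into an upper bound on the Bayes factor (at most 1/(-e·p·ln p) for p < 1/e). The 1% prior is an assumed, illustrative figure for an implausible treatment:

import math

def posterior_prob(prior: float, p_value: float) -> float:
    """Upper bound on P(real effect | data), given a prior and a p-value."""
    assert 0 < p_value < 1 / math.e, "calibration valid only for p < 1/e"
    bf_bound = 1.0 / (-math.e * p_value * math.log(p_value))  # maximum Bayes factor
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * bf_bound
    return post_odds / (1.0 + post_odds)

# A "just significant" p = 0.05 against a sceptical 1% prior:
print(round(posterior_prob(prior=0.01, p_value=0.05), 3))  # ~0.024

Even giving the data the most favorable possible reading, a p-value of 0.05 moves a 1% prior to only about a 2% posterior probability of a real effect – which is exactly why a lone “significant” result for an implausible treatment should not be persuasive.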

Researcher and publication bias tend to produce a small (but statistically significant) positive result in clinical trials, and a meta-analysis will faithfully reflect that bias (a toy simulation of this is sketched after the list below). We reject the results because:

– The effect size is not clinically significant

– There is a priori and empirical evidence of bias in the acupuncture research

– Acupuncture is inherently implausible

– There is a clear pattern in the research that the best controlled and designed trials show no effect (no difference between true, sham, and placebo acupuncture)
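Here is the promised toy simulation in Python (my own sketch, not from the paper or any of the critiques; the trial count, arm sizes, and selection rule are all arbitrary illustrative assumptions). The true treatment effect is set to exactly zero, yet selective publication alone tends to produce a small pooled effect whose confidence interval excludes zero:

import numpy as np

rng = np.random.default_rng(42)
n_trials, n_per_arm = 500, 50
effects, ses = [], []

for _ in range(n_trials):
    treated = rng.normal(0.0, 1.0, n_per_arm)  # true effect is exactly zero
    control = rng.normal(0.0, 1.0, n_per_arm)
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n_per_arm + control.var(ddof=1) / n_per_arm)
    # Selective publication: "significant" positive trials always get
    # published; everything else reaches print only 10% of the time.
    if diff / se > 1.96 or rng.random() < 0.10:
        effects.append(diff)
        ses.append(se)

# Fixed-effect (inverse-variance) meta-analysis of the published subset
w = 1.0 / np.square(ses)
pooled = float(np.sum(w * np.array(effects)) / np.sum(w))
half_ci = float(1.96 / np.sqrt(np.sum(w)))
print(f"pooled effect = {pooled:.3f}, 95% CI = ({pooled - half_ci:.3f}, {pooled + half_ci:.3f})")

A typical run yields a pooled effect on the order of a tenth of a standard deviation with a confidence interval excluding zero – a “statistically significant” benefit that measures nothing but the publication rule.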

There are two ways to interpret the acupuncture literature. One is that there is a real effect but it is too small to be clinically relevant. The second (the one I advocate) is that the small effects that tend to emerge from the research are likely not real, but rather due to well-established sources of bias in clinical research. This is the most parsimonious interpretation. It is partly justified by the fact that the effect sizes and patterns in the research are similar to those for other phenomena, such as homeopathy and ESP, that are almost certainly not real.

We also criticized this statement from the authors:

With respect to the debate about clinical implications, the Collaboration argued that, while a treatment should ideally be shown to be superior to placebo, evaluation of clinical significance should be based on overall benefit, including any non-specific effects.

Yes – this should be debated. We maintain that clinical significance should absolutely not include “non-specific effects,” because such effects do not support a specific benefit from the procedure in question and are largely the product of illusion and bias. Further, useful non-specific effects, such as a therapeutic relationship between doctor and patient, can be had with legitimate treatments that are not based upon dubious principles.

Conclusion

Criticisms of the Vickers et al. article have been substantive and perfectly legitimate. Some were indeed intended for a lay audience, and meant to counter credulous treatments of the study in the lay press. The authors, however, are unfair to dismiss all of this as “political muckraking,” as they do in their response.

They did try to address some of the substantive criticism, but failed to do so, in my opinion. They entirely missed the main point of the effect of systematic bias in clinical research. They do so, it appears, because they lack expertise in pseudoscience – the very expertise they derided.

The pattern is very clear. Acupuncture is an implausible treatment with a pattern of clinical evidence that mirrors other highly implausible treatments. Researcher degrees of freedom alone are enough to explain the small residue of positive results, and it should not be ignored that the best designed studies tend to be entirely negative.

The unblinded comparisons to a no-treatment group do not justify acupuncture. Imagine if a pharmaceutical company tried to get away with such an argument. What Vickers and his coauthors have demonstrated is the dire need for science-based medicine.

 
