Sep 27 2011
Understanding the various aspects of the placebo effect is now a priority for proponents of science-based medicine. Now that for many modalities the evidence is in and is largely negative, proponents are exploiting the general lack of understanding of placebo effects to claim that their modality “works” as a placebo. Even skeptics may have a hard time understanding some of the counter-intuitive aspects of placebo effects.
Here, for example, is a question from reader PharmD28:
My question/issue is regarding debating someone about an intervention that has not been proven effective yet they clearly tell you that it is effective for them.
Take acupuncture. I was talking to a nurse practitioner colleague just today about acupuncture, as one of the MDs at my facility does it and has done it for her. She basically conceded that there is not strong evidence that it works, but she has had it done for her headache 5 times with “very good” results. She said one of the five times it made her nauseous, but this was “expected” for the first treatment, and subsequently it eliminated her headache. She told me it did nothing for her back pain or for her TMJ. I have no explanation as to why this intervention worked in this case for “headache”, but I find myself in that instance without a good rebuttal, except thinking to myself that, yeah, placebo works too for subjective outcomes.
This is a common question: if a treatment “works”, even though it is just a placebo effect, isn’t that still a worthwhile effect?
The answer is – it depends, but mostly no. The problem is in the assumption that because one is feeling better the treatment worked. This is the post hoc ergo propter hoc logical fallacy. We do not know what the subject’s headaches would have been like had they not received acupuncture. It’s possible they were destined to improve in any case.
Simple regression to the mean explains why this is likely. People will tend to seek treatment when their symptoms are at their worst, which means they are likely, by chance alone, to regress to the mean of the distribution of symptoms – or return to a less severe state, which will be interpreted as improvement.
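Regression to the mean is easy to demonstrate with a toy simulation. The numbers here (a mean severity of 5, a “bad day” threshold of 8) are arbitrary assumptions for illustration, not data from any study:

```python
import random

random.seed(42)

# Model daily headache severity as independent random fluctuation around a
# stable mean -- no treatment exists in this model, so nothing can "work".
days = [random.gauss(5.0, 2.0) for _ in range(100_000)]
overall_mean = sum(days) / len(days)

# A person "seeks treatment" only on unusually bad days (severity > 8),
# then judges the treatment by how they feel the next day.
bad = [i for i in range(len(days) - 1) if days[i] > 8]
mean_bad_day = sum(days[i] for i in bad) / len(bad)
mean_next_day = sum(days[i + 1] for i in bad) / len(bad)

print(f"overall mean severity:  {overall_mean:.2f}")
print(f"mean on treatment days: {mean_bad_day:.2f}")
print(f"mean the day after:     {mean_next_day:.2f}")
# The day after "treatment" looks dramatically better than the treatment day,
# even though the treatment did nothing -- pure regression to the mean.
```

Because treatment is sought precisely when symptoms are at an extreme, the next measurement almost has to look like improvement.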
This is further compounded by confirmation bias, and this case provides an excellent example. The acupuncture did not work for the back pain or TMJ (really TMJ syndrome – “TMJ” just stands for temporo-mandibular joint). So she tried acupuncture for a variety of symptoms, and one improved (while on one occasion developing nausea, which could have been a side effect or just a worsening of the headache).
In other words – we have very noisy data, with some improvement, some worsening, and some unchanged. It does not make scientific sense to pick out only the positive effects from this distribution of data and declare that the acupuncture “worked” in those instances.
This is exactly like an alleged psychic who guesses cards, performs no better than chance, but declares that for the random hits they did make their psychic power was working. You have to look at all the data to see if there was an effect.
This principle applies to medical interventions as well – you have to look systematically at all the data to see if there is an effect. When you do this – acupuncture does not work. Saying “well it worked for me” is exactly like saying that the psychic powers worked whenever they randomly hit, even though the overall pattern was negative (consistent with random guessing).
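The card-guessing analogy can be made concrete with a short simulation. It assumes Zener-style cards with five symbols, so chance performance is exactly 20%:

```python
import random

random.seed(0)

SYMBOLS = ["circle", "cross", "waves", "square", "star"]  # Zener-style cards
TRIALS = 10_000

# Each trial: the "psychic" guesses a symbol, a card is drawn independently.
hits = sum(
    random.choice(SYMBOLS) == random.choice(SYMBOLS)
    for _ in range(TRIALS)
)
hit_rate = hits / TRIALS

print(f"hits: {hits}/{TRIALS}  (rate {hit_rate:.3f}, chance is 0.200)")
# Counting only the ~2,000 hits while ignoring the ~8,000 misses would make
# random guessing look like a genuine power. The same selective counting
# makes an ineffective treatment look like it "worked".
```

The overall hit rate hovers at chance; only by discarding the misses can anyone manufacture an effect.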
Another layer of randomness to the data which is then ripe for cherry picking and confirmation bias is trying multiple therapies for the same problem (in addition to the same therapy for multiple problems). For example, someone might take medication, acupuncture, chiropractic, and homeopathic remedies at the same time for their headaches, and if they improve credit one or more of the alternative treatments. Or they may try them in sequence, and whichever one they took when their symptoms improved on their own gets the credit due to the post hoc fallacy.
We intuitively ignore the failed treatments – the misses – and commit the lottery fallacy by asking the wrong question: what are the odds of my headache getting better shortly after taking treatment X? The real question is: what are the odds of my headache getting better at any time, and that I would have recently tried some treatment?
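A toy simulation illustrates the lottery fallacy, under made-up assumptions: a new remedy is tried every 5 days while symptoms persist, and any relief within 2 days of a dose gets credited to that remedy.

```python
import random

random.seed(1)

TRIALS = 100_000
CYCLE = 5   # assumption: a new remedy is tried every 5 days
WINDOW = 2  # assumption: relief within 2 days of a dose is credited to it

credited = 0
for _ in range(TRIALS):
    # The headache resolves on its own at a random point in a 30-day spell;
    # no remedy in this model has any effect at all.
    relief_day = random.randint(0, 29)
    last_dose = (relief_day // CYCLE) * CYCLE  # most recent remedy day
    if relief_day - last_dose <= WINDOW:
        credited += 1

print(f"spontaneous improvements credited to a remedy: {credited / TRIALS:.0%}")
```

Even though nothing in the model works, the majority of spontaneous improvements land close enough to some dose to be credited to it.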
There are also psychological factors in play. When people try an unconventional treatment, perhaps out of desperation or just the hope for relief, they may feel vulnerable to criticism or a bit defensive for trying something unorthodox and even a bit bizarre. There is therefore a huge incentive to justify their decision by concluding that the treatment worked – to show all the skeptics that they were right all along.
Then, mixed in with all of this, is a genuine improvement in mood, and therefore symptoms, from the positive attention of the practitioner (if there is one – i.e. you’re not taking an over-the-counter remedy), or just from the hope that relief is on the way and the feeling that you are doing something about your health and your symptoms. This is a genuine, but non-specific, psychological effect of receiving treatment and taking steps to have some control over your situation.
What is distressing to those of us who are trying to promote science-based medicine is that this latter factor is often treated as if it were the entire placebo effect, or at least the majority of it. The evidence, however, suggests that it accounts for only a small minority of the effect.
A recent study of asthma, for example, showed that the placebo effect on objective measurements of asthma severity was essentially zero, while there was a substantial effect on subjective outcomes. Subjects reported feeling better even when objective measures showed they were no better. This sounds an awful lot like confirmation bias and other psychological factors, like expense/risk justification and the optimism bias.
Placebo effects are largely an illusion created by various well-known psychological factors and errors in perception, memory, and cognition – confirmation bias, regression to the mean, the post-hoc fallacy, optimism bias, risk justification, suggestibility, expectation bias, and failure to account for multiple variables. There are also variable (depending on the symptoms being treated) and subjective effects from improved mood and outlook.
Concluding from all of this that a treatment “works”, when a treatment appears to be followed by improved symptoms, is like concluding that an alleged psychic’s power “works” whenever their random guessing hits. This is why anecdotal experience is as worthless in determining if a treatment works as is taking the subjective experience of a target of a cold reading in determining if a psychic’s power is genuine.
Yet, even for many skeptics, the latter is more intuitive than the former. It is hard to shake the sense that if someone feels better then the treatment must have “worked” in some way.