Apr 17 2012

Alternative Medicine’s Attack on Science

If you have been paying attention, it is quite clear that at the core of the CAM (Complementary and Alternative Medicine) movement is a deliberate and calculated attack against science as the basis for medicine and health care. The original brand of “alternative” medicine was the most accurate – it is an alternative to science and evidence-based medicine. The later terms, “complementary” and “integrative,” are deceptions meant to distract from the fact that CAM (as much as general statements can be made about such a loose category) is anti-science, and therefore cannot be integrated into science.

Fortunately for those of us who are trying to increase public awareness about the anti-science agenda of CAM, CAM proponents frequently show their hand. They advocate for changing the rules of evidence to suit their needs. They talk about integrating their therapies with science-based medicine, but then pull a bait and switch and push pure pseudoscience as first-line treatment. They dismiss and denigrate legitimate science as if it were all a big corporate conspiracy. They advocate for (and are slowly getting) laws to weaken the science-based standard of care for medicine. And of course, they distort and misrepresent real science and promote abject pseudoscience.

Perhaps none are worse in their broad-based attack on science than the homeopaths. Really, if they are going to promote homeopathy, they have no choice. Homeopathy is pure magical pseudoscience, and it doesn’t work. A thorough review by the British government recently concluded that homeopathy is “witchcraft.” Science, therefore, is the homeopath’s worst enemy (as homeopath Werner aptly demonstrates in this hilarious YouTube video). To the homeopath there is no more frightening phrase than, “Science-Based Medicine.” To survive they must either destroy science or break it to their will (which would destroy science).

Orac brought my attention to the latest attack against science by a homeopath, Heidi Stevenson. He does a fine job of deconstructing the nonsense, but I feel the need to add my own comments. Stripped down, the article makes two points: anecdotal evidence is not only legitimate, it’s the best form of evidence; and science-based doctors use mostly anecdotal evidence too. Both points are wrong.

I have written extensively already about the role of anecdotes in science. In short, they are the weakest form of evidence. They are not without any value, but their use is primarily as a preliminary form of evidence useful for forming hypotheses. They are too weak, however, to test hypotheses or to serve as the basis for conclusions. Anecdotal evidence tends to be overwhelmed by confirmation bias, perception bias, and a host of other cognitive biases, so that it appears to support whatever we already believe or wish to believe. It is not a tool for leading us to the truth.

Stevenson offers the following as evidence for her claim that real doctors use anecdotes all the time:

If you tell your doctor that a drug he’s just given you is causing a terrible headache, the chances are that you’ll be believed, and your treatment will be changed. He’s basing that decision on the anecdotal evidence you’ve just given.

This is a hopelessly naive statement, on many levels. This is not, in fact, legitimate clinical decision-making. If a patient tells me that they got a headache after taking a new prescription that I have given them, I do not automatically conclude, based upon this anecdote, that the medication gave them a headache. This would be post hoc ergo propter hoc (after this therefore because of this) reasoning, a logical fallacy. Rather, I would consult the scientific evidence – what percentage of patients in the clinical trials taking this drug reported headache, vs those taking the placebo? If three percent of people taking the drug and three percent of those taking the placebo reported headache, then it is reasonable to conclude that the drug does not cause headache. The certainty of this conclusion will be based upon the number of people in the study. If there were a thousand people in the study, it’s still possible that 1 in 10,000 people will get a headache from the drug. But at least I can conclude that headache as a side effect is very unlikely. If, on the other hand, 20% of people taking the drug reported headache, then it is much more likely that the patient is correct and the drug did indeed cause their headache.
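The sample-size point above can be made concrete with a little arithmetic. A minimal sketch (the thousand-person trial and the 1-in-10,000 rate are the hypothetical figures from the paragraph, not data from any real study):

```python
# Sketch: how often would a 1,000-patient trial completely miss a side
# effect whose true rate is 1 in 10,000? (Hypothetical numbers from the text.)
def prob_zero_events(n: int, rate: float) -> float:
    """Probability that none of n independent patients shows the side effect."""
    return (1 - rate) ** n

p_miss = prob_zero_events(1000, 1 / 10_000)
print(f"Chance a 1,000-patient arm sees zero such headaches: {p_miss:.1%}")

# The well-known "rule of three" gives the flip side: after observing
# zero events in n patients, the approximate 95% upper bound on the
# true rate is 3/n.
n = 1000
print(f"Approx. 95% upper bound on the rate after 0/{n} events: {3 / n:.2%}")
```

So a trial of a thousand patients would, about 90% of the time, see no such headaches at all – which is why a clean trial can rule out common side effects but never rare ones.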

I would also gather more information from the patient. Did they get headaches before taking the drug? Was their new headache similar to or different from their prior headaches? Was there anything else going on that could possibly have caused the headaches? This is all part of taking a thorough history, which is a way of testing hypotheses by looking for additional information that would support or refute the hypothesis (in this case, that the medication caused the headache).

I then take other factors into consideration. What is the patient’s attitude toward the medication? Even if it did not cause their headache they may be unwilling to keep taking it because they believed it caused their headache. There is no point in prescribing a medication that a patient is not going to take. Also, how many other options are available? If there are many other options, then it might just be easier to try something else. If there are few other options then it may be better not to give up on this one medication too quickly, just because of a side effect that may have actually been a coincidence. In some cases, if the patient is willing and it is medically appropriate, we also have the option of gathering additional data. I may have the patient stop the medication for two weeks and monitor their headaches, and if they are gone then restart the medication to see if the headaches recur. If they do, then that greatly increases the probability that the medication is actually causing the headache and it wasn’t just a coincidence.
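The stop-and-restart test described above is, in essence, a Bayesian update: a recurrence on rechallenge is much more likely if the drug is the cause than by coincidence, so observing it should raise our confidence accordingly. A toy sketch, where every number is an illustrative assumption rather than a clinical figure:

```python
# Toy Bayesian update for a dechallenge/rechallenge test.
# All probabilities below are illustrative assumptions, not clinical data.
prior = 0.3              # prior probability the drug causes the headaches
p_recur_if_drug = 0.9    # chance headaches recur on rechallenge if the drug is the cause
p_recur_if_chance = 0.2  # chance headaches recur anyway, by coincidence

# Bayes' theorem: P(drug is the cause | headaches recurred on rechallenge)
posterior = (prior * p_recur_if_drug) / (
    prior * p_recur_if_drug + (1 - prior) * p_recur_if_chance
)
print(f"Probability the drug is the cause rises from {prior:.0%} to {posterior:.0%}")
```

With these made-up numbers, a single positive rechallenge roughly doubles the probability that the drug is the culprit – which is the intuition behind treating recurrence as strong (though still not conclusive) evidence.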

Compare my actual practice to the simplistic characterization by Stevenson.

I need to further point out, as Orac does, that the individualization of treatments that have been proven to be safe and effective by scientific studies is not the same as relying on anecdotes. It is the application of scientific data to the individual. On this point Stevenson writes:

Unfortunately, those who promote science in medicine to the exclusion of all other means of learning miss the most significant fact of all: Humans are individuals, complex beyond comprehension. Life itself is something more than the interaction of chemicals and the laws of Newtonian physics.

This is a massive straw man. Science-based medicine does not ignore the complexity of human biology – that is actually core to the SBM approach. That is precisely why we need the best information in order to make effective decisions. Scientific studies do give us information about average responses of groups – but that information can be statistically applied to individuals. There is no way to predict exactly how an individual will respond to a treatment, but we can say how they are statistically likely to respond, and make rational clinical decisions based upon that information. Those treatment decisions can then be individualized based upon the patient’s actual response – as best as we can determine. It is not perfect, and no one has claimed that it is, but it’s the best we have. It is not the same as pure anecdotal evidence, as Stevenson suggests.

She goes on to write:

We learned that certain herbs had beneficial effects by trying them and passing on the information of what resulted: pure anecdotal evidence. But that’s how we know, for example, that milk thistle is good for the liver and hawthorn is good for the heart. No studies needed to be done. We learned through experience and anecdote.

This is nothing but circular reasoning. How do we know that milk thistle is good for the liver? Is she using anecdote to confirm anecdote? What does the scientific evidence say? Well, recent reviews conclude that there haven’t yet been good studies, so we don’t know. There is concern about possible contamination and toxicity. It would also be nice to have data on active ingredients, dosing, pharmacokinetics, and drug-drug interactions. Try to get that with anecdotes.

What about hawthorn for the heart? The evidence is mixed. In preliminary studies it appears to have benefit for heart failure, but not other measures of heart function. While generally well tolerated, there are some safety concerns. Stevenson’s summary is that “hawthorn is good for the heart.” That is what anecdotal evidence tells us, she argues. But that is useless clinical information. We need careful scientific studies to tell us how much, with what side effects, and for which conditions specifically. In order to make rational risk vs benefit decisions and engage in practical clinical decision making, you need scientific evidence, not just anecdotes. If Stevenson is arguing that scientific studies support the anecdotes, then ironically she is acknowledging that scientific studies are the gold standard.

Her examples are also cherry picked. Anecdotes also led people to believe that ginkgo was good for memory (it isn’t), that echinacea can treat cold symptoms (it doesn’t), and that aristolochia is safe (it isn’t). Anecdotal evidence failed in these and countless other beliefs. The history of anecdotal evidence is a mountain of failure with a few successes – and it is rigorous scientific evidence that has enabled us to separate out the successes from the failures. To argue that all we need are anecdotes is to be willfully blind to history (even very recent history).

Conclusion

Anecdotes have a real but minor and preliminary role to play in scientific evidence. They are at best exploratory evidence that guides later rigorous study. Homeopaths (and CAM proponents in general) want to rely on weak evidence, because rigorous evidence does not support their fairy tales. Stevenson’s arguments are confused, factually incorrect, biased, and fallacious – also par for the course for CAM proponents. She wants you to think that science is all a big conspiracy of corporations and corrupt government. She would rather have you listen to her anecdotes, because they can be used to back any claim you wish to make.

This is the position of the guru, not the rational or science-based practitioner. “Ignore and distrust science – listen to my stories.”

It is also a clear example of the pernicious nature of CAM. People often ask – what’s the harm? There is much harm in believing in and relying on nonsense. Stevenson’s article highlights just one form of such harm – fostering an overall distrust in science, a harm that is hard to measure but should not be underestimated.
