Mar 09 2015

Basic Science Should Inform Clinical Science

Last year David Gorski and I published an article in which we argue that it is a waste of resources and ultimately counterproductive to conduct clinical trials of a treatment that is so scientifically implausible it might as well be “magic.” Homeopathy, for example, fits squarely into this category. The complementary and alternative medicine (CAM) community did not respond favorably to our arguments.

A recent article by Sunita Vohra and Heather Boon directly critiques our article. Vohra and Boon are both involved in homeopathy research, so this is no surprise. In their brief article they essentially repeat the standard CAM talking points about scientific research, without really countering the position that David and I have described. In doing so, in my opinion, they demonstrate the utter intellectual bankruptcy of the CAM position, repeating points that were deconstructed years ago without ever addressing the counterpoints.

The core of the disagreement is about the relative role of various kinds of scientific research in evaluating medical therapies. The position of science-based medicine (SBM) is that rigorous efficacy trials are required to truly know if a treatment is safe and effective (that aspect of our position we share with standard evidence-based medicine, or EBM). Further, this clinical evidence must be put into the context of all the rest of science, right down to basic laws of physics, summarized as an overall scientific judgment about the plausibility of the treatment. This basic science plausibility should also be used to guide the expenditure of our limited resources in conducting expensive and resource-draining clinical trials. At the same time, solid evidence from clinical trials can inform basic science by suggesting possible biological mechanisms.

Vohra and Boon lay out the CAM position, which I will address point-by-point. They begin:

Recently, Gorski and Novella recommended an approach that would severely curtail one’s ability to explore novel therapies and gain new understanding about human biology.

This has been the cry of our CAM critics from the beginning. It is essentially the Galileo gambit – they opposed Galileo and he turned out to be right, and now they are opposing us, so… This is the battle cry of all cranks. Our approach would not severely curtail the ability to explore novel therapies. We have simply stated that it is a waste of public resources, an unethical use of patients as subjects, and probably a violation of their informed consent, to study treatments that are extremely scientifically implausible. CAM apologists, however, always state this principle as if it would apply to any new knowledge, or to any area that we do not currently fully understand. That is a straw man of our position.

By the way, they actually bring up Galileo later in the article, writing:

Thank heavens Galileo tested gravity, rather than follow the pervasive thought at the time that heavier objects must fall faster – this seemed clearly plausible, yet is completely false.

They continue:

This approach is a cause for concern as it is predicated on an assumption that we already understand how the world and, in particular, the human body, works. This seems to fly in the face of the basic scientific method, which starts with an intriguing observation and encourages one to ask ‘why did that happen?’

Their argument here is essentially that we do not know everything, and therefore we should behave as if we know nothing. Of course science does not understand everything, and it never will. Our position is not predicated on the position that we know everything. It is predicated on the assumption that we do not know nothing – that we currently have a body of scientific knowledge that, while imperfect and incomplete, is a useful guide to further scientific research. We are not starting with a blank slate with every question.

Of course, pseudoscience wants to ignore current scientific knowledge precisely because it runs contrary to established knowledge.

You can also see in the way they frame their position that they are vague when referring to knowledge and understanding. They make no attempt to distinguish how much we know, or how reliable our scientific knowledge is. This, in my opinion, is either deliberate or simply reflects their flawed view of the issue. You cannot talk about scientific knowledge or plausibility in this context without acknowledging the wide variation in degree. There is a broad continuum from solid, established scientific facts to notions that run so counter to established knowledge that it is reasonable to treat them as impossible – and covering all the ground in between. They discuss the issue in stark black-and-white terms that ignore this continuum.

Their second sentence, however, is perhaps more telling. They leave out a critical step in their cartoon of the scientific method (and again, this is absolutely typical of pseudoscience in general). When the process starts with an observation (it doesn’t have to), the next step is not to ask “why did that happen,” but “did that really happen,” and perhaps, “what is it that actually happened.”

In other words, when observation suggests the existence of a new phenomenon, it is important to double- and triple-check the observations, and to conduct further observations and even experiments when possible to determine if the alleged phenomenon is real. Only when the evidence supports the conclusion that the phenomenon is real is it reasonable to expend resources investigating its nature. Otherwise you are conducting “tooth fairy science.” Your treatment may be the equivalent of N-rays – a form of radiation “discovered” in 1903 that turned out to be nothing but the self-deception of the researchers who studied it.

They continue on this theme:

Interestingly, the hypothetical scenario used by these authors to help the reader understand the magnitude of the problem is the same strategy used by those trying to explain how little basic science research informs clinical therapies. Unfortunately, it has become well known that basic science may or may not inform how therapies work in patients, more often ‘may not’.

Their logic here is not valid. They are confusing the fact that treatments which look promising at the basic science level often do not work out when studied clinically with the reverse proposition – that treatments which seem implausible at the basic science level might actually work. These two situations are not symmetrical. Further, they completely ignore the degree of basic science implausibility.

They are essentially making a massive argument from ignorance, downplaying our scientific knowledge and glossing over the vast differences in our level of confidence. They are saying that because, for example, antioxidants did not work when tested clinically, even though we thought they might based on our incomplete understanding of the role of oxidative stress, we should therefore not be biased against treatments that seem to violate basic laws of physics. These are not equivalent.

So they want to throw out plausibility as a criterion in assessing possible treatments, essentially arguing that our current scientific knowledge is so imperfect we can comfortably ignore it. You might think that they would go on to defend a strictly EBM approach – saying we should trust in rigorous clinical evidence regardless of basic science plausibility. You would be wrong. Not only, they argue, should we ignore basic scientific knowledge, but we should also water down clinical evidence and rely upon its weakest forms. They write:

Randomised controlled trials have become the gold standard for evaluating treatment effectiveness, but they are expensive and resource-intensive. Rather than arguing that gathering clinical evidence about popular therapies is a folly, we suggest consideration of other kinds of innovative trial design, such as N-of-1 trials.

They also suggest “pragmatic” studies and “comparative effectiveness trials or patient-centered effectiveness research.” None of these trial designs, as they admit, are efficacy trials, meaning they are not designed to isolate the treatment as a variable and determine whether it has specific efficacy – whether the treatment actually works. They are valid forms of study only for treatments that already have proven efficacy.

So not only would they dispense with pesky considerations of plausibility, they would rely on clinical research that cannot actually tell us whether a treatment works – research that is, in fact, designed in such a way as to almost guarantee a false-positive outcome. Talk about rigging the game.
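To see why such designs are stacked toward positive results, consider this minimal simulation sketch (my own illustration, with invented numbers, not an analysis from either article). An uncontrolled pre/post design will make an inert treatment look effective, because patients tend to enroll during a flare-up and their symptoms then drift back toward their long-run average regardless of what is done – regression to the mean.

```python
import random

# A minimal sketch (invented numbers, for illustration only): why an
# uncontrolled pre/post design looks "positive" even for an inert treatment.
# Patients enroll during a flare-up; symptoms then regress toward their
# long-run average regardless of treatment (regression to the mean).

random.seed(42)

N = 10_000          # simulated patients
TRUE_EFFECT = 0.0   # the treatment is completely inert

improved = 0
for _ in range(N):
    typical_severity = random.gauss(5.0, 1.0)    # patient's long-run average
    # Enrollment happens on a bad day: a random deviation above the average.
    enrollment_score = typical_severity + abs(random.gauss(0.0, 1.5))
    # Follow-up is just another ordinary day, minus the (zero) treatment effect.
    followup_score = random.gauss(typical_severity, 1.5) - TRUE_EFFECT
    if followup_score < enrollment_score:
        improved += 1

print(f"{improved / N:.0%} of patients 'improved' on an inert treatment")
# Prints roughly 75% with these made-up parameters – an uncontrolled design
# "finds" a benefit that a blinded, controlled efficacy trial would
# correctly attribute to regression to the mean.
```

With these made-up parameters, roughly three quarters of patients “improve” on a treatment with zero effect – which is precisely the apparent benefit that blinding and control groups exist to strip away.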

As is also typical, they justify their position with an appeal to popularity:

Best available data suggest the majority of patients use complementary therapies. Evidence about which therapies may be helpful, and which may be harmful, in whom, and why, is urgently needed to guide practice and policy.

Actually that claim is not true, or at least is highly misleading. As we have explained many times on SBM, the numbers show that a very small percentage of the population, single digits, use hard-core CAM modalities and practitioners. CAM apologists inflate the numbers immensely, however, by using a fuzzy definition of CAM and including everything from vitamins to massage.

However, even, and perhaps especially, if the claim of popularity were true, that would indicate a dire need for rigorous and definitive efficacy trials, not weak clinical studies that amount to little more than marketing research used for promotion. I would further argue that spending resources on such trials would only be useful within a regulatory and practice framework that would actually use the results of such research. As history has made plainly clear, however, studies showing lack of efficacy have almost no impact on CAM practitioners. I have an open challenge that no one has been able to meet – show me one CAM modality that has been abandoned due to evidence of lack of efficacy.

Conclusion

Vohra and Boon repeat the standard CAM position regarding scientific research, clearly demonstrating, in my opinion, that CAM is built upon pseudoscience and misdirection and is thoroughly intellectually bankrupt. They want to ignore basic science and rely upon the weakest forms of clinical evidence, in order to promote treatments that range from barely plausible to so scientifically implausible they are the equivalent of magic.

In order to counter the perfectly reasonable criticism that David Gorski, I, and many others have leveled at this approach, they construct an obvious straw man of our position and collapse the broad range of scientific knowledge down to simplistic false dichotomies.

To lay out in more detail our actual position, there are a range of categories into which we might place possible treatments, and a variety of legitimate approaches we might take to effectively and efficiently advance clinical science.

At the top of the hierarchy of treatments are those that are well understood from a mechanistic and basic-science point of view and backed by rigorous and reproducible clinical evidence.

Below this level are treatments that are highly plausible and have positive clinical evidence, but where that evidence is moderate and less than definitive. Treatments with only preliminary or no clinical evidence should be considered experimental.

There may also be treatments for which we do not have a plausible mechanism, but there may be no particular reason to think they do not work. If we have definitive clinical evidence of their efficacy, this may guide basic science research to later understand their mechanism.

We can continue to work our way down the hierarchy of science-based treatments in this way, understanding that there is a continuum of both basic science plausibility and clinical evidence. Considering a treatment reasonable from an SBM perspective means that there is a critical mass of some combination of plausibility and clinical evidence. This also has to be put into clinical perspective – what are we treating, are other treatments available, how risky is the treatment, etc.?

At some point, however, there is a line below which you are no longer practicing SBM, but rather pseudoscience or witchcraft. Clear clinical evidence of lack of efficacy, or extreme scientific implausibility, falls below this line.

I can imagine, however, a scenario in which a treatment that seems highly implausible actually works by some unknown mechanism. In this case, however, the clinical evidence would have to be rigorous enough to offset whatever basic science evidence says the treatment is implausible. The more implausible the treatment, the more rigorous the clinical evidence must be.
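This point can be made concrete with a back-of-the-envelope Bayesian calculation – again, my own illustrative sketch with invented numbers, not a quantitative claim from either article. The same “positive” trial moves our confidence very differently depending on prior plausibility:

```python
# An illustrative Bayesian sketch (invented numbers): how much a single
# "positive" trial should move our belief depends heavily on prior
# plausibility.

def posterior(prior, power=0.8, alpha=0.05):
    """P(treatment works | positive trial), via Bayes' theorem.

    power = P(positive trial | treatment works)
    alpha = P(positive trial | treatment is inert), the false-positive rate
    """
    p_positive = power * prior + alpha * (1 - prior)
    return power * prior / p_positive

# A conventional drug with decent preclinical support vs. a treatment that
# would require new physics to work. The priors are invented for illustration.
for label, prior in [("plausible drug", 0.3), ("homeopathy-like claim", 1e-6)]:
    print(f"{label}: prior {prior:g} -> posterior {posterior(prior):.3g}")

# plausible drug: prior 0.3 -> posterior ~0.87
# homeopathy-like claim: prior 1e-06 -> posterior ~1.6e-05
# The same positive trial leaves the implausible treatment overwhelmingly
# likely to be a false positive; only far more rigorous and replicated
# evidence could offset so low a prior.
```

With these assumed trial characteristics, one positive study takes a moderately plausible drug to high confidence, while the homeopathy-like claim remains overwhelmingly likely to be a false positive even after its “positive” trial.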

By contrast, CAM apologists want to promote treatments which are simultaneously highly implausible and not backed by rigorous clinical evidence. In fact, they want to go out of their way to conduct only weak clinical studies that are of essentially no value in determining efficacy. Further, they want to practice this type of unscientific medicine in a regulatory environment without a science-based standard of care.

Conducting clinical studies in this context is the very definition of a waste of resources. Even worse, such research is used to market unscientific treatments that most likely don’t work to a public that is likely to assume there are standards in place to protect them – an assumption which, unfortunately, is increasingly wrong.

Note: David Gorski also replies at Science-Based Medicine here.
