Mar 18 2016

Ioannidis – Evidence-Based Medicine Has Been Hijacked

In a recent commentary, framed as an open letter to David Sackett (the father of evidence-based medicine), John Ioannidis argues that EBM has been hijacked by various interests. He also clarifies his position in an interview with Retraction Watch.

Ioannidis hits many interesting points: EBM has become a way to market products and services, clinical studies are largely in the hands of corporations with vested interests, academics are under their own pressures which emphasize getting grant money, practitioners are likewise struggling to survive in an era of managed care, and quacks and charlatans are exploiting the whole mess.

It is an eye-opening roller-coaster ride, including many personal stories, through the mind of perhaps the most famous current critic of the industry of medical science. I agree with much of what he says, and in fact his points coincide with a great deal of commentary here and at Science-Based Medicine. He takes a more cynical and pessimistic tone than I would, but that is subjective.

Perhaps I am a bit more optimistic (on my good days) because I am actively working on a solution – SBM. That is, in fact, precisely what SBM was designed to be, a fix for the limitations and vulnerabilities of EBM, the very vulnerabilities that Ioannidis now exposes.

Some of the more solid points that Ioannidis makes include the overhyping of basic science or preliminary research. He writes:

Right after my talk, everybody rushed to hear the launch of a new campaign, where the leader of the institution singled out this unique historic moment: that institution would single-handedly eliminate most major types of cancer within a few years. Several years have passed, and none of these cancer types have disappeared. I recently tried to find the name of that campaign online but realized that this institution has launched many similar campaigns. Which among many was the unique historic moment that I happened to be at?

This pattern is all too familiar – some basic science finding is extrapolated to an ultimate and dramatic clinical outcome. It is not just journalists who overhype, but researchers and their press offices play a role as well. In this industry of hype, every discovery about cell metabolism will cure cancer, every viral study will lead to a cure for the common cold, and every incremental advance in our understanding of brain physiology will lead to a cure for Alzheimer’s disease.

He extends this to epidemiology:

Second, instead of dealing with these major public health risks, the production of spurious, false-positive, or confounded putative risk factors is more dangerous than ever. Jumping from correlation to causation, data dredging is called causal evidence and fuels guidelines.

I wrote about this also in an article called Everything Causes Cancer. There is a lot of noise in data, and if you try even a little bit you can extract a false signal out of that noise. Sometimes the signal is real, but clinically meaningless. The public is then exposed to an endless stream of warnings about tiny or spurious risks, which dilute out warnings about the large and proven risks. People become worried about the wrong things. (This is now typified in my mind by the person who smugly reassured me that she only smokes organic cigarettes.)
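To illustrate how easy it is to pull a false signal out of noise, here is a toy simulation (my own illustrative sketch, not anything from Ioannidis's commentary; all the numbers are arbitrary): test a couple hundred random "exposure" variables against a random outcome and count how many come up statistically significant.

```python
# Illustrative only: "data dredging" on pure noise.
# With enough candidate risk factors, some will correlate with the
# outcome at p < 0.05 purely by chance.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects = 500
n_factors = 200          # e.g., 200 dietary/lifestyle variables

outcome = rng.normal(size=n_subjects)                # random "disease risk"
factors = rng.normal(size=(n_factors, n_subjects))   # random "exposures"

false_hits = sum(1 for f in factors if pearsonr(f, outcome)[1] < 0.05)
print(f"{false_hits} of {n_factors} random factors reach p < 0.05")
```

By construction none of these associations are real, yet roughly 5% of them will clear the conventional significance bar, and any one of them could become the next scary headline.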

Regarding industry sponsored research he says:

It is just that they often ask the wrong questions with the wrong short-term surrogate outcomes, the wrong analyses, the wrong criteria for success (e.g., large margins for noninferiority), and the wrong inferences, but who cares about these minor glitches?

This highlights how difficult it can be to properly interpret a study or series of studies. People focus on the p-value – are the results statistically significant? If they are, then whatever the authors are claiming must be true. Reality is far more complex.
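To put a rough number on that, here is a back-of-the-envelope calculation (my own sketch, with illustrative assumptions for power, significance level, and prior plausibility, in the spirit of Ioannidis's earlier work on research findings): even a statistically significant result may have only a modest chance of reflecting a true effect if the hypothesis was unlikely to begin with.

```python
# Back-of-the-envelope: what fraction of "significant" findings are true,
# given the prior probability that the hypothesis is correct?
# The power and alpha values below are illustrative assumptions.
def positive_predictive_value(prior, power=0.8, alpha=0.05):
    true_pos = prior * power           # true hypotheses that test significant
    false_pos = (1 - prior) * alpha    # false hypotheses that test significant
    return true_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.01):
    ppv = positive_predictive_value(prior)
    print(f"prior {prior}: ~{ppv:.0%} of significant results are true")
# prior 0.5: ~94%, prior 0.1: ~64%, prior 0.01: ~14%
```

The less plausible the hypothesis going in, the less a lone significant p-value tells you – which is exactly where prior plausibility, discussed below, comes in.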

Ioannidis also refers, in his commentary and the later interview, to the fact that, as a meta-science researcher and therefore a critic of the practice of science within biomedicine, he has not always been popular. In fact, there are many cranks and science deniers who try to find common cause with him, and who cite him as evidence that science is broken or hopelessly corrupt.

The cranks and science-deniers, however, are pointing at the splinter in the eye of medicine while they have a plank in their own. All of the problems with biomedicine are magnified by orders of magnitude when it comes to alternative medicine, the science-deniers, and the fringe.

Further, Ioannidis is criticizing the science of medicine in order to make it better. He is pointing the way to improving how biomedical science is funded, evaluated, executed, reported, and translated into practice.

Enter Science-Based Medicine

There is a reason I have been following the publications of Ioannidis over the years and writing about them frequently – he is pointing (often in a rigorous empirical way) to the same issues that we address at SBM. As I said, SBM is our best effort at a fix. We agree that EBM has been hijacked. This does not mean it is worthless; it is a valiant effort but, as Ioannidis says, an unfinished project.

SBM addresses all of the issues that Ioannidis points to. The point of SBM is to take a global look at the big picture of any clinical claim (it is scientific skepticism applied to health care). This means we have to consider scientific plausibility, not just clinical trials. We have to look at what basic science and clinical science studies are actually telling us. What variables are being controlled for, what is the real relationship between the outcome measure and the disease of interest, what are all the biases, how certain are we really that this phenomenon is real?

This is why an SBM evaluation of a claim is often more skeptical or negative than an EBM evaluation of the same claim, and why we are openly critical of claims that do not even satisfy the criteria of EBM. We are not naysayers or cynics – we simply recognize that science is complex, flawed, and often wrong or misleading.

The ultimate goal of SBM is to have as accurate an idea as possible of where the threshold of evidence should be before we accept a claim. In medicine this has a very practical and important implication – when do we recommend an intervention?

Right now that threshold is almost certainly too low, for all the reasons that Ioannidis discusses. When we look back at the literature, as Ioannidis has, we see that we have adopted interventions prematurely, sometimes causing more harm than good.

The stakes, if anything, are getting higher. Health care is swallowing up 20% of our economy, and we simply cannot afford it. We need greater efficiency in every aspect of health care and biomedical research. We need to be researching the right questions, in the right way.

Ioannidis also claims that 80% of biomedical research is wasted. I can see how this is true – I see so many studies that fail to address the proper question (does this treatment actually work?). I would argue that 99% of acupuncture studies are worthless and 100% of homeopathy studies are worthless, but many studies of even mainstream questions are also wasted due to poor design or asking the wrong question.

The other challenge of medicine, as an applied science, is that there is often a huge emotional investment (not just a financial investment) in the outcome. Placebo effects are powerful in convincing people – both patients and practitioners – that certain treatments work. When scientific evidence fails to support this, anger and frustration are common responses. There is also the temptation to shoot the messenger. (Just look at the cryotherapy article from earlier this week.)

We need to make the case to the public that the stakes are high and we have to get this right. We need to move massively in the direction of science-based medicine, and weed out all the pseudoscience, bias, and perverse incentives from medicine.

 
