Feb 24 2017

Practicing Evidence-Based Medicine

An excellent article in ProPublica by David Epstein discusses the problem of doctors not adhering to the best evidence-based standards. The full article is worth a read, and I won’t just repeat it here, but I do want to highlight a few points which align well with what I have been writing here and at SBM for years.

The essential problem is that there is a disconnect between the best evidence-based standards and what is actually practiced out in the world. There are actually two problems here. The first is the scientific evidence itself. The second is the alignment of practice to this evidence.

Scientific evidence in medicine has a few challenges. There is publication bias, researcher bias, p-hacking, the decline effect, and problems with replication. What all of this adds up to is that there is a lot of published preliminary evidence, most of which is wrong in the false positive direction. There is a tendency, in my opinion, to adopt treatments prematurely.
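To make that concrete, here is a minimal sketch of why a literature full of preliminary positive findings can be mostly false positives. The numbers are my own illustrative assumptions, not data from any study: if the prior probability that a tested hypothesis is true is low, even a standard 0.05 significance threshold produces more false positives than true positives.

```python
# Illustrative only: how prior probability, statistical power, and the
# significance threshold determine what fraction of "positive" findings
# are actually true. All numbers are assumptions for illustration.
def ppv(prior, power=0.8, alpha=0.05):
    """Positive predictive value of a statistically significant result."""
    true_pos = prior * power           # true hypotheses that test positive
    false_pos = (1 - prior) * alpha    # false hypotheses that test positive
    return true_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.01):
    print(f"prior={prior:<4}: {ppv(prior):.0%} of positive findings are true")
```

With these assumed numbers, a coin-flip prior yields about 94% true positives, but a 1-in-100 prior yields only about 14% – most positive findings would then be false, even before publication bias and p-hacking make things worse.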

One study supporting this conclusion was cited by Epstein – a 2013 study showing that it is not uncommon for research to contradict current treatments:

Dr. Prasad’s major conclusion concerns the 363 articles that test current medical practice — things doctors are doing today. His group determined that 146 (40.2%) found these practices to be ineffective, or medical reversals. Another 138 (38%) reaffirmed the value of current practice, and 79 (21.8%) were inconclusive — unable to render a firm verdict regarding the practice.

This does not mean that 40% of what doctors do is not backed by evidence. Treatments that are questionable are more likely to be studied. Also, a big journal like the NEJM is more likely to publish an interesting result, such as a reversal of current practice, than a boring result confirming that what we thought all along is really true.

But it does mean that there are many current practices which may not hold up to further research. It is possible this means we are adopting treatments too early. This is not necessarily the case, however – what we really need to know is the risk vs benefit of early adoption of treatment. Are more people helped by adopting a treatment that actually works before it is fully proven, or are more people harmed by adopting a treatment that does not work or is harmful before the evidence is more clear? That would be a difficult question to answer definitively. We are left to infer the best answer from existing evidence.
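As a rough illustration of that trade-off, here is a toy expected-value comparison. Every number is an assumption I am making for the sake of the example, not an estimate from the article:

```python
# Toy expected-value comparison for early adoption of a treatment.
# All numbers are illustrative assumptions, not estimates from any study.
p_works = 0.4    # assumed chance the treatment really works (cf. reversal rates)
benefit = 1.0    # assumed average benefit per patient if it works
harm = 0.3       # assumed average harm per patient (side effects, cost) if not

ev_adopt_early = p_works * benefit - (1 - p_works) * harm
ev_wait = 0.0    # waiting for definitive evidence: no benefit, no harm yet

print(f"expected value of early adoption: {ev_adopt_early:+.2f}")
print(f"expected value of waiting:        {ev_wait:+.2f}")
# With these assumptions early adoption nets +0.22 per patient, but the
# answer flips if the prior probability or the harm term is worse.
```

The point is not the specific numbers but the structure of the question: the answer depends entirely on the prior probability that the treatment works and on the relative sizes of benefit and harm, which is exactly the information we usually lack at the preliminary-evidence stage.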

Essentially, I recommend three things to fix this first part of the problem – the research itself. The first is rather standard – greater dedication to evidence-based medicine. This means basing practice on actual evidence, not experience alone and not just on what makes sense.

Second, we need to significantly alter the medical research infrastructure to put more resources into higher quality studies and replications, and publish fewer preliminary studies. Researchers need to stop cranking out lower quality papers, and put more work into each publication (for example, do a couple of replications before you publish your data). We probably need more education for researchers to increase the typical rigor in research and avoid statistical errors. Institutions need to reward quality over quantity, and boring replications over surprising high-impact results. And journals need to change their publication policies to optimize for a positive impact on clinical practice, not for maximizing their impact factor.

The third fix is what I would call a shift to science-based medicine. SBM goes a bit beyond EBM in a couple of ways. The first is to directly consider plausibility, or prior probability. EBM essentially eliminated such considerations, looking only at clinical evidence. We think you should look at all scientific evidence, including non-biomedical science, to illuminate the plausibility of an intervention. This will help put the clinical evidence into perspective.

Further, SBM explicitly considers all the factors I listed above, such as p-hacking and publication bias. Finally, we also recommend moving beyond over-reliance on p-values and frequentist analysis. This means additionally using Bayesian analysis, and explicitly considering effect sizes, among other things.
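Here is a minimal sketch of the kind of Bayesian reasoning SBM is asking for, with an assumed, illustrative Bayes factor standing in for the strength of a single positive trial. The same clinical evidence moves a highly implausible treatment far less than a plausible one:

```python
# Bayesian updating: posterior odds = prior odds * Bayes factor.
# The Bayes factor (strength of the trial evidence) is an assumed value.
def posterior(prior_prob, bayes_factor):
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

bf = 3.0  # assumed: a modest positive trial, roughly the order a p ~ 0.05
          # result often supports
for prior in (0.5, 0.01):  # a plausible treatment vs. a highly implausible one
    print(f"prior {prior:.2f} -> posterior {posterior(prior, bf):.2f}")
```

With these assumptions, the plausible treatment goes from 50% to 75%, while the implausible one only goes from 1% to about 3% – which is why prior probability cannot simply be ignored when interpreting a "significant" trial.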

In a way, SBM takes a more holistic (yeah, I know) and critical approach to the scientific evidence.

The second problem discussed in the article is that, even when we do have good quality evidence, many practitioners do not follow best practice. This is also a complex problem, without a single cause.

There are many potential biases at work here. New treatments seem sexy and interesting, and carry with them the hope that they will be better. A lot of medicine is high tech, and this creates a technology bias. Sometimes a lower tech intervention may be better.

Doctors would rather do something than not do something. When you are confronted with a patient who has a problem (a bothersome symptom, or worry about a bad outcome), they want you to do something. Sometimes the best thing to do is either nothing, or something very minimalistic. Patients often feel uncomfortable with this, and in turn make their doctors feel uncomfortable.

There may (depending on the practice context) be a financial incentive to do something, or to choose the more high tech option, rather than to do nothing or choose a low tech option.

Finally, the evidence moves quickly, and it is difficult to keep up with it. There are mechanisms in place for doing so, but we could argue that these mechanisms are not robust enough. Perhaps there needs to be more oversight and feedback, not to punish or stigmatize practitioners, but to keep them in a feedback loop that moves them continuously toward the standard of care.

Part of the problem here also relates to the first problem – the quality of the evidence. It is easy to be overwhelmed with tons of low to moderate grade evidence. When the evidence itself is uncertain, it is easier to substitute your own clinical judgement, which is subject to multiple biases. If there were less but higher quality evidence, it would be easier to keep up and feel confident in the current standard.

It is easy to become a bit disillusioned when all the problems with scientific medicine are summarized at once. To put this in perspective, however, most of what doctors do is solidly evidence-based, and most of the rest is at least reasonably evidence-based. The discussion is nevertheless extremely useful for raising those standards even further. There is a lot of room for improvement, and we actually know (at least in broad brushstrokes) what we need to do. We just need to do it.