Oct 06 2017

Unnecessary Medical Interventions

A recent JAMA article is an update on a systematic review of overused interventions in medicine. The authors list the top ten overused tests and treatments, meant to highlight the problem of overuse. They conclude:

The body of empirical work continues to expand related to medical services that are provided for inappropriate or uncertain indications. Engaging patients in conversations aimed at shared decision making and giving practitioners feedback about their performance relative to peers appear to be useful in reducing overuse.

You can read a summary of the ten overused interventions here. The one you are likely already familiar with is antibiotic overuse. The others are very specific tests or interventions in specific situations, like Computed Tomography Pulmonary Angiography to help diagnose acute pulmonary embolism.

Reviewing each of these interventions in the top ten list would require a deep dive into the literature and detailed discussion, which is not my intent here. If you want that level of detail, read the original article. What I want to discuss is, in general terms, why this is a problem in the first place.

There are two broad reasons. The first is a good one – because medicine endeavors to be science-based. We actually care about optimizing outcomes, which is why researchers carefully review the evidence to make detailed recommendations about the best clinical management in specific situations. This is all part of the self-corrective process of science. The authors are even careful to point out that the purpose of such reviews is not to criticize or shame anyone, but to provide critical evidence-based feedback to improve practice. Compare that attitude to anything that exists in the alternate reality of CAM.

The second reason is that medicine and clinical decision-making are complex, and they often go against our basic psychology. This is why, in my opinion, good clinicians need to be critical thinkers; otherwise they (and by extension their patients) will fall victim to cognitive biases and pitfalls.

For example, one of the top ten overused interventions is nutritional support for critically ill patients. I know, right – that sounds crazy. Are the authors seriously proposing that we don't feed critically ill patients? Well yes, they are – at least not routinely. This shows how counter-intuitive the evidence can be.

What this means in practice is passing a tube through the nose into a patient's stomach in order to give them nutrition. After a certain amount of time the tube can erode the nasal cavity, and it becomes necessary to do a procedure to place a feeding tube surgically, directly through the abdominal wall into the stomach or the first part of the small intestine. It makes sense that people need to be fed, and that nutrition is critical for healing whatever injury or problem has them in intensive care in the first place.

But making sense is not sufficient. In medicine we need to consider risk vs. benefit, hard outcomes, and predictive value. The question is – do ICU patients generally do better if we give them tube feedings? The authors argue that they don't. They do not have greater survival or shorter hospital stays. You might argue that there are other, more subtle outcomes that are important, like nitrogen balance or muscle mass, and those are reasonable points. What about time spent in rehab rebuilding lost muscle? These questions require a detailed analysis and perhaps even further research.

But this intervention shows how complex these questions can get. We can't just do what feels right or what makes sense. We need solid evidence to make sure the risk we are exposing patients to is worth the benefit. Also, keep in mind the authors are talking about routine tube feeding, meaning giving it to every patient automatically. Perhaps what we need is targeted tube feeding, based on blood work, for example, or on underlying condition and length of stay. Maybe most people can coast for a few days without tube feeding. And remember that patients will routinely get calories in the form of glucose in their IV fluid.

Diagnostic testing can be an even more complex issue. Good clinical decision-making requires us to relearn how to think about diagnostic questions. People tend to fall back on heuristics (mental shortcuts) that are just not adequate to the complexities of medical decision-making.

For example, we tend to fall for the representativeness heuristic – we think it is likely that a person belongs to a category if they seem typical of that category. In medicine this means we think it is likely that a person has a diagnosis if they have symptoms of that diagnosis. But wait – doesn't that make sense? Only to a point. The fallacy is in failing to also consider the prevalence of that diagnosis. So, even if someone has typical features of a very rare disease, it is still not likely that they have that rare disease – because it is rare. It may be far more likely that they are an atypical presentation of a very common disease.
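To make that concrete, here is a quick back-of-the-envelope Bayesian calculation in Python. The prevalences and symptom frequencies below are invented purely for illustration, not taken from the article:

# Compare two explanations for a patient with a "typical" symptom of a rare disease.
# All numbers are hypothetical, chosen only to illustrate the base-rate point.
p_rare = 1 / 10_000        # prevalence of the rare disease
p_sx_given_rare = 0.90     # symptom is "typical": present in 90% of cases

p_common = 1 / 50          # prevalence of a common disease
p_sx_given_common = 0.05   # symptom is "atypical": present in only 5% of cases

# Unnormalized posterior weight for each explanation = prior * likelihood
w_rare = p_rare * p_sx_given_rare        # 0.00009
w_common = p_common * p_sx_given_common  # 0.00100

print(f"rare disease weight:   {w_rare:.5f}")
print(f"common disease weight: {w_common:.5f}")
print(f"the atypical common disease is about {w_common / w_rare:.0f}x more likely")

Even though the symptom is eighteen times more "typical" of the rare disease in this made-up example, the common disease is still roughly eleven times the better bet, simply because of prevalence.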

You can apply this also to specific symptoms. We naively think about how typical a symptom is of a specific disease, but we really should think about how predictive it is. Fever may be a typical symptom of Zika virus infection, but by itself it is not very predictive.

When ordering tests we need to think about the sensitivity and specificity of the test, and how predictive positive and negative results would be. Further, we need to consider what we will actually do with those test results and whether they will make a difference to the patient. A good rule of thumb in medicine is not to order a test unless you know exactly what you will do with the results. Don't do it "just to see" or "just to be sure."
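Here is a minimal sketch of the arithmetic, with hypothetical numbers: the same test can be very informative in a patient with a high pre-test probability, and nearly useless as a screen in a low-prevalence population, even though its sensitivity and specificity never change.

# Hypothetical test: 95% sensitive, 90% specific. Numbers are for illustration only.
def predictive_values(sensitivity, specificity, prevalence):
    """Return (positive predictive value, negative predictive value)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

for prevalence in (0.30, 0.01):   # high vs. low pre-test probability
    ppv, npv = predictive_values(0.95, 0.90, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.0%}, NPV {npv:.1%}")

# prevalence 30%: PPV 80%, NPV 97.7%
# prevalence 1%:  PPV 9%,  NPV 99.9%

In the low-prevalence case most positive results are false positives – which is exactly when a test ordered "just to be sure" does more harm than good.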

Perhaps the hardest thing for a clinician to do is to put aside their gut feelings, their personal experience, and what they feel is right, and practice according to cold, hard facts and logic. It seems incongruous with what a physician should be, but carefully following the evidence leads to the best outcomes. To be clear, this does not mean clinicians don't consider personal preference and patient feelings. It doesn't mean we actually are cold – just that we follow a careful evidence-based process, rather than shooting from the hip.

Even for clinicians who are aware of all these factors and are good critical and clinical thinkers, there is still the challenge of having access to the necessary specific information. Each tiny component of managing a patient could be informed by a complex and deep medical literature, one that is changing all the time as new studies are published. All doctors are out of date on some of their knowledge and have gaps in other areas. No one has complete, up-to-date knowledge; it's not possible.

Therefore we need to carefully think about the systems we put in place to maximize the chance that practitioners have the information they need at the "point of patient care" – the moment they are making a specific clinical decision. This is traditionally accomplished through training and continuing education, which are necessary but insufficient.

There are also practice guidelines, where experts review the evidence and concisely summarize it for the busy clinician.

We are just emerging into another era of medicine, where we harness the potential of computers, AI, and expert systems to provide that critical information when it is needed. This is an underutilized tool at this time, in my opinion, but I think it will eventually transform the practice of medicine.

Meanwhile, we need to continue to gather the kind of evidence summarized in this recent JAMA article, and make it available to practitioners. We also need to continue to push back against any attempt to water down the evidence-based nature of modern medicine.
