Jan 13 2017

Cognitive Biases in Health Care Decision Making

This was an unexpectedly pleasant find in an unusual place. The Gerontological Society of America recently put out a free publication designed to educate patients about cognitive biases and heuristics and how they can adversely affect decision-making about health care.

The publication is aimed at older health care consumers, but the information it contains is applicable to all people and situations. It is a well-written, excellent summary of common cognitive biases, with a thorough list of references. There are plenty of other resources that also review this material, including my own Teaching Company course, but this is a good, user-friendly reference.

What is most encouraging about this publication is the simple fact that it recognizes that this is an issue. It is taking knowledge of psychology and applying it to the real world, recognizing the specific need for critical thinking skills in the public. This could easily have been produced in many different contexts – not only in any medical specialty, but also in investing your money, buying a home, choosing a college, or evaluating news reports.

The report is aimed simultaneously at health care providers and patients. It is primarily a guide for providers on communicating with older adults while accounting for cognitive biases in decision-making, but at the same time it will help consumers communicate with their providers and make better decisions.

The publication summarizes current knowledge about heuristics, which are mental shortcuts we often use to make difficult problems simpler. Psychologists speak of system 1 and system 2 thinking. System 1 is heuristic thinking – quick, simplified assessments. System 2 thinking is more analytical, but it too is prone to cognitive biases. So even when you take the time to analyze your immediate system 1 response, you may still fall prey to biases in how you assess information. You need to understand both types of thinking in order to navigate to the most logical and reliable conclusion.

Perhaps the best example of this given in the publication, applicable to how providers communicate to patients, is the framing effect. Information can be framed from a positive or negative perspective, and this dramatically affects the decisions that we make, when logically it should not.

For example, a physician can state the probable treatment outcome in terms of death or survival – 20% of people who opt for this treatment survive, or 80% of people who opt for this treatment die. Logically it is easy to see that these two framings are exactly equivalent. If our decision-making were purely logical, the framing should not affect our decisions, but it does.

The difference is in how we assess risk. Humans tend to be risk averse when it comes to positive outcomes, and risk seeking when it comes to negative outcomes. We don’t like to miss out on a positive outcome, and so we will take the sure thing. However, we are willing to take a bigger risk in order to avoid a negative outcome. Here is the classic example given in the publication:

In the first part of the study (the positive frame condition),
participants selected from the following “lives saved” options:
• If Program A is adopted, 200 people will be saved.
• If Program B is adopted, there is a one-third probability
that all 600 people will be saved and a two-thirds
probability that no people will be saved.
In the second part of the study (the negative frame condition),
participants selected from the following “lives lost” options:
• If Program A is adopted, 400 people will die.
• If Program B is adopted, there is a one-third probability
that no one will die and a two-thirds probability that all 600
people will die.
The outcomes are identical in the positive frame and negative
frame conditions. Yet in the “lives saved” scenario, most participants
chose Program A: they preferred the sure thing over the gamble.
In the “lives lost” scenario, most participants chose Program B: they
accepted the gamble.
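To make the equivalence explicit, here is a minimal sketch – just illustrative arithmetic using the numbers from the quoted scenario, not anything from the publication itself – showing that the expected outcomes are identical under both programs and both framings:

```python
# Classic framing scenario: 600 people at risk.
# Program A: 200 people saved for certain.
# Program B: 1/3 chance all 600 are saved, 2/3 chance none are saved.
total = 600

program_a_saved = 200
program_b_saved = (1 / 3) * 600 + (2 / 3) * 0   # expected value: 200 saved

# The "lives lost" framing is just the complement of the same numbers.
program_a_lost = total - program_a_saved        # 400 die for certain
program_b_lost = (1 / 3) * 0 + (2 / 3) * 600    # expected value: 400 die

print(program_a_saved, program_b_saved)  # 200 200.0
print(program_a_lost, program_b_lost)    # 400 400.0
```

The expected result is the same either way; only the wording of the frame changes, yet that wording reliably shifts which program people choose.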

The specific recommendation for providers, those communicating risk and possible outcomes to patients, is to give multiple framings. You can state the same data, for example, in terms of absolute risk reduction, relative risk reduction, and the number needed to treat. For example, a treatment may reduce the risk of a negative outcome from 2% to 1%. That is a relative risk reduction of 50%, which is most often how such outcomes are reported in the media. However, it is an absolute risk reduction of only 1%. You can also state this as: for every one hundred people treated, one person will be prevented from having the negative outcome.
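Here is a small sketch of that arithmetic, using the hypothetical 2%-to-1% example from the paragraph above (the variable names are mine, for illustration only):

```python
# Three framings of the same hypothetical treatment effect:
# baseline (untreated) risk of the bad outcome is 2%, treated risk is 1%.
baseline_risk = 0.02
treated_risk = 0.01

absolute_risk_reduction = baseline_risk - treated_risk              # 0.01 -> 1 percentage point
relative_risk_reduction = absolute_risk_reduction / baseline_risk   # 0.5  -> 50%
number_needed_to_treat = 1 / absolute_risk_reduction                # 100 patients per outcome prevented

print(f"ARR: {absolute_risk_reduction:.1%}")   # ARR: 1.0%
print(f"RRR: {relative_risk_reduction:.0%}")   # RRR: 50%
print(f"NNT: {number_needed_to_treat:.0f}")    # NNT: 100
```

All three numbers describe exactly the same effect; the 50% figure simply sounds far more impressive than the 1% figure, which is why reporting only one framing can mislead.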

When you are on the receiving end of such information it is helpful to reframe it for yourself. You can easily transpose survival rates into mortality rates. Whenever relative risk reduction numbers are given, you should find out what the absolute risk reduction is, and the number needed to treat.

It is also helpful to be aware of our inherent tendencies in assessing risk. In this way you have the best chance of making a logical decision based on all the facts, rather than an emotional decision based upon inherent risk aversion and biased by a limited framing.

This has implications beyond medical decision-making. For example:

62% of people disagreed with allowing "public condemnation of democracy," but only 46% of people agreed that it was right to "forbid public condemnation of democracy."

People are also more likely to prefer meat that is 75% lean vs 25% fat, or prefer a 95% success rate to a 5% failure rate.

Conclusion

There is more in the publication than framing bias. It’s a free download, so take a look.

What is most intriguing about this publication for me is the recognition that cognitive biases are an important issue about which to educate providers and patients. I would love to see more of this, in all contexts.

There has been a lot of discussion and hand-wringing recently about our "post-truth" age of "fake news." I have written about this a lot recently as well, and concluded that there is no simple fix to this problem. The only real solution is to have a population with generally better critical thinking skills. Critical thinking is a skill; it can be taught.

Since the beginning of our experiment in democracy it has been recognized that a functional democracy depends upon an educated public. That is more true today than ever. In addition, that education needs to include critical thinking skills, which includes knowledge of heuristics and cognitive biases. These should be on the back of every cereal box. There should be collectible bubble-gum cards detailing different biases. There should be game shows awarding fabulous prizes to those who can identify different logical fallacies.

More seriously, what I mean is that our society needs to recognize and value critical thinking skills more highly, and bake them into everyone's basic education. I cannot think of anything taught in a K-12 education that is more important than teaching students how to think critically, how to evaluate claims and information, and how to engage in rational decision-making.

