Jul 03 2018

Defining Down Problems

This is an interesting cognitive bias recently documented by psychology researchers – we tend to lower the bar for what constitutes a “problem” as the frequency of that problem decreases. The authors call this “perception and judgement creep.”

Let’s say you are a teacher tasked with documenting instances of “bad behavior” among your students. What constitutes “bad behavior” requires judgement, and occurs on a continuum. Does whispering to your friend when everyone is supposed to be quiet count? What the researchers found is that the frequency of behavior which can be considered “bad” determines where you set the cutoff. If the frequency is high, then you will likely count only really bad behavior. As the frequency drops, you will count milder and milder behavior as “bad,” which will create the illusion that the problem of bad behavior is not getting better, when it objectively is.

The researchers did a series of experiments with very different targets. In one experiment they had subjects count blue dots. They were shown dots in a variety of colors, some clearly blue, some clearly not blue, and others on the borderline of being blue, such as purple or violet. What they found was that as the frequency of clearly blue dots decreased, the subjects started to expand the range of what they considered “blue,” including more purple dots.
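This threshold-shifting dynamic can be illustrated with a toy simulation. To be clear, this is a hypothetical sketch, not the researchers’ model; the function name, parameters, and hue numbers are invented for illustration. The simulated observer judges each hue against a criterion that loosens after every “not blue” call and tightens after every “blue” call, effectively chasing a stable hit rate:

```python
import random

def classify_with_creep(hues, threshold=0.5, step=0.01):
    """Label each hue (0.0 = clearly purple, 1.0 = clearly blue) as blue
    when it exceeds a shifting criterion. The criterion drifts to keep
    roughly half of all judgements positive: it rises after a "blue"
    call and falls after a "not blue" call -- a toy model of
    judgement creep."""
    labels = []
    for hue in hues:
        blue = hue > threshold
        labels.append(blue)
        # Raise the bar after a hit, lower it after a miss.
        threshold += step if blue else -step
        threshold = min(1.0, max(0.0, threshold))
    return labels, threshold

random.seed(0)
early = [random.uniform(0.5, 1.0) for _ in range(500)]  # blue dots common
late = [random.uniform(0.0, 0.5) for _ in range(500)]   # blue dots become rare
labels, final_threshold = classify_with_creep(early + late)

# Once genuinely blue hues become rare, the criterion drifts well below
# its starting point, so borderline purple hues now get labeled "blue".
print(final_threshold)
print(sum(labels[500:]) / 500)
```

Even though no hue in the second half is objectively blue, the observer keeps producing “blue” judgements at a substantial rate, because the criterion itself has crept.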

In another experiment they had subjects look at pictures of faces and count how many were showing angry emotions. As the frequency of angry faces decreased, subjects started to count more and more neutral faces as angry. In a third experiment the researchers asked subjects to review research requests and look for unethical behavior. Yet again, when the frequency of clearly unethical requests decreased, the subjects started counting more and more innocent requests as unethical.

So what’s going on here? Like any such psychological research, it deals with complex human behavior and questions about cause and effect are very difficult to address. Much more research is necessary to flesh out the conditions of this phenomenon and to start to tease apart its primary causes. But let’s speculate about plausible causes.

This phenomenon might be due largely to perception. As I have discussed many times before, perception itself is highly subjective and constructed. We don’t “see” the world so much as construct an internal model of it from highly filtered, selected, and sparse sensory cues. Further, expectation drives perception to a significant degree – our brains fill in the gaps and construct our perception to meet expectation.

Further, there are different modes of perception depending on how we are focusing our attention. This is a complex topic in itself, and very relevant to magic and illusion. Magicians have learned through trial and error how people tend to focus their attention, and therefore how to misdirect and confuse their perception. When you are actively looking for something it’s as if you prepare a mental template in your mind, and then search for matches to that template. In this mode you can easily miss even blatant stimuli that don’t fit the template – this is called inattentional blindness. (Watch this video for the classic demonstration of this phenomenon.)

So when asked to look for blue dots, you imagine blue dots and look for matches. But if the frequency is low we react to the lack of positive feedback by expanding the template to include dots that are less and less blue, so that we get sufficient positive feedback. What is sufficient? That is also probably context dependent. But it is possible that we simply get anxious, frustrated, or even bored when we don’t find what we are looking for, so we look “harder.” Looking “harder” can mean finding more and more subtle examples.

It is not difficult to imagine an adaptive advantage to this behavior. Let’s say we are foraging for food. We find a tree with edible fruit on it. At first we are likely to pick the “low-hanging fruit” (literally, in this case). But as the obvious and readily accessible fruit diminishes we need to look for more hidden, less obvious, and less accessible fruit. In this context the goal is to keep the supply of food stable, and in order to do that we need to constantly adjust our criteria based upon prevalence. What we count as edible food may also need to be adjusted based upon availability. That half-rotten fruit on the ground may appear inedible at first, but when the rest of the fruit is gone it suddenly becomes edible.

This behavior may also be adaptive in the negative sense – not when looking for something you need, but when looking for problems to be fixed. In this case you may have a set amount of resources to deal with the problem. You address the big problems first, but then look for smaller and smaller problems to fix. This is a rational allocation of resources.

So it seems that this perception adjustment is a basic cognitive strategy common in human brains, likely because it has some adaptive benefit. I have to point out, though, that a behavior does not have to be adaptive. It can be an epiphenomenon, or a side consequence of other behavior that has an unconnected role. But in this case I suspect there is an adaptive advantage, which sounds like a good area for future research.

In any case, this behavior appears to be a cognitive bias. The problem is (as is often the case with such cognitive biases) it may work in some contexts, but not others. It works when there is a need for a stable source of whatever you are looking for, or when allocating fixed and limited resources to a wide range of problems.

This cognitive strategy does not work, however, in contexts where it makes more sense to fix the criteria rather than the frequency. In the press release, for example, they discuss a radiologist looking for tumors on an X-ray. What constitutes radiographic evidence for a tumor should have fixed criteria, and not be flexible based upon the frequency of tumors. It’s OK if an X-ray has zero tumors. A radiologist should not call more and more subtle findings tumors simply to meet a quota of found tumors.

The same is true of the hypothetical institutional review board used by the researchers, where subjects looked for unethical behavior in research proposals. Such things should have strict operational definitions, and not be flexible in order to find some problem. Similarly, police should not use flexible definitions of breaking the law.

But then there are many contexts that are in the gray zone. Poverty, for example, is an interesting question. How do we define poverty? Should that definition be fixed or relative? The UN prefers a relative definition of poverty, where living conditions and income are compared to the average for the society in which someone lives. This leads to curious outcomes, such as the US having a higher poverty rate than far poorer countries. One might argue this ceases to be a measure of actual poverty and is more a reflection of income disparity.

Then we get to the big obvious problems, such as sexism and racism. In the last 50 years it is clear from any measure that the magnitude of both sexism and racism has decreased significantly in the US and other developed nations. But how should we view persistent sexism and racism today? Is it appropriate to turn our attention to milder and milder examples of bigotry, or to look at how far we have come and declare “problem solved?” I am presenting a false dichotomy here, of course, and it’s possible the optimal approach is somewhere in the middle. We should continue to reduce sexism and racism, with the ultimate goal of a gender- and color-blind society where every individual has equal rights and dignity. We may never get to that goal, but having a goal of zero is reasonable, and necessarily means that we will be addressing smaller and smaller problems as we make progress.

At the same time, it is helpful to recognize that we have made progress. It is also helpful to recognize that there are societies in the world where the problem is far worse. But this recognition should not turn into the fallacy of relative privation – where a problem is deemed not a problem, or not worth taking action, because there are bigger problems elsewhere.

There is a good analogy to be made with medicine. We may have a goal of eliminating cancer. That’s a good goal, worth pursuing. So as we make progress in treating cancer, we will get diminishing returns as the problem itself gets less severe. As incidence goes down and survival increases, we will have to pursue more and more subtle cases to keep pushing toward our goal of zero cancers.

The bottom line, again as with most cognitive biases, is that it is important simply to be aware that the bias exists. This way we can take a thoughtful approach to a question, rather than just take the default approach or the cognitive path of least resistance. In this case it is important to think about context when you are looking for stuff, or evaluating a situation. What are your goals? What is subjective and what is objective?

In many contexts you may want to establish fixed and objective criteria, to ward against this bias. What counts as unethical behavior, illegal behavior, or being naughty should be clear and unambiguous, and not creep over time.

In contexts where it is appropriate to use flexible criteria, like minimizing a problem, do not let the shifting criteria obscure past progress. It’s OK to tackle smaller and smaller problems, but don’t let that make you think that no progress is being made. It may make more sense to use fixed criteria to measure the problem, so you can track progress, and then just shift your resources. If you use relative criteria to measure the problem (like the UN’s definition of poverty) then almost by definition you will not be able to track progress.
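The difference between fixed and relative criteria can be made concrete with a small numeric sketch. The income figures here are invented for illustration, and “below half the median” is a simplified stand-in for a relative poverty definition, not the UN’s actual methodology:

```python
def fixed_poverty_rate(incomes, line=15000):
    """Poverty measured against a fixed income line."""
    return sum(i < line for i in incomes) / len(incomes)

def relative_poverty_rate(incomes):
    """Poverty measured as income below half the median income --
    a simplified stand-in for a relative definition."""
    ordered = sorted(incomes)
    median = ordered[len(ordered) // 2]
    return sum(i < 0.5 * median for i in incomes) / len(incomes)

year1 = [8000, 12000, 20000, 40000, 60000]
year2 = [i * 2 for i in year1]  # every income doubles

# Fixed criterion: poverty falls from 0.4 to 0.0 -- progress is visible.
print(fixed_poverty_rate(year1), fixed_poverty_rate(year2))
# Relative criterion: poverty stays at 0.2 -- progress is invisible.
print(relative_poverty_rate(year1), relative_poverty_rate(year2))
```

Because the relative measure rescales with the whole distribution, it is unchanged when every income doubles; it tracks the shape of the income distribution, not absolute living conditions, which is exactly why it cannot register this kind of progress.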
