Jul 03 2015
A Quick Logic Lesson
Try your hand at this quick puzzle, then come back and read the rest of this post.
How did you do? This is a great little test with a very important lesson.
The discussion that follows the puzzle is a fairly good explanation of confirmation bias, which is a partial explanation for why people might fail to solve the puzzle. It is only a partial explanation, however, and the article therefore misses an opportunity to teach a critical lesson in scientific reasoning.
Confirmation bias is the tendency to seek out, perceive, accept, and remember information that confirms beliefs we already hold, coupled with the tendency to miss, ignore, forget, or explain away information that contradicts our beliefs.
How many times have you said yourself, or heard someone else say, “well, that’s an exception”? Is it, or is it just data? By calling an example an “exception” you are assuming that there is a rule it violates. This is a way of dismissing information that contradicts your beliefs.
The puzzle article explains that people seek out information that confirms their hypothesis, rather than seeking out information that contradicts it (confirmation bias). They come up with a hypothesis about the rule governing the number sequence, enter a sequence that should yield a positive answer if their hypothesis is correct, and, when it does, believe their hypothesis to be confirmed.
In testing a hypothesis there are actually three things a good scientist should do, and the article discusses only one of them – testing your hypothesis against information that should yield a negative result.
Another critical step, which the article ignores, is the need to test alternate hypotheses – try to come up with another hypothesis that is also consistent with the existing data and then test that. Specifically, you should have entered a number sequence that would fulfill the alternate hypothesis but not your original hypothesis, as sketched below.
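To make the logic concrete, here is a minimal sketch in Python. The hidden rule and the candidate hypothesis below are illustrative assumptions chosen to mirror the structure of this kind of puzzle, not a restatement of the article's actual rule:

```python
# Illustrative sketch of the puzzle's logic. The hidden rule and the
# hypothesis below are assumptions for demonstration only.

def hidden_rule(seq):
    """The (unknown) rule: any strictly increasing sequence."""
    return all(a < b for a, b in zip(seq, seq[1:]))

def my_hypothesis(seq):
    """A tempting but too-narrow guess: each number doubles the previous."""
    return all(b == 2 * a for a, b in zip(seq, seq[1:]))

# Confirmation-style tests: sequences chosen to fit my hypothesis.
# Every one comes back "yes", so the hypothesis feels confirmed.
for seq in [(2, 4, 8), (3, 6, 12), (5, 10, 20)]:
    print(seq, hidden_rule(seq))      # True, True, True

# A disconfirming test: a sequence my hypothesis says should fail.
# If the answer is still "yes", my hypothesis must be wrong.
print((1, 2, 3), hidden_rule((1, 2, 3)))  # True -- hypothesis falsified
```

The confirming tests feel like progress but cannot distinguish the narrow hypothesis from the broader rule; only the sequence that was supposed to fail does that.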
Failure to consider or test alternate hypotheses is called the congruence bias; it comes from over-relying on directly testing a single hypothesis. This is less well known than confirmation bias, but in many situations it is just as important to understand.
The third step, which is not really relevant to this particular test, is to consider the effects of a negative result from any of your tests. In this case, since you are trying to figure out a mathematical rule, results are definitive – if a result breaks the rule, the rule is wrong, period. When testing scientific hypotheses, however, results are not always definitive; they simply increase or decrease the probability that the hypothesis is correct.
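The difference between a definitive result and a probabilistic one can be expressed with Bayes' theorem. Here is a minimal sketch; all of the numbers are invented purely for illustration:

```python
# Bayesian updating: a test result shifts the probability of a hypothesis
# rather than settling it. All numbers here are invented for illustration.

def update(prior, p_result_if_true, p_result_if_false):
    """Posterior probability of the hypothesis after seeing the result."""
    numerator = p_result_if_true * prior
    return numerator / (numerator + p_result_if_false * (1 - prior))

prior = 0.30  # initial credence in the hypothesis

# A result that is more likely if the hypothesis is true raises our
# credence, but does not confirm the hypothesis outright.
posterior = update(prior, p_result_if_true=0.9, p_result_if_false=0.2)
print(round(posterior, 3))  # 0.659 -- more likely, not certain
```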
To give a real-world example of this type of reasoning, let’s consider medical diagnosis. One of the reasons this puzzle was trivial for me is that I am familiar with confirmation bias and congruence bias, and with the need to look for negative outcomes and to test alternate hypotheses. Hypothesis testing like this is a daily part of the practice of medicine.
When confronted with a patient with a set of signs and symptoms, physicians should create a differential diagnosis – a list of possible diagnoses from most likely to least likely. It would be a supreme mistake to consider only your first guess or only the most likely diagnosis.
Physicians then need to order tests; each test (physical exam finding, blood test, X-ray, biopsy, whatever) is a test of their diagnostic hypothesis. The pitfall physicians need to learn to avoid is testing only their pet diagnostic hypothesis and interpreting a positive outcome as absolute confirmation of their diagnosis.
They should also order a workup to test other possible diagnoses, and they need to consider the real predictive value of a positive or negative outcome of each test for each diagnosis they are considering.
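The predictive value of a test depends not only on how accurate the test is, but on how likely the diagnosis was before the test. A short sketch of positive predictive value, again with invented numbers:

```python
# Positive predictive value (PPV): the probability a patient actually has
# the disease given a positive test. Sensitivity, specificity, and the
# pre-test probabilities below are invented for illustration.

def ppv(sensitivity, specificity, pretest_prob):
    true_pos = sensitivity * pretest_prob
    false_pos = (1 - specificity) * (1 - pretest_prob)
    return true_pos / (true_pos + false_pos)

# The same test, applied to a likely diagnosis vs. a long-shot one:
print(round(ppv(0.95, 0.90, pretest_prob=0.50), 3))  # 0.905
print(round(ppv(0.95, 0.90, pretest_prob=0.02), 3))  # 0.162
```

Even a fairly accurate test only weakly confirms a long-shot diagnosis, which is why a positive result should never be read as absolute confirmation.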
This logic does not apply only to professional fields like medicine (although it is critical to any investigational profession); we can use it in everyday life as well. Consider political opinions, for example. We tend to seek out examples that confirm our political beliefs, and we fail to consider what it would mean if those examples turned out negative, how those examples bear on alternative views, and the examples that contradict our views.
The combination of confirmation bias and congruence bias can create a powerful sense that the world confirms our ideology, when in fact that ideology may be partly, mostly, or even completely wrong.
Conclusion
I love that a somewhat viral article in the New York Times is teaching a core lesson of critical thinking – confirmation bias. A more nuanced discussion, however, would have included the congruence bias as well, which in my opinion is even more pertinent to why people might fail that puzzle.
The real challenge, however, is to get people to internalize these logic lessons and consistently apply them to themselves in everyday life.