May 10, 2018

False Dichotomy and Science Denial


Psychologist Jeremy Shapiro has an interesting article on RawStory in which he argues that one of the pillars of science denial is the false dichotomy. I agree, and this point is worth exploring further. He also points out that the same fallacy in thinking is common in several mental disorders he treats.

The latter point may be true, but I don’t see how it adds much to our understanding of science denial, and it may be perceived as inflammatory. For example, he says that clients with borderline personality disorder often split the people in their world into all bad or all good. If you do one thing wrong, then you are a bad person. Likewise, for perfectionists, any outcome or performance that is less than perfect gets lumped into one category: unsatisfactory.

I do think these can be useful examples of how dichotomous thinking can lead to, or at least support, a mental disorder. Part of the goal of treatment for people with these disorders is cognitive therapy, to help them break out of their pattern of approaching the world as a simple dichotomy. But we have to be careful not to imply that science denial itself is a mental illness or disorder.

Denialism and False Dichotomy

A false dichotomy is a common logical fallacy in which many possibilities, or a continuum of possibilities, are rhetorically collapsed into only two choices. People are either tall or short; there is no other option. There are just Democrats and Republicans.

While some physical properties may in fact be truly dichotomous (electric charge is either positive or negative), people and the world itself usually display much more complex features. Most traits exist along a continuum. Yet our minds like simplicity, and we like to categorize and pigeon-hole things in order to mentally grapple with them. Using schemas and categories is fine, but we have to recognize that they are not reality, which is often messier.

These principles are especially true when dealing with very complex systems, like people. People are rarely, if ever, all good or all bad, for example. People are generally a complex combination of traits that range from vice to virtue, are often context dependent, and exist along a continuum.

Likewise, scientific understanding cannot be captured by any simple dichotomy. I have written previously about the demarcation problem between science and pseudoscience, for example. We cannot divide all claims to science into two clean categories – with pristine science at one end and pure pseudoscience at the other. There is a continuum with no clear dividing line between the two.

However, we can identify methods and features that are scientifically valid and others that are flawed. The more valid features any scientific endeavor has, the more legitimate a science it is, while the more dubious features it has, the more pseudoscientific it is. So while there is no sharp demarcation line, there are two recognizable ends of the spectrum. Denying this reality is also a logical fallacy – the false continuum.

Scientific knowledge also falls along a continuum. No fact is established to 100% metaphysical certitude, nor can we assign a 0% probability to any claim. This is because human knowledge is limited and depends on our perspective, our frame of reference, and perhaps on unrecognized assumptions.

Still, this does not mean that we cannot be 99.99% certain that some basic fact about the universe is true. The world is roughly a sphere. We can be certain of that (despite the delusions of flat-earthers) to such a high degree that we can treat it as 100%. Similarly, we can say that homeopathy has as close to a 0% chance of having a real medical effect as we can get in medicine. You can place every scientific claim along this spectrum, based on existing evidence, competing theories, known unknowns, and other factors. The more well-established independent lines of evidence point to one conclusion, the more confident we can be in that conclusion.

So while there is a continuum of confidence in scientific facts and theories, we can divide that continuum into practical categories. There are well-established facts that we can use as a solid foundation. There are theories that are sufficiently well-established that we can act upon them, even if there remains some small uncertainty or room for doubt. Other claims are possibly true, but we should treat them with caution. Some claims in the middle are a toss-up; we really cannot say with any confidence one way or the other. Then there are claims that are probably not true, but there is room for a minority opinion and we shouldn’t write them off just yet. And finally there are claims and theories that have been sufficiently disproved that we can move on and stop wasting any further resources on pursuing them.

We can quibble about where exactly to draw the lines, and about exactly where any one scientific claim exists on this spectrum, and that debate is healthy. It is part of the scientific process. Designations are also moving targets, revised as new evidence and new ideas are brought to bear.

Shapiro, I think, is correct in pointing out that science denialism, as one of its strategies, collapses this continuum into a false dichotomy – scientific conclusions are either rock solid, or they are suspect: controversial at best and bogus at worst. Deniers ignore the huge part of the spectrum where we can treat theories as probably true, even if minor uncertainty remains. The purpose of this strategy is that all they then have to do is point to unknowns, apparent anomalies, apparent contradictions, or any dissent among scientists (no matter how minor) as evidence that a theory is not 100% rock solid – and therefore controversial and suspect.

So evolution deniers will point to “gaps” in the fossil record as if that calls the entire theory into question. Or they will point to disagreements among scientists about some of the details of evolution to claim that the entire theory is controversial and there is no consensus. Any chink, any flaw, and the whole theory collapses, in their view.

Scientists often inadvertently feed this strategy, because they are operating in the real world where scientific knowledge is a continuum. They will sometimes make statements about how disruptive their new discovery is, or how little we understood prior to their breakthrough, without realizing how such statements can easily be misused to attack the science itself. This is an important principle of effective science communication – to give an accurate portrayal of how science progresses. This means resisting the urge to overhype your own research.

Scientists are operating within a scientific paradigm, so when they make casual statements like, “We have no idea how this works,” they are unconsciously assuming that people will put such statements into the same scientific context in which they were meant. But that is often not the case. Usually such absolute statements are not literally true – we often have lots of ideas, and lots of evidence, but there may still be competing theories, or we may lack solid confirming evidence.

Science needs to be understood as the messy, flawed, but at its best rigorous, thorough, and careful endeavor that it is. We don’t know everything, and we don’t necessarily know anything 100%. But that does not mean we know nothing, or that you can casually dismiss any scientific conclusion you don’t like. We do know stuff, and some stuff we know to such a high degree of confidence that we can treat it as a fact. Other things we know with sufficient confidence to base important decisions on them. I practice medicine, so this is my daily life.

Climate change is a perfect example. There are significant uncertainties in exactly what is happening and will happen with the climate, all the feedback mechanisms at play, and what the net results will be. But we do have a fairly high degree of confidence that releasing large amounts of previously sequestered carbon into the atmosphere is forcing rising average global temperatures, with potentially inconvenient effects. The consensus on the evidence is strong enough to act, even with the lingering uncertainty.

Waiting for 100% certainty is rarely practical. If you approached health care this way, you would be paralyzed into inaction, with very bad outcomes. If we were only 95% confident that an asteroid was going to wipe out all life on Earth, I think we should act on that 95% and not quibble about the 5%.
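To make that logic concrete, here is a rough expected-value sketch (the notation is purely illustrative): if C_act is the cost of mounting a deflection effort and C_ext is the cost of extinction, then acting is the better bet whenever C_act < 0.95 × C_ext. Since C_ext is effectively unbounded, any feasible C_act clears that bar easily – the residual 5% uncertainty simply does not change the decision.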
