Jul 18 2023

How We Determine What to Believe as True

Psychologists have been studying a very basic cognitive function that appears to be of increasing importance – how do we choose what to believe as true or false? We live in a world awash in information, and access to essentially the world’s store of knowledge is now a trivial matter for many people, especially in developed parts of the world. The most important cognitive skill in the 21st century may arguably be not factual knowledge but truth discrimination. I would argue this is a skill that needs to be explicitly taught in school, and is more important than teaching students facts.

Knowing facts is still important, because you cannot think in a vacuum. Our internal model of the world is built on bricks of fact, but before we take a brick and place it in our wall of knowledge, we have to decide whether it is probably true or not. I have come to think about this in terms of three categories of skills – domain knowledge (with scientific claims this is scientific literacy), critical thinking, and media savvy.

Domain knowledge, or scientific literacy, is important because without a working knowledge of a topic you have no basis for assessing the plausibility of a new claim. Does it even make basic sense? An easily refutable claim may be accepted simply because you don’t know it is easily refutable. Critical thinking skills involve an understanding of the heuristics we naturally use to estimate truth, our cognitive biases, cognitive pitfalls like conspiracy thinking, how motivation affects our thought processes, and mechanisms of self-deception. Media savvy involves understanding how to assess the reliability of information sources, how information ecosystems work, and how information is used by others to deceive us.

A recent study addresses one aspect of this latter category – how we assess the reliability of information sources, and how that assessment affects our bottom-line judgement of whether or not something is true. The researchers conducted two studies involving 1,181 subjects. They gave the subjects factual information, then presented them with claims made by a media outlet. They were further told whether the media outlet intended to inform or deceive on this topic. They studied claims that are considered highly politicized and those that were not.

What they found is that subjects were more likely to deem a claim true if it came from a source considered to be trying to inform, and more likely to deem it false when the source was characterized as trying to deceive – even when the claims were identical. At first this result seems strange, because the subjects were told the actual facts, so they knew absolutely (within the confines of the study) whether or not the claim was true.

However, subjects were willing to consider a claim true but not precise, and assumed the source was providing an estimate. This introduces an element of subjectivity. For example, the subjects may have been told that 114 people attended an event, while source A said that 109 people attended, and source B said 100 people attended. Whether or not source A’s claim was considered to be true depended in part on the alleged motivations of source A.
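The dynamic in this example can be sketched in code: a numeric claim counts as “true” if it falls within a tolerance band around the known fact, and the width of that band shifts with the source’s perceived motive. This is purely an illustration of the idea, not the researchers’ actual model – the tolerance values here are made up for the example.

```python
# Illustrative sketch only (not the study's model): a claim is deemed
# "true" if its relative error from the known fact is within a tolerance,
# and the tolerance granted depends on the source's perceived motive.

def seems_true(fact: float, claim: float, tolerance: float) -> bool:
    """Deem a numeric claim 'true' if its relative error is within tolerance."""
    return abs(claim - fact) / fact <= tolerance

fact = 114  # attendance figure given to subjects in the example

# Hypothetical tolerances: a subject grants a wider margin to a source
# seen as trying to inform than to one seen as trying to deceive.
tolerance_informer = 0.10
tolerance_deceiver = 0.02

print(seems_true(fact, 109, tolerance_informer))  # True  (~4.4% off)
print(seems_true(fact, 109, tolerance_deceiver))  # False (same claim!)
print(seems_true(fact, 100, tolerance_informer))  # False (~12.3% off)
```

The point of the sketch is that the very same claim (109) flips between “true” and “false” depending only on the margin the judge is willing to grant – which is what the study suggests perceived motive does.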

Like many psychological studies, this study design is a construct, and feels contrived. You never really know what subjects were thinking. That is why you need many studies to see if a phenomenon replicates under different study designs. Also, this paper is really looking at something very specific. As the authors state:

However, unlike most existing work on misinformation and belief formation, this paper does not assess how people discern true versus false information. Rather, this paper seeks to understand what people think even qualifies as true versus false information.

This is an interesting angle, and partly explains how people can look at the same set of data and come to different conclusions. It also implies that to some degree we take a dichotomous approach to facts, deeming them basically true or basically false. This is rational to some extent, as we need to decide at times whether or not to act on information. We are faced with binary choices, and this requires a binary assessment. But as scientists and critical thinkers we need to recognize that we often have to make binary choices with incomplete or even unreliable information. It’s better to think of claims in terms of a sliding scale of probability, from almost zero to almost 100%, with a zone in the middle of “I don’t know”. But the world often forces choices on us. A defendant is either innocent or guilty.
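The sliding-scale view above can be made concrete with a toy example: belief held as a probability, collapsed to a verdict only when a binary choice is forced. The thresholds are arbitrary choices for illustration, not anything from the study.

```python
# Toy illustration of the "sliding scale" of belief: a probability of
# truth, with a middle "I don't know" zone, collapsed to a verdict.
# The cutoffs (0.25 and 0.75) are arbitrary values for the example.

def verdict(p: float, lo: float = 0.25, hi: float = 0.75) -> str:
    """Map a probability of truth onto a three-way judgement."""
    if p >= hi:
        return "probably true"
    if p <= lo:
        return "probably false"
    return "I don't know"

print(verdict(0.95))  # probably true
print(verdict(0.50))  # I don't know
print(verdict(0.10))  # probably false
```

Forcing a binary choice amounts to deleting the middle zone – setting `lo` and `hi` to the same value – which is exactly the kind of pressure the study design (and the real world) puts on us.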

What this study suggests is that we grant ourselves some wiggle room in defining “true” in order to fit a messy and complex world into our preferred system of binary judgements. It’s true enough. We often employ half-joking euphemisms for this process – correct to within an order of magnitude, or good enough for government work.

Again, one caution here is that this binary choice was explicitly imposed by the study design. This is why many psychological studies give a spread of options, such as a Likert scale. What if this study had asked whether a claim was entirely true, mostly true, don’t know, mostly false, or entirely false? They did use a Likert scale when assessing the reliability of the source. That subjects were essentially forced to make a binary judgement may have influenced them to grant themselves some wiggle room when defining “true”. The study still shows a bias in assessing whether a claim is true, and that in some circumstances we will define “true” broadly. But I would like to see other versions of this study with different options.

I’ll also say that having some wiggle room when defining what is “true” is legitimate. Again, the world is messy and complex, and our language is only an approximation. For example, on the SGU we discuss complex scientific topics every week, but we have only so much time and can only go so deep on each topic. We can never give enough detail to satisfy an expert, and we try to make our discussions true and accurate – as far as they go.

Oftentimes actual experts will e-mail us, with a variety of responses. Some will frame their feedback as us being “wrong” and then simply give more detail, while others, even on the same topic, will frame their feedback as – you were correct, but here is more detail or technical precision. If, for example, I say the Earth is a sphere, is that true or false? Well, it is true within a certain range of precision. But it is more true to say that the Earth is an asymmetrical oblate spheroid. Someone being a stickler might deem the “sphere” claim to be false. In reality, there are degrees of true or false. Saying the Earth is a sphere is mostly true, and only slightly false. Saying the Earth is a cube is mostly false (at least it’s three dimensional), and saying it’s flat is both absurd and entirely false.

I am often in this position myself, sometimes when in the role of professor teaching medical students. How do you respond to answers that have some concepts or details correct and others either incorrect or just imprecise or perhaps misleading? From a teaching perspective, it’s just as important to point out what people get right as what they get wrong. From a psychological perspective, people will respond (and learn) better if the correcting feedback is framed as – this is good, this is how it can be even better. Medicine is also an applied science, so there is another reasonable criterion to apply – does it matter clinically? In the clinical (as opposed to the research) context, if something is a “distinction without a difference”, or at a point where further precision will have no detectable effect on clinical outcome, then it’s true enough.

What all this means is that, even though we think of true and false as absolute binaries, there is actually a lot of wiggle room in these definitions, and this further means that we cannot absolutely remove subjectivity in how we think about whether something is true or false. Subjectivity then invites bias. We can dismiss as false something with a tiny imprecision, or accept as true a claim that only has a tiny kernel of truth.

Having tried to communicate science in many contexts for many years, I have disciplined myself to use more nuanced language. This means using a lot of qualifying language to implicitly, or sometimes explicitly, recognize that true and false often occur on a continuum. Context is also important. I think my medical background has helped as well – we often need to make binary decisions in the face of uncertainty, and then effectively communicate all of that to non-experts with high personal stakes.

But all of this takes a lot of cognitive work, and a high degree of comfort with complexity and uncertainty. None of these are inherent human traits.
