Jun 28 2021

It’s OK to be Wrong

In science it is not only OK to be wrong, it is an unavoidable and perpetual state – depending, of course, on how you define "wrong". We lack a complete understanding of the universe, and all of our theories are at best approximations of reality. From this perspective no theory is completely correct, and therefore every theory is to some extent wrong. The idea of progress in science is to become less wrong, knowing that perfect knowledge is not technically possible.

Progress in science is therefore mostly about identifying which of the facts and ideas we hold are wrong, exactly how they are wrong, and how we can gather new data or observations that will test those ideas and allow us to refine them, making us one step less wrong.

Some philosophers express this idea by saying there is no metaphysical truth in science. What science has is models that predict how the universe will behave. The more accurate and far-reaching those predictions, the better the model. When these models start to break down, because new observations conflict with their predictions, we have to revise them to account for the new observations. In this way we build increasingly complex and deep models that make better and better predictions.

Philosophers argue about whether or not it is appropriate to say that our scientific models are "true" in the metaphysical sense. I don't want to get into this argument, but just want to state that from the point of view of applied science, it doesn't matter. If our probes reach Pluto, then it doesn't matter at that point whether our scientific theories reflect the way the universe actually works or are just mental constructs that predict outcomes within observable tolerances. It gets the job done either way. It may matter in terms of constructing new theories – but again, I don't want to go down that rabbit hole.

What I do want to discuss is a common category mistake people make when thinking about science. The concept of a category mistake was developed by the philosopher Gilbert Ryle in 1949, and is essentially a cognitive error in which the properties of one category are inappropriately applied to another. He developed this idea to criticize mind-body dualism, arguing that the prevailing arguments for dualism are wrong because they make a category mistake: they say that the mind does not have physical properties, therefore cannot be physical, and therefore is not the brain. The error here is that the mind is not a thing, and therefore does not need to have physical properties. The mind is a process, a collection of properties. So the argument leading to dualism is based on a category mistake.

To give another example, it's like asking someone to point to the building, or even the set of buildings, that is Yale University. The category mistake is that a university is not a physical object. It may be made of physical objects, but it is more than the objects it contains. A university is also a tradition, a culture, the people who work for it and their students, their ideas and their activity. Even if all the buildings were destroyed, the university would not cease to exist.

I bring this up because it is common for critics of applied sciences to make a category mistake. For example, global warming deniers are keen on pointing out that nothing in science is settled. While this is true, it's irrelevant to the discussion. They are confusing how we approach knowledge in science with how we apply that knowledge in the world. Medicine is another common context for this. We don't know anything with certitude in medicine. Nothing is settled; all knowledge is open to revision. However, we have to treat patients in the meantime.

When it comes to applied science we are not dealing with certitude, and we do not have to close down debate or pretend that any conclusion in science is fully “settled”. Rather, we deal with risk vs benefit, with probability, with return on investment and cost efficiency.

My colleague Phil Plait came up with a good analogy. Let's say there is a large asteroid on a probable collision course with the Earth. This is a planet killer, an extinction-level event. We cannot know the orbit with 100% certainty, only with some probability, and the longer we observe the asteroid, the more precise that estimate becomes. Now – should we wait until we are 100% sure the asteroid is going to hit before we do anything? At what probability is it worth taking action? Even if an impact is only 10% likely, that is still far too high a risk of extinction to accept.
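To make that intuition concrete, here is a minimal expected-value sketch in Python. The probabilities and costs are purely illustrative assumptions, not real figures; the point is only that a small chance of a catastrophic loss can dominate the comparison.

```python
# Expected-cost comparison for acting under uncertainty.
# All numbers below are illustrative assumptions, not real estimates.

P_IMPACT = 0.10          # assumed probability the asteroid hits
COST_EXTINCTION = 1e9    # stand-in for a catastrophic, effectively unbounded loss
COST_DEFLECTION = 1e3    # stand-in for the comparatively small cost of acting

# Expected cost of doing nothing: probability of impact times the loss.
expected_cost_wait = P_IMPACT * COST_EXTINCTION

# Expected cost of acting now: you pay the deflection cost regardless.
expected_cost_act = COST_DEFLECTION

print(f"Expected cost of waiting: {expected_cost_wait:,.0f}")
print(f"Expected cost of acting:  {expected_cost_act:,.0f}")

# Even at only a 10% impact probability, waiting is vastly worse in
# expectation - which is why "the science isn't settled" is not a
# reason for inaction.
```

The same arithmetic applies to the medical example below: when the cost of inaction is high enough, acting on uncertain information is the rational choice.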

If it is 50% likely that you have cancer, do you want to wait until we are closer to 100% certain before taking action? Of course not. Is the science "settled" on whether or not you have cancer, on the utility of the diagnostic tests, on the effectiveness of the treatments? No, but doing nothing is also a choice, with its own risks, and often the best course of action is to act on uncertain information.

So it is silly to argue about whether or not climate science, vaccine science, genetic modification, nuclear power, or anything else is "settled". This is a category mistake, and a diversion from the real issues. What matters is the risk vs benefit of all available courses of action, given the best knowledge we have at the time.
