Nov 18 2019
Peak Intelligence
There is an interesting article over at The Conversation asking the question – have humans reached peak intelligence? This is something we have discussed previously on the SGU so I was keen to find out what philosophers think about this question. The core question is this – are there ultimate limits to the ability of humans to think, understand, and hypothesize? If so, are we approaching that limit now? There is also an angle to this the article did not cover directly – is there a limit to our ability to manage complexity (as opposed to just comprehending reality)?
There are different ways to approach this question. From an evolutionary point of view, our ancestors were likely under selective pressure to solve problems of immediate survival, not to unravel the deep mysteries of the universe. But I don’t think this is ultimately relevant. This is a hyper-adaptationist approach. It actually doesn’t matter to the ultimate question, because our hands did not evolve to play the piano either. Abilities that evolve for one purpose may be more generally useful. Clearly humans evolved some general cognitive abilities that go way beyond their immediate narrow evolutionary function.
But the broader point is salient – our cognitive abilities are not necessarily unlimited. What if the universe is simply more complex than our brains can comprehend? Take quantum mechanics, for example. The best thinkers we have, specializing in this question, still cannot solve the mystery of wave-particle duality and apparent non-locality. We have some ideas, but it is possible that our brains are simply not equipped to imagine the true answer. It may be like a cat trying to understand calculus. If this is true, then what would we expect to happen in the course of scientific development? Would we hit a wall?
As the article also discusses, I don’t think so. Rather, if we look at the course of scientific development, our ability to do science is progressing – the technology of science, if you will. But at the same time the difficulty, complexity, and subtlety of the problems are increasing. We are having to work harder and harder for progressively smaller returns. Rather than hitting a wall, I agree that we will likely just wade into the molasses. We will keep pushing deeper and deeper into fundamental theories about how the universe works, but progress will become slower and slower. It may never actually stop, but advances will simply become fewer and farther between.
While I think it is reasonable to conclude that this is likely the long term trend of scientific discovery, I don’t think we are in a position to determine where we are in that arc. You cannot see a pattern when you are in the middle of it. We will always, by necessity, be looking back at the history of scientific progress and discerning its overall pattern.
The other aspect of this that interests (and concerns) me that the article did not delve into is complexity. Our society is getting more and more complex, and that complexity seems to be growing beyond our control. No one has the ability to slow things down; we just have to perpetually scramble to try to manage it. I do wonder if civilization will get to a critical point and then implode in some way. Like any system, I think there is a tendency toward greater chaos. As things naturally evolve, they become more and more of a kluge. Think of our legal system, our medical system, any bloated piece of software, and of course biological systems. At some point there may be a revolution and cleansing that wipes the slate clean and starts fresh. That is the long-term pattern of human history. No state lasts forever. The cleansing does not always have to be a revolution, however; it can be a managed reformation.
Yet another aspect that is beyond the scope of the article, but that I think is very important, is another form of limitation on human cognition – biases and heuristics. These are not absolute limits, just inefficiencies and tendencies that can potentially lead us astray. The article essentially focuses on what we are capable of when thinking clearly, but humans do not always think clearly. We therefore backslide, and this also hampers progress. One question is – is the ratio of progress to backsliding changing over historical time? Will this also reach a point of equilibrium?
I was pleased to see that the article covered something that I thought of immediately – even if the human brain has hard limits on its ability to understand the universe, we are not ultimately limited by the human brain. We may encounter alien intelligences, which would be fascinating (and scary, but let’s hope they don’t want to enslave or eat us). But more predictably, we are also developing artificial intelligence. Whatever you think about the current state and the rate of progress of this endeavor, we are steadily developing more and more intelligent machines, and eventually we will very likely develop general AI with capabilities beyond humans. We may be able to evolve intelligent machines and select them for their ability to solve scientific mysteries. Also – we may be able to evolve or genetically engineer ourselves, and even merge with our technology.
I am usually skeptical of “peak” anything arguments. They have a poor track record. And I think this is true of peak intelligence as well.