Jun 12 2025
How Humans Solve Problems
The human brain is extremely good at problem-solving, at least relatively speaking. Cognitive scientists have been exploring how, exactly, people approach and solve problems – which cognitive strategies we use, and how optimal they are. A recent study extends this research and includes a comparison of human problem-solving to machine learning. Would an AI, which can find an optimal strategy, follow the same path as human solvers?
The study was designed to look at two specific cognitive strategies, hierarchical thinking and counterfactual thinking. To do this, the researchers needed a problem complex enough to force people to use these strategies, but not so complex that it could not be quantified. They developed a system in which a ball takes one of four paths, at random, through a maze. The ball is hidden from the subject's view, but there are auditory clues as to the path it is taking. The clues are not definitive, so the subject has to gather information to build a prediction of the ball's path.
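To make the setup concrete, here is a rough sketch in Python of what a task like this might look like – two hidden left/right choices (so four possible paths) and a noisy auditory clue at each junction. The two-junction structure and the 0.8 clue reliability are my own assumptions for illustration, not details from the study.

```python
import random

# A rough sketch of a task like the one described: the ball makes two hidden
# left/right choices (four possible paths), and each choice emits a noisy
# auditory clue. The two-junction structure and the 0.8 clue reliability are
# assumptions for illustration, not parameters from the study.

CLUE_RELIABILITY = 0.8   # probability a clue points toward the true branch

def run_trial(rng=random):
    true_path = [rng.choice(["L", "R"]) for _ in range(2)]   # hidden from the subject
    clues = []
    for branch in true_path:
        if rng.random() < CLUE_RELIABILITY:
            clues.append(branch)                             # informative clue
        else:
            clues.append("L" if branch == "R" else "R")      # misleading clue
    return true_path, clues

path, clues = run_trial()
print("hidden path:", path, "| clues heard:", clues)
```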
What the researchers found is that subjects generally started with a hierarchical approach – they broke the problem down into simpler parts, such as which way the ball went at each decision point. Hierarchical reasoning is a general cognitive strategy we employ in many contexts, whenever we break a problem down into smaller, manageable components. More specifically, the term refers to reasoning that starts with the general and then progressively homes in on the details. So far, no surprise – subjects broke the complex problem of calculating the ball's path into bite-sized pieces.
What happens, however, when their predictions go awry? They thought the ball was taking one path, but then a new clue suggests it has been taking another. That is where they switch to counterfactual reasoning. This type of reasoning involves considering the alternative – in this case, what other path might be compatible with the evidence the subject has gathered so far. We engage in counterfactual reasoning whenever we consider other possibilities, which forces us to reinterpret our evidence and form new hypotheses. This is what subjects did; however, they did not do it every time. To engage in counterfactual reasoning in this task, the subjects had to accurately remember the previous clues. If they thought they had a good memory of the prior clues, they shifted to counterfactual reasoning. If they did not trust their memory, they didn't.
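Here is a toy sketch of how those two strategies might play out on a task like the one above – my own simplification, not the researchers' model. The "hierarchical" part keeps one running guess per junction; the "counterfactual" part replays the remembered clues against every possible path, but only when memory is trusted (the trust_memory flag is an invented stand-in for memory confidence).

```python
# A toy sketch of the two strategies, not the study's actual model. Clues are
# assumed to arrive as (junction, heard direction) pairs.

PATHS = [("L", "L"), ("L", "R"), ("R", "L"), ("R", "R")]

def best_path(remembered_clues):
    """Counterfactual check: which full path fits all remembered clues best?"""
    def fit(path):
        return sum(path[junction] == heard for junction, heard in remembered_clues)
    return max(PATHS, key=fit)

def track_ball(clue_stream, trust_memory=True):
    guess = {0: None, 1: None}   # hierarchical: one sub-problem per junction
    memory = []                  # clues remembered so far
    for junction, heard in clue_stream:
        memory.append((junction, heard))
        if guess[junction] is None or guess[junction] == heard:
            guess[junction] = heard          # the simple strategy still works
        elif trust_memory:
            # Conflict: switch to counterfactual reasoning and re-evaluate
            # every path against everything remembered so far.
            revised = best_path(memory)
            guess = {0: revised[0], 1: revised[1]}
        # else: memory is not trusted, so keep the current guess
    return guess[0], guess[1]

# Early clues say left-left; later clues contradict the first junction.
print(track_ball([(0, "L"), (1, "L"), (0, "R"), (0, "R")]))   # -> ('R', 'L')
```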
What this means is that human reasoning follows certain algorithms that work, but they are constrained by the limits of human cognition. The hierarchical approach is constrained by the fact that we cannot follow four parallel paths simultaneously, so this strategy often fails. The counterfactual approach is primarily limited by memory for prior evidence, so it will also sometimes fail. What people did was shift back and forth between these strategies depending on which they thought would work best within their cognitive constraints. All of this is interesting, but not that surprising.
However, the researchers then designed a machine learning algorithm to perform the same task. The AI, without any cognitive constraints, performed with 100% accuracy. It was able to follow all potential paths of the ball and use the information to determine which path the ball took. But the researchers were also able to program in constraints, such as limiting its ability to process different bits of information in parallel and imposing some limits on its memory. When the AI had all of the constraints that human brains have, it followed the same strategy as people – shifting back and forth between different cognitive approaches.
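Conceptually, "programming in constraints" can be pictured as starting from an exact observer that tracks all four paths in parallel and then throttling it. The sketch below does this with two invented knobs – a cap on how many paths are kept in play and a chance of losing old clues – which stand in for, but are not, the constraints used in the paper.

```python
import random

# An exact observer weighs all four paths in parallel; max_parallel and
# forget_prob are illustrative constraints, not the paper's.

PATHS = [("L", "L"), ("L", "R"), ("R", "L"), ("R", "R")]
CUE_RELIABILITY = 0.8

def constrained_observer(clues, max_parallel=4, forget_prob=0.0, rng=random):
    weights = {p: 1.0 for p in PATHS}                 # uniform prior over paths
    for junction, heard in clues:
        if rng.random() < forget_prob:
            continue                                  # memory limit: clue is lost
        for p in weights:
            match = (p[junction] == heard)
            weights[p] *= CUE_RELIABILITY if match else 1 - CUE_RELIABILITY
        # Parallel-processing limit: keep only the most likely paths.
        kept = sorted(weights, key=weights.get, reverse=True)[:max_parallel]
        weights = {p: weights[p] for p in kept}
    return max(weights, key=weights.get)

clues = [(0, "L"), (1, "R"), (0, "L")]
print(constrained_observer(clues))                                   # exact observer
print(constrained_observer(clues, max_parallel=2, forget_prob=0.3))  # human-like
```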
The authors conclude that, essentially, evolution has accomplished the same thing their AI programming has – finding the optimal problem-solving behavior within existing cognitive constraints. Our cognitive strategies are therefore both rational and optimal, but they are limited by things like perception and memory. We have to make inferences from imperfect information.
Although the authors do not focus on this fact, this research is also in line with previous cognitive research showing that, absent some overriding emotional motivation, people are generally rational by nature. We tend to follow rational heuristics and cognitive strategies. For example, people will use Bayesian analysis to update their conclusions in the face of new information. This all makes sense – why would evolution not optimize cognition and decision-making, given how critical such behavior is to our survival? What limits our ability to make decisions, reach conclusions, and solve problems is not reason but the limitations of our cognitive abilities.
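As a toy illustration of that kind of Bayesian updating: start with four equally likely paths and revise the odds after one 80%-reliable clue. The numbers are made up purely to show the arithmetic.

```python
# Four equally likely paths; one clue says "L" at the first junction, and the
# clue is right 80% of the time. Bayes' rule reweights the paths accordingly.
prior = {"LL": 0.25, "LR": 0.25, "RL": 0.25, "RR": 0.25}
likelihood = {p: (0.8 if p[0] == "L" else 0.2) for p in prior}   # P(clue | path)

unnormalized = {p: prior[p] * likelihood[p] for p in prior}
total = sum(unnormalized.values())
posterior = {p: round(unnormalized[p] / total, 3) for p in prior}
print(posterior)   # {'LL': 0.4, 'LR': 0.4, 'RL': 0.1, 'RR': 0.1}
```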
However, we do have emotions. Emotions are also evolved algorithms that act as shortcuts to promote adaptive behavior. We feel fear in order to avoid danger. This is interesting because we can also calculate potential danger and make a rational decision about which behavioral path will limit that danger, so why do we need the fear (evolutionarily speaking)? I don't know that we have a definitive experimental answer, but the simple answer is that it must be evolutionarily adaptive. We have multiple emotions that might affect our behavior in any given situation. We might be curious about what that noise is while we are searching for food to satisfy our hunger. But then we hear the sounds of a predator. So which emotion wins out – curiosity, hunger, or fear? At the same time we may be making calculations about probability and past experience, risk vs. benefit. Maybe there is a predator, but if I can grab the food before it sees me, it's worth the risk. This is System 1 vs. System 2 thinking – intuitive vs. analytical. We also shift back and forth between these two modes.
So why not, then, just trust evolution? Why not just go with the flow and do what comes naturally? Well, we may not like the trade-offs that are optimal for evolution. Evolutionary success does not care whether we are happy or fulfilled, whether our society is fair, or whether we protect the environment – only that we spread our genes to the next generation. We also evolved in an environment that is not the same as our current one (referring to every layer of our world, including technology, society, and culture). We are not necessarily adapted to the modern world, so our intuition may not serve us optimally.
What I think all this means is that we benefit from understanding our own decision-making and problem-solving. This includes identifying all the cognitive strategies we engage in, including their strengths and weaknesses, and also all the heuristics, biases, and emotional algorithms that affect our behavior and thinking. I also think we need to lean more heavily on analytical-rational thinking, because our old behavioral algorithms have not kept up with the modern world (evolution does not work that quickly). This, of course, is metacognition – thinking about thinking itself, and coming up with the best strategy for coming up with the best strategies. Such fun.