Nov 19 2024
Robots and a Sense of Self
Humans (assuming you all experience roughly what I experience, which is a reasonable assumption) have a sense of self. This sense has several components – we feel as if we occupy our physical bodies, that our bodies are distinct entities separate from the rest of the universe, that we own our body parts, and that we have the agency to control our bodies. We can do stuff and affect the world around us. We also have a sense that we exist in time, that there is a continuity to our existence, that we existed yesterday and will likely exist tomorrow.
This may all seem too basic to bother pointing out, but it isn’t. These aspects of a sense of self also do not flow automatically from the fact of our own existence. There are circuits in the brain receiving sensory and cognitive input that generate these senses. We know this primarily from studying people in whom one or more of these circuits are disrupted, either temporarily or permanently. This is why people can have an “out of body” experience – the circuits that make us feel embodied are disrupted. People can feel as if they do not own or control a body part (as in so-called alien hand syndrome). Or they can feel as if they own and control a body part that doesn’t exist. It’s possible for there to be a disconnect between physical reality and our subjective experience, because the subjective experience of self, of reality, and of time is constructed by our brains based upon sensory and other inputs.
Perhaps, however, there is another way to study the phenomenon of a sense of self. Rather than studying people who are missing one or more aspects of a sense of self, we can try to build up that sense, one component at a time, in robots. This is the subject of a paper by three researchers: a cognitive roboticist, a cognitive psychologist who studies human-robot interaction, and a psychiatrist. They explore how we can study the components of a sense of self in robots, and how we can use robots to do psychological research about human cognition and the sense of self.
Obviously we are a long way from having artificial intelligence (AI) that reproduces human-level general cognition. But by now it’s pretty clear that we do not need this in order to at least simulate aspects of human-level cognition and beyond. One great example from robotics is that we do not need human-level general AI to make a robot walk. Instead we can develop algorithms that respond in real time to sensory information so that robots can keep themselves upright, traverse terrain, and respond to perturbations. This actually mimics how the human brain works. You don’t have to think much about walking. There are subcortical pathways that do all the heavy lifting for you – algorithms that use sensory input to maintain anti-gravity posture, walk, and react to perturbations. The system is largely subconscious, although you can consciously direct it. Similarly, you don’t have to think about breathing. It’s automatic. But you can control your breathing if you want.
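To make this concrete, here is a minimal sketch (all names and gains are illustrative, not from the paper) of the kind of low-level feedback loop being described: a PD controller that keeps a simulated, linearized inverted pendulum upright. Nothing here plans or reasons – the loop just reacts to the sensed tilt in real time, which is all "staying upright" requires:

```python
def simulate_balance(theta0=0.3, kp=20.0, kd=4.0, dt=0.01, steps=500, g=9.8):
    """Toy balance loop: a PD controller on a linearized inverted pendulum.
    theta0 -- initial tilt in radians (the perturbation)
    kp, kd -- feedback gains on sensed tilt and tilt velocity
    Returns the final tilt after `steps` control cycles."""
    theta, omega = theta0, 0.0          # tilt angle and angular velocity
    for _ in range(steps):
        torque = -kp * theta - kd * omega   # react to sensed state only
        alpha = g * theta + torque          # linearized fall + corrective torque
        omega += alpha * dt                 # simple Euler integration
        theta += omega * dt
    return theta
```

Starting tilted at 0.3 radians, the controller drives the tilt back toward zero over a few simulated seconds – a stand-in for the "subconscious" postural reflexes described above, with no general intelligence involved.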
The idea with robots is not that we create a robot that has a full human-level sense of self, but that we start to build in specific components that are the building blocks of a sense of self. For example, robots could have sensors and algorithms that give them feedback that indicates they control their robotic body parts. As with the human brain, a circuit can compare the commands to move a body part with sensors that indicate how the body part actually moved. Similarly, when robots move there can be sensors feeding into algorithms that determine what the effect of that movement was on the outside world (a sense of agency).
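The comparator circuit described here can be sketched in a few lines. This is a hypothetical toy model, not an implementation from the paper: the controller predicts the sensory outcome of each motor command (an "efference copy") and compares it with what the joint sensor actually reports. Small mismatches are attributed to noise; large ones signal that the limb did not do what was commanded – a crude stand-in for a sense of ownership and agency over that actuator:

```python
class AgencyComparator:
    """Toy efference-copy comparator for one robotic joint.
    Names and the tolerance value are illustrative."""

    def __init__(self, tolerance=0.05):
        self.tolerance = tolerance  # allowed prediction error (radians)

    def predict(self, command_angle):
        # Forward model: in this sketch the predicted outcome is
        # simply the commanded angle itself.
        return command_angle

    def i_did_that(self, command_angle, sensed_angle):
        """True if the sensed movement matches the predicted outcome
        of the motor command -- i.e., 'this movement was mine'."""
        error = abs(self.predict(command_angle) - sensed_angle)
        return error <= self.tolerance
```

A small mismatch (sensor noise) still reads as self-caused movement, while a large one – the arm jammed, or something else moved it – does not, which is exactly the distinction the human comparator circuits are thought to draw.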
This would not be enough to give the robot a subjective experience of self, just as your brainstem would not give you a sense of self without a functioning cortex. But we can start to build the subconscious components of self. We can then do experiments to see how, if at all, these components affect the behavior of the robot. Perhaps this will enable them to control their movements more precisely, or adapt to the environment more quickly and effectively.
I think this is a good pathway for developing robotic AI in any case. Our brains evolved from the bottom up, starting with simple algorithms to control basic functions. It makes sense that we should build robotic intelligence from the bottom up also. Then, as we develop more and more sophisticated AI, we can plug these subconscious algorithms into them.
The big question is – how much will plugging a bunch of narrow AI / subconscious algorithms into each other contribute to AI sentience and self-awareness? Will awareness (like V’Ger or Skynet from science fiction) spontaneously emerge from a complex-enough network of narrow AIs? Is that how vertebrate self-awareness evolved? Arguably, human consciousness is ultimately a bunch of subconscious networks all talking to each other in real time, with wakeful consciousness emerging from this process. You can take components away, changing the resulting consciousness, but if you take too many of them away, then wakeful consciousness cannot be maintained.
The other question I have concerns the difference between AI running on a computer and AI in a robot. Does an AI have to be embodied to have human-like self-awareness? Is a Max Headroom type of AI with a completely virtual existence possible? Probably – if it had a virtual body programmed to function like a physical body in the virtual world. But since we are developing robotics anyway, developing robotic AI that mimics human-like embodiment and sense of self makes sense. It evolved for a reason, and we should explore how to leverage that to advance robotics. And while we use our understanding of neuroscience to help advance AI and robotics, we can also use AI and robotics to study neuroscience.
As the authors propose, we can use our attempts at building the components of self into robots to see how those components function and what effect they have.