Sep 23 2021

Will We Respect a Robot’s Authority?

The robots are coming. Of course, they are already here, mostly in the manufacturing sector. Robots designed to function in the much softer and more chaotic environment of a home, however, are still in their infancy (mainly toys and vacuum cleaners). Slowly but surely, robots are spreading out of the factory and into places where they interact with humans. As part of this process, researchers are studying how people react socially to robots, and how robot behavior can be tweaked to optimize this interaction.

We know from prior research that people react to non-living things as if they were real people (technically, as if they have agency) when those things act as if they have a mind of their own. Our brains sort the world into agents and objects, and this categorization seems to depend entirely on how something moves. Further, emotion can be conveyed with minimalistic cues. This is why cartoons and ventriloquist dummies work.

A humanoid robot that can speak and has basic facial expressions, therefore, is way more than enough to trigger in our brains the sense that it is a person. The fact that it may be plastic, metal, and glass does not seem to matter. But still, intellectually, we know it is a robot. Let’s further assume for now we are talking about robots with narrow AI only, no general AI or self-awareness. Cognitively the robot is a computer and nothing more. We can now ask a long list of questions about how people will interact with such robots, and how to optimize their behavior for their function.

A recent study explores one such question – how will people react to the authority of robots? (If you like, you can say “authoritah” in your head, like Cartman.) The researchers compared two situations in which test subjects were given a task and a cash reward for completing it. In one scenario a robot helper, Pepper, had authority over the situation, serving as researcher and judge, administering the test and doling out rewards or penalties. In the second scenario a human was in the role of researcher and had all the authority. In both scenarios Pepper also helped the subjects complete their task. Subjects responded better to Pepper (listened to his suggestions) when he was limited to the role of helper than when he also had authority over the administration of the test. The authors see this as Pepper being more persuasive when he lacked authority.

What does this mean? That is always the trick with psychological research like this. The researchers speculate that perhaps subjects do not fully buy the legitimacy of Pepper as an authority, because he’s a cute robot. That seems plausible, and it can be tested. It would be interesting to see follow-up studies like this but with a human in the role of Pepper. Would subjects respond better to that person’s help when they were not also the authority? We could also repeat the experiment with different robots aesthetically designed to evoke different reactions. Would a more intimidating robot, or just a serious-looking one, enjoy more respect as an authority over the study? Again, these types of studies are most useful when they are replicated many times with different variables.

It’s interesting to think about where this entire line of research will lead. For now the goal is to get people to accept robot helpers in their role, to react positively to them so that the robot’s task can be optimized. At its core this research is about manipulating the emotional reactions of people. In that way it is similar to most marketing research, just focused on our reaction to robots (rather than, say, advertising strategies). This research, however, can be “weaponized” in the same way that advertising is. Imagine robots with advanced AI algorithms designed to “optimize” their interaction with humans. They could become master manipulators. This will become more powerful as the robots themselves become more human-like.

We can also think about what will happen to people when certain relationships in their lives are replaced or augmented with robots – such as robot pets, companions, and even mates. Even if the design of the robots is benign (not intended to manipulate or exploit), this could become a dangerous trap. It could be similar to how we developed our food to be increasingly tasty and nutritious, but as a consequence developed things like cheesecake, which cause us to overconsume calories.

Imagine, therefore, a future in which a robot can be the perfect companion, in that it provides everything you need and demands nothing itself. Would it spoil you for a human companion? What would the psychological effect on humanity be if we have pets, companions, and mates that perfectly serve not only our needs but also our fragile egos, and we never have to consider their wants or needs? Will this create a race of thoughtless assholes, unable to maintain a relationship with another needy person? If so, is there any possible way to avoid such a future? The appeal of, and therefore demand for, such robots will be there, so what will stop them? Will this create a social crisis that will dwarf whatever negative effects you think have come from mass media, social media, or a consumer-focused culture?

I guess we’ll see.
