Jul 23 2024

AI Companions – Good or Bad?

Oftentimes the answer to a binary question is “yes”. Is artificial intelligence (AI) a powerful and quickly advancing tool or is it overhyped? Yes. Are opiates useful medicines or dangerous drugs? Yes. Is Elon Musk a technological visionary or an eccentric opportunist? Yes. This is because the world is usually more complex and nuanced than our simplistic false-dichotomy thinking allows. People and things can contain disparate and seemingly contradictory traits – they can be two things at the same time.

This was therefore my immediate reaction to the question – are AI companions a potentially healthy and useful phenomenon, or are they weird and harmful? First let me address a core neuropsychological question underlying this issue – how effective are chatbot companions, whether for simple companionship, for counseling, or even as romantic partners? The bottom line is that the research consistently shows they are very effective.

This is likely a consequence of how human brains are typically wired to function. Neurologically speaking, we do not distinguish between something that acts alive and something that is alive. Our brains have a category for things out there in the world that psychologists term “agents”: things that are acting on their own volition. There is a separate category for everything else, inanimate objects. There are literally different pathways in the brain for dealing with these two categories, agents and non-agents. Our brains also tend to overcall the agent category, really only requiring that things move in a way that suggests agency (moving in a non-inertial way, for example). Perhaps this makes evolutionary sense. We need to know, adaptively, which things out there might be acting on their own agenda. Does that thing over there want to eat me, or is it just a branch blowing in the wind?

Humans are also intensely social animals, and a large part of our brains is dedicated to social functions. Again, we tend to overcall what counts as a social agent in our world. We easily attribute emotion to cartoons, or to inanimate objects that seem to be expressing emotions. Now that we have technology that can essentially fake human agency and emotion, it can hack into our evolved algorithms, which never had to distinguish between real and fake agents.

In short, if something acts like a person, we treat it like a person. This extends to our pets as well. So – do AI chatbots act like a real person? Sure, and they are getting better at it fast. It doesn’t matter that we consciously know the entity we are chatting with is an AI; that knowledge does not alter the pathways in our brain. We still process the conversation like a social interaction. What’s the potential good and bad here?

Let’s start with the good. We already have research showing that AI chatbots can be effective at providing some basic counseling. They have many potential advantages. They are good listeners, and they are infinitely patient and attentive. They can adapt to the questions, personality, and style of the person they are chatting with, and they remember prior information. They are good at reflecting, which is a basic component of therapy. People feel as though they form a therapeutic alliance with these chatbots. They can also provide a judgment-free and completely private environment in which people can reflect on whatever issues they are dealing with. They can provide positive affirmation while also challenging the person to confront important issues. At the least, they can serve as a cheap and readily available first line of defense.

Therapeutic relationships easily morph into personal or even romantic ones; in fact, this is always a very real risk for human counselors (a process called transference). So why wouldn’t this also happen with AI therapists? In fact, it can be programmed to happen (a feature rather than a bug). All the advantages carry over – AI romantic partners can adapt to your personality, and can have all the qualities you may want in a partner. They provide companionship that can lessen loneliness and be fulfilling in many ways.

What about the sexual component? Indicators so far are that this can be very fulfilling as well. I am not saying that anything is a real replacement for a mutually consenting physical relationship with another person. But as a second choice, it can have value. The most important sex organ, as they say, is the brain. We respond to erotic stimuli and imagery, and sex chatting can be exciting and even fulfilling to some degree. This likely varies from person to person, as does the ability to fantasize, but for some, sexual encounters happening entirely in the mind can be intense. I will leave for another day what happens when we pair AI with robotics, and for now limit the discussion to AI alone. The in-between case is like Blade Runner 2049, where an AI girlfriend was paired with a hologram. We don’t have this tech today, but AI can be paired with pictures and animation.

What is the potential downside? That depends on how these apps are used. As a supplement to the full range of normal human interactions, there is likely little downside. It just extends our experience. But there are at least two potential types of problems here – dependence on AI relationships getting in the way of human relationships, and nurturing our worst instincts rather than developing relationship skills.

The first issue mainly applies to people who may find social relationships difficult for various reasons (though it could apply to most people to some extent). AI companions may be an easy solution, but the fear is that they would reduce the incentive to work on whatever issues make human relationships difficult, and the motivation to do the hard work of finding and building them. We may choose the easy path, especially as functionality improves, rather than doing the hard work.

But the second issue, to me, is the bigger threat. AI companions can become like cheesecake – optimized to appeal to our desires rather than to be good for us. While “health food” AI options will likely be developed, market forces will likely favor the “junk food” variety. AI companions, for example, may cater to our desires and our egos, make no demands on us, have no issues of their own we would need to deal with, and essentially give everything while taking nothing. In short, they could spoil us for real human relationships. How long will it be before some frustrated person shouts in the middle of an argument, “Why aren’t you more like my AI girlfriend/boyfriend?” This means we may never build the skills necessary for a successful relationship, which often requires that we give a lot of ourselves, think of other people, put the needs of others above our own, compromise, and work through some of our issues.

This concept is not new. The 1975 movie The Stepford Wives, based on the 1972 book, deals with a small Connecticut town where the men all replace their wives with perfectly subservient robot replicas. This has become a popular sci-fi theme because it touches, I think, on this basic concept of having a relationship that is 100% about you, without having to do the hard work of thinking about the needs of the other person.

The concern goes beyond the “Stepford Wife” manifestation – what if chatbot companions could be exploited, or even deliberately optimized, to cater to darker impulses? What are the implications of being in a relationship with an AI child, or an AI slave? Would it be OK to be abusive to your AI companion? What if they “liked” it? Do they get a safe word? Would this provide a safe outlet for people with dark impulses, or nurture those impulses (preliminary evidence suggests it may be the latter)? Would this be analogous to roleplaying, which can be useful in therapy but also carries risks?

In the end, whether AI companions are a net positive or a net negative depends on how they are developed and used, and I suspect we will see the entire spectrum, from very good and useful to creepy and harmful. Either way, they are now a part of our world.
