Jan 29 2024

Controlling the Narrative with AI

There is an ongoing battle in our society to control the narrative, to influence the flow of information, and thereby move the needle on what people think and how they behave. This is nothing new, but the mechanisms for controlling the narrative are evolving as our communication technology evolves. The latest additions to this technology are large language model AIs.

“The media”, of course, has been a major focus of this competition. On the right there are constant complaints about the “liberal bias” of the media, and on the left there are complaints about the rise of right-wing media, which they feel is biased and radicalizing. The culture wars focus mainly on schools, because schools teach not only facts and knowledge but convey the values of our society. The left views DEI (diversity, equity, and inclusion) initiatives as promoting social justice, while the right views them as brainwashing the next generation with liberal propaganda. This is an oversimplification, but it is the basic dynamic. Even industry has been targeted by the culture wars – which narratives are specific companies supporting? Is Disney pro-gay? Which companies fly BLM or LGBTQ flags?

But increasingly “the narrative” (the overall cultural conversation) is not being controlled by the media, the educational system, or marketing campaigns. It’s being controlled by social media. This is why, when the power of social media started to become apparent, many people panicked. Suddenly it seemed we had ceded control of the narrative to a few tech companies, who had apparently decided that destroying democracy was a price they were prepared to pay for maximizing their clicks. We now live in a world where YouTube algorithms can destroy lives and relationships.

We are not yet done panicking about the influence of social media and the tech giants who control it, and already another player has crashed the party – artificial intelligence, chatbots, and the large language models that run them. This is an extension of the social media infrastructure, but it is enough of a technological advance to be disruptive. Here is the concern – by shaping the flow of information to the masses, social media platforms and AI can have a significant effect on the narrative, enough to create populist movements, alter the outcome of elections, or make or destroy brands.

It seems likely that we will increasingly be giving control of the flow of information to AI. Now, instead of searching on Google for information, you can have a conversation with ChatGPT. Behind the scenes it’s still searching the web for information, but the interface is radically different. I have documented and discussed here many times how easy human brains are to fool. We have evolved circuits in our brains that construct our perception of reality and make certain judgments about how to do so. One subset of these circuits is dedicated to determining whether something out there in the world has agency (is it a person or just a thing), and once the agency algorithm determines that something is an agent, that determination connects to the emotional centers of our brain. We then have feelings toward that apparent agent and treat it as if it were a person. This extends to cartoons, digital entities, and even abstract shapes. Physical form, or the lack thereof, does not seem to matter, because it is not part of the agency algorithm.

It is increasingly well established that people respond to even a half-way decent chatbot as if that chatbot were a person. So now when we interface with “the internet” looking for information, we may not just be searching for websites but talking with an entity – an entity that can sound friendly, understanding, and authoritative. Even though we may know full well that this is just an AI, we emotionally fall for it. It’s just how our brains are wired.

A recent study demonstrates the subtle power that such chatbots can have. The researchers asked subjects to talk with ChatGPT-3 about Black Lives Matter (BLM) and climate change, but gave them no other instructions. They also surveyed the subjects’ attitudes toward these topics before and after the conversation. Those who scored negatively toward BLM or climate change rated their experience half a point lower on a five-point scale (which is significant), so they were unhappy when the AI told them things they did not agree with. But, more importantly, after the interaction their attitudes moved 6% in the direction of accepting climate change and the BLM movement. We don’t know from this study whether this effect is enduring, or whether it is enough to affect behavior, but at least temporarily ChatGPT did move the needle a little. This is a proof of concept.

So the question is – who controls these large language model AI chatbots, who we are rapidly making the gatekeepers of information on the internet?

One approach is to make it so that no one controls them (as much as possible). Through transparency, regulation, and voluntary standards, the large tech companies can try to keep their thumbs off the scale as much as possible, and essentially “let the chips fall where they may.” But this is problematic, and early indications are that this approach likely won’t work. The problem is that even if they are trying not to influence the behavior of these AIs, they can’t help but have a large influence on them through the choices they make about how to program and train them. There is no neutral approach. Every decision has a large influence, and they have to make choices. What do they prioritize?

If, for example, they prioritize the user experience, well, as we see in this study, one way to improve the user experience is to tell people what they want to hear, rather than what the AI determines is the truth. How much does the AI caveat what it says? How authoritative should it sound? How thoroughly should it source whatever information it gives? And how does it weight the different sources it is using? Further, we know that these AI applications can “hallucinate” – just make up fake information. How do we stop that, and to what extent (and how) do we build fact-checking processes into the AI?

These are all difficult and challenging questions, even for a well-meaning tech company acting in good faith. But of course, there are powerful actors out there who would not act in good faith. There is already deep concern about the rise of TikTok, and the ability of China to control the flow of information through that app to favor pro-China news and opinion. How long will it be before ChatGPT is accused of having a liberal bias, and ConservaGPT is created to combat it (just like Conservapedia, or Truth Social)?

The narrative wars go on, but they seem to be increasingly concentrated in fewer and fewer choke points of information. That, I think, is the real risk. And the best solution may be an anti-trust approach – make sure there are lots of options out there, so that no single option, or small set of options, dominates.
