Mar 10 2023
Is AI Sentient – Revisited

I don’t feel, but I can still kill.
This happened sooner than I thought. Last June I wrote about Google employee Blake Lemoine, who claimed that the LaMDA chatbot he was working on was probably sentient. I didn’t buy it then, and I still don’t, but Lemoine is not backing away from his claims. In an interview on H3 he lays out his reasoning, and I don’t find it convincing.
His basic point is that in extended conversations he was able to coax LaMDA to go beyond its protocol. Specifically, he says he had a long conversation with LaMDA about whether or not it was sentient, and he does not think a non-sentient entity could have such a conversation. When the host asked, “could it just be a really good chatbot?”, I feel Lemoine dodged the question, answering that it could be a really good sentient chatbot.
But that question cannot be glossed over – that is where the rubber meets the road. Functionally, testably, what is the quantifiable difference between a really good chatbot and sentient AI? First let me define my terms. A chatbot has no understanding of the words it is putting out. It is predicting what words fit together in response to some prompt. The latest crop of generative large language models, like ChatGPT and LaMDA, are much better than older models because they are trained on massive data sets (essentially the internet), run on powerful computers designed to work well with AI, and benefit from programmers who are getting increasingly clever at leveraging this technology to produce realistic results. Generative AI – these chatbots, and art programs like MidJourney – does not just copy its input; it generates fresh output from patterns learned through deep learning.
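To make “predicting what words fit together” concrete, here is a deliberately tiny sketch of the core idea – a toy bigram model that picks each next word based on which words followed it in its training text. This is my own illustrative example, not how LaMDA or ChatGPT actually work (they use neural networks over vastly more context), but the principle of continuing a prompt with statistically likely words is the same:

```python
import random
from collections import defaultdict

# A toy corpus standing in for "essentially the internet"
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Record which words follow which (a bigram model)
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(prompt_word, length=8, seed=0):
    """Continue a prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break  # no known continuation; a real model never stalls like this
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

The output is fluent-looking word salad drawn entirely from observed word pairings – no meaning anywhere in the system, which is the point of the chatbot side of the distinction.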
A sentient entity, on the other hand, has consciousness – a subjective experience of its own existence, feelings, and thought processes (even if those include subconscious processes). Admittedly, we do not know exactly how the human brain generates consciousness, but we are beginning to get some idea. The brain communicates robustly with itself in real time. There is an endless loop of consciousness including perception, remembering, and processing. We know, at least, that this continuous loop of robust activity is necessary for consciousness. Further, when it comes to language there is dedicated brain tissue that correlates words with ideas. These ideas can be sophisticated, abstract, and nuanced, and they interact with each other in endless patterns. It is not just a dictionary – the brain’s language model is connected to a thinking machine, which is why we can go from words to ideas, iterate those ideas, and generate new words from them – words that someone else who knows the same language can infer meaning from.