Oct 23 2015

AI and the Chinese Room Argument

In 1980 John Searle proposed what has come to be known as the Chinese Room Argument as a refutation of the functionalist theory of consciousness. This is a thought experiment, much like the other famous thought experiment in artificial intelligence (AI), the Turing test.

Searle asks you to imagine a native English speaker with no knowledge of Chinese in a locked room with a large number of reference books. Slips of paper with Chinese symbols are passed under the door. The person then looks up the symbols in the reference books, copies the associated symbols onto the paper, and passes it back. In this way the person generates answers to the questions he is being asked without any understanding at all.
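To make the mechanics of the room concrete, here is a minimal sketch in Python (the rule book and the sample phrases are purely illustrative stand-ins, not anything from Searle). The point is that nothing in the procedure represents the meaning of the symbols; it is lookup and copying all the way down.

```python
# A toy version of the Chinese Room: the "rule book" is a plain lookup table.
# The entries are illustrative placeholders, not a real rule book.
RULE_BOOK = {
    "你好吗": "我很好",          # incoming slip -> reply slip
    "你叫什么名字": "我叫小明",
}

def person_in_room(slip: str) -> str:
    """Find the incoming symbols in the rule book and copy out the reply.
    Nothing here encodes what either string means."""
    return RULE_BOOK.get(slip, "对不起")  # a default slip when no rule matches

print(person_in_room("你好吗"))  # a sensible-looking reply, produced with zero understanding
```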

I was recently asked this question about the Chinese Room Argument:

“My question is, isn’t Searle basically saying that the mind is magical? If the system / room / program can be arbitrarily complex and is able to learn, surely there is nothing material that sets it apart from the brain? I guess I am siding with the epiphenomenalist critics of the argument.”

Searle is not saying the mind is magical. He is saying that information processing by itself (functionality) is not sufficient to produce understanding. The person in the locked room is processing information, but clearly has no understanding of the questions and answers. They are meaningless symbols to him.

The person in the room is functionally a chat bot. Chat bots have a database of many possible answers that they give to questions, based upon some algorithm. This is a top-down brute force approach to simulating conversation – and I do think that simulation is the correct analogy here. The computer is not having a conversation, it is simulating one. (Set aside the implausibility of the system, as wonderfully lampooned in this SMBC cartoon.)

That, I think, is the primary lesson from Searle’s argument. However, Searle himself seems to derive a different conclusion. In 2010 he wrote:

To put this point slightly more technically, the notion “same implemented program” defines an equivalence class that is specified independently of any specific physical realization. But such a specification necessarily leaves out the biologically specific powers of the brain to cause cognitive processes.

He is saying that understanding (consciousness) is not magical but is specific to biology, and cannot be reproduced in silicon (only simulated). I think this is incorrect. The Chinese Room Argument does not lead to the conclusion that neurons are necessary, only that information processing is not sufficient.

The human brain can process information without producing understanding or consciousness. One might argue that most of the brain’s processing is subconscious. What is the difference between subconscious processing and conscious processing? That is perhaps the biggest question in neuroscience today. Researchers are making progress on an answer, but we are not there yet.

To illustrate further that information processing is insufficient (the real lesson from Searle’s argument), other analogies have also been proposed. One is to imagine a vast computer assembled from wooden blocks with symbols on them. The wooden blocks are manipulated (by millions of workers) in such a way as to implement an actual information-processing system. Could such a system ever become large and complex enough to be conscious? I think that this is an apt analogy and the answer is obviously no. Information processing is not enough.

What, then, is necessary? I don’t think it’s biology. I think that an information-processing system has to have components that specifically create what we experience as understanding, intention, and consciousness. For example, it needs to be able to spontaneously talk to itself, to monitor itself, and to tell the difference between a memory and active experience. These functions are missing from Searle’s analogy.

The human brain does not just process input and spit out output. The conscious brain is always active, spontaneously generating signals that “alert” the cortex and communicating among the various networks in the brain, which are fed input not just from the outside but from other parts of the brain. The brain is a massively parallel processor that can spontaneously find patterns, often complex and subtle ones.
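To give a rough sense of that architectural contrast, here is a deliberate caricature (my own toy sketch, not a model of the brain, and not anything claimed to be conscious): a loop that stays active with or without input, feeds its own states back to itself, and keeps past states tagged as memory rather than current experience.

```python
# A caricature of "more than input -> output": spontaneous activity,
# internal feedback, and a memory kept distinct from the present state.
# Nothing here is claimed to produce consciousness; it only names the functionality.
import random

class SelfMonitoringLoop:
    def __init__(self):
        self.memory = []    # past states, explicitly tagged as memory
        self.state = 0.0    # current internal activity

    def step(self, external=None):
        # Spontaneous activity: the system generates a signal even with no input.
        signal = external if external is not None else random.random()
        self.memory.append(self.state)                 # what just happened becomes memory
        self.state = 0.5 * self.state + 0.5 * signal   # feedback from its own prior state
        # Self-monitoring: compare the current state against the most recent memory.
        return self.state - self.memory[-1]

loop = SelfMonitoringLoop()
for t in range(5):
    loop.step(1.0 if t == 0 else None)  # one external input, then it runs on its own
```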

Answers to questions are not generated by simply memorizing specific answers to specific questions (looking them up in a massive database). Answers are generated by multiple layers of pattern recognition.
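As a toy illustration of the difference (again my own sketch, with made-up weights, not a model of how the brain actually does it), here is a tiny two-layer network in which an answer falls out of successive transformations of the input rather than out of any stored question-and-answer pair.

```python
# Lookup vs. layered pattern recognition: the tiny network below stores no
# question/answer pairs at all, only weights; the answer emerges from
# successive re-descriptions of the input.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # layer 1: 4 input features -> 8 hidden features
W2 = rng.normal(size=(2, 8))   # layer 2: 8 hidden features -> 2 output scores

def respond(features: np.ndarray) -> int:
    """Each layer re-describes the input as a new pattern of activity;
    the 'answer' is whichever output unit ends up most active."""
    hidden = np.tanh(W1 @ features)   # first layer of pattern recognition
    scores = W2 @ hidden              # second layer combines those patterns
    return int(np.argmax(scores))

print(respond(np.array([0.2, -1.0, 0.5, 0.0])))
```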

This is an incomplete answer. Research in this direction, however, seems to be working well.

I have argued before that I think our understanding of intelligence will progress through cooperative research programs in neuroscience and computer science. We will know we have succeeded when we can use our knowledge of how the human brain produces consciousness to build a computer that is conscious.

This leads back to the other major thought experiment in AI, the Turing test. The Turing test involves a human tester asking questions of a system behind a wall (which could be a human or a computer) and then trying to infer from the answers whether the system is conscious or not. I have argued previously that the Turing test is insufficient to determine whether a system is bottom-up conscious or a top-down simulation. We need to know something about how the system works, not just its output. A sufficiently sophisticated chat bot could pass the Turing test, but it would not be conscious.
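A short sketch of why output alone underdetermines the mechanism (the class names are hypothetical, purely for illustration): from the tester's side of the wall, a lookup chat bot and anything else present exactly the same interface.

```python
# The wall of the Turing test, as an interface: the tester can only ask()
# and read strings back. What sits behind the wall is invisible from outside.
# Both classes are hypothetical stand-ins, not real systems.

class LookupChatBot:
    """Top-down simulation: canned answers keyed on the question."""
    CANNED = {"How do you feel today?": "I feel fine, thanks for asking."}
    def reply(self, question: str) -> str:
        return self.CANNED.get(question, "That's an interesting question.")

class BehindTheWall:
    """All the tester ever gets to call."""
    def __init__(self, responder):
        self._responder = responder   # could be a chat bot, could be a person typing
    def ask(self, question: str) -> str:
        return self._responder.reply(question)

wall = BehindTheWall(LookupChatBot())
print(wall.ask("How do you feel today?"))  # the output reveals nothing about what produced it
```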

One might argue that a really good Turing test could sniff out the difference. Trained philosophers, for example, could push the Turing test to its limits, asking subtle questions that would root out a sophisticated chat bot. It seems to me, however, that this would lead only to an “arms race” of testers and chat bots and never resolve the question.

Conclusion

The Chinese Room Argument is a classic thought experiment in the question of AI and has led to a large amount of productive philosophical discussion, much like the Turing test. What I think this and all other apt analogies lead to, however, is the conclusion that in order for an information-processing system to be conscious it has to have functionality that specifically leads to consciousness.

I disagree with the strong-AI epiphenomenalists who argue that consciousness is an epiphenomenon that spontaneously emerges from any information-processing system that crosses some threshold of complexity (like V’Ger from Star Trek: The Motion Picture, or Skynet from The Terminator). But I also disagree with Searle and those who argue consciousness is specific to biology. Of course, I further disagree with the dualists who believe that consciousness is not physical at all.

Consciousness, rather, is a specific phenomenon that emerges from systems that contain functionality that specifically contributes to consciousness (spontaneous activity, self-monitoring, self-communication, etc.).

A further question is whether or not there is a threshold of complexity. Are snails conscious? Do they have a snail’s level of consciousness, or are they subconscious? I think the answer is both – there is a continuum of consciousness, but there is also a lower limit. I don’t think bacteria are conscious, for example.
