Oct 06 2008

An Upcoming Turing Test

In 1950 Alan Turing, in a paper entitled Computing Machinery and Intelligence, proposed a practical test to determine if a computer possesses true intelligence. In what is now called the Turing test, an evaluator asks questions of a computer and a person through text-only communication, not knowing which is which, and then must decide which is the computer. If the evaluator cannot tell the difference (or if 30% of multiple evaluators cannot), then the computer is deemed to have passed the Turing test and should be considered intelligent.

On October 12 the Loebner Prize for Artificial Intelligence will conduct a formal Turing test of six machines (the finalists in this year’s competition) – Elbot, Eugene Goostman, Brother Jerome, Jabberwacky, Alice, and Ultra Hal. (It seems that AI will have to endure whimsical names, probably until true AI can demand more serious names for itself.) The prize for the victor is $100,000 and a gold medal – and career opportunities that will probably dwarf the actual prize.

Ever since Alan Turing proposed his test it has provoked two still-relevant questions: what does it mean to be intelligent, and what is the Turing test actually testing? I will address the latter question first.

The Turing test is really testing the ability to simulate a natural and open-ended conversation, enough to fool a regular person. One way to “simulate” such a conversation is to actually be able to hold one. But another way is to employ a complex algorithm that either chooses canned responses from a large repertoire or constructs answers following a set of rules. Such algorithms exist and are referred to as artificial intelligence (AI). Anyone who has played a video game involving interaction with game characters has experienced such AI.
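To make the canned-response approach concrete, here is a minimal sketch in Python. It is my own toy illustration, not code from any actual competition entrant; the patterns and replies are invented, and real chatbots use vastly larger repertoires and more elaborate rules.

```python
import random
import re

# Minimal sketch of a canned-response chatbot in the ELIZA tradition.
# Each rule pairs a pattern with a small repertoire of stock replies.
CANNED_RULES = [
    (re.compile(r"\bhow are you\b", re.I),
     ["I'm doing well, thanks for asking.", "Can't complain. You?"]),
    (re.compile(r"\bwhat is your name\b", re.I),
     ["People call me Chatterbox.", "My name is Chatterbox."]),
    (re.compile(r"\byou\b", re.I),
     ["We were talking about you, not me."]),
]

FALLBACKS = ["Interesting. Tell me more.", "Why do you say that?"]

def respond(user_input: str) -> str:
    """Return a stock reply for the first matching pattern, else a fallback."""
    for pattern, replies in CANNED_RULES:
        if pattern.search(user_input):
            return random.choice(replies)
    return random.choice(FALLBACKS)

print(respond("How are you today?"))
print(respond("The weather is strange lately."))
```

Nothing here understands anything; the algorithm only maps input patterns to pre-written output.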

The Turing test, therefore, treats computers as black boxes – it does not assess what is going on inside the box; it merely judges the output. As a result, it cannot tell the difference between true intelligence and a clever simulation.

But this statement leads us only to the next question – what, if anything, is the difference?

Hugh Loebner, creator of the Loebner prize, has this to say:

There are those who argue that intelligence must be more than an algorithm. I, too, believe this.

I completely agree – depending upon how you define intelligence. Loebner seems to be using the term to mean consciousness, which is how I think most people interpret the term in this context. But the word “intelligence” can be used more broadly, and can refer simply to the ability to manipulate data. AI as a computer term takes this meaning as it applies to the ability to simulate human intelligence or compete against human players. You might also say, for example, that computers that are capable of beating world champions in chess are intelligent, but they are not conscious.

Computers have become increasingly powerful, but power alone will not achieve either intelligence or consciousness. Programmers, taking advantage of greater computing power, have created increasingly sophisticated AI algorithms (again, as any video-gamer can attest). But they are not yet close to passing the Turing test. At the bottom of this article is an example of a human and AI conversation. Read it and then come back… OK – pretty easy to tell the difference, right? The AI conversation was awkward and lacked any sign of a true thought process. It seemed algorithmic.

But I can imagine a day in the not-too-distant future when such AI can pass a Turing test. The algorithms will have to become much more complex, allow for varying answers to the same question, and make what seem to be abstract connections that take the conversation in new and unanticipated directions. You can liken computer AI simulating conversation to computer graphics (CG) simulating people. At first they appeared cartoonish, but in the last 20 years we have seen steady progress. Movement is now more natural, textures more subtle and complex. One of the last layers of realism to be added was imperfection. CG characters still seem CG when they are perfect, and so adding imperfections adds to the sense of reality. Similarly, an AI might want to sprinkle some random quirkiness into its conversational responses.
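As a toy illustration of those last two points, varied answers plus deliberate imperfection, a response generator might look something like the following sketch. Everything in it (the paraphrases, the quirks, the one-in-four rate) is my own invention for illustration:

```python
import random

# Toy sketch: vary the answer to the same question and occasionally inject
# a human-like imperfection. All phrasings and quirks are invented examples.
PARAPHRASES = [
    "I grew up in a small town.",
    "Small-town kid, originally.",
    "I'm from a small town, actually.",
]

QUIRKS = [
    lambda s: s + " Anyway, where was I?",       # a small digression
    lambda s: s.replace("town", "twon", 1),      # a stray typo
    lambda s: "Hmm... " + s,                     # hesitation before answering
]

def quirky_response() -> str:
    """Pick a paraphrase; about one time in four, add an imperfection."""
    reply = random.choice(PARAPHRASES)
    if random.random() < 0.25:                   # imperfection rate is arbitrary
        reply = random.choice(QUIRKS)(reply)
    return reply

# Ask the same question three times and get varying, sometimes quirky, answers.
for _ in range(3):
    print(quirky_response())
```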

The question is – will sophisticated-enough algorithms running on powerful-enough computers ever be conscious? What Loebner is saying, and I agree, is that the answer is no. Something more is needed.

What if, instead of using algorithms to pick canned answers, the AI program actually attempts to understand the meaning of the question, draws upon a fund of basic knowledge about the world and about itself, and then constructs an answer? This is a more complex process than a response algorithm. At a minimum the computer will have to understand human speech – it will have to have a vocabulary and a rather complete understanding of syntax and grammar, both to understand the question and to create a response.

Then it will have to have a vast knowledge base, including many facts that we take for granted. For example, it will have to know that when you let go of something it drops to the floor, that people need light to see, that bunny rabbits are cute, and that rotting food smells bad. How many such factoids are crammed into the average human brain?
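To sharpen the contrast with canned responses, here is a deliberately crude sketch of answer construction from a fact base. The fact store and the keyword "parsing" are toy stand-ins of my own; a real system would need genuine language understanding and enormously more knowledge:

```python
# Crude sketch: construct a reply from stored commonsense facts rather than
# select it from canned text. The facts and keyword matching are toy stand-ins.
FACTS = {
    "drop": "When you let go of something, it drops to the floor.",
    "light": "People need light to see.",
    "rotting food": "Rotting food smells bad.",
}

def answer(question: str) -> str:
    """Assemble an answer from every stored fact the question touches on."""
    q = question.lower()
    relevant = [fact for keyword, fact in FACTS.items() if keyword in q]
    if relevant:
        return " ".join(relevant)   # the reply is built, not picked whole
    return "I don't know anything about that yet."

print(answer("What happens if I drop rotting food?"))
# -> When you let go of something, it drops to the floor. Rotting food smells bad.
```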

In order to simulate a human, the AI will also have to have a personality, a persona, a “life.” This could simply mean that it needs a knowledge base about the person it is simulating – what they do, how old they are, what their life history is.

While I think it is easy to agree that an algorithm offering up canned responses is not conscious, it is more difficult to make that judgment about a system that is constructing responses. That’s because as the processing gets more complex it is possible to imagine that consciousness will emerge, and it becomes more difficult to see the differences between such an AI and a human brain. If the AI understands the rules of speech, so that it can both understand language and speak, and if it has a thorough knowledge base about itself and the world, and (here’s the key) it can take the abstract meaning of a question or statement, compare that to its knowledge base, make complex comparisons, search for patterns and connections, and then construct an answer based upon its “personality” – then how is that fundamentally different from a human brain?

I am not saying such AI would be conscious – just that we are getting a bit closer. I also think more is needed. The AI would have to have an internal state that it could monitor. It would have to be able to talk to itself – to think. There would need to be an active self-perpetuating process going on, not just a reaction to input.
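One way to picture that distinction, in a deliberately simplistic sketch of my own, is an agent whose main loop updates and monitors an internal state on every tick, whether or not any input arrives, rather than waking only to react:

```python
import queue
import random

# Deliberately simplistic sketch: an agent whose loop runs continuously,
# updating and monitoring an internal state, instead of only reacting to
# input. Everything here is invented for illustration.
class SelfMonitoringAgent:
    def __init__(self):
        self.inbox = queue.Queue()
        self.mood = 0.0          # a stand-in for internal state
        self.inner_log = []      # the agent "talking to itself"

    def step(self):
        """One tick: think regardless of input, then handle input if any."""
        self.mood += random.uniform(-0.1, 0.1)                  # state drifts on its own
        self.inner_log.append(f"mood is now {self.mood:+.2f}")  # self-monitoring
        try:
            message = self.inbox.get_nowait()
            self.inner_log.append(f"heard: {message}")
        except queue.Empty:
            pass  # no input, but the internal process continues anyway

agent = SelfMonitoringAgent()
agent.inbox.put("hello")
for _ in range(5):
    agent.step()
print("\n".join(agent.inner_log))
```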

What about feeling? Would the AI have to feel in order to be conscious? This is a tough one and I could be persuaded either way. On the one hand you could argue that consciousness and emotions are not the same thing – a conscious being could lack emotions.  On the other hand, if by “feeling” we mean anything that constitutes the subjective experience of one’s own existence, well then, yes. I think it would have to “feel” to be conscious (stated this way, however, this might just be a tautology).

What about the ability to adapt and learn? Is this a prerequisite for consciousness? It is certainly a property of human intelligence. Our brains adapt and learn, even changing their hard-wiring in response to repeated behavior and experience. Could an AI be conscious but static – unable to change? It’s hard to imagine, but I cannot say exactly why this would need to be a prerequisite. Part of my difficulty is in addressing the broad question of what consciousness is, rather than what human consciousness is. It is easier to say whether or not an AI would be conscious in all the ways that humans are, but more difficult to address whether it has some form of consciousness, just different from human consciousness or lacking in some respects.

There may be other functions required for consciousness that I have not touched upon yet. For example, we know that human brains hold pieces of information in their working memory, and they can manipulate these pieces of information. We also have the ability to focus our attention. So, would an AI need to have something that is deemed “attention,” where it focuses on a subset of its knowledge or stimuli? If it is manipulating data but not paying attention to it, is that the same as subconscious processing in humans? Without the built-in ability to pay attention, would AI be entirely “subconscious” and therefore not conscious at all?
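As a rough sketch of that idea (again my own toy construction, with invented stimuli and salience scores), attention could be modeled as passing only the most salient few items on for further processing while the rest are handled, if at all, without attention:

```python
# Toy sketch of attention: score incoming stimuli for salience and pass only
# the top few on for "conscious" processing. Stimuli and scores are invented.
def attend(stimuli: dict[str, float], capacity: int = 2) -> list[str]:
    """Return the `capacity` most salient stimuli, by descending score."""
    ranked = sorted(stimuli, key=stimuli.get, reverse=True)
    return ranked[:capacity]

stimuli = {
    "ticking clock": 0.1,
    "question from evaluator": 0.9,
    "hum of the fan": 0.05,
    "error in own last answer": 0.7,
}

focus = attend(stimuli)
print("attending to:", focus)
# Everything else is processed, if at all, "subconsciously".
unattended = [s for s in stimuli if s not in focus]
print("unattended:", unattended)
```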

This all leads to the final question – how would we know? I think this points to the fundamental weakness of the Turing test: it looks only at the output, not at the process. I don’t think we could ever know whether an AI was conscious based entirely on output, because I think we will eventually develop AI powerful enough to simulate human intelligence more than well enough to pass the Turing test.

In order to judge whether an AI was truly conscious I think we need not only to look at its behavior, but also at what is going on inside the black box. We need to consider basic principles – is the AI paying attention, is it thinking, is it able to make new connections from existing knowledge, to actually increase its knowledge simply by thinking? We will know we have true consciousness because we built it to be conscious, not just to simulate it.

This, of course, requires that we know what consciousness is and how it is created, which leads back to neuroscience. As we reverse engineer the human brain we are learning how it creates consciousness. While we do not have all the pieces yet, progress continues without slowing.  And, as I have written before, the tasks of understanding the human brain and building AI are increasingly intertwined.

The moment the Turing test is passed, it will become obsolete. For now it is an interesting milestone in the development of AI. But once we have passed that milestone it will become obvious that it does not really mean anything. Simulating human conversation is an important technology – but it is not machine consciousness. The focus is already shifting to understanding the nature of consciousness itself.
