Oct 13 2008

Artificial Consciousness


My post from last week on the upcoming Turing test provoked a great deal of interesting conversation in the comments – which is great. Short blog entries are often insufficient to fully explore a deep topic; I am frequently just scratching the surface, so there is often more meat in the comments than in the original post.

Some points came up in the comments that I thought would be good fodder for a follow up post.

Siener wrote:

Think about it this way: You are saying that a system can exist that acts like it is conscious, but unless it has some magical additive, some élan vital with absolutely zero effect on its behaviour, it cannot be truly conscious.

That is not what I am saying at all. From my many previous posts on the topic it is clear that I am not a dualist of any sort. I essentially agree with Daniel Dennett’s approach to the question of dualism. When I wrote that behavior alone is insufficient to determine if computer AI is conscious, I was not referring to some magical extra ingredient, but to a purely materialistic aspect of the AI itself.

What I am saying is that it is possible to create AI that is sufficiently complex to mimic human conversation without having consciousness. What Siener is saying is that this is impossible. Some argue that the real difference between conscious AI and zombie AI that merely mimics consciousness is the difference between emergent bottom-up processes and top-down processing. Consciousness, at present, is best understood as an emergent phenomenon of brain activity. But further – consciousness does not emerge from just any brain activity – it depends upon how the brain is wired.

For example, the cerebellum is the part of the brain that provides coordination and balance. It is, in a way, its own brain with as much complexity and processing as the cortex. But the cerebellum is not conscious. Why not? Because that is not part of its function. Consciousness does not emerge from the cerebellum.

Therefore, computer AI could perform processing as sophisticated as the cerebellum without even the possibility of consciousness emerging from its function. That processing may do things like mimic human conversation.

Further – Siener misinterprets my point when he says that my position contends that consciousness would have no effect on behavior. I am not saying that at all. Clearly, consciousness has a strong effect on behavior; it is the top of the hierarchy of behavior control. My point only requires that you can have behavior without consciousness. We have that now – current AI has behavior, and I don’t think anyone would argue that my PC (on which I run programs with AI) is conscious. The only question is whether that sort of non-conscious AI can become sophisticated enough to fool a person into thinking it is conscious. I think the answer is yes.
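To make that concrete, here is a minimal sketch (my own toy illustration, not a description of any particular chatbot) of the kind of purely top-down, rule-based processing that produces conversational behavior, in the spirit of the classic ELIZA program. Everything it “says” comes from a handful of hand-written string rules:

```python
import random
import re

# A few hard-coded pattern -> response rules, in the spirit of ELIZA.
# Every "conversational" behavior here is a simple string transformation;
# there is no understanding, no learning, and nothing emergent going on.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i think (.*)", ["What makes you think {0}?", "Are you sure that {0}?"]),
    (r"(.*)\?", ["What do you think?", "Why do you ask?"]),
    (r"(.*)", ["Tell me more.", "I see. Go on."]),
]

def respond(user_input: str) -> str:
    """Return a canned reply by matching the input against fixed rules."""
    text = user_input.lower().strip()
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(replies).format(*match.groups())
    return "Go on."  # unreachable given the catch-all rule, kept as a fallback

if __name__ == "__main__":
    print(respond("I feel that machines will never be conscious"))
```

Nothing about a program like this even raises the question of emergence; the behavior is specified entirely from the top down, yet with enough such rules it can be surprisingly convincing in short exchanges.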

Sonic wrote:

When I say that physics is essentially dualistic I mean that there is a physical aspect (Schrodinger’s equation) and a ‘mental’ aspect (the conscious choices made by experimenters.)
The choices are not fixed by any known laws of physics, yet the choices are asserted to have causal effects.
This is the formulation that is actually used by practicing physicists today. It is a dualistic (in terms of philosophy) approach.

I have to disagree here, and again see my earlier posts on dualism. It is perfectly compatible with modern physics to say that human behavior is entirely deterministic and grounded in the laws of physics. In fact, some take this to the conclusion that there is no true “free will,” meaning that all of our behavior and choices are determined by the physical processes going on inside our brains. Whether or not you agree with this, it is wrong to say that modern physics or science is dualistic, because no form of dualism is required by modern science, nor is it used.

Wallet55 wrote:

The discussion of chess programs, which were considered a form of Turing test, belies some of the assumptions about this test, the technology, and our own intelligence. Chess programs, which can now beat the best chess players, do not play chess in the heuristic way that we do. With some tweaks, they are essentially high-speed legal-move generators with static end-position analyzers. This of course is not elegant, is not chess playing, but darn it, it beats the crap out of most of us. Even when losing, though, most chess players can often sense they are playing a computer.

This is an excellent example of exactly what I am saying. Chess programs play chess by a different method than a human chess player, but the end result can be quite similar – indistinguishable from, and even better than, the best human chess players. But chess programs are not conscious.
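Wallet55’s description (a fast legal-move generator feeding a static end-position evaluator) is essentially brute-force lookahead with a hand-written scoring function. Here is a bare-bones sketch of that idea; it is my own illustration, the game interface is hypothetical, and real engines add alpha-beta pruning, move ordering, opening books, and much more:

```python
# Brute-force minimax: enumerate legal moves, look ahead a fixed depth,
# and score the resulting positions with a static evaluation function.
# The `game` object (legal_moves, apply, is_over, evaluate) is a hypothetical
# interface standing in for the rules of chess or any other two-player game.

def search(game, state, depth, maximizing=True):
    """Return (score, best_move) found by exhaustive lookahead to `depth`."""
    if depth == 0 or game.is_over(state):
        return game.evaluate(state), None  # static evaluation of the position

    best_move = None
    best_score = float("-inf") if maximizing else float("inf")
    for move in game.legal_moves(state):           # the "legal move generator"
        child = game.apply(state, move)
        score, _ = search(game, child, depth - 1, not maximizing)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```

The program never understands a position the way a player does; it scores leaf positions and picks the branch with the best number, which is exactly why it can beat most of us while still feeling, to a strong player, like a computer.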

To add to this – one of the advantages of consciousness is that it is broadly useful and adaptable. It enables conscious beings to deal with an essentially unlimited range of situations and new bits of information. It is not task-specific. It enables us to attend to many different types of sensory input and information processing and to think about them in new ways. A chess program, by contrast, plays chess, and a conversation simulator generates conversation. It is certainly possible to make more versatile programs with many applications, but when using a top-down approach (rather than emergent behavior) such programs will always be constrained.

If we create a truly versatile program, one with basic types of processing that can then be applied to new problems or situations, perhaps we will have something that is conscious. This is an interesting question, and I honestly don’t know what the answer is. I suspect our ideas about this will change as we progress in our reverse-engineering of the brain and our attempts at creating conscious AI. Right now we can only speculate, or extrapolate from our current knowledge, which is still pretty far away from conscious AI. But I would be surprised if our understanding of the true nature of consciousness (from a reductionist materialist viewpoint) does not change significantly as our knowledge and technology advance.

In this arena, the next 50 years should be very interesting.
