Mar 30 2007

More on Computers and Consciousness

Since the topic of artificial intelligence has garnered so much interest, and there were many excellent follow-up questions, I thought I would dedicate my blog today to answering them and extending the discussion.

Noël Henderson asked: “Would a non-chemical AI unit, even with very complex processing and memory capabilities, be able to experience what we normally refer to as emotion? Is self-awareness (and in my layman’s understanding I tend to think in terms of ‘ego’) dependent upon the ability to experience emotion?”

Emotion is just one more thing that our brains do. There is no reason that an AI with a different substrate cannot also create the experience of emotions. Emotions are a manifestation of how information is processed in the brain – which is why I said that an AI brain would not only have to hold the information of the brain but would also need to duplicate its processing of that information. For example, learning of the death of someone the AI brain has come to associate with positive feelings could produce the experience of sadness and loss by affecting the degree of activity in the circuits that contribute to mood, focus attention on pleasant or unpleasant details, shape the anticipation of future happiness, and so on – basically, the same thing that happens in a biological brain. In other words, the experience of emotion is just as much a physical aspect of the brain as any other cognitive phenomenon.

I don’t think emotions are a prerequisite to consciousness or self-awareness. Without them you will simply have a non-emotional consciousness.

Jim Shaver wrote: “Isn’t it reasonable to expect that the development of an artificially intellegent consciousness would have to come first, before the technology to “download” the content of a biological brain into it?”

It depends on the process. In my example, consciousness is not directly downloaded (uploaded?); rather, a second brain is massively connected to your biological brain, and then its circuitry (or whatever it is made of) slowly models itself after the pattern of connections and activity in the biological brain while the two work together, much like the two hemispheres of the brain. The AI brain does not have to be conscious at the start of the process, and in fact we don't even have to know how to program it, as long as it is designed to map its function to the biological brain. In a way this is like using the brain as a template for the AI brain – we don't have to know or understand every detail of the design; we just need to copy the template.

Jim also wrote: “And while predictions of 40 to 50 years are provocative, I think an estimate of ten or one-hundred times that number might be more realistic.”

I agree that we need to be very cautious about making predictions; as Steve Pinker wrote, it's an invitation to look foolish. But we can make some common-sense extrapolations to arrive at reasonable probabilities about future developments. Historically, there is a trend to overestimate short-term technological progress but underestimate long-term progress. I think Kurzweil is correct when he says that people tend to project progress linearly, when in fact information technology is progressing at an accelerating geometric pace. If you superimpose a straight line over a geometric curve, you will see that the straight line overestimates short-term progress and underestimates long-term progress.
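
To make that picture concrete, here is a minimal sketch in Python of superimposing a straight line on a geometric curve. The two-year doubling time and the 20-year window over which the line is drawn are assumptions chosen purely for illustration, not real-world figures:

# A straight line laid over a geometric (exponential) curve overestimates
# progress in the near term and underestimates it in the long term.
# The 2-year doubling time and the 20-year fitting window are assumptions
# chosen only to illustrate the shape of the comparison.

DOUBLING_TIME_YEARS = 2.0
FIT_HORIZON_YEARS = 20.0

def geometric(years: float) -> float:
    """Capability relative to today, doubling every DOUBLING_TIME_YEARS."""
    return 2.0 ** (years / DOUBLING_TIME_YEARS)

def straight_line(years: float) -> float:
    """A linear projection that agrees with the curve today and at the fit horizon."""
    slope = (geometric(FIT_HORIZON_YEARS) - 1.0) / FIT_HORIZON_YEARS
    return 1.0 + slope * years

for years in (1, 5, 10, 20, 30, 40, 50):
    print(f"{years:>2} yrs: line x{straight_line(years):>14,.1f}   curve x{geometric(years):>14,.1f}")

With those assumed numbers, the straight line sits well above the curve for the first couple of decades and then falls hopelessly behind it – the over-then-under pattern described above.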

We can say this – if information technology continues to progress along the same geometric curve as it has been, then in 40-50 years we will have more than enough computing power, in a small and energy-efficient package, to exceed the computing power and memory capacity of the human brain. If these estimates are off, it will probably be by 10-20 years, not hundreds or thousands. The other questions are more difficult to answer – will we be able to replicate the complexity of biological brain function in order to create AI, or can we develop the technology to scan the brain to make a copy, or to use it as a template as I described above? I don't know, but I think these technologies are feasible within roughly the same time scale.
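
Here is a rough back-of-the-envelope version of that extrapolation. The two-year doubling time, the present-day baseline, and the often-quoted (and contested) estimate of the brain's raw capacity are all assumptions rather than established figures:

# Back-of-the-envelope extrapolation. All three constants are assumptions
# used only for illustration: a 2-year doubling time, a present-day machine
# at roughly 10^14 operations per second, and the commonly cited (and
# disputed) estimate of about 10^16 operations per second for the human brain.
DOUBLING_TIME_YEARS = 2.0
TODAYS_OPS_PER_SEC = 1e14
BRAIN_ESTIMATE_OPS_PER_SEC = 1e16

for years in (10, 20, 40, 50):
    projected = TODAYS_OPS_PER_SEC * 2 ** (years / DOUBLING_TIME_YEARS)
    ratio = projected / BRAIN_ESTIMATE_OPS_PER_SEC
    print(f"{years} years: ~{projected:.1e} ops/sec, about {ratio:,.1f}x the brain estimate")

Under those assumptions the crossover comes well inside the 40-50 year window, and being off by several doublings shifts the date by a decade or two, not by centuries.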

I don’t think an AI brain would have to be analog – a digital AI could duplicate the function of an analog brain (virtually creating an analog brain), but it would take more processing power. So we may end up needing an AI brain with a million times the processing power of our brain in order to duplicate its function (and this overhead is already taken into consideration in the above time estimates).
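
To illustrate why emulating analog dynamics digitally costs extra computation, here is a toy sketch. The leaky-integrator model and every number in it are illustrative stand-ins, not a model of real neurons: a continuous process is approximated with discrete time steps, and the finer the steps (the better the approximation), the more operations it takes.

# Toy digital emulation of a continuous ("analog") process: a leaky
# integrator stepped forward in small discrete increments. Finer time steps
# give a more faithful emulation but require proportionally more operations,
# which is the sense in which a digital brain needs extra processing power
# to behave like an analog one. All values are illustrative.

def simulate_leaky_integrator(input_current: float, leak_rate: float,
                              duration: float, dt: float) -> tuple[float, int]:
    """Return the final potential and the number of update steps performed."""
    potential = 0.0
    steps = int(duration / dt)
    for _ in range(steps):
        potential += dt * (input_current - leak_rate * potential)
    return potential, steps

for dt in (0.1, 0.01, 0.001):  # finer steps: better fidelity, more work
    v, steps = simulate_leaky_integrator(input_current=1.0, leak_rate=0.5,
                                         duration=10.0, dt=dt)
    print(f"dt={dt}: final potential {v:.4f} using {steps:,} steps")

The same trade-off, scaled up to an entire brain's worth of continuous activity, is where the large multiplier in processing power would come from.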

Jim further writes: “Assuming the technology were good enough, why would the biological brain have to be destroyed in the process? And if both artificial brain and biological brain live, which one is really you in the end? If we could transfer the copy of consciousness into a biological clone, one could argue that the clone would be just as much you as the original you, and now we have a particularly sticky ethical paradox.”

The biological brain would not have to be destroyed in my example. I am assuming that eventually the biological brain will die due to natural aging. Of course, we may also develop the technology to keep the brain alive and healthy indefinitely, in which case we could prolong the biological/AI hybrid phase as long as we wished.

Regarding which brain is really you – that’s the beauty of my method: they are both you. You slowly evolve into a different kind of intelligence. Think about it this way: are you the same person you were when you were five years old? No. You are very different; you have developed and matured. You might be a dramatically different person, but the shadow of that 5-year-old still lives within you. You were you throughout the entire process of maturing from an infant into an adult. That’s continuity. Now we are just using technology to extend the maturing and growing process. You will slowly mature into the hybrid AI just as you slowly matured into an adult, and the whole time it will still be you. The massive interconnection means that the biological brain and the AI brain function as one. Slowly, the biological brain may shrink to insignificance, so that you will mostly be the AI brain.

Also, think of it this way – if we take on the goal of eventually becoming a computer-based intelligence (for its apparent advantages, putting aside any ethical concerns for a moment), then how can we get ourselves into an AI substrate without running into problems of continuity, or disturbing questions of whether or not the AI will really be us? I think the only solution is to slowly mature/grow/evolve from a biological consciousness to an AI consciousness through something similar to what I outlined.

Shayne wrote: “How is continuity of consciousness preserved when we sleep or are heavily sedated? If it isn’t, then am I deluded to think that I’m anything more than a replica of another person who existed yesterday?”

Continuity does not imply that wakefulness is unbroken. It means that the substrate of consciousness is continuous. I think that copying or moving the information in the brain wholesale over to another substrate will not preserve continuity – it will just make a copy. It will not be a continuation of the thread of your self-awareness. Some people think this doesn’t matter, but I am not happy with that position. Let’s put it this way – if it were available now, I would not do it.

Nathan D wrote: “Daniel Dennet said of people who believe consciousness is an epiphenomenon: ‘I am flabbergasted that anyone takes this view seriously. It’s insane. Epiphenomenalism is exactly as absurd as the following view: In every cylinder, of every internal combustion engine, there are seven epiphenomenal gremlins. They’re caused by the action of the cylinder; they cause nothing in turn. They are undetectable by any machine, by any test — there couldn’t be a gremlinometer, they don’t add to the horsepower, they don’t add to the weight, they don’t add to the mass . . . they are completely epiphenomenal. The very concept of epiphenomenalism, of effects that have no effects, is completely unmotivatable. Always, always, always. It’s defined in such a way that you could never have any possible reason to assert it. It’s trivial. There could be no motivation for asserting it.'”

I plan to write about consciousness itself in the future. But regarding the use of the term “epiphenomenon,” I disagree with the narrow definition Dennett is using. The term is often used to refer to a higher-order phenomenon that emerges spontaneously from lower-order processes. For example, Stephen Jay Gould argued that an increase in complexity is not inherent to evolutionary processes; that complexity increases in some lineages is an epiphenomenon of evolution.

Consciousness is an epiphenomenon of neurological function – of the perception of stimuli, internally generated neuronal activity, and information storage and processing. In fact, the term is often used to imply that there isn’t a “gremlin” inside our heads. Consciousness is not some thing; it is not its own kind of phenomenon, as Allan Wallace and others argue; there is no spirit in the machine. It can also be referred to as an emergent property of brain activity. I admit these terms seem inadequate; that is because there really isn’t an exact analogy for our own consciousness. It is hard to put into words exactly what it is. But more on this later.
