Oct 17 2017

What Is Artificial Intelligence

A recent article by Peter Yordanov claims that Artificial Intelligence (AI) is nothing but misleading clickbait. This is a provocative way to state it, but he has a point, although I don’t think he expressed it well.

Yordanov spends most of the article describing his understanding of human intelligence, partly by walking through the evolution of the central nervous system. His basic conclusion, if I am reading it correctly, is that what we have today and call AI is nothing like biological intelligence.

This is certainly true, but it seems like he takes a long time to make what is essentially a semantic argument. The core problem is that the word “intelligence” means many things. Lack of a consistent operational definition plagues the use of the term in pretty much every context, and certainly in computer AI.

What we have now, and what is generally referred to as AI, are computer algorithms that display functions resembling intelligence, or that duplicate certain components of intelligence. Computers are good at crunching numbers, running algorithms, recognizing patterns, and searching and matching data. Newer algorithms are also capable of learning – of changing their behavior based on data input.
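To make “learning” concrete, here is a toy sketch in Python (the data and numbers are invented purely for illustration) of a perceptron, about the simplest learning algorithm there is. Its behavior is nothing but a handful of weights, and those weights shift a little with every example it sees:

    # A minimal perceptron: its behavior (the weights) changes as a
    # function of the data it is fed, which is all "learning" means here.
    def train_perceptron(samples, labels, epochs=20, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), y in zip(samples, labels):
                pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
                err = y - pred              # zero when the guess is right
                w[0] += lr * err * x1       # nudge the weights toward
                w[1] += lr * err * x2       # reducing the error
                b += lr * err
        return w, b

    # Toy data: the label is 1 when x1 + x2 > 1
    data = [(0, 0), (0, 1), (1, 0), (1, 1), (0.2, 0.3), (0.9, 0.8)]
    labels = [0, 0, 0, 1, 0, 1]
    print(train_perceptron(data, labels))

Nothing in there understands anything – the program simply ends up doing something it was never explicitly programmed to do.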

Doing some combination of these things with a powerful-enough computer can enable AI systems to beat grand masters at chess or Go, to compete with human champions at wide-ranging trivia games, and even to model human behavior and conversation. The latter are not yet able to consistently fool a human (the Turing test), but they are getting close and we will likely be there soon.

These are the things we call AI today. Yordanov is essentially saying that this is all well and good, but it is not the “intelligence” we mean when we refer to human intelligence. This is, of course, true. Computer AI is not self-aware, is not truly thinking, and has no understanding. It is duplicating the effects of these aspects of human intelligence – with great sophistication and some brute computing force.

It would be nice if we had a generally accepted term for what we currently call AI to distinguish it from what most people think of as AI – meaning self-awareness. “Machine learning” is fine but doesn’t cover the whole spectrum. There are specific technical terms for the various components, but a new umbrella term for everything short of self-awareness would be optimal.

The deeper question is – will current AI extrapolate to what is sometimes called general AI, which includes self-awareness? Yordanov writes that he believes the answer to that question is no, and I agree.

I do not think we will get to general AI with more and more sophisticated algorithms running on more and more powerful computers. We will make systems that are better at duplicating the effects of general AI, but will not be truly self-aware. I do think something else is required.

That something else is not biology, and there is no reason it cannot be created artificially (whether that material will be silicon or something else doesn’t really matter). What is needed is a functionality that current computer chips do not have.

We are not quite sure yet what that functionality is, because we have not yet reverse engineered the mammalian brain. But we have some ideas. For starters, the brain is neither hardware nor software; it is both simultaneously – sometimes called “wetware.” Information is not stored in neurons; the neurons and their connections are the information. Further, processing and receiving information transforms those neurons, resulting in memory and learning.
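One way to picture this is with a toy Hebbian learning rule (a deliberately cartoonish sketch, not a claim about real neurons): the “memory” below is nothing but changed connection strengths – there is no separate storage step.

    # Toy Hebbian learning: co-active units strengthen their mutual
    # connections, so processing the input *is* the act of storing it.
    import numpy as np

    n = 8
    weights = np.zeros((n, n))                    # connection strengths

    pattern = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # an arbitrary "experience"
    weights += 0.1 * np.outer(pattern, pattern)   # fire together, wire together
    np.fill_diagonal(weights, 0.0)                # no self-connections

    # A damaged cue is "completed" by one step of recurrent activation,
    # recovering the original pattern from the connections alone.
    cue = pattern.copy()
    cue[0] = 0                                    # corrupt one unit
    recalled = (weights @ cue > 0).astype(int)
    print(pattern)
    print(recalled)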

That much we know, and computer chips that function more like neurons are already being developed. I do suspect that the path to true AI goes through neuronal chips, rather than classic silicon chips.

But that also is not enough. Yordanov touches on this, but I want to emphasize it – the brain is wired to constantly talk to itself in an endless loop. Thoughts are information that feeds back into the loop of processing, which also takes in external information through the senses and the results of internal networks constantly reporting to each other, and then uses that information to generate more results.

This endless loop of communicating and processing information is our stream of consciousness. What we are currently researching but have yet to unravel are the exact networks involved, how they interact, and how that manifests as human-level consciousness. We have pieces, but not enough to put it all together.
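The basic shape of that loop is easy to sketch, even if the real thing is vastly more complex. Here is a cartoon in Python (the network and its weights are entirely invented): the network’s own output is fed back in as input on every tick, alongside “sensory” input.

    # The network's output at each step becomes part of its input at the
    # next step - an endless loop of self-talk plus sensory input.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 16
    W = rng.normal(scale=0.3, size=(n, n))    # recurrent connections (made up)
    state = np.zeros(n)                       # the current internal "thought"

    for t in range(5):                        # in the brain, this never stops
        sense = rng.normal(scale=0.1, size=n) # stand-in for the senses
        state = np.tanh(W @ state + sense)    # new thought from old thought
        print(t, np.round(state[:4], 2))      # a peek at the evolving stream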

This, I think, is where AI research and neuroscience will dovetail. We can use what we learn from neuroscience to design AI, which can then become an experimental model by which we can further advance our knowledge of intelligence and neuroscience.

Eventually we should be able to make a human brain in silicon. When we do there is every reason to think that that silicon brain will be self-aware – true general AI.

What is fascinating to think about is how it will be different from a human brain. We can experiment with turning different circuits up, down, on, or off and seeing how that affects the resulting AI. This, in turn, could give us a model for every mental illness.
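Such an experiment might look something like this hypothetical sketch, reusing the recurrent-network cartoon from above – copy the network, “lesion” one circuit, amplify another, and compare how the dynamics diverge:

    # A toy "lesion" experiment: silence one group of units, amplify
    # another, and compare the resulting dynamics against baseline.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 16
    W = rng.normal(scale=0.3, size=(n, n))   # the "healthy" network
    s0 = rng.normal(size=n)                  # one fixed starting state

    def run(W, s, steps=50):
        for _ in range(steps):
            s = np.tanh(W @ s)
        return s

    lesioned = W.copy()
    lesioned[:4, :] = 0.0                    # "turn off" units 0-3
    amplified = W.copy()
    amplified[4:8, :] *= 3.0                 # "turn up" units 4-7

    print(np.round(run(W, s0)[:8], 2))       # baseline
    print(np.round(run(lesioned, s0)[:8], 2))
    print(np.round(run(amplified, s0)[:8], 2))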

I also suspect that this will force us to reconsider what we think we know about the basic components of neurological function (beyond the obvious, like motor movements and recording visual information). What is the neurological substrate of empathy, hostility, creativity, reality checking, and the feeling that we occupy our bodies?

We may never be able to fully disentangle all the circuits and their interactions – it is so complex that the number of possible interactions is too great, making it like trying to predict the weather. We can only take it so far before chaos reigns.

Another lesson from all this, which I have discussed previously, is that what we can accomplish with non-self-aware AI is greater than we previously thought. We assumed that general AI would be necessary to beat a grand master at chess, but that assumption was wrong. Limited algorithmic AI can do amazing and sophisticated things, like driving a car, without being on the path to general AI.

This is why I predict that the imagined future of self-aware robot servants in every home will not come to pass. It won’t have to. Our robotic and computer infrastructure will be able to do everything we need it to do with limited AI. If we develop general self-aware AI, it will be for the research, to better understand human and artificial intelligence, and just to see if we can. General AI may then find some useful function, but that will not drive its development.

That function may also be mostly to enhance humans.

It’s all hard to predict, but fun and interesting to think about.
