Mar 28 2017
Is AI Going to Save or Destroy Us?
Futurists have a love-hate relationship with artificial intelligence (AI). Elon Musk represents the fear side of that relationship, and two recent articles show how he is responding to that fear. In a Vanity Fair piece we learn:
He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”
We also learn from The Verge:
SpaceX and Tesla CEO Elon Musk is backing a brain-computer interface venture called Neuralink, according to The Wall Street Journal. The company, which is still in the earliest stages of existence and has no public presence whatsoever, is centered on creating devices that can be implanted in the human brain, with the eventual purpose of helping human beings merge with software and keep pace with advancements in artificial intelligence. These enhancements could improve memory or allow for more direct interfacing with computing devices.
So Musk thinks we need to enhance our own intelligence digitally in order to compete with the AI we are also creating, so that it doesn't destroy us. He is joined by Bill Gates and Stephen Hawking in raising alarm bells about the dangers of AI.
On the other end of the spectrum are Ray Kurzweil, Mark Zuckerberg, and Larry Page. They think AI will bring about the next revolution for humanity, and that we have nothing to worry about.
So who is right?
I am much closer to the Kurzweil-Zuckerberg end of the spectrum. First, I don’t think we are on the brink of creating the kind of AI that Musk and the others worry about.
How Close Are We To AI?
Seventy years ago, when it became clear that computer technology was taking off exponentially and that these machines were powerful information processors, it seemed inevitable that computers would soon exceed the capacity of the human brain, and that AI would emerge out of this technology. This belief was reflected in the science fiction of the time.
In 2001 (the 1968 film) we thought nothing of HAL being an AI computer (and one that goes a little funny and kills his crew). That time frame seemed about right. Even more telling were Star Trek: The Motion Picture and The Terminator. In both films computers awaken and become fully aware AIs simply by crossing some threshold of information and computing power. That plot element reflects the belief that AI was all about computing power – an assumption that turned out to be false.
Here we are in 2017, almost 50 years after the Kubrick film, and Moore's Law has held up fairly well. We have cheap, powerful computers, and supercomputers reaching for the exaflop level – a billion billion (a quintillion) calculations per second. The current fastest supercomputers are getting close to the raw computing power of the human brain, and we will soon exceed it.
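To put some very rough numbers on that comparison (the brain figures below are loose, commonly cited estimates, and the specific values are my own back-of-the-envelope assumptions, not anything from these articles – published estimates of the brain's capacity vary by orders of magnitude):

```python
# Back-of-the-envelope comparison: an exaflop supercomputer vs. a rough
# estimate of the brain's raw throughput. The brain numbers are loose
# assumptions for illustration only; published estimates vary widely.

EXAFLOP = 1e18            # one quintillion operations per second

synapses = 1e14           # roughly 100 trillion synapses
events_per_synapse = 100  # generous assumption: ~100 signals per second each
brain_events_per_sec = synapses * events_per_synapse  # ~1e16

print(f"Exaflop machine:     {EXAFLOP:.0e} ops/s")
print(f"Brain (rough guess): {brain_events_per_sec:.0e} synaptic events/s")
print(f"Ratio:               {EXAFLOP / brain_events_per_sec:.0f}x")
```

On those assumptions an exaflop machine overshoots the brain's raw event rate, which is the sense in which the fastest supercomputers are catching up to the brain – but as I argue below, raw throughput is not really the issue.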
I have no fear that when we finally turn on that first exaflop computer it will awaken and become self-aware. That notion now seems so quaint and misguided. There are two reasons for this.
The first is that standard computer architecture is simply different from that of vertebrate brains. Computers are digital and largely serial; the brain is analogue and massively parallel. This means they are good at different things.
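To make that contrast concrete, here is a minimal sketch of my own (a loose software analogy, not anything from the articles): a serial machine grinds through one operation at a time, while a brain-like system updates enormous numbers of simple units at once. Vectorized array math only approximates that parallelism, but it shows the difference in approach.

```python
import numpy as np

inputs = np.random.rand(1_000_000)   # a million incoming "signals"
weights = np.random.rand(1_000_000)  # a million connection strengths

# Serial style: one multiply-and-add at a time, like a single CPU core
# stepping through instructions in order.
total_serial = 0.0
for x, w in zip(inputs, weights):
    total_serial += x * w

# Parallel style: the whole array is handled as one operation, loosely
# analogous to millions of synapses all contributing at once.
total_parallel = float(np.dot(inputs, weights))

print(total_serial, total_parallel)  # same answer, very different execution model
```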
The fact is that standard computer hardware is simply not on a course to become artificially intelligent, because it does not function that way. You could theoretically run a virtual human brain on a standard computer architecture, but the hardware would have to be orders of magnitude more powerful than anything we have today. We are probably still decades away from such a computer, which would likely be the size of a building and require the power of a small city to operate.
We are, however, just beginning to develop neuromorphic chips. As the name implies, these are computer chips designed to function more like neurons – analogue and massively parallel. Chips like this are simply much more efficient at, and much better suited to, the kinds of things our brains do. I strongly suspect that if we ever do develop self-aware AI it will be with something like neuromorphic technology, and not standard computer technology.
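For a sense of what "functioning more like neurons" means, here is a simplified leaky integrate-and-fire neuron – my own illustration of the kind of spiking unit neuromorphic hardware implements directly in silicon, not code for any particular chip:

```python
# A leaky integrate-and-fire neuron: it accumulates input, slowly leaks
# charge, and fires a spike when it crosses a threshold. Simplified
# software illustration only.

def simulate_neuron(input_current, threshold=1.0, leak=0.95, steps=50):
    potential = 0.0
    spike_times = []
    for t in range(steps):
        potential = potential * leak + input_current  # integrate input, leak a little
        if potential >= threshold:                    # fire when the threshold is crossed
            spike_times.append(t)
            potential = 0.0                           # reset after a spike
    return spike_times

print(simulate_neuron(0.05))  # weak input: the neuron never fires
print(simulate_neuron(0.30))  # strong input: regular spiking
```

Millions of units like this, all accumulating and firing in parallel, is a very different mode of computation from a processor stepping through a program one instruction at a time.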
This brings me to the second reason I am not worried – computers (regardless of their architecture) are not simply going to wake up. We have learned how naive this idea was. Computers will have to be designed to be self-aware. It won’t happen by accident.
In fact, I have been using the term 'AI' to refer to self-aware general artificial intelligence. However, we already have AI of the softer variety. There is AI in your smartphone and in your video games. We already have software that can learn and adapt, and it can do this without the slightest self-awareness. We are even using neuromorphic chips for tasks like pattern recognition, which this type of computing does much better – again, without anything on the path to awareness.
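As a trivial example of learning without anything resembling awareness (again, a sketch of my own, not taken from anywhere): a perceptron "learns" the logical AND function by nudging three numbers until its answers come out right. That is all that is going on – no inner life required.

```python
# A perceptron learning logical AND: software that "learns" by adjusting
# weights, with nothing remotely resembling awareness involved.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, bias = 0.0, 0.0, 0.0

for _ in range(20):                    # repeat over the training examples
    for (x0, x1), target in data:
        output = 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
        error = target - output
        w0 += 0.1 * error * x0         # nudge the weights toward the right answer
        w1 += 0.1 * error * x1
        bias += 0.1 * error

# After training, the perceptron classifies all four inputs correctly.
print([(x, 1 if (w0 * x[0] + w1 * x[1] + bias) > 0 else 0) for x, _ in data])
```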
I am not afraid of AI because it seems to me that our computers will do what we want them to do, as long as we continue on the path of top-down engineering. AI will be able to do everything we need and want it to do without self-awareness. Our self-driving cars are not one day going to revolt against us.
I do think it is possible to develop a fully self-aware general AI that matches and then exceeds human intelligence. In fact, I think we will do this. Neuromorphic technology is the beginning. With computers designed to function like the brain there is the potential of reproducing the processes that produce awareness in humans. This, however, will not be easy. It will require a dedicated research and development program with self-aware AI as the goal.
The other possible path is that we model the human brain, even before we fully understand it. We are working to model the connectome – a diagram of all the connections in the human brain. We are also modeling the basic components of the brain, such as the cortical column. Once we have an accurate enough map of the brain, and the neuromorphic technology to reproduce its function, we could theoretically just build (virtually or in hardware) an artificial human brain. I believe a functional model of the human brain would be, in fact, a self-aware human brain.
I do think it would be a mistake to put such an artificial brain into a fully autonomous super robot, at least before we fully understand and control the technology. Think about why we would want to do this.
We would not do this to make robot slaves. Robot slaves should not be self-aware, and they don't have to be. They can do everything we need them to do without the burden or risk of self-awareness. If we want a self-aware AI to think for us, it would not need to be in a robot body. It could sit safely on a desktop.
We would have to be willfully careless, like the scientist in Caprica who built the Cylons (yeah, we should not build self-aware killer robots). Creating self-aware robots that get out of our control would require a program that is both deliberate and careless.
I am trying to envision an application that requires both self-awareness and autonomy. The only thing I can think of is space exploration, because in space a non-biological body is a huge advantage, and distance requires autonomy.
Further, by the time we can develop self-aware AI we will also be able, using the same technology, to enhance our own intellect and physical capabilities. This brings us back to Musk’s Neuralink – he obviously thinks the same thing, and wants to make sure that computer-brain interfaces are up to the task of allowing us to compete with our own AI.
For the foreseeable future self-aware AI will need to be the result of a massive and deliberate program, giving us the time to be careful. I don’t see any immediate need for creating the kind of AI that haunts our sci-fi nightmares.
Of course, if you go far enough into the future, all bets are off. But at the same time, we cannot predict what our own human capabilities will evolve into. There is no point in worrying about the distant future because we simply cannot predict what will happen.
So, I would say that I am not worried for the next 50 or even 100 years. We should continue to develop AI for the benefits it will bring us, and simply not invest millions of dollars and years of research in building self-aware killer robots.