Dec 12 2014

The Future Threat of AI

Occasional warnings about artificially intelligent robots taking over the world ripple through the media. The current ripple involves recent interviews with Stephen Hawking and Elon Musk. Their names attract attention, and so the issue will provide a media distraction for a day or two.

In an interview with the BBC, Hawking said:

“The development of full artificial intelligence could spell the end of the human race.”

“It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

In a June interview with CNBC, Elon Musk said:

“I think there’s things that are potentially dangerous out there. …There’s been movies about this, like ‘Terminator.’ There’s some scary outcomes and we should try to make sure the outcomes are good, not bad.”

Machines that can think are a staple of science fiction, indicating that there is a fascination with the topic. Most often artificially intelligent machines threaten humanity, such as in Terminator, The Matrix, and Battlestar Galactica. In the Dune series humanity is almost wiped out by machines, leading to a ban on any machines that mimic the mind of a person. Even in Star Wars, where droids are the humble servants of biological creatures, we are warned that if droids could think for themselves “we would all be in trouble.”

In Asimov’s Robot series the positronic brains of AI are designed from the ground up around the famous three laws of robotics, the first being that a robot may not injure a human being or, through inaction, allow a human being to come to harm.

The problem, of course, is that we are too early in the process of designing true AI (a fully self-aware machine intelligence) to predict what will happen when we cross that threshold. It is certainly reasonable to consider the risks.

There is general agreement that human-level intelligence does not represent any real limit, and so once AI achieves this level there is no reason why it won’t just blow past it. If we have a computer that can think as fast and as well as a human in 50 years, then in 100 years we might have a machine that can think a million times faster, with greater memory and fidelity, purer logic, and unpredictable motivation. It is only reasonable to consider whether such AI might pose a threat to humanity.

Those who are optimistic about AI point to the potential boon it could provide to humanity. AI could potentially accelerate research and technological development by orders of magnitude.

Also, our civilization is becoming increasingly complex and difficult to manage. AI might be the ultimate tool we need to help us manage our individual lives as well as our growing institutions. This prospect is simultaneously encouraging and frightening, as is AI itself. It would be great to have machines doing what people are not good at, such as maintaining persistent vigilance and attention to tiny details without letting anything slip through the cracks. At the same time, this could easily lead to a situation in which AI is in charge of our civilization, and we don’t even understand the institutions that control our lives.

This suggests another kind of threat that AI might pose. Science fiction focuses mainly on AI competing with humanity, enslaving us or wiping us out. AI, however, may also fulfill its role as caretaker of humanity just a little too well. It may take a paternalistic approach to the task, protecting us from ourselves and taking away our freedoms to keep us safe and secure. We might become an infantilized species under our robot caretakers.

Still others feel that none of this will happen because we will combine with our AI, not be replaced by it. We will implant supercomputers in our brains, becoming super AI ourselves.

At this point I don’t think we know what will happen. Perhaps at some point every possible outcome will occur to some degree, given enough time. Perhaps it is inevitable that machines will rule the universe, and biology is just a stepping stone. When we finally meet aliens, will they be biological, machines, or a fusion of the two?

We are, however, getting close enough to AI that we need to be thinking about possible outcomes every step of the way. Building in safeguards seems like a no-brainer (pun intended). We need to get creative about what those safeguards might be.

I also think we should avoid the most obvious risky steps, such as creating fully autonomous, self-replicating AI robots. Putting AI in command of weapons also seems like a horrifically bad idea. Humans always have to be in the loop, with their hand on the plug.

While these steps may seem obvious, my concern is that competition among nations may motivate some to forgo such safeguards, if nothing else out of fear that their enemies won’t. We may have an AI arms race. International agreements to avoid such outcomes, along with a mechanism to enforce them, seem prudent.

The bigger picture is that humans are developing many technologies that carry the potential for serious abuse or simply unintended consequences, such as nuclear, biological, and chemical weapons, and increasingly vital technological infrastructure. Meanwhile, our world is anything but universally enlightened and peaceful. We are making progress, but is it fast enough?

Perhaps, as Carl Sagan observed, all civilizations might go through this phase where there is a race between their technological development and their social maturity, with many not surviving. He was referring mostly to nuclear weapons, but there are other threats more subtle and profound.
