Feb 28 2008

I for one welcome our new robotic overlords.


University of Sheffield professor and computer scientist Noel Sharkey, best known for his appearances on the BBC show Robot Wars, warned in a talk before Britain’s Royal United Services Institute that automated military robots “pose a threat to humanity.” I agree. Seriously.

Well, OK – not right now. But it is not too early to think about the implications of developing increasingly automated robots designed for warfare. While I think it is an unlikely scenario that such machines will take over the world anytime this century, as in The Matrix or Terminator, they may pose a credible risk in the near future.

Sharkey warned that such machines may fall into the hands of our enemies. They could then be hacked, reprogrammed, or simply commandeered and used against our side. Or they could be copied. We may see cheap Chinese knockoffs of our latest warbot showing up on the battlefield. This may in turn lead to an arms race of robotic warfare – and that can only lead to Skynet blowing up the world.

Even if it does not get that far, we still may have a world threatened by hundreds of millions of killer robots, some of which could be in the hands of terrorists or psychotic dictators. Of course you could say this about any weapon. Nuclear weapons are inherently dangerous because they may fall into the hands of dictators or terrorists. But this just reinforces the point, as the former has already happened and there are serious concerns about the potential of the latter.

Another cause for concern is malfunction. A killer robot, armed and armored, and fully automated with subroutines that basically involve killing human beings and destroying infrastructure, might fall prey to what is technically called a “glitch.” Most computer users are familiar with this phenomenon. The difference between engaging the enemy and going on a bloody rampage may be a fine line.

Any such system would, of course, have fail-safes – manual override, a kill switch, redundancies, and a self-destruct if all else fails. But fail-safes only reduce the risk of error; they can never eliminate it. And war is often chaotic and unpredictable. The best-laid plans of our military leaders may break down on the battlefield, in the hands of a panicky private, or in the midst of a hasty retreat. No one can see all ends.

What are some options to prevent what I will call the Cylon scenario? Well, we could never build such things. That would require an international treaty limiting the development and deployment of automated robots designed to kill or destroy. (The legal and technical mavens can work out all the details as to how to define this.)

Or – we could develop robots for warfare that are not automated but that always require a human driver (even if remote). They would simply be incapable of “pulling the trigger” on their own. There is a gray area here as well, though. Can one human driver coordinate dozens of robots – like playing a video game?

I predict that we and others will develop some type of automated robots for various aspects of warfare. We already have, in fact, and I see no reason why this trend will slow down. Developers will use the “fail-safe” argument in their defense, and I can only hope that there is sufficient oversight to truly minimize the probability of disaster. I will also note that for now I am much more worried about rogue nukes than killer robots.

But there is one line I recommend we never cross – developing truly artificially intelligent, automated, mobile military robots. Especially ones that are independent, meaning they can repair themselves and secure their own power source. And especially especially ones that can replicate themselves or build new robots. One could argue that any independent robotic AI is a long-term risk. Why should we seed the universe with entities that will out-compete us in every way and may just decide that we are in their way?

One answer to this (favored by Ray Kurzweil) is that we will not build robotic AI, we will become robotic AI. We will merge with our machines and they will be us. An interesting, if completely speculative, idea.

It’s all too far in the future for any confident predictions. But I do believe we should think very seriously every step of the way as we build more and more intelligent robots that are more and more automated and independent.

Klaatu barada nikto!
