AI Future Shock

A recent meeting of computer scientists at Asilomar Conference Grounds on Monterey Bay in California discussed possible concerns about the future of computers – especially increasingly intelligent machines. I find this a fascinating topic. We are living through the transformation of our civilization by information technology – and we’re only getting started.

Some of the points raised gave me a “well, duh!” reaction. For example:

The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home.

New technology often threatens existing jobs. As technology advances, the job market must constantly evolve to match it. But while old jobs become obsolete, the technology creates new jobs – building, servicing, and utilizing the technology. It is probably safe to say that in 50 years a large portion of the jobs people hold today will no longer exist.

But they did raise some very interesting points, and it is not too early to attempt to anticipate potential problems. For example, what if criminals get hold of AI software that they can use to troll for personal information on the internet, or to tirelessly carry out their nefarious tasks? The abuse of AI by criminals and terrorists is a legitimate concern, and building in safeguards could be helpful. I liken this to the problems of spam, viruses, worms, and identity theft on the internet. When the internet and then the world wide web were first being developed, these kinds of things were not anticipated. If they had been, perhaps more security could have been built into the basic structure of the internet. Could we be living in a spam-free world today if it had been anticipated 20 years ago?

Another concern that was raised was AI becoming more intelligent than humans. In my opinion this is inevitable. It is not a question of if but of when. Unless we go the Dune route and ban machines that mimic the mind of man, we will have machines that are smarter than humans. This is not a problem in and of itself – only if these machines go beyond our control. We certainly should not build fully self-sufficient AI military robots whose mission and capabilities extend to killing humans. That seems like an accident waiting to happen.

Ray Kurzweil’s answer to this problem is that humans will merge with our machines. The AI of the future will be us – so no worries. This will likely be true to some extent, but it is difficult to say exactly how this will manifest.

One very good point I had not thought of was that we should begin an open dialogue on the issues raised by AI and computer advances before public opinion polarizes, as it has with genetically modified foods. This polarization is already happening to some degree, with “technologists” anticipating all the wonderful things this technology will do, while alarmists warn about the risks to humanity. Discussing this issue may ameliorate this polarization, or it may accelerate it. I predict the latter – because it is likely that those who are not participating in the early discussions of the impact of AI will react fearfully when the technology starts to hit.

A series of ethical controversies are coming – what are the rights of AI, what is the definition of human, is it ethical even to turn on an AI machine? We can and should discuss these questions, and try to anticipate and head off abuses. But if history is any guide, society is likely to stay one step behind the forces that are transforming it.

2 comments to AI Future Shock

  • matt g

    Hey Steve,

    You might want to have a read (if you haven’t already) of Kevin Warwick’s book “In The Mind Of The Machine”.

    It lays out a pretty credible hypothesis, looking at intelligence as a purely physical process. He compares the complexity of insect brains to the similarly complex neural network software he uses in robots. With a basic body plan similar in functional scope to an insect’s, his robots seem to be functionally similar to biological entities – they develop unique behavioural patterns that are not pre-programmed or hardwired etc, all as a result of chance and positive reinforcement strengthening the weighting of successful strategies.
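[The mechanism the comment above describes – chance exploration plus positive reinforcement strengthening the weights of successful strategies – can be sketched in a few lines. This is a toy illustration only, not Warwick’s actual software; every name, number, and reward value in it is hypothetical.]

```python
import random

def choose(weights):
    # Pick a strategy index with probability proportional to its weight
    # (the "chance" part of the scheme).
    return random.choices(range(len(weights)), weights=weights)[0]

def reinforce(weights, idx, reward, rate=0.1):
    # Strengthen (or weaken) the chosen strategy's weight by the reward,
    # floored so every strategy keeps some chance of being tried.
    weights[idx] = max(0.01, weights[idx] + rate * reward)

# Hypothetical environment: three candidate strategies, the third of
# which succeeds most often.
success_prob = [0.2, 0.5, 0.9]
weights = [1.0, 1.0, 1.0]

random.seed(42)
for _ in range(500):
    i = choose(weights)
    reward = 1.0 if random.random() < success_prob[i] else -0.5
    reinforce(weights, i, reward)

# After many trials the most successful strategy carries the largest
# weight, so it is selected most often - behaviour that was never
# explicitly programmed in.
```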

  • Remember the late 80s, early 90s “automation will kill our jobs” scare? Sure, some people lost jobs, but how many got employed building and programming the machines? Another failed prediction of the doomsayers.

    Anyway, we should not only have public debates concerning this issue, but also educate people so they actually know the extent of the technology and the implications of AI. With the GMO debate it’s mostly a problem of uneducated speculation concerning the dangers of genetically modified food. For instance, I found this insane flyer stating that completely new DNA might be dangerous: it’s still the same nucleic acids, people! (By the way, I’ll send you the flyer once I’ve scanned it… it’s quite disturbing what these people claim). People think that GM foods are completely novel, terrifying things, just because they don’t know anything of the underlying science.

    This lack of knowledge is a recurring problem in our democratic, technologically advancing societies: voters, consumers, just plain citizens make world-changing decisions while lacking the scientific background to make well-reasoned and balanced decisions.

    People will disagree on these issues, fine, but at least we should decide our policies on facts, and not on unfounded, luddite fears.

    And if we’re going to ban all “thinking machines”, that’ll be fine with me: I’ve got melange stock-piled and I am already on my way to becoming a Mentat.