A recent meeting of computer scientists at Asilomar Conference Grounds on Monterey Bay in California discussed possible concerns about the future of computers – especially increasingly intelligent machines. I find this a fascinating topic. We are living through the transformation of our civilization by information technology – and we’re only getting started.
Some of the points raised gave me a “well, duh!” reaction. For example:
The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home.
New technology often threatens existing jobs. As technology advances, the job market must constantly evolve to match it. But while old jobs become obsolete, the technology creates new ones – servicing and utilizing the technology. It is probably safe to say that in 50 years a large portion of the jobs people hold today will no longer exist.
But they did raise some very interesting points, and it is not too early to attempt to anticipate potential problems. For example, what if criminals get hold of AI software that they can use to troll the internet for personal information, or to tirelessly carry out their nefarious tasks? The abuse of AI by criminals and terrorists is a legitimate concern, and building in safeguards could be helpful. I liken this to the problems of spam, viruses, worms, and identity theft on the internet. When the internet and then the world wide web were first being developed, these kinds of things were not anticipated. If they had been, perhaps more security could have been built into the basic structure of the internet. Could we be living in a spam-free world today if spam had been anticipated 20 years ago?
Another concern that was raised was AI becoming more intelligent than humans. In my opinion this is inevitable. It is not a question of if but of when. Unless we go the Dune route and ban machines that mimic the mind of man, we will have machines that are smarter than humans. This is not a problem in and of itself – it becomes one only if these machines go beyond our control. We certainly should not build fully self-sufficient AI military robots whose mission and capabilities extend to killing humans. That seems like an accident waiting to happen.
Ray Kurzweil’s answer to this problem is that humans will merge with our machines. The AI of the future will be us – so no worries. This will likely be true to some extent, but it is difficult to say exactly how this will manifest.
One very good point I had not thought of was that we should begin an open dialogue on the issues raised by AI and computer advances before public opinion polarizes, as it has with genetically modified foods. This polarization is already happening to some degree, with “technologists” anticipating all the wonderful things this technology will do, while alarmists warn about the risks to humanity. Discussing this issue may ameliorate this polarization, or it may accelerate it. I predict the latter – because it is likely that those who are not participating in the early discussions of the impact of AI will be the ones who react fearfully when the technology starts to hit.
A series of ethical controversies is coming – what are the rights of AI, what is the definition of human, is it ethical to even turn on an AI machine? We can and should discuss these questions, and try to anticipate and head off abuses. But if history is any judge, society is likely to stay one step behind the forces that are transforming it.