Aug 02 2019

Can an AI Hold a Patent?

The BBC reports a case in which an artificial intelligence (AI) system is named as a possible patent holder for a new invention, an interlocking food container. Apparently none of the people involved with the invention meet the criteria for being a patent holder, since they did not come up with the actual innovation.

As a result, two professors from the University of Surrey have teamed up with the Missouri-based inventor of Dabus AI to file patents in the system’s name with the relevant authorities in the UK, Europe and US.

That’s an interesting solution. It does seem that international patent law needs to evolve in order to deal with the product of machine learning creativity. I think this reveals what a true game-changer current AI can be. It’s breaking our existing categories and legal framework.

But I don’t want to talk about patent law, about which I have no expertise – I want to talk about AI, about which I also have no expertise (but I do have a keen interest and pay attention to the news). Over the last few years there have been numerous developments that show how powerful machine learning algorithms are becoming. Specifically, they are able to create solutions that the AI programmers themselves don’t fully understand. The Dabus system itself uses one component to generate new ideas based on being fed noisy input, while a second component evaluates those ideas and gives the first component feedback. This arrangement, with two AI systems playing off each other, creates a feedback loop that can rapidly iterate on and improve a design or solution. So essentially we have two AIs talking to each other, and humans are largely out of the loop.
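To make that feedback loop concrete, here is a minimal Python sketch of a generate-and-evaluate cycle. It is purely illustrative: the noise-driven generator, the scoring function, and the idea of representing a “design” as a list of numbers are my own assumptions, not a description of how Dabus actually works.

```python
import random

def generate(seed_design, noise=0.5):
    # First component: perturb an existing design with random noise
    # to propose a new candidate. Here a "design" is just a list of numbers.
    return [x + random.uniform(-noise, noise) for x in seed_design]

def evaluate(design):
    # Second component: score the candidate. This toy score simply rewards
    # designs whose values sit close to 1.0 (a stand-in for "usefulness").
    return -sum((x - 1.0) ** 2 for x in design)

def feedback_loop(initial_design, iterations=1000):
    best, best_score = initial_design, evaluate(initial_design)
    for _ in range(iterations):
        candidate = generate(best)        # the generator proposes
        score = evaluate(candidate)       # the evaluator critiques
        if score > best_score:            # feedback: keep only improvements
            best, best_score = candidate, score
    return best, best_score

print(feedback_loop([0.0, 0.0, 0.0]))
```

Even this toy version shows the point: once the proposing and the critiquing are both automated, the loop can run as many iterations as you like with no human judgment in between.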

AI systems have come up with simulations and other solutions that the scientists using the system do not understand. Sometimes they don’t even know how it was possible for the AI to come up with the solutions it did.

Even more interesting, AI systems have developed their own language that they use to communicate with each other, and no human currently understands that language.

And finally, AI systems have developed other AI systems that are more efficient than any human design. So now the self-improvement feedback loop is complete. Extrapolate this process out 20, 50, or 100 years. What will it be like when we have an AI system that is 100 generations removed from any human programmer?
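The same loop can be run one level up, where the thing being generated and scored is itself a model architecture rather than a design, which is roughly the spirit of neural architecture search. The sketch below is a deliberately simplified, hypothetical version; the architecture encoding and the fitness function are invented for illustration and are not any real system’s method.

```python
import random

def random_architecture():
    # Hypothetical encoding of a model: number of layers and units per layer.
    return {"layers": random.randint(1, 8),
            "units": random.choice([16, 32, 64, 128])}

def mutate(arch):
    # One "AI-designed" child derived from a parent architecture.
    child = dict(arch)
    child["layers"] = max(1, child["layers"] + random.choice([-1, 1]))
    child["units"] = random.choice([16, 32, 64, 128])
    return child

def fitness(arch):
    # Placeholder for "train the candidate and measure accuracy per unit cost".
    return (arch["layers"] * arch["units"]) / (arch["layers"] ** 2 + arch["units"])

def evolve(generations=100, population=10):
    pool = [random_architecture() for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=fitness, reverse=True)
        survivors = pool[: population // 2]                 # keep the best half
        pool = survivors + [mutate(a) for a in survivors]   # and mutate them
    return max(pool, key=fitness)

print(evolve())
```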

The potential benefits of these systems are already being realized, but the obvious question remains: what is the risk? I have previously argued that as these machine-learning systems improve, it has become clear that they can do pretty much anything we need an AI system to do, without the need for general AI or what we would consider self-awareness. This has further led me to conclude that the typical science fiction scenario of an army of AI robots deciding that, all things considered, they would rather no longer be our slaves, is less likely than feared. We don’t need to build self-aware robots. Task-specific algorithms will likely do the trick.

At the time several commenters pointed out that self-awareness is not necessary for AI to present an existential threat to humanity. I find this argument increasingly compelling over time, as we see how powerful these AI systems have become in a short time, and how quickly they are becoming truly independent. It’s possible that the power of machine learning may have saved us from one AI apocalypse scenario, but made another more likely.

It does seem like it would be easier to keep a machine-learning but not self-aware algorithm on rails. At no point can it decide that it simply wants to do something else. But perhaps it doesn’t matter. If we have AI systems designed by other AI systems that were in turn designed by other AI systems, talking to still other AI systems in a digital language we cannot fathom, and coming up with solutions we don’t understand – how much can we really keep that on rails? Such a system may “decide” that a particular solution is the most efficient, even though it will have an unintended consequence unacceptable to humanity.

I guess it comes down to risk management. We have to be careful before giving any AI system the ability to control infrastructure. We need to test the hell out of them to make sure that their behavior, at least, is as predictable as possible. Run simulations in virtual reality to see how often they destroy the world. There are many modern technologies that have the potential for huge risks and benefits, like genetic manipulation and nuclear energy. We can reap the benefits and manage the risks, if we are careful.
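As a rough illustration of that “test it in simulation” idea, a harness like the following just runs a controller through many randomized simulated episodes and reports how often it ends in a catastrophic state. The simulator, the controller, and the failure condition are all placeholders I made up for the sketch, not a proposal for how real infrastructure testing works.

```python
import random

def simulate_episode(controller, steps=100):
    # Toy simulator: the controller nudges a single state variable each step;
    # the episode counts as a catastrophic failure if the state ever leaves
    # the "safe" range.
    state = 0.0
    for _ in range(steps):
        state += controller(state) + random.gauss(0, 0.1)
        if abs(state) > 10.0:
            return False
    return True

def estimate_failure_rate(controller, episodes=10000):
    failures = sum(not simulate_episode(controller) for _ in range(episodes))
    return failures / episodes

# A hypothetical controller that tries to keep the state near zero.
print(estimate_failure_rate(lambda s: -0.2 * s))
```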

AI technology is no different. But we do have to think about the potential risks, and put systems in place to mitigate those risks. I do think this is more manageable than a self-aware AI that can deliberately deceive us in order to achieve a goal it determined for itself.

However, one final thought: the existence of increasingly powerful machine learning systems may ultimately challenge our very concept of consciousness. Can we, eventually, develop a system that is indistinguishable in its behavior from true self-awareness even though it lacks it (a p-zombie, if you will)? It may not be a human-like consciousness, and in fact it probably won’t be. It will be a machine semi-conscious entity with conscious-like behavior, communicating with other similar entities in their own language, and self-evolving. Perhaps the two scenarios, self-aware AI vs machine-learning algorithms, will meet in the middle. In the process they will challenge our concept of what consciousness is.

Perhaps, for example, we ourselves simply evolved from semi-conscious entities with conscious-like behavior that self-evolved ever more sophisticated algorithms.

OK, crap, now I’m back to the self-aware AI taking over the world. We may look back fondly at a time when our biggest AI dilemma was whether or not we should issue them patents.
