Jul 19 2021

Teaching AI Imagination

Artificial intelligence (AI) has made tremendous strides in the last few decades. Every time someone comes up with something an AI can’t do, someone eventually builds a system that can do it. At first experts believed that no AI could beat a chess master; now no human has any chance against the best chess algorithms. The skepticism then moved on to the game Go, which was thought to require too much creativity and flexibility. In a typical chess position a player may have about 20 moves available, while in Go the number of available moves is more like 200. Yet South Korean Go master Lee Se-dol retired from competition in 2019 because the AI AlphaGo “cannot be defeated”.

The primary reason AI capability has been so underestimated is that we naively assumed any AI would need to accomplish a task in a manner similar to humans. If we use a conscious thought process, then an AI would have to use a similar process, and without human-level sentience things like playing chess or Go would simply not be possible. But this assumption was wrong. Programmers were able to leverage the strength of modern computers and software to duplicate, and even surpass, these high-level cognitive tasks while bypassing the need for human-like sentience. It’s possible, therefore, that we may never need to develop self-aware AI; it can do what we need without it.

However, there are still some cognitive abilities that AIs lack, which defenders of biological superiority may still point to, such as imagination. AIs typically need to be trained on lots of data. Humans, by contrast, are particularly good at categorizing and extrapolating using imagination. For example, a child may experience only a couple of dog breeds, yet when confronted with a new and very different breed they have no difficulty understanding that it’s still a dog, whereas if they encounter a cat they know it’s not a dog.

The same is true for technology, such as cars. We understand the essence of what makes a car a car, and when encountering even a very different vehicle we can easily place it within the category of cars. We can even imagine a car that does not exist and that we have therefore never seen (what the researchers refer to as “extrapolation”). Is it possible to create AI with the same ability?

That is the focus of a recent study that tries to develop AI with one aspect of imagination called “disentanglement”. This is the ability to mentally separate the various attributes of an object from the whole. For example, if you see a red Corvette, your brain can disentangle the red color from the Corvette’s other attributes. This is necessary in order to imagine a blue Corvette. The goal of the study was therefore to develop AI with what they call “controllable disentangled representation learning”.
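
To make this concrete, here is a toy Python sketch of my own (not code from the study) showing what a disentangled representation buys you: when shape, color, and pose live in separate slots, “imagining” a blue Corvette is just a matter of swapping one slot while leaving the others untouched.

```python
from dataclasses import dataclass, replace

# Toy disentangled representation: each attribute gets its own slot,
# so one attribute can be changed without disturbing the others.
@dataclass(frozen=True)
class CarRepresentation:
    shape: str
    color: str
    pose: str

red_corvette = CarRepresentation(shape="Corvette", color="red", pose="side view")

# "Imagining" a blue Corvette: swap only the color slot.
blue_corvette = replace(red_corvette, color="blue")
print(blue_corvette)  # CarRepresentation(shape='Corvette', color='blue', pose='side view')
```

In a real neural network these slots are segments of a learned latent vector rather than labeled fields, but the principle is the same: the representation supports imagination precisely because the attributes are stored separately.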

They use a neural network, and during the learning phase, instead of presenting one object at a time, they present the AI with multiple objects in the same category. The AI’s task is to find the attributes the objects have in common and to disentangle specific attributes from each other. So all cars have four wheels, a windshield, a steering wheel, etc., but they can be any color. In the second phase of the process the AI then recombines various attributes in novel ways, in what the researchers call “controllable novel image synthesis”. This is supposed to be the AI version of imagination.

As an example, study author Yunhao Ge says:

“For instance, take the Transformer movie as an example,” said Ge. “It can take the shape of Megatron car, the color and pose of a yellow Bumblebee car, and the background of New York’s Times Square. The result will be a Bumblebee-colored Megatron car driving in Times Square, even if this sample was not witnessed during the training session.”
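
Mechanically, that recombination can be pictured as splicing together segments of latent vectors. The following sketch is my own illustration under that assumption; the encoder here is a random stand-in, not the study’s trained network, and all the names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
ATTRS = ("shape", "color", "background")
DIM = 8  # dimensions per attribute segment (arbitrary toy size)

def encode(image_name: str) -> dict[str, np.ndarray]:
    """Stand-in for a trained encoder that maps an image to a latent code
    partitioned into attribute-specific segments (the name is ignored here)."""
    return {attr: rng.normal(size=DIM) for attr in ATTRS}

megatron = encode("megatron_car")
bumblebee = encode("yellow_bumblebee_car")
times_square = encode("times_square")

# "Controllable novel image synthesis": splice attribute segments from
# different source images into a single latent code.
novel_code = np.concatenate([
    megatron["shape"],           # Megatron's shape
    bumblebee["color"],          # Bumblebee's yellow
    times_square["background"],  # Times Square backdrop
])

# A trained decoder would render novel_code as an image the network never
# saw during training; here we just confirm the layout of the spliced code.
print(novel_code.shape)  # (24,)
```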

This seems like a nice incremental advance, but it is still not at the level of human imagination. I would be more impressed if the AI were able to design a completely new Transformer that is still easily recognizable as a Transformer (not just a new color in a new setting). Current AI capabilities have been characterized as “shallow mimicry”, but they are still able to produce useful results. Jason Toy, CEO of a deep learning company, said:

“What’s interesting is that, compared to a lot of other machine learning techniques, deep learning technology is what’s called a ‘generative model,’ meaning that it learns how to mimic the data it’s been trained on. If you feed it thousands of paintings and pictures, all of a sudden you have this mathematical system where you can tweak the parameters or the vectors and get brand new creative things similar to what it was trained on.”
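
Toy’s description of “tweaking the vectors” can also be sketched in a few lines. Assuming a model whose decoder maps latent codes to images, blending two codes yields something new but in the style of the training data. The vectors below are random stand-ins of my own, not outputs of any real trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent codes a trained generative model might assign to two training
# paintings (random stand-ins here; real codes would come from an encoder).
z_painting_a = rng.normal(size=64)
z_painting_b = rng.normal(size=64)

# "Tweaking the vectors": blend the two codes. Feeding z_new through the
# trained decoder would produce a new image similar to, but not identical
# with, anything in the training set.
alpha = 0.3
z_new = (1 - alpha) * z_painting_a + alpha * z_painting_b
print(z_new[:4])  # first few entries of the blended code
```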

“Disentanglement” adds one new ability, allowing AI to dig one step deeper into the creative process. AI programmers are essentially trying to “reverse engineer” creativity and then duplicate the essential elements of the process in AI. But again, they won’t necessarily have to accomplish these elements in the same way a human brain does. There is no reason to think they will not eventually succeed. If you disagree, you risk suffering the same fate as those who doubted AI could beat a human at chess or Go.
