Mar 07 2024

Is the AI Singularity Coming?

Like it or not, we are living in the age of artificial intelligence (AI). Recent advances in large language models, like ChatGPT, have helped put advanced AI in the hands of the average person, who now has a much better sense of how powerful these AI applications can be (and perhaps also their limitations). Even though they are narrow AI, not sentient in a human way, they can be highly disruptive. We are about to go through the first US presidential election where AI may play a significant role. AI has also transformed research in many areas, compressing months or even years of work into mere days.

Such rapid advances legitimately make one wonder where we will be in 5, 10, or 20 years. Computer scientist Ben Goertzel, who popularized the term AGI (artificial general intelligence), recently stated during a presentation that he believes we will achieve not only AGI but an AGI singularity involving a superintelligent AGI within 3-8 years. He thinks it is likely to happen by 2030, but could happen as early as 2027.

My reaction to such claims, as a non-expert who follows this field closely, is that this seems way too optimistic. But Goertzel is an expert, so perhaps he has some insight into research and development happening in the background that I am not aware of. So I was very interested to see his line of reasoning. Would he hint at research that is on the cusp of something new?

Goertzel laid out three lines of reasoning to support his claim. The first is simply extrapolating from the recent exponential growth of narrow AI. He admits that LLM systems and other narrow AI are not themselves on a path to AGI, but they show the rapid advance of the technology. He aligns himself here with Ray Kurzweil, who apparently has a new book coming out, The Singularity is Nearer. Kurzweil has a reputation for overly optimistic predictions about advances in computer technology, so that is not surprising.

I find this particular argument not very compelling. Exponential growth in one area of technology at one particular time does not mean that this is a general rule about technology for all time. I know that is explicitly what Kurzweil argues, but I disagree with it. Some technologies hit roadblocks, or experience diminishing returns, or simply peak. Invoking exponential advance as a general rule did not deliver the hydrogen economy that was promised 20 years ago. It has not made commercial airline travel any faster over the last 50 years. Rather, history is pretty clear that we need to do a detailed analysis of individual technologies to see how they are advancing and what their potential is. Even then, this only gives us a roadmap for a certain amount of time, and is not useful for predicting disruptive technologies or advances.

So that is strike one, in my opinion. Recent rapid advances in narrow AI do not predict, in and of themselves, that AGI is right around the corner. It’s also strike two, actually, because he argues that one line of evidence to support his thesis is Kurzweil’s general rule of exponential advance, and the other is the recent rapid advances in LLM narrow AIs. So what is his third line of evidence?

This one I find the most compelling, because at least it deals with specific developments in the field. Goertzel here is referring to his own work: “OpenCog Hyperon,” as well as associated software systems and a forthcoming AGI programming language, dubbed “MeTTa”. The idea here is that you can create an AGI by stitching together many narrow AI systems. I think this is a viable approach. It’s basically how our brains work. If you had 20 or so narrow AI systems that handled specific parts of cognition and were all able to communicate with each other, so that the output of one algorithm becomes the input of another, then you are getting close to a human brain type of cognition.
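To make that “stitch narrow systems together” idea a bit more concrete, here is a minimal toy sketch in Python. To be clear, this is my own illustration, not anything from OpenCog Hyperon or MeTTa; the module names and the shared “workspace” are hypothetical. It just shows the basic architectural pattern being described: several narrow components reading from and writing to a common store, so the output of one becomes the input of another.

```python
# Toy sketch of a modular "narrow systems wired together" architecture.
# NOT OpenCog Hyperon or MeTTa; all names here are hypothetical illustrations.

from typing import Callable, Dict, List

# Each "narrow AI" is modeled as a function that reads from and writes to
# a shared workspace (a dict), so one module's output feeds the next.
Module = Callable[[Dict[str, object]], Dict[str, object]]

def perception(workspace: Dict[str, object]) -> Dict[str, object]:
    # Pretend narrow system: turns raw input into structured observations.
    text = str(workspace.get("raw_input", ""))
    return {"observations": text.lower().split()}

def memory(workspace: Dict[str, object]) -> Dict[str, object]:
    # Pretend narrow system: accumulates observations across cycles.
    seen: List[str] = list(workspace.get("seen", []))  # type: ignore[arg-type]
    seen += workspace.get("observations", [])          # type: ignore[arg-type]
    return {"seen": seen}

def planner(workspace: Dict[str, object]) -> Dict[str, object]:
    # Pretend narrow system: produces an "action" from what has been seen.
    seen = workspace.get("seen", [])
    return {"action": f"respond using {len(seen)} known tokens"}  # type: ignore[arg-type]

def run_cycle(modules: List[Module], workspace: Dict[str, object]) -> Dict[str, object]:
    # One "cognitive cycle": each module merges its output back into the
    # shared workspace, so downstream modules see upstream results.
    for module in modules:
        workspace.update(module(workspace))
    return workspace

if __name__ == "__main__":
    ws: Dict[str, object] = {"raw_input": "Hello AGI singularity"}
    ws = run_cycle([perception, memory, planner], ws)
    print(ws["action"])  # -> "respond using 3 known tokens"
```

This is essentially the old “blackboard” design pattern from earlier AI work. The open question, of course, is whether wiring real narrow systems together this way produces anything like general intelligence, or just a pile of narrow systems.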

But saying this approach will achieve AGI in a few years is a huge leap. There is still a lot we don’t know about how such a system would work, and there is much we don’t know about how sentience emerges from the activity of our brains. We don’t know if linking many narrow AI systems together will cause AGI to emerge, or if it will just be a bunch of narrow AIs working in parallel. I am not saying there is something unique about biological cognition, and I do think we can achieve AGI in silicon, but we don’t know all the elements that go into AGI.

If I had to predict, I would say that AGI is likely to happen both slower and faster than we expect. I highly doubt it will happen in 3-8 years. I suspect it is more like 20-30 years. But when it does happen, like with the LLMs, it will probably happen fast and take us by surprise. Goertzel, to his credit, admits he may be wrong. He says we may need a “quantum computer with a million qubits or something.” To me that is a pretty damning admission: that all his extrapolations actually mean very little.

Another aspect of his predictions is what happens after we achieve AGI. He, like many others, predicted that if we give an AGI the ability to write its own code, it could rapidly become superintelligent, like a single entity with the cognitive ability of all human civilization. Theoretically, sure. But becoming that powerful is about more than writing better code, right? Such a system is also limited by its hardware, by the availability of training data, and perhaps by other variables as well. But yes, such an AGI would be a powerful tool of science and technology that could be turned toward making the AGI itself more advanced.

Will this create a Kurzweil-style “singularity”? Ultimately I think that idea is a bit subjective, and we won’t really know until we get there.
