Jun 05 2023

Have Current AI Reached Their Limit?

We are still very much in the hype phase of the latest crop of artificial intelligence applications, specifically the large language models and so-called “transformers” like ChatGPT. Transformers are a deep learning architecture that uses self-attention to differentially weight the importance of its input, including any recursive use of its own output. This approach has been leveraged with massive training data, essentially internet-scale. This has produced some amazing results, but as time goes by we are starting to see the real limits of this approach.
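To make the “differentially weight the importance of its input” idea concrete, here is a toy sketch of the self-attention step. Everything in it is illustrative: the token vectors are made up, and real transformers add learned query/key/value projections, scaling, and multiple attention heads, none of which appear here.

```python
import numpy as np

def self_attention(x):
    """Re-represent each token as a blend of all tokens, weighted by similarity."""
    # Similarity score between every pair of token vectors.
    scores = x @ x.T
    # Softmax turns each row of scores into attention weights that sum to 1.
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    # Each output vector is a weighted average of all the input vectors.
    return weights @ x

# Four made-up "tokens", each a 3-dimensional vector (purely illustrative).
tokens = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])

out = self_attention(tokens)
print(out.shape)  # (4, 3) -- same shape in, same shape out
```

The point of the sketch is that attention is just arithmetic over vector similarities: tokens that look alike (statistically, in the training data) pull harder on each other’s representations.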

I do see a tendency toward a false dichotomy when it comes to AI – with “AI is all hype” at one end of the spectrum and “AI is going to save/destroy the world” at the other. This partly follows the typical pattern of overestimating short term technological progress on the hype end and underestimating long term progress on the cynical end. I think reality is somewhere in between – this latest crop of AI applications is extremely powerful, but it has its limits and will not simply improve indefinitely.

I have been using several AI programs, like ChatGPT and Midjourney, extensively, and the limitations become clearer over time. The biggest limitation of these AI apps is that they cannot think the way people do. They mimic the output of thinking, without any true understanding. They do this by being trained on a massive database, and by using essentially statistics to predict what comes next (what word fragment or picture element). This produces some amazing results, and it’s still shocking that it works so well, but it also creates interesting failures. In Midjourney (an AI art generation application), for example, when crafting the prompts that produce image options, you can’t really explain to the application what you want the way you would to a person. You are trying to find the right triggers, but those triggers are quirky and highly dependent on the training data. Using different words to describe the same thing can trigger wildly different results, based upon how those words were used in the training data.

The same is true of ChatGPT. The more unusual your request, the quirkier the result. And the results are not based strictly on reality, just on how words are statistically used. This is why there is a problem with so-called hallucinations. You are getting a statistically probable answer, not a real answer, and quirks in the data will produce quirks in the result. The program has no real understanding of what it’s saying. It’s literally just faking it, mimicking human language by statistically reconstructing what it has learned.
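The “statistically probable answer, not a real answer” point can be illustrated with a toy bigram model – a crude, miniature stand-in for what large language models do at vastly greater scale. The tiny corpus below is invented for illustration; the model just counts which word follows which and parrots the most frequent continuation, with no notion of whether it is true.

```python
from collections import Counter, defaultdict

# An invented toy "training corpus".
corpus = ("the cat sat on the mat . the cat chased the dog . "
          "the cat slept .").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most probable next word -- no understanding, just counts."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- simply the most frequent follower of "the"
print(predict("sat"))  # "on"
```

If the training text had contained false sentences, the model would reproduce them just as confidently – a miniature version of the hallucination problem.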

We see this limitation in other areas of AI as well. We were supposed to have self-driving cars by 2020, but we may still be a decade away. This is because the AI driving applications get progressively confused by novel or unusual situations. They are particularly bad at predicting what people will do, something that human drivers are much better at. So they are great in controlled or predictable situations, but can act erratically when thrown a curve-ball – not a good feature in any driver.

But here is the big question – are these current limitations of AI solvable through incremental advances of the existing technology, or are they fundamental limitations of that technology that will require new innovations to solve? It’s increasingly looking like the latter is closer to the truth.

And this is where we get into overestimating short term progress. People look at the progress in AI over the last decade, even the last few years, falsely assume this level of progress will continue, and then extrapolate linearly into the future. But experts are saying this is not necessarily the case. There are two main reasons for this. The first is the lack of true understanding that I just described. The second is that we are getting to the practical limits of leveraging deep learning on large data sets. Training ChatGPT, for example, cost $100 million, and required a massive infrastructure of hardware and energy usage. There are practical limits on how much bigger we can get. Further, there appear to be diminishing returns from going larger. Progress from this approach, therefore, may be close to plateauing. Incremental improvements will likely involve greater efficiency and speed, but may not produce significantly better results.

Getting past the “uncanny valley” of almost human may not be possible without a completely new approach. Or it may take orders of magnitude more technological advance than it took to get where we are now. There are plenty of examples from past technology that illuminate this issue. High temperature superconductors went through their hype phase in the 1980s. They produced genuinely useful technology, but everyone assumed we would get to room temperature superconductors quickly. Here we are almost 40 years later and we may be no closer, because the path was ultimately a dead end. Similarly, the current path of AI technology may not lead to general AI with true understanding. A totally different approach may be necessary.

What I think will happen now is that we will enter a period where the marketplace learns how best to leverage this AI technology, and will learn what it is good at and what it is not good at. There will be incremental improvements. New ways of using transformer technology will emerge. But AI, under this technological paradigm, will not forever improve and will reach its limits, mostly imposed by its lack of true understanding. There may be the perception that the AI hype bubble has burst. But then the underestimating of long term progress will kick in. Researchers will come up with new innovations that take AI to the next level, and the process will likely repeat itself.

What is hard to predict is how long this cycle will take. Billions of dollars are being poured into AI research, and there is an entire industry of very smart people working on this. There is no telling what they will come up with. But the lack of true understanding may prove to be a really hard nut to crack, and may put a ceiling on AI capabilities for decades. We will see.
