Apr 27 2023
AI – Is It Time to Panic?
I’m really excited about the recent developments in artificial intelligence (AI) and their potential as powerful tools. I am also concerned about unintended consequences. As with any really powerful tool, there is the potential for abuse and also disruption. But I also think that the recent calls to pause or shut down AI development, or concerns that AI may become conscious, are misguided and verging on panic.
I don’t think we should pause AI development. In fact, I think further research and development is exactly what we need. Recent AI developments, such as generative pretrained transformers (GPTs), have created a jump in AI capability. They are yet another demonstration of how powerful narrow AI can be, without the need for general AI or anything approaching consciousness. What I think is freaking many people out is how well GPT-based AIs, trained on billions of examples, can mimic human behavior. I think this has as much to do with how much we underestimate how derivative our own behavior is as with how powerful these AIs are.
Most of our behavior and speech is simply mimicking the culture in which we are embedded. Most of us can get through our day without an original thought, relying entirely on prepackaged phrases and interactions. Perhaps mimicking human speech is a much lower bar than we would like to imagine. But still, these large language models are impressive. They represent a jump in technology, able to produce natural-language interactions with humans that are coherent and grammatically correct. But they remain a little brittle. Spend any significant time chatting with one of these large language models and you will detect how generic and soulless the responses are. It’s like playing a video game – even with really good AI driving the behavior of the NPCs in the game, they are ultimately predictable and not at all like interacting with an actual sentient being.
There is little doubt that with further development these AI systems will get better. But I think that’s a good thing. Right now they are impressive but flawed. AI-driven search engines have a tendency to make stuff up, for example. That is because they are predicting information and generating responses, not just copying or referencing information. The way they make predictions can ironically be hard to predict. They use shortcuts that are really effective most of the time, but that also lead to erroneous results. They are the AI equivalent of heuristics – rules of thumb that mostly work, but not always. Figuring out how to prevent errors like this is a good thing.
So what’s the real worry? As far as I can tell from the open letters and articles, it’s just a vague fear that we will lose control, or have already lost control, of these powerful tools. It’s an expression of the precautionary principle, which is fine as far as it goes, but is easily abused or overstated. That’s what I think is happening here.
One concern is that AI will be disruptive in the workplace. I think that ship has sailed, even if it is not out of view yet. I don’t see how a pause will help. The marketplace needs to sort out the ultimate effects.
Another concern is that AI can be abused to spread misinformation. Again, we are already there. However, it is legitimate to be concerned about how much more powerful misinformation will be fueled by more powerful algorithms or deep fakes.
There is concern that AIs will be essentially put in charge of important parts of our society and will fail in unpredictable and catastrophic ways.
And finally there are concerns that AI will become conscious, or at least develop emergent behaviors and abilities we don’t understand. I am not concerned about this. AI is not going to just become conscious. It doesn’t work that way, for reasons I recently explained.
Part of the panic, I think, is being driven by a common futurism fallacy – overestimating short-term advances while underestimating long-term advances. AI systems just had a breakthrough, and that makes it seem as if the advances will continue at this pace. But that is rarely the case. AIs are not about to break the world, or become conscious. They are still dumb and brittle in all the ways that narrow AI is dumb.
Here is what I think we do need to do. First, we need to figure out what this new crop of AI programs is good at and what it is not good at. Like any new technology, if you put it in the hands of millions of people they will quickly sort out how it can be used. The obvious applications are not always the best ones. Microwaves were designed for cooking, which they are terrible at, but they excel at reheating. Smartphones are used for far more than making calls, which is almost a secondary function now. So what will GPT AIs really be good for? We will see. I know some people using them to write contracts. Perhaps they will function as personal assistants, or perhaps they will be terrible at that task. We need research and use to sort this out, not a pause.
There may need to be regulation, but I would proceed very carefully here. Some experts warn of the black-box problem, and that seems like something that can be fixed. Programs could include internal reporting on their methods, so that their outputs can be reviewed. We also need to sort out the property-rights issues, especially with generative art programs. Perhaps artists should have the right to opt out (or not opt in) their art for training data. We may also need quality assurance before AI programs are given control over any system, like approving self-driving cars.
I don’t think any of this needs a pause. I think that will happen naturally – there is already talk of diminishing returns from making GPT applications more powerful. Tweaking them, making them better, fixing flaws, and finding new applications will now take time. Don’t stop now, while we are in the messy phase. I also think experts need to be very clear – these systems are not conscious, are nothing like consciousness, and are not on a path to becoming conscious.