Jul 07 2025

The Real Risk of AI

Artificial Intelligence (AI) is unavoidable. It has been steadily infiltrating the technology we use every day, whether you realize it or not. I remain somewhere in the middle of the hype-to-technological-miracle spectrum. I don’t think, as some fellow skeptics do, that the current batch of AI is all hype and nothing new. Machine learning, neural networks, and large language models have clearly turned a corner. Companies are now willing to spend millions, even hundreds of millions, to train models on vast sets of data. The latest AI apps are powerful. They are genuinely accelerating the pace of research and development, accomplishing in hours or days what previously took weeks or months. But current AI also has clear limitations. It makes mistakes, and can confidently proclaim as fact things it completely fabricated, including entire scientific references, in a way that is not apparent to the end user. On the creative side (which I use often), it’s still mostly derivative dreck.

I look at AI as an interesting and potentially powerful tool that is flawed and limited. The outcome depends entirely on how it is used, and this is where I think the true risk of rapidly introducing new AI tools into society lies. It’s disruptive in good and bad ways, and if we are not careful it will be mostly in the bad ways. We are also facing the Silicon Valley culture of “move fast and break things” combined with an attitude of “just get out of my way” when it comes to any regulation or quality control. It’s a reasonable corporate strategy to “break things” internally as you explore new technology. I really don’t care how many ships Musk blows up if that process leads to a safe, working rocket. But these companies appear to be moving fast and breaking things out in the world, in ways that affect people and society.

How is the use of AI going wrong? One important way is that it makes it easy for people to be lazy. This is something I have long worried about, existentially, for humanity. I now think of it as the WALL-E syndrome – in a society run by AI and robots (or any such system) that can totally take care of your needs, it’s easy to sit back and do nothing. It’s possible we evolved a certain laziness as an efficiency mechanism – accomplish tasks with minimal effort, conserving energy and resources.

With AI it’s easy to do a half-assed but barely passable job at most creative or tedious tasks. There will always be people who are creative, skilled, and hard-working. But AI makes it easy to flood the zone with half-baked crap, and that is exactly what’s happening. Instagram, for example, is overrun with “AI slop”. I even got burned by this myself – I purchased some artwork to use in a game I run, and what I received was AI-generated useless crap. This points to another aspect of the problem – when you make it fast and easy to generate lots of low-grade content, some people will try to monetize it.

We also saw the laziness factor in RFK Jr.’s recent health commission report, which contained citations to studies that don’t exist. It is overwhelmingly likely that some flunky used AI to generate portions of the report and no one checked the citations. Even worse is how little of an uproar this caused. AI laziness has already been normalized to a frustrating degree.

We are also seeing the invasion of AI into the service sector. Now – AI can be a benefit if it is executed well, but often that is not what we see. Under pressure to roll out AI quickly, many companies are doing so without proper vetting or training. I will use a personal anecdote as an example. On the last SGU episode I described my 10-day saga of trying to switch my work phone number onto my personal account. This should have been a 20-minute process; instead I went through 10 days of tech-help hell, some of it spent fixing problems created by the reps trying to fix the original problem. I have since learned from insiders that this is likely because the system now uses AI, which was rolled out too quickly and is optimized for upselling rather than good service.

What this means is that the reps don’t know how to use the system well. It also means the system itself often fails, and no one knows how to fix it. One layer of this is that reps don’t have direct access to the back end, so the system might generate an error code that the reps can’t interpret. It also makes it incredibly easy to make mistakes, and very difficult to identify them.

Essentially, AI is now in charge, and it doesn’t work well. The human technicians no longer have full control over the situation or all the information they need. Extrapolate this situation to all of society – AI is running our technology, it’s not doing a great job because it was rolled out too quickly, and the workers in the trenches are no longer able to troubleshoot and perform quality control. Contrast this with the counterfactual where AI is rolled out slowly, only after being stress-tested, with proper training for the technicians who will use it. That AI functions with full transparency, with technicians able to see what’s happening, to manually override it when necessary, and to troubleshoot on their own. In other words – AI can be a well-integrated tool used by experts, or it can be a half-assed tool that functionally replaces experts. When you get the half-assed version (like whenever you are talking to an AI on a help line), you know it.

My fear is that it’s easy to become complacent with flawed technology that was rolled out too quickly, that encourages our inherent laziness, and that floods the world with “AI slop”. We can do much better, but that is only likely to happen if we demand it. That means we need to understand, and be outraged by, the bad version of AI. But it seems to be normalizing very quickly. Don’t be complacent. Be outraged and demand better.
