Jul 29 2025
What To Do About AI Slop
I wasn't planning on doing a follow-up to my recent post on AI so quickly, but a published commentary on the issue makes a good point of discussion. I know it can get tiring to see so much news and commentary about AI, but we are in the middle of a rapidly evolving and potentially disruptive technology, so an active and dynamic conversation is needed. Also, yesterday I was giving a seminar to high school STEM students on critical thinking and media savvy, and the students were eager to raise the question of AI and the impact it is having. They very much wanted to know how to navigate the world they are inheriting, one overwhelmed with AI-generated misinformation and deep fakes. How bad is it going to get, and what should they do?
As with many such questions, we can focus on two levels: individual and societal. Questions like this very much need to be addressed issue by issue, but generally speaking I am a proponent of dealing with societal issues with societal solutions, and not just dumping all the burden and responsibility on individual citizens. This does not mean individuals should not take responsibility for themselves, only that this should not be the only solution. Take crime as an example. There are steps individuals can take to make themselves less vulnerable to crime, but that is not the ultimate solution for a society overwhelmed by crime. We also need police, social programs, good street lighting, and other measures to reduce overall crime.
The same is true with AI-generated deep fakes, misinformation, disinformation, and just low-quality slop. Since my talk yesterday was on critical thinking, I focused on what we can do as individuals. This is basically scientific skepticism 101. Evaluate the source of any information and always try to track a claim back to its original source. Do not accept someone else's narrative about that information – if it's important, take the time to find out for yourself. Also, do not let other people curate information for you, because that lets them control your information ecosystem and create the narrative for you. Do not rely on any one source for information. Seek out different sources and different perspectives, and specifically seek out information that contradicts or falsifies any claim you are facing (especially if it's something you want to be true).
With respect specifically to AI, one of the questions I was asked was how to tell the difference between AI-generated content and "real" content. Unfortunately, anything I say will quickly become obsolete. Most people have learned the existing tells. For images, look for anomalies (six fingers, text that's not quite right), but as AI apps get better, these tells are disappearing. For text, the best description I have heard is that AI writing is "blandly competent". There is a lack of true creativity, but the grammar and punctuation are perfect. Having tracked this over the last few years, though, I can say that AI text is also getting much better. I still have a sense of the typical formats that AI-generated answers tend to have, but the quality is definitely improving.
The bottom line is that, fairly soon it seems, the output of AI will simply be too good to distinguish from human-generated content by quality alone. We simply have to assume we will not be able to tell the difference, and even if some people still can for now, they won't be able to for long. What do we do then?
Simply put – we have to acknowledge that any media we are reading or viewing may be AI generated or altered. A picture or video is no longer evidence of anything. This means that everything is suspect until it has been verified independently. The downside here is that real pictures and video are also suspect, which makes it easy to deny real evidence, at least for a news cycle.
All this is why I agree with the author of the above commentary – we need societal solutions to this situation. We cannot simply put all the burden on individuals. He suggests a number of policies that might help, and they seem reasonable. For example, we can require AI companies to build into their software a watermark or metadata tag for any AI-generated content. The tagging needs to be transparent, making it easy for an end-user to determine whether the content they are viewing was generated with AI.
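To make the tagging idea concrete, here is a minimal sketch (in Python, using the Pillow imaging library) of what a metadata-based provenance tag might look like. The "ai_generated" field, the generator name, and the file paths are all made up for illustration; real proposals such as C2PA content credentials cryptographically sign the provenance data rather than relying on a plain text field like this.

```python
# A minimal sketch of metadata-based provenance tagging, assuming a
# hypothetical "ai_generated" PNG text field. A plain text chunk like this
# can be stripped or forged; it is only meant to illustrate the idea.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Re-save an image with a simple provenance tag embedded in its PNG metadata."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)


def is_tagged_ai_generated(path: str) -> bool:
    """Return True if the image carries the hypothetical ai_generated tag."""
    with Image.open(path) as img:
        return img.info.get("ai_generated") == "true"


if __name__ == "__main__":
    # Paths and generator name are placeholders.
    tag_as_ai_generated("render.png", "render_tagged.png", "example-image-model")
    print(is_tagged_ai_generated("render_tagged.png"))  # True
```

The obvious weakness is that an unsigned tag can be removed or simply never written, which is why any labeling requirement would also need enforcement on the hosting side.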
Certain types of AI-generated content can be banned outright, such as child pornography. One way to enforce this is for AI companies to forbid certain prompts – ones that would generate banned content. Laws can also give people rights over the use of their own image, so that others cannot make a deep fake of them without their permission.
On the content hosting side, platforms could be required to filter out or tag deep fakes. Trickier would be filtering out AI-generated misinformation or disinformation. As we have seen recently, there is no easy answer to who determines what counts as misinformation, and the tech giants do not want this responsibility and were eager to ditch it. But I don't think we can just throw up our hands. We need to readdress this issue and see if there is a workable compromise.
The alternative is looking fairly grim. It is a reasonable and open question whether a modern democracy can survive in a world where the average citizen is buried in misinformation. Self-government requires access to reliable information. AI can be abused as a powerful tool to control the information ecosystem in which citizens live. Your vote then becomes meaningless, because you don't have the information needed to make an informed decision. That information may be out there, but it is buried beneath misinformation that is orders of magnitude greater in volume. The work of discerning what is real from what is fake becomes too great, so people give up and just believe what feels good – which makes it easy for others to control what they believe. I for one do not feel comfortable leaving all this in the hands of the tech bros, who do not have a great track record of civil service. This is a conversation we very much need to have.