Aug 18 2025

ChatGPT 5 Is Out

I have been using ChatGPT since it was first released, so I was interested to see how much of an upgrade the new ChatGPT 5 is. For those of you living in a luddite cave, ChatGPT is one of the new artificial intelligence (AI) applications known as an LLM, or large language model. I genuinely use it for personal projects, but I also try to put it through its paces just to see how well it works. Here are some of my personal impressions.

First – it is definitely better than the previous versions. Some of this is aesthetic – 5 is not as effusive and praising as previous versions. It has a more business-like vibe; it still starts off responses with something nice to say, but it is not as sycophantic. It is extremely capable in terms of organizing information, summarizing large documents, and suggesting content. On the creative side, I find it vastly superior to the original versions from a few years ago, and incrementally but noticeably better than the previous version. Let me give you an example of what I consider to be a challenging task I gave it.

I uploaded into 5 a 200-page document, a rules supplement for a table-top roleplaying game (something I have worked on over the last five years). I asked for its analysis, and it was pretty impressive. It was able to identify ambiguities and balance inconsistencies in the rules, identify potential synergies that might further unbalance the game, and then generate a table calculating average stats so that I could directly compare different character builds. Don’t worry if you don’t fully understand what I am referring to (GMs will) – this was amazing. It was essentially hours and hours of work completed in literal seconds, much of which I never would have done myself because of the time involved.

This is all technocratic, which I think is a strength of these LLMs – brute-forcing lots of data and doing reasonably sophisticated analysis and presentation. I also like that it then suggests what it can do for you next; most of the time my response was, “Why yes, I would like you to do that for me.” But I also wanted to see how it did on more creative tasks, where past versions have been unimpressive. So I uploaded another document (also in the hundreds of pages) detailing character histories, world building, and a log of all the adventures my party has gone on so far. I then outlined (very basically) the next adventure in the sequence and asked it to suggest some adventure arcs.

It gave me three solid suggestions. I picked one, added some details, and iterated. Along the way it demonstrated “understanding” (effectively – I know it doesn’t really understand) of a lot of nuance: challenge level, balancing encounter types, creating mystery, proper rewards, leveraging in-game relationships, dropping hints of deeper plot without giving it away, etc. I did not use its output directly, but the process was a source of many great ideas, some of which provoked the reaction, “Crap, that’s brilliant, I should have thought of that.” Then, once I finalized an encounter, I could ask it to stat it out using my home rules, which again it did in seconds, saving me potentially hours of work. I went further still, asking it to flesh out background characters and details, which it also did extremely well, adding a layer of depth and detail I otherwise would not have had the time to create myself.

So – in the end, using ChatGPT 5 as a co-creator, I was able to accomplish my task in about 10% of the time it would have taken me otherwise, with better, more thorough, and more detailed results. In short, I was impressed.

Granted, this is a particular type of task. It does not require factual information, so there are essentially no issues with hallucination. It’s all for personal use and entertainment purposes, in a field where it is expected that GMs will borrow heavily from other sources and find inspiration wherever they can, as long as the game is fun in the end. It is creative without having to be genuinely original, but it also involves a lot of rules and tedium. It is perhaps ideally suited for an LLM. Still, there is crossover with many other types of tasks, especially those operating within a defined format (legal documents, coding, any type of form, etc.).

One thing I find interesting is how controversial AI and LLMs have become. We discussed this on the SGU, and you would think by the responses that we were talking about Gaza. I suspect a lot of this comes from the fact that people generally prefer simple, clean moral narratives. If they think LLMs are bad, then they want everything about them to be bad, and they get upset if you acknowledge that they are a powerful and effective tool. We see this “splitting” with controversial people as well – Musk or Trump must be either all good or all bad all the time. But the real world is complicated. Also, AI is a complex, multifaceted topic, and we cannot talk or write about every single aspect every time we bring it up, so some people get upset about what we did not talk about (the things that fit their moral narrative).

So yes – LLM training is expensive. The data centers that run them use a lot of energy. Training can violate creators’ rights to their own material, without compensation. There is a lot of hype surrounding AI, and the reality is more modest. AI can be disruptive and cost jobs. LLMs still have problems with hallucinations. They have a problem with sycophancy, and some people are falling into emotional relationships with software owned and controlled by a corporation. They are causing problems for teachers, as it is now trivial to have AI do your work for you. And AI is creating a tsunami of deepfakes.

But also – LLMs are powerful tools that are getting better over time, incrementally but significantly. When used properly they can accelerate research, doing work in days that would otherwise take months or even years. They can help get control of the avalanche of data that we have to deal with in the modern world. They can be a huge productivity boost. They can displace mind-numbingly tedious work. They are great at error detection and correction. And they can be a lot of fun as a creative tool.

Like any significant new technology, we will have to work out the kinks. Over time we will collectively figure out what they are good for and what they are not good for, and how best to leverage the technology while mitigating the downsides. The technology itself will continue to improve, shoring up weaknesses. But there is a lot of potential for harm as well, so thoughtful regulation would be nice. I would like to see protection, and potentially even compensation, for creators, for example. We don’t want to hamper innovation, but at the same time I don’t trust the tech bros to do what’s best for society (remember social media?). It’s complicated – accept it. Give the devil his due. I do think this is an important chapter in human history, and we should strive to get it right.
