Feb 16 2026

The Future of AI-Powered Prosthetics

It’s not easy being a futurist (which I guess I technically am, having written a book about the future of technology). It never was, judging by the predictions of past futurists, but it seems to be getting harder as the future arrives more and more quickly. Even if we don’t get to something like “The Singularity”, the pace of change in many areas of technology is speeding up. Paradoxically, this may actually be good for futurists: we get to see fairly quickly how wrong our predictions were, and so have a chance to adjust and learn from our mistakes.

We are now near the beginning of many transformative technologies – genetic engineering, artificial intelligence, nanotechnology, additive manufacturing, robotics, and brain-machine interface. Extrapolating these technologies into the future is challenging. How will they interact with each other? How will they be used and accepted? What limitations will we run into? And (the hardest question) what new technologies not on that list will disrupt the future of technology?

While we are dealing with these big questions, let’s focus on one specific technology – controllable robotic prosthetics. I have been writing about this for years, and this is an area that is advancing more quickly than I had anticipated. The reason for this is, briefly, AI. Recent advances in AI allow for far better brain-machine interface control than was previously achievable, because modern AI is very good at picking out patterns from tons of noisy data. This includes picking out patterns in EEG signals from a noisy human brain.

This matters when the goal is having a robotic prosthetic limb controlled by the user through some sort of BMI (from nerves, muscles, or directly from the brain). There are always two components to this control – the software driving the robotic limb has to learn what the user wants, and the user has to learn how to control the limb. Traditionally this takes weeks to months of training in order to achieve a moderate but usable degree of control. By adding AI to the computer-learning end of the equation, this training time is reduced to days, with far better results. This is why progress is a couple of decades ahead of where I thought it would be.

Continue Reading »

Comments: 0

Feb 12 2026

Falling In Love With AI

There are many ways in which our brains can be hacked. The brain is a complex, overlapping set of algorithms evolved to help us interact with our environment to enhance survival and reproduction. However, while we evolved in the natural world, we now live in a world of technology, which gives us the ability to control our environment. We no longer have to simply adapt to the environment, we can adapt the environment to us. This partly means that we can alter the environment to “hack” our adaptive algorithms. Now we have artificial intelligence (AI), which has become a very powerful tool to hack those brain pathways.

In the last decade chatbots have blown past the Turing Test – a type of test in which a blinded evaluator has to tell the difference between a live person and an AI through conversation alone. We appear to still be on the steep part of the curve in terms of improvements in these large language models and other forms of AI. What these applications have gotten very good at is mimicking human speech – including pauses, inflections, sighing, “ums”, and all the other imperfections that make speech sound genuinely human.

As an aside, these advances have rendered many sci-fi visions of the future quaint and obsolete. In Star Trek, for example, even a couple hundred years in the future computers still sounded stilted and artificial. We could, however, retcon this to argue that the stilted computer voices of the sci-fi future were deliberate, and not a limitation of the technology. Why would they do this? Well…

Current AI is already so good at mimicking human speech, including the underlying human emotion, that people are forming emotional attachments to them, or being emotionally manipulated by them. People are, literally, falling in love with their chatbots. You might argue that they just “think” they are falling in love, or they are pretending to fall in love, but I see no reason not to take them at their word. I’m also not sure there is a meaningful difference between thinking one has fallen in love and actually falling in love – the same brain circuits, neurotransmitters, and feelings are involved.

Continue Reading »

Comments: 0

Feb 09 2026

Uranium and Motivated Reasoning

This post is only partly about uranium, but mostly about motivated reasoning – our ability to harness our reasoning power not to arrive at the most likely answer, but to support the answer we want to be true. But let’s chat about uranium for a bit. In the comments to my recent article on a renewable grid, one commenter referred to a blog post on Skeptical Science and quoted:

Abbott 2012, linked in the OP, lists about 13 reasons why nuclear will never be capable of generating a significant amount of power. Nuclear supporters have never addressed these issues. To me, the most important issue is there is not enough uranium to generate more than about 5% of all power.

This is the flip side, I think, to the misinformation about renewable energy I was discussing in that post. Let me say, I don’t think there is an objective right answer here, but my personal view is that the pathway to net zero that emits the least amount of carbon includes nuclear energy, a view that is in line with the IPCC. There is, however, still a lot of anti-nuclear bias out there, just as there is pro-fossil fuel bias, and pro-renewable bias, and every kind of bias. If you want to make a case for any particular source of power, there are enough variables to play with that you can make a case. However, factual misstatements are different – we should at least be arguing from the same set of verified facts. So let’s address the question – how much uranium is there?

There is no objective answer to this question. Why not? Because it depends on your definition. Most estimates of how much uranium there is in the world, in the context of how much is available for nuclear power, do not include every atom of uranium. They generally take several approaches – how much is in current usable stockpiles, how much is being produced by active mines, and how much is “commercially” available. That last category depends on where you draw the line, which depends on the current price of uranium as well as the value of the energy it produces. If, for example, we decided to price the cost of emitting carbon from energy production, the value of uranium would suddenly increase. It also depends on the technology to extract and refine uranium. The value of uranium is also determined by the efficiency of reactors.
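The price-cutoff point can be made concrete with a toy model. The deposit figures below are invented purely for illustration (they are not actual resource estimates): each hypothetical deposit has a tonnage and an extraction cost, and the “commercially available” total is simply the sum of tonnage below whatever price line you draw.

```python
# Toy illustration: "how much uranium exists" depends on the price cutoff.
# All numbers below are invented for illustration, not real resource data.

deposits = [
    # (tonnes of uranium, extraction cost in $/kg) -- hypothetical
    (800_000, 40),
    (1_200_000, 80),
    (2_500_000, 130),
    (6_000_000, 260),       # e.g. lower-grade ores
    (4_000_000_000, 1000),  # e.g. seawater extraction, currently uneconomical
]

def recoverable(price_per_kg):
    """Total tonnage that is economical at or below a given uranium price."""
    return sum(tonnes for tonnes, cost in deposits if cost <= price_per_kg)

for price in (50, 100, 150, 300, 1200):
    print(f"At ${price}/kg, {recoverable(price):,} tonnes of uranium 'exist'")
```

The point of the sketch is that raising the price line (for example, by pricing carbon, or by improving extraction technology) moves previously “nonexistent” uranium into the recoverable column – the answer changes without a single atom of uranium changing.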

Continue Reading »

Comments: 0

Feb 05 2026

The AI Slop Problem

Mark Zuckerberg said a few months ago that AI is ushering in a third phase of social media. First social media was used to connect with family and friends, then it became a platform for content creators, and now creativity is being further unleashed with new AI-powered tools. That’s a pretty rosy view, and unsurprising coming from the creator of Facebook. Many people, however, are becoming increasingly concerned about what the net effect of AI-generated content will be, especially low-grade content (now colloquially referred to as AI slop).

One thing is clear – AI-generated content, because it is so easy and fast to produce, is increasingly flooding social media. AI’s influence takes two basic forms: AI-generated content, and recommendations driven by AI-powered algorithms. So an AI might be telling you to watch an AI-generated video. Recent studies show that about 70% of images on Facebook are now AI-generated, with 80% of the recommendations being AI-powered. This is a fast-moving target, but across social media AI-generated content is somewhere between 20% and 40%. This is not evenly distributed, with some sites being overwhelmed. The arts and crafts site Etsy has been overrun by AI slop, causing some users to abandon the platform.

We are already seeing a backlash and crackdown, but this is sporadic and of questionable effectiveness. Etsy, for example, has tried to limit AI slop on its site, but with limited success. So where is all this headed?

We need to consider the different types of content separately. Much AI slop is obviously fake and for entertainment purposes only. It may be cartoony or obviously humorous, with no intent to pass as real or to deceive. Some content is meant to entertain (i.e., drive clicks and engagement), but is not obviously fake. Part of the appeal, in fact, may be the question of whether or not the content is real. Other content is meant to deceive, to influence public opinion or the behavior of the content consumer. This latter type of content is obviously the most concerning.

Continue Reading »

Comments: 0

Feb 03 2026

Forgetting History

Engaging on social media to discuss pseudoscience can be exhausting, and make one weep for humanity. I have to keep reminding myself that what I am seeing is not necessarily representative. The loudest and most extreme voices tend to get amplified, and people don’t generally make videos just to say they agree with the mainstream view on something. There is massive selection bias. But still, to some extent social media does both reflect the culture and also influence it. So I like not only to address specific pieces of nonsense I find, but also to look for patterns – patterns of claims, of thought, and of narrative.

Especially on TikTok but also on YouTube and other platforms, one very common narrative that I have seen amounts to denying history, often replacing it with a different story entirely. At the extreme the narrative is – “everything you think you know about history is wrong.” Often this is framed as – “everything you have been told about history is a lie.” Why are so many people, especially young people, apparently susceptible to this narrative? That’s a hard question to research, but we have some clues. I wrote recently about the Moon Landing hoax. Belief in this conspiracy in the US has increased over the last 20 years. This may be simply due to social media, but it also correlates with the fact that people who were alive during Apollo are dying off.

Another factor driving this phenomenon is pseudoexperts, who also can use social media to get their message out. Among them are people like Graham Hancock, who presents himself as an expert in ancient history but actually is just a crank. He has plenty of factoids in his head, but he has no formal training in archaeology and is the epitome of a crank – usually a smart person with outlandish ideas who never checks them with actual experts, so his ideas slowly drift off into fantasy land. The chief feature of such cranks is a lack of proper humility, even overwhelming hubris. They casually believe that they are smarter than the world’s experts in a field, and that based on nothing but their smarts they can dismiss decades or even centuries of scholarship.

Continue Reading »

Comments: 0

Feb 02 2026

A Fully Renewable Grid?

My long-stated position (although certainly modifiable in the face of any new evidence, technological advance, or good arguments) is that the optimal pathway to most rapidly decarbonize our electrical infrastructure is to pursue all low-carbon options. I have not heard anything to dissuade me so far from this position. A couple of SGU listeners, however, pointed me to this video making the case for a renewable + battery energy infrastructure.

The channel, Technology Connections, does a good job at putting all the relevant data into context, and I like the big-picture approach that the host, Alec Watson, takes. I largely agree with the points he makes. Also, at no point does he say we should not also build nuclear, geothermal, or more hydroelectric. He does, perhaps, imply at several points that we don’t need nuclear, but he did not address the question directly.

So what are the big-picture points I agree with? He correctly points out that fossil fuels are disposable – they are fuel that you burn. They do not, in themselves, create any energy infrastructure. Meanwhile, a solar panel or wind turbine, once you have invested in building it, can produce energy essentially for free for 20 years. He argues that we should be investing in infrastructure, not just pulling fuel out of the ground that we will burn and it’s gone. I get this point; however, what about hydrogen? It is not certain, but let’s hypothetically say we find large reserves of underground hydrogen that we can tap into. I would not be against extracting this resource and burning it for energy, since it is clean (it produces only water, and does not release carbon). Although we might find better uses for such hydrogen than burning it, such as feedstock for certain hard-to-decarbonize industries.

But his point remains valid – we should be looking for ways to develop our technology to be reusable, circular, and sustainable, rather than extractive. Extracting and burning a resource is one-way and limited. At most this should be a stepping stone to more sustainable technology, and I think we can reasonably argue that fossil fuels were that stepping stone and it is past time to move beyond them to better technology.

Continue Reading »

Comments: 0

Jan 26 2026

Rethinking the Habitable Zone

Published under Astronomy

As we continue the search for life outside of the Earth, it helps if we have a clear picture of where life might be. This is all a probability game, but that’s the point – to maximize the chance of finding the biosignatures of life. One limitation of this search, however, is that we have only one example of life and a living ecosystem – Earth. Life may take many different forms and therefore exist in what we would consider exotic environments.

That aside, it seems a good bet that life is more likely in locations where liquid water is possible, and therefore liquid water is a reasonable marker for habitability. When we talk about the habitable zone of stars, that is what we are talking about – the distance from the star where it is possible for liquid water to exist on the surface of planets. There are more variables than just the temperature of the star, however. The composition of the atmosphere also matters. High concentrations of CO2, for example, extend the habitable zone outward. There is therefore a conservative habitable zone, and then a more generous one allowing for compensating factors.
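The basic geometry here is worth making explicit. Stellar flux falls off with the square of distance, so the habitable-zone boundaries scale with the square root of the star’s luminosity. The sketch below uses commonly quoted conservative solar boundaries of about 0.95 and 1.67 AU; treat those numbers, and the example K-dwarf luminosity, as illustrative assumptions rather than values from the paper discussed here.

```python
import math

# Rough sketch of the luminosity scaling behind habitable-zone estimates.
# The 0.95 AU / 1.67 AU solar boundaries are commonly quoted conservative
# values; they and the sample luminosity below are illustrative only.

INNER_SUN_AU = 0.95   # conservative inner edge for the Sun
OUTER_SUN_AU = 1.67   # conservative outer edge for the Sun

def habitable_zone(luminosity_solar):
    """Scale the Sun's habitable zone by sqrt(L / L_sun).

    Flux falls off as 1/d^2, so the distance receiving a given flux
    scales as sqrt(L). Real models also fold in atmospheric composition
    (e.g. CO2 content), which this simple scaling ignores.
    """
    scale = math.sqrt(luminosity_solar)
    return INNER_SUN_AU * scale, OUTER_SUN_AU * scale

# A K-dwarf at roughly 0.3 solar luminosities (illustrative value):
inner, outer = habitable_zone(0.3)
print(f"K-dwarf habitable zone: {inner:.2f} - {outer:.2f} AU")
```

This is why habitable zones around dimmer K and M dwarfs sit much closer in than ours does – and why atmospheric effects like a CO2-rich greenhouse, which the simple scaling omits, can push the outer edge further out.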

A new paper proposes extending the conservative habitable zone further, specifically around M and K class dwarfs. K-dwarfs, or orange stars, are likely already the best candidates for life. They are bright and hot enough to support liquid water and photosynthesis, they emit less harmful radiation than red (M) dwarfs, and they live a relatively long time, 15-70 billion years. They also comprise about 12% of all main sequence stars. Yellow stars like our sun are also good for life, but have a shorter lifespan (10 billion years) and make up only about 6% of main sequence stars.

Continue Reading »

Comments: 0

Jan 22 2026

The AI 2027 Scenario

A group of AI experts have released a paper that explores (or “predicts”) the possibility of a near-term AI explosion that ultimately leads to the extinction of humanity. This has, of course, sparked a great deal of discussion, feedback, and criticism. Here is the scenario they lay out, in their “AI 2027” paper.

To avoid targeting a specific company, they discuss a fictional company called OpenBrain, which sets out specifically to develop an AI application to automate computer coding. They call their first iteration Agent 0, and use it to speed up the development of more AI. They build larger and larger data centers to power and train Agent 0, and leap six months ahead of their competition. They use Agent 0 to develop Agent 1, which is an autonomous coder. China manages to steal some of the core IP of Agent 1, setting off an AI competition between superpowers.

I am giving you the quick version here, and you can read all the details in the paper. Agent 1 is used to develop Agent 2, which is powerful enough to essentially kick off the Singularity – the hypothesized technology explosion created by developing AI that is capable of creating more powerful AI. In this scenario Agent 2 develops a new and more efficient computer language, and uses it to develop Agent 3, which is the first truly general AI. However, the company starts to panic a little when they realize they have essentially lost control of Agent 3, and can no longer guarantee that it aligns with the company’s goals and ethics. They discuss rolling back to Agent 2 for now, but competition with China and other companies convinces them to forge ahead, resulting in Agent 4, which is not only a general AI but a superintelligence.

Continue Reading »

Comments: 0

Jan 19 2026

Moon Landing Hoax In School

Last week a child of one of my cohosts on the SGU, who is in fifth grade (the child, not the cohost), came home from school and declared, rather dramatically, “Mom, Dad – did you know that we never went to the Moon? It was all fake.” The parents found this to be a surprising revelation, but the child was convinced it was a proven scientific fact. Of course, we live in the age of the internet, and our children are going to be exposed to all sorts of information that may be misleading or age-inappropriate. This is one more thing parents have to deal with. What was disturbing about this incident was where they learned this “scientific fact” – from their science teacher.

Any parent should be concerned about this, but in a family of skeptical science communicators, this set off alarm bells. The first thing they did was send a polite e-mail to the teacher (cc’ing the principal) and simply ask what happened. This is good practice – always go to the primary source. It’s easy for anyone to get the wrong idea, and this wouldn’t be the first time a fifth grader misinterpreted a lesson in class. The teacher essentially said that while he did not explicitly tell the students we did not go to the Moon (the student reports he said “it’s possible we did not go to the Moon”), he personally believes we did not, and that it is a “proven scientific fact” that it would have been impossible, then and now, to send people to the Moon (somebody should tell the Artemis astronauts).

Apparently he raised at least two points in class – that there were (impossibly) no stars in the background of the photographs taken from the Moon, and the astronauts could not have survived passage through the radiation belts around the Earth. These are both old and long-debunked claims of the Moon-hoax conspiracy theorists. While it is easy to find sources online, let me briefly summarize why these claims are wrong.

Continue Reading »

Comments: 0

Jan 13 2026

Is Donut Lab’s Solid State Battery Legit?

The tech world is buzzing with the claims of a startup battery company out of Finland called Donut Lab. They claim to have created the world’s first production solid state battery. At first blush the claims are exciting but seem in line with the promises that we have been hearing about solid state batteries for years. So it may seem that a company has finally cracked the technical issues with the technology and gotten a product across the finish line. But let’s take a closer look.

First let’s review their claims. The CEO is claiming that their battery has a specific energy of 400 watt-hours per kilogram. This is great, considering the current lithium-ion batteries in production are in the 175-250 Wh/kg range. The Amprius silicon anode Li-ion battery has 370 Wh/kg, so 400 sounds plausibly incremental, but make no mistake, this would still be a huge breakthrough. Meanwhile the CEO also claims 100,000 charge-discharge cycles, and an operating temperature range from -30°C to 100°C. In addition he claims his battery is cheaper than standard Li-ion, does not use any geopolitically sensitive raw materials, and is already in production (for motorcycles). Further, it can be fully recharged in 5 minutes, and is incredibly stable with no risk of catching fire.
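To put the specific-energy numbers in perspective, here is the simple arithmetic for what each chemistry implies about pack mass. The 75 kWh pack size is my own illustrative choice (roughly a long-range EV pack), not a figure from Donut Lab; the Wh/kg values are the ones quoted above.

```python
# Back-of-the-envelope: battery mass needed for a given pack capacity
# at different specific energies. The 75 kWh pack size is an illustrative
# assumption; the Wh/kg figures are the ones discussed in the post.

PACK_KWH = 75  # hypothetical long-range EV pack

chemistries = {
    "mainstream Li-ion (225 Wh/kg)": 225,    # midpoint of the 175-250 range
    "Amprius silicon-anode (370 Wh/kg)": 370,
    "Donut Lab claim (400 Wh/kg)": 400,
}

for name, wh_per_kg in chemistries.items():
    mass_kg = PACK_KWH * 1000 / wh_per_kg   # kWh -> Wh, then divide by Wh/kg
    print(f"{name}: {mass_kg:.0f} kg for a {PACK_KWH} kWh pack")
```

The mass savings from 370 to 400 Wh/kg are real but incremental; it is the combination with the cycle-life, temperature, cost, and charging claims that strains credulity.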

As I have pointed out previously, battery technology is tricky because a useful EV battery needs a suite of features all at the same time, while reality often requires trade-offs. So you can get your high capacity, but with increased expense, for example (like the Amprius battery). Claiming to have every critical feature of an EV battery improved all at once is beyond a huge deal. That in itself starts to get into the implausibility range, but it’s not impossible. My reaction appears to be similar to most people in the tech world – show me the money. In short, at the CES event where Donut rolled out its battery claims, the company did not do that.

Continue Reading »

Comments: 0

Next »