Apr
16
2026
The recent rapid advance in the capabilities of artificial intelligence (AI) applications, I think, qualifies as a disruptive technology. The term “disruptive technology” was popularized in 1997 by Clayton M. Christensen. To summarize, a disruptive technology is “an innovation that fundamentally alters the way industries operate, businesses function, or consumers behave, often rendering existing technologies, products, or services obsolete.” AI is potentially so powerful, and changing so quickly, that it is challenging to regulate optimally. We are caught in a classic dilemma – we do not want to hamper our own competitiveness in a critical new technology, but we also don’t want to unwittingly create new vulnerabilities or unintended negative consequences. For now we seem to be erring on the side of not hampering competitiveness, which basically places us at the tender mercies of tech bros.
Which is partly why I found the conflict between Anthropic and the Department of Defense (still the legal name) so fascinating. In short, Anthropic’s powerful AI application, Claude, has at least two significant internal “red lines” or guardrails – it cannot be used for mass domestic surveillance, and it cannot be used for final military targeting without a human in the loop. Anthropic CEO Dario Amodei has not backed down on this – he says that the first restriction, on domestic surveillance, is simply a matter of ethics. The second restriction, however, is mainly a matter of quality control – their system is still vulnerable to hallucinations and is not reliable enough to count on for final targeting decisions. Hegseth has criticized these concerns as “woke” and a critical vulnerability for the US military. More charitably, he says essentially that the US military is using the application lawfully, and should not be restricted in any lawful use of the software. Others have also stated that in an emergency they have to know the software will do whatever they ask of it.
This conflict has many deep implications, most of which are beyond the scope of this blog post. What I want to focus on is the fact that an AI application is creating this ethical dilemma, forcing us to ask – who should control such awesome power, the CEO of a tech company or the Federal government? It seems that we are facing, or are about to face, many similar questions provoked by the disruptive nature of recent AI applications.
Continue Reading »
Apr
13
2026
Last week I wrote about the possibilities of genetically engineering humans. The quickie version is this – we are already using genetic engineering (CRISPR) for somatic changes to treat diseases, and other applications are likely to follow. Engineering germline cells, which would get into the human gene pool, is legally and ethically fraught, and it’s hard to predict how this will play out. I have also written often about genetically engineering food. I think this is a great technology with many powerful applications, but it should be, and largely is, highly regulated to make sure that anything that gets into the human food chain is safe.
I haven’t written as much about genetically engineering pets, and this is likely to be the lowest-hanging fruit. That is because pets are neither food nor a human medical intervention. But that does not mean they are not regulated – in the US they fall under the FDA and USDA. Genetic engineering is treated as an animal drug, and must be deemed safe for the animals being engineered. The USDA can also regulate engineered plants and animals to make sure they do not pose any risk to the environment, humans, or livestock. This makes sense. We would not want, for example, to allow a company to release a genetically engineered bee, pest, or predator into the environment without proper oversight.
Pets, as a category, are domesticated, are not intended to be used as food, and are not intended to be released into the wild. I say “intended” because pets can become food for predators, and they can escape or be released into the wild, and even become feral. But these contingencies are much easier to prevent than with food or wild plants or animals. For example, if you get a rescue pet, it has likely already been spayed or neutered. One easy way to reduce risk would be to make any GE pet sterile, which is likely what the company would want to do anyway to prevent violation of their patents through breeding. In short, it seems that reasonable regulatory hurdles should not be a major problem for any effort to commercialize GE pets.
Continue Reading »
Apr
09
2026
Are we getting close to the time when parents would have the option of genetically engineering their children at the embryo stage? If so, is this a good thing, a bad thing, or both? In order for this to happen such engineering would need to be technically, legally, and commercially viable. Let’s take these in order, and then discuss the potential implications.
The main reason this is even a topic for discussion is because genetic engineering is technically feasible. Obviously we do it to plants and animals all the time. We also have increasingly powerful and affordable technology for doing so, such as CRISPR. This is already powerful and practical enough for small startups to perform CRISPR as a service, if it were legal. We already have FDA-approved CRISPR treatments, and have performed personalized CRISPR therapy. CRISPR is fast and affordable enough to have made its way into the clinic. But there is a crucial difference between these treatments and genetic modification – these treatments affect somatic cells, not germ-line cells. This means that whatever change is made will stay confined to that one individual, and cannot get into the human gene pool. What we are talking about now is genetically modifying an embryo at an early enough stage that it will affect all cells, including germ cells. This means that these changes can be passed down to the next generation, and effectively enter the human gene pool.
This difference is precisely why there is regulation dealing with such procedures in many countries, including the US. In the US the situation is a little complex. It is not explicitly illegal to perform germ-line gene editing on humans. However, there is a ban on federal funding for any such research. This does allow for private funding of such research, but any resulting treatment would still need FDA approval, which is highly unlikely in the current environment. Despite this, several startups are discussing exploring this idea. Why this is happening all at once is not clear, but it seems like we have crossed some threshold and startups have noticed. With current regulation, where does that leave us regarding our three criteria?
Technically, a CRISPR-based germ-line treatment for humans is possible. We do have the technology. What needs to be worked out are the specific changes and their effects. This would require clinical trials, and that is the main stumbling block in the US and some other countries. It seems unlikely the FDA would approve such trials, and therefore there would be no way to even work towards FDA approval. A company could theoretically do privately funded studies that are not part of FDA approval, but they would still need ethical approval (IRB approval) for such studies, which may prove difficult (although not necessarily impossible). Such research could be carried out in countries with more lax regulations, however. Over 70 nations have such regulations, which means many do not. So we are theoretically close to having marketable treatments designed to change actual human genetic inheritance.
Continue Reading »
Mar
31
2026
Many teachers are panicking over AI (artificial intelligence), and for good reason. This goes beyond students using AI to cheat on their homework or write their essays for them. If you have AI essentially think for you, then you will not learn to think. On the other hand optimists point out that AI can be a powerful tool to aid in learning. It all comes down to how we use, regulate, and manage our AI tools.
The cautionary approach was captured well, I think, by Mark Crislip in this SBM commentary, in which he worries about the effects of AI on doctor education. How will a new generation of physicians learn how to think like expert clinicians if they can have AIs do all their clinical thinking for them? My question is – is AI fundamentally different from all the other technological advances that have come before? Did calculators take away our ability to do math? The answer appears to be no. Students still gain basic math skills at the same rate with or without access to calculators. But there are lots of confounding factors here, and so some teachers still warn against giving kids access to calculators too soon. Others point out that access to calculators has simply shifted our math abilities, away from basic operations toward more modeling, problem solving, and complex concepts. It seems we are in the middle of the exact same conversation about AI.
We can also think about things like GPS. My ability to navigate from point A to point B without GPS, or to navigate with maps, has definitely declined. But using GPS has also made my navigating to unfamiliar locations easier and more efficient. I would not want to go back to a world without it.
Continue Reading »
Mar
30
2026
As we anticipate the Artemis II launch, now slated for early April with plans to take four astronauts on a trip around the Moon and back to Earth, NASA has been unveiling some significant changes to its plans for returning to the Moon and beyond. If you have fallen behind these announcements, here is a summary of the important bits.
Artemis II will continue as planned, marking the first crewed deep space mission since 1972 (Apollo 17). The original plan was for Artemis III to land on the Moon in 2027, but this mission has been pushed to an Artemis IV mission in 2028. A new Artemis III mission has been inserted – this will go only to low Earth orbit (LEO) and will test the integration of all the systems necessary to land on the Moon. This will include docking with one or both of the two landers, one being built by SpaceX and one by Blue Origin. This sounds like a really good idea, and it did seem unusual that they were planning on going straight to the Moon without ever test docking with the lander.
Even though landing on the Moon will be delayed by at least a year, NASA says this will set them up for at least annual landings on the Moon after that, with a goal of a landing every six months. The reason for this frequent pace is the more recent announcement by NASA last week – that they are pausing plans for a Lunar Gateway in lunar orbit and instead are going to focus on building a permanent Moon base near the lunar south pole.
In order to make this possible, and to support the future Moon base (no word yet on whether this will be called Moon Base Alpha, as it should), NASA plans about 30 uncrewed robotic landings on the Moon every year. These will scope out the location for the base and deliver equipment and supplies.
Continue Reading »
Mar
23
2026
In the decades before the Wright brothers’ historic 1903 flight at Kitty Hawk there were many claims of powered heavier-than-air flying machines. There were also many false sightings of “airships”, amounting to a form of mass delusion. But the false claims and false sightings do not change the fact that the technology for powered flight was right on the cusp, and that the Wright brothers crossed that threshold in 1903, leading ultimately to the massive industry we have today. This is not surprising. There is often a sense, in the industry and spreading to the public, that the technological pieces are in place for a significant application breakthrough. Today this is more true than ever, with a vibrant industry of tech news, showcases, conferences, blogs, podcasts, etc. I cover plenty of tech news here. It’s interesting to try to glimpse what technology is right around the corner. Any technology that is closely watched and much anticipated is likely to generate lots of premature hype and false claims.
This is definitely true for battery technology. We are arguably in the middle of a massive effort to electrify as much of our industry as possible, especially transportation. Maximizing intermittent renewable sources of energy would also be greatly facilitated by advances in energy storage. Meanwhile, electronic devices are becoming increasingly integrated into our daily lives. Advances in battery technology could have a dramatic impact on all these sectors, and batteries are likely to be a critical technology for the next century. So it’s no surprise that there is a lot of hype surrounding battery tech, some of it legitimate, some of it fake, and some just premature. But this hype does not change the fact that battery technology is rapidly improving and the hype will become reality soon enough (just like the Wright flyer).
When it comes to EV batteries we all have a wish-list of features we would like to see. I now own two EVs, and they are the best cars I have ever owned. At least for my personal situation (I live in an exurb and own my own parking spots), EVs are great, and current battery technology is more than adequate. But sure, I live every day with the reality of how advances in battery tech will make EVs even more convenient and useful. I have detailed the wish-list before, but here it is again: increased capacity, both by volume but especially by weight (specific energy), to decrease the weight of EVs while increasing their potential range; faster charging (the holy grail being the ability to fully recharge an EV as fast as you can fill a car with gas); a long charge-discharge cycle lifespan (longer than the lifespan of the car); usefulness across a wide range of temperatures; stability (does not spontaneously catch fire); and low cost, which is tied to being made from cheap and abundant elements. This last feature also means that the battery is not dependent on rare elements whose supply lines are largely controlled by hostile or conflict-ridden countries.
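To make the specific-energy point concrete, here is a bit of back-of-the-envelope arithmetic. All the numbers below are hypothetical placeholders I chose for illustration, not specs of any real vehicle or chemistry – the point is just that at a fixed pack weight, range scales directly with Wh per kg.

```python
# Illustrative only: hypothetical numbers, not data from any real EV.
# Shows why specific energy (Wh/kg) is the wish-list item that most
# directly drives EV range at a fixed pack weight.

def range_km(pack_mass_kg: float, specific_energy_wh_per_kg: float,
             consumption_wh_per_km: float) -> float:
    """Range = usable pack energy / energy used per km of driving."""
    pack_energy_wh = pack_mass_kg * specific_energy_wh_per_kg
    return pack_energy_wh / consumption_wh_per_km

# A 450 kg pack at 150 Wh/kg vs a hypothetical 300 Wh/kg chemistry,
# for a car that uses 180 Wh/km:
today = range_km(450, 150, 180)   # 450 * 150 / 180 = 375 km
future = range_km(450, 300, 180)  # doubles to 750 km
print(f"{today:.0f} km -> {future:.0f} km at the same pack weight")
```

Doubling specific energy doubles range at the same weight – or, equally useful, holds range constant while halving the pack mass, which itself reduces consumption per km.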
Continue Reading »
Feb
16
2026
It’s not easy being a futurist (which I guess I technically am, having written a book about the future of technology). It never was, judging by the predictions of past futurists, but it seems to be getting harder as the future arrives more and more quickly. Even if we never get to something like “The Singularity”, the pace of change in many areas of technology is speeding up. Paradoxically, this may actually be good for futurists. We get to see fairly quickly how wrong our predictions were, and so have a chance at making adjustments and learning from our mistakes.
We are now near the beginning of many transformative technologies – genetic engineering, artificial intelligence, nanotechnology, additive manufacturing, robotics, and brain-machine interface. Extrapolating these technologies into the future is challenging. How will they interact with each other? How will they be used and accepted? What limitations will we run into? And (the hardest question) what new technologies not on that list will disrupt the future of technology?
While we are dealing with these big questions, let’s focus on one specific technology – controllable robotic prosthetics. I have been writing about this for years, and this is an area that is advancing more quickly than I had anticipated. The reason for this is, briefly, AI. Recent advances in AI are allowing for far better brain-machine interface control than was previously achievable, because modern AI is really good at picking out patterns from tons of noisy data. This includes picking out patterns in EEG signals from a noisy human brain.
This matters when the goal is having a robotic prosthetic limb controlled by the user through some sort of BMI (from nerves, muscles, or directly from the brain). There are always two components to this control – the software driving the robotic limb has to learn what the user wants, and the user has to learn how to control the limb. Traditionally this takes weeks to months of training in order to achieve a moderate but usable degree of control. By adding AI to the computer-learning end of the equation, this training time is reduced to days, with far better results. This is what has put progress a couple of decades ahead of where I thought it would be.
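The kind of pattern extraction I’m describing can be sketched in a few lines. This toy example is my own illustration (not code from any real BMI system): it generates noisy synthetic “EEG” epochs where one class carries a weak 10 Hz rhythm, standing in for a motor-imagery signature, and shows that a simple band-power feature plus a threshold can reliably separate the two classes despite the noise.

```python
# A toy sketch, not any real decoder: the core pattern-recognition task
# is pulling a weak, consistent signal out of noisy "EEG" epochs.
import numpy as np

rng = np.random.default_rng(0)
fs, n_epochs, n_samples = 250, 200, 500  # 2-second epochs at 250 Hz
t = np.arange(n_samples) / fs

# Class 0: noise only. Class 1: noise plus a weak 10 Hz rhythm.
noise0 = rng.normal(0, 1.0, (n_epochs, n_samples))
noise1 = rng.normal(0, 1.0, (n_epochs, n_samples)) + 0.8 * np.sin(2 * np.pi * 10 * t)

def bandpower_10hz(epochs):
    """Mean spectral power in the 8-12 Hz band - a classic EEG feature."""
    spec = np.abs(np.fft.rfft(epochs, axis=1)) ** 2
    freqs = np.fft.rfftfreq(epochs.shape[1], 1 / fs)
    band = (freqs >= 8) & (freqs <= 12)
    return spec[:, band].mean(axis=1)

f0, f1 = bandpower_10hz(noise0), bandpower_10hz(noise1)
threshold = (f0.mean() + f1.mean()) / 2  # simple nearest-centroid decision rule
acc = ((f0 < threshold).mean() + (f1 > threshold).mean()) / 2
print(f"toy decoder accuracy: {acc:.2f}")
```

Real decoders work from multi-channel recordings with learned spatial filters and far subtler signatures, which is where modern AI earns its keep – but the core task is the same: finding a weak, consistent pattern buried in noise.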
Continue Reading »
Feb
12
2026
There are many ways in which our brains can be hacked. The brain is a complex, overlapping set of algorithms evolved to help us interact with our environment to enhance survival and reproduction. However, while we evolved in the natural world, we now live in a world of technology, which gives us the ability to control our environment. We no longer have to simply adapt to the environment, we can adapt the environment to us. This partly means that we can alter the environment to “hack” our adaptive algorithms. Now we have artificial intelligence (AI), which has become a very powerful tool to hack those brain pathways.
In the last decade chatbots have blown past the Turing Test – a type of test in which a blinded evaluator has to tell the difference between a live person and an AI through conversation alone. We appear to still be on the steep part of the curve in terms of improvements in these large language models and other forms of AI. What these applications have gotten very good at is mimicking human speech – including pauses, inflections, sighing, “ums”, and all the other imperfections that make speech sound genuinely human.
As an aside, these advances have rendered many sci-fi visions of the future quaint and obsolete. In Star Trek, for example, even a couple hundred years in the future computers still sounded stilted and artificial. We could, however, retcon this choice to argue that the stilted computer voices of the sci-fi future were deliberate, and not a limitation of the technology. Why would they do this? Well…
Current AI is already so good at mimicking human speech, including the underlying human emotion, that people are forming emotional attachments to them, or being emotionally manipulated by them. People are, literally, falling in love with their chatbots. You might argue that they just “think” they are falling in love, or they are pretending to fall in love, but I see no reason not to take them at their word. I’m also not sure there is a meaningful difference between thinking one has fallen in love and actually falling in love – the same brain circuits, neurotransmitters, and feelings are involved.
Continue Reading »
Feb
09
2026

This post is only partly about uranium, but mostly about motivated reasoning – our ability to harness our reasoning power not to arrive at the most likely answer, but to support the answer we want to be true. But let’s chat about uranium for a bit. In the comments to my recent article on a renewable grid, one commenter referred to a blog post on Skeptical Science and quoted:
“Abbott 2012, linked in the OP, lists about 13 reasons why nuclear will never be capable of generating a significant amount of power. Nuclear supporters have never addressed these issues. To me, the most important issue is there is not enough uranium to generate more than about 5% of all power.”
This is the flip side, I think, to the misinformation about renewable energy I was discussing in that post. Let me say, I don’t think there is an objective right answer here, but my personal view is that the pathway to net zero that emits the least amount of carbon includes nuclear energy, a view that is in line with the IPCC. There is, however, still a lot of anti-nuclear bias out there, just as there is pro-fossil fuel bias, and pro-renewable bias, and every kind of bias. If you want to advocate for any particular source of power, there are enough variables to play with that you can make a case. However, factual misstatements are different – we should at least be arguing from the same set of verified facts. So let’s address the question – how much uranium is there?
There is no objective answer to this question. Why not? Because it depends on your definition. Most estimates of how much uranium there is in the world, in the context of how much is available for nuclear power, do not include every atom of uranium. They generally take several approaches – how much is in current usable stockpiles, how much is being produced by active mines, and how much is “commercially” available. That last category depends on where you draw the line, which in turn depends on the current price of uranium as well as the value of the energy it produces. If, for example, we decided to put a price on emitting carbon from energy production, the value of uranium would suddenly increase. It also depends on the technology to extract and refine uranium. The value of uranium is also determined by the efficiency of reactors.
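A little arithmetic makes the point. In this sketch the tonnage and demand figures are hypothetical placeholders of my own, not real reserve estimates – what it shows is how the answer to “how many years of uranium do we have” swings wildly depending on where you set the price cutoff for “recoverable” and on how efficiently reactors use the fuel.

```python
# Illustrative arithmetic only - all figures are hypothetical placeholders,
# chosen to show how the "years of supply" answer depends on the price
# cutoff for "recoverable" ore and on reactor efficiency, not to report
# real reserve numbers.

def years_of_supply(recoverable_tonnes: float, annual_demand_tonnes: float,
                    efficiency_multiplier: float = 1.0) -> float:
    """Years of supply = recoverable uranium, stretched by any efficiency
    gain, divided by annual demand."""
    return recoverable_tonnes * efficiency_multiplier / annual_demand_tonnes

demand = 60_000  # hypothetical annual demand, tonnes of uranium

# Raising the price cutoff brings more deposits into "recoverable":
cheap_ore_only = years_of_supply(6_000_000, demand)   # 100 years
higher_cutoff = years_of_supply(15_000_000, demand)   # 250 years

# More efficient reactors (e.g. breeders) stretch the same ore much further:
with_breeders = years_of_supply(6_000_000, demand, efficiency_multiplier=50)

print(cheap_ore_only, higher_cutoff, with_breeders)
```

Same planet, same atoms – but the “how much uranium is there” answer moves by more than an order of magnitude depending on which definitions you plug in, which is exactly why the 5% claim cannot be stated as a simple fact.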
Continue Reading »
Feb
05
2026
Mark Zuckerberg said a few months ago that AI is ushering in a third phase of social media. First social media was used to connect with family and friends, then it became a platform for content creators, and now creativity is being further unleashed with new AI-powered tools. That’s a pretty rosy view, and unsurprising coming from the creator of Facebook. Many people, however, are becoming increasingly concerned about what the net effect of AI-generated content will be, especially low-grade content (now colloquially referred to as AI slop).
One thing is clear – AI-generated content, because it is so easy and fast, is increasingly flooding social media. AI’s influence takes two basic forms, AI-generated content, and recommendations driven by AI-powered algorithms. So an AI might be telling you to watch an AI-generated video. Recent studies show that about 70% of images on Facebook are now AI-generated, with 80% of the recommendations being AI-powered. This is a fast-moving target, but across social media AI-generated content is somewhere between 20 and 40%. This is not evenly distributed, with some sites being overwhelmed. The arts and crafts site Etsy has been overrun by AI slop, causing some users to abandon the platform.
We are already seeing a backlash and crackdown, but this is sporadic and of questionable effectiveness. Etsy, for example, has tried to limit AI slop on its site, but with limited success. So where is all this headed?
We need to consider the different types of content separately. Much AI slop is obviously fake and for entertainment purposes only. It may be cartoony or obviously humorous, with no intent to pass as real or deceive. Some content is meant to entertain (i.e., drive clicks and engagement), but is not obviously fake. Part of the appeal, in fact, may be the question of whether or not the content is real. Other content is meant to deceive, to influence public opinion or the behavior of the content consumer. This last type of content is obviously the most concerning.
Continue Reading »