Apr 20 2026

The Prospect of Regenerating Limbs

Regeneration is one of the futuristic tropes of science fiction because it is both incredibly powerful and not theoretically impossible. Imagine the ability to regrow a lost limb, or simply to replace a diseased or worn-out one. There are about a million limb amputations worldwide every year, so this is a very common medical problem. And what if we could regenerate organs? That would be a game-changer for medicine.

There are several approaches to addressing missing limbs or failing organs. One is the cyborg approach – make a mechanical version to replace the biological one. We are making progress here, with brain-machine interfaces, mechanical hearts, and other advances. Or you could transplant the body part from another person, or even from an animal that has been genetically modified to be compatible. You could also regrow the missing or failing body part from the intended recipient’s own tissues and then transplant it. Or you could inject stem cells programmed to regrow the needed part inside the recipient. All of these options are active research programs that have shown incredible promise, but they are also years or even decades away, especially in their mature form.

Let’s now add one more technology to the list – gene therapy that triggers natural regeneration, meaning regrowth from the person’s own tissue. This has long been a target of potential therapy, inspired by the fact that many animals can already do this naturally. Most extreme is the axolotl (a type of salamander that, for some reason, has become very popular with the younger generation), which can regenerate just about any of its body parts. Axolotls form a blastema of pluripotent stem cells at the site of injury that can regrow a missing limb, heart, spinal cord, or even parts of the brain within weeks. There are also zebrafish, which can regrow their tail fins. Mice can regrow the tips of missing digits, which is important because mice are mammals – it shows that regeneration can happen even within the mammalian clade. You don’t have to be a salamander.


Apr 16 2026

AI May Disrupt The Internet

I think the recent rapid advance in the capabilities of artificial intelligence (AI) applications qualifies as a disruptive technology. The term “disruptive technology” was popularized in 1997 by Clayton M. Christensen. To summarize, a disruptive technology is “an innovation that fundamentally alters the way industries operate, businesses function, or consumers behave, often rendering existing technologies, products, or services obsolete.” AI is potentially so powerful, and changing so quickly, that it is challenging to optimally regulate it. We are caught in a classic dilemma – we do not want to hamper our own competitiveness in a critical new technology, but we also don’t want to unwittingly create new vulnerabilities or unintended negative consequences. For now we seem to be erring on the side of not hampering competitiveness, which basically places us at the tender mercies of tech bros.

Which is partly why I found the conflict between Anthropic and the Department of Defense (still the legal name) so fascinating. In short, Anthropic’s powerful AI application, Claude, has at least two significant internal “red lines” or guardrails – it cannot be used for mass domestic surveillance, and it cannot be used for final military targeting without a human in the loop. Anthropic CEO Dario Amodei has not backed down on this. He says the first restriction, on domestic surveillance, is simply a matter of ethics. The second restriction, however, is mainly a matter of quality control – their system is still vulnerable to hallucinations and is not reliable enough to count on for final targeting decisions. Hegseth has criticized these concerns as “woke” and a critical vulnerability for the US military. More charitably, he says essentially that the US military is using the application lawfully and should not be restricted in any lawful use of the software. Others have also stated that in an emergency they have to know the software will do whatever they ask of it.

This conflict has many deep implications, and is beyond what I intend for this blog post. What I want to focus on is the fact that an AI application is creating this ethical dilemma, and forcing us to ask – who should control such awesome power, the CEO of a tech company or the Federal government? It seems that we are facing or about to face many similar questions provoked by the disruptive nature of recent AI applications.


Apr 14 2026

Do You Have Video Game Skilz?

Remember The Last Starfighter from 1984? In that movie a trailer-park kid with limited prospects spends his time on an arcade-style video game, Starfighter. He plays the game so much that he beats the final level, and it turns out he is the first person ever to do so. He is heavily criticized for spending so much time playing a game, which is seen as a sign of boredom and lack of ambition – a waste of time. The twist (42-year-old spoiler incoming) is that the game was actually a test (the Excalibur test – a deliberate reference to King Arthur) to find a skilled pilot for an actual real-life starfighter. He goes on to save the galaxy from invasion.

The interesting premise of the movie is that playing a video game is not only a test of real-life skill, but can be used to train such skill. In 1984 this was kind of a new idea, and appealing to a generation of kids newly hooked on video games. Video games have become significantly more mainstream over the last half century, but a bit of cultural stigma still attaches to them – they are seen as the realm of dorks and geeks, with inevitable jokes about how avid video gamers will “never get laid” (or something to that effect). Since the beginning of their popularity, parents have worried – with that worry fed by a sensationalist media – that video games would “rot” their kids’ brains, turn them into losers who could never get a skilled job, and might even cause violent behavior. After every mass shooting, someone brings up violent video games.

But the evidence simply does not support these concerns. One big problem with the research is that it shows correlation only, not causation. Sure, people who play aggressive video games tend to be more aggressive, but that doesn’t mean the games are the cause. Further, there are many confounding factors, and more recent research shows that violence in a game is not the key feature. Difficulty, and the frustration that comes with it, seems to raise aggression – not in-game violence. More competitive and difficult games tend to be more stimulating, regardless of the level of violence. The bottom line – after decades of research, systematic reviews conclude: “There is insufficient scientific evidence to support a causal link between violent video games and violent behavior.”


Apr 13 2026

Genetically Engineered Pets Are Coming

Last week I wrote about the possibilities of genetically engineering humans. The quickie version is this – we are already using genetic engineering (CRISPR) for somatic changes to treat diseases, and other applications are likely to follow. Engineering germline cells, which would introduce changes into the human gene pool, is legally and ethically fraught, and it’s hard to predict how this will play out. I have also written often about genetically engineering food. I think this is a great technology with many powerful applications, but it should be, and largely is, highly regulated to make sure that anything that gets into the human food chain is safe.

I haven’t written as much about genetically engineering pets, and this is likely to be the lowest-hanging fruit. That is because pets are neither food nor a human medical intervention. But that does not mean they are unregulated – in the US they fall under the FDA and USDA. Genetic engineering is treated as an animal drug, and must be deemed safe for the animals being engineered. The USDA can also regulate engineered plants and animals to make sure they do not pose any risk to the environment, humans, or livestock. This makes sense. We would not want, for example, to allow a company to release a genetically engineered bee, pest, or predator into the environment without proper oversight.

Pets, as a category, are domesticated, are not intended to be used as food, and are not intended to be released into the wild. I say “intended” because pets can become food for predators, and they can escape or be released into the wild, and even become feral. But these contingencies are much easier to prevent than with food or wild plants or animals. For example, if you get a rescue pet, it has likely already been spayed or neutered. One easy way to reduce risk would be to make any GE pet sterile, which is likely what a company would want to do anyway to prevent violation of its patents through breeding. In short, it seems that reasonable regulatory hurdles should not be a major problem for any effort to commercialize GE pets.


Apr 09 2026

Are Genetically Engineered Humans Coming?

Are we getting close to the time when parents would have the option of genetically engineering their children at the embryo stage? If so, is this a good thing, a bad thing, or both? In order for this to happen such engineering would need to be technically, legally, and commercially viable. Let’s take these in order, and then discuss the potential implications.

The main reason this is even a topic for discussion is that genetic engineering is technically feasible. Obviously we do it to plants and animals all the time. We also have increasingly powerful and affordable technology for doing so, such as CRISPR. This is already powerful and practical enough for small startups to perform CRISPR as a service, if it were legal. We already have FDA-approved CRISPR treatments, and have performed personalized CRISPR therapy. CRISPR is fast and affordable enough to have made its way into the clinic. But there is a crucial difference between these treatments and germline modification – these treatments affect somatic cells, not germ-line cells. This means that whatever change is made will stay confined to that one individual, and cannot get into the human gene pool. What we are talking about now is genetically modifying an embryo at an early enough stage that the change will affect all cells, including germ cells. This means that these changes can be passed down to the next generation, and effectively enter the human gene pool.

This difference is precisely why there is regulation dealing with such procedures in many countries, including the US. In the US the situation is a little complex. It is not explicitly illegal to perform germline gene editing on humans. However, there is a ban on federal funding for any such research. This does allow for private funding of such research, but any resulting treatment would still need FDA approval, which is highly unlikely in the current environment. Despite this, several startups are discussing exploring this idea. Why this is happening all at once is not clear, but it seems we have crossed some threshold and startups have noticed. With current regulation, where does that leave us regarding our three criteria?

Technically, a CRISPR-based germline treatment for humans is possible. We do have the technology. What needs to be worked out are the specific changes and their results. This would require clinical trials, and that is the main stumbling block in the US and some other countries. It seems unlikely the FDA would approve such trials, and therefore there would be no way to even work toward FDA approval. A company could theoretically do privately funded studies that are not part of FDA approval, but they would still need ethical approval (IRB approval) for such studies, which may prove difficult (although not necessarily impossible). Such research could be carried out in countries with more lax regulations, however. Over 70 nations have such regulations, which means many do not. So we are theoretically close to having marketable treatments designed to change actual human genetic inheritance.


Apr 06 2026

What Is Your Favorite Color?

Many people might find this to be an easy question and a simple concept – what is your favorite color? In fact, it was used as the quintessential easy question by the bridgekeeper in Monty Python and the Holy Grail. But it is a good rule of thumb that everything is more complicated than it at first appears, and this is no exception. We recently had a casual discussion of this topic on the SGU, and it left me unsatisfied, so I thought I would do a deeper dive. Perhaps there is a neuroscientific answer to this question.

The panel differed in their reactions to the question of favorite color (we were just giving our subjective feelings, not discussing research or evidence). Cara felt that “favorite color” is largely arbitrary. Kids are asked to pick a favorite color, which they do (under pressure), and then often just stick with that answer as they get older. She also felt the question was meaningless without context – are you referring to clothes, cars, house color, or something else? Jay was at the other end of the spectrum – he has a strong affinity for the color orange, which gives him a pleasant feeling. The rest of us were somewhere in between these two extremes.

I knew there had to be a science of “favorite color”, which I thought might be interesting. Indeed there is – and it is interesting.

First, what is the distribution of favorite colors, across the world and demographically? Blue is, far and away, the favorite color in most countries across the world, so the preference seems to be very cross-cultural. It is also the favorite across age groups and genders. The second-most favorite color is either green, red, or purple. Brown is almost universally the least favorite color. Gender does have an effect, with more women favoring pink, and reds in general (while still preferring blue overall). Republicans still prefer blue over red, but more Republicans prefer red than Democrats do. There are country-specific differences as well – red ranks higher in China than in many other countries, for example.


Apr 02 2026

Brain As Receiver Is Still Wrong

I have a love-hate relationship with TikTok, as I do social media in general. It is a great communication tool and allows scientists and science communicators to get their content out to a larger audience cheaply and easily. If you know how to use the internet and social media as a resource, you can find a video about almost any topic. I particularly love the “how to” videos. And yet these applications are also used (mostly used) to spread nonsense and misinformation, or at least inaccurate, misleading, or overly generalized information. The low bar of entry cuts both ways.

As a result, I spend part of my time as a communicator with my finger in the dike of social media pseudoscience and science denial. For example, this individual feels his insights into the workings of the human brain need to be shared with the world. His musings are based entirely on a false premise – an apparent misunderstanding of what neuroscientists actually know about brain function. He begins with the nicely vague statement, “scientists have discovered,” followed by a completely incorrect claim – that thoughts come to our brain from outside the brain.

Before I get into this old “brain as receiver” claim, I want to point out that this format is extremely common on TikTok in particular and social media in general. This is more worrying than any individual claim – the culture is to present some random nonsense in the format of “isn’t this crazy,” or with a cynical tone implying something nefarious is going on. Such authors may or may not believe what they say; they may just be trying to amplify their engagement with total disregard for whether what they are saying is true. They may even be a full Poe – knowing that what they say is nonsense. Either way, they feel it is appropriate to spend the time to record and upload a video without spending the few minutes needed to check whether what they are saying is even true. The very platform they are using to spread their nonsense often has all the information they need to answer their alleged questions. The culture is profoundly incurious, intellectually vacuous, lacking all scholarship or quality control, and seems to value only engagement. Thrown into the mix are true believers, grifters, and those who display classic symptoms of some form of thought disorder. This is “infotainment” taken to its ultimate expression.


Mar 31 2026

AI And Schools

Many teachers are panicking over AI (artificial intelligence), and for good reason. This goes beyond students using AI to cheat on their homework or write their essays for them. If you have AI essentially think for you, then you will not learn to think. On the other hand optimists point out that AI can be a powerful tool to aid in learning. It all comes down to how we use, regulate, and manage our AI tools.

The cautionary approach was captured well, I think, by Mark Crislip in this SBM commentary, in which he worries about the effects of AI on doctor education. How will a new generation of physicians learn to think like expert clinicians if they can have AIs do all their clinical thinking for them? My question is – is AI fundamentally different from all the other technological advances that have come before? Did calculators take away our ability to do math? The answer appears to be no. Students still gain basic math skills at the same rate with or without access to calculators. But there are lots of confounding factors here, so some teachers still warn against allowing kids access to calculators too soon. Others point out that access to calculators has simply shifted our math abilities away from basic operations and toward modeling, problem solving, and complex concepts. It seems we are in the middle of the exact same conversation about AI.

We can also think about things like GPS. My ability to navigate from point A to point B without GPS, or to navigate with maps, has definitely declined. But using GPS has also made my navigating to unfamiliar locations easier and more efficient. I would not want to go back to a world without it.


Mar 30 2026

NASA Unveils New Moon Plans

As we anticipate the Artemis II launch, now slated for early April with plans to take four astronauts on a trip around the Moon and back to Earth, NASA has been unveiling some significant changes to its plans for returning to the Moon and beyond. If you have fallen behind on these announcements, here is a summary of the important bits.

Artemis II will continue as planned, marking the first crewed deep space mission since 1972 (Apollo 17). The original plan was for Artemis III to land on the Moon in 2027, but that mission has been pushed to Artemis IV in 2028. A new Artemis III mission has been inserted – this one will go only to low Earth orbit (LEO) and will test the integration of all the systems necessary to land on the Moon. This will include docking with one or both of the two landers, one being built by SpaceX and one by Blue Origin. This sounds like a really good idea – it did seem unusual that they were planning on going straight to the Moon without ever having test-docked with the lander.

Even though landing on the Moon will be delayed by at least a year, NASA says this will set it up for at least annual landings on the Moon after that, with a goal of a landing every six months. The reason for this frequent pace is the announcement NASA made last week – that it is pausing plans for a Lunar Gateway in lunar orbit and will instead focus on building a permanent Moon base near the lunar south pole.

In order to make this possible, and to support the future Moon base (no word yet on whether it will be called Moon Base Alpha, as it should be), NASA plans about 30 uncrewed robotic landings on the Moon every year. These missions will scope out the location for the base and deliver equipment and supplies.


Mar 24 2026

What Happened to Comet 3I/Atlas?

Published under Astronomy

Last year the inner solar system had an interstellar visitor – 3I/Atlas (the designation means the third interstellar object discovered, found by the ATLAS telescope). The third-ever of anything is by definition a rare event, so this was scientifically exciting. The comet came into the inner solar system, passing close to Jupiter and Mars but not to the Earth, went behind the sun, then emerged on its path away from the sun. It is now headed for the orbit of Jupiter and out of the solar system. At first 3I/Atlas displayed a number of minor anomalies. It was behaving sort of like a comet, but with some differences. This fits well, however, with the main hypothesis that it is an interstellar comet – so it’s a comet, but may have a different composition from comets that formed in our own solar system. This is almost certainly the case – the comet comes from the thick disk of the galaxy, likely from a low-metallicity star system, and has likely been travelling through interstellar space for billions of years, possibly making it even older than our own star.

Now that it is passing out of the solar system we can look at all the data that NASA collected and draw some fairly confident conclusions. There are many sources of information, but Wikipedia actually has a pretty good summary and list of references. In the end, 3I/Atlas behaved mostly like a typical comet. It formed a tail pointing away from the sun, brightened as it approached, then faded as it moved away. Spectral analysis found that the comet was unusually rich in carbon dioxide (CO2), with small amounts of water ice, water vapor, carbon monoxide (CO), and carbonyl sulfide (OCS). It also had small amounts of cyanide and nickel gas, which are common in comets from our own solar system. In other words – it is a comet. It did originate from a part of the sky that we had previously calculated should produce fewer such interstellar objects, which either makes it especially rare or means our calculations are off.

Every time we encounter a new interstellar object we gather more data about such objects – how frequent they are, where they come from, and what their nature is. Right now we have just three data points. After the first one, Oumuamua, we had no idea how common they were, because we had just one data point. Now we have enough instruments surveying the sky that we are better able to detect such objects, which are very fleeting. The question was – was Oumuamua a one-off, and we just got lucky to detect something that happens very rarely, or are such objects common? With three data points we can conclude that they are fairly common, and that we should detect one every few years or so, perhaps even more often as we look harder.
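For a rough sense of what “one every few years” means, here is a minimal back-of-the-envelope sketch in Python. The discovery years are real (’Oumuamua in 2017, Borisov in 2019, 3I/ATLAS in 2025), but the nine-year observation window and the assumption of uniform survey sensitivity are mine, not the post’s:

```python
# Naive Poisson-style rate estimate from three detections.
# Discovery years are real; the observation window is an assumption.
detections = {"1I/'Oumuamua": 2017, "2I/Borisov": 2019, "3I/ATLAS": 2025}

window_years = 2026 - 2017           # ~9 years of modern survey coverage (assumed)
rate_per_year = len(detections) / window_years
mean_wait_years = 1 / rate_per_year  # average gap between detections

print(f"~{rate_per_year:.2f} detections/year, i.e. one every ~{mean_wait_years:.0f} years")
```

With only three events the uncertainty on this rate is large, and improving survey sensitivity (e.g. new observatories coming online) should push the detection rate higher than this historical average.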

