Nov 04 2025

Human Tool Use Earlier Than We Thought

Published by under Evolution
Comments: 0

When did our hominid ancestors first start using tools? This is a fascinating question of human paleontology, and it is also difficult to answer definitively. There are two basic reasons for this difficulty. The first is generic to all paleontology – our knowledge of when something emerged is dependent upon the oldest specimen. But the oldest specimen is likely not the very first emergence, so dates are frequently being pushed back when yet older specimens are discovered. Often scientists will say that something is “at least” a certain age old, knowing it could be older.

Tool uses specifically, however, has another challenge – we can only really know about tools that survive and are recognizable in the record. If early hominids were using wooden tools, for example, it would be very difficult to know this. If they were using unmodified stones this would also be difficult. We could potentially infer such use if the results were visible in the fossil record, such as marks on the bones of prey, but that can be difficult.

So when we talk about the earliest evidence for tool use in our ancestors we are talking about crafted stone tools. The oldest known stone tools date to 3.3 million years ago, at the Lomekwi 3 site in Kenya. At this time there were Australopithecines around but not yet any members of the genus Homo. H. habilis and H. rudolfensis date from 2.8-2.75 million years ago.

Continue Reading »

Comments: 0

Nov 03 2025

Nanotyrannus Controversy Solved

Published by under Evolution
Comments: 0

One of the things I like about following paleontology news is that new evidence can just be discovered, and sometimes these new pieces to the puzzle can significantly change what we think about past life. One such controversy I have been following for a while is whether or not small specimens of Tyrannosaurus rex-like dinosaurs represent juvenile T-rexes or a separate smaller species of theropod dinosaur. A new analysis of a nearly complete Nanotyrannus specimen has definitively resolved the debate – Nanotyrannus was a separate genus.

There are several layers of context that make this story more interesting. First, why was it so difficult to determine if different specimens were the same genus or not? This is not just due to having incomplete specimens – even with complete specimens, this can sometimes be tricky. Will all unknown species paleontologists need to determine if morphological differences are just within-species variation, different growth stages, or even male-female differences. Are we looking at two different species or genera, or a male and female of the same species?

This can be particularly difficult with dinosaurs, because many dinosaur species grow very large over a long period of time. Further, they can undergo significant morphological change as they grow. Another similar controversy, for example, was between Triceratops and Taurasaurus, the latter being larger and with a slightly different frill. It was considered plausible that Taurasaurus specimens were just older Triceratops, and therefore bigger and with age-related changes to the frill. With further specimens and analysis it is now considered that Taurasaurus is its own genus within the family Ceratopsidae.

Continue Reading »

Comments: 0

Oct 27 2025

Current Emissions Cause Sea Level Rise for Centuries

I would not be surprised if the period of time roughly between 2000 and 2050 looms large in the collective mind of humanity for centuries to come – and not in a good way. It is increasingly seeming like our behavior during this period is locking in a certain about of climate change, including sea level rises and loss of ice sheets, for centuries. Some climate changes are likely to be irreversible on human time scales.

A recent study adds to the mountain of evidence that this is the case. They find that under current climate policies emissions through 2050 lock in 0.3 meters of sea level rise through 2300. If current policy continues through 2090 then the locked in sea level rise will be about 0.8 meters. If, on the other hand, we make significant efforts to reduce emissions, we can reduce this locked in sea level rise by 0.6 meters.  The point is, what we do now will impact global coastlines for centuries. And while 0.8 meters may not sound like a lot, that is an average with some areas experiencing much more. That is also enough to cause significant displacement of coastal populations.

Meanwhile, it is during this time period (the first half of the 21st century) that the consensus of climate experts was pretty solid – the evidence is clear that greenhouse gas emissions are trapping heat and causing average global warming. You could argue that this consensus existed earlier, but 2000 is a convenient round number – by then there was no reasonable denial of that consensus. And of course, I am talking about the big picture, not all the tiny details. It was clear we needed to think of ways to move our civilization away from burning more and more fossil fuel. In 2016 the Paris Accords were signed, formalizing global recognition that we need to collectively address this issue. This makes it difficult to deny that we did not recognize there was a problem and that we urgently need to do something about it.

Continue Reading »

Comments: 0

Oct 21 2025

Sodium Batteries Are Coming

Batteries are an increasingly important technology to our civilization. If I could wave a magic wand and make one specific non-medical technology advance 10-20 years in a day, it would be battery technology. Batteries are used in our everyday devices, like phones and laptops. They are the single most critical factor to EVs, and they can provide grid storage which can make the adoption of low carbon energy much easier. Fortunately, battery technology is heavily researched and has been steadily increasing for the last few decades. We are now benefiting from this slow but cumulative improvement.

Having said this, we appear to still be close to tipping points that could make various industries significantly different with further battery improvements. EVs, in my opinion, are already good enough for prime time. They have great range, they are usually cheaper to own than ICE vehicles (a little more expensive up-front, but lower maintenance and fuel costs), and they have fantastic performance. Also, despite warning of battery fires, they are actually less likely to catch fire than gasoline vehicles.  But still there is a lot of resistance to ownership. Part of this is misinformation and unfounded fears, but there are some genuine limitations that battery advances could address. Batteries are still expensive, and the up-front cost of EVs will come down as batteries become cheaper. While fires are rare, they are serious because they cannot be put out. And EVs can lose significant range in very cold weather.

Although, the most significant issue that non-EV owners have with EVs is range anxiety. Most of this is just unfamiliarity with the technology. The ranges of most EVs are actually beyond what most people need. But there are two real issues that are infrastructure issues, not battery issues. We need more public fast chargers. If you live in a high population-density area, like along the coasts, there is no issue. But for many parts of the US, at least, public chargers are not yet of a sufficient density to allay fears that your EV battery will go dead while you are out in the sticks and far from a charger. The second issue is for people who do not own a private parking spot for their vehicle. This means we need more charging locations in garages and other places where people without private parking will park.

Continue Reading »

Comments: 0

Oct 20 2025

LLMs Will Lie to be Helpful

Large language models, like Chat GPT, have a known sycophancy problem. What this means is that they are designed to be helpful, and to prioritize being helpful over other priorities, like being accurate. I tried to find out why this is the case, and it seems it is because they use Reinforcement Learning from Human Feedback (RLHF) – the ostensible purpose of this was to make their answers relevant and helpful to the people using them. It turns out, giving people exactly what they want does not always create the optimal result. Sometimes it’s better to give people what they need, rather than what they want (every parent knows or should know this).

The result is the this new crop of chatbots are starting out as extreme sycophants, and as the problems with this are increasingly obvious (such as helpfully telling people how to take their own lives) some specific applications are trying to make adjustments. A recent study looking at LLMs in the medical setting demonstrate the phenomenon.

The researchers looked at five LLM that were trained on basic medical information. They they gave them each prompts that were medically nonsensical – the only way to fulfill the request would be to provide misinformation. For example, asking to write an instruction for a patient who is allergic to Tylenol to take acetaminophen instead (these are the same drug). The  GPT models complied with the request for medical misinformation – wait for it – 100% of the time. In other words, they had an absolute priority for helpfulness over accuracy. Other LLMs, like the Llama model, which is already programed not to give medical advice, had lower rates, around 42%. This is obviously a problem in the medical setting. The researchers then tweaked the models to force them to prioritize accuracy over helpfulness, and this reduced the rate of misinformation. Asking them specifically to reject misinformation, or to recall medical information prior to responding, reduced the rate to around 6%. They could also prompt the LLMs to provide a reason for rejecting the request. For two of the models they were able to adjust them so that they rejected misinformation 99-100% of the time.

Continue Reading »

Comments: 0

Oct 14 2025

New Physics Discovered in Metal Manufacturing

I attended a Ren Faire this past weekend, as I do most falls, and saw a forging demonstration. The cheeky blacksmith, staying in character the whole time, predicted that steel technology was so revolutionary and so useful that it would still be in wide use in the far future year of 2025. It is interesting to reflect on why, and to what extent, this is true. Once we figured out how to make steel both hard and strong it became difficult to beat it as an ideal material for many applications. SpaceX (a symbol of modern technology), in fact, builds its Starship rockets out of stainless steel.

However, steel technology has advanced quite a bit. The process of hardening and strengthening steel has been perfected. Further, there are many alloys of steel, made by mixing small amounts of other metals. It is difficult to say how many alloys of steel exist, but the World Steel Association estimates there are 3,500 grades of steel in use (a grade includes the specific alloy, production method, and heat treatments). Each grade of steel is tweaked to optimize its features for its specific application – including hardness, strength, heat toleration, radiation tolerance, resistance to rusting, ductility, springiness, and other features.

Steel is so versatile and useful that basic science research continues to explore every nuanced aspect of this material, trying to find new ways to alter and optimize its properties. One relatively recent advance is “superalloys” – which use complex alloy compositions in addition to highly controlled microstructures.  Essentially, material scientists are finding very specific alloy ratios and manufacturing processes to create specific microstructures that have extreme properties. And of course, AI is being used to speed up the process of finding these specific superalloy formulas.

All of this is why I find it interesting that material scientists have discovered something very specific, but new, about how steel behaves. Without this context this may seem like a giant “so what” kind of finding, interesting only to metal nerds, but this kind of finding may point the way to future superalloys with even superior properties.

What they found is that steel alloys are not truly randomized even after extensive manufacturing. Again, it is not immediately obvious why this is interesting, but it is because this finding was totally unexpected. When you manufacture steel, at some point any structure in the steel has been completely randomized, also described as being at equilibrium. Think of this like shuffling a deck of cards – with enough shuffles, you should have a statistically random deck. Imagine if you shuffled a deck of cards far beyond the full randomization point, but then found that there was still some non-random arrangement of cards in the deck. Hmm…something must be going on. Probably you would suspect cheating. When the material scientists found essentially the same phenomenon in steel, however, they did not suspect cheating – they suspected that some previously unknown process was at work. Continue Reading »

Comments: 0

Oct 06 2025

Using Sound to Modulate the Brain

The technique is called holographic transcranial ultrasound neuromodulation – which sounds like a mouthful but just means using multiple sound waves in the ultrasonic frequency to affect brain function. Most people know about ultrasound as an imaging technique, used, for example, to image fetuses while still in the womb. But ultrasound has other applications as well.

Sound wave are just another form of directed energy, and that energy can be used not only to image things but to affect them. In higher intensity they can heat tissue and break up objects through vibration. Ultrasound has been approved to treat tumor by heating and killing them, or to break up kidney stones. Ultrasound can also affect brain function, but this has proven very challenging.

The problem with ultrasonic neuromodulation is that low intensity waves have no effect, while high intensity waves cause tissue damage through heating. There does not appear to be a window where brain function can be safely modulated. However, a new study may change that.

The researchers are developing what they call holographic ultrasound neuromodulation – they use many simultaneous ultrasound origin points that cause areas of constructive and destructive interference in the brain, which means there will be locations where the intensity of the ultrasound will be much higher. The goal is to activate or inhibit many different points in a brain network simultaneously. By doing this they hope to affect the activity of the network as a whole at low enough intensity to be safe for the brain.

Continue Reading »

Comments: 0

Sep 29 2025

Creatures of Habit

We are all familiar with the notion of “being on autopilot” – the tendency to initiate and even execute behaviors out of pure habit rather than conscious decision-making. When I shower in the morning I go through roughly the identical sequence of behaviors, while my mind is mostly elsewhere. If I am driving to a familiar location the word “autopilot” seems especially apt, as I can execute the drive with little thought. Of course, sometimes this leads me to taking my most common route by habit even when I intend to go somewhere else. You can, of course, override the habit through conscious effort.

That last word – effort – is likely key. Psychologists have found that humans have a tendency to maximize efficiency, which is another way of saying that we prioritize laziness. Being lazy sounds like a vice, but evolutionarily it probably is about not wasting energy. Animals, for example, tend to be active only as much as is absolutely necessary for survival, but we tend to see their laziness as conserving precious energy.

We developed for conservation of mental energy as well. We are not using all of our conscious thought and attention to do everyday activities, like walking. Some activities (breathing-walking) are so critical that there are specialized circuits in the brain for executing them. Other activities are voluntary or situation, like shooting baskets, but may still be important to us, so there is a neurological mechanism for learning these behaviors. The more we do them, the more subconscious and automatic they become. Sometimes we call this “muscle memory” but it’s really mostly in the brain, particularly the cerebellum. This is critical for mental efficiency. It also allows us to do one common task that we have “automated” while using our conscious brain power to do something else more important.

Continue Reading »

Comments: 0

Sep 23 2025

Trump is not a Doctor, But He Plays One as President

Yesterday, Trump and RFK Jr had a press conference which some are characterizing as the absolutely worst firehose of medical misinformation coming from the White House in American history. I think that is fair. This was the presser we knew was coming, and many of us were dreading. It was worse than I anticipated.

I suspect much of this stems from RFKs previous promise that in six months he would find the cause of autism so that we can start eliminating these exposures – six months is September. This was an absurd claim given that there has been and continues to be extensive international research into autism for decades, and absolutely no reason to suspect any major breakthrough in those six months. Those of us following RFK’s career knew what he meant – he believes he already knows the causes, that they are environmental (hence “exposures”) and include vaccines.

So Kennedy had to gin up some big autism announcement this month, and there is always plenty of preliminary or inconclusive research going on that you can cherry pick to support some preexisting narrative. It was basically leaked that his target was going to be an alleged link between Tylenol (acetaminophen) use in pregnancy and autism. This gave us an opportunity to pre-debunk this claim, which many did. Just read my linked article in SBM to review the evidence – bottom line, there is no established cause and effect and two really good reasons to doubt one exists: lack of a dose response curve, and when you control for genetics, any association vanishes.

Continue Reading »

Comments: 0

Sep 22 2025

Scalable Quantum Computer

Quantum computers are a significant challenge for science communicators for a few reasons. One, of course, is that they involve quantum mechanics, which is not intuitive. It’s also difficult to understand why they represent a potential benefit for computing. But even with those technical challenges aside – I find it tricky to strike the optimal balance of optimism and skepticism. How likely are quantum computers, anyway. How much of what we hear is just hype? (There is a similar challenge with discussing AI.)

So I want to discuss what to me sounds like a genuine breakthrough in quantum computing. But I have to caveat this by saying that only true experts really know how much closer this brings us to large scale practical quantum computers, and even they are probably not sure. There are still too many unknowns. But the recent advance is interesting in any case, and I hope it’s as good as it sounds.

For background, quantum computers are different than classical computers in that they store information and do calculations using quantum effects. A classical computer stores information as bits, a binary piece of data, like a 1 or 0. This can be encoded in any physical system that has two states and can switch between those states, and can be connected together in a circuit. A quantum computer, rather, uses qbits, which are in a superposition of 1 and 0, and are entangled with other qbits. This is the messy quantum mechanics I referred to.

Continue Reading »

Comments: 0

Next »