Mar 24 2025

How To Keep AIs From Lying

We had a fascinating discussion on this week’s SGU that I wanted to bring here – the subject of artificial intelligence programs (AI), specifically large language models (LLMs), lying. The starting point for the discussion was this study, which looked at punishing LLMs as a method of inhibiting their lying. What fascinated me the most is the potential analogy to neuroscience – are these LLMs behaving like people?

LLMs use neural networks (specifically a transformer model) which mimic to some extent the logic of information processing used in mammalian brains. The important bit is that they can be trained, with the network adjusting to the training data in order to achieve some preset goal. LLMs are generally trained on massive sets of data (such as the internet), and are quite good at mimicking human language, and even works of art, sound, and video. But anyone with any experience using this latest crop of AI has experienced AI “hallucinations”. In short – LLMs can make stuff up. This is a significant problem and limits their reliability.

There is also a related problem. Hallucinations result from the LLM finding patterns, and some patterns are illusory. The LLM essentially makes the incorrect inference from limited data. This is the AI version of an optical illusion. They had a reason in the training data for thinking their false claim was true, but it isn’t. (I am using terms like “thinking” here metaphorically, so don’t take it too literally. These LLMs are not sentient.) But sometimes LLMs don’t inadvertently hallucinate, they deliberately lie. It’s hard not to keep using these metaphors, but what I mean is that the LLM was not fooled by inferential information, it created a false claim as a way to achieve its goal. Why would it do this?

Well, one method of training is to reward the LLM when it gets the right answer. This reward can be provided by a human – checking a box when the LLM gives a correct answer. But this can be time consuming, so they have build self-rewarding language models. Essentially you have a separate algorithm which assessed the output and reward the desired outcome. So, in essence, the goal of the LLM is not to produce the correct answer, but to get the reward. So if you tell the LLM to solve a particular problem, it may find (by exploring the potential solution space) that the most efficient way to obtain the reward is to lie – to say it has solved the problem when it has not. How do we keep it from doing this.

Continue Reading »

Comments: 0

Mar 21 2025

The Neuroscience of Constructed Languages

Language is an interesting neurological function to study. No animal other than humans has such a highly developed dedicated language processing area, or languages as complex and nuanced as humans. Although, whale language is more complex than we previously thought, but still not (we don’t think) at human level. To better understand how human language works, researchers want to understand what types of communication the brain processes like language. What this means operationally, is that the processing happens in the language centers of the brain – the dominant (mostly left) lateral cortex comprising parts of the frontal, parietal, and temporal lobes. We have lots of fancy tools, like functional MRI scanning (fMRI) to see which parts of the brain are active during specific tasks, so researchers are able to answer this question.

For example, math and computer languages are similar to languages (we even call them languages), but prior research has shown that when coders are working in a computer language with which they are well versed, their language centers do not light up. Rather, the parts of the brain involved in complex cognitive tasks is involved. The brain does not treat a computer language like a language. But what are the critical components of this difference? Also, the brain does not treat non-verbal gestures as language, nor singing as language.

A recent study tries to address that question, looking at constructed languages (conlangs). These include a number of languages that were completely constructed by a single person fairly recently. The oldest of the languages they tested was Esperanto, created by L. L. Zamenhof in 1887 to be an international language. Today there are about 60,000 Esperanto speakers. Esperanto is actually a hybrid conlang, meaning that it is partly derived from existing languages. Most of its syntax and structure is taken from Indo-European languages, and 80% of its vocabulary is taken from Romance languages. But is also has some fabricated aspects, mostly to simplify the grammar.

Continue Reading »

Comments: 0

Mar 18 2025

Living with Predators

For much of human history, wolves and other large carnivores were considered pests. Wolves were actively exterminated on the British Isles, with the last wolf killed in 1680. It is more difficulty to deliberately wipe out a species on a continent than an island, but across Europe wolf populations were also actively hunted and kept to a minimum. In the US there was also an active campaign in the 20th century to exterminate wolves. The gray wolf was nearly wiped out by the middle of the 20th century.

The reasons for this attitude are obvious – wolves are large predators, able to kill humans who cross their paths. They also hunt livestock, which is often given as the primary reason to exterminate them. There are other large predators as well: bears, mountain lions, and coyotes, for example. Wherever they push up against human civilization, these predators don’t fare well.

Killing off large predators, however, has had massive unintended consequences. It should have been obvious that removing large predators from an ecosystem would have significant downstream effects. Perhaps the most notable effects is on the deer population. In the US wolves were the primary check on deer overpopulation. They are too large generally for coyotes. Bears do hunt and kill deer, but it is not their primary food source. Mountain lions will hunt and kill deer, but their range is limited.

Without wolves, the deer population exploded. The primary check now is essentially starvation. This means that there is a large and starving population of deer, which makes them willing to eat whatever they can find. They then wipe out much of the undergrowth in forests, eliminating an important habitat for small forest critters. Deer hunting can have an impact, but apparently not enough. Car collisions with deer also cost about $8 billion in the US annually, causing about 200 deaths and 26 thousand injuries. So there is a human toll as well. This cost dwarfs the cost of lost livestock, estimated to be about 17 million Euros across Europe.

Continue Reading »

Comments: 0

Mar 17 2025

Using AI for Teaching

Published by under Education
Comments: 0

A recent BBC article reminded me of one of my enduring technology disappointments over the last 40 years – the failure of the educational system to reasonably (let alone fully) leverage multimedia and computer technology to enhance learning. The article is about a symposium in the UK about using AI in the classroom. I am confident there are many ways in which AI can enhance learning efficacy in the classroom, just as I am confident that we collectively will fail to utilize AI anywhere nears its potential. I hope I’m wrong, but it’s hard to shake four decades of consistent disappointment.

What am I referring to? Partly it stems from the fact that in the 1980s and 1990s I had lots of expectations about what future technology would bring. These expectations were born of voraciously reading books, magazines, and articles and watching documentaries about potential future technology, but also from my own user experience. For example, starting in high school I became exposed to computer programs (at first just DOS-based text programs) designed to teach some specific body of knowledge. One program that sticks out walked the user through the nomenclature of chemical reactions. It was a very simple program, but it “gamified” the learning process in a very effective way. By providing immediate feedback, and progressing at the individual pace of the user, the learning curve was extremely steep.

This, I thought to myself, was the future of education. I even wrote my own program in basic designed to teach math skills to elementary schoolers, and tested it on my friend’s kids with good results. It followed the same pattern as the nomenclature program: question-response-feedback. I feel confident that my high school self would be absolutely shocked to learn how little this type of computer-based learning has been incorporated into standard education by 2025.

When my daughters were preschoolers I found every computer game I could that taught colors, letters, numbers, categories, etc., again with good effect. But once they got to school age, the resources were scarce and almost nothing was routinely incorporated into their education. The school’s idea of computer-based learning was taking notes on a laptop. I’m serious. Multimedia was also a joke. The divide between what was possible and what was reality just continued to widen. One of the best aspects of social media, in my opinion, is tutorial videos. You can often find much better learning on YouTube than in a classroom.

Continue Reading »

Comments: 0

Mar 14 2025

Cutting to the Bone

One potentially positive outcome from the COVID pandemic is that it was a wakeup call – if there was any doubt previously about the fact that we all live in one giant interconnected world, it should not have survived the recent pandemic. This is particularly true when it comes to infectious disease. A bug that breaks out on the other side of the world can make its way to your country, your home, and cause havoc. It’s also not just about the spread of infectious organisms, but the breeding of these organisms.

One source of infectious agents is zoonotic spillover, where viruses, for example, can jump from an animal reservoir to a human. So the policies in place in any country to reduce the chance of this happening affect the world. The same is true of policies for laboratories studying potentially infectious agents.

It’s also important to remember that infectious agents are not static – they evolve. They can evolve even within a single host as they replicate, and they can evolve as they jump from person to person and replicate some more. The more bugs are allows to replicate, the greater the probability that new mutations will allow them to become more infectious, or more deadly, or more resistant to treatment. Resistance to treatment is especially critical, and is more likely to happen in people who are partially treated. Give someone an antibiotic to kill 99.9% of the bacteria that’s infecting them, but stop before the infection is completely wiped out, and then the surviving bacteria can resume replication. Those surviving bacteria are likely to be the most resistant bugs to the antibiotic. Bacteria can also swap antibiotic resistant genes, and build up increasing resistance.

In short, controlling infectious agents is a world-wide problem, and it requires a world-wide response. Not only is this a humanitarian effort, it is in our own best self-interest. The rest of the world is a breeding ground for bugs that will come to our shores. This is why we really need an organization, funded by the most wealthy nations, to help establish, fund, and enforce good policies when it comes to identifying, treating, and preventing infectious illness. This includes vaccination programs, sanitation, disease outbreak monitoring, drug treatment programs, and supportive care programs (like nutrition). We would also benefit from programs that target specific hotspots of infectious disease in poor countries that do not have the resources to adequately deal with them, like HIV in sub-Saharan Africa, and tuberculosis in Bangladesh.

Continue Reading »

Comments: 0

Mar 13 2025

Hybrid Bionic Hand

If you think about the human hand as a work of engineering, it is absolutely incredible. The level of fine motor control is extreme. It is responsive and precise. It has robust sensory feedback. It combines both rigid and soft components, so that it is able to grip and lift heavy objects and also cradle and manipulate soft or delicate objects. Trying to replicate this functionality with modern robotics have been challenging, to say the least. But engineers are making steady incremental progress.

I like to check it on how the technology is developing, especially when there appears to be a significant advance. There are two basic applications for robotic hands – for robots and for prosthetics for people who have lost their hand to disease or injury. For the latter we need not only advances in the robotics of the hand itself, but also in the brain-machine interface that controls the hand. Over the years we have seen improvements in this control, using implanted brain electrodes, scalp surface electrodes, and muscle electrodes.

We have also seen the incorporation of sensory feedback, which greatly enhances control. Without this feedback, users have to look at the limb they are trying to control. With sensory feedback, they don’t have to look at it, overall control is enhanced, and the robotic limb feels much more natural. Another recent addition to this technology has been the incorporation of AI, to enhance the learning of the system during training. The software that translates the electrical signals from the user into desired robotic movements is much faster and more accurate than without AI algorithms.

A team at Johns Hopkins is trying to take the robotic hand to the next level – A natural biomimetic prosthetic hand with neuromorphic tactile sensing for precise and compliant grasping. They are specifically trying to mimic a human hand, which is a good approach. Why second-guess millions of years of evolutionary tinkering? They call their system a “hybrid” robotic hand because it incorporates both rigid and soft components. Robotic hands with rigid parts can be strong, but have difficulty handling soft or delicate objects. Hands made of soft parts are good for soft objects, but tend to be weak. The hybrid approach makes sense, and mimics a human hand with internal bones covered in muscles and then soft skin.  Continue Reading »

Comments: 0

Mar 10 2025

Stem Cells for Parkinson’s Disease

For my entire career as a neurologist, spanning three decades, I have been hearing about various kinds of stem cell therapy for Parkinson’s Disease (PD). Now a Phase I clinical trial is under way studying the latest stem cell technology, autologous induced pluripotent stem cells, for this purpose. This history of cell therapy for PD tells us a lot about the potential and challenges of stem cell therapy.

PD has always been an early target for stem cell therapy because of the nature of the disease. It is caused by degeneration in a specific population of neurons in the brain – dopamine neurons in the substantial nigra pars compacta (SNpc). These neurons are part of the basal ganglia circuitry, which make up the extrapyramidal system. What this part of the brain does, essentially, is to modulate voluntary movement. One way to think about it is that is modulates the gain of the connection between the desire the move and the resulting movement – it facilitates movement. This circuitry is also involved in reward behaviors.

When neurons in the SNpc are lost the basal ganglia is less able to facilitate movement; the gain is turned down. Patients with PD become hypokinetic – they move less. It becomes harder to move. They need more of a will to move in order to initiate movement. In the end stage, patients with PD can become “frozen”.

The primary treatment for PD is dopamine or a dopamine agonist. Sinemet, which contains L-dopa, a precursor to dopamine, is one mainstay treatment. The L-dopa gets transported into the brain where it is made into dopamine.  These treatments work as long as there are some SNpc neurons left to convert the L-dopa and secrete the dopamine. There are also drugs that enhance dopamine function or are direct dopamine agonists. Other drugs are cholinergic inhibitors, as acetylcholine tends to oppose the action of dopamine in the basal ganglia circuits. These drugs all have side effects because dopamine and acetylcholine are used elsewhere in the brain. Also, without the SNpc neurons to buffer the dopamine, end-stage patients with PD go through highly variable symptoms based upon the moment-to-moment drug levels in their blood. They become hyperkinetic, then have a brief sweet-spot, and then hypokinetic, and then repeat that cycle with the next dose.

Continue Reading »

Comments: 0

Mar 06 2025

Where Are All the Dwarf Planets?

Published by under Astronomy
Comments: 0

In 2006 (yes, it was that long ago – yikes) the International Astronomical Union (IAU) officially adopted the definition of dwarf planet – they are large enough for their gravity to pull themselves into a sphere, they orbit the sun and not another larger body, but they don’t gravitationally dominate their orbit. That last criterion is what separates planets (which do dominate their orbit) from dwarf planets. Famously, this causes Pluto to be “downgraded” from a planet to a dwarf planet. Four other objects also met criteria for dwarf planet – Ceres in the asteroid belt, and three Kuiper belt objects, Makemake, Haumea, and Eris.

The new designation of dwarf planet came soon after the discovery of Sedna, a trans-Neptunian object that could meet the old definition of planet. It was, in fact, often reported at the time as the discovery of a 10th planet. But astronomers feared that there were dozens or even hundreds of similar trans-Neptunian objects, and they thought it was messy to have so many planets in our solar system. That is why they came up with the whole idea of dwarf planets. Pluto was just caught in the crossfire – in order to keep Sedna and its ilk from being planets, Pluto had to be demoted as well. As a sort-of consolation, dwarf planets that were also trans-Neptunian objects were named “plutoids”. All dwarf planets are plutoids, except Ceres, which is in the asteroid belt between Mars and Jupiter.

So here we are, two decades later, and I can’t help wondering – where are all the dwarf planets? Where are all the trans-Neptunian objects that astronomers feared would have to be classified as planets that the dwarf planet category was specifically created for? I really thought that by now we would have a dozen or more official dwarf planets. What’s happening? As far as I can tell there are two reasons we are still stuck with only the original five dwarf planets.

Continue Reading »

Comments: 0

Mar 03 2025

The New TIGR-Tas Gene Editing System

Remember CRISPR (clustered regularly interspaced short palindromic repeats) – that new gene-editing system which is faster and cheaper than anything that came before it? CRISPR is derived from bacterial systems which uses guide RNA to target a specific sequence on a DNA strand. It is coupled with a Cas (CRISPR Associated) protein which can do things like cleave the DNA at the targeted location. We are really just at the beginning of exploring the potential of this new system, in both research and therapeutics.

Well – we may already have something better than CSRISP: TIGR-Tas. This is also an RNA-based system for targeting specific sequences of DNA and delivering a TIGR associated protein to perform a specific function. TIGR (Tandem Interspaced Guide RNA) may have some useful advantages of CRISPR.

As presented in a new paper, TIGR is actually a family of gene editing systems. It was discovered not by happy accident, but by specifically looking for it. As the paper details: “through iterative structural and sequence homology-based mining starting with a guide RNA-interaction domain of Cas9”. This means they started with Cas9 and then trolled through the vast database of phage and parasitic bacteria for similar sequences. They found what they were looking for – another family of RNA-guided gene editing systems.

Like CRISPR, TIGR is an RNA guided system, and has a modular structure. Different Tas proteins can be coupled with the TIGR to perform different actions at the targeted site. But there are several potential advantages for TIGR over CRISPR. Like CRISPR it is RNA guided, but TIGR uses both strands of the DNA to find its target sequence. This “dual guided” approach may lead to fewer off-target errors. While CRISPR works very well, there is a trade-off in CRISPR systems between speed and precision. The faster it works, the greater the number of off-target actions – like cleaving the DNA in the wrong place. The hope is that TIGR will make fewer off-target mistakes because of better targeting.

Continue Reading »

Comments: 0

Feb 27 2025

Are Small Modular Reactors Finally Coming?

Small nuclear reactors have been around since the 1950s. They mostly have been used in military ships, like aircraft carriers and submarines. They have the specific advantage that such ships could remain at sea for long periods of time without needing to refuel. But small modular reactors have never taken off as a source of grid energy. The prevailing opinion for why this is seems to be that they are simply not cost effective. Larger reactors,  which are already expensive endeavors, produce more megawatts per dollar. SMRs are simply too cost inefficient.

This is unfortunate because they have a lot of advantages. Their initial investment is smaller, even though the cost per unit energy is more. They are safe and reliable. They have a small footprint. And they are scalable. The military uses them because the strategic advantages are worth the higher cost. Some argue that the zero carbon on demand energy they provide is worth the higher cost, and I think this is a solid argument. Also there are continued attempts to develop the technology to bring down the cost. Arguably it may be worth subsidizing the SMR industry so that the technology can be developed to greater cost effectiveness. Decarbonizing the energy sector is worth the investment.

But there is another question – are there civilian applications that would also justify the higher cost per unit energy? I have recently encountered two that are interesting. The first is a direct extension of the military use – using an SMR to power a cargo ship. South Korean company, HD Korea Shipbuilding & Offshore Engineering, has revealed their designs for an SMR powered cargo ship, and has received “approval in principle”. Obviously this is just the beginning phase – they need to actually develop the design and get full approval. But the concept is compelling.

Continue Reading »

Comments: 0

Next »