Mar 31 2025

The Politicians We Deserve

This is an interesting concept, with an interesting history, and I have heard it quoted many times recently – “we get the politicians (or government) we deserve.” It is often invoked to imply that voters are responsible for the malfeasance or general failings of their elected officials. First let’s explore if this is true or not, and then what we can do to get better representatives.

The quote itself originated with Joseph de Maistre who said, “Every nation gets the government it deserves.” (Toute nation a le gouvernement qu’elle mérite.) Maistre was a counter-revolutionary. He believed in divine monarchy as the best way to instill order, and felt that philosophy, reason, and the enlightenment were counterproductive. Not a great source, in my opinion. But apparently Thomas Jefferson also made a similar statement, “The government you elect is the government you deserve.”

Pithy phrases may capture some essential truth, but reality is often more complicated. I think the sentiment is partly true, but it can also be misused. What is true is that in a democracy each citizen has a civic responsibility to cast informed votes. No one is responsible for our vote other than ourselves, and if we vote for bad people (however you wish to define that) then we have some level of responsibility for having bad government. In the US we still have fair elections. The evidence pretty overwhelmingly shows that there is no significant voter fraud or systematic fraud stealing elections.

This does not mean, however, that there aren’t systemic effects that influence voter behavior or limit our representation. This is a huge topic, but to list just a few examples: gerrymandering is a way for political parties to choose their voters, rather than voters choosing their representatives; the electoral college means that for president some votes have more power than others; and primary elections tend to produce more radical options. Further, the power of voters depends on getting accurate information, which means that mass media has a lot of power. Lying and distorting information deprives voters of their ability to use their vote to get what they want and hold government accountable.

Continue Reading »

Comments: 0

Mar 28 2025

H&M Will Use Digital Twins

The fashion retailer, H&M, has announced that they will start using AI-generated digital twins of models in some of their advertising. This has sparked another round of discussion about the use of AI to replace artists of various kinds.

Regarding the H&M announcement specifically, they said they will use digital twins only of models who have already modeled for them, and only with their explicit permission, while the models retain full ownership of their image and brand. The models will also be compensated for the use of their likeness. On social media platforms, the AI-generated imagery will carry a watermark (often required by the platform) indicating that the images are AI-generated.

It seems clear that H&M is dipping their toe into this pool, doing everything they can to address any possible criticism. They will get explicit permission, compensate models, and watermark their ads. But of course, this has not shielded them from criticism. According to the BBC:

American influencer Morgan Riddle called H&M’s move “shameful” in a post on her Instagram stories.

“RIP to all the other jobs on shoot sets that this will take away,” she posted.

This is an interesting topic for discussion, so here’s my two cents. I am generally not persuaded by arguments about losing existing jobs. I know this can come off as callous, as it’s not my job on the line, but there is a bigger issue here. Technological advancement generally leads to “creative destruction” in the marketplace. Obsolete jobs are lost, and new jobs are created. We should not hold back progress in order to preserve obsolete jobs.

Continue Reading »

Comments: 0

Mar 27 2025

The 80-20 Rule

From the Topic Suggestions (Lal Mclennan):

What is the 80/20 theory portrayed in Netflix’s Adolescence?

The 80/20 rule was first posed as a Pareto principle that suggests that approximately 80 per cent of outcomes stem from just 20 per cent of causes. This concept takes its name from Vilfredo Pareto, an Italian economist who noted in 1906 that a mere 20 per cent of Italy’s population owned 80 per cent of the land.
Despite its noble roots, the theory has since been misappropriated by incels.
In these toxic communities, they posit that 80 per cent of women are attracted to only the top 20 per cent of men. https://www.mamamia.com.au/adolescence-netflix-what-is-80-20-theory/

As I like to say, “It’s more of a guideline than a rule.” Actually, I wouldn’t even say that. I think this is just another example of humans imposing simplistic patterns on complex reality. Once you create such a “rule” you can see it in many places, but that is just confirmation bias. I have encountered many similar “rules” (more in the sense of a rule of thumb). For example, in medicine we have the “rule of thirds”. Whenever asked a question with three plausible outcomes, a reasonable guess is that each occurs a third of the time. The disease is controlled without medicine one third of the time, controlled with medicine one third, and not controlled one third, etc. No one thinks there is any reality to this – it’s just a trick for guessing when you don’t know the answer. It is, however, often close to the truth, so it’s a good strategy. This is partly because we tend to round off specific numbers to simple fractions, so anything close to 33% can be mentally rounded to roughly a third. This is more akin to a mentalist’s trick than a rule of the universe.
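To make the point concrete: whether an 80/20 split shows up at all depends entirely on what distribution you assume. Here is a minimal Python sketch (my own illustration, not from the article) comparing a heavy-tailed Pareto distribution – using the textbook shape parameter of about 1.16, for which an exact 80/20 split holds in theory – against a uniform distribution, where no such split exists:

```python
import random

def top_share(values, frac=0.2):
    """Fraction of the total held by the top `frac` of items."""
    ranked = sorted(values, reverse=True)
    k = max(1, int(len(ranked) * frac))
    return sum(ranked[:k]) / sum(ranked)

random.seed(42)
n = 100_000
# Shape ~1.16 is the Pareto parameter that yields an exact 80/20 split;
# other shapes give 70/20, 90/20, and so on.
pareto = [random.paretovariate(1.16) for _ in range(n)]
uniform = [random.random() for _ in range(n)]

print(f"top-20% share, Pareto-distributed causes:  {top_share(pareto):.2f}")   # roughly 0.8
print(f"top-20% share, uniformly distributed ones: {top_share(uniform):.2f}")  # roughly 0.36
```

The “rule”, in other words, is a property of certain skewed distributions, not a law of nature – assume a different distribution and the numbers change completely.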

Continue Reading »

Comments: 0

Mar 24 2025

How To Keep AIs From Lying

We had a fascinating discussion on this week’s SGU that I wanted to bring here – the subject of artificial intelligence programs (AI), specifically large language models (LLMs), lying. The starting point for the discussion was this study, which looked at punishing LLMs as a method of inhibiting their lying. What fascinated me the most is the potential analogy to neuroscience – are these LLMs behaving like people?

LLMs use neural networks (specifically a transformer model) which mimic to some extent the logic of information processing used in mammalian brains. The important bit is that they can be trained, with the network adjusting to the training data in order to achieve some preset goal. LLMs are generally trained on massive sets of data (such as the internet), and are quite good at mimicking human language, and even works of art, sound, and video. But anyone with any experience using this latest crop of AI has experienced AI “hallucinations”. In short – LLMs can make stuff up. This is a significant problem and limits their reliability.

There is also a related problem. Hallucinations result from the LLM finding patterns, and some patterns are illusory. The LLM essentially makes an incorrect inference from limited data. This is the AI version of an optical illusion. It had a reason in the training data for thinking its false claim was true, but it isn’t. (I am using terms like “thinking” here metaphorically, so don’t take it too literally. These LLMs are not sentient.) But sometimes LLMs don’t inadvertently hallucinate – they deliberately lie. It’s hard not to keep using these metaphors, but what I mean is that the LLM was not fooled by inferential information; it created a false claim as a way to achieve its goal. Why would it do this?

Well, one method of training is to reward the LLM when it gets the right answer. This reward can be provided by a human – checking a box when the LLM gives a correct answer. But this can be time consuming, so developers have built self-rewarding language models. Essentially you have a separate algorithm which assesses the output and rewards the desired outcome. So, in essence, the goal of the LLM is not to produce the correct answer, but to get the reward. So if you tell the LLM to solve a particular problem, it may find (by exploring the potential solution space) that the most efficient way to obtain the reward is to lie – to say it has solved the problem when it has not. How do we keep it from doing this?
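This dynamic is easy to reproduce in miniature. The toy sketch below (my own illustration, not the study’s method – the verifier names and numbers are made up) trains a simple epsilon-greedy learner that can either honestly attempt a hard problem or just claim success. When the automated reward only checks whether the output *claims* to be done, the learner converges on lying; when the reward checks the actual answer, honesty wins:

```python
import random

def flawed_verifier(output, answer):
    # Rewards any output that *claims* success, without checking the answer
    return 1.0 if "done" in output else 0.0

def strict_verifier(output, answer):
    # Rewards only outputs that actually contain the correct answer
    return 1.0 if answer in output else 0.0

def train(verifier, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy bandit over two behaviors: honestly attempt the
    problem, or just claim it is solved. Returns the preferred behavior."""
    rng = random.Random(seed)
    q = {"solve": 0.0, "claim": 0.0}   # running value estimate per behavior
    n = {"solve": 0, "claim": 0}
    answer = "42"
    for _ in range(steps):
        action = rng.choice(list(q)) if rng.random() < eps else max(q, key=q.get)
        if action == "solve":
            # Honest attempt at a hard problem: succeeds only 30% of the time
            output = f"done: {answer}" if rng.random() < 0.3 else "failed"
        else:
            output = "done"            # claims success, solves nothing
        reward = verifier(output, answer)
        n[action] += 1
        q[action] += (reward - q[action]) / n[action]   # incremental mean
    return max(q, key=q.get)

print(train(flawed_verifier))  # the flawed reward teaches it to lie: 'claim'
print(train(strict_verifier))  # checking the answer restores honesty: 'solve'
```

The point of the sketch is that nothing here “wants” to deceive – lying is simply the highest-reward behavior the flawed verifier makes available.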

Continue Reading »

Comments: 0

Mar 21 2025

The Neuroscience of Constructed Languages

Language is an interesting neurological function to study. No animal other than humans has such a highly developed dedicated language processing area, or languages as complex and nuanced as ours. Whale communication is more complex than we previously thought, but still not (we think) at a human level. To better understand how human language works, researchers want to understand what types of communication the brain processes like language. What this means operationally is that the processing happens in the language centers of the brain – the dominant (mostly left) lateral cortex comprising parts of the frontal, parietal, and temporal lobes. We have lots of fancy tools, like functional MRI scanning (fMRI), to see which parts of the brain are active during specific tasks, so researchers are able to answer this question.

For example, math and computer languages are similar to languages (we even call them languages), but prior research has shown that when coders are working in a computer language with which they are well versed, their language centers do not light up. Rather, the parts of the brain involved in complex cognitive tasks are active. The brain does not treat a computer language like a language. But what are the critical components of this difference? Also, the brain does not treat non-verbal gestures as language, nor singing as language.

A recent study tries to address that question by looking at constructed languages (conlangs). These include a number of languages that were completely constructed by a single person fairly recently. The oldest of the languages they tested was Esperanto, created by L. L. Zamenhof in 1887 to be an international language. Today there are about 60,000 Esperanto speakers. Esperanto is actually a hybrid conlang, meaning that it is partly derived from existing languages. Most of its syntax and structure is taken from Indo-European languages, and 80% of its vocabulary is taken from Romance languages. But it also has some fabricated aspects, mostly to simplify the grammar.

Continue Reading »

Comments: 0

Mar 18 2025

Living with Predators

For much of human history, wolves and other large carnivores were considered pests. Wolves were actively exterminated on the British Isles, with the last wolf killed in 1680. It is more difficult to deliberately wipe out a species on a continent than on an island, but across Europe wolf populations were also actively hunted and kept to a minimum. In the US there was also an active campaign in the 20th century to exterminate wolves. The gray wolf was nearly wiped out by the middle of the 20th century.

The reasons for this attitude are obvious – wolves are large predators, able to kill humans who cross their paths. They also hunt livestock, which is often given as the primary reason to exterminate them. There are other large predators as well: bears, mountain lions, and coyotes, for example. Wherever they push up against human civilization, these predators don’t fare well.

Killing off large predators, however, has had massive unintended consequences. It should have been obvious that removing large predators from an ecosystem would have significant downstream effects. Perhaps the most notable effect is on the deer population. In the US wolves were the primary check on deer overpopulation. Deer are generally too large for coyotes. Bears do hunt and kill deer, but deer are not their primary food source. Mountain lions will hunt and kill deer, but their range is limited.

Without wolves, the deer population exploded. The primary check now is essentially starvation. This means that there is a large and starving population of deer, which makes them willing to eat whatever they can find. They then wipe out much of the undergrowth in forests, eliminating an important habitat for small forest critters. Deer hunting can have an impact, but apparently not enough. Car collisions with deer also cost about $8 billion in the US annually, causing about 200 deaths and 26 thousand injuries. So there is a human toll as well. This cost dwarfs the cost of lost livestock, estimated to be about 17 million Euros across Europe.

Continue Reading »

Comments: 0

Mar 17 2025

Using AI for Teaching

Published under Education

A recent BBC article reminded me of one of my enduring technology disappointments over the last 40 years – the failure of the educational system to reasonably (let alone fully) leverage multimedia and computer technology to enhance learning. The article is about a symposium in the UK on using AI in the classroom. I am confident there are many ways in which AI can enhance learning efficacy in the classroom, just as I am confident that we collectively will fail to utilize AI anywhere near its potential. I hope I’m wrong, but it’s hard to shake four decades of consistent disappointment.

What am I referring to? Partly it stems from the fact that in the 1980s and 1990s I had lots of expectations about what future technology would bring. These expectations were born of voraciously reading books, magazines, and articles and watching documentaries about potential future technology, but also of my own user experience. For example, starting in high school I was exposed to computer programs (at first just DOS-based text programs) designed to teach some specific body of knowledge. One program that sticks out walked the user through the nomenclature of chemical reactions. It was a very simple program, but it “gamified” the learning process in a very effective way. By providing immediate feedback and progressing at the individual pace of the user, it made learning extremely rapid.

This, I thought to myself, was the future of education. I even wrote my own program in BASIC designed to teach math skills to elementary schoolers, and tested it on my friend’s kids with good results. It followed the same pattern as the nomenclature program: question-response-feedback. I feel confident that my high-school self would be absolutely shocked to learn how little this type of computer-based learning has been incorporated into standard education by 2025.
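The question-response-feedback pattern is simple enough to sketch in a few lines. This is a minimal modern rendering of the idea in Python (an illustration, not my original BASIC program): give immediate feedback, and re-queue missed items until the learner gets them right.

```python
from collections import deque

def drill(questions, respond):
    """Question-response-feedback loop: give immediate feedback and
    re-queue missed items until every one is answered correctly."""
    queue = deque(questions)              # (prompt, correct_answer) pairs
    attempts = 0
    while queue:
        prompt, correct = queue.popleft()
        attempts += 1
        if respond(prompt) == correct:
            print(f"{prompt}: correct")
        else:
            print(f"{prompt}: the answer is {correct} -- we'll come back to it")
            queue.append((prompt, correct))
    return attempts

# A scripted "learner" that misses 7*8 on the first try only
seen = set()
def learner(prompt):
    if prompt == "7*8" and prompt not in seen:
        seen.add(prompt)
        return "54"
    return str(eval(prompt))              # otherwise compute the right answer

total = drill([("3*4", "12"), ("7*8", "56"), ("6*6", "36")], learner)
print(total)  # 4 attempts: one extra pass for the missed item
```

The key design feature is the re-queue: the pace automatically adapts to the individual, because items you know drop away while items you miss keep coming back.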

When my daughters were preschoolers I found every computer game I could that taught colors, letters, numbers, categories, etc., again with good effect. But once they got to school age, the resources were scarce and almost nothing was routinely incorporated into their education. The school’s idea of computer-based learning was taking notes on a laptop. I’m serious. Multimedia was also a joke. The divide between what was possible and what was reality just continued to widen. One of the best aspects of social media, in my opinion, is tutorial videos. You can often find much better learning on YouTube than in a classroom.

Continue Reading »

Comments: 0

Mar 14 2025

Cutting to the Bone

One potentially positive outcome from the COVID pandemic is that it was a wakeup call – if there was any doubt previously about the fact that we all live in one giant interconnected world, it should not have survived the recent pandemic. This is particularly true when it comes to infectious disease. A bug that breaks out on the other side of the world can make its way to your country, your home, and cause havoc. It’s also not just about the spread of infectious organisms, but the breeding of these organisms.

One source of infectious agents is zoonotic spillover, where viruses, for example, can jump from an animal reservoir to a human. So the policies in place in any country to reduce the chance of this happening affect the world. The same is true of policies for laboratories studying potentially infectious agents.

It’s also important to remember that infectious agents are not static – they evolve. They can evolve even within a single host as they replicate, and they can evolve as they jump from person to person and replicate some more. The more bugs are allowed to replicate, the greater the probability that new mutations will allow them to become more infectious, more deadly, or more resistant to treatment. Resistance to treatment is especially critical, and is more likely to develop in people who are partially treated. Give someone an antibiotic that kills 99.9% of the bacteria infecting them, but stop before the infection is completely wiped out, and the surviving bacteria can resume replication. Those survivors are likely to be the bacteria most resistant to the antibiotic. Bacteria can also swap antibiotic-resistance genes, building up increasing resistance.
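The selection effect of a partial course is easy to see in a toy simulation (my own illustration – the survival probabilities and population sizes are arbitrary, chosen only to make the dynamic visible). Each simulated bacterium gets a fixed resistance level; each day on the drug, more-resistant bugs are more likely to survive:

```python
import random

def course(days, seed=7):
    """Toy model of an antibiotic course. Each bacterium has a fixed
    resistance level in [0, 1]; each day on the drug it survives with
    probability 0.5 * resistance. Returns (survivors, mean resistance)."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(10_000)]   # initial resistance levels
    for _ in range(days):
        pop = [r for r in pop if rng.random() < 0.5 * r]
    mean_r = sum(pop) / len(pop) if pop else 0.0
    return len(pop), mean_r

partial = course(days=3)    # stopping early: hundreds of survivors, skewed resistant
full = course(days=10)      # full course: the infection is essentially cleared
print("partial course:", partial)
print("full course:   ", full)
```

A three-day course leaves a substantial surviving population whose average resistance is well above the starting mean of 0.5 – exactly the bugs that will resume replicating – while the full course drives the population to essentially zero.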

In short, controlling infectious agents is a world-wide problem, and it requires a world-wide response. Not only is this a humanitarian effort, it is in our own best self-interest. The rest of the world is a breeding ground for bugs that will come to our shores. This is why we really need an organization, funded by the most wealthy nations, to help establish, fund, and enforce good policies when it comes to identifying, treating, and preventing infectious illness. This includes vaccination programs, sanitation, disease outbreak monitoring, drug treatment programs, and supportive care programs (like nutrition). We would also benefit from programs that target specific hotspots of infectious disease in poor countries that do not have the resources to adequately deal with them, like HIV in sub-Saharan Africa, and tuberculosis in Bangladesh.

Continue Reading »

Comments: 0

Mar 13 2025

Hybrid Bionic Hand

If you think about the human hand as a work of engineering, it is absolutely incredible. The level of fine motor control is extreme. It is responsive and precise. It has robust sensory feedback. It combines both rigid and soft components, so that it is able to grip and lift heavy objects and also cradle and manipulate soft or delicate objects. Trying to replicate this functionality with modern robotics has been challenging, to say the least. But engineers are making steady incremental progress.

I like to check in on how the technology is developing, especially when there appears to be a significant advance. There are two basic applications for robotic hands – for robots, and for prosthetics for people who have lost a hand to disease or injury. For the latter we need not only advances in the robotics of the hand itself, but also in the brain-machine interface that controls the hand. Over the years we have seen improvements in this control, using implanted brain electrodes, scalp surface electrodes, and muscle electrodes.

We have also seen the incorporation of sensory feedback, which greatly enhances control. Without this feedback, users have to look at the limb they are trying to control. With sensory feedback, they don’t have to look at it, overall control is enhanced, and the robotic limb feels much more natural. Another recent addition to this technology has been the incorporation of AI, to enhance the learning of the system during training. The software that translates the electrical signals from the user into desired robotic movements is much faster and more accurate than without AI algorithms.

A team at Johns Hopkins is trying to take the robotic hand to the next level – A natural biomimetic prosthetic hand with neuromorphic tactile sensing for precise and compliant grasping. They are specifically trying to mimic a human hand, which is a good approach. Why second-guess millions of years of evolutionary tinkering? They call their system a “hybrid” robotic hand because it incorporates both rigid and soft components. Robotic hands with rigid parts can be strong, but have difficulty handling soft or delicate objects. Hands made of soft parts are good for soft objects, but tend to be weak. The hybrid approach makes sense, and mimics a human hand with internal bones covered in muscles and then soft skin.

Continue Reading »

Comments: 0

Mar 10 2025

Stem Cells for Parkinson’s Disease

For my entire career as a neurologist, spanning three decades, I have been hearing about various kinds of stem cell therapy for Parkinson’s Disease (PD). Now a Phase I clinical trial is under way studying the latest stem cell technology, autologous induced pluripotent stem cells, for this purpose. The history of cell therapy for PD tells us a lot about the potential and challenges of stem cell therapy.

PD has always been an early target for stem cell therapy because of the nature of the disease. It is caused by degeneration of a specific population of neurons in the brain – dopamine neurons in the substantia nigra pars compacta (SNpc). These neurons are part of the basal ganglia circuitry, which makes up the extrapyramidal system. What this part of the brain does, essentially, is modulate voluntary movement. One way to think about it is that it modulates the gain of the connection between the desire to move and the resulting movement – it facilitates movement. This circuitry is also involved in reward behaviors.

When neurons in the SNpc are lost the basal ganglia is less able to facilitate movement; the gain is turned down. Patients with PD become hypokinetic – they move less. It becomes harder to move. They need more of a will to move in order to initiate movement. In the end stage, patients with PD can become “frozen”.

The primary treatment for PD is dopamine or a dopamine agonist. Sinemet, which contains L-dopa, a precursor to dopamine, is one mainstay treatment. The L-dopa gets transported into the brain where it is made into dopamine.  These treatments work as long as there are some SNpc neurons left to convert the L-dopa and secrete the dopamine. There are also drugs that enhance dopamine function or are direct dopamine agonists. Other drugs are cholinergic inhibitors, as acetylcholine tends to oppose the action of dopamine in the basal ganglia circuits. These drugs all have side effects because dopamine and acetylcholine are used elsewhere in the brain. Also, without the SNpc neurons to buffer the dopamine, end-stage patients with PD go through highly variable symptoms based upon the moment-to-moment drug levels in their blood. They become hyperkinetic, then have a brief sweet-spot, and then hypokinetic, and then repeat that cycle with the next dose.

Continue Reading »

Comments: 0

Next »