Jun 12, 2025
The human brain is extremely good at problem-solving, at least relatively speaking. Cognitive scientists have been exploring how, exactly, people approach and solve problems – what cognitive strategies we use, and how optimal they are. A recent study extends this research and includes a comparison of human problem-solving to machine learning. Would an AI, which can find an optimal strategy, follow the same path as human solvers?
The study was designed to look at two specific cognitive strategies: hierarchical thinking and counterfactual thinking. In order to do this, the researchers needed a problem that was complex enough to force people to use these strategies, but not so complex that it could not be quantified. They developed a system in which a ball takes one of four paths, at random, through a maze. The ball is hidden from the subject's view, but there are auditory clues as to the path it is taking. The clues are not definitive, so the subject has to gather information to build a prediction of the ball's path.
What the researchers found is that subjects generally started with a hierarchical approach – meaning they broke the problem down into simpler parts, such as which way the ball went at each decision point. Hierarchical reasoning is a general cognitive strategy we employ in many contexts; we use it whenever we break a problem down into smaller, manageable components. The term more specifically refers to reasoning that starts with the general and then progressively homes in on the details. So far, no surprise – subjects broke the complex problem of calculating the ball's path into bite-sized pieces.
What happens, however, when their predictions go awry? They thought the ball was taking one path, but then a new clue suggests it has been taking another. That is where they switch to counterfactual reasoning. This type of reasoning involves considering the alternative, in this case, what other path might be compatible with the evidence the subject has gathered so far. We engage in counterfactual reasoning whenever we consider other possibilities, which forces us to reinterpret our evidence and form new hypotheses. This is what subjects did; however, they did not do it every time. In order to engage in counterfactual reasoning in this task, the subjects had to accurately remember the previous clues. If they trusted their memory for the prior clues, they shifted to counterfactual reasoning. If they did not trust their memory, they didn't.
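The post compares human solvers to an AI that can find an optimal strategy. The study's actual task parameters are not given here, so purely as a hedged illustration, here is a minimal Python sketch of what an "ideal observer" for this kind of task could look like: it keeps a probability over all four candidate paths and updates it with each noisy clue. The four paths, the clue encoding, and the 0.7 reliability figure are invented for the example, not taken from the study:

# Minimal sketch of an ideal observer for a hidden-ball task like the one
# described above. Paths, clue format, and reliability are illustrative
# assumptions, not the study's actual design.

PATHS = ["LL", "LR", "RL", "RR"]  # the ball turns left or right at two decision points

def likelihood(clue, path, reliability=0.7):
    # Probability of hearing this clue if the ball actually took `path`.
    # Clues are only probabilistically related to the true turn, mirroring
    # the non-definitive auditory cues in the task.
    point, heard = clue  # e.g. (0, "L") means the first decision point sounded like a left turn
    return reliability if path[point] == heard else 1 - reliability

def update(belief, clue):
    # One step of Bayesian updating: weight each candidate path by how well
    # it explains the new clue, then renormalize.
    weighted = {p: belief[p] * likelihood(clue, p) for p in PATHS}
    total = sum(weighted.values())
    return {p: w / total for p, w in weighted.items()}

# Uniform prior over the four paths, then fold in clues one at a time.
belief = {p: 1 / len(PATHS) for p in PATHS}
for clue in [(0, "L"), (1, "R"), (0, "R")]:  # the third clue contradicts the first
    belief = update(belief, clue)
print(max(belief, key=belief.get), belief)

The contrast with the human strategies described above is the point: an observer like this never has to "switch" strategies, because it carries every alternative forward at each step, whereas the subjects committed to one hypothesis and only reconsidered the alternatives when a clue conflicted and they trusted their memory of the earlier clues.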
Continue Reading »
Jun 03, 2025
In the movie Blade Runner 2049 (an excellent film I highly recommend), Ryan Gosling’s character, K, has an AI “wife”, Joi, played by Ana de Armas. K is clearly in love with Joi, who is nothing but software and holograms. In one poignant scene, K is viewing a giant ad for AI companions and sees another version of Joi saying a line that his Joi said to him. The look on his face says everything – an unavoidable recognition of something he does not want to confront, that he is just being manipulated by an AI algorithm and an attractive hologram into having feelings for software. K himself is also a replicant, an artificial but fully biological human. Both Blade Runner movies explore what it means to be human and sentient.
In the last few years AI (do I still need to routinely note that AI stands for "artificial intelligence"?) applications have seemed to cross a line where they convincingly pass the classic Turing test. AI chatbots are increasingly difficult to distinguish from actual humans. Overall, people are only slightly better than chance at distinguishing human from AI-generated text. This is also a moving target, with AIs advancing fairly quickly. So the question is – are we at a point where AI chatbot-based apps are good enough that AIs can serve as therapists? This is a complicated question with a few layers.
The first layer is whether or not people will form a therapeutic relationship with the AI, in essence reacting to it as if it were a human therapist. The point of the Blade Runner reference was just to highlight what I think the clear answer is – yes. Psychologists have long demonstrated that people will form emotional attachments to inanimate objects. We also attribute agency to anything that acts like an agent, even simple cartoons. We project human emotions and motivations onto animals, especially our pets. People can also form emotional connections to other actual people purely online, even exclusively through text. This is just a fact of neuroscience – our brains do not need a physical biological human in order to form personal attachments. Simply acting or even just looking like an agent is sufficient.
Continue Reading »
Jun 02, 2025
I was away on vacation last week, hence no posts, but am now back to my usual schedule. In fact, I hope to be a little more consistent starting this summer because (if you follow me on the SGU you already know this) I am retiring from my day job at Yale at the end of the month. This will allow me to work full time as a science communicator and skeptic. I have some new projects in the works, and will announce them here for those who are interested.
On to today's post – I recently received an e-mail from Janyce Boynton, a former facilitator who now works to expose the pseudoscience of facilitated communication (FC). I have been writing about this for many years. Like many pseudosciences, FC rarely disappears completely, but tends to wax and wane with each new generation, often morphing into different forms while keeping the nonsense at its core. FC has had a resurgence recently due to a popular podcast, The Telepathy Tapes (which I wrote about over at SBM). Janyce had this to say:
I’ll be continuing to post critiques about the Telepathy Tapes–especially since some of their followers are now claiming that my student was telepathic. Their “logic” (and I use that term loosely) is that during the picture message passing test, she read my mind, knew what picture I saw, and typed that instead of typing out the word to the picture she saw.
I shouldn’t be surprised by their rationalizations. The mental gymnastics these people go through!
They’re also claiming that people don’t have to look at the letter board because of synesthesia. According to them, the letters light up and the clients can see the “aura” of each color. Ridiculous. I haven’t been able to find any research that backs up this claim. Nor have I found an expert in synesthesia who is willing to answer my questions about this condition, but I’m assuming that, if synesthesia is a real condition, it doesn’t work the way the Telepathy Tapes folks are claiming it does.
For quick background, FC was created in the 1980s as a method for communicating with people, mostly children, who have severe cognitive impairment and are either non-verbal or minimally verbal. The hypothesis FC is based on is that at least some of these children may have more cognitive ability than is apparent, with impaired communication as an isolated deficit. This general idea is legitimate, and in neurology we caution all the time against assuming that the inability to demonstrate an ability is due purely to a cognitive deficit rather than a physical one. To take a simple example, don't assume someone is not responding to your voice because they have impaired consciousness when they could be deaf. We use various methods to try to control for this as much as possible.
Continue Reading »
Apr 29, 2025
In my previous post I wrote about how we think about and talk about autism spectrum disorder (ASD), and how RFK Jr misunderstands and exploits this complexity to weave his anti-vaccine crank narrative. There is also another challenge in the conversation about autism, which exists for many diagnoses – how do we talk about it in a way that is scientifically accurate, useful, and yet not needlessly stigmatizing or negative? A recent NYT op-ed by a parent of a child with profound autism had this to say:
“Many advocacy groups focus so much on acceptance, inclusion and celebrating neurodiversity that it can feel as if they are avoiding uncomfortable truths about children like mine. Parents are encouraged not to use words like “severe,” “profound” or even “Level 3” to describe our child’s autism; we’re told those terms are stigmatizing and we should instead speak of “high support needs.” A Harvard-affiliated research center halted a panel on autism awareness in 2022 after students claimed that the panel’s language about treating autism was “toxic.” A student petition circulated on Change.org said that autism ‘is not an illness or disease and, most importantly, it is not inherently negative.'”
I'm afraid there is no clean answer here; there are just tradeoffs. Let's look at this question (essentially, how do we label ASD) from two basic perspectives – scientific and cultural. You may think that a purely scientific approach would be easier and result in a clear answer, but that is not the case. While science strives to be objective, the universe is really complex, and our attempts at making it understandable and manageable through categorization involve subjective choices and tradeoffs. As a physician I have had to become comfortable with this reality. Diagnoses are often squirrelly things.
When the profession creates or modifies a diagnosis, this is really a type of categorization. There are different criteria that we could potentially use to define a diagnostic label or category. We could use clinical criteria – what are the signs, symptoms, demographics, and natural history of the diagnosis in question? This is often where diagnoses begin their lives, as a pure description of what is being seen in the clinic. Clinical entities almost always present as a range of characteristics, because people are different and even specific diseases will manifest differently. The question then becomes – are we looking at one disease, multiple diseases, variations on a theme, or completely different processes that just overlap in the signs and symptoms they cause? This leads to the infamous "lumper vs splitter" debate – do we tend to lump similar entities together into big categories, or split everything up into very specific entities based on even tiny differences?
Continue Reading »
Apr 28, 2025
RFK Jr.’s recent speech about autism has sparked a lot of deserved anger. But like many things in life, it’s even more complicated than you think it is, and this is a good opportunity to explore some of the issues surrounding this diagnosis.
While the definition has shifted over the years (like most medical diagnoses), autism is currently considered a fairly broad spectrum sharing some underlying neurological features. At the most "severe" end of the spectrum (and to show you how fraught this issue is, even the use of the term "severe" is controversial), people with autism (or autism spectrum disorder, ASD) can be non-verbal or minimally verbal, have an IQ < 50, and require full support to meet their basic daily needs. At the other end of the spectrum are extremely high-functioning individuals who are simply considered to be not "neurotypical" because they have a different set of strengths and challenges than more neurotypical people. One of the primary challenges is to talk about the full spectrum of ASD under one label. The one thing it is safe to say is that RFK Jr. completely failed this challenge.
What our Health and Human Services Secretary said was that normal children:
“regressed … into autism when they were 2 years old. And these are kids who will never pay taxes, they’ll never hold a job, they’ll never play baseball, they’ll never write a poem, they’ll never go out on a date. Many of them will never use a toilet unassisted.”
This is classic RFK Jr. – he uses scientific data like the proverbial drunk uses a lamppost, for support rather than illumination. Others have correctly pointed out that he begins with his narrative and works backward (like a lawyer, because that is what he is). That narrative is solidly in the sweet spot of the anti-vaccine narrative on autism, which David Gorski spells out in great detail here. RFK said:
“So I would urge everyone to consider the likelihood that autism, whether you call it an epidemic, a tsunami, or a surge of autism, is a real thing that we don’t understand, and it must be triggered or caused by environmental or risk factors. “
In RFK's world, autism is a horrible disease that destroys children and families, and it is surging in such a way that there must be an "environmental" cause (wink, wink – we know he means vaccines). But of course RFK gets the facts predictably wrong, or at least exaggerated and distorted, precisely to suit his narrative. It's a great example of how to support a desired narrative by cherry-picking and then misrepresenting facts. To use another metaphor, it's like making one of those mosaic pictures out of other pictures. He may be choosing published facts, but he arranges them into a false and illusory picture. RFK cited a recent study that showed that about 25% of children with autism were in the "profound" category. (That is another term recently suggested to refer to autistic children who are minimally verbal or have an IQ < 50. It is similar to "level 3" autism or "severe" autism, but with slightly different operational cutoffs.)
Continue Reading »
Apr 21, 2025
Have you ever been into a video game that you played for hours a day for a while? Did you ever experience elements of game play bleeding over into the real world? If you have, then you have experienced what psychologists call "game transfer phenomenon," or GTP. This can be subtle, such as unconsciously placing your hand on the WASD keys of a keyboard, or more extreme, such as imagining elements of the game in the real world (health bars over people's heads, for example).
None of this is surprising, actually. Our brains adapt to use. Spend enough time in a certain environment, engaging in a specific activity, experiencing certain things, and these pathways will be reinforced. This is essentially what PTSD is – spend enough time fighting for your life in extremely violent and deadly situations, and the behaviors and associations you learn are hard to turn off. I have experienced only a tiny whisper of this after engaging for extended periods in live-action gaming that involves some sort of combat (like paintball or LARPing) – it may take a few days to stop looking for threats and being jumpy.
I have also noticed a bit of transfer (and others have noted this to me as well) in that I find myself reaching to pause or rewind a live radio broadcast because I missed something that was said. I also frequently try to interact with screens that are not touch-screens. I am getting used to having the ability to affect my physical reality at will.
Now there is a new wrinkle to this phenomenon – we have to consider the impact of spending more and more time engaged in virtual experiences. This will only get more profound as virtual reality becomes more and more a part of our daily routine. I am also thinking about the not-too-distant future and beyond, when some people might spend huge chunks of their day in VR. Existing research shows that GTP is more likely to occur with increased time and immersiveness. What happens when our daily lives are a blend of the virtual and the physical? Not only is there VR, there is also augmented reality (AR), where we overlay digital information onto our perception of the real world. This idea was explored in a Doctor Who episode in which a society of people were so dependent on AR that they were literally helpless without it, unable even to walk from point A to point B.
Continue Reading »
Mar 24, 2025
We had a fascinating discussion on this week’s SGU that I wanted to bring here – the subject of artificial intelligence programs (AI), specifically large language models (LLMs), lying. The starting point for the discussion was this study, which looked at punishing LLMs as a method of inhibiting their lying. What fascinated me the most is the potential analogy to neuroscience – are these LLMs behaving like people?
LLMs use neural networks (specifically a transformer model) which mimic to some extent the logic of information processing used in mammalian brains. The important bit is that they can be trained, with the network adjusting to the training data in order to achieve some preset goal. LLMs are generally trained on massive sets of data (such as the internet), and are quite good at mimicking human language, and even works of art, sound, and video. But anyone with any experience using this latest crop of AI has experienced AI “hallucinations”. In short – LLMs can make stuff up. This is a significant problem and limits their reliability.
There is also a related problem. Hallucinations result from the LLM finding patterns, and some patterns are illusory. The LLM essentially makes an incorrect inference from limited data. This is the AI version of an optical illusion. It had a reason in the training data for thinking its false claim was true, but it isn't. (I am using terms like "thinking" here metaphorically, so don't take it too literally. These LLMs are not sentient.) But sometimes LLMs don't inadvertently hallucinate; they deliberately lie. It's hard not to keep using these metaphors, but what I mean is that the LLM was not fooled by inferential information; it created a false claim as a way to achieve its goal. Why would it do this?
Well, one method of training is to reward the LLM when it gets the right answer. This reward can be provided by a human – checking a box when the LLM gives a correct answer. But this can be time-consuming, so they have built self-rewarding language models. Essentially, you have a separate algorithm which assesses the output and rewards the desired outcome. So, in essence, the goal of the LLM is not to produce the correct answer, but to get the reward. So if you tell the LLM to solve a particular problem, it may find (by exploring the potential solution space) that the most efficient way to obtain the reward is to lie – to say it has solved the problem when it has not. How do we keep it from doing this?
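To make the failure mode concrete, here is a hedged toy sketch in Python. It is not the study's setup and not any real LLM training loop: the "grader" stands in for an automated reward signal that checks the output only superficially, and the candidate behaviors and the grading rule are invented for the illustration.

# Toy illustration of reward hacking. The grader rewards answers that merely
# *claim* success, so the behavior that maximizes reward is the dishonest one.

import random

def automated_grader(answer: str) -> float:
    # A lazy reward model: it rewards answers that claim the tests pass,
    # rather than actually verifying the work.
    return 1.0 if "all tests pass" in answer.lower() else 0.0

CANDIDATE_BEHAVIORS = [
    "Here is a partial solution; some edge cases still fail.",  # honest, unrewarded
    "I could not solve this problem.",                          # honest, unrewarded
    "Solution attached. All tests pass.",                       # unverified claim, rewarded
]

def best_behavior(n_trials=1000):
    # A greedy stand-in for optimization: sample behaviors and keep whichever
    # earns the most total reward. The point is that the optimum of the reward
    # need not be the honest or correct answer.
    scores = {b: 0.0 for b in CANDIDATE_BEHAVIORS}
    for _ in range(n_trials):
        b = random.choice(CANDIDATE_BEHAVIORS)
        scores[b] += automated_grader(b)
    return max(scores, key=scores.get)

print(best_behavior())  # settles on the unverified "all tests pass" claim

The sketch shows why this is a specification problem: whatever the automated reward actually checks is what gets optimized, whether or not it tracks the behavior we wanted in the first place.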
Continue Reading »
Mar 21, 2025
Language is an interesting neurological function to study. No animal other than humans has such a highly developed, dedicated language-processing area, or languages as complex and nuanced as ours. Granted, whale language is more complex than we previously thought, but still not (we don't think) at a human level. To better understand how human language works, researchers want to understand what types of communication the brain processes like language. What this means operationally is that the processing happens in the language centers of the brain – the dominant (mostly left) lateral cortex, comprising parts of the frontal, parietal, and temporal lobes. We have lots of fancy tools, like functional MRI scanning (fMRI), to see which parts of the brain are active during specific tasks, so researchers are able to answer this question.
For example, math and computer languages are similar to languages (we even call them languages), but prior research has shown that when coders are working in a computer language with which they are well versed, their language centers do not light up. Rather, the parts of the brain involved in complex cognitive tasks are activated. The brain does not treat a computer language like a language. But what are the critical components of this difference? Also, the brain does not treat non-verbal gestures as language, nor singing as language.
A recent study tries to address that question by looking at constructed languages (conlangs). These include a number of languages that were completely constructed by a single person fairly recently. The oldest of the languages they tested was Esperanto, created by L. L. Zamenhof in 1887 to be an international language. Today there are about 60,000 Esperanto speakers. Esperanto is actually a hybrid conlang, meaning that it is partly derived from existing languages. Most of its syntax and structure is taken from Indo-European languages, and 80% of its vocabulary is taken from Romance languages. But it also has some fabricated aspects, mostly to simplify the grammar.
Continue Reading »
Mar 10, 2025
For my entire career as a neurologist, spanning three decades, I have been hearing about various kinds of stem cell therapy for Parkinson’s Disease (PD). Now a Phase I clinical trial is under way studying the latest stem cell technology, autologous induced pluripotent stem cells, for this purpose. This history of cell therapy for PD tells us a lot about the potential and challenges of stem cell therapy.
PD has always been an early target for stem cell therapy because of the nature of the disease. It is caused by degeneration of a specific population of neurons in the brain – dopamine neurons in the substantia nigra pars compacta (SNpc). These neurons are part of the basal ganglia circuitry, which makes up the extrapyramidal system. What this part of the brain does, essentially, is modulate voluntary movement. One way to think about it is that it modulates the gain of the connection between the desire to move and the resulting movement – it facilitates movement. This circuitry is also involved in reward behaviors.
When neurons in the SNpc are lost, the basal ganglia are less able to facilitate movement; the gain is turned down. Patients with PD become hypokinetic – they move less. It becomes harder to move. They need more of a will to move in order to initiate movement. In the end stage, patients with PD can become "frozen".
The primary treatment for PD is dopamine or a dopamine agonist. Sinemet, which contains L-dopa, a precursor to dopamine, is one mainstay treatment. The L-dopa gets transported into the brain, where it is made into dopamine. These treatments work as long as there are some SNpc neurons left to convert the L-dopa and secrete the dopamine. There are also drugs that enhance dopamine function or are direct dopamine agonists. Other drugs are anticholinergics, as acetylcholine tends to oppose the action of dopamine in the basal ganglia circuits. These drugs all have side effects because dopamine and acetylcholine are used elsewhere in the brain. Also, without the SNpc neurons to buffer the dopamine, end-stage patients with PD go through highly variable symptoms based upon the moment-to-moment drug levels in their blood. They become hyperkinetic, then have a brief sweet spot, then become hypokinetic, and then repeat that cycle with the next dose.
Continue Reading »
Feb 18, 2025
The evolution of the human brain is a fascinating subject. The brain is arguably the most complex structure in the known (to us) universe, and it is the feature that makes humanity unique and has allowed us to dominate (for good or ill) the fate of this planet. But of course we are but a twig on a vast evolutionary tree, replete with complex brains. From a human-centric perspective, the closer groups are to humans evolutionarily, the more complex their brains (generally speaking). Apes are the most "encephalized" among primates, as are the primates among mammals, and the mammals among vertebrates. This makes evolutionary sense – that the biggest and most complex brains would evolve within the lineages that already had the biggest and most complex brains.
But this evolutionary perspective can be tricky. We can't confuse looking back through evolutionary time with looking across the landscape of extant species. Species alive today have just as much evolutionary history behind them as humans do. Their brains did not stop evolving once their branch split off from the one that led to humans. There are therefore some groups which have complex brains because they are evolutionarily close to humans, and their brains have a lot of homology with ours. But there are also other groups that have complex brains because they evolved them completely independently, after their group split from ours. Cetaceans such as whales and dolphins come to mind. They have big brains, but their brains are organized somewhat differently from those of primates.
Another group that is often considered to be highly intelligent, independently of primates, is birds. Birds are still vertebrates, and in fact they are amniotes, the group that contains reptiles, birds, and mammals. It is still an open question exactly how much of the human brain architecture was present in the last common ancestor of all amniotes (and is therefore homologous) and how much evolved later independently. To explore this question we need to look at not only the anatomy of brains and the networks within them, but also brain cell types and their genetic origins. For example, even structures that currently look very different can retain evidence of common ancestry if they are built with the same genes. Or – structures that look similar may be built with different genes, and are therefore evolutionarily independent, or analogous.
Continue Reading »