Apr
21
2025
Have you ever been into a video game that you played for hours a day for a while? Did you ever experience elements of game play bleeding over into the real world? If you have, then you have experienced what psychologists call “game transfer phenomenon” or GTP. This can be subtle, such as unconsciously placing your hand on the AWSD keys on a keyboard, or more extreme such as imagining elements of the game in the real world, such as health bars over people’s heads.
None of this is surprising, actually. Our brains adapt to use. Spend enough time in a certain environment, engaging in a specific activity, experiencing certain things, and these pathways will be reinforced. This is essentially what PTSD is – spend enough time fighting for your life in extremely violent and deadly situations, and the behaviors and associations you learn are hard to turn off. I have experienced only a tiny whisper of this after engaging for extended periods of time in live-action gaming that involves some sort of combat (like paint ball or LARPing) – it may take a few days for you to stop looking for threats and being jumpy.
I have also noticed a bit of transfer (and others have noted this to me as well) in that I find myself reaching to pause or rewind a live radio broadcast because I missed something that was said. I also frequently try to interact with screens that are not touch-screens. I am getting used to having the ability to affect my physical reality at will.
Now there is a new wrinkle to this phenomenon – we have to consider the impact of spending more and more time engaged in virtual experiences. This will only get more profound as virtual reality becomes more and more a part of our daily routine. I am also thinking about the not-to-distant future and beyond, where some people might spend huge chunks of their day in VR. Existing research shows that GTP is more likely to occur with increased time and immersiveness. What happens when our daily lives are a blend of the virtual and the physical? Not only is there VR, there is augmented reality (AR) where we overlay digital information onto our perception of the real world. This idea was explored in a Dr. Who episode in which a society of people were so dependent on AR that they were literally helpless without it, unable to even walk from point A to B.
Continue Reading »
Mar
24
2025
We had a fascinating discussion on this week’s SGU that I wanted to bring here – the subject of artificial intelligence programs (AI), specifically large language models (LLMs), lying. The starting point for the discussion was this study, which looked at punishing LLMs as a method of inhibiting their lying. What fascinated me the most is the potential analogy to neuroscience – are these LLMs behaving like people?
LLMs use neural networks (specifically a transformer model) which mimic to some extent the logic of information processing used in mammalian brains. The important bit is that they can be trained, with the network adjusting to the training data in order to achieve some preset goal. LLMs are generally trained on massive sets of data (such as the internet), and are quite good at mimicking human language, and even works of art, sound, and video. But anyone with any experience using this latest crop of AI has experienced AI “hallucinations”. In short – LLMs can make stuff up. This is a significant problem and limits their reliability.
There is also a related problem. Hallucinations result from the LLM finding patterns, and some patterns are illusory. The LLM essentially makes the incorrect inference from limited data. This is the AI version of an optical illusion. They had a reason in the training data for thinking their false claim was true, but it isn’t. (I am using terms like “thinking” here metaphorically, so don’t take it too literally. These LLMs are not sentient.) But sometimes LLMs don’t inadvertently hallucinate, they deliberately lie. It’s hard not to keep using these metaphors, but what I mean is that the LLM was not fooled by inferential information, it created a false claim as a way to achieve its goal. Why would it do this?
Well, one method of training is to reward the LLM when it gets the right answer. This reward can be provided by a human – checking a box when the LLM gives a correct answer. But this can be time consuming, so they have build self-rewarding language models. Essentially you have a separate algorithm which assessed the output and reward the desired outcome. So, in essence, the goal of the LLM is not to produce the correct answer, but to get the reward. So if you tell the LLM to solve a particular problem, it may find (by exploring the potential solution space) that the most efficient way to obtain the reward is to lie – to say it has solved the problem when it has not. How do we keep it from doing this.
Continue Reading »
Mar
21
2025
Language is an interesting neurological function to study. No animal other than humans has such a highly developed dedicated language processing area, or languages as complex and nuanced as humans. Although, whale language is more complex than we previously thought, but still not (we don’t think) at human level. To better understand how human language works, researchers want to understand what types of communication the brain processes like language. What this means operationally, is that the processing happens in the language centers of the brain – the dominant (mostly left) lateral cortex comprising parts of the frontal, parietal, and temporal lobes. We have lots of fancy tools, like functional MRI scanning (fMRI) to see which parts of the brain are active during specific tasks, so researchers are able to answer this question.
For example, math and computer languages are similar to languages (we even call them languages), but prior research has shown that when coders are working in a computer language with which they are well versed, their language centers do not light up. Rather, the parts of the brain involved in complex cognitive tasks is involved. The brain does not treat a computer language like a language. But what are the critical components of this difference? Also, the brain does not treat non-verbal gestures as language, nor singing as language.
A recent study tries to address that question, looking at constructed languages (conlangs). These include a number of languages that were completely constructed by a single person fairly recently. The oldest of the languages they tested was Esperanto, created by L. L. Zamenhof in 1887 to be an international language. Today there are about 60,000 Esperanto speakers. Esperanto is actually a hybrid conlang, meaning that it is partly derived from existing languages. Most of its syntax and structure is taken from Indo-European languages, and 80% of its vocabulary is taken from Romance languages. But is also has some fabricated aspects, mostly to simplify the grammar.
Continue Reading »
Mar
10
2025
For my entire career as a neurologist, spanning three decades, I have been hearing about various kinds of stem cell therapy for Parkinson’s Disease (PD). Now a Phase I clinical trial is under way studying the latest stem cell technology, autologous induced pluripotent stem cells, for this purpose. This history of cell therapy for PD tells us a lot about the potential and challenges of stem cell therapy.
PD has always been an early target for stem cell therapy because of the nature of the disease. It is caused by degeneration in a specific population of neurons in the brain – dopamine neurons in the substantial nigra pars compacta (SNpc). These neurons are part of the basal ganglia circuitry, which make up the extrapyramidal system. What this part of the brain does, essentially, is to modulate voluntary movement. One way to think about it is that is modulates the gain of the connection between the desire the move and the resulting movement – it facilitates movement. This circuitry is also involved in reward behaviors.
When neurons in the SNpc are lost the basal ganglia is less able to facilitate movement; the gain is turned down. Patients with PD become hypokinetic – they move less. It becomes harder to move. They need more of a will to move in order to initiate movement. In the end stage, patients with PD can become “frozen”.
The primary treatment for PD is dopamine or a dopamine agonist. Sinemet, which contains L-dopa, a precursor to dopamine, is one mainstay treatment. The L-dopa gets transported into the brain where it is made into dopamine. These treatments work as long as there are some SNpc neurons left to convert the L-dopa and secrete the dopamine. There are also drugs that enhance dopamine function or are direct dopamine agonists. Other drugs are cholinergic inhibitors, as acetylcholine tends to oppose the action of dopamine in the basal ganglia circuits. These drugs all have side effects because dopamine and acetylcholine are used elsewhere in the brain. Also, without the SNpc neurons to buffer the dopamine, end-stage patients with PD go through highly variable symptoms based upon the moment-to-moment drug levels in their blood. They become hyperkinetic, then have a brief sweet-spot, and then hypokinetic, and then repeat that cycle with the next dose.
Continue Reading »
Feb
18
2025
The evolution of the human brain is a fascinating subject. The brain is arguably the most complex structure in the known (to us) universe, and is the feature that makes humanity unique and has allowed us to dominate (for good or ill) the fate of this planet. But of course we are but a twig on a vast evolutionary tree, replete with complex brains. From a human-centric perspective, the closer groups are to humans evolutionarily, the more complex their brains (generally speaking). Apes are the most “encephalized” among primates, as are the primates among mammals, and the mammals among vertebrates. This makes evolutionary sense – that the biggest and most complex brains would evolve from the group with the biggest and most complex brains.
But this evolutionary perspective can be tricky. We can’t confuse looking back through evolutionary time with looking across the landscape of extant species. Any species alive today has just as much evolutionary history behind them as humans. Their brains did not stop evolving once their branch split off from the one that lead to humans. There are therefore some groups which have complex brains because they are evolutionarily close to humans, and their brains have a lot of homology with humans. But there are also other groups that have complex brains because they evolved them completely independently, after their group split from ours. Cetaceans such as whales and dolphins come to mind. They have big brains, but their brains are organized somewhat differently from primates.
Another group that is often considered to be highly intelligent, independent from primates, is birds. Birds are still vertebrates, and in fact they are amniotes, the group that contains reptiles, birds, and mammals. It is still an open question as to exactly how much of the human brain architecture was present at the last common ancestor of all amniotes (and is therefore homologous) and how much evolved later independently. To explore this question we need to look at not only the anatomy of brains and the networks within them, but brain cell types and their genetic origins. For example, even structures that currently look very different can retain evidence of common ancestry if they are built with the same genes. Or – structures that look similar may be built with different genes, and are therefore evolutionarily independent, or analogous.
Continue Reading »
Feb
14
2025
My younger self, seeing that title – AI Powered Bionic Arm – would definitely feel as if the future had arrived, and in many ways it has. This is not the bionic arm of the 1970s TV show, however. That level of tech is probably closer to the 2070s than the 1970s. But we are still making impressive advances in brain-machine interface technology and robotics, to the point that we can replace missing limbs with serviceable robotic replacements.
In this video Sarah De Lagarde discusses her experience as the first person with an AI powered bionic arm. This represents a nice advance in this technology, and we are just scratching the surface. Let’s review where we are with this technology and how artificial intelligence can play an important role.
There are different ways to control robotics – you can have preprogrammed movements (with or without sensory feedback), AI can control the movements in real time, you can have a human operator, through some kind of interface including motion capture, or you can use a brain-machine interface of some sort. For robotic prosthetic limbs obviously the user needs to be able to control them in real time, and we want that experience to feel as natural as possible.
The options for robotic prosthetics include direct connection to the brain, which can be from a variety of electrodes. They can be deep brain electrodes, brain surface, scalp surface, or even stents inside the veins of the brain (stentrodes). All have their advantages and disadvantages. Brain surface and deep brain have the best resolution, but they are the most invasive. Scalp surface is the least invasive, but has the lowest resolution. Stentrodes may, for now, be the best compromise, until we develop more biocompatible and durable brain electrodes.
Continue Reading »
Feb
04
2025
Designing research studies to determine what is going on inside the minds of animals is extremely challenging. The literature is littered with past studies that failed to properly control for all variables and thereby overinterpreted the results. The challenge is that we cannot read the minds of animals, and they cannot communicate directly to us using language. We have to infer what is going on in their minds from their behavior, and inference can be tricky.
One specific question is whether or not our closest ancestors have a “theory of mind”. This is the ability to think about what other creatures are thinking and feeling. Typical humans do this naturally – we know that other people have minds like our own and we can think strategically about the implications of what other people think, how to predict their behavior based upon this, and how to manipulate the thoughts of other people in order to achieve our ends.
Animal research over the last century or so has been characterized by assumptions that some cognitive ability is unique to humans, only to find that this ability exists in some animals, at least in a precursor form. This makes sense, as we have evolved from other animals, most of our abilities likely did not come out of nowhere but evolved from more basic precursors.
Continue Reading »
Nov
19
2024
Humans (assuming you all experience roughly what I experience, which is a reasonable assumption) have a sense of self. This sense has several components – we feel as if we occupy our physical bodies, that our bodies are distinct entities separate from the rest of the universe, that we own our body parts, and that we have the agency to control our bodies. We can do stuff and affect the world around us. We also have a sense that we exist in time, that there is a continuity to our existence, that we existed yesterday and will likely exist tomorrow.
This may all seem too basic to bother pointing out, but it isn’t. These aspects of a sense of self also do not flow automatically from the fact of our own existence. There are circuits in the brain receiving input from sensory and cognitive information that generate these senses. We know this primarily from studying people in whom one or more of these circuits are disrupted, either temporarily or permanently. This is why people can have an “out of body” experience – disrupt those circuits which make us feel embodied. People can feel as if they do not own or control a body part (such as so-called alien hand syndrome). Or they can feel as if they own and control a body part that doesn’t exist. It’s possible for there to be a disconnect between physical reality and our subjective experience, because the subjective experience of self, of reality, and of time are constructed by our brains based upon sensory and other inputs.
Perhaps, however, there is another way to study the phenomenon of a sense of self. Rather than studying people who are missing one or more aspects of a sense of self, we can try to build up that sense, one component at a time, in robots. This is the subject of a paper by three researchers, a cognitive roboticist, a cognitive psychologist who works with robot-human interactions, and a psychiatrist. They explore how we can study the components of a sense of self in robots, and how we can use robots to do psychological research about human cognition and the self of self.
Continue Reading »
Oct
10
2024
How certain are you of anything that you believe? Do you even think about your confidence level, and do you have a process for determining what your confidence level should be or do you just follow your gut feelings?
Thinking about confidence is a form of metacognition – thinking about thinking. It is something, in my opinion, that we should all do more of, and it is a cornerstone of scientific skepticism (and all good science and philosophy). As I like to say, our brains are powerful tools, and they are our most important and all-purpose tool for understanding the universe. So it’s extremely useful to understand how that tool works, including all its strengths, weaknesses, and flaws.
A recent study focuses in on one tiny slice of metacognition, but an important one – how we form confidence in our assessment of a situation or a question. More specifically, it highlights The illusion of information adequacy. This is yet another form of cognitive bias. The experiment divided subjects into three groups – one group was given one half of the information about a specific situation (the information that favored one side), while a second group was given the other half. The control group was given all the information. They were then asked to evaluate the situation and how confident they were in their conclusions. They were also asked if they thought other people would come to the same conclusion.
You can probably see this coming – the subjects in the test groups receiving only half the information felt that they had all the necessary information to make a judgement and were highly confident in their assessment. They also felt that other people would come to the same conclusion as they did. And of course, the two test groups came to the conclusion favored by the information they were given.
Continue Reading »
Oct
07
2024
Scientists have just published in Nature that they have completed the entire connectome of a fruit fly: Network statistics of the whole-brain connectome of Drosophila. The map includes 140,000 neurons and more than 50 million connections. This is an incredible achievement that marks a milestone in neuroscience and is likely to advance our research.
A “connectome” is a complete map of all the neurons and all the connections in a brain. The ultimate goal is to map the entire human brain, which has 86 billion neurons and about 100 trillion connections – that’s more than six orders of magnitude greater than the drosophila. The human genome project was started in 2009 through the NIH, and today there are several efforts contributing to this goal.
Right now we have what is called a mesoscale connectome of the human brain. This is more detailed than a macroscopic map of human brain anatomy, but not as detailed as a microscopic map at the neuronal and synapse level. It’s in between, so mesoscale. Essentially we have built a mesoscale map of the human brain from functional MRI and similar data, showing brain regions and types of neurons at the millimeter scale and their connections. We also have mesoscale connectomes of other mammalian brains. These are highly useful, but the more detail we have obviously the better for research.
We can mark progress on developing connectomes in a number of ways – how is the technology improving, how much detail do we have on the human brain, and how complex is the most complex brain we have fully mapped. That last one just got its first entry – the fruit fly or drosophila brain.
Continue Reading »