Archive for the 'Neuroscience' Category

May 02 2023

Reading The Mind with fMRI and AI

Published by under Neuroscience

This is pretty exciting neuroscience news – Semantic reconstruction of continuous language from non-invasive brain recordings. What this means is that researchers have been able to, sort of, decode the words that subjects were thinking of simply by reading their fMRI scans. They were able to accomplish this feat using a large language model AI, specifically GPT-1, an early predecessor of ChatGPT. It’s a great example of how these AI systems can be leveraged to aid research.

This is the latest advance in an overall research goal of figuring out how to read brain activity and translate that activity into actual thoughts.  Researchers started by picking some low-hanging fruit – determining what image a person was looking at by reading the pattern of activity in their visual cortex. This is relatively easy because the visual cortex actually maps to physical space, so if someone is looking at a giant letter E, that pattern of activity will appear in the cortex as well.

Moving to language has been tricky, because there is no physical mapping going on, just conceptual mapping. Efforts so far have relied upon high-resolution EEG data from implanted electrodes. This research has also focused on single words or phrases, often trying to pick one from among several known targets. This latest research represents three significant advances. The first is using a non-invasive technique to get the data – fMRI scanning. The second is inferring full sentences and ideas, not just words. And the third is that the targets were open-ended, not picked from a limited set of choices. But let’s dig into some details, which are important.

Apr 28 2023

Coaching with Empathy

Published by under Neuroscience

The show Ted Lasso is about to wrap up its final season. I am one of the many people who really enjoy the show, which turns on a group of likable people helping each other through various life challenges with care and empathy. Lasso is an American college football coach who was recruited to coach an English “football” team, and manages to muddle through with Zen-like calm and folksy good spirits.

Although entertaining, is the Ted Lasso style of coaching effective? Is it more effective to coach like a drill sergeant, with fear and intimidation? It’s interesting that the fear-based model of leadership, whether in coaching or otherwise, seems to be intuitive. It’s the default mode for many people. But evidence increasingly shows that the Ted Lasso model works better. Empathy may be the most effective leadership style.

Coaches who lead with empathy tend to get more out of their athletes, foster loyalty, establish trust, and communicate more effectively. When you think about it, it makes sense. People work harder when they are motivated. Fear-based motivation is ultimately external – the fear of displeasing a leader, of earning their wrath or punishment. Empathy nurtures internal motivation – wanting to succeed because you feel confident, and you want to achieve personal and group goals.

This philosophy is not new – it is a form of the old cliché about catching more flies with honey than with vinegar. The philosophy has also filtered into the education community, which increasingly emphasizes positive reward rather than negative feedback. Making students feel anxious or stupid, it turns out, is counterproductive. This can be taken too far as well, however. I have seen it manifest as a policy of never telling students they are wrong. But what if they are? Well, then don’t ask them a question that can be right or wrong, so there is no possibility of being wrong. OK, but some answers are still better than others.

Apr 13 2023

Building A Robotic Hand

Roboticists are often engaged in a process of reinventing the wheel – duplicating the function of biological bodies in rubber, metal, and plastic. This is a difficult task because biological organisms are often wondrous machines. The human hand, in particular, is a feat of evolutionary engineering.

Researchers at the University of Cambridge have designed a robotic hand that reflects both the challenge of this task and some of the principles that might help guide the development of this technology. One feature of this study struck me as significant because of how it reflects the actual function of the human hand, in a way not mentioned in the paper or press release. This made me wonder if the roboticists were even aware that they were replicating a known principle.

The phenomenon in question is called tenodesis (I did a search on the paper and could not find this term). I learned about it during my neurology residency while rotating in a rehab hospital. When you extend your wrist, this pulls the tendons of the fingers tight and causes the fingers to flex into a weak grasp. People with a spinal cord injury around the C6–7 level can extend their wrist but cannot flex their fingers, so they can learn to exploit the tenodesis effect to achieve a functional grasp, which can make a huge difference to their independence.

The Cambridge roboticists have apparently independently hit upon this same idea. They designed a robot hand that is anthropomorphic but whose fingers are not attached to actuators. The robot, however, can flex its wrist, which passively causes the fingers to flex into a grasp – exactly as happens in human hands with tenodesis. But why would roboticists want to make a robot hand with “paralyzed” fingers? The answer is – to optimize efficiency. Attaching all the fingers to actuators is a complex engineering feat, and using those actuators consumes a lot of energy. A passive grip is therefore much more energy efficient, which is a huge advantage in robotics.

Apr 03 2023

Is AI Sentient Revisited

On the SGU this week we interviewed Blake Lemoine, the ex-Google employee who believes that Google’s LaMDA may be sentient, based on his interactions with it. This was a fascinating discussion, and even though I think we did a pretty deep dive in the time we had, it also felt like we were just scratching the surface of this complex topic. I want to summarize the topic here, give the reasons I don’t agree with Blake, and add some further analysis.

First, let’s define some terms. Intelligence is essentially the ability to know and process information, often in the context of adapting one’s responses to that information. A calculator, therefore, displays a type of intelligence. Sapience is deeper, implying understanding, perspective, insight, and wisdom. Sentience is the subjective experience of one’s existence, the ability to feel. And consciousness is the ability to be awake, to receive and process input and generate output, and to have some level of control over that process. Consciousness implies spontaneous internal mental activity, not just reactive.

These concepts are a bit fuzzy – they overlap and interact with each other, and we don’t really understand them fully phenomenologically – which is part of the problem with talking about whether or not something is sentient. But that doesn’t mean that they are meaningless concepts. There is clearly something going on in a human brain that is not going on in a calculator. Even if we consider a supercomputer with the processing power of a human brain, able to run complex simulations and other applications – I don’t think there is a serious argument to be made that it is sentient. It is not experiencing its own existence. It does not have feelings or emotions.

The question at hand is this – how do we know if something that displays intelligent behavior is also sentient? The problem is that sentience, by definition, is a subjective experience. I know that I am sentient because of my own experience. But how do I know that any other living human being is also sentient?

Mar 21 2023

Unifying Cognitive Biases

Are you familiar with the “lumper vs splitter” debate? This refers to any situation in which there is some controversy over exactly how to categorize complex phenomena – specifically, whether to favor the fewest categories based on similarities, or the greatest number of categories based on every difference. For example, in medicine we need to divide the world of diseases into specific entities. Some diseases are very specific, but many are highly variable. For the variable diseases, do we lump every type into a single disease category, or do we split them into different disease types? Lumping is clean but can gloss over important details. Splitting endeavors to capture every detail, but can create a categorical mess.

As is often the case, an optimal approach likely combines both strategies, trying to leverage the advantages of each. Therefore we often have disease headers with subtypes below to capture more detail. But even there the debate does not end – how far do we go splitting out subtypes of subtypes?

The debate also happens when we try to categorize ideas, not just things. Logical fallacies are a great example. You may hear of very specific logical fallacies, such as the “argumentum ad Hitlerum”, an attempt to refute an argument by tying it somehow to something Hitler did, said, or believed. But really this is just a specific instance of a “poisoning the well” logical fallacy. Does it really deserve its own name? But it’s so common it may be worth pointing out as a specific case. In my opinion, whatever system is most useful is the one we should use, and in many cases that’s the one that facilitates understanding. Knowing how different logical fallacies are related helps us truly understand them, rather than just memorize them.

A recent paper enters the “lumper vs splitter” fray with respect to cognitive biases. The authors do not frame their proposal in these terms, but it’s the same idea. The paper is – Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases. Parsimony means being economical or frugal – a term most often applied to money, but it applies to ideas and labels as well. They are saying that we should attempt to lump different specific cognitive biases into categories that represent underlying unifying cognitive processes.

Mar 09 2023

Anxiety Biomarkers

Published by under Neuroscience

Psychiatry, psychology, and all aspects of mental health are a challenging area because the clinical entities we are dealing with are complex and mostly subjective. Diagnoses are perhaps best understood as clinical constructs – a way of identifying and understanding a mental health issue, but not necessarily a core neurological phenomenon. In other words, things like bipolar disorder are identified, categorized, and diagnosed based upon a list of clinical signs and symptoms. But this is a descriptive approach, and may not correlate to specific circuitry in the brain. Researchers are making progress finding the “neuroanatomical correlates” of known clinical entities, but such correlates are mostly partial and statistical. Further, there is culture, personality, and environment to deal with, all of which significantly influence how underlying brain circuitry manifests clinically. Also, not all mental health diagnoses are equal – some are likely to be a lot closer to discrete brain circuitry than others.

With all of these challenges, researchers are still trying to move mental health from a purely descriptive endeavor to a more biological approach, where appropriate. There are a number of ways to do this. The most obvious is to look at the brain itself. Such imaging can be anatomical (taking a picture of the physical anatomy of the brain, such as a CT scan or MRI scan) or functional (looking at some functional aspect of the brain, like EEG or functional MRI). This kind of research is producing a steady stream of information, finding correlations with mental health disorder states, but few findings have progressed to the point that they are clinically useful. To be useful for research, all we need is sufficient statistical significance. But to be useful clinically – to actually determine how to treat an individual person – you need sufficient accuracy (sensitivity and specificity) to guide treatment decisions. That requires much more accuracy than basic research does.
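To make the research-vs-clinic distinction concrete, here is a minimal sketch with entirely hypothetical numbers (the sensitivity, specificity, and prevalence values are mine for illustration, not from any study). A biomarker can separate groups with strong statistical significance yet still be too inaccurate to guide an individual treatment decision:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value (probability that a positive test is a
    true positive), computed via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A hypothetical biomarker that is 80% sensitive and 80% specific would
# easily show a significant group difference in a research study, yet at
# a 10% prevalence most of its positive results are false positives:
print(round(ppv(0.80, 0.80, 0.10), 2))  # → 0.31
```

In other words, under these assumed numbers only about a third of the people flagged by the test would actually have the condition – fine for finding group-level correlations, but not good enough to base an individual’s treatment on.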

There is also another biological way of evaluating mental health states – molecular biomarkers. This approach stems from the fact that every cell in the body activates a different set of genes – so brain cells activate brain genes, while liver cells activate liver genes. Also, one type of cell will activate genes at different intensities during different functions. So when the pancreas needs to create a lot of insulin, the insulin genes become more active. We can detect the RNA that is produced when specific genes are activated, or patterns of RNA when suites of genes are activated. This can be a biomarker signature of specific functional states.

Feb 16 2023

Serial Dependence Bias

Published by under Neuroscience

As I have discussed numerous times on this blog, our brains did not evolve to be optimal, precise perceivers and processors of information. Here is an infographic showing 188 documented cognitive biases. These biases are not all bad – they are tradeoffs. Evolutionary forces care only about survival, and so the idea is that many of these biases are more adaptive than accurate. We may, for example, overcall risk because avoiding risk has an adaptive benefit. Not all of the biases have to be adaptive, however. Some may be epiphenomena, or themselves tradeoffs – a side effect of another adaptation. Our visual perception is rife with such tradeoffs, emphasizing movement, edges, and change at the expense of accuracy, yielding the occasional optical illusion.

One interesting perceptual bias is called serial dependence bias – what we see is influenced by what we recently saw (or heard). It’s as if one perception primes us and influences the next. It’s easy to see how this could be adaptive. If you see a wolf in the distance, your perception is now primed to see wolves. This bias may also aid pattern recognition, making patterns easier to detect. Of course, pattern recognition is one of the biggest perceptual biases in humans. Our brains are biased toward detecting potential patterns, way over-calling possible patterns and then filtering out the false positives at the back end. Perhaps serial dependence bias is also part of this hyper-pattern-recognition system.

Psychologists have an important question about serial dependence bias, however. Does this bias occur at the perceptual level (such as visual processing) or at a higher cognitive level? A recently published study attempted to address this question. They exposed subjects to an image of coins for half a second (the study is Japanese, so both the subjects and coins were Japanese). They then asked subjects to estimate the number of coins they just saw and their total monetary value. The researchers wanted to know what had a greater effect on the subjects – the number of coins they had just viewed or their most recent guess. The idea is that if serial dependence bias is primarily perceptual, then the number of coins will be what affects their subsequent guesses. If the bias is primarily a higher cognitive phenomenon, then their previous guesses will have a greater effect than the actual number they saw. To help separate the two (because higher guesses would tend to align with greater numbers), they had subjects estimate the number and value of coins on only every other image. Therefore the image behind their most recent guess would be different from the most recent image they saw.

Jan 06 2023

Brain Uses Hyperbolic Geometry

Published by under Neuroscience

The mammalian brain is an amazing information processor. Millions of years of evolutionary tinkering have produced network structures that are fast, efficient, and capable of extreme complexity. Neuroscientists are trying to understand that structure as much as possible, which is understandably complicated. But progress is steady.

A recent study illustrates how complex this research can get. The researchers were looking at the geometry of neuron activation in the part of the brain that remembers spatial information – the CA1 region of the hippocampus. This is the part of the brain that has place neurons – those that are activated by being in a specific location. They wanted to know how networks of overlapping place neurons grow as rats explore their environment. What they found was not surprising given prior research, but is extremely interesting.

Psychologically we tend to have a linear bias in how we think of information. This extends to distances as well. It seems that we don’t deal easily (at least not intuitively) with geometric or logarithmic scales. But often information is geometric. When it comes to the brain, information and physical space are related because neural information is stored in the physical connection of neurons to each other. This allows neuroscientists to look at how brain networks “map” physically to their function.

In the present study the neuroscientists looked at the activity in place neurons as rats explored their environment. They found that rats had to spend a minimum amount of time in a location before a place neuron would become “assigned” to that location (become activated by that location). As rats spent more time in a location, gathering more information, the number of place neurons increased. However, this increase was not linear – it was hyperbolic. Hyperbolic refers to negatively curved space, like an hourglass with the starting point at the center.
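For readers unfamiliar with negatively curved space, a small illustrative calculation (my own, not from the paper) using the Poincaré disk model – a standard way to represent hyperbolic geometry inside a unit circle – shows the key property: equal-sized steps cover rapidly growing hyperbolic distances as you move away from the center, so a hyperbolic map packs in far more “room” toward its edges than a flat, linear map could.

```python
import math

def poincare_distance(p, q):
    """Distance between two points in the Poincaré disk model of
    hyperbolic (negatively curved) space. Points are (x, y) with norm < 1."""
    px, py = p
    qx, qy = q
    diff2 = (px - qx) ** 2 + (py - qy) ** 2
    denom = (1 - (px ** 2 + py ** 2)) * (1 - (qx ** 2 + qy ** 2))
    return math.acosh(1 + 2 * diff2 / denom)

# The same Euclidean step of 0.09 covers roughly 13 times more hyperbolic
# distance near the edge of the disk than near the center:
near_center = poincare_distance((0.0, 0.0), (0.09, 0.0))  # ~0.18
near_edge = poincare_distance((0.90, 0.0), (0.99, 0.0))   # ~2.35
```

This exponential “stretching” toward the boundary is why a hyperbolic representation can keep absorbing new information – such as additional place neurons for a well-explored location – without the linear growth our intuition expects.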

Nov 10 2022

Facial Characteristics, Perception, and Personality

Published by under Neuroscience

A recent study asked subjects to give their overall impression of other people based entirely on a photograph of their face. In one group the political ideology of the person in the photograph was disclosed (and was sometimes true and sometimes not), and in another group the political ideology was not disclosed. The question the researchers were asking is whether thinking you know the political ideology of someone in a photo affects your subjective impression of them. Unsurprisingly, it did. Photos labeled with the subject’s own political ideology (conservative vs liberal) were rated as more likable, and this effect was stronger for subjects who feel a greater sense of threat from those of the other political ideology.

This question is part of a broader question about the relationship between facial characteristics and personality and our perception of them. We all experience first impressions – we meet someone new and form an overall impression of them. Are they nice, mean, threatening? But if you get to actually know the person you may find that your initial impression had no bearing on reality. The underlying question is interesting. Are there actual facial differences that correlate with any aspect of personality? First, what’s the plausibility of this notion and possible causes, if any?

The most straightforward assumption is that there is a genetic predisposition for some basic behavior, like aggression, and that these same genes (or very nearby genes that are likely to sort together) also determine facial development. This notion rests on a certain amount of biological determinism, which itself is not a popular idea among biologists. The idea is not impossible. There are genetic syndromes that include both personality types and facial features, but these are extreme outliers. For most people the signal-to-noise ratio is likely too low to be significant. The research bears this out – attempts at linking facial features with personality or criminality have largely failed, despite their popularity in the late 19th and early 20th centuries.

Nov 07 2022

AWARE-II Near Death Experience Study

The notion of near-death experiences (NDEs) has fascinated people for a long time. The notion is that some people report profound experiences after waking up from a cardiac arrest – their heart stopped, they received CPR, and they eventually recovered and lived to tell the tale. About 20% of people in this situation will report some unusual experience. Initial reporting on NDEs was done with more of a journalistic methodology than a scientific one – collecting reports from people and weaving those into a narrative. Of course the NDE narrative took on a life of its own, but eventually researchers started at least collecting some empirical, quantifiable data. The details of the reported NDEs are actually quite variable, and often culture-specific. There are some common elements, however, notably the sense of being out of one’s body or floating.

The most rigorous attempt so far to study NDEs was the AWARE study, which I reported on in 2014. Lead researcher Sam Parnia wanted to be the first to document that NDEs are a real-world experience, and not some “trick of the brain.” He failed to do this, however. The study looked at people who had a cardiac arrest, underwent CPR, and survived long enough to be interviewed. The study also included a novel element – cards placed on top of shelves in ERs around the country. These could only be seen from the vantage point of someone floating near the ceiling, and were meant to document that during the CPR itself an NDE experiencer was actually there and could see the physical card in their environment. The study also tried to match the details of the remembered experience with actual events that took place in the ER during the CPR.

You can read my original report for details, but the study was basically a bust. There were some methodological problems with the study, which was not well-controlled. They had trouble getting data from locations that had the cards in place, and ultimately had not a single example of a subject who saw a card. And out of 140 cases they were only able to match reported details with events in the ER during CPR in one case. Especially given that the details were fairly non-specific, and they only had 1 case out of 140, this sounds like random noise in the data.
