Archive for the 'Neuroscience' Category

Aug 22 2019

AI and Scaffolding Networks

A recent commentary in Nature Communications echoes, I think, a key understanding of animal intelligence, and therefore provides an important lesson for artificial intelligence (AI). The author, Anthony Zador, extends what has been an important paradigm shift in our approach to AI.

Early concepts of AI, as reflected in science fiction at least (which I know does not necessarily track with actual developments in the industry), held that the ultimate goal was to develop a general AI that could master tasks from the top down through abstract understanding – like humans. Actual developers of AI, however, quickly learned that this might not be the best approach, and that in any case it is decades away at least. I remember reading in the 1980s about approaching AI more from the ground up.

The first analogy I recall is that of walking – how do we program a robot to walk? We don’t need a human cortex to do this. Insects can walk. Also, much of the processing required to walk happens in the deeper, more primitive parts of our brain, not the more complex cortex. So maybe we should create the technology for a robot to walk by starting with the most basic algorithms, similar to those used by the simplest creatures, and then build up from there.

My memory, at least, is that this completely flipped my concept of how we were approaching AI. Don’t build a robot with general intelligence that can do anything and then teach it to walk. You don’t even build algorithms that can walk. You break walking down into its component parts, and then build algorithms that can master and combine each of those parts. This was reinforced by my later study of neuroscience. Yeah – that is exactly how our brains work. We have modules and networks that do very specific things, and they combine to produce more and more sophisticated behavior.
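This modular, ground-up approach is easy to sketch in code. The following is a toy illustration of the principle only – the behaviors, state representation, and `compose` helper are invented for the example, not taken from any actual robotics framework:

```python
from typing import Callable

# Each module does one very specific thing, like a simple neural circuit.
def shift_weight(state: dict) -> dict:
    """Move the body's weight onto the other leg."""
    state["weight_on"] = "left" if state["weight_on"] == "right" else "right"
    return state

def swing_leg(state: dict) -> dict:
    """Swing the unweighted leg forward by one step unit."""
    state["position"] += 1
    return state

def compose(*modules: Callable[[dict], dict]) -> Callable[[dict], dict]:
    """Combine simple modules into a more sophisticated behavior."""
    def combined(state: dict) -> dict:
        for module in modules:
            state = module(state)
        return state
    return combined

# "Walking" emerges from chaining the component behaviors.
step = compose(shift_weight, swing_leg)

state = {"weight_on": "left", "position": 0}
for _ in range(3):
    state = step(state)
print(state)  # → {'weight_on': 'right', 'position': 3}
```

The point is not the toy physics but the composition: each module is trivial on its own, and the more complex behavior comes from combining them – loosely analogous to how specialized brain networks combine.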

Continue Reading »

Aug 19 2019

Facts vs Stories

There is a common style of journalism, one you are almost certainly very familiar with, in which the report starts with a personal story, then delves into the facts at hand, often with reference to the framing story and others like it, and returns at the end to the original personal connection. This format is so common it’s a cliché, and often the desire to connect the actual new information to an emotional story takes over the reporting and undermines the facts.

This format reflects a more general phenomenon – that people are generally more interested in and influenced by a good narrative than by dry facts. Or are we? New research suggests that while the answer is still generally yes, there is some more nuance here (isn’t there always?). The researchers did three studies in which they compared the effects of strong vs weak facts presented either alone or embedded in a story. In the first two studies the information was about a fictitious new phone. The weak fact was that the phone could withstand a fall of 3 feet. The strong fact was that the phone could withstand a fall of 30 feet. What they found in both studies is that the weak fact was more persuasive when embedded in a story than when presented alone, while the strong fact was less persuasive.

They then did a third study about a fictitious flu medicine, and asked subjects if they would give their e-mail address for further information. People are generally reluctant to give away their e-mail address unless it’s worth it, so this was a good test of how persuasive the information was. When a strong fact about the medicine was given alone, 34% of the participants were willing to provide their e-mail. When embedded in a story, only 18% provided their e-mail.

So, what is responsible for this reversal of the normal effect that stories are generally more persuasive than dry facts? The authors suggest that stories may impair our ability to evaluate factual information. This is not unreasonable, and is suggested by other research as well. To a much greater extent than you might think, cognition is a zero-sum game. When you allocate resources to one task, those resources are taken away from other mental tasks (this basic process is called “interference” by psychologists). Further, adding complexity to brain processing, even if this leads to more sophisticated analysis of information, tends to slow down the whole process. Finally, parts of the brain can directly suppress the functioning of other parts of the brain. This inhibitory function is actually a critical part of how the brain works together.

Continue Reading »

Aug 13 2019

Weber’s Law

I confess I had never heard (or at least don’t remember ever hearing) of Weber’s Law (pronounced VAY-ber) until reading about it in this news item. It is the law of just noticeable differences, and it deals with the minimum difference in a stimulus necessary to notice. While the law is clearly established, and there are many hypotheses to explain the phenomenon, there has never been a way to test which hypothesis is correct. The news item relates to new evidence which may provide a mechanism.

Weber’s law applies to all sensory modalities – sight, sound, taste, smell, and tactile sense. For any sensory stimulus there is a minimum difference that a person can notice. For example, you might be visually comparing the length of two lines to determine which is longer, or holding two weights to determine which is heavier. In each case there is a minimum difference necessary to be able to notice. Experimentally, this means there is a relationship between the ratio of the difference and the probability of giving the correct answer.

So if you are trying to determine which light is brighter, an experiment may determine that for lights of 100 and 110 lumens there is a 75% chance of correctly detecting which light is brighter. What Weber’s law states is that once this relationship is determined, it holds true no matter what the absolute value of the stimulus is, as long as the ratio is the same. So for lights of 200 and 220 lumens, or 1000 and 1100, there would still be a 75% probability of being correct. The only thing that matters is the ratio.
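That ratio-only dependence can be illustrated numerically. This is a minimal sketch assuming a simple logistic psychometric model – the functional form and the sensitivity constant `k` are invented for illustration, not taken from the Weber’s-law literature:

```python
import math

def p_correct(i1: float, i2: float, k: float = 0.1) -> float:
    """Probability of correctly judging which stimulus is more intense.
    Because it depends only on log(i2 / i1), i.e. on the ratio of the
    two intensities, it obeys Weber's law by construction."""
    d = math.log(i2 / i1)
    return 1 / (1 + math.exp(-d / k))

# Same ratio, very different absolute intensities – same probability:
print(round(p_correct(100, 110), 3))    # → 0.722
print(round(p_correct(1000, 1100), 3))  # → 0.722
```

Any model built on the absolute difference (i2 − i1) instead of the ratio would give different answers for the two cases, which is exactly what Weber’s law rules out.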

As you might expect, there is a lot of nuance to the law, such as subtle variations in the math and differences between vertebrates and insects, etc., which I won’t get into. They are not important for the current discussion, but know that they exist.

Continue Reading »

Aug 06 2019

Video Game Violence

Recent mass shootings have once again fueled discussion about the role of video game violence (VGV) and aggressive behavior. This is an enduring controversy, which is a real scientific controversy (not just a political one) because the research is highly complex.

Part of that complexity is that there is not just one question – does VGV cause aggressive behavior? – there are many subquestions, and many ways to measure outcomes. Research can focus on whether or not VGV is correlated with aggressive attitudes, aggressive behavior, diminished prosocial attitudes or behavior, empathy towards the victims of violence, or normalizing aggressive or violent behavior. If there is a correlation, then research needs to tease apart what is cause and what is effect. Researchers also have to decide how to measure all of these things, and to consider demographic variables as well as duration and intensity of exposure and duration of any potential effects. Finally there is the issue of confounding factors, always an issue with psychological research – how do we establish the true lines of cause and effect?

Right now there appear to be two basic schools of thought. Anderson and colleagues champion the view that there is strong evidence not only for a correlation between VGV and aggressive behavior, but that experimental studies have shown VGV causes aggressive ideation and behavior, and reduces empathy and prosocial behavior. A 2018 meta-analysis shows that these correlations are indeed strong, and exist across experimental and observational studies. These effects are greatest for males and for whites, less so for Asians, and not significant for Hispanics.

The other school is championed by Ferguson and others, who argue that these results are spurious and due to poor research designs. Specifically, he argues that the effects are inflated by including measures of aggression that are too mild, and not ultimately meaningful. There is only an effect if you include things like aggressive language, but not if you restrict the definition of aggressive behavior to actual violence. Further, he argues that confounding factors are not adequately controlled for, and that when they are, the effect disappears.

Continue Reading »

Jul 18 2019

Neuralink To Begin Human Trials

I’m still trying to figure out if Elon Musk is a mad genius or a supervillain. Perhaps that’s a false dichotomy. Seriously, I do like his approach – he has billions of dollars lying around, so he decides that we need some specific technology in order to build the future, and he builds a company dedicated to developing that technology. Wherever he sees holes, he tries to fill them.

SpaceX has been, in my opinion, his most dramatic success. He has pioneered the technology of reusable rockets, and anyone who has seen one of his Falcon rockets landing vertically has to be impressed. Tesla cars are impressive as well, but from what I understand he still has to make the company profitable. I’m still skeptical about the hyperloop, but at least he’s trying. It all depends on how cheap he can make tunneling, and the real innovation may be in his Boring Company.

Not all of his companies involve travel. He also wants to change humans, in order to ultimately keep up with the AI he thinks we will inevitably create. In 2017 he tweeted, “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,” along with a picture declaring, “In the end, the machines will win.” The existential threat of AI is a separate question.

Now for most people, if you are worried about AI you talk about it with your friends and colleagues. Perhaps you have a blog where you can share your concerns with the world. But if you are Elon Musk you can start a 100 million dollar company designed to thwart the perceived threat. So that’s what he did.

Continue Reading »

Jul 02 2019

Making Mini-Brains from Stem Cells

A new report details the progress scientists have made in developing brain organoids from stem cells. They use human embryonic stem cells to culture neurons – brain cells. Lead author Hideya Sakaguchi describes the process:

“The team cultured the organoids for 70-100 days, dissociated them into single cells and then disseminated them into another culture dish. The disseminated cells created neuronal networks in a self-organized manner.”

Just by culturing individual neurons together, they spontaneously formed networks and some three-dimensional tissue structure, forming into layers similar to the layers seen in human cortex. Further, the networks of neurons demonstrated some coordinated firing. There was both spontaneous individual cell activity, as well as synchronized activity within networks of cells.

The result is not a brain, which is why it is called an organoid (often referred to as a mini-brain, but this is less technically accurate). What this demonstrates is the inherent property of human brain neurons to spontaneously form tissue structure and to form neural networks that are functional. The cells are essentially trying to self-organize into a brain. They cannot fully do this, however, because there is a huge piece missing – sensory input and the feedback from output.

A human brain, even an infant brain, contains more information by orders of magnitude than is contained in all the genes that are involved in neurological function. The genes are not a blueprint for a brain. Rather, the genes are a set of instructions, of behaviors, that if followed allow for the development of a fully formed brain. But that development requires more information – information from the rest of the body. This process continues after birth as babies develop their vision, hearing, ability to move, eventually to walk, socialization, and language. If deprived of stimulation in these areas, the relevant part of the brain will not develop.

Continue Reading »

Jun 27 2019

The Bystander Effect

Social psychology is the study of how people behave in social situations, so it deals with the complex interactions between personality, culture, and social pressures on how we behave and in turn are affected by each other. I took a social psychology course in college and it really opened my eyes. This was one of the first courses I took that challenged my assumptions in a profound way, because there is a disconnect between our assumptions about how people think and behave and how they actually do when objectively observed. In this way social psychology (and psychology in general) is an important pillar of scientific skepticism.

As an example, there is a recent study that uses CCTV to monitor violent incidents in three cities, Amsterdam, Lancaster, and Cape Town. So these are real-life events, not staged for the study. The researchers counted how many times people intervened in such incidents, such as someone being pummeled on the ground by an attacker. First, think how you would respond in such a situation. Now also think about how the average person would respond. What percentage of the time do you think a bystander intervened? Was it the same or different in the various cities, which differ in terms of their crime and safety? If individuals fail to respond, why?

Your answers to these questions probably say more about you and the culture you live in than reality. This is the meta-finding of social psychology. We often are incorrect in our assumptions about what other people think, how other people behave, and what motivates other people. We also judge ourselves by a different set of rules than we judge others (the fundamental attribution error). Research also finds that understanding this is extremely empowering, and this is also something I found fascinating about social psychology. This was the first time I can remember that a little bit of knowledge empowered me to take greater control of my actions, rather than ride passively down the currents of subconscious psychological forces.

I am deliberately putting the link to the study and the results below the fold. When you’re ready, take a look.

Continue Reading »

Jun 24 2019

Study on Visual Framing in the Presidential Debates

This week we will have the first primary debates of the presidential cycle, with two Democratic debates of the top 20 candidates (10 each night). A timely study was just published looking at the coverage of the different candidates in the 2016 primary debates of both parties. The results show a dramatic disparity in how different candidates were covered.

Unfortunately, the headline of the press release is misleading: Study Shows Visual Framing by Media in Debates Affects Public Perception. The study did not measure public perception, and therefore there is no basis to conclude anything about how the framing affected public perception. The study only quantified the coverage. But what they found was interesting.

They went frame by frame through the first two primary debates of both parties and calculated how much coverage each candidate had and what type – solo, split screen, side-by-side, multi-candidate shot, and audience reaction. This is what they found:

We likewise considered how much time the camera spent on a given candidate before cutting away by computing z-scores for each candidate’s mean camera fixation time (see Figure 3). This allowed us to see whether networks were visually priming the audience to differentially perceive the candidates as viable leaders. These data show that across the four debates, only Trump, specifically during CNN’s Republican Party debate, had substantially longer camera fixations (…) than the other candidates (… to 1.84). During this debate, Bush (…) was the only candidate besides Trump to have a positive z-score, providing modest support for our visual priming hypotheses concerning fixation time (H2). While for the Fox News debate, Cruz (…) and Huckabee (…) had substantially higher z-scores than the rest of the field, including Trump, their scores were well within the bounds of expectations. Likewise, on the Democratic side, neither CNN (… to 1.17) nor CBS (… to 0.89) gave a significant visual priming advantage to any candidate, although there were trends toward front-runners Clinton and Sanders having slightly longer than average fixation times during both debates.

Essentially, there was a lot of noise in the data, but only one significant spike above the noise – during the CNN debate Trump had significantly more camera time than the rest, with Bush also having greater camera time but not nearly as much as Trump. At the time they were the two front-runners in polling. Clinton and Sanders also had a trend towards more camera time in their debates, but not statistically significant.
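For readers unfamiliar with the statistic: a z-score expresses each candidate’s mean fixation time in standard deviations above or below the average of the field, which is what lets the authors compare camera time across candidates on a single scale. A minimal sketch with invented numbers (these are not the study’s values):

```python
import statistics

# Hypothetical mean camera-fixation times in seconds per candidate:
fixation = {"A": 12.0, "B": 8.5, "C": 7.8, "D": 8.0, "E": 7.7}

mean = statistics.mean(fixation.values())
sd = statistics.stdev(fixation.values())  # sample standard deviation

# z-score: standard deviations above/below the field's average
z = {name: (t - mean) / sd for name, t in fixation.items()}
print({name: round(score, 2) for name, score in z.items()})
# → {'A': 1.76, 'B': -0.17, 'C': -0.55, 'D': -0.44, 'E': -0.61}
```

In this made-up field, candidate A stands well above the pack (z ≈ 1.76) while everyone else has a negative z-score – the same qualitative pattern the study reports for Trump in the CNN debate.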

Continue Reading »

Jun 18 2019

Is Authenticity a Thing?

Authenticity is a tricky concept when it comes to people, and it is increasingly being challenged both in psychology and even with regard to physical objects (where it is the value, rather than the reality, of authenticity that is questioned). Writing for Scientific American, psychologist Scott Barry Kaufman deconstructs the psychological concept of authenticity nicely. But let’s start with a standard psychology definition of what this means:

Authenticity generally reflects the extent to which an individual’s core or true self is operative on a day-to-day basis. Psychologists characterize authenticity as multiple interrelated processes that have important implications for psychological functioning and well-being. Specifically, authenticity is expressed in the dynamic operation of four components: awareness (i.e., self-understanding), unbiased processing (i.e., objective self-evaluation), behavior (i.e., actions congruent with core needs, values, preferences), and relational orientation (i.e., sincerity within close relationships). Research findings indicate that each of these components relates to various aspects of healthy psychological and interpersonal adjustment.

My issue with this definition is that those components don’t necessarily add up to something greater than the sum of the parts. I understand the concept of unbiased processing, for example, but this still tells me nothing about how it leads to authenticity, and by extension what authenticity is. How is it different from just being psychologically healthy, as measured by more specific traits?

Kaufman reviews the research on authenticity and shows that really it’s just a rationalization for holding a favorably biased view of ourselves. People tend to think they are being authentic when they are acting on their virtues, being their best self, and also acting in ways that are congruent with societal expectations. The concept of authenticity is, in essence, used to manage one’s reputation. I am being authentic when doing things that other people will view positively, and not being my true self when I do things that will harm my reputation.

But as Kaufman points out – everything we do is a manifestation of some aspect of our true self. If you are acting in a way that is not congruent with your core values, you are still doing it for a reason that is part of your overall personality – that is part of your “true self.” If you are engaging in biased processing, or being insincere, these are part of who you are also – otherwise you wouldn’t be doing them.

Continue Reading »

May 06 2019

Detecting Lies in the Brain

It’s fairly common knowledge at this point that the polygraph test for detecting who is lying is not reliable enough to be used practically. Here is a good summary by the American Psychological Association (APA). The bottom line is that the entire idea of a lie-detector is problematic for various reasons. First, the underlying premises did not really emerge from psychological research, and have not been validated by research. The idea is that people will display physiological signs of stress when they are making an effort to be deceptive, or when confronted with incriminating information. However, the relationship between physiological signs and mental stress is too complex for any reliable test. There is no universal feature of lying that can be detected.

The polygraph uses two basic techniques. The first is the control question test (CQT) – you ask the person being examined control questions that do not relate to the crime in question, along with relevant questions that do. The idea is that they will react more to the relevant questions than to the control questions. The other method, the guilty knowledge test (GKT), is similar – mentioning random items along with one directly related to the crime may reveal guilty knowledge that only the perpetrator should have.

The idea sounds compelling, and it does work in that using these techniques results in a slight statistical advantage in determining who is lying and who isn’t. However, a small statistical advantage is all but worthless in practical application. There are too many false positives and false negatives to be useful. For any individual suspect, at the end of the test you still don’t know if they are lying or not.
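The reason a small statistical advantage is worthless in practice is largely a base-rate problem, and a quick calculation makes it concrete. The numbers below are invented for illustration – they are not from the APA summary or the polygraph literature:

```python
# Assumed (illustrative) performance figures for the test:
sensitivity = 0.75          # fraction of liars the test flags
false_positive_rate = 0.30  # fraction of truth-tellers it wrongly flags
base_rate = 0.10            # fraction of examinees actually lying

flagged_liars = sensitivity * base_rate
flagged_honest = false_positive_rate * (1 - base_rate)

# Positive predictive value: probability a flagged person is really lying
ppv = flagged_liars / (flagged_liars + flagged_honest)
print(round(ppv, 2))  # → 0.22
```

Even granting the test a real statistical edge, under these assumptions nearly four out of five people it flags are telling the truth – which is why, for any individual suspect, the result tells you very little.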

Part of the problem is that people are complex and variable. Not everyone responds the same way to stress, or to the situations provoked in the testing. But the problem is worsened by the existence of effective mental countermeasures. There are two basic countermeasures that have been shown to be effective – further lowering the statistical effect of the polygraph. The first is to assign mental significance to control items or questions, thereby reacting similarly to the control and the relevant items. The second is to create mental distance from all the items, including the relevant ones. Focus on something else – the sound of the words, their precise dictionary meaning, or imagine a famous character saying them. If the statements are in writing, you can focus on the color of the ink, the font, or other superficial aspects.

These countermeasures work. They successfully blur any difference between control and relevant items.

Continue Reading »
