Archive for the 'Neuroscience' Category

Jan 06 2023

Brain Uses Hyperbolic Geometry

Published under Neuroscience

The mammalian brain is an amazing information processor. Millions of years of evolutionary tinkering have produced network structures that are fast, efficient, and capable of extreme complexity. Neuroscientists are trying to understand that structure in as much detail as possible, which is understandably complicated. But progress is steady.

A recent study illustrates how complex this research can get. The researchers were looking at the geometry of neuron activation in the part of the brain that remembers spatial information – the CA1 region of the hippocampus. This is the part of the brain that has place neurons – those that are activated by being in a specific location. They wanted to know how networks of overlapping place neurons grow as rats explore their environment. What they found was not surprising given prior research, but it is extremely interesting.

Psychologically we tend to have a linear bias in how we think of information. This extends to distances as well. It seems that we don’t deal easily (at least not intuitively) with geometric or logarithmic scales. But often information is geometric. When it comes to the brain, information and physical space are related, because neural information is stored in the physical connections of neurons to each other. This allows neuroscientists to look at how brain networks “map” physically to their function.

In the present study the neuroscientists looked at the activity in place neurons as rats explored their environment. They found that rats had to spend a minimum amount of time in a location before a place neuron would become “assigned” to that location (become activated by that location). As rats spent more time in a location, gathering more information, the number of place neurons increased. However, this increase was not linear but hyperbolic. Hyperbolic refers to negatively curved space, like an hourglass with the starting point at the center.
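To give a concrete sense of what negative curvature means (a standard geometric fact, not a result from this study): in flat space the circumference of a circle grows linearly with its radius, C(r) = 2πr, while in the hyperbolic plane it grows exponentially, C(r) = 2π sinh(r) ≈ π e^r for large r. Each extra unit of radius opens up vastly more room, which hints at why hyperbolic maps can pack in rapidly growing amounts of information efficiently.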

Continue Reading »

No responses yet

Nov 10 2022

Facial Characteristic, Perception, and Personality

Published under Neuroscience

A recent study asked subjects to give their overall impression of other people based entirely on a photograph of their face. In one group the political ideology of the person in the photograph was disclosed (and was sometimes true and sometimes not), and in another group the political ideology was not disclosed. The question the researchers were asking is whether thinking you know the political ideology of someone in a photo affects your subjective impression of them. Unsurprisingly, it did. Photos labeled as sharing the subject’s own political ideology (conservative vs liberal) were rated as more likable, and this effect was stronger for subjects who had a higher sense of threat from those of the other political ideology.

This question is part of a broader question about the relationship between facial characteristics and personality, and our perception of that relationship. We all experience first impressions – we meet someone new and form an overall impression of them. Are they nice, mean, threatening? But if you get to actually know the person, you may find that your initial impression had no bearing on reality. The underlying question is interesting: are there actual facial differences that correlate with any aspect of personality? First, what is the plausibility of this notion, and what are the possible causes, if any?

The most straightforward assumption is that there is a genetic predisposition for some basic behavior, like aggression, and that these same genes (or very nearby genes that are likely to sort together) also determine facial development. This notion rests on a certain amount of biological determinism, which itself is not a popular idea among biologists. The idea is not impossible – there are genetic syndromes that include both personality types and facial features – but these are extreme outliers. For most people the signal-to-noise ratio is likely too small to be significant. The research bears this out: attempts at linking facial features with personality or criminality have largely failed, despite their popularity in the late 19th and early 20th centuries.

Continue Reading »

No responses yet

Nov 07 2022

AWARE-II Near Death Experience Study

The notion of near-death experiences (NDEs) has fascinated people for a long time. Some people report profound experiences after waking up from a cardiac arrest – their heart stopped, they received CPR, and they eventually recovered and lived to tell the tale. About 20% of people in this situation will report some unusual experience. Initial reporting on NDEs was done with more of a journalistic methodology than a scientific one – collecting reports from people and weaving those into a narrative. Of course the NDE narrative took on a life of its own, but eventually researchers started at least collecting some empirical, quantifiable data. The details of the reported NDEs are actually quite variable, and often culture-specific. There are some common elements, however, notably the sense of being out of one’s body or floating.

The most rigorous attempt so far to study NDEs was the AWARE study, which I reported on in 2014. Lead researcher Sam Parnia wanted to be the first to document that NDEs are a real-world experience, and not some “trick of the brain.” He failed to do this, however. The study looked at people who had a cardiac arrest, underwent CPR, and survived long enough to be interviewed. The study also included a novel element – cards placed on top of shelves in ERs around the country. These could only be seen from the vantage point of someone floating near the ceiling, and were meant to document that during the CPR itself an NDE experiencer was actually there and could see the physical card in their environment. The study also tried to match the details of the remembered experience with actual events that took place in the ER during the CPR.

You can read my original report for details, but the study was basically a bust. It was not well controlled, and there were other methodological problems. The researchers had trouble getting data from locations that had the cards in place, and ultimately did not have a single example of a subject who saw a card. Out of 140 cases, they were able to match reported details with events in the ER during CPR in only one case. Given that the details were fairly non-specific, and that they had only 1 case out of 140, this sounds like random noise in the data.

Continue Reading »

No responses yet

Oct 14 2022

Brain Cells Playing Pong

Published under Neuroscience

This is definitely the neuroscience news of the week. It shows how you can take an incremental scientific advance and hype it into a “new science” and a breakthrough, and the media will generally just eat it up. Did scientists teach a clump of brain cells to play the video game Pong? Well, yes and no. The actual science here is fascinating, but I fear it is generally getting lost in the hype.

This is what the researchers actually did – they cultured mouse or human neurons derived from stem cells onto a multi-electrode array (MEA). The MEA can both read and stimulate the neurons. Neurons spontaneously network together, so that’s what these neurons did. The researchers then stimulated the two-dimensional network of neurons either on the left or the right and at different frequencies, and recorded the network’s response. If the network responded in a way the scientists deemed correct, it was “rewarded” with a predictable further stimulation. If its response was deemed incorrect, it was “punished” with random stimulation. Over time the network learned to produce the desired response, and its learning accelerated. Further, human neurons learned faster than mouse neurons.

Why did this happen? That is what researchers are trying to figure out, but the authors speculate that predictable stimulation allows the neurons to make more stable connections, while random stimulation is disruptive. Predictable feedback therefore tends to reinforce whatever network pattern produces it. In this way the network is behaving like a simple AI algorithm.
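To make that loop concrete, here is a minimal toy simulation (my own sketch – the real experiment’s electrode layout, stimulation patterns, and readout are far more involved). The cultured network is reduced to a single number, the probability of producing the response the experimenters want; “reward” reinforces it, “punishment” scrambles it back toward chance:

import random

random.seed(0)

p_correct = 0.5      # the "network": probability of a correct response; starts at chance
REWARD_GAIN = 0.05   # predictable stimulation stabilizes the connectivity that just worked
PUNISH_GAIN = 0.05   # random stimulation disrupts it, pushing performance back toward chance

for trial in range(2000):
    # Each trial: stimulate the array (the game state), then read the response.
    if random.random() < p_correct:
        p_correct += REWARD_GAIN * (1.0 - p_correct)  # "reward": reinforce
    else:
        p_correct += PUNISH_GAIN * (0.5 - p_correct)  # "punish": scramble toward chance

print(f"hit rate after training: {p_correct:.2f}")

The point of the sketch is the asymmetry: correct responses stabilize the configuration that produced them, while errors randomize it, so the system drifts toward whatever earns predictable feedback.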

Continue Reading »

No responses yet

Oct 06 2022

3D Printing Implantable Computer Chips

This is definitely a “you got chocolate in my peanut butter” type of advance, because it combines two emerging technologies to create a potentially significant advance. I have been writing about brain-machine interfaces (or brain-computer interfaces, BCIs) for years. My take is that the important proofs of concept have already been established, and now all we need is steady incremental advances in the technology. Well – here is one of those advances.

Carnegie Mellon University researchers have developed a computer chip for BCI, called a microelectrode array (MEA), using advanced 3D printing technology. The MEA looks like a regular computer chip, except that it has thin pins that are electrodes, which can read electrical signals from brain tissue. MEAs are inserted into the brain with the pins stuck into brain tissue. The pins are thin enough to cause minimal damage. The MEA can then read the brain activity where it is placed, either for diagnostic purposes or to allow for control of a computer that is connected to the chip (yes, you need wires coming out of the skull). You can also stimulate the brain through the electrodes. MEAs are mostly used for research in animals and humans. They can generally be left in the brain for about one year.
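As a loose illustration of the “read” side (my own toy example – real recording pipelines add amplification, filtering, and spike sorting), the raw product of an MEA is just many channels of voltage, and spikes are commonly picked out as dips below roughly 4-5 times each channel’s noise level:

import numpy as np

rng = np.random.default_rng(42)

# Fake recording: 64 electrode channels, 1 second sampled at 30 kHz,
# with ~10 microvolts of baseline noise (made-up but plausible numbers).
n_channels, n_samples = 64, 30_000
voltages_uv = rng.normal(0.0, 10.0, size=(n_channels, n_samples))

# Plant a fake unit on channel 3: brief negative-going ~80 uV deflections,
# the shape extracellular spikes typically take on a nearby electrode.
voltages_uv[3, rng.integers(0, n_samples, size=120)] -= 80.0

# Detection heuristic: count a spike when the trace dips below ~4.5x the noise SD.
noise_sd = voltages_uv.std(axis=1, keepdims=True)
crossings = (voltages_uv < -4.5 * noise_sd).sum(axis=1)

print(f"busiest channel: {crossings.argmax()} with {crossings.max()} spike events")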

One MEA in common use is the Utah array, so called because it was developed at the University of Utah; it was patented in 1993, so these devices have been in use for decades. How much of an advance is the new MEA design? There are several advantages, which mostly stem from the fact that these MEAs can be printed using an advanced 3D printing technology called Aerosol Jet 3D Printing. This allows for printing at the nano-scale using a variety of materials, including those needed to make MEAs. Using this technology provides three main advantages.

Continue Reading »

No responses yet

Sep 26 2022

The AI Renaissance

We appear to be in the middle of an explosion of AI (artificial intelligence) applications and ability. I had the opportunity to chat with an AI expert, Mark Ho, about the driving forces behind this rapid increase in AI power. Mark is a cognitive scientist who studies how AIs work through and solve problems, and compares that to how humans solve problems. I was interviewing him for an SGU episode that will drop in December. The conversation was far-ranging, but I did want to discuss this one question – why are AIs getting so much more powerful in recent years?

First let me define what we mean by AI – this is not self-aware, conscious computer code. I am referring to what may be called “narrow” AI, such as deep learning neural networks that can do specific things really well, like mimic a human conversation, generate art images based on natural-language prompts, drive a car, or beat the world champion in Go. The HAL 9000 version of a sentient computer can be referred to as artificial general intelligence, or AGI. But narrow AI does not really think, and it does not understand in a human sense. For the rest of this article, when I refer to “AI” I am referring to the narrow type.

In order to understand why AI is getting more powerful we have to understand how current AI works. A full description would take a book, but let me just describe one basic way that AI algorithms can work. Neural nets, for example, are networks of nodes that also act as gates in a feed-forward design (they pass information in one direction). Each node receives information from the nodes in the previous layer, gives that information different weights, and, if the weighted input exceeds a set threshold, passes a signal along to the next layer of nodes in the network. The parameters (weights and thresholds) can be tuned to affect how the network processes information. These networks can be used for deep machine learning, which “trains” the network on specific data. To do this there needs to be an output that is either right or wrong, and that result is fed back into the network, which then tweaks the parameters. The goal is for the network to “learn” how it needs to process information by essentially doing millions of trials, tweaking the parameters each time and evolving the network in the direction of more and more accurate output.
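As a concrete illustration, here is a minimal sketch of that tune-and-feed-back loop in Python (my own toy example, not anything from the interview): a single node with two weights and a threshold learns the logical AND function by nudging its parameters after every right-or-wrong answer.

import numpy as np

# Training data: the logical AND function, a stand-in for "right or wrong" output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

rng = np.random.default_rng(0)
weights = rng.normal(size=2)  # one weight per input
threshold = 0.0
lr = 0.1  # learning rate: how hard each trial tweaks the parameters

for epoch in range(50):
    for inputs, target in zip(X, y):
        # Feed forward: the node fires only if the weighted input exceeds the threshold.
        output = 1 if inputs @ weights > threshold else 0
        # Feedback: a right answer (error 0) leaves the parameters alone; a wrong one nudges them.
        error = target - output
        weights = weights + lr * error * inputs
        threshold = threshold - lr * error

print([1 if x @ weights > threshold else 0 for x in X])  # -> [0, 0, 0, 1]

Real deep learning replaces this single node with millions and the simple nudge with gradient descent, but the trial-tweak-repeat structure is the same.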

So what is it about this system that is getting better? What others have told me, and what Mark confirmed, is that the underlying math and basic principles are essentially the same as 50 years ago. The math is also not that complicated. The basic tools are the same, so what is it that is getting better? One critical component is the underlying hardware, which is getting much faster and more powerful – there is just a lot more raw power to run lots of training trials. One interesting side point is that computer scientists figured out that graphics cards (graphics processing units, or GPUs), the hardware used to process the images that go to your computer screen, happen to work really well for AI algorithms. GPUs have become incredibly powerful, mainly because of the gaming industry. This, by the way, is why graphics cards have become so expensive recently – all those bitcoin miners are using GPUs to run their algorithms. (Although I recently read they are moving in the direction of application-specific integrated circuits.)

Continue Reading »

No responses yet

Sep 13 2022

Children Are Natural Skeptics

There is ongoing debate as to what extent a skeptical outlook in humans is natural vs learned. There is no simple answer to this question, and human psychology is complex and multifaceted. People do demonstrate natural skepticism toward many claims, and yet seem to accept other claims with abject gullibility. For adults it can also be difficult to tease out how much skepticism is learned vs innate.

This is where developmental psychology comes in. We can examine children of various ages to see how they behave, and this may provide a window into natural human behavior. Of course, even young children are not free from cultural influences, but such studies can at least provide some interesting information. A recent study looked at two related questions – do children (ages 4-7) accept surprising claims from adults, and how do they react to those claims? A surprising claim is one that contradicts common knowledge that even a 4-year-old should know.

In one experiment, for example, an adult showed the children a rock and a sponge and asked them whether the rock was soft or hard. The children all believed the rock was hard. The adult then either told them that the rock was hard, or that the rock was soft (or, in one iteration, that the rock was softer than the sponge). When the adult confirmed the children’s beliefs, they continued in their belief. When the adult contradicted their belief, many children modified their belief. The adult then left the room under a pretense, and the children were observed through video. Unsurprisingly, they generally tested the adult’s surprising claims through direct exploration.

This is not surprising – children generally like to explore and to touch things. However, the 6-7 year-olds engaged in (or, during online versions of the testing, proposed) more appropriate and efficient methods of testing surprising claims than the 4-5 year-olds. For example, they wanted to directly compare the hardness of the sponge vs the rock.

Continue Reading »

No responses yet

Sep 02 2022

Algorithms Still Reinforce Echo Chambers

Why do societies collapse? This is an interesting question, and as you might imagine the answer is complex. There are multiple internal and external reasons, but a core feature seems to be a combination of factors simultaneously at work – a crisis that the society failed to deal with adequately because of dysfunctional institutions and political infrastructure. All societies face challenges, but successful ones solve them, or at least make significant adjustments. There are also multiple ways to define “collapse”, which does not have to involve complete extinction. We can also add political or institutional collapse, where, for example, a thriving democracy collapses into a dictatorship.

There are many people concerned that America is facing a real threat that could collapse our democracy. The question is – do we have the institutional vigor to make the appropriate adjustments to survive these challenges? Sometimes, by the time you recognize a serious threat it’s too late. At other times, the true causes of the threat are not recognized (at least not by a majority) and therefore the solutions are also missed. So the question is, to the extent that American democracy is under threat, what are the true underlying causes?

This is obviously a complex question that I am not going to be able to adequately address in one blog post. I would like to suggest, however, that social media algorithms are at least one factor contributing to the destabilizing of democracy. It would be ironic if one of the greatest democracies in world history were brought down in part by YouTube algorithms. But this is not implausible.

Continue Reading »

No responses yet

Aug 23 2022

Do We Need a New Theory of Decision Making?

Published under Neuroscience

How people make decisions has been an intense area of study from multiple angles, including various disciplines within psychology and economics. Here is a fascinating article that provides some insight into the state of the science addressing this broad question. It is framed as a meta-question – do we have the right underlying model, one that properly ties together all the various aspects of human decision-making? It is not a systematic review, and really just addresses one key concept, but I think it helps frame the issue.

The title reflects the author’s (Jason Collins) approach – “We don’t have a hundred biases, we have the wrong model.” The article is worth a careful read or two if you are interested in this topic, but here’s my attempt at a summary with some added thoughts. As with many scientific phenomena, we can divide the approach to human decision-making into at least two levels: describing what people do, and an underlying theory (or model) as to why they behave that way. Collins is coming at this mostly from a behavioral economics point of view, which starts with the “rational actor” model – the notion that people generally make rational decisions in their own self-interest. This model also includes the premises that individuals have the computational mental power to arrive at the optimal decision, and the willpower to carry it out. When research shows that people deviate from a pure rational actor model of behavior, those deviations are deemed “biases”. I’ve discussed many such biases on this blog, and hundreds have been identified – risk aversion, sunk cost, omission bias, left-most digit bias, and others. It’s also recognized that people do not have unlimited computational power or willpower.
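To make the “deviations become biases” point concrete, here is a toy example (mine, not from Collins’ article) of one such patch – loss aversion. A pure rational actor maximizes expected value and takes any positive-expectation bet; bolting on a loss-aversion coefficient, in the style of prospect theory, flips the decision:

# Gamble: 50% chance to win $110, 50% chance to lose $100.
p_win, win, lose = 0.5, 110.0, -100.0

# Rational actor model: maximize expected value.
expected_value = p_win * win + (1 - p_win) * lose  # +5.0, so accept the bet

# Patched model: losses loom larger than gains (loss aversion).
LAMBDA = 2.25  # roughly Kahneman and Tversky's estimated loss-aversion coefficient

def value(x):
    return x if x >= 0 else LAMBDA * x

subjective_value = p_win * value(win) + (1 - p_win) * value(lose)  # -57.5, so decline

print(f"expected value {expected_value:+.1f} -> accept")
print(f"loss-averse value {subjective_value:+.1f} -> decline")

Each documented bias adds another such patch onto the underlying model, which is exactly the kind of accumulation Collins is complaining about.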

Collins likens this situation to the Earth-centric model of the universe. Geocentrism was an underlying model of how the universe worked, but it did not match observations of the actual universe, so astronomers introduced more and more tweaks and complexities to explain the deviations. Perhaps, Collins argues, we are still in the “geocentrism” era of behavioral psychology, and we need a new underlying model that is more elegant, more accurate, and has more predictive power – a heliocentrism for human decision-making. He acknowledges that human behavior is too complex and multifaceted to follow a model as simple and elegant as, say, Kepler’s laws of planetary motion, but perhaps we can do better than the rational actor model tweaked with many biases to explain each deviation.

Continue Reading »

No responses yet

Aug 08 2022

The Psychology of FOMO

Published under Neuroscience

One of the many unintended consequences of social media is what is popularly referred to as FOMO – fear of missing out. People see all the wonderful things other people are doing and buying on their social media feeds, and fear that they are missing out on the good life, or the latest trend, or perhaps some investment opportunity. This is the social media equivalent of “keeping up with the Joneses”. FOMO results from a basic human psychological tendency – to gauge our own happiness by comparing ourselves to some relative standard, whether that’s our neighbors, our social group, or what we see on TV or on people’s Facebook pages.

This phenomenon also interacts with another: we gauge our happiness relative to our own current state, meaning that we habituate to our current situation. Functionally, this means that if we want to remain happy we constantly need more – more than we have now, and more than other people have. The habituation phenomenon was humorously depicted in the video game Portal 2 (an excellent game, highly recommended if you like video games). The main antagonist is an AI that is programmed to run the player through various testing scenarios. Each time the player completes a test, the AI gets the silicon version of a dose of dopamine, but the digital nirvana is short-lived. The AI rapidly habituates to this feedback – each reward is shorter and less intense – so it has to test faster and harder to maintain the good feeling.
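As a toy model of that habituation (my own sketch, not anything from the game’s code or the psychology literature): let the felt reward be the raw stimulus minus a baseline that adapts toward recent experience, so the same dose feels weaker every time.

ADAPT_RATE = 0.5  # how quickly expectations catch up to experience
baseline = 0.0
stimulus = 10.0   # the same "dose" delivered on every trial

for trial in range(1, 6):
    felt = stimulus - baseline                      # what the reward feels like
    baseline += ADAPT_RATE * (stimulus - baseline)  # expectations drift upward
    print(f"trial {trial}: felt reward = {felt:.2f}")

This prints 10.00, 5.00, 2.50, 1.25, 0.62 – the same stimulus with a fading kick. The only way to keep the felt reward up is to keep escalating the stimulus: the hedonic treadmill in five lines.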

This is essentially how humans function as well. We are never content; we cannot remain happy by standing still. We need whatever other people have, and we need more than we currently have. This lines up with research into happiness. Making more money does make people happier, up to the level where basic needs and security are met (in the US this is now about $75k per year). Some researchers frame this not as money making people happy, but rather as a lack of money to meet basic needs being a source of stress and unhappiness. Beyond this basic level, increasing income does not correlate with happiness – whether you make 75 thousand a year or 75 million a year does not matter. Further, everyone thinks that they would be happy if they just made 20% more than they currently make, regardless of how much that is. We habituate to our current situation and then think we need a little more to be happy.

Continue Reading »

No responses yet

Next »