Archive for the 'Neuroscience' Category

Jun 10 2021

Brain Connections in Aphantasia

There is definitely something to be said for the neurodiversity perspective – when it comes to brain function there is a wide range of what can be considered healthy. Not all differences should be looked at through the lens of pathology or dysfunction. Some brains may be more typical than others, but that does not mean objectively “normal”, better, or healthier. Like any valid concept, it can be taken too far. There are conditions that can reasonably be considered brain disorders causing objective dysfunction. But the scope of healthy variation is likely far broader than many people assume.

Part of this concept is that brain organization and function involve many trade-offs. To some extent this is a simple matter of finite brain resources being allocated to specific abilities – increase one and, by necessity, another has to diminish. Also, different functions can be at cross-purposes. Extraverts may excel in social situations, but introverts are better able to focus their attention inward to accomplish certain tasks.

In light of this, how should we view the phenomenon of aphantasia, a relative inability to summon a mental image? Like most neurological functions, the ability to form an internal mental image varies along a spectrum. At one extreme are those with a hyperability to recall detailed mental images. At the other are those who may completely lack this ability. Most people fall somewhere in the middle. The phenomenon of aphantasia was first described in the 1880s, then mostly forgotten for about a century, and now there is renewed interest, partly due to our increased ability to image brain function.

A new study does just that, looking at people across the phantasia spectrum to see how their brains differ. Using fMRI, the researchers scanned the brains of those with aphantasia, hyperphantasia, and average ability. They found that in neurotypical subjects there was a robust connection between the visual cortex (which becomes active when imagining an image) and the frontal cortex, which is involved in attention and decision-making. That makes sense – this connection allows us to direct our attention inwardly to our visual cortex, to activate specific stored images there. Subjects with aphantasia had a relative lack of these connections. While this is a simple model, it makes perfect sense.
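
What “connection” means here is worth unpacking: in resting-state fMRI, connectivity between two regions is typically estimated as the correlation between their activity over time. The sketch below illustrates that generic computation on simulated data – it is not the authors’ actual pipeline, and the variable names and numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated BOLD time series for two regions (placeholder data,
# standing in for preprocessed fMRI signals).
n_timepoints = 200
shared_drive = rng.standard_normal(n_timepoints)        # common fluctuation
visual_cortex = shared_drive + 0.8 * rng.standard_normal(n_timepoints)
frontal_cortex = shared_drive + 0.8 * rng.standard_normal(n_timepoints)

# Functional connectivity is commonly summarized as the Pearson
# correlation between the two regions' time series.
connectivity = np.corrcoef(visual_cortex, frontal_cortex)[0, 1]
print(f"visual-frontal functional connectivity: {connectivity:.2f}")

# In a study like this, a lower value in the aphantasia group would be
# read as a relatively weaker visual-frontal connection.
```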

Continue Reading »

Jun 04 2021

How Common Are BS Jobs?

Douglas Adams had a talent for irony. In the Hitchhiker’s Guide series he told the tale of a civilization that tried to improve itself by tricking everyone with a useless job into taking a rocket trip to another world (actually to nowhere). For example, one of the discarded people’s jobs was to clean phones. That’s it – they were a phone cleaner. That civilization later collapsed due to a virulent disease contracted from a dirty telephone.

Part of Adams’ humor was taking reality and then pushing it to the absurd, but that core of reality gave his humor more heft. We may not have phone cleaners, but it does seem that certain jobs are less useful than others. Of course there is a certain amount of subjectivity and value judgements here, but there are some jobs that even the people in them judge to be without purpose. The concept of “bullshit jobs” was proposed by anthropologist David Graeber. In his book Bullshit Jobs, he claims that 20-50% of people are in BS jobs, that this number is increasing over time, that BS jobs are concentrated in certain professions, and that such jobs are psychologically unhealthy. New research finds that he was correct in one out of four of these claims.

While Graeber was bringing attention to a real issue – the psychological effects of being in a job that you yourself feel is of no value – when it came to the magnitude of this issue he did not have hard data. He was largely making inferences. This did lead to mixed reviews of his work at the time, with some reviewers finding his arguments often labored. The new research is an extensive survey of workers in Europe between 2005 and 2015, with over 30,000 responses. Since, by his own definition, a BS job is one that even the person in it feels is worthless, the survey relied upon self-report of whether one’s job had value. Those who responded “rarely or never” to the statement, “I have the feeling of doing useful work,” were deemed to have a BS job. The total percentage of people in this category was 4.8%. That’s still about one in 20 people, but a far cry from the 20-50% Graeber claimed.

Continue Reading »

May 21 2021

The Neuroscience of Robotic Augmentation

Imagine having an extra arm, or an extra thumb on one hand, or even a tail, and imagine that it felt like a natural part of your body and you could control it easily and dexterously. How plausible is this type of robotic augmentation? Could “Doc Ock” really exist?

I have been following the science of brain-machine interface (BMI) for years, and the research consistently shows that such augmentation is neurologically possible. There is still a lot of research to be done, and the ultimate limits of this technology will be discovered when real-world use becomes common. But the early signs are very good. Brain plasticity seems to be sufficient to allow for incorporation of robotic feedback and motor control into our existing neural networks. But there are still questions about how complete this incorporation can be, and what other neurological effects might result.

A new study further explores some of these questions. The researchers studied 20 participants who were fitted with a “third thumb” opposite their natural thumb on one hand. Each thumb was customized and 3D printed, and could be controlled with pressure sensors under the wearer’s toes. The subjects quickly learned how to control the thumb, and could soon do complex tasks, even while distracted or blindfolded. They further reported that over time the thumb felt increasingly like part of their body. They used the thumb, even at home, for 2-6 hours each day over five days. (Ten control subjects wore an inactive thumb.)

They used fMRI to scan the subjects at the beginning and end of their training. What they found was that subjects changed the way they used the muscles of the hand with the third thumb, in order to accommodate the extra digit. There were also two effects on the motor cortex representing the hand with the extra thumb. First, at baseline each finger, when moved independently, produced a distinct pattern of motor cortex activation; after training, these patterns became less distinct. The researchers refer to this as a “mild collapse of the augmented hand’s motor representation.” Second, there was a decrease in what is called “kinematic synergy.”
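
To picture what “less distinct” means, imagine representing each finger’s activation as a vector of voxel responses and summarizing how much those vectors overlap, for example with their average pairwise correlation. This is only an illustrative sketch with made-up numbers, not the analysis the researchers actually used.

```python
import numpy as np
from itertools import combinations

def mean_pairwise_correlation(patterns):
    """Average correlation between per-finger activation vectors.

    Higher values mean the fingers' cortical patterns overlap more,
    i.e. they are less distinct from one another."""
    pairs = combinations(range(len(patterns)), 2)
    return np.mean([np.corrcoef(patterns[i], patterns[j])[0, 1] for i, j in pairs])

rng = np.random.default_rng(1)
n_voxels = 100

# Hypothetical activation patterns for five fingers (placeholder data).
baseline = [rng.standard_normal(n_voxels) for _ in range(5)]

# After training, imagine each finger's pattern drifting toward a shared component.
shared = rng.standard_normal(n_voxels)
post_training = [0.6 * p + 0.4 * shared for p in baseline]

print(f"baseline overlap:      {mean_pairwise_correlation(baseline):.2f}")
print(f"post-training overlap: {mean_pairwise_correlation(post_training):.2f}")
```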

Continue Reading »

May 18 2021

Bullshit and Intelligence

The term “bullshitting” (in addition to its colloquial use) is a technical psychological term meaning “communication characterised by an intent to be convincing or impressive without concern for truth.” This is not the same as lying, in which one knows that what one is saying is false. Bullshitters are simply indifferent to whether or not what they say is true. Their speech is optimized for sounding impressive, not for accuracy. I have previously discussed research showing that people who are more receptive to “pseudoprofound bullshit” are also more gullible in their evaluation of fake news and false claims. Pseudoprofound bullshit consists of statements that superficially sound wise but are actually vacuous, and operationally, for these studies, they are generated randomly, such as “Innocence gives rise to subjective chaos.”
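
To see how “generated randomly” works in practice, such statements can be assembled by slotting vague, profound-sounding words into a grammatical template. Here is a minimal sketch of that idea – the word lists are my own placeholders, not the actual stimuli from the research.

```python
import random

# Placeholder vocabulary of profound-sounding but vague words.
abstract_nouns = ["innocence", "wholeness", "attention", "stillness", "potential"]
verbs = ["gives rise to", "transforms", "is inherent in", "unfolds through"]
objects = ["subjective chaos", "hidden meaning", "infinite phenomena", "the unknowable"]

def pseudoprofound_statement(rng=random):
    """Assemble a grammatical but vacuous sentence from random parts."""
    return f"{rng.choice(abstract_nouns).capitalize()} {rng.choice(verbs)} {rng.choice(objects)}."

# Print a few randomly generated "profound" statements.
for _ in range(3):
    print(pseudoprofound_statement())
```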

A new study extends this bullshit research further by looking at the measured and perceived intelligence of subjects, correlated with their ability to generate bullshit and their receptivity towards it. The study was designed to test an evolutionary hypothesis that intelligence evolved largely to provide social skill. Humans are an intensely social species, and the ability to navigate a complex social network requires cognitive skill. Therefore, the authors hypothesize, if this is true then the ability to bullshit (used here as a marker for social skill) should correlate with intelligence. While the results of the study, which I will get to shortly, are interesting, I think we have to recognize that it is horrifically difficult to make such evolutionary statements of cause and effect.

Cognitive ability is so multifaceted that boiling down selective pressures to any one factor is essentially impossible. At best we can say that the ability to sound confident and convincing is a social skill that would provide one type of advantage to a social species. But we also have to recognize that individuals may pursue many different strategies favoring different attributes. Further, other personality characteristics, having nothing to do with intelligence, could have a great influence on the willingness and ability to bullshit. And finally, intelligence bestows so many general advantages that could provide selective reinforcement that, again, it becomes problematic at best to isolate one factor as dominant. While I found the results of this study interesting, there is nothing here that is incompatible with the interpretation that these are all epiphenomena, not primary selective pressures.

Continue Reading »

May 13 2021

Communicating Through Handwriting with Thought

We have another incremental advance with brain-machine interface technology, and one with practical applications. A recent study (by Krishna Shenoy, a Howard Hughes Medical Institute investigator at Stanford University and colleagues) demonstrated communication with thought alone at a rate of 15 words (90 characters) per minute, which is the fastest to date. This is also about as fast as the average person texts on their phone, and so is a reasonably practical speed for routine communication.

I have been following this technology here for years. The idea is to connect electrodes to the brain – either on the scalp, on the brain surface, inside blood vessels close to the brain, or even deep inside the brain – in order to read electrical signals generated by brain activity. Computer software then reads these signals and learns to interpret them. The subject also undergoes a training period where they learn to control their thoughts in such a way as to control something connected to their brain’s output. This could mean moving a cursor on a computer screen, or controlling a robotic arm.
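
At its core, the “learns to interpret them” step is a pattern-classification problem: map a vector of recorded features (say, activity levels from each electrode) to an intended action. The sketch below is a deliberately simple nearest-template decoder on simulated data, just to illustrate the idea; real BMI decoders are far more sophisticated, and everything here (electrode counts, intents, noise levels) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_electrodes = 32
intents = ["left", "right", "up", "down"]  # hypothetical cursor directions

# Simulated ground truth: each intended direction produces a characteristic
# pattern of activity across the electrodes.
true_patterns = {intent: rng.standard_normal(n_electrodes) for intent in intents}

def record_trial(intent):
    """Stand-in for one recorded trial of noisy neural features."""
    return true_patterns[intent] + 0.5 * rng.standard_normal(n_electrodes)

# "Training": average many labeled trials into one template per intent.
templates = {
    intent: np.mean([record_trial(intent) for _ in range(50)], axis=0)
    for intent in intents
}

def decode(features):
    """Classify a new trial by the closest template (nearest centroid)."""
    return min(templates, key=lambda intent: np.linalg.norm(features - templates[intent]))

# Usage: decode a fresh trial where the subject intends "left".
print(decode(record_trial("left")))   # most likely prints "left"
```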

Researchers are still working out the basics of this technology, both hardware and software, but are making good steady progress. There doesn’t appear to be any inherent biological limitation here, only a technological limitation, so progress should be, and has been, steady.

The researchers did something very clever. The goal is to facilitate communication for those who are paralyzed to the point that they cannot communicate physically. One method is to control a cursor and move it to letters on a screen in order to type out words. This works, but is slow. Ideally, we would simply read words directly from the language cortex – the subject thinks words and they appear on a screen or are spoken by a synthesizer. This, however, is extremely difficult because the language cortex does not have any obvious physical organization that relates to words or their meaning. Further, this would take a high level of discrimination, meaning that it would require lots of small electrodes in contact with the brain.

Continue Reading »

May 03 2021

The Science of Feel-Good Storytelling

From one perspective, art is mostly a science that we understand better at an intuitive rather than an analytical level. This does not reduce the creative elements of artistic expression, but it does mean there is an underlying empirical phenomenon to be understood. Storytelling, for example, has a basic structure, with elements that serve a specific purpose. One of the more famous attempts at breaking down the structure of certain types of stories is the Hero’s Journey by Joseph Campbell, in which he explains the common elements of epic quests in literature, elements that hold true even in more modern storytelling like Star Wars.

A recent study by German authors takes a similar look at the “feel-good film”, which is not really a specific genre but more of a vague category. The feel-good film is meant, as the name implies, to elevate the mood and be pleasant to watch. The authors point out that this is often a point of criticism from serious film critics, but simultaneously a point of praise from viewers. There is just as much of an art and science behind making a good feel-good film as any other type, so I think the criticism is unfair. I would focus more on the quality of any specific movie.

After extensive surveying, the authors found that the best formula for a feel-good effect is the romantic comedy. The authors write:

“Often these involve outsiders in search of true love, who have to prove themselves and fight against adverse circumstances, and who eventually find their role in the community.”

Further, the authors found that the introduction of a “fairy tale” element served well to lighten the movie and enhance the feel-good effect. The movie that immediately came to mind when I read this was Enchanted, which seems to deliberately follow this formula (and it worked, extremely well). The authors also point out that such films require genuine drama and conflict. There appears to be a sweet spot – where we feel a real threat but we know the good guys are going to pull it out in the end. It’s like a rollercoaster – it’s simulated danger, but we know we are safe. In Enchanted the evil queen needs to seem genuinely menacing, for example.

Continue Reading »

Apr 30 2021

Organic Electrochemical Synaptic Transistor

The title of this post should be provocative, if you think about it for a minute. For “organic” read flexible, soft, and biocompatible. An electrochemical synapse is essentially how mammalian brains work. So far we could be talking about a biological brain, but the last word, “transistor”, implies we are talking about a computer. This technology may represent the next step in artificial intelligence, developing a transistor that more closely resembles the functioning of the brain.

Let’s step back and talk about how brains and traditional computers work. A typical computer, such as the device you are likely using to read this post, has separate memory and logic. This means that there are components specifically for storing information – such as RAM (random-access memory), cache memory (fast memory that acts as a buffer between the processor and RAM), and, for long-term storage, hard drives and solid-state drives. There are also separate components that perform logic functions to process information, such as the CPU (central processing unit), the graphics card, and other specialized processors.

The strength of computers is that they can perform some types of processing extremely fast, such as calculating with very large numbers. Memory is also very stable – you can store a billion-word document and years later it will be unchanged. Try memorizing a billion words. The weakness of this architecture is that it is very energy intensive, largely because of the inefficiency of constantly having to transfer information from the memory components to the processing components. Processors are also very linear – they do one thing at a time. This is why more modern computers use multi-core processors, so they can do some limited multi-tasking.
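
A crude way to see this data-movement cost on an ordinary computer is to perform the same total number of arithmetic operations once over a large array (which must stream through main memory) and many times over a small array (which stays in the processor’s fast cache). The exact timings depend entirely on your hardware – this is only an illustrative sketch of the bottleneck described above.

```python
import time
import numpy as np

total_elements = 50_000_000          # ~400 MB of float64: too big for CPU cache
small_elements = 100_000             # ~0.8 MB: fits comfortably in cache
passes = total_elements // small_elements   # same total arithmetic either way

big = np.ones(total_elements)
small = np.ones(small_elements)

start = time.perf_counter()
big *= 1.000001                      # one pass, data streams from main memory
big_time = time.perf_counter() - start

start = time.perf_counter()
for _ in range(passes):              # many passes, data stays cache-resident
    small *= 1.000001
small_time = time.perf_counter() - start

print(f"one pass over the large array:    {big_time:.3f} s")
print(f"{passes} passes over the small array: {small_time:.3f} s")
```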

Continue Reading »

Apr 20 2021

Imaging the Living Brain In Action

One major factor in the progress of our understanding of how brains function is the ability to image the anatomy and function of the brain in greater detail. At first our examination of the brain was at the gross anatomy level – looking at structures with the naked eye. With this approach we were able to divide the brain into different areas that were involved with different tasks. But it soon became clear that the organization and function of the brain was far more complex than gross examination could reveal. The advent of microscopes and staining techniques allowed us to examine the microscopic anatomy of the brain, and to see the different cell types, their organization into layers, and how they network together. This gave us a much more detailed map of the anatomy of the brain, and from examining diseased or damaged brains we could infer what most of the identifiable structures in the brain did.

But still, we were a couple of layers removed from the true complexity of brain functioning. Electroencephalography gave us the ability to look not at brain anatomy but at function – we could detect the electrical activity of the brain with a series of electrodes in real time. This gave us good temporal resolution of function, and a good window into overall brain state (is the brain awake, asleep, or damaged), but very poor spatial resolution. This has improved in recent decades thanks to computer analysis of EEG signals, which can map brain function in greater detail, but it is still very limited.

CT scans and later MRI scans allow us to image brain anatomy, even deep anatomy, in living creatures. In addition we can see some pathological details like edema, bleeding, scar tissue, iron deposition, or inflammation. With detailed imaging we could see a lesion while still being able to examine the living patient, rather than having to wait until autopsy. As MRI scans advanced we could also correlate non-pathological anatomical features with neurological function (such as skills or personality), giving us yet another window into brain function.

Continue Reading »

Apr 16 2021

Later School Start Times

Yet another study shows the benefits of delaying the start time for high school students. This study also looked at middle school and elementary school students, had a two-year follow-up, and included both parent and student feedback. In this study, “Participating elementary schools started 60 minutes earlier, middle, 40-60 minutes later, and high school started 70 minutes later,” and the researchers found:

Researchers found that the greatest improvements in these measures occurred for high school students, who obtained an extra 3.8 hours of sleep per week after the later start time was implemented. More than one in ten high school students reported improved sleep quality and one in five reported less daytime sleepiness. The average “weekend oversleep,” or additional sleep on weekends, amongst high schoolers dropped from just over two hours to 1.2 hours, suggesting that with enough weekday sleep, students are no longer clinically sleep deprived and no longer feel compelled to “catch up” on weekends. Likewise, middle school students obtained 2.4 additional hours of sleep per week with a later school start time. Researchers saw a 12% decrease in middle schoolers reporting daytime sleepiness. The percent of elementary school students reporting sufficient sleep duration, poor sleep quality, or daytime sleepiness did not change over the course of the study.

This adds to prior research which shows similar results, and also shows that student academic performance and school attendance improves. For teens their mood improves, their physical health improves, and the rate of car crashes decreases. So it seems like an absolute no-brainer that the typical school start time should be adjusted to optimize these outcomes. Why isn’t it happening? Getting in the way are purely logistical problems – synchronizing school start times with parents who need to go to work, sharing buses among elementary, middle, and high school, and leaving enough time at the end of the day for extracurricular activities. But these are entirely solvable logistical hurdles.

Continue Reading »

Apr 12 2021

Progress on Bionic Eye

Some terms created for science fiction are eventually adopted when the technology they anticipate comes to pass. In this case, we can thank The Six Million Dollar Man for popularizing the term “bionic”, which was originally coined by Jack E. Steele in August 1958. The term is a portmanteau of biological and electronic, plus it just sounds cool and rolls off the tongue, so it’s a keeper. So while there are more technical terms for an artificial electronic eye, such as “biomimetic”, the press has almost entirely used the term “bionic”.

The current state of the art is nowhere near Geordi’s visor from Star Trek: TNG. In terms of approved devices actually in use, we have the Argus II, a device that includes an external camera mounted on glasses and connected to a processor. These send information to a retinal implant that connects to ganglion cells, which send the signals to the brain. In a healthy eye the light-sensing cells in the retina connect to the ganglion cells, but there are many conditions that prevent this and cause blindness. The photoreceptors may degenerate, for example, or corneal damage may not allow light to get to the photoreceptors. As long as there are surviving ganglion cells, this device can work.

Currently the Argus II contains 60 pixels (6 columns of 10) in black and white. This is incredibly low resolution, but it can be far better than nothing at all. For those with complete blindness, being able to sense light and shapes can greatly enhance the ability to interact with the environment. They would still need to use their normal assistive aids while walking (a cane, guide dog, or human companion), but the device would help them identify items in their environment, such as a door. Now that this device is approved and functioning, incremental improvements should come steadily. One firmware update allows for the perception of color, which is not directly sensed but inferred from the pattern of signals.
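
To get a feel for what 60 pixels can convey, you can block-average any grayscale frame down to a 60-element grid: coarse shapes like a doorway survive, fine detail does not. This is only an illustration of the resolution, not how the Argus II actually processes video, and the synthetic frame below is a placeholder.

```python
import numpy as np

GRID_ROWS, GRID_COLS = 10, 6      # "6 columns of 10" = 60 electrodes

def downsample_to_grid(frame):
    """Block-average a grayscale frame down to the electrode grid resolution."""
    rows, cols = frame.shape
    block_r, block_c = rows // GRID_ROWS, cols // GRID_COLS
    trimmed = frame[:block_r * GRID_ROWS, :block_c * GRID_COLS]
    blocks = trimmed.reshape(GRID_ROWS, block_r, GRID_COLS, block_c)
    return blocks.mean(axis=(1, 3))

# Synthetic 200x120 frame: dark background with a bright "doorway" rectangle.
frame = np.zeros((200, 120))
frame[40:180, 40:80] = 1.0

grid = downsample_to_grid(frame)

# Print the 60-"pixel" percept as on/off: roughly the shape information that survives.
for row in (grid > 0.5).astype(int):
    print("".join("#" if v else "." for v in row))
```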

Continue Reading »
