Dec 21 2007

How the Brain Interprets Language

I have mentioned before that we are in the midst of a pulse of exciting neuroscience research involving the use of functional MRI scanning (fMRI) to see which brain structures are involved with which cognitive activities. We are thereby reverse engineering the brain. A study just published in the journal Neuron uses fMRI to look at a specific aspect of how our brains process language. (Uri Hasson, Jeremy I. Skipper, Howard C. Nusbaum and Steven L. Small, "Abstract Coding of Audiovisual Speech: Beyond Sensory Representation," Neuron, Volume 56, Issue 6, 20 December 2007, Pages 1116-1126. doi:10.1016/j.neuron.2007.09.037)

For background, the language center of the brain (Wernicke’s area in the dominant temporal lobe) is the “dictionary” of the brain – translating words into concepts and concepts into words. Wernicke’s area has input from auditory and visual areas of the brain, which makes sense. In essence, Wernicke’s area hears speech and then translates those sounds into words that have abstract meaning.

It is also true that we use visual cues, when available, in translating sounds. What we hear will therefore be altered by what we see. And further, the context of speech affects how we interpret what we hear. Ventriloquism exploits this fact – ventriloquists will substitute sounds they can say without moving their lips for more difficult sounds, relying on the fact that the audience will hear what makes sense.

It is also established that our language cortex can recognize a limited set of speech sounds, or components (called percepts). We learn these in the first four years of life from hearing speech, and then the “language window” closes and we are limited to those sounds for the rest of our lives. Anything we hear in the context of speech after that will be sorted into one of the pre-existing percepts.
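To make that sorting concrete, here is a toy sketch (my own illustration in Python, not anything from the study): a nearest-prototype classifier that forces any continuous acoustic measurement into one of a fixed set of learned categories. The feature (voice onset time) and the prototype values are invented for illustration.

```python
# Toy illustration only (not from the study): sorting a continuous acoustic
# feature into a fixed set of learned percepts, nearest-prototype style.
# The feature (voice onset time, in ms) and prototype values are invented.

PERCEPT_PROTOTYPES = {
    "/b/": 0.0,   # short voice onset time
    "/p/": 60.0,  # long voice onset time
}

def classify_percept(voice_onset_ms: float) -> str:
    """Force any input sound into the nearest pre-existing percept."""
    return min(PERCEPT_PROTOTYPES,
               key=lambda p: abs(PERCEPT_PROTOTYPES[p] - voice_onset_ms))

print(classify_percept(12.0))  # -> /b/  (there is no "in between" category)
print(classify_percept(48.0))  # -> /p/
```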

The question asked by this study concerns the processing of visual and auditory information prior to getting to the language cortex – does the language area receive “raw” sensory information, or is it already processed into percepts? What they found is that the auditory information is already processed into an abstract percept – a specific speech sound.

The methods were clever and built upon established techniques. The researchers exploited the fact that sensory input has a decreasing effect on brain activity with repetition (so-called repetition suppression). They then used fMRI to measure brain activity during various visual and auditory stimuli – but stimuli designed to result in the same percept. In other words, video and audio that varied in exact content but that would be perceived as the same sound or component of speech. They then looked to see whether repetition suppression occurred more with the same auditory stimuli, the same visual stimuli, or the same percept regardless of the precise stimuli.
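Schematically (and this is my own sketch of the logic, not the authors’ actual stimuli or analysis), the design amounts to trial pairs that vary independently in whether the audio, the video, or the resulting percept repeats, with each coding hypothesis predicting suppression under a different condition:

```python
# Schematic of the design logic only; the stimuli and numbers are invented.
# Each dict is a pair of trials: does the audio, video, or percept repeat?

trial_pairs = [
    {"same_audio": True,  "same_video": True,  "same_percept": True},
    {"same_audio": False, "same_video": False, "same_percept": True},
    {"same_audio": True,  "same_video": False, "same_percept": False},
]

def predicted_response(pair, coding):
    """Predicted signal on the second presentation; lower = suppression.
    `coding` is the hypothesis about what the region represents."""
    key = {"audio": "same_audio", "video": "same_video",
           "percept": "same_percept"}[coding]
    return 0.5 if pair[key] else 1.0  # repeating the coded feature suppresses

for pair in trial_pairs:
    print({c: predicted_response(pair, c) for c in ("audio", "video", "percept")})
```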

What they found is that repetition suppression occurred more with the same percept – even when the visual or auditory stimuli were different. This suggests that there is a middle layer in language processing between raw sensory input and the language part of the brain. This middle layer sorts auditory and visual stimuli into a specific language percept. Percepts are then combined to form words that have an abstract meaning. So decoding language from sound and sight is a two-step process.
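In toy form, that two-step decoding looks something like this (all names and the one-word lexicon here are invented for illustration):

```python
# Toy two-step decoder: sensory chunks -> percepts -> word meaning.
# Every name below is invented for illustration.

PERCEPT_OF_CHUNK = {"chunk_k": "/k/", "chunk_ae": "/ae/", "chunk_t": "/t/"}
LEXICON = {("/k/", "/ae/", "/t/"): "cat"}  # percept sequence -> word

def decode(chunks):
    percepts = tuple(PERCEPT_OF_CHUNK[c] for c in chunks)  # step 1: sort into percepts
    return LEXICON.get(percepts, "<no word>")              # step 2: abstract meaning

print(decode(["chunk_k", "chunk_ae", "chunk_t"]))  # -> cat
```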

This study was also able to see which part of the brain performs this processing – the pars opercularis of the inferior frontal gyrus. This detail is of no particular interest to the non-neuroscientist, but I mention it to point out that such research not only tells us how the brain is functionally constructed but how it is anatomically constructed, and how function follows anatomy. Both components – functionality and anatomy – are critical to the broader goal of reverse engineering the brain.

While this study is a small slice of all the interesting neuroscience that is happening, I reviewed it to demonstrate some important concepts. The first is that the materialist paradigm of neuroscience – that everything people think, do, and feel can be studied and understood as a physical process of the brain with a specific anatomical correlate – is very successful and is producing tangible results.

The second is to demonstrate the fine degree of detail with which we have already plumbed the complexity of the brain. Too often I hear or read reporters, critics of science, and even scientists casually refer to the extent of our ignorance of how the brain works. I do not mean to downplay all that remains to be learned about the brain, and how truly complex and subtle a machine it is. But often the public is left with the sense that the brain is little more than a black box about which we know next to nothing. This is a far cry from the current reality – we have a detailed model of the different parts of the brain and what they do.

It is easy to be mystified by the brain and neuroscience, but (however complex it is) it is just a machine that must follow logical and ultimately understandable rules. More important than how much we know at any one time is how successful we are at learning more about the brain based upon our current approach. The fact that neuroscience is progressing rapidly tells us that our models are useful – and this is all science is really about when you strip it down: making models of the physical world that make predictions and then seeing how those predictions work out.

By this criterion the neuroscience model of the brain and mind is remarkably successful.
