Sep 10 2019
How the Brain Filters Sound
Our brains are constantly assailed by sensory stimuli. Sound, in particular, may bombard us from every direction, depending on the environment. That is a lot of information for brains to process, and so mammalian brains evolved mechanisms to filter out stimuli that are less likely to be useful. As our understanding of these mechanisms has become more sophisticated, it has become clear that the brain operates at multiple levels simultaneously.
A recent study both highlights these insights and gives a surprising result about one mechanism for auditory processing. Neuroscientists have long known about auditory sensory gating (ASG) – if the brain is exposed to identical sounds close together, the response to subsequent sounds is significantly reduced. This fits with the general observation that the brain responds more to new stimuli and changes in stimuli, and quickly becomes tolerant to repeated stimuli. This is just one way to filter out background noise and pay attention to the bits that are most likely to be important.
Further, for ASG specifically, it has been observed that many people with schizophrenia lack this filter. You can even diagnose schizophrenia partly by doing what is called a P50 test – you give two identical auditory stimuli 500ms apart, and then measure the response in the auditory part of the brain. In typical people (and mice and other mammals) there is a significant (60% or more) reduction in the response to the second sound. In some patients with schizophrenia, this reduction does not occur.
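The P50 result is usually summarized as a simple suppression ratio – the response to the second stimulus divided by the response to the first. Here is a minimal sketch in Python; the amplitudes are made up and the 0.4 cutoff simply restates the "60% or more reduction" figure above, not any clinical standard:

```python
def gating_ratio(s1_amplitude, s2_amplitude):
    """S2/S1 ratio: a lower value means stronger sensory gating."""
    return s2_amplitude / s1_amplitude

def shows_typical_gating(s1, s2, max_ratio=0.4):
    # A reduction of 60% or more means the S2/S1 ratio is 0.4 or less.
    return gating_ratio(s1, s2) <= max_ratio

# Hypothetical evoked-response amplitudes (arbitrary units):
print(shows_typical_gating(10.0, 3.0))  # 70% reduction -> True
print(shows_typical_gating(10.0, 9.0))  # only 10% reduction -> False
```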
In fact researchers have identified a genetic mutation, the 22q11 deletion syndrome, that is associated with a higher risk of schizophrenia and a failure of ASG. Reduced ASG may be the cause of some symptoms in these patients, but it is also clearly not the whole picture. It’s common for a single mutation in a gene that contributes to brain development or function to result in a host of changes to ultimate brain function.
The new study tries to find out where in the pathway of auditory processing ASG occurs. The researchers hypothesized that it would be at the final step, in the frontal lobes, where the brain determines how much attention it is going to pay to any particular stimulus. This hypothesis is partly based on the fact that schizophrenia is primarily a disorder of frontal lobe function. They studied healthy mice and those with the 22q11 deletion. They placed electrodes along the pathway, from the brainstem to the frontal lobes, and then recorded the response to auditory stimuli. What they found was the opposite of what they expected – the auditory gating occurred not at the end of the process in the frontal lobes, but toward the beginning of the process in the brainstem. Further testing with the 22q11 mice found that they lacked this brainstem-level filter.
I like this study because it represents a trend I have noticed in the neuroscience literature over the last couple of decades – increasing understanding of how sensory processing in the brain operates simultaneously at multiple levels, with processing occurring in both the top-down and bottom-up directions at the same time. Some basic principles, which in retrospect make perfect sense, are also emerging.
For example, the brain seems to have evolved to be efficient. Processing information takes time and energy, and anything that speeds up our ability to respond to our environment can have an obvious survival advantage. Bottom-up processing is one way to be more efficient – make the simplest processes do as much of the heavy lifting as possible. Leave the least amount of work for the complex processing parts of the brain to do. So sounds get filtered at the subcortical levels with relatively simple neurological processes. That way, a lot of the filtering has already occurred before the sounds get to the frontal lobes where complex things like analysis and attention are determined.
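The efficiency argument can be made concrete with a toy pipeline – a cheap "brainstem-style" stage suppresses repeated stimuli so an expensive "cortical" stage has less work to do. The function names and stimuli are purely illustrative, not a model of real neural circuitry:

```python
def cheap_gate(stimuli):
    """Toy early-stage filter: pass only stimuli that differ from the one before."""
    passed = []
    previous = None
    for s in stimuli:
        if s != previous:  # novel relative to the immediately preceding stimulus
            passed.append(s)
        previous = s
    return passed

def expensive_analysis(stimuli):
    """Stand-in for costly higher-level processing; here it just counts items of work."""
    return len(stimuli)

sounds = ["click", "click", "click", "bark", "bark", "click"]
filtered = cheap_gate(sounds)
print(filtered)                      # ['click', 'bark', 'click']
print(expensive_analysis(filtered))  # 3 items of work instead of 6
```

The point of the sketch is only the division of labor: the cheap stage runs a trivial comparison, and the expensive stage never sees the redundant stimuli at all.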
Understanding of the visual pathways has shown the same basic setup – a lot of visual processing occurs before images even get to the cortex itself. This also makes sense from an evolutionary point of view, as lizards would need mechanisms to filter their sensory input without much of a cortex.
Perhaps even cooler are the recent discoveries that the flow of filtering information occurs in both directions. So – the subcortical pathways do basic processing and filtering of sensory stimuli, to clean up the signals, emphasize contrast and other important features, and filter out redundant information or information likely to be of little value. The signals then go up to higher levels of processing, where form and meaning are assigned. Raw visual data gets cleaned up, then processed to reveal shape, size, color, movement, and lighting. That information then gets interpreted to determine what it likely is, so it’s not just a shape, it’s a car. And then the object is further assigned significance and emotional context, and then it evokes relevant memories.
But – we have also learned that these higher parts of the brain then communicate back down to the lower levels of processing and inform their process. Essentially, if the parts of the brain involved in visual processing receive the image of a shape, and that area makes a match from its visual memory banks between that shape and an elephant, it then determines that the object is an elephant. Then that part of the brain sends signals to the parts doing the more basic processing that say, make that image look more like an elephant. It does this by filtering out features that don’t match an elephant, emphasizing lines that fit an elephant, and interpreting things like size and distance in a way that is consistent with what is known about elephants.
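This top-down reweighting can be sketched as a toy calculation: a high-level guess ("elephant") boosts the low-level features it expects and damps the rest. All of the feature names, evidence values, and the gain factor are invented for illustration:

```python
def reweight(evidence, expected, gain=0.5):
    """Toy top-down feedback: boost features the high-level guess expects,
    damp the rest, then renormalize so the weights still sum to 1."""
    adjusted = {}
    for feature, strength in evidence.items():
        if feature in expected:
            adjusted[feature] = strength * (1 + gain)
        else:
            adjusted[feature] = strength * (1 - gain)
    total = sum(adjusted.values())
    return {f: v / total for f, v in adjusted.items()}

# Ambiguous low-level evidence for different edge features:
evidence = {"trunk_curve": 0.4, "tail_line": 0.4, "wing_edge": 0.2}
# The high-level match says "elephant," which expects a trunk and a tail:
biased = reweight(evidence, expected={"trunk_curve", "tail_line"})
print(biased)  # trunk and tail emphasized, the wing-like edge suppressed
```

The low-level evidence never changes; what changes is how much weight each feature gets once the higher level has committed to an interpretation.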
This phenomenon is perhaps most obvious when it comes to sounds. If you hear ambiguous language sounds, your brain will do its best to make a match, but perhaps it can’t. If, however, you are simply told what the sounds are, your brain will make that match. Then you will actually hear those words. The way in which your brain processes the sounds alters – you don’t just think you hear the words, you actually hear them (according to your subjective perception).
There are several takeaways from all of this. First, our brains heavily filter, alter, and process sensory information. They are not passive recorders or perceivers, but actively construct perceptual experience in an adaptive process. Only a tiny portion of all the sensory information around us gets noticed and constructed, and that information is highly processed. Further, all this processing occurs at every level in the sensory pathway, not just at the highest levels. And finally, this processing works in multiple directions at once – not just bottom-up and top-down, but also laterally, as different sensory modalities affect each other. What we see affects what we hear, and vice versa.
All these processes are not perfect, and they often involve trade-offs. Evolutionary pressure has pushed the system toward statistical optimality, but that may mean making choices that are best most of the time, but not all the time. Sometimes the process breaks down, resulting in what we experience as illusions. We mishear what other people say, or temporarily construct our visual input in the wrong way, until new information or a different perspective emerges. We may never correct the misperception, and walk away with a memory of having seen or heard something that was not accurate. This fact creates a lot of raw material to feed alien, cryptozoological, and paranormal beliefs. It also has massive implications for eyewitness testimony in court.