Aug 02 2018

Prior Exposure Influences What We See

One of the mantras on this blog is that perception is constructed by complex processes in the brain, not a passive recording of external stimuli. The implications of this are profound – what you see, hear, and taste is influenced by your internal model of the world and by what your other senses are telling you. In real time your brain is comparing your sensory inputs to each other, and to stored memories. It then finds the best match possible and (this is critical) tweaks your perception to more strongly conform to the apparent match.
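To make the idea concrete, here is a toy sketch in Python (emphatically not a model of real neural computation) of "find the best match in memory, then nudge the percept toward it." The templates, noise level, and blending weight are all made-up illustrative values.

```python
# Toy sketch (not a neural model): perception as "best match, then bias toward the match".
# The templates, noise level, and blending weight are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Stored "memories": idealized feature vectors for things the brain knows.
templates = {
    "black bear": np.array([0.9, 0.2, 0.7, 0.1]),
    "tree stump": np.array([0.3, 0.8, 0.6, 0.4]),
}

true_object = templates["tree stump"]
sensory_input = true_object + rng.normal(scale=0.3, size=4)  # ambiguous, noisy stimulus

# Find the best-matching memory...
best_label, best_template = min(
    templates.items(), key=lambda kv: np.linalg.norm(sensory_input - kv[1])
)

# ...then tweak the percept toward that match (here, a simple weighted blend).
bias = 0.6  # how strongly memory pulls the percept; purely illustrative
percept = (1 - bias) * sensory_input + bias * best_template

print(f"best match: {best_label}")
print(f"raw input:  {np.round(sensory_input, 2)}")
print(f"percept:    {np.round(percept, 2)}")
```

Depending on the noise, the "best match" here may not even be the true object – which is exactly the point of the examples below.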

In a very real sense, believing is seeing.

A new study extends our understanding of this constructive perceptual phenomenon a bit with respect to vision. The question was mainly: where in the brain are these processes happening? The researchers used a standard paradigm called Mooney images, which are black-and-white images degraded so that they are difficult to interpret. However, once subjects are primed with an undegraded grayscale version of the image (a process called disambiguation), the Mooney image becomes trivially easy to interpret. The effects of this priming can last for days, or even indefinitely.
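For the curious, a Mooney-style two-tone image can be approximated by blurring a grayscale photo and then thresholding it to pure black and white. The sketch below illustrates that idea; the blur radius, threshold, and file names are arbitrary, and the researchers' actual stimulus pipeline may well have differed.

```python
# Rough sketch of producing a Mooney-style two-tone image from a grayscale photo:
# smooth it, then threshold to pure black and white. Parameters are illustrative only.
from PIL import Image, ImageFilter
import numpy as np

def mooneyify(path, blur_radius=4, threshold=128):
    gray = Image.open(path).convert("L")                       # grayscale "disambiguation" image
    smoothed = gray.filter(ImageFilter.GaussianBlur(blur_radius))
    two_tone = np.array(smoothed) > threshold                  # True = white, False = black
    return Image.fromarray((two_tone * 255).astype(np.uint8))

# Example usage (hypothetical file names):
# mooneyify("face.jpg").save("face_mooney.png")
```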

The researchers confirmed this priming effect, and that it is very robust. This means that what the subjects saw was determined as much by their memories (of the disambiguation image) as by their current visual stimuli, if not more. What you remember can be as important as, or more important than, what you are seeing, at least when what you are seeing is ambiguous.

The new information from this study, however, was which brain activity reflects this process. Here they also confirmed prior research that visual processing is very hierarchical – basic processing occurs in the primary visual cortex, and information then goes up to higher levels where more complexity is added and memory becomes a stronger influence. Specifically, the researchers wanted to know if the default-mode network (DMN) was playing any role.

The DMN is known to be active when recalling memories or thinking about things not related to any current external stimuli. It had therefore been hypothesized that the DMN is involved purely in internal thought processes (like calling up memories), but not in reacting to external stimuli. This study shows that the DMN is active during disambiguation, and remains active when recognizing Mooney images based on prior exposure. So the DMN is involved in calling up memories, apparently, but that activity also interacts with incoming sensory stimuli.

Further, the farther up the hierarchy you go, the greater the effect disambiguation had on brain activity. Memory becomes more and more of a factor at the higher hierarchical levels of sensory processing. Previous research has also shown that these higher hierarchical levels communicate back down to the more basic levels, affecting basic visual construction. So processing goes in both directions.
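As a rough illustration of that two-way traffic, the toy sketch below runs a feedforward sweep up a few "levels," then feeds a memory-derived expectation back down, nudging even the lowest level toward it. The levels, weights, and expectation vector are invented purely for illustration.

```python
# Toy sketch of bidirectional hierarchical processing: a feedforward sweep passes
# the stimulus up through levels, then a memory-informed "expectation" feeds back
# down and nudges each lower level toward it. All numbers are made up.
import numpy as np

def feedforward(stimulus, n_levels=3):
    levels = [stimulus]
    for _ in range(n_levels - 1):
        levels.append(levels[-1] * 0.9 + 0.1)  # stand-in for increasingly abstract features
    return levels

def feedback(levels, expectation, gain=0.5):
    # Higher levels are pulled more strongly toward the remembered/expected pattern;
    # the influence is weaker, but still present, at the primary (lowest) level.
    adjusted = []
    for depth, activity in enumerate(levels):
        weight = gain * (depth + 1) / len(levels)
        adjusted.append((1 - weight) * activity + weight * expectation)
    return adjusted

stimulus = np.array([0.2, 0.7, 0.4])
expectation = np.array([0.9, 0.1, 0.8])   # pattern recalled from memory

up = feedforward(stimulus)
down = feedback(up, expectation)
print(np.round(down[0], 2))  # even the lowest level has shifted toward the expectation
```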

Putting this all together, here is what it means. Let's say you are in the forest and you see an ambiguous blob in your peripheral vision. This draws your attention, so you look more closely, but the object is partially obstructed and only partially lit by dappled light. While you are looking, your visual cortex is trying to interpret the basic visual data in three dimensions, account for shading effects, separate foreground from background, accentuate borders, and infer color.

This information is being sent up the chain to higher cortical levels, which are searching your memory for the best match. Let's say the best match is a black bear, so that is what you think you see. The higher cortical areas then communicate back down to the primary visual cortex to make the image look more like your memory of a black bear. Then the image really snaps into focus, and you are sure you are looking at a black bear. The image being constructed in your brain from the ambiguous stimuli is that of a black bear, and so that is what you experience. You have no way of knowing how much the image in your mind's eye has been tweaked by memory versus the light hitting your retina – you just see a black bear.

Let’s say, however, that you are in the forest because you are hunting for Bigfoot. You are primed by images of Bigfoot, and further primed by the fact that Bigfoot is a target you are looking for. This means you are trying to find a match to a predetermined target. The same exact ambiguous image that someone else’s brain constructed into a black bear, your brain constructs into Bigfoot. That is what you see.

As you creep closer to get a better picture, you eventually see that the black bear/Bigfoot was just the stump of a fallen tree with a suggestive shape. Of course, you might instead just take an ambiguous picture, which you also see as Bigfoot, and that becomes evidence.

This kind of visual priming affecting the interpretation of ambiguous stimuli is rampant, from UFOs to chupacabras. It is not just active in pseudoscience, however. There are many historical cases, such as the missing red panda from the Rotterdam Zoo. There were numerous sightings of the red panda throughout the city once word spread to be on the lookout. Unfortunately, the panda was found dead on the train tracks right next to the zoo, and all the sightings were just an effect of priming.

The same effect comes into play when learning about something new. When I took up birding I started to see far more kinds of birds. I had more and more birds for reference in my memory, and when watching the jittery little things at my feeder, I was better able to disambiguate the stimuli. At first I conflated black-capped chickadees and nuthatches. Now they look so different to me I can't imagine how I confused them for each other.

You see what you know, and the more you know the more you see.
