Mar 18 2008

Monocular Depth Perception

An important realization for any scientist or skeptic is that reality is almost always more complex than our understanding of it. This is especially true of the common or lay understanding of any topic in science. (In fact this is likely to be true unless you are on the absolute cutting edge of knowledge in an area.)

Take depth perception. The common, and correct, belief is that depth perception results from what is called binocular disparity – the brain compares the images from each eye and uses the degree of difference to estimate distance. From my casual discussions with friends and family it seems that most people think this is the only mechanism by which our brains create the perception of depth. It isn’t. Real life is almost always more complex.
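To make the geometry concrete, here is a minimal sketch (a back-of-the-envelope illustration, not anything from the research discussed below) of how disparity translates into distance under simple pinhole-camera assumptions; all the numbers are made up for the example.

```python
# A rough sketch of stereo geometry: with two "cameras" (eyes) separated by a
# baseline, depth is approximately focal_length * baseline / disparity.
# All numbers below are illustrative assumptions, not measured values.

def depth_from_disparity(focal_length_mm, baseline_mm, disparity_mm):
    """Estimate distance from the offset between the two eyes' images of the same object."""
    if disparity_mm == 0:
        return float("inf")  # zero disparity: the object is effectively at infinity
    return focal_length_mm * baseline_mm / disparity_mm

# Example: ~17 mm effective focal length of the eye, ~65 mm between the pupils,
# and a 1 mm retinal disparity works out to roughly 1.1 meters.
print(depth_from_disparity(17.0, 65.0, 1.0))  # -> 1105.0 (mm)
```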

Neuroscientists have known for a long time that the brain uses other visual cues to estimate distance. People who are blind in one eye have impaired depth perception, but they still have functional depth perception. The world does not look flat to them. Monocular depth perception functions well enough, for example, to allow for safe driving.

From an evolutionary point of view it makes sense that vertebrate brains would adopt any method of estimating distance that they hit upon. This is especially true in those animals that do not have binocular vision. Geese, for example, have eyes on opposite sides of the head, giving them the broadest possible field of vision. Hunting species typically favor binocular vision – trading a smaller field of vision for greater precision of depth perception.

Other mechanisms of depth perception include estimating absolute size from experience (we know, for example, how big an elephant should be) and then estimating distance based upon apparent size (objects get smaller as they get farther away). Nearer objects will also tend to pass in front of objects that are farther away. Our brains use all of this information to infer a three-dimensional image of the world around us.
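As a rough illustration of that familiar-size cue (my own toy example with invented numbers, not anything from the research below): under a small-angle approximation, an object's angular size is roughly its true size divided by its distance, so knowing one lets you infer the other.

```python
import math

# Familiar-size cue, small-angle approximation:
# angular_size (radians) ~= true_size / distance, so distance ~= true_size / angular_size.

def distance_from_familiar_size(true_size_m, angular_size_deg):
    """Infer distance from an object's known physical size and its apparent (angular) size."""
    return true_size_m / math.radians(angular_size_deg)

# Example with invented numbers: an elephant about 3 m tall that subtends
# about 2 degrees of visual angle is roughly 86 m away.
print(round(distance_from_familiar_size(3.0, 2.0)))  # -> 86
```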

Recently a team of researchers from the University of Rochester, led by Greg DeAngelis, has fleshed out yet another mechanism of depth perception independent of binocular disparity (published this week in the journal Nature). DeAngelis is quoted as saying:

It looks as though in this area of the brain, the neurons are combining visual cues and non-visual cues to come up with a unique way to determine depth.

What makes this newly discovered method unique is the interesting way it combines non-visual information to create depth perception. It was previously discovered that the vestibular system may be involved in depth perception. This new study both confirms this and maps out the actual brain areas involved.

This method of depth perception involves the phenomenon of parallax – objects that are closer move across our visual field more quickly than objects that are farther away when we move or turn our head. In fact, astronomers use the parallax produced by the movement of the earth around the sun to estimate the distance to nearby stars. Based upon vision alone, parallax can only tell us the relative distance of objects – that object A is closer or farther than object B. But if we want to estimate actual rather than just relative distance, we also need information about the amount of movement that produced the parallax (like knowing how far the earth moves in traveling around the sun).
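For the curious, here is the astronomers' version of that calculation in a few lines (a simplified sketch; the constants are standard, the code itself is just my illustration): the distance is the baseline divided by the tangent of the parallax angle.

```python
import math

# Parallax: distance = baseline / tan(parallax angle).
# For stars the baseline is the Earth-Sun distance (1 astronomical unit).

def distance_from_parallax(baseline, parallax_angle_rad):
    """Distance in the same units as the baseline, given the observed parallax angle."""
    return baseline / math.tan(parallax_angle_rad)

AU_KM = 149.6e6                         # kilometers in one astronomical unit
KM_PER_LIGHT_YEAR = 9.46e12
parallax = math.radians(0.77 / 3600.0)  # Proxima Centauri: ~0.77 arcseconds, in radians
print(distance_from_parallax(AU_KM, parallax) / KM_PER_LIGHT_YEAR)  # ~4.2 light-years
```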

This is where the vestibular system comes in. This system is based upon receptors in the inner ear, in what are called the semicircular canals. These are fluid-filled loops arranged at right angles to one another, so that there is one canal along each of the three axes. There are also two chambers called the utricle and the saccule. These are sensory organs designed to respond to rotation (the semicircular canals) and linear acceleration (the utricle and saccule) based upon the inertial movement of the fluid inside, which flows past hair-like receptors, bending them and triggering neurons to fire. The vestibular system therefore provides sensory information to various parts of our brain (like our balance system) so that we can sense whether we are still or accelerating and in which direction, and what position we are in relative to gravity. This is the system that produces the sensation of dizziness (more precisely called vertigo in this context). For example, when we spin around we accelerate the fluid in the canals (like spinning a bowl filled with water), and then when we stop the fluid keeps flowing for a while, producing the subjective sensation that we or the world are continuing to spin.

What DeAngelis and his colleagues have discovered is a brain region (the middle temporal region) that combines visual information and vestibular information to produce an independent estimate of distance. The brain is combining parallax from the visual system with an estimate, from the vestibular system, of how much we are moving or turning our heads – movement plus parallax equals distance.
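To sketch that computation in the simplest possible terms (this is my own toy illustration, not the model from the Nature paper): for a stationary object off to the side of a sideways-moving observer, depth is roughly the observer's speed divided by the object's angular speed across the retina. The visual system supplies the angular speed; the vestibular system supplies the self-motion.

```python
# Toy motion-parallax estimate: combine a vestibular estimate of self-motion with
# the visual (retinal) angular speed of an object. For an object roughly
# perpendicular to the direction of head movement, depth ~= head_speed / angular_speed.

def depth_from_motion_parallax(head_speed_m_s, angular_speed_rad_s):
    """Estimate distance by pairing self-motion (vestibular) with image motion (visual)."""
    return head_speed_m_s / angular_speed_rad_s

# Example with made-up numbers: if the head translates at 0.2 m/s and an object
# sweeps across the visual field at 0.1 radians per second, it is about 2 m away.
print(depth_from_motion_parallax(0.2, 0.1))  # -> 2.0
```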

That such a system could evolve is plausible. Vestibular information projects to the brainstem, the cerebellum, and various parts of the cortex (not all of these projections are fully understood, so this is a nice addition). Vestibular information is therefore widely available and is integrated into other sensory systems. It is specifically integrated with vision – for example, in the vestibulo-ocular reflex, a reflex that generates automatic eye movements to exactly compensate for head movements so that we can keep our vision steady and fixed.

When I read research like this I am simultaneously impressed by the cleverness of my fellow humans in figuring such things out, and humbled and inspired by the awesome complexity of the natural world. Nature is almost always much more complex than it at first seems.

We also need constant reminding of where our personal knowledge lies relative to what is known. We can reasonably extrapolate from this one example and conclude that our lay understanding of any topic in science is as incomplete and oversimplified as thinking that all depth perception is due to binocular disparity. Above a “lay” understanding is that of an interested and well-read science enthusiast – a description that likely fits most of the readers of this blog. But we also need to remember that there are layers of understanding above that as well. I put myself in the category of a generalist neuroscientist (my area of expertise does not involve the visual system). Above that would be a neuroscientist whose expertise does involve the visual system, and above that are the few who are actively involved in cutting-edge research on the specific topic at hand.

It is helpful to understand where one’s knowledge and understanding lies along this spectrum; otherwise we risk falling into the trap of using a naive lay understanding of a topic to challenge the conclusions of the experts in the field. This is not to make an argument from authority – that the experts are always right – but to argue that we all need to keep our own knowledge in perspective and have, at the very least, a proper humility. Otherwise we risk looking as foolish as a creationist.
