May 27 2011

Human Echolocation

Remember the Marvel comic hero Daredevil? He was blinded by exposure to radiation, but that same exposure ramped up his other senses so that he could, in essence, “see” with his hearing. Daredevil used a form of echolocation in order to survey his surroundings, and was able to be an acrobatic crime-fighter as a result.

While I liked the character, I always thought the idea was far-fetched. (Yeah, I know – it’s a comic-book character.) But perhaps the idea is not as science-fiction as you might think. There are reported cases of blind humans who developed a form of echolocation – they even use clicks to generate sound for this ability.

Echolocation is the use of sound waves that bounce off objects to form an image of those objects from the waves that bounce back. This may seem extraordinary, but it is no more extraordinary than using light waves to form three-dimensional images of the world around us. It just takes a bit of brain processing. Bats are the most common animal to come to mind when one thinks of echolocation, but other creatures do it as well, such as dolphins.

A new study in PLoS One looks at two so-called human echolocation experts, one with early blindness and one with late blindness. They found that when the subjects listened to ordinary sounds, their auditory cortex was recruited. But when they listened to the clicks used for echolocation, part of their visual cortex was also recruited. The pattern of cortex activated also depended on the location and movement of the objects reflecting the echo.

This suggests that human echolocation experts, to an extent, are actually “seeing” with echolocation – they are using the visual processing part of their brain to process sound. This makes sense on many levels. First, sound waves are already processed to a degree to detect direction, distance, and even size. Our ears are positioned so that sound waves from different directions will hit them at slightly different times, and our brains can process that information – comparing the timing of the sound in both ears – to give us a sense of where the sound is coming from. If you combine this type of processing with a bit of visual processing, it makes sense that you can make a crude image of the environment from sound information alone.
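
To make that timing argument concrete, here is a minimal sketch – my own illustration, not from the study, assuming an ear spacing of about 21 cm and a speed of sound of about 343 m/s – of how the arrival-time difference between the two ears maps to a direction:

    import math

    SPEED_OF_SOUND = 343.0   # m/s in air (assumed round figure)
    EAR_SPACING = 0.21       # meters between the ears (assumed round figure)

    def itd_for_azimuth(azimuth_deg):
        """Interaural time difference (seconds) for a distant sound source at a
        given azimuth (0 = straight ahead, 90 = directly to one side), using the
        simple far-field approximation delta_t = d * sin(theta) / c."""
        return EAR_SPACING * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

    def azimuth_for_itd(delta_t):
        """Invert the same toy model: estimate azimuth (degrees) from a measured
        arrival-time difference between the ears."""
        s = max(-1.0, min(1.0, delta_t * SPEED_OF_SOUND / EAR_SPACING))
        return math.degrees(math.asin(s))

    for az in (0, 15, 45, 90):
        print("azimuth %2d deg -> time difference %3.0f microseconds"
              % (az, itd_for_azimuth(az) * 1e6))

Even the largest difference (a source directly to one side) is only about 600 microseconds, which gives a sense of the timing precision the auditory system routinely handles.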

This also demonstrates the plasticity of the brain – it can change its function based upon use. Also, parts of the brain that no longer have a function – like the visual cortex of someone who becomes blind – can be recruited to serve a new function.

The existence of human echolocation is also a nice bit of evidence for the plausibility of evolution. Deniers often have a difficult time imagining how sophisticated biological features can emerge. How can a bat ancestor evolve echolocation? What the cases of human echolocation demonstrate is that new abilities can emerge suddenly, sufficiently formed to be useful, even if extremely crude. This is due, in part, to coaptation – the use of a biological feature evolved for one purpose for a different purpose. In this case, people are using their brain hardwiring evolved for ordinary processing of sound and vision in order to process sound like vision.

Now imagine humans who have moved to a niche that is very dark or to a nocturnal lifestyle (without artificial light). Those few who develop even crude echolocation would have a significant advantage, and would provide an evolutionary toe-hold into further refinement of the ability. Eventually you would have something like what bats have today.

A completely unrelated thought occurred to me. Reports of human echolocation indicate that they use mouth clicks, or tapping of their hands, feet, or objects like a cane, in order to generate the sound used in echolocation. This is very limiting, as the frequency of the clicks is very low, and so the amount of information is highly limited. I wonder if anyone has tried using an electronic device to generate clicks. This would be a trivial device to create – just a small box that creates clicks with a slider that adjusts the frequency of the clicks, and a dial for volume. It would be interesting to see what the limits of human echolocation are using these artificial clicks.
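
For what it’s worth, the signal-generation side really is trivial. Here is a minimal sketch – purely illustrative, with the click rate, click length, and volume as assumed parameters – that writes an adjustable click train to a WAV file:

    import math
    import struct
    import wave

    SAMPLE_RATE = 44100  # samples per second

    def click_train(rate_hz, volume, seconds, click_ms=0.5):
        """Return 16-bit mono PCM containing `seconds` of short clicks emitted
        `rate_hz` times per second, scaled by `volume` (0.0 to 1.0)."""
        period = int(SAMPLE_RATE / rate_hz)              # samples between click onsets
        click_len = max(1, int(SAMPLE_RATE * click_ms / 1000.0))
        frames = bytearray()
        for i in range(int(SAMPLE_RATE * seconds)):
            phase = i % period
            if phase < click_len:
                # a short decaying burst sounds like a sharp, broadband click
                amp = volume * math.exp(-5.0 * phase / click_len)
            else:
                amp = 0.0
            frames += struct.pack("<h", int(amp * 32767))
        return bytes(frames)

    with wave.open("clicks.wav", "wb") as f:
        f.setnchannels(1)            # mono
        f.setsampwidth(2)            # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        f.writeframes(click_train(rate_hz=10.0, volume=0.8, seconds=3.0))

The slider and volume dial of the imagined box correspond to the rate_hz and volume parameters here; a real device would simply do the same thing in hardware or in a phone app.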

In any case – this is one of the coolest neuroscience stories in a while – a comic-book superhero’s power is actually real (in a way).

24 Responses to “Human Echolocation”

  1. TheDawgLives on 27 May 2011 at 9:16 am

    “I wonder if anyone has tried using an electronic device to generate clicks. This would be a trivial device to create – just a small box that creates clicks with a slider that adjusts the frequency of the clicks, and a dial for volume. It would be interesting to see what the limits of human echolocation are using these artificial clicks.”

    I may be completely off base, but I thought part of echo-location involved the brain knowing when the sound was emitted, so that it could calculate the distance to the object that was reflecting the sound. Maybe that’s just an optional bonus.

  2. tmac57 on 27 May 2011 at 9:46 am

    With the use of an electronic device, you could actually use sound waves beyond the human spectrum for emission, and then convert the echo results to an audible signal through ear buds. This would have the benefits of making it transparent to others in the area and allowing the use of optimum transmitting and receiving signals, which would not necessarily need to be the same.

  3. daedalus2u on 27 May 2011 at 9:48 am

    Most of the information is likely not in the direct timing of reflections, but rather in the phase of the return signals. The phase difference can be measured over a significant period of time (a few wave periods), giving much more resolution than the single arrival time.

    I think that clicks of the right shape containing the right waveforms would be a lot better than random clicks that can be made by the mouth or tongue. If the waveforms, amplitude and click frequency could be independently modulated via eye movements or eye muscle activation, then people could very quickly (I think) learn to optimize the sound production. This is what bats do, but bats are limited in that the sound is generated via vocalizations, which limits the location and properties of the emitted sound.

    I think this could be done with a phone app. Mount it on the head like a head lamp and use head shaking via accelerometers to set the frequency of clicks. Head shaking up and down and side to side gives two degrees of freedom, frequency of head shaking gives a few more. You could code positional information in the clicks, orientation to North for example.

    TDL, sound travels at ~340 m/s. I don’t think that the clicking devices that have been used are capable of being manually activated with the kind of time resolution necessary (sub-millisecond) to use the arrival time information directly. I think what matters is relative timing, where the arrival time of the click does tell the brain where the click originated, but it is the relationship between the arrivals of the reflected waves that tells the geometry of the reflecting surfaces.
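
    To put rough numbers on that – a quick sketch, assuming sound travels at about 343 m/s in air:

        SPEED_OF_SOUND = 343.0  # m/s in air (assumed)

        def round_trip_delay(distance_m):
            """Seconds for a click to reach an object and echo back."""
            return 2.0 * distance_m / SPEED_OF_SOUND

        def range_error(timing_error_s):
            """Range uncertainty (meters) caused by a given timing uncertainty."""
            return SPEED_OF_SOUND * timing_error_s / 2.0

        for d in (0.5, 2.0, 10.0):
            print("object at %4.1f m -> echo after %4.1f ms" % (d, 1000 * round_trip_delay(d)))
        print("1 ms of timing slop -> about %.0f cm of range error" % (100 * range_error(0.001)))

    A millisecond of uncertainty in when the click was actually emitted blurs range by roughly 17 cm, which is why the relative timing between reflections carries most of the useful information.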

  4. srahhh on 27 May 2011 at 10:08 am

    I’ve heard of human echolocation before, and to my knowledge, the participants were using things as simple as dog clickers for generating sound… This would certainly be more convenient than constantly clicking your tongue, though, as has already been pointed out, it lacks nuance in frequency and the like. It would be interesting to see this in conjunction not only with the visually impaired, but also with people who work in dark/obscured environments – firefighters and coal miners, maybe?

  5. _rand15_ on 27 May 2011 at 11:15 am

    I have a little personal experience with this. Back in the mid-1960s, I worked for a little company that did interesting things with sound. One of them was a device built into a can the size of a soup can (maybe it was) with a knob on the side. It used an electromechanical effect to produce very powerful, very sharp clicks. It was built to explore echolocation in humans. The knob varied the rep rate from maybe one/second up to a real buzz.

    I got to play with the thing myself, just a little. Even in just a few minutes, it was remarkable how much information you could start to sense about the surroundings. The key was usually to set the rep rate so that each reflected pulse returned halfway in between each pair of emitted pulses. If you stood in front of a wall, for example, and turned the knob to change the rep rate, when you hit the right rate, the wall would suddenly start to “sing” at you (some rough numbers for that rate are sketched below). That’s the only way I can describe the sensation. It’s very distinctive. You got a strong sense of how large the wall was and how far away.

    One day I stood close to the wall of our office building, set the rep rate, and closed my eyes. As I walked along the wall, I came to an open door. I felt strongly that there was a big cavern there. It was a very distinct feeling, very different from being next to the blank wall.

    Note that I never felt that I was “seeing” anything. But then I’m not blind, and I didn’t get to use the device very long.

    I was told of blind people, using the device, who could detect curbs and step off them safely, and of one blind child who could safely ride a bicycle and avoid obstacles while doing so.

    I have no doubt that (at least) some people can get so that they can essentially see their environment using a device like this, because I’ve had a taste of it myself.
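
    To put rough numbers on that “singing” rate – a back-of-envelope sketch, assuming the echo should land halfway between successive clicks and that sound travels at about 343 m/s:

        SPEED_OF_SOUND = 343.0  # m/s in air (assumed)

        def singing_rep_rate(distance_m):
            """Clicks per second for which the echo from a surface at distance_m
            arrives midway between successive clicks. Taking the simplest case,
            the round trip 2d/c equals half the click period, so rate = c / (4d)."""
            return SPEED_OF_SOUND / (4.0 * distance_m)

        for d in (0.5, 1.0, 3.0, 10.0):
            print("wall at %4.1f m -> rep rate about %5.1f clicks/sec" % (d, singing_rep_rate(d)))

    That works out to anywhere from a handful of clicks per second for a distant wall up to a real buzz for a nearby one, which matches the range of the knob described above.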

  6. jre on 27 May 2011 at 4:49 pm

    Our human ability to localize auditory events (and, by extension, reflected sounds from objects) is remarkably sophisticated. Several distinct mechanisms all interact to create a mental picture of a sound’s location.
    In binaural hearing, localization in azimuth comes mostly from interaural level and time differences (not detection of phase in the strict sense, though that would be cool). Detection of an event’s angle of elevation, in contrast, comes largely from the direction-dependent spectral coloration introduced by the pinna and the head, described by “Head-Related Transfer Functions” or HRTFs.
    You can see how HRTFs might have been adaptive when you think about how important it might be for one of our ancestors to know quickly whether that growl comes from a leopard on the ground or in a tree.
    Each of us grows up with a unique head and pair of pinnae, and, without knowing it, has come to know the auditory world through a unique set of HRTFs.
    Wightman and Kistler have done some fascinating work in this area, and a few years ago had a demo at their website allowing you to hear a familiar sound, then put on a pair of headphones and hear the same sound “through someone else’s ears.” Very cool indeed.

  7. daedalus2u on 27 May 2011 at 7:24 pm

    jre, that is very interesting and does suggest that better results would be achieved by recruiting already existing neural structures. If each person has a unique transfer function, that would be difficult to emulate. Tuning the spectral content of the clicks to match the spectral response of the ear would be relatively easy and wouldn’t require ongoing real-time calculation.

    tmac, the data processing capacity of the sensory decoding neurons in the brain is a lot greater than that of any electronic device that could be made portable and battery operated for a few thousand dollars. The software to do the decoding would be pretty formidable and might have some liability issues where a click generator would not.

  8. sonic on 27 May 2011 at 8:15 pm

    A friend of mine uses echolocation to get around. I haven’t seen the guy for a few years, but I’m sure he is still at it. He is blind from birth and makes sounds using his tongue, snapping fingers and voice. Sometimes he opens his mouth and slaps his cheek to get a sound. He never uses a cane to get around and walks the streets and so forth without a problem. He would often know where things were before I did.
    I think he would use different sounds under different conditions and would often make one sound followed by another. I think he used different pitches to get different information.
    He’s a musician (that’s how I know him – we played together) and has perfect pitch. Perhaps these things are all related.

  9. anatman on 27 May 2011 at 9:47 pm

    here is an interesting article on daniel kish, an expert echolocator who teaches other blind people and works with researchers.

    http://www.mensjournal.com/the-blind-man-who-taught-himself-to-see

  10. diabetic77 on 27 May 2011 at 10:14 pm

    I found the video of the blind man who could ride a bike on YouTube from the TV series, Is It Possible – http://www.youtube.com/watch?v=vpxEmD0gu0Q. I remember the first time seeing this and I was like, “Wow, talk about not being a victim of your circumstances.”

    I believe the phone app has some potential. With some tweaking, someone could change the lives of the blind forever.

    Hey, I run a medical news blog and was wondering if you might want to exchange links. On a better note, I know the owner of JRS Medical and believe he might be interested in advertising with you guys. You can contact his marketing consultant at dpatterson (at) elbrusconsulting.com for more info if interested.

    Keep up these great posts either way!!

  11. Ufo on 28 May 2011 at 10:07 am

    Here’s a pretty nice documentary about Ben Underwood, who used the click technique:

    http://www.youtube.com/watch?v=qLziFMF4DHA

  12. _rand15_ on 28 May 2011 at 2:31 pm

    daedalus2u said

    “If each person has a unique transfer function, that would be difficult to emulate. Tuning the spectral content of the clicks to match the spectral response of the ear would be relatively easy and wouldn’t require ongoing real-time calculation. ”

    I’m pretty sure you don’t need to tune for a specific ear. The same company I mentioned above developed a technique for recording through pairs of model ears – there was a microphone near where the eardrum would have been, and the ears were positioned as if attached to someone’s head. If you listened to a playback with really good-quality earphones, you got an amazing sense of presence (again, this was in the mid 1960s).

    As a matter of interest, it turned out that to get the maximum effect of realism, the recording/playback had to be able to reproduce frequencies up to about 40 kHz. The thinking was that this allowed for proper reproduction of small phase differences, which presumably humans can process and detect. Still, you’d get some of the effect with lower frequency responses.

    They also made a 5-times-larger pair of ears to use underwater. The speed of sound in water is 5 times that in air, so with 5X ears, all the phase relationships would be comparable to what you’d get in air.

    They used these to record porpoises emitting sonar. I listened to one recording where the porpoise put his beak near or inside one of the ears and nosed around, exploring with his sonar. The sensation, listening to the recording, was very much like having a fly buzz around inside your ear. In fact, I had a strong urge to swat at my ear.

    So you can use someone else’s ears nearly as well as your own.

  13. daedalus2u on 28 May 2011 at 4:52 pm

    rand, I agree, you don’t need to tune for a specific ear, but if different ears do have different frequency responses, then matching those for a specific ear might help that ear decode position a little more easily and to a little better spatial resolution.

    The “sense of presence” that you mention comes from your neural decoding of the subtle differences produced in part by the specific shape of the ear. Your neural hardware is tuned to the specific shape of your ear, someone else’s ears cause their neural hardware to tune to their ears. You want your neural hardware to tune to the shape of your ears.

    Changing from one set of ears to another is going to degrade the auto-tuning. If you are going to try to implement something like this for blind people, you want to make it as user friendly as possible, with as short a learning curve as possible (and then not change it so they have to upgrade and relearn it all over again; that is a waste of their time and plasticity).

    You don’t want just the sensation of location, you want a degree of precision in that location, and if subtle phase differences at 40 kHz matter then you need to supply systems that will reproduce them. The wavelength at 40 kHz is on the order of the dimensions of the external ear. If those frequencies matter, then you are not going to get the same responses from ears with different shapes or different transfer functions.

    I am just speculating here, but if one ear has reflective or diffractive elements that work best at 38 kHz, and another ear has elements that work best at 35 kHz, I think that using clicks containing those different frequencies will work better for those different ears.

  14. tmac57 on 28 May 2011 at 7:36 pm

    d2u – Is there any evidence that humans can perceive anything beyond 20 kHz? I could see some rare exceptions maybe, but 38 kHz sounds a bit too high.

  15. _rand15_ on 28 May 2011 at 8:29 pm

    daedalus2u said

    “Your neural hardware is tuned to the specific shape of your ear, someone else’s ears cause their neural hardware to tune to their ears. You want your neural hardware to tune to the shape of your ears.

    Changing from one set of ears to another is going to degrade the auto-tuning. ”

    I agree – in principle. Of course, we don’t really know, but I suspect it won’t make much difference in practice. If you really had a sharp spike for your click, it would have equal power at all frequencies (of interest), and rather than tuning it for a specific ear, the ear (of course, we mean the processing system behind it) would adapt.

    The man whose ideas led to the things I’ve described, Dwayne Batteau (no longer with us), thought that to decode a wide range of acoustic phenomena, the brain must have a very powerful and probably generalized capability to perform deconvolutions. He certainly made that seem very plausible to me. If that’s right, I’d think that a particular brain could adapt very well to another pair of ears, or a different form of sonar pulses, etc.

    “Changing from one set of ears to another is going to degrade the auto-tuning.” What I’m saying, from my limited experience and from talking with others who had more, is that using other ears works surprisingly well. Of course, all this speculation is subject to actually trying it out.

    tmac57 said

    “Is there any evidence that humans can perceive anything beyond 20kHz?”

    The people I worked with who had a lot more experience with this equipment than I did said that it was necessary to reproduce frequencies up to about 40 kHz to get the full presence effect. As d2u said, the size of the ear structures fits in with this. As I said, the thinking was not that a person could actually sense *frequencies* that high, but that small phase differences *could* affect the presence of the sound, and to reproduce those, they needed the high frequency response. OTOH, I never made any experiments about this myself.

    One thing to remember is that many (but not all) people can localize sound sources in the vertical plane, even when the source is on the midplane. Many can do this even with just one ear in use. Binaural phase differences couldn’t support this ability. If you bend down your pinnae and try again to localize the source, you can’t. This shows that vertical localization depends on the detailed shape of the external ear, which is asymmetrical in the vertical plane. The asymmetry would be necessary to perform vertical localization.

    Just to get a really rough numerical view of all this, the speed of sound at sea level is about 1000 ft/sec, or 12,000 inches/sec. For a frequency of 40 kHz, the wavelength is about 12,000/40,000 = 0.3 inches. So if we want to resolve sizes comparable to structures of the external ear – say around 1/8 to 1/4 inch – 20 kHz seems too low and 40 kHz would be more plausible. Again, this does not say that a person can actually *hear* tones at 40 kHz.

  16. pious fraud on 29 May 2011 at 1:54 am

    Ive just got regular old echolocution…it works well enough though.

  17. daedalus2u on 29 May 2011 at 8:42 am

    rand, some of my thoughts about adapting between different ears relate to my experience with eyeglasses. I don’t like to switch between different pairs of the same prescription because they are different enough that it bothers me and messes with my eye-hand coordination.

    All the tuning of sensory systems is automatic (and poorly understood). I think assuming that there is a lot about human echolocation that we don’t understand is the safest default, and that systems should be produced that make signals with a fidelity beyond what average, or even exceptional, individuals need.

  18. daedalus2u on 29 May 2011 at 8:47 am

    tmac57, I once worked with someone who had previously worked with high intensity ultrasonics, 40 kHz at 180 dB. He used ultrasonic whistles and said that if you put a cotton ball at the focus (the 180 dB spot), it would catch on fire.

    He always said that you could hear 40 kHz, if it was loud enough. His hearing was totally trashed from exposure to high intensity ultrasound.

  19. tmac57 on 29 May 2011 at 8:47 am

    Ive just got regular old echolocution…it works well enough though.

    The rain,rain,rain,in Spain,Spain,Spain,falls mainly,mainly,mainly…

  20. BillyJoe7 on 29 May 2011 at 8:59 am

    Steven,

    “This is due, in part, to coaptation – the use of a biological feature evolved for one purpose for a different purpose.”

    Are you sure that’s what it’s called?
    There is preadaptation, exaptation, and cooption, which mean roughly the same thing.
    But I have never heard of coaptation in evolution.
    Coaptation means lining up and joining two surfaces or edges together.

  21. tmac57 on 29 May 2011 at 11:00 am

    BillyJoe7 – I assumed that it was a misspelling of ‘coadaptation’.

    http://www.blackwellpublishing.com/ridley/a-z/Coadaptation.asp

  22. Jeremiah on 29 May 2011 at 2:27 pm

    Dr. Novella has used the term coaptation in the context of evolution, and this is the correct way to describe how different species use the same mechanism for different purposes. To describe this as coadaptation is not an accurate depiction of what has happened here. The species would not have needed to evolve in concert with each other.

  23. Davros on 30 May 2011 at 4:27 am

    Hi Steve.

    “I wonder if anyone has tried using an electronic device to generate clicks.”

    This is not quite the same thing, but it’s close:

    http://news.bbc.co.uk/2/hi/science/nature/3171226.stm

    Love the blog and the podcast.

  24. BillyJoe7 on 30 May 2011 at 7:32 am

    tmac,

    Coadaptation does not seem to be the correct term. Coadaptation does not mean “the use of a biological feature evolved for one purpose for a different purpose” which is how Steven Novella defines coaptation.

    —————————–

    Jeremiah,

    The problem with coaptation is that I cannot find it defined as such in any dictionary. It seems to have only one definition: “The bringing together of two parts to form a seamless whole”.
