Jan 24 2019

Yawning in Virtual Reality

Psychologists are increasingly using virtual reality (VR) in their psychological experiments. It’s very convenient – they can create whatever environment they want with total control over all visual and auditory variables. It’s also safe, so they can study how people respond in traffic without the risk of subjects getting run over.

The meta-question for this research, however, is whether people respond the same in VR as they do in physical reality. That is a research question unto itself, with implications for all other VR-based research.

Based on my personal experience in VR I would guess that it depends. Current VR technology is of sufficient resolution and fidelity that it successfully tricks the brain – your brain believes that you are in the environment you see. I say “your brain” because you consciously know it is VR and not meat-space, but your brain incorporates the visual and auditory sensory streams into its construction of reality as if they were real. So, consciously you may know the difference, but subconsciously you don’t.

Perhaps the best demonstration of this is the Plank Experience – a fun little VR demonstration in which you walk out onto a plank 40 floors up on a virtual skyscraper. You know you are safe in a room, but your subconscious brain buys the visual construction and your emotions react as if you are about to die.

A recent experiment adds a bit more data to this question. Researchers used VR to test yawning. There is a phenomenon of contagious yawning – mammals will yawn when they see other mammals yawn, about 30-60% of the time. So the researchers put subjects in VR where they were exposed to virtual yawning. Sure enough – the yawns were contagious 38% of the time, in the range of previous research.

But there is another well-established phenomenon in which the presence of another person will inhibit this contagious yawning. Subjects will yawn less or try to suppress their yawns, presumably because of a perceived social stigma of yawning in front of others (because it makes you seem rude or bored). So the researchers then put a virtual person in the environment with the subject, but the virtual presence did not inhibit yawning. However, when a researcher was physically present in the room with the subject, even though the subject could not see or hear them (but knew they were there), they suppressed their yawns.

What does all this mean? I think all this is best understood with the realization that our brains use multiple sensory streams simultaneously in order to construct our perception of reality. Different senses affect each other, are compared with each other, and are woven together to create the perception of one seamless reality. This is a powerful but flawed process that can break down, such as with optical illusions, or the little misperceptions of daily life. Incongruity between sensory streams can also cause vertigo or motion sickness, which is still a significant problem with VR.

But our brains don’t necessarily need every sensory stream to create every construction – they will work with what they have. What the VR experience tells us is that a purely visual presentation is sufficient to create a convincing construction of reality, at least in terms of our environment and our movement. Add in sound and it gets more convincing, more visceral and powerful.

What we can say from this specific study is that contagious yawning can be triggered by sight and sound alone (and perhaps even by sight alone), which makes sense. This may be a purely visual reflex.

However, the social behavior of suppressing a yawn when others are present is not a visual reflex. It may require a more convincing multi-modal sensory experience to trigger. Or (what I think is far more likely) it requires conscious belief, not just subconscious construction.

Think back to the plank experience – there is a dramatic disconnect between what the conscious brain knows and what the subconscious brain feels. When it comes to experiencing the sensation of being at an extreme height, the subconscious brain “wins” (at least to the extent that you cannot suppress the fear, even if you can overcome it). When it comes to social cues, perhaps the conscious brain wins. Knowing that there wasn’t really anyone present, even if there were virtual people, was enough, and subjects acted as if they were alone.

The researchers did alter some variables. The avatar meant to convey social presence was either there or not there, looking toward or away from the subject, and either moving or completely still. None of these variables had any effect on yawn suppression. It’s possible that conscious belief trumped the avatar. It’s also possible that the avatar was simply not convincing enough.

What is generally suspected is that the more sensory modalities are incorporated into the virtual experience, the more convincing and therefore visceral it will be. At some threshold, virtual presence will be as real for the experiencer as meat presence. I have experienced this to a limited degree also. For example, during the plank experience, when the doors open to the outside on the 40th floor, I have secretly turned a fan on the person using the VR. The sudden breeze adds to the illusion, making it more visceral.

I have also experienced high-end theme park rides (like Spider-Man at Universal Islands of Adventure) that use multi-modal sensory cues to enhance the experience. The sight of the flame-thrower was combined with a blast of hot air, to incredible effect.

This is where VR is headed. We already have visual and auditory covered, and the visual experience will only get better. The next step is haptic feedback, so that when you grip a virtual object, for example, you feel as if it is in your hand. Also, being able to see yourself more completely (not just your virtual hands) will likely have a powerful effect on the sense of presence.

The real trick is going to be providing vestibular feedback to simulate movement and orientation with respect to gravity. This would not only make the experience feel more real, it could eliminate the limiting motion sickness (people are very variable in this regard – for me it is horrible). This could also partly be solved with an omnidirectional treadmill or similar technology – so you are physically walking when your avatar is walking. Adding in even a little bit of vertical movement would also be a huge improvement – the disconnect between virtual and real vertical movement is the most nausea-inducing for me.

With regard to buying the physical presence of an avatar, I think this experiment left a lot of room for improvement. The avatar was simply standing in front of subjects, moving just enough to indicate it was alive. But if avatars were more realistic and interactive (showed agency), there may be a threshold where we start to treat them as real. Our brains treat things that display agency in how they move as if they have emotional significance. The avatars in this study did not trigger that pathway, but if cartoon critters can, then I think virtual people can.

I think we can expect a lot more VR psychological experiments in the future. It’s convenient for researchers, and the problems can likely be worked out, or at least accounted for. VR psychology research is a technology, and it will improve, just as VR itself will continue to improve. At some point there may be no reason not to do psychological research in VR.