WE ARE VR (PART ONE)
Flying cars. Hotels in space. Robotic sexual partners. Some technologies seem destined to remain vaporware: great ideas in theory that never become practical, widespread products. But next year could be the year in which we tick at least one much-anticipated item off the list. Finally, we may be getting virtual reality.
VR is, of course, much anticipated by gamers seeking to immerse themselves in virtual worlds as fully as possible. But its usefulness extends beyond escapism. Because virtual reality entails tracking a person as closely as possible, so that movements in reality can be matched to what is happening in computer-generated environments, this technology will collect a wealth of data for anthropologists, sociologists, psychologists, and others interested in people and society. Avatars are useful tools for both communication and storytelling, two of the most fundamentally human activities. Thus they have great potential not just in representing us in online games and social networks, but in how we learn, how we work, and much more besides.
Commercial VR (by which I mean technology that can provide a sense of presence strong enough to convince the unconscious parts of the brain that the experience is real, without inducing motion sickness, and at a price within reach of anyone who can afford the latest games consoles) looks set to begin in 2016 with the launch of hardware like the Oculus Rift and Sony’s Morpheus headset. But VR itself has a much longer history.
IN THE BEGINNING.
So how far back do we have to go to find the origins of VR? Most essays and books I know credit Jaron Lanier with coining the term in the 1980s. In 1985, Lanier founded the company ‘VPL Research’, which produced early VR products like the DataGlove and the EyePhone. However, the Wikipedia entry for ‘VR’ traces its origins back to 1935 and a short story by Stanley G Weinbaum called ‘Pygmalion’s Spectacles’. A couple of decades later, Morton Heilig created the ‘Sensorama’, which could display stereoscopic 3D images in a wide-angle view, play stereo sound, and even produce aromas. Apparently, it was effective enough to impress Howard Rheingold when he tried it some thirty years later while researching his 1992 book ‘Virtual Reality’.
Tracing the origins of VR depends on what you mean by ‘origins’ and ‘VR’: whether you mean the concept of virtual reality or its practical realization, and whether you mean computer-generated reality or imaginary worlds brought to life by some other device (the Sensorama was a mechanical device that used no computers). If we mean the concept, then Weinbaum’s short story is possibly the point of origin for VR (although didn’t Descartes come up with a thought experiment involving a false reality created by an evil demon in his 1641 ‘Meditations on First Philosophy’?). If we mean a computer-mediated device, then the origin is Ivan Sutherland’s 1968 contraption, dubbed the ‘Sword of Damocles’ because it was a head-mounted display (HMD) so heavy it had to be suspended from the ceiling. On the other hand, if we expand the definition to include any device capable of creating an experience of reality, our journey toward the origins of VR would take us back not just decades, or even centuries, but to the very dawn of humankind.
There is a scene in The Matrix in which Morpheus challenges Neo to define ‘real’ and tells him that if one defines ‘real’ in terms of what one can see and taste and touch, then ‘real’ is simply “electrical signals interpreted by your brain”. The implication here is that there is a difference between the world as it is and the world as it is perceived. In other words, the brain by its very nature is a kind of virtual reality machine, one which generates fictional worlds that approximate objective reality.
It seems like an odd assumption. After all, most people like to believe they are in touch with reality, and to live in a make-believe world is the very definition of ‘delusional’. But actually there is a great deal of truth in what Morpheus says. From the 20th century onwards, physicists have amassed a wealth of evidence showing that reality is fundamentally governed by the laws of quantum physics, a realm so bizarre it is famously said that if you think you understand it, you don’t. Well, if reality as revealed by our most precise scientific investigations is counterintuitive to anything we commonly experience, doesn’t that suggest we live our lives in a virtual reality of perceptions created by the mind, one that only approximates objective reality?
There are also good evolutionary reasons to believe that what we experience is only an approximation (a useful fiction, if you will) of the real world. Every moment of the day, reality bombards the senses with what would be an overwhelming amount of information if the brain did not filter much of it out and apply simplifying assumptions. Most illusions work by cleverly exploiting those built-in assumptions.
One of my favourites is the ‘McGurk Effect’. In this illusion, you watch a video clip of somebody apparently saying a word like ‘pass’ over and over again, then repeating a similar-sounding word like ‘farce’. In actual fact, the soundtrack always repeats ‘pass’; only the lip-synching switches between ‘pass’ and ‘farce’. The McGurk Effect illustrates that our eyes can influence what we hear. We grow up learning that the mouth forms specific shapes in order to produce syllables like ‘fa’ and ‘pa’, so when you see someone mouthing ‘fa’ but the soundtrack plays ‘pa’, your vision decides that what the person is saying cannot be ‘pass’, and to reconcile the conflicting information from eyes and ears, your mind makes you hear a different sound to the one actually being played. Illusions like this show us that reality is not just something we experience; it is something our minds create.
Another point to bear in mind is that, as human beings, our reality does not consist solely of living in the here and now. The human mind is designed to wander. Working at the University of California, Santa Barbara, Jonathan Schooler and colleagues have developed methods to determine how often this happens. They found that our minds zone out (that is, wander unconsciously) 15 to 20% of the time and tune out (wander deliberately) 25 to 35% of the time. We all experience being ‘somewhere else’ in our own minds: imagining, remembering, or misremembering places, people and events. Being in two places at once, one physical and one mental, is a very human tendency.
WE LIVE IN AND AMONG STORIES.
Anthropologists have long sought to classify humans, coming up with terms like ‘the thinking animal’ or ‘Man the toolmaker’. But perhaps the most apt description would be ‘the storytelling animal’. Creating imaginary places and people is a truly universal human skill. From the oral storytellers of hunter-gatherer tribes, to the folktales written in ancient Sanskrit, Sumerian or Latin, and on to modern times with millions of books, TV shows and movies, storytelling is evident across all cultures and throughout all human history.
This is such a fundamentally human trait that it may not seem odd, but it is if you think about it. Why would evolution not select against minds that waste time creating imaginary situations, rather than dealing exclusively with the real world? A branch of study known as ‘Literary Darwinism’ seeks to answer that question by comparing the themes of the tales themselves. Far from being specific to each culture, similar themes and character types appear consistently in narratives from all cultures. Anyone who has spent some time ‘people watching’ in Second Life will have discovered that women there tend to be slim, young and beautiful. It is tempting to blame this stereotype on the fashion industry or Hollywood — endless images of impossibly beautiful people fill our streets and homes via billboard posters, magazine covers and TV shows. But precisely the same gender description is encountered wherever you look across the landscape of folktales. No matter the continent or the century, and regardless of whether it is a hunter-gatherer or an industrial society, women are much less likely to be the main characters and more likely to have emphasis placed on their beauty. Meanwhile, male characters are typically portrayed as more active and physically courageous. What these gender stereotypes reflect, it is suggested, are classic signs of reproductive fitness: youth and beauty in females (signifying the ability to bear children), and power and success in males (signalling the ability to provide for a family).
As for the themes, Patrick Hogan (a professor of English and Comparative Literature) has found that as many as two-thirds of the most respected stories in narrative traditions appear to be based on three narrative prototypes. ‘Romantic’ and ‘Heroic’ scenarios make up the two more common prototypes, the former focusing on the trials and travails of love and the latter on power struggles. Professor Hogan dubbed the third prototype ‘Sacrificial’. These kinds of tales focus on agrarian plenty versus famine, as well as on societal redemption. These basic prototypes appear over and over again as humans create narrative records of basic needs: food, reproduction and social status.
That last need, social status, is almost certainly the reason why we have stories in the first place. In order to follow a story, you need an ability to read another entity’s motivations and intentions. Understanding a story, in other words, is a skill equivalent to understanding the human mind. Psychologists have a name for the kind of immersion typified by a weepy movie: they call it ‘Narrative Transport’. Whenever your emotions become inextricably tied to a story’s characters, you are displaying the ability to attribute mental states, such as awareness and intent, to another entity. This ability is known as ‘Theory of Mind’ and it is crucial to social interaction and communal living.
Living in a community requires keeping tabs on who the group members are and what they are doing. It requires interacting with others and learning the rules and customs of society. Storytelling persists because there is no better way to promote social cohesion within groups, or to pass knowledge on to future generations. Stories’ role in establishing the rules of society was demonstrated by a web-based survey of more than five hundred readers. The respondents answered questions about the motivations and personalities of one hundred and forty-four principal characters from a wide selection of Victorian novels. One thing this survey revealed is an evolved psychological tendency to envision human social relations as morally polarized between “us” and “them”. Another was a tendency to view antagonists as a malign force motivated by social dominance as an end in itself, something that threatens the very principle of community.
Theory of Mind is vital to social living, and it develops in children around age four or five. Once we possess it, we tend to make stories out of everything. This tendency was demonstrated in a 1944 study by Fritz Heider and Marianne Simmel. They created an animation of a pair of triangles and a circle moving around a square. Although the shapes had no minds, people nevertheless described the scene as if the triangles and circle had intentions and motivations, making comments like ‘the triangles are chasing the circle’. We have a predilection for making characters and stories out of whatever we see in the world around us.
REMEMBERING THE PAST AND IMAGINING THE FUTURE.
What is going on in the brain as we create and understand narratives? Imaging studies have identified areas of the brain that appear crucial to this ability. The medial and lateral prefrontal cortex are responsible for working memory, which helps sequence information and represent story events. The cingulate cortex is involved in visuospatial imagery and may connect personal experience with the story to deepen understanding. Identifying characters’ mental states seems to be the responsibility of regions such as the prefrontal cortex, temporoparietal junction, and temporal lobes. The activity patterns for story processing differ from those of related mental tasks, such as paying attention or stringing together sentences for language comprehension.
Sometimes, the brain shows very little difference in patterns of activity, even when one would think it would. Apart from people with certain forms of dementia, we all have the ability to recall the past and imagine the future. We also have the ability to tell one from the other. If I imagine a birthday party, for instance, I do not confuse this fantasy with an actual party I attended. Conversely, if I recall a party that I did attend in the past, I know that what I see in my mind’s eye really happened and is not just imaginary.
The fact that we can so easily distinguish memory of the past from imagining the future might lead one to expect different patterns of brain activity for each. Indeed, that is what a team led by Kathleen McDermott expected to see when they recorded the brain activity of subjects as they recalled or imagined a common experience. But what they found was that both tasks produced very similar brain activity. McDermott remarked, “we really thought we were going to see a region that was more active in memory than in future thought. We didn’t find that”. This evidence suggests that our personal past and future are closely linked in the brain.
Why is that? Well, in and of itself, the ability to recall the past is not evolutionarily useful. It only becomes so once you can also plan for the future. Remembering how hungry you were last winter is advantageous only if it convinces you to store away food you find in a current season of abundance in preparation for the coming winter. Our capacity to remember the past evolved to help us imagine and plan for the future. One of the main functions of memory, therefore, is to shuffle scraps of the past around in novel ways to project possible futures.
This constructive nature of memory is believed to be the reason why we are prone to false memories. Professor Elizabeth Loftus wrote, “I’ve spent three decades learning how to alter people’s memories. I’ve even gone as far as planting entirely false memories into the minds of ordinary people — memories such as being lost in a shopping mall… all planted through the power of suggestion”. A simple way to demonstrate false memories is to show a person a list of words such as ‘pillow’, ‘doze’, and ‘sleep’. People can easily be tricked into remembering that the word ‘dream’ appeared on the list as well. However, they do not make the same mistake with unrelated words.
What this type of fallibility shows is that your memory is not a flawless action replay of an event that really happened. Instead, we remember only bits and pieces of our past; we recall the outline of things rather than exhaustive details. We may feel as though we remember certain events fully, but what the mind actually does is imaginatively fill in missing details to construct plausible — but not necessarily accurate — accounts of what happened.
People who study human behaviour have long looked to earlier forms of communication and storytelling in order to better understand what makes us tick. It is perhaps not surprising, then, that virtual worlds and avatars, the latest examples of communication and storytelling tools, are likewise being used to gain better insight into ourselves.
LEARNING ABOUT OURSELVES IN VR
Some studies set out to see whether real-world behaviours carry over into virtual worlds. There was, for example, the ‘Stigma Study’. To be stigmatized is to be less valued by others. From neurological studies, we know that when a person encounters a ‘stigmatized other’, their brain shows a pattern of activity indicating that they feel threatened. In experiments designed to test whether stigmatization carries over into virtual reality, Jim Blascovich and Jeremy Bailenson arranged for participants to meet ‘Sally’ both in real life and in virtual reality, under a variety of conditions:
1: Some participants met ‘Sally’ with a birthmark on her face and also met her avatar which likewise bore a birthmark.
2: Some participants met ‘Sally’ with a birthmark, whereas her avatar had none.
3: Some encountered ‘Sally’ with no birthmark and also met her avatar, which did have a birthmark.
4: Some encountered both ‘Sally’ and her avatar without a birthmark.
The results of these studies showed that people were initially threatened by ‘Sally’ only if she bore the birthmark in the physical world. But within four minutes, participants were threatened only if Sally’s avatar had a birthmark.
What this tells us is that people adjust to virtual reality, accept it as ‘real’, and carry their prejudices over into the computer-generated world. The slight delay also tells us that it takes a short while to adjust to the virtual world before accepting it as grounded reality. In that sense, virtual reality is somewhat like those prism glasses that make the world seem upside down. Not surprisingly, wearing such glasses is a disorientating experience, at least at first. But then the brain adjusts and the subject comes to view their topsy-turvy world as normal. If they then remove the glasses, grounded reality (which, by definition, is the right way up) seems upside down to them. Just as the brain rewires itself to ‘right’ inverted sensory data, it also adapts to VR and behaves accordingly.
As well as using virtual reality to gain insights into human behaviour, we also find instances where the technology is used to alter behaviour. The technologies that make virtual reality possible can be used to create a virtual mirror, which users can approach to observe themselves (or rather, their avatar). But, being a virtual mirror, your reflection can do things that would not be possible in the physical world. It can, for example, morph into another person. In a set of experiments, participants stood in front of a virtual mirror in which they viewed themselves either at their current age or as elderly persons. This was followed by a twenty-minute conversation with another person about their life in the virtual world. Upon exiting the virtual world, participants who had viewed themselves in the elderly condition budgeted twice as much money towards savings compared to those who saw their reflection at their current age.
One thing that virtual reality has long been known for is its ability to create an illusion of closeness. Gamers commonly work with and compete against other players, sharing a space in the virtual world while simultaneously being physically positioned far apart, perhaps on separate continents. As in the case of the virtual mirror, the technology lets us not only reproduce reality but also to do things that would be physically impossible. For example, in the real world it’s not possible for two people to literally share the same space, but in virtual reality there is nothing to prevent the rules governing the computer-generated environment being written so as to allow two bodies to overlap.
Professor Ruzena Bajcsy has developed tracking systems so precise they can capture every joint and movement. This makes it possible to test whether people learn physical movements more successfully by sharing the body of an expert than by watching an expert in a traditional video tutorial. Jim Blascovich and Jeremy Bailenson investigated whether the martial art tai chi could be learned better in a virtual world where body overlapping is possible, and their results showed that subjects who could share the body space of an expert did indeed perform substantially better than those who simply watched a video tutorial.
The ability to quickly and easily make changes to a virtual environment, and particularly the ability to reproduce dangerous situations while totally avoiding the possibility of any real harm, makes virtual reality a very useful tool for treating phobias or helping people come to terms with traumatic events. A common method for treating arachnophobia and other irrational fears is systematic desensitization, whereby the patient undergoes increasingly intimate encounters with the object of their dread. In the case of spiders, this could begin with entering a room containing a spider in a glass tank covered by a cloth. In a subsequent session, the cloth is removed so the spider in the tank is in clear sight. Step by step, the patient is gradually desensitized to the point where they are confident enough to allow a tarantula to crawl up their arm.
The problem with this technique is that it entails working with live creatures that need looking after, and that costs time and money. And, as you can imagine, treating something like a fear of flying is costlier still. But, of course, by using virtual reality the costs can be dramatically lowered and we are also able to exercise a degree of control that would not be possible using live actors or mechanical devices.
Working with Dr JoAnn Difede, assistant professor of psychiatry at NewYork-Presbyterian Hospital, Dr Hunter Hoffman (who had previously found that the escapism of virtual reality is powerful enough to reduce burn victims’ perception of pain by fifty to ninety percent while their dressings are changed) developed a virtual recreation of the attack on the Twin Towers. Little by little, those traumatized by the events of 9/11 encounter increasingly intense recreations, beginning with looking up at the World Trade Center with no airplanes flying by, let alone crashing, and then gradually adding more and more elements, such as explosions and the sounds of people panicking. Using this technique, patients who had not responded to any other psychological treatment showed dramatic improvement.
And, as I said, the cost of doing this virtually is negligible compared to real life. As Moore’s law marches on, the costs keep going down. When Dr Hoffman started working with burn victims in 1996, a decent VR machine cost about $45,000. In 2016, when the likes of the Oculus Rift are expected to launch, a budget of under $1,000 will almost certainly get you a VR setup as good as, if not better than, that 90s example.
IMMERSION VERSUS PRESENCE
To those who have never experienced a VR setup, it may be hard to believe that fictional recreations can help with real trauma. Everybody knows that something virtual is not real, so how can being in a pretend airplane cure somebody’s fear of flying?
But to think that way would be to ignore the power of ‘presence’. Presence is not the same thing as immersion. Videogames have long achieved a sense of immersion: the capability to draw players into the game, investing in their avatar and the challenges they set out to beat. You do not need photorealism to achieve immersion, but you do need consistency of rules. If, for example, the game prevents you from jumping over what looks like an easily clearable obstacle for some arbitrary reason, your attention is drawn to the fact that you are just playing a game. In some ways, realism can work against immersion, because it is much harder to achieve consistency of rules that match reality than in a simple fantasy game that stays true to its own internal logic. You only have to watch the documentary ‘The King of Kong’ (about the two best Donkey Kong players competing to achieve the highest possible score) to see how absorbing simple graphics and consistent rules can be.
But, no matter how engrossing a videogame can be, immersion is not the same thing as presence. The way videogames are traditionally played, the avatar is something you control and the environment it is in is observed from afar, viewed on a monitor. When videogame technology achieves presence, though, you perceive yourself as literally inhabiting a VR environment.
As with immersion, photorealism is not the most important thing for achieving presence. What matters is that the display offers a field of view wide enough that you cannot see its edges, and that the tracking and rendering technology can update the point of view fast enough to avoid noticeable latency. According to John Carmack, “twenty milliseconds or less will provide the minimum level of latency deemed acceptable”, and if it can be reduced to 18 milliseconds or less, the experience will be perceived as immediate, meaning you can move your head and redirect your gaze in a way that feels entirely natural.
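To get an intuitive feel for why a few milliseconds matter, here is a rough back-of-envelope sketch in Python. The head-turn speed of 100 degrees per second is my own illustrative assumption, not a figure from the sources above; the point is simply that latency translates directly into an angular lag between where your head is actually pointing and what the display shows, and it is that lag the unconscious brain notices.

```python
def angular_error_deg(head_speed_deg_per_s: float, latency_ms: float) -> float:
    """How far, in degrees, the rendered scene lags behind the true head pose."""
    return head_speed_deg_per_s * (latency_ms / 1000.0)

# Assume a brisk head turn of ~100 degrees per second (illustrative figure):
for latency_ms in (50, 20, 18):
    lag = angular_error_deg(100.0, latency_ms)
    print(f"{latency_ms} ms motion-to-photon latency: scene lags by {lag:.1f} degrees")
```

On these assumptions, a 50 ms system leaves the scene trailing a fast head turn by about 5 degrees, while a 20 ms system cuts that to roughly 2 degrees; halving the latency halves the lag, which is why every millisecond of tracking-and-rendering time counts.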
DARE YOU CROSS ‘THE PIT’?
A common way of testing whether a VR setup has achieved presence is to have subjects undergo ‘The Pit’. As its name suggests, this is a VR experience in which you find yourself standing before a pit. Not a shallow pit, mind you, but a very long drop. There is also a plank spanning the pit (plus a real plank in the actual room in which the experiment takes place) and people are challenged to walk across the real plank while they perceive themselves as walking across the sheer drop.
Now, obviously, there is no pit in real life and the subjects know this. Nevertheless, according to Blascovich and Bailenson, one in three adults cannot summon up the courage to walk the plank, and those who do try struggle to maintain their balance just as if they really were crossing a long drop. This happens because the brain’s perceptual systems (which operate below the level of conscious awareness) are satisfied that the experience is real. And no matter how much you tell yourself the experience is only virtual, your brain insists you are doing something risky, like standing close to the edge of a precipice.
Tracking and rendering technology has long been used to convince the mind to accept something artificial as natural. The most commonly used example is undoubtedly the telephone. When you say something during a phone conversation, the inbuilt microphone ‘tracks’ your voice by digitizing it, and what the listener hears is not really your voice but a reproduction approximating its sound, ‘rendered’ by his or her phone’s inbuilt speaker. Because the sounds being heard are so close to those of a human voice, the mind believes that is what it is listening to, and we have long since adopted the attitude that a phone call is a direct two-way conversation between people who are physically distant, not a conversation mediated by artificial sounds that repeat what is being said.
So, if tracking and rendering works well enough to achieve presence, the more primitive parts of the brain will be convinced the experience is real. For that reason, VR has proven extremely useful as a tool of therapy for people with phobias and other traumas that can be treated by repeat exposure to whatever threatens them. It also means we have an unprecedented ability to record people’s actions in minute detail and learn more about ourselves and how to make ourselves better people.
I WAS MADE A RACIST
A word of caution, though: sometimes the results are not what was expected. Consider the case of Victoria Groom, a graduate student who worked with Blascovich and Bailenson. She figured that if white participants could become black people in virtual reality, the experience would help reduce racial stereotyping. Instead, perceiving oneself as black in a virtual mirror actually increased people’s scores on standard measures of racism, and this was the case whether the subject was black or white. What this study suggests is that simply assigning a racial identity to somebody makes the stereotype more salient. However, it has been demonstrated that face-to-face contact with members of out-groups, and taking a stigmatized other’s perspective, can reduce racism. In that case, VR systems like those the US Army has developed may prove more useful. The US Army has invested in virtual recreations of foreign cultures, such as those of the Middle East, which function as training grounds allowing soldiers to immerse themselves fully in different customs so as to better understand and interact with locals in respectful ways. With VR, we too can immerse ourselves in cultures different from our own, with far less expense and inconvenience than flying to distant locations, and use the experience to reduce negative stereotypes.
This also suggests that virtual reality could be very useful for educational purposes, and this is what we will look into in part two.