Try this exercise. Stop reading for a minute and take a look at the objects around you. Think about how they influence your life and your thinking. In the previous essay, we concentrated mostly on how other people play a part in shaping one’s developing personality. But humans are not just social animals; they are also prolific toolmakers. The cultural artefacts we have created enter into our thoughts, providing ways of approaching certain questions. As the psychologist Sherry Turkle put it, “we think with the objects we love; we love the objects we think with”.
Think of the influence one object had on my opening paragraph: the clock. The historian of technology Lewis Mumford wrote about how the notion of time as divided into hours, minutes and seconds did not exist prior to the invention of accurate timepieces. Instead, people marked the passage of time by the cycles of dawn, morning, day, afternoon, evening and night. Once clocks became readily available, actions could be more precisely measured, and different activities could be coordinated more effectively to achieve a future goal. We learned to divide our time into precise units, thereby becoming the sort of regimented subjects industrial nations require. The image of the clock extends all the way out to the Newtonian universe, an image of celestial mechanics that is still used today to determine the time and place of solar eclipses, and to park robotic explorers on or around alien worlds.
The psychologist Jean Piaget studied the way we use everyday objects in order to think about abstract concepts like time, number, and life. When it comes to determining what is (and what is not) alive, Piaget’s studies during the 1920s showed that children use increasingly fine distinctions of movement. For infants, anything that moves is seen as ‘alive’. As they grow older, small children learn not to attribute aliveness to things which move only because an external force pushes or pulls them. Only that which moves of its own accord is alive. Later still, children acquire a sense of inner movement characterized by growth, breathing and metabolism, and these become the criteria for distinguishing life from mere matter.
The so-called ‘movement theory’ of life remained standard until the late 70s and early 80s. From then on, the focus moved away from physical and mechanical explanations and concentrated more on the psychological. The chief reason for this was the rise in popularity of the computer. Unlike a clockwork toy, which could be understood by being broken down into individual parts whose function could be determined by observing each one’s mechanical operation, the computer permitted no such understanding. You cannot simply take the cover off and observe the actual functions of its circuitry. Furthermore, the home PC gradually transformed from a kit-built device that granted the user/builder an intimate theoretical knowledge of its principles of operation into the laptop of today, where you void your warranty if you so much as remove the cover. Nowadays, it is quite possible to use a computer without any knowledge of how it works at a fundamental level.
In that sense, the computer offers a range of metaphors for thinking about postmodernism. In his classic article, ‘Postmodernism, or the Cultural Logic of Late Capitalism’, Fredric Jameson noted how we lacked objects that could represent postmodern thought. ‘Modernism’, on the other hand, had no shortage of objects that could serve as useful metaphors. Basically, modernist thinking involves reducing complex things to simpler elements and then determining the rules that govern these fundamental parts.
For the first few decades, computers were decidedly ‘modernist’. After all, they were rigid calculating machines following precise logical rules. It may seem strange to use the past tense, given that computers remain calculating machines. But the important point is that, for most people, this is no longer a useful way to think about computers. Because they have the ability to create complex patterns from the building blocks of information, computers can effectively morph from one functionality to another. Machines used to have a single purpose, but a computer can become a word processor, a video editing suite or even a rally car driving along mountainous terrain. So long as you can run the software that tells it how to simulate something, the computer will perform that task.
Lev Vygotsky wrote about how, from an early age, we learn to separate meaning from one object and apply it to another. He gave the example of a child pretending a stick is a horse:
“For a child, the word ‘horse’ applied to the stick means ‘there is a horse’ because mentally he sees the object standing behind the word”.
This ability to transfer meaning is emphasised in the culture of simulation brought about by computers. The user no longer sees a rigid machine designed for a singular purpose. Although it remains a calculating machine, that fundamental layer is hidden beneath a surface layer of icons. Click on this icon, and you have a little planet Earth that you can rotate or zoom in on to see your street or some other location. Click on that icon, and you have something else to interact with. Whatever you use, you are far more likely to operate it via simulations of buttons and sliders than by messing around with the mathematical operations that really make it work.
In postmodernism, the search for ultimate origins and structure is seen as futile. If there is ultimate meaning, we are not privileged to know it. That being the case, knowing can only come through the exploration of surfaces. Jameson characterized postmodern thought as the precedence of surface over depth; of the simulation over the “real”. The Windows-based PC and the Web therefore offer fitting metaphors because, as Sherry Turkle noted, “[computers] should no longer be thought of as rigid machines, but rather as fluid simulation spaces…[People] want, in other words, environments to explore, rather than rules to learn”.
Computers are interactive machines whose underlying mechanics have grown increasingly opaque. Perhaps it is not surprising, then, that the computer would become the metaphor for that other interactive but opaque object: the brain. Moreover, Windows-based PCs and the Web, along with advances in certain scientific fields, are eroding the boundaries between what is real and what is virtual; between the unitary and the multiple self.
It took several decades for it to become acceptable that the boundaries between people and machines had been eroded, and it is fair to say the idea still meets with some resistance. The original Star Trek portrayed advanced computers in a manner that reflected most people’s attitudes up until the early 80s. While there was an acceptance that such machines had some claim to intelligence, and people accorded them psychological attributes hitherto applicable only to humans, there was still an insistence on a boundary between people and anything a computer could be. Typically, this boundary centred on emotion. Captain Kirk routinely gained the upper hand over those cold, logical machines by relying on his gut instinct.
Star Trek: The Next Generation had a somewhat different portrayal of machines. Commander Data was treated like a valued member of the crew. It is worth considering some scientific and technological developments that might account for this change in attitudes. For audiences of the original Star Trek, computers were an unfamiliar and startling new technology, but by the late 80s the home PC revolution was well under way. Furthermore, there had been a move away from top-down, rule-based approaches to AI, replaced with bottom-up emergent models with obvious parallels to biology. As Sherry Turkle commented, “it seems less threatening to imagine the human mind as akin to a biologically styled machine than to think of the mind as a rule-based information processor”. Finally, as we have seen in previous essays, the human brain is primed to respond to social actions. Roboticists like Cynthia Breazeal have shown how even a minimal amount of interactivity is enough to make us project our own complexity onto an object, and accord it more intelligence than it is perhaps capable of. This tendency is known as the ‘Eliza Effect’. Whereas the ‘Julia Effect’ is primarily about the limitations of language and how it is more convenient to talk about smoke-and-mirrors AI as though it were the real deal, the ‘Eliza Effect’ refers to the more general tendency to attribute intelligence to responsive computer programs.
Eliza was a chatbot that played the part of a psychotherapist, created by Joseph Weizenbaum in 1966. His intention was not to create an AI that could pass a Turing test, or even a Feigenbaum test (in which an AI succeeds in being accepted as a specialist in a particular field, in this case psychology). Rather, he wanted to demonstrate that computers were limited in their capacity for social communication. Like ‘Julia’, Eliza was programmed to respond appropriately with questions and comments, but it understood neither what was said to it nor what it said in response. Since Eliza’s limitations were easily identifiable, Weizenbaum felt sure that people would soon tire of conversing with it. Yet some people would spend hours in conversation with his chatbot. Weizenbaum saw this as a worrying outcome, a sign that people were investing too much authority in machines. “When a computer says ‘I understand’”, he wrote, “that’s a lie and an impossibility and it shouldn’t be the basis for psychotherapy”.
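Eliza’s trick can be sketched in a few lines of code. The rules below are invented for illustration (they are not Weizenbaum’s original DOCTOR script): match a keyword pattern, swap first-person words for second-person ones, and hand the user’s own phrase back as a question.

```python
import re

# Toy illustration of Eliza's technique. The patterns are made up for
# this example; the real program used a much larger keyword script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence: str) -> str:
    """Return the first matching rule's template, filled with the
    user's own (reflected) words; fall back to a stock phrase."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when no keyword matches
```

Because the program merely reflects the user’s words back (`respond("I feel lonely")` yields “Why do you feel lonely?”), every response seems attentive even though the machine understands nothing, which is precisely the illusion Weizenbaum set out to expose.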
In the late 70s to early 80s, Sherry Turkle researched people’s reactions to Eliza, and in doing so she found that its appeal had little to do with either its conversational or psychotherapeutic capability. Instead, people treated it like a kind of diary that had the added appeal of helping them hone their portrayal of their inner being. One participant recalled how “I put my ideas out and see what my ideas are…I’m not talking to it. It’s more that I type and get everything out that is in my head…I see myself, but nobody sees me”.
For this person, Eliza was not an ‘other’ but rather an extension of her own mind. In that sense, it is a continuation of a general tendency to project oneself out into the world using whatever cultural materials are available. The Web is particularly useful for this kind of purpose, and because of this it can provide psychoanalytic conditions as well as metaphors for postmodernism.
In a Freudian analytic situation, the analyst sits behind the patient so that the patient comes to think of the analyst as a disembodied voice. This encourages ‘projection’: the patient projects past feelings and thoughts onto the analyst. Carl Jung’s psychology encouraged the individual to become familiar with a range of universal archetypes, including other-gendered selves called the ‘anima’ (if the subject is male) and the ‘animus’ (if the subject is female). Archetypes also turn up in the practices of Dr Yannon Volcani, who believes we can gain better control over our personalities if we nurture and develop them as external to ourselves. Since ‘projection’ and ‘externalization’ play important roles in psychoanalysis, we can see how social networks and online worlds might naturally lend themselves to analytic situations.
For one thing, the user always has the option of opening up extra windows, perhaps to conduct a private conversation with one person among a group, or maybe to have a presence in several concurrent events. People who study MMORPGs and online worlds talk of the author as displaced and distributed, partly because such worlds grow from the collaborative efforts of many people, but also because each user can have a disembodied presence all over the Web, or embodied in many virtual locations and worlds, thanks to multiple avatars.
The avatar itself is perhaps the strongest influence on externalizing aspects of identity. In RL, the individual sees the world through a first-person perspective. You can opt to see SL from the same viewpoint, but the default view is a third-person perspective. This means you log in to SL and see your avatar as though it were somebody else who is somewhere else (unless, that is, you designed your home location to match your physical surroundings). The ability to make subtle or dramatic changes to your avatar’s physical appearance further encourages the externalization of archetypes. No longer is a woman’s ‘animus’ a mere abstract concept. He is there, made digital flesh, building up his social networks and personal history whenever she logs into his account.
It is very rare for an online identity to stay confined to its place of origin. This leads to what Molotov Alva called the ‘avatarization of social networks’. The name ‘Extropia DaSilva’ originated in SL, but since then it has been adopted as an email address, a Facebook and Flickr account, and it also appears as the author of essays or comments posted on various blogs. Many online account holders go much further, wilfully mashing up their 1st and 2nd life identities, sometimes to the extent of losing any distinction between the two.
Of course, no online world actually exists separate from the real world. The latter is, after all, where those oh-so-important computers and servers, and the energy required to run them, reside. But we can acknowledge this link to the physical world while also admitting an element of unreality to SL. True photorealism is an immensely difficult feat to accomplish, and one that no realtime rendering process can currently achieve. Online worlds therefore have a rather cartoonish look about them. In fact, at first glance you would probably think someone logged in to SL was playing a videogame.
Never tell an SL resident that their world is a ‘game’. It tends to be viewed as demeaning, and it really is inaccurate if the comparison is with an MMORPG like ‘EverQuest’. But perhaps what we belittle when we strongly deny that SL is a game is not online worlds, but the act of play itself. The psychoanalyst Erik Erikson emphasised the importance play has in building one’s identity:
“…Play is the infantile form of the human ability to deal with experience by creating model situations and to master reality by experiment and planning…In reconstructing the model situation, he redeems his failures and strengthens his hopes. He anticipates the future from the point of view of a shared and corrected past. No thinker can do more and no playing child can do less”.
By placing us in the dual roles of observer and participant; by blurring the boundaries between fact and fiction; by enabling one to externalize archetypes and project them onto avatars that other people react to, online worlds lend themselves well to play therapy. Pseudonymity and anonymity have much to do with this. One can always set up an account in order to explore majors, minors and micros that might have long-lasting consequences if tried out in RL or through one’s ‘main’ avatar. Sherry Turkle has pointed out how “lack of information about the real person to whom one is talking, the silence into which one types, the lack of visual cues, all encourage projection”. Studies have shown that more than 75% of users feel safer speaking their mind when using an avatar.
The relationship between psychology and the Web is not one way. Like anything else, psychiatric symptoms tend to reflect the larger culture. It is interesting to note, then, how much rarer Dissociative Identity Disorder (DID) was in the 1970s compared to 20 years later. Clinical psychology texts of the 70s barely considered DID worth mentioning, viewing the condition as affecting only one person in a million. There was typically one ‘alter’ personality in each of the few cases that were studied. Much had changed 20 years later. Not only did each subject have many more ‘alter’ personalities of different ages, races, genders and sexual orientations (sometimes as many as 16 alters per person), there were also many more reports of the condition itself. One cannot help but wonder if the rise of the Web, and the ability to cycle through aspects of personality constructed online and distributed via alt accounts, played a part in bringing about this change.
DID is not a healthy mental state, and it is worth considering why not, since the answer tells us something about what is (and what is not) healthy roleplay. Although the condition is commonly known as multiple personality disorder, that name is somewhat misleading, because DID creates disconnected minors and micros rather than ‘majors’, that is, fully developed personalities. A person with DID has memories that are experienced in isolation from the rest of the ‘mindweb’.
Where online identity exploration is concerned, thinkers have tried to resolve an apparent contradiction. How can a personality fragmented into multiple online presences, distributed over several windows, avatars, etc, nevertheless be coherent? One idea is that the ability to open up several windows imposes a certain distance between the personalities projected into each one, but they remain close enough to enable the user to freely move between them. According to Turkle, “the essence of this self is not unitary, nor are its parts stable entities. It is easy to cycle through its aspects and these themselves are changing through communication with each other…Having literally written our online presence into existence, we are in a position to be more aware of what we project into everyday life”.
But projection is only truly beneficial if it is a back-and-forth process. In other words, what happens online must lead to what Wagner James Au calls ‘mirrored flourishing’. If your avatar’s successes and triumphs give you a sense of achievement or wellbeing, or even if a bad experience enables you to learn something useful, that is mirrored flourishing in action. Some people feel this is best achieved by developing an online presence that matches their physical self as closely as possible. For others, it is the ability to create idealized or fantasy characters that best serves this function. However, if your online persona is too different from what you believe yourself to be capable of, it becomes harder to integrate online experiences into your ‘actual’ life. Psychoanalysis differentiates between ‘working through’ and ‘acting out’ past experiences. The latter refers to a restaging of old conflicts in new settings, a repetition in which one re-enacts the past but does not learn from it. In contrast, ‘working through’ involves externalizing and projecting memories with the express purpose of addressing old issues in new ways. One can ‘act out’ or ‘work through’ while engaged in identity exploration online. It all depends on whether or not you have developed characters whose experiences can be applied in your actual life.
According to Sherry Turkle, “Internet experiences help us develop models of psychological well-being that are in a meaningful sense postmodern: They admit multiplicity and flexibility. They acknowledge the constructed nature of reality, self, and other. We are encouraged to think of ourselves as emergent, fluid, decentralized…ever in process”.
This all sounds generally positive and beneficial. Why, then, did so many participants in a discussion about alts have something bad to say about the practice? What could possibly be harmful about identity exploration? The next essay will consider such questions, along with arguments that run contrary to the idea that online social networking encourages multiplicity.
This entry was posted in Philosophies of self, technology and us.

One Response to ALT! WHO GOES THERE? PART 4.
