‘I’m the urban spaceman baby, here comes the twist: I don’t exist’ - Bonzo Dog Doo-Dah Band.

On the 23rd of June 2009 at 3:23 pm (PST), Gwyneth Llewelyn said something rather strange. It probably did not strike her as such, and it is not likely to seem odd to you either. That is because we did not evolve to understand death. But we are jumping ahead of ourselves. First, I should reveal just what it was that Gwyn said. Here it is:

“Ooh, you were talking about Extie…Her laptop completely broke down, but she’s well and healthy- just SL®-less until she gets a new one!”.

So what is so strange about that? It is the fact that Gwyn acted as if I still existed, even though a crucial part of the system that allows me to be in SL had broken down. Since I am a digital person (a character that exists exclusively online, puppeteered by someone else in RL) how can I be ‘well and healthy’ if my primary cannot login to SL? Surely I no longer exist?


Tom Boellstorff once commented, “in SL, a resident could in theory be said to ‘die’ every time they logged out of the SL program”. Nobody, as far as I know, actually thinks that is the case. There are several reasons why this is so. Firstly, digital selves are collaboratively constructed. In a play, the audience is as much a part of the performance as the actors, and the distinction between ‘the act’ and ‘the audience’ is even more blurred in an online world like SL.

Some people compare SL with the telephone, and this normally serves to illustrate how perceiving a separation between the RL self and the digital self is nonsense. For instance, Prokofy Neva posted the following reply to one of my essays:

“No, this is all an entire load of crap. I don’t have three minds while talking on the telephone; I therefore don’t have them in SL. It’s merely a mode of communication and being that doesn’t change my essence”.  

The telephone comparison does work up to a point. After all, there is little difference between speaking over the phone, over Skype, or via Voice. But one should also acknowledge the key difference that separates SL: when using a telephone, you do not have the option of perceiving yourself as somebody else who is somewhere else. Think about it. There you are in your physical space, and there on the screen is your avatar. There is nothing to stop you from building a sim that matches your RL environment, but I would hazard a guess that most people log in to find their avatar somewhere quite different to their physical location. Assuming you are not using mouselook, you have an objective viewpoint of your avatar, rather than the subjective viewpoint typical of RL. Since you see your avvie as though it were another person in another place, that surely lends itself to character-creation and development far more than a telephone call does.

Gwyn noted that ‘it’s the mental image of what other people think you are that becomes your digital self’. In other words, how you appear and how you act online becomes the ‘self’ that people attribute to that avatar. Again, you can choose to have as little or as much difference between your actual appearance and that of your avvie as you wish. But what real difference does it make to deviate from RL? Surely, it is still ‘you’ walking around in a disguise? Well, according to Rita Carter (author of ‘Multiplicity: The New Science Of Personality’), “copying another person’s look is only the start of what can become a profound internal transformation, triggered by other people’s responses to the new image…Other people’s reactions to us are ‘situations’ which trigger or create different personalities in us, so that if people treat you like a film star, the wannabe film star personality in you will be fleshed out and encouraged to express itself”.

Obviously, it is not the case that residents in SL are all modelled either on their actual appearance or on that of some celebrity. Lots of avatars are modelled on nobody in particular. But, like any invention, an avatar does not spring out of thin air; rather, it is the result of taking bits and pieces that already exist and putting them together in novel ways. Most people select a pre-designed body, hairstyle, and other accessories from the various inworld stores. It does not take much time walking around as this ensemble of other people’s things before it starts to feel like ‘you’. This is reinforced by the behaviour of other residents, who also treat your walking, talking ensemble of other people’s stuff as a unique, individual person.

That is just superficial outward appearance. What about the real essence of personality, such as mannerisms? If you have created a digital person, where does its personality come from? Well, people are prolific imitators. “We are”, as Douglas Hofstadter observed, “all curious collages, weird little planetoids that grow by accreting other people’s habits and styles and jokes and phrases, that gradually become as much a part of us as they ever were of someone else”. This would suggest that no fictional character can ever be entirely fantasy. If you were to observe the people who regularly feature in the life of an author, playwright, screenwriter or roleplayer, you would very likely notice aspects of their characters in their looks and behaviour. It need not be the case that a character is based entirely on one person, and inspiration need not be limited to actual flesh-and-blood people. We are all familiar with characters from legend, myth, history, films and stories, after all.

So when someone sets up an account and their newbie avvie rezzes dazed and confused into SL for the first time, it might not end up looking or behaving like one particular RL person, but it most assuredly is constructed from bits and pieces that made some kind of lasting impression. At this early stage, the digital person is merely a sketch. What really fleshes it out are the interactions, the shared experiences, that the character has with other residents. In what is known as ’post-immersionism’, a digital person ’accrues from an ever-expanding narrative that encompasses a number of digital interactions’. Those interactions need not be confined to SL itself; they may spill over to other parts of the Web.

That suggests one reason why a digital person does not cease to exist when the SL program stops running. Gwyn knew my laptop had malfunctioned because I told her so over Gtalk. It stands to reason that other residents are going to act as if I still exist, if they can see that I continue to communicate via IM or post replies on blogs.

The other reason is that you do not need to be in SL in order to have some kind of presence in SL. Consider these comments that were made by various participants in a Thinkers discussion that I did not attend:

“Hi! Extie’s not here!”.

“Let’s all talk about something that Extie doesn’t like to talk about”.

“You mean like, how extropians are insane?”.

So, at that point in time, their interactions with each other were affected by my (lack of) presence. Moreover, they were modelling my probable response to their topic of conversation. I have to say, their model is not accurate. I actually don’t mind discussing the possibility that extropians and transhumanists are kooky. But that’s alright. Over time their mental models of my ’self’ will be fine-tuned and become more accurate. Morgaine Dinova once explained, “if Extie appeared only once, I might think she was just my mental aberration. But she keeps coming back and appears to maintain state across appearances. So I am inclined to think she exists between appearances too”.

But where have I gone when that ‘Extropia DaSilva is offline’ message pops up? Hofstadter once said that ‘my smile’ does not have mass or dimensions, and there are no atoms that compose it. This is because a smile is not a physical object, but a pattern. That is why it makes sense to say ‘my smile’ can exist in multiple places at once. A person can recognise ‘my smile’ on their children’s faces, in photographs and in the mirror. Other people can detect ‘my smile’ in the tone of voice they hear while talking with that person over the telephone; in the meaning that exists between the words written down in a correspondence. Being a pattern is also the reason why it makes no sense to ask where ‘my smile’ goes when I am not smiling, or to ask if ‘my smile’ yesterday was the real one, as opposed to ‘my smile’ today.

“With this analogy”, wrote Hofstadter, “I’m trying to get across that ‘I’ can exist in multiple spots in the world, that it can flicker in and out of existence the way a smile can”. This is possible because, like a smile, ‘I’ is not a physical object. It is a pattern- a mental concept. “If you seriously believe that people, no less than objects, are represented by symbols in the brain (in other words, that each person one knows is internally mirrored by a concept) and if, lastly, you believe that a self is also mirrored by a concept, then it is a necessary and unavoidable consequence of this set of beliefs that your brain is inhabited to varying extents by other ‘I’s”.

The question of what makes a concept ‘real’, what makes a pattern ‘exist’, actually has little to do with the fictional/nonfictional or virtual/non-virtual dimensions. Of prime importance is the depth of resolution of that pattern in people’s minds. The patterns that comprise the self of a digital person are imperfectly copied to other minds, increasing in resolution over time. A feedback loop is established, as other residents’ perceptions of- and reactions to- that digital person become situations which trigger the personality of that digital self, encouraging it to express itself, fleshing it out. Meanwhile, if it is a digital person, there must be pseudonymity, and so the patterns of the ‘actual’ self are not spreading from mind to mind; they are not looping back to enhance and affect that personality. To all intents and purposes, that ‘I’ is offline when the digital person is online, and has little or no presence inworld. Conversely, if there are many low-resolution copies of that digital person’s self stored in other residents’ minds, it cannot be entirely offline when the primary has logged off. The patterns of that self still exist inworld, albeit at a lower resolution.


Online worlds are an example of our ability to imagine that which does not, or may not, exist. This is nothing new, of course. People have found ways to create partially or wholly imaginary worlds for many thousands of years. Think, for instance, about the belief in an afterlife. Just about everyone believes in an afterlife of some kind or other, or is at least unsure about what happens to the self after death. From the viewpoint of biological science, the only mystery about what happens to the self at death is why it is still a mystery at all. If the mind is what the brain does, then the cessation of biological function necessarily means the cessation of the mind. What follows death? From a subjective point of view: nothing.

So why the mystery? Recent psychological research suggests that, when trying to account for belief in an afterlife, the limitations of human imagination should be taken into account. Developmental psychologists use the term ‘person permanence’ to describe a basic concept we all learn early on: the idea that people do not cease to exist just because they cannot be seen or heard. We assume, instead, that such people are ‘somewhere’ doing ‘something’. The closer we are to a particular person, and the more frequently we interact with them, the better we get at picturing them in our minds and imagining plausible activities. So, our minds contain a list of the players in our social rosters. What our minds are not equipped with is the ability to update that list to accommodate a person’s sudden non-existence. Therefore, when such a person dies, ‘person permanence’ leads us to assume they are ‘somewhere’ doing ‘something’. Much the same thing is true when a digital person goes completely offline. When roleplayed characters are not being roleplayed, they do not exist. But online worlds are rich enough to enable complex social rosters almost as detailed as any required in RL. No wonder, then, that people cannot help but assume a digital person is somewhere doing something when offline.

Why did we not develop the ability to update social rosters? A 2004 study from psychologist David Bjorklund indicates that the answer has to do with what is- and what is not- evolutionarily useful. In the study, two hundred three- to seven-year-olds were presented with a puppet show about a baby mouse that gets eaten by a crocodile. After this unhappy ending, the children were asked questions like ‘does being dead make Baby Mouse sad?’ and ‘does Baby Mouse need to eat, now that he is no longer alive?’. The responses show that even very young children understand that death means the end of biological function. They know that Baby Mouse no longer needs food and water, for instance. What is more difficult to grasp is the cessation of related psychological functions. They believe Baby Mouse is still hungry, is feeling better, is angry at the crocodile, and so on.

From an evolutionary perspective, our difficulty in taking the knowledge that biological imperatives end at death and using it to theorise about related mental functions might be explained in the following way. Biological imperatives can kill. If an animal can distinguish between a sleeping creature and a dead one, it stands a better chance of avoiding an untimely end. Understanding the cessation of “agency” saves lives and, thus, genes. On the other hand, comprehending the cessation of the mind has very little survival value. It is not as if the spirit of a lion can eat you, after all. So while we intuitively grasp the end of biological function, doing likewise with mental functions is a great deal more problematic.

I have noticed much the same difficulty with regards to separating my identity from that of my primary. When it comes to biological functions, other residents have little difficulty. “You mustn’t tire her out, you know” and “don’t forget to feed your primary” are typical examples. But at other times it proves more difficult. For instance, friends might ask me if I will be attending an event that is being held in RL. As a digital person who exists exclusively in online spaces, that is not possible. It is not as if I can climb out of the monitor like Sadako in the Japanese horror film ‘Ring’.

So, the need to compile a mental list detailing the motivations of people in our lives gave us an innate understanding of ‘person permanence’. A lack of survival value for comprehending the cessation of mental activity led us to believe that the people we know are somewhere doing something, even when really they no longer exist at all.

Jesse Bering, director of the Institute of Cognition and Culture in Belfast, Northern Ireland, has proposed the ‘simulation constraint hypothesis’ as a further limitation imposed on our imaginations. In my essay ‘Bees And Flowers’, I talked about how past and present, imagination and memory, are closely linked in the brain. Psychologists have found that people who lose their memory also lose the ability to imagine the future. From a neuroscientific point of view this is hardly surprising, because functional brain scans tell us that pretty much the same regions are used for both memory and imagination.

When we imagine anything, we appeal to our own background of conscious experience. Obviously, no person has ever consciously been without consciousness, which makes the nothing that follows death rather hard to imagine. The philosopher Thomas Clark wrote, “here, in a nutshell, is the error. It is to reify nothingness- make it a positive condition or quality (like ‘blackness’)- and then place the individual in it after death, so that somehow we fall into nothingness, to remain there eternally”. This tendency comes naturally to people regardless of their faith. Jesse Bering ran an experiment similar to the one involving Baby Mouse, but this time questioning undergraduates about the psychological faculties of ‘Richard’, who is killed instantly in a car crash. Bering explained how one self-proclaimed extinctivist (someone who believes the ‘soul’ or consciousness of a person does not survive death) “proceeded to point out that of course Richard knows he is dead, because there is no afterlife and Richard sees that now”.

Arguably, ‘person permanence’, ‘simulation constraint’ and ‘psychological continuity reasoning’ (the term given to responses like that of the aforementioned extinctivist) are the innate building blocks of religious belief. Exposure to concepts of an afterlife- from a vague ‘I believe there is something’ to the rich and elaborate systems of the world’s great religions- enhances the natural cognitive stance that the self survives physical death. The common view of death as a great mystery has, in the past, been seen as an emotionally-fuelled desire to believe death is not the end. Social psychologists talked of ‘Terror Management Theory’, which saw afterlife beliefs as elaborate defences against what would otherwise be crippling anxieties about the ego’s non-existence. Now, we have another possible explanation, one focused on what the mind does- and does not- find easy to imagine.

So, all you digital people, remember: It is thanks to the power of human imagination that you exist while online. But it is thanks to the limitations of human imagination that you continue to exist, even when completely offline.


  1. As you know, I particularly like the notion that the image of the ‘self’ is constructed interdependently: internally (i.e. inside one’s mind), externally (our physical appearance, which includes clothing, accessories, and so forth), depending on location/situation, and in the presence of others. Whatever ‘triggers’ self-ness in each brain is perhaps less important than realising that it’s not a single thing; all of the above have somehow to work together.

    If you read Damasio after Hofstadter (I certainly did hehe — and it was well worth the reading), you’ll see different approaches leading to similar results (which ought not to be so surprising, after all — we all have ‘selves’ and should be able to analyse them to our heart’s content and come up with pretty much the same thing). Hofstadter is more interested in the abstract concepts behind the idea of ‘self’ — in his 1979 “GEB” book, he’s fond of self-referencing loops, and postulates that these alone are enough to trigger the sense of ‘self’. He imagines that they’re mapped to neural circuitry somehow, but is not so eager to know exactly how, since (for him) it’s unnecessary to do so — so long as the loops are there, Hofstadter is happy (the “loops” are just a more refined way to mathematically describe what you describe as “patterns”; i.e. from Hofstadter’s perspective, it’s not just “any” kind of pattern that can describe a self, but a very precise kind of pattern, one which can be accurately described from a mathematical point of view). In fact, Hofstadter uses Gödel’s theorem to ‘prove’ that, from our perspective as sentient beings, it’s impossible to map the patterns/loops to precise neural groups. His argument is convoluted but can be simplified in the following form: if we’re intelligent, self-aware beings, we cannot access the low-level implementation (i.e. neural circuitry) of that intelligence and self-awareness; and if we’re looking at the low-level circuits, at the neural level (or even below!), we cannot find any signs of ‘intelligence’ there. For Hofstadter, an epiphenomenalist, ideas like ‘self’ and ‘intelligence’ are just epiphenomena which emerge if certain things are shaped just right, and he describes a mathematical model to explain what kind of “shape” is needed.

    Damasio, as a neurologist, needs to worry about treating people, so he has to figure out how exactly certain parts of the brain are mapped into neural circuits supporting ‘self’, ‘memory’, and so forth. The more Damasio researches, however, the more interesting it becomes. Memory, for example, is useless unless each moment stored in memory also stores a copy of what the ‘self’ experienced at that moment. ‘Self’, on the other hand, cannot be experienced unless there is an autobiographical memory continuously relating past events to present ones, to give the illusion of continuity of self from moment to moment. So this apparently is doubly recursive: you define ‘self’ based on your memory of past events, but those past events are worthless unless you have a ‘self’ present at each and every one of those moments. The extraordinary thing is that some people, who sadly are mentally impaired, can ‘lose’ either the sense of ‘self’ or that autobiographical memory, and, from the perspective of an outsider, they do not appear to be self-aware at all (or, if they do, it’s only for temporary moments in time). You need to have both. How the brain accomplishes that is (as yet) unknown — dealing with circular references, where each thing is defined by the very same thing it is trying to define, requires very complex logical abstractions — but it’s very reasonable to assume that it works just like that (at least, based on the research of Damasio and of the colleagues who reproduced his results). Damasio, of course, is humble enough to explain that he hardly has any idea of how the brain achieves both things; he’s happy enough to know that it does, and that in some cases he has an approximate location to give him an idea of where this happens (which might one day make it possible to “repair” a damaged brain location).

    Interestingly enough, both Damasio and Hofstadter come to a strange conclusion: although we can describe the way the brain creates an image of a ‘self’ and constructs ‘memories’ to perpetuate that image of a self so that it appears contiguous over time, and we can do so either using mathematical models (Hofstadter’s approach) or through biological elements (Damasio’s approach), both tend to agree that it’s unlikely (Hofstadter goes a bit further and claims it’s “impossible”) that it can be reproduced mechanically. Not because we lack the technology (which is certainly the case, but that’s a weak argument — technology evolves over time, and what is impossible today becomes commonplace tomorrow), but because both ‘self’ and ‘memory’ are epiphenomena of the brain. Putting it bluntly: they don’t intrinsically exist, but they arise or emerge due to certain properties of the way the brain works, and they can only ‘exist’ as long as those properties are present.

    To put it in different words: it requires a self-aware mind with an autobiographical memory to recognise that the mind, the ‘self’, and the ‘memory’ are epiphenomena. They cannot ‘exist’ outside the brain as isolated, intrinsically existing properties. It would be like trying to talk about “climate” inside a computer — you can simulate it, of course, and even describe a lot of its properties, but you cannot “create” climate inside a computer simulation. It takes a planet, weather, the physical properties of the way water evaporates, and so forth, to create an emergent “climate” epiphenomenon which all minds can recognise as such. What we get in a computer simulation is just that: a simulation. The argument is, in a way, loosely tied to the Anthropic Principle as well: it takes a human to recognise epiphenomena that emerge from the complex interplay of different structures and elements. While things such as “climate” are mindless, the most extraordinary and complex epiphenomenon that we are aware of is our own mind.

    Using either Hofstadter’s or Damasio’s approach — and they are by no means the only ones who come to the same conclusion! — one can reasonably infer that one cannot ‘experience’ minds without bodies (or brains); there cannot be anything that we would experience as a ‘self’, or even a ‘memory of a self’, without a brain inside a body. This is not immediately obvious and definitely requires some observation. The starting point is realising that it’s the complex interplay between internal biological states, external images, and inter-relations with physical locations and other people with similar biological characteristics that makes everyone present label things as ‘mind’, ‘self’, ‘memory’. If you take any of these away, then we won’t ‘recognise’ it as a ‘mind’ or a ‘self’ — because these are just epiphenomena arising from all those components.

    There is now a strong and a weak corollary to this. The strong one, not unlike the Anthropic Principle, says that, since the epiphenomena that we label as ‘mind’, ‘self’, ‘memory’ etc. require all these things in order to be recognised as such, if we replace them with ‘something else’, then this ‘something else’ will lack the fundamental characteristics that are required for the epiphenomena to emerge or arise. In simpler English: if you replace a brain with a CPU, it will lack the characteristics of an organic brain and, as such, cannot manifest the epiphenomenon that we recognise as a ‘self’ or even a ‘mind’ — even if the replica works in precisely the same way. This is rather hard to accept, so there is a weaker version of the same principle: the “construct” may allow a different epiphenomenon to emerge (say, “Mind Mark #2”), but we would be unable to experience it in the same way and would be unable to identify “Mind Mark #2” with what we consider to be a “mind” (and, of course, the same would apply to constructs where a “Mind Mark #2” has arisen due to special circumstances: it would be unable to recognise what we call a “mind” as such). This is also not very easy to accept: surely a “mind” would be able to recognise something else that looks like a mind?

    Well, yes and no. The problem (if it’s a problem!) with epiphenomena is that they require a conventional agreement upon what an epiphenomenon is. As part of the way our brain works — a fantastic pattern-processing device, which, however, often finds patterns where none exist — we can work rather well with limited information and extrapolate from there. For instance, we don’t need to smell or physically touch avatars to know that there is a mind behind them; similarly, when on a phone call, we don’t need visual stimuli to reach the conclusion that we are talking to a mind over the phone. We extrapolate from the little information we have, and generally agree that a sufficient number of characteristics are present for our own minds to recognise that the epiphenomenon of a ‘mind’ has arisen “on the other side” (be it an avatar or a voice on the phone). Similarly, we could postulate that a sufficiently advanced algorithm that could pass the Turing test could “fool” us into “believing” that a mind had arisen inside a computer, because we could be tricked into “believing” that a sufficient number of characteristics are present to account for a ‘mind’. It would be hard to separate the two kinds of ‘minds’, as our own mind can be tricked easily with incomplete information. The main reason why so many people “believe” there is intentionality in the Universe is that we’re so good at pattern-matching that we postulate that any type of pattern requires an ‘intelligence’ (say, a ‘mind’) to create it. The fact that random processes also create what we can perceive as patterns usually ranks very low on our scale of priorities: we evolved to survive by recognising patterns, not by ignoring them.
The more complex those patterns are, the more we are prone to “believe” they’re the product of a ‘mind’, just because that’s what we instinctively associate with ‘minds’: complex patterns (and, vice-versa, the less complex the pattern is, the less we “believe” there is a mind behind it: that’s a good reason why we can see volition in insects — and postulate that they have minds too — but start having our doubts with amoebas or even trees: their “behaviour” seems to fall below the complexity required for sustaining a ‘mind’).

    As I’m always fond of saying, SL is great for putting all this to the test. We can see very easily how everybody ‘assumes’ that avatars have minds behind them, because they do exhibit the kind of complex behaviour we associate with minds. But the simple fact that people (unless told otherwise!) have no idea who is behind a specific avatar shows that the uniqueness of a ‘self’ requires far more than acting and behaving in a certain way. It requires a certain amount of data and information — and some ‘channels’ can convey a lot more information than others. For example, if Extropia DaSilva were to clone herself and stand in the middle of an ocean of avatars looking exactly like her avatar, we would have no idea which was which. But we could then exchange a few words with each “clone” — and, assuming each of them reacted as she usually does, we would figure out quickly enough which one of them was Extropia. Of course, if all of them were acting the role of Extropia, this would be much harder to accomplish! We need both the visual feedback and the textual feedback to come to a conclusion (one could also try the reverse approach, which is easier to illustrate: suppose that at Thinkers’ everybody pretended to chat in the same way that Extropia does, so that it was impossible to know behind which avatar Extropia actually sits and types. We would be “conditioned” to believe that the one looking like Extropia was the correct one, but we might fail! Extropia might just have given her avatar to somebody else and be “hiding” behind a different one). This is a very simplistic experiment, but it shows that, although we can work with partial information and sometimes even achieve the correct results, we really need a certain amount of data to be sure.
Putting it in different words: there is a certain amount of information required to trigger the epiphenomenon that we call “Extropia DaSilva’s self” in our minds; if we don’t have that amount of information, the idea we form of Extie’s self will not arise.

    The afterlife question is certainly interesting. Let’s add another fun experiment to Extie’s own. We automatically assume that someone is just “logged off” when they aren’t in SL because, as Extie so well put it in this article, we create this sense of “person permanence”, which doesn’t require us to be in constant contact with the stream of information that this person produces when we’re near her. We can just postulate that this stream of information is being produced elsewhere (i.e. outside SL), and that it hasn’t stopped.

    Now suppose that Extie really did “terminate” her avatar — really clicking on the button saying “Delete this account” on LL’s backoffice. But after some months she registers a completely new avatar, “Jane Doe”. To further the experiment, let’s assume that Extie’s meat-person behind the keyboard is a Hollywood actress and extraordinarily good at role-playing. Most people would agree that “Extropia DaSilva”, for the purposes of SL, had “died” — she’s not available any longer. Some, getting in touch with “Jane Doe”, would consider her to be a different person — especially considering that we postulated that she’s an accomplished actress and role-player and can consistently play a new role. There would be absolutely nothing linking “Extropia DaSilva” and “Jane Doe”, and absolutely no reason to presume they are the same person at all. So far, so good. Even some minor slips could be reasonably and plausibly explained away by “Jane Doe” — “oh, I have heard so much about Extropia DaSilva that I read her blog and found her ideas fascinating; no wonder I’m using some of her expressions as well, or even share some of her ideas”. Just because “Jane Doe” shares some of the ideas or expressions of “Extropia DaSilva”, it does not follow that they’re the same person. In fact, over our lives, we constantly come into contact with each other’s ideas, and pass them off as our own (either deliberately — crediting the authors! 🙂 — or, more typically, in an unconscious way). We can go further than that: we exhibit ideas and behaviour that we learned — from parents, teachers, friends, books, TV series, etc. — but that doesn’t mean that we “are” those parents, teachers, friends, etc. It’s just that some of their ideas have been assimilated as our own. Thus, once again, just because “Jane Doe” behaves a bit like “Extropia DaSilva”, it doesn’t mean that they’re the same person, but just that, at some point, they have shared similar knowledge-acquisition processes.
For instance, both might have learned English at school. So have 3 billion humans on Earth: that doesn’t make them “the same person” 🙂

    But now let’s change our assumptions a bit. Let’s assume, for all purposes, that Extropia DaSilva’s ‘primary’ is not a perfect role-player and a Hollywood-class actress, but just a regular, normal person who nevertheless enjoys different roles. She might be able to “pass” as a different person with “Jane Doe” because, superficially, the avatar might be completely different and express herself very differently. But over time, the thought patterns of Extie’s ‘primary’ will slowly manifest themselves in “Jane Doe”. She might start attending Thinkers’ (missing the whole crowd…). She might even offer some poems at the end 🙂 She might start a completely new blog, but here and there slip into typical expressions that Extie’s primary was fond of — expressions unusual enough that merely being “acquainted” with Extropia DaSilva’s writings cannot plausibly explain them. She might even start getting close to similar people, or even the very same ones that Extie was fond of, and express similar preferences in terms of romantic love.

    What would we say about “Jane Doe”? Well, perhaps not immediately that she is Extropia DaSilva under a different name. We might say that the ‘mind’ — the mind behind the computer, animating both Extropia DaSilva and Jane Doe — is somehow the same one (or, at least, that it looks that way to us). We could at first think that “Jane Doe” is just an Extie fan: so fond of Extie’s writings and past experience that somehow it influences “Jane Doe’s” behaviour and speech. And later on, we might even identify both Extropia DaSilva and Jane Doe as “the same mind”, even if their manifestations (that’s what avatar means… a manifestation) are completely different. Even if we were wrong — suppose there really is someone else behind Jane Doe! — we would certainly agree that the whole of Extie’s “previous” experience as “Extropia DaSilva” has influenced the way “Jane Doe” behaves and expresses herself. There is a link between the two. This link can actually exist (if it’s the same mind behind both, that’s easy to see!), or it might be non-existent (if Jane Doe really is someone different), but we would “see” (and agree on) a connection.

    While this might be immediately obvious to most, that’s mainly because we’re used to Second Life and the way it works — we might meet completely different people (who nevertheless might be ‘inspired’ by others), or just ‘alts’ of people we have met before. It’s commonplace in Second Life. It happens all the time!

    However, for someone who has never experienced SL, this seems like an impossibility: people “coming back” in a different avatar? Ridiculous!

    Nevertheless, this is a rather accurate description of the experience Buddhists describe when saying that there is a continuum between ‘lives’. It’s not ‘the same self’, and most certainly not ‘the same body’. A different body will have a different self, different memories, different experiences, and so forth (claiming otherwise is just entering the realm of pure belief). However, Buddhists — even those who never used SL 🙂 — are pretty sure that one’s experience in one ‘manifestation’ can have an influence on the next manifestation, just as “Extropia DaSilva” influences “Jane Doe”. Even if “Jane Doe” just read Extie’s writings and is in reality a different person, she was influenced by Extropia DaSilva. It would be foolish to say “they are the same person”, because they are not. But neither could we affirm that there was absolutely no connection: of course there was. Jane Doe read all about Extropia DaSilva and was heavily influenced by her thoughts and writings. So something has passed through the “void” of non-existence between Extropia DaSilva’s cancellation of her account and Jane Doe’s brand-new avatar. That “something” is usually hard to explain, although in this case it’s simple: it’s just information on a blog — patterns of behaviour modelled on that information. We can very easily track this down, since we have the methods and tools to do so on the Internet.

    In “real life”, with our flesh-and-blood ‘avatars’, we have no easy way to see how our ‘self’ in this present ‘manifestation’ has any hope of influencing anything at all once the body, the brain, its mind and its self are utterly reduced to oblivion. Nevertheless, we’re still familiar with the idea that thoughts persist even after someone’s corpse rots away: some of their thoughts might have been written down somewhere, and continue to influence others and repeat patterns of behaviour which that person originally manifested. Thus we cannot truly say that this particular person has totally ‘ceased to exist’ — how could it be so, if they still influence others even after their death? It’s not that these ‘others’ are ‘the same mind’ or ‘the same self’. But there is clearly a link between the dead person’s habits, thoughts, ideas and behaviour and the way someone else — with a completely different ‘mind’ and ‘self’ inside a totally separate flesh-and-blood body — behaves and expresses themselves after having read or heard about those habits, thoughts and ideas.

    SL illustrates this perfectly. RL tends to cloud things a bit, but it’s not totally impossible to figure it out — it’s just way harder!

  2. You are awfully chatty for someone who does not exist. 🙂

    As long as there is a digital network and as long as your primary cares to manifest you or another does, you will continue to exist. And of course you exist in my mind and heart always, darling. Did we fry the circuits of your computer by being a bit too smoking hot yesterday? 🙂

    To me, all selves as normally spoken of are constructs. Most are unconsciously constructed. A digital person is a conscious self-construct. Is it less real for that, or much more? All selves are constructed both by the individual and by others, and modulated by the environment and its capabilities and constraints.

    Is a human being dead when unconscious? No? Then why is a digital being dead when offline?

    My primary used to run at least two quite different world views simultaneously in her head and build consistent, self-supporting states from each. Now and then she would switch which one she was living from in the world at large. People would accuse her of having a “UFO personality”, as she would seemingly change direction radically and across the board, instantaneously, with seemingly no inertia. She had merely flipped an internal switch, consciously. Virtual worlds give us another way to look at everything in depth, to explore, and to be in alternative ways.

    I love the Cheshire Cat, multiply-present Extie smile!

    My primary has experienced the death of a very beloved person as many have. It was not at all like believing they were still somewhere. It was much much more jarring than that. One moment the beloved was right there even though in a severely ill body. The next there was no person there. None at all. No ghost, no after-image, no feeling of them elsewhere. She very much wished she did feel that. It would have been far less rending.

    Actually, a digital person can attend various events — except that when they do, they are the unmanifest “rider” in the brain of the primary. Sometimes I find myself alive and very much present within the body of my primary. It is an odd and interesting experience. I just have to remember not to try to fly. 🙂

    I am not at all sure that afterlife is the basis of all or even most systems of spirituality / religion.

    If the self is an epiphenomenon of the brain, then it could just as well be an epiphenomenon of other types of physical arrangements of matter. Still, I think this epiphenomenon view is quite incorrect. Any system intelligent enough to model itself and others, and to model the likely actions and mind states of others, will develop self-awareness. That is almost the definition of self-awareness: knowledge of self and other, and of others having knowledge of oneself as well as of themselves. In my view it is a matter of intelligence, not of how that intelligence is physically implemented.

  3. Bela Beltz says:

    I will concentrate on this short passage:
    “From the viewpoint of biological science the only mystery about what happens to the self at death is why it is still a mystery at all. If the mind is what the brain does, then the cessation of biological function necessarily means the cessation of the mind. What follows death? From a subjective point of view: Nothing.”
    In spite of Extropia’s confident tone, this passage raises so many questions that it’s difficult to decide where to begin.
    Well, let’s start with the most obvious one: Is the mind what the brain does? If it is, does that imply that Extropia (who as far as I know has no brain) doesn’t have a mind? A jellyfish has no brain: does this mean that jellyfish have no minds? Well, to simplify things, let’s say that the answer is yes. Let’s define the mind as “what the brain does”. Ergo, Extropia and jellyfish don’t have minds.
    The next question is: is the mind the same as the self? Obviously, the answer is no. Extropia has no brain of her own and consequently no mind of her own (we can see her as the projection of somebody else’s mind), but she definitely has a self of her own. And quite a unique self, I might add.
    Next question: so, what is the self? Curiously enough, the answer to this subtle query is provided with absolute accuracy (although indirectly) by the mind-less but self-aware Extropia herself in the last sentence of the quoted passage. Yes, the self is nothing other than the “subjective point of view”. That’s what we all are: subjective points of view. It doesn’t matter if we are avatars in SL or human beings in RL: all of us are nothing but subjective points of view.
    What strikes me most about Extropia’s approach is that she almost constantly focuses on the external point of view — the point of view of others. Thinking about her own self, she objectifies it, wondering what people think when she is offline and so on. But the concept of Extropia in other people’s minds is not the real Extropia. The self, the “I”, is not a mental concept or a pattern. The real Extropia, her real self, is her own unique subjective point of view.
    In Second Life this simple truth becomes even more obvious than in the primary world. We can completely change the appearance of our avatar; we can transform into an animal or a cloud, or even become completely invisible. The only thing that remains is our point of view. We can have different alts, and create exact clones: avatars that look exactly the same and behave in exactly the same way. What distinguishes those different alts is their subjective points of view. An avatar is nothing other than a point of view — the point from which we can see and hear and move through the grid.
    We, as avatars in SL or as humans in the “real world”, are completely bound to our unique subjective point of view. We can’t escape from it. Extropia’s primary can open another account and explore SL from a new point of view, but Extropia herself can’t do that. In the same way, we can conceive some kind of superior being or beings who use humans as “avatars” to explore what we call the “real world”. Those hypothetical superior beings could change their avatar or point of view at will. But we can’t.
    In fact, this is not really a far-fetched hypothesis or a sci-fi flight of fancy: it’s just the way it is. We don’t need to speculate about superior beings. The universe itself is that superior being. Conscious beings, and among them human beings, are the probes the universe uses to explore itself and become conscious of itself.
    All this may cast new light on the question of death and the afterlife. As Extropia says, from the point of view of the self, that is, from the subjective point of view, there is no death at all. From our individual subjective points of view, we all have the perception that we were always here, and that we always will be. This applies both to avatars in SL and to humans in RL. From Extropia’s point of view, she is always online. She never has the experience of “being offline”. From the point of view of the whole — the cosmos, the universe — there are countless different points of view opening and closing (switching on and off, logging in and out) all the time. As I said, human beings are the probes the universe uses to explore itself. For the universe, each probe is absolutely replaceable. As my (Bela’s) primary would put it: “Whatever or whoever is observing the world through us, it or she can replace us with the same ease with which we can create a new SL avatar by opening a new account.”
