MIND UPLOADING AND THE BODY

  1. In discussions of mind uploading, the body seems absent. Why is this? One reason, I think, is that we assume that, since the brain is ‘the most complex thing in the known universe’, any civilization with the technology to create emulations of particular brains is bound to be able to make nice shiny new bodies to go with them.

    But I also suspect it is because we have an overly simplified view of the body. We think of it as this sort of vehicle controlled by the brain that is just there at the mind’s convenience. This point of view is revealed most strongly in those science fiction and fantasy scenarios where the mind of person A somehow finds itself in the body of person B. In such scenarios the assumption is always that this is really person A and, aside from some initial confusion upon finding a radically different person staring back goggle-eyed in the mirror, no change has occurred to A’s subjective self. That this scenario makes sense to us is testament to how naive we are about the deeply symbiotic relationship of THIS body and THIS brain in making THIS self. If such a body-swapping scenario were to happen, the result would probably be not person A or person B but some confused, disorientated person who is neither.

    How did we come to this naive view of the body? I think the blame can be placed in large part on the approach Western science takes in understanding complex systems. That is, via a mechanistic, reductionist approach in which the system is broken down into simpler components. But each component is still complex enough to warrant a lifetime’s study to understand thoroughly (as thoroughly as it can be understood without understanding the whole of which it is part) and so specialists devote their lives to understanding tiny parts of a complex system.

    Now, the mechanistic, reductionist approach has its uses. The triumphs of Western science are testament to that. But it has its limitations too and, arguably, these limitations are most starkly revealed in the hard problem of consciousness. The hope Western science has always had is that some thing will be found that is responsible for consciousness (it is the soul; it is the brain) and if we can just understand that, we will have cracked the hard problem of consciousness. For all I know this hope may turn out one day to be justified, but I harbour a suspicion that it is a fool’s hope. I tend to think that the reductionist approach of breaking complex things down into manageable components to be studied largely in isolation is a necessary but insufficient step. It is only when we can conceptually assemble those components back into the systems of which they are inseparable parts and see those systems as parts of a network that spans brain-body-environment (in other words, when we can acquire a holistic view of reality in which nothing is separate but all is deeply related and interconnected) that qualia will advance from a mystery (where we do not even know what an answer would look like) to a problem (where we do not know the answer but we do at least have an inkling of what we are looking for).

    The reductionists’ hope that consciousness just is the brain reveals itself in uploading discussions. If only we could map the structure and functions of the brain and copy all that into a great big powerful computer, we could transfer/copy that self. Never mind the body, bin it. They are bound to have lovely bipedal robots and simulations of bodies in virtual environments. Your uploaded consciousness can use those forms of embodiment. This attitude is revealed in the option cryonics offers, ‘neuropreservation’, in which just your head is preserved and your body is left to rot. Of course, they are really only interested in the brain, not the whole head, but it is simpler to preserve a head than to safely separate the brain from it. This is the opposite of what the ancient Egyptians did: they went to great lengths to preserve the body but scooped the brain out through the deceased’s nostrils. Silly Egyptians for not noticing the importance of the brain in creating the subjective self, you might say. But what if we are just as ignorant in our understanding of your body’s role in creating your subjective self? What if THIS brain must be uploaded with THIS body, the one it has always been in inseparable symbiotic relations with, in order for there to be a copy/transference of THIS mind? I rather suspect that if those neuropreserved patients are taken from the freezer, a functional model of their brain is created, and that brain is embodied in a form that does not match the profoundly personal relationship the original brain had with the body it was born with, the result will not be person A waking up, memories and personality intact, ready to carry on enjoying life, but a brand new person, bewildered, confused, and disorientated.

    AFTERWORD.

    Since posting this essay, I came across the following passage in the book ‘Connectome’ by Sebastian Seung. The passage is worth quoting, because it argues that providing a body to go with the upload may not be all that difficult, at least not for any future civilization with the science and technology needed to perform whole brain emulations of high enough quality to count as an upload…

    “Any future civilization advanced enough to create a brain simulation would also be able to handle its input and output. Actually, input and output would be easy in comparison, because the connections between the brain and the external world are far less numerous than the connections within the brain. The optic nerve, which connects the eye to the brain, carries visual input through its million axons. That may sound like a lot, but there are many more axons running within the brain….On the output side, the pyramidal tract carries signals from the motor cortex to the spinal cord, so that the brain can control movement of the body. Like the optic nerve, the pyramidal tract contains a million axons. Therefore, our future civilization could hook the simulation up to cameras or other sensors, or to an artificial body. If these “peripherals” were well crafted, the upload would be able to smell a rose and enjoy all the other pleasures of the real world”-Sebastian Seung.
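    As a rough sanity check on Seung’s point, here is a back-of-the-envelope comparison (a short Python sketch) of the brain’s input/output ‘cabling’ against its internal wiring. The optic nerve and pyramidal tract figures are the ones Seung quotes; the synapse count (on the order of 10^14) is a commonly cited order-of-magnitude estimate, used here purely for illustration.

        # Back-of-the-envelope comparison of brain I/O versus internal wiring.
        # All figures are rough order-of-magnitude estimates, not measurements.

        optic_nerve_axons = 1_000_000       # visual input (Seung's figure)
        pyramidal_tract_axons = 1_000_000   # motor output (Seung's figure)
        io_axons = optic_nerve_axons + pyramidal_tract_axons

        synapses_in_brain = 1e14            # ~100 trillion internal connections (rough estimate)

        print(f"I/O axons:            {io_axons:.1e}")
        print(f"Internal connections: {synapses_in_brain:.1e}")
        print(f"Ratio (internal/IO):  {synapses_in_brain / io_axons:.1e}")
        # On these numbers the internal wiring outnumbers the I/O 'cables' by a
        # factor of tens of millions, which is Seung's point: if you can emulate
        # the inside, hooking up the outside is comparatively easy.

    On those assumptions the input/output problem is tiny next to the emulation problem itself, which is all the quoted passage claims.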


7 Responses to MIND UPLOADING AND THE BODY

  1. Ivy Sunkiller says:

    Oh my, where to start!
    The divide-and-conquer approach works perfectly well in the field that creates the most complex man-made objects in the universe – computers. We, and by that I mean no single individual, don’t know how a computer works from A to Z, not anymore anyway. Heck, you don’t even need to know how a CPU works to write state-of-the-art software – it’s largely irrelevant in high-level programming.

    The experiments we can do regarding out-of-body experiences show us that our mind/brain can adapt to having a different body quite quickly if you just cheat the sensory input, so I really wouldn’t be too worried about the consequences of having a different body (or no *physical* body as such at all).

    Going further, the experiments we can do in VR teach us that having a body with a different appearance does change our personality, but does that mean you suddenly become a different person, or do you just evolve and adapt? If you define a person as a consciousness that has a very specific set of personality traits, then by that definition I am a different person every 5 years if not more often – more than that, I’m a different person when I’m talking with my mom, a different person when doing my professional work, a different person when discussing matters at Thinkers and a completely different person when tormenting toy in the most cruel and wicked ways I can imagine.

    Does it really matter if we start to behave differently upon receiving a new body? What if your consciousness wants to change the personality layer that requires a full body change? I, for one, want to drop this shell I have been “given” and instead become what I *want to be*.

    I guess if you look at it as a method to obtain *just* longevity/immortality then the fact that you suddenly become “someone else” upon acquiring a new body can be a concern. For me – if it doesn’t change the “I”, why do it in the first place? As long as my consciousness is preserved, I’d like to experience being as many different personalities as possible.

    • I’d say that if you start to “behave differently” in a new body, you won’t even be aware that you’re a different person, because you will have lost all references to the previous body you had, and there won’t be any memories of the past that will aid you in keeping the illusion of a continuous mental experience 🙂 (see my comment below to understand why those memories are irretrievably lost)

      Then again, this is exactly what Buddhists claim happens every time we discard a physical body 😉 While consciousness might be preserved or not (I have no idea if that’s the case or not), in the sense of “the ability to perceive and to be aware”, memories are lost when the body is lost, and the notion of an “I” — an emergent property of a specific brain/body combination — is lost as well. So when you are reborn, you have no other option than to start from scratch: there is nothing left from the experience in the previous body. Granted, I’m simplifying things for the sake of the discussion, but I’d say that under your scenario, it’s not “you” that experiences “as many different personalities as possible” when replacing your body; rather, each body has a distinct, different “you”, and cannot be aware of “previous ‘yous'”. What connects all those “yous” is the ability of being aware, but nothing more than that. The “sense of self” is intrinsically tied to a specific brain/body combination and is not preserved across body changes. Again, see below for an attempt to explain why this is the case 🙂

  2. The most subtle state in a system creates unicity. In other words – when dealing with extremely complex/convoluted/subtle systems, especially highly redundant/kludged systems (such as a haphazardly evolved mammalian brain), your assessment would be largely correct – YES, porting the functionality of the brain “in vitro” from the old brain into a new “analogue” (somewhat functionally similar) brain, even with a fairly compatible body, would still leave you with a new, hybridized mind. And probably a confused one too.

    But there is a problem with your line of reasoning. Let’s assume a spectrum of different scenarios on how mind uploading would come to be – it might be the result of organic/slow research into neurological and psychiatric treatments. Or it might be a side effect of AI research. Or it might emerge because “a few hundred billionaires tied up their grotesque investments in a foundation and that money can only be claimed by a re-awakened version of themselves that knows the banking unlock codes”… Or it might be the glacial side effect of a dozen other types of research or market need.

    Whatever the case, “mind uploading” won’t be researched as a goal-distinct type of research for quite a while. It doesn’t exist “in vitro”, since there is no market for “mind uploading” in the current geo-economic context. Nobody will invest a fraction of the required money in a transition from human mind to machine mind. So the inescapable conclusion (for me at least) is that MU will take quite a while (say, till well into the next century) and MU will be a highly context-dependent type of research.

    Context-dependent means that the technology will have quirks. Take the difference between Apple and Microsoft as an example. The two occupy almost the same market niche, yet subtle differences in application create two intensely different media. Now envision the constraints and temperaments of an “upload industry”. The most subtle choices would create wholly differently flavoured upload regimens!

    What if we found that, by 2060, the uploads created from corpsicles and the uploads created from highly paid executives’ destructive scans turned out to be two wholly differently natured kinds of upload? Both (a) have perfect recall and an emotional semblance of their former human lives and (b) are largely human-analogue mental emulates… but what if the two yielded completely differently flavoured results, simply because the procedures were just plain different?

    Add to this a third niche entirely, “therapeutic uploads”, i.e. people who had lethal brain diseases and “gradually” (by incremental prosthetization) became uploads. Then you might have a third, completely different type, as distinctly separate in nature as Fanta and Coke and tonic.

    Or Apple and Google and Microsoft.

    Eventually we WILL have an upload industry, even if it’s a garage one, but my most pessimistic estimate would be well into the 2100s. The big problem, if you go this far into the future when postulating mind uploading, is the dreadful conclusion of spill-over…

    I have tried over and over to explain spill-over to Transhumanists and nobody seems to “get” it.

    Spill-over means the unintended side effects of research. Such as: you create the Internets for world peace and (WTF!) you end up with facebook, reddit, fourchan and torrents. And then you get the world’s most horrific corporate elite-sponsored backlash in the form of ACTA. Now think about it. What if we do get a degree of gradual “uploadaciousness”, where human minds can outsource their own thinking functionality into devices – to the point that some users totally outsource (a kind of cyber-emeritate)? The fall-out in side effects would (must) run well into recursive chains of causality and implications and unintended consequences that take my imagination well over the singularity horizon.

    Minds that think hundreds of times as fast. Minds that have synthetic ultra-memories. Minds that have hyper-multitasking functionality. Minds with hundreds or thousands of smart agent sub-routines. Minds with amazing pattern analysis ….. and minds with all these functionalities (and probably 200 million EVE Online skill points and many hundreds of gigaISK in cybernetics) coming up .. with.. things.. I…can’t…possibly..conceive..of…

    You see why mind uploading ends the human state almost overnight? Ask the guys at the SIAI, they know. And trust me, they are worried shitless about this. They don’t want a sudden nanomed-inspired neuro-revolution to happen overnight, say in 2032.

    It would be total insanity. A world gone mad, overnight.

  3. Re “What if THIS brain must be uploaded with THIS body, the one it has always been in inseparable symbiotic relations with, in order for there to be a copy/transference of THIS mind?”

    No bigger deal than discarding old comfortable clothes, if you ask me. Or an old familiar car, especially if the car does not work so well anymore, or the clothes do not feel so comfortable anymore. I guess it depends on what replacement body we get, in meatspace or VR or both. If it feels better than my current body, I am sure I will get used to it in no time.

    • I am sure that whatever you call “you” will not even understand the concept of having a previous body 😉 It will be just a new identity in a new body without previous recollection of having inhabited another body and having a different identity.

      That is, assuming Damasio et al are right, although a lot of Eastern traditions and cultures would agree with the neurologists as well: switch the body, and the mind is changed as well.

      Of course, you might be able to leave some notes to your “new” self, and possibly there will be some “affinity” with your former self. In a sense it might be like remembering a past dream or a movie you’ve watched: you might have some recollection of what your former flesh-and-blood self looked like and felt like, but it will be vague and misty like a dream. Not much different from people who claim that they “remember” past lives (most don’t; they just think they do). I might also agree that it could theoretically be possible to subject your new body & self to some hypnotic suggestions to “believe” you’re the same person as before.

      In any case, my point is that it will not be like changing clothes.

      Unless, of course, you have attained a transcended mind in your previous flesh-and-blood existence and realise that the physical body you have is nothing more than a lump of organic matter you carry around and can discard at will. But if you have already attained that level of subtle consciousness, one wonders why there might be a need to bother with a specific body, built to specs for some purpose — such transcended minds, which are rarer than neutrinos hitting the Earth, give little value to any kind of physical body anyway and are more than happy to take whatever is available. It’s just us normal human beings that see the advantage of getting a new body (with a new brain) to extend life somehow. And for the ones like us, the more parts we replace — until there is nothing left from the original flesh-and-blood body — the more likely it is that we will have little recollection of our former selves.

  4. OTOH, if your replacement body is decidedly inferior (which is likely in the early days, since early generations of technology tend toward the inferior) you may greatly regret ever having left your gorgeously sensual meatbody.

    I recommend reading ‘Touching: The Human Importance Of Skin’ for a thorough examination of the vital roles the human body plays in our lives. Before I read it I might have agreed that your body can be discarded as easily as old rags. No longer.

  5. What is there to upload? 🙂

    You know, you were expecting that from me… but consider it for a moment. What is the mind, after all? We can sort of describe what it does but not pinpoint it; we know that it is “somehow” embodied in the neural connections in our brain, but we cannot say exactly where. The problem is that what we call a “mind” is just an epiphenomenon of what the brain does. In a sense, it’s like trying to transplant the Earth’s climate — another nice word which just describes an epiphenomenon — to Mars and expecting it to work. After all, we know how most parts of the climate work, don’t we?

    Regardless of the philosophical discussion — we need to know what we are actually going to upload before we brace ourselves for the difficult task of doing it — there is the brain/body issue to deal with. I haven’t read ‘Touching: The Human Importance Of Skin’, but I recently (re)read Damasio. At some point, when describing all kinds of brain lesions and how the victim’s consciousness is affected, Damasio explains that we aren’t “just the brain”. We’re also a nervous system, and this makes a huge difference. But in some cases, even severing part of the nervous system from the brain will not stop the brain from getting some input from the body. Why? Because there is a second (known) mechanism for transmitting information to the brain inside our bodies: the blood circulation with its chemical messages (hormonal or otherwise).

    Damasio explains in quite some detail how the “chemical messaging system” is as fundamental as the neural one: according to him — and of course, as a scientist, he might be wrong; but for now, this is our Western science’s best working hypothesis on how certain aspects of the mind work — our “sense of self” comes mostly from the constant mapping that we get from our body, which is unceasingly fed into the brain, from which we create a map of our own physical selves (even if we’re not aware of that; non-self-aware animals have the same mechanisms of “body mapping into the brain”, too). We cannot work consciously if that constant “body image” is not regularly updated in our brains; Damasio and others have laboured for decades to consistently demonstrate it. But that image is not created merely by sensory input — it also requires the chemical messaging system transmitted via the blood.

    Even though chemical messages are far slower than neural ones, they’re not that slow. We become aware of neural activity about 200 ms after neurons fire in the brain, but blood gets into the brain every second or so — so I’d say blood messages are roughly 5 times slower, which is not terrible. In a computer, the difference between things stored in RAM and things retrieved from hard disk is way larger (disk access is 100-1000x slower than RAM access), and nevertheless we use virtual memory mapped on disks.
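    Purely to put that timing comparison into numbers, here is a tiny sketch. The latencies are the rough figures used above (awareness of neural activity after ~200 ms, blood recirculating in roughly a second) plus ballpark RAM and SSD access times; all of them are illustrative assumptions, not measurements.

        # Crude latency comparison: neural vs. blood-borne signalling in the brain,
        # and RAM vs. disk in a computer, using rough illustrative figures only.

        neural_awareness_s = 0.2    # ~200 ms before we become aware of neural activity
        blood_message_s = 1.0       # blood recirculates through the brain roughly every second

        ram_access_s = 100e-9       # ~100 ns, a ballpark DRAM access time
        disk_access_s = 100e-6      # ~100 microseconds, an SSD-ish ballpark

        print(f"blood/neural ratio: {blood_message_s / neural_awareness_s:.0f}x")
        print(f"disk/RAM ratio:     {disk_access_s / ram_access_s:.0f}x")
        # ~5x versus ~1000x: the brain's two messaging channels are much closer in
        # speed than the memory hierarchy we happily bridge with virtual memory.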

    Mind uploading proponents might just say, “oh, that’s ok, we just need to decrypt the kind of messages that are passed from the various organs in the body via the blood stream into the brain, and then we’ll solve the problem in the same way we’re figuring out what kind of electrochemical signals are transmitted by the nervous system.” Not so fast! As Damasio and others have shown, chemical messages are an entirely different kind of beast. They not only confer information to the brain, but they change the way the brain works. If you want an analogy, this is like saying that blood messages reprogramme the brain — they don’t “merely” send information, they’re like small computer programs which will hack at the brain’s structure and rewire it, make it more sensitive to some kinds of messages and less sensitive to others, and allow it to process certain kinds of input in a different way. These messages even break neural pathways apart and create new ones! Ugh.
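    To make that “reprogramming” analogy concrete (and it is only an analogy, not a biological model): in the toy sketch below, a “neural” message merely propagates a value through fixed connections, while a “chemical” message alters the connection weights themselves, so the same input produces different behaviour afterwards. The class, the region names and the numbers are all made up for illustration.

        # Toy illustration of the analogy only: 'neural' messages carry data through
        # a fixed network, while 'chemical' messages rewire the network itself.

        class ToyBrain:
            def __init__(self):
                # connection strengths between a few made-up regions
                self.weights = {("thalamus", "cortex"): 1.0,
                                ("cortex", "amygdala"): 0.5}

            def neural_message(self, path, signal):
                """Propagate a signal along an existing connection; the wiring is unchanged."""
                return signal * self.weights[path]

            def chemical_message(self, hormone_level):
                """'Reprogramme' the brain: scale every connection and, at a high
                enough dose, create a brand-new pathway."""
                for path in list(self.weights):
                    self.weights[path] *= (1.0 + hormone_level)
                if hormone_level > 0.5:
                    self.weights[("cortex", "hippocampus")] = 0.1

        brain = ToyBrain()
        print(brain.neural_message(("thalamus", "cortex"), 1.0))  # 1.0 -- wiring untouched
        brain.chemical_message(0.8)                               # rewires the network itself
        print(brain.neural_message(("thalamus", "cortex"), 1.0))  # ~1.8 -- same input, new behaviour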

    So this means that severing a brain and placing it into a robot will not just be extremely difficult, it will be next-to-impossible, because you simply cannot replicate everything. Even if there is a step-by-step approach — “let’s see what kind of messages the liver sends to the brain and start emulating these by sending some artificially synthesized chemical messages” — there is a huge problem to overcome: at this stage, Damasio has shown that the brain loses all “sense of self” at the lowest level, and when that happens, higher-level mechanisms which rely on a self-image will not work. That is, your brain might work in a vegetative state, but it will have no self-awareness. It might trigger some automated functions and reflexes which do not require a self-image to be constantly built and updated, but it won’t do much more.

    Even stored memories will break apart. The line of investigation followed by Damasio’s co-workers has shown that each time we store a memory of an event, we don’t merely store the event in itself and its relation to the environment. It goes deeper than that: our self-image is stored as well. What the brain was feeling at that time — and “feeling” is just a rational explanation of a series of chemically-induced “emotions”, according to this line of research — is also stored, related to the self-image. But when some mechanism breaks apart the self-image mapping subroutines in the brain, all these memories vanish. Or rather, they might still be stored somehow in the brain, but they’re absolutely useless for the mind. It’s like programmers having lots of snippets of code with null pointers for lots of subroutines that are not available any more: the code is still intact, but the references are lost, so the programme won’t run. And this is not speculative thinking: neurologists have actually studied cases of poor human beings who, for some reason, had this self-image-updating mechanism break down (or become severely damaged), and saw what happened to the self-awareness of the being that went through this dramatic experience.
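    The “null pointer” analogy in the previous paragraph can be sketched in a few lines. In this cartoon (my own illustration of the analogy, not a model of Damasio’s account), each stored memory keeps a reference to the self-image that was active when it was laid down; once that self-model is gone, the records still exist but can no longer be resolved into anything usable.

        # Cartoon of the 'dangling reference' analogy: memories point at a self-model;
        # break the self-model and the stored records become unusable.

        self_models = {"self_v1": {"body": "original flesh-and-blood body"}}

        memories = [
            {"event": "first day at school", "self_ref": "self_v1"},
            {"event": "learning to swim",    "self_ref": "self_v1"},
        ]

        def recall(memory):
            self_image = self_models.get(memory["self_ref"])  # resolve the reference
            if self_image is None:
                return None   # the record is still 'there', but it no longer runs
            return f"{memory['event']} (as experienced by {self_image['body']})"

        print(recall(memories[0]))   # works while the self-image mapping is intact

        del self_models["self_v1"]   # the self-image mapping breaks down

        print(recall(memories[0]))   # None: the data survives, the reference is dangling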

    Most of these brain lesions, where the self-image mapping is interrupted, are not reversible. But a few (perhaps the most interesting ones) are. Certain chemical changes can also inhibit the self-image mapping, and once the chemicals are depleted, the brain will work again as before. This would explain things like anaesthesia or the effects of certain drugs. I forget which are which, but I remember that some drugs are able to paralyse the nervous pathways that convey pain/pleasure information to the brain, but not the self-image mapping. In those cases, the patient will not feel anything and will be unable to move, but will certainly continue to be aware of themselves and their surroundings. Other drugs have the reverse effect: the self-image mapping will cease, and in that case (if it’s a temporary condition), the patient will not be aware of themselves — even though they might be able to react to external stimulation and even do quite complex tasks like drinking water from a glass. But there will be absolutely no awareness of doing anything. That information, because it lacks the self-image mapping, will not even be stored in one’s memory — so when “coming out” of that condition, the patient will not “remember” anything, because there is nothing to remember: a memory, to be a valid one, requires the self-image mapping data to be present. Without it, memories are worthless in terms of a self-aware being.

    The conclusion is that, while the brain seems to act as the central processing unit of our being, it’s worthless without a body — a human body — to provide it with everything it requires, and we’re not just talking about nutrients and similar life-sustaining mechanisms. The “brain-in-a-vat” thought experiment is a nice idea, but biologically impossible. The whole body is required for a self-aware mind embedded in a brain. While we certainly can work with a deficient body — e.g. the loss of a limb or two will not make our self-awareness stop! — we require “far more body” than the proponents of the “mind-is-just-the-brain” theory like to believe. While the “mind-is-just-the-brain” theory might have been popular in, say, the 1950s or so, current research shows that things are far, far more complicated than that: brain and body make up a unified organism which enables a self-aware mind to “arise”, and you cannot separate one from the other and still expect the resulting mind to be self-aware (not to mention to “feel it’s the same mind”).

    I personally think that Damasio’s model is far too simple — he just postulates a few levels, from firing neurons and processing chemical messages to complex higher-level cogitation — and believe that people like Douglas Hofstadter (of “Gödel, Escher, Bach” fame) have proposed far more complex models which are possibly closer to the truth of understanding how the brain works. Hofstadter’s models require such a huge number of levels that the higher ones cannot be aware of the lowest ones (i.e. neuron-firing); in fact, he makes the bold assumption that you cannot “create minds” so long as there is an awareness of those “lower levels” (his discussion of the subject is far too intricate for me to follow, and assumes a degree of mathematical prowess which is far beyond my ability to remotely understand). He also postulates something quite interesting: two organisms (biological or otherwise) with exactly the same components, working in precisely the same way, will not have the same mind (and he goes on to explain why this is impossible). Damasio alludes to the same theory as well, although he uses different approaches. I can give a simple example which I can understand — genetic twins have the same DNA but don’t think in the same way, and develop different personalities, especially if they’re separated at birth and get different educations, of course (the interesting thing in those experiments is not that they develop different personalities — which is expected — but how much is actually similar in their ways of thinking, even when separated at birth).

    Thus, even if we consider the remote and implausible possibility of “mind uploading” to something else — another organic body; a robotic body; a simulation inside a computer — and assume that we can, in fact, duplicate at the finest level of detail all the organic mechanisms of the brain, the result will not be a “mind upload”. It will have a mind of its own. Hofstadter, at least in the 1970s, believed that we would be able to create synthetic minds somehow, but he was quite clear in his explanation that any mind created synthetically, even if it pretended to duplicate an existing organism (a human being), would be a different mind, and it couldn’t possibly be anything other than that.

    Now of course this is just what current scientific research says. We can speculate and postulate that tomorrow some eminent neurologist comes up with new proof that Damasio, Hofstadter, and a trillion other researchers are all wrong, and offers a new model of the mind which renders current research obsolete. This has happened in science before (and it usually involves a major paradigm shift!), so it’s possible to postulate it. The question is whether it’s reasonable to postulate it at all. In the light of current research, the answer is “no”: we have a good enough understanding of how certain key components of the brain (and of the body, too) work to predict what kind of model of the brain might emerge in the future, and we can narrow the options to a certain set of possible outcomes. The possibilities of “mind uploading” or “brain transplants” (to organic or non-organic bodies) are excluded from those possible outcomes. At best, there will just be a different mind inhabiting the new body, with no relationship to the original mind whatsoever. At worst, we would have created the most technologically advanced vegetable in the world. Neither solution is interesting 🙂

    This doesn’t preclude the possibility of creating new minds from “nothing” (i.e. creating synthetic brains or brain simulations that become self-aware somehow). This is certainly within the possibilities of the current scenarios, and possibly achievable in the kind of near-future predictions that Khannea makes in her comment. But there is a world of difference between creating a synthetic mind and uploading an existing, organic mind into a new system, outside the original brain-body organism. The first is mathematically plausible, and organically might be possible to a degree; the second is both a mathematical and a biological impossibility. We will need new maths and a new branch of biology to emerge before we can even speculate on how such a feat could ever be achieved; and currently, as said, neither current maths nor current biology allows such things to happen.

    As a reply to Khannea, I’d say that “mind uploading” does, indeed, have a market — namely, for corporate tycoons who want to live forever. For now, they invest in eliminating death as their priority. Why? Because it’s achievable. Mind uploading is not. The safer investment is in the first set of technologies, and forget about the rest 🙂
