…On Whole Brain Emulation aka Mind Uploading:
The copy issue has been dead for nearly a decade, since the discovery of molecular turnover in the brain: http://www.dichotomistic.com/mind_readings_molecular_turnover.html
It is embarrassing that at a forum like this we still get people who mistakenly believe that a copy is not you. It is like arguing against supersonic flight a decade after the sound barrier was broken.
Your brain is completely replaced, molecule for molecule, every two months. The body imprecisely preserves its structure over time by copying itself using the incoming protein from food. Neurons live for decades, but each neuron is made of microtubules that die like mayflies.
So if uploading or reconstruction from ancestor simulations is just a copy, then YOU ARE ALREADY a two-month-old copy of yourself; your life has been the life of a string of copies who believed they were real.
Some try to argue that it is the holistic process that makes you the original, but that argument doesn't make the upload a mere copy: the process is the very PATTERN of structure that is preserved and maintained by uploading/reconstruction!
The idea that we are a string of parasitic food copies is obviously absurd, since constantly being destroyed and replaced IS life as we know it. This means that ANY upload or software reconstruction of us that restores our information IS the original you; EVEN IF there are multiple instances, EACH is the real you.
Another point: when considering the Simulation Hypothesis, even if uploading only produced twin copies, you are the copy that is going to live forever after your Ancestor Simulation is complete; you aren't the original, who probably never actually existed. In fact, 4D spacetime is so unlikely that it probably only exists in fictitious fantasy simulations run on computers in much more probable kinds of universes.
With all due respect, you are both over-simplifying the issue; I am not sure if you are doing this on purpose or if you simply don't understand the counter-argument.
Both of your positions assume that molecular turnover supports your view of patternism; it does not. You then make the leap to thinking that a copy is sufficient to cause awareness transference; it is not.
There is a fundamental, physical difference between the process of molecular replacement and the copying process you both support and propose as a solution. Molecular replacement is a transformative process that occurs between constituent sub-parts of the molecule, ultimately leading to a replaced molecule. At no point during this transformation has a copy occurred. Your positions ignore what should be clear: the copy has no direct relationship, connection, or interaction with the original at all. It has only an indirect relation to the original, in that it just so happens to be similar (to some arbitrary degree) in form and behavior, or pattern if you prefer.
In other words, the molecular turnover process maintains quantum coherence throughout the life of that brain; i.e. all of the molecules and their sub-parts were at all times entangled together (directly connected/related). On the other hand, the decohered macro state of the copied brain is and was in no real (physical) way connected, related, or entangled with the original whatsoever. The end result is clear: the copy is just a copy; it is not a transfer.
If you look at what Henry Markram did in simulating a cortical column in the Blue Brain project, that was very interesting from a number of standpoints, yet in some ways it didn't do everything some people think it did. In simulating that column, Markram had to dig deeply into the equations of the flow of charge along a single neuron, and he actually published some really cool papers in Biological Cybernetics about adjusting those equations based on the measurements he and his team made.
On the other hand, when you look at what the actual simulation he ran was, you can see that they did not actually simulate the precise input/output behavior of the cortical column. What you'd like to see, ideally, is a simulation where if you feed some input into the column and get some output from the column, you see exact agreement with what you'd get from a real cortical column. They didn't do that; what they did do was create a simulated column that statistically had the same input/output properties as a real column. That's worthwhile and interesting, but it's not uploading a cortical column. Since we don't know the information coding of the column's inputs and outputs, we don't really know if we've gotten everything that's there. Imagine that you simulated the input/output properties of me as a language user in this way: from the statistical standpoint of acoustic analysis it would look like it had the same input/output properties as I do, yet it's missing the information
NEURON software models neuronal cells by modeling fluxes of ions inside and outside the cell through different ion channels. These movements generate a difference of electrical potential between the interior and the exterior of the neuronal membrane, and modulations of this potential allow different neurons to communicate with each other. Several biophysical models for neurons exist, such as the integrate-and-fire model or the Hodgkin-Huxley model.
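For concreteness, the integrate-and-fire idea mentioned above can be sketched in a few lines of Python. This is a minimal leaky integrate-and-fire neuron; the parameters (time constant, threshold, membrane resistance) are illustrative round numbers, not values taken from NEURON or any published model:

```python
def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - v_rest) + R*I.
    Times in ms, voltages in mV; returns (membrane trace, spike times)."""
    v = v_rest
    trace, spikes = [], []
    for step, i_inj in enumerate(current):
        # Euler step: leak toward rest, push up by injected current
        v += (-(v - v_rest) + r_m * i_inj) * (dt / tau)
        if v >= v_thresh:            # threshold crossed: fire and reset
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return trace, spikes

# 100 ms of constant drive (arbitrary current units)
trace, spikes = simulate_lif([2.0] * 1000)
print(f"{len(spikes)} spikes in 100 ms")
```

The Hodgkin-Huxley model replaces the single leak term with voltage-gated sodium and potassium conductances, which is where most of the biophysical realism (and computational cost) comes from.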
Artificial neural networks have pretty much nothing to do with biological neural networks, apart from sharing the same name. They're mathematical constructs connected with each other in a weighted manner, allowing them to take one or more inputs and produce one or more outputs.
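That "weighted mathematical construct" description can be made concrete in a few lines. Below is a minimal forward pass through a tiny dense network; the sizes and weights are arbitrary, chosen only to show the weighted-sum-plus-activation structure:

```python
import math

def forward(inputs, weights, biases):
    """One dense layer: weighted sum of inputs plus bias, squashed by a sigmoid."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid activation
    return outputs

# two inputs -> three hidden units -> one output
hidden = forward([0.5, -1.0],
                 [[0.1, 0.8], [-0.4, 0.2], [0.7, 0.7]],
                 [0.0, 0.1, -0.2])
out = forward(hidden, [[1.0, -1.0, 0.5]], [0.0])
print(out)
```

Nothing here resembles ion channels or spike timing, which is exactly the commenter's point: the resemblance to biology is in the name, not the mechanism.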
EDIT: I have to add, as much as the Blue Brain Project is an incredible and very admirable step towards modeling an entire brain, we are far, far, far away from that goal. All these are models, so they approximate the behaviour of biological cells, but they are in no way complete. Furthermore, there is a strong bias in the "choice" of which neurons these models analyze. Most of the models represent certain areas of the brain (such as the cortex or the hippocampus) of which 1) we have quite a bit of knowledge and 2) are constituted by very organized structures of neuronal cells working together. Other parts of the brain may not be as trivial to model (note that I use "trivial" jokingly; I'm not in any way saying that modeling the cortex is easy!), but I guess the details of this would be a bit outside the scope of SO.
(Ted Berger – hippocampus circuit) By far the most ambitious neural-prosthesis program involves computer chips that can restore or augment memory. Researchers at the University of Southern California, in Los Angeles, have designed chips that mimic the firing patterns of tissue in the hippocampus, a minute seahorse-shaped neural structure thought to underpin memory. Biomedical engineering professor Theodore Berger, a leader of the USC program, has suggested that one day brain chips might allow us to instantly upload expertise. But the memory chips are years away from testing. In rats.
Discussions of memory chips leave Andrew Schwartz cold. A neural-prosthesis researcher at the University of Pittsburgh, Schwartz has shown that monkeys can learn to control robotic arms by means of chips embedded in the brain's motor cortex. But no one has any idea how memories are encoded, Schwartz says. "We know so little about the higher functions of the brain that it seems ridiculous to talk about enhancing things like intelligence and memory," he says. Moreover, he says, downloading complex knowledge directly into the brain would require not just stimulating millions of specific neurons but also altering synaptic connections throughout the brain.
That brings us to the interface problem, the most practical obstacle to bionic convergence and uploading. For now, electrodes implanted into the brain remain the only way to precisely observe and fiddle with neurons. It is a much messier, more difficult, and more dangerous
Modeling some aspect of the brain with the intention of creating a prosthesis might make sense, but it is not the same as transplanting consciousness from wetware to hardware.
Still, replacing damaged tissue with computer hardware that performs a function formerly carried out by neurons is not trivial. However, ultimately, this approach could be used to replace the hippocampus in patients affected by strokes, epilepsy or Alzheimer's disease. And from an artificial hippocampus, using implantable chips to enhance competency seems just down the road? It's just a matter of time?
But, but… before this works you will have to figure out how to connect dendrites and axons of the surrounding brain tissue onto the artificial chip. Not a trivial task! And testing the chip in any sensible way will probably also run into major difficulties, because taking out the existing hippocampus and wiring this device in somehow would cause damage. More damage than it would potentially cure?
Then there's consciousness. Consciousness is not easy to define, let alone create in a machine. The psychologist William James described it succinctly as attention plus short-term memory. It's what you possess right now as you read this article, and what you lack when you are asleep and between dreams, or under anesthesia.
In 1990, the late Nobel laureate Francis Crick and his colleague Christof Koch proposed that the 40-hertz synchronized oscillations found a year earlier by Singer and his collaborator were one of the neuronal signatures of consciousness. But Singer says the brain probably employs many different codes in addition to oscillations. He also emphasizes that researchers are "only at the beginning of understanding" how neural processes "bring forth higher cognitive and executive functions." And bear in mind that it's still a very long way from grasping those functions to understanding how they give rise to consciousness. And yet without that understanding, it's hard to imagine how anyone could build an artificial brain sophisticated enough to sustain and nurture an individual human consciousness indefinitely. Given our ignorance about the brain, Singer calls the idea of an imminent singularity "science fiction."
Koch shares Singer's skepticism. A neuroscientist at Caltech, Koch was a close friend and collaborator of Crick, who together with James Watson unraveled the structure of DNA in 1953. During the following decade or so, Crick and other researchers established that the double helix mediates an astonishingly simple genetic code governing the heredity of all organisms. Koch says, "It is very unlikely that the neural code will be anything as simple and as universal as the genetic code." Neural codes seem to vary in different species, Koch notes, and even in different sensory modes within the same species. "The code for hearing is not the same as that for smelling," he explains, "in part because the phonemes that make up words change within a tiny fraction of a second, while smells wax and wane much more slowly." Evidence from research on neural prostheses suggests that brains even devise entirely new codes in response to new experiences. "There may be no universal principle" governing neural information processing, Koch says, "above and beyond the insight that brains are amazingly adaptive and can extract every bit of information possible, inventing new codes as necessary."
But again, it’s a fantastically long way from there to consciousness uploading. Even singularitarians concede that no existing interface can provide what is required for bionic convergence and uploading: the precise, targeted communication, command, and control of billions of neurons. So they sidestep the issue, predicting that all current interfaces will soon yield to very small robots, or “nanobots.” Remember the 1966 motion picture Fantastic Voyage?
That’s the basic idea. But try to imagine, in place of Raquel Welch in a formfitting wet suit, robotic submarines the size of blood cells. They infiltrate the entire brain, then record all neural activity and manipulate it by zapping neurons, tinkering with synaptic links, and so on. The nanobots will be equipped with some sort of Wi-Fi so that they can communicate with one another as well as with electronic systems inside and outside the body.
Nanobots have inspired some terrific "X-Files" episodes as well as the Michael Crichton novel Prey. But they have as much basis in current research as fairy dust (see "Rupturing the Nanotech Rapture").
“The dynamics behind signal transmission in the brain are extremely chaotic. This conclusion has been reached by scientists from the Max Planck Institute for Dynamics and Self-Organization at the University of Göttingen and the Bernstein Center for Computational Neuroscience Göttingen. In addition, the Göttingen-based researchers calculated, for the first time, how quickly information stored in the activity patterns of the cerebral cortex neurons is discarded. At one bit per active neuron per second, the speed at which this information is forgotten is surprisingly high. Physical Review Letters, 105, 268104 (2010)
The brain codes information in the form of electrical pulses, known as spikes. Each of the brain’s approximately 100 billion interconnected neurons acts as both a receiver and transmitter: these bundle all incoming electrical pulses and, under certain circumstances, forward a pulse of their own to their neighbours. In this way, each piece of information processed by the brain generates its own activity pattern. This indicates which neuron sent an impulse to its neighbours: in other words, which neuron was active, and when. Therefore, the activity pattern is a kind of communication protocol that records the exchange of information between neurons.
How reliable is such a pattern? Do even minor changes in the neuronal communication produce a completely different pattern in the same way that a modification to a single contribution in a conversation could alter the message completely? Such behaviour is defined by scientists as chaotic. In this case, the dynamic processes in the brain could not be predicted for long. In addition, the information stored in the activity pattern would be gradually lost as a result of small errors. As opposed to this, so-called stable, that is non-chaotic, dynamics would be far less error-prone. The behaviour of individual neurons would then have little or no influence on the overall picture.
The new results obtained by the scientists in Göttingen have revealed that the processes in the cerebral cortex, the brain’s main switching centre, are extremely chaotic. The fact that the researchers used a realistic model of the neurons in their calculations for the first time was crucial. When a spike enters a neuron, an additional electric potential forms on its cell membrane. The neuron only becomes active when this potential exceeds a critical value. “This process is very important”, says Fred Wolf, head of the Theoretical Neurophysics research group at the Max Planck Institute for Dynamics and Self-Organization. “This is the only way that the uncertainty as to when a neuron becomes active can be taken into account precisely in the calculations”.
Older models described the neurons in a very simplified form and did not take into account exactly how and under what conditions a spike arises. “This gave rise to stable dynamics in some cases but non-stable dynamics in others”, explains Michael Monteforte from the Max Planck Institute for Dynamics and Self-Organization, who is also a doctoral student at the Göttingen Graduate School for Neurosciences and Molecular Biosciences (GGNB). It was thus impossible to resolve the long-established disagreement as to whether the processes in the cerebral cortex are chaotic or not, using these models.
Thanks to their more differentiated approach, the Göttingen-based researchers were able to calculate, for the first time, how quickly an activity pattern is lost through tiny changes; in other words, how it is forgotten. Approximately one bit of information disappears per active neuron per second. “This extraordinarily high deletion rate came as a huge surprise to us”, says Wolf. It appears that information is lost in the brain as quickly as it can be “delivered” from the senses.
This has fundamental consequences for our understanding of the neural code of the cerebral cortex. Due to the high deletion rate, information about sensory input signals can only be maintained for a few spikes. These new findings therefore indicate that the dynamics of the cerebral cortex are specifically tailored to the processing of brief snapshots of the outside world."
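As a toy illustration of the kind of chaotic information loss the quoted article describes: the logistic map is a standard chaotic system, and two trajectories started a hair apart decorrelate at roughly one bit per iteration. This is not the Göttingen spiking model, just the general phenomenon of sensitive dependence on initial conditions:

```python
def logistic_divergence(x0, eps=1e-10, r=4.0, steps=60):
    """Count iterations until two trajectories of the chaotic logistic map
    x -> r*x*(1-x), started eps apart, separate by half the whole range."""
    a, b = x0, x0 + eps
    for t in range(steps):
        a, b = r * a * (1 - a), r * b * (1 - b)
        if abs(a - b) > 0.5:   # separation comparable to the full state space
            return t
    return steps

t = logistic_divergence(0.3)
# for r=4 the Lyapunov exponent is ln 2: the gap roughly doubles each step,
# i.e. about one bit of initial-condition information is lost per iteration
print(f"trajectories decorrelated after ~{t} steps")
```

The Göttingen result is the analogous rate, computed for a realistic cortical model and expressed per neuron per second.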
This is not all that surprising. Kurzweil mentions chaos many times when discussing how the brain works, for instance: 'the brain tends to use self-organizing, chaotic, holographic processes'; 'there is a great deal of complexity and nonlinearity in the subneural parts of the brain, as well as a chaotic, semirandom wiring pattern underlying the trillions of connections in the brain'; and 'the genome specifies a set of processes, each of which utilizes chaotic methods (that is, initial randomness, then self-organization) to increase the amount of information represented'.
Kurzweil is, of course, a patternist, so presumably this chaotic behaviour is not considered to be any refutation of the idea that the self is a kind of pattern. In 'How Do You Persist When Your Molecules Don't', John McCrone wrote about what were then 'the latest research techniques', such as fluorescent tagging, which revealed a much more frantic level of activity than had previously been suspected. In answering the question set by the article, McCrone said, "no component of the system is itself stable but the entire production locks together to have stable existence. This is how you can persist even though much of you is being recycled by day if not the hour".
For his part, Kurzweil uses the metaphor of patterns made by water as it flows around a rock. At the molecular scale there is a great deal of chaos and turnover as water molecules rush by, but from a wider perspective a pattern emerges that persists over time.
"Kurzweil makes extravagant claims from an obviously extremely impoverished understanding of biology. His claim that 'The design of the brain is in the genome'? That's completely wrong. That makes him a walking talking demo of the Dunning-Kruger effect." – PZ Myers
Thing is, does Kurzweil actually say ‘the design of the brain is in the genome’?
Well, Kurzweil himself replied to Myers’s criticisms, saying “I explicitly said that our quest to understand the principles of operation of the brain is based on many types of studies — from detailed molecular studies of individual neurons, to scans of neural connection patterns, to studies of the function of neural clusters, and many other approaches. I did not present studying the genome as even part of the strategy for reverse-engineering the brain.
I mentioned the genome in a completely different context. I presented a number of arguments as to why the design of the brain is not as complex as some theorists have advocated. This is to respond to the notion that it would require trillions of lines of code to create a comparable system. The argument from the amount of information in the genome is one of several such arguments. It is not a proposed strategy for accomplishing reverse-engineering. It is an argument from information theory, which Myers obviously does not understand.
The amount of information in the genome (after lossless compression, which is feasible because of the massive redundancy in the genome) is about 50 million bytes (down from 800 million bytes in the uncompressed genome). It is true that the information in the genome goes through a complex route to create a brain, but the information in the genome constrains the amount of information in the brain prior to the brain’s interaction with its environment.
It is true that the brain gains a great deal of information by interacting with its environment – it is an adaptive learning system. But we should not confuse the information that is learned with the innate design of the brain. The question we are trying to address is: what is the complexity of this system (that we call the brain) that makes it capable of self-organizing and learning from its environment? The original source of that design is the genome (plus a small amount of information from the epigenetic machinery), so we can gain an estimate of the amount of information in this way.
But we can take a much more direct route to understanding the amount of information in the brain’s innate design, which I also discussed: to look at the brain itself. There, we also see massive redundancy. Yes there are trillions of connections, but they follow massively repeated patterns.
For example, the cerebellum (which has been modeled, simulated and tested) — the region responsible for part of our skill formation, like catching a fly ball — contains a module of four types of neurons. That module is repeated about ten billion times. The cortex, a region that only mammals have and that is responsible for our ability to think symbolically and in hierarchies of ideas, also has massive redundancy. It has a basic pattern-recognition module that is considerably more complex than the repeated module in the cerebellum, but that cortex module is repeated about a billion times. There is also information in the interconnections, but there is massive redundancy in the connection pattern as well.
Yes, the system learns and adapts to its environment. We have sufficiently high-resolution in-vivo brain scanners now that we can see how our brain creates our thoughts and see our thoughts create our brain. This type of plasticity or learning is an essential part of the paradigm and a capability of the brain’s design. The question is: how complex is the design of the system (the brain) that is capable of this level of self-organization in response to a complex environment?
To summarize, my discussion of the genome was one of several arguments for the information content of the brain prior to learning and adaptation, not a proposed method for reverse-engineering”.
To repeat the important point, Kurzweil and other people interested in whole-brain emulation do in fact understand that the genome is not a blueprint for the brain. The '800 million bytes' represents the initial design of the brain prior to the brain's interaction with the environment; in other words, the embryonic/fetal brain. According to Kurzweil, the information contained in a developed human brain is on the order of one billion billion bits.
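A back-of-envelope check of the figures quoted in this thread makes the scale of the gap explicit (all numbers below are Kurzweil's estimates, not measurements):

```python
genome_uncompressed_bytes = 800e6   # Kurzweil's figure for the raw genome
genome_compressed_bytes = 50e6      # his figure after lossless compression
brain_bits = 1e18                   # his estimate for a developed brain

innate_bits = genome_compressed_bytes * 8
ratio = brain_bits / innate_bits
print(f"innate design: {innate_bits:.1e} bits")
print(f"developed brain holds ~{ratio:.1e} times more information")
```

On these numbers, roughly a billionth of the developed brain's information is constrained by the genome; the rest is acquired through development and learning, which is exactly the distinction Kurzweil draws between innate design and learned content.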
To conclude, Myers's comments completely ignore work done with subneural models like synapses and spines, neuron models, in vivo images of neural dendrites, and models of regions of the brain such as the cerebellum, the auditory regions, the visual system, and the olivocerebellar region, not to mention ignoring the development of things like optogenetics, which are enabling us to observe in increasing detail how the brain is constructed and how it processes information. It is a massive collaborative effort between geneticists, neuroscientists, computational scientists, biotechnologists, psychologists and roboticists, with insights and technologies created in one discipline helping to further research in the other specialist fields. When critics parody Kurzweil's proposal as 'uhh, we somehow work out what the brain is doing and model that', they completely ignore the massive amount of real work that is actually going on right now, in many scientific fields, to reverse-engineer the brain. But, then, as Kurzweil points out, "there are more than 50,000 neuroscientists in the world, writing articles for more than 300 journals. The field is broad and diverse, with scientists and engineers creating new scanning and sensing technologies and developing models and theories at many levels. So even people in the field are often not completely aware of the full dimensions of contemporary research".
So, there you go: Myers is an ignoramus whose tunnel vision leaves him hopelessly ill-equipped to assess Kurzweil's arguments.
PDCO LXVIII SAYS:
Why assume it will be apes that AI will be based on? Other brain types might beat them to it. Think carefully about all the stuff that birds can do. It's clear that their brains model reality something like our brains do, maybe exactly like ours do as far as their brains can go, but they don't have sufficient neurons to build a detailed model like ours. However, the basic world-modelling capability is there, even in birds. So if scientists reverse-engineer a bird's brain, then add more neurons and connectivity, and then work out in outline how to wire a brain to confer the capability to amalgamate basic concepts into higher-order abstractions, who knows how far they could go with trial-and-error experimentation?
The question might not be about apes but whether the birds will then treat us like we now treat them 😦
Just in case, let me put it on record that it’s going to be a non-traditional Christmas dinner for me starting from next year 🙂
EMPIRICAL FUTURE SAYS:
The simulation problem scales up exponentially with the number of neurons and connections added; it makes Moore's Law look like a snail by comparison. The whole "digital computers simulating brains" idea is based on a woefully inadequate understanding of collective computation, particularly the mistaken idea that a neuron is some kind of transistor, which it is not, as I've explained in detail elsewhere.
Here’s the thing. When we discuss things like quantum computers, we don’t talk about how cool it would be to simulate quantum computers with digital computers in order to reap the benefits of quantum computation. We talk about building and using native quantum computers to reap the benefits of quantum computation, which is very reasonable.
But when we discuss neural networks, i.e. our brains, we talk about how cool it would be to simulate neural networks with digital computers in order to reap the benefits of collective computation. Not only is it cool; for some reason it's a path to superintelligence. This is not reasonable, because a neural network is as fundamentally different from a digital computer as a quantum computer is different from a digital computer, if not more so. There is a brutal disconnect in rational analysis going on here.
…I don’t think dying is cool – but if outlandish scenarios are presented as a gateway to immortality, unfortunately that doesn’t make the reality of dying any less imminent. It just distracts and/or comforts while we move in that direction anyway.
Even if all of the completely mistaken assumptions that lead up to mind uploading as a reasonable technology are taken at face value, we are still left with the extremely uncomfortable (though apparently quite easy to forget) reality that silicon is just as mortal as biology. Even more than that, digital computing is more fragile than collective computation – when you scale that up to fantastically complex digital simulations of a massively complex neural network like the human brain, those realities come to the fore, big time.
Since we're waving all these very valid concerns away with the magic wand of advancing technology, why not wave them in a much more reasonable direction (though still very aggressive, no doubt), such as an increased understanding, in both hardware and software, of the design of artificial neural networks, or, what I think will occur long before that, improved understanding and treatment of biological mortality.
You sound like you are starting to see things my way (^___^). The power of the brain is why I think the Singularity will come from BCI instead of AGI or nanotech.
Every day, evidence mounts that the neural network architecture of the brain does harness some form of quantum computation itself. A neuron is not a transistor; it is probably a whole quantum cluster network of about 1000 qubits. The neural architecture of our brains is far more amazing than modern science will admit, and results in empirically established phenomena like Psi and observer outcome selection. This makes reverse-engineering the brain more prohibitive while simultaneously making the idea of harnessing neurobiology as a computational substrate much more attractive.
I think that issues like Psi, and the role of tryptamine neurotransmitters in setting the overall signal patterns while remaining illegal to study, mean that software AI and reverse-engineering the human brain are going to take several decades to get where they are going. But with BCI, in just a couple of decades we will begin to knit our brains together. The high-level digital hardware will not be the receptacle of our minds; it will augment and connect our minds, allowing them to literally open up options and harness the computation of the quantum vacuum itself.
I also agree that silicon-based von Neumann architecture is not any kind of immortal substrate for mind; instead, think of things like low-level quantum neural network architectures in actual bio-neural processing substrates. For hardy space travel and long-term survival, the best biological substrate would be fungal mycelial networks [very similar to neurons, with similar neurochemistry] that incorporate graphene in their protein matrix, so that the ribosomes churn out graphene-reinforced polymers as the semi-conductive protein of each hypha. But ultimately you want a photonic substrate that will endure even a possible end of the universe, since photons are timeless perturbations of spacetime itself.
I doubt any of that will be necessary once BCI really blooms. I think it will be a Hylozoic scenario where we get opened up to hyperspace and leap several steps at once, because BCI will allow us to essentially become self-programming quantum supercomputing networks able to shift through observational vantage points in the Hilbert space of our light-cone. But if not, it makes little difference, since the ultimate forms of computation can and will reconstruct any form from any time or parallel history anyway: not through some high-level abstract digital architecture but by harnessing the native digital hypercomputation of the Planck scale http://arxiv.org/abs/0905.1119
EMPIRICAL FUTURE SAYS:
When the subject of simulating the brain comes up, we often talk about serial vs. parallel. And yes, the brain's neural network is massively parallel. But what is generally left out of the discussion is that the brain's neurons are also massively interconnected. Maybe the reason this never comes up is that there is no equivalent for it in digital computers.
Let me give an example. If a neuron has 1,000 outputs, its output can computationally influence 1,000 other neurons. Each of those neurons also has 1,000 connections to 1,000 other neurons. This means that our starting neuron, in the neural equivalent of a couple of clock cycles, can computationally influence a million neurons. Another clock cycle can influence a billion neurons. That is why neural networks deliver immense computational power in a way that is alien to the digital realm. If you simulate this in a digital computer, you have to laboriously model each of these interactions, correctly accounting for the influence of all of the upstream neurons on each of the downstream neurons. So what is done in a handful of clock cycles by a neural network takes billions or trillions of digital clock cycles to model; lord knows how many. The more neurons and connections you have, the more gigantic the challenge becomes, because you need to correctly account for the computational influence of every neuron on every other, even if they are not directly connected.
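The fan-out arithmetic above is easy to check directly. Note that this is an upper bound that assumes no overlap between targets; in a real brain overlap is enormous, since there are only about 10^11 neurons in total:

```python
fanout = 1000      # outputs per neuron, as in the example above
reachable = 1
for hop in range(1, 4):
    reachable *= fanout   # upper bound: assumes every target is new
    print(f"after {hop} hop(s): up to {reachable:,} neurons influenced")
# after 3 hops the no-overlap bound (10^9) already approaches whole-brain
# scale (~10^11 neurons), which is why real influence paths overlap heavily
```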
EMPIRICAL FUTURE SAYS:
We know what consciousness is – it is one of the many near-spontaneous manifestations of biological neural networks performing collective computation.
Could this be "downloaded", or otherwise copied, into another substrate? It is very reasonable to state that this would only make sense if what you are transferring to is another programmable neural network configured to behave more or less exactly like your brain. That means all the neurons behave the same, are wired up the same, and have synapses that behave the same for any given input.
Ideally, if you want performance, this should be a direct implementation in another neural network. This neural network could in principle be simulated in a digital computer, but with gigantically scaled-up resource requirements compared to a direct implementation, and by definition the simulation would run much, much slower.
Any discussion about how much your perfect copy is or is not like you is quixotic until, at a bare minimum, we understand in precise detail some of the following:
1) how the brain’s neural network is topologically laid out,
2) how all the different kinds of synaptic weightings behave given different inputs, and how they vary over time to accommodate new information,
3) how each neural network subsystem controls and is controlled by the many other neural network subsystems, and
4) how all of these vary from individual to individual.
The last point, 4), is important. Every human brain is in a fundamental sense a custom neural network. The brains of the two most closely related individuals differ far, far more than the DNA of the two most distantly related human beings does.
Currently, we don’t even know how many synaptic connections there are in a typical adult human brain – the estimates vary by a factor of 5, from 100 trillion to 500 trillion.
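That factor-of-5 uncertainty matters practically, too. Here is a rough storage estimate for a synapse-level map using the quoted range; the bytes-per-synapse figure is an assumption for illustration only:

```python
# Rough storage estimate for a synapse-level brain map, using the 100-500
# trillion synapse range quoted above. Bytes-per-synapse is an assumption:
# roughly enough for source/target IDs plus a weight and a type tag.
BYTES_PER_SYNAPSE = 16

for synapses in (100e12, 500e12):
    petabytes = synapses * BYTES_PER_SYNAPSE / 1e15
    print(f"{synapses:.0e} synapses -> ~{petabytes:.1f} PB")
# 1e+14 synapses -> ~1.6 PB
# 5e+14 synapses -> ~8.0 PB
```

Even before any question of simulation speed, the raw connectome alone spans petabytes, and the uncertainty in the synapse count translates directly into a five-fold uncertainty in the size of the problem.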
Long way to go – and the “is the copy me or not me” debate will not be resolved definitively by philosophical methods. It will require evidence, and you must have at least one actual copy in hand to have meaningful evidence toward this question.
I am fine with your definition of a digital person and in fact I share your view, as long as this view is understood in the proper context: that of extending my persona and will through a process of creating digital search routines, bots, androids and even full copies. As long as, and insofar as, these extensions support my interests they can truthfully be called an extension of my body and will. I even agree with the idea that these extensions could fully embody my identity.
However, the above should not be confused with providing a replacement awareness or a lifeboat in the event of my prime’s death. For these extensions of my will have their own awareness, as do all objects to some degree. Make no mistake, I fully admit that one can successfully argue that no information is lost, at least as we know it, and that all seemingly useful/functional qualities of who I am could be preserved. I even admit that I am programmed to want this seemingly redundant existence, and that there is no obvious justification for the value of what I want. However, I am in awe of a universe such as this, where it is clear that two seemingly identical minds still possess something different, something indefinite, and yet obviously true.
In summary, to you and Exapted I say this: it may be selfish and seemingly pointless, but in this particular case the selfishness is warranted, for it is fundamental to what it means to exist vs not. P.S. I do, however, believe a lifeboat is possible, and given the opportunity it is the first task I will assign my digital personas =).
For the benefit of those who may not have heard my thought experiment, which illustrates why the above is true, here is a brief summary:
Imagine two atomically identical, simple and nondescript rooms, each containing one of a pair of atomically identical human clones in suspended animation. Now imagine that these rooms are shielded in such a way that no influences external to the room can change anything within. For all intents and purposes the two rooms differ only in their spatial locations. Now imagine that the two clones wake up at the exact same time and begin to explore their environment. Common sense would suggest two things:
1) That the two rooms and clones will evolve in exactly the same way over any length of time.
2) That the two clones will share the same self and awareness.
However, physics tells us that common sense about 1) is clearly wrong… the two rooms will instantly begin to diverge, first in minuscule ways, yet these small changes compound on themselves, so that given sufficient time the divergence becomes ever more significant.
This is interesting because if the two clones shared the same self and awareness, then a change in one MUST have at least some influence on the other. If not, then clearly they cannot share anything of substance, since sharing by any definition involves one or more transactions/interactions. The problem is that no matter how we influence clone A, clone B will continue to evolve, both subjectively and objectively, independently of clone A.
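The divergence point is exactly what chaos theory predicts: in a sensitive system, a difference too small to measure compounds exponentially. A minimal sketch using the logistic map (an arbitrary stand-in for the physics of a room; the perturbation size is likewise arbitrary):

```python
# Two 'rooms' starting in almost (but not exactly) identical states.
# The logistic map at r=4 is chaotic: tiny differences compound rapidly.
def step(x: float) -> float:
    return 4.0 * x * (1.0 - x)

room_a, room_b = 0.4, 0.4 + 1e-12   # initial difference far below any measurement
for t in range(50):
    room_a, room_b = step(room_a), step(room_b)

# After 50 steps the separation has grown by roughly a factor of 2 per step
# and saturated: the two states are typically order-1 apart, i.e. unrelated.
print(abs(room_a - room_b))
```

None of this settles the question about shared awareness, of course; it only shows that the physical claim in the thought experiment (that identical rooms diverge) is standard sensitive dependence, not speculation.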
But, look, this is what EVERYONE believes. There is no question that, over time, unique experiences will mount up and make these selves different people, so that the self now looks like many separate lines rather than one continuous one.
But the challenge is to explain why this argument cannot be used to say the process of living is one in which I am replaced by somebody else. After all, presumably as I live my life, small changes compound on themselves, making my future self different from my current self, and increasingly so, until a point is reached where future me is somebody else entirely. Except that never happens. I am still me, or at least people treat me as ‘the same person’. And this holds even if the difference between ‘future Extie’ and ‘current Extie’ is substantially larger than the difference between Extie A and Extie B one hour or one day after uploading.
I get that the fact the two copies are in separate bodies might make it apparent they are not the same person. But what if uploading becomes feasible after we have perfected telepresence, so that people are routinely using many bodies in different places? Just as we are used to ‘my voice’ being simultaneously in my room, and the room of someone in America, and in many more rooms around the world besides, as someone voicechats on Xbox Live, we might get used to the idea of one self being in many bodies as one teleoperates many ‘bodies’, bodies that perhaps occasionally act autonomously as ‘robots’ rather than remote-controlled vehicles. If robot A and robot B turn up and both say ‘hello, Purpose here’, is one lying and the other telling the truth? What if both are controlled by the same brain? Or one controlled fully by the person’s brain, while the other is partially controlled but also using a basic model of the brain (perhaps copied from Purpose) to navigate the world safely while the person’s attention is divided between two remote bodies? What I am trying to convey here is that the common-sense notion of ‘one body, one brain, one self’; ‘two bodies, two brains, two selves’ may not be so obvious in a society that has uploading, given all the other changes the enabling technologies of uploading could have on work, play, sex, worship, and everything else people do.
There is no evidence for the above belief, nor is it logically falsifiable, and most damning of all, it makes no sense. In other words, an honest re-wording would be: the process of time is one in which an object is swapped with another object.
When in truth it is: The process of time is one in which objects change.
I propose that we can resolve the issue by defining the self as an intentional agent and thus a particular set of intentional states. If we are dealing with merely individual systems that do not behave intentionally (according to our definition of intentionality), we don’t need any concept of self. In my understanding, my iPhone is not a part of my self until it becomes a part of my intentional states. Precisely when that occurs depends on the intentional definition of self being used.
Are you infallible, Extropia? Is there even the slightest possibility that you’re completely wrong about the self?
Because if you are and uploading really does kill you and replace you, then that eliminates all impetus to go after it. Again, this is assuming that it perfectly copies the memories and mind of the original while destroying it, not creating a perfect copy while keeping the original intact, which has its own set of problems as it results in two obviously distinct individuals. Regardless, I see all proponents of uploading speaking as if they are infallible and clearly arguing for the same thing: obsolescence and destruction of a human being. I always hear talk of literally slicing the brain to pieces (very neatly!) and leaving the body for use as dog food, not really anything about extracting anything from the brain, but merely creating a plaster cast of it. No transference of any matter or energy whatsoever is involved in the hypothetical processes for uploading. Yet consciousness is quite clearly an emergent property of the flesh, for the simple fact that we are conscious and made from entirely natural, organic matter. Therefore it seems logical to conclude that this is the only vessel we will ever have, or at least until we have concrete, definitive knowledge of what consciousness is and how to go about TRULY transferring it. Uploading as we currently envision it is a lemming’s dash.
KNAAK SAYS: Exapted: The self is an image generated by the mind. It could be limited to, say, objects or entities of which the body/person has the ability to express or infer information. To say that a car is not a big part of many people’s self image is delusional. The brand and year of a vehicle affects the self model of a person greatly in many cases. Our self image is composed of any number of objects and ideas. That is all the self is.
PJT: You evoke apparitions in an attempt at a relevant response. The idea of uploading has nothing to do with the destruction of the body or brain. Uploading could happen in a number of ways, and your attempt to brand a peripheral issue as a main concern is pathetic. Truly, a mediocre line of thought you construe.
If the self is an image generated by the mind, then it is probably erroneous to claim that I am typing this message, because “I” is just a perception and perceptions need not form causal or even means-end relationships with their apparent antecedents, nor must they be veridical. I am attempting to identify a concept that would satisfy most or all of the implicit criteria set by the working/implicit definition(s) of self. I think that the concept of self is functional, and that to reject it (or relegate it to “mere” perception) would be to needlessly throw away one of the most useful (and flawed) concepts evolution has come up with.
“Connecting The Dots” about the self in a consistent and truthful way could potentially solve conflicts and allow us to align our own long term goals with our own short term goals, and connect individual and group goals with larger general goals in society. This may sound like the stuff of Hitler or Stalin, but I think we can engineer group dynamics. Also, I think we can realize our goals more effectively if we can come up with a consistent definition of self that isn’t based on unaccounted-for bias.
To borrow from the above arguments, we know that a car is the more powerful force of self identity. Take, for example, ‘your karma hit my dogma’. If a car hits a dog, the dog dies. The car, a mechanical representation of collective identity, therefore, is more powerful than simply “me”, so the car becomes the corporate image of “you hit me”, instead of “you hit my car”.
Of course I’m being a bit facetious, but a car is a corporate image of self, a technological extension, as uploading the self into a computer would be a technological extension. That is, I upload the discovered rules of what I define as self into the mechanical extension of rules of what I define as self. A kind of Godelian self referential principle. What is uploaded must of necessity be incomplete.
This blurring of such distinctions is no more than “estrangement from self” which Eric Hoffer described when an individual becomes part of a corporate belief system. In that sense, uploading a mind into a computer is basically a religion. The only difference being the perceived source of the rules. If one believes that a higher order “God” directs one’s individual actions, and transmutes them into a collective force for good, the individual has merely become an agent for a “higher” set of abstractions we call “God”.
In the case of mind uploading, the “higher’ set of abstractions are merely called physical laws, or laws of physics. In either case, the individual becomes subject to a set of rules which are fully defined by the human mind. There really is no true difference between religion, government, and mind uploading. We merely eliminate the “meat’ in the last example.
It is the non-defineable processes of “meat’ that make up the self.
There is a slight possibility, I suppose. But my ideas about the self are at least consistent with the latest psychological and neurological investigations into personal identity, as well as agreeing with what post-20th-century philosophy says about this subject. But some people here stick to a common-sense view of the self, which is more of a useful fiction constructed in order to deal with a societal framework which is under onslaught by technological change, rather than something which is ‘true’.
>this is assuming that it perfectly copies the memories and mind of the original while destroying it, not creating a perfect copy while keeping the original intact<
Perfect copying is unnecessary. If 100% preservation of your brain-states were required for continuity, you would now believe yourself to be someone other than the person who posted the reply I am responding to, because by now a lot of molecular turnover has occurred and you have made and lost memories. So your brain is not quite the same, and yet you still believe you are the same person.
Bruce Katz spoke of ‘genuine illusion theory’ (the illusion being a perception of a core self that remains constant through time), saying “the transfer will be a success to the extent that an illusion of continuity of self is maintained despite loss of information. This illusion need not depend on completely smooth perception of continuation of the past into the present, but may break down in a nonlinear fashion if this continuation is sufficiently degraded”.
It would no doubt be arbitrary to stipulate a fixed threshold at which an upload ceases to be ‘you’ and becomes someone else. Instead, as MEMself degrades due to increasingly imperfect copying procedures, a new self that smoothly connected with the old self would gradually feel more different and less connected to the old self, effectively transforming into a new person who shares some memories with the old self but (in Katz’s words) “no more feels like the old person than you would think you are your roommate, simply because you share a number of memories and experiences”.
> two obviously distinct individuals<
They are two distinct bodies, but are they two different selves? It might seem obvious they are, but think about it: Suppose you lost consciousness, only to regain it and find you were now in a different location. It would seem as if you had instantaneously jumped from one place to another. This would no doubt leave you disorientated, but it would not make you feel as though you had suddenly become a different person. Of course it would not, because preservation of self-identity is not tied to location. So if a copy of you pops up at a different point in space, we cannot use Cartesian coordinates to determine which one is you and which one is just the copy. You might feel strongly that you are the ‘real’ PJT and the guy over there is the copy, but he has as much reason to believe the converse is true.
> I always hear talk of literally slicing the brain to pieces (very neatly!) and leaving the body for use as dog food, not really anything about extracting anything from the brain, but merely creating a plaster of it. No transferrence of any matter or energy whatsoever is involved in the hypothetical processes for uploading.<
What!? Something IS extracted! Information about the state of the brain down to the micro or nanometer level, which is then used to build a functionally-equivalent dynamic model whose next steps would be what the next states of the original brain would have been. This is the mistake people make when they say ‘a photograph of you is not you’. A photograph is static, equivalent to a highly-detailed scan like an fMRI image. The upload constitutes both the data capture AND reinstantiation of a dynamic entity.
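The photograph-versus-upload distinction can be made concrete with a toy model. Nothing below is a real brain model; the update rule, names, and numbers are all invented for illustration:

```python
# Sketch of the photograph-vs-upload distinction (toy model, not a brain).
# A 'photograph' is the captured state alone; an 'upload' is that same state
# plus dynamics that compute the next states the original would have had.
from dataclasses import dataclass

@dataclass
class ToyNeuron:
    potential: float

def scan(neurons):
    """Static capture: just the numbers, like a photograph or fMRI image."""
    return [n.potential for n in neurons]

def reinstantiate(snapshot, steps=3, decay=0.9, drive=0.5):
    """Dynamic model: replays an (assumed) update rule forward from the snapshot."""
    state = list(snapshot)
    for _ in range(steps):
        state = [decay * v + drive for v in state]
    return state

brain = [ToyNeuron(0.1), ToyNeuron(0.8)]
snapshot = scan(brain)            # a 'photograph': frozen numbers
future = reinstantiate(snapshot)  # an 'upload': the numbers set back in motion
```

The design point: `scan` alone is the photograph objection, while `scan` plus `reinstantiate` is what uploading proposals actually claim, and the two differ precisely in whether the captured state keeps evolving.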
Look, I do not know if uploading is possible. That is why I recommended keeping tabs on projects like Blue Brain to see if they are making progress or starting to find their models are not behaving as they should. Of course, we should not expect any model to be conscious and all that while it is clearly lacking most of the complexity of actual brains. But if and when our models match what we think is the complexity of real brains, and yet still are clearly lacking something, I think that would raise serious doubts over uploading feasibility.
Also, perhaps we might one day train a mouse to perform some kind of task (negotiate a maze quickly, perhaps) and then run a model of that mouse and see if it, too, is adept at performing the task. If we find the model mouse is no better than control mice (not trained to navigate mazes), that could be seen as evidence against individuality and identity being preserved during uploading.
A few years back, scientists led by Hans-Hermann Gerdes at the University of Bergen noticed that there were nanoscale tubes connecting cells, sometimes over significant distances. This discovery launched what has become known in the biological community as the “nanotube field.”
Microbiologists remained somewhat skeptical about what this phenomenon was, and weren’t entirely pleased with some of the explanations offered, because they seemed to fall outside “existing biological concepts.”
However, now Gerdes and his team have offered up an explanation that seems to be pleasing the skeptics.
In a recent paper published in Proceedings of the National Academy of Sciences the Norway-based researchers have shown that electrical signals can be passed through the nanotubes and that “gap junctions” are involved in the transmission process.
It is this gap junction bit that seems to be satisfying the skeptics. Gap junctions are proteins that create pores between two adjacent cells and create a direct link between the cells. But the key word in that definition is “adjacent”: with these “tunneling nanotubes”, or “membrane nanotubes” as they are alternately called, cells can communicate without being adjacent.
“The authors of this paper have identified an exciting way that cells can communicate at a distance. That means you can no longer just think of cells touching each other to coordinate movement,” says Michael Levin of Tufts University in Medford, Massachusetts in the Nature article cited above. “Understanding what physiological information these nanotubes pass on will now be a key question for the future.”
Among other biological systems that this may help us to better understand is the development of an embryo in which there is massive coordinated cell migration to form the various organs of the body.
Another key biological question it helps address–or complicate, as the case may be–is the complexity of the human brain. This research makes the brain drastically more complex than originally thought, according to Gerdes.
This could tangentially complicate Ray Kurzweil’s Singularity concepts, at least insofar as duplicating the human brain is concerned, if current skepticism wasn’t damaging enough.
Stuart Hameroff gave a lecture at the 2009 Singularity Summit titled ‘Consciousness and the Singularity: How To Get There’. Hameroff says the computational capacity of the brain is around 10^25 OPS, rather than the 10^16 operations per second that is normally assumed. However, Hameroff does not say this makes the idea of a singularity nonsense:
“There’s good news, and a shortcut to the Singularity. Microtubules self-assemble. We can have networks like this very easily, actually. The question is, how to interface with them…Let me just say that in 1987 I wrote a book and I concluded in that book based on this, that nanotechnology may allow a mind/tech merger, such as described by Moravec and Sagan, information encoded in microtubule subunit states, and people could choose to deposit their mind in such a place”.
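The gap between Hameroff’s figure and the usual estimate is worth making concrete. A back-of-envelope sketch, assuming hardware capability doubles roughly every 18 months (a common assumption, not a law):

```python
import math

LOW, HIGH = 1e16, 1e25          # the two brain-capacity estimates quoted above (OPS)
DOUBLING_YEARS = 1.5            # assumed hardware doubling period

extra_factor = HIGH / LOW                    # how much more capacity Hameroff implies
extra_doublings = math.log2(extra_factor)    # doublings needed to close the gap
print(f"extra factor: {extra_factor:.0e}")                                   # 1e+09
print(f"extra doublings: {extra_doublings:.1f}")                             # 29.9
print(f"extra years at 18-month doubling: {extra_doublings * DOUBLING_YEARS:.0f}")  # 45
```

So if Hameroff is right about 10^25, and if the doubling assumption holds, brain-scale hardware arrives several decades later than the 10^16 estimate would suggest: a nine-orders-of-magnitude disagreement is not a rounding error.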
I have come across people who cite ‘quantum consciousness’ as some kind of fundamental barrier to uploading or consciousness in computers. They obviously do not realise that one of the top proponents of the ‘quantum correlate of consciousness’ hypothesis wrote a book whose final chapter supports mind uploading and brain-machine interfaces.
we simply cannot go on with this wrong cockroach meme that incorrectly associates self with physical substance- and assumes that being must be singular- it must be exterminated or the greatest horrors in history will soon plague us all- if the idea that a copy is not the original person continues among any population then EACH OF US may have to fight for our very right to exist- for we are all merely seed code for future copies of ourselves that will be more us than we are now
matter is an illusion- it is qualia for the apprehension of relationships of information when the quantum vacuum is shaped such that it observes itself- the quantum vacuum is the foundational expression of Physis in the qualia of conscious observers- but it is merely a mask of the existential relationships that form the matrix of the observer- the observer sees a physical reality because that is the mask we have of the truly informational realm that is the true form of the cosmos
there is no Ship of Theseus paradox- the answer is yes- all versions of the rebuilt and original ship whether replaced a part at a time- all at once- or a second built so that there are two- or more-
a thing- a person- your feeling and memory of yourself and your consciousness- are all finite and discrete information- as with all information they can be instantiated in one physical form- or many at the same time- or none at all- yet retain their identity
none of the worries that people have about immortality and copies have any relevance if you can scan the brain’s molecular map-
once you have a scan you can take as many decades/centuries/millennia as you like in figuring out how to run it again and what kind of substrate you will run it on- and for all those who have philosophical problems with being a ‘copy’ they can wait however long it takes to get the technology for their new copy version to find their old decaying corpse- or ashes- and simply use the far future technologies to rebuild the ORIGINAL body [or brain] atom-for-atom based on the last configuration that was recorded- that would be the original according to you- and then you could simply MERGE with it and become one again- so you could say that you died and were resurrected later by your own copy and then merged with it- or you could say that you migrated into another substrate and eventually rebuilt your old body/mind and moved back into it- either way the result is the same: the ORIGINAL you- according to the most conservative philosophical definition of you- has been resurrected in the far future-
there are just so many ways the “copy-is-not-you” idea is moot anyway- a recent one I realized was simply the implications of the Simulation Hypothesis- even IF a copy is not you it doesn’t matter to you or anyone you know because you are the copy! you are the software reconstruction in an Ancestor Simulation- so you are already immortal software and the ontological status of the smart ape in some hypothetical ‘physical space’ that probably only existed as an unlikely possibility in Cosmic Configuration Space anyway is neither here nor there- death for that ape might have been oblivion- but for you death will just be the process of extraction/augmentation into the wider cosmic hypercomputation network
however there is this inescapable truth about copies that if a copy is just a copy then your identity in time as a natural organism is also just a string of copies: http://www.dichotomistic.com/mind_readings_molecular_turnover.html and “you” only exist in the Now Moment shaped by memory- and each moment is a new being with memories of the dead you-moments from the past- since we have no problems with this- and this reality IS what we call our selves- then any uploading or reconstruction simulation will continue our first-person perspective experience- not some other that shares our memory- we ARE the memory- consider a conscious moment without memory- it is simply general awareness- not intelligent self-awareness- and not yours or mine or anyone else’s- it is the MEMORY that relates now moments and cultural information together that defines your unique first-person self perspective.
even a divergent copy really is still you- there is no difference between making a backup copy and then living a week and getting killed and restored from the backup- than getting into an accident that causes total amnesia of the last week- no one would even think that an amnesiac who lost the last week is a kind of copy or zombie- family does not grieve and get freaked out that an old version of their loved one is all they have- they do not worry about retrieving the lost memories of that week in order to restore the soul- you just lost a week- you count yourself lucky and move on confident that you are still you- just like an out-of-date back-up copy would
Consider the statement ‘we will not reverse-engineer the brain by 2030’.
One reason why this statement could be true is because we underestimate how complex the brain is, and are therefore overoptimistic in our forecasts of when whole-brain emulation will be achieved.
One reason why this statement is false is because critics are underestimating the sophistication of all the technologies we will develop, and all the knowledge we will amass about the structure and functions of the brain, by the year 2030.
Both sides can amass evidence in their favour. Any biologist or neurologist can bamboozle the layperson with details of how amazingly complex the brain is. And there is clearly a lot that we do not know about how the mind works.
The tech-side can point to advances like Blue Brain’s ability to automatically generate brain circuits in days, whereas before it would have taken many years.
So I guess it is a question of waiting to see which assumption is the false one: That the complexity of the brain is high enough to prevent us from reverse-engineering it in the time Kurzweil predicts, or that the data we obtain about the brain and the brain-reverse-engineering technologies we develop is going to be able to handle whatever level of complexity a functioning brain represents.
the whole premise of reverse engineering the brain has been made obsolete by two newer paradigms that require NO knowledge of the brain to capture its secrets and augment it-
the path to AI will not be high-level software engineering or robotics- not because these aren’t feasible but because they will take much longer than other routes that will make this approach obsolete-
the most likely and promising approach is IA- Intelligence Augmentation through BCI- BCI gives you understanding of the brain- not the other way around- BCI only needs the resolution to see and stimulate neurons- with BCI you can copy the brain without understanding it by simply recording the network connections and function over time – you interact with each person’s unique sensory cortices by active trace probing
and the next most likely is Wolfram’s NKS searches for intelligent code in the Computational Universe- this approach would yield incomprehensible black-box like kernels of code that unpack into even more incomprehensible and more complex programs that behave intelligently and with true consciousness yet will be harder to figure out than the brain because they will be based on their own unique evolutionary history that will have nothing to do with ours- even the ideas of embodiment and senses may be irrelevant because ‘found’ AI will be so alien that these ideas of moving through and being aware of the environment may not even apply- they could come from regions of the computational universe that have no analogues of space or matter
I refer back to Terence McKenna- as long as materialists keep getting hung up on the illusion that Man and his technology is just developing on its own and is not being driven by the unfolding evolution of the Cosmos itself they will always be blind to the nature of nature and continue to doubt acceleration of novelty wherever it is seen-
they have forgotten and abandoned Teleology and so they cannot see anything- and so they diminish themselves and become merely pawns
I am thinking about some paradigmatic difference like between serial and parallel processing. I can’t imagine what that’d be.
the only means to upgrade the brain that I can think of are:
1 – increase in communication speed between neurons and between sensors and neurons
2 – more brain mass (higher number of neurons and neuronal connections) to increase the parallel processing volume
3 – more reliable memory
4 – greatly expanding sensory capacity, not only in resolution of reality but also in the variety of types of sensors, enabling the brain to capture a more complex and informationally dense picture of reality
5 – serial processing capability for faster calculations, like today’s computers and calculators
6 – what else?
Understanding the human brain is key to achieving all of the above.
An intermediate step could be copying the brain without understanding it and running it on a faster substrate. I am not sure how efficient this could prove for achieving the rest, but it must be verified anyway, and it is very probable this will actually be done first.
There is a real difference between what is possible and what people will actually do. To me, just the unanswered question about what would happen in some of the scenarios listed (bugs, hardware failure, etc.) and the metaphysical concepts related to “is the copy actually me?” will be very daunting to overcome.
The power consumption issue is interesting as well. I’m always amazed to look at a small insect like an ant and realize what it can do with the energy of a crumb of food — it can navigate complex terrain, communicate, propagate itself, etc. Right now our semiconductor technology is getting embarrassing in terms of energy consumption — some high end graphics cards are consuming 300W! It is true there are alternatives (like optical) down the road, but it is going to take a while to really jump over to alternatives like that. Right now the absolute power consumption of computers is generally getting worse, even though the power efficiency is improving, because we’re packing so much more processing power in. But it is easy to expect that human-level processing power would take tens of kilowatts at current efficiency levels and so have many iterations to go before they are reasonable.
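A rough way to put numbers on that last claim. All three constants below are assumptions for illustration: the common ~10^16 ops/s brain estimate, an optimistic ~10^12 ops per watt for current hardware, and the brain’s ~20 W draw:

```python
BRAIN_OPS = 1e16          # common (disputed) estimate of brain operations per second
OPS_PER_WATT = 1e12       # assumed, optimistic efficiency for current hardware
BRAIN_WATTS = 20          # the biological brain's actual power draw

watts_needed = BRAIN_OPS / OPS_PER_WATT
print(f"human-level compute at current efficiency: ~{watts_needed / 1e3:.0f} kW")  # ~10 kW
print(f"efficiency gap vs biology: ~{watts_needed / BRAIN_WATTS:.0f}x")            # ~500x
```

The exact figures are debatable, but the shape of the conclusion is robust: under these assumptions, silicon needs on the order of a thousand times the brain’s power budget, which is the ant-versus-graphics-card gap in one line of arithmetic.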
I’m sure uploading will eventually happen. There will be a lot of people who once they become terminally ill would risk the issues listed. And those people will undoubtedly encounter some horrendous fates but will pave the way for a safer, more trustworthy system that will eventually prevail.
But there are more barriers to uploading than to living on Mars, since for the latter we generally understand the implications and have the technology mostly in hand. So uploading will start tenuously and take a long time to go mainstream.
Gradual replacement is a superior alternative to uploading. The former allows at least a chance to genuinely transfer one’s consciousness into a computer, whereas the latter just involves the creation of a copy. We’ve had this discussion on this forum before.
I also agree that it will be logical for AI’s and “computerized humans” to keep their minds secured in protected servers while using remotely controlled avatars and surrogates to interact with each other in the virtual and real worlds. When the prospect of immortality hangs before you, it makes sense to take a lot of precautions.
But all the gradual replacement scenario does is show us that our selves cannot be tied to the physical stuff our bodies are made of at this moment. Once we understand that it does not matter if we destroy our current substrate and build a suitable replacement for our self-pattern quickly or if we do it over an extended period of time. Either way, the result is the same.
It makes zero sense to say a copy of me created quickly is NOT me but one created gradually IS. Where is the line drawn? If I replace one neuron at a time each day, do I remain ‘me’ throughout the process? OK, what about ten neurons at a time, or 100,000? Millions? The whole lot in one go? Either the gradual AND the sudden upload scenarios both result in somebody who IS me, OR they BOTH result in somebody who is NOT me.
Having said all that, I can appreciate that a gradual enough uploading procedure could maintain the illusion that we have a core self that remains unchanged while our brains/bodies change.
JOHN PAVIUS SAYS:
It sounds amazing, but there are plenty of issues that will stand in the way of this theory ever becoming reality. Here are the top six:
1. Two Words: Fail Whale
This may seem like a cheap shot, but “uptime” is something our most advanced computer scientists still struggle with. Hell, our super-sophisticated algorithms can’t even keep a text-based microblogging service from crashing during the World Cup — what happens when there’s a Fail Whale for your mind? Will it be like getting a hangover, having a stroke, or dying? You’d have to assume we’ll all be “backed up,” but that raises troubling questions too: when the server running You goes kaplooie, is your “backup” really you, or just a clone of you that takes your place now that the “real” you is lost? The Singularitarians don’t have reassuring answers, and I don’t want to find out the hard way.
2. The Storage Media Won’t Last Five Years, Much Less Forever
Stone tablets written in Sanskrit may last millennia, but digital storage media go to shit alarmingly fast when used continuously (and you’d have to assume there’d be constant disk activity if millions of people were “living” on them!). Without frequent physical backups, refreshes, and format updates, precious data will quickly be rendered unreadable or inaccessible. So when we’re all “in the cloud,” who’s gonna be down on the ground doing all that real-world maintenance — robots? Morlocks? Even if that works, it just seems evolutionarily unwise to swap one faulty physical substrate (albeit one that has been honed for millions of years, runs on sugar and water, and lasts nearly a century) for another one that can barely make it from one Olympic season to the next, even with permanent air-conditioning.
3. Insane Energy Demand
The human brain only needs 20 watts to run the app called You, but with almost 7 billion of us and counting, we’re already straining the earth’s ability to host us all. Meanwhile, you know how much juice one Google data center consumes just to index the latest LOLcats (a task much, much simpler than hosting your digital consciousness)? 100 million watts. Do the math: We’d have to invent fusion reactors or build a Dyson sphere just to keep the lights on. Neither of those technologies is theoretically impossible — in fact, they fit right into the Singularitarians’ techno-magical worldview. But they’re definitely not gonna happen within the next few decades, and probably not even within the next century or two.
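A rough version of that “do the math” in code. The 20 W and 100 MW figures come from the text above; the 1 kW per hosted mind is my own optimistic assumption (far better than today’s hardware would manage):

```python
# Rough arithmetic behind the "insane energy demand" point.
# PER_MIND_WATTS is an assumed, optimistic per-upload power budget.

POPULATION = 7_000_000_000
BRAIN_WATTS = 20
DATACENTER_WATTS = 100_000_000  # one large data center, per the text

# Total biological "compute" power draw today:
biological_total = POPULATION * BRAIN_WATTS  # 140 GW

# If hosting one mind cost even 1 kW (assumed, optimistic):
PER_MIND_WATTS = 1_000
digital_total = POPULATION * PER_MIND_WATTS  # 7 TW

datacenters_needed = digital_total / DATACENTER_WATTS
print(f"Biological total: {biological_total / 1e9:.0f} GW; "
      f"digital: {digital_total / 1e12:.0f} TW "
      f"(~{datacenters_needed:,.0f} data centers)")
```

Seven terawatts is a few times the entire world’s current average electricity generation, which is the point being made.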
4. Lack of Processing Power
Singularitarians love to trot out simple arithmetic: add up all the brain’s billions of neurons and trillions of synapses, and you get a “total processing power” of about 10 quadrillion calculations per second, or 10 petaflops. Meanwhile, IBM’s Blue Gene/P supercomputer has a maximum theoretical limit of around 3 petaflops. So just give it a decade or two, and it’ll lap us easy, right? It’s Moore’s Law, bitchez!
That might be true if neurons only acted like digital transistors. But they don’t. Neuroscientists are still uncovering all the ways that the little wires in our heads encode information besides flipping bits: chemically (via hormones and neurotransmitters), temporally (by changing the rate at which they fire, alone or in coordinated waves), even structurally (literally rewiring, strengthening, or pruning connections in response to new input). Adding up all that extra computational oomph is something scientists are still struggling to do, but even a conservative estimate would bump up that 10-quadrillion figure by several orders of magnitude. A million Blue Genes wouldn’t be enough to match it.
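The naive estimate and the correction described above can be written out explicitly. The neuron and synapse counts are the standard rough figures; the 10^6 correction factor is an illustrative assumption, chosen to match the “million Blue Genes” claim rather than any measured number:

```python
# The "simple arithmetic" estimate the paragraph criticizes, plus an
# assumed several-orders-of-magnitude correction for chemical, temporal,
# and structural coding. CORRECTION is illustrative, not measured.

NEURONS = 1e11        # ~100 billion neurons
SYNAPSES_PER = 1e3    # ~1,000-10,000 synapses each (low end here)
FIRE_RATE_HZ = 100    # rough upper firing rate

naive_flops = NEURONS * SYNAPSES_PER * FIRE_RATE_HZ  # 1e16 = 10 petaflops

CORRECTION = 1e6  # assumed: six orders of magnitude of extra "oomph"
corrected_flops = naive_flops * CORRECTION           # 1e22

BLUE_GENE_FLOPS = 3e15  # Blue Gene/P theoretical peak, per the text
machines_needed = corrected_flops / BLUE_GENE_FLOPS
print(f"Blue Genes needed: {machines_needed:,.0f}")
```

Under these assumptions the requirement comes out above three million Blue Genes, consistent with the claim that a million would not be enough.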
5. Minds Don’t Work Without Bodies
OK, fine, so we’d need 4.7 squillion digital transistors to match the brain — but even that vast figure is still finite. Maybe it’ll take us 100 or 500 years to get there instead of 20, but won’t it happen eventually?
Probably, but it still won’t do the trick. In the same way that a city is not equivalent to its map, what makes you You isn’t just the information content of your memories and conscious mind — it’s the whole dynamic physical pattern and history of your body, down to the level of squirting chemicals and ion channels. You aren’t “in” your body, like a little homunculus looking out; but nor are you “on” your body, like an OS running on interchangeable server hardware. Quite simply, you are your body.
Even your brain is just another organ doing its evolved-for job as a part of the integrated whole, like your heart or spleen. And that job, according to an ever-increasing heap of scientific evidence, is not to generate deep thoughts about how iPhone is better than Android.
In fact, we barely know anything about what specific jobs all of the brain’s structures actually evolved for, but an emerging consensus is that the brain’s main job is simply to keep track of what various parts of your body are doing (or should be doing). Meanwhile, your oh-so-special conscious self is an evolutionary Johnny-come-lately, a bug that became a feature, always under the delusion that it’s in charge but really just along for the ride.
So isn’t that all the more reason to upload it outta there ASAP? Unfortunately it doesn’t work like that. Because your mind is a side effect of the particular body you happen to have, you can’t fully separate the two — at least, not without losing everything you know and experience as “yourself”. Experiments have shown that people’s personalities can change if they are “embodied” just slightly differently via virtual reality; studies on amputees have shown that removing body parts affects visual perception; and even simple abstract notions we take for granted (like “past” and “future,” “like” and “dislike,” even “you” and “I”) boil down to physical sensations of the body in space. Prominent neuroscientist Antonio Damasio has even said that if your mind were to be somehow extracted from the body you grew up with, you’d go insane. Kinda puts that word “post-human” in a whole different light, doesn’t it?
6. Who Gets Uploaded?
And you thought the lines for iPhone 4 were bad… even if all the above problems were magically solved, there’s still human nature to contend with. War and conflict may not technically be hardwired into our species, but the past 10,000 years of human history are hard to argue with. Unless there’s a way to instantly “teleport” the entirety of humanity into the cloud simultaneously, you can bet your digitized ass that there’ll be fighting over who goes first (or doesn’t, or shouldn’t), how long it takes, what it costs, who pays, how long they get to stay there… you know, all the standard crap that humans have been busting each other’s chops about ever since we could stand upright. I’ll opt out, thanks.
What About the Far Future?
Now you may be thinking, “Listen, jerk, you’re just talking in terms of today’s technology — not the untold wonders that will be invented before any of this stuff happens.” And you’d be right. But that’s what’s so ridiculous about the Singularity: its partisans often claim that these things are going to happen within our lifetimes, not in some distant future whose technology is indistinguishable from magic. Even if Moore’s Law continues to hold, and even if you factor in the growth of infant technologies like quantum computers, the basic way that practical digital computing gets done is not going to change much in the next few decades. Which means that no one is going to upload his mind anytime soon.
Of course, mind uploading is only one aspect of The Singularity. If you’d like to read thorough scientific debunkings of the other cockamamie ideas associated with it, IEEE Spectrum devoted a whole special issue to just that. It’s a hoot.
REMI SUSSAN: I may play the devil’s advocate here and give some arguments, not against uploading in itself, but against uploading as a necessary attractor for any kind of “radical futurism”. My main problem with uploading is not its philosophical premises, nor even its feasibility (although I think it’s not for the foreseeable future), but the tendency I have noticed to concentrate only on it, so that other technologies, certainly as “radical” as uploading, are simply forgotten; and I’m convinced that we’ll need those overlooked technologies very soon.
I mean of course all space technologies and related ones, such as terraforming and undersea steading, and also synthetic biology, which is currently undergoing a fascinating revolution (and thanks to DIY bio and Carlson’s curve, may soon be accessible to everybody).
An example of the way uploading may play a negative role in interest in these technologies is the idea, which I have seen expressed recently, that “space colonization is not for biological bodies, it’s something for uploads”. Yeah right, in a purely abstract way, this is probably true. But if De Grey is right against Moravec (and it seems obvious to me that in the short term, he is), we’ll need some new resources, and some space for avoiding overpopulation, very soon, as well as a more intelligent control of ecosystems. This is urgent.
Also, it seems to me that synthetic biology, and with it nanobiotechnology perspectives and biological computation, may prove that DNA, biological life, has not said its last word. We may envision radical transformations based on tremendous progress in biology, something we are perhaps not even
DOOJIE: Mechanical technology has always served not only as an extension of our body parts, but as a replacement for them. Wheels were much more mobile and useful in so many ways than legs (or than women, for bearing and carrying weight).
Until cybernetics and information theory, mechanical processes were merely repetition. Create something that performs the same function over and over, with less cost and more efficiency, and you got lots of money and a new reason to form civilizations.
Toynbee called it etherialization, and Fuller called it ephemeralization. Simply doing more and more with less and less, or as SET mentions above, compression.
The change now is discovering processes of feedback by which systems are formed and respond to their own environment. If you do it over and over without regard to environment, you have simply a machine. If it adapts and adjusts, it is a computer. If it adapts and adjusts and alters its composition through generations by physical processes of feedback, it’s evolution.
It would seem that the evolution in which feedback is altered in any understandable process must embed the computer/mechanical/evolutionary process fully into the complete environment.
You can upload a mind IF that mind is capable of operating by the principles above, but it must be capable of doing so in ways that are beyond conscious control or manipulation, since all knowledge, even human knowledge operates from comprehension that is both incomplete and inconsistent. IOW, it must not only make mistakes, but be able to become aware that it is in the process of making mistakes.
Mechanical processes, by extending themselves repetitively into the environment, have a narcissus-like quality to them, and such processes have become the “warp and woof” of Western society.
Basically, you’re talking about turning an ancient linear process into a non-linear function that adapts to chaos.
Humans haven’t even figured that out with the computers in their heads.
PREDICTIONBOY: However, my point is that these two different classes of computers are different for a reason. They solve certain classes of problems extremely well. They also each have their shortcomings, which also defines and bounds to some degree their optimal problem spaces.
Their differences in physical architecture, although profound, are minor compared to the differences in how they are programmed.
Although often overlooked, one of the main reasons that digital computers are so popular is that we have a comprehensive methodology, Boolean systems logic, that makes programming them comprehensible, we can get our arms around that, although good software developers are still few and far between, and have many years of training in most cases.
Hard, but still comprehensible. Although there is some parallelism, in most cases we can break the problem down into sequential steps and figure things out. As our hardware architectures move to multi-processing cores (early, still relatively primitive parallelism), the challenges of developing software that maximally leverages the theoretical power of those architectures are multiplying. Some problems are naturally parallel. Many problems are naturally sequential. A lot of problems that digital computers handle are sequential in nature, and finding parallelism in them to take advantage of these multi-core architectures is no walk in the park.
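Amdahl’s law makes the point about sequential bottlenecks concrete: once any fraction of a program is inherently sequential, adding cores stops helping surprisingly fast. A minimal sketch:

```python
# Amdahl's law: maximum speedup when only part of the work parallelizes.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup on `cores` cores when `parallel_fraction` of the
    runtime can be parallelized and the rest is strictly sequential."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even with 95% of the work parallelizable, 1,000 cores give under 20x:
speedup = amdahl_speedup(0.95, 1000)
print(f"{speedup:.1f}x")
```

With a 5% sequential fraction, the speedup ceiling is 20x no matter how many cores you add, which is exactly why multi-core hardware alone can’t rescue Moore’s-law expectations for sequential workloads.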
That is probably a big problem for the exponential advance of Moore’s law, a discussion for another time.
Neural networks, different story. Massive parallelism at every level. Birds do it, bees do it. But we can’t figure out how to program them, at least not well, and certainly not fast. Neural networks have been around for 25 years, and they should have taken over the world by now, so many natural alignments with how our brains do business.
But there’s a problem. We still don’t know how our brains are programmed, at a level of detail necessary to reverse engineer them into a neural network. Hell, we’re still figuring out how the multitude of cranial sub-systems talk to each other, long way to go there yet.
But that’s not what you’re talking about, of course. Mirroring the brain into a computer architecture like a neural network that is at least in the same ballpark of the same type of computer. You’re talking about simulating them in a digital computer. We discussed this once here, I think we came up with something like 200,000 petaflops for a full-fidelity single human brain simulation.
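Taking that 200,000-petaflop figure at face value, a quick Moore’s-law projection (assuming the customary 18-month doubling period, itself an optimistic assumption) shows how far out even the raw hardware is from the ~3-petaflop starting point mentioned earlier in the thread:

```python
# How long Moore's law would take to close the gap from ~3 petaflops
# (Blue Gene/P class) to the thread's 200,000-petaflop brain-simulation
# figure. The 18-month doubling period is an assumption.
import math

START_PFLOPS = 3
TARGET_PFLOPS = 200_000
DOUBLING_YEARS = 1.5  # assumed

doublings = math.log2(TARGET_PFLOPS / START_PFLOPS)
years = doublings * DOUBLING_YEARS
print(f"~{doublings:.0f} doublings, ~{years:.0f} years")
```

Roughly sixteen doublings, or about a quarter century of uninterrupted exponential progress, before a single full-fidelity brain could even be hosted, let alone millions.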
I don’t care how far out you carry the magic wand of Moore’s Law, I can’t see simulating human brains on a large scale ever being an effective use of all that computer power. Why do that? No one can tell me – except “to preserve our knowledge”. Well, write a book then, it’ll probably last longer and be more interesting anyway.
But the more serious point is that all those 200,000 petaflops would be far more effectively used in solving problems that our brains can’t solve, that digital computers are good at, rather than simulating a type of computer that doesn’t need to be simulated, at least directly. What’s needed is probably hybrids, neural net sensory processing interfaces that integrate with a digital core. But that won’t be a human brain, of course, just something very useful and complementary to a human brain. Which is the point of technology, incidentally.
So my point is not that it can’t be done. In fact, it very well may be done in lab environments to some degree, to learn about the brain. But it will never be done on a large scale for lots and lots of people, because that would be a pointless waste of technology.
SET/AI: first you need to actually figure out what the actual concept says-
the brain is NOT a computer
the brain – and everything else- is a type of PROGRAM running on a computer-
the computer is the UNIVERSE-
at issue is if digital symbolic logic systems can successfully render and COMPRESS whatever the electrochemical program in matter is doing.