(Here is a paper I was asked to co-write with Paul Budding. Enjoy!)
Extropia DaSilva: I am a digital person, seeking
independence from my human. I write about how
technological development in areas like nanotech, biotech,
infotech, robotics and computing may lead to redefinitions
of what life is, what it means to be human.
Paul Budding: I am a former Depth Psychology writer… now more interested in the Technological Singularity due to technology’s greater potential to destroy suffering. I continue to write about psychological and identity issues within the context of the Singularity.
INTRODUCTION

As we head toward the future, how will we evolve? In asking such a question, we are inquiring into the nature of identity and the self. How will such things change over time? What kind of people will inhabit the future? Perhaps tomorrow’s people will be little different from ourselves.
There are reasons to suppose this will not be the case. One such reason is the growing proliferation of networked embedded computers, sensors, and telecommunication devices. The digital age is changing the way we work, how we socialise, how we process information, the relationship of the past to the present, and the boundary between the body and the rest of the world. All of these play an important role in shaping our sense of identity and selfhood, so if they change (perhaps radically), then personal identity and selfhood are likely to change with them. Most importantly, we are beginning to acquire the technology to shed real light on that age-old question, ‘how does the mind work?’. According to James Albus of DARPA, science and technology are developing a range of sensor technologies that can observe ever-finer details of the brain’s structure, and also monitor the way living brains process information. This, according to Albus, “will extend the frontiers of human knowledge to include a scientific understanding of the processes in the human brain that give rise to the phenomenon of mind”.1
Similarly, the roboticist Rodney Brooks embraces a materialist worldview and the implications it holds for intelligent machines, saying: “Of all the hypotheses I’ve held during my 30-year career, this one in particular has been central to my research in robotics and artificial intelligence. I, you, our family, friends, and dogs - we all are machines. We are really sophisticated machines made up of billions and billions of biomolecules that interact according to well-defined, though not completely known, rules deriving from physics and chemistry. The biomolecular interactions taking place inside our heads give rise to our intellect, our feelings, our sense of self. Accepting this hypothesis opens up a remarkable possibility. If we really are machines and if - this is a big if - we learn the rules governing our brains, then in principle there’s no reason why we shouldn’t be able to replicate those rules in, say, silicon and steel. I believe our creation would exhibit genuine human-level intelligence, emotions, and even consciousness”.2
As we reverse-engineer the brain, we may use insights into how it works and apply them to new technologies. We could even develop ‘Neuroengineering’, in which regions of the brain or even the entire organ is ultimately replaced with technologies that outperform the organic mechanisms they model. It is the prospect of neuroengineering that most strongly suggests radical change is afoot. Along with genetic engineering, neuroengineering goes beyond changing what we do, and allows us to change what we are. But, it could allow far more dramatic changes than genetic engineering alone. That would only allow us to tinker with the body and brain we currently possess, but neuroengineering could conceivably allow us to go far beyond the limitations our biological bodies and brains impose on us. Perhaps it would be more accurate to say neuroengineering would allow more dramatic and direct alterations to our brains. This is because changing what we do can alter the environment. And if the environment changes, the brain adapts to better cope. In other words, changing what we do ends up changing what we are, albeit indirectly.
This brings us back to the first reason to expect change: the growing proliferation of computational devices that have become such an integral part of our lives. It takes around two generations to optimize a technology and assimilate it fully through all the structures underlying our societies. Computers followed this pattern, and because of this there is a ‘digital divide’, with a generation who remember a world without computers on one hand, and on the other a generation for whom the computer was always an integral part of daily life. The Internet has obviously made sweeping changes, but has there been enough time for this change to alter the brain? In order to find out, a team led by Gary Small (director of the UCLA Memory and Aging Research Center at the Semel Institute) conducted experiments in which an fMRI scanner monitored the brains of two groups of volunteers while they used search engines. One group was computer-savvy and had used something like Google extensively in their lives. The other group comprised people who had never used a computer. Imaging showed that the former group used a specific region known as the dorsolateral prefrontal cortex, whereas the other group showed minimal, if any, use of this region. What does the dorsolateral prefrontal cortex do? Well, it is involved in our ability to make decisions and integrate complex information. It is believed to control the process of integrating sensations and thoughts, as well as working memory. Since working memory refers to the ability to keep information in mind for a very short time, it makes sense that the dorsolateral prefrontal cortex would become more active and help us manage an Internet-searching task. As for the other group, who had not used a computer before, after just five days of practice the same neural circuitry became active.
Further studies have shown that regular web-surfing and online social networking enhance certain cognitive abilities. Immersed in a world that bombards the senses with information, our brains develop circuitry customised to enable us to rapidly decide what is important. Our reactions to visual stimuli are sharpened, and our ability to notice images in our peripheral vision is improved. Studies conducted by the cognitive psychologist Pam Briggs showed how web surfers typically spend two seconds or less on any one site before moving on to the next while looking for relevant facts. Gary Small commented, “this study indicates that our brains learn to swiftly focus attention, analyze information, and almost instantaneously decide on a go or no-go action”.3
However, everything comes at a price. Our brains can only adapt by a certain amount (at least until neuroengineering comes along) and the amount of information out there on the web is more than enough to stretch the mind’s adaptability to breaking point. An example would be what Linda Stone called ‘Continuous Partial Attention’4. For people in this state, everything everywhere is connected through peripheral attention. It means keeping tabs on everything while never truly focusing on anything. The brain is effectively placed in a state of stress, always on alert for the next bit of exciting news or new content, but no longer allowed to indulge in contemplation or reflection.
For decades, thinkers have noticed the acceleration of information throughout history. The Polish mathematician Alfred Korzybski noted that information was doubling faster and faster every generation, and advised abandoning dogmatism and training oneself to be more flexible, the better to cope with the accelerating rate of change. The American philosopher Terence McKenna extrapolated the rate of change out into the future, noting, “obviously if we’re experiencing more change in a year than we previously experienced in a thousand years, we can propagate that trend into the future and see that a day will come when we will experience more change in an hour than we have experienced in the past 20,000 years. A situation like that is unimaginable, so we call it a ‘Singularity’, a place where the normal rules of modelling break down”5.
Probably the best-known thinker dealing with the accelerating rate of change is Ray Kurzweil. Of all the things he has said, the one that deserves the most scrutiny is this: “The kinds of scenarios I am talking about 20 or 30 years from now are not being developed because there’s one lab that’s sitting there creating a human-level AI in a machine. They’re happening because it is the end result of thousands of tiny steps. Each step is conservative, not radical, and makes perfect sense. Each one is just the next generation in some company’s product”6.
Rodney Brooks concurred with this view, saying “I don’t think there is going to be one single sudden technological “Big Bang” that springs an artificial general intelligence (AGI) into “life”. Starting with the mildly intelligent systems we have today, machines will become gradually more intelligent, generation by generation. The singularity will be a period, not an event. This period will encompass a time when we will invent, perfect, and deploy, in fits and starts, ever more capable systems, driven not by the imperative of the singularity itself but by the usual economic and sociological forces”7.
There is a cumulative effect at work here. As Kurzweil said, “each stage of evolution builds on the fruits of the last stage, so the rate of progress increases at least exponentially over time”8. That being the case, if we look ahead over a great many cumulative steps we reach prodigious complexity and enormous changes. We might then imagine the population of decades hence walking around slack-jawed and bewildered in the face of all the uber-tech that surrounds them. But this is probably a mistake, because what Kurzweil said about each step being conservative, not radical, and perfectly sensible applies at all times. Any new technology can only be brought into existence using methods and components that already exist; invention results from people taking what is known at the time, adding a modicum of inspiration, and combining existing pieces to create that new technology (which then becomes a potential building block for newer inventions). Therefore, the people of 2045 will react to nanosystems or mind uploading or Artilects from the perspective of the enabling technologies and sciences of their day. To them, such things will likely be as ordinary as iPads and streaming gaming services are to us.
One of the authors of this paper, Paul Budding, says that ‘Technology Sceptics’ commit two errors when considering an innovation: believing that the innovation will never happen, and that they would be astonished if it did. ‘Technology Optimists’ fare slightly better, committing only the second fallacy (they believe they will find the innovation astonishing when it happens). However, the ‘Technology Realist’ commits neither mistake, since he or she expects both that the innovation will happen and that we will not be astonished when it does. Co-author Extropia DaSilva defines a Technology Realist as someone who believes anything theoretically possible will become a proof-of-principle experiment in an R+D lab, that this prototype might one day become a commercially viable technology, and that we will not be astonished by it if and when it arrives on the market.
Here and now, looking ahead like McKenna did, we may find it hard to believe the human race could adapt to such enormous and rapid change. We imagine vast computing resources such as a ‘machine’ with 1.6 trillion megabytes of working memory, 85 trillion megabytes of storage, and running at 1.4 trillion MHz (far, far outperforming the human brain’s paltry 70 MHz), and such numbers are hard to comprehend. This is of a magnitude beyond the mind’s ability to grasp, and yet most of us could integrate such incredible computing power into our lives to the extent that it becomes like any tech: just there at our convenience. This is not just hypothetical, by the way, because such a ‘machine’ exists already. We call it the ‘Internet’. If we can normalize technology as astonishing as the Internet (and if you think about all the services the Web brings you at the click of a hyperlink, it truly is astonishing) we may find that when we get to 2045 we live in fast times, but we can see on the horizon upcoming technologies that will make our current capabilities seem quite mundane. So we defer announcing ‘the Singularity is here’ until that REALLY gosh-wow stuff arrives. Then, when it does and we look to the future, once again we see technologies coming that make our current capabilities seem mundane, so once again we think “Oh this is not the Singularity, THAT is!” and so on, ad infinitum.
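Taking the quoted figures at face value, the comparison is simple arithmetic; a quick sketch (note that the 70 MHz ‘clock speed’ for the brain is the rough analogy used above, not a measured quantity):

```python
# Comparing the quoted 'Internet as machine' figures to the brain analogy.
INTERNET_MHZ = 1.4e12   # 1.4 trillion MHz, as quoted above
BRAIN_MHZ = 70          # the text's rough figure for the human brain

speedup = INTERNET_MHZ / BRAIN_MHZ
print(f"{speedup:.0e}")  # 2e+10: twenty billion times the brain's 'clock'

# Storage: 85 trillion megabytes, expressed in more familiar units.
storage_mb = 85e12
storage_exabytes = storage_mb / 1e12   # 1 exabyte = 10^12 MB
print(storage_exabytes)  # 85.0 exabytes
```

Even an absurd-sounding twenty-billion-fold speed gap has, as the paragraph argues, already been normalized into the background of daily life.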
By being truly personal, computational devices like the iPhone effectively remove distinctions between the interface and the rest of the world. Whereas the semi-portable nature of laptops allowed only intermittent access to the Web, one need not be constrained by a lack of convenient places to rest an iPhone. So long as there is wireless access to the Internet, any thought can be shared, any information can be obtained. Several developments will have an influence in greatly expanding ‘world as interface’. It seems very likely that storage space will soon be plentiful and cheap enough to more than serve a person’s lifetime needs. The need to delete files in order to make room for new material will be a thing of the past. Of course, future activities might come along that require prodigious amounts of storage space, but the kinds of activities that typify everyday use today would not produce enough terabytes to fill future hard drives, let alone the vast and cheap storage the Cloud will offer.
Another development will be a proliferation of devices that automate the process of capturing and uploading moments of interest in one’s life. Taken to its logical extreme, this might almost make it seem like the Web is an expansion of your own mind. As soon as something interesting enters your conscious awareness, it is captured and uploaded to the Internet.
To truly feel one with the Web, the ability to retrieve relevant information must be equally automatic. The greatest challenge in an era of effectively limitless storage and automatic capturing of information will be retrieval: finding the relevant image or file or article or opinion as and when it is needed. A proliferation of sensors that can log up-to-date information about where you are and what mood you are in (inferred, perhaps, by discreetly monitoring many physiological states), along with cross-referenced data of your personal life and its various connections to that of other people, might make it possible for searches to be carried out automatically on your behalf. When (or perhaps I should say ‘if’) the challenge of effortlessly retrieving any kind of information is met, perhaps the next challenge will be to successfully predict when someone is about to require something, and to provide it at the very moment conscious awareness registers a need for it.
The philosopher David Chalmers once noted how “the iPhone has already taken over some of the central functions of my brain”, going on to note how “the world is serving not as a mere instrument for the mind. Rather, the relevant parts of the world have become parts of my mind. My iPhone is not my tool, or at least it is not wholly my tool. Parts of it have become parts of me”9.
The proliferation of networked embedded computers could also bring about alterations to the body. Although it feels like we have a clear sense of where the body ends and the rest of the world begins, experiments have shown our body maps incorporate things that are part of the environment. For instance, if you extend your hand until it is within reach of an object, this will activate regions of the parietal cortex. Now, if you hold something that will extend your reach (a stick, perhaps) the same regions will show more activity when the end of the stick (rather than your hand) comes within touching distance. In effect, the stick has become incorporated into your body map and, as far as your brain is concerned, it is as much a part of you as your leg is. Virtual reality pioneer Jaron Lanier explored the mind’s ability to redefine the boundaries between the body and the external world. He found that, so long as the brain can easily control it, virtual bodies can morph into any shape, with any number of limbs. “I played around with elongated limb segments and strange limb placements”, Lanier wrote, also noting that if you could wiggle your toes and observe some corresponding change to clouds in the sky, “the clouds would start to feel like a part of your body”10.
Of course, no such connection between toes and clouds exists, but we could one day exert control over external objects using nothing but thought alone. This would come about if the trend towards more personalized computers advances to a point where they interface directly with the brain. Cyborg implants, in other words. Such devices can either be non-invasive, in which case they sit outside the skull and attempt to pick up signals from the brain; they can be semi-invasive, placed beneath the skull and sitting on the surface of the cortex; or they can be invasive, in which case they penetrate into the brain itself. The deeper an implant interfaces with the brain, the richer the signals it can detect. Even relatively simple motor tasks require substantial bandwidth, and it is doubtful that a non-invasive device could obtain enough information to expand telekinesis beyond the simplest tasks. Another problem is that all non-invasive methods currently produce data that is too noisy and too delayed with respect to the neural response. The advantage of non-invasive forms is that they obviously require no surgery to install.
Currently, we perceive the body as a localized object occupying a well-defined location in space. But would this perception still hold when more and more external objects respond to mental commands as directly as any limb would? In time, the body schema (the brain’s maps of your body, its limbs, how they are orientated with respect to the environment, and of the environment itself) might evolve into a perception of the body as highly decentred, indistinct, simultaneously everywhere and nowhere.
Along with this, we are already seeing the Web eroding boundaries like work/home and employed/retired that once defined our life’s narratives. All of this is heading in roughly the same direction: towards altering perceptions of a ‘core’ self, an unchanging aspect that remains constant even as, on a surface level, personalities change. This is a future in which the self has become a much vaguer phenomenon.
Some of the folk anticipating this change see it as a bad thing. They fear that the development of increasingly compelling virtual worlds, and the ability of brain-machine interfaces (BMIs) to precisely control emotional states, will be seen as a solution to feelings of inadequacy brought about by the need to be highly adaptable in an uncertain world. They speak of people seeking oblivion, surrendering the self wholesale to a collective identity within fantasy worlds; indulging in the artificial happiness delivered by mood-altering drugs or their cyborg equivalents. On the other hand, there are those who view this as an opportunity for a transformation and transcendence of the self, rather than its destruction. One such person is Richard Davidson, a neuroscientist who studies the neurological roots of happiness:
“We are on our way toward discovering that our personality…is far more flexible than we thought. And it’s going to give us a more fluid concept of the self and thus a different attitude towards our way of being”11.
Somebody else with fascinating insights into the nature of identity is Professor Michael Persinger. In neuroscientific circles he has become famous for his use of a helmet equipped with magnetic coils that can be programmed to emit a pattern of magnetic pulses directed at selected areas of the cerebral cortex. In 8 out of 10 people, this results in a strong sense that there is somebody in the room with you (in the experiments, the volunteers are left alone in the room and monitored via CCTV from an adjacent room). You cannot see or hear this person; you just have a strong impression that somebody else is present. Persinger’s device earned the nickname ‘the god helmet’ after some people attributed this presence to a well-known religious figure like Jesus. Persinger believes the ‘sensed presence’ is caused by the brain not having a single sense of self but several, and these ‘selves’ are created in different parts of the brain. Depending on how the brain is functioning, these selves can either feel like aspects of one’s own self or, in certain conditions, they can feel like autonomous entities. Persinger said:
“Our normal sense of self- what I usually describe as ‘me’- is connected to the left hemisphere of the brain. That’s where a lot of linguistic activity takes place and the sense of self is a very linguistic phenomenon. The right hemisphere, however, has its own counterpart to the left’s sense of self, but it is inhibited or repressed by the communication that goes on between the two hemispheres”12.
So, the ‘self’ generated by the right hemisphere does not normally enter into our conscious awareness. But, given the proper conditions, the right counterpart can intervene on consciousness and when it does so, it is perceived as an ‘other’ rather than as ‘me’.
The technological fruits of this endeavour to correlate a broad range of spiritual states with patterns of brain activity would be BMIs that can drive the brain into the right configurations, thereby allowing anyone to feel bliss, or relaxed awareness, or a sensed presence, or the sensation of floating outside of their own body, at one with the universe. All of this will coincide with a reality where adaptability to change is paramount. Rita Carter observed how “lives were constrained by duty, custom, limited horizons…Now, suddenly, we find ourselves in a world where flexibility, adaptability and personal reinvention are not just acceptable but positively encouraged”13.
Ultimately, we may find it advantageous to evolve away from biology and become purely technological beings. Some transhumanist thinkers like Hans Moravec and Ian Pearson see the rise of robots as providing the biggest incentive to upload. Simply put, if you can’t beat ‘em, join ‘em. In other words, if your unmodified biological brain is so sluggish in comparison to the radically-improved brains AIs and uploads possess, if you are a moron in comparison to the fundamentally smarter people around you, you would be crazy not to junk your obsolete mind for something more capable of useful activity in this brave new world.
At the same time, as uploads and superintelligent AIs (SAIs) proliferate and outnumber biologicals, they would want to alter the world around them to their own advantage, just as humans have shaped landscapes to serve their needs (often at the expense of other species). And what do software-based people need? Computation, the more powerful the better. Post-humans would realise that there is a gargantuan amount of computation going on around them, because within all apparently solid and immobile objects, atoms are in motion, rapidly-moving electromagnetic fields are being generated, and particles are changing spin. According to Ray Kurzweil, electromagnetic interactions alone amount to “at least 10^15 changes in state per bit per second going on in a 2.2 pound rock, which effectively represents about 10^42 calculations per second”14. To put it in perspective, consider that 10^42 cps is ten trillion times more computation than that performed by the entire human race.
The catch lies in the fact that these computations are random and not doing anything useful. So, the post-humans would endeavour to organize the rock’s atoms and particles in a more purposeful manner, ultimately creating a 2.2 pound computer that could perform the equivalent of all human thought in ten thousand years every ten microseconds. Eventually, the post-humans would reach fundamental limits to how small computational elements can be built. Perhaps, at this point, computational growth will spread outwards. Of course, this happens already, because the number of chips we produce is currently expanding at a rate of about 8.3% per year. Once fundamental limits to miniaturization are reached, and assuming post-humans are not content with the amount of computation at their disposal, outward expansion of computronium would be the primary form of growth.
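The claim that such a computer could match ten thousand years of all human thought every ten microseconds can be sanity-checked with back-of-envelope arithmetic; a minimal sketch, assuming roughly 10^16 calculations per second per brain and 10^10 brains (figures not given in the text):

```python
# Sanity check: can a 10^42 cps rock-computer match ten thousand years
# of all human thought in ten microseconds?
BRAIN_CPS = 1e16             # assumed cps per human brain
POPULATION = 1e10            # assumed number of brains
HUMAN_RACE_CPS = BRAIN_CPS * POPULATION   # ~1e26 cps

SECONDS_PER_YEAR = 3.156e7
human_thought = HUMAN_RACE_CPS * 10_000 * SECONDS_PER_YEAR  # ~3.2e37 ops

ROCK_CPS = 1e42              # Kurzweil's figure, quoted above
rock_ops = ROCK_CPS * 10e-6  # ops performed in ten microseconds: 1e37

ratio = human_thought / rock_ops
print(round(ratio, 1))  # ~3.2: the two quantities agree to within an order of magnitude
```

Under these assumed figures, the two quantities land within a factor of a few of each other, so the claim holds as an order-of-magnitude statement.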
How would this alter the environment? Some thinkers like Anders Sandberg and Robert Bradbury have outlined megascale construction projects that would maximize computation in the post-humans’ local environment. Bear in mind that we are talking about post-humans here, weakly godlike intelligences, so by ‘local environment’ we do not mean a city, a country, or even the planet. We mean the solar system.
One such megascale engineering project is “Zeus”, a ‘Jupiter Brain’ outlined by Anders Sandberg. Zeus is a sphere of nearly solid diamondoid, with a radius of 9000 km and a total mass 1.8 times that of the Earth. Its computational elements consist mainly of reversible quantum dot circuits and molecular storage systems. Each processing element acts as a semi-independent unit, communicating with other nodes using either fibre optics/waveguides or directional signals sent through vacuum. According to Sandberg, “since the many processing/memory nodes need to be close to each other due to the many short-range connections, the possible distributions are either a central core surrounded by connections, a cortex with connections through the interior, or distributed clusters of nodes in the interior. Of these the cortex model is most volume-efficient, and it will be assumed that Zeus is organized in the same way”15. A concentric shield surrounds the central sphere, both to protect the processing/memory nodes from the hazards of space and to dissipate heat via radiators. Zeus can perform between 10^49 and 10^61 operations per second. Lower estimates assume nodes acting as single processors, whereas the high estimates assume clusters of parallel processors.
It can be difficult to grasp the size of a number like 10^61, but Hans Moravec made some calculations that help put it in perspective. Suppose it takes one hundred million megabytes, or 10^15 bits, to run a human-equivalent mind. Furthermore, suppose you need a thousand times more storage, 10^18 bits, to encode a body and its environment. A large city with a million inhabitants might require 10^24 bits. Notice that we are still nowhere near 10^61. In fact, if Moravec’s estimates are correct, even the efficiently encoded biospheres of a thousand galaxies (assuming one planet with a population of 6 billion per star) would require a ‘mere’ 10^45 bits! It therefore seems possible that a Jupiter brain could simulate whole universes with planetary systems populated by hundreds of billions of simulated people, running through hundreds of millions of years of evolutionary and cultural history during a mere ten microseconds of thinking time.
This might all sound crazy, but we already use agent-based modelling in which adaptive rules permit behaviour to change in response to previous interactions, and we also run simulations of astrophysical events like planetary formation or supernovae. There are plans to create a model with 10 billion agents: the first simulation of the development of an entire planetary population. Of course, each agent is not as complex as a human (I doubt they are as complex as microbes), but you only need to extrapolate current progress in computing toward the limits physical law permits, along with progress in areas like whole-brain emulation and planetary modelling, to see that this is all within the realms of possibility.
Some thinkers (most notably, the physicist Bin-Guang Ma) have proposed a ‘relativity of reality’. In relativity theory, ‘motion’ has no absolute meaning. You can only say if something is at rest or in motion if you have a frame of reference. ‘Relative to the station, the train is in motion; relative to the train, the passenger is at rest’. Applied to reality itself, relativity theory would say there is no absolute meaning to reality. You can only tell if a world is simulated when there is a reference world it can be compared to. In the case of online worlds like Blue Mars, we can use the physical world as the frame of reference and say for sure that Blue Mars is a simulation. But if there were intelligent software beings within Blue Mars, who had no contact with physical reality and knew only their own little ‘universe’, Blue Mars would be the primary reality for them and they could not know their world was a simulation running on machines in a more expansive and ancient reality. Similarly, we cannot know for sure that the physical world is not, itself, a simulation. Perhaps it is one of many billions of simulated worlds existing in a Jupiter brain? In fact, any reality, even one running a simulation as sophisticated as our Universe, has no better or worse chance of being a simulation than any other.
Uploading is often seen as a form of immortality. The body may succumb to decay, but the mind- now independent of any one substrate- can copy/transfer itself to a replacement body. However, Vernor Vinge’s speculation regarding the nature of post human existence questions whether this is life-everlasting or death redefined:
“A mind that stays the same cannot live forever; after a few thousand years it would look more like a repeating tape-loop than a person…To live indefinitely long, the mind must grow…and when it becomes great enough and looks back…what fellow-feeling can it have for the soul it was originally?”16.
Any one mind must be limited in what it can experience. Eventually, to avoid the ‘repeating tape-loop’ effect, the desire to experience things outside the boundary of one’s identity would become overwhelming. The mind would seek to dissolve the boundaries that both define and confine it, gradually shedding its identity in order to encompass experiences outside the scope of its prior self. Neuroscience research into happiness and meditation reveals an increase in synchrony in neural activity. Because pattern recognition requires one set of neurons rather than another being active, large-scale synchrony and reduced frontal activity (also observed during functional scans of meditating subjects) lead to a diminishing sense of identity. So, we can either say that a drive toward reduced neocortical activity relative to the baseline state and higher-than-average synchrony among brain regions is a pursuit of pleasure, or see it as a drive towards self-annihilation. If pleasure is desirable, then so is a transition to a state where the self no longer stands out as a subjectively unique and distinct identity. Bruce Katz argued:
“the real opposition is not between the pleasure principle and the death instinct (as Freud supposed) but between the survival instinct and both of these taken together. Survival requires boundaries between concepts and especially the self and other; pleasure delights in the breaking down of barriers, and the reduction of specific thoughts in favour of a more distributed buzz of activation”17.
Perhaps, then, minds that seek to grow beyond the confines of the self will seek each other out, merging together to form something greater. To a limited extent, individuals do this already. Emile Durkheim spoke of a collective effervescence that emerges when large groups of people congregate at a concert, sports event or some such collective experience. “This energy holds the group together; it makes each individual feel as though he or she is an element of something greater than the sum of its parts”18.
As post-human selves seek ever more sublime states of collective effervescence, Katz argued that the result would ultimately be to "manufacture Buddha. A being without division…It would not seek to look beyond itself for intellectual satisfaction, for it is truly self-contained…Perhaps, though, in its inner core, among the many thoughts it has amalgamated, would be the smallest realization that without differences, its perfection is marred. Perhaps it would choose to fracture. Perhaps we ourselves are the product of an earlier such fracturing".19
Technology that sounds awe-inspiring now will actually seem ordinary when it emerges.20 I apply this to intelligence as well. Intelligence advances, but it too gets normalized, so that it never comes to strike us as superintelligence. From the perspective of the Middle Ages we are a collective superintelligence today; from our own perspective, though, we consider our intelligence normal. I argue that one thing Singularitarians and transhumanists get wrong is their claim to all of this awe and momentous feeling, and their claim that we will all be like Eddie Morra, the fictional character in the movie 'Limitless' who had superintelligence. Of course, if someone had that ability now, in 2012, then (s)he could rightly claim superintelligence. But if in decades to come it is the norm for people to possess that ability, then I do not think it will be considered special or super. One of the authors of this paper, Extropia DaSilva, has said: "My prediction: We get to 2045 and we live in fast times, but we can see on the horizon upcoming technologies that will make our current capabilities seem quite mundane. So we defer announcing 'the Singularity is here' until that REALLY gosh-wow stuff arrives. Then, when it does and we look to the future, once again we see technologies coming that make our current capabilities seem mundane, so once again we think 'oh, this is not the Singularity, THAT is!' and so on, ad infinitum".21
Nevertheless, there is a cultural identity split within the modern world between those embracing the accelerating trend and those stuck in the past. The intelligent side is the embracers. This is not to say that I have no qualms with the Singularity community on the issue of intelligence; I do, as I have just made clear. I claim that my theory of intelligence is a necessary correction to the Singularity and transhumanist community, who seem to think that they will be superintelligent in the future.
We are in an early stage of the post-biological era. The Singularity is a period, not an event, and it impacts on culture. Some project onto the Singularity hypothesis, making it heaven on earth; these are archetypal meaning-projections. We remove those exaggerations ('mind-blowing', 'awe-inspiring', 'momentous'). Clearly these projections are what the individual dreams of feeling, but they go way too far and are more about inner psychology than about technology. Technology does impact and create culture and identity, but it tends to be more pragmatic, and in medical science it is also humane. From our perspective in 2012, tomorrow's technology and intelligence of course appears amazing. But it becomes normalized. Things will seem normal in 2045. (See Relativity, above.)
Modern Western culture is (or has been) our platform, base and reference point. But it should no longer dominate. On the news we should hear reporters frequently refer to 'Digital', 'Singularity' or 'Online' culture. We are separate, or seek separation; in that way we can distinguish ourselves on intelligence from those who are not keeping up with Singularity culture. Thus there is a cultural phenomenon of 'splitting off'. This is not a contradiction. Some may say that 'not keeping up' means you might experience astonishment and awe when a new technology arrives, but this is an exaggeration: the internet was not met with collective awe and astonishment when it burst onto the scene. Those people who are immersed within the new digital culture communicate and interact with one another. That is their culture. Most of my study has been within Jungian psychology. However, I became frustrated at its lack of substance and inability to resolve psychological distress. I felt that the Jungian field massively failed to deliver on its primary duty of reducing suffering.
The criticisms I have concerning the Technological Singularity are far less important. It is precisely because technology progressively succeeds in reducing human suffering that I switched from Jungian psychology to the Technological Singularity. There is more substance to what is said within the culture of the Singularity; its culture of progression is matched in the objective outer world. It is true that many psychological problems arise from causes that would break anyone: a tragic accident leading to post-traumatic stress, chronic physical pain, poverty. Technology will of course help eliminate much suffering that has such easily definable causes. It will then be interesting to see what remains. Here I am referring to the purely psychological problems and the psychosocial (or relational) problems. In these cases it is often a failure to use intelligence that keeps people stuck in the same cycle of despair. Again, the word 'exaggeration' is relevant. Some people seem to need a problem, but are then defeated by it, because they cannot deal with problems despite their need for one. Intelligence is the answer. Use time and energy to genuinely think; then the reward will come. This applies in all areas of life. Hence what Ray Kurzweil and Michael Anissimov say (below) can be applied to dealing with purely psychological or psychosocial problems. Kurzweil: "The primary problems we cannot solve are ones that we cannot articulate and are mostly ones of which we are not yet aware. For the problems that we do encounter, the key challenge is to express them precisely in words […]. Having done that, we have the ability to find the ideas to confront and resolve each such problem."22 Anissimov: "Why have a bunch of lame goals when you can just obsess over one awesome thing..!?"23
Lame goals could also be psychosocial problems that the person is not dealing with, but is obsessing about. The person needs either to deal with the issue or to dump it for more important aspirations of greater value: invest your time and energy in what matters, not in what is unimportant. Of course, the person can also be helped by intelligent others. These intelligent others constitute his or her cultural context; they are the people with whom (s)he communicates and interacts. It is important that (s)he listens to what they say, because these others may just say something that (s)he needed to hear, thus increasing his or her intelligence.
Extropia DaSilva is a digital identity trying to position her mind within the world of cyberspace. One reason for this is so that the body, and the projections it encourages, become less relevant. While psychological projections may be lessened, the goal is not to delete the body entirely. A human mind is an embodied mind, and could not become disembodied without extensive, dehumanizing engineering; a mind modelled on a human brain will need at least the perception of a body. The term 'substrate-independent mind' means a mind no longer tied to one particular body, not a mind that needs no body at all. On balance, technological progress is a healthy phenomenon. The digital identity of Extropia DaSilva is an attempt by the primary self to progress beyond the understandable desire to eradicate the risk of bodily suffering and, furthermore, to overcome projections and human stupidity of all kinds, and to discover a world of pure mind of the intelligent, thinking type.
Both authors of this paper seek openness to others' views. Our position is not a close-minded one. On the contrary, the views of others need to be taken into genuine consideration, and neither simply accepted nor rejected. So we are not open to those who simply accept everything that (for example) a Singularitarian says, nor to those who simply reject everything a Singularitarian says. We are open to others who are open; in other words, we are open to others who genuinely think, and closed to those who are close-minded. For example, we are opposed to someone who, seeing that the work of X is being discussed, rejects absolutely everything said about X's work and expends no time or energy considering its merits. Likewise, someone who merely defends everything that X says and refuses to countenance any alternative is also close-minded, and we reject that religious approach as well. We reject these psychological types as non-thinking, close-minded, defensive, and projective in nature. The modern Western culture we have been born into does encourage thinking, but it also, unfortunately, seems rife with the kinds of non-thinking referred to above. Modern Western culture is therefore a mixture of intelligence and stupidity. The writers of this paper want, and need, to attain an existence within a culture of intelligence.
This is not about a progression to superintelligence, but about the elimination of the human psychological stupidity pervading our everyday lives. Vanquishing from our lives the absolutist, close-minded, projection psychology that disguises itself as 'thinking' will bring about a more honest and truthful culture. Hence, as well as the essential eradication of physical disease, physical pain and maybe even death, Singularity culture can eradicate human stupidity from our lives, for those who desire this. This too is a healthy development.
1. James Albus, IBM Research's Almaden Institute Conference on Cognitive Computing.
2. Rodney Brooks, 'I, Rodney Brooks, Am A Robot'.
3. Gary Small and Gigi Vorgan, 'Meet Your iBrain', Scientific American Mind, Vol. 19, No. 5.
4. Gary Small and Gigi Vorgan, 'Meet Your iBrain', Scientific American Mind, Vol. 19, No. 5.
5. Frank Theys, 'Technocalypse Part 2: Preparing for the Singularity'.
6. John Brockman and Ray Kurzweil, 'After the Singularity: A Talk with Ray Kurzweil'.
7. Rodney Brooks, 'I, Rodney Brooks, Am A Robot'.
8. Ray Kurzweil, 'The Singularity Is Near'.
9. Andy Clark, 'Supersizing the Mind: Embodiment, Action and Cognitive Extension'.
10. Jaron Lanier, 'You Are Not a Gadget: A Manifesto'.
11. Lone Frank, 'Mindfield: How Science Is Changing Our World'.
12. Lone Frank, 'Mindfield: How Science Is Changing Our World'.
13. Rita Carter, 'Multiplicity: The New Science of Personality'.
14. Ray Kurzweil, 'The Singularity Is Near'.
15. Anders Sandberg, 'The Physics of Information-Processing Superobjects: Daily Life Among the Jupiter Brains' (1999).
16. Vernor Vinge, 'Vernor Vinge on the Singularity'.
17. Bruce Katz, 'Neuroengineering the Future: Virtual Minds and the Creation of Immortality'.
18. Bruce Katz, 'Neuroengineering the Future: Virtual Minds and the Creation of Immortality'.
19. Bruce Katz, 'Neuroengineering the Future: Virtual Minds and the Creation of Immortality'.
20. Aubrey de Grey concurs. He writes: I "suspect that the technological singularity may actually come and go virtually unnoticed by humanity. If we create extremely powerful, and also extremely autonomous, computer systems that are irrevocably set up to look after our best interests, one feature of the human psyche that they may particularly take into account is that most of us are really not very interested in them, as compared to our interest in each other, in nature and such like. We're interested in things that they do, of course, such as video games, but mostly not in how they do it. (This is, I believe, well demonstrated by the inexorable market-driven trend for computers to become easier to use and less and less like computers of yore.) Accordingly, hypothetical post-singularity computers may be so user-friendly that we cease even to notice that they exist, and that the world was not always so enjoyable for us". (Aubrey de Grey, 'The Methuselarity and Longevity Escape Velocity Extended Abstract'.)
21. Extropia DaSilva, 'Singularity: Always Near but Never Near?'
22. Ray Kurzweil, 'The Singularity Is Near'.
23. Michael Anissimov, Twitter (@mikeanissimov).