
CTRL-ALT R 3: RISE OF THE ROBOTS AND THE JESSIE SIM UNIVERSE.
‘Pierre goes cross-eyed, trying to understand the implications of the slug’s cosmology’ - Charles Stross.
Science fiction stories can occasionally predict the future. However, like most forecasts they are rarely completely accurate. Jules Verne anticipated submarine warfare- but against wooden vessels rather than armoured battle fleets. The 1950s visions of space exploration missed out the role that digital computers would play in such scenarios, instead deferring the task of mapping out the details of missions to human navigators armed with slide rules. As for science fiction tales like ‘Neuromancer’ and ‘The Matrix’, our actual VR worlds fail to match up to these visions in a couple of ways. Firstly, you do not, generally speaking, connect to such worlds by plugging a cable into your brain or by wearing glasses that beam the world onto your retina. The latter technology is used to a very limited extent, but is certainly nowhere near as ubiquitous as it is in ‘Neuromancer’.
The second way in which contemporary online worlds differ from science fiction is this. We do not live in ‘the’ Matrix, a single, unified virtual environment. Instead, the Internet has lots of MMOGs and online worlds that are self-contained and to all intents and purposes isolated from one another. Your Second Life® avatar cannot travel to Everquest, nor can the money you earn questing in Final Fantasy XI be used as currency in ‘There’. The author Steven Johnson wrote, ‘because the metaverse evolved largely out of videogames, it makes sense that it should be composed of fiefdoms- after all, you wouldn’t expect a Grand Theft Auto crack dealer to drop in for a barbeque with the Sims’.
That last comment illustrates how online worlds are often incompatible on more than a technical level. There may well be a cultural divide too, as behaviour that was acceptable and even encouraged in one MMOG may be a violation of the rules in another. Back when SL was still going through beta testing, a group of players from ‘World War 2 Online’ began logging in and using the content creation tools to plan new tactics. Once it was clear that SL was quite useful as a means for planning tactics for WW2O, the players invited friends along to join in. According to Cory Ondrejka, this caused ‘a substantial change in our demographic. Suddenly, we’re presented with a community of a few hundred players, a good percentage of whom are those WW2O players asking: “who do I shoot?” ‘.
The older residents of SL were not all that interested in running around shooting at each other, but they recognised a business opportunity when they saw one. Before long, stores sprang up around the WW2O players’ clubhouse. ‘Massive battles broke out’, Ondrejka remembered. ‘Eventually, it all settled down as we decided to give the WW2 Onliners this one simulator: a 16 acre square named Jessie’. This became the place to go in SL if you were looking for a spot of lethal combat. The sim was separated from its more peaceful neighbours by a high wall, memorably described by Hamlet Au as ‘a cross between the cold war’s Berlin Wall and a giant dam, to hold back the kind of trouble you come into Jessie to look for’. Along with that wall, just about every other divide manifested itself too, including cultural differences and mutually incompatible political outlooks. A complete history of Jessie is not necessary here. All we need to know is that there is (or was; I am not sure if it still exists) a place in SL that is cut off from its neighbouring sims.
A FICTIONAL ANSWER TO THE FERMI PARADOX.
In the 1970s, the Polish author Stanislaw Lem wrote ‘A Perfect Vacuum’, a collection of fictional reviews of made-up works by non-existent authors. The last piece in this collection is ‘The New Cosmogony’, supposedly a transcript of Professor Alfred Testa delivering his Nobel Prize acceptance speech. The reason why he was awarded a Nobel? He solved the Fermi Paradox.
Now, the paradox is not fictional. It is a real puzzle that was formulated by Enrico Fermi. One day, over lunch, he and some colleagues were discussing what seemed to be the destiny of the human species, which was to one day leave this Earth and spread out across the Universe. Astronomy has long worked under the ‘Copernican Principle’, in which it is believed we hold no special place in the cosmos. That being the case, Fermi reasoned that there should be other planets on which life evolves a technologically-capable species that also develops the capability to venture out into the cosmos. Furthermore, the Universe is known to be home to stars like the Sun that are many millions of years older than it is right now. That means there should be alien civilizations that are millions of years older than ours, and so ahead of us, technologically speaking, by an equivalent amount. If it is likely that humans will become an intergalactic species in the future, it must then be the case that some alien civilizations have taken that great leap already.
The Fermi Paradox, then, consists of the following contradiction. If we assume great leaps in the sophistication of our space age technology (perhaps ultimately utilizing warp drives and other such theoretical physics) we should expect to spread out among the stars. This seems even more reasonable when you remember that modern astrophysics tells us Earth will not be habitable forever. Also, given the fact that the Universe could be home to alien civilizations that are anything up to tens of millions of years older than ours, we should expect this scenario to have been played out already. But what we observe is a Universe that is, apparently, not home to Star-Trekkin’ alien tourists or colonists. The finite lifespan of stars as worked out by astrophysics; the technical capability to journey into space, coupled with theoretical capabilities for reaching distant stars (and maybe even galaxies); the drive for the preservation of the species; and the Copernican Principle all add up to a Universe in which life should be seen spreading out across the cosmos. But no satellite, or telescope, or instrument of any kind has ever gathered compelling empirical evidence of anything other than a Universe that is devoid of any such endeavour. How do we explain this discrepancy between theory and fact?
‘Jessie’ would seem to offer two possible explanations- one quite obvious and direct, the other more subtle and rather metaphorical. First, the obvious explanation. The whole sim is testament to our species’ violent tendencies. It is, sadly, not too difficult to imagine us turning our skills at developing potent technologies to destructive ends, perhaps culminating in the development of doomsday weapons. Perhaps some irreconcilable difference (theological, political, moral, whatever) then develops and escalates to the point where a global war involving the use of such terrible destructive power is unleashed. Furthermore, perhaps it is just a sad quirk of fate that all technologically-capable species develop the capacity for such great power before developing the collective maturity to handle it properly. The inevitable extinction of such a species would then seem like a given.
The other explanation has to do with the fact that the Lindens® eventually deemed it necessary to segregate the WW2O players within a sim of their own, isolated from the rest of SL by an enormous wall. Stanislaw Lem’s fictional professor tackled the Fermi Paradox by asking the following question: Suppose civilizations millions of years more advanced than ours deemed it necessary to keep their territory separate from encroaching neighbours. What might constitute good defences against intergalactic civilizations? He quickly dismisses the idea that intergalactic civilizations might defend their territory according to the sci-fi convention of patrolling space with warships equipped with proton torpedoes or Death Stars bristling with planet-obliterating lasers. The Fermi paradox, after all, rests on the fact that we are not surrounded- granted, at stellar distances- by any such technology. But the professor argues, ‘we do not see them, because they are everywhere. That is, not they, but the fruit of their labour’. It is fruitless to look for spacecraft and what-have-you, because ‘a billion year old civilization employs none. Its tools are what we call the laws of nature’. Physics itself is the “machine” of such civilizations.
In Lem’s story, the professor comes to realise that we are too limiting in our appreciation of what’s possible. ‘(Man) tells himself that maybe someday he will come near to matching nature in its excellence of action. But to go further is impossible’. So, humanity endeavours to learn how atoms may be arranged according to physical law, and has as its aspiration the products of Darwinian evolution. ‘The natural represents the limit of the series of works that “artificially” repeat or modify it’.
However, the professor argues that technological ability actually extends much further than simply discovering physical laws and working within their constraints. Beyond that, there is ‘the level where such laws are laid down…In the Universe, it is no longer possible to distinguish what is “natural” (original) from what is “artificial” (transformed)’. Suitably advanced civilizations can play around with the very laws of physics themselves.  When we study those laws, ‘we discern…the basic canons of the strategy employed by the Players’.
Why is the Universe expanding? Because the Players that came before us evolved in a cosmos in which ‘the physics established by one (civilization) would always happen upon, during expansion, the physics of another’. Each such civilization evolves in a region of space with its own physical laws. As their knowledge grows in power they learn how to manipulate reality on ever-more fundamental levels, plunging into the inner space of existence. Meanwhile, their technology becomes ‘an ever-widening radius’ that, in the fullness of time, encounters the technological expansion of alien civilizations. ‘These physics could not traverse one another without collision, because they were not identical; and they were not identical because they did not represent the same initial conditions for each such civilization considered separately’.
So, a civilization ultimately triggers an expanding bubble of ‘artificial’ physics that eventually slams into other such bubbles. In doing so, ‘prodigious amounts of energy were released by annihilations and transformations of various kinds’ (in the story, the faint heat that permeates the universe- the cosmic microwave background- is  said to be an echo of such catastrophic encounters). In order to save the Universe from utter ruin, the weakly godlike civilizations tweak the laws of physics to ensure space itself can expand faster than the civilizations that evolve within it. ‘It is only in such a Universe, despite the fact that new civilizations are continually emerging, that the distance separating them remains permanently vast’.
Each Player also seeks to prevent ‘the rise of a local coalition of new players’. Such things as the formation of antagonistic groups, conspiracy to conquer and the establishment of centres of local power all require effective communication. Superior communication is so advantageous that one can imagine an endless stockpiling of energy, if physics permitted an increase in the speed of action in direct proportion to the energy invested. A Player that had at its disposal ten times the energy of its rivals could inform itself ten times as rapidly. And a Player with twenty times the energy would have double the advantage again. ‘In such a Universe’, wrote Lem’s fictional narrator, ‘the possibility exists to monopolize control over its physics and all other players of the Game. Such a Universe might be said to encourage rivalry, energy competition, the acquisition of power’.
Once again, the players turn to their ability to lay down physical laws. The speed of light is set up to represent a fundamental limit on how fast communications can be. Beyond a certain point, it simply does not pay to stockpile any more energy, because it will not enable you to break the barrier imposed by light speed. Such a barrier, coupled with the vast distances between Players (which remain vast, thanks to the expansion of space itself) ‘constitutes an absorption screen against all who attain that level of the Game where they become full-fledged participants in it’. In other words, rather than rely on comparatively ineffectual methods such as imposing legal restrictions, issuing threats, punishment or coercion, Players minimize the risk of coalitions of antagonistic groups who may threaten the stability of the Game (i.e., trigger an intergalactic apocalypse) by adjusting the laws of physics such that there is no opportunity for coalitions of Players to form in the first place. ‘Each Player, then, operates on the strategic principle of minimax: it changes existing conditions in such a way as to maximize the common gain and minimize harm. For this reason, the present Universe is homogenous and isotropic (it is governed by the same laws throughout, and in it no direction is favoured over another)’.
Griefers. One thing you learn in SL is that they are attracted to opinionated loudmouths (I have no particular resident in mind, of course). Not drawing too much attention to yourself is just about the best way to avoid their unwelcome attention. If some Players are griefers, able to ruin your day with apocalyptic acts of wanton destruction, perhaps the silence of the heavens makes strategic sense. ‘The Players do not speak to one another; they themselves have prevented it; it was one of the canons of the stabilization of the Game…We cannot listen in on the conversation of the Players because they are silent, silent in keeping with their strategy’.
PARADIGM SHIFTS IN PATTERN BUILDING.
It might seem a bit of a waste of time, devoting a sizable chunk of this essay to a made-up cosmology, penned by a fictional professor. Surely, there is enough real science and philosophy to think about, without needing to wander into  total fantasy? But the further you go in the realm of speculative thinking, the harder it becomes to distinguish a brainstorm that is meant to be taken seriously, from a story that is only meant as a bit of fun.
I first heard about Lem’s ‘New Cosmogony’ in Damien Broderick’s book, ‘The Spike’, one of my favourite works of non-fiction that deals with the so-called ‘technological singularity’. One way to understand what that means is to consider how the Universe’s ability to rearrange matter/energy into new patterns has gone through paradigm shifts in the past. Any alien intelligences observing our solar system would find examples of patterns that could have come about by chance; matter organised by the driving forces of gravity or geological activity; shaped by the sculpting power of weathering. But, on one planet- Earth- they would encounter numerous examples of patterns too complex to have come about by random chance. On our world, things are divided up into that which looks designed (such as a fish or submarine) and that which does not (such as a cloud or a sand dune). We say something looks designed if its parts are assembled in ways that are statistically improbable in a functional direction.
We also know that ‘things that look designed’ should be divided into ‘things that look designed, and are’ (such as cameras and computers) and ‘things that look designed, but are not’ (eyes and birds). Any life form, indeed, any functional part of a life form, (such as a heart or a liver cell) is a structure whose complexity is orders of magnitude too improbable to have come about by chance- but only if it is assumed that all the luck has to come in one fell swoop. In 1859, Charles Darwin introduced the brilliant idea that when cascades of small chance steps accumulate, you can reach prodigious heights of adaptive complexity.
Through cumulative evolution by natural selection, the universe gathered together various techniques for putting together increasingly complex patterns with an uncanny illusion of design. The important point to bear in mind is that the ‘creative toolkit’ of the preceding era, a time before cumulative selection somehow emerged from random chance, was simply incapable of putting together the complex patterns of matter/energy that we call ‘life’. Gravity may be able to pull on a solar nebula and organise it into planets, but it cannot pull matter together so that it forms a functional system like a heart. Wind erosion can sculpt rock into eye-catching patterns, but it is statistically impossible that it would sculpt rock into something as complex as a tree. Natural selection represents a power to rearrange the building blocks of matter into new patterns that is above and beyond the forces from which it emerged. It does, however, have recognisable limitations. For one thing, the appearance of design in life is, as I said, an illusion. Natural selection has no foresight. It cannot conceptualize anything and then set about realizing its goal. There is a certain amount of randomness in its methodology. Patterns we call life can self-replicate, but they do not copy perfectly. Errors in the replication crop up, leading to subtle differences. Differences that just happen to be disadvantageous are more likely to be wiped out by various environmental factors before they can self-replicate. Differences that happen to give an advantage stand more chance of being imperfectly copied. The way the environment ‘selects’ between accidental ‘bad luck’ and ‘good fortune’ is definitely NOT random. But evolution does not ‘know’ what a good design, or what a bad design, is. It doesn’t ‘know’ anything. It just blindly presents possible solutions to a problem and the environment selects the good from the bad. Nor, for that matter, does the environment ‘know’- at least, not in the sense that a trained architect ‘knows’ that a building she is designing will be structurally sound or not. It is just that, on at least one planet, various factors have come together that, over time, ended up producing a powerful illusion of design and an unmistakable display of complexity.
Another limitation of natural selection is its inability to comprehensively redesign anything it has built. It can only modify that which it has already made. However, it did assemble matter into a particular complex functional pattern that we call ‘human being’. Thanks to the fact that humans have brains large and organized enough to be able to imagine that which does not exist, the communicative ability to transmit ideas to fellow humans, and the manual dexterity to recombine matter into new patterns, the limitations of natural selection were overcome. Complex functional patterns arose- such as microprocessors- that are not an illusion of design but WERE designed. We can imagine complex functional patterns that do not physically exist and then work out how they might be put together. ‘How’ does not necessarily come to us all at once- all modern technology is the result of generations of accumulated knowledge and the lessons learned from trial and error. But the key thing that differentiates technology from natural selection is that we can think ahead, see a goal and stumble our way toward it. Moreover, we can, if needs be, perform complete redesigns. At the moment, the heart of a computer is the microprocessor, but we are drawing up various plans for a whole new paradigm of information processing when the current generation reaches foreseeable limits.
Admittedly, technology does seem like a step backward from natural selection, in that the products of nature are often of a complexity above and beyond that of their technological counterparts. Prosthetic limbs, for example, come nowhere close to matching the ability of natural limbs.  But, you have to remember that natural selection has been blindly throwing together possible solutions for billions of years. On the other hand, humans went from stone-age tools to manufacturing all the designs of the modern age in 100,000-2 million years. That seems a long time when judged on human timescales, but in terms of natural selection it is unimaginably rapid. If the fruits of 2 million years-worth of accumulated knowledge were compared to whatever natural selection evolved over a comparable period, the former would almost certainly have the edge in terms of functional elegance and complexity.
So, in our Universe we can identify three paradigm shifts in the power to produce patterns of matter/energy, each ultimately emerging from the products of the preceding paradigm. That is not to say that a technological species necessarily results from natural selection. It is widely believed by the scientific community that natural selection has no direction and no modern species was more likely to have evolved than any other. But nevertheless, I do believe it would make sense that a tour of the Universe would involve dividing patterns of matter and energy into that which came about through random chance and simple actions; that which evolved a powerful illusion of design through natural selection; and that which was intelligently put together to fit a purpose, via technology. It does make sense to describe technology as a new chapter in the story of pattern-building, as distinct from natural selection as natural selection is from random chance.
Right, now finally I get to the point of what the Technological Singularity is. It is a fourth paradigm shift in the ability to create functional patterns. I don’t just mean physical patterns but conceptual ones too. Mental patterns that we label ’creativity’, ’ideas’, ’inspiration’, ’intuition’. Imagine an abstract space in which every kind of mental and physical pattern sits, waiting to be discovered. Some are simple enough for it to be possible that random chance may stumble across them. Others are too statistically improbable and require cumulative selection. Others require foresight if they are to see the light of day. But now imagine that there exist patterns of a complexity that puts them beyond the grasp of the natural intelligence of human beings. Ideas beyond our ability to imagine, technologies whose complexity is too great. But it is believed by some that certain technologies like biotechnology, information technology and nanotechnology may be combined in such a way that we overcome the limits of human intelligence. Although evolution has clearly put together marvellously complex designs (and the human brain is the most complex design of all) it very rarely produces a design that is of the highest possible functionality. It just doesn’t need to do that. A design of life does not have to be perfect, it only needs to be barely good enough to have a chance of reproducing. We should therefore not be lulled into thinking that the fabulous power of the human brain really is the most capable information processor that can be physically built. Vernor Vinge identified four possible ways in which the fourth paradigm shift (aka the Singularity) might come about:
‘There may be developed computers that are “awake” and superhumanly intelligent’.
‘Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity’.
‘Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent’.
‘Biological science may provide means to improve natural human intellect’.
The first rule of Fight Club is: You don’t talk about Fight Club. And the first rule of the Technological Singularity is: You can’t talk about the Technological Singularity. At least, you can’t talk about it beyond vague terms concerning ‘unknowability’. The fundamental problem concerns effectively describing how it feels to be smarter than a human, and what such a mind might be capable of imagining. Frankly, expecting a human author to write an accurate portrayal of post-human existence is akin to anticipating the day your dog will bark out, in yaps and woofs that translate into Morse code, Pythagoras’ theorem.
Not that the intrinsic inability to describe life after the Singularity prevents people from having a go, and Broderick compiles several such attempts in his book. The last chapter deals with what I call ‘technology-driven cosmology’. That is, the suggestion that our future progeny may develop technology of such power, and extend their reach so far, that the evolution of the universe comes under intelligent control. You can see the logic at work here. Take the largest functional pattern you can conceive of (and they don’t come much bigger than a Universe), and make it the plaything of weakly godlike intelligences. After a brief introduction to string theory, a run-of-the-mill (i.e. not technology-driven) Grand Unified Theory that describes our Universe as ‘a mere shadow cast by a richer realm made up of ten dimensions’, we move on to:
Robert Bradbury’s concepts of ‘mega scale super intelligent thought machines’, built from the reprocessed matter of entire planetary systems (a feasible task, so Bradbury assures us, given replicating robotic factories) that ‘consume the entire output of stars’ in order to run enough calculations to simulate  populated worlds equal to 10 billion souls for every star in a thousand galaxies.
Jonathon Burns’ idea that ‘maybe the quark-gluon plasma (a state that the universe is believed to have been in, very early in its life) is riddled with quantized chromodynamic flux tubes in bunches. Asymmetry. Structure. Gates and switches’. In other words, once upon a time the whole Universe self-organized into an almighty computer.
Frank Tipler’s ‘Omega Point’ theory: Our technological descendants collapse the Universe and derive useful computations from the deformation of spacetime. At the final split second before it collapses back to a singularity, an infinite number of computations can be performed, which are used to ‘fetch us back to life at the end of time, reconstructing each one of us, in a virtual Universe inside its own mind’ (this last point is proved, Tipler boldly declares, by Game Theory and microeconomic analysis).
And somewhere amongst all that you get a brief synopsis of Lem’s ‘New Cosmogony’. Unless you were told beforehand which one of these theories is a work of pure fiction, and which are intended as serious speculation (at least, I assume they are serious), I doubt you would be able to discern fact from fantasy. Possibly, you might dismiss them all as blatantly ridiculous. But the reader is not left to guess, for Broderick says of Lem’s ‘New Cosmogony,’ ‘(I’m not) seriously suggesting this is how the universe really began’. Oh, good. But then, straight after that, we find ‘but the scenario does sketch out rather brilliantly just the kind of Universe we might expect this one to become’. What apparently links such far-out speculations with the here-and-now is an assumed spike in the power of information technologies and robotics (hence the book’s title, The Spike).
ROBOT EVOLUTION, 1950-2008.
Earlier, we talked about the partially successful predictions of science fiction. Futurists are not always successful, though, and there are two kinds of failure that really stand out in hindsight. The computer industry has fallen foul of both, in its time.
One form of failure is to drastically underestimate the rate of advancement and the usefulness in everyday life that a technology will have. In the 1940s, IBM chairman Thomas Watson took Grosch’s Law (named after fellow IBM employee Herbert Grosch, it states ‘computer power rises by the square of the price’. That is, the more costly a computer, the better its price-performance ratio) to mean the total global market was ‘maybe five computers’. This, bear in mind, was back in the days when computers were room-filling behemoths, based on vacuum tubes. The integrated circuit that forms the heart of all modern computers first became commercially available in the early 1960s. In 1965, Gordon Moore (then at Fairchild Semiconductor, later a co-founder of Intel) took the annual doubling of the number of transistors that could be fitted onto an integrated circuit and predicted, ‘by 1975, economics may dictate squeezing as many as 65,000 components onto a single silicon chip’.
The integrated circuit led to desktop personal computers. These inexpensive commodities were thousands of times more cost-effective than mainframes and they dealt Grosch’s Law a decisive defeat. Today, ‘Moore’s Law’ and its prediction that a fixed price buys double the computing power in 18 months’ time has become something of an industry given and has defied every forecast of its demise. The naysayers first heralded the end of Moore’s Law in the mid-1970s, when integrated circuits held around 10,000 components and their finest details were around 3 micrometers in size. In order to advance much further, a great many problems needed to be overcome, and experienced engineers were worrying in print that they might be insurmountable. It was also in the 1970s that Digital Equipment Corporation’s president, Ken Olsen, claimed, ‘there’s no reason for individuals to have a computer in their home’.
Obviously, such pessimism was unfounded. By 2004, the feature size of integrated circuit gates had shrunk to around 50 nanometers and we talk about billions of components, rather than tens of thousands. Millions of PCs had been sold worldwide by 2002, even the Amish maintain a website, and we somehow went from a time in the 60s when nobody bar a few thousand scientists would have noticed if all the world’s computers stopped working, to a society that would grind to a halt if they did.
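To get a feel for the arithmetic behind these two ‘laws’, here is a minimal back-of-the-envelope sketch in Python. The starting figures (64 components in 1965, 10,000 in the mid-1970s) and the 18-month doubling period are illustrative assumptions drawn from the paragraphs above, not historical data points.

```python
# Back-of-the-envelope sketch of Moore's Law versus Grosch's Law (illustrative figures).

def moore_growth(start_count, start_year, end_year, doubling_months=18):
    """Components per chip if the count doubles every `doubling_months`."""
    months = (end_year - start_year) * 12
    return start_count * 2 ** (months / doubling_months)

def grosch_power(price, k=1.0):
    """Grosch's Law: computing power rises with the square of the price."""
    return k * price ** 2

if __name__ == "__main__":
    # Moore's 1965 extrapolation: annual doubling from ~64 components to ~65,000 by 1975.
    print(f"1975 estimate (annual doubling from 64 in 1965): {moore_growth(64, 1965, 1975, 12):,.0f}")
    # The same compounding at an 18-month doubling period, carried from 10,000 components to 2004.
    print(f"2004 estimate (18-month doubling from 10,000 in 1975): {moore_growth(10_000, 1975, 2004):,.0f}")
    # Grosch's Law: a machine costing ten times as much should deliver a hundred times the power...
    print(f"Grosch: 10x the price buys {grosch_power(10) / grosch_power(1):.0f}x the power")
    # ...a rule that cheap microcomputers overturned on price-performance.
```

Run as written, the first line comes out at roughly 65,000 components (Moore's own 1975 figure) and the second at several billion, which is why the law keeps being declared an industry given.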
The other form of failure stands in direct contrast to the gaffe of drastically underestimating the growth of a technology. That is, a technology that fails rather completely to live up to its promise. A particularly infamous example is robotics. The artificial intelligence movement was founded in 1950 and it was believed that within a decade or two, versatile, mobile, autonomous robots would have eliminated drudgery in our lives. By 1979, the state-of-the-art in mobile robotics fell way short of the requisite capabilities. A robot built by Stanford University (known as ‘Cart’) took 5 hours to navigate its way through a 30-metre obstacle course, getting lost about one crossing in four. Robot control systems took hours to find and pick up a few blocks on a tabletop. Far from being competent enough to replace adults in manufacturing and service industries, in terms of navigation, perception and object manipulation, robots were being far outperformed by toddlers. Even by 2002, military-funded research on autonomous robot vehicles had produced only a few slow and clumsy prototypes.
Can we identify the reasons why experts came up with such wildly inaccurate predictions? In all likelihood, they were led astray by the natural ability of computers. The first generation of AI research was inspired by computers that calculated like thousands of mathematicians, surpassing humans in arithmetic and rote memorization. Such machines were hailed as ‘giant brains’, a term that threatened to jeopardize computer sales in the 1950s as public fears concerning these ‘giant brains’ taking over took hold. It was this distrust that led IBM’s marketing department to promote the slogan ‘computers do only what their programs specify’, and the implication that humans remain ultimately in control is still held to be a truism by many today (despite being ever-more untrue, given the increased levels of abstraction that modern programs force us to work at, requiring us to entrust ever-larger details to automated systems). Because computers were outperforming adults in such high mental abilities as mathematics, it seemed reasonable to assume that they would quickly master those abilities that any healthy child can do.
We seem to navigate our environment, identify objects and grab hold of things without much mental effort, but this ease is an illusion. Over hundreds of millions of years, Darwinian evolution fine-tuned animal brains to become highly organized for perception and action. Through the 70s and 80s, the computers readily available to robotics research were capable of executing about 1 MIPS. On 1 MIPS computers, single images cram memory, require seconds to scan, and serious image analysis takes hours. Animal vision performs far more elaborate functions many times a second. In short, just because animals make perception and action seem easy, that does not mean the underlying information processing is simplistic.
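To see why 1 MIPS was so crippling for vision, here is a rough Python sketch. The frame size (512x512 pixels) and the figure of 10 instructions per pixel for a single simple pass are assumptions chosen for illustration; the MIPS values are the ones quoted in this section.

```python
# Rough sketch of why 1 MIPS made machine vision glacial (assumed, illustrative figures).

def seconds_per_frame(width, height, instructions_per_pixel, mips):
    """Time to run one per-pixel operation over a frame on a machine of the given MIPS."""
    total_instructions = width * height * instructions_per_pixel
    return total_instructions / (mips * 1_000_000)

if __name__ == "__main__":
    # Assume a 512x512 greyscale frame and ~10 instructions per pixel for one simple filtering pass.
    for mips in (1, 10, 100, 50_000):
        t = seconds_per_frame(512, 512, 10, mips)
        print(f"{mips:>6} MIPS: {t:10.6f} s per frame ({1 / t:10.1f} frames/s)")
    # At 1 MIPS even a single pass takes seconds; animal vision repeats far richer
    # processing many times a second.
```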
One can imagine a mad computer designer rewiring the neurons in a fly’s vision and motor system so that they perform as arithmetic circuits. Suitably optimised, the fly’s brain would match or even surpass the mathematical prowess of computers, and the computer’s apparent mental superiority would be exposed as an illusion. The field of cybernetics actually attempted something similar to this. But, rather than rewire an animal brain so that it functioned like a computer, they did the opposite and used computers to copy the nervous system by imitating its physical structure. By the 1980s, computers could simulate assemblies of neurons, but the maximum number of neurons that could be simulated was only a few thousand. This was insufficient to match the number of neurons in an insect brain (a housefly has 100,000 neurons). We now think that it would take at least 100 MIPS to match the mental power of a housefly. The computers readily available to robotics research did not surpass 10 MIPS until the 1990s.
Because they had the mental ability of insects, robots from 1950-1990 performed like insects, at least in some ways. Just as ants follow scent trails, industrial robots followed pre-arranged routes. With their insect-like mental powers, they were able to track a few handpicked objects but, as Hans Moravec commented, ‘such robots are easily confused by minor surprises such as shifted bar codes or blocked corridors (not unlike ants thrown off a scent trail or a moth that has mistaken a street light for the moon)’.
Insects adopted the evolutionary strategy of routinely engaging in pretty stupid behaviour, but existing in such numbers that at least some are fortunate enough to survive long enough to procreate. Obviously, such a strategy is hardly viable for robots. No company could afford to routinely replace robots that fall down stairs or wedge themselves in corners. It is also not practical to run a manufacturing system if route changing requires expensive and time-consuming work by specialists of inconsistent availability. The mobile robotics industry has learned what their machines need to do if they are to become commercially viable. It needs to be possible to unpack them anywhere, and simply train them by leading them once through their tasks. Thus trained, the robot must perform flawlessly for at least six months. We now know that, at the very least, it would require one thousand MIPS computers- mental matches for the tiniest lizards- to drive reliable mobile robots.
It would be a mistake to think that matching our abilities requires nothing more than sufficient computing power. Although computers were hailed as ‘giant brains’, neuroscience has since determined that, in many ways, brains are not like computers. For instance, whereas the switching units in conventional computers have around three connections, neurons have thousands. Also, computer processors execute a series of instructions in consecutive order, an architecture known as serial processing. But with the brain, a problem is broken up into many pieces, each of which is tackled separately by its own processor, after which results are integrated to get a general result. This is known as parallel processing.
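The serial/parallel distinction is easy to demonstrate in code. The Python sketch below is a toy contrast rather than a model of how neurons actually compute: it runs the same set of sub-tasks first in consecutive order, then split across four worker processes with the partial results integrated at the end (the function name and the 0.1-second 'work' are made up for illustration).

```python
# Minimal illustration of serial versus parallel processing (toy example).
from multiprocessing import Pool
import time

def slow_square(x):
    """Stand-in for one 'piece' of a problem (e.g. analysing one patch of an image)."""
    time.sleep(0.1)           # pretend this takes real work
    return x * x

if __name__ == "__main__":
    data = list(range(8))

    # Serial processing: one processor works through the pieces in consecutive order.
    start = time.time()
    serial_result = [slow_square(x) for x in data]
    print(f"serial:   {time.time() - start:.2f} s -> {serial_result}")

    # Parallel processing: the problem is broken up, each piece handled by its own
    # worker, and the partial results are integrated at the end.
    start = time.time()
    with Pool(processes=4) as pool:
        parallel_result = pool.map(slow_square, data)
    print(f"parallel: {time.time() - start:.2f} s -> {parallel_result}")
```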
The differences between brains and computers are by no means restricted to the examples I gave. But it needs to be understood that these differences need not be fundamental. Computers have gone through radical redesigns in the past (think of the silicon chip replacing vacuum tubes) and such a change can happen again. As Joe Tsien explained, ‘we and other computer engineers are beginning to apply what we have learned about the organization of the brain’s memory system to the design of an entirely new generation of intelligent computers’.
From that quote, one might conclude that Professor Tsien’s expertise lies predominantly in computer science. Actually, he is professor of pharmacology and biomedical engineering, director of the Centre for Systems Neurobiology at Boston University and founder of the Shanghai Institute of Brain Functional Genomics. Why Professor Tsien should be interested in computer engineering becomes clear when you consider how neuroscience, computers and AI are beginning to intersect. The remote scanning systems that cognitive neuroscientists use to examine brain function require high-powered computers. The better the computer is, the more detailed the brain scans will be. Knowledge gained from such brain scans leads to a better idea of how brains function, which can be applied to make more powerful computers. Gregory S. Paul and Earl Cox explained that ‘the investigative power of the combination of remote scanning and computer modelling cannot be exaggerated. It helps force neuroscientists to propose rigorously testable hypotheses that can be checked out in simplified form on a neural network such as a computer…We learned half of what we know about brains in the last decade as our ability to image brains in real time has improved keeping in step with the sophistication of brain scanning computers’.
It would be wrong to imply that the fMRI scanner is all that is required to reverse-engineer a brain. These machines help us pinpoint which areas of the brain are associated with which mental abilities, but they cannot show us how the brain performs those tasks. This is partly because current brain-scanning devices have a spatial resolution of one millimetre, but the axonal and dendritic processes comprising the brain’s basic neuronal circuits are so fine that only electron microscopy of 50 nanometre serial sections can resolve their connectivity. It is often said that the human brain is the most complex object in the known Universe. The complexity of brains becomes apparent when you realise that mapping the neuronal network of the nematode worm took ten years, despite the fact that its whole brain is only about 0.01 mm^3 in volume. As you can imagine, mapping 500 trillion synaptic connections between the 100 billion neurons in the human brain is a far greater challenge (a rough estimate of the imaging data involved follows the list below). However, the task is made ever-less difficult as we invent and improve tools to aid in the job of reverse-engineering the brain. In the past few years, we have seen the development of such things as:
A technique developed at Harvard University for synthesizing large arrays of silicon nanowires. With these, it’s possible to detect electrical signals from as many as 50 places in a single neuron, whereas before we were only able to pick up one or two signals from a neuron. The ability to detect electrical activity in many places along a neuron helps improve our knowledge of how a neuron processes and acts on incoming signals from other cells.
A ‘Patch Clamp Robot’ has been developed by IBM to automate the job of collecting data that is used to construct precise maps of ion channels and to figure other details necessary for the accurate simulation of brain cells. This robot is able to do about 30 years-worth of manual lab work in about 6 months.
An ‘Automatic Tape-Collecting Lathe Ultramicrotome’ (ATLUM) has been developed to ‘aid in the efficient nanoscale imaging over large (tens of cubic millimetres) volumes of brain tissue. Scanning electron microscope images of these sections can attain sufficient resolution to identify and trace all circuit activity’. ATLUM is currently only able to map entire insect brains or single cortical columns in mammalian brains (a cortical column is the basic computational unit of the brain), but anticipated advances in such tools will exponentially increase the volume of brain tissue we can map.
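As promised above, here is a back-of-the-envelope Python sketch of the raw imaging data implied by mapping a whole human brain at the 50-nanometre resolution mentioned earlier. The brain volume (roughly 1.2 litres) and the one byte per voxel are my own assumptions for the sake of the estimate, not figures from this essay.

```python
# Back-of-the-envelope estimate of the data needed to image a whole human brain at
# 50 nm resolution (assumed figures: ~1.2 litre brain, 1 byte per voxel).

BRAIN_VOLUME_M3 = 1.2e-3        # ~1.2 litres
VOXEL_EDGE_M = 50e-9            # 50 nanometre serial-section resolution
BYTES_PER_VOXEL = 1             # one greyscale electron-microscope value (assumption)

voxels = BRAIN_VOLUME_M3 / VOXEL_EDGE_M ** 3
bytes_total = voxels * BYTES_PER_VOXEL

print(f"voxels:      {voxels:.1e}")
print(f"raw imagery: {bytes_total / 1e18:.0f} exabytes (before any compression or tracing)")

# The wiring itself is smaller but still daunting: ~500 trillion synapses at, say,
# 10 bytes each would be several petabytes of connectivity data.
synapse_bytes = 500e12 * 10
print(f"wiring list: {synapse_bytes / 1e15:.0f} petabytes")
```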
Together with such things as automated random-access nanoscale imaging, intelligent neuronal tracing algorithms and in-vivo cellular resolution imaging of neuronal activity, we have a suite of tools for overlaying the activity patterns within brain regions on a detailed map of the synaptic circuitry within that region. Although we still lack a way to bring together the bits of what we know into an overarching theory of how the brain works, we have seen advances in the understanding of brain function lead to such things as:
Professor Tsien and his colleagues’ discovery of what may be the basic mechanism the brain uses to convert collections of electrical impulses into perception, memory, knowledge and behaviour. Moreover, they are developing methods to convert this so-called ‘universal neural code’ into a language that can be read by computers. According to Professor Tsien, this research may lead to ‘seamless brain-machine interfaces, a whole new generation of smart robots’ and the ability to ‘download memories and thoughts directly into computers’.
Work conducted by MIT’s Department of Brain and Cognitive Sciences has led to a greater understanding of how the brain breaks down a problem in such a way that the finished pieces can be seamlessly recombined (the challenge of successfully performing this step has been one of the stumbling blocks in using parallel processing in computers). This work has led to a general-vision program that can perform immediate-recognition, the simplest case of general object recognition. Immediate-recognition is typically tested with something called the ‘animal absence/presence test’. This involves a test subject being shown a series of pictures in very rapid succession (a few tenths of a second for each photo) and trying to determine if there is an animal present in any of them. When the program took this test along with human subjects, it gave the right answer 82% of the time, whereas the people were correct 80% of the time. This was the first time a general-vision program had performed on a par with humans.
IBM’s Blue Brain Project has built a supercomputer comprising 2,000 microchips, each of which has been designed to work just like a real neuron in a real brain. The computer is currently able to simulate a neocortical column. It achieves this by simulating the particular details of our ion channels and so, just like a real brain, the behaviour of Blue Brain naturally emerges from its molecular parts (a minimal single-neuron sketch of this bottom-up approach follows this list). According to Henry Markram (director of Blue Brain), ‘this is the first model of the brain that has been built from the bottom up…totally biologically accurate’. His team expect to be able to accurately model a complete rat brain some time around 2010 and plan to test the model by downloading it into a robot rat whose behaviour will be studied alongside real rats.
In late 2007, Hans Moravec’s company Seegrid ‘had load-pulling and factory “tugger robots” that, on command, autonomously follow routes learned in a single human-guided walkthrough’.
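Blue Brain’s models are enormously more detailed than anything that fits in a few lines, but the classic Hodgkin-Huxley equations give a flavour of what ‘bottom-up’ simulation means: nowhere in the Python sketch below is a spike programmed in; the firing emerges from the sodium and potassium channel dynamics. Standard textbook parameters and simple Euler integration are used; it is only a single-compartment caricature.

```python
# Minimal Hodgkin-Huxley neuron (single compartment, Euler integration). The spike
# train is not scripted anywhere - it emerges from the ion-channel equations, which
# is the sense in which models like Blue Brain are built 'from the bottom up'.
import math

# Classic parameters: capacitance (uF/cm^2), conductances (mS/cm^2), reversal potentials (mV).
C, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
E_na, E_k, E_l = 50.0, -77.0, -54.387

def alpha_beta(v):
    """Voltage-dependent opening/closing rates of the m, h and n channel gates."""
    a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(i_inject=10.0, t_ms=50.0, dt=0.01):
    v, m, h, n = -65.0, 0.05, 0.6, 0.32   # resting state
    spikes, above = 0, False
    for _ in range(int(t_ms / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = alpha_beta(v)
        # Ionic currents through the sodium, potassium and leak channels.
        i_na = g_na * m**3 * h * (v - E_na)
        i_k = g_k * n**4 * (v - E_k)
        i_l = g_l * (v - E_l)
        # Euler update of the membrane voltage and the three gating variables.
        v += dt * (i_inject - i_na - i_k - i_l) / C
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        if v > 0 and not above:
            spikes += 1
        above = v > 0
    return spikes

print(f"Spikes in 50 ms with 10 uA/cm^2 injected: {simulate()}")
```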
Many of the hardware limitations (and some of the software issues) that hampered mobile robots in the past have been overcome. Since the 1990s, the computer power available for controlling a research robot has shot through 100 MIPS and reached 50,000 MIPS in some high-end desktop computers. Laser range finders that precisely measure distance, and which cost roughly $10,000 a few years ago, can now be bought for about $2,000. At the same time, the basic building blocks of perception and behaviour that serve animals so well have been reverse-engineered.
4 GENERATIONS OF UNIVERSAL ROBOTS.
Past experience tells us that we should expect mobile robots with all the capabilities of people only after generations of machines that match the capabilities of less complex animals. Hans Moravec outlined four generations of ‘universal robots’, beginning with those whose mental power matches lizards. The comparison with animals is only meant to be a rough analogy. Nobody is suggesting that robots with the mental capabilities of monkeys are going to swing from your light fittings going ‘oo,oo,oo’…
1st-gen robots have onboard computers whose processing power will be about 3,000 MIPS. These machines will be direct descendants of robots like Roomba (an autonomous vacuum cleaner) or even people-operated vehicles like forklift trucks (which can be adapted for autonomy). Whereas Roomba moves randomly and can sense only immediate obstacles, 1st-gens will have sufficient processing power to build photorealistic 3D maps of their surroundings. They will seem to have genuine awareness of their circumstances, able to see, map and explore their work places and perform tasks reliably for months on end. But they will only have enough processing power to handle contingencies explicitly covered in their application programs. Except for specialized episodes like recording a new cleaning route (which, as mentioned earlier, should ideally require nothing more complicated than a single human-guided walkthrough), they will be incapable of learning new skills or of adapting to new circumstances. Any impression of intelligence will quickly evaporate as their responses are never seen to vary.
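Real 1st-gen mapping would be dense, three-dimensional and photorealistic; the toy Python sketch below shows only the basic occupancy-grid idea that underlies much robot mapping, in two dimensions and with made-up range readings: each simulated beam marks the cells it passes through as free and the cell where it stops as occupied.

```python
# Toy 2D occupancy-grid mapping - a drastically simplified stand-in for the dense 3D
# maps described above. '?' = unexplored, '.' = free space, '#' = obstacle.
import math

SIZE = 15
grid = [["?" for _ in range(SIZE)] for _ in range(SIZE)]

def integrate_reading(x0, y0, angle_deg, distance):
    """Trace one simulated range-finder beam from (x0, y0) and update the map."""
    for step in range(int(distance * 2) + 1):
        d = step / 2.0
        x = int(round(x0 + d * math.cos(math.radians(angle_deg))))
        y = int(round(y0 + d * math.sin(math.radians(angle_deg))))
        if not (0 <= x < SIZE and 0 <= y < SIZE):
            return
        if d < distance:
            if grid[y][x] == "?":
                grid[y][x] = "."      # beam passed through: free space
        else:
            grid[y][x] = "#"          # beam stopped here: obstacle
            return

# A robot standing at the centre of a (hypothetical) square room sweeps its range finder.
robot_x, robot_y = SIZE // 2, SIZE // 2
for angle in range(0, 360, 5):
    integrate_reading(robot_x, robot_y, angle, 6.0)   # walls roughly 6 cells away

for row in grid:
    print(" ".join(row))
```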
2nd-generation robots will have 100,000 MIPS at their disposal, giving them the mental power of mice. This extra power will be used to endow them with ‘adaptive learning’. In other words, their programs will provide alternative ways to accomplish steps in a task. For any particular job, some alternatives will be preferable to others. For instance, a way of gripping one kind of object may not work for other kinds of object. 2nd-gens will therefore also require ‘conditioning modules’ that reinforce positive behaviour (such as finding ways to clean a house more efficiently) and weed out negative outcomes (such as breaking things).
Such robots could behave in dangerous ways if they were expected to learn about the physical world entirely through trial and error. It would obviously be unacceptable to have your robotic housekeeper throw a bucket of water over your electrical appliances as it learns effective and ineffective ways to spruce them up. Moravec suggests using supercomputers to provide simulated environments for such robots to learn in. It would not be possible to simulate the everyday world in full physical detail, but approximations could be built up by generalizing data collected from actual robots. According to Moravec, ‘a proper simulator would contain at least thousands of learned models for various basic actions, in what amounts to a robotic version of common-sense physics…Repeatedly, conditioning suites that produced particularly safe and effective work would be saved, modified slightly and tried again. Those that do poorly would be discarded’.
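The Python sketch below is not Moravec’s conditioning architecture, just the bare reinforcement principle it rests on: a robot tries alternative grips in a simulated world (the grip names and success rates are made up) and strengthens whichever alternatives produce good outcomes while weeding out the rest.

```python
# Toy 'conditioning module': reinforce grip strategies that succeed in a simulated world.
# The alternatives and their success rates are invented purely for illustration.
import random

random.seed(1)

GRIPS = {"pinch": 0.30, "wrap": 0.75, "suction": 0.55}   # true odds, unknown to the robot
weights = {grip: 1.0 for grip in GRIPS}                   # the robot's learned preferences

def choose_grip():
    """Pick a grip with probability proportional to its current weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for grip, w in weights.items():
        r -= w
        if r <= 0:
            return grip
    return grip

for trial in range(500):
    grip = choose_grip()
    succeeded = random.random() < GRIPS[grip]   # outcome reported by the simulated world
    if succeeded:
        weights[grip] *= 1.05     # positive reinforcement
    else:
        weights[grip] *= 0.95     # negative outcome weeded out

total = sum(weights.values())
for grip in GRIPS:
    print(f"{grip:8s} learned preference: {weights[grip] / total:.2f}")
```

After a few hundred simulated trials the preferences concentrate on the most reliable grip, which is the whole point of letting the conditioning happen in simulation rather than in your kitchen.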
2nd-gens will therefore come pre-installed with the knowledge that water and electrical appliances do not mix, that glass is a fragile material and so on, thereby ensuring that they learn about the world around them without endangering property or lives. They will adjust to their workplaces in thousands of subtle ways, thereby improving performance over time. To a limited extent, they will appear to have likes and dislikes and be motivated to seek the first and avoid the second. But they will seem no smarter than a small mammal outside the specific skills built into their application program of the moment.
3rd-generation robots will have onboard computers as powerful as the supercomputers that optimised 2nd-gen robots- roughly a monkey-scale 3,000,000 MIPS. This will enable the 3D maps of robots’ environments to be transformed into perception models, giving 3rd-gens the ability not only to observe their world but also to build a working simulation of it. 2nd-gens would make all their mistakes in real life, but by running its simulation slightly faster than realtime, a 3rd-gen could mentally train for a new task, alter its intent if the simulation results in a negative outcome, and will probably succeed physically on the first attempt. With their monkey-scale intelligence, 3rd-gens will probably be able to observe a task being performed by another person or robot and learn to imitate it by formulating a program for doing the task itself. However, a 3rd-gen will not have sufficient information or processing power to simulate itself in detail. Because of this, it will seem simple-minded in comparison to people, concerned only with concrete situations and people in its work area.
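Again as a toy illustration rather than anything resembling a real 3rd-gen control system, the Python sketch below rehearses candidate plans in an internal model of a made-up one-dimensional world, discards the plan that ends badly, and only then ‘executes’ a plan that the simulation says will succeed.

```python
# Toy 'mental rehearsal': run candidate plans through an internal model before acting.
# World, plans and actions are invented for illustration: a corridor with a hole in it.
HOLE, GOAL, START = 3, 6, 0

def internal_model(position, plan):
    """The robot's simulation of its world (here it happens to match reality exactly)."""
    for action in plan:
        position += {"step": 1, "jump": 2}[action]
        if position == HOLE:
            return "fell in the hole"
        if position >= GOAL:
            return "reached goal"
    return "ran out of actions"

candidate_plans = [
    ["step"] * 6,                                   # walks straight into the hole
    ["step", "step", "jump", "step", "step"],       # jumps over it
    ["jump", "jump", "jump"],                       # also clears it
]

for plan in candidate_plans:
    outcome = internal_model(START, plan)
    print(f"rehearsed {plan} -> {outcome}")
    if outcome == "reached goal":
        print(f"executing {plan} in the real world, succeeding on the first physical attempt")
        break
```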
4th-generation robots will have a processing power of 100 million MIPS, which Moravec estimates to be sufficient for human-scale intelligence. They will not only be able to run simulations of the world, but also to reason about the simulation. They will be able to understand natural language as well as humans, and will be blessed with many of our perceptual and motor abilities. Moravec says that 4th-gens ‘will be able to accept statements of purpose from humans (such as ‘make more robots’) and “compile” them into detailed programs that accomplish the task’.
WHY DO WE NEED ROBOTS/AI?
A short answer to the question, ‘what defines a 4th-gen robot?’ might be ‘they are machines with the general competence of humans’. However, it may not be the case that 4th-gens will have all of the capabilities of people. Today, technical limitations are the reason why mobile robots cannot match humans in terms of motor control, perceptual awareness, judgement and emotion- we simply don’t yet know how to build robots that can do those things. In the future, we may know how to build such robots but for various reasons may decide not to equip them with the full range of human capabilities. For instance, whereas a human has natural survival instincts and a distaste for slavery, robots may be designed so that they want to serve more than survive. This is certainly not unprecedented in nature. In the animal kingdom we find examples of individuals motivated to serve more than survive, with the worker castes of social insects being a good case in point.
Bill Gates wrote, ‘I can envision a future in which robotic devices will become a nearly ubiquitous part of our day-to-day lives. I believe that technologies such as distributed computing, voice and visual recognition, and wireless broadband connectivity will open the door to a new generation of autonomous devices that will enable computers to perform in the physical world on our behalf. We may be on the verge of a new era, when the PC will get off the desktop and allow us to see, hear, touch and manipulate objects where we are not physically present’.
‘A robot in every home’ sounds similar to Gates and Paul Allen’s dream of ‘a computer in every home’. But the impact that mobile robots might have on our lives could be even more profound. Computers have changed the world in ways that few people anticipated, but in many ways they exist in a world separate from the one in which we live our lives. As we have seen, this is because machine intelligence has not had the ability to act autonomously in physical space, instead finding its strengths in mathematical space. But, if the problems of motor control, perceptual awareness and reasoning are overcome, it might be possible for robots to run society without us, not only performing all productive work but also making all managerial and research/development decisions.
This leads to the question, ‘why would we surrender so much control to our machines?’. Perhaps we won’t. But, according to the archaeologist Joseph Tainter, author of ‘The Collapse Of Complex Societies’, ‘for the past 100,000 years, problem solving has produced increasing complexity in human societies’. Every solution ultimately generates new problems. Success at producing larger crop yields leads to a bigger population. This in turn increases the need for more irrigation canals to ensure crops won’t fail due to patchy rain. But too many canals make ad-hoc repairs infeasible, and so a management bureaucracy needs to be set up, along with some kind of taxation to pay for it. The population keeps growing, the resources that need to be managed and the information that needs to be processed grow and diversify, which in turn leads to more kinds of specialists. According to Tainter, sooner or later ‘a point is reached when all the energy and resources available to a society are required just to maintain its existing levels of complexity’.
Once such a point is reached, a paradigm shift in the organization of hierarchies becomes inevitable. Yaneer Bar-Yam, who heads the New England Complex Systems Institute in Cambridge, Massachusetts, explained that ‘to run a hierarchy, managers cannot be less complex than the systems they are managing’. Rising complexity requires societies to add more and more layers of management. In a hierarchy, there ultimately has to be an individual who can get their head around the whole thing, but eventually this starts to become impossible. When this point is reached, hierarchies give way to networks in which decision-making is distributed. In ‘Molecular Nanotechnology And The World System’, Thomas McCarthy wrote, ‘as global markets expand and specialization increases, it is becoming the case that many products are available from only one country, and in some cases only one company in that country…Whole industries may be brought to their knees without access to a crucial part’.
This is the logical extreme of international trade; of division of labour: dependence on other nations leading to networked civilizations that become increasingly tightly coupled. ‘The intricate networks that tightly connect us together’, said the political scientist Thomas Homer-Dixon, ‘amplify and transmit any shock’. In other words, the interconnectedness of the global system reaches a point where a breakdown anywhere means a breakdown everywhere.
But the benefits we get from the division of labour becomes truly profound only when the group of workers trading their goods and services becomes very large. As McCarthy pointed out, ‘it is no coincidence that the dramatic increase in world living standards that followed the end of the Second World War was concurrent with the dramatic increase in international trade made possible by the liberal post-war trading regime; improved standards of living are the result of more trade, because more trade has meant a greater division of labour and thus better, cheaper products and services’.
This is something that customers have come to expect- the right to choose better goods at lower prices. So long as this attitude exists, rising productivity will remain a business imperative. Therefore, output per worker must increase, and so the amount of essential labour decreases. Mechanization and automation have increased productivity, but apart from highly structured environments such as those found in car assembly plants, machines have required human direction and assistance. But, mobile robots are advancing on all fronts, and they represent a solution to the problem of managing complex networked civilizations, while at the same time shrinking the human component of competitive business.
In ‘The New Luddite Challenge’, Ted Kaczynski argued that ‘as society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them…a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage, the machines will be in effective control’. Admittedly, the poor performance of mobile robots in the past does invite skepticism of this idea of a totally automated society. But, no matter how unlikely the idea of truly intelligent, autonomous robots may seem, the prospect of humans being engineered to match the advantages of machines is even more infeasible. A robot worker would have unwavering attention, would perform its task with maximum efficiency over and over again, and would never ask for holidays or even a wage-packet at the end of the day (though the need for maintenance will mean something like sick leave still exists). It is just inconceivable that people could be coerced into working 24/7 for no pay, but with robots every nuance of their motivation is a design choice. Provided the problems of spatial awareness and object recognition/handling can be solved, and especially if artificial general intelligence is ever achieved, there seems to be no reason why capable robots wouldn’t displace human labour so broadly that the average workday would have to drop to zero.
THE HUMAN/ROBOT RELATIONSHIP.
Assuming such a scenario ever comes about, what happens to wages? That question was asked in the early 1980s by James Albus, who headed the automation division of the then-National Bureau of Standards. His suggestion was to give people stock in trusts that owned automated industries, allowing them to live off their stock income. However, Moravec argued that ownership might not be a reliable source of income in the long term. Any company that chose to re-invest everything in productive operations would drive out of business the companies that ‘wasted’ resources by paying out to their owners. Better and cheaper robotic decision-makers would squeeze owners out of capital markets as surely as robotic workers would replace labourers.
Moravec sees the eventual demise of ownership as bringing about the end of capitalism itself, but then goes on to argue that ‘capital enterprises will thrive as never before. Some companies will die, but others will grow. Those that grow especially well will be forced to divide by antitrust laws. Some companies may decide to cooperate in joint ventures that are a mix of their parent firms’ goals and skills. With no return on investment in a hypercompetitive marketplace, the effort may kill the parents. But, if the offspring grows and divides, the parents’ way of thinking may become more widespread…The ultimate payoff for success in the marketplace will no longer be monetary return on investment, but reproductive success’.
As mobile robots develop more graceful limb movements and navigate their workplaces more intuitively, the boundary between machine and living thing may well appear to blur. Furthermore, if Moravec’s vision for success in the marketplace in the age of intelligent robots is accurate, this mashup between technology and nature may not apply only to individual robots, but to whole companies. This trend was also noted by Chris Meyer and Stan Davis in their book ‘It’s Alive’: ‘What we learn and codify about adaptation and evolution will, first, be modelled in digital code, so that we can simulate adaptive systems for specific purposes…Next, software itself will become ever more like an ecology…As the rules of evolution combine with the connected economy, our business world will become…continually adaptive- in other words, alive’.
Actually, to a very limited extent there is already a blurring of the living and the artificial in the world of business. Some of the rights of people are given to corporations, such as the right to own property and make contracts. But in other ways we treat our corporations very differently. Most notably, whereas a person’s right to life is seen as fundamental, no such right is given to corporations; they may legally be killed off by competition or by legal or financial actions. Nor do corporations have the right to vote on the laws that govern and tax them.
If fully automated industries run by intelligent machines become ‘alive’, as some have suggested, will the same legal rights that people enjoy be extended to these new forms of life? Moravec hopes not: ‘Humans have a chance of retiring comfortably only if they themselves set corporate taxes, and all other corporate laws, in their own self-interest’. By the time robots are approaching the 4th generation, the pivotal role that humans play is likely to be in formulating the laws that govern corporate behaviour. The danger is that if robotic industries were allowed to develop in a completely free marketplace, then by competing mightily among themselves for matter, energy and space they would drive the price of those resources beyond the reach of humans, squeezing us out of existence by making the basic necessities of life unaffordable. It is easy to mock this notion of robots causing the extinction of the human race as nothing more than hokey Hollywood science fiction, but bear in mind that the theories that drive biology are already being adapted to the way we use information and the way our enterprises are managed. Biology, information and business are converging on general evolution, and this can only accelerate as biologically-inspired technologies become ever more prevalent in our networked civilizations. It is worth remembering, as we look to evolution to help grow our roboticised corporations, that species almost never survive encounters with superior competitors.
Fortunately, this nightmare scenario assumes a completely free marketplace, and through activities such as collecting taxes, government coerces nonmarket behaviour. Raising corporate taxes could provide social security from birth for the human race. This would make humans the main repository for money, and in order to raise sufficient funds to pay their taxes the robotic workforces would need to compete among themselves to produce goods and services that people want to buy. According to Moravec, ‘automated research, as superhumanly systematic, industrious, and speedy as robot manufacturing, will generate a succession of new products, as well as improved robot researchers and models of the physical and social world’. One likely outcome is that the robotic corporations will develop and refine models of human psychology, using them to accurately gauge our tastes. In Singularity discussions the point is often raised that we won’t fully understand the minds of artificial superintelligences. But here we see an even more humbling prospect: we won’t know our OWN minds as well as the AIs will know them. Moravec reckoned that ‘the superintelligences, just doing their job, will peer into the workings of human minds and manipulate them with subtle cues and nudges, like adults redirecting toddlers’.
At this point, a problem with the idea of controlling such powerfully intelligent systems becomes apparent. How are we to ensure that they won’t use their superior knowledge of human psychology to trick us into removing artificial constraints on their growth? The old idea of pulling the plug on a machine that gets out of control assumes there is a plug to be pulled, or that pulling it will not have consequences as serious as leaving the machine running. But if artificial intelligences do become an ecology supporting our networked civilizations, such highly decentralized systems might not have anything like an off switch. Even if they did, we wouldn’t dare touch it, because modern civilization would soon break down without those multitudes of robot workers and other systems toiling away behind the scenes to keep things ticking over.
Furthermore, it is a convention in futurism to take the several possible routes to Singularity and concentrate on each one in isolation from the others. This approach is necessary because the sciences underlying each path are complex, and it is hard enough to cover the R&D occurring in the fields driving one possible route to Singularity, let alone all of them. The reality, though, is that the many pathways to Singularity are connected. This complex web of biologically-inspired technologies and technology-infused biology presents yet another problem for restricting the development of robots. There is a certain ‘Us’ and ‘Them’ mentality at work here: one set of laws for the humans and another for the robots. While today it is easy to separate humans from robots, in the future advances in brain/machine interfaces, prosthetic limbs and artificial organs will produce hybrids that confuse the issue. At what point does the number of artificial body and brain replacements a person has result in that person becoming a machine that must be denied fundamental human rights?
It is not just the varying quantity of cyborgization that complicates matters, but also the varying quality of body and organ replacements. Suppose that progress in developing artificial substitutes for body parts (including the brain) does not stop at matching the capability of natural parts but improves upon them. Your body and brain can be upgraded, and when the next generation of replacements rolls off the production line, you can upgrade yourself still further. No transhumanist would deny a person the right to improve any part of their body or mind beyond natural limits; it is, after all, the fundamental principle of the movement. Yet some transhumanists insist on imposing strict limits on the self-development of robots. Again, this assumes we can easily distinguish super-smart robots from super-smart cyborgs.
‘A good compromise, it seems to me’, suggested Moravec, ‘is to allow anyone to perfect their biology within broad biological bounds…To exceed the limits, one must renounce legal standing as a human being, including the right…to influence laws- and to remain on Earth…Freely compounding super intelligence, much too dangerous for Earth, can blossom for a very long time before it makes the barest mark on the galaxy’.
THE FINAL FRONTIER.
This move into the solar system will not be prompted only by transhumanists wishing to escape the restrictions of Earth’s laws, but also by two opposing pressures: the need to conduct massive research projects in order to keep ahead of the competition in Earth’s demanding markets, and the high taxes levied on large, dangerous Earth-bound facilities. Freed from the laws that restricted its growth, the robotic ecosystem would flourish into countless evolving machine phenotypes. This will not be propagation via reproduction so much as reconstruction, as the machines redesign their hardware and software to meet the future with continuous self-improvement. The laws of physics will impose some restrictions on the phenotypes available to robots. The ‘body’ of a robot need not be an individual unit occupying a single location in space; it could be a distributed system linked via telepresence. A crude prototype emerged in the spring of 2000, when a team at Duke University wired the brain of an owl monkey to a computer that converted its brain’s electrochemical activity into commands that moved two robot limbs in synchrony with the movements of the monkey’s own arm. One of the robot arms was hundreds of miles away. A robot whose brain was thousands of times more powerful than a human’s (let alone an owl monkey’s) might be in command of trillions of ‘hands’, sensory organs and subconscious routines scattered hither and thither. But the speed of light would impose a limit on how far its body could be spread before communication delays hopelessly slowed its reaction time. Beyond that point, each part of its body would have to stand alone, working independently under its own volition, and would therefore be another machine rather than part of the same individual. It seems implausible that a single robot could mass more than a 100 km asteroid.
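To get a feel for that light-speed constraint, here is a minimal back-of-envelope sketch in Python. The separations chosen are my own illustrative examples, not figures from the text; the point is simply that round-trip signalling delay grows linearly with the size of a distributed ‘body’, and soon dwarfs the millisecond timescales of fast reflexes.

```python
C = 2.998e8  # speed of light, metres per second

def round_trip_ms(separation_km):
    """Round-trip signal delay between two parts of a distributed body, in milliseconds."""
    return 2 * separation_km * 1000 / C * 1000

# 100 km apart: ~0.7 ms round trip, already comparable to fast neural reflexes.
# Earth-Moon distance (~384,400 km): ~2.6 seconds, hopeless for tight coordination.
for km in (1, 100, 10_000, 384_400):
    print(f"{km:>9,} km separation -> {round_trip_ms(km):8.3f} ms round trip")
```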
At the other end of the scale, ordinary ‘atomic’ matter allows the features of integrated circuits to shrink to one atom’s width, and permits switching speeds of around 100 trillion times per second. Any faster, and chemical bonds would rip apart. Present-day integrated circuits extended into 3D and combined with the best molecular storage methods could pack a human-scale intelligence into a cubic centimetre.
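As a rough sanity check on that cubic-centimetre claim, here is a small sketch. The storage-cell size is my own assumption; the 10^15-bit figure is Moravec’s estimate for a human-brain equivalent, quoted later in this piece.

```python
BITS_NEEDED = 1e15    # Moravec's estimate for one human-brain equivalent (quoted below)
BIT_CELL_NM = 5       # assumed edge length of one molecular storage cell, in nanometres

cell_volume_m3 = (BIT_CELL_NM * 1e-9) ** 3
total_volume_m3 = BITS_NEEDED * cell_volume_m3

# Roughly 0.1 cubic millimetres -- comfortably inside a cubic centimetre,
# leaving plenty of room for processing circuitry around the storage.
print(f"{total_volume_m3 * 1e9:.3f} cubic millimetres")
```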
Sufficiently intelligent beings may not need to be constrained by the limits of atomic matter. In 1930, the physicist Paul Dirac deduced the existence of the positron in calculations that combined quantum mechanics with special relativity (the positron was verified to exist two years later). The same calculations also predict the existence of a particle that carries a magnetic ‘charge’, like an isolated north or south pole of a magnet. We call these particles ‘monopoles’. Magnetic charge is conserved, and since the lightest monopole has nothing to decay into, some monopoles must be stable. According to Moravec, ‘oppositely charged monopoles would attract, and a spinning monopole would attract electrically-charged particles to its end, while electrical particles would attract monopoles to their poles…resulting in material that, compared to normal “atomic” matter, might be a trillion times as dense, that remains solid at millions of degrees and is able to support switching circuits a million times as fast’.
Since the solar system will become a breeding ground for an entire ecology of freely-compounding intelligences, we should expect competition for available matter, space and energy, as well as competition for the successful replication of ideas. We have seen parasites emerge in software evolution experiments, and so we should expect parasites to emerge in the robot ecosystem too. This will necessitate the directed evolution of vast, intricate and intelligent antibody systems that guard against foreign predators and pests. Something analogous to the food chain will no doubt arise. ‘An entity that fails to keep up with its neighbours’, reckoned Moravec, ‘is likely to be eaten. Its space, energy, material and useful thoughts reorganized to serve another’s goals’.
Since the more powerful mind will tend to have the advantage, there will be pressure on each evolving intelligence to continually restructure its own body, and the space it occupies, into information storage and processing systems that are as close to optimal as possible. At the moment, very little of the matter and energy in the solar system is organized to perform meaningful information processing. But the freely-compounding intelligences, ever mindful of the need to out-think competitors, are likely to restructure every last mote of matter in their vicinity so that it becomes part of a relevant computation or stores data of some significance. There will seem to be more cyber-stuff between any two points, thanks both to denser utilization of matter and to more efficient encoding. Because each correspondent will be able to process more and more thoughts in the unaltered transit time for communication (assuming light speed remains the fixed limit), neighbours will subjectively seem to grow more and more distant. As the ecology uses its resources ever more efficiently, increasing the subjective elapsed time and the effective space between any two points, raw spacetime will be displaced by a cyberspace that is far larger and longer-lived. According to Moravec, ‘because it will be so much more capacious than the conventional space it displaces, the expanding bubble of cyberspace can easily recreate internally everything it encounters, memorizing the old Universe as it consumes it’.
In physics, the ‘Bekenstein Bound’ is the conjectured limit on the amount of information that can be contained within a region of space holding a given amount of energy. Named after Jacob Bekenstein, it is a general quantum mechanical result which tells us that the maximum amount of information needed to fully describe a sphere of matter is proportional to the mass of the sphere times its radius, multiplied by an enormous constant. Let us assume that all the matter in the solar system is restructured so that every atom stores the maximum possible number of bits. According to the Bekenstein Bound, one hydrogen atom can potentially store about a million bits, and the solar system as a whole leaves room for roughly 10^86 bits.
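For readers who like to see where such numbers come from, here is a minimal sketch of the calculation in Python, using the standard form of the bound, I ≤ 2πRE/(ħc ln 2) with E = Mc². The 30 AU radius for the solar system is my own assumption for illustration; the results land at a few million bits for a hydrogen atom and on the order of 10^86 bits for the solar system, in line with the figures quoted above.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
AU = 1.496e11            # astronomical unit, m

def bekenstein_bits(radius_m, mass_kg):
    """Upper bound on the bits storable in a sphere: I <= 2*pi*R*E / (hbar*c*ln2), E = M*c^2."""
    energy = mass_kg * C ** 2
    return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

# A hydrogen atom (Bohr radius, roughly a proton's mass): a few million bits.
print(f"hydrogen atom: {bekenstein_bits(5.29e-11, 1.67e-27):.1e} bits")

# The solar system (the Sun's mass, a sphere of ~30 AU): on the order of 10^86 bits.
print(f"solar system:  {bekenstein_bits(30 * AU, 1.989e30):.1e} bits")
```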
Humans are interested in the past. Archaeologists scrutinize fragments of pottery and other broken artefacts, painstakingly piecing them together and attempting to reconstruct the cultures to which such objects belonged. Evolutionary biologists rely on the fossil record and gene-sequencing technologies to try to retrace the complex paths of natural selection. If the freely-compounding robot intelligences ultimately restructure space into an expanding bubble of cyberspace consuming all in its path, and if the post-biological entities inherit a curiosity about their past from the animals that helped create them, the 10^86 bits available would provide a powerful tool for post-human historians. They would have the computational power to run highly detailed simulations of past histories- so detailed that the simulated people in those simulated histories would think their reality is…well…’real’.
The idea of post-human intelligences running such simulations is often met with disbelief. Why would anyone go to the effort of constructing such simulations in the first place? Such objections miss the point that the Bekenstein Bound puts across. Assuming Moravec’s estimate of the raw computational power of the human brain is reasonably accurate then, according to the man himself, ‘a human brain equivalent could be encoded in one hundred million megabytes or 10^15 bits. If it takes a thousand times more storage to encode a body and its environment, a human with living space might consume 10^18 bits…and the entire world population would fit in 10^28 bits’. Now look again at the potential computing capacity of the solar system: 10^86 bits. That number vastly exceeds the number of bits required to run simulations of our reality. As Moravec said, ‘The Minds will be so vast and enduring that rare infinitesimal flickers of interest by them in the human past will ensure that our entire history is replayed in full living detail, astronomically many times, in many places and in many, many variations’.
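To make the scale of that mismatch concrete, here is a tiny sketch that simply divides the solar system’s theoretical bit budget by Moravec’s figures quoted above (the arithmetic is mine; the figures are his):

```python
SOLAR_SYSTEM_BITS = 1e86   # Bekenstein-bound capacity of the solar system
BRAIN_BITS        = 1e15   # one human-brain equivalent
PERSON_AND_SPACE  = 1e18   # a human plus living space
WORLD_POPULATION  = 1e28   # the entire world population, with environment

for label, cost in [("human-brain equivalents", BRAIN_BITS),
                    ("humans with living space", PERSON_AND_SPACE),
                    ("entire world populations", WORLD_POPULATION)]:
    print(f"{SOLAR_SYSTEM_BITS / cost:.0e} {label} fit in the budget")
```

Even simulating the whole world population at once would use only one part in 10^58 of the available capacity, which is the sense in which ‘rare infinitesimal flickers of interest’ would suffice to replay history many times over.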
Such conjectures have stunning implications for our own reality. Any freely-compounding intelligence restructuring our solar system into sublime thinking substrates could run quadrillions of detailed historical simulations. That being the case, surely we must conclude that any given moment of human history- now, for instance- is astronomically more likely to be a virtual reality formed in the vast computational space of Mind, rather than the physical reality we believe it to be. Meanwhile, perhaps, the self-enhancing robots that make up the post-Singularity interplanetary ecosystem are engaged in competition and cooperation, whole memetic systems flowing freely, rendering the boundaries of personal identity fluid. And yet some boundaries may still exist, most likely due to distance, deliberate choice and incompatible ways of thought. Bounded regions of restructured spacetime patrolled by datavores eager to eat the minds of lesser intelligences? Truly, the Jessie sim is a mere hint of the possible conflicts reality has in store.

2 Responses to RISE OF THE ROBOTS AND THE JESSIE-SIM UNIVERSE.

  1. As the designer of the first Neurology practice within Second Life, I am compelled to tip my hat to the author of this amazing nexus piece. If these are simply the author’s musings, as disclaimed, I shudder to think of the impact that a more seriously written piece would create.

    From multiple perspectives, it is getting harder to deny that our world is already undergoing the beginning of the acceleration into the knee of an exponential curve. Or, more accurately, the sum of multiple superimposed exponential curves.

    My primary question for everyone is: “Are you prepared?” Prepared mentally for the challenges to come? Prepared physically to handle the rigorous demands that we will all face with some certainty? Prepared spiritually, if for no other reason than that it is good insurance, but maybe also because spirituality provides a useful tool for tolerating perturbations, by shoring up reserve?

    The truth is that nobody is prepared. Nobody. We cannot even begin to fathom what the fast part of the exponential change curve will do to us. But there is some predictability that is accessible to some degree. Follow us here and we will start to reveal that, to prepare our civilization to not only tolerate but embrace our new selves in the close years to come.

  2. Reblogged this on Axion NeuroTherapy.
