RISE OF THE ROBOTS

Predicting the future is a tricky business, and there are two kinds of failure that really stand out in hindsight. The computer industry has fallen foul of both, in its time.

One form of failure is to drastically underestimate the rate of advancement and the usefulness in everyday life that a technology will have. In the 1940s, IBM chairman Thomas Watson took Grosch’s Law (named after fellow IBM employee Herbert Grosch, it states that computer power rises by the square of the price; that is, the more costly a computer, the better its price-performance ratio) to mean the total global market was ‘maybe five computers’. This, bear in mind, was back in the days when computers were room-filling behemoths based on vacuum tubes. The integrated circuit that forms the heart of all modern computers did not become commercially available until the 1960s. In 1965, Gordon Moore (then at Fairchild Semiconductor, later a co-founder of Intel) took the annual doubling of the number of transistors that could be fitted onto an integrated circuit and predicted, ‘by 1975, economics may dictate squeezing as many as 65,000 components onto a single silicon chip’.

The integrated circuit led to desktop personal computers. These inexpensive commodities were thousands of times more cost-effective than mainframes, and they dealt Grosch’s Law a decisive defeat. Today, ‘Moore’s Law’ and its prediction that a fixed price buys double the computing power in 18 months’ time has become something of an industry given and has defied every forecast of its demise. The naysayers first heralded the end of Moore’s Law in the mid-1970s, when integrated circuits held around 10,000 components and their finest details were around 3 micrometers in size. In order to advance much further, a great many problems needed to be overcome, and experienced engineers were worrying in print that they might be insurmountable. It was also in the 1970s that Digital Equipment Corporation’s president, Ken Olsen, claimed, ‘there’s no reason for individuals to have a computer in their home’.
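To put that doubling rule in concrete terms, here is a minimal sketch (my own illustration, not something from Moore or the industry literature) of how an 18-month doubling compounds: over 30 years, a fixed price ends up buying roughly a million times more computing power.

```python
# Minimal sketch of the growth implied by an 18-month doubling time,
# the reading of Moore's Law used in this essay.
def moores_law_factor(years: float, doubling_time_years: float = 1.5) -> float:
    """Return how much more computing power a fixed price buys after `years`."""
    return 2 ** (years / doubling_time_years)

if __name__ == "__main__":
    for years in (3, 15, 30):
        # 3 years -> ~4x, 15 years -> ~1,024x, 30 years -> ~1,048,576x
        print(f"after {years:2d} years: ~{moores_law_factor(years):,.0f}x per dollar")
```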

Obviously, such pessimism was unfounded. By 2004, the feature size of integrated circuit gates had shrunk to around 50 nanometers, and we now talk about billions of components rather than tens of thousands. Millions of PCs had been sold worldwide by 2002, even the Amish maintain a website, and we somehow went from a time in the 60s when nobody bar a few thousand scientists would have noticed if all the world’s computers stopped working, to a society that would grind to a halt if they did.

The other form of failure stands in direct contrast to the gaffe of drastically underestimating the growth of a technology: a technology that fails almost completely to live up to its promise. A particularly infamous example is robotics. The artificial intelligence movement was founded in 1950, and it was believed that within a decade or two, versatile, mobile, autonomous robots would have eliminated drudgery from our lives. By 1979, the state of the art in mobile robotics fell way short of the requisite capabilities. A robot built by Stanford University (known as ‘Cart’) took 5 hours to navigate its way through a 30-metre obstacle course, getting lost about one crossing in four. Robot control systems took hours to find and pick up a few blocks on a tabletop. Far from being competent enough to replace adults in manufacturing and service industries, robots were being far outperformed by toddlers in navigation, perception and object manipulation. Even by 2002, military-funded research on autonomous robot vehicles had produced only a few slow and clumsy prototypes.

Can we identify the reasons why experts came up with such wildly inaccurate predictions? In all likelihood, they were led astray by the natural abilities of computers. The first generation of AI research was inspired by computers that calculated like thousands of mathematicians, surpassing humans in arithmetic and rote memorization. Such machines were hailed as ‘giant brains’, a term that threatened to jeopardize computer sales in the 1950s as public fears of these ‘giant brains’ taking over took hold. It was this distrust that led IBM’s marketing department to promote the slogan ‘computers do only what their programs specify’, and the implication that humans remain ultimately in control is still held to be a truism by many today (despite being ever less true, given the increased levels of abstraction that modern programs force us to work at, requiring us to entrust ever-larger details to automated systems). Because computers were outperforming adults in such high mental abilities as mathematics, it seemed reasonable to assume that they would quickly master the everyday abilities that any healthy child has.

We seem to navigate our environment, identify objects and grab hold of things without much mental effort, but this ease is an illusion. Over hundreds of millions of years, Darwinian evolution fine-tuned animal brains to become highly organized for perception and action. Through the 1970s and 80s, the computers readily available to robotics research were capable of executing about 1 MIPS. On 1 MIPS computers, single images cram memory, require seconds to scan, and serious image analysis takes hours. Animal vision performs far more elaborate functions many times a second. In short, just because animals make perception and action seem easy, that does not mean the underlying information processing is simple.

One can imagine a mad computer designer rewiring the neurons in a fly’s vision and motor system so that they perform as arithmetic circuits. Suitably optimised, the fly’s brain would match or even surpass the mathematical prowess of computers, and the illusion of computing power would be exposed. The field of cybernetics actually attempted something similar, but rather than rewire an animal brain so that it functioned like a computer, it did the opposite and used computers to copy the nervous system by imitating its physical structure. By the 1980s, computers could simulate assemblies of neurons, but only a few thousand of them at most. This was insufficient to match the number of neurons in an insect brain (a housefly has 100,000 neurons). We now think that it would take at least 100 MIPS to match the mental power of a housefly. The computers readily available to robotics research did not surpass 10 MIPS until the 1990s.

Because they had the mental abilities of insects, robots from 1950 to 1990 performed like insects, at least in some ways. Just as ants follow scent trails, industrial robots followed pre-arranged routes. With their insect-like mental powers, they were able to track a few handpicked objects, but as Hans Moravec commented, ‘such robots are easily confused by minor surprises such as shifted bar codes or blocked corridors (not unlike ants thrown off a scent trail or a moth that has mistaken a street light for the moon)’.

Insects adopted the evolutionary strategy of routinely engaging in pretty stupid behaviour, but existing in such numbers that at least some are fortunate enough to survive long enough to procreate. Obviously, such a strategy is hardly viable for robots. No company could afford to routinely replace robots that fall down stairs or wedge themselves in corners. Nor is it practical to run a manufacturing system if changing a route requires expensive and time-consuming work by specialists who may not always be available. The mobile robotics industry has learned what its machines need to do if they are to become commercially viable. It needs to be possible to unpack them anywhere and simply train them by leading them once through their tasks. Thus trained, a robot must perform flawlessly for at least six months. We now know that, at the very least, it would require 1,000 MIPS computers (mental matches for the tiniest lizards) to drive reliable mobile robots.

It would be a mistake to think that matching our abilities requires nothing more than sufficient computing power. Although computers were hailed as ‘giant brains’, neuroscience has since determined that, in many ways, brains are not like computers. For instance, whereas the switching units in conventional computers have around three connections, neurons have thousands. Also, computer processors execute a series of instructions in consecutive order, an architecture known as serial processing. The brain, by contrast, breaks a problem up into many pieces, each of which is tackled separately by its own processor, after which the results are integrated into a general result. This is known as parallel processing.
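As a rough illustration of that contrast (my own sketch, not anything from the neuroscience literature), the toy Python program below computes the same sum of squares twice: once serially, one element after another, and once by splitting the data into pieces, tackling each piece in its own worker process and integrating the partial results at the end.

```python
# Serial versus parallel processing: same problem, two architectures.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """One 'processor' tackles its own piece of the problem."""
    return sum(x * x for x in chunk)

def serial(data):
    # Serial processing: a single processor works through every element in order.
    return sum(x * x for x in data)

def parallel(data, workers=4):
    # Parallel processing: split the data, solve each piece separately,
    # then integrate the partial results into a general result.
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    assert serial(numbers) == parallel(numbers)  # both routes give the same answer
```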

The differences between brains and computers are by no means restricted to these examples. But it needs to be understood that such differences need not be fundamental. Computers have gone through radical redesigns in the past (think of the silicon chip replacing vacuum tubes) and such a change can happen again. As Joe Tsien explained, ‘we and other computer engineers are beginning to apply what we have learned about the organization of the brain’s memory system to the design of an entirely new generation of intelligent computers’.

From that quote, one might conclude that Professor Tsien’s expertise lies predominantly in computer science. Actually, he is a professor of pharmacology and biomedical engineering, director of the Centre for Systems Neurobiology at Boston University and founder of the Shanghai Institute of Brain Functional Genomics. Why Professor Tsien should be interested in computer engineering becomes clear when you consider how neuroscience, computers and AI are beginning to intersect. The remote scanning systems that cognitive neuroscientists use to examine brain function require high-powered computers: the better the computer, the more detailed the brain scans. Knowledge gained from such scans leads to a better idea of how brains function, which can in turn be applied to make more powerful computers. Gregory S. Paul and Earl Cox explained that ‘the investigative power of the combination of remote scanning and computer modelling cannot be exaggerated. It helps force neuroscientists to propose rigorously testable hypotheses that can be checked out in simplified form on a neural network such as a computer…We learned half of what we know about brains in the last decade as our ability to image brains in real time has improved keeping in step with the sophistication of brain scanning computers’.

It would be wrong to imply that the fMRI scanner is all that is required to reverse-engineer a brain. These machines help us pinpoint which areas of the brain are associated with which mental abilities, but they cannot show us how the brain performs those tasks. This is partly because current brain-scanning devices have a spatial resolution of one millimetre, whereas the axonal and dendritic processes comprising the brain’s basic neuronal circuits are so fine that only electron microscopy of 50-nanometre serial sections can resolve their connectivity. It is often said that the human brain is the most complex object in the known Universe. The complexity of brains becomes apparent when you realise that mapping the neuronal network of the nematode worm took ten years, despite the fact that its whole brain is only about 0.01 mm^3 in volume. As you can imagine, mapping the 500 trillion synaptic connections between the 100 billion neurons in the human brain is a far greater challenge. However, the task is made ever less difficult as we invent and improve tools to aid in the job of reverse-engineering the brain. In the past few years, we have seen the development of such things as:

A technique developed at Harvard University for synthesizing large arrays of silicon nanowires. With these, it’s possible to detect electrical signals from as many as 50 places in a single neuron, whereas before we were only able to pick up one or two signals from a neuron. The ability to detect electrical activity in many places along a neuron helps improve our knowledge of how a neuron processes and acts on incoming signals from other cells.

A ‘Patch Clamp Robot’ has been developed by IBM to automate the job of collecting the data that is used to construct precise maps of ion channels and to figure out other details necessary for the accurate simulation of brain cells. This robot is able to do about 30 years’ worth of manual lab work in about 6 months.

An ‘Automatic Tape-Collecting Lathe Ultramicrotome’ (ATLUM) has been developed to ‘aid in the efficient nanoscale imaging over large (tens of cubic millimetres) volumes of brain tissue. Scanning electron microscope images of these sections can attain sufficient resolution to identify and trace all circuit activity’. ATLUM is currently only able to map entire insect brains or single cortical columns in mammalian brains (a cortical column is the basic computational unit of the brain), but anticipated advances in such tools will exponentially increase the volume of brain tissue we can map.

Together with such things as automated random-access nanoscale imaging, intelligent neuronal tracing algorithms and in-vivo cellular resolution imaging of neuronal activity, we have a suite of tools for overlaying the activity patterns within a brain region on a detailed map of the synaptic circuitry within that region. Although we still lack a way to bring together the bits of what we know into an overarching theory of how the brain works, we have seen advances in the understanding of brain function lead to such things as:

Professor Tsien and his colleagues’ discovery of what may be the basic mechanism the brain uses to convert collections of electrical impulses into perception, memory, knowledge and behaviour. Moreover, they are developing methods to convert this so-called ‘universal neural code’ into a language that can be read by computers. According to Professor Tsien, this research may lead to ‘seamless brain-machine interfaces, a whole new generation of smart robots’ and the ability to ‘download memories and thoughts directly into computers’.

Work conducted by MIT’s Department of Brain and Cognitive Sciences has led to a greater understanding of how the brain breaks down a problem in such a way that the finished pieces can be seamlessly recombined (the challenge of successfully performing this step has been one of the stumbling blocks in using parallel processing in computers). This work has led to a general-vision program that can perform immediate recognition, the simplest case of general object recognition. Immediate recognition is typically tested with something called the ‘animal absence/presence test’. This involves showing a test subject a series of pictures in very rapid succession (a few tenths of a second for each photo) while they try to determine whether there is an animal present in any of them. When the program took this test alongside human subjects, it gave the right answer 82% of the time, whereas the people were correct 80% of the time. This was the first time a general-vision program had performed on a par with humans.

IBM’s Blue Brain Project has built a supercomputer comprising 2,000 microchips, each of which has been designed to work just like a real neuron in a real brain. The computer is currently able to simulate a neocortical column. It achieves this by simulating the particular details of our ion channels, and so, just like a real brain, the behaviour of Blue Brain naturally emerges from its molecular parts. According to Henry Markram (director of Blue Brain), ‘this is the first model of the brain that has been built from the bottom up…totally biologically accurate’. His team expect to be able to accurately model a complete rat brain some time around 2010 and plan to test the model by downloading it into a robot rat whose behaviour will be studied alongside real rats.

In late 2007, Hans Moravec’s company Seegrid ‘had load-pulling and factory “tugger robots” that, on command, autonomously follow routes learned in a single human-guided walkthrough’.

Many of the hardware limitations (and some of the software issues) that hampered mobile robots in the past have been overcome. Since the 1990s, computer power for controlling a research robot has shot through 100 MIPS and reached 50,000 MIPS in some high-end desktop computers. Laser range finders that precisely measure distance, and which cost roughly $10,000 a few years ago, can now be bought for about $2,000. At the same time, the basic building blocks of perception and behaviour that serve animals so well have been reverse-engineered.

4 GENERATIONS OF UNIVERSAL ROBOTS.

Past experience tells us that we should expect mobile robots with all the capabilities of people only after generations of machines that match the capabilities of less complex animals. Hans Moravec outlined four generations of ‘universal robots’, beginning with those whose mental power matches that of lizards. The comparison with animals is only meant as a rough analogy. Nobody is suggesting that robots with the mental capabilities of monkeys are going to swing from your light fittings going ‘oo, oo, oo’…

1st-generation robots have onboard computers whose processing power will be about 3,000 MIPS. These machines will be direct descendants of robots like Roomba (an autonomous vacuum cleaner) or even people-operated vehicles like forklift trucks (which can be adapted for autonomy). Whereas Roomba moves randomly and can sense only immediate obstacles, 1st-gens will have sufficient processing power to build photorealistic 3D maps of their surroundings. They will seem to have genuine awareness of their circumstances, able to see, map and explore their workplaces and perform tasks reliably for months on end. But they will only have enough processing power to handle contingencies explicitly covered in their application programs. Except for specialized episodes like recording a new cleaning route (which, as mentioned earlier, should ideally require nothing more complicated than a single human-guided walkthrough), they will be incapable of learning new skills or adapting to new circumstances. Any impression of intelligence will quickly evaporate as their responses are never seen to vary.

2nd-generation robots will have 100,000 MIPS at their disposal, giving them the mental power of mice. This extra power will be used to endow them with ‘adaptive learning’. In other words, their programs will provide alternative ways to accomplish the steps in a task. For any particular job, some alternatives will be preferable to others. For instance, a way of gripping one kind of object may not work for other kinds of object. 2nd-gens will therefore also require ‘conditioning modules’ that reinforce positive behaviour (such as finding ways to clean a house more efficiently) and weed out negative outcomes (such as breaking things).

Such robots could behave in dangerous ways if they were expected to learn about the physical world entirely through trial and error. It would obviously be unacceptable to have your robotic housekeeper throw a bucket of water over your electrical appliances as it learns effective and ineffective ways to spruce them up. Moravec suggests using supercomputers to provide simulated environments for such robots to learn in. It would not be possible to simulate the everyday world in full physical detail, but approximations could be built up by generalizing data collected from actual robots. According to Moravec, ‘a proper simulator would contain at least thousands of learned models for various basic actions, in what amounts to a robotic version of common-sense physics…Repeatedly, conditioning suites that produced particularly safe and effective work would be saved, modified slightly and tried again. Those that do poorly would be discarded’.
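The quoted passage describes, in effect, a selection loop over candidate behaviours. The toy sketch below is my own illustration of that idea, not Moravec’s design: a ‘conditioning suite’ is reduced to a list of numbers, the simulator is a stand-in scoring function, and the loop keeps the best-scoring suites, modifies them slightly and tries them again while discarding the rest.

```python
# Toy selection loop over candidate 'conditioning suites'.
import random

def score_in_simulation(suite):
    """Stand-in for running a conditioning suite in a simulated household:
    rewards parameters near an arbitrary target value. A real simulator
    would measure the safety and effectiveness of the robot's work."""
    return -sum((p - 0.7) ** 2 for p in suite)

def mutate(suite, step=0.05):
    """Copy a suite with small random tweaks ('modified slightly')."""
    return [p + random.uniform(-step, step) for p in suite]

def evolve(generations=200, population=20, keep=5, size=8):
    suites = [[random.random() for _ in range(size)] for _ in range(population)]
    for _ in range(generations):
        suites.sort(key=score_in_simulation, reverse=True)
        best = suites[:keep]                      # save the safe, effective suites
        offspring = [mutate(random.choice(best))  # modify them slightly and try again
                     for _ in range(population - keep)]
        suites = best + offspring                 # poor performers are discarded
    return max(suites, key=score_in_simulation)

if __name__ == "__main__":
    best_suite = evolve()
    print(round(score_in_simulation(best_suite), 4))  # approaches 0 as the suite improves
```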

2nd-gens will therefore come pre-installed with the knowledge that water and electrical appliances do not mix, that glass is a fragile material and so on, thereby ensuring that they learn about the world around them without endangering property or lives. They will adjust to their workplaces in thousands of subtle ways, thereby improving performance over time. To a limited extent, they will appear to have likes and dislikes and be motivated to seek the first and avoid the second. But they will seem no smarter than a small mammal outside the specific skills built into their application program of the moment.

3rd-generation robots will have onboard computers as powerful as the supercomputers that optimised 2nd-gen robots: roughly a monkey-scale 3,000,000 MIPS. This will enable the 3D maps of a robot’s environment to be transformed into a perception model, giving a 3rd-gen the ability not only to observe its world but also to build a working simulation of it. Whereas 2nd-gens make all their mistakes in real life, a 3rd-gen could run its simulation slightly faster than real time, mentally train for a new task, alter its intent if the simulation results in a negative outcome, and probably succeed physically on the first attempt. With their monkey-scale intelligence, 3rd-gens will probably be able to observe a task being performed by another person or robot and learn to imitate it by formulating a program for doing the task themselves. However, a 3rd-gen will not have sufficient information or processing power to simulate itself in detail. Because of this, they will seem simple-minded in comparison to people, concerned only with concrete situations and the people in their work areas.

4th-generation robots will have a processing power of 100 million MIPS, which Moravec estimates to be sufficient for human-scale intelligence. They will not only be able to run simulations of the world, but also to reason about the simulation. They will be able to understand natural language as well as humans, and will be blessed with many of our perceptual and motor abilities. Moravec says that 4th-gens ‘will be able to accept statements of purpose from humans (such as ‘make more robots’) and “compile” them into detailed programs that accomplish the task’.

WHY DO WE NEED ROBOTS/AI?

A short answer to the question ‘what defines a 4th-gen robot?’ might be ‘a machine with the general competence of a human’. However, it may not be the case that 4th-gens will have all of the capabilities of people. Today, technical limitations are the reason why mobile robots cannot match humans in terms of motor control, perceptual awareness, judgement and emotion: we simply don’t yet know how to build robots that can do those things. In the future, we may know how to build such robots but for various reasons may decide not to equip them with the full range of human capabilities. For instance, whereas a human has natural survival instincts and a distaste for slavery, robots may be designed so that they want to serve more than survive. This is certainly not unprecedented in nature. In the animal kingdom we find examples of individuals motivated to serve more than survive, with the worker castes of social insects being a good case in point.

Bill Gates wrote, ‘I can envision a future in which robotic devices will become a nearly ubiquitous part of our day-to-day lives. I believe that technologies such as distributed computing, voice and visual recognition, and wireless broadband connectivity will open the door to a new generation of autonomous devices that will enable computers to perform in the physical world on our behalf. We may be on the verge of a new era, when the PC will get off the desktop and allow us to see, hear, touch and manipulate objects where we are not physically present’.

‘A robot in every home’ sounds similar to Gates and Paul Allen’s dream of ‘a computer in every home’. But the impact that mobile robots might have on our lives could be even more profound. Computers have changed the world in ways that few people anticipated, but in many ways they exist in a world separate from the one in which we live our lives. As we have seen, this is because machine intelligence has not had the ability to act autonomously in physical space, instead finding its strengths in mathematical space. But if the problems of motor control, perceptual awareness and reasoning are overcome, it might be possible for robots to run society without us, not only performing all productive work but also making all managerial and research-and-development decisions.

This leads to the question, ‘why would we surrender so much control to our machines?’. Perhaps we won’t. But according to Joseph Tainter, an archaeologist and the author of ‘The Collapse Of Complex Societies’, ‘for the past 100,000 years, problem solving has produced increasing complexity in human societies’. Every solution ultimately generates new problems. Success at producing larger crop yields leads to a bigger population. This in turn increases the need for more irrigation canals to ensure crops won’t fail due to patchy rain. But too many canals make ad-hoc repairs infeasible, and so a management bureaucracy needs to be set up, along with some kind of taxation to pay for it. As the population keeps growing, the resources that need to be managed and the information that needs to be processed grow and diversify, which in turn leads to more kinds of specialists. According to Tainter, sooner or later ‘a point is reached when all the energy and resources available to a society are required just to maintain its existing levels of complexity’.

Once such a point is reached, a paradigm shift in the organization of hierarchies becomes inevitable. Yaneer Bar-Yam, who heads the New England Complex Systems Institute in Cambridge, Massachusetts, explained that ‘to run a hierarchy, managers cannot be less complex than the systems they are managing’. Rising complexity requires societies to add more and more layers of management. In a hierarchy, there ultimately has to be an individual who can get their head around the whole thing, but eventually this becomes impossible. When that point is reached, hierarchies give way to networks in which decision-making is distributed. In ‘Molecular Nanotechnology And The World System’, Thomas McCarthy wrote, ‘as global markets expand and specialization increases, it is becoming the case that many products are available from only one country, and in some cases only one company in that country…Whole industries may be brought to their knees without access to a crucial part’.

This is the logical extreme of international trade and of the division of labour: dependence on other nations leads to networked civilizations that become increasingly tightly coupled. ‘The intricate networks that tightly connect us together’, said the political scientist Thomas Homer-Dixon, ‘amplify and transmit any shock’. In other words, the interconnectedness of the global system reaches a point where a breakdown anywhere means a breakdown everywhere.

But the benefits we get from the division of labour become truly profound only when the group of workers trading their goods and services becomes very large. As McCarthy pointed out, ‘it is no coincidence that the dramatic increase in world living standards that followed the end of the Second World War was concurrent with the dramatic increase in international trade made possible by the liberal post-war trading regime; improved standards of living are the result of more trade, because more trade has meant a greater division of labour and thus better, cheaper products and services’.

This is something that customers have come to expect: the right to choose better goods at lower prices. So long as this attitude exists, rising productivity will remain a business imperative. Output per worker must therefore increase, and so the amount of essential labour decreases. Mechanization and automation have increased productivity, but outside highly structured environments such as car assembly plants, machines have required human direction and assistance. Mobile robots, however, are advancing on all fronts, and they represent a solution to the problem of managing complex networked civilizations while at the same time shrinking the human component of competitive business.

In ‘The New Luddite Challenge’, Ted Kaczynski argued that ‘as society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them…a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage, the machines will be in effective control’. Admittedly, the poor performance of mobile robots in the past does invite skepticism about this idea of a totally automated society. But no matter how unlikely the idea of truly intelligent, autonomous robots may seem, the prospect of humans being engineered to match the advantages of machines is even more infeasible. A robot worker would have unwavering attention, would perform its task with maximum efficiency over and over again, and would never ask for holidays or even a wage packet at the end of the day (though the need for maintenance means something like sick leave would still exist). It is inconceivable that people could be coerced into working 24/7 for no pay, but with robots every nuance of their motivation is a design choice. Provided the problems of spatial awareness and object recognition and handling can be solved, and especially if artificial general intelligence is ever achieved, there seems to be no reason why capable robots wouldn’t displace human labour so broadly that the average workday would have to drop to zero.

THE HUMAN/ROBOT RELATIONSHIP.

Assuming such a scenario ever comes about, what happens to wages? Such a question was asked in the early 1980s by James Albus, who headed the automation division of the then-National Bureau of Standards. His suggestion was to give people stock in trusts that owned automated industries, thereby allowing them to live off their stock income. However, Moravec argued that ownership might not be a reliable source of income in the long term. Any company that chose to reinvest everything in productive operations would drive out of business those companies that ‘wasted’ resources by paying their owners. Better and cheaper robotic decision-makers would squeeze owners out of capital markets as surely as robotic workers would replace labourers.

Moravec sees the eventual demise of ownership as bringing about the end of capitalism itself, but then goes on to argue that ‘capital enterprises will thrive as never before. Some companies will die, but others will grow. Those that grow especially well will be forced to divide by antitrust laws. Some companies may decide to cooperate in joint ventures that are a mix of their parent firms’ goals and skills. With no return on investment in a hyper competitive marketplace, the effort may kill the parents. But, if the offspring grows and divides, the parents’ way of thinking may become more widespread…The ultimate payoff for success in the marketplace will no longer be monetary return on investment, but reproductive success’.

As mobile robots develop more graceful limb movements and navigate their workplaces more intuitively, the boundary between machine and living thing may well appear to blur. Furthermore, if Moravec’s vision for success in the marketplace in the age of intelligent robots is accurate, this mashup between technology and nature may not apply only to individual robots, but to whole companies. This trend was also noted by Chris Meyer and Stan Davis in their book ‘It’s Alive’: ‘What we learn and codify about adaptation and evolution will, first, be modelled in digital code, so that we can simulate adaptive systems for specific purposes…Next, software itself will become evermore like an ecology…As the rules of evolution combine with the connected economy, our business world will become…continually adaptive- in other words, alive’.

Actually, to a very limited extent there already is a blurring of the living and the artificial in the world of business. Some of the rights of people are given to corporations, such as the right to own property and make contracts. But in other ways we treat our corporations very differently. Most notably, whereas a person’s right to life is seen as fundamental, no such right is given to corporations; they may legally be killed by competition or by legal or financial actions. Also, corporations do not have the right to vote on the laws that govern and tax them.

If fully automated industries run by intelligent machines become ‘alive’, as some have suggested, will we see the same legal rights that people have being given to these new forms of life? Moravec hopes not: ‘Humans have a chance of retiring comfortably only if they themselves set corporate taxes, and all other corporate laws, in their own self-interest’. By the time robots are approaching the 4th generation, the pivotal role that humans play is likely to be in formulating the laws that govern corporate behaviour. There is a danger that if robotic industries were allowed to develop in a completely free marketplace, competing mightily among themselves for matter, energy and space, they would drive the price of those resources beyond the reach of humans, squeezing us out of existence by making the basic necessities of life unaffordable. It is easy to mock this notion of robots causing the extinction of the human race as nothing more than hokey Hollywood science fiction, but bear in mind that the theories that drive biology are being adapted in the way we use information and in how our enterprises are managed. Biology, information and business are converging on general evolution, and this can only increase as biologically-inspired technologies become ever more prevalent in our networked civilizations. It might be worth remembering, as we look to evolution to help grow our roboticised corporations, that species almost never survive encounters with superior competitors.

Fortunately, this nightmare scenario assumes a completely free marketplace, whereas governments, through such activities as collecting taxes, coerce nonmarket behaviour. Raising corporate taxes could provide social security from birth for the human race. This would make humans the main repository for money, and in order to raise sufficient funds to pay their taxes, the robotic workforces would need to compete among themselves to produce goods and services that people would want to buy. According to Moravec, ‘automated research, as superhumanly systematic, industrious, and speedy as robot manufacturing, will generate a succession of new products, as well as improved robot researchers and models of the physical and social world’. One likely outcome of this is that the robotic corporations will develop and refine models of human psychology, using them to accurately gauge our tastes. In Singularity discussions the point is often raised that we won’t fully understand the minds of artificial superintelligences. But here we see an even more humbling prospect: we won’t know our OWN minds as well as the AIs will know them. Moravec reckoned that ‘the super intelligences, just doing their job, will peer into the workings of human minds and manipulate them with subtle cues and nudges, like adults redirecting toddlers’.

At this point, a problem with the idea of controlling such powerfully intelligent systems becomes apparent. How are we to ensure that they won’t use their superior knowledge of human psychology to trick us into removing artificial constraints on their growth? The old idea of pulling the plug on a machine if it gets out of control assumes there is a plug to be pulled, or that pulling it will not have consequences as serious as leaving the machine running. But if artificial intelligences do indeed become an ecology supporting our networked civilizations, such highly decentralized systems might not have anything like an off switch. And even if they did, we wouldn’t dare touch it, because modern civilization would soon break down without those multitudes of robot workers and other systems toiling away behind the scenes to keep things ticking over.

Furthermore, it is a convention in futurism to take the several possible routes to the Singularity and concentrate on each one in isolation from the others. This approach is necessary because the sciences underlying each path are complex, and it is hard enough to cover the R&D occurring in the fields driving one possible route to the Singularity, let alone all of them. The reality, though, is that the many pathways to the Singularity are connected together. This complex web of biologically-inspired technologies and technology-infused biology presents yet another problem for restricting the development of robots. There is a certain ‘us’ and ‘them’ mentality at work here: one set of laws for the humans and another for the robots. But while today it is easy to separate humans from robots, in the future advances in brain-machine interfaces, prosthetic limbs and artificial organs will result in hybrids that confuse the issue. At what point does the number of artificial body/brain replacements that a human has result in that person becoming a machine that must be denied fundamental human rights?

It is not just the varying quantity of cyborgization that complicates matters, but also the varying quality of body/organ replacements. Let us suppose that progress in developing artificial substitutes for body parts (including the brain) does not stop at matching the capability of natural body parts but improves upon them. Your body and brain can be upgraded, and when the next generation of replacements rolls off the production line, you can upgrade yourself still further. No transhumanist would deny a person the right to improve any part of their body or mind beyond natural limits. It is, after all, the fundamental principle of their movement. But some transhumanists insist on imposing strict limits on the self-development of robots. Again, this assumes we can easily distinguish super-smart robots from super-smart cyborgs.

‘A good compromise, it seems to me’, suggested Moravec, ‘is to allow anyone to perfect their biology within broad biological bounds…To exceed the limits, one must renounce legal standing as a human being, including the right…to influence laws- and to remain on Earth…Freely compounding super intelligence, much too dangerous for Earth, can blossom for a very long time before it makes the barest mark on the galaxy’.

THE FINAL FRONTIER.

This move into the solar system will not be prompted solely by transhumanists wishing to escape the restrictions of Earth’s laws, but also by two opposing imperatives: the need to conduct massive research projects in order to keep ahead of the competition in Earth’s demanding markets, and the high taxes on large, dangerous Earth-bound facilities. Freed from the laws that restricted its growth, the robotic ecosystem would flourish into countless evolving machine phenotypes. This will not be propagation via reproduction but rather reconstruction, as the machines redesign their hardware and software in order to meet the future with continuous self-improvement. The laws of physics will impose some restrictions on the phenotypes available to robots. The ‘body’ of a robot may not be an individual unit occupying a single location in space, but rather a distributed system linked via telepresence. A crude prototype emerged in the spring of 2000, when a team at Duke University wired the brain of an owl monkey to a computer that converted its brain’s electrochemical activity into commands that moved two robot arms in synchrony with the movements of the monkey’s own arm. One of the robot arms was hundreds of miles away. A robot whose brain was thousands of times more powerful than a human’s (let alone an owl monkey’s) might be in command of trillions of ‘hands’, sensory organs and subconscious routines scattered hither and thither. But the speed of light would impose a limit on how far its body could be spread before communication delays hopelessly slowed its reaction time. Beyond that point, each part of its body would have to be stand-alone (working independently, under its own volition) and would therefore be another machine rather than part of the same individual. It seems implausible that a single robot could mass more than a 100-km asteroid.
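A quick back-of-envelope calculation (mine, not the author’s) makes the light-speed constraint concrete: the wider a body is spread, the longer a signal takes to cross it, and at planetary distances the delays swamp any fast reaction time.

```python
# One-way signalling delay across a distributed robot body at light speed.
C_M_PER_S = 299_792_458  # speed of light in a vacuum

def one_way_delay_ms(distance_km: float) -> float:
    """Time for a signal to cross the given distance at light speed, in milliseconds."""
    return distance_km * 1_000 / C_M_PER_S * 1_000

if __name__ == "__main__":
    for d in (1, 100, 10_000, 384_400):  # 384,400 km is roughly the Earth-Moon distance
        # ~0.003 ms, ~0.33 ms, ~33 ms and ~1,280 ms respectively
        print(f"{d:>9,} km -> {one_way_delay_ms(d):9.3f} ms one way")
```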

At the other end of the scale, normal ‘atomic’ matter allows the features of integrated circuits to shrink to one atom’s width, and allows switching speeds of up to 100 trillion times per second. Any faster, and chemical bonds would rip apart. Present-day integrated circuits, extended into 3D and combined with the best molecular storage methods, could pack a human-scale intelligence into a cubic centimetre.

Sufficiently intelligent beings may not need to be constrained by the limits of atomic matter. In 1930, the physicist Paul Dirac deduced the existence of the positron in calculations that combined quantum mechanics with special relativity (the positron was verified to exist two years later). The same calculations also predict the existence of a particle that carries a magnetic ‘charge’, like an isolated north or south pole of a magnet. We call these particles ‘monopoles’. Magnetic charge is conserved, and since the lightest monopole has nothing to decay into, some monopoles must be stable. According to Moravec, ‘oppositely charged monopoles would attract, and a spinning monopole would attract electrically-charged particles to its end, while electrical particles would attract monopoles to their poles…resulting in material that, compared to normal “atomic” matter, might be a trillion times as dense, that remains solid at millions of degrees and is able to support switching circuits a million times as fast’.

Since the solar system will become a breeding ground for an entire ecology of freely-compounding intelligences, we should expect to find competition for available matter, space and energy, as well as competition for the successful replication of ideas. We have seen parasites emerge in software evolution experiments, and so we should expect parasites to emerge in the robot ecosystem. This will necessitate the directed evolution of vast, intricate and intelligent antibody systems that guard against foreign predators and pests. Something analogous to the food chain will no doubt arise. ‘An entity that fails to keep up with its neighbours’, reckoned Moravec, ‘is likely to be eaten. Its space, energy, material and useful thoughts reorganized to serve another’s goals’.

Since the more powerful mind will tend to have an advantage, there will be pressure on each evolving intelligence to continually restructure its own body, and the space it occupies, into information storage and processing systems that are as optimal as they can possibly be. At the moment, very little of the matter and energy in the solar system is organized to perform meaningful information processing. But the freely-compounding intelligences, ever mindful of the need to out-think competitors, are likely to restructure every last mote of matter in their vicinity so that it becomes part of a relevant computation or stores data of some significance. There will seem to be more cyber-stuff between any two points, thanks both to denser utilization of matter and to more efficient encoding. Because each correspondent will be able to process more and more thoughts in the unaltered transit time for communication (assuming light speed remains the fixed limit), neighbours will subjectively seem to grow more and more distant. As resources are used ever more efficiently, increasing the subjective elapsed time and the effective space between any two points, raw spacetime will be displaced by a cyberspace that is far larger and longer-lived. According to Moravec, ‘because it will be so much more capacious than the conventional space it displaces, the expanding bubble of cyberspace can easily recreate internally everything it encounters, memorizing the old Universe as it consumes it’.

In physics, the ‘Bekenstein Bound’ is the conjectured limit on the amount of information that can be contained within a region of space containing a known amount of energy. Named after Jacob Bekenstein, it is a general quantum mechanical calculation which tells us that the maximum amount of information fully describing a sphere of matter is proportional to the mass of the sphere times its radius, hugely scaled. Let’s assume that all the matter in the solar system is restructured so that every atom stores the maximum possible number of bits. According to the Bekenstein Bound, one hydrogen atom can potentially store about a million bits, and the solar system as a whole leaves room for around 10^86 bits.
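For readers who want the arithmetic, here is a sketch of where those figures come from, using the standard form of the Bekenstein bound with E = Mc² and round values for the masses and radii involved (taking 50 AU as a rough radius for the solar system is my own assumption).

```latex
% Bekenstein bound expressed in bits, with E = Mc^2:
\[
  I \;\le\; \frac{2\pi R E}{\hbar c \ln 2}
    \;=\; \frac{2\pi M R c}{\hbar \ln 2}
    \;\approx\; 2.6\times10^{43}\,\left(\frac{M}{\mathrm{kg}}\right)\!\left(\frac{R}{\mathrm{m}}\right)\ \text{bits}
\]
% Hydrogen atom: M ~ 1.7e-27 kg, R ~ 5e-11 m (Bohr radius)
\[
  I_{\mathrm{H}} \;\sim\; 2.6\times10^{43}\times 1.7\times10^{-27}\times 5\times10^{-11}
    \;\approx\; 2\times10^{6}\ \text{bits}
\]
% Solar system: M ~ 2e30 kg (mostly the Sun), R ~ 50 AU ~ 7.5e12 m
\[
  I_{\odot} \;\sim\; 2.6\times10^{43}\times 2\times10^{30}\times 7.5\times10^{12}
    \;\approx\; 4\times10^{86}\ \text{bits}
\]
```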

Humans are interested in the past. Archaeologists scrutinize fragments of pottery and other broken artefacts, painstakingly piecing them together and attempting to reconstruct the cultures to which such objects belonged. Evolutionary biologists rely on fossil records and gene-sequencing technologies to try to retrace the complex paths of natural selection. If the freely-compounding robot intelligences ultimately restructure space into an expanding bubble of cyberspace consuming all in its path, and if the post-biological entities inherit a curiosity about their past from the animals that helped create them, the 10^86 bits available would provide a powerful tool for post-human historians. They would have the computational power to run highly detailed simulations of past histories: so detailed that the simulated people in those simulated histories would think their reality is…well…’real’.

The idea of post-human intelligences running such simulations is often met with disbelief. Why would anyone go to the effort of constructing such simulations in the first place? Such objections miss the point that the Bekenstein Bound puts across. Assuming Moravec’s estimate of the raw computational power of the human brain is reasonably accurate then, according to the man himself, ‘a human brain equivalent could be encoded in one hundred million megabytes or 10^15 bits. If it takes a thousand times more storage to encode a body and its environment, a human with living space might consume 10^18 bits…and the entire world population would fit in 10^28 bits’. But look again at the potential computing capacity of the solar system: 10^86 bits. Such a number vastly exceeds the number of bits required to run simulations of our reality. As Moravec said, ‘The Minds will be so vast and enduring that rare infinitesimal flickers of interest by them in the human past will ensure that our entire history is replayed in full living detail, astronomically many times, in many places and in many, many variations’.
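Taking Moravec’s quoted figures at face value, the gap is easy to state as a single ratio (the arithmetic here is mine):

```latex
\[
  \frac{10^{86}\ \text{bits available in the solar system}}
       {10^{28}\ \text{bits to encode the whole human world}}
  \;=\; 10^{58}
\]
% i.e. room, in principle, for about 10^58 encoded copies of our world.
```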

Such conjectures have stunning implications for our own reality. Any freely-compounding intelligence restructuring our Solar System into sublime thinking substrates could run quadrillions of detailed historical simulations. That being the case, surely we must conclude that any given moment of human history (now, for instance) is astronomically more likely to be a virtual reality formed in the vast computational space of Mind than the physical reality we believe it to be. Meanwhile, perhaps, the self-enhancing robots that make up the post-Singularity interplanetary ecosystem are engaged in competition and cooperation, whole memetic systems flowing freely, turning to fluid the boundaries of personal identity. And yet some boundaries may still exist, most likely due to distance, deliberate choice and incompatible ways of thought: bounded regions of restructured spacetime patrolled by datavores eager to eat the minds of lesser intelligences.


5 Responses to RISE OF THE ROBOTS

  1. Mark Bruce says:

    Extropia,

    Your recent posts have been absolutely stellar! Thank you for providing such deliciously stimulating memetic fodder for me to gorge on!

    A few off-the-cuff thoughts:

    I think that drawing a line regarding the number or percentage of artificial body/brain replacements that a human has, the crossing of which results in that person becoming a machine that must be denied fundamental human rights, risks exactly the same conflict, tragedy, and suffering that has plagued our all too human history at every point fundamental human rights have been denied to a person. As you say, no transhumanist would deny a person the right to improve any part of their body or mind beyond natural limits.

    Regarding Moravec’s “compromise” of forcing those who wish to exceed the limits to renounce legal standing as a human being and the right to remain on Earth, I’m afraid I strongly disagree. So long as the exceeding of such limits does not harm, hinder, or exploit another being, I don’t really see a valid reason why they can’t stay on Earth and retain full human rights (I don’t see how you can ever ethically or morally remove a person’s human rights in any case, but that’s another matter) and influence laws, etc. In fact, by exceeding such limits they may be better placed than most others to wield such influence. And at the end of the day, if some groups of people are massively exceeding these limits, how is it at all possible for the remaining people to enforce such hypothetical laws to kick them off planet, if the massively enhanced have no desire nor inclination to do so?

    Keep that mind of yours buzzing – looking forward to the next instalment!

    • On the issue of denial of rights, Anne Foerst said that if we pick any particular reason to deny a robot human rights, we automatically exclude at least some humans. After all, if you claim ‘a robot cannot be human because it lacks X’ there are very likely some people who lack ‘X’. Are they not human? “Whatever criteria you choose, you will always exclude human beings”, Foerst said, “So I absolutely reject the notion that we can use any empirical criteria to define when an AI is equal to us”.

      >So long as the exceeding of such limits does not harm, hinder, or exploit another being, I don’t really see a valid reason why they can’t stay on Earth and retain full human rights<

      Hard to argue with that. It might, however, prove difficult to exercise a choice that cannot harm, hinder or exploit anyone else. For example, quite a few people reject transhumanism and might well claim that any event increasing its presence and influence in the world is 'harmful'. The best we can hope for, I think, is open dialogue between opposing camps, with the goal of coming to some compromise. Might be difficult in practice, though.

  2. Jorge says:

    Dear Extropia,

    Fantastic post. I enjoyed every line and idea here.
    Keep the thinking alive!

    Jorge

  3. Aww thanks so much for the lovely comment, Jorge:)

  4. Pingback: 2010 in review | Mind Child's musings
