IT’S ALIVE: THE THEORY AND CONSEQUENCES OF TECHNOLOGICAL EVOLUTION.
Welcome to this year’s Thinkers lecture!
The theme this year is “It’s Alive: The Theory And Consequences Of Technological Evolution”. Those of you who attended last year, and who have excellent memories, may recall that it was about the possible consequences of an evolutionary arms-race between search engine providers.
NATURAL SELECTION IS THE WRONG KIND OF THEORY.
So, I thought this year I would talk about the evolution of technology in general. The search for a workable theory of technological evolution is almost as old as the theory of natural selection itself, and the belief that inanimate matter can have lifelike abilities attributed to it is as old as mythology. Think of the golem of Hebrew legend, or Talos, the bronze automaton of Greek myth. Just four years after Darwin published ‘On The Origin Of Species’, Samuel Butler was calling for a theory of evolution for machines. Most attempts at such a theory have tried to frame it in terms of the steady accumulation of changes, recognisable as Darwinian.
The idea goes something like this. Any given technology has many different designers with different ideas, some of which are selected for their superior performance while others are rejected. This results in a steady accumulation of repeated practices and learned skills. Imagine an ancestor crossing a body of water by riding on a log. Other people copy this idea and over time make adjustments that add to the stability, manoeuvrability and buoyancy of the vessel. Or, their ideas adversely affect the usefulness of the craft and therefore do not survive. The sociologist S. Colum Gilfillan traced in detail how the inventions of planking, ribbing, keels, lateen sails and square sails came about, thereby demonstrating a detailed line of descent that takes us from the dugout canoe to the sailing ship and, finally, to the modern steamship of his day.
So there we go. Darwinian evolution can be applied to technology, right?
Not quite. Natural selection operates under constraints that make it incapable of explaining certain technologies. The first constraint is that a new species can only be created through incremental steps. This is problematic, because no amount of steady accumulation could ever transform a piston engine into a jet engine, nor could a candle evolve into a light bulb. Another constraint is that every incremental step must produce something viable. Inventors, though, are quite capable of creating machines that have irreducible complexity, meaning an organization of parts, the loss of any one of which is fatal to the whole. They can do this because human inventors have some degree of foresight and can see how a component that has no use by itself may become useful as part of a complex organization of other parts. Nature, by contrast, is a ‘blind watchmaker’ with no capacity for foresight.
So where does that leave us in the search for an evolutionary theory for machines? It certainly does not mean there is no such thing, only that Darwinian selection is not always applicable. How, then, can we explain the appearance of radically new technologies that cannot have come about through the steady accumulation of changes to existing technologies?
Judged from outward appearances, some machines appear quite unrelated to anything else. But then, the same thing could be said of life. From outward appearances, one would see no relation between a wolf and a dolphin, or a bat and a mushroom. It was only through studies of animal anatomy, comparisons of the skeletal structures of fossilized remains, and observations of how embryos develop that a common ancestry was suspected and later, with the ability to map and compare genes, confirmed. Similarly, we have to consider technology in its entirety (not just physical inventions but all processes, devices, modules, methods and algorithms that ever existed) in order to see a kind of evolution at work.
When we do that, we discover that the cartoon cliché of some light bulb flashing over a genius’s head as a great idea comes from nowhere is quite wrong. The history of technology is by no means one of more-or-less independent discoveries. This is because any new technology can only come about by using components and methods that already exist. A jet engine, for example, is created by combining compressors, turbines and combustion systems, all of which pre-existed in other technologies.
W. Brian Arthur, who is a professor at the Santa Fe Institute, calls this ‘Combinatorial Evolution’. Any and all novel technologies are created through combinations of existing technologies. Some combinations prove useful, and so they persist and spread around the world, becoming potential building blocks for further technologies. Or, a time may come when they are clearly surpassed by other technologies, and so they go extinct. Also, there are many possible combinations that make little sense, and they, too, come to nothing.
Combinatorial evolution is observed in the natural world from time to time. The most obvious example would be multicellular organisms, which evolved from combinations of single-celled organisms. Clearly, though, evolution through variation and selection is the norm in the natural world. We see the opposite in the technological world. Here, combinations are the norm and while Darwinian variation and selection also plays a role, it follows behind combinatorial evolution, working on structures already formed.
When technology and all its related practices are considered in their entirety, it becomes apparent that every invention stands upon a pyramid of others that made it possible. All future technologies will derive from those that exist today, because these are the building blocks that will be combined to make further technologies that will go on to become potential building blocks, and so on in an accumulation of useful combinations.
But what about the base of the pyramid? If the technologies of today came about through combinations of yesterday’s technology, where did those building blocks come from? Technologies that existed in even earlier times, of course. But this is starting to sound like a problem of infinite regress. Where does it all end?
WHERE IT ALL BEGINS.
The reason any technology ultimately works at all is that it successfully taps into some kind of natural phenomenon. The jet engine works thanks to the law that every action has an equal and opposite reaction. The light bulb exploits electromagnetism. Essentially, technology is a programming of nature. Tracing the evolutionary pyramid of technology therefore takes us back to the earliest natural phenomenon that humans captured.
The earliest phenomenon that could be captured must have been detectable by unaided human senses, and understandable to an animal that almost certainly lacked the communicative abilities of Homo sapiens. It would have been lying around on the ground, essentially.
Some stones happen to have sharp edges, providing a handy cutting tool. A burning branch can set alight combustible material, providing fire for warmth, protection and, eventually, cooking.
Combinations began to appear, and this led to the discovery of more natural phenomena. Fire can reach high enough temperatures to allow the smelting of metals, a discovery that led to iron axe-heads and blades. Further on, as the number of potential building blocks accumulated, and knowledge of different kinds of natural phenomena increased and expanded, clusters of technology and crafts of practice began to emerge.
Levers, ropes and toothed gears were combined to make early machines for milling, irrigation and building construction. Agricultural societies built up stores of produce in earthenware pots. Ownership of the contents (not to mention what the pots contained in the first place) had to be recorded somehow, and so writing emerged.
In time, instruments specifically designed for precise observation were invented, along with scientific methods of deduction. This led to chemical, optical, thermodynamic and electrical phenomena becoming understood and exploited. Ever-more precise instruments uncovered still finer phenomena, such as X-radiation and coherent light, and that led to vast arrays of laser optics, radio transmissions and logic circuit elements combined in various ways, which brings us up to modern times.
So, the history of technology is an evolutionary story of related devices, methods, and exploitations of natural phenomena. It results from people taking what is known at the time, plus a modicum of inspiration, and then combining bits and pieces that already exist in just the right way in order to link some need with some effect or effects that can fulfil it.
But, in what sense can technology be said to be alive? As yet, no formal definition of life exists. The best we can do is judge whether or not any system meets certain criteria, namely reproduction, growth, and response and adaptation to the environment. Technology does indeed meet all these requirements, but of course, so far it has required human agency for its buildout and reproduction. It is therefore alive, but only in the sense that a coral reef (built from the activities of small organisms) can be said to be a living thing.
THE TECHNOLOGY TRAP.
Technological evolution seems to go hand in hand with the accumulation and refinement of knowledge. We know of a tremendous number of things our ancestors never suspected could exist, simply because they lacked the methods and the equipment needed to discover natural phenomena like those of quantum physics.
When I say “we” I mean the human race as a collective. But when we consider people as individuals we find a great deal of ignorance concerning the fine details of how modern societies function. Simply put, we all use technologies as if we know them, while actually lacking anything approaching a thorough understanding.
This is nobody’s fault; there is no other way to be. One reason is that even a relatively small number of building blocks can be combined in a great many ways. If all you had were 40 building blocks, the number of possible combinations would be 1,099,511,627,776. Possible combinations scale exponentially (as 2 to the power of N). Of course, not all combinations result in workable possibilities, far from it. But even if the chances are one in a million that some combination of technologies will result in a novel technology that then becomes a building block itself, once you pass a certain threshold the number of possible combinations climbs very rapidly indeed.
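That arithmetic is easy to check. A minimal sketch, treating every subset of the building blocks as a potential combination (the one-in-a-million viability rate is the illustrative figure from above, not a measured value):

```python
def possible_combinations(n_blocks: int) -> int:
    # Every subset of the building blocks is a potential combination,
    # so the count scales as 2 to the power of N.
    return 2 ** n_blocks

print(possible_combinations(40))  # -> 1099511627776, roughly 1.1 trillion

# Even at one-in-a-million viability, 40 blocks leave about a million
# candidate technologies, each a potential new building block itself.
print(possible_combinations(40) // 1_000_000)  # -> 1099511
```

Doubling the pool to 80 blocks squares the count, which is why the climb past the threshold is so steep.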
The accumulation of technologies and related methods therefore quickly grows beyond the point where the individual can hope to wrap his or her head around everything. And it is not just the number of technologies that is the problem, but the fact that technologies have had to grow more complex.
This, too, was no accident but an inevitable outcome of certain snowballing effects. Suppose our nomadic ancestors were fortunate enough to discover rich and fertile land, and sophisticated enough to develop technologies to exploit such resources, meeting the tribe’s basic needs. Doing so would have led to prosperity, which in turn would have led to population numbers rising.
But, that would have put more strain on the land’s ability to provide for the tribe, and technology would have had to become more sophisticated in order to ensure people’s basic needs continued to be met. Once the sophistication of technologies and the number of skills required to maintain a society reached a certain level, it would have begun to make sense for an individual to specialise in a few professions, rather than being a generalist who could meet all of his or her family’s needs. The reason specialization makes sense is that a person who does nothing but work with stone all day becomes a far better stonemason or builder than someone whose mind and hands must do many things (polymaths who excel in many disciplines are few and far between).
But, then, being an expert in brick does not, in and of itself, put any bread in your basket, nor does being an expert baker produce ovens. If a society of specialists is to function at all, there needs to be a way of organizing them so that, collectively, they meet the needs of the society. This requires managers, and systems of government that can coordinate actions and delegate responsibility. This, in turn, requires channels of communication, and the ever-growing stockpiles of building blocks manufactured in dedicated places must be transported. It is no wonder that the history of economic development is, to a significant degree, the history of transport and communication. Each advance in transport and communication reduces the economic costs of recombination, thereby making innovation ever less expensive. The need for networks of efficient channels of communication put pressure on the discovery of electromagnetic phenomena and of how they could be used to send and receive digitized information.
Once communication becomes digitized, all objects in the digital domain become data strings that can be acted on in the same way. They do not even need to share the same physical space for combinations to happen, since telecommunication can combine digital elements remotely. Thus does technology become a system, a network of functionalities.
Once tribes have evolved into gigantic societies engaged in economic activities that span an entire planet, each city becomes a technology island, totally dependent on networks of services bringing supplies from outside. Without such imports, a city like London or Tokyo could neither feed, nor house, nor clothe the inhabitants that dwell within it. The pace of life in a city is set by the pace of the technology that serves it, and if that pace cannot keep up with demand, the city and the millions who dwell within it will die.
In that sense, our comfortable urban lives are technology traps. We are completely dependent upon technologies to maintain our way of life, while at the same time taking it all for granted. Who ever concerns themselves with how, exactly, pressing a switch results in a light coming on, or a microwaved dinner being cooked, or any other number of convenient operations we have at our fingertips? But, what happens when pressing that button does not achieve the desired effect? What then? Phone for a specialist who can fix the problem? In other words, we reach for more technology. And, why not? After all, the thought that technology will not come to the rescue is just unthinkable, right?
Obviously, it is just as true to say that we depend on technology for our survival, as it is to say it depends upon us for its reproduction. However, the extent to which technology depends upon people for its evolution may well become less and less as time goes by.
“CODE IS CODE”.
This leads to two questions. Can we do this? Should we do this? The first question asks whether or not we can indeed automate aspects of evolution, freeing technology to sense, diagnose and fix problems by itself, or even go as far as conceiving, designing, and building the next generation of technologies autonomously. Well, the natural world seems to run all by itself without human help. And now we have technologies like DNA sequencing, gene chips, and bioinformatics that enable us to discern life’s processes at their fundamental level. Also, our computers have reached the point where they are powerful enough to recreate many of these processes in silico, and we are starting to see how they work in inorganic settings.
Richard Dawkins once observed, “Genetics today is pure information technology. This, precisely, is why an antifreeze gene can be copied from an Arctic fish and pasted into a tomato”. Because life is fundamentally an information technology (in other words, something that operates on the basis of coded instructions) it can be translated into languages understandable to computers, which operate according to the same principles. Another point to consider is that evolution is basically an algorithm, a repeatable procedure, and our understanding of that algorithm has revealed core principles that apply to any adaptive system. They are:
RECOMBINATION: The driving force of innovation, not to mention of technological evolution itself.
AGENTS: Decision-making units, following rules that determine their choices. Cells are biological agents following chemical rules, trading programs are agents following rules in a financial market.
SELF-ORGANIZATION: The ability to organize autonomously to create something more complex. As we have already seen, the only way to have something more complex than a single agent is by connecting multiple agents.
SELECTIVE PRESSURE: If the process is to generally move towards improvement, there has to be some method for determining which agents get the opportunity to recombine in the next generation. Where technology is concerned, the market provides that selective pressure. Our purchasing decisions ultimately decide what has survivability, and what does not.
ADAPTATION: Fitness landscapes are not static. They change shape with each round of evolution, as new competitors emerge forcing rivals to adapt to the new threat.
CO-EVOLUTION: Because rivals must continually adapt to one another, they affect the evolution of industry (or the ecology) as a whole.
EMERGENCE: Certain behaviours that the system itself exhibits are not to be found in any of the parts. Instead, complex global phenomena arise from simple local processes and local interaction rules. No central control over the entire thing is necessary. Indeed, such top-down governance would screw things up.
Because these core principles apply to any adaptive system, we can swap between domains and see nature in terms of mechanism, and the economy as an organism. Biology, information, technology and business are likely to converge on general evolution, as the theories which drive biology are adapted to the way we manage our enterprises. Christopher Meyer and Stan Davis, who both work at the Centre For Business Innovation, coined the phrase “Code Is Code”, explaining:
“you can translate biology into information, and information into biology because both operate on the basis of coded instructions, and those codes are translatable. When you get down to it, code is simply code”.
OK, so we can do it. The second question asked, “why should we do it?” Why should we pursue what Kevin Kelly has called “out-of-controllness”?
Geological evidence, plus knowledge of the constraints natural selection must work under, tells us there is an order in the sequence of body designs. Everything with limbs evolved after everything with a body. In turn, ‘everything with a body’ was a class that came after single-celled organisms. The sequence had to follow this order, because the forms of one stage made the next possible. But, what is more important, is that the precursor stages also drove the evolutionary process by changing the fitness landscape.
This was demonstrated in an experiment by Martin Boraas. He and his colleagues bred a single-celled alga in the lab. They then introduced a predator, another single-celled creature capable of ingesting other microbes. Within 200 generations, the alga evolved a way around the threat, which was to clump together into clusters consisting of eight cells each. In other words, the introduction of a predator encouraged the evolution of a simple body plan. You can imagine single-celled life evolving from a primordial soup. Once the freely-available supplies were nearing exhaustion thanks to increasing numbers of replicating microbes, that would have put a selective pressure on kick-starting a food chain. That, in turn, would have encouraged the evolution of increasingly complex bodies for defense and attack.
What has this got to do with technology? Well, we see a similar thing, in that potential building blocks do more than just make the next stage possible. They also drive the evolutionary process because of the changes they make, both directly and indirectly, to human life. Technologies solve problems, but they also introduce new ones. James Burke put it like this:
“An invention acts rather like a trigger. Because, once it’s there it changes the way things are, and that change stimulates the production of another invention, which in turn causes more change and so on”.
So, technology that depended on a scientific understanding, and on the systematized use of natural phenomena, was both made possible and made necessary by the opportunities and challenges that previous generations of inventions helped bring about.
EVOLUTION IN SILICON.
This is just as true today as it ever was, but something is new. Our most complex technologies have reached a point where we must look to the principles of nature to make or govern them. There is a kind of myth that has grown up around computers, one whose origin can be traced to a comment made by Ada Lovelace, the first programmer. She said:
“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform”.
Most people recognise this in the form expressed by IBM’s marketing department:
“Computers can only do what their programs specify”.
If they have no pretensions to originate anything, there can never be such a thing as artificial intelligence, because it is always the programmers themselves (never the programs) that are intelligent. But is it true?
The way computers have been designed has, itself, evolved over time. This evolution was enabled and made necessary by certain developments triggered by previous rounds of computing technology.
Since the 1960s, the number of components on a chip has risen by a factor of a million, while the manufacturing cost per transistor has fallen to mere pennies. This has made computer circuitry designed step-by-step by draughtsmen an uneconomical prospect. As nanotech pioneer Eric Drexler explained:
“If a million transistor design has an expected market of 100,000 units, then every dollar of design cost per transistor adds tens of dollars to the price of each chip. Yet, a dollar can’t buy much time from a human design team”.
This led to the invention of ‘silicon compilers’, software systems capable of producing a chip (ready to manufacture) with very little human help beyond specifying the chip’s function. Having gained a foothold, silicon compilers gradually improved, and as a result human programmers have had to work at increasing levels of abstraction, entrusting ever-larger details to computer search and pre-programmed expertise. Programs consisting of hundreds of instructions might be completely understood by their designers, but modern software is built from millions of instructions and we have learned to expect the unexpected.
Since the 1980s, another automated design process (one with obvious parallels to the natural world) has emerged. This is known as the ‘evolutionary algorithm’ (EA). An EA takes two parent designs and blends components of each to produce multiple offspring. Each offspring combines the features of its parents in different ways. The EA then selects which offspring it considers worth re-breeding, and they are replicated with occasional mutations (which involve changing a 1 to a 0 here, a 0 to a 1 there), resulting in variation in the next generation. The fittest of that generation are then selected, and so it goes on. As the random mutations and non-random selection continue for thousands of generations, useful features accumulate in the same design.
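The loop just described can be sketched in a few dozen lines. This is a minimal illustration, not any particular EA from the literature: the designs are bit strings and the fitness function simply counts 1-bits (a stand-in for scoring a real circuit or airfoil), and the population size, mutation rate and generation count are arbitrary choices:

```python
import random

random.seed(42)  # reproducible run

GENOME_LEN = 32   # each "design" is a string of 32 bits
POP_SIZE = 20
GENERATIONS = 100

def fitness(genome):
    # Toy objective: count the 1-bits. A real EA would score a
    # circuit, antenna or airfoil here instead.
    return sum(genome)

def crossover(mum, dad):
    # Blend two parent designs: each offspring bit is taken from one parent.
    return [random.choice(pair) for pair in zip(mum, dad)]

def mutate(genome, rate=0.02):
    # Occasional mutation: a 1 becomes a 0 here, a 0 becomes a 1 there.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Non-random selection: only the fittest half get to re-breed.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    # Carry the best design forward unchanged, fill the rest with offspring.
    population = [parents[0]] + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - 1)
    ]

best = max(population, key=fitness)
print(fitness(best))  # useful features have accumulated in one design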
Whereas a human designer does not have time to test all possible combinations, an EA comes closer to doing so, thanks to the sheer speed with which computers can explore mathematical space. However, so far their use has been limited to a few niche applications, because the need to breed and evaluate thousands of generations requires ultra-fast computers. But now, thanks to the emergence of multicore chips that make it easy to divide tasks between cores (something that suits EAs) that is changing. EA pioneer John Koza explained:
“We can now undertake evolutionary problems that were once too complicated or time-consuming. Things we couldn’t do in the past, because it would have taken two months to run the genetic program, are now possible in days or less”.
Systems such as these can employ vast quantities of data, more than enough to be overwhelming to humans. While each procedure may be simple by itself, they may be linked in ways that make the overall result complex and surprising. Many thousands of rules can be incorporated, as opposed to the small number that are in diagrams and equations that humans use.
But, we pay a price for this, because the code that results from this evolutionary process is so different from conventional code that programmers find it impossible to follow. The complexity is beyond human capability to design or debug. This is something unprecedented in the scientific era of invention. While the layperson may use technology without understanding it, we at least expect the professional to have complete knowledge of how their designs function. No longer. Our most complex technologies are no longer designed and built. They are grown and trained, and have to be studied as we now study nature: by probing and experimenting.
The two trends of falling transistor costs and increasing numbers of components on a chip combine to drive Moore’s Law. This can be expressed either as:
A fixed amount of money buys twice the computing power every 18-24 months. Or:
A fixed amount of computing power will cost half as much every 18-24 months.
The first tells us that, as calculations beyond the capabilities of today’s computers become possible with tomorrow’s, we can simulate things which once could only be investigated with practical experiments. For instance, we now have enough computational power to model turbulence and airflow, and as a result organizations like NASA Ames rely far less on wind-tunnels to test out aerodynamic designs. Airplanes can be reliably tested as virtual models. Also, we have sophisticated models of metabolic, immune, nervous and circulatory systems as well as structural models of the heart and other muscles. As computing power increases (along with our understanding of biology) these models will increase in fidelity and eventually we will see the integration of all the subsystem models into one systematic model. The era of animal experiments will then be made obsolete by the era of the biologically-accurate virtual human.
The second way of expressing Moore’s Law tells us that today’s expensive, high-end computer is tomorrow’s cheap item. Power that only $2000 could buy will, in ten years’ time, be available for $31.25, and the cost keeps falling. A time comes when computing power that was once available only on the most expensive desktops effectively disappears into cell-phones, credit cards, all manner of things. What is more, nobody notices it much, simply because it has become so unremarkable by now.
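The $2000-to-$31.25 arithmetic works out if we assume a 20-month halving period, which sits inside the 18-24 month range quoted above. A minimal sketch under that assumption:

```python
def cost_after(initial_cost, years, halving_months=20):
    # Number of halvings that fit into the elapsed time.
    halvings = (years * 12) / halving_months
    return initial_cost / (2 ** halvings)

# Ten years at one halving per 20 months is six halvings: 2000 / 64.
print(cost_after(2000, 10))  # -> 31.25
```

With a 24-month halving period the same $2000 of power would instead cost $62.50 after ten years; the qualitative point (relentless, compounding decline) is unchanged.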
But, think about what it means. As the trend progresses, it becomes possible to integrate more and more computers and sensors into more and more objects. Also, the first way of expressing Moore’s Law tells us that, over time, sensors will be able to capture new kinds of data, more accurately, using less space and energy. We have already deployed tens of thousands of sensors of many types, gathering information ranging from the location of vehicles, to potential chemical or radioactive leaks, to migratory patterns of animals, and even a person’s emotional state. Thanks to wireless networks and telecommunication, the newly-sensed data is available anywhere in realtime, and can be combined in almost limitless ways to produce new products and services.
Information has become codified and information technology modularized, and because of this we can install upgrades and add-ons and plug-ins much more quickly. Just think how much time and expertise it takes to add a turbo-charger to a car, versus adding a new graphics card to your computer. As for software upgrades, they can happen automatically, without you needing to be aware it is happening. This all allows innovation to spread faster than ever before. Imagine saying the following to someone in 1990:
“So, I was surfing the Web and found this blog that said SL had more accounts registered than WOW, but a wiki I found by Googling proved otherwise, and I twittered the facts before updating my Facebook page”.
They would not have a clue what you were talking about. The Web, Google, Second Life, Wikis, Twitter and Facebook, none of it existed in 1990. Hell, quite a few did not even exist in 2000! It really is hard to believe that something like Google is such a recent addition to our lives. In relatively short order, our business and social lives have been restructured to be totally dependent on powerful computation and telecommunication networks.
But, as any network becomes more intensely connected, it starts to become ‘nonlinear’. Small changes can lead to large effects. Because a signal created in any market, society or system can travel further than ever before, our world is one of increasing volatility.
When we look at biological systems and how they are governed, we find no central control but rather bottom-up governance. This is just as true for economies. Even pirates, the quintessential mob of lawless cutthroats, did not really operate without a rule of law. Economist Peter Leeson showed how a pirate code “emerged from piratical interactions and information-sharing, not from a pirate king who centrally designed and imposed a common code on all current and future sea bandits”. Economies, then, are the result of bottom-up self-organized behaviour that naturally arises from interactions between agents, as opposed to top-down bureaucratic design.
Of course, systems of top-down bureaucratic design do exist. Since the industrial revolution, models of management and organization inside most businesses treasured stability and control. Christopher Meyer and Stan Davis identified several reasons why top-down systems were favoured.
First of all, industries that built the first large organizations were, more often than not, harnessing energies in new ways. Ensuring a boiler did not explode, a train did not crash and molten steel did not spill out of leaky containers were life-and-death issues. Risks had to be controlled, and the knowledge to do that resided in the heads of an executive elite.
Organizations therefore prized certainty over uncertainty. The industrial-management priority was to reduce unit cost on the assumption of stable demand. The leadership style favoured in the 20th century consisted of a hierarchy headed by one individual who could get his head around the whole thing. Any CEO not capable of predicting the future of the company, who could not discuss every aspect of the organization and who frequently changed strategies, was not the sort that inspired confidence.
Such organizations are likely to grow increasingly incapable of functioning as the 21st century progresses. The complexity of our networks is such that we cannot anticipate when some newly made connection will cause an instability, and the speed of innovation plus the “code is code” principle of IT allows a competitor to arise from anywhere and quickly spread to become a threat, even to seemingly unrelated organizations. Forget assumptions of stability. Managing volatility and its costs is now the imperative, and that needs a bottom-up, adaptive approach. As Kevin Kelly observed:
“We find you don’t need experts to control things. Dumb people, who are not as smart in politics, can govern things better if all the lower rungs, the bottom, is connected together. And so the real change that’s happening is that we’re bringing technology to connect lots of dumb things together”.
If you want bottom-up governance to arise from interactions among individuals, it makes sense to look at the oldest societies on the planet. No wonder, then, that insects like ants and bees have been studied for insights into how a highly organized whole might emerge from the numerous activities of parts, each with its own agenda.
Experiments have shown that if corpses are distributed randomly, live ants will gather up their dead fellows and organize them into neat little piles. Corpses are picked up and dropped as a function of the density of corpses ants detect in their neighbourhood. The greater the pile of dead ants is, the more likely it is that a live ant will drop a dead ant on that pile, and the less likely it will be to remove one.
This kind of behaviour, along with other activities such as the way ants sort their larvae, has led to methods for exploring large databases. Software agents following simple rules like those governing ant behaviour forage among customer profiles with attributes such as age, gender, marital status, what services they have used, and so on. These profiles become organized into clusters.
Whereas more conventional methods of sorting data into clusters usually assume a predefined number of groups into which the data are then fitted, the number of clusters in the ant-based approach emerges automatically from the data. Because of this, antlike sorting does a better job of discovering interesting commonalities that might well have remained hidden from the more conventional approach.
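The pick-up and drop rules described above can be sketched in a few lines. This follows Deneubourg’s classic model of corpse clustering; the constants and density values are illustrative assumptions, not figures from any particular data-mining product.

```python
# A minimal sketch of the pick-up/drop rules behind ant-based clustering.
# K1 and K2 are assumed sensitivity constants, chosen for illustration.

K1, K2 = 0.1, 0.15

def pick_up_probability(local_density):
    """An unladen agent is likely to pick up an item from a sparse area."""
    return (K1 / (K1 + local_density)) ** 2

def drop_probability(local_density):
    """A laden agent is likely to drop its item in a dense area."""
    return (local_density / (K2 + local_density)) ** 2

# Items in sparse neighbourhoods get picked up, and dense piles attract
# drops, so clusters grow without anyone prescribing how many there are.
print(pick_up_probability(0.05) > pick_up_probability(0.9))  # True
print(drop_probability(0.9) > drop_probability(0.05))        # True
```

Because the rules respond only to local density, the number and size of the piles emerge from the data itself, which is exactly the property the ant-based clustering methods exploit.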
Other ways in which social insect behaviour has found its way into technology include methods for rerouting traffic on busy communications networks (based on the way ants forage for food) and better ways of organizing assembly lines in factories (based on the division of labour among bees). Self-organization has several benefits over top-down direction. These are:
FLEXIBILITY: The group can adapt more quickly to change than would be the case if there were top-down chains of command from which one must get approval before taking action.
ROBUSTNESS: The group can still perform its tasks, even when one or more individuals fail. Relatively little supervision is needed, and problems too complex for centralized control can be addressed.
EXPLORATION: Self-organized groups continuously test radically different approaches and many possible solutions. Because of this, whenever a current method fails, the group will have a pool of alternatives already at hand, which can become handy backup plans.
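The ant-foraging approach to network routing mentioned a moment ago rests on the same probabilistic logic. Below is a minimal sketch; the link names, pheromone values and evaporation rate are invented for illustration, and real ant-routing schemes are considerably more elaborate.

```python
import random

# Trail strength per outgoing link (invented starting values).
pheromone = {"link_a": 2.0, "link_b": 0.5}
EVAPORATION = 0.5  # old trails fade each round, so exploration continues

def choose_link():
    """Pick a next hop with probability proportional to trail strength."""
    links = list(pheromone)
    weights = [pheromone[link] for link in links]
    return random.choices(links, weights=weights)[0]

def reinforce(successful_link, deposit=1.0):
    """Evaporate every trail a little, then strengthen the one that worked."""
    for link in pheromone:
        pheromone[link] *= EVAPORATION
    pheromone[successful_link] += deposit

# Packets that succeed via link_a make link_a ever more attractive, while
# evaporation ensures a congested route can lose its dominance over time.
reinforce("link_a")
print(pheromone["link_a"])  # 2.0 (2.0 halved by evaporation, plus 1.0)
```

Note how the three benefits above fall out of two tiny rules: evaporation keeps alternatives alive (exploration), probabilistic choice needs no central approval (flexibility), and losing any one link merely shifts the weights (robustness).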
In order to get the best out of systems of self-organized behaviour, leaders will need to establish guidelines and constraints that govern independent actions. Corporate behaviour will need to be unpacked into the rules that drive the choices individuals make, in order to affect the larger structures that emerge. It will therefore be a priority to create the connected capabilities that enable cooperation and autonomous action.
Davis and Meyer came up with principles that they reckon will be fundamental to corporate behaviour in the years ahead. These are “Seed, Select, Amplify”, “Sense and Respond” and “Learn and Adapt”.
Because almost any kind of business can now experiment in simulation, the cost of testing ideas has fallen dramatically. A “seed” is an idea, something with potential value. A company should seek to produce as many seeds as it can afford, and test them for potential value across diverse economic opportunities. There must, of course, be a way to weed out inferior ideas, freeing up resources to spread the good ones (“select” and “amplify”), and that is where “Sense and Respond” comes in.
As we have seen, the increasing capability and falling cost of networked sensors and microprocessors are approaching a point where we can imbue products with qualities vital to anything alive: the ability to sense environmental changes and respond appropriately to them. Information from every stage in the consumer-item relationship will one day be gathered by software that can learn from what it senses, adapting the system to make it better. Davis and Meyer wrote:
“As our enterprises become chiefly composed of coded messages connecting human and software agents, the concepts of evolution become more central to their behaviour. Information will move to a framework of biology, as autonomous software and increasing connectivity make networked systems behave as if alive”.
Indeed, this shift from a top-down, clockwork-like view of technology to a bottom-up, organic one can clearly be seen in terms such as “artificial evolution”, “genetic algorithms”, “emergence”, “viral marketing”, “neural networks” and other kinds of software challenges related to growing complex systems. As we strive to create technologies and organizations that adapt continually and rapidly in order to keep pace with shifts in their market, and as technology is organized into networks of systems that sense, respond, and configure themselves in appropriate ways, will “it’s alive” become more than mere metaphor?
THE THIRD DIGITAL REVOLUTION.
Much of what has been discussed so far can be attributed to two digital revolutions. In the 1940s, telephone calls got worse with distance, a problem Claude Shannon fixed by digitizing communication. Von Neumann did likewise for computers. So, what is the third digital revolution?
We have seen that technological evolution is fundamentally driven by the capturing of natural phenomena, and that this leads to the development of instruments that allow for increasingly fine observations and manipulations of those phenomena. Chemists would welcome instruments that enabled them to measure and modify molecules so that they might study their structures, behaviours and interactions. Materials scientists would welcome technology that allowed them to be more systematic and thorough, building new materials according to plan and allowing one laboratory to make more new materials in a day than all of today’s materials scientists put together. Biologists would welcome technology that let them map cells completely and study their interactions in detail.
Can we identify a technological capability that would fulfil all of the wishes listed above? Indeed we can. If you follow Moore’s Law into the future, you can see that computers will one day have switches that are molecular in size and connected in complex three-dimensional patterns. Bill DeGrado, a protein chemist at Du Pont, said:
“People have worked for years making things smaller and smaller until we’re approaching molecular dimensions. At that point, one can’t make smaller things, except by starting with molecules and building them up into assemblies”.
Computers contain fast-cycling parts that make complex patterns from the building blocks of information (binary digits), and the third digital revolution will result from machines that contain fast-cycling parts that can make complex patterns from the building blocks of matter (the elements of the periodic table).
This is in marked contrast to most manufacturing today, which is almost entirely analogue in nature. We take a batch of materials and chop them, bake them, melt them and mix them in order to arrive at a final product. It takes a lot of raw material to make something like a car, which is why quarries leave such scars on the landscape. Natural things, on the other hand, get built in a fundamentally different way: tiny biological machines called ribosomes build things under digital control.
No wonder, then, that one of the main contenders for the post-silicon paradigm is molecular electronics, which promises to dramatically extend the power of IT. To get an idea of how powerful molecular computing could be, consider that a single drop of water contains more molecules than all transistors ever built. This is partly because the molecules are tiny, even by the standards of modern computer components, but mostly because they are organized into three-dimensional patterns. Every IC, in contrast, is a thin two-dimensional layer of computation on a thick and inert substrate.
Now consider that the company ZettaCore, which builds molecular memories from porphyrin molecules that self-assemble on exposed silicon, has already demonstrated up to eight stable digital states per molecule.
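To put those figures in perspective, a quick back-of-the-envelope calculation. Eight states per molecule is the figure quoted above; the drop size and molecule count are rough, assumed values for scale only.

```python
import math

# Eight distinguishable states per molecule means log2(8) = 3 bits each,
# versus one bit for a conventional binary memory cell.
states_per_molecule = 8
bits_per_molecule = math.log2(states_per_molecule)
print(bits_per_molecule)  # 3.0

# A small water drop of ~0.05 g, at 18 g/mol, times Avogadro's number
# (the drop size is an assumed figure, for order-of-magnitude scale only).
molecules_in_drop = 0.05 / 18 * 6.022e23
print(round(molecules_in_drop / 1e21, 1))  # 1.7 (i.e. ~1.7 sextillion)
```

Even this crude estimate shows why the comparison with “all transistors ever built” is plausible: a single drop holds on the order of 10²¹ potential storage sites.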
In short, molecular electronics might one day deliver sugar-cube-sized devices with more processing power than all of today’s computers combined, or supercomputers the size of viruses. That, however, is not the most exciting prospect of molecular manufacturing. As Neil Gershenfeld said:
“It’s not when a program describes a thing, it’s when a program becomes a thing that we bring the programmability of the digital world to the physical world”.
So, what we are talking about here is the reimplementation, in engineered materials, of molecular biology’s ability to code its own construction. We are looking to create the ability to take a description of an object and have it self-assemble from molecular or atomic building blocks. We are talking about machines with the capability to self-repair, and even self-replicate. But what about machines with cognition?
Electrical engineers who wish to reverse-engineer a rival’s computer chip do so by placing precise sensors at specific points in the circuitry. This enables them to follow the actual information being transformed in real time, thereby creating a detailed description of how the circuits actually work.
Molecular nanotechnology would enable us to do the same thing with living brains, and indeed we are already using increasingly accurate and refined sensor technologies (along with ever more powerful computers) to do just that. Techniques range from marking specific cells in lab animals by genetically engineering the organisms to incorporate fluorescent proteins found in marine species, which lets researchers monitor biochemical reactions and track the movements of cellular proteins in real time, to diffusion tensor imaging, which infers the location of nerve fibers by tracking water molecules in the brain as they move along them.
In other words, our technologies are now beginning to sense the natural phenomena responsible for the design and function of the human brain. Neuroscientist Lloyd Watts pointed out that:
“Scientific advances are enabled by a technology advance that allows us to see what we have not been able to see before…we collectively know enough about our own brains, and have developed such advanced computing technology, that we can now seriously undertake the construction of a verifiable, real-time, high-resolution model of significant parts of our intelligence”.
The Blue Brain project, run on IBM’s Blue Gene supercomputer, has so far succeeded in reverse-engineering the cortical column, which can be thought of as the basic microprocessor, the building block from which the neocortex is made. Many other areas of R&D in the bio, neuro and computer sciences are also following paths towards the complete reverse-engineering of the human brain, for reasons that range from treating neurological disorders, to eliminating the need for animal experiments, to nothing more than the urge to understand how the mind works.
But, if it is indeed true that the brain IS the mind, and it becomes possible to build brains or, rather, to have machines which can self-assemble a body and a brain to control it, do we then have machines who think? Who feel? Who create? Would this lead to technology conceiving of, designing and manufacturing its own next generation? If so, how might the human/machine relationship be affected? In pondering questions like these, Carl Zimmer came up with the following:
“The Web is encircling the Earth, subsuming not only computers but cars and cash registers and televisions. We have surrounded ourselves with a global brain, which taps into our own brains, an intellectual forest dependent on a hidden fungal network”.
FROM A DISTANCE…
Whether something appears to be a single entity or a collection of individuals often depends on your perspective. On the molecular scale, a single cell is a collection of chemicals and molecules. On the macro scale, a vast number of different cell types appears as a single animal. Maybe, seen from the perspective of space, our planet seems less like a vast number of people and technologies and more like a single entity slowly teasing apart the laws of nature? It is as if, in at least one tiny corner of the cosmos, the Universe is organizing itself into patterns of matter/energy that pursue the directive Temet Nosce: ‘Know Thyself’.