IT’S ALIVE! THE THEORY AND CONSEQUENCES OF TECHNOLOGICAL EVOLUTION.

Welcome to this year’s Thinkers lecture!
The theme this year is “It’s Alive: The Theory And Consequences Of Technological Evolution”. Those of you who attended last year, and who have excellent memories, may recall that it was about the possible consequences of an evolutionary arms-race between search engine providers.
NATURAL SELECTION IS THE WRONG KIND OF THEORY.
So, I thought this year I would talk about the evolution of technology in general. The search for a workable theory of technological evolution is almost as old as the theory of natural selection itself, and the belief that inanimate matter can have lifelike abilities attributed to it is as old as mythology. Think of the golem of Hebrew legend, or Poseidon the Greek god of the sea. Just four years after Darwin had published ‘On The Origin Of Species’, Samuel Butler was calling for a theory of evolution for machines. Most attempts at such a theory have tried to frame it in terms of the steady accumulation of changes, recognisable as Darwinian.
The idea goes something like this. Any given technology has many different designers with different ideas, some of which are selected for their superior performance while others are rejected. This results in a steady accumulation of repeated practices and learned skills. Imagine an ancestor crossing a body of water by riding on a log. Other people copy this idea and over time make adjustments that add to the stability, manoeuvrability and buoyancy of the vessel. Or, their ideas adversely affect the usefulness of the craft and therefore do not survive. The sociologist S. Colum Gilfillan traced in detail how the inventions of planking, ribbing, keels, lateen sails and square sails came about, thereby demonstrating a detailed line of descent that takes us from the dugout canoe to the sailing ship and, finally, to the modern steamship of his day.
So there we go. Darwinian evolution can be applied to technology, right?
Wrong.
Natural selection has certain constraints which make it quite incapable of explaining certain technologies. The first constraint is that a new species can only be created through incremental steps. This is problematic, because no amount of steady, accumulated change could ever transform a piston engine into a jet engine, nor could a candle evolve into a light bulb. Another constraint is that every incremental step must produce something viable. Inventors, though, are quite capable of creating machines that have irreducible complexity, meaning an organization of parts, the loss of any one of which is fatal to the whole. They can do this because human inventors have some degree of foresight and can see how a component that has no use by itself may become useful when part of a complex organization of other parts. Nature, by contrast, is a ‘blind watchmaker’ with no capacity for foresight.
COMBINATORIAL EVOLUTION.
So where does that leave us in the search for an evolutionary theory for machines? It certainly does not mean there is no such thing, only that Darwinian selection is not always applicable. How, then, can we explain the appearance of radically new technologies that cannot have come about through the steady accumulation of changes to existing technologies?
Judged from outward appearances, some machines seem quite unrelated to anything else. But then, the same thing could be said of life. From outward appearances, one would see no relation between a wolf and a dolphin, or a bat and a mushroom. It was only through studies of animal anatomy, comparisons of the skeletal structures of fossilized remains, and how embryos develop that a common ancestry was suspected and later, with the ability to map and compare genes, confirmed. Similarly, we have to consider technology in its entirety (not just physical inventions but all processes, devices, modules, methods and algorithms that ever existed) in order to see a kind of evolution at work.
When we do that, we discover that the cartoon cliché of some light bulb flashing over a genius’s head as a great idea comes from nowhere is quite wrong. The history of technology is by no means one of more-or-less independent discoveries. This is because any new technology can only come about by using components and methods that already exist. A jet engine, for example, is created by combining compressors, turbines and combustion systems, all of which pre-existed in other technologies.
W. Brian Arthur, who is a professor at the Santa Fe Institute, calls this ‘Combinatorial Evolution’. Any and all novel technologies are created through combinations of existing technologies. Some combinations prove useful, and so they persist and spread around the world, becoming potential building blocks for further technologies. Or, a time may come when they are clearly surpassed by other technologies and so they go extinct. Also, there are many possible combinations that make little sense, and they too become nothing.
Combinatorial evolution is observed in the natural world from time to time. The most obvious example would be multicellular organisms, which evolved from combinations of single-celled organisms. Clearly, though, evolution through variation and selection is the norm in the natural world. We see the opposite in the technological world. Here, combinations are the norm and while Darwinian variation and selection also plays a role, it follows behind combinatorial evolution, working on structures already formed.
When technology and all its related practices are considered in their entirety, it becomes apparent that every invention stands upon a pyramid of others that made it possible. All future technologies will derive from those that exist today, because these are the building blocks that will be combined to make further technologies that will go on to become potential building blocks, and so on in an accumulation of useful combinations.
But what about the base of the pyramid? If the technologies of today came about through combinations of yesterday’s technology, where did those building blocks come from? Technologies that existed in even earlier times, of course. But this is starting to sound like a problem of infinite regress. Where does it all end?
WHERE IT ALL BEGINS.
The reason any technology ultimately works at all is that it successfully taps into some kind of natural phenomenon. The jet engine works thanks to the law that every action has an equal and opposite reaction. The light bulb exploits electromagnetism. Essentially, technology is a programming of nature. Tracing the evolutionary pyramid of technology therefore takes us back to the earliest natural phenomena that humans captured.
The earliest phenomenon that could be captured must have been detectable by unaided human senses, and understandable to an animal that almost certainly lacked the communicative abilities of Homo sapiens. It would have been lying around on the ground, essentially.
Some stones happen to have sharp edges, providing a handy cutting tool. A burning branch can set alight combustible material, providing fire for warmth, protection and, eventually, cooking.
Combinations began to appear, and this led to the discovery of more natural phenomena. Fire can reach high enough temperatures to allow the smelting of metals, a discovery that led to iron axe-heads and blades. Further on, as the number of potential building blocks accumulated, and knowledge of different kinds of natural phenomena increased and expanded, clusters of technology and crafts of practice began to emerge.
Levers, ropes and toothed gears were combined to make early machines for milling, irrigation and building construction. Agricultural societies built up stores of produce in earthenware pots. Ownership of the contents (not to mention what the pots contained in the first place) had to be recorded somehow, and so writing was invented.
In time, instruments specifically designed for precise observation were invented, along with scientific methods of deduction. This led to chemical, optical, thermodynamic and electrical phenomena becoming understood and exploited. Ever-more precise instruments uncovered still finer phenomena, such as X-radiation and coherent light, and that led to vast arrays of laser optics, radio transmissions and logic circuit elements combined in various ways, which brings us up to modern times.
So, the history of technology is an evolutionary story of related devices, methods, and exploitations of natural phenomena. It results from people taking what is known at the time, plus a modicum of inspiration, and then combining bits and pieces that already exist in just the right way in order to link some need with some effect or effects that can fulfil it.
But, in what sense can technology be said to be alive? As yet, no formal definition of life exists. The best we can do is judge whether or not any system meets certain criteria, namely reproduction, growth, and response and adaptation to the environment. Technology does indeed meet all these requirements, but of course, so far it has required human agency for its buildout and reproduction. It is therefore alive, but only in the sense that a coral reef (built from the activities of small organisms) can be said to be a living thing.
THE TECHNOLOGY TRAP.
Technological evolution seems to go hand in hand with the accumulation and refinement of knowledge. There is a tremendous amount that we now know, things our ancestors never suspected could exist, simply because they lacked the methods and the equipment needed to discover natural phenomena like those of quantum physics.
When I say “we” I mean the human race as a collective. But when we consider people as individuals we find a great deal of ignorance concerning the fine details of how modern societies function. Simply put, we all use technologies as if we know them, while actually lacking anything approaching a thorough understanding.
This is nobody’s fault; there is no other way to be. One reason is that even a relatively small set of building blocks can be combined in a great many ways. If all you had were 40 building blocks, the number of possible non-empty combinations would be 1,099,511,627,775. Possible combinations scale exponentially (as 2 to the power of N). Of course, not all combinations result in workable possibilities, far from it. But even if the chances are one in a million that some combination of technologies will result in a novel technology that then becomes a building block itself, once you pass a certain threshold the number of possible combinations climbs very rapidly indeed.
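To make that arithmetic concrete, here is a minimal sketch (my own illustration in Python, not part of the original lecture) of how the count of possible non-empty combinations scales with the number of building blocks:

```python
# How the number of possible combinations grows with the number of building blocks.
# Counting every non-empty subset of N blocks gives 2**N - 1 combinations.

def possible_combinations(n_blocks: int) -> int:
    """Number of non-empty combinations that can be formed from n_blocks building blocks."""
    return 2 ** n_blocks - 1

for n in (10, 20, 30, 40, 50):
    print(f"{n:>2} building blocks -> {possible_combinations(n):,} possible combinations")
# 40 building blocks -> 1,099,511,627,775 possible combinations

# Even at a one-in-a-million hit rate, the pool of candidate new technologies is enormous.
print(f"workable at 1-in-a-million: ~{possible_combinations(40) // 1_000_000:,}")
```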
The accumulation of technologies and related methods therefore quickly grows beyond the point where the individual can hope to wrap his or her head around everything. And it is not just the amount of technologies that is the problem, but the fact that technologies have had to grow more complex.
This, too, was no accident but an inevitable outcome of certain snowballing effects. Suppose our nomadic ancestors were fortunate enough to discover rich and fertile land, and sophisticated enough to develop technologies to exploit such resources, meeting the tribe’s basic needs. Doing so would have led to prosperity, which in turn would have led to population numbers rising.
But, that would have put more strain on the land’s ability to provide for the tribe, and technology would have had to become more sophisticated in order to ensure people’s basic needs continued to be met. Once the sophistication of technologies and the number of skills required to maintain a society reached a certain level, it would have begun to make sense for an individual to specialise in a few professions, rather than being a generalist who could meet all of his or her family’s needs. The reason why specialization makes sense is that a person who does nothing but work with stone all day becomes a far better stonemason or builder than someone whose mind and hands must do many things (polymaths who excel in many disciplines are few and far between).
But, then, being an expert in brickwork does not, in and of itself, put any bread in your basket, nor does being an expert baker produce ovens. If a society of specialists is to function at all, there needs to be a way of organizing them so that, collectively, they meet the needs of the society. This requires managers, and systems of government that can coordinate actions and delegate responsibility. This, in turn, requires channels of communication, and the ever-growing stockpiles of building blocks manufactured in dedicated places must be transported. It is no wonder that the history of economic development is, to a significant degree, the history of transport and communication. Each advance in transport and communication reduces the economic costs of recombination, thereby making innovation ever less expensive. The need for networks of efficient communication channels eventually put pressure on the discovery of electromagnetic phenomena and of the ways they could be used to send and receive digitized information.
Once communication becomes digitized, all objects in the digital domain become data strings that can be acted on in the same way. They do not even need to share the same physical space for combinations to happen, since telecommunication can combine digital elements remotely. Thus does technology become a system, a network of functionalities.
Once tribes have evolved into gigantic societies engaged in economic activities that span an entire planet, each city has become a technology island, totally dependent on networks of services bringing supplies from outside. Without such imports, a city like London or Tokyo could neither feed, nor house, nor clothe the inhabitants that dwell within it. The pace of life in a city is set by the pace of the technology that serves it, and you have to hope that pace can keep up with demand, because if it cannot, the city and the millions who dwell within it will die.
In that sense, our comfortable urban lives are technology traps. We are completely dependent upon technologies to maintain our way of life, while at the same time taking it all for granted. Who ever concerns themselves with how, exactly, pressing a switch results in a light coming on, or a microwaved dinner being cooked, or any other number of convenient operations we have at our fingertips? But, what happens when pressing that button does not achieve the desired effect? What then? Phone for a specialist who can fix the problem? In other words, we reach for more technology. And, why not? After all, the thought that technology will not come to the rescue is just unthinkable, right?
Obviously, it is just as true to say that we depend on technology for our survival, as it is to say it depends upon us for its reproduction. However, the extent to which technology depends upon people for its evolution may well become less and less as time goes by.
“CODE IS CODE”.
This leads to two questions. Can we do this? Should we do this? The first question asks whether or not we can indeed automate aspects of evolution, freeing technology to sense, diagnose and fix problems by itself, or even go as far as conceiving, designing, and building the next generation of technologies autonomously. Well, the natural world seems to run all by itself without human help. And now we have technologies like DNA sequencing, gene chips, and bioinformatics that enable us to discern life’s processes at their fundamental level. Also, our computers have reached the point where they are powerful enough to recreate many of these processes in silico, and we are starting to see how they work in inorganic settings.
Richard Dawkins once observed, “genetics today is pure information technology. This, precisely, is why an antifreeze gene can be copied from an arctic fish and pasted into a tomato”. Because life is fundamentally an information technology (in other words, something that operates on the basis of coded instructions) it can be translated into languages understandable to computers, which operate according to the same principles. Another point to consider is that evolution is basically an algorithm, a repeatable procedure, and our understanding of that algorithm has revealed core principles that apply to any adaptive system. They are:
RECOMBINATION: The driving force of innovation, not to mention of technological evolution itself.
AGENTS: Decision-making units, following rules that determine their choices. Cells are biological agents following chemical rules, trading programs are agents following rules in a financial market.
SELF-ORGANIZATION: The ability to organize autonomously to create something more complex. As we have already seen, the only way to have something more complex than a single agent, is by connecting multiple agents.
SELECTIVE PRESSURE: If the process is to generally move towards improvement, there has to be some method for determining which agents get the opportunity to recombine in the next generation. Where technology is concerned, the market provides that selective pressure. Our purchasing decisions ultimately decide what has survivability, and what does not.
ADAPTATION: Fitness landscapes are not static. They change shape with each round of evolution, as new competitors emerge forcing rivals to adapt to the new threat.
CO-EVOLUTION: Because rivals must continually adapt to one another, they affect the evolution of industry (or the ecology) as a whole.
EMERGENCE: Certain behaviours that the system itself exhibits are not to be found in any of the parts. Instead, complex global phenomena arise from simple local processes and local interaction rules. No central control over the entire thing is necessary. Indeed, such top-down governance would screw things up.
Because these core principles apply to any adaptive system, we can swap between domains and see nature in terms of mechanism; the economy as an organism. Biology, information, technology and business are likely to converge on general evolution, as the theories which drive biology are adapted in the way we manage our enterprises. Christopher Meyer and Stan Davis, who both work at the Centre For Business Innovation, coined the phrase “Code Is Code”, explaining:
“you can translate biology into information, and information into biology because both operate on the basis of coded instructions, and those codes are translatable. When you get down to it, code is simply code”.
OK, so we can do it. The second question asked, “why should we do it?” Why should we pursue what Kevin Kelly has called “out-of-controllness”?
Geological evidence, plus knowledge of the constraints natural selection must work under, tells us there is an order in the sequence of body designs. Everything with limbs evolved after everything with a body. In turn, ’everything with a body’ was a class that  came after single-celled organisms. The sequence had to follow this order, because the forms of one stage made the next possible. But, what is more important, is that the precursor stages also drove the evolutionary process by changing the fitness landscape.
This was demonstrated in an experiment by Martin Boraas. He and his colleagues bred a single-celled alga in the lab. They then introduced a predator, another single-celled creature capable of ingesting other microbes. Within 200 generations, the alga evolved a way around the threat, which was to clump together into clusters consisting of eight cells each. In other words, the introduction of a predator encouraged the evolution of a simple body plan. You can imagine single-celled life evolving from a primordial soup. Once the freely-available supplies were nearing exhaustion thanks to increasing numbers of replicating microbes, that would have put a selective pressure on kick-starting a food chain. That, in turn, would have encouraged the evolution of increasingly complex bodies for defense and attack.
What has this got to do with technology? Well, we see a similar thing, in that potential building blocks do more than just make the next stage possible. They also drive the evolutionary process because of the changes they make, both directly and indirectly, to human life. Technologies solve problems, but they also introduce new ones. James Burke put it like this:
“An invention acts rather like a trigger. Because, once it’s there it changes the way things are, and that change stimulates the production of another invention, which in turn causes more change and so on”.
So, technology that depended on a scientific understanding, and on the systematized use of natural phenomena, was both made possible and made necessary by the opportunities and challenges that previous generations of inventions helped bring about.
EVOLUTION IN SILICON.
This is just as true today as it ever was, but something is new. Our most complex technologies have reached a point where we must look to the principles of nature to make or govern them. There is a kind of myth that has grown up around computers, one whose origin can be traced to a comment made by Ada Lovelace, the first programmer. She said:
“The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform”.
Most people recognise this in the form expressed by IBM’s marketing department:
“Computers can only do what their programs specify”.
If they have no pretensions to originate anything, there can never be such a thing as artificial intelligence, because it is always the programmers themselves (never the programs) that are intelligent. But is it true?
The way computers have been designed has, itself, evolved over time. This evolution was enabled and made necessary by certain developments triggered by previous rounds of computing technology.
Since the 1960s, the number of components on a chip has risen by a factor of a million, while the manufacturing cost per transistor has fallen to mere pennies. This has made computer circuitry designed step by step by human draughtsmen an uneconomical prospect. As nanotech pioneer Eric Drexler explained:
“If a million transistor design has an expected market of 100,000 units, then every dollar of design cost per transistor adds tens of dollars to the price of each chip. Yet, a dollar can’t buy much time from a human design team”.
This led to the invention of ‘silicon compilers’, software systems capable of producing a chip, ready to manufacture, with very little human help beyond specifying the chip’s function. Having gained a foothold, silicon compilers gradually improved, and the result is that human programmers have had to work at increasing levels of abstraction, entrusting ever-larger details to computer search and pre-programmed expertise. Programs consisting of hundreds of instructions might be completely understood by their designers, but modern software is built from millions of instructions and we have learned to expect the unexpected.
Since the 1980s, another automated design process (one with obvious parallels to the natural world) has emerged, known as the ‘evolutionary algorithm’ (EA). An EA takes two parent designs and blends components of each to produce multiple offspring. Each offspring combines the features of its parents in different ways. The EA then selects which offspring it considers worth re-breeding, and they are replicated with occasional mutations (which involves changing a 1 to a 0 here, a 0 to a 1 there), resulting in variation in the next generation. The fittest of that generation is then selected, and so it goes on. As the random mutations and non-random selection process continues for thousands of generations, useful features accumulate in the same design.
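A minimal sketch of the process just described (my own illustration, not code from Koza or any particular EA library): candidate designs are bit strings, offspring blend the bits of two parents, occasional mutations flip a bit here and there, and the fittest designs are bred again. The fitness function is a deliberately trivial stand-in for whatever evaluation a real design problem would require.

```python
import random

TARGET = [1] * 32  # stand-in goal; a real EA would simulate or test each candidate design

def fitness(design):
    """Toy fitness: how many bits match the target."""
    return sum(1 for a, b in zip(design, TARGET) if a == b)

def crossover(parent_a, parent_b):
    """Blend components of two parent designs: each bit is inherited from one parent at random."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(design, rate=0.01):
    """Occasional mutations: flip a 1 to a 0 here, a 0 to a 1 there."""
    return [bit ^ 1 if random.random() < rate else bit for bit in design]

# Start with a population of random designs.
population = [[random.randint(0, 1) for _ in range(32)] for _ in range(20)]

for generation in range(200):
    # Non-random selection: the fittest half become parents for the next generation.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    offspring = [mutate(crossover(*random.sample(parents, 2))) for _ in range(10)]
    population = parents + offspring

print("best fitness after 200 generations:", max(fitness(d) for d in population))
```

In real applications the evaluation step is a circuit simulation, an antenna model, or whatever the design problem demands, and that evaluation is usually the expensive part, which is why the technique needs fast machines.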
Whereas a human designer does not have time to test all possible combinations, an EA comes closer to doing so, thanks to the sheer speed with which computers can explore mathematical space. However, so far their use has been limited to a few niche applications, because the need to breed and evaluate thousands of generations requires ultra-fast computers. But now, thanks to the emergence of multicore chips that make it easy to divide tasks between cores (something that suits EAs), that is changing. EA pioneer John Koza explained:
“We can now undertake evolutionary problems that were once too complicated or time-consuming. Things we couldn’t do in the past, because it would have taken two months to run the genetic program, are now possible in days or less”.
Systems such as these can employ vast quantities of data, more than enough to be overwhelming to humans. While each procedure may be simple by itself, they may be linked in ways that make the overall result complex and surprising. Many thousands of rules can be incorporated, as opposed to the small number that are in diagrams and equations that humans use.
But, we pay a price for this, because the code that results from this evolutionary process is so different from conventional code that programmers find it impossible to follow. The complexity  is beyond human capability to design or debug. This is something unprecedented in the scientific era of invention. While the layperson may use technology without understanding it, we at least expect the professional to have complete knowledge of how their designs function. No longer. Our most complex technologies are no longer designed and built. They are grown and trained, and have to be studied as we now study nature: by probing and experimenting.
The two trends of the falling cost of transistors and the increasing number of components on a chip combine to drive Moore’s Law. This can be expressed in either of two ways:
A fixed amount of money buys twice the computing power every 18-24 months. Or:
A fixed amount of computing power will cost half as much every 18-24 months.
The first tells us that, as calculations beyond the capabilities of today’s computers become possible with tomorrow’s, we can simulate things which once could only be investigated with practical experiments. For instance, we now have enough computational power to model turbulence and airflow, and as a result organizations like NASA Ames no longer rely on wind-tunnels to test out aerodynamic designs. Airplanes can be reliably tested as virtual models. Also, we have sophisticated models of metabolic, immune, nervous and circulatory systems as well as structural models of the heart and other muscles. As computing power increases (along with our understanding of biology) these models will increase in fidelity and eventually we will see the integration of all the subsystem models into one systematic model. The era of animal experiments will then be made obsolete by the era of the biologically-accurate virtual human.
The second way of expressing Moore’s Law tells us that today’s expensive, high-end computer is tomorrow’s cheap item. Power that only $2000 could buy will, in ten years’ time, be available for $31.25, and the cost keeps falling. A time comes when computing power that was once available only on the most expensive desktops effectively disappears into cell-phones, credit cards, all manner of things. What is more, nobody notices it much, simply because it has become so unremarkable by now.
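The arithmetic behind that figure is easy to check. A small sketch (my own, assuming a halving period of 20 months, which sits within the 18-24 month range quoted above):

```python
# Moore's Law, second formulation: a fixed amount of computing power halves in price
# every halving period. With a 20-month period, ten years gives six halvings:
# $2000 / 2**6 = $31.25.

def price_after(initial_price: float, years: float, halving_months: float = 20) -> float:
    """Price of a fixed amount of computing power after a given number of years."""
    halvings = (years * 12) / halving_months
    return initial_price / (2 ** halvings)

for years in (0, 5, 10, 15, 20):
    print(f"after {years:>2} years: ${price_after(2000, years):,.2f}")
# after 10 years: $31.25
```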
But, think about what it means. As the trend progresses, it becomes possible to integrate more and more computers and sensors into more and more objects. Also, the first way of expressing Moore’s Law tells us that, over time, sensors will be able to capture new kinds of data, more accurately, using less space and energy. We have already deployed tens of thousands of sensors of many types, gathering information ranging from the location of vehicles, to potential chemical or radioactive leaks, to migratory patterns of animals, and even a person’s emotional state. Thanks to wireless networks and telecommunication, the newly-sensed data is available anywhere in realtime, and can be combined in almost limitless ways to produce new products and services.
Information has become codified and information technology modularized, and because of this we can install upgrades and add-ons and plug-ins much more quickly. Just think how much time and expertise it takes to add a turbo-charger to a car, versus adding a new graphics card to your computer. As for software upgrades, they can happen automatically, without you needing to be aware it is happening. This all allows innovation to spread faster than ever before. Imagine saying the following to someone in 1990:
“So, I was surfing the Web and found this blog that said SL had more accounts registered than WOW, but a wiki I found by Googling proved otherwise, and I twittered the facts before updating my Facebook page”.
They would not have a clue what you were talking about. The Web, Google, Second Life, Wikis, Twitter and Facebook, none of it existed in 1990. Hell, quite a few did not even exist in 2000! It really is hard to believe that something like Google is such a recent addition to our lives. In relatively short order, our business and social lives have been restructured to be totally dependent on powerful computation and telecommunication networks.
But, as any network becomes more intensely connected, it starts to become ‘nonlinear’. Small changes can lead to large effects. Because a signal created in any market, society or system can travel further than ever before, our world is one of increasing volatility.
BOTTOM-UP GOVERNANCE.
When we look at biological systems and how they are governed, we find no central control but rather bottom-up governance. This is just as true for economies. Even pirates, the quintessential mob of lawless cutthroats, did not really operate in the absence of any rule of law. Economist Peter Leeson showed how a pirate code “emerged from piratical interactions and information-sharing, not from a pirate king who centrally designed and imposed a common code on all current and future sea bandits”. Economies, then, are the result of bottom-up self-organized behaviour that naturally arises from interactions between agents, as opposed to top-down bureaucratic design.
Of course, systems of top-down bureaucratic design do exist. Since the industrial revolution, models of management and organization inside most businesses treasured stability and control. Christopher Meyer and Stan Davis identified several reasons why top-down systems were favoured.
First of all, industries that built the first large organizations were, more often than not, harnessing energies in new ways. Ensuring a boiler did not explode, a train did not crash and molten steel did not spill out of leaky containers were life-and-death issues. Risks had to be controlled, and the knowledge to do that resided in the heads of an executive elite.
Organizations therefore prized certainty over uncertainty. The industrial-management priority was to reduce unit cost on the assumption of stable demand. The leadership style favoured in the 20th century consisted of a hierarchy headed by one individual who could get his head around the whole thing. Any CEO not capable of predicting the future of the company, who could not discuss every aspect of the organization and who frequently changed strategies, was not the sort that inspired confidence.
Such organizations are likely to grow increasingly incapable of functioning as the 21st century progresses. The complexity of our networks is such that we cannot anticipate when some newly made connection will cause an instability, and the speed of innovation plus the “code is code” principle of IT allows a competitor to arise from anywhere and quickly spread to become a threat, even to seemingly unrelated organizations. Forget assumptions of stability. Volatility, and the cost of managing it, is now the imperative, and it demands a bottom-up, adaptive approach. As Kevin Kelly observed:
“We find you don’t need experts to control things. Dumb people, who are not as smart in politics, can govern things better if all the lower rungs, the bottom, is connected together. And so the real change that’s happening is that we’re bringing technology to connect lots of dumb things together”.
If you want bottom-up governance to arise from interactions among individuals, it makes sense to look at the oldest societies on the planet. No wonder, then, that insects like ants and bees have been studied for insights into how a highly organized whole might emerge from the numerous activities of parts, each with its own agenda.
Experiments have shown that if corpses are distributed randomly, live ants will gather up their dead fellows and organize them into neat little piles. Corpses are picked up and dropped as a function of the density of corpses ants detect in their neighbourhood. The greater the pile of dead ants is, the more likely it is that a live ant will drop a dead ant on that pile, and the less likely it will be to remove one.
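A minimal sketch of that rule in action (my own illustration, not a published model): agents wander a grid, pick up a corpse with a probability that falls as local corpse density rises, and drop one with a probability that rises with it. Piles emerge even though no ant has a plan.

```python
import random

SIZE = 40
# Scatter corpses at random: True marks a cell containing a dead ant.
corpses = [[random.random() < 0.1 for _ in range(SIZE)] for _ in range(SIZE)]

def local_density(x, y):
    """Fraction of the eight neighbouring cells that contain a corpse."""
    neighbours = [((x + dx) % SIZE, (y + dy) % SIZE)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    return sum(corpses[i][j] for i, j in neighbours) / len(neighbours)

ants = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE), "carrying": False}
        for _ in range(50)]

for step in range(200_000):
    ant = random.choice(ants)
    ant["x"] = (ant["x"] + random.choice((-1, 0, 1))) % SIZE   # random walk
    ant["y"] = (ant["y"] + random.choice((-1, 0, 1))) % SIZE
    x, y = ant["x"], ant["y"]
    density = local_density(x, y)
    if not ant["carrying"] and corpses[x][y] and random.random() < (1 - density):
        ant["carrying"], corpses[x][y] = True, False    # isolated corpses get picked up
    elif ant["carrying"] and not corpses[x][y] and random.random() < density + 0.02:
        ant["carrying"], corpses[x][y] = False, True    # bigger piles attract more drops

piled = sum(local_density(x, y) for x in range(SIZE) for y in range(SIZE) if corpses[x][y])
print("summed neighbour density of corpse cells (higher = more piled up):", round(piled, 1))
```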
This kind of behaviour, along with other activities such as the way ants sort their larvae, has led to methods for exploring large databases. Software agents with simple rules like those governing ant behaviour forage among customer profiles with attributes such as age, gender, marital status, what services they have used, and so on. These become organized into clusters.
Whereas more conventional methods of sorting data into clusters usually assume a predefined number of groups into which the data are then fitted, the number of clusters organized by the ant-based approach emerges automatically from the data. Because of this, antlike sorting does a better job of discovering interesting commonalities that might well have remained hidden from the more conventional approach.
Other ways in which social insect behaviour has found its way into technology can be seen in methods for rerouting traffic in busy communications networks (based on the way ants forage for food) and better ways of organizing assembly lines in factories (based on division of labour among bees). Self-organization has several benefits over top-down direction. These are:
FLEXIBILITY: The group can adapt more quickly to change than would be the case if there were top-down chains of command from which one must get approval before taking action.
ROBUSTNESS: The group can still perform its tasks, even when one or more individuals fail. Relatively little supervision is needed, and problems too complex for centralized control can be addressed.
EXPLORATION: Self-organized groups continuously test radically different approaches and many possible solutions. Because of this, whenever a current method fails, the group will have a pool of alternatives already at hand, which can become handy backup plans.
In order to get the best out of systems of self-organized behaviour, leaders will need to establish guidelines and constraints that govern independent actions. Corporate behaviour will need to be unpacked into rules that drive the choices individuals make, in order to affect the larger structures that emerge. Therefore, it will be a priority to create the connected capabilities that enable cooperation and autonomous action.
Davis and Meyer came up with principles that they reckon will be fundamental to corporate behaviour in the years ahead. These are “Seed, Select, Amplify”, “Sense and Respond” and “Learn and Adapt”.
Because almost any kind of business can now experiment using simulation, the cost of testing ideas has fallen dramatically. A “seed” is an idea, something with potential value. A company should seek to produce as many seeds as it can afford to, and test them for potential value across diverse economic opportunities. There must, of course, be a way to weed out inferior ideas, freeing up resources to spread good seeds (“select” and “amplify”), and that is where “Sense and Respond” comes in.
As we have seen, the increasing capability and falling cost of networked sensors and microprocessors is approaching a point where we can imbue products with qualities vital to anything alive. That is, the ability to sense environmental changes and respond appropriately to them. Information from every stage in the consumer-item relationship will one day be gathered by software that can learn from what it senses, adapting the system to make it better. Davis and Meyer wrote:
“As our enterprises become chiefly composed of coded messages connecting human and software agents, the concepts of evolution become more central to their behaviour. Information will move to a framework of biology, as autonomous software and increasing connectivity make networked systems behave as if alive”.
Indeed, this shift from a top-down, clockwork-like view of technology to a bottom-up, organic one can clearly be seen in terms such as “artificial evolution”, “genetic algorithms”, “emergence”, “viral marketing”, “neural networks” and other kinds of software challenges related to growing complex systems. As we strive to create technologies and organizations that adapt continually and rapidly in order to keep pace with shifts in their market, and as technology is organized into networks of systems that sense, respond, and configure themselves in appropriate ways, will “it’s alive” become more than mere metaphor?
THE THIRD DIGITAL REVOLUTION.
Much of what has been discussed so far can be attributed to two digital revolutions. In the 1940s, telephone signals degraded with distance, a problem that Claude Shannon solved when he digitized communication. Von Neumann did likewise for computers. So, what is the third digital revolution?
We have seen that technological evolution is fundamentally driven by the capturing of natural phenomena, and that this leads to the development of instruments that allow for increasingly fine observations and manipulations of those phenomena. Chemists would welcome instruments that would enable them to measure and modify molecules so that they might study their structures, behaviours and interactions. Materials scientists would welcome technology that allowed them to be more systematic and thorough, building new materials according to plan and allowing one laboratory to make more new materials in a day than all of today’s materials scientists put together. Biologists would welcome technology that lets them map cells completely and study their interactions in detail.
Can we identify a technological capability that would be able to fulfil all of the wishes listed above? Indeed we can. If you follow Moore’s Law into the future, you can see that computers will one day have switches that are molecular in size and connected in complex three-dimensional patterns. Bill DeGrado, a protein chemist at DuPont, said:
“People have worked for years making things smaller and smaller until we’re approaching molecular dimensions. At that point, one can’t make smaller things, except by starting with molecules and building them up into assemblies”.
Computers contain fast-cycling parts that make complex patterns from the building blocks of information (binary digits),  and the third digital revolution will result from machines that contain fast-cycling parts that can make complex patterns from the building blocks of matter (the elements of the periodic table).
This is in marked contrast to most manufacturing today, which is almost entirely analogue in its nature. We take a batch of materials and chop them, bake them, melt them, mix them, in order to arrive at a final product. It takes a lot of raw material to make something like a car, which is why quarries look like such scars on the landscape. Natural things, on the other hand, get built in a fundamentally different way. Tiny biological machines called ribosomes build things under digital control.
No wonder, then, that one of the main contenders for the post-silicon paradigm is molecular electronics, which promises to dramatically extend the power of IT. To get an idea of how powerful molecular computing could be, consider that a single drop of water contains more molecules than all transistors ever built. This is partly because the molecules are tiny, even by the standards of modern computer components, but mostly because they are organized into three-dimensional patterns. Every IC, in contrast, is a thin two-dimensional layer of computation on a thick and inert substrate.
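As an order-of-magnitude check on the molecule count in that comparison (my own arithmetic, assuming a typical drop is about 0.05 mL):

```python
# Order-of-magnitude check: molecules in a single drop of water.
AVOGADRO = 6.022e23       # molecules per mole
MOLAR_MASS_WATER = 18.0   # grams per mole of H2O
drop_mass_g = 0.05        # assumption: a typical drop is about 0.05 mL, i.e. about 0.05 g

molecules_per_drop = drop_mass_g / MOLAR_MASS_WATER * AVOGADRO
print(f"molecules in one drop of water: {molecules_per_drop:.1e}")   # roughly 1.7e21
```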
Now consider that the company Zettacore, which builds molecular memories from porphyrin molecules that self-assemble on exposed silicon, has already demonstrated up to eight stable digital states per molecule.
In short, molecular electronics might one day deliver sugar-cube-sized devices with more processing power than all of today’s computers combined, or supercomputers the size of viruses. That, however, is not the most exciting prospect of molecular manufacturing. As Neil Gershenfeld said:
“It’s not when a program describes a thing, it’s when a program becomes a thing that we bring the programmability of the digital world to the physical world”.
So, what we are talking about here is reimplementing, in engineered materials, molecular biology’s trick of coded construction. We are looking to create the ability to take a description of an object, and then have it self-assemble from molecular or atomic building blocks. We are talking about machines with the capability to self-repair, and even self-replicate. But, what about machines with cognition?
Electrical engineers who wish to reverse-engineer a rival’s computer chip do so by placing precise sensors at specific points in the circuitry. This enables them to follow the actual information being transformed in real time, thereby creating a detailed description of how the circuits actually work.
Molecular nanotechnology would enable us to do the same thing with living brains, and indeed we are already using our increasingly accurate and refined sensor technologies (along with ever more powerful computers) to do just that. Techniques range from marking specific cells in lab animals by genetically engineering the organisms to incorporate fluorescent proteins found in marine species, enabling researchers to monitor biochemical reactions and track the movements of cellular proteins in real time, to diffusion tensor imaging, a technique that infers the location of nerve fibers by tracking water molecules in the brain as they move along them.
In other words, our technologies are now beginning to sense the natural phenomena responsible for the design and function of the human brain. Neuroscientist Lloyd Watts pointed out that:
“Scientific advances are enabled by a technology advance that allows us to see what we have not been able to see before…we collectively know enough about our own brains, and have developed such advanced computing technology, that we can now seriously undertake the construction of a verifiable, real-time, high-resolution model of significant parts of our intelligence”.
The Blue Brain Project (run on IBM’s Blue Gene supercomputer) has so far succeeded in reverse-engineering the cortical column, which can be thought of as the basic microprocessor, the building block from which the neocortex is made. Many other areas of R&D in the bio, neuro and computer sciences are also following paths towards the complete reverse-engineering of the human brain, for reasons that range from treating neurological disorders, to eliminating the need for animal experiments, to nothing more than the urge to understand how the mind works.
But, if it is indeed true that the brain IS the mind, and it becomes possible to build brains or, rather, to have machines which can self-assemble a body and a brain to control it, do we then have machines who think? Who feel? Who create? Would this lead to technology conceiving of, designing, and manufacturing its own next generation? If so, how might the human/machine relationship be affected? In pondering questions like these, Carl Zimmer came up with the following:
“The Web is encircling the Earth, subsuming not only computers but cars and cash registers and televisions. We have surrounded ourselves with a global brain, which taps into our own brains, an intellectual forest dependent on a hidden fungal network”.
FROM A DISTANCE…
Whether something appears to be a single entity or a collection of individuals often depends on your perspective. On a molecular scale, a single cell is a collection of chemicals and molecules. On the macro scale, a vast number of different cell types appears as a single animal. Maybe it is the case that, seen from the perspective of space, our planet seems less like a vast number of people and technologies, and more like a single entity slowly teasing apart the laws of nature? It is as if, in at least one tiny corner of the cosmos, the Universe is organizing itself into patterns of matter/energy that pursue the directive, Temet Nosce, ‘Know Thyself’.

3 Responses to IT’S ALIVE! THE THEORY AND CONSEQUENCES OF TECHNOLOGICAL EVOLUTION.

  1. Seren Seraph says:

    A very interesting piece. A couple of meta comments. It seems to have been duplicated in that there appear to be two copies of the article in this post. Some paragraph breaks would have helped readability and perhaps some sub-headings.

    I somewhat disagree with some of your premises. Various technologies are in fact created every now and again out of whole cloth. Or at least their “evolution” is quite a bit different than natural systems. Principally this is due to a lot of technologies being born out of an idea, a desire for a particular package of functionality that the world has not seen before. Given that desire it is certainly true that the inventor will look at things already existing that give some part of the functionality desired. These pre-existing things indeed suggest that other things might be possible. But without the mind of the inventor and that creative impulse and intelligence mere combination would avail little. The inventors look to see what technologies and techniques might be used to build out a solution. But the composition of the solution, its tuning to the problem at hand, and often the invention of new subparts that did not exist is not a simple evolution from what is. It is directed. Yes, it incorporates what existed before where it can. But that is not to say that it is wholly or predominantly derivative. That is the heart of the disagreement. I don’t agree that recombination is the driving force of innovation. Combinatorics gives an exponential explosion that by itself will not distinguish even what may be a desirable combination much less suggest something that just may be possible and that is desirable. That takes a thinking, desiring, creative intelligence. This seems to receive less acknowledgment in the post than it deserves.

    The reason for specialization is that it is far easier to advance if every person doesn’t have to do everything they need done themselves in addition to what you mentioned, which is that skills are much more perfected when one can specialize in them. These two together mean that a greater quantity and quality of goods and services are available to each person with far less effort.

    Actually very little today is grown and trained in high technology. To do this with Genetic Algorithms (or the equivalent) is quite difficult except for relatively easy to state and test problem spaces. With larger messier problem spaces the utility function is difficult to specify and/or too costly/time-consuming to compute as often as a good GA search requires and the search space suffers a combinatorial explosion exhausting all computing resources at its disposal. This is not to say that we will not program machines to do creativity and thinking but it will be much more like the way we do these things rather than GAs and such.

    Software upgrading itself, especially significantly, is unfortunately a good ways off. There are many very sticky issues in this area especially using the software languages and techniques of today.

    Top down business practices are largely giving way, and have been for some time now, to much more flexible structures of small team approaches with some sharing/mapping of resources. The old model doesn’t work in most modern businesses and this has been recognized. The one place where top down and highly centralized hierarchies are still tried a lot is in government. Enough said. Micro-economics trumps macro economics. Individuals make better decisions more quickly about things in their local context than any data gatherer that is more centralized ever can.

    While people are not ants or bees and should not be organized as if they are, it is a very good point that swarm intelligences of these kinds are quite wondrously capable across a lot of surprising problem domains.

    It is also quite true that massively adaptive sensing and responding automated systems with reasonably clear goals and freedom to self adjust are much more capable and less fragile than many of today’s software systems. It does give one pause though to realize that such a system is no longer entirely an engineered artifact. It is not possible for humans to maintain it as simply a Turing machine or something one can write out and study a full state diagram for. It is more akin to an ecology whose parts may be Turing machines but whose composition is not meaningfully one. This is being recognized in the relatively new software paradigm of Interactive Computation.

    The Zettacore 8 states per molecule is very interesting. However, I have heard that we have demonstrated 35 bit storage *per electron* in the lab! (http://nextbigfuture.com/2009/02/subatomic-technology-stanford-writes-35.html)

  2. Seren, first of all, thanks for pointing out the essay was duplicated. I have now erased the redundant half. As for paragraph breaks, they are included in the pre-published version but they are missing when I publish. I go back, edit the piece to put them back, and they disappear again when I republish. Strangely, essays cut and pasted from Gwyn’s blog do retain their paragraph breaks, so there must be some way to do it.

    >Various technologies are in fact created every now and again out of whole cloth. Or at least their “evolution” is quite a bit different than natural systems.<

    >But without the mind of the inventor and that creative impulse and intelligence mere combination would avail little.<

    >I don’t agree that recombination is the driving force of innovation. Combinatorics gives an exponential explosion that by itself will not distinguish even what may be a desirable combination much less suggest something that just may be possible and that is desirable. That takes a thinking, desiring, creative intelligence. This seems to receive less acknowledgment in the post than it deserves.<

    In the documentary 'Connections', James Burke argues that the image of the inventor with a flashbulb lighting above his head to signify a great idea coming from nowhere is an inaccurate image of innovation. "At no time did an invention come out of thin air into somebody's head. You just had to put a number of bits and pieces, that were already there, together in the right way". I do not think this just applies to physical inventions. I have never had a truly original idea, one that owes no debt to any prior thinker. I am sure I am not unique in this respect. Every person takes bits and pieces of other people's prior ideas, and recombines them in new ways.

    While I think the process of invention is more mundane than a flash of inspiration coming to some maverick genius, enabling her to see something nobody without her special gifts could ever have imagined, obviously it is not a 'no-brainer' to shuffle bits and pieces of prior knowledge in order to gain profound and valuable insight, nor is it a no-brainer to recombine pre-existing technologies in order to create a useful and commercially successful new product. If that were the case, we could all do it. It takes skill to be an inventor.

    As for the rest of your comments, I have nothing to add except to say I am in agreement with them.

  3. For some reason the prior post missed out this part…

    >Various technologies are in fact created every now and again out of whole cloth. Or at least their “evolution” is quite a bit different than natural systems.<

    >But without the mind of the inventor and that creative impulse and intelligence mere combination would avail little.<

    Yes, for now we must take human involvement as a given. As I said in the essay, 'so far it (technology) has required human agency for its buildout and reproduction. It is therefore alive, but only in the sense that a coral reef (built from the activities of small organisms) can be said to be a living thing'.
