'is it alive'?

(This essay is a transcript of my Christmas 2012 lecture, which is all about the role technology is playing in bringing about a new kind of organism).                                                                   
The Fourth Transition.
Welcome to this year’s lecture!
This year, the topic is ‘the fourth evolutionary transition’. What is that, exactly? Well, we begin to get a clue by looking at termites.
The science writer Lewis Thomas described termites as one of nature’s seven wonders. What is so amazing about them cannot be seen by examining termites as individuals. As Thomas wrote, “there is nothing at all wonderful about a single, solitary termite”. But something wonderful happens when the number of termites reaches a critical mass. Thomas described what happens as being as if the termites “had suddenly received a piece of extraordinary news, they organise into platoons and begin stacking up pellets to precisely the right height, then turning the arches to connect the columns, constructing the cathedral and its chambers in which the colony will live out its life”.
So, which termite masterminded all this construction? Wrong question. Because, as Thomas pointed out, termites “are not the dense mass of individual insects they appear to be; they are an organism, a thoughtful, meditative brain on a million legs”. When a large number of a species of animal coordinate behaviour to the extent that termites do, the collective is described as a ‘superorganism’.
Put a termite under the microscope and you will see that its body is made up of millions of cells of different types. Even more than the termite, each cell cannot be thought of as a solitary thing, because it is part of a society, and it depends on that society for its survival. 
In the early history of life, single-celled organisms were all that existed. An interesting experiment was conducted in which a single-celled alga was allowed to replicate for over a thousand generations before a single-celled predator was introduced. Within two hundred generations, the alga began clumping together, with hundreds of cells in a clump at first but eventually paring down to eight cells per clump. This was an optimal number that made each clump large enough to avoid predation but small enough for each cell to pick up enough light to survive.
One can imagine how, over time, single cells clumping together would evolve slightly different cells in the group, the effect of this difference being a wider range of behaviour. Predators with more hunting skills, prey with more ways of defending themselves. After a billion or so years multicellular societies became the incredibly complex, coordinated systems we know as plants and animals.
Turn up the magnification so that you can see the structure of each cell, and you will find that it, too, is a society. We may think of animals as powered by using oxygen to slow-burn organic compounds for energy, while plants get theirs through photosynthesis. But the fact is that not one cell in your body knows what to do with oxygen, and no plant cell can extract energy from light.
Inside each and every animal cell there are other, bacteria-like organisms called mitochondria, while inside every plant cell we find chloroplasts. It is these mitochondria that know what to do with oxygen and the chloroplasts that know how to get energy from sunlight.
Scientists believe that those mitochondria and chloroplasts were once free-living single-celled organisms, living independent lives. Then, a relationship was formed between some such single-celled organism and a bacterium, and over hundreds of millions of years this symbiotic relationship gave rise to the eukaryotic cell, a high-tech miniature machine that was to become the foundation for all multicellular life on Earth. Richard Dawkins explained, “all our cells are… stuffed with bacteria which have become so transformed by generations of cooperation with the host cell that their bacterial origins are almost lost to sight”. Even more than the individual cells in a multicellular organism, the bacteria-like mitochondria and the cells in which they live cannot be thought of as separate things, even if far back in the dim and distant past the ancestors of those mitochondria did live independent lives.
There is yet another example of co-operation, an event that is one of the mysteries of science. Even the simplest single-celled organism is actually quite a complex chemical system. Billions of years ago, such systems of gradually increasing complexity made the transition from non-life to life. Although scientists are increasingly learning to craft such systems in the laboratory, none seem to come with an unambiguous label defining them as either alive or not. This is probably because it is intrinsically arbitrary to ask at which point any system of increasing complexity becomes ‘alive’. 
This point was emphasised by Robert Hazen, a professor of earth science at George Mason University, Fairfax, Virginia:
“Any attempt to formulate an absolute definition that distinguishes between life and non-life represents… a false dichotomy… Rather, life must have arisen from a sequence of emergent events- diverse processes of organic synthesis followed by molecular selection, concentration, encapsulation and organisation into various molecular structures… what appears today as a yawning divide between non-life and life obscures the fact that the chemical evolution of life occurred in this stepwise sequence of successively more complex stages”.
So, to recap, there have been three great transitions, each one resulting in a new kind of life formed from a union of existing ‘organisms’:
TRANSITION ONE: The increasingly complex biochemical systems that ultimately evolved into bacteria-like cells.
TRANSITION TWO: The incorporation of bacteria into other cells, resulting in the eukaryotic cell.
TRANSITION THREE: The organisation of eukaryotic cells into multicellular forms.
So what is the fourth evolutionary transition? In order to perceive it, we need not a microscope but a ‘macroscope’- a point of view that can take in the whole Earth and dense networks of activity happening over the course of generations (but becoming increasingly fast).
In his paper on the technological Singularity, Vernor Vinge outlined several pathways that could lead to superhuman intelligence. One is particularly relevant to what I am talking about:
“The Internet Scenario: Large computer networks (and their associated users) may ‘wake up’ as a superhumanly intelligent entity”.
People often labour under a false impression when considering this scenario. They think it suggests that if we connect enough computers together and write or breed enough of the right kind of software then, as with termites, a ‘critical mass’ will be achieved and, behold! The Internet comes alive. But the scenario is not concerned with computer networks alone, but rather with how they are used as part of human groups. It is those humans, after all, who help create the link structure Google depends upon to rank relevant search results, who engage in the ongoing arguments from which Wikipedia’s articles are created and revised, and who organise social-media-led revolutions like the Arab Spring.
Also, that network of digital devices can only function thanks to the existence of other, older networks. We plug our devices (or their battery chargers) into electric outlets, drawing power from electric grids. The hardware we buy comes from production plants, all of which rely on other factories and mines from around the world to supply them with the parts they need, and a global network of transportation to ship those parts to required destinations. The skills needed to design the software and hardware rely on networks like the education system and scientific research (without which, for example, we would not have the laws of electromagnetism which underpin so much of the modern world). All this requires capital, provided by economic systems, and full bellies, provided by a global agricultural system.
Greg Stock believes that, when we consider all the physical and intangible networks woven throughout the world today, we can indeed perceive the existence of a planet-sized super-organism. He refers to it as ‘Metaman’:
“Metaman processes huge amounts of information by combining human thought and computer calculation within the various organised networks of human activity”.
People who study human societies believe it is no accident that we move towards more complexity. Instead, it is an inevitable consequence of a simple fact: whenever a society solves its problems, the success that brings leads to more (and more complex) problems. For one thing, societies that prosper tend to grow in size, putting a strain on available resources and requiring more elaborate means of acquiring necessities. A small tribe might sustain itself by collecting water from a watering hole, but at some point an expanding population is going to have to build an irrigation system, along with a system of management when there are too many canals for ad-hoc repairs to be practical.
As the complexity and number of problems a growing populace faces grows, it becomes increasingly necessary to divide tasks up into specialised skills. In today’s world, especially in developed countries, people rely on the skills of others to provide nearly everything they need. And such is the complexity of most modern products that it is infeasible for any individual craftsperson to design and build them. Instead, hierarchical organisations are required, in which the manufacturing process is broken down into a series of micro-tasks overseen by layers of management.
But hierarchical organisations must also face the problem of increasing complexity, and the ultimate solution is to fundamentally alter the way in which society is organised, and how we think about technological and economic systems. In a hierarchy, there is always a ‘head’ who must make final decisions, but once complexity grows too large for any individual to get their head around the whole thing, hierarchies have to give way to distributed decision-making facilitated by networks. As Kevin Kelly observed:
“We find you don’t need experts to control things. Dumb people, who are not as smart in politics, can govern things better if all the lower rungs, the bottom, is connected together. And so the real change that’s happening is that we’re bringing technology to connect lots of dumb things together”.
By the way, when Kelly calls people dumb he does not mean they are stupid. Instead, he means networks of human activity and the technological networks facilitating it can handle problems and make decisions beyond the capabilities of any individual. Whenever we perform feats like detecting hints of ‘dark energy’ or tracking changes in global climate, such feats should really be attributed to the sum total of human and machine networks comprising ‘Metaman’. As Greg Stock put it:
“When I speak not of ‘humans’ or ‘society’ but of ‘Metaman’ accomplishing something, I do so to acknowledge the role played by these immense and complex collaborations that are ubiquitous in the developed world”.
The technologies we are relying on to connect ‘dumb’ things together in order to expand and deepen the sensory awareness of the planetary super-organism are mostly digital technologies. The emergence of digitisation had a profound effect on how technology, and the socio-economic systems supporting (and supported by) it, are perceived.
Walk through any urban area, and the prevalence of digital devices is apparent. Almost everyone you pass is either holding a smartphone to their ear or gazing at its screen. If current rates of consumption are maintained, by 2015 there should be some 4.5 billion smartphones in the world. And this is but one example of the plethora of digital devices expected to proliferate. As the cost of computing, sensing and communicating decreases, it becomes feasible to add connectivity to more and more everyday things.
To give some idea of the scale of this ‘Internet of Things’, consider the number of addresses the latest revision of the Internet’s primary communications protocol is designed to handle. The current version, IPv4, can provide up to 4 billion addresses. But that is not nearly enough. IPv6 will provide up to 340 trillion, trillion, trillion addresses- enough, it is often claimed, to give every atom on the Earth’s surface its own unique IP address.
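For anyone who wants to check those numbers, the gulf between the two address spaces is simple powers-of-two arithmetic (a sketch of the address counts only; the protocols themselves are of course far richer):

```python
# IPv4 addresses are 32 bits long; IPv6 addresses are 128 bits long.
ipv4_addresses = 2 ** 32    # roughly 4.3 billion
ipv6_addresses = 2 ** 128   # roughly 3.4 x 10^38, i.e. "340 trillion, trillion, trillion"

# How many IPv6 addresses exist for every single IPv4 address?
expansion_factor = ipv6_addresses // ipv4_addresses  # 2^96
```

The jump from 32 to 128 bits is why the address space does not merely quadruple but multiplies by a factor of 2⁹⁶.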
OK, so we probably will not go quite as far as turning every atom into a web-enabled object. But we should definitely expect a future in which the Internet expands to cover more and more of the globe, and its web becomes increasingly tightly woven as more and more nodes are added.
Along with its advance into developing nations via wireless communication and cheap mobile devices, the Internet will even encompass the oceans. This is the ambition behind so-called ‘Cabled Ocean Observatories’, a network of buoys and robotic craft that will carry sensors detecting, among other things, biological and chemical properties throughout the water column, and geophysical observations made on the sea floor. 
As I said, the increasing presence of the Web and the ubiquity of digital devices is altering our perception of a great many things. One such change was anticipated back in 1995 by Eric Schmidt, the CTO of Sun Microsystems:
“When the network becomes as fast as the processor, the computer hollows out and spreads across the network”.
This phenomenon is now happening with ‘cloud computing’, in which more and more of the files and apps once stored locally are instead kept in data farms like the ones Google operates, streamed to personal digital devices as and when needed. Google’s services require its growing cluster of servers to act as one machine, and that requires many parallel operations to be carried out at once. This move can be likened to the shift in manufacturing ushered in by the industrial age, in which factories broke up production into thousands of parts to be performed simultaneously, rather than relying on workers in separate shops turning out finished products step by step.
Kevin Kelly reckoned that, some time around 2015, desktop operating systems will become obsolete. He wrote:
“The Web will be the only operating system worth coding for. You will reach the same distributed computer whether you login by phone, PDA, laptop or HDTV”.
The act of turning objects into digital devices will dramatically speed up recombination. Recombination has always been the essence of invention. No new technology ever appeared out of thin air but was instead created by combining bits and pieces that already existed. When devices become digital they are all, at heart, objects of the same type. That is, data-strings. Therefore, as W. Brian Arthur (author of ‘The Nature Of Technology’) pointed out:
“Digitisation allows functionalities to be combined, even if they come from different domains”.
Moreover, the fact that these devices communicate over networks means that recombinations can happen remotely. For instance, ‘Ninja Blocks’ are small devices intended to make it very easy to add communications and sensing capability to everyday objects, allowing one to create things like phone-controlled coffee machines.
The effect of all this is likely to be a very rapid increase in the rate of invention, as we configure and reconfigure various digital objects into new combinations. The economics of the past were built on assumptions of predictability and order, befitting a world in which mechanical systems behaved with clockwork predictability. The digital age is ushering in a perception of technology as a kind of chemistry, one always recreating itself in new combinations. According to W. Brian Arthur:
“Economics is beginning to respond to these changes and reflect that the object it studies is not a system in equilibrium, but an evolving, complex system whose elements- consumers, investors, firms, governing authorities- react to patterns those elements create”.
When talking about digital devices one finds oneself using words like ‘communicating’, ‘sensing’, and in some cases ‘self-configuring’ and ‘self-healing’. These are terms that used to apply exclusively to biological systems. Perhaps, though, it is not surprising that we need to use more and more biological terms in order to describe the behaviour of our networks of digital devices. After all, we learned, from studies of the origin of life, that there is no fundamental divide between the animate and the inanimate. There are only systems of increasing complexity that gradually acquire more and more lifelike characteristics. We should therefore expect that, as technology becomes more sophisticated, it will become less mechanistic and more biological: sensitive and responsive to its surroundings.
However, this increase in the number of digital devices comes with a cost. This increase, along with the growth of high-speed communications networks and high-capacity storage systems, has resulted in vast amounts of data being generated every second. Modern scientific tools like the Large Hadron Collider or the Australian Square Kilometre Array are capable of generating several petabytes of data per day, and Google’s database of hundreds of petabytes is swelled daily by incoming data orders of magnitude larger than the whole web of a decade ago. The cost is a decrease in human attention, as it becomes impossible for us to even scratch the surface of such vast quantities of data.
More and more we must turn to machine assistance. One way of dealing with the data deluge is to automate the process of scientific discovery as far as possible. The popular image of astronomers looking through telescopes is not a particularly accurate portrayal of modern astronomy. Instead, we use robotic instruments with sufficient intelligence to, say, tell a star from a galaxy, and which can detect phenomena too subtle for human senses (such as a star blinking for a fraction of a second due to an asteroid passing in front of it). We also rely on automated processes. Most of the galaxy images collected by the Sloan Digital Sky Survey were never viewed by humans but were instead extracted from wide-field images reduced in an automatic pipeline.
So, modern astronomy employs autonomous, semi-intelligent instruments which relay data to datacenters, and those datacenters use various techniques to further filter the data before finally relaying it to the computer monitors which are what professional astronomers look at. 
It has been argued by some that science itself is undergoing dramatic change thanks to the petabyte age, giving rise to ‘Data-Intensive Science’. Traditionally, science has been built around testable hypotheses, and crucial to this method are models that determine underlying mechanisms. With that in hand, correlation can be confidently connected with causation. But Chris Anderson of Wired Magazine argued:
“Petabytes allow us to say: ‘Correlation is enough’…We can analyse the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot”.
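A hypothetical toy illustration of what Anderson means (the variable names and data here are invented for the example): rather than starting with a hypothesis about which quantities should be related, an algorithm simply scans every pair of variables in a dataset and reports the strongest correlation it finds.

```python
from itertools import combinations
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def strongest_correlation(columns):
    """Scan all variable pairs, hypothesis-free, for the strongest link."""
    return max(combinations(columns, 2),
               key=lambda pair: abs(pearson(columns[pair[0]], columns[pair[1]])))

# Invented example data: no prior theory needed to spot the pattern.
data = {
    "temperature":     [20, 22, 25, 28, 30],
    "ice_cream_sales": [40, 44, 52, 60, 63],
    "noise":           [3, 9, 1, 7, 5],
}
best_pair = strongest_correlation(data)  # the temperature/sales pair
```

At petabyte scale the same brute-force idea runs across millions of variables on computing clusters, which is precisely why, as Anderson says, no hypothesis need come first- though correlation alone still says nothing about causation.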
It should be emphasised that we are not talking about AIs pushing human experts towards obsolescence here. Rather, we are talking about an approach to ultra-intelligence involving cooperation between networks of machines with ‘non-humanlike intelligence’ capable of exploring datasets in ways impossible for humans, and humans employing skills like pattern recognition that machines struggle with. The trick is for these to interoperate effectively, such that the strengths of one compensate for the weaknesses of the other.
No human, for example, can comprehend an equation with several hundred million variables, but Google’s clusters handle such datasets no problem (Google converts the entire web into a big equation with several hundred million variables, which are the page ranks of all the web pages, plus billions of terms that are all the links). But, equally, the web contains lots of information humans comprehend easily- such as the context of visual images- which are profoundly hard for machines to make sense of. So, collectively, Google and its associated users form an entity that can mine vast sets of data for relevant information and extract useful knowledge from it.
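The ‘big equation’ referred to above is the PageRank calculation. A minimal sketch of the idea, using a three-page toy web (Google’s production system is vastly larger and more sophisticated than this, so treat it purely as an illustration of the principle):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank.

    links: dict mapping each page to the list of pages it links to.
    Returns a dict of page -> rank, summing to 1.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal ranks
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = rank[page] / len(outlinks)  # split rank over outlinks
                for target in outlinks:
                    new_rank[target] += damping * share
            else:
                # Dangling page: spread its rank evenly over all pages.
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

# Toy web: page c is linked to by both a and b, so it ranks highest.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(web)
```

Each page’s rank is a variable defined in terms of the ranks of the pages linking to it; scale the three variables here up to hundreds of millions and you have the equation no human could comprehend but a cluster can iterate through.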
The most important contribution computers and software tools can bring in this context is not intelligence per se, but rather knowledge management. This is necessary because science is rapidly transforming from a “cottage industry model in which one small team in a single location was responsible for the entire procedure of a particular line of inquiry, from collecting the data to writing the paper, to a more ‘industrial approach’ involving large, distributed teams of specialists collaborating around the world”.
The ‘Human Brain Project’, for instance, will rely on collaborations from teams in Switzerland, Germany, Spain, France (to name a few of the countries involved) drawing on expertise in areas like ‘clinical neuroscience’, ‘pharmacology’, ‘numerical analysis’, ‘animal physiology’ and ‘robotics and mechatronics’.
Multidisciplinary science faces a grand challenge, in that science throughout the 20th century fragmented into more and more specialised disciplines, with vocabularies largely incomprehensible to outsiders. This ultra-specialisation means that a scientist in one field might need to access the same data as another scientist, but from a very different perspective. The challenge, then, is to organise the world’s data so that it is easily accessible and simple to share across boundaries of specialised knowledge.
Fundamental to this approach is a drive to ‘objectify knowledge’, organising it into standard, machine-understandable representations. Whereas today’s cloud computing services are chiefly focused on scalable platforms for computing, tomorrow’s will be much more concerned with the management of knowledge, driven by semantic approaches such as machine encodings of terms, concepts, and relationships. Contemporary examples of this ‘knowledge layer’ include the ‘Open Web Alliance’, which is an “open collaborative community (seeking) to organise the massive amounts of information flooding the biological sciences and other sciences”. Another example is Wolfram Alpha, an “online service that computes answers and relevant visualisations from a knowledge base of curated, structured data”.
Ultimately, the goal is to organise the world’s data so that it is a simple matter to look at some data and find all the information relevant to it, and gain insights by fusing data from multiple disciplines and domains. Combined with techniques like natural-language processing, the ‘semantic web’ and other methods for objectifying knowledge, it will be possible to ask things like ‘fetch me the incidence of outbreaks of flu across Asia and find correlations with migrating birds’ and be presented with text and visualisations that contain just the right information needed (provided the information is there, somewhere, among the world’s databases).
Jeannette Wing, professor of computer science at Carnegie Mellon University, has talked about how computer science techniques and technologies are being applied to different disciplines, resulting in ‘computational thinking’. So, we have ‘computational ecology’ (concerned with simulating ecologies) and ‘eco-informatics’ (concerned with collecting and analysing ecological information). We have ‘computational biology’ (concerned with simulating biological systems) and ‘bioinformatics’ (concerned with the study of methods for storing, retrieving, and analysing biological data). Jeannette Wing wrote:
“Computational methods and models give us the courage to solve problems and design systems that no one would be capable of tackling alone”.
Today, if you search images on Google, it does a pretty good job of finding relevant results. This is not thanks to AI alone, but a combination of human knowledge, choices about that knowledge recorded in simple acts like clicking on a hyperlink or altering a search query, and computer networks mining that data so as to organise it more effectively.
Whereas before we relied upon hierarchical organisations to produce things like vast collections of images and encyclopaedias, now we can rely on a kind of automatic pooling of knowledge in which patterns of user activity lay down trails, and systems of knowledge self-organise into categories richer and more complex than the relatively simplistic categories we used to order our knowledge by. We see the rise of ‘meganiches’ in which social networking enables individuals with rare and specialised interests to find like-minded souls, organising into groups as large as any previously achieved by mainstream media.
A lot of this collaborative effort is conducted freely, without expectation of extrinsic reward. Kevin Kelly noted:
“One study found that only forty percent of the web is commercial. The rest runs on duty or passion”.
One result of this freely-given effort is a reduction in the cost of failure. By and large, organisations that have employees are biased toward steady producers. But with something like Wikipedia we see a huge imbalance in participation. A typical article will have hundreds contributing one edit each and only a few contributing a substantial portion of the main body of text. But, since nobody is being paid, that is absolutely fine and there is no temptation to try and address this inequality. Individually, of course, single edits would amount to negligible improvement. But those simple acts accumulate. Wikipedia harnesses different levels of effort and different skills and organises it all into what is probably the top source of reference of our time. Remember, it is not the technology of Wikipedia alone that achieved this, but that technology and the society of human users it supports.
Similarly, to ask Google something is not simply to rely on large clusters of computers in some data farm somewhere. It is also to rely on human effort, much of it negligible when considered individually but producing powerful effects once those individual efforts are pooled together. 
At some point in history, we crossed a threshold, from designing technologies that could, in principle, be undertaken by individuals, to those that absolutely require interdisciplinary knowledge spread across a great many people. Compare the Large Hadron Collider to the Great Pyramid. Obviously, the construction of both was of a scale no individual could undertake. But I do believe an individual could draw up a complete blueprint of the Great Pyramid. But no person, no matter how clever and how polymathic they may be, could ever design a machine as complex as the LHC. Such machines absolutely require collaborative creation supported by networks of communications and information technologies. 
So, if we now have technologies whose complexity rules out their being designed by a single human mind, are they not, by definition, the result of superhuman effort? In a private conversation, J. Storrs Hall told me:
“I think it should be clear that the Internet is already a superhuman entity. Hell, even a ten-person company is a superhuman entity. The question is, is it one that can cause a singularity?”.
I think so, for a couple of reasons. One was described by Luis von Ahn (the inventor of the CAPTCHA) in a TED talk called ‘Massive-Scale Online Collaboration’. You might have heard of ‘Dunbar’s Number’, which refers to the maximum number of individuals with whom one can maintain stable social relationships. Coordination limits of a similar kind seem to apply to organised projects: if you look at the number of people involved in large-scale projects such as the Panama Canal or the Apollo Moon landing, they all involved roughly the same number of participants- somewhere in the region of 120,000. This is because it has always been impossible to coordinate- let alone pay- teams whose number of participants exceeded the hundreds of thousands.
However, the Internet is enabling us to assemble teams numbering in the hundreds of millions. It is likely that you yourself have been part of some such massive-scale online collaboration. Every time you type a reCAPTCHA, for instance, you are one of hundreds of millions of people helping to digitise the world’s books.
Equipped with the right technological aids, ordinary people can achieve great things. It took teams of gamers playing ‘Foldit’ just ten days to model the Mason-Pfizer monkey virus retroviral protease- a feat that had eluded scientists for fifteen years.
If a hundred thousand people working together can put a man on the moon, what might a hundred million, working together along with vast computing resources and ‘Data-Intensive Science’- be capable of?
The other reason this could lead to a Singularity is that the plethora of objects entering the digital domain does not only enable a dramatic speedup in the recombination of things. Thanks to an ever-denser communications network and increasingly efficient search technologies, group formation is also becoming increasingly easy. Moreover, a machine-curated knowledge-layer would go some way to meeting Vernor Vinge’s challenge:
“We need to extend the capabilities of search engines and social networks to produce services that can bridge barriers created by technical jargon and forge links between unrelated specialities, bringing research groups with complementary problems and solutions together”.
With many of the costs of group formation greatly reduced, it would be viable to pursue real blue-sky thinking and explore multiple possibilities. Mega-teams with interdisciplinary expertise would form, break apart, reform in different combinations, as the projects they are involved in fail to take off or show signs of advancing toward some goal. As Clay Shirky reasoned:
“Open systems, by reducing the cost of failure, enable their participants to fail like crazy, building on the successes as they go”.
When we combine this more rapid exploration of possibility space via recombinations of specialised knowledge with an increasingly efficient assessment of worldviews against an objective reality we can now measure so powerfully (thanks to the network of sensors monitoring the planet’s various systems), the result should be more paradigm shifts in scientific theory, happening faster.
It will not just be scientific research that will be improved by increasing effectiveness of group formation, data analysis and sensing of global systems. In a private correspondence, David Brin told me:
“One important aspect is that we will see better and better tools for discourse that allow more rapid building of ad-hoc teams of humans and AI that directly solve problems in real time: “Smart mobs” that bypass slower tools like corporations and governments”.
There are multiple pathways to a technological singularity, from building artificial superintelligence to genetically engineering humans to be super-geniuses. But it seems to me that the ‘Internet Scenario’ is the one most likely to get us there first, because it relies on trends well underway, driven by basic human needs to organise into groups and communicate knowledge. This scenario does not rely on designing machines to do everything people are good at (a profoundly difficult challenge) nor does it involve turning people into machines (a moral and ethical minefield if ever there was one). It relies only on the further co-evolutionary development of humanity and its technology. Human brains are particularly suited to this form of symbiosis. 
One reason why this is so can be found by considering vision. The strange thing about vision is that there is a contradiction between the world that we see and what we should see given the construction of the eye. Our daily experience is of a full-colour, highly-detailed scene. But the middle of the retina (the fovea) is packed with colour-sensitive neurons (or ‘cones’), whereas beyond about ten degrees from the middle there are mostly ‘rods’- neurons that only detect light and shade. This must mean that the visual scene the eye actually delivers has a centre in sharp focus and full colour, while the edges are blurry and devoid of colour.
It is believed that the visual system does not construct a detailed model of what is ‘out there’ at all, but settles instead on encoding a rough gist of the scene. But, at any moment, by repositioning the fovea via sequences of rapid eye movements known as saccades, we can acquire detailed information from any point in the scene whenever we need it. According to Andy Clark, where possible the brain prefers to rely on ‘meta-knowledge’, which basically means ‘knowing how to find out’. In his own words:
“Having a super-rich, stable inner model of the scene could enable you to answer certain questions rapidly and fluently, but so could knowing how to rapidly retrieve the very same information as soon as the question is posed”.
In Clark’s view, the belief that the brain is the source of human intelligence is only partially correct. In fact, human intelligence can only be understood by considering interactions between the brain, the body, and cultural and technological environments. Clark explained:
“What the human brain is best at is learning to be a team player in a problem-solving field of nonbiological props, scaffoldings, instruments and resources- natural-born cyborgs ever-eager to dovetail their activity to the increasingly complex envelopes in which they develop, mature and operate”.
Brains like ours are poised to incorporate ubiquitous, invisible-in-use technologies into our mental models. To illustrate this point, Clark pointed out that, when asked “do you know the time?”, a person with a watch would say “yes”. But if you ask someone if they know what such-and-such a word means, they would reply “no, but I can find out” and go consult a dictionary. Notice, though, how both scenarios have the same structure: a person is asked something they do not know, and they consult some tool in order to find out.
The difference lies in the ease with which that information can be retrieved. The more ‘invisible-in-use’ a technology becomes, the more akin to our neural substrates it is. While writing, for example, an author is using posterior parietal subsystems, which make appropriate adjustments to hand orientation and finger placement. Yet nobody uses such systems in any conscious sense. Similarly, if you asked me, “can you define the word ‘happy’?” I would not reply, “no, but I can retrieve the information from my memory systems”. I would just tell you.
Equipped with a watch, then, a person is a hybrid biotechnological system whose conscious self represents a fairly thin layer, sitting between unconscious neural subsystems ‘below’ and cultural/technological systems ‘above’, and these systems all operate harmoniously to enable ‘you’ (this system that includes the wristwatch and the knowledge of how to use it) to know the time. It seems reasonable to assume, then, that if a dictionary could be accessed as easily as a watch can inform us of the time, we would incorporate it into our mental models of who we are, and what we are capable of doing.
Increasingly, of course, we are inhabiting cultural and technological environments that enable us to access all kinds of information whenever we need it. When asked how we would know if the Internet and its human users had ‘woken up’ as a superorganism, Valkyrie Ice told me:
“The creation of a ubiquitous device that contains a personal tutor/ assistant/memory manager/researcher…Oh, wait, that’s what smartphones are becoming. Gee, looks like the scenario is already underway. It’s just going to take a few more years to improve upon. Once Watson and Siri develop into something more akin to [John] Smart’s ‘digital twin’, and enable every individual to have all-the-time access to the full realm of human knowledge, along with an interface that optimises to fit each individual’s learning and thinking patterns, this will be the most likely outcome”. 
Valkyrie is talking about mobile or wearable devices that offer near-constant access to cloud-based apps: knowledge-management software that ‘learns to be me’. In other words, it learns how best to complement an individual’s strengths and weaknesses. It has long been known that the brain is highly plastic. Violin players, for example, show enhanced development in the regions responsible for motor control, thanks to the amount of complex finger movement their art requires. Neural constructivists believe the brain’s adaptability extends beyond merely fine-tuning existing circuitry and involves the actual construction of new neural circuitry. This would make the brain a constructive learning system, in which the basic computational resources alter and expand (or contract) as the system learns. As it is experience that drives this process, it would mean we come to have designer brains, purpose-built to dovetail with reliable problem-solving systems.
At the same time, those external systems are also becoming increasingly adaptable, ‘learning’ from human users so as to provide better services. Google captures the search behaviour of its users, using everything from how we punctuate our queries to how often we click on the first result, in order to guide future improvements to the system. We are progressing from external cognitive systems that evolved over generations to systems that evolve in near real time, as petabytes of data from a plethora of networked sensors capture user behaviour to be analysed by Google-sized computing resources.
We are offloading more and more aspects of our thinking to external systems. But who really benefits? The individual? Or those vast systems we are plugged into? It is rightly pointed out that services which appear free are actually paid for in data about ourselves. As media theorist Douglas Rushkoff pointed out, a Facebook user is not really a consumer. Rather, the user is the commodity in which the company ‘Facebook’ trades. In ‘The Blind Giant’, Nick Harkaway wrote:
“Being a consumer, a customer, implies a measure of control over the relationship…The commodity, on the other hand, gets the minimum necessary attention to keep it in a marketable state”.
In this context, being in a marketable state means being somebody who is a good target for advertisement. The more the individual can be pigeonholed into categories, the more effective advertising will be. Are the friend recommendations you receive and the search results you get serving to expand your horizons and open your mind, or are they serving to put you in a bubble that narrows your view, making you a more convenient commodity?
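The bubble described above is a feedback loop: clicks shape recommendations, which shape future clicks. The toy simulation below (a sketch of my own, not a description of any real recommender; the function name, categories and `personalisation` parameter are all hypothetical) shows how even a slight initial preference, fed back into the system, comes to dominate what a user sees.

```python
import random
from collections import Counter

def simulate_feedback_loop(categories, steps=500, personalisation=0.9, seed=42):
    """Toy model of a recommender feedback loop.

    At each step the system recommends the user's historically
    most-clicked category with probability `personalisation`,
    otherwise a random category. The click history then feeds the
    next recommendation, so early preferences compound over time.
    Returns the click history as a Counter.
    """
    rng = random.Random(seed)
    history = Counter({c: 1 for c in categories})  # start with uniform exposure
    for _ in range(steps):
        if rng.random() < personalisation:
            choice = history.most_common(1)[0][0]  # exploit: reinforce the bubble
        else:
            choice = rng.choice(categories)        # explore: occasional variety
        history[choice] += 1
    return history

history = simulate_feedback_loop(["politics", "sport", "science", "art"])
total = sum(history.values())
print(f"share of clicks in dominant category: {history.most_common(1)[0][1] / total:.0%}")
```

With high personalisation the dominant category absorbs the vast majority of clicks, even though the user began with no real preference at all; only the occasional ‘explore’ step keeps any variety alive.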
It must surely be the case that companies like Google, fed daily with petabytes of data on social behaviour and possessing the combined computing and brain power to analyse it, know far more than the individual does about what influences us to buy, and what psychological drives push us to that final decision. In a world in which we will depend so much on services like Google Now to help organise our lives, it would behoove us to learn more about what influences us, so we can apply those systems in ways that help us make better, more informed decisions.
We need to know what can safely be unlearned: which knowledge, once vital, is now irrelevant in the digital age. We need to be sure which aspects of cognition can be offloaded to external systems and which should remain ‘within the brain’ if we do not wish to grow less intelligent. Perhaps most importantly, we need to use social networks to create smart mobs, to become members of groups that are truly much more than the sum of their parts, rather than trap ourselves in bubbles that merely reinforce our prejudices. 
At the macroscale, where do we stand right now? Mike Wing, IBM’s Vice President of Strategic Communications, reckoned, “the planet itself- natural systems, human systems, physical objects- has always generated an enormous amount of data, but we weren’t able to see it, hear it, capture it. Now we can, because all of this stuff is instrumented. And it’s all interconnected… So, in effect, the planet has grown a central nervous system”.
This central nervous system is enabling us, as components of a superorganism, to tune in to the heartbeat of nations, to organise smart mobs that can help bring down corrupt regimes, that can track weather patterns and help reduce the human cost of hurricanes. It is bringing the world and its people into our homes, and exposing us (for better, for worse, and certainly both) to the world.
When, though, will the final push that sends us over the threshold into a post-singularity era happen? More importantly, how will we know it has happened? If we consider that the Internet scenario involves a symbiotic relationship, an alliance of mutual benefit between human and technological systems, I would say that Michael Chorost provided the best answer. He wrote:
“There may come a day when we start to see behaviour that simply does not make sense in terms of what we know about hardware, software, and human behaviour”.
That would indeed be a sign that the Fourth Evolutionary Transition had resulted in the awakening of a fundamentally new kind of entity. 

Extropia DaSilva: That would indeed be a sign that the Fourth Evolutionary Transition had resulted in the awakening of a fundamentally new kind of entity.
[2012/12/18 16:26] Gwyneth Llewelyn: Behaviour in what? Where?
[2012/12/18 16:26] Extropia DaSilva: PHEW! DONE!
[2012/12/18 16:26] Ari (arisia.vita): great Exti!
[2012/12/18 16:26] Gwyneth Llewelyn: Congrats, Extie, that was awesome!
[2012/12/18 16:26] Zobeid Zuma: /me applauds!
[2012/12/18 16:26] Extropia DaSilva: In The System, Gwyn:)
[2012/12/18 16:26] ArtCrash Exonar: yay!
[2012/12/18 16:26] Extropia DaSilva: Ok questions?
[2012/12/18 16:26] Gwyneth Llewelyn: What “system” might that be? 🙂
[2012/12/18 16:27] ArtCrash Exonar: Lots of things to think about there. Too many!
[2012/12/18 16:27] Zobeid Zuma: That was great. Not that I’m convinced at all, mind you… But well done! 🙂
[2012/12/18 16:27] Extropia DaSilva: The global system of technological networks and human culture.
[2012/12/18 16:27] Gwyneth Llewelyn: Ok, I’ll ask a nasty question…
[2012/12/18 16:27] Extropia DaSilva: I got this dress from my sis, Gwyn. You were going to ask about the dress, right?
[2012/12/18 16:28] Gwyneth Llewelyn: Well, it behaves under patterns we aren’t able to fime
[2012/12/18 16:28] Gwyneth Llewelyn: *time
[2012/12/18 16:28] Extropia DaSilva: Can I sit down now?
[2012/12/18 16:28] Gwyneth Llewelyn: eek
[2012/12/18 16:28] Gwyneth Llewelyn: what happened to my text
[2012/12/18 16:28] Gwyneth Llewelyn: And no, it wasn’t about the dress….
[2012/12/18 16:28] Gwyneth Llewelyn: So hmm
[2012/12/18 16:28] Gwyneth Llewelyn: You started with termites, which are a good point
[2012/12/18 16:28] Extropia DaSilva: My virtual feet ACHE! Go figure.
[2012/12/18 16:28] Gwyneth Llewelyn: I’ll throw Gödel and Hofstadter at you, btw…
[2012/12/18 16:28] Gwyneth Llewelyn: So
[2012/12/18 16:29] Gwyneth Llewelyn: Termites, individually, are unimportant
[2012/12/18 16:29] Gwyneth Llewelyn: But they create things together “as if driven by a mastermind”;
[2012/12/18 16:29] Gwyneth Llewelyn: which doesn’t exist really
[2012/12/18 16:29] Extropia DaSilva: I like Hofsdadter.
[2012/12/18 16:29] Gwyneth Llewelyn: It’s just us, humans,
[2012/12/18 16:29] Gwyneth Llewelyn: looking from outside,
[2012/12/18 16:29] Gwyneth Llewelyn: who are able to capture the patterns
[2012/12/18 16:29] Gwyneth Llewelyn: and say: “wow, intelligent behaviour!”
[2012/12/18 16:29] Extropia DaSilva: ok…
[2012/12/18 16:29] Gwyneth Llewelyn: But ask a termite, “do you see the emergence of a mastermind of all of you put together”?
[2012/12/18 16:30] Gwyneth Llewelyn: The termite will say “no”.
[2012/12/18 16:30] Gwyneth Llewelyn: If Gödel were a termite,
[2012/12/18 16:30] Gwyneth Llewelyn: he would say, “we termites are unable to ‘see from outside'”
[2012/12/18 16:30] Extropia DaSilva: Hardly. It could not even comprehend your question:)
[2012/12/18 16:30] Gwyneth Llewelyn: Hofstadter, by contrast, as a termite, would say: “each level of complexity does not need to be aware of the levels above and below; we function at the level we’re aware”
[2012/12/18 16:30] Gwyneth Llewelyn: (you know what I mean)
[2012/12/18 16:30] Gwyneth Llewelyn: So
[2012/12/18 16:31] Gwyneth Llewelyn: We humans, in your scenario,
[2012/12/18 16:31] Gwyneth Llewelyn: will not be able to “discover” a superhuman intelligence combined of humans & tech
[2012/12/18 16:31] Gwyneth Llewelyn: However,
[2012/12/18 16:31] Gwyneth Llewelyn: a (theoretical) superhuman intelligence, watching Planet Earth,
[2012/12/18 16:32] Gwyneth Llewelyn: would say: “oh, that’s a wonderfully complex organism! The planet is intelligent, even though its components — humans, networks, computers — are not”
[2012/12/18 16:32] Gwyneth Llewelyn: Similarly, it doesn’t make sense to ask if the neurons are intelligent by themselves;
[2012/12/18 16:32] Extropia DaSilva: Well I think that technology that requires teams of hundreds of millions and vast computing resources in order to be researched and developed would seem as astounding to little old me as would any product dreamed up by an artilect.
[2012/12/18 16:33] Gwyneth Llewelyn: but not even the brain is “Intelligent” by itself.
[2012/12/18 16:33] Gwyneth Llewelyn: It’s us, watching the brain, saying: “this complex organ is intelligent”
[2012/12/18 16:33] ArtCrash Exonar: We can see that the internet is like a culture. It has been built by millions of nameless over time. But we can still see the culture and we can still see the internet ‘mind’, even though we can only observe it by bits and pieces at a time.
[2012/12/18 16:34] Gwyneth Llewelyn: Sure, Art. Like the termites. They might have a vague sense of belonging to something, or that something has been built that hasn’t been there the day before.
[2012/12/18 16:34] Gwyneth Llewelyn: But the termites don’t “see” a “Mind” in their cooperative behaviour.
[2012/12/18 16:35] Extropia DaSilva: Well when you check the weather report the information you receive is dependent upon a pretty big network of human knowledge, sensing systems, computational models and so forth. So ‘predicting’ the weather is one of those abilities that Stock would attribute to ‘Metaman’ not just people.
[2012/12/18 16:35] Gwyneth Llewelyn: Put into other words: sure, we might have already reached Extie’s Singularity, but it would take a non-human superintelligence, from the outside, to recognise it as it is.
[2012/12/18 16:35] ArtCrash Exonar: We should get over the termite metaphor as literal. And see it more as mass action. We don’t have to take the entire metaphor literally and ascribe consciousness to termites.
[2012/12/18 16:35] Gwyneth Llewelyn: Exactly my point, Art.
[2012/12/18 16:35] Gwyneth Llewelyn: But, alas, you can also say precisely the same about the nervous system 🙂
[2012/12/18 16:36] Extropia DaSilva: Vinge did suggest one thing..
[2012/12/18 16:36] Gwyneth Llewelyn: Neurons and the hormonal messaging system are not “Intelligent” by themselves.
[2012/12/18 16:36] Extropia DaSilva: and I think it has come true..
[2012/12/18 16:36] Gwyneth Llewelyn: It’s us, functioning at a level above nurons firing and chemicals binding, who label what our brain does as “consciousness” and “Intelligence”
[2012/12/18 16:36] Gwyneth Llewelyn: *neurons firing
[2012/12/18 16:37] ArtCrash Exonar: I see the internet as an extension of human ‘culture’, the difference being that all aspects of the culture are accessible to all members of that culture. This is new.
[2012/12/18 16:37] Gwyneth Llewelyn: Oh sure, I agree.
[2012/12/18 16:37] Extropia DaSilva: He suggested teams of artists could work with computers to produce art that cannot be created by either humans or computers working alone. I believe films like Avatar are such collaborative forces, because CGI like that absolutely requires a combination of human talent and computer calculation.
[2012/12/18 16:37] Gwyneth Llewelyn: Even though we had “sneakernets” before. The first era of globalization started when people were able to send telegrams instantly across the world 🙂
[2012/12/18 16:37] Zobeid Zuma: I was a bit put off by all the references to “singularity”, which I still think is a bit of a silly idea, and I’m not sure why it keeps being brought up.
[2012/12/18 16:38] Extropia DaSilva: It is not silly.
[2012/12/18 16:38] Extropia DaSilva: some of the belief systems people have built up around the concept may be silly. But the concept itself is not.
[2012/12/18 16:39] Zobeid Zuma: I’m not sure that doing a word substitution and replacing “civilization” with “organism” or “metaman” actually improves our understanding…
[2012/12/18 16:39] Gwyneth Llewelyn: Hm. Using Extie’s description, “Singularity” is what an external super-intelligence would describe Planet Earth to be — its joined network of humans, networks, data centers, and so forth. “Intelligence”, by contrast, is what we humans call what happens inside a mass of neurons; or a colony of termites.
[2012/12/18 16:39] Ari (arisia.vita): The question to me is whether the creation of the “intelligent supertermite” organism, namely the mound, benefits each termite? And would the creations of a presumed “global mind” benefit us in any way we could understand? Would we simply begin to wonder why things just seem to be working out for humanity?
[2012/12/18 16:39] Gwyneth Llewelyn: So it’s just pushing the level up 🙂
[2012/12/18 16:39] Zobeid Zuma: Our civilization has been becoming more interconnected, accumulating and communicating more information, for a long time. So what we have now, perhaps, is a change of magnitude, not of kind.
[2012/12/18 16:40] Gwyneth Llewelyn: I’d say, yes, Ari; evolution would have been ruthless against the poor termites otherwise, i.e. if they didn’t have a competitive advantage
[2012/12/18 16:40] Extropia DaSilva: The singularity refers to ‘creating or becoming a super-human intelligence’. See, we assume there is a limit to human intelligence. YOu know you comprehend things your cat cannot, right? Well, the singularity is based on this idea that there could be higher intelligences, and that we cannot fathom their knowledge..
[2012/12/18 16:40] Gwyneth Llewelyn: On the other hand, if Hofstadter (and Gödel…) are right, that “global mind” would not look upon us humans as being intelligent, or even part of it; just like we don’t consider our neurons intelligent by themselves.
[2012/12/18 16:41] Zobeid Zuma: Oh, is that the new definition? 😛
[2012/12/18 16:41] Extropia DaSilva: NO.
[2012/12/18 16:41] Extropia DaSilva: It is THE definition. It has ALWAYS been THE definition.
[2012/12/18 16:41] Zobeid Zuma: So the singularity is AI, basically. Well, the idea of AI isn’t silly. But it doesn’t seem like such a big deal when you look at it that way.
[2012/12/18 16:41] Gwyneth Llewelyn: Extie: this time, I’ll quote Sagan: who cares about the super-human intelligence, if we are unable to communicate with it” 🙂
[2012/12/18 16:41] Ari (arisia.vita): but we do act to preserve our neurons?
[2012/12/18 16:41] Extropia DaSilva: no no no no no
[2012/12/18 16:42] Gwyneth Llewelyn: Ari: yes, we’re conditioned biologically to do so, even if we’re not aware at how we do that
[2012/12/18 16:42] Ari (arisia.vita): to keep them “healthy” (happy?) ?
[2012/12/18 16:42] Gwyneth Llewelyn: yes — we eat to give them energy 🙂
[2012/12/18 16:42] Ari (arisia.vita): so our neurons benefit
[2012/12/18 16:42] Gwyneth Llewelyn: Yes.
[2012/12/18 16:42] Ari (arisia.vita): even though they do not know why they benefit
[2012/12/18 16:42] Extropia DaSilva: Most of the pathways to singularity outlined by Vinge is NOT about AI per se, but some sort of effective collaboration between people and technology.
[2012/12/18 16:42] Gwyneth Llewelyn: But… the neurons, by themselves, are useless
[2012/12/18 16:43] Zobeid Zuma: But if your definition is that loose, then you’ve made a case that we’re already there.
[2012/12/18 16:43] Zobeid Zuma: So… What happens? :/
[2012/12/18 16:43] Gwyneth Llewelyn: Oh, I dig that, Extie. I just don’t think it’s overly different to have a society to collaborate to create a Great Pyramid or the HSC 🙂 It’s pretty much the same thing,
[2012/12/18 16:43] Extropia DaSilva: We are not there yet.
[2012/12/18 16:43] Gwyneth Llewelyn: or, if you prefer, it’s the difference between wolves collaborating to hunt down a prey, and humans creating cities
[2012/12/18 16:44] Gwyneth Llewelyn: Well, using your definition, Extie, I think we already are there.
[2012/12/18 16:44] ArtCrash Exonar: And ‘there’ isn’t an either/or point, but a process wherein ‘there’ is arrived at without at any time being definable as changing from ‘not there’ to ‘there’.
[2012/12/18 16:44] Gwyneth Llewelyn: But using that same definition, we cannot possibly be aware of the supermind, and the supermind cannot be aware of us.
[2012/12/18 16:45] Ari (arisia.vita): well I for one wish to thank the great ubermind for surrounding me with such lovely companions…
[2012/12/18 16:45] Gwyneth Llewelyn: heh Ari — we’re merely pawns
[2012/12/18 16:45] Extropia DaSilva: we are not there yet because the technology is not quite invisible in use enough. Now..once most of the people of the world have Google glasses and can access information on the web as easily as they can access it from their own neural network, THEN we shall be at the singularity. I mean, imagine holding a conversation with somebody so augmented and you are not!
[2012/12/18 16:45] Zobeid Zuma: The pyramid could be *designed* by one person, even though it took thousands to do the work of construction.
[2012/12/18 16:45] Ari (arisia.vita): you are queens Gwyn…
[2012/12/18 16:46] Gwyneth Llewelyn: Extie, from the perspective of someone living in the streets of New Delhi, we’re already superminds
[2012/12/18 16:46] Gwyneth Llewelyn: However, I think that you were discussing two separate things in your lecture!
[2012/12/18 16:46] Gwyneth Llewelyn: One is how technology enhances our senses (knowledge can be seen as one sense too)
[2012/12/18 16:47] Gwyneth Llewelyn: That goes beyond question. We’re exponentially being able to tackle complex problems that were simply impossible few years ago,
[2012/12/18 16:47] Gwyneth Llewelyn: and the pace is increasing!
[2012/12/18 16:47] Gwyneth Llewelyn: The other thing is that you admit that at some point, all this “enhancing technology” will somehow “become” a supermind
[2012/12/18 16:47] Extropia DaSilva: Yes Zo but the large hadron collider cannot be designed by an individual. But then, there are technologies that cannot be designed by natural human teams of 100,000 or so members. But we are beginning to be able to organize 100 million+ teams, thanks to our computer and communications networks. Imagine what such technologies would be like!
[2012/12/18 16:47] Gwyneth Llewelyn: And a supermind that we can recognise as such.
[2012/12/18 16:48] ArtCrash Exonar: The supermind makes us aware of Justin Bieber’s love life and Kate Middleton’s pregnancy.
[2012/12/18 16:48] Extropia DaSilva: How am I supposed to answer so many questions, anyway? They are all scrolling off the screen faster than my primary can read them!
[2012/12/18 16:48] Zobeid Zuma: That did give me pause to think about the rate of technological progress. With this kind of ability, we may be able to keep it from declining as fast as I expected….
[2012/12/18 16:48] Gwyneth Llewelyn: I cannot design even the simplest tool I have — a computer! It took tens of thousands of people working together to create it for me; so we are already highly interconnected.
[2012/12/18 16:48] Gwyneth Llewelyn: Extie:; you need more tech enhancements hehe
[2012/12/18 16:49] Gwyneth Llewelyn: ZO, good point! I never thought about that, but it’s true 🙂
[2012/12/18 16:49] ArtCrash Exonar: Change of subject point: I would like to think about ‘What can we afford to unlearn?’ as it is clear that memorizing factoids is becoming less necessary. But some of that is necessary in order to understand the overall picture of any field of knowledge.
[2012/12/18 16:49] Extropia DaSilva: It is all going into my digital memory..
[2012/12/18 16:49] Extropia DaSilva: Gwyn..
[2012/12/18 16:50] Gwyneth Llewelyn: Anyway, I don’t disagree with the rest of your assumptions. I think they were well founded.
[2012/12/18 16:50] Zobeid Zuma: By the way…. During the lecture I also found myself thinking about Vacca’s “The Coming Dark Age”. I’m sure he would argue that this whole global information system is just a house of cards poised to suffer a cascade failure at any time. 😀
[2012/12/18 16:50] Extropia DaSilva: Again, there has always been a limit to the size of teams that we could organize. I think that means a limit on the complexity of the technology we could build.
[2012/12/18 16:50] Gwyneth Llewelyn: I might take issue with the definition of ‘science’ (getting Popper spinning in his grave), but I cannot disagree with a new methodology for knowledge acquisition, which is the primary focus of science according to Popper and his followers.
[2012/12/18 16:51] Gwyneth Llewelyn: So you speculate that at some point there will be no limits to the size of those teams? Hmm
[2012/12/18 16:51] Gwyneth Llewelyn: Right now, the record in organising teams is…. held by India. One leader can coordinate 1.2 billion humans. That’s not bad 🙂
[2012/12/18 16:51] Extropia DaSilva: Like I said, ‘if we can put a man on the moon with a hundred thousand, what might an organization numbering in the hundreds of millions be capable of’?
[2012/12/18 16:52] Gwyneth Llewelyn: Hm
[2012/12/18 16:52] Ari (arisia.vita): magic Exti… 🙂
[2012/12/18 16:52] Gwyneth Llewelyn: Ok, let me put it this way
[2012/12/18 16:52] Extropia DaSilva: Understanding consciousness, for one thing!
[2012/12/18 16:52] Gwyneth Llewelyn: You’re assuming that there were “only” hundred thousand people working on the Apollo programme.
[2012/12/18 16:52] Gwyneth Llewelyn: But in reality, there were far more.
[2012/12/18 16:52] Gwyneth Llewelyn: Because those hundred thousand had to be fed,
[2012/12/18 16:52] Gwyneth Llewelyn: to get clothes,
[2012/12/18 16:53] Gwyneth Llewelyn: to have paved roads to travel to their workplace,
[2012/12/18 16:53] Extropia DaSilva: Hmm..yeah OK.
[2012/12/18 16:53] Gwyneth Llewelyn: to have cars and such
[2012/12/18 16:53] ArtCrash Exonar: Quote from a famous woman: “If they can put a man on the moon, why can’t they put all of them up there?”
[2012/12/18 16:53] Gwyneth Llewelyn: So, well, you shouldn’t stop at 100,000
[2012/12/18 16:53] Zobeid Zuma: Haha!
[2012/12/18 16:53] Extropia DaSilva: (Nice one, Art)
[2012/12/18 16:53] Gwyneth Llewelyn: Similarly, you can look around at pretty much everything around you, and the numbers of people indirectly involved are bilklions
[2012/12/18 16:53] Extropia DaSilva: I suppose not.
[2012/12/18 16:54] Zobeid Zuma: Who decides which projects are worthy of having such a team assembled for them? The question of how to allocate resources is on my mind.
[2012/12/18 16:54] Gwyneth Llewelyn: And, using also parts of your lecture as an argument,
[2012/12/18 16:54] ArtCrash Exonar: ponders the bilklions
[2012/12/18 16:54] Gwyneth Llewelyn: you don’t even need to ‘worry’ about making sure those billions are providing food, clothes, cars etc for the 100,000,
[2012/12/18 16:54] Gwyneth Llewelyn: it happens automatically 🙂
[2012/12/18 16:54] Zobeid Zuma: It seems to me there is a lot of mis-allocation in technology research and development, and I’m not sure this type of “singularity” addresses that.
[2012/12/18 16:55] Gwyneth Llewelyn: Notice also that at least since the Industrial Revolution, this has been the case, but perhaps we can go further back in time
[2012/12/18 16:55] Gwyneth Llewelyn: So, to a degree, we’re already all interrelated and interacting, even if indirectly; and, as you posited in the lecture, this doesn’t require “centralised authority”
[2012/12/18 16:55] Extropia DaSilva: It is like any technology Gwyn. If you think about it, any technology owes its existence to other earlier technologies, and so on forming a kind of evolutionary ancestry going all the way back to the stone age.
[2012/12/18 16:55] Gwyneth Llewelyn: Zo: no, we have to go to politics for that! hehe
[2012/12/18 16:56] Gwyneth Llewelyn: Yes, Extie, good example. Nothing exists in a vacuum
[2012/12/18 16:56] Zobeid Zuma: Politics, yeah. 😛
[2012/12/18 16:56] ArtCrash Exonar: compiling all knowledge involves compiling all misinformation and bad knowledge as well as all lies. Hopefully they will only be the noise background.
[2012/12/18 16:56] Gwyneth Llewelyn: So what you’re talking about is mostly a question of quality. Doing more things. Faster. Doing impossible things just because we now have the tech for turning them into possible things.
[2012/12/18 16:56] Zobeid Zuma: Ah yes, the World Wide Web: repository of all the world’s knowledge and all the world’s BS!
[2012/12/18 16:57] ArtCrash Exonar: and having a standard of ‘what constitutes knowledge’.
[2012/12/18 16:57] Extropia DaSilva: Imagine trying to build a computer from scratch. I mean, from the absolute raw materials from which an iPad (for example) is made. It would be impossible because the knowledge embedded in the manufacture of an iPad is IMMENSE.
[2012/12/18 16:57] Gwyneth Llewelyn: E.g. in the late 1950s, Russians used rooms full of humans to calculate orbits for their spacecraft. Now we can do that on reasonably-sized computers — or on the cloud — and do it in picoseconds.
[2012/12/18 16:57] Gwyneth Llewelyn: Excellent point, Extie
[2012/12/18 16:57] Zobeid Zuma: That does bring to mind something else… There *is* still a lot of information that’s either not on the web or is not easily accessible due to copyright, paywalls, etc.
[2012/12/18 16:57] Extropia DaSilva: Yes Zo..
[2012/12/18 16:58] ArtCrash Exonar: Don’t worry Facebook and Google want to eliminate copyrights…. heh
[2012/12/18 16:58] Gwyneth Llewelyn: So we do “better, faster” and this grows in complexity and speed every year. That’s good.
[2012/12/18 16:58] Extropia DaSilva: THere is this thing known as the deep web, which is basically all the websites that search engines cannot find. The deep web is VAST compared to the ‘visible web’.
[2012/12/18 16:58] Gwyneth Llewelyn: I have no issue with that bit of the argument 🙂
[2012/12/18 16:59] ArtCrash Exonar: What is the deep web? firewalled data repositories?
[2012/12/18 16:59] Gwyneth Llewelyn: The issue is, “ok, at some point, a supermind will emerge”, arguing from biology and neuroscience…
[2012/12/18 16:59] Extropia DaSilva: No it is…all the websites that are lost.
[2012/12/18 16:59] Gwyneth Llewelyn: Art: Google doesn’t index *everything*. Also, that’s a wrong question: the deep web is what Google doesn’t index 🙂 so we cannot search what is there… because we have no way to find it.
[2012/12/18 17:00] Extropia DaSilva: They are there, somewhere, on the internet. But we just do not know how to find them.
[2012/12/18 17:00] Gwyneth Llewelyn: It’s like, uh, dark matter 🙂 We can infer that it exists, but…. it’s dark because we cannot *see* it 🙂
[2012/12/18 17:00] ArtCrash Exonar: Does dark matter?
[2012/12/18 17:00] Extropia DaSilva: I do agree with one thing Gwyn said…
[2012/12/18 17:01] ArtCrash Exonar: ONE!
[2012/12/18 17:01] Ari (arisia.vita): it’s been great being with you all but I must fly…ty for a great talk Exti, and thanks to you all for a stimulating discussion…see you all again soon I hope…
[2012/12/18 17:01] ArtCrash Exonar: I agree with 1.5 things Gwyn said
[2012/12/18 17:01] Ari (arisia.vita): be well and happy…
[2012/12/18 17:01] ArtCrash Exonar: see you Ari
[2012/12/18 17:01] Extropia DaSilva: with the AI scenario it may be easier to judge when it has happened. ‘Cuz you get this neat robot or something that is like a super-duper genius and you can see ‘wow this is a super smart intelligence’…
[2012/12/18 17:01] Gwyneth Llewelyn: In 2005, I think, a friend of mine working for Google told me that each of their million servers had a copy of 6 billion websites in memory (RAM) and ten times as much on disk; they imagined that the whole web was at least another order of magnitude bigger, but they were unable to search further. Also, from an economical point of view, it made no difference, since they already had 99% of what everybody wants to find indexed on their databases 🙂 The remaining 1% have little relevance to 99% of all people, so….
[2012/12/18 17:02] Gwyneth Llewelyn: (bye Ari!)
[2012/12/18 17:02] Ari (arisia.vita): bye all
[2012/12/18 17:02] Ari (arisia.vita): *hugs*
[2012/12/18 17:02] ArtCrash Exonar: nitey!
[2012/12/18 17:02] Extropia DaSilva: But with the Internet scenario..the superhuman understanding may be distributed across a network of networks so vast we cannot even perceive it. It could happen and we would miss it.
[2012/12/18 17:03] Gwyneth Llewelyn: Right, Extie! Your scenario is much more plausible — we can even admit that we already have everything that is needed — but we cannot be aware of that supermind, nor can the supermind be aware of us; we function at different levels.
[2012/12/18 17:03] ArtCrash Exonar: My personal website of 1998 is super valuable don’t you know!
[2012/12/18 17:03] Extropia DaSilva: Of course:) Who would not value the thoughts of ArtCrash?
[2012/12/18 17:04] ArtCrash Exonar: haha
[2012/12/18 17:04] ArtCrash Exonar: That is before I was born, now that I think of it.
[2012/12/18 17:04] Gwyneth Llewelyn: Right now, Earth’s supermind might be chatting with other superminds across the galaxy and we have no way of finding out about them; but those superminds wouldn’t even be aware of individual humans living on Earth, like we’re not aware of our own neurons firing and exchanging chemical messages.
[2012/12/18 17:04] Gwyneth Llewelyn: Art: you have previous lives? 🙂
[2012/12/18 17:04] Extropia DaSilva: BTW I shall soon be posting this lecture as an essay on my blog so feel free to comment on it:)
[2012/12/18 17:05] Extropia DaSilva: Oh…
[2012/12/18 17:05] Gwyneth Llewelyn: Heh. I shall 🙂
