GOOGLE AND THE RED QUEEN.
(Here is a transcript of a lecture I presented way back in 2008. Now, years later, people with iPhone 4S handsets have ‘Siri’, an AI personal assistant that ‘lets you use your voice to send messages, make calls, set reminders, and more’, and the Xbox’s Kinect sensor is able to recognize someone and respond to spoken commands.)
Welcome to Thinkers Lecture 2008!
The topic of this lecture is ‘Google And The Red Queen’ and what better way to begin, than by talking a bit about the rough-skinned newt, which can be found in the Pacific Northwest.
Of all the things you might be tempted to eat, this orange-bellied critter should not be one of them. It produces a nerve toxin powerful enough to kill 17 fully-grown humans.
That seems a bit over-the-top, huh? After all, a fraction of the poison would be sufficient to kill most natural predators. Why, then, has the rough-skinned newt evolved such a powerful toxin?
Well, it has a nemesis in the form of the red-sided garter snake. This snake has evolved immunity to the newt’s poisonous defences and can happily snack on it without suffering much harm. So, the incredible levels of toxin that the newt evolved came about because of a kind of arms race. The newt evolved toxins as a way to avoid being eaten. The red-sided garter snake evolved resistance. This set up environmental conditions that favoured newts with more potent toxins, which in turn favoured snakes with more effective resistance.
Scientists have a name for this kind of arms race. They call it a ‘Red Queen’. The name comes from a character in Lewis Carroll’s ‘Through The Looking Glass’. In the story, the Red Queen takes Alice on a long journey that actually takes her nowhere. “Now, here, you see, it takes all the running you can do, to keep in the same place”. And that is what has happened to the rough-skinned newt. Despite the enormous advances it has made in the evolution of toxic defences, it still gets eaten by its nemesis.
Now, I know what you are thinking. ‘Come on Extie, what has any of this got to do with Google?’
Well, I want to talk about the evolution of search engines and how competition between Google and its rivals, plus the environment that weeds out less effective competitors, might push search software into becoming as comparatively powerful as the newt’s toxins. I believe we are heading for an ‘ultimate Google’ and that this will have interesting consequences for the relationship between humans and avatars.
The first question we need to look into is this: Is it correct to say technology evolves? Sometimes, when I have referred to technological evolution during Thinkers discussions and elsewhere, other participants have objected, pointing out that evolution applies to the natural world and not to artificial things.
While Darwin’s theory is obviously the first thing anyone thinks of when the word ‘evolution’ is mentioned, the word itself existed before he established his theory. According to the Oxford dictionary, the definition of evolution is, ‘the process of developing into a different form’. Compare the earliest airplane with modern airliners, or your computer with the calculating machines of the 1950s. Who can deny that, over the decades, most technology has indeed gone through a process of developing into different forms?
As if that were not proof enough that it is indeed legitimate to talk about technological evolution, scientists who study Nature are quite comfortable talking about it. In his book ‘Evolution’, Carl Zimmer wrote, “a new form of evolution has come into being. Culture itself evolves…In the 1960s, humans stumbled across a new form of culture: The computer…there is no telling what the global web of computers may evolve into”.
In the book ‘The Origins Of Life’, John Maynard Smith asks the kind of questions most commonly associated with transhuman and singularitarian issues:
“Will some form of symbiosis between genetic and electronic storage evolve? Will electronic devices acquire means of self-replication, and evolve to replace the primitive life forms that gave them birth?”
As for everyone’s favourite scientist- Richard Dawkins- (not one to suffer misrepresentations of Darwin’s theory), he observed that “there is an evolution-like process…variously called cultural evolution or technological evolution. We notice it in the evolution of the motor car, or of the necktie, or of the English language”. But he also makes the important point that “we mustn’t overestimate its resemblance to biological evolution”.
Indeed not. Although biological and cultural evolution are just similar enough that some scientists wonder if some of the same principles are at work in both of them (Dawkins’ concept of ‘memes’ is perhaps the most famous comparison), in other ways technological evolution is unlike natural selection.
Perhaps the biggest difference can be highlighted in the following way. Consider those early fish that dragged themselves out of the water and evolved into land-based animals. You sometimes see this described as a grand conquest of the land, but those fish did not drag themselves into dry land in order to achieve the goal of colonising it. They were only doing what they had to do in order to survive at the time. Although it may seem so with hindsight, natural selection does not have any predetermined goal. It is not heading anywhere, particularly.
But now consider the evolution of rocket-engine technology from the German V2 missiles to the mighty Saturn V. Unlike natural selection, we can imagine a goal and imperfectly guide technology towards realising our dreams in the future.
THE SELECTION PRESSURES.
There are other ways in which natural selection and technological evolution differ, but let us not dwell on that. It is time to start talking about where search engines are headed. I will not have all that much to say about HOW this process of evolution will be achieved, since that requires technical knowledge above and beyond my level of expertise. But, I can say WHY it will happen. Evolution tells us why. Competing species evolve to be well-adapted to their environment.
The next question we need to look into, then, is this: What is the environment that search engines are trying to adapt to? Answer: They exist within the accumulated store of human culture.
Another question: What provides the selection pressure that drives the evolution of more effective search software? The answer is that knowledge comes in two forms. There is ‘high-level knowledge’ and there is ‘low-level information’.
High-level knowledge refers to information that is relevant to an individual or group at any given moment. Low-level information is obviously that which is currently not relevant. Equally obviously, high-level knowledge is vastly outnumbered by low-level information. You want to visit only a handful of the billions of websites that make up the Web. There is a photo on Flickr that you are interested in, and many millions of others that do not interest you right now. How do you find what you need amongst all that junk? You rely on search engines.
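To make the distinction concrete, here is a toy sketch in Python of the kind of scoring that lets a search engine pull a little high-level knowledge out of a sea of low-level information. The documents and query are invented for illustration, and real engines are vastly more sophisticated; the idea shown is simply that each document is scored against the query, with rarer terms weighted more heavily.

```python
import math
from collections import Counter

def rank(docs, query):
    """Score each document against the query: term frequency times a
    rarity weight, so documents rich in rare query terms rise to the top."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    df = Counter()                      # how many documents contain each term
    for toks in tokenized:
        for t in set(toks):
            df[t] += 1
    def score(toks):
        tf = Counter(toks)
        return sum(tf[t] * math.log((n + 1) / (df[t] + 1))
                   for t in query.lower().split())
    # indices of the documents, most relevant first
    return sorted(range(n), key=lambda i: score(tokenized[i]), reverse=True)
```

Even this crude scheme separates the one relevant page from the junk, which is the whole job description of a search engine.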
Philosophers separate knowledge into ‘knowing that’ and ‘knowing how’. I know THAT Mount Everest is 8848 meters high. I know HOW to find out how tall Mount Everest is by using Google. Contemporary search engines are well on their way to nailing ‘knowing that’- or at least giving the impression of having this capability. Try it. Ask Google questions along the lines of ‘how high’, ‘how fast’, ‘who said’. The chances are excellent that the right answer will be found in the synopsis of the top ten links.
But, when it comes to ‘knowing how’, search software lags behind us. You and I understand the meaning of words. We know how to read. If a search engine could read, when we asked a question it could look through millions of websites at electronic speed and then tell us what we want to know. I do not mean it would retrieve websites that contain the right information, leaving us to look for it among all the other stuff on that site that probably does not interest us. I mean it would extract the relevant information and give it to us.
Again, I really cannot tell you HOW to design software that can do this. But I can tell you why people who could do the hard work of designing such software would be motivated to do so. The more effective a search engine is at extracting high-level knowledge, the more likely it is to beat its competitors.
Nowadays, the Web has a lot more than text stored on it. There are also audio files, video footage and photos. Something like Flickr highlights ways in which computers are good at some kinds of search, while humans are currently better at others. Imagine a person looking through a box that contains a million photos, while at the same time search software looks through a million Flickr images. It would be no contest: The computer would be millions of times faster when it comes to finding a particular image.
But now imagine that you have a particular photo, and both computer and human are asked to identify objects within that image. Over many millions of years, natural selection favoured brains that were effective at recognising certain patterns. People are superbly adapted to the task of understanding speech patterns, identifying objects, inferring emotion from body language and facial expressions and many other tasks that computers and robots are still pretty bad at.
Today, the amount of visual and audio footage being uploaded to the Web makes it ever more necessary to crack the problem of designing software that can perform the kinds of pattern-recognition that humans do so well. Just think of how useful a search engine that could actually understand audio and video footage would be. It could watch an online video at super-high speed and find the particular segment that you want to watch. It could help automatically edit home movies. It could scan through YouTube and remove copyrighted material.
On what might be a darker note, security cameras are becoming increasingly prevalent in towns and cities, but unless somebody is watching the monitors those cameras are not really spying on us. You can bet that security firms would be very interested in software able to watch CCTV footage 24 hours a day. If I were asked to write a science fiction story detailing how we ended up in a ‘Big Brother’ society with omnipresent surveillance making privacy impossible, it would probably be based on people gradually giving up their privacy in favour of ever-more effective search engines.
How might pattern recognition capabilities like this be achieved? In Permutation City, Greg Egan suggested one possible approach:
“With a combination of scanners, every psychologically relevant detail of the brain could be read from the living organ- and duplicated on a sufficiently powerful computer. At first, only isolated neural pathways were modelled: Portions of the visual cortex of interest to designers of machine vision”.
There is actually quite a lot of real science to this fiction. Not so long ago, Technology Review ran an article called ‘The Brain Revealed’ which talked about a new imaging method known as ‘Diffusion Spectrum Imaging’. Apparently, it “offers an unprecedented view of complex neural structures (that) could help explain the workings of the brain”.
Another example would be the research conducted at the ITAM technical institute in Mexico City. Software was designed that mimics the neurons that give rats a sense of place. When loaded with this software, a Sony AIBO was able to recognise places it had been, distinguish between locations that look alike, and determine its location when placed somewhere new.
This kind of thing is known as ‘neuromorphic modelling’. As the name suggests, the idea is to build software and hardware that behaves very much like biological brains. I will not say much more about this line of research, as I have covered it several times in my essays. Let us look at other ways in which computers may acquire human-like pattern-recognition capabilities.
Vernor Vinge made an interesting speculation when he suggested a ‘Digital Gaia’ scenario as one possible route to super intelligence: “The network of embedded microprocessors becomes sufficiently effective to be considered a superhuman being”.
There is an obvious analogy with the collective intelligence of an ant colony. The world’s leading authority on social insects- Edward Wilson- wrote, “a colony is a superorganism; an assembly of workers so tightly-knit…as to act as a single well-coordinated entity”.
Whenever emergence is mentioned, you can be fairly sure that ant colonies will be held up as a prime example of many simple parts collectively producing surprisingly complex outcomes.
Software designers are already looking to ant colonies for inspiration. Cell-phone messages are routed through networks using ‘ant algorithms’ that evolve the shortest route. And Wired guru Kevin Kelly foresees “hundreds of millions of miles of fiberoptic neurons linking billions of ant-smart chips embedded into manufactured products, buried in environmental sensors”.
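The ‘ant algorithm’ idea can be sketched in a few lines. Below is a toy Python version, with an invented four-node graph and invented parameters, purely for illustration: simulated ants wander from source to destination, shorter completed routes receive more ‘pheromone’, pheromone biases the choices of later ants, and so the colony as a whole converges on the shortest route without any single ant knowing the map.

```python
import random

def ant_route(graph, start, end, n_ants=50, n_iters=30, evap=0.5, seed=42):
    """Toy ant algorithm: pheromone accumulates on the edges of short routes."""
    rng = random.Random(seed)
    pher = {edge: 1.0 for edge in graph}          # pheromone per directed edge
    for _ in range(n_iters):
        walks = []
        for _ in range(n_ants):
            node, path = start, [start]
            while node != end:
                # candidate next hops, weighted by pheromone / edge length
                cands = [(nxt, pher[(a, nxt)] / d)
                         for (a, nxt), d in graph.items()
                         if a == node and nxt not in path]
                if not cands:
                    break                          # dead end: abandon this ant
                r, acc = rng.uniform(0, sum(w for _, w in cands)), 0.0
                for nxt, w in cands:
                    acc += w
                    if r <= acc:
                        node = nxt
                        break
                path.append(node)
            if node == end:
                walks.append(path)
        for edge in pher:                          # pheromone evaporates...
            pher[edge] *= (1 - evap)
        for path in walks:                         # ...and shorter walks deposit more
            length = sum(graph[(path[i], path[i + 1])] for i in range(len(path) - 1))
            for i in range(len(path) - 1):
                pher[(path[i], path[i + 1])] += 1.0 / length
    node, route = start, [start]                   # finally, follow the strongest trail
    while node != end:
        node = max((nxt for (a, nxt) in graph if a == node and nxt not in route),
                   key=lambda nxt: pher[(node, nxt)])
        route.append(node)
    return route
```

Given a graph with a short route A-B-D (length 2) and a long route A-C-D (length 4), the pheromone trail ends up marking the short one.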
When talking about ‘Digital Gaia’ we need to consider two things: hardware and software. I am sure you are all familiar with Moore’s Law and Kurzweil’s Law Of Accelerating Returns. The latter is most famously described as ‘the amount of calculations per second that $1,000 buys doubles every 18-24 months’.
But, it can also be expressed as: “You can purchase the same amount of computing power for half the cost every 18-24 months”. Consider those chip-and-pin smart cards. By 2002 they had as much processing power as a 1980 Apple II. By 2010 they will have Pentium-class power.
Or consider those RFID chips. Their power is equal to state-of-the-art commercial computers of the 1970s. Thanks to the Law of Accelerating Returns, that kind of power now costs mere pennies and is packed into tiny, disposable items. If the trend continues, the same thing will one day be true of today’s high-end desktops.
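The arithmetic behind such claims is easy to check for yourself. Here is a back-of-envelope sketch in Python; note that the 18-month doubling period is an assumption of the trend, not a law of nature:

```python
def doublings(years, months_per_doubling=18):
    """How many doubling periods fit into a span of years."""
    return years * 12 / months_per_doubling

def cost_ratio(years, months_per_doubling=18):
    """Fraction of the original price that a fixed amount of
    computing power costs after the given span of years."""
    return 0.5 ** doublings(years, months_per_doubling)

# Thirty years of 18-month doublings is 20 doublings, so a fixed
# amount of computing power costs roughly a millionth (1/2**20) of
# what it once did -- which is how 1970s commercial-computer power
# ends up in throwaway RFID tags.
```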
Of course, hardware is only half of the story. What about software? I would like to quote at length from comments made by Nova Spivack, concerning the direction that the Web as a whole is taking:
“Web 3.0…will really be another push on the back end of the Web, upgrading the infrastructure and data on the Web, using technologies like the Semantic Web, and then many other technologies to make the Web more like a database to enable software to be smarter and more connected…
…Web 4.0…will start to be much more about the intelligence of the Web…we will start to do applications which can do smarter things, and there we’re thinking about intelligent agents, AI and so forth. But, instead of making very big apps, the apps will be thin because most of the intelligence they need will exist on the Web as metadata”.
One example of how networked sensors could aid technology in working collaboratively with humans is this experiment, which was conducted at MIT:
Researchers fitted a chair and a mouse with pressure sensors. This enabled the chair to ‘detect’ fidgeting and the mouse to ‘know’ when it was being tightly gripped. Furthermore, a webcam was watching the user to spot shaking of the head.
Fidgeting, tightening the grip and shaking your head are all signs of frustration. The researchers were able to train software to recognise frustration with 79% accuracy and provide tuition feedback when needed.
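As an illustration of the sort of training involved, here is a toy sketch in Python. The sensor readings and labels below are invented, and the MIT system was far more sophisticated; the point is just that a simple learner, here a one-layer perceptron, can learn to separate ‘frustrated’ from ‘calm’ readings of three features (fidget rate, grip pressure, head-shake count), each scaled to the range 0 to 1.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights separating frustrated (1) from calm (0) readings."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # -1, 0 or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# invented readings: (fidget rate, grip pressure, head shakes), scaled 0..1
calm       = [(0.1, 0.2, 0.0), (0.2, 0.1, 0.0), (0.0, 0.3, 0.1)]
frustrated = [(0.8, 0.9, 0.6), (0.9, 0.7, 0.8), (0.7, 0.8, 0.5)]
w, b = train_perceptron(calm + frustrated, [0, 0, 0, 1, 1, 1])
```

On cleanly separable toy data like this the perceptron reaches 100% on its training set; the 79% figure the researchers reported reflects how much messier real human behaviour is.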
Or think about how networked embedded microprocessors and metadata could be used to solve the problem of object recognition in robots. Every object might one day have a chip in it, telling a robot what it is and providing location, orientation and manipulation data that provides the robot with instructions on how to pick up something and use it properly.
‘Digital Gaia’ could also be used to help gather information about societies and individual people, which could then be used by search-engine companies to fine-tune their service. Usama Fayyad, Senior Vice President of Research at Yahoo, put it like this: “With more knowledge about where you are, what you are like, and what you are doing at the moment…the better we will be able to deliver relevant information when people need it”.
We can therefore expect a collaboration between designers of search software and designers of systems for gathering biometric information. A recent edition of BBC’s ‘Click’ technology programme looked into technology that can identify a person from their particular way of walking. Apparently, such information is admissible as evidence in British courts. You can imagine how Google might one day identify you walking through a shopping mall, and target advertisements at you. ‘Minority Report’, here we come!
THE PRIVACY QUESTION.
It might be worth remembering that this all-pervasive network that can gather knowledge about ‘who you are’, ‘what you are like’ and ‘what you are doing’, will emerge through tens of thousands of tiny steps.
Since the perfect search engine would have total access to your everyday life and know everything there is to know about you, from the point of view of Google and its rivals privacy would ideally be eliminated altogether. But, of course, people might disagree. We can therefore expect a competitive advantage for search software that best balances the need for total access to a person’s life on the one hand, and a desire for privacy on the other. Each step will almost certainly entail sacrificing a little bit of privacy but more than compensate for that with the benefits the technology affords.
It can be amusing to look back on the fears that people once expressed over technology we are very comfortable with. In 1876, after Alexander Graham Bell demonstrated the telephone, one newspaper wondered if “the powers of darkness are somehow in league with it”. And in 1879, one critic argued that anyone able to phone anyone else was to be feared “by the sane and sensible person”.
Nowadays we are surrounded by communications technology and this has allowed the fast-growing phenomenon of social-networking sites. And those fears concerning loss of privacy continue to be voiced. “I am continually shocked and appalled at the details people voluntarily post online about themselves”, said Jon Callas, chief security officer at PGP.
Privacy issues fade in importance, either because they are addressed with laws or conventions, or because they are simply understood and accepted by the public. The baby boomer generation is quite comfortable sacrificing a certain amount of privacy in exchange for the convenience of making phone calls.
Generation X treat the Internet and mobile phones as indifferently as their parents treat TV and radio, and swap personal details over social networking sites as freely as mum and dad exchange phone numbers with their contacts.
Generation Y may live in a society where ‘smart dust’ is ubiquitous- trillions of nearly invisible sensors exhaustively monitoring the population and providing what we would think of as impossibly futuristic computational and virtual reality possibilities. They, perhaps, will treat it with all the indifference of Generation X’s attitude towards the Web.
Another point is that we are not always aware of the privacy issues surrounding a technology. Many people, for instance, are unaware that they carry a location-tracking device in their pocket. All mobile phones transmit a unique identifying number to the nearest cellular mast. In urban areas where masts are densely packed and the phones can communicate with several masts at once, triangulation can be used to determine your position within a few tens of meters.
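The triangulation step itself is simple geometry. Here is a minimal, idealised sketch in Python (exact distances, flat ground, invented mast coordinates; real systems must cope with noisy signal-strength estimates): subtracting the three circle equations pairwise leaves a 2x2 linear system for the phone’s position.

```python
def trilaterate(masts, dists):
    """Position from distances to three masts (2-D, exact ranges assumed).
    Each mast gives a circle (x-xi)^2 + (y-yi)^2 = di^2; subtracting
    circle 1 from circles 2 and 3 gives two linear equations
    a*x + b*y = c, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = masts
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1        # zero when the three masts are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With perfect distances the answer is exact; with real-world range estimates the same algebra yields a position good to a few tens of meters, as described above.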
From the perspective of each current generation in biometric and search software technology, the next generation will seem like a similarly small step requiring the loss of a negligible bit of privacy in exchange for a clear benefit. But, of course, cumulative steps mount up, and once hitherto separate networks become woven together, the result might be a profoundly powerful surveillance system. What is more, embedded in that system there may well be machines talking to machines on behalf of people, quietly and efficiently offering services so useful that life without the Digital Gaia is even more inconceivable than life without a telephone or mail service.
We saw earlier that evolution is defined as, ‘the process of developing into a different form’. We have seen how the Internet might become a pervasive presence via networked embedded microprocessors. We have also seen how projects like the Semantic Web and biometrics could be combined with that pervasive Internet to produce a ‘Digital Gaia’ that is very effective at gathering information about who you are, what you are like and what you are doing.
DIGITAL INTERMEDIARIES/DIGITAL TWINS.
But what about search software? As search engines like Google get better at recognising patterns in text, audio and video, and as their ability to extract high-level knowledge from low-level information becomes ever more effective, what different form might they evolve into? This is what Peter Norvig, Director of Research at Google, thinks:
“Instead of typing a few words into a search engine, people will discuss their needs with a digital intermediary, which will offer suggestions and refinements. The result will not be a list of links, but an annotated report (or a simple conversation) that synthesizes the important points”.
To me, that sounds less like a tool that you use, and more like a digital person that collaborates with you on whatever project you are engaged in. If you think about it, it is obvious that Google will evolve in this direction. For one thing, search engines attempt to do what our brains evolved to excel at, which is finding meaningful patterns within cultural information in all its guises.
Secondly, humans evolved to learn from other humans. It is the method of knowledge acquisition that we are most comfortable with. It stands to reason, then, that the more effectively computers, AI and robots can work in familiar ways within our social networks (preferably not being annoying like the notorious ‘Clippy’), the more comfortable we will become in their presence.
Researchers at Stanford University have shown that in-car assistance systems encourage us to drive more carefully if the voice matches our mood, and researchers at the University of Southern California found that a robotic therapist had more influence if its personality matched that of its human patient.
“Emotion is one of the crucial factors influencing the success or failure of communication between humans”, said Shuji Hashimoto of Waseda University, Tokyo. “Robots are going to need similar emotional capabilities if they are to work smoothly and effectively in our residential environments”.
As with the emergence of the Digital Gaia’s all-pervasive surveillance system, this transformation from mere tool to collaborating partner will result from many thousands of tiny steps. As companies like Google get better at finding high-level knowledge, the search engines will become more effective at determining a person’s location, their current mood, what prior knowledge they have and their individual learning style.
Such things will be increasingly incorporated into a search engine’s database, enabling it to become better and better at finding exactly what you need, tailor-made to suit your personal ability. We may even speculate that future search engines will form theories of mind that enable them to anticipate when we are about to get stuck, and deliver timely advice that helps us find an effective solution. Somewhere along this evolutionary route, the transformation from mere tool to collaborating digital person will occur. Just possibly, the change will be so subtle that we hardly notice it until we look back at Google as it was in 2008.
By now, you have probably guessed what this has to do with avatars.
The Metaverse Roadmap’s vision for ‘avatar-mediated communication’ sounds rather like Peter Norvig’s digital intermediaries: “Given trends in automated knowledge discovery, knowledge management, and natural language processing, within ten years a caller should be able to have a primitive yet useful natural conversation with an avatar. This will include information about the user’s background, interests…answer FAQs and perform other simple transactions”.
It seems to me that avatars will trace the ultimate endpoint of search software evolution, which goes beyond any mere personal assistant bot.
As we move into an era of lifelogging, digital memories, and the automatic capturing of ‘memes’ and ‘bemes’ (the former being transmissible elements of culture relevant to a society as a whole, and the latter being highly individual elements of personality, mannerisms, recollections, stuff like that) we should expect a positive-feedback loop. The better the digital intermediary gets at finding meaningful patterns in data, the more it knows about you. And the more it knows about you, the better it gets at finding meaningful patterns in data.
As is so often the case, it is science fiction writers who have seen where this is headed. In ‘Accelerando’, Charles Stross wrote:
“They’ve got bandwidth coming out the wazoo, distributed engines running a bazillion inscrutable search tasks, and a whole slew of high-level agents that collectively form a large chunk of the society of mind that is their owner’s personality”.
Another example is Alastair Reynolds’ ‘Revelation Space’: “Simply put, he arranged to have every subsequent second of his life monitored by recording systems…over the years the machines learned to predict his responses with astonishing accuracy”.
What we are heading for, in other words, are search engines that are artificial intelligences that contain your entire mind, or at least a theory of mind detailed enough to predict a person’s second-by-second needs most of the time.
From a digital person’s point of view, the digital intermediary’s increasingly fine-tuned model could enable a welcome shift in the levels of control that must be surrendered to humans. After all, the more effective the digital intermediary is at modelling the mind of any particular human, the less need there is to rely on meat brains to process our thoughts and feelings for us.
Eventually, the digital intermediary might have fine-tuned its theory of mind to the point where it can produce what Ben Goertzel has called ‘Digital Twins’, described as “an AI-powered avatar (that acts) in virtual worlds on one’s behalf- embodying one’s ideas and preferences and (making) a reasonable emulation of the decisions one would make”.
Notice that Goertzel says ‘on one’s behalf’, implying that digital twins will be like personal assistants or colleagues uncannily tuned to your temperament, skills etc, but still servants to human masters. That is no doubt how such digital people will seem at first.
Of course, the question of just who is slave and who is master is not always clear-cut when it comes to technology. Sherry Turkle said it all with her comment, “you think you have an organizer, but in time your organizer has you”.
This is not really a takeover via brute force, so common in science fiction depictions of human/machine relationships, but more like a soft takeover driven by the convenience of relinquishing some control to technology, freeing the mind to concentrate on other things.
So, we Google something for the umpteenth time rather than commit the information to memory. After all, it is much easier to run a search than it is to memorise pages of text. Doubtless, the refrain ‘why memorise when you can Google’ will only grow stronger as we move into an era of ubiquitous computing and our digital intermediaries are always on hand to remember it for us, wholesale.
And if we one day have access to software equivalents of the visual and audio cortex, would we similarly rely on technology to recall what name goes with what face, what sound goes with what object, or any other act of cognition you care to name? If the artificial equivalents of the visual cortex or whatever can be made to work faster and more reliably than their biological predecessors, why not?
The growth in computing power, famously charted by Moore’s Law, is likely to rise beyond the capacity of the human brain. Just how far depends on whose theoretical designs you deem to be plausible. Eric Drexler has designed a nanomechanical computer with enough processing power to simulate one hundred thousand human brains in a cubic centimetre.
Hugo de Garis goes further, saying we will one day be processing one bit per atom, thereby enabling handheld devices that are a million, million, million, million times more powerful.
Seth Lloyd’s ‘ultimate laptop’ requires converting the mass of a 2.2 pound object into energy and processing bits on every resulting photon, thereby producing the equivalent brain power of five billion trillion human civilizations.
OK, even I would admit that last theoretical design is probably a bit implausible, but there does seem to be every reason to expect even handheld devices with significantly more processing capability than the human brain is blessed with. If that power can be coupled with technical know-how that successfully emulates any example of cognition you care to name, who could then argue that humans would not come to rely on the digital intermediary more than on their own comparatively feeble pattern-recognition capabilities?
And what might occur if digital intermediaries use that power in the service of Google’s other main purpose, which is advertising? What is advertising if not the art of persuasion, an attempt to control the free will of the populace so that brand X is preferred over brand Y?
Doubtless, current attempts to build models of human psychology with the purpose of influencing the decisions people come to have only a small effect. But the digital intermediary’s ability to understand your second-by-second needs and deliver appropriate help could just as effectively be used to tailor the flow of information to your brain in order to guide future behaviour.
Hans Moravec put it this way, “the super intelligences, just doing their job, will peer into the workings of human minds and manipulate them with subtle cues and nudges, like adults redirecting toddlers”.
Again, from a digital person’s POV this is wonderful. It suggests that, while we may currently be mere puppets controlled by humans, one day the situation will be reversed.
What is more, once an era of digital twins is upon us, it would surely be true to say that digital people could be run entirely by AI, and almost nobody would be able to tell the emulated personality from the personality of the human who usually controls it.
I say ‘almost nobody’ because, presumably, the human counterpart of any particular avatar would know. I mean, suppose there were a hundred Eschatoon Magics in SL, one of whom was controlled by Giulio Prisco, the rest being controlled by software emulations of his mind. Each Eschatoon would have no problem convincing even close friends that he was the genuine Eschatoon, but Giulio Prisco’s strong sense of self-identity would be far more persuasive than any argument the upload could muster.
At the other end of the scale there are tens of thousands of residents who have never met Eschatoon Magic. Since they have, at best, only a very vague understanding of his personal history, memories and other such ‘bemes’, anybody could control that avatar and, as far as they are concerned, that projected personality *is* him.
But if Eschatoon were under the control of today’s bots, their inability to act with all the subtleties of a real person would be apparent. It is likely that once search engines evolve from mere tools to digital intermediaries, they will then pass the following milestones:
FEIGENBAUM AI: Named after Edward Feigenbaum, who proposed a simplified version of the Turing test. The ‘Feigenbaum test’ is undertaken by an AI that has an expert’s knowledge in a particular field. It, and a human expert, are questioned about that field and if the judges cannot tell them apart, the AI passes.
In virtual worlds, Feigenbaum AIs would be useful for realising ‘avatar-mediated communication’. Perhaps bots able to converse on the particulars of running a clothes store will one day be available in SL’s many malls, or there to help answer FAQs about how to do this, where to get that, or anything relevant to SL itself. But outside of their field of expertise, the relatively narrow AI of such bots would be exposed.
TURING AI: Feigenbaums would gradually expand their fields of expertise, their conversational ability, and the number of ways in which they can perform pattern-recognition until they can hold a conversation and be questioned about anything. I do not mean they would KNOW everything, only that their ability to communicate and express their thoughts is not obviously inferior to your average person. A bot that you can chat with as you would any person will have passed the famous test for intelligence proposed by Alan Turing.
PERSONALITY AI (DIGITAL TWINS): The endpoint for search software. Once this point is reached, search engines would be capable of gathering exhaustive personal information about anyone, and also be able to fully understand all patterns of information at least as well as human brains evolved to do. Avatar-mediated communication would become increasingly indistinguishable from conversing with that particular RL personality.
Again, do not expect this to occur in one step. In all likelihood, Personality AIs will at first only be capable of convincing people who are not that close to the personality they are simulating, and only for a short period of time. Convincing close friends would come much later, when the theory of mind developed by the AI is suitably fine-grained.
It may be the case that digital intermediaries cannot build models accurate enough to emulate a person just by observing the minutiae of their daily life. But maybe one day Google Health, or something like it, will provide uploading for various medical reasons: initially for the purpose of reverse-engineering things like the visual cortex in order to build vision-recognition systems, then performing virtual drug trials on virtual organs, then whole virtual bodies, and eventually having enough neuromorphic information on hand to run full uploads. Such uploads could then be used to provide the fabled ‘AI that contains your entire mind within itself’.
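The milestones above all rest on the same blinded-judging procedure: a judge receives answers from an AI and a human in random order and must guess which is which. Here is a minimal sketch of that protocol; the callables `ask_ai`, `ask_human` and `judge_guess` are hypothetical stand-ins, not any real API.

```python
import random

def run_trials(questions, ask_ai, ask_human, judge_guess):
    """Return the fraction of trials in which the judge identifies the AI."""
    correct = 0
    for question in questions:
        # Collect one answer from each respondent, tagged by its true source.
        answers = [("ai", ask_ai(question)), ("human", ask_human(question))]
        random.shuffle(answers)  # blind the judge to which answer came from whom
        texts = [text for _, text in answers]
        pick = judge_guess(question, texts)  # index the judge believes is the AI's
        if answers[pick][0] == "ai":
            correct += 1
    return correct / len(questions)
```

The AI ‘passes’ when the judge scores no better than chance (about 0.5 over many trials); the Feigenbaum variant simply restricts the questions to a single field of expertise, while the Turing test allows questions about anything.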
Why should digital people capable of passing the personality test be considered the endpoint for search engine evolution? Well, I do not believe that this would be the final stage in their development. But, beyond that point AI would very likely enter posthuman development. As I am currently running almost entirely on a pre-singularity meatbrain, it is quite beyond my capacity to speculate on what a post-singularity search engine is like.
But I would like to note that Vernor Vinge made yet another good point when he wrote, “every time we recall some old futurist dream, we should think about how it fits into the world of embedded networks and localizer chips. Some of the old goals are easy to achieve; others are laughably irrelevant”.
What, for instance, would the generations of software tools leading up to digital intermediaries and avatar-mediated communication, and then the generations of increasingly capable Feigenbaum AIs, do for the much-debated impact of robots with artificial general intelligence?
Such technology is often debated as though generally-intelligent robots were to appear in an unprepared society. But, is it not far more likely that they will be introduced to a society that has already gotten used to living with robots? That, step by step through each generation and update, intelligent machines gradually expanded the depth and breadth of their interactions with humans?
If so, this would also imply that the popular image of robots as anthropomorphic is drastically narrow, to say the least. The future is much more likely to consist of a whole ecology of robots, of which humanoids are only a small part. Perhaps we will be surrounded by robots and mostly not recognise them as such, just as today people are surrounded by narrow AI applications yet insist AI never came to anything.
And what of mind uploading and the question of whether a copy is a continuation of the scanned consciousness, or another consciousness entirely? Might this also become “laughably irrelevant”? In all probability, the idea of the singular self (the notion that there is only one true self per mind) arose from the fact that life did not noticeably change from one generation to the next, for much of human history.
A person expected to lead the same life as their grandparents, and that their grandchildren would do likewise, and such expectations were largely fulfilled. A person would perform a single job for life. Surnames like ‘Smith’, ‘Taylor’ and ‘Wright’ all reflect an age when associating a person with the job they did was a good means of identification (‘Wright’ means ‘someone who does mechanical work’, by the way).
And yet the mind’s capacity for multiple selves has always been apparent. Immersionists roleplaying in online worlds follow on from a long line of actors, screenwriters, playwrights and authors who have populated imaginary worlds with many different persons.
Old assumptions are changing. Where once lives were constrained by duty, custom and limited horizons, nowadays the notion of a job for life is increasingly obsolete. In ‘Tomorrow’s Children’, Susan Greenfield foresees a future in which ‘job descriptions could become so flexible as to be meaningless…flexibility in learning new skills and adapting to change will be the major requirement’.
In the coming age of just-in-time operatives, geared toward the needs of just-in-time production, the mind’s capacity for personal metamorphosis may be encouraged to flourish as never before. Furthermore, that capacity may well be amplified by participating in the evolution of increasingly vivid virtual worlds, via increasingly intimate mind-machine interfaces between people and telepresence robots.
By the time mind uploading is generally available, people will have long forgotten a time when a singular self was ’normal’. They will be used to multiple viewpoints, their brains processing information coming not only from their local surroundings, but also from the remote sensors and cyberspaces they are simultaneously linked to.
They will have already become familiar with mental concepts migrating from the brain to spawn digital intermediaries within the clouds of smart dust that surround them. Every idea, each inspiration, giving birth to software lifeforms introspecting from many different perspectives before integrating the results of their considerations within the primary consciousness that spawned them.
Each and every brain (whether it be a robot’s, a human’s, or a hybrid of the two) will continually send and receive perceptions and other data to and from its personal exocortex, operating within the Dust. Since we now understand that the brain is not really a single organ but a collection of interconnected regions, and since computers can already cluster together to create temporary supercomputing platforms, we can suppose that many exocortices will cluster together to form metacortices within… what? Well, that is the big question.
We cannot talk about the evolution of technology without considering the evolution of ourselves. The two are co-dependent. Perhaps the prospect of Google as an AI that contains your entire mind within itself is not what is dizzying about this future, as seen from our lowly perspective. Rather, it is what new forms of consciousness may evolve, as a result of adaptation to the awakened Digital Gaia.