ALT! WHO GOES THERE? PART 6A.

By Extropia.

INTRODUCTION
As we head toward the future, how will alts evolve? In asking such a question, we are inquiring into the nature of identity and the self. How will such things change over time? What kind of people will inhabit the future?
THREE REASONS FOR CHANGE
Perhaps tomorrow’s people will be little different from ourselves? There are several reasons to suppose this will not be the case. One such reason is the growing proliferation of networked embedded computers, sensors, and telecommunication devices. The digital age is changing the way we work, how we socialise, how we process information, the relationship of the past to the present, and the boundary between the body and the rest of the world. All of these play an important role in shaping our sense of identity and selfhood. If they are to change (perhaps radically so), then personal identity and selfhood are likely to change with them.
Another reason to foresee change comes from the fact that science is in the early stages of another revolution in how we conceive of ourselves. In Western cultures there have been four such revolutions so far. The Copernican revolution led to a radical revision of how we conceived our position in the universe. The Darwinian revolution led to a dramatic revision of our relationship to the natural world. The Freudian revolution challenged the assumption that we are always in rational, conscious control of ourselves. And the genetic revolution redefined life from a mysterious élan vital to an information technology that can be cut and pasted to create something that blurs the boundaries between the biological and the technological.
The fifth such revolution is coming from the brain sciences. Hitherto, studies of the brain have been anatomical, categorising gross structures while ignoring the finer details of organisation and (more importantly) how they actually function. The reason such detail was left out is that we lacked the technology to observe it. But now, thanks to the growing proliferation of sensors and computer technology, that is changing. According to James Albus of DARPA, researchers are developing a range of sensor technologies that can observe ever-finer details of the brain’s structure and also monitor the way living brains process information. This, according to Albus, “will extend the frontiers of human knowledge to include a scientific understanding of the processes in the human brain that give rise to the phenomenon of mind”.
This fifth revolution has been given the name ‘Neurocentrism’ and it has two meanings. On one hand, it refers to the fact that there is a migration to neuroscience from other fields. Specialists in psychology, theology, anthropology, economics, philosophy, advertising and more are beginning to incorporate insights into what actually goes on in the brain while we barter for the best deal, or meditate, or pay more attention to this billboard poster rather than that one. And this brings us to the second meaning of neurocentrism: That what it is to be human, what it is that determines who you are, is located in the brain. Lone Frank explained:
“Whereas before, speculations turned to culture and the psyche, which was strangely disconnected from the organism, the physical brain is now prominent and steadily becoming the reservoir and end station for all the questions about human nature and existence”.
That is not to say that the brain alone can tell us all that we wish to know. Rather, it is pointing out that traditional scientific approaches to understanding ourselves, ‘nature’ (or genetics) and ‘nurture’ (or societal influences from family and beyond), were always missing a vital link. Genes are just strings of information, and compiling a complete map of the genome only takes us partway in understanding the mind. Only by examining living brains as they process information, and connecting this with an increasing understanding of how genes code for the molecular machinery that drives such functions, can we make more progress.
One thing that the mapping of genomes taught us is that the genetic code is not the complete program for building brains. There is not enough information in the genome to determine the patterns of the brain’s network and communication links. The brain is an adaptive learning system that gains a great deal of information by interacting with its environment. In many ways, then, the world we grow up in provides the bulk of the program for building brains.
It is our complex minds that enable us to form such highly-developed societies, and it is those societies that help shape such complex brains. Antonio Damasio explained how “in cognitive neuroscience, we study the connection between brain processes and particular types of behaviour. It’s not the isolated tissue that interests us. No, we are delving into the mechanisms of the conduct we see in the social world”.
‘Nature-Neuro-Nurture’ is bringing together specialists from a wide variety of fields, taking answers from one area of expertise to shed light on unanswered questions in others. As this combined effort develops what Albus called “a scientific understanding of the processes in the human brain that give rise to the phenomenon of mind”, we are likely to find that some of our assumptions regarding the nature of self and personal identity were as wrong as our ancestors’ belief that the Earth was flat. Already, neurocentrism has uncovered too many such revelations for me to detail them all here, but I will be mentioning a few in this article.
NEUROENGINEERING
As we reverse-engineer the brain, we may take insights into how it works and apply them to new technologies. We could even develop ‘Neuroengineering’, in which regions of the brain, or even the entire organ, are ultimately replaced with technologies that outperform the organic mechanisms they model. It is the prospect of neuroengineering that most strongly suggests radical change is afoot. Along with genetic engineering, neuroengineering goes beyond changing what we do and allows us to change what we are. But it could allow far more dramatic changes than genetic engineering alone. Genetic engineering would only allow us to tinker with the body and brain we currently possess, whereas neuroengineering could conceivably allow us to go far beyond the limitations our biological bodies and brains impose on us.
Perhaps it would be more accurate to say neuroengineering would allow more dramatic and direct alterations to our brains. This is because changing what we do can alter the environment. And if the environment changes, the brain adapts to better cope. In other words, changing what we do ends up changing what we are, albeit indirectly.
MEET YOUR iBRAIN
This brings us back to the first reason to expect change: The growing proliferation of computational devices that have become such an integral part of our lives. It takes around two generations to optimize a technology and assimilate it fully through all the structures underlying our societies. Computers followed this pattern, and because of this there is a ‘digital divide’ with a generation who remember a world without computers on one hand, and on the other hand a generation for whom the computer was always an integral part of daily life. The Internet has obviously made sweeping changes, but has there been enough time for this change to alter the brain?
In order to find out, a team led by Gary Small (director of the UCLA Memory and Aging Research Center at the Semel Institute) conducted experiments in which an fMRI scanner monitored the brains of two groups of volunteers while they used search engines. One group was computer-savvy and had used something like Google extensively. The other group comprised people who had never used a computer. Imaging showed that the former group used a specific network known as the dorsolateral prefrontal cortex, whereas the other group showed minimal, if any, use of this region. What does the dorsolateral prefrontal cortex do? It is involved in our ability to make decisions and integrate complex information, and is believed to control the process of integrating sensations and thoughts, as well as working memory. Since working memory refers to an ability to keep information in mind for a very short time, it makes sense that the dorsolateral prefrontal cortex would become more active and help us manage an Internet-searching task. As for the group who had not used a computer before, after just five days of practice the same neural circuitry became active.
Further studies have shown that regular web-surfing and online social networking enhances certain cognitive abilities. Immersed in a world that bombards the senses with information, our brains develop circuitry customised to enable us to rapidly decide what is important. Our reactions to visual stimuli are sharpened, and our ability to notice images in our peripheral vision is improved. Studies conducted by cognitive psychologist Pam Briggs showed how web surfers typically spend two seconds or less on any one site before moving on to the next while looking for relevant facts. Gary Small commented, “this study indicates that our brains learn to swiftly focus attention, analyze information, and almost instantaneously decide on a go or no-go action”.
THE DOWNSIDE
However, everything comes at a price. Our brains can only adapt by a certain amount (at least until neuroengineering comes along) and the amount of information out there on the web is more than enough to stretch the mind’s adaptability to breaking point. An example would be what Linda Stone called ‘Continuous Partial Attention’. For people in this state, everything everywhere is connected through peripheral attention. It means keeping tabs on everything while never truly focusing on anything. The brain is effectively placed in a state of stress, always on alert for the next bit of exciting news or new content, but no longer allowed to indulge in contemplation or reflection.
Continuous partial attention seems to be rather addictive. This is probably due to the fact that it appears to place the individual at the centre of a thriving social network. The perpetual connectivity to all the buzz and chatter of Facebook and its ilk feeds the ego and the sense of self-worth. But, at the same time, the brain is weakening certain neural circuits even as it strengthens others. Keeping in contact via texting or social networking sites as opposed to face-to-face communication means the brain is not exposed to fundamental social skills, such as reading facial expressions and body language. In 2002, a Stanford University study found that traditional face-to-face interaction time drops by nearly 30 minutes for every hour we spend at the computer.
PERSONAL AND PERSONIFIED COMPUTERS
Computers are going to become more personal in a couple of ways. One such change should be apparent, because it is already a widespread phenomenon. This is the proliferation of truly personal computing devices, such as Apple’s iPhone. The other change is more noticeable in Eastern countries like Japan and South Korea than it is in Western countries. That is: The development of robots intended to provide care and other social functions. Currently, in Western countries, the robotics industry caters almost exclusively for industry and warfare. Bill Gates compared the robotics industry as it currently exists to the computer industry in its early days. “Think of the manufacturing robots currently used on automobile assembly lines as the equivalent of yesterday’s mainframes…as I look at the trends that are now starting to converge, I can envision a future in which robotic technologies become a nearly ubiquitous part of our daily lives”.
So, on one hand we see computers becoming more personal and on the other we see computers becoming more personified, morphing into anthropomorphic machines. Is the latter trend subjective or objective? In other words, are we imposing humanlike characteristics on inhuman machines? In ‘Virtual Worlds: The Emperor’s New Bodies’, Peter Weibel argued that “Any man-made object inherently displays its user’s properties, simply by being made by man”. The same author pointed out that machines are built with the express purpose of performing a task that is of relevance to human society. The more general-purpose a machine is intended to be, particularly in areas to do with domestic and interpersonal skills, the more humanlike it must become. It must, after all, function in environments that have grown to suit the human form.
There is, of course, a psychological element at play here as well. As psychologists who study our relationships with technology have found, we are willing to meet social robots more than half way, and project onto them emotions and self-awareness from a relatively sparse and simplistic set of reactions. For various reasons social robots might be accepted as having an inner life even before they actually do have brains complex enough to generate such states of mind. The proliferation of truly personal computers will play a part in this development.
YOUR HEAD IN THE CLOUD
By being truly personal, computational devices like the iPhone effectively remove distinctions between the interface and the rest of the world. Whereas the semi-portable nature of laptops allowed only intermittent access to the Web, one need not be constrained by a lack of convenient places to rest an iPhone. So long as there is wireless access to the Internet, any thought can be shared, any information can be obtained. Several developments will greatly expand this ‘world as interface’. It seems very likely that storage space will soon be plentiful and cheap enough to more than serve a person’s lifetime needs, so the need to delete files in order to make room for new material will be a thing of the past. Of course, future activities might come along that require prodigious amounts of storage space, but the kinds of activities that typify everyday use today would not produce enough terabytes to fill future hard drives, let alone the vast and cheap storage the Cloud will offer.
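To see why everyday use is unlikely to strain future storage, here is a quick back-of-envelope sketch in Python. Every figure in it (photo sizes, daily capture habits, an eighty-year span) is an illustrative assumption of mine, not data from any study:

```python
# Back-of-envelope estimate of lifetime storage for everyday capture.
# Every figure below is an illustrative assumption, not a measurement.

photos_per_day = 20            # assumed snapshots captured per day
photo_size_gb = 5 / 1024       # assume ~5 MB per photo
audio_hours_per_day = 2        # assumed voice notes and calls logged
audio_gb_per_hour = 0.06       # assume ~60 MB per hour of compressed audio
text_gb_per_day = 0.0001       # email, chat and documents are tiny

daily_gb = (photos_per_day * photo_size_gb
            + audio_hours_per_day * audio_gb_per_hour
            + text_gb_per_day)

years = 80
lifetime_tb = daily_gb * 365 * years / 1024

print(f"~{daily_gb:.2f} GB per day, ~{lifetime_tb:.1f} TB over {years} years")
# Under these assumptions the lifetime total is on the order of 6 TB:
# small next to multi-terabyte drives, let alone cheap cloud storage.
```

Double all of those assumptions and the conclusion barely changes, which is the point: text, photos and audio simply do not add up fast enough to threaten the storage on offer.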
Another development will be a proliferation of devices that automate the process of capturing and uploading moments of interest in one’s life. Taken to its logical extreme, this might almost make it seem like the Web is an expansion of your own mind. As soon as something interesting enters your conscious awareness, it is captured and uploaded to the Internet.
To truly feel one with the Web, the ability to retrieve relevant information must be equally automatic. The greatest challenge in an era of effectively limitless storage and automatic capturing of information will be retrieval: finding the relevant image or file or article or opinion as and when it is needed. Google’s Chief Executive, Eric Schmidt, believes the next phase in search engines will be to progress from syntax to semantics, in other words from what you typed to what you meant. A proliferation of sensors that can log up-to-date information about where you are and what mood you are in (inferred, perhaps, by discreetly monitoring many physiological states), along with cross-referenced data of your personal life and its various connections to that of other people, might make it possible for searches to be carried out automatically on your behalf. When (or perhaps I should say ‘if’) the challenge of effortlessly retrieving any kind of information is met, perhaps the next challenge will be to successfully predict when someone is about to require something and to provide it at the very moment conscious awareness registers a need for it.
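Purely as a speculative illustration of what a search ‘carried out on your behalf’ might involve, consider the sketch below. Everything in it (the ContextSignal structure, the infer_intent rules, the example data) is hypothetical, invented for illustration; it is not any real search API:

```python
# A speculative sketch of proactive search: sensor-derived context plus
# personal history is turned into a query before the user asks for anything.
# All names and rules here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class ContextSignal:
    location: str    # e.g. from GPS
    mood: str        # e.g. inferred from discreet physiological monitoring
    activity: str    # e.g. from calendar entries and motion sensors

def infer_intent(ctx: ContextSignal, history: list) -> str:
    """Guess what the user is about to need from context and past queries."""
    if ctx.activity == "commuting":
        return f"traffic conditions near {ctx.location}"
    if ctx.mood == "stressed" and history and "meeting" in history[-1]:
        return "notes and agenda for the next meeting"
    return f"news relevant to {ctx.location}"

def proactive_search(ctx: ContextSignal, history: list) -> str:
    # A real system would also rank results against one's personal data;
    # here we simply show the query it would have run automatically.
    return infer_intent(ctx, history)

ctx = ContextSignal(location="Waterloo", mood="calm", activity="commuting")
print(proactive_search(ctx, ["meeting agenda"]))
# -> traffic conditions near Waterloo
```

The hard part, of course, is not assembling the query but inferring intent reliably enough that the answer arrives just as the need for it becomes conscious.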
The philosopher David Chalmers once noted how “the iPhone has already taken over some of the central functions of my brain”, going on to note how “the world is serving not as a mere instrument for the mind. Rather, the relevant parts of the world have become parts of my mind. My iPhone is not my tool, or at least it is not wholly my tool. Parts of it have become parts of me”.
THE PAST IS NOW, SO GOODBYE PRIVACY
“I don’t believe society understands what happens when everything is available, knowable, and recorded by everyone all the time”, commented Schmidt. It could, for instance, alter our notion of time, making the past and present indistinguishable from each other. Hitherto, the past belonged to the murky world of fallible human memory. But experiences captured in a variety of media could be replayed in much higher fidelity. The boundary between what is now and what once was will all but disappear, because both would be accessed primarily through the interface, delivered by the digital assistants one trusts to run one’s life.
In some ways, this would be a good thing. I expect most people would welcome the opportunity to re-experience cherished moments. But what about those moments one would rather forget? In ‘The End Of Privacy’, Daniel J. Solove wrote, “in the past, episodes of youthful experimentation and foolishness were eventually forgotten, giving us an opportunity to start anew, change and grow”. In an era of perfect and total recall, it would not be so easy to escape youthful indiscretions. Daily Mail science correspondent Michael Hanlon explained how “the advent of eternal online scrutiny has coincided with a new judgemental era, driven by a puritan, humourless U.S. corporate culture and the fear of litigation”. We already live in a world where a few words of dissatisfaction can end up on a social networking site, be read by the boss, and result in a sacking. Employers scrutinise the Web to uncover the murky past of prospective candidates. This need not amount to anything as serious as a criminal record; it could be simply a few photos posted of them while drunk.
One might respond to this by saying the individual should exercise more restraint with regard to what they post online. True enough, but in a world of ubiquitous computing and sensors, one has less control over the moments captured by other people. If one cannot escape the past, it might become necessary to adopt a new name and identity on reaching adulthood. Or one might adopt a pseudonymous identity to begin with, although separating virtual life from real life may be difficult when the distinction between the two has become a lot less clear than it is today.
MY PLASTIC PAL
In other essays, I have argued that the other people making up one’s social networks are as important as (perhaps more important than) the individual who first set up accounts with the likes of Second Life for the development of an online identity. Recall from part 2 how philosophers like Jacques Lacan and George Herbert Mead regarded family and the greater society as being fundamental to the development of self. The so-called ‘significant other’ may seem to be even more intimately tied to oneself when we have effortless 24/7 access to social networking sites.
DARPA has a research project underway known as ‘Augmented Cognition’ or ‘AugCog’. The idea here is to optimize one’s use of information technology by monitoring the brain’s ability to process information, determining when particular parts of the brain are about to be overloaded, and adjusting the incoming sensory information so that other, underused parts of the brain come into play. If such a system were to become commercially available, it might well be adopted by people addicted to ‘continuous partial attention’.
With the ability to access one’s social networks refined and expanded, and with one’s connection to the Web becoming more intimate, one’s sense of identity may evolve from something personal, private and individualistic towards an enlarged, collective and essentially public identity. Neuroscientist Susan Greenfield foresees a time when “you are most at home networked into the large, passive collective and therefore do not mind being scrutinised. It’s more as though they were a part of you in any case, a kind of collective self”. I said earlier that this network may include social robots (which could include autonomous avatars) as well as semi-cyber companions like digital people roleplayed by some anonymous human, or robots tele-operated by someone. But why would a proliferation of networked embedded computers lower the threshold for humanlike general intelligence?
Two examples of feats where humans still beat robots hands down are object recognition and manual dexterity. Vernor Vinge noted how “both tasks would be much easier if objects in the environment possessed sensors and effectors and could communicate. For example, the target of a robot’s hand could provide location and orientation data”. This would make the physical world like The Sims, because in Will Wright’s blockbuster the sims themselves know almost nothing. It is the objects you furnish their home with that instruct the sim on how they should be used. As your home fills up with more household appliances and furnishings, your sim appears to be accumulating a wider range of social and domestic skills.
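As a playful sketch of Vinge’s suggestion (every class and method name here is a hypothetical illustration, not any real robotics middleware), each household object could advertise its own pose and usage instructions to whichever robot is nearby:

```python
# A sketch of smart objects that broadcast their own pose and usage
# instructions to nearby robots, much as objects in The Sims carry their
# own behaviours. All names here are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class SmartObject:
    name: str
    position: tuple                 # (x, y, z) reported by the object itself
    orientation_deg: float          # so the robot need not infer it visually
    affordances: dict = field(default_factory=dict)  # action -> instruction

    def advertise(self) -> dict:
        """What the object announces to any robot on the local network."""
        return {
            "name": self.name,
            "position": self.position,
            "orientation_deg": self.orientation_deg,
            "actions": list(self.affordances),
        }

class Robot:
    def grasp(self, obj: SmartObject) -> str:
        # The hard perception problem largely disappears: the target
        # reports its own pose instead of being recognised from pixels.
        ad = obj.advertise()
        return (f"Moving to {ad['position']}, aligning to "
                f"{ad['orientation_deg']} degrees; instruction for grasping "
                f"the {ad['name']}: {obj.affordances.get('grasp', 'none')}.")

mug = SmartObject("mug", (1.2, 0.4, 0.9), 90.0,
                  {"grasp": "lift by the handle", "fill": "pour up to 250 ml"})
print(Robot().grasp(mug))
```

On this model the intelligence appears to reside in the robot, but much of it has actually been delegated to the furniture, exactly as it is for a sim.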
Another reason would be the rewiring of the brain. We saw earlier how regular use of computer-mediated communication weakens neural circuitry specialised for face-to-face communication. We might come to accept social robots as having an inner life because fundamental social skills have been eroded, making us less adept at spotting the flaws in a robot’s social performance. Moreover, what deficiencies there are may be more than compensated for by your cyber-companion’s superior reliability and efficiency. After all, other people do not always place top priority on your personal needs. As Susan Greenfield speculated: “The poorer we become at social interactions, the more we will seek solace with our cyber friends. To take things even further, if you never have to consider the thoughts and actions of others - because the cyber world is endlessly accommodating and forgiving - there might even be a progressive retreat into that world”.
Coming up in part 6B: The changing nature of work and what it means for brain landscapes.

7 Responses to ALT! WHO GOES THERE? PART 6A.

  1. Net Antwerp says:

    Schmidt retracted his statement I believe, after seeing how unjust, controlling and manipulative his statements were.

    The series is becoming more and more ridiculous – to the point where you now assume that Humanity wants/is going to want your cult revolution to happen. In reality less than 1% wants it – the percentage isn’t going to go higher anytime soon.

  2. You may believe Schmidt retracted his statement, but since you provided no reference material to validate this, it stands as yet another one of your baseless claims. I will happily eat my words if you provide a reliable source confirming this retraction.

    Throughout this essay, I tend to use statements like “We could even develop ‘Neuroengineering’”, “social robots might be accepted…”, and “one’s sense of identity may evolve from something personal, private and individualistic towards an enlarged, collective and essentially public identity”. This is a far cry from assertions like ‘we will develop neuroengineering’ and ‘we will evolve from a private to a public identity’. Anyone who says X ‘may’ or ‘might’ or ‘could’ happen is clearly open to the possibility that X might, in fact, NOT happen. Anyone who says ‘X will happen’ is not open to such a possibility.

    Presumably the likes of Michael Hanlon, Susan Greenfield, Daniel J. Solove and Linda Stone are also open to the possibility that the issues raised in this essay could happen. If they do not believe it is possible, why raise such concerns in articles and books? It would be like raising concerns that Rudolph the Red-Nosed Reindeer will damage their roof when landing Santa’s sleigh on it, even though they do not believe in Father Christmas.

  3. Net Antwerp says:

    “You may believe Schmidt retracted his statement, but since you provided no reference material to validate this, it stands as yet another one of your baseless claims”

    Schmidt went from ‘We do datamine’ (other meetings, etc.) to ‘We don’t mine your data AT ALL’ pretty quickly on one episode of The Colbert Report.

    Easily dismissible? Not at all. As Google’s spell over the general public starts to fade…

    “If they do not believe it is possible, why raise such concerns in articles and books? It would be like raising concerns that Rudolph the Red-nosed reindeer will damage their roof when landing Santa’s sleigh on it, even though they do not believe in Father Christmas”

    Unwanted possibilities result in unwanted outcomes. Like those pesky door-to-door salesmen and telemarketers.

  4. “Unwanted possibilities result in unwanted outcomes. Like those pesky door-to-door salesmen and telemarketers.”

    Yeah, why do they persist? I have never met anyone who welcomed either and yet the practice continues. I guess it must make some kind of business sense to employ people to do such jobs.

  5. They don’t datamine AT ALL? How do they target advertising? How do they rank searches? How do they track the spread of a pandemic by monitoring the number of times any one state runs a search for ‘flu-related topics? They must do some sort of analysis on the data, I would have thought.

  6. Net Antwerp says:

    “They don’t datamine AT ALL”?

    Schmidt’s sleight-of-hand to retain and expand its userbase.

  7. Pingback: Alt! who goes there? « Khannea Suntzu's Nymious Mess
