“Once you’ve had a lover robot, you’ll never want a real man again” – Gigolo Joe.

Technology is becoming ever more prevalent in human affairs. Social networking sites are increasingly seen as viable alternatives to singles bars as a way to meet prospective dates, and of course there is the phenomenon of people conducting romantic relationships exclusively in virtual worlds.

How far can technology’s influence in the area of sex and love grow? Will it become more than a mere facilitator of human relationships? A logical outcome of technology’s growing importance in our lives would be ‘technophilia’: loving technology itself. Loving technology is a concept that science fiction writers have long explored, most obviously in the form of lover bots like ‘Gigolo Joe’ (the android played by Jude Law in Steven Spielberg’s movie A.I. Artificial Intelligence). The prospect of robots becoming our companions and lovers seems to be one that people anticipate. In 2003, a Marketing and Opinion Research survey in Britain found that 34% of adults and 37% of children believe that, by 2020, computers will be as important to them as friends and family. South Korea intends to put a robot in every home by 2020, and in Japan companion robots have been proposed as a means of enticing young people away from cyberspace and re-connecting them with the physical world.

Prototypes of humanoid robots already exist, including Japan’s DER 01 and Korea’s EveR-2. Having watched demonstrations of these machines on YouTube, I think it’s fair to say the technology has some way to go before they are convincingly humanlike: their movements are not yet fluid enough, and their conversational abilities are too obviously pre-canned. The technology will have to improve dramatically, and become much cheaper. And there is a psychological factor to consider: will we accept machines as lovers? And, if so, how far are we willing to go in accommodating whatever deficiencies the technology may have?


Earlier in this series, we saw how love grows out of attachment. ‘Material possession attachment’ extends this to physical things. Through repeated use, what began as a mere commodity becomes more important to its owner. Even though it is probably mass-produced, and there are therefore many other objects identical to it, an object you own acquires a sense of uniqueness. It becomes ‘my car’, ‘my computer’. Mihaly Csikszentmihalyi and Eugene Rochberg-Halton call this special meaning ‘psychic energy’ (although they don’t mean anything mystical by it). The more psychic energy an owner invests in an object, and the more it is interacted with, the more it takes on a subjective aura of uniqueness and personal meaning.

Because they are so interactive, and play an increasingly important part in organizing and managing our lives, computers and networked devices are particularly liable to trigger powerful feelings of attachment. From various studies, Byron Reeves and Clifford Nass have concluded that people subconsciously treat computers as though they were people, applying “social rules and expectations”. They point out that:

“A) Both humans and computers communicate with other humans using words.

B)  Both humans and computers are interactive. They respond to social situations based on prior ‘inputs’ from the person with whom they are interacting. 

C) Computers have filled many roles that have traditionally been filled by people”.

Through popular science, people are becoming familiar with the ‘computational theory of mind’: the idea that the brain is, in some way or other, a great big powerful computer. We are acquiring an increasingly materialistic view of human life, including such seemingly spiritual qualities as falling in love. The roboticist Rodney Brooks has argued that emotional states are “basically just a number of the amounts of various neurochemicals circulating in the brain. Why should a robot’s numbers be any less authentic?”

Along with this belief that computers are, or could be, a kind of person, we see computers and the Web becoming increasingly personified. This serves to ramp up attachment yet further. Consider how the various components of attachment theory apply to our relationship with networked devices:

Proximity maintenance: We like to keep our smartphones close at hand.

Safe haven: Having your computer near enough to access when needed feels comforting.

Separation distress: Because the computer plays such a crucial role in organizing and managing our daily lives, an inability to access it can feel stressful.

I would imagine many people could identify with the business executive who admitted to the psychologist Sherry Turkle, “when I lost my BlackBerry, I felt like I had lost a part of my mind”. In short, all the symptoms of attachment manifest in our relationship with computers and networked devices.


Still, you might argue that, yes, we are increasingly dependent on computers, but that does not necessarily mean we treat them as though they were people. Yet many experiments have shown that (on a subconscious level, at least) there is a tendency to anthropomorphise these machines. It has been shown, for instance, that people apply gender stereotypes to computers, such as the assumption that men know more about certain topics than women (and vice versa). In experiments where computers are given a male or female voice, people tend to carry over these stereotypical views on human gender.

Another example is ‘reciprocal self-disclosure’. In general, people tend not to discuss their innermost feelings with anyone other than their nearest and dearest. Quite often, however, people will open up to strangers if the stranger first discloses secrets about themselves. Experiments have shown that if a computer first discloses something ‘personal’ about itself before asking a question, participants’ responses are likely to be more intimate in terms of the depth and breadth of self-disclosure.

In considering why we anthropomorphise computers, Byron Reeves and Clifford Nass attribute it to the fact that our minds evolved in a world where only humans did humanlike things. There was therefore no selection pressure for distinguishing between humans doing humanlike things and non-humans doing humanlike things. This tendency ramps up considerably when the computer is specifically designed to mimic a person. Successfully building such machines is often considered a goal of artificial intelligence, which of course it is, but I would argue that it has much more to do with learning how to push our ‘evolutionary buttons’, so that we cannot help but attribute qualities like mind and emotion to the machines.

This might sound as though androids (robots designed to resemble human males) and gynoids (the ‘female’ equivalent) are machines purposefully designed to deceive people. Maybe so, but it does look as though people are ready and willing to be deceived. From studies of people’s interactions with the ‘Eliza’ chatbot, psychologists have found that people will go to some lengths to maintain the illusion of a ghost in the machine. Eliza is fascinating to psychologists not because the bot itself is complex or smart (it’s just a bunch of cheap tricks, really) but because of the way people engage with it. As Sherry Turkle explained in her book ‘The Second Self’:

“I often saw people trying to protect their relationship with Eliza…They didn’t ask questions that they knew would “confuse” the program, that would make it talk “nonsense”. And they went out of their way to ask questions in a form that they believed would provoke a lifelike response”.

Such observations strongly suggest that people are willing to meet machines more than halfway in attributing personhood. It seems likely, then, that if lover bots are in any way preferable to people in certain respects, quite a few of us might go out of our way to maintain the illusion of being in a relationship with a fellow human being.
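For readers curious about just how cheap Eliza’s tricks are, the core technique can be sketched in a few lines: match the input against some patterns, reflect first-person words back as second-person ones, and fall back to a stock prompt when nothing matches. The patterns below are my own illustration, not Weizenbaum’s original script.

```python
import re

# Pronoun reflections: "my" -> "your", etc., so the echo sounds responsive.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A tiny, illustrative rule set. Real ELIZA scripts had many more patterns.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

FALLBACK = "Please go on."

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person equivalents."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return a canned response for the first matching pattern."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return FALLBACK

print(respond("I feel lonely tonight"))  # -> Why do you feel lonely tonight?
```

There is no understanding anywhere in this loop, only string substitution; yet, as Turkle observed, people phrase their questions to keep responses like these feeling lifelike.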


When considering the possible advantages lover bots might have over actual people, the first thing that pops into people’s minds is usually the prospect of fantastic sex, whenever you want it. In terms of physical appearance, the lover bot could be custom-built to the client’s personal preference. One can imagine the lover bot coming pre-installed with expert knowledge of seduction techniques, foreplay, sexual positions and methods for prolonging orgasm. As for the prospect of the lover bot turning away from its human and saying “not tonight, I have a headache”, such behaviour need not be part of the machine’s repertoire (“God, no, men would never want a sexbot that behaves like an actual female”, reckoned Price Bailey at Thinkers). In 1985, the Guardian newspaper reported that prostitutes in New York “share some of the fears of other workers – that technology developments may put them out of business… as one woman groused, ‘it won’t be long before customers can buy a robot from the drug-store and they won’t need us at all’”.

At Thinkers, Elizabeth Spieler reasoned that this aspect of a lover bot’s repertoire was not something that would interest women. “In considering the woman’s nature, why would she use a robot? Isn’t it males who require porn?” Other participants pointed out that, since sex toys like vibrators have been successfully marketed to women, maybe lover bots will also find female buyers. In support of Spieler’s view, just about all the humanoid robots I have seen are designed to resemble women, with the attributes of youth and beauty known to be particularly appealing to men.

But I think it’s worth remembering that we are not just discussing sex bots, but lover bots: what Cupidon-Pepin described as “a sex bot that will promise conversations at midnight in the kitchen about the kids’ future”. In short, a robot that can offer emotional as well as physical comfort. Another likely advantage of such a robot is the certainty that your partner would remain faithful and would always place you at the centre of its world. In his book ‘Love and Sex With Robots’, David Levy argued that, because men can get sex more easily now than they could decades ago, they have become more hesitant to commit to long-term relationships. One might imagine that easy access to gorgeous (artificial) women, always willing to offer fantastic, no-strings sex, would further erode men’s willingness to commit. Perhaps, then, the prospect of taking on an android partner, “always willing to please and satisfy, and totally committed” in Levy’s words, might be a great selling point for the female market.

There is at least one more advantage a lover bot may have over a human partner. Because its mind would be a sophisticated software program, your lover bot’s personality and ‘self’ could be regularly backed up and downloaded into a replacement body should the current model malfunction. In other words, you would never need to face the prospect of grieving over the death of your spouse.
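As a toy illustration of the idea, and only that: if (a very big if) a robot’s ‘self’ really could be captured as serializable state, backing it up would be a routine round-trip, the same as any other data. The ‘personality’ below is pure invention for the sake of the example.

```python
import json

# Purely illustrative: pretend a robot's "personality" is just a
# dictionary of preferences and memories. A real mind, if one is ever
# built, would of course be vastly more than this.
personality = {
    "name": "Joe",
    "voice": "baritone",
    "memories": ["first dance", "midnight kitchen conversation"],
}

# Back up the current state as a string (in practice, to durable storage)...
backup = json.dumps(personality)

# ...and later restore it, byte-for-byte, into a replacement body.
restored = json.loads(backup)
assert restored == personality  # nothing is lost in the round-trip
```

The point of the sketch is simply that software state, unlike a human life, is copyable: whatever survives serialization survives the death of the hardware.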


Proponents and skeptics of lover bots alike agree that a major obstacle to be overcome is the suspicion that ‘it is just a robot’. Elizabeth Spieler summed this up with her comment, “without intimacy I see no point in the relationship…no intimacy can occur in the robots [because] it’s not human”. Both camps agree that progress in ‘affective computing’ is likely to result in robots that are as good as humans (if not better) at inferring people’s emotional states, and that future generations of humanoid robots will perform humanlike behaviour in an increasingly realistic way. Where the camps split is on the question of whether there is anything behind the performance. When a robot responds as if it is having an emotion, is there really something going on, subjectively?

In some ways, this question is similar to that facing people engaged in romantic relationships via their avatars. Some feel the need to question whether the person behind the avatar actually feels what they apparently project through their online personae. Is it real or is it performance? Sherry Turkle spoke of how many of her colleagues believe “performance is the currency of all social relationships…a robot may only seem to care and understand…people, too, may only seem to care and understand”.

My own attitude is that if the other person never gives any indication of trying to deceive me and consistently behaves as if their emotions are genuine, then I might as well accept them as genuine, given the complete lack of evidence to the contrary. David Levy reckoned the same argument should work with respect to robots. “If it behaves in every way as though it does indeed like you, then you can safely assume that it does indeed like you”.

Is that a watertight argument, or is there a flaw somewhere? I think the best argument against emotions in machines was the one Spieler touched upon and which Turkle also put forward. That is, that the robot cannot feel human emotions because it never had a human life. Recall from previous instalments how the attachment that forms between mother and child can be considered the ‘common ancestor’ of the many aspects of human love. But the robot was never born and never had a mother. It was built in a factory. As Turkle said, “a love relationship involves… looking at the world through another’s point of view, shaped by history, biology…computers and robots do not have these experiences to share”.

Now, for all I know, given sufficient knowledge of developmental psychology and methods of translating it into machine-readable code, we might learn how to make lover bots whose behaviour is complex and naturalistic enough to warrant the belief that one can achieve a meeting of minds through shared experiences. People can have false memories that feel as real to them as any actual past experience, so why not robots? In ‘Blade Runner’, Rachael’s false memories of childhood made her less likely to be identified as a Replicant via the Voight-Kampff test for human empathic responses. Maybe one day there will be robots as convincing as Rachael, but is that really what we want? It seems not. On the basis of many interviews, Sherry Turkle noted how people’s “fantasies about robot companions…return, again and again, to how robots might…be made to order”. People fantasize about robots offering risk-free relationships in which the possibility of heartbreak need not be considered. They want, in other words, companionship without the demands of friendship.


In previous instalments, we saw how online social games, networks and virtual worlds all ask us to project an identity and how, by giving us the ability to write, edit and delete what we present to our contacts, what we project becomes (perhaps knowingly, perhaps subconsciously) a performance of a kind rather than the ‘real me’. Comforted by the ability to re-invent ourselves as comely avatars, with the less flattering sides of our nature edited out of the performance (if not completely, at least more so than in real life), perhaps it is a short step to expecting made-to-measure friends and lovers, from whom we can take what we want and cast aside the rest.

If that is what we wish for, then we may end up in simplified relationships. But that, too, may be what we want. Back in 1998, a survey of people’s attitudes to friendship (‘Friendship and Intimacy in the Digital Age’) revealed how “many people…would say they are too busy for friends, given the increasing demands of work, commuting, consuming, childcare”. In today’s world of permanent connection to an ever-expanding web of wall posts and tweets, discovering that one has little time for friends unless they are online is a familiar complaint. Little wonder, perhaps, that studies of Internet use and real-space socializing tend to find that a rise in the former corresponds with a decline in the latter.

Quite often, when I am chatting with someone via messaging, my conversational partner will go offline. No excuses, no apologies, just gone, mid-conversation. In the past, behaviour like this (turning your back on someone, putting the phone down on them) was considered the height of bad manners. But such rules of conduct don’t apply to messaging, which is meant to be engaged in, interrupted and resumed whenever it is convenient for the individual. Our networked devices serve not only as portals through which we access our contacts, but also as shields that isolate us from them. If it is not convenient for me to interact with you right now, I can press a button and be rid of you.

But, while individuals can be dealt with in this way, the social network as a whole keeps on expanding. You find yourself in more groups, with more friend recommendations, more wall posts, more comments and pingbacks. It can easily become overwhelming, and constantly monitoring all that activity can lead to a state of mind known as ‘continuous partial attention’, characterized by a sense that everything is connected via your peripheral vision, as though you are looking at, but not really focusing on, anything passing by your eyes on the screen.

According to Sherry Turkle, “the text-driven world of rapid response does not make self-reflection impossible but does little to cultivate it. When interchanges are reformatted for the small screen and reduced to the emotional shorthand of emoticons, there are necessary simplifications”. Neuroscience is revealing how the plasticity of the brain is affected as we live life through the screen. As Gary Small and Gigi Vorgan explained in their book ‘iBrain: Surviving the Technological Alteration of the Modern Mind’, “the pathways for human interaction and communication weaken as customary one-to-one people skills atrophy”.

Enter the robot, which may well take the convenience of interruptible relationships to new heights. Yes, you can terminate an online relationship any time you want just by logging off, but picking up where you left off depends on your partner also choosing to log in. If that partner were an AI, however, it could be summoned at will: running avatars, piloting robots, maybe just being part of the ambience of your surroundings. A lover you can summon whenever you are in the mood for their company. The emotional range the robot has to offer may not be as complex as human emotion (indeed, it may not be emotion at all, but only a performance cleverly designed to push those evolutionary buttons), and we may have to lower our expectations of relationships in order to get the most out of lover bots. But perhaps our weakened pathways for human interaction and communication would make the robot’s emotional intelligence (which Daniel Goleman defined as “the ability to monitor one’s own and others’ emotions, to discriminate among them, and to use that information to guide one’s thinking and actions”) seem sufficient.

This future is not inevitable. I do believe it is possible, however, and that the seeds of its germination have already been planted. But such a future can only take root if our attitudes, our wants and needs, drive technology in this direction. I think we can hope for better than a future in which people live out the fantasy of a perfect relationship with a machine that cannot truly reciprocate their love.

Coming soon: the final instalment of the ‘Lovegame’ series.
