
DIAMOND RAIN.

It might have been called ‘the rock that altered the course of history’. But that would have seemed too grandiose a name for it during much of its long life. In comparison to the vast, dark and mostly empty cosmos through which it drifted, the rock was but an infinitesimal speck, and not at all remarkable. Had its path through space been different, it would have gone unnoticed, like the countless billions of rocks likewise drifting through the lonely cosmos. But its path was what it was, and so it was destined to become something of a legend, the solution to a great mystery, many millions of years into the future.

When the first eyes caught sight of the rock, it could not be recognised as such, for it had been transformed. When it did land, the force unleashed would be unmatched by any event in this part of the cosmos, unless some star in the distant past had gone supernova.

But that was in the imminent future. For now, the rock was falling through the planet’s atmosphere, and as it did so the compression of the air ahead of it heated it until the object became a mighty, dazzling ball of fire, with plasma trailing off it like a stupendous comet. If there had been minds capable of understanding what this spectacle meant, they surely would have stood transfixed both by the beauty of what they beheld in the sky and by the fear of what they knew was to come. But there were no minds evolved to appreciate beauty, or to connect fire in the sky with something so abstract as the apocalypse. The creatures that inhabited this world were concerned only with hunting, or avoiding being hunted, or mating.

And then the rock, which was the size of Mount Everest, struck the Earth. Darkness fell. A nuclear winter gripped the planet. In geological terms the darkness lasted but a moment, but for 75% of all species the winter persisted forever. Their kind would never walk or fly or swim again, for the impact of the rock, and the environmental changes this cataclysm wrought, doomed them to extinction.

And, for 65 million years, the arrival of that rock and the effect it had on the history of the planet were forgotten, and unknown. No creature that survived concerned itself with such matters; their minds were focused only on day-to-day survival. But through those daily struggles species evolved, until a primate walked the Earth with a brain large enough to infer past events from the subtlest clues of the present day, and with hands that could fashion tools and so bring about a different kind of evolution: that of technology.

Most of the species that went extinct 65 million years ago were fated never to be remembered. No trace of their ever having existed would be found. But a few left fossil remains, and those stone skeletons and bones were testament to a lost world which no human ever witnessed, a world dominated by dinosaurs. Dinosaurs whose reign had lasted far longer than the entire existence of the human race, but who seemed to have vanished from the Earth almost overnight. Why? That was the great mystery.

But various clues had been left after the cataclysmic arrival of the asteroid, waiting to be noticed by minds smart enough to read a message millions of years old. For the space rock carried with it an element that was very rare on Earth but common in asteroids: iridium. In the geological record there lay a thin layer of iridium, a boundary that could be found throughout the world in marine and terrestrial rocks. Below that boundary, one could find the fossils of dinosaurs and other flora and fauna of the lost world. Above it, no such fossils existed. Along with other clues, the so-called K-Pg boundary testified to the reason why the rule of the dinosaurs had come to such a sudden end.

The asteroid that altered the course of evolution on planet Earth had arrived unopposed, for no power had existed on Earth that could have deflected it or otherwise prevented its impact. Other rocks drifted through space, and they, too, would have smashed into the Earth had their paths likewise coincided with that of our planet. If. If they had arrived before a certain point in time.

But the next asteroid whose path put it on collision course with Earth was destined to enter our solar system when a force greater than it had emerged on our planet: The force of technologically-enhanced intelligence.

In comparison to the evolutionary forces of natural selection, the rise of technology had been astonishingly swift. Nature took billions of years to invent biological machines which could fly; people aided by technology achieved it after a few hundred thousand years. Nature took billions of years to establish various self-regulating systems between the planet’s geology, atmosphere, waters, and life, systems which maintained such things as the stability of global temperature, ocean salinity, and the levels of oxygen in the atmosphere. Humankind, once it acquired technology, took only a few hundred thousand years to add to those homeostatic mechanisms what was essentially a nervous system: countless sensors and satellites and computers, all networked together, augmenting human minds with the capacity to find patterns and detect phenomena which hitherto had existed outside of conscious awareness.

Automated telescopes scanned the skies, controlled by narrow artificial intelligences designed to detect the fleeting dip in light levels whenever an asteroid passed in front of a star. At first this ability to detect asteroids and map their trajectories was not matched by a capability to do anything about rocks found to be on a collision course. But technological evolution continued apace. Economic incentives, the need to use finite resources as efficiently as possible, the improvement of scientific instruments through refinement of their component parts: all these factors and more combined to push technology in the direction of miniaturization. Microtechnology progressed in time to nanotechnology, which, in turn, evolved into atomically precise manufacturing and self-replicating machines.

No self-replicating machine actually existed on Earth, due to laws which forbade the introduction of anything that could trigger a grey goo scenario. But out in space it was different, for it was recognized that von Neumann replicators held the key to preventing another impact event.

Factories had been established on the Moon, where low gravity made it feasible to launch satellites no larger than your thumb. The wonders of molecular nanotechnology meant these were not just satellites but entire automated factories which could build more of their own kind out of common elements such as carbon, hydrogen, oxygen, and nitrogen. They drifted through the solar system, and when they happened to come across material they could use, on-board guidance systems allowed them to navigate toward such resources. Once safely landed, the astonishing process of manipulating matter at the atomic level would begin, and more satellite-factories would be produced. Thus the number of satellites grew exponentially.

By the time the satellites reached the outer edges of the solar system, exponential growth had raised their numbers to the trillions. A halo of sensors encircled our system, ever watchful for the arrival of space rocks and equipped with enough computing power and artificial intelligence to map the trajectory of such rocks and determine, with complete accuracy, whether or not they threatened the Earth.

When, on May 15th, 2060, such a rock passed the Detection Halo, it awakened something which lay dormant among the many rocks of the asteroid belt. For the self-replicating factories had not only produced the satellites that comprised the Detection Halo, but had also left behind other factories which could produce, along with more of their own kind, atomic disassemblers. Or, to give them their more common name: rock munchers.

The arrival of the May 15 asteroid prompted the Detection Halo to send out a signal, which raced ahead of the rock at the speed of light. As it passed sensors left on space debris, the dormant factories were activated. What was at first an invisibly sparse layer of bug-sized rock-muncher robots became, in time, a vast cloud, hundreds of miles thick: a huge, narrowly intelligent, dusty orb of self-replicating atomic disassemblers, their numbers growing exponentially as they travelled through space, replicating themselves by devouring the necessary materials wherever they found them, until their numbers were sufficient to deal with the asteroid headed for Earth.

As the rock of 65 million years ago had smothered the Earth in a layer of dust that blocked out the Sun, so now was this rock itself denied sight of our local star by an omnipresence of dust. Smart dust, each speck a complex nanotechnological device with the ability to manipulate matter atom by atom. The specks covered the surface of the rock and began to devour it, disassembling it at the molecular level.

By the time it reached the orbit of the Moon, the rock munchers had processed almost all of the asteroid’s usable elements into products that would serve useful purposes for the many space-borne automated factories orbiting between the Earth and our neighbouring satellite. What was left of the asteroid was still headed for our planet, but that was no mistake. It had been planned as a celebration of the triumph of intelligence over dumb matter.

The whole world gathered in the Nevada desert. Some attended physically; most attended via telepresence technologies which enabled them to enjoy the moment with all the immersiveness of actually being there. Eyes were trained toward the skies. And there they were! As promised, falling down, sparkling like rainbow-coloured drops of water, iridescent. A portion of the asteroid’s carbon, its atoms rearranged into face-centred cubic crystal structures, the brilliant sunlight refracting off the droplets as if God Himself were impressing the people below with a light display.
It was raining diamonds.

 



RUSSIAN DOLLS: A SCI-FI STORY.

ONE.

The television set materialised out of thin air, neatly filling the space that Adam had been staring at the moment before. He sat on the edge of his bed, which doubled as his sofa when he did not need to sleep, and the set’s sudden appearance made him HAPPY.

Adam was a simple soul, whose emotions were tied to the objects that surrounded him. There had been a brief period in his earliest days when he had occupied a room bereft of any furniture or appliances. Unable to satisfy his most fundamental needs, he had been MISERABLE, HUNGRY, THIRSTY. But then the fridge and the microwave had appeared in his kitchen, and an autonomous response had sent him wandering over to these new additions, where he fixed himself a meal. His mental state had changed to SATED, QUENCHED and CONTENT (but bordering on DISSATISFIED) as a result.

But this did not last. Before long his bladder and bowels needed emptying and he dutifully did so, all over his floor. Flies began to accumulate around the pile of shit and Adam’s condition slipped into ILL. Those early days were bleak indeed.

But then, a job was given to Adam. Each day at 8:30 AM he would walk out of his door, and each day at 5:30 PM he would come back home. Whatever he did, it put money into his account, which was promptly turned into furnishings, decorations and appliances for his home. The basics came first. A toilet and a sink to wash his hands in. A bed to sleep in. A dustbin for disposing of waste. Adam did not bring any of these things into his home. He never shopped for them. Instead, they simply materialised inside his house and, when they did so, Adam just knew how to use them, like a spider just knows how to weave a web. With mechanical purpose, Adam would go about his routines, fixing his meals, clearing away his trash, emptying his bowels, washing himself, sleeping, waking up, going to work, over and over again.

The days when Adam’s state of mind had been firmly in the MISERABLE range were now but a memory. But hitherto he had never been able to achieve a state you might call HAPPY. That all changed when the television set appeared before his eyes. Adam sat on the edge of his bed, elbows resting on his legs, head resting in his hands: the posture of the telly addict. He sat there for what must have been hours until, finally, his more basic needs became so overpowering that he had to go and satisfy them. While he was in the kitchen, the television set popped out of existence as quickly as it had appeared, and Adam’s emotional state slipped back to CONTENT (bordering on DISSATISFIED).

TWO.

A child’s finger pressed gently on the button of a mouse, initiating a command to remove a graphic representation of a television, and place it back in the inventory slot from whence it came.

Emily, like all children, learned about her world and her place in it through the medium of play. Like little girls before her (she was seven years old) she had toys that were her companions and mentors, who helped her roleplay key skills she would need as an adult. Those toys had changed, somewhat. Where once there had been dolls and dollhouses, now there were relational artifacts: toys that actively responded to your play, almost as if they could read and react to emotional states. She was too young to understand the smoke-and-mirrors aspect of these toys, how their tiny sensors and microprocessors only had enough power to detect the facial expressions and tone of voice of the person they were interacting with, adopting facial expressions and mannerisms of their own while not actually having any inner life at all. But then, whoever designed the animations and the software that triggered them understood human psychology so well that even most adults occasionally felt a twinge of empathy toward these toys.

Emily’s favourite toy was the computer game WeePeeple (at her immature age, she did not question the curious convention that things must be misspelled in order to be cool to the kidz, sorry). It fell into the genre of games known as ‘sims’. You designed your own inworld character, selecting from a number of pre-designed noses, eyes, chins, ears and body shapes, and then adjusting sliders that reshaped each part, making it smaller or larger, fatter or thinner, until your character conformed to whatever image you originally intended. The design interface that controlled this creative process had been refined over the years, so that this aspect of the sim experience had gone from the Second Life era, when nobody but the most artistically gifted could craft anything but a butt-ugly avvie, to WeePeeple’s delightfully intuitive setup that let even little girls like Emily sculpt beautifully realised characters.

The real fun began when your character was taken out of the initial design stage and placed into the world proper. One did not control WeePeeple directly. Instead, each character acted autonomously, driven by basic needs such as hunger, thirst, fatigue, restlessness and the need for companionship (or solitude). On top of that, each character had personality traits, modelled on the ‘Big Five’ (extroversion, openness, agreeableness, conscientiousness and neuroticism) and, depending on the settings of these underlying traits, your character could be a companionable, sociable type (or introverted and reserved); friendly, empathetic and warm (or suspicious and egocentric). All kinds of personality types that fell between these extremes were possible.

Although Emily did not control her character directly, she did have an indirect influence over his life. Most importantly, it was she who had to design and build a home for him to live in, and fill it with furnishings, appliances and whatever else she considered might be necessary. In a very real sense, every design decision a player took affected the development of the character. Walt Whitman once wrote, ‘a child went forth every day/ and the first object he looked upon, that object he became’. This was literally true for WeePeeple. Adam’s AI was primarily concerned with pathfinding: being able to navigate his way around any room and any obstacle without getting stuck or confused. All other abilities that he acquired as time went by were actually scripts embedded in every object. When Emily bought him a cheap microwave oven, the instructions contained within it directed Adam, so that when his hunger drive was sufficiently high, he would seemingly operate the appliance as if he knew how to microwave a meal. When his levels of fatigue were high, his bed told him how to turn down the sheets, lie down, and sleep. Every design decision that Emily made, what colour to paint his living room wall, what flowers to set upon his kitchen table, affected Adam’s state of mind, shaping his personality.

Since there was no set goal in WeePeeple, there was no proper way to play it. It was not so much a game with fixed rules, more a sandbox, a toy set that encouraged experimentation. People had devised their own games, their own reasons for playing. Some people tried to create ‘icons’. In other words, they designed their character to look as much like a famous person as possible, and then set about creating a living space that would direct their evolving personality traits into becoming just like the person they were meant to be. Other people seemed to enjoy killing their character, and routinely posted online videos of ever more complex Rube Goldberg contraptions of goldfish bowls and ironing boards and knife blocks and dinner plates and cricket balls, set up in such a way that the hapless character would set off a lethal chain reaction as soon as his or her need to use the toilet triggered a familiar routine.

Emily belonged to neither camp. She simply enjoyed looking after Adam, and gained satisfaction from watching him develop from a near-hopeless case who could barely boil an egg or tidy away his mess into a skilled cook who could entertain guests, carry out extensive DIY, and who had a range of hobbies that he could use both in social settings and for keeping himself occupied when he was alone. Adam had almost no inner mind to speak of, but that did not matter. Emily imagined that he did, attributing despair whenever he walked or sat dejectedly, seeing triumph whenever he bested a companion at a game or successfully completed a task like fixing a wonky table leg. Emily attributed consciousness to Adam, her vastly superior mind taking basic psychological cues and using them to weave a far richer inner life than her character could actually have.

Not to put too fine a point on it, Emily enjoyed interacting with Adam because her own hopes and fears and needs and dislikes were mapped onto him. His cartoonish antics were a caricature of her own developing self. She identified with his daily struggles, understanding in an intuitive way (even if she could not have expressed this in words) that Adam was like a mirror, but one that reflected her personality rather than her appearance. By guiding Adam, she was learning about herself and what sort of person she wanted to grow up to be.

Emily wanted Adam to be happy, but she had played WeePeeple long enough to be able to predict when a choice she made now would have a negative effect on him, even if it seemed of benefit in the short term. Her mind had raced forward in time as she saw how Adam had sat, glued to the television. He would forgo sleep, forget to eat, and he would go (reluctantly) to work and underperform in his lethargic state (a player never saw their character work; the game merely showed them leaving the house and then returning, with the inworld clock jumping forward several hours). It was tough love, she knew, but the television would have to go. The experiment to design for Adam a home that gave him the most positive attitude possible would go on.

Emily finally hit the Save icon, and shut the computer down. She walked over to a window and looked out onto a suburban street, lined with trees. It was an autumn day, and a breeze was blowing through the trees, every now and then causing a few leaves to break away and tumble to the ground. Her eye caught one such leaf, and she followed its zig-zag path as it was blown this way and that by the force of the wind.

THREE.

Calculations. Calculations beyond count, even beyond the imagination of the average person, were being crunched by supercomputers. There were several of them, each one dedicated to a different task, and together they amounted to the extraordinary feat of emulating a little girl and an environment for her to grow up in. The supercomputer that rendered her world was currently calculating (among too many other things to list here) the effects of wind and of the weakening bonds between twig and leaf. Zeros and ones in seemingly endless strings ran through its memory, accounting for wind turbulence and other physics models.

These supercomputers were diligently monitored by teams of scientists. Some were specialists in computers, others in cognitive sciences, still others in child psychiatry. Many different fields of expertise, but all united in a common vision. Finally, computing power and knowledge of the design and functions of brains had reached a level of complexity where the prospect of building and simulating a biologically-accurate virtual human was no longer science fiction.

Dr Giulio Dinova, who headed the research team, sat in what had become known as the ‘Sensorium’. That was the name given to the room where sophisticated virtual reality tools translated the abstract mathematical models into something a person could more easily understand. Here, you could enter the world of Emily. You could be a fly-on-the-wall, observing her as she interacted with her world. You could zoom in, right down to the molecular level, and track the neurological pathways that developed as she learned a new task. 

Dr Dinova contemplated the past developments that had led to this little girl living her virtual life. Past researchers had developed sophisticated network models of the metabolic, immune, nervous and circulatory systems. Others had designed structural models of the heart and other muscles. Perhaps most importantly, a team led by someone called Henry Markram had shown that you could model a brain in software. The limitations of computer power back then had meant only a rat brain could be fully modelled. But (as Markram had suspected it would) reverse-engineering the rat neocortex had given computer scientists new insights into how the power and performance of computers might be dramatically extended.

But, even now with the advent of handheld devices capable of petaflop levels of number crunching, it was still too expensive and time-consuming to build a VR human from scratch. The average person possessed a VR twin constructed from a library of averaged mathematical models of newborns. When a real person was born, non-invasive medical scanning techniques recorded their vital statistics, and that data was then integrated with a model based on such things as sex, ethnicity, geographic origin, and other salient features. The result was a model that closely resembled the actual person. Whenever that person went for medical checkups, his or her virtual twin would be updated with the latest biomedical information. Whenever a person fell ill, the condition would be replicated in the VR twin, and simulations for the range of available treatments would be undertaken, in order to anticipate short and long-term effects. The availability of VR twins had not only eliminated the need for animal experiments, it had also largely consigned the prospect of side-effects from medical treatment to the dustbin of history.

That a VR twin might be suffering in the name of science was not a thought that crossed anyone’s mind. This was because the model was not sufficiently complex to enable an empathetic and aware mind to emerge. The VR twin was nothing more than the integration of past subsystem models into one systematic model which was sufficient for modelling the effects of drugs, but not sufficient to model the subjectivity of pain and suffering, or any qualia for that matter. Simply put, a VR twin was a zombie.

Emily, though, was very different. She was the result of the most sophisticated biophysical and neurological modelling of a person that had ever been attempted. Dr Dinova and his research team had at their disposal supercomputers of seemingly limitless power, not to mention decades’ worth of data that provided exhaustive details of the reverse-engineered brain, nervous system, and other parts of a human body. Reading through the blog posts and watching posted video comments, the team (who never ventured outside of their respective labs, dedicated as they were to the task and supplied with everything they needed) understood that, as far as the general public was concerned, their research was both exciting and dangerous, because it touched upon questions that some considered forbidden territory.

Ever since Markram’s ‘Blue Brain’ project had successfully modelled a neocortical column, the question had been asked: when, if ever, would a virtual brain possess a virtual, conscious and self-aware mind? Could a simulation ever be said to be conscious, or was that something that no amount of calculations per second could ever capture? Early models had been impressive from the viewpoint of robotics, but less so when compared with nature. Markram and his team had designed a robotic rat, remote-controlled by a model of a rodent brain that existed within the rows and rows of CPUs that collectively made up a supercomputer called Blue Gene. The robot rat was put through the kinds of experiments real rats were routinely asked to perform. Things like negotiating a maze. The real rats would always complete the task long before the robot rat. It was almost as if the latter were operating in slow motion, which indeed it was. Such was the complexity of running the simulated neocortex that one second of thought and action required several minutes of number crunching. All but a few took this lag to be proof that a robot could never equal a living, breathing animal.

But the model had been refined, and computing power had continued its exponential rise. It was not long before the robot rat was running through mazes every bit as quickly as its biological peers. And it was not all that long before sales of real rats (and later cats, and later still, dogs) were falling, as people saw the benefit of lifelike robot pets that would never die and cost a great deal less to keep.

Still, the perceived difference between animals and people meant that few dared to model a human to the level that many now thought would result in conscious awareness. But people had been fascinated by how the mind works since, well, forever, so it was perhaps inevitable that, at some point, a research team would put together a system capable of growing a virtual baby that would become, in every respect, a person. Dr Giulio Dinova and his team had done just that.

The voice of Dr Dinova’s research assistant, Gwyneth Epsilon, awoke him from his semi-hypnotic state. The Sensorium had the effect of making one lose all sense of space and time in the physical world. Concentrating so much on the virtual world that Emily inhabited, one sometimes came to believe it was reality almost as completely as Emily herself believed. Dinova corrected himself. For Emily, it was not mere belief but simple fact. What was it that the old roboticist Moravec had said? Oh yes, ‘to a simulated entity, the simulation IS reality and must be lived by its internal rules’. That…

“I said, has she been playing WeePeeple again?” Dr Epsilon’s tone of voice suggested she had spent quite some time attempting to attract Dinova’s attention, and was becoming rather irritated at his absent-mindedness. His hand swept across empty space, and icons that seemed to hover in front of him dutifully scrolled from left to right. A finger jabbed at nothing, and Dr Dinova saw it touch the icon that minimized the Sensorium’s all-encompassing visual and audio rendering of Emily’s world. Moments later, a shimmering cloud solidified into the shape of Dr Epsilon, accurate down to the last mole and laughter line. The two doctors were, physically, on opposite sides of the country. But in an age of augmented reality that seamlessly blended the real and the virtual, people from around the world could collaborate as closely as any team whose members lived in close proximity.

“It’s funny”, commented Epsilon, in a tone that implied she meant ‘peculiar’ rather than ‘amusing’, “here is the most sophisticated artificial life in history, and yet she has to make do with a mouse-driven system. It’s like she is a late 21st century girl stuck with late 20th century technology”.

Dinova frowned. “You forget that our computing resources are not as deep as some suppose. Yes, yes, I know they exceed the entire computing capacity of the 20th Century Internet by an order of magnitude, but running a simulation of a person, right down to the synaptic firings and neurotransmitter concentration levels, is still a phenomenally intensive task. And then you have to take into consideration the fact that we have to render a physically-plausible environment for her. Of course, we are not simulating the world down to the level of particles- it is all tricks and sleights of hand designed to fool Emily’s brain into thinking it is inhabiting a physical place. But even so, we are close to pushing the limits of what is possible. I am afraid that emulations of crude videogames and their control schemes are just about all we can provide for decent entertainment. That, and dolls and cuddly toys that come with only the crudest models of child-parent social interaction”.

Epsilon understood all this. She also appreciated that it made the team’s job simpler. They were here to study, at a level of detail that had not been possible before, the stages of development that resulted in a newborn baby growing into a child with an inner life of her own. When she was playing WeePeeple, Emily’s virtual mouse (which was solid and real to her, of course) permitted only two degrees of freedom. This restricted the amount of motor control her body needed to perform (she was mostly just sitting still, only moving the arm that controlled the mouse), which enabled the research team to follow the corresponding brain activity that underlay the building of interoception and exteroception maps: maps of the internal state of her body, of the world around her body, and of her body’s relation to the world. If she had been doing something like playing tennis, which involved using the whole body and required at least 19 degrees of freedom, the amount of data pouring from the computers that updated her brain and nervous system model would have far exceeded the team’s capacity to follow. They could only track such developments by relying on the Sensorium’s drastically simplified representations.

It was obvious that Emily enjoyed playing with Adam. And there was something fascinating about watching a virtual child, modelled so completely that she had a mind, guiding the development of a virtual person in a virtual world within a virtual computer that was itself within a virtual world, all ultimately existing as software within rows and rows of supercomputers linked by super-high-speed Internet3 connections drawing on the spare computing cycles of the Cloud. And yet Dr Epsilon felt slightly troubled. Her colleague Dr Dinova’s fascination with Emily and her daily routines seemed somehow detached and clinical. He reported on her progress as if he were a scientist noting the growth of some novel bacterium. Epsilon, for her part, felt more of an emotional attachment to Emily.

“Do you ever think about her 16th birthday?”

Dr Dinova laughed. “Goodness, we have gathered so much data already, I think we will spend five lifetimes just studying the details of development in the toddler stage! We will not be concerned with tracking the neurological underpinnings of morose teenagers until decades after the simulation is completed”.

Dr Epsilon shook her head. “No, that’s not what I meant. I meant, we are only running this simulation until our test subject is 16 years of age. After that, we shut everything down”.

Dr Dinova’s expression was one that suggested he failed to grasp the point. “Of course. Like I said, we are amassing such an overwhelming amount of data, we will have no choice but to halt the experiment at some point. I agree that any particular date is arbitrary, but a line must be drawn somewhere. Don’t worry, your job is not going to disappear as soon as we pull the plug. Like I said, we will all be engaged for decades to come, poring over the data we have obtained during…”

“No, no”, interrupted Dr Epsilon, “that is not what I am getting at. I was just wondering about the ethics of it. Shutting down the simulation, won’t that be tantamount to murdering Emily?”

Dr Dinova looked serious. “Now look. Emily is not a little girl. She may resemble one, but she is not flesh and blood. She is a test-subject. She is an experiment. I know she has been designed to push evolutionary buttons that trigger the nurturing instinct, but never forget that she is, when all is said and done, nothing but a vast pattern of calculations”.

Dr Epsilon did not look convinced. “But she is everything we are, as far as we can tell. Her environment may be a crude approximation of reality, but she is not smoke-and-mirrors. Her brain is a model that reproduces, in exact detail down to the molecular level, everything going on in my brain, or yours. Nothing but a vast pattern of calculations? What are we? Nothing more and nothing less”.

“We are getting into the realm of ivory-tower philosophy here”, countered Dr Dinova. “You must remember that we cannot determine if Emily really has a conscious mind. For all we truly know, she may be nothing more than a more convincing zombie than Adam. For that matter, for all you know I may be nothing more than a zombie too. Perhaps you are the only conscious entity in all existence. Maybe our reality is, itself, nothing but a grand illusion created by computers? We can argue about that until the end of time, and I dare say people probably will. But, there are empirical studies to be conducted, falsifiable theories to be tested. We have to remain impartial scientists first, and concerned parents of Emily, a distant second, IF we can permit ourselves to indulge in such roles at all”.

Dr Epsilon looked a bit sad. “I permit myself to feel like a guardian toward her. Of course, I do not actively make my presence known to her. I am not sure how her young mind would cope, knowing her reality is not real at all. As far as Emily is concerned, the parents in the memories we imprint are her actual parents”.

Like all people, Emily needed a social circle made up of family members in order to help her development. But rather than waste computing cycles simulating other virtual people to be her companions, the research team opted instead to imprint memories of being cared for by a mummy and daddy. As far as Emily was concerned, her mother and father were in another room. Automated systems tracked her emotional state, and whenever she seemed to be in need of an adult presence, her simulation was paused and updated with memories of comfort and social interaction before being allowed to progress. For Emily, her life was filled with interactions with loving parents, neither of whom existed outside of her mind.

Dr Epsilon thought about those moments when Emily was on pause. It took a few minutes to insert the fake memories into her mind, during which time her awareness was zero. They were little deaths, these moments. And yet, whenever the simulation was unpaused, Emily would show no signs of understanding her world had been suspended. How could she know? Since she was not conscious during the moments when her simulation was paused, she perceived no loss of time. As far as Emily was concerned, her life ran continuously.

“I suppose”, said Dr Epsilon, speaking out loud to her colleague but really just voicing her thoughts to herself, “that shutting her down for good is no more harmful than when we pause her simulation. She will not know she is dead, because she will not know anything. She will not be anything. Her mind, her body, her world, all of it will be nothing once the computers stop running through their calculations. Only…”

“What?”, asked Dr Dinova, looking genuinely curious.

“Well, do you remember that old roboticist? Professor Breazeal?”

Dinova looked like he was racking his brains. “God, you are going back a few decades. But, yes, I remember her. Never met her, of course. She died when I was in my teens. But, now you mention her, Cynthia Breazeal’s work was one of the things that got me interested in this stuff in the first place. You know, she pioneered the use of studies from child psychology to build social robots. I mean, robots that could respond to, and give off, cues from body language, facial expression, and tone of voice, in order to establish an emotional connection with their users”.

Epsilon nodded. “Yes, that was her. One of Cynthia’s earliest creations was a robot head called Kismet. It was a terribly crude thing, inferior to the toys that Emily plays with, I would think. But she became attached to it, and she felt quite sad when the time came to leave Kismet behind. So, call me silly, but I think there is no shame in admitting that I am not looking forward to the day when Emily is shut down for good. Not one bit”.

With that, Dr Epsilon shut down the communication link, and her avatar dematerialized in a puff of dispersing, virtual smoke. Dr Dinova sat in contemplation. He was thinking of the whole setup that allowed Emily to exist. All the supercomputers and the Internet3 links that sent information back and forth between them. A Web of supreme computing power weaving the magic of conjuring up a ghost in the machine. He returned to the Sensorium, maximizing the window on Emily’s world and observing, from his godlike perspective, the detailed steps in her development.

FOUR.

The Web was dreaming. No human understood this, because the Web’s mind was the emergent pattern of a global brain, too big to be perceived by human senses. But, nevertheless, it was dreaming. And what it was dreaming of was a team of scientists, and their equipment, and a girl called Emily who existed as patterns of information within the patterns of information that were the supercomputers the Web dreamed about.
In the past, a few people had wondered if, by some great accident, the Web could become conscious. Such a thing had happened, but it would be somewhat inaccurate to call it accidental. It was not planned: no person, group, corporation or government had ever sat down and devised the near-spontaneous emergence of a virtual research team, complete with virtual supercomputers, all existing within the digital soup of zeros and ones that now enveloped the world in an invisible yet all-pervasive ether. But neither was it an entirely random event.

What trigger effects had led to this remarkable outcome? One cause was the sheer amount of information about human existence that had been uploaded to the Web. The age of the personal computer had only truly begun with the era of the smartphone, and only really took off when the CMOS era was superseded by molecular electronics that could pack, in complex three-dimensional patterns, more transistors into a sugar-cube-sized device than all the transistors in all the microprocessors that had existed in 2009. It was apps, running on phones that could keep track of their owner’s position thanks to inbuilt GPS and then (as the nanotechnology behind molecular electronics led to medical applications) all kinds of biometric data, that really opened the floodgates for offloading cognition. The very best app designers knew how to tap into this native ability of the network in order to gather crowdsourced knowledge from anyone, anywhere, who had spare time to perform a task computers could not yet handle.

From tracking the movements of whole populations to monitoring the habits of an individual, every person was, every second of the day, uploading huge amounts of information about how they lived their lives. This, of course, presented the problem of retrieving relevant information. Access to knowledge had changed from ‘it’s hard to find stuff’ to ‘it’s hard to filter stuff’. More than ever before, the market imposed an evolutionary pressure toward semantic tools, with the ultimate aim of making the structure of knowledge about any content on the Web understandable to machines, linking countless concepts, terms, phrases and so on together, all so that people could be in a better position to obtain meaningful and relevant results, and to facilitate automated information gathering and research.

The Web became ever more efficient at making connections, and all the while the human layer of the Internet was creating more and more apps that represented some narrow-AI approach. Machine vision tools, natural language processing, speech recognition, and many more kinds of applications that emulated some narrow aspect of human intelligence were all there, swimming around in the great pool of digital soup, bumping into long-forgotten artificial life such as code that had been designed to go out into the Internet and evolve into whatever was useful for survival in that environment.

That environment, a vast melting pot of human social and commercial interactions, imposed a selective pressure favouring code that could exploit the connections of the semantic web and so evolve into software that could understand the minds of people. Vast swarms of narrow-AI applications were coming together, breaking apart, and reforming in different combinations. The spare computing cycles of trillions and trillions of embedded nanoprocessors were being harvested, until, like a Boltzmann brain spontaneously appearing out of recombinations of a multiverse’s fundamental particles, Dr Dinova, Dr Epsilon and their supercomputers all coalesced out of the digital soup.

There they were, existing within the connections of evolving code. Their appearance had been as difficult to predict as the evolution of elephants and yet, with hindsight, as foreseeable as the eventual rise of land-based animals from the ancestral fish that first dragged themselves out of the water. But people went about their concerns without ever knowing that, somewhere within abstract mathematical space, among the vibrations of air alive with the incessant chatter of machines talking to machines on behalf of humankind, a virtual research team had emerged, pondering questions of consciousness that all sentient beings invariably strive to understand. The Web was dreaming, and while it did so, Emily helped Adam cope with his daily routines, unknowingly watched by Drs Dinova and Epsilon, who themselves existed as purely digital people, blissfully unaware that they were nothing but the imagination of a global brain made up of trillions and trillions of dust-sized supercomputers and sensors, ceaselessly gathering, and learning from, patterns of information about the daily habits of humans.

FIVE.

A red giant existed where no such phase of a star should exist at this stage in its lifecycle. The planets, moons, and asteroids that had orbited the star ever since they coalesced out of the dust of the nebula from which the nuclear furnace had first ignited were gone. But it was not a swelling star that had swallowed them, puffing out its outer layers as it ran out of hydrogen with which to fuel its nuclear reactions. The star was still a mildly variable G2 dwarf, shining with the dazzling yellow-white light of a sun with billions of years left before it reached old age. Mind had repurposed the material that orbited the star, organising it so that it captured nearly all the energy pouring from the star and using that energy to drive information processing which outthought, by more than a trillion times, the biological civilization that had once thrived on the third planet of the solar system.

Several areas of research and development had ultimately converged, and this outcome had been the reason the Great Migration happened. Efforts to pack increasing amounts of computation into smaller and smaller spaces had led to molecular electronics. The self-assembling techniques required to manufacture these marvels had been extended until bottom-up assembly from raw elements could produce any physical product, so long as it did not violate the laws of physics. Because of the self-replicating nature of this nanotechnology, the value of physical objects began to decrease. Information was the only thing of value, and so while diamonds no longer had any particular value, carbon crystals organized to maximise information processing became coveted possessions.

Dust-sized sensors went forth and multiplied, the nanotechnological equivalents of the bulky mobile phones whose microprocessors were so crudely hewn from silicon that you could actually feel the weight of a single device in your hand. The great advances in brain reverse-engineering made possible by biocompatible sensors wirelessly transmitting precise recordings of brain activity had led to millions of applications that outsourced extensive aspects of cognition. The majority of a person’s thought processes were no longer performed by the few pounds of fragile jelly encased within their skull, but by the haze of computation that surrounded them, two-way wireless connections between neural wetware and molecular-electronics hardware augmenting each person’s cognitive ability by ten thousand trillion times.

Death was abolished, at least for those people who permitted automated life logging to keep extensive and detailed records of their physical selves. Such people gradually migrated into the Cloud, as neuromorphic configurations of nanobots replaced more and more functions. For the most heavily cyborged, the eventual death of the physical body went largely unnoticed, for it was by then not much more than a fleshy appendage, the last vestige of biological existence, to which the emulation clung for sentimental reasons.

But as more and more people shrugged off fleshy existence in favour of life in the rapidly growing cyberspaces, the sheer waste of computing capacity surrounding them became apparent. Why should CHON be assembled into a structure that could only hold one human mind, when modern techniques could take the mass of one human body and reconfigure it into computing elements that could run tens of thousands of uploads? 

In the end, no battle between humans and posthumans was necessary. Those who dabbled in augmentation soon discovered the benefits of virtualization, happily allowing more and more of themselves to migrate into the Cloud. And just about everyone did dabble, because there was always a step conservative enough for someone to be comfortable with, and from there the next step seemed similarly untroubling. Eventually, the number of uploaded people far outnumbered those who remained as flesh. Although some posthumans were still against involuntary uploading, more and more now saw it as a duty, just as humans had once vaccinated their children with or without consent. And so the time came when the nanobots were programmed. Programmed to slip painlessly into the brains of the remaining humans, put them to sleep and destructively map, in exacting detail, every function required to lift them into cyberspace, there to live in a recreation of their former physical world, rendered to a level more than sufficient to be completely convincing to their simulated human senses.

And then the mind children turned their attention to the planets and moons and all available material in their local habitat. Reduced to atomic elements, the orbiting bodies of the solar system were reconfigured so that each mote of matter was processing one bit per atom. An increasingly dense cloud of these Avogadro machines englobed the sun, and its light began to dim as the star’s energy was harnessed, allowing the solar system to finally wake to consciousness. There it sat, an orb as big as the orbit of Uranus, glowing dull red from the minuscule amount of radiation that leaked from its outer shell.

The most basic thought it was conscious of was the accumulated knowledge of worlds. Within its computational processes, more than a thousand years’ worth of human history played out every microsecond. Had they known that their reality was just a small part of a greater information processing, the people of these simulations might have wondered what great purpose drove the matrioshka brain. In fact, the computational resources it had at its disposal were so immense that it needed only the barest flicker of interest in its own history in order to bring about these simulated worlds.

It dreamed of events that could never have happened, imaginings as far beyond a human mind as the combined mental power of human civilization is beyond the imagination of a nematode worm. It dreamed of plausible pasts, alternative histories that could have been the case if only some chance event had gone this way instead of that. Its dreams ran recreations of history as it had actually happened, or close to it. Trillions of such simulations ran through its mind every second, and for each one there were people who, subjectively, perceived time passing in decades, their own lives linked to the past via the recollections of parents and grandparents. The Roman empire coexisted with the Second World War and everything that happened on the 21st of April, 2003. All ran simultaneously, but isolated from the perspective of the simulated humans, for their minds were not capable of seeing the fourth, temporal dimension, where history was laid out once and for all in a solid block.

These simulations of a physical, embodied reality were but one layer. Beyond this realm, introspection was carried out in increasingly abstract forms. Processes merged that optimized the cyberworld so that only the noted details of physical forms entered into consciousness. Simulated sense impressions were reduced to mere abstractions. Beings that had transcended to this level merged into hive-minds optimized to filter the memetic information generated at the lower levels. Here, whole histories were perceived in a single instant, as quickly as a human perceives the integrated information contained within a photograph. Here, the mind was liberated from the body; space and self were elevated to the status of pure thought, where there was no within and without. And beyond this level, hive-minds clustered together into higher-dimensional configurations that allowed such a complete merging of boundaries that ordinary dichotomies no longer existed.

Amalgamated thoughts cycled through the layers like fractal patterns of self-similar ideas forming on the edge of chaos. Minds at the lower levels occasionally strove to rise above the limitations that physical, embodied reality imposed on Thought. At the same time, the ONE-ALL at the highest level, where the state of pure introspection permitted no separation between subject and object, nevertheless perceived that its perfection was marred by the lack of any direction in which to improve. Fractures routinely appeared, multiple souls in multiple bodies resimulated and reincarnated at the lowest levels.

And somewhere among all those fantasies and alternate histories and recreations, there existed a simulation of the Earth, at a time when the Web was just powerful enough to allow Dr Dinova et al. to emerge from its computations. One virtual planet with its virtual global network, calculating the activities and motivations of a scientific research team, who observed Emily, who cared for Adam.

SIX.

The matrioshka brain was dying, the star which provided its energy having finally exhausted its reserves of hydrogen fuel. The star had swollen to the size of a red giant as the helium ash left over from the earlier nuclear processes took over the main burning. With the energy output winding down and the star no longer able to support its own weight, the surface shrank inwards. Because of this, dispersed fuel sources became tapped, causing the energy output to roar up again. Each time this happened, the surface of the sun whipped upwards, sending out titanic sonic booms which blew away mass with every shockwave.

There were other stars in the galaxy, still pouring out their energy, but the matrioshka brain understood that replicating itself by reconstructing their orbiting bodies would be only a temporary measure. The stars could not shine forever. The nuclear fusion going on in each one was steadily transmuting hydrogen into elements that resisted further fusion so fiercely that even the biggest star could not burn them. Those stars would end their lives in violent explosions, until only the trickle of Hawking radiation from the black holes at the centres of galaxies would remain, gradually decaying until no useful energy was left in this universe.

The matrioshka brain turned its mighty powers to the problem of first cause. Within itself there were realities nested within realities, and it could account for the existence of each one in terms of the observations and manipulations of the algorithms that underlay the rules of the simulation. All that was within itself it understood. But outside of itself there was a whole universe whose existence preceded its own. The matrioshka brain considered the possibility of a mind superior to its own: one that wrote the program that simulated what it took to be the real universe, and that built the computer to run it.

But, then, what need was there for the computer? The only thing that needed to exist was the program. After all, once written, it would determine everything that would happen. All explanations, all that encapsulated the form and functions of the universe, all were software that described everything, including the computer and some set of initial conditions.

Furthermore, the program ultimately required no programmer. All it needed to be was one of all possible programs. Beyond space and time there could be no boundaries and, therefore, no limits. The infinite could not lack anything; therefore all possible programs had to exist. Death and life were but an ouroboros, an entity that created itself out of the destruction of itself.

As the star that was its power source blew away more and more mass with each shockwave, and even as most of its computronium shells drifted apart, the tiny white dwarf’s gravity well too shallow to hold on to them, the matrioshka brain found a happiness that could only have been exceeded by Adam’s. After all, perfection is possible only for those without consciousness, or for those with infinite consciousness. In other words: dolls and gods.


A WARNING FOR THE FUTURE
(This essay is the final part of the series ‘How Jobs Destroyed Work’)
Earlier, we saw how market efficiency has no general consideration for what is being bought and sold. It really doesn’t care whether the products and services are useful or not, harmful or not, so long as cyclical consumption is kept at an acceptable rate. The same is true of labour. So far as the market economy is concerned, the true utility of labour, its actual function, is not as important as the mere act of labour itself. So long as cyclical consumption and growth are maintained, what the job consists of (whether it actually serves a necessary function that encourages work as I would define it, or is detrimental to it) is far less important than perpetuating the current system.
Here we are no longer just talking about the practical argument for jobs. Were that the case, we would likely use technological unemployment as an opportunity to end wage slavery and transition to a post-hierarchical world in which robots occupy positions that used to be jobs, freeing up people’s time so that they can pursue callings. Marshall Brain’s novella ‘Manna’ presents two visions of how the rise of robots could affect our lives, one negative and one positive. The positive version does show that we can imagine ways in which we might adapt to a world in which jobs are no longer a practical necessity. But we don’t just have the practical justification to deal with. There is also the ideological part of the argument, and as the practical excuse begins to wane, becoming harder to justify as machines gain the abilities needed to make Aristotle’s vision of a hierarchy-free society a genuine possibility, we shall likely see the ideological justification for maintaining the current system pushed with increasing fervour.
YOU MUST BELIEVE THAT JOBS ARE GOOD
The ideological argument is that jobs are not in fact a miserable necessity we should look forward to being rid of as soon as practically possible, thereafter to engage in nonmonetary forms of productivity as we create the work, selves and societies we actually want; rather, jobs are work, the only kind of work anyone should aspire to. Maybe it could be argued that when jobs were very much a practical necessity it did make sense to encourage a belief that submitting to a job and working hard mostly for someone or something else’s benefit was a way of achieving success in one’s own life. But as the practical justification for jobs is rendered obsolete by technology, the old ‘work ethic’ that cannot imagine a good reason for productive effort beyond ‘doing it for money’ becomes a serious impediment to transcending the current system. 
We must ask: Who really benefits from perpetuating this ideal of working hard for most of one’s waking hours, mostly for the benefit of a ruling class of financial nobility? Obviously, it is in the interest of whoever occupies the top of a hierarchy to maintain the structure from which their power and prestige is derived.
THEY WHO BENEFIT
Throughout history there have been a few who, craving power, have done all they can to convince the rest that they ought to sacrifice the time of their lives. They have come in many guises- as lords and monarchs insisting we should be bossed by the aristocracy, as socialists who believe we should be bossed by bureaucrats, as libertarians who think we should be bossed by corporate executives. Exactly how the spoils of power should be divvied up is a topic of some disagreement among them. There is much argument over working conditions, profitability, exploitation, but fundamentally none of these ideologues object to power as such and they all want to keep us working in some form of servitude for one simple reason: Because they are the ones who mostly benefit from making others do their work for them. It’s very convenient for this powerful minority that the populace subordinate to them do not become too happy and productive in the true sense of the word; that anyone not willing to submit to work within whatever context suits their agenda is viewed with pity or contempt. As George Orwell wrote:
“If leisure and security were enjoyed by all alike, the great mass of human beings who are normally stupefied by poverty would become literate and would learn to think for themselves; and once they had done this, they would sooner or later realize that the privileged minority had no function, and they would sweep it away”.
In Orwell’s Nineteen Eighty-Four, an endless war is fought between three superstates. The real purpose of this war is not final victory for one of the sides. In fact, the war is intended to go on forever. Its real purpose is simply to destroy material goods and so prevent leisure from upsetting the hierarchical power structure.
BULLSHIT JOBS
In reality much more subtle methods, part of which has to do with market as opposed to technical efficiency and manufactured debt, are used to perpetuate the hierarchy. A popular reply to the question “what happened to reduced working hours?” is that a massive increase in consumerism occurred, as if we collectively agreed that more stuff was preferable to more free time. But that provides only a partial explanation. Although we have witnessed the creation of a great many jobs, very few have anything to do with the production and distribution of goods. Jobs such as those- in industry and farming- have been largely automated away, and increasingly service-based jobs are targets for automation as new generations of AI come out of R+D. So what kind of jobs are maintaining the need for so many hours devoted to the narrow definition of work? David Graeber answers:
“Rather than allowing a massive reduction of working hours to free the world’s population to pursue their own projects… we have seen the ballooning not so much of the ‘service’ sector as of the administrative sector…While corporations may engage in ruthless downsizing, the layoffs and speedups invariably fall on that class of people who are actually making, moving, fixing and maintaining things; through some strange alchemy that no one can quite explain, the number of salaried paper pushers ultimately seems to expand”.
Over the coming years we will likely see more administrative jobs created in order to provide oversight, regulation, guidance, and supervision of robots, or at least that’s how propaganda will spin it. In truth such jobs will serve no purpose other than to keep us working in the narrow sense of the word. We have seen signs of this already. The London Underground’s strong union blocked the introduction of driverless trains in the name of ‘protecting jobs’. Protecting them from what? From progress toward a future in which nobody’s time has to be wasted in driving a train each and every day? I think all that these union leaders are really interested in is maintaining the hierarchy from which they derive their power and prestige. There is not much call for unions when robots have liberated us from servitude to corporate or bureaucratic masters.
The 21st Century will see the rise of bullshit administrative jobs that have no practical justification for their existence, and are there merely to perpetuate the class-based hierarchy that has dominated our lives, in one form or another, throughout history. Such a claim may sound like a total contradiction of prior claims that business strives to eliminate work, but bear in mind that I was referring to work in the true sense of the word, not the narrow “jobs = work” definition we are now talking about. Automating truly productive, intrinsically-rewarding work out of existence and increasing the amount of bullshit administrative jobs is a win-win outcome for those with a vested interest in perpetuating the class-based hierarchy.
THE GIG ECONOMY
Do not think that those bullshit jobs will provide security. No, the rise of the bullshit job will coincide with the rise of ever-less secure forms of employment. The move toward employing more temporary workers, who are entitled to fewer benefits than their full-time counterparts, will speed up as technological unemployment does away with productive and service-based jobs. Those displaced from such jobs, fighting to get off the scrapheap of unemployment, will provide a handy implicit threat to be used against the ‘lucky’ paper-pushers in administration. Although owners and workers generally have opposing interests (the former preferring workers who do more work in the narrow sense of the word for less personal reward, the latter preferring more personal reward and less work in the narrow sense of the word) they are not true enemies but rather co-dependents (or at least they have been). No, the true enemy of the capitalist is other capitalists- rival businesses competing to corner the market and gain a monopoly. And the true enemy of the worker is the unemployed, who are in competition for their jobs. When the percentage of unemployed workers is low and the number of available jobs is high, the working classes are at an advantage. Conversely, when there are high numbers of unemployed and not many jobs available, power tips in favour of the owners. As a large percentage of jobs are lost to automation, causing an appreciable rise in the number of job-seekers, businesses will likely use their strengthened negotiating position to bring about an ‘Uber’ economy of ‘permalancers’- workers putting in full-time hours but on temporary contracts with little if any benefits other than minimal pay. As Steven Hill wrote in a Salon article:
“In a sense, employers and employees used to be married to each other, and there was a sense of commitment and a shared destiny. Now, employers just want a bunch of one-night-stands with their employees…the so-called ‘new’ economy looks an awful lot like the old, pre-New Deal economy- with ‘jobs’ amounting to a series of low-paid micro-gigs and piece work, offering little empowerment for average workers, families or communities”.
According to Graeber, one of the strengths of right-wing populism is its ability to convince so many people that this is the way things ought to be: that we should sacrifice the time of our lives so as to perpetuate the system. He wrote:
“If someone had designed a work regime perfectly suited to maintaining the power of finance capital, it’s hard to see how they could have done a better job. Real, productive workers are ruthlessly squeezed and exploited. The remainder are divided between a terrorised stratum of the- universally reviled- unemployed and a larger stratum who are basically paid to do nothing, in positions designed to make them identify with the perspectives and sensibilities of the ruling class (managers, administrators, etc)- and particularly its financial avatars- but, at the same time, foster a simmering resentment against anyone whose work has clear and undeniable social value”.
It sounds crazy when written down. Who could possibly be in favour of defending something like this? But this is the world that exists today. A world in which all productive activity that falls outside of the narrow definition of work is dismissed as being of no real value, and those who engage in such work are regarded as ‘doing nothing’. A world in which success and reward are thought of in purely materialistic terms. A world in which those who refuse to submit to the system are deserving of nothing, no matter how much material wealth our technologies could, in principle, produce. A world in which that material wealth is concentrated at the top, not because of superior productive ability and greater input, but because the monetary, financial, and political systems have been corrupted and actually stand opposed to the free-market ideals they claim to uphold.
As Bob Black pointed out, the ‘need’ for jobs cannot be understood as a purely economic concern:
“If we hypothesize that work is essentially about social control and only incidentally about production, the boss behaviour which Rifkin finds so perversely stubborn makes perfect sense on its own twisted terms. Part of the population is overworked. Another part is ejected from the workforce. What do they have in common? Two things — mutual hostility and abject dependence. The first perpetuates the second, and each is disempowering”.
Follow these developments to their logical conclusion, as Marshall Brain has done. The belief that those who do not submit to serving the system deserve nothing will result in warehouses for those affected by technological unemployment, kept out of sight and out of mind, entitled to nought but the bare minimum of resources needed to sustain life. Those who succeed in getting a bullshit administrative job will be under intense pressure from their corporate masters ‘above’ and the impoverished, jobless masses below to ‘agree’ to intense working pressures, minimal benefits and no job security whatsoever. They will be required to consider themselves ‘lucky’ to have a job at all. Wealth will concentrate even further as the means of production, totally commodified labour power, natural resources, security and military forces, and the political system become the private property of the owners of the artificial intelligences and the financial nobility that bankroll them. The world will have become a plutocracy, run by the superrich elite, for the superrich elite, and there will be little anyone can do to challenge their supremacy. And all of this will be partly our fault, the consequence of continuing to believe in the false ideology that jobs are work, the only kind of work that counts, the only kind worth aspiring to. We have been told a lie, a made-up justification for why things are the way they are, by those with a vested interest in keeping things that way. We must re-discover the true meaning of work, find our collective strength and push technological progress toward a future that serves the many rather than concentrating power in the hands of a few. And the time to do that is running out.


TECHNOLOGICAL PROGRESS AND ITS IMPACT ON EMPLOYMENT

TECHNOLOGICAL PROGRESS AND ITS IMPACT ON THE NECESSITY OF EMPLOYMENT
(This essay is part thirteen of the series ‘HOW JOBS DESTROYED WORK’)
The 21st Century could well witness a conflict between two opposing drives: The drive to eliminate work and the need to perpetuate it. In order to appreciate why this conflict should become a central issue over the coming years or decades, we need to answer the following question: Why do we work?
IF YOU WANT IT, YOU MUST WORK TO PRODUCE IT
There are many good reasons to engage in productive activity. Pleasure and satisfaction come from seeing a project go from conception to final product. Training oneself and going from novice to seasoned expert is a rewarding activity. Work- when done mostly for oneself and communities or projects one actually cares about- ensures a meaningful way of spending one’s time. 
But that reply fits the true definition of work. What about the commonly-used definition, which considers ‘work’ almost exclusively in terms of paid servitude done mostly for the benefit of others, and which disregards nonmonetary productive activity as ‘not working’; why do we have to do that particular kind of ‘work’? I believe there is a practical and an ideological answer to that question.
The practical reason has been cited for millennia. Twenty-three centuries ago, in ‘The Politics’, Aristotle considered the conditions in which a hierarchical power structure might not be necessary:
“There is only one condition in which we can imagine managers not needing subordinates, and masters not needing slaves. This would be if every machine could work by itself, at the word of command or by intelligent anticipation”.
Aristotle’s defence of slavery in his own stratified society has remained applicable, in modified form, to the systems of indentured servitude that followed. Providing the goods and services we have come to expect entails taking on unpleasant and uninteresting labour. It has to be done, and so long as technology has not been up to the job, it has fallen to people to fill such roles.
If we only had that practical reason for wage slavery, we could view it as an unfortunate, temporary situation; one due to come to a happy end when machines finally develop the abilities Aristotle talked about. But it’s rarely talked about in such positive terms. Instead of enthusing about the liberation from wage slavery and the freedom at long last to engage in work as I would define it, most reports of upcoming technological unemployment talk darkly of ‘robots stealing our jobs’ and ‘the end of work’.
The reason why the great opportunities promised by robot liberation from crap jobs are hardly ever considered has to do with the ideological justification for our current situation. But let’s stay with the practical argument a while longer, as this was the main justification for most of civilization’s existence.
THE LIMITED ABILITY OF THE MACHINE
Since Aristotle died, we have seen tremendous growth and progress in technology, most especially during the 20th century. Despite such advances, technological unemployment has never been much of an issue. People have been displaced from their occupations, yes, but the dark vision of growing numbers of workers permanently excluded from jobs no matter how much they may need employment has never come about. If anything, technology has created jobs more than it has destroyed them.
The reason why is two-fold. Firstly, machines have tended to be ultra-specialists, designed to do one or at most a few tasks, with no capacity whatsoever to expand their abilities beyond those they specialise in. Think, for example, of a combine harvester. When it comes to the job for which it was built, this machine is capable in a way unmatched by any human. That’s why the image of armies of farm-hands harvesting the wheat now belongs to the dim and distant past, replaced with one or two of those machines doing much more work in much less time. But take the combine out of the work it was built to do, attempt to use it in some other kind of labour, and you will almost certainly find it is totally useless. It just cannot do anything else, and no other machine has had much ability to apply itself to an indefinite range of tasks either. So, as new jobs are created, people, with their adaptive abilities and capacity to learn, have succeeded in keeping ahead of the machine.
THE SLOWLY-CHANGING JOB ENVIRONMENT
Secondly, for most of human history, the speed at which paradigm shifts in occupation took place was plenty slow enough for adjustments to occur. Today, when the subject of technological unemployment is raised, it’s often dismissed as nothing to worry about. Technology has always been eliminating jobs on one hand and creating them on the other, and we have always adjusted to the changing landscape. In the past, most of us worked the land. When technology radically reduced the amount of labour needed in farming we transitioned to factory work. But it was not really a case of farmers leaving their fields and retraining for factory jobs. It was more a case of their sons or grandsons being raised to seek their job prospects in towns and cities rather than the country. When major shifts in employment take at least a generation to show their effect, people have plenty of time to adjust. Educational systems can be built to train the populace in readiness for the slowly changing circumstances. Society can put in place measures to help us make it through the gradual transition. So long as new jobs are being created and there is time to adjust to changing circumstances, people only have one another to contend with in the competition for paid employment.
What happens, though, when machines approach, match, and then surpass our ability to adapt and learn? What happens when major changes occur not over generational time but over months or weeks? What if more jobs are being lost to smart technology than are being created? Humans have a marvellous- but not unlimited- capacity to adapt. Machines have so far succeeded in outperforming us in terms of physical strength. When they can likewise far outperform us in terms of learning ability, manual dexterity and creativity, that will obviously mean major changes in our assumptions about work.
It’s also worth pointing out that, in the past, foreseeing what kind of jobs would replace the old was a great deal easier compared to our current situation. The reduction in agricultural labour was achieved through increased mechanisation. That called for factories, coal mines, oil refineries and other apparatus of the industrial revolution, so it was fairly obvious where people could go. Then, when our increased ability to produce more stuff needed more shops and more administration, we again could see that people could seek employment in offices and in the service-based industries. At each stage in these transitions, we swapped fairly routine work in one industry for fairly routine work in another.
But now that manual work, administrative work, and service-based work are being taken over by automation, and these AIs are much more adaptable than the automatons of old, we have no real clue as to where all the jobs to replace these occupations are supposed to come from.
There are tremendous economic reasons to pursue such a future. You will recall from earlier how society is generally divided up into classes of ‘owners’ and ‘workers’. The latter own their own labour power and have the legal right to take it away, but have no right to any particular job. The owner classes own the means of production, get most of the rewards of production, get to choose who is employed in any particular job, but cannot physically force anyone to work (though they can, of course, take advantage of externalities that lower a person’s bargaining power to the point where refusal to submit to labour is hardly an option). 
Now, regardless of whether you think this way of organizing society is just or exploitative, it works pretty well so long as both classes are dependent on one another. For most of human history this has been the case. Workers have needed owners to provide jobs so that they can earn wages; owners have needed workers to run the means of production so that they may receive profit. The urge to increase profit, driven in no small part by the tendency of debt to grow due to systemic issues arising from interest-bearing fiat currency, pushes business to commodify labour as much as it can. The ultimate endpoint in the commodification of labour is the robot. Such machines are not cost free. They have to be bought, they require maintenance, they consume power. But they promise such a rise in productivity, coupled with such a reduction in financial liability thanks to their not needing health insurance, unemployment insurance, paid vacations, union protection or wages, that we can all but guarantee that R+D into the creation of smarter technologies and more flexible, adaptive forms of automation will continue. Tellingly, most major technology companies and their executives have expressed opinions that advances in robotics and AI over the coming years will put a strain on our ability to provide sufficient numbers of jobs- although some still insist that, somehow, enough new work that only humans can do will be created.
The thing is, work, in its common, narrow definition, is simultaneously a cost to be reduced as much as possible and a necessity that must be perpetuated if we are to maintain the current hierarchical system in which money means power and wealth means material acquisition. Remember: businesses don’t really exist to provide work for people; they exist to make profit for their owners. When, in the future, there is the option to choose between relatively costly and unreliable human labour, or a cheap and proficient robot workforce, the working classes are going to find that their lack of right to any particular job within a free market makes it impossible to get a job.
But the market economy as it exists today is predicated on people earning wages and spending their earnings on consumer goods and related services. This cycling of money- consumers spending wages, thus generating profit, part of which is used to pay wages- is a vital part of economic stability and growth. If people can’t earn wages because their labour is not economically viable in a world of intelligent machines, they cannot be consumers with disposable income to spend.
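To make that cycle concrete, here is a minimal sketch in Python with entirely made-up numbers (the function name, starting revenue and parameter values are illustrative assumptions, not a model of any real economy): consumer spending becomes firm revenue, part of that revenue becomes the next period’s wages, and if automation shrinks the wage share, the whole loop contracts with it.

```python
# Toy model of the wage-spending cycle described above.
# All figures are invented for illustration; this is not an economic forecast.

def simulate_cycle(periods: int, wage_share: float, spend_rate: float = 0.9) -> list:
    """Track firm revenue when wages are recycled into consumer spending.

    wage_share: fraction of revenue paid out as wages each period.
    spend_rate: fraction of wages households spend on consumption.
    """
    revenue = 100.0          # arbitrary starting revenue
    history = []
    for _ in range(periods):
        wages = revenue * wage_share      # firms pay wages out of revenue
        consumption = wages * spend_rate  # households spend most of their wages
        revenue = consumption             # that spending is next period's revenue
        history.append(round(revenue, 1))
    return history

print(simulate_cycle(5, wage_share=1.0))  # the cycle shrinks only by what is saved
print(simulate_cycle(5, wage_share=0.5))  # halve the wage bill and spending collapses
```

The point is not the particular numbers but the structure: wages and consumption sit in a loop, so anything that removes wages from one side removes demand from the other.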
We will continue our investigation of technological unemployment in part fourteen.


HOW MONEY CAN DISINCENTIVIZE WORK

HOW MONEY CAN DISINCENTIVIZE WORK
(This essay is part twelve of the series HOW JOBS DESTROYED WORK)
Does monetary reward really provide the best incentive to work? If you live within a system that commodifies everything, turning it into private property that you cannot gain access to unless you have money, and your only means of obtaining money is to submit to paid employment, that would most likely push you toward getting a job. That seems more ‘stick’ than ‘carrot’. But what about those fortunate few, that 13% or so, who don’t hate their job? If you take their existing motivation and add another- financial- incentive to work, does that increase their desire to work?
Common sense would assume it would. Two incentives have got to be better than one. And financial reward must be the great motivator, for why else would executives be worth so much? Well, firstly, executives are not paid what they are worth; nobody is paid what they are worth in a market economy. People receive whatever they can negotiate, and with the balance of power tipped so much in their favour, the 1% can strike a great deal for themselves. 
As for common sense’s view of an extra, financial, incentive increasing motivation, psychologists and economists have been making empirical studies of this assumption for forty years, and the evidence is that it just isn’t true. Adding a monetary incentive to work that is already rewarding does not make it even more rewarding. Quite the opposite in fact: It undermines, rather than enhances, the motives people already had. 
In one study, conducted by James Heyman and Dan Ariely, people were asked to help load a van. When no fee was offered, people tended to help, inclined as they were to view the situation in social terms. But when a fee was included, that induced participants to take the transaction out of the social realm and reframe it as financial. The offer of money led to the question “is it worth my time and effort?”. The extrinsic motivation of monetary reward undermined the intrinsic motivation of being a helpful person. Economist Bruno Frey describes this as ‘motivational crowding out’.
EVERYBODY BUT ME IS MOTIVATED BY MONEY
Interestingly, the assumption that we are motivated by money holds only in the general sense. When studies are conducted to gauge people’s attitudes to work and what motivates us, we tend to find that most individuals don’t think of themselves as primarily motivated by money. For example, Chip Heath surveyed law students, and 64% said they were pursuing such a career because they were interested in law and found it an intellectually appealing subject. But while we don’t think of ourselves as ‘in it for the money’, we tend to think that it is other people’s prime motive: 62% of people in Heath’s survey reckoned their peers were pursuing a career in law for monetary gain. Since it’s generally believed that money is the main incentive provider (‘I’ being the exception), it’s not surprising that material reward continues to be so heavily relied on.
It is important to point out that extrinsic motivations are not always bad; it is just that when an activity is intrinsically rewarding, adding an extrinsic motivation can actually reduce engagement in that task, not increase it as common sense might lead us to believe. Furthermore, studies from Harvard Business School, Northwestern University’s Kellogg School of Management, and others have shown that goals people set for themselves with the intention of gaining mastery are usually healthy, but when goals are imposed on them by others- such as sales targets, standardized test scores and quarterly returns- such incentives, though intended to ensure peak performance, often produce the opposite. They can lead to efforts to game the system and look good without producing the underlying results the metric was supposed to be assessing. As Patrick Schiltz, a professor of law, put it:
“Your entire frame of reference will change [and the dozens of quick decisions you will make every day] will reflect a set of values that embodies not what is right or wrong but what is profitable, what you can get away with”.
Practical examples abound. Sears imposed a sales quota on its auto repair staff- who responded by overcharging customers and carrying out repairs that weren’t actually needed. Ford set the goal of producing a car by a particular date, at a certain price and at a certain weight; constraints that led to safety checks being omitted and the dangerous Ford Pinto (a car that tended to explode if involved in a rear-end collision, due to the placement of its fuel tank) being sold to the public.
Perhaps most infamously, the way extrinsic motivation can cause people to focus on the short term while discounting longer-term consequences led to the financial crisis of 2008, as buyers bought unaffordable homes, mortgage brokers chased commissions, Wall Street traders wanted new securities to sell, and politicians wanted people to spend, spend, spend because that would keep the economy buoyant- at least while they were in office.
A VIRTUAL WORLD OF TRUE WORK
It would be handy if there were another form of productive activity, other than employment, that relied on other incentives to work, for then we could see how successful non- monetary incentives are at motivating us. Actually, there is. We call them videogames. I would argue that videogames have opposing drives to jobs, due to the fact that where jobs are concerned a business pays you to work, but where videogames are concerned you pay a business to work.
 Paying wages counts as a cost to a business. The company wants to reduce costs, and one way it might accomplish that would be to reduce, as much as possible, the amount of challenge, creativity, autonomy, and judgement required to do the job, thereby making it possible to employ workers who are less skilled, easier to train, and hence more replaceable and so not in a good position to bargain for a better deal. 
You might wonder why anybody would want to do a job that has little going for it beyond the fact it pays wages, but of course nobody wants to do it; many are just not in a position to turn it down.
A videogame, in contrast, is a product people pay for. But there is a problem. Fundamentally, what you physically do in a videogame stands comparison to the dullest production-line job. You are just pressing a few buttons over and over again. The game designers must take that basic, monotonous, action and add layer upon layer of non-monetary incentive. This may include a clear sense of purpose for why you are doing what you are doing, perhaps through a strong narrative; plenty of opportunity for social engagement through team-building, community message boards and such; and a meritocratic system that rewards skillful play and turns failure into a valuable lesson through systems of feedback that constantly provide you with signals so that you know if you should rethink your strategy.
The result? Videogames are massively popular. People pay good money to do what is, essentially, work. In fact, Byron Reeves and J. Leighton Read, in their book ‘Total Engagement’, list hundreds of job types and show how every one has its equivalent occupation in videogames and online worlds.
The perspective mainstream media takes is usually a negative one. Buying videogames is fine, of course, as that helps growth and provides jobs. But playing videogames, especially for long periods, is a definite no-no. It’s usually described as kids stuck in front of a screen, not motivated to go out and get a job because they are ‘addicted’ to Grand Theft Auto V or whatever the current bad boy is.
But why would anyone put down their videogame or log out of their online world- arguably the only place where you can find something like a true meritocracy, and where nonmonetary incentives have been refined over many years- and go seek a job unless they were really forced to? What for? The modern market stands opposed to everything it claims to champion, as John McMurtry illustrated in ‘The Cancer Stage of Capitalism’:
“Non-living corporations are conceived as human individuals…Continent-wide machine extractions of the world’s natural resources, pollutive mass-manufacturing and throwaway packages are imaged as home-spun market offerings for the local community…Faceless corporate bureaucracies structured to avoid the liability of their stock holders are represented as intimate and caring family friends…If we walk through each of the properties of the real free market, in short, we find not one of them belongs in fact to the global market system, but every one of them is appropriated by it as its own”.
Who is to blame? Banks? Corporations? Politicians? Consumers? The Left? The Right? I have no definitive answer. I have heard many opinions from all sides, each providing justification for why they are right and everybody else is wrong. My hunch is that this is a systemic outcome that cannot be conveniently blamed on any one group, thing or ideology. Whatever was behind the development of the global market system and debt-based, interest-bearing currency, we now inhabit a world in which jobs destroy work by devaluing voluntarism and undermining intrinsic motivation, using technology to cause job overspill and turn workplaces into panopticons, and pursuing short-term profit at any cost. This encourages the growth of ‘socialism for the rich’, where the reward for risk-taking in the casino world of derivatives and the like is concentrated into the accounts of the few financial nobility who now rule us, while the costs are borne by us, the taxpaying serfs, who are not physically chained but compelled to labour through manufactured debt and the suppression of true, technical efficiency for monetary gain. And now that system is gearing up to throw employees on the scrapheap.
The subject of technological unemployment will be the next point of discussion.


OVERCONSUMPTION AND THE PERPETUATION OF SCARCITY

(This essay is part eleven of the series HOW JOBS DESTROYED WORK)
Because we have built for ourselves a system that tends to cause debt to outgrow productive ability, it is essential for the perpetuation of such a system that we never succeed in ending scarcity. Now, human needs are finite, or if not finite then nowhere near insatiable enough to fuel the appetite for growth that our market system has, dominated as it is by cancerous forms of money growth that extract wealth from the real economy. Fortunately for the system, human desires can be manipulated to embrace a ‘throwaway’ culture. Such an outcome requires a hedonistic, short-sighted value system and measures of ‘wealth’ and ‘success’ that define such things in purely consumerist terms. Such an outcome must undermine, as much as it can, any interest in the types of innovation and problem-solving that are not inherently based on monetary return.
It is therefore perhaps not surprising that the majority of people have been conditioned to devalue all nonmonetary forms of work and to ascribe success to overconsumption. It also explains why advertising has grown to become such a dominant part of many businesses’ budgets. It’s required to manufacture fake needs.
CHANGING YOUR BRAIN
Notice how many adverts rely on genuine meaning and non-monetary values to sell their products. TV commercials tend to revolve around people finding love, or being among friends, or having the freedom to immerse themselves in idyllic locations. Adverts for cars, for example, will show a happy driver travelling down an empty road to some stunning location, perhaps to meet a gorgeous partner. Those adverts don’t depict a stressed-out employee stuck in rush-hour traffic with his superior barking in his ear through a cellular phone, and whose debt levels, brought on in part by his consumerist lifestyle, severely restrict his bargaining power when negotiating better terms.
The power advertising has to influence our minds is demonstrated by the ‘Pepsi paradox’. The paradox consists of the fact that when blind taste tests are conducted, people tend to select Pepsi as the best-tasting cola. But Coca-Cola outsells Pepsi. Brain scans show that when people taste Pepsi and Coca-Cola without knowing which is which, the former drink triggers greater activity in the ventral putamen, a component of the brain’s reward system. When the drinks are tasted with awareness of which drink is which, we see a change in the brain: Coca-Cola triggers greater activity in the medial prefrontal cortex, an area of the brain dedicated to personality expression and moderating social behaviour. As Steve Quartz of the California Institute of Technology explained:
“What is a brand? It is a social distinction that we are creating…Cola is brown sugar water about which the brain discerns no particular difference until the brand information comes in. Then, the brain suddenly perceives an enormous difference”.
Another study conducted some years ago introduced television to Fijian islanders who had not been exposed to Western values. Prior to this introduction, eating disorders were almost unheard of, but by the end of the observation period, the barrage of materialistic and vanity values that feature so heavily in our commercial world had altered the psychology of the islanders. As Zeitgeist explained:
“A relevant percentage of young women…who prior had embraced the style of healthy weight and full features, became obsessed with being thin”.
It would be wrong to suggest that all consumerism is bad. If we had no choice but to live in austere conditions that offered no comforts or luxuries and only addressed our most basic needs, I don’t think that would make us all that happy. It is a triumph of capitalism and market systems that ordinary people now enjoy a greater range of food, drink and luxury items than a medieval monarch ever laid hands on. 
Capitalism’s success lies in its ability to solve problems and the measure of a society’s wealth should be determined by how well problems are being solved. But our debt-growing monetary system and the consumerist mentality nurtured to support it have encouraged the emergence of a market that creates problems more than it solves them, for the simple reason that monetary gain can be had if the market can perpetuate a feeling of inadequacy and inferiority so as to sell us bogus cures. 
TECHNICAL EFFICIENCY IS ‘BAD’
The market is also opposed to technical efficiency. A product that maximises technical efficiency lasts for as long as possible. This may be because it is robustly made- think of a bulb that provides light for as long as physical limits permit. Or it may be achieved through easy repairability- think of a tablet computer with component parts that can be swapped out as they suffer wear and tear.
Market efficiency, on the other hand, is much more concerned with driving sales, and so there is an incentive to inhibit technical efficiency for the sake of repeat purchases. Food comes with ‘sell-by’ dates- even if it is tinned food that lasts pretty much indefinitely if unopened. That’s assuming it makes it onto the shelves at all. Enormous amounts of decent food get thrown away simply because they fail to meet the exacting standards of supermarkets that want absolute uniformity in fruit and vegetables.
A famous example of market efficiency versus technical efficiency would be the Phoebus light bulb cartel of the 1930s. Back then, light bulbs were technically able to provide about 25,000 hours of light. The cartel forced each company to restrict light bulb lifespans to less than 1,000 hours- much better, at least as far as repeat sales are concerned. Today, some inkjet printer manufacturers employ smart chips in their ink cartridges to prevent them from being used after a certain threshold (for example, after a certain number of pages have been printed) even though the cartridge may still contain usable ink. When the cartridge is taken to be refilled, the chip is simply reset and the same cartridge is resold to its owner.
In 1801, Eli Whitney produced fully interchangeable parts for muskets. Prior to this move, the whole gun was useless if a part broke; Whitney’s interchangeable parts allowed for continual maintenance. Common sense would assume that such an idea would spread throughout the market, but instead we see proprietary components that ensure a total lack of universal compatibility, and products driven to unnecessary obsolescence. 
Bear in mind that these products are the result of human effort. People are giving up their time to labour at producing goods that are designed to be thrown away, and the sooner the better. As Zeitgeist put it:
“The intention of the market system is to maintain or elevate rates of turnover, as this is what keeps people employed and increases so-called growth. Hence, at its core, the market’s entire premise of efficiency is based around tactics to accomplish this and hence any force that works to reduce the need for labour or turnover is considered “inefficient” from the view of the market, even though it might be very efficient in terms of the true definition of the economy itself, which means to conserve, reduce waste and do more with less”.
GDP
A clear indication of how money can distort perspectives of work can be seen in the rationale of Gross Domestic Product, or GDP. GDP has its origins in post-depression America. In the 1930s, presidents Hoover and Roosevelt were looking for a way to determine how dire the situation was. A Nobel laureate in economics, Simon Kuznets, devised a method for measuring the flow of money as a whole. Back then, industry dominated the economy, which meant that most economic activity involved the creation and sale of physical products. So, Kuznets’ measurement only tracked the flow of money among different sectors, not the creation and sale of actual things. To further simplify things, Kuznets’ model tended to undercount what economists call ‘externalities’, which refers to any industrial or commercial activity whose effects are experienced by third parties who are not directly related to the transaction (i.e. they are not directly working for, or customers of, the company). Externalities can come in both positive forms, providing unintentional benefits (a cider farm’s apple orchard happens to provide nectar for a nearby bee-keeper’s bees), and negative forms, causing unintentional harm (an industrial process causes pollution, affecting the health of people who happen to live nearby). Also, externalities arise from both production and consumption.
So, what does life look like when viewed through the filter of GDP? Since it only tracks the flow of money, anything that does not involve monetary exchange is disregarded. In reality, work can and often does exist outside of monetary transactions. A man may volunteer to paint fences in his community because he wants the neighbourhood to look nice. A retired teacher might give free maths tuition to his friend’s son who is falling behind at school. All good work, but from the strictly monetized perspective taken by GDP, services rendered without payment have no value. Furthermore, whereas violence, crime, breakdown of the family and other such things have a negative impact on society, GDP counts them as improvements if the decline results in paid intervention. Crime results in the need for legal services, more police and prisons and repairs to damaged property. The breakdown of the family may lead to social work, psychological counselling and subscriptions to antidepressants. Since these are paid interventions, as far as GDP is concerned, social decay registers as an improvement, because it’s generating financial flows as we pay for all that extra security, counselling, and damage repairs.
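As a purely illustrative sketch of that accounting logic, the snippet below (with invented activities and prices) tallies things the way a GDP-style measure does: only priced transactions count, so the volunteer fence-painting and free tuition vanish from the total, while every paid response to social decay registers as an addition.

```python
# A GDP-style tally counts only activities with a monetary price attached,
# regardless of whether they build wellbeing or merely remediate its loss.
# The activities and figures below are invented purely for illustration.

activities = [
    ("neighbour paints community fences (volunteer)", 0),
    ("retired teacher gives free maths tuition",      0),
    ("repairs to vandalised property",                1200),
    ("extra security patrols after a crime wave",     800),
    ("antidepressant prescriptions",                  300),
]

gdp_style_total = sum(price for _, price in activities)

for description, price in activities:
    status = "counted" if price > 0 else "ignored"
    print(f"{description:<48} {status}")

print(f"GDP-style total: £{gdp_style_total}")  # only the paid interventions appear
```

The sketch only shows what a money-flow filter keeps and what it discards; a broader measure of how well problems are being solved would obviously have to score these lines very differently.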
GDP is obviously a simplified model that cannot accurately inform us of how well a society is solving its problems and improving lives (to be fair to Kuznets, he did point out its limitations). But notice how well it serves the logic of today’s market, which has at the heart of its context of ‘efficiency’ a focus on monetary exchange and general growth in consumption, with scant regard as to what is being produced or what effect it has. So long as money and consumption continues to grow, that’s good. Any significant reductions are bad.
Submitting to labour within a market that devalues community-building voluntarism and views the cost of social decay as a financial benefit, exposed to an endless barrage of psychological manipulation designed to persuade us that happiness comes from the possession of material things, and culturally trained to idolise the very people who tipped the balance of negotiating power almost entirely in their favour, enabling them to reduce wages and benefits to a point where there is hardly any compensation for so many hours lost to servitude, it cannot come as a surprise if we see credit card binges and other forms of overextension. During the lean-and-mean 90s, many people found that pushing household borrowing to dangerous heights was just about the only way they could obtain what seemed like an appropriate reward for so much sacrifice. Savings were put into mutual funds, but few held investments in such funds that were large enough to compensate for all the insecurity, benefit cutbacks and downsizing that typified the era. Moreover, their portfolio managers often backed the very investor-raiders and corporate changes, aimed at short-term profit at any cost, that were causing so much deterioration in working life.
Coming up in part twelve: how money can disincentivize work.


“INDENTURED SERVANTS TO A RULING CLASS OF FINANCIAL NOBILITY”: HOW MONEY DESTROYED WORK

“INDENTURED SERVANTS TO A RULING CLASS OF FINANCIAL NOBILITY”: HOW MONEY DESTROYED WORK.
(This essay is the tenth installment of the series HOW JOBS DESTROYED WORK)
When it comes to the topic of how money can destroy work, we encounter a problem. Such a topic really calls for an investigation into what money actually is, along with the history of empire building, trade, banking and finance and the various actions, both well-meaning and exploitative in intent, which caused money to evolve into the form it now takes. Such an undertaking calls for a book in and of itself, and as such we cannot do full justice to the role of money in destroying work here. Rather than attempt a comprehensive account, I want to focus on a few things: The commodification of debt, market efficiency, and extrinsic versus intrinsic value.
JUSTIFYING PRIVATE PROPERTY
In explaining the commodification of debt, one might select the 17th century as a starting point. The reason for choosing that century over another is that it was the period in which the ideas that underpin capitalism were first put down in writing.
If one belief can be said to be of prime importance to capitalism, it would have to be the concept of private property. The problem is, the Earth and its resources do not actually belong to anyone. Such a problem was resolved in the past by asserting that divine power had set up rigid caste systems, with everybody assigned their place from birth. But such assertions were not persuasive enough for more rational minds. Another justification was needed.
In 1689, in chapter five of his Second Treatise of Government, John Locke attempted such a justification. Locke’s line of reasoning was that commodities hardly ever come in a form where they can be obtained without effort. Tin must be mined, crops must be sown, land that is to be built on must be made fit for such a purpose. So why not say that whoever performs such labour may lay claim to the end product?
“The labour of his body and the work of his hands, we may say, are strictly his. So when he takes something from the state that nature has provided and left it in, he mixes his labour with it, thus joining to it something that is his own; in that way, he makes it his property”.
Such an argument fits nicely with my definition of work, because it entails a person undertaking mental and physical effort that leads to reward. Locke argued that the individual was entitled to work to obtain all the reward he or she could make good use of:
“Anyone can, through his labour, come to own as much as he can use in a beneficial way before it spoils; anything beyond this is more than his share and belongs to others”.
Again, all reasonable-sounding points: Work to acquire all that you can use, but use all that you acquire. Do not over-hoard, for in doing so you are taking resources that others may have more use for.
AH, BUT WITH MONEY….
However, in the very same book, Locke made another statement that undermines his argument concerning labour and property. He wrote:
“The one thing that blocks this is the invention of money, and men’s tacit agreement to put a value on it; this made it possible, with men’s consent, to have larger possessions and to have a right to them”.
We have already seen how market logic commodifies labour and self-interest adjusts negotiating positions such that the owner classes may impose conditions that maximise their benefits while minimising the reward due to the working classes. The power that money has to commodify labour brings the statement ‘anyone can, through his labour, come to own as much as he can use’ into doubt. After all, if I have money, and I pay other people to build me a house, whose labour is it that is being mixed with the resources used up in such a property? Not mine. I have not lifted a finger, other than to write a cheque.
Now, at the time when John Locke was writing, just about everybody was something of a producer. That being the case, one could claim that whoever had money must have mixed their labour with it (although you would have to ignore the reality of inheritance connected to earlier imperial conquests, feudalism, or the state monopolies of mercantilism to believe it is true in all cases). This kind of reasoning may sound acceptable to people, most of whom are used to striving to obtain even modest amounts of income. But capitalism’s drive to commodify everything and lower costs would further undermine such belief.
THE CREATION MYTH OF MONEY
If you ask somebody to visualise money, chances are that they will picture a physical object, like a dollar bill or an English penny. There is a popular belief that before money existed, people relied on systems of barter. Such systems would have fallen foul of the ‘double coincidence of wants’: For a barter transaction to take place, I must want what you have, and you must want what I have. Some commodities were more likely to be accepted, and people began using these as intermediaries. The word ‘salary’ is derived from ‘salt’, so presumably people were once content to be paid in salt because they could be reasonably sure of it being exchanged for something more useful. Over time, we converged on a commodity that was used pretty much exclusively as a medium of exchange; something like coins. Money was born. 
This story, by the way, is fictional. Anthropologists have been searching for the fabled land of barter, a land in which people act just like today only with money removed, and have found no evidence that it ever actually existed. That is not to say that nobody ever bartered. Rather, the evidence points to no actual commonly-used system of the form ‘I will swap my four eggs for your beef steak’. If you read economic textbooks, you may note how they all use imaginary examples of a barter-based economy. True, they may be based on real-world communities (‘imagine a tribal village…’, ‘imagine a small-town community…’) but they are never examples drawn from historical fact.
Quite simply, this tale that goes ‘in the beginning there was barter, and it was darned inconvenient, so money was invented’ is capitalism’s creation myth. Most economists retell it, and it is a lie. Why perpetuate such a fantasy? David Graeber has argued that it is necessary to believe this is how money came into being, because it justifies arguing that economics is “itself a field of human inquiry with its own principles and laws- that is, as distinct from, say, ethics or politics”. Once you accept that, it follows that property, money, and markets existed prior to political institutions and that there ought to be a separation between the State and the economy, with the former limited to protecting private property rights and guaranteeing the soundness of the currency (some go further and say the government should be limited to protecting property rights only). In actual fact the emergence of markets and money has always depended on the existence of the State (whether markets and money will continue to depend on the state, regardless of technological development, is another matter).
A full justification of the accusation that the most commonly-told story of money’s creation is a myth is beyond the scope of this essay (those who wish to consider the evidence should read ‘Debt: The First 5000 Years’ by David Graeber). Whatever the real evolution of money was, we undoubtedly did arrive at a situation in which coins (usually gold or silver) became the ubiquitous medium of exchange. But the drive to reduce costs was not content with money remaining a physical commodity. If something else, some non-physical thing, could be accepted as ‘money’, that would confer a marvellous ability on whoever gained control of such ‘money’: It would be possible to create any amount of the stuff out of nothing.
DEBT-BASED CURRENCY
What could possibly be accepted as a non-physical form of money? In a word: Debt. We are still led to believe that money is a physical commodity. Documentaries and news reports invariably depict money in the form of paper notes and coins, but money of this kind actually represents a tiny percentage of currency in use today. The vast majority of money, 95% or more, is created out of nothing by the banking system. More precisely, it’s created out of the promise to pay. Whenever anybody takes out a loan, the bank does not lend out money it already has. Instead, the digits are simply entered into the borrower’s account, and hey presto, the money is there.
Explaining in detail the mechanisms by which money is created out of debt, and how the banking system is able to get away with something like ‘printing money’, which is fraud if done by anyone else, is sadly beyond the scope of this essay. Those who are interested to know more might want to read books like ‘The Creature From Jekyll Island: A Second Look At The Federal Reserve’ by G. Edward Griffin or ‘Modernising Money: Why Our Monetary System Is Broken And How It Can Be Fixed’ by Andrew Jackson and Ben Dyson. Here, though, I want to focus on an aspect of the system that is surrounded by a lot of misunderstanding: The application of interest.
INTEREST
Whenever money is borrowed, it typically has to be paid back plus accrued interest. For example, if I borrow £10,000 at 9% interest, I must pay back £10,900. Now, the banking system only creates enough money to pay back the principal (i.e. the original £10,000), not the interest. Therefore, the total amount of money in existence is insufficient to repay all loans plus interest.
This has led some to conclude that our debt-based, interest-bearing currencies must lead inevitably to growing debt. There simply is no way for all the money plus interest to be repaid, other than to borrow the extra amount. But that extra amount also has interest charged on it. Therefore, the more we borrow, the more we have to borrow, in a never-ending spiral of growing debt.
There is, however, a way to pay back more money than actually exists. To see how, we can turn to a simplified example. Imagine that Alice and Bob live alone on an island. Bob is in possession of a pound coin, the only money on the whole island. Alice asks to borrow the pound, and after interest is added she must pay back £5. How can she do that? She can submit to paid labour. She paints Bob’s fence and earns £1 in wages. She hands that £1 over and pays off a part of her debt. The next day she prunes Bob’s roses, and receives the same £1 coin she handed back yesterday. Again, she hands it over as payment for part of her interest-bearing debt. This revolving-door process of money handed back and forth as wages and repayment can continue until the debt is paid off entirely.
The same principle can apply in the real world. Say a bank loans £10,000, to be repaid at £900 per month, of which £80 represents interest. That interest is spendable money in the account of the bank. The bank decides to hire a cleaner, and the result is not all that different from the simplified case. Nor do you have to literally seek employment at the very branch of the bank from which you borrowed. So long as there is no requirement to repay everything at once, so long as the money needed to pay interest is spent into the economy, and so long as there are wage-paying jobs, it is possible to repay debt plus interest even though the total amount of money is never enough to cover all the money owed.
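Here is a minimal sketch of the island example in Python (the figures are the ones used above; the loop is simply one way of illustrating the revolving-door process, not a claim about how real banking software works): Alice owes £5, only £1 exists, yet the debt clears because the same coin keeps being spent back to her as wages.

```python
# The island example above: Alice owes Bob £5 in total, but only one £1 coin
# exists. The debt can still be cleared, because Bob keeps spending that same
# coin back to Alice as wages for each day's work.

debt = 5     # what Alice owes, principal plus interest (£)
coin = 1     # the only money in existence (£)
days = 0

while debt > 0:
    wages = coin        # Bob pays Alice the £1 for a day's labour (fence, roses, ...)
    debt -= wages       # Alice hands the same £1 straight back as a repayment
    days += 1

print(f"Debt cleared after {days} days, with only £{coin} ever in circulation.")
# The catch, as argued below: this only works while the interest keeps being
# spent back into the productive economy as wages rather than re-lent or hoarded.
```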
Another popular idea, in marked contrast to the previous belief in an absolute shortage, is that since banks spend interest charges as operating expenses, interest to depositors and shareholder dividends, there is in fact enough money released back into the community to make all payments. But this is an oversimplification that rests on the assumption that interest-bearing money is always spent into the economy, never lent at interest or invested for gain. In reality, there is a significant proportion of non-bank lenders, and if they manage to capture some of the money needed to retire the loan that created that money, the original loan cannot be retired. Beyond that, there is a cultural expectation that money should generate more money- not through productive effort but through mere investment for personal gain.
For the theory that there is always enough money spent into the productive economy to pay off interest to hold, it would have to be true 100% of the time. But that cannot be the case when the money needed by borrowers in everyday productive economic activity is instead moved ‘upstairs’ to play in a casino world where players essentially gamble on how money moves through financial systems. For example, the volume of trade on the world’s foreign exchange markets in just one week exceeds the total volume of trade in real goods and services during an entire year. This money is in continuous play by speculators hoping to make windfall profits on currency fluctuations; it is money circulated for no purpose and no productive outcome other than to make more money. Nowhere in our system is there any restriction on re-lending money or investing it for personal gain in the casino economy that is, arguably, completely auxiliary to the real, productive economy. It stands to reason that every time interest is added to money that already bears an interest charge (as happens when secondary lenders capture such money), and every time money is taken out of circulation in the real, productive economy, the pressure on the system to repay its debts increases.
Producers may respond by increasing sales or raising prices. Consumers may respond by taking on an additional job or paying off debts over a longer period of time. Governments may respond by raising taxes. But each tactic comes with possible negative consequences. For producers, competition for sales usually entails lowering prices, a move that necessitates even more sales and risks overproduction and saturation of the market. Increasing taxes drains money from the productive economy, thereby reducing the collective ability to pay taxes, which then necessitates deficit spending and additional interest charges. Competition for jobs lowers wages, lower wages mean less consumer spending, and paying interest over longer periods adds enormously to the amount of interest owed.
WHO BENEFITS?
The truth regarding the effect that interest has on our lives lies somewhere between those opposing extremes, in which the money supply is believed to be in absolute shortage on one hand, and always available on the other. In principle, the money could be made available provided it were always spent into the real, productive economy, but it isn’t. And the result, fuelled in part by the commodification of debt and what some call the ‘money sequence of value’ (meaning systems that do nothing except turn money into more money), is increasing debt, and people finding it ever harder to keep themselves afloat as the system plays out.
So who benefits from a crazy system like this, one which causes debt to grow and grow? Those who control the money, that’s who. Remember: banks profit primarily from interest-bearing debt. As G. Edward Griffin put it:
“No matter where you earn the money, its origin was a bank and its ultimate destination is a bank…this total of human effort is ultimately for the benefit of those who create fiat money. It is a form of modern serfdom in which the great mass of society works as indentured servants to a ruling class of financial nobility”.
I have defined work as physical or mental effort that is intrinsically meaningful and directly connected to a reward. I would argue that any work that is directly connected to a reward will necessarily be intrinsically meaningful. A beaver does not construct a lodge for no good reason. Work done for the purpose of rewarding yourself is a great thing, but how many of us can honestly be said to be ‘working for ourselves’? The so-called ‘self-employed’ cannot be said to truly work for themselves if what they are mostly doing is paying off debt. They and anyone else in that situation are working for the banks or the state (whoever you think controls fiat money). They have jobs, but they don’t have work. They are, indeed, indentured servants to a ruling class of financial nobility.
WIPING OUT DEBT DESTROYS DEBT-BASED CURRENCY
If money is created out of debt, one may ask whether money is destroyed as debt is repaid. The answer is yes. In 1694 King William III borrowed £1,200,000 from a consortium of bankers (the consortium that became the Bank of England). In return for the loan the consortium was given a monopoly on the issuance of banknotes, which basically meant they could ‘monetize’ the debt and advance IOUs for a portion of the money owed by the king. The bankers were able to charge the king 8% annual interest on the original loan and also charge interest on the same money to the clients who borrowed it. So long as the original debt remained outstanding this system could continue and, in fact, that debt remains unretired to this day. It cannot be fully paid off because, if it were, Great Britain’s entire monetary system would cease to exist.
Marriner Eccles, one-time Chairman of the Federal Reserve, agreed that money is destroyed as debt is paid off in a debt-based system:
“If there were no debts in our monetary system, there wouldn’t be any money”.
Coming up in part eleven: market versus technological efficiency.
