alternative plans for the end of jobs

Work to earn a living! (STEEMED)
It’s been promoted as the best and most honourable way to gain capital, and it has served us for millennia. Since time immemorial, people have wanted goods and services, and those goods and services have needed people to bring them into being. That meant employment opportunities, and societies organised around the employer/employee relationship. Sure, most jobs are not in any way fun or interesting for the people who have to do them (which is why Monday is widely held to be the worst day of the week and the weekends are almost universally adored, at least by those who don’t have to go to their jobs on Saturday or Sunday), but it has always been necessary for people to do jobs.
But the incentives of capitalism have never been about providing jobs for people. Rather, they have always been about increasing profit for the owners of capital by lowering marginal costs. People who complain about the loss of employment and the harm done to local communities when some business closes down and moves overseas in order to be more competitive miss the point that the CEO is tasked with increasing shareholder value and nothing else. Of course, businesses will provide jobs for people and build communities if doing so helps increase profit or lower marginal costs, but if a better method of doing either should come along, one that does not involve employing people, you can bet the most successful businesses will adopt that method.
This is why some have been keeping a wary eye on automation. Could there be a Cambrian explosion of specialised robots and narrow AI, resulting in countless ways to automate jobs and squeezing human workers to the point of making job-seeking impossibly difficult? Could robot minds become as capable as human brains, or more so, resulting in artificial labourers who work for nothing 24/7, never taking holidays, never getting sick, never organising into unions and making demands?
In short, could the way of life that has applied for thousands of years, one in which capital growth requires people to find employment in jobs, come to an end, giving way to a new era in which machines grow capital with only a few humans in the loop, or maybe none at all? More to the point, if technological unemployment does happen, what should we do about it?
Erik has an idea. Assume technological unemployment is never going to happen, or, if it is going to happen, not for a long while yet. No need to plan for some far distant possibility. Erik is not alone in questioning the belief that technological unemployment is going to occur. Many have pointed out that tech creates jobs as well as eliminating them. We can retrain and move from being office workers to 4-dimensional holobiomorphal co-formulators or whatever the hell people do in 2030 to earn a living. If it is always the case that tech creates jobs, and that people’s labour will always be the most cost-effective commodity one can hire to fill the vacancies those jobs open up, then we can carry on as we always have. If.
UBI is another possibility, probably the one most often argued over on this forum. If technological unemployment is going to make it impossible for most people to get a job (‘darn it, I have applied for a thousand different jobs, but that Roboworker 2000 has been installed in every single one. It’s replacing jobs faster than I can retrain!’), we have to sever the link between jobs and wages. It’s all very well lowering costs by eliminating jobs and replacing human workers with ultra-efficient and capable robots, but if those robots receive no wages and people can earn no wages, where are all the consumers with money to spend going to come from? Can an economy really work if wealth is concentrated in 0.1% of the population, leaving everybody else with little to no disposable income?
In this thread we are going to assume that technological unemployment IS a reality we will be facing in the future. I want to know: APART from UBI, what can we do to ensure the robot revolution benefits as many people as possible? How should we organise society so that it is best-placed to meet that future in which so few people are needed in jobs? 
Here is one idea. Money needs to be reinvented so that, as technological capabilities increase and more and more jobs are automated, the value of each coin in your pocket goes up. At the moment, fiat money and fractional reserve banking are designed to redistribute money from the bottom of the pyramid to the top, without those at the top necessarily making any contribution to the real economy. This is achieved through inflation and other methods. Rather than inflation decreasing the purchasing power of the money in your pocket, the purchasing power of money in ordinary people’s pockets should be increasing, as indeed I believe it did during the 19th and early 20th centuries. Material wealth needs to become cheap, so cheap that anybody with half a brain, who saves at least something and prepares for the tech unemployment to come, can live comfortably in that fantastic future in which capitalism has reached its peak.
Any other ideas?

Posted in forum thoughts, work jobs and all that | 5 Comments


OK, first of all I should admit that this analogy is not an invention of mine. It features in this documentary:

But it is a good one so I thought I would spread the word.
Imagine it is 1915. At this point in time, the horse population of America is at its peak, with some 21 million of these animals pressed into the service of mankind. For tens of thousands of years, people have relied on the superior strength of animals like horses and oxen to help plough soil, pull barges along canals, work down coal mines, provide transportation, and even serve as weapons of war. Imagine that two horses have heard of a new invention, something called the internal combustion engine, and that they have different opinions concerning what impact this invention is going to have on horse labour.
One horse believes that the internal combustion engine is going to steal horses’ jobs. People will make mechanical horses that cost less to upkeep and will be capable of doing more work. It makes reference to the first Benz 954 cc single-cylinder four-stroke engine with trembler coil ignition, capable of producing 2⁄3 horsepower, and points out that the amount of horsepower obtainable from internal combustion engines has gone up and up. Horses have long enjoyed a vast advantage in strength over people, but now people have invented something which will quickly match the strength of horses and then, not long after, go way beyond it. It imagines a world full of mighty muscular machines, their horsepower counted not in single figures but in the thousands and maybe more. As far as employment is concerned, the future looks pretty bleak for horses. Their services are simply not going to be needed to anything like the extent to which mankind has relied on them in the past.
The other horse has a more optimistic view of the situation. It acknowledges that there is this invention called the internal combustion engine, and that it has been incorporated into machines which have begun competing with horsework in some narrow situations. But it points out how superior horses still are in many cases. Yes, a car put in an environment designed to favour it can outperform a horse: a sportscar on a nice long stretch of smooth tarmac would absolutely trounce a horse in a race. But what if the race were across farmland, with soil churned up by a plough, and obstacles like rivers and fences and hedges to be crossed? Who then would bet on the car taking first place?
This horse also points out that many of the jobs humans have horses do are dirty and dangerous. Take those poor horses used down coal mines, or the warhorses ridden into battle. The jobs horses have been given in cities, it argues, are more pleasant than the grim labour once imposed on the species. For this horse, it is simply inconceivable that equine jobs will be consigned to the dustbin of history. Technology will change jobs for sure; it always has and always will. But technology is a creator of jobs for horses as well as a destroyer. Just as the horse collar, the stirrup, and the carriage opened up new ways for people and horses to cooperate in running the economy, so too will the internal combustion engine provide new forms of employment for horses that nobody can imagine today.
Well, we all know which horse was right. The internal combustion engine did indeed go on to become a massively successful invention, installed in a bewildering variety of machines: cars, trucks, tractors, diggers, bulldozers… Our reliance on the brute strength of machines has gone up and up, and our reliance on horses has gone down and down. Sure, horse employment has not fallen to zero. But it IS a tiny fraction of what it once was. As far as horses are concerned, I think it’s fair to say technological unemployment is pretty much a reality.
I would hazard a guess that this outcome is of no particular surprise to anyone. It is a simple process of extrapolation, right? What kind of progress were horses making in horsepower? Hardly any at all. A horse could produce one, two, three horsepower, just as its ancestors stretching back tens of thousands of years could. Machines, on the other hand, were producing more and more in a comparative blink of an eye. It was inevitable that, sooner or later, machines built to do physical labour would so vastly outperform horses in all but a few very narrow circumstances that economic logic would have to reach the conclusion that employment for horses was a thing of the past.
How does this example of a law of economics strike you: “New technology means new and better jobs for horses”? I am guessing you are thinking that sounds pretty daft. There is simply no law of economics or nature or anything at all which says new technology MUST create new and better jobs for horses. Capitalism does not give a damn about horses. It will of course commodify horse labour, and create markets for buying and selling it, so long as there is profit to be made from doing so, but there is nothing in capitalism’s prime objective of growth and the lowering of costs and raising of productivity that says it must always provide jobs for horses.
This is so obvious that it hardly needs saying. But I am using horses and horse employment as an analogy for human employment. While I suspect it would be very difficult to find anybody who agrees with a statement like ‘new technology means new and better jobs for horses’, it is quite easy to find people who think there is something like a law of economics which says “new technology means new and better jobs for people”. Why? Because this is what past experience has taught us to expect, I guess. We moved from backbreaking subsistence farming, to the arduous toil of factory work, to the bullshit jobs of office work, where employees spend five hours of a 40-hour week doing actual work (itself ridiculously easy in comparison to the hard labour of yore) and the other 35 attending ‘motivational seminars’ or playing around on Facebook.
Just as horses had the advantage of millions of years of natural selection fine-tuning them for the job of trotting and galloping around fields, meadows and marshes, humans had the advantage of millions of years of natural selection fine-tuning them for tasks which require common sense, language ability, and creative thinking. While it now has to be conceded that machines can indeed totally trounce horses in a contest of brute strength, and that technological unemployment for horses did indeed happen as an inevitable consequence of this disparity, there are still people who believe there is something special about humans which means technological unemployment is never going to happen; that no matter how many of our current jobs are taken over by robots or rendered obsolete by some other technology (who needs banks and all the middleman services that go with them when you can do banking with blockchain cryptography and Apple Pay on your smartphone?), new work that no machine could possibly do for ages and ages is bound to come along.
In the case of horses, we have the benefit of 20/20 hindsight when it comes to talking about what the internal combustion engine ultimately meant for their job prospects. When it comes to AI and robotics and what they mean for human employment, much of that lies in the future, and our vision of things to come is nothing like as clear. When will autonomous vehicles mean the end of driving as a line of work for people? When will Dr Watson be attending to your medical needs? When will Robocop be protecting the innocent, serving the public trust, and upholding the law? When will you find yourself in a loser’s race, fighting the impossible fight to retrain for jobs that are disappearing faster than people can adapt to new circumstances, outcompeted by artificial general intelligence or by a Cambrian explosion of narrow AI applications and innovations in manufacturing techniques producing specialised machines to do any particular task with greater efficiency and at less cost than humans can offer? Years? Decades? I for one would not presume to know the answer.
But I do know this: Horses were nature’s proof of principle that it is possible to make a machine that can do pretty much all the work horses are good for. What possible reason could there be to suppose that humans are not nature’s proof of principle that machines can do pretty much any job humans are good for? I do not think there is any practical reason to suppose there is anything humans can do that a machine, in principle, cannot. We need to confront the coming reality of technological unemployment while we still have the luxury of time in which to decide our best course of action, not bury our heads in the sand like my fictional horse.
Oh, wait, that’s ostriches, isn’t it?

Posted in technology and us, work jobs and all that | 2 Comments


SNOWCRASHING INTO THE DIAMOND AGE 2 (PART 2): An essay by Extropia DaSilva.


The ability to replicate the means of production themselves from cheaply available elements is what underlies most of the utopian expectations of a society with molecular nanotechnology. One commentator on an online forum asked, ‘why the hell would anyone pay for something nano makes with no effort?’. Second Life, though, suggests such an argument holds no water. After all, this is a world whose content is built from resources instantly available wherever you happen to be, at negligible cost, and which can be duplicated with no effort. But most reporting on Second Life does not describe a world where products are given away free. Instead, it’s all about the money: ‘non-existent’ objects being bought and sold for real cash, land barons earning fortunes from virtual property. Also, Gwyneth Llewelyn wrote about the socio-political beliefs that SL residents subscribe to (‘anarcho-syndicalists’, ‘anarcho-capitalists’, ‘libertarian/neoliberalists’). Of these groups, only the first ‘idealise a SL where money, land and prim limits are unnecessary’. I don’t know how many residents consider themselves to be anarcho-syndicalists, but common sense dictates that the group believing money is unnecessary is a minority compared to the many groups who consider it necessary, for the simple reason that the latter are many and the former is one.

Still, it is by no means uncommon to see a reporter expressing surprise that SL has virtual goods trading hands for real money. But the fact that SL’s content has monetary value is not all that surprising when one considers the entire system that supports the likes of Aimee Weber or Fallingwater Celladore. Producing copies of virtual goods does happen automatically with little human intervention, but it is automated at only one point in the manufacturing process. The design of the goods requires a concentration of effort; promoting the company and its products requires ongoing work. All of this necessitates the coordination of many tasks, and this activity amounts to a dynamic economy, an essential element in building an online world compelling enough to sustain the interest of millions for indefinite periods.

Lyle Burkhead insisted that such an economy would also be a necessary condition for delivering the fabled machine that produces anything you wish for (provided it is physically possible). We already have many goods that are put together via molecular manufacturing: all foodstuffs and timber fall into this category. So, how come oranges are not given away for free? Because ‘they need fertilizing, watering, protection from insects. Oranges must be picked, put in boxes, shipped to store…The store has human employees, the fertilizer company has human employees and so on. The orange tree doesn’t exist in some separate space by itself, it’s part of the economy’.

This holds true for any material good. Each and every item that ends up in the shops is the end result of a great many tasks that needed to be done in order to get that product into our homes. A machine capable of producing anything you want would need to be a self-contained system that can make anything the world economy makes. To do that, it would have to pack in the entire logic and process structures that collectively make up the expert knowledge of all the workers and managers who currently toil away in the many corporations comprising the global economy. As Burkhead cautioned, ‘all those jobs still have to be done because if you scale the economy down to the nano-level, it’s still an economy’.

But didn’t we discover that all that work would be necessary only in developing the first mature nanosystem? Not really, no. Once completed, it would contain the instruction set for manufacturing another nanofactory nearly identical to itself, but that is all it (and its twin) would be capable of producing. Similarly, you can expect any individual item in SL’s stores to copy into your inventory, but that one item can only duplicate itself. True, the store that sold it represents a system capable of turning prims into many products, and is itself one business amongst the many that make up SL’s economy, which is capable of turning prims into almost anything you want. But the many, many people who run that economy are seemingly unwilling to work for free. Why should their attitude change if, instead of building prims into useful product, they are instructing molecular mills and manipulators to organise molecules and nanoblocks into useful product?

Then again, participation in SL requires Internet access and a supply of electricity. It requires constant maintenance of the servers that run the SL grid. Even the most dedicated immersionist, hell-bent on projecting their mind into a digital persona, cannot ignore an empty stomach for too long, and larders don’t get stocked unless you pay money for food, or for whatever is needed to produce it. In short, all SL residents have RL bills to pay. This places an irreducible cost on every build. If our creative community came to the collective decision that they no longer needed to earn money, you’d better pray that the companies supplying their Internet access, electricity and food adopt the same attitude, or else supplying SL with content would soon become impossible.

Admittedly, one could argue that the SL community could do money-making work in RL while being entirely altruistic in SL. But economics is ‘the allocation of scarce goods’. If you’ve ever seen residents materialise prims out of thin air, prims can seem an abundant resource. In reality, they are one factor in a system otherwise constrained by scarcity, because the hardware storing and processing their bits has finite capacity, and the bandwidth streaming that data to users’ PCs imposes further bottlenecks. So long as such constraints remain, irreducible costs will be unavoidable, and any new manufacturing process would emerge within the same capitalist economy that SL is part of. Should we expect irreducible costs with advanced molecular nanosystems?

It seems more than likely. In all probability, the process of building functional products out of chemical feedstock would not be contained in a single system, but would instead be separated into nanofactories consisting of mills that build nanoblocks out of molecules, and other nanofactories using manipulators that assemble those nanoblocks into macro-scale products. This scheme makes sense for several reasons. Probably the major one is that it would provide a way of avoiding runaway self-replication: the mills would only be able to turn molecules into nanoblocks (but could not manufacture complex machinery), while the manipulators would be capable of building complex machinery but could not manufacture nanoblocks.

Drexler reasoned that micron-scale building blocks would be small enough to make almost any macroscopic shape in ordinary use today, to better tolerances than those provided by conventional machining. It would also allow construction of almost as wide a range of products as atom-precise nanosystems. Tom Craver suggested that ’products that cannot be made out of nanoblocks and require atom-precise assembly could be built by dedicated-function nanofactories, with the design built in at the lowest level without destroying the factory’.

Another advantage is energy consumption. Building products out of nanoblocks requires far less energy than atom-precise molecular manufacturing. Most of the energy consumed and heat released would occur during the fabrication of the nanoblocks themselves, rather than during the assembly of those blocks into macro-scale products. Assuming the blocks were re-usable, the energy used in manufacturing them would not be wasted.

Should we expect re-usable nanoblocks? Craver reckons a profitable business could be made if manufacturing systems could copy themselves but the nanoblocks used in constructing most everyday items could not: that would quickly build up a huge market for nanoblocks. However, Craver also noted that this approach has several drawbacks. If the nanoblocks could not be re-used, there would almost certainly be a massive increase in waste. People would quickly compile macro-scale objects and then, once tired of a product for whatever reason, could only dispose of it via the less-than-ideal methods used today. On the other hand, any product built from re-usable nanoblocks could be broken down, its building blocks fed back into the compiler, ready to be assembled into another product. Craver concluded, ’given the value of recyclable nanoblocks for energy, cost-savings and convenient disposal, and the security risks of self-copying fabber components, it seems wisest to allow recyclable blocks but prohibit fabbers that can self-copy’.

No doubt the well-publicized dangers of gray goo will make for a powerful reason to deny widespread access to self-copying nanosystems, particularly if block assemblers are quite capable of compiling almost anything a household requires anyway. But, from a commercial point of view, the more compelling reason for suppressing self-copying capabilities is that it would nullify the R&D funding and manufacturing business model. Exponential assembly must be researched and developed, as it is the only way to build trillions of machine parts in a reasonable timeframe. But it seems doubtful that fully self-replicating nanosystems will make it into general use. This would limit the scenario in which economies as we know them end, because productive economic activity would be required in order to afford replacement nanoblocks, should a person’s current stock be tied up in products too useful or treasured to be worth disassembling.


All of which makes the promise of material wealth reduced to zero cost by molecular nanotechnology sound as hollow as Alvin Weinberg’s claim that nuclear energy would lead to power ’too cheap to meter’. Actually, he never claimed any such thing. Instead, he performed various calculations that apparently showed the power cost ’might have been’ as low as one half the cost of the cheapest coal-fired plant. He never actually claimed that nuclear energy would be too cheap to meter, yet somehow that catchphrase lives on in the public consciousness. Drexler shares something in common with Weinberg. His idea of molecular manufacturing has captured the imagination as the system that reduces manufacturing costs to zero, and yet one person who never claimed this would be the case is Eric Drexler. Rather, he argued that ’there will always be limiting costs, because resources- whether energy, matter, or design skill- always have some alternative use. Costs will not fall to zero, but it seems they could fall very low indeed’.

His reasoning for a dramatic lowering of cost is as follows. The cost of conventional machines is strongly dependent on the number of parts they contain, since more intricate systems require more parts and more manufacturing operations. But the reliability and manufacturing cost of nanomachines is pretty much independent of the number of parts they contain. As Drexler noted, ’the number of assembly operations is roughly proportional to the number of atoms in the product, and hence roughly proportional to mass…costs will be insensitive to the number of separate mechanical parts’. In fact, an analysis of molecular manufacturing shows that the basic cost of production would be almost wholly determined by the cost of the chemical feedstocks.
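Drexler’s scaling argument can be sketched numerically. The snippet below is my own toy model, not Drexler’s figures: conventional cost is modelled as growing with part count, while molecular manufacturing cost tracks feedstock mass alone.

```python
# Toy model of the cost-scaling argument above. All numbers are
# illustrative assumptions, not figures from Drexler.

def conventional_cost(num_parts, cost_per_part=2.0):
    """Conventional manufacturing: every extra part adds machining
    and assembly operations, so cost grows with part count."""
    return num_parts * cost_per_part

def molecular_cost(mass_kg, feedstock_per_kg=1.0):
    """Molecular manufacturing: assembly operations scale with atom
    count (i.e. mass), so cost is just feedstock mass times its
    price, however many distinct parts the product contains."""
    return mass_kg * feedstock_per_kg

# The same 10 kg product, modelled as simple (5,000 parts) and as
# intricate (5,000,000 parts):
print(conventional_cost(5_000))      # 10000.0
print(conventional_cost(5_000_000))  # 10000000.0 - complexity is costly
print(molecular_cost(10))            # 10.0 either way - complexity is free
```

The point the sketch makes is only qualitative: under molecular manufacturing, intricacy stops being a cost driver, leaving feedstock mass as the dominant term.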

But Rob Freitas made the point that there is a difference between ’cost’ and ’price’, saying ’in a capitalist economy, prices of goods are set by competitive markets’. We have seen that, in SL, the economy required to build and maintain a compelling online world imposes intangible costs on the price of inworld goods. Given that nanosystems will also emerge within an economy, they too will be subject to various intangible costs. Freitas argued, ’even if the cost of material and energy inputs fell to zero, say through the use of recyclable nanoblocks, there would still be an amortized capital cost plus a fixed intangible cost built into all products manufactured by the personal nanofactory…adding in the amortized initial capital outlay…plus intangible costs, manufacturing cost for consumer products should be $1/kg’. That certainly is cheaper than today’s manufacturing costs, which fall between $10/kg and $10,000/kg.
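Freitas’s cost-versus-price point can be put in simple arithmetic. The per-kilogram figures below are hypothetical, chosen only to show how a price floor survives even when material inputs are nearly free:

```python
# Hypothetical cost breakdown per kilogram of nanofactory output.
# None of these figures come from Freitas; they only illustrate how
# price retains a floor even with near-free material inputs.
feedstock_per_kg = 0.10          # material and energy inputs, nearly free
amortized_capital_per_kg = 0.50  # nanofactory purchase spread over output
intangible_per_kg = 0.40         # design licences, support, margin

price_per_kg = feedstock_per_kg + amortized_capital_per_kg + intangible_per_kg
print(f"${price_per_kg:.2f}/kg")  # prints "$1.00/kg"
```

Even with the feedstock term driven toward zero, the amortized and intangible terms keep the sum near Freitas’s $1/kg, well below today’s $10/kg–$10,000/kg range.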

Molecular manufacturing will not lower the price of everything. Any rare element, like gold or platinum, would retain its value, because nanotechnology cannot make stuff like that: transmuting elements requires nuclear physics, not chemistry. Also, given that the manufacturing cost for houses is already around $1/kg, it seems doubtful that we will all be instructing our nanosystems to build full-scale replicas of our SL mansions and castles.


The main expectation of an economy based on nanosystems is for the cost of material goods to fall to a negligible level, and for information to become close to 100% of the value of any product. In SL, particularly gifted designers charge thousands of Linden dollars for goods that cost next to nothing to produce. The raw materials may have no value, but their design expertise certainly does. It could well be the case that, even if a product costs $1/kg to manufacture, designers could charge much more than that for the all-important blueprints driving the assembly process. During a discussion I held on the societal impact of nanotechnology, Leia Chase argued, ’it will make the mass-produced nearly free, make services more expensive than goods, and make custom-designed items the commodity to those who think of themselves as wealthy’. All of which would sound entirely familiar to a resident of SL, because that is exactly how things work in this online world.

We saw earlier that the optimistic outlook for a society based on molecular nanotechnology stems from the massive drop in manufacturing costs it would enable. The dystopian scenarios are, in one way or another, attributable to the fact that nanosystems must be provided with a set of instructions to guide the assembly process. In this part of the essay, I shall be using the points raised in an article called ’Nanosocialism’, written by David M. Berube, who is a Professor of Communication at the University of South Carolina. The paper pretty much covers every negative possibility regarding the social impact of nanotechnology (those that fall within the scope of this essay, to be precise).

Berube’s first argument is that nanotechnology is a threat to current corporate profitability. Profitability is maximized by restricting production and limiting supply substitution from competitors, which together keep supply down and demand high. At the same time, demand is magnified by designed-in obsolescence (which has the effect of sustaining levels of consumption) and by persuading customers that they need (rather than merely want) the product.

Berube argues that obsolescence, the aftermarket and substitution are critical to corporate profitability, and that molecular nanotechnology is a threat to this established order. How so? Because handling matter with digital control would make a product ’the final purchase within a product line that the customer needs’. It is digital because atoms in strong materials are either bonded or not bonded; in-between possibilities do not exist. Because assemblers work by making or breaking bonds, each step in the manufacturing process either succeeds perfectly or fails completely. Unlike current manufacturing, where parts are always made and put together with small inaccuracies, each step in molecular manufacturing is perfectly precise, so little errors cannot add up. Admittedly, thermal vibrations are likely to cause some parts to come together and form bonds in the wrong place, so it is more accurate to say macro-scale products will be ’almost’ perfect, not ’absolutely’ perfect. But, a few misplaced atoms notwithstanding, products manufactured in this way would go significantly beyond the durability of today’s offerings. Eric Drexler visualized a rocket engine built the nanotechnology way: ’Rather than being a massive piece of welded and bolted metal, it is a seamless thing, gemlike…its empty internal cells, patterned in arrays about a wavelength apart…producing a varied iridescence like that of a fire opal…Because assemblers have let designers pattern its structure to yield before breaking (blunting cracks and halting their spread) the engine is not only strong but tough’.

For all practical purposes, wear and breakdown would be nonexistent for products assembled with atomic precision. The result, according to Berube, is that ’replacement and aftermarkets become irrelevant’. Now, as far as I can tell, wear and breakdown of SL ’products’ is similarly nonexistent. Clothes never fray, buildings never crumble, boots never lose their shine, jewellery never loses its lustre. True, they can mysteriously vanish from your inventory, but that annoyance aside, I think it is true to say everything residents have built shall remain just like new until the end of the world. A further challenge for SL’s content providers is that ’needs’ are very much irrelevant. The whole world is a luxury item; nobody NEEDS to log into SL the way we need to seek shelter and nourishment. The world of SL, then, is built around completely nonessential products that are utterly impervious to wear and tear. But despite all that, every day millions of items continue to be traded, driving an economy that can be described either in triumphant tones as ’the fastest growing economy on the planet’ or as ’still a very tiny economy relative even to towns in RL’, depending on which statistics best serve your agenda. Either way, that economy persists, which suggests that a market based on products invulnerable to wear and tear doesn’t come to a dead end after all.

So what’s going on? I think we need to consider another kind of obsolescence: ’Design’ obsolescence. Consider, for instance, how fashion designers in SL upped the ante. Clothes progressed from being mere 3D shapes, to shapes textured with images of ’real’ cloth, to clothes sculpted with creases and folds, to dresses that swung naturally with their wearer’s movement. Similar progress was made in all aspects of ’builds’ in SL, and it is clearly a sign of a community pushing a learning curve, discovering what can be done (while the limits of possibility move further out as the tools are debugged, improved, and expanded). Anshe Chung highlighted innovation as the key skill required to run a successful SL venture: ’The nature of the VR economy is that it’s hard to maintain margin when you do something everybody does…But when you are innovative you have even more opportunity than the real world’. So, in SL the bar keeps being raised and obsolescence is very much a part of this world, as items whose design does not incorporate the latest and best techniques look tawdry in comparison.

That earlier reference to the ultra-durable rocket engine did not do justice to the full potential of molecular manufacturing, for it goes way beyond merely improving current materials. Whereas today a single function is incorporated within a volume of the product, molecular manufacturing could see items with trillions of sensors, computers, motors and electronics. This is partly due to the incredible levels of miniaturization it would open up, but also because a nanofactory imposes negligible cost for each additional feature. This is in marked contrast to conventional manufacturing, in which product complexity is limited because the number of operations is minimized in order to reduce manufacturing costs.

Nanotechnology would do much to advance us beyond the expense, bulkiness, clumsiness and unreliability of today’s motors, sensors, computers, electronics and moving parts, and the limited flexibility that stems from all that. Drexler observed that fireflies and some deep sea fish use molecular devices capable of converting stored chemical energy into light. ’With molecular manufacturing, this conversion can be done in thin films, with control over the brightness and color of each microscopic spot’. Various other methods of fine control would give materials the ability to change shape, color, texture and so on, and this would give real world artefacts almost as much flexibility as virtual ones. As a consequence, the SL designer’s augmented ability to experiment fast and strange, get feedback, and experiment again would leak out into real world manufacturing and aftermarkets, resulting in the kind of rapid innovation required to cut it in the SL marketplace. 
You can see why information and service jobs will assume a dominant role in the nanosociety. With goods able to pass from final design to mass production with ease, and with products potentially enabling degrees of customization unseen outside of virtual reality, molecular manufacturing would open up a competitive advantage in knowing customer preferences. We should expect a further move away from the traditional make-and-sell, command-and-control organization and toward the sense-and-respond, adaptive organizations that emerged as IT was integrated into businesses and realtime customer feedback became easier to gather and analyze.

The competitive edge in a society with widespread molecular manufacturing will come mostly from being able to focus on and respond to the changing moods of the customer. It’s interesting, then, that we are seeing a move away from a centralized delivery of services in SL (in the shape of welcome areas, orientation islands etc run by the Lindens) toward a more decentralized scheme in which 3rd parties develop customized login processes, welcome areas and other such services. The reason for this move is clearly that the sheer number of people joining SL makes a one-type-fits-all introduction largely infeasible. One company cannot be expected to deliver the myriad help islands and other services tailor-made to suit every group and subgroup that has now formed. As Gwyneth Llewelyn observed, ’the whole login process has to clearly focus on bringing someone directly into a community that’s likely to attract the new user and make them stay’.

If anything is required to encourage a person to stay in SL, it is access to services and communities that will nurture their particular talents. Unfortunately, by handing over nearly all of the content-creation duties to residents while at the same time taking it upon themselves to provide help and support, the Lindens created a situation where diversity exploded, communities became lost in the crowd and new arrivals set foot in a world where finding their way around is a baffling task. Lem Skall commented on how it is so very different with most other community websites: ’There’s usually some overlap, but they are either a game, or a social network, or maybe a place to do business. When joining these communities, we know what to expect and what to look for’. Now, on one hand the good thing about SL is that it’s flexible enough to be all those things at once. But, on the other hand, such flexibility must face the bottleneck of individual strengths and weaknesses. Even if all technical constraints were removed, SL would still not really be the place where you can do ’anything’; only a place where your limited skills are less constrained by external factors than in RL. This brings into focus the problem of discovering the right path through a world with near-infinite possibilities, most of which are ill-suited to the individual’s preferences and skills. Lem Skall again: ’Things might have been very different if SL had started as a pure software platform that separate providers could use for separate worlds with clear purposes, and if all the worlds had been unified later…so much has been said about the strategies of corporations into SL. Maybe one of the best strategies is to act as portals. No building but an orientation island and Web interface to creating new accounts. Businesses and educational institutions are already creating their own sims…What I’m thinking is…a unification of such separate worlds into sub worlds’.

Notice the parallels that exist between building a useful metaverse, and the anticipated skills required to run a successful business in a society based on productive nanosystems. In both cases, the ability to provide highly tailored services is paramount. It seems to me, then, that as the Lindens pass over more and more of the running of SL to the open source community- depending on 3rd party viewers, welcome areas, themed islands and so on- there will be much opportunity to perfect the kinds of personal services and product advice that would have value in a world where the consumer/producer relationship blurs in the continual choice of the individual to ’make’ or ’buy’.


Specialization has long been understood to be a defining feature of market economics. Individuals are producers of one thing and consumers of everything else. Some commentators expect consumers to be sole producers of finished products of all kinds once productive nanosystems go mainstream, leading to a more equal society. Others (Berube among them) see things entirely differently, believing molecular manufacturing will only lead to the caste-ing of society into those with power and those without. 

How inclusive will the development of the technology itself, and the manufacturing capabilities it enables, be? Another way to phrase this question would be ‘will we see open source designs, or will some centralized group seek to monopolize the technology, perhaps through patents and other legal restrictions?’. Berube sees the latter as most likely, arguing that totally free access to productive nanosystems would jeopardise contemporary hierarchical structures in capitalist corporatism. “A technology paradox occurs when R+D by a corporation actually reduces corporate power. For example, in the present system, as products increase in supply or as the means of production devolve into the hands of consumers, prices fall”. Traditionally, the paradox is avoided by expanding the market so that growth outpaces the declining prices. But, once the means of production becomes completely decentralized and placed in every home, “most avenues of market growth lead nowhere”.

As SL spread its message beyond early adopters and began to attract the attention of commercial giants, there was some uneasiness among the residents. How would those who catered for the fashions in this online world fare against high-street brands? Would these masters of marketing take control of the VR landscape, manipulating desires by spinning a web of concepts, brands, advertising and persuasion, shaping not only the surroundings but the thoughts of the populace to suit themselves? Nowadays, though, one tends not to read about the intense viral growth of corporations in SL. Quite the opposite. What you tend to read about is how familiar brand names came to SL and failed to have any impact at all, beyond a few curious visitors during the first hours of opening.

Is this failure connected with the fact that SL features a massively decentralized means of production, delivered into the hands of each and every user? It must surely be the case that the competitive advantage that corporations have over the little guy is very much reduced in SL because, relative to the real world, everything is so easily accomplished. But, I doubt that this is the only reason. What also needs to be considered is the fact that most RL brand names achieved widespread penetration through traditional media channels, and perhaps what works well there works less well in SL? The main difference between online worlds and traditional media was explained by Rosedale: “We all got TV, and it enabled us to see and learn many things, but unfortunately those things had to be centrally authored, without our participation, by a very small number of people. SL, built and managed by the residents, is a natural correction to our early, disempowering media- a better world, owned by us all”.

Perhaps because the populace has such powerful control over the landscape, and is very much an active contributor using the same tools as any corporation hoping to spread its message in SL, it becomes significantly harder to spread brand awareness using the means of advertising familiar to the high street. As Justin Bovington (who co-founded the branding agency Rivers Run Red with his wife Louise) reasoned, ‘you can’t just dump stuff in here and expect people to take an interest…People think young consumers are apathetic. They’re not apathetic. They’re just very well defended against advertising’. In RL, billboard posters are a part of our landscape whether we wish they were or not. But, in SL, a company’s billboard campaign must contend with the fact that, on Resident-owned land, unwelcome content is deleted with a simple mouse-click.

Really, though, the main reason why high-street names tended to fail in SL can be attributed to the fact that they were remarkably unimaginative when it came to extending their brands in VR worlds. Simply setting up a store and expecting to attract a large and persistent customer base just because it’s ‘popular brand name X’ is not good enough. Perhaps it is true that, in a VR world, ‘most avenues for market growth lead nowhere’, but it must also be the case that new opportunities for raising brand awareness become available. Given that active, realtime collaboration is a major part of SL’s appeal, perhaps involving the customer in the design process would be one such opportunity. Reebok went down this route. They opened up a store in SL that allowed residents to customize virtual sneakers according to taste, and the company planned to take the most popular design and market it in RL.

Open source tends not to put a final polish on its products. Because of this, commercial interests could still make a profit if the means of manufacturing went down the open source route, by repackaging products and adding that final polish. Along with focusing on personal services, goods in a shop could be priced according to the prestige of certain designers. Berube believes that the price of goods and services cannot be expected to decrease with the realization of molecular manufacturing, since the cost of R+D must be recouped. But, once nanosystems are as fully integrated as PCs now are, nearly all capital would be dramatically reduced in value. Capital, by the way, is not ‘money’, which in and of itself has no value. What capital REALLY is, what REALLY has value, are services and the means of production. Labour, raw material, machinery and knowhow are the true lifeblood of industry. “In a world of nearly infinite resources, the value of toil and labour will disappear”, wrote Berube. “The nanotech elite will be the technocrat and the tech-intelligentsia- a small group”. As for the rest of us, Berube argued, “whatever time they have at their disposal will be spent acquiring worth of any and all sorts merely to keep step in the nanoeconomy…economically defranchised and socially declassed people could contribute to the genesis of Third World countries in the centre of our cities”. These fears were echoed by Susan (baroness) Greenfield in her book ‘Tomorrow’s People’: “In times to come…there might be the…invidious distinction of the technological master class versus the- in employment terms- truly useless”.

Remember that quote from SL’s founder, ‘a better world, owned by us all’? Lovely sentiment and all that, but it really isn’t true. Gwyn explained why. “You can see a huge gap between the resident’s classes…while perhaps 5% of all residents are active participants in the economy (who) contribute to the overall content, the remaining 95% are completely out of the loop”. In fact, so imbalanced is the flow of currency in SL that it has been compared by some to a traditional pyramid scheme in which only a few harvest money from a large mass of players. It would be wrong to suggest that SL was deliberately conceived as a pyramid scheme. But, by granting everybody the right to buy and sell services and virtual goods to one another in a free market, it was perhaps inevitable that wealth would accumulate around the gifted few who can produce masterpieces of whatever they make.

This does sound uncannily like Berube’s dystopian vision of a technological master class reaping all the rewards of molecular nanotechnology. What’s more, other observers have seen a parallel between the activities of SL’s residents and Berube’s expectation that the masses will be frantically acquiring worth of any and all sorts. In answering that evergreen question, ‘what are you meant to do in SL?’, ‘Play Money’ author Julian Dibbell answered, ‘SL is about getting the better clothes etc. The basic activity is still the keeping up with the Joneses, the rat race game’.

If ‘what am I meant to do?’ is the first question a SL resident asks, the next is likely to be ‘how do I do it?’. If a fundamental aspect of SL is the buying and selling of goods, then the second question is more precisely defined as ’how do I get a foothold on the economic ladder?’. In other words, how do you start acquiring the capital needed to become a player in your chosen business? There is a quick and easy way to get reasonably large amounts of SL currency, which is to purchase it directly. As with all currencies, the value of the Linden dollar against the US dollar continually changes, but on average you can expect to get between L$260 and L$320 for every US dollar spent.
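The quoted exchange band makes the arithmetic easy to sketch. The snippet below is only an illustration (the function name and the midpoint default rate are my own assumptions, not any official Linden Lab API); the L$260-L$320 range comes from the text:

```python
# Hypothetical converter from US dollars to Linden dollars.
# The L$260-L$320 per US$1 band is taken from the text above;
# the function name and default rate are illustrative assumptions.

def usd_to_lindens(usd, rate=290):
    """Return the L$ received for `usd` at `rate` L$ per US dollar."""
    return usd * rate

# At the two ends of the quoted band, US$10 buys:
print(usd_to_lindens(10, 260))  # 2600
print(usd_to_lindens(10, 320))  # 3200
```

So a ten-dollar top-up lands you somewhere between L$2,600 and L$3,200, depending on the day's rate.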

However, a ’New York Times’ article noted that ’although L$ can be bought with a credit card, there’s evidence that the in-world economy is self-sustaining, with many players compelled to earn a living in-world and live on a budget’. You might think everybody would settle for nothing less than the kind of career seen as aspirational in RL- property tycoon, popstar, architect- that sort of thing. But, actually, SL residents are willing to take on jobs as sales clerks, nightclub bouncers and hostesses, for wages ranging from L$50 to L$150 per hour. In a world where owning that ultimate symbol of material wealth, your own private island, is within the budget of most people who can afford a high-end laptop, people sidestep the easy way to big Linden bucks and instead work for them, in jobs that pay a pittance in real money.

It’s probably not the case that anybody comes to SL in order to fulfil a lifelong ambition to work as a shop assistant. Rather, they accept that engaging in the lowest level of work in SL is often the necessary first step an entrepreneur must take. But the fact that such roles are performed at all in what is a fantasy world calls into question the assumption, often expressed, that nobody will be willing to do work of this kind once molecular manufacturing enters the market. And while they may be willing to do such work, the opportunity to do so will only occur if such work is available. There are two great promises and perils commonly associated with molecular manufacturing. The first is the promise that exponential assembly will compile an abundance of goods (with the peril of runaway assembly leading to gray goo), and the second is the promise that nanosystems will dramatically lower the cost of capital (with the peril that labour will be totally devalued).

Is the latter peril really a bad thing? Calling it a peril would appear to stand in contrast to the dream of a life free from toil. This vision can be traced back at least 23 centuries, to a time when Aristotle wrote, in ‘The Politics’, ‘we can imagine managers not needing subordinates and masters not needing slaves…if every machine could work by itself…by intelligent anticipation’. And here it is again, this time from a quote in ‘Time’ magazine, 1966: ‘By 2000, the machines will be producing so much that everyone in the US will, in effect, be independently wealthy. How to use leisure meaningfully will be a major problem’.

Ah, there’s the rub. It is generally taken as axiomatic that losing jobs must mean the loss of meaningful activity. And if you examine that Aristotle quote closely you will notice an imbalanced benefit. It is the MANAGERS who no longer need (human) subordinates, the MASTERS who no longer need (human) slaves. It’s an imagined world in which the elite exchange human labour for machines flexible enough in limb, and just flexible enough in mind, to be trusted to perform their role in the workforce (but, presumably, not to question their lot in life). But Aristotle makes no suggestion that the displaced subordinate class has been lifted to the status of ‘master’ (in fact, the passage is actually his pragmatic defense of slavery in his own time). We like to think slavery has been abolished now, but the other assumed axiom is that the loss of your job must mean the loss of your income. How would the labouring classes raise the funds needed to become factory-owning capitalists, if their skills have lost all monetary value?

Then again, isn’t the promise of molecular manufacturing that nobody NEEDS to work? If it lowers the cost of capital and profoundly raises the abundance of goods and puts the means of production in everyone’s home, then (as SL resident Ralph Radius asked) ‘why wouldn’t a world of nano be divided into purposeful people and those who hang out? Living will be virtually free’. What might be wrong with this picture is that it assumes a lowering of the COST of manufacturing means a reduction in the PRICE of goods and services. As we have seen, Berube anticipates that this will not be the case (at least initially) because nanoproduced goods and related services will carry the R+D surtax of molecular nanotechnology. As for the hypothetical ability to bring forth an abundance of products (and the implication that they will be given away to anyone who asks for them), perhaps artificial constraints like IP rights will limit this scenario, as is the case with hypothetically copyable products in SL. Some of the products made possible by molecular manufacturing could create huge incentives for profit taking. Nano-manufactured computer components, by today’s standards, would be worth billions of dollars per gram. And something like food has large and intricate molecules providing its taste and smell, minerals for nourishment that would require much research in order to handle them in a nanofactory setting, and a lot of water, a molecule that tends to gum up the components of the nanosystem. I’m not saying that compiling food is impossible, only that compiling food from chemical feedstock would be a very stiff challenge. Will this basic requirement of life be distributed for free, or will there be a heavy R+D price imposed on it, as is the case with lifesaving medicine?


Having decided everything will not be ‘free’ once nanosystems become widely available, we seem to have leapt to the opposite extreme, that their products and services must be very expensive. We also seem to be assuming that molecular manufacturing must exclude the majority of the populace from gainful productivity. What underlies such assumptions? Most likely, it is ‘complexity’. Productive nanosystems would be the most sophisticated products ever built. There is no precedent in manufacturing today for a process that combines 10^25 parts to form a single object. Some assume that using such immensely complicated machines must require a great deal of skill. ‘Yeah, all those unemployed steelworkers can be retrained as molecular biologists’ was one sarcastic reply to the suggestion that the age of molecular nanotechnology need not mean the end of gainful employment. But is this a safe assumption to make? Possibly not. After all, do you need to be a mechanic in order to use a car? There was a time when this was indeed necessary. Lifting up the hood, tweaking and fiddling around with the engine was not an indulgence for the hobbyist or an occasional annoyance for the stranded motorist; it was a regular part of car ownership. One can well imagine early car drivers fearing that if automobiles became more complex all but the very best mechanics would be excluded from motoring. Cars did indeed increase in complexity, but they also became more reliable and easier to operate.

Another, perhaps better, example is computers. The first operational computers were built by the ten-thousand-strong codebreaking operation at Bletchley Park, whose elite thinkers included Alan Turing. They were a top-secret military tool; 2,400 valves all put to the chief purpose of decoding Nazi transmissions that had been scrambled using cipher machines such as ‘Enigma’. It not only required rare skills to construct these mechanized wonders, but also to operate them. A later computer (ENIAC) typically required eight hours of repair for every eight hours of use. Who would have believed that, one day, computers with hundreds of millions of parts, able to outperform those early examples by eight orders of magnitude, would be a standard feature in people’s homes?

The fear that technology will become too complex for all but those highly skilled in some niche discipline is a recurring theme. Another fear is that skills will be lost because of technology. Such concerns did not begin in the 90s with the arrival of competent spell-checking software and the worry that a strong knowledge of grammar would be lost. Nor did they arise in the 70s, with affordable pocket calculators and the fear that fundamental skills in maths would be eroded. They didn’t begin in the 20th century at all, or even this millennium. As far back as the fifth century BC, Socrates feared that the development of the alphabet (which had been in use for over 100 years) would ‘create forgetfulness in learners’ souls…they will trust to external written characters and not remember of themselves’.

You would be hard-pressed today to find anyone who regarded literacy as a skill that enfeebled the mind, although you may well hear such voices of concern regarding the tools built into word-processing software or learning aids freely available on the Web. And yet, in both cases there is a common theme. Technology does not just cause the loss of skills, it ENABLES the loss of skills. That last point is expressed by the term ‘encapsulation’, which refers to technology that has become hidden in everyday society, despite being in widespread use. It can be hidden in a literal sense. Personal computers began as home-built construction kits, assembled by keen enthusiasts who naturally became familiar with their innards. These days we buy laptops and risk losing our warranty if we open them up. But mostly the technology becomes hidden because it does its job with minimal fuss. The TV simply delivers sound and visuals; we no longer need to fiddle with manual controls for horizontal and vertical synch, because we get a stable image at the press of the power button. The telephone simply connects your call. Remember how there was a drive to teach everybody binary, in anticipation of the ‘computer age’ when we would all need to know how to write assembly language? Now packaged software enables anybody, not just programmers, to get PCs to perform useful tasks. Similarly, in 1910 the rate of growth in the telephone industry prompted a Bell Telephone statistician to project that every working-age American woman would be needed as a switchboard operator. In his book ‘Future Hype’, Bob Seidensticker reasoned that, according to the definitions of 1910, every single person who uses communication technology to make a call or surf the Web is (thanks to automatic switching technology) connecting calls and doing the job of the switchboard operator.
In 1911, the philosopher Alfred North Whitehead made the following observation: “Civilization advances by extending the number of important operations which we can perform without thinking about them”.

Let’s stick with computers a while longer. Earlier, I asked, ‘how do you write…a million billion lines of code when such an endeavour is out of the question?’ but left this unanswered. A similar dilemma was encountered in computer chip design. At first, draughtsmen designed computer circuitry by hand, but as the parts count soared into the tens of thousands and beyond it became impossible to design and lay out such chips by hand. Fortunately, ready-made computers were there to open up the bottleneck, and today engineers have access to many powerful CAD tools. Some just enable the computer screen to serve as a traditional drawing board, but at the other end of the scale there are so-called ‘silicon compilers’. These software systems can produce a detailed design of a chip- ready to manufacture- with very little human help beyond specifying the chip’s function.

It becomes advantageous to develop compilers only when resources are cheap and abundant. If they are costly and scarce, this puts an economic pressure on developing systems that are small and simple, which requires step-by-step human planning. Before the 1960s, processors were orders of magnitude slower and memory was orders of magnitude more expensive than today. This economic environment favoured assembly language and its ability to provide instruction-by-instruction control. But after the 1960s, the number of components rose by a factor of a million, while the manufacturing cost per transistor fell to mere pennies. Drexler explained, ‘if a 10^6 transistor design has an expected market of 10^5 units, then every dollar of design cost per transistor adds tens of dollars to the price of each chip, yet a dollar can’t buy much time from a human design team…silicon compilers emerged…gained a foothold, then steadily improved, becoming an integral part of the design process’.
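Drexler's arithmetic can be checked with a few lines. This is only a sketch of the amortization argument (the function name is my own invention): a per-transistor design cost, multiplied by the transistors in the design and spread over the production run, gives the design-cost surcharge carried by each chip.

```python
# Sketch of the design-cost amortization in Drexler's example
# (numbers from the quote; the function name is an illustrative assumption).

def design_surcharge_per_chip(cost_per_transistor, transistors, units):
    """Total design cost amortized over each chip in the production run."""
    return cost_per_transistor * transistors / units

# $1 of design cost per transistor, a 10^6-transistor design,
# an expected market of 10^5 units:
print(design_surcharge_per_chip(1.0, 10**6, 10**5))  # 10.0
```

On these numbers each dollar of per-transistor design cost adds $10 to every chip, consistent with the 'tens of dollars' order of magnitude in the quote, and far more than the pennies the transistors themselves cost to manufacture.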

Current macroscopic hardware designs are composed of relatively few parts, and production costs are high. So, naturally, there has been no incentive to develop compilers to help us plan the design of macrostructures; they could not compete with the quality and cost-effectiveness of detailed human design. But, as we have seen, the parts count of products manufactured via nanosystems (including nanosystem parts) will grow into the trillions and beyond, and production costs will dramatically fall. Under those conditions compilers become attractive, even if each compiler-specified system were to waste twice as much space, mass and energy as a system designed with detailed human knowledge. And once such inefficient compilers gained a foothold in macroscopic design space, we should expect compiler tech to improve, just as it did in computer chip design.

It’s worth emphasising that compilers do not completely remove humans from the design process. Drexler: ’Human design will remain dominant at the level of parts and subsystems (in the form of knowledge built into the compiler) and at the level of overall system organization and purpose (in the form of specifications given to the compiler when it is used). The intermediate levels will be designed, with considerable inefficiency, using algorithms and heuristics that represent a workable subset of human knowledge of design principles’.

So, computers both encouraged and aided the development of design tools that can assist people in planning the manufacture of systems too complex for unaided humans. They also enabled a radical shift in employment patterns, and really molecular manufacturing should be seen as an evolution of the working practices enabled by information technology, rather than a revolutionary dislocation from current jobs. This becomes even more apparent when you consider that a far greater revolution in working practices occurred in our past. When Berube talks about the cost of labour devaluing in the face of molecular manufacturing, it’s hard to shake the conviction that he equates ’labour’ with physical effort, wages earned by the sweat of the brow and all that. 150 years ago, 69% of Americans were engaged in just that sort of work, because they worked in agriculture. Today, the number of Americans working in agriculture is just 3%. As for the rest, 28% work in industrial production and 69% work in the service or information industries. “Increasingly”, an article in ’Forbes’ magazine noted, “people are no longer labourers; they’re educated professionals who carry their most important work tools in their heads…modern occupations generally give their practitioners more independence- and greater mobility- than did those of yesteryear”.

It is expected that, as productive nanosystems become integrated into society, work will shift towards 100% service and information. This is obviously the state of employment in SL today. Whatever work you are involved in, you can guarantee it either involves finding, evaluating, analysing and creating information (in which case you work in ’information’) or it involves ways of helping other people (in which case you work in ’service’). It is obvious that programmers work in ’information’, but so do lawyers and engineers and librarians and teachers and magazine columnists. One thing that SL has shown is that, at some point, people do not crave standard goods at ever-decreasing prices, but customized goods tailored to meet individual tastes or needs. The opportunities that exist for gainful employment in SL centre almost entirely on ‘providing creativity and originality, customizing things for other people, managing complexity, helping people with problems, providing old services in new contexts, teaching, entertaining, and making decisions’. I was not quoting a SL analyst, by the way. That list came from a passage written by Eric Drexler, regarding the kind of work that will be valuable in the nanosociety. That SL should favour the sort of work that will retain its value once productive nanosystems become widely available is not all that surprising, since it realises most of the perceived advantages of molecular manufacturing over top-down subtractive manufacturing.


People have occasionally wondered what kind of economic system is at work in SL. Rest assured that this is much more than idle ivory tower speculation, because defining SL’s economy would enable us to anticipate what economic model would develop under the widespread adoption of productive nanosystems.

One possibility is that SL’s economy is the same as the one we have in RL. This is the viewpoint that the ’NY Times’ article I mentioned earlier subscribed to. According to the article, SL is a world of ’mortgage payments, risky investments, land barons, evictions, designer rip-offs, scams and squatters’. Where there are shops everywhere ’so it’s easy to say “oh, OK I guess I’ll have a better pair of jeans” ’. Lured in by tales of ’residents (who) lived the American dream in SL and built up L$ fortunes through entrepreneurship’, newbies enter a world ‘where we trade our consumerist-orientated culture for one that’s even worse’.

Others, though, have questioned this assumption that the SL economy is simply the same as the one we find in the consumer-orientated parts of RL. One critic argued, ‘what Linden Labs has tried to do is replicate the atom-world scarcity rules in a bit-world environment’. In other words, SL really was intended to be the sort of scarcity-based economy we find in RL, but its fundamental reality is binary digits and ‘it is the nature of bits to be easily copied’. Thus, Linden Lab’s attempt to impose artificial scarcity in an online world was bound to fail sooner or later (as if you didn’t guess, this argument was a response to the CopyBot incident).

However, Wagner James (Hamlet) Au identified a flaw in this argument. ‘I think it’s highly debatable whether SL is a scarcity-based economy. I think it makes more sense to think of SL as a brand or even a personality economy in which there’s a high premium in owning content from the most admired creators’.

There was a time when any press article would feature an interview with at least one of those ‘admired creators’ Au referred to. There were two good reasons for this. First, the quality of their work rightfully brought them recognition. But, secondly, it was the simplest way to highlight the fundamental difference between SL and the MMORPGs with which it shares a nominal similarity. A typical MMORPG comes with draconian licensing agreements that explicitly forbid the end user from claiming ownership over the money and objects they quest for. Attempts to sell your wares on eBay and other such sites met with instantaneous deletion of accounts and removal from the game (not that such measures have prevented the emergence and growth of a market in VR goods. In fact, it is rumoured to be worth $20 million in the US alone and an order of magnitude more in Asia).

Of course, SL has quite the opposite attitude, in that the objects you create inworld ARE your intellectual property; you DO own the rights. As Cory Ondrejka explained, ‘historically, what you need to drive innovation is markets, and markets derive from ownership’. So, an interview with one of the revered builders of SL was the most efficient way to get across the message ‘no, this is not an MMORPG’, and if you wanted your reader to understand that SL was serious business, what better way to do that than to refer to the serious money some residents were making for themselves?

But, while it’s undeniable that you can, in principle, earn a good living entirely on in-world entrepreneurship, perhaps those articles were misleading. This was especially true if the implication was that you WOULD make a good living (or any profit at all). Just as Dick Whittington found that the streets of London were not paved with gold after all, newcomers to SL discover this is no quick and easy passage to fame and fortune.

The economics page on SL’s official website provides statistics such as ‘monthly spending by amount’ and ‘unique users with positive monthly L$ flow’ (PMLF). Looking at the latter, and assuming a PMLF of between $10 and $500 makes you ‘poor’ while $500 to $5,000+ makes you ‘rich’, one can see that, in December 2007, a whopping 48,904 out of 50,678 users with PMLF were ‘poor’. Much the same conclusion arises if we look at the statistics for ‘monthly spending by amount’. According to this chart, out of a total of 341,791 customers spending money inworld (again, during December 07), 269,926 spent between L$1 and L$10,000, and 71,865 spent between L$10,000 and L$1 million. If we assume the strength of the L$ against the US$ was at its highest, that translates to 269,926 people spending between a fraction of a dollar and $30, while 71,865 spent between $156 and $3,125+.
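For readers who want to play with these brackets themselves, here is a minimal sketch of the conversion. The exchange rate is an assumption on my part- I have used L$320 per US$1, which is the rate implied by the ~$3,125 figure quoted above for L$1 million; the actual LindeX rate fluctuated, so treat the output as illustrative only.

```python
# Hedged sketch: converting Linden-dollar spending brackets into US dollars.
# ASSUMPTION: a fixed rate of L$320 per US$1 (implied by the $3,125 figure
# for L$1,000,000 in the article); real exchange rates varied over time.
L_PER_USD = 320

def linden_to_usd(linden: int) -> float:
    """Convert an amount in Linden dollars to US dollars at the assumed rate."""
    return linden / L_PER_USD

# The two spending brackets from the December 2007 chart:
for low, high in [(1, 10_000), (10_000, 1_000_000)]:
    print(f"L${low:,}-L${high:,} -> "
          f"${linden_to_usd(low):,.2f}-${linden_to_usd(high):,.2f}")
```

At this assumed rate, L$10,000 works out to roughly US$31, which shows how little real money most of those 269,926 ‘spenders’ were actually moving.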

What does this tell us? These days, Googling ‘SL economics’ reveals that the most popular interpretation is that, since the vast majority of residents are not making fortunes (or anything like a profit at all), those old stories of SL as a land of opportunity were overblown hype. Gwyneth Llewelyn recently wrote that a favourite theme amongst journalists is ‘to report how SL’s buzz and hype is dying’ leading inevitably to ‘the downfall of SL’. Google corroborates her opinion, because the most popular ‘hits’ are all articles explaining ‘the phoney economics of SL’, ‘VR world’s supposed economy is a pyramid scheme’ and other such analyses that can hardly be described as flattering.

That ‘NY Times’ article I referred to was therefore one of a great many articles that paint a negative picture of this online world. “What does SL say about us, that we trade a consumerist-orientated culture for one that’s even worse?”. What if this question truly reflects the nature of SL? Does that imply that our future nanotech societies will be dystopian nightmares of rampant consumerism favouring a tiny elite?

Not according to Au, who countered Nick Yee’s question quoted above by pointing out that ‘the latest economic figures simply don’t back up the premise of Yee’s question. In August…91% were spending less than L$ 10,000 (USD 18.50). Only when you get to that remaining 9% do you see any significant spending in terms of real dollars…There’s surely a lot of inworld goods and services that exist inworld, and much of it is trading hands. But what seems more plausible is that the bulk of those transactions are conducted in a barter or gift economy between friends and communities and, just as often, total strangers, sharing and trading what they own. This almost strikes me as a reversal of consumerism as it is commonly understood, for it undermines the economic motives for doing so’.

Perhaps describing this exchanging of gifts as ‘economic‘ activity is just wrong. This naturally raises the question: OK, but if SL is not an ‘economy’, what is it? I think Au has partly arrived at the answer by acknowledging that ’the bulk of those transactions’ are friends and communities and strangers ’sharing and trading what they own’. Now, Robert Levin introduced a new term- ’agalmics’ (derived from the Greek ’agalma’, meaning ’a pleasing gift’)- by which he meant ’the study and practice of the production and allocation of non-scarce goods’.

Levin’s concept of ’agalmics’ is therefore the opposite of ’economics’ (which, remember, is ’the study of the allocation of SCARCE goods’). Levin argued, ’we can be certain that, over time, more and more basic goods will become less and less scarce…we need a new paradigm and a new field of study. What we need is agalmics’. When it comes to the gift ’economy’ of SL, should we adopt the catchphrase ’it’s an agalmia, stupid’, in reference to what Levin called ’the sum of the agalmic activity in a region or sphere. Analogous to an “economy” in economic theory’?

Well, this assumption depends heavily on the extent to which SL agrees with Levin’s notion of what agalmic activity is. Earlier, we saw how physical constraints like server capacity impose limits on our freedom to create in SL. This might imply that SL cannot be an ’agalmia’. However, it’s Levin’s opinion that ’economics’ gives way to ’agalmics’ as a result of the MARGINALIZATION of scarcity, not necessarily its ERADICATION. ’Agalmic goods…are often produced using scarce goods as raw material. An important example is the initial programming work that goes into a free software application. At the current state of the human lifespan, programmer time must be regarded as a scarce good’.

In fact, Levin cites the open source software community as a contemporary example of agalmic activity. This marks SL out as a definite candidate for an agalmia, because it is very much part of the open-source model. Levin identifies several key characteristics of agalmic activity. Let’s look at each one and see how well SL conforms to it.

1: ‘Economic trade is finite; when I give you a dollar I have one less than I did. Agalmic activity involves goods which are not scarce, so I can give you one without appreciably diminishing my supply’.

In SL, anything can be transferable and copyable, or non-transferable/non-copyable. Objects that are tagged as non-copyable/non-transferable are traded according to ‘economic’ activity, because choosing to pass such items on results in you no longer possessing them. On the other hand, any item that is tagged as copyable can indeed be given away without diminishing one’s supply. In SL’s stores, items for purchase are often (but not always) marked ‘non-copyable’. But what about all those ‘transactions (that) are conducted in a barter or gift economy’ which, according to Au, make up the bulk of ‘economic’ activity in SL? I think it’s highly likely that these transactions involve items that are copyable, allowing individuals to trade what they own without diminishing their supply. If my assumption is correct, this is ‘agalmic’ (not ‘economic’) activity.

2: ‘It is co-operative. Economic activity often involves competition. Buyers must allocate their limited funds to the supplier who best meets their needs. Since it doesn’t involve scarce resources, agalmic activity rarely involves competition. Efficient agalmic actors know how to encourage cooperation and benefit from the result’.

No doubt, whenever an inworld architect like Scope Cleaver negotiates for the contract to build something like the Estonian Embassy, his prospective client has a limited amount of land (and funds), so only requires a small team of ace designers to construct the virtual property. When it comes to negotiating for such contracts, I think it’s fair to say that this is economic activity.

However, I wonder if, overall, Cleaver feels he co-operates with the architectural community in SL? Does this community freely swap building tips, and are customized tools exchanged between fellow architects in accordance with agalmic activity as defined earlier? And the question applies not only to architects but to all the creative communities in SL. Do the machinima community, the photographers, the scripters and the fashion designers ‘encourage co-operation and benefit from the result’? My gut feeling is that they do, but further investigation is required before a more definitive answer can be formulated.

3: ‘It is self-interested. Agalmic activity advances personal goals, which may be charitable or profit-orientated, individual or organizational. An agalmia typically contains both individuals and organizations, with a broad mix of charitable and profit-orientated goals. Agalmic profit is measured in such things as knowledge, satisfaction, recognition and often in indirect economic benefit.’

Obviously SL contains both individuals and organizations who pursue both profit-driven and charitable goals. But the real question is what motivates residents to fill SL with content. Of course, we all know that Anshe Chung and Aimee Weber now have joint ownership of all the gold in the Federal Reserve, since that’s the only way to pay what they are now worth. OK, I exaggerate, but (beyond the necessity of earning money to live) one has to wonder if the financial rewards the elites of SL earn really count as any motivation at all. Cleaver once admitted to me that he would happily work for free, were it not for the fact that we all need money for daily necessities. Moreover, many of SL’s designers have told me that whenever somebody buys one of their products, what is satisfying is the recognition that what they do is appreciated and valued…and I don’t mean in a monetary sense. And then, of course, there are the masses who stock land with builds, galleries with portraits and sculptures, cinemas with machinima, and generally fill SL with content but earn no economic profit for their efforts. I don’t think these people are chasing dreams of financial wealth; I think it is agalmic profit that motivates them.

4: ‘It is self-stimulating. Examples can be seen in free software communities, in which new programmers, documenters and debuggers come from the ranks of free software users’.

Here, I am reminded of an old essay by Gwyn (‘Crowdsourcing in Second Life’) in which she wrote, ‘there wouldn’t be any point of having 3300 sims available on a grid, if they didn’t have any content at all…Instead, Linden Lab learned how to employ the users- very successfully- to develop the content for themselves, without paying a cent’. I could also cite the prediction that, ‘near-term, users will create code to address bugs and other problems, as well as do things like enable SL to run on cell-phones, or add support for different kinds of multimedia content inside the world‘.

All of which sounds very much like Levin’s example of self-stimulating agalmic activity. (Why is it self-stimulating? Because ‘everybody is inspired to keep topping each other with ever cooler things’-Philip Linden).

5: ‘It is self-directing. Free software users provide feedback to developers in the form of bug reports, patches and requests for new features. Software projects can be forked by users when an existing developer group is no longer responsive to their needs. Maintainers are then free to adopt the new work or go their own way’.

This very much applies to SL, and can only become more relevant in the future. Just ask Gwyn, who wrote, ‘things like SL Brazil show what will happen in the near future: Companies creating high quality content and providing the whole range of services that LL refuses to do: a special client, a logging-in system, a welcome area…inworld patrolling, technical support…’

6: ’It is decentralized and non-authoritarian. In a free software community, developer groups maintain their position only as long as they are responsive to their user bases. No one is forced to participate in a project, and the projects people participate in are the ones in which they are interested. Involuntary activity places limits on exchange and creates scarcity. As such, it is non-agalmic. A particular agalmic group may be organised in a top-down fashion, and non-agalmic groups may act agalmicly. But alternatives are available and participation is voluntary. Authoritarian systems remove personal incentives for agalmic behaviour’.

Nobody is forced to participate in SL, and it’s fairly safe to assume that the inworld projects residents undertake are things that interest them. I do wonder, however, if Linden Lab conforms to the agalmic ideal of a developer group capable of maintaining its position only as long as it is responsive to the needs of its users. LL is the true owner of SL and, within the TOS, they are the ultimate authority. Of course, users can raise concerns, hold protests and even opt out of using SL altogether. If we all stopped using SL, LL would have no reason to exist. But I don’t think Levin is talking about software projects simply ending because their participants became too pissed-off to work on them. Rather, he is talking about developer groups being replaced if they don’t run things the way the community likes. It seems to me that LL will maintain their position as the ultimate authority in SL whether the users like it or not.

But, then again, that may change in the future, what with Linden Lab’s plans to make the whole code open source. As Gwyn commented, ‘an open source grid is naturally the dream of everybody who’s tired with LL’s recent strong measures in limiting personal freedoms. By distributing grids all over the world, and interconnecting them together…if your country is restricting personal freedom too much you can jump over to the sims hosted in another country’.

7: ‘It is positive-sum. In games theory, a ‘zero-sum game’ is one in which one player’s gain is another player’s loss. Conventional economies often describes zero-sum games. When two suppliers compete for the dollars of a single customer, or when two government agencies compete with each other for fixed budget dollars, a zero-sum game is being played. A ‘positive-sum game’ is one in which players gain by behaviour which enhances the gains of others. Efficient agalmics is a positive-sum game’.

No one could deny that there are zero-sum games being played in SL. Whenever a client awards a building contract to one group rather than another, or whenever you spend your Linden dollars in this store rather than that one, a zero-sum game is being played. And let’s not forget the griefers. But, while zero-sum games definitely happen in SL, so do positive-sum games. Examples would be the people willing to spend time teaching newcomers the basics of using SL, or more advanced courses on scripting, prim-building and such. It would include the bloggers, prepared to spend a great deal of time hunting down the best SL has to offer (or highlighting its deficiencies) and bringing it to our attention. And, of course, it would include the exchange of items in a gift ‘economy’ and the move to open-source Second Life. Teaching people to use SL efficiently and build competently increases the number of residents who can participate usefully in SL; bloggers with a good reputation attract a readership whom they keep informed about goings-on; giving items away in a gift economy enhances the chances of your generosity being reciprocated; and open sourcing SL massively increases the number of people debugging, tweaking, and enhancing it. In such ways, users gain by enhancing the gains of others.


All in all, I think it’s unarguable that Second Life is a textbook example of an agalmia. And yet, very little study of the agalmic activity in SL seems to have been undertaken. It’s now almost eight years since a little-known professor at the University of Rochester, New York, decided to treat Everquest like a real country and collect macroeconomic statistics like GDP, inflation, productivity and wages. The resulting paper (‘Virtual Worlds: A First-Hand Account of Market and Society on the Cyberian Frontier’) lifted its author- Edward Castronova- out of obscurity to become a leading authority on the implications of MMOGs.

These days, one is spoilt for choice when looking for information on economic activity in SL. Putting the keywords ‘Second Life economics’ into Google returns 5,140,000 hits. By comparison, research into agalmic activity in SL is negligible. The keywords ‘Second Life agalmics’ return a paltry 292 hits (and none that I looked at were particularly relevant). And yet, there is every reason to suppose that agalmic activity makes up the bulk of interactions in SL, and that it can only increase as LL hands over more and more of its baby to the open source community. A thorough investigation of the agalmic activity in SL by anthropologists, sociologists and economists could not be more timely. ‘As time goes on’, wrote Levin, ‘the technology of agriculture and manufacture teaches us how to produce goods with more efficiency, at less cost. The trend in technology is an exponential improvement of knowledge and capabilities’.

Thus, the driving forces pushing us towards agalmics are inextricably linked with those pushing towards molecular nanotechnology. Our best hope for ensuring an inclusive nanotech civilization (rather than one that disfavours the majority of citizens), lies in studying the underlying mechanisms of agalmic activity in SL and guiding the evolution of the metaverse so that it may act as a bridge, enabling us to make the transition to the Diamond Age as smoothly as possible.






Diamond rain.


It might have been called ‘the rock to alter the course of history’. But that would have seemed too grandiose a name for it during much of its long life. In comparison to the vast and dark and mostly empty cosmos through which it drifted, the rock was but an infinitesimal speck, and not at all remarkable. Had its path through space been different, it would have been ignored out of existence like the countless billions of rocks likewise drifting through the lonely cosmos. But its path was what it was, and so it was destined to become something of a legend, the solution to a great mystery, many millions of years into the future.

When the first eyes caught sight of the rock, it could not be recognised as such for it had been transformed. When it did land, the force unleashed would be unmatched by any event in this part of the cosmos, unless in some distant past a star had gone supernova.

But that was in the imminent future. For now, the rock was falling through the planet’s atmosphere, and as it did so the friction built up to the point where the object became a mighty, dazzling ball of fire, with plasma trailing off it like a stupendous comet. If there had been minds capable of understanding what this spectacle meant, they surely would have stood transfixed both by the beauty of what they beheld in the sky, and the fear of what they knew was to come. But there were no minds evolved to appreciate beauty or to connect fire in the sky with something so abstract as the apocalypse. The creatures that inhabited this world were concerned only with hunting, or avoiding being hunted, or mating.

And then the rock, which was the size of Mount Everest, impacted with the Earth. Darkness fell. A nuclear winter gripped the planet. In geological terms the darkness lasted but a moment, but for 75% of all life the winter persisted forever. Their kind would never walk or fly or swim again, for the impact of the rock and the environmental changes this cataclysm wrought doomed them to extinction.

And, for 65 million years, the arrival of that rock and the effect it had on the history of the planet were forgotten, and unknown. No creature which survived concerned itself with such matters; their minds were focused only on day to day survival. But through those daily struggles species evolved, until a primate walked the Earth with a brain large enough to infer past events from the subtlest clues of the present day, and with hands that could fashion tools and so bring about a different kind of evolution: That of technology.

Most of the species that went extinct 65 million years ago were fated never to be remembered. No trace of their ever having existed would be found. But a few left fossil remains, and those stone skeletons and bones were testament to a lost world which no human ever witnessed, a world dominated by dinosaurs. Dinosaurs that had survived far longer than the entire lifespan of the human race, but which seemed to have vanished from the earth almost overnight. Why? That was the great mystery.

But various clues had been left after the cataclysmic arrival of the asteroid, waiting to be noticed by minds smart enough to read their millions-of-years-old message. For the space rock carried with it an element that was very rare on Earth but common in asteroids: iridium. In the geological record there lay a thin layer of iridium, a boundary that could be found throughout the world in marine and terrestrial rocks. Below that boundary, one could find the fossils of dinosaurs and other flora and fauna of the lost world. Above that boundary, no such fossils existed. Along with other clues, the so-called K-Pg boundary testified to the reason why the rule of the dinosaurs had come to such a sudden end.

The asteroid that altered the course of evolution on planet Earth had arrived unopposed, for no power had existed on Earth that could have altered its course or otherwise prevented its impact. Other rocks drifted through space, and they, too, would have smashed into the Earth if the path of our planet and that of the rock had likewise coincided. If. If they had arrived before a certain period in time.

But the next asteroid whose path put it on collision course with Earth was destined to enter our solar system when a force greater than it had emerged on our planet: The force of technologically-enhanced intelligence.

In comparison to the evolutionary forces of natural selection, the rise of technology had been astonishingly swift. Nature took billions of years to invent biological machines which could fly; people aided by technology achieved it after a few hundred thousand years. Nature took billions of years to establish various self-regulating systems between the planet’s geology, atmosphere, waters, and life. Systems which maintained such things as the stability of global temperature, ocean salinity, and the levels of oxygen in the atmosphere. Humankind, once it acquired technology, took only a few hundred thousand years to add to those homeostatic mechanisms what was essentially a nervous system: countless sensors and satellites and computers all networked via communications links, augmenting human minds with the capacity to find patterns and detect phenomena which hitherto had existed outside of conscious awareness.

Automated telescopes scanned the skies, controlled by narrow artificial intelligences designed to detect the nanosecond change in light levels whenever an asteroid passed in front of a star. At first this ability to detect asteroids and map their trajectory was not combined with a capability to do anything about such rocks that were on a collision course. But technological evolution continued apace. Economic incentives, the need to use finite resources with as much efficiency as possible, the improvement to scientific instruments through refinement of their component parts, all these factors and more combined to push technology in the direction of miniaturization. Microtechnology progressed in time to nanotechnology, which, in turn, evolved into atomically-precise manufacturing and self-replicating machines.

No self-replicating machine actually existed on Earth, due to laws which forbade the introduction of anything that could trigger a grey goo scenario. But out in space it was different, for it was recognized that von Neumann replicators held the key to preventing another impact event.

Factories had been established on the Moon, where low gravity made it feasible to launch satellites no larger than your thumb. The wonders of molecular nanotechnology meant these were not just satellites but entire automated factories which could build more of their own kind out of common elements such as carbon, hydrogen, oxygen, and nitrogen. They drifted through the solar system, and when they happened to come across material they could use, on-board guidance systems granted them the ability to navigate toward such resources. Once safely landed, the astonishing process of manipulating matter at the atomic level would begin, and more satellite/factories would be produced. Thus, the number of satellites grew exponentially.

By the time the satellites reached the outer edges of the solar system, exponential growth had raised their numbers to the trillions. A halo of sensors encircled our system, ever-watchful for the arrival of space rocks and equipped with enough computing power and artificial intelligence to accurately map the trajectory of such rocks and determine with 100% accuracy whether or not they threatened the Earth.

When, on May 15th 2060, such a rock passed by the Detection Halo, it awakened something which lay dormant among the many rocks of the asteroid belt. For the self-replicating factories had not only produced the satellites that comprised the Detection Halo, but had also left other factories which could produce- along with more factories- atomic disassemblers. Or, to give them their more common name: rock munchers.

The arrival of the May 15 asteroid caused a signal to be sent out by the Detection Halo, which raced ahead of the rock at the speed of light. As it passed by sensors left on space debris, the dormant factories were activated. What was at first an invisibly sparse layer of bug-sized rock-muncher robots became, in time, a vast cloud, hundreds of miles thick. A huge, narrowly intelligent orb of dust made of self-replicating atomic disassemblers, their numbers growing exponentially as they travelled through space, replicating themselves by devouring necessary materials wherever they found them, until their numbers were sufficient to deal with the asteroid headed for Earth.

As the rock of 65 million years ago had smothered the Earth in a layer of dust that blocked out the Sun, now too was this rock denied sight of our local star by the omnipresence of dust. Smart dust, each speck a complex nanotechnological device with the ability to manoeuvre matter atom by atom. They covered the surface of that rock and began to devour it, disassembling it at the molecular level.

By the time it reached the orbit of the Moon, the rock munchers had processed almost all of the asteroid’s usable elements into products that would serve useful purposes for the many space-borne automated factories that orbited between the Earth and our neighbouring satellite. What was left of the asteroid was still headed for our planet, but that was no mistake. It had been planned as a celebration of the triumph of intelligence over dumb matter.

The whole world gathered in the Nevada desert. Some attended physically; most attended via telepresence technologies which enabled them to enjoy the moment with all the immersiveness of actually being there. Eyes were trained toward the skies. And there they were! As promised, falling down, sparkling like rainbow-coloured drops of water, iridescent. A portion of the asteroid’s carbon, atoms rearranged into face-centred cubic crystal structures, the brilliant sunlight refracting off the droplets as if God Himself were impressing the people below with a light display.
It was raining diamonds.






The television set materialised out of thin air, neatly filling the space that Adam had been staring at the moment before. He sat on the edge of his bed which doubled as his sofa when he did not need to sleep, and its sudden appearance made him HAPPY.

Adam was a simple soul, whose emotions were tied to the objects that surrounded him. There had been a brief period in his earliest days when he had occupied a room bereft of any furniture or appliances. Unable to satisfy his most fundamental needs, he had been MISERABLE, HUNGRY, THIRSTY. But then the fridge and the microwave had appeared in his kitchen, and an autonomous response had sent him wandering over to these new additions, where he fixed himself a meal. His mental state had changed to SATED, QUENCHED and CONTENT (but bordering on DISSATISFIED) as a result.

But this did not last. Before long his bladder and bowels needed emptying and he dutifully did so- all over his floor. Flies began to accumulate around the pile of shit and Adam’s condition slipped into ILL. Those early days were bleak indeed.

But then, a job was given to Adam. Each day at 8:30 AM he would walk out of his door and each day at 5:30pm he would come back home. Whatever he did, it put money into his account which was promptly turned into furnishings, decorations and appliances for his home. The basics came first. A toilet and a sink to wash his hands in. A bed to sleep in. A dustbin for disposing of waste.  Adam did not bring any of these things into his home. He never shopped for them. Instead, they simply materialised inside his house and when they did so, Adam just knew how to use them, like a spider just knows how to weave a web. With mechanical purpose, Adam would go about his routines, fixing his meals, clearing away his trash, emptying his bowels, washing himself, sleeping, waking up, going to work, over and over again.

The days when Adam’s state of mind had been firmly in the MISERABLE range were now but a memory. But hitherto he had never been able to achieve a state you might call HAPPY. That all changed when the television set appeared before his eyes.  Adam sat on the edge of his bed, elbows resting on his legs, head resting in his hands- the posture of the telly addict. He sat there for what must have been hours until, finally, his more basic needs became so overpowering that he had to go and satisfy them. While he was in the kitchen, the television set popped out of existence as quickly as it had appeared, and Adam’s emotional state jumped back to CONTENT (bordering on DISSATISFIED).


A child’s finger pressed gently on the button of a mouse, initiating a command to remove a graphic representation of a television, and place it back in the inventory slot from whence it came.

Emily, like all children, learned about her world and her place in it through the medium of play. Like generations of little girls before her (she was seven years old), she had toys that were her companions and mentors, who helped her roleplay key skills she would need as an adult. Those toys had changed, somewhat. Where once there had been dolls and dollhouses, now there were relational artifacts- toys that actively responded to your play, almost as if they could read and react to emotional states. She was too young to understand the smoke-and-mirrors aspect of these toys, how their tiny sensors and microprocessors only had enough power to detect the facial expressions and tone of voice of the person they were interacting with, adopting facial expressions and mannerisms of their own while not actually having any inner-life at all. But then, whoever designed the animations and the software that triggered them understood human psychology so well that even most adults occasionally felt a twinge of empathy toward these toys.

Emily’s favourite toy was the computer game WeePeeple (at her immature age, she did not question the curious convention that things must be misspelled in order to be cool to the kidz-sorry).  It fell into the genre of games known as ‘sims’.  You designed your own inworld character, selecting a number of pre-designed noses, eyes, chins, ears, body shapes, and then adjusting sliders that reshaped each part, making it smaller or larger, fatter or thinner until your character conformed to whatever image you originally intended. The design interface that controlled this creative process had been refined over the years, so that this aspect of the Sim experience had gone from the Second Life era, when nobody but the most artistically gifted could craft anything but a butt-ugly avvie, to WeePeeple’s delightfully intuitive setup that let even little girls like Emily sculpt beautifully realised characters.

The real fun began when your character was taken out of the initial design stage and placed into the world proper. One did not control WeePeeple directly. Instead, each character acted autonomously, driven by basic needs such as hunger, thirst, fatigue, restlessness, need for companionship (or solitude). On top of that, each character had personality traits, modelled on the ‘Big Five’ (extroversion, openness, agreeableness, conscientiousness and neuroticism) and, depending on the settings of these underlying traits, your character could be a companionable, sociable type (or introverted and reserved); friendly, empathetic and warm (or suspicious and egocentric).  All kinds of personality types that fell between these extremes were possible.

Although Emily did not control her character directly, she did have an indirect influence over his life. Most importantly, it was she who had to design and build a home for him to live in, and fill it with furnishings, appliances and whatever else she considered might be necessary. In a very real sense, every design decision a player took affected the development of the character. Walt Whitman once wrote, ‘There was a child went forth every day/ And the first object he look’d upon, that object he became’. This was literally true for WeePeeple. Adam’s AI was primarily concerned with pathfinding- being able to navigate his way around any room and any obstacle without getting stuck or confused. All other abilities that he acquired as time went by were actually scripts embedded into every object. When Emily bought him a cheap microwave oven, the instructions contained within it directed Adam, so that when his hunger drive was sufficiently high, he would seemingly operate the appliance as if he knew how to microwave a meal. When his levels of fatigue were high, his bed told him how to turn down the sheets, lie down, and sleep. Every design decision that Emily made- what colour to paint his living room wall, what flowers to set upon his kitchen table- affected Adam’s state-of-mind, shaping his personality.
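The architecture just described- a character whose AI tracks only needs, with the know-how living in the objects themselves- is the real-world ‘smart object’ pattern used by needs-driven sims. WeePeeple is fictional, so the sketch below is purely illustrative: every class name, need, and relief value is invented for the example.

```python
class SmartObject:
    """An object that advertises which need it satisfies and by how much.

    The behaviour script lives in the object, not the character- just as
    Adam's microwave, not Adam, 'knows' how to cook a meal."""
    def __init__(self, name, need, relief):
        self.name = name
        self.need = need      # the need this object addresses
        self.relief = relief  # how much it reduces that need

    def use(self, character):
        character.needs[self.need] = max(0, character.needs[self.need] - self.relief)

class Character:
    """The character's own AI is minimal: needs rise each tick, and he
    seeks out whichever object offers the most relief for the worst one."""
    def __init__(self, name):
        self.name = name
        self.needs = {"hunger": 0, "fatigue": 0}

    def tick(self, objects):
        for need in self.needs:
            self.needs[need] += 10  # needs build up over time
        worst = max(self.needs, key=self.needs.get)
        options = [o for o in objects if o.need == worst]
        if options:
            # Use the object that best addresses the most pressing need.
            max(options, key=lambda o: o.relief).use(self)

adam = Character("Adam")
home = [SmartObject("microwave", "hunger", 50), SmartObject("bed", "fatigue", 80)]
for _ in range(3):
    adam.tick(home)
print(adam.needs)  # → {'hunger': 0, 'fatigue': 10}
```

The point of the design is the one the story makes: buying Adam a new appliance is, in effect, installing a new behaviour, because adding a `SmartObject` to `home` extends what the character can do without touching his own code.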

Since there was no set goal in WeePeeple, there was no proper way to play it. It was not so much a game with fixed rules, more like a sandbox, a toyset that encouraged experimentation. People had devised their own games; their own reasons for playing. Some people tried to create ‘icons’. In other words, they designed their character to look as much like a famous person as possible, and then set about creating a living space that would direct their evolving personality traits into becoming just like the person they were meant to be. Other people seemed to enjoy killing their character, and routinely posted online videos of ever-more complex Rube Goldberg contraptions of goldfish bowls and ironing boards and knife blocks and dinner plates and cricket balls set up in such a way that the hapless character would set off a lethal chain reaction as soon as his or her need to use the toilet triggered a familiar routine.

Emily belonged to neither camp. She simply enjoyed looking after Adam, and gained satisfaction from watching him develop from a near-hopeless case who could barely boil an egg and tidy away his mess into a skilled cook who could entertain guests, carry out extensive DIY, and who had a range of hobbies that he could use both in social settings and for keeping himself occupied when he was alone. Adam had almost no inner-mind to speak of, but that did not matter. Emily imagined that he did, attributing despair whenever he walked or sat dejectedly, seeing triumph whenever he bested a companion at a game, or successfully completed a task like fixing a wonky table leg. Emily attributed consciousness to Adam, her vastly superior mind taking basic psychological cues and using them to weave a far richer inner-life than her character could actually have.

Not to put too fine a point on it, Emily enjoyed interacting with Adam because her own hopes and fears and needs and dislikes were mapped onto him. His cartoonish antics were a caricature of her own developing self. She identified with his daily struggles, understanding in an intuitive way (even if she could not have expressed this in words) that Adam was like a mirror, but one that reflected her personality rather than her appearance. By guiding Adam, she was learning about herself and what sort of person she would want to grow up to be.

Emily wanted Adam to be happy, but she had played WeePeeple long enough to be able to predict when a choice she made now would have a negative effect on him, even if it seemed of benefit in the short term. Her mind had raced forward in time as she saw how Adam had sat, glued to the television. He would forgo sleep, forget to eat, he would go (reluctantly) to work and underperform in his lethargic state (a player never saw their character work, the game merely showed them leaving the house, and then returning with the inworld clock jumping forward several hours). It was tough love, she knew, but the television would have to go. The experiment to design for Adam a home that gave him the most positive attitude possible would go on.

Emily finally hit the Save icon, and shut the computer down. She walked over to a window and looked out onto a suburban street, lined with trees. It was an autumn day, and a breeze was blowing through the trees, every now and then causing a few leaves to break away and tumble to the ground. Her eye caught one such leaf, and she followed its zig-zag path as it was blown this way and that by the force of the wind.


Calculations. Calculations beyond count, beyond even the imagination of the average person, were being crunched by supercomputers. There were several of them, each one dedicated to a different task that, when working together, amounted to the extraordinary feat of emulating a little girl and an environment for her to grow up in. The supercomputer that rendered her world was currently calculating (among too many other things to list here) the effects of wind and the weakening bonds between twig and leaves. Zeros and ones in seemingly endless strings ran through its memory, calculating for wind turbulence and other physics models.

These supercomputers were diligently monitored by teams of scientists. Some were specialists in computers, others in cognitive sciences, still others in child psychiatry. Many different fields of expertise, but all united in a common vision. Finally, computing power and knowledge of the design and functions of brains had reached a level of complexity where the prospect of building and simulating a biologically-accurate virtual human was no longer science fiction.

Dr Giulio Dinova, who headed the research team, sat in what had become known as the ‘Sensorium’. That was the name given to the room where sophisticated virtual reality tools translated the abstract mathematical models into something a person could more easily understand. Here, you could enter the world of Emily. You could be a fly-on-the-wall, observing her as she interacted with her world. You could zoom in, right down to the molecular level, and track the neurological pathways that developed as she learned a new task. 

Dr Dinova contemplated the past developments that had led to this little girl living her virtual life. Past researchers had developed sophisticated network models of the metabolic, immune, nervous and circulatory systems. Others had designed structural models of the heart and other muscles. Perhaps most importantly, a team led by someone called Henry Markram had shown that you could model a brain in software. The limitations of computer power back then had meant only a rat brain could be fully modelled. But (as Markram had suspected it would) reverse-engineering the rat neocortex had given computer scientists new insights into how the power and performance of computers might be dramatically extended.

But, even now with the advent of handheld devices capable of petaflop levels of number crunching, it was still too expensive and time-consuming to build a VR human from scratch. The average person possessed a VR twin constructed from a library of averaged mathematical models of newborns. When a real person was born, non-invasive medical scanning techniques recorded their vital statistics, and that data was then integrated with a model based on such things as sex, ethnicity, geographic origin, and other salient features. The result was a model that closely resembled the actual person. Whenever that person went for medical checkups, his or her virtual twin would be updated with the latest biomedical information. Whenever a person fell ill, the condition would be replicated in the VR twin, and simulations for the range of available treatments would be undertaken, in order to anticipate short- and long-term effects. The availability of VR twins had not only eliminated the need for animal experiments, it had also largely consigned the prospect of side-effects from medical treatment to the dustbin of history.

That a VR twin might be suffering in the name of science was not a thought that crossed anyone’s mind. This was because the model was not sufficiently complex to enable an empathetic and aware mind to emerge. The VR twin was nothing more than the integration of past subsystem models into one systematic model which was sufficient for modelling the effects of drugs, but not sufficient to model the subjectivity of pain and suffering, or any qualia for that matter. Simply put, a VR twin was a zombie.

Emily, though, was very different. She was the result of the most sophisticated biophysical and neurological modelling of a person that had ever been attempted. Dr Dinova and his research team had at their disposal supercomputers of seemingly limitless power, not to mention decades-worth of data that provided exhaustive details of the reverse-engineered brain, nervous system, and other parts of a human body. Reading through the blog posts and watching posted video comments, the team (who never ventured outside of their respective labs, dedicated as they were to the task and supplied with everything they needed) understood that, as far as the general public were concerned, their research was both exciting and dangerous, because it touched upon questions that some considered forbidden territory.

Ever since Markram’s ‘Blue Brain’ project had successfully modelled a neocortical column, the question had been asked: When, if ever, would a virtual brain possess a virtual, conscious and self-aware mind? Could a simulation ever be said to be conscious, or was that something that no amount of calculations per second could ever capture? Early models had been impressive from the viewpoint of robotics, but less so when measured against nature. Markram and his team had designed a robotic rat, remote-controlled by a model of a rodent brain that existed within the rows and rows of CPUs that collectively made up a supercomputer called Blue Gene. The robot rat was put through the kinds of experiments real rats were routinely asked to perform- things like negotiating a maze. The real rats would always complete the task long before the robot rat. It was almost as if the latter were operating in slow motion- which indeed it was. Such was the complexity of running the simulated neocortex that one second of thought and action required several minutes of number crunching. All but a few took this lag to be proof that a robot could never equal a living, breathing animal.

But, the model had been refined, and computing power had continued its exponential rise. It was not long before the robot rat was running through mazes every bit as quickly as its biological peers. And, it was not all that long before sales of real rats (and later cats, and later still, dogs) were falling as people saw the benefit of lifelike robot pets that would never die and cost a great deal less in upkeep.

Still, the perceived difference between animals and people meant that few dared to model a human to the level that many now thought would result in a conscious awareness. But, given that people had been fascinated by the workings of the mind since, well, forever, it was perhaps inevitable that, at some point, a research team would put together a system capable of growing a virtual baby that would become, in every respect, a person. Dr Giulio Dinova and his team had done just that.

The voice of Dr Dinova’s research assistant, Gwyneth Epsilon, awoke him from his semi-hypnotic state. The sensorium had the effect of making one lose all sense of space and time in the physical world. Concentrating so much on the virtual world that Emily inhabited, one sometimes came to believe it was reality almost as completely as Emily herself believed. Dinova corrected himself. For Emily, it was not mere belief but a simple fact. What was it that the old roboticist Moravec had said? Oh yes, ‘to a simulated entity, the simulation IS reality and must be lived by its internal rules’. That…

“I said, has she been playing WeePeeple again?”. Dr Epsilon’s tone of voice suggested she had spent quite some time attempting to attract Dinova’s attention, and was becoming rather irritated at his absent-mindedness. His hand swept across empty space, and icons that seemed to hover in front of him dutifully scrolled from left to right. A finger jabbed at nothing, and Dr Dinova saw his finger touching the icon that minimized the Sensorium’s all-encompassing visual and audio rendering of Emily’s world. Moments later, a shimmering cloud had solidified into the shape of Dr Epsilon, accurate down to the last mole and laughter line. The two doctors were, physically, on opposite sides of the country. But in the age of augmented reality seamlessly blending the real and the virtual, people from around the world could collaborate as closely as any team whose members lived in close proximity.

“It’s funny”, commented Epsilon, in a tone that implied she meant ‘peculiar’ rather than ‘amusing’, “here is the most sophisticated artificial life in history, and yet she has to make do with a mouse-driven system. It’s like she is a late 21st century girl stuck with late 20th century technology”.

Dinova frowned. “You forget that our computing resources are not as deep as some suppose. Yes, yes, I know they exceed the entire computing capacity of the 20th Century Internet by an order of magnitude, but running a simulation of a person, right down to the synaptic firings and neurotransmitter concentration levels, is still a phenomenally intensive task. And then you have to take into consideration the fact that we have to render a physically-plausible environment for her. Of course, we are not simulating the world down to the level of particles- it is all tricks and sleights of hand designed to fool Emily’s brain into thinking it is inhabiting a physical place. But even so, we are close to pushing the limits of what is possible. I am afraid that emulations of crude videogames and their control schemes are just about all we can provide for decent entertainment. That, and dolls and cuddly toys that come with only the crudest models of child-parent social interaction”.

Epsilon understood all this. She also appreciated that it made the team’s job simpler. They were here to study, at a level of detail that had not been possible before, the stages of development that resulted in a newborn baby growing into a child with an inner life of her own. When she was playing with WeePeeple, Emily’s virtual mouse (which was solid and real to her, of course) permitted only two degrees of freedom. This restricted the amount of motor control her body needed to perform (she was mostly just sitting still, only moving the arm that controlled the mouse), which enabled the research team to follow the corresponding brain activity that underpinned the building of interoception and exteroception maps of the internal state of her body, the world around her body, and her body’s relation to the world. If she had been doing something like playing tennis, which involved using all of the body and required at least 19 degrees of freedom, the amount of data pouring from the computers that updated her brain and nervous system model would have far exceeded the team’s capacity to follow. They could only track such developments by relying on the Sensorium’s drastically simplified representations.

It was obvious that Emily enjoyed playing with Adam. And there was something fascinating about watching a virtual child, modelled so completely that she had a mind, guiding the development of a virtual person in a virtual world within a virtual computer that was itself within a virtual world, all ultimately existing as software within the rows and rows of supercomputers linked by super-highspeed Internet3 links drawing on the spare computing cycles of the Cloud. And yet, Dr Epsilon felt slightly troubled. Her colleague Dr Dinova’s fascination with Emily and her daily routines seemed somehow detached and clinical. He reported on her progress as if he were a scientist noting the growth of some novel bacteria. Epsilon, by contrast, felt more of an emotional attachment to Emily.

“Do you ever think about her 16th birthday?”.

Dr Dinova laughed. “Goodness, we have gathered so much data already, I think we will be spending five lifetimes just studying the details of development in the toddler stage! We will not be concerned with tracking the neurological underpinnings of morose teenagers until decades after the simulation is completed”.

Dr Epsilon shook her head. “No, that’s not what I meant. I meant, we are only running this simulation until our test subject is 16 years of age. After that, we shut everything down”.

Dr Dinova’s expression was one that suggested he failed to grasp the point. “Of course. Like I said, we are amassing such an overwhelming amount of data, we will have no choice but to halt the experiment at some point. I agree that any particular date is arbitrary, but a line must be drawn somewhere. Don’t worry, your job is not going to disappear as soon as we pull the plug. Like I said, we will all be engaged for decades to come, poring over the data we have obtained during…”

“No, no”, interrupted Dr Epsilon, “that is not what I am getting at. I was just wondering about the ethics of it. Shutting down the simulation, won’t that be tantamount to murdering Emily?”

Dr Dinova looked serious. “Now look. Emily is not a little girl. She may resemble one, but she is not flesh and blood. She is a test-subject. She is an experiment. I know she has been designed to push evolutionary buttons that trigger the nurturing instinct, but never forget that she is, when all is said and done, nothing but a vast pattern of calculations”.

Dr Epsilon did not look convinced. “But she is everything we are, as far as we can tell. Her environment may be a crude approximation of reality, but she is not smoke-and-mirrors. Her brain is a model that reproduces, in exact detail down to the molecular level, everything going on in my brain, or yours. Nothing but a vast pattern of calculations? What are we? Nothing more and nothing less”.

“We are getting into the realm of ivory-tower philosophy here”, countered Dr Dinova. “You must remember that we cannot determine if Emily really has a conscious mind. For all we truly know, she may be nothing more than a more convincing zombie than Adam. For that matter, for all you know I may be nothing more than a zombie too. Perhaps you are the only conscious entity in all existence. Maybe our reality is, itself, nothing but a grand illusion created by computers? We can argue about that until the end of time, and I dare say people probably will. But, there are empirical studies to be conducted, falsifiable theories to be tested. We have to remain impartial scientists first, and concerned parents of Emily, a distant second, IF we can permit ourselves to indulge in such roles at all”.

Dr Epsilon looked a bit sad. “I permit myself to feel like a guardian toward her. Of course, I do not actively make my presence known to her. I am not sure how her young mind would cope, knowing her reality is not real at all. As far as Emily is concerned, the parents in the memories we imprint are her actual parents”.

Like all people, Emily needed a social circle made up of family members in order to help her development. But, rather than waste computing cycles in simulating other virtual people to be her companions, the research team opted instead to imprint memories of being cared for by a mummy and daddy. As far as Emily was concerned, her mother and father were in another room. Automated systems tracked her emotional state, and whenever she seemed to be in need of an adult presence, her simulation was paused, updated with memories of comfort and social interaction, before being allowed to progress. For Emily, her life was filled with interactions with loving parents, neither of whom existed outside of her mind.

Dr Epsilon thought about those moments when Emily was on pause. It took a few minutes to insert the fake memories into her mind, during which time her awareness was zero. They were little deaths, these moments. And yet, whenever the simulation was unpaused, Emily would show no signs of understanding her world had been suspended. How could she know? Since she was not conscious during the moments when her simulation was paused, she perceived no loss of time. As far as Emily was concerned, her life ran continuously.

“I suppose”, said Dr Epsilon, speaking out loud to her colleague but really just voicing her thoughts to herself, “that shutting her down for good is no more harmful than when we pause her simulation. She will not know she is dead, because she will not know anything. She will not be anything. Her mind, her body, her world, all of it will be nothing once the computers stop running through their calculations. Only…”

“What?”, asked Dr Dinova, looking genuinely curious.

“Well, do you remember that old roboticist? Professor Breazeal?”.

Dinova looked like he was racking his brains. “God, you are going back a few decades. But, yes, I remember her. Never met her of course. She died when I was in my teens. But, now you mention her, Cynthia Breazeal’s work was one of the things that got me interested in this stuff in the first place. You know, she pioneered work in using studies from child psychology to build social robots. I mean, robots that could respond to, and give off, cues from body language, facial expression, and tone of voice, in order to establish an emotional connection with their users”.

Epsilon nodded. “Yes, that was her. One of Cynthia’s earliest creations was a robot head called Kismet. It was a terribly crude thing, inferior to the toys that Emily plays with, I would think. But, she became attached to it, and she felt quite sad when the time came to leave Kismet behind. So, call me silly, but I think there is no shame in admitting that I am not looking forward to the day when Emily is shut down for good. Not one bit”.

With that, Dr Epsilon shut down the communication link, and her avatar dematerialized in a puff of dispersing, virtual smoke. Dr Dinova sat in contemplation. He was thinking of the whole setup that allowed Emily to exist. All the supercomputers and the Internet3 links that sent information back and forth between them. A Web of supreme computing power weaving the magic of conjuring up a ghost-in-the-machine. He returned to the Sensorium, maximizing the window on Emily’s world, and observing from his godlike perspective the detailed steps in her development.


The Web was dreaming. No human understood this, because the Web’s mind was the emergent pattern of a global brain, too big to be perceived by human senses. But, nevertheless, it was dreaming. And, what it was dreaming of, was a team of scientists, and their equipment, and of a girl called Emily who existed as patterns of information within the patterns of information that were the supercomputers the Web dreamed about.
In the past, a few people had wondered if, by some great accident, the Web could become conscious. Such a thing had happened, but it would be somewhat inaccurate to call it accidental. It was not planned- no person, group, corporation or government had ever sat down and devised the near-spontaneous emergence of a virtual research team, complete with virtual supercomputers, all existing within the digital soup of zeros and ones that now enveloped the world in an invisible yet all-pervasive ether. But neither was it an entirely random event. 

What trigger effects had led to this remarkable outcome? One cause was the sheer amount of information about human existence that had been uploaded to the Web. The age of the personal computer had only truly begun with the era of the smartphone, and only really took off when the CMOS era had been superseded by molecular electronics that could pack, in complex three-dimensional patterns, more transistors into a sugar-cube-sized device than all the transistors in all the microprocessors that had existed in 2009. It was apps, running on phones that could keep track of their owner’s position thanks to inbuilt GPS and then (as the nanotechnology behind molecular electronics led to medical applications) all kinds of biometric data, that really opened the floodgates for offloading cognition. The very best designers of apps knew how to tap into the computer intelligence’s native abilities in order to gather crowdsourced knowledge from anyone, anywhere, who had spare time to perform a task computers could not yet handle.

From tracking the movements of whole populations to monitoring the habits of an individual, every person was, every second of the day, uploading huge amounts of information about how they lived their lives. This, of course, presented the problem of retrieving relevant information. Access to knowledge had changed from ‘it’s hard to find stuff’ to ‘it’s hard to filter stuff’. More than ever before, the market imposed an evolutionary pressure to establish semantic tools, with the ultimate aim of making the structure of knowledge about any content on the Web understandable to machines, linking countless concepts, terms, phrases and so on together, all so that people could be in a better position to obtain meaningful and relevant results, and to facilitate automated information gathering and research.

The Web became ever more efficient at making connections, and all the while the human layer of the Internet was creating more and more apps that represented some narrow-AI approach. Machine vision tools, natural language processing, speech recognition, and many more kinds of applications that emulated some narrow aspect of human intelligence were all there, swimming around in the great pool of digital soup, bumping into long-forgotten artificial life such as code that had been designed to go out into the Internet and evolve into whatever was useful for survival in that environment.

That environment, a vast melting pot of human social and commercial interactions, imposed a selective pressure on evolving code that could exploit the connections of the semantic web in order to evolve software that could understand the minds of people. Vast swarms of narrow-AI applications were coming together, and breaking apart again, reforming in different combinations. The spare computing cycles of trillions and trillions of embedded nanoprocessors were being harvested until, like a Boltzmann brain spontaneously appearing out of recombinations of a multiverse’s fundamental particles, Dr Dinova, Dr Epsilon, and their supercomputers all coalesced out of the digital soup.

There they were, existing within the connections of evolving code. Their appearance had been as difficult to predict as the evolution of elephants and yet, with hindsight, was as foreseeable as the eventual rise of land-based animals from the ancestral fish that first dragged themselves out of the water. But people went about their concerns without ever knowing that, somewhere in abstract mathematical space, among the vibrations of air alive with the incessant chatter of machines talking to machines on behalf of humankind, a virtual research team had emerged, pondering questions of consciousness that all sentient beings invariably strive to understand. The Web was dreaming, and while it did so, Emily helped Adam cope with his daily routines, unknowingly watched by Drs Dinova and Epsilon, who themselves existed as purely digital people blissfully unaware that they were nothing but the imagination of a global brain made up of trillions and trillions of dust-sized supercomputers and sensors, ceaselessly gathering, and learning from, patterns of information about the daily habits of humans.


A red giant existed where no such phase of a star should exist at this stage in its lifecycle. The planets, moons, and asteroids that had orbited the star ever since they coalesced out of the dust of the nebulae from which the nuclear furnace had first ignited, were gone. But it was not a swelling star that had swallowed them, puffing out its outer layers as it ran out of hydrogen with which to fuel its nuclear reactions. The star was still a mildly variable G2 dwarf, shining with the dazzling yellow-white light of a sun with billions of years left before it reached its old age. Mind had repurposed the material that orbited the star, organising it so that it captured nearly all the energy pouring from it, and using it to drive information processing that outthought the biological civilization that once thrived on the third planet in the solar system by more than a trillion times.

Several areas of research and development had ultimately converged, and this outcome had been the reason the Great Migration happened. Efforts to pack increasing amounts of computation into smaller and smaller spaces had led to molecular electronics. The self-assembling techniques required to manufacture these marvels had been extended until bottom-up assembly from raw elements could produce any physical product, so long as it did not violate the laws of physics. Because of the self-replicating nature of this nanotechnology, the value of physical objects began to decrease. Information was the only thing of value, and so while diamonds no longer held any particular worth, carbon crystals organized to maximise information processing became coveted possessions.

Dust-sized sensors went forth and multiplied, the nanotechnological equivalents of the bulky mobile phones whose microprocessors were so crudely hewn from silicon that you could actually feel the weight of a single device in your hand. The great advances in brain reverse engineering made possible by biocompatible sensors wirelessly transmitting precise recordings of brain activity had led to millions of applications that outsourced extensive aspects of cognition. The majority of a person’s thought processes were no longer performed by the few pounds of fragile jelly encased within their skull, but by the haze of computation that surrounded them, two-way wireless connections between neural wetware and molecular-electronics hardware augmenting each person’s cognitive ability by ten thousand trillion times.

Death was abolished, at least for those people who permitted automated life logging to keep extensive and detailed records of their physical selves. Such people gradually migrated into the Cloud, as neuromorphic configurations of nanobots replaced more and more of their functions. For the most heavily cyborged, the eventual death of the physical body went largely unnoticed, for it was by then not much more than a fleshy appendage, the last vestige of biological existence, to which the emulation clung out of sentimental attachment.

But as more and more people shrugged off fleshy existence in favour of life in the rapidly growing cyberspaces, the sheer waste of computing capacity surrounding them became apparent. Why should CHON be assembled into a structure that could only hold one human mind, when modern techniques could take the mass of one human body and reconfigure it into computing elements that could run tens of thousands of uploads? 

In the end, no battle between humans and posthumans was necessary. Those who dabbled in augmentation soon discovered the benefits of virtualization, happily allowing more and more of their selves to migrate into the cloud. And just about everyone did dabble, because there was always a step conservative enough for someone to be comfortable with, and from there the next step seemed similarly untroubling. Eventually, uploaded people far outnumbered those who remained as flesh. Although some posthumans were still against involuntary uploading, more and more now saw it as a duty, just as humans had once vaccinated their children with or without consent. And so the time came when the nanobots were programmed. Programmed to slip painlessly into the brains of the remaining humans, put them to sleep, and destructively map, in exacting detail, every function required to lift them into cyberspace, there to live in a recreation of their former, physical world, rendered to a level more than sufficient to be completely convincing to their simulated human senses.

And then the mind children turned their attention to the planets and moons and all available material in their local habitat. Reduced to atomic elements, the orbiting bodies of the solar system were reconfigured so that each mote of matter was processing one bit per atom. An increasingly dense cloud of these Avogadro machines englobed the sun, and its light began to dim as the star’s energy was harnessed, allowing the solar system to finally wake to consciousness. There it sat, an orb as big as the orbit of Uranus, glowing dull red from the minuscule amount of radiation that leaked from its outer shell.

The most basic thought it was conscious of was the accumulated knowledge of worlds. Within its computational processes, more than a thousand years’ worth of human history played out every microsecond. Had they known that their reality was just a small part of a greater information processing, the people of these simulations might have wondered what great purpose drove the matrioshka brain. In truth, the computational resources at its disposal were so immense that it needed only the barest flicker of interest in its own history to bring about these simulated worlds.

It dreamed of events that could never have happened, imaginings as far beyond a human mind as the combined mental power of human civilization is beyond the imagination of a nematode worm. It dreamed of plausible pasts, alternative histories that could have been the case if only some chance event had gone this way instead of that. Its dreams ran recreations of history as it actually happened, or close to it. Trillions of such simulations ran through its mind every second, and in each one there were people who, subjectively, perceived time passing in decades, their own lives linked to the past via the recollections of parents and grandparents. The Roman empire coexisted with the Second World War and everything that happened on the 21st of April 2003. All ran simultaneously, but isolated from the perspective of the simulated humans, for their minds were not capable of seeing the fourth, temporal dimension, where history was laid out once and for all in a solid block.

These simulations of a physical, embodied reality were but one layer. Beyond this realm, introspection was carried out in increasingly abstract forms. Processes merged that optimized the cyberworld so that only the salient details of physical forms entered into consciousness. Simulated sense impressions were reduced to mere abstractions. Beings that had transcended to this level merged into hive-minds optimized to filter the memetic information generated at the lower levels. Here, whole histories were perceived in a single instant, as quickly as a human perceives the integrated information contained within a photograph. Here, the mind was liberated from the body; space and self elevated to the status of pure thought, where there was no within and without. And beyond this level, hive-minds clustered together into higher-dimensional configurations that allowed such a complete merging of boundaries that ordinary dichotomies no longer existed.

Amalgamated thoughts cycled through the layers like fractal patterns of self-similar ideas forming on the edge of chaos. Minds at the lower levels occasionally strove to rise above the limitations physical, embodied reality imposed on Thought. At the same time, the ONE-ALL at the highest level, where the state of pure introspection permitted no division between subject and object, nevertheless perceived that its perfection was marred by the lack of any direction in which to improve. Fractures routinely appeared, multiple souls in multiple bodies resimulated and reincarnated at the lowest levels.

And somewhere among all those fantasies and alternate histories and recreations, there existed a simulation of the earth, at a time when the Web was just powerful enough to allow Dr Dinova et al to emerge from its computations. One virtual planet with its virtual global network, calculating the activities and motivations of a scientific research team, who observed Emily, who cared for Adam.


The matrioshka brain was dying, the star which provided its energy having finally exhausted its reserves of hydrogen fuel. It had swollen to the size of a red giant as the helium ash left over from the nuclear processes took over the main burning. With the energy output winding down and the star no longer able to support its own weight, the surface shrank inwards. The contraction tapped fresh pockets of fuel, causing the energy output to roar up again. Each time this happened, the surface of the sun whipped upwards, sending out titanic sonic booms that blew away mass with every shockwave.

There were other stars in the galaxy, still pouring out their energy, but the matrioshka brain understood that replicating itself by reconstructing their orbiting bodies was only a temporary measure. The stars could not shine forever. The nuclear fusion going on in each one was steadily transmuting hydrogen into elements that resisted pressure so fiercely that even the biggest star could not sustain fusion. Those stars would end their lives in violent explosions, until only the trickle of Hawking radiation from the black holes at the centres of galaxies would remain, gradually decaying until no useful energy was left in the universe.

The matrioshka brain turned its mighty powers to the problem of first cause. Within itself there were realities nested within realities, and it could account for the existence of each one in terms of the observations and manipulations of the algorithms that underlay the rules of the simulation. All that was within itself it understood. But outside of itself there was a whole universe whose existence preceded its own. The matrioshka brain considered the possibility of a mind superior to its own; one that wrote the program that simulated what it took to be the real universe, and who built the computer to run it.

But, then, what need was there for the computer? The only thing that needed to exist was the program. After all, once written, it would determine everything that would happen. All explanations, everything that encapsulated the form and functions of the universe, amounted to software describing everything, including the computer and some set of initial conditions.

Furthermore, the program ultimately required no programmer. All it needed was to be one of all possible programs. Beyond space and time there could be no boundaries and, therefore, no limits. The infinite could not lack anything; therefore, all possible programs had to exist. Death and life were but an ouroboros, an entity that created itself out of the destruction of itself.

As the star that was its power source blew away more and more mass with each shockwave, and even as most of its computronium shells drifted apart, the tiny white dwarf’s gravity well too shallow to hold on to them, the matrioshka brain found a happiness that could only have been exceeded by Adam. After all, perfection is possible only for those without consciousness, or for those with infinite consciousness. In other words: dolls and gods.
Posted in fun stuff | 5 Comments

A Warning For The Future

(This essay is the final part of the series ‘How Jobs Destroyed Work’)
Earlier, we saw how market efficiency has no general consideration for what is being bought and sold. The market really doesn’t care whether products and services are useful or not, harmful or not, so long as cyclical consumption is kept at an acceptable rate. The same is true of labour. So far as the market economy is concerned, the true utility of labour, its actual function, is not as important as the mere act of labour itself. So long as cyclical consumption and growth are maintained, what the job consists of- whether it actually serves a necessary function that encourages work as I would define it, or is detrimental to it- is far less important than perpetuating the current system.
Here we are no longer just talking about the practical argument for jobs. Were that the case, we would likely use technological unemployment as an opportunity to end wage slavery and transition to a post-hierarchical world in which robots occupy positions that used to be jobs, freeing up people’s time so that they can pursue callings. Marshall Brain’s novella ‘Manna’ presents two visions of how the rise of robots could affect our lives, one negative and one positive. The positive version shows that we can imagine ways in which we might adapt to a world in which jobs are no longer a practical necessity. But we don’t just have the practical justification to deal with. There is also the ideological part of the argument, and as the practical excuse begins to wane- becoming harder to justify as machines gain the abilities needed to make Aristotle’s vision of a hierarchy-free society a genuine possibility- we shall likely see the ideological justification for maintaining the current system pushed with increasing fervour.
The ideological argument is that jobs are not in fact a miserable necessity we should look forward to being rid of as soon as practically possible, thereafter to engage in nonmonetary forms of productivity as we create the work, selves and societies we actually want; rather, jobs are work, the only kind of work anyone should aspire to. Maybe it could be argued that when jobs were very much a practical necessity it did make sense to encourage a belief that submitting to a job and working hard mostly for someone or something else’s benefit was a way of achieving success in one’s own life. But as the practical justification for jobs is rendered obsolete by technology, the old ‘work ethic’ that cannot imagine a good reason for productive effort beyond ‘doing it for money’ becomes a serious impediment to transcending the current system. 
We must ask: Who really benefits from perpetuating this ideal of working hard for most of one’s waking hours, mostly for the benefit of a ruling class of financial nobility? Obviously, it is in the interest of whoever occupies the top of a hierarchy to maintain the structure from which their power and prestige is derived.
Throughout history there have been a few who, craving power, have done all they can to convince the rest that they ought to sacrifice the time of their lives. They have come in many guises- as lords and monarchs insisting we should be bossed by the aristocracy, as socialists who believe we should be bossed by bureaucrats, as libertarians who think we should be bossed by corporate executives. Exactly how the spoils of power should be divvied up is a topic of some disagreement among them. There is much argument over working conditions, profitability, exploitation, but fundamentally none of these ideologues object to power as such and they all want to keep us working in some form of servitude for one simple reason: Because they are the ones who mostly benefit from making others do their work for them. It’s very convenient for this powerful minority that the populace subordinate to them do not become too happy and productive in the true sense of the word; that anyone not willing to submit to work within whatever context suits their agenda is viewed with pity or contempt. As George Orwell wrote:
“If leisure and security were enjoyed by all alike, the great mass of human beings who are normally stupefied by poverty would become literate and would learn to think for themselves; and once they had done this, they would sooner or later realize that the privileged minority had no function, and they would sweep it away”.
In Orwell’s story, an endless war is fought between three superstates. The real purpose of this war is not final victory for one of the sides. In fact, the war is intended to go on forever. The real purpose of the war is simply to destroy material goods and so prevent leisure from upsetting the hierarchical power structure.
In reality much more subtle methods, partly to do with market as opposed to technical efficiency and with manufactured debt, are used to perpetuate the hierarchy. A popular reply to the question “what happened to reduced working hours?” is that a massive increase in consumerism occurred, as if we collectively agreed that more stuff was preferable to more free time. But that provides only a partial explanation. Although we have witnessed the creation of a great many jobs, very few have anything to do with the production and distribution of goods. Jobs such as those- in industry and farming- have been largely automated away, and increasingly the service-based jobs are targets for automation as new generations of AI come out of R+D. So what kind of jobs are maintaining the need for so many hours devoted to the narrow definition of work? David Graeber answers:
“Rather than allowing a massive reduction of working hours to free the world’s population to pursue their own projects… we have seen the ballooning not so much of the ‘service’ sector as of the administrative sector…While corporations may engage in ruthless downsizing, the layoffs and speedups invariably fall on that class of people who are actually making, moving, fixing and maintaining things; through some strange alchemy that no one can quite explain, the number of salaried paper pushers ultimately seems to expand”.
Over the coming years we will likely see more administrative jobs created in order to provide oversight, regulation, guidance, and supervision of robots- or at least that’s how the propaganda will spin it. In truth such jobs will serve no purpose other than to keep us working in the narrow sense of the word. We have seen signs of this already. The London Underground’s strong union blocked the introduction of driverless trains in the name of ‘protecting jobs’. Protecting them from what? From progress toward a future in which nobody’s time has to be wasted driving a train each and every day? I think what these union leaders are really interested in is maintaining the hierarchy from which they derive their power and prestige. There’s not much call for unions once robots have liberated us from servitude to corporate or bureaucratic masters.
The 21st Century will see the rise of bullshit administrative jobs that have no practical justification for their existence, and are there merely to perpetuate the class-based hierarchy that has dominated our lives, in one form or another, throughout history. Such a claim may sound like a total contradiction of prior claims that business strives to eliminate work, but bear in mind that I was referring to work in the true sense of the word, not the narrow “jobs = work” definition we are now talking about. Automating truly productive, intrinsically-rewarding work out of existence while increasing the amount of bullshit administrative jobs is a win-win outcome for those with a vested interest in perpetuating the class-based hierarchy.
Do not think that those bullshit jobs will provide security. No, the rise of the bullshit job will coincide with the rise of ever-less secure forms of employment. The move toward employing more temporary workers, entitled to fewer benefits than their full-time counterparts, will speed up as technological unemployment does away with productive and service-based jobs. Those displaced from such jobs, fighting to get off the scrapheap of unemployment, will provide a handy implicit threat to be used against the ‘lucky’ paper-pushers in administration. Although owners and workers generally have opposing interests (the former preferring workers who do more work in the narrow sense of the word for less personal reward, the latter preferring more personal reward and less work in the narrow sense of the word), they are not true enemies but rather co-dependents (or at least they have been). No, the true enemy of the capitalist is other capitalists- rival businesses competing to corner the market and gain a monopoly. And the true enemy of the worker is the unemployed, who are in competition for their jobs. When the percentage of unemployed workers is low and the number of available jobs is high, the working classes are at an advantage. Conversely, when there are many unemployed and few jobs available, power tips in favour of the owners. As a large percentage of jobs are lost to automation, causing an appreciable rise in the number of job-seekers, businesses will likely use their strengthened negotiating position to bring about an ‘Uber’ economy of ‘permalancers’- workers putting in full-time hours but on temporary contracts with few if any benefits beyond minimal pay. As Steven Hill wrote in a Salon article:
“In a sense, employers and employees used to be married to each other, and there was a sense of commitment and a shared destiny. Now, employers just want a bunch of one-night-stands with their employees…the so-called ‘new’ economy looks an awful lot like the old, pre-New Deal economy- with ‘jobs’ amounting to a series of low-paid micro-gigs and piece work, offering little empowerment for average workers, families or communities”.
According to Graeber, one of the strengths of right-wing populism is its ability to convince so many people that this is the way things ought to be: that we should sacrifice the time of our lives so as to perpetuate the system. He wrote:
“If someone had designed a work regime perfectly suited to maintaining the power of finance capital, it’s hard to see how they could have done a better job. Real, productive workers are ruthlessly squeezed and exploited. The remainder are divided between a terrorised stratum of the- universally reviled- unemployed and a larger stratum who are basically paid to do nothing, in positions designed to make them identify with the perspectives and sensibilities of the ruling class (managers, administrators, etc)- and particularly its financial avatars- but, at the same time, foster a simmering resentment against anyone whose work has clear and undeniable social value”.
It sounds crazy when written down. Who could possibly be in favour of defending something like this? But this is the world that exists today. A world in which all productive activity that falls outside of the narrow definition of work is dismissed as being of no real value; those who engage in such work regarded as ‘doing nothing’. A world in which success and reward are thought of in purely materialistic terms. A world in which those who refuse to submit to the system are deserving of nothing, no matter how much material wealth our technologies could, in principle, produce. A world in which that material wealth is concentrated at the top, not because of superior productive ability and greater input, but because the monetary, financial, and political systems have been corrupted and actually stand opposed to the free-market ideals they claim to uphold.
As Bob Black pointed out, the ‘need’ for jobs cannot be understood as a purely economic concern:
“If we hypothesize that work is essentially about social control and only incidentally about production, the boss behaviour which Rifkin finds so perversely stubborn makes perfect sense on its own twisted terms. Part of the population is overworked. Another part is ejected from the workforce. What do they have in common? Two things — mutual hostility and abject dependence. The first perpetuates the second, and each is disempowering”.
Follow these developments to their logical conclusion, as Marshall Brain has done. The belief that those who do not submit to serving the system deserve nothing will result in warehouses for those affected by technological unemployment, kept out of sight and out of mind, entitled to nought but the bare minimum of resources needed to sustain life. Those who succeed in getting a bullshit administrative job will be under intense pressure from their corporate masters ‘above’ and the impoverished, jobless masses below to ‘agree’ to intense working pressures, minimal benefits and no job security whatsoever. They will be required to consider themselves ‘lucky’ to have a job at all. Wealth will concentrate even further as the means of production, totally commodified labour power, natural resources, security and military forces, and the political system become the private property of the owners of the artificial intelligences and the financial nobility that bankroll them. The world will have become a plutocracy, run by the superrich elite, for the superrich elite, and there will be little anyone can do to challenge their supremacy. And all of this will be partly our fault, the consequence of continuing to believe in that false ideology that jobs are work, the only kind of work that counts, the only kind worth aspiring to. We have been told a lie, a made-up justification for why things are the way they are, by those with a vested interest in keeping things that way. We must rediscover the true meaning of work, find our collective strength, and push technological progress toward a future that serves the many rather than concentrating power in the hands of a few. And the time to do that is running out.

Posted in technology and us, work jobs and all that | 3 Comments


(This essay is part thirteen of the series ‘HOW JOBS DESTROYED WORK’)
The 21st Century could well witness a conflict between two opposing drives: The drive to eliminate work and the need to perpetuate it. In order to appreciate why these ideals should become a central issue over the coming years or decades, we need to answer the following question: Why do we work?
There are many good reasons to engage in productive activity. Pleasure and satisfaction come from seeing a project go from conception to final product. Training oneself and going from novice to seasoned expert is a rewarding activity. Work- when done mostly for oneself and communities or projects one actually cares about- ensures a meaningful way of spending one’s time. 
But that reply fits the true definition of work. What about the commonly-used definition, which considers ‘work’ almost exclusively in terms of paid servitude done mostly for the benefit of others, and which disregards nonmonetary productive activity as ‘not working’; why do we have to do that particular kind of ‘work’? I believe there is a practical and an ideological answer to that question.
The practical reason has been cited for millennia. Twenty-three centuries ago, in ‘The Politics’, Aristotle considered the conditions in which a hierarchical power structure might not be necessary:
“There is only one condition in which we can imagine managers not needing subordinates, and masters not needing slaves. This would be if every machine could work by itself, at the word of command or by intelligent anticipation”.
Aristotle’s defence of slavery in his own stratified society has remained applicable throughout the centuries since, and to the modified versions of indentured servitude that succeeded it. Providing the goods and services we have come to expect entails taking on unpleasant and uninteresting labour. It has to be done, and so long as technology is not up to the job, it falls to people to fill such roles.
If we only had that practical reason for wage slavery, we could view it as an unfortunate, temporary, situation; one due to come to a happy end when machines finally develop the abilities Aristotle talked about. But it’s rarely talked about in such positive terms. Instead of enthusing about the liberation from wage slavery and the freedom at long last to engage in work as I would define it, most reports of upcoming technological unemployment talk darkly of ‘robots stealing our jobs’ and ‘the end of work’.
The reason why the great opportunities promised by robot liberation from crap jobs are hardly ever considered has to do with the ideological justification for our current situation. But let’s stay with the practical argument a while longer, as this was the main justification for most of civilization’s existence.
Since Aristotle died, we have seen tremendous growth and progress in technology, most especially during the 20th century. Despite such advances, technological unemployment has never been much of an issue. People have been displaced from their occupations, yes, but the dark vision of growing numbers of workers permanently excluded from jobs no matter how much they may need employment has never come about. If anything, technology has created jobs more than it has destroyed them.
The reason why is twofold. Firstly, machines have tended to be ultra-specialists, designed to do one or at most a few tasks, with no capacity whatsoever to expand their abilities beyond that which they specialise in. Think, for example, of a combine harvester. When it comes to the job for which it was built, this machine is capable in a way unmatched by any human. That’s why the image of armies of farm-hands harvesting the wheat now belongs to the dim and distant past, replaced with one or two of those machines doing much more work in much less time. But take the combine out of the work it was built to do, attempt to use it in some other kind of labour, and you will almost certainly find it is totally useless. It just cannot do anything else, and nor has any other machine much ability to apply itself to an indefinite range of tasks. So, as new jobs are created, people, with their adaptive abilities and capacity to learn, have succeeded in keeping ahead of the machine.
Secondly, for most of human history, the speed at which paradigm shifts in occupation took place was plenty slow enough for adjustments to occur. Today, when the subject of technological unemployment is raised, it’s often dismissed as nothing to worry about. Technology has always been eliminating jobs on one hand and creating them on the other, and we have always adjusted to the changing landscape. In the past, most of us worked the land. When technology radically reduced the amount of labour needed in farming we transitioned to factory work. But it was not really a case of farmers leaving their fields and retraining for factory jobs. It was more a case of their sons or grandsons being raised to seek their job prospects in towns and cities rather than the country. When major shifts in employment take at least a generation to show their effect, people have plenty of time to adjust. Educational systems can be built to train the populace in readiness for the slowly changing circumstances. Society can put in place measures to help us make it through the gradual transition. So long as new jobs are being created and there is time to adjust to changing circumstances, people only have one another to contend with in the competition for paid employment.
What happens, though, when machines approach, match, and then surpass our ability to adapt and learn? What happens when major changes occur not over generational time but over months or weeks? What if more jobs are being lost to smart technology than are being created? Humans have a marvellous- but not unlimited- capacity to adapt. Machines have so far succeeded in outperforming us in terms of physical strength. When they can likewise far outperform us in terms of learning ability, manual dexterity, and creativity, this will obviously mean major changes in our assumptions about work.
It’s also worth pointing out that, in the past, foreseeing what kind of jobs would replace the old was a great deal easier than it is in our current situation. The reduction in agricultural labour was achieved through increased mechanisation. That called for factories, coal mines, oil refineries and other apparatus of the industrial revolution, so it was fairly obvious where people could go. Then, when our increased ability to produce more stuff required more shops and more administration, we could again see that people might seek employment in offices and in the service-based industries. At each stage in these transitions, we swapped fairly routine work in one industry for fairly routine work in another.
But now that manual work, administrative work, and service-based work are all being taken over by automation, and these AIs are much more adaptable than the automatons of old, we have no real clue as to where all the jobs to replace these occupations are supposed to come from.
There are tremendous economic reasons to pursue such a future. You will recall from earlier how society is generally divided up into classes of ‘owners’ and ‘workers’. The latter own their own labour power and have the legal right to take it away, but have no right to any particular job. The owner classes own the means of production, get most of the rewards of production, get to choose who is employed in any particular job, but cannot physically force anyone to work (though they can, of course, take advantage of externalities that lower a person’s bargaining power to the point where refusal to submit to labour is hardly an option). 
Now, regardless of whether you think this way of organising society is just or exploitative, it works pretty well so long as both classes are dependent on one another. For most of human history this has been the case. Workers have needed owners to provide jobs so that they can earn wages; owners have needed workers to run the means of production so that they may receive profit. The urge to increase profit, driven in no small part by the tendency of debt to grow due to systemic issues arising from interest-bearing fiat currency, pushes business to commodify labour as much as it can. The ultimate endpoint in the commodification of labour is the robot. Such machines are not cost-free: they have to be bought, they require maintenance, and they consume power. But they promise such a rise in productivity, coupled with such a reduction in financial liability thanks to their needing no health insurance, unemployment insurance, paid vacations, union protection or wages, that we can all but guarantee continued R+D into smarter technologies and more flexible, adaptive forms of automation. Tellingly, most major technology companies and their executives have expressed the opinion that advances in robotics and AI over the coming years will strain our ability to provide sufficient numbers of jobs- although some still insist that, somehow, enough new work that only humans can do will be created.
The thing is, work, in its common, narrow definition, is simultaneously a cost to be reduced as much as possible and a necessity that must be perpetuated if we are to maintain the current hierarchical system, in which money means power and wealth means material acquisition. Remember: businesses don’t really exist to provide work for people; they exist to make profit for their owners. When, in the future, there is a choice between relatively costly and unreliable human labour and a cheap and proficient robot workforce, the working classes are going to find that their lack of any right to a particular job in a free market makes getting a job impossible.
But the market economy as it exists today is predicated on people earning wages and spending those earnings on consumer goods and related services. This cycle- consumers spending wages, thus generating profit, part of which is used to pay further wages- is a vital part of economic stability and growth. If people cannot earn wages because their labour is not economically viable in a world of intelligent machines, they cannot be consumers with disposable income to spend.
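The feedback loop described above can be sketched as a toy simulation. This is purely illustrative- every parameter (the share of revenue paid as wages, the fraction of wages spent on consumption) is an assumption, not data- but it shows the mechanism: if automation shrinks the share of revenue flowing back to workers as wages, household spending, and hence firm revenue, decays faster with each cycle.

```python
def simulate(periods, wage_share, spend_rate, initial_wages):
    """Toy wage-spending cycle: households spend a fraction of wages,
    that spending becomes firm revenue, and firms pay out a share of
    revenue as the next period's wage bill. Returns wages per period."""
    wages = initial_wages
    history = []
    for _ in range(periods):
        history.append(wages)
        revenue = wages * spend_rate   # household spending becomes firm revenue
        wages = revenue * wage_share   # firms pay a share of revenue as wages
    return history

# Hypothetical comparison: automation cuts the wage share from 0.9 to 0.5.
before = simulate(5, wage_share=0.9, spend_rate=0.9, initial_wages=100.0)
after = simulate(5, wage_share=0.5, spend_rate=0.9, initial_wages=100.0)
```

In this sketch the wage bill in each period is simply the previous period's wages multiplied by `wage_share * spend_rate`, so halving the wage share makes the whole cycle contract much more sharply- the "cheap robot workforce" saves each firm money while draining the pool of consumers the economy depends on.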
We will continue our investigation of technological unemployment in part fourteen.
