Hot Air: An Allegorical Tale


Once upon a time, people dreamed of being able to fly. This dream led to two kinds of flying machine being built. One type of ‘flying machine’ was the ornithopter. I put the term in scare quotes because, when such machines were scaled up to be large enough to transport passengers, they were utterly useless. Knowledgeable critics pointed out fundamental reasons why this kind of flying machine could never work, which really should have spelled the end of ornithopters. Nevertheless, there were always those who persisted in believing it could be made to work. Due to the comedic footage of these machines’ pathetic attempts at flying, such believers were labelled ‘Commies’.

The other kind of flying machine was the hot air balloon. Unlike the ornithopter, which had never worked when scaled up to accommodate human passengers, the hot air balloon did show some limited success in raising people up. Noticing how people were climbing into baskets suspended under balloons and enjoying the freedom of the skies, folks nicknamed balloonists ‘free basketeers’, and the adoption of the ‘free basket’ spread around the world.

The free basketeers saw nothing but a bright future ahead. They pointed out the successes that had been made in lifting people up (undeniable to anyone willing to look at the evidence) and also pointed out the improvements that had been made to balloons, enabling them to go higher, travel further, and lift more people up. Sure, there had been some mistakes along the way and even a disaster or several, but overall the trend was toward ‘better’. The free basketeers confidently asserted that the ability of free baskets to raise people up was limitless and that future generations would surely be flying through space in hot air balloons.

But a few critics believed there were serious flaws in such assumptions. Although they could certainly see that free baskets had shown limited success in lifting people up, they also highlighted various problems that seemed to be getting more serious. For one thing, there was evidence that the higher up you went, the colder it got. If such trends continued, they argued, then by the time you got into space it would be so lethally cold that you and your passengers would freeze to death. And then there was the problem of breathable air. The higher up you went, the thinner the air became. So people would asphyxiate long before they got to fly around the Andromeda galaxy.

All such objections were dismissed by the free basketeers. “Higher altitudes are correlated with colder conditions? We don’t believe in that global cooling nonsense!” (The scientific consensus that, yes, it really is unbearably cold in space did nothing to shake their belief in the boundless expanse of free basketeering.) “Oxygen will run out before we reach outer space? Oh, pooh, such talk of reaching ‘peak oxygen’ just fails to see that ballooning will find a way!”

Meanwhile the critics continued to insist that there were fundamental limits to what could be achieved via ballooning, and continued to highlight problems that free basketeers dismissed with counter-arguments that seemed wholly convincing to them but deeply flawed to the critics. And the critics did not just criticise. Some, like Buckminster Fuller, Jacque Fresco and Peter Joseph, reckoned that hot air ballooning had provided the foundations for an entirely different way of doing things. For example, there was the burner used to heat up the air inside the balloon. If you aimed it at the ground you got a little push, in accordance with the Newtonian principle ‘for every action there is an equal and opposite reaction’. This gave some the idea that ‘rockets’ with enough thrust to achieve escape velocity could one day be built. They drew up outlines (not complete blueprints, mind you; there was a great deal left to be figured out) showing how such rockets could be built.

Another vague design took as its basis the material that balloons were made out of, which had been getting lighter and stronger. Some fringe thinkers considered the possibility of sails that could use the solar wind to push a craft through space. As with the rockets, there were a lot of details missing from such plans, but those who saw merit in the idea insisted it could work in principle, whereas balloons would never actually transport us to other planets and stars, regardless of what the free basketeers told you.

How did the free basketeers respond to these alternatives to balloons? They were dismissed as totally impractical. Very rarely was any valid criticism put forward. Rather, anyone who spoke negatively about free baskets or outlined an alternative was dismissed as a ‘commie’, told (as one might explain to a simpleton) that ‘ornithopters cannot work’. The basketeers had come to believe that ornithopters or hot air balloons were the only kinds of flying machines there could be and they regarded all alternatives through this blinkered perspective. If you did not believe in the free basket, you had to be a commie!

So how does the story end? I do not know. It could be that free basketeers succeed in persuading people that anyone attempting a different approach is a commie and that there are no limits to how high and how far you can go in a hot air balloon, resulting in catastrophe as this particular approach runs up against problems it is fundamentally incapable of dealing with.

Or, it could be that those proposing alternatives to both ornithopters and balloons convince enough people of the potential to be found in their way of doing things, if only we could redirect our efforts to such proposals. In that case, perhaps the free basket would be remembered as a somewhat successful means of providing flight whose adoption was understandable given the knowledge available at the time, and whose eventual replacement was wholly necessary.

So what did we learn from this allegorical tale? We had two attempts at making a flying machine, and you probably guessed that ‘commie’ and ‘free basket’ were puns on ‘communism’ and ‘free markets’. The ‘free basket’ approach to flying demonstrated limited success in ‘raising people up’ (in a literal sense, not the metaphorical sense in which the free market is claimed to be a rising tide that lifts all boats). This led free basketeers to suppose there is no limit to how high and how far you can go in a balloon, a belief that can only persist by ignoring the mounting problems that such an approach is fundamentally incapable of solving.

But capitalism is at root a competitive system based on exploiting scarcity for the purpose of gaining material advantage, and it therefore needs scarcity to persist even as it makes some progress in reducing want in limited cases. As a result, it ends up creating problems: ecosystems struggling to cope with the garbage our throwaway consumerist cultures discard, resource wars, social decay, a banking and financial system that (roughly speaking) creates fictional wealth, boosting the paper profits of corporations while loading ever more debt onto the lower classes, and other increasingly urgent issues that tend to be brushed aside as ‘negative externalities’.

We also imagined how, with some ingenious rethinking and a great deal of applied effort, some of the technologies built to make ballooning work could be used as the basis for a completely different approach. That is what the likes of the Zeitgeist movement are trying to do: come up with credible outlines for an alternative to both capitalism and communism, one that identifies the fundamental flaws in these older economic models and applies such strategies as systems thinking, strategic abundance, technical efficiency and the Creative Commons to sketch out a different method of running an economy. But just as rockets were dismissed as ornithopters (even though they obviously are not), the ‘resource-based economy’ outlined by Zeitgeist gets dismissed as communism, even though it obviously is not.

So, in summary, my story was a cautionary tale about how the limited success free baskets/free markets had in raising people up somehow led to an irrational belief in this method’s boundless potential, an optimistic outlook that could only be sustained by adopting a ‘head in the sand’ approach to mounting problems and directing straw-man attacks at promising alternatives. All in all, a pretty apt portrayal of how debates over the flaws of communism/capitalism and the potential of a resource-based economy tend to go.


“Utopia For Realists” by Rutger Bregman

“The New Human Rights Revolution” by Peter Joseph

“The Virtue Of Selfishness” by Ayn Rand


What Flexibility In Gig Work?

Labour’s shadow chancellor, John McDonnell, once claimed that workers’ rights are now in the kind of precarious situation that has not been seen since the 1930s. Specifically, the shadow chancellor highlighted the gig economy and zero-hours contracts. He likened them to the kind of employment his father had to endure. His father was a docker, and every day he had to stand around with other men in the hope he would be selected for work that day. It was a precarious existence with no guarantee of wages from one day to the next. The gig economy, claimed the shadow chancellor, offers a similarly raw deal.

Some commentators reckon that this is not the case, claiming that many workers enjoy being independent contractors and the flexibility that comes with it. That flexibility is apparently so desirable that they would gladly give up other rights that labour unions fought hard to establish, such as sick pay, holiday entitlement and a minimum wage.

So, is it really true that people crave the flexibility offered by the gig economy, so much so that they would forgo a great many workers’ rights if offered the choice?

I think that depends on what kind of flexibility we are talking about.

I can imagine a form of flexibility that would make gig work pretty darn attractive to the employee. The kind of flexibility I am imagining is the kind where the availability of work is entirely at the worker’s convenience. In other words, whenever you want to work you may do so, but equally, the moment it’s more convenient to quit, that is also perfectly OK.

One can imagine people adopting all kinds of working patterns, tailored to all kinds of lifestyles. People who work through winter and relax through summer; people who have one day on, one day off; people who have no routine at all and are in and out of work as the mood takes them. If the flexibility the gig economy offers really is the kind that works entirely at the independent contractor’s convenience, then those commentators must have a point when they say that many people prefer this kind of work.

But there is another kind of flexibility, the kind where you are at a business’s beck and call. When the business has work available, you are there to do it. As soon as your services are no longer required, you are sent home, ready to spring into action the moment it is convenient for the business to hire you once more. Notice that this kind of flexibility need not necessarily work entirely in the independent contractor’s favour. There may be times when you’re called to work but it’s not so convenient. It’s a sunny day and your friends are off to the beach. You feel under the weather. Your partner is in labour and you want to be there to see your first-born enter the world. And those times when it’s convenient for the business to send you home may not be so convenient for you. “No work available today, huh? Darn, I could really have used the money”.

Now, who would benefit from the kind of flexibility that puts workers entirely at a company’s beck and call? Obviously, the employers would be the ones to benefit. You can’t tell me that in an ideal world (as seen from their perspective) companies like Uber wouldn’t rather there be flexibility of the ‘beck and call’ variety.

Strictly speaking, workers have always had the choice to work any hours they like. It’s not as if factories and offices lock the doors, trapping their workforce inside until the boss declares sufficient work has been extracted from them (well, OK, some sweatshops in developing countries do this, but these are exceptional cases). So, really, you could walk away from your job at any time.

I should point out that I am talking about a particular kind of freedom here. What I really mean is that you are free to do something like walk away from your job, provided you are willing to accept whatever consequences may result. Provided you are prepared to face whatever may come, it’s hard to think of an example where one is not free to choose. You are free to commit crime, free not to pay taxes, free to jump off a cliff.

Of course, most people would say that the consequences could be so bad it acts as a sufficient restraint on how we behave. So, most people pay their taxes and obey the law not because they must in the sense we must all die one day, but because they choose to obey and therefore avoid the consequence, rather than choose not to and maybe face those consequences.

Similarly, even if you are in a regular, full-time, Monday to Friday, nine to five job, you don’t have to stick to that schedule. You could decide, “you know what, it’s Wednesday, it’s 10 AM and it’s a sunny day outside. So I am off”, and just go, cheerily telling your superior, “I’ll be back tomorrow. Or not. Depends on my mood”.

Would there be consequences? Probably yes. Most likely, you would figure that such behaviour would result in your being fired, and the threat of that would be enough to ensure you stick to your contract and be at work when it says you should.

For gig work to offer the good kind of flexibility, where you work only when it is convenient for you, the consequences of choosing not to work can’t be so dire that you end up regulating your behaviour the way people in regular work tend to.

Now, where regular work is concerned, the long struggle of unionisation and workers’ rights has established procedures that must be undertaken when dealing with workers who break the rules. In lots of cases you can’t just be sacked, but must first receive verbal warnings, written warnings, and only if you persist in your behaviour does the company finally have it in their power to terminate your employment.

But what about the gig economy? Businesses offering work of this kind do not need to go through any kind of disciplinary procedure. They don’t even need to fire anyone. Indeed, seeing as how workers in the gig economy are supposedly self-employed contractors, they can’t fire you because you do not officially work for them. But what they can do is stop you from using the app.

I can imagine how rumours going around could constrain workers’ freedom to choose whether or not to work. The chatter might go something like this. “Man, be careful. That app you use to accept or refuse work? It logs every time you refuse. I heard that if you turn down too many offers you go on a ‘blacklist’ and can’t get any more work for months. I would accept at least 80 percent of the jobs they offer if I were you”.

By the way, this isn’t just a made-up scenario. In actual fact, businesses in the gig economy often do require their independent contractors to accept a certain percentage of tasks or else face being temporarily or permanently banned from using their app. And it usually is a high percentage that you must accept, like 80%. Put another way, you are free to turn down the opportunity to work… twenty percent of the time. Other than that, if the app says there is work to be done, you had better agree to do it, or else. To me, this is kind of like saying something is entirely free… you just have to pay £80 in order to access it.
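To make the arithmetic concrete, here is a minimal Python sketch of how such an acceptance-rate rule might work. The function name, the 80% threshold and the blacklist behaviour are all illustrative assumptions for this post, not the documented policy or API of any real platform:

```python
# Hypothetical sketch of an acceptance-rate rule like the one described
# above. The 80% threshold and the 'blacklist' outcome are assumptions
# made for illustration, not any real gig platform's actual policy.

def can_keep_tasking(jobs_offered: int, jobs_accepted: int,
                     min_acceptance_rate: float = 0.80) -> bool:
    """Return True if the worker stays off the hypothetical blacklist."""
    if jobs_offered == 0:
        return True  # no offers yet, so nothing to penalise
    return jobs_accepted / jobs_offered >= min_acceptance_rate

# A worker offered 10 jobs who declines 3 falls below the 80% bar,
# while one who declines only 2 sits exactly on it:
print(can_keep_tasking(10, 7))  # False (70% acceptance)
print(can_keep_tasking(10, 8))  # True (exactly 80%)
```

The point the sketch makes is that the rule is one-sided: the worker’s “freedom” to decline is bounded by a ratio the platform alone sets.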

Furthermore, we should not assume that things will stay as they are. Perhaps things will change in a positive way, with future kinds of gig work offering more of the ‘work whenever you feel like it’ kind of flexibility. But, then again, it could go the other way and become more about working at the businesses’ convenience.

Which way it goes depends on who has the greater power to influence the market and the political system. Given that the owners of successful disruptive software can become multi-billionaires, amassing wealth way beyond the aspirations of anyone tasking for the gig economy, it seems to me that the safe bet is to say it will be the gig business owners who wield the greater influence.

Since we mentioned disruptive software, we should also take into consideration the effects of artificial intelligence. According to some experts, the algorithms coming in the foreseeable future will wipe out most middle-class jobs. The result will be a market that, on one hand, consists of a few owners of algorithms that come out on top in a ‘winner takes all’ environment, perhaps becoming multi-trillion-dollar businesses.

On the other hand there will be everybody else, a great mass of humanity competing for whatever jobs are left. When you consider Moravec’s Paradox, which is named after the roboticist Hans Moravec and states ‘whatever we find easiest to accomplish, AI finds hardest, and vice versa’, you can see that AIs are more likely to take away the sort of jobs you need a college or university education for, rather than the sort of jobs anyone can walk in off the street and do. In other words, it will be AIs that do things like writing articles for journals, flying airplanes, and working in legal and medical practices (which they can already do to a limited extent), leaving work like scrubbing toilets and dusting shelves to humans (which robots are still pathetic at).

This larger group has been labelled the ‘precariat’, a combination of ‘precarious’ and ‘proletariat’. The name underscores the precarious situation this mass of humanity (which, by definition, will include the majority of us) will find itself in: with so many competing for whatever jobs are left, there would be an enormous amount of pressure that businesses who still require human labour could use to their advantage.

We imagined earlier how gig-based businesses could use the data harvested by their apps to filter out those who persist in working at their own convenience, encouraging (coercing?) more to lean toward the ‘work at the company’s convenience’ approach to tasking. Once you have a reserve army of independent contractors prepared to leap into action whenever the boss snaps his or her fingers, where do you go from there?

Probably, you then have your independent contractors bid for tasks, with the ‘winner’ being whoever is prepared to do the best job for the least amount of money. Thus, potentially, the combined disruptive effects of artificial intelligence and precariat work would result in a race to the bottom where the vast majority of us find it virtually impossible to get anything beyond minimal reward for labour, squeezed by pressure from demand for remaining jobs from our fellow precariat on one hand, and on the other by the rapacious profit-seeking demands of that tiny minority who have now become so wealthy and influential they form a plutocracy as powerful as any emperor or pharaoh.


“Austerity” By Kerry-Ann Mendoza

“The Corruption Of Capitalism” by Guy Standing




In Praise Of Laziness, Part Two


Welcome to the second part of my essay on the virtues of being lazy.

I think that, when you look at the contrast between the work we evolved to do and how work is often structured in jobs, you get the real explanation for the sort of laziness that gets held in contempt.

Of course, nobody really idolises hard work to the point where even adopting practical labour-saving solutions is frowned upon. We can all see the benefit to be had from avoiding work that does not need doing. What we don’t like are people who do evade necessary work, the sort who could get a job if they really tried but instead remain unemployed, sponging off those who do contribute to society.

At the same time, we understand why they are reluctant to get off their backsides. After all, work sucks. Songs like ‘Manic Monday’ and ‘Tell Me Why (I Don’t Like Mondays)’ are popular precisely because they tap into that widely-held feeling of “oh no, not another working week”. We love weekends, bank holidays and vacations mostly because of what we are not doing, namely working.

Our hatred of those permanently workshy characters is fuelled partly by jealousy. Lucky sods. How dare they get to avoid work when I must submit to it? We therefore organise society to punish those who refuse to work, partly as a means of correcting their behaviour but also as a deterrent intended to stop us joining them (which, secretly, most of us would prefer to do). After all, since we hate work but work is necessary, a carrot-and-stick approach is required to keep us turning up at our employment.

But this idea that we hate work… really it is a pernicious myth. We don’t ‘hate work’ at all. You can see this is so by looking at how people spend their time off. Very rarely do we spend it in the stereotypical way the unemployed are portrayed as living, which is ‘sitting around doing nothing’. No, we are up and about making plans for socialising, carrying out repairs or improvements to our homes, tending our gardens. We have hobbies we throw ourselves into. In fact, we often pack so much activity into our breaks that we joke half-seriously that we could do with another break to recover!

No, we don’t hate work. No animal that evolved to have such a large brain, such an imaginative mind, such dexterous hands, such complex language and cooperative abilities, got that way through hating work. That would be as weird as dolphins being evolved to have streamlined bodies and flippers and yet hating water and staying away from it.

We actually enjoy working when it consists of short bursts of activity whose competent execution leads directly to reward for ourselves and our loved ones. That scene from ‘Witness’ in which an Amish community works together to construct a barn exemplifies the sort of work we like doing: labouring as part of a family to produce something that will directly improve that family’s life, then resting in sociable contemplation of what we have achieved.

You don’t necessarily get that kind of satisfaction from a job. Many people, I am sure, would nod in agreement with Karl Marx and his description of work in the capitalist context: “The worker therefore only feels himself outside his work, and in his work feels outside himself. He is at home when he is not working, and when he is working he is not at home”. We have had this separation of the spheres of working life and home life for so many generations that we regard it as ‘natural’, but it would never have occurred to our ancestors that any distinction between working life and family life existed, as indeed it did not in their environments.

Not only do we often feel we have to remove ourselves from those we care about in order to go to work, but the primary goal of a job is not to produce anything of direct value to you or anyone you could reasonably call ‘family’. No, it’s ultimately all for the purpose of enriching strangers; of meeting very abstract goals like seeing some company’s position on the FTSE or Dow Jones go up by some points. That’s not to say you get no compensation from a job. You get paid, at least. But that means you are at least one step removed from your true reward. After all, it’s not really money you want but rather what its purchasing power can provide.

And, increasingly, we use that money in order to access stuff many steps removed from anything we truly need. Tyler Durden of ‘Fight Club’ fame spoke of our condition when he said “advertising has us working jobs we hate so we can buy shit we don’t need”. That’s modern working life, isn’t it? Submitting to work you don’t enjoy because it is structured to be unlike any kind of working pattern we evolved to do; for personal goals that are a gross distortion of true values, and all ultimately for a purpose like ‘increasing the profits of corporation X’ which, frankly, most of us couldn’t care less about.

How strange it is that we fear robots taking away our jobs. Our lazy natures, with our evolved love of work (but certainly not work in the form of jobs as they are often structured), should rejoice at the prospect of technology that eliminates jobs once and for all.

I say, liberate our lazy natures! Let us play, let us socialise, let us relax whenever we want and work as and when the mood takes us. We evolved to take things easy but also to broaden our horizons and strive for more. Properly organised, a world of advanced artificial intelligence could bring about a world of true work properly aligned with how we evolved to be.


“Guns, Germs and Steel” by Jared Diamond

“Bullshit Jobs: A Theory” by David Graeber

“Why We Work” by Barry Schwartz


In Praise Of Laziness


What is the greatest human trait? Judging by the way it gets praised so often, one might assume that to be a ‘hard worker’ would be an obvious candidate. By general agreement, it is those who ‘work hard’ who should be rewarded the most. And whenever a politician speaks about wanting to represent the interests of his or her constituency, you can be sure that it will be ‘hard working folk’ who he or she intends to help.

In contrast, to be lazy is not worthy of praise. Indeed, it is considered to be one of the seven deadly sins. Lazy characters in stories tend to be there so as to serve as some kind of morality tale encouraging us to abandon such ways. “Don’t be like this character, look where you will end up”.

Yes, hard work is good and therefore something to be encouraged, while laziness is just wrong and to be disapproved of. At least, that seems to be the attitude society wants to encourage.

But is it correct? Is laziness really all bad? Are we really right in holding up hard work as the ultimate virtue?

I don’t think we are. I think laziness is part of the reason why progress is made; why the future can turn out better than the past.

A major reason why the future can seem brighter is technological development. Thanks to new technological capabilities, we can reduce or eliminate problems that were hitherto intractable, and we can aspire to more than was previously obtainable. Now, obviously, work has to be done or else technological progress would grind to a halt. I don’t intend to try and show we should be against hard work. But it does seem to me that ‘lazy’ intentions are, to some extent, the driving force behind a lot of what we invent. After all, a lot of what we invent are ‘labour-saving’ devices. We often invent something because there is a task we can’t really be bothered with and would rather get away with doing less of it, or not doing it at all.

Imagine that our ancient ancestors, with their primitive stone tools, only wanted to ‘work hard’. If that were so, then I would argue that they would have shown a great deal less interest in improving their tools. “This tree I am attempting to chop down with my flint knife, it’s going to take an enormous amount of effort. Great! I love hard work, me. Who would want an axe or, heaven forfend, a chainsaw? That would get the work done in half the time, and I am not at all interested in anything but hard work”.

In reality, we couldn’t be bothered to work quite so hard at whatever we were doing, and so we looked for ways to reduce the amount of effort needed to reach our goals. Did our cavemen ancestors progress from stone tools to iron ones out of a desire to work hard in solving the various problems such an evolution requires, or because they were kind of lazy and therefore wanted better tools and less work? In our modern age do people start businesses because they crave the hard work one must undertake to succeed in such endeavours, or because they look forward to one day earning so much profit they can afford to hire staff to do all the work for them (and have you ever noticed how the most vocal proponents of ‘hard work’ tend to be those with enough capital to pay others to do all the work?).

The answer is that both play a part. Human nature is neither one hundred percent committed to hard work nor totally in favour of being lazy. If we were content just to be lazy, our world would look as radically different today as the hypothetical ‘world of hard workers’ just imagined: we would be satisfied with merely meeting our most basic survival needs. So long as we had a quenched thirst, a full stomach and protection from harsh environments, we would have all we could ever want. There would be no desire to make music or play sports or make scientific discoveries. We went on to do all those things because we are lazy beings with the capacity to work hard and strive for more.

We are lazy beings because it makes evolutionary sense to be that way. Energy should not be wasted unnecessarily, and natural selection harshly punishes those that waste it. The successful hunter is the one that evolved to catch prey with minimal effort, not the one that prefers the long, arduous chase even when a shorter, easier catch is an option. And prey likewise evolve herd behaviour, camouflage and defences like armour and poisons in order to make it easier to defend themselves against predation. They too get punished if they waste unnecessary energy in thwarting a predator’s intention to make a meal of them. In nature, the winners are the ones who work hard only when they have to.

Given that our ancestors were hunter-gatherers, the sensible strategy would have been to permit relaxation during slack periods so that there was plenty of energy when the time came to put it to good use. You can imagine how there would have been seasons in which there was plenty of fruit to gather, or moments when everyone should mobilise to bring home game. But afterwards, when the fruit was picked and the hog roasting on the spit, the time left was better spent playing, socialising, or resting.

This is, in fact, how we evolved to work. We are designed for occasional bursts of intense energy, which is then followed by relaxation as we slowly build up for the next short period of high activity.

This work pattern could hardly have changed much when human societies transitioned to farming and developed into chiefdoms and larger hierarchical societies. After all, farming is also very seasonal work, so here too it would have made much more sense to adopt work attitudes that encouraged intense activity when necessary (such as when the harvest was ready to be gathered) but at other times to just leave the peasants alone to potter about, minding and maintaining things or relaxing.

Now, it’s true that the evolution of human societies into hierarchical structures not only entailed the emergence of a ruling ‘upper class’ but also a lower caste of slaves and serfs. But, although we commonly conceive of such lower caste people as being worked to death by brutal task-masters, in actual fact early upper classes were nowhere near as obsessed with time-management as is the modern boss and didn’t care what people were up to so long as the necessary work was accomplished. As Graeber explained, “the typical medieval serf, male or female, probably worked from dawn to dusk for twenty to thirty days out of any year, but just a few hours a day otherwise, and on feast days, not at all. And feast days were not infrequent”.

Part two of this essay still to come.


“Guns, Germs and Steel” by Jared Diamond

“Bullshit Jobs: A Theory” by David Graeber

“Why We Work” by Barry Schwartz




If you are interested in cryptocurrencies such as bitcoin, chances are that you have heard some skeptic make a comparison with ‘tulips’. Why would blockchain-based assets be compared with that particular flower? Well, it is all to do with one of the craziest bubbles ever inflated, which is what I want to talk about in this post. In order to lay down the groundwork, though, we have to go way back in time to the 15th century…

In The Beginning…

The story of how Amsterdam’s most famous bloom became the basis of one of the most infamous speculative bubbles does not actually begin in the Netherlands, but rather in Spain and Portugal. The end of the 15th century saw improvements to the design of ships and inventions that were to prove important for navigation, such as the clock and the compass. Together, these advances made it possible to cross oceans, discover new lands, and open up trade routes.

The Christian kingdoms of Spain and Portugal did just that, famously sending Christopher Columbus west in 1492 on a journey that would ‘discover’ the Americas. Five years later Vasco da Gama journeyed southward, rounding the Cape of Good Hope to open the sea route to India.

With these discoveries, both Spain and Portugal suddenly found themselves with trading options along African and Asian coasts, not to mention access to vast and rich territories in the New World. This meant that, from the 16th century onwards, the scene was set for a transformation from the old feudal economies to mercantile economies. The international trade routes made it possible to create far greater wealth than the grain production of Europe’s small feudal fiefs could offer. Mercantile economies were based on the idea that a country’s total wealth represented the overall profit it made from trade. As each strip of land obviously holds only a limited amount of tradable resources, the volume of a country’s trade depended on the amount of land over which it held trade rights.

Mercantilism therefore led to expansionism, as any European power that could afford it sent off ships in search of hitherto undiscovered territory (not discovered by any other European, that is). It was customary for the monarch to hold claim to the new territories overseas, the management of which required a large administrative body under direct royal control. It had always been profitable to serve the King in times of war, but territorial expansion meant the nobility could now make more wealth serving the King abroad than by managing their private estates.

This led to a powerful, centralised monarchy and the creation of the first great European empire. But there was a downside to this way of organising things: a powerful, centralised monarchy held back the creation of a strong and independent mercantile class, which in turn held back private enterprise. The result was that capitalism grew not out of the empires of Spain and Portugal, but out of one of the more disadvantaged newcomers in the race for international trade.

The Dutch East India Company

That nation was the Netherlands. The end of its 80-year struggle for independence from Spain left it with no significant aristocracy and not much in the way of marked class differences. Instead, the Netherlands developed a significant middle class that thrived on trade. Up until the Industrial Revolution, Amsterdam could lay claim to being the greatest city in Europe, as well as to a few capitalist ‘firsts’. Many historians consider the Netherlands the world’s first truly capitalist nation; the Dutch East India Company, formed in 1602, was one of the first multinational companies; and, by being the first company ever to offer its stock on the market, the Dutch East India Company pretty much invented the stock market, meaning the Dutch could claim that among their list of ‘firsts’ too.

The Netherlands was so successful at trade that it managed to drive the Portuguese off most of their trading posts in the Indian Ocean. By the 1630s, the timing was almost right for a period of mass speculation. Thanks to their merchants’ trade, the Dutch earned the highest salaries in Europe. Shares of the Dutch East India Company were richly rewarding shareholders for their investments, and much of that money was being poured into property, creating a robust housing market. Ongoing appreciation of asset values created excess wealth that went on to fund further asset purchases.

This wealth was setting the scene for an asset bubble, but at the time there was something holding back the move toward wild speculation. That something was the fact that not everyone could take part. Dutch East India shares were both expensive and illiquid (in other words, not easily resold), which made them unavailable to all but the wealthiest. The same could be said for the most prized properties. However, a quirk of nature was soon to arise which would seemingly hold out the promise of vast wealth that anybody could speculate on…

Enter the Tulips

Tulips had been introduced to Europe around the mid-1500s, and had always held some value. In fact, they still do, as can be appreciated by remembering how famous Amsterdam is for that particular bloom. But something happened around 1634 that caused the value of this plant to skyrocket, and that something was a virus. The virus, transmitted by aphids, led to two consequences for the tulip, both of which fuelled the crazy speculative bubble that followed. Firstly, the virus transformed an ordinary solid-coloured tulip into a startling-looking variegated variety with beautiful flamelike petals. This was a much-prized variety, and as nobody really knew what caused such variegation, there was much speculation as folks attempted to predict which bulbs would develop into the prized tulips.

Secondly, the virus ultimately killed the tulip. This made it something of a hot potato, in that you really wanted to sell the tulip on for a higher price rather than be the sucker who was left with nothing but a dead bulb.

Unlike shares in the Dutch East India Company or prized property, tulips were much more affordable, which meant more people could join in the speculation in this particular asset. Not surprisingly, given the stories of immense riches to be gained from selling on a prized bulb, many, many people were drawn in. Most of these people were not experienced traders. In fact, the professionals pretty much shunned the tulip trade and continued investing in good old reliables such as East India stock. They regarded tulips as more of an expression of wealth than a means to that end.

But for more inexperienced traders, having and reselling a prized tulip was considered the means to a great fortune. Because the tulip spends most of its life as a bulb rather than a blossom, it naturally lent itself to a futures market (something the Dutch called the windhandel, or ‘wind trade’). By ‘futures market’, I mean an arrangement where buyer and seller agree now on the future price of a good; when the agreed date arrives, the buyer is obliged to pay the seller whatever amount was agreed upon.

However, waiting for that agreed-upon date to arrive was too slow for the growing crowds of speculators. So trade shifted away from the tulips themselves and toward the futures contracts. And trade them they did, sometimes as much as ten times in one day. You can see, then, how the value of tulips was entering ever higher realms of abstraction. The trade in futures contracts meant that people didn’t have to worry about an actual tulip being delivered. No, their only concern was being able to sell the contract for a higher price than they had bought it for. The result was that, at the very peak of the tulipmania, during the winter of late 1636 and early 1637 when the bulbs were still dormant in the ground, not one blossoming tulip actually changed hands.

Funny money

But there is even more to this tale of wild speculation than that. You see, not only were no bulbs being traded, no real money was, either. At that time, ‘real money’ was the guilder, the currency of the Dutch Republic. This was not the paper currency we are used to; it was money based on a specific amount of precious metal, 0.027 ounces of gold per guilder. Much of the trade in futures contracts was not financed with real money, but rather with ‘notes of personal credit’. In other words, with IOUs. So not only were no bulbs being traded during the heights of tulipmania, no money was changing hands either. Instead, transactions rested on nothing but the promise to deliver the money at some future date.

According to Edward Chancellor, author of ‘Devil Take the Hindmost: A History Of Financial Speculation’, “by the later stages of the mania, the fusion of the windhandel with paper credit created the perfect symmetry of insubstantiality: most transactions were for tulip bulbs that could never be delivered because they didn’t exist and were paid for with credit notes that could never be honoured because the money wasn’t there”.

To give an idea of just how high the price of tulip bulbs rose (or, perhaps I should say, the price of the promise of such a bulb), consider that the record amount paid for a tulip at that time was a whopping 5,200 guilders. In gold terms, that’s about nine pounds of the stuff. You could have bought eighteen modest-sized houses for the price of that one tulip.

It all ends

Like all bubbles, this one could not inflate forever. The end inevitably came, because the bulbs blossomed into flowers or turned out to be dead duds, and because the contractual dates when IOUs had to be paid with the promised money were coming around. The wealthiest were not hit too hard since, if you remember, they had continued investing in things like townhouses and East India stock. No, it was those less experienced in investing, the people caught up in crowd behaviour who had bought futures contracts for tulip bulbs for no reason other than that everyone else was doing so, who got hurt the most. Inevitably, a lot of those people found out that their anticipated fortunes amounted to nothing but worthless promises. Fights broke out over the amounts due per contract, and the Dutch government stepped in, declaring that the contracts could be settled for 3.5 percent of their initial value. On one hand, that was obviously preferable to paying the full contract. But 3.5 percent of the most expensive tulip still equated to a year’s salary for some unfortunate citizens.
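As a quick sanity check on those figures, the gold conversion and the settlement amount can be worked out directly. This is just a sketch using the numbers quoted in this post (plus the assumption of 16 ounces to the pound):

```python
# Sanity-checking the tulipmania figures quoted in this post.
# Assumption: 16 ounces per pound; the guilder's gold content (0.027 oz)
# and the 5,200-guilder record price are as stated above.

record_price_guilders = 5_200                 # record price paid for one bulb
gold_oz_per_guilder = 0.027                   # gold backing of one guilder

gold_ounces = record_price_guilders * gold_oz_per_guilder
gold_pounds = gold_ounces / 16                # ~8.8 lb, i.e. roughly "nine pounds"

# The government let contracts be settled for 3.5 percent of face value:
settlement_guilders = 0.035 * record_price_guilders   # ~182 guilders

print(f"gold: {gold_pounds:.1f} lb, settlement: {settlement_guilders:.0f} guilders")
```

So the ‘nine pounds of gold’ claim checks out (8.8 pounds, rounding up), and even the reduced settlement on the record contract came to around 182 guilders.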


So that’s the story of tulipmania. What lessons can be applied to blockchain-based assets? Well, firstly, I don’t think it is all that fair to compare blockchain-based assets to ‘tulips’. Tulips do have some value. They are pretty things, and people pay for pretty things. But you can hardly call a tulip bulb a general-purpose technology. A general-purpose technology is one that can be used in a great many ways. Examples would be electricity or computing. Just think of all the inventions and industries and jobs that have been built on the basis of those two technologies. The blockchain is also a general-purpose technology, and that means speculating on its future growth need not be sheer pie-in-the-sky. People who expect to make a fortune from crypto-assets might just be making educated guesses regarding the future potential of Satoshi Nakamoto’s invention.

Having said that, all speculation is prone to crowd behaviour. Just because the underlying blockchain technology is sound doesn’t mean that assets built on top of it can’t be scams designed to lure in suckers, or that genuine products can’t fuel asset bubbles as people buy or sell for no better reason than that everybody else is doing likewise. ‘It’s just like tulips!’ may be a retort used by skeptics who don’t really know all that much about cryptoassets and blockchains, but the story of the tulip bubble does nevertheless hold some valuable lessons. After all, those who do not learn from history are doomed to repeat it.


“Capitalism: A Graphic Guide” by Dan Cryan, Sharron Shatil and Piero

“Cryptoassets: The Innovative Investor’s Guide to Bitcoin and Beyond” by Chris Burniske and Jack Tatar


Philanthropy: Praiseworthy Or Propaganda?



How will Bill Gates be remembered?

If this question had been asked in the 90s, I suspect most people would have said the answer was obvious: he would be remembered as the co-founder of Microsoft, a feat of entrepreneurialism that made him one of the richest men in the world.

But there is another noteworthy thing that can be attributed to Bill Gates. For, as well as being extraordinarily rich, he can also be credited with remarkable acts of charity. In 2010, for example, he put $10 billion toward vaccines, which was the largest pledge ever made by a charitable foundation to a single cause. Also in 2010, Gates and Warren Buffett announced the ‘Giving Pledge’, so called because the wealthy people who sign it pledge to give away half their fortune to philanthropic and charitable causes.

So, there are two ways in which Bill Gates might be remembered: as a businessman who accumulated great wealth, or as a philanthropist who made big donations to worthy causes.

That last legacy of being a great philanthropist sounds like an achievement that could only be viewed in a positive light. But, as with most things, there are two ways of looking at philanthropy. On the one hand, it can be seen as a justification for the social structures that enable some people to acquire disproportionate wealth, for it turns out that, however ruthless such people might have been in gaining their fortune, they ultimately had a significant altruistic side to their character, generously giving to worthy causes.

But, on the other hand, a more cynical view would be that philanthropy is really nothing but a band-aid covering up the exploitative conditions that cause so many to need charity. If society were structured so as not to allow the extremes of inequality that vast wealth necessitates, those people would not have required charity in the first place. In short, these so-called philanthropists are just spending a portion of their vast wealth on propaganda and token gestures while doing little to alter the structures they took advantage of.

Those opposing viewpoints have existed since the beginnings of modern philanthropy. By ‘modern philanthropy’, I mean large-scale philanthropy based in the private rather than the public sector.

There is general agreement among contemporary historians that modern philanthropy was invented by the great industrialists whose names are now synonymous with extraordinary wealth: John D. Rockefeller, Andrew Carnegie, Cornelius Vanderbilt and people like that. These were legendarily ruthless businessmen whose rapaciousness earned them the title ‘Robber Baron’.

Rockefeller, for example, acquired his extraordinary wealth partly through industrial espionage. He sent spies into his competitors’ businesses to ascertain their financial situation. His own company (Standard Oil) would then lower the price of its oil, making his rivals hopelessly uncompetitive. Meanwhile, in other parts of the country, the price Standard Oil charged was increased to make up the difference. According to Dylan Ratigan, “in this way, the company charged its customers a premium to drive the competition out of business, which left those same customers even more dependent on Standard Oil. Rockefeller referred to this approach as ‘sweating the competition’”.

By 1882 Standard Oil controlled up to 90 percent of the oil refining capacity in the United States. Seven years later its monopolistic grip had extended to retail, wholesale and oilfields as well. In short, Rockefeller’s tactics changed what had been a free market in oil, with prices fluctuating according to competitive supply and demand, into a rigged market where prices were stabilised at artificially high levels. That’s why people like him were called ‘robber’ barons. They used the free-market mandate of increasing one’s wealth via whatever method one can get away with to ultimately end the free market and impose a monopoly that effectively took wealth away from people through rent extraction.

Like Bill Gates, Rockefeller went on to become the world’s richest man. Also like Bill Gates, he went on to build a foundation (the Rockefeller Foundation) dedicated to philanthropic ventures. It was set up in 1910 with $50 million in Standard Oil stock and, by the time of his death in 1937, half of Rockefeller’s fortune had been given away. This legacy, along with the philanthropic acts of other 19th-century American industrialists, can be seen in cities like New York. You would be hard-pressed to find a museum, art gallery, university, concert hall or charity there that cannot trace its origins back to some such businessman.

But, as I said earlier, there has always been a more cynical way of looking at this. Businessweek highlighted this when it wrote, “John D Rockefeller became a major donor, but only after a public relations expert, Ivy Lee, told him that donations could help salvage a damaged Rockefeller image”. Put another way, according to this cynical view, Rockefeller was not actually interested in philanthropy. After all, a true desire to promote social welfare seeks to fundamentally change the preconditions that are the root cause of social problems. What Rockefeller was doing was simply placating an angry public by throwing a bit of money at some public service or other, while doing little to alter the structures that enabled the few to gain so much at the expense of the many in the first place.


In part one, we talked about the 19th-century industrialists who earned the dubious title of robber baron and who can be credited with inventing modern philanthropy. When Warren Buffett wanted to make a philanthropist out of Gates, his first move was to give him a copy of an essay by Andrew Carnegie (the greatest of all the 19th-century industrialist-philanthropists) called ‘The Gospel Of Wealth’. Incidentally, we see here (and also in the case of Rockefeller) a curious pattern: these billionaires do not seem to consider using their wealth for the common good until somebody else points out that this is an option. And we can add a third billionaire to that list, for Paul Allen, the co-founder of Microsoft, is quoted as complaining, “I’ve spent money on jets, boats. I don’t know what to do next”. Notice that it never occurred to him that a portion of his billions might be put to philanthropic use; once again this had to be pointed out to him (by the author Douglas Adams, in this case). It could be that there are some very wealthy people who needed no persuading to turn to charitable acts, but given that such people are, almost by definition, mostly selfish, greedy, ruthless and ethically dubious individuals (it is, after all, kind of hard to acquire that kind of fortune by being a nice guy), I am willing to bet they are few and far between.

That might sound like a harsh assessment, but it was one echoed by the economist Jeffrey Sachs, who reckoned, “They are tough, greedy, aggressive, and feel absolutely out of control in a quite literal sense, and they have gamed the system to a remarkable extent. They genuinely believe they have a God-given right to take as much money as they possibly can in any way they can get it, legal or otherwise”.

I should point out that this was his view of traders on Wall Street, so it should not be considered applicable to everyone who is very wealthy. J.K. Rowling, for example, is probably not like this, since she owes her fortune to the astonishing success of the Harry Potter franchise, making her more like a lottery winner than somebody who fought their way to the top of business.

Anyway, in Carnegie’s essay, the great iron and steel magnate posed the question, “what is the proper mode of administering wealth after the laws upon which civilisation is founded have thrown it into the hands of the few?”.

Such a question seems aligned with the findings of Thomas Piketty. In ‘Capital in the Twenty-First Century’ (which is actually concerned with capitalism up to the 21st century), the French economist put together the most exhaustively researched analysis of market capitalism and its consequences so far assembled, and drew the following conclusion: when allowed to unfold in its natural, ‘financially liberalised’ state, capitalism will very likely grant those who hold significant wealth far greater returns than those who rely primarily on labour income can obtain. There are various reasons for this. For example, if you are very wealthy you can afford to hire the very best financial advisors, who know how to skirt around the law and squeeze every drop of value out of your assets while avoiding the tax man. The poor, meanwhile, cannot afford any financial advice and are constantly being targeted by fraudsters, predatory bosses, authoritarian bureaucrats and marginally legitimate debt sellers. Also, the owner classes can access high-level capital investments that are simply unavailable to those of us lacking significant wealth in the first place. Piketty summarised this tendency in the inequality r > g: the return on capital tends to outpace the growth of income, driving up the wealth-to-income ratio.

Piketty’s study confirmed what Carnegie suspected, which is that wealth tends to grow much faster for the already wealthy. That wealth can then be used to further alter the structure of social systems so as to increase the tendency for wealth to flow toward the elite minority. The behaviour of Rockefeller (discussed in part one) aligns with Piketty’s view that meritocratic competition makes sense only when one is trying to establish a fortune. Once wealth is acquired, it makes more sense to turn anti-competitive and live off the rent extraction to be had from a market rigged in your favour. Why strive to make a fortune when you can just protect the fortune you (or your ancestors) have already amassed?
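The tendency for wealth to grow faster for the already wealthy can be illustrated with some toy numbers. The rates below (a 5% return on capital versus 1.5% growth in wages over 30 years) are hypothetical figures chosen purely for illustration, not values taken from Piketty’s book:

```python
# Toy illustration of r > g: a fortune compounding at the return on capital (r)
# versus a wage growing at the rate of income growth (g).
# The rates and the 30-year horizon are arbitrary illustrative assumptions.

r, g, years = 0.05, 0.015, 30

capital_multiple = (1 + r) ** years    # how much an invested fortune multiplies
wage_multiple = (1 + g) ** years       # how much a wage multiplies over the same span

print(f"capital grows x{capital_multiple:.1f}, wages x{wage_multiple:.1f}, "
      f"so the gap widens by a factor of {capital_multiple / wage_multiple:.1f}")
```

Over a single generation the fortune pulls roughly 2.8 times further ahead of the wage, and because the process compounds, the divergence only accelerates in the generations that follow.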

The more positive way of looking at this might be to say that it is OK to amass a great fortune, even one that involves ethically-dubious practices, if one then uses that money in charitable ways. Those who adopt this way of looking at philanthropy tend to prefer to cast charitable donations in monetary terms, probably because the donations seem so amazingly generous when so presented. For example, Peter Diamandis and Steven Kotler, in a chapter devoted to technophilanthropists in their book ‘Abundance’, wrote, “by 2004, charitable giving in America had increased to $248.5 billion, the biggest yearly total ever. Two years later, the number was $295 billion”.

By everyday standards, such sums of money are almost unimaginably large, way beyond anything most people could earn in several lifetimes. It therefore seems that those who donate such amounts must be superhumanly charitable and worthy of the highest praises society can bestow.

As for the cynics, they tend to prefer percentages. For example, a study by the Chronicle of Philanthropy found that households earning between $50,000 and $75,000 a year gave an average of 7.6% of their discretionary income to charity. 7.6% of a pretty paltry amount, if you ask me. Shouldn’t a society founded on Christian values be giving away more like 40% of discretionary income? Worse still, the figures are even more dire when we turn to those who earn over $100,000. Even though such folk are obviously in a position to be much more generous, in percentage terms they are less so, giving a paltry average of just 4.2%.

Several studies have come to a similar conclusion: those who can least afford to donate to charity actually donate the most in percentage terms, while those who are most financially advantaged give away the least. Ken Stern, writing for The Atlantic, reckoned that America’s bottom 20 percent donated 3.2 percent of their income to charitable causes, while the top 20 percent gave away a minuscule 1.3 percent. Of course, if you have billions to begin with, then 1.3 percent of that is a lot of money, more than most of us can ever dream of having, let alone giving away. But expressed in percentage terms it comes across as a tiny amount, a mere gesture intended to paper over the fact that society is rigged against the many. A 2011 study from the University of California, Berkeley, found that upper-class individuals are more likely to “exhibit unethical decision-making tendencies, take valued goods from others, lie in a negotiation and cheat to increase their chances of winning”. They also have a disproportionate ability to mould a society so heavily dependent on the pursuit of money into a shape of their liking, so unsurprisingly our societies work to reward them at the expense of so many others.


There is a tendency for rich philanthropists to become patrons of things that primarily interest the upper classes, while ignoring issues affecting the poor, even though such issues are usually much more urgent. We saw this attitude in part one, when we talked about the philanthropic ventures of the 19th-century Robber Barons. As you may recall, they mostly donated to elite schools, concert halls, museums and the like. People like Carnegie also showed zero interest in tackling issues outside of their own city.

Given the period in which these men lived, they can be forgiven for being so localised in their philanthropy. After all, this was long before the age of smartphones and global communications networks, so people were nothing like as aware of issues like poverty in Africa as we are today.

But while we might forgive such blinkered philanthropy back then for the reason given, it’s much harder to justify it now. And yet it still occurs. According to Ken Stern’s article in The Atlantic (“Why The Rich Don’t Give To Charity”), in 2012 “not one of the top 50 individual charitable gifts went to a social-service organisation or to a charity that principally serves the poor and dispossessed”.

It would be incorrect to say philanthropists never turn their attention to such issues, because they sometimes do. There is, for instance, the Omidyar Network (brainchild of eBay founder Pierre Omidyar), pursuing such things as microfinancing, which could potentially unlock opportunity for people who cannot access traditional financial and banking services. Another example would be the Rockefeller-backed Acumen fund, which invests only in businesses that manufacture and sell, at affordable prices, products and services needed in the developing world (things like mosquito nets and reading glasses). Of course, the Bill and Melinda Gates Foundation’s multibillion-dollar commitment to tackling malaria can also be counted among the charitable ventures focused on problems that really matter.

All such examples of worthy causes should of course be commended. But the emphasis on charity as the means to deal with the fallout from the negative consequences of market competition arguably evades the more pressing question, which is: why is such intervention necessary to begin with? A cynic might say that throwing money at those affected by the negative externalities that are virtually inevitable when people compete to selfishly increase their own fortunes via whatever methods they can get away with, operating within social structures that already disproportionately favour a minority of greedy, ethically dubious people, is akin to giving up and managing the symptoms of a socioeconomic disease rather than seeking out its root cause and curing it outright. It is rather like having an engine that is leaking oil and choosing to constantly top up the oil rather than fix the leak.

As to the root causes, those who have researched the systemic origins of today’s problems have traced them back to incompatible assumptions that have held since the Neolithic period. We find one of these assumptions contained within the definition of the word ‘economics’: the study of the allocation of scarce resources. Alongside the assumption of scarcity there is a potentially incompatible assumption applied to growth, which is that it can be infinite.

The reason why the assumption of scarcity is incompatible with the assumption of infinite growth should be so obvious as to not need spelling out. But since our current consumerist lifestyles are using up vital resources far beyond sustainable rates, it evidently isn’t so obvious after all, so perhaps the contradiction is worth spelling out. Growth cannot be sustained indefinitely if it relies on something of finite supply; eventually that supply will be exhausted and the growth must come to a stop. It is worth noting that it really does not matter how large that finite supply is, because infinite growth is by definition infinitely greater. People who dismiss concerns about the unsustainability of economic systems premised on infinite consumption, on the grounds that we can try to gain access to the much larger resources of the solar system or the galaxy or the visible universe (assuming that is even remotely practical an aim to begin with), therefore miss the crucial point: infinite growth in consumption will exhaust any physical resource. Only infinite resources can sustain infinite growth, but then we would have to abandon the assumption of scarcity, returning us to the essential point that scarcity and infinite growth are incompatible beliefs.
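The arithmetic behind that point is worth a quick sketch. The numbers below are arbitrary, but the shape of the result is not: under compounding growth in consumption, even enormously enlarging the stock barely delays its exhaustion.

```python
import math

# Years until a finite stock is exhausted, when consumption starts at c0
# units per year and grows at rate g. Cumulative use after n years is
# c0 * ((1 + g)**n - 1) / g, so we solve for when that sum reaches the stock.

def years_to_exhaust(stock, c0, g):
    return math.ceil(math.log1p(stock * g / c0) / math.log1p(g))

# Arbitrary example: consumption growing at 3% per year.
print(years_to_exhaust(10**6, 1, 0.03))    # 349 years
print(years_to_exhaust(10**12, 1, 0.03))   # 817 years: a million-fold bigger
                                           # stock buys under five extra centuries
```

A millionfold increase in the resource base postpones the end by less than five hundred years, which is why appeals to the much larger resources of the solar system do not rescue the assumption of infinite growth.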

For most of human history since the Neolithic period, it did not much matter that we operated under such contradictory assumptions, because we lacked the practical ability to do much harm. Up until the Industrial Revolution, populations were caught in a ‘Malthusian Trap’, named after Thomas Malthus, who argued that population growth would outrun the ability to provide enough essential resources to sustain it, leading to famines and other calamities that would reduce the number of mouths to feed to more sustainable levels.

Going back beyond the Neolithic period, we are talking about a time when human populations consisted of small tribes and bands. When populations are small and their capacity to take is restricted, Earth’s resources can seem endlessly bountiful, as indeed they are so long as the rate of taking stays below the Earth’s ability to replenish them. Hunter-gatherers fishing with rods and nets could barely begin to dent fish stocks.

But it’s quite a different story when the pursuit of infinite growth in the consumption of fish, in the interest of ever more profit, results in fish-catching technologies that can capture hundreds or thousands of tons at a time. There were about 1.8 million tons of spawning cod in the Grand Banks when the first commercial fishing ships capable of capturing such prodigious amounts appeared in 1951. By 1991, the stock was down to 130,000 tons, and a year later the Canadian government had to step in and close the Grand Banks to cod fishing, or else the species would have been fished to extinction. But that decision came with consequences too: 32,000 fishermen were thrown out of work and required billions of dollars in aid to support their families. You can see in this sorry tale how dangerous the assumption of infinite growth is when applied to a finite resource. I would also point out that the pursuit of more profit could well have been fed by the dwindling supplies of cod, for the scarcer this in-demand resource became, the more expensive and worth pursuing it would be. That would incentivise more profit-seeking fishermen to pursue and catch the prized fish, in a positive feedback loop that ends either in the extinction of cod or in government and social intervention to halt the unsustainable consumption.
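For what it’s worth, the two stock figures quoted above imply a steady compound decline of roughly six percent a year across those four decades. This is a back-of-envelope sketch using only the numbers given in this post:

```python
# Implied compound annual decline of the Grand Banks spawning cod stock,
# using only the figures quoted above: 1.8 million tons in 1951,
# 130,000 tons in 1991.

start_tons, end_tons = 1_800_000, 130_000
years = 1991 - 1951

annual_rate = (end_tons / start_tons) ** (1 / years) - 1
print(f"implied decline: {annual_rate:.1%} per year")   # about -6.4% per year
```

A loss of six percent a year sounds almost gentle, which is exactly how a compounding decline can go unnoticed until the stock has all but vanished.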

Another root cause has to do with how societies have been structured since the Neolithic period. I have covered this in detail in my series ‘This Is What You Get’, so search for that if you want more details, but suffice it to say that societies have tended to be ‘redistributive’. That is, they are societies in which the populace can be divided into two groups. On one hand, a non-producing elite who wield great political power and social influence, who receive ‘tribute’ and who get to disproportionately determine how it should then be redistributed to the rest of the populace (hence ‘redistributive’ societies). On the other hand, everybody else: the producing masses, toiling for minimal reward and wielding comparatively little social influence and political power.

This hierarchical structure has held (with some modifications, though none that truly alter its essential form) through every form of society since the Neolithic period. From the abject slaves and ruling monarchs of Egypt, to the vassals and lords of medieval feudalism, to the handicraft merchants and state monopolists of mercantilism, and on to our contemporary era, with its growing numbers of precariat employees and a rapacious elite in the financial and banking sectors, we see broadly the same thing: a society in which there are people who work, and then there are those other people (always much smaller in number but far more powerful in other ways) who gain the lion’s share of the reward generated by that work. In short, it is a systemic framework that assures the superiority of a minority for whom the temptations of kleptocracy (stealing from the people they rule) are all too often a siren song they cannot resist. Even communism, which was vaguely imagined to operate very differently, turned out not so different in practice. In capitalism you are bossed by business people; in communism you are bossed by bureaucrats. Either way, it is a redistributive society comprised of those who do the work on one hand, and the non-producing elites who disproportionately control the fruits of that work on the other.

Stanford neurologist Robert Sapolsky summarised the issue in the following way: “Agriculture allowed for the stockpiling of surplus resources and thus, inevitably, the unequal stockpiling of them. Stratification of society and the invention of classes. Thus it has allowed for the invention of poverty”.

Ever since, the presence of a powerful elite with kleptocratic tendencies, running societies on the incompatible assumptions of scarcity and infinite growth, has worked to sustain poverty rather than truly to eradicate it. This should not be thought of in terms of conspiratorial plotting. Rather, it is the evolutionary outcome of the selfish pursuit of material wealth by whatever method can be gotten away with, let loose in societies that grant some a pre-existing disproportionate advantage and that shape the psychology of the rest in ways that sustain the exploitation of poverty. That poverty cannot be eradicated, for to do so would be to bring down the very system that the elites depend upon for their position.


In the last chapter we ended with the following observation:

‘The presence of a powerful elite with kleptocratic tendencies, running societies on the incompatible assumptions of scarcity and infinite growth, has worked to sustain poverty rather than truly to eradicate it. This should not be thought of in terms of conspiratorial plotting. Rather, it is the evolutionary outcome of the selfish pursuit of material wealth by whatever method can be gotten away with, let loose in societies that grant some a pre-existing disproportionate advantage and that shape the psychology of the rest in ways that sustain the exploitation of poverty. That poverty cannot be eradicated, for to do so would be to bring down the very system that the elites depend upon for their position’.

Now, one might think that this assertion can be refuted by examples of the problem-solving capacity of market competition turning a once-scarce resource into an abundant one. Aluminium, for example, was once rare and expensive, but now it is so cheap we use it and discard it without a thought. Such examples do not, however, refute the claim that market competition seeks to perpetuate scarcity, because that claim concerns an overall condition, and an overall condition cannot be dismissed simply by citing isolated examples of a particular resource being made abundant.

Ever since our technological capabilities became sophisticated enough to enable us to escape the Malthusian trap, we have seen the rise of various ways of artificially perpetuating scarcity. These have included psychological manipulations intended to blur the distinction between genuine needs and frivolous ‘wants’, and the creation of a throwaway culture. Thus, although we still talk about the ‘economy’, what we actually have is nothing like an economy in the original sense of that word. Rather than striving to use our resources as efficiently as current technical know-how allows (which is what we should be doing if we were really trying to economise), we seem determined instead to turn the world’s resources into junk to be thrown away and repurchased. After all, the goal in a consumerist culture is to sell more stuff, so the last thing you want is for people to be content with what they have (this ties in with the point about our ability to make intelligent choices over what we really need being messed with). We also have services that are less about helping those in need and more about extracting rent from them, preying on their financial instability to get them further indebted and profiting from their desperation.

This has resulted in two forms of socioeconomic sickness: the existence of needless poverty on one hand, and the existence of wealth obesity on the other. It really should disgust people that anyone can become a multibillionaire in a world where others must subsist on less than two dollars a day. Sadly, for millennia ruling kleptocrats have used propaganda and other methods to keep the masses from developing revolutionary thoughts, so this complacency is not surprising.

As Gillian Tett of the Financial Times said, “most societies have an elite and the elite try to stay in power; and the way they stay in power is not merely by controlling the means of production, but by controlling the cognitive map, the way we think”. We see this ‘control of the cognitive map’ in the way our societies condition us to aspire toward the excesses of the wealthy and to accept many eminently solvable issues as intractable problems we should just accept as “the way it has to be”. It is all to do with the ‘need’ to perpetuate scarcity so the boundless growth of consumerism can continue to extract profit and the elites can maintain the structures that their privileged positions depend upon.

Any call to seriously re-engineer society in order to achieve a more equitable distribution of material wealth tends to meet the same retort: that it can only lead to Leftist totalitarianism. The sentiment was voiced by Forbes writer Jeffrey Dorfman:

“Once you admit that income redistribution is fair, there’s no logical stopping point short of communism”.

There are a couple of flaws in this assertion. Firstly, it seems to forget that the market economy is itself a process of wealth redistribution. Perhaps Dorfman is one of those market ideologues with faith in the ‘invisible hand’ creating peace and harmony out of the selfish pursuit of competing to gain differential advantage over others? But, as Harvard researcher Jonathan Schlefer explained, “beginning in the 1870s, theorists…wanted to show how market trading among individuals, pursuing self-interest, and firms, maximising profit, would lead an economy to a stable and optimal equilibrium…after a century of work, they concluded that no mechanism can be shown to lead decentralised markets toward equilibrium, unless you make assumptions…regarded as utterly implausible”.

Market dynamics turn human frailty and misfortune into commodities to be exploited for profit. And the competition to find the commodities that can be sold at the most cost-competitive price encourages fraud (because what could be more competitive than successfully making money out of nothing but bogus claims?). This is why a totally laissez-faire market, operating in the absence of any regulation, will tend to destroy itself. But neoliberalism is driven toward turning everything into a commodity to be bought and sold for differential advantage, and regulatory bodies are no exception to that rule. They can be corrupted into a means of conferring unfair advantage in the interest of selfish profit maximisation. “Crony” capitalism is really just the likely outcome of free-market principles operating in the real world, with its history of hierarchical, redistributive societies prone to kleptocracy.

The other flawed assumption is that, if the goal is ‘fair’ income redistribution, then the logical stopping point is communism. Presumably, Dorfman means a society in which everybody gets the same income (that is, after all, how most people think communism is supposed to work). But how can that possibly be the logical stopping point if the goal is fairness? There is nothing ‘fair’ about equal pay across the board, given that individuals clearly make unequal contributions toward beneficial and detrimental outcomes. It does not bother me that some people are more materially rewarded than I am, and it does not bother most other people. When asked how wealth should be distributed, most people rightly dismiss full communism and opt instead for some measure of distribution that rewards the most productive while ensuring sufficient wealth at the bottom to alleviate relative poverty (defined as not being able to access the average lifestyle for that society). It’s just that this ideal redistribution is more equitable than what people believe is actually the case (and true wealth inequality is even worse than most people believe).

Despite the fact that there is a public call to bring about greater equitability in wealth redistribution, every policy intended to bring it about tends to be dismissed by neo-liberal ideologues as an unworkable solution that can only end in authoritarianism and the Gulag. For some reason, philanthropy comes out as the only viable means of patching over the harm caused by the selfish pursuit of material wealth. The question is, why?



Last chapter we ended with a question:

‘Despite the fact that there is a public call to bring about greater equitability in wealth redistribution, every policy intended to bring it about tends to be dismissed by neo-liberal ideologues as an unworkable solution that can only end in authoritarianism and the Gulag. For some reason, philanthropy comes out as the only viable means of patching over the harm caused by the selfish pursuit of material wealth. The question is, why?’.

From the positive perspective, it could be because, where philanthropy is concerned, money is being handled by those with a proven track record of making it work to produce value. Whereas governments are known to waste money on unnecessary bureaucracy, the philanthropists are people who have revolutionised retail, or brought computing to the masses, or built rockets that can land on platforms out at sea. Who could be better placed to use money responsibly and build a better future?

Advocates of philanthropy also cite autonomy as another advantage. This line of reasoning was adopted by Matthew Bishop (author of ‘Philanthrocapitalism: How the Rich Can Save the World’): “They do not face elections every few years like politicians, or suffer the tyranny of shareholder demands for ever-increasing profits, like CEOs of most public companies. Nor do they have to devote vast amounts of time and resources to raising money, like most heads of NGOs. That frees them up…to take up ideas too risky for government, to deploy substantial resources quickly when the situation demands it”. In short, they answer to nobody, and if their heart is in the right place nothing can stop them putting life-changing sums of money to good use.

But, since these are life-changing sums of money the philanthropists are being trusted with, there need to be assurances that their hearts are, indeed, in the right place. The best way to ensure things are done properly is to have transparency and a democratic process. The problem is, there is often neither transparency nor accountability. The World Health Organisation’s head of malaria research, Arata Kochi, compared the Gates Foundation to a cartel, claiming the organisation was “accountable to no-one other than itself”. And Dr David McCoy, adviser to the People’s Health Movement, reckoned “it also operates through an interconnected network of organisations and individuals across the NGO and business sectors. This allows it to leverage influence through a kind of ‘group-think’ in international health”. From this perspective, ‘philanthropic’ organisations have zero transparency, are accountable to nobody, and are really just an excuse to transfer power from the State to billionaires. As Peter Kramer, a critic of the Giving Pledge, said, “it’s not the state that determines what is good for the people, but rather the rich want to decide”. Given that these are often some of the most ruthless exploiters of competitive behaviour and its negative effects, one has to wonder whether unaccountable billionaires working without transparency really can be trusted to serve the public’s interests.

The cynical way of looking at philanthropy is to view it as just a PR exercise whose purpose is to justify some having so much to begin with, while throwing token amounts of money at those enduring the negative externalities that inevitably arise when we compete to gain more by whatever method we can get away with. Where the ‘Giving Pledge’ is concerned, there is no legal obligation to do anything. Signatories merely say they will give away half of their fortune; signing the pledge places them under no enforceable commitment to actually follow through on their promise. Now, if there were transparency, so that the public could see what was being donated and where it was going, that might ensure the pledge is indeed honoured. But, guess what? There is no transparency. So how can we ever know what was given away, or for what purpose? Really, then, there is nothing to prevent the Giving Pledge from being a PR stunt intended to placate a public grown sick and tired of the excesses of the rich and the gross wealth inequality fuelled by a ‘greed is good’ culture that has brought such harm to people and their communities. As the philosopher Slavoj Zizek put it, “charity is the humanitarian mask hiding the face of economic exploitation”.

Also, when it comes to the establishment of charitable organisations, there are reasons for taking such action that do not necessarily count as altruistic. You see, by setting up such organisations, the ultra-rich can take advantage of tax loopholes as money is passed through them.

Such was the case with the foundation set up by the Walton family. These five Walmart heirs have a combined net wealth of over $139 billion, meaning they have more money than the bottom 40 percent of Americans combined. An independent audit determined that the Walton Family Foundation, built “at almost no cost to themselves”, was “exploiting complex loopholes in order to avoid billions of dollars in estate taxes”.

As to how much of that $139 billion fortune actually went to charity, the answer is…0.4 percent. That is such a paltry amount, it is hard not to agree with Peter Joseph of the Zeitgeist Movement who said, “what they are really doing is bypassing state funding in favour of their own interests. Moving money to charity foundations, effectively consolidating wealth in the hands of private interests rather than government, is a logical method to keep things ‘in the club’ of private business power”.


Philanthropy and charity are either the most viable way of dealing with the problems we are facing, or they are just a PR stunt, a band-aid for problems whose systemic root remains deliberately unaddressed by those with vested interests in retaining the status quo. Obviously, one’s own political ideology will influence which of these possibilities seems most plausible. Really, though, I suppose all we can do is wait and see whether the philanthropists’ pledges really do bear fruit and build a better tomorrow.


“The New Human Rights Movement” by Peter Joseph

“The Survival Manual” by Mark Braund and Ross Ashcroft


Rewarding Work In ‘Red Dead Redemption 2’


In this essay I thought I would write about the ways in which Rockstar’s Red Dead Redemption 2 incorporates into its gameplay the elements that work needs in order to be rewarding.

First, though, we need to figure out what those elements are. Barry Schwartz has looked into this, and come up with the following ideas:

“Satisfied people do their work because they feel that they are in charge. Their work day offers them a measure of autonomy and discretion. They use that autonomy and discretion to achieve a level of mastery or expertise…Finally, these people are satisfied with their work because they find what they do meaningful. Potentially, their work makes a difference to the world”.

I think the key words in that passage are ‘autonomy’, ‘discretion’, ‘mastery’ and ‘meaning’. Whenever physical or mental activity incorporates these, you have work that is rewarding.

So how does Red Dead Redemption 2 fare? First off, the environment in which this game is set obviously lends itself to ‘autonomy’. It is set in the vast expanse of the American Wild West and, as the game’s trailer puts it, “the world is full of adventures and experiences that you discover naturally as you move fluidly from one moment to another”. This gives the game a non-linear feel, as the player can ride off in any direction.

Along the way, the player is likely to encounter various situations and activities. Most of the time you are not required to participate and can decide for yourself whether or not to get involved. This means the game manages to incorporate another feature work needs in order to be rewarding, namely ‘discretion’. We also see discretion at work during missions, where you are asked to make decisions like what actions members of your posse should take, or whether an aggressive or pacifist response is most appropriate for the current situation.

One could also cite character management and customisation as further ways in which this game provides opportunities for discretion or judgement. As the game’s trailer says, “your experience is defined by the choices and decisions you make…You can, of course, choose what to wear, ride and eat”. Furthermore, these are not merely cosmetic choices that just change your appearance but have no real consequences. Your character has various health attributes that you need to take care of. A decent coat in winter could mean the difference between life and death, whereas during a hot summer it would not be wise to wear such warm clothing. From character customisation and management, to the snap decisions required of the player during missions, to the open world and the nonlinear experiences it offers, Red Dead Redemption 2 provides plenty of opportunity to apply one’s discretion.

When it comes to mastery, ever since Pong was introduced with the simple instruction to ‘avoid missing ball for high score’, videogames have provided players with challenges that test their ability and enable them to feel like their skills are developing.

The best games don’t just rely on setting a challenge like getting from A to B in a set time or shooting X number of targets. They incorporate systems of feedback into the gameplay that inform the player how well they are performing and whether they should try another strategy. Visual and audio cues let you know whether things are going well. The best games do not leave you in the dark over what you should be doing, but at the same time they don’t hold your hand: they leave it up to you to figure out how to accomplish what needs to be done.

Finally, by providing the player with identifiable tokens of progress in the form of special items, areas and other stuff that you unlock by achieving certain objectives and challenges, games like Red Dead Redemption 2 let you feel like you are gaining mastery and making real progress as the gameplay continues.

Now, when it comes to meaningful work, one might struggle to claim Red Dead Redemption 2 provides much of this if we consider the game from the perspective of ‘real life’. This is, after all, just a videogame. Sitting in front of a TV pressing buttons on a joypad hardly stands beside researching a cure for cancer as “work that makes a positive difference to the world”.

But in the context of the in-world experience, many games offer a grand narrative that sees the player progress from a nobody at the start to a significant figure whose actions and decisions have had a decisive effect on shaping history by the end. You become the hero who saved the world. Admittedly, in Rockstar’s most famous franchise (Grand Theft Auto) you are attempting to rise in the ranks of criminals, which is not exactly everyone’s idea of a positive contribution to society. And, in Red Dead Redemption 2 you are cast as an outlaw and can engage in the kind of activities for which Rockstar has earned an image as the bad boy of videogaming. You know, robbery, murder, that kind of thing. But there do appear to be opportunities to act like an outlaw with noble intentions. There are situations in which you can choose to help people or to refrain from killing foes. According to the trailer, “there are countless secrets to uncover and people to meet. You can get into raucous altercations…chase down bounties. Your behaviour has consequences and people will remember you and your actions”.

So, like all the best games, Rockstar’s Red Dead Redemption 2 successfully incorporates ‘autonomy’, ‘discretion’ and ‘mastery’ into a grand narrative that provides a sense of social meaning.

This achievement is all the more remarkable when you consider how menial an activity videogaming actually is. After all, what are you physically doing when you play these games? Repeatedly pushing buttons. Really, that’s it. Yet, somehow, games designers can take this dull, repetitive activity, one that ranks alongside rote assembly line work as the most menial ever created, and build an experience on top of it that is so compelling people happily pay good money so they can do it!

But, really, it is precisely because it is we the players who pay to do the work in videogames that designers have every reason to make it as engaging and rewarding as possible. Aside from stories of people in sweatshops grinding through MMORPGs to level up avatars that are then sold on to people who would rather pay than do the tiresome work of obtaining a high-level character themselves, nobody is ever coerced into playing a videogame. Participation is entirely voluntary for the vast majority of players, so if games are to sell, their developers have to make sure that the work a gamer must do to get through the game is rewarding.

However, when it comes to jobs, I believe there is an incentive to reduce the elements that make work rewarding. If you take the first three features (which were ‘autonomy’, ‘discretion’ and ‘mastery’), I believe these have something in common, which is that they provide opportunity to enhance one’s individuality. The best games really do try to include ways to let the player customise the experience, which in some cases goes as far as incorporating editing tools that enable you to craft whole new levels and gameplay. I would argue that the reason why videogames tend not to make good films is because the characters in them are often pretty much blank slates intended to be filled in by the player’s personality, not well-developed characters with their own psychology.

The best videogames provide plenty of opportunity to enhance one’s individuality. We are each of us unique individuals with our own lifestyles, preferences and abilities, and ideally we would have jobs that reflect this. But this could potentially cause problems when combined with the other feature that makes work rewarding, which is that it should be ‘socially meaningful’.

Imagine that my job is very important and valuable to society, and that it is also perfectly tailored to suit the unique individual I am. Obviously this would be tremendously engaging and rewarding work for me, but what if one day I was run over by a bus? The company would be in big trouble, for who could replace me and fit into a position uniquely suited to the individual I am?

On the other hand, if you can somehow reduce or even eliminate the amount of autonomy, discretion and mastery a job requires, you also need to rely less on the individuality of your employee. In so doing, the employee can be treated less like a unique person and more like an interchangeable unit that can be removed and replaced at the employer’s discretion.

This obviously impacts on the employee’s bargaining power, as anyone who feels they are eminently replaceable is not going to ask for better pay or preferable working conditions. Cheaper labour means more profit for employers. Furthermore, employees are in a competitive environment in which they fight to earn enough money to keep from becoming too indebted, to keep up appearances in environments that emphasise material wealth as the sign of success, and in which there are taxes that have to be paid if you don’t want to go to jail. In other words, there is a lot of ‘negative motivation’ leading people to submit to jobs not because they expect to be rewarded if they do, but because they fear the consequences if they don’t.

So, employers have to pay their workers in order to get jobs done, and since they prefer cheaper labour where possible, they are incentivised to reduce the qualities of work that make it rewarding, for in doing so they make employees more like interchangeable and replaceable units. Furthermore, in the world of wage labour there are various forms of negative motivation that push people into accepting jobs that are not very rewarding, so, unlike videogame designers, employers need not be too concerned that the work they offer sucks.

I think this helps explain why people seemingly don’t want to work. It really is bizarre if you think about it. Imagine an animal like a dolphin, obviously evolved for a life in water and yet seemingly reluctant to leave dry land and go swimming. People are like that. We have large brains housing creative minds, dexterous hands that can use tools in complex ways, sophisticated language that enables us to cooperate and compete in ways no other animal can even imagine doing, and we are healthiest when mentally and physically active in social situations. In short, we evolved to work but apparently we don’t want to. At least, that seems to be the attitude people have when the topic of UBI comes up. As well as objections about how unaffordable they think it would be, people claim that if you did not have to earn wages in order to survive, nobody would work and we would just passively consume TV all day.

Actually, the evidence shows that it is in countries where people spend the most time in jobs that we find the highest consumption of TV. Which makes sense if you think about it: having burned so much energy in their jobs, people don’t have much left for anything else in their spare time. On the other hand, people in countries where less time is expected to be spent in jobs tend to do more voluntary work and spend less time sat in front of the TV.

Also, to get back to the theme of this essay, videogames debunk the theory that nobody wants to work. After all, if that were true, nobody would pay good money to go through the effort of trying to beat the various challenges these games set. Nor would anyone develop their sporting or artistic talents. These things take work, and quite a lot of it in some cases. The reason we so willingly pay to do the work in videogames is, as I have argued, that the designers of such games have an incentive to make that work as rewarding as they possibly can, because at the end of the day they want as many people as possible to buy the game and recommend it to others. Job providers, on the other hand, are incentivised to reduce the qualities that make work rewarding in order to make employees more replaceable and exploitable. Not that all jobs can have such qualities reduced; it’s just that enough can be made unrewarding to explain why 90 percent of people don’t enjoy their paid work.

As Red Dead Redemption 2 shows, it is not really work we don’t like. It is jobs.

Thanks to Rockstar for the images


‘Why We Work’ by Barry Schwartz

‘Utopia For Realists’ by Rutger Bregman

‘Red Dead Redemption 2 trailer’ by Rockstar


On Slavery


The past, it has been said, is a foreign country where things are done differently. At times, when looking back at the past, one feels a sense of relief to live now and not then. Who, for example, has heard of accounts of people enduring surgery while awake and aware and not thought “thank goodness I live now, when anaesthesia exists”?

And then there is the practice that is the main topic of this series. Slavery was once legal and widely practised. Thank goodness we live now, when it is not only illegal but considered so morally repugnant that there are calls to take down the monuments of historical figures whose fortunes partially depended on it, regardless of what philanthropic achievements they may also have accomplished. Not everyone believes this move to strip historical figures of their monuments for failing to live by modern ethical principles is just, but we must all feel that the abolishment of slavery ranks as one of the high points of human progress.

Yet I feel like we have the wrong belief when it comes to slavery. Not wrong in the sense of our moral attitude toward it, but in the sense of how we think it ended and the extent to which it did end.

The way its end is popularised

In the popular imagination, it was the superior moral argument that ended slavery. Abolitionists campaigned to make it illegal and, as right was on their side, they ultimately won. And that was that: slavery was abolished. And while it was being practised, we are encouraged to believe, it was always the most brutal violation of a person’s liberty. Dramas and documentaries always portray the practice as white Europeans colonising foreign lands and, being too prejudiced to see that we are all the same beneath the superficial difference of skin tone, treating the people of colour they find there like beasts of burden. They round them up, clap them in chains, throw them into the cargo hold of a ship and then sell them in markets to brutal masters who force them to toil under the crack of the whip.

What these beliefs do is make it seem like a chasm exists between the past and the present. Over there, beyond that great dividing line, there was slavery. Thank goodness we live over here where there is freedom and career opportunities.

But, really, this is a flawed belief. The abolishment of slavery was neither as decisive nor as complete as we are led to believe. There is no gulf between slavery and jobs; rather, they exist on a continuum. And if there is freedom to be found along this path, then we have not yet reached that point.

The transition

You only have to imagine how a sudden outlawing of slavery would play out in practice to see why the move toward freedom must have been a gradual evolution. Picture the scene. You are a slave and, as such, you own nothing at all, not even your own body. But then slavery is abolished and now, for the first time since your capture, there is something you can call your own. You are the sole owner of your labour power. But you own nothing else, and everything around you is the property of others. You cannot farm the land in order to grow your own food, because that is somebody else’s private property. You own no tools and have no money with which to buy them, and you cannot just take some, for to do so would be stealing.

All in all, as a former slave who now owns your labour power and little else, your options are going to feel very limited. In fact, you would probably feel there is only one thing you can do: beg your former masters to employ you. Now, this is hardly going to be a negotiation between equals. Pretty much all the bargaining power lies in the hands of the rich, propertied and well-connected former owners. So, when you come begging for a job, are you really going to be offered reasonable hours, paid vacations, workplace safety protections and a decent salary? Certainly not, not if your former masters follow capitalist logic and are out to hire labour as cost-effectively as possible. What you will be offered is work barely distinguishable from your former state of complete servitude: no rights other than the right to quit, wages so low you can only just subsist on them (which of course means that actually quitting work altogether feels like an unobtainable fantasy) and, if there are plenty of former slaves also looking for employment, not much chance of a better offer anywhere else. After all, why would former owners not squeeze every last drop of value out of your labour power, when they hold all the bargaining chips?

The comedian Steve Hughes summed up how it really felt the day slavery was ‘abolished’ in one of his stand-up shows:

“Right! You lot are free to go. We’ll see you back here tomorrow at six-thirty!”.

In the next instalment, we will see that the situation was probably even worse, because of how rigged society was against those recently ‘liberated’ slaves.


The New Human Rights Movement by Peter Joseph

Steve Hughes’ stand-up routine


Slavery and racism

No essay on slavery can avoid talking about racial prejudice. After all, racism is often portrayed as being synonymous with slavery. But while there is no denying that an attitude of white superiority has existed, especially during the late 19th and early 20th centuries, we are wrong to suppose that black people were enslaved simply because white racists considered them inferior. What actually drove slavery (or, at least, American slavery) was economics. Simply put, there was market pressure to secure cheap labour and profitable investments, and slave labour simply seemed a better deal than the alternatives on offer.

As the sociology professor William Julius Wilson explained, “the conversion to slavery was not only prompted by the heightened concern over a cheap labour shortage in the face of rapid development of tobacco farming as a commercial enterprise and the declining number of white indentured servants entering the colonies, but also by the fact that the slave had become a better investment than the servant. As life expectancy increased…planters were willing to finance the extra cost of slaves. Indeed, during the first half of the seventeenth century, indentured labour was actually more advantageous than slave labour”.

That term ‘indentured labour’ is worth pondering. You may recall from part one how slavery is often portrayed as a violent theft of a person’s liberty (in movies, for example, there is often a sequence showing people being rounded up and physically forced into their new role as labourers or domestic servants). But a person did not always come to be in a position of servitude because others physically forced them into it. Sometimes people sold themselves into slavery. Why on earth would anybody do such a thing? For the same reason plenty of people submit to employment: they are in debt, face likely punishment if it is not paid, and so ‘voluntarily’ give up their liberty and labour for others until the debt burden is lifted. In the case of 17th-century indentured servants (and quite a few people today, actually), debts were so substantial it could take a lifetime to clear them, meaning there was little practical difference in such cases between an indentured servant and an outright slave.

I put ‘voluntarily’ in scare quotes because, even though people who made such a decision may not have been physically forced into slavery, there were nevertheless other pressures which, if coercive enough, could have psychologically forced them into a life of servitude. In other essays I have referred to this as ‘negative motivation’: taking action not because you hope to be rewarded if you do, but because you fear the outcome if you don’t. For some reason, free market ideologues believe that, once you legally grant individuals the right to withdraw their labour, any deal involving the hiring of labour must be free of any form of coercion and voluntary in the true meaning of the word (“if they don’t like the deal being offered, they can walk away!”). It seems much more realistic to me that, rather than a sharp distinction between unfree slaves and employees whose decision to hire out their labour is entirely volitional, you can instead draw a smooth continuum: from the slave who is physically forced into servitude, to the indentured servant who is psychologically coerced into servitude, to the employee whose experience is a ‘carrot and stick’ combination of rewards and punishments, and so on up to the worker who regards his career as his true calling and does it gladly.

European indentured servants were not only similar to slaves in practical terms; attitudes toward them were also similar. As the civil rights professor Carter A. Wilson explained:

“Colour prejudice against Africans was rare in the first two-thirds of the 17th century. Legal distinctions between black slaves and white servants did not appear until the 1660s…Interracial marriages were common in the first half of the 17th century and…at this time they provoked little or no reaction”.

How slavery became racist

So, if market economics and not racism was what caused slavery, how did prejudice end up such a dominant part of the practice? It seems that racism and class distinction were deliberately stirred up as a means of exerting control. In the latter half of the 17th century, expanding agriculture in the Southern states created a huge demand for cheap labour, and that demand was answered by the global African slave trade. This obviously meant a dramatic increase in the slave population. Thus, it was around this time that public policy began to change, with the intent of creating security through hierarchical dominance. Division between poor whites and black slaves was manufactured in order to achieve the social distinction necessary for hierarchy. According to the historian Edmund S. Morgan, for example, a government assembly in Virginia:

“Did what it could to foster contempt of whites for blacks and Indians…In 1680 it prescribed 30 lashes…’if any negro or other slave shall presume to lift up his hand in opposition against any Christian’. This was a particularly effective provision that allowed servants to bully slaves without fear of retaliation, thus placing them psychologically on a par with masters”.

The purpose of this prejudice-based bullying was to ensure the growing slave population remained subdued and controllable. As Peter Joseph put it, it was a move to “generate a culture of bigotry and dominance that echoes to this day. So, in a sense, racism has effectively been a system reinforcer to optimise slave labour by way of sociological manipulation”.

Even after slavery was supposedly abolished, there continued to be an interest in controlling minority and lower-class populations. Segregation played an obvious part here, effectively trapping people in areas and circumstances where political and economic oppression were ever-present. As Peter Joseph explained, “the legal system morphed from direct racial oppression to indirect by targeting the outcomes of historical and present socioeconomic inequality, rather than any specific group”.

In other words, although in theory slavery has been made illegal in most countries, in actual fact societies were, and in many places continue to be, structured in such a way as to ensure a ready supply of labour that is not as free as we would like to believe. More on that in the next instalment.


The New Human Rights Movement by Peter Joseph

Centuries Of Change by Ian Mortimer


How slavery is still legal

In part one we were asked to imagine a newly liberated slave deciding what to do in order to live. We imagined he would refrain from stealing or trespassing on the grounds that to do so would break the law. In reality, he would have found it incredibly hard to avoid breaking any laws, because the judicial system was so rigged against his class.

The aftermath of the civil war left the South in economic turmoil, and under such chaotic conditions the authorities played fast and loose with the power to arrest and detain. There were vaguely defined vagrancy laws and other dubious pretexts for charging people (typically black people and the poor). This had little to do with a drive to restore law and order; the real purpose was to keep the prisons well stocked. Forced labour as a form of punishment was still legal, so anyone (a former slave, say) who was arrested and found guilty of whatever charge could be put to what was, to all practical intents and purposes, slave labour. The practice even had a name: convict leasing. So popular was it that, by 1898, 73 percent of Alabama’s total revenue was derived from convict leasing, and it took the federal government many decades to shut it down completely.

But an argument could be made that the practice was never completely abolished. Even today we have private prisons and corporations exploiting the labour of inmates. Companies like McDonald’s and Starbucks ‘employ’ prisoners, who in some cases earn as little as 23 cents an hour. There are also contractual agreements between state and local governments and private prisons that require the state to meet prison-occupancy quotas or else pay for the empty cells. Just as convict leasing led to corrupt arrests carried out to meet labour demand, this practice of guaranteeing prison occupancy regardless of a region’s actual crime levels has also resulted in corruption. There was, for example, the 2008 ‘kids for cash’ scandal, in which two Pennsylvania judges took millions in bribes from a for-profit prison company in return for increasing the number of inmates. With a pool of labour for hire at mere pennies an hour, one can appreciate the economic incentive to keep prisons well stocked.

The prison-industrial complex

Having said that, the largest beneficiary of slave labour from prison inmates is not private business but the State. As the Storyville documentary “Jailed in America” explained, “when someone is convicted and moves from jail to a federal or state prison, the government now has legal access to them as a workforce. These prisoners work for almost nothing, making road signs…or just about anything the government decides”. They may also be put to work providing services the prison itself requires in order to function, such as doing laundry or maintaining the building’s plumbing. Incarceration is part of a massive prison-industrial complex, an industry worth some $265 billion a year. It could not exist without inmates, and so there needs to be a steady supply of new people. Hierarchical societies are structured in such a way that poor people face limited life choices that are highly likely to lead to incarceration.

The way parole is conducted further supports the idea that the prison-industrial complex is structured to provide a supply of slaves. Parole comes with conditions which, if broken, see violators returned to prison, and these conditions include such things as being homeless or out of work. Note that for everybody else these circumstances are not illegal. Nevertheless, for those on parole, becoming homeless or losing your job (and plenty of other situations that involve no law-breaking) results in being thrown back into jail and the slave labour that often awaits there.

Why do we punish the guilty?

When it comes to prisoners, we are encouraged to believe that inmates are simply bad people who freely chose to commit crime. Such an attitude probably has its roots in monotheism and its portrayal of the human being as an individual with free will, existing separate from the rest of nature. Although one should be careful not to absolve individuals of all personal responsibility, the fact of the matter is that what free will we have is easily overcome. Both magicians and fraudsters understand and exploit flaws in our ability to make decisions and process information, tricking us into carrying out actions of their choosing while we believe we are exercising pure free will. There are also plenty of experiments showing how easily people’s ability to make independent choices is affected by pressure from peers and authority figures. Environmental and social factors impair our ability to choose freely, and these predominantly affect the lower classes. What kind of upbringing you had, the state of your education, the quality of your diet, economic factors and more can set people on a course far more likely to end in a conviction than the life choices presented to others.

Again, I should stress that this is not being pointed out in order to argue that personal responsibility does not exist; it does, at least to some degree. But, equally, we really should not condemn those found guilty when we know nothing of the factors that may have shaped the way their lives turned out. Crime is sometimes described as a ‘social disease’. Sometimes it is necessary to quarantine people who have a contagious biological disease, yet no moral condemnation is attached to such a decision. When it comes to those who catch the social disease of criminality, however, moral condemnation does tend to accompany the need to separate such people from society.

Any society based around competition for material advantage via whatever methods you can get away with, and which also incentivises negative attitudes toward the losers of such competition (labelling them failures and so on), is bound to create conditions in which some will succumb to the temptations of crime. In a neo-liberal free market where everything is a commodity with a price tag attached, how ethical you are depends on how ethical you can afford to be; morality doesn’t really come into it. As Peter Joseph said, with regard to corporations exploiting the cheap labour of prison inmates:

“This pursuit of cost-efficiency is what notably defines market efficiency…This is simply the nature of capitalist logic, and the still-common idea that the rise of capitalism was somehow instrumental in the general ending of abject slavery on the structural level is little more than denialism”.

Indeed. For, as we have already seen, the popular conception of how slavery ended is quite wrong. It did not simply end with the passing of laws that made it illegal. Rather, there has been a long process of rooting out opportunities for exploitation and establishing the rights people (particularly the lower classes) require in order to live in reasonable comfort and security. While capitalism should get some credit for its contribution toward creating the wealth that makes it more affordable to be ethical, it should not be forgotten that most of the rights we now take for granted as workers had to be fought for. I have no doubt that, were it not for pressure from socialist movements, work under capitalism would have remained so exploitative that life for the majority would be akin to slavery, and the wealth generated would be far more concentrated, as indeed it has been in all redistributive societies where the poor have little or no voice with which to protest their conditions (the sort of societies we have predominantly lived in since the Neolithic revolution).

Nor should we kid ourselves into thinking the struggle to end slavery is over. It continues to exist in varying degrees of obviousness, mostly because the root cause of most slavery persists to this day. We have encountered this cause several times throughout this series. It was there when we talked about people in debt selling themselves into slavery. It turned up again by implication when we discussed the forced labour of prisoners, for incarceration has long been justified as a means of making wrongdoers ‘pay their debt to society’.

Yes, the root cause of exploitation is debt. That’s what we will look into next time.


Storyville: Jailed In America

The New Human Rights Movement by Peter Joseph



Along with war and conquest, the imposition of debt stands as one of the two major causes of slavery and servitude. Some would probably argue that debt is a natural and inevitable part of any society involving interactions between individuals with differing means and needs. While it is true that any society can only function so long as people recognise and meet ongoing obligations toward one another, the amount of debt that exists in the world today is way out of proportion to anything required to maintain a prospering, egalitarian society. It is instead diagnostic of a market system that has become decoupled from reality.

Much of the pursuit of profit now has little to do with making physical products intended to solve real problems, and has instead moved into the abstract world of financialisation, in which Wall Street and its equivalents in other countries collude with governments to create and manipulate complex forms of debt. Major companies no longer derive their profits principally from selling actual products. Instead they float their shares on the stock market, borrow cheap money from the government, buy back their own shares and thereby boost the paper profit of the company.

This move into abstraction is not without consequences, for there are real downsides to this expansion of debt. Any investigation into how banking works will reveal that banks do not actually lend out money others have deposited; instead, they create money ‘out of nothing’ whenever a borrower is deemed creditworthy.

Actually, money is not really being created out of nothing. Rather, wealth is being snatched from the future in order to pay for goods and services here and now. This is fair enough when the wealth snatched from the future gets into the hands of those who really can build a better tomorrow. But in reality it too often ends up being used for short-term profit that ultimately causes long-term harm. Banking is a complex system in which people bring about the creation of money out of debt (and of course it is predominantly the poor who need to take out loans) and then, thanks to the externalities of market capitalism, the debt and the money separate, with the upper classes extracting the money while the poor are burdened with the debt. Are there exceptions to this rule? Yes, but then one can also find exceptions to the ‘survival of the fittest’ rule that drives evolution. Nevertheless, evolution remains a fact, and so do the consequences of this kind of inequality.
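A toy double-entry sketch can make this concrete. The model below is a deliberate oversimplification (one bank, one borrower, arbitrary figures of my own choosing), intended only to show how a loan creates a deposit and a matching debt, and how interest means the debt grows beyond the money created:

```python
# Toy sketch: credit creation and compound interest.
# One bank, one borrower; all figures are illustrative assumptions.

def make_loan(principal):
    """Granting a loan creates a deposit (new money) and a matching debt."""
    deposit = principal   # money the borrower can now spend
    debt = principal      # obligation the borrower now owes
    return deposit, debt

def total_owed(principal, rate, years):
    """With compound interest, the debt grows beyond the money created."""
    return principal * (1 + rate) ** years

money_created, debt_created = make_loan(100_000)
owed = total_owed(debt_created, rate=0.05, years=10)

print(money_created)   # 100000 -- the money brought into existence
print(round(owed))     # 162889 -- the money the borrower must eventually find
```

The gap between the two figures is the point: the money to pay the interest was never created alongside the loan, so it has to be found somewhere else in the economy.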

How impactful is debt? It’s relative…

How much it matters that you are in debt really depends on how likely it is that you will encounter somebody more powerful than you who can demand repayment. This means that, for the most powerful player of all, debt is of no consequence whatsoever because the day of reckoning will never come. As Alan Greenspan once pointed out, “the US can pay any debt it has because we can always print money”. Or, to put it another way, the US can endlessly snatch wealth from the future without fear that a mighty one will one day come along demanding repayment (of course this rests on the assumption that the country remains the dominant power in the world).

But for weaker players, it’s a different story altogether. Consider the words of President Obasanjo of Nigeria:

“All that we had borrowed up to 1985 or 1986 was around $5 billion and we have paid back so far about $16 billion. Yet, we are told that we still owe about $28 billion…because of the foreign creditors’ interest rates”.
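A minimal sketch of compound debt dynamics shows how Obasanjo’s pattern (repaying more than was borrowed yet owing more than ever) can arise. The 18 percent rate, 20-year horizon and flat repayment schedule below are my own illustrative assumptions, not Nigeria’s actual loan terms:

```python
# Illustrative debt dynamics: interest accrues faster than repayments.
# Rate, horizon and payment schedule are assumed for illustration only.

def remaining_debt(principal, annual_rate, annual_payment, years):
    """Roll the debt forward: each year interest is added, then a payment deducted."""
    debt = principal
    for _ in range(years):
        debt = debt * (1 + annual_rate) - annual_payment
    return debt

borrowed = 5e9    # ~$5 billion borrowed
payment = 0.8e9   # ~$16 billion repaid in total over 20 years

debt_now = remaining_debt(borrowed, annual_rate=0.18,
                          annual_payment=payment, years=20)

print(payment * 20 > borrowed)   # True: repayments exceed the original principal
print(debt_now > borrowed)       # True: yet the outstanding debt is larger still
```

Whenever the interest accrued in a year exceeds that year’s payment, the balance grows regardless of how much has been paid in total, which is the trap the quote describes.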

By 2004, the developing world was paying $20 in interest repayments for every dollar received in foreign aid and grants. The result was crippling austerity and the creation of highly vulnerable people, ripe for exploitation by predatory corporations. And austerity is not just a third-world phenomenon. Even rich countries had to put up with it following the last great speculative bubble (in sub-prime lending, as you may recall). But, in keeping with the idea that the powerful escape the consequences of bad societal decisions while the weak must bear the cost, the CEOs who led the way in reckless speculation mostly got away with it, riding off into the sunset with breathtakingly large pensions and severance packages, while the poor had services cut and good, secure jobs taken away and replaced with gig work stripped of many hard-won benefits.

As debt grows and its harmful consequences fall predominantly on the vulnerable, such people become more desperate, more prepared to do anything to delay the day of reckoning. As Kevin Bales, an expert in human trafficking, explained, “the question isn’t ‘are they the right colour to be slaves?’, but ‘are they vulnerable enough to be enslaved?’. The criteria of enslavement today do not concern colour, tribe, or religion; they focus on weakness, gullibility and deprivation”.

How many are slaves today?

Slavery has not been abolished; it persists to varying degrees. That it continues to this day cannot be doubted (any human rights organisation will correct you with evidence if you believe otherwise), but how much of it there is depends on how you define servitude. According to UN estimates, there are roughly 27 million slaves in the world today. Another organisation, the Walk Free Foundation, puts the total closer to 46 million. They are mostly bonded labourers or debt slaves in India, Pakistan, Bangladesh and Nepal.

But could the numbers be higher still? Think back to the notion of debt bondage and selling oneself into slavery, which we touched upon in part one. What, fundamentally, is the difference between selling yourself to one master for life, and being in a position where you must constantly make your labour available for hire, toiling away for minimal reward while others gain most of the wealth generated by workers like yourself? Doesn’t that suggest a continuum of exploitation from abject slavery to indentured servitude to wage labour? Yes, one could argue that the conditions of wage labour are preferable to outright slavery (at least in a lot of cases), but you cannot really call either condition ‘freedom’. After all, if to be free is to work for oneself and to gain most of the rewards from a job well done (and also to carry the costs of not doing your work competently), then precious few can claim to be truly liberated from the bonds of servitude. As G. Edward Griffin, a prominent critic of the Federal Reserve, said:

“No matter where you earn money, its origin was a bank and its ultimate destination is a bank…This total of human effort is ultimately for the benefit of those who create fiat money. It is a form of modern serfdom in which the great mass of society works as indentured servants to a ruling class of financial nobility”.

If Griffin’s argument is valid, the true number of slaves in the world today would be counted in the billions.


At the beginning of this series a question was posed: is the popular portrayal of slavery’s end incorrect? We have seen that slavery did not simply get abolished with the passing of an Act, creating a gulf between the un-free past and the liberated now. Rather, escape from slavery has been a long process that has made only modest progress in breaking the bonds of servitude and, in some cases, none at all. Progress toward freedom is so slow because, at its very root, market capitalism contains the socioeconomic structures that have given rise to exploitation since the Neolithic period: systems that justify competition, self-interest, hierarchy and inequality, perpetuating scarcity and profiteering from the growing environmental and social fallout of negative externalities by exploiting the vulnerability of lower-class people and developing nations under such circumstances.

Yes, to some extent progress has been made. But not as much as apologists for capitalism would have us believe, and certainly nowhere near as much as is technically possible. For example, much is made of the apparent reduction in abject poverty around the world (the condition most likely to result in exploitation). But what is not appreciated is that such results are obtained by using an implausibly low threshold for absolute poverty. Were we instead to use the ‘Ethical Poverty Line’ devised by Peter Edward (set at about $7.40 a day), then 4.2 billion people, or 60 percent of the world, remain in an impoverished state, ripe for exploitation.

Or consider that it would cost about $30 billion a year to end world hunger, and that the 1,800 billionaires in the world could fund that provision for 200 years and still have roughly $500 million each. It is disgusting that malnourishment and other forms of completely unnecessary deprivation continue to exist. They persist because market capitalism profits from servicing the problems they generate, and has no real interest in bringing about an end to scarcity, because assumptions of scarcity are fundamental to how this competitive system works. If you add up all the deaths caused by the various negative externalities ultimately traceable to market competition’s root socioeconomic orientation, capitalism has killed more people than all of the 20th century’s despots combined, and has enslaved more people than any other system in history.
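As a back-of-envelope check of the billionaire arithmetic, taking the paragraph’s figures at face value (they are the essay’s claims, not independently verified data):

```python
# Back-of-envelope check of the billionaire/hunger claim.
# All inputs are the essay's own figures, taken at face value.

hunger_cost_per_year = 30e9   # ~$30 billion a year to end world hunger
years = 200
billionaires = 1800

total_cost = hunger_cost_per_year * years   # $6 trillion over two centuries
share_each = total_cost / billionaires      # equal share per billionaire

# For each billionaire to retain ~$500 million afterwards, as claimed,
# average billionaire wealth would need to be roughly share_each + $0.5bn.
implied_avg_wealth = share_each + 0.5e9

print(round(share_each / 1e9, 2))          # 3.33 (billions of dollars each)
print(round(implied_avg_wealth / 1e9, 2))  # 3.83 (implied average wealth)
```

An implied average of roughly $3.8 billion per billionaire is broadly in line with published rich-list figures, so the claim’s arithmetic is at least internally consistent.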


The New Human Rights Movement by Peter Joseph

The Creature From Jekyll Island by G. Edward Griffin

“Modernising Money” by Joseph Lietar




Do you always have work to do at your place of employment, or is your work of a kind where sometimes you are busy, while at other times there’s not much to do? If your workplace is one where activity goes through peaks and troughs, chances are you have encountered an attitude that is usually accepted as normal but which would have struck most of our ancestors as quite bizarre.

The best way to explain what I mean is to quote an employee who has experienced this weird attitude. David Graeber collected several such testimonies in his book, ‘Bullshit Jobs: A Theory’. Here’s a typical example from ‘Patrick’, who worked in a convenience store:

“Being on shift on a Sunday afternoon…was just appalling. They had this thing about us not being able to just do nothing, even if the shop was empty. So we couldn’t just sit at the till and read a magazine. Instead, the manager made up utterly meaningless work for us to do, like going round the whole shop and checking that things were in date (even though we knew for a fact they were because of the turnover rate) or rearranging products on shelves in even more pristine order than they already were”.

What I am referring to, then, is the attitude among employers that regards slack time as something to be filled with pointless tasks, or ‘make-work’.

How else might these slow periods be dealt with? I can think of a few alternative options. The business could send unneeded staff home without pay. They could send them home with pay. They could require them to stay at their posts, but let the staff socialise, play games or pursue their own interests until there are real work-based duties to carry out.

Of all these options, sending staff home with pay is the least popular; it hardly ever happens. Letting employees do their own thing during slow periods is also pretty unusual. Sending staff home and forfeiting their remaining wages is more widely practiced, especially under zero-hours contracts that specify no set hours. But if you are in a regular job and there are times when the work is slow, the most common solution is to have that time filled with useless tasks.

It’s hard to see how this practice of making up pointless tasks is in any way productive. Indeed, a case could be made that it actively encourages anti-productivity. David Graeber recalled an incident from his time working as a cleaner in a kitchen, when he and the rest of the cleaning staff pulled together to get everything done as well and as quickly as possible. With their work completed, they all relaxed…until the boss turned up.

“I don’t care if there are no more dishes coming in right now, you are on my time! You can goof around on your own time! Get back to work!”

He then had them occupy their time scrubbing baseboards that were already in pristine condition. From then on, the cleaning staff took their time carrying out their duties.

Graeber’s boss’s outburst provides insights into why this attitude exists and why it would have seemed so peculiar to our ancestors. He said, “you are on my time”. In other words, he did not consider his staff’s time to be their own. No, he had purchased their time, which made it his, and so to see them doing anything but look busy felt almost like robbery.

How Our Ancestors Worked

But our ancestors could not possibly have conceived of time as something distinct from work, available to be purchased, and they certainly would have seen no reason to fill downtime with make-work. You can tell this is so by noting how make-work is absent from the rest of the animal kingdom. There are animals that live short, frenetic lives, constantly busy at the task of survival: think of the industrious ant, or the hummingbird forever on the move in search of nectar. There are animals that are sometimes active but at other times take life easy, such as lions, which mostly sleep and only occasionally hunt. But what you never see are animals being instructed to do pointless tasks.

There’s every reason to believe our ancestors would have been under no such instructions, either, particularly when you know a bit about the kind of societies they lived in and the practicalities they faced. Our earliest ancestors lived in bands or tribes in which there were no upper or lower classes, for the simple reason that the hunter-gatherer lifestyle would not permit much social stratification.

This should not be taken to mean there was absolute equality among members of bands or tribes, however. Leaders did emerge, distinguishing themselves through qualities like personality or strength. Both bands and tribes had ‘big-men’ who were recognised, in some ways, as leaders. But such leaders would have been barely distinguishable from ordinary tribe members. At best, the big-man could sway communal decisions; he had no independent decision-making authority to wield, nor did he know any diplomatic secrets that could confer individual advantage. Moreover, the big-man’s lifestyle was indistinguishable from everyone else’s. As Jared Diamond put it, “he lives in the same type of hut, has the same clothes and ornaments, or is as naked, as everyone else”.

Given that our ancestors were hunter-gatherers, it would have made no sense for ‘big-men’ to make anyone fill spare time with make-work. No, the sensible thing would have been to permit relaxation during slack periods, so that there would be plenty of energy when the time came to put it to good use. You can imagine how there would have been seasons in which there was plenty of fruit to gather, or moments when everyone should mobilise to bring home game. But afterwards, when the fruit was picked and the hog roasting on the spit, the time left was better spent playing, socialising, or resting.

This is, in fact, how we evolved to work. We are designed for occasional bursts of intense energy, followed by relaxation as we slowly build up for the next short period of high activity.

This work pattern could hardly have changed much when human societies transitioned to farming and developed into chiefdoms and larger hierarchical societies. After all, farming is also very seasonal work, so here too it would have made much more sense to adopt work attitudes that encouraged intense activity when necessary (such as when the harvest was ready to be gathered) but at other times to just leave the peasants alone to potter about minding and maintaining things or relaxing.

Now, it’s true that the evolution of human societies into hierarchical structures not only entailed the emergence of a ruling ‘upper class’ but also a lower caste of slaves and serfs. But, although we commonly conceive of such lower caste people as being worked to death by brutal task-masters, in actual fact early upper classes were nowhere near as obsessed with time-management as is the modern boss and didn’t care what people were up to so long as the necessary work was accomplished. As Graeber explained, “the typical medieval serf, male or female, probably worked from dawn to dusk for twenty to thirty days out of any year, but just a few hours a day otherwise, and on feast days, not at all. And feast days were not infrequent”.

So, our ancestors saw no need to fill idle time with make-work, partly because it was (and still is) of little practical use. But if masters of serfs could plainly see how silly it is to force make-work on their serfs, why can’t modern managers grasp the same thing with regard to their staff? Well, it all has to do with concepts of time, and that’s something we’ll look into next time…


Bullshit Jobs: A Theory by David Graeber

Guns, Germs and Steel by Jared Diamond


If you could go back in time and say to somebody, “can I borrow you for a few minutes?”, your request would have been met with a baffled look. This would be because such a person would have had no understanding of time as something broken up into hours, minutes and seconds. Instead, what understanding of time there was consisted of passing seasons, cycles like day and night, or the average length of time actions took to perform. “I will be there in five minutes” means nothing to a rural person in Madagascar, but saying it takes two cookings of a pot of rice would have let somebody know how long your journey would likely take. As Graeber explained, for societies without clocks, “time is not a grid against which work can be measured, because the work is the measure itself”.

It’s because our ancestors had no ‘clock’ concept of time that they could not therefore conceive of somebody’s labour-power as being distinct from the labourer himself. Consequently, if somebody came across, say, a cooper, they could imagine offering to buy the barrels he made, or they could imagine buying the cooper himself. But the notion of buying something as abstract as time? How was that possible?

Well, once slavery came about our ancestors did have an approximation to modern employment practices, in that slaves could be rented instead of bought outright. Whenever we find examples of wage labour in ancient times, it pretty much always involves people who were slaves already, hired out to do some other master’s work for a while.

Around the 19th century we do see occasional warnings by plantation owners that slaves had best be kept busy during idle periods, for who knows what they might plot if left with time on their hands? But it took technological innovations from the 14th century onward to really make time seem like a commodity that could be bought, spent, misspent or stolen.

Clocks and buying time

What set us on the road to bosses complaining about ‘their time’ being wasted was similar to what led to the evolution of money. Our ancestors lived in gift-based economies in which favours were freely undertaken with the vague understanding that they would be suitably reciprocated at a later date. But when was a favour suitably reciprocated or a slight adequately compensated? Such questions led to rules, regulations, laws and contracts that gradually quantified obligations and transformed them into debts and credits that could be precisely calculated.

By the 14th century, clocks had been invented and began to show up in town squares. But where the clock-based concept of time really took off was in the factories of the industrial revolution. The increasing routinisation and micro-tasking of work that typified the production-line brought about the quantisation of time into discrete chunks that could be bought, and the need to coordinate logistics led to standardised times (imagine running trains when no two towns agree on when it is 2PM). By dividing time into the now-familiar hours, minutes and seconds, we created a concept of time that conceives of it as a definite quantity that can be purchased, distinct from both the labourer and his produce. It became possible to conceive of buying a portion of his time and owning whatever produce was created during that time, while not actually owning the labourer himself. This, of course, is what distinguishes an employee from a slave.

But once we began thinking about time as discrete units that could be bought, that led to a belief that time could be wastefully spent, not just by being literally idle but by spending ‘somebody else’s’ time doing your own thing, like playing a board-game or reading a magazine. The attitude I referred to earlier (‘don’t let slaves be idle lest they plot to free themselves’) was carried over to working practices in industrial cities. This, combined with the idea that you could buy somebody’s time but they could then waste (misspend) ‘your’ time, led to the peculiar modern notion of time discipline and its obsession with busyness and make-work. From the 18th century onwards, you get the emergence of bosses and upper classes who increasingly viewed the old episodic style of working (occasional bursts of intense energy followed by relaxation while building up for the next short period of high activity) as problematic rather than sensible. Moralists came to see poverty as being due to bad time-management. If you were poor, it was because your time was being spent recklessly or wastefully. What better remedy than to have your misspent time purchased by somebody who was rich and, therefore, better able to budget time carefully, as one who is frugal would budget and dispose of money?

It was not only the bosses who came to see time as purchasable units that might be misspent. So, too, did employees, especially since the old struggle between the conflicting interests of employer and employee meant the latter also had to adopt the clock-concept of time. If you are an employee, you want an hourly wage for an hour’s work. But if you are the boss, it would be preferable to somehow extract more than an hour’s work for an hour’s pay. Early factories did not allow workers to bring in their own timepieces, which meant employees only had the owner’s clock to go on. Such owners regularly fiddled with the clock so as to appropriate more value from their employees (by getting them to do overtime for free). This led to arguments over hourly rates, free time, fixed-hour contracts and all that. But, as David Graeber pointed out, “the very act of demanding ‘free time’…had the effect of subtly reinforcing this idea that when the worker was ‘on the clock’ his time truly did belong to the person who had bought it”.

So, the belief that any spare time in work should be filled with pointless tasks came about as a result of somebody’s time becoming conceived of as distinct units that somebody else could buy and, consequently, as something that could be stolen or misspent. This in turn led to a form of moralising that regarded idleness as sin, something to be eradicated through the provision of make-work, through indignation upon seeing employees doing anything other than their jobs, and through insisting that they pretend to carry out tasks when their actual work is done.

It’s not just in stores, offices and factories that this attitude prevails. Where care work is concerned, the service being offered can sometimes consist of being on stand-by just in case the elderly client needs attention. But the elderly person can get so indignant about the carer ‘sitting around wasting my money’ that the carer ends up being asked to pretend to do ‘something useful’, like tidying a home they have already tidied. From the perspective of the stand-by carer, this can make the work intolerably frustrating.

The future of make-work

Make-work also has worrying implications if future technological capabilities will be as potent as futurists like Ray Kurzweil claim. I would argue that each major work revolution has focused on successively less urgent demands. The agricultural revolution was concerned with food production, which is of obvious importance since we cannot live without food nor do any other work without adequate nutrition. The industrial revolutions (and the socialist movements that accompanied them) led to higher standards of living and increased comfort. While not as essential as food, conveniences like microwaves, carpets and television sets can make life more pleasant, and the products of manufacturing enable us to carry out essential work with more ease.

But what happens when people have enough of what they need to lead healthy, comfortable lives? Their consumption slows, and that’s anathema to a growth-based system like market capitalism. No wonder, then, that from the early twentieth century onwards public-relations pioneers like Edward Bernays were working with advertising departments to create fake needs so as to sell bogus cures. No wonder, then, that we went from being utilitarian in our attitude toward products, buying them for practical purposes and make-do-and-mending in order to get maximum-possible use out of our stuff, to adopting a throwaway culture, replacing stuff just because it’s out of fashion or because it was designed to fail as soon as can be gotten away with and not built to be easily maintained.

General AI and atomically-precise manufacturing could drastically increase the efficiency with which we manage and carry out the rearrangement of materials, lead to a radical reduction in waste and free up time, as we would have the means to automate most of today’s jobs. Once we have automated jobs in agriculture, manufacturing, services and administration, the sensible thing would be to pursue interests outside of the narrow sphere of wage labour. It would be a good time to rediscover the periodic working practices of our ancestors and the greater commitment to social capital typical of tribal living, only with the added bonus of immense technological capability to keep us safe from hardships that do sometimes affect hunter-gatherer societies.

But is such an outcome likely to happen when it has to evolve within a system based on a throwaway culture, where work is seen as virtuous in and of itself to the extent that ‘spare time’ is considered something that should be filled with pointless tasks? What I am saying is that markets have already proven themselves capable of creating scarcity where little real need exists. So it is not too great a leap of imagination to suppose that the moral indignation stemming from the attitudes ‘time is money’ and ‘you are misspending my time’ could work against what should be capitalism’s greatest triumph: unlocking the potential abundance inherent in the Earth’s richness of resources and elevating us to a position where we can live comfortable lives that need not come with the condition that some must adopt extreme levels of frugality, and where we are free to become all we can be within a more rounded existence. Instead of that promising outcome, we might well just fill the technological-unemployment gap with make-work and bullshit jobs.

What a waste of time it would be if that were to happen.


Bullshit Jobs: A Theory by David Graeber

Guns, Germs and Steel by Jared Diamond


The Road To Freedom?

In 1944 the Austrian economist Friedrich August von Hayek published ‘The Road To Serfdom’. The book set out to argue that the free market is the only viable way of bringing about freedom and prosperity. Actually, the book does not talk so much about the virtues of free markets but rather the downsides of the alternative which, at the time, was central planning. Hayek’s argument was that we can only handle the complexities of reality in a bottom-up fashion, with individuals looking after their own self-interests while guided by pricing signals. This, he reckoned, would result in the efficient allocation of resources arising from what would now be called emergent behaviour.

On the other hand, if we instead relied on a centralised authority to determine resource allocation, such an authority would inevitably find the complexity of modern economies too much to handle. The only way the authority could gain some measure of control would be to exercise more power over the people, restricting their freedom and making them live their lives according to some plan. Thus, a socialist economy would become more authoritarian over time. As the title of the book said, Hayek reckoned socialism to be the road to serfdom.

It’s fair to say that the book remains one of the classic texts of neo-liberalism. Margaret Thatcher described Hayek as one of the great intellects of the 20th century, and he was awarded the Nobel Prize in economics in 1974. Even now, some 64 years after its publication, it is still regarded as a definitive refutation of leftist politics and proof that only neo-liberalism can deliver prosperity. You could say that Hayek is as important a figure to the free market as Karl Marx is to communism.

But, I wonder, does Hayek’s argument really successfully demolish every alternative to neo-liberalism? Does the selfish pursuit of money and the conversion of everything to a commodity to be bought and sold on the market still stand as the only way we can achieve peace and prosperity? Or are its advocates wrong to say there is no alternative?

I would say there is an alternative. We are no longer restricted to the either-or choice of laissez-faire capitalism or authoritarian central planning. There might just be a third way.

It’s worth bearing in mind the time in which Hayek wrote his book and how things have changed since then. At the heart of his argument is the belief that the world is really, really complex and, because of this, far too much information is generated for a centralised authority to handle without imposing real restrictions on individual liberty. Only market competition guided by pricing signals can manage such complexity. But, remember, he was writing in 1944. Communication back then was a good deal more primitive than it is today. There was not one satellite in orbit. Now we have many hundreds, if not thousands, constantly monitoring all kinds of things such as weather patterns, urban sprawl, and how crops are faring. This amounts to a network of sensors enveloping our planet and allowing for realtime feedback about all kinds of important things. Such a perspective simply didn’t exist when ‘Road’ was published.

The advances we have made in our ability to transmit information are truly remarkable. The numbers are hard to grasp as they are pretty astronomical, but let’s give ourselves some standard of comparison and see if that helps. The author James Martin proposed the ‘Shakespeare’ as the standard of reference for our ability to transmit information. One Shakespeare is equivalent to 70 million bits, enough to encode everything the Bard wrote in his lifetime.

Using a laser beam, you can transmit 500 Shakespeares per second. Sounds impressive, but in fact fibre optics technology can do much better. By using a technique called Wavelength Division Multiplexing, the bandwidth of a fibre can be divided into many separate wavelengths. Think of it as encoding information on different colours of the spectrum. Some modern fibres are able to transmit 96 laser beams simultaneously, each beam carrying tens of billions of bits per second, giving a single fibre a capacity of around 13,000 Shakespeares per second.

But we are still not done, because many such fibres can be packed into a single cable. Indeed, some companies make cables with more than 600 strands of optical fibre. That is sufficient to handle 14 million Shakespeares per second, or a thousand trillion bits per second.
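As a quick sanity check, the arithmetic behind these figures can be sketched in a few lines of Python. The 10 Gbit/s per wavelength is an assumed round figure standing in for “tens of billions of bits per second”; the cable total is the quoted thousand trillion bits per second:

```python
SHAKESPEARE = 70_000_000  # bits: James Martin's unit, one lifetime of the Bard's writing

laser_bps = 500 * SHAKESPEARE  # one laser beam: 500 Shakespeares/s = 35 Gbit/s
beam_bps = 10e9                # assumed ~10 Gbit/s per WDM wavelength
fibre_bps = 96 * beam_bps      # 96 wavelengths per fibre, about 9.6e11 bits/s
cable_bps = 1e15               # "a thousand trillion bits per second" per cable

print(f"laser: {laser_bps / 1e9:.0f} Gbit/s")
print(f"fibre: {fibre_bps / SHAKESPEARE:,.0f} Shakespeares/s")
print(f"cable: {cable_bps / SHAKESPEARE / 1e6:.1f} million Shakespeares/s")
```

Under those assumptions the per-fibre total comes out near the quoted 13,000 Shakespeares per second and the cable total near 14 million, so the figures in the text hang together.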

Think about that. We can now transmit data equivalent to 14 million times Shakespeare’s lifetime’s output from one side of the planet to the other almost instantaneously. Of course, this is quantity of information and not necessarily quality (not everything we send over the Internet is of Shakespearean standards!) but the point is that we can now send an awful lot of information around the world whereas this was not possible in Hayek’s day.

It would do little good to transmit petabits of information if we did not also improve our ability to store and crunch that data. In 1944 computers barely existed. What computers did exist came in the form of room-sized electromechanical behemoths that consumed huge amounts of power and were so temperamental only specialised engineers could be trusted to go near them.

Ray Kurzweil once said, “if all the computers in 1960 had stopped functioning, few people would have noticed. A few thousand scientists would have seen a delay in getting printouts from their last submission of data on punch cards. Some business reports would have been held up. Nothing to worry about”. And this was in 1960, over a decade after ‘Road’ was published.

Since then, Moore’s Law (related to the price-performance of computer circuitry) has increased the power of computers by billions of times. It has shrunk hardware from the room-sized calculators of old to swift, multi-tasking supercomputers that can easily slip into your pocket. Price-performance has risen from about 100 calculations per second (cps) per thousand dollars in 1960 to well over a billion cps per thousand dollars by 2000. Such an improvement means we can treat computing as essentially free, as shown by the way people are constantly on their web-enabled devices without ever fretting about how much it is costing. Also, computers have become increasingly user-friendly over time, from devices that required considerable technical skill for even simple tasks to modern conveniences like Alexa that can be interacted with through ordinary conversation.
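Taking the 1960 and 2000 figures at face value, the doubling time they imply can be worked out in a couple of lines (a rough sketch; the inputs are the approximations given above, not precise measurements):

```python
import math

cps_per_k_1960 = 100   # ~100 calculations/sec per $1,000 in 1960
cps_per_k_2000 = 1e9   # well over a billion by 2000
years = 2000 - 1960

growth = cps_per_k_2000 / cps_per_k_1960  # a ten-million-fold improvement
doublings = math.log2(growth)             # about 23 doublings in 40 years
print(f"doubling roughly every {years / doublings:.1f} years")
```

A doubling every two years or less is consistent with the usual statements of Moore’s Law.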

The result of all this technological progress is that we are now practically cyborgs from infancy, thanks to the near-constant access to enormously powerful and intuitive computational devices. We live as part of a vast, dense network of bio-digital beings, connected to one another regardless of distance and with ready access to all kinds of information and digital assistance.

What this has to do with Hayek’s argument was expressed in an opinion put forward by David Graeber: “One could easily make a case that the main reason the Soviet economy worked so badly was because they were never able to develop computer technology efficient enough to coordinate such large amounts of data automatically…now it would be easy”.

In part two, we will see how the Internet and other technological advances provide options that were not feasible when ‘Road’ was written.

When Hayek wrote his book there was no Internet. Nobody was a blogger. Not one video had been uploaded. There was not a single Wikipedia entry, not one modded videogame. Linux and bitcoin were not words in anyone’s vocabulary. Now, such things are a ubiquitous part of modern life and most of them are free, part of the collaborative commons. OK, the price of bitcoin went crazily high, but its founder provided the underlying blockchain technology gratis, and made its white paper public knowledge so anyone could improve and expand upon it to create things like a decentralised social media site built on a blockchain.

Indeed, there’s now a great many things we can do on a voluntary basis. Much of the content of the web owes its existence more to passion than the pursuit of money. Jeremy Rifkin calls this ‘collaboratism’. Collaboratism means engaging in work not because financial pressures or some authority compels you, but because the means of producing and distributing stuff has become cheap enough that anyone with any drive to do something has the means to flex their creative muscles, and to connect with others with complementary skills.

This kind of technological progress changes many things. For example, when you have ready access to manufacturing or logistical systems it makes more sense not to have private ownership of stuff (which nearly always entails that stuff sitting in storage not being used for most of its life) but rather using stuff as and when you need it, and then making it available for others to use when you don’t. Think, for example, of driverless cars that could be there when you need transport and make themselves available for others to use if not. If that car was your own private possession, it would probably be parked somewhere not being used by anyone for long stretches. What a waste of resources!

This is the kind of world advocated by the Zeitgeist Movement. Critics of Peter Joseph tend to dismiss him using the same arguments Hayek used in ‘Road’. But this is to fundamentally misunderstand Joseph’s position. He is in no way advocating any centralised control, but rather more efficient decentralised methods than the corrupt monetary systems that are leaking value from today’s markets.

As to why neo-liberals tend to mistake Zeitgeist’s resource-based economy for central planning, maybe it can be traced back to concept drawings by Jacque Fresco? His Venus Project shows plans for cities whose infrastructure is organised into a circle, at the centre of which sits a big computer monitoring the various flows of information a city generates. Such an illustration sure makes it seem like a centralised authority is in charge.

But you have to bear in mind that this city-wide perspective is only one viewpoint. If we could zoom out, we would see that the spokes of this ‘wheel’ radiate out beyond the confines of the city to connect with other cities, such that it becomes a node in a web of interconnected smart cities. Or, you could zoom in to a more personal level, and see that each person is a node in the network thanks to the web-enabled devices they have ready access to. Just shift perspective and what seems like a centralised master computer turns out to be a node in a network.

I would make an analogy with the web of life. Imagine telling somebody that there is a digital programme, encoded in DNA, running evolution. Imagine that person demanding to know where, precisely, the computer running this programme is located, and also telling you evolution can’t possibly work because Hayek proved centralised planning is hopeless. This would be a fundamental misunderstanding, because the code of life is not to be found in any particular location, but rather distributed throughout the world. Nobody is in charge, there is no top-down authority commanding natural selection.

Similarly, when confronted with Zeitgeist’s outline for systems of feedback that would enable us to track the world’s resources and manage them according to the principles of technical efficiency, it’s always denounced by critics as central planning. It’s almost as if such people forget the Internet ever existed.

When Hayek wrote ‘Road’, mass production was the most obvious manifestation of market competition’s drive to produce sellable commodities, and mass production at that time was largely dependent on factories powered by large power stations. Those were hugely expensive means of production that only a minority could afford to own, and which were most efficiently run along fascist lines. You might have been free to quit your job, but once you clocked in you became part of a vertically-integrated management structure and had authorities whose orders had to be obeyed (and who, for the most part, were more interested in lining their own pockets and those of the banking and governmental masters they answered to than rewarding your efforts).

In marked contrast, the technologies of the 21st century could enable production by the masses, for the simple reason that the means of production and distribution could become ever more accessible in terms of cost and ease-of-use. Few can own a factory but if the price-performance of atomically-precise manufacturing goes far enough, what is effectively a factory in a box could sit beside your printer, and if robots follow the same trajectory as computers they should go from being very limited, expensive and largely inaccessible labour-saving devices to cheap, versatile, user-friendly, ubiquitous helpers. We could all become owners of the means of production. Such a decentralised form of production works best when we act as collaborating individuals united by complementary strengths and weaknesses in laterally-scaled networks, which is quite different from the vertically-integrated management that jobs have traditionally been designed around.


When Hayek wrote ‘Road’, the only alternative to free markets he could imagine was central planning. But really, who could blame him? There was no satellite communication, hardly anybody had access to computers and the World Wide Web did not exist. In short there was none of the infrastructure that the digital commons needs to get off the ground, making it perfectly reasonable for Hayek not to consider collaboratism as a viable alternative to the selfish pursuit of money.

Now, the infrastructure is beginning to fall into place. We have a communications web, an information web, and the beginnings of a logistic web and energy web too. Thanks to advances in artificial intelligence, robotics, nanotechnology and more, we are approaching the point of near zero-marginal cost for the creation and delivery of all kinds of content, not just digital stuff but physical stuff too. We can now work together, forming groups and collaborating on projects out of passion rather than out of some selfish pursuit of monetary gain.

‘The Road To Serfdom’ still stands as an effective argument that market competition is preferable to central planning. But when you consider how laissez-faire principles brought about the financial crisis of 2008 (Wall Street really did take advantage of Ayn Rand devotee Alan Greenspan’s deregulation and the commodifying of political influence to make fraudulent activity legal and prey on people’s financial gullibility), and the impossibility of sustaining free market principles in anything that resembles the way market competition actually developed (covered in my essay series ‘This Is What You get’), I suspect that, were he alive today, Hayek would be championing the Zeitgeist movement as the best way of bringing about prosperity. In 1944 there may have been no viable alternative to neo-liberalism, but that’s changing.


‘The Road To Serfdom’ by Friedrich Hayek

‘Zeitgeist Movement Defined’

‘The Zero-Marginal Cost Society’ by Jeremy Rifkin

‘The Age Of Spiritual Machines’ and ‘The Singularity Is Near’ by Ray Kurzweil

‘The Meaning Of The 21st Century’ by James Martin

‘Bullshit Jobs: A Theory’ by David Graeber
