Hot Air: An Allegorical Tale


Once upon a time, people dreamed of being able to fly. This dream led to two kinds of flying machine being built. One type of ‘flying machine’ was the ornithopter. I put the term flying machine in scare quotes because, when such machines were scaled up to be large enough to transport passengers, they were utterly useless. Knowledgeable critics pointed out fundamental reasons why this kind of flying machine could never work, which really should have spelled the end of ornithopters. Nevertheless, there were always those who persisted in believing it could be made to work. Due to the comedic footage of these machines’ pathetic attempts at flying, such believers were labelled ‘Commies’.

The other kind of flying machine was the hot air balloon. Unlike the ornithopter, which had never worked when scaled up to accommodate human passengers, the hot air balloon did show some limited success in raising people up. Noticing how people were climbing into baskets suspended under balloons and enjoying the freedom of the skies, folks nicknamed balloonists ‘free basketeers’, and the adoption of the ‘free basket’ spread around the world.

The free basketeers saw nothing but a bright future ahead. They pointed out the successes that had been made in lifting people up (undeniable to anyone willing to look at the evidence) and also pointed out the improvements that had been made to balloons, enabling them to go higher, travel further, and lift more people up. Sure, there had been some mistakes along the way and even a disaster or several, but overall the trend was toward ‘better’. The free basketeers confidently asserted that the ability of free baskets to raise people up was limitless and that future generations would surely be flying through space in hot air balloons.

But a few critics believed there were serious flaws in such assumptions. Although they could certainly see that free baskets had shown limited success in lifting people up, they also highlighted various problems that seemed to be getting more serious. For one thing, there was evidence that the higher up you went, the colder it got. If such trends continued, they argued, then by the time you got into space it would be so lethally cold that you and your passengers would freeze to death. And then there was the problem of breathable air. The higher up you went, the thinner the air became. So people would asphyxiate long before they got to fly around the Andromeda galaxy.

All such objections were dismissed by the free basketeers. “Higher altitudes are correlated with colder conditions? We don’t believe in that global cooling nonsense!” (The scientific consensus that, yes, it really is unbearably cold in space did nothing to shake their belief in the boundless expanse of free basketeering.) “Oxygen will run out before we reach outer space? Oh, pooh, such talk of reaching ‘peak oxygen’ just fails to see that ballooning will find a way!”

Meanwhile the critics continued to insist that there were fundamental limits to what could be achieved via ballooning, and continued to highlight problems that free basketeers dismissed with counter-arguments that seemed wholly convincing to them but very flawed to the critics. And the critics did not just criticise. Some, like Buckminster Fuller, Jacque Fresco and Peter Joseph, reckoned that hot air ballooning had provided the foundations for an entirely different way of doing things. For example, there was that burner used to heat up the air inside the balloon. If you aimed it at the ground you got a little push, in accordance with the Newtonian principle ‘for every action there is an equal and opposite reaction’. This gave some the idea that ‘rockets’ with enough thrust to achieve escape velocity could one day be built. They drew up outlines (not complete blueprints, mind you; there was a great deal left to be figured out) showing how such rockets could be built.

Another vague design took as its basis the material that balloons were made out of, which had been getting lighter and stronger. Some fringe thinkers considered the possibility of sails that could use the solar wind to push a craft through space. As with the rockets, there were a lot of details missing from such plans, but those who saw merit in the idea insisted it could work in principle, whereas balloons would never actually transport us to other planets and stars, regardless of what the free basketeers told you.

How did the free basketeers respond to these alternatives to balloons? They were dismissed as totally impractical. Very rarely was any valid criticism put forward. Rather, anyone who spoke negatively about free baskets or outlined an alternative was dismissed as a ‘commie’, told (as one might explain to a simpleton) that ‘ornithopters cannot work’. The basketeers had come to believe that ornithopters or hot air balloons were the only kinds of flying machines there could be and they regarded all alternatives through this blinkered perspective. If you did not believe in the free basket, you had to be a commie!

So how does the story end? I do not know. It could be that free basketeers succeed in persuading people that anyone attempting a different approach is a commie and that there are no limits to how high and how far you can go in a hot air balloon, resulting in catastrophe as this particular approach runs up against problems it is fundamentally incapable of dealing with.

Or, it could be that those proposing alternatives to both ornithopters and balloons convince enough people of the potential to be found in their way of doing things, if only we could redirect our efforts to such proposals. In that case, perhaps the free basket would be remembered as a somewhat successful means of providing flight whose adoption was understandable given the knowledge available at the time, and whose eventual replacement was wholly necessary.

So what did we learn from this allegorical tale? We had two attempts at making a flying machine, and you probably guessed that ‘commie’ and ‘free basket’ were puns on ‘communism’ and ‘free markets’. The ‘free basket’ approach to flying demonstrated limited success in ‘raising people up’ (in a literal sense, not the metaphorical sense in which the free market is claimed to be a rising tide that lifts all boats). This led free basketeers to suppose there is no limit to how high and how far you can go in a balloon, a belief that can only persist by ignoring the mounting problems that such an approach is fundamentally incapable of solving.

But capitalism is at root a competitive system based on exploiting scarcity for the purpose of gaining material advantage, and so it needs scarcity to persist even as it makes some progress in reducing want in limited cases. It therefore ends up creating problems: ecosystems struggling to cope with the garbage our throwaway consumerist cultures discard; resource wars; social decay; a banking and financial system that (roughly speaking) creates fictional wealth, boosting the paper profits of corporations while loading increasing debt onto the lower classes; and other increasingly urgent issues that tend to be brushed aside as ‘negative externalities’.

We also imagined how, with some ingenious rethinking and a great deal of applied effort, some of the technologies built to make ballooning work could be used as the basis for a completely different approach. That is what the likes of the Zeitgeist movement are trying to do: come up with credible outlines for an alternative to both capitalism and communism that identifies the fundamental flaws in these older economic models and applies such strategies as systems thinking, strategic abundance, technical efficiency and the Creative Commons to sketch out a different method of running an economy. But just as rockets were dismissed as ornithopters (even though they obviously are not), the ‘resource-based economy’ outlined by Zeitgeist gets dismissed as communism, even though it obviously is not.

So, in summary, my story was a cautionary tale about how the limited success free baskets/free markets had in raising people up somehow led to an irrational belief in this method’s boundless potential, an optimistic outlook that could only be sustained by adopting a ‘head in the sand’ approach to mounting problems and directing straw-man attacks at promising alternatives. All in all, a pretty apt portrayal of how debates over the flaws of communism/capitalism and the potential of RBE tend to go.


“Utopia For Realists” by Rutger Bregman

“The New Human Rights Revolution” by Peter Joseph

“The Virtue Of Selfishness” by Ayn Rand


What Flexibility In Gig Work?

Labour’s shadow chancellor, John McDonnell, once claimed that workers’ rights are now in the kind of precarious situation that has not been seen since the 1930s. Specifically, the shadow chancellor highlighted the gig economy and zero-hours contracts. He likened them to the kind of employment his father had to endure. His father was a docker, and every day he had to stand around with other men in the hope he would be selected for work that day. It was a precarious existence with no guarantee of wages from one day to the next. The gig economy, claimed the Shadow Chancellor, offers a similarly raw deal.

Some commentators reckon that this is not the case, claiming that many workers enjoy being independent contractors and the flexibility that comes with it. That flexibility is apparently so desirable that they would gladly give up other rights that labour unions fought hard to establish, such as sick pay, holiday entitlement and a minimum wage.

So, is it really true that people crave the flexibility offered by the gig economy, so much so that they would forgo a great many workers’ rights if offered the choice?

I think that depends on what kind of flexibility we are talking about.

I can imagine a form of flexibility that would make gig work pretty darn attractive to the employee. The kind of flexibility I am imagining is the kind where the availability of work is entirely at the workers’ convenience. In other words, whenever you want to work you may do so, but equally the moment it’s more convenient to quit that is also perfectly OK.

One can imagine people adopting all kinds of working patterns, tailored to all kinds of lifestyles. People who work through winter and relax through summer; people who have one day on, one day off; people who have no routine at all and are in and out of work as the mood takes them. If the flexibility that the gig economy offers really is the kind that works entirely at the independent contractor’s convenience, then those commentators must have a point when they say that many people prefer this kind of work.

But there is another kind of flexibility, the kind where you are at a business’s beck and call. When the business has work available, you are there to do it. As soon as your services are no longer required, you are sent home, ready to spring into action the moment it is convenient for the business to hire you once more. Notice that this kind of flexibility need not necessarily work entirely in the independent contractor’s favour. There may be times when you’re called to work but it’s not so convenient. It’s a sunny day and your friends are off to the beach. You feel under the weather. Your partner is in labour and you want to be there to see your first-born enter the world. And those times when it’s convenient for the business to send you home may not be so convenient for you. “No work available today, huh? Darn, I could really have used the money”.

Now, who would benefit from the kind of flexibility that puts workers entirely at a company’s beck and call? Obviously, the employers would be the ones to benefit. You can’t tell me that in an ideal world (as seen from their perspective) companies like Uber wouldn’t rather there be flexibility of the ‘beck and call’ variety.

Strictly speaking, workers have always had the choice to work any hours they like. It’s not as if factories and offices lock the doors, trapping their workforce inside until the boss declares sufficient work has been extracted from them (well, OK, some sweatshops in developing countries do this, but they are exceptional cases). So, really, you could walk away from your job at any time.

I should point out that I am talking about a particular kind of freedom here. What I really mean is that you are free to do something like walk away from your job, provided you are willing to accept whatever consequences may result. Provided you are prepared to face whatever may come, it’s hard to think of an example where one is not free to choose. You are free to commit crime, free not to pay taxes, free to jump off a cliff.

Of course, most people would say that the consequences could be so bad it acts as a sufficient restraint on how we behave. So, most people pay their taxes and obey the law not because they must in the sense we must all die one day, but because they choose to obey and therefore avoid the consequence, rather than choose not to and maybe face those consequences.

Similarly, even if you are in a regular, full-time, Monday to Friday, nine to five job, you don’t have to stick to that schedule. You could decide, “you know what, it’s Wednesday, it’s 10 AM and it’s a sunny day outside. So I am off”, and just go, cheerily telling your superior, “I’ll be back tomorrow. Or not. Depends on my mood”.

Would there be consequences? Probably yes. Most likely, you would figure that such behaviour would result in your being fired, and the threat of that would be enough to ensure you stick to your contract and be at work when it says you should.

For gig work to offer the good kind of flexibility, where you get to work only when it is convenient for you, the consequences of choosing not to work can’t be so dire that you regulate your behaviour the way people tend to do in regular work.

Now, where regular work is concerned, the long struggle of unionisation and workers’ rights has established procedures that must be undertaken when dealing with workers who break the rules. In lots of cases you can’t just be sacked, but must first receive verbal warnings, written warnings, and only if you persist in your behaviour does the company finally have it in their power to terminate your employment.

But what about the gig economy? Businesses offering work of this kind do not need to go through any kind of disciplinary procedure. They don’t even need to fire anyone. Indeed, seeing as how workers in the gig economy are supposedly self-employed contractors, they can’t fire you because you do not officially work for them. But what they can do is stop you from using the app.

I can imagine how rumours going around could constrain workers’ freedom to choose whether or not to work. The chatter might go something like this. “Man, be careful. That app you use to accept or refuse work? It logs every time you refuse. I heard that if you turn down too many offers you go on a blacklist and can’t get any more work for months. I would accept at least 80 percent of the jobs they offer if I were you”.

By the way, this isn’t just a made-up scenario. In actual fact, businesses in the gig economy often do require their independent contractors to accept a certain percentage of tasks or otherwise face being temporarily or permanently banned from using their app. And it usually is a high percentage that you must accept, like 80%. Put another way, you are free to turn down the opportunity to work…twenty percent of the time. But other than that, if the app says there is work to be done you had better agree to do it, or else. To me, this is kind of like saying something is entirely free…you just have to pay £80 in order to access it.
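The arithmetic of such a policy is simple to sketch. Purely as an illustration (this is not any real platform’s code; the 80% threshold, the function names and the ban logic are assumptions made up for this example), an app-side acceptance-rate check might look something like:

```python
# Hypothetical sketch of an acceptance-rate rule like the one described above.
# The 80% floor and all names here are illustrative assumptions, not any
# actual gig platform's implementation.

def acceptance_rate(offers_received: int, offers_accepted: int) -> float:
    """Fraction of offered tasks the contractor accepted."""
    if offers_received == 0:
        return 1.0  # no offers yet, so nothing has been refused
    return offers_accepted / offers_received

def can_keep_tasking(offers_received: int, offers_accepted: int,
                     threshold: float = 0.8) -> bool:
    """True while the contractor stays at or above the assumed 80% floor."""
    return acceptance_rate(offers_received, offers_accepted) >= threshold

# A contractor offered 10 jobs may refuse at most 2 before dipping below 80%:
print(can_keep_tasking(10, 8))   # at the threshold: still allowed
print(can_keep_tasking(10, 7))   # below the threshold: risks a ban
```

The point the toy model makes concrete is the asymmetry in the text: out of every ten offers, only two refusals are genuinely ‘free’; the other eight are effectively compulsory.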

Furthermore, we should not assume that things will stay as they are. Perhaps things will change in a positive way, with future kinds of gig work offering more of the ‘work whenever you feel like it’ kind of flexibility. But, then again, it could go the other way and become more about working at the businesses’ convenience.

Which way it goes depends on who has the greater power to influence the market and the political system. Given that the owners of successful disruptive software can become multi-billionaires, amassing wealth far beyond the aspirations of anyone tasking in the gig economy, it seems to me that the safe bet is to say it will be the gig business owners who wield the greater influence.

Since we mentioned disruptive software, we should also take into consideration the effects of artificial intelligence. According to some experts, the algorithms coming in the foreseeable future will wipe out most middle-class jobs. The result will be a market that, on one hand, consists of a few owners of algorithms that came out on top in a ‘winner takes all’ environment, perhaps becoming multi-trillion-dollar businesses.

On the other hand, there will be everybody else, a great mass of humanity competing for whatever jobs are left. When you consider Moravec’s Paradox, which is named after the roboticist Hans Moravec and states that whatever we find easiest to accomplish, AI finds hardest, and vice versa, you can see that AIs are more likely to take away the sort of jobs you need a college or university education for, rather than the sort anyone can walk in off the street and do. In other words, it will be AIs that do things like writing articles for journals, flying airplanes, and performing legal and medical work (which they can already do to a limited extent), leaving work like scrubbing toilets and dusting shelves to humans (work that robots are still pathetic at).

This larger group has been labelled the ‘precariat’, a combination of ‘precarious’ and ‘proletariat’. The name underscores the precarious situation this mass of humanity (which will include the majority of us, by definition) will find itself in: there would be so many people competing for whatever jobs are left that businesses which still require human labour would have enormous pressure to use to their advantage.

We imagined earlier how gig-based businesses could use the data harvested by their apps to filter out those who persist in working at their own convenience, encouraging (coercing?) more to lean toward the ‘work at the company’s convenience’ approach to tasking. Once you have a reserve army of independent contractors prepared to leap into action whenever the boss snaps his or her fingers, where do you go from there?

Probably, you then have your independent contractors bid for tasks, with the ‘winner’ being whoever is prepared to do the best job for the least amount of money. Thus, potentially, the combined disruptive effects of artificial intelligence and precariat work would result in a race to the bottom where the vast majority of us find it virtually impossible to get anything beyond minimal reward for labour, squeezed by pressure from demand for remaining jobs from our fellow precariat on one hand, and on the other by the rapacious profit-seeking demands of that tiny minority who have now become so wealthy and influential they form a plutocracy as powerful as any emperor or pharaoh.


“Austerity” by Kerry-Anne Mendoza

“The Corruption Of Capitalism” by Guy Standing




Welcome to the second part of my essay on the virtues of being lazy.

I think that, when you look at the contrast between work we evolved to do and how work is often structured in jobs, you get the real explanation for the sort of laziness that gets held in contempt.

Of course, nobody really idolises hard work to the point where even adopting practical labour-saving solutions is frowned upon. We can all see the benefit to be had from avoiding work that does not need doing. What we don’t like are people who do evade necessary work, the sort who could get a job if they really tried but instead remain unemployed, sponging off those who do contribute to society.

At the same time, we understand why they are reluctant to get off their backsides. After all, work sucks. Songs like ‘Manic Monday’ and ‘Tell Me Why (I Don’t Like Mondays)’ are popular precisely because they tap into that widely held feeling of “oh no, not another working week”. We love weekends, bank holidays and vacations mostly because of what we are not doing, namely working.

Our hatred of those permanently workshy characters is fuelled partly by jealousy. Lucky sods. How dare they get to avoid work when I must submit to it? We therefore organise society to punish those who refuse to work, partly as a means of correcting their behaviour but also as a deterrent intended to stop us joining them (which, secretly, most of us would prefer to do). After all, since we hate work but work is necessary, a carrot-and-stick approach is required to keep us turning up at our employment.

But this idea that we hate work…really it is a pernicious myth. We don’t ‘hate work’ at all. You can see this is so by looking at how people spend their time off. Very rarely do we spend it in the stereotypical way the unemployed are portrayed as living, which is ‘sitting around doing nothing’. No, we are up and about making plans for socialising, carrying out repairs or improvements to our homes, tending our gardens. We have hobbies we throw ourselves into. In fact, we often pack so much activity into our breaks we joke half-seriously that we could do with another break to recover!

No, we don’t hate work. No animal that evolved to have such a large brain, such an imaginative mind, such dexterous hands, such complex language and cooperative abilities, got that way through hating work. That would be as weird as dolphins being evolved to have streamlined bodies and flippers and yet hating water and staying away from it.

We actually enjoy working when it consists of short bursts of activity whose competent execution leads directly to reward for ourselves and our loved ones. That scene from ‘Witness’ in which an Amish community work together to construct a barn exemplifies the sort of work we like doing: labouring as part of a family to produce something that will directly improve that family’s life, then resting in sociable contemplation of what we have achieved.

You don’t necessarily get that kind of satisfaction from a job. Many people, I am sure, would nod in agreement with Karl Marx and his description of work in the capitalist context: “The worker therefore only feels himself outside his work, and in his work feels outside himself. He is at home when he is not working, and when he is working he is not at home”. We have had this separation of the spheres of working life and home life for so many generations that we regard it as ‘natural’, but it would never have occurred to our ancestors that any distinction between working life and family life existed, as indeed it did not in their environments.

Not only do we often feel we have to remove ourselves from those we care about in order to go to work, but the primary goal of a job is not to produce anything of direct value to you or anyone you could reasonably call ‘family’. No, it’s ultimately all for the purpose of enriching strangers; of meeting very abstract goals like seeing some company’s position on the FTSE or Dow Jones go up by some points. That’s not to say you get no compensation from a job. You get paid, at least. But that means you are at least one step removed from your true reward. After all, it’s not really money you want but rather what its purchasing power can provide.

And, increasingly, we use that money in order to access stuff many steps removed from anything we truly need. Tyler Durden of ‘Fight Club’ fame spoke of our condition when he said “advertising has us working jobs we hate so we can buy shit we don’t need”. That’s modern working life, isn’t it? Submitting to work you don’t enjoy because it is structured to be unlike any kind of working pattern we evolved to do; for personal goals that are a gross distortion of true values, and all ultimately for a purpose like ‘increasing the profits of corporation X’ which, frankly, most of us couldn’t care less about.

How strange it is that we fear robots taking away our jobs. Our lazy natures, and our evolved love of work (though certainly not in the form of jobs as they are often structured), should rejoice at the prospect of technology that eliminates jobs once and for all.

I say, liberate our lazy natures! Let us play, let us socialise, let us relax whenever we want and work as and when the mood takes us. We evolved to take things easy but also to broaden our horizons and strive for more. Properly organised, a world of advanced artificial intelligence could bring about a world of true work properly aligned with how we evolved to be.


“Guns, Germs and Steel” by Jared Diamond

“Bullshit Jobs: A Theory” by David Graeber

“Why We Work” by Barry Schwartz


In Praise Of Laziness


What is the greatest human trait? Judging by the way it gets praised so often, one might assume that to be a ‘hard worker’ would be an obvious candidate. By general agreement, it is those who ‘work hard’ who should be rewarded the most. And whenever a politician speaks about wanting to represent the interests of his or her constituency, you can be sure that it will be ‘hard working folk’ who he or she intends to help.

In contrast, to be lazy is not worthy of praise. Indeed, it is considered to be one of the seven deadly sins. Lazy characters in stories tend to be there so as to serve as some kind of morality tale encouraging us to abandon such ways. “Don’t be like this character, look where you will end up”.

Yes, hard work is good and therefore something to be encouraged, while laziness is just wrong and to be disapproved of. At least, that seems to be the attitude society wants to encourage.

But is it correct? Is laziness really all bad? Are we really right in holding up hard work as the ultimate virtue?

I don’t think we are. I think laziness is part of the reason why progress is made; why the future can turn out better than the past.

A major reason why the future can seem brighter is technological development. It is thanks to new technological capabilities that we can reduce or eliminate problems that were hitherto intractable, and aspire to more than was previously obtainable. Now, obviously, work has to be done or else technological progress would grind to a halt. I don’t intend to argue that we should be against hard work. But it does seem to me that ‘lazy’ intentions are, to some extent, the driving force behind a lot of what we invent. After all, a lot of what we invent are ‘labour-saving’ devices. We often invent something because there is a task we can’t really be bothered with and would rather do less of it, or none at all.

Imagine that our ancient ancestors, with their primitive stone tools, only wanted to ‘work hard’. If that were so, then I would argue that they would have shown a great deal less interest in improving their tools. “This tree I am attempting to chop down with my flint knife, it’s going to take an enormous amount of effort. Great! I love hard work, me. Who would want an axe or, heaven forfend, a chainsaw? That would get the work done in half the time, and I am not at all interested in anything but hard work”.

In reality, we couldn’t be bothered to work quite so hard at whatever we were doing, and so we looked for ways to reduce the amount of effort needed to reach our goals. Did our cavemen ancestors progress from stone tools to iron ones out of a desire to work hard in solving the various problems such an evolution requires, or because they were kind of lazy and therefore wanted better tools and less work? In our modern age do people start businesses because they crave the hard work one must undertake to succeed in such endeavours, or because they look forward to one day earning so much profit they can afford to hire staff to do all the work for them (and have you ever noticed how the most vocal proponents of ‘hard work’ tend to be those with enough capital to pay others to do all the work?).

The answer is that both play a part. Human nature is neither one hundred percent committed to hard work nor totally in favour of being lazy. If we were content to just be lazy, our world would look as radically different today as the hypothetical ‘world of hard workers’ just imagined. If we were content to just live as lazy folk, then we would be satisfied with merely meeting our most basic survival needs. So long as we had a quenched thirst, a full stomach and protection from harsh environments, we would have all we could ever want. There would be no desire to make music or play sports or make scientific discoveries. We went on to do all those things because we are lazy beings with the capacity to work hard and strive for more.

We are lazy beings because it makes evolutionary sense to be that way. Energy should not be wasted unnecessarily, and natural selection harshly punishes those that waste it. The successful hunter is the one evolved to catch prey with minimal effort, not the one who prefers the long, arduous chase even when a shorter, easier catch is an option. And prey likewise evolve herd behaviour, camouflage and defences like armour and poisons in order to make it easier to defend themselves against predation. They too get punished if they waste unnecessary energy in thwarting a predator’s intention to make a meal of them. In nature, the winners are the ones who work hard only when they have to.

Given that our ancestors were hunter-gatherers, the sensible strategy would have been to permit relaxation during slack periods, so that there was plenty of energy when the time came to put it to good use. You can imagine how there would have been seasons in which there was plenty of fruit to gather, or moments when everyone should mobilise to bring home game. But afterwards, when the fruit was picked and the hog roasting on the spit, the time left was better spent playing, socialising, or resting.

This is, in fact, how we evolved to work. We are designed for occasional bursts of intense energy, which is then followed by relaxation as we slowly build up for the next short period of high activity.

This work pattern could hardly have changed much when human societies transitioned to farming and were able to develop into chiefdoms and larger hierarchical societies. After all, farming is also very seasonal work, so here too it would have made much more sense to adopt work attitudes that encouraged intense activity when necessary (such as when the harvest was ready to be gathered) but at other times to just leave the peasants alone to potter about minding and maintaining things, or relaxing.

Now, it’s true that the evolution of human societies into hierarchical structures not only entailed the emergence of a ruling ‘upper class’ but also a lower caste of slaves and serfs. But, although we commonly conceive of such lower caste people as being worked to death by brutal task-masters, in actual fact early upper classes were nowhere near as obsessed with time-management as is the modern boss and didn’t care what people were up to so long as the necessary work was accomplished. As Graeber explained, “the typical medieval serf, male or female, probably worked from dawn to dusk for twenty to thirty days out of any year, but just a few hours a day otherwise, and on feast days, not at all. And feast days were not infrequent”.

Part two of this essay still to come.


“Guns, Germs and Steel” by Jared Diamond

“Bullshit Jobs: A Theory” by David Graeber

“Why We Work” by Barry Schwartz




If you are interested in cryptocurrencies such as bitcoin, chances are that you have heard some skeptic make a comparison with ‘tulips’. Why would blockchain-based assets be compared with that particular flower? Well, it is all to do with one of the craziest bubbles ever inflated, which was what I want to talk about in this post. In order to lay down the groundwork, though, we have to go way back in time to the 15th century…

In The Beginning…

The story of how Amsterdam’s most famous bloom became the basis of one of the most infamous speculative bubbles does not actually begin in the Netherlands, but rather in Spain and Portugal. The end of the 15th century saw improvements to the design of ships and inventions that were to prove important for navigation, such as the clock and the compass. Together, these advances made it possible to cross oceans, discover new lands, and open up trade routes.

The Christian kingdoms of Spain and Portugal did just that, famously sending Christopher Columbus west in 1492 on a journey that would ‘discover’ the Americas. Five years later Vasco da Gama journeyed south around the Cape of Good Hope to discover the naval route to India.

With these discoveries, both Spain and Portugal suddenly found themselves with trading options along African and Asian coasts, not to mention access to vast and rich territories in the New World. This meant that, from the 16th century onwards, the scene was set for a transformation from the old feudal economies to mercantile economies. The international trade routes made it possible to generate far greater wealth than grain production on the small feudal fiefs of Europe ever could. Mercantile economies were based on the idea that a country’s total wealth represented the overall profit it made from trade. As each strip of land obviously holds only a limited amount of tradable resources, the volume of a country’s trade depended on the amount of land over which it held trade rights.

Mercantilism therefore led to expansionism, as any European power that could afford it sent off ships in search of hitherto undiscovered territory (not discovered by any other European, that is). It was customary for the monarch to hold claim to the new territory overseas, the management of which required a large administrative body under direct royal control. It had always been profitable to serve the King during times of war, but territorial expansion meant the nobility could now make more wealth serving the King abroad than by managing their private estates.

This led to powerful, centralised monarchies and the creation of the first great European empires. But there was something of a downside to this way of organising things: a powerful, centralised monarchy held back the emergence of a strong and independent mercantile class, which in turn held back private enterprise. The result was that capitalism did not grow out of the empires of Spain and Portugal, but rather out of one of the more disadvantaged newcomers in the race for international trade.

The Dutch East India Company

That nation was the Netherlands. The end of its 80-year struggle for independence from Spain left the nation with no significant aristocracy and not much in the way of marked class differences. Instead, the Netherlands developed a substantial middle class that thrived on trade. Up to the Industrial Revolution, Amsterdam could lay claim to being the greatest city in Europe, as well as to a few ‘firsts’ in capitalism. Many historians consider the Netherlands to be the world’s first truly capitalist nation. The Dutch East India Company, formed in 1602, was one of the first multinational companies. And by being the first company ever to offer its stock on the market, the Dutch East India Company pretty much invented the stock market, so the Dutch could claim that among their list of ‘firsts’ too.

The Netherlands was so successful at trade that it managed to drive the Portuguese off most of their trading posts in the Indian Ocean. By the 1630s, the timing was almost right for a period of mass speculation. Thanks to the trade of their merchants, the Dutch earned the highest salaries in Europe. Shares of the Dutch East India Company were richly rewarding shareholders for their investments, and much of that money was being poured into property, creating a robust housing market. Ongoing appreciation of asset values created excess wealth that went on to fund further asset purchases.

This wealth was setting the scene for an asset bubble, but at the time something was holding back the move toward wild speculation: not everyone could take part. Dutch East India shares were both expensive and illiquid (in other words, not easily resold), and that made them unavailable to all but the wealthiest. The same could be said for the most prized properties. However, a quirk of nature was soon to arise that would seemingly hold out the promise of vast wealth anybody could speculate on…

Enter the Tulips

Tulips had been introduced to Europe around the mid-1500s, and had always held some value. In fact, they still do, as can be appreciated by remembering how famous Amsterdam is for that particular bloom. But something happened around 1634 that would cause the value of this plant to skyrocket, and that something was a virus. The virus, which was transmitted by aphids, had two consequences for the tulip, and both help explain why a crazy speculative bubble arose. Firstly, the virus transformed an ordinary solid-coloured tulip into a startling-looking variegated variety with beautiful flamelike petals. This was a much-prized variety, and as nobody really knew what caused such variegation, there was much speculation as folks attempted to predict which bulbs would develop into the prized tulips.

Secondly, the virus ultimately killed the tulip. This made it something of a hot potato, in that you really wanted to sell the tulip on for a higher price rather than be the sucker who was left with nothing but a dead bulb.

Unlike shares in the Dutch East India Company or prized property, tulips were much more affordable, which meant more people could join in speculating on this particular asset. Not surprisingly, given the stories of immense riches to be gained from selling on a prized bulb, many, many people were drawn in. Most of these people were not experienced traders. In fact, the professionals pretty much shunned the tulip trade and continued investing in good old reliables such as East India stock. They regarded tulips as more of an expression of wealth than a means to that end.

But for more inexperienced traders, the chance of owning and reselling a prized tulip was considered the means to great fortune. Because the tulip spends most of its life as a bulb rather than a blossom, it naturally lent itself to a futures market (something the Dutch called the windhandel, or ‘wind trade’). By ‘futures market’, I mean an arrangement in which buyer and seller agree now on the future price of a good; when the agreed time arrives, the buyer is obliged to pay the seller that amount.

However, waiting for that agreed-upon time to arrive was too slow for the growing crowds of speculators. So trading shifted from the tulips themselves to the futures contracts. And trade them they did, sometimes as many as ten times in one day. You can see, then, how the value of tulips was entering ever higher realms of abstraction. The trade in futures contracts meant that people didn’t have to worry about an actual tulip being delivered. No, their only concern was selling the contract for a higher price than they had bought it for. The result was that, at the very peak of the tulipmania, during the winter of late 1636 and early 1637, a time when the bulbs were still dormant in the ground, not one blossoming tulip actually changed hands.

Funny money

But there is even more to this tale of wild speculation than that. You see, not only were no bulbs being traded, no real money was, either. At that time, ‘real money’ was the guilder, the currency of the Dutch Republic. This was not the paper currency we are used to; it was money based on a specific amount of precious metal, 0.027 ounces of gold. Much of the trade in futures contracts was financed not with real money but with ‘notes of personal credit’. In other words, with IOUs. So not only were no bulbs being traded during the heights of tulipmania, no money was changing hands either. Instead, transactions rested on nothing but the promise to deliver the money in the future.

According to Edward Chancellor, author of ‘Devil Take the Hindmost: A History Of Financial Speculation’, “by the later stages of the mania, the fusion of the windhandel with paper credit created the perfect symmetry of insubstantiality: most transactions were for tulip bulbs that could never be delivered because they didn’t exist and were paid for with credit notes that could never be honoured because the money wasn’t there”.

To give an idea of just how high the price of tulip bulbs rose (or, perhaps I should say, the price of the promise of such a bulb), consider that the highest recorded amount paid for a tulip at that time was a whopping 5,200 guilders. In gold terms, that’s nine pounds of the stuff. You could have bought eighteen modest-sized houses for the price of that one tulip.
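The arithmetic behind that ‘nine pounds’ can be checked in a couple of lines. A minimal sketch, using only the figures quoted above (the gold rate is the 0.027 ounces per guilder mentioned earlier; the implied house price is just the eighteen-houses comparison worked backwards):

```python
# Convert the record tulip price into gold, using the figures quoted above.
GOLD_OZ_PER_GUILDER = 0.027    # one guilder = 0.027 ounces of gold
record_price_guilders = 5_200  # highest recorded price for a single bulb

gold_ounces = record_price_guilders * GOLD_OZ_PER_GUILDER
gold_pounds = gold_ounces / 16  # 16 ounces to the pound

# The eighteen-houses comparison implies a modest house cost
# roughly 5200 / 18, i.e. around 289 guilders.
implied_house_price = record_price_guilders / 18

print(f"{gold_ounces:.1f} oz of gold, about {gold_pounds:.1f} lb")
# 140.4 oz, about 8.8 lb: the ‘nine pounds’ quoted above
```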

It all ends

Like all bubbles, this one could not inflate forever. The end inevitably came, because the bulbs blossomed into flowers or turned out to be dead duds, and because the contractual dates when the IOUs had to be paid with the promised money were coming around. The wealthiest were not hit too hard, since, if you remember, they had continued investing in things like townhouses and East India stock. No, it was the less experienced investors, the people caught up in crowd behaviour, buying futures contracts for tulip bulbs for no reason other than that everyone else was doing so, who got hurt the most. Inevitably, a lot of those people found that their anticipated fortunes amounted to nothing but worthless promises. Fights broke out over the amounts due per contract, and the Dutch government stepped in, declaring that the contracts could be settled for 3.5 percent of their initial value. On one hand, that was obviously preferable to paying the full contract. But 3.5 percent of the most expensive tulip still equated to a year’s salary for some unfortunate citizens.
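Taking the record price from earlier, the government’s settlement terms are easy to work through (the ‘year’s salary’ comparison is the one made above, not something computed here):

```python
record_price_guilders = 5_200  # record tulip price quoted earlier
settlement_rate = 0.035        # government-decreed settlement: 3.5% of face value

amount_owed = record_price_guilders * settlement_rate
print(f"Settling the record contract still cost {amount_owed:.0f} guilders")
# 182 guilders: a huge relief compared to 5,200, but still a painful sum
```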


So that’s the story of tulipmania. What lessons can be applied to blockchain-based assets? Well, firstly, I don’t think it is all that fair to compare blockchain-based assets to ‘tulips’. A tulip does have some value. They are pretty things and people pay for pretty things. But you can hardly call a tulip bulb a general-purpose technology. A general-purpose technology is one that can be used in a great many ways. Examples would be electricity or computing. Just think of all the inventions and industries and jobs that have been built on the basis of those two technologies. The blockchain is also a general-purpose technology, and that means speculating on its future growth need not be sheer pie in the sky. People who expect to make a fortune from crypto-assets might just be making an educated guess regarding the future potential of Satoshi Nakamoto’s invention.

Having said that, all speculation is prone to crowd behaviour. Just because the underlying blockchain technology is sound doesn’t mean that assets built on top of it can’t be scams designed to lure in suckers, or that genuine products can’t fuel asset bubbles as people buy or sell for no better reason than that everybody else is doing likewise. ‘It’s just like tulips!’ may be a retort used by skeptics who don’t really know all that much about cryptoassets and blockchains, but the story of the tulip bubble does hold some valuable lessons. After all, those who do not learn from history are doomed to repeat it.


“Capitalism: A Graphic Guide” by Dan Cryan, Sharron Shatil and Piero

“Cryptoassets: The Innovative Investor’s Guide to Bitcoin and Beyond” by Chris Burniske and Jack Tatar


Philanthropy: Praiseworthy Or Propaganda?



How will Bill Gates be remembered?

If this question had been asked in the 90s, I suspect most people would say the answer is obvious. He would be remembered as the co-founder of Microsoft, a feat of entrepreneurialism that resulted in him becoming one of the richest men in the world.

But there is another noteworthy thing that can be attributed to Bill Gates. For, as well as being extraordinarily rich, he can also be credited with remarkable acts of charity. In 2010, for example, he put $10 billion toward vaccines, the largest pledge ever made by a charitable foundation to a single cause. Also in 2010, Gates and Warren Buffett announced the ‘Giving Pledge’, so called because the wealthy people who sign it pledge to give away at least half their fortune to philanthropic and charitable causes.

So, there are two ways in which Bill Gates might be remembered: either as a businessman who accumulated great wealth, or as a philanthropist who made big donations to worthy causes.

That last legacy of being a great philanthropist sounds like an achievement that could only be viewed in a positive light. But, as with most things, there are two ways of looking at philanthropy. On the one hand, it can be seen as a justification for the social structures that enable some people to acquire disproportionate wealth, for it turns out that, however ruthless such people might have been in gaining their fortune, they ultimately had a significant altruistic side to their character, generously giving to worthy causes.

But, on the other hand, a more cynical take would be that philanthropy is really nothing but a band-aid covering up the exploitative conditions that cause so many to need charity. If society were structured so as not to permit the extremes of inequality that such wealth necessitates, those people would not have required charity in the first place. In short, these so-called philanthropists are just spending a portion of their vast wealth on propaganda and token gestures while not really doing much to alter the structures they took advantage of.

Those opposing viewpoints have existed since the beginnings of modern philanthropy. By ‘modern philanthropy’, I mean large-scale philanthropy based in the private rather than the public sector.

There is general agreement among contemporary historians that modern philanthropy was invented by the great industrialists whose names are now synonymous with extraordinary wealth: John D. Rockefeller, Andrew Carnegie, Cornelius Vanderbilt and people like that. These were legendarily ruthless businessmen whose rapaciousness earned them the title ‘robber baron’.

Rockefeller, for example, acquired his extraordinary wealth partly through industrial espionage. He sent spies into his competitors’ businesses in order to ascertain their financial situations. His own company, Standard Oil, would then undercut a rival’s price, making that rival hopelessly uncompetitive. Meanwhile, in other parts of the country, the price Standard Oil charged was increased in order to make up the difference. According to Dylan Ratigan, “in this way, the company charged its customers a premium to drive the competition out of business, which left those same customers even more dependent on Standard Oil. Rockefeller referred to this approach as ‘sweating the competition’”.

By 1882 Standard Oil controlled up to 90 percent of the oil refining capacity in the United States. Seven years later its monopolistic grip had extended to retail, wholesale and the oilfields as well. In short, Rockefeller’s tactics changed what had been a free market in oil, with prices fluctuating according to competitive supply and demand, into a rigged market where prices were stabilised at artificially high levels. That’s why people like him were called ‘robber’ barons. They used the free-market mandate of increasing one’s wealth via whatever method one can get away with to ultimately end the free market and impose a monopoly that effectively took wealth away from people through rent extraction.

Like Bill Gates, Rockefeller went on to become the world’s richest man. Also like Gates, he went on to build a foundation, the Rockefeller Foundation, dedicated to philanthropic ventures. It was set up in 1913 with $50 million in Standard Oil stock and, by the time of his death in 1937, half of Rockefeller’s fortune had been given away. This legacy, along with the philanthropic acts of other 19th-century American industrialists, can be seen in cities like New York, where you would be hard-pressed to find a museum, art gallery, university, concert hall or charity that cannot trace its origins back to some such businessman.

But, as I said earlier, there has always been a more cynical way of looking at this. Businessweek highlighted this when it wrote, “John D Rockefeller became a major donor-but only after a public relations expert, Ivy Lee, told him that donations could help salvage a damaged Rockefeller image”. Put another way, according to this cynical view, Rockefeller was not actually interested in philanthropy. After all, a true desire to promote social welfare necessarily seeks to change the preconditions that are the root cause of social problems. What Rockefeller was doing was simply placating an angry public by throwing a bit of money at some public service or other, while not really doing much to alter the structures that enabled the few to gain so much at the expense of the many in the first place.


In part one, we talked about the 19th-century industrialists who earned the dubious title of robber baron and who can be credited with inventing modern philanthropy. When Warren Buffett wanted to make a philanthropist out of Gates, his first move was to give him a copy of an essay by Andrew Carnegie (the greatest of all the 19th-century industrialist-philanthropists) called ‘The Gospel of Wealth’. Incidentally, we see here (and also in the case of Rockefeller, whom we talked about in part one) a curious pattern: these billionaires do not seem to consider using their wealth for the common good until somebody else points out that this is an option. We can add a third billionaire to that list, for Paul Allen, the co-founder of Microsoft, is quoted as complaining, “I’ve spent money on jets, boats. I don’t know what to do next”. Notice that it never occurred to him that a portion of his billions might be put to philanthropic use; once again this had to be pointed out to him (by the author Douglas Adams, in this case). There may be some very wealthy people who did not need any persuading to turn to charitable acts, but given that these are, almost by definition, mostly selfish, greedy, ruthless and ethically dubious individuals (it is, after all, kind of hard to acquire that kind of fortune by being a nice guy), I am willing to bet they are few and far between.

That might sound like a harsh assessment, but it was one echoed by the economist Jeffrey Sachs, who reckoned, “They are tough, greedy, aggressive, and feel absolutely out of control in a quite literal sense, and they have gamed the system to a remarkable extent. They genuinely believe they have a God-given right to take as much money as they possibly can in any way they can get it, legal or otherwise”.

I should point out that this was his view of Wall Street traders, so it should not be considered applicable to everyone who is very wealthy. J.K. Rowling is probably not like this, since she owes her fortune to the astonishing success of the Harry Potter franchise, making her more like a lottery winner than somebody who fought their way to the top of business.

Anyway, in Carnegie’s essay, the great iron and steel magnate posed the question, “what is the proper mode of administering wealth after the laws upon which civilisation is founded have thrown it into the hands of the few?”.

Such a question seems aligned with the findings of Thomas Piketty. In ‘Capital in the Twenty-First Century’ (which is actually concerned with capitalism up to the 21st century), the French economist put together the most exhaustively researched analysis of market capitalism and its consequences so far assembled, and drew the following conclusion: when allowed to unfold in its natural, ‘financially liberalised’ state, capitalism will very likely see those who hold significant wealth gaining far greater returns than those who rely primarily on labour income. There are various reasons for this. For example, if you are very wealthy you can afford to hire the very best financial advisors, who know how to skirt around the law and squeeze every drop of value out of your assets while avoiding the tax man. The poor, meanwhile, cannot afford any financial advice and are constantly being targeted by fraudsters, predatory bosses, authoritative bureaucrats and marginally legitimate debt sellers. Also, the owner classes can gain access to high-level capital investments that are simply unavailable to those of us lacking significant wealth in the first place. Piketty summarised this tendency in the inequality r > g: the return on capital tends to outpace the growth of the economy, and of labour incomes with it.

Piketty’s study confirmed what Carnegie believed to be true: wealth tends to grow much faster for the already wealthy. That wealth can then be used to alter the structure of social systems so as to further increase the tendency for wealth to flow toward the elite minority. The behaviour of Rockefeller (discussed in part one) aligns with Piketty’s observation that meritocratic competition makes sense only when one is trying to establish a fortune. Once wealth is acquired, it makes more sense to turn anti-competitive and live off the rent extraction to be had from a market rigged in your favour. Why strive to make a fortune when you can just protect the fortune you (or your ancestors) have already amassed?

The more positive way of looking at this might be to say that it is OK to amass a great fortune, even one that involves ethically-dubious practices, if one then uses that money in charitable ways. Those who adopt this way of looking at philanthropy tend to prefer to cast charitable donations in monetary terms, probably because the donations seem so amazingly generous when so presented. For example, Peter Diamandis and Steven Kotler, in a chapter devoted to technophilanthropists in their book ‘Abundance’, wrote, “by 2004, charitable giving in America had increased to $248.5 billion, the biggest yearly total ever. Two years later, the number was $295 billion”.

By everyday standards, such sums of money are almost unimaginably large, way beyond anything most people could earn in several lifetimes. It therefore seems that those who donate such amounts must be superhumanly charitable and worthy of the highest praises society can bestow.

As for the cynics, they tend to prefer percentages. For example, a study by the Chronicle of Philanthropy found that households earning between $50,000 and $75,000 a year gave an average of 7.6% of their discretionary income to charity. That is 7.6% of a pretty paltry amount, if you ask me. Shouldn’t a society founded on Christian values be giving away more like 40% of discretionary income? The figures are even more dire when we turn to those who earn over $100,000. Even though such folk are obviously in a position to be much more generous, in percentage terms they are less so, giving a paltry average of just 4.2%.

Several studies have come to a similar conclusion: those who can least afford to donate to charity actually donate the most in percentage terms, while those who are most financially advantaged give away the least. Ken Stern, writing for The Atlantic, reckoned that America’s bottom 20 percent donated 3.2 percent of their income to charitable causes, while the top 20 percent gave away a minuscule 1.3 percent. Of course, if you have billions to begin with, then 1.3 percent of that is a lot of money, more than most of us can ever dream of having, let alone giving away. But expressed in percentage terms it comes across as a tiny amount, a mere gesture intended to paper over the fact that society is rigged against the many. A 2011 study from the University of California, Berkeley, found that upper-class individuals are more likely to “exhibit unethical decision-making tendencies, take valued goods from others, lie in a negotiation and cheat to increase their chances of winning”. They also have a disproportionate ability to mould a society so heavily dependent on the pursuit of money into a shape of their liking, so unsurprisingly our societies work to reward them at the expense of so many others.


There is a tendency for rich philanthropists to become patrons of things that primarily interest the upper classes, while ignoring issues affecting the poor, even though such issues are usually far more urgent. We saw this attitude in part one, when we talked about the philanthropic ventures of the 19th-century robber barons. As you may recall, they mostly donated to elite schools, concert halls, museums and the like. People like Carnegie also showed zero interest in tackling issues outside of their own cities.

Given the period in which these men lived, they can be forgiven for being so localised in their philanthropy. After all, this was long before the age of smartphones and global communications networks, so people were nothing like as aware of issues like poverty in Africa as we are today.

But while we might forgive such blinkered philanthropy back then for the reason given, it’s much harder to justify it now. And yet it still occurs. According to Ken Stern’s article in The Atlantic (“Why The Rich Don’t Give To Charity”), in 2012 “not one of the top 50 individual charitable gifts went to a social-service organisation or to a charity that principally serves the poor and dispossessed”.

It would be incorrect to say philanthropists never turn their attention to such issues, because they sometimes do. There is, for instance, the Omidyar Network (brainchild of eBay founder Pierre Omidyar), pursuing such things as microfinancing, which could potentially unlock opportunity for people who cannot access traditional financial and banking services. Another example is the Rockefeller-backed Acumen fund, which invests only in businesses that manufacture and sell, at affordable prices, products and services needed in the developing world (things like mosquito nets and reading glasses). And of course the Bill and Melinda Gates Foundation’s multibillion-dollar commitment to tackling malaria can also be counted among the charitable ventures focused on problems that really matter.

All such examples of worthy causes should of course be commended. But the emphasis on charity as the means to deal with the negative consequences of market competition arguably evades the more pressing question: why is such intervention necessary to begin with? A cynic might say that throwing money at those affected by the negative externalities that are virtually inevitable when people compete to selfishly build their own fortunes via whatever method can be gotten away with, operating within social structures that already disproportionately favour a minority of greedy, ethically dubious people, amounts to giving up and managing the symptoms of a socioeconomic disease rather than seeking out its root and curing it outright. It is rather like constantly topping up the oil in a leaking engine instead of fixing the leak.

As to the root causes, those who have researched the systemic origins of today’s problems have traced them back to incompatible assumptions that have held sway since the Neolithic period. We find one of these assumptions contained within the definition of the word ‘economics’: the study of the allocation of scarce resources. Alongside the assumption of scarcity there is a potentially incompatible assumption applied to growth, which is that it can be infinite.

The reason why the assumption of scarcity is incompatible with the assumption of infinite growth should be so obvious as to not need spelling out. But since our current consumerist lifestyles are using up vital resources far beyond sustainable rates, it evidently is not obvious to everyone, so perhaps we should spell out the contradiction. Growth cannot be sustained indefinitely if it relies on something in finite supply; eventually that supply will be exhausted and the growth must stop. It is worth noting that it does not matter how large the finite supply is: infinite growth is, by definition, infinitely greater. People who dismiss concerns about the unsustainability of economic systems premised on endless consumption, on the grounds that we can gain access to the much larger resources of the solar system or the galaxy or the visible universe (assuming that is even a remotely practical aim to begin with), therefore miss the crucial point that infinite growth in consumption will exhaust any physical resource. Only infinite resources can sustain infinite growth, but then we would have to abandon the assumption of scarcity, which returns us to the essential point: scarcity and infinite growth are incompatible beliefs.
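To put rough numbers on why the size of the finite supply barely matters, here is a back-of-the-envelope sketch. The 3% growth rate and the resource multipliers are purely illustrative assumptions, not figures from any study; the point is only how little extra time a vastly larger supply buys:

```python
import math

def years_until_exhausted(stock, rate, growth=0.03):
    """Years until cumulative consumption uses up `stock`, given that
    consumption starts at `rate` per year and grows by `growth` per year.
    Solves the geometric-series sum: rate * ((1+g)**n - 1) / g = stock."""
    return math.log(1 + stock * growth / rate) / math.log(1 + growth)

# A resource equal to 100 years of current consumption...
base = years_until_exhausted(stock=1.0, rate=0.01)
# ...versus a billion times more of it.
huge = years_until_exhausted(stock=1e9, rate=0.01)

print(f"{base:.0f} years vs {huge:.0f} years")
# Roughly 47 years vs roughly 738: a billionfold-larger supply
# buys only about seven extra centuries at 3% annual growth.
```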

For most of human history since the Neolithic period, it did not much matter that we operated under such contradictory assumptions, because we lacked the practical ability to do much harm. Up until the Industrial Revolution, populations were caught in a ‘Malthusian Trap’, named after Thomas Malthus, who argued that population growth would outrun our ability to provide enough essential resources, leading to famines and other calamities that would reduce the number of mouths to feed to more sustainable levels.

Going back beyond the Neolithic period, we are talking about a time when human populations consisted of small tribes and bands. When populations are small and their capacity to take is restricted, Earth’s resources can seem endlessly bountiful, and indeed they are, so long as the capacity to take remains below the Earth’s ability to replenish. Hunter-gatherers fishing with rods and nets could barely make a dent in fish stocks.

But it’s quite a different story when the pursuit of infinite growth in fish consumption, in the interest of ever more profit, has produced fishing technologies that can capture hundreds or thousands of tons at a time. There were about 1.8 million tons of spawning cod in the Grand Banks when the first commercial fishing ships capable of capturing such prodigious amounts appeared in 1951. By 1991 the stock was down to 130,000 tons, and a year later the Canadian government had to step in and close the Grand Banks to cod fishing, or else the species would have been fished to extinction. That decision came with consequences too: 32,000 fishermen were thrown out of work and required billions of dollars in aid to support their families. You can see in this sorry tale how dangerous the assumption of infinite growth is when applied to a finite resource. I would also point out that the pursuit of more profit could well have been fed by the dwindling supplies of cod, for the scarcer this in-demand resource became, the more expensive and worth pursuing it would be. That would incentivise more profit-seeking fishermen to chase the prized fish, in a positive feedback loop that ends either in the extinction of cod or in government/social intervention to halt the unsustainable consumption.
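The cod story can be caricatured with a toy simulation. Every parameter here is an invented illustration except the 1.8-million-ton starting stock mentioned above: a stock that replenishes by a fixed fraction each year is fished by a fleet whose catch grows a few percent annually, and collapse arrives within decades:

```python
def simulate_fishery(stock=1_800_000, catch=50_000,
                     replenish=0.05, catch_growth=0.08, years=100):
    """Toy model: the stock regrows proportionally each year, while the
    catch grows exponentially. Returns the year the stock first falls
    below 10% of its starting level, or None if it never does."""
    initial = stock
    for year in range(1, years + 1):
        stock += stock * replenish   # natural replenishment
        stock -= min(catch, stock)   # the fleet takes its (growing) catch
        catch *= 1 + catch_growth    # bigger ships, more profit sought
        if stock < 0.1 * initial:
            return year
    return None

print(simulate_fishery())  # the stock collapses within a few decades
```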

Another root cause has to do with how societies have been structured since the Neolithic period. I have covered this in detail in my series ‘This Is What You Get’, so search for that if you want more details, but suffice it to say that societies have tended to be ‘redistributive’. That is, they are societies in which the populace can be divided into two groups. On one hand, a non-producing elite who wield great political power and social influence, who receive ‘tribute’ and who get to disproportionately determine how it should then be redistributed to the rest of the populace (hence ‘redistributive’ societies). On the other hand, everybody else: the producing masses, toiling for minimal reward and wielding comparatively little social influence and political power.

This hierarchical structure has held (with some modifications, though none that truly affect its essential form) through every form of society since the Neolithic period. From the abject slaves and ruling monarchs of Egypt, to the vassals and lords of medieval feudal societies, to the handicraft merchants and state monopolists of mercantilism, and on to our contemporary era, with its growing numbers of precariat employees and its rapacious elite in the financial and banking sectors, we see broadly the same thing: a society in which there are people who work, and then there are those other people (always much smaller in number but far more powerful in other ways) who gain the lion’s share of the reward generated by that work. In short, it is a systemic framework that assures the superiority of a minority for whom the temptations of kleptocracy (stealing from the people they rule) are all too often a siren song they cannot resist. Even communism, which was vaguely imagined to operate very differently, turned out not so different in practice. In capitalism you get bossed by business people, and in communism you get bossed by bureaucrats. Either way it is a redistributive society composed of those who do the work on one hand, and the non-producing elites who disproportionately control the fruits of that work on the other.

Stanford neurologist Robert Sapolsky summarised the issue in the following way: “Agriculture allowed for the stockpiling of surplus resources and thus, inevitably, the unequal stockpiling of them. Stratification of society and the invention of classes. Thus it has allowed for the invention of poverty”.

Ever since, the presence of a powerful elite with kleptocratic tendencies, running societies on the incompatible assumptions of scarcity and infinite growth, has worked to sustain poverty rather than truly to eradicate it. This should not be thought of in terms of conspiratorial plotting. Rather, it is the evolutionary outcome of the selfish pursuit of material wealth by whatever means can be gotten away with, let loose in societies that grant some a pre-existing, disproportionate advantage. The result is the ongoing exploitation of a poverty that cannot be eradicated, for to do so would bring down the very system the elites depend upon for their position.


In the last chapter we ended with the following observation:

‘The presence of a powerful elite with kleptocratic tendencies, running societies on the incompatible assumptions of scarcity and infinite growth, has worked to sustain poverty rather than truly to eradicate it. This should not be thought of in terms of conspiratorial plotting. Rather, it is the evolutionary outcome of the selfish pursuit of material wealth by whatever means can be gotten away with, let loose in societies that grant some a pre-existing, disproportionate advantage. The result is the ongoing exploitation of a poverty that cannot be eradicated, for to do so would bring down the very system the elites depend upon for their position’.

Now, one might think that this assertion can be refuted by examples of the problem-solving capacity of market competition turning a once-scarce resource into an abundant one. Aluminium, for example, was once rare and expensive, but is now so cheap we use it and discard it without a thought. Such examples do not refute the claim, however, because the claim concerns an overall systemic condition; citing isolated cases of a particular resource made abundant does not address whether the system as a whole works to perpetuate scarcity.

Ever since our technological capabilities became sophisticated enough for us to escape the Malthusian trap, we have seen the rise of various ways of artificially perpetuating scarcity. These have included psychological manipulations intended to blur the distinction between genuine needs and frivolous ‘wants’, and the creation of a throwaway culture. Thus, although we still talk about the ‘economy’, what we actually have is nothing like an economy in the original sense of the word (from the Greek oikonomia, the thrifty management of a household). Rather than striving to use our resources as efficiently as current technical knowhow allows (which is what we would be doing if we were really trying to economise), we seem determined instead to turn the world’s resources into junk to be thrown away and repurchased. After all, the goal in a consumerist culture is to sell more stuff, so the last thing you want is for people to be content with what they have (this ties in with the point about our ability to make intelligent choices over what we really need being interfered with). We also have services that are less about helping those in need and more about extracting rent from them: preying on their financial instability to get them further indebted, profiting from their desperation.

This has resulted in two forms of socioeconomic sickness: the existence of needless poverty on one hand, and the existence of wealth obesity on the other. It really should disgust people that anyone can become a multibillionaire in a world where others must subsist on less than two dollars a day. Sadly, ruling kleptocrats have for millennia used propaganda and other methods to keep the masses from developing revolutionary thoughts, so the general complacency about such extremes is not surprising.

As Gillian Tett of the Financial Times said, “most societies have an elite and the elite try to stay in power; and the way they stay in power is not merely by controlling the means of production, but by controlling the cognitive map, the way we think”. We see this ‘control of the cognitive map’ in the way our societies condition us to aspire toward the excesses of the wealthy and to accept many eminently solvable issues as intractable problems we should just accept as “the way it has to be”. It is all to do with the ‘need’ to perpetuate scarcity so that the boundless growth of consumerism can continue to extract profit and the elites can maintain the structures their privileged positions depend upon.

Any call to seriously re-engineer society in order to achieve a more equitable distribution of material wealth tends to meet the same retort: that it can only lead to Leftist totalitarianism. One such retort comes from Forbes writer Jeffrey Dorfman:

“Once you admit that income redistribution is fair, there’s no logical stopping point short of communism”.

There are a couple of flaws in this assertion. Firstly, it seems to forget that the market economy is itself a process of wealth redistribution. Perhaps Dorfman is one of those market ideologues with faith in the ‘invisible hand’ creating peace and harmony out of the selfish pursuit of competing to gain differential advantage over others? But, as Harvard researcher Jonathan Schlefer explained, “beginning in the 1870s, theorists…wanted to show how market trading among individuals, pursuing self-interest, and firms, maximising profit, would lead an economy to a stable and optimal equilibrium…after a century of work, they concluded that no mechanism can be shown to lead decentralised markets toward equilibrium, unless you make assumptions…regarded as utterly implausible”.

Market dynamics turn human frailty and misfortune into commodities to be exploited for profit. And the competition to find commodities that can be sold at the most cost-competitive price encourages fraud (because what could be more competitive than successfully making money out of nothing but bogus claims?). This is why a totally laissez-faire market, operating absent any kind of regulation, will tend to destroy itself. But neoliberalism is driven toward turning everything into a commodity to be bought and sold for differential advantage, and regulatory bodies are no exception to that rule. They can be corrupted into a means of conferring unfair advantage in the interest of selfish profit maximisation. “Crony” capitalism is really just the likely outcome of free market principles operating in a real world whose history is one of hierarchical, redistributive societies prone to kleptocracy.

The other flawed assumption is that, if the goal is ‘fair’ income redistribution, then the logical stopping point is communism. Presumably, Dorfman means a society in which everybody gets the same income (that is, after all, how most people think communism is supposed to work). But how can that possibly be the logical stopping point if the goal is fairness? There is nothing ‘fair’ about equal pay across the board, given that individuals clearly make unequal contributions toward beneficial and detrimental outcomes. It does not bother me that some people are more materially rewarded than I am, and it does not bother most other people either. When asked how wealth should be distributed, most people dismiss full communism and opt instead for a distribution that rewards the most productive while ensuring sufficient wealth at the bottom to alleviate relative poverty (defined as being unable to access the average lifestyle of one’s society). It’s just that the distribution people consider ideal is far more equitable than the one they believe actually exists (and true wealth inequality is even worse than most people believe).

Despite the fact that there is a public call to bring about greater equitability in wealth redistribution, every policy to bring it about tends to be dismissed by neo-liberal ideologies as unworkable solutions that can only end in authoritarianism and the Gulag. For some reason, philanthropy comes out as the only viable means of patching over the harm caused by the selfish pursuit of material wealth. The question is, why?



Last chapter we ended with a question:

‘Despite the fact that there is a public call to bring about greater equitability in wealth redistribution, every policy to bring it about tends to be dismissed by neo-liberal ideologies as unworkable solutions that can only end in authoritarianism and the Gulag. For some reason, philanthropy comes out as the only viable means of patching over the harm caused by the selfish pursuit of material wealth. The question is, why?’.

From the positive perspective, it could be because, where philanthropy is concerned, money is being handled by those with a proven track record of making it work to produce value. Whereas governments are known to waste money on unnecessary bureaucracy, the philanthropists are people who have revolutionised retail, or brought computing to the masses, or built rockets that can land on platforms out at sea. Who could be better placed to use money responsibly and build a better future?

Advocates of philanthropy also cite autonomy as another advantage. This line of reasoning was adopted by Matthew Bishop (author of ‘Philanthrocapitalism: How the Rich Can Save the World’): “They do not face elections every few years like politicians, or suffer the tyranny of shareholder demands for ever-increasing profits, like CEOs of most public companies. Nor do they have to devote vast amounts of time and resources to raising money, like most heads of NGOs. That frees them up…to take up ideas too risky for government, to deploy substantial resources quickly when the situation demands it”. In short, they answer to nobody, and if their heart is in the right place nothing can stop them putting life-changing sums of money to good use.

But, since these are life-changing sums of money the philanthropists are being trusted with, there need to be assurances that their hearts are, indeed, in the right place. The best way to ensure things are done properly is to have transparency and a democratic process. The problem is, there is often neither transparency nor accountability. The World Health Organisation’s head of malaria research, Arata Kochi, compared the Gates Foundation to a cartel, claiming the organisation was “accountable to no-one other than itself”. And Dr David McCoy, adviser to the People’s Health Movement, reckoned “it also operates through an interconnected network of organisations and individuals across the NGO and business sectors. This allows it to leverage influence through a kind of ‘group-think’ in international health”. From this perspective, ‘philanthropic’ organisations have zero transparency, are accountable to nobody, and are really just an excuse to transfer power from the State to billionaires. As Peter Kramer, a critic of the Giving Pledge, said, “it’s not the state that determines what is good for the people, but rather the rich want to decide”. Given that these are often some of the most ruthless exploiters of competitive behaviour and its negative effects, one has to wonder whether unaccountable billionaires working without transparency really can be trusted to serve the public’s interests.

The cynical way of looking at philanthropy is to view it as just a PR exercise whose purpose is to justify some having so much to begin with, while throwing token amounts of money at those enduring the negative externalities that inevitably arise when we compete to gain more by whatever method we can get away with. Where the ‘Giving Pledge’ is concerned, there is no legal obligation to do anything. Signatories merely say they will give away half of their fortune; signing the pledge places them under no enforceable commitment to actually follow through on their promise. Now, if there were transparency, so that the public could see what was being donated and where it was going, that might ensure the pledge is indeed honoured. But, guess what? There is no transparency. So how can we ever know what was given away or for what purpose? Really, then, there is nothing to prevent the Giving Pledge from being a PR stunt intended to placate a public grown sick and tired of the excesses of the rich and the gross wealth inequality fuelled by a ‘greed is good’ culture that has brought such harm to people and their communities. As the philosopher Slavoj Žižek put it, “charity is the humanitarian mask hiding the face of economic exploitation”.

Also, when it comes to the establishment of charitable organisations, there are reasons for taking such action that do not necessarily count as altruistic. You see, by setting up such organisations, the ultra-rich can take advantage of tax loopholes as money is passed through them.

Such was the case with the foundation set up by the Walton family. These five Walmart heirs have a combined net wealth of over $139 billion, meaning they have more money than the bottom 40 percent of Americans combined. An independent audit determined that the Walton Family Foundation, built “at almost no cost to themselves”, was “exploiting complex loopholes in order to avoid billions of dollars in estate taxes”.

As to how much of that $139 billion fortune actually went to charity, the answer is…0.4 percent, or roughly half a billion dollars. That is such a paltry amount, it is hard not to agree with Peter Joseph of the Zeitgeist Movement, who said, “what they are really doing is bypassing state funding in favour of their own interests. Moving money to charity foundations, effectively consolidating wealth in the hands of private interests rather than government, is a logical method to keep things ‘in the club’ of private business power”.


Philanthropy and charity are either the most viable way of dealing with the problems we face, or they are just a PR stunt: a band-aid for problems whose systemic root remains deliberately unaddressed by those with vested interests in retaining the status quo. Obviously, one’s political ideology will influence which of these possibilities seems most plausible. Really, though, I suppose all we can do is wait and see whether the philanthropists’ pledges really do bear fruit and build a better tomorrow.


“The New Human Rights Movement” by Peter Joseph

“The Survival Manual” by Mark Braund and Ross Ashcroft


Rewarding Work In ‘Red Dead Redemption 2’


In this essay I thought I would write about the ways in which Rockstar’s Red Dead Redemption 2 incorporates into its gameplay the elements that work needs in order to be rewarding.

First, though, we need to figure out what those elements are. Barry Schwartz has looked into this and came up with the following:

“Satisfied people do their work because they feel that they are in charge. Their work day offers them a measure of autonomy and discretion. They use that autonomy and discretion to achieve a level of mastery or expertise…Finally, these people are satisfied with their work because they find what they do meaningful. Potentially, their work makes a difference to the world”.

I think the key words in that passage are ‘autonomy’, ‘discretion’, ‘mastery’ and ‘meaning’. Whenever physical or mental activity incorporates these, you have work that is rewarding.

So how does Red Dead Redemption 2 fare? First off, the environment in which this game is set obviously lends itself to ‘autonomy’. It is set in the vast expanse of the American Wild West and, as the game’s trailer puts it, “the world is full of adventures and experiences that you discover naturally as you move fluidly from one moment to another”. This gives the game a non-linear feel, as the player can ride off in any direction.

Along the way, the player is likely to encounter various situations and activities. Most of the time you are not required to participate and can decide for yourself whether or not to get involved. This means the game manages to incorporate another feature work needs in order to be rewarding, namely ‘discretion’. We also see discretion at work during missions, where you are asked to make decisions like what actions members of your posse should take, or whether an aggressive or pacifist response is most appropriate for the current situation.

One could also cite character management and customisation as further ways in which this game provides opportunities for discretion or judgement. As the game’s trailer says, “your experience is defined by the choices and decisions you make…You can, of course, choose what to wear, ride and eat”. Furthermore, these are not merely cosmetic choices that just change your appearance but have no real consequences. Your character has various health attributes that you need to take care of. A decent coat in winter could mean the difference between life and death, whereas during a hot summer it would not be wise to wear such warm clothing. From character customisation and management, to the snap decisions required of the player during missions, to the open world and the nonlinear experiences it offers, Red Dead Redemption 2 provides plenty of opportunity to apply one’s discretion.

When it comes to mastery, ever since Pong was introduced with the simple instruction to ‘avoid missing ball for high score’, videogames have provided players with challenges that test their ability and enable them to feel like their skills are developing.

The best games don’t just rely on setting a challenge like getting from A to B in a set time or shooting X number of targets. They incorporate systems of feedback into the gameplay that inform the player how well they are performing and whether they should try another strategy. Visual and audio cues let you know whether things are going well. The best games do not leave you in the dark over what you should be doing, but at the same time they don’t hold your hand; they leave it up to you to figure out how to accomplish what needs to be done.

Finally, by providing the player with identifiable tokens of progress in the form of special items, areas and other stuff that you unlock by achieving certain objectives and challenges, games like Red Dead Redemption 2 let you feel like you are gaining mastery and making real progress as the gameplay continues.

Now, when it comes to meaningful work, one might struggle to claim Red Dead Redemption 2 provides much of this if we consider the game from the perspective of ‘real life’. This is, after all, just a videogame. Sitting in front of a TV pressing buttons on a joypad hardly stands beside researching the cure for cancer as “work that makes a positive difference to the world”.

But in the context of the in-world experience, many games offer a grand narrative that sees the player progress from a nobody at the start to a significant figure whose actions and decisions have had a decisive effect on shaping history by the end. You become the hero who saved the world. Admittedly, in Rockstar’s most famous franchise (Grand Theft Auto) you are attempting to rise in the ranks of criminals, which is not exactly everyone’s idea of a positive contribution to society. And in Red Dead Redemption 2 you are cast as an outlaw and can engage in the kind of activities for which Rockstar has earned an image as the bad boy of videogaming. You know, robbery, murder, that kind of thing. But there do appear to be opportunities to act like an outlaw with noble intentions. There are situations in which you can choose to help people or to refrain from killing foes. According to the trailer, “there are countless secrets to uncover and people to meet. You can get into raucous altercations…chase down bounties. Your behaviour has consequences and people will remember you and your actions”.

So, like all the best games, Rockstar’s Red Dead Redemption 2 successfully incorporates ‘autonomy’, ‘discretion’ and ‘mastery’ into a grand narrative that provides a sense of social meaning.

This achievement is all the more remarkable when you consider how menial an activity videogaming actually is. After all, what are you physically doing when you play these games? Repeatedly pushing buttons. Really, that’s it. Yet, somehow, game designers can take this dull, repetitive activity, one that ranks alongside rote assembly-line work as the most menial ever created, and build an experience on top of it so compelling that people happily pay good money to do it!

But, really, it is because we the players are paying to do the work in videogames that designers have every reason to make it as engaging and rewarding as possible. Aside from stories of people in sweatshops grinding through MMORPGs to level up avatars that are then sold to people who would rather pay than do the tiresome work of obtaining a high-level character themselves, nobody is ever coerced into playing a videogame. Participation is entirely voluntary for the vast majority of players, so if games are to sell, their developers have to make sure that the work a gamer must do to get through the game is rewarding.

However, when it comes to jobs, I believe there is an incentive to reduce the elements that make work rewarding. If you take the first three features (which were ‘autonomy’, ‘discretion’ and ‘mastery’), I believe these have something in common, which is that they provide opportunity to enhance one’s individuality. The best games really do try to include ways to let the player customise the experience, which in some cases goes as far as incorporating editing tools that enable you to craft whole new levels and gameplay. I would argue that the reason why videogames tend not to make good films is because the characters in them are often pretty much blank slates intended to be filled in by the player’s personality, not well-developed characters with their own psychology.

The best videogames provide plenty of opportunity to enhance one’s individuality. We are each of us unique individuals with our own lifestyles, preferences and abilities, and ideally we would have jobs that reflect this. But this could cause problems when combined with the other feature that makes work rewarding, which is that it should be ‘socially meaningful’.

Imagine that my job is very important and valuable to society, and that it is also perfectly tailored to suit the unique individual I am. Obviously this would be tremendously engaging and rewarding work for me, but what if one day I was run over by a bus? The company would be in big trouble, for who could replace me and fit into a position uniquely suited to the individual I am?

On the other hand, if you can somehow reduce or even eliminate the amount of autonomy, discretion and mastery a job requires, you also need to rely less on the individuality of your employee. The employee can then be treated less like a unique person and more like an interchangeable unit that can be removed and replaced at the employer’s discretion.

This obviously impacts the employee’s bargaining power, as anyone who feels eminently replaceable is not going to ask for better pay or preferable working conditions. Cheaper labour means more profit for employers. Furthermore, employees are in a competitive environment in which they fight to earn enough money to keep from becoming too indebted, to keep up appearances in environments that emphasise material wealth as the sign of success, and to pay the taxes that must be paid if you don’t want to go to jail. In other words, there is a lot of ‘negative motivation’ leading people to submit to jobs, not because they expect to be rewarded if they do, but because they fear the consequences if they don’t.

So, employers have to pay their workers to get jobs done, and since they prefer cheaper labour where possible, they are incentivised to reduce the qualities of work that make it rewarding, because in doing so they make employees more like interchangeable, replaceable units. Furthermore, in the world of wage labour there are various forms of negative motivation that push people into accepting jobs that are not very rewarding, so, unlike videogame designers, employers need not be too concerned if the work they offer sucks.

I think this helps explain why people seemingly don’t want to work. It really is bizarre if you think about it. Imagine an animal like a dolphin, obviously evolved for a life in water and yet seemingly reluctant to leave dry land and go swimming. People are like that. We have large brains housing creative minds, dexterous hands that can use tools in complex ways, sophisticated language that enables us to cooperate and compete in ways no other animal can even imagine doing, and we are healthiest when mentally and physically active in social situations. In short, we evolved to work but apparently we don’t want to. At least, that seems to be the attitude people have when the topic of UBI comes up. As well as objections about how unaffordable they think it would be, people claim that if you did not have to earn wages in order to survive, nobody would work and we would just passively consume TV all day.

Actually, the evidence shows that it is in the countries where people spend the most time in jobs that we find the highest consumption of TV. Which makes sense if you think about it: having burned so much energy in their jobs, people don’t have much left for anything else in their spare time. On the other hand, people in countries where less time is expected to be spent in jobs tend to do more voluntary work and spend less time sitting in front of the TV.

Also, to get back to the theme of this essay, videogames debunk the theory that nobody wants to work. After all, if that were true, nobody would pay good money to go through the effort of trying to beat the various challenges these games set. Nor would anyone develop their sporting or artistic talents. After all, these things take work, and quite a lot of it in some cases. The reason we so willingly pay to do the work in videogames is, as I have argued, that the designers of such games have an incentive to make that work as rewarding as they possibly can, because they want as many people as possible to buy the game and recommend it to others. Job providers, on the other hand, are incentivised to reduce the qualities that make work rewarding in order to make employees more replaceable and exploitable. Not that all jobs can have such qualities reduced; it’s just that enough can be made unrewarding to explain why 90 percent of people don’t enjoy their paid work.

As Red Dead Redemption 2 shows, it is not really work we don’t like. It is jobs.



‘Why We Work’ by Barry Schwartz

‘Utopia For Realists’ by Rutger Bregman

‘Red Dead Redemption 2 trailer’ by Rockstar
