If you are interested in cryptocurrencies such as bitcoin, chances are that you have heard some skeptic make a comparison with ‘tulips’. Why would blockchain-based assets be compared with that particular flower? Well, it all has to do with one of the craziest bubbles ever inflated, which is what I want to talk about in this post. In order to lay the groundwork, though, we have to go way back in time, to the 15th century…

In The Beginning…

The story of how Amsterdam’s most famous bloom became the basis of one of the most infamous speculative bubbles does not actually begin in the Netherlands, but rather in Spain and Portugal. The end of the 15th century saw improvements to the design of ships, along with refinements to navigational tools such as the compass. Together, these advances made it possible to cross oceans, discover new lands, and open up trade routes.

The Christian kingdoms of Spain and Portugal did just that, famously sending Christopher Columbus west in 1492 on a journey that would ‘discover’ the Americas. Five years later, Vasco da Gama journeyed southward, rounding the Cape of Good Hope to open the naval route to India.

With these discoveries, both Spain and Portugal suddenly found themselves with trading options along African and Asian coasts, not to mention access to vast and rich territories in the New World. This meant that, from the 16th century onwards, the scene was set for a transformation from the old feudal economies to mercantile economies. The international trade routes made it possible to generate far greater wealth than the grain production of Europe’s small feudal fiefs. Mercantile economies were based on the idea that a country’s total amount of wealth represented the overall profit it made from trade. As each strip of land obviously holds only a limited amount of tradable resources, the volume of a country’s trade was dependent on the amount of land over which it held trade rights.

Mercantilism therefore led to expansionism, as any European power that could afford it sent off ships in search of hitherto undiscovered territory (not discovered by any other European, that is). It was customary for the Monarch to hold claim to the new territory overseas, the management of which required a large administrative body under direct royal control. It had always been profitable to serve the King during times of war, but the territorial expansion meant the nobility could make more wealth serving the King abroad rather than by managing their private estates.

This led to powerful, centralised monarchies and the creation of the first great European empires. But there was something of a downside to this way of organising things: a powerful, centralised monarchy held back the creation of a strong and independent mercantile class, which in turn held back private enterprise. The result of all this was that capitalism did not grow out of the empires of Spain and Portugal, but rather from one of the more disadvantaged newcomers in the race for international trade.

The Dutch East India Company

That nation was the Netherlands. The end of their 80-year struggle for independence from Spain left the nation with no significant aristocracy and not much in the way of marked class differences. Instead, the Netherlands developed a significant middle class that thrived on trade. Up to the Industrial Revolution, Amsterdam could lay claim to being the greatest city in Europe, as well as laying claim to a few ‘firsts’ in capitalism. Many historians consider the Netherlands to be the world’s first truly capitalist nation. The Dutch East India Company, formed in 1602, was one of the first multinational companies, and by being the first company ever to offer its stock on the market it pretty much invented the stock market, adding another entry to the Dutch list of ‘firsts’.

The Netherlands was so successful at trade that it managed to drive the Portuguese off most of their trading posts in the Indian Ocean. By the 1630s, the timing was almost right for a period of mass speculation. Thanks to the trade of their merchants, the Dutch earned the highest salaries in Europe. Shares of the Dutch East India Company were richly rewarding shareholders for their investments, and much of that money was being poured into property, creating a robust housing market. Ongoing appreciation of asset values created excess wealth that went on to fund further asset purchases.

This wealth was setting the scene for an asset bubble, but at the time there was something holding back the move toward wild speculation. That something was the fact that not everyone could take part. This was because Dutch East India shares were both expensive and illiquid (in other words not easily resold) and that made them unavailable to all but the wealthiest. The same could be said for the most prized properties. However, a quirk of nature was soon to arise which would seemingly hold out the promise of vast wealth that anybody could speculate on…

Enter the Tulips

Tulips had been introduced to Europe around the mid-1500s, and had always held the promise of some value. In fact, they still do, as can be appreciated by remembering how famous Amsterdam is for that particular bloom. But something happened around 1634 that would cause the value of this plant to skyrocket, and that something was a virus. The virus, which was transmitted by aphids, led to a couple of consequences for the tulip, both of which contributed to the speculative bubble that followed. Firstly, the virus had the effect of transforming an ordinary solid-coloured tulip into a startling-looking variegated variety with beautiful flamelike petals. This was a much-prized variety, and as nobody really knew what caused such variegation there was much speculation as folks attempted to predict which bulbs would develop into the prized tulips.

Secondly, the virus ultimately killed the tulip. This made it something of a hot potato, in that you really wanted to sell the tulip on for a higher price rather than be the sucker who was left with nothing but a dead bulb.

Unlike shares in the Dutch East India Company or prized property, tulips were much more affordable, which meant more people could join in the speculation of this particular asset. Not surprisingly, given the stories of immense riches to be gained from selling on a prized bulb, many, many people were drawn into speculation. Most of these people were not experienced traders. In fact, the professionals pretty much shunned the tulip trade and continued investing in good old reliables such as East India stock. They regarded tulips as more of an expression of wealth than a means to that end.

But for more inexperienced traders, the chance of having and reselling a prized tulip was considered to be the means to great fortune. Because the tulip spends most of its life as a bulb rather than a blossom, it naturally lent itself to a futures market (something the Dutch called a windhandel, or the wind trade). By ‘futures market’, I mean a situation where both buyer and seller agree to the future price of a good, and when that specific time arrives, the buyer is obliged to pay the seller whatever amount was agreed upon.

However, waiting for that agreed-upon time to arrive was too slow for the growing crowds of speculators. Therefore, trade shifted from the tulips themselves to the futures contracts written on them. And trade them they did, sometimes as much as ten times in one day. You can see, then, how the value of tulips was entering into ever higher realms of abstraction. The trade in futures contracts meant that people didn’t have to worry about an actual tulip being delivered. No, their only concern was being able to sell the contract for a higher price than they had bought it for. The result of this was that, at the very peak of the tulipmania during the winter of late 1636 and early 1637, a time when the bulbs were still dormant in the ground, not one blossoming tulip actually changed hands.
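To make the mechanics concrete, here is a toy sketch of that chain of contract-flipping. All the prices are invented for illustration; the point is that each holder profits only so long as someone else stands ready to buy at a higher price.

```python
# One futures contract for one future bulb, resold along a chain of
# speculators. Prices are hypothetical; no real tulip changes hands.
contract_chain = [100, 120, 150, 210, 300]  # guilders paid by each successive holder

profits = [sell - buy for buy, sell in zip(contract_chain, contract_chain[1:])]
for buy, profit in zip(contract_chain, profits):
    print(f"bought at {buy} guilders, flipped for a profit of {profit}")

# Whoever holds the contract when it falls due owes the final price in full,
# whether or not a living bulb exists to be delivered.
```

The chain works only while the resale price keeps rising; the moment it stops, the last holder is left carrying the entire obligation.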

Funny money

But there is even more to this tale of wild speculation than that. You see, not only were no bulbs being traded, no real money was, either. At that time, ‘real money’ was the guilder, the currency of the Dutch Republic. This was not the paper currency we are used to; it was money based on a specific amount of precious metal, 0.027 ounces of gold per guilder. Much of the trade in futures contracts was not financed with real money, but rather with ‘notes of personal credit’. In other words, with IOUs. So not only were there no bulbs being traded during the heights of tulipmania, no money was changing hands either. Instead, transactions were being made on nothing but the promise to deliver the money in the future.

According to Edward Chancellor, author of ‘Devil Take the Hindmost: A History Of Financial Speculation’, “by the later stages of the mania, the fusion of the windhandel with paper credit created the perfect symmetry of insubstantiality: most transactions were for tulip bulbs that could never be delivered because they didn’t exist and were paid for with credit notes that could never be honoured because the money wasn’t there”.

To give an idea of just how high the price of tulip bulbs rose (or, perhaps I should say, the price of the promise of such a bulb), consider that the highest recorded amount paid for a tulip at that time was a whopping 5,200 guilders. In gold terms, that’s nearly nine pounds of the stuff. You could have bought eighteen modest-sized houses for the price of that one tulip.
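That figure can be sanity-checked against the gold content of the guilder quoted above (0.027 ounces per guilder):

```python
# Convert the record bulb price into gold weight, using only figures
# quoted in this post.
GUILDER_IN_GOLD_OZ = 0.027   # ounces of gold per guilder
OUNCES_PER_POUND = 16        # avoirdupois ounces in a pound

record_price = 5_200         # guilders, the record price for a single bulb
gold_oz = record_price * GUILDER_IN_GOLD_OZ
gold_lb = gold_oz / OUNCES_PER_POUND
print(f"{record_price} guilders ≈ {gold_oz:.1f} oz ≈ {gold_lb:.1f} lb of gold")
# → 5200 guilders ≈ 140.4 oz ≈ 8.8 lb of gold, i.e. roughly nine pounds
```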

It all ends

Like all bubbles, this one could not inflate forever. The end inevitably came, both because the bulbs blossomed into flowers or turned out to be dead duds, and because the contractual dates on which the IOUs had to be honoured with the promised money were coming around. The wealthiest were not hit too hard since, if you remember, they had continued investing in things like townhouses and East India stock. No, it was the less experienced investors who got hurt the most: the people caught up in crowd behaviour, buying into futures contracts for tulip bulbs for no reason other than that was what everyone else was doing. Inevitably, a lot of those people found out that their anticipated fortunes amounted to nothing but worthless promises. Fights broke out over the amounts due per contract, and the Dutch government stepped in, declaring that the contracts could be settled for 3.5 percent of their initial value. On one hand, that was obviously preferable to paying the full contract price. But nevertheless, 3.5 percent of the most expensive contracts still equated to a year’s salary for some unfortunate citizens.
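Applying the 3.5 percent decree to a contract at the record 5,200-guilder price quoted earlier (a hypothetical pairing, but it gives a sense of scale), the sum still owed works out like this:

```python
# Settlement owed on the priciest contract under the 3.5% government decree.
contract_value = 5_200      # guilders (record bulb price quoted earlier)
settlement_rate = 0.035     # contracts could be settled at 3.5% of face value

owed = contract_value * settlement_rate
print(f"{owed:.0f} guilders still owed")  # → 182 guilders
```

A hundred and eighty-two guilders: roughly what the post describes as a year’s salary for some.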


So that’s the story of tulipmania. What lessons can be applied to blockchain-based assets? Well, firstly, I don’t think it is all that fair to compare blockchain-based assets to ‘tulips’. A tulip does have some value. They are pretty things and people pay for pretty things. But you can hardly call a tulip bulb a general-purpose technology. A general-purpose technology is one that can be used in a great many ways. Examples would be ‘electricity’ or ‘computing’. Just think of all the inventions and industries and jobs that have been built on the basis of those two technologies. The blockchain is also a general-purpose technology, and that means speculating on its future growth need not be sheer pie in the sky. People who expect to make a fortune from crypto-assets might just be making educated guesses regarding the future potential of Satoshi Nakamoto’s invention.

Having said that, all speculation is prone to crowd behaviour. Just because the underlying blockchain technology is sound doesn’t mean that assets built on top of it can’t be scams designed to lure in suckers, or that genuine products can’t fuel asset bubbles as people buy or sell for no good reason other than that everybody else is doing likewise. ‘It’s just like tulips!’ may be a retort used by skeptics who don’t really know all that much about cryptoassets and blockchains, but nevertheless the story of the tulip speculative bubble does hold some valuable lessons. After all, those who do not learn from history are doomed to repeat it.


“Capitalism: A Graphic Guide” by Dan Cryan, Sharron Shatil and Piero

“Cryptoassets: The Innovative Investor’s Guide to Bitcoin and Beyond” by Chris Burniske and Jack Tatar.


Philanthropy: Praiseworthy Or Propaganda?



How will Bill Gates be remembered?

If this question had been asked in the 90s, I suspect most people would say the answer is obvious. He would be remembered as the co-founder of Microsoft, a feat of entrepreneurialism that resulted in him becoming one of the richest men in the world.

But there is another noteworthy thing that can be attributed to Bill Gates. For, as well as being extraordinarily rich, he can also be credited with remarkable acts of charity. In 2010, for example, he put $10 billion toward vaccines, which was the largest pledge ever made by a charitable foundation to a single cause. Also in 2010, Gates and Warren Buffett announced the ‘Giving Pledge’, so called because the wealthy people who sign it pledge to give away half their fortune to philanthropic and charitable causes.

So, there are two ways in which Bill Gates might be remembered: as a businessman who accumulated great wealth, or as a philanthropist who made big donations to worthy causes.

That last legacy of being a great philanthropist sounds like an achievement that could only be viewed in a positive light. But, as with most things, there are two ways of looking at philanthropy. On the one hand, it can be seen as a justification for the social structures that enable some people to acquire disproportionate wealth, for it turns out that, however ruthless such people might have been in gaining their fortune, they ultimately had a significant altruistic side to their character, generously giving to worthy causes.

But, on the other hand, a more cynical way to look at it would be to say that philanthropy is really nothing but a band-aid covering up the exploitative conditions that cause so many to need charity. If society were structured in a way that did not allow the extremes of inequality that vast fortunes necessitate, most of those people would not have required charity in the first place. In short, these so-called philanthropists are just using a portion of their vast wealth on propaganda and token gestures while not really doing anything much to alter the structures they took advantage of.

Those opposing viewpoints have existed since the beginnings of modern philanthropy. By ‘modern philanthropy’, I mean large-scale philanthropy based in the private rather than the public sector.

There is general agreement among contemporary historians that modern philanthropy was invented by the great industrialists whose names are now synonymous with extraordinary wealth: John Davison Rockefeller, Andrew Carnegie, Cornelius Vanderbilt and people like that. These were legendarily ruthless businessmen whose rapaciousness earned them the title ‘Robber Baron’.

Rockefeller, for example, acquired his extraordinary wealth partly through industrial espionage. He sent spies into his competitors’ businesses in order to ascertain their financial situation. His own company (Standard Oil) would then lower the price of its own oil, making his rival hopelessly uncompetitive. Meanwhile, in other parts of the country, the price Standard Oil charged was increased in order to make up the difference. According to Dylan Ratigan, “in this way, the company charged its customers a premium to drive the competition out of business, which left those same customers even more dependent on Standard Oil. Rockefeller referred to this approach as ‘sweating the competition’”.

By 1882, Standard Oil controlled up to 90 percent of the oil refining capacity in the United States. Seven years later its monopolistic grip had extended to retail, wholesale and the oilfields as well. In short, Rockefeller’s tactics transformed what had been a free market in oil, with prices fluctuating according to supply and demand, into a rigged market in which prices were stabilised at artificially high levels. That’s why people like him were called ‘robber’ barons. They used the free-market mandate of increasing one’s wealth via whatever method one can get away with to ultimately end the free market and impose a monopoly that effectively took wealth away from people through rent extraction.

Like Bill Gates, Rockefeller went on to become the world’s richest man. Also like Gates, he went on to build a foundation (the Rockefeller Foundation) dedicated to philanthropic ventures. It was set up in 1913 from $50 million in Standard Oil stock and, by the time of his death in 1937, half of Rockefeller’s fortune had been given away. This legacy, along with the philanthropic acts of other 19th century American industrialists, can be seen in cities like New York, where you would be hard-pressed to find a museum, art gallery, university, concert hall or charity that cannot trace its origins back to some such businessman.

But, as I said earlier, there has always been a more cynical way of looking at this. Businessweek highlighted it when they wrote, “John D Rockefeller became a major donor-but only after a public relations expert, Ivy Lee, told him that donations could help salvage a damaged Rockefeller image”. Put another way, according to this cynical view, Rockefeller was not actually interested in philanthropy. After all, a true desire to promote social welfare necessarily seeks to fundamentally change the preconditions that are the root cause of social problems. What Rockefeller was doing was simply placating an angry public by throwing a bit of money at some public service or other, while not really doing much to alter the structures that enabled the few to gain so much at the expense of the many in the first place.


In part one, we talked about the 19th century industrialists who earned the dubious title of robber baron and who can be credited with inventing modern philanthropy. When Warren Buffett wanted to make a philanthropist out of Gates, his first move was to give him a copy of an essay by Andrew Carnegie (the greatest of all the 19th century industrialist/philanthropists) called ‘The Gospel Of Wealth’. Incidentally, we see here (and also with the case of Rockefeller) a curious pattern, which is that these billionaires do not seem to consider using their wealth for the common good until somebody else points out that this is an option. And we can add a third billionaire to that list, for Paul Allen, the co-founder of Microsoft, is quoted as complaining, “I’ve spent money on jets, boats. I don’t know what to do next”. Notice that it never occurred to him that a portion of his billions might be put to philanthropic use; once again, this had to be pointed out to him (by the author Douglas Adams in this case). There may be some very wealthy people who need no persuading to turn to charitable acts, but given that these are, almost by definition, mostly selfish, greedy, ruthless and ethically dubious individuals (it is, after all, rather hard to acquire that kind of fortune by being a nice guy), I am willing to bet they are few and far between.

That might sound like a harsh assessment, but it was one echoed by the economist Jeffrey Sachs, who reckoned, “They are tough, greedy, aggressive, and feel absolutely out of control in a quite literal sense, and they have gamed the system to a remarkable extent. They genuinely believe they have a God-given right to take as much money as they possibly can in any way they can get it, legal or otherwise”.

I should point out that this was his view of traders on Wall Street, so it should not be considered applicable to everyone who is very wealthy. J.K. Rowling, for example, is probably not like this, since she owes her fortune to the astonishing success of the Harry Potter franchise, making her more like a lottery winner than somebody who fought their way to the top of business.

Anyway, in Carnegie’s essay, the great iron and steel magnate posed the question, “what is the proper mode of administering wealth after the laws upon which civilisation is founded have thrown it into the hands of the few?”.

Such a question seems aligned with the findings of Thomas Piketty. In ‘Capital In The 21st Century’ (which is actually concerned with capitalism up to the 21st century) the French economist put together the most exhaustively researched analysis of market capitalism and its consequences so far assembled, and drew the following conclusion. When allowed to unfold in its natural, ‘financially liberalized’ state, capitalism will very likely result in those who hold significant wealth gaining far greater returns than those who rely primarily on labour income. There are various reasons for this. For example, if you are very wealthy you can afford to hire the very best financial advisors, who know how to skirt around the law and squeeze every drop of value out of your assets while avoiding the tax man. The poor, meanwhile, cannot afford any financial advice and are constantly being targeted by fraudsters, predatory bosses, overbearing bureaucrats and marginally legitimate debt sellers. Also, the owner classes can gain access to high-level capital investments that are simply unavailable to those of us lacking significant wealth in the first place. Piketty summarised this tendency as r > g: the rate of return on capital (r) tends to outstrip the rate of economic growth (g) that labour incomes track.

Piketty’s study showed that what Carnegie suspected was true, which is that wealth tends to grow much faster for the already wealthy. That wealth can then be used to further alter the structure of social systems so as to further increase the tendency for wealth to flow toward the elite minority. The behaviour of Rockefeller (talked about in part one) aligns with Piketty’s view that meritocratic competition makes sense only when one is trying to establish a fortune. However, once wealth is acquired it makes more sense to turn anti-competitive, so you can live off the rent extraction to be had from a market rigged in your favour. Why strive to make a fortune when you can just protect the fortune you (or your ancestors) have already amassed?
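A toy compounding calculation, with made-up but plausible rates, shows how quickly the r > g dynamic opens a gap over a single generation:

```python
# Hypothetical rates: a fortune compounding at r = 5% per year versus
# wages growing at g = 1.5% per year, over a 30-year generation.
r, g, years = 0.05, 0.015, 30

wealth_multiple = (1 + r) ** years
wage_multiple = (1 + g) ** years
print(f"wealth grows {wealth_multiple:.2f}x, wages grow {wage_multiple:.2f}x")
# → wealth grows 4.32x, wages grow 1.56x
```

Over a generation the fortune has more than quadrupled while wages have barely grown by half, and the gap itself then compounds into the next generation.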

The more positive way of looking at this might be to say that it is OK to amass a great fortune, even one that involves ethically-dubious practices, if one then uses that money in charitable ways. Those who adopt this way of looking at philanthropy tend to prefer to cast charitable donations in monetary terms, probably because the donations seem so amazingly generous when so presented. For example, Peter Diamandis and Steven Kotler, in a chapter devoted to technophilanthropists in their book ‘Abundance’, wrote, “by 2004, charitable giving in America had increased to $248.5 billion, the biggest yearly total ever. Two years later, the number was $295 billion”.

By everyday standards, such sums of money are almost unimaginably large, way beyond anything most people could earn in several lifetimes. It therefore seems that those who donate such amounts must be superhumanly charitable and worthy of the highest praises society can bestow.

As for the cynics, they tend to prefer percentages. For example, the Chronicle Of Philanthropy’s study found that households earning between $50,000 and $75,000 a year gave an average of 7.6% of their discretionary income to charity. 7.6% of a pretty paltry amount if you ask me. Shouldn’t a society founded on Christian values be giving away more like 40% of discretionary income? Worse still, the figures are even more dire when we turn to those who earn over $100,000. Even though such folk are obviously in a position to be much more generous, in percentage terms they are less so, giving a paltry average of just 4.2%.

Several studies have come to similar conclusions, which is that those who can least afford to donate to charity actually donate the greatest amount in percentage terms, while those who are most financially advantaged give away the least. Ken Stern, writing for The Atlantic, reckoned that America’s bottom 20 percent donated 3.2 percent of their income to charitable causes, while the top 20 percent gave away a minuscule 1.3 percent. Of course, if you have billions to begin with, then 1.3 percent of that would be a lot of money, more than most of us can ever dream of having, let alone giving away. But when expressed in percentage terms it comes across as a tiny amount, a mere gesture intended to paper over the fact that society is rigged against the many. A 2011 study from the University of California, Berkeley, found that upper class individuals are more likely to “exhibit unethical decision-making tendencies, take valued goods from others, lie in a negotiation and cheat to increase their chances of winning”. They also have a disproportionate ability to mould a society so heavily dependent on the pursuit of money into a shape of their liking, so unsurprisingly our societies work to reward them at the expense of so many others.


There is a tendency for rich philanthropists to become patrons of things that primarily interest the upper classes, while ignoring issues affecting the poor, even though such issues are usually much more urgent. We saw this attitude in part one, when we talked about the philanthropic ventures of the 19th century Robber Barons. As you may recall, they mostly donated to elite schools, concert halls, museums and the like. People like Carnegie also showed little interest in tackling issues outside of their own cities.

Given the period in which these men lived, they can be forgiven for being so localised in their philanthropy. After all, this was long before the age of smartphones and global communications networks, so people were nothing like as aware of issues like poverty in Africa as we are today.

But while we might forgive such blinkered philanthropy back then for the reason given, it’s much harder to justify it now. And yet it still occurs. According to Ken Stern’s article in The Atlantic (“Why The Rich Don’t Give To Charity”), in 2012 “not one of the top 50 individual charitable gifts went to a social-service organisation or to a charity that principally serves the poor and dispossessed”.

It would be incorrect to say philanthropists never turn their attention to such issues, because they sometimes do. There is, for instance, the Omidyar Network (brainchild of eBay founder Pierre Omidyar), pursuing such things as microfinancing, which could potentially unlock opportunity for people who cannot access traditional financial and banking services. Another example would be the Rockefeller-backed Acumen fund, which invests only in businesses that manufacture and sell, at affordable prices, products and services needed in the developing world (things like mosquito nets and reading glasses). Of course, the Bill and Melinda Gates Foundation’s multibillion dollar commitment to tackle malaria can also be counted among the charitable ventures focused on problems that really matter.

All such examples of worthy causes should of course be commended. But the emphasis on charity as the means to deal with the fallout from the negative consequences of market competition arguably evades the more pressing question, which is ‘why is such intervention necessary to begin with?’. A cynic might say that throwing money at those affected by the negative externalities of market competition is akin to giving up and managing the symptoms of a socioeconomic disease rather than seeking out its root and curing it outright. After all, those externalities are virtually inevitable when people compete to selfishly grow their own fortunes by whatever methods they can get away with, operating within social structures that already disproportionately favour a minority of greedy, ethically dubious people. It is like having an engine that is leaking oil and choosing to constantly add more oil rather than fix the engine.

As to the root causes, those who have researched the systemic causes of today’s problems have traced the root back to incompatible assumptions that have held since the Neolithic period. We find one of these assumptions contained within the definition of the word ‘economics’, which is the ‘study of the allocation of scarce resources’. Along with the assumption of scarcity there is a potentially incompatible assumption applied to growth, which is that it is infinite.

The reason why the assumption of scarcity is incompatible with the assumption of infinite growth ought to be obvious. But since our current consumerist lifestyles are using up vital resources far beyond sustainable rates, it is evidently not obvious enough, so let us spell the contradiction out. Growth cannot be sustained indefinitely if it relies on something in finite supply: eventually that supply will be exhausted and the growth must come to a stop. It does not matter how large that finite supply is, because infinite growth is, by definition, infinitely greater. Those who dismiss concerns about the unsustainability of economies premised on endless consumption, on the grounds that we might one day tap the much larger resources of the solar system or the galaxy or the visible universe (assuming that is even remotely practical an aim to begin with), miss the crucial point: infinite growth in consumption will exhaust any physical resource. Only infinite resources can sustain infinite growth, but admitting infinite resources means abandoning the assumption of scarcity, returning us to the essential point that scarcity and infinite growth are incompatible beliefs.
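The point that the size of the stockpile is almost irrelevant can be illustrated in a few lines of code. The numbers here are arbitrary: consumption starts at one unit a year and grows a modest 3 percent annually.

```python
# Years until a finite stock is exhausted by steadily growing consumption.
def years_until_exhausted(stock, annual_use, growth_rate):
    """Subtract each year's consumption from the stock until nothing is left."""
    years = 0
    while stock > 0:
        stock -= annual_use
        annual_use *= 1 + growth_rate  # consumption compounds every year
        years += 1
    return years

# A stockpile 1,000 times bigger buys only about three times as long:
print(years_until_exhausted(1_000, 1, 0.03))      # → 117 years
print(years_until_exhausted(1_000_000, 1, 0.03))  # → 349 years
```

Multiplying the stock by a thousand adds barely two centuries; no finite multiplier survives compounding growth indefinitely.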

For most of human history since the Neolithic period, it did not much matter that we operated under such contradictory assumptions, because we lacked the practical ability to do much harm. Up until the Industrial Revolution, populations were caught in a ‘Malthusian Trap’, named after Thomas Malthus, who argued that population growth would outrun the ability to provide enough essential resources to sustain that growth, leading to famines and other calamities that would reduce the number of mouths to feed to more sustainable levels.
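Malthus’s argument is essentially a race between two growth curves, which can be sketched in a few lines (the numbers are purely illustrative):

```python
# Population multiplying geometrically each generation while the food
# supply grows only arithmetically; the gap widens until crisis intervenes.
population, food = 100, 100   # arbitrary starting units
for generation in range(6):
    print(f"gen {generation}: population {population}, food for {food}")
    population = population * 3 // 2   # geometric growth: x1.5 per generation
    food += 30                         # arithmetic growth: +30 per generation
```

Within a handful of generations the population has far outrun the food supply, which is exactly the gap Malthus argued would be closed by famine and calamity.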

Going back beyond the Neolithic period, we are talking about a time when human populations consisted of small tribes and bands. When populations are small and their capacity to take is restricted, Earth’s resources can seem endlessly bountiful, as indeed they are so long as the rate of taking stays below the Earth’s ability to replenish. Hunter-gatherers fishing with rods and nets could not make a meaningful dent in fish stocks.

But it’s quite a different story when the pursuit of infinite growth in fish consumption, in the interest of ever more profit, has produced fish-catching technologies that can capture hundreds or thousands of tons at a time. There were about 1.8 million tons of spawning cod in the Grand Banks when the first commercial fishing ships capable of capturing such prodigious amounts appeared in 1951. By 1991 the stock was down to 130,000 tons, and a year later the Canadian government had to step in and close the Grand Banks to cod fishing, or else the species would have been fished to extinction. But that decision came with consequences too: 32,000 fishermen were thrown out of work and required billions of dollars in aid to support their families. You can see in this sorry tale how dangerous the assumption of infinite growth is when applied to a finite resource. I would also point out that the pursuit of profit could well have been fed by the dwindling supplies of cod, for the scarcer this in-demand resource became, the more expensive and worth pursuing it would be. That would incentivise more profit-seeking fishermen to pursue the prized fish, in a positive feedback loop ending either in the extinction of cod or in government and social intervention to halt the unsustainable consumption.
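Using only the two stock figures quoted above, one can back out the average annual rate of collapse (a rough geometric average; the real decline was lumpier than this):

```python
# Implied average annual change in Grand Banks spawning cod, 1951 to 1991,
# from the two figures quoted in the text.
stock_1951 = 1_800_000   # tons
stock_1991 = 130_000     # tons
years = 1991 - 1951

annual_rate = (stock_1991 / stock_1951) ** (1 / years) - 1
print(f"average change: {annual_rate:.1%} per year")  # → about -6.4% per year
```

A stock shrinking by roughly 6 percent a year, every year, for four decades: slow enough to ignore in any given season, fast enough to be terminal.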

Another root cause has to do with how societies have been structured since the Neolithic period. I have covered this in detail in my series ‘This Is What You Get’, so search for that if you want more details, but suffice it to say that societies have tended to be ‘redistributive’. That is, they are societies whose populace can be divided into two groups. On one hand, a non-producing elite who wield great political power and social influence, who receive ‘tribute’ and who get to disproportionately determine how it should then be redistributed to the rest of the populace (hence ‘redistributive’ societies). On the other hand, everybody else: the producing masses, who toil for minimal reward and wield comparatively little social influence and political power.

This hierarchical structure has held, with some modifications but none that truly alter its essential form, through every kind of society since the Neolithic period. From the abject slaves and ruling monarchs of Egypt, to the vassals and lords of medieval feudal societies, to the handicraft merchants and state monopolists of mercantilism, and on to our contemporary societies with their growing numbers of precariat employees and a rapacious elite in the financial and banking sectors, we see broadly the same thing: a society in which there are people who work, and then there are those other people (always much smaller in number but far more powerful in other ways) who gain the lion’s share of the reward generated by that work. In short, it is a systemic framework that assures the superiority of a minority, for whom the temptation of kleptocracy (stealing from the people they rule) is all too often a siren song they cannot resist. Even communism, which was vaguely imagined to operate very differently, turned out not so different in practice. In capitalism you get bossed by business people and in communism you get bossed by bureaucrats. Either way, it is a redistributive society comprised of those who do the work on one hand, and the non-producing elites who disproportionately control the fruits of that work on the other.

Stanford neurologist Robert Sapolsky summarised the issue in the following way: “Agriculture allowed for the stockpiling of surplus resources and thus, inevitably, the unequal stockpiling of them. Stratification of society and the invention of classes. Thus it has allowed for the invention of poverty”.

Ever since, the presence of a powerful elite with kleptocratic tendencies, running societies on the incompatible assumptions of scarcity and infinite growth, has worked to sustain poverty rather than truly to eradicate it. This should not be thought of in terms of conspiratorial plotting. Rather, it is the evolutionary outcome of the selfish pursuit of material wealth by whatever method can be gotten away with, let loose in societies that grant some a pre-existing, disproportionate advantage and that shape the psychology of the rest in ways that permit unsustainable exploitation. The resulting poverty cannot be eradicated, for to do so would be to bring down the very system that the elites depend upon for their position.


In the last chapter we ended with the following observation:

‘The presence of a powerful elite with kleptocratic tendencies, running societies on the incompatible assumptions of scarcity and infinite growth, has worked to sustain poverty rather than truly to eradicate it. This should not be thought of in terms of conspiratorial plotting. Rather, it is the evolutionary outcome of the selfish pursuit of material wealth by whatever method can be gotten away with, let loose in societies that grant some a pre-existing, disproportionate advantage and that shape the psychology of the rest in ways that permit unsustainable exploitation. The resulting poverty cannot be eradicated, for to do so would be to bring down the very system that the elites depend upon for their position’.

Now, one might think that this assertion can be refuted by examples of the problem-solving capacity of market competition turning a once-scarce resource into an abundant one. Aluminium, for example, was once rare and expensive, but it is now so cheap that we use it and discard it without a thought. Such examples do not really refute the claim that market competition seeks to perpetuate scarcity, because we are talking about an overall condition, and an overall condition cannot be overturned simply by citing isolated resources that have been made abundantly available.

Ever since our technological capabilities became sophisticated enough to let us escape the Malthusian trap, we have seen the rise of various ways of artificially perpetuating scarcity. These have included psychological manipulations intended to blur the distinction between genuine needs and frivolous ‘wants’, and the creation of a throwaway culture. Thus, although we still talk about the ‘economy’, what we actually have is nothing like an economy in the sense the root of that word referred to. Rather than striving to use our resources as efficiently as current technical knowhow allows (which is what we would be doing if we were really trying to economise), we seem determined instead to turn the world’s resources into junk to be thrown away and repurchased. After all, the goal in a consumerist culture is to sell more stuff, so the last thing you want is for people to be content with what they have (this ties in with the point about our ability to make intelligent choices over what we really need being tampered with). We also have services that are less about helping those in need and more about extracting rent from them, preying on their financial instability to get them further indebted and profiting from their desperation.

This has resulted in two forms of socioeconomic sickness: the existence of needless poverty on one hand, and the existence of wealth obesity on the other. It really should disgust people that anyone can become a multibillionaire in a world where others must subsist on less than two dollars a day. Sadly, for millennia ruling kleptocrats have used propaganda and other methods to keep the masses from developing revolutionary thoughts, so this complacency is not surprising.

As Gillian Tett of the Financial Times said, “most societies have an elite and the elite try to stay in power; and the way they stay in power is not merely by controlling the means of production, but by controlling the cognitive map, the way we think”. We see this ‘control of the cognitive map’ in the way our societies condition us to aspire toward the excesses of the wealthy and to accept many eminently solvable issues as intractable problems we should just accept as “the way it has to be”. It is all to do with the ‘need’ to perpetuate scarcity so the boundless growth of consumerism can continue to extract profit and the elites can maintain the structures that their privileged position depend upon.

Any call to seriously re-engineer society in order to achieve a more equitable distribution of material wealth tends to meet the same retort: that it can only lead to Leftist totalitarianism. This was the line taken by Forbes writer Jeffrey Dorfman:

“Once you admit that income redistribution is fair, there’s no logical stopping point short of communism”.

There are a couple of flaws in this assertion. Firstly, it seems to forget that the market economy is itself a process of wealth redistribution. Perhaps Dorfman is one of those market ideologues with faith in the ‘invisible hand’ creating peace and harmony out of the selfish pursuit of competing to gain differential advantage over others? But, as Harvard researcher Jonathan Schlefer explained, “beginning in the 1870s, theorists…wanted to show how market trading among individuals, pursuing self-interest, and firms, maximising profit, would lead an economy to a stable and optimal equilibrium…after a century of work, they concluded that no mechanism can be shown to lead decentralised markets toward equilibrium, unless you make assumptions…regarded as utterly implausible”.

Market dynamics turn human frailty and misfortune into commodities to be exploited for profit. And the competition to find commodities that can be sold at the most cost-competitive price encourages fraud (because what could be more competitive than successfully making money out of nothing but bogus claims?). This is why a totally laissez-faire market, operating in the absence of any kind of regulation, will tend to destroy itself. But neoliberalism is driven toward turning everything into a commodity to be bought and sold for differential advantage, and regulatory bodies are no exception to that rule. They can be corrupted into a means of conferring unfair advantage in the interest of selfish profit maximisation. “Crony” capitalism is really just the likely outcome of free market principles operating in a real world long populated by hierarchical, redistributive societies prone to kleptocracy.

The other flawed assumption is that, if the goal is ‘fair’ income redistribution, then the logical stopping point is communism. Presumably, Dorfman means a society in which everybody gets the same income (that is, after all, how most people think communism is supposed to work). But how can that possibly be the logical stopping point if the goal is fairness? There is nothing ‘fair’ about equal pay across the board, given that individuals clearly make unequal contributions toward beneficial and detrimental outcomes. It does not bother me that some people are more materially rewarded than I am, and it does not bother most other people. When asked how wealth should be distributed, most people rightly dismiss full communism and opt instead for some measure of distribution that rewards the most productive while ensuring sufficient wealth at the bottom to alleviate relative poverty (defined as not being able to access the average lifestyle for that society). It’s just that this ideal redistribution is more equitable than what people believe is actually the case (and true wealth inequality is even worse than most people believe).

Despite the fact that there is a public call to bring about greater equitability in wealth redistribution, every policy to bring it about tends to be dismissed by neo-liberal ideologies as unworkable solutions that can only end in authoritarianism and the Gulag. For some reason, philanthropy comes out as the only viable means of patching over the harm caused by the selfish pursuit of material wealth. The question is, why?



Last chapter we ended with a question:

‘Despite the fact that there is a public call to bring about greater equitability in wealth redistribution, every policy to bring it about tends to be dismissed by neo-liberal ideologies as unworkable solutions that can only end in authoritarianism and the Gulag. For some reason, philanthropy comes out as the only viable means of patching over the harm caused by the selfish pursuit of material wealth. The question is, why?’.

From the positive perspective, it could be because, where philanthropy is concerned, money is being handled by those with a proven track record of making it work to produce value. Whereas governments are known to waste money on unnecessary bureaucracy, the philanthropists are people who have revolutionised retail, or brought computing to the masses, or built rockets that can land on platforms out at sea. Who could be better placed to use money responsibly and build a better future?

Advocates of philanthropy also cite autonomy as another advantage. This line of reasoning was adopted by Matthew Bishop (author of ‘Philanthrocapitalism: How the Rich Can Save the World’): “They do not face elections every few years like politicians, or suffer the tyranny of shareholder demands for ever-increasing profits, like CEOs of most public companies. Nor do they have to devote vast amounts of time and resources to raising money, like most heads of NGOs. That frees them up…to take up ideas too risky for government, to deploy substantial resources quickly when the situation demands it”. In short, they answer to nobody, and if their hearts are in the right place nothing can stop them putting life-changing sums of money to good use.

But, since these are life-changing sums of money the philanthropists are being trusted with, there need to be assurances that their hearts are, indeed, in the right place. The best way to ensure things are done properly is to have transparency and a democratic process. The problem is, there is often neither transparency nor accountability. The World Health Organisation’s head of malaria research, Aarati Kochi, compared the Gates Foundation to a cartel, claiming the organisation was “accountable to no-one other than itself”. And Dr David McCoy, adviser to the People’s Health Movement, reckoned “it also operates through an interconnected network of organisations and individuals across the NGO and business sectors. This allows it to leverage influence through a kind of ‘group-think’ in international health”. From this perspective, ‘philanthropic’ organisations have zero transparency, are accountable to nobody, and are really just an excuse to transfer power from the State to billionaires. As Peter Kramer, a critic of the Giving Pledge, said, “it’s not the state that determines what is good for the people, but rather the rich want to decide”. Given that these are often some of the most ruthless exploiters of competitive behaviour and its negative effects, one has to wonder whether unaccountable billionaires working without transparency really can be trusted to serve the public’s interests.

The cynical way of looking at philanthropy is to view it as just a PR exercise whose purpose is to justify some having so much to begin with, while throwing token amounts of money at those enduring the negative externalities that inevitably arise when we compete to gain more by whatever method we can get away with. Where the ‘Giving Pledge’ is concerned, there is no legal obligation to do anything. Signatories merely say they will give away half of their fortune; signing the pledge places them under no enforceable commitment to actually follow through on their promise. Now, if there were transparency, so that the public could see what was being donated and where it was going, that might ensure the pledge is indeed honoured. But, guess what? There is no transparency. So how can we ever know what was given away, or for what purpose? Really, then, there is nothing to prevent the Giving Pledge from being a PR stunt intended to placate a public grown sick and tired of the excesses of the rich and the gross wealth inequality fuelled by a ‘greed is good’ culture that has brought such harm to people and their communities. As the philosopher Slavoj Zizek put it, “charity is the humanitarian mask hiding the face of economic exploitation”.

Also, when it comes to the establishment of charitable organisations, there are reasons for taking such action that do not necessarily count as altruistic. You see, by setting up such organisations, the ultra-rich can take advantage of tax loopholes as money is passed through them.

Such was the case with the foundation set up by the Walton family. The five Walmart heirs have a combined net wealth of over $139 billion, meaning they have more money than the bottom 40 percent of Americans combined. An independent audit determined that the Walton Family Foundation, built “at almost no cost to themselves”, was “exploiting complex loopholes in order to avoid billions of dollars in estate taxes”.

As to how much of that $139 billion fortune actually went to charity, the answer is…0.4 percent. That is such a paltry amount, it is hard not to agree with Peter Joseph of the Zeitgeist Movement who said, “what they are really doing is bypassing state funding in favour of their own interests. Moving money to charity foundations, effectively consolidating wealth in the hands of private interests rather than government, is a logical method to keep things ‘in the club’ of private business power”.
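For perspective, a back-of-the-envelope calculation (my own arithmetic, using only the two figures quoted above) shows what that 0.4 percent amounts to:

```python
# Illustrative arithmetic only: the $139 billion fortune and the
# 0.4 percent charitable share are the figures cited in the text.
fortune = 139e9            # combined Walton net wealth, in dollars
charitable_share = 0.004   # the 0.4 percent given to charity

donated = fortune * charitable_share
print(f"roughly ${donated / 1e9:.2f} billion donated")
print(f"roughly ${fortune - donated:,.0f} retained")
```

In other words, somewhere in the region of half a billion dollars given away, against well over a hundred billion retained.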


Philanthropy and charity are either the most viable way of dealing with the problems we are facing, or they are just a PR stunt amounting to a band-aid for problems whose systemic root remains deliberately unaddressed by those with vested interests in retaining the status quo. Obviously, one’s own political ideology will influence which of these possibilities seems most plausible. Really, though, I suppose all we can do is wait and see whether the philanthropists’ pledges really do bear fruit and build a better tomorrow.


“The New Human Rights Movement” by Peter Joseph

“The Survival Manual” by Mark Braund and Ross Ashcroft


Rewarding Work In ‘Red Dead Redemption 2’


In this essay I thought I would write about the ways in which Rockstar’s Red Dead Redemption 2 incorporates into its gameplay the elements that work needs in order to be rewarding.

First, though, we need to figure out what those elements are. Barry Schwartz has looked into this, and come up with the following ideas:

“Satisfied people do their work because they feel that they are in charge. Their work day offers them a measure of autonomy and discretion. They use that autonomy and discretion to achieve a level of mastery or expertise…Finally, these people are satisfied with their work because they find what they do meaningful. Potentially, their work makes a difference to the world”.

I think the key words in that passage are ‘autonomy’, ‘discretion’, ‘mastery’ and ‘meaning’. Whenever physical or mental activity incorporates these, you have work that is rewarding.

So how does Red Dead Redemption 2 fare? First off, the environment in which this game is set obviously lends itself to ‘autonomy’. It is set in the vast expanse of the American Wild West and, as the game’s trailer puts it, “the world is full of adventures and experiences that you discover naturally as you move fluidly from one moment to another”. This gives the game a non-linear feel, as the player can ride off in any direction.

Along the way, the player is likely to encounter various situations and activities. Most of the time you are not required to participate and can decide for yourself whether or not to get involved. This means the game manages to incorporate another feature work needs in order to be rewarding, namely ‘discretion’. We also see discretion at work during missions, where you are asked to make decisions like what actions members of your posse should take, or whether an aggressive or pacifist response is most appropriate for the current situation.

One could also cite character management and customisation as further ways in which this game provides opportunities for discretion or judgement. As the game’s trailer says, “your experience is defined by the choices and decisions you make…You can, of course, choose what to wear, ride and eat”. Furthermore, these are not merely cosmetic choices that just change your appearance but have no real consequences. Your character has various health attributes that you need to take care of. A decent coat in winter could mean the difference between life and death, whereas during a hot summer it would not be wise to wear such warm clothing. From character customisation and management, to the snap decisions required of the player during missions, to the open world and the nonlinear experiences it offers, Red Dead Redemption 2 provides plenty of opportunity to apply one’s discretion.

When it comes to mastery, ever since Pong was introduced with the simple instruction to ‘avoid missing ball for high score’, videogames have provided players with challenges that test their ability and enable them to feel like their skills are developing.

The best games don’t just rely on setting a challenge like getting from A to B in a set time or shooting X number of targets. They incorporate systems of feedback into the gameplay that inform the player how well they are performing and whether they should try another strategy. Visual and audio cues let you know if things are going well or not. And the best games do not leave you in the dark over what you should be doing, but neither do they hold your hand; they leave it up to you to figure out how to accomplish what needs to be done.

Finally, by providing the player with identifiable tokens of progress in the form of special items, areas and other stuff that you unlock by achieving certain objectives and challenges, games like Red Dead Redemption 2 let you feel like you are gaining mastery and making real progress as the gameplay continues.

Now, when it comes to meaningful work, one might struggle to claim Red Dead Redemption 2 provides much of this if we consider the game from the perspective of ‘real life’. This is, after all, just a videogame. Sitting in front of a TV pressing buttons on a joypad hardly stands beside researching the cure for cancer as “work that makes a positive difference to the world”.

But in the context of the in-world experience, many games offer a grand narrative that sees the player progress from a nobody at the start to a significant figure whose actions and decisions have had a decisive effect on shaping history by the end. You become the hero who saved the world. Admittedly, in Rockstar’s most famous franchise (Grand Theft Auto) you are attempting to rise in the ranks of criminals, which is not exactly everyone’s idea of a positive contribution to society. And, in Red Dead Redemption 2 you are cast as an outlaw and can engage in the kind of activities for which Rockstar has earned an image as the bad boy of videogaming. You know, robbery, murder, that kind of thing. But there do appear to be opportunities to act like an outlaw with noble intentions. There are situations in which you can choose to help people or to refrain from killing foes. According to the trailer, “there are countless secrets to uncover and people to meet. You can get into raucous altercations…chase down bounties. Your behaviour has consequences and people will remember you and your actions”.

So, like all the best games, Rockstar’s Red Dead Redemption 2 successfully incorporates ‘autonomy’, ‘discretion’ and ‘mastery’ into a grand narrative that provides a sense of social meaning.

This achievement is all the more remarkable when you consider how menial an activity videogaming actually is. After all, what are you physically doing when you play these games? Repeatedly pushing buttons. Really, that’s it. Yet, somehow, games designers can take this dull, repetitive activity, one that ranks alongside rote assembly line work as the most menial ever created, and build an experience on top of it that is so compelling people happily pay good money so they can do it!

But, really, it is because it is we the players who are paying to do work in videogames that designers have every reason to try to make it as engaging and rewarding as possible. Aside from stories of people in sweatshops grinding through MMORPGs to level up avatars that are then sold on to people who would rather pay than do the tiresome work of obtaining a high-level character themselves, nobody is ever coerced into playing a videogame. Participating in such activity is entirely voluntary for the vast majority of players, so if games are to sell, their developers have to make sure that the work a gamer has to do in order to get through the game is rewarding.

However, when it comes to jobs, I believe there is an incentive to reduce the elements that make work rewarding. The first three features (‘autonomy’, ‘discretion’ and ‘mastery’) have something in common, which is that they provide opportunity to enhance one’s individuality. The best games really do try to include ways to let the player customise the experience, which in some cases goes as far as incorporating editing tools that enable you to craft whole new levels and gameplay. I would argue that the reason videogames tend not to make good films is that the characters in them are often pretty much blank slates intended to be filled in by the player’s personality, not well-developed characters with their own psychology.

The best videogames provide plenty of opportunity to enhance one’s individuality. We are each of us unique individuals with our own lifestyles, preferences and abilities, and ideally we would have jobs that reflect this. But this could potentially cause problems when combined with the other feature that makes work rewarding, which is that it should be ‘socially meaningful’.

Imagine that my job is very important and valuable to society, and that it is also perfectly tailored to suit the unique individual I am. Obviously this would be tremendously engaging and rewarding work for me, but what if one day I was run over by a bus? The company would be in big trouble, for who could replace me and fit into a position uniquely suited to the individual I am?

On the other hand, if you can somehow reduce or even eliminate the amount of autonomy, discretion and mastery a job requires, you also need to rely less on the individuality of your employee. The employee can then be treated less like a unique person and more like an interchangeable unit that can be removed and replaced at the employer’s discretion.

This obviously impacts on the employee’s bargaining power, as anyone who feels they are eminently replaceable is not going to ask for better pay or preferable working conditions. Cheaper labour means more profit for employers. Furthermore, employees are in a competitive environment in which they fight to earn enough money to keep from becoming too indebted, to keep up appearances in environments that emphasise material wealth as the sign of success, and in which there are taxes that have to be paid if you don’t want to go to jail. In other words, there is a lot of ‘negative motivation’ leading people to submit to jobs not because they expect to be rewarded if they do, but because they fear the consequences if they don’t.

So, employers have to pay their workers in order to get jobs done, and since they prefer cheaper labour where possible they are incentivised to reduce the qualities of work that make it rewarding, because in doing so they make employees more like interchangeable, replaceable units. Furthermore, in the world of wage labour there are various forms of negative motivation that push people into accepting jobs that are not very rewarding, so, unlike videogame designers, employers need not be too concerned that the work they offer sucks.

I think this helps explain why people seemingly don’t want to work. It really is bizarre if you think about it. Imagine an animal like a dolphin, obviously evolved for a life in water and yet seemingly reluctant to leave dry land and go swimming. People are like that. We have large brains housing creative minds, dexterous hands that can use tools in complex ways, sophisticated language that enables us to cooperate and compete in ways no other animal can even imagine doing, and we are healthiest when mentally and physically active in social situations. In short, we evolved to work but apparently we don’t want to. At least, that seems to be the attitude people have when the topic of UBI comes up. As well as objections about how unaffordable they think it would be, people claim that if you did not have to earn wages in order to survive, nobody would work and we would just passively consume TV all day.

Actually, the evidence shows that it is in countries where people spend the most time in jobs that we find the highest consumption of TV. Which makes sense if you think about it: having burned so much energy in their jobs, people don’t have much left for anything else in their spare time. On the other hand, people in countries where less time is expected to be spent in jobs tend to do more voluntary work and spend less time sitting in front of the TV.

Also, to get back to the theme of this essay, videogames debunk the theory that nobody wants to work. After all, if that were true, nobody would pay good money to go through the effort of trying to beat the various challenges these games set. Nor would anyone develop their sporting or artistic talents. These things take work, and quite a lot of it in some cases. The reason we so willingly pay to do the work in videogames is, as I have argued, that the designers of such games have an incentive to make that work as rewarding as they possibly can, because at the end of the day they want as many people as possible to buy the game and recommend it to others. Job providers, on the other hand, are incentivised to reduce the qualities that make work rewarding in order to make employees more replaceable and exploitable. Not that all jobs can have such qualities reduced; it’s just that enough can be made unrewarding to explain why 90 percent of people don’t enjoy their paid work.

As Red Dead Redemption 2 shows, it is not really work we don’t like. It is jobs.

Thanks to Rockstar for the images


‘Why We Work’ by Barry Schwartz

‘Utopia For Realists’ by Rutger Bregman

‘Red Dead Redemption 2 trailer’ by Rockstar


On Slavery


The past, it has been said, is a foreign country where things are done differently. At times, when looking back at the past, one feels a sense of relief to live now and not then. Who, for example, has heard of accounts of people enduring surgery while awake and aware and not thought “thank goodness I live now, when anaesthesia exists”?

And then there is the practice that is the main topic of this series. Slavery was once legal and widely practiced. Thank goodness we live now, when it is not only illegal but considered so morally repugnant that there are calls to take down the monuments of historical figures whose fortunes partially depended on it, regardless of what philanthropic achievements they may also have accomplished. Not everyone believes this move to strip historical figures of their monuments for failing to live by modern ethical principles is just, but we must all feel that the abolition of slavery ranks as one of the high points of human progress.

Yet I feel like we have the wrong belief when it comes to slavery. Not wrong in the sense of our moral attitude toward it, but in the sense of how we think it ended and the extent to which it did end.

The way its end is popularised

In the popular imagination, it was the superior moral argument that ended slavery. Abolitionists campaigned to make it illegal and, as right was on their side, they ultimately won. And that was that: slavery was abolished. We are also encouraged to believe that, while it was practiced, slavery was always the most brutal violation of a person’s liberty. Dramas and documentaries always portray the practice the same way: white Europeans colonise foreign lands and, finding people of colour and being too prejudiced to see that we are all the same beneath the superficial difference of skin tone, treat them like beasts of burden. They round them up, clap them in chains, throw them into the cargo hold of a ship and then sell them in markets to brutal masters who force them to toil under the crack of the whip.

What these beliefs do is make it seem like a chasm exists between the past and the present. Over there, beyond that great dividing line, there was slavery. Thank goodness we live over here where there is freedom and career opportunities.

But, really, this is a flawed belief. The abolition of slavery was neither as decisive nor as complete as we are led to believe. There is no gulf between slavery and jobs; rather, they exist on a continuum. And if there is freedom to be found along this path, then we have not yet reached that point.

The transition

You only have to imagine how a sudden transition to illegality would play out in practice to see why the move away from slavery had to be a gradual evolution toward freedom rather than an overnight liberation. Picture the scene. You are a slave and, as such, you own nothing at all, not even your own body. But then slavery is abolished and now, for the first time since your capture, there is something you can call your own. You are the sole owner of your labour power. But you own nothing else, and everything around you is the property of others. You cannot farm the land in order to grow your own food, because that is somebody else’s private property. You own no tools and have no money with which to buy them, and you cannot simply take some, for to do so would be stealing.

All in all, as a former slave who now owns your own labour power and little else, your options are going to feel very limited. In fact, you would probably feel there is only one thing you can do: beg your former masters to employ you. Now, this is hardly going to be a negotiation between equals. Pretty much all the bargaining power lies with the rich, propertied and well-connected former owners. So, when you come begging for a job, are you really going to be offered reasonable hours, paid vacations, in-work safety protocols and a decent salary? Certainly not, not if your former masters follow capitalist logic and are out to hire labour as cost-effectively as possible. What you will be offered is work barely distinguishable from your former state of complete servitude: no rights other than the right to quit; wages so low you can only just subsist on them (which of course means that actually quitting work altogether feels like an unobtainable fantasy); and, if there are plenty of former slaves also looking for employment, not much chance of a better offer anywhere else. After all, why would former owners not squeeze every last drop of value out of your labour power, when they hold all the bargaining chips?

The comedian Steve Hughes summed up how it really felt the day slavery was ‘abolished’ in one of his stand-up shows:

“Right! You lot are free to go. We’ll see you back here tomorrow at six-thirty!”.

In the next instalment, we will see that the situation was probably even worse, because of how rigged society was against those recently ‘liberated’ slaves.


“The New Human Rights Movement” by Peter Joseph

Steve Hughes’ stand-up routine


Slavery and racism

No essay on slavery can avoid talking about racial prejudice. After all, racism is often portrayed as synonymous with slavery. But while there is no denying that an attitude of white superiority has existed, especially during the late 19th and early 20th centuries, we are wrong to suppose that black people were enslaved simply because white racists considered them inferior. No, what actually drove slavery (or, at least, American slavery) was economics. Simply put, there was market pressure to secure cheap labour and profitable investments, and slave labour simply seemed a better deal than the alternatives on offer.

As the sociology professor William Julius Wilson explained, “the conversion to slavery was not only prompted by the heightened concern over a cheap labour shortage in the face of rapid development of tobacco farming as a commercial enterprise and the declining number of white indentured servants entering the colonies, but also by the fact that the slave had become a better investment than the servant. As life expectancy increased…planters were willing to finance the extra cost of slaves. Indeed, during the first half of the seventeenth century, indentured labour was actually more advantageous than slave labour”.

That term ‘indentured labour’ is worth pondering. You may recall from part one how slavery is often portrayed as a violent theft of a person’s liberty (in movies, for example, there is often a sequence showing people being rounded up and physically forced into their new role as labourers or domestic servants). But a person did not always come to be in a position of servitude because others physically forced them into it. Sometimes people sold themselves into slavery. Why on earth would anybody do such a thing? For the same reason plenty of people submit to employment: they are in debt, face likely punishment if it is not paid, and so ‘voluntarily’ give up their liberty and labour for others until the debt burden is lifted. In the case of 17th-century indentured servants (and quite a few people today, actually), debts were so substantial it could take a lifetime to clear them, meaning there was little practical difference in such cases between an indentured servant and an outright slave.

I put ‘voluntarily’ in scare quotes because I believe it is possible that, even though people who made such a decision may not have been physically forced into slavery, there were nevertheless other pressures which, if coercive enough, could have psychologically forced them into a life of servitude. In other essays I have referred to this as ‘negative motivation’: taking action not because you hope to be rewarded if you do, but because you fear the outcome if you don’t. For some reason, free-market ideologues believe that once you legally grant individuals the right to withdraw their labour, any deal involving the hiring of labour must be free of coercion and voluntary in the true meaning of the word (“if they don’t like the deal being offered, they can walk away!”). It seems much more realistic to me that, rather than a sharp distinction between unfree slaves and employees whose decision to hire out their labour is entirely volitional, you can draw a smooth continuum: from the slave who is physically forced into servitude, to the indentured servant who is psychologically coerced into it, to the employee whose experience is a ‘carrot and stick’ combination of rewards and punishments, and so on up to the worker who regards his career as his true calling and does it gladly.

European indentured servants were not only practically similar to slaves; attitudes toward them were also similar. As the civil rights professor Carter A. Wilson explained:

“Colour prejudice against Africans was rare in the first two-thirds of the 17th century. Legal distinctions between black slaves and white servants did not appear until the 1660s…Interracial marriages were common in the first half of the 17th century and…at this time they provoked little or no reaction”.

How slavery became racist

So, if market economics and not racism was what caused slavery, how did prejudice end up such a dominant part of the practice? It seems that racism and class distinction were deliberately stirred up as a means of exerting control. During the latter half of the 17th century, expanded agriculture in the Southern states created a huge demand for cheap labour, and that demand was answered by way of the global African slave trade. This obviously also meant a dramatic increase in the slave population. It was around this time that public policy began to change, with the intent of creating security through hierarchical dominance: a division between poor whites and black slaves was invented in order to achieve the social distinction necessary for hierarchy. According to the historian Edmund S. Morgan, for example, a government assembly in Virginia:

“Did what it could to foster contempt of whites for blacks and Indians…In 1680 it prescribed 30 lashes…’if any negro or other slave shall presume to lift up his hand in opposition against any Christian’. This was a particularly effective provision that allowed servants to bully slaves without fear of retaliation, thus placing them psychologically on a par with masters”.

The purpose of this prejudice-based bullying was to ensure the growing slave population remained subdued and controllable. As Peter Joseph put it, it was a move to “generate a culture of bigotry and dominance that echoes to this day. So, in a sense, racism has effectively been a system reinforcer to optimise slave labour by way of sociological manipulation”.

Even after slavery was supposedly abolished, there continued to be an interest in controlling minority and lower-class populations. Segregation played an obvious part here, effectively trapping people in areas and circumstances where political and economic oppression were ever-present. As Peter Joseph explained, “the legal system morphed from direct racial oppression to indirect by targeting the outcomes of historical and present socioeconomic inequality, rather than any specific group”.

In other words, although in theory slavery has been made illegal in most countries, in actual fact societies were, and in many places continue to be, structured in such a way as to ensure a ready supply of labour that is not as free as we would like to believe. More on that in the next instalment.


“The New Human Rights Movement” by Peter Joseph

“Centuries Of Change” by Ian Mortimer


How slavery is still legal

In part one we were asked to imagine a newly liberated slave who is deciding what to do in order to live. We imagined that he would refrain from stealing or trespassing on the grounds that to do so would break laws. In reality, he would have found it incredibly hard to avoid breaking any laws, because the judicial system was so rigged against his class.

The aftermath of the civil war left the South in economic turmoil, and under such chaotic conditions the authorities played fast and loose with the power to arrest and detain. Vaguely defined vagrancy laws and other dubious charges were deployed against people (typically black people and the poor). This actually had little to do with restoring law and order; the real purpose was to keep the prisons well stocked. You see, forced labour as a form of punishment was still legal, so anyone (a former slave, say) who was arrested and found guilty of some such offence could be made to perform what was, to all intents and purposes, slave labour. The practice even had a name: convict leasing. So profitable was this practice that, by 1898, 73 percent of Alabama’s total revenue was derived from convict leasing, and it took the federal government many decades to shut it down completely.

But, actually, an argument could be made that the practice was never completely abolished. Even today we have private prisons and corporations exploiting the labour of inmates. Companies like McDonald’s and Starbucks ‘employ’ prisoners, who in some cases earn as little as 23 cents an hour. There are also contractual agreements between state and local governments and private prisons that require the state to meet prison-occupancy quotas or otherwise pay for empty cells. Just as convict leasing led to corrupt arrests carried out to meet labour demand, this modern practice of guaranteeing maximum prison occupancy, regardless of a region’s actual crime levels, has also resulted in corruption. There was, for example, the 2008 ‘kids for cash’ scandal, in which two Pennsylvania judges took millions in bribes from a for-profit prison company to increase the number of inmates. With a pool of labour for hire at mere pennies an hour, one can appreciate the economic incentive to keep prisons well stocked.

The prison-industrial complex

Having said that, the largest beneficiary of slave labour from prison inmates is not private business but the State. As was explained in the Storyville documentary “Jailed In America”, “when someone is convicted and moves from jail to a federal or state prison, the government now has legal access to them as a workforce. These prisoners work for almost nothing, making road signs…or just about anything the government decides”. They may also be put to work providing services the prisons require in order to function, such as doing laundry or maintaining the building’s plumbing. Incarceration is part of a massive prison-industrial complex, an industry worth some $265 billion a year. It could not exist were it not for inmates, and so there needs to be a steady supply of new people. Hierarchical societies are structured in such a way as to ensure poor people face limited life choices that are highly likely to lead to incarceration. The way parole is conducted further supports the idea that the prison-industrial complex is structured to provide a supply of slaves. Being on parole comes with conditions which, if broken, send violators back to prison, and these conditions include such things as being homeless or out of work. Note that, for everybody else, neither of these is illegal. Nevertheless, for those on parole, becoming homeless or losing your job (and plenty of other situations that involve no law-breaking) results in being thrown back into jail and the slave labour that often awaits.

Why do we punish the guilty?

When it comes to prisoners, we are encouraged to believe that inmates are just bad people who freely chose to commit crime. Such an attitude probably has its roots in monotheism and its portrayal of the human being as an individual with free will who exists separate from the rest of nature. Although one should be careful not to absolve individuals of all personal responsibility, the fact of the matter is that what free will we have is easily overcome. Both magicians and fraudsters understand and exploit flaws in our ability to make decisions and process information, tricking us into carrying out actions of their choosing while we believe we are exercising pure free will. There are also plenty of experiments showing how easily people’s ability to make independent choices can be swayed by pressure from peers and authority figures. Environmental and social factors impact our ability to choose freely, and these predominantly affect the lower classes. What kind of upbringing you had, the state of your education, the quality of your diet, economic factors and more can set people on a course that is more likely to end in a conviction than the life choices presented to others.

Again, I should stress that this is not being pointed out in order to argue that personal responsibility does not exist; it does, at least to some degree. But, equally, we really should not condemn those found guilty when we know nothing of the factors that may have influenced the way their lives turned out. Crime is sometimes described as a ‘social disease’. Sometimes it is necessary to quarantine people who carry a contagious biological disease, yet no moral condemnation attaches to such a decision. When it comes to those who catch the social disease of criminality, however, there tends to be moral condemnation alongside the perceived need to separate such people from society. Any society based around competition for material advantage via whatever method you can get away with, and which also incentivises negative attitudes toward the losers of such competition (labelling them failures and so on), is bound to create conditions in which some will succumb to the temptations of crime. In a neo-liberal free market where everything is a commodity with a price tag attached, how ethical you are depends on how ethical you can afford to be; morality doesn’t really come into it. As Peter Joseph said, with regard to corporations exploiting the cheap labour of prison inmates:

“This pursuit of cost-efficiency is what notably defines market efficiency…This is simply the nature of capitalist logic, and the still-common idea that the rise of capitalism was somehow instrumental in the general ending of abject slavery on the structural level is little more than denialism”.

Indeed. For, as we have already seen, the popular conception of how slavery ended is quite wrong. It did not simply end with the passing of laws that made it illegal. Rather, there has been a long process of rooting out the opportunities for exploitation and establishing the rights people (particularly the lower classes) require in order to live in reasonable comfort and security. While capitalism should get some credit for its contribution toward creating the wealth that makes it more affordable to be ethical, it should not be forgotten that most rights we now expect as workers had to be fought for. I have no doubt that, were it not for pressure from socialist movements, work under capitalism would have remained so exploitative that life for the majority would be akin to slavery, and the wealth generated would be far more concentrated, as indeed it has been in all redistributive societies where the poor have little or no voice with which to protest their conditions (the sort of societies we have predominantly lived in since the Neolithic revolution).

Nor should we kid ourselves into thinking the struggle to end slavery is over. It continues to exist in varying degrees of obviousness, mostly because the root cause of most slavery persists to this day. We have encountered this cause several times throughout this series. It was there when we talked about people in debt selling themselves into slavery. It turned up again by implication when we discussed the forced labour of prisoners, for incarceration has long been justified as a means of making wrongdoers ‘pay their debt to society’.

Yes, the root cause of exploitation is debt. That’s what we will look into next time.


Storyville: Jailed In America

“The New Human Rights Movement” by Peter Joseph



Along with war and conquest, the imposition of debt stands as one of the two major causes of slavery and servitude. Some would probably argue that debt is a natural and inevitable part of any society involving interactions between individuals with differing means and needs. While it is true that any society can only function so long as people recognise and meet ongoing obligations toward one another, the amount of debt that exists in the world today is way out of proportion to anything required to maintain a prospering, egalitarian society. It is instead diagnostic of a market system that has become decoupled from reality.

Much of the pursuit of profit now has little to do with making physical products intended to solve real problems; it has instead moved into the abstract world of financialisation, in which Wall Street and its equivalents in other countries collude with governments to create and manipulate complex forms of debt. Major companies no longer derive their profits principally from selling actual products. Instead they float their shares on the stock market, borrow cheap money from the government, buy back their own shares and thereby boost the paper value of the company.

This move into abstraction is not without consequences, for there are real downsides to this expansion of debt. Any investigation into how banking works will reveal that banks don’t actually lend out money others have deposited, but instead create money ‘out of nothing’ whenever anyone meets the criteria of being worthy of a loan.

Actually, money is not really being created out of nothing. Rather, wealth is being snatched from the future in order to pay for goods and services here and now. This practice is fair enough when the wealth snatched from the future gets into the hands of those who really can build a better tomorrow. But in reality it too often ends up being used for short-term profit that ultimately causes long-term harm. Banking is a complex system in which people bring about the creation of money out of debt (and of course it is predominantly the poor who need to take out loans) and then, thanks to the negative and positive externalities of market capitalism, the debt and the money separate, with the upper classes extracting the money while the poor are left burdened with the debt. Are there exceptions to this rule? Yes, but then one can also find exceptions to the ‘survival of the fittest’ rule that drives evolution. Evolution is a fact nonetheless, and so are the consequences of this kind of inequality.
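The bookkeeping behind this ‘money created as debt’ claim can be sketched in a few lines of code. What follows is a toy illustration of my own (the ToyBank class, names and numbers are invented for the example), assuming a deliberately simplified single-bank world with no reserves, capital requirements or interest:

```python
# Toy single-bank ledger illustrating the 'loans create deposits' mechanism.
# A deliberately simplified sketch, not a model of real banking.
class ToyBank:
    def __init__(self):
        self.loans = {}     # borrower -> outstanding debt (the bank's asset)
        self.deposits = {}  # customer -> balance (the bank's liability)

    def grant_loan(self, borrower, amount):
        # The bank does not lend out someone else's deposits. It expands
        # both sides of its balance sheet at once: a new asset (the loan)
        # and a matching new liability (the borrower's deposit).
        self.loans[borrower] = self.loans.get(borrower, 0) + amount
        self.deposits[borrower] = self.deposits.get(borrower, 0) + amount

    def money_supply(self):
        # Broad money in this toy world is just the sum of deposit balances.
        return sum(self.deposits.values())

bank = ToyBank()
bank.grant_loan("alice", 10_000)
print(bank.money_supply())   # 10000 -- new spendable money now exists...
print(bank.loans["alice"])   # 10000 -- ...matched exactly by new debt
```

In this toy model, repaying the loan would reverse both entries and destroy the money again, which is one way of seeing why the money supply and the stock of debt move together.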

How impactful is debt? It’s relative…

How much it matters that you are in debt really depends on how likely it is that you will encounter somebody more powerful than you who can demand repayment. This means that, for the most powerful player of all, debt is of no consequence whatsoever, because the day of reckoning will never come. As Alan Greenspan once pointed out, “the US can pay any debt it has because we can always print money”. Or, to put it another way, the US can endlessly snatch wealth from the future without fear that some mightier power will one day come along demanding repayment (this rests, of course, on the assumption that the country remains the dominant power in the world).

But for weaker players, it’s a different story altogether. Consider the words of President Obasanjo of Nigeria:

“All that we had borrowed up to 1985 or 1986 was around $5 billion and we have paid back so far about $16 billion. Yet, we are told that we still owe about $28 billion…because of the foreign creditors’ interest rates”.

By 2004, the developing world was having to pay $20 in interest repayments for every dollar received in foreign aid and grants. The result was crippling austerity and the creation of highly vulnerable people ripe for exploitation by predatory corporations. And austerity is not just a third-world phenomenon: even rich countries had to put up with it following the last great speculative bubble (in sub-prime lending, as you may recall). But, in keeping with the idea that the powerful escape the consequences of bad societal decisions while the weak must bear the cost, the CEOs who led the way in reckless speculation mostly got away with it, riding off into the sunset with breathtakingly large pensions and severance packages, while the poor had services cut and good, secure jobs replaced with gig work stripped of many hard-won benefits.

As debt grows and its harmful consequences fall predominantly on the vulnerable, such people become more desperate, more prepared to do anything to delay the day of reckoning. As Kevin Bales, who is an expert in human trafficking, explained, “the question isn’t ‘are they the right colour to be slaves?’, but ‘are they vulnerable enough to be enslaved?’. The criteria of enslavement today do not concern colour, tribe, or religion; they focus on weakness, gullibility and deprivation”.

How many are slaves today?

Slavery has not been abolished; it persists to varying degrees. That it continues to this day cannot be doubted (any human rights organisation will correct you with evidence if you believe otherwise), but how much of it there is depends on how you define servitude. According to UN estimates there are roughly 27 million slaves in the world today, while another organisation, the Walk Free Foundation, puts the total closer to 46 million. They are mostly bonded labourers, or debt slaves, in India, Pakistan, Bangladesh and Nepal.

But could the numbers be higher still? Think back to the notion of debt bondage and selling oneself into slavery, which we touched upon in part one. What, fundamentally, is the difference between selling yourself to one master for life and being in a position where you must constantly make your labour available for hire, toiling away for minimal reward while others capture most of the value generated by workers like yourself? Doesn’t that suggest a continuum of exploitation running from abject slavery through indentured servitude to wage labour? Yes, one could argue that the conditions of wage labour are preferable to outright slavery (at least in a lot of cases), but you cannot really call either condition ‘freedom’. After all, if to be free is to work for oneself and to gain most of the rewards from a job well done (and also to carry the costs of not doing your work competently), then precious few can claim to be truly liberated from the bonds of servitude. For as the Federal Reserve expert G. Edward Griffin said:

“No matter where you earn money, its origin was a bank and its ultimate destination is a bank…This total of human effort is ultimately for the benefit of those who create fiat money. It is a form of modern serfdom in which the great mass of society works as indentured servants to a ruling class of financial nobility”.

If Griffin’s argument is valid, the true number of slaves in the world today would be counted in the billions.


At the beginning of this series a question was posed: is the popular portrayal of slavery’s end incorrect? We have seen that slavery was not simply abolished with the passing of an Act, creating a gulf between the un-free past and the liberated now. Rather, escape from slavery has been a long process that has made only modest progress in breaking the bonds of servitude and, in some cases, none at all. Progress toward freedom is so slow because, at its very root, market capitalism contains the socioeconomic structures that have given rise to exploitation since the Neolithic period: systems that justify competition, self-interest, hierarchy and inequality, perpetuate scarcity, and profit from the growing environmental and social fallout of negative externalities by exploiting the vulnerable position in which lower-class people and developing nations find themselves.

Yes, some progress has been made. But not as much as apologists for capitalism would have us believe, and certainly nowhere near as much as is technically possible. For example, much is made of the apparent reduction in abject poverty around the world (the condition most likely to result in exploitation). What is not appreciated is that such results are obtained by using an implausibly low threshold for absolute poverty. Were we instead to use the ‘Ethical Poverty Line’ devised by Peter Edward (set at about $7.40 a day), then 4.2 billion people, or 60 percent of the world, remain in an impoverished state, ripe for exploitation.
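As a quick sanity check, the percentage claim here can be verified directly. The 4.2 billion figure is the claim under discussion; the world population of roughly 7 billion is my own assumed round figure for the period in question:

```python
# Sanity-checking the poverty-line arithmetic. The impoverished count is the
# figure claimed above; the world population is an assumed round number.
impoverished = 4.2e9
world_population = 7.0e9  # assumption, not stated in the text

share = impoverished / world_population
print(f"{share:.0%}")  # 60%
```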

Or consider that it would cost about $30 billion a year to end world hunger, and that the roughly 1,800 billionaires in the world could fund this for 200 years and still have roughly $500 million each. It is disgusting that malnourishment and other forms of completely unnecessary deprivation continue to exist. They persist because market capitalism profits from servicing the problems they generate and has no real interest in bringing about an end to scarcity, because assumptions of scarcity are fundamental to how this competitive system works. If you add up all the deaths caused by the various negative externalities ultimately traceable to market competition’s root socioeconomic orientation, capitalism has killed more people than all of the 20th century’s despots combined, and has enslaved more people than any other system in history.
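The billionaire arithmetic can be checked the same way. All the input figures below ($30 billion per year, 200 years, 1,800 billionaires, $500 million retained each) are the claims made above; the calculation simply makes explicit the combined wealth those claims imply:

```python
# Checking the hunger/billionaire arithmetic. All input figures are the
# claims from the text, not independently verified here.
hunger_cost_per_year = 30e9
years = 200
billionaires = 1800
retained_each = 500e6

total_cost = hunger_cost_per_year * years        # $6 trillion
retained = billionaires * retained_each          # $0.9 trillion
implied_combined_wealth = total_cost + retained  # $6.9 trillion

print(f"Cost of ending hunger for {years} years: ${total_cost / 1e12:.1f} trillion")
print(f"Implied combined billionaire wealth: ${implied_combined_wealth / 1e12:.1f} trillion")
```

In other words, the claim holds only if billionaires’ combined wealth is at least about $6.9 trillion, which is broadly consistent with published estimates of total billionaire wealth in recent years.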


“The New Human Rights Movement” by Peter Joseph

“The Creature From Jekyll Island” by G. Edward Griffin

“Modernising Money” by Andrew Jackson and Ben Dyson




Do you always have work to do at your place of employment, or is your work of a kind where sometimes you are busy, while at other times there’s not much to do? If you are one of those employees working where activity goes through peaks and troughs, chances are you have encountered an attitude that is usually accepted as normal but which would have been regarded as quite bizarre by most of our ancestors.

The best way to explain what I mean is to quote an employee who has experienced this weird attitude first-hand. David Graeber collected several such interviews in his book ‘Bullshit Jobs: A Theory’. Here is a typical example from ‘Patrick’, who worked in a convenience store:

“Being on shift on a Sunday afternoon…was just appalling. They had this thing about us not being able to just do nothing, even if the shop was empty. So we couldn’t just sit at the till and read a magazine. Instead, the manager made up utterly meaningless work for us to do, like going round the whole shop and checking that things were in date (even though we knew for a fact they were because of the turnover rate) or rearranging products on shelves in even more pristine order than they already were”.

What I am referring to, then, is that attitude employers have that regards slack time as something that should be filled with pointless tasks or ‘make-work’.

How else might these slow periods be dealt with? I can think of a few alternative options. The business could send unneeded staff home without pay. They could send them home with pay. They could require them to stay at their posts, but let the staff socialise, play games or pursue their own interests until there are real work-based duties to carry out.

Of all these options, sending staff home with pay is the least popular; it hardly ever happens. Letting employees do their own thing during slow periods is also pretty unusual. Sending staff home and forfeiting their remaining wages is more widely practiced, especially under zero-hours contracts that specify no set hours. But if you are in a regular job and there are times when the work is slow, the most common solution is to have that time filled with useless tasks.

It’s hard to see how this practice of making up pointless tasks is in any way productive. Indeed, a case could be made that it encourages anti-productivity. David Graeber recalled an incident from when he worked as a cleaner in a kitchen: he and the rest of the cleaning staff pulled together to get everything done as well and as quickly as possible. With their work completed, they all relaxed…until the boss turned up.

“I don’t care if there are no more dishes coming in right now, you are on my time! You can goof around on your own time! Get back to work!”

He then had them occupy their time scrubbing baseboards that were already in pristine condition. From then on, the cleaning staff took their time carrying out their duties.

Graeber’s boss’s outburst provides insight into why this attitude exists and why it would have seemed so peculiar to our ancestors. He said, “you are on my time”. In other words, he did not consider his staff’s time to be their own. No, he had purchased their time, which made it his, and so to see them doing anything but look busy felt almost like robbery.

How Our Ancestors Worked

But our ancestors could not possibly have conceived of time as something distinct from work and available to be purchased, and they certainly would have seen no reason to fill down time with make-work. You can tell this is so by noting how make-work is absent from the rest of the animal kingdom. You have animals that live short, frenetic lives, constantly busy at the task of survival. Think of the industrious ant or the hummingbird, forever moving in the search for nectar. You have animals that are sometimes active but at other times take life easy, such as lions who mostly sleep and only occasionally hunt. But what you never see are animals being instructed to do pointless tasks.

There’s every reason to believe our ancestors would have been under no such instructions, either, particularly when you know a bit about the kind of societies they lived in and the practicalities they faced. Our earliest ancestors lived in bands or tribes in which there were no upper or lower classes, for the simple reason that the hunter-gatherer lifestyle would not permit much social stratification.

This should not be taken to mean that there was absolute equality among members of bands or tribes, however. Leaders did emerge, distinguishing themselves from the rest of the band or tribe through qualities like personality or strength. Both bands and tribes had ‘big-men’, recognised in some ways as leaders. But such leaders would have been barely distinguishable from ordinary tribe members. At best, the big-man could sway communal decisions; he had no independent decision-making authority to wield and knew no diplomatic secrets that could confer individual advantage. Moreover, the big-man’s lifestyle was indistinguishable from everyone else’s. As Jared Diamond put it, “he lives in the same type of hut, has the same clothes and ornaments, or is as naked, as everyone else”.

Given that our ancestors were hunter-gatherers, it would have made no sense for ‘big-men’ to make anyone fill spare time with make-work. No, the sensible thing would have been to permit relaxation during slack periods so that there was plenty of energy when the time came to put it to good use. You can imagine how there would have been seasons in which there was plenty of fruit to gather, or moments when everyone should mobilise to bring home game. But afterwards, when the fruit was picked and the hog roasting on the spit, the time left was better spent playing, socialising, or resting.

This is, in fact, how we evolved to work. We are designed for occasional bursts of intense energy, which is then followed by relaxation as we slowly build up for the next short period of high activity.

This work pattern could hardly have changed much when human societies transitioned to farming and developed into chiefdoms and larger hierarchical societies. After all, farming is also very seasonal work, so here too it would have made much more sense to adopt work attitudes that encouraged intense activity when necessary (such as when the harvest was ready to be gathered) but at other times to just leave the peasants alone to potter about minding and maintaining things, or relaxing.

Now, it’s true that the evolution of human societies into hierarchical structures not only entailed the emergence of a ruling ‘upper class’ but also a lower caste of slaves and serfs. But, although we commonly conceive of such lower caste people as being worked to death by brutal task-masters, in actual fact early upper classes were nowhere near as obsessed with time-management as is the modern boss and didn’t care what people were up to so long as the necessary work was accomplished. As Graeber explained, “the typical medieval serf, male or female, probably worked from dawn to dusk for twenty to thirty days out of any year, but just a few hours a day otherwise, and on feast days, not at all. And feast days were not infrequent”.

So, our ancestors saw no need to fill idle time with make-work, partly because it was (and still is) of little practical use. But if masters of serfs could plainly see how silly it is to force make-work on their serfs, why can’t modern managers grasp the same thing with regard to their staff? Well, it all has to do with concepts of time, and that’s something we’ll look into next time…


Bullshit Jobs: A Theory by David Graeber

Guns, Germs and Steel by Jared Diamond


If you could go back in time and say to somebody, “can I borrow you for a few minutes?”, your request would have been met with a baffled look. This is because such a person would have had no understanding of time as being broken up into hours, minutes and seconds. Instead, what understanding of time there was consisted of passing seasons, cycles like day and night, or the length of time actions took, on average, to perform. “I will be there in five minutes” means nothing to a rural person in Madagascar, but saying a journey takes two cookings of a pot of rice lets somebody know how long it will likely last. As Graeber explained, for societies without clocks, “time is not a grid against which work can be measured, because the work is the measure itself”.

It’s because our ancestors had no ‘clock’ concept of time that they could not conceive of somebody’s labour-power as being distinct from the labourer himself. Consequently, if somebody came across, say, a cooper, they could imagine offering to buy the barrels he made, or they could imagine buying the cooper himself. But the notion of buying something as abstract as time? How was that possible?

Well, once slavery came about our ancestors did have an approximation to modern employment practices, in that slaves could be rented instead of bought outright. Whenever we find examples of wage labour in ancient times, it pretty much always involves people who were slaves already, hired out to do some other master’s work for a while.

Around the 19th century we do see occasional warnings by plantation owners that slaves had best be kept busy during idle periods, for who knows what they might plot if left with time on their hands? But it took technological innovations, beginning in the 14th century, to really make time seem like a commodity that could be bought, spent, misspent or stolen.

Clocks and buying time

What set us on the road to bosses complaining about ‘their time’ being wasted was similar to what led to the evolution of money. Our ancestors lived in gift-based economies in which favours were freely undertaken with the vague understanding that they would be suitably reciprocated at a later date. But when was a favour suitably reciprocated, or a slight adequately compensated? Such questions led to rules, regulations, laws and contracts that gradually quantified obligations and transformed them into debts and credits that could be precisely calculated.

By the 14th century, clocks had been invented and began to show up in town squares. But where the clock-based concept of time really took off was in the factories of the industrial revolution. The increasing routinisation and micro-tasking of work that typified the production line brought about the quantisation of time into discrete chunks that could be bought, and the need to coordinate logistics led to standardised times (imagine running trains when no two towns agree on when it is 2PM). By dividing time into the now-familiar hours, minutes and seconds, we created a concept of time that conceives of it as a definite quantity that can be purchased, distinct from both the labourer and his produce. It became possible to conceive of buying a portion of a man’s time and owning whatever produce was created during that time, while not actually owning the labourer himself. This, of course, is what distinguishes an employee from a slave.

But once we began thinking about time as discrete units that could be bought, that led to a belief that time could be wastefully spent, not just by being literally idle but by spending ‘somebody else’s’ time doing your own thing, like playing a board game or reading a magazine. The attitude I referred to earlier (‘don’t let slaves be idle lest they plot to free themselves’) was carried over to working practices in industrial cities. This, combined with the idea that you could buy somebody’s time but they could then waste (‘misspend’) your time, led to the peculiar modern notion of time discipline and its obsession with busyness and make-work. Once you get to the 18th century and onwards, you get the emergence of bosses and upper classes who increasingly viewed the old episodic style of working (occasional bursts of intense energy followed by relaxation, slowly building up for the next short period of high activity) as problematic rather than sensible. Moralists came to see poverty as the result of bad time-management: if you were poor, it was because your time was being spent recklessly or wastefully. What better remedy than to have your misspent time purchased by somebody who was rich and, therefore, better able to budget time carefully, as one who is frugal would budget and dispose of money?

It was not only the bosses who came to see time as purchasable units that might be misspent. So, too, did employees, especially since the old struggle between the conflicting interests of employer and employee meant the latter also had to adopt the clock concept of time. If you are an employee, you want an hour’s wage for an hour’s work. But if you are the boss, it would be preferable to somehow extract more than an hour’s work for an hour’s pay. Early factories did not allow workers to bring in their own timepieces, which meant employees only had the owner’s clock to go on. Such owners regularly fiddled with the clock so as to appropriate more value from their employees (by getting them to do overtime for free). This led to arguments over hourly rates, free time, fixed-hour contracts and all that. But, as David Graeber pointed out, “the very act of demanding ‘free time’…had the effect of subtly reinforcing this idea that when the worker was ‘on the clock’ his time truly did belong to the person who had bought it”.

So, the belief that any spare time at work should be filled with pointless tasks came about once somebody’s time became conceived of as distinct units that somebody else could buy and, consequently, as something that could be stolen or misspent. This in turn led to a form of moralising that regarded idleness as a sin, something to be eradicated through the provision of make-work, through indignation at employees seen doing anything other than their jobs, and through having them pretend to carry out tasks when their actual work is done.

It’s not just in stores, offices and factories that this attitude prevails. Where care work is concerned, the service being offered can sometimes consist of being on stand-by just in case the elderly client needs attention. But the elderly person can get so indignant about the carer ‘sitting around wasting my money’ that the carer, too, ends up being asked to pretend to do ‘something useful’, like tidying up a home they have already tidied. From the perspective of the stand-by carer, this can make the work intolerably frustrating.

The future of make-work

Make-work also has worrying implications if future technological capabilities prove as potent as futurists like Ray Kurzweil claim. I would argue that each major work revolution has focused on successively less urgent demands. The agricultural revolution was concerned with food production, which is of obvious importance, since we cannot live without food nor do any other work without adequate nutrition. The industrial revolutions (and the socialist movements that accompanied them) led to higher standards of living and increased comfort. While not as essential as food, conveniences like microwaves, carpets and television sets can make life more pleasant, and the products of manufacturing enable us to carry out essential work with more ease.

But what happens when people have enough of what they need to lead healthy, comfortable lives? Their consumption slows, and that’s anathema to a growth-based system like market capitalism. No wonder, then, that from the 1920s onwards public relations pioneers like Edward Bernays were working with advertising departments to create fake needs so as to sell bogus cures. No wonder, then, that we went from a utilitarian attitude toward products, buying them for practical purposes and making do and mending in order to get the maximum possible use out of our stuff, to a throwaway culture, replacing stuff just because it’s out of fashion, or because it was designed to fail as soon as the maker could get away with and was never built to be easily maintained.

General AI and atomically-precise manufacturing could drastically increase the efficiency with which we manage and carry out the rearrangement of materials, radically reduce waste, and free up time, since we would have the means to automate most of today’s jobs. Once we have automated jobs in agriculture, manufacturing, services and administration, the sensible thing would be to pursue interests outside the narrow sphere of wage labour. It would be a good time to rediscover the episodic working practices of our ancestors and the greater commitment to social capital typical of tribal living, only with the added bonus of immense technological capability to keep us safe from the hardships that do sometimes affect hunter-gatherer societies.

But is such an outcome likely when it has to evolve within a system based on a throwaway culture, where work is seen as virtuous in and of itself, to the extent that ‘spare time’ is considered something to be filled with pointless tasks? Markets have already proven themselves capable of creating scarcity where little real need exists. So it is not too great a leap of imagination to suppose that the moral indignation stemming from the attitudes ‘time is money’ and ‘you are misspending my time’ could work against what should be capitalism’s greatest triumph: unlocking the potential abundance inherent in the Earth’s richness of resources, elevating us to comfortable lives that need not come with the condition that some adopt extreme levels of frugality, and freeing us to become all we can be within a more rounded existence. Instead of that promising outcome, we might well just fill the technological-unemployment gap with make-work and bullshit jobs.

What a waste of time it would be if that were to happen.


Bullshit Jobs: A Theory by David Graeber

Guns, Germs and Steel by Jared Diamond


The Road To Freedom?

In 1944 the Austrian economist Friedrich August von Hayek published ‘The Road To Serfdom’. The book set out to argue that the free market is the only viable way of bringing about freedom and prosperity. Actually, the book does not talk so much about the virtues of free markets as about the downsides of the alternative which, at the time, was central planning. Hayek’s argument was that we can only handle the complexities of reality in a bottom-up fashion, with individuals looking after their own self-interests while guided by pricing signals. This, he reckoned, would result in an efficient allocation of resources arising from what would now be called emergent behaviour.

On the other hand, if we instead relied on a centralised authority to determine resource allocation, such an authority would inevitably find the complexity of modern economies too much to handle. The only way the authority could gain some measure of control would be to exercise more power over the people, restricting their freedom and making them live their lives according to some plan. Thus, a socialist economy would become more authoritarian over time. As the title of the book said, Hayek reckoned socialism to be the road to serfdom.

It’s fair to say that the book remains one of the classic texts of neo-liberalism. Margaret Thatcher described Hayek as one of the great intellects of the 20th century, and he was awarded the Nobel Prize in economics in 1974. Even now, some 64 years after its publication, it is still regarded as a definitive refutation of leftist politics and proof that only neo-liberalism can deliver prosperity. You could say that Hayek is as important a figure to the free market as Karl Marx is to communism.

But, I wonder, does Hayek’s argument really successfully demolish every alternative to neo-liberalism? Does the selfish pursuit of money and the conversion of everything to a commodity to be bought and sold on the market still stand as the only way we can achieve peace and prosperity? Or are its advocates wrong to say there is no alternative?

I would say there is an alternative. We are no longer restricted to the either-or choice of laissez-faire capitalism or authoritarian central planning. There might just be a third way.

It’s worth bearing in mind the time in which Hayek wrote his book and how things have changed since then. At the heart of his argument is the belief that the world is really, really complex and, because of this, far too much information is generated for a centralised authority to handle without imposing real restrictions on individual liberty. Only market competition guided by pricing signals can manage such complexity. But, remember, he was writing in 1944. Communications back then were a good deal more primitive than is the case today. There was not one satellite in orbit. Now we have many hundreds, if not thousands, constantly monitoring all kinds of things such as weather patterns, urban sprawl, and how crops are faring. This amounts to a network of sensors englobing our planet and allowing for realtime feedback about all kinds of important things. Such a perspective simply didn’t exist when ‘Road’ was published.

The advances we have made in our ability to transmit information are truly remarkable. The numbers are hard to grasp as they are pretty astronomical, but let’s give ourselves a standard of comparison and see if that helps. The author James Martin proposed the ‘Shakespeare’ as a standard of reference for our ability to transmit information. One Shakespeare is equivalent to 70 million bits, enough to encode everything the Bard wrote in his lifetime.

Using a single laser beam, you can transmit 500 Shakespeares per second. That sounds impressive, but fibre-optic technology can do much better. By using a technique called Wavelength Division Multiplexing, the bandwidth of a fibre can be divided into many separate wavelengths; think of it as encoding information on different colours of the spectrum. Some modern fibres are able to carry 96 laser beams simultaneously, each beam carrying tens of billions of bits per second, giving a single fibre a capacity of around 13,000 Shakespeares per second.

But we are still not done, because many such fibres can be packed into a single cable. Indeed, some companies make cables with more than 600 strands of optical fibre. That is sufficient to handle 14 million Shakespeares, or a thousand trillion bits, per second.

Think about that. We can now transmit data equivalent to 14 million times Shakespeare’s lifetime’s output from one side of the planet to the other almost instantaneously. Of course, this is quantity of information and not necessarily quality (not everything we send over the Internet is of Shakespearean standards!) but the point is that we can now send an awful lot of information around the world whereas this was not possible in Hayek’s day.
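For the sceptical reader, these figures can be sanity-checked with a little arithmetic. The sketch below assumes each multiplexed beam carries 10 gigabits per second (a round figure within the "tens of billions of bits per second" quoted above); with that assumption, the per-fibre and per-cable totals come out close to the numbers in the text.

```python
# Back-of-envelope check of James Martin's 'Shakespeare' bandwidth figures.
SHAKESPEARE_BITS = 70e6            # one Shakespeare = 70 million bits

beam_bps = 10e9                    # one WDM laser beam (assumed 10 Gbit/s)
fibre_bps = beam_bps * 96          # 96 multiplexed beams per fibre
print(fibre_bps / SHAKESPEARE_BITS)     # ~13,700 Shakespeares per second

cable_bps = 1e15                   # 'a thousand trillion bits per second'
print(cable_bps / SHAKESPEARE_BITS)     # ~14.3 million Shakespeares per second
```

The cable total also implies roughly a thousand such fibres per cable, which fits the "more than 600 strands" mentioned above.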

It would do little good to transmit petabits of information if we did not also improve our ability to store and crunch that data. In 1944 computers barely existed. What computers did exist came in the form of room-sized electromechanical behemoths that consumed huge amounts of power and were so temperamental only specialised engineers could be trusted to go near them.

Ray Kurzweil once said, “if all the computers in 1960 had stopped functioning, few people would have noticed. A few thousand scientists would have seen a delay in getting printouts from their last submission of data on punch cards. Some business reports would have been held up. Nothing to worry about”. And this was in 1960, over a decade after ‘Road’ was published.

Since then, Moore’s Law (which tracks the price-performance of computer circuitry) has increased the power of computers billions of times over. It has shrunk hardware from the room-sized calculators of old to swift, multitasking supercomputers that easily slip into your pocket. Price-performance has improved from about 100 calculations per second per thousand dollars in 1960 to well over a billion by 2000. Such an improvement means we can treat computing as essentially free, as proven by the way people are constantly on their web-enabled devices without ever fretting about how much it is costing. Computers have also become increasingly user-friendly over time, from devices that required considerable technical skill for even simple tasks to modern conveniences like Alexa that can be interacted with through ordinary conversation.
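Those two data points (taking a conservative one billion calculations per second per thousand dollars for the year 2000) imply a doubling time in line with the usual statements of Moore's Law, as a quick calculation shows:

```python
import math

# Implied doubling time of computer price-performance between the two
# figures quoted above (100 cps/$1000 in 1960, ~1 billion by 2000).
cps_1960 = 100
cps_2000 = 1e9
years = 2000 - 1960

improvement = cps_2000 / cps_1960        # a ten-million-fold improvement
doublings = math.log2(improvement)       # ~23.3 doublings in 40 years
print(years / doublings)                 # ~1.7 years per doubling
```

A doubling roughly every 20 months sits comfortably within the 18-to-24-month range usually attributed to Moore's Law.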

The result of all this technological progress is that we are now practically cyborgs from infancy, thanks to the near-constant access to enormously powerful and intuitive computational devices. We live as part of a vast, dense network of bio-digital beings, connected to one another regardless of distance and with ready access to all kinds of information and digital assistance.

What this has to do with Hayek’s argument was expressed in an opinion put forward by David Graeber: “One could easily make a case that the main reason the Soviet economy worked so badly was because they were never able to develop computer technology efficient enough to coordinate such large amounts of data automatically…now it would be easy”.

In part two, we will see how the Internet and other technological advances provide options that were not feasible when ‘Road’ was written.

When Hayek wrote his book there was no Internet. Nobody was a blogger. Not one video had been uploaded. There was not a single Wikipedia entry, not one modded videogame. Linux and bitcoin were not words in anyone’s vocabulary. Now, such things are a ubiquitous part of modern life and most of them are free, part of the collaborative commons. OK, the price of bitcoin went crazily high, but its founder provided the underlying blockchain technology gratis, and made its white paper public knowledge so anyone could improve and expand upon it to create things like a decentralised social-media site built on a blockchain.

Indeed, there are now a great many things we can do on a voluntary basis. Much of the content of the web owes its existence more to passion than to the pursuit of money. Jeremy Rifkin calls this ‘collaboratism’: engaging in work not because financial pressures or some authority compels you, but because the means of producing and distributing stuff has become cheap enough that anyone with the drive to do something can flex their creative muscles, and connect with others who have complementary strengths and weaknesses.

This kind of technological progress changes many things. For example, when you have ready access to manufacturing or logistical systems it makes more sense not to have private ownership of stuff (which nearly always entails that stuff sitting in storage not being used for most of its life) but rather using stuff as and when you need it, and then making it available for others to use when you don’t. Think, for example, of driverless cars that could be there when you need transport and make themselves available for others to use if not. If that car was your own private possession, it would probably be parked somewhere not being used by anyone for long stretches. What a waste of resources!

This is the kind of world advocated by the Zeitgeist Movement. Critics of Peter Joseph tend to dismiss him using the same arguments Hayek used in ‘Road’. But this is to fundamentally misunderstand Joseph’s position. He is in no way advocating any centralised control, but rather more efficient decentralised methods than the corrupt monetary systems that are leaking value from today’s markets.

As to why neo-liberals tend to mistake Zeitgeist’s resource-based economy for central planning, maybe it can be traced back to concept drawings by Jacque Fresco? His Venus Project shows plans for cities whose infrastructure is organised into a circle, at the centre of which sits a big computer monitoring the various flows of information a city generates. Such an illustration sure makes it seem like a centralised authority is in charge.

But you have to bear in mind that this city-wide perspective is only one viewpoint. If we could zoom out, we would see that the spokes of this ‘wheel’ radiate out beyond the confines of the city to connect with other cities, such that it becomes a node in a web of interconnected smart cities. Or, you could zoom in to a more personal level, and see that each person is a node in the network thanks to the web-enabled devices they have ready access to. Just shift perspective and what seems like a centralised master computer turns out to be a node in a network.

I would make an analogy with the web of life. Imagine telling somebody that there is a digital programme, encoded in DNA, running evolution. Imagine that person demanding to know where, precisely, the computer running this programme is located, and also telling you evolution can’t possibly work because Hayek proved centralised planning is hopeless. This would be a fundamental misunderstanding, because the code of life is not to be found in any particular location, but rather distributed throughout the world. Nobody is in charge, there is no top-down authority commanding natural selection.

Similarly, when confronted with Zeitgeist’s outline for systems of feedback that would enable us to track the world’s resources and manage them according to the principles of technical efficiency, it’s always denounced by critics as central planning. It’s almost as if such people forget the Internet ever existed.

When Hayek wrote ‘Road’, mass production was the most obvious manifestation of market competition’s drive to produce sellable commodities, and mass production at that time was largely dependent on factories powered by large power stations. These were hugely expensive means of production that only a minority could afford to own, and which were most efficiently run along fascist lines. You might have been free to quit your job, but once you clocked in you became part of a vertically-integrated management structure and had authorities whose orders had to be obeyed (and who, for the most part, were more interested in lining their own pockets, and those of the banking and governmental masters they answered to, than in rewarding your efforts).

In marked contrast, the technologies of the 21st century could enable production by the masses, for the simple reason that the means of production and distribution could become ever more accessible in terms of cost and ease-of-use. Few can own a factory but if the price-performance of atomically-precise manufacturing goes far enough, what is effectively a factory in a box could sit beside your printer, and if robots follow the same trajectory as computers they should go from being very limited, expensive and largely inaccessible labour-saving devices to cheap, versatile, user-friendly, ubiquitous helpers. We could all become owners of the means of production. Such a decentralised form of production works best when we act as collaborating individuals united by complementary strengths and weaknesses in laterally-scaled networks, which is quite different from the vertically-integrated management that jobs have traditionally been designed around.


When Hayek wrote ‘Road’, the only alternative to free markets he could imagine was central planning. But really, who could blame him? There was no satellite communication, hardly anybody had access to computers and the World Wide Web did not exist. In short there was none of the infrastructure that the digital commons needs to get off the ground, making it perfectly reasonable for Hayek not to consider collaboratism as a viable alternative to the selfish pursuit of money.

Now, the infrastructure is beginning to fall into place. We have a communications web, an information web, and the beginnings of a logistic web and energy web too. Thanks to advances in artificial intelligence, robotics, nanotechnology and more, we are approaching the point of near zero-marginal cost for the creation and delivery of all kinds of content, not just digital stuff but physical stuff too. We can now work together, forming groups and collaborating on projects out of passion rather than out of some selfish pursuit of monetary gain.

‘The Road To Serfdom’ still stands as an effective argument that market competition is preferable to central planning. But consider how laissez-faire principles brought about the financial crisis of 2008 (Wall Street really did take advantage of Ayn Rand devotee Alan Greenspan’s deregulation, and of the commodifying of political influence, to make fraudulent activity legal and prey on people’s financial gullibility), and the impossibility of sustaining free-market principles in anything that resembles the way market competition actually developed (covered in my essay series ‘This Is What You Get’). Considering all that, I suspect that, were he alive today, Hayek would be championing the Zeitgeist movement as the best way of bringing about prosperity. In 1944 there may have been no viable alternative to neo-liberalism, but that’s changing.


‘The Road To Serfdom’ by Friedrich Hayek

‘The Zeitgeist Movement Defined’

‘The Zero Marginal Cost Society’ by Jeremy Rifkin

‘The Age of Spiritual Machines’ and ‘The Singularity Is Near’ by Ray Kurzweil

‘The Meaning of the 21st Century’ by James Martin

‘Bullshit Jobs: A Theory’ by David Graeber




Have you ever felt like your job was a waste of time? If so, you are not alone. When YouGov asked people ‘does your job make a meaningful contribution to the world?’, 37% replied that it did not and 13% were unsure. In other words, half of those polled either didn’t know whether their job was worthwhile, or were certain that it was not. If you are one of these people, chances are you have a ‘bullshit job’.

What is a ‘bullshit job’?

It might be worth talking a bit about what the term ‘bullshit job’ means. Perhaps the easiest way to grasp it is to consider its opposite. When it comes to employment, we usually assume that some need is first identified, and then some service is created to fill that gap in the market. An obvious way to tell whether a service is necessary to society overall would be to observe the effect when it is removed, say as a consequence of strike action. If society experiences a noticeable and negative effect, then it’s almost certain that the job was a valuable one.

On the other hand, if a job could disappear without almost anybody noticing (because its absence has either no effect or is actually beneficial) that would be a bullshit job.

Here’s one example of such a job, taken from David Graeber’s ‘Bullshit Jobs: A Theory’:

“I worked as a museum guard for a major global security company in a museum where one exhibition room was left unused more or less permanently. My job was to guard that empty room, ensuring no museum guests touch the…well, nothing in the room, and ensure nobody set any fires. To keep my mind sharp and attention undivided, I was forbidden any form of mental stimulation, like books, phones etc. Since nobody was ever there, in practice I sat still and twiddled my thumbs for seven and a half hours, waiting for the fire alarm to sound. If it did, I was to calmly stand up and walk out. That was it”.

Now, some points are worth going over at this stage. Firstly, a bullshit job is best thought of as one that makes no positive contribution to society overall (since it would hardly matter if the position did not exist) rather than one that is of no benefit to absolutely anyone. As we shall see, it could suit some people to employ somebody to stand or sit around wearing an impressive-looking uniform. It’s just that whatever function this serves really has little to do with capitalism as most people understand it.

Secondly, one can always invent a meaning for such a job, just as philosophers have made up reasons why Sisyphus could find meaning in his pointless task of rolling that boulder uphill in the sure and certain knowledge that it would roll back down again. But, really, all this does is highlight what bullshit such jobs are. After all, where genuine jobs are concerned one need not rack one’s brains making up justifications, because the need pre-exists the job.

So, with those points out of the way and with a definition of bullshit jobs to work with (‘employment of no positive significance to society overall’) we can return to the question ‘how come such jobs exist?’.

‘This cannot be!’

One reason, strangely enough, is because many people assume they cannot exist. The reason why is because the very idea of bullshit jobs seems to run contrary to how capitalism is meant to work. If one word could be used to sum up the workings of capitalism in the popular imagination, that word would probably be ‘efficiency’. Capitalism is imagined to be ruthless in its drive to cut costs and reduce waste. That being the case, it surely makes no sense for any business to make up pointless jobs.

At the same time, people have no problem believing stories of how socialist countries like the USSR made up pointless jobs like having several clerks sell a loaf of bread where only one was necessary, due to some top-down command to achieve full employment. After all, governments and bureaucracies are known for wasting public money.

It’s worth thinking about what happened in the Soviet example and what did not. No authority figure ever demanded that pointless jobs be invented. Instead, there was a general push to achieve full employment but not much diligence in ensuring such jobs met actual demands. Those lower down with targets to meet did what was necessary to tick boxes and meet their quotas.

Studies from Harvard Business School, Northwestern University’s Kellogg School of Management, and others have shown that goals people set for themselves with the intention of gaining mastery are usually healthy, but when goals are imposed on them by others (such as sales targets, standardized test scores and quarterly returns) such incentives, though intended to ensure peak performance, often produce the opposite. They can lead to efforts to game the system and look good without producing the underlying results the metric was supposed to be assessing. As Patrick Schiltz, a professor of law, put it:

“Your entire frame of reference will change [and the dozens of quick decisions you will make every day] will reflect a set of values that embodies not what is right or wrong but what is profitable, what you can get away with”.

Practical examples abound. Sears imposed a sales quota on its auto repair staff, who responded by overcharging customers and carrying out repairs that weren’t actually needed. Ford set the goal of producing a car by a particular date, at a certain price, that had to be a certain weight: constraints that led to safety checks being omitted and the dangerous Ford Pinto (a car that tended to explode if involved in a rear-end collision, due to the placement of its fuel tank) being sold to the public.

Perhaps most infamously, the way extrinsic motivation can cause people to focus on the short term while discounting longer-term consequences contributed to the financial crisis of 2008, as buyers bought unaffordable homes, mortgage brokers chased commissions, Wall Street traders wanted new securities to sell, and politicians wanted people to spend, spend, spend because that would keep the economy buoyant, at least while they were in office.

With all that in mind, it’s worth remembering the one thing that unites thinkers on the left and right sides of the political spectrum in Western thinking. Both agree that there should be more jobs. I don’t think I have seen a current-affairs debate where the call for ‘more jobs’ wasn’t made, and made often.

Whether you are a ‘lefty’ or a ‘right-winger’, you probably believe that there should be ‘more jobs’. You just disagree on how to go about creating them. For those on the left, the way to do it would be through strengthening workers’ rights, improving state education and maybe through workfare programs like Roosevelt’s ‘New Deal’. For right-wingers, it’s achieved through deregulation and tax-breaks for business, the idea being that this will free up entrepreneurs and create more jobs.

But, in neither case does anyone insist that whatever jobs are created should be of benefit to society overall. Instead, it’s just assumed that of course they will be. This is roughly comparable to somebody being so convinced that burglary does not happen that they take no precautions against theft, which just makes them more vulnerable to criminal activity.

If this analogy is to work, it has to be the case that we are wrong to assume modern markets actively work against bullshit jobs; that, actually, there are reasons why pointless jobs are being created. In that case, our assumption that such jobs can’t exist would work against the possibility of acting to prevent their proliferation.

In fact, such reasons do exist, and a major one is something called ‘Managerial Feudalism’. What is that? Well, that’s a topic we will tackle in the next instalment.


‘Bullshit jobs: A Theory’ by David Graeber

‘Why We Work’ by Barry Schwartz


Bullshit jobs are proliferating throughout the economy, and the reason why is partly due to something called ‘managerial feudalism’. In order to understand the role this plays in the creation of bullshit jobs, we need to look at the various positions people occupied in feudal societies. If you have ever watched a drama set in such times, you will no doubt have noticed how there is always an elite class of people who employ the services of a great many others. In some cases, their servants perform tasks that would be considered useful in today’s society, attending to such things as gardening, food preparation and household duties. But the nobility also seem to be surrounded by individuals who (despite the importance of their appearance, what with all the flashy uniforms they wear) don’t seem to be doing much of anything.

What are all these people for? Mostly, they are just there to make their superiors look, well, ‘superior’. By being able to walk into a room surrounded by men in smart uniforms, nobles give off an air of gravitas. And the greater your entourage, the more important you must be. At least, that’s the impression you hope to convey when you employ people to stand around making you look impressive.

The desire to place oneself above subordinates, and to increase the number of those subordinates and thereby gain a show of prestige, appears whenever society structures itself into a definite hierarchy with a minority holding a ‘noble’ position within that structure. This is exactly what we find in large businesses, where the executive classes assume the role of the nobility. In order to understand why bullshit jobs exist, we need to look at how the condition of managerial feudalism came about.

Rise of the corporate nobility

Once upon a time, from around the mid-40s to the mid-70s, businesses ran what might be called ‘paternalistic’ models that worked in the interests of all stakeholders. The need to rebuild infrastructure following the war, a desire to provide security to those who had fought in it, the strength of unions, and governments following Keynesian economics, all worked to ensure that increases in productivity would bring about increases to worker compensation.

But, during the 80s and onwards, attitudes towards worker collectives and Keynesian economics changed; both came to be seen as stifling entrepreneurs. This gave rise to more lean-and-mean economic practices. What really helped the rise of the lean-and-mean model in the 80s and 90s were certain federal and state regulatory changes, coupled with innovations from Wall Street. The regulatory changes brought about an environment in which corporate mergers and takeovers could flourish.

Meanwhile, Michael Milken, of investment house Drexel Burnham, created high-yield debt instruments known as ‘junk bonds’, which allowed for much riskier and aggressive corporate raids. This triggered an era of hostile takeovers, leveraged buyouts and corporate bustups.

The people who most benefited from all this deregulation and financialisation were those at the executive level. Once upon a time, the CEO of a large corporation would have been the epitome of the cool, rational planner: trained in ‘management science’ and having probably worked their way up through the ranks of the organisation so that, by the time they reached the top, they had mastered every aspect of the business. Once there at the apex of the corporate pyramid, this highly trained, rational specialist would have carried out the central belief of the college-educated middle class, with its mandate of progress for all and not just the few.

But as the corporate world became more volatile toward the end of the 20th century, questions began to arise over whether such rationality and level-headedness were best for delivering the new goal of short-term boosts to shareholders’ profits. With the business world now seen as so tumultuous and complex as to “defy predictability and even rationality” (as an article in Fast Company put it), a new kind of CEO emerged, one driven more by intuition and gut feeling. The new CEO was less a manager with great experience obtained from working his way up the company hierarchy, and more a flamboyant leader who had achieved celebrity status in the business world and was hired on the basis of his showmanship, whether his prior role had anything to do with the new position or not. And they certainly prospered in their position, because the focus on improving the bottom line and rewarding celebrity CEOs saw executive pay soar to over three hundred times that of the typical worker.

It’s hard to exaggerate the difference between the old-style corporate boss and the new breed that arose around the late 20th century. As David Graeber pointed out, the old-fashioned leaders of industry identified much more with the workers in their own firms and it was not until the era of mergers, acquisitions and bustups that we get this fusion between the financial sector and the executive classes.

This marked change in attitudes was reflected in comments made by the Business Roundtable in the 1990s. At the start of the decade, the Business Roundtable said of corporations that they “are chartered to serve both their shareholders and society as a whole”. But, seven years later, the message had changed to “the notion that the board must somehow balance the interests of other stakeholders fundamentally misconstrues the role of directors”. In other words, a corporation looks after its shareholders; the interests of other stakeholders (employees, customers, and society in general) are of far less importance.

Pointless White-Collar Jobs

Now, the term ‘lean and mean’ implies that capitalism had become more, well, ‘capitalist’, taking the axe to any unnecessary expenditure and therefore bringing about more streamlined operations run by more efficient employees. In other words, the exact opposite of conditions favourable to the growth of bullshit jobs. But, actually, the pressure to downsize was directed mostly at those near the bottom doing the blue-collar work of moving, fixing and maintaining things. They were subjected to ‘scientific management’ theories designed to dehumanise work and bring about robotic levels of efficiency, or were replaced by automation or lost their jobs when the firm took advantage of globalisation and moved abroad where more exploitable workers were available. This freed up lots of capital, and it is how that capital was used that is key to understanding how this so-called ‘lean-and-mean’ period brought about bullshit jobs. As Graeber said, “the same period that saw the most ruthless application of speed-ups and downsizing in the blue-collar sector also brought a rapid multiplication of meaningless managerial and administrative posts in almost all large firms. It’s as if businesses were endlessly trimming the fat on the shop floor and using the resulting savings to acquire even more unnecessary workers in the offices upstairs…The end result was that, just as Socialist regimes had created millions of dummy proletarian jobs, capitalist regimes somehow ended up presiding over the creation of millions of dummy white-collar jobs instead”.


“White Collar Sweatshop” by Jill Andresky Frazier

“Bullshit Jobs: A Theory” by David Graeber

“Smile Or Die” By Barbara Ehrenreich


The era of mergers and acquisitions, which broke up admittedly bloated old corporations in order to bring about short-term boosts to shareholders, resulted in the creation of a ‘noble class’ of executives, and of subordinates whose only purpose was to add to the prestige of those above them. One such employee was ‘Ophelia’, interviewed in Graeber’s book. “My current job title is Portfolio Coordinator, and everyone always asks what that means, or what it is I actually do? I have no idea. I’m still trying to figure it out….Most of the midlevel managers sit around and stare at a wall, seemingly bored to death and just trying to kill time doing pointless things (like that one guy who rearranges his backpack for a half hour every day). Obviously, there isn’t enough work to keep most of us occupied, but—in a weird logic that probably just makes them all feel more important about their own jobs—we are now recruiting another manager”.

This raises a couple of questions. How come the person ultimately in charge did nothing to prevent this flagrant waste of money? And how did an era of corporate bustups, mergers and acquisitions result in a proliferation of bullshit jobs?

Well, firstly one has to recognise a crucial difference between corporate raiders and the ‘robber barons’ they styled themselves on. The crucial difference is that people like Rockefeller and Vanderbilt, whatever you think of their practices, actually built business empires. But corporate raiders like James Goldsmith and Al ‘Chainsaw’ Dunlap didn’t do much building. No, they just took advantage of deregulation and financial innovations like junk bonds to tear apart existing businesses, lay off thousands and gain short-term boosts to their shares. They were vultures. That’s not necessarily derogatory. Vultures play a necessary part in cleaning away carcasses. Arguably, the old corporate structure had become too bloated and inefficient and really the axe should have come down on it. What I am suggesting is that, while the raiders were good at profiteering from the death of the old corporate structure, they lacked the ability to prevent the rise of a new one just as liable to create bullshit jobs.

The Influence Of Positive Thought

We can perhaps understand why by combining ‘managerial feudalism’ (with its nobles looking for shows of status, and its flunkies providing a visible manifestation of that superiority) with the phenomenon I talked about in the series ‘How Religion Caused The Great Recession’.

In that series, I explained how early settlers of the United States practiced ‘Calvinism’. The Calvinist religion saw much virtue in industrious labour and particularly in constant self-examination for any sinful thought. Such an outlook probably helped settlers survive in what was, after all, the ‘Wild West’.

But as the harsh environments were gradually tamed, the constant self-examination for sinful thought and its eradication through labour came to impose a hefty toll on those who became cut off from industrious work. Faced with people succumbing to the symptoms of neurasthenia, and with the medical establishment seemingly unable to cure such patients, people began to reject their forebears’ punitive religion. In the 1860s, Phineas Parkhurst Quimby met one Mary Baker Eddy, and together they launched the movement known as New Thought, the beginning of the cultural phenomenon of positive thinking. Drawing on a variety of sources from transcendentalism to Hinduism, New Thought re-imagined God, transforming the hostile deity of Calvinism into a positive and all-powerful spirit. And humanity was brought closer to God, too, thanks to a concept of Man as part of one universal, benevolent spirit. And if reality consisted of nothing but the perfect and positive spirit of God, how could there be such things as sin, disease, and other negative things? New Thought saw these as mere errors that humans could eradicate through “the boundless power of spirit”.

But although intended as an alternative to Calvinism, New Thought did not succeed in eradicating all the harmful aspects of that religion. As Barbara Ehrenreich explained in ‘Smile Or Die’, “it ended up preserving some of Calvinism’s more toxic features: a harsh judgmentalism, echoing the old religion’s condemnation of sin, and the insistence on the constant interior labour of self-examination”. The only difference was that while the Calvinist’s introspection was intended to eradicate sin, the practitioner of New Thought and its later incarnations of positive thinking was constantly monitoring the self for negativity. Anything other than positive thought was an error that had to be driven out of the mind.

So, from the 19th century onwards, a belief that the universe is fundamentally benevolent and that the power of positive thought could make wishes come true and prevent all negative things from happening was simmering away in the American subconscious. When consumerism took hold in the 20th century, positive thinking became increasingly imposed on anyone looking to get ahead in an increasingly materialistic world.

What all this has to do with the current topic is that the cult of positive thinking, begun with New Thought and amplified by 20th-century consumer culture, ended up affecting how businesses were run. Before the Great Depression, there had been campaigners speaking out against the excesses of the wealthy and the oppression imposed on the poor. But the prosperity gospel that had begun in the 19th century, amplified by megachurches and TV evangelists responding to market signals from 20th-century consumption culture, had a markedly different message: there was nothing amiss with a deeply unequal society. Anyone at all stood to become as wealthy as the top 1 percent. Just remain resolutely optimistic and all will be well.

But, unlike the megachurches (which one could leave at any time) or television evangelists (whom one could always just turn off), the books and seminars consumed at corporate events were often mandatory for any employee who wanted to keep his or her job. Workers were required to read books like Mike Hernacki’s ‘The Ultimate Secret to Getting Everything You Want’ or ‘Secrets Of The Millionaire Mind’ by T. Harv Eker, which encouraged practitioners of positive thinking to place their hands on their hearts and say out loud, “I love rich people! And I’m going to be one of those rich people too!”.

Remember that Positive Thinking ideology considers any negativity to be a sin, and some of its gurus recommended removing negative people from one’s life. In the world of corporate America (where, other than in clear-cut cases of racial, gender, or age-related discrimination, anyone can be fired for any reason or no reason at all) that was easy to do: terminate the negative person’s employment. Joel Osteen of Houston’s Lakewood Church (described as “America’s most influential Christian” by Church Report magazine) told his followers, “employers prefer employees who are excited about working at their companies…God wants you to give it everything you’ve got. Be enthusiastic. Set an example”. And if you didn’t set an example and radiate unbridled optimism every second of the working day, you were made an example of. As banking expert Steve Eisman explained, “anybody who voiced negativity was thrown out”.

Such was the fate of Mike Gelband, who was in charge of Lehman Brothers’ real estate division. At the end of 2006 he grew increasingly anxious over the growing subprime mortgage bubble and advised “we have to rethink our business model”. For this unforgivable lapse into negativity, Lehman CEO Richard Fuld fired the miscreant.

A Bullshit Corporate Culture

So, the corporate culture had become one that was decidedly hostile to any bad news, such that even those in positions of high authority got the sack if they voiced any negativity. As for the lower ranks, whatever misgivings they had concerning the way things were had to be filtered through layer upon layer of management. If there’s already a culture of hiding negative reports on how business practices are shaping up, of putting a positive spin on everything, it’s not much of a step from there to not being entirely truthful about the usefulness of the people being hired. This is even more likely to happen if A) your status is defined by how many subordinates you have (and, therefore, to lose subordinates is to suffer diminished status) and B) if employees come to depend on the pretty generous salaries that often come with bullshit white-collar work, for example because their consumerist lifestyle has left them with substantial mortgages and credit card bills. If that’s the case, then it’s probably not a good idea to broadcast how unnecessary some jobs are.

The idea that those in ultimate authority might be prevented from knowing everything that’s going on in their business was encapsulated by a comment that one billionaire made to crisis manager Eric Dezenhall: “I’m the most lied to man in the world”.

It’s important to point out that the role of CEO is not itself bullshit. What is being argued instead is that some CEOs are effectively blind to all the bullshit happening in their firms. Why wouldn’t they be, when anyone bringing them bad news is liable to be sacked, when executives and middle managers surround themselves with yes-men and flunkies, and when an obsession with increasing shareholder value is creating some decidedly dodgy business practices disguised through impenetrable economic jargon and management-speak? Such practices are well suited to redirecting resources so as to create an elite minority with sufficient wealth and power to be deserving of the ‘nobility’ label, to creating elaborate hierarchies of flunkies who are just there to provide visible displays of their ‘superiors’’ magnificence, and to employing spin doctors who pull the wool over people’s eyes and prevent the truth from being revealed. Medieval feudalism had its priestly caste, with their religious texts written in an obscure tongue with which to justify the divine right of kings and all that. Managerial Feudalism has the financial and banking sector and all the obscure language that comes with it, ceaselessly denouncing the working classes whenever they demand living wages and justifying any money grab or show of status by the executive and managerial classes, no matter how greedy and socially unjust.

It’s when we examine financialisation that we really understand how it can be that BS jobs exist. That’s a topic for next time.


“White Collar Sweatshop” by Jill Andresky Frazier

“Bullshit Jobs: A Theory” by David Graeber

“Smile Or Die” By Barbara Ehrenreich


In what way does the world of finance help bring about bullshit jobs? Well, it partly has to do with the way jobs are categorised in the popular imagination. When we talk about major revolutions in working practice we speak of transitions from hunter-gathering, to farming, to manufacturing, to services. Such terms imply that at every stage people always transition to work that is of obvious benefit to society, involving as it does the creation of products that improve quality of life, or by offering services that meet some pressing need or just make life more pleasant.

What’s wrong with this belief is that it paints the wrong picture of what everyone in ‘services’ does. Contrary to what the term implies, not everyone in ‘services’ is helping their fellow human beings by clipping hedges, serving ice cream and so on. There’s a fourth sector involved in work of a different kind, one economists call FIRE, after Finance, Insurance and Real Estate.

The kind of thing this sector is involved in is well illustrated by the goings-on that led up to the 2008 crash. Banks’ profits once relied on the quality of the loans they extended. More recently, however, we have seen a switch toward ‘securitisation’, which in practice involves bundling multiple loans together and selling portions of those bundles to investors as Collateralized Debt Obligations, or CDOs. Rather than earning interest as loans are repaid over time, under securitisation the bank’s profit is derived from fees for arranging the loans. As for the risk inherent in lending money, it’s the buyer of the CDO who takes it on, meaning that, as far as the bank is concerned, defaults are somebody else’s problem.

This caused a shift from quality-driven lending toward quantity-driven lending. Thanks to securitisation, banks could make loans in the knowledge that they could be sold off to someone else, the associated risk becoming the buyer’s problem. Banks were thus freed from the downside of defaults. And when conditions are in place to cause wild exuberance, borrowing is bound to spiral out of control.
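To make the incentive shift concrete, here is a toy numerical sketch. All the figures are hypothetical (not taken from any real bank): it simply compares a bank that keeps loans on its books with one that sells them on for an arrangement fee, and shows that only the first bank suffers when defaults rise.

```python
# Toy model (all figures hypothetical): a bank's income under
# traditional lending vs. fee-based securitisation.

def traditional_profit(loans, principal, rate, default_rate):
    """Bank keeps the loans on its books: it earns interest on
    performing loans but eats the lost principal on defaulted ones."""
    performing = loans * (1 - default_rate)
    defaulted = loans * default_rate
    return performing * principal * rate - defaulted * principal

def securitised_profit(loans, principal, fee_rate):
    """Bank originates, bundles, and sells the loans as CDOs:
    income is an arrangement fee per loan, and defaults are
    the CDO buyer's problem, not the bank's."""
    return loans * principal * fee_rate

# 1,000 mortgages of 100,000 each, 5% interest, 1% arrangement fee,
# under a benign (2%) and a crisis-level (10%) default rate.
for d in (0.02, 0.10):
    t = traditional_profit(1000, 100_000, 0.05, d)
    s = securitised_profit(1000, 100_000, 0.01)
    print(f"default rate {d:.0%}: traditional {t:,.0f}, securitised {s:,.0f}")
```

Under these made-up numbers the traditional bank swings from profit to heavy loss as defaults climb, while the securitising bank's fee income is untouched, which is exactly why it stops caring about loan quality and starts caring only about loan volume.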

Of course, that’s precisely what happened in the runup to the 2008 subprime mortgage crisis. In the words of Bernard Lietaer and Jacqui Dunne, “math ‘quants’ took the giant pools of home loans now sitting on their employers’ balance sheets and repackaged them into highly complex, opaque, and difficult-to-value securities that were sold as safe bets. As more and more of these risky securities were purchased by pension funds, insurance firms, and other stewards of the global public’s savings, the quants’ securitisation machine demanded more loans, which in turn led to a massive expansion of dubious lending to low-income American households”.

Advertisements for banks really push the message that they are but humble servants helping customers protect and manage their money. And with talk of ‘markets’ and ‘products’, the financial ‘industry’ likewise presents itself as doing the traditional work of making useful stuff and providing much-needed services. If you believe the propaganda, the primary purpose of this sector is to help direct investments to those parts of commerce and industry that will raise prosperity, while earning an honest profit in the process.

But while this kind of thing does happen, it’s very misleading to portray the financial sector as being mostly concerned with such services. We can see this is so by looking at where the money goes. A piffling 0.8 percent of the £435 billion created by the UK government through quantitative easing (i.e. money printing) went to the real, productive economy. The rest went to the financial sector.

As David Graeber explained, what this sector actually does is as follows: “the overwhelming bulk of its profits comes from colluding with government to create, and then trade and manipulate, various forms of debt”. In other words, what the FIRE sector mostly does is create money from ‘nothing’. But, the thing is, there actually is no such thing as money from nothing. If somebody is making money out of thin air, somebody somewhere else is being lumbered with the cost. So, really, financialisation is the subordination of value-adding activity to the servicing of debt.

It is under such conditions, in which work is morphed into a political process of appropriating wealth and repackaging and redistributing debt, that the nature of BS jobs (which seems so bizarre from the traditional capitalist point of view) actually makes sense. From the perspective of the FIRE sector, the more inefficient and unnecessary chains of command there are, the more adept such organisations become at the art of rent-extraction: soaking up resources before they reach claimants.

An example of such practices was provided by ‘Elliot’:

“I did a job for a little while working for one of the ‘big four’ accountancy firms. They had been contracted by a bank to provide compensation to customers that had been involved in the PPI scandal. The accountancy firm was paid by the case, and we were paid by the hour. As a result, they purposefully mis-trained and disorganised the staff so that jobs were repeatedly and consistently done wrong. The systems and practices were changed and modified all the time, to ensure no one could get used to the new practice and actually do the work correctly. This meant that cases had to be redone and contracts extended. The senior management had to be aware of this, but it was never explicitly stated. In looser moments, some of the management said things like ‘we make money from dealing with a leaky pipe. Do you fix the pipe, or do you let the pipe keep leaking?’”.

In order for such organisations to continue doing what they are doing, there have to be employees who work to prevent such dubious practices from becoming widely known. Faithful allies must be rewarded, whistleblowers punished. Those on the rise must show visible signs of success, surrounded by important-looking men who make their ‘superiors’ look special, in office environments where one’s status is determined by how many underlings one commands. Meanwhile, those flunky roles are themselves a handy means of distributing political favours, and since those in the lower ranks had best be distracted from the dodgy goings-on, this incentivises the creation of an elaborate hierarchy of job positions, titles and honours. Let them occupy themselves squabbling over that.

So, ‘Managerial Feudalism’ is so called because the FIRE sector (which in practice is spreading, which is why car commercials no longer tell you what it costs to buy the vehicle, only what representative APR you can expect if you take out a loan) has brought about conditions that resemble classic medieval feudalism, which was likewise primed to create hierarchies of nobles, flunkies, mystic castes quoting obscure texts, and downtrodden masses.

This is not without consequence. In the early 20th century, economists like Keynes were tracking progress in science, technology and management and predicting that, by the 21st century, our industries would be so productive we could drastically reduce the amount of time devoted to paid employment, investing the time gained in the pursuit of a more well-rounded existence. When you consider that around 50 percent of jobs are either definitely bullshit or of dubious value to society, you can see that people like Keynes were partly correct. Had we continued to focus on technical efficiency and productive capability, we would doubtless have access to much more leisure and prosperity. But, instead, business, economics and politics combined in such a way as to create a new kind of feudalism that has imposed itself on top of capitalism.

Recapping what we have learned over this series: the old paternalistic corporate model came under attack during an era of bustups, mergers and acquisitions. The corporate raiders who led this attack were different from their predecessors in that they identified much more with finance than with the workers under their management. This, coupled with a cult of materialist positive thinking, gave rise to an executive class whose salary and bonus structure put them in a ‘noble’ position. It also gave rise to a corporate culture that was hostile to any bad news. This meant that, when the savings made by bringing the axe down on those at the lower end of the corporate hierarchy only ended up being wasted on hiring more levels of management, few people dared speak out against the practice. Moreover, keeping one’s mouth shut and hoping you, too, might be in line for a pointless but well-paid white-collar job had become the sensible choice for those burdened with the high costs of an over-consumptive lifestyle. And that part of the ‘service’ sector which has little to do with providing services, being more concerned with colluding with government in order to repackage and sell ever-more complex forms of debt, had every incentive to run things as inefficiently as possible, since those are the conditions in which rent-extraction can cream off more of other people’s money.

Such conditions encourage the existence of jobs that are more to do with appropriating rather than creating wealth, and with disguising the fact that this is happening. When your status is defined by how many underlings you have, this can encourage an increase in the levels of management. If other big businesses employ somebody to sit at a desk, your company must do likewise. Not because the person has anything useful to do, necessarily, but simply because it’s ‘what is done’. When you make your money from a ‘leaky pipe’ (ie some deficiency in the system) this can encourage ‘duct-taping’ jobs that merely manage the problem rather than deal with it. This is like employing somebody to replace the bucket rather than fix the leaking roof. Of course, in that overly-simplistic example the ruse would be easily spotted. But in the deliberately complex world of the FIRE sector there is more chance of doing things incompetently and getting away with it, because few can penetrate the jargon and management-speak and see the bullshit hiding behind it.

What this all means is that the ‘technological unemployment’ gap that Keynes predicted has been filled with jobs that, quite frankly, don’t need to exist. If you can’t imagine how that can happen under capitalism, your mistake is in assuming our current system is something that people like Adam Smith or Milton Friedman would recognise as ‘capitalist’. Bullshit jobs really shouldn’t exist in the kind of free market that people like Stefan Molyneux promote, but they can and do exist in whatever market system dominates today.


“White Collar Sweatshop” by Jill Andresky Frazier

“Bullshit Jobs: A Theory” by David Graeber

“Smile Or Die” By Barbara Ehrenreich
