WHY IS THERE ‘MAKE-WORK’?

Do you always have work to do at your place of employment, or is your work of a kind where sometimes you are busy, while at other times there’s not much to do? If you are one of those employees working where activity goes through peaks and troughs, chances are you have encountered an attitude that is usually accepted as normal but which would have been regarded as quite bizarre by most of our ancestors.

The best way to explain what I mean is to quote from an employee who has experienced this weird attitude. David Graeber collects several such interviews in his book, ‘Bullshit Jobs: A Theory’. Here’s a typical example from ‘Patrick’, who worked in a convenience store:

“Being on shift on a Sunday afternoon…was just appalling. They had this thing about us not being able to just do nothing, even if the shop was empty. So we couldn’t just sit at the till and read a magazine. Instead, the manager made up utterly meaningless work for us to do, like going round the whole shop and checking that things were in date (even though we knew for a fact they were because of the turnover rate) or rearranging products on shelves in even more pristine order than they already were”.

What I am referring to, then, is that attitude employers have that regards slack time as something that should be filled with pointless tasks or ‘make-work’.

How else might these slow periods be dealt with? I can think of a few alternative options. The business could send unneeded staff home without pay. They could send them home with pay. They could require them to stay at their posts, but let the staff socialise, play games or pursue their own interests until there are real work-based duties to carry out.

Of all these options, sending staff home with pay is the least popular. It hardly ever happens. Letting employees do their own thing during slow periods is also pretty unusual. Sending staff home without pay, forfeiting whatever wages remain, is more widely practiced, especially under zero-hours contracts that guarantee no set hours. But if you are in a regular job and there are times when the work is slow, the most common solution is to have that time filled with useless tasks.

It’s hard to see how this practice of making up pointless tasks is in any way productive. Indeed, a case could be made that it encourages anti-productivity. David Graeber recalled an incident when he worked as a cleaner in a kitchen and he and the rest of the cleaning staff pulled together to get everything done as well and quickly as possible. With their work completed, they all relaxed…until the boss turned up.

“I don’t care if there are no more dishes coming in right now, you are on my time! You can goof around on your own time! Get back to work!”

He then had them occupy their time scrubbing baseboards that were already in pristine condition. From then on, the cleaning staff took their time carrying out their duties.

Graeber’s boss’s outburst provides insights into why this attitude exists and why it would have seemed so peculiar to our ancestors. He said, “you are on my time”. In other words, he did not consider his staff’s time to be their own. No, he had purchased their time, which made it his, and so to see them doing anything but look busy felt almost like robbery.

How Our Ancestors Worked

But our ancestors could not possibly have conceived of time as something distinct from work and available to be purchased, and they certainly would have seen no reason to fill down time with make-work. You can tell this is so by noting how make-work is absent from the rest of the animal kingdom. You have animals that live short, frenetic lives, constantly busy at the task of survival. Think of the industrious ant or the hummingbird, forever moving in the search for nectar. You have animals that are sometimes active but at other times take life easy, such as lions who mostly sleep and only occasionally hunt. But what you never see are animals being instructed to do pointless tasks.

There’s every reason to believe our ancestors would have been under no such instructions, either, particularly when you know a bit about the kind of societies they lived in and the practicalities they faced. Our earliest ancestors lived in bands or tribes in which there were no upper or lower classes, for the simple reason that the hunter-gatherer lifestyle would not permit much social stratification.

This should not be taken to mean that there was absolute equality among members of bands or tribes, however. Leaders did emerge, distinguishing themselves from the rest of the band or tribe through qualities like personality or strength. Both bands and tribes had big-men, recognised in some ways as leaders. But such leaders would have been barely distinguishable from ordinary tribe members. At best, the big-man could sway communal decisions; he had no independent decision-making authority to wield, nor any diplomatic secrets that could confer individual advantage. Moreover, the big-man’s lifestyle was indistinguishable from everyone else’s. As Jared Diamond put it, “he lives in the same type of hut, has the same clothes and ornaments, or is as naked, as everyone else”.

Given that our ancestors were hunter-gatherers, it would have made no sense for ‘big-men’ to make anyone fill spare time with make-work. No, the sensible thing would have been to permit relaxation during slack periods so that there was plenty of energy when the time came to put it to good use. You can imagine how there would have been seasons in which there was plenty of fruit to gather, or moments when everyone had to mobilise to bring home game. But afterwards, when the fruit was picked and the hog roasting on the spit, the time left was better spent playing, socialising, or resting.

This is, in fact, how we evolved to work. We are designed for occasional bursts of intense energy, each followed by relaxation as we slowly build up for the next short period of high activity.

This work pattern could hardly have changed much when human societies transitioned to farming and developed into chiefdoms and larger hierarchical societies. After all, farming is also highly seasonal work, so here too it would have made much more sense to adopt work attitudes that encouraged intense activity when necessary (such as when the harvest was ready to be gathered) but at other times to just leave the peasants alone to potter about, minding and maintaining things or relaxing.

Now, it’s true that the evolution of human societies into hierarchical structures not only entailed the emergence of a ruling ‘upper class’ but also a lower caste of slaves and serfs. But, although we commonly conceive of such lower caste people as being worked to death by brutal task-masters, in actual fact early upper classes were nowhere near as obsessed with time-management as is the modern boss and didn’t care what people were up to so long as the necessary work was accomplished. As Graeber explained, “the typical medieval serf, male or female, probably worked from dawn to dusk for twenty to thirty days out of any year, but just a few hours a day otherwise, and on feast days, not at all. And feast days were not infrequent”.

So, our ancestors saw no need to fill idle time with make-work, partly because it was (and still is) of little practical use. But if masters of serfs could plainly see how silly it is to force make-work on their serfs, why can’t modern managers grasp the same thing with regards to their staff? Well, it all has to do with concepts of time, and that’s something we’ll look into next time…

REFERENCES

‘Bullshit Jobs: A Theory’ by David Graeber

‘Guns, Germs and Steel’ by Jared Diamond

WHY THERE IS ‘MAKE-WORK’

If you could go back in time and say to somebody, “can I borrow you for a few minutes?”, your request would have been met with a baffled look. That is because such a person would have had no understanding of time as something broken up into hours, minutes and seconds. Instead, what understanding of time there was consisted of passing seasons, cycles like day and night, or the length of time actions took, on average, to perform. “I will be there in five minutes” means nothing to a rural person in Madagascar, but saying a journey takes two cookings of a pot of rice lets somebody know how long it will likely take. As Graeber explained, for societies without clocks, “time is not a grid against which work can be measured, because the work is the measure itself”.

It’s because our ancestors had no ‘clock’ concept of time that they could not conceive of somebody’s labour-power as being distinct from the labourer himself. Consequently, if somebody came across, say, a cooper, they could imagine offering to buy the barrels he made, or they could imagine buying the cooper himself. But the notion of buying something as abstract as time? How was that possible?

Well, once slavery came about our ancestors did have an approximation to modern employment practices, in that slaves could be rented instead of bought outright. Whenever we find examples of wage labour in ancient times, it pretty much always involves people who were slaves already, hired out to do some other master’s work for a while.

Around the 19th century we do see occasional warnings from plantation owners that slaves had best be kept busy during idle periods, for who knows what they might plot if left with time on their hands? But it took technological innovations, beginning with the mechanical clock in the 14th century, to really make time seem like a commodity that could be bought, spent, misspent or stolen.

Clocks and buying time

What set us on the road to bosses complaining about ‘their time’ being wasted was similar to what led to the evolution of money. Our ancestors lived in gift-based economies in which favours were freely undertaken with the vague understanding that they would be suitably reciprocated at a later date. But when was a favour suitably reciprocated, or a slight adequately compensated? Such questions led to rules, regulations, laws and contracts that gradually quantified obligations and transformed them into debts and credits that could be precisely calculated.

By the 14th century, clocks had been invented and began to show up in town squares. But where the clock-based concept of time really took off was in the factories of the industrial revolution. The increasing routinisation and micro-tasking of work that typified the production line brought about the quantisation of time into discrete chunks that could be bought, and the need to coordinate logistics led to standardised time (imagine running trains when no two towns agree on when it is 2PM). By dividing time into the now-familiar hours, minutes and seconds, we created a concept of time as a definite quantity that could be purchased, distinct from both the labourer and his produce. It became possible to conceive of buying a portion of his time and owning whatever got produced during that time, while not actually owning the labourer himself. This, of course, is what distinguishes an employee from a slave.

But once we began thinking about time as discrete units that could be bought, that led to a belief that time could be wastefully spent, not just by being literally idle but by spending ‘somebody else’s’ time doing your own thing, like playing a board game or reading a magazine. The attitude I referred to earlier (‘don’t let slaves be idle lest they plot to free themselves’) was carried over to working practices in industrial cities. This, combined with the idea that you could buy somebody’s time and they could then waste ‘your’ time by misspending it, led to the peculiar modern notion of time discipline and its obsession with busyness and make-work. Once you get to the 18th century and onwards, you get the emergence of bosses and upper classes who increasingly viewed the old episodic style of working (occasional bursts of intense energy, followed by relaxation while slowly building up for the next short period of high activity) as problematic rather than sensible. Moralists came to see poverty as being due to bad time-management. If you were poor, it was because your time was being spent recklessly or wastefully. What better remedy than to have your misspent time purchased by somebody who was rich and, therefore, better able to budget time carefully, as one who is frugal would budget and dispose of money?

It was not only the bosses who came to see time as purchasable units that might be misspent. So, too, did employees, especially since the old struggle between the conflicting interests of employer and employee meant the latter also had to adopt the clock-concept of time. If you are an employee, you want an hourly wage for an hour’s work. But if you are the boss, it would be preferable to somehow extract more than an hour’s work for an hour’s pay. Early factories did not allow workers to bring in their own timepieces, which meant employees only had the owner’s clock to go on. Such owners regularly fiddled with the clock so as to appropriate more value from their employees (by getting them to do overtime for free). This led to arguments over hourly rates, free time, fixed-hour contracts and all that. But, as David Graeber pointed out, “the very act of demanding ‘free time’…had the effect of subtly reinforcing this idea that when the worker was ‘on the clock’ his time truly did belong to the person who had bought it”.

So, the belief that any spare time at work should be filled with pointless tasks came about once somebody’s time became conceived of as distinct units that somebody else could buy and, consequently, as something that could be stolen or misspent. This in turn led to a form of moralising that regarded idleness as sin, to be eradicated through the provision of make-work, and to indignation at the sight of employees doing anything other than their jobs, even if that means pretending to carry out tasks when the actual work is done.

It’s not just in stores, offices and factories that this attitude prevails. Where care work is concerned, the service being offered can sometimes consist of being on stand-by just in case the elderly client needs attention. But some elderly clients get so indignant about the carer ‘sitting around wasting my money’ that the carer, too, ends up being asked to pretend to do ‘something useful’, like tidying a home they have already tidied. From the perspective of the stand-by carer, this can make the work intolerably frustrating.

The future of make-work

Make-work also has worrying implications if future technological capabilities turn out to be as potent as futurists like Ray Kurzweil claim. I would argue that each major work revolution has focused on successively less urgent demands. The agricultural revolution was concerned with food production, which is of obvious importance since we cannot live without food, nor do any other work without adequate nutrition. The industrial revolutions (and the socialist movements that accompanied them) led to higher standards of living and increased comfort. While not as essential as food, conveniences like microwaves, carpets and television sets can make life more pleasant, and the products of manufacturing enable us to carry out essential work with more ease.

But what happens when people have enough of what they need to lead healthy, comfortable lives? Their consumption slows, and that’s anathema to a growth-based system like market capitalism. No wonder, then, that from the 50s onwards public relations men like Edward Bernays were working with advertising departments to create fake needs so as to sell bogus cures. No wonder, then, that we went from being utilitarian in our attitude toward products, buying them for practical purposes and make-do-and-mending in order to get the maximum possible use out of our stuff, to adopting a throwaway culture, replacing stuff just because it’s out of fashion, or because it was designed to fail as soon as the manufacturer could get away with and was never built to be easily maintained.

General AI and atomically-precise manufacturing could drastically increase the efficiency with which we manage and carry out the rearrangement of materials, lead to a radical reduction in waste and free up time, as we would have the means to automate most of today’s jobs. Once we have automated jobs in agriculture, manufacturing, services and administration, the sensible thing would be to pursue interests outside of the narrow sphere of wage labour. It would be a good time to rediscover the periodic working practices of our ancestors and the greater commitment to social capital typical of tribal living, only with the added bonus of immense technological capability to keep us safe from hardships that do sometimes affect hunter-gatherer societies.

But is such an outcome likely to happen when it has to evolve within a system based on a throwaway culture, where work is seen as virtuous in and of itself, and where ‘spare time’ is considered something that should be filled with pointless tasks? What I am saying is this: markets have already proven themselves capable of creating scarcity where little real need exists. So it is not too great a leap of imagination to suppose that the moral indignation that stems from the attitudes ‘time is money’ and ‘you are misspending my time’ could work against what should be capitalism’s greatest triumph: to unlock the potential abundance inherent in the Earth’s richness of resources, to elevate us to a position where we can live comfortable lives without the condition that some must adopt extreme frugality, and to free us to become all we can be within a more rounded existence. Instead of that promising outcome, we might well just fill the technological-unemployment gap with make-work and bullshit jobs.

What a waste of time it would be if that were to happen.

REFERENCES

‘Bullshit Jobs: A Theory’ by David Graeber

‘Guns, Germs and Steel’ by Jared Diamond


The Road To Freedom?

In 1944 the Austrian economist Friedrich August von Hayek published ‘The Road To Serfdom’. The book set out to argue that the free market is the only viable way of bringing about freedom and prosperity. Actually, the book does not talk so much about the virtues of free markets as about the downsides of the alternative which, at the time, was central planning. Hayek’s argument was that we can only handle the complexities of reality in a bottom-up fashion, with individuals looking after their own self-interests while guided by pricing signals. This, he reckoned, would result in an efficient allocation of resources arising from what would now be called emergent behaviour.

On the other hand, if we instead relied on a centralised authority to determine resource allocation, such an authority would inevitably find the complexity of modern economies too much to handle. The only way the authority could gain some measure of control would be to exercise more power over the people, restricting their freedom and making them live their lives according to some plan. Thus, a socialist economy would become more authoritarian over time. As the title of the book said, Hayek reckoned socialism to be the road to serfdom.

It’s fair to say that the book remains one of the classic texts of neo-liberalism. Margaret Thatcher described Hayek as one of the great intellects of the 20th century, and he was awarded the Nobel Prize in economics in 1974. Even now, more than seventy years after its publication, it is still regarded as a definitive refutation of leftist politics and proof that only neo-liberalism can deliver prosperity. You could say that Hayek is as important a figure to the free market as Karl Marx is to communism.

But, I wonder, does Hayek’s argument really successfully demolish every alternative to neo-liberalism? Does the selfish pursuit of money and the conversion of everything to a commodity to be bought and sold on the market still stand as the only way we can achieve peace and prosperity? Or are its advocates wrong to say there is no alternative?

I would say there is an alternative. We are no longer restricted to the either-or choice of laissez-faire capitalism or authoritarian central planning. There might just be a third way.

It’s worth bearing in mind the time in which Hayek wrote his book and how things have changed since then. At the heart of his argument is the belief that the world is really, really complex and, because of this, far too much information is generated for a centralised authority to handle without imposing real restrictions on individual liberty. Only market competition guided by pricing signals can manage such complexity. But, remember, he was writing in 1944. Communications back then were a good deal more primitive than is the case today. There was not one satellite in orbit. Now we have many hundreds, if not thousands, constantly monitoring all kinds of things such as weather patterns, urban sprawl, and how crops are faring. This amounts to a network of sensors englobing our planet and allowing for realtime feedback about all kinds of important things. Such a perspective simply didn’t exist when ‘Road’ was published.

The advances we have made in our ability to transmit information are truly remarkable. The numbers are hard to grasp, as they are pretty astronomical, but let’s give ourselves some standard of comparison and see if that helps. The author James Martin proposed the ‘Shakespeare’ as the standard of reference for our ability to transmit information. One Shakespeare is equivalent to 70 million bits, enough to encode everything the Bard wrote in his lifetime.

Using a laser beam, you can transmit 500 Shakespeares per second. Sounds impressive, but in fact fibre-optic technology can do much better. By using a technique called Wavelength Division Multiplexing, the bandwidth of a fibre can be divided into many separate wavelengths. Think of it as encoding information on different colours of the spectrum. Some modern fibres are able to transmit 96 laser beams simultaneously, each beam carrying tens of billions of bits per second, giving a single fibre a capacity of around 13,000 Shakespeares per second.

But we are still not done, because many such fibres can be packed into a single cable. Indeed, some companies make cables with more than 600 strands of optical fibre. That is sufficient to handle 14 million Shakespeares or a thousand trillion bits per second.

Think about that. Every second, we can now transmit data equivalent to 14 million times Shakespeare’s lifetime output from one side of the planet to the other. Of course, this is quantity of information and not necessarily quality (not everything we send over the Internet is of Shakespearean standard!) but the point is that we can now send an awful lot of information around the world, whereas this was not possible in Hayek’s day.
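For anyone who wants to check the arithmetic behind these figures, here is a short back-of-envelope script in Python. The 10 Gbit/s per beam is my own assumption, consistent with the ‘tens of billions of bits per second’ quoted above; everything else uses the numbers in the text.

SHAKESPEARE = 70e6                    # bits: everything the Bard wrote, per James Martin's unit

laser_bps = 500 * SHAKESPEARE         # one laser beam: 500 Shakespeares per second
print(laser_bps / 1e9)                # 35.0 (Gbit/s)

beam_bps = 10e9                       # assumed: 'tens of billions of bits per second' per beam
fibre_bps = 96 * beam_bps             # 96 WDM beams per fibre
print(fibre_bps / SHAKESPEARE)        # ~13,700 Shakespeares per second per fibre

cable_bps = 1e15                      # 'a thousand trillion bits per second' per cable
print(cable_bps / SHAKESPEARE / 1e6)  # ~14.3 million Shakespeares per second

The per-fibre and per-cable totals come out within rounding of the 13,000 and 14 million Shakespeares quoted above.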

It would do little good to transmit petabits of information if we did not also improve our ability to store and crunch that data. In 1944 computers barely existed. What computers did exist came in the form of room-sized electromechanical behemoths that consumed huge amounts of power and were so temperamental only specialised engineers could be trusted to go near them.

Ray Kurzweil once said, “if all the computers in 1960 had stopped functioning, few people would have noticed. A few thousand scientists would have seen a delay in getting printouts from their last submission of data on punch cards. Some business reports would have been held up. Nothing to worry about”. And this was in 1960, over a decade after ‘Road’ was published.

Since then, Moore’s Law (related to the price-performance of computer circuitry) has increased the power of computers by billions of times. It has shrunk hardware from the room-sized calculators of old to swift, multi-tasking supercomputers that can easily slip into your pocket. Price-performance has risen from about 100 calculations per second per thousand dollars in 1960 to well over a billion by 2000. Such an improvement means we can treat computing as essentially free, as proven by the way people are constantly on their web-enabled devices without ever fretting about how much it is costing. Also, computers have become increasingly user-friendly over time, from devices that required considerable technical skill for even simple tasks to modern conveniences like Alexa that can be interacted with through ordinary conversation.
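As a rough sketch of the growth rate those two endpoints imply (using only the figures quoted above, nothing more):

import math

cps_1960, cps_2000 = 100, 1e9    # calculations/sec per $1,000, per the figures above
growth = cps_2000 / cps_1960     # a ten-million-fold improvement over 40 years
doublings = math.log2(growth)    # about 23 doublings
print((2000 - 1960) / doublings) # about 1.7 years per doubling of price-performance

In other words, the quoted figures imply price-performance doubling roughly every couple of years, which is the kind of exponential trajectory Kurzweil builds his projections on.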

The result of all this technological progress is that we are now practically cyborgs from infancy, thanks to the near-constant access to enormously powerful and intuitive computational devices. We live as part of a vast, dense network of bio-digital beings, connected to one another regardless of distance and with ready access to all kinds of information and digital assistance.

What this has to do with Hayek’s argument was expressed in an opinion put forward by David Graeber: “One could easily make a case that the main reason the Soviet economy worked so badly was because they were never able to develop computer technology efficient enough to coordinate such large amounts of data automatically…now it would be easy”.

In part two, we will see how the Internet and other technological advances provide options that were not feasible when ‘Road’ was written.

When Hayek wrote his book there was no Internet. Nobody was a blogger. Not one video had been uploaded. There was not a single Wikipedia entry, not one modded videogame. Linux and bitcoin were not words in anyone’s vocabulary. Now, such things are a ubiquitous part of modern life and most of them are free, part of the collaborative commons. OK, the price of bitcoin went crazily high, but its founder provided the underlying blockchain technology gratis, and made its white paper public knowledge so anyone could improve and expand upon it to create things like a decentralised social media site built on a blockchain.

Indeed, there’s now a great deal we can do on a voluntary basis. Much of the content of the web owes its existence more to passion than to the pursuit of money. Jeremy Rifkin calls this ‘collaboratism’. Collaboratism means engaging in work not because financial pressures or some authority compels you, but because the means of producing and distributing stuff has become cheap enough that anyone with the drive to do something has the means to flex their creative muscles, and to connect with others who have complementary strengths and weaknesses.

This kind of technological progress changes many things. For example, when you have ready access to manufacturing or logistical systems it makes more sense not to have private ownership of stuff (which nearly always entails that stuff sitting in storage not being used for most of its life) but rather using stuff as and when you need it, and then making it available for others to use when you don’t. Think, for example, of driverless cars that could be there when you need transport and make themselves available for others to use if not. If that car was your own private possession, it would probably be parked somewhere not being used by anyone for long stretches. What a waste of resources!

This is the kind of world advocated by the Zeitgeist Movement. Critics of Peter Joseph tend to dismiss him using the same arguments Hayek used in ‘Road’. But this is to fundamentally misunderstand Joseph’s position. He is in no way advocating any centralised control, but rather more efficient decentralised methods than the corrupt monetary systems that are leaking value from today’s markets.

As to why neo-liberals tend to mistake Zeitgeist’s resource-based economy for central planning, maybe it can be traced back to concept drawings by Jacque Fresco? His Venus Project shows plans for cities whose infrastructure is organised into a circle, at the centre of which sits a big computer monitoring the various flows of information a city generates. Such an illustration sure makes it seem like a centralised authority is in charge.

But you have to bear in mind that this city-wide perspective is only one viewpoint. If we could zoom out, we would see that the spokes of this ‘wheel’ radiate out beyond the confines of the city to connect with other cities, such that it becomes a node in a web of interconnected smart cities. Or, you could zoom in to a more personal level, and see that each person is a node in the network thanks to the web-enabled devices they have ready access to. Just shift perspective and what seems like a centralised master computer turns out to be a node in a network.

I would make an analogy with the web of life. Imagine telling somebody that there is a digital programme, encoded in DNA, running evolution. Imagine that person demanding to know where, precisely, the computer running this programme is located, and also telling you evolution can’t possibly work because Hayek proved centralised planning is hopeless. This would be a fundamental misunderstanding, because the code of life is not to be found in any particular location, but rather distributed throughout the world. Nobody is in charge, there is no top-down authority commanding natural selection.

Similarly, when confronted with Zeitgeist’s outline for systems of feedback that would enable us to track the world’s resources and manage them according to the principles of technical efficiency, it’s always denounced by critics as central planning. It’s almost as if such people forget the Internet ever existed.

When Hayek wrote ‘Road’, mass production was the most obvious manifestation of market competition’s drive to produce sellable commodities, and mass production at that time was largely dependent on factories powered by large power stations. Those were hugely expensive means of production that only a minority could afford to own, and which were most efficiently run along fascist lines. You might have been free to quit your job, but once you clocked in you became part of a vertically-integrated management structure and had authorities whose orders had to be obeyed (and who, for the most part, were more interested in lining their own pockets and those of the banking and governmental masters they answered to than in rewarding your efforts).

In marked contrast, the technologies of the 21st century could enable production by the masses, for the simple reason that the means of production and distribution could become ever more accessible in terms of cost and ease of use. Few can own a factory, but if the price-performance of atomically-precise manufacturing goes far enough, what is effectively a factory in a box could sit beside your printer; and if robots follow the same trajectory as computers, they should go from being very limited, expensive and largely inaccessible labour-saving devices to cheap, versatile, user-friendly, ubiquitous helpers. We could all become owners of the means of production. Such a decentralised form of production works best when we act as collaborating individuals united by complementary strengths and weaknesses in laterally-scaled networks, which is quite different from the vertically-integrated management that jobs have traditionally been designed around.

CONCLUSION

When Hayek wrote ‘Road’, the only alternative to free markets he could imagine was central planning. But really, who could blame him? There was no satellite communication, hardly anybody had access to computers and the World Wide Web did not exist. In short there was none of the infrastructure that the digital commons needs to get off the ground, making it perfectly reasonable for Hayek not to consider collaboratism as a viable alternative to the selfish pursuit of money.

Now, the infrastructure is beginning to fall into place. We have a communications web, an information web, and the beginnings of a logistics web and an energy web too. Thanks to advances in artificial intelligence, robotics, nanotechnology and more, we are approaching the point of near-zero marginal cost for the creation and delivery of all kinds of content, not just digital stuff but physical stuff too. We can now work together, forming groups and collaborating on projects out of passion rather than out of some selfish pursuit of monetary gain.

‘The Road To Serfdom’ still stands as an effective argument that market competition is preferable to central planning. But consider how laissez-faire principles brought about the financial crisis of 2008 (Wall Street really did take advantage of Ayn Rand devotee Alan Greenspan’s deregulation, and of the commodifying of political influence that made fraudulent activity legal, to prey on people’s financial gullibility), and consider the impossibility of sustaining free-market principles in anything that resembles the way market competition actually developed (covered in my essay series ‘This Is What You Get’). I suspect that, were he alive today, Hayek would be championing the Zeitgeist movement as the best way of bringing about prosperity. In 1944 there may have been no viable alternative to neo-liberalism, but that’s changing.

REFERENCES

‘The Road To Serfdom’ by Friedrich Hayek

‘The Zeitgeist Movement Defined’

‘The Zero Marginal Cost Society’ by Jeremy Rifkin

‘The Age of Spiritual Machines’ and ‘The Singularity Is Near’ by Ray Kurzweil

‘The Meaning of the 21st Century’ by James Martin

‘Bullshit Jobs: A Theory’ by David Graeber


BULLSHIT JOBS AND THE NEW FEUDALISTS

Have you ever felt like your job was a waste of time? If so, you are not alone. When YouGov asked people ‘does your job make a meaningful contribution to the world?’, 37% replied that it did not and a further 13% were unsure. In other words, half of the people polled either didn’t know whether their job was worthwhile, or were certain that it was not. If you are one of these people, chances are you have a ‘bullshit job’.

What is a ‘bullshit job’?

It might be worth talking a bit about what the term ‘bullshit job’ means. Perhaps the easiest way to grasp it is to consider its opposite. When it comes to employment, we usually assume that some need is first identified, and then some service is created to fill that gap in the market. An obvious way to tell whether a service is necessary to society overall would be to observe the effect when it is removed, say as a consequence of strike action. If society experiences a noticeable negative effect, then it’s almost certain that the job was a valuable one.

On the other hand, if a job could disappear without almost anybody noticing (because its absence has either no effect or is actually beneficial) that would be a bullshit job.

Here’s one example of such a job, taken from David Graeber’s ‘Bullshit Jobs: A Theory’:

“I worked as a museum guard for a major global security company in a museum where one exhibition room was left unused more or less permanently. My job was to guard that empty room, ensuring no museum guests touch the…well, nothing in the room, and ensure nobody set any fires. To keep my mind sharp and attention undivided, I was forbidden any form of mental stimulation, like books, phones etc. Since nobody was ever there, in practice I sat still and twiddled my thumbs for seven and a half hours, waiting for the fire alarm to sound. If it did, I was to calmly stand up and walk out. That was it”.

Now, some points are worth going over at this stage. Firstly, a bullshit job is best thought of as one that makes no positive contribution to society overall (since it would hardly matter if the position did not exist) rather than one that is of no benefit to absolutely anyone. As we shall see, it could suit some people to employ somebody to stand or sit around wearing an impressive-looking uniform. It’s just that whatever function this serves really has little to do with capitalism as most people understand it.

Secondly, one can always invent a meaning for such a job, just as philosophers have made up reasons why Sisyphus could find meaning in his pointless task of rolling that boulder uphill in the sure and certain knowledge that it would roll back down again. But, really, all this does is highlight what bullshit such jobs are. After all, where genuine jobs are concerned one need not rack one’s brains making up justifications, because the need pre-exists the job.

So, with those points out of the way and with a definition of bullshit jobs to work with (‘employment of no positive significance to society overall’) we can return to the question ‘how come such jobs exist?’.

‘This cannot be!’

One reason, strangely enough, is that many people assume such jobs cannot exist, because the very idea of them seems to run contrary to how capitalism is meant to work. If one word could be used to sum up the workings of capitalism in the popular imagination, that word would probably be ‘efficiency’. Capitalism is imagined to be ruthless in its drive to cut costs and reduce waste. That being the case, it surely makes no sense for any business to make up pointless jobs.

At the same time, people have no problem believing stories of how socialist countries like the USSR made up pointless jobs like having several clerks sell a loaf of bread where only one was necessary, due to some top-down command to achieve full employment. After all, governments and bureaucracies are known for wasting public money.

It’s worth thinking about what happened in the Soviet example and what did not. No authority figure ever demanded that pointless jobs be invented. Instead, there was a general push to achieve full employment but not much diligence in ensuring such jobs met actual demands. Those lower down with targets to meet did what was necessary to tick boxes and meet their quotas.

Studies from Harvard Business School, Northwestern University’s Kellogg School of Management, and others have shown that goals people set for themselves with the intention of gaining mastery are usually healthy, but that when goals are imposed by others (sales targets, standardised test scores, quarterly returns) the incentives, though intended to ensure peak performance, often produce the opposite. They can lead to efforts to game the system and look good without producing the underlying results the metric was supposed to be assessing. As Patrick Schiltz, a professor of law, put it:

“Your entire frame of reference will change [and the dozens of quick decisions you will make every day] will reflect a set of values that embodies not what is right or wrong but what is profitable, what you can get away with”.

Practical examples abound. Sears imposed a sales quota on its auto repair staff, who responded by overcharging customers and carrying out repairs that weren’t actually needed. Ford set the goal of producing a car by a particular date, at a certain price and at a certain weight, constraints that led to safety checks being omitted and the dangerous Ford Pinto (a car that tended to explode if involved in a rear-end collision, due to the placement of its fuel tank) being sold to the public.

Perhaps most infamously, the way extrinsic motivation can cause people to focus on the short term while discounting longer-term consequences contributed to the financial crisis of 2008, as buyers bought unaffordable homes, mortgage brokers chased commissions, Wall Street traders wanted new securities to sell, and politicians wanted people to spend, spend, spend because that would keep the economy buoyant, at least while they were in office.

With all that in mind, it’s worth remembering the one thing that unites thinkers on the left and right sides of the political spectrum in Western thinking. Both agree that there should be more jobs. I don’t think I have seen a current-affairs debate where the call for ‘more jobs’ wasn’t made, and made often.

Whether you are a ‘lefty’ or a ‘right-winger’, you probably believe that there should be ‘more jobs’. You just disagree on how to go about creating them. For those on the left, the way to do it would be through strengthening workers’ rights, improving state education and maybe through workfare programs like Roosevelt’s ‘New Deal’. For right-wingers, it’s achieved through deregulation and tax-breaks for business, the idea being that this will free up entrepreneurs and create more jobs.

But, in neither case does anyone insist that whatever jobs are created should be of benefit to society overall. Instead, it’s just assumed that of course they will be. This is roughly comparable to somebody being so convinced that burglary does not happen that they take no precautions to protect themselves against theft. Such complacency just makes them more vulnerable to criminal activity.

If this analogy is to work, it has to be the case that we are wrong to assume modern markets actively work against bullshit jobs; that, actually, there are reasons why pointless jobs are being created. In that case, our assumption that such jobs can’t exist would work against the possibility of acting to prevent their proliferation.

In fact, such reasons do exist, and a major one is something called ‘Managerial Feudalism’. What is that? Well, that’s a topic we will tackle in the next instalment.

REFERENCES

‘Bullshit Jobs: A Theory’ by David Graeber

‘Why We Work’ by Barry Schwartz

BULLSHIT JOBS AND THE NEW FEUDALISTS

Bullshit jobs are proliferating throughout the economy, and the reason why is partly due to something called ‘managerial feudalism’. In order to understand the role this plays in the creation of bullshit jobs, we need to look at the various positions people occupied in feudal societies. If you have ever watched a drama set in such times, you will no doubt have noticed how there is always an elite class of people who employ the services of a great many others. In some cases, their servants perform tasks that would be considered useful in today’s society, attending to such things as gardening, food preparation and household duties. But the nobility also seem to be surrounded by individuals who (despite the importance of their appearance, what with all the flashy uniforms they wear) don’t seem to be doing much of anything.

What are all these people for? Mostly, they are just there to make their superiors look, well, ‘superior’. By being able to walk into a room surrounded by men in smart uniforms, nobles give off an air of gravitas. And the greater your entourage, the more important you must be. At least, that’s the impression you hope to convey when you employ people to stand around making you look impressive.

The desire to place oneself above subordinates and to increase the numbers of those subordinates, thereby gaining a show of prestige, happens whenever society structures itself into a definite hierarchy with a minority that hold a ‘noble’ position within that structure. This is exactly what we find in large businesses, where the executive classes assume the role of the nobility. In order to understand why bullshit jobs exist, we need to look at how the condition of managerial feudalism came about.

Rise of the corporate nobility

Once upon a time, from around the mid-40s to the mid-70s, businesses ran what might be called ‘paternalistic’ models that worked in the interests of all stakeholders. The need to rebuild infrastructure following the war, a desire to provide security to those who had fought in it, the strength of unions, and governments following Keynesian economics, all worked to ensure that increases in productivity would bring about increases to worker compensation.

But during the 80s and onwards, attitudes towards worker collectives and Keynesian economics changed; they came to be seen as stifling entrepreneurs. This gave rise to more lean-and-mean economic practices. What really helped the rise of the lean-and-mean model in the 80s and 90s were certain federal and state regulatory changes, coupled with innovations from Wall Street. The regulatory changes brought about an environment in which corporate mergers and takeovers could flourish.

Meanwhile, Michael Milken, of investment house Drexel Burnham, created high-yield debt instruments known as ‘junk bonds’, which allowed for much riskier and aggressive corporate raids. This triggered an era of hostile takeovers, leveraged buyouts and corporate bustups.

The people who most benefited from all this deregulation and financialisation were those at the executive level. Once upon a time, the CEO of a large corporation would have been the epitome of the cool, rational planner. He or she would have been trained in ‘management science’ and probably worked his or her way up within the ranks of the organisation so that, by the time they reached the top, the CEO had mastered every aspect of the business. Once there at the apex of the corporate pyramid, this highly trained, rational specialist would have embodied the central belief of the college-educated middle class, with its mandate of progress for all and not just the few.

But as the corporate world became more volatile toward the end of the 20th century, questions began to arise over whether such rationality and level-headedness was best for delivering the new goal of short-term boosts to shareholders’ profits. With the business world now seen as so tumultuous and complex as to “defy predictability and even rationality” (as an article in Fast Company put it) a new kind of CEO emerged, one driven more by intuition and gut-feeling. The new CEO was less of a manager with great experience obtained from working his way up the company hierarchy, and more of a flamboyant leader who had achieved celebrity status in the business world, and was hired on the basis of his showmanship, whether his prior role had anything to do with the new position or not. And they certainly prospered in their position, because the focus on improving the bottom line and rewarding celebrity CEOs saw executive pay soar to over three hundred times that of the typical worker.

It’s hard to exaggerate the difference between the old-style corporate boss and the new breed that arose around the late 20th century. As David Graeber pointed out, the old-fashioned leaders of industry identified much more with the workers in their own firms and it was not until the era of mergers, acquisitions and bustups that we get this fusion between the financial sector and the executive classes.

This marked change in attitudes was reflected in comments made by the Business Roundtable in the 1990s. At the start of the decade, the Business Roundtable said of corporate responsibility that corporations “are chartered to serve both their shareholders and society as a whole”. But, seven years later, the message had changed to “the notion that the board must somehow balance the interests of other stakeholders fundamentally misconstrues the role of directors”. In other words, a corporation looks after its shareholders, and the interests of other stakeholders (employees, customers, and society in general) are of far less importance.

Pointless White-Collar Jobs

Now, the term ‘lean and mean’ implies that capitalism had become more, well, ‘capitalist’, taking the axe to any unnecessary expenditure and therefore bringing about more streamlined operations run by more efficient employees. In other words, the exact opposite of conditions favourable to the growth of bullshit jobs. But, actually, the pressure to downsize was directed mostly at those near the bottom doing the blue-collar work of moving, fixing and maintaining things. They were subjected to ‘scientific management’ theories designed to dehumanise work and bring about robotic levels of efficiency, or were replaced by automation or lost their jobs when the firm took advantage of globalisation and moved abroad where more exploitable workers were available. This freed up lots of capital, and it is how that capital was used that is key to understanding how this so-called ‘lean-and-mean’ period brought about bullshit jobs. As Graeber said, “the same period that saw the most ruthless application of speed-ups and downsizing in the blue-collar sector also brought a rapid multiplication of meaningless managerial and administrative posts in almost all large firms. It’s as if businesses were endlessly trimming the fat on the shop floor and using the resulting savings to acquire even more unnecessary workers in the offices upstairs…The end result was that, just as Socialist regimes had created millions of dummy proletarian jobs, capitalist regimes somehow ended up presiding over the creation of millions of dummy white-collar jobs instead”.

REFERENCES

‘White-Collar Sweatshop’ by Jill Andresky Fraser

‘Bullshit Jobs: A Theory’ by David Graeber

‘Smile Or Die’ by Barbara Ehrenreich

BULLSHIT JOBS AND THE NEW FEUDALISTS

The era of mergers and acquisitions, which broke up admittedly bloated old corporations in order to bring about short-term boosts to shareholders, resulted in the creation of a ‘noble class’ of executives, and of subordinates whose only purpose was to add to the prestige of those above them. One such employee was ‘Ophelia’, interviewed in Graeber’s book. “My current job title is Portfolio Coordinator, and everyone always asks what that means, or what it is I actually do? I have no idea. I’m still trying to figure it out….Most of the midlevel managers sit around and stare at a wall, seemingly bored to death and just trying to kill time doing pointless things (like that one guy who rearranges his backpack for a half hour every day). Obviously, there isn’t enough work to keep most of us occupied, but—in a weird logic that probably just makes them all feel more important about their own jobs—we are now recruiting another manager”.

This raises a couple of questions. How come the person ultimately in charge did nothing to prevent this flagrant waste of money? And how did an era of corporate bustups, mergers and acquisitions result in a proliferation of bullshit jobs?

Well, firstly one has to recognise a crucial difference between the corporate raiders and the ‘robber barons’ they styled themselves on: people like Rockefeller and Vanderbilt, whatever you think of their practices, actually built business empires. But corporate raiders like James Goldsmith and Al ‘Chainsaw’ Dunlap didn’t do much building. No, they just took advantage of deregulation and financial innovations like junk bonds to tear apart existing businesses, lay off thousands and gain short-term boosts to their shares. They were vultures. That’s not necessarily derogatory. Vultures play a necessary part in cleaning away carcasses. Arguably, the old corporate structure had become too bloated and inefficient, and the axe really did need to come down on it. What I am suggesting is that, while the raiders were good at profiteering from the death of the old corporate structure, they lacked the ability to prevent the rise of a new one just as liable to create bullshit jobs.

The Influence Of Positive Thought

We can perhaps understand why by combining ‘managerial feudalism’ (with its nobles looking for shows of status and its flunkies providing a visible manifestation of that superiority) with the phenomenon I talked about in the series ‘How Religion Caused The Great Recession’.

In that series, I explained how early settlers of the United States practiced ‘Calvinism’. The Calvinist religion saw much virtue in industrious labour and particularly in constant self-examination for any sinful thought. Such an outlook probably helped settlers survive in what was, after all, the ‘Wild West’.

But as the harsh environments were gradually tamed, the constant self-examination for sinful thought and its eradication through labour came to impose a hefty toll on those who became cut off from industrious work. Faced with people succumbing to the symptoms of neurasthenia, and with the medical establishment seemingly unable to cure such patients, people began to reject their forebears’ punitive religion. In the 1860s, Phineas Parkhurst Quimby met up with one Mary Baker Eddy, and together they launched the cultural phenomenon of positive thinking. Drawing on a variety of sources from transcendentalism to Hinduism, New Thought re-imagined God from the hostile deity of Calvinism to a positive and all-powerful spirit. And humanity was brought closer to God, too, thanks to a concept of Man as part of one universal, benevolent spirit. And if reality consisted of nothing but the perfect and positive spirit of God, how could there be such things as sin, disease, and other negative things? New Thought saw these as mere errors that humans could eradicate through “the boundless power of spirit”.

But although intended as an alternative to Calvinism, New Thought did not succeed in eradicating all the harmful aspects of that religion. As Barbara Ehrenreich explained in ‘Smile Or Die’, “it ended up preserving some of Calvinism’s more toxic features - a harsh judgmentalism, echoing the old religion’s condemnation of sin, and the insistence on the constant interior labour of self-examination”. The only difference was that while the Calvinist’s introspection was intended to eradicate sin, the practitioner of New Thought and its later incarnations of positive thinking was constantly monitoring the self for negativity. Anything other than positive thought was an error that had to be driven out of the mind.

So, from the 19th century onwards, a belief that the universe is fundamentally benevolent and that the power of positive thought could make wishes come true and prevent all negative things from happening was simmering away in the American subconscious. When consumerism took hold in the 20th century, positive thinking became increasingly imposed on anyone looking to get ahead in an increasingly materialistic world.

What all this has to do with the current topic is that the cult of positive thinking that began with New Thought, and was amplified by 20th-century consumer culture, ended up having an effect on how businesses were run. Before the Great Depression, there had been campaigners speaking out against the excesses of the wealthy and the oppression imposed on the poor. But the prosperity gospel that had begun in the 19th century, amplified by megachurches and TV evangelists responding to market signals from 20th-century consumption culture, had a markedly different message: there was nothing amiss with a deeply unequal society. Anyone at all stood to become as wealthy as the top 1 percent. Just remain resolutely optimistic and all will be well.

But, unlike the megachurches (which one could leave at any time) or the television evangelists (whom one could always just turn off), the books and seminars to be consumed at corporate events were often mandatory for any employee who wanted to keep his or her job. Workers were required to read books like Mike Hernacki’s ‘The Ultimate Secret to Getting Everything You Want’ or ‘Secrets Of The Millionaire Mind’ by T. Harv Eker, which encouraged practitioners of positive thinking to place their hands on their hearts and say out loud, “I love rich people! And I’m going to be one of those rich people too!”.

Remember that positive-thinking ideology considers any negativity to be a sin, and some of its gurus recommended removing negative people from one’s life. And in the world of corporate America (where, other than in clear-cut cases of racial, gender, or age-related discrimination, anyone can be fired for any reason or no reason at all) that was easy to do: terminate the negative person’s employment. Joel Osteen of Houston’s Lakewood Church (described as “America’s most influential Christian” by Church Report magazine) told his followers, “employers prefer employees who are excited about working at their companies…God wants you to give it everything you’ve got. Be enthusiastic. Set an example”. And if you didn’t set an example and radiate unbridled optimism every second of the working day, you were made an example of. As banking expert Steve Eisman explained, “anybody who voiced negativity was thrown out”.

Such was the fate of Mike Gelband, who was in charge of Lehman Brothers’ real estate division. At the end of 2006 he grew increasingly anxious over the growing subprime mortgage bubble and advised “we have to rethink our business model”. For this unforgivable lapse into negativity, Lehman CEO Richard Fuld fired the miscreant.

A Bullshit Corporate Culture

So, the corporate culture had become one that was decidedly hostile to any bad news, such that even those in positions of high authority got the sack if they voiced any negativity. As for the lower ranks, whatever misgivings they had concerning the way things were had to be filtered through layer upon layer of management. If there’s already a culture of hiding negative reports on how business practices are shaping up, of putting a positive spin on everything, it’s not much of a step from there to not being entirely truthful about the usefulness of the people being hired. This is even more likely to happen if A) your status is defined by how many subordinates you have (and, therefore, to lose subordinates is to suffer diminished status) and B) if employees come to depend on the pretty generous salaries that often come with bullshit white-collar work, for example because their consumerist lifestyle has left them with substantial mortgages and credit card bills. If that’s the case, then it’s probably not a good idea to broadcast how unnecessary some jobs are.

The idea that those in ultimate authority might be prevented from knowing everything that’s going on in their business was encapsulated by a comment that one billionaire made to crisis manager Eric Dezenhall: “I’m the most lied to man in the world”.

It’s important to point out that the role of CEO is not itself bullshit. What is being argued is that some CEOs are effectively blind to the bullshit happening in their firms. Why wouldn’t they be, when anyone bringing them bad news is liable to be sacked, when executives and middle managers surround themselves with yes-men and flunkies, and when an obsession with increasing shareholder value gives rise to decidedly dodgy business practices disguised behind impenetrable economic jargon and management-speak? Such practices are well suited to redirecting resources so as to create an elite minority wealthy and powerful enough to deserve the ‘nobility’ label, to building elaborate hierarchies of flunkies who exist to provide visible displays of the magnificence of their ‘superiors’, and to employing spin doctors who pull the wool over people’s eyes and prevent the truth from being revealed. Medieval feudalism had its priestly caste, with religious texts written in an obscure tongue with which to justify the divine right of kings. Managerial Feudalism has the financial and banking sector and all the obscure language that comes with it, ceaselessly denouncing the working classes whenever they demand living wages and justifying every money grab and show of status by the executive and managerial classes, no matter how greedy and socially unjust.

It’s when we examine financialisation that we really understand how it can be that BS jobs exist. That’s a topic for next time.

REFERENCES

“White-Collar Sweatshop” by Jill Andresky Fraser

“Bullshit Jobs: A Theory” by David Graeber

“Smile Or Die” by Barbara Ehrenreich

BULLSHIT JOBS AND THE NEW FEUDALISTS

In what way does the world of finance help bring about bullshit jobs? Well, it partly has to do with the way jobs are categorised in the popular imagination. When we talk about major revolutions in working practice, we speak of transitions from hunter-gathering to farming, to manufacturing, to services. Such terms imply that at every stage people transition to work of obvious benefit to society, either by creating products that improve quality of life or by offering services that meet some pressing need or simply make life more pleasant.

What’s wrong with this belief is that it paints the wrong picture of what everyone in ‘services’ does. Contrary to what the term implies, not everyone in ‘services’ is helping their fellow human beings by clipping hedges, serving ice cream and so on. No, there’s a fourth sector involved in work of a different kind, one economists call FIRE, after Finance, Insurance and Real Estate.

The kind of thing this sector is involved in is well illustrated by the goings-on that led up to the 2008 crash. Banks’ profits once relied on the quality of the loans they extended. More recently, however, we have seen a switch toward ‘securitisation’, which in practice involves bundling multiple loans together and selling portions of those bundles to investors as Collateralized Debt Obligations, or CDOs. Rather than earning interest as loans are repaid over time, under securitisation the bank’s profit is derived from fees for arranging the loans. As for the risk inherent in lending money, it’s the buyer of the CDO who takes it on, meaning that, as far as the bank is concerned, defaults are somebody else’s problem.

This caused a shift from quality-driven lending toward quantity-driven lending. Thanks to securitisation, banks could extend loans knowing they could be sold off to someone else, making the associated risk the buyer’s problem. Banks were thereby freed from the downside of defaults. And when conditions are in place to reward wild exuberance, borrowing is bound to spiral out of control.
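
To make that incentive shift concrete, here is a minimal sketch in Python contrasting the two profit models: holding loans and earning interest versus originating loans and selling them on for a fee. All figures are invented for illustration; real securitisation involves tranching, credit ratings and far more complex cash flows.

```python
# Toy comparison of the two banking profit models described above.
# All figures are invented; this is an illustration, not a model of
# any real CDO structure.

def hold_to_maturity_profit(principal, annual_rate, years, default_rate):
    """Bank keeps the loans: earns interest but absorbs the defaults."""
    interest = principal * annual_rate * years
    losses = principal * default_rate
    return interest - losses

def originate_and_sell_profit(principal, fee_rate):
    """Bank securitises the loans: collects an arrangement fee, and
    any defaults are the CDO buyer's problem."""
    return principal * fee_rate

book = 1_000 * 200_000  # a thousand hypothetical £200k mortgages

# Holding the loans: profit depends on borrowers actually repaying...
print(hold_to_maturity_profit(book, 0.05, 25, 0.10))

# ...whereas selling them on rewards sheer volume of origination,
# which is precisely the quality-to-quantity shift described above.
print(originate_and_sell_profit(book, 0.02))
```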

Of course, that’s precisely what happened in the runup to the 2008 subprime mortgage crisis. In the words of Bernard Lietaer and Jacqui Dunne, “math ‘quants’ took the giant pools of home loans now sitting on their employers’ balance sheets and repackaged them into highly complex, opaque, and difficult-to-value securities that were sold as safe bets. As more and more of these risky securities were purchased by pension funds, insurance firms, and other stewards of the global public’s savings, the quants’ securitisation machine demanded more loans, which in turn led to a massive expansion of dubious lending to low-income American households”.

Advertisements for banks really push the message that they are but humble servants helping customers protect and manage their money. And with talk of ‘markets’ and ‘products’, the financial ‘industry’ likewise presents itself as doing the traditional work of making useful stuff and providing much-needed services. If you believe the propaganda, the primary purpose of this sector is to help direct investments to those parts of commerce and industry that will raise prosperity, while earning an honest profit in the process.

But while this kind of thing does happen, it’s very misleading to portray the financial sector as being mostly concerned with such services. We can see this by looking at where the money goes. A piffling 0.8 percent of the £435 billion created by the UK government through quantitative easing (i.e. money printing), roughly £3.5 billion, went to the real, productive economy. The rest went to the financial sector.

As David Graeber explained, what this sector actually does is this: “the overwhelming bulk of its profits comes from colluding with government to create, and then trade and manipulate, various forms of debt”. In other words, what the FIRE sector mostly does is create money from ‘nothing’. But there is actually no such thing as money from nothing. If somebody is making money out of thin air, somebody somewhere else is being lumbered with the cost. So, really, financialisation is the subordination of value-adding activity to the servicing of debt.

It is under such conditions, in which work morphs into a political process of appropriating wealth and repackaging and redistributing debt, that the nature of BS jobs (which seems so bizarre from the traditional capitalist point of view) actually makes sense. From the perspective of the FIRE sector, the more inefficient and unnecessary chains of command there are, the more adept such organisations become at the art of rent-extraction, of soaking up resources before they reach claimants.

An example of such practices was provided by ‘Elliot’:

“I did a job for a little while working for one of the ‘big four’ accountancy firms. They had been contracted by a bank to provide compensation to customers that had been involved in the PPI scandal. The accountancy firm was paid by the case, and we were paid by the hour. As a result, they purposefully mis-trained and disorganised the staff so that jobs were repeatedly and consistently done wrong. The systems and practices were changed and modified all the time, to ensure no one could get used to the new practice and actually do the work correctly. This meant that cases had to be redone and contracts extended. The senior management had to be aware of this, but it was never explicitly stated. In looser moments, some of the management said things like ‘we make money from dealing with a leaky pipe. Do you fix the pipe, or do you let the pipe keep leaking?’”.

In order for such organisations to continue doing what they are doing, there have to be employees who work to prevent such dubious practices from becoming widely known. Faithful allies must be rewarded, whistleblowers punished. Those on the rise must show visible signs of success, surrounded by important-looking men who make their ‘superiors’ look special in office environments where status is determined by how many underlings one commands. Meanwhile, those flunky roles are themselves a handy means of distributing political favours, and since those in the lower ranks had best be distracted from the dodgy goings-on, all this incentivises the creation of an elaborate hierarchy of job positions, titles and honours. Let them occupy themselves squabbling over that.

So, ‘Managerial Feudalism’ is so called because the FIRE sector (which in practice is spreading, which is why car commercials no longer tell you what it costs to buy the vehicle, only what representative APR you can expect if you take out a loan) has brought about conditions that resemble classic medieval feudalism, which was likewise primed to create hierarchies of nobles, flunkies, mystic castes quoting obscure texts, and downtrodden masses.

This is not without consequence. In the early 20th century, economists like Keynes tracked progress in science, technology and management and predicted that, by the 21st century, our industries would be so productive we could drastically reduce the amount of time devoted to paid employment, investing the time gained in the pursuit of a more well-rounded existence. When you consider that 50 percent of jobs are either definitely bullshit or decidedly vague regarding their value to society, you can see how people like Keynes were partly correct. Had we continued to focus on technical efficiency and productive capability, we would doubtless have access to much more leisure and prosperity. Instead, business, economics and politics combined in such a way as to create a new kind of feudalism that has imposed itself on top of capitalism.

Recapping what we have learned over this series: the old paternalistic corporate model came under attack during an era of bust-ups, mergers and acquisitions. The corporate raiders who led this attack differed from their predecessors in that they identified far more with finance than with the workers under their management. This, coupled with a cult of materialist positive thinking, gave rise to an executive class whose salary and bonus structure put them in a ‘noble’ position. It also gave rise to a corporate culture hostile to any bad news. So when the savings made by bringing the axe down on those at the lower end of the corporate hierarchy were wasted on hiring yet more levels of management, few people dared speak out against the practice. Moreover, keeping one’s mouth shut and hoping you, too, might be in line for a pointless but well-paid white-collar job had become the sensible choice for those burdened with the high costs of an over-consumptive lifestyle. And that part of the ‘service’ sector which has little to do with providing services, being more concerned with colluding with government to repackage and sell ever-more complex forms of debt, had every incentive to run things as inefficiently as possible, since those are the conditions in which rent-extraction can cream off more of other people’s money.

Such conditions encourage the existence of jobs that have more to do with appropriating wealth than creating it, and with disguising the fact that this is happening. When your status is defined by how many underlings you have, there is every incentive to multiply the levels of management. If other big businesses employ somebody to sit at a desk, your company must do likewise; not necessarily because the person has anything useful to do, but simply because it’s ‘what is done’. When you make your money from a ‘leaky pipe’ (i.e. some deficiency in the system), this encourages ‘duct-taping’ jobs that merely manage the problem rather than deal with it. This is like employing somebody to replace the bucket rather than fix the leaking roof. Of course, in that overly simplistic example the ruse would be easily spotted. But in the deliberately complex world of the FIRE sector there is more chance of doing things incompetently and getting away with it, because few can penetrate the jargon and management-speak and see the bullshit hiding behind it.

What this all means is that the ‘technological unemployment’ gap that Keynes predicted has been filled with jobs that, quite frankly, don’t need to exist. If you can’t imagine how that could happen under capitalism, well, your mistake is in assuming our current system is something that people like Adam Smith or Milton Friedman would recognise as ‘capitalist’. Bullshit jobs really shouldn’t exist in the kind of free market that people like Stefan Molyneux promote, but they can and do exist in whatever market system dominates today.

REFERENCES

“White-Collar Sweatshop” by Jill Andresky Fraser

“Bullshit Jobs: A Theory” by David Graeber

“Smile Or Die” by Barbara Ehrenreich


WHY EXECUTIVES DON’T STRIKE

Strikes. They’re a nuisance, aren’t they? Bringing disruption to our lives by denying us the services we rely on. But have you ever noticed how the workers who organise strikes always seem to be employees at the lower end of the corporate hierarchy? It’s always blue-collar workers, junior doctors and other lowly types that are threatening such action. Executives, for some reason, never stage a walkout.

I wonder why that is?

Now, some might think the reason is obvious: Strikes are undertaken in order to get more pay, and so executives have no need for such action as they are already very handsomely compensated. For example, if you are an advertising executive your yearly salary is around half a million pounds. Not too bad!

But, actually, ‘more money’ is not the only reason why workers feel the need to strike. Sometimes, strike action is undertaken in order to bring to the world’s attention unfair working practices. If being treated unfairly justifies a walkout, then maybe executives would have a reason to strike?

Think about how such people are portrayed in movies. In nearly all cases, executives in films are portrayed as corrupt. You have Gordon Gekko in ‘Wall Street’, breaking laws and destroying small businesses in his thirst for more dirty money. You have the executive classes in ‘Elysium’, living in luxury aboard their space station while down on Earth their overworked, underpaid blue-collar employees are callously discarded when they fall foul of atrocious working conditions the higher-ups are too uncaring to fix. You have the CEO of OCP looking on in concern as RoboCop 2 lays waste to the city (not concern for the people it’s killing, mind you, but at what it could mean for his company’s shares: “this could look bad for OCP, Johnson! Scramble our best spin team!”).

Those are just a few examples of films that make businessmen out to be bad guys. Now try to think of movies where executives are portrayed not as villains but as heroes. I can only think of two. Batman’s Bruce Wayne has a strong moral code. But that’s not a particularly good example, because he is only being altruistic when he is the Caped Crusader. His ‘Bruce Wayne’ persona is a billionaire playboy who is a bit of a prick. And in the Christopher Nolan films, the board of directors that runs Wayne Enterprises is your usual bunch of villains in suits. The other example I can think of is Ayn Rand’s ‘Atlas Shrugged’, and do you know what that book and its movie adaptation are about? Successful businessmen becoming so disgruntled with being portrayed as villains by society that they go on strike.

So, given how often successful businessmen are portrayed as bad guys, why don’t they ever stage a walkout and remind us all of how much we rely on the work they do, just as their fictional counterparts in Rand’s opus did?

I think the reason is this: it just wouldn’t work out the way it did in ‘Atlas Shrugged’. In that story, society soon started falling apart. When workers low down in the corporate hierarchy stage a walkout, the effects are indeed most often immediate and near-catastrophic. Everything grinds to a halt, everyday life is hopelessly disrupted, and we are reminded that such people provide vital services we can scarcely do without. I would suggest that if the executive classes were to stage a walkout, life would not grind to a halt, at least not for quite some time. On the contrary, most people would not even notice anything amiss.

Now, you might counter that this is mere speculation with nothing to back it up. However, I believe there are a couple of examples that indicate that what I say is true.

The first example involves something that happened in Ireland during the decade from 1966 to 1976. In that time, Ireland experienced three bank strikes that caused the banks to shut down for a total of twelve months. While they were closed, no checks could be cashed, no banking transactions could be carried out, and the Irish lost access to well over 80 percent of the money supply.

You would have thought this would have spelled utter disaster for Ireland. After all, banking executives are among the top earners (paid around £5 million a year, as well as being awarded endless bonuses) and we’re always being told of the utterly vital function the banking and financial sectors play in the economy. Surely, then, Ireland was brought to her knees very soon after the banks closed their doors and removed their services?

Actually, no. Instead, the Irish just carried on doing business without the banks. They understood that, since the banks were closed, there was nothing to stop people writing a check and using it like cash. Once official checks were used up, people used stationery from shops as checks, written in denominations of fives, tens, and twenties. And it was not just individuals who operated this mutual credit system; businesses also got in on the act. Large employers like Guinness issued paychecks not in the usual full-salary amount but rather in various smaller denominations, precisely so they could be used as a medium of exchange as though they were cash.

All this was possible because, at the time, Ireland had a small population of three million inhabitants. In most communities, people had a high degree of personal contact with other individuals, and where knowledge of somebody was lacking, local shops and pubs had owners who knew their clientele very well and could vouch for a person’s creditworthiness.

According to economics professor Antoin E. Murphy, author of ‘Money in an Economy without Banks’, “The Irish created an unregulated, totally anarchistic community currency matrix…there was nobody in charge and people took the checks they liked and didn’t take the checks they didn’t like….And, it worked! As soon as the banks opened again, you’re back to fear and deprivation and scarcity. But until that point it had been a wonderful time”.
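
The mechanics of that anarchic currency matrix can be captured in a toy model: a check drawn on a closed bank circulates from hand to hand like cash, backed only by personal trust, and nothing clears until the banks reopen. Here is a minimal sketch in Python (the names and amounts are invented purely for illustration):

```python
# Toy model of the circulating-check economy described above. A check
# drawn on a closed bank passes from hand to hand like cash; nothing
# clears until the banks reopen. Names and amounts are invented.

class CirculatingCheck:
    def __init__(self, drawer, amount):
        self.drawer = drawer       # who originally wrote the check
        self.amount = amount
        self.holders = [drawer]    # chain of endorsements

    def endorse_to(self, new_holder):
        # Acceptance rests on trust: a shopkeeper or publican who knows
        # the drawer effectively vouches for the check's worth.
        self.holders.append(new_holder)

check = CirculatingCheck("Brigid", 20)
check.endorse_to("shopkeeper")   # Brigid pays for her groceries
check.endorse_to("supplier")     # the shopkeeper pays a supplier

# Only when the banks reopen does the final holder present the check:
print(f"{check.drawer} owes £{check.amount} to {check.holders[-1]}")
```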

A few years before the Irish incident, New York’s refuse collectors went on strike and just ten days afterwards the city was brought to her knees. I don’t think anyone would have described that situation as ‘a wonderful time’. Unlike the millions paid to city bankers, refuse workers get around £12,000 a year.

Another example suggesting that executives wouldn’t be missed for quite some time were they to disappear is the company Uber, which saw not only the resignation of its founder, Travis Kalanick, but also a whole bunch of other top executives, so that, according to a 2017 article in ‘MarketWatch’, it “is currently operating without a CEO, chief operating officer, chief financial officer, or chief marketing officer”. Did the company collapse without the aid of these essential people? No, it carried on just fine without them.

Now this is intriguing. Why is it that, when low-paid staff nearer the bottom of the corporate hierarchy go on strike, we feel the pain almost immediately, but on the rare occasions when highly-rewarded executives don’t show up for work, nobody cares because nothing much changes?

I think it all hinges on what these people actually do. What do they actually do? It’s hard to say, because any role you can think of that might be of use to a company turns out to be a job description for somebody lower down the hierarchy. Do they make anything, these executives? No, the workers down in manufacturing do that. Do they manage anything? No, managers do that. Are they responsible for sales? No, that’s what salespeople are for. And so on and so on. Now, I’m not suggesting the CEO does literally nothing, but it stands to reason that when you have delegated responsibility for just about everything to your subordinates, it’s going to harm the company much more if the subordinates don’t show up than if you were to disappear.

And that’s just counting the official jobs subordinates have. But what about unofficial ones? Take personal assistants. If you have ever watched ‘The Apprentice’, you know the sort of employee I am talking about: the woman or man at the desk who answers the phone and says ‘Lord Sugar/Mr Trump will see you now’. According to David Graeber, secretarial work like answering the phone, doing the filing and taking dictation is not all PAs do. “In fact, they often ended up doing 80 percent to 90 percent of their bosses’ jobs, and sometimes, 100 percent…It would be fascinating—though probably impossible—to write a history of books, designs, plans, and documents attributed to famous men that were actually written by their secretaries”.

So businesses seem not to be negatively affected when executives don’t show up for work. But when they are present, is their work of value to society? Not according to studies into negative externalities (in other words, the social costs of doing business). Let’s take the advertising executive mentioned earlier. As you may recall, advertising executives bring home a yearly salary of around £500,000. But the studies reckon that around £11.50 of social value is destroyed per £1 they are paid. Contrast this with a recycling worker, who brings home a yearly income of around £12,500 and creates £12 in social value for every £1 they are paid.
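
Putting those figures together, here is the back-of-the-envelope arithmetic (using only the salaries and per-pound ratios quoted above):

```python
# Net social value per year, using the figures quoted above.
# A negative multiplier means social value destroyed per pound of pay.

ad_exec_salary = 500_000        # pounds per year
ad_exec_multiplier = -11.50     # social value per pound paid

recycler_salary = 12_500
recycler_multiplier = 12.0

print(ad_exec_salary * ad_exec_multiplier)    # -5,750,000: value destroyed
print(recycler_salary * recycler_multiplier)  # +150,000: value created
```

On those numbers, the advertising executive destroys roughly £5.75 million of social value a year, while the recycling worker creates around £150,000.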

This, then, is why executives don’t strike. Far from reminding us what a valuable service they provide, a strike would instead shine a light on how businesses could function perfectly well without them, at least for much longer periods than they could function if their much lower-paid subordinates staged a walkout. For people who are a credit to society, creating more social value for every pound they are paid, strike action can be an effective way of emphasising the value their work generates. But that can hardly be the case when your work causes negative externalities that cost society more than it benefits from your existence. In that case, a strike can only shine a light on the fact that you are not all that necessary.

REFERENCES

‘Bullshit Jobs: A Theory’ by David Graeber

‘Rethinking Money’ by Bernard Lietaer and Jacqui Dunne

‘Money in an Economy Without Banks’ by Antoin E. Murphy

‘MarketWatch’


What Videogames Teach Us About Work

Videogames have been featuring in the news recently. BBC Radio 4 is running a half-hour programme about Fortnite, and in an article written for the i newspaper by Will Tanner, it was reported that a Universal Basic Income experiment was ended because “ministers refused to extend its funding amidst concern that young teenagers would stay at home and play computer games instead of looking for work”.

That argument had a tone that is sadly familiar, depicting videogaming as an addictive evil that distracts its victims from what they ought to be doing. But I think it would be more accurate to say that gamers have already found meaningful work and are reluctant to forsake it and submit to less rewarding labour instead.

This way of looking at it goes largely unrecognised because we are not taught to equate videogaming with work. Instead, you ‘play’ a videogame and we are raised to believe that play is childish, a distraction, mere fun. Play, we are encouraged to believe, is the opposite of work.

But it really isn’t. One only has to look at the play other animals engage in to see there is a serious side to it. It’s a way of honing skills that will become essential in later life.

Similarly, in videogaming we find many activities that hone skills important in the digital age we live in. Authors Byron Reeves and J. Leighton Read list over a hundred such activities, including:

“Getting information: Observing, receiving and otherwise obtaining information from all relevant sources.

Identifying information by categorising, estimating, recognising differences or similarities and detecting changes in circumstances and events.

Estimating sizes, distances and quantities or determining time, cost, resources, or materials needed to perform a work activity.

Thinking creatively: developing, designing or creating new applications, ideas, relationships, systems or products, including artistic contributions”.

Also, in an article written for ‘Wired’ (“You Play World of Warcraft? You’re Hired!”), John Seely Brown and Douglas Thomas explain how “the process of becoming an effective guild master amounts to a total-immersion course in leadership…to run a large one, a guild master must be adept at many skills: attracting, evaluating and recruiting new members; creating apprenticeship programs; executing group strategy…these conditions provide real-world training a manager can apply directly in the workplace”.

Far from being a distraction from work, videogames are, along with jobs, one of modern life’s two main work providers. Instead of lending support to the idea that people don’t want to work, videogames demonstrate how eager we are to engage in productive activity, to reach for goals, to solve problems and to take part in collaborative projects.

It does, however, raise a question: how come one work provider is able to draw upon willing and eager volunteers, while the other (jobs) mostly creates a feeling that work is a necessary evil you wouldn’t do if you had a choice? And, yes, that is how a great many people feel, as revealed by polls suggesting that as many as ninety percent of people hate their jobs.

Fundamentally, I think it all has to do with the direction in which money flows, and how that affects the design of work in videogames and jobs.

What do I mean by the direction in which money flows? Quite simply, I mean that if you have a job, then, assuming you are not an unpaid intern, a company will be paying you to work. This means that you are both an investment and a cost. On the other hand, when it comes to videogames, you pay a company to work, since you have to first purchase the game (and even if it is free-to-play like Fortnite, the company will have some means of extracting money from you). This means that you represent almost all profit, and only negligible cost.

Because videogame publishers want as many people as possible to spend money on their games, it obviously makes sense for work in a gaming context to be as enjoyable and rewarding as it can be. When it comes to making work engaging, productive activity should provide the opportunity to pursue mastery; it should offer autonomy, flexibility, judgement and creativity that are firmly in the hands of the individual doing the actual work.

The best videogames are great at providing all these conditions. Autonomy and flexibility are found in games where you don’t have to tackle challenges in a strictly linear fashion but can forge your own path instead. For example, in ‘Batman: Arkham Knight’ you, as the Caped Crusader, are free to roam Gotham City, swooping down to fight crime as and when you find it. If you hear an alarm ringing, you can locate its source and do a sub-mission involving a bank robbery. If you see smoke you can attempt to arrest Firefly. Exactly how you get to the game’s finale is entirely up to you.

Many games offer creativity, providing opportunities to customise the look of your character or the items you have acquired. Some games come with comprehensive editing tools that offer even more scope for creative expression, such as ‘LittleBigPlanet’, which goes as far as enabling players to create whole new games. And since their very inception, videogames have given us the chance to exercise our judgement and gain mastery, as we make the snap decisions required to advance up the high-score charts, helped by well-crafted feedback systems that inform us when we are doing well and when we should try alternative strategies.

Now, it’s true that jobs may also provide the things that make work worthwhile. But the crucial difference is that, where videogames are concerned, there is never a good reason to reduce or eliminate such qualities. Doing so would only make for a bad game that nobody would choose to play. There is, however, a reason why employers might want to reduce such qualities in a job. What unites these qualities is that they all enhance our individuality, and that’s not something employers necessarily desire. The more creativity, judgement and autonomy can be reduced at the individual level, the easier it becomes to train new recruits. Indeed, in many ways it’s preferable if your employees are less like unique individuals and more like interchangeable units that can be replaced at the shortest possible notice. That’s advantageous because it reduces the bargaining power of the workforce: you are less likely to complain about pay and working conditions if you know it won’t be too difficult for the boss to fire and replace you.

The result? A cheaper workforce, more value extracted from the commodity of labour-power, and more profit for those the labourers work for. You have to bear in mind that employees are quite low down in the pecking order for rewards from the labour process. Governments want their cut, banks and financial services want their cut, the company executives want their cut, and they take priority over the working classes, rather like how the more powerful predators and scavengers get the juicy meat and leave only scraps for the rest to fight over. When it comes to the pursuit of more profit, it pays to make work as unrewarding (in a monetary sense) as you can get away with, which often results in work being designed to be as unrewarding (in the sense of not being engaging) as possible.

“But why would people choose to do work designed to lack the very qualities that make it engaging?”, you might ask. The answer can be found in ‘negative motivation’. Being without a job can have serious consequences. Cut off from an income, bills cannot be paid and the threat of rough sleeping looms ever closer. On top of that there is cultural pressure to ‘get a job’, so much so that we don’t care if the job is useless or even harmful to society (‘at least s/he has a job’). This all amounts to enormous pressure to submit to employment, not so much because of the gains people expect if they have a job, but because of the punishment they dread if they don’t.

Videogame companies, on the other hand, cannot rely on negative motivation, for the simple fact that hardly anyone can be forced to play games (I say hardly anyone because there are sweatshops in which people grind through MMORPGs to level up characters that can be sold on to richer customers). This further emphasises the point that videogames never have an incentive to make work less rewarding, whereas such incentives do exist in the world of jobs.

CONCLUSION

Videogames, far from demonstrating our distaste for work, in fact show how willing and eager to work we are. So willing, in fact, that our desire to work supports one of the most successful industries of the modern age. Every day, millions of us spend billions all so we can engage in the work videogaming requires. If we really hated work, the first person to put a quarter into the first arcade game would have walked away in disgust at having to pay to stand there and perform repetitive manual labour. What, are you crazy?

What videogaming shows instead is that if you can take that simple mechanical operation and craft around it creativity, flexibility, autonomy, judgement and mastery, the result is work that people want to do so much they will gladly pay for it. But if, in the interest of extracting more value for money out of your workforce, you reduce or eliminate such qualities, people will hate such work and will only submit to it if circumstances force them.

That’s what jobs teach us.

REFERENCES

‘Wired’

‘Total Engagement’ by Byron Reeves and J. Leighton Read.

‘Why We Work’ by Barry Schwartz.



LET ‘EM IN: THE IMMIGRATION CONTROVERSY
ONE: THAT ‘RIVERS OF BLOOD’ SPEECH
During the EU Referendum, some controversial issues formed part of the debate over whether the UK should vote Leave. One such issue was immigration. The Leave campaign’s slogan, promising that the UK would ‘take back control’, was understood to refer at least in part to an inability to control borders and decide, as an autonomous country, who to let in. The campaign poster ‘Breaking Point’, which depicted large crowds supposedly flooding into the UK, summed up Leave’s position and spoke to those who felt that change had come too fast and was leaving them disempowered.
Opposing this view was the belief that the free movement of people and goods had been beneficial overall. Somehow, though, sensible debates over the ability, and the desirability, of controlling immigration in a global age invariably seem to turn into arguments over extreme positions tinged with xenophobia. Control over borders and limits on migration are criticised as though they were promoting a fortress mentality in which the drawbridge is raised never to be lowered again, and the UK becomes ‘little Britain’, isolated from the world and viewing all foreigners with suspicion and intolerance.
In order to understand why debates over immigration get pushed to extremes, we need to go back in history. Now, migration has been happening for hundreds of thousands of years, ever since humanity left its place of origin (Africa) in search of new lands to settle. I don’t intend to give a complete history of this phenomenon, but instead want to focus on a period in postwar Britain that led to an infamous speech, one that would become an accusation levelled at anyone raising the issue of immigration.
IMMIGRATION AFTER WORLD WAR 2
At the end of World War 2, Britain was in need of extra manpower in order to help rebuild the country. So, the 1948 British Nationality Act came into being. This act declared that all the King’s subjects had British citizenship, which meant that around 800 million people had the right to enter the UK. This act, by the way, was never given any mandate by the People; it was, instead, a political decision. But it was not particularly controversial. For one thing, transportation was much more costly back then, so not many of the 800 million actually moved. Also, the fact that the country needed rebuilding, coupled with the fact that it was growing economically, meant that the half million who did arrive were easily absorbed.
In 1962, however, the Commonwealth Immigrants Act came into being: a quota system designed to place restrictions on immigration. Just prior to the introduction of this act, there had been a large influx of Pakistanis and Indians from the Muslim province around Kashmir. Like the Caribbean immigrants who had arrived following the British Nationality Act, these were hard-working men who brought much-needed labour to textile mills in Bradford and surrounding towns, and to manufacturing towns like Leicester. But there were also some notable differences. The Pakistani and Indian immigrants were far more likely to send for their families, and they were much less interested in integrating with the communities around them. As Andrew Marr explained, this group was:
“more religiously divided from the whites around them and cut off from the main form of male white working-class entertainment, the consumption of alcohol. Muslim women were kept inside the house and ancient habits of brides being chosen to cement family connections at home meant there was almost no sexual mixing, either. To many whites, the ‘Pakis’ were no less threatening than the self-confident young Caribbean men, but also more alien”.
ENOCH POWELL
A year later, in 1963, Kenya won its independence and gave its 185,000 Asians a choice between surrendering their British passports and becoming full Kenyan nationals, or becoming effectively foreigners requiring work permits. Many decided to emigrate, to the point where some 2,000 Asians a month were arriving in the UK by 1968. An amendment to the Commonwealth Immigrants Act that sought to impose an annual quota was rushed through by the then Home Secretary, Jim Callaghan (Labour). A Race Relations Bill was also brought forward so that cases of discrimination in employment and housing could be tried in the courts.
Although the Asian immigrants were well educated, being mostly civil servants, doctors and businesspeople, their arrival caused concern among a British public who noted, once again, that communities were changing without the electorate having given a mandate for it. This disquiet came to the attention of a member of the Conservative shadow cabinet, one Enoch Powell. Powell had seen how concerns over immigration had led to a 7.5 percent swing to Peter Griffiths, who had gone on to defeat Labour’s Patrick Gordon Walker in Smethwick during the 1964 election. The campaign Griffiths ran was a shockingly racist one. Its slogan was ‘if you want a nigger for a neighbour, vote Labour’. Two years later, Griffiths would lose his seat, having been denounced by Prime Minister Harold Wilson as a ‘parliamentary leper’. But Powell saw some merit in Griffiths’ position, particularly the accusation that the political class was turning a blind eye to the effects of immigration.
So it was that on 20th April 1968, Powell gave a speech at Birmingham’s Midland Hotel. It opened with an anecdote about a constituent who was considering leaving the country because “in 15 or 20 years’ time the black man will have the whip hand over the white man”, and went on to say that this was a view shared by hundreds of thousands. Did Powell not have a duty to voice the concerns of these people? “We must be mad, literally mad”, he told the small crowd, “as a nation to be permitting the annual inflow of some 50,000 dependents”. Powell warned that if this immigration wasn’t stopped, the result would be unrest and riot:
“As I look ahead, I am filled with foreboding; like the Roman, I seem to see “the Tiber foaming with much blood”’.
That speech has since become known as the ‘rivers of blood’ speech. It led to Powell being sacked by Conservative leader Edward Heath, who called the speech “racialist in tone and liable to exacerbate racial tensions”. It would also come to have an effect on the ability to hold a sensible discussion over controlling immigration. As Jason Farrell and Paul Goldsmith, authors of ‘How to Lose a Referendum’, explained:
“he provided a bogeyman that could be used as a quick, lazy comparison to cut off as quickly as possible any debate about one of the key background policies of New Labour’s time in power. Becoming compared to Enoch Powell was what happened if you questioned the benefits of multiculturalism and immigration”.
We will investigate New Labour’s role in turning immigration into a politically-correct forbidden subject in an upcoming essay.
REFERENCES
“How to Lose a Referendum” by Jason Farrell and Paul Goldsmith
Wikipedia
LET ‘EM IN: THE IMMIGRATION CONTROVERSY
NEW LABOUR 
In the 1960s, responding to a perceived public dissatisfaction over immigration, Enoch Powell delivered his infamous ‘rivers of blood’ speech, and in so doing created “a bogeyman that could be used as a quick, lazy comparison to cut off” any debate over multiculturalism or immigration. In that same decade, the politicians of the future were children, growing up amidst struggles for racial equality that peaked during the 60s and 70s. By the time they reached adulthood, many at the top of New Labour, as well as many of its activists, had a metropolitan, culturally liberal outlook that considered immigration an inherently good thing. In the eyes of this metropolitan mindset, there was little difference between wanting tight controls over immigration and being racist.
Indeed, some have made the case that New Labour deliberately encouraged immigration because they wanted to remake the country in their own liberal image. For example, Andrew Neather, a former adviser to Number 10 and the Home Office, reckoned “the policy was intended, even if this wasn’t its main purpose, to rub the Right’s nose in diversity and render their arguments out of date”. Others, though, have denied such claims. One such person was Barbara Roche, Labour’s immigration minister from 1999 to 2001. She attributed rising immigration levels to the fact that the previous Conservative government had not only installed a failed computer system but had also made cutbacks that left just 50 officials to make asylum decisions on a backlog of 50,000 cases.
It could be argued that any government at the time would have had to respond to a rapidly changing world. In the previous essay, we saw how the British Nationality Act theoretically opened the borders to 800 million people, but the expense of travel at the time imposed a practical limit on the numbers who actually did migrate. By the time New Labour came to power, however, forces of globalisation such as lower-cost air travel and mass communication, as well as numerous conflicts in Africa and the Balkans, had led to more rapid population movements. When increasing numbers of asylum seekers arrived from the Balkans, the pressure was on to move them away from the costs and dependency of the asylum system and toward the work permit route, and there was also pressure from business sectors to increase work permits in response to a booming economy and low unemployment. Meanwhile, higher education was being internationalised at a rapid pace, which meant New Labour could finance their policy of expanding university education in the UK by encouraging foreign students into the country.
From 1997 onwards, the decisions taken by New Labour added up to around 500,000 people arriving in the UK each year. By 2010, the UK population had increased by 2.2 million migrants, equivalent to a city twice the size of Birmingham. It was, at the time, the largest peacetime migration in the country’s history.
As a result, many places in the country that had previously been untouched by immigration suddenly found themselves host to significant migrant communities, while at the same time many British communities saw their livelihoods disappearing overseas as the winds of globalist change swept over them. If those people thought that a Labour government with a 179-seat majority would speak up for the working classes the party traditionally represented, they were in for a rude awakening.
BLAIR’S SPEECH
In 2005, Tony Blair achieved a third electoral victory, but with a massively reduced majority. In the customary acceptance speech on the steps of 10 Downing Street, the Prime Minister radiated humility and insisted he had heard the concerns of the rising numbers of people worried about immigration and the forces of globalisation. But within five months, Blair gave a speech at his twelfth annual conference as party leader that dispensed with the concerned socialist act and went with full-on free market liberalism instead:
“I hear people say we have to stop and debate globalisation. You might as well debate whether autumn should follow summer … The character of this changing world is indifferent to tradition. Unforgiving of frailty. No respecter of past reputations. It has no custom and practice. It is replete with opportunities, but they only go to those swift to adapt, slow to complain, open, willing and able to change”.
In other words, capitalism was sweeping across the world, bringing opportunity but also insecurity and inequality, and the only assurance the Prime Minister could give his electorate was that nothing could be done for them; they just had to accept they were in a Darwinian market struggle for survival. Guardian journalist John Harris, upon hearing that speech, commented: “‘Swift to adapt, slow to complain, open, willing and able to change.’ And I wondered that if these were the qualities now demanded of millions of Britons, what would happen if they failed the test?”.
It became increasingly obvious what would happen to such people. They would be left behind, largely unrepresented by the two major political parties. Worse still, these losers in the globalist race not only found themselves ignored and unrepresented by the political elite, they found their voices were actively repressed when they tried to focus attention on the most visible manifestation of the changes globalism and the free market had wrought: Immigration.
MRS DUFFY
Of all the anecdotes highlighting the way a portion of the British electorate was treated with contempt, there is perhaps no better example than the case of Gillian Duffy. A 65-year-old widow from Rochdale, she came across Prime Minister Gordon Brown, who was on walkabout for the 2010 election. She wasted no time in voicing her concerns, which included the national debt, the difficulty vulnerable people were having in claiming benefits, and the costs of higher education. Oh, and she also voiced concerns over immigration:
“All these Eastern Europeans what are coming in, where are they flocking from?”.
Face to face with Mrs Duffy, Gordon Brown was pleasant and persuasive enough to mend the pensioner’s faltering support for the Labour Party. She herself later said she had been happy with the answers he gave. But when Brown entered what he thought was the privacy of his car, a wholly different side of his character surfaced. The world became privy to this other side of Brown because he inadvertently left his Sky News mic on, and broadcast to the world:
‘That was a disaster. Should never have put me with that woman … whose idea was that?…she’s just a sort of bigoted woman, said she used to be Labour. It’s just ridiculous.’ 
This, then, was the attitude of the political elite who held the reins of power during New Labour’s time in office. The very personification of charm in public, but totally contemptuous of even the mildest concerns over immigration in private. A whole class of politicians who had grown up amidst the 60s and 70s struggles for racial equality had come to adopt such a strong metropolitan mindset that they equated controls on immigration with racism and dismissed concerns over the movement of people as the ravings of bigots.
Mrs Duffy’s question was a reference to decisions made by the EU and Britain to open up the country to immigration from Eastern Europe. We’ll look at that next.
REFERENCES
“How to Lose a Referendum” by Jason Farrell and Paul Goldsmith
Wikipedia
LET ‘EM IN: THE IMMIGRATION CONTROVERSY
THE EXPANSION OF THE EU
In 2010, a Labour-supporting ex-councilwoman from Rochdale called Gillian Duffy confronted the then Prime Minister, Gordon Brown. She asked a bunch of questions, one of which (“all of these Eastern Europeans what are coming in, where are they flocking from?”) resulted in her being dismissed as a bigot when Brown thought he was out of earshot.
Anyone seeking a proper answer to Mrs Duffy’s question would have to look back to May 2004. That was when the EU underwent its largest expansion ever in terms of territory, population and number of states, as a wave of new countries, mostly former communist states of central and Eastern Europe, joined. The newcomers were Cyprus, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Malta, Poland, Slovakia and Slovenia. The most important thing to note about these countries is that their economic output was much lower than that of the existing member states. Acceptance into the EU therefore presented a golden opportunity for their people, for it meant they would have the right to move anywhere in the EU, whether there was a job offer waiting for them or not, and be entitled to the same rights and privileges as national citizens. It was also good news for business because, since those job-seekers were coming from countries whose per capita GDP was less than half the EU average, they were willing to offer cheaper labour.
It was not good news for everyone, however. For those nationals already at the lower end of the labour market, the arrival of an even cheaper workforce put jobs under threat. Most of the existing member states recognised this problem and therefore decided to implement transitional controls, delaying new members’ full access to their labour markets by up to seven years. German Chancellor Gerhard Schroeder, for instance, told the German people in 2000:
“Many of you are concerned about expansion of the EU … The German government will not abandon you with your concerns … We still have 3.8 million unemployed, the capacity of the German labour market to accept more people will remain seriously limited for a long time. We need transitional arrangements with flexibility for the benefit of both the old and the new member states”.
Accordingly, Germany initially maintained transitional controls like bilateral quotas on the number of immigrants and work permits. All of the big European countries decided to take up transitional controls, with one exception: the UK.
The reason New Labour decided not to implement transitional controls had to do with the findings of a research team, led by Professor Christian Dustmann, that had been commissioned by the Home Office. That research suggested that only around 13,000 immigrants would arrive each year. The economy was booming at the time, and the Performance and Innovation Unit at No 10 had produced a 73-page report claiming that the foreign-born population in the UK contributed ten percent more to government revenue than it received in state handouts.
It could also be said that, even if the Home Office had wanted strict controls on immigration, it would have come under pressure from other departments. These included the Foreign Office, which had diplomatic reasons for being pro-immigration; the education department, which looked forward to extra revenue from foreign students; and, perhaps most important of all, the business department, which certainly wasn’t going to turn its nose up at an influx of cheap and willing labour. Finally, as we have seen in a previous essay, New Labour’s cabinet were children of the 60s and 70s who had grown up during the struggles for racial equality and become adults with a metropolitan liberal mindset that was very much pro-multiculturalism. For all those reasons, New Labour decided not to apply transitional controls.
There was, however, an important caveat to the Dustmann report’s claim that the number of immigrants coming to the UK would be 13,000 per year. The report actually said that the numbers would be a great deal higher if the other member states imposed transitional controls. As we have seen, that is precisely what they did.
Between 2004 and 2012, 423,000 migrants came to the UK. As the noughties progressed, the effects of global conflicts and financial crises swelled the numbers further. A combination of people fleeing Middle East conflict and the expansion of the EU (many members of which were suffering crippling austerity due to the financial mess that was the Euro) meant that the UK’s population increased by 2.2 million, equivalent to a city twice the size of Birmingham.
Given that they were coming from countries that were either poorer or suffering from conflict, this influx consisted of people prepared to offer much cheaper labour. The effects of this were becoming apparent, and were spoken about by people unafraid to defy a political correctness that equated any concern over uncontrolled immigration with xenophobia. People like Nigel Farage:
“By 2005, it was obvious that something quite fundamental was going on. People were saying, ‘We’re being seriously undercut here’”.
In the next essay, we’ll look at who benefits from uncontrolled immigration, and who doesn’t.
REFERENCES
“How to Lose a Referendum” by Jason Farrell and Paul Goldsmith
Wikipedia
LET ‘EM IN: THE IMMIGRATION CONTROVERSY
WINNERS AND LOSERS OF GLOBALISM
Toward the end of the 20th century and the start of the 21st, the UK was governed by a party with a decidedly globalist outlook and metropolitan ideology. There is perhaps no better explanation for why debates over controlling immigration degenerate into accusations of xenophobia. It’s a vestige of a time when any such debate was pretty much a forbidden subject. In 2005, when Conservative leader Michael Howard said “it’s not racist to impose limits on immigration”, he was met with outrage from New Labour. Now, more than ten years later, it is possible to at least suggest that the uncontrolled movement of people is not always and everywhere a good thing without being angrily shouted down. But the suspicion that you might be xenophobic lingers on. Invariably, suggestions that immigration needs to be controlled are criticised as though they were calls to stop it altogether and become isolationist. Whoever suggests there is any problem with mass migration can expect to be lectured on the many genuine benefits the free movement of people has delivered.
But one can acknowledge the benefits immigrants bring while recognising that mass migration has not been good for everyone. This was highlighted by a chart created by economist Branko Milanovic and his colleague Christoph Lakner. Known as the ‘elephant curve’, it lines up the people of the world in order of income and shows how their real incomes changed, in percentage terms, from 1988 to 2008. One group, the 77th to 85th percentile, experienced an inflation-adjusted fall in income over that period. These people are the lower-skilled working classes of developed countries like the UK. Something like 80 percent of the world has an income lower than this group’s, so given how financially difficult life can be for the working class, you can appreciate just how poor most of the world is, and just how intense the competition for a better life could become absent any control over the movement of people.
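For readers curious how such a chart is constructed, the method can be sketched in a few lines: rank everyone by income, take each percentile’s income in both survey years, and compute the percentage change. The data below is randomly generated purely to demonstrate the computation; the real curve is built from Lakner and Milanovic’s global household survey dataset.

```python
# Sketch of how an 'elephant curve' is computed: line the world's
# population up by income and compare each percentile's real income
# across two survey years. Incomes here are randomly generated, purely
# to demonstrate the method.

import numpy as np

rng = np.random.default_rng(0)
incomes_1988 = rng.lognormal(mean=8.0, sigma=1.2, size=100_000)
incomes_2008 = rng.lognormal(mean=8.3, sigma=1.3, size=100_000)

percentiles = np.arange(1, 100)
p1988 = np.percentile(incomes_1988, percentiles)
p2008 = np.percentile(incomes_2008, percentiles)

growth = 100 * (p2008 - p1988) / p1988  # real income growth, percent
for p in (10, 50, 80, 99):              # a few sample percentiles
    print(f"{p}th percentile: {growth[p - 1]:+.1f}%")
```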
To illustrate why the working classes in developed nations are made worse off by uncontrolled immigration, let’s turn to a simplified example. Imagine workers in a factory. The production line does not have sufficient numbers of employees to run properly. Such a situation is not good either for the business itself or the employees. If it were to continue, the plant would close and the employees would lose their jobs.
Now, let’s suppose the plant has to recruit from overseas in order to fill the labour shortage. From the perspective of the employees, what would be the ideal immigration system? It would be a highly controlled system that only let in as many qualified people as are required to make up the shortage.
The owners of the plant might see things rather differently, however. For them, the ideal is to have no control over the movement of people and to tempt as many people into the country as possible. Now, given that these people have no vacancies to fill, what use are they, economically speaking? The answer is that they put pressure on the existing workers, who feel they can’t raise issues about current standards or even falling standards, for fear of being replaced. “There are plenty who would agree to these conditions”, we can imagine any dissenters being told. This pressure to drive down both wages and investment to improve or maintain working conditions is good for the owners, since they get to appropriate more of the wealth that their workforce produces. Surprise, surprise, the top 1 percent on the elephant curve have a line that’s almost vertical.
In case this sounds like a mere hypothetical, let’s look at some real examples. In 2006, Southampton’s Labour MP, John Denham, noted that the daily rate of a builder in the city had fallen by 50 percent in 18 months. Or consider the findings of Guardian journalist John Harris, whose series ‘Anywhere but Westminster’ featured a Peterborough agency advertising rates and working conditions that only migrants would accept.
But perhaps the most striking example is the managing director of a recruitment firm who admitted to the authors of ‘How To Lose a Referendum’ that, were it not for uncontrolled immigration, pay and working conditions might have to improve. All these examples point to the same thing: when the supply of labour increases irrespective of demand, bargaining power falls, and the monied take advantage to appropriate even more wealth from those who actually do the work.
It should be noted that such outcomes are not usually entirely due to mass immigration. In April 2017, the Economist published a study of those areas of the UK that had seen the sharpest increase in new migrants over the ten-year period from 2005 to 2015. In those areas, dubbed ‘migrant land’ by the magazine, real wages fell by a tenth, faster than the national average, and there was also a decline in health and educational services. But there were other factors impacting these areas too. They suffered cuts to public services following the Coalition’s move to austerity in the aftermath of the Great Recession, and they were disproportionately affected by the decline of the manufacturing sector.
Some have argued that these other factors are the real issue and that pointing the finger of blame at migrants is just scapegoating. Consider the words of Justin Schlosberg, media lecturer at Birkbeck, University of London:
“The working-class people have had an acute sense that their interests were not being represented by the banks and Westminster. What the right-wing press seeks to do is, rather than identify the true source of the concerns, which is inequality, concentrated wealth and power and the rise of huge multinational corporations that dominate the state… all of that is an abstract, complex story to tell. The story they told, which more suits their interests, is: the problem is immigrants. The problem is the person who lives down your street, who works in your factory, who looks different and has different customs. It plays on those instinctive fears”.
Now, in some ways you could say he makes a fair point. Immigrants are not bad people; they are just ordinary folk doing what they can to improve their circumstances. But the fact is that mass immigration is part of the ‘abstract, complex story’ that is globalism.
So what is globalism, anyway? Is it the brotherhood of humanity, people of all races, creeds and religions holding hands and united under common bonds? If that is indeed what it is, then it would surely be welcomed by the vast majority of us. After all, the latest estimates are that only 7 percent of the UK population hold racist views.
But there is another way to look at globalism, and that is to see it as the commodification of the world, its resources and its people. It’s a global network of banking and financial systems that seems always ready to blow up and spread systemic risk, the fallout landing on the working classes while the one percenters get government bailouts. It’s a global transport and communication system that enables corporations to move manufacturing and other sectors to wherever rules and regulations are more relaxed and people more exploitable. Most damningly of all, it is the commodification of people, sometimes to the point where they are reduced to the status of disposable commodities. The tragic reality of that was vividly illustrated by the sight of greedy traffickers dangerously overfilling barely seaworthy boats with people desperate to escape dire situations, lured by false promises of some other place where opportunities are boundless and nobody slips through the social safety net. 
What really awaits these people is sometimes not just low-paid work but actual slavery. Incredibly, when their status as slaves is pointed out, such people often deny that’s what they are, because the conditions they came from were so bad their current situation feels like a step up. While one has to feel for people as downtrodden as that, one must also acknowledge the negative effect this has on the working classes of developed nations. From the point of view of this group, the whole point of a job is to earn a living. To achieve that aim you need wages high enough to alleviate financial anxiety, a sense of stability and security in your working life, and sufficient free time with which to develop a more well-rounded existence. All of that is hard to achieve when you are competing for jobs with people who consider slavery an improvement, and when jobs are disappearing to places where pressure from unions and environmental groups is too weak or nonexistent to curb the exploitation of people or the natural world.
At the same time, the globalist commodification of everything suits the wealthy elite. Selling arms to warring nations, offering huge loans to corrupt leaders, supporting coups to overthrow more egalitarian governments, and throwing regions into chaos so precious resources can be extracted on the cheap amidst the anarchy are all money-making opportunities. And the consequences offer money-making opportunities too, as people flee countries ravaged by war and economic weapons of mass destruction with so little bargaining power that their numbers put serious downward pressure on wages and working conditions (more profit for the owners) and increase competition for housing (which forces up land prices, thereby increasing the paper profits of the owner classes).
One has to wonder how things would have turned out had globalism continued amidst a complete intolerance for debating the issue of uncontrolled immigration. For decades, the working classes of the UK were underrepresented by the political establishment. New Labour’s mindset was a mixture of metropolitanism and free-market ideology that imposed a Darwinian struggle for survival on people’s lives, followed by a Coalition that responded to the near-collapse of the world financial system, after deregulation led to insane risk-taking, with austerity, essentially making the working classes pay for the excess and greed at the top. Meanwhile, with even the mildest objections to uncontrolled immigration shouted down as xenophobia, only the extremists were prepared to speak up: people like Nick Griffin of the BNP, or Marine Le Pen of the Front National. More recently, Chancellor Merkel’s decision to open Germany’s borders to a million mainly Middle-Eastern migrants is seen by some as a reason why the far-right Alternative für Deutschland won 50 percent of the vote in more depressed areas. That, as in other cases, was the result of simmering dissatisfaction over what globalism had wrought and what intolerant liberalism had deemed inadmissible for reasoned debate.
To borrow the words of the Leave campaign’s poster, the rise of extremist groups is a sign that people’s tolerance for what globalism has done is at breaking point.
REFERENCES
“How to Lose a Referendum” by Jason Farrell and Paul Goldsmith
Wikipedia

WHACKY SCI-FI ENERGY PROPOSALS
Any mildly observant person is bound to notice that energy plays an important role in everyday life. Look around, and it is not too difficult to find various attempts at harnessing it. Plants extract energy from the sun through photosynthesis, animals extract energy by digesting organic material, and any industrial landscape is bound to have vehicles burning fossil fuels or the odd photovoltaic cell or wind turbine making use of renewable energies.
But, despite having sought ways of extracting energy from the environment for billions of years, life is still not at all efficient at doing so, at least not when its various attempts are compared to theoretical limits. If you want to know how much potential energy is available to be tapped, you must turn to what is probably the most famous equation in the world: E = mc^2. This equation is basically a conversion factor that tells you how much energy is contained in a given amount of mass. If you take something like a candy bar and multiply its mass by the speed of light squared, that tells you precisely how much energy the bar contains. The speed of light squared is a huge number (with the speed of light measured in miles per hour, its square comes to roughly 450,000,000,000,000,000), so even a tiny amount of mass can unleash an enormous amount of energy. An atomic bomb’s explosion, for example, is the result of just a small amount of uranium’s mass being converted to energy.
If you were to eat that candy bar, you would extract a mere ten-billionth of its mc^2 energy. To put it another way, the process of digestion is only 0.00000001% efficient. Burning fossil fuels fares a bit better, with coal and gasoline extracting 0.00000003% and 0.00000005% of the energy contained in such fuels respectively. How about nuclear fission, which, as we saw earlier, is capable of unleashing tremendous amounts of energy? Well, it certainly does a lot better than digestion or fossil fuel burning, but at an efficiency rating of 0.08%, it’s still far from ideal.
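To make those percentages concrete, here is a quick back-of-the-envelope calculation in Python. The efficiency figures are the ones quoted above; the 100-gram mass of the candy bar is my own illustrative assumption, not a number from the sources cited below.

```python
# Rough sanity check of the mc^2 figures above.
# Assumption: a 100 g candy bar (the mass is illustrative, not from the sources).

C = 299_792_458          # speed of light in metres per second
bar_mass_kg = 0.100      # assumed candy-bar mass

e_total = bar_mass_kg * C**2       # full mc^2 energy in joules
print(f"Total mc^2 energy: {e_total:.2e} J")    # ~9.0e15 J

# Efficiencies quoted in the text, converted from percentages to fractions:
efficiencies = {
    "digestion":        1e-10,   # 0.00000001 %
    "burning coal":     3e-10,   # 0.00000003 %
    "burning gasoline": 5e-10,   # 0.00000005 %
    "nuclear fission":  8e-4,    # 0.08 %
}
for process, fraction in efficiencies.items():
    print(f"{process:>16}: {e_total * fraction:.2e} J extracted")

# Digestion yields ~9.0e5 J, i.e. roughly 215 food calories --
# about what a real candy bar delivers, which is reassuring.
```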
The fact that we are mostly failing to put this energy to use can be considered good news, in that any energy shortage we may experience has little to do with energy being a scarce resource, and is instead due to our inability to access it. Unlike true scarcity (which we can’t do much about), an inability to access what’s available is a problem that can be addressed with appropriate technology. For example, by 2030 the world will need around thirty trillion watts, an energy need that could be met if we succeeded in capturing just three ten-thousandths of the sunlight that hits the Earth.
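That ‘three ten-thousandths’ claim is easy to sanity-check. The sketch below assumes the standard textbook figure of roughly 174 petawatts of sunlight intercepted by the Earth; that flux figure is my own assumption rather than something stated in the sources cited at the end.

```python
# Sanity check: what fraction of incident sunlight covers projected 2030 demand?
# Assumption: ~174 petawatts of solar power strikes the Earth (standard textbook figure).

solar_flux_on_earth = 174e15   # watts of sunlight hitting the Earth
projected_need_2030 = 30e12    # watts, the ~thirty trillion quoted above

fraction_needed = projected_need_2030 / solar_flux_on_earth
print(f"Fraction of incident sunlight needed: {fraction_needed:.5f}")
# ~0.00017 -- around two ten-thousandths, the same order of magnitude
# as the three ten-thousandths quoted in the text.
```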
That would be a most welcome outcome in terms of securing our future, but even this achievement would not fare particularly well in terms of putting all available energy to good use. After all, most of the Sun’s output does not strike the earth but is instead dumped into empty space. Some radical thinkers have proposed ambitious schemes for harvesting this wasted energy.
THE DYSON SPHERE
One such proposal was put forward in 1960 by Freeman Dyson. His idea was to dismantle Jupiter and use its material to form a spherical shell around the Sun. Doing so would enable our descendants to capture a trillion times more energy than we are capable of harvesting today. It would also provide 100 billion times more living space, if you were able to move around its surface. With the Sun at the centre and you walking around on the inside of the sphere, everywhere on your habitat would enjoy permanent daylight. However, with gravity ten thousand times weaker than what we’re used to, travelling all the way around such a sphere without falling off would be pretty much impossible. In fact, it’s probably fair to say that life in general (or, at least, life as we know it) would be infeasibly difficult at best and impossible at worst if we had to live on the inner or outer surface of the Dyson sphere itself.
A way around that problem might be to construct habitats within the Dyson sphere like the ones proposed by the American physicist Gerard K. O’Neill. Known as O’Neill cylinders, these could provide habitats more like those we are familiar with if they orbited the Sun in such a way as to always point straight at it. Centrifugal force caused by their rotation could provide artificial gravity, and we could even have a 24-hour day-night cycle if mirrors directed the sunlight in an appropriate way.
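The artificial-gravity part is simple physics: at the rim of a spinning cylinder, the centripetal acceleration is omega^2 * r, so you pick the spin rate that makes this equal Earth gravity. The sketch below assumes a 3-kilometre cylinder radius, which is my own illustrative figure (O’Neill’s designs came in several sizes).

```python
import math

# How fast must an O'Neill cylinder spin for Earth-normal 'gravity'?
# At the rim, centripetal acceleration a = omega^2 * r; we solve for omega.
# Assumption: a 3 km cylinder radius (illustrative; O'Neill considered several sizes).

g = 9.81           # m/s^2, the acceleration we want to mimic
radius = 3_000.0   # metres, assumed cylinder radius

omega = math.sqrt(g / radius)     # required angular speed in rad/s
period = 2 * math.pi / omega      # seconds for one full rotation
print(f"Spin rate: {omega:.4f} rad/s, one rotation every {period / 60:.1f} minutes")
# ~0.057 rad/s, about one rotation every 1.8 minutes -- slow enough
# that residents would barely notice the spin.
```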
Obviously, constructing a Dyson sphere would be a feat of engineering way beyond anything remotely achievable today. But that didn’t stop its originator, Freeman Dyson, from considering them a realistic prospect, given sufficient time. “One should expect that, within a few thousand years of its entering the stage of industrial development, any intelligent species should be found occupying an artificial biosphere which completely surrounds its parent star”.
Amazingly, even this vastly ambitious project would not be all that successful at capturing the energy contained within the Sun’s mass. This is because a star like our Sun only ever fuses about a tenth of its hydrogen fuel; after that, its life as a normal star is over and it expands into a red giant. So, even if we were to enclose the Sun in a perfect Dyson sphere, we could not hope to put more than about 0.08% of the Sun’s potential energy (i.e. the energy contained in its mass) to good use.
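Where does that 0.08% ceiling come from? Roughly speaking, it is the product of two numbers: fusion converts only about 0.7% of the mass of the hydrogen it burns into energy (a figure from Tegmark’s book, cited below, rather than stated above), and the Sun only ever burns about a tenth of its hydrogen. A minimal sketch:

```python
# The rough ceiling on the Sun's usable mc^2 energy.
# Assumption: hydrogen fusion converts ~0.7% of the fused mass to energy
# (a figure taken from Tegmark's book, not stated in the text above).

fusion_mass_to_energy = 0.007   # fraction of fused mass released as energy
fraction_of_fuel_burned = 0.1   # the Sun fuses only ~a tenth of its hydrogen

usable_fraction = fusion_mass_to_energy * fraction_of_fuel_burned
print(f"Usable fraction of the Sun's mc^2 energy: {usable_fraction:.2%}")
# ~0.07%, the same ballpark as the 0.08% quoted above (the quoted figure
# folds in further details, such as the Sun's exact hydrogen content).
```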
SPINNING BLACK HOLES
For those of our descendants looking for more power than even a Dyson sphere can provide, there is an idea put forward by the British physicist Roger Penrose. Many black holes are spinning, and this rotational energy could potentially be put to good use. Like all black holes, the spinning variety has a singularity (the remnant of a star so dense it has crushed itself to an infinitesimal size, and about which we know very little because it exists in realms of nature beyond anything our current models can handle) and an event horizon, a boundary surrounding the black hole which, once crossed, allows nothing to escape the gravitational pull of the singularity at its centre. A spinning black hole also has another feature known as an ‘ergosphere’, where, according to Max Tegmark, “the spinning black hole drags space along with it so fast that it’s impossible for a particle to sit still and not get dragged along”.
What this means is that any object tossed into the ergosphere will pick up speed as it rotates around the black hole. Normally, such objects inevitably cross the event horizon and are swallowed by the black hole. But Roger Penrose worked out that if you could launch an object at the right angle and have it split into two pieces, then only one piece would get eaten while the other would escape the black hole. More importantly, it would escape with more energy than you started with. This process could be repeated as many times as it takes to convert all of the black hole’s rotational energy into energy that can be put to work for you. Assuming the black hole was spinning as fast as possible (which would mean its event horizon was rotating at close to the speed of light), you could convert 29% of its mass into energy using this method. That would be equivalent to converting 800,000 suns with 100 percent efficiency, or having a billion Dyson spheres working for billions of years.
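That 29% figure is the standard theoretical maximum for the Penrose process: for a maximally spinning (extremal) black hole, the extractable rotational energy works out to 1 - 1/sqrt(2) of the hole’s mass-energy. The sketch below checks the arithmetic; the 2.7-million-solar-mass example hole is my own illustrative assumption, chosen because it reproduces the ‘800,000 suns’ figure quoted above.

```python
import math

# Maximum energy extractable from a maximally spinning black hole via
# the Penrose process: the standard result is a fraction 1 - 1/sqrt(2)
# of the hole's total mass-energy.

max_extractable = 1 - 1 / math.sqrt(2)
print(f"Maximum extractable fraction: {max_extractable:.1%}")   # ~29.3%

# Illustrative assumption: a hole of 2.7 million solar masses, chosen
# because it reproduces the '800,000 suns' equivalence quoted above.
hole_mass_in_suns = 2.7e6
suns_worth_of_energy = hole_mass_in_suns * max_extractable
print(f"Equivalent to ~{suns_worth_of_energy:,.0f} suns converted at 100% efficiency")
```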
CONCLUSION
As I said before, Dyson spheres and spinning black holes are proposals way beyond anything remotely plausible today. It might be tempting, therefore, to dismiss such ideas as crazy science fiction. But I think there is a serious point to be made amongst all this whacky sci-fi stuff, which is that we are extremely far from putting the available energy to good use. Next time you hear about an energy crisis, bear in mind that it really has nothing to do with energy being a scarce commodity. No, it is all down to our technical inability to capture the energy that is available. These crazy sci-fi proposals are therefore something to aspire to, and even if our actual technologies only ever capture one percent of one percent of the energy something like a Dyson sphere could harvest, that would still be far more energy than our global needs are ever likely to require. And besides, if you’re going to have ambitions, they might as well be big!
REFERENCES
Life 3.0 by Max Tegmark
The Singularity Is Near by Ray Kurzweil.
