BULLSHIT JOBS AND THE NEW FEUDALISTS

Have you ever felt that your job was a waste of time? If so, you are not alone. When YouGov asked people ‘Does your job make a meaningful contribution to the world?’, 37% replied that it did not and a further 13% were ‘unsure’. In other words, fifty percent of those polled either didn’t know whether their job was worthwhile or were certain that it was not. If you are one of these people, chances are you have a ‘bullshit job’.

What is a ‘bullshit job’?

It might be worth talking a bit about what the term ‘bullshit job’ means. Perhaps the easiest way to grasp it is to consider its opposite. When it comes to employment, we usually assume that some need is first identified, and then some service is created to fill that gap in the market. An obvious way to tell whether a service is necessary to society overall is to observe the effect when it is removed: say, as a consequence of strike action. If society experiences a noticeable negative effect, then it’s almost certain that the job was a valuable one.

On the other hand, if a job could disappear with almost nobody noticing (because its absence either has no effect or is actually beneficial), that would be a bullshit job.

Here’s one example of such a job, taken from David Graeber’s ‘Bullshit Jobs: A Theory’:

“I worked as a museum guard for a major global security company in a museum where one exhibition room was left unused more or less permanently. My job was to guard that empty room, ensuring no museum guests touch the…well, nothing in the room, and ensure nobody set any fires. To keep my mind sharp and attention undivided, I was forbidden any form of mental stimulation, like books, phones etc. Since nobody was ever there, in practice I sat still and twiddled my thumbs for seven and a half hours, waiting for the fire alarm to sound. If it did, I was to calmly stand up and walk out. That was it”.

Now, some points are worth going over at this stage. Firstly, a bullshit job is best thought of as one that makes no positive contribution to society overall (since it would hardly matter if the position did not exist) rather than one that is of no benefit to absolutely anyone. As we shall see, it could suit some people to employ somebody to stand or sit around wearing an impressive-looking uniform. It’s just that whatever function this serves really has little to do with capitalism as most people understand it.

Secondly, one can always invent a meaning for such a job, just as philosophers have made up reasons why Sisyphus could find meaning in his pointless task of rolling that boulder uphill in the sure and certain knowledge that it would roll back down again. But really, all this does is highlight what bullshit such jobs are. After all, where genuine jobs are concerned one need not rack one’s brains making up justifications, because the need pre-exists the job.

So, with those points out of the way and with a definition of bullshit jobs to work with (‘employment of no positive significance to society overall’) we can return to the question ‘how come such jobs exist?’.

‘This cannot be!’

One reason, strangely enough, is that many people assume they cannot exist. This is because the very idea of bullshit jobs seems to run contrary to how capitalism is meant to work. If one word could sum up the workings of capitalism in the popular imagination, that word would probably be ‘efficiency’. Capitalism is imagined to be ruthless in its drive to cut costs and reduce waste. That being the case, it surely makes no sense for any business to make up pointless jobs.

At the same time, people have no problem believing stories of how socialist countries like the USSR made up pointless jobs like having several clerks sell a loaf of bread where only one was necessary, due to some top-down command to achieve full employment. After all, governments and bureaucracies are known for wasting public money.

It’s worth thinking about what happened in the Soviet example and what did not. No authority figure ever demanded that pointless jobs be invented. Instead, there was a general push to achieve full employment but not much diligence in ensuring such jobs met actual demands. Those lower down with targets to meet did what was necessary to tick boxes and meet their quotas.

Studies from Harvard Business School, Northwestern University’s Kellogg School of Management, and others have shown that goals people set for themselves with the intention of gaining mastery are usually healthy. But when goals are imposed on people by others (sales targets, standardized test scores, quarterly returns), such incentives, though intended to ensure peak performance, often produce the opposite. They can lead to efforts to game the system, to look good without producing the underlying results the metric was supposed to be assessing. As Patrick Schiltz, a professor of law, put it:

“Your entire frame of reference will change [and the dozens of quick decisions you will make every day] will reflect a set of values that embodies not what is right or wrong but what is profitable, what you can get away with”.

Practical examples abound. Sears imposed a sales quota on its auto-repair staff, who responded by overcharging customers and carrying out repairs that weren’t actually needed. Ford set the goal of producing a car by a particular date, at a certain price and a certain weight; those constraints led to safety checks being omitted and the dangerous Ford Pinto (a car that tended to explode if involved in a rear-end collision, due to the placement of its fuel tank) being sold to the public.

Perhaps most infamously, the way extrinsic motivation causes people to focus on the short term while discounting longer-term consequences contributed to the financial crisis of 2008, as buyers bought unaffordable homes, mortgage brokers chased commissions, Wall Street traders wanted new securities to sell, and politicians wanted people to spend, spend, spend, because that would keep the economy buoyant, at least while they were in office.

With all that in mind, it’s worth remembering the one thing that unites thinkers on the left and right sides of the political spectrum in Western thinking. Both agree that there should be more jobs. I don’t think I have seen a current-affairs debate where the call for ‘more jobs’ wasn’t made, and made often.

Whether you are a ‘lefty’ or a ‘right-winger’, you probably believe that there should be ‘more jobs’. You just disagree on how to go about creating them. For those on the left, the way to do it is to strengthen workers’ rights, improve state education, and perhaps launch public-works programs like Roosevelt’s New Deal. For right-wingers, it’s achieved through deregulation and tax breaks for business, the idea being that this will free up entrepreneurs to create more jobs.

But in neither case does anyone insist that whatever jobs are created should benefit society overall. Instead, it’s simply assumed that of course they will. This is roughly comparable to somebody being so convinced that burglary never happens that they take no precautions against theft, which only leaves them more vulnerable to it.

If this analogy is to work, it has to be the case that we are wrong to assume modern markets actively work against bullshit jobs; that, actually, there are reasons why pointless jobs are being created. In that case, our assumption that such jobs can’t exist would work against the possibility of acting to prevent their proliferation.

In fact, such reasons do exist, and a major one is something called ‘Managerial Feudalism’. What is that? Well, that’s a topic we will tackle in the next instalment.

REFERENCES

“Bullshit Jobs: A Theory” by David Graeber

“Why We Work” by Barry Schwartz

BULLSHIT JOBS AND THE NEW FEUDALISTS

Bullshit jobs are proliferating throughout the economy, and the reason why is partly due to something called ‘managerial feudalism’. In order to understand the role this plays in the creation of bullshit jobs, we need to look at the various positions people occupied in feudal societies. If you have ever watched a drama set in such times, you will no doubt have noticed how there is always an elite class of people who employ the services of a great many others. In some cases, their servants perform tasks that would be considered useful in today’s society, attending to such things as gardening, food preparation and household duties. But the nobility also seem to be surrounded by individuals who (despite the importance of their appearance, what with all the flashy uniforms they wear) don’t seem to be doing much of anything.

What are all these people for? Mostly, they are just there to make their superiors look, well, ‘superior’. By being able to walk into a room surrounded by men in smart uniforms, nobles give off an air of gravitas. And the greater your entourage is, the more important you must be. At least, that’s the impression you hope to convey when you employ people to stand around making you look impressive.

The desire to place oneself above subordinates, and to increase the number of those subordinates so as to gain a show of prestige, arises whenever society structures itself into a definite hierarchy with a minority holding a ‘noble’ position within that structure. This is exactly what we find in large businesses, where the executive classes assume the role of the nobility. To understand why bullshit jobs exist, we need to look at how the condition of managerial feudalism came about.

Rise of the corporate nobility

Once upon a time, from around the mid-40s to the mid-70s, businesses ran what might be called ‘paternalistic’ models that worked in the interests of all stakeholders. The need to rebuild infrastructure following the war, a desire to provide security to those who had fought in it, the strength of unions, and governments following Keynesian economics, all worked to ensure that increases in productivity would bring about increases to worker compensation.

But during the 80s and onwards, attitudes towards worker collectives and Keynesian economics changed; both came to be seen as stifling entrepreneurs. This gave rise to more lean-and-mean economic practices. What really helped the rise of the lean-and-mean model in the 80s and 90s were certain federal and state regulatory changes, coupled with innovations from Wall Street. These regulatory changes brought about an environment in which corporate mergers and takeovers could flourish.

Meanwhile, Michael Milken, of investment house Drexel Burnham, created high-yield debt instruments known as ‘junk bonds’, which allowed for much riskier and aggressive corporate raids. This triggered an era of hostile takeovers, leveraged buyouts and corporate bustups.

The people who most benefited from all this deregulation and financialisation were those at the executive level. Once upon a time, the CEO of a large corporation would have been the epitome of the cool, rational planner. He or she would have been trained in ‘management science’ and would probably have worked up through the ranks of the organisation so that, by the time they reached the top, the CEO had mastered every aspect of the business. Once at the apex of the corporate pyramid, this highly trained, rational specialist would have embodied the central belief of the college-educated middle class, with its mandate of progress for all and not just the few.

But as the corporate world became more volatile toward the end of the 20th century, questions began to arise over whether such rationality and level-headedness were best for delivering the new goal of short-term boosts to shareholder returns. With the business world now seen as so tumultuous and complex as to “defy predictability and even rationality” (as an article in Fast Company put it), a new kind of CEO emerged, one driven more by intuition and gut feeling. The new CEO was less a manager with great experience gained from working his way up the company hierarchy, and more a flamboyant leader who had achieved celebrity status in the business world and was hired for his showmanship, whether or not his prior role had anything to do with the new position. And such CEOs certainly prospered, because the focus on improving the bottom line and rewarding celebrity leaders saw executive pay soar to over three hundred times that of the typical worker.

It’s hard to exaggerate the difference between the old-style corporate boss and the new breed that arose around the late 20th century. As David Graeber pointed out, the old-fashioned leaders of industry identified much more with the workers in their own firms and it was not until the era of mergers, acquisitions and bustups that we get this fusion between the financial sector and the executive classes.

This marked change in attitudes was reflected in comments made by the Business Roundtable in the 1990s. At the start of the decade, the Business Roundtable said of corporate responsibility that corporations “are chartered to serve both their shareholders and society as a whole”. But seven years later, the message had changed to “the notion that the board must somehow balance the interests of other stakeholders fundamentally misconstrues the role of directors”. In other words, a corporation looks after its shareholders; the interests of other stakeholders (employees, customers, and society in general) matter far less.

Pointless White-Collar Jobs

Now, the term ‘lean and mean’ implies that capitalism had become more, well, ‘capitalist’, taking the axe to any unnecessary expenditure and therefore bringing about more streamlined operations run by more efficient employees. In other words, the exact opposite of conditions favourable to the growth of bullshit jobs. But, actually, the pressure to downsize was directed mostly at those near the bottom doing the blue-collar work of moving, fixing and maintaining things. They were subjected to ‘scientific management’ theories designed to dehumanise work and bring about robotic levels of efficiency, or were replaced by automation or lost their jobs when the firm took advantage of globalisation and moved abroad where more exploitable workers were available. This freed up lots of capital, and it is how that capital was used that is key to understanding how this so-called ‘lean-and-mean’ period brought about bullshit jobs. As Graeber said, “the same period that saw the most ruthless application of speed-ups and downsizing in the blue-collar sector also brought a rapid multiplication of meaningless managerial and administrative posts in almost all large firms. It’s as if businesses were endlessly trimming the fat on the shop floor and using the resulting savings to acquire even more unnecessary workers in the offices upstairs…The end result was that, just as Socialist regimes had created millions of dummy proletarian jobs, capitalist regimes somehow ended up presiding over the creation of millions of dummy white-collar jobs instead”.

REFERENCES

“White Collar Sweatshop” by Jill Andresky Frazier

“Bullshit Jobs: A Theory” by David Graeber

“Smile Or Die” by Barbara Ehrenreich

BULLSHIT JOBS AND THE NEW FEUDALISTS

The era of mergers and acquisitions which broke up admittedly bloated old corporations in order to bring about short-term boosts to shareholders resulted in the creation of a ‘noble class’ of executives, and of subordinates whose only purpose was to add to the prestige of those above them. One such employee was ‘Ophelia’, interviewed in Graeber’s book. “My current job title is Portfolio Coordinator, and everyone always asks what that means, or what it is I actually do? I have no idea. I’m still trying to figure it out….Most of the midlevel managers sit around and stare at a wall, seemingly bored to death and just trying to kill time doing pointless things (like that one guy who rearranges his backpack for a half hour every day). Obviously, there isn’t enough work to keep most of us occupied, but—in a weird logic that probably just makes them all feel more important about their own jobs—we are now recruiting another manager”.

This raises a couple of questions. How come the person ultimately in charge did nothing to prevent this flagrant waste of money? And how did an era of corporate bustups, mergers and acquisitions result in a proliferation of bullshit jobs?

Well, firstly one has to recognise a crucial difference between corporate raiders and the ‘robber barons’ they styled themselves on. The crucial difference is that people like Rockefeller and Vanderbilt, whatever you think of their practices, actually built business empires. But corporate raiders like James Goldsmith and Al ‘Chainsaw’ Dunlap didn’t do much building. No, they just took advantage of deregulation and financial innovations like junk bonds to tear apart existing businesses, lay off thousands and gain short-term boosts to their shares. They were vultures. That’s not necessarily derogatory. Vultures play a necessary part in cleaning away carcasses. Arguably, the old corporate structure had become too bloated and inefficient and really the axe should have come down on it. What I am suggesting is that, while the raiders were good at profiteering from the death of the old corporate structure, they lacked the ability to prevent the rise of a new one just as liable to create bullshit jobs.

The Influence Of Positive Thought

We can perhaps understand why by combining ‘managerial feudalism’ (with its nobles seeking shows of status and its flunkies providing a visible manifestation of that superiority) with the phenomenon I talked about in the series ‘How Religion Caused The Great Recession’.

In that series, I explained how early settlers of the United States practiced ‘Calvinism’. The Calvinist religion saw much virtue in industrious labour and particularly in constant self-examination for any sinful thought. Such an outlook probably helped settlers survive in what was, after all, the ‘Wild West’.

But as the harsh environments were gradually tamed, the constant self-examination for sinful thought and its eradication through labour came to impose a hefty toll on those who became cut off from industrious work. Faced with people succumbing to the symptoms of neurasthenia, and with the medical establishment seemingly unable to cure such patients, people began to reject their forebears’ punitive religion. In the 1860s, Phineas Parkhurst Quimby met up with one Mary Baker Eddy, and together they launched the cultural phenomenon of positive thinking, then known as ‘New Thought’. Drawing on a variety of sources from transcendentalism to Hinduism, New Thought re-imagined God from the hostile deity of Calvinism to a positive and all-powerful spirit. And humanity was brought closer to God, too, thanks to a concept of Man as part of one universal, benevolent spirit. And if reality consisted of nothing but the perfect and positive spirit of God, how could there be such things as sin, disease, and other negative things? New Thought saw these as mere errors that humans could eradicate through “the boundless power of spirit”.

But although intended as an alternative to Calvinism, New Thought did not succeed in eradicating all the harmful aspects of that religion. As Barbara Ehrenreich explained in ‘Smile Or Die’, “it ended up preserving some of Calvinism’s more toxic features- a harsh judgmentalism, echoing the old religion’s condemnation of sin, and the insistence on the constant interior labour of self-examination”. The only difference was that while the Calvinist’s introspection was intended to eradicate sin, the practitioner of New Thought and its later incarnations of positive thinking was constantly monitoring the self for negativity. Anything other than positive thought was an error that had to be driven out of the mind.

So, from the 19th century onwards, a belief that the universe is fundamentally benevolent and that the power of positive thought could make wishes come true and ward off all negative things was simmering away in the American subconscious. When consumerism took hold in the 20th century, positive thinking came to be imposed on anyone looking to get ahead in an increasingly materialistic world.

What all this has to do with the current topic is that the cult of positive thinking, begun with New Thought and amplified by 20th-century consumer culture, ended up shaping how businesses were run. Before the Great Depression, there had been campaigners speaking out against the excesses of the wealthy and the oppression imposed on the poor. The prosperity gospel that had begun in the 19th century, amplified by megachurches and TV evangelists responding to market signals from 20th-century consumption culture, carried a markedly different message: there was nothing amiss with a deeply unequal society. Anyone at all stood to become as wealthy as the top 1 percent. Just remain resolutely optimistic and all will be well.

But unlike the megachurches (which one could leave at any time) or television evangelists (whom one could always just turn off), the books and seminars consumed at corporate events were often mandatory for any employee who wanted to keep his or her job. Workers were required to read books like Mike Hernacki’s ‘The Ultimate Secret to Getting Everything You Want’ or T. Harv Eker’s ‘Secrets of the Millionaire Mind’, which encouraged practitioners of positive thinking to place their hands on their hearts and say out loud, “I love rich people! And I’m going to be one of those rich people too!”.

Remember that positive-thinking ideology considers any negativity to be a sin, and some of its gurus recommended removing negative people from one’s life. In the world of corporate America (where, other than in clear-cut cases of racial, gender, or age-related discrimination, anyone can be fired for any reason or no reason at all) that was easy to do: terminate the negative person’s employment. Joel Osteen of Houston’s Lakewood Church (described as “America’s most influential Christian” by Church Report magazine) told his followers, “employers prefer employees who are excited about working at their companies…God wants you to give it everything you’ve got. Be enthusiastic. Set an example”. And if you didn’t set an example and radiate unbridled optimism every second of the working day, you were made an example of. As banking expert Steve Eisman explained, “anybody who voiced negativity was thrown out”.

Such was the fate of Mike Gelband, who was in charge of Lehman Brothers’ real estate division. At the end of 2006 he grew increasingly anxious over the growing subprime mortgage bubble and advised “we have to rethink our business model”. For this unforgivable lapse into negativity, Lehman CEO Richard Fuld fired the miscreant.

A Bullshit Corporate Culture

So, the corporate culture had become one that was decidedly hostile to any bad news, such that even those in positions of high authority got the sack if they voiced any negativity. As for the lower ranks, whatever misgivings they had about the way things were going had to be filtered through layer upon layer of management. If there’s already a culture of hiding negative reports on how business practices are shaping up, of putting a positive spin on everything, it’s not much of a step from there to not being entirely truthful about the usefulness of the people being hired. This is even more likely to happen if A) your status is defined by how many subordinates you have (so that to lose subordinates is to suffer diminished status) and B) employees come to depend on the pretty generous salaries that often come with bullshit white-collar work, for example because a consumerist lifestyle has left them with substantial mortgages and credit-card bills. If that’s the case, then it’s probably not a good idea to broadcast how unnecessary some jobs are.

The idea that those in ultimate authority might be prevented from knowing everything that’s going on in their business was encapsulated by a comment that one billionaire made to crisis manager Eric Dezenhall: “I’m the most lied to man in the world”.

It’s important to point out that the role of CEO is not itself bullshit. What is being argued instead is that some CEOs are effectively blind to all the bullshit happening in their firms. Why wouldn’t they be, when anyone bringing them bad news is liable to be sacked, when executives and middle managers surround themselves with yes-men and flunkies, and when an obsession with increasing shareholder value is creating some decidedly dodgy business practices disguised behind impenetrable economic jargon and management-speak? Such practices are well suited to redirecting resources so as to create an elite minority with sufficient wealth and power to deserve the ‘nobility’ label, to creating elaborate hierarchies of flunkies who are just there to provide visible displays of their ‘superiors’’ magnificence, and to employing spin doctors who pull the wool over people’s eyes and prevent the truth from being revealed. Medieval feudalism had its priestly caste, with religious texts written in an obscure tongue with which to justify the divine right of kings. Managerial feudalism has the financial and banking sector and all the obscure language that comes with it, ceaselessly denouncing the working classes whenever they demand living wages and justifying any money grab or show of status by the executive and managerial classes, no matter how greedy and socially unjust.

It’s when we examine financialisation that we really understand how it can be that BS jobs exist. That’s a topic for next time.

REFERENCES

“White Collar Sweatshop” by Jill Andresky Frazier

“Bullshit Jobs: A Theory” by David Graeber

“Smile Or Die” by Barbara Ehrenreich

BULLSHIT JOBS AND THE NEW FEUDALISTS

In what way does the world of finance help bring about bullshit jobs? Well, it partly has to do with the way jobs are categorised in the popular imagination. When we talk about major revolutions in working practice we speak of transitions from hunter-gathering, to farming, to manufacturing, to services. Such terms imply that at every stage people always transition to work that is of obvious benefit to society, involving as it does the creation of products that improve quality of life, or by offering services that meet some pressing need or just make life more pleasant.

What’s wrong with this belief is that it paints the wrong picture of what everyone in ‘services’ does. Contrary to what the term implies, not everyone in ‘services’ is helping their fellow human beings by clipping hedges, serving ice-cream and so on. There is a fourth sector involved in work of a different kind, one economists call FIRE, after Finance, Insurance and Real Estate.

The kind of thing this sector is involved in is well illustrated by the goings-on that led up to the 2008 crash. Banks’ profits once relied on the quality of the loans they extended. More recently, however, we have seen a switch toward ‘securitisation’, which in practice involves bundling multiple loans together and selling portions of those bundles to investors as Collateralized Debt Obligations, or CDOs. Rather than earning interest as loans are repaid over time, under securitisation the bank’s profit derives from fees for arranging the loans. As for the risk inherent in lending money, it is the buyer of the CDO who takes it on, meaning that, as far as the bank is concerned, defaults are somebody else’s problem.

This caused a shift from quality-driven lending toward quantity-driven lending. Thanks to securitisation, banks could make loans in the knowledge that they could be sold off to someone else, the associated risk becoming the buyer’s problem. Banks were thus freed from the downside of defaults. And if conditions are in place to reward wild exuberance, borrowing is bound to spiral out of control.
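The incentive shift described above can be made concrete with a little arithmetic. Here is a minimal sketch using made-up figures (the loan size, interest rate, default probabilities and fee rate are illustrative assumptions, not data from the text): a bank that holds a loan profits only if the borrower is likely to repay, while a bank that securitises earns its arrangement fee regardless of loan quality.

```python
# Toy comparison of two lending models (illustrative numbers only).
# Under originate-to-hold, the bank keeps the loan and eats defaults;
# under securitisation, it earns a flat arrangement fee and passes
# the default risk on to the CDO buyer.

def hold_profit(principal, rate, default_prob):
    """Expected profit when the bank keeps the loan on its books."""
    interest = principal * rate
    expected_loss = principal * default_prob
    return interest - expected_loss

def securitise_profit(principal, fee_rate):
    """Profit when the loan is sold on: a fee, regardless of quality."""
    return principal * fee_rate

# A bank holding loans is punished for lending to likely defaulters...
good_loan = hold_profit(100_000, rate=0.05, default_prob=0.01)  # 5000 - 1000
bad_loan = hold_profit(100_000, rate=0.05, default_prob=0.20)   # 5000 - 20000
print(good_loan, bad_loan)  # 4000.0 -15000.0

# ...but a securitising bank earns the same fee either way, so volume,
# not quality, drives its profit.
print(securitise_profit(100_000, fee_rate=0.02))  # 2000.0 for any loan
```

Because the fee is identical for sound and dud loans alike, the securitising bank’s rational strategy is to maximise volume, which is exactly the quantity-driven lending described above.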

Of course, that’s precisely what happened in the runup to the 2008 subprime mortgage crisis. In the words of Bernard Lietaer and Jacqui Dunne, “math ‘quants’ took the giant pools of home loans now sitting on their employers’ balance sheets and repackaged them into highly complex, opaque, and difficult-to-value securities that were sold as safe bets. As more and more of these risky securities were purchased by pension funds, insurance firms, and other stewards of the global public’s savings, the quants’ securitisation machine demanded more loans, which in turn led to a massive expansion of dubious lending to low-income American households”.

Advertisements for banks really push the message that they are but humble servants helping customers protect and manage their money. And with talk of ‘markets’ and ‘products’, the financial ‘industry’ likewise presents itself as doing the traditional work of making useful stuff and providing much-needed services. If you believe the propaganda, the primary purpose of this sector is to help direct investments to those parts of commerce and industry that will raise prosperity, while earning an honest profit in the process.

But while this kind of thing does happen, it’s very misleading to portray the financial sector as mostly concerned with such services. We can see this by looking at where the money goes. A piffling 0.8 percent of the £435 billion created by the UK government through quantitative easing (i.e. money printing) went to the real, productive economy. The rest went to the financial sector.

As David Graeber explained, what this sector actually does is as follows: “the overwhelming bulk of its profits comes from colluding with government to create, and then trade and manipulate, various forms of debt”. In other words, what the FIRE sector mostly does is create money from ‘nothing’. But, the thing is, there actually is no such thing as money from nothing. If somebody is making money out of thin air, somebody somewhere else is being lumbered with the cost. So, really, financialisation is the subordination of value-adding activity to the servicing of debt.

It is under such conditions, in which work is morphed into a political process of appropriating wealth and the repackaging and redistribution of debt, that the nature of BS jobs (which seems so bizarre from the traditional capitalist point-of-view) actually makes sense. From the perspective of the FIRE sector, the more inefficient and unnecessary chains of command there are, the more adept such organisations become at the art of rent-extraction, of soaking up resources before they get to claimants.

An example of such practices was provided by ‘Elliot’:

“I did a job for a little while working for one of the ‘big four’ accountancy firms. They had been contracted by a bank to provide compensation to customers that had been involved in the PPI scandal. The accountancy firm was paid by the case, and we were paid by the hour. As a result, they purposefully mis-trained and disorganised the staff so that jobs were repeatedly and consistently done wrong. The systems and practices were changed and modified all the time, to ensure no one could get used to the new practice and actually do the work correctly. This meant that cases had to be redone and contracts extended. The senior management had to be aware of this, but it was never explicitly stated. In looser moments, some of the management said things like ‘we make money from dealing with a leaky pipe: do you fix the pipe, or do you let the pipe keep leaking?’”.

For such organisations to continue doing what they are doing, there have to be employees who work to prevent such dubious practices from becoming widely known. Faithful allies must be rewarded, whistleblowers punished. Those on the rise must show visible signs of success, surrounded by important-looking men who make their ‘superiors’ look special in office environments where one’s status is determined by how many underlings one commands. Meanwhile, those flunky roles are themselves a handy means of distributing political favours, and since those in the lower ranks had best be distracted from the dodgy goings-on, this incentivises the creation of an elaborate hierarchy of job positions, titles and honours. Let them occupy themselves squabbling over that.

So, ‘Managerial Feudalism’ is so called because the FIRE sector (which in practice is spreading, which is why car commercials no longer tell you what it costs to buy the vehicle, only what representative APR you can expect if you take out a loan) has brought about conditions that resemble classic medieval feudalism, which was likewise primed to create hierarchies of nobles, flunkies, mystic castes quoting obscure texts, and downtrodden masses.

This is not without consequence. In the early 20th century, economists like Keynes tracked progress in science, technology and management and predicted that, by the 21st century, our industries would be so productive we could drastically reduce the amount of time devoted to paid employment, investing the hours gained in the pursuit of a more well-rounded existence. When you consider that 50 percent of jobs are either definitely bullshit or of dubious value to society, you can see how people like Keynes were partly correct. Had we continued to focus on technical efficiency and productive capability, we would doubtless have access to much more leisure and prosperity. Instead, business, economics and politics combined in such a way as to create a new kind of feudalism that has imposed itself on top of capitalism.

Recapping what we have learned over this series: the old paternalistic corporate model came under attack during an era of bust-ups, mergers and acquisitions. The corporate raiders who led this attack differed from their predecessors in that they identified much more with finance than with the workers under their management. This, coupled with a cult of materialist positive thinking, gave rise to an executive class whose salary and bonus structure put them in a ‘noble’ position, and to a corporate culture that was hostile to any bad news. So when the savings made by bringing the axe down on those at the lower end of the corporate hierarchy were simply wasted on hiring yet more levels of management, few people dared speak out against the practice. Moreover, keeping one’s mouth shut and hoping you, too, might be in line for a pointless but well-paid white-collar job had become the sensible choice for those burdened with the high costs of an over-consumptive lifestyle. And that part of the ‘service’ sector which has little to do with providing services, and more to do with colluding with government to repackage and sell ever-more complex forms of debt, had every incentive to run things as inefficiently as possible, since those are the conditions in which rent-extraction can cream off more of other people’s money.

Such conditions encourage jobs that have more to do with appropriating wealth than creating it, and with disguising the fact that this is happening. When your status is defined by how many underlings you have, there is pressure to multiply the levels of management. If other big businesses employ somebody to sit at a desk, your company must do likewise: not necessarily because the person has anything useful to do, but simply because it’s ‘what is done’. When you make your money from a ‘leaky pipe’ (i.e. some deficiency in the system), this encourages ‘duct-taping’ jobs that merely manage the problem rather than deal with it. This is like employing somebody to replace the bucket rather than fix the leaking roof. Of course, in that oversimplified example the ruse would be easily spotted. But in the deliberately complex world of the FIRE sector there is more chance of doing things incompetently and getting away with it, because few can penetrate the jargon and management-speak to see the bullshit hiding behind it.

What this all means is that the ‘technological unemployment’ gap that Keynes predicted has been filled with jobs that, quite frankly, don’t need to exist. If you can’t imagine how that can happen under capitalism, your mistake is in assuming our current system is something that people like Adam Smith or Milton Friedman would recognise as ‘capitalist’. Bullshit jobs really shouldn’t exist in the kind of free market that people like Stefan Molyneux promote, but they can and do exist in whatever market system dominates today.

REFERENCES

“White-Collar Sweatshop” by Jill Andresky Fraser

“Bullshit Jobs: A Theory” by David Graeber

“Smile Or Die” By Barbara Ehrenreich


WHY EXECUTIVES DON’T STRIKE

Strikes. They’re a nuisance, aren’t they? Bringing disruption to our lives by denying us the services we rely on. But have you ever noticed how the workers who organise strikes always seem to be employees at the lower end of the corporate hierarchy? It’s always blue-collar workers, junior doctors and other lowly types that are threatening such action. Executives, for some reason, never stage a walkout.

I wonder why that is?

Now, some might think the reason is obvious: strikes are undertaken in order to get more pay, and executives have no need for such action because they are already very handsomely compensated. A top advertising executive, for example, can earn a yearly salary of around half a million pounds. Not too bad!

But, actually, ‘more money’ is not the only reason why workers feel the need to strike. Sometimes, strike action is undertaken in order to bring to the world’s attention unfair working practices. If being treated unfairly justifies a walkout, then maybe executives would have a reason to strike?

Think about how such people are portrayed in movies. In nearly all cases, executives in films are corrupt. You have Gordon Gekko in ‘Wall Street’, breaking laws and destroying small businesses in his thirst for more dirty money. You have the executive classes in ‘Elysium’, living in luxury aboard their space station while, down on Earth, their overworked, underpaid blue-collar employees are callously discarded when they fall foul of atrocious working conditions the higher-ups are too uncaring to fix. You have the CEO of OCP looking on in concern as RoboCop 2 lays waste to the city: not concern for the people it’s killing, mind you, but at what it could mean for his company’s shares (“this could look bad for OCP, Johnson! Scramble our best spin team!”).

Those are just a few examples of films that make businessmen out to be bad guys. Now try to think of movies where executives are portrayed not as villains but as heroes. I can only think of two. Batman’s Bruce Wayne has a strong moral code. But that’s not a particularly good example, because he is only being altruistic when he is the Caped Crusader; his ‘Bruce Wayne’ persona is that of a billionaire playboy who is a bit of a prick. And in the Christopher Nolan films, the board of directors that runs Wayne Enterprises is your usual bunch of villains in suits. The other example I can think of is Ayn Rand’s ‘Atlas Shrugged’, and do you know what that book and its movie adaptation are about? Successful businessmen becoming so disgruntled with being portrayed as villains by society that they go on strike.

So, given how often successful businessmen are portrayed as bad guys, why don’t they ever stage a walkout and remind us all of how much we rely on the work they do, just as their fictional counterparts in Rand’s opus did?

I think the reason is this: it just wouldn’t work out the way it did in ‘Atlas Shrugged’. In that story, society soon starts falling apart. When workers low down in the corporate hierarchy stage a walkout, the effects are indeed most often immediate and near-catastrophic. Everything grinds to a halt, everyday life is hopelessly disrupted, and we are reminded that such people provide vital services we can scarcely do without. I would suggest that if the executive classes were to stage a walkout, life would not grind to a halt, at least not for quite some time. On the contrary, most people would not even notice anything amiss.

Now, you might counter that this is mere speculation with nothing to back it up. However, I believe there are a couple of examples that indicate that what I say is true.

The first example comes from Ireland between 1966 and 1976. During that period, Ireland experienced three bank strikes that shut the banks down for a total of twelve months. While they were closed, no cheques could be cashed, no banking transactions could be carried out, and the Irish lost access to well over 80% of the money supply.

You would have thought this would have spelled utter disaster for Ireland. After all, banking executives are among the top earners (paid around £5 million a year, plus endless bonuses) and we’re always being told of the utterly vital function the banking and financial sectors play in the economy. Surely, then, Ireland was brought to her knees very soon after the banks closed their doors and withdrew their services?

Actually, no. Instead, the Irish just carried on doing business without the banks. They understood that, since the banks were closed, there was nothing to stop people writing a cheque and using it like cash. Once official cheques were used up, people used stationery from shops as cheques, written in denominations of fives, tens and twenties. And it was not just individuals who operated this mutual credit system; businesses also got in on the act. Large employers like Guinness issued paycheques not in the usual full-salary amount but in various smaller denominations, precisely so they could be used as a medium of exchange as though they were cash.

All this was possible because, at the time, Ireland had a small population of three million inhabitants. In most communities, people had a high degree of personal contact with other individuals, and where knowledge of somebody was lacking, local shops and pubs had owners who knew their clientele very well and could vouch for a person’s creditworthiness.

According to economics professor Antoin E. Murphy, author of ‘Money in an Economy without Banks’, “The Irish created an unregulated, totally anarchistic community currency matrix…there was nobody in charge and people took the checks they liked and didn’t take the checks they didn’t like….And, it worked! As soon as the banks opened again, you’re back to fear and deprivation and scarcity. But until that point it had been a wonderful time”.

Compare that with New York’s refuse collectors, who went on strike around the time of the Irish bank closures: just ten days later, the city was brought to her knees. I don’t think anyone would have described that situation as ‘a wonderful time’. And unlike the millions paid to city bankers, refuse workers get around £12,000 a year.

Another example suggesting that executives wouldn’t be missed for quite some time were they to disappear is the company Uber, which saw not only the resignation of its founder, Travis Kalanick, but also that of a whole raft of other top executives, so that, according to a 2017 article in MarketWatch, it “is currently operating without a CEO, chief operating officer, chief financial officer, or chief marketing officer”. Did the company collapse without the aid of these essential people? No, it carried on just fine without them.

Now this is intriguing. Why is it that when low-paid staff near the bottom of the corporate hierarchy go on strike we feel the pain almost immediately, yet on the rare occasions when highly-rewarded executives don’t show up for work, nobody cares because nothing much changes?

I think it all hinges on what these people actually do. And what do they do? It’s hard to say, because any role you can think of that might be of use to a company turns out to be a job description for somebody lower down the hierarchy. Do they make anything, these executives? No, the workers in manufacturing do that. Do they manage anything? No, managers do that. Are they responsible for sales? No, that’s what salespeople are for. And so on. Now, I’m not suggesting the CEO does literally nothing, but it stands to reason that when you have delegated responsibility for just about everything to your subordinates, it’s going to harm the company much more if the subordinates don’t show up than if you were to disappear.

And that’s just counting the official jobs subordinates have. What about unofficial ones? Take personal assistants. If you have ever watched The Apprentice you know the sort of employee I am talking about: the woman or man at the desk who answers the phone and says ‘Lord Sugar/Mr Trump will see you now’. According to David Graeber, secretarial work like answering the phone, doing the filing and taking dictation is not all PAs do: “in fact, they often ended up doing 80 percent to 90 percent of their bosses’ jobs, and sometimes, 100 percent…It would be fascinating—though probably impossible—to write a history of books, designs, plans, and documents attributed to famous men that were actually written by their secretaries”.

So businesses seem not to be negatively affected when executives don’t show up for work. But when they are present, is their work of value to society? Not according to studies into negative externalities (in other words, the social costs of doing business). Let’s take the example of the advertising executive mentioned earlier. As you may recall, advertising executives bring home a yearly salary of around £500,000. But the studies reckon that around £11.50 of social value is destroyed per £1 they are paid: at that rate, a £500,000 salary wipes out something like £5.75 million of social value each year. Contrast this with a recycling worker, who brings home a yearly income of around £12,500 and creates £12 in social value for every £1 they are paid, or roughly £150,000 a year.

This, then, is why executives don’t strike. Far from reminding us what a valuable service they provide, a walkout would instead shine a light on how businesses can function perfectly well without them, at least for much longer than they could function if their much lower-paid subordinates were to stop work. For people who are a credit to society, creating more social value for every pound they are paid, strike action can be an effective way of emphasising the value their work generates. But that can hardly be the case when your work causes negative externalities that cost society more than it gains from your existence. In that case, a strike can only shine a light on the fact that you are not all that necessary.

REFERENCES

‘Bullshit Jobs: A Theory’ by David Graeber

‘Rethinking Money’ by Bernard Lietaer and Jacqui Dunne

‘Money in an Economy Without Banks’ by Antoin E. Murphy

‘MarketWatch’


What Videogames Teach Us About Work

Videogames have been featuring in the news recently. BBC Radio 4 is running a half-hour programme about Fortnite, and in an article written for the i newspaper, Will Tanner reported that a Universal Basic Income experiment was ended because “ministers refused to extend its funding amidst concern that young teenagers would stay at home and play computer games instead of looking for work”.

That argument had a tone that is sadly familiar, depicting videogaming as an addictive evil that distracts its victims from what they ought to be doing. But I think it would be more accurate to say that gamers have already found meaningful work and are reluctant to forsake it and submit to less rewarding labour instead.

This way of looking at it goes largely unrecognised because we are not taught to equate videogaming with work. Instead, you ‘play’ a videogame and we are raised to believe that play is childish, a distraction, mere fun. Play, we are encouraged to believe, is the opposite of work.

But it really isn’t. One only has to look at the play other animals engage in to see there is a serious side to it. It’s a way of honing skills that will become essential in later life.

Similarly, in videogaming we find many activities that hone skills important in the digital age we live in. Authors Byron Reeves and J. Leighton Read list over a hundred such activities, including:

“Getting information: Observing, receiving and otherwise obtaining information from all relevant sources.

Identifying information by categorising, estimating, recognising differences or similarities and detecting changes in circumstances and events.

Estimating sizes, distances and quantities or determining time, cost, resources, or materials needed to perform a work activity.

Thinking creatively: developing, designing or creating new applications, ideas, relationships, systems or products, including artistic contributions”.

Also, in an article written for ‘Wired’ (“You Play World of Warcraft? You’re Hired!”), John Seely Brown and Douglas Thomas explain how “the process of becoming an effective guildmaster amounts to a total-immersion course in leadership…to run a large one, a guild master must be adept at many skills: attracting, evaluating and recruiting new members; creating apprenticeship programs; executing group strategy…these conditions provide real-world training a manager can apply directly in the workplace”.

Far from being a distraction from work, videogames are, along with jobs, one of modern life’s two main work providers. Instead of lending support to the idea that people don’t want to work, videogames demonstrate how eager we are to engage in productive activity, to reach for goals, to solve problems and to take part in collaborative projects.

It does, however, raise a question: how come one work provider is able to draw upon willing and eager volunteers, while the other (jobs) mostly creates a feeling that work is a necessary evil you wouldn’t do if you had a choice? And, yes, that is how a great many people feel, as revealed by polls suggesting that as many as ninety percent of people hate their jobs.

Fundamentally, I think it all has to do with the direction in which money flows, and how that affects the design of work in videogames and jobs.

What do I mean by the direction in which money flows? Quite simply, I mean that if you have a job, then, assuming you are not an unpaid intern, a company will be paying you to work. This means that you are both an investment and a cost. On the other hand, when it comes to videogames, you pay a company to work, since you have to first purchase the game (and even if it is free-to-play like Fortnite, the company will have some means of extracting money from you). This means that you represent almost all profit, and only negligible cost.

Because videogame publishers want as many people to spend money on their games as possible, it obviously makes sense if working in a gaming context is as enjoyable and rewarding as it can be. When it comes to making work engaging, productive activity should provide opportunity to pursue mastery; it should offer autonomy, flexibility, judgement and creativity that is firmly in the hands of the individual doing the actual work.

The best videogames are great at providing all these conditions. Autonomy and flexibility are found in games where you don’t have to tackle challenges in a strictly linear fashion but can forge your own path instead. For example, in ‘Batman: Arkham Knight’ you, as the Caped Crusader, are free to roam Gotham City, swooping down to fight crime as and when you find it. If you hear an alarm ringing, you can locate its source and do a sub-mission involving a bank robbery. If you see smoke you can attempt to arrest Firefly. Exactly how you get to the game’s finale is entirely up to you.

Many games offer creativity, providing opportunities to customise the look of your character or the items you have acquired. Some come with comprehensive editing tools that offer even more scope for creative expression, such as ‘LittleBigPlanet’, which goes as far as enabling players to create whole new games. And since their very inception, videogames have given us the chance to exercise our judgement and gain mastery, as we make the snap decisions required to advance up the high-score charts, helped by well-crafted feedback systems that inform us when we are doing well and when we should try alternative strategies.

Now, it’s true that jobs may also provide the things that make work worthwhile. But the crucial difference is that, where videogames are concerned, there is never a good reason to reduce or eliminate such qualities: doing so would only make for a bad game that nobody would choose to play. There is, however, a reason why employers might want to reduce such qualities in a job. What unites these qualities is that they all enhance our individuality, and that is not something employers necessarily desire. The more creativity, judgement and autonomy can be reduced at the individual level, the easier it becomes to train new recruits. Indeed, in many ways it’s preferable if your employees are less like unique individuals and more like interchangeable units that can be replaced at short notice. That’s advantageous because it reduces the bargaining power of the workforce: you are less likely to complain about pay and working conditions if you know it won’t be too difficult for the boss to fire and replace you.

The result? A cheaper workforce, more value extracted from the commodity of labour-power, and more profit for those the labourers work for. You have to bear in mind that employees are quite low down in the pecking order for rewards from the labour process. Governments want their cut, banks and financial services want their cut, the company executives want their cut, and they take priority over the working classes, rather like the way the more powerful predators and scavengers get the juicy meat and leave only scraps for the rest to fight over. When it comes to the pursuit of more profit, it pays to make work as unrewarding (in a monetary sense) as you can get away with, which often results in work being designed to be as unrewarding (in the sense of not being engaging) as possible.

“But why would people choose to do work designed to be lacking the very qualities that make it engaging?”, you might be asking. The answer can be found in ‘negative motivation’. Being without a job can have serious consequences. Cut off from an income, bills cannot be paid and the threat of rough sleeping looms ever closer. On top of that there is cultural pressure to ‘get a job’, so much so that we don’t care if the job is useless or even harmful to society (‘at least s/he has a job’). This all amounts to enormous pressure to submit to employment, not really because of the gains people expect if they do have a job, but rather because of the punishment they dread if they don’t.

Videogame companies, on the other hand, cannot rely on negative motivation, for the simple reason that hardly anyone can be forced to play games (I say hardly anyone, because there are sweatshops in which people grind through MMORPGs to level up characters that can then be sold on to richer customers). This further emphasises the point that videogame makers never have an incentive to make work less rewarding, whereas such incentives do exist in the world of jobs.

CONCLUSION

Videogames, far from demonstrating our distaste for work, in fact show how willing and eager to work we are. So willing, in fact, that our desire to work supports one of the most successful industries of the modern age. Every day, millions of us spend billions all so we can engage in the work videogaming requires. If we really hated work, the first person to put a quarter into the first arcade game would have walked away in disgust at having to pay to stand there and perform repetitive manual labour. What, are you crazy?

What videogaming shows instead is that if you can take that simple mechanical operation and craft around it creativity, flexibility, autonomy, judgement and mastery, the result is work that people want to do so much they will gladly pay for it. But if, in the interest of extracting more value for money out of your workforce, you reduce or eliminate such qualities, people will hate such work and will only submit to it if circumstances force them.

That’s what jobs teach us.

REFERENCES

‘Wired’

‘Total Engagement’ by Byron Reeves and J. Leighton Read.

‘Why We Work’ by Barry Schwartz.


Let ‘Em In: The Immigration Controversy

ONE: THAT ‘RIVERS OF BLOOD’ SPEECH
During the EU Referendum, some controversial issues formed part of the debate over whether the UK should vote Leave. One such issue was immigration. The Leave campaign’s slogan, promising that the UK would ‘take back control’, was understood to refer at least in part to an inability to control borders and decide, as an autonomous country, who to let in. The campaign poster ‘Breaking Point’, which depicted large crowds supposedly flooding into the UK, summed up Leave’s position and spoke to those who felt that change had come too fast and was leaving them disempowered.
Opposing this view was the belief that the free movement of people and goods had been beneficial overall. Somehow, though, sensible debates over the ability, and desirability, of controlling immigration in a global age invariably seem to turn into arguments over extreme positions tinged with xenophobia. Control over borders and limits on migration are criticised as though they promoted a fortress mentality in which the drawbridge is raised never to be lowered again, and the UK becomes ‘little Britain’, isolated from the world and viewing all foreigners with suspicion and intolerance.
In order to understand why debates over immigration get pushed to extremes, we need to go back in history. Now, immigration has been happening for hundreds of thousands of years, ever since humanity left its place of origin (Africa) in search of new lands to settle. I don’t intend to give a complete history of this phenomenon, but instead want to focus on a period in postwar Britain that led to an infamous speech, one that would become an accusation levelled at anyone raising the issue of immigration.
IMMIGRATION AFTER WORLD WAR 2
At the end of World War 2, Britain needed extra manpower to help rebuild the country. So the 1948 British Nationality Act came into being. This act declared that all the King’s subjects had British citizenship, which meant that around 800 million people had the right to enter the UK. The act, by the way, was never given any mandate by the people; it was a political decision. But it was not particularly controversial. For one thing, transportation was much more costly back then, so not many of the 800 million actually moved. Also, the fact that the country needed rebuilding, coupled with the fact that it was growing economically, meant that the half million who did arrive were easily absorbed.
In 1962, however, the Commonwealth Immigrants Act came into being: a quota system designed to place restrictions on immigration. Just prior to its introduction, there had been a large influx of Pakistanis and Indians from the Muslim region around Kashmir. Like the Caribbean immigrants who had migrated following the British Nationality Act, these were hard-working men who brought some much-needed labour to textile mills in Bradford and the surrounding towns, and to manufacturing towns like Leicester. But there were also some notable differences. The Pakistani and Indian immigrants were far more likely to send for their families, and they were much less interested in integrating with the communities around them. As Andrew Marr explained, this group was:
“more religiously divided from the whites around them and cut off from the main form of male white working-class entertainment, the consumption of alcohol. Muslim women were kept inside the house and ancient habits of brides being chosen to cement family connections at home meant there was almost no sexual mixing, either. To many whites, the ‘Pakis’ were no less threatening than the self-confident young Caribbean men, but also more alien”.
ENOCH POWELL
A year later, in 1963, Kenya won its independence and gave its 185,000 Asians a choice between surrendering their British passports to become full Kenyan nationals, or becoming effectively foreigners requiring work permits. Many decided to emigrate, to the point where some 2,000 Asians a month were arriving in the UK by 1968. An amendment to the Commonwealth Immigrants Act imposing an annual quota was rushed through by the then Home Secretary, Jim Callaghan (Labour). A Race Relations Bill was also brought forward so that cases of discrimination in employment and housing could be tried in the courts.
Although the Asian immigrants were well educated, being mostly civil servants, doctors and businesspeople, their arrival was cause for concern among the British public, who noted once again that communities were changing without the electorate having given a mandate for it. This disquiet came to the attention of a member of the Conservative shadow cabinet, one Enoch Powell. Powell had seen how concerns over immigration had led to a 7.5 percent swing to Peter Griffiths, who had defeated Labour’s Patrick Gordon Walker in Smethwick during the 1964 election. The campaign Griffiths ran was a shockingly racist one; its slogan was ‘if you want a nigger for a neighbour, vote Labour’. Two years later, Griffiths would lose his seat, having been denounced by Prime Minister Harold Wilson as a ‘parliamentary leper’. But Powell saw some merit in Griffiths’ position, particularly the accusation that the political class was turning a blind eye to the effects of immigration.
So it was that on 20th April 1968, Powell gave a speech at Birmingham’s Midland Hotel. It opened with an anecdote about a constituent who was considering leaving the country because “in 15 or 20 years’ time the black man will have the whip hand over the white man”, and went on to claim that this was a view shared by hundreds of thousands. Did Powell not have a duty to voice the concerns of these people? “We must be mad, literally mad”, he told the small crowd, “as a nation to be permitting the annual inflow of some 50,000 dependents”. Powell warned that if this immigration wasn’t stopped, the result would be unrest and riot:
“As I look ahead, I am filled with foreboding; like the Roman, I seem to see ‘the Tiber foaming with much blood’”.
That speech has since become known as the ‘rivers of blood’ speech. It led to Powell being sacked by Conservative leader Edward Heath, who called the speech “racialist in tone and liable to exacerbate racial tensions”. It would also come to affect our ability to hold a sensible discussion about controlling immigration. As Jason Farrell and Paul Goldsmith, authors of “How to Lose a Referendum”, explained:
“he provided a bogeyman that could be used as a quick, lazy comparison to cut off as quickly as possible any debate about one of the key background policies of New Labour’s time in power. Becoming compared to Enoch Powell was what happened if you questioned the benefits of multiculturalism and immigration”.
We will investigate New Labour’s role in turning immigration into a politically-correct forbidden subject in an upcoming essay.
REFERENCES
“How to Lose a Referendum” by Jason Farrell and Paul Goldsmith
Wikipedia
LET ‘EM IN: THE IMMIGRATION CONTROVERSY
NEW LABOUR 
In the 1960s, responding to a perceived public dissatisfaction over immigration, Enoch Powell delivered his infamous ‘rivers of blood’ speech, and in so doing created “a bogeyman that could be used as a quick, lazy comparison to cut off” any debate over multiculturalism or immigration. In the same decade, future politicians were children growing up amidst struggles for racial equality that peaked during the 60s and the following decade. Growing into adulthood, many at the top of New Labour, as well as many of its activists, had a metropolitan, culturally liberal outlook that considered immigration an inherently good thing. To this metropolitan mindset, there was little difference between wanting tight controls over immigration and being racist.
Indeed, some have made the case that New Labour deliberately encouraged immigration because they wanted to remake the country in their own liberal image. For example, Andrew Neather, a former adviser to Number 10 and the Home Office, reckoned “the policy was intended- even if this wasn’t its main purpose- to rub the Right’s nose in diversity and render their arguments out of date”. Others, though, have denied such claims. One such person was Barbara Roche, who was Labour’s immigration minister from 1999 to 2001. She attributed rising immigration levels to the fact that the previous Conservative government had not only installed a failed computer system but also made cutbacks that left just 50 officials to make asylum decisions on a backlog of 50,000 cases.
It could be argued that any government at the time would have had to respond to a rapidly changing world. In the previous essay, we saw how the British Nationality Act theoretically opened the borders to 800 million people, but the expense of travel at the time imposed a practical limit on the numbers who actually did migrate. By the time New Labour came to power, however, forces of globalisation such as lower-cost air travel and mass communication, as well as numerous conflicts in Africa and the Balkans, had led to more rapid population movements. When increasing numbers of asylum seekers arrived from the Balkans, the pressure was on to move them away from the costs and dependency of the asylum system and toward the work permit route, and there was also pressure from business sectors to increase work permits in response to a booming economy and low unemployment. Meanwhile, higher education was being internationalised at a rapid pace, which meant New Labour could finance their policy of expanding university education in the UK by encouraging foreign students into the country.
From 1997 onwards, the decisions taken by New Labour added up to around 500,000 people arriving in the UK each year. By 2010, the UK population had increased by 2.2 million migrants, a population equivalent to a city twice the size of Birmingham. It was, at the time, the largest peacetime migration in the country’s history.
As a result, many places in the country that had previously been untouched by immigration suddenly found themselves host to significant migrant communities, while at the same time many British communities saw their livelihoods disappearing overseas as the winds of globalist change swept over them. If those people thought that a Labour government with a 179 majority would speak up for the working classes the party traditionally represented, they were in for a rude awakening.
BLAIR’S SPEECH
In 2005, Tony Blair achieved a third electoral victory, but with a massively reduced majority. In the customary acceptance speech on the steps of 10 Downing Street, the Prime Minister radiated humility and insisted he had heard the worries of the rising numbers of people concerned over immigration and the forces of globalisation. But within five months, Blair gave a speech at his twelfth annual conference as party leader that dispensed with the concerned socialist act and went with full-on free market liberalism instead:
“I hear people say we have to stop and debate globalisation. You might as well debate whether autumn should follow summer … The character of this changing world is indifferent to tradition. Unforgiving of frailty. No respecter of past reputations. It has no custom and practice. It is replete with opportunities, but they only go to those swift to adapt, slow to complain, open, willing and able to change”.
In other words, capitalism was sweeping across the world, bringing opportunity but also insecurity and inequality, and the only assurance the Prime Minister could give his electorate was that nothing could be done for them and they just had to accept they were in a Darwinian market struggle for survival. Guardian journalist John Harris, upon hearing that speech, commented: “‘Swift to adapt, slow to complain, open, willing and able to change.’ And I wondered, if these were the qualities now demanded of millions of Britons, what would happen if they failed the test?”
It became increasingly obvious what would happen to such people. They would be left behind, largely unrepresented by the two major political parties. Worse still, these losers in the globalist race not only found themselves ignored and unrepresented by the political elite, they found their voices were actively repressed when they tried to focus attention on the most visible manifestation of the changes globalism and the free market had wrought: Immigration.
MRS DUFFY
Of all the anecdotes that highlight the way a portion of the British electorate were treated with contempt, there is perhaps no better example than the case of Gillian Duffy. A 65-year-old widow from Rochdale, she came across Prime Minister Gordon Brown, who was on walkabout for the 2010 election. She wasted no time in voicing her concerns, which included the national debt, the difficulty vulnerable people were having in claiming benefits, and the costs of higher education. Oh, and she also voiced concerns over immigration:
“All these Eastern Europeans what are coming in, where are they flocking from?”.
Face to face with Mrs Duffy, Gordon Brown was pleasant and persuasive enough to mend the pensioner’s faltering support for the Labour Party. She herself later said how she had been happy with the answers he gave. But when Brown entered what he thought was the privacy of his car, a wholly different side to his character surfaced. The world became privy to it because he inadvertently left his Sky News mic on, broadcasting to the world:
‘That was a disaster. Should never have put me with that woman … whose idea was that?…she’s just a sort of bigoted woman, said she used to be Labour. It’s just ridiculous.’ 
This, then, was the attitude of the political elite who held the reins of power during New Labour’s time in office. The very personification of charm in public, but totally contemptuous of even the mildest concerns over immigration in private. A whole class of politicians who had grown up amidst the 60s and 70s struggles for racial equality had come to adopt such a strong metropolitan mindset that they equated controls on immigration with racism and dismissed concerns over the movement of people as the ravings of bigots.
Mrs Duffy’s question was a reference to decisions made by the EU and Britain to open up the country to immigration from Eastern Europe. We’ll look at that next.
REFERENCES
“How to Lose a Referendum” by Jason Farrell and Paul Goldsmith
Wikipedia
LET ‘EM IN: THE IMMIGRATION CONTROVERSY
THE EXPANSION OF THE EU
In 2010, a Labour-supporting ex-councilwoman from Rochdale called Gillian Duffy confronted the then Prime Minister, Gordon Brown. She asked a number of questions, one of which- “all of these Eastern Europeans what are coming in, where are they flocking from?”- resulted in her being dismissed as a bigot once Brown thought he was out of earshot.
Anyone seeking a proper answer to Mrs Duffy’s question would have to look back to May 2004. That was when the EU was due to undergo its largest expansion yet in terms of territory, population and number of states, as ten new countries were set to join: Cyprus, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Malta, Poland, Slovakia and Slovenia- most of them former communist countries of central and Eastern Europe. The most important thing to note about these countries is that their economic output was much lower than that of the existing member states. Acceptance into the EU therefore presented a golden opportunity for the people of these countries, for it meant they would have the right to move anywhere in the EU whether there was a job offer waiting for them or not, and be entitled to the same rights and privileges as national citizens. It was also good news for business because, since those job-seekers were coming from countries whose per capita GDP was less than half the EU average, they were willing to offer cheaper labour.
It was not good news for everyone, however. For those nationals who were already at the lower end of the labour market, the arrival of an even cheaper workforce put their jobs under threat. Most of the existing member states recognised this problem, and therefore decided to implement transitional controls that delayed the new members’ access to their labour markets for up to seven years. German Chancellor Gerhard Schröder, for instance, told the German people in 2000:
“Many of you are concerned about expansion of the EU … The German government will not abandon you with your concerns … We still have 3.8 million unemployed, the capacity of the German labour market to accept more people will remain seriously limited for a long time. We need transitional arrangements with flexibility for the benefit of both the old and the new member states”.
Accordingly, Germany initially maintained transition controls like bilateral quotas on the number of immigrants and work permits. All of the big European countries decided to take up transitional controls with one exception, and that was the UK.
The reason why New Labour decided not to implement transitional controls had to do with the findings of a research team, led by Professor Christian Dustmann, that had been commissioned by the Home Office. That research suggested that only 13,000 immigrants were expected to arrive each year. The economy was booming at the time, and the Performance and Innovation Unit at No 10 had produced a 73-page report claiming that the foreign-born population in the UK contributed ten percent more to government revenue than it received in state handouts.
It could also be said that, even if the Home Office had wanted strict controls on immigration, it would have come under pressure from other departments. These included the Foreign Office, which had diplomatic reasons for being pro-immigration; the education department, which looked forward to extra revenue from foreign students; and, perhaps most important of all, the business department, which certainly wasn’t going to turn its nose up at an influx of cheap and willing labour. Finally, as we have seen in a previous essay, New Labour’s cabinet were children of the 60s and 70s who had grown up during the struggles for racial equality and become adults with a metropolitan liberal mindset that was very much pro-multiculturalism. For all those reasons, New Labour decided not to apply transitional controls.
There was, however, an important caveat to the Dustmann report’s claim that the number of immigrants coming to the UK would be 13,000 per year. The report actually said that the numbers would be a great deal higher if the other member states decided to impose transitional controls. As we have seen, that is indeed what they decided to do.
Between 2004 and 2012, 423,000 migrants came to the UK. As the noughties progressed, the effects of global conflicts and financial crises swelled the numbers even further. A combination of people fleeing conflict in the Middle East and the expansion of the EU (many members of which were suffering crippling austerity due to the financial mess that was the Euro) meant that the UK’s population increased by 2.2 million, equivalent to a city twice the size of Birmingham.
Given that they were coming from countries that were either poorer or suffering from conflict, this influx consisted of people prepared to offer much cheaper labour. The effects of this were becoming apparent, and were spoken about by people not afraid to defy a political correctness that equated any concern over uncontrolled immigration with xenophobia. People like Nigel Farage:
“By 2005, it was obvious that something quite fundamental was going on. People were saying, ‘We’re being seriously undercut here’”.
In the next essay, we’ll look at who benefits from uncontrolled immigration- and who doesn’t.
REFERENCES
“How to Lose a Referendum” by Jason Farrell and Paul Goldsmith
Wikipedia
LET ‘EM IN: THE IMMIGRATION CONTROVERSY
WINNERS AND LOSERS OF GLOBALISM
Toward the end of the 20th century and the start of the 21st, the UK was governed by a party with a decidedly globalist outlook and metropolitan ideology. There is perhaps no better explanation for why debates over controlling immigration degenerate into accusations of xenophobia. It is a vestige of a time when any such debate was pretty much a forbidden subject. In 2005, when Conservative leader Michael Howard said “it’s not racist to impose limits on immigration”, he was met with outrage from New Labour. Now, more than ten years later, it is possible to at least suggest that the uncontrolled movement of people is not always and everywhere a good thing without being angrily shouted down. But the suspicion that you might be xenophobic lingers on. Invariably, suggestions that immigration needs to be controlled are criticised as though they were calls to stop it altogether and become isolationist. Whoever suggests there is any problem with mass migration can expect to be lectured on the many genuine benefits the free movement of people has delivered.
But one can acknowledge the benefits immigrants bring while recognising that mass migration has not been good for everyone. This was highlighted by a chart created by economist Branko Milanovic and his colleague Christoph Lakner. Known as the ‘elephant curve’, it lines up the people of the world in order of income and shows how their real incomes changed between 1988 and 2008. One group- the 77th to 85th percentile- experienced an inflation-adjusted fall in income over that period. These people are the lower-skilled working classes of developed countries like the UK. Something like 80 percent of the world has an income lower than that of this group, so given how financially difficult life can be for the working class, you can appreciate just how poor most of the world is, and just how intense the competition for a better life could become absent any control over the movement of people.
To illustrate why the working classes in developed nations are made worse off by uncontrolled immigration, let’s turn to a simplified example. Imagine workers in a factory. The production line does not have sufficient numbers of employees to run properly. Such a situation is not good either for the business itself or the employees. If it were to continue, the plant would close and the employees would lose their jobs.
Now, let’s suppose the plant has to recruit from overseas in order to fill the labour shortage. From the perspective of the employees, what would be the ideal immigration system? It would be a highly controlled system that only let in as many qualified people as are required to make up the shortage.
The owners of the plant might see things rather differently, however. For them, the ideal is to have no control over the movement of people and to tempt as many people into the country as possible. Now, given that these people have no vacancies to fill, what use are they, economically speaking? The answer is that they put pressure on the existing workers, who feel they can’t raise issues about current standards or even falling standards, for fear of being replaced. “There are plenty who would agree to these conditions”, we can imagine any dissenters being told. This pressure to drive down both wages and investment to improve or maintain working conditions is good for the owners, since they get to appropriate more of the wealth that their workforce produces. Surprise, surprise, the top 1 percent on the elephant curve have a line that’s almost vertical.
In case this sounds like a mere hypothetical, let’s look at some real examples. In 2006, Southampton’s Labour MP, John Denham, noted that the daily rate of a builder in the city had fallen by 50 percent in 18 months. Or consider the findings of Guardian journalist John Harris, whose series ‘Anywhere but Westminster’ included a Peterborough agency advertising rates and working conditions that only migrants would accept.
But perhaps the most striking example would be the MD of a recruitment firm who admitted to the authors of ‘How To Lose a Referendum’ that, were it not for uncontrolled immigration, pay and working conditions might have to improve. All these examples point to the same thing: an increase in the supply of labour, irrespective of any increase in demand, results in a reduction of bargaining power, which the monied take advantage of to appropriate even more wealth from those who actually do the work.
It should be noted that such outcomes are not usually entirely due to mass immigration. In April 2017, the Economist published a study of those areas of the UK that had seen the sharpest increase in new migrants over the ten-year period from 2005 to 2015. In those areas- dubbed ‘migrant land’ by the magazine- real wages fell by a tenth, faster than the national average, and there was also a decline in health and educational services. But there were other factors impacting these areas too. They suffered cuts to public services following the Coalition’s move to austerity in the aftermath of the Great Recession, and they were disproportionately affected by the decline of the manufacturing sector.
Some have argued that these other factors are the real issue and that pointing the finger of blame at migrants is just scapegoating. Consider the words of Justin Schlosberg, media lecturer at Birkbeck, University of London:
“The working-class people have had an acute sense that their interests were not being represented by the banks and Westminster. What the right-wing press seeks to do is –rather than identify the true source of the concerns, which is inequality, concentrated wealth and power and the rise of huge multinational corporations that dominate the state. All of that is an abstract, complex story to tell. The story they told which more suits their interests is: the problem is immigrants. The problem is the person who lives down your street who works in your factory, who looks different and has different customs. It plays on those instinctive fears”.
Now, in some ways you could say he makes a fair point. Immigrants are not bad people, they are just ordinary folk doing what they can to improve their circumstances. But the fact is that mass immigration is part of the ‘abstract, complex story’ that is globalism.
So what is globalism, anyway? Is it the brotherhood of humanity, people of all races, creeds and religions holding hands, united by common bonds? If that is indeed what it is, then it would surely be welcomed by the vast majority of us. After all, the latest estimates are that only 7 percent of the UK population are racist.
But there is another way to look at globalism, and that is to see it as the commodification of the world, its resources and its people. It’s a global network of banking and financial systems that seems always ready to blow up and spread systemic risk, the fallout landing on the working classes while the one percenters get government bailouts. It’s a global transport and communication system that enables corporations to move manufacturing and other sectors to wherever rules and regulations are more relaxed and people more exploitable. Most damningly of all, it is the commodification of people, sometimes to the point where they are reduced to the status of disposable commodities. The tragic reality of that was vividly illustrated by the sight of greedy traffickers dangerously overfilling barely seaworthy boats with people desperate to escape dire situations, lured by false promises of some other place where opportunities are boundless and nobody slips through the social safety net. 
What really awaits these people is sometimes not just low-paid work but actual slavery. Incredibly, when their status as slaves is pointed out, such people often deny that is what they are, because the conditions they came from were so bad their current situation feels like a step up. While one has to feel for people as downtrodden as that, one must also acknowledge the negative effect this has on the working classes of developed nations. From the point of view of this group, the whole point of a job is to earn a living. To achieve that aim you need wages high enough to alleviate financial anxiety, a sense of stability and security in your working life, and sufficient free time with which to develop a more well-rounded existence. All that is hard to achieve when you are competing for jobs with people who consider slavery an improvement, and when jobs are disappearing to places where pressure from unions and environmental groups is too weak or nonexistent to place limits on the exploitation of people or the natural world.
At the same time the globalist commodification of everything suits the wealthy elite. Selling arms to warring nations, offering huge loans to corrupt leaders and supporting coups to overthrow more egalitarian governments and throwing regions into chaos so precious resources can be extracted on the cheap amidst the anarchy are all money-making opportunities. And the consequences offer money-making opportunities too, as people flee from countries ravaged by war and economic weapons of mass destruction with so little bargaining power their numbers put serious downward pressure on wages and working conditions (more profit for the owners) and also increases competition for housing (which forces up land prices, thereby increasing the paper profits of the owner classes).
One has to wonder how things would have turned out if globalism had continued amidst a complete intolerance for debating the issue of uncontrolled immigration. For decades, the working classes of the UK were underrepresented by the political establishment. New Labour’s mindset was a mixture of metropolitanism and free-market ideology that imposed a Darwinian struggle for survival on people’s lives, followed by a Coalition that responded to the near-collapse of the world financial system- after deregulation led to insane risk-taking- with austerity, essentially making the working classes pay for the excess and greed at the top. Meanwhile, with even the mildest objections to uncontrolled immigration shouted down as xenophobia, only the extremists were prepared to speak up. People like Nick Griffin of the BNP, or Marine Le Pen of the Front National. More recently, Chancellor Merkel’s decision to open Germany’s borders to a million mainly Middle-Eastern migrants is seen by some as a reason why the far-right Alternative für Deutschland won 50 percent of the vote in more depressed areas. That, as in other cases, was the result of simmering dissatisfaction over what globalism had wrought and what intolerant liberalism had deemed inadmissible for reasoned debate.
To borrow the words of the Leave campaign’s poster, the rise of extremist groups is a sign that people’s tolerance for what globalism has done is at breaking point.
REFERENCES
“How to Lose a Referendum” by Jason Farrell and Paul Goldsmith
Wikipedia


Whacky Sci-fi Energy Proposals

WHACKY SCI-FI ENERGY PROPOSALS
Any mildly observant person is bound to notice that energy plays an important role in everyday life. Look around, and it is not too difficult to find various attempts at harnessing it. Plants extract energy from the sun through photosynthesis, animals extract energy by digesting organic material, and any industrial landscape is bound to have vehicles burning fossil fuels or the odd photovoltaic cell or wind turbine making use of renewable energies.
But, despite life having sought ways of extracting energy from the environment for billions of years, it is still not at all efficient at doing so, at least not when its various attempts are compared to theoretical limits. If you want to know how much potential energy is available to be tapped, you must turn to what is probably the most famous equation in the world: E=mc^2. This equation is basically a conversion factor that calculates how much energy is contained in a given amount of mass. If you take something like a candy bar and multiply its mass by the speed of light squared, that tells you precisely how much energy the bar contains. The speed of light squared is a huge number (written in mph it is 448,900,000,000,000,000) so even a tiny amount of mass can unleash an enormous amount of energy. An atomic bomb’s nuclear explosion, for example, is the result of just a small amount of uranium being converted to energy.
If you were to eat that candy bar, you would extract a mere ten-billionth of its mc^2 energy. To put it another way, the process of digestion is only 0.00000001% efficient. Burning fossil fuels like coal and gasoline fares a bit better, extracting 0.00000003% and 0.00000005% of the energy contained in such fuels respectively. How about nuclear fission, which, as we saw earlier, is capable of unleashing tremendous amounts of energy? Well, it certainly does a lot better than digestion or fossil fuel burning, but at an efficiency rating of 0.08%, it is still far from ideal.
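These figures are easy to check with a little back-of-the-envelope arithmetic. The sketch below assumes a 100 g candy bar (the mass is my assumption, not a figure from the text) and applies the efficiencies quoted above:

```python
# Mass-energy of a candy bar via E = m * c^2, and the amount actually
# extracted at the efficiencies quoted in the text.

C = 2.998e8  # speed of light in m/s

def mass_energy(mass_kg):
    """Total energy contained in a mass, in joules (E = m * c^2)."""
    return mass_kg * C**2

bar = mass_energy(0.1)  # assumed 100 g candy bar
print(f"Total mc^2 energy: {bar:.2e} J")  # roughly 9e15 J

# Efficiencies quoted in the text, given as percentages, converted to fractions
efficiencies = {
    "digestion": 0.00000001 / 100,
    "burning coal": 0.00000003 / 100,
    "burning gasoline": 0.00000005 / 100,
    "nuclear fission": 0.08 / 100,
}
for process, eff in efficiencies.items():
    print(f"{process}: extracts {bar * eff:.2e} J")
```

As a sanity check, digestion at that efficiency yields on the order of 9×10^5 joules, i.e. roughly 200 food calories, which is about what a real candy bar wrapper says.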
The fact that we are mostly failing to put this energy to use can be considered good news, in that any energy shortage we may experience has little to do with it being a scarce resource, and is instead due to our inability to access it. Unlike true scarcity (which we can’t do much about) an inability to access what’s available is a problem that can be addressed with appropriate technology. For example, by 2030 the world will need around thirty trillion watts, an energy need that could be met if we succeed in capturing three ten thousandths of the sun’s energy that hits the earth.
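The "three ten-thousandths" figure can be roughly reproduced from standard values: the solar constant (about 1361 W/m^2) times Earth's cross-sectional area gives the sunlight the planet intercepts, and dividing the projected 30 TW demand by it gives the fraction we would need to capture. This is my own sketch, not a calculation from the source:

```python
import math

SOLAR_CONSTANT = 1361.0   # W/m^2 arriving at Earth's distance (standard value)
EARTH_RADIUS = 6.371e6    # metres
DEMAND_2030 = 30e12       # watts: the ~30 trillion watt figure from the text

# Sunlight intercepted by Earth's cross-sectional disc
intercepted = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2
fraction = DEMAND_2030 / intercepted

print(f"Solar power hitting Earth: {intercepted:.2e} W")
print(f"Fraction needed for 30 TW: {fraction:.1e}")
```

The result comes out at a few parts in ten thousand, the same order of magnitude as the figure quoted above (the exact value depends on whether one counts atmospheric losses).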
That would be a most welcome outcome in terms of securing our future, but even this achievement would not fare particularly well in terms of putting all available energy to good use. After all, most of the Sun’s output does not strike the earth but is instead dumped into empty space. Some radical thinkers have proposed ambitious schemes for harvesting this wasted energy.
THE DYSON SPHERE
One such proposal was put forward in 1960 by Freeman Dyson. His idea was to deconstruct Jupiter in order to form a spherical shell around the Sun. Doing so would enable our descendants to capture a trillion times more energy than we are capable of harvesting today. It would also provide 100 billion times more living space and, with the sun at the centre and you walking around the inside of the sphere, there would be permanent daylight everywhere on your habitat. However, with gravity ten thousand times weaker than what we are used to, travelling all the way around such a sphere without falling off would be pretty much impossible. In fact, it is probably fair to say that life in general (or, at least, life as we know it) would be infeasibly difficult at best and impossible at worst if we had to live on the inner or outer surface of the Dyson sphere itself.
A way around that problem may be to construct habitats like the ones proposed by an American physicist called Gerard K O’Neill within the Dyson sphere. Known as O’Neill cylinders, they could provide habitats more like those we are familiar with if they orbit the sun in such a way as to always be pointing straight at it. Centrifugal force caused by their rotation could provide artificial gravity, and we could even have a 24 hour day-night cycle if there were mirrors to direct the sunlight in an appropriate way. 
Obviously, constructing a Dyson sphere would be a feat of engineering way beyond anything remotely achievable today. But that didn’t stop its originator, Freeman Dyson, from considering them a realistic prospect, given sufficient time. “One should expect that, within a few thousand years of its entering the stage of industrial development, any intelligent species should be found occupying an artificial biosphere which completely surrounds its parent star”.
Amazingly, even this vastly ambitious project would not be all that successful at capturing the energy contained within the sun’s mass. This is because the process of nuclear fusion going on in a star like our Sun converts only about a tenth of its hydrogen fuel before its life as a normal star is over and it expands into a red giant. So, even if we were to enclose the sun in a perfect Dyson sphere, we could not hope to put more than 0.08% of the sun’s potential energy (i.e. the energy contained in its mass) to good use.
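That 0.08% figure follows from two standard approximations: hydrogen-to-helium fusion releases about 0.7% of the fuel's mass as energy, and a sun-like star fuses only about a tenth of its hydrogen before leaving the main sequence. Multiplying the two gives the usable fraction:

```python
# Rough check of the ~0.08% figure for a Dyson sphere around a sun-like star.
fusion_efficiency = 0.007  # mass fraction released by H -> He fusion (~0.7%)
hydrogen_burned = 0.10     # fraction of the star's hydrogen actually fused

usable = fusion_efficiency * hydrogen_burned
print(f"Usable fraction of the Sun's mass-energy: {usable:.2%}")  # 0.07%
```

This lands at 0.07%, agreeing with the 0.08% quoted in the text to within rounding of the two input approximations.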
SPINNING BLACK HOLES
For those descendants looking for more power than even a Dyson sphere can provide, there is an idea put forward by the British physicist Roger Penrose. Many black holes are spinning, and this rotational energy could potentially be put to good use. Like all black holes, the spinning variety has a singularity (the remnant of a star so dense it has crushed itself to an infinitesimal size, and about which we know very little because it exists in realms of nature beyond anything our current models can handle) and an event horizon, a region of space surrounding the singularity from within which nothing can escape its gravitational pull. A spinning black hole also has another feature known as an ‘ergosphere’, where, according to Max Tegmark, “the spinning black hole drags space along with it so fast that it’s impossible for a particle to sit still and not get dragged along”.
What this means is that any object tossed into the ergosphere will pick up speed as it rotates around the black hole. Normally, such objects will inevitably cross the event horizon and be swallowed by the black hole. But Roger Penrose worked out that if you could launch an object at the right angle and have it split into two pieces, then only one piece would get eaten while the other would escape the black hole. More importantly, it would escape with more energy than you started with. This process could be repeated as many times as it takes to convert all of the black hole’s rotational energy into energy that can be put to work for you. Assuming the black hole was spinning as fast as possible (which would mean its event horizon was rotating at close to the speed of light) you could convert 29% of its mass into energy using this method. That would be equivalent to converting 800,000 suns with 100 percent efficiency, or having 1,000 million Dyson spheres working for billions of years.
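The 29% ceiling is not arbitrary: for a maximally spinning (extremal) black hole, the standard result is that the Penrose process can extract at most 1 - 1/sqrt(2) of the hole's mass-energy, which works out as follows:

```python
import math

# Maximum fraction of a maximally spinning black hole's mass-energy that
# the Penrose process can extract: 1 - 1/sqrt(2), the standard limit for
# an extremal Kerr black hole.
max_fraction = 1 - 1 / math.sqrt(2)
print(f"Maximum extractable fraction: {max_fraction:.1%}")
```

This evaluates to about 29.3%, which is where the text's "29%" comes from.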
CONCLUSION
As I said before, Dyson spheres and spinning black holes are proposals way beyond anything remotely plausible today. It might be tempting, therefore, to dismiss such ideas as crazy science fiction. But I think there is a serious point to be made amid all this whacky sci-fi stuff, which is that we are extremely far from putting available energy to good use. Next time you hear about an energy crisis, bear in mind that it really has nothing to do with energy being a scarce commodity. No, it is all down to our technical inability to capture the energy that is available. These crazy sci-fi proposals are therefore something to aspire to, and even if our actual technologies capture only one percent of one percent of the energy that something like a Dyson sphere could harvest, that would still provide far more energy than we are ever likely to need. And besides, if you’re going to have ambitions, they might as well be big!
REFERENCES
Life 3.0 by Max Tegmark
The Singularity Is Near by Ray Kurzweil.


How Religion Caused The Great Recession cont.

PART FOUR: SMILING THROUGH THE DETERIORATION OF WORK IN CORPORATE AMERICA
I ended part three with a hint that something dark and troubling occurred within corporate America at the end of the 20th century. The story of that change was told in my book ‘How Jobs Destroyed Work’, which I will quote from now.
“During the war, the USA achieved full employment for the first time since the 1920s. When the war was over, there was a lot of concern about the possibility of a postwar recession, which the government sought to avoid through various acts and initiatives. The acts included the ‘Employment Act’ of 1946, which “committed the federal government to maintain maximum employment and with it a high level of aggregate demand”. The initiatives included the GI bill, an education initiative that helped upgrade the workforce, thereby providing a large pool of white-collar workers for the administrative and management-type roles that corporations increasingly depended upon.
As well as anxieties about recession prompting the State to push for high employment, conditions enabled by the war played a part in other ways. For one thing, industry in America was still largely intact, unlike Europe’s. The government invested heavily in the business sector, particularly through highway construction and defence-related expenditures. Also, wartime research had helped launch an era of technological innovation, such as IBM’s development of the first general-purpose computer. Finally, wage freezes had been put in place during the war, which required employers to use fringe benefits to attract employees. This favoured the largest corporations, who could afford to offer greater benefits than their smaller rivals.
But those corporate benefit packages were still costly, even forty or fifty years ago. This might have discouraged their mass adoption, had it not been for the militant unions of the postwar period. The larger corporations reasoned that treating their employees well would strengthen emotional attachment to the company and head off the threat of socialism.
And so it came to pass that the early postwar decades enjoyed economic growth and price stability. The large corporations delivered on their promise of long-term employment prospects, meaning that anyone fortunate enough to land a job there felt secure, and expected that their own prosperity would rise along with the company’s fortunes.
But all that was to change in the 80s and 90s.
During the 80s, attitudes toward the paternalistic model changed. The 1970s ended in recession, and during this period two of America’s largest companies- Chrysler and Lockheed-survived only because of government bailouts. The new decade began with inflation approaching 15% and unemployment over 8.5%. Gold prices were soaring, a trend often associated with investor pessimism. Indeed, there was a general mood of unease regarding the US’s economic prospects, as the stock market went into its worst slump since the 1930s.
Amidst all this financial trouble, people began looking at those large corporations with their many benefits packages and saw not businesses to be inspired by but rather dinosaurs to be blamed for worsening conditions. Increasingly, people saw the large corporations as bloated and inefficient, handicapped by too much bureaucracy and a workforce with an over-inflated sense of entitlement. It seemed as though America was increasingly unable to compete against more nimble competitors, most notably from Japan and West Germany. The nation was importing 25% of its steel and 53% of its numerically controlled machine tools by 1981.
What really helped the rise of the lean-and-mean model in the 80s and 90s was a series of federal and state regulatory changes, coupled with innovations from Wall Street. The regulatory changes brought about an environment in which corporate mergers and takeovers could flourish. For example, there had been laws protecting local companies from out-of-state suitors, but these were declared unconstitutional by the Supreme Court. Also, President Reagan appointed an attorney who had previously defended large corporations against antitrust suits to head the Department of Justice’s antitrust division. This all but guaranteed there would be no interference from the federal government with the growing mergers and acquisitions movement.
Meanwhile, Michael Milken, of investment house Drexel Burnham, created high-yield debt instruments known as ‘junk bonds’, which allowed for much riskier and aggressive corporate raids. These, along with the state and federal regulatory changes mentioned earlier, triggered an era of hostile takeovers, leveraged buyouts and corporate bustups”.
What these changes-particularly the growth in the 80s of finance capitalism-did was to transform the corporation from its traditional image as a task-based entity engaged in some collective activity, defined not just in terms of profit but by an overall contribution to society, into one in which shareholders’ profits were the be-all and end-all. Everything else, including in some cases pride in the product (consider the internal email sent by an S&P employee which read “let’s hope we are all wealthy and retired by the time this house of cards falters”), was to be disregarded. All the focus was on the short-term raising of stock prices.
This marked change in attitudes was reflected in comments made by the Business Roundtable in the 1990s. At the start of the decade, the Business Roundtable said of corporate responsibility that companies “are chartered to serve both their shareholders and society as a whole”. But seven years later the message had changed to “the notion that the board must somehow balance the interests of other stakeholders fundamentally misconstrues the role of directors”. In other words, a corporation looks after its shareholders; the interests of other stakeholders-employees, customers, and society in general-are of far less importance.
Certainly the employee of 80s and 90s corporate America would have recognised their lack of importance in what was an increasingly insecure environment. By that time, finance capitalism had transformed corporations from paternal entities rewarding loyal workers with security and regular wages into aggregations of financial assets that existed only to be merged, broken apart or destroyed, according to the whims of executives chasing short-term shareholder profit.
THE OFFICE PANOPTICON
Some observers, among them Noam Chomsky and Jacque Fresco, have noted how corporations tend to have the same organizational structure as fascist dictatorships. In other words, there is a strict hierarchy that demands tight control at the top and obedience at every level. Granted, there may be a measure of give-and-take, but the line of authority is usually clear. Others, perhaps most notably Michel Foucault, have argued that prisons and factories emerged at more or less the same time, and that their operators consciously borrowed each other’s control techniques.
For example, in the late 18th century, social theorist Jeremy Bentham designed the ‘panopticon’, from the Greek for ‘all’ (pan) and ‘seeing’ (opticon): a prison designed in such a way that all inmates could be kept under surveillance by a single watchman. True, it was impossible for a single observer to watch every inmate at once, but the panopticon was designed so that no inmate could know whether he was being watched or not. The inmates knew only that they might, at any moment, be under surveillance. Bentham’s belief was that, under such conditions, inmates would effectively police their own behaviour.
So what became of the panopticon? They are everywhere, only now we tend to call them ‘offices’. Many a white-collar employee (below the executive level, at least) spends their in-office hours in a cubicle, most likely of a one-size-fits-all, institutional-gray design that can be set up, reconfigured, and moved at the whim of those higher up the line of authority: a constant reminder of the employee’s own lack of security and importance to the corporation. Moreover, cubicles are (in the words of one employee) “mechanisms of constant surveillance”, lacking doors and usually arranged so that managers can spy on whomever they like at any time. Employees are usually made to work facing a wall, and so cannot know whether they are being watched unless they look over their shoulder. The message such an environment sends is clear: we can see what you are-or are not-doing, so work harder or we’ll replace you. The employee found him or herself in a harsh working environment that did everything it could to underscore their vulnerability.
MOTIVATIONAL GURUS
As conditions for the average employee deteriorated and prosperity for those at the executive level soared to dizzying heights, America in the 80s and 90s had virtually returned to the highly polarised conditions of the 1920s. David Leonhardt of the New York Times reckoned, “it’s as if every household in that bottom 80 percent is writing a check for $7000 every year and sending it to the top 1 percent”.
But whereas, before the Great Depression, there had been campaigners speaking out against the excesses of the wealthy and the oppression imposed on the poor, the prosperity gospel that had begun in the 19th century and which was amplified by megachurches and TV evangelists responding to market signals from late 20th century consumption culture, had a markedly different message: There was nothing amiss with a deeply unequal society. Anyone at all stood to become as wealthy as the top 1 percent. Just remain resolutely optimistic and all will be well.
Within this highly unstable environment, the positive-thinking ideology that had begun with 19th century New Thought and been inflated by corporate-style churches found conditions to which it was well suited. All kinds of life coaches and motivational gurus emerged, spreading the gospel of prosperity and applying management-speak to disguise worsening conditions. For example, following the Chase-Chemical merger, employees who lost their jobs were not ‘laid off’; they were instead referred to as ‘saves’. Other corporations going through mass layoffs in pursuit of short-term boosts to shareholder value referred to those selected for redundancy as ‘nonselected employees’.
Over time, the message the life coaches and motivational gurus delivered became one in which everyone was supposed to regard the deterioration of work and its rewards in corporate America as a positive thing overall. Corporations paid substantial sums of money to the motivational industry, whose members told employees that to be laid off was an opportunity for self-development, and that the volatile state of the jobs market was a welcome breeding ground for producing winners.
And, unlike with the megachurches (which one could leave at any time), the books and seminars consumed at corporate events were often mandatory for any employee who wanted to keep his or her job. Workers were required to read books like Mike Hernacki’s ‘The Ultimate Secret to Getting Everything You Want’ or ‘Secrets Of The Millionaire Mind’ by T. Harv Eker, which encouraged practitioners of positive thinking to place their hands on their hearts and say out loud, “I love rich people! And I’m going to be one of those rich people too!”.
Along with being made to conform to all the rules and worksheets of the self-help literature, employees in corporate America found themselves having to attend Native American healing circles, Buddhist seminars, fire walking and other ritualistic practices, all in the name of maintaining a fever pitch of optimism amid worsening conditions. Such was the level of religious-like devotion to the gospel of prosperity and positive thinking that a 1996 business self-help book reckoned, “if you want to find a genuine mystic, you are more likely to find one in a boardroom than in a monastery or cathedral”.
In part five we will see how CEOs were transformed into cult-like leaders during the tumultuous 80s and 90s.
REFERENCES:
‘Financial Fiasco’ by Johan Norberg
‘Smile Or Die’ by Barbara Ehrenreich
‘White-Collar Sweatshop’ by Jill Andresky Fraser
‘How Jobs Destroyed Work’ by Extropia DaSilva
HOW RELIGION CAUSED THE GREAT RECESSION
PART FIVE: “YOUR MEETING HAS BEEN RESCHEDULED, GOD”.
In part three of this series, we saw how the consumer culture of the late 20th century inspired churches to become more secular and corporate in their appearance, and how, as they grew into gigantic organisations, pastors were obliged to become more like CEOs in how they dressed and behaved. At the same time, throughout the late 20th/early 21st century, actual CEOs were becoming more like cult leaders. The transformation of the corporate world during the 80s and 90s (discussed in part four) had much to do with this.
CH,CH,CH CHANGES
Once upon a time, the CEO of a large corporation would have been the epitome of the cool, rational planner. He or she would have been trained in ‘management science’ and probably worked his or her way up through the ranks of the organisation so that, by the time they reached the top, the CEO had mastered every aspect of the business. Once there at the apex of the corporate pyramid, this highly trained, rational specialist would have acted on the central belief of the college-educated middle class, with its mandate of progress for all and not just the few.
But as the corporate world became more volatile toward the end of the 20th century, questions arose over whether such rationality and level-headedness was best for delivering the new goal of short-term boosts to shareholder profits. In 1999, Businessweek captured the changing mood when it asked, “who has time for decision trees and five-year plans any more? Unlike the marketplace of twenty years ago, today’s information and services-dominated industry is all about instantaneous decision-making”.
These changes brought about a transformation in leadership. With the business world now seen as so tumultuous and complex as to “defy predictability and even rationality” (as an article in Fast Company put it) a new kind of CEO emerged, one driven more by intuition and gut-feeling. The new CEO was less of a manager with great experience obtained from working his way up the company hierarchy, and more of a flamboyant leader who had achieved celebrity status in the business world, and was hired on the basis of his showmanship, whether his prior role had anything to do with the new position or not.
A 2002 article in Human Relations described celebrity CEOs as those who display “a monomaniacal conviction that there is one right way of doing things, and believe they possess an almost divine insight into reality”.
So, whereas the pastor of a megachurch was becoming more like a corporate executive, the corporate executive was becoming more like the leader of a cult. This transformation was no doubt helped by the replacement of old-style management consultants with motivational gurus. Pastorpreneurs, celebrity motivational gurus and flamboyant CEOs socialised together and advised one another, and in so doing created a business environment shot through with irrationality. According to Ehrenreich, “forsaking the ‘science’ of management, corporate leaders began a wild thrashing around in search for new ways to explain an increasingly uncertain world-everything from chaos theory…to eastern religions”.
It was certainly a time of increasing uncertainty. With the likes of Tom Peters (described by the LA Times as the ‘uberguru’ of management) offering advice like “destroy your corporation before a competitor does!”, everybody’s position in 90s corporate America was precarious. But whereas the white-collar precariat lived with the prospect of being fired at any time while shouldering the burden of increasing debt, the focus on boosting shares and rewarding celebrity CEOs saw executive pay soar to over three hundred times that of the typical worker, and golden parachutes handed out even to bosses whose reckless behaviour crossed the line into outright criminality. For example, in 2006 the chief executive of UnitedHealth was pursued by the US Securities and Exchange Commission for illegal backdating of stock options, actions that got him fired and made to repay $465 million in partial settlement. But he also received the largest ‘golden handshake’ in corporate history, amounting to nearly $1 billion. As Ehrenreich said, “the combination of great danger and potentially dazzling rewards (led) to a wave of giddiness that swept through America”.
Celebrity CEOs, going from their Gulfstream jets to their limousines to their luxury villas or four-star hotels, lived (in the words of Washington DC ‘crisis manager’ Eric Dezenhall) “in an artificial bubble of constant, uncritical reinforcement…a consumer of reassuring cliches”. They had come to believe in the teachings of the motivational books and speakers they recommended (maybe with a degree of cynicism) to their subordinates: positive-thinking preachers who claimed great wealth would come to anyone who visualised success, worked hard, and never complained. The average American did not complain, either, since by now the incessant New Thought message had convinced positive thinkers that anyone could ascend to the world of unstinting luxury. According to researchers at the Brookings Institution, “the strong belief in opportunity and upward mobility is the explanation that is often given for Americans’ high tolerance for inequality. The majority of Americans surveyed believe they will be above mean average income in the future (even though that is a mathematical impossibility)”.
SACK THE NEGATIVE PEOPLE
But perhaps a more accurate way to put it would be to say that the average American could not complain, at least not if they wanted to keep their job. Remember that positive-thinking ideology considers any negativity to be a sin, and some of its gurus recommended removing negative people from one’s life. And in the world of corporate America-where, other than in clear-cut cases of racial, gender, or age-related discrimination, anyone can be fired for any reason or no reason at all-that was easy to do: terminate the negative person’s employment. Joel Osteen of Houston’s Lakewood Church (described as “America’s most influential Christian” by Church Report magazine) told his followers, “employers prefer employees who are excited about working at their companies…God wants you to give it everything you’ve got. Be enthusiastic. Set an example”. And if you didn’t set an example and radiate unbridled optimism every second of the working day, you were made an example of. As banking expert Steve Eisman explained, “anybody who voiced negativity was thrown out”.
Such was the fate of Mike Gelband, who was in charge of Lehman Brothers’ real estate division. At the end of 2006 he grew increasingly anxious over the growing subprime mortgage bubble and advised “we have to rethink our business model”. For this unforgivable lapse into negativity, Lehman CEO Richard Fuld fired the miscreant.
PUNISHMENT FOR UNDERPERFORMING
But, actually, sacking was not the worst fate that could befall an employee in 21st century corporate America. With every white-collar employee under pressure to work on their attitudes, the pressure on that group who most require permanent smiles and positivity-the sales team-reached ludicrous heights. Underperforming salespeople were subjected to having eggs broken on their faces, were made to bend over and receive a spanking with the metal yard signs of competing companies, and in one case even subjected to waterboarding (“you saw how hard Chad fought for air right there. I want you to go back inside and fight that hard for sales”, in the words of the Prosper Management supervisor who conducted this example of motivational guidance).
So this was America in the 21st century. A world in which megachurch pastorpreneurs and TV evangelists preached to millions the Good News that “God caused the bank to ignore my credit score” (in the words of Osteen). A world in which CEOs became like cult leaders, infected with the executive mind-set Steve Eisman called ‘hedge fund disease’ (“The symptoms are megalomania, plus narcissism, plus solipsism…How could you be wrong about anything? To think something is to make it happen. You’re God”) and surrounded by yes-men who dared not raise any concerns for fear of being fired for ‘negativity’. A world in which to ‘underperform’ in sales could lead to humiliating ritual punishments like being made to wear nappies.
Pumped up with the New Thought belief that positive thinking could make wishes come true and that God would intervene to prevent any negative outcome, Americans confronted those other circumstances happening in the early 21st century: Monetary policy from the Federal Reserve coupled with surpluses of fast-growing emerging economies making money cheaper than ever; US politicians working to increase the share of home-owning families; a financial industry apparently transforming large risks into smaller ones through repackaging, labelling and selling them coupled with regulations and bonuses that tempted people into the market for mortgage-backed securities.
After the subprime mortgage bubble burst and it became obvious that the good times had been propped up by out-of-control speculation and borrowing, the cry inevitably went up: ‘Why did nobody see this coming?’. Hopefully this series has offered some explanations. Prior to the 2008 crash, prosperity preachers and optimism coaches told people they could realise their material ambitions through the power of belief (self-help writer Stephen Covey encouraged those satisfied with what they had to “admit that what you have isn’t enough”). The perception of negative thought as a form of sin to be removed from one’s life had the effect of ejecting cautious people bearing bad news from the workplace. And there was an executive class making decisions based on gut-feeling, behaving very much like the motivational gurus and prosperity preachers they socialised with and forced upon their subordinates, while enriching themselves through corporate mergers and bustups that unlocked share-boosting capital while destroying the jobs of hundreds of thousands of people-employees who were maxing out their credit cards in spending sprees to compensate for the deterioration of rewards in the workplace.
Coming up next, the concluding chapter of this essay.
REFERENCES
‘Financial Fiasco’ by Johan Norberg
‘Smile Or Die’ by Barbara Ehrenreich
‘White-Collar Sweatshop’ by Jill Andresky Fraser
HOW RELIGION CAUSED THE GREAT RECESSION
EPILOGUE
In the aftermath of the 2008 crash, faced with an epidemic of foreclosures in the housing market, the collapse of some of the oldest financial institutions and the national debt rising to $10 trillion, people understandably asked: Why? How come all those highly respected and lavishly rewarded experts never saw the crash coming? Taking into consideration the evidence presented in this essay, I think we can conclude that the West was blinded by a combination of New Thought and neo-classical ideology.
When the likes of Mary Baker Eddy and Quimby sought to create a positive alternative to the grim outlook of Calvinism, they imagined the universe to consist of nothing but an all-nurturing, all-supplying spirit. Humanity, as part of this maximally-beneficial entity, had but to exercise their powers of positive thinking, banish all negative thoughts, and everything would turn out all right.
And, as Ehrenreich pointed out, “what was market fundamentalism other than runaway positive thinking? In the ideology of the Bush administration and, to a somewhat lesser extent, the Clinton administration before it, there was no need for vigilance or anxiety about America’s financial institutions because ‘the market’ would take care of everything. It achieved the status of a deity, this market”.
SIMPLIFYING THAT WHICH IS TOO COMPLEX
The real world is too complex for human minds to fully grasp. Since that’s the case, science is obliged to devise simplified models, to work with a crude ‘toy universe’ when thinking about the universe in which we actually live. For example, Newtonian physics offers no exact general solution for the interaction of three or more orbiting bodies. So rocket scientists planning to send a probe to, say, Mars work with a simplified model containing only two objects at a time-the spacecraft and the Sun. The thinking is that, for most of the journey, the Sun’s gravitational influence swamps everything else, so the approximation is good enough for all practical intents and purposes.
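To give a flavour of how far such a ‘toy universe’ can get you, here is a minimal sketch (my own illustration, not anything rocket scientists actually use) of a two-body model: a single small body orbiting the Sun, with every other gravitating object ignored. Even a crude numerical integrator recovers the right orbital period.

```python
# A minimal two-body 'toy universe': one small body orbiting the Sun,
# all other gravitating bodies ignored. Illustrative sketch only.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # mass of the Sun, kg
AU = 1.496e11      # astronomical unit (Earth-Sun distance), m

def orbit_period(a):
    """Kepler's third law: period of a small body orbiting the Sun
    with semi-major axis a (metres)."""
    return 2 * math.pi * math.sqrt(a**3 / (G * M_SUN))

def propagate(r, v, dt, steps):
    """Advance 2D position r and velocity v under the Sun's gravity
    using a simple kick-drift-kick (leapfrog) integrator."""
    x, y = r
    vx, vy = v
    for _ in range(steps):
        d3 = (x*x + y*y) ** 1.5
        vx += -G * M_SUN * x / d3 * dt / 2   # half kick
        vy += -G * M_SUN * y / d3 * dt / 2
        x += vx * dt                          # drift
        y += vy * dt
        d3 = (x*x + y*y) ** 1.5
        vx += -G * M_SUN * x / d3 * dt / 2   # half kick
        vy += -G * M_SUN * y / d3 * dt / 2
    return (x, y), (vx, vy)

# Start on a circular orbit at 1 AU and integrate for one full period:
# the body should come back to (almost) where it started.
v_circ = math.sqrt(G * M_SUN / AU)
T = orbit_period(AU)
r, v = propagate((AU, 0.0), (0.0, v_circ), T / 10000, 10000)
```

The model predicts a period of about 365 days for a 1 AU orbit, despite leaving out every planet in the solar system; that is the sense in which the simplification is ‘good enough’.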
All the sciences have to make simplifying assumptions, and economics is no exception. According to Mark Braund and Ross Ashcroft (authors of “The Survival Manual: A Sane Person’s Guide to Navigating the 21st Century”) “neo-classical economics looks only at the factors influencing the investment and consumption decisions of individuals and firms. It focuses on how things would work in an imaginary world where all participants in the economy shared full and equal knowledge, not only of the market but also of the consequences of their decisions. It also assumes that everyone faces the same choices in life”.
As we have seen over the course of this essay, there are a couple of dubious claims here. There is, for example, the claim that participants share full and equal knowledge, both of the market and of their decisions’ consequences. This can hardly be said to apply to a corporate world in which celebrity CEOs floated high above the concerns of ordinary citizens in a bubble of luxury, surrounded by subordinates conditioned to bring them nothing but good news. “I’m the most lied to man in the world”, was how one CEO explained his situation.
Nor could it be said to apply to ordinary Americans: folk who, at work, were obliged to attend seminars and read books by so-called experts armed with a pseudoscientific mix of economics, quantum physics and mysticism (as one life coach insisted, “with quantum physics, science is leaving behind the notion that human beings are powerless victims and moving toward an understanding that we are fully empowered creators of our lives and of our world”), in a working environment where the entrenched cult of optimism made it advisable to conform, lest you be targeted for ‘releases of resources’ or whatever euphemism for layoffs the company used.
Outside of work, the American citizen was preached to by TV evangelists broadcasting their ‘prosperity gospel’ that God wanted true believers in optimism to have it all (a situation that inspired a 2008 Time article called ‘Maybe We Should Blame God For The Subprime Mortgage Mess’). They were advised by (in Ehrenreich’s words) “professional optimists (who) dominated the world of economic commentary…Escalating house prices were pumping the entire economy by encouraging people to use their homes like ATMs…taking out home equity loans to finance surging consumption-and housing prices were believed to be permanently resistant to gravity”.
According to Washington Post columnist Steve Pearlstein, “at the heart of any economic or financial mania is an epidemic of self-delusion that infects not only large numbers of sophisticated investors but also many of the smartest, most experienced and sophisticated executives and bankers”.
An economy infected with an epidemic of self-delusion and where the pressure is on to conform to a ‘yes-man’ culture of positive thinking is hardly conducive to bringing about the neo-classical concept of man as a perfectly informed and rational agent.
Then there is the notion of everyone facing the same choices in life. Here I will just point out that some finance companies involved in subprime mortgages were running debt-to-asset ratios of 30 to 1. Now take a member of the white-collar proletariat: massively indebted, working in a corporate environment whose advice to those facing unprecedented levels of ‘restructuring’ and ‘career-change opportunities’ (more euphemisms for layoffs) was “don’t blame the system, don’t blame the boss, work harder and pray more” or “deal with it, you big babies!”. Compare that person to the likes of Jack Welch, the CEO who laid off over a hundred thousand workers and retired with a monthly income of $2.1 million, an $800,000-a-month Manhattan apartment, a Boeing 737 (also courtesy of the company) and, oh, free security guards for his many homes. Does anyone really believe these are people who face the same choices in life?
WHAT DO YOU MEAN WHEN YOU SAY ‘FREE’?
When we make references to the ‘free market’, what, exactly, is this ‘freedom’ we are referring to? The neo-liberal ideologue would no doubt claim it refers to the freedom to partake in voluntary exchange. As Ayn Rand said, “money is the material shape of the principle that men who wish to deal with one another must deal by trade and give value for value…An honest man is one who knows that he cannot consume more than he has produced”.
But, if that is the case, then it is difficult to imagine how all those toxic assets could have been accumulating in the financial sector or how borrowing could have pushed the national debt to ten trillion dollars. I think a more apt description would be: “The free market is a competitive environment in which players strive to obtain greater material wealth than other players, by whatever means they can get away with”. This definition leaves open the possibility that some may aim to get ahead by cheating and the spreading of misleading information. They may not be able to get away with it-that depends on how clued-up and vigilant the other players are to such deception and what regulatory structures are in place to curb such behaviour-but, in nature, parasites can evolve to alter the minds of their hosts such that they nurture rather than fight off the bloodsucker. The same thing can be said of market parasites.
Gillian Tett of the Financial Times has commented on how an elite “try to stay in power; and the way they stay in power is not merely by controlling the means of production but by controlling the cognitive map, the way we think. And what really matters in that respect is…what is left undebated, unsaid”.
In a corporate environment amidst a consumerist world feeding off of New Thought ideology, there was quite a lot left unsaid. As Adam Michelson, senior Vice President of Countrywide, said, “these are the times when that one person who might respond with a negative comment or a cautious appraisal might be the first to be ostracised. There is a great risk to nonconformity in any feverishly frothy environment like that”.
THE DOWNFALL
Indeed. America in the early 21st century was riding high on optimism. Communism had been defeated, and the turbulent world of financial capitalism was sold to the public as a rising tide that lifts all boats. According to Robert Reich, “optimism…explains why we spend so much and save so little…our willingness to go into debt is intimately related to our optimism”.
As we have seen through the course of this essay, this optimism can be traced back to the Calvinist religion that helped the founders of this nation tame the harsh wilderness, and the New Thought ideology that attempted to undo the mental damage such a punitive religion could impose, but actually ended up being just as harsh on ‘sin’ as what preceded it. The only difference was that it was negative thinking rather than pleasure-seeking that was held up as sinful.
As Ehrenreich explained, “for centuries, or at least since the Protestant Reformation, western economic elites have flattered themselves with the idea that poverty is a voluntary condition. The Calvinist saw it as a result of sloth and other bad habits; the positive thinker blamed it on a wilful failure to embrace abundance. This victim-blaming approach meshed neatly with the prevailing economic determinism of the past two decades. Welfare recipients were pushed into low-wage jobs, supposedly in part, to boost their self-esteem; laid off and soon to be laid off workers were subjected to motivational speakers and exercises. But the economic meltdown should have undone, once and for all, the idea of poverty as a personal shortcoming…The lines at the unemployment offices and churches offering free food include strivers as well as slackers”.
It seems God was not on hand to save us from ourselves after all.
REFERENCES
Smile or Die by Barbara Ehrenreich
The Survival Manual by Mark Braund and Ross Ashcroft
Atlas Shrugged by Ayn Rand


HOW RELIGION CAUSED THE GREAT RECESSION
PART ONE: FROM CALVINISM TO NEW THOUGHT.
INTRODUCTION
Any essay with a title like ‘how religion caused the Great Recession’ had better begin with a caveat or two, so here goes. First of all, religion was not what caused the financial crash of 2008, which is to say it was not the main reason for the subprime mortgage bubble. As to what was the main culprit, well that probably depends on one’s political ideology. The anti-capitalist would likely blame ‘too big to fail’ banks and irresponsible Wall Street wolves, while the anti-Left would probably cite State interference in the mortgage market as the main villain.
While both of these doubtless stand up as greater culprits, politics and finance, along with other kinds of collective activity, take place amidst the societies of the day and cannot help but be influenced by the beliefs and attitudes that evolve within them. And it is here, in the influencing of minds and group action, that we will see how religion helped set us up for a subprime mortgage bubble. But now I must make the second caveat and say that there are many different kinds of religion offering diverse schools of thought, and doubtless some would have guarded against the reckless borrowing and lending that led to the 2008 Crash. But in the years leading up to that crash there was an ideology sweeping through America, one that set the world up for a fall from the dizzying heights of a great delusion, and the origins of this hubristic attitude can be traced way back to the faith of the Pilgrim fathers.
THE CALVINISTS
As far as Westerners are concerned, the United States was colonised by pilgrims whose ancestry could be traced back to the Brownist English Dissenters who, in the 16th-17th century, had fled from the dangerous political climate of their native England for the Netherlands. The pilgrims arranged with English investors to establish a new North American colony, because they were concerned that remaining in the Netherlands would lead to a loss of their English identity. So, in 1620, they established the Plymouth colony in present-day Massachusetts, which was the second successful English settlement (the first being Jamestown, Virginia, settled in 1607).
The pilgrims who founded the Plymouth colony subscribed to a variant of the Puritan faith known as Calvinism, named after John Calvin who lived in the 16th century. This was a particularly harsh and judgemental form of Christianity, one whose God “reveals his hatred for his creatures, not his love for them”, in the words of literary scholar Ann Douglas. Calvinists believed that this God’s heaven had only a limited number of spaces available, and whether you were chosen or not had been predetermined since before your birth. As to one’s duties here on earth, the Calvinist religion saw much virtue in industrious labour and particularly in constant self-examination for any sinful thought. Idleness and pleasure-seeking were viewed as being particularly contemptible sins.
In ‘The Protestant Ethic and the Spirit of Capitalism’, Max Weber argued that capitalism has its roots in Calvinist Protestantism, since it taught its followers to defer gratification in favour of hard work and wealth accumulation. It was also a mindset that was pretty well suited to the conditions the New World imposed on the colonists. Forget the images invoked by the patriotic song ‘America The Beautiful’ with its amber waves of grain, from sea to shining sea. What greeted the settlers was “a hideous, desolate wilderness” (in the words of William Bradford). Not for nothing was this land known as the Wild West. In a harsh environment such as this, where even subsistence demanded ceaseless effort, the tough-minded ideology of Calvinism probably helped them survive.
Elements of Calvinism would persist in America right through to the modern age, with the middle and upper classes considering busyness for its own sake as a means of obtaining status (a rather convenient mindset for the increasingly demanding corporations of the 80s and 90s, as we will see). But as the harsh Wild West was gradually tamed, the constant self-examination for sinful thought and its eradication through labour came to impose a hefty toll on those who became cut off from industrious work (as were, for example, women, barred from higher education by male prejudice and faced with industrialisation stripping away productive home tasks like sewing and soap-making). With productive activity taken away, Calvinism left these people with nothing but morbid introspection, and this led to various illnesses that we would now recognise as symptoms of mental stress.
NEW THOUGHT
Faced with people succumbing to the symptoms of neurasthenia, and with the medical establishment seemingly unable to cure such patients, people began to reject their forebears’ punitive religion. There was, for example, Phineas Parkhurst Quimby, a watchmaker and inventor who held metaphysical beliefs concerning (in his words) “the science of life and happiness”. In the 1860s, Quimby met up with one Mary Baker Eddy who, like many middle-class women of her day, had rejected the guilt-ridden and patriarchal Calvinism in favour of a more loving and maternal deity. 
Together, Eddy and Quimby launched what we now describe as the cultural phenomenon of positive thinking. Back in the 1800s, the post-Calvinist way of thinking that Quimby and Eddy established was known as New Thought. Drawing on a variety of sources from transcendentalism to Hinduism, New Thought re-imagined God from the hostile deity of Calvinism to a positive and all-powerful spirit. And humanity was brought closer to God, too. Out went the idea of an exclusive heaven reserved only for a select few, replaced with a concept of Man as part of one universal, benevolent spirit. And if reality consisted of nothing but the perfect and positive spirit of God, how could there be such things as sin, disease, and other negative things? New Thought saw these as mere errors that humans could eradicate through “the boundless power of spirit”.
Patients suffering mental breakdowns due to the ceaseless morbid introspection of Calvinism came to see Quimby and his ‘talking cure’, which sought to replace such negative thoughts with a belief in a universe that was benevolent, coupled with an insistence that the patient could ‘correct’ any negativity through positive thinking. The ‘Talking cure’ did indeed seem to cure the mental anxieties that were leading to invalidism among Calvinists who had idleness imposed upon them.
Meanwhile, Mary Baker Eddy went on to gain considerable wealth after founding Christian Science, the core teachings of which were that the material world did not exist; there was only Thought, Mind, Spirit, Goodness and Love. Whatever negativity or want seemed to exist were but temporary delusions.
New Thought went on to influence such people as William James, often called the father of American psychology, who claimed in his ‘Varieties of Religious Experience’ that, through New Thought, “lifelong invalids have had their health restored”. It also influenced Norman Vincent Peale, perhaps best known for his 1952 book ‘The Power of Positive Thinking’. But perhaps most importantly, as far as this essay is concerned, Mary Baker Eddy’s notion of negativity as controllable delusion influenced the mystical teachings of modern-day ‘motivational gurus’ who would lead those aspiring to the American Dream into believing that success and wealth would surely come their way if only they believed fervently enough.
THE DARK SIDE
And now we come to the dark side of New Thought. Although intended as an alternative to Calvinism, New Thought did not succeed in eradicating all the harmful aspects of that religion. As Barbara Ehrenreich explained in ‘Smile Or Die’, “it ended up preserving some of Calvinism’s more toxic features- a harsh judgmentalism, echoing the old religion’s condemnation of sin, and the insistence on the constant interior labour of self-examination”. The only difference was that while the Calvinist’s introspection was intended to eradicate sin, the practitioner of New Thought and its later incarnations of positive thinking was constantly monitoring the self for negativity. Anything other than positive thought was an error that had to be driven out of the mind.
So, from the 19th century onwards, a belief that the universe is fundamentally benevolent and that the power of positive thought could make wishes come true and prevent all negative things from happening was simmering away in the American subconscious. When consumerism took hold in the 20th century, positive thinking would be increasingly imposed on anyone looking to get ahead in an ever more materialistic world.
To be continued…
REFERENCES
Wikipedia
‘Guns, Germs and Steel’ by Jared Diamond
‘Smile Or Die’ by Barbara Ehrenreich.

HOW RELIGION CAUSED THE GREAT RECESSION
PART TWO: THE RISE OF CONSUMER CULTURE
In part one, we saw how the Plymouth Colonists settled in a harsh, untamed environment that required ceaseless labour just to maintain subsistence living. Gradually, though, the unforgiving Wild West would be tamed, with railroads and freeways stretching from State to State, vast swathes of farmland providing an abundance of food, and industrial centres capable of such high productivity it seemed as though everybody’s needs would soon be met.
But while this might sound like a positive thing, it actually posed something of a problem to the economic system that had been established. It was a system based on perpetual growth, one fundamentally opposed to any notion of ‘enough’ that might dwell in the human soul. In the competitive world of business, companies manufacturing goods were compelled to steadily increase market share and profits, for fear of being swallowed by a larger enterprise. But how could perpetual growth be maintained when customers acted with frugality and were content with what they had?
Psychologists were therefore brought in to change the human psyche. One such expert was Edward Bernays. He took certain ideas from Freudian analysis about human status and applied them to advertisement campaigns. Products were no longer to be thought of as mere practical solutions to a limited set of problems. They were, instead, symbols representative of one’s identity, physical representations of one’s status. The car, the appliance, the furniture, were to be less relevant in terms of their utility and seen instead as fashion accessories. Advertising played a major role in developing this new consumer culture, because if the economy was to fulfil its imperative of perpetual growth, the customer had to be persuaded to buy things they did not even know they needed.
SALES AND SERVICES
The consumer economy necessitated the rise of sales and service-based industries and those kinds of workplaces proved fertile breeding ground for positive thinking. After all, we all expect staff in shops and waiters serving us food to be friendly and greet us with smiles and a positive attitude (even if we don’t really believe the grinning sales assistant is genuinely pleased to see us). 
Increasingly, then, employees found themselves in occupations that required the kind of self-examination and improvement that practitioners of positive thinking strived to achieve. As Ehrenreich explained, “the work of Americans, and especially its ever-growing white-collar proletariat, is in no small part work that is performed on the self in order to make that self more acceptable and even likeable to employers, clients, coworkers and potential customers”. Nor were interpersonal skills and constant optimism confined to obvious places like sales and service-based industries. As Dale Carnegie observed, “even in such technical lines as engineering, about 15 percent of one’s financial success is due to one’s technical knowledge and about 85 percent is due to skill in human engineering”.
And so, whether in work or out, the consumer lived surrounded by the positive thinking message that anyone can have whatever they want, provided they exercised sufficient belief that good things will come their way. It was a belief generated in no small part to create an insatiable appetite for consumer culture. And as the corporate world seemed to ascend to increasingly dazzling heights of financial success, some clergymen noticed this ascendency and recognised within it methods to grow their churches.
Continued in part three
REFERENCES
‘Culture In Decline’ by Peter Joseph
‘Smile Or Die’ by Barbara Ehrenreich.

HOW RELIGION CAUSED THE GREAT RECESSION
PART THREE: THE RISE OF THE MEGACHURCH.
In part two, we saw how a market system based on perpetual growth required a change in social attitudes once productivity was capable of meeting basic needs, and that transition was one in which we went from frugality to signalling our individuality through consumption. By the late 20th century it would have been impossible to miss consumption culture and, perhaps inevitably, marketing, advertising and other aspects of growth culture began to have an influence in areas one might consider to be outside such economic concerns.
One such example would be the Church. Membership of mainstream churches had been declining in the latter part of the 20th century. In the past, churches faced with an increasing number of ‘unchurched’ folk might have sent out missionaries to try and convert the heathen population. But, this being an era of marketing, they tried something different. They did what any business would do when looking to relaunch a flagging product. They began thinking of potential members as ‘customers’ and conducted market research in order to determine what the ‘customer’ wanted. The various surveys and research indicated that people were not much interested in the kind of sermons they had sat through as children. Not for them, the angry sermon condemning sin. In fact, the market research showed people were not much interested in traditional church at all.
So pastors like Rick Warren, Bill Hybels and Robert Schuller set about reconfiguring church in order to better accommodate what the ‘customer’ wanted. Out went the hard pews, replaced with comfortable seating. Out went all the imagery of conventional churches. These new churches would have little in the way of traditional Christian iconography, such as crosses or images of Jesus. The result of this transformation was a building that looked less like a church and more like architecture that fit seamlessly with the modernist corporate-style environment of the rest of the city.
It was not only the physical appearance of the church that changed to suit the modern corporate, secular world. The sermons themselves changed as well. The more demanding principles of Christianity with its teachings of modesty and humble living were discarded, replaced with positive messages very much like the ones New Thought had preached. The new breed of pastor saw themselves not as critics of the secular, materialistic world but rather as active participants within it. They preached a ‘prosperity gospel’, one which claimed God wanted you to achieve status, wealth and other trappings of material success.
In terms of growth, this tactic of transforming churches into secular conference centres spreading the good news that God would payeth thy credit card proved very successful. The churches led by the likes of Schuller, Warren and Hybels became ‘megachurches’ which, if you include those attending via TV broadcast, preached to an audience of millions. Being so big, megachurches had to employ hundreds of people and find millions of dollars to keep the organisation running. These conditions led to their pastors becoming ever less like traditional clergy and more like CEOs of large corporations. As Ehrenreich explained, many of these churches were “nondenominational, meaning they couldn’t turn to a centralised bureaucracy for financial or any other kind of support…They depended entirely on their own charisma and salesmanship”.
So the audience of a megachurch entered a building that looked pretty much like a corporate headquarters. The person preaching to them wore a business suit like any CEO and probably thought of himself as a ‘pastorpreneur’- part pastor, part entrepreneur. And the message the pastorpreneur delivered was much the same as the one the corporate world wanted to get across: Through positive thinking, you can make anything happen. You can get ahead, you can become successful, you will become rich. Consider, for instance, the words of televangelist Joyce Meyer: “I believe God wants to give us nice things”.
But, underneath all that positivity was the dark undercurrent of New Thought’s attitude toward negativity as a sin. If, despite all your positivity, riches did not come your way, don’t look for any flaw in business, economics or politics. Instead, blame yourself. You just didn’t try hard enough. Pastor Robert Schuller advised his congregation to “never verbalise a negative emotion”.
Still, at least the megachurch managed to remain a nice, comfortable place in which to receive the prosperity gospel. As Ehrenreich said, in a megachurch “no one will yell at you, impose impossible deadlines…All the visual signs of corporate power and efficiency, only without the cruelty and fear”. 
The same could not be said of corporate America in the late 20th/early 21st century.
Continued in PART FOUR
REFERENCES:
‘Smile Or Die’ by Barbara Ehrenreich 


(DIS)HONEST WAYS TO MAKE IT RICH.
Money. In a world where most things come with a price tag attached, we are all obliged to try and acquire the stuff. This can be achieved through fair means or foul. But what is an honest way of becoming wealthy?
To my mind there is only one way to become wealthy through entirely honest means, and that is to provide a product or service that an informed customer may choose to spend his or her money on. The company that provides this product or service relies only on the actual quality of it to keep them ahead of competitors. 
Furthermore, the honest boss of a company recognises that he or she is but one member of a team, and it was that collective which worked together to bring product X to the market. We might make a comparison to the conductor of an orchestra. A great conductor can make the difference between a performance that is merely OK and one that is sublime. It would be wrong, however, to attribute the excellence of the performance entirely to the person who happens to lead the orchestra. Obviously, were it not for the violinists, the trumpet players, the pianists, the percussionists and all the other members of the orchestra, bringing what is likely years of hard practice at perfecting the craft of playing their chosen instrument, there would be no music at all, sublime or otherwise. 
The same can be said for the CEO of a company. A great CEO can make the difference between an outstanding year for the company and a merely average (or abysmal) one. But a CEO of a Fortune 500 company could no more bring its product to market by themselves than a conductor could wave his baton like a wand and create music, absent all the other members of the team we call an orchestra. The honest boss recognises that he or she is but one person among many, and that it is actually the organisation the team comprises that earns the billions that multinational corporations bring in. The honest boss would not accept a financial reward so high that other members of the team must make do with so little that, even working full time, their daily life is one of constant money anxiety. Of course, it would also be wrong to pay everybody the same, since people clearly have different levels of responsibility and skill in any organisation. But there is surely a mutually beneficial compromise between the extremes of total equality and high inequality.

NOT SO HONEST
Do all companies competing in the market adhere strictly to these conditions for honest money-making? Clearly not, as it is not too difficult to find examples of businesses that break at least one of the rules I just mentioned. To recap, the totally honest business:
1. Sells a product or service to an informed customer. In other words, whatever advertisement is used to try and sell the product gives an honest description of its advantages and disadvantages in comparison to rival products. Any potential customer truly knows exactly what it is they are about to pay for.
2. Relies only on the actual quality of that product/service in order to stay ahead of the competition. In other words, there is no lobbying the State to pass laws that disadvantage their competitors, grant monopoly rights that are then exploited through price hikes that would not be possible in true free market competition, and other distortions of the market. 
3. Acknowledges that it is the company, not any one person, that earns the profit. This income should be distributed in a mutually beneficial way that avoids the injustices of total equality (which fails to compensate for differences in responsibility and talent) and extreme inequality (which places unnecessary anxiety on the disfavoured and can lead to structural violence, as the disenfranchised riot against what is an obviously rigged system). In a good business, every person from the bottom to the top is motivated by the hope of success; the bad business relies on the fear of failure to keep its employees working.
UNINFORMED CUSTOMERS 
So, are customers always completely informed about the product they are about to buy? Not according to the documentary ‘Will Work For Free’:
“If I wandered into a phone shop, unsure of what to buy, and make the mistake of telling the salesperson that I am not too familiar with the differences, I leave myself open to product sale bias. In this scenario, the store has no problem selling the best products, so instead, I am presented with an inferior product which the store is struggling to offload. The salesperson’s job here, becomes distorted and I the customer will most likely be subjected to either a sales pitch as opposed to honest insight”.
Or how about this quote from the Daily Mail:
“The Herts and Essex Fertility Centre charges £1,247 for three drugs routinely prescribed to women on IVF. But the same prescriptions in the same quantities cost £876.72 from Boots or £929.22 from Asda. Couples who buy drugs from the clinics may have no idea they are paying over the odds or that they can get them elsewhere…Experts accused the clinics of exploitation, calling the way they charge for IVF drugs ‘a complete racket'”.
Neither of these quotes conveys an impression of potential customers making decisions armed with a complete set of facts. I dare say most people have had the experience of dealing with some salesperson or business who appears to be, shall we say, economical with the truth in order to ensure a sale.
BEND THOSE RULES
Moving on to the second condition for honest money making, I think it is fair to say that there are all kinds of distortions of the free market principle of trading true value for value under conditions of competition that favour only those who genuinely provide the best product/service. In fact, this quote from Ayn Rand’s Atlas Shrugged sounds suspiciously like the actual (as opposed to some ideological ‘free’) market:
“When you see that trading is done, not by consent, but by compulsion–when you see that in order to produce, you need to obtain permission from men who produce nothing–when you see that money is flowing to those who deal, not in goods, but in favours–when you see that men get richer by graft and by pull than by work, and your laws don’t protect you against them, but protect them against you–when you see corruption being rewarded and honesty becoming a self-sacrifice–you may know that your society is doomed”. 
YOU ARE WORTH THAT? REALLY?
When you watch an athlete push their body to the limits of human capability and beat a record for the fastest sprint, the longest jump, the most perfectly executed dive and so on, you can only feel a sense of admiration for those who have put in such hard work to achieve this pinnacle, and a sense of humility that some are able to dedicate themselves to an effort more supreme than most of us could ever endure.
But, given that the human body is capable of doing only so much, at some point an ever-decreasing time limit for completing a sprint or other record-breaking achievement starts to seem kind of…dubious. And then, it happens. The once-celebrated athlete is exposed as a cheat. He took performance-enhancing drugs, or some other means of gaining an edge that is against the rules of professional sport. 
It is, by the way, a bit unfair to dismiss those who are caught relying on such dubious tactics as just cheats. It is almost certain that these athletes trained every bit as hard as those who never touched performance-enhancing drugs. It is not like they just slobbed around in front of the TV all their lives, then one day injected themselves with something and wandered down to the Olympic park to beat Usain Bolt. No, their result came about by a mixture of honest and dishonest methods. 
Just as we understand that an individual can only break a sports record by a certain amount before it becomes obvious that they must have relied on some kind of cheating, so too should we recognise that an individual can only make so much money for themselves before their wages and bonuses are the result, not simply of their own merit, but a combination of honest work and cheating, of either rigging the system to favour themselves and disadvantage their fellow workers, or being in a position to benefit from a system that is already rigged. Is the CEO of a multinational company worth five times as much as the average employee? Doubtlessly, yes, because he or she shoulders enormous responsibility. Ten times more? Twenty? I think most people would still accept this is fair. But when those at the executive level start making hundreds or thousands of times more than the average employee, we really should be as dubious of this reward as we would be of an athlete who somehow manages to shave ten, fifty, or one hundred seconds off the previous world record in the sprint. Anyone who is a billionaire definitely did not earn that money through entirely honest means under conditions of true competition in which all have an equal chance to excel. No, they were in a position to take advantage of a rigged system.
CONCLUSION
Like everything created by humans, markets are not perfect but flawed creations. If we come up with a description of markets which recognises the possibility that some may violate one or more of the conditions for acquiring deserved wealth, we can see what that flaw is. So here goes: the free market is an arena of competition in which individuals and groups try to gain a greater monetary reward than other individuals and groups, via whatever method they can get away with. Clearly, such conditions are prone to cheats whose method for gaining wealth relies not entirely on making their own money, but (at least in part) on taking wealth from others. Modern understandings of markets view them not as efficient machines, but rather as chaotic ecosystems. Just as the natural ecosystem inevitably allows parasites to evolve, so too do market systems give rise to parasites. And, just as in the natural world, those parasites are under competitive pressure to hide themselves from their victims, or better yet to fool their victims into believing they are something to be protected, rather than fought.
Bear in mind what I said about the ‘cheating’ athlete, though. Just as the athlete was not simply a cheat but somebody who relied on a combination of meritocratic and dubious methods for achieving success, so too are the most successful cheats in business rarely simply parasites. Just as natural parasites have a competitive selective advantage in becoming interwoven among some vital function of their host’s body, such that removing the parasite without causing harm to the host presents a great challenge, so too are market parasites under selective pressure to interweave their dubious wealth-extraction schemes among genuinely useful services. The best parasites are never just cheats.
But, whatever. It is true to say that if those who take wealth, rather than make it, are allowed to flourish too much, then society is doomed just like Rand said. We know what needs to be done to ensure that never happens, though: Don’t let them get away with it.
REFERENCES
“Will Work For Free” You tube documentary
“And They Charge Hundreds of Pounds More For Drugs You Can Buy at Asda” by the Daily Mail
‘Atlas Shrugged’ by Ayn Rand


THE SINGULARITIES

In 1993 the science fiction author Vernor Vinge wrote a paper in which he predicted a coming event which would radically change life as we know it. “I argue in this paper”, he wrote, “that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence”. Vinge coined a name for this change. He called it the ‘Technological Singularity’.

What I want to explore in this essay is the possibility that a singularity is not a unique event but one which has happened more than once. I believe that Vinge’s own words lead us to suppose that reality has gone through profound shifts in possibility before. The key sentence is as follows:

“This change will be a throwing away of all previous rules…developments that before might only happen in “a million years” (if ever) will likely happen in the next century”.

In other words, a key aspect of a singularity is that it leads to a dramatic change in perceptions of time, or rather, a dramatic compression of possibility, such that the wildly implausible becomes likely. With that in mind I think we can see in the past history of our universe at least three events which qualify as Singularities.

SINGULARITY NUMBER ONE: THE BIG BANG

Before science fiction writers speculated on the possibility of technology bringing about such a dramatic change that it imposed a ‘singularity’ on our future through which we could not peer and see clearly what was to come, cosmologists looked to the dim and distant past and traced the evolution of the universe itself until they reached a point where our understanding of physics can take us no further. They called this point where the state of existence is shrouded in utter mystery a ‘singularity’ and no doubt those speculators of the future (Vinge was not the first person to use the phrase in the context of future technological change) borrowed the phrase from the cosmologists. 

Today, by far the most popular theory of the universe’s origins is ‘The Big Bang’, which points to evidence that our universe is expanding and argues that, if this is so, as we go back in time the universe must have been smaller, until a moment is reached where all of creation was compressed into a speck of infinitesimal size. This naturally leads to the question ‘what happened before?’, to which the answer seems to be that there was no ‘before’. The Big Bang marks the moment time and space began, so asking what came before the Big Bang is as nonsensical as asking ‘what’s north of the North Pole?’. It follows from this that it is meaningless to wonder how long that mysterious pre-Big Bang reality lasted, for without time a nanosecond is no different to an eternity. What bigger difference in perceptions of time can there be than the transition from timelessness to change that can be measured?

SINGULARITY NUMBER TWO: THE ORIGIN OF LIFE
Vernor Vinge saw the Technological Singularity as an event which could compress our expectations of what is possible in a given time-frame, making ‘only in a million years’ events happen within a century, if not sooner. If we look to our past, we can see another event which dramatically speeded up possibility, and that event was the transition from single-step selection to cumulative selection.

Richard Dawkins illustrated single-step selection by taking a phrase from Shakespeare’s ‘Hamlet’ (‘METHINKS IT IS LIKE A WEASEL’) and asking how long we should expect to wait for a monkey to type it, randomly bashing away at a special word processor with only 27 characters (each letter of the alphabet, capitals only, plus a space) that allows exactly 28 keystrokes per attempt, which happens to be the number of characters in that phrase if we count each space as a character.
The chance of the monkey happening to type ‘M’ as the first letter is one in 27, since there are 27 possible characters the primate could bash. In order to get the first two letters right, the monkey must beat odds of 1/27 multiplied by 1/27, which gives odds of 1/729. In order to randomly type the entire sentence with no spelling errors and every space in the correct place, the monkey must beat odds of about 1 in 10,000 million, million, million, million, million, million. As you can imagine, then, you would likely have to wait a very, very long time for a monkey to bash out that precise phrase on Dawkins’s special keyboard.
And yet these odds are quite good compared to the odds of a haemoglobin molecule happening to assemble itself from random recombinations of the amino acids from which it is made. The haemoglobin molecule consists of four chains of amino acids, there are 146 amino acids per chain, and in living things we commonly find 20 different kinds of amino acid. Another science fiction writer, Isaac Asimov, calculated the number of possible ways of arranging 20 kinds of things in chains 146 links long and came up with the ‘haemoglobin number’, which is (more or less) a one with one hundred and ninety noughts after it. Compare that to our ‘METHINKS’ odds, in which the monkey ‘only’ had to beat odds of about 1 in 10^40, a one followed by forty noughts. And, of course, one haemoglobin molecule makes up only a tiny fraction of the complexity of a living organism. If it were left up to random chance, we would have to wait far longer than the life of the universe itself for life to emerge.
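These numbers are easy to verify for yourself. The following few lines of Python (my own arithmetic check, not anything from Dawkins or Asimov) compute both quantities directly:

```python
import math

# Single-step selection: a monkey typing 28 characters drawn at random
# from a 27-character alphabet (26 capital letters plus a space).
methinks_odds = 27 ** 28
print(f"METHINKS odds: 1 in {methinks_odds:.2e}")  # about 1 in 1.2 x 10^40

# Asimov's 'haemoglobin number': the ways of arranging 20 kinds of
# amino acid in a chain 146 links long.
haemoglobin_number = 20 ** 146
print(f"The haemoglobin number has {len(str(haemoglobin_number))} digits")
```

The first figure matches the ‘1 in 10^40’ quoted above, and the second is indeed a number of 190 digits, i.e. roughly a one with one hundred and ninety noughts after it.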

Since life evidently has emerged, we must conclude that a Vinge-style ‘possibility compression’ occurred at some point in the past, and we know exactly what that event was (although we are still in the dark as to the exact form it took). That event was the transition from single-step selection, which is just random chance, to ‘cumulative selection’.

The difference between the two is that, whereas single-step selection has no memory whatsoever of the past, in cumulative selection the results of one round are fed into subsequent rounds. To illustrate the power of cumulative selection, Dawkins designed a computer program that, like the monkey, generated a random sequence of 28 characters. It would then duplicate that phrase, but with a certain chance that a ‘copying error’ would alter it. The computer would then examine all those ‘offspring’ phrases and reproduce whichever one most closely resembled ‘METHINKS IT IS LIKE A WEASEL’.

First of all, the program typed the following:

WDLMNT DTJBKWIRZREZLMQCO P
Pretty much the kind of thing you would expect a monkey to produce were it let loose on a word processor. After ten generations of breeding and selecting the phrase closest to ‘METHINKS IT IS LIKE A WEASEL’, the program had managed to produce:
MDLSMNS ITJISWHRZREZ MECS P
Still hardly a recognisable word, let alone Shakespearian in its quality.
By the time 30 generations had been bred and selected, a resemblance to the target phrase had become undeniable:
METHINGS IT ISWLIKE B WECSEL
And within 43 generations cumulative selection had produced the exact quotation.
How long did it take for the computer to evolve the five-word quotation from ‘Hamlet’? About eleven seconds. Compare that to how long we would expect to wait if we relied only on random chance: about a million, million, million, million, million years.
Now, there is one important difference between Dawkins’s evolutionary program and natural selection, and it is this: the program was given a definite target, in that it had to search through strings of 28 characters and select the one which most resembled, however slightly, the phrase ‘METHINKS IT IS LIKE A WEASEL’. Natural selection, on the other hand, is not heading toward any definite future goal. Still, the experiment Dawkins ran gives us some inkling of the power of cumulative selection to dramatically raise the likelihood of something as improbable as the complexity of life as we know it today. Paraphrasing Vinge, we might say that ‘events which would otherwise take about a million, million, million, million, million years to happen, can actually happen in between eleven seconds and one hour’.
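The experiment is simple enough to reproduce. Here is a minimal sketch in Python, written for this essay rather than taken from Dawkins’s original code; the mutation rate and population size are assumptions of mine, chosen so the target is typically reached within a few dozen generations:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 capitals plus a space

def score(phrase):
    # How many characters already match the target phrase.
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.05):
    # Copy the phrase, with a small chance of a 'copying error' per character.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def weasel(population=100, seed=None):
    random.seed(seed)
    # Start from a random 28-character string, as the monkey would type it.
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generation = 0
    while parent != TARGET:
        # Breed a population of mutant offspring and keep the closest match.
        offspring = [mutate(parent) for _ in range(population)]
        parent = max(offspring, key=score)
        generation += 1
    return generation

print(weasel(seed=42), "generations")
```

Run it and the target phrase appears in a matter of seconds, generation counts varying with the random seed. The contrast with single-step selection, where the same phrase would take unimaginably long to appear, is the whole point of Dawkins’s demonstration.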

SINGULARITY NUMBER THREE: THE GREAT LEAP FORWARD
The theory of evolution by natural selection tells us that human beings are just one more species belonging to a great family tree comprised of all living things that exist, or have ever existed. But are human beings really just another animal, no more remarkable than any other creature or plant? Or is there a good reason to pick human beings out for being special in some way? 

I think the latter is true, and the special reason is as follows: Human beings, unique among life on Earth, enabled a new kind of evolutionary process. As Dawkins wrote, “There is an evolution-like process, orders of magnitude faster than biological evolution…This is variously called cultural evolution, exosomatic evolution, or technological evolution”. Whereas all other forms of life on Earth can only adapt at the speed of natural selection, human beings have the imagination, the communicative capability, and the dexterity to reshape materials around them to produce useful designs intended to suit some purpose. We don’t have to wait for natural selection to adapt us for operating under water, we can develop snorkels, aqualungs, submarines and other forms of aquatic technology. And while it took billions of years for natural selection to produce flying animals like birds, it took only a couple of million years for human beings to fly to the Moon. 

Our hominid ancestors were not always so rapid in their technological development. If we look back more than forty thousand years we find man-made artefacts that had hardly changed for a million years. Generation upon generation upon generation of humans produced the same kind of flint knife that their ancestors relied on, plus a few other tools. As for paintings and carvings and figurines, they produced none.
In this respect our distant ancestors were no different from any other animal. Humans are not the only animals that make use of tools. Other primates have been observed using blades of grass to ‘fish’ ants out of holes; thrushes use a stone as an ‘anvil’, bashing snails against it to break their shells and get at the soft meat inside; beavers fell trees in order to construct dams; the list goes on. But none of those animals has anything like technology, which is an ever-accumulating family of tools and techniques solving more and more problems. No beaver ever figured out how to add hydroelectric power to its dam, no thrush ever learned to combine its snail meat with other ingredients to produce a tastier dish. Similarly, it seems our ancestors of more than 40,000 years ago never figured out that they could make dramatic improvements to their tools, or that an almost infinite range of possible tools and techniques was waiting to be fashioned from the resources around them.
But then, something happened, an event Jared Diamond called ‘The Great Leap Forward’. As I said before, prior to this Leap the tools our ancestors used hardly changed for a million years, but after it we find paintings, carvings, musical instruments, and the beginnings of the true technological capability that would result in, among other wondrous inventions, the iPad 2 and the app on which I am writing this very essay a mere 40,000 years or so later.
As Matt Ridley wrote in ‘The Rational Optimist’, the human race has “surrounded itself with peculiar, non-random arrangements of atoms called technologies, which it invents, reinvents and discards almost continuously. This is not true for other creatures…they do not ‘raise their standard of living’, or experience ‘economic growth’…they do not experience agricultural, urban, commercial, industrial, and information revolutions”.

This leads one to ask what it is about the human species that enabled it to trigger this paradigm shift to technological evolution. Some authorities think it has something to do with language. Perhaps not the evolution of language itself (linguists like Steven Pinker believe language to be older than the Leap) but rather (as Dawkins speculated) “a new trick of grammar, such as the conditional clause, which, at a stroke, would have enabled ‘what if’ imagination to flower”. You can understand why people look to language, or some adaptation of the ability to communicate linguistically, as the trigger of the Great Leap Forward, because technology is an inherently collaborative process. The idea of the lone genius who gets a great idea from out of nowhere is pretty much false. Instead, inventors take materials, tools, techniques and ideas that already exist and put them together in a new way to achieve a new result. They rely, in other words, on work that has already been accomplished. Furthermore, they pretty much always rely, either directly or indirectly, on support from other people in building whatever they are making. This fact is illustrated in a famous essay called ‘I, Pencil’, in which you are challenged to make, from scratch, the kind of pencil you can buy in a shop. Of course, you could snap off a twig and use it to etch markings into soft earth, but could you make a pencil with a graphite tip and a little rubber on the opposite end, held in place by a strip of metal? In order to do that you would have to know how to mine the graphite and metal, how to produce the rubber, and how to get the graphite inside a hollow tube of wood. And, of course, you could not rely on anyone else’s tools to do this work; you would have to build those entirely from scratch as well.

Simply put, this is an impossible task for one person. All of us rely on work that was done almost entirely by other people. Technology is very much dependent on specialisation and exchange, on people collaborating with one another and relying on an accumulating, evolving record of knowledge. Such a capability could never have arisen had each individual only had their own mind to rely on, and no way of communicating ideas from one mind to another and across generations. Vinge himself actually used the arrival of the human species as an example of a Singularity-like change. Recall that he wrote:
“We humans have the ability to internalize the world and conduct “what if’s” in our heads; we can solve many problems thousands of times faster than natural selection”.

So, there we are: My three examples of past events which had such a dramatic effect on time, on what is possible, that they deserve to be thought of as ‘Singularities’. Makes you wonder how different the future will be if Vinge’s ‘technological singularity’ actually happens, doesn’t it?

REFERENCES:
“The Blind Watchmaker” and “The Ancestor’s Tale” by Richard Dawkins.
“The Rational Optimist” by Matt Ridley.
“The Technological Singularity” by Vernor Vinge.
“I, Pencil” by Leonard E. Read.


BANKING THE UNBANKED

INTRODUCTION
It is difficult to imagine going through life without relying on banks and the services they provide. We use such services every day. Our wages get wired to our accounts, we regularly use credit cards to buy things in stores and online. And then there are all those bills and loan offers that get sent to us. Yes, clearly, banking is interwoven into all our lives.
This, however, is not universally true. Around the world there are people, roughly 2.5 billion adults, who don’t have access to banks and the services they provide. Such people are unable to start savings accounts and they cannot get credit cards. While in countries like Canada, the U.K. and Germany around 96 percent of people above the age of fifteen have a bank account, in Pakistan only 27 percent of the population does. Nor is this a problem exclusive to developing countries. While in the USA some 88 percent of people have a bank account of some kind, some 30 percent must make do with nontraditional banking sources such as payday loans, and have insufficient access to the financial system. These people, who either live where there are no banks or who are unable to access all the services we have come to expect from the industry, are the ‘unbanked’.
OUT OF THE SYSTEM
So, how come there are people who don’t have access to such a crucial service? The two main reasons are a lack of facilities and a lack of documentation.
Starting with the former, there are people who live in places where traditional banks don’t want to go. This is partly because, being poor, they don’t offer the kind of profits that richer clients do. But it is also because these people live in places with insufficient infrastructure and security, which makes building bank branches in those areas difficult. As such, there is a distinct lack of services that we take for granted. For example, in Uganda circa 2005 there were one hundred ATMs for 27 million people.
As well as lacking the infrastructure that banking requires, the unbanked have the problem of insufficient documentation. As the Peruvian economist Hernando de Soto has shown, economic growth and the creation of wealth depend upon clearly defined and documented property rights. In the West we have documents attached to our assets, things like our cars and our homes, that can be presented as collateral to a bank to borrow money. But in developing countries people have assets but no documentation. Unable to prove their identity, put up collateral or create credit histories, the unbanked lack the basic foundations for participating in the banking system, and are limited to cash transactions. In the absence of mainstream banking, a shadow banking system has emerged to meet their needs. But such organisations leave these vulnerable people open to corruption.
A PROBLEM THAT CAN AND SHOULD BE SOLVED
Fortunately, we have now developed, and are expanding, the technological capability to integrate the unbanked into the global financial system. This is being achieved not by building physical bank branches but by leapfrogging brick-and-mortar banks with mobile technology. The potential in bringing those 2.5 billion unbanked into the global economy is not to be sniffed at. According to the journalist Robert Neuwirth, in aggregate there is some $10 trillion worth of assets owned by undocumented people around the world. Were they their own country, its economy would be second only to that of the USA.
It is therefore no fool’s errand to find ways of overcoming the obstacles the unbanked face in becoming part of the global economy. Indeed, the work done so far proves how valuable this can be. Thanks to globalisation and digitisation, people in India with sufficient understanding of English and IT skills can work from home servicing computers in America and Europe. Multinational companies now source their goods from all over the world, bringing job prospects to areas that hitherto had only a hand-to-mouth existence. Of course, this is not without some negative consequences. It means jobs are lost in richer countries as they are outsourced to places where there is cheaper labour. But it helped drop the percentage of the world’s population living on less than $1.25 a day from 43.1% to 20.6%.
Perhaps the most important device for bringing the unbanked into the 21st century is the mobile phone, for it is this device, plus the wireless infrastructure and IT that support it, that more than anything have allowed people to move away from cash and opt for more secure and useful forms of money.
Developing countries may be most open to this kind of change for a couple of reasons. For one, in richer countries we are used to digital currencies that offer all the convenience of cashless currency but hide quite a bit of expense from the end user, an expense that eventually shows up as higher prices. In developing countries, corruption and excessive bureaucracy are more blatant. Workers in some African countries must pay their boss a bribe in order to receive their wages. In Cairo, acquiring and registering a plot of state-owned land involves wading through some 77 bureaucratic procedures across 33 agencies, and can take up to 14 years. So we should perhaps expect such people to embrace blockchain 2.0 and cryptocurrency solutions faster than people in countries with convenient credit-card payment schemes and their hidden charges.
Secondly, in developing countries we find a greater proportion of self-employed people- rickshaw drivers, food-stall operators, small business owners- and for such people it is particularly important to save costs on financial transactions. A payment solution that is more secure, able to circumvent corruption and avoids hidden charges, has obvious benefits for people who have to look after every penny. 
I would add a third reason why mobile banking, cryptocurrency and blockchain 2.0 technology may develop first in countries with many ‘unbanked’, and that is the ‘latecomer’s advantage’. There is no rule that says a country has to retread all the steps that led to the modern world. It can leapfrog straight to the latest technologies and practices. Indeed, this leapfrogging makes a great deal of sense, because the most modern technologies often do the same job as their predecessors, only more cost-effectively and less wastefully. The cost of purchasing and burying copper wire for a communications infrastructure would be more than $100 million; cell tower infrastructure would cost a relatively small tens of thousands of dollars. If a city like Zinder in southern Niger were to adopt PCs, then by the time 10% of the population were using them, the power they consumed (1,500 KW) would exceed that of all households today. Mobile devices, on the other hand, would consume just 74 KW, and as they run off batteries they would be more useful in areas where power outages are a common experience. It is for reasons such as these that countries like El Salvador and Panama have adopted mobile communications faster than the USA.

The fact that the rich nations have well-established systems and infrastructures could be an impediment to progress. W. Brian Arthur, External Professor at the Santa Fe Institute and author of ‘The Nature of Technology’ has written about how established technologies and practices can delay the adoption of new methods, even though those new methods are superior. In 1955, the economist Marvin Frankel noticed that cotton mills in Lancashire were not using the more modern and efficient machinery. This was because the old brick structures that housed the old machinery would have to be torn down before the new machinery could be installed. As Arthur wrote, “The outer assemblies thus locked in the inner machinery and thus the Lancashire mills did not change”. To this day, whenever a technology is so interwoven with the fabric of everyday life or business practice that replacing it seems too much bother, we say it has become ‘locked-in’.
There is also a psychological aspect to consider. Established technologies and practices can lead people to adopt certain ways of doing things, and upstart technologies that render the old ways obsolete can be threatening. Sociologist Diane Vaughan called this ‘Psychological Dissonance’ and wrote:
“(We use) a frame of reference constructed from integrated sets of assumptions, expectations, and experiences…This frame of reference is not easily altered or dismantled, because the way we tend to see the world is intimately linked to how we see and define ourselves in relation to the world. Thus, we have a vested interest in maintaining consistency because our own identity is at risk”.
Therefore, established technologies, infrastructures and methods can create hysteresis (a delayed response to change) that holds the new at bay, at least until the old ways simply cannot be stretched any further. So it could be that developing countries, which lack many of these established infrastructures and technologies, will adopt the new more quickly and accommodate themselves sooner to the methods and practices it makes possible.
As a digital communications infrastructure is established and made accessible, the unbanked have the opportunity to pursue specialisation and exchange, building services that reduce problems in economic activity, match underused resources with unmet needs, and generally follow a proven path to prosperity. Through a combination of the Internet, mobile telephony and micro-financing, websites like Kiva allow individuals in the West to lend to African entrepreneurs who are able to deposit receipts and pay bills without having to handle cash. Zambian farmers have boosted profits by 20 percent by using their mobile phones to buy seeds and fertiliser.

M-PESA
Perhaps the most successful mobile banking scheme (in terms of helping the unbanked, at least) is M-Pesa. M-Pesa came into existence in 2007, when Safaricom began a pilot program that turned prepaid calling minutes into a form of currency. In order to use M-Pesa, people sign up for an account and their phone gets an e-wallet. They can then go to a local Safaricom agent and pay cash for “e-float”. As Paul Vigna and Michael J. Casey explained in “Cryptocurrency”, “this money isn’t actually held in the form of Kenyan shillings but as a separate claim on the overall M-Pesa e-float, all of which is backed by depositors in the banks with which Safaricom has accounts”. This currency can then be sent to other phone users who also have M-Pesa accounts, or a user can withdraw cash by going to an agent who will hand over money provided the user has an equivalent amount of e-float in their account.
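As described, M-Pesa is essentially a ledger of claims against a cash-backed e-float. A toy model makes the mechanics clear. This is purely an illustration of the description above, not Safaricom’s actual system; the class and method names are my own invention:

```python
class EFloatLedger:
    """Toy model of an M-Pesa-style ledger: user balances are claims on
    an e-float, which is in turn backed by cash held in bank accounts."""

    def __init__(self):
        self.balances = {}      # user -> e-float balance
        self.cash_backing = 0   # cash deposited with partner banks

    def deposit(self, user, cash):
        # A user hands cash to an agent in exchange for e-float.
        self.cash_backing += cash
        self.balances[user] = self.balances.get(user, 0) + cash

    def transfer(self, sender, recipient, amount):
        # Phone-to-phone transfer: only the ledger entries change hands.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient e-float")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

    def withdraw(self, user, amount):
        # An agent pays out cash against the user's e-float claim.
        if self.balances.get(user, 0) < amount:
            raise ValueError("insufficient e-float")
        self.balances[user] -= amount
        self.cash_backing -= amount

ledger = EFloatLedger()
ledger.deposit("alice", 1000)
ledger.transfer("alice", "bob", 400)
ledger.withdraw("bob", 400)
# Invariant: e-float in circulation always equals the cash backing it.
assert sum(ledger.balances.values()) == ledger.cash_backing
```

Note that transfers touch no cash at all; cash only moves at the edges of the system, when agents take deposits or pay out withdrawals. That is both M-Pesa’s strength and, as discussed below, the source of its remaining overheads.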

Today, two-thirds of Kenyans use M-Pesa and 25 percent of the country’s GDP flows through it. Vodafone, who own 40 percent of Safaricom, have brought M-Pesa to Tanzania, South Africa, Fiji, India, Romania and other countries. The relief group Concern Worldwide used M-Pesa to help bring aid to Kenya’s remote Kerio Valley. With the nation’s institutions frozen when violence broke out following a hotly contested election, this form of digital currency provided a means of moving money around, and the transaction fee that Safaricom charged was far less than the cost of transporting food and materials. In Tanzania, people who neither live near a hospital nor can afford to travel to one are helped by an organisation called Comprehensive Community-Based Rehabilitation, which uses M-Pesa to cover their travel expenses.
PROBLEMS WITH M-PESA

As a form of digital money, M-Pesa is not without its drawbacks. To the end user it may appear automatic, but lurking in the background is an infrastructure that is unwieldy and expensive. Agents still have to handle cash, indeed large amounts of cash, which can leave them vulnerable to criminals. And, as Vigna and Casey explained, “when agents run out of money, they have to either stop what they are doing, close the shop and go to a bank, or stop what they are doing and send somebody on their behalf”. Also, Vodafone has partnerships with other payment networks that all charge the usual fees and banking-system-dependent costs we have unfortunately come to expect from such a middleman-heavy service.

CRYPTOCURRENCY TO THE RESCUE
Little wonder, then, that many cryptocurrency enthusiasts see bitcoin as a solution to the problems that M-Pesa and other banking-system-dependent forms of digital money cannot resolve. After all, bitcoin makes possible the direct transfer of money between two parties, entirely bypassing the cumbersome and expensive system for international transfers. And because bitcoins are essentially nothing but lines of code, participating in this form of currency does not even necessarily require a smartphone. A project called 37Coins uses people who have Android smartphones as ‘gateways’ to transmit messages, and this allows others to use cheaper, more rudimentary phones to send money via SMS. Mozilla, the company perhaps best known for the Firefox browser, sells a suitable phone for just $25.

Cryptocurrency also deals with the documentation problem. As Vigna and Casey pointed out, “you, your identity and your credit history are irrelevant. You do need an electronic platform with which to connect to the Internet. But if you are able to get that, bitcoin allows you to send or receive money from anywhere”. With smartphones becoming cheaper, and bitcoin wallets becoming easier to use, this can only help decentralised, peer-to-peer cryptocurrency spread further.
Eliminating the need for middlemen who all take their cut lowers the cost of transactions. But this is only the beginning of the benefits that the technology behind bitcoin could bring about. The blockchain provides a middleman-free way to exchange any asset: not just money but intellectual property, contracts, and so on. It creates an irrefutable public record not controlled by any one central institution. According to Vigna and Casey, “the Blockchain’s groundbreaking model for authenticating information could liberate the poor from the incompetence and corruption of bureaucrats and judges. Digitised registers of real-estate deeds, all fully administered by a cryptocurrency computer network without the engagement of a central government agency, could be created to cheaply and reliably manage people’s rights to property”.
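The ‘irrefutable public record’ idea can be illustrated in a few lines of code. This is not Bitcoin’s actual data structure (which adds proof-of-work, Merkle trees and much else), just a minimal hash-chain sketch, with made-up example records, showing why past entries cannot be quietly altered:

```python
import hashlib

def chain(records):
    """Build a minimal hash chain: each entry commits to the previous hash."""
    blocks, prev_hash = [], "0" * 64
    for record in records:
        block_hash = hashlib.sha256((prev_hash + record).encode()).hexdigest()
        blocks.append({"record": record, "prev": prev_hash, "hash": block_hash})
        prev_hash = block_hash
    return blocks

def verify(blocks):
    """Recompute every hash; any tampered record breaks the chain."""
    prev_hash = "0" * 64
    for block in blocks:
        expected = hashlib.sha256((prev_hash + block["record"]).encode()).hexdigest()
        if block["prev"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

# A toy register of land deeds (entirely fictional names and plots).
deeds = chain(["plot 1 -> Amina", "plot 2 -> Joseph", "plot 1 -> Amina's heirs"])
print(verify(deeds))            # True: the record is internally consistent
deeds[0]["record"] = "plot 1 -> someone else"
print(verify(deeds))            # False: rewriting history is detectable
```

Because each entry’s hash depends on everything before it, a corrupt official cannot alter an old deed without invalidating every subsequent entry, which is exactly the property a digitised land register would need.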

THE FUTURE

Looking further into the future, we can foresee a time when your money is truly your money. This is not the case with cash. Anybody who gains access to your purse or wallet can spend the money it contains. But if your smartphone will not unlock without biometric data unique to you, then it’s useless to anyone else. The science-fiction author Charles Stross imagined a scenario in which a thief snatches a bag, only for the bag to start screaming in distress at being handled by a stranger. With a combination of sensors and artificial intelligence that can distinguish between property’s rightful owner and everybody else, and GPS tracking, our personal devices could behave just like that bag, immediately alerting authorities and providing incriminating evidence.

As for people who have permission to access our property, intelligent devices could allow for more precise control over the extent of that access. No more having to carry different loyalty cards; the phone would track your position, know you are in a certain store and allow you to use its customer reward scheme. People who rent out their homes on Airbnb and other such services could rely on smartphones that provide access to the home for a set period, or which permit entry to some rooms but not others. And, since bitcoin does not care who you are, anything smart enough to begin using the service could do so. As Mike Hearn, a former Google employee, pointed out, “bitcoin has no intermediaries. Therefore, there’s really nothing to stop a computer from connecting to the Internet and taking part”. Indeed, any suitably smart AI could, and Hearn has envisaged driverless taxis that connect to an automated, electronic marketplace he has dubbed ‘Tradenet’. The car (or, rather, the AI that controls it) would own itself, paying its costs and receiving its own revenue. If it were programmed to provide as cheap and efficient a service as possible, it would be focused on maximising its productivity, with no interest whatsoever in bling and other signs of material wealth.
THE END OF MONEY?
The issue of robots taking over tasks traditionally the preserve of wage-earning humans has led some to suppose that money won’t be necessary in the future. It seems a reasonable argument: the robots don’t work for wages, humans can’t compete against them for jobs, so goods and services might as well be provided free by our tireless AI servants.

But this argument assumes that money is merely a commodity recognised as a unit of exchange in order to overcome what would otherwise be cumbersome barter exchanges. However, if we look past the physical manifestations of money (be that gold coins, paper notes or lines of code) and focus instead on the credit and trust relationships between the individual and society at large, we discover money is not so much an intrinsically valuable commodity as something akin to a social contract, whose value depends entirely on everybody agreeing it can be redeemed for an agreed-upon measure of goods and services. Even if robots completely take over the economy and do all the jobs, it is still an economy, and there would still be resources and services whose relative value has to be measured somehow. So it seems likely that robots and AI would rely on some way of measuring the relative value of the resources they are using, the goods they are creating, and the services they are offering. So long as there is a society, there will be obligations between debtors and creditors. And that, ultimately, is what money is.

CONCLUSION
The unbanked are a reminder that scarcity often has little to do with resources being scarce, and much to do with our lacking the ability to access them. There is a tremendous reserve of human potential unfortunately constrained by cumbersome bureaucracy, corruption, and unengaging work. By finding solutions to these problems, we can make the future brighter. Mobile technology, cryptocurrency and the blockchain are doing their part to make that happen.

REFERENCES
“The Rational Optimist” by Matt Ridley.
“Accelerando” by Charles Stross.
“Cryptocurrency” by Paul Vigna and Michael J. Casey.
“Rethinking Money” by Bernard Lietaer and Jacqui Dunne.
“The Nature of Technology: What It Is and How It Evolves” by W. Brian Arthur.
