Strikes. They’re a nuisance, aren’t they? Bringing disruption to our lives by denying us the services we rely on. But have you ever noticed how the workers who organise strikes always seem to be employees at the lower end of the corporate hierarchy? It’s always blue-collar workers, junior doctors and other lowly types that are threatening such action. Executives, for some reason, never stage a walkout.

I wonder why that is?

Now, some might think the reason is obvious: Strikes are undertaken in order to get more pay, and so executives have no need for such action as they are already very handsomely compensated. For example, if you are an advertising executive, your yearly salary is around half a million pounds. Not too bad!

But, actually, ‘more money’ is not the only reason why workers feel the need to strike. Sometimes, strike action is undertaken in order to bring to the world’s attention unfair working practices. If being treated unfairly justifies a walkout, then maybe executives would have a reason to strike?

Think about how such people are portrayed in movies. In nearly all cases, executives in films are portrayed as corrupt. You have Gordon Gekko in ‘Wall Street’, breaking laws and destroying small businesses in his thirst for more dirty money. You have the executive classes in ‘Elysium’, living in luxury aboard their space station while down on Earth their overworked, underpaid blue-collar employees are callously discarded when they fall foul of atrocious working conditions the higher-ups are too uncaring to fix. You have the CEO of OCP looking on in concern as RoboCop 2 lays waste to the city: not concern for the people it’s killing, mind you, but for what it could mean for his company’s shares (“This could look bad for OCP, Johnson! Scramble our best spin team!”).

Those are just a few examples of films that make businessmen out to be bad guys. Now try to think of movies where executives are not portrayed as villains, but as heroes. I can only think of two. Batman’s Bruce Wayne has a strong moral code. But that’s not a particularly good example, because he is only being altruistic when he is the Caped Crusader. His ‘Bruce Wayne’ persona is that of a billionaire playboy who is a bit of a prick. And in the Christopher Nolan films, the board of directors that runs Wayne Enterprises are your usual bunch of villains in suits. The other example I can think of is Ayn Rand’s ‘Atlas Shrugged’, and do you know what that book and its film adaptation are about? They’re about successful businessmen becoming so disgruntled with being portrayed as villains by society that they go on strike.

So, given how often successful businessmen are portrayed as bad guys, why don’t they ever stage a walkout and remind us all of how much we rely on the work they do, just as their fictional counterparts in Rand’s opus did?

I think the reason is this: it just wouldn’t work out the way it did in ‘Atlas Shrugged’. In that story, society soon started falling apart. When workers low down in the corporate hierarchy stage a walkout, the effects are, indeed, most often immediate and near-catastrophic. Everything grinds to a halt, everyday life is hopelessly disrupted, and we are reminded that such people provide vital services we can scarcely do without. I would suggest that if the executive classes were to stage a walkout, life would not grind to a halt, at least not for quite some time. On the contrary, most people would not even notice anything amiss.

Now, you might counter that this is mere speculation with nothing to back it up. However, I believe there are a couple of examples that indicate that what I say is true.

The first example involves something that happened in Ireland between 1966 and 1976. During that decade, Ireland experienced three bank strikes that caused the banks to shut down for a total of twelve months. While they were closed, no cheques could be cashed, no banking transactions could be carried out, and the Irish lost access to well over 80% of the money supply.

You would have thought this would have spelled utter disaster for Ireland. After all, banking executives are among the top earners (paid around £5 million a year, as well as being awarded endless bonuses) and we’re always being told of the utterly vital function the banking and financial sectors play in the economy. Surely, then, Ireland was brought to her knees very soon after the banks closed their doors and removed their services?

Actually, no. Instead, the Irish just carried on doing business without the banks. They understood that, since the banks were closed, there was nothing to stop people writing a cheque and using it like cash. Once official cheques were used up, people used stationery from shops as cheques, written in denominations of fives, tens and twenties. And it was not just individuals who operated this mutual credit system; businesses also got in on the act. Large employers like Guinness issued paycheques not in the usual full-salary amount but in various smaller denominations, precisely so they could be used as a medium of exchange, as though they were cash.

All this was possible because, at the time, Ireland had a small population of three million inhabitants. In most communities, people had a high degree of personal contact with other individuals, and where knowledge of somebody was lacking, local shops and pubs had owners who knew their clientele very well and could vouch for a person’s creditworthiness.

According to economics professor Antoin E. Murphy, author of ‘Money in an Economy Without Banks’, “The Irish created an unregulated, totally anarchistic community currency matrix…there was nobody in charge and people took the checks they liked and didn’t take the checks they didn’t like….And, it worked! As soon as the banks opened again, you’re back to fear and deprivation and scarcity. But until that point it had been a wonderful time”.

A few years before the Irish bank strikes, New York’s refuse collectors went on strike, and just ten days later the city was brought to its knees. I don’t think anyone would have described that situation as ‘a wonderful time’. And unlike city bankers with their millions, refuse workers get around £12,000 a year.

Another example suggesting that executives wouldn’t be missed for quite some time were they to disappear is the company Uber. It saw not only the resignation of its founder, Travis Kalanick, but also the departure of a whole bunch of other top executives, so that, according to a 2017 article in ‘MarketWatch’, it “is currently operating without a CEO, chief operating officer, chief financial officer, or chief marketing officer”. Did the company fall down without the aid of these essential people? No, it carried on just fine without them.

Now this is intriguing. Why is it, that when low-paid staff nearer the bottom of the corporate hierarchy go on strike we feel the pain almost immediately, but on the rare occasions when highly-rewarded executives don’t show up for work nobody cares because nothing much changes?

I think it all hinges on what these people actually do. What do they actually do? It’s hard to say, because any role you can think of that might be of use to a company turns out to be a job description for somebody lower down the hierarchy. Do they make anything, these executives? No, the workers down in manufacturing do that. Do they manage anything? No, managers do that. Are they responsible for sales? No, that’s what salespeople are for. And so on. Now, I’m not suggesting the CEO does literally nothing, but it stands to reason that when you have delegated responsibility for just about everything to your subordinates, it’s going to harm the company much more if the subordinates don’t show up than if you were to disappear.

And that’s just counting the official jobs subordinates have. But what about unofficial ones? Take personal assistants. If you have ever watched The Apprentice, you know the sort of employee I am talking about: the woman or man at the desk who answers the phone and says ‘Lord Sugar/Mr Trump will see you now’. According to David Graeber, secretarial work like answering the phone, doing the filing and taking dictation is not all PAs do: “in fact, they often ended up doing 80 percent to 90 percent of their bosses’ jobs, and sometimes, 100 percent…It would be fascinating—though probably impossible—to write a history of books, designs, plans, and documents attributed to famous men that were actually written by their secretaries”.

So businesses seem not to be negatively affected when executives don’t show up for work. But when they are present, is their work of value to society? Not according to studies of negative externalities (in other words, the social costs of doing business). Let’s take the advertising executive mentioned earlier. As you may recall, advertising executives bring home a yearly salary of around £500,000. But the studies reckon that around £11.50 of social value is destroyed per £1 they are paid. Contrast this with a recycling worker, who brings home a yearly income of around £12,500 and creates £12 in social value for every £1 they are paid.
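Taking those quoted figures at face value, the asymmetry is easy to make concrete. The following back-of-envelope sketch simply multiplies each salary by the estimated social value created (or destroyed) per pound paid; the salaries and per-pound figures are the rough estimates quoted above, not precise measurements:

```python
# Back-of-envelope comparison of yearly net social value,
# using the (estimated) figures quoted in the text.

def net_social_value(salary, value_per_pound):
    """Social value created per year (negative = destroyed),
    given a salary and the social value per £1 paid."""
    return salary * value_per_pound

# Advertising executive: ~£500,000/year, around -£11.50 per £1 paid
exec_value = net_social_value(500_000, -11.50)

# Recycling worker: ~£12,500/year, around +£12 per £1 paid
worker_value = net_social_value(12_500, 12.00)

print(f"Advertising executive: £{exec_value:,.0f} per year")
print(f"Recycling worker: £{worker_value:,.0f} per year")
```

On these numbers, the executive’s salary corresponds to several million pounds of social value destroyed each year, while the worker on a fortieth of the pay creates a six-figure surplus.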

This, then, is why executives don’t strike. Far from reminding us what a valuable service they provide, it would instead shine a light on how businesses could function perfectly well without them, at least for much longer periods than they could function if their much lower-paid subordinates were to stage a walkout. For people who are a credit to society in terms of creating more social value for every pound they are paid, strike action can be an effective way of emphasising the value to society their work generates. But that can hardly be the case when your work causes negative externalities that cost society more than it benefits from your existence. In that case, strikes can only shine a light on the fact that you are not all that necessary.


‘Bullshit Jobs: A Theory’ by David Graeber

‘Rethinking Money’ by Bernard Lietaer and Jacqui Dunne

‘Money in an Economy Without Banks’ by Antoin E. Murphy



What Videogames Teach Us About Work

Videogames have been featuring in the news recently. BBC Radio 4 is running a half-hour programme about Fortnite, and in an article written for the i by Will Tanner it was reported that a Universal Basic Income experiment was ended because “ministers refused to extend its funding amidst concern that young teenagers would stay at home and play computer games instead of looking for work”.

That argument had a tone that is sadly familiar, depicting videogaming as an addictive evil that distracts its victims from what they ought to be doing. But I think it would be more accurate to say that gamers have already found meaningful work and are reluctant to forsake it and submit to less rewarding labour instead.

This way of looking at it goes largely unrecognised because we are not taught to equate videogaming with work. Instead, you ‘play’ a videogame and we are raised to believe that play is childish, a distraction, mere fun. Play, we are encouraged to believe, is the opposite of work.

But it really isn’t. One only has to look at the play other animals engage in to see there is a serious side to it. It’s a way of honing skills that will become essential in later life.

Similarly, in videogaming we find many activities that can be seen to hone skills that are important in this digital age we live in. Authors Byron Reeves and J. Leighton Read list over a hundred such activities, including:

“Getting information: Observing, receiving and otherwise obtaining information from all relevant sources.

Identifying information by categorising, estimating, recognising differences or similarities and detecting changes in circumstances and events.

Estimating sizes, distances and quantities or determining time, cost, resources, or materials needed to perform a work activity.

Thinking creatively: developing, designing or creating new applications, ideas, relationships, systems or products, including artistic contributions”.

Also, in an article written for ‘Wired’ (“You Play World of Warcraft? You’re Hired!”), John Seely Brown and Douglas Thomas explain how “the process of becoming an effective guild master amounts to a total-immersion course in leadership…to run a large one, a guild master must be adept at many skills: attracting, evaluating and recruiting new members; creating apprenticeship programs; executing group strategy…these conditions provide real-world training a manager can apply directly in the workplace”.

Far from being a distraction from work, videogames are, along with jobs, one of modern life’s two main work providers. Instead of lending support to the idea that people don’t want to work, videogames demonstrate how eager we are to engage in productive activity, to reach for goals, to solve problems and to take part in collaborative projects.

It does, however, raise a question: how come one work provider is able to draw upon willing and eager volunteers, while the other (jobs) mostly creates a feeling that work is a necessary evil you wouldn’t do if you had a choice? And, yes, that is how a great many people feel, as suggested by polls indicating that as many as ninety percent of people hate their jobs.

Fundamentally, I think it all has to do with the direction in which money flows, and how that affects the design of work in videogames and jobs.

What do I mean by the direction in which money flows? Quite simply, I mean that if you have a job, then, assuming you are not an unpaid intern, a company will be paying you to work. This means that you are both an investment and a cost. On the other hand, when it comes to videogames, you pay a company to work, since you have to first purchase the game (and even if it is free-to-play like Fortnite, the company will have some means of extracting money from you). This means that you represent almost all profit, and only negligible cost.

Because videogame publishers want as many people to spend money on their games as possible, it obviously makes sense if working in a gaming context is as enjoyable and rewarding as it can be. When it comes to making work engaging, productive activity should provide opportunity to pursue mastery; it should offer autonomy, flexibility, judgement and creativity that is firmly in the hands of the individual doing the actual work.

The best videogames are great at providing all these conditions. Autonomy and flexibility are found in games where you don’t have to tackle challenges in a strictly linear fashion but can forge your own path instead. For example, in ‘Batman: Arkham Knight’ you, as the Caped Crusader, are free to roam Gotham City, swooping down to fight crime as and when you find it. If you hear an alarm ringing, you can locate its source and do a sub-mission involving a bank robbery. If you see smoke you can attempt to arrest Firefly. Exactly how you get to the game’s finale is entirely up to you.

Many games offer creativity, providing opportunities to customise the look of your character or items you have acquired. Some games come with comprehensive editing tools that offer even more scope for creative expression, such as ‘LittleBigPlanet’, which goes as far as enabling players to create whole new games. And since their very inception, videogames have given us the chance to exercise our judgement and gain mastery, as we make the snap decisions required to advance up the high-score charts, helped by well-crafted feedback systems that inform us when we are doing well and when we should try alternative strategies.

Now, it’s true that jobs may also provide the things that make work worthwhile. But the crucial difference is that, where videogames are concerned, there is never a good reason to try to reduce or eliminate such qualities. Doing so would only make for a bad game that nobody would choose to play. There is, however, a reason why employers might want to reduce such qualities in a job. What unites these qualities is that they all help to enhance our individuality, and that’s not something employers necessarily desire. The more creativity, judgement and autonomy can be reduced at the individual level, the easier it becomes to train new recruits. Indeed, in many ways it’s preferable if your employees are less like unique individuals and more like interchangeable units that can be replaced at the shortest possible notice. That’s advantageous because it reduces the bargaining power of the workforce: you are less likely to complain about pay and working conditions if you know it won’t be too difficult for the boss to fire and replace you.

The result? A cheaper workforce, more value extracted from the commodity of labour-power, and more profit for those the labourers work for. You have to bear in mind that employees are quite low down in the pecking order for rewards from the labour process. Governments want their cut, banks and financial services want their cut, the company executives want their cut, and they take priority over the working classes, rather like how the more powerful predators and scavengers get the juicy meat and leave only scraps for the rest to fight over. When it comes to the pursuit of more profit, it pays to make work as unrewarding (in a monetary sense) as you can get away with, which often results in work being designed to be as unrewarding (in the sense of not being engaging) as possible.

“But why would people choose to do work designed to be lacking the very qualities that make it engaging?”, you might be asking. The answer can be found in ‘negative motivation’. Being without a job can have serious consequences. Cut off from an income, bills cannot be paid and the threat of rough sleeping looms ever closer. On top of that there is cultural pressure to ‘get a job’, so much so that we don’t care if the job is useless or even harmful to society (‘at least s/he has a job’). This all amounts to enormous pressure to submit to employment, not really because of the gains people expect if they do have a job, but rather because of the punishment they dread if they don’t.

Videogame companies, on the other hand, cannot rely on negative motivation, for the simple reason that hardly anyone can be forced to play games (I say hardly anyone, because there are sweatshops in which people grind through MMORPGs to level up characters that can be sold on to richer customers). This further emphasises the point that videogames never have an incentive to make work less rewarding, whereas such incentives do exist in the world of jobs.


Videogames, far from demonstrating our distaste for work, in fact show how willing and eager to work we are. So willing, in fact, that our desire to work supports one of the most successful industries of the modern age. Every day, millions of us spend billions all so we can engage in the work videogaming requires. If we really hated work, the first person to put a quarter into the first arcade game would have walked away in disgust at having to pay to stand there and perform repetitive manual labour. What, are you crazy?

What videogaming shows instead is that if you can take that simple mechanical operation and craft around it creativity, flexibility, autonomy, judgement and mastery, the result is work that people want to do so much they will gladly pay for it. But if, in the interest of extracting more value for money out of your workforce, you reduce or eliminate such qualities, people will hate such work and will only submit to it if circumstances force them.

That’s what jobs teach us.



‘Total Engagement’ by Byron Reeves and J. Leighton Read.

‘Why We Work’ by Barry Schwartz.


Let ‘Em In: The Immigration Controversy

During the EU Referendum, some controversial issues formed part of the debate over whether the UK should vote Leave. One such issue was immigration. The Leave campaign’s slogan, promising that the UK would ‘take back control’, was understood to refer at least in part to some inability to control borders and decide as an autonomous country who to let in. The campaign poster ‘breaking point’, which depicted large crowds supposedly flooding into the UK, summed up Leave’s position and spoke to those who felt that change had come too fast and was leaving them disempowered.
Opposing this view was the belief that the free movement of people and goods had been beneficial overall. Somehow, though, sensible debates over the ability and desirability of controlling immigration in a global age invariably seem to turn into arguments over extreme positions tinged with xenophobia. Control over borders and limiting migration is criticised as though it were promoting a fortress mentality in which the drawbridge is raised never to be lowered again, and the UK becomes ‘little Britain’, isolated from the world and viewing all foreigners with suspicion and intolerance.
In order to understand why debates over immigration get pushed to extremes, we need to go back in history. Now, immigration has been happening for hundreds of thousands of years, ever since humanity left its place of origin (Africa) in search of new lands to settle. I don’t intend to give a complete history of this phenomenon, but instead want to focus on a period in postwar Britain that led to an infamous speech that would become an accusation levelled at anyone raising the issue of immigration.
At the end of World War 2, Britain was in need of extra manpower in order to help rebuild the country. So, the 1948 British Nationality Act came into being. This act declared that all the King’s subjects had British citizenship, which meant that around 800 million people had the right to enter the UK. This act, by the way, was never given any mandate by the People; it was, instead, a political decision. But it was not particularly controversial. For one thing, transportation was much more costly back then, so not many of the 800 million actually moved. Also, the fact that the country needed rebuilding, coupled with the fact that it was growing economically, meant that the half million who did arrive were easily absorbed.
In 1962, however, the Commonwealth Immigrants Act came into being, a quota system designed to place restrictions on immigration. Just prior to the introduction of this act, there had been a large influx of Pakistanis and Indians from the Muslim province around Kashmir. Like the Caribbean immigrants who had migrated following the British Nationality Act, these were hard-working men who brought some much-needed labour to textile mills in Bradford and surrounding towns, and to manufacturing towns like Leicester. But there were also some notable differences. The Pakistani and Indian immigrants were far more likely to send for their families, and they were much less interested in integrating with the communities around them. As Andrew Marr explained, this group was:
“more religiously divided from the whites around them and cut off from the main form of male white working-class entertainment, the consumption of alcohol. Muslim women were kept inside the house and ancient habits of brides being chosen to cement family connections at home meant there was almost no sexual mixing, either. To many whites, the ‘Pakis’ were no less threatening than the self-confident young Caribbean men, but also more alien”.
A year later, in 1963, Kenya won its independence and gave its 185,000 Asians a choice between surrendering their British passports and becoming full Kenyan nationals, or becoming effectively foreigners requiring work permits. Many decided to emigrate, to the point where some 2000 Asians a month were arriving in the UK by 1968. An amendment to the Commonwealth Immigrants Act that tried to impose an annual quota was rushed through by the then Home Secretary, Jim Callaghan (labour). Also, a Race Relations Bill was brought forward so that cases of discrimination in employment and housing could be tried in courts.
Although the Asian immigrants were well-educated, being as they were mostly civil servants, doctors and businesspeople, their arrival caused concern among a British public who noted once again that communities were changing without the electorate having given a mandate for it. This disquiet came to the attention of a member of the Conservative shadow cabinet, one Enoch Powell. Powell had seen how concerns over immigration had led to a 7.5 percent swing to Peter Griffiths, who had gone on to defeat Labour’s Patrick Gordon Walker in Smethwick during the 1964 election. The campaign Griffiths had run was a shockingly racist one. Its slogan was ‘if you want a nigger for a neighbour, vote Labour’. Two years later, Griffiths would lose his seat, having been denounced by Prime Minister Harold Wilson as a ‘parliamentary leper’. But Powell saw some merit in Griffiths’ position, particularly the accusation that the political class was turning a blind eye to the effects of immigration.
So it was that on 20th April 1968, Powell gave a speech in Birmingham’s Midland Hotel. It opened with an anecdote about a constituent who was considering leaving the country because “in 15 or 20 years’ time the black man will have the whip hand over the white man”, and went on to say that this was a view shared by hundreds of thousands. Did Powell not have a duty to voice the concerns of these people? “We must be mad, literally mad”, he told the small crowd, “as a nation to be permitting the annual inflow of some 50,000 dependents”. Powell warned that if this immigration wasn’t stopped, the result would be unrest and riot:
“As I look ahead, I am filled with foreboding; like the Roman, I seem to see “the Tiber foaming with much blood”’.
That speech has since become known as the ‘rivers of blood’ speech. It led to Powell being sacked by Conservative leader Edward Heath, who called the speech “racialist in tone and liable to exacerbate racial tensions”. It would also come to have an effect on the ability to hold a sensible discussion about controlling immigration. As Jason Farrell and Paul Goldsmith, authors of ‘How to Lose a Referendum’, explained:
“he provided a bogeyman that could be used as a quick, lazy comparison to cut off as quickly as possible any debate about one of the key background policies of New Labour’s time in power. Becoming compared to Enoch Powell was what happened if you questioned the benefits of multiculturalism and immigration”.
We will investigate New Labour’s role in turning immigration into a politically-correct forbidden subject in an upcoming essay.
‘How to Lose a Referendum’ by Jason Farrell and Paul Goldsmith
In the 1960s, responding to a perceived public dissatisfaction over immigration, Enoch Powell delivered his infamous ‘rivers of blood’ speech, and in so doing created “a bogeyman that could be used as a quick, lazy comparison to cut off” any debate over multiculturalism or immigration. In the same decade, future politicians were children growing up amidst struggles for racial equality that reached their peak during the 60s and the following decade. Growing into adulthood, many at the top of New Labour, as well as many of its activists, had a metropolitan cultural liberal outlook that considered immigration to be an inherently good thing. In the eyes of this metropolitan mindset, there was little difference between wanting tight controls over immigration, and being racist.
Indeed, some have made the case that New Labour deliberately encouraged immigration because they wanted to remake the country in their own liberal image. For example, Andrew Neather, a former adviser to Number 10 and the Home Office, reckoned “the policy was intended, even if this wasn’t its main purpose, to rub the Right’s nose in diversity and render their arguments out of date”. Others, though, have denied such claims. One such person was Barbara Roche, who was Labour’s Immigration Minister from 1999 to 2001. She attributed rising immigration levels to the fact that the previous Conservative government had not only installed a failed computer system but also made cutbacks that left just 50 officials to make asylum decisions on a backlog of 50,000 cases.
It could be argued that any government at the time would have had to respond to a rapidly changing world. In the previous essay, we saw how the British Nationality Act theoretically opened the borders to 800 million people, but the expense of travel at the time imposed a practical limit on the numbers who actually did migrate. By the time New Labour came to power, however, forces of globalisation such as lower-cost air travel and mass communication, as well as numerous conflicts in Africa and the Balkans, had led to more rapid population movements. When increasing numbers of asylum seekers arrived from the Balkans, the pressure was on to move them away from the costs and dependency of the asylum system and toward the work permit route, and there was also pressure from business sectors to increase work permits in response to a booming economy and low unemployment. Meanwhile, higher education was being internationalised at a rapid pace, which meant New Labour could finance their policy of expanding university education in the UK by encouraging foreign students into the country.
From 1997 onwards, the decisions taken by New Labour added up to around 500,000 people arriving in the UK each year. By 2010, the UK population had increased by 2.2 million migrants, equivalent to a city twice the size of Birmingham. It was, at the time, the largest peacetime migration in the country’s history.
As a result, many places in the country that had previously been untouched by immigration suddenly found themselves host to significant migrant communities, while at the same time many British communities saw their livelihoods disappearing overseas as the winds of globalist change swept over them. If those people thought that a Labour government with a 179 majority would speak up for the working classes the party traditionally represented, they were in for a rude awakening.
In 2005, Tony Blair achieved a third electoral victory, but with a massively reduced majority. In the customary acceptance speech on the steps of 10 Downing Street, the Prime Minister radiated humility and insisted he had heard the rising numbers of people concerned about immigration and the forces of globalisation. But within five months, Blair gave a speech at his twelfth annual conference as Party Leader that dispensed with the concerned socialist act and went with full-on free market liberalism instead:
“I hear people say we have to stop and debate globalisation. You might as well debate whether autumn should follow summer … The character of this changing world is indifferent to tradition. Unforgiving of frailty. No respecter of past reputations. It has no custom and practice. It is replete with opportunities, but they only go to those swift to adapt, slow to complain, open, willing and able to change”.
In other words, capitalism was sweeping across the world, bringing opportunity but also insecurity and inequality, and the only assurance the Prime Minister could give his electorate was that nothing could be done for them and they just had to accept they were in a Darwinian market struggle for survival. Guardian journalist John Harris, upon hearing that speech, commented: ‘“Swift to adapt, slow to complain, open, willing and able to change.” And I wondered: if these were the qualities now demanded of millions of Britons, what would happen if they failed the test?’
It became increasingly obvious what would happen to such people. They would be left behind, largely unrepresented by the two major political parties. Worse still, these losers in the globalist race not only found themselves ignored and unrepresented by the political elite, they found their voices were actively repressed when they tried to focus attention on the most visible manifestation of the changes globalism and the free market had wrought: Immigration.
Of all the anecdotes that highlight the contempt with which a portion of the British electorate was treated, there is perhaps no better example than the case of Gillian Duffy. A 65-year-old widow from Rochdale, she came across Prime Minister Gordon Brown while he was on a walkabout for the 2010 election. She wasted no time in voicing her concerns, which included the national debt, the difficulty vulnerable people were having in claiming benefits, and the costs of higher education. She also voiced concerns over immigration:
“All these Eastern Europeans what are coming in, where are they flocking from?”.
Face to face with Mrs Duffy, Gordon Brown was pleasant and persuasive enough to mend the pensioner’s faltering support for the Labour Party. She herself later said how happy she had been with the answers he gave. But when Brown entered what he thought was the privacy of his car, a wholly different side to his character surfaced. The world became privy to this other side of Brown because he inadvertently left his Sky News mic on, broadcasting to the world:
‘That was a disaster. Should never have put me with that woman … whose idea was that?…she’s just a sort of bigoted woman, said she used to be Labour. It’s just ridiculous.’ 
This, then, was the attitude of the political elite who held the reins of power during New Labour’s time in office. The very personification of charm in public, but totally contemptuous of even the mildest concerns over immigration in private. A whole class of politicians who had grown up amidst the 60s and 70s struggles for racial equality had come to adopt such a strong metropolitan mindset that they equated controls on immigration with racism and dismissed concerns over the movement of people as the ravings of bigots.
Mrs Duffy’s question was a reference to decisions made by the EU and Britain to open up the country to immigration from Eastern Europe. We’ll look at that next.
“How to Lose a Referendum” by Jason Farrell and Paul Goldsmith
In 2010, a Labour-supporting ex-councilwoman from Rochdale called Gillian Duffy confronted the then Prime Minister, Gordon Brown. She asked a bunch of questions, one of which- “all of these Eastern Europeans what are coming in, where are they flocking from?”- resulted in her being dismissed as a bigot when Brown thought he was out of earshot.
Anyone seeking a proper answer to Mrs Duffy’s question would have to look back to May 2004. That was when the EU was due to undergo its largest expansion in terms of territory, population and number of states, as ten new countries, most of them former communist states of central and Eastern Europe, were set to join: Cyprus, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Malta, Poland, Slovakia and Slovenia. The most important thing to note about these countries is that their economic output was much lower than that of the existing member states. Acceptance into the EU therefore presented a golden opportunity for their people, for it meant they would have the right to move anywhere in the EU whether there was a job offer waiting for them or not, and be entitled to the same rights and privileges as national citizens. It was also good news for business because, since those job-seekers were coming from countries whose per capita GDP was less than half the EU average, they were willing to offer cheaper labour.
It was not good news for everyone, however. For those nationals already at the lower end of the labour market, the arrival of an even cheaper workforce put their jobs under threat. Most of the existing member states recognised this problem, and therefore decided to implement transitional controls that delayed the new citizens’ full access to their labour markets for up to seven years. German Chancellor Gerhard Schroeder, for instance, told the German people in 2000:
“Many of you are concerned about expansion of the EU … The German government will not abandon you with your concerns … We still have 3.8 million unemployed, the capacity of the German labour market to accept more people will remain seriously limited for a long time. We need transitional arrangements with flexibility for the benefit of both the old and the new member states”.
Accordingly, Germany initially maintained transition controls like bilateral quotas on the number of immigrants and work permits. All of the big European countries decided to take up transitional controls with one exception, and that was the UK.
The reason New Labour decided not to implement transitional controls had to do with the findings of a research team, led by Professor Christian Dustmann, that had been commissioned by the Home Office. That research suggested that only around 13,000 immigrants were expected to arrive each year. The economy was booming at the time, and the Performance and Innovation Unit at No 10 had produced a 73-page report claiming that the foreign-born population in the UK contributed ten percent more to government revenue than it received in State handouts.
It could also be said that, even if the Home Office had wanted strict controls on immigration, it would have come under pressure from other departments. These included the Foreign Office, which had diplomatic reasons for being pro-immigration; the education department, which looked forward to extra revenue from foreign students; and, perhaps most important of all, the business department, which certainly wasn’t going to turn its nose up at an influx of cheap and willing labour. Finally, as we have seen in a previous essay, New Labour’s cabinet were children of the 60s and 70s who had grown up during the struggles for racial equality and become adults with a metropolitan liberal mindset that was very much pro-multiculturalism. For all those reasons, New Labour decided not to apply transitional controls.
There was, however, an important caveat to the Dustmann report’s claim that the number of immigrants coming to the UK would be 13,000 per year: the report actually said that the numbers would be a great deal higher if the other member states decided to impose transitional controls. As we have seen, that is indeed what they decided to do.
Between 2004 and 2012, 423,000 migrants came to the UK. As the noughties progressed, the effects of global conflicts and financial crises swelled the numbers even further. A combination of people fleeing Middle East conflict and further expansion of the EU (many members of which were suffering crippling austerity due to the financial mess that was the Euro) meant that the UK’s population increased by 2.2 million, equivalent to adding a city twice the size of Birmingham.
Given that they were coming from countries that were either poorer or suffering from conflict, this influx consisted of people prepared to offer much cheaper labour. The effects of this were becoming apparent, and were spoken about by people unafraid to defy a political correctness that equated any concern over uncontrolled immigration with xenophobia. People like Nigel Farage:
“By 2005, it was obvious that something quite fundamental was going on. People were saying, ‘We’re being seriously undercut here’.”
In the next essay, we’ll look at who benefits from uncontrolled immigration- and who doesn’t.
“How to Lose a Referendum” by Jason Farrell and Paul Goldsmith
Toward the end of the 20th century and the start of the 21st, the UK was governed by a party with a decidedly globalist outlook and metropolitan ideology. There is perhaps no better explanation for why debates over controlling immigration degenerate into accusations of xenophobia: it is a vestige of a time when any such debate was pretty much a forbidden subject. In 2005, when Conservative leader Michael Howard said “it’s not racist to impose limits on immigration”, he was met with outrage from New Labour. Now, more than ten years later, it is possible to at least suggest that the uncontrolled movement of people is not always and everywhere a good thing without being angrily shouted down. But the suspicion that you might be xenophobic lingers on. Invariably, suggestions that immigration needs to be controlled are criticised as though they were calls to stop it altogether and become isolationist. Whoever suggests there is any problem with mass migration can expect to be lectured on the many genuine benefits the free movement of people has delivered.
But one can acknowledge the benefits immigrants bring while recognising that mass migration has not been good for everyone. This was highlighted by a chart created by economist Branko Milanovic and his colleague Christoph Lakner. Known as the ‘elephant curve’, it lines up the people of the world in order of income and shows how their real income changed between 1988 and 2008. One group- the 77th to 85th percentile- experienced an inflation-adjusted fall in income over that period. These people are the lower-skilled working classes of developed countries like the UK. Something like 80 percent of the world has an income lower than that of this group, so given how financially difficult life can be for the working class, you can appreciate just how poor most of the world is, and just how intense the competition for a better life could become, absent any control over the movement of people.
To illustrate why the working classes in developed nations are made worse off by uncontrolled immigration, let’s turn to a simplified example. Imagine workers in a factory. The production line does not have sufficient numbers of employees to run properly. Such a situation is not good either for the business itself or the employees. If it were to continue, the plant would close and the employees would lose their jobs.
Now, let’s suppose the plant has to recruit from overseas in order to fill the labour shortage. From the perspective of the employees, what would be the ideal immigration system? It would be a highly controlled system that only let in as many qualified people as are required to make up the shortage.
The owners of the plant might see things rather differently, however. For them, the ideal is to have no control over the movement of people and to tempt as many people into the country as possible. Now, given that these people have no vacancies to fill, what use are they, economically speaking? The answer is that they put pressure on the existing workers, who feel they can’t raise issues about current standards or even falling standards, for fear of being replaced. “There are plenty who would agree to these conditions”, we can imagine any dissenters being told. This pressure to drive down both wages and investment to improve or maintain working conditions is good for the owners, since they get to appropriate more of the wealth that their workforce produces. Surprise, surprise, the top 1 percent on the elephant curve have a line that’s almost vertical.
In case this sounds like a mere hypothetical, let’s look at some real examples. In 2006, Southampton’s Labour MP, John Denham, noted that the daily rate of a builder in the city had fallen by 50 percent in 18 months. Or consider the findings of Guardian journalist John Harris, whose series ‘Anywhere but Westminster’ uncovered a Peterborough agency advertising rates and working conditions that only migrants would accept.
But perhaps the most striking example is the MD of a recruitment firm who admitted to the authors of ‘How To Lose a Referendum’ that, were it not for uncontrolled immigration, pay and working conditions might have to improve. All these examples point to the same thing: an increase in the supply of labour, irrespective of any increase in demand, reduces workers’ bargaining power, and the monied take advantage of this to appropriate even more wealth from those who actually do the work.
It should be noted that such outcomes are not usually due to mass immigration alone. In April 2017, the Economist published a study of those areas of the UK that had seen the sharpest increase in new migrants over the ten-year period from 2005 to 2015. In those areas- dubbed ‘migrant land’ by the magazine- real wages fell by a tenth, faster than the national average, and there was also a decline in health and educational services. But there were other factors impacting these areas too. They suffered cuts to public services following the Coalition’s move to austerity in the aftermath of the Great Recession, and they were disproportionately affected by the decline of the manufacturing sector.
Some have argued that these other factors are the real issue and that pointing the finger of blame at migrants is just scapegoating. Consider the words of Justin Schlosberg, media lecturer at Birkbeck, University of London:
“The working-class people have had an acute sense that their interests were not being represented by the banks and Westminster. What the right-wing press seeks to do is –rather than identify the true source of the concerns, which is inequality, concentrated wealth and power and the rise of huge multinational corporations that dominate the state. All of that is an abstract, complex story to tell. The story they told which more suits their interests is: the problem is immigrants. The problem is the person who lives down your street who works in your factory, who looks different and has different customs. It plays on those instinctive fears”.
Now, in some ways you could say he makes a fair point. Immigrants are not bad people, they are just ordinary folk doing what they can to improve their circumstances. But the fact is that mass immigration is part of the ‘abstract, complex story’ that is globalism.
So what is globalism, anyway? Is it the brotherhood of humanity, people of all races, creeds and religions holding hands and united under common bonds? If that is indeed what it is, then it would surely be welcomed by the vast majority of us. After all, the latest estimates are that only 7 percent of the UK’s population are racist.
But there is another way to look at globalism, and that is to see it as the commodification of the world, its resources and its people. It’s a global network of banking and financial systems that seems always ready to blow up and spread systemic risk, the fallout landing on the working classes while the one percenters get government bailouts. It’s a global transport and communication system that enables corporations to move manufacturing and other sectors to wherever rules and regulations are more relaxed and people more exploitable. Most damningly of all, it is the commodification of people, sometimes to the point where they are reduced to the status of disposable commodities. The tragic reality of that was vividly illustrated by the sight of greedy traffickers dangerously overfilling barely seaworthy boats with people desperate to escape dire situations, lured by false promises of some other place where opportunities are boundless and nobody slips through the social safety net. 
What really awaits these people is sometimes not just low-paid work but actual slavery. Incredibly, when their status as slaves is pointed out, such people often deny that is what they are, because the conditions they came from were so bad their current situation feels like a step up. While one has to feel for people as downtrodden as that, one must also acknowledge the negative effect this has on the working classes of developed nations. From the point of view of this group, the whole point of a job is to earn a living. To achieve that aim you need wages high enough to alleviate financial anxiety, a sense of stability and security in your working life, and sufficient free time with which to develop a more well-rounded existence. All of that is hard to achieve when you are competing for jobs with people who consider slavery an improvement, and when jobs are disappearing to places where pressure from unions and environmental groups is too weak or nonexistent to impose regulations on the exploitation of people or the natural world.
At the same time, the globalist commodification of everything suits the wealthy elite. Selling arms to warring nations, offering huge loans to corrupt leaders, supporting coups to overthrow more egalitarian governments, throwing regions into chaos so precious resources can be extracted on the cheap amidst the anarchy: all are money-making opportunities. And the consequences offer money-making opportunities too, as people fleeing countries ravaged by war and economic weapons of mass destruction arrive with so little bargaining power that their numbers put serious downward pressure on wages and working conditions (more profit for the owners) and increase competition for housing (which forces up land prices, thereby increasing the paper profits of the owner classes).
One has to wonder how things would have turned out if globalism had continued amidst a complete intolerance for debating the issue of uncontrolled immigration. For decades, the working classes of the UK were underrepresented by the political establishment. New Labour’s mindset was a mixture of metropolitanism and free-market ideology that imposed a Darwinian struggle for survival on people’s lives; it was followed by a Coalition that responded to the near-collapse of the world financial system, after deregulation led to insane risk-taking, with austerity, essentially making the working classes pay for the excessive risk-taking and greed at the top. Meanwhile, with even the mildest objections to uncontrolled immigration shouted down as xenophobia, only the extremists were prepared to speak up. People like Nick Griffin of the BNP, or Marine Le Pen of the Front National. More recently, Chancellor Merkel’s decision to open Germany’s borders to a million mainly Middle Eastern migrants is seen by some as a reason why the far-right Alternative für Deutschland won 50 percent of the vote in more depressed areas. That, as in other cases, was the result of simmering dissatisfaction over what globalism had wrought and what intolerant liberalism had deemed inadmissible for reasoned debate.
To quote the words of the Leave’s campaign poster, the rise of extremist groups is a sign that people’s tolerance for what globalism has done is at breaking point.
“How to Lose a Referendum” by Jason Farrell and Paul Goldsmith


Whacky Sci-fi Energy Proposals

Any mildly observant person is bound to notice that energy plays an important role in everyday life. Look around, and it is not too difficult to find various attempts at harnessing it. Plants extract energy from the sun through photosynthesis, animals extract energy by digesting organic material, and any industrial landscape is bound to have vehicles burning fossil fuels or the odd photovoltaic cell or wind turbine making use of renewable energies.
But, despite having sought ways of extracting energy from the environment for billions of years, life is still not at all efficient at doing so, at least not when its various attempts are compared to theoretical limits. If you want to know how much potential energy is available to be tapped, you must turn to what is probably the most famous equation in the world: E = mc^2. This equation is basically a conversion factor that tells you how much energy is contained in a given amount of mass. If you take something like a candy bar and multiply its mass by the speed of light squared, that tells you precisely how much energy the bar contains. The speed of light squared is a huge number (written in mph it is 448,900,000,000,000,000), so even a tiny amount of mass can unleash an enormous amount of energy. An atomic bomb’s nuclear explosion, for example, is the result of just a small fraction of its uranium’s mass being converted to energy.
If you were to eat that candy bar, you would extract a mere ten-trillionth of its mc^2 energy. To put it another way, the process of digestion is only 0.00000001% efficient. Burning fossil fuels like coal and gasoline fares a bit better, extracting 0.00000003% and 0.00000005% of the energy contained in such fuels respectively. How about nuclear fission, which, as we saw earlier, is capable of unleashing tremendous amounts of energy? Well, it certainly does a lot better than digestion or fossil fuel burning, but at an efficiency rating of 0.08%, it’s still far from ideal.
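For anyone who wants to check the arithmetic, here is a rough sketch in Python. The 50-gram candy bar mass is my own assumption for the example; the efficiency figures are the ones quoted above.

```python
# A rough illustration of E = mc^2 and the efficiency figures quoted above.
c = 3.0e8     # speed of light in m/s
mass = 0.05   # candy bar mass in kg (an assumed value for the example)

total_energy = mass * c**2   # joules locked up in the bar's mass
print(f"Total mc^2 energy: {total_energy:.2e} J")   # ~4.5e15 J

# Extraction efficiencies from the text, converted from percentages to fractions
efficiencies = {
    "digestion": 0.00000001 / 100,
    "burning coal": 0.00000003 / 100,
    "burning gasoline": 0.00000005 / 100,
    "nuclear fission": 0.08 / 100,
}
for process, eff in efficiencies.items():
    print(f"{process:>16}: {eff * total_energy:.2e} J extracted")
```

Even at digestion's dismal efficiency, that 4.5 quadrillion joules hiding in a candy bar is why the theoretical ceiling is so far above anything we actually achieve.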
The fact that we are mostly failing to put this energy to use can be considered good news, in that any energy shortage we experience has little to do with energy being a scarce resource and everything to do with our inability to access it. Unlike true scarcity (which we can’t do much about), an inability to access what’s available is a problem that can be addressed with appropriate technology. For example, by 2030 the world will need around thirty trillion watts, an energy need that could be met if we succeeded in capturing three ten-thousandths of the sun’s energy that hits the earth.
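As a sanity check on that claim, we can estimate the fraction ourselves from the standard solar constant and Earth radius. This is a back-of-the-envelope sketch that ignores atmospheric and conversion losses, which is presumably why the text's figure is slightly higher.

```python
import math

# What fraction of the sunlight striking Earth would cover a ~30 TW demand?
solar_constant = 1361.0   # W per square metre at the top of the atmosphere
earth_radius = 6.371e6    # metres

# Earth intercepts sunlight over its cross-sectional disc, pi * r^2
intercepted = solar_constant * math.pi * earth_radius**2   # watts
demand = 30e12                                             # thirty trillion watts

fraction = demand / intercepted
print(f"Sunlight intercepted by Earth: {intercepted:.2e} W")
print(f"Fraction needed for 30 TW: {fraction:.1e}")   # a few parts in ten thousand
```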
That would be a most welcome outcome in terms of securing our future, but even this achievement would not fare particularly well in terms of putting all available energy to good use. After all, most of the Sun’s output does not strike the earth but is instead dumped into empty space. Some radical thinkers have proposed ambitious schemes for harvesting this wasted energy.
One such proposal was put forward in 1960 by Freeman Dyson. His idea was to deconstruct Jupiter and use the material to form a spherical shell around the Sun. Doing so would enable our descendants to capture a trillion times more energy than we are capable of harvesting today. It would also provide 100 billion times more living space and, with the sun at the centre and you walking around on the inside of the sphere, permanent daylight everywhere on your habitat. However, with gravity ten thousand times weaker than what we’re used to, travelling all the way around such a sphere without falling off would be pretty much impossible. In fact, it’s probably fair to say that life in general (or, at least, life as we know it) would be infeasibly difficult at best and impossible at worst if we had to live on the inner or outer surface of the Dyson sphere itself.
A way around that problem may be to construct habitats within the Dyson sphere like the ones proposed by the American physicist Gerard K. O’Neill. Known as O’Neill cylinders, they could provide habitats much like those we are familiar with if they orbited the sun in such a way as to always point straight at it. Centrifugal force caused by their rotation could provide artificial gravity, and we could even have a 24-hour day-night cycle if mirrors directed the sunlight in an appropriate way.
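Out of curiosity, we can estimate how fast such a cylinder would need to spin to mimic Earth gravity. The 4 km radius below is my own assumption, loosely based on O'Neill's "Island Three" design; the physics is just the standard centripetal-acceleration formula.

```python
import math

# Spin rate needed for 1 g of artificial gravity at the cylinder wall.
# Centripetal acceleration: a = omega^2 * r, so omega = sqrt(g / r).
g = 9.81          # target artificial gravity, m/s^2
radius = 4000.0   # cylinder radius in metres (assumed)

omega = math.sqrt(g / radius)    # angular velocity in rad/s
period = 2 * math.pi / omega     # seconds per full rotation
print(f"One rotation every {period / 60:.1f} minutes")   # roughly 2 minutes
```

A rotation every couple of minutes is gentle enough that, at this radius, inhabitants would barely notice the spin.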
Obviously, constructing a Dyson sphere would be a feat of engineering way beyond anything remotely achievable today. But that didn’t stop its originator, Freeman Dyson, from considering them a realistic prospect, given sufficient time. “One should expect that, within a few thousand years of its entering the stage of industrial development, any intelligent species should be found occupying an artificial biosphere which completely surrounds its parent star”.
Amazingly, even this vastly ambitious project would not be all that successful at capturing the energy contained within the sun’s mass. Fusing hydrogen into helium converts only about 0.7 percent of the fused mass into energy, and a star like our Sun will fuse only about a tenth of its hydrogen before its life as a normal star is over and it expands into a red giant. So, even if we were to enclose the sun in a perfect Dyson sphere, we could not hope to put more than about 0.08% of the sun’s potential energy (i.e. the energy contained in its mass) to good use.
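The arithmetic behind that ceiling is short. Hydrogen fusion converts about 0.7 percent of the fused mass into energy (a standard physics figure, not one stated in the source), and only about a tenth of the Sun's hydrogen ever gets fused; multiplying the two reproduces the figure quoted above.

```python
# Why a Dyson sphere tops out at ~0.08% of the Sun's mc^2 energy:
fusion_mass_fraction = 0.007   # mass converted to energy in H -> He fusion
hydrogen_ever_fused = 0.1      # fraction of the Sun's hydrogen that gets fused

usable = fusion_mass_fraction * hydrogen_ever_fused
print(f"Usable fraction of the Sun's mc^2 energy: {usable:.2%}")   # 0.07%
```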
For those descendants wanting more power than even a Dyson sphere can provide, there is an idea put forward by the British physicist Roger Penrose. Many black holes are spinning, and this rotational energy could potentially be put to good use. Like all black holes, the spinning variety has a singularity (the remnant of a star so dense it has crushed itself to an infinitesimal size, and of which we know very little because it exists in realms of nature beyond anything our current models can handle) and an event horizon, a boundary surrounding the black hole beyond which nothing can escape the gravitational pull of the singularity at its centre. A spinning black hole also has another feature known as an ‘ergosphere’, where, according to Max Tegmark, “the spinning black hole drags space along with it so fast that it’s impossible for a particle to sit still and not get dragged along”.
What this means is that any object tossed into the ergosphere will pick up speed as it rotates around the black hole. Normally, such objects will inevitably cross the event horizon and be swallowed by the black hole. But Roger Penrose worked out that if you could launch an object at the right angle and have it split into two pieces, then only one piece would get eaten while the other would escape the black hole. More importantly, it would escape with more energy than you started with. This process could be repeated as many times as it takes to convert all of the black hole’s rotational energy into energy that can be put to work for you. Assuming the black hole was spinning as fast as possible (which would mean its event horizon was rotating at close to the speed of light), you could convert 29% of its mass into energy using this method. That would be equivalent to converting 800,000 suns with 100 percent efficiency, or having 1,000 million Dyson spheres working for billions of years.
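To get a feel for the numbers, here is a sketch applying that 29 percent figure to an example black hole. The 4-million-solar-mass figure (roughly Sagittarius A*, the black hole at our galaxy's centre) is my own illustrative choice, not a mass given in the text.

```python
# Applying the maximal 29% Penrose-process yield to an example black hole.
c = 3.0e8            # speed of light, m/s
solar_mass = 2.0e30  # kg
bh_mass = 4.0e6 * solar_mass   # illustrative mass, roughly Sagittarius A*

extractable = 0.29 * bh_mass * c**2       # joules of rotational energy, at best
equivalent_suns = extractable / (solar_mass * c**2)
print(f"Extractable energy: {extractable:.2e} J")
print(f"Equivalent to converting {equivalent_suns:,.0f} suns at 100% efficiency")
```

For a hole this size the yield works out to over a million suns' worth of completely converted mass, which is why Penrose's idea dwarfs even the Dyson sphere.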
As I said before, Dyson spheres and spinning black holes are proposals way beyond anything remotely plausible today. It might be tempting, therefore, to dismiss such ideas as crazy science fiction. But I think there is a serious point to be made amidst all this whacky sci-fi stuff, which is that we are extremely far from putting available energy to good use. Next time you hear about an energy crisis, bear in mind that it really has nothing to do with energy being a scarce commodity. No, it is all down to our technical inability to capture the energy that is available. These crazy sci-fi proposals are therefore something to aspire to, and even if our actual technologies succeed in capturing only one percent of one percent of the energy that something like a Dyson sphere could harvest, that would still provide way, way more energy than our global needs require. And, besides, if you’re going to have ambitions, they might as well be big!
Life 3.0 by Max Tegmark
The Singularity Is Near by Ray Kurzweil.


How Religion Caused The Great Recession cont.

Part three ended with the hint that something dark and troubling occurred within corporate America at the end of the 20th century. The story of that change was told in my book ‘How Jobs Destroyed Work’, which I will quote from now.
“During the war, the USA achieved full employment for the first time since the 1920s. When the war was over, there was a lot of concern about the possibility of a postwar recession, which the government sought to avoid through various acts and initiatives. The acts included the ‘Employment Act’ of 1946, which “committed the federal government to maintain maximum employment and with it a high level of aggregate demand”. The initiatives included the GI bill, an education initiative that helped upgrade the workforce, thereby providing a large pool of white-collar workers for the administrative and management-type roles that corporations increasingly depended upon.
As well as anxieties about recession prompting the State to push for high employment, conditions enabled by the war played a part in other ways. For one thing, industry in America was still largely intact, unlike that of Europe’s. The government invested heavily in the business sector, particularly through highway construction and defence-related expenditures. Also, wartime research had helped launch an era of technological innovation, such as IBM’s development of the first general-purpose computer. Finally, wage freezes had been put in place during the war, and this had required employers to use fringe benefits with which to attract employees. This favoured the largest corporations, who could afford to offer greater benefits than their smaller rivals.
But those corporate benefit packages were still costly, even forty or fifty years ago. This might have discouraged their mass adoption, had it not been for militant unions during the postwar period. It made sense to the larger corporations that if they treated their employees well, that would improve emotional attachment to the company, and the threat of socialism would be avoided.
And so it came to pass that the early postwar decades enjoyed economic growth and price stability. The large corporations delivered on their promise of long-term employment prospects, meaning that anyone fortunate enough to land a job there felt secure, and expected that their own prosperity would rise along with the company’s fortunes.
But all that was to change in the 80s and 90s.
During the 80s, attitudes toward the paternalistic model changed. The 1970s had ended in recession, and during this period two of America’s largest companies- Chrysler and Lockheed- survived only because of government bailouts. The new decade began with inflation approaching 15% and unemployment over 8.5%. Gold prices were soaring, a trend often associated with investor pessimism. Indeed, there was a general mood of unease regarding the US’s economic prospects, as the stock market went into its worst slump since the 1930s.
Amidst all this financial trouble, people began looking at those large corporations with their many benefits packages and saw not businesses to be inspired by but rather dinosaurs to be blamed for worsening conditions. Increasingly, people saw the large corporations as bloated and inefficient, handicapped by too much bureaucracy and a workforce with an over-inflated sense of entitlement. It seemed as though America was increasingly unable to compete against more nimble competitors, most notably from Japan and West Germany. The nation was importing 25% of its steel and 53% of its numerically controlled machine tools by 1981.
What really helped the rise of the lean-and-mean model in the 80s and 90s were certain federal and state regulatory changes, coupled with innovations from Wall Street. The regulatory changes brought about an environment in which corporate mergers and takeovers could flourish. For example, there had been laws protecting local companies from out-of-state suitors, but these were declared unconstitutional by the Supreme Court. Also, President Reagan appointed an attorney who had previously defended large corporations against antitrust suits to head the Department of Justice’s antitrust division. This all but guaranteed there would be no interference from the federal government with the growing mergers and acquisitions movement.
Meanwhile, Michael Milken, of investment house Drexel Burnham, created high-yield debt instruments known as ‘junk bonds’, which allowed for much riskier and aggressive corporate raids. These, along with the state and federal regulatory changes mentioned earlier, triggered an era of hostile takeovers, leveraged buyouts and corporate bustups”.
What these changes- particularly the growth in the 80s of finance capitalism- did was to transform the corporation from its traditional image as a task-based entity engaged in some collective activity, defined not just in terms of profit but by an overall contribution to society, into one in which shareholders’ profits were the be-all and end-all. Everything else, including in some cases pride in the product (consider the internal email sent by an S&P employee which read “let’s hope we are all wealthy and retired by the time this house of cards falters”), was to be disregarded. All the focus was on the short-term raising of stock prices.
This marked change in attitudes was reflected in comments made by the Business Roundtable in the 1990s. At the start of the decade, the Business Roundtable said of corporations that they “are chartered to serve both their shareholders and society as a whole”. But, seven years later, the message had changed to “the notion that the board must somehow balance the interests of other stakeholders fundamentally misconstrues the role of directors”. In other words, a corporation looks after its shareholders, and the interests of other stakeholders (employees, customers, and society in general) are of far less importance.
Certainly the employee of 80s and 90s corporate America would have recognised their lack of importance in what was an increasingly insecure environment. Finance capitalism by that time had transformed the corporation from a paternal entity rewarding loyal workers with security and regular wages into an aggregation of financial assets that existed only to be merged, broken apart or destroyed, according to the whims of executives chasing short-term shareholder profit.
Some observers, among them Noam Chomsky and Jacque Fresco, have noted how corporations tend to have the same organisational structure as fascist dictatorships. In other words, there is a strict hierarchy that demands tight control at the top and obedience at every level. Granted, there may be a measure of give-and-take, but the line of authority is usually clear. Others, perhaps most notably Michel Foucault, have argued that prisons and factories emerged at more or less the same time, and that their operators consciously borrowed each other’s control techniques.
For example, in the late 18th century, the social theorist Jeremy Bentham designed the ‘panopticon’, from the Greek ‘pan’ (‘all’) and ‘optikon’ (‘seeing’): a prison designed in such a way that all inmates could be kept under surveillance by a single watchman. True, it was impossible for a single observer to watch every inmate at once, but the panopticon was arranged so that no inmate could ever know whether he was being watched. The inmates knew only that they might, at any moment, be under surveillance. Bentham’s belief was that, under such conditions, inmates would effectively police their own behaviour.
So what became of the panopticon? It is everywhere, only we now tend to call it ‘the office’. Many a white-collar employee (below the executive level, at least) spends his or her in-office hours in a cubicle, most likely of a one-size-fits-all, institutional-gray design that can be set up, reconfigured, and moved at the whim of those higher up the line of authority: a constant reminder of the employee’s own insecurity and unimportance to the corporation. Moreover, cubicles are (in the words of one employee) “mechanisms of constant surveillance”, lacking doors and usually arranged so that managers can spy on whomever they like at any time. Employees are usually made to work facing a wall, and so cannot know whether they are being watched unless they look over their shoulder. The message such an environment sends is clear: we can see what you are, or are not, doing, so work harder or we’ll replace you. The employee found him or herself in a harsh working environment that did everything it could to underscore their vulnerability.
As conditions for the average employee deteriorated and prosperity for those at the executive level soared to dizzying heights, America in the 80s and 90s had virtually returned to the highly polarised conditions of the 1920s. David Leonhardt of the New York Times reckoned, “it’s as if every household in that bottom 80 percent is writing a check for $7000 every year and sending it to the top 1 percent”.
But whereas, before the Great Depression, there had been campaigners speaking out against the excesses of the wealthy and the oppression of the poor, the prosperity gospel (which had begun in the 19th century and was amplified by megachurches and TV evangelists responding to market signals from late-20th-century consumption culture) had a markedly different message: there was nothing amiss with a deeply unequal society. Anyone at all stood to become as wealthy as the top 1 percent. Just remain resolutely optimistic and all would be well.
This highly unstable environment was one to which the positive-thinking ideology, born of 19th-century New Thought and inflated by corporate-style churches, was well suited. All kinds of life coaches and motivational gurus emerged, spreading the gospel of prosperity and applying management-speak to disguise worsening conditions. For example, following the Chase-Chemical merger, employees who lost their jobs were not ‘laid off’; they were instead referred to as ‘saves’. Other corporations going through mass layoffs in pursuit of short-term boosts to shareholder value referred to those selected for redundancy as ‘nonselected employees’.
Over time, the message that life coaches and motivational gurus delivered became one in which everyone was supposed to regard the deterioration of work and its rewards in corporate America as, overall, a positive thing. Corporations paid substantial sums of money to the motivational industry, whose members told employees that being laid off was an opportunity for self-development, and that the volatile state of the jobs market was a welcome breeding ground for producing winners.
And, unlike the megachurches (which one could leave at any time), the books and seminars consumed at corporate events were often mandatory for any employee who wanted to keep his or her job. Workers were required to read books like Mike Hernacki’s ‘The Ultimate Secret to Getting Everything You Want’ or T. Harv Eker’s ‘Secrets Of The Millionaire Mind’, which encouraged practitioners of positive thinking to place their hands on their hearts and say out loud, “I love rich people! And I’m going to be one of those rich people too!”.
Along with being made to conform to all the rules and worksheets of the self-help literature, employees in corporate America found themselves attending Native American healing circles, Buddhist seminars, fire-walking and other ritualistic practices, all in the name of maintaining a fever pitch of optimism amid worsening conditions. Such was the level of religious-like devotion to the gospel of prosperity and positive thinking that a 1996 business self-help book reckoned, “if you want to find a genuine mystic, you are more likely to find one in a boardroom than in a monastery or cathedral”.
In part five we will see how CEOs were transformed into cult-like leaders during the tumultuous 80s and 90s.
‘Financial Fiasco’ by Johan Norberg
‘Smile Or Die’ by Barbara Ehrenreich
‘White-Collar Sweatshop’ by Jill Andresky Fraser
‘How Jobs Destroyed Work’ by Extropia DaSilva
In part three of this series, we saw how the consumer culture of the late 20th century inspired churches to become more secular and corporate in their appearance, and how, as they grew into gigantic organisations, pastors were obliged to become more like CEOs in how they dressed and behaved. At the same time, throughout the late 20th and early 21st centuries, actual CEOs were becoming more like cult leaders. The transformation of the corporate world during the 80s and 90s (discussed in part four) had much to do with this.
Once upon a time, the CEO of a large corporation would have been the epitome of the cool, rational planner. He or she would have been trained in ‘management science’ and probably worked his or her way up within the ranks of the organisation so that, by the time they reached the top, the CEO had mastered every aspect of the business. Once there at the apex of the corporate pyramid, this highly trained, rational specialist would have embodied the central belief of the college-educated middle class, with its mandate of progress for all and not just the few.
But as the corporate world became more volatile toward the end of the 20th century, questions began to arise over whether such rationality and level-headedness were best for delivering the new goal of short-term boosts to shareholders’ profits. In 1999, Businessweek captured the changing mood when it asked, “who has time for decision trees and five-year plans any more? Unlike the marketplace of twenty years ago, today’s information and services-dominated industry is all about instantaneous decision-making”.
These changes brought about a transformation in leadership. With the business world now seen as so tumultuous and complex as to “defy predictability and even rationality” (as an article in Fast Company put it) a new kind of CEO emerged, one driven more by intuition and gut-feeling. The new CEO was less of a manager with great experience obtained from working his way up the company hierarchy, and more of a flamboyant leader who had achieved celebrity status in the business world, and was hired on the basis of his showmanship, whether his prior role had anything to do with the new position or not.
A 2002 article in Human Relations described celebrity CEOs as possessing “a monomaniacal conviction that there is one right way of doing things” and believing “they possess an almost divine insight into reality”.
So, whereas the pastor of a megachurch was becoming more like a corporate executive, the corporate executive was becoming more like the leader of a cult. This transformation was no doubt helped by the replacement of old-style management consultants with motivational gurus. Pastorpreneurs, celebrity motivational gurus and flamboyant CEOs socialised together, advised one another, and in so doing created a business environment suffused with irrationality. According to Ehrenreich, “forsaking the ‘science’ of management, corporate leaders began a wild thrashing around in search for new ways to explain an increasingly uncertain world-everything from chaos theory…to eastern religions”.
It was certainly a time of increasing uncertainty. With the likes of Tom Peters (described by the LA Times as the ‘uberguru’ of management) offering advice like “destroy your corporation before a competitor does!”, everybody’s position in 90s corporate America was precarious. But whereas the white-collar precariat lived with the prospect of being fired at any time while shouldering the burden of increasing debt, the focus on boosting shares and rewarding celebrity CEOs had seen executive pay soar to over three hundred times that of the typical worker, and golden parachutes handed out even to the boss whose reckless behaviour crossed the line into outright criminality. For example, in 2006 the chief executive of UnitedHealth was pursued by the US Securities and Exchange Commission for illegal backdating of stock options, actions that got him fired and made to repay $465 million in partial settlement. But he also received the largest ‘golden handshake’ in corporate history, amounting to nearly $1 billion. As Ehrenreich said, “the combination of great danger and potentially dazzling rewards (led) to a wave of giddiness that swept through America”.
Celebrity CEOs, going from their Gulfstream jets to their limousines to their luxury villas or four-star hotels, lived (in the words of Washington DC ‘crisis manager’ Eric Dezenhall) “in an artificial bubble of constant, uncritical reinforcement…a consumer of reassuring cliches”. They had come to believe in the teachings of the motivational books and speakers they recommended (perhaps with a degree of cynicism) to their subordinates: positive-thinking preachers who claimed great wealth would come to anyone who visualised success, worked hard, and never complained. The average American did not complain, either, since by now the incessant New Thought message had convinced positive thinkers that anyone could ascend to the world of unstinting luxury. According to researchers at the Brookings Institution, “the strong belief in opportunity and upward mobility is the explanation that is often given for Americans’ high tolerance for inequality. The majority of Americans surveyed believe they will be above mean average income in the future (even though that is a mathematical impossibility)”.
But perhaps a more accurate way to put it would be to say that the average American could not complain, at least not if they wanted to keep their job. Remember that positive-thinking ideology considers any negativity to be a sin, and some of its gurus recommended removing negative people from one’s life. And in the world of corporate America (where, other than in clear-cut cases of racial, gender, or age-related discrimination, anyone can be fired for any reason or no reason at all) that was easy to do: terminate that negative person’s employment. Joel Osteen of Houston’s Lakewood Church (described as “America’s most influential Christian” by Church Report magazine) told his followers, “employers prefer employees who are excited about working at their companies…God wants you to give it everything you’ve got. Be enthusiastic. Set an example”. And if you didn’t set an example and radiate unbridled optimism every second of the working day, you were made an example of. As banking expert Steve Eisman explained, “anybody who voiced negativity was thrown out”.
Such was the fate of Mike Gelband, who was in charge of Lehman Brothers’ real estate division. At the end of 2006 he grew increasingly anxious over the growing subprime mortgage bubble and advised “we have to rethink our business model”. For this unforgivable lapse into negativity, Lehman CEO Richard Fuld fired the miscreant.
But, actually, sacking was not the worst fate that could befall an employee in 21st century corporate America. With every white-collar employee under pressure to work on their attitudes, the pressure on the group most required to maintain permanent smiles and positivity, the sales team, reached ludicrous heights. Underperforming salespeople had eggs broken on their faces, were made to bend over and receive a spanking with the metal yard signs of competing companies, and in one case were even waterboarded (“you saw how hard Chad fought for air right there. I want you to go back inside and fight that hard for sales”, in the words of the Prosper Management supervisor who conducted this example of motivational guidance).
So this was America in the 21st century. A world in which megachurch pastorpreneurs and TV evangelists preached to millions the Good News that “God caused the bank to ignore my credit score” (in the words of Osteen). A world in which CEOs became like cult leaders, infected with what Steve Eisman called ‘hedge fund disease’ (“The symptoms are megalomania, plus narcissism, plus solipsism…How could you be wrong about anything? To think something is to make it happen. You’re God”) and surrounded by yes-men who dared not raise any concerns for fear of being fired for ‘negativity’. A world in which to ‘underperform’ in sales could lead to humiliating ritual punishments like being made to wear nappies.
Pumped up with the New Thought belief that positive thinking could make wishes come true and that God would intervene to prevent any negative outcome, Americans confronted the other circumstances of the early 21st century: monetary policy from the Federal Reserve, coupled with the surpluses of fast-growing emerging economies, making money cheaper than ever; US politicians working to increase the share of home-owning families; and a financial industry apparently transforming large risks into smaller ones by repackaging, labelling and selling them, with regulations and bonuses that tempted people into the market for mortgage-backed securities.
After the subprime mortgage bubble burst and it became obvious that the good times had been propped up by out-of-control speculation and borrowing, the cry inevitably went up: ‘Why did nobody see this coming?’. Hopefully this series has offered some explanations. Prior to the 2008 crash, prosperity preachers and optimism coaches told people they could realise their material ambitions through the power of belief (self-help writer Stephen Covey encouraged those satisfied with what they had to “admit that what you have isn’t enough”). The perception of negative thought as a form of sin to be removed from one’s life had the effect of ejecting cautious people bearing bad news from the workplace. And there was an executive class making decisions based on gut-feeling, behaving very much like the motivational gurus and prosperity preachers they socialised with and forced upon their subordinates, all while enriching themselves through corporate mergers and bust-ups that unlocked share-boosting capital while destroying the jobs of hundreds of thousands of people (employees who maxed out their credit cards in spending sprees to compensate for the deterioration of rewards in the workplace).
Coming up next, the concluding chapter of this essay.
‘Financial Fiasco’ by Johan Norberg
‘Smile Or Die’ by Barbara Ehrenreich
‘White-Collar Sweatshop’ by Jill Andresky Fraser
In the aftermath of the 2008 crash, faced with an epidemic of foreclosures in the housing market, the collapse of some of the oldest financial institutions and the national debt rising to $10 trillion, people understandably asked: Why? How come all those highly respected and lavishly rewarded experts never saw the crash coming? Taking into consideration the evidence presented in this essay, I think we can conclude that the West was blinded by a combination of New Thought and neo-classical ideology.
When the likes of Mary Baker Eddy and Quimby sought to create a positive alternative to the grim outlook of Calvinism, they imagined the universe to consist of nothing but an all-nurturing, all-supplying spirit. Humanity, as part of this maximally beneficial entity, had only to exercise its powers of positive thinking, banish all negative thoughts, and everything would turn out all right.
And, as Ehrenreich pointed out, “what was market fundamentalism other than runaway positive thinking? In the ideology of the Bush administration and, to a somewhat lesser extent, the Clinton administration before it, there was no need for vigilance or anxiety about America’s financial institutions because ‘the market’ would take care of everything. It achieved the status of a deity, this market”.
The real world is too complex for human minds to fully grasp. Since that’s the case, science is obliged to devise simplified models, to work with a crude ‘toy universe’ when thinking about this universe in which we live. For example, Newtonian physics cannot accurately predict the interaction of three or more orbiting bodies. So rocket scientists planning to send a probe to, say, Mars work with a simplified model in which there are only two objects: the probe and the Sun. The thinking is that, over the timescale of the journey, the Sun’s influence swamps everything else, so the approximation is good enough for all practical intents and purposes.
All the sciences have to make simplifying assumptions, and economics is no exception. According to Mark Braund and Ross Ashcroft (authors of “The Survival Manual: A Sane Person’s Guide to Navigating the 21st Century”) “neo-classical economics looks only at the factors influencing the investment and consumption decisions of individuals and firms. It focuses on how things would work in an imaginary world where all participants in the economy shared full and equal knowledge, not only of the market but also of the consequences of their decisions. It also assumes that everyone faces the same choices in life”.
As we have seen over the course of this essay, there are a couple of dubious claims here. There is, for example, the claim that participants share full and equal knowledge, both of the market and of their decisions’ consequences. This can hardly be said to apply to a corporate world in which celebrity CEOs floated high above the concerns of ordinary citizens in a bubble of luxury, surrounded by subordinates conditioned to bring them nothing but good news. “I’m the most lied to man in the world”, was how one CEO explained his situation.
Nor could it be said to apply to ordinary Americans, those folk who, in work, were obliged to attend seminars and read books by so-called experts armed with a pseudoscientific mix of economics, quantum physics and mysticism (as one life coach insisted, “with quantum physics, science is leaving behind the notion that human beings are powerless victims and moving toward an understanding that we are fully empowered creators of our lives and of our world”). These were the same experts who engineered a working environment where the entrenched cult of optimism made it advisable to conform, lest you be targeted for ‘releases of resources’ or whatever euphemism for layoffs the company used.
Outside of work, the American citizen was preached to by TV evangelists broadcasting their ‘prosperity gospel’ that God wanted true believers in optimism to have it all (a situation that inspired a 2008 Time article called ‘Maybe We Should Blame God For The Subprime Mortgage Mess’). They were advised by (in Ehrenreich’s words) “professional optimists (who) dominated the world of economic commentary…Escalating house prices were pumping the entire economy by encouraging people to use their homes like ATMs…taking out home equity loans to finance surging consumption-and housing prices were believed to be permanently resistant to gravity”.
According to Washington Post columnist Steven Pearlstein, “at the heart of any economic or financial mania is an epidemic of self-delusion that infects not only large numbers of sophisticated investors but also many of the smartest, most experienced and sophisticated executives and bankers”.
An economy infected with an epidemic of self-delusion and where the pressure is on to conform to a ‘yes-man’ culture of positive thinking is hardly conducive to bringing about the neo-classical concept of man as a perfectly informed and rational agent.
Then there is the notion of everyone facing the same choices in life. Here, I will just point out that some finance companies involved in subprime mortgages were running debt-to-asset ratios of 30 to 1, and ask the reader to consider a member of the white-collar proletariat: massively indebted, working in a corporate environment whose advice to those facing unprecedented levels of ‘restructuring’ and ‘career-change opportunities’ (more euphemisms for layoffs) was “don’t blame the system, don’t blame the boss, work harder and pray more” or “deal with it, you big babies!”. Now compare that person to the likes of Jack Welch, the CEO who laid off over a hundred thousand workers and retired with a monthly income of $2.1 million, an $800,000-a-month Manhattan apartment, a Boeing 737 (also courtesy of the company) and, oh yes, free security guards for his many homes. Does anyone really believe these are people who face the same choices in life?
When we make references to the ‘free market’, what, exactly, is this ‘freedom’ we are referring to? The neo-liberal ideologue would no doubt claim it refers to the freedom to partake in voluntary exchange. As Ayn Rand said, “money is the material shape of the principle that men who wish to deal with one another must deal by trade and give value for value…An honest man is one who knows that he cannot consume more than he has produced”.
But, if that is the case, then it is difficult to imagine how all those toxic assets could have been accumulating in the financial sector or how borrowing could have pushed the national debt to ten trillion dollars. I think a more apt description would be: “The free market is a competitive environment in which players strive to obtain greater material wealth than other players, by whatever means they can get away with”. This definition leaves open the possibility that some may aim to get ahead by cheating and the spreading of misleading information. They may not be able to get away with it-that depends on how clued-up and vigilant the other players are to such deception and what regulatory structures are in place to curb such behaviour-but, in nature, parasites can evolve to alter the minds of their hosts such that they nurture rather than fight off the bloodsucker. The same thing can be said of market parasites.
Gillian Tett of the Financial Times has commented on how an elite “try to stay in power; and the way they stay in power is not merely by controlling the means of production but by controlling the cognitive map, the way we think. And what really matters in that respect is…what is left undebated, unsaid”.
In a corporate environment amidst a consumerist world feeding off of New Thought ideology, there was quite a lot left unsaid. As Adam Michelson, senior Vice President of Countrywide, said, “these are the times when that one person who might respond with a negative comment or a cautious appraisal might be the first to be ostracised. There is a great risk to nonconformity in any feverishly frothy environment like that”.
Indeed. America in the early 21st century was riding high on optimism. Communism had been defeated, and the turbulent world of financial capitalism was sold to the public as a rising tide that lifts all boats. According to Robert Reich, “optimism…explains why we spend so much and save so little…our willingness to go into debt is intimately related to our optimism”.
As we have seen through the course of this essay, this optimism can be traced back to the Calvinist religion that helped the founders of this nation tame the harsh wilderness, and the New Thought ideology that attempted to undo the mental damage such a punitive religion could impose, but actually ended up being just as harsh on ‘sin’ as what preceded it. The only difference was that it was negative thinking rather than pleasure-seeking that was held up as sinful.
As Ehrenreich explained, “for centuries, or at least since the Protestant Reformation, western economic elites have flattered themselves with the idea that poverty is a voluntary condition. The Calvinist saw it as a result of sloth and other bad habits; the positive thinker blamed it on a wilful failure to embrace abundance. This victim-blaming approach meshed neatly with the prevailing economic determinism of the past two decades. Welfare recipients were pushed into low-wage jobs, supposedly in part, to boost their self-esteem; laid off and soon to be laid off workers were subjected to motivational speakers and exercises. But the economic meltdown should have undone, once and for all, the idea of poverty as a personal shortcoming…The lines at the unemployment offices and churches offering free food include strivers as well as slackers”.
It seems God was not on hand to save us from ourselves after all.
‘Smile Or Die’ by Barbara Ehrenreich
‘The Survival Manual’ by Mark Braund and Ross Ashcroft
‘Atlas Shrugged’ by Ayn Rand


How Religion Caused The Great Recession

Any essay with a title like ‘how religion caused the Great Recession’ had better begin with a caveat or two, so here goes. First of all, religion was not what caused the financial crash of 2008, which is to say it was not the main reason for the subprime mortgage bubble. As to what was the main culprit, well that probably depends on one’s political ideology. The anti-capitalist would likely blame ‘too big to fail’ banks and irresponsible Wall Street wolves, while the anti-Left would probably cite State interference in the mortgage market as the main villain.
While both of these doubtless stand up as greater culprits, politics and finance, along with other kinds of collective activity, take place amidst the societies of the day and cannot help but be influenced by the beliefs and attitudes that evolve within them. And it is here, in the influencing of minds and group action, that we will see how religion helped set us up for a subprime mortgage bubble. But now I must make the second caveat and say that there are many different kinds of religion offering diverse schools of thought, and doubtless some would have guarded against the reckless borrowing and lending that led to the 2008 Crash. But leading up to that crash there was an ideology sweeping through America, one that set the world up for a fall from the dizzying heights of the greatest delusion, and the origins of this hubristic attitude can be traced way back to the faith of the Pilgrim fathers.
As far as Westerners are concerned, the United States was colonised by pilgrims whose ancestry could be traced back to the Brownist English Dissenters who, in the 16th and 17th centuries, had fled the dangerous political climate of their native England for the Netherlands. The pilgrims arranged with English investors to establish a new North American colony, because they were concerned that remaining in the Netherlands would lead to a loss of their English identity. So, in 1620, they established the Plymouth colony in present-day Massachusetts, the second successful English settlement (Jamestown, Virginia, settled in 1607, being the first).
The pilgrims who founded the Plymouth colony subscribed to a variant of the Puritan faith known as Calvinism, named after John Calvin who lived in the 16th century. This was a particularly harsh and judgemental form of Christianity, one whose God “reveals his hatred for his creatures, not his love for them”, in the words of literary scholar Ann Douglas. Calvinists believed that this God’s heaven had only a limited number of spaces available, and whether you were chosen or not had been predetermined since before your birth. As to one’s duties here on earth, the Calvinist religion saw much virtue in industrious labour and particularly in constant self-examination for any sinful thought. Idleness and pleasure-seeking were viewed as being particularly contemptible sins.
In ‘The Protestant Ethic and the Spirit of Capitalism’, Max Weber argued that capitalism has its roots in Calvinist Protestantism, since it taught its followers to defer gratification in favour of hard work and wealth accumulation. It was also a mindset that was well suited to the conditions the New World imposed on the colonists. Forget the images invoked by the patriotic song ‘America The Beautiful’, with its amber waves of grain, from sea to shining sea. What greeted the settlers was “a hideous, desolate wilderness” (in the words of William Bradford). In a harsh environment such as this, where even subsistence demanded ceaseless effort, the tough-minded ideology of Calvinism probably helped them survive.
Elements of Calvinism would persist in America right through to the modern age, with the middle and upper classes considering busyness for its own sake a means of obtaining status (a rather convenient mindset for the increasingly demanding corporations of the 80s and 90s, as we will see). But as the harsh wilderness was gradually tamed, the constant self-examination for sinful thought and its eradication through labour came to impose a hefty toll on those who became cut off from industrious work (as were, for example, women, barred from higher education by male prejudice and faced with industrialisation stripping away productive home tasks like sewing and soap-making). With productive activity taken away, Calvinism left these people with nothing but morbid introspection, and this led to various illnesses that we would now recognise as diagnostic of mental stress.
Faced with people succumbing to the symptoms of neurasthenia, and with the medical establishment seemingly unable to cure such patients, people began to reject their forebears’ punitive religion. There was, for example, Phineas Parkhurst Quimby, a watchmaker and inventor who held metaphysical beliefs concerning (in his words) “the science of life and happiness”. In the 1860s, Quimby met up with one Mary Baker Eddy who, like many middle-class women of her day, had rejected the guilt-ridden and patriarchal Calvinism in favour of a more loving and maternal deity. 
Together, Eddy and Quimby launched what we now describe as the cultural phenomenon of positive thinking. Back in the 1800s, the post-Calvinist way of thinking that Quimby and Eddy established was known as New Thought. Drawing on a variety of sources from transcendentalism to Hinduism, New Thought re-imagined God from the hostile deity of Calvinism to a positive and all-powerful spirit. And humanity was brought closer to God, too. Out went the idea of an exclusive heaven reserved only for a select few, replaced with a concept of Man as part of one universal, benevolent spirit. And if reality consisted of nothing but the perfect and positive spirit of God, how could there be such things as sin, disease, and other negative things? New Thought saw these as mere errors that humans could eradicate through “the boundless power of spirit”.
Patients suffering mental breakdowns due to the ceaseless morbid introspection of Calvinism came to see Quimby and his ‘talking cure’, which sought to replace such negative thoughts with a belief in a benevolent universe, coupled with an insistence that the patient could ‘correct’ any negativity through positive thinking. The ‘talking cure’ did indeed seem to cure the mental anxieties that were leading to invalidism among Calvinists who had idleness imposed upon them.
Meanwhile, Mary Baker Eddy went on to gain considerable wealth after founding Christian Science, the core teachings of which were that the material world did not exist; there was only Thought, Mind, Spirit, Goodness and Love. Whatever negativity or want seemed to exist were but temporary delusions.
New Thought went on to influence such people as William James, often described as the first American psychologist, who claimed in his ‘Varieties of Religious Experience’ that, through New Thought, “lifelong invalids have had their health restored”. It also influenced Norman Vincent Peale, perhaps best known for his 1952 book ‘The Power of Positive Thinking’. But perhaps most importantly, as far as this essay is concerned, Mary Baker Eddy’s notion of negativity as controllable delusions influenced the mystical teachings of modern-day ‘motivational gurus’ who would lead those aspiring to the American Dream into believing that success and wealth would surely come their way if only they believed fervently enough.
And now we come to the dark side of New Thought. Although intended as an alternative to Calvinism, New Thought did not succeed in eradicating all the harmful aspects of that religion. As Barbara Ehrenreich explained in ‘Smile Or Die’, “it ended up preserving some of Calvinism’s more toxic features- a harsh judgmentalism, echoing the old religion’s condemnation of sin, and the insistence on the constant interior labour of self-examination”. The only difference was that while the Calvinist’s introspection was intended to eradicate sin, the practitioner of New Thought and its later incarnations of positive thinking was constantly monitoring the self for negativity. Anything other than positive thought was an error that had to be driven out of the mind.
So, from the 19th century onwards, a belief that the universe is fundamentally benevolent, and that the power of positive thought could make wishes come true and prevent all negative things from happening, was simmering away in the American subconscious. When consumerism took hold in the 20th century, positive thinking would be imposed ever more insistently on anyone looking to get ahead in an increasingly materialistic world.
To be continued…
‘Guns, Germs and Steel’ by Jared Diamond
‘Smile Or Die’ by Barbara Ehrenreich

In part one, we saw how the Plymouth Colonists settled in a harsh, untamed environment that required ceaseless labour just to maintain subsistence living. Gradually, though, the unforgiving Wild West would be tamed, with railroads and freeways stretching from State to State, vast swathes of farmland providing an abundance of food, and industrial centres capable of such high productivity it seemed as though everybody’s needs would soon be met.
But while this might sound like a positive thing, it actually posed something of a problem for the economic system that had been established: a system based on perpetual growth, fundamentally opposed to any notion of ‘enough’ that might dwell in the human soul. In the competitive world of business, companies manufacturing goods were compelled to steadily increase market share and profits, for fear of being swallowed by a larger enterprise. But how could perpetual growth be maintained when customers acted with frugality and were content with what they had?
Psychologists were therefore brought in to change the human psyche. One such expert was Edward Bernays. He took certain ideas from Freudian analysis about human status and applied them to advertising campaigns. Products were no longer to be thought of as mere practical solutions to a limited set of problems. They were, instead, symbols representative of one’s identity, physical representations of one’s status. The car, the appliance, the furniture, were to be less relevant in terms of their utility and seen instead as fashion accessories. Advertising played a major role in developing this new consumer culture, because if the economy was to fulfil its imperative of perpetual growth, the customer had to be persuaded to buy things they did not even know they needed.
The consumer economy necessitated the rise of sales and service-based industries, and those kinds of workplaces proved a fertile breeding ground for positive thinking. After all, we all expect staff in shops and waiters serving us food to be friendly and greet us with smiles and a positive attitude (even if we don’t really believe the grinning sales assistant is genuinely pleased to see us).
Increasingly, then, employees found themselves in occupations that required the kind of self-examination and improvement that practitioners of positive thinking strived to achieve. As Ehrenreich explained, “the work of Americans, and especially its ever-growing white-collar proletariat, is in no small part work that is performed on the self in order to make that self more acceptable and even likeable to employers, clients, coworkers and potential customers”. Nor were interpersonal skills and constant optimism confined to obvious places like sales and service-based industries. As Dale Carnegie observed, “even in such technical lines as engineering, about 15 percent of one’s financial success is due to one’s technical knowledge and about 85 percent is due to skill in human engineering”.
And so, whether in work or out, the consumer lived surrounded by the positive thinking message that anyone can have whatever they want, provided they exercised sufficient belief that good things will come their way. It was a belief generated in no small part to create an insatiable appetite for consumer culture. And as the corporate world seemed to ascend to increasingly dazzling heights of financial success, some clergymen noticed this ascendency and recognised within it methods to grow their churches.
Continued in part three
‘Culture In Decline’ by Peter Joseph
‘Smile Or Die’ by Barbara Ehrenreich.

In part two, we saw how a market system based on perpetual growth required a change in social attitudes once productivity was capable of meeting basic needs, and that transition was one in which we went from frugality to signalling our individuality through consumption. By the late 20th century it would have been impossible to miss consumption culture and, perhaps inevitably, marketing, advertising and other aspects of growth culture began to have an influence in areas one might consider to be outside such economic concerns.
One such example was the Church. Membership of mainstream churches had been declining in the latter part of the 20th century. In the past, churches faced with an increasing number of ‘unchurched’ folk might have sent out missionaries to try and convert the heathen population. But, this being an era of marketing, they tried something different. They did what any business would do when looking to relaunch a flagging product. They began thinking of potential members as ‘customers’ and conducted market research in order to determine what the ‘customer’ wanted. The various surveys and research indicated that people were not much interested in the kind of sermons they had sat through as children. Not for them, the angry sermon condemning sin. In fact, the market research showed people were not much interested in traditional church at all.
So pastors like Rick Warren, Bill Hybels and Robert Schuller set about reconfiguring church in order to better accommodate what the ‘customer’ wanted. Out went the hard pews, replaced with comfortable seating. Out went all the imagery of conventional churches. These new churches would have little in the way of traditional Christian iconography, such as crosses or images of Jesus. The result of this transformation was a building that looked less like a church and more like architecture that fit seamlessly with the modernist corporate-style environment of the rest of the city.
It was not only the physical appearance of the church that changed to suit the modern corporate, secular world. The sermons themselves changed as well. The more demanding principles of Christianity with its teachings of modesty and humble living were discarded, replaced with positive messages very much like the ones New Thought had preached. The new breed of pastor saw themselves not as critics of the secular, materialistic world but rather as active participants within it. They preached a ‘prosperity gospel’, one which claimed God wanted you to achieve status, wealth and other trappings of material success.
In terms of growth, this tactic of transforming churches into secular conference centres spreading the good news that God would payeth thy credit card proved very successful. The churches led by the likes of Schuller, Warren and Hybels became ‘megachurches’ which, if you include those attending via TV broadcast, preached to an audience of millions. Being so big, megachurches had to employ hundreds of people and find millions of dollars to keep the organisation running. These conditions led to their pastors becoming ever less like traditional clergy and more like CEOs of large corporations. As Ehrenreich explained, many of these churches were “nondenominational, meaning they couldn’t turn to a centralised bureaucracy for financial or any other kind of support…They depended entirely on their own charisma and salesmanship”.
So the audience of a megachurch entered a building that looked pretty much like a corporate headquarters. The person preaching to them wore a business suit like any CEO and probably thought of himself as a ‘pastorpreneur’: part pastor, part entrepreneur. And the message the pastorpreneur delivered was much the same as the one the corporate world wanted to get across: Through positive thinking, you can make anything happen. You can get ahead, you can become successful, you will become rich. Consider, for instance, the words of televangelist Joyce Meyer: “I believe God wants to give us nice things”.
But, underneath all that positivity was the dark undercurrent of New Thought’s attitude toward negativity as a sin. If, despite all your positivity, riches did not come your way, don’t look for any flaw in business, economics or politics. Instead, blame yourself. You just didn’t try hard enough. Pastor Robert Schuller advised his congregation to “never verbalise a negative emotion”.
Still, at least the megachurch managed to remain a nice, comfortable place in which to receive the prosperity gospel. As Ehrenreich said, in a megachurch “no one will yell at you, impose impossible deadlines…All the visual signs of corporate power and efficiency, only without the cruelty and fear”. 
The same could not be said of corporate America in the late 20th/early 21st century.
Continued in part four
‘Smile Or Die’ by Barbara Ehrenreich 


(Dis)honest ways to make it rich

Money. In a world where most things come with a price tag attached, we are all obliged to try and acquire the stuff. This can be achieved through fair means or foul. But what is an honest way of becoming wealthy?
To my mind there is only one way to become wealthy through entirely honest means, and that is to provide a product or service that an informed customer may choose to spend his or her money on. The company that provides this product or service relies only on its actual quality to stay ahead of competitors.
Furthermore, the honest boss of a company recognises that he or she is but one member of a team, and it was that collective which worked together to bring product X to the market. We might make a comparison to the conductor of an orchestra. A great conductor can make the difference between a performance that is merely OK and one that is sublime. It would be wrong, however, to attribute the excellence of the performance entirely to the person who happens to lead the orchestra. Obviously, were it not for the violinists, the trumpet players, the pianists, the percussionists and all the other members of the orchestra, bringing what is likely years of hard practice at perfecting the craft of playing their chosen instrument, there would be no music at all, sublime or otherwise. 
The same can be said for the CEO of a company. A great CEO can make the difference between an outstanding year for the company and a merely average (or abysmal) one. But the CEO of a Fortune 500 company could no more bring its product to market by themselves than a conductor could wave his baton like a wand and create music absent all the other members of the team we call an orchestra. The honest boss recognises that he or she is but one person among many, and that it is actually the organisation the team comprises that earns the billions of pounds multinational corporations bring in. The honest boss would not accept a financial reward so high that other members of the team must make do with so little that, even working full time, their daily life is one of constant money anxiety. Of course, it would also be wrong to pay everybody the same, since people clearly have different levels of responsibility and skill in any organisation. But there is surely a mutually beneficial compromise between the extremes of total equality and high inequality.

Do all companies competing in the market adhere strictly to these conditions for honest money-making? Clearly not, as it is not too difficult to find examples of businesses that break at least one of the rules I just mentioned. To recap, the totally honest business:
1. Sells a product or service to an informed customer. In other words, whatever advertisement is used to try and sell the product gives an honest description of its advantages and disadvantages in comparison to rival products. Any potential customer truly knows exactly what it is they are about to pay for.
2. Relies only on the actual quality of that product/service in order to stay ahead of the competition. In other words, there is no lobbying the State to pass laws that disadvantage their competitors, grant monopoly rights that are then exploited through price hikes that would not be possible in true free market competition, and other distortions of the market. 
3. Acknowledges that it is the company, not any one person, that earns the profit. This income should be distributed in a mutually beneficial way that avoids the injustices of total equality (which fails to compensate for differences in responsibility and talent) and extreme inequality (which places unnecessary anxiety on the disfavoured and can lead to structural violence, as the disenfranchised riot against what is an obviously rigged system). In a good business, every person from the bottom to the top is motivated by the hope of success; the bad business relies on the fear of failure to keep its employees working.
So, are customers always completely informed about the product they are about to buy? Not according to the documentary ‘Will Work For Free’:
“If I wandered into a phone shop, unsure of what to buy, and make the mistake of telling the salesperson that I am not too familiar with the differences, I leave myself open to product sale bias. In this scenario, the store has no problem selling the best products, so instead, I am presented with an inferior product which the store is struggling to offload. The salesperson’s job here, becomes distorted and I the customer will most likely be subjected to either a sales pitch as opposed to honest insight”.
Or how about this quote from the Daily Mail:
“The Herts and Essex Fertility Centre charges £1,247 for three drugs routinely prescribed to women on IVF. But the same prescriptions in the same quantities cost £876.72 from Boots or £929.22 from Asda. Couples who buy drugs from the clinics may have no idea they are paying over the odds or that they can get them elsewhere…Experts accused the clinics of exploitation, calling the way they charge for IVF drugs ‘a complete racket'”.
Neither of these quotes conveys an impression of potential customers making decisions armed with a complete set of facts. I dare say most people have had the experience of dealing with some salesperson or business who appears to be, shall we say, economical with the truth in order to ensure a sale.
Moving on to the second condition for honest money making, I think it is fair to say that there are all kinds of distortions of the free market principle of trading true value for value under conditions of competition that favour only those who genuinely provide the best product/service. In fact, this quote from Ayn Rand’s Atlas Shrugged sounds suspiciously like the actual (as opposed to some ideological ‘free’) market:
“When you see that trading is done, not by consent, but by compulsion–when you see that in order to produce, you need to obtain permission from men who produce nothing–when you see that money is flowing to those who deal, not in goods, but in favours–when you see that men get richer by graft and by pull than by work, and your laws don’t protect you against them, but protect them against you–when you see corruption being rewarded and honesty becoming a self-sacrifice–you may know that your society is doomed”. 
When you watch an athlete push their body to the limits of human capability and beat a record for the fastest sprint, the longest jump, the most perfectly executed dive and so on, you can only feel a sense of admiration for those who have put in such hard work to achieve this pinnacle, and a sense of humility that some are able to dedicate themselves to an effort more supreme than most of us could ever endure.
But, given that the human body is capable of doing only so much, at some point an ever-decreasing time limit for completing a sprint or other record-breaking achievement starts to seem kind of…dubious. And then, it happens. The once-celebrated athlete is exposed as a cheat. He took performance-enhancing drugs, or some other means of gaining an edge that is against the rules of professional sport. 
It is, by the way, a bit unfair to dismiss those who are caught relying on such dubious tactics as just cheats. It is almost certain that these athletes trained every bit as hard as those who never touched performance-enhancing drugs. It is not like they just slobbed around in front of the TV all their lives, then one day injected themselves with something and wandered down to the Olympic park to beat Usain Bolt. No, their result came about by a mixture of honest and dishonest methods. 
Just as we understand that an individual can only break a sports record by a certain amount before it becomes obvious that they must have relied on some kind of cheating, so too should we recognise that an individual can only make so much money for themselves before their wages and bonuses are the result, not simply of their own merit, but a combination of honest work and cheating, of either rigging the system to favour themselves and disadvantage their fellow workers, or being in a position to benefit from a system that is already rigged. Is the CEO of a multinational company worth five times as much as the average employee? Doubtlessly, yes, because he or she shoulders enormous responsibility. Ten times more? Twenty? I think most people would still accept this is fair. But when those at the executive level start making hundreds or thousands of times more than the average employee, we really should be as dubious of this reward as we would be of an athlete who somehow manages to shave ten, fifty, or one hundred seconds off the previous world record in the sprint. Anyone who is a billionaire definitely did not earn that money through entirely honest means under conditions of true competition in which all have an equal chance to excel. No, they were in a position to take advantage of a rigged system.
Like everything created by humans, markets are not perfect but flawed creations. If we come up with a description of markets which recognises the possibility that some may violate one or more of the conditions for acquiring deserved wealth, we can see what that flaw is. So here goes: ‘The free market is an arena of competition in which individuals and groups try to gain a greater monetary reward than other individuals and groups, via whatever method they can get away with’. Clearly, such conditions are prone to cheats whose method for gaining wealth relies not entirely on making their own money, but (at least in part, to a greater or lesser extent) on taking wealth from others. Modern understandings of markets view them not as efficient machines, but rather as chaotic ecosystems. Just as the natural ecosystem inevitably allows parasites to evolve, so too do market systems give rise to parasites. And, just as in the natural world, those parasites are under competitive pressure to hide themselves from their victims, or better yet to fool their victims into believing they are something to be protected, rather than fought.
Bear in mind what I said about the ‘cheating’ athlete, though. Just as the athlete was not simply a cheat but somebody who relied on a combination of meritocratic and dubious methods for achieving success, so too are the most successful cheats in business rarely simple parasites. Just as natural parasites have a competitive selective advantage in becoming interwoven with some vital function of their host’s body, such that removing the parasite without causing harm to the host presents a great challenge, so too are market parasites under selective pressure to interweave their dubious wealth-extraction schemes with genuinely useful services. The best parasites are never just cheats.
But, whatever. It is true to say that if those who take wealth, rather than make it, are allowed to flourish too much, then society is doomed just like Rand said. We know what needs to be done to ensure that never happens, though: Don’t let them get away with it.
‘Will Work For Free’, YouTube documentary
“And They Charge Hundreds of Pounds More For Drugs You Can Buy at Asda” by the Daily Mail
‘Atlas Shrugged’ by Ayn Rand
