A Warning For The Future

(This essay is the final part of the series ‘How Jobs Destroyed Work’)
Earlier, we saw how market efficiency has no general consideration for what is being bought and sold. It really doesn’t care whether the products and services are useful or not, harmful or not, so long as cyclical consumption is kept at an acceptable rate. The same is true of labour. So far as the market economy is concerned, the true utility of labour, its actual function, is not as important as the mere act of labour itself. So long as cyclical consumption and growth are maintained, what the job consists of- whether it actually serves a necessary function that encourages work as I would define it, or is detrimental to it- is far less important than perpetuating the current system.
Here we are no longer just talking about the practical argument for jobs. Were that the case, we would likely use technological unemployment as an opportunity to end wage slavery and transition to a post-hierarchical world in which robots occupy positions that used to be jobs, freeing up people’s time so that they can pursue callings. Marshall Brain’s novella ‘Manna’ presents two visions of how the rise of robots could affect our lives, one negative and one positive. The positive version shows that we can imagine ways in which we might adapt to a world in which jobs are no longer a practical necessity. But we don’t just have the practical justification to deal with. There is also the ideological part of the argument, and as the practical excuse begins to wane- becoming harder to justify as machines gain the abilities needed to make Aristotle’s vision of a hierarchy-free society a genuine possibility- we shall likely see the ideological justification for maintaining the current system pushed with increasing fervour.
The ideological argument is that jobs are not in fact a miserable necessity we should look forward to being rid of as soon as practically possible, thereafter to engage in nonmonetary forms of productivity as we create the work, selves and societies we actually want; rather, jobs are work, the only kind of work anyone should aspire to. Maybe it could be argued that when jobs were very much a practical necessity it did make sense to encourage a belief that submitting to a job and working hard mostly for someone or something else’s benefit was a way of achieving success in one’s own life. But as the practical justification for jobs is rendered obsolete by technology, the old ‘work ethic’ that cannot imagine a good reason for productive effort beyond ‘doing it for money’ becomes a serious impediment to transcending the current system. 
We must ask: Who really benefits from perpetuating this ideal of working hard for most of one’s waking hours, mostly for the benefit of a ruling class of financial nobility? Obviously, it is in the interest of whoever occupies the top of a hierarchy to maintain the structure from which their power and prestige is derived.
Throughout history there have been a few who, craving power, have done all they can to convince the rest that they ought to sacrifice the time of their lives. They have come in many guises- as lords and monarchs insisting we should be bossed by the aristocracy, as socialists who believe we should be bossed by bureaucrats, as libertarians who think we should be bossed by corporate executives. Exactly how the spoils of power should be divvied up is a topic of some disagreement among them. There is much argument over working conditions, profitability, exploitation, but fundamentally none of these ideologues object to power as such and they all want to keep us working in some form of servitude for one simple reason: Because they are the ones who mostly benefit from making others do their work for them. It’s very convenient for this powerful minority that the populace subordinate to them do not become too happy and productive in the true sense of the word; that anyone not willing to submit to work within whatever context suits their agenda is viewed with pity or contempt. As George Orwell wrote:
“If leisure and security were enjoyed by all alike, the great mass of human beings who are normally stupefied by poverty would become literate and would learn to think for themselves; and once they had done this, they would sooner or later realize that the privileged minority had no function, and they would sweep it away”.
In Orwell’s novel ‘Nineteen Eighty-Four’, an endless war is fought between three superstates. The real purpose of this war is not final victory for one of the sides; indeed, the war is intended to go on forever. Its real purpose is simply to destroy material goods and so prevent leisure from upsetting the hierarchical power structure.
In reality much more subtle methods- having partly to do with market as opposed to technical efficiency, and with manufactured debt- are used to perpetuate the hierarchy. A popular reply to the question “what happened to reduced working hours?” is that a massive increase in consumerism occurred, as if we collectively agreed that more stuff was preferable to more free time. But that provides only a partial explanation. Although we have witnessed the creation of a great many jobs, very few have anything to do with the production and distribution of goods. Jobs such as those- in industry and farming- have been largely automated away, and increasingly service-based jobs are targets for automation as new generations of AI come out of R+D. So what kind of jobs are maintaining the need for so many hours devoted to the narrow definition of work? David Graeber answers:
“Rather than allowing a massive reduction of working hours to free the world’s population to pursue their own projects… we have seen the ballooning not so much of the ‘service’ sector as of the administrative sector…While corporations may engage in ruthless downsizing, the layoffs and speedups invariably fall on that class of people who are actually making, moving, fixing and maintaining things; through some strange alchemy that no one can quite explain, the number of salaried paper pushers ultimately seems to expand”.
Over the coming years we will likely see more administrative jobs created in order to provide oversight, regulation, guidance, and supervision of robots- or at least that’s how propaganda will spin it. In truth such jobs will serve no purpose other than to keep us working in the narrow sense of the word. We have seen signs of this already. The London Underground’s strong union blocked the introduction of driverless trains in the name of ‘protecting jobs’. Protecting them from what? From progress toward a future in which nobody’s time has to be wasted in driving a train each and every day? I think all these union leaders are really interested in is maintaining the hierarchy they derive their power and prestige from. There is not much call for unions when robots have liberated us from servitude to corporate or bureaucratic masters.
The 21st Century will see the rise of bullshit administrative jobs that have no practical justification for their existence, and are there merely to perpetuate the class-based hierarchy that has dominated our lives, in one form or another, throughout history. Such a claim may sound like a total contradiction of prior claims that business strives to eliminate work, but bear in mind that I was referring to work in the true sense of the word, not the narrow ‘jobs = work’ definition we are now talking about. Automating truly productive, intrinsically-rewarding work out of existence while increasing the number of bullshit administrative jobs is a win-win outcome for those with a vested interest in perpetuating the class-based hierarchy.
Do not think that those bullshit jobs will provide security. No, the rise of the bullshit job will coincide with the rise of ever-less secure forms of employment. The move toward employing more temporary workers, who are entitled to fewer benefits than their full-time counterparts, will speed up as technological unemployment does away with productive and service-based jobs. Those displaced from such jobs, fighting to get off the scrapheap of unemployment, will provide a handy implicit threat to be used against the ‘lucky’ paper-pushers in administration. Although owners and workers generally have opposing interests (the former preferring workers who do more work in the narrow sense of the word for less personal reward, the latter preferring more personal reward and less work in the narrow sense of the word) they are not true enemies but rather co-dependents (or at least they have been). No, the true enemy of the capitalist is other capitalists- rival businesses competing to corner the market and gain a monopoly. And the true enemy of the worker is the unemployed, who are in competition for their jobs. When the percentage of unemployed workers is low and the number of available jobs is high, the working classes are at an advantage. Conversely, when there are high numbers of unemployed and not many jobs available, power tips in favour of the owners. As a large percentage of jobs are lost to automation, causing an appreciable rise in the number of job-seekers, businesses will likely use their strengthened negotiating position to bring about an ‘Uber’ economy of ‘permalancers’- workers putting in full-time hours but on temporary contracts with few if any benefits other than minimal pay. As Steven Hill wrote in a Salon article:
“In a sense, employers and employees used to be married to each other, and there was a sense of commitment and a shared destiny. Now, employers just want a bunch of one-night-stands with their employees…the so-called ‘new’ economy looks an awful lot like the old, pre-New Deal economy- with ‘jobs’ amounting to a series of low-paid micro-gigs and piece work, offering little empowerment for average workers, families or communities”.
According to Graeber, one of the strengths of right-wing populism is its ability to convince so many people that this is the way things ought to be: that we should sacrifice the time of our lives so as to perpetuate the system. He wrote:
“If someone had designed a work regime perfectly suited to maintaining the power of finance capital, it’s hard to see how they could have done a better job. Real, productive workers are ruthlessly squeezed and exploited. The remainder are divided between a terrorised stratum of the- universally reviled- unemployed and a larger stratum who are basically paid to do nothing, in positions designed to make them identify with the perspectives and sensibilities of the ruling class (managers, administrators, etc)- and particularly its financial avatars- but, at the same time, foster a simmering resentment against anyone whose work has clear and undeniable social value”.
It sounds crazy when written down. Who could possibly be in favour of defending something like this? But this is the world that exists today. A world in which all productive activity that falls outside of the narrow definition of work is dismissed as being of no real value; those who engage in such work regarded as ‘doing nothing’. A world in which success and reward are thought of in purely materialistic terms. A world in which those who refuse to submit to the system are deserving of nothing, no matter how much material wealth our technologies could, in principle, produce. A world in which that material wealth is concentrated at the top, not because of superior productive ability and greater input, but because the monetary, financial, and political systems have been corrupted and actually stand opposed to the free-market ideals they claim to uphold.
As Bob Black pointed out, the ‘need’ for jobs cannot be understood as a purely economic concern:
“If we hypothesize that work is essentially about social control and only incidentally about production, the boss behaviour which Rifkin finds so perversely stubborn makes perfect sense on its own twisted terms. Part of the population is overworked. Another part is ejected from the workforce. What do they have in common? Two things — mutual hostility and abject dependence. The first perpetuates the second, and each is disempowering”.
Follow these developments to their logical conclusion, as Marshall Brain has done. The belief that those who do not submit to serving the system deserve nothing will result in warehouses for those affected by technological unemployment, kept out of sight and out of mind, entitled to nought but the bare minimum of resources needed to sustain life. Those who succeed in getting a bullshit administrative job will be under intense pressure from their corporate masters ‘above’ and the impoverished, jobless masses below to ‘agree’ to intense working pressures, minimal benefits and no job security whatsoever. They will be required to consider themselves ‘lucky’ to have a job at all. Wealth will concentrate even further as the means of production, totally commodified labour power, natural resources, security and military forces, and the political system become the private property of the owners of the artificial intelligences and the financial nobility that bankroll them. The world will have become a plutocracy, run by the superrich elite, for the superrich elite, and there will be little anyone can do to challenge their supremacy. And all of this will be partly our fault, the consequence of continuing to believe in that false ideology that jobs are work, the only kind of work that counts, the only kind worth aspiring to. We have been told a lie, a made-up justification for why things are the way they are, by those with a vested interest in keeping things that way. We must re-discover the true meaning of work, find our collective strength and push technological progress toward a future that serves the many rather than concentrates power into the hands of a few. And the time to do that is running out.



(This essay is part thirteen of the series ‘HOW JOBS DESTROYED WORK’)
The 21st Century could well witness a conflict between two opposing drives: the drive to eliminate work and the need to perpetuate it. In order to appreciate why this conflict should become a central issue over the coming years or decades, we need to answer the following question: Why do we work?
There are many good reasons to engage in productive activity. Pleasure and satisfaction come from seeing a project go from conception to final product. Training oneself and going from novice to seasoned expert is a rewarding activity. Work- when done mostly for oneself and communities or projects one actually cares about- ensures a meaningful way of spending one’s time. 
But that reply fits the true definition of work. What about the commonly-used definition, which considers ‘work’ almost exclusively in terms of paid servitude done mostly for the benefit of others, and which disregards nonmonetary productive activity as ‘not working’; why do we have to do that particular kind of ‘work’? I believe there is a practical and an ideological answer to that question.
The practical reason has been cited for millennia. Twenty-three centuries ago, in ‘The Politics’, Aristotle considered the conditions in which a hierarchical power structure might not be necessary:
“There is only one condition in which we can imagine managers not needing subordinates, and masters not needing slaves. This would be if every machine could work by itself, at the word of command or by intelligent anticipation”.
Aristotle’s defence of slavery in his own stratified society has remained applicable throughout the centuries since, and to the modified versions of indentured servitude that followed. Providing the goods and services we have come to expect entails taking on unpleasant and uninteresting labour. It has to be done, and as technology has not been up to the job, it has fallen to people to fill such roles.
If we only had that practical reason for wage slavery, we could view it as an unfortunate, temporary, situation; one due to come to a happy end when machines finally develop the abilities Aristotle talked about. But it’s rarely talked about in such positive terms. Instead of enthusing about the liberation from wage slavery and the freedom at long last to engage in work as I would define it, most reports of upcoming technological unemployment talk darkly of ‘robots stealing our jobs’ and ‘the end of work’.
The reason why the great opportunities promised by robot liberation from crap jobs is hardly ever considered has to do with the ideological justification for our current situation. But let’s stay with the practical argument a while longer, as this was the main justification for most of civilization’s existence.
Since Aristotle died, we have seen tremendous growth and progress in technology, most especially during the 20th century. Despite such advances, technological unemployment has never been much of an issue. People have been displaced from their occupations, yes, but the dark vision of growing numbers of workers permanently excluded from jobs no matter how much they may need employment has never come about. If anything, technology has created jobs more than it has destroyed them.
The reason why is twofold. Firstly, machines have tended to be ultra-specialists, designed to do one or at most a few tasks, with no capacity whatsoever to expand their abilities beyond that which they specialise in. Think, for example, of a combine harvester. When it comes to the job for which it was built, this machine is capable in a way unmatched by any human. That’s why the image of armies of farm-hands harvesting the wheat now belongs to the dim and distant past, replaced with one or two of those machines doing much more work in much less time. But take the combine out of the work it was built to do, attempt to use it in some other kind of labour, and you will almost certainly find it is totally useless. It just cannot do anything else, and neither has any other machine much ability to apply itself to an indefinite range of tasks. So, as new jobs are created, people, with their adaptive abilities and capacity to learn, have succeeded in keeping ahead of the machine.
Secondly, for most of human history, the speed at which paradigm shifts in occupation took place was plenty slow enough for adjustments to occur. Today, when the subject of technological unemployment is raised, it’s often dismissed as nothing to worry about. Technology has always been eliminating jobs on one hand and creating them on the other, and we have always adjusted to the changing landscape. In the past, most of us worked the land. When technology radically reduced the amount of labour needed in farming we transitioned to factory work. But it was not really a case of farmers leaving their fields and retraining for factory jobs. It was more a case of their sons or grandsons being raised to seek their job prospects in towns and cities rather than the country. When major shifts in employment take at least a generation to show their effect, people have plenty of time to adjust. Educational systems can be built to train the populace in readiness for the slowly changing circumstances. Society can put in place measures to help us make it through the gradual transition. So long as new jobs are being created and there is time to adjust to changing circumstances, people only have one another to contend with in the competition for paid employment.
What happens, though, when machines approach, match, and then surpass our ability to adapt and learn? What happens when major changes occur not over generational time but over months or weeks? What if more jobs are being lost to smart technology than are being created? Humans have a marvellous- but not unlimited- capacity to adapt. Machines have so far succeeded in outperforming us in terms of physical strength. When they can likewise far outperform us in terms of learning ability, manual dexterity and creativity, this obviously means major changes in our assumptions about work.
It’s also worth pointing out that, in the past, foreseeing what kind of jobs would replace the old was a great deal easier compared to our current situation. The reduction in agricultural labour was achieved through increased mechanisation. That called for factories, coal mines, oil refineries and other apparatus of the industrial revolution, so it was fairly obvious where people could go. Then, when our increased ability to produce more stuff needed more shops and more administration, we again could see that people could seek employment in offices and in the service-based industries. At each stage in these transitions, we swapped fairly routine work in one industry for fairly routine work in another.
But now that manual work, administrative work, and service-based work are being taken over by automation, and these AIs are much more adaptable than the automatons of old, we have no real clue as to where all the jobs to replace them are supposed to come from.
There are tremendous economic reasons to pursue such a future. You will recall from earlier how society is generally divided up into classes of ‘owners’ and ‘workers’. The latter own their own labour power and have the legal right to take it away, but have no right to any particular job. The owner classes own the means of production, get most of the rewards of production, get to choose who is employed in any particular job, but cannot physically force anyone to work (though they can, of course, take advantage of externalities that lower a person’s bargaining power to the point where refusal to submit to labour is hardly an option). 
Now, regardless of whether you think this way of organizing society is just or exploitative, it works pretty well so long as both classes are dependent on one another. For most of human history this has been the case. Workers have needed owners to provide jobs so that they can earn wages; owners have needed workers to run the means of production so that they may receive profit. The urge to increase profit, driven in no small part by the tendency of debt to grow due to systemic issues arising from interest-bearing fiat currency, pushes business to commodify labour as much as it can. The ultimate endpoint in the commodification of labour is the robot. Such machines are not cost free. They have to be bought, they require maintenance, they consume power. But they promise such a rise in productivity, coupled with such a reduction in financial liability thanks to their not needing health insurance, unemployment insurance, paid vacations, union protection or wages, that we can all but guarantee that R+D into the creation of smarter technologies and more flexible, adaptive forms of automation will continue. Tellingly, most major technology companies and their executives have expressed opinions that advances in robotics and AI over the coming years will put a strain on our ability to provide sufficient numbers of jobs- although some still insist that, somehow, enough new work that only humans can do will be created.
The thing is, work, in its common, narrow definition, is simultaneously a cost to be reduced as much as possible and a necessity that must be perpetuated if we are to maintain the current hierarchical system in which money means power and wealth means material acquisition. Remember: businesses don’t really exist to provide work for people; they exist to make profit for their owners. When, in the future, there is the option to choose between relatively costly and unreliable human labour or a cheap and proficient robot workforce, the working classes are going to find their lack of right to any particular job within a free market makes it impossible to get a job.
But the market economy as it exists today is predicated on people earning wages and spending their earnings on consumer goods and related services. This cycling of money- consumers spending wages, thus generating profit, part of which is used to pay wages- is a vital part of economic stability and growth. If people can’t earn wages because their labour is not economically viable in a world of intelligent machines, they cannot be consumers with disposable income to spend.
We will continue our investigation of technological unemployment in part fourteen.



(This essay is part twelve of the series HOW JOBS DESTROYED WORK)
Does monetary reward really provide the best incentive to work? If you live within a system that commodifies everything, turning it into private property that you cannot gain access to unless you have money, and your only means of obtaining money is to submit to paid employment, that would most likely push you toward getting a job. That seems more ‘stick’ than ‘carrot’. But what about those fortunate few, that 13% or so, who don’t hate their job? If you take their existing motivation and add another- financial- incentive to work, does that increase their desire to work?
Common sense would assume it would. Two incentives have got to be better than one. And financial reward must be the great motivator, for why else would executives be worth so much? Well, firstly, executives are not paid what they are worth; nobody is paid what they are worth in a market economy. People receive whatever they can negotiate, and with the balance of power tipped so much in their favour, the 1% can strike a great deal for themselves. 
As for common sense’s view of an extra, financial, incentive increasing motivation, psychologists and economists have been making empirical studies of this assumption for forty years, and the evidence is that it just isn’t true. Adding a monetary incentive to work that is already rewarding does not make it even more rewarding. Quite the opposite in fact: It undermines, rather than enhances, the motives people already had. 
In one study, conducted by James Heyman and Dan Ariely, people were asked to help load a van. When no fee was offered, people tended to help, inclined as they were to view the situation in social terms. But when a fee was included, that induced participants to take the transaction out of the social realm and reframe it as financial. The offer of money led to the question “is it worth my time and effort?”. The extrinsic motivation of monetary reward undermined the intrinsic motivation of being a helpful person. Economist Bruno Frey describes this as ‘motivational crowding out’.
Interestingly, the assumption that we are motivated by money holds only in the general sense. When studies are conducted to gauge people’s attitude to work and what motivates us, we tend to see that most individuals don’t generally think of themselves as primarily motivated by money. For example, Chip Heath surveyed law students, and 64% said they were pursuing such a career because they were interested in law and found it an intellectually appealing subject. But, while we don’t think of ourselves ‘in it for the money’ we tend to think that it is other people’s prime motive. 62% of people in Heath’s survey reckoned their peers were pursuing a career in law for monetary gain. Since it’s generally believed that money is the main incentive provider (‘I’ being the exception) it’s not surprising that material reward continues to be so heavily relied on.
It is important to point out that extrinsic motivations are not always bad, it is just that when an activity is intrinsically rewarding, adding an extrinsic motivation can actually reduce engagement in that task, not increase it as common sense might lead us to believe would be the case. Furthermore, studies from Harvard Business School, Northwestern University’s Kellogg School of Management, and others have shown that goals people set for themselves with the intention of gaining mastery are usually healthy, but when those goals are imposed on them by others- such as sales targets, standardized test scores and quarterly returns- such incentives, though intended to ensure peak performance, often produce the opposite. They can lead to efforts to game the system and look good without producing the underlying results the metric was supposed to be assessing. As Patrick Schiltz, a professor of law, put it:
“Your entire frame of reference will change [and the dozens of quick decisions you will make every day] will reflect a set of values that embodies not what is right or wrong but what is profitable, what you can get away with”.
Practical examples abound. Sears imposed a sales quota on its auto repair staff- who responded by overcharging customers and carrying out repairs that weren’t actually needed. Ford set the goal of producing a car by a particular date, at a certain price, that had to be at a certain weight- constraints that led to safety checks being omitted and the dangerous Ford Pinto (a car that tended to explode if involved in a rear-end collision, due to the placement of its fuel tank) being sold to the public.
Perhaps most infamously, the way extrinsic motivation can cause people to focus on the short-term while discounting longer-term consequences led to the financial crisis of 2008, as buyers bought unaffordable homes, mortgage brokers chased commissions, Wall Street traders wanted new securities to sell, and politicians wanted people to spend, spend, spend, because that would keep the economy buoyant- at least while they were in office.
It would be handy if there were another form of productive activity, other than employment, that relied on other incentives to work, for then we could see how successful non-monetary incentives are at motivating us. Actually, there is. We call them videogames. I would argue that videogames have opposing drives to jobs: where jobs are concerned, a business pays you to work, but where videogames are concerned, you pay a business to work.
 Paying wages counts as a cost to a business. The company wants to reduce costs, and one way it might accomplish that would be to reduce, as much as possible, the amount of challenge, creativity, autonomy, and judgement required to do the job, thereby making it possible to employ workers who are less skilled, easier to train, and hence more replaceable and so not in a good position to bargain for a better deal. 
You might wonder why anybody would want to do a job that has little going for it beyond the fact it pays wages, but of course nobody wants to do it; many are just not in a position to turn it down.
A videogame, in contrast, is a product people pay for. But there is a problem. Fundamentally, what you physically do in a videogame stands comparison to the dullest production-line job. You are just pressing a few buttons over and over again. The game designers must take that basic, monotonous, action and add layer upon layer of non-monetary incentive. This may include a clear sense of purpose for why you are doing what you are doing, perhaps through a strong narrative; plenty of opportunity for social engagement through team-building, community message boards and such; and a meritocratic system that rewards skillful play and turns failure into a valuable lesson through systems of feedback that constantly provide you with signals so that you know if you should rethink your strategy.
The result? Videogames are massively popular. People pay good money to do what is, essentially, work. In fact, Byron Reeves and J. Leighton Read, in their book ‘Total Engagement’, list hundreds of job types and show how every one has its equivalent occupation in videogames and online worlds.
The perspective mainstream media takes is usually a negative one. Buying videogames is fine, of course, as that helps growth and provides jobs. But playing videogames, especially for long periods, is a definite no-no. It’s usually described as kids stuck in front of a screen, not motivated to go out and get a job because they are ‘addicted’ to Grand Theft Auto V or whatever the current bad boy is.
But why would anyone put down their videogame or log out of their online world- arguably the only places where you can find something like a true meritocracy, and where nonmonetary incentives have been refined over many years- and go seek a job unless they were really forced to? What for? The modern market stands opposed to everything it claims to champion, as John McMurtry illustrated in ‘The Cancer Stage of Capitalism’:
“Non-living corporations are conceived as human individuals…Continent-wide machine extractions of the world’s natural resources, pollutive mass-manufacturing and throwaway packages are imaged as home-spun market offerings for the local community…Faceless corporate bureaucracies structured to avoid the liability of their stock holders are represented as intimate and caring family friends…If we walk through each of the properties of the real free market, in short, we find not one of them belongs in fact to the global market system, but every one of them is appropriated by it as its own”.
Who is to blame? Banks? Corporations? Politicians? Consumers? The Left? The Right? I have no definitive answer. I have heard many opinions from all sides, each providing justification for why they are right and everybody else is wrong. My hunch is that this is a systemic outcome that cannot be conveniently blamed on any one group, thing or ideology. Whatever was behind the development of the global market system and debt-based, interest-bearing currency, we now inhabit a world in which jobs destroy work. They devalue voluntarism and undermine intrinsic motivation. They use technology to cause job overspill and turn workplaces into panopticons. And they pursue short-term profit at any cost, encouraging the growth of ‘socialism for the rich’, in which the rewards of risk-taking in the casino world of derivatives are concentrated into the accounts of the few financial nobility who now rule us, while the costs are borne by us, the taxpaying serfs- not physically chained, but compelled to labour through manufactured debt and the suppression of true technical efficiency for monetary gain. And now that system is gearing up to throw employees on the scrapheap.
The subject of technological unemployment will be the next point of discussion.



(This essay is part eleven of the series HOW JOBS DESTROYED WORK)
Because we have built for ourselves a system that tends to cause debt to outgrow productive ability, it is essential for the perpetuation of such a system that we never succeed in ending scarcity. Now, human needs are finite, or if not finite then nowhere near insatiable enough to fuel the appetite for growth that our market system has, dominated as it is by cancerous forms of money growth that extract wealth from the real economy. Fortunately for the system, human desires can be manipulated to embrace a ‘throwaway’ culture. Such an outcome requires a hedonistic, short-sighted value system and measures of ‘wealth’ and ‘success’ that define such things in purely consumerist terms. Such an outcome must undermine, as much as it can, any interest in the types of innovation and problem-solving that are not inherently based on monetary return.
It is therefore perhaps not surprising that the majority of people have been conditioned to devalue all nonmonetary forms of work and to ascribe success to overconsumption. It also explains why advertising has grown to become such a dominant part of many businesses’ budgets: it is required to manufacture fake needs.
Notice how many adverts rely on genuine meaning and non-monetary values to sell their products. TV commercials tend to revolve around people finding love, or being among friends, or having the freedom to immerse themselves in idyllic locations. Adverts for cars, for example, will show a happy driver travelling down an empty road to some stunning location, perhaps to meet a gorgeous partner. Those adverts never depict a stressed-out employee stuck in rush-hour traffic with his superior barking in his ear through a cellular phone, a driver whose debt levels- brought on in part by his consumerist lifestyle- severely restrict his power to negotiate better terms.
The power advertising has to influence our minds is demonstrated by the ‘Pepsi paradox’. The paradox consists of the fact that when blind taste tests are conducted, people tend to select Pepsi as the best-tasting cola. Yet Coca-Cola outsells Pepsi. Brain scans show that when people taste Pepsi and Coca-Cola without knowing which is which, the former drink triggers greater activity in the ventral putamen, a component of the brain’s reward system. When the drinks are tasted with awareness of which is which, we see a change in the brain: Coca-Cola triggers greater activity in the medial prefrontal cortex, an area of the brain dedicated to personality expression and moderating social behaviour. As Steve Quartz of the California Institute of Technology explained:
“What is a brand? It is a social distinction that we are creating…Cola is brown sugar water about which the brain discerns no particular difference until the brand information comes in. Then, the brain suddenly perceives an enormous difference”.
Another study conducted some years ago introduced television to Fijian islanders who had not been exposed to Western values. Prior to this introduction, eating disorders were almost unheard of, but by the end of the observation period, the barrage of materialistic and vanity values that feature so heavily in our commercial world had altered the psychology of the islanders. As Zeitgeist explained:
“A relevant percentage of young women…who prior had embraced the style of healthy weight and full features, became obsessed with being thin”.
It would be wrong to suggest that all consumerism is bad. If we had no choice but to live in austere conditions that offered no comforts or luxuries and only addressed our most basic needs, I don’t think that would make us all that happy. It is a triumph of capitalism and market systems that ordinary people now enjoy a greater range of food, drink and luxury items than a medieval monarch ever laid hands on. 
Capitalism’s success lies in its ability to solve problems and the measure of a society’s wealth should be determined by how well problems are being solved. But our debt-growing monetary system and the consumerist mentality nurtured to support it have encouraged the emergence of a market that creates problems more than it solves them, for the simple reason that monetary gain can be had if the market can perpetuate a feeling of inadequacy and inferiority so as to sell us bogus cures. 
It’s also opposed to technical efficiency. A product that maximises technical efficiency lasts for as long as possible. This may be because it is robustly made- think of a bulb that provides light for as long as physical limits permit. It may be achieved through easy repairability. Think of a tablet computer with component parts that can be swapped out as they suffer wear and tear. 
Market efficiency, on the other hand, is much more concerned with driving sales, and so there is an incentive to inhibit technical efficiency for the sake of repeat purchases. Food comes with ‘sell-by’ dates- even tinned food that lasts pretty much indefinitely if unopened. That is assuming it makes it onto the shelves at all: enormous amounts of decent food are thrown away simply for failing to meet the exacting standards of supermarkets, which demand absolute uniformity in fruit and vegetables.
A famous example of market efficiency versus technical efficiency is the Phoebus light bulb cartel of the 1930s. Back then, light bulbs were technically able to provide about 25,000 hours of light. The cartel forced each member company to restrict light bulb lifespans to less than 1,000 hours- much better, at least as far as repeat sales are concerned. Today, some inkjet printer manufacturers embed smart chips in their ink cartridges to prevent them from being used past a certain threshold (for example, after a certain number of pages have been printed), even though the cartridge may still contain usable ink. When the cartridge is taken to be refilled, the chip is simply reset and the same cartridge is resold to its owner.
In 1801, Eli Whitney produced fully interchangeable parts for muskets. Prior to this move, the whole gun was useless if a part broke; Whitney’s interchangeable parts allowed for continual maintenance. Common sense would assume that such an idea would spread throughout the market, but instead we see proprietary components that ensure a total lack of universal compatibility, and products driven to unnecessary obsolescence. 
Bear in mind that these products are the result of human effort. People are giving up their time to labour at producing goods that are designed to be thrown away, and the sooner the better. As Zeitgeist put it:
“The intention of the market system is to maintain or elevate rates of turnover, as this is what keeps people employed and increases so-called growth. Hence, at its core, the market’s entire premise of efficiency is based around tactics to accomplish this and hence any force that works to reduce the need for labour or turnover is considered “inefficient” from the view of the market, even though it might be very efficient in terms of the true definition of the economy itself, which means to conserve, reduce waste and do more with less”.
A clear indication of how money can distort perspectives of work can be seen in the rationale of Gross Domestic Product, or GDP. GDP has its origins in post-Depression America. In the 1930s, Presidents Hoover and Roosevelt were looking for a way to determine how dire the situation was. The economist Simon Kuznets (later a Nobel laureate) devised a method for measuring the flow of money as a whole. Industry dominated the economy back then, meaning most economic activity involved the creation and sale of physical products; even so, Kuznets’ measurement tracked only the flow of money among different sectors, not the creation and sale of actual things. To simplify matters further, Kuznets’ model tended to undercount what economists call ‘externalities’: the costs and benefits of industrial or commercial activity that fall on third parties not directly involved in the transaction (i.e. people who are neither working for, nor customers of, the company). Externalities can be positive, conferring unintentional benefits (a cider farm’s apple orchard happens to provide nectar for a nearby bee-keeper’s bees), or negative, causing unintentional harm (an industrial process causes pollution, affecting the health of people who happen to live nearby). They arise from both production and consumption.
So, what does life look like when viewed through the filter of GDP? Since it only tracks the flow of money, anything that does not involve monetary exchange is disregarded. In reality, work can and often does exist outside of monetary transactions. A man may volunteer to paint fences in his community because he wants the neighbourhood to look nice. A retired teacher might give free maths tuition to his friend’s son who is falling behind at school. All good work, but from the strictly monetised perspective taken by GDP, services rendered without payment have no value. Furthermore, whereas violence, crime, the breakdown of the family and other such things have a negative impact on society, GDP registers them as improvements whenever the damage results in paid intervention. Crime creates demand for legal services, more police and prisons, and repairs to damaged property. The breakdown of the family may lead to social work, psychological counselling and prescriptions for antidepressants. Since these are paid interventions, as far as GDP is concerned social decay registers as an improvement, because it generates financial flows as we pay for all that extra security, counselling and repair.
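The accounting bias described above can be caricatured in a few lines of Python. This is a toy sketch, not Kuznets’ actual methodology, and the activities and figures are invented for illustration: a GDP-style counter sums only monetary flows, so unpaid work contributes nothing while paid responses to social decay register as growth.

```python
# Toy caricature of GDP-style accounting: only monetary flows count.
# The activity labels and figures below are invented for illustration.
activities = [
    ("volunteer fence painting",          0),  # unpaid: invisible to GDP
    ("free maths tuition",                0),  # unpaid: invisible to GDP
    ("prison construction",         500_000),  # paid response to crime
    ("security patrols",            120_000),  # paid response to crime
    ("antidepressant prescriptions", 40_000),  # paid response to social decay
]

# Sum the monetary flows, exactly as a transactions-only measure would.
gdp = sum(amount for _, amount in activities)
print(gdp)  # → 660000: the two unpaid services contribute nothing
```

The point is not the numbers but the filter: any measure that sums transactions alone is structurally blind to unpaid work, and counts remediation of harm as gain.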
GDP is obviously a simplified model that cannot accurately inform us of how well a society is solving its problems and improving lives (to be fair to Kuznets, he did point out its limitations). But notice how well it serves the logic of today’s market, which has at the heart of its context of ‘efficiency’ a focus on monetary exchange and general growth in consumption, with scant regard as to what is being produced or what effect it has. So long as money and consumption continues to grow, that’s good. Any significant reductions are bad.
We submit to labour within a market that devalues community-building voluntarism and counts the cost of social decay as a financial benefit. We are exposed to an endless barrage of psychological manipulation designed to persuade us that happiness comes from the possession of material things. And we are culturally trained to idolise the very people who tipped the balance of negotiating power almost entirely in their own favour, enabling them to drive wages and benefits down to a point where so many hours lost to servitude bring hardly any compensation. Given all that, it cannot come as a surprise if we see credit-card binges and other forms of overextension. During the lean-and-mean 90s, many people found that pushing household borrowing to dangerous heights was just about the only way to obtain what seemed like an appropriate reward for so much sacrifice. Savings were put into mutual funds, but few people held investments large enough to compensate for all the insecurity, benefit cutbacks and downsizing that typified the era. Moreover, their portfolio managers often backed the very investor-raiders and short-term-profit-at-any-cost corporate changes that were causing so much deterioration in working life.
Coming up in part twelve: how money disincentivises work



(This essay is the tenth installment of the series HOW JOBS DESTROYED WORK)
When it comes to the topic of how money can destroy work, we encounter a problem. Such a topic really calls for an investigation into what money actually is, along with the history of empire building, trade, banking and finance and the various actions, both well-meaning and exploitative in intent, which caused money to evolve into the form it now takes. Such an undertaking calls for a book in and of itself, and as such we cannot do full justice to the role of money in destroying work here. Rather than attempt a comprehensive account, I want to focus on a few things: The commodification of debt, market efficiency, and extrinsic versus intrinsic value.
In explaining the commodification of debt, one might select the 17th century as a starting point, because that was the period in which the ideas underpinning capitalism were first put down in writing.
If one belief can be said to be of prime importance to capitalism, it would have to be the concept of private property. The problem is, the Earth and its resources do not actually belong to anyone. Such a problem was resolved in the past by asserting that divine power had set up rigid caste systems, with everybody assigned their place from birth. But such assertions were not persuasive enough for more rational minds. Another justification was needed.
In 1689, in chapter five of his Second Treatise of Government, John Locke attempted such a justification. Locke’s line of reasoning was that commodities hardly ever come in a form where they can be obtained without effort. Tin must be mined, crops must be sown, land that is to be built on must be made fit for such a purpose. So why not say that whoever performs such labour may lay claim to the end product?
“The labour of his body and the work of his hands, we may say, are strictly his. So when he takes something from the state that nature has provided and left it in, he mixes his labour with it, thus joining to it something that is his own; in that way, he makes it his property”.
Such an argument fits nicely with my definition of work, because it entails a person undertaking mental and physical effort that leads to reward. Locke argued that the individual was entitled to work to obtain all the reward he or she could make good use of:
“Anyone can, through his labour, come to own as much as he can use in a beneficial way before it spoils; anything beyond this is more than his share and belongs to others”.
Again, all reasonable-sounding points: Work to acquire all that you can use, but use all that you acquire. Do not over-hoard, for in doing so you are taking resources that others may have more use for.
However, in the very same book, Locke made another statement that undermines his argument concerning labour and property. He wrote:
“The one thing that blocks this is the invention of money, and men’s tacit agreement to put a value on it; this made it possible, with men’s consent, to have larger possessions and to have a right to them”.
We have already seen how market logic commodifies labour and self-interest adjusts negotiating positions such that the owner classes may impose conditions that maximise their benefits while minimising the reward due to the working classes. The power that money has to commodify labour brings the statement ‘anyone can, through his labour, come to own as much as he can use’ into doubt. After all, if I have money, and I pay other people to build me a house, whose labour is it that is being mixed with the resources used up in such a property? Not mine. I have not lifted a finger, other than to write a cheque.
Now, at the time when John Locke was writing, just about everybody was something of a producer. That being the case, one could claim that whoever had money must have mixed their labour with it (although you would have to ignore the reality of inheritance connected to earlier imperial conquests, feudalism, and the state monopolies of mercantilism to believe it is true in all cases). This kind of reasoning may sound acceptable to people, most of whom are used to striving to obtain even modest amounts of income. But capitalism’s drive to commodify everything and lower costs would further undermine such belief.
If you ask somebody to visualise money, chances are they will picture a physical object, like a dollar bill or an English penny. There is a popular belief that before money existed, people relied on systems of barter. Such systems would have fallen foul of the ‘double coincidence of wants’: for a barter transaction to take place, I must want what you have, and you must want what I have. Some commodities were more likely to be accepted than others, and people began using these as intermediaries. The word ‘salary’ derives from the Latin salarium, connected with salt- presumably people were once content to be paid in salt because they could be reasonably sure of exchanging it for something more useful. Over time, we converged on commodities used pretty much exclusively as media of exchange, such as coins. Money was born.
This story, by the way, is fictional. Anthropologists have been searching for the fabled land of barter- a land in which people act just like today, only with money removed- and have found no evidence that it ever actually existed. That is not to say that nobody ever bartered. Rather, the evidence points to no commonly-used system of the form ‘I will swap my four eggs for your beef steak’. If you read economics textbooks, you may note how they all use imaginary examples of a barter-based economy. True, these may be modelled on real-world communities (‘imagine a tribal village…’, ‘imagine a small-town community…’), but they are never examples drawn from historical fact.
Quite simply, this tale that goes ‘in the beginning there was barter, and it was darned inconvenient, so money was invented’ is capitalism’s creation myth. Most economists retell it, and it is a lie. Why perpetuate such a fantasy? David Graeber has argued that it is necessary to believe this is how money came into being, because it justifies arguing that economics is “itself a field of human inquiry with its own principles and laws- that is, as distinct from, say, ethics or politics”. Once you accept that, it follows that property, money, and markets existed prior to political institutions, and that there ought to be a separation between the State and the economy, with the former limited to protecting private property rights and guaranteeing the soundness of the currency (some go further and say the government should be limited to protecting property rights only). In actual fact, the emergence of markets and money has always depended on the existence of the State (whether they will continue to depend on the State, regardless of technological development, is another matter).
A full justification of the accusation that the most commonly-told story of money’s creation is a myth is beyond the scope of this essay (those who wish to consider the evidence should read ‘Debt: The First 5,000 Years’ by David Graeber). Whatever the real evolution of money was, we undoubtedly did arrive at a situation in which coins (usually gold or silver) became the ubiquitous medium of exchange. But the drive to reduce costs was not content with money remaining a physical commodity. If some non-physical thing could be accepted as ‘money’, that would grant a marvellous ability to whoever gained control of such ‘money’: it would be possible to create any amount of the stuff out of nothing.
What could possibly be accepted as a non-physical form of money? In a word: debt. We are still led to believe that money is a physical commodity. Documentaries and news reports invariably depict money in the form of paper notes and coins, but money of this kind actually represents a tiny percentage of the currency in use today. The vast majority of money, 95% or more, is created out of nothing by the banking system. More precisely, it is created out of the promise to pay. Whenever anybody takes out a loan, the bank does not lend out money it already has. Instead, the digits are simply entered into the borrower’s account, and hey presto, the money is there.
Explaining in detail the mechanisms by which money is created out of debt, and how the banking system is able to get away with something like ‘printing money’- which would be fraud if done by anyone else- is sadly beyond the scope of this essay. Those who want to know more might read ‘The Creature From Jekyll Island: A Second Look At The Federal Reserve’ by G. Edward Griffin or ‘Modernising Money: Why Our Monetary System Is Broken And How It Can Be Fixed’ by Andrew Jackson and Ben Dyson. Here, though, I want to focus on an aspect of the system that is surrounded by a lot of misunderstanding: the application of interest.
Whenever money is borrowed, it typically has to be paid back plus accrued interest. For example, if I borrow £10,000 at 9% interest, I must pay back £10,900. Now, the banking system only creates enough money to pay back the principal (i.e. the original £10,000), not the interest. Therefore, the total amount of money in existence is insufficient to repay all loans plus interest.
This has led some to conclude that our debt-based, interest-bearing currencies must lead inevitably to growing debt. There is simply no way for all the money plus interest to be repaid, other than to borrow the extra amount. But that extra also has interest charged on it. Therefore, the more we borrow, the more we have to borrow, in a never-ending spiral of growing debt.
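The spiral can be sketched in a few lines of Python. This is a toy model of the claim, assuming (purely for illustration) that the missing interest can only be obtained by further borrowing at the same rate; the £10,000 loan at 9% echoes the earlier example, and the ten-year horizon is arbitrary.

```python
# Toy model of the 'debt spiral' claim: if the interest owed can only be
# obtained by borrowing it, the total debt compounds year after year.
def debt_after_rollovers(principal: float, rate: float, years: int) -> float:
    debt = principal
    for _ in range(years):
        debt += debt * rate  # the interest itself must be borrowed
    return debt

# After ten years, the £10,000 debt has more than doubled.
print(f"£{debt_after_rollovers(10_000, 0.09, 10):,.2f}")
```

Under these assumptions the debt grows geometrically; the essay goes on to argue that this picture is too simple, because interest that is spent back into the economy can be re-earned and repaid.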
There is, however, a way to pay back more money than actually exists. To see how, we can turn to a simplified example. Imagine that Alice and Bob live alone on an island. Bob is in possession of a pound coin, the only money on the whole island. Alice asks to borrow the pound, and after interest is added she must pay back £5. How can she do that? She can submit to paid labour. She paints Bob’s fence and earns a £1 wage. She hands that £1 over and pays off part of her debt. Next day she prunes Bob’s roses, and receives the same £1 she was paid as wages yesterday. Again, she hands it back as payment towards her interest-bearing debt. This revolving-door process of money handed back and forth as wages and repayment can continue until the debt is paid off entirely.
The same principle can apply in the real world. Say a bank lends £10,000, to be repaid at £900 per month, of which £80 represents interest. That interest is spendable money in the account of the bank. If the bank decides to hire a cleaner, the result is not all that different from the simplified case. Nor do you have to literally seek employment at the very branch of the bank from which you borrowed money. So long as there is no requirement to repay everything at once, so long as the money needed to pay interest is spent back into the economy, and so long as there are wage-paying jobs, it is possible to repay debt plus interest even though the total amount of money is never enough to cover all the money owed.
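The Alice and Bob example can likewise be sketched as a toy model: the island’s single coin circulates as wages and repayment until the £5 debt is retired.

```python
# Toy model of the 'revolving door' repayment: only £1 exists on the island,
# yet Alice retires a £5 debt because the same coin circulates repeatedly
# as wages (Bob to Alice) and repayment (Alice to Bob).
def days_to_repay(debt: int, money_supply: int = 1) -> int:
    """Count the days of £1-a-day labour needed to retire the debt."""
    days = 0
    while debt > 0:
        wage = money_supply  # Bob pays Alice the island's only coin as wages
        debt -= wage         # Alice hands it straight back against her debt
        days += 1
    return days

print(days_to_repay(5))  # → 5: five days of labour retire a £5 debt with £1 in existence
```

The sketch makes the essay’s condition visible: repayment only works because the interest money is spent back into circulation as wages. Remove that wage-paying step and the loop never terminates.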
Another popular idea, in marked contrast to the previous belief in absolute shortage, is that since banks spend interest charges as operating expenses, interest to depositors and shareholder dividends, there is in fact enough money released back into the community to make all payments. But this is an oversimplification resting on the assumption that interest-bearing money is always spent into the economy, never lent at interest or invested for gain. In reality, there is a significant proportion of non-bank lenders, and if they capture some of the money needed to retire the loan that created that money, the original loan cannot be retired. Beyond that, there is a cultural expectation that money should generate more money- not through productive effort but through mere investment for personal gain.
For the system to work as that theory suggests, there would have to be enough money spent into the productive economy to pay off interest 100% of the time. But that cannot be the case when the money needed by borrowers in the everyday productive economy is instead moved ‘upstairs’ to play in a casino world where players essentially gamble on how money moves through financial systems. For example, the volume of trade on the world’s foreign exchange markets in just one week exceeds the total volume of trade in real goods and services during an entire year. This money is in continuous play by speculators hoping to make windfall profits on currency fluctuations; it is money circulated for no reason and no productive outcome other than to make more money. Nowhere in our system is there any restriction on re-lending money or investing it for personal gain in the casino economy that is, arguably, completely auxiliary to the real, productive economy. It stands to reason that every time interest is added to money that already bears an interest charge (as happens when secondary lenders capture such money), and every time money is taken out of circulation in the real, productive economy, the pressure on the system to repay the debt increases.
Producers may respond by increasing sales or raising prices. Consumers may meet the demand by taking on an additional job or paying off debts over a longer period of time. Governments may respond by raising taxes. But each tactic comes with possible negative consequences. For producers, competition for sales usually entails lowering prices, a move that necessitates even more sales and possible overproduction and saturation of the market. Increasing taxes drains money from the productive economy, thereby reducing the collective ability to pay taxes, which then necessitates deficit spending and additional interest charges. Competition for jobs lowers wages, lower wages means less consumer spending, and paying interest over longer periods adds enormously to the amount of interest owed.
The truth regarding the effect that interest has on our lives lies somewhere between those opposing extremes, in which the money supply is believed to be in absolute shortage on the one hand, and always available on the other. In principle, the money could be made available provided it were always spent into the real, productive economy, but it isn’t. And the result- fuelled in part by the commodification of debt and what some call the ‘money sequence of value’ (meaning systems that do nothing except turn money into more money)- is increasing debt, and people finding the struggle to keep themselves afloat becoming harder as the system plays out.
So who benefits from a crazy system like this which causes debt to grow and grow? Those who control the money, that’s who. Remember: banks profit primarily from interest-bearing debt. As G. Edward Griffin put it:
“No matter where you earn the money, its origin was a bank and its ultimate destination is a bank…this total of human effort is ultimately for the benefit of those who create fiat money. It is a form of modern serfdom in which the great mass of society works as indentured servants to a ruling class of financial nobility”.
I have defined work as physical or mental effort that is intrinsically meaningful and directly connected to a reward. I would argue that any work that is directly connected to a reward will necessarily be intrinsically meaningful. A beaver does not construct a lodge for no good reason. Work done for the purpose of rewarding yourself is a great thing, but how many of us can honestly be said to be ‘working for ourselves’? The so-called ‘self-employed’ cannot be said to truly work for themselves if what they are mostly doing is paying off debt. They and anyone else in that situation are working for the banks or the state (whoever you think controls fiat money). They have jobs, but they don’t have work. They are, indeed, indentured servants to a ruling class of financial nobility.
If money is created out of debt, one may ask whether money is destroyed as debt is repaid. The answer is yes. In 1694, King William III borrowed £1,200,000 from a consortium of bankers. In return for the loan, the consortium was given a monopoly on the issuance of banknotes, which basically meant they could ‘monetise’ the debt and advance IOUs for a portion of the money owed by the king. The bankers were able to charge the king 8% annual interest on the original loan and also charge interest on the same money to the clients who borrowed it. So long as the original debt remained outstanding this system could continue and, in fact, it remains unretired. It cannot be fully paid off, because if it were, Great Britain’s entire monetary system would cease to exist.
Marriner Eccles, one-time Chair of the Federal Reserve, agreed that money is destroyed as debt is paid off in a debt-based system:
“If there were no debts in our monetary system, there wouldn’t be any money”.
Coming up in part eleven: market versus technical efficiency.



(This essay is the ninth part of the series ‘How Jobs Destroyed Work’)
There is this commonly-held belief that there are people who ‘don’t want to work’. I find it hard to believe that any such person really exists: that there could be anyone who would wish to avoid productive activity that is meaningful, enhances one’s talents and improves quality of life. But, of course, when people say of others that ‘they don’t want to work’, what they always mean is that some people would rather not submit to paid employment unless forced to. If employers strive to reduce or eliminate the very qualities work needs to become a calling, though, is it any wonder if people turn their noses up at such labour? Who would willingly submit to a job that has had engagement designed out of it and is performed primarily for the enrichment of strangers?
At this point, it may be worth uncovering an important way in which ‘human nature’ differs from ‘nature’. Nature is in no way influenced by our theories regarding its workings. For example, it does not matter how many people believe nuclear fusion powers the Sun, nor how fervently this belief is held. Nature either does power stars using fusion, or it does not. What we believe has got nothing to do with it.
But human nature can be influenced by our theories of human nature. How? If we believe certain things about people, and society is designed in such a way as to amplify those traits, people may well be influenced to turn out that way. If we then take how people happened to turn out in the society we created as proof that our theories of human nature are correct, that may further persuade us to build societies around such beliefs. We have created a self-fulfilling prophecy.
Take the belief that people don’t want to work and that only wages motivate us. As we saw in part one, Adam Smith believed this was the case. Now, suppose that this Smith chap is influential enough to have his beliefs adopted by others. Business owners, fearing the disastrous consequences of laziness and inattention, set about designing systems to manage people based on such beliefs. Industrialists, believing workers are only motivated by pay, construct assembly lines that reduce work into essentially meaningless units.
What is happening, then, is that workplaces are being designed to focus on efficiency only, counting on the pay to compensate. If we put people into environments such as these, it would hardly be surprising if they acted in stereotypical ways predicted by their superiors’ theories of human nature: Finding no meaning in the labour they are performing and having no reason to do such a job apart from needing the wages. And now that feedback loop of self-fulfilling prophecy really kicks in. Discretion, autonomy, and creativity have been designed out of work, making it hard to find meaning and engagement in what one is doing. Therefore, people feel less satisfaction and chances are that if they feel a lack of satisfaction they will not perform very well unless pressured to. As they work less well absent coercion from superiors, this reinforces the belief about human nature, and so the system is reconfigured to impose even tighter controls and take away even more autonomy. As Barry Schwartz wrote in ‘Why We Work’:
“The concept of ideology, and the self-fulfilling feedback loops that ideology can give rise to, helps explain, I think, why it is that most workplaces have come to be dominated by excessive reliance on close supervision, routinized work, and [monetary] incentives”.
If there is more than one reason tempting those with power to design workplaces along certain lines, that obviously increases the likelihood of such practices being adopted. In the case of reducing or eliminating the qualities of work that make it a calling, the advantage (to owners, at least) is that workers need less training and less skill, and are therefore more expendable. That enables owners to squeeze more value out of labour, as the reduced bargaining power that results means the working classes are more likely to 'agree' to longer hours and fewer benefits, knowing full well how much easier it is for their superiors to replace them and throw them on the scrapheap of unemployment. During the rise of the lean-and-mean model, a time when lack of commitment to employees was regarded as something business should boast about and aspire to, and when the benefits of prosperity were largely monopolised by those at the executive level and by professional investors, workers at lower levels of the hierarchy increasingly adopted attitudes similar to those of their superiors, as self-protection, cynicism, and preoccupation with short-term material rewards proliferated. As Jill Andresky Fraser put it:
“A new breed of careerists began to multiply, a group that felt as little attachment to their jobs or employers as the era’s day-trading investors had for the stocks and shares they bought and sold at a rapid-fire rate”.
Why wouldn’t they feel that way? After all, the workplace had commodified their labour power to the extent that hiring and firing was proceeding at a similarly frenetic pace, and not for their benefit but rather to increase the paper-profits of executives who were not beholden to the same merciless standards. People, looking around for missing meaning, received plenty of advice on where it might be found: Through the consumption of material wealth. Advertisements and other forms of propaganda were insistent that satisfaction and happiness could be bought on credit.
But there is more to this sorry tale than just individuals seeking meaning by maxing out their credit cards. We are encouraged to believe that if individuals get deeply into debt it is all down to their own reckless behaviour. But the fact is we are all in debt, regardless of how prudent we may have been. This is because entire countries are now mired in debt; when their governments bail out banks deemed too big to fail by engaging in massive deficit spending, what this amounts to is the stealing of future generations' prosperity. Also, according to David Graeber, when it comes to personal debt:
“Very little of this debt was accrued by those determined to find money to bet on the horses or toss away on fripperies. Insofar as it was borrowed for what economists like to call discretionary spending, it was mainly…to be able to build and maintain relations with other human beings based on something other than sheer material calculation…most ordinary Americans- including Black and Latino Americans, recent immigrants, and others who were formerly excluded from credit- have responded with a stubborn insistence on continuing to love one another. They continue to acquire houses for their families…insist on continuing to hold weddings and funerals, regardless of whether this is likely to send them skirting default or bankruptcy…Granted, the role of discretionary spending itself should not be exaggerated. The chief cause of bankruptcy in America is catastrophic illness; most borrowing is simply a matter of survival (if one does not have a car, one cannot work); and for most, simply being able to go to college now means debt peonage for at least half of one’s subsequent working life”.
Graeber summarised the plight of the blue- and white-collar worker:
“One must go into debt to achieve a life that goes in any way beyond sheer survival”.
When he says ‘one must’ he is not referring to any law of nature but rather a consequence of how we have allowed our economy to develop. The role money plays in destroying work will be our next topic.



(This essay is part eight of the series ‘How Jobs Destroyed Work’) 
Some observers, among them Noam Chomsky and Jacques Fresco, have noted how corporations tend to have the same organizational structure as fascist dictatorships. In other words, there is a strict hierarchy that demands tight control at the top and obedience at every level. Granted, there may be a measure of give-and-take, but the line of authority is usually clear. Others, perhaps most notably Michel Foucault, have argued that prisons and factories emerged at more or less the same time, and that their operators consciously borrowed each other's control techniques.
For example, in the late 18th century, the social theorist Jeremy Bentham designed the 'panopticon'. The name derives from the Greek 'pan' ('all') and 'optikon' ('seeing'), and the panopticon was a prison designed in such a way that all inmates could be kept under surveillance by a single watchman. True, it was impossible for a single observer to keep an eye on all inmates at once, but the panopticon was designed in such a way as to make it impossible for any inmate to know whether he was being watched. The inmates only knew that it was possible they were currently under surveillance. Bentham's belief was that, under such conditions, inmates would effectively police their own behaviour.
So what became of the panopticon? They are everywhere, only we now tend to refer to them as 'offices'. Many a white-collar employee (below the executive level, at least) spends their in-office hours in a cubicle, most likely of a one-size-fits-all, institutional-gray design that can be set up, reconfigured, and moved at the whim of those higher up the line of authority: a constant reminder of the employee's own lack of security and importance to the corporation. Moreover, cubicles are (in the words of one employee) “mechanisms of constant surveillance”, lacking doors and usually arranged so that managers can spy on whomever they like at any time. Employees are usually made to work facing a wall, so they cannot know whether they are being watched unless they look over their shoulder. The message such an environment sends out is clear: we can see what you are- or are not- doing. So work harder or we'll replace you.
With information technology, bosses have access to surveillance possibilities that Bentham could hardly have imagined. There is, for example, the 'Investigator' program. This software can be installed, without the employee's knowledge, on the company's PCs. It records everything the employee does- every mouse-click, every keystroke, every command. Through appropriate programming of their internal computer networks and security systems, businesses have the power to impose constant surveillance on their employees, tracking precisely when they start, when they finish, how often and for how long they take a toilet break, and so on. If they so choose, bosses can configure the 'Investigator' program so that any time an employee types an 'alert' word ('union', say), the document in which it appears is automatically emailed to the appropriate supervisor. Given all that, is it any wonder that Bob Black was moved to write:
“There is more freedom in any moderately de-Stalinized dictatorship than there is in the ordinary American workplace…The boss says when to show up, when to leave, and what to do in the meantime. He tells you how much work to do, and how fast. He is free to carry his control to humiliating extremes, regulating, if he feels like it, the clothes you wear or how often you go to the bathroom…He has you spied on by snitches and supervisors”. 
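Part of what makes keyword-based surveillance so pervasive is how trivial its core mechanism is to build. Here is a minimal sketch of such an alert filter- the watch list and the reporting action are hypothetical stand-ins, and commercial monitoring products like 'Investigator' are of course far more elaborate and invasive:

```python
# Minimal sketch of a keyword-alert monitor: scan captured text for
# flagged words and trigger a report if any are found.
# The watch list and report action below are hypothetical examples.

ALERT_WORDS = {"union", "organise", "strike"}  # hypothetical watch list

def find_alerts(text):
    """Return, sorted, the alert words present in a piece of captured text."""
    words = {w.strip(".,;:!?\"'").lower() for w in text.split()}
    return sorted(ALERT_WORDS & words)

def monitor(document, send_report):
    """Check a captured document; invoke the report callback on any hits."""
    hits = find_alerts(document)
    if hits:
        send_report(document, hits)   # e.g. email the supervisor
    return hits

# A document mentioning a flagged word triggers the report callback.
captured = "Lunch meeting Friday to discuss forming a union."
print(monitor(captured, lambda doc, hits: None))  # ['union']
```

A few dozen lines suffice to turn every keystroke log into an automated informant, which is precisely why such features are cheap for employers to deploy at scale.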
By creating environments that constantly monitor worker behaviour and reduce, so far as is possible, reliance on employee skillsets, businesses increase the commodification of labour power, creating a just-in-time workforce that can be expanded or shrunk with all the impersonal efficiency with which a business manages its inventory. As might be expected, this has led to an increase in the number of temporary employees. In the mid-1980s, there were around 800,000 temps employed daily across the US. By the late 1990s, that number had increased fourfold, to roughly 3.2 million.
In labour terms, temporary employees are often indistinguishable from their full-time counterparts. The job functions they perform are the same, and they frequently put in the same 50-, 60-, or 70-hour schedules expected of permanent staff throughout the high-tech sector. What really sets them apart from full-timers are the benefits to which they are- or rather are not- entitled. Although agency fees make temps more costly than full-time staff in salary terms, skipping benefits can save a company up to 33% of an employee's paycheck. The benefits temps are typically denied include discounted stock options and corporate retirement plans. As one Microsoft temp put it:
“People who started at the same time as I did are cashing in their options and paying for their houses in cash…I’m still paying $200 a month for healthcare”.
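The arithmetic behind that saving is straightforward. The salary, benefits load, and agency fee below are assumed figures chosen purely to illustrate how skipped benefits can outweigh an agency markup; they are not data from the text:

```python
# Illustrative cost comparison: why temps can be cheaper than full-timers
# despite agency fees. All figures are assumptions for the example.

salary = 60_000                 # assumed annual paycheck, temp or full-time
benefits_rate = 0.40            # assumed benefits load on a full-timer
agency_fee_rate = 0.07          # assumed agency markup on a temp's salary

full_time_cost = salary * (1 + benefits_rate)   # salary + benefits
temp_cost = salary * (1 + agency_fee_rate)      # salary + agency fee

saving = full_time_cost - temp_cost
print(f"saving as share of paycheck: {saving / salary:.0%}")  # 33%
```

With these assumed loads the firm keeps a third of each paycheck's worth of cost, which is consistent with the up-to-33% figure cited above; the real saving obviously depends on the actual benefits package and fee.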
The advantages to business in hiring temps over full-time staff do not consist solely in the savings made by denying benefits. A workplace heavy on contingent workers has the added bonus of creating an environment of fear. We saw earlier how office floorplans that rely on quickly reconfigurable cubicles underscore the precarious, temporary nature of employment within the lean-and-mean model, and this effect is only enhanced as more job categories are converted to contingent status. The underlying tension created by such workplaces means full-timers are less likely to make demands upon management and more likely to submit to more labour for fewer rewards.
Here, too, technology helps business while negatively affecting workers. Blue-collar workers have long known the monotony of production lines that micromanage every aspect of the working day, but they were at least able to walk away from such dehumanizing environments at the end of their shift. The white-collar worker in the modern IT sweatshop is not so fortunate. Because white-collar employees can be issued with web-enabled company phones, laptops and other such devices (perhaps loaded with software that surreptitiously spies on them), businesses now have the ability, in effect, to set up portable assembly lines wherever they please, be it in places once reserved for recreation or in private homes. This results in what Jill Andresky Fraser called 'job spill'- the phenomenon of one's job demands growing to take over more and more of one's 'out-of-office' existence. So not only can smart and spy technologies enable employers to create efficient workplaces that cut down on the quality and quantity of staff they employ; there is also the opportunity to squeeze more labour out of the remaining staff ('voluntarily', of course, if you ignore the coercive pressure imposed by harsh corporate and financial conditions). As Fraser put it:
“The balanced and secure 9-5 work lives of their parents’ generation belongs to a utopian past…they struggle to fulfil job demands that require them to work after dinner and during lunch hours…on Saturdays and Sundays…during summer or winter vacations…while waiting in line at movie theatres”.
As technology and management techniques reduce or eliminate the judgement, flexibility, and challenge work needs to become a calling, we can perhaps explain why many employees feel they have ‘bullshit jobs’- that is to say, labour that seems to perform no beneficial function, either for the person doing it or society. ‘The Economist’ explained how the complexity of today’s economy has compelled businesses to impose production-line techniques at ever higher-levels of the professional ladder. In the case of much white-collar labour:
“You end up with the clerical equivalent of repeatedly affixing tab A to frame B: Shuffling papers, management of the minutiae of the supply chain and so on. Disaggregation may make it look meaningless, since many workers end up doing things incredibly far-removed from the endpoints of the process”.
Now, ‘meaning’ and ‘boredom’ are subjective states of mind. Following Sartre, we might say that the individual is always free to find meaning in what they are doing. I think there’s a lot of truth in that, but I also think it’s fair to say that, for most people, finding meaning while doing their job happens in spite of- not because of- what they are doing. This is because so many of us end up employed in jobs that have had autonomy and creativity designed out of them in the interests of reducing the bargaining power of employees so the executive classes can take advantage of economic coercion and grab a larger slice of the pie. 
Coming up in part nine, false beliefs regarding human nature and work.
