THOUGHTS ON THE BABY BOOMERS (PART TWO)
HERE COMES THE NEW BOSS, NOT LIKE THE OLD BOSS
In part one, we saw how historical events early in the 20th century led to lower inequality and greater opportunity for the post-war generation, but that this meritocratic capitalism is no more. So what happened? Why did the conditions that Baby Boomers enjoyed (lower inequality, greater opportunity, more security) come to an end? I would argue that this was largely brought about by two things: faith in the market and distrust of the old bureaucracies. From the 1980s through to the 21st century, politicians, economists and others on both the right and the left attempted to extend an idea of freedom that was modelled on the market. This model could trace its roots back to game theory, a mathematical framework developed during the Cold War and turned by John Nash into a way of looking at social interaction in general. Nash argued that individuals lived their lives in a game, pursuing only their self-interest and constantly adjusting to one another’s strategies. Economists argued that, if this were true, then we should give up on the very idea of a collective people’s will: there was no way of adding up all individuals’ competing desires to produce one coherent goal. Furthermore, it suggested that the old idea of politicians and civil servants being motivated by some altruistic calling to serve the public good was a lie. In reality, it was now thought, politicians and civil servants were motivated by self-interest to build up their own empires. The idea of public duty was based on an illusion. Only the market could possibly respond to people’s self-interested drives and create overall prosperity. The best thing that politicians could do was to stop interfering.
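To make Nash's logic concrete, here is a minimal sketch in Python of the textbook prisoner's dilemma (the payoff numbers are standard illustrative values, not drawn from any of the sources above). Each purely self-interested player finds that defecting is their best reply whatever the other does, so mutual defection is the only stable outcome, even though both would be better off cooperating.

```python
# A minimal sketch of Nash's idea using the prisoner's dilemma.
# Payoff values are illustrative textbook numbers, not from any source.

COOPERATE, DEFECT = 0, 1
# payoffs[(row_choice, col_choice)] = (row player's payoff, column player's payoff)
payoffs = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT):    (0, 5),
    (DEFECT,    COOPERATE): (5, 0),
    (DEFECT,    DEFECT):    (1, 1),
}

def best_reply(opponent_choice, player):
    """Return the choice maximising this player's payoff, holding the opponent fixed."""
    def payoff(my_choice):
        pair = (my_choice, opponent_choice) if player == 0 else (opponent_choice, my_choice)
        return payoffs[pair][player]
    return max((COOPERATE, DEFECT), key=payoff)

# A strategy profile is a Nash equilibrium when each choice is a best reply to the other.
for a in (COOPERATE, DEFECT):
    for b in (COOPERATE, DEFECT):
        if best_reply(b, 0) == a and best_reply(a, 1) == b:
            print("Nash equilibrium:", ("C", "D")[a], ("C", "D")[b])  # prints: D D
```

The equilibrium is stable but collectively worse than cooperation, and it was this picture of society, stable arrangements of mutually suspicious self-interest, that came to be applied well beyond economics.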
Political leaders like Thatcher, Major, and Blair (and, in the States, presidents like Reagan and Clinton) set about deregulating the market. Following advice from management consultants, John Major set out to create an alternative system intended to mimic the self-interested drive of the free market. This involved setting performance targets, the idea being that targets would harness individualism and transform self-serving bureaucrats into heroic entrepreneurs, driven by market-style incentives to provide great services. The inexorable logic at the heart of game theory led to this targets-based system spreading far beyond government bureaucracies, as teachers, nurses and workers in the private sector were also given performance targets.
FED UP WITH THE OLD WAYS
It wasn’t only politicians who had grown tired of the old bureaucracies. Throughout the 70s and 80s, popular wisdom increasingly saw corporations as bloated and inefficient: business executives were handicapping their organisations with unwieldy bureaucracies and indulging an entitled workforce with overly generous rewards and low performance demands. In America, the business community was perceived as unable to compete against more nimble foreign competitors. In the UK, years of industrial action turned the public against the unions.
This perception led leaders in politics and business to take drastic deregulatory action. In America, President Reagan appointed an attorney who had previously defended large corporations against anti-trust suits as head of the Department of Justice’s antitrust division, thereby pretty much guaranteeing non-interference from the government in the face of a growing mergers-and-acquisitions movement. Around the same time, the Supreme Court declared laws aimed at shielding local companies from out-of-state suitors to be unconstitutional, a decision that also helped accelerate an era of mergers and acquisitions.
JUNK BONDS AND A FEEDING FRENZY
Meanwhile, on Wall Street, Michael Milken of the investment house Drexel Burnham pioneered the high-yield debt instruments known as junk bonds. They enabled far riskier and more aggressive corporate raids than had been possible in the previous, more cautious era. This deregulation and the move toward hostile takeovers, leveraged buyouts and corporate bust-ups led to a dramatically different working environment. The change is perhaps best illustrated by the nicknames that CEOs of this period acquired. In 1962, Earl S. Willis, manager of employee benefits services at General Electric, wrote that “maximising employment security is a prime company goal”. In marked contrast, 20 years later General Electric’s CEO Jack Welch earned the nickname ‘Neutron Jack’ because, so the wags quipped, by the time he was done with all the layoffs and cutbacks, only the buildings were left standing.
He was hardly alone. Indeed, the 80s was a period in which employees went through many a corporate crisis, brought about by everything from deregulation to global competition. The pressures these trends placed on workplaces led to a corporate perspective focused on increasingly short-term goals, and to job conditions that became ever more uncertain, unrewarding, and demanding. Permanent careers became impermanent jobs, with a move from permanent staff to contingent labour in the form of temps and independent contractors. During the 80s and 90s, pension protections that had existed for the best part of a century were cut back or eliminated altogether, leaving growing numbers of men and women with no pension at all.
The change that these circumstances wrought was summed up by Steven Hill, who wrote in an article for Salon:
“In a sense, employers and employees used to be married to each other, and there was a sense of commitment and a shared destiny. Now, employers just want a bunch of one-night-stands with their employees…with ‘jobs’ amounting to a series of low-paid micro-gigs and piece work, offering little empowerment for average workers”.
THE CREDIT CASINO
It was not only the workplace that underwent radical changes. The deregulation of the 80s and 90s freed up a great deal of capital, but at the cost of creating some highly risky financial instruments. For example, there were collateralised debt obligations (CDOs). As Dylan Ratigan explained, “CDOs gave banks a way to sell investors bets on whether all of us will be able to pay all our bills”. And then there were asset-backed securities, bonds that let investors buy the right to collect on debt payments such as credit card bills and car loans. Aided by increasingly powerful computers capable of tracking huge quantities of data, those existing bonds were bundled into gigantic CDOs. As Ratigan explained, “the new idea in banking was to take every kind of obligation to repay borrowed money, trillions of dollars’ worth, put them in a statistical blender, and then sell portions of the mixture as investments”.
Investment banks began intentionally mixing high-risk loans, such as poor people’s housing loans, with low-risk loans, such as wealthy people’s credit card debt. The blend produced a credit rating comparable to that of medium-risk loans, even though the safe loans in the mix provided no protection from the really risky ones.
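A toy Monte Carlo makes the point. All the numbers below (loan counts, default rates, the ‘bad year’ shock) are invented for illustration: the blended pool’s average default rate looks comfortably ‘medium risk’, but because the risky loans tend to default together, losses in bad years are far worse than the average suggests, and the safe loans do nothing to absorb them.

```python
# Toy simulation of a blended loan pool. All figures are invented.
import random

random.seed(0)
N_SAFE, N_RISKY = 50, 50          # loans in the pool
P_SAFE, P_RISKY = 0.02, 0.20      # annual default probabilities
TRIALS = 10_000

# The headline figure a 'statistical blender' rating might quote:
# the simple average default rate, which looks like medium risk.
print("average default rate:",
      (N_SAFE * P_SAFE + N_RISKY * P_RISKY) / (N_SAFE + N_RISKY))  # 0.11

losses = []
for _ in range(TRIALS):
    # Risky borrowers tend to default together: in a 'bad year'
    # (one year in ten here) their default probability triples.
    bad_year = random.random() < 0.10
    p_risky = min(1.0, P_RISKY * 3) if bad_year else P_RISKY
    defaults = sum(random.random() < P_SAFE for _ in range(N_SAFE))
    defaults += sum(random.random() < p_risky for _ in range(N_RISKY))
    losses.append(defaults / (N_SAFE + N_RISKY))

losses.sort()
print("mean loss:      ", sum(losses) / TRIALS)
print("95th percentile:", losses[int(0.95 * TRIALS)])
# The tail loss is far worse than the 'medium-risk' average suggests:
# the safe loans dilute the headline number but absorb none of the
# clustered losses from the risky half of the pool.
```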
The overall result was ever-increasing risk transfer, or what Dylan Ratigan called “playing hot potato with debt…The traditional incentive for banks to act as price integrity police, the standard of making careful, educated investments, was replaced by the incentive to sell as much insurance on as much debt as possible”.
This was a time of mortgages granted to NINJAs: people with no income, no job, and no assets. It was a time when every delivery by the postal service included pre-approved credit card applications. And it was during this time that Wall Street cowboys used their political influence to bring about the Financial Services Modernization Act of 1999. This revoked a rule, established after the Crash of 1929, that no one company could simultaneously be a traditional bank, investment firm and insurance company. The Act meant a bank could (in Ratigan’s words) “take your money for safekeeping and use it as collateral with no supervision, all the while insuring itself against losses that taxpayers must pay if the bets the banks made with our money went bad”.
The move to deregulate the market and free capital from political interference and bureaucracy resulted in the rise of a credit casino, which lured people into taking on ever-increasing levels of debt, hidden behind financial instruments so complex that nobody could really hope to understand them. The banking and financial sector became increasingly loaded with toxic debt, inflating a speculative bubble that threatened to pop at any time and trigger a catastrophic downward spiral.
Of course, something like that almost happened in 2008, when the subprime mortgage bubble burst and threatened to bring down such huge banking conglomerates that the government had to bail out the banks to avoid a crash as bad as that of 1929, if not worse. The massive stimulus packages that rescued the banks have resulted in austerity for future generations, or at least those generations who can’t afford top financial advisers to use every morally dubious trick in the book to protect their money. And the dubious financial instruments that led to the near collapse of the global monetary system are still largely in place, leaving us with the probability that another speculative bubble could inflate and then pop, wiping out your savings.
WHAT ABOUT THOSE TARGETS?
Still, at least those performance targets had rid us of self-serving bureaucrats and delivered efficient services run by heroic entrepreneurs, right? New Labour certainly expected that to be the case. When they came to power in 1997, New Labour modelled itself on the Clinton administration. Like their US counterpart, New Labour gave power away to the banks and the markets. And they took the targets-based system that John Major had introduced and vastly expanded it, to the point where just about everyone, from cabinet ministers down, and even things previously considered unquantifiable, such as ‘happiness’, became part of a huge mathematical system that was supposed to use targets to free public servants from bureaucratic control.
But what this targets-based system actually did was provide an opportunity for cheats to succeed by finding sneaky ways of fulfilling their goals. For example, hospital managers were given targets to cut waiting lists, and they achieved this by ordering consultants to prioritise the easiest operations, like bunions, over more complicated ones, like cancers. When they were given targets to reduce the number of patients waiting on trolleys, management removed the wheels from some of the trolleys, reclassified them as beds, and reclassified the corridors as wards. Again, this meant they could take those patients off the list and meet their targets. Obviously, those tactics were not doing much to improve the actual quality of medical care.
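The underlying failure mode, optimising the measured number rather than the thing the number was meant to track, is easy to caricature in a few lines of code. This sketch is purely illustrative; the operations and their ‘health benefit’ scores are invented numbers, not NHS data.

```python
# Toy illustration of gaming a target. All numbers are invented.
# A manager schedules operations within limited theatre hours.
# Measured target: patients cleared from the waiting list.
# What actually matters: total health benefit delivered.

operations = [
    # (name, theatre hours required, health benefit delivered)
    ("bunion", 1, 1),
    ("hernia", 2, 3),
    ("cancer", 8, 50),
]
HOURS = 8

def schedule(priority_key):
    """Greedily fill theatre hours, repeatedly taking the top-priority
    operation that still fits (there are many patients of each kind)."""
    done, hours_left = [], HOURS
    while True:
        candidates = [op for op in operations if op[1] <= hours_left]
        if not candidates:
            return done
        op = min(candidates, key=priority_key)
        done.append(op)
        hours_left -= op[1]

# Chasing the target: do whatever is quickest, clearing the most names.
gamed = schedule(lambda op: op[1])
# Chasing the outcome: do whatever delivers the most benefit per hour.
honest = schedule(lambda op: -op[2] / op[1])

for label, plan in [("target-chasing ", gamed), ("outcome-chasing", honest)]:
    print(label,
          "| patients cleared:", len(plan),
          "| health benefit:", sum(op[2] for op in plan))
# target-chasing  | patients cleared: 8 | health benefit: 8
# outcome-chasing | patients cleared: 1 | health benefit: 50
```

The target rewards the first schedule, eight bunions, even though the second delivers vastly more of what the target was supposed to stand for.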
At first the government dismissed reports of cheating as a few bad eggs, but as more reports of fiddled numbers came in, it became obvious that cheating had become endemic throughout the public services. Research from Harvard Business School and others has shown that when goals are imposed on people, though intended to ensure peak performance, they often result in efforts to game the system without producing the underlying results the metric was supposed to be assessing. As Patrick Schiltz, a professor of law, put it:
“Your entire frame of reference will change and the dozens of quick decisions you make every day will reflect a set of values that embodies not what is right, but what you think you can get away with”.
This endemic cheating turned what had been intended as a rational system for boosting efficiency into a weird world in which people were confronted with numbers and simply didn’t know whether to trust them or not. New Labour responded by adding even more mathematical levels of management, devising complex systems of auditing in order to monitor workers and make sure targets were being correctly fulfilled. The effect of all this was to turn what had been intended as a system of liberation into powerful new forms of control.
THE RETURN OF INEQUALITY
Worse still, this system created a more rigid and stratified society, and it did so through what it did to education. League tables were created, showing parents which schools were the best performing and which were bottom of the heap. The intention was that league tables would incentivise less successful schools to improve their services, leading to rising standards across society. Instead, rich parents moved to the areas with the best schools, which caused house prices to spiral, thereby keeping poor families out. And since the league tables were based on exam results, schooling was transformed from a system intended to give poor children the well-rounded education they would need to achieve social mobility into one focused on training kids to pass exams, thereby enabling the school to rise up the league tables. The result was that by 2006, the country had become more rigid and stratified than at any time since the Second World War.
CONCLUSION
So if we compare life for the Baby Boomers with that of subsequent generations, we find the following. The Baby Boomers had an educational system designed to train them for their future careers and so enable social mobility; later generations received an education intended only to help schools look good on league tables, and higher education that is often not worth a damn when it comes to improving one’s chances of landing a decent job. When they entered the world of employment, the paternalistic corporate model ensured Baby Boomers a secure and steady working life in which they were treated as stakeholders in the company they worked for, provided with many benefits in return for loyalty. Nowadays you enter a job with, in all probability, no idea whether you will still have it tomorrow. You are a ‘permalancer’, working the same long hours as a full-time employee while enjoying the same lack of benefits (no sick pay, no holiday entitlement) as the self-employed. The Baby Boomers left their dependable jobs and received a pension that enabled them to maintain the middle-class lifestyle they had earned throughout their working lives. These days, with austerity eating away at so many services, and a banking and financial sector still very much prone to speculative bubbles and subsequent crashes, you have no idea whether you will have any money to support you in your old age.
If we can visualise the Baby Boomers as being on a fairly well signposted path to prosperity, we can visualise later generations as being in some bewildering maze-cum-gauntlet, trying to tell genuine, well-meaning experts apart from cheats disguised as public servants but interested only in achieving tremendous short-term gain at their expense. In such a complex and uncertain world, is it any wonder that today’s young have adopted an “eat, drink and be merry, for tomorrow we die” outlook on life? I rather suspect that, had the Baby Boomers lived under such conditions of uncertainty, they might have behaved the same.

REFERENCES

Capital in the Twenty-First Century by Thomas Piketty
White-Collar Sweatshop by Jill Andresky Fraser
The Trap by Adam Curtis
Greedy Bastards by Dylan Ratigan


THOUGHTS ON THE BABY BOOMERS (PART ONE)
INTRODUCTION
In a letter submitted to the Daily Mail, Dorothy Dobson responded to an article, published in the same paper, which attacked the Baby Boomer generation for being one that enjoyed privileges no longer available to subsequent generations. Headlined ‘Not All Baby Boomers Lived The Life Of Riley’, Mrs Dobson’s letter explained how “I was born in 1948, to a household with no bathroom, no hot water, and no inside toilet…Our first house cost us £1,400 at a time when my husband’s wages were £4 a week…My husband worked 16 hours a day, seven days a week to earn enough to live on…We’re now pensioners and have a decent home and a small amount of savings…We have good holidays but we have worked hard all our lives. Today’s whingers…think life owes them a living…They should be like we were and go without their luxuries”.
Reading the letter, one hardly gets the impression that everything was handed to the Dobsons on a silver platter. I expect many other Baby Boomers recognised their own lives in it, for they too were born into households with very little luxury and worked hard for decades to secure a reasonable standard of living in which to live out their old age. It must be very frustrating to hear younger generations, those selfish, smartphone-obsessed kidults, going on endless ‘gap years’ and maxing out their credit cards rather than saving for retirement, accuse the older generation of ‘having it easy’.
As it happens, an understanding of the society that Baby Boomers grew up in, and of the conditions experienced by later generations, does not in fact support the notion that our parents were born with everything handed to them on a silver platter. But things were different back then, and those differences arguably did grant the Baby Boomers opportunities that did not exist before, and have not existed since.
FROM PATRIMONIAL TO MERITOCRATIC CAPITALISM
In order to see why this might be so, one needs to understand that capitalism comes in many forms, some more inclusive than others. From the 18th to the early 20th century, the West was ruled by a form of capitalism we might call ‘patrimonial capitalism’. This is a form of capitalism dominated by old money circulated through inheritance, leading to a rigid class structure and little social mobility. It is the reason why 19th century social commentators like Jane Austen are so preoccupied with marriage: back then, the rigid class structure and lack of social mobility meant one’s best chance of obtaining or holding onto great wealth was to marry into old money.
This dynamic was encapsulated in an economic formula set out by Thomas Piketty: r > g, where r is the rate of return on capital (profits, dividends, rents and other income from capital) and g is the rate of economic growth. When growth is low and r exceeds g, wealth accumulates more quickly from capital than from labour, and it accumulates disproportionately among the top 10% and 1%. The result is a trend toward higher inequality.
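To see the force of the inequality, consider a minimal compounding sketch. The rates below are illustrative assumptions, roughly the orders of magnitude Piketty discusses, not his actual figures:

```python
# A minimal compounding sketch of r > g. Rates are illustrative assumptions.
r, g = 0.05, 0.015          # return on capital vs economic growth
fortune, wages = 100.0, 100.0

for year in range(30):
    fortune *= 1 + r        # inherited capital compounds at r
    wages *= 1 + g          # labour income grows with the economy at g

print(f"after 30 years: capital {fortune:.0f}, wages {wages:.0f}")
# after 30 years: capital 432, wages 156
```

Over a generation, the gap between those who own capital and those who live on wages widens relentlessly, purely through compounding.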
This rise in financial inequality could have spelled the end of democracy, as society would have divided between an oligarchical elite whose inherited wealth dominated much of society, and everybody else, lacking the power to change their circumstances. But then, in the early 20th century, some dramatic events occurred which altered the course of history. Those events were the Great Depression and the two World Wars. As you might imagine, these events destroyed much wealth. But, as Piketty argued, they were particularly bad for the wealth owned by the elite. And in the aftermath of World War II, governments took steps towards a redistribution of wealth, and the fast economic growth occurring around the world reduced the importance of inherited wealth.
How so? Well, the clue is in the word ‘growth’. You see, when growth is slow, life changes only gradually. So a money-making venture that worked well for one generation will work for subsequent ones. The son inherits his father’s buggy-whip empire and can live off the profits it brings in. But when growth is high, change is fast and there is no guarantee that a successful venture will continue to work in the future. There is not much call for buggy whips when the internal combustion engine has rendered horse-drawn carriages obsolete.
So, between 1930 and 1975, the trend toward higher inequality was reversed, and a combination of government redistribution programs and fast growth enabled a transition away from patrimonial capitalism toward a more meritocratic form of capitalism. Whereas under patrimonial capitalism your circumstances are largely dictated by the conditions into which you are born, under meritocratic capitalism it is more how you live your life that determines where on the social ladder you end up. You can go from riches to rags if you make enough bad decisions (or have inordinate amounts of bad luck). Or, like the Dobsons, you can go from being really quite poor to really quite well-off, all through your own efforts and careful money management. Certainly, this isn’t privilege handed to you on a plate; it is the opportunity to ascend or descend the social ladder based on your own life choices.
COMMUNISM AND PATERNALISM
Another event which changed society from the 1920s to the 1980s was the rise of left-wing politics and the communist revolution. The revolution did not have quite the effect that Marx expected. He believed that socialism would sweep capitalism away as the proletariat gained awareness of themselves as a class and used their superior collective strength to wrest the means of production from the hands of the owner classes. Instead, the threat of communism and the collective strength of workers organised into unions led to a reformation of the workplace. Whereas in the dark, satanic mills of the 19th century employees were treated as commodities to be exploited for profit, made to endure conditions that would seem intolerably brutal to us, in the postwar years businesses came to think of their employees as stakeholders. They would treat their staff well, providing benefits like job security, paid vacations, full health coverage and a pension that would enable the employee to continue living in the middle-class lifestyle to which he had become accustomed through decades of loyal service. And it was, of course, that loyalty that businesses hoped to encourage. If the company adopted a paternalistic attitude and looked after its employees, they would want to work to ensure the company thrived.
The rise in benefits during the Baby Boom period was captured in a booklet called ‘A Record Of Progress’, published by the Consolidated Edison Company of New York. It recorded, from 1945 to 1960, a 143% increase in sick pay, medical coverage and paid absences; a 172% increase in leisure time benefits (vacations and paid holidays); and a 562% increase in retirement benefits.

What the Baby Boomers enjoyed, then, was not everything handed to them on a plate, but rather a world in which there was less inequality, more opportunity to succeed, and more security for ordinary people than had been the case in the preceding two hundred years. It was a time when you were assured that doing well at school and gaining good grades would land you a decent job; that your workplace would provide security for all your working life; and that when you retired, your company pension would allow you to sustain your middle-class existence for several decades, the remainder of your life, basically.

It was, arguably, a golden age of capitalism and state security. And now, it is all but gone. How that came about will be the subject of part two.
REFERENCES

White-Collar Sweatshop by Jill Andresky Fraser
The Trap by Adam Curtis
Greedy Bastards by Dylan Ratigan
Capital in the Twenty-First Century by Thomas Piketty
Daily Mail


THOUGHTS ON COMMAND ECONOMIES
Non-capitalist ownership of the means of production and reliance on a command (rather than a market) economy were among the defining features of Communist states.
COMMAND AND MARKET ECONOMIES
The difference between a market and a command economy is that whereas the former relies on decentralised decisions between customers and suppliers to determine such things as what should be produced and in what quantity, in a command economy such decisions are made through a hierarchical, top-down process. In the Soviet case, this system was organised with the Politburo at the apex. Next down the chain of command were ministries for every major branch of industry. These were, in turn, supervised by the State Planning Committee and by departments of the Central Committee of the Communist Party. Unlike in market economies, producers working in command economies had little reason to be concerned with the wishes of whoever used their products. Nor were the activities of their competitors of any concern, because competition was absent; other producers engaged in making similar goods were comrades collaborating on the execution of the State plan. Above all else, producers were concerned with meeting whatever targets the planners set.
COMMAND ECONOMIES AND IDEOLOGY
If we take these two defining features of a Communist state, and also take into consideration other defining features such as democratic centralism and the leading role of the Communist party, what we find in common with all of them is a strong ideological component. As Archie Brown said:
“These defining features of Communism, while ideologically significant, were also of clear organisational importance. They were part of the operational code of Communist rule with an everyday relevance to the task of maintaining power”.
They were ideological because they were part of the Bolshevik (and successor) belief system that held Socialism to be a higher stage of development than capitalism. While the Communists considered the victory of Communism over Capitalism to be inevitable, they also believed that the process could be speeded up, provided political power was firmly in the hands of the Party.
The absence of a market economy and private ownership played a definite part in placing power within the hands of the ruling Party. At times, upsetting State authorities meant imprisonment or death, but even in more relaxed times public dissent from State authorities was a serious threat to one’s career. After all, the State controlled the career possibilities of all citizens. Brown again:
“Communism was an all-encompassing system of beliefs…It had authorities whose word could not be questioned, and whose interpreters and guardians acted also as gatekeepers, deciding who belonged and who did not”.
SVYAZI, BLAT AND TOLKACH
It would be wrong, however, to assume that Communist countries were run 100% by a command economy, with no private ownership or market activity whatsoever. In fact, some private activity (whether legal, illegal, or a mixture of both) occurred in Communist systems. Agriculture, in particular, was an area where private enterprise was not uncommon. Indeed, in the case of Yugoslavia and Poland, agriculture was mostly in private hands.
In non-market economies, goods and services often suffered shortages, and the system probably could not have functioned at all were it not for informal rules that developed around three key Russian words: Svyazi, Blat, and Tolkach. Translated into English, those are ‘Connections’, ‘Pull’ and ‘Fixer’ (or ‘Pusher’). Together, they comprised practices that oiled the wheels of the command economy.
Let’s start with Svyazi, or ‘connections’. Given the control that State authorities had over people’s lives, it shouldn’t be surprising to learn that connections (knowing the right people) were very important. Although Communism was supposedly a system that abolished class, Svyazi was mostly a privilege of the Soviet middle classes and elites.
Where Svyazi was concerned, favours could be rendered without expecting anything in return, even indirectly. However, ‘Blat’ (or ‘pull’) always involved a reciprocal exchange of favours. The exchange of favours need not have been direct and could have consisted of a long and rather complex chain. According to Brown:
“How much pull a person had depended, obviously, on that person’s position within society, but at all social levels in the Soviet Union, there were unofficial networks which, to a certain extent, bypassed the official structures”.
Perhaps the most important part of the informal set of rules oiling the wheels of the command economy was the tolkach, or ‘fixer’. If a factory fell behind schedule, that could have a huge effect in a command economy, because there were no alternative suppliers. Therefore, despite official disapproval, the tolkach were tolerated, because they served to make the top priority of meeting production targets somewhat easier. This they did through begging, borrowing, bribing: basically any persuasive method that could ensure needed supplies actually arrived.
So, as stated before, the Communist command economy was never totally without private ownership and decentralised interactions between consumers and suppliers. Of course, the same thing can be said of market economies, which, after all, never operate completely without State intervention. The State makes the sale of certain products illegal even though there is an economic demand for them (such regulations are not 100% effective, and tend to cause the emergence of black markets) and also affects prices by imposing higher taxes on certain products. Perhaps most importantly, at times of financial crisis even the most ardent free-market ideologues find themselves turning to Government for rescue.
Still, an economic system must be predominantly either a command or a market economy, as the disastrous attempts to find a ‘third way’ have proved. Going on what history has taught us, though, one would be unwise to consider the command economy the superior of the two. Quite simply, the planned Soviet economy never worked all that well (although it could sometimes produce impressive results: think of the Soviet success in launching the first satellite, for example). A prime reason for its relative lack of success was the fact that prices were determined bureaucratically and, unlike in a market economy, budgets were not constrained by the need to make a profit. This led to weak penalties for failure and scant reward for success. As Brown explained:
“Shortfalls and waste…were automatically excused by the soft budget constraint…when extra costs were incurred, prices were allowed to rise, either openly or in a disguised form through a lowering of the quality of the product (which was not high to begin with). And, purchasers, whether of producer goods or consumer goods, did not have the option of taking their custom elsewhere”.
CONCLUSION
Perhaps it is not so surprising, then, that, far from overthrowing the Capitalist system, Communist States instead relaxed more and more top-down control over time. For example, in 1987 the Law on the State Enterprise devolved power to factory managers, and by 1988 Gorbachev had abolished pretty much all of the Central Committee’s economic departments. What was left by then was neither a functioning command economy nor a market economy, but a dysfunctional hybrid, which only added to the pressure to change the Communist system in such fundamental ways that it ceased to be by the early 1990s.
REFERENCES
The Rise And Fall Of Communism by Archie Brown

Wikipedia


THOUGHTS ON THE COLD WAR
From 1950 to 1990, a state of Cold War existed between East and West. At its heart, this simmering tension centred on an ideological question: who should own capital? The ‘West’ represented US-led ‘free enterprise’ capitalism, and the ‘East’ Soviet-style state Socialism.
From a Western point of view, the Cold War was seen as a struggle to restrain the Soviet Union and hold Communism at bay. It is ironic, then, that the Cold War brought about conditions that helped perpetuate the Communist system. Communist states tend to be highly authoritarian, and such a state is more easily maintained when there is the ever-present threat of an external enemy. Such a threat provided justification for censorship and restricted foreign travel. Thus, the Cold War gave Communist states an excuse to suppress information regarding the relative economic success and greater liberty to be had in market-based, democratic countries.
As Alec Nove explained, “the centralised economy, party control, censorship, and the KGB were justified in the eyes of the leaders, and many of the led, by the need to combat enemies, internal and external”.
It is doubtful that the Soviet State could have survived a hot war, as that would almost certainly have involved an exchange of nuclear weapons that would have ended civilisation. The fact that the doctrine underpinning the arms race was nicknamed MAD (Mutually Assured Destruction) speaks volumes about how unlikely survival would have been for either side had East-West tension boiled over. But, at the same time, it was the Cold War and the tensions that resulted, tensions that were advantageous to hardliners within Eastern Europe, that helped restrict contact with the more prosperous countries of the West: contact that Communist states were not equipped to survive.
REFERENCES
The Rise and Fall Of Communism by Archie Brown
Introduction To Marxism by Rupert Woodfin and Oscar Zarate


LYSENKO: IDEOLOGY AND THE CORRUPTION OF SCIENCE
INTRODUCTION
Throughout history there have been many examples of pseudoscientists peddling crackpot theories. Examples include John Ernst Worrell Keely who, in 1872, announced that he had found a new physical force and managed to talk investors into backing his scheme to produce technology that exploited ‘intermolecular vibrations of the aether’, and Harry Grindell Matthews, who claimed in 1921 to have built a ‘death ray’ (he never provided detailed explanations of how the device worked, and the British, French and American governments all declined to fund his project for military purposes).
Such examples may bring to mind fairly harmless eccentrics or, at worst, fraudulent snake oil salesmen, deluding themselves (or, perhaps, cynically exploiting others’ naivety) into believing in some extraordinary breakthrough. Occasionally, though, a pseudoscientific belief can coincide with historical circumstances to produce something much more sinister. The best example of such an outcome may well be the case of Trofim Denisovich Lysenko.
EARLY YEARS
Lysenko was born in 1898, the son of a Ukrainian peasant family. He grew up to become an agronomist and was unknown to the rest of the Soviet Union until 1927, when the newspaper Pravda ran a story about his radical ideas concerning crop management.
AN ABANDONED EVOLUTIONARY THEORY
By that time, most of the modern scientific world had embraced the neo-Darwinian synthesis, which saw evolution as a process whereby genes that confer advantageous traits are more likely to survive through the generations than those that confer disadvantageous traits. For example, genes which code for brown fur would create a rabbit that is easily spotted by predators in snowy regions, so it’s not surprising that arctic animals tend to have white fur.
Lysenko’s radical new theories were based not on modern genetics but on a theory of evolution that predates Darwin’s. Lamarck’s theory posited that a species’ traits developed as a response to its environment. According to this theory, a person who did physical exercise would build up their muscles and have offspring with similarly muscular bodies. Lamarck’s theory had been disproved in a grisly experiment involving the severing of mice’s tails. According to Lamarckism, when tailless mice bred, their offspring should have been born without tails, but this never happened. Modern genetic theory tells us why: the mice still passed on the genes for growing tails.
Despite Lamarckism being a long-discredited theory, Lysenko believed it to be a more accurate explanation of evolutionary change than genetics, because he thought Lamarckism, concerned as it was with struggle and radical change, was more compatible with Marxist theory. He was not the only one to think so. When, in 1935, Lysenko gave a speech that denounced traditional geneticists as anti-Marxist and compared them to the peasants who resisted the Soviet government’s collectivisation strategies, Stalin responded with a standing ovation, saying, “bravo, comrade Lysenko. Bravo”. The combination of endorsement from the most powerful man in Communist Russia, plus the results of some dubious experiments, saw Lysenko admitted into the hierarchy of the Communist party of the Soviet Union, where he was made head of the Institute of Genetics and Plant Breeding.
DESPERATE TIMES LEADING TO IDEOLOGY-BASED PSEUDOSCIENCE
In attempting to leapfrog from an agrarian economy to an industrialised one, the communist regime’s collectivist policies had led to mismanagement and drastic shortages in the food supply. Faced with famine, both people and government were desperate for solutions, and Lysenko appeared to be somebody who was quick to provide practical answers. But he was so quick in producing possible solutions, everything from the cold treatment of grain and the cluster planting of trees to different fertiliser mixes, that scientists couldn’t determine whether one technique was useless or harmful before a new one was adopted.
Moreover, there probably weren’t that many qualified people around to refute whatever Lysenko prescribed. He used his position in the Communist hierarchy to denounce biologists as “fly lovers and people haters”, portraying his opponents as enemies of the State intent on the purposeful destruction of the Soviet economy. By 1940, thousands of geneticists had lost their jobs, been imprisoned, or been executed.
Meanwhile, Lysenko appeared to be everything that the Soviet system deemed inspirational. He came across, after all, as a peasant who developed solutions to practical problems by applying his own intelligence. The Soviet propaganda machine overstated his successes and suppressed his failures. The ideological orthodoxy that saw many fine geneticists fall victim to Stalin’s terrors spread to other sciences such as astronomy and chemistry. The Soviet Union did have some notable technological achievements. They put the first satellite into orbit, and the first man in space (Yuri Gagarin) was from Soviet Russia. But in general it’s fair to say that Communist countries lagged far behind Western countries in terms of technological innovation. There was nothing like Silicon Valley in Communist countries (particularly if we count domestic and not military technology). Given the story of Lysenko and the replacement of science with ideologically-driven pseudoscience, we can see why this might be so.
REFERENCES
The Rise And Fall Of Communism by Archie Brown
Wikipedia
Far Out: 101 Strange Tales From Science’s Outer Edge by Mark Pilkington


BATTLE OF THE PARADIGMS: THE FIGHT OVER INTERNET NEUTRALITY
INTRODUCTION
The philosopher Thomas Kuhn saw major scientific theories as being established through ‘paradigm shifts’. According to this view of progress, there is always a ‘paradigm’- a set of prevailing theories which form a world view through which people observe and explain their world. Any observation or evidence that seems to conflict with that worldview is rejected by the ‘establishment’ who act on the basis that the paradigm must be held inviolate. Opposing the ‘establishment’ is a heretical group beholden to a different paradigm.
This notion of competing worldviews or paradigms is perhaps not confined to major scientific theories; it also lies at the heart of one of the major issues affecting our most pervasive and important technologies. As such, there is a battle being fought that inevitably affects the lives of all of us. What battle? The fight over Internet neutrality.
ORIGINS OF THE PARADIGMS
That competing worldviews should have arisen around the Internet is perhaps not surprising given the history of its development and the wider historical context in which it emerged. The battle is between government and the private sector on one hand, who for various reasons seek to impose more centralised control over the Internet, enclosing it for profit’s or security’s sake, and advocates of commons-style management on the other, who wish to maintain and extend the Internet’s capacity to be a distributed, peer-to-peer, laterally-scaled network that could potentially provide most of the goods and services required for a decent life at near-zero marginal cost.
These three groups (government, the private sector, and civil society) have been part of the Internet from the start. The Internet itself is a commons, meaning it is owned by everyone and no one. It takes big telecommunications companies to establish and maintain the physical network of the Internet (the fibre-optic cables, data storage and so on), but such companies are merely facilitators and providers; they don’t actually own the Internet. Governance of the Internet is the job of various non-profit organisations, such as the World Wide Web Consortium, the Internet Engineering Task Force and the Internet Corporation for Assigned Names and Numbers. All of these organisations are open to anyone to take part in, although participation does require technical expertise, which in practice excludes the non-technical among us.
The Internet is a commons, but the Web (or rather, the set of applications that run on it) is a hybrid of commercial enterprises and nonprofit organisations. The line between a commercial and a nonprofit organisation is not always clear-cut. In the case of Apple and its App Store, the operation is obviously commercial. But it can feel like services provided by, say, Google or Facebook are not commercial, because end users get to access them for ‘free’. This mix of purely, partly, and non-commercial apps on the Internet helps make the issue of governance a thorny one.
ORIGINS OF CENTRALISATION
OK, but why should there be those opposing paradigms of an enclosed, privatised Internet on one hand, and an open, collaborative network on the other? If we look back over the history of the Internet and the wider historical context in which it evolved, the reason becomes clearer.
The Internet’s origins can be traced back to 1960s research commissioned by the US Federal government to build robust, fault-tolerant computer networks that could provide communication even in the event of nuclear attack. This research led to a precursor of the Internet known as Arpanet, which served as a backbone for the interconnection of academic and military networks. By the 1990s, the linking in of commercial networks and enterprises, and of personal and mobile computers, saw the emergence of the Internet that we know today.
Now, as I said before, the Internet evolved within a wider historical context. In particular, it evolved in a world dominated by fossil fuels, and it was this more than anything that led to there being a worldview bent on enclosure and privatisation. The oil industry is one of the most concentrated in the world. Just about every other industry that depends upon fossil fuel requires vast capital expenditure to establish the vertical integration that brings together supply chains, production processes and distribution centres. The sheer cost of these vertically integrated corporate enterprises demands centralised management in order to increase efficiencies and lower costs. This practice of vertical integration and centralisation was both necessary, given the energy/communications matrix of the twentieth century, and actually pretty successful, bringing appreciable improvement to the lives of those in industrialised nations.
It’s perhaps not surprising, then, that governments and private enterprises used to an era when vertical integration and centralisation were the optimal path to prosperity would seek to carry their worldview to 21st century technologies.
Another thing to consider is the fact that the electromagnetic spectrum we use to communicate had long seemed like a scarce resource. Broadcast radio emerged in the 1920s. Whenever two or more broadcasters in close proximity used spectrum frequencies very close to one another, the result was constant interruption and interference. By 1927, radio broadcasters were causing sufficient disruption for Congress to pass the Radio Act, establishing the Federal Radio Commission (succeeded in 1934 by the Federal Communications Commission, the FCC). The commission’s job was to manage the radio spectrum and determine which frequencies could be used and by whom. In practice, this meant that a broadcaster had exclusive use of a particular radio frequency licensed from the commission. It also meant that the spectrum itself was considered a scarce resource and, as such, came to be thought of as a commercial asset. Therefore, not only did we have organisations growing up in an era when vertical integration and centralisation were engines of productivity and efficiency, but also organisations growing up at a time when broadcasting spectrum was a scarce commodity that had to be managed commercially. This all contributed to the establishment of a mindset wedded to the ‘enclosure’ paradigm.
ORIGIN OF THE COLLABORATIVE COMMONS
But, equally, from the start the Internet has been a commons, and there have been those wedded to that style of governance. ‘Internet neutrality’ is a concept that grew out of the end-to-end structure of the Internet. This structure favours users rather than network providers because, while we must pay for our Internet connection, and the speed and quality provided by our ISP can be better or worse depending on how much we pay, once we are connected all transmitted packets of data are treated the same way, with no one, commercial or otherwise, getting preferential treatment. Advocates of commons-style management believe that a neutral Internet is best suited to facilitate the collaboration of millions of end users, allowing people to develop their own applications that would advance network collaborations and drive marginal costs to near zero, an eventuality that would cause a paradigm shift away from capitalist/socialist economic systems toward a new world order.
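In queueing terms, the difference between the two paradigms is simple. Here is a hedged sketch (the packet names and the ‘premium’ flag are invented for illustration): a neutral network serves packets strictly first-come, first-served, while a non-neutral one lets paying traffic jump the queue.

```python
# A minimal sketch of 'neutral' routing in queueing terms.
# Packet names and the 'premium' tier are invented for illustration.
from collections import deque

packets = [("startup-video", "standard"), ("incumbent-video", "premium"),
           ("email", "standard"), ("incumbent-ad", "premium"),
           ("blog-page", "standard")]

# Neutral network: strict first-come, first-served; the sender is ignored.
neutral = deque(packets)
print("neutral order:", [name for name, _ in neutral])

# Non-neutral network: anyone paying for 'premium' jumps the queue
# (stable sort keeps arrival order within each tier).
prioritised = sorted(packets, key=lambda p: p[1] != "premium")
print("paid priority:", [name for name, _ in prioritised])
# neutral order: ['startup-video', 'incumbent-video', 'email', 'incumbent-ad', 'blog-page']
# paid priority: ['incumbent-video', 'incumbent-ad', 'startup-video', 'email', 'blog-page']
```

The deep-pocketed incumbents’ traffic always arrives first in the second regime, which is precisely why commons advocates see paid prioritisation as a threat to newcomers.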
GOVERNMENT AND CENTRALISED CONTROL
The notion of a nearly free, open, transparent Internet is not something all parties welcome. According to Jeremy Rifkin, “national governments, concerned over a spate of Internet-related policy issues that affect their general welfare and sovereign interests…are enacting legislation, some of which is threatening an essential feature of the medium: its open, universal, and transparent nature”. In 2011, China, Russia, Uzbekistan and Tajikistan submitted a proposal to the UN General Assembly that pushed for new forms of government control over the Internet.
COMMERCIALISATION THROUGH ENCLOSURE
Governments find themselves caught between two interest groups, one dedicated to the capitalist model, the other subscribing to a commons mindset. Not surprisingly, each group seeks to get government on its side, enacting legislation that furthers its interests. The private sector seeks price discrimination that would increase income and profits. More precisely, it seeks to secure control of the information exchanged over the Internet, control that would enable it to charge different prices for access to certain kinds of information, prioritise some transmissions, and favour some applications while blocking others. Such a move would seriously compromise network neutrality, which is based on non-discriminatory communication, with equal access and inclusion for all participants.
This enclosure of the Internet is not just coming from outside, in the form of network providers fighting against what they see as unfair restrictions on their right to pursue profits. It’s also coming from the inside. As Rifkin said, “some of the best-known social media sites on the Web are revving up to find ways to enclose, commercialise, and monopolise the new communications medium”.
There is a famous acronym, TANSTAAFL, which stands for ‘there ain’t no such thing as a free lunch’. In other words, in the commercial world nothing is ever really free, and whenever it seems like that is the case, it’s only because the price is hidden. A case in point is the services provided by the likes of Google and Facebook. Ever since the public began connecting their computers and phones to the Net and sharing data on the web, a valuable resource of personal data has been waiting to be mined, commercialised, and exploited for profit. As Tim Berners-Lee explained, whenever somebody connects to commercial media sites, their vital information is (in Rifkin’s words) “immediately captured, siloed, enclosed and commodified”. That information is then sold to interested third parties.
The most familiar result is the targeted advertising that comes with using Google’s search engine. But it could also potentially mean such things as health insurance companies using your search history and other digital footprints to decide whether to provide cover or not. According to Berners-Lee, “the more this kind of architecture gains widespread use, the more the Web becomes fragmented, and the less we enjoy a single, universal, information space”. In other words, we could be witnessing a form of commercial exploitation based on the commodification of the self that is creating centralised and proprietary monopolies in virtual space.
NATURAL MONOPOLIES
Commons advocates argue that, since Google provides an essential service that we all depend upon, and given that no rival offers anything like a comparable service, Google should be classed as an essential facility. Concerns have been raised over loss of ‘search neutrality’, in which a dominant, privately-owned search engine is tempted, for commercial or political reasons, to manipulate search results.
In a counter to such arguments, free-market advocates warn that, by treating such services as social utilities and calling for regulations that treat them as natural monopolies, we run the risk of actually turning them into just that, because in doing so we would be protecting such companies from competition. Since regulated companies have a guaranteed rate of return and fixed prices built in, critics say, they have less incentive to be innovative. It’s not like they have competitors to worry about, after all.
Also, there are critics who argue that monopolisation of social media is less of a concern than monopolisation by, say, power companies, because of the disproportionate up-front costs. In order to secure a natural monopoly, power companies had to invest huge amounts of capital to put in place the requisite physical infrastructure and to lock in a captive user base. But when it comes to setting up something like Twitter, the up-front costs are far lower. This makes it a lot easier for new players to come along and displace market leaders.
The problem with that argument, however, is that while it was once possible to depose market leaders in social media using little capital investment, that is no longer the case. After all, as Rifkin said, “Google…et al are investing billions of dollars in expanding their user base while simultaneously creating impenetrable enclosures, protected by layer upon layer of intellectual property, all designed to profit from the global social commons they helped create”.
The more such services grow, and the more users of such services there are, the more it is of benefit if everyone likewise uses the same services. But such services remain commercial ventures, so while they are motivated to optimise social connections, in line with their users’ interests, they are also incentivised to sell information about users to third parties. This, according to Zeynap Tufecki, sociology professor at the University of South Carolina, is “the corporatization of the commons”.
ENERGY
This battle of paradigms between an enclosed, privatised and commercialised Internet on one hand and a distributed, collaborative, laterally-scaled commons-management network on the other, is not just confined to the question of who owns our personal information. It also impacts on the future of energy. I said earlier that 20th century energy production, based on fossil fuels, required vertically-integrated infrastructure and centralised management. But 21st century renewable-energy technologies will work best when they are distributed, scale laterally across society, and organised collaboratively, which means they favour a communications medium that is similarly distributed, collaborative, and laterally scaled. As Rifkin said, “Internet communications and renewable energies form the inseparable matrix for a foundational infrastructure whose operating logic is best served by commons management”.
Moreover, technological progress can turn seemingly scarce resources into abundant ones (actually, it is more accurately said that the resource was always abundant; our ability to access it was poor). Case in point: Managing communications over radio frequencies. Modern technologies in the form of smart antennas, dynamic spectrum access, cognitive radio technologies and mesh networks are able to employ a variety of tricks that use the radio spectrum with far greater efficiency. This opens up the possibility of establishing open wireless connections at near-zero marginal cost.
But, if so, it won’t come without a struggle because there are those with a vested interest in preventing the transition to an abundance-based society. We see this in the case of electric utilities, where there have been moves to design a smart grid that is centralised, proprietary, and closed. The plan is to prevent users from having access to moment-by-moment information regarding changes in the price of electricity and to prevent end users from uploading their electricity to the grid at advantageous times when electricity peaks. In other words, they would rather not have users who are efficient at managing their electricity usage, as that would eat into profits. Also, concerns have been raised over discriminatory practices that favour faster connectivity of green electricity by affiliated business partners. As in all cases of neutrality versus preferential treatment, the goal for those pursuing the latter is to enforce a centralised architecture (in this case, on the smart grid) thereby enabling commercial enclosure for profit’s sake.
THE BATTLE
So which paradigm will ultimately prevail? Will the Internet and its future expanded and enhanced incarnation, the Internet of Things (in which not just communications and information but also energy and logistics are incorporated into our networks) become increasingly enclosed and proprietary, perpetuating the dominance of capitalist/socialist economic systems? Or will we make the paradigm shift to commons management in which distributed, collaborative, laterally-scaled peer-to-peer networks drive marginal costs to near-zero and prosumers become more focused on increasing their social capital through providing access to goods and services communities value, with capitalist/socialist work relegated to a supplementary role, maintaining the infrastructure on which the collaborative commons is built, but no longer dominant in our lives?
Given the financial muscle and political influence of today’s global corporations, it might seem like the collaborative commons stands little chance. But, from an environmental perspective, it would be unwise to bet against a near-zero marginal cost society. It has long been established by the laws of thermodynamics that the total amount of energy in the universe never decreases or increases. Also, all work converts useful energy into forms unavailable for work. What this means is that every time resources are turned into commodities, we are effectively running up a bill with nature. Some say that bill has now come due in the form of climate change and loss of biodiversity. Even if you are a climate change denier, there still remains the well-established laws of thermodynamics, which demand of life constant innovation and evolution to discover more efficient use of resources, at pain of extinction. 
The collaborative commons, built on top of infrastructures established by a market system operating at the peak of its power to deliver maximum possible productivity, would be so efficient at converting resources into useful products and services and distributing them appropriately, that anyone interested in the long-term survival and prosperity of the human race ought to promote and work toward advancing it.
BATTLE OF THE PARADIGMS: THE FIGHT OVER INTERNET NEUTRALITY
INTRODUCTION
The philosopher Thomas Kuhn saw major scientific theories as being established through 'paradigm shifts'. On this view of progress, there is always a 'paradigm'- a set of prevailing theories which form the worldview through which people observe and explain their world. Any observation or evidence that seems to conflict with that worldview is rejected by an 'establishment' that acts on the basis that the paradigm must be held inviolate. Opposing the establishment is a heretical group beholden to a different paradigm.
This notion of competing worldviews or paradigms is perhaps not confined to major scientific theories; it can also be seen in one of the major issues affecting our most pervasive and important technologies. As such, there is a battle being fought that inevitably affects the lives of all of us. What battle? The fight over Internet neutrality.
ORIGINS OF THE PARADIGMS
That competing worldviews should have arisen thanks to the Internet is perhaps not surprising, given the history of its development and the wider historical context in which it emerged. The battle is between government and the private sector on one hand, who for various reasons seek ways of imposing more centralised control over the Internet and of enclosing it for profit's or security's sake, and advocates of commons-style management on the other, who wish to maintain and extend the Internet's capacity to be a distributed, peer-to-peer, laterally-scaled network that could potentially provide most of the goods and services required for a decent life at near-zero marginal cost.
These three groups- government, the private sector, and civil society- have been part of the Internet from the start. The Internet itself is a commons, meaning it is owned by everyone and no-one. It takes big telecommunications companies to establish and maintain the physical network of the Internet (the fibre-optic cables, data storage etc), but such companies are merely facilitators and providers; they don't actually own the Internet. Governance of the Internet is the job of various non-profit organisations, such as the 'World Wide Web Consortium', the 'Internet Engineering Taskforce' and the 'Internet Corporation for Assigned Names and Numbers'. All of these organisations are open for anyone to take part in, although participation does require technical expertise, which in practice excludes the non-technical among us.
The Internet is a commons, but the Web (or rather, the set of applications that run on it) is a hybrid of commercial enterprises and nonprofit organisations. The line between a commercial and a nonprofit organisation is not always clear-cut. In the case of Apple and its App Store, the service is obviously commercial. But services provided by, say, Google or Facebook can feel non-commercial because end users get to access them for 'free'. This mix of purely, partly, and non-commercial apps on the Internet helps make the issue of governance a thorny one.
ORIGINS OF CENTRALISATION
OK, but why should there be those opposing paradigms of an enclosed, privatised Internet on one hand, and an open, collaborative network on the other? If we look back over the history of the Internet and the wider historical context in which it evolved, the reason becomes clearer.
The Internet's origins can be traced back to 1960s research commissioned by the US Federal government to build robust, fault-tolerant computer networks that would provide communication even in the event of nuclear attack. This research led to a precursor of the Internet known as Arpanet, which served as a backbone for the interconnection of academic and military networks. By the 1990s, the linking of commercial networks and enterprises, along with the connection of personal and mobile computers, saw the emergence of the Internet we know today.
Now, as I said before, the Internet evolved within a wider historical context. In particular, it evolved in a world dominated by fossil fuels, and it was this more than anything that led to a worldview bent on enclosure and privatisation. The oil industry is one of the most concentrated in the world, and just about every other industry that depends upon fossil fuels requires vast capital expenditure to establish the vertical integration that brings together supply chains, production processes and distribution centres. The sheer cost of these vertically integrated corporate enterprises demands centralised management in order to increase efficiencies and lower costs. This practice of vertical integration and centralisation was both necessary, given the energy/communications matrix of the twentieth century, and actually pretty successful, bringing appreciable improvement to the lives of those in industrialised nations.
It's perhaps not surprising, then, that governments and private enterprises used to an era when vertical integration and centralisation were the optimal path to prosperity would seek to carry that worldview over to 21st century technologies.
Another thing to consider is that the electromagnetic spectrum we use to communicate had long seemed like a scarce resource. Broadcast radio emerged in the 1920s, and whenever two or more broadcasters in close proximity used spectrum frequencies very close to one another, the result was constant interruption and interference. By 1927, radio broadcasters were causing sufficient disruption for Congress to pass the Radio Act, which established the Federal Radio Commission (succeeded in 1934 by the Federal Communications Commission). The commission's job was to manage the radio spectrum and determine which frequencies could be used and by whom. In practice, this meant that a broadcaster had exclusive use of a particular radio frequency licensed from the commission. It also meant that the spectrum itself was considered a scarce resource and, as such, came to be thought of as a commercial asset. So not only did we have organisations growing up in an era when vertical integration and centralisation were engines of productivity and efficiency, but also organisations growing up at a time when broadcast spectrum was a scarce commodity that had to be managed commercially. This all contributed to the establishment of a mindset wedded to the 'enclosure' paradigm.
ORIGIN OF THE COLLABORATIVE COMMONS
But, equally, from the start the Internet has been a commons, and there have been those wedded to that style of governance. 'Internet neutrality' is a concept that grew out of the end-to-end structure of the Internet. This structure favours users rather than network providers: while we must pay for an Internet connection, and the speed and quality provided by our ISP can be better or worse depending on how much we pay, once we are connected all transmitted packets of data are treated the same way, with no one- commercial or otherwise- getting preferential treatment. Advocates of commons-style management believe that a neutral Internet is best suited to facilitate the collaboration of millions of end users, allowing people to develop their own applications that would advance network collaborations and drive marginal costs to near zero, an eventuality that would cause a paradigm shift away from capitalist/socialist economic systems toward a new world order.
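To make 'treating all packets the same' concrete, here is a minimal sketch in Python. The packet fields and queue behaviour are my own invented illustration, not any real router's implementation; it simply contrasts a neutral first-come-first-served queue with a discriminatory one that fast-lanes a favoured (paying) application.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Packet:
    source: str       # who sent it
    application: str  # e.g. 'video', 'email', 'voip'
    payload: bytes

def neutral_forward(incoming: list[Packet]) -> list[Packet]:
    """A neutral network: packets leave in the order they arrived,
    regardless of who sent them or what application they belong to."""
    queue = deque(incoming)
    return [queue.popleft() for _ in range(len(queue))]

def discriminatory_forward(incoming: list[Packet], favoured: str) -> list[Packet]:
    """A non-neutral network: packets from a favoured application
    jump the queue; everyone else waits."""
    fast_lane = [p for p in incoming if p.application == favoured]
    slow_lane = [p for p in incoming if p.application != favoured]
    return fast_lane + slow_lane

packets = [Packet('alice', 'email', b'...'),
           Packet('bob', 'video', b'...'),
           Packet('carol', 'voip', b'...')]

print([p.application for p in neutral_forward(packets)])                  # ['email', 'video', 'voip']
print([p.application for p in discriminatory_forward(packets, 'video')])  # ['video', 'email', 'voip']
```

The point of the sketch is that neutrality is not a feature you add; it is the absence of any inspection of who sent a packet or what it is for.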
GOVERNMENT AND CENTRALISED CONTROL
The notion of a nearly free, open, transparent Internet is not something all parties welcome. According to Jeremy Rifkin, "national governments, concerned over a spate of Internet-related policy issues that affect their general welfare and sovereign interests…are enacting legislation, some of which is threatening an essential feature of the medium- its open, universal, and transparent nature". In 2011, China, Russia, Uzbekistan and Tajikistan submitted a proposal to the UN General Assembly that pushed for new forms of government control over the Internet.
COMMERCIALISATION THROUGH ENCLOSURE
Governments find themselves in the middle of two interest groups, one of which is dedicated to the capitalist model, while the other subscribes to a commons mindset. Not surprisingly, each group seeks to get government on its side, enacting legislation that furthers its interests. In the case of the private sector, it seeks price discrimination that would increase income and profits. More precisely, it seeks to secure control of information exchanged over the Internet, control that would enable it to charge different prices for access to certain kinds of information, to prioritise transmissions, and to favour some applications while blocking others. Such a move would seriously compromise network neutrality, which is based on non-discriminatory communication, with equal access and inclusion for all participants.
This enclosure of the Internet is not just coming from outside, in the form of network providers fighting against what they see as unfair restrictions on their right to pursue profits. It's also coming from the inside. As Rifkin said, "some of the best-known social media sites on the Web are revving up to find ways to enclose, commercialise, and monopolise the new communications medium".
There is a famous acronym- TANSTAAFL- which stands for 'there ain't no such thing as a free lunch'. In other words, in the commercial world nothing is ever really free, and whenever it seems like it is, that's only because the price is hidden. A case in point is the services provided by the likes of Google and Facebook. Ever since the public began connecting their computers and phones to the Net and sharing data on the Web, a valuable resource of personal data has been waiting to be mined, commercialised, and exploited for profit. As Tim Berners-Lee explained, whenever somebody connects to commercial media sites, their vital information is (in Rifkin's words) "immediately captured, siloed, enclosed and commodified". That information is then sold to interested third parties.
The most familiar result is the targeted advertising that comes with using Google’s search engine. But it could also potentially mean such things as health insurance companies using your search history and other digital footprints to decide whether to provide cover or not. According to Berners-Lee, “the more this kind of architecture gains widespread use, the more the Web becomes fragmented, and the less we enjoy a single, universal, information space”. In other words, we could be witnessing a form of commercial exploitation based on the commodification of the self that is creating centralised and proprietary monopolies in virtual space.
NATURAL MONOPOLIES
It is argued by commons advocates that, since Google provides an essential service that we all depend upon, and given that no rival offers anything like a comparable service, Google should be classed as an essential facility. Concerns have been raised over loss of 'search neutrality', in which a dominant, privately-owned search engine is tempted, for commercial or political reasons, to manipulate search results.
In a counter to such arguments, free-market advocates warn that, by treating such services as social utilities and calling for regulations that treat them as natural monopolies, we run the risk of actually turning them into just that, because in doing so we would be protecting such companies from competition. Since regulated companies have a guaranteed rate of return and fixed prices built in, critics say, they have less incentive to be innovative. It’s not like they have competitors to worry about, after all.
Also, there are critics who argue that monopolisation of social media is less of a concern than monopolisation by, say, power companies, because of the disparity in up-front costs. In order to secure a natural monopoly, power companies had to invest huge amounts of capital to put in place the requisite physical infrastructure and to lock in a captive user base. But when it comes to setting up something like Twitter, the up-front costs are far lower, which makes it a lot easier for new players to come along and displace market leaders.
The problem with that argument, however, is that while it was once possible to depose market leaders in social media using little capital investment, that is no longer the case. After all, as Rifkin said, “Google…et al are investing billions of dollars in expanding their user base while simultaneously creating impenetrable enclosures, protected by layer upon layer of intellectual property, all designed to profit from the global social commons they helped create”.
The more such services grow, the stronger the network effect: the more users a service has, the greater the benefit to each user of everyone else being on that same service. But such services remain commercial ventures, so while they are motivated to optimise social connections, in line with their users' interests, they are also incentivised to sell information about users to third parties. This, according to Zeynep Tufekci, a sociology professor at the University of North Carolina, is "the corporatization of the commons".
ENERGY
This battle of paradigms- between an enclosed, privatised and commercialised Internet on one hand and a distributed, collaborative, laterally-scaled commons-management network on the other- is not confined to the question of who owns our personal information. It also impacts the future of energy. I said earlier that 20th century energy production, based on fossil fuels, required vertically-integrated infrastructure and centralised management. But 21st century renewable-energy technologies will work best when they are distributed, scaled laterally across society, and organised collaboratively, which means they favour a communications medium that is similarly distributed, collaborative, and laterally scaled. As Rifkin said, "Internet communications and renewable energies form the inseparable matrix for a foundational infrastructure whose operating logic is best served by commons management".
Moreover, technological progress can turn seemingly scarce resources into abundant ones (more accurately, the resource was always abundant; our ability to access it was poor). Case in point: managing communications over radio frequencies. Modern technologies in the form of smart antennas, dynamic spectrum access, cognitive radio and mesh networks employ a variety of tricks to use the radio spectrum with far greater efficiency. This opens up the possibility of establishing open wireless connections at near-zero marginal cost.
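A toy illustration of why dynamic spectrum access stretches a 'scarce' resource: below is a hedged Python sketch in which opportunistic radios sense the band each time slot and borrow any channel whose licensee is idle. The channel count, duty cycles and demand figures are invented for illustration; this is not a model of any real spectrum regime.

```python
import random

random.seed(1)
CHANNELS = 8
PRIMARY_DUTY = 0.3      # a licensee transmits roughly 30% of the time
SECONDARY_DEMAND = 0.6  # chance an opportunistic user wants a given idle channel

def utilisation(slots: int = 10_000, dynamic: bool = False) -> float:
    """Fraction of channel-slots actually carrying traffic."""
    used = 0
    for _ in range(slots):
        for _ in range(CHANNELS):
            if random.random() < PRIMARY_DUTY:
                used += 1   # the licensee is on air
            elif dynamic and random.random() < SECONDARY_DEMAND:
                used += 1   # a sensing radio borrows the idle channel
    return used / (slots * CHANNELS)

print(f"exclusive licensing: {utilisation():.0%}")              # roughly 30% of the band in use
print(f"dynamic access:      {utilisation(dynamic=True):.0%}")  # roughly 70% of the band in use
```

Under exclusive licensing, idle airtime is simply wasted; under dynamic access, the same physical spectrum carries more than twice the traffic, which is the sense in which the 'scarcity' was really an access problem.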
But this transition won't come without a struggle, because there are those with a vested interest in preventing the move to an abundance-based society. We see this in the case of electric utilities, where there have been moves to design a smart grid that is centralised, proprietary, and closed. The plan is to deny users access to moment-by-moment information about changes in the price of electricity, and to prevent end users from selling their electricity back to the grid at advantageous times when demand peaks. In other words, the utilities would rather not have users who are efficient at managing their electricity usage, as that would eat into profits. Concerns have also been raised over discriminatory practices that favour faster connectivity for green electricity supplied by affiliated business partners. As in all cases of neutrality versus preferential treatment, the goal for those pursuing the latter is to enforce a centralised architecture (in this case, on the smart grid), thereby enabling commercial enclosure for profit's sake.
THE BATTLE
So which paradigm will ultimately prevail? Will the Internet and its future expanded and enhanced incarnation, the Internet of Things (in which not just communications and information but also energy and logistics are incorporated into our networks), become increasingly enclosed and proprietary, perpetuating the dominance of capitalist/socialist economic systems? Or will we make the paradigm shift to commons management, in which distributed, collaborative, laterally-scaled peer-to-peer networks drive marginal costs to near zero, and prosumers focus on increasing their social capital by providing access to goods and services their communities value? In the latter scenario, capitalist/socialist work would be relegated to a supplementary role- maintaining the infrastructure on which the collaborative commons is built- but would no longer dominate our lives.
Given the financial muscle and political influence of today's global corporations, it might seem like the collaborative commons stands little chance. But, from an environmental perspective, it would be unwise to bet against a near-zero marginal cost society. The laws of thermodynamics have long established that the total amount of energy in the universe never increases or decreases, and that all work converts some useful energy into forms unavailable for work. What this means is that every time resources are turned into commodities, we are effectively running up a bill with nature. Some say that bill has now come due, in the form of climate change and loss of biodiversity. Even if you are a climate change denier, there still remain the well-established laws of thermodynamics, which demand of life constant innovation and evolution toward more efficient use of resources, on pain of extinction.
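For reference, the two laws being invoked can be stated compactly. This is the standard textbook formulation, not anything specific to Rifkin's argument:

```latex
% First law: energy is conserved
% (U = internal energy, Q = heat added, W = work done by the system)
\Delta U = Q - W

% Second law: every real process increases total entropy,
% i.e. degrades some useful energy into forms unavailable for work
\Delta S_{\mathrm{universe}} \geq 0
```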
The collaborative commons, built on top of infrastructures established by a market system operating at the peak of its power to deliver maximum possible productivity, would be so efficient at converting resources into useful products and services and distributing them appropriately, that anyone interested in the long-term survival and prosperity of the human race ought to promote and work toward advancing it.


The Logistics Internet

THE LOGISTICS INTERNET
INTRODUCTION
Whenever a new technology, machine or tool emerges on the scene, it needs to be given a name. Often, that name is derived from something older and more familiar which is reckoned to share some similarities with the new-fangled thing. For example, when engines were invented that could provide people with transport that did not require horses, that mode of transport was given the name 'horseless carriage', later abbreviated to 'car'. Not all names catch on like 'car' did. Failures include 'Iron Horse' (train), 'Picture Radio' (television) and 'Aero-Motive Engine' (airplane).
The concept of an interconnected network of computers also required a name. Before we settled on the now-familiar 'Internet', names like 'I-Way', 'Infobahn' and 'Information Superhighway' were suggested. Again, we see something familiar being used to describe something new: in this case, interconnected highway systems that enabled cars to travel coast to coast without encountering a stop light served as a metaphor for information transmitted across a distributed network of information/communication technologies.
Now, in the 21st century, we are developing technologies that could enable us to expand the Internet to become something that could substantially improve logistics. A ‘Logistics Internet’, if you will.
WHAT IS LOGISTICS?
So, what is logistics and why does the Internet need to expand to include it?
Logistics refers to “the flow of things between point of origin and point of consumption in order to meet requirements of customers or corporations”. In other words, it encompasses the many ways in which products and services are stored and eventually delivered to customers.
The reason logistics needs to become part of an expanded Internet is that such a move may help eradicate many deficiencies within current logistics operations, deficiencies that result in unnecessary cost and waste.
DEFICIENCIES IN THE CURRENT SYSTEM
The deficiencies of current logistics came about because of the strengths and limitations of 19th and 20th century technologies. As pointed out by Jeremy Rifkin in his book 'Zero Marginal Cost Society', the first industrial revolution favoured factories and logistics networks clustered in and around major cities, relying on rail links to "bring in energy and materials from suppliers upstream and deliver finished products to wholesalers and retailers downstream". Being an employee meant living within walking distance of such a factory, or having access to a commuter train. In the second industrial revolution, nationwide interstate highway systems allowed production to migrate from dense urban centres to the suburbs, with truck transport overtaking rail, and workers often travelling greater distances to and from their place of employment by car.
As with most 19th and 20th century industrial capitalist practices, businesses favoured internal, top-down, centralised command over logistics as a way of giving private firms more control over their production, storage, and distribution channels. But that control comes at the cost of lost efficiency and productivity, and increased waste.
Let’s start with inefficiencies. In the United States alone, trailer trucks are on average only sixty percent full when on the road, because, while they often leave their docks fully loaded, after each drop they become less full and often return empty. On a global scale transport is even less efficient, achieving around ten percent efficiency.
Not only that, but the reliance on giant, centralised warehouses and distribution centres serving large terrains means products often cannot be transported by the fastest routes but must take more circuitous routes instead.
Those warehouses and distribution centres create yet more waste and inefficiency. The seasonal nature of product lines means there are periods of the year in which warehouses are under-used, while at other times they are overextended. Products are often stored in warehouses for long periods of time, and at high cost (US business inventories were estimated at $1.6 trillion in 2013). Many time-sensitive products, like food, go unsold because logistical inefficiencies mean they cannot be transported in a timely manner.
Most of these inefficiencies are the result of having a logistics system dominated by hundreds of private carriers and a dearth of common standards and protocols that would encourage greater collaboration and more efficient sharing of logistics resources.
A NEW APPROACH
Not surprisingly, then, there are those who are looking to the Internet to rethink how we do logistics on a national and global scale. As mentioned previously, interstate highways originally provided the metaphor of an 'information superhighway', conceptualised as an interconnected communications system that allowed information to travel without effort across a distributed network. As Rifkin explained, "a packet of information transmitted over the Internet contains information on both its identity and routing to its destination. The data packet is structured independently from the equipment, allowing the packet to be processed through different systems and networks, including copper wires, fibre-optic wires, routers, local-area networks, wide-area networks, etc".
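Rifkin's point- that a packet carries its own identity and routing, independent of the equipment underneath- is easy to sketch. The fields below loosely mirror an IP-style header, but the names are illustrative, not the real IPv4 layout:

```python
from dataclasses import dataclass

@dataclass
class DataPacket:
    source: str       # identity: who sent it
    destination: str  # routing: where it is going
    sequence: int     # identity: which piece of the message this is
    payload: bytes    # the actual content

def forward(packet: DataPacket, link: str) -> DataPacket:
    """Any link technology can carry the packet unchanged: the header is
    all a hop needs, so copper, fibre, and radio are interchangeable."""
    print(f"carrying packet {packet.sequence} for {packet.destination} over {link}")
    return packet

p = DataPacket("alice.example", "bob.example", sequence=1, payload=b"hello")
for hop in ["copper wire", "fibre-optic cable", "wide-area radio link"]:
    p = forward(p, hop)
```

It is exactly this separation of the 'what and where' from the 'how' that the logistics Internet proposes to copy, with standardised containers playing the role of packets.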
Similarly, it is hoped that, by using the latest IT and Internet technology applications, firms could collaborate with each other and share logistical resources in ways that would increase efficiencies and productivity and lower costs. The idea is to create a ‘logistics Internet’ in which warehouses and distribution centres are connected in “an open supply web managed by sophisticated analytics and algorithms” that would enable companies to “store items and route shipments in the most efficient manner at any given time”.
ENABLING TECHNOLOGIES
So what technologies are making the development of a logistics Internet possible? Amazon is a good company to examine for clues, as it is as much a logistics company as a virtual retailer. In an effort to reduce inefficient manual labour, the company has added robots, automated storage systems and intelligent automated guided vehicles. As has been pointed out in many a news report recently, driverless vehicles will be operational in the future. As they could potentially operate at near-zero marginal labour cost, there is obviously plenty of opportunity to make savings in the cost of moving products around.
It's not exclusively about robots; it's also about our increasing ability to collect, share, and analyse data, made possible by ever cheaper, more convenient and more capable networked sensors. UPS has sensors embedded in its vehicles that monitor individual parts, thereby flagging up potential malfunctions before they result in costly breakdowns. There are sensors that enable households to monitor moment-to-moment electricity usage by various domestic appliances, providing information that can help households reduce wasteful consumption. There are sensors that keep track of the availability of raw resources and of current inventories in warehouses, and that monitor production lines for any signs of inefficiency.
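A hedged sketch of the kind of monitoring just described (the sensor readings, window size and threshold are all invented; this is not UPS's actual system): stream readings from a vehicle part and flag drift before it becomes a breakdown.

```python
from statistics import mean

def flag_malfunctions(readings: list[float], window: int = 5, tolerance: float = 0.15):
    """Compare each reading against the rolling average of the previous
    readings and yield a warning when it drifts beyond the tolerance band."""
    for i in range(window, len(readings)):
        baseline = mean(readings[i - window:i])
        if abs(readings[i] - baseline) / baseline > tolerance:
            yield i, readings[i], baseline

# invented vibration readings from a truck wheel-bearing sensor
vibration = [1.0, 1.02, 0.99, 1.01, 1.0, 1.03, 1.01, 1.25, 1.31, 1.4]
for i, value, baseline in flag_malfunctions(vibration):
    print(f"reading {i}: {value} vs recent average {baseline:.2f} -> schedule maintenance")
```

Real predictive-maintenance systems are far more sophisticated, but the principle is the same: cheap sensors plus simple analytics convert a costly roadside breakdown into a scheduled workshop visit.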
We are developing the technological capability to automate much of the "planning, execution, and control of the movement of goods" that defines logistics, as well as increasing our ability to monitor the logistics value chain for signs of avoidable waste. But what is really needed is a set of universal standards and protocols. Benoit Montreuil of the University Research Centre on Enterprise Networks, Logistics and Transport in Montreal, Canada, explained how the various components (sensors, robots, data-mining algorithms etc) need to be connected into a single, transparent, open system. Physical products should ideally be embedded in standardised modular containers and equipped with non-proprietary sensors for identification and sorting, and everything along the supply chain should operate by the same standard technical protocols.
The result of all this modularisation, standardisation and intelligent monitoring would be “an open supply web managed by sophisticated analytics and algorithms” that companies could use to “store items and route shipments in the most efficient manner possible at any given moment in time”.
This would enable conventional logistics, which relies on point-to-point and hub-and-spoke transport, to give way to a more efficient distributed, multi-segment system. Whereas today one driver may be responsible for taking a load from the production centre to a drop-off point before heading somewhere else to pick up a shipment for delivery on the homeward journey, in the distributed system one driver might deliver a shipment to a nearby hub, pick up another trailer there, and head home. Meanwhile, the first trailer they brought to the hub would be collected by a second driver, who would deliver it to the next truck port, airport or whatever hub is next in line. All the while, Internet tracking of the containers would work to ensure minimal delays in handover at every distribution point. According to Montreuil, a logistics system based on a distributed, laterally-scaled, open-systems architecture and commons-style management could deliver goods to their destinations in half the time taken by less efficient point-to-point systems.
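To show what "routing shipments in the most efficient manner" might look like in code, here is a minimal shortest-path sketch over an invented hub network. The hub names and driving times are made up, and the use of Dijkstra's algorithm is my illustrative choice, not Montreuil's actual system:

```python
import heapq

# invented hub network: hub -> {neighbour: hours of driving}
HUBS = {
    "factory":  {"hub_A": 4, "hub_B": 6},
    "hub_A":    {"hub_C": 3, "hub_B": 2},
    "hub_B":    {"hub_C": 2},
    "hub_C":    {"retailer": 5},
    "retailer": {},
}

def best_route(start: str, goal: str) -> tuple[float, list[str]]:
    """Dijkstra's algorithm: find the quickest chain of hub-to-hub
    segments, each of which a different local driver could cover."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, hours in HUBS[node].items():
            heapq.heappush(frontier, (cost + hours, neighbour, path + [neighbour]))
    raise ValueError("no route")

hours, path = best_route("factory", "retailer")
print(f"{' -> '.join(path)} in {hours} hours")
# factory -> hub_A -> hub_C -> retailer in 12.0 hours
```

In the open supply web, the interesting part is that every segment of the winning route can be assigned to a different driver and trailer, with the container handed over at each hub rather than one vehicle driving the whole route.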
CONCLUSION
The Logistics Internet is just one aspect of a future integrated global network that increases efficiencies in not just value supply chains and the gathering and sharing of information, but in virtually every aspect of economic and social life. This is a future in which every node in the network- be it a business, a home, a vehicle, an item of inventory- will be uploading data to advanced analytics that constantly monitor local, national and global activity for ways to increase efficiency, reduce waste, and lower the marginal cost of producing and delivering goods and services. In so doing, the old scarcity-based market system will in time be superseded by a laterally-scaled, collaborative, commons-management network designed to bring sustainable abundance to the people of the world.
REFERENCES
FUTURE HYPE by BOB SEIDENSTICKER
ZERO MARGINAL COST SOCIETY by JEREMY RIFKIN
ZEITGEIST MOVEMENT DEFINED
ABUNDANCE by PETER DIAMANDIS and STEVEN KOTLER
