THOUGHTS ON COMMAND ECONOMIES
Non-capitalist ownership of the means of production and reliance on a command rather than a market economy were among the defining features of Communist states.
COMMAND AND MARKET ECONOMIES
The difference between a market and a command economy is that, whereas the former relies on decentralised decisions between customers and suppliers to determine such things as what should be produced and in what quantity, in a command economy such decisions were made through a hierarchical, top-down process. This system was organised with the politburo at the apex. Next down the chain of command were ministries for every major branch of industry, which were in turn supervised by the State Planning Committee and by departments of the Central Committee of the Communist Party. Unlike in market economies, producers working in command economies had little reason to be concerned with the wishes of whoever used their products. Nor were the activities of their competitors of any concern, because competition was absent: other producers of similar goods were comrades collaborating on the execution of the State plan. Above all else, producers were concerned with meeting whatever targets the planners set.
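To make the contrast concrete, here is a deliberately crude toy sketch in Python. It is not drawn from any of the sources cited here, and the demand and supply curves are entirely made up: in the 'market' case the price adjusts until decentralised supply meets demand, while in the 'command' case a planner fixes an output quota and an administered price, so any gap between quota and demand simply persists as a shortage or a glut.

```python
# Toy contrast between market and command allocation. All numbers are invented.

def demand(price):      # hypothetical consumers: want less as the price rises
    return max(0.0, 100 - 2 * price)

def supply(price):      # hypothetical producers: offer more as the price rises
    return 5 * price

def market_outcome(iterations=200, step=0.05):
    """Decentralised case: the price moves in response to excess demand."""
    price = 1.0
    for _ in range(iterations):
        price += step * (demand(price) - supply(price))
    return round(price, 2), round(supply(price), 1), round(demand(price), 1)

def command_outcome(quota=30, administered_price=5.0):
    """Command case: output and price are set from above, whatever demand is."""
    return administered_price, quota, demand(administered_price)

print("market (price, supplied, demanded):", market_outcome())
print("command (price, supplied, demanded):", command_outcome())
```

The point of the sketch is simply that the market variant contains a feedback loop between buyers and sellers, whereas the command variant does not; the planner's quota is met (or not) regardless of what users of the product actually want.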
COMMAND ECONOMIES AND IDEOLOGY
If we take these two defining features of a Communist state, along with others such as democratic centralism and the leading role of the Communist Party, what they all have in common is a strong ideological component. As Archie Brown said:
“These defining features of Communism, while ideologically significant, were also of clear organisational importance. They were part of the operational code of Communist rule with an everyday relevance to the task of maintaining power”.
They were ideological because they were part of the Bolshevik (and successor) belief system that held Socialism to be a higher stage of development than capitalism. While the Communists considered the victory of Communism over Capitalism to be inevitable, they also believed that the process could be speeded up, provided political power was firmly in the hands of the Party.
The absence of a market economy and private ownership played a definite part in placing power within the hands of the ruling Party. At times, upsetting the State authorities meant imprisonment or death, but even in more relaxed times public dissent was a serious threat to one’s career- after all, the State controlled the career possibilities of all citizens. Brown again:
“Communism was an all-encompassing system of beliefs…It had authorities whose word could not be questioned, and whose interpreters and guardians acted also as gatekeepers, deciding who belonged and who did not”.
SVYAZI, BLAT AND TOLKACH
It would be wrong, however, to assume that Communist countries were run 100% as command economies, with no private ownership or market activity whatsoever. In fact, some private activity (legal, illegal, or a mixture of both) occurred in Communist systems. Agriculture, in particular, often retained an element of private enterprise; indeed, in Yugoslavia and Poland agriculture was mostly in private hands.
Non-market economies often suffered shortages of goods and services, and probably could not have functioned at all were it not for informal rules that developed around three key Russian words: Svyazi, Blat, and Tolkach. Translated into English, these are ‘connections’, ‘pull’ and ‘fixer’ (or ‘pusher’). Together, they comprised practices that oiled the wheels of the command economy.
Let’s start with Svyazi or ‘connections’. Given the control that State authorities had over people’s lives, it shouldn’t be surprising to learn that connections- knowing the right people- were very important. Although Communism was supposedly a system that abolished class, Svyazi was mostly a privilege of the Soviet middle classes and elites.
Where Svyazi was concerned, favours could be rendered without expecting anything in return, even indirectly. However, ‘Blat’ (or ‘pull’) always involved a reciprocal exchange of favours. The exchange of favours need not have been direct and could have consisted of a long and rather complex chain. According to Brown:
“How much pull a person had depended, obviously, on that person’s position within society, but at all social levels in the Soviet Union, there were unofficial networks which, to a certain extent, bypassed the official structures”.
Perhaps the most important part of the informal set of rules oiling the wheels of the command economy was the tolkach, or ‘fixer’. If a factory fell behind schedule, that could have a huge effect in a command economy, because there were no alternative suppliers. Therefore, despite official disapproval, the tolkach were tolerated because they served to make the top priority of meeting production targets somewhat easier. This they did through begging, borrowing, bribing- basically any persuasive method that could ensure needed supplies actually arrived.
So, as stated before, the Communist command economy was never totally without private ownership and decentralised interactions between consumers and suppliers. Of course, the same can be said of market economies, which, after all, never operate completely without State intervention. The State makes the sale of certain products illegal even though there is economic demand for them (such regulations are not 100% effective and tend to cause the emergence of black markets), and it also affects prices by imposing higher taxes on certain products. Perhaps most importantly, at times of financial crisis even the most ardent free-market ideologues find themselves turning to Government for rescue.
Still, an economic system must be predominantly either a command or a market economy, as the disastrous attempts to find a ‘third way’ have proved. Going on what history has taught us, though, one would be unwise to consider the command economy the superior of the two. Quite simply, the planned Soviet economy never worked all that well (although it could sometimes produce impressive results- think of the Soviet success in launching the first satellite, for example). A prime reason for its relative lack of success was that prices were determined bureaucratically and, unlike in a market economy, budgets were not disciplined by the need to make a profit. This led to both weak penalties for failure and scant reward for success. As Brown explained:
“Shortfalls and waste…were automatically excused by the soft budget constraint…when extra costs were incurred, prices were allowed to rise, either openly or in a disguised form through a lowering of the quality of the product (which was not high to begin with). And, purchasers, whether of producer goods or consumer goods, did not have the option of taking their custom elsewhere”.
CONCLUSION
Perhaps it is not so surprising, then, that, far from overthrowing the Capitalist system, Communist States instead relaxed more and more top-down control over time. For example, in 1987 the Law on the State Enterprise devolved power to factory managers, and by 1988 Gorbachev had abolished pretty much all of the Central Committee’s economic departments. What was left by then was neither a functioning command economy nor a market economy, but a dysfunctional hybrid, which only added to the pressure to change the Communist system in such fundamental ways that it had ceased to be by the mid-1990s.
REFERENCES
The Rise and Fall of Communism by Archie Brown

Wikipedia


THOUGHTS ON THE COLD WAR
From the late 1940s until around 1990, a state of Cold War existed between East and West. At its heart, this simmering tension centred on an ideological question: who should own capital? The ‘West’ represented US-led ‘free enterprise’ capitalism, the ‘East’ Soviet-style state Socialism.
From a Western point of view, the Cold War was seen as a struggle to restrain the Soviet Union and hold Communism at bay. It is ironic, then, that the Cold War brought about conditions that helped perpetuate the Communist system. Communist states tended to be highly authoritarian, and such a state is more easily maintained when there is the ever-present threat of an external enemy. That threat provided justification for censorship and restricted foreign travel. Thus, the Cold War gave Communist states an excuse to suppress information regarding the relative economic success and greater liberty to be found in market-based, democratic countries.
As Alec Nove explained, “the centralised economy, party control, censorship, and the KGB were justified in the eyes of the leaders, and many of the led, by the need to combat enemies, internal and external”.
It is doubtful that the Soviet State could have survived a hot war, as that would almost certainly have involved an exchange of nuclear weapons that would have ended civilisation. The fact that the doctrine governing the arms race was known as MAD- Mutually Assured Destruction- speaks volumes about how unlikely survival would have been for either side if East-West tension had boiled over. But, at the same time, it was the Cold War and the resulting tensions- tensions that were advantageous to hardliners within Eastern Europe- that helped restrict contact with the more prosperous countries of the West, contact that Communist states were not equipped to survive.
REFERENCES
The Rise and Fall of Communism by Archie Brown
Introduction To Marxism by Rupert Woodfin and Oscar Zarate


LYSENKO: IDEOLOGY AND THE CORRUPTION OF SCIENCE
INTRODUCTION
Throughout history there have been many examples of pseudoscientists peddling crackpot theories. Examples include John Ernst Worrell Keely who, in 1872, announced that he had found a new physical force and managed to talk investors into backing his scheme to produce technology that exploited ‘intermolecular vibrations of the aether’, and Harry Grindell Matthews, who claimed in 1921 to have built a ‘death ray’ (he never provided detailed explanations of how the device worked, and neither the British, French nor American governments were willing to fund his project for military purposes).
Such examples may bring to mind fairly harmless eccentrics or, at worst, fraudulent snake-oil salesmen, deluding themselves (or, perhaps, cynically exploiting others’ naivety) into believing in some extraordinary breakthrough. Occasionally, though, a pseudoscientific belief can coincide with historical circumstances to produce something much more sinister. The best example of such an outcome may well be the case of Trofim Denisovich Lysenko.
EARLY YEARS
Lysenko was born in 1898 to a Ukrainian peasant family. He grew up to become an agronomist and was unknown to the rest of the Soviet Union until 1927, when the newspaper Pravda ran a story about his radical ideas concerning crop management.
AN ABANDONED EVOLUTIONARY THEORY
By that time, most of the scientific world had embraced the neo-Darwinian synthesis, which saw evolution as a process whereby genes that confer advantageous traits are more likely to survive through the generations than those that confer disadvantageous traits. For example, genes coding for brown fur would produce a rabbit that is easily spotted by predators in snowy regions, so it is not surprising that Arctic animals tend to have white fur.
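As a rough illustration of the selective mechanism described above, the following Python sketch (with invented population sizes and fitness values) shows an advantageous allele spreading through a population over successive generations:

```python
# Minimal illustration of selection: an allele conferring a survival advantage
# becomes more common over generations. Fitness numbers are made up.
import random

def next_generation(population, fitness, size=1000):
    """Sample the next generation, weighting each individual's chance of
    reproducing by the fitness of its allele."""
    weights = [fitness[allele] for allele in population]
    return random.choices(population, weights=weights, k=size)

population = ["white"] * 100 + ["brown"] * 900   # start with mostly brown-fur alleles
fitness = {"white": 1.1, "brown": 1.0}           # white fur is favoured in snowy terrain

for _ in range(60):
    population = next_generation(population, fitness)

print("white-fur allele frequency:", population.count("white") / len(population))
```

After a few dozen generations the white-fur allele dominates, which is the sense in which genes conferring advantageous traits ‘survive through the generations’.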
Lysenko’s radical new theories were not based on modern genetics but rather on a theory of evolution that predates Darwin’s: Lamarckism, which posited that a species’ traits developed as a response to its environment. According to this theory, a person who did physical exercise would build up their muscles and have offspring with similarly muscular bodies. Lamarck’s theory had been discredited by a grisly experiment involving the severing of mice’s tails: according to Lamarckism, when the tailless mice bred their offspring should have been born without tails, but this never happened. Modern genetics tells us why- the mice still passed on the genes for growing tails.
Despite Lamarckism being long discredited, Lysenko believed it to be a more accurate explanation of evolutionary change than genetics. The reason was that he thought Lamarckism- concerned as it was with struggle and radical change- was more compatible with Marxist theory. He was not the only one to think so. When, in 1935, Lysenko gave a speech that denounced traditional geneticists as anti-Marxist and compared them to the peasants who resisted the Soviet government’s collectivisation strategies, Stalin responded with a standing ovation, saying, “Bravo, comrade Lysenko. Bravo”. The combination of endorsement from the most powerful man in Communist Russia, plus the results of some dubious experiments, saw Lysenko admitted into the hierarchy of the Communist Party of the Soviet Union, where he was made head of the Institute of Genetics and Plant Breeding.
DESPERATE TIMES LEADING TO IDEOLOGY-BASED PSEUDOSCIENCE
In attempting to leapfrog from an agrarian economy to an industrialised one, the Communist regime’s collectivist policies had led to mismanagement and drastic shortages in the food supply. Faced with famine, both people and government were desperate for solutions, and Lysenko appeared to be somebody who could quickly provide practical answers. But he was so quick in producing possible solutions- everything from cold treatment of grain and cluster planting of trees to different fertiliser mixes- that scientists could not determine whether one technique was useless or harmful before a new one was adopted.
Moreover, there probably weren’t that many qualified people around to refute whatever Lysenko prescribed. He used his position in the Communist hierarchy to denounce biologists as “fly lovers and people haters”, portraying his opponents as enemies of the State intent on the purposeful destruction of the Soviet economy. By 1940, thousands of geneticists had lost their jobs, been imprisoned, or been executed.
Meanwhile, Lysenko appeared to be everything that the Soviet system deemed inspirational. He came across, after all, as a peasant who developed solutions to practical problems by applying his own intelligence. The Soviet propaganda machine overstated his successes and suppressed his failures. The ideological orthodoxy that saw many fine geneticists fall victim to Stalin’s terror spread to other sciences such as astronomy and chemistry. The Soviet Union did have some notable technological achievements: it put the first satellite into orbit, and the first man in space, Yuri Gagarin, was from Soviet Russia. But in general it is fair to say that Communist countries lagged far behind Western countries in terms of technological innovation- there was nothing like Silicon Valley in Communist countries (particularly in civilian rather than military technology). Given the story of Lysenko and the replacement of science with ideologically driven pseudoscience, we can see why this might be so.
REFERENCES
The Rise and Fall of Communism by Archie Brown
Wikipedia
Far Out: 101 Strange Tales From Science’s Outer Edge by Mark Pilkington


BATTLE OF THE PARADIGMS: THE FIGHT OVER INTERNET NEUTRALITY
INTRODUCTION
The philosopher Thomas Kuhn saw major scientific theories as being established through ‘paradigm shifts’. According to this view of progress, there is always a ‘paradigm’- a set of prevailing theories which form a world view through which people observe and explain their world. Any observation or evidence that seems to conflict with that worldview is rejected by the ‘establishment’ who act on the basis that the paradigm must be held inviolate. Opposing the ‘establishment’ is a heretical group beholden to a different paradigm.
This notion of competing worldviews or paradigms is perhaps not confined to major scientific theories; it also applies to one of the major issues affecting our most pervasive and important technologies. As such, there is a battle being fought that inevitably affects the lives of all of us. What battle? The fight over Internet neutrality.
ORIGINS OF THE PARADIGMS
That competing worldviews should have arisen around the Internet is perhaps not surprising given the history of its development and the wider historical context in which it emerged. The battle is between government and the private sector on one hand, who for various reasons seek ways of imposing more centralised control over the Internet and enclosing it for profit’s or security’s sake, and advocates of commons-style management on the other, who wish to maintain and extend the Internet’s capacity to be a distributed, peer-to-peer, laterally scaled network that could potentially provide most of the goods and services required for a decent life at near-zero marginal cost.
These three groups- government, the private sector, and civil society- have been part of the Internet from the start. The Internet itself is a commons, meaning it is owned by everyone and no-one. It takes big telecommunications companies to establish and maintain the physical network of the Internet (the fibre optic cables, data storage etc) but such companies are merely facilitators and providers; they don’t actually own the Internet. Governance of the Internet is the job of various non-profit organisations, such as the ‘World Wide Web Consortium’, the ‘Internet Engineering Taskforce’ and the ‘Internet Corporation for Assigned Names and Numbers’. All of these organisations are open to anyone to take part in, although participation does require technical expertise, which in practice excludes the non-technical among us.
The Internet is a commons, but the Web (or rather, the applications that run on it) is a hybrid of commercial enterprises and nonprofit organisations. The line between the two is not always clear-cut. Apple’s App Store is obviously commercial, but services provided by, say, Google or Facebook can feel non-commercial because end users get to access them for ‘free’. This mix of purely, partly, and non-commercial apps on the Internet helps make the issue of governance a thorny one.
ORIGINS OF CENTRALISATION
OK, but why should there be those opposing paradigms of an enclosed, privatised Internet on one hand, and an open, collaborative network on the other? If we look back over the history of the Internet and the wider historical context in which it evolved, the reason becomes clearer.
The Internet’s origins can be traced back to 1960s research commissioned by the US Federal government to build robust, fault-tolerant computer networks that would provide communication even in the event of nuclear attack. This research led to a precursor of the Internet known as ARPANET, which served as a backbone for the interconnection of academic and military networks. By the 1990s, the linking of commercial networks and enterprises, along with personal and mobile computers, saw the emergence of the Internet that we know today.
Now, as I said before, the Internet evolved within a wider historical context. In particular, it evolved in a world dominated by fossil fuels, and it was this more than anything that led to a worldview bent on enclosure and privatisation. The oil industry is one of the most concentrated in the world, and just about every other industry that depends upon fossil fuel requires vast capital expenditure to establish the vertical integration that brings together supply chains, production processes, and distribution centres. The sheer cost of these vertically integrated corporate enterprises demands centralised management in order to increase efficiencies and lower costs. This practice of vertical integration and centralisation was both necessary, given the energy/communications matrix of the twentieth century, and actually pretty successful, bringing appreciable improvement to the lives of those in industrialised nations.
It’s perhaps not surprising, then, that governments and private enterprises used to an era when vertical integration and centralisation were the optimal path to prosperity would seek to carry their worldview to 21st century technologies.
Another thing to consider is that the electromagnetic spectrum we use to communicate had long seemed like a scarce resource. Broadcast radio emerged in the 1920s, and whenever two or more broadcasters in close proximity used frequencies very close to one another, the result was constant interruption and interference. By 1927, radio broadcasters were causing sufficient disruption for Congress to pass the Radio Act, establishing the Federal Radio Commission. The job of the commission was to manage the radio spectrum and determine which frequencies could be used and by whom. In practice, this meant that a broadcaster had exclusive use of a particular radio frequency licensed from the commission. It also meant that the spectrum itself was considered a scarce resource and, as such, came to be thought of as a commercial asset. Therefore, not only did we have organisations growing up in an era when vertical integration and centralisation were engines of productivity and efficiency, but also organisations growing up at a time when broadcasting spectrum was a scarce commodity that had to be managed commercially. All of this contributed to a mindset wedded to the ‘enclosure’ paradigm.
ORIGIN OF THE COLLABORATIVE COMMONS
But, equally, the Internet has been a commons from the start, and there have always been those wedded to that style of governance. ‘Internet neutrality’ is a concept that grew out of the end-to-end structure of the Internet. This structure favours users rather than network providers: while we must pay for an Internet connection, and the speed and quality provided by our ISP can be better or worse depending on how much we pay, once we are connected all transmitted packets of data are treated the same way, with no one- commercial or otherwise- getting preferential treatment. Advocates of commons-style management believe that a neutral Internet is best suited to facilitating the collaboration of millions of end users, allowing people to develop their own applications that would advance network collaboration and drive marginal costs to near zero, an eventuality that would cause a paradigm shift away from capitalist/socialist economic systems toward a new world order.
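The end-to-end principle can be illustrated with a small, hypothetical Python sketch (the sender labels and packet counts are invented). Under neutral first-in, first-out scheduling every sender sees roughly the same average delay; under paid prioritisation, non-paying traffic is pushed to the back of the queue.

```python
# Toy comparison of neutral (FIFO) scheduling versus paid prioritisation.
packets = [("paying" if i % 4 == 0 else "other", i) for i in range(20)]

def fifo(packets):
    """Neutral scheduling: serve packets strictly in arrival order."""
    return list(packets)

def paid_priority(packets):
    """Discriminatory scheduling: paying traffic always jumps the queue."""
    fast = [p for p in packets if p[0] == "paying"]
    slow = [p for p in packets if p[0] != "paying"]
    return fast + slow

def average_position(order, sender):
    """Average queue position (a crude proxy for delay) for one sender."""
    positions = [pos for pos, p in enumerate(order) if p[0] == sender]
    return sum(positions) / len(positions)

for schedule in (fifo, paid_priority):
    order = schedule(packets)
    print(schedule.__name__,
          "paying:", average_position(order, "paying"),
          "other:", average_position(order, "other"))
```

In the neutral case both kinds of traffic wait about the same; with prioritisation the paying sender’s packets always go first, which is exactly the kind of preferential treatment that network neutrality rules out.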
GOVERNMENT AND CENTRALISED CONTROL
The notion of a nearly free, open, transparent Internet is not something all parties welcome. According to Jeremy Rifkin, “national governments, concerned over a spate of Internet-related policy issues that affect their general welfare and sovereign interests…are enacting legislation, some of which is threatening an essential feature of the medium- its open, universal, and transparent nature”. In 2011, China, Russia, Uzbekistan and Tajikistan submitted a proposal to the UN General Assembly that pushed for new forms of government control over the Internet.
COMMERCIALISATION THROUGH ENCLOSURE
Governments find themselves in the middle of two interest groups, one dedicated to the capitalist model, the other subscribing to a commons mindset. Not surprisingly, each group seeks to get government on its side, enacting legislation that furthers its interests. The private sector seeks the price discrimination that would increase income and profits. More precisely, it seeks to secure control over the information exchanged on the Internet- control that would enable it to charge different prices for access to certain kinds of information, prioritise some transmissions, and favour some applications while blocking others. Such a move would seriously compromise network neutrality, which is based on non-discriminatory communication with equal access and inclusion for all participants.
This enclosure of the Internet is not just coming from the outside, in the form of network providers fighting against what they see as unfair restrictions on their right to pursue profits. It is also coming from the inside. As Rifkin said, “some of the best-known social media sites on the Web are revving up to find ways to enclose, commercialise, and monopolise the new communications medium”.
There is a famous acronym- TANSTAAFL- which stands for ‘there ain’t no such thing as a free lunch’. In other words, in the commercial world nothing is ever really free, and whenever it seems like it is, that’s only because the price is hidden. A case in point is the services provided by the likes of Google and Facebook. Ever since the public began connecting their computers and phones to the Net and sharing data on the Web, a valuable resource of personal data has been waiting to be mined, commercialised, and exploited for profit. As Tim Berners-Lee explained, whenever somebody connects to commercial media sites, their vital information is (in Rifkin’s words) “immediately captured, siloed, enclosed and commodified”. That information is then sold to interested third parties.
The most familiar result is the targeted advertising that comes with using Google’s search engine. But it could also potentially mean such things as health insurance companies using your search history and other digital footprints to decide whether to provide cover or not. According to Berners-Lee, “the more this kind of architecture gains widespread use, the more the Web becomes fragmented, and the less we enjoy a single, universal, information space”. In other words, we could be witnessing a form of commercial exploitation based on the commodification of the self that is creating centralised and proprietary monopolies in virtual space.
NATURAL MONOPOLIES
Commons advocates argue that, since Google provides an essential service that we all depend upon, and given that no rival offers anything like a comparable service, Google should be classed as an essential facility. Concerns have been raised over loss of ‘search neutrality’, in which a dominant, privately owned search engine is tempted, for commercial or political reasons, to manipulate search results.
In a counter to such arguments, free-market advocates warn that, by treating such services as social utilities and calling for regulations that treat them as natural monopolies, we run the risk of actually turning them into just that, because in doing so we would be protecting such companies from competition. Since regulated companies have a guaranteed rate of return and fixed prices built in, critics say, they have less incentive to be innovative. It’s not like they have competitors to worry about, after all.
Also, there are critics who argue that monopolisation of social media is less of a concern than, say, monopolisation by power companies, because of the disproportionate up-front costs. In order to secure a natural monopoly, power companies had to invest huge amounts of capital to put in place the requisite physical infrastructure and to lock in a captive user base. But when it comes to setting up something like Twitter, the up-front costs are far lower, which makes it much easier for new players to come along and displace market leaders.
The problem with that argument, however, is that while it was once possible to depose market leaders in social media using little capital investment, that is no longer the case. After all, as Rifkin said, “Google…et al are investing billions of dollars in expanding their user base while simultaneously creating impenetrable enclosures, protected by layer upon layer of intellectual property, all designed to profit from the global social commons they helped create”.
The more such services grow, and the more users they have, the more everyone benefits from using the same services. But such services remain commercial ventures, so while they are motivated to optimise social connections, in line with their users’ interests, they are also incentivised to sell information about users to third parties. This, according to Zeynep Tufekci, a sociology professor at the University of North Carolina, is “the corporatization of the commons”.
ENERGY
This battle of paradigms between an enclosed, privatised and commercialised Internet on one hand and a distributed, collaborative, laterally-scaled commons-management network on the other, is not just confined to the question of who owns our personal information. It also impacts on the future of energy. I said earlier that 20th century energy production, based on fossil fuels, required vertically-integrated infrastructure and centralised management. But 21st century renewable-energy technologies will work best when they are distributed, scale laterally across society, and organised collaboratively, which means they favour a communications medium that is similarly distributed, collaborative, and laterally scaled. As Rifkin said, “Internet communications and renewable energies form the inseparable matrix for a foundational infrastructure whose operating logic is best served by commons management”.
Moreover, technological progress can turn seemingly scarce resources into abundant ones (more accurately, the resource was always abundant; our ability to access it was poor). A case in point is managing communications over radio frequencies. Modern technologies- smart antennas, dynamic spectrum access, cognitive radio, and mesh networks- employ a variety of tricks to use the radio spectrum far more efficiently, opening up the possibility of open wireless connections at near-zero marginal cost.
But, if so, it won’t come without a struggle, because there are those with a vested interest in preventing the transition to an abundance-based society. We see this in the case of electric utilities, where there have been moves to design a smart grid that is centralised, proprietary, and closed. The aim is to prevent users from having access to moment-by-moment information about changes in the price of electricity, and to prevent end users from uploading their electricity to the grid at advantageous times when prices peak. In other words, the utilities would rather not have users who are efficient at managing their electricity usage, as that would eat into profits. Concerns have also been raised over discriminatory practices that favour faster connectivity for green electricity generated by affiliated business partners. As in all cases of neutrality versus preferential treatment, the goal for those pursuing the latter is to enforce a centralised architecture (in this case, on the smart grid), thereby enabling commercial enclosure for profit’s sake.
THE BATTLE
So which paradigm will ultimately prevail? Will the Internet and its expanded future incarnation, the Internet of Things (in which not just communications and information but also energy and logistics are incorporated into our networks), become increasingly enclosed and proprietary, perpetuating the dominance of capitalist/socialist economic systems? Or will we make the paradigm shift to commons management, in which distributed, collaborative, laterally scaled peer-to-peer networks drive marginal costs to near zero and prosumers focus on increasing their social capital by providing access to goods and services their communities value, with capitalist/socialist work relegated to a supplementary role- maintaining the infrastructure on which the collaborative commons is built, but no longer dominant in our lives?
Given the financial muscle and political influence of today’s global corporations, it might seem that the collaborative commons stands little chance. But, from an environmental perspective, it would be unwise to bet against a near-zero marginal cost society. The laws of thermodynamics establish that the total amount of energy in the universe never increases or decreases, and that all work converts useful energy into forms unavailable for further work. What this means is that every time resources are turned into commodities, we are effectively running up a bill with nature. Some say that bill has now come due in the form of climate change and loss of biodiversity. Even if you are a climate change denier, there still remain the well-established laws of thermodynamics, which demand of life constant innovation and evolution to discover more efficient uses of resources, on pain of extinction.
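For reference, the two laws being appealed to here can be stated compactly; this is a standard textbook formulation rather than anything specific to the sources cited in this post.

```latex
% First law (conservation of energy): the change in a system's internal
% energy equals the heat added minus the work done by the system.
\Delta U = Q - W

% Second law: in any real, irreversible process total entropy does not
% decrease, so some energy always ends up unavailable for further work.
\Delta S_{\mathrm{total}} \geq 0
```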
The collaborative commons, built on top of infrastructures established by a market system operating at the peak of its power to deliver maximum possible productivity, would be so efficient at converting resources into useful products and services and distributing them appropriately, that anyone interested in the long-term survival and prosperity of the human race ought to promote and work toward advancing it.


The Logistics Internet

THE LOGISTICS INTERNET
INTRODUCTION
Whenever a new technology, machine or tool emerges on the scene, it needs to be given a name. Often, that name is derived from something older and more familiar which is reckoned to share some similarities with the new-fangled thing. For example, when engines were invented that could provide people with transport that did not require horses, that mode of transport was given the name ‘horseless carriage’, later shortened to ‘car’. Not all names catch on like ‘car’ did. Failures include ‘Iron Horse’ (train), ‘Picture Radio’ (television) and ‘Aero-Motive Engine’ (airplane).
The concept of an interconnected network of computers also required a name. Before we settled on the now-familiar Internet, names like ‘I-Way’, ‘Infobahn’ and ‘Information Superhighway’ were suggested. Again, we see something familiar being used to describe something new: in this case the interstate highway system, which let cars travel coast to coast without encountering a stop light, serving as a metaphor for information transmitted over a distributed network of information/communication technologies.
Now, in the 21st century, we are developing technologies that could enable us to expand the Internet to become something that could substantially improve logistics. A ‘Logistics Internet’, if you will.
WHAT IS LOGISTICS?
So, what is logistics and why does the Internet need to expand to include it?
Logistics refers to “the flow of things between point of origin and point of consumption in order to meet requirements of customers or corporations”. In other words, it encompasses the many ways in which products and services are stored and eventually delivered to customers.
Logistics needs to become part of an expanded Internet because doing so could help eradicate many deficiencies within current logistics operations that result in unnecessary cost and waste.
DEFICIENCIES IN THE CURRENT SYSTEM
The deficiencies of current logistics came about because of the strengths and limitations of 19th and 20th century technologies. As pointed out by Jeremy Rifkin in his book ‘Zero Marginal Cost Society’, the first industrial revolution favoured factories and logistics networks clustered in and around major cities, relying on rail links to “bring in energy and materials from suppliers upstream and deliver finished products to wholesalers and retailers downstream”. Being an employee meant living within walking distance of such a factory, or having access to a commuter train. In the second industrial revolution, nationwide interstate highway systems allowed production to migrate from dense urban centres to the suburbs, with truck transport overtaking rail and workers often travelling greater distances to and from their place of employment by car.
As with most 19th and 20th century industrial capitalist practices, businesses favoured internal, top-down, centralised command over logistics as a way of giving private firms more control over their production, storage, and distribution channels. But that control comes at the cost of lost efficiency and productivity, and increased waste.
Let’s start with inefficiencies. In the United States alone, trailer trucks are on average only sixty percent full when on the road: they often leave their docks fully loaded, but after each drop they become less full and often return empty. On a global scale, transport is even less efficient, achieving around ten percent efficiency.
Not only that, but the reliance on giant, centralised warehouses and distribution centres serving large territories means products often cannot be transported by the fastest routes and must take more circuitous routes instead.
Those warehouses and distribution centres create yet more waste and inefficiency. The seasonal nature of the product lines means there are periods of the year in which warehouses are under-used, while at other times they are overextended. Products are often stored in warehouses for long periods of time, and at high cost (US business inventories were estimated at $1.6 trillion in 2013). In the case of time-sensitive products like food, many such products go unsold because logistical inefficiencies mean they cannot be transported in a timely manner.
Most of these inefficiencies are the result of a logistics system dominated by hundreds of private carriers, combined with a dearth of common standards and protocols that would encourage greater collaboration and more efficient sharing of logistics resources.
A NEW APPROACH
Not surprisingly, then, there are those who are looking to the Internet to rethink how we do logistics on a national and global scale. As mentioned previously, interstate highways originally provided the metaphor of an ‘information superhighway’, conceptualised as an interconnected communications system that allowed information to travel effortlessly across a distributed network. As Rifkin explained, “a packet of information transmitted over the Internet contains information on both its identity and routing to its destination. The data packet is structured independently from the equipment, allowing the packet to be processed through different systems and networks, including copper wires, fibre-optic wires, routers, local-area networks, wide-area networks, etc”.
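As a rough illustration of the point Rifkin is making, namely that identity and routing information travel with the packet itself, independently of whatever equipment happens to carry it, here is a minimal sketch. The field names are my own, invented for the example, and do not describe any real protocol.

```python
# Minimal sketch of a self-describing data packet: identity and routing travel with
# the payload, independent of the medium (copper, fibre, radio) that carries it.
# Field names are illustrative, not taken from any real protocol.

from dataclasses import dataclass

@dataclass
class Packet:
    source: str        # identity of the sender
    destination: str   # where the packet is ultimately headed
    sequence: int      # position in the original message, so it can be reassembled
    ttl: int           # hops remaining before the packet is discarded
    payload: bytes     # the actual data being moved

def forward(packet: Packet, next_hop: str) -> Packet:
    """Any router on any kind of network can pass the packet along using only its header."""
    print(f"{next_hop}: relaying packet {packet.sequence} for {packet.destination}")
    return Packet(packet.source, packet.destination, packet.sequence,
                  packet.ttl - 1, packet.payload)

if __name__ == "__main__":
    p = Packet("home-pc", "example.org", sequence=1, ttl=8, payload=b"hello")
    p = forward(p, "local-area-network")
    p = forward(p, "fibre-backbone-router")
```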
Similarly, it is hoped that, by using the latest IT and Internet technology applications, firms could collaborate with each other and share logistical resources in ways that would increase efficiencies and productivity and lower costs. The idea is to create a ‘logistics Internet’ in which warehouses and distribution centres are connected in “an open supply web managed by sophisticated analytics and algorithms” that would enable companies to “store items and route shipments in the most efficient manner at any given time”.
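To give a feel for what storing items “in the most efficient manner at any given time” might mean in practice, here is a toy sketch of an open warehouse web: a shipper can see spare capacity across all participating warehouses, not just its own, and books the nearest one with room. The warehouse names, distances and capacities are invented, and the nearest-with-capacity rule is just one plausible scoring choice.

```python
# Toy sketch of an open supply web: shippers see capacity across *all* participating
# warehouses and book the closest one with room. All data here is invented.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Warehouse:
    name: str
    distance_km: float   # distance from the shipment's origin
    free_pallets: int    # spare capacity right now

def choose_warehouse(options: List[Warehouse], pallets_needed: int) -> Optional[Warehouse]:
    """Pick the nearest warehouse that can actually take the shipment."""
    viable = [w for w in options if w.free_pallets >= pallets_needed]
    return min(viable, key=lambda w: w.distance_km) if viable else None

if __name__ == "__main__":
    web = [
        Warehouse("riverside_depot", 12.0, free_pallets=4),
        Warehouse("airport_hub", 30.0, free_pallets=200),
        Warehouse("city_centre_store", 5.0, free_pallets=0),
    ]
    best = choose_warehouse(web, pallets_needed=10)
    print(f"Booked: {best.name}" if best else "No capacity anywhere in the web")
```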
ENABLING TECHNOLOGIES
So what technologies are making the development of a logistics Internet possible? Amazon is a good company to examine for clues, as it is as much a logistics company as a virtual retailer. In an effort to reduce inefficient manual labour, the company has added robots, automated storage systems and intelligent automated guided vehicles. As has been pointed out in many a news report recently, driverless vehicles will be operational in the future. As they could potentially operate at near-zero marginal labour cost, there is obviously plenty of opportunity to make savings in the cost of moving products around.
It’s not exclusively about robots, but also about our increasing ability to collect, share, and analyse data, made possible by ever cheaper, more convenient and more capable networked sensors. UPS has sensors embedded in its vehicles that monitor individual parts, flagging up potential malfunctions before they result in costly breakdowns. There are sensors that enable households to monitor moment-to-moment electricity usage by various domestic appliances, providing information that can help them reduce wasteful consumption. There are sensors that keep track of the availability of raw materials and current inventories in warehouses, and that monitor production lines for any signs of inefficiency.
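As a concrete but entirely hypothetical example of this kind of condition monitoring, the sketch below flags a part for inspection when its latest sensor reading drifts well above its recent average. The readings and threshold are made up, and this is not a description of how UPS (or any carrier) actually does it.

```python
# Hypothetical sketch of condition monitoring: flag a part when its newest sensor
# reading drifts well above its recent average, so it can be inspected before it fails.
# Readings and the tolerance are invented; no real carrier's system is described here.

from statistics import mean
from typing import List

def needs_inspection(readings: List[float], tolerance: float = 1.5) -> bool:
    """True if the newest reading exceeds the average of earlier readings by the tolerance factor."""
    if len(readings) < 2:
        return False
    baseline = mean(readings[:-1])
    return readings[-1] > baseline * tolerance

if __name__ == "__main__":
    brake_temperature_c = [74.0, 76.5, 75.2, 118.0]   # last reading is suspiciously hot
    if needs_inspection(brake_temperature_c):
        print("Flag brake assembly for inspection before the next run")
```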
We are developing the technological capability to automate much of the “planning, execution, and control of the movement of goods” that defines logistics, as well as increasing our ability to monitor the logistics value chain for signs of avoidable waste. But what is really needed is a set of universal standards and protocols. Benoit Montreuil of the University Research Centre on Enterprise Networks, Logistics and Transport in Montreal, Canada, explained how the various components (sensors, robots, data-mining algorithms and so on) need to be connected into a single, transparent, open system. Physical products should ideally be embedded in standardised modular containers equipped with non-proprietary sensors for identification and sorting, and everything along the supply chain should operate by the same standard technical protocols.
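A rough sketch of what a standardised, non-proprietary container description might look like at the data level is given below: if every carrier and hub reads and writes the same minimal manifest, any container can be identified and sorted anywhere along the chain. The schema is invented for illustration and is not Montreuil’s (or anyone’s) actual specification.

```python
# Invented sketch of a common container manifest: every participant speaks the same
# minimal, non-proprietary schema, so any hub can identify and sort any container.
# This is an illustration, not an actual published specification.

from dataclasses import dataclass, asdict
import json

@dataclass
class ContainerManifest:
    container_id: str      # globally unique identifier
    size_class: str        # one of a small set of standard modular sizes
    contents: str
    origin: str
    destination: str
    perishable: bool       # lets hubs prioritise time-sensitive goods like food

def to_wire_format(manifest: ContainerManifest) -> str:
    """Serialise to a plain format any participant along the supply chain can parse."""
    return json.dumps(asdict(manifest))

if __name__ == "__main__":
    box = ContainerManifest("EU-000123", "modular_small", "chilled produce",
                            origin="Rotterdam", destination="Montreal", perishable=True)
    print(to_wire_format(box))
```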
The result of all this modularisation, standardisation and intelligent monitoring would be “an open supply web managed by sophisticated analytics and algorithms” that companies could use to “store items and route shipments in the most efficient manner possible at any given moment in time”.
This would enable conventional logistics, which relies on point-to-point and hub-and-spoke transport, to give way to a more efficient distributed, multi-segment system. Whereas today one driver may be responsible for taking a load from the production centre to drop off before heading somewhere else to pick up a shipment for delivery on the homeward journey, in the distributed system one driver might deliver a shipment to a nearby hub where they would pick up another trailer and head home. Meanwhile, the first trailer they brought to the hub would be collected by a second driver who would deliver it to the next truck port, airport or whatever hub is next in line. All the while, Internet tracking of the containers would work to ensure minimal delays in handover at every distribution point. According to Montreuil, a logistics system based on distributed, laterally-scaled open-systems architecture and Commons-style management could deliver goods to their destinations in half the time taken by less efficient point-to-point systems.
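The difference between point-to-point haulage and the distributed, multi-segment relay described above can be sketched very simply: instead of one driver covering the whole route, the route is broken at intermediate hubs and each driver covers only the leg nearest home, handing the trailer on at the next hub. The hub names below are invented for illustration.

```python
# Simple sketch of multi-segment relay haulage: the route is split at intermediate hubs
# and each driver covers only one leg, handing the trailer on at the next hub.
# Hub names are invented for illustration.

from typing import List, Tuple

def split_into_legs(route: List[str]) -> List[Tuple[str, str]]:
    """Turn a hub-to-hub route into individual legs, one per driver."""
    return list(zip(route, route[1:]))

if __name__ == "__main__":
    route = ["factory_lyon", "hub_dijon", "hub_metz", "hub_cologne", "retailer_hamburg"]
    for driver_number, (start, end) in enumerate(split_into_legs(route), start=1):
        print(f"Driver {driver_number}: haul trailer from {start} to {end}, then head home")
```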
CONCLUSION
The Logistics Internet is just one aspect of a future integrated global network that increases efficiencies not just in supply chains and the gathering and sharing of information, but in virtually every aspect of economic and social life. This is a future in which every node in the network, be it a business, a home, a vehicle or an item of inventory, will be uploading data to advanced analytics that constantly monitor local, national and global activity for ways to increase efficiency, reduce waste, and lower the marginal cost of producing and delivering goods and services. As this happens, the old scarcity-based market system will in time be superseded by a laterally-scaled, collaborative, commons-management network designed to bring sustainable abundance to the people of the world.
REFERENCES
FUTURE HYPE by BOB SEIDENSTICKER
ZERO MARGINAL COST SOCIETY by JEREMY RIFKIN
ZEITGEIST MOVEMENT DEFINED
ABUNDANCE by PETER DIAMANDIS and STEVEN KOTLER


alternative plans for the end of jobs

Work to earn a living! (STEEMED)
It’s been promoted as the best and most honourable way to gain capital, and it has served us for millennia. Since time immemorial, people have wanted goods and services, and those goods and services have needed people to bring them into being. That meant employment opportunities, and societies organised around the employer/employee relationship. Sure, most jobs are not in any way fun or interesting for the people who have to do them (which is why Monday is widely held to be the worst day of the week and the weekend is almost universally adored, at least by those who don’t have to go to their jobs on Saturday or Sunday), but it has always been necessary for people to do jobs.
But the incentives of capitalism have never been about providing jobs for people. Rather, they have always been about increasing profit for the owners of capital by lowering marginal costs. People who complain about loss of employment and the harm done to local communities when a business closes and moves overseas in order to be more competitive miss the point that the CEO is tasked with increasing shareholder value and nothing else. Of course, businesses will provide jobs for people and build communities if this helps increase profit or lower marginal costs, but if a better method of doing either should come along that does not involve employing people, you can bet the most successful businesses will adopt it.
This is why some have been keeping a wary eye on automation. Could there be a Cambrian explosion of specialised robots and narrow AI, resulting in countless ways to automate jobs and squeezing human workers to the point where finding employment becomes impossibly difficult? Could robot minds become as capable as human brains, or more so, resulting in artificial labourers who work for nothing 24/7, never taking holidays, never getting sick, never organising into unions and making demands?
In short, could the way of life that has applied for thousands of years, one in which capital growth requires people to find employment, come to an end, giving way to a new era in which machines grow capital with only a few humans in the loop, or maybe none at all? More to the point, if technological unemployment does happen, what should we do about it?
Erik has an idea. Assume technological unemployment is never going to happen, or if it is, not for a long while yet. No need to plan for some far distant possibility. Erik is not alone in questioning the belief that technological unemployment will occur. Many have pointed out that tech creates jobs as well as eliminating them. We can retrain and move from being office workers to 4-dimensional holobiomorphal co-formulators or whatever the hell people do in 2030 to earn a living. If it is always the case that tech creates jobs, and that people’s labour will always be the most cost-effective commodity one can hire to fill the vacancies those jobs open up, then we can carry on as we always have. If.
UBI is another possibility, probably the one most often argued over on this forum. If technological unemployment is going to make it impossible for most people to get a job (‘darn it, I have applied for a thousand different jobs but that Roboworker 2000 has been installed in every single one. It’s replacing jobs faster than I can retrain!’), we have to sever the link between jobs and wages. It’s all very well lowering costs by eliminating jobs and replacing human workers with ultra-efficient and capable robots, but if those robots receive no wages and people can earn no wages, where are all the consumers with money to spend going to come from? Can an economy really work if wealth is concentrated in 0.1% of the population, leaving everybody else with little to no disposable income?
In this thread we are going to assume that technological unemployment IS a reality we will be facing in the future. I want to know: APART from UBI, what can we do to ensure the robot revolution benefits as many people as possible? How should we organise society so that it is best-placed to meet that future in which so few people are needed in jobs? 
Here is one idea. Money needs to be reinvented so that, as technological capabilities increase and more and more jobs are automated, the value of each coin in your pocket goes up. At the moment, fiat money and fractional reserve banking are designed to redistribute money from the bottom of the pyramid to the top, without those at the top necessarily making any contribution to the real economy. This is achieved through inflation and other methods. Rather than inflation eroding the purchasing power of the money in your pocket, the purchasing power of money in ordinary people’s pockets should be increasing, as indeed I believe it did during the 19th and early 20th centuries. Material wealth needs to become cheap, so cheap that anybody with half a brain, who at least saves something and prepares for the tech unemployment to come, can live comfortably in that fantastic future in which capitalism has reached its peak.
Any other ideas?


TECHNOLOGICAL UNEMPLOYMENT: THE HORSE ANALOGY

TECHNOLOGICAL UNEMPLOYMENT: THE HORSE ANALOGY. (STEEMED)
Ok, first of all I should admit that this analogy is not an invention of mine. It features in this documentary:

But it is a good one so I thought I would spread the word.
Imagine it is 1915. At this point in time, the horse population of America is at its peak, with some 21 million of these animals put into service of mankind. For tens of thousands of years, people have relied on the superior strength of animals like horses and oxen to help plough soil, pull barges along canals, work down coal mines, provide transportation, even as weapons of war. Imagine that two horses have heard of a new invention, something called the internal combustion engine, and they have different opinions concerning what impact this invention is going to have on horse labour.
One horse believes that the internal combustion engine is going to steal horses’ jobs. People will make mechanical horses that cost less to keep and will be capable of doing more work. It makes reference to the first Benz 954 cc single-cylinder four-stroke engine, with trembler coil ignition, capable of producing two-thirds of one horsepower, and points out that the amount of horsepower that can be obtained from internal combustion engines has gone up and up. Horses have long enjoyed a vast advantage in strength over people, but now people have invented something which will quickly match the strength of horses and then, not long after, go way beyond them. It imagines a world full of mighty muscular machines, with horsepower counted not in single figures but in the thousands, and maybe more. As far as employment is concerned, the future looks pretty bleak for horses. Their services are simply not going to be needed to anything like the extent to which mankind has relied on them in the past.
The other horse has a more optimistic view of the situation. It acknowledges that there is this invention called the internal combustion engine, and that it has been incorporated into machines which have begun competing with horse labour in some narrow situations. But it points out how superior horses still are in many cases. Yes, a car put in an environment designed to favour it can outperform a horse: a sportscar on a nice long stretch of smooth tarmac would absolutely trounce a horse in a race. But what if the race were across farmland, with soil churned up by a plough, and obstacles like rivers and fences and hedges to be crossed? Who then would bet on the car taking first place?
This horse also points out that many of the jobs humans have horses do are dirty and dangerous work. Take those poor horses used down coal mines, or the warhorses ridden into battle. The jobs horses have been given in cities, it argues, are more pleasant than the grim labour once imposed on the species. For this horse, it is simply infeasible that equine jobs will be consigned to the dustbin of history. Technology will change jobs for sure; it always has and always will. But technology is a creator of jobs for horses as well as a destroyer. Just as the horse collar, the stirrup, and the carriage opened up new ways for people and horses to cooperate in running the economy, so too will the internal combustion engine provide new forms of employment for horses that nobody can imagine today.
Well, we all know which horse was right. The internal combustion engine did indeed go on to become a massively successful invention, installed in a bewildering variety of machines: cars, trucks, tractors, diggers, bulldozers. Our reliance on the brute strength of machines has gone up and up, and our reliance on horses has gone down and down. Sure, horse employment has not fallen to zero. But it IS a tiny fraction of what it once was. As far as horses are concerned, I think it’s fair to say technological unemployment is pretty much a reality.
I would hazard a guess that this outcome is of no particular surprise to anyone. It is a simple process of extrapolation, right? What kind of progress were horses making in horsepower? Hardly any at all. A horse could produce one, two, three horsepower, just as its ancestors stretching back tens of thousands of years could. Machines, on the other hand, were producing more and more in a comparative blink of an eye. It was just inevitable that, sooner or later, machines built to do physical labour would so vastly outperform horses in all but a few very narrow circumstances that economic logic would have to reach the conclusion that employment for horses was a thing of the past.
How does this example of a law of economics strike you: “New technology means new and better jobs for horses”? I am guessing you think that sounds pretty daft. There is simply no law of economics or nature or anything at all which says new technology MUST create new and better jobs for horses. Capitalism does not give a damn about horses. It will of course commodify, and create markets for buying and selling, horse labour so long as there is profit to be made from doing so, but there is nothing in capitalism’s prime objective of growth, lowering costs and raising productivity that says it must always provide jobs for horses.
This is so obvious that it hardly needs saying. But I am using horses and horse employment as an analogy for human employment. While I suspect it would be very difficult to find anybody who agrees with a statement like ‘new technology means new and better jobs for horses’, it is quite easy to find people who think there is something like a law of economics which says ‘new technology means new and better jobs for people’. Why? Because this is what past experience has taught us to expect, I guess. We moved from backbreaking subsistence farming, to the arduous toil of factory work, to the bullshit jobs of office work, where employees spend five hours of a 40-hour week doing actual work (itself ridiculously easy in comparison to the hard labour of yore) and the other 35 attending ‘motivational seminars’ or playing around on Facebook. Just as horses had the advantage of millions of years of natural selection fine-tuning them for the job of trotting and galloping around fields, meadows and marshes, humans had the advantage of millions of years of natural selection fine-tuning them for tasks which require common sense, language ability, and creative thinking. It now has to be conceded that machines can totally trounce horses in a contest of brute strength, and that technological unemployment for horses did indeed happen as an inevitable consequence of this disparity. Yet there are still people who believe there is something special about humans which means technological unemployment is never going to happen to us; that no matter how many of our current jobs are taken over by robots or rendered obsolete by some other technology (who needs banks and all the middleman services that go with them when you can do banking with blockchain cryptography and Apple Pay on your smartphone?), new work that no machine could possibly do for ages and ages is bound to come along.
In the case of horses, we have the benefit of 20/20 hindsight when it comes to talking about what the internal combustion engine ultimately meant for their job prospects. When it comes to AI and robotics and what they mean for human employment, much of that lies in the future, and our vision of things to come is nothing like as clear. When will autonomous vehicles mean the end of driving as a line of work for people? When will Dr Watson be attending to your medical needs? When will Robocop be protecting the innocent, serving the public trust, and upholding the law? When will you find yourself in a loser’s race, fighting the impossible fight to retrain for jobs that are disappearing faster than people can adapt to new circumstances, outcompeted by artificial general intelligence or by a Cambrian explosion of narrow AI applications and innovations in manufacturing techniques producing specialised machines that do any particular task with greater efficiency and at less cost than humans can offer? Years? Decades? I for one would not presume to know the answer.
But I do know this: Horses were nature’s proof of principle that it is possible to make a machine that can do pretty much all the work horses are good for. What possible reason could there be to suppose that humans are not nature’s proof of principle that machines can do pretty much any job humans are good for? I do not think there is any practical reason to suppose there is anything humans can do that a machine, in principle, cannot. We need to confront the coming reality of technological unemployment while we still have the luxury of time in which to decide our best course of action, not bury our heads in the sand like my fictional horse.
Oh, wait, that’s ostriches, isn’t it?
