Ludd vs. Schumpeter: Fear of Robot Labor is Fear of the Free Market – Article by Wendy McElroy

The New Renaissance Hat
Wendy McElroy
September 18, 2014
******************************

“Report Suggests Nearly Half of U.S. Jobs Are Vulnerable to Computerization,” screams a headline. The cry of “robots are coming to take our jobs!” is ringing across North America. But the concern reveals nothing so much as a fear—and misunderstanding—of the free market.

In the short term, robotics will cause some job dislocation; in the long term, labor patterns will simply shift. The use of robotics to increase productivity while decreasing costs works basically the same way as past technological advances, like the production line, have worked. Those advances improved the quality of life of billions of people and created new forms of employment that were unimaginable at the time.

Given that reality, the cry that should be heard is, “Beware of monopolies controlling technology through restrictive patents or other government-granted privilege.”

The robots are coming!

Actually, they are here already. Technological advance is an inherent aspect of a free market in which innovators seek to produce more value at a lower cost. Entrepreneurs want a market edge. Computerization, industrial control systems, and robotics have become an integral part of that quest. Many manual jobs, such as factory-line assembly, have been phased out and replaced by others, such as jobs related to technology, the Internet, and games. For a number of reasons, however, robots are poised to become the villains of unemployment. Two reasons come to mind:

1. Robots are now highly developed and less expensive. Such traits make them an increasingly popular option. The Banque de Luxembourg News offered a snapshot:

The currently-estimated average unit cost of around $50,000 should certainly decrease further with the arrival of “low-cost” robots on the market. This is particularly the case for “Baxter,” the humanoid robot with evolving artificial intelligence from the US company Rethink Robotics, or “Universal 5” from the Danish company Universal Robots, priced at just $22,000 and $34,000 respectively.

Better, faster, and cheaper are the bases of increased productivity.

2. Robots will be interacting more directly with the general public. The fast-food industry is a good example. People may be accustomed to ATMs, but a robotic kiosk that asks, “Do you want fries with that?” will occasion widespread public comment, albeit temporarily.

Comment from displaced fast-food restaurant workers may not be so transient. NBC News recently described a strike by workers in an estimated 150 cities. The workers’ main demand was a $15 minimum wage, but they also called for better working conditions. The protesters, ironically, are speeding up their own unemployment by making themselves expensive and difficult to manage.

Labor costs

Compared to humans, robots are cheaper to employ—partly for natural reasons and partly because of government intervention.

Among the natural costs are training, safety needs, overtime, and personnel problems such as hiring, firing, and on-the-job theft. Now, according to Singularity Hub, robots can also be more productive in certain roles. They “can make a burger in 10 seconds (360/hr). Fast yes, but also superior quality. Because the restaurant is free to spend its savings on better ingredients, it can make gourmet burgers at fast food prices.”

Government-imposed costs include minimum-wage laws and mandated benefits, as well as discrimination, liability, and other employment lawsuits. The employment advisory Workforce explained, “Defending a case through discovery and a ruling on a motion for summary judgment can cost an employer between $75,000 and $125,000. If an employer loses summary judgment—which, much more often than not, is the case—the employer can expect to spend a total of $175,000 to $250,000 to take a case to a jury verdict at trial.”

At some point, human labor will make sense only to restaurants that wish to preserve the “personal touch” or to fill a niche.

The underlying message of robotechnophobia

The tech site Motherboard aptly commented, “The coming age of robot workers chiefly reflects a tension that’s been around since the first common lands were enclosed by landowners who declared them private property: that between labour and the owners of capital. The future of labour in the robot age has everything to do with capitalism.”

Ironically, Motherboard points to one critic of capitalism who defended technological advances in production: none other than Karl Marx. He called machines “fixed capital.” The defense occurs in a segment called “The Fragment on Machines” in the unfinished, posthumously published manuscript Grundrisse der Kritik der Politischen Ökonomie (Outlines of the Critique of Political Economy).

Marx believed the “variable capital” (workers) dislocated by machines would be freed from the exploitation of their “surplus labor,” the difference between their wages and the selling price of a product, which the capitalist pockets as profit. Machines would benefit “emancipated labour” because capitalists would “employ people upon something not directly and immediately productive, e.g. in the erection of machinery.” The relationship change would revolutionize society and hasten the end of capitalism itself.

Never mind that the idea of “surplus labor” is intellectually bankrupt; technology ended up strengthening capitalism. But Marx was right about one thing: Many workers have been emancipated from soul-deadening, repetitive labor. Many who feared technology did so because they viewed society as static. The free market is the opposite. It is a dynamic, quick-response ecosystem of value. Internet pioneer Vint Cerf argues, “Historically, technology has created more jobs than it destroys and there is no reason to think otherwise in this case.”

Forbes pointed out that U.S. unemployment rates have changed little between 1890 and 2014 despite massive advances in workplace technology:

There have been three major spikes in unemployment, all caused by financiers, not by engineers: the railroad and bank failures of the Panic of 1893, the bank failures of the Great Depression, and finally the Great Recession of our era, also stemming from bank failures. And each time, once the bankers and policymakers got their houses in order, businesses, engineers, and entrepreneurs restored growth and employment.

The drive to make society static is a powerful obstacle to that restored employment. How does society become static? A key word in the answer is “monopoly.” But we should not conflate two very different forms of monopoly.

A monopoly established by aggressive innovation and excellence will dominate only as long as it produces better or less expensive goods than others can. Monopolies created by crony capitalism are entrenched expressions of privilege that serve elite interests. Crony capitalism is the economic arrangement by which business success depends upon having a close relationship with government, including legal privileges.

Restrictive patents are a basic building block of crony capitalism because they grant a business the “right” to exclude competition. Many libertarians deny the legitimacy of any patents. The nineteenth-century classical liberal Eugen von Böhm-Bawerk rejected patents on classically Austrian grounds. He called them “legally compulsive relationships of patronage which are based on a vendor’s exclusive right of sale”: in short, a government-granted privilege that violated every man’s right to compete freely. Modern critics of patents include the Austrian economist Murray Rothbard and intellectual property attorney Stephan Kinsella.

Pharmaceuticals and technology are particularly patent-hungry. The extent of the hunger can be gauged by how much money companies spend to protect their intellectual property rights. In 2011, Apple and Google reportedly spent more on patent lawsuits and purchases than on research and development. A New York Times article addressed the costs imposed on tech companies by “patent trolls”—people who do not produce or supply services based on patents they own but use them only to collect licensing fees and legal settlements. “Litigation costs in the United States related to patent assertion entities [trolls],” the article claimed, “totaled nearly $30 billion in 2011, more than four times the costs in 2005.” These costs and associated ones, like patent infringement insurance, harm a society’s productivity by creating stasis and preventing competition.

Dean Baker, co-director of the progressive Center for Economic and Policy Research, described the difference between robots produced on the marketplace and robots produced by monopoly. Private producers “won’t directly get rich” because “robots will presumably be relatively cheap to make. After all, we can have robots make them. If the owners of robots get really rich it will be because the government has given them patent monopolies so that they can collect lots of money from anyone who wants to buy or build a robot.” The monopoly “tax” will be passed on to impoverish both consumers and employees.

Conclusion

Ultimately, we should return to the wisdom of Joseph Schumpeter, who reminds us that technological progress, while it can change the patterns of production, tends to free up resources for new uses, making life better over the long term. In other words, the displacement of workers by robots is just creative destruction in action. Just as the car starter replaced the buggy whip, the robot might replace the burger-flipper. Perhaps the burger-flipper will migrate to a new profession, such as caring for an elderly person or cleaning homes for busy professionals. There are always new ways to create value.

An increased use of robots will cause labor dislocation, which will be painful for many workers in the near term. But if market forces are allowed to function, the dislocation will be temporary. And if history is a guide, the replacement jobs will require skills that better express what it means to be human: communication, problem-solving, creation, and caregiving.

Wendy McElroy (wendy@wendymcelroy.com) is an author, editor of ifeminists.com, and Research Fellow at The Independent Institute (independent.org).

This article was originally published by The Foundation for Economic Education.

How Government Sort of Created the Internet – Article by Steve Fritzinger

The New Renaissance Hat
Steve Fritzinger
October 6, 2012
******************************

Editor’s Note: Vinton Cerf, one of the individuals whose work was pivotal in the development of the Internet, has responded to this article in the comments.

In his now-famous “You didn’t build that” speech, President Obama said, “The Internet didn’t get invented on its own. Government research created the Internet so that all the companies could make money off the Internet.”

Obama’s claim is in line with the standard history of the Internet. That story goes something like this: In the 1960s the Department of Defense was worried about being able to communicate after a nuclear attack. So it directed the Advanced Research Projects Agency (ARPA) to design a network that would operate even if part of it was destroyed by an atomic blast. ARPA’s research led to the creation of the ARPANET in 1969. With federal funding and direction the ARPANET matured into today’s Internet.

Like any good creation myth, this story contains some truth. But it also conceals a story that is much more complicated and interesting. Government involvement has both promoted and retarded the Internet’s development, often at the same time. And, despite Obama’s claims, the government did not create the Internet “so all the companies could make money off” it.

The idea of internetworking was first proposed in the early 1960s by computer scientist J. C. R. Licklider at Bolt, Beranek and Newman (BBN). BBN was a private company that originally specialized in acoustic engineering. After achieving some success in that field—for example, designing the acoustics of the United Nations Assembly Hall—BBN branched out into general R&D consulting. Licklider, who held a Ph.D. in psychoacoustics, had become interested in computers in the 1950s. As a vice president at BBN he led the firm’s growing information science practice.

In a 1962 paper Licklider described a “network of networks,” which he called the “Intergalactic Computer Network.” This paper contained many of the ideas that would eventually lead to the Internet. Its most important innovation was “packet switching,” a technique that allows many computers to join a network without requiring expensive direct links between each pair of machines.
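To make the packet-switching idea concrete, here is a minimal Python sketch (all names, link labels, and sizes are invented for illustration and come from neither Licklider's paper nor any real network stack): a message is split into numbered packets, each packet travels over whichever shared link happens to carry it, and the receiver reorders them by sequence number, so no dedicated line between each pair of machines is required.

import random
from dataclasses import dataclass

# Illustrative sketch only: a message is broken into small, independently
# routed packets and reassembled at the receiver. This is not ARPANET code.

@dataclass
class Packet:
    seq: int        # position of this chunk in the original message
    total: int      # number of packets that make up the message
    payload: str    # this packet's slice of the data

def packetize(message, size=8):
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [Packet(i, len(chunks), c) for i, c in enumerate(chunks)]

def route(packets, links=("link-A", "link-B", "link-C")):
    # Each packet independently uses whichever shared link is free, so many
    # machines can share a few links instead of needing a dedicated circuit
    # between every pair.
    delivered = [(random.choice(links), p) for p in packets]
    random.shuffle(delivered)   # packets may arrive out of order
    return delivered

def reassemble(delivered):
    ordered = sorted((p for _, p in delivered), key=lambda p: p.seq)
    return "".join(p.payload for p in ordered)

if __name__ == "__main__":
    msg = "Shared links, no dedicated line per pair of machines."
    arrived = route(packetize(msg))
    for link, pkt in arrived:
        print(f"{link}: packet {pkt.seq + 1}/{pkt.total} -> {pkt.payload!r}")
    assert reassemble(arrived) == msg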

Licklider took the idea of internetworking with him when he joined ARPA in 1962. There he met computer science legends Ivan Sutherland and Bob Taylor. Sutherland and Taylor continued developing Licklider’s ideas. Their goal was to create a network that would allow more effective use of computers scattered around university and government laboratories.

In 1968 ARPA funded the first four-node packet-switched network. This network was not part of a Department of Defense (DOD) plan for post-apocalyptic survival. It was created so Taylor wouldn’t have to switch chairs so often. Taylor routinely worked on three different computers and was tired of switching between terminals. Networking would allow researchers like Taylor to access computers located around the country without having dedicated terminals for each machine.

The first test of this network was in October 1969, when Charley Kline, a student at UCLA, attempted to transmit the command “login” to a machine at the Stanford Research Institute. The test was unsuccessful. The network crashed and the first message ever transmitted over what would eventually become the Internet was simply “lo.”

With a bit more debugging the four-node network went live in December 1969, and the ARPANET was born. Over the next two decades the ARPANET would serve as a test bed for internetworking. It would grow, spawn other networks, and be transferred between DOD agencies. For civilian agencies and universities, NSFNET, operated by the National Science Foundation, replaced ARPANET in 1985. ARPANET was finally shut down in February 1990. NSFNET continued to operate until 1995, during which time it grew into an important backbone for the emerging Internet.

For its entire existence the ARPANET and most of its descendants were restricted to government agencies, universities, and companies that did business with those entities. Commercial use of these networks was illegal. Because of its DOD origins ARPANET was never opened to more than a handful of organizations. In authorizing funds for NSFNET, Congress specified that it was to be used only for activities that were “primarily for research and education in the sciences and engineering.”

During this time the vast majority of people were banned from the budding networks. None of the services, applications, or companies that define today’s Internet could exist in this environment. Facebook may have been founded by college students, but it was not “primarily for research and education in the sciences and engineering.”

This restrictive environment finally began to change in the mid-1980s with the arrival of the first dial-up bulletin boards and online service providers. Companies like CompuServe, Prodigy, and AOL took advantage of the home computer to offer network services over POTS (Plain Old Telephone Service) lines. With just a PC and a modem, a subscriber could access email, news, and other services, though at the expense of tying up the house’s single phone line for hours.

In the early 1990s these commercial services began to experiment with connections between themselves and systems hosted on NSFNET. Being able to access services hosted on a different network made a network more valuable, so service providers had to interoperate in order to survive.

ARPANET researchers led by Vint Cerf and Robert Kahn had already created many of the standards that the Internet service providers (ISPs) needed to interconnect. The most important standard was the Transmission Control Protocol/Internet Protocol (TCP/IP). In the 1970s computers used proprietary technologies to create local networks. TCP/IP was the “lingua franca” that allowed these networks to communicate regardless of who operated them or what types of computers were used on them. Today most of these proprietary technologies are obsolete and TCP/IP is the native tongue of networking. Because of TCP/IP’s success Cerf and Kahn are known as “the fathers of the Internet.”
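As a rough illustration of what that common language buys, the sketch below uses Python's standard socket module to run a tiny TCP server and client on the local machine (the localhost address and echo behavior are invented for the example); any two systems that implement TCP/IP could play either role, regardless of vendor or operating system.

import socket
import threading

# Minimal illustration: two programs exchange bytes over a TCP connection.
# Both ends run on localhost here purely for demonstration; the same
# protocol works between machines from different vendors.

def serve(listener):
    conn, _addr = listener.accept()        # wait for a single client
    with conn:
        data = conn.recv(1024)             # read the client's bytes
        conn.sendall(b"ACK: " + data)      # reply over the same connection

def main():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
    listener.listen(1)
    port = listener.getsockname()[1]
    threading.Thread(target=serve, args=(listener,), daemon=True).start()
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(b"hello over TCP/IP")
        print(client.recv(1024).decode())  # prints: ACK: hello over TCP/IP
    listener.close()

if __name__ == "__main__":
    main()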

Forced to interoperate, service providers rapidly adopted TCP/IP to share traffic between their networks and with NSFNET. The modern ISP was born. Though those links were still technically illegal, NSFNET’s commercial use restrictions were increasingly ignored.

The early 1990s saw the arrival of the World Wide Web. Tim Berners-Lee, working at the European high-energy physics lab CERN, created the Uniform Resource Locator (URL), the Hypertext Transfer Protocol (HTTP), and the Hypertext Markup Language (HTML). These three technologies made it easier to publish, locate, and consume information online. The web rapidly grew into the most popular use of the Internet.

Berners-Lee donated these technologies to the Internet community and was knighted for his work in 2004.
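A toy example of how the three pieces fit together, using only Python's standard library (the page address and HTML below are made up for the illustration): the URL names where a resource lives, HTTP is the request format a browser uses to fetch it, and HTML carries the content plus the links to follow next.

from html.parser import HTMLParser
from urllib.parse import urlsplit, urljoin

# Toy illustration of the web's three building blocks; the URL and HTML
# here are invented, and nothing is fetched over the network.

PAGE_URL = "http://example.org/articles/index.html"
PAGE_HTML = """
<html><body>
  <h1>Reading list</h1>
  <a href="/articles/robots.html">Robots and work</a>
  <a href="history.html">A short history of the Internet</a>
</body></html>
"""

# 1. URL: names where the resource lives.
parts = urlsplit(PAGE_URL)
print("host:", parts.netloc, "| path:", parts.path)

# 2. HTTP: the request a browser would send to fetch that resource.
print(f"GET {parts.path} HTTP/1.1\r\nHost: {parts.netloc}\r\n")

# 3. HTML: the markup carrying the content and the links to follow next.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(urljoin(PAGE_URL, href))  # resolve relative links

collector = LinkCollector()
collector.feed(PAGE_HTML)
print("links found:", collector.links)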

In 1993 Mosaic, the first widely adopted web browser, was released by the National Center for Supercomputing Applications (NCSA). Mosaic was the first Internet application to take full advantage of Berners-Lee’s work and opened the Internet to a new type of user. For the first time the Internet became “so easy my mother can use it.”

The NCSA also played a significant role in presidential politics. Its work on Mosaic was funded in part by the High Performance Computing Act of 1991 (aka “the Gore Bill”). In 1999 presidential candidate Al Gore cited this act in an interview about his legislative accomplishments, saying, “I took the initiative in creating the Internet.” This comment was shortened to “I created the Internet” and quickly became a punchline for late-night comedians. This one line arguably cost Gore the presidency in 2000.

The 1992 Scientific and Advanced Technology Act, another Gore initiative, lifted some of the commercial restrictions on Internet usage. By mid-decade all the pieces for the modern Internet were in place.

In 1995, 26 years after its humble beginnings as ARPANET, the Internet was finally freed of government control. NSFNET was shut down. Operation of the Internet passed to mostly private companies, and all prohibitions on commercial use were lifted.

Anarchy, Property, and Innovation

Today the Internet can be viewed as three layers, each with its own stakeholders, business models, and regulatory structure. There are the standards, like TCP/IP, that control how information flows between networks, the physical infrastructure that actually comprises the networks, and the devices and applications that most people see as “the Internet.”

Since the Internet is really a collection of separate networks that have voluntarily joined together, there is no single central authority that owns or controls it. Instead, the Internet is governed by a loose collection of organizations that develop technologies and ensure interoperability. These organizations, like the Internet Engineering Task Force (IETF), may be the most successful anarchy ever.

Anarchy, in the classical sense, means without ruler, not without laws. The IETF demonstrates how well a true anarchy can work. The IETF has little formal structure. It is staffed by volunteers. Meetings are run by randomly chosen attendees. The closest thing there is to being an IETF member is being on the mailing list for a project and doing the work. Anyone can contribute to any project simply by attending the meetings and voicing an opinion. Something close to meritocracy controls whose ideas become part of the standards.

At the physical layer the Internet is actually a collection of servers, switches, and fiber-optic cables. At least in the United States this infrastructure is mostly privately owned and operated by for-profit companies like AT&T and Cox. The connections between these large national and international networks put the “inter” in Internet.

As for-profit companies, ISPs compete for customers. They invest in faster networks, wider geographic coverage, and cooler devices to attract more monthly subscription fees. But ISPs are also heavily regulated companies. In addition to pleasing customers, they must also please regulators. This makes lobbying an important part of their business. According to the Center for Responsive Politics’s OpenSecrets website, ISPs and the telecommunications industry in general spend between $55 million and $65 million per year trying to influence legislation and regulation.

When most people think of the Internet they don’t think of a set of standards sitting on a shelf or equipment in a data center. They think of their smart phones and tablets and applications like Twitter and Spotify. It is here that Internet innovation has been most explosive. This is also where government has had the least influence.

For its first 20 years the Internet and its precursors were mostly text-based. The most popular applications, like email, Gopher (“Go for”), and Usenet news groups, had text interfaces. In the 20 years that commercial innovation has been allowed on the Internet, text has become almost a relic. Today, during peak hours, almost half of North American traffic comes from streaming movies and music. Other multimedia services, like video chat and photo sharing, consume much of people’s Internet time.

None of this innovation could have happened if the Internet were still under government control. These services were created by entrepreneurial trial and error. While some visionaries explored the possibilities of a graphically interconnected world as early as the 1960s, no central planning board knew that old-timey-looking photographs taken on ultramodern smart phones would be an important Internet application.

I, Internet

When Obama said the government created the Internet so companies could make money off it, he was half right. The government directly funded the original research into many core networking technologies and employed key people like Licklider, Taylor, Cerf, and Kahn. But after creating the idea the government sat on it for a quarter century and denied access to all but a handful of people. Its great commercial potential was locked away.

For proponents of government-directed research policies, the Internet proves the value of their programs. But government funding might not have been needed to create the Internet. The idea of internetworking came from BBN, a private company. The rise of ISPs in the 1980s showed that other companies were willing to invest in this space. Once the home PC and dial-up services became available, people joined commercial networks by the millions. The economic incentives to connect those early networks probably would have resulted in something very much like today’s Internet even if the ARPANET had never existed.

In the end the Internet rose from no single source. Like Leonard Read’s humble writing instrument, the pencil, no one organization could create the Internet. It took the efforts of thousands of engineers from the government and private sectors. Those engineers followed no central plan. Instead they explored. They competed. They made mistakes. They played.

Eventually they created a system that links a third of humanity. Now entrepreneurs all over the world are looking for the most beneficial ways to use that network.

Imagine where we’d be today if that search could have started five to ten years earlier.

Steve Fritzinger is a freelance writer from Fairfax, Virginia. He is the regular economics commentator on the BBC World Service program Business Daily.

This article was published by The Foundation for Economic Education and may be freely distributed, subject to a Creative Commons Attribution United States License, which requires that credit be given to the author.