Decentralization: Why Dumb Networks Are Better – Article by Andreas Antonopoulos

The New Renaissance Hat
Andreas Antonopoulos
March 8, 2015
******************************

“Every device employed to bolster individual freedom must have as its chief purpose the impairment of the absoluteness of power.” — Eric Hoffer

In computer and communications networks, decentralization leads to faster innovation, greater openness, and lower cost. Decentralization creates the conditions for competition and diversity in the services the network provides.

But how can you tell if a network is decentralized, and what makes it more likely to be decentralized? Network “intelligence” is the characteristic that differentiates centralized from decentralized networks — but in a way that is surprising and counterintuitive.

Some networks are “smart.” They offer sophisticated services that can be delivered to very simple end-user devices on the “edge” of the network. Other networks are “dumb” — they offer only a very basic service and require that the end-user devices are intelligent. What’s smart about dumb networks is that they push innovation to the edge, giving end-users control over the pace and direction of innovation. Simplicity at the center allows for complexity at the edge, which fosters the vast decentralization of services.

Surprisingly, then, “dumb” networks are the smart choice for innovation and freedom.

The telephone network used to be a smart network supporting dumb devices (telephones). All the intelligence in the telephone network and all the services were contained in the phone company’s switching buildings. The telephone on the consumer’s kitchen table was little more than a speaker and a microphone. Even the most advanced touch-tone telephones were still pretty simple devices, depending entirely on the network services they could “request” through beeping the right tones.

In a smart network like that, there is no room for innovation at the edge. Sure, you can make a phone look like a cheeseburger or a banana, but you can’t change the services it offers. The services depend entirely on the central switches owned by the phone company. Centralized innovation means slow innovation. It also means innovation directed by the goals of a single company. As a result, anything that doesn’t seem to fit the vision of the company that owns the network is rejected or even actively fought.

In fact, until 1968, AT&T restricted the devices allowed on the network to a handful of approved devices. In 1968, in a landmark decision, the FCC ruled in favor of the Carterfone, an acoustic coupler device for connecting two-way radios to telephones, opening the door for any consumer device that didn’t “cause harm to the system.”

That ruling paved the way for the answering machine, the fax machine, and the modem. But even with the ability to connect smarter devices to the edge, it wasn’t until the modem that innovation really accelerated. The modem represented a complete inversion of the architecture: all the intelligence was moved to the edge, and the phone network was used only as an underlying “dumb” network to carry the data.

Did the telecommunications companies welcome this development? Of course not! They fought it for nearly a decade, using regulation, lobbying, and legal threats against the new competition. In some countries, modem calls across international lines were automatically disconnected to prevent competition in the lucrative long-distance market. In the end, the Internet won. Now, almost the entire phone network runs as an app on top of the Internet.

The Internet is a dumb network, which is its defining and most valuable feature. The Internet’s protocol (Transmission Control Protocol/Internet Protocol, or TCP/IP) doesn’t offer “services.” It doesn’t make decisions about content. It doesn’t distinguish between photos and text, video and audio. It doesn’t have a list of approved applications. It doesn’t even distinguish between client and server, user and host, or individual and corporation. Every IP address is an equal peer.

TCP/IP acts as an efficient pipeline, moving data from one point to another. Over time it has gained some minor “quality of service” capabilities, but it remains, for the most part, a dumb data pipeline. Almost all the intelligence is at the edge: all the services and applications are created on edge devices. Creating a new application does not involve changing the network. The Web, voice, video, and social media were all created as applications on the edge, without any need to modify the Internet protocol.
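
To make that concrete, here is a minimal sketch in Python (standard library only; the loopback address, port, and reversed-echo “protocol” are invented for illustration). A brand-new application protocol runs over TCP without any change to the network, and nothing distinguishes “server” from “client” except which end chose to listen:

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 9000   # loopback; port chosen arbitrarily for the demo

# Bind and listen before the client connects, so the demo cannot race.
srv = socket.create_server((HOST, PORT))

def serve_once():
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)
        # The "application protocol" is invented on the spot: reversed echo.
        conn.sendall(data[::-1])
    srv.close()

threading.Thread(target=serve_once, daemon=True).start()

# The same process now acts as the client. The network never asked what
# the bytes mean, and neither endpoint needed anyone's permission.
with socket.create_connection((HOST, PORT)) as cli:
    cli.sendall(b"hello, dumb network")
    print(cli.recv(1024))        # b'krowten bmud ,olleh'
```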

So the dumb network becomes a platform for independent innovation, without permission, at the edge. The result is an incredible range of innovations, carried out at an even more incredible pace. People interested in even the tiniest of niche applications can create them on the edge. An application with only two participants needs just two devices to support it, and it can run on the Internet. Contrast that with the telephone network, where a new “service” like caller ID had to be built and deployed on every company switch, incurring maintenance costs for every subscriber. So only the most popular, profitable, and widely used services got deployed.

The financial services industry is built on top of many highly specialized and service-specific networks. Most of these are layered atop the Internet, but they are architected as closed, centralized, and “smart” networks with limited intelligence on the edge.

Take, for example, the Society for Worldwide Interbank Financial Telecommunication (SWIFT), the international wire transfer network. The consortium behind SWIFT has built a closed network of member banks that offers specific services: secure messages, mostly payment orders. Only banks can be members, and the network services are highly centralized.

The SWIFT network is just one of dozens of single-purpose, tightly controlled, and closed networks offered to financial services companies such as banks, brokerage firms, and exchanges. All these networks mediate the services by interposing the service provider between the “users,” and they allow minimal innovation or differentiation at the edge — that is, they are smart networks serving mostly dumb devices.

Bitcoin is the Internet of money. It offers a basic dumb network that connects peers from anywhere in the world. The bitcoin network itself does not define any financial services or applications. It doesn’t require membership registration or identification. It doesn’t control the types of devices or applications that can live on its edge. Bitcoin offers one service: securely time-stamped scripted transactions. Everything else is built on edge devices as an application. Bitcoin allows any application to be developed independently, without permission, on the edge of the network. A developer can create a new application using the transactional service as a platform and deploy it on any device. Even niche applications with few users, applications never envisioned by the bitcoin protocol’s creator, can be built and deployed.
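
As one illustration of the kind of edge application this enables, here is a hedged sketch of a document-timestamping service built on nothing but bitcoin’s transaction service. Only the hashing is real, runnable code; broadcast_op_return() is a hypothetical placeholder for whatever wallet or library would actually embed the digest in a transaction’s OP_RETURN output:

```python
import hashlib

def document_digest(path: str) -> bytes:
    """Return the SHA-256 digest of a file; 32 bytes fits in an OP_RETURN output."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()

def broadcast_op_return(payload: bytes) -> str:
    """Hypothetical placeholder: a real implementation would hand the payload
    to a bitcoin wallet or library that builds, signs, and broadcasts a
    transaction carrying it, returning the transaction id."""
    raise NotImplementedError("wire up a bitcoin wallet or library here")

digest = document_digest("contract.pdf")   # "contract.pdf" is a stand-in name
# txid = broadcast_op_return(digest)
# Once the transaction is mined, the block's timestamp proves the document
# existed by that time. No one had to ask the network's permission.
```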

Almost any network architecture can be inverted. You can build a closed network on top of an open network or vice versa, although it is easier to centralize than to decentralize. The modem inverted the phone network, giving us the Internet. The banks have built closed network systems on top of the decentralized Internet. Now bitcoin provides an open network platform for financial services on top of the open and decentralized Internet. The financial services built on top of bitcoin are themselves open because they are not “services” delivered by the network; they are “apps” running on top of the network. This arrangement opens a market for applications, putting the end user in a position of power to choose the right application without restrictions.

What happens when an industry transitions from using one or more “smart” and centralized networks to using a common, decentralized, open, and dumb network? A tsunami of innovation that was pent up for decades is suddenly released. All the applications that could never get permission in the closed network can now be developed and deployed without permission. At first, this change involves reinventing the previously centralized services with new and open decentralized alternatives. We saw that with the Internet, as traditional telecommunications services were reinvented with email, instant messaging, and video calls.

This first wave is also characterized by disintermediation: the removal of entire layers of intermediaries who are no longer necessary. With the Internet, this meant replacing brokers, classified-ad publishers, real estate agents, car salespeople, and many others with search engines and online direct markets. In the financial industry, bitcoin will create a similar wave of disintermediation by making clearinghouses, exchanges, and wire transfer services obsolete. The big difference is that some of these disintermediated layers are multibillion-dollar industries that are no longer needed.

Beyond the first wave of innovation, which simply replaces existing services, is another wave that begins to build the applications that were impossible with the previous centralized network. The second wave doesn’t just create counterparts to existing services; it spawns new industries on the basis of applications that were previously too expensive or too difficult to scale. By eliminating friction in payments, bitcoin doesn’t just make payments better; it introduces market mechanisms and price discovery to economic activities that were too small or inefficient under the previous cost structure.

We used to think “smart” networks would deliver the most value, but making the network “dumb” enabled a massive wave of innovation. Intelligence at the edge brings choice, freedom, and experimentation without permission. In networks, “dumb” is better.

Andreas M. Antonopoulos is a technologist and serial entrepreneur who advises companies on the use of technology and decentralized digital currencies such as bitcoin.

This article was originally published by The Foundation for Economic Education.

How Government Sort of Created the Internet – Article by Steve Fritzinger

The New Renaissance Hat
Steve Fritzinger
October 6, 2012
******************************

Editor’s Note: Vinton Cerf, one of the individuals whose work was pivotal in the development of the Internet, responded to this article when it was originally published.

In his now-famous “You didn’t build that” speech, President Obama said, “The Internet didn’t get invented on its own. Government research created the Internet so that all the companies could make money off the Internet.”

Obama’s claim is in line with the standard history of the Internet. That story goes something like this: In the 1960s the Department of Defense was worried about being able to communicate after a nuclear attack. So it directed the Advanced Research Projects Agency (ARPA) to design a network that would operate even if part of it was destroyed by an atomic blast. ARPA’s research led to the creation of the ARPANET in 1969. With federal funding and direction the ARPANET matured into today’s Internet.

Like any good creation myth, this story contains some truth. But it also conceals a story that is much more complicated and interesting. Government involvement has both promoted and retarded the Internet’s development, often at the same time. And, despite Obama’s claims, the government did not create the Internet “so all the companies could make money off” it.

The idea of internetworking was first proposed in the early 1960s by computer scientist J. C. R. Licklider at Bolt, Beranek and Newman (BBN). BBN was a private company that originally specialized in acoustic engineering. After achieving some success in that field—for example, designing the acoustics of the United Nations Assembly Hall—BBN branched out into general R&D consulting. Licklider, who held a Ph.D. in psychoacoustics, had become interested in computers in the 1950s. As a vice president at BBN he led the firm’s growing information science practice.

In a 1962 paper Licklider described a “network of networks,” which he called the “Intergalactic Computer Network.” This paper contained many of the ideas that would eventually lead to the Internet. Its most important innovation was “packet switching,” a technique that allows many computers to join a network without requiring expensive direct links between each pair of machines.
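
The packet-switching idea is easy to sketch. In this toy simulation (pure Python, no real network; the message and packet size are arbitrary), a message is cut into individually numbered packets that may travel independently and arrive out of order, then be reassembled by sequence number at the destination, with no dedicated end-to-end circuit ever reserved:

```python
import random

def packetize(message: bytes, size: int = 4):
    """Cut a message into (sequence number, chunk) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Sort packets by sequence number and rejoin the chunks."""
    return b"".join(chunk for _, chunk in sorted(packets))

packets = packetize(b"INTERGALACTIC NETWORK")
random.shuffle(packets)   # simulate independent routes and arrival order
assert reassemble(packets) == b"INTERGALACTIC NETWORK"
```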

Licklider took the idea of internetworking with him when he joined ARPA in 1962. There he met computer science legends Ivan Sutherland and Bob Taylor. Sutherland and Taylor continued developing Licklider’s ideas. Their goal was to create a network that would allow more effective use of computers scattered around university and government laboratories.

In 1968 ARPA funded the first four-node packet-switched network. This network was not part of a Department of Defense (DOD) plan for post-apocalyptic survival. It was created so Taylor wouldn’t have to switch chairs so often. Taylor routinely worked on three different computers and was tired of switching between terminals. Networking would allow researchers like Taylor to access computers located around the country without having dedicated terminals for each machine.

The first test of this network was in October 1969, when Charley Kline, a student at UCLA, attempted to transmit the command “login” to a machine at the Stanford Research Institute. The test was unsuccessful. The network crashed and the first message ever transmitted over what would eventually become the Internet was simply “lo.”

With a bit more debugging the four-node network went live in December 1969, and the ARPANET was born. Over the next two decades the ARPANET would serve as a test bed for internetworking. It would grow, spawn other networks, and be transferred between DOD agencies. For civilian agencies and universities, NSFNET, established by the National Science Foundation in 1985, gradually took over ARPANET’s role. ARPANET was finally shut down in February 1990. NSFNET continued to operate until 1995, during which time it grew into an important backbone for the emerging Internet.

For its entire existence the ARPANET and most of its descendants were restricted to government agencies, universities, and companies that did business with those entities. Commercial use of these networks was illegal. Because of its DOD origins ARPANET was never opened to more than a handful of organizations. In authorizing funds for NSFNET, Congress specified that it was to be used only for activities that were “primarily for research and education in the sciences and engineering.”

During this time the vast majority of people were banned from the budding networks. None of the services, applications, or companies that define today’s Internet could exist in this environment. Facebook may have been founded by college students, but it was not “primarily for research and education in the sciences and engineering.”

This restrictive environment finally began to change in the mid-1980s with the arrival of the first dial-up bulletin boards and online service providers. Companies like CompuServe, Prodigy, and AOL took advantage of the home computer to offer network services over POTS (Plain Old Telephone Service) lines. With just a PC and a modem, a subscriber could access email, news, and other services, though at the expense of tying up the house’s single phone line for hours.

In the early 1990s these commercial services began to experiment with connections between themselves and systems hosted on NSFNET. Being able to access services hosted on a different network made a network more valuable, so service providers had to interoperate in order to survive.

ARPANET researchers led by Vint Cerf and Robert Kahn had already created many of the standards that the Internet service providers (ISPs) needed to interconnect. The most important standard was the Transmission Control Protocol/Internet Protocol (TCP/IP). In the 1970s computers used proprietary technologies to create local networks. TCP/IP was the “lingua franca” that allowed these networks to communicate regardless of who operated them or what types of computers were used on them. Today most of these proprietary technologies are obsolete and TCP/IP is the native tongue of networking. Because of TCP/IP’s success Cerf and Kahn are known as “the fathers of the Internet.”

Forced to interoperate, service providers rapidly adopted TCP/IP to share traffic between their networks and with NSFNET. The modern ISP was born. Though those links were still technically illegal, NSFNET’s commercial use restrictions were increasingly ignored.

The early 1990s saw the arrival of the World Wide Web. Tim Berners-Lee, working at the European high-energy physics lab CERN, created the Uniform Resource Locator (URL), the Hypertext Transfer Protocol (HTTP), and the Hypertext Markup Language (HTML). These three technologies made it easier to publish, locate, and consume information online. The web rapidly grew into the most popular use of the Internet.
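
The division of labor among the three is easy to see. In this sketch (Python standard library only; example.com stands in for any web host), the URL names the resource, HTTP is the plain-text request sent over a TCP connection, and HTML is what comes back for the browser to render:

```python
import socket
from urllib.parse import urlparse

url = urlparse("http://example.com/")            # the URL names the resource
with socket.create_connection((url.hostname, 80)) as s:
    request = (f"GET {url.path or '/'} HTTP/1.0\r\n"
               f"Host: {url.hostname}\r\n\r\n")  # HTTP is plain text over TCP
    s.sendall(request.encode("ascii"))
    response = b""
    while chunk := s.recv(4096):                 # read until the server closes
        response += chunk
headers, body = response.split(b"\r\n\r\n", 1)
print(body[:80])                                 # the HTML markup itself
```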

Berners-Lee donated these technologies to the Internet community and was knighted for his work in 2004.

In 1993 Mosaic, the first widely adopted web browser, was released by the National Center for Supercomputing Applications (NCSA). Mosaic was the first Internet application to take full advantage of Berners-Lee’s work and opened the Internet to a new type of user. For the first time the Internet became “so easy my mother can use it.”

The NCSA also played a role in presidential politics. It had been created by the High Performance Computing & Communications Act of 1991 (aka “The Gore Bill”). In 1999 presidential candidate Al Gore cited this act in an interview about his legislative accomplishments, saying, “I took the initiative in creating the Internet.” This comment was shortened to “I created the Internet” and quickly became a punchline for late-night comedians. That one line arguably cost Gore the presidency in 2000.

The 1992 Scientific and Advanced Technology Act, another Gore initiative, lifted some of the commercial restrictions on Internet usage. By mid-decade all the pieces for the modern Internet were in place.

In 1995, 26 years after its humble beginnings as ARPANET, the Internet was finally freed of government control. NSFNET was shut down. Operation of the Internet passed to mostly private companies, and all prohibitions on commercial use were lifted.

Anarchy, Property, and Innovation

Today the Internet can be viewed as three layers, each with its own stakeholders, business models, and regulatory structure: the standards, like TCP/IP, that control how information flows between networks; the physical infrastructure that actually makes up the networks; and the devices and applications that most people see as “the Internet.”

Since the Internet is really a collection of separate networks that have voluntarily joined together, there is no single central authority that owns or controls it. Instead, the Internet is governed by a loose collection of organizations that develop technologies and ensure interoperability. These organizations, like the Internet Engineering Task Force (IETF), may be the most successful anarchy ever.

Anarchy, in the classical sense, means without ruler, not without laws. The IETF demonstrates how well a true anarchy can work. The IETF has little formal structure. It is staffed by volunteers. Meetings are run by randomly chosen attendees. The closest thing there is to being an IETF member is being on the mailing list for a project and doing the work. Anyone can contribute to any project simply by attending the meetings and voicing an opinion. Something close to meritocracy controls whose ideas become part of the standards.

At the physical layer the Internet is actually a collection of servers, switches, and fiber-optic cables. At least in the United States this infrastructure is mostly privately owned and operated by for-profit companies like AT&T and Cox. The connections between these large national and international networks put the “inter” in Internet.

As for-profit companies, ISPs compete for customers. They invest in faster networks, wider geographic coverage, and cooler devices to attract more monthly subscription fees. But ISPs are also heavily regulated companies. In addition to pleasing customers, they must also please regulators. This makes lobbying an important part of their business. According to the Center for Responsive Politics’ OpenSecrets website, ISPs and the telecommunications industry in general spend between $55 million and $65 million per year trying to influence legislation and regulation.

When most people think of the Internet they don’t think of a set of standards sitting on a shelf or equipment in a data center. They think of their smart phones and tablets and applications like Twitter and Spotify. It is here that Internet innovation has been most explosive. This is also where government has had the least influence.

For its first 20 years the Internet and its precursors were mostly text-based. The most popular applications, like email, Gopher (“Go for”), and Usenet news groups, had text interfaces. In the 20 years that commercial innovation has been allowed on the Internet, text has become almost a relic. Today, during peak hours, almost half of North American traffic comes from streaming movies and music. Other multimedia services, like video chat and photo sharing, consume much of people’s Internet time.

None of this innovation could have happened if the Internet were still under government control. These services were created by entrepreneurial trial and error. While some visionaries explored the possibilities of a graphically interconnected world as early as the 1960s, no central planning board knew that old-timey-looking photographs taken on ultramodern smart phones would be an important Internet application.

I, Internet

When Obama said the government created the Internet so companies could make money off it, he was half right. The government directly funded the original research into many core networking technologies and employed key people like Licklider, Taylor, Cerf, and Kahn. But after creating the idea, the government sat on it for a quarter century and denied access to all but a handful of people. Its great commercial potential was locked away.

For proponents of government-directed research policies, the Internet proves the value of their programs. But government funding might not have been needed to create the Internet. The idea of internetworking came from BBN, a private company. The rise of ISPs in the 1980s showed that other companies were willing to invest in this space. Once the home PC and dial-up services became available, people joined commercial networks by the millions. The economic incentives to connect those early networks probably would have resulted in something very much like today’s Internet even if the ARPANET had never existed.

In the end the Internet rose from no single source. Like the humble pencil of Leonard Read’s “I, Pencil,” the Internet was something no one organization could have created. It took the efforts of thousands of engineers from the government and private sectors. Those engineers followed no central plan. Instead they explored. They competed. They made mistakes. They played.

Eventually they created a system that links a third of humanity. Now entrepreneurs all over the world are looking for the most beneficial ways to use that network.

Imagine where we’d be today if that search could have started five to ten years earlier.

Steve Fritzinger is a freelance writer from Fairfax, Virginia. He is the regular economics commentator on the BBC World Service program Business Daily.

This article was published by The Foundation for Economic Education and may be freely distributed, subject to a Creative Commons Attribution United States License, which requires that credit be given to the author.