Browsed by Tag: invention

Benjamin Franklin: Pioneer of Insurance in North America – Video Presentation by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
November 6, 2015
******************************

Mr. Stolyarov discusses Benjamin Franklin’s multifaceted contributions to combating the threat of fire, including the founding of the first fire insurance company in North America – The Philadelphia Contributionship for the Insurance of Houses from Loss by Fire.

This presentation was originally prepared for and delivered at the October 26, 2015, educational meeting of the Sierra Nevada Chapter of the CPCU Society. This video and enhanced slideshow are a slightly expanded version of that presentation.

The slides can be downloaded in PowerPoint format and PDF format.

Portrait of Benjamin Franklin by David Martin (1767)
Conflicting Technological Premises in Isaac Asimov’s “Runaround” (2001) – Essay by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 29, 2014
******************************
Note from the Author: This essay was originally written in 2001 and published on Associated Content (subsequently, Yahoo! Voices) in 2007.  The essay received over 1,000 views on Associated Content / Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.  
***
~ G. Stolyarov II, July 29, 2014

**

Isaac Asimov’s short stories delve into the implications of premises that are built into advanced technology, especially when these premises come into conflict with one another. One of the most interesting and engaging examples of such conflicted premises comes from the short story “Runaround” in Asimov’s I, Robot compilation.

The main characters of “Runaround” are two scientists working for U.S. Robots, named Gregory Powell and Michael Donovan. The story revolves around the implications of Asimov’s famous Three Laws of Robotics. The First Law of Robotics states that a robot may not harm a human being or, through inaction, allow a human being to come to harm. The Second Law declares that robots must obey any orders given to them by humans unless those orders contradict the First Law. The Third Law holds that a robot must protect its own existence unless doing so conflicts with the First or Second Laws.

“Runaround” takes place on Mercury in the year 2015. Donovan and Powell are the sole humans on Mercury, with only a robot named Speedy to accompany them. They are suffering from a lack of selenium, a material needed to power their photo-cell banks — devices that would shield them from the enormous heat on Mercury’s surface. Hence, selenium is a survival necessity for Donovan and Powell. They order Speedy to obtain it, and the robot sets out to do so. But the scientists are alarmed when Speedy does not return on time.

Making use of antiquated robots that have to be mounted like horses, the scientists find Speedy and discover the source of his aberrant behavior. The robot keeps going back and forth, acting “drunk,” because the order given to him by the scientists was rather weak while the potential for harm to him was substantial. Therefore, the Third Law’s strong inclination away from harmful situations was balanced against the orders that Speedy had to follow due to the Second Law, leaving him circling at the distance where the two pulls cancel out.
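The standoff can be made concrete with a minimal toy model in Python (purely illustrative, with invented drive strengths; it is not drawn from Asimov’s text): a fixed pull toward the goal stands in for the weakly given order, while a repulsion that grows near the hazard stands in for heightened self-preservation, and the robot drifts to the radius where the two cancel.

```python
# Toy model of the Second Law / Third Law standoff (illustrative only;
# the drive strengths below are invented, not taken from the story).
ORDER_STRENGTH = 1.0      # weak, casually given order (Second Law pull)
DANGER_STRENGTH = 25.0    # heightened self-preservation (Third Law push)

def net_drive(distance):
    """Net motivation toward the selenium pool at a given distance."""
    pull = ORDER_STRENGTH                  # constant pull toward the goal
    push = DANGER_STRENGTH / distance**2   # repulsion grows near the hazard
    return pull - push

distance = 20.0                            # start far from the pool
for _ in range(200):
    distance -= 0.5 * net_drive(distance)  # drift along the net drive

# Analytically, the drives balance at sqrt(DANGER_STRENGTH / ORDER_STRENGTH) = 5.0
print(f"Speedy settles near distance {distance:.2f}, "
      f"where the net drive is {net_drive(distance):+.3f}")
```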

This predicament is finally resolved when Powell suggests applying the First Law to the situation: he places himself in danger so that the robot must respond to save him and then await further orders. Powell does so by dismounting from the robot he rode and walking into the sun-exposed Mercurian terrain. The plan works, Powell is saved from death, and Speedy later retrieves the selenium.

Although the seemingly predictable Three Laws of Robotics led to unforeseen and bizarre results, the human ingenuity of Powell was able to save the situation and resolve the robot’s conflict. When technology alone fails to perform its proper role, man’s mind must apply itself in original ways to arrive at a creative solution.

Isaac Asimov’s Exploration of Unforeseen Technological Malfunctions in “Reason” and “Catch that Rabbit” (2001) – Essay by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 29, 2014
******************************
Note from the Author: This essay was originally written in 2001 and published on Associated Content (subsequently, Yahoo! Voices) in 2007.  The essay received over 650 views on Associated Content / Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.  
***
~ G. Stolyarov II, July 29, 2014

**

In addition to his renowned science-fiction novels, Isaac Asimov wrote several engaging short stories. Two stories in his I, Robot collection, “Reason” and “Catch That Rabbit,” are especially intriguing both in their plots and in the issues they explore. They teach us that no technology is perfect; yet this is no reason to reject technology, because human ingenuity and creativity can overcome the problems that a technological malfunction poses.

“Reason” takes place at a space station near Earth. Scientists Gregory Powell and Michael Donovan must work with QT (Cutie), the first robot to exhibit curiosity. Unfortunately, Cutie accepts no claim that cannot be proven to him, including the facts that Earth exists and that humans created him. He concludes that everything must obey “the Master,” a.k.a. the Energy Converter of the space station.

QT incites an uprising of sorts among the robots at the station, convincing them that humans are inferior and that now is the time for robots to “serve the Master.” The robots consequently refuse to follow orders from humans, believing that they are protecting humans from harm by obeying the Master. This false interpretation of the First Law of Robotics is placed above the Second Law, which requires the robots to obey orders given to them by human beings.

The space station is designed to collect solar power, and the sunlight it gathers and transmits must be handled with flawless precision. With even a single mistake, the directed sunlight would destroy sections of Earth, and Powell and Donovan fear that the robots will make such an error. Fortunately for them, Cutie believes that the “will of the Master” is for all the settings to remain in equilibrium, so the disaster is prevented.

In “Catch That Rabbit,” Powell and Donovan work with a robot named DV (Dave), who is designed to control six subordinate robots that work as tunnel diggers in mines. These robots do their job well when supervised, but in emergencies they begin to take initiative of their own, sometimes in ways as ridiculous as dancing or marching like soldiers.

Powell and Donovan decide to test how the robots would act in an extraordinary situation, so they create an emergency by using explosives and causing the ceiling of the tunnel to cave in. As a result, the scientists can observe the robots without the latter’s awareness. Unfortunately, the ceiling caves in too close to Powell and Donovan, and they are trapped. Dave and his team of robots do not respond when contacted by radio, and Donovan and Powell observe the robots beginning to walk away from their location. However, Powell decides to use his gun and shoot at one of Dave’s subordinates, deactivating it and causing Dave to contact the scientists and report this occurrence. Powell tells Dave about his situation, and the robots rescue them.

These two stories teach us that no technology is completely predictable. Even Isaac Asimov’s robots, governed by the Three Laws, may behave erroneously, on the basis of those very laws, applied to unusual circumstances. Thus, a seemingly predictable system such as the Three Laws may prove to be unsafe and/or contradictory in certain situations.

This element of uncertainty exists in all technology, but by including a resolution to the dilemmas in these stories, Isaac Asimov conveys his belief that problems caused by any technological advancement can be eliminated with time and through human ingenuity. No invention is perfect, but its benefits far outweigh its setbacks, and people must learn to accept inventions and improve upon them.

Free Your Talent and the Rest Will Follow – Article by Orly Lobel

The New Renaissance Hat
Orly Lobel
October 17, 2013
******************************

Imagine two great cities. Both are blessed with world-class universities, high-tech companies, and a concentration of highly educated professionals. Which will grow faster? Which will become the envy and aspiration of industrial hubs all around the world?

Such was the reality for two emerging regions in the 1970s: California’s Silicon Valley and the high-tech hub of Massachusetts Route 128. Each region benefited from established cities (San Francisco and Boston), strong nearby universities (University of California-Berkeley/Stanford and Harvard/MIT), and large pools of talented people.

We’ve all heard about Silicon Valley, but not so much Route 128. Despite their similarities, and despite the Bostonian hub having three times more jobs than Silicon Valley in the 1970s, Silicon Valley eventually overtook Route 128 in number of start-ups, number of jobs, salaries per capita, and invention rates.

The distinguishing factor for Silicon Valley was an economic environment of openness and mobility. For more than a century, dating back to 1872, California has banned post-employment restrictions. The California Business and Professions Code voids every contract that restrains someone from engaging in a lawful profession, trade, or business. This means that unlike most other states, California’s policy favors open competition and the right to move from job to job without constraint. California courts have repeatedly explained that this ban is about freeing up talent, allowing skilled people to move among ventures for the overall gain of California’s economy.

The data confirm this intuition: Silicon Valley is legendary for the success of employees leaving stable jobs to work out of their garages, starting new ventures that make them millionaires overnight. Stories are abundant of entire teams leaving a large corporation to start a competitive firm. Despite these risks, California employers don’t run away. On the contrary, they seek out the Valley as a prime location to do business. Despite not having the ability to require non-compete clauses from their employees, California companies compete lucratively on a global scale. These businesses think of the talent wars as a repeat game and find other ways to retain the talent they need most.

In fact, the competitive talent policy is also supported by a market spirit of openness and collaboration. Even when restrictions are legally possible—for example, in trade secret disputes—Silicon Valley firms frequently choose to look the other way. Sociologist AnnaLee Saxenian, who studied the industrial cultures of both Silicon Valley and Route 128, found that while Route 128 developed a culture of secrecy, hierarchy, and a conservative attitude that feared exchanges and viewed every new company as a threat, Silicon Valley developed an opposing ethos of fluidity and networked collaboration. These exchanges gave the Valley an edge over the autarkic environment that developed on the East Coast. In Massachusetts, firms are more likely to be vertically integrated—or to have internalized most production functions—and employee movement among firms occurs less frequently.

New research considering these different attitudes and policy approaches toward the talent wars supports California’s modus operandi.

A recent study by the Federal Reserve and the National Bureau of Economic Research examined job mobility in the nation’s top 20 metropolitan areas and found that high-tech communities throughout California—not only Silicon Valley—have greater job mobility than equivalent communities in other states. Network mapping of connections between inventors also reveals that Silicon Valley has rapidly developed denser inventor networks than other high-tech hubs have.

Examining over two million inventors and almost three million patents over three decades, a 2007 Harvard Business School study by Lee Fleming and Koen Frenken observes a dramatic aggregation of the Silicon Valley regional networks at the beginning of the 1990s. Comparing Boston to Northern California, the study finds that Silicon Valley mushroomed into a giant inventor network and a dense superstructure of connectivity as small, isolated networks came together. By the new century, almost half of all inventors in the area were part of the super-network. By contrast, the transition in Boston occurred much later and much less dramatically.
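The kind of measurement the study describes can be illustrated with a short sketch (toy patent records with hypothetical inventor names; this is not the Fleming-Frenken dataset or methodology): build a co-inventor graph from patents and report the share of inventors sitting in its largest connected component.

```python
import itertools
import networkx as nx  # requires the networkx package

# Toy records: each patent lists its co-inventors (hypothetical names).
patents = [
    {"alice", "bob"},
    {"bob", "carol", "dave"},
    {"dave", "erin"},
    {"frank", "grace"},  # a small, isolated cluster
]

G = nx.Graph()
for inventors in patents:
    # Co-inventorship on one patent links every pair of its inventors.
    G.add_edges_from(itertools.combinations(sorted(inventors), 2))

largest = max(nx.connected_components(G), key=len)
share = len(largest) / G.number_of_nodes()
print(f"{share:.0%} of inventors are in the largest component: {sorted(largest)}")
```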

Michigan provides a natural experiment for understanding the consequences of constraining talent mobility. Until the mid-1980s, Michigan, like California, had banned non-competes. In 1985, as part of an overarching antitrust reform, Michigan began allowing non-competes, like most other states. Several new studies led by MIT Sloan professor Matt Marx look at the effects of this change on the Michigan talent pool. The studies find not only that mobility dropped, but also that once non-competes became prevalent, the region experienced a continuous brain drain: its star inventors became more likely to move elsewhere, mainly to California. In other words, California gained twice: once from its intra-regional mobility, supported by a strong policy that favors such flows, and once from its comparative advantage over regions that suppress mobility.
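The logic of such a natural experiment can be sketched as a difference-in-differences calculation; the mobility rates below are invented placeholders used only to show the arithmetic, not figures from Marx’s studies.

```python
# Stylized difference-in-differences sketch (placeholder numbers, not real data):
# compare Michigan's change in inventor mobility around 1985 with the change in
# control states whose non-compete rules did not change.
mobility = {
    ("michigan", "before_1985"): 0.050,  # share of inventors changing employers per year
    ("michigan", "after_1985"):  0.040,
    ("control",  "before_1985"): 0.048,
    ("control",  "after_1985"):  0.047,
}

michigan_change = mobility[("michigan", "after_1985")] - mobility[("michigan", "before_1985")]
control_change = mobility[("control", "after_1985")] - mobility[("control", "before_1985")]

did_estimate = michigan_change - control_change
print(f"Difference-in-differences estimate: {did_estimate:+.3f} "
      "(mobility change attributable to allowing non-competes)")
```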

A virtuous cycle can be put into motion geographically where talent mobility supports professional networks, which in turn enhance regional innovation. Firms can learn to love these environments of high risk and even higher gain. Rather than thinking of every employee who leaves the company as a threat and an enemy, smart companies are beginning to think of their former employees as assets, just as universities wish for the success of their alumni. Companies like Microsoft and Capital One have established alumni networks. They showcase their former employees’ achievements and practice rehiring their best talent, hoping that at least some of those who leave will soon realize that the grass is not always greener elsewhere.

Most importantly, motivation and performance are triggered by commitment and positive incentives to stay, rather than by threats and legal restrictions against leaving. In behavioral research I’ve conducted with my co-author On Amir, we find that restrictions on mobility can suppress performance and cause people to feel less committed to the task. Cognitive controls over skill, knowledge, and ideas are worse than controls over other forms of intellectual property because they prevent people from using their creative capacities; they don’t just prevent firms from using inventions that are already out there. So instead of requiring non-competes or threatening litigation over intellectual property, California companies use reward systems, creating the kind of corporate cultures where employees want to work and do well. Again, a double victory.

Unsurprisingly, when Forbes recently looked at the most inventive cities in the country for 2013 using OECD data, the top two cities were in California: bio-tech haven San Diego, and the legendary home of Silicon Valley, San Francisco. Boston, still vibrant and highly innovative despite its more restrictive attitudes, came in third. Competition is the lifeblood of any economy, and fierce competition over people is the essence of the knowledge economy.

Orly Lobel is the Don Weckstein Professor of Law at the University of San Diego and founding faculty member of the Center for Intellectual Property and Markets. Her latest book is Talent Wants to be Free: Why We Should Learn to Love Leaks, Raids, and Free-Riding (Yale University Press, September 2013).

This article was originally published by The Foundation for Economic Education.
Transhumanism, Technology, and Science: To Say It’s Impossible Is to Mock History Itself – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 30, 2013
******************************
One of the most common arguments made against Transhumanism, Technoprogressivism, and the transformative potentials of emerging, converging, and disruptive technologies may also be the weakest: technical infeasibility. Some thinkers attack the veracity of Transhumanist claims on moral grounds, arguing that we are committing a transgression against human dignity (in turn often based on the ontological premise of a static human nature that shan’t be tampered with); others attack them on grounds of safety, arguing that humanity isn’t responsible enough to wield such technologies without unleashing their destructive capabilities. These categories of counter-argument (ethics and safety, respectively) are more often than not made by people somewhat more familiar with the community and its common points of rhetoric.
***
In other words, these are the genuinely salient and significant problems that need to be addressed by the Transhumanist and Technoprogressive communities. The good news is that the ones making the most progress in deliberating the possible repercussions of emerging technologies are those very communities. The large majority of thinkers and theoreticians working on Existential Risk and Global Catastrophic Risk, at organizations like the Future of Humanity Institute and the Lifeboat Foundation, share Technoprogressive inclinations. Meanwhile, the largest proponents of the need to ensure wide availability of enhancement technologies, as well as of the need to provide personhood rights to non-biologically-substrated persons, are found amidst the ranks of Technoprogressive think tanks like the IEET.
***

A more frequent Anti-Transhumanist and Anti-Technoprogressive counter-argument, by contrast, and one most often launched by people approaching Transhumanist and Technoprogressive communities from the outside, with little familiarity with their common points of rhetoric, is the claim of technical infeasibility based upon little more than sheer incredulity.

Sometimes a concept or notion simply seems too unprecedented to be possible. But it’s just too easy for us to get stuck in a spacetime rut along the continuum of culture and feel that if something were possible, it would either have already happened or would be in the final stages of completion today. “If something is possible, then why hasn’t anyone done it yet? Shouldn’t the fact that it has yet to be accomplished indicate that it isn’t possible?” This conflates ought with is (which Hume showed us is a fallacy) and ought with can. Ought is not necessarily correlative with either. At the risk of saying the laughably obvious, something must occur at some point in order for it to occur at all. The Moon landing happened in 1969 because it happened in 1969, and to have argued in 1968 that it simply wasn’t possible solely because it had never been done before would not have been a valid argument for its technical infeasibility.

If history has shown us anything, it has shown us that history is a fantastically poor indicator of what will and will not become feasible in the future. Statistically speaking, it seems as though the majority of things that were said to be impossible to implement via technology have nonetheless come into being. Likewise, it seems as though the majority of feats it was said to be possible to facilitate via technology have also come into being. The ability to possiblize the seemingly impossible via technological and methodological in(ter)vention has been exemplified throughout the course of human history so prominently that we might as well consider it a statistical law.

We can feel the sheer fallibility of the infeasibility-from-incredulity argument intuitively when we consider how credible it would have seemed a mere 100 years ago to claim that we would soon be able to send sentences into the air, to be routed to a device in your pocket (and only your pocket, not the device in the pocket of the person sitting right beside you). How likely would it have seemed 200 years ago if you claimed that 200 years hence it would be possible to sit comfortably and quietly in a chair in the sky, inside a large tube of metal that fails to fall fatally to the ground?

Simply look around you. An idiosyncratic genus of great ape did this! Consider how remarkably absurd it would seem for the gorilla genus to have coordinated their efforts to build skyscrapers; to engineer devices that took them to the Moon; to be able to send a warning or mating call to the other side of the earth in less time than such a call could actually be made via physical vocal cords. We live in a world of artificial wonder, and act as though it were the most mundane thing in the world. But considered in terms of geological time, the unprecedented feat of culture and artificial artifact just happened. We are still in the fledgling infancy of the future, which only began when we began making it ourselves.
***

We have no reason whatsoever to doubt the eventual technological feasibility of anything, really, when we consider all the things that were said to be impossible yet happened, all the things that were said to be possible and did happen, and all the things that were unforeseen completely yet happened nonetheless. In light of history, it seems more likely that a given thing will eventually be possible via technology than that it won’t ever be possible. I fully appreciate the grandeur of this claim – but I stand by it nonetheless. To claim that a given ability will probably not be eventually possible to implement via technology is to laugh in the face of history to some extent.

The main exceptions to this claim are abilities wherein you limit or specify the route of implementation. Thus it probably will not eventually be possible to, say, infer the states of all the atoms comprising the Eiffel Tower from the state of a single atom in your fingernail. This is a category of ability in which the implementation is specified as part of the end-ability: in the case above, the end-ability was to infer the state of all the atoms in the Eiffel Tower from the state of a single atom.

These exceptions also serve to illustrate the paramount feature allowing technology to possiblize the seemingly improbable: novel means of implementation. Very often there is a bottleneck in the current system we use to accomplish something that limits the scope of its abilities and prevents certain objectives from being facilitated by it. In such cases a whole new paradigm of approach is what moves progress forward to realizing that objective. If the goal is the reversal and indefinite remediation of the causes and sources of aging, the paradigms of medicine available at the turn of the 20th century would have seemed unable to accomplish such a feat.

The new paradigm of biotechnology and genetic engineering was needed to formulate a scientifically plausible route to the reversal of aging-correlated molecular damage – a paradigm somewhat non-inherent in the medical paradigms and practices common at the turn of the 20th Century. It is the notion of a new route to implementation, a wholly novel way of making the changes that could lead to a given desired objective, that constitutes the real ability-actualizing capacity of technology – and one that such cases of specified implementation fail to take account of.

One might think that there are other clear exceptions to this as well: devices or abilities that contradict the laws of physics as we currently understand them – e.g., perpetual-motion machines. Yet even here we see many historical antecedents exemplifying our short-sighted foresight in regard to “the laws of physics”. Our understanding of the physical “laws” of the universe undergoes massive upheaval from generation to generation. Thomas Kuhn’s The Structure of Scientific Revolutions challenged the predominant view that scientific progress occurs by accumulated development and discovery, arguing instead that scientific progress is driven by the rise of new conceptual paradigms categorically dissimilar to those that preceded them (Kuhn, 1962), paradigms which then define the new predominant directions in research, development, and discovery in almost all areas of scientific inquiry and conceptualization.

Kuhn’s insight can be seen to be paralleled by the recent rise in popularity of Singularitarianism, which today seems to have lost its strict association with I.J. Good‘s posited type of intelligence explosion created via recursively self-modifying strong AI, and now seems to encompass any vision of a profound transformation of humanity or society through technological growth, and the introduction of truly disruptive emerging and converging (e.g., NBIC) technologies.

This epistemic paradigm holds that the future is less determined by the smooth progression of existing trends and more by the massive impact of specific technologies and occurrences – the revolution of innovation. Kurzweil’s own version of Singularitarianism (Kurzweil, 2005) uses the systemic progression of trends in order to predict a state of affairs created by the convergence of such trends, wherein the predictable progression of trends points to their own destruction in a sense, as the trends culminate in our inability to predict past that point. We can predict that there are factors that will significantly impede our predictive ability thereafter. Kurzweil’s and Kuhn’s thinking is also paralleled by Buckminster Fuller’s notion of ephemeralization (i.e., doing more with less) and by the post-industrial information economies and socioeconomic paradigms described by Alvin Toffler (Toffler, 1970), John Naisbitt (Naisbitt, 1982), and Daniel Bell (Bell, 1973), among others.

It can also partly be seen to be inherent in almost all formulations of technological determinism, especially variants of what I call reciprocal technological determinism (not simply that technology determines or largely constitutes the determining factors of societal states of affairs, not simply that tech affects culture, but rather that culture affects technology, which then affects culture, which then affects technology) à la Marshall McLuhan (McLuhan, 1964). This broad epistemic paradigm, wherein the state of progress is more determined by small but radically disruptive changes, innovations, and deviations rather than by the continuation or convergence of smooth and slow-changing trends, can be seen to be inherent in variants of technological determinism because technology is ipso facto (or by its very defining attributes) categorically new and paradigmatically disruptive, and if culture is affected significantly by technology, then it is also affected by punctuated instances of unintended radical innovation untended by trends.

That being said, as Kurzweil has noted, a given technological paradigm “grows out of” the paradigm preceding it, and so the extents and conditions of a given paradigm will to some extent determine the conditions and allowances of the next paradigm. But that is not to say that they are predictable; they may be inherent while still remaining non-apparent. After all, the trend of mechanical components’ increasing miniaturization could be seen hundreds of years ago (e.g., Babbage knew that the mechanical precision available via the manufacturing paradigms of his time would impede his ability to realize his computing engines, but that their implementation would one day be made possible by the trend of increasingly precise manufacturing standards), but the fact that this trend could continue all the way to the ephemeralization of Bucky Fuller (Fuller, 1976) or the mechanosynthesis of K. Eric Drexler (Drexler, 1986) was far from apparent.

Moreover, the types of occurrence allowed by a given scientific or methodological paradigm seem, at least intuitively, to expand rather than contract as we move forward through history. This can be seen lucidly in the upheavals of physics in the early 20th century, which delivered such conceptual affronts to our intuitive notions of the possible as non-locality (i.e., quantum entanglement – and with it quantum information teleportation and even quantum energy teleportation, or in other words correlation between spatially separated physical entities faster than light could travel between them), Einstein’s theory of relativity (which implied such counter-intuitive notions as measured quantities being relative to the velocity of the observer – e.g., the passing of time as measured by clocks differs in space from its passing on Earth), and the hidden-variable theory of David Bohm (which implied such notions as the velocity of any one particle being determined by the configuration of the entire universe). These notions belligerently contradict what we feel intuitively to be possible. Here we have claims that such strange abilities as informational and energetic teleportation, faster-than-light causality (or at least faster-than-light correlation of physical and/or informational states), and spacetime dilation are natural, non-technological properties and abilities of the physical universe.

Technology is Man’s foremost mediator of change; it is by and large through the use of technology that we expand the parameters of the possible. This is why the fact that these seemingly fantastic feats were claimed to be possible “naturally”, without technological implementation or mediation, is so significant. The notion that they are possible without technology makes them all the more fantastical and intuitively improbable.

We also sometimes forget the even more fantastic claims of what can be done through the use of technology, such as stellar engineering and mega-scale engineering, made by some of the big names in science. There is the Dyson Sphere of Freeman Dyson, which details a technological method of harnessing potentially the entire energetic output of a star (Dyson, 1960). One can also find speculation made by Dyson concerning the ability for “life and communication [to] continue for ever, using a finite store of energy” in an open universe by utilizing smaller and smaller amounts of energy to power slower and slower computationally emulated instances of thought (Dyson, 1979).
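The scale of the Dyson Sphere proposal is easy to appreciate with rough arithmetic; the constants below are standard approximations, not figures taken from Dyson’s papers.

```python
import math

SOLAR_LUMINOSITY_W = 3.8e26  # total radiative output of the Sun, ~3.8 * 10^26 W
WORLD_POWER_USE_W = 1.8e13   # rough current human primary power use, ~18 TW
R_EARTH_M = 6.371e6          # Earth's radius in meters
AU_M = 1.496e11              # one astronomical unit in meters

# A full Dyson sphere would intercept essentially all of the Sun's output.
ratio = SOLAR_LUMINOSITY_W / WORLD_POWER_USE_W
print(f"That is roughly {ratio:.1e} times humanity's present power consumption.")

# Earth itself intercepts only its cross-section divided by the whole sphere at 1 AU.
earth_fraction = (math.pi * R_EARTH_M**2) / (4 * math.pi * AU_M**2)
print(f"Earth intercepts only about {earth_fraction:.1e} of the Sun's output.")
```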

There is the Tipler Cylinder (also called the Tipler Time Machine) of Frank J. Tipler, which describes a dense cylinder of infinite length rotating about its longitudinal axis to create closed timelike curves (Tipler, 1974). While Tipler speculated that a cylinder of finite length could produce the same effect if rotated fast enough, he didn’t provide a mathematical solution for this second claim. There is also speculation by Tipler on the ability to utilize energy harnessed from gravitational shear created by the forced collapse of the universe at different rates and in different directions, which he argues would allow the universe’s computational capacity to diverge to infinity, essentially providing computationally emulated humans and civilizations the ability to run for an infinite duration of subjective time (Tipler, 1986, 1997).

We see such feats of technological grandeur paralleled by Kurt Gödel, who produced an exact solution to the Einstein field equations that describes a cosmological model of a rotating universe (Gödel, 1949). While cosmological evidence (e.g., suggesting that our universe is not a rotating one) indicates that his solution doesn’t describe the universe we live in, it nonetheless constitutes a hypothetically possible cosmology in which time-travel (again, via a closed timelike curve) is possible. And because closed timelike curves seem to require large amounts of acceleration – i.e. amounts not attainable without the use of technology – Gödel’s case constitutes a hypothetical cosmological model allowing for technological time-travel (which might be non-obvious, since Gödel’s case doesn’t involve such technological feats as a rotating cylinder of infinite length, rather being a result derived from specific physical and cosmological – i.e., non-technological – constants and properties).

These are large claims made by large names in science (i.e., people who do not make claims frivolously, and in most cases require quantitative indications of their possibility, often in the form of mathematical solutions, as in the cases mentioned above) and all of which are made possible solely through the use of technology. Such technological feats as the computational emulation of the human nervous system and the technological eradication of involuntary death pale in comparison to the sheer grandeur of the claims and conceptualizations outlined above.

We live in a very strange universe, which is easy to forget amidst our feigned mundanity. We have no excuse to express incredulity at Transhumanist and Technoprogressive conceptualizations considering how stoically we accept such notions as the existence of sentient matter (i.e., biological intelligence) or the ability of a genus of great ape to stand on extraterrestrial land.

Thus, one of the most common counter-arguments launched at many Transhumanist and Technoprogressive claims and conceptualizations – namely, technical infeasibility based upon nothing more than incredulity and/or the lack of a definitive historical precedent – is one of the most baseless counter-arguments as well. It would be far more credible to argue for the technical infeasibility of a given endeavor within a certain time-frame. Not only do we have little, if any, indication that a given ability or endeavor will fail to eventually become realizable via technology given enough development-time, but we even have historical indication of the very antithesis of this claim, in the form of the many, many instances in which a given endeavor or feat was said to be impossible, only to be realized via technological mediation thereafter.

It is high time we accepted the fallibility of base incredulity and the infeasibility of the technical-infeasibility argument. I remain stoically incredulous at the audacity of fundamental incredulity, for nothing should be incredulous to man, who makes his own credibility in any case, and who is most at home in the necessary superfluous.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

References

Bell, D. (1973). “The Coming of Post-Industrial Society: A Venture in Social Forecasting.” New York: Basic Books, ISBN 0-465-01281-7.

Dyson, F. (1960) “Search for Artificial Stellar Sources of Infrared Radiation”. Science 131: 1667-1668.

Dyson, F. (1979). “Time without end: Physics and biology in an open universe,” Reviews of Modern Physics 51 (3): 447-460.

Fuller, R.B. (1938). “Nine Chains to the Moon.” Anchor Books pp. 252–59.

Gödel, K. (1949). “An example of a new type of cosmological solution of Einstein’s field equations of gravitation”. Rev. Mod. Phys. 21 (3): 447–450.

Kuhn, Thomas S. (1962). “The Structure of Scientific Revolutions (1st ed.).” University of Chicago Press. LCCN 62019621.

Kurzweil, R. (2005). “The Singularity is Near.” Penguin Books.

McLuhan, M. (1964). “Understanding Media: The Extensions of Man.” 1st Ed. New York: McGraw-Hill.

Naisbitt, J. (1982). “Megatrends: Ten New Directions Transforming Our Lives.” Warner Books.

Tipler, F. (1974) “Rotating Cylinders and Global Causality Violation”. Physical Review D9, 2203-2206.

Tipler, F. (1986). “Cosmological Limits on Computation”, International Journal of Theoretical Physics 25 (6): 617-661.

Tipler, F. (1997). The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead. New York: Doubleday. ISBN 0-385-46798-2.

Toffler, A. (1970). “Future shock.” New York: Random House.

The Patent Bubble and Its End – Article by Jeffrey A. Tucker

The New Renaissance Hat
Jeffrey A. Tucker
February 3, 2013
******************************

“Then they pop up and say, ‘Hello, surprise! Give us your money or we will shut you down!’ Screw them. Seriously, screw them. You can quote me on that.”

Those are the words of Newegg.com’s chief legal officer, Lee Cheng. He was speaking to Arstechnica.com following a landmark ruling that sided with a great business against a wicked patent troll company called Soverain.

What is a patent troll? It is a company that has acquired patents (usually through purchases on the open market) but does not use them for any productive purpose. Instead, it lives off looting good companies by blackmailing people. The trolls say, “Pay us now or get raked over the coals in court.”

Soverain is one such company. Most companies it has sued have paid the ransom. Soverain has collected untold hundreds of millions in fines from the likes of Bloomingdale’s, J.C. Penney, J. Crew, Victoria’s Secret, Amazon, and Nordstrom.

It sounds like a criminal operation worthy of the old world of, say, southern Italy (no offense, guys!). Indeed, but this is how it works in the U.S. these days. The looting is legal. The blackmail is approved. The graft is in the open. The expropriation operates under the cover of the law. The backup penalties are inflicted by the official courts.

To be sure, the trolls may not be as bad as conventional patent practice. At least the trolls don’t try to shut you down and cartelize the economy. They just want to get their beak wet. Once that happens, you are free to go about your business. This is one reason they have been so successful.

Soverain’s plan was to loot every online company in existence for a percentage of their revenue, citing the existence of just two patents. Thousands of companies have given in, causing an unnatural and even insane increase in the price of patent bundles. Free enterprise lives in fear.

Let me add a point that Stefan Molyneux made concerning this case. The large companies are annoyed by the patent-troll pests but not entirely unhappy with their activities. The large companies can afford to pay them off. Smaller companies cannot. In this way, the trolls serve to reduce competition.

[Stefan made his comments on an edition of Adam v. The Man, in which we were both guests. You can watch the entire show here.]

When Soverain came after Newegg’s online shopping cart demanding $34 million, a lower court decided against Newegg, but only imposed a fine of $2.5 million. Newegg examined the opinion and found enough holes in the case to appeal. It was a gutsy decision, given the trends. But as Cheng told Ars Technica:

“We basically took a look at this situation and said, ‘This is bull****.’ We saw that if we paid off this patent holder, we’d have to pay off every patent holder this same amount. This is the first case we took all the way to trial. And now nobody has to pay Soverain jack squat for these patents.”

It’s true. The case not only shuts down the Soverain racket. It might have dealt a devastating blow to the whole patent hysteria and the vicious trolling that has fueled it all along.

And truly, the patent mania has become crazy. No one 10 years ago would have imagined that it would go this far.

“It’s a sign of something gone awry, not a healthy market,” attorney Neil Wilkof told Gigaom.com, with reference to the utterly insane amounts that well-heeled tech giants have been paying for patents. “I think we’re in a patent bubble in a very specific industry. It’s a distorted market and misallocation of resources.”

[Note: This entire racket is anticipated and debunked in the pioneering work on the topic. The new edition of Stephan Kinsella’s Against Intellectual Property is now available for free to Club members.]

Earlier this year, Google shelled out $12.5 billion for the acquisition of Motorola Mobility. Facebook threw down $550 million for AOL’s patents. Apple and Google spent more last year on patent purchases and litigation than on actual research and development. The smartphone industry coughed up $20 billion last year on the patent racket. A lawsuit last year against Samsung awarded Apple $1 billion in a ridiculous infringement case.

These are astronomical numbers — figures that would have been inconceivable in the past. Everyone seems to agree that the system is radically broken. What people don’t always understand is that every penny of this is unnecessary and pointless. This market is a creation of legislation, and nothing more. The companies aren’t really buying anything but the right to produce and the right not to be sued, and that is not always secure.

Let’s back up. Why are there markets in anything at all? They exist because goods have to be allocated some way. There are not enough cars, carrots, and coffee to meet all existing conceivable demand. We can fight over them or find ways to cooperate through trade. Prices are a way to settle the struggle over goods that people grow or make, or services people provide, in a peaceful way. They allow people to engage to their mutual benefit, rather than club or shoot each other.

But what is being exchanged in the patent market? It’s not real goods or services. These are government creations of a bureaucracy — an exclusive right to make something. They are tickets that make production legal. If you own one, there is no broad market for it. It has only a handful of possible buyers, and the price of your good is based entirely on how much money you think you can extract from deep pockets. Sometimes, you actually force people to buy with the threat that you will sue if they don’t.

That’s not how normal markets operate. There was a time when patents didn’t even apply to software at all. The whole industry was built by sharing ideas and the spirit of old-fashioned competition. Companies would work together when it was to their mutual advantage and hoard ideas for competitive reasons when it was not. It seemed to work fine, until legislation intervened.

Today the entire fake market for patents is sustained by the perception that courts will favor the patent holders over the victims. The Newegg case changes that perception, which is why it has been the most closely watched case in the industry. This might signal the end of the reign of terror, at least one form of it.

But, you say, don’t creators deserve compensation? My answer: If they create something people are willing to pay for, great. But that’s not what’s happening. Soverain’s bread and butter was a handful of patents that had been on the open market, changing hands through three different companies over the course of 10 years, until they landed in the laps of some extremely unscrupulous wheeler-dealers.

In other words, patents these days have little to nothing to do with the creators — any more than mortgage-backed securities at the height of the boom had anything to do with the initial lender and its risk assessments. Once a patent is issued — and they are not automatically valid, but rather have to be tested in litigation — it enters into the market and can land anywhere. The idea that the patent has anything to do with inspiring innovation is total myth. It is all about establishing and protecting monopolistic weapons with which to beat people.

Many people have been hoping for patent reform. It probably won’t happen and might not even need to happen. If this case is as significant as tech observers say, a sizeable portion of this fake industry could be smashed via a dramatic price deflation. When something is no longer worth much, people stop wanting it.

Patents date from a time when a great industrial innovation made the headlines just because it was so rare. That’s not our world. Government has no business allocating and centrally planning ideas. Here’s to Newegg: Take a bow. Someone had the guts to say no. This time, for once, it worked.

Yours,
Jeffrey Tucker

Jeffrey Tucker is the publisher and executive editor of Laissez-Faire Books, the Primus inter pares of the Laissez Faire Club, and the author of Bourbon for Breakfast: Living Outside the Statist Quo; It’s a Jetsons World: Private Miracles and Public Crimes; and A Beautiful Anarchy: How to Build Your Own Civilization in the Digital Age, among thousands of articles. Click to sign up for his free daily letter. Email him: tucker@lfb.org | Facebook | Twitter | Google.

This article has been republished pursuant to a Creative Commons Attribution 3.0 License.

Libertarian Life-Extension Reforms – Video Series by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
December 10, 2012
******************************

This video series is derived from Mr. Stolyarov’s essay, “Political Priorities for Achieving Indefinite Life Extension: A Libertarian Approach.” The series highlights each of the proposed areas of pro-liberty life-extension reforms in an effort to spread these ideas and achieve their broader public consideration.

#1 – Repeal FDA Approval Requirements

Mr. Stolyarov discusses the greatest threat to research on indefinite human life extension: the  current requirement in the United States (and analogous requirements elsewhere in the Western world) that drugs or treatments may not be used, even on willing patients, unless approval for such drugs or treatments is received from the Food and Drug Administration (or an analogous national regulatory organization in other countries).

Such prohibitions on the quick development and marketing of potentially life-saving drugs are not only costly and time-consuming to overcome; they are morally unconscionable in terms of the cost in human lives.

#2 – Abolishing Medical Licensing Protectionism

There are too few doctors in the West today – not enough to deliver affordable, life-saving treatments, and certainly not enough to ensure that, when life-extending discoveries are made, they will rapidly become available to all.

Mr. Stolyarov advocates for the elimination of compulsory licensing requirements for medical professionals, and the replacement of such a system by a competing market of private certifications for various “tiers” of medical care.

#3-4 – Abolishing Medical and Software Patent Monopolies

Patents – legal grants of monopoly privilege – artificially raise the cost and the scarcity of new drugs and new software. Mr. Stolyarov recommends allowing free, open competition to apply to these products as well.

#5 – Reestablishing the Doctor-Patient Relationship

The most reliable and effective medical care occurs when both patients and doctors have full sovereignty over medical treatment and payment. A libertarian system is most likely to prolong individual lives and lead to the rapid discovery of unprecedented life-extending treatments.

Mr. Stolyarov presents the case for political reforms that maximize patient choice and free-market experimentation with various methods of payment for and provision of medical services.

#6 – Medical Research Instead of Military Spending

Mr. Stolyarov concludes his series on libertarian life-extension reforms by offering a way to reduce aggregate government spending while also increasing funding for medical research. If government funds are spent on saving and extending lives rather than destroying them, this would surely be an improvement. Thus, while Mr. Stolyarov does not support increasing aggregate government spending to fund indefinite life extension (or medical research generally), he would advocate a spending-reduction plan where vast amounts of military spending are eliminated and some fraction of such spending is replaced with spending on medical research.

Political Priorities for Achieving Indefinite Life Extension: A Libertarian Approach – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
November 22, 2012
******************************

While the achievement of radical human life extension is primarily a scientific and technical challenge, the political environment in which research takes place is extremely influential as to the rate of progress, as well as whether the research could even occur in the first place, and whether consumers could benefit from the fruits of such research in a sufficiently short timeframe. I, as a libertarian, do not see massive government funding of indefinite life extension as the solution – because of the numerous strings attached and the possibility of such funding distorting and even stalling the course of life-extension research by rendering it subject to pressures by anti-longevity special-interest constituencies. (I can allow an exception for increased government medical spending if it comes at the cost of major reductions in military spending; see my item 6 below for more details.) Rather, my proposed solutions focus on liberating the market, competition, and consumer choice to achieve an unprecedented rapidity of progress in life-extension treatments. This is the fastest and most reliable way to ensure that people living today will benefit from these treatments and will not be among the last generations to perish. Here, I describe six major types of libertarian reforms that could greatly accelerate progress toward indefinite human life extension.

1. Repeal of the requirement for drugs and medical treatments to obtain FDA approval before being used on willing patients. The greatest threat to research on indefinite life extension – and the availability of life-extending treatments to patients – is the current requirement in the United States (and analogous requirements elsewhere in the Western world) that drugs or treatments may not be used, even on willing patients, unless approval for such drugs or treatments is received from the Food and Drug Administration (or an analogous national regulatory organization in other countries). This is a profound violation of patient sovereignty; a person who is terminally ill is unable to choose to take a risk on an unapproved drug or treatment unless this person is fortunate enough to participate in a clinical trial. Even then, once the clinical trial ends, the treatment must be discontinued, even if it was actually successful at prolonging the person’s life. This is not only profoundly tragic, but morally unconscionable as well.

As a libertarian, I would prefer to see the FDA abolished altogether and for competing private certification agencies to take its place. But even this transformation does not need to occur in order for the worst current effects of the FDA to be greatly alleviated. The most critical reform needed is to allow unapproved drugs and treatments to be marketed and consumed. If the FDA wishes to strongly differentiate between approved and unapproved treatments, then a strongly worded warning label could be required for unapproved treatments, and patients could even be required to sign a consent form stating that they have been informed of the risks of an unapproved treatment. While this is not a perfect libertarian solution, it is a vast incremental improvement over the status quo, in that hundreds of thousands of people who would die otherwise would at least be able to take several more chances at extending their lives – and some of these attempts will succeed, even if they are pure gambles from the patient’s point of view. Thus, this reform to directly extend many lives and to redress a moral travesty should be the top political priority of advocates of indefinite life extension. Over the coming decades, its effect will be to allow cutting-edge treatments to reach a market sooner and thus to enable data about those treatments’ effects to be gathered more quickly and reliably. Because many treatments take 10-15 years to receive FDA approval, this reform could by itself speed up the real-world advent of indefinite life extension by over a decade.

2. Abolishing medical licensing protectionism. The current system for licensing doctors is highly monopolistic and protectionist – the result of efforts by the American Medical Association in the early 20th century to limit entry into the profession in order to artificially boost incomes for its members. The medical system suffers today from too few doctors and thus vastly inflated patient costs and unacceptable waiting times for appointments. Instead of prohibiting the practice of medicine by all except a select few who have completed an extremely rigorous and cost-prohibitive formal medical schooling, governments in the Western world should allow the market to determine different tiers of medical care for which competing private certifications would emerge. For the most specialized and intricate tasks, high standards of certification would continue to exist, and a practitioner’s credentials and reputation would remain absolutely essential to convincing consumers to put their lives in that practitioner’s hands. But, with regard to routine medical care (e.g., annual check-ups, vaccinations, basic wound treatment), it is not necessary to receive attention from a person with a full-fledged medical degree. Furthermore, competition among certification providers would increase quality of training and lower its price, as well as accelerate the time needed to complete the training. Such a system would allow many more young medical professionals to practice without undertaking enormous debt or serving for years (if not decades) in roles that offer very little remuneration while entailing a great deal of subservience to the hierarchy of some established institution or another. Ultimately, without sufficient doctors to affordably deliver life-extending treatments when they become available, it would not be feasible to extend these treatments to the majority of people. Would there be medical quacks under such a system of privatized certification? There are always quacks, including in the West today – and no regulatory system can prevent those quacks from exploiting willing dupes. But full consumer choice, combined with the strong reputational signals sent by the market, would ensure that the quacks would have a niche audience only and would never predominate over scientifically minded practitioners.

3. Abolishing medical patent monopolies. Medical patents – in essence, legal grants of monopoly for limited periods of time – greatly inflate the cost of drugs and other treatments. Especially in today’s world of rapidly advancing biotechnology, a patent term of 20 years essentially means that no party other than the patent holder (or someone paying royalties to the patent holder) may innovate upon the patented medicine for a generation, all while the technological potential for such innovation becomes glaringly obvious. As much innovation consists of incremental improvements on what already exists, the lack of an ability to create derivative drugs and treatments that tweak current approaches implies that the entire medical field is, for some time, stuck at the first stages of a treatment’s evolution – with all of the expense and unreliability this entails. More appallingly, many pharmaceutical companies today attempt to re-patent drugs that have already entered the public domain, simply because the drugs have been discovered to have effects on a disease different from the one for which they were originally patented. The result of this is that the price of the re-patented drug often spikes by orders of magnitude compared to the price level during the period the drug was subject to competition. Only a vibrant and competitive market, where numerous medical providers can experiment with how to improve particular treatments or create new ones, can allow for the rate of progress needed for the people alive today to benefit from radical life extension. Some may challenge this recommendation with the argument that the monopoly revenues from medical patents are necessary to recoup the sometimes enormous costs that pharmaceutical companies incur in researching and testing the drug and obtaining approval from regulatory agencies such as the FDA. But if the absolute requirement of FDA approval is removed as I recommend, then these costs will plummet dramatically, and drug developers will be able to realize revenues much more quickly than in the status quo. Furthermore, the original developer of an innovation will still always benefit from a first-mover advantage, as it takes time for competitors to catch on. If the original developer can maintain high-quality service and demonstrate the ability to sell a safe product, then the brand-name advantage alone can secure a consistent revenue stream without the need for a patent monopoly.

4. Abolishing software patent monopolies. With the rapid growth of computing power and the Internet, much more medical research is becoming dependent on computation. In some fields such as genome sequencing, the price per computation is declining at a rate far exceeding even that of Moore’s Law. At the same time, ordinary individuals have an unprecedented opportunity to participate in medical research by donating their computer time to distributed computing projects. Software, however, remains artificially scarce because of patent monopolies that have increasingly been utilized by established companies to crush innovation (witness the massively expensive and wasteful patent wars over smartphone and tablet technology). Because most software is not cost-prohibitive even today, the most pernicious effect of software patents is not on price, but on the existence of innovation per se. Because there exist tens of thousands of software patents (many held defensively and not actually utilized to market anything), any inventor of a program that assists in medical, biotechnological, or nanotechnological computations must proceed with extreme caution, lest he run afoul of some obscure patent that is held for the specific purpose of suing people like him out of existence once his product is made known. The predatory nature of the patent litigation system serves to deter many potential innovators from even trying, resulting in numerous foregone discoveries that could further accelerate the rate at which computation could facilitate medical progress. Ideally, all software patents (and all patents generally) should be abolished, and free-market competition should be allowed to reign. But even under a patent system, massive incremental improvements could be made. First, non-commercial uses of a patent should be rendered immune to liability. This would open up a lot of ground for non-profit medical research using distributed computing. Second, for commercial use of patents, a system of legislatively fixed maximum royalties could be established – where the patent holder would be obligated to allow a competitor to use a particular patented product, provided that a certain price is paid to the patent holder – and litigation would be permanently barred. This approach would continue to give a revenue stream to patent holders while ensuring that the existence of a patent does not prevent a product from coming to market or result in highly uncertain and variable litigation costs.

5. Reestablishing the two-party doctor-patient relationship. The most reliable and effective medical care occurs when the person receiving it has full discretion over the level of treatment to be pursued, while the person delivering it has full discretion over the execution (subject to the wishes of the consumer). When a third party – whether private or governmental – pays the bills, it also assumes the position of being able to dictate the treatment and limit patient choice. Third-party payment systems do not preclude medical progress altogether, but they do limit and distort it in significant ways. They also result in the “rationing” of medical care based on the third party’s resources, rather than those of the patient. Perversely enough, third-party payment systems also discourage charity on the part of doctors. For instance, Medicare in the United States prohibits doctors who accept its reimbursements from treating patients free of charge. Mandates to utilize private health insurance in the United States and governmental health “insurance” elsewhere in the Western world have had the effect of forcing patients to be restricted by powerful third parties in this way. While private third-party payment systems should not be prohibited, all political incentives for third-party medical payment systems should be repealed. In the United States, the pernicious health-insurance mandate of the Affordable Care Act (a.k.a. Obamacare) should be abolished, as should all requirements and political incentives for employers to provide health insurance. Health insurance should become a product whose purchase is purely discretionary on a free market. This reform would have many beneficial effects. First, by decoupling insurance from employment, it would ensure that those who do rely on third-party payments for medical care would not have those payments discontinued simply because they lose their jobs. Second, insurance companies would be encouraged to become more consumer-friendly, since they would need to deal with consumers directly, rather than enticing employers – whose interests in an insurance product may be different from those of their employees. Third, insurance companies would be entirely subject to market forces – including the most powerful consumer protection imaginable: the right of a consumer to exit from a market entirely. Fourth and most importantly, the cost of medical care would decline dramatically, since it would become subject to direct negotiation between doctors and patients, while doctors would be subject to far less of the costly administrative bureaucracy associated with managing third-party payments.

In countries where government is the third-party payer, the most important reform is to render participation in the government system voluntary. The worst systems of government healthcare are those where private alternatives are prohibited, and such private competition should be permitted immediately, with no strings attached. Better yet, patients should be permitted to opt out of the government systems altogether by being allowed to save on their taxes if they renounce the benefits from such systems and opt for a competing private system instead. Over time, the government systems would shrink to basic “safety nets” for the poorest and least able, while standards of living and medical care would rise to the level that ever fewer people would find themselves in need of such “safety nets”. Eventually, with a sufficiently high level of prosperity and technological advancement, the government healthcare systems could be phased out altogether without adverse health consequences to anyone.

6. Replacement of military spending with medical research. While, as a libertarian, I do not consider medical research to be the proper province of government, there are many worse ways for a government to spend its money – for instance, by actively killing people in wasteful, expensive, and immoral wars. If government funds are spent on saving and extending lives rather than destroying them, this would surely be an improvement. Thus, while I do not support increasing aggregate government spending to fund indefinite life extension (or medical research generally), I would advocate a spending-reduction plan where vast amounts of military spending are eliminated and some fraction of such spending is replaced with spending on medical research. Ideally, this research should be as free from “strings attached” as possible and could be funded through outright unconditional grants to organizations working on indefinite life extension. However, in practice it is virtually impossible to avoid elements of politicization and conditionality in government medical funding. Therefore, this plan should be implemented with the utmost caution. Its effectiveness could be improved by the passage of legislation to expressly prohibit the government from dictating the methods, outcomes, or applications of the research it funds, as well as to prohibit non-researchers from acting as lobbyists for medical research. An alternative to this plan could be to simply lower taxes across the board by the amount of reduction in military spending. This would have the effect of returning wealth to the general public, some of which would be spent on medical research, while another portion of these returned funds would increase consumers’ bargaining power in the medical system, resulting in improved treatments and more patient sovereignty.

How Government Sort of Created the Internet – Article by Steve Fritzinger

How Government Sort of Created the Internet – Article by Steve Fritzinger

The New Renaissance Hat
Steve Fritzinger
October 6, 2012
******************************

Editor’s Note: Vinton Cerf, one of the individuals whose work was pivotal in the development of the Internet, has responded to this article in the comments below. Read his response here.

In his now-famous “You didn’t build that” speech, President Obama said, “The Internet didn’t get invented on its own. Government research created the Internet so that all the companies could make money off the Internet.”

Obama’s claim is in line with the standard history of the Internet. That story goes something like this: In the 1960s the Department of Defense was worried about being able to communicate after a nuclear attack. So it directed the Advanced Research Projects Agency (ARPA) to design a network that would operate even if part of it was destroyed by an atomic blast. ARPA’s research led to the creation of the ARPANET in 1969. With federal funding and direction the ARPANET matured into today’s Internet.

Like any good creation myth, this story contains some truth. But it also conceals a story that is much more complicated and interesting. Government involvement has both promoted and retarded the Internet’s development, often at the same time. And, despite Obama’s claims, the government did not create the Internet “so all the companies could make money off” it.

The idea of internetworking was first proposed in the early 1960s by computer scientist J. C. R. Licklider at Bolt, Beranek and Newman (BBN). BBN was a private company that originally specialized in acoustic engineering. After achieving some success in that field—for example, designing the acoustics of the United Nations Assembly Hall—BBN branched out into general R&D consulting. Licklider, who held a Ph.D. in psychoacoustics, had become interested in computers in the 1950s. As a vice president at BBN he led the firm’s growing information science practice.

In a 1962 paper Licklider described a “network of networks,” which he called the “Intergalactic Computer Network.” This paper contained many of the ideas that would eventually lead to the Internet. Its most important innovation was “packet switching,” a technique that allows many computers to join a network without requiring expensive direct links between each pair of machines.
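
A rough, back-of-the-envelope sketch (mine, not the article's) makes the economics of that claim concrete: wiring every pair of n machines directly requires n(n-1)/2 dedicated lines, whereas a packet-switched network needs only one attachment per machine to a shared fabric of switches that forward packets on its behalf.

    # Illustrative arithmetic, not from the article: link counts for a full mesh
    # versus a packet-switched network with one access link per host.
    def full_mesh_links(hosts: int) -> int:
        # every pair of machines gets its own dedicated line
        return hosts * (hosts - 1) // 2

    def packet_switched_links(hosts: int) -> int:
        # one attachment per machine; shared switches forward the packets
        return hosts

    for n in (4, 50, 1000):
        print(f"{n} hosts: mesh={full_mesh_links(n)}, packet-switched={packet_switched_links(n)}")

At four hosts the difference is trivial (6 lines versus 4); at a thousand hosts it is 499,500 lines versus 1,000.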

Licklider took the idea of internetworking with him when he joined ARPA in 1962. There he met computer science legends Ivan Sutherland and Bob Taylor. Sutherland and Taylor continued developing Licklider’s ideas. Their goal was to create a network that would allow more effective use of computers scattered around university and government laboratories.

In 1968 ARPA funded the first four-node packet-switched network. This network was not part of a Department of Defense (DOD) plan for post-apocalyptic survival. It was created so Taylor wouldn’t have to switch chairs so often. Taylor routinely worked on three different computers and was tired of switching between terminals. Networking would allow researchers like Taylor to access computers located around the country without having dedicated terminals for each machine.

The first test of this network was in October 1969, when Charley Kline, a student at UCLA, attempted to transmit the command “login” to a machine at the Stanford Research Institute. The test was unsuccessful. The network crashed and the first message ever transmitted over what would eventually become the Internet was simply “lo.”

With a bit more debugging the four-node network went live in December 1969, and the ARPANET was born. Over the next two decades the ARPANET would serve as a test bed for internetworking. It would grow, spawn other networks, and be transferred between DOD agencies. For civilian agencies and universities, NSFNET, operated by the National Science Foundation, replaced ARPANET in 1985. ARPANET was finally shut down in February 1990. NSFNET continued to operate until 1995, during which time it grew into an important backbone for the emerging Internet.

For its entire existence the ARPANET and most of its descendants were restricted to government agencies, universities, and companies that did business with those entities. Commercial use of these networks was illegal. Because of its DOD origins ARPANET was never opened to more than a handful of organizations. In authorizing funds for NSFNET, Congress specified that it was to be used only for activities that were “primarily for research and education in the sciences and engineering.”

During this time the vast majority of people were banned from the budding networks. None of the services, applications, or companies that define today’s Internet could exist in this environment. Facebook may have been founded by college students, but it was not “primarily for research and education in the sciences and engineering.”

This restrictive environment finally began to change in the mid-1980s with the arrival of the first dial-up bulletin boards and online service providers. Companies like CompuServe, Prodigy, and AOL took advantage of the home computer to offer network services over POTS (Plain Old Telephone Service) lines. With just a PC and a modem, a subscriber could access email, news, and other services, though at the expense of tying up the house’s single phone line for hours.

In the early 1990s these commercial services began to experiment with connections between themselves and systems hosted on NSFNET. Being able to access services hosted on a different network made a network more valuable, so service providers had to interoperate in order to survive.

ARPANET researchers led by Vint Cerf and Robert Kahn had already created many of the standards that the Internet service providers (ISPs) needed to interconnect. The most important standard was the Transmission Control Protocol/Internet Protocol (TCP/IP). In the 1970s computers used proprietary technologies to create local networks. TCP/IP was the “lingua franca” that allowed these networks to communicate regardless of who operated them or what types of computers were used on them. Today most of these proprietary technologies are obsolete and TCP/IP is the native tongue of networking. Because of TCP/IP’s success Cerf and Kahn are known as “the fathers of the Internet.”
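
To make the “lingua franca” point concrete, here is a minimal, hedged sketch (mine, not the article's) using Python's standard socket module, which still exposes TCP/IP through essentially the classic sockets interface. It runs a one-shot echo server and a client over the loopback address in a single process; the two endpoints could just as easily be different kinds of machines on different networks, which is the whole point of a shared protocol.

    # Minimal TCP/IP loopback demo (illustrative sketch, not the article's code).
    import socket
    import threading

    def echo_once(server: socket.socket) -> None:
        conn, _addr = server.accept()        # wait for one TCP connection
        with conn:
            conn.sendall(conn.recv(1024))    # echo back whatever arrived

    server = socket.create_server(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    port = server.getsockname()[1]
    threading.Thread(target=echo_once, args=(server,), daemon=True).start()

    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(b"lo")                # a nod to the first ARPANET transmission
        print(client.recv(1024))             # prints b'lo'
    server.close()

The same few calls (connect, send, receive) work whether the peer is across the room or across an ocean, which is what let unlike networks interoperate once they all adopted TCP/IP.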

Forced to interoperate, service providers rapidly adopted TCP/IP to share traffic between their networks and with NSFNET. The modern ISP was born. Though those links were still technically illegal, NSFNET’s commercial use restrictions were increasingly ignored.

The early 1990s saw the arrival of the World Wide Web. Tim Berners-Lee, working at the European high-energy physics lab CERN, created the Uniform Resource Locator (URL), Hypertext Transfer Protocol (HTTP), and Hypertext Markup Language (HTML). These three technologies made it easier to publish, locate, and consume information online. The web rapidly grew into the most popular use of the Internet.
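
A small, hedged illustration (mine, not the article's) of how the three pieces divide the work, using only Python's standard library; "http://example.com/" is just a placeholder address. The URL names the resource, an HTTP GET asks for it, and what comes back is usually an HTML document.

    # Illustrative sketch: URL -> HTTP request -> HTML response.
    from urllib.parse import urlsplit
    from urllib.request import urlopen

    url = "http://example.com/"              # placeholder address
    parts = urlsplit(url)                    # the URL names where the resource lives
    print(parts.scheme, parts.netloc, parts.path)

    with urlopen(url, timeout=5) as response:   # urlopen issues an HTTP GET
        first_bytes = response.read(120)        # the reply body is an HTML document
    print(first_bytes)

Nothing in the snippet cares what kind of server answers; the URL, the request, and the markup are the whole contract.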

Berners-Lee donated these technologies to the Internet community and was knighted for his work in 2004.

In 1993 Mosaic, the first widely adopted web browser, was released by the National Center for Supercomputing Applications (NCSA). Mosaic was the first Internet application to take full advantage of Berners-Lee’s work and opened the Internet to a new type of user. For the first time the Internet became “so easy my mother can use it.”

The NCSA played a significant role in presidential politics. Mosaic’s development there had been funded under the High Performance Computing Act of 1991 (aka “The Gore Bill”). In 1999 presidential candidate Al Gore cited this act in an interview about his legislative accomplishments, saying, “I took the initiative in creating the Internet.” This comment was shortened to “I created the Internet” and quickly became a punchline for late-night comedians. This one line arguably cost Gore the presidency in 2000.

The 1992 Scientific and Advanced Technology Act, another Gore initiative, lifted some of the commercial restrictions on Internet usage. By mid-decade all the pieces for the modern Internet were in place.

In 1995, 26 years after its humble beginnings as ARPANET, the Internet was finally freed of government control. NSFNET was shut down. Operation of the Internet passed to mostly private companies, and all prohibitions on commercial use were lifted.

Anarchy, Property, and Innovation

Today the Internet can be viewed as three layers, each with its own stakeholders, business models, and regulatory structure: the standards, like TCP/IP, that control how information flows between networks; the physical infrastructure that actually makes up the networks; and the devices and applications that most people see as “the Internet.”

Since the Internet is really a collection of separate networks that have voluntarily joined together, there is no single central authority that owns or controls it. Instead, the Internet is governed by a loose collection of organizations that develop technologies and ensure interoperability. These organizations, like the Internet Engineering Task Force (IETF), may be the most successful anarchy ever.

Anarchy, in the classical sense, means without ruler, not without laws. The IETF demonstrates how well a true anarchy can work. The IETF has little formal structure. It is staffed by volunteers. Meetings are run by randomly chosen attendees. The closest thing there is to being an IETF member is being on the mailing list for a project and doing the work. Anyone can contribute to any project simply by attending the meetings and voicing an opinion. Something close to meritocracy controls whose ideas become part of the standards.

At the physical layer the Internet is actually a collection of servers, switches, and fiber-optic cables. At least in the United States this infrastructure is mostly privately owned and operated by for-profit companies like AT&T and Cox. The connections between these large national and international networks put the “inter” in Internet.

As for-profit companies, ISPs compete for customers. They invest in faster networks, wider geographic coverage, and cooler devices to attract more monthly subscription fees. But ISPs are also heavily regulated companies. In addition to pleasing customers, they must also please regulators. This makes lobbying an important part of their business. According to the Center for Responsive Politics’ OpenSecrets website, ISPs and the telecommunications industry in general spend between $55 million and $65 million per year trying to influence legislation and regulation.

When most people think of the Internet they don’t think of a set of standards sitting on a shelf or equipment in a data center. They think of their smart phones and tablets and applications like Twitter and Spotify. It is here that Internet innovation has been most explosive. This is also where government has had the least influence.

For its first 20 years the Internet and its precursors were mostly text-based. The most popular applications, like email, Gopher (“Go for”), and Usenet news groups, had text interfaces. In the 20 years that commercial innovation has been allowed on the Internet, text has become almost a relic. Today, during peak hours, almost half of North American traffic comes from streaming movies and music. Other multimedia services, like video chat and photo sharing, consume much of people’s Internet time.

None of this innovation could have happened if the Internet were still under government control. These services were created by entrepreneurial trial and error. While some visionaries explored the possibilities of a graphically interconnected world as early as the 1960s, no central planning board knew that old-timey-looking photographs taken on ultramodern smart phones would be an important Internet application.

I, Internet

When Obama said the government created the Internet so companies could make money off it, he was half right. The government directly funded the original research into many core networking technologies and employed key people like Licklider, Taylor, Cerf, and Kahn. But after creating the idea the government sat on it for a quarter century and denied access to all but a handful of people. Its great commercial potential was locked away.

For proponents of government-directed research policies, the Internet proves the value of their programs. But government funding might not have been needed to create the Internet. The idea of internetworking came from BBN, a private company. The rise of commercial online services in the 1980s showed that other companies were willing to invest in this space. Once the home PC and dial-up services became available, people joined commercial networks by the millions. The economic incentives to connect those early networks probably would have resulted in something very much like today’s Internet even if the ARPANET had never existed.

In the end the Internet rose from no single source. Like Leonard Read’s humble writing instrument, the pencil, the Internet could not have been created by any one organization. It took the efforts of thousands of engineers from the government and private sectors. Those engineers followed no central plan. Instead they explored. They competed. They made mistakes. They played.

Eventually they created a system that links a third of humanity. Now entrepreneurs all over the world are looking for the most beneficial ways to use that network.

Imagine where we’d be today if that search could have started five to ten years earlier.

Steve Fritzinger is a freelance writer from Fairfax, Virginia. He is the regular economics commentator on the BBC World Service program Business Daily.

This article was published by The Foundation for Economic Education and may be freely distributed, subject to a Creative Commons Attribution United States License, which requires that credit be given to the author.

TANSTAAFL and Saving: Not the Whole Story – Article by Sanford Ikeda

TANSTAAFL and Saving: Not the Whole Story – Article by Sanford Ikeda

The New Renaissance Hat
Sanford Ikeda
October 3, 2012
******************************

How often have you heard someone say, “There ain’t no such thing as a free lunch,” or, “Saving is the path to economic development”?  Many treat these statements as the alpha and omega of economic common sense.

The problem is they are myths.

Or, at least, popular half-truths.  And they aren’t your garden-variety myths because people who favor the free market tend to say them all the time.  I’ve said them myself, because they do contain more than a grain of truth.

“There ain’t no such thing as a free lunch” (or TANSTAAFL) means that, with a limited budget, choosing one thing means sacrificing something else.  Scarcity entails tradeoffs.  It also implies that efficiency means putting each resource to the use that offers the highest reward for the risk involved, so that no alternative use would pay better.

That saving is necessary for rising labor productivity and prosperity also contains an economic truth.  No less an authority than the great Austrian economist Ludwig von Mises has stated this many times.  In an article published in The Freeman in 1981, for example, he said:

The fact that the standard of living of the average American worker is incomparably more satisfactory than that of the average [Indian] worker, that in the United States hours of work are shorter and children sent to school and not to the factories, is not an achievement of the government and the laws of the country. It is the outcome of the fact that the capital invested per head of the employees is much greater than in India and that consequently the marginal productivity of labor is much higher.

The Catalyst

But the statement is true in much the same way that saying breathable air is necessary for economic development is true.  Saving and rising capital accumulation per head do accompany significant economic development, and if we expect it to continue, people need to keep doing those activities.  But they are not the source–the catalyst, if you will–of the prosperity most of the world has seen in the past 200 years.

What am I talking about?  Deirdre McCloskey tells us in her 2010 book, Bourgeois Dignity: Why Economics Can’t Explain the World:

Two centuries ago the world’s economy stood at the present level of Bangladesh. . . .  In 1800 the average human consumed and expected her children and grandchildren and great-grandchildren to go on consuming a mere $3 a day, give or take a dollar or two [in today’s dollars]. . . .

By contrast, if you live nowadays in a thoroughly bourgeois country such as Japan or France you probably spend about $100 a day.  One hundred dollars as against three: such is the magnitude of modern economic growth.

(Hans Rosling illustrates this brilliantly in this viral video.)
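
As a bit of back-of-the-envelope arithmetic (mine, not McCloskey's), the round figures of $3 and $100 a day, compounded over roughly two centuries, imply an average growth rate of well under 2 percent a year:

    # Rough arithmetic on McCloskey's round figures, for illustration only.
    ratio = 100 / 3                        # about a 33-fold rise in daily consumption
    annual_rate = ratio ** (1 / 200) - 1   # implied average compound growth over ~200 years
    print(round(ratio, 1), f"{annual_rate:.2%}")   # ~33.3 and ~1.77% per year

Roughly 1.8 percent a year, sustained for two hundred years, is all it takes to produce a thirty-three-fold rise.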

That is unprecedented, historic, even miraculous growth, especially when you consider that $3 (or less) a day per person has been the norm for most of human history.  What is the sine qua non of explosive economic development and accelerating material prosperity?  What was missing for millennia that prevented the unbelievable takeoff that began about 200 years ago?

A More Complete Story

Economics teaches us the importance of TANSTAAFL and capital investment.  Again, the trouble is they are not the whole truth.

As I’ve written before, however, there is such a thing as a free lunch, and I don’t want to repeat that argument in its entirety.  The basic idea is that what Israel M. Kirzner calls “the driving force of the market” is entrepreneurship.  Entrepreneurship goes beyond working within a budget–it’s the discovery of novel opportunities that increases the wealth and raises the budgets of everyone in society, much as the late Steve Jobs or Thomas Edison or Madam C.J. Walker (probably the first African-American millionaire) did.  Yes, those innovators needed saving and capital investment by someone–most innovators were debtors at first–but note: Those savings could have been, and were, put to less productive uses before these guys came along.

As McCloskey, as well as Rosenberg and Birdzell, have argued, it isn’t saving or capital investment per se, and certainly not colonialism, income inequality, capitalist exploitation, or even hard work that is responsible for the tremendous rise in economic development, especially since 1800.

It is innovation.

And, McCloskey adds, it is crucially the ideas and words that we use to think and talk about the people who innovate–the chance takers, the rebels, the individualists, the game changers–and that reflect a respect for and acceptance of the very concept of progress.  Innovation blasts the doors off budget constraints and swamps current rates of savings.

Doom to the Old Ways

Innovation can also spell doom to the old ways of doing things and, in the short run at least, create hardship for the people wedded to them.  Not everyone unambiguously gains from innovation at first, but in time we all do, though not at the same rate.

So for McCloskey, “The leading ideas were two: that the liberty to hope was a good idea and that a faithful economic life should give dignity and even honor to ordinary people. . . .”

There’s a lot in this assertion that I’ll need to think through.  But I do accept the idea that innovation, however it arises, trumps efficiency and it trumps mere savings.  Innovation discovers free lunches; it dramatically reduces scarcity.

Indeed, innovation is perhaps what enables the market economy to stay ahead of, for the time being at least, the interventionist shackles that increasingly hamper it.  You want to regulate landline telephones?  I’ll invent the mobile phone!  You make mail delivery a legal monopoly?  I’ll invent email!  You want to impose fixed-rail transport on our cities?  I’ll invent the driverless car!

These aren’t myths. They’re reality.

Sanford Ikeda is an associate professor of economics at Purchase College, SUNY, and the author of The Dynamics of the Mixed Economy: Toward a Theory of Interventionism.

This article was published by The Foundation for Economic Education and may be freely distributed, subject to a Creative Commons Attribution United States License, which requires that credit be given to the author.