Against Monsanto, For GMOs – Video by G. Stolyarov II

The depredations of the multinational agricultural corporation Monsanto are rightly condemned by many. But Mr. Stolyarov points out that arguments against Monsanto’s misbehavior are not valid arguments against genetically modified organisms (GMOs) as a whole.

References

– “Against Monsanto, For GMOs” – Essay by G. Stolyarov II
– “Monsanto – Legal actions and controversies” – Wikipedia
– “Copyright Term Extension Act” – Wikipedia
– “Electronic Arts discontinues Online Pass, a controversial form of video game DRM” – Sean Hollister – The Verge – May 15, 2013
– “Extinction” – Wikipedia

Heidegger, Cooney, and The Death-Gives-Meaning-To-Life Hypothesis – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
August 10, 2013
******************************
One common argument against indefinite lifespans is that a definitive limit to one’s life – that is, death – provides some essential baseline reference, and that it is only in contrast to this limiting factor that life has any meaning at all. In this article I refute the argument’s underlying premises, and then argue that even if such premises were taken as true, the argument’s conclusion – that eradicating death would negate the “limiting factor” that legitimizes life – is also invalid, because the ever-changing nature of self and society – and the fact that opportunities once here are now gone –  can constitute such a scarcitizing factor just as well as death can.
***
Death gives meaning to life? No! Death is meaninglessness!
***

One version of the argument is given in Brian Cooney’s Posthumanity: Thinking Philosophically about the Future, an introductory philosophical text that uses various futurist scenarios and concepts to illustrate the broad currents of Western philosophy. Towards the end of the book, Cooney makes his argument against immortality, claiming that if we had all the time in the universe to do what we wanted, then we wouldn’t do anything at all. Essentially, his argument boils down to “if there is no possibility of not being able to do something in the future, then why would we ever do it?”

Each chapter of Cooney’s book ends with a dialogue between a fictional human and posthuman, meant to better exemplify the arguments laid out in the chapter and their various interpretations. In the final chapter, “Posthumanity”, Cooney-as-posthuman writes:

Our ancestors realized that immortality would be a curse, and we have never been tempted to bestow it on ourselves… We didn’t want to be like Homer’s gods and goddesses. The Odyssey is saturated with the contrast of mortal human life, the immortality of the gods and the shadow life of the dead in Hades… Aren’t you struck by the way these deities seem to have nothing better to do than be an active audience for the lives and deeds of humans… These gods are going to live forever and there is no scarcity of whatever resources they need for their divine way of life. So (to borrow a phrase from your economists) there is no opportunity cost to their choosing to do one thing rather than another or spend time with one person rather than another. They have endless time and resources to pursue other alternatives and relationships later. Consequently, they can’t take anyone or anything seriously… Moreover, their lives lack meaning because they are condemned to living an unending story, one that can never have narrative unity… That is the fate we avoid by fixing a standard limit to our lives. Immortals cannot have what Kierkegaard called ‘passion’… A mind is aware of limitless possibilities – it can think of itself as doing anything conceivable – and it can think of a limitless time in which to do it all. To choose a life – one that will progress like a story from its beginning to its end – is to give up the infinite for the finite… We consider ourselves free because we were liberated from the possibility of irrationality and selfishness.”   –   (Cooney, 2004, 183-186).
***

Thus we see that Cooney’s argument rests upon the thesis that death gives meaning to life because it incurs finitude, and finitude forces us to choose certain actions over others. This assumes that we choose actions on the basis of not being able to perform them again. But people don’t make most of their decisions this way. We didn’t go out to dinner because the restaurant was closing down; we went out to dinner because we wanted to go out to dinner. I think that Cooney’s version of the argument is naïve. We don’t make the majority of our decisions by contrasting an action with the possibility of not being able to do it in the future.

Cooney’s argument seems to be that if we had a list of all possible actions set before us, and time were limitless, we might accomplish all the small, negligible things first, because they are easier and all the hard things can wait. If we had all the time in the world, we would have no reference point with which to judge how important a given action or objective is. If we really can do every single thing on that ‘listless list’, then why bother, if each is as important as every other? In his line of reasoning, importance requires scarcity. If we can do everything it is possible to do, then there is nothing that determines one thing as being more important than another. Cooney makes an analogy with an economic concept to clarify his position. Economic definitions of value require scarcity; if everything were as abundant as everything else, if nothing were scarce, then we would have no way of ascribing economic value to a given thing, such that one thing has more economic value than another. So too, Cooney argues, with possible choices in life.

But what we sometimes forget is that ecologies aren’t always like economies.

The Grave Dig|nitty of Death

In the essay collection “Transhumanism and its Critics”, Hava Tirosh-Samuelson writes:

Finally, since death is part of the cycle of life characteristic of finite creatures, we will need to concern ourselves with a dignified death… the dying process need not be humiliating or dehumanizing; if done properly, as the hospice movement has shown us, the dying process itself can be dignified by remembering that we are dealing with persons whose life narratives in community are imbued with meaning, and that meaning does not disappear when bodily functions decline or finally cease.”   –  (Tirosh-Samuelson, 2011).
***

She may have provided a line of reasoning for arguing that death need not be indignifying or humiliating (though she has not convinced me that death has any dignity whatsoever), but I would say that she’s digging her claim’s own grave by focusing on the nitty-gritty details of humiliation and dignity. It is not the circumstances of death that make death problematic and wholly unsatisfactory; it is the fact that death negates life. Only in life can an individual exhibit dignity or fail by misemphasis. Sure, people can remember you after you have gone, and contributing to larger projects that continue after one’s own death can provide some meaning… but only for those still alive – not for the dead. The meaning held or beheld by the living could pertain to the dead, but that doesn’t constitute meaning to or for the dead, who forfeited the capability to experience or behold meaning when they lost the ability to experience or behold anything at all.

Tirosh-Samuelson’s last claim, that death need not be dehumanizing, appears to be founded more upon her personal belief in an afterlife than upon the claim that meaning doesn’t necessarily have to cease when we die because we are part of “a community imbued with meaning” and this community will continue after our own death, thus providing continuity of meaning. Tirosh-Samuelson’s belief in the afterlife also largely invalidates the claims she makes, since death means two completely different things to an atheist and a theist. As I have argued elsewhere (Cortese, 2013, 160-172), only the atheist speaks of death; the theist speaks merely of another kind of life. For a theist, death would not be dehumanizing, humiliating, or indignifying if all the human mental attributes a person possessed in the physical world were preserved in an afterlife.

Another version of the “limiting factor” argument comes from Martin Heidegger, in his massive philosophical work Being and Time. In the section on being-toward-death, Heidegger claims, on one level, that Being must be a totality, and in order to be a totality (in the sense of being absolute or not containing anything outside of itself) it must also be that which it is not. Being can only become what it is not through death, and so in order for Being to become a totality (which he argues it must in order to achieve authenticity – which is the goal all along, after all), it must become what it is not – that is, death – for completion (Heidegger, 1962). This reinforces interpretations that link truth with completion and completion with staticity.

Another line of reasoning taken by Heidegger seems to reinforce the interpretation made by Cooney, which was probably influenced heavily by Heidegger’s concept of being-toward-death. The “fact” that we will one day die causes Being to reevaluate itself, realize that it is time and time is finite, and that its finitude requires it to take charge of its own life – to find authenticity. Finitude for Heidegger legitimizes our freedom. If we had all the time in the world to become authentic, then what’s the point? It could always be deferred. But if our time is finite, then the choice of whether to achieve authenticity or not falls in our own hands. Since we must make choices on how to spend our time, failing to become authentic by spending one’s time on actions that don’t help achieve authenticity becomes our fault.

Can Limitless Life Still Have a “Filling Stillness” and “Legitimizing Limit”?

Perhaps more importantly, even if their premises were correct (i.e., that the “change” of death adds some baseline limiting factor, causing us to do what we would not have done if we had all the time in the world, and thereby constituting our main motivator for motion and metric for meaning), Cooney and Heidegger are still wrong in the conclusion that indefinitely extended life would destroy or jeopardize this “essential limitation”.

The crux of the “death-gives-meaning-to-life” argument is that life needs scarcity, finitude, or some other factor restricting the possible choices that could be made, in order to find meaning. But final death need not be the sole candidate for such a restricting factor.
***
Self: La Petite Mort
***
All changed, changed utterly… A terrible beauty is born. The self sways by the second. We are creatures of change, and in order to live we die by the moment. I am not the same as I once was, and may never be the same again. The choices we prefer and the decisions we are most likely to make go through massive upheaval. The changing self could constitute this “scarcitizing” or limiting factor just as well as death could. We can be compelled to prioritize certain choices and actions over others because we might be compelled to choose differently in another year, month, or day. We never know what we will become, and this is a blessing. Life itself can act as the limiting factor that, for some, legitimizes life.

Society: La Petite Fin du Monde

Society is ever on an s-curve swerve of consistent change as well. Culture is in constant upheaval, with new opportunities opening up(ward) all the time. Thus the changing state of culture and humanity’s upheaved hump through time could act as this “limiting factor” just as well as death or the changing self could. What is available today may be gone tomorrow. We’ve missed our chance to see the Roman Empire at its highest point, to witness the first Moon landing, to pioneer a new idea now old. Opportunities appear and vanish all the time.

Indeed, these last two points – that the changing state of self and society, together or singly, could constitute such a limiting factor just as effectively as death could – serve to undermine another common argument against the desirability of limitless life (boredom) – thereby killing two inverted phoenixes with one stoning. Too often is this rather baseless claim bandied about as a reason to forestall indefinitely extended lifespans – that longer life will lead to increased boredom. The fact that self and society are in a constant state of change means that boredom should become increasingly harder to maintain. We are on the verge of our umpteenth rebirth, and the modalities of being that are set to become available to us, as selves and as societies, will ensure that the only way to entertain the notion of increased boredom  will be to personally hard-wire it into ourselves.

Life gives meaning to life, dummy!

Death is nothing but misplaced waste, and I think it’s time to take out the trash, with haste. We don’t need death to make certain opportunities more pressing than others, or to allow us to assign higher priorities to one action than we do to another. The Becoming underlying life’s self-overcoming will do just fine.

References

Cooney, B. (2004). Posthumanity: Thinking Philosophically about the Future. Rowman & Littlefield. ISBN-10: 0742532933

Cortese, F. (2013). “Religion vs. Radical Longevity: Belief in Heaven is the Biggest Barrier to Eternal Life?!”. Human Destiny is to Eliminate Death: Essays, Arguments and Rants about Immortalism. Ed. Pellissier, H. 1st ed. Niagara Falls: Center for Transhumanity. 160-172.

Heidegger, M., Macquarrie, J., & Robinson, E. (1962). Being and Time. Malden, MA: Blackwell.

Tirosh-Samuelson, H. (2011). “Engaging Transhumanism”. Transhumanism and its Critics. Ed. Grassie, W., Hansell, G. Philadelphia, PA: Metanexus Institute.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

A House Divided Over NSA Spying on Americans – Article by Ron Paul

The New Renaissance Hat
Ron Paul
August 10, 2013
******************************

In late July 2013, the House debate on the Defense Appropriations bill for 2014 produced a bit more drama than usual. After House leadership announced it would do away with the traditional “open rule” allowing for debate on any funding limitation amendment, it was surprising to see that Rep. Justin Amash’s (R-MI) amendment was allowed on the Floor. In the wake of National Security Agency (NSA) whistleblower Edward Snowden’s revelations about the extent of US government spying on American citizens, Amash’s amendment sought to remove funding in the bill for some of the NSA programs.

Had Amash’s amendment passed, it would have been a significant symbolic victory over the administration’s massive violations of our Fourth Amendment protections. But we should be careful about believing that even if it had somehow miraculously survived the Senate vote and the President’s veto, it would have resulted in any significant change in how the Intelligence Community would behave toward Americans. The US government has built the largest and most sophisticated spying apparatus in the history of the world.

The NSA has been massively increasing the size of its facilities, both at its Maryland headquarters and in its newly built (and way over-budget) enormous data center in Utah. Taken together, these two facilities will be seven times larger than the Pentagon! And we know now that much of the NSA’s capacity to intercept information has been turned inward, to spy on us.

As NSA expert James Bamford wrote earlier this year about the new Utah facility:

“The heavily fortified $2 billion center should be up and running in September 2013. Flowing through its servers and routers and stored in near-bottomless databases will be all forms of communication, including the complete contents of private emails, cell phone calls, and Google searches, as well as all sorts of personal data trails—parking receipts, travel itineraries, bookstore purchases, and other digital “pocket litter.” It is, in some measure, the realization of the “total information awareness” program created during the first term of the Bush administration—an effort that was killed by Congress in 2003 after it caused an outcry over its potential for invading Americans’ privacy.”

But it happened anyway.

In late July we saw two significant prison breaks: one in Iraq, where some 500 al-Qaeda members broke out of the infamous Abu Ghraib prison, which the US built, and another in Benghazi, Libya – the city where the US Ambassador was killed by the rebels that the US government helped put in power – where some 1,000 prisoners escaped. Did the US intelligence community, focused on listening to our phone calls, not see this real threat coming?

Rep. Amash’s amendment was an important move to at least bring attention to what the US intelligence community has become: an incredibly powerful conglomeration of secret government agencies that seem to view Americans as the real threat. It is interesting that the votes on Amash’s amendment divided the House not along party lines. Instead, we saw the votes divided between those who follow their oath to the Constitution and those who seem to believe that any violation of the Constitution is justified in the name of the elusive “security” of the police state at the expense of liberty. The leadership of both parties in the House – not to my surprise – voted for the police state.

It is encouraging to see the large number of votes crossing party lines in favor of the Amash amendment. Let us hope that this will be a growing trend in the House – perhaps the promise that Congress may once again begin to take its duties and obligations seriously. We should not forget, however, that in the meantime another Defense Appropriations bill passing really means another “military spending” bill. The Administration is planning for a US invasion of Syria, more military assistance to the military dictatorship in Egypt, and more drones and interventionism. We have much work yet to do.

Ron Paul, MD, is a former three-time Republican candidate for U. S. President and Congressman from Texas.

This article is reprinted with permission.

Why Won’t They Tell Us the Truth About NSA Spying? – Article by Ron Paul

The New Renaissance Hat
Ron Paul
August 10, 2013
******************************
In 2001, the Patriot Act opened the door to US government monitoring of Americans without a warrant. It was unconstitutional, but, over my strong objection, most in Congress were so determined to do something after the attacks of 9/11 that they did not seem to give it too much thought. Civil liberties groups were concerned, and some of us in Congress warned about giving up our liberties even in the post-9/11 panic. But at the time most Americans did not seem too worried about the intrusion.
***

This complacency has suddenly shifted given recent revelations of the extent of government spying on Americans. Federal politicians and bureaucrats are faced with serious backlash from Americans outraged that their most personal communications are intercepted and stored. They had been told that only the terrorists would be monitored. In response to this anger, defenders of the program have time and again resorted to spreading lies and distortions. But these untruths are now being exposed very quickly.

In a Senate hearing this March, Director of National Intelligence James Clapper told Senator Ron Wyden that the NSA did not collect phone records of millions of Americans. This was just three months before the revelations of an NSA leaker made it clear that Clapper was not telling the truth. Pressed on his false testimony before Congress, Clapper apologized for giving an “erroneous” answer but claimed it was just because he “simply didn’t think of Section 215 of the Patriot Act.” Wow.

As the story broke in June of the extent of warrantless NSA spying against Americans, House Intelligence Committee Chairman Mike Rogers assured us that the project was strictly limited and not invasive. He described it as a “lockbox with only phone numbers, no names, no addresses in it, we’ve used it sparingly, it is absolutely overseen by the legislature, the judicial branch and the executive branch, has lots of protections built in…”

But we soon discovered that this was not true either. We learned in another Guardian newspaper article last week that the top secret “X-Keyscore” program allows even low-level analysts to “search with no prior authorization through vast databases containing emails, online chats and the browsing histories of millions of individuals.”

The keys to Rogers’ “lockbox” seem to have been handed out to everyone but the janitors! As Chairman of the Committee that is supposed to be most in the loop on these matters, it seems either the Intelligence Community misled him about their programs or he misled the rest of us. It sure would be nice to know which one it is.

Likewise, Rep. Rogers and many other defenders of the NSA spying program promised us that this dragnet scooping up the personal electronic communications of millions of Americans had already stopped “dozens” of terrorist plots against the United States. In June, NSA director General Keith Alexander claimed that the just-disclosed bulk collection of Americans’ phone and other electronic records had “foiled 50 terror plots.”

Opponents of the program were charged with being unconcerned with our security.

But none of it was true.

On August 3, 2013, the Senate Judiciary Committee heard dramatic testimony from NSA deputy director John C. Inglis. According to the Guardian:

“The NSA has previously claimed that 54 terrorist plots had been disrupted ‘over the lifetime’ of the bulk phone records collection and the separate program collecting the internet habits and communications of people believed to be non-Americans. On Wednesday, Inglis said that at most one plot might have been disrupted by the bulk phone records collection alone.”

From dozens to “at most one”?

Supporters of these programs are now on the defensive, with several competing pieces of legislation in the House and Senate seeking to rein in an administration and intelligence apparatus that is clearly out of control. This is to be commended. What is even more important, though, is for more and more Americans to educate themselves about our precious liberties and to demand that their government abide by the Constitution. We do not have to accept being lied to – or spied on – by our government.

Ron Paul, MD, is a former three-time Republican candidate for U. S. President and Congressman from Texas.

This article is reprinted with permission.

George Zimmerman’s Acquittal – Thoughts and Implications – Video by G. Stolyarov II

Now that George Zimmerman has been acquitted in a court of law of the charges of murdering Trayvon Martin, Mr. Stolyarov offers his reflections on the Trayvon Martin case in light of the information that emerged during the trial. These thoughts include a re-evaluation of the comments made in Mr. Stolyarov’s earlier (March 2012) video, “The Travesty of Trayvon Martin’s Murder”.

Reference
– “Shooting of Trayvon Martin” – Wikipedia

Rondo #5, Op. 72 (2013) – Musical Composition by G. Stolyarov II

This rondo by Mr. Stolyarov, new in 2013, is one of his most ornate compositions to date, while managing to maintain an intense rapidity and convey sensations of acceleration and deceleration through the use of chords and changes in prevailing note lengths. The key alternates between C minor and C major, and the melody rapidly develops in complexity until reaching a grand finale in the last recapitulation.

This composition is written for three piano parts and is played in Finale 2011 software.

Download the MP3 file of this composition here.

See the index of Mr. Stolyarov’s compositions, all available for free download, here.

The artwork is Mr. Stolyarov’s Abstract Orderism Fractal 47, available for download here and here.

Remember to LIKE, FAVORITE, and SHARE this video in order to spread rational high culture to others.

Hacking Law and Governance with Startup Cities: How Innovation Can Fix Our Social Tech – Article by Zachary Caceres

The New Renaissance Hat
Zachary Caceres
July 16, 2013
******************************

Outside of Stockholm, vandals and vines have taken over Eastman Kodak’s massive factories. The buildings are cold metal husks, slowly falling down and surrendering to nature.  The walls are covered in colorful (and sometimes vulgar) spray paint. In the words of one graffiti artist: It’s “a Kodak moment.”

After its founding in 1888, Eastman Kodak became the uncontested champion of photography for almost a century. But in early 2012, the once $30-billion company with over 140,000 employees filed for bankruptcy.

Kodak was the victim of innovation—a process that economist Joseph Schumpeter called “the gales of creative destruction.” Kodak could dominate the market only so long as a better, stable alternative to its services didn’t exist. Once that alternative—digital photography—had been created, Kodak’s fate was sealed. The camera giant slowly lost market share to upstarts like Sony and Nikon until suddenly “everyone” needed a digital camera and Kodaks were headed to antique shows.

How does this happen? Christian Sandström, a technologist from the Ratio Institute in Sweden, argues that most major innovation follows a common path.

From Fringe Markets to the Mainstream

Disruptive technologies start in “fringe markets,” and they’re usually worse in almost every way. Early digital cameras were bulky, expensive, heavy, and made low-quality pictures. But an innovation has some advantage over the dominant technology: for digital cameras it was the convenience of avoiding film. This advantage allows the innovation to serve a niche market. A tiny group of early adopters is mostly ignored by an established firm like Kodak because the dominant technology controls the mass market.

But the new technology doesn’t remain on the fringe forever. Eventually its performance improves and suddenly it rivals the leading technology. Digital cameras already dispensed with the need to hassle with film; in time, they became capable of higher resolution than film cameras, easier to use, and cheaper. Kodak pivoted and tried to enter the digital market, but it was too late. The innovation sweeps through the market and the dominant firm drowns beneath the waves of technological change.

Disruptive innovation makes the world better by challenging monopolies like Kodak. It churns through nearly every market except for one: law and governance.

Social Technology

British Common law, parliamentary democracy, the gold standard: It may seem strange to call these “technologies.” But W. Brian Arthur, a Santa Fe Institute economist and author of The Nature of Technology, suggests that they are. “Business organizations, legal systems, monetary systems, and contracts…” he writes, “… all share the properties of technology.”

Technologies harness some phenomenon toward a purpose. Although we may feel that technologies should harness something physical, like electrons or radio waves, law and governance systems harness behavioral and social phenomena instead. So one might call British common law or Parliamentary democracy “social technologies.”

Innovation in “social tech” might still seem like a stretch. But people also once took Kodak’s near-total control of photography for granted (in some countries, the word for “camera” is “Kodak”). But after disruptive innovation occurs, it seems obvious that Kodak was inferior and that the change was good. Our legal and political systems, as technologies, are just as open to disruptive innovation. It’s easy to take our social techs for granted because the market for law and governance is so rarely disrupted by innovations.

To understand how we might create disruptive innovation in law and governance, we first need to find, as Nikon did with Kodak, an area where the dominant technologies can be improved.

Where Today’s Social Techs Fail

Around the world, law and governance systems fail to provide their markets with countless services. In many developing countries, most of the population lives outside the law.

Their businesses cannot be registered. Their contracts can’t be taken to court. They cannot get permission to build a house. Many live in constant fear and danger since their governance systems cannot even provide basic security. The ability to start a legal business, to build a home, to go to school, to live in a safe community—all of these “functions” of social technologies are missing for billions of people.

These failures of social technology create widespread poverty and violence. Businesses that succeed do so because they’re run by cronies of the powerful and are protected from competition by the legal system. The networks of cooperation necessary for economic growth cannot form in such restrictive environments. The poor cannot become entrepreneurs without legal tools. Innovations never reach the market. Dominant firms and technologies go unchallenged by upstarts.

Here’s our niche market.

If we could find a better way to provide one or some of these services (even if we couldn’t provide everything better than the dominant political system), we might find ourselves in the position of Nikon before Kodak’s collapse. We could leverage our niche market into something much bigger.

Hacking Law and Governance with Startup Cities

A growing movement around the world to build new communities offers ways to hack our current social tech. A host nation creates multiple, small jurisdictions with new, independent law and governance. Citizens are free to immigrate to any jurisdiction of their choosing. Like any new technology, these startup cities compete to provide new and better functions—in this case, to provide citizens with services they want and need.

One new zone hosting a startup city might pioneer different environmental law or tax policy. Another may offer a custom-tailored regulatory environment for finance or universities. Still another may try a new model for funding social services.

Startup cities are a powerful alternative to risky, difficult, and politically improbable national reform. Startup cities are like low-cost prototypes for new social techs. Good social techs pioneered by startup cities can be brought into the national system.

But if bad social techs lead a zone to fail, we don’t gamble the entire nation’s livelihood. People can easily exit a startup city—effectively putting the project “out of business.” If a nation chooses to use private capital for infrastructure or other services, taxpayers can be protected from getting stuck with the bill for someone’s bad idea. Startup cities also enhance the democratic voice of citizens by giving them the power of exit.

Looking at our niche market, a startup city in a developing nation could offer streamlined incorporation laws and credible courts for poor citizens who want to become entrepreneurs. Another project could focus on building safe places for commerce and homes by piloting police and security reform. In reality, many of these functions could (and should) be combined into a single startup city project.

Like any good tech startup, startup cities would be small and agile at first. They will not be able to rival many things that dominant law and governance systems provide. But as long as people are free to enter and exit, startup cities will grow and improve over time. What began as a small, unimpressive idea to serve a niche market can blossom into a paradigm shift in social technologies.

Several countries have already begun developing startup city projects, and many others are considering them. The early stages of this movement will almost certainly be as unimpressive as the bulky, toy-like early digital cameras. Farsighted nations will invest wisely in developing their own disruptive social techs, pioneered in startup cities. Other nations—probably rich and established ones—will ignore these “niche market reforms” around the developing world. And they just might end up like Kodak—outcompeted by new social techs developed in poor and desperate nations.

The hacker finds vulnerabilities in dominant technology and uses them to create something new. In a sense, all disruptive innovation is hacking, since it relies on a niche—a crack in the armor—of the reigning tech. Our law and governance systems are no different. Startup cities are disruptive innovation in social tech. Their future is just beginning, but one need only remember the fate of Kodak—that monolithic, unstoppable monopolist—to see a world of possibility.

Those interested in learning more about the growing startup cities movement should visit startupcities.org or contact startupcities@ufm.edu.

Zachary Caceres is CIO of Startup Cities Institute and editor of Radical Social Entrepreneurs.

This article was originally published by The Foundation for Economic Education.

Internet Fascism and the Surveillance State – Article by Ben O’Neill

The New Renaissance Hat
Ben O’Neill
July 16, 2013
******************************

What is the purpose of telecommunication and internet surveillance?

The NSA presents its surveillance operations as being directed toward security issues, claiming that the programs are needed to counter terrorist attacks. Bald assertions of plots foiled are intended to bolster this claim.[1] However, secret NSA documents reveal that their surveillance is used to gather intelligence to achieve political goals for the US government. Agency documents show extensive surveillance of communications from allied governments, including the targeting of embassies and missions.[2] Reports from an NSA whistleblower also allege that the agency has targeted and intercepted communications from a range of high-level political and judicial officials, anti-war groups, US banking firms and other major companies and non-government organizations.[3] This suggests that the goal of surveillance is the further political empowerment of the NSA and the US government.

Ostensibly, the goal of the NSA surveillance is to prevent terrorist acts that would harm or kill people in the United States. But in reality, the primary goal is to enable greater control of that population (and others) by the US government. When questioned about this issue, NSA whistleblower Thomas Drake was unequivocal about the goal of the NSA: “to own the internet and find out what everybody is doing.”[4]

“To own the internet” — Public-private partnerships in mass surveillance

The internet is, by its very nature, a decentralized arrangement, created by the interaction of many private and government servers operating on telecommunications networks throughout the world. This has always been a major bugbear of advocates for government control, who have denigrated this decentralized arrangement as being “lawless.” Since it began to expand as a tool of mass communication for ordinary people, advocates for greater government power have fought a long battle to bring the internet “under control” — i.e., under their control.

The goal of government “ownership of the internet” entails accessing the facilities that route traffic through the network. This is gradually being done through government control of the network infrastructure and the gradual domination of the primary telecommunications and internet companies that provide the facilities for routing traffic through the network. Indeed, one noteworthy aspect of the mass surveillance system of the NSA is that it has allegedly involved extensive cooperation with many “private” firms operating under US law. This has allegedly included major security, telecommunications and internet companies, as well as producers of network software and hardware.

Examples of such “public-private partnerships” are set out in leaked documents of the NSA. An unnamed US telecommunications company is reported to provide the NSA with mass surveillance data on the communications of non-US people under its FAIRVIEW program.[5] Several major computing and internet companies have also been explicitly named in top secret internal NSA material as being current providers for the agency under its PRISM program.[6] Several of these companies have issued denials disavowing any participation in, or prior knowledge of the program, but this has been met with some scepticism.[7] (Indeed, given that the NSA did not anticipate public release of its own internal training material, it is unlikely that the agency would have any cause to lie about the companies they work with in this material. This suggests that the material may be accurate.)

Many of these companies have supplied the NSA with data from their own customers, or created systems which allow the agency access to the information flowing through telecommunications networks. They have done so without disclosure to their own customers of the surveillance that has occurred, by using the blanket advisement that they “comply with lawful requests for information.” By virtue of being subject to the jurisdiction of US statutes, all of these companies have been legally prohibited from discussing any of their dealings with the NSA and they have been well placed for retaliatory action by the many regulatory agencies of the US government if they do not cooperate. In any case, it appears from present reports that many companies have been active partners of the agency, assisting the NSA with illegal surveillance activities by supplying data under programs with no legitimate legal basis.

This has been a common historical pattern in the rise of totalitarian States, which have often sought to incorporate large business concerns into their network of power. Indeed, the very notion of “public-private partnerships” in this sector readily brings to mind the worst aspects of fascist economic systems that have historically existed. The actions of US companies that have cooperated in the NSA’s mass surveillance operations calls into question the “private” status of these companies. In many ways these companies have acted as an extension of the US government, providing information illegally, in exchange for privileges and intelligence. According to media reports, “Such cooperation is an extremely delicate issue for the companies involved. Many have promised their customers data confidentiality in their terms and conditions. Furthermore, they are obliged to follow the laws of the countries in which they do business. As such, their cooperation deals with the NSA are top secret. Even in internal NSA documents, they are only referred to by the use of code names.”[8]

We began this discussion by asking the purpose of telecommunication and internet surveillance. The answer lies in the uses to which those surveillance powers are being put, and will inevitably be put, as the capacity of the NSA expands. The true purpose of the NSA is not to keep us safe. Its goal is to own the internet, to own our communications, to own our private thoughts — to own us.

Ben O’Neill is a lecturer in statistics at the University of New South Wales (ADFA) in Canberra, Australia. He has formerly practiced as a lawyer and as a political adviser in Canberra. He is a Templeton Fellow at the Independent Institute, where he won first prize in the 2009 Sir John Templeton Fellowship essay contest. Send him mail. See Ben O’Neill’s article archives.

This article was published on Mises.org and may be freely distributed, subject to a Creative Commons Attribution United States License, which requires that credit be given to the author.

Notes

[1] Mathes, M. (2013) At least 50 terror plots foiled by spy programs: NSA. The Sydney Morning Herald, 19 June 2013.

[2] MacAskill, E. (2013) New NSA leaks show how US is bugging its European allies. The Guardian, 1 June 2013.

[3] Burghardt, T. (2013) NSA spying and intelligence collection: a giant blackmail machine and “warrantless wiretapping program.” Global Research, 24 June 2013. Reports are from NSA whistleblower Russ Tice, who is a former intelligence analyst at the NSA.

[4] Wolverton, J. (2012) Classified drips and leaks. The New American, 6 August 2012. Emphasis added. Capitalization of “Internet” removed.

[5] Greenwald, G. (2013) The NSA’s mass and indiscriminate spying on Brazilians. The Guardian, 7 July 2013.

[6] Gellman, B. and Poitras, L. (2013) US, British intelligence mining data from nine US internet companies in broad secret program. The Washington Post, 7 June 2013. See also NSA slides explain the PRISM data-collecting program. The Washington Post, 6 June 2013.

[7] McGarry, C. (2013) Page and Zuckerberg say NSA surveillance program is news to them. TechHive, 7 June 2013.

[8] Ibid Poitras, p. 3.

Let Market Forces Solve Organ-Transplant Crisis – Article by Ron Paul

The New Renaissance Hat
Ron Paul
July 16, 2013
******************************

Ten-year-old cystic fibrosis patient Sarah Murnaghan captured the nation’s attention when federal bureaucrats imposed a de facto death sentence on her by refusing to modify the rules governing organ transplants. The rules in question forbid children under 12 from receiving transplants of adult organs. Even though Sarah’s own physician said she was an excellent candidate to receive an adult organ transplant, federal government officials refused to even consider modifying their rules.

Fortunately, a federal judge intervened so Sarah received the lung transplant. But the welcome decision in this case does not change the need to end government control of organ donations and repeal the federal ban on compensating organ donors.

Supporters of the current system claim that organ donation is too important to be left to the marketplace. But this is nonsensical: if we trust the market to deliver food, shelter, and all other necessities, why should we not trust it to deliver healthcare—including organs?

It is also argued that it is “uncompassionate” or “immoral” to allow patients or insurance companies to provide compensation to donors. But one of the reasons the waiting lists for transplants are so long, with many Americans dying before receiving a transplant, is a shortage of organs. If organ donors, or their heirs, were compensated for donating, more people would have an incentive to become organ donors.

Those who oppose allowing patients to purchase organs should ask themselves how compassionate it is to allow people to die on the transplant waiting list when they might otherwise have lived had they been able to obtain organs through private contracts.

Some are concerned that if organ donations were supplied via the market instead of through government regulation, those with lower incomes would be effectively denied access to donated organs. This ignores our current two-tier system for allocating organs, as the wealthy can travel overseas for transplants if they cannot receive a transplant in America. Allowing the free market to alleviate the shortage of organs and reduce the costs of medical procedures like transplants would benefit the middle class and the poor, not the wealthy.

The costs of obtaining organs would likely be covered by most health insurance plans, thus reducing the costs directly borne by individual patients. Furthermore, if current federal laws distorting the health care market are repealed, procedures such as transplants would be much more affordable. Expanded access to health savings accounts and flexible savings accounts, combined with generous individual tax deductions and credits, would also make it easier for people to afford health care procedures such as transplants.

There is also some hypocrisy in the argument against allowing market forces in organ transplants. Everyone else involved in organ transplantation procedures, including doctors, nurses, and even the hospital janitor, receives compensation. Not even the most extreme proponent of government-provided health care advocates forcing medical professionals to provide care without compensation. Hospitals and other private institutions provide compensation for blood and plasma donations, and men and women are compensated for donations to fertility clinics, so why not allow compensation for organ donation?

Sarah Murnaghan’s case shows the fallacy in thinking that a free-market system for organ donations is less moral or less effective than a government-controlled system. It is only the bureaucrats who put adherence to arbitrary rules ahead of the life of a ten-year old child. It is time for Congress to wake up and see that markets work better in all aspects of health care, including organ donation, just as they work better in providing all other goods and services.

Ron Paul, MD, is a former three-time Republican candidate for U. S. President and Congressman from Texas.

This article is reprinted with permission.

We Seek Not to Become Machines, But to Keep Up with Them – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
July 14, 2013
******************************
This article attempts to clarify four areas within the movement of Substrate-Independent Minds and the discipline of Whole-Brain Emulation that are particularly ripe for ready-hand misnomers and misconceptions.
***

Substrate-Independence 101:

  • Substrate-Independence:
    It is Substrate-Independence for Mind in general, but not any specific mind in particular.
  • The Term “Uploading” Misconstrues More than it Clarifies:
    Once WBE is experimentally-verified, we won’t be using conventional or general-purpose computers like our desktop PCs to emulate real, specific persons.
  • The Computability of the Mind:
    This concept has nothing to do with the brain operating like a computer. The liver is just as computable as the brain; their difference is one of computational intensity, not category.
  • We Don’t Want to Become The Machines – We Want to Keep Up With Them!:
    SIM & WBE are sciences of life-extension first and foremost. It is not out of sheer technophilia, contemptuous “contempt of the flesh”, or wanton want of machinedom that proponents of Uploading support it. It is, for many, because we fear that Recursively Self-Modifying AI will implement an intelligence explosion before Humanity has a chance to come along for the ride. The creation of any one entity superintelligent to the rest constitutes both an existential risk and an antithetical affront to Man, whose sole central and incessant essence is to make himself to an increasingly greater degree, and not to have some artificial god do it for him or tell him how to do it.
Substrate-Independence
***

The term “substrate-independence” denotes the philosophical thesis of functionalism – that what is important about the mind and its constitutive sub-systems and processes is their relative function. If such a function can be recreated using an alternate series of component parts or procedural steps, or can be recreated on another substrate entirely, functionalism holds that the result should be the same as the original, experientially speaking.

However, one rather common and ready-at-hand misinterpretation stemming from the term “Substrate-Independence” is the notion that we as personal selves could arbitrarily jump from mental substrate to mental substrate, since mind is software and software can be run on various general-purpose machines. The most common form of this notion is exemplified by scenarios laid out in various Greg Egan novels and stories, wherein a given person sends their mind encoded as a wireless signal to some distant receiver, to be reinstantiated upon arrival.

The term “substrate-independent minds” should denote substrate independence for the minds in general, again, the philosophical thesis of functionalism, and not this second, illegitimate notion. In order to send oneself as such a signal, one would have to put all the processes constituting the mind “on pause” – that is, all causal interaction and thus causal continuity between the software components and processes instantiating our selves would be halted while the software was encoded as a signal, transmitted and subsequently decoded. We could expect this to be equivalent to temporary brain death or to destructive uploading without any sort of gradual replacement, integration, or transfer procedure. Each of these scenarios incurs the ceasing of all causal interaction and causal continuity among the constitutive components and processes instantiating the mind. Yes, we would be instantiated upon reaching our destination, but we can expect this to be as phenomenally discontinuous as brain death or destructive uploading.

There is much talk in the philosophical and futurist circles – where Substrate-Independent Minds are a familiar topic and a common point of discussion – on how the mind is software. This sentiment ultimately derives from functionalism, and the notion that when it comes to mind it is not the material of the brain that matters, but the process(es) emerging therefrom. And because almost all software is designed so as to be implemented on general-purpose (i.e., standardized) hardware, it is assumed that we should likewise be able to transfer the software of the mind into a new physical computational substrate with as much ease as we transfer ordinary software. While we would emerge from such a transfer functionally isomorphic with ourselves prior to the jump from computer to computer, we can expect this to be the phenomenal equivalent of brain death or destructive uploading, again, because all causal interaction and continuity between that software’s constitutive sub-processes has been discontinued. We would have been put on pause in the time between leaving one computer, whether as static signal or static solid-state storage, and arriving at the other.

This is not to say that we couldn’t transfer the physical substrate implementing the “software” of our mind to another body, provided the other body were equipped to receive such a physical substrate. But this doesn’t have quite the same advantage as beaming oneself to the other side of Earth, or Andromeda for that matter, at the speed of light.

But to transfer a given WBE to another mental substrate without incurring phenomenal discontinuity may very well involve a second gradual integration procedure, in addition to the one the WBE initially underwent (assuming it isn’t a product of destructive uploading). And indeed, this would be more properly thought of in the context of a new substrate being gradually integrated with the WBE’s existing substrate, rather than the other way around (i.e., portions of the WBE’s substrate being gradually integrated with an external substrate.) It is likely to be much easier to simply transfer a given physical/mental substrate to another body, or to bypass this need altogether by actuating bodies via tele-operation instead.

In summary, what is sought is substrate-independence for mind in general, and not for a specific mind in particular (at least not without a gradual integration procedure, like the type underlying the notion of gradual uploading, so as to transfer such a mind to a new substrate without causing phenomenal discontinuity).

The Term “Uploading” Misconstrues More Than It Clarifies

The term “Mind Uploading” has some drawbacks and creates common initial misconceptions. It is based on terminology originating from the context of conventional, contemporary computers – which may lead to the initial impression that we are talking about uploading a given mind into a desktop PC, to be run in the manner that Microsoft Word is run. This makes the notion of WBE seem more fantastic and incredible – and thus improbable – than it actually is. I don’t think anyone seriously speculating about WBE would entertain such a notion.

Another potential misinterpretation particularly likely to result from the term “Mind Uploading” is that we seek to upload a mind into a computer – as though it were nothing more than a simple file transfer. This, again, connotes modern paradigms of computation and communications technology that are unlikely to be used for WBE. It also creates the connotation of putting the mind into a computer – whereas a more accurate connotation, at least as far as gradual uploading as opposed to destructive uploading is concerned, would be bringing the computer gradually into the biological mind.

It is easy to see why the term initially came into use. The notion of destructive uploading was the first embodiment of the concept. The notion of gradual uploading so as to mitigate the philosophical problems pertaining to how much a copy can be considered the same person as the original, especially in contexts where they are both simultaneously existent, came afterward. In the context of destructive uploading, it makes more connotative sense to think of concepts like uploading and file transfer.

But in the notion of gradual uploading, portions of the biological brain – most commonly single neurons, as in Robert A. Freitas’s and Ray Kurzweil’s versions of gradual uploading – are replaced with in-vivo computational substrate, to be placed where the neuron it is replacing was located. Such a computational substrate would be operatively connected to electrical or electrochemical sensors (to translate the biochemical or, more generally, biophysical output of adjacent neurons into computational input that can be used by the computational emulation) and electrical or electrochemical actuators (to likewise translate computational output of the emulation into biophysical input that can be used by adjacent biological neurons). It is possible to have this computational emulation reside in a physical substrate existing outside of the biological brain, connected to in-vivo biophysical sensors and actuators via wireless communication (i.e., communicating via electromagnetic signal), but this simply introduces a potential lag-time that may then have to be overcome by faster sensors, faster actuators, or a faster emulation. It is likely that the lag-time would be negligible, especially if it was located in a convenient module external to the body but “on it” at all times, to minimize transmission delays increasing as one gets farther away from such an external computational device. This would also likely necessitate additional computation to model the necessary changes to transmission speed in response to how far away the person is.  Otherwise, signals that are meant to arrive at a given time could arrive too soon or too late, thereby disrupting functionality. However, placing the computational substrate in vivo obviates these potential logistical obstacles.
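
To make the architecture described above more concrete, below is a minimal, purely illustrative sketch in Python of the in-vivo replacement scheme: an emulated neuron whose only contact with the remaining biological network is through biophysical sensors (reading adjacent neurons’ output) and actuators (delivering the emulation’s output back to them). Every name and numerical detail here – the BiophysicalSensor and BiophysicalActuator interfaces, the leaky integrate-and-fire update rule – is a hypothetical simplification for illustration, not a description of any actual WBE design.

# Illustrative sketch only: one emulated neuron standing in for a replaced
# biological neuron, wired to hypothetical sensor/actuator interfaces.

from typing import List

class BiophysicalSensor:
    """Hypothetical transducer reading an adjacent biological neuron's
    output (e.g., local electrochemical activity) as a numeric input."""
    def read(self) -> float:
        return 0.0  # placeholder; real hardware would return a measured value

class BiophysicalActuator:
    """Hypothetical transducer converting the emulation's output spike into
    a biophysical signal that adjacent biological neurons can receive."""
    def stimulate(self) -> None:
        pass  # placeholder for delivering a stimulus to neighboring tissue

class EmulatedNeuron:
    """Leaky integrate-and-fire stand-in for one replaced biological neuron."""
    def __init__(self, sensors: List[BiophysicalSensor],
                 actuators: List[BiophysicalActuator]) -> None:
        self.sensors = sensors
        self.actuators = actuators
        self.potential = 0.0
        self.threshold = 1.0
        self.leak = 0.9  # fraction of membrane potential retained per step

    def step(self) -> None:
        # Integrate input arriving from adjacent biological neurons via sensors.
        self.potential = self.potential * self.leak + sum(s.read() for s in self.sensors)
        # If the threshold is crossed, "fire": drive the actuators so that
        # neighboring biological neurons receive the emulated neuron's output.
        if self.potential >= self.threshold:
            for actuator in self.actuators:
                actuator.stimulate()
            self.potential = 0.0

# Gradual replacement, schematically: emulated units are introduced one at a
# time, each remaining in continuous causal interaction with the biological
# network through its sensors and actuators.
replaced_neurons = [EmulatedNeuron([BiophysicalSensor()], [BiophysicalActuator()])]
for _ in range(100):               # emulation update loop (one pass per timestep)
    for neuron in replaced_neurons:
        neuron.step()

The point of the sketch is the interface boundary: because each emulated unit keeps exchanging signals with the biological neurons around it at every timestep, causal interaction and continuity are never suspended – which is precisely what distinguishes gradual replacement from destructive uploading or signal transmission, as discussed above.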

This notion is, I think, not brought into the discussion enough. It is an intuitively obvious notion if you’ve thought a great deal about Substrate-Independent Minds and frequented discussions on Mind Uploading. But to a newcomer who has heard the term Gradual Uploading for the first time, it is all too easy to think “yes, but then one emulated neuron would exist on a computer, and the original biological neuron would still be in the brain. So once you’ve gradually emulated all these neurons, you have an emulation on a computer, and the original biological brain, still as separate physical entities. Then you have an original and the copy – so where does the gradual in Gradual Uploading come in? How is this any different than destructive uploading? At the end of the day you still have a copy and an original as separate entities.”

This seeming impasse is, I think, enough to make the notion of Gradual Uploading appear intuitively incredible and infeasible before people take the time to read the literature and discover how gradual uploading could actually be achieved – that is, with each emulated neuron connected to biophysical sensors and actuators that facilitate operational connection and causal interaction with the existing in-vivo biological neurons – without fatally tripping over such seeming logistical impasses as the one in the example above. The connotations created by the term make it seem so fantastic (as in the oversimplified misinterpretations considered above) that people write off the possibility before delving deep enough into the literature and discussion to assess it with any rigor.

The Computability of the Mind

Another common misconception is that the feasibility of Mind Uploading is based upon the notion that the brain is a computer or operates like a computer. The worst version of this misinterpretation that I’ve come across is the claim that proponents and supporters of Mind Uploading believe the mind is similar in operation to current and conventional paradigms of computing.

Before I elaborate on why this is wrong, I’d like to point out a particularly harmful sentiment that can result from this notion. It makes the concept of Mind Uploading seem dehumanizing, because conventional computers don’t display anything like intelligence or emotion, and it leads people to conflate the possible behaviors of future computers with the behaviors of current computers. The reasoning goes: obviously computers don’t feel happiness or love, so to say that the brain is like a computer is a farcical claim.

Machines don’t have to be as simple, unadaptable, and invariant as they are today. The universe itself is a machine. In other words, either everything is a machine or nothing is.

This misunderstanding also makes people think that advocates and supporters of Mind Uploading are claiming that the mind is reducible to basic or simple autonomous operations, like cogs in a machine, which constitutes for many people a seeming affront to our privileged place in the universe as humans, in general, and to our culturally ingrained notion that human dignity is inextricably tied to physical irreducibility, in particular. The intuitive notions of human dignity and the ontologically privileged nature of humanity have yet to catch up with physicalism and scientific materialism (a.k.a. metaphysical naturalism). It is not the proponents of Mind Uploading who are raising these claims, but science itself – and it has been doing so for hundreds of years, I might add. Man’s privileged and physically irreducible ontological status has been progressively undermined throughout history, since at least as far back as Darwin’s theory of evolution, which brought the notion of the past and future phenotypic evolution of humanity into scientific plausibility for the first time.

It is also seemingly disenfranchising to many people, in that notions of human free will and autonomy seem to be challenged by physical reductionism and determinism – perhaps because many people’s notions of free will are still associated with a non-physical, untouchably metaphysical human soul (i.e., mind-body dualism) which lies outside the purview of physical causality. To compare the brain to a “mindless machine” is still for many people disenfranchising to the extent that it questions the legitimacy of their metaphysically tied notions of free will.

That the sheer audacity of experience and the raucous beauty of feeling are ultimately reducible to physical and procedural operations (I hesitate to use the word “mechanisms” for its likewise misconnotative conceptual associations) does not take away from them. If they were the result of some untouchable metaphysical property – a sentiment that mind-body dualism promulgated for quite some time – then there would be no way for us to understand them, to really appreciate them, or to change them (e.g., improve upon them) in any way. Physicalism and scientific materialism are needed if we are ever to see how it is done and ever hope to change it for the better. Figuring out how things work is one of Man’s highest merits – and there is no reason Man’s urge to discover and determine the underlying causes of the world should not apply to his own self as well.

Moreover, the fact that experience, feeling, being, and mind result from the convergence of individually simple systems and processes makes the mind’s emergence from such simple convergence all the more astounding, amazing, and rare, not less! If the complexity and unpredictability of mind were the result of complex and unpredictable underlying causes (as the metaphysical notions of mind-body dualism connote), then the fact that mind turned out to be complex and unpredictable wouldn’t be much of a surprise. The simplicity of the mind’s underlying mechanisms makes the mind’s emergence all the more amazing, and should not take away from our human dignity but should instead raise it to heights yet unheralded.

Now that we have addressed such potentially harmful second-order misinterpretations, we will address their root: the common misinterpretations likely to result from the phrase “the computability of the mind”. Not only does this phrase not say that the mind is similar in basic operation to conventional paradigms of computation – as though a neuron were comparable to a logic gate or transistor – but neither does it necessarily make the more credible claim that the mind is like a computer in general. It is the misinterpretation, not the phrase, that makes the notion of Mind Uploading seem dubious, because it conflates two different types of physical systems – computers and brains.

The kidney is just as computable as the brain. That is to say, the computability of mind denotes the ability to make predictively accurate computational models (i.e., simulations and emulations) of biological systems like the brain, and it does not depend on anything like a fundamental operational similarity between biological brains and digital computers. We can make a computational model of a given physical system, feed it some typical inputs, and get a resulting output that approximately matches the real-world (i.e., physical) output of that system.

The computability of the mind has very little to do with the mind acting as or operating like a computer, and much, much more to do with the fact that we can build predictively accurate computational models of physical systems in general. This also, advantageously, negates and obviates many of the seemingly dehumanizing and indignifying connotations identified above that often result from the claim that the brain is like a machine or like a computer. It is not that the brain is like a computer – it is just that computers are capable of predictively modeling the physical systems of the universe itself.
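
A toy example may help illustrate this. The sketch below, in Python, models a mundane physical system (an object cooling toward room temperature) in two ways: a “ground truth” describing how the object actually behaves, and a step-by-step computational model of the same process. The object is in no sense a computer, yet the model’s output approximately matches the physical output – which is all that “computability” is taken to mean here. The functions, parameter values, and the choice of system are illustrative assumptions made only for brevity.

```python
import math

def physical_system(initial_temp, ambient_temp, k, t):
    """Ground truth: how the real object cools (Newtonian cooling, closed form)."""
    return ambient_temp + (initial_temp - ambient_temp) * math.exp(-k * t)

def computational_model(initial_temp, ambient_temp, k, t, dt=0.01):
    """A step-by-step numerical model of the same system: feed it the same
    inputs and compare its output against the physical output."""
    temp = initial_temp
    for _ in range(int(t / dt)):
        temp += -k * (temp - ambient_temp) * dt
    return temp

real = physical_system(90.0, 20.0, 0.1, 30.0)
modeled = computational_model(90.0, 20.0, 0.1, 30.0)
print(round(real, 2), round(modeled, 2))  # the two agree to within a small error
```

The kidney, the brain, and the cooling object are all “computable” in exactly this sense: a computer can model their behavior predictively, regardless of whether they themselves operate anything like a computer.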

We Want Not To Become Machines, But To Keep Up With Them!

Too often is uploading portrayed as the means to superhuman speed of thought or to transcending our humanity. It is not that we want to become less human, or to become like a machine. For most Transhumanists, and indeed most proponents of Mind Uploading and Substrate-Independent Minds, meat is machinery anyway. In other words, there is no real (i.e., legitimate) ontological distinction between human minds and machines to begin with. Too often is uploading seen as the desire for superhuman abilities. Too often is it seen as a bonus, nice but ultimately unnecessary.

I vehemently disagree. Uploading has been from the start for me (and I think for many other proponents and supporters of Mind Uploading) a means of life extension, of deferring and ultimately defeating untimely, involuntary death, as opposed to an ultimately unnecessary means to better powers, a more privileged position relative to the rest of humanity, or to eschewing our humanity in a fit of contempt of the flesh. We do not want to turn ourselves into Artificial Intelligence, which is a somewhat perverse and burlesque caricature that is associated with Mind Uploading far too often.

The notion of gradual uploading is implicitly a means of life extension. Gradual uploading will be significantly harder to accomplish than destructive uploading. It requires a host of technologies and methodologies – brain scanning, in-vivo locomotive systems such as, but not limited to, nanotechnology, or else extremely robust biotechnology – and a host of precautions to prevent causing phenomenal discontinuity, such as giving each non-biological functional replacement time to causally interact with adjacent biological components before the next biological component with which it causally interacts is itself replaced. Gradual uploading is a much harder feat than destructive uploading, and the only advantage it has over destructive uploading is preserving the phenomenal continuity of a single specific person. In this way it is implicitly a means of life extension, rather than a means to the creation of AGI, because its only benefit is the preservation and continuation of a single, specific human life, and that benefit entails a host of added precautions and additional technological and methodological infrastructure.

If we didn’t have to fear the creation of recursively self-improving AI – likely to self-modify at a rate faster than humans could, or indeed could safely (that is, gradually enough to prevent phenomenal discontinuity) – then I would favor biotechnological methods of achieving indefinite lifespans over gradual uploading. But with the way things are, I am an advocate of gradual Mind Uploading first and foremost because I think it may prove necessary to prevent humanity from being left behind by recursively self-modifying superintelligences. I hope that it ultimately will not prove necessary – but at the current time I feel that it is somewhat likely.

Most people who wish to implement or accelerate an intelligence explosion à la I.J. Good, and more recently Vernor Vinge and Ray Kurzweil, wish to do so because they feel that such a recursively self-modifying superintelligence (RSMSI) could essentially solve all of humanity’s problems – disease, death, scarcity, existential insecurity. I think that the potential benefits of creating an RSMSI are outweighed by the drastic increase in existential risk entailed in making any one entity superintelligent relative to humanity. The old God of yore is finally going out of fashion, one and a quarter centuries late to his own eulogy. Let’s please not make another one, this time with a little reality under his belt.

Intelligence is a far greater source of existential and global catastrophic risk than any technology that could be wielded by such an intelligence (except, of course, for technologies that would allow an intelligence to increase its own intelligence). Intelligence can invent new technologies and conceive of ways to counteract any defense systems we put in place to protect against the destructive potentials of any given technology. A superintelligence is far more dangerous than rogue nanotech (i.e., grey-goo) or bioweapons. When intelligence comes into play, then all bets are off. I think culture exemplifies this prominently enough. Moreover, for the first time in history the technological solutions to these problems – death, disease, scarcity – are on the conceptual horizon. We can fix these problems ourselves, without creating an effective God relative to Man and incurring the extreme potential for complete human extinction that such a relative superintelligence would entail.

Thus uploading constitutes one of the means by which humanity can choose, volitionally, to stay on the leading edge of change, discovery, invention, and novelty, if the creation of an RSMSI is indeed imminent. It is not that we wish to become machines and eschew our humanity – rather, the loss of autonomy and freedom inherent in the creation of a relative superintelligence is antithetical to the defining features of humanity. In order to preserve the uniquely human thrust toward greater self-determination in the face of such an RSMSI, or at least to be given the choice of doing so, we may require the ability to gradually upload so as to stay on equal footing in terms of speed of thought and general level of intelligence (which is roughly correlative with the capacity to effect change in the world and thus to determine its determining circumstances and conditions as well).

In a perfect world we wouldn’t need to take the chance of phenomenal discontinuity inherent in gradual uploading. In gradual uploading there is always a chance, no matter how small, that we will come out the other side of the procedure as a different (i.e., phenomenally distinct) person. We can seek to minimize the chances of that outcome by increasing the degree of graduality with which we replace the material constituents of the mind, and by decreasing the scale at which we replace them (i.e., gradual substrate replacement one ion channel at a time would be likelier to preserve phenomenal continuity than gradual substrate replacement neuron by neuron would be). But there is always a chance.

This is why biotechnological means of indefinite lifespans have an immediate advantage over uploading, and why if non-human RSMSI were not a worry, I would favor biotechnological methods of indefinite lifespans over Mind Uploading. But this isn’t the case; rogue RSMSI are a potential problem, and so the ability to secure our own autonomy in the face of a rising RSMSI may necessitate advocating Mind Uploading over biotechnological methods of indefinite lifespans.

Mind Uploading has some ancillary benefits over biotechnological means of indefinite lifespans as well, however. If functional equivalence is validated (i.e., if it is validated that the basic approach works), mitigating existing sources of damage becomes categorically easier. In physical embodiment, repairing structural, connectional, or procedural sub-systems in the body requires (1) a means of determining the source of damage and (2) a host of technologies and corresponding methodologies to enter the body and make physical changes to negate or otherwise obviate the structural, connectional, or procedural source of such damages, and then exit the body without damaging or causing dysfunction to other systems in the process. Both of these requirements become much easier in the virtual embodiment of whole-brain emulation.

First, looking toward requirement (2): we do not need to design any technologies or methodologies for entering and leaving the system without damage or dysfunction, or for physically implementing the changes that remediate the sources of damage. In virtual embodiment this requires nothing more than rewriting information. Since in the case of WBE we have the capacity to rewrite information as easily as it was written in the first place, actually implementing those changes is as easy as editing a text file – though we would still need to know what changes to make, which is really the hard part. There is no categorical difference, since it is all information, and we would already have a means of rewriting it.

Looking toward requirement (1) – actually elucidating the structural, connectional, or procedural sources of damage and/or dysfunction – we see that virtual embodiment makes this much easier as well. In physical embodiment we would need to make changes to the system itself in order to determine the source of the damage. In virtual embodiment we could run a section of the emulation for a given amount of time, change or eliminate a given informational variable (e.g., a structure or component), and see how this affects the emergent system-state of the emulation instance.

Iteratively doing this to different components and different sequences of components, in trial-and-error fashion, should lead to the elucidation of the structural, connectional or procedural sources of damage and dysfunction. The fact that an emulation can be run faster (thus accelerating this iterative change-and-check procedure) and that we can “rewind” or “play back” an instance of emulation time exactly as it occurred initially means that noise (i.e., sources of error) from natural systemic state-changes would not affect the results of this procedure, whereas in physicality systems and structures are always changing, which constitutes a source of experimental noise. The conditions of the experiment would be exactly the same in every iteration of this change-and-check procedure. Moreover, the ability to arbitrarily speed up and slow down the emulation will aid in our detecting and locating the emergent changes caused by changing or eliminating a given microscale component, structure, or process.
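
The logic of this change-and-check procedure can be sketched briefly in code. The example below assumes, purely for illustration, that we already have a deterministic toy “emulation” that can be replayed from an identical initial state; the component names and the way damage manifests in the output are hypothetical stand-ins, not a model of any real neural process.

```python
import copy

def run_emulation(state, steps=100):
    """Deterministic toy 'emulation': each component contributes to a system-level
    output. Determinism is what allows identical replays with zero experimental noise."""
    output = 0.0
    for _ in range(steps):
        output += sum(state["components"].values())
    return output

def change_and_check(baseline_state, steps=100):
    """Eliminate one component at a time, replay from the identical initial state,
    and measure how the emergent output shifts relative to the baseline run."""
    baseline = run_emulation(copy.deepcopy(baseline_state), steps)
    effects = {}
    for name in baseline_state["components"]:
        perturbed = copy.deepcopy(baseline_state)
        perturbed["components"][name] = 0.0  # eliminate this informational variable
        effects[name] = run_emulation(perturbed, steps) - baseline
    return effects

state = {"components": {"healthy_a": 0.5, "healthy_b": 0.5, "damaged_c": -2.0}}
print(change_and_check(state))
# The component whose elimination most improves the output ('damaged_c' here) is
# the likeliest source of dysfunction.
```

Because every replay starts from exactly the same state, the difference in output is attributable entirely to the eliminated component – the virtual analogue of an experiment with no noise.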

Thus the process of finding the sources of damage correlative with disease and aging (especially insofar as the brain is concerned) could be greatly improved through the process of uploading. Moreover, WBE should accelerate the technological and methodological development of the computational emulation of biological systems in general, meaning that it would be possible to use such procedures to detect the structural, connectional, and procedural sources of age-related damage and systemic dysfunction in the body itself, as opposed to just the brain, as well.

Note that this iterative change-and-check procedure would be just as possible via destructive uploading as via gradual uploading. Moreover, for people actually instantiated as whole-brain emulations, remediating those structural, connectional, and/or procedural sources of damage is much easier than it is for physically embodied humans. As an aside, if distinguishing the homeostatic, regulatory, and metabolic structures and processes in the brain from its computational or signal-processing structures and processes is a requirement for uploading (which I don’t think it necessarily is, although such a distinction would decrease the ultimate computational intensity and thus the computational requirements of uploading, allowing it to be implemented sooner and made more widely available), then this iterative change-and-check procedure could also be used to accelerate the elucidation of that distinction, for the same reasons that it could accelerate the elucidation of the structural, connectional, and procedural sources of age-related systemic damage and dysfunction.

Lastly, while uploading (particularly instances in which a single entity or small group of entities is uploaded prior to the rest of humanity – i.e., not a maximally distributed intelligence explosion) itself constitutes a source of existential risk, it also constitutes a means of mitigating existential risk. Currently we stand on the surface of the earth, naked to whatever might lurk in the deep night of space. We have not been watching the sky for long enough to know with any certainty that some unforeseen cosmic process could not come along to wipe us out at any time. Uploading would allow at least a small portion of humanity to live virtually on a computational substrate located deep underground, away from the surface of the earth and its inherent dangers, thus preserving the future human heritage should an extinction event befall humanity. Uploading would also remove the danger of being physically killed by some accident of physicality, like being hit by a bus or struck by lightning.

Uploading is also the most resource-efficient means of life extension on the table, because virtual embodiment essentially negates the need for most physical resources, necessitating essentially one – energy – and increasing computational price-performance means that how much a given amount of energy can do is continually increasing.

It also mitigates the most pressing ethical problem of indefinite lifespans – overpopulation. In virtual embodiment, overpopulation ceases to be an issue almost ipso facto. I agree with John Smart’s STEM compression hypothesis – that in the long run the advantages proffered by virtual embodiment will make choosing it over physical embodiment an obvious choice for most civilizations – and I think it will be the volitional choice for most future persons. It is the safer, more resource-efficient (and thus more ethical, if one thinks that forestalling future births in order to maintain existing life is unethical), and more advantageous choice. We will not need to say: migrate into virtuality if you want another physically embodied child. Most people will make the choice to go VR themselves, simply due to the numerous advantages and the lack of any experiential incomparabilities (i.e., modalities of experience possible in physicality but not possible in VR).

So in summary, yes, Mind Uploading (especially gradual uploading) is more a means of life extension than a means to arbitrarily greater speed of thought, intelligence, or power (i.e., the capacity to effect change in the world). We do not seek to become machines, only to retain the capability of choosing to remain on equal footing with them if the creation of an RSMSI is indeed imminent. There is no other reason to increase our collective speed of thought, and to do so would be arbitrary – unless we expected to be unable to prevent the physical end of the universe, in which case doing so would increase the ultimate amount of time, and the number of lives, that could be instantiated in the time we have left.

The flaws in many of these misconceptions may be glaringly obvious, especially to readers familiar with Mind Uploading as a notion and with Substrate-Independent Minds and/or Whole-Brain Emulation as disciplines. I may to some extent be preaching to the choir in these cases. But I find many of these misinterpretations far too prevalent and recurrent to be left alone.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.